14 SEPTEMBER 2010
RESILIENCE AND CONTINGENCY PLANNING FOR DNS
Note: The following is the output of the real-time captioning taken during Fifth Meeting of the IGF, in Vilnius. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the session, but should not be treated as an authoritative record.
>> GIOVANNI SEPPIA: Good morning everybody. We are almost ready to start. We have been trying to fix some issues with the remote participants. We have three remote participants: Margarita Valdez from Chile, Michele Neylon from Ireland, and Khoudia Gueye Ep Sy. They are going to try to make sure that the remote participants can get connected and go through the slides. So the workshop will start in about ten minutes. We are trying to get in touch with the local logistics. So thank you for your patience.
(Please stand by. Session waiting to begin)
>> GIOVANNI SEPPIA: Ladies and gentlemen, welcome to this workshop. We are about to start. As we are having some logistic issues with the remote participants, we have decided to start now and have the remote speakers join us later, as soon as the man in charge of fixing these kinds of issues has completed his work in another workshop downstairs that is having the same kind of issues.
So we have decided to start. This is the workshop on resilience and contingency planning in DNS. I'm quite proud to be the moderator of this workshop, with great panelists with great expertise in the area, coming from different organisations and with different perspectives, but with one message: that currently the DNS is not really an endangered ecosystem, but a system that already has stability and resilience built in.
I would like to start with a short video, which is a remix of the video that CENTR produced some time ago about the DNS and how it works. And it's a remix because it contains some interviews that were captured during the past ICANN meeting in Brussels.
And I hope the sound will work.
(Sound of a modem connecting)
>> The Internet has been running for the last few decades and it has been running quite well.
>> GIOVANNI SEPPIA: Okay. There is no sound.
>> People are getting on the Internet who might not have that good intentions.
>> I had the privilege of being involved from the very beginning.
>> The original idea when the Internet was developed as a research project under the US military was to create a network which cannot be destroyed.
>> We started -- we thought -- we did think about security but we did not focus on it a lot.
>> It's able to absorb an enormous amount of change and an enormous amount of damage in one area and still carry on work. That's how it was designed.
>> In those days the system was quite small and everybody knew each other. And if somebody had been attacking another system, they would have been found quickly, because you knew where the attack was coming from, and you could call up the right person there and say: Please stop doing that.
>> We need to fight against identity theft, phishing and other types of crime on the Internet.
>> If there is a threat, the threat is probably bureaucracy. It's probably the involvement of intergovernmental organisations.
>> The Internet should be for the public interest. It should not be captured by one group. It belongs to everybody.
>> They regularly label it as a US controlled, inward looking, and highly technical organisation.
>> In Europe we say well, it's governed by the states.
>> But that's a legitimate debate. That is not an easy question to resolve.
>> The question of who owns the Internet is the wrong question to ask. It's not a question of rights or ownership. It's a question of responsibilities and public service.
>> I was in Russia a couple years ago, and a senior official said: What would happen if the US took Russia off the Internet?
>> There is no way to kill the Internet just by one switch. The Internet, if you look at the definition of Internet, is just an interconnection of networks.
>> The political reaction would be immediate. It would be very harsh, and also there would be work arounds instantly.
>> It is critical infrastructure now for the planet. It is something that many of us have a strong responsibility for our part for. And we do it for the better good of the Internet and for the people as a whole.
>> The Internet goes directly into the economy and policy. And the -- let's say the individual life and freedoms of people.
>> Every so often somebody will say: At the time we were working on this, could you tell what was going to happen?
>> They need to be aware that changes are coming.
>> Who knows what is going to happen next.
>> Everything is proceeding exactly on schedule.
>> GIOVANNI SEPPIA: Okay. So, this was a remixed, reedited version of the DNS video that was produced by CENTR, with some interviews that were done during the ICANN Brussels meeting. I would like to start with the first panelist, who is Max Larson Henry. He is looking after the dot.ht ccTLD, and he is running a great project to provide connectivity to 40 schools in rural areas of Haiti.
And as we all know, Haiti has been recently impacted by a devastating earthquake, and Max will refer to it in his presentation. It's an honour to have Max here. Thanks a lot, Max. The floor is yours.
>> MAX LARSON HENRY: Thank you, Giovanni. Good morning, everybody. My name is Max Larson Henry. I'm the technical contact of dot.ht. I'm going to talk about our best practices regarding DNS operations, helping small ccTLDs to have service continuity during hard times.
I will talk about the value of collaboration between peers and inside the community.
So, dot.ht is the country code for Haiti, and dot.ht is operated by a consortium of the Faculty of Sciences of the State University of Haiti and the foundation of the Haitian sustainable network, which was a project of the UNDP. We began operation in 2005. I can say that we are a small ccTLD: currently we have between 2,500 and 3,000 domain names registered.
As for our infrastructure, some of our servers were hosted in Haiti before the earthquake, and some of our secondary servers are hosted elsewhere: one at Princeton University, another one in France, and another one by the polytechnic university. And for the last one, we have been using the service from PCH for the last two years. The registry application is managed by CoCCA, which also runs a shadow master for the dot.ht zone. And we are using monitoring software to keep track of all those services.
As you know, the telecom centre that we used to host our servers collapsed along with the servers, so during the earthquake we lost all of our DNS infrastructure that was hosted in Haiti.
So, as I was saying, the telecom centre collapsed with the DNS servers, the primary and one secondary server for dot.ht and also for name.ht, and no phone was available to reach the managers of the secondaries. But dot.ht continued to work through the secondary sites outside of Haiti.
What happened is that some of the managers of the secondaries noticed that the primary in Haiti was unreachable, so they contacted the provider hosting the shadow master, got the information about its IP address, and reconfigured their secondaries to pull the zone from the shadow master. So this way, dot.ht continued to be up and running.
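The failover Max describes, secondaries noticing that the primary is unreachable and re-pointing at a shadow master, can be sketched roughly as follows. This is an illustrative sketch, not .ht's actual tooling: the function names, the ordered master list, and the injected reachability probe are all assumptions.

```python
def pick_master(masters, is_reachable):
    """Return the first reachable master in preference order.

    masters: ordered list of master server names or addresses,
    in-country primary first, shadow master(s) abroad after it.
    is_reachable: callable probing one address (in practice this might
    be a TCP check on port 53); injected so the logic is testable offline.
    """
    for addr in masters:
        if is_reachable(addr):
            return addr
    return None  # no master reachable: keep serving the last good zone


# Example scenario: the in-country primary is down, the shadow master
# hosted abroad is still up (hypothetical host names).
masters = ["primary.example.ht", "shadow.example.net"]
reachable = {"shadow.example.net"}
chosen = pick_master(masters, lambda a: a in reachable)
```

The key design point mirrored here is that the shadow master is already in the preference list before disaster strikes, so recovery is a configuration change on the secondaries, not an emergency rebuild.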
So, what we learned after this situation is that it is important for DNS operators to adopt basic common practices. At some point, when we had to open services for dot.ht, there was some political talk about hosting all of our infrastructure in Haiti, for political reasons. But I think we made the right move by hosting our services also outside of the country. As you know, each year the country is hit by a lot of hurricanes, and sometimes we lose connectivity with the rest of the Internet. So for this reason, and to make sure that our system is stable and robust, we have to have other DNS servers outside of the country.
So, geographic diversity is good. It is important to avoid a single point of failure. And in the case of our DNS infrastructure during the earthquake, it proved very useful to have a shadow master, in our case outside of the country.
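The geographic-diversity point can be made concrete with a small check over a name server inventory. The function and the example layout below are illustrative assumptions, not .ht's actual configuration.

```python
def diversity_report(ns_locations):
    """Flag a single point of failure in a name server deployment.

    ns_locations: mapping of name server label -> location label
    (country, data centre, or network, at whatever granularity matters).
    Returns the set of distinct locations and whether one location
    holds every server (i.e. a single point of failure).
    """
    locations = set(ns_locations.values())
    single_point = len(locations) < 2
    return locations, single_point


# Hypothetical post-earthquake layout: one server in-country for local
# resolution, the rest spread abroad.
ns = {
    "ns1": "Haiti",
    "ns2": "USA",
    "ns3": "France",
}
locations, single_point = diversity_report(ns)
```

A real audit would also look at network diversity (different upstream providers and address blocks), since servers in different countries can still share one transit path.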
What we also learned is the value of people networking. We relied on people we had met at workshops like this one and at other technical workshops to help us during these tough situations and make sure that the service continued, even though we were not able to talk to them.
So it was very interesting for us, and the discussions that have been taking place on mailing lists were also very useful.
And we also learned from previous incidents: we had an outage two years ago at the telecom centre, so we made a decision to expand our infrastructure.
So what is next for us is really to reinforce our infrastructure. We are in the process of obtaining a number of resources, like IP address space, to host .ht servers at the Haitian exchange point, and we are also working on contingency planning. We also plan to have more geographic diversity: having more than one shadow master outside of the country would be great. And we are working on some other projects with local communities, like hosting root server copies; it's a project we are engaged in with LACNIC and the Internet Systems Consortium. The servers are already in Haiti, and we are trying to get them out of customs and have them installed as soon as possible.
We also invite content providers to bring more content to Haiti. And we do more training and organise workshops. As you know, we are managing a critical infrastructure, and for a small ccTLD, when you talk about staff, you are usually talking about one or two people. So we are thinking to work on this and to try to train more people who would be able to help us in the future.
That's all. Thank you. And some of these people, Stefano and others, have also been working a lot during the earthquake to get fuel for diesel generators. So if you want more information about the Internet infrastructure in Haiti, feel free to drop me an e-mail. Thanks a lot.
>> GIOVANNI SEPPIA: Thank you so much, Max. I'd like to invite the audience, if you have any questions regarding the presentation of Max, we will open a discussion at the end of all of the presentations. But if you have any immediate questions, please feel free to approach one of the two microphones in the room and ask it to Max.
Okay. So we will leave the discussion floor open at the end of the presentations.
The next speaker should have been Margarita Valdez from Chile. But they are still trying to make the system work, and therefore I would like to ask Kurt Erik Lindqvist, who has been at Netnod for many years now, to speak.
If you can move, please, Max. Thank you. And he has also been involved in the work of TF.
So, Kurt, the floor is yours.
>> KURT ERIK LINDQVIST: Thank you. I'm Kurt Erik Lindqvist, the CEO of Netnod in Sweden. We run one of the root servers, and we run the exchange points in Sweden, and we are a not-for-profit foundation.
And so I was going to talk a bit about what the root servers do for the scalability and security of the DNS system. And I'm fortunate to have one of the other root server people on the same panel, who will speak later.
The root servers sit at the top of the DNS hierarchy. What you do is you all start by asking your caching resolvers, which in turn ask one of the root servers. There are 13 root servers, named A to M, and they are run by twelve organisations. Each of these organisations is stand-alone. We have no organisational ties to each other, except that a few of them are part of governments, but most of us are independent of each other. We have technical coordination between us, but nothing more than that. I only represent our organisation; we do not speak on behalf of each other, although in the past we have made joint statements.
And to ensure the stability and security of the system, these organisations are very, very different in type. There is everything from NASA to us to operators in Japan and others. And this ensures that we have some diversity and sustainability in the organisational part of this. If you are more interested in who runs the root servers, there is a URL on the slide that gives you more information about this.
So, we all know that the Internet is of the utmost importance to society today and for our communications. And one of the most critical parts of this is probably the DNS. The way the root servers are operated, we have this diversity in organisations, but also in location and technology. So, for example, I believe that the root servers use everything from different hardware to different software to different operating systems. Again, this is to minimize the impact in case there is a vulnerability or a problem with any of these components: we minimize the risk of the whole system being taken out by a single fault.
When it comes to location, originally there were 13 servers, and there were only 13 physical servers, more or less. Today, using a technology called Anycast, operators can make many copies of these around the world. I think we are close to 300, or maybe over 300, locations worldwide where copies exist. And again for diversity, not everyone uses this technology; not all of us use Anycast, so that we are not all vulnerable in the same way.
And this applies to the Top Level Domains as well. You heard from Haiti that they use Unicast but also have Anycast, and many TLDs around the world do the same. They go to an organisation that provides Anycast services; we do this for 25 country codes from around the world, and many other organisations do as well.
Sorry, we have a map here: these are the locations where we have all of the root servers today. You can see it's a fairly wide distribution, although Europe seems to have an overload of root servers. But we are deploying more of these, and all of the root server operators are currently actively looking for new locations to deploy to. There is no upper limit on the number of server instances. And just speaking for ourselves, we have servers deployed and are trying to deploy more in Africa and South America, because we want to cover those geographical regions better.
And so, just to give you some more background on how data ends up on the servers: the root zone is published through the IANA function and Verisign, and John will talk more about how this is done. He will, okay.
And they send it to Verisign. All root server operators collect it from Verisign, and we validate that the data transferred to us is intact. From there we push this data out to the locations around the world, again validating that the information arrived correctly. We have no method of verifying that a zone is correct when it comes to content; we have no method of knowing whether all the data that is supposed to be in the zone is there, or whether something is there that should not be. So we just take the data as it is and publish it around the world in all of these locations.
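The kind of transport-level sanity check described here, verifying that the copy is intact and newer without judging its content, might be sketched as follows. Assumptions to note: serials are treated as plain increasing integers (real SOA serials use RFC 1982 wrap-around arithmetic, which this sketch ignores), and the 5 percent shrink threshold is an arbitrary illustrative value.

```python
def should_publish(new_serial, old_serial, new_count, old_count,
                   max_shrink=0.05):
    """Accept a transferred zone only if it is newer and not implausibly small.

    This checks that the zone arrived whole and is a newer version; it
    deliberately does NOT judge whether individual records are correct,
    since the distributing operator has no way to know the intended content.
    """
    if new_serial <= old_serial:
        return False  # stale or replayed copy: do not publish
    if old_count and new_count < old_count * (1 - max_shrink):
        return False  # zone shrank suspiciously: hold for human review
    return True
```

A check like this is what lets the data propagate worldwide in seconds, as described next, while still refusing an obviously truncated transfer.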
And this happens more or less instantaneously: the delay from publication to worldwide propagation is seconds. That's one reason we do the root server deployment. As you saw in Haiti, and as we saw in the attacks on Estonia, having access to the TLD and root servers locally inside a country, so that all operators have access to the infrastructure in the country, is important from a national point of view. You can be severed from the rest of the world but still have a functioning Internet inside your country. When we deploy the root servers we try to be diverse, but most are deployed at what are called Internet exchanges. And we encourage all operators in the world to exchange traffic with us for free.
It's for their benefit. It benefits us as well, but we don't get paid to run the service, and we don't really have a service level obligation. Of course we want to provide the best root service we can to all users, and we make sure that everyone can peer with the servers. And we also hope, at least from Netnod's point of view, that by putting the servers at exchange points we encourage the build-out of more exchanges and help with the resiliency of the infrastructure in that country.
And by deploying the root servers there, anyone can get access to it. All networks get equal access. We don't want to put this inside of an incumbent and in that way discriminate against other users or operators in the country.
And we look for different partners to do the deployments and to make sure that we have as widespread interconnectivity as possible.
And I think today we exchange with 1100 operators around the world in 40 locations. So it's a fairly dense interconnect.
So, I do think -- I think I said everything on this slide already.
From a DNS perspective, from the country perspective: having one of the root servers inside your country, or having your TLD Anycasted abroad, you might think the latter is a waste because you don't get any benefit in your own country. But as the example of Haiti shows, you can have a situation where the rest of the world needs to reach you, and having copies in outside locations is a good thing. And the root server operators, I think all of us, are happy to deploy these at exchange points and create a good infrastructure for doing this. It's very critical from a national point of view as well.
And that was all of my slides.
>> GIOVANNI SEPPIA: Thank you so much, Kurt. So, we move to the next panelist, who is going to be Lim Choon Sai from Singapore. He is the general manager of the Singapore Network Information Centre, the dot.sg Top Level Domain registry. He is also a director of the Infocomm Development Authority of Singapore, responsible for Internet resource management in the area, and he serves on the dot Asia and APTLD boards. And I will help him with the slides.
>> LIM CHOON SAI: Good afternoon everyone. We are honoured, and grateful to the organiser, to be able to share some of the things we are doing in enhancing DNS resiliency in our marketplace.
I'll cover many areas and I want to focus on how, in our opinion, the DNS resiliency can be enhanced through institutional frameworks as well as DNS infrastructures. And we have some conclusions after that.
For those of you who are not familiar with where Singapore is located, we are at the southernmost tip of the Asian continent, as you can see from the chart here.
And it's a small island country with a population of 5 million. One of the key points is that we have high Internet penetration. Some statistics on the dot.sg domain: we have close to 120,000 .sg names, and we run the top level and second level dot.sg domains.
In the area of institutional framework: the registry is a private company, but it's fully owned by the government, through the Infocomm regulatory authority. In the day-to-day operations of the dot.sg registry, however, we are fully autonomous. We take care of our own finances, P&L, and policy formulation. The only occasion where we need to consult the regulatory authority, our parent, is when we implement policy with national impact, something like DNSSEC, or anything affecting the whole ISP industry. Other than that, we are fully autonomous, and we have to be self-financed; we don't get any funding from external parties.
And, of course, our system is accessed by registrars, and we work closely with them. Most importantly, the Internet being an interconnected world, we cannot shut ourselves off from external forces. In that respect, we work closely with external organisations, like people at ICANN, APTLD and other regional TLD organisations, like CENTR and others. So we firmly believe that it's through close collaboration and cooperation that we can ensure the DNS ecosystem is secure and resilient.
Under the institutional framework, specifically, we enjoy some privileges from being owned by the government. So we are housed at the government data centre, which hosts the government agencies' Web sites and server systems.
And we have to adhere strictly to the government's procedures and controls. And in that respect, we feel we are quite safe.
We can also have access to some of the monitoring services, like the cyber watch centres. And in case of problems, if our DNS system runs into trouble, we can tap the government's power to direct the ISPs to refresh. Being a private company, we don't have the power to tell ISPs what to do in running the DNS, but we can use a government directive to get done what we need done.
And lastly, we work closely and continuously with the external bodies, the external stakeholders, on best practices and security alerts and findings.
In the area of DNS infrastructure, we have four levels of protection. The first one is physical location: we are housed close to the government data centre, not exactly inside the centre but next to it, and that gives us sufficient protection.
And for the hardware and software systems, we run high-availability software and hardware and make sure that we don't have a single point of failure. We connect to more than one ISP, so that when one ISP is down, we do not suffer any outage.
We also implement intrusion detection, with a cyber watch monitoring service, to make sure that the system is not under DDoS attack or any other form of attack. Lastly, we have a disaster recovery contingency plan, with another set of servers off-site, so that we can switch over in case of disasters, things like fires and other calamities.
Our constellation, I think, is typical of any DNS: we have a primary name server, and we have secondary servers, which are Anycasted, allowing us to mitigate DDoS attacks. But having a robust infrastructure and stringent procedures for accessing the system is not good enough if we don't have integrity in the data. So we took steps to make sure that there are no data errors in the systems.
So for data input, we allow the registrars to access our systems to register dot.sg names. This is an area where we have strict access control on how people can access our system. We have three levels of protection: a firewall, use of passwords, and our own SSL certificate verification. And we have a database locking system, to make sure that we know who has access to change the data.
In the data output area, we have quite elaborate mechanisms in place to make sure that there are no errors generated by logic faults or software attacks. We have a protection where we make sure that certain names will continue to resolve even if the end-user is late to renew; this is only given to certain important domain names.
And we regularly check whether there are any syntax errors in the records, just to make sure of the data integrity of the zone file. And we do inline checking in the programme code to verify the mechanisms.
We also do zone file size comparison, meaning that at regular intervals we compare the current zone file size with the previous zone file size. If we notice that the zone file size has changed suddenly, either shrinking or ballooning to a high degree, that's when we check for errors, to see whether it's natural evolution or there is some error in the logic that makes the zone file size so different from the previous one.
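The size comparison described above can be sketched in a few lines. The 20 percent threshold below is an arbitrary illustrative value, not SGNIC's actual setting, and a real check would likely compare record counts as well as byte sizes.

```python
def size_change_ok(prev_size, new_size, max_ratio=0.2):
    """Flag a regenerated zone file whose size jumped or shrank sharply.

    Compares the new zone file's size with the previous generation's and
    rejects it (sending it for human review) if the relative change
    exceeds max_ratio in either direction: both a sudden shrink and a
    sudden balloon are suspicious.
    """
    if prev_size == 0:
        return new_size == 0  # no baseline: only an empty file matches
    change = abs(new_size - prev_size) / prev_size
    return change <= max_ratio
```

The point of checking both directions is that a generation bug can just as easily duplicate records as drop them; natural growth of a zone stays well inside such a threshold between runs.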
So this is a precautionary measure. And we carefully check the zone file: we load the zone file into a test system, and then we externally test resolution against that test system, just to make sure that everything in it is okay. We believe that through this we can at least be sure that we have done all the necessary things to ensure the zone file does not have errors. Because having a zone file record is one thing; but if a record is missing, or a record is only partially there, that's where errors will start to emerge, and we would have to do a lot of checking.
As a last resort in the design, we are quite concerned that there could be unforeseen failures in the zone generation logic. Even having taken all the precautionary measures, there could be a situation where, through a programme logic error, the zone file is not completely generated. For that case we have a procedure for a rapid swing of the zone file to the mirror-image primary server, because when the zone file is corrupted, we will be under tremendous pressure from the public for a quick resolution of the names.
And we thought it better, as a compromise, to forego the 5 percent of domains that were newly registered or changed name servers recently, compared to the 95 percent of the zone file which we have in order. So we swing over to the image server, just to make sure that the previously good zone file records are in place, while we take time to troubleshoot the errors in the corrupted zone file of recent times.
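The swing-over logic described above, serving the last known good zone rather than a corrupted new one, might be sketched like this. The function names are hypothetical, and `validate` stands in for the whole battery of checks (syntax, size comparison, test load) described earlier.

```python
def publish_or_fall_back(candidate_zone, validate, last_good_zone):
    """Serve the new zone if it validates; otherwise swing to the last good one.

    Mirrors the trade-off in the talk: losing the most recent ~5% of
    changes is better than serving a corrupted zone for the other 95%.
    Returns the zone to serve and a flag indicating whether we fell back
    (so operators can be alerted to troubleshoot the corrupted zone).
    """
    if validate(candidate_zone):
        return candidate_zone, False  # new zone accepted
    return last_good_zone, True       # corrupted: fall back and alert


# Example with trivial stand-in zones and a failing validation.
zone, fell_back = publish_or_fall_back("new-zone", lambda z: False, "old-zone")
```

The design choice worth noting is that fallback is a first-class, pre-planned path (the image server already holds the last good zone), not an improvised recovery.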
So as a conclusion: in our opinion, DNS resiliency can be enhanced by collaboration among the government, the ccTLD, and the ISP industry, and by policy and procedure implementations. Sometimes we can persuade the ISPs to cooperate with us; sometimes you have to rely on regulatory measures from the regulatory authority.
And it's equally important that we work closely with the industry stakeholders on best practices and security alerts and findings; in this area we work with ICANN, other ccTLDs, and APTLD, the regional TLD organisation. On our own, we don't think we can have a very secure, enhanced DNS, because being connected to the Internet, anything can happen at any time. So we make sure that we give enough attention to working with the external stakeholders.
And of course, equally important is that internally we have a robust DNS infrastructure and adequate contingency measures.
We don't claim to have all the protection measures in place, but certainly we want to learn from others to make sure that our contingency measures are continually enhanced through learning from other stakeholders.
And with that, I end my presentation. Thank you very much.
>> GIOVANNI SEPPIA: Thank you for this interesting and complete presentation.
The next speaker is Thomas de Haan. Thomas is a senior policy coordinator at the ministry responsible for telecommunications policy in the Netherlands, and he has been involved with this work for several years.
You're almost there. In the meanwhile, a quick update on the remote participants: we are almost ready to have two of them speaking in this workshop, as all the technical issues, I'm looking at the screen in front of me, seem to have been fixed. So Margarita and Michele can soon join the workshop. I now leave the floor to Thomas.
>> THOMAS de HAAN: How do I make this screen-wide?
Thank you. All right. Is it working?
Maybe my presentation will be slightly different from the ones you have heard. I heard, to be honest, a lot of success stories about ccTLDs and companies working on resiliency and contingency. Our point of view, let's say, is much more that, given all this good work, there are still some risks and some threats which we, as a government, see as important to deal with. And my presentation is about this issue.
What I want to talk about is a cooperation between the Dutch government and the Dutch ccTLD registry, (inaudible). I think we are both on equal terms in this cooperation, a voluntary exercise.
A very quick summary of the Dutch ccTLD. It's the fourth -- fifth -- I don't know for sure -- the fourth largest ccTLD in the world. Private, not-for-profit, independent from the government, relying fully on self-organisation and self-regulation. Of course, accountable to the local Internet community.
And this is maybe dangerous, because I see colleagues from .de and .uk here. But I would say in size, et cetera, it would be comparable to .de and .uk.
What is triggering the interest in the dot.nl domain? Surely a couple of things which are typical for the Netherlands. We are still, I think, third in broadband penetration in the world, at least according to the latest figures. We have a fairly globalized market. We have, of course, a major Internet hub in Amsterdam. We have a high density of registrations, 2,900, and we have a very big dot.nl domain in use: approximately 70 percent of registrants active in the Netherlands are using dot.nl. So that means that we have quite an economic and also a societal dependence on the dot.nl domain.
So this triggered our attention about six or seven years ago. We did a study on, let's say, the impact of a failure of the dot.nl domain. The results were, in a sense, obvious: as we see in many other ccTLDs, the impact if something happens is large. You can calculate it in a certain way, though it's risky to put figures on it, but the figure is very high.
On the other hand, the chance of failure is almost zero, in the sense that, due to all of these stories I hear around me, which I have of course also heard from our registry, the chance of a real failure and an outage of the complete dot.nl domain is very, very low. Still, we think there are a couple of things which we should arrange with our registry. Although we have a very robust and resilient environment, there are of course still other kinds of risks which could endanger stability and continuity.
So this was, let's say, our starting point from the government. And we entered into a dialogue between the ministry and SIDN. And I think this is very important for us: we didn't impose anything, and had no intention to make regulation. Of course, regulation is always a last resort. But in this case, with our registry, it was a pure peer-to-peer dialogue and a real open exchange of views, in which we tried to see what our roles are from both sides. And not only the roles, but also what our mutual interests are, and what kinds of safeguards we want for continuity and stability.
And what we eventually did is formalize some arrangements in a covenant. A covenant is maybe not such a well-known instrument, but it's a kind of MOU, you could say. It's legally binding, but it's a rather free-form kind of agreement.
Basically, the covenant itself is just two or three pages with highlights of the things we wanted to arrange. But before we got to this covenant, we made a study together, in which we thoroughly analyzed the state of security and the state of governance: the whole picture, let's say.
To go quickly through the covenant, I think there are three important hooking points. As a government, we reiterate self-regulation: we think it's the best option for dealing with this environment.
CcTLD governance. Again, it acknowledges our special interest, let's say, deriving from our responsibility for ICT policy nationally, and also for security: not purely network security, but the security of the whole ICT system in the Netherlands.
And again, there are ties to the Netherlands, which is a formalization of what is already in their complete governance structure.
We had some measures which we agreed upon. Basically, these are a continuation of an exercise which was already being done on making the system more robust. And I won't get into this in detail, because many of these things I have already heard here around me; many things sound very familiar. This is basically a kind of continuation, and maybe an enhancement, of the security of the total system.
As an ending point, we have two obligations. The first is that we, as the state, agreed upon providing a kind of emergency assistance in the case of a severe catastrophe. We are still working this out, but it is clear that when SIDN faces a major attack or a complete outage of the system, they should be able to rely on the government to do its utmost, insofar as they are not themselves able to find a solution.
The second point is to work out a kind of last resort scenario for delegation. Maybe because this is kind of new in the landscape of the ccTLD government relationship, I think I will go into this a little bit more in detail.
Let me first say that both SIDN and the government are very much aware of the fact that in the case of a very severe outage or severe failure, it could be that SIDN doesn't exist anymore. In that case we thought that the government has a kind of role of initiating a process to get to a new national registry. I won't go into the methodological part of it, but what we agreed upon is that we have some checkup moments. We attached criteria to what really constitutes a trigger moment for a redelegation. And the main criterion is that there should be major economic damage, which means not only to, let's say, the economy as a whole, but also to the almost 2,000 registrars, which could be affected by some kind of threat.
The failure would also have to be going on for a long time and be irreversible, because I hope that 99 percent of the problems can always be solved by, let's say, a kind of restart or continuation of SIDN itself.
And then important also is that the failure should be attributable to the registry or it should be a failure which is not an incident, but it's structural.
And then of course we also made arrangements for the case that we disagree about something: we have dispute settlement and, as a last resort, the courts to decide upon this.
I think I won't go into this, it's much more procedural.
Maybe two points to end with, I think my time is over already, we have also looked at the possibility of having a caretaker for the period in which we have really an unstable situation. These are the things which we are still discussing. I think we have the framework of the kind of mitigation process, but there are some things we are still working on.
It doesn't mean that we are looking actually for a caretaker, but we have to have a kind of blueprint, a kind of guideline and what it is to do in the case of an unstable situation.
And I won't go into this.
Let me end with this figure. Of course, we have ICANN, which in the end does the redelegation, or is the authority to do the redelegation. We are missing one link in the three-party exercise. And that means that we are still discussing with ICANN the way in which we present this national agreement on the last resort scenario. The most basic thing would probably be an exchange of letters in which we just say: okay, ICANN, we have a national procedure, in which both government and local community are involved, and we have this candidate as a nominee for being the new registry. And if what we propose is in the interest of the Internet, I think they would follow this.
Of course, we have some principles which guide redelegation, among them sovereignty and the fact that the choice of a national registry is something for the national market and for the local community. And of course I think the best place to choose a potential new registry is in the country itself, with its own rules.
So this was my presentation. Thank you.
>> GIOVANNI SEPPIA: Thanks a lot, Thomas. Indeed, it was an interesting presentation and an interesting overview of the dynamics between the government and the registry in the Netherlands.
I now have to rush and leave the floor to the remote participant, Michele Neylon. And I believe the screen behind me will work to get the remote participant fully involved. And I have to continue to speak, because apparently the tryouts that have been done so far -- they keep trying. So, in the meanwhile, I can introduce Michele. Michele is founder and Managing Director of Blacknight Solutions. And I'm really and sincerely thankful to Michele for joining us today, as this participation was not foreseen until last night: another registrar, Markus, was planned to speak at this workshop, but he had some issues reaching Vilnius and therefore he couldn't make it, not even remotely. So last night I was in contact with Michele, who volunteered to help me in having a representative from the registrar community, which is indeed a community I really wanted to have participating in this workshop.
So, I'm now leaving the floor to Michele, who is a registrar and is accredited with many ccTLDs around Europe and Italy. And it's really my pleasure to have Michele onboard this workshop today. Thank you.
I understand that was a sign of gratitude on their side as well. So thank you, Michele.
>> MICHELE NEYLON: Good morning, everybody. Thanks for letting me make this brief intervention. Unfortunately, with the way this remote participation is set up, I'm kind of talking to a screen. You can probably see me and hear me, but I can't see you or hear you at the moment.
Giovanni asked me to step in and give this brief presentation about what we are doing in our own way with respect to DNS stability and security. So, this is just a very brief overview. And hopefully people will have questions during the general question session at the end.
Just a brief overview: as already stated, we are a registrar, ICANN accredited and accredited with various ccTLDs, and we're based in Ireland. We are currently the largest registrar and hosting provider in Ireland, and we host well over 100,000 domain names. We use multiple data centres for both the hosting and the name servers, and we have multiple sets of both authoritative name servers and resolvers. We have invested heavily in building a very resilient network: we use multiple carriers on both trunks of the main network, plus we peer at INEX in Dublin. On the DNS side of things, we do, of course, our regular back-ups and various other things. Within the network itself, we have built things in such a manner that each name server is hanging off a different router and behind a different set of firewalls, and there are multiple sets of pretty much everything.
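The kind of diversity Michele describes -- each name server behind a different router and a different set of firewalls -- can be sketched as a quick audit: group a zone's name-server addresses by network prefix and flag servers that would likely share fate in an outage. The following is a minimal illustration only; the server names and addresses are hypothetical.

```python
import ipaddress

def diversity_report(nameservers):
    """Group name-server addresses by /24 (IPv4) or /48 (IPv6) prefix.
    Servers sharing a prefix are likely to share fate in an outage."""
    groups = {}
    for name, addr in nameservers:
        ip = ipaddress.ip_address(addr)
        prefix_len = 24 if ip.version == 4 else 48
        net = ipaddress.ip_network(f"{addr}/{prefix_len}", strict=False)
        groups.setdefault(str(net), []).append(name)
    return groups

# Hypothetical servers, using documentation address ranges.
servers = [
    ("ns1.example.net", "192.0.2.10"),
    ("ns2.example.net", "192.0.2.20"),   # same /24 as ns1: shared risk
    ("ns3.example.net", "198.51.100.5"),
]
report = diversity_report(servers)
for net, names in report.items():
    status = "SHARED RISK" if len(names) > 1 else "ok"
    print(net, names, status)
```

In practice the same idea extends to checking that name servers sit in different autonomous systems and data centres, not just different subnets.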
But the problem for us of course is that we're a commercial organisation. We don't have the -- we're not funded by the government. We don't have -- we're not a noncommercial venture that might be able to get other types of funding. So, any kind of resilience or extra back-up or whatever, we have to fund it ourselves. So, the thing that we always have a problem with is the cost of us adding an extra name server and going into another location versus the price that we then would have to charge our clients for the services that we offer.
The other challenge of course is that as new extensions to the DNS become available, we have to be able to integrate those with the customer facing software. So, for example, if we are giving a user the ability to add basic DNS records so that they can point their domain name somewhere else, then we have to be able to give them access to editing other records in a manner which will actually work for them. And of course the other problem is that while a certain type of user would be interested in having access to DNSSEC records, the vast majority don't care. So in terms of adding new services in this area, we have to weigh up the demand versus the cost of actually implementing this.
So the kind of things that we do plan to do further down the road in order to add to this kind of resilience and stability would of course be to add more name servers. So whether that is a case of adding more name servers in data centres and locations that we have direct control over, or whether this is a case of doing this through our various partners, it's hard to say. Because again we have to take the cost factor into consideration.
DNSSEC is something which is on our roadmap. In the first instance, we will be enabling it on our resolvers, because there is no customer facing aspect to that. But getting it into the systems which would allow our users to add records is going to be a bit more complicated.
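As an aside on that resolver-side step -- turning on validation has no customer-facing aspect -- a minimal configuration sketch for a validating recursive resolver might look like the following. This assumes BIND 9.8 or later with its built-in root trust anchor; it is an illustration, not a description of Blacknight's actual setup.

```
// named.conf (options) -- a validating recursive resolver, minimal sketch
options {
    recursion yes;              // this server answers recursive queries
    dnssec-validation auto;     // validate answers against the built-in
                                // root zone trust anchor
};
```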
In terms of IPv6, we have IPv6 connectivity throughout the network, but we're still seeing all sorts of interesting issues with being able to maintain that. So, for example, here in our offices, we have IPv6 on the desktop, but when we try to connect through to certain services run by various registries on the other side of Europe, we run into difficulties, because the network, the routes and everything between Ireland and, say, the far side of Germany or maybe the US, aren't exactly stable. So while in theory we should be able to connect, in reality we can end up running into brick walls. So when it comes to rolling out services over IPv6 to all of our clients, it's something that we have to be very careful about. While the idea itself is attractive, the reality could be a bit different.
And another thing of course that we intend to do is to peer at more exchanges. One of the things that came up as a theme in some of the other speakers' talks was general security and stability. This is something that as a registrar we take very, very seriously. Because if we have issues with respect to the security of the services that we're offering as a commercial organisation, that can have a very negative impact. If users feel that our services are not secure, then they will move to another provider.
While the registry operator might have to look after the public good, from a commercial operator's perspective, while that is all well and good, the reality is that you're going to be motivated by simple economics. If people are not comfortable with your services, then they will then move to somebody else.
And that's the end of my very brief presentation. And thank you.
>> GIOVANNI SEPPIA: Thanks a lot, Michele. Can you hear me?
I understand -- yes? No? No.
Thanks a lot, Michele. Again, it has been a big favor to this community to present the registrar perspective.
And the next presenter is Margarita Valdez from Chile. She is currently the legal and business manager of NIC Chile (.cl), and another active participant in the TLD community and the worldwide Internet community. Thanks a lot, Margarita, for your willingness to share your experience with a catastrophe that impacted Chile recently, but also for staying up so late in the middle of the night to be with us today here in Vilnius. So thanks for your big effort. And the floor is yours.
>> MARGARITA VALDEZ: (Microphone is fading in and out)
>> GIOVANNI SEPPIA: Margarita, it looks like you're talking Draconian to us or another language. Wait, hold on a second, Margarita. Hold on. We have an audio issue.
Can she hear me? I'm looking at the technicians in the corner. Can she hear me?
>> GIOVANNI SEPPIA: So we can hear her. So it's mutual. There is no communication.
>> MARGARITA VALDEZ: Obviously in response to the top domain choice, we have two decades to -- for this duty. And more than --
>> GIOVANNI SEPPIA: You can send a message to Margarita saying that the communication process is having some failures. Could you please do so?
Thank you. She doesn't -- she continues to talk. Okay. That is really Margarita. Okay. I think she wants really to go to bed. And... so --
>> MARGARITA VALDEZ: Chile is also -- we have the computer science of the physical and mathematical science of the University of Chile.
I can show you what was the expansion or the dimension of the earthquake that we survived on February 27th. And the -- the red button, the red circle that you are seeing is the huge point that the earthquake was --
>> GIOVANNI SEPPIA: Maybe what we can do is try to communicate to Margarita and leave the floor to John Crain so she can put the screen back up to the slides.
>> MARGARITA VALDEZ: It was 8.8. So --
>> GIOVANNI SEPPIA: So if we can have it --
>> MARGARITA VALDEZ: Probably the main point of the event is --
>> GIOVANNI SEPPIA: Hello? Can we have the slides back on the screen, and so I can leave the floor to John Crain of ICANN, who is the last physically present speaker of this workshop. And in the meanwhile, you can try to communicate with Margarita and fix the audio issues.
I'll try it again.
Hello? Yes. Can we try to have the slides back? Okay. Great.
So I leave the floor to John Crain. John has been with ICANN since the early years, and is currently responsible for establishing strategy, planning and execution for ICANN's external security, stability and resiliency programme. He has also been heavily involved in ccTLD trainings worldwide, to make sure that ccTLDs are given the capacity and the knowledge to implement contingency planning.
So, John, thanks a lot for being with us today. And the floor is yours.
>> JOHN CRAIN: Thank you. It shows that you cannot rely on the Internet! It never seems to work.
So I'll talk briefly, not on a technical matter, but about some of the things that we at ICANN have been working together with the community on, relating to contingency planning.
So, we have two main focuses that we work on. One is training and capability building. And the other is exercising.
So in our training efforts, the first one we have been holding is in conjunction with the regional ccTLD groups -- many of you may know them: CENTR, AFTLD and LACTLD. This came from a request to help with contingency planning and response. It's designed for smaller organisations. Basically, what we did is we hired some consultants to take the contingency planning processes that are out there in the industry -- if you have ever seen them, the manuals on how to do contingency planning are thousands of pages long -- and make them more suitable for small organisations. Many of those in our industry are organisations with only a few staff, so spending all of your time going through thousands of pages of documentation and forms is not really what you want to be doing.
So far, we have had 245 attendees at these trainings, and those represented 123 of the ccTLDs. I think it was well received. And some of the people here in this room have actually been to those trainings.
The other type of training that we're looking at is something we call the registry operations curriculum, which is a more technical training. It's in collaboration with the Internet Society (ISOC), the Network Startup Resource Center out of the University of Oregon, and once again the regional ccTLD organisations: a three-part training programme aimed at improving the capabilities of operators and their staff, so that they can actually build more stable and resilient systems.
The other side of contingency planning is exercising. It's not very good having processes if you don't test them. This is what we call tabletop exercises. We have done these with some of the registries, registrars and infrastructure operators. ICANN itself has many processes around data escrow, and processes for what we would do if a TLD failed. And we have done tabletop exercises to simulate these scenarios and test the processes, aiming to improve them.
This is an important part of contingency planning, is to actually exercise your plans and then to keep constantly improving the processes.
We have also been involved in other exercises. There are a lot of things going on these days called cyber exercises. Often, one part of a cyber exercise at a national or international level is to look at the reliance on the DNS. There are a couple of cases where we have been asked to go and provide expertise.
We also see many members of our community at these exercises -- ccTLDs, registrars, registries -- so it's something the community is actively involved in.
My suspicion is that we will see more and more of these exercises happening, and more and more of us in the industry will become more involved with these.
That was it, really. I wanted to keep it short. One of the things I wanted to emphasize is that as an industry, what we do very well is we collaborate. The training efforts that ICANN is doing, it's not really an ICANN training effort, it's a community training effort. A lot of things that people talked about here are also community style responses and programmes, so I think that is very important. And I'm honoured to be here with a lot of people in the community. With that, I kept it very short.
>> GIOVANNI SEPPIA: Thank you so much, John. And we are now trying to get connected again with Margarita from Chile.
And she would like to share with us the experience they had there after the magnitude-8.8 earthquake in February of this year.
Margarita, can you hear me?
>> MARGARITA VALDEZ: Hello. Okay. Can you hear me?
Well, I would like to show you what we survived and what we have done after the earthquake in February, and something about the Chilean domain registry. We have been working for the community and growing the registry for 20 years, and we are part of the University of Chile.
This is a chart about the earthquake that I can show you -- how hard it was. We had another earthquake in 1960. And this is about the February one.
This is a comparison we have done about how strong the earthquake in February was. You can see the magnitude in this slide. This was in the south of Chile, very close to the epicentre.
About our non-DNS infrastructure, we have three types: the main DNS site, the offices, and the UPS and power generators. For our production servers, we use one of the same type, with another in the contingency site. Our network equipment is duplicated.
In the fact of the earthquake, the nonDNS, because the activity was (inaudible) the earthquake links were (inaudible) at almost 4.
At 4:30 the engineering team were in the site for inspection. At 6, the European sound of the (inaudible) and the Europeans arrived for inspection at 10. Orange link-up degraded. Returned the energy to power was again working at 11 a.m.
Continued inspection by the engineering team.
At 9:30 p.m, the self generation of the DNS was completely (inaudible)
We have more than 50 secondary servers: three in the cloud, more than 30 with Netnod for the (inaudible) and the Internet Systems Consortium, 8 (inaudible), and 6 servers active in Chile -- four in Santiago and one in (inaudible). We have the root server at the main site in Santiago. This is a chart of how we work in the Anycast system that we currently have running.
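The anycast arrangement Margarita describes can be pictured with a toy model: every site announces the same service address, routing delivers each query to the nearest reachable site, and when a site withdraws (as the Santiago nodes did when power failed), traffic shifts to the surviving sites. A minimal sketch in Python, with hypothetical site names and routing costs:

```python
def anycast_answer(client, sites):
    """Toy anycast model: all sites announce the same prefix, and routing
    delivers a query to the reachable site with the lowest cost.
    `sites` maps site -> {"up": bool, "cost": {client: int}}."""
    candidates = [(info["cost"][client], name)
                  for name, info in sites.items() if info["up"]]
    return min(candidates)[1] if candidates else None

# Hypothetical sites and routing costs for illustration only.
sites = {
    "santiago":    {"up": True, "cost": {"cl-isp": 1, "us-isp": 8}},
    "los-angeles": {"up": True, "cost": {"cl-isp": 5, "us-isp": 1}},
    "stockholm":   {"up": True, "cost": {"cl-isp": 9, "us-isp": 6}},
}
print(anycast_answer("cl-isp", sites))  # the nearest site answers

sites["santiago"]["up"] = False         # local site loses power
print(anycast_answer("cl-isp", sites))  # queries shift to a remote site
```

This mirrors the .CL observation that, with the local nodes down, distant instances such as Los Angeles absorbed most of the query load.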
And the impact of the earthquake on the DNS infrastructure is as follows. Near the epicenter, (inaudible) had minimal traffic (inaudible) in our services. At 9, I said it's okay, just use the contingency site. There was no power after the earthquake. The other servers of the Anycast system kept answering the queries for .CL. You can see in the chart that Los Angeles served most of the responses for .CL.
Our conclusions are that the network of DNS servers around the world guaranteed uninterrupted domain service for .CL. This allowed the national network to operate even with all Internet links down. The zone generation was running normally, each half hour, between 4 a.m. and 9:30 p.m.; some zones were generated but not published. Now we are analyzing the event to have a proper response for an emergency in the future.
One of the main problems was communications within the team: with the loss of power, cell phones and land lines were down. About the Internet: that night, the Chilean Internet did not pass the test (inaudible) with the interchange agreement -- the main problem was lack of energy. This includes the physical network, routing, servers and (inaudible)
The 17 February could have saved lives and it could have operated well, but what was the problem? (Off microphone.) Connections with the self energy, if they understand that Internet is fundamental to operate (inaudible) that to do service in the epicenters and clear out the site of those (Off microphone.)
Connections to multiple for technological partners in the describe could support the design, to convince our authorities that the connection to the Internet is critical.
Final reflection: the public Internet is critical infrastructure, and very useful in a catastrophe. Chile almost managed to pass the test of the (Off microphone.) And this became very well-known around the world after the earthquake. And that's it.
>> GIOVANNI SEPPIA: Okay. It looks like we have lost Margarita. She was at the end of her presentation. And I would like -- is it possible to have her back? Okay.
If not, I would like to now open the floor for discussion and questions.
I would like to thank Margarita for the presentation. Margarita, thank you so much. I'm looking at the technicians to make sure that we have the audio in the audience now. So could you please -- yes, thank you. Switch the audio. And I would like now to open the floor for discussion.
We have heard so many interesting perspectives from different operators of the DNS. And I think it's been useful to understand how contingency plans and resilience are ensured in different environments.
Again, the floor is now open for discussion and questions to the panelists. I would suggest we do not ask questions to the remote panelists. We all know them and I'm sure they will be happy to respond to questions online via the e-mail addresses. All presentations will be uploaded on the sites.
To complete the presentations -- one second, Peter -- I would just like to thank Madame Khoudia Gueye Ep Sy, who should have joined us today as well as a remote participant, but there was a big issue with connecting her remotely from Senegal. So the presentation of Madame Gueye will be up on the site, but she was not able to join us during the workshop itself.
I'd like to leave the floor open to discussions and questions. And I saw there was Peter in line. No more questions? You changed your mind?
>> PETER: I had a question for the remote participant, but I'm following your instructions and asking her online.
>> GIOVANNI SEPPIA: That is a good choice. Because I believe the chances of Margarita responding are very limited now.
Any questions, Eric?
>> ERIC IRIARTE: A question -- this is a direct question for John. What is the proposal for ICANN for the next year: will it continue working with the organisations, so that each year there will be workshops?
>> JOHN CRAIN: So we have two programmes. We are still continuing with both. I believe you specifically, Eric, will be hearing from our staff about the possibility of doing a training in Cartagena. I'd like to do a train-the-trainers programme so other members of the community can deliver the trainings, rather than us hiring consultants all the time. So the intention is to continue with these programmes. Of course, these programmes are done at the behest of the ccTLD community, so if you want specific trainings, please come talk to me.
>> GIOVANNI SEPPIA: Thank you, John. Yes, sure. Okay.
>> JOHN CRAIN: I have a question for Thomas, about emergency support. One of the things that a lot of us can do as engineers and operational people is design highly resilient systems. But when something happens, such as when Max and everybody was over in Haiti, one of the things that was a struggle was the support for fuel, for shelter, for food, et cetera. Is this the kind of thing you're talking about when you talk about emergency support?
>> THOMAS de HAAN: Well, maybe I can also refer this question to Julof, because this part of the covenant was also triggered by SIDN itself.
>> GIOVANNI SEPPIA: There is a microphone behind you.
>> Well, I was a bit afraid you were going to say no, because the answer to the question is yes, exactly that kind of thing.
So, internally -- and I think even with Thomas -- we refer to it as the black electrical clause: that if something happens beyond our influence, which potentially or actually has a severe impact, we can have a special position.
It can be the Army helping us, but it can be something as simple as getting the preference in the delivery of electricity on the grid.
>> KURT ERIK LINDQVIST: This is something that we have support for in Sweden, partly through the rules on the Internet exchanges in Sweden, and even more because we are a root server operator, so we have a special status. All of our operational facilities are co-located in secure facilities that the government provides for critical infrastructure. If we had a similar incident in Sweden, we are listed for power and diesel, and the diesel is supplied to the bunkers by the military. And it's done for all of the cell phone operators as well as for us, for the voice interconnection. So we have a very similar system in place. That is something that others should look into as well. It's useful for us -- never had to use it, though.
>> THOMAS de HAAN: One complement is that we have the European directives about critical infrastructure, and of course this has been filled in for telecom operators. And there are openings for essential facilities other than telecom. These things we have to take into account. But in the end, of course, it comes down to being able to pick up the telephone to the right guy to have something arranged.
So we can either rely on regulation, or even, let's say, a system of being designated an essential facility, but of course that doesn't really solve the problem. This is a parallel project which is going on, because it comes from Brussels. But in parallel, we are also working on the other thing which was mentioned.
>> GIOVANNI SEPPIA: Thank you, Thomas, John, Kurt. Other questions from the floor that you would like to share? We have had some quick input from Madame Khoudia from Senegal. She sent an e-mail telling us that in certain areas of Africa the main issues come from power outages, local political issues, the fact that there are no root servers located in the region, and no real Internet Service Providers as well as no local networks. So this is the input that Madame Khoudia brings us. We are living in a world with many environments where the Internet does not always work as we would like, and where DNS connectivity, as most of us in this room probably believe, is in certain areas not the way it should be.
I'm asking once more if there are questions from the floor. Otherwise, as I mentioned in the workshop overview, one of the objectives of this workshop was to end with an agreement among the TLD organisations to make sure that best practice sharing is always ensured, especially in an area which is very delicate, such as contingency planning. Before I leave the floor to Peter Van Roste and the other representatives of the regional organisations, I want to thank all of the speakers for their efforts with a very flexible agenda, and also the technicians and the scribes; they helped me a lot today in seeing this workshop through. So I leave the floor to Peter, and this is dedicated to the agreement which Peter will talk about.
>> PETER VAN ROSTE: Good morning, everyone. My name is Peter Van Roste. I'm the General Manager of CENTR, which is the regional organisation for country code registries in Europe.
In the room I have my colleagues from the other regions: the Asia-Pacific region, the African region and the Latin American region. When we were building up to this IGF, in preparation of this workshop, we came very quickly to the conclusion to formalize what we had basically already discussed on some of the last occasions at the IGF. It's quite frustrating that there is never something tangible coming out of the workshops. And of course people would like to keep it as such, to avoid the IGF suddenly becoming some sort of policy forum. But we felt that it would be really appropriate to use this opportunity and this occasion to formalize the things that we have, to strengthen them, and to commit to exchanging information. It becomes clear when you're sitting through a session like this that only the sharing of information strengthens the DNS as we know it today.
And obviously there is the whole technical side, which has functioned well -- in particular I'm referring to the cases in Chile and in Haiti. But I think we need to build a repository, and not just on security related issues, by the way: we have to help the ccTLDs across the world to learn each other's best practices and to share information, so that we basically link the four regional organisations together.
And to that extent, we have written a letter of intent, which I would invite my colleagues from the other organisations now to type their names, since we didn't have time to print them.
In fact, I invite you three up to the stage to co-sign. Erik?
>> I'm playing the notary to make sure that signatures are there. So I'm just reporting to you that yes, they are typing their names.
I confirm that all of the four signatures are now there.
So good luck!
>> And to make sure that there is a proper follow-up from this, at the ICANN meeting in Cartagena we will make sure that you have a detailed list of what that cooperation will include. And I would invite all of you that are involved in ccTLD management to contribute to that effort. I'm sure that it will make our lives significantly easier. Thank you.
>> Peter, can I just say a couple of words. On behalf of the ccNSO, I think this is fantastic. The regional organisations are a part of the structure around the world, and it's good to see all of us working more closely together and cooperating. I just wanted to say thanks. I think you're all doing a brilliant job. Thank you.
>> GIOVANNI SEPPIA: And that brings us to the close. Thanks everybody, thanks to the audience and again thanks to the technician and to the scribe for this very difficult workshop in terms of logistics. Thank you so much.
(End of workshop)