Wednesday, January 27, 2010

A proposal to extend the DNS protocol

Today a group of DNS and content providers, including Neustar/UltraDNS and Google, are publishing a proposal to extend the DNS protocol. DNS is the system that translates easy-to-remember domain names into numeric addresses: the IP addresses that computers use to communicate with one another on the Internet.

By returning different addresses to requests coming from different places, DNS can be used to load-balance traffic and send users to a nearby server. For example, if you look up a site from a computer in New York, the name may resolve to an IP address pointing to a server in New York City. If you look it up from the Netherlands, the result could be an IP address pointing to a server in the Netherlands. Sending you to a nearby server improves speed, reduces latency, and makes better use of the network.

Currently, to determine your location, authoritative nameservers look at the source IP address of the incoming request, which is the IP address of your DNS resolver rather than your own. This DNS resolver is often managed by your ISP, or alternatively is a third-party resolver like Google Public DNS. In most cases the resolver is close to its users, in which case the authoritative nameservers are able to find the nearest server. However, some DNS resolvers serve many users over a wider area. In these cases, your lookup may return the IP address of a server several countries away from you. If the authoritative nameserver could detect where you were, it could have directed you to a closer server.

Our proposed DNS protocol extension lets recursive DNS resolvers include part of your IP address in the request sent to authoritative nameservers. Only the first three octets, or top 24 bits, are sent, providing enough information for the authoritative nameserver to determine your network location without affecting your privacy.
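As a rough sketch of what sending only the "first three octets" means in practice, here is a minimal example using Python's standard ipaddress module (the helper name is ours, not part of the draft):

```python
import ipaddress

def truncate_client_ip(addr: str, prefix_len: int = 24) -> str:
    """Zero out the host bits, keeping only the network prefix a resolver
    would forward to the authoritative nameserver."""
    network = ipaddress.ip_network(f"{addr}/{prefix_len}", strict=False)
    return str(network.network_address)

# The last octet is dropped, so an individual host is not identifiable:
print(truncate_client_ip("198.51.100.73"))  # -> 198.51.100.0
```

A resolver would forward 198.51.100.0/24 instead of the client's full address, so the authoritative server learns the network, but not the host.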

The Internet-Draft was posted to the dnsext mailing list today, and over the next few months our group hopes to see this proposal accepted as an official Internet standard. We plan to continue working with all interested parties on implementing this solution and are looking forward to a healthy discussion on the dnsext mailing list.

(Updated 24 Jan 2011 to fix broken links)


  1. Google wants to shave a few dozen milliseconds off the time a HTTP request takes for an extremely small number of users, when accessing a *tiny* fraction of a percent of websites, by creating a major change to the most important protocol on the Internet? Doesn't seem very clever to me.

    It also leaks information about the end user unnecessarily. A prime example of where this would have been bad is the DNS pre-fetching flaw in most webmail implementations (including GMail over HTTP) which I blogged about the other day:

  2. This is horrible. This is so GOOG can monitor ALL of your web activity, all the time.

    If you ever use Google, or see adwords anywhere, they already have your ip--all 4 octets.

    With this DNS extension, they can see what sites buckets of people are visiting when they're NOT on google sites or where goog ads are being served. It's not resolved down to the user, but it's bucketed, and over time, they can guess what's happening.

    This proposal is absolutely about google getting more data about your internet habits, and more data about the market spaces they don't (yet) control.

  3. To be honest? I don't like the idea. It would certainly open the door to a well load-balanced world, but I'd rather consolidate into a cloud environment than use DNS tricks with dozens of hosts around the world.

  4. IMHO this will not work.
    1) One-way satellite ISP customers - FAIL. Sometimes they use the DNS of a terrestrial ISP while the real content is retrieved over the satellite ISP (over another IP). In my case, for example, DNS sometimes comes from the UK, sometimes from Germany, and sometimes from Lebanon.

    2) Multihomed ISPs with non-BGP load balancing - FAIL. DNS can go through one backbone, service over another. In Lebanon, for example, they have two completely different backbones, US and European, and sometimes they are chosen in a very strange manner.

    3) What if access to DNS is done from a reserved, so-called "gray" IP? 10.x.x.x won't help much.

    Better to use an extension that already exists, and to do it over HTTP. You can already get X-Forwarded-For there, along with the real IP of the customer.
    If they have their own protocol, embed that extension there.

  5. "Google wants to shave a few dozen milliseconds off the time a HTTP request takes for an extremely small number of users, when accessing a *tiny* fraction of a percent of websites, by creating a major change to the most important protocol on the Internet?"
    Even if just Google uses this protocol, 70% (at least) of the internet users will gain time.

  6. One-way satellites, multihomed sites, ... and anyone who is using a DNS server far away from their computer are exactly the users who will benefit the most from this proposal.

    In the current world, most major internet sites will send your browser to different locations based on the IP address of the resolver configured on your computer.

    What we're proposing will allow those internet sites to send your browser to the best location based on your network IP, rather than that of your resolver.

    Google will not be the only one to benefit, as the extension will be usable by anyone. And it's only a backward compatible DNS extension: not all ISPs or recursive resolvers will need to use it or even know that it exists, even if we do hope that it will be used where it does matter.

    Also, if implemented as we suggest, the data that will be disclosed will not be enough to uniquely identify a user in most of the cases, and will not add anything more to what your computer normally sends when opening an HTTP connection.

  7. Shaving off the last octet will not protect anyone's privacy. It's nice that they decided to drop the last 8 bits to save space, but it won't keep anyone safe. If anything, you've just narrowed down the list of possible hosts that sent the DNS request from 4 billion to 256.

    Note: I haven't read the RFC, just going by the OP text.

  8. This comment has been removed by the author.

  9. @ Carlo Contavalli - Can't this be done by using the existing DNS LOC resource record specification? If you (or your ISP) want to expose your geographic location simply enter a LOC record in the reverse zone.

    To modify the fundamental operation of DNS, and to do so in such a way as to negate all the benefits of caches along the resolver path, seems arrogant and ill-considered to me.

  10. Hey, Google! Stop breaching our privacy! Your proposal would also allow you, the Chinese government, Big Brother, and whoever else to collect certain information in an abusive manner. As a result, you're no better than the Chinese government in your efforts to trash our privacy now and forever. Oh yeah, and dissident tracking is now made easy with the new DNS system? Just as well as imposing censorship? What a great proposal.

  11. Does no one know about a thing called anycast? It's there in IPv4, not just IPv6, and some content providers use it. You have one IP address that is advertised to the global routing table from multiple places. The routers along the way direct your traffic to the nearest source, having the same effect as what they want this new extension for.

    When will we stop trying to reinvent the wheel just because someone else thought of it first, and instead invest in further developing what is already there?

  12. This seems to increase load on all nameservers - both caching/recursive nameservers and authoritative ones.
    I just don't see a compelling benefit to this.

    At the moment an authoritative response can be cached and returned to any other client that makes the same request (i.e. a request for an A record will return the same answer for any client that asks).

    If this proposal were to be adopted, a caching nameserver has the option of:
    1) not caching authoritative responses, so the benefits of caching are lost, and both the caching server and the authoritative server see increased load; or

    2) caching per-subnet results separately, so instead of caching 1 result, the caching server has to store N results (where N is the number of subnets served by the caching server). If a cache is storing N answers instead of 1, answers will need to be expired (and re-requested) more frequently, again increasing the load on both the caching server and the authoritative server.
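    The second option amounts to keying the cache on (name, type, client subnet) instead of just (name, type). A hypothetical sketch of why N subnets mean N cached entries for the same name:

```python
# Hypothetical per-subnet cache keyed on (qname, qtype, client /24),
# illustrating the commenter's point about cache growth.
cache = {}

def cache_key(qname, qtype, client_ip):
    # Reduce the client address to its /24, as the proposal suggests.
    subnet = ".".join(client_ip.split(".")[:3]) + ".0/24"
    return (qname, qtype, subnet)

cache[cache_key("example.com", "A", "198.51.100.7")] = "192.0.2.1"
cache[cache_key("example.com", "A", "203.0.113.9")] = "192.0.2.2"

# Two client subnets -> two separate entries for the same name:
print(len(cache))  # -> 2
```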

  13. "Users who wish their full IP address to be hidden can include an edns-client-ip option specifying the wildcard address"

    How about having the recursive resolver default to this behavior unless the original client specifically sends an edns-client-ip option? Clients opt in instead of opt out, which also gives them the choice of how much of their IP address they want to expose.
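    The opt-in policy suggested here could be sketched as follows (a hypothetical helper, assuming the resolver can see whether the stub client sent its own edns-client-ip option; none of these names are from the draft):

```python
def subnet_to_forward(client_opted_in: bool, client_ip: str,
                      prefix_len: int = 24) -> str:
    """Hypothetical opt-in policy: forward part of the client's address
    only if the client itself asked for it; otherwise send the wildcard
    (0.0.0.0/0), which reveals nothing about the client."""
    if not client_opted_in:
        return "0.0.0.0/0"
    octets = client_ip.split(".")[:prefix_len // 8]
    octets += ["0"] * (4 - len(octets))
    return ".".join(octets) + f"/{prefix_len}"

print(subnet_to_forward(False, "198.51.100.73"))  # -> 0.0.0.0/0
print(subnet_to_forward(True, "198.51.100.73"))   # -> 198.51.100.0/24
```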

  14. guys,

    of _much_ more interest would be to back-port the extensions to DNS that allowed it to act as a peer-to-peer naming service WITHOUT requiring a server AT ALL.

    what? what is this? DNS without a server, how could this EVER be possible? what is this madman talking about?

    take a close look at RFC 1001 and 1002. you will see that the packet formats are IDENTICAL to DNS. all that happened was that the zone field was called "scope", a stupid "mangling" was put onto host names, and an extra DNS type called "Name Query" was added, which is a bit like a DNS server "all" query, except it can be sent to any machine.

    yes, you guessed it: NetBIOS name resolution was derived originally from DNS.

    by dropping all the stupidities (name mangling, "scope") and making use of DNSSEC, an update to DNS to become a true server-independent peer-to-peer service would be absolutely revolutionary.

    and the neat thing is that there already exists a free software implementation on which the changes could be made to work with very little effort, which is in prevalent use today across millions of free software machines for at least twelve years: yes, you guessed it, it's called "nmbd" and it's been part of samba since around 1.9.14 or possibly even earlier.

  15. I don't like the idea; every day we lose more of our privacy online, and this is one more step in that direction.

    Technically it seems to cause more load on the servers and more confusion, because a nameserver doesn't need to accept what the authoritative server "says": one side gives answers to questions, but the questioner doesn't have to pay attention to them.

    What's good in this? Please, Google, start thinking about privacy, because lots of people are noticing what's happening online...

  16. @Steve

    Exactly my concern, especially as a hostmaster for a large number of domains. This will also be vociferously objected to by ICANN; they are having enough issues with root scaling without throwing this into the mix. Root servers are going to have enough on their plate in the coming years with DNSSEC, expanding IPv6, and an increasing number of TLDs, without ordering up an O(n) increase in the amount of DNS traffic. I really wish the Google engineers had talked to a few people at IANA / ICANN before launching this kind of silliness.


    People with asymmetric routing will have problems, because the DNS server will serve up answers depending on the source routing, whereas the customer will usually be more concerned about the destination routing. This might be a "nice" idea for content providers, but for network, security, and privacy engineers it will be very painful.

    Local decisions should be made by locally informed systems; this is the basis of routing, and it should be the basis of DNS. This is how distributed systems work. If it is not broken, do NOT fix it.

  17. First, my location may be private. Please ask first. And the answer is 'no', until I've gotten to the service I wanted and can choose their terms of service, ok?

    Second, what if my address resolves to a corporate router somewhere. I know, that's good enough for load balancing and shortest-path calculations.

    Third, don't add unrelated features to DNS. Location is important to the resource, but not to the requester. Leave DNS alone, unless you're improving security, which this does NOT do.

    Really, this is not good for us. It's marginally good for providers, who should do it some other way.

    And don't get me started on the political and human-rights ramifications of this. Not well thought out, gang. It solves a minor problem and causes many more...

  18. I am a DNS system administrator at a mid-size ISP. At first I was skeptical about this proposal based on the comments here. Taking an end-user customer's perspective, I just read the Internet draft, and at this point I think it sounds reasonable. The privacy fears over a network address as tight as a /24 will probably not match a production deployment, since DNS admins are going to want as few records cached as possible. I don't think there is a valid "privacy" concern here anyway, as it is not a secret to the owner of the authoritative server that a certain IP netblock is accessing their resources.

    Hopefully BIND and other DNS servers will add new configuration directives that will enable DNS admins to populate data about recursion clients' subnets and specify the netmask for each that is to be specified in this protocol. Adding that configuration isn't really more work for DNS admins because typically we already have to configure new netblocks to un-block recursive queries for those clients.

  19. Only capturing the first 3 octets of the IP address still narrows the request down to having originated from a group of 256 possible addresses. That's pretty useful to someone wanting to track your online behavior. Add to that the recent work done by the EFF showing how you can use non-IP-based information from the web request (i.e. user agent, accept-encoding, etc.) to create a "fingerprint" of a visitor, and you have everything you need to track user behavior pretty darn accurately. Smells fishy to me.

  20. Ok, I really don't see what all the hype about privacy is with this. Sure, the DNS server gets the first 3 octets of your IP, which would not have been exposed to it before, since it would only get your resolver's IP.

    With this, though, it would pass on the first 3 octets of your IP, which doesn't really matter: when you actually go to, let's say, the website you were making the DNS request for, it would have your FULL IP address... Imagine that.

    My opinion: I think this is a good idea.

  21. A great idea Google! As someone who operates distributed servers it would be great to point people to their closest server so they get a faster response time.

    It's a bit silly for people to be worried about 3/4 of their IP address being exposed - as soon as that DNS request is completed they'll be sending their full IP address to the website provider to view the website!

    There are a lot of comments here from people who don't understand how DNS, TCP/IP, NAT, or satellite connections work - a lot of misinformation.

    Go read the protocol standards and you'll find that Satellite and private ip addresses (10.*.*.* etc that have to be NAT'd to use the internet) will work with this.

    If you think that your DNS queries should be private, you need to get off the internet. DNS requests have to be passed to public servers in order to get the information you need.

  22. DNS is not broken. DO NOT MESS WITH IT for a slight performance gain for some (very few) users. If you really need the geographical data, get it on the actual network connection and implement a redirect in the application. Or alternatively require the people to use the Google DNS for maximum performance.

    Doing this by a DNS protocol change is entirely the wrong way and I can only attribute it to mental laziness.

  23. @arno I'd say DNS is the right place to do it - doing it in DNS means you won't have to make a connection and then be redirected to a second server. Redirecting slows down the response time for the user, resulting in more connections and increased network load.

    Google aren't saying DNS is broken, just that it can be improved with a fairly minor extension.

    Granted, it won't be of use to the 99% of websites that are hosted on a single server. But for any website that is on more than one server (and especially those in a cloud) this will make the user's experience so much better.

    Ever had to wait for a website to load because it's coming from the other side of the globe? This DNS change will help.

  24. Arno: This is not messing with the DNS Protocol, it is simply extending it. Anycast is "messing with dns," as is DNSSEC, by your argument. Invalid.

    Jared Mauch: (Hi) That was my first thought, actually..

    All: There are very few "privacy implications" to this - if you don't want your Internet activity monitored, unplug your computer. I'm a privacy advocate, but the first three octets of my IP are nothing.
    If I'm looking up a site in a DNS request, guess what: not only is my *full* IP going to be logged once I connect, but browser information, hardware platform, and much other info will be sent along too -- complaining that this extension will violate privacy is kind of a silly attempt at a point.

    Allowing your ISP (if they don't already sell your data and habits) to send this (essentially) geo-data to a nameserver so that it can direct you to a server "closer to you" is brilliant, and I'm glad Google and the guys at UltraDNS see and support this [full disclosure: I use UltraDNS].

    I'd like to see Vixie's point of view on this, but I doubt he'll be posting on blogspot anytime soon ;)

  25. Privacy, schmivacy. Wake up and embrace reality. Privacy is an illusion.

    But this concoction of an extension to the DNS protocol looks like an opportunistic move meant to reduce the pains and aches of a corporation's logistics. Doesn't feel like it's in the spirit of the internet.

    With my 50,000 foot level view of the mechanics of the internet, the benefit of the proposed change eludes me. But you and I are not a part of the internet draft review team, so why don't we wait and see what they have to say about this. I hope they judge the proposal on its merit and dismiss it promptly. Or else, we'll have created a precedent for a series of ridiculous tuning proposals for the internet in an increasingly narcissistic community.

    Like Arno and others said, keep your redirects higher up the protocol stack. And stop whining about how much better off we would all be if we didn't have to create a TCP connection to redirect until you've come up with thorough stats representative of the internet at large.

    Man, people these days.

  26. Re: jyaif

    "Even if just Google uses this protocol, 70% (at least) of the internet users will gain time.

    You don't get it, do you? What proportion of Internet users use a DNS resolver that's not in their own country? 0.01%? Less? What percentage of websites are distributed across multiple countries? 0.0001%? Less? Multiply those two figures together and what do you get? I can guarantee you it's nowhere near "70%".

    The tiny minority of sites that want to run from multiple countries can either deal with the issue with anycast, or deal with it at the HTTP level. There are many possibilities. It doesn't require a change to DNS.

  27. @MickeyC it's not just about the percentage of sites but also about the amount of traffic they get.
    Also, the big sites are not only distributed across countries but also across zones within the same country (Google, Amazon, eBay, Yahoo, etc.).

    Still, I think they could do it using even 2 bytes instead of 3, so people would be less concerned about privacy.
    Also, the privacy concern here is not about the sites where the user ends up, but about all the intermediary DNS servers that forward a query. Now only the destination site (well, actually some others too, like the ad servers that post ads on the site, and every device between you and the destination sites or ad servers) knows who you are, but after this is implemented, a lot of other third parties that you can't really track will know too.

    Someone mentioned this should be opt in instead of opt out and I fully agree with this.

  28. mihai:

    Most people are using a DNS server in the same country that they are in, so Google already makes a good guess about which server to point them to. None of these people will benefit.

    That drops the 70% claim to something far far below 1% even if every single person on the Internet is using Google all the time...

    It would not be feasible to make such a system opt in. They should just use one of the many other methods available to get the user connecting to a close server, rather than fudging DNS.

  29. Thanks all for the lively debate! We wanted to clarify a few points below.

    Regarding privacy concerns: Every time you visit a site, your browser does a DNS lookup followed by many HTTP connections to fetch the various components of the page (images, frames, or even ads or popups). With each HTTP connection, you are sharing your full IP address. What this draft proposes is giving only the top 24 bits (a partial amount) of this information earlier on in the process, so the DNS server can make better decisions.

    Regarding concerns about modifying the DNS protocol: This proposal is optional. It does not modify the DNS protocol itself but uses EDNS0, a mechanism that is already part of the DNS protocol that allows new extensions to be developed and added. It is similar to having a new header in the HTTP protocol; it can be used and implemented, but is not required.

    Regarding concerns about root or TLD servers: Root servers, ICANN, and owners of TLD servers will not see any increase in load if the specifications are implemented correctly. An improper implementation will just send a few extra bytes with client-ip information attached. There will be no increase in the number of queries or traffic generated. The root and TLD servers will ignore the EDNS0 option, returning a result that will be cached as always.

    Regarding concerns about caches and increased load on recursive resolvers: if you run a recursive resolver that handles only a few networks, or networks that are topologically close to each other, you will have no need to enable this extension. In contrast, if you are running open resolvers or resolvers serving many different networks, chances are that to reduce the latency experienced by your users, you have already invested resources in having multiple resolvers in different locations and in finding ways to share or duplicate the contents of your caches. The proposed extension allows recursive resolvers to clearly see which results are localized and for which networks the results can be cached. This will allow resolvers to implement smarter caching algorithms, make better decisions about how to cache results, and reduce latency by having more precise information about the user's location early on in the process, rather than only once an HTTP connection has been opened.
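    The caching idea above can be sketched as follows. This assumes, as the draft describes, that the authoritative server's response indicates the network scope for which its answer is valid (a scope of 0 meaning "same answer for everyone"); the function name and shape are ours, for illustration only:

```python
import ipaddress

def cache_hit(cached_subnet: str, scope_len: int, client_ip: str) -> bool:
    """A cached answer is reusable for a new client if that client falls
    inside the network scope the authoritative server said the answer
    applies to. A scope length of 0 means the answer is global."""
    if scope_len == 0:
        return True
    scope = ipaddress.ip_network(f"{cached_subnet}/{scope_len}", strict=False)
    return ipaddress.ip_address(client_ip) in scope

# An answer scoped to a /16 serves any client in that /16:
print(cache_hit("198.51.0.0", 16, "198.51.200.9"))  # -> True
# ...but not clients elsewhere:
print(cache_hit("198.51.0.0", 16, "203.0.113.9"))   # -> False
```

    The point is that localized answers are cached only as narrowly as the authoritative server requires, while global answers stay cached exactly as they are today.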

  30. Ha -- I was going to say the same thing, thanks @Julian. I'm all for privacy, but the full IP address will be visible as soon as the connection is made anyway. Seems to me this step simply makes sure the whole connection starts smoother/faster/better, right?

    Disclosure comment -- I work for Neustar, and think this is pretty cool.

  31. @jrishaw et al: Yes, I get the feeling that people don't understand that their IP address is given to Google when they go to google. However, for people using things like proxies or Tor, having the IP address go up the ISP chain is an information leak. Of course, it always was. This just makes the leak a little bigger.

    For people in free countries and with a rule of law and not half-trying to hide their tracks, this proposal will help make things faster. That is 99.9% of the users in the US/Europe.

    For a very small number of people that use a proxy for HTTP but not for DNS, this will be a problem. People using script-kiddie-level privacy tools are the ones for whom this might create an information leak (to, say, Chinese authorities -- though their servers are likely already passing IP info around). For people that really know how to hide their tracks it will make no difference.

  32. There is no need for this DNS extension, as niche networking vendors ALREADY offer appliances which globally load-balance DNS requests with very low TTLs and can also act as the Authoritative Server for zones. There are already companies out there which specialize in serving key data for Geo-LBing, anyone ever heard of

    Can Google focus their efforts on more important DNS-related items, such as DNSSEC, which can mitigate spoofing and man-in-the-middle attacks by implementing chains of trust between authoritative and downstream recursive DNS servers?

    Hmm? Makes one wonder what settings they already have inside their Chrome browser to track one's www surfing trends?


  33. This comment has been removed by the author.

  34. This appears to be a good and important improvement if you want cloud services to be faster, cheaper, and greener. Those not directly affected can perhaps expect some performance improvement from secondary effects, when the backbone networks are less filled with cross-country traffic that could instead go local. Let's allow some evolution in the internet protocols. Refusing change is not the way to make the internet's tubes better.

  35. And what about privacy? It will make it easy for the governments to control their citizens. Google first makes a lot of noise on the Human Rights Issue in China and then suggests to make the job of repressive regimes even easier! It is disgusting :(

  37. Will this enhance Google Caffeine and Google Local Search? Is that the real reason to want part of the user's IP address and Google wanting the ability to route users to specific servers?

  38. Why use the IP address? How will this work in IPv6? How much should be sent then? Why not a country/state code instead? (us/nj) (us/ca) (dk) (de) I would believe the ISP already knows where the customer is located... And it ought to be possible somehow not to send that information.

  39. A better solution: just add optional location information (for example, country + city) into the DNS request. If the forwarder has received a location from the requestor, it will send it to the authoritative nameservers as-is. If it got nothing, it will send its own location. In most cases the location will be that of the local provider's forwarder nameservers. Sometimes, it will be from the real client.

  40. Truri, Bell - location-based DNS responses are based on the network location of the client, not the geo-location.

    I.e. if I'm on a mobile device, then I want to be redirected to a server near my network operator's data link, not to a server near where my device happens to be.

    The IP address is *exactly* the right information to use for this, no other data would give a correct result.

    For people saying that there are already companies who offer this service: that only works if the authoritative nameservers receive the request directly from the client, and it breaks when a global DNS proxy is being used (e.g. ISPs' DNS servers, Google Public DNS, OpenDNS). This proposal is to fix that, and to allow the same service to be implemented correctly (by Google and the existing companies) when the request is coming through a non-authoritative nameserver.

    For IPv6 - it's specified in the link.