Wednesday, September 1, 2010

First Impressions on a Data-Oriented Network Architecture

The paper assigned for tomorrow is A Data-Oriented (and Beyond) Network Architecture. I've decided to share some of my first impressions, though my opinions are subject to change after our discussion in class tomorrow.

My first thought is that the paper doesn't propose as much of a "clean slate" as its introduction promises. Is this architecture really intended to completely replace DNS? Since DONA names are flat and not human-readable, it seems to say that I would need a bookmark to get to any site (p.3). How would I get to a search engine to find other sites? How would I get to a site that isn't indexed by search engines? How would I follow up on, say, a radio advertisement? Additionally, object names are based on a hash of the principal's public key, so what happens when the key expires (keys are typically valid for about a year)? Rotating the key changes the hash, and with it every name the principal has ever handed out. If users consider DONA names less usable than DNS names, that alone would limit the new system's usefulness.
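To make the key-rotation concern concrete, here is a minimal sketch of DONA-style naming as I understand it from the paper: a name is P:L, where P is a cryptographic hash of the principal's public key and L is a label chosen by the principal. The specific hash function and the key material below are placeholders of my own choosing, not details from the paper.

```python
import hashlib

def dona_name(public_key: bytes, label: str) -> str:
    """Build a DONA-style flat name P:L (hash choice is illustrative)."""
    p = hashlib.sha1(public_key).hexdigest()  # P: hash of the public key
    return f"{p}:{label}"                     # flat and not human-readable

key_2010 = b"hypothetical public key, valid through 2010"
key_2011 = b"hypothetical replacement key after expiry"

print(dona_name(key_2010, "index.html"))
print(dona_name(key_2011, "index.html"))
# The two names share nothing: rotating the key changes P, so every
# previously distributed name (bookmark, link, printed ad) goes stale.
```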

On a lower level, the paper implies that HTTP would be rebuilt on top of DONA, but it gives too few specifics for me to know what that would look like. At one point it says the URL would no longer be needed because naming is handled by DONA (p.5), but at another it says a principal may choose to name only her web site, or to name the site and each page within it (p.3). It's hard to tell how revolutionary the design intends to be. Would it work alongside DNS and HTTP, or would it really be a complete replacement?
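For what it's worth, here is a speculative sketch of those two naming granularities from p.3, built on the paper's FIND primitive. The find() stub and everything about how data comes back are my guesses; the paper doesn't pin down how HTTP semantics would map onto it.

```python
def find(name: str) -> bytes:
    """Stand-in for DONA's FIND: a real client would anycast the request
    toward its resolution handler and get data from the nearest copy."""
    return b"<html>...</html>"  # stubbed response for illustration

# Granularity 1: the principal names only her site, and individual
# pages are selected by some application-level mechanism on top.
site = find("P_alice:www")

# Granularity 2: the principal names each page, so every page gets
# its own flat name where a URL would have been.
page = find("P_alice:www/papers/dona.html")
```

Either reading is consistent with the text, which is exactly the problem: the protocol that replaces the GET-plus-URL pair is left unspecified.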

My biggest concern is that the design is data-oriented instead of service-oriented. The paper was published in 2007, but even by then the "Web 2.0" phenomenon was in full swing. The feasibility section estimates the number of public web pages at around 10^10, but personalization could multiply that by the number of users, so I think the estimates in the feasibility analysis (p.9) are orders of magnitude too low. I could imagine a company like Google or Facebook easily producing 10^12 to 10^16 unique data objects per day. And since there is no discussion of cookies, AJAX, or even latency, I just don't know what to make of the feasibility claims. Delivering static content may well be better in the new architecture, but what if a video producer chooses to create a separate stream for each user for watermarking? The design seems focused on completely static content.
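As a quick back-of-the-envelope check on that worry: suppose each registration costs a top-tier resolution handler some fixed amount of state. The 42 bytes per entry below is purely my assumption for illustration; the point is how the total scales with the object count.

```python
BYTES_PER_ENTRY = 42  # assumed per-registration state; illustrative only

def rh_state(num_objects: float) -> float:
    """Bytes of registration state if a tier-1 resolution handler
    must hold one entry per named object."""
    return num_objects * BYTES_PER_ENTRY

print(f"{rh_state(1e10):.1e} bytes")  # paper's static-web estimate: ~420 GB
print(f"{rh_state(1e16):.1e} bytes")  # with personalized objects: ~420 PB
```

At the paper's 10^10 the state is manageable; at the personalized scale it is not, and that's before counting the churn of registering and expiring short-lived objects.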

Of course, if HTTP and DNS are intended to stay in their current forms, then many of my concerns may be irrelevant: I could see DONA being the basis of a great worldwide CDN. However, the promise of "a clean-slate redesign of Internet naming and name resolution" leaves me with a lot of big questions. Hopefully some of these will be answered tomorrow.
