Tuesday, September 28, 2010

Slow Start

I've always wondered how a system with an exponential rate of packet transmission could be called "slow start". Then I saw Figure 3 ("Startup behavior of TCP without Slow-start") from Congestion Avoidance and Control. This congestion-control-free TCP sends as many packets as possible, resulting in huge amounts of packet loss. The graph shows a sequence of sharp saw teeth. In the trace, one sequence of packets was retransmitted four times!

With TCP Tahoe, on the other hand, the trace showed a smooth increase and a steady rate of delivery with few retransmissions. Overall, the TCP with congestion control transmitted data at more than twice the rate because it didn't wastefully retransmit packets. Compared to that near-vertical startup, an exponential curve really does look slow. I am now at peace with the name "slow start".
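To convince myself, I sketched the ramp-up. This is a toy, idealized simulation in Python (made-up parameters, ignoring timeouts, receiver windows, and fast retransmit; not the algorithm as specified in the paper), but it shows how the congestion window grows one round trip at a time instead of the sender blasting its whole window immediately:

```python
# Idealized sketch of TCP slow start: the congestion window (cwnd) starts at
# one segment and doubles each round trip until it reaches the slow-start
# threshold (ssthresh), after which it grows linearly (congestion avoidance).
# The numbers below (ssthresh=32, max_window=64) are made up for illustration.

def slow_start_trace(ssthresh=32, max_window=64, rtts=12):
    cwnd = 1
    trace = []
    for rtt in range(rtts):
        trace.append((rtt, cwnd))
        if cwnd < ssthresh:
            cwnd = min(cwnd * 2, max_window)   # exponential phase
        else:
            cwnd = min(cwnd + 1, max_window)   # linear phase
    return trace

for rtt, cwnd in slow_start_trace():
    print(f"RTT {rtt:2d}: cwnd = {cwnd} segments")
```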

Congestion Control in User Space

I ran into iTCP, which stands for "interactive TCP". The idea is that applications can interact with the TCP layer. A better name might be "layer-violating TCP", but I can't blame the authors for picking "iTCP" instead (it's much catchier). On the one hand, this model seems to add unnecessary complication and opportunities for abuse. On the other hand, there is a perverse sensibility to this approach in the spirit of end-to-end architectures. For example, iTCP makes it easy for a video streaming application to recognize that packets are being dropped and to lower the video quality as a result.

Unfortunately, iTCP is inherently platform-dependent, and it doesn't seem likely that every OS would incorporate this feature. Rather than building this into the TCP implementation of an OS (which would probably slow down network processing for all applications and complicate the APIs), I think it would make more sense to achieve this functionality in userspace for the few applications that need it. A userspace "TCP" library could send and receive packets over UDP and provide all of the hooks and features of iTCP. This implementation would be platform-independent, and although it might be a bit slower than an in-kernel implementation, it would be much more flexible, and it wouldn't affect other applications.
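As a rough sketch of what I have in mind, here is a hypothetical userspace transport class built on a UDP socket, where the application registers a callback that fires when the library suspects loss. All of the names and the loss-detection logic here are invented for illustration; a real library would also need acknowledgments, retransmission, and congestion control.

```python
import socket

class NotifyingTransport:
    """Hypothetical userspace transport over UDP exposing an iTCP-style hook:
    the application supplies a callback that is invoked when the library
    believes packets were lost (detected here via sequence-number gaps)."""

    def __init__(self, local_addr, on_loss=None):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.bind(local_addr)
        self.on_loss = on_loss          # application-level hook
        self.next_seq_expected = 0

    def send(self, data, remote_addr, seq):
        # Prefix a sequence number so the receiver can detect gaps.
        self.sock.sendto(seq.to_bytes(4, "big") + data, remote_addr)

    def receive(self, bufsize=2048):
        packet, addr = self.sock.recvfrom(bufsize)
        seq = int.from_bytes(packet[:4], "big")
        if seq > self.next_seq_expected and self.on_loss:
            # A gap in sequence numbers suggests loss; tell the application
            # so it can react, e.g. by lowering video quality.
            self.on_loss(missing=range(self.next_seq_expected, seq))
        self.next_seq_expected = max(self.next_seq_expected, seq + 1)
        return packet[4:], addr

# Example hook: a video streamer reacting to loss notifications.
def lower_quality(missing):
    print(f"packets {list(missing)} appear lost; dropping to a lower bitrate")
```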

Friday, September 24, 2010

In the Clouds: Networks and Society

We are studying computer networking, and one thing I find interesting is the relationship between computer networking and society. I would say a computer network is highly constrained and simplified compared to a network of people. Yet the similarity may be strong enough that much can be learned. In particular, we might be able to learn more about ourselves.

Since we are in the business of designing computers, we get insights into why we organize things certain ways. Since we are not in the business of designing people, we are not in the same position to get those types of insights. Following this argument, we may learn more about people through computer science, or we may learn more about society through the study of computer networking.

One of the things that inspired this thought was Andrew's description in class of roles and responsibilities in relation to coordination in distributed systems. He took terminology traditionally applied to society and applied it to computer science.

People who study society are typically not computer scientists. Can they learn enough about computer science to make connections to their discipline?

Within-Flow Measurement

The paper "TCP Revisited: A Fresh Look at TCP in the Wild" describes an approach to Internet measurement that does scale to very large numbers of flows. Scaling was my complaint about the first measurement paper we studied together: it requires end-to-end measurements, so it cannot scale. This paper recognizes the need to make within-flow measurements and describes new statistical algorithms for doing so. A few end-to-end measurements were made to validate the new algorithms.

Some might argue that end-to-end measurements are needed in order to get accurate results. That is probably true for any single measurement, but it matters less once you look at the big picture.

For example, suppose I can get very accurate measurements end-to-end. Because the measurements are end-to-end, the number of measurements is necessarily limited, and if the number of measurements is limited, then we have less data with which to make inferences. It may be better to have a large number of less accurate measurements than a small number of highly accurate measurements.
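Here is a quick, hedged illustration of that intuition with toy numbers (nothing to do with the paper's actual data): when estimating a mean, the standard error shrinks like the noise divided by the square root of the number of samples, so a flood of noisy samples can beat a handful of precise ones.

```python
import random, statistics

random.seed(0)
true_value = 100.0   # the quantity we are trying to estimate (made up)

def estimate(n_samples, noise_stddev):
    """Average n_samples noisy observations of true_value."""
    samples = [random.gauss(true_value, noise_stddev) for _ in range(n_samples)]
    return statistics.mean(samples)

# A few very accurate end-to-end measurements...
few_accurate = estimate(n_samples=10, noise_stddev=1.0)
# ...versus many less accurate within-flow measurements.
many_noisy = estimate(n_samples=100_000, noise_stddev=20.0)

print(f"10 accurate samples:     {few_accurate:.3f}")
print(f"100,000 noisy samples:   {many_noisy:.3f}")
# Standard errors: 1.0/sqrt(10) ~= 0.32 versus 20.0/sqrt(100000) ~= 0.06,
# so the large noisy sample ends up closer to the true value on average.
```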

Thursday, September 23, 2010

Coordination in Distributed Systems

I have recently read several papers on services that provide coordination for distributed systems. This turns out to be a fascinating area, and as I read each paper, I found myself getting sucked into more. I group these papers into two different categories: one type involves services that provide mechanisms for distributed coordination (including ZooKeeper, Chubby, and Sinfonia); the other involves protocols that guarantee certain properties despite various types of failures (such as Paxos, PBFT, Aardvark, and Zab).

The high-level papers describe specific services for distributed coordination. A group of servers (five seems to be a popular number) communicate with each other and agree on state. Clients communicate with one or more of these servers to read or modify this global state. The main property is that if one or two of the five servers fail, the others can keep the service running, even if the "leader" fails. Various guarantees about consistency may be provided--not only is consistency difficult to achieve in the event of different types of failures, but being able to tolerate failures usually requires sacrificing performance. I was very impressed by the work that has been done in this area.
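For my own notes, the arithmetic behind "five servers tolerate two failures" is just the majority-quorum rule (a general sketch, not any particular system's protocol): any two majorities of n servers overlap, so the service can stay consistent as long as a majority survives.

```python
# Majority quorums: with n servers, a quorum is any majority of them.
# Any two majorities must share at least one server, which is what lets
# the survivors agree on the latest state after failures.

def max_tolerated_failures(n_servers):
    quorum = n_servers // 2 + 1          # smallest majority
    return n_servers - quorum            # servers that can be down

for n in (3, 5, 7):
    print(f"{n} servers: quorum size {n // 2 + 1}, "
          f"tolerates {max_tolerated_failures(n)} crashed server(s)")
# 5 servers -> quorum of 3, tolerating 2 failures, matching the example above.
```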

The low-level papers were also fun. Protocols are designed to withstand Byzantine faults, which seem to encompass just about anything that can go wrong in a distributed system, including crashed servers, lost or repeated messages, corrupted data, and inconsistencies. The Practical Byzantine Fault Tolerance (PBFT) algorithm, introduced in 1999, seems to have launched a whole range of fascinating research. It reminds me of security in that cynical thinking is critical.

Friday, September 17, 2010

Lost in a Maze of DHTs

I enjoyed reading about Chord, and I'm impressed by the ideas behind Distributed Hash Tables. This area of research seems to be less than 10 years old, but as far as I can tell, there are dozens of different DHT designs and systems. In trying to make sense of all of this, I came across a recent 24-page survey that covers both design and applications, appropriately named Distributed Hash Tables: Design and Applications.

It seems like a great intro, but I feel like I'm missing something. As far as I can tell, CAN, Chord, Pastry, and Tapestry were all introduced at about the same time in 2001, and Kademlia came out a year later. I still haven't read enough to know whether one is much better than the others. If one had been introduced a year earlier than the others, would there still be as many, or would the others simply have built on the work of their predecessor?

Great, Except That it Doesn't Scale

The paper, "End-to-End Internet Packet Dynamics", seems to make a great contribution to the study of networks. It applies a new method for analyzing packet dynamics, which proves very effective. The new method is to install a service on selected 'ends' of the network and then to pass TCP traffic between each pair of 'ends'. Obviously much can be discovered using this approach. So, is there an even better way to study packet dynamics? In particular, is there a method which scales better than this one, which is quadratic in the number of 'ends'? Because this measurement method scales poorly, it necessarily limits the amount of measuring that can be done. Is it possible to make similar measurements in such a way as to support large-scale, real-time measurements? For example, how effectively can a single router be used to deduce or infer the dynamics of packets passing through it?
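To make the scaling concern concrete with simple arithmetic (my own numbers, not figures from the paper): measuring every pair of N 'ends' means N(N-1)/2 paths, which grows quadratically.

```python
# Number of end-to-end paths to measure among N measurement sites.
def pairs(n):
    return n * (n - 1) // 2

for n in (10, 100, 1000, 10000):
    print(f"{n:6d} ends -> {pairs(n):12,d} end-to-end paths")
# Doubling the number of ends roughly quadruples the measurement work,
# which is why a from-the-middle approach is appealing.
```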

Since I study Natural Language Processing (NLP), I have been looking for connections between NLP and the internet. The questions asked above suggest a possible connection. In NLP, we are typically trying to infer things by observing only the traffic that flows between two or more people. The traffic I refer to is language: speech or text, for example. We typically do not have direct access to the thought processes and intents of the people who either send or receive the traffic. These people are like the 'ends' of the network. In NLP, rather than making end-to-end measurements, as done in this paper, we do our measuring from the middle.

Does it Scale?

The paper, "Chord: A Scalable Peer-to-peer Lookup Service for Internet Applications", clearly answers what is perhaps one of the most important questions in computer science: "Does it scale?" I like the whole idea of distributed scalable systems. The paper offers a strong theoretical foundation for the proposed method, which scales very well and is robust to node failures. I have been trying to think of some constructive criticism, but have not been successful.
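To get a feel for the core idea, here is a stripped-down sketch of consistent hashing with a Chord-style successor lookup. It has no finger tables, no stabilization, and none of the O(log N) routing that makes Chord scale; it only shows the basic rule that each key lives on the first node clockwise from its position on the identifier circle.

```python
import hashlib
from bisect import bisect_left

def ring_position(value, bits=16):
    """Hash a string onto a 2^bits identifier circle."""
    digest = hashlib.sha1(value.encode()).hexdigest()
    return int(digest, 16) % (2 ** bits)

class ToyRing:
    """Stripped-down consistent hashing: each key is stored on its
    successor, the first node at or after the key's ring position."""

    def __init__(self, node_names):
        self.nodes = sorted((ring_position(n), n) for n in node_names)

    def successor(self, key):
        pos = ring_position(key)
        points = [p for p, _ in self.nodes]
        idx = bisect_left(points, pos) % len(self.nodes)   # wrap around the ring
        return self.nodes[idx][1]

ring = ToyRing([f"node{i}" for i in range(8)])
for key in ("alpha", "bravo", "charlie"):
    print(f"{key!r} is stored on {ring.successor(key)}")
```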

One question I have is, how does cloud computing work today? Is it based on something like Chord? What advantages could a more centralized hash table have over Chord, assuming the centralized solution scales just as well?

Tuesday, September 14, 2010

Making Sense of Packet Dumps

I recently read a paper called End-to-End Internet Packet Dynamics by Vern Paxson. This paper tried to make sense of packet dumps from 20,000 100 KB TCP connections. It's a little old (1999), but I think it does a great job. The task is extremely difficult because of the complexity of the system being measured. Any particular effect could be caused by the packet sniffer, the TCP implementation on the sender or the receiver, or any of the network links or routers in between. And as a passive observer, the analysis program can only guess at the internal state of each component of the system.

I was particularly interested in some of the unexpected effects described in the paper, such as non-FIFO queuing, non-independent loss events, and route fluttering. In the face of such idiosyncratic behavior, I wonder what other bizarre effects have gone unnoticed for years. Occasionally, I'm just amazed that a system as complex as the Internet works at all.
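As a toy example of the kind of inference involved, a passive analyzer might flag reordering the way this sketch does: scan arrivals in timestamp order and report any packet whose sequence number is lower than one already seen. This is a gross simplification of what the paper actually has to do, since real traces also contain retransmissions, duplicates, and clock problems.

```python
def find_reordering(arrivals):
    """arrivals: list of (timestamp, seqno) pairs as seen by a passive sniffer.
    Returns packets that arrived after a higher-numbered packet, which is
    one (simplistic) signature of reordering or non-FIFO delivery."""
    reordered = []
    highest_seen = -1
    for ts, seq in sorted(arrivals):          # process in arrival order
        if seq < highest_seen:
            reordered.append((ts, seq))
        highest_seen = max(highest_seen, seq)
    return reordered

# Toy trace: packet 3 overtakes packet 2 somewhere along the path.
trace = [(0.010, 1), (0.021, 3), (0.023, 2), (0.030, 4)]
print(find_reordering(trace))   # -> [(0.023, 2)]
```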

I'm impressed both with the author's insightful observations and with his acknowledgement that some of the conclusions might be wrong. Unfortunately, as the author acknowledges, many of the measured quantities exhibit extremely high variance, and some of the observations only apply to particular links or operating systems. This analysis is difficult to perform, but it really needs to happen again and again as the Internet continues to evolve.

Monday, September 13, 2010

Tussles on the Internet

David Clark et al. wrote a paper in 2002 called Tussle in Cyberspace: Defining Tomorrow's Internet. This paper, which is related to the idea of an invariant that I mentioned in an earlier post, has even more relevance today than when it was first published. Beginning with, "The Internet was created in simpler times", the paper reminds us that our networks will reflect the conflicts from our society. Competing interests will always result in some sort of conflict, and technology cannot dictate the end results. The paper recommends that architectures be designed with enough flexibility to avoid breaking under social tension.

For example, "conservative governments and corporations put their users behind firewalls, and the users route and tunnel around them. ISPs give their users a single IP address, and users attach a network of computers using address translation." The thought of firewalls and tunnels may make designers cringe with images of trenches in wartime, but the more they try to protect their protocols from being used for evil purposes, the more these designs will be defiled. As stated by the authors: "Do not design so as to dictate the outcome. Rigid designs will be broken."

The authors try (rather pitifully) to keep a neutral stance about the many ongoing struggles on the Internet. However, they eventually break character and devote a section to keeping the Internet innovative and reliable in spite of these tussles. Their recommendation is to "bias the tussle" with open architectures, fault-tolerant designs, and encryption.

Although the outcome cannot be dictated by the designers, I agree that open designs may occasionally succeed at motivating reluctant parties to allow openness. One example that I thought of during my reading is open source software (particularly under the GPL), which is often more expensive to fork than to contribute to. This effect increases with the activity and usefulness of the project. Biasing outcomes is extremely difficult to pull off, but sometimes it works.

Evaluating Architecture Design with Invariants

I recently read Invariants: A New Design Methodology for Network Architectures. This paper defines an "invariant" as a property of a design that limits backwards compatibility, and it contrasts "explicit" invariants, which are designed intentionally, with "implicit" invariants, which are unintentional.

The idea of an implicit invariant is especially interesting. When a design fails to address a need of its users, they will use the system in ways that the designers did not intend. For example, port numbers were intended for the simple purpose of multiplexing connections; however, well-known port numbers are now built into the logic of firewalls and routers.

As the authors acknowledged, the approach is still at an early stage, but it definitely seems like an interesting way to think about architecture.

Friday, September 10, 2010

What is Allowed? What is the Internet?

Today, Travis asked an interesting question: should middleboxes be allowed? It is interesting to me because it touches on the whole idea of what the internet is. I thought the internet was a network of autonomous networks, not the network to rule all networks. My understanding is that middleboxes, generally, are tools employed in specific networks, not in the internet itself.

If there are middleboxes in the internet itself, then I suppose that they provide some service that is necessary for the interconnection of all networks to function properly. At that high level, the interconnection of all networks, the policy must be to place as few requirements as possible on connecting networks. Whether or not a network employs middleboxes obviously cannot be a requirement for all networks. The more requirements there are, the less possible it becomes to connect all networks.

I think the real question is: Do I want my network, or the networks I use, to have middleboxes?

Grasping

I am trying to grasp a new area of study, network architecture. I appreciate Dr. Zappala's approach to helping us do that. So, what have I actually learned? Also, what avenues do I think might be promising for future research? It is interesting to me that as a graduate student I am asked to begin to comprehend an entirely new area of computer science, and at the same time throw out some ideas for possibly contributing to the area. I like the challenge. Maybe that is why I am a graduate student.

I tend to like network architecture ideas which emphasize small interchangeable parts, rather than large-scale integrated solutions. In my mind, large integrated solutions belong to the networks which connect to the internet. The internet itself should be dedicated to providing basic communications between disparate networks.

It seems that many ideas we have studied are an attempt to solve problems of typical users. In my mind, the problems of typical users should be solved by specific networks which cater to typical users. The internet should be treated separately from the networks which connect to it. Otherwise, we treat the internet as a single inflexible behemoth.

Friday, September 3, 2010

Branding

The paper, A Data-Oriented (and Beyond) Network Architecture, proposes to replace well-known internet names, like www.google.com, with long names which are not human-readable. As a newbie to this field, I appreciate the paper for the exposure it provides to related work. Routing by name and anycast seem to be important.

I'm sure there are a lot of things to learn from this work. However, my gut reaction is that the idea of getting rid of human-readable names will never, ever work. Those names are too valuable. An internet name is essentially a brand, and brands can have a lot of value. This issue of internet naming is a financial issue as well as a networking issue. Naming is also an important human-computer interaction issue.

To me, naming is what abstracts the data from the specific host from which it may be obtained. When I type in a name, I expect a service, and I don't care which specific host provides it. I do expect the name to be persistent. And it generally is.

The new approach to naming proposed in this paper is intended to improve persistence of names for data or services. It seems to me that the proposed changes to naming may actually make names less persistent than they are today. The new names are associated with public-private key pairs. This means that if the key changes, then the name is no longer valid.
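For reference, here is roughly what such a name looks like in spirit. This is a hedged sketch of the general self-certifying naming idea (the format and hash choice below are illustrative, not DONA's exact specification): the principal part of the name is a hash of the principal's public key, so rotating the key necessarily changes the name.

```python
import hashlib

def self_certifying_name(public_key_bytes, label):
    """Sketch of a DONA-style name P:L, where P is a hash of the principal's
    public key and L is a label chosen by the principal.
    (Format and hash choice are illustrative only.)"""
    principal = hashlib.sha256(public_key_bytes).hexdigest()[:16]
    return f"{principal}:{label}"

old_key = b"-----BEGIN PUBLIC KEY----- (version 1) -----END PUBLIC KEY-----"
new_key = b"-----BEGIN PUBLIC KEY----- (version 2) -----END PUBLIC KEY-----"

print(self_certifying_name(old_key, "homepage"))
print(self_certifying_name(new_key, "homepage"))
# A different key yields a different principal hash, so the old name no longer
# refers to anything the new key can prove ownership of.
```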

Wednesday, September 1, 2010

First Impressions on a Data-Oriented Network Architecture

The paper assigned for tomorrow is A Data-Oriented (and Beyond) Network Architecture. I've decided to share some of my first impressions, though my opinions are subject to change after our discussion in class tomorrow.

My first thought is that the paper doesn't seem to propose as much of a "clean-slate" as it promises in the introduction. Is this architecture really intended to completely replace DNS? It seems to say that I will need to have a bookmark to get to any site (p.3). How would I get to a search engine to find other sites? How would I get to a site that isn't indexed by search engines? How would I follow up on, say, a radio advertisement? Additionally, if the object names are based on a hash of the principal's public key, what happens when the key expires (typically keys are valid for about a year)? If users consider DONA names less usable than DNS names, then this would limit the usefulness of the new system.

On a lower level, the paper implies that HTTP would be rebuilt on top of DONA, but there aren't enough specifics for me to know what this would look like. At one point, it mentions that the URL would not be needed because the name is taken care of by DONA (p.5), but at another point it states that a principal may choose to name just her web site, or to name her web site and each page within it (p.3). It's hard to tell just how revolutionary the design intends to be. Would it work alongside DNS and HTTP, or would it really be a complete replacement?

My biggest concern is that the design is data-oriented instead of service-oriented. The paper was published in 2007, but even by then the "Web 2.0" phenomenon was in full swing. The feasibility section estimates the number of public web pages on the order of 10^10, but considering that personalization could multiply this by the number of users, I think that the estimates in the feasibility analysis (p.9) are orders of magnitude too small. I could imagine a company like Google or Facebook easily producing 10^12 to 10^16 unique data objects per day. Since there wasn't any discussion of cookies or AJAX or even latency, I just don't know what to think about feasibility. Delivering static content may indeed be better in the new architecture, but what if a video producer chooses to create a separate stream for each user for the purposes of watermarking? The design seems to be focused on completely static content.
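For what it's worth, here is my back-of-envelope reasoning; the per-page personalization factors are my own guesses, not numbers from the paper.

```python
# Rough back-of-envelope: how personalization inflates the object count.
# All multipliers below are guesses for illustration only.
public_pages = 10 ** 10            # the paper's order-of-magnitude estimate

for per_page_variants in (10 ** 2, 10 ** 4, 10 ** 6):
    total = public_pages * per_page_variants
    print(f"{per_page_variants:>9,} personalized variants/page "
          f"-> ~10^{len(str(total)) - 1} objects")
```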

Of course, if HTTP and DNS are intended to stay in their current forms, then many of my concerns may be irrelevant: I could see DONA being the basis of a great worldwide CDN. However, the promise of "a clean-slate redesign of Internet naming and name resolution" leaves me with a lot of big questions. Hopefully some of these will be answered tomorrow.

Worthwhile Objectives

The paper 'The Design Philosophy of the DARPA Internet Protocols', by David Clark, is interesting to me in part because it wasn't written years earlier. DARPA began developing what we now recognize as the internet 15 years before this paper was written. The purpose of the paper is to explain the goals of DARPA's 'internet' research project, or rather the author's view of those goals.

How is it possible that the goals of a research project of such magnitude were not recorded earlier? He doesn't address that question, and it is a bit unbelievable to me. Today, about 22 years later, we are starting a graduate course by studying this paper. That is evidence to me of the importance of clearly understanding goals, or objectives.

I suppose that to push research forward, we will need to identify worthwhile objectives. Which objectives are most worthwhile? Thinking about the internet, the top level objective was to interconnect existing networks. Such an objective certainly was worthwhile, or at least has had great impact on the world. As part of this class I would like to understand which objectives are being pursued and which might prove most worthwhile.

Net Neutrality

Net Neutrality seems to be one of the biggest current issues in Internet architecture. Telecommunication companies seem to think of themselves as providing a unique service, but I think of them as simple utilities.

A recent article on the topic reports that AT&T believes that net neutrality rules should allow "paid prioritization". I'm fine with a corporate network using QoS to shape its internal traffic, but I'm uncomfortable with ISPs doing this with other people's traffic. I have had too many experiences where "cleverness" with traffic shaping caused inexplicable problems. For example, on BYU's network I have seen SSH connections terminated or stalled for no obvious reason, while other types of traffic worked fine. Even the network engineers can't necessarily figure out what is going on.

Rather than ISPs implementing complicated traffic shaping, why not keep the network simple? If a customer is generating enough traffic to cause congestion, the ISP should throttle that customer's traffic irrespective of which protocol is being used. It should charge the customer for the bandwidth being used without trying to extort extra fees.
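Here is the kind of protocol-agnostic throttling I have in mind, sketched as a per-customer token bucket. The parameters are invented, and a real traffic shaper lives in the forwarding path rather than in Python; the point is just that every customer gets the same rate limit whether the packets are HTTP, SSH, or BitTorrent.

```python
import time

class TokenBucket:
    """Per-customer, protocol-agnostic rate limiter (illustrative only).
    Tokens accumulate at `rate` bytes/sec up to `burst`; a packet is
    forwarded only if enough tokens are available."""

    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False    # drop or queue, regardless of protocol

# One bucket per customer; the protocol of each packet never matters.
customer_limits = {"alice": TokenBucket(500_000, 1_500_000)}
print(customer_limits["alice"].allow(1500))   # True until the bucket drains
```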

Above all, ISPs should fairly advertise what service is being provided: if the network is congested enough that there is a consistent need for paid prioritization, then the ISP is not providing the advertised bandwidth to its customers. Instead of "Download speeds up to 15 Mbps with PowerBoost", the fair advertisement might read "Peak bandwidth 15 Mbps, minimum guaranteed bandwidth 20 kbps". Another ISP might advertise "Peak bandwidth 15 Mbps, minimum guaranteed bandwidth 500 kbps" by doing a better job of throttling users that are "hogging" the network. I just don't see how extorting content providers does anything to solve the problem. How could you fairly advertise bandwidth that reflects complicated traffic shaping methods? "Bandwidth of 1 Mbps for google.com; 800 kbps for yahoo.com; 100 kbps for Skype; SSH and BitTorrent traffic have lowest priority; forged reset packets may be whimsically sent..."

NAT already makes it hard enough to try interesting new protocols on the Internet. Do the ISPs really need to make things even worse?