Saturday, December 18, 2010

The End

One of the things I am taking away from this class is what I have learned about writing a good research paper. We have read a lot of papers, reported on them to the class, and asked and answered questions about them. Throughout the semester Dr. Zappala has pointed out aspects of good writing, which has been very helpful. He has especially commented on what makes an introduction useful.

He has taught us how to find new papers, how to break into new research areas on our own. He has taught us, through the activities of this class, how to separate a research area into parts, how to begin to comprehend the parts and their relationship to the whole. I have learned a lot about networking, but perhaps more importantly I have learned a lot about writing and research.

The interactive format of the class has been very productive and fun. Having the consistent experience of presenting papers to each other was, I thought, a great use of class time. The instructor jumped in to help us a lot, but that was much better than being left out of the teaching experience. We really do learn by doing.

Think by Writing

I am the polar opposite of a blogger. So why am I blogging? Because it is required for this class. I must admit it may have been a good experience. I have used this blog as an opportunity to think about a lot of different things relating to computer networking. Writing about my thoughts has helped me to think more clearly. I know that this is a well understood principle, but it really has helped me. Blogging throughout the semester helped me to formulate the ideas leading to my research proposal, just submitted at the end of the semester.

Writing isn't my favorite activity; talking is much easier for me. All this writing is probably making writing easier, and perhaps more enjoyable. See, look at me, I'm blabbering on and on to an invisible audience. Save yourself. Please stop reading now while you can. If you are still reading one of two things might be true. Either I have become really good at blogging, or you are putting together my grade for this class.

Tuesday, December 14, 2010

Deployment, Deployment, Deployment

A major recurring theme in my study of computer networking is overcoming barriers to deployment. It seems that just about every proposal must address this issue.

By design, the Internet is something that everyone can use. It is the network of networks. The primary objective which motivated the original design of the Internet was to tie all types of different networks together into one network. Since the Internet is designed for such universal use, it is a special challenge to deploy any sort of substantive change.

Any change which might break compatibility of systems currently communicating over the Internet faces a steep uphill battle, no matter how green the grass might be on the other side. It seems that improvements do come, but require some sort of non-disruptive deployment path.

Deployment tends to be the domain of engineers, rather than researchers. However, it seems to me that deployment is central to research in this field.

Additional Header Information

Headers are used to convey certain information about a packet. There are situations where additional information might be helpful, beyond that originally provided in the header.

One example of providing additional header information is explicit congestion notification. Routers mark headers with congestion information which can then be used by senders to avoid congesting the network.
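To make this concrete, here is a minimal Python sketch of the marking decision an ECN-capable router makes. The two-bit codepoints are the ones defined in RFC 3168; everything else is my own simplification, not a real router implementation.

```python
# ECN codepoints from the IP header (RFC 3168)
NOT_ECT = 0b00  # sender does not support ECN
ECT_1   = 0b01  # ECN-capable transport
ECT_0   = 0b10  # ECN-capable transport
CE      = 0b11  # congestion experienced

def router_mark(ecn_bits, congested):
    """Instead of dropping, a congested router marks ECN-capable packets.

    The receiver echoes the CE mark back to the sender, which then backs
    off as if the packet had been lost.
    """
    if congested and ecn_bits in (ECT_0, ECT_1):
        return CE       # mark the packet rather than dropping it
    return ecn_bits     # otherwise leave the codepoint unchanged
```

The interesting part is that the router only marks packets whose senders have declared themselves ECN-capable; legacy traffic still gets drops.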

Another example is NetFence. In this proposed solution to the problem of DoS attacks, a special NetFence header is inserted between the IP and TCP headers. The purpose of this header is to facilitate communication between routers and hosts, aimed at minimizing damage from malicious hosts.
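As a toy illustration of the shim-header idea (my own sketch; the field names here are hypothetical and not the actual NetFence wire format), a router stamps feedback into a header sitting between IP and TCP, and hosts are expected to obey it:

```python
# Hypothetical shim header between the IP and TCP headers.
# Routers write into it in transit; hosts read it on arrival.

def router_stamp(packet, under_attack):
    """A router on the path updates the shim header on packets it forwards."""
    if under_attack:
        packet["shim"]["rate_limit"] = True   # tell the sender to slow down
    return packet

# A packet modeled as nested dicts: ip | shim | tcp
pkt = {"ip": {}, "shim": {"rate_limit": False}, "tcp": {}}
pkt = router_stamp(pkt, under_attack=True)
```

The point of the sketch is only the layering: because the shim sits between the network and transport headers, the 'middle' can read and update it without touching the end-to-end transport state.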

In these examples, problems related to congestion and security are addressed by adding information to packet headers as they travel through the network. This observation is interesting to me. I am starting to see headers as more than just a place to store static information, but rather as a means for routers and hosts to communicate with each other regarding the packets they are transporting.

Smart 'Middles'

NetFence, discussed in class, is a proposed solution to the problem of DoS attacks. It departs from previous work in that it places the 'middle' in the first line of defense against these types of attacks, rather than the 'ends'.

We have already learned that the 'middle' has access to important information that can be very difficult for the 'ends' to infer, such as the total number of flows at a bottleneck link and the capacity of that link. This type of information is easily accessible at the bottleneck router, but can be very inaccessible to the affected senders. Such information can make it possible for senders to avoid congestion.
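The asymmetry is striking: the computation the bottleneck router would need is trivial, while the senders can only estimate the same quantity indirectly through loss and delay. A sketch (my own, not from any particular paper):

```python
def fair_share(link_capacity_bps, num_flows):
    """Per-flow fair share at a bottleneck, assuming equal demand.

    A router knows both inputs exactly: it can count the flows crossing
    the link, and it knows the link's capacity. A sender knows neither.
    """
    return link_capacity_bps / num_flows

# e.g. 4 flows sharing a 10 Mbps link get 2.5 Mbps each
share = fair_share(10_000_000, 4)
```

TCP's congestion control is, in effect, a distributed attempt to converge on this number without ever being told it.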

Security is yet another concern which demonstrates the need for smart 'middles'. This is all interesting in light of the end-to-end principle. This principle might be interpreted as stating that the 'middle' should not be replicating work which is better done at the 'ends'. However, there is significant work that the 'ends' are ill-equipped to do.

If, as discussed in NetFence, a sender and receiver collude to overwhelm a link, the 'ends' are both malicious and the target of the attack is the 'middle' itself. Certainly in such a case, initiative is needed in the 'middle'.

Thursday, December 9, 2010

Bufferbloat and TCP Performance

I ran into two articles by Jim Gettys, entitled The criminal mastermind: bufferbloat and Whose house is of glasse, must not throw stones at another. In a nutshell, the point is that operating systems, home routers, cable modems, etc. are buffering too much data, which hurts the performance of TCP because senders don't receive timely notification of congestion. His results show that the performance effects of poorly tuned buffer sizes are dramatic and widespread. It's an interesting read.
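The arithmetic behind the problem is simple enough to sketch: a full buffer draining at the uplink rate adds queueing delay equal to its size divided by that rate. The specific numbers below are illustrative, not taken from Gettys' measurements.

```python
def queueing_delay_ms(buffer_bytes, uplink_bps):
    """Worst-case delay added by a full buffer draining at the uplink rate."""
    return buffer_bytes * 8 / uplink_bps * 1000

# e.g. a 256 KB modem buffer on a 1 Mbps uplink adds about 2 seconds
# of delay before TCP sees a single drop
delay = queueing_delay_ms(256 * 1024, 1_000_000)
```

Since TCP relies on loss (or ECN marks) to detect congestion, all of that delay accumulates before the sender learns anything is wrong.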

Security and Laziness

I read that Morris had suggested the theoretical possibility of an Initial Sequence Number attack in Bell Labs Computer Science Technical Report #117, February 25, 1985. In 1995, Kevin Mitnick carried out this attack (described in Tsutomu Shimomura's book Takedown and in many other resources online). I find it very interesting that security problems so often go unfixed until after they're exploited.
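The core of the attack is easy to sketch. Early TCP stacks generated Initial Sequence Numbers by incrementing a counter by a fixed amount, so an attacker who observed one ISN could predict the next and forge the final packet of a handshake with a spoofed source address. The step size below is illustrative, not that of any specific stack.

```python
class PredictableISN:
    """A deliberately weak ISN generator of the kind Morris warned about."""

    def __init__(self, start=0, step=64_000):
        self.isn = start
        self.step = step

    def next_isn(self):
        self.isn = (self.isn + self.step) % 2**32
        return self.isn

gen = PredictableISN()
sample = gen.next_isn()                 # attacker observes one ISN...
guess = (sample + gen.step) % 2**32     # ...predicts the next one...
actual = gen.next_isn()                 # ...which the victim duly generates
```

Modern stacks defeat this by randomizing the ISN, which is exactly the single-line kind of fix that tends to sit unapplied until someone like Mitnick forces the issue.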

I had this experience once as a system administrator in the CS department. I had noticed that a course's submission system was insecure and that students could steal from and/or overwrite other students' submissions. I emailed the professor about this in November 2003 and again in February 2004 when the problem still had not been fixed. In March 2005, a student was caught cheating, and it turned out that the student had exploited the security problem that I had reported more than a year earlier. If the professor had taken half an hour to carry out the single-command fix that I had proposed, then this student might not have cheated.

Another recent example of this is website encryption. For years, it has been common knowledge that unencrypted HTTP sessions can be hijacked. However, almost no major websites used SSL by default. Finally, Firesheep made people realize that this was a real problem (even though it had already been a real problem for years). Somehow, people feel justified in ignoring security problems if they think the exploit sounds hard or unlikely, even if it is neither.