I recently read a blog post entitled "Google and Microsoft Cheat on Slow-Start. Should You?" The article points out that Google has an initial congestion window of 9 packets, and Microsoft's value is even larger, while most websites have a value between 2 and 4. All of this is interesting, but the article goes on to accuse Google and Microsoft of "cheating," citing a "violation" of RFC 3390. Although the tone of RFC 3390 does seem to encourage this sort of reaction, I think that this is an unfortunate attitude.
First, we should be encouraging people to develop protocols, not to keep them stagnant. Holding strictly to decade-old values (4 KB initial windows were proposed in 1998) does not necessarily do anyone any favors. RFC 3390 mentions that this particular value of 4 KB was tested and found not to cause additional congestion on a "28.8 bps dialup channel." Twelve years later, when most Americans have links that are orders of magnitude faster, shouldn't this be reconsidered?
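For concreteness, RFC 3390's upper bound on the initial window is min(4*MSS, max(2*MSS, 4380 bytes)). A small sketch (the function name and variable names are my own, not from the RFC) shows why most sites end up in the 2-4 packet range:

```python
def rfc3390_initial_window(mss: int) -> int:
    """Upper bound on the initial congestion window, in bytes, per RFC 3390:
    min(4*MSS, max(2*MSS, 4380 bytes))."""
    return min(4 * mss, max(2 * mss, 4380))

# With a typical Ethernet MSS of 1460 bytes, the cap works out to
# 4380 bytes, i.e. 3 full-sized segments.
print(rfc3390_initial_window(1460))          # 4380
print(rfc3390_initial_window(1460) // 1460)  # 3
```

With a smaller MSS (say, 536 bytes), the 4*MSS term dominates and the cap is 4 segments, which accounts for the range quoted above.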
Second, I am bothered by the use of the word "cheating," which implies that a larger initial congestion window would help the perpetrator to the detriment of all other users. Although this may be true over some specific link, in general web sites are motivated to pick a good value. If the value is unnecessarily low, then web pages load too slowly, and if it is too high, then web pages also load too slowly (due to dropped packets). If web sites are trying to pick the optimal value, should this be considered cheating?
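The "too low means too slow" half of that tradeoff is easy to quantify with a simplified model of slow start (my own sketch; it ignores losses, delayed ACKs, and congestion avoidance): the window roughly doubles each round trip, so the initial window determines how many round trips a small page needs.

```python
import math

def round_trips(segments: int, initcwnd: int) -> int:
    """Round trips needed to send `segments` packets under idealized
    slow start, where the window doubles every round trip."""
    sent, window, trips = 0, initcwnd, 0
    while sent < segments:
        sent += window
        window *= 2
        trips += 1
    return trips

# A 60 KB page is about 43 segments of 1460 bytes each.
segments = math.ceil(60 * 1024 / 1460)
print(round_trips(segments, 3))  # 4 round trips with a 3-segment window
print(round_trips(segments, 9))  # 3 round trips with a 9-segment window
```

On a 100 ms path, that one saved round trip is 100 ms shaved off every fresh connection, which is a plausible reason for Google's choice; the penalty for going too high (drops and retransmission timeouts) is what this model leaves out.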
I think we should try to foster an attitude that is positive toward experimenting with improvements to Internet protocols, as long as they retain backwards compatibility and don't risk causing catastrophic problems.