I have summarized the content covered in the Computer Networks course for the midterm exam, focusing on the material from Weeks 1 to 6.
Overall
I enrolled in this course in Fall 2024 expecting to learn the fundamentals of networking in depth while engaging with implementation challenges. The course includes a midterm and a final exam, and since the midterm covers content from Weeks 1 to 6, I have organized the material up to this point. Explaining the content concisely to a third party requires understanding its essence, so I believe this is a good opportunity to solidify my knowledge.
CS 6250: Computer Networks
Week 1
We began with the history of networking research and an overview of each layer of the OSI 7-layer model. Dividing the model into seven layers provides flexibility and cost efficiency; however, it also has drawbacks, such as the potential for overlapping functions across layers. A guiding principle of network implementation is the End-to-End (E2E) principle, which advocates keeping functionality inside the network minimal and implementing new applications and protocols at the end systems. Firewalls and NATs violate this principle because they operate inside the network, dropping packets or rewriting addresses in ways that are not transparent to the end hosts.
We also delved into the Data Link layer, discussing the Spanning Tree Algorithm in the context of Bridges and Switches. The Spanning Tree Algorithm is a distributed algorithm in which Bridges and Switches exchange information with one another to elect a root and disable redundant links, preventing the broadcast storms that loops in the network topology would otherwise cause.
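The root-election core of that exchange can be sketched in a few lines. This is a minimal illustration, not the full protocol (it omits ports, path costs, and configuration message formats): each bridge repeatedly adopts the smallest bridge ID it has heard from a neighbor, and all bridges converge on a single root. The `elect_root` helper and the sample topology are hypothetical.

```python
# Hypothetical sketch of the root-election core of the Spanning Tree
# Algorithm: each bridge repeatedly tells its neighbors the smallest
# bridge ID it has seen, and all bridges converge on one root.
def elect_root(neighbors):
    # neighbors: {bridge_id: [neighboring bridge_ids]}
    believed_root = {b: b for b in neighbors}  # each bridge starts as its own root
    changed = True
    while changed:
        changed = False
        for bridge, nbrs in neighbors.items():
            best = min([believed_root[n] for n in nbrs] + [believed_root[bridge]])
            if best < believed_root[bridge]:
                believed_root[bridge] = best
                changed = True
    return believed_root

# A small looped topology: 1-2, 2-3, 3-1, 3-4
topology = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
print(elect_root(topology))  # every bridge agrees the root is bridge 1
```

The loop terminates because each bridge's believed root can only decrease and is bounded below by the smallest ID in the network.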
Week 2
This week focused on the Transport Layer and Application Layer. When multiple applications run simultaneously on a host, the Transport Layer distinguishes their traffic through Multiplexing, using port numbers as additional identifiers. The primary Transport Layer protocols are TCP and UDP, which take different approaches to reliability. TCP provides reliable delivery through mechanisms such as Automatic Repeat reQuest (ARQ), retransmitting packets that are not acknowledged in time.
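The simplest flavor of ARQ, stop-and-wait, can be simulated in a few lines. This is a toy sketch under simplifying assumptions: the `send_reliably` function and its deterministic lossy channel (a fixed set of dropped transmission attempts) are hypothetical, standing in for real timeouts and a real network.

```python
# A minimal stop-and-wait ARQ sketch: the sender transmits one packet,
# waits for its ACK, and retransmits on a simulated timeout.
def send_reliably(packets, drops):
    """drops: set of global transmission-attempt numbers that are lost."""
    delivered, attempt = [], 0
    for seq, data in enumerate(packets):
        while True:
            attempt += 1
            if attempt in drops:       # packet (or its ACK) lost -> timeout
                continue               # retransmit the same sequence number
            delivered.append((seq, data))
            break                      # ACK received, move to the next packet
    return delivered, attempt

delivered, attempts = send_reliably(["a", "b", "c"], drops={2, 3})
print(delivered)  # [(0, 'a'), (1, 'b'), (2, 'c')]
print(attempts)   # 5 transmissions for 3 packets (packet 'b' was sent 3 times)
```

All packets arrive in order despite losses, at the cost of extra transmissions, which is exactly the trade ARQ makes.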
TCP also employs a rate control mechanism known as Flow Control to protect the receiver's buffer from overflow: the receiver advertises its remaining buffer space to the sender, which paces its transmissions accordingly. Additionally, Congestion Control protects against packet drops caused by network congestion by keeping the transmission rate from exceeding network capacity. End-to-end congestion control operates at the Transport Layer; the sender computes the amount of unacknowledged data as the difference between bytes sent and bytes acknowledged, and ensures that it does not exceed the capacity of the network or the receiver.
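That sender-side check can be written directly. This is a sketch, not TCP's actual implementation; the `can_send` helper and its parameter names (`cwnd` for the congestion window, `rwnd` for the receiver's advertised window) are illustrative.

```python
# Sketch of the sender-side check described above: in-flight data
# (bytes sent minus bytes ACKed) plus the next segment must not exceed
# the smaller of the congestion window and the receiver's window.
def can_send(last_byte_sent, last_byte_acked, next_segment, cwnd, rwnd):
    in_flight = last_byte_sent - last_byte_acked
    return in_flight + next_segment <= min(cwnd, rwnd)

print(can_send(10_000, 4_000, 1_460, cwnd=8_000, rwnd=16_000))  # True: 6000 + 1460 <= 8000
print(can_send(10_000, 4_000, 1_460, cwnd=7_000, rwnd=16_000))  # False: would exceed cwnd
```

Taking the minimum of the two windows is what lets one mechanism respect both the receiver's limit and the network's limit at once.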
In relation to network capacity, the Additive Increase/Multiplicative Decrease (AIMD) method is used: the sender gradually increases its congestion window and halves it when packet loss is detected. If all connections sharing a link have the same RTT (Round-Trip Time), AIMD ensures that each connection ultimately receives an equal share of the capacity, maintaining fairness. Conversely, if RTTs differ, connections with shorter RTTs increase their windows more quickly, leading to unfairness.
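The fairness claim can be checked with a toy simulation under the equal-RTT assumption. The `aimd` function and its parameters are hypothetical simplifications: both flows add one unit per round and halve their windows whenever their combined rate exceeds the link capacity.

```python
# Toy AIMD simulation: additive increase (+1 per round), multiplicative
# decrease (halve on congestion). Additive increase preserves the gap
# between the two windows; each halving cuts the gap in half, so the
# windows drift toward an equal share of the link.
def aimd(w1, w2, capacity, rounds):
    for _ in range(rounds):
        w1, w2 = w1 + 1, w2 + 1
        if w1 + w2 > capacity:   # congestion signal: both flows back off
            w1, w2 = w1 / 2, w2 / 2
    return w1, w2

w1, w2 = aimd(1, 30, capacity=40, rounds=200)
print(abs(w1 - w2) < 1)  # True: the initial 29-unit gap has nearly vanished
```

Modeling a shorter RTT as a larger per-round increase for one flow would break this convergence, matching the unfairness described above.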
Week 3
This week centered on the Network Layer, particularly focusing on Interior Gateway Protocols (IGP) for routing within the same administrative domain. Forwarding refers to transferring packets from incoming links to outgoing links within a single router, while routing involves routers collaborating using routing protocols to find an appropriate path for packets from the source node to the destination node.
We covered two routing algorithms: the Link-State algorithm and the Distance Vector Routing algorithm. The Link-State algorithm is based on Dijkstra's algorithm and requires each router to maintain topology information for the entire network. The Distance Vector Routing algorithm is based on the Bellman-Ford algorithm, with each router exchanging distance estimates only with its neighbors. In Distance Vector Routing, when a link fails, the new state may not be reflected accurately: routers can keep advertising stale routes to one another, producing the count-to-infinity problem, which techniques such as poisoned reverse mitigate.
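The Bellman-Ford update at the heart of Distance Vector routing can be sketched as follows. This is a simplified, synchronous illustration (the `distance_vector` helper and the three-node topology are hypothetical; real routers update asynchronously from neighbor messages).

```python
# Sketch of Distance Vector routing: each router repeatedly applies the
# Bellman-Ford update d(x, y) = min over neighbors v of c(x, v) + d(v, y),
# until no distance estimate changes.
INF = float("inf")

def distance_vector(nodes, cost):
    # cost: {(u, v): link cost} for directly connected pairs (symmetric)
    dist = {u: {v: 0 if u == v else cost.get((u, v), INF) for v in nodes} for u in nodes}
    changed = True
    while changed:
        changed = False
        for x in nodes:
            for y in nodes:
                if x == y:
                    continue
                best = min(cost.get((x, v), INF) + dist[v][y] for v in nodes if v != x)
                if best < dist[x][y]:
                    dist[x][y] = best
                    changed = True
    return dist

nodes = ["A", "B", "C"]
cost = {("A", "B"): 1, ("B", "A"): 1, ("B", "C"): 2,
        ("C", "B"): 2, ("A", "C"): 10, ("C", "A"): 10}
table = distance_vector(nodes, cost)
print(table["A"]["C"])  # 3: A reaches C via B instead of the direct cost-10 link
```

Note that each router only needs its link costs and its neighbors' distance vectors, not the full topology that Link-State requires.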
Week 4
Following the discussion of intradomain topics in Week 3, Week 4 covered interdomain content. ISPs providing network services are classified into three tiers: Global scale Tier 1, Regional scale Tier 2, and Local Tier 3. Tier 3 connects to Tier 2, and Tier 2 connects to Tier 1, forming a hierarchical structure. However, the emergence of Internet Exchange Points (IXPs) and Content Delivery Networks (CDNs) that offer content has led to a flattening of this hierarchy.
Autonomous Systems (AS) are groups of routers operating under the same administrative authority, and the exchange of routing information between ASes is managed by the Border Gateway Protocol (BGP), while intra-AS communication is handled by Interior Gateway Protocols (IGP).
Connections between ASes can be categorized into relationships known as Transit, which involve a provider and customer, and Peering, which involves exchanging traffic. BGP sessions between different ASes are referred to as eBGP, while sessions within the same AS are called iBGP. The difference between iBGP and IGP is that iBGP aims to distribute external routes, whereas IGP determines paths within the AS.
IXPs can facilitate Peering between ASes and provide Route Servers to manage routing information for multiple ASes, enabling Multi-lateral BGP.
Week 5
In this week, we learned about the details of routers. Routers provide two functions: the Forwarding plane function and the Control plane function. The Forwarding plane function refers to transferring packets from an input link interface to the appropriate output link interface, which must be completed within a very short time scale. This function is primarily implemented in hardware.
The Control plane function involves the implementation of routing protocols, maintenance of routing tables, and calculation of forwarding tables, typically implemented in software.
In the early days of the internet, a class-based (fixed-length prefix) IP address model was used. However, due to the rapid depletion of IP addresses, Classless Inter-Domain Routing (CIDR) emerged. CIDR allows IP addresses to be allocated using prefixes of arbitrary lengths. While CIDR helped reduce the size of routing tables, it introduced a new problem: longest-matching-prefix lookup.
The switching fabric is responsible for forwarding packets from input ports to output ports. Routers perform a longest-prefix-match lookup on a packet's destination address to determine its next hop. Since memory accesses were the primary time-consuming factor in lookups, Multibit tries were developed to reduce the number of memory accesses per lookup.
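A small multibit trie can make this concrete. The sketch below is a simplified, hypothetical implementation with a stride of 2 and binary-string addresses: odd-length prefixes are expanded to the next multiple of the stride (controlled prefix expansion), and each lookup step consumes 2 bits, roughly halving the memory accesses of a one-bit-at-a-time unibit trie.

```python
# Sketch of a stride-2 multibit trie for longest-prefix match.
STRIDE = 2

def insert(root, prefix, next_hop, orig_len=None):
    if orig_len is None:
        orig_len = len(prefix)
    if len(prefix) % STRIDE:           # expand odd-length prefixes both ways
        for bit in "01":
            insert(root, prefix + bit, next_hop, orig_len)
        return
    node = root
    for i in range(0, len(prefix), STRIDE):
        node = node.setdefault(prefix[i:i + STRIDE], {})
    if orig_len >= node.get("len", -1):  # more specific original prefix wins
        node["hop"], node["len"] = next_hop, orig_len

def lookup(root, addr_bits):
    node, best = root, None
    for i in range(0, len(addr_bits), STRIDE):
        if "hop" in node:              # remember the longest match so far
            best = node["hop"]
        node = node.get(addr_bits[i:i + STRIDE])
        if node is None:
            return best
    if "hop" in node:
        best = node["hop"]
    return best

table = {}
insert(table, "0", "hop-A")            # short prefix 0*
insert(table, "0110", "hop-B")         # more specific prefix 0110*
print(lookup(table, "01100000"))       # hop-B: the longer match wins
print(lookup(table, "00000000"))       # hop-A
```

The `hop-A`/`hop-B` names are placeholders; a real router would store next-hop interfaces and use 32-bit IPv4 addresses with larger strides.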
Week 6
Recently, as network complexity has increased, there has been a need to control packets based on TCP flags and source IP addresses, leading to an increased focus on packet classification. Efficient methods for matching destination and source IP addresses include Set-pruning tries and Backtracking approach. Set-pruning tries are fast but consume a lot of memory, while the Backtracking approach is slower but conserves memory. The Grid of tries approach is a hybrid of these two methods.
In scheduling that connects input lines to output lines, a scheme called "Take the Ticket" lets inputs send requests simultaneously to their desired output lines; however, when multiple inputs contend for the same output, the packet at the head of an input queue can block the packets behind it even when their outputs are idle (the Head-of-Line blocking problem). To resolve this issue, Parallel Iterative Matching maintains a virtual queue for each output link at every input, allowing matches between inputs and outputs to be computed in parallel.
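One request-grant-accept iteration of Parallel Iterative Matching can be sketched as follows. This is a hypothetical simplification: real PIM breaks ties randomly, while this sketch picks the lowest-numbered port so the result is reproducible, and it runs only a single iteration.

```python
# One request-grant-accept iteration of Parallel Iterative Matching.
# Each input keeps a virtual output queue (VOQ) per output, so a busy
# output no longer blocks packets bound for idle outputs.
def pim_iteration(voqs):
    # voqs[i] = set of outputs for which input i has queued packets
    # Request: every input asks for all outputs with non-empty VOQs.
    # Grant: each output grants one of the inputs that requested it.
    grants = {}
    for out in {o for reqs in voqs.values() for o in reqs}:
        requesters = [i for i, reqs in voqs.items() if out in reqs]
        grants.setdefault(min(requesters), set()).add(out)
    # Accept: each input accepts one of the grants it received.
    return {i: min(outs) for i, outs in grants.items()}

# Inputs 0 and 1 both want output 0; input 1 also wants output 1.
matching = pim_iteration({0: {0}, 1: {0, 1}})
print(matching)  # {0: 0, 1: 1}: both inputs transmit in parallel
```

With a single FIFO per input instead of VOQs, input 1 would have to wait behind its head packet for output 0; the per-output queues are what let both transfers proceed at once.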
In routers, scheduling is necessary, and the simplest implementation at the output link buffer is First-In-First-Out (FIFO) with tail drop, where packets that arrive when the buffer is full are dropped. To avoid the quality degradation such drops cause, Bit-by-Bit Round Robin scheduling was introduced, which always transmits the packet with the smallest finishing round number. However, selecting the smallest finishing round number requires a priority queue, leading to higher time complexity per packet.
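The finishing-round-number bookkeeping can be sketched as follows. This is a simplified illustration (the `fair_queue` helper is hypothetical and assumes all packets are queued at round R = 0): each packet of a flow gets a finish number equal to its flow's previous finish number plus its size, and a heap serves as the priority queue the text mentions.

```python
# Sketch of Bit-by-Bit Round Robin (fair queuing): transmit the queued
# packet with the smallest finishing round number, tracked per flow.
import heapq

def fair_queue(arrivals):
    # arrivals: list of (flow_id, size_in_bits), all queued at round 0
    last_finish, heap = {}, []
    for order, (flow, size) in enumerate(arrivals):
        finish = last_finish.get(flow, 0) + size   # F = F_prev_of_flow + size
        last_finish[flow] = finish
        heapq.heappush(heap, (finish, order, flow, size))
    while heap:                                    # O(log n) pop per packet
        finish, _, flow, size = heapq.heappop(heap)
        yield flow, size

# Flow A has two 100-bit packets queued; flow B has one 60-bit packet.
print(list(fair_queue([("A", 100), ("A", 100), ("B", 60)])))
# [('B', 60), ('A', 100), ('A', 100)]: the short packet finishes first
```

The heap's logarithmic push and pop are exactly the extra cost over FIFO's constant-time operations that the text alludes to.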