This is a paper about the Internet transport layer protocols TCP and UDP, written by Marko Bekric for South East European University and submitted by the author for posting on Electronics Fanboy.
Internet Transport Layer TCP/UDP
By Marko Bekric
TCP and UDP belong to the most important part of the protocol hierarchy: the transport layer. The concept of layered protocols would make no sense without it. This layer provides cost-effective and reliable data transport from the source computer to the destination computer, independent of the physical network or of the networks currently in use. The transport layer has its own protocols, designs, services and performance characteristics.
In this research project I will give an introduction to the transport layer. I will mostly stay with TCP and UDP, but I will also mention some other topics closely related to them. First, in short: TCP and UDP are part of the transport layer. UDP (User Datagram Protocol), like TCP (Transmission Control Protocol), carries data for network applications; it is the simpler, connectionless of the two and is used by client/server programs such as video-conferencing systems. TCP and UDP occupy an important place in the transport layer, which is the fourth layer of the OSI (Open Systems Interconnection) model. In the OSI reference model, the transport layer is responsible for providing data transfer at an agreed-upon level of quality, such as specified transmission speeds and error rates. To ensure delivery, outgoing packets are assigned sequence numbers, which are included in the packets transmitted by the lower layers. The transport layer at the receiving end checks the packet numbers to make sure all have been delivered and to put the packet contents into the proper sequence for the recipient. The transport layer provides services to the session layer above it and uses the network layer below it to find a route between source and destination. It is crucial in many ways, because it sits between the upper layers (which are strongly application-dependent) and the lower ones (which are network-based). The layers below the transport layer are collectively known as the subnet layers. Depending on how well (or poorly) they perform their function, the transport layer has to interfere less (or more) in order to maintain a reliable connection.
2. History of TCP & main facts to mention
In the early 1960s the US Department of Defense (DoD) indicated the need for a wide-area communication system, covering the United States and allowing the interconnection of heterogeneous hardware and software systems. In 1967 the Stanford Research Institute was contracted to develop the suite of protocols for this network, initially to be known as ARPANet. Other participants in the project included the University of California at Berkeley and the private company BBN (Bolt, Beranek and Newman). Development work commenced in 1970, and by 1972 approximately 40 sites were connected via TCP/IP. In 1973 the first international connection was made, and in 1974 TCP/IP was released to the public. Initially the network was used to interconnect government, military and educational sites. Slowly, as time progressed, commercial companies were allowed access, and by 1990 the backbone of the Internet, as it was now known, was being extended into one country after another. One of the major reasons why TCP/IP has become the de facto standard world-wide for industrial and telecommunications applications is the fact that the Internet was designed around it in the first place and that, without it, no Internet access is possible. TCP/IP, or rather the TCP/IP protocol suite, is not limited to the TCP and IP protocols, but consists of a multitude of interrelated protocols that occupy the upper three layers of the ARPA model. TCP/IP does not include the bottom network access layer, but depends on it for access to the medium. The TCP/IP protocol suite is used for communications, whether voice, video, or data. There is a new service being brought out for voice over IP at a consumer cost of 5.5 cents per minute. Radio broadcasts are all over the Web. Video is coming, but the images are still shaky and must be buffered heavily before being displayed on the monitor. However,
give it time. All great things are refined by time, and applications over TCP/IP are no exception. Today, you will not find many data communications installations that have not implemented, or at least considered, the TCP/IP protocol. TCP/IP is becoming so common that it is not so much a matter of selecting the TCP/IP protocol stack as of selecting applications that support it. Many users do not even know they are using the TCP/IP protocol. All they know is that they have a connection to the Web, which many people confuse with the Internet. We'll get into the details of the differences later, but for now you just need to understand that the Web is an application of the Internet. The Web uses the communications facilities of the Internet to provide for data flow between clients and servers. The Internet is not the Web and the Web is not the Internet. In the 1970s, everyone had some type of WANG machine in their office. In the 1980s and early 1990s, Novell's NetWare applications consumed every office. Today, NetWare continues to dominate the network arena with its installed base of client/server network applications. However, the TCP/IP protocol, Internet browsers such as Netscape's Navigator and Microsoft's Internet Explorer, and Web programming languages are combining to produce powerful corporate networks known as intranets, which mimic the facilities of the Internet but on a corporate scale. Intranets from different companies, or simply different sites, can communicate with each other through the Internet. Consumers can access corporate intranets through an extranet, which is simply the part of a corporate intranet that is available to the public. A great example of this is electronic commerce, which is what you use when you purchase something via the Internet. Directory services are provided through the Domain Name Service (DNS). File and print services are provided in many different ways.
Finally, the ultimate in full connectivity is the Internet, which allows corporate intranets to interconnect (within the same corporation or between different corporations), providing global connectivity unmatched by any network application today. Therefore, within a short time (possibly 1998), very powerful applications will be built on the TCP/IP software suite that will eventually rival NetWare at the core. Another key factor of TCP/IP is extensibility. How many people can you name who use NetWare from their house for corporate or commercial connectivity? Yes, programs such as remote node and remote control allow NetWare clients to be accessed remotely, but not as seamlessly as with TCP/IP. TCP/IP allows you to move your workstation to any part of the network, including dialing in from any part of the world, and gain access to your network or another network. This brings up another point: how many networks interact using NetWare? Theoretically, with TCP/IP you can access (excluding security mechanisms for now) any other TCP/IP network in the world from any point in the world. Addressing in TCP/IP is handled on a global scale to ensure uniqueness. Novell attempted global addressing but failed. Novell addresses are unique to each private installation, such as a single company, but are probably massively duplicated when taken as a whole (all installations). I know many installations with the Novell address of 1A somewhere in their network. Not everyone is going to renumber their network for uniqueness, but one trick is to match the 32-bit TCP/IP subnet addresses to your Novell network: convert each octet of the 32-bit TCP/IP address into hex and use that as your NetWare address. Novell has entered the TCP/IP fray with its IntranetWare and support for native IP. IntranetWare allows NetWare workstations to access TCP/IP resources.
As of version 5.0, the IntranetWare name is going away, and another version of NetWare is supposed to allow NetWare to run directly on top of TCP/IP (this is known as native TCP/IP support).
Microsoft and its emerging NT platform can also use TCP/IP as a network protocol. Two flavors are available:
• Native TCP/IP and its applications (TELNET, FTP, etc.)
• RFC-compliant (RFCs 1001 and 1002) TCP, which allows file and print services
This makes it possible to telnet from an NT server or workstation and to transfer files to that workstation or server using native TCP/IP. For file and print services in a TCP/IP environment, NT can be configured to use NetBIOS over TCP/IP, which enables NT to participate in a routed network. NT can run many other protocols as well.
3. TCP General usage
3.1 Basic Functions
TCP is a connection-oriented protocol and is therefore reliable, although this word is used
in a data communications context and not in an everyday sense. TCP establishes a
connection between two hosts before any data is transmitted. Because a connection is set
up beforehand, it is possible to verify that all packets are received on the other end and to
arrange re-transmission in the case of lost packets. Because of all these built-in functions,
TCP involves significant additional overhead in terms of processing time and header size.
TCP includes the following functions:
• Fragmentation of large chunks of data into smaller segments that can be
accommodated by IP. The word ‘segmentation’ is used here to differentiate
it from the ‘fragmentation’ performed by IP
• Data stream reconstruction from packets received
• Receipt acknowledgment
• Socket services for providing multiple connections to ports on remote hosts
• Packet verification and error control
• Flow control
• Packet sequencing and reordering
In order to achieve its intended goals, TCP makes use of ports and sockets, connection
oriented communication, sliding windows, and sequence numbers/acknowledgments.
Whereas IP can route the message to a particular machine on the basis of its IP address,
TCP has to know for which process (i.e. software program) on that particular machine it
is destined. This is done by means of port numbers ranging from 1 to 65 535.
Port numbers are controlled by IANA (the Internet Assigned Numbers Authority) and
can be divided into three groups.
Well known ports, ranging from 1 to 1023, have been assigned by IANA and are
globally known to all TCP users. For example, HTTP uses port 80.
Registered ports are registered by IANA in cases where the port number cannot be
classified as well known, yet it is used by a significant number of users. Examples are
port numbers registered for Microsoft Windows or for specific types of PLCs. These
numbers range from 1024 to 49 151, ending just below 75% of 65 536.
A third class of port numbers is known as ephemeral ports. These range from 49 152 to
65 535 and can be used by anyone on an ad-hoc basis.
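The three ranges can be summarized in a small helper function; this is a sketch (the function and label names are mine, while the boundaries are IANA's):

```python
def classify_port(port):
    """Place a TCP/UDP port number into its IANA range."""
    if not 0 <= port <= 65535:
        raise ValueError("port numbers occupy 16 bits")
    if port <= 1023:
        return "well-known"   # assigned by IANA, e.g. HTTP on 80
    if port <= 49151:
        return "registered"   # registered with IANA for popular services
    return "ephemeral"        # free for ad-hoc use

print(classify_port(80))      # well-known
print(classify_port(50000))   # ephemeral
```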
In order to identify both the location and application to which a particular packet is to be
sent, the IP address (location) and port number (process) are combined into a functional
address called a socket. The IP address is contained in the IP header and the port number
is contained in the TCP or UDP header.
In order for any data to be transferred under TCP, a socket must exist both at the source
and at the destination. TCP is also capable of creating multiple sockets to the same port.
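As a minimal sketch of the address-plus-port idea (Python is my choice here; the text prescribes no language), the standard `socket` module shows that a socket's name is literally an (IP address, port) pair. Binding to port 0 asks the operating system for any free ephemeral port:

```python
import socket

# A socket combines an IP address (the location) with a port number
# (the process). Binding to port 0 lets the OS pick a free port.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
addr, port = s.getsockname()   # the functional address: (location, process)
print(f"socket = {addr}:{port}")
s.close()
```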
3.4 Sequence numbers
A fundamental notion in the TCP design is that every BYTE of data sent over a TCP connection has a unique 32-bit sequence number. Of course this number cannot be sent along with every byte, yet it is nevertheless implied. Instead, the sequence number of the FIRST byte in each segment is included in the accompanying TCP header; for each subsequent byte that number is simply incremented by the receiver in order to keep track of the bytes. Before any data transmission takes place, both sender and receiver (e.g. client and server) have to agree on the initial sequence numbers (ISNs) to be used. This process is described under 'establishing a connection'. Since TCP supports full-duplex operation, both client and server decide on their own initial sequence numbers for the connection, even though data may flow in only one direction for that specific connection. For obvious reasons the sequence number cannot start at 0 every time, as that would create serious problems in the case of short-lived multiple sequential connections between two machines: a packet with a sequence number from an earlier connection could easily arrive late, during a subsequent connection, and the receiver would have difficulty deciding whether the packet belongs to the former or the current connection. It is easy to visualize a similar problem in real life: imagine tracking a parcel carried by UPS if all UPS agents started issuing tracking numbers beginning with 0 every morning. The sequence number is generated by means of a 32-bit software counter that starts at 0 during boot-up and increments roughly once every 4 microseconds (although this varies with the operating system). When TCP establishes a connection, the value of the counter is read and used as the initial sequence number, which yields an apparently random choice of ISN.
3.5 Acknowledgment numbers
TCP acknowledges data received on a PER SEGMENT basis, although several consecutive segments may be acknowledged at the same time.
The acknowledgment number returned to the sender to indicate successful delivery equals the number of the last byte received +1, hence it points to the next expected sequence number. For example: 10 bytes are sent, with sequence number 33. This means
that the first byte is numbered 33 and the last byte is numbered 42. If the bytes are received successfully, an acknowledgment number (ACK) of 43 is returned, and the sender then knows that the data has been received properly.
TCP does not issue selective acknowledgments, so if a specific segment contains errors, the acknowledgement number returned to the sender will point to the first byte in the defective segment. This implies that the segment starting with that sequence number, and
all subsequent segments (even though they may have been transmitted successfully) have to be retransmitted. From the previous paragraph it should be clear that a duplicate acknowledgement received by the sender means that there was an error in the transmission of one or more bytes following that particular sequence number. We should note that the sequence number and the acknowledgment number in one header
are not related at all. The former relates to outgoing data, the latter refers to incoming data. During the connection-establishment phase the sequence numbers for both hosts are set up independently, hence these two numbers will never bear any resemblance to each other.
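The worked example above (10 bytes starting at sequence number 33, acknowledged with 43) reduces to one line of arithmetic. A sketch, ignoring 32-bit wrap-around:

```python
def expected_ack(first_seq, num_bytes):
    """ACK number = number of the last byte received + 1, i.e. the
    next sequence number the receiver expects (wrap-around ignored)."""
    last_byte = first_seq + num_bytes - 1
    return last_byte + 1

print(expected_ack(33, 10))  # 43, as in the example above
```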
3.6 Sliding windows
Obviously there is a need to get some sort of acknowledgment back to ensure that there is a guaranteed delivery. This technique, called positive acknowledgment with retransmission, requires the receiver to send back an acknowledgment message within a given time. The transmitter starts a timer so that if no response is received from the destination node within a given time, another copy of the message will be transmitted.
An example of this situation is given below.
[frame_center src="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP-300x278.bmp" href="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP.bmp"]
The sliding window form of positive acknowledgment is used by TCP, as it is very time
consuming waiting for each individual acknowledgment to be returned for each packet
transmitted. Hence the idea is that a number of packets (with cumulative number of bytes
not exceeding the window size) may be transmitted before the source has received an acknowledgment of the first message (due to time delays, etc.). As long as acknowledgments are received, the window slides along and the next packet is transmitted. During the TCP connection phase each host informs the other side of its permissible window size; for Windows 95/98 this is typically 8 KB (8192 bytes). This means that, using Ethernet, 5 full data frames comprising 5 × 1460 = 7300 bytes can be sent without acknowledgment. At that stage the window has shrunk to less than 1000 bytes, which means that unless an ACK arrives, the sender will have to pause its transmission.
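The window arithmetic in the Windows 95/98 example can be checked directly; the figures below (1460-byte Ethernet segments, an 8192-byte window) come from the text:

```python
window = 8192   # advertised receive window in bytes (typical Windows 95/98)
mss = 1460      # data bytes carried in one full Ethernet frame

full_frames = window // mss             # segments sent without an ACK
remaining = window - full_frames * mss  # window left after those frames
print(full_frames, remaining)           # 5 892, i.e. less than 1000 bytes left
```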
3.7 Establishing a connection
A three-way SYN/SYN-ACK/ACK handshake (as illustrated below) is used to establish a TCP connection. As TCP is a full-duplex protocol, it is possible (and necessary) for a connection to be established in both directions at the same time.
[frame_center src="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP1-290x180.bmp" href="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP1.bmp"]
As mentioned before, TCP generates pseudo-random sequence numbers by means of a 32-bit software counter that resets at boot-up and then increments every 4 microseconds. The host establishing the connection reads a value 'x' from the counter (where x can vary between 0 and 2^32 − 1) and inserts it in the sequence number field. It then sets the SYN
flag = 1 and transmits the header (no data yet) to the appropriate IP address and port
number. Assuming that the chosen sequence number was 132, this action would then be
abbreviated as SYN 132. The receiving host (e.g. the server) acknowledges this by
incrementing the received sequence number by one, and sending it back to the originator as an acknowledgment number. It also sets the ACK flag = 1 to indicate that this is an acknowledgment.
This results in an ACK 133. The first byte expected would therefore be numbered 133. At the
same time the server obtains its own sequence number (y), inserts it in the header, and
also sets the SYN flag in order to establish a connection in the opposite direction. The
header is then sent off to the originator (the client), conveying the message e.g. SYN 567.
The composite ‘message’ contained within the header would thus be ACK 133, SYN 567.
The originator receives this, notes that its own request for a connection has been
complied with, and also acknowledges the other node's request with an ACK 568. Two-way communication is now established. Below we will see what is needed to close a connection.
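The exchange just described, with the example ISNs 132 and 567, can be traced with a toy function (purely illustrative; real stacks negotiate options as well):

```python
def three_way_handshake(client_isn, server_isn):
    """Return the 'messages' of a TCP three-way handshake, numbers only."""
    return [
        "SYN %d" % client_isn,                            # client -> server
        "ACK %d, SYN %d" % (client_isn + 1, server_isn),  # server -> client
        "ACK %d" % (server_isn + 1),                      # client -> server
    ]

for message in three_way_handshake(132, 567):
    print(message)   # SYN 132 / ACK 133, SYN 567 / ACK 568
```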
3.8 Closing a connection
An existing connection can be terminated in several ways.
Firstly, one of the hosts can request to close the connection by setting the FIN flag. The
other host can acknowledge this with an ACK, but does not have to close immediately as it
may need to transmit more data. This is known as a half-close. When the second host is
also ready to close, it will send a FIN that is acknowledged with an ACK. The resulting
situation is known as a full close. Secondly, either of the nodes can terminate its connection
by issuing an RST, whereupon the other node also relinquishes its connection and
(although not necessarily) responding with an ACK.
Both situations are depicted in the illustration below.
[frame_center src="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP2-290x180.bmp" href="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP2.bmp"]
3.9 The push operation
TCP normally breaks the data stream into what it regards as appropriately sized
segments, based on some definition of efficiency. However, this may not be swift enough
for an interactive keyboard application. Hence the push instruction (PSH bit in the code
field) used by the application program forces delivery of bytes currently in the stream and
the data will be immediately delivered to the process at the receiving end.
3.10 Maximum segment size
Both the transmitting and receiving nodes need to agree on the maximum size segments
they will transfer. This is specified in the options field.
On the one hand TCP ‘prefers’ IP not to perform any fragmentation as this leads to a
reduction in transmission speed due to the fragmentation process, and a higher probability
of loss of a packet and the resultant retransmission of the entire packet.
On the other hand, there is an improvement in overall efficiency if the data packets are
not too small and a maximum segment size is selected that fills the physical packets that
are transmitted across the network. The current specification recommends a default maximum segment size of 536 (the 576-byte default maximum IP datagram size minus 20 bytes each for the IP and TCP headers). If the size is not correctly specified, for example too small, the framing bytes (headers, etc.) consume most of the packet, resulting in
considerable overhead. Refer to RFC 879 for a detailed discussion on this issue.
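The recommended value of 536 follows directly from the arithmetic in the text:

```python
default_datagram = 576  # default maximum IP datagram size (RFC 879)
ip_header = 20          # IP header without options
tcp_header = 20         # TCP header without options

mss = default_datagram - ip_header - tcp_header
print(mss)  # 536
```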
3.11 The TCP Frame
The TCP Frame consists of a header plus data and is structured as follows:
[frame_center src="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP3-290x180.bmp" href="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP3.bmp"]
Now let's relate this layout to what we said above about the facts and usage of TCP.
The various fields within the header are as follows:
Source port: 16 bits
The source port number.
Destination port: 16 bits
The destination port number.
Sequence number: 32 bits
The sequence number of the first data byte in the current segment, except when the
SYN flag is set. If the SYN flag is set, a connection is still being established and the
sequence number in the header is the initial sequence number (ISN). The first subsequent
data byte is ISN+1.
Refer to the discussion on sequence numbers.
Acknowledgment number: 32 bits
If the ACK flag is set, this field contains the value of the next sequence number the
sender of this message is expecting to receive. Once a connection is established, this field is always sent.
Refer to the discussion on acknowledgment numbers.
Data offset: 4 bits
The number of 32 bit words in the TCP header. (Similar to IHL in the IP header.) This
indicates where the data begins. The TCP header (even one including options) is always
an integral number of 32-bit words long.
Reserved: 6 bits
Reserved for future use. Must be zero.
Control bits (flags): 6 bits
(From left to right)
URG: Urgent pointer field significant
ACK: Acknowledgment field significant
PSH: Push function
RST: Reset the connection
SYN: Synchronize sequence numbers
FIN: No more data from sender
Checksum: 16 bits
The checksum field is the 16-bit one’s complement of the one’s complement sum of all
16-bit words in the header and text. If a segment contains an odd number of header and
text octets to be check-summed, the last octet is padded on the right with zeros to form a
16-bit word for checksum purposes. The pad is not transmitted as part of the segment.
While computing the checksum, the checksum field itself is replaced with zeros.
This is known as the standard Internet checksum, and is the same as the one used for
the IP header.
The checksum also covers a 96-bit ‘pseudo header’ conceptually appended to the TCP
header. This pseudo header contains the source IP address, the destination IP address, the
protocol number (06), and TCP length. It must be emphasized that this pseudo header is
only used for computation purposes and is NOT transmitted. This gives TCP protection
against misrouted segments.
Below, the pseudo TCP header format is illustrated.
[frame_center src="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP4-290x174.bmp" href="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP4.bmp"]
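The checksum algorithm described above can be sketched in a few lines of Python; to cover the pseudo header, its twelve octets would simply be prepended to the data before calling the function. This is an illustration, not a full TCP implementation:

```python
def internet_checksum(data):
    """16-bit one's complement of the one's complement sum of all
    16-bit words; an odd-length input is padded with a zero octet."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(int.from_bytes(data[i:i + 2], "big")
                for i in range(0, len(data), 2))
    while total >> 16:                       # fold carries into low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# A receiver that sums the data together with the transmitted checksum
# must obtain zero if the segment arrived intact:
segment = b"\x45\x00\x00\x3c\x1c\x46"
cks = internet_checksum(segment)
print(internet_checksum(segment + cks.to_bytes(2, "big")))  # 0
```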
Window: 16 bits
The number of data octets beginning with the one indicated in the acknowledgement
field, which the sender of this segment is willing or able to accept.
Refer to the discussion on sliding windows.
Urgent pointer: Urgent data is placed in the beginning of a frame, and the urgent
pointer points at the last byte of urgent data (relative to the sequence number i.e. the
number of the first byte in the frame). This field is only interpreted in segments
with the URG control bit set.
Options: Options may occupy space at the end of the TCP header and are a multiple of
8 bits in length. All options are included in the checksum.
This concludes our practical look at the TCP frame.
3.12 TCP/IP Protocol Suite
What is ARP? ARP is the Address Resolution Protocol. The Internet, but not the TCP/IP protocol, grew up with high-speed local networks such as Ethernet, Token Ring, and FDDI. Before the Internet there was the ARPAnet, and it too ran the TCP/IP protocol. The ARPAnet started on serial lines to communicate between the sites; Ethernet, or any LAN for that matter, was not a consideration. IP addressing worked just fine in this environment. Routing was accomplished between message processors known as IMPs (Interface Message Processors). The hosts connected to the IMP and the IMP connected to the phone lines, which interconnected all ARPAnet sites. The IP address identified the host (and later the network and subnetwork). There was no need to physically identify a host, for there was only one host per physical connection to the IMP. Multiple hosts could connect to an IMP, but each had an IP address to which the IMP forwarded the information. Ethernet became commercially available in 1980 and gained more recognition when version 2.0 was released in 1982. Since multiple stations were to connect to a network (a single cable segment) like Ethernet, each station had to be physically identified on the Ethernet. The designers of Local Area Networks (LANs) allotted 48 bits to identify a network attachment. This is known as a physical address or MAC address. Physical addresses identify stations at the datalink level, whereas IP is an addressing scheme used at the network level. On a LAN (Ethernet, Token Ring, etc.), two communicating stations can set up a session only if they know each other's physical address. Think of a MAC address as the number on your house: there are lots of houses on your street, and each is uniquely identified by its
number. Since a MAC address is 48 bits and an IP address is 32 bits, a problem existed, and an RFC resolved it. The resolution was simple and did not affect the already established IP addressing scheme. It is known as the Address Resolution Protocol, or ARP: an IP-address-to-physical-station-address resolution (the actual term is binding). If you are trying to communicate with a host on the same network number as the one on which you currently reside, the TCP/IP protocol will use ARP to find the physical address of the destination station. If the network number of the destination station is remote, a router must be used to forward the datagram to the destination. The ARP process is used here as well, but only to find the physical address of the router. There have been enhancements to this protocol, although not through an RFC. Some
stations listen to all ARP packets, since the originator sends them in broadcast mode. All stations receive these packets and glean the information they need; the packets include the sender's hardware-to-IP address mapping. In some instances this information is used by other stations to build their own ARP caches. Many ARP caches are emptied periodically to reduce the cycles needed to refresh the cache, to conserve memory, and to keep the table up to date. If a station moves from one subnet to another and stations on the old subnet do not empty their tables, they will continue to hold a stale entry for that hardware address. ARP is defined in RFC 826.
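The gleaning-and-aging behavior just described can be sketched as a dictionary of learned mappings with timestamps. The class and method names here are invented for illustration, and real caches are maintained inside the network stack, not in application code:

```python
import time

class ArpCache:
    """Toy ARP cache: maps an IP address to (MAC address, time learned)."""

    def __init__(self, max_age=300.0):
        self.max_age = max_age   # seconds before an entry goes stale
        self.entries = {}

    def learn(self, ip, mac, now=None):
        """Record a mapping gleaned from an ARP packet on the wire."""
        self.entries[ip] = (mac, time.time() if now is None else now)

    def lookup(self, ip, now=None):
        """Return the cached MAC, or None if unknown or aged out
        (a real stack would then broadcast an ARP request)."""
        now = time.time() if now is None else now
        entry = self.entries.get(ip)
        if entry is None or now - entry[1] > self.max_age:
            return None
        return entry[0]

cache = ArpCache(max_age=300)
cache.learn("192.168.0.7", "00:1a:2b:3c:4d:5e", now=0)
print(cache.lookup("192.168.0.7", now=100))  # 00:1a:2b:3c:4d:5e
print(cache.lookup("192.168.0.7", now=400))  # None (entry aged out)
```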
3.12.1 ARP Packet Format
The ARP packet format is shown. It contains just a few fields, but notice one thing: It
does not reside on top of IP. It has its own Ethernet Type field (0806), which identifies
the protocol ownership of the packet and allows it to uniquely identify itself. It never
leaves its local segment, so why use IP?
There are five main fields: the operation (ARP request or ARP reply), the source and
destination IP addresses, and the source and destination hardware addresses (more
commonly known as MAC addresses).
The type of hardware identifies the LAN (10-Mbps Ethernet, for example), the type of
protocol identifies the protocol being used. This makes ARP versatile. It can be used
with other types of protocols as well. The most famous one is AppleTalk through the
AppleTalk ARP protocol.
[frame_center src="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP-290x180.jpg" href="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP.jpg"]
3.12.2 ARP Operation
As illustrated below, in order to attach to another station on a TCP/IP network, the source station must know the destination station's IP address. This can be accomplished in many ways; for example, by typing the address in directly using a TCP/IP-based program, or by using a name server. In this example, station 130.1.1.2 wants a connection with 130.1.1.1 (no subnet addressing is used here). The network address of this Class B address is 130.1.0.0 and the personal computer's host address is 1.1; hence, the full address is 130.1.1.1.
With ARP, it is assumed that the IP address of the destination station is already known
either through a name service (a central service or file on a network station that maps
IP addresses to host names, explained in more detail later), or by using the IP address
itself. To reduce overhead on the network, most TCP network stations will maintain a
LAN physical-address-to-IP-address table on their host machines. The ARP table is
nothing more than a section of RAM memory that will contain datalink physical (or
MAC addresses) to IP address mappings that it has learned from the network.
[frame_center src="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP1-290x180.jpg" href="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP1.jpg"]
The packet format for a RARP packet is the same as for ARP. The only difference is that
the field that will be filled in will be the sender’s physical address. The IP address
fields will be empty. A RARP server will receive this packet, fill in the IP address fields,
and reply to the sender—the opposite of the ARP process.
Other protocols similar to this are BOOTP and the Dynamic Host Configuration Protocol
(DHCP). DHCP is more powerful than RARP, but it does supply one of the same functions
as RARP: resolving an IP address. Besides being less functional than DHCP, RARP only
works on single subnets. RARP works at the datalink layer and therefore cannot span
subnets gracefully. DHCP can span subnets.
3.13 Classless Inter-Domain Routing (CIDR)
Now let’s take a look at something which should be familiar to us from the lectures.
There is a lot more to CIDR than what is presented here, but for our purposes, this will
do. With CIDR, network numbers and classes of networks are no longer valid for
routing purposes. The network address format changes to <IP address,
prefix length>. Mind you, this applies to the Internet routing tables (ISPs); class
addressing continues to be used in customer environments. CIDR could operate in a
customer environment, but that would require upgrading all routers and hosts to
understand it, and most hosts do not. The millions and millions of hosts attached to the
Internet are still operating in a class environment; CIDR therefore creates a hierarchical
routing environment on the Internet without affecting the customer environment
whatsoever. Let’s start this discussion by assigning a prefix to the well-known
class addresses:
Class A networks have a /8 prefix
Class B networks have a /16 prefix
Class C networks have a /24 prefix
/8? /16? /24? Hopefully, something clicked here! What we have changed to is the
network prefix. A network number is basically a network prefix. Nodes on a classless
network simply determine the address by finding the prefix value. This value indicates
the number of bits, starting from the left, that will be used for the network. The
remaining bits are left for host assignment. The prefix can range anywhere from /0 to
/32, which allows us to place the network/host boundary anywhere along the 32-bit address.
Imagine, then, an address such as 204.24.16.0/20. This looks like a Class C address, but the
natural mask for a Class C is 24 bits, or a /24 prefix. This one allows only 20 bits for the
network assignment. But a prefix can be assigned to any address regardless of
class; the same /20 could just as well be applied to a Class A or Class B address. The
prefix does not care about class. This is the capability of CIDR. The following section
assumes that you can convert binary to decimal and vice versa; if not, please refer to a
reference on binary notation.
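The prefix arithmetic described above can be checked with Python's standard `ipaddress` module; the addresses below are arbitrary examples chosen for illustration:

```python
import ipaddress

# A /20 prefix: the first 20 bits are the network portion,
# leaving 12 bits for host assignment.
net = ipaddress.ip_network("204.24.16.0/20")
print(net.netmask)        # 255.255.240.0
print(net.num_addresses)  # 2**12 = 4096

# The prefix does not care about class: the same /20 works
# on addresses from any of the old class ranges.
for addr in ("10.1.16.0/20", "150.5.16.0/20"):
    print(ipaddress.ip_network(addr).prefixlen)  # 20 in both cases
```

The module raises an error if host bits are set for the given prefix, which is a handy check when working out boundaries by hand.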
3.14 TCP/IP Tools in Windows
[frame_center src="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP2-290x180.jpg" href="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP2.jpg"]
The Ipconfig Tool
You can use the Ipconfig tool to verify the TCP/IP configuration parameters on a host, including the following:
• For IPv4, the IPv4 address, subnet mask, and default gateway.
• For IPv6, the IPv6 addresses and the default router.
Ipconfig is useful in determining whether the configuration is initialized and whether a duplicate IP
address is configured. To view this information, type ipconfig at a command prompt.
The Ping Tool
After you verify the configuration with the Ipconfig tool, use the Ping tool to test connectivity. The Ping
tool is a diagnostic tool that tests TCP/IP configurations and diagnoses connection failures. For IPv4,
Ping uses ICMP Echo and Echo Reply messages to determine whether a particular IPv4-based host is
available and functional. For IPv6, Ping uses ICMP for IPv6 (ICMPv6) Echo Request and Echo Reply
messages. The basic command syntax is ping Destination, in which Destination is either an IPv4 or
IPv6 address or a name that can be resolved to an IPv4 or IPv6 address.
3.14.1 TCP/IP Naming Schemes in Windows
Although IP is designed to work with the 32-bit (IPv4) and 128-bit (IPv6) addresses of sending and
destination hosts, computer users are much better at using and remembering names than IP addresses. If a name is used as an alias for an IP address, mechanisms must exist for assigning names to IP addresses, ensuring their uniqueness, and resolving the name to its IP address. The TCP/IP components of Windows use separate mechanisms for assigning and resolving host names (used by Windows Sockets applications) and NetBIOS names (used by NetBIOS applications).
A host name is an alias assigned to an IP node to identify it as a TCP/IP host. The host name can be
up to 255 characters long and can contain alphabetic and numeric characters and the “-” and “.”
characters. Multiple host names can be assigned to the same host.
Windows Sockets applications, such as Internet Explorer and the Ping tool, can use one of two values
to refer to the destination: the IP address or a host name. When the user specifies an IP address, name
resolution is not needed. When the user specifies a host name, the host name must be resolved to an
IP address before IP-based communication can begin.
Host names can take various forms. The two most common forms are a nickname and a fully qualified
domain name (FQDN). A nickname is an alias to an IP address that individual people can assign and
use. An FQDN is a structured name, such as www.microsoft.com, that follows the Internet conventions
used in DNS.
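This decision (literal address versus name needing resolution) is visible in the standard sockets resolver call. A small Python sketch, where the helper name `resolve` is invented for the example and `socket.getaddrinfo` is the standard API:

```python
import socket

def resolve(destination: str) -> str:
    # getaddrinfo accepts either form: a literal IP address is returned
    # as-is, while a host name triggers a hosts-file or DNS lookup.
    info = socket.getaddrinfo(destination, None)
    return info[0][4][0]  # first resolved address

print(resolve("127.0.0.1"))  # a literal address: no lookup needed
print(resolve("localhost"))  # a host name: resolved (to 127.0.0.1 or ::1)
```

The same call handles both FQDNs and nicknames; which sources are consulted (hosts file, DNS, and so on) is up to the operating system's resolver configuration.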
A NetBIOS name is a 16-byte name that identifies a NetBIOS application on the network. A NetBIOS
name is either a unique (exclusive) or group (nonexclusive) name. When a NetBIOS application
communicates with a specific NetBIOS application on a specific computer, a unique name is used.
When a NetBIOS process communicates with multiple NetBIOS applications on multiple computers, a
group name is used. The NetBIOS name identifies applications at the Session layer of the OSI model. For example, the NetBIOS Session service operates over TCP port 139. Because all NetBT session requests are addressed to TCP destination port 139, a NetBIOS application must use the destination NetBIOS name when it establishes a NetBIOS session. An example of a process using a NetBIOS name is the file and print sharing server service on a Windows –based computer. When your computer starts up, the server service registers a unique NetBIOS name based on your computer’s name. The exact name used by the server service is the 15- character computer name plus a 16th character of 0x20. If the computer name is not 15 characters long, it is padded with spaces up to 15 characters long. Other network services also use the computer name to build their NetBIOS names, and the 16th character is typically used to identify each service. When you attempt to make a file-sharing connection to a computer running Windows by specifying the computer’s name, the Server service on the file server that you specify corresponds to a specific NetBIOS name. For example, when you attempt to connect to the computer called CORPSERVER, the NetBIOS name corresponding to the Server service is CORPSERVER <20>. (Note the padding using the space character.) Before a file and print sharing connection can be established, a TCP connection must be created. For a TCP connection to be created, the NetBIOS name CORPSERVER <20> must be resolved to an IPv4 address. NetBIOS name resolution is the process of mapping a NetBIOS name to an IPv4 address.
3.14.2 Network Monitor
You can use Microsoft Network Monitor to simplify troubleshooting complex network problems by
monitoring and capturing network traffic for analysis. Network Monitor works by configuring a network adapter to capture all incoming and outgoing packets.
You can define capture filters so that only specific frames are saved. Filters can save frames based on
source and destination MAC addresses, source and destination protocol addresses, and pattern
matches. After a packet is captured, you can use display filtering to further isolate a problem. When a
packet has been captured and filtered, Network Monitor interprets and displays the packet data in
readable terms. Network Monitor is available as a free download from Microsoft's official website.
3.15 TCP/IP Security Overview
The field of network security in general, and of TCP/IP security in particular,
is too wide to be dealt with in an all-encompassing way here, so the focus
of this section is on the most common security exposures and the measures to
counteract them. Because many, if not all, security solutions are based on
cryptographic algorithms, we also provide a brief overview of that topic for a
better understanding of the concepts presented.
For thousands of years, people have been guarding the gates to where they
store their treasures and assets. Failure to do so usually resulted in being
robbed, victimized by society, or even killed. Though things are usually not as
dramatic anymore, they can still become very bad. Modern-day IT managers
have realized that it is equally important to protect their communications
networks against intruders and saboteurs from both inside and outside. One
does not have to be overly paranoid to find some good reasons why this is
necessary; common attacks include:
• Packet sniffing: To gain access to cleartext network data and passwords
• Impersonation: To gain unauthorized access to data or to create unauthorized e-mails by impersonating an authorized entity
• Denial-of-service: To render network resources non-functional
• Replay of messages: To gain access to information and change it in transit
• Password cracking: To gain access to information and services that would normally be denied (dictionary attack)
• Guessing of keys: To gain access to encrypted data and passwords (brute-force attack)
• Viruses: To destroy data
• Port scanning: To discover potential available attack points
Though these attacks are not exclusively specific to TCP/IP networks, they must
be considered potential threats to anyone who is going to base their network on
TCP/IP, which is the most prevalent protocol in use. TCP/IP is an open protocol,
and hackers therefore find easy prey by exploiting its well-known vulnerabilities.
3.15.1 Solutions to network security problems
Network owners need to try to protect themselves with the same zealousness that intruders use to search for a way into the network. To that end, we list some solutions that can effectively defend a network against the attacks mentioned earlier. It has to be noted that each of these solutions solves only a single (or a very limited number) of security problems. Therefore, consider a combination of several such solutions to guarantee a certain level of safety and security. These solutions include:
• Encryption: To protect data and passwords
• Authentication by digital signatures and certificates: To verify who is sending data over the network
• Authorization: To prevent improper access
• Integrity checking and message authentication codes: To protect against improper alteration of messages
• Non-repudiation: To make sure that an action cannot be denied by the person who performed it
• One-time passwords and two-way random number handshakes: To mutually authenticate the parties of a conversation
• Frequent key refresh, strong keys, and prevention of deriving future keys: To protect against breaking of keys (cryptanalysis)
• Address concealment: To protect against denial-of-service attacks
• Disabling unnecessary services: To minimize the number of attack points
3.15.2 Implementations of security solutions
The following protocols and systems are commonly used to provide various
degrees of security services in a computer network.
• IP filtering
• Network Address Translation (NAT)
• IP Security Architecture (IPSec)
• Secure Shell (SSH)
• Secure Sockets Layer (SSL)
• Application proxies
• Kerberos and other authentication systems (AAA servers)
• Secure Electronic Transactions (SET)
3.15.3 Network Security Policy
An organization’s overall security policy must be determined according to a security and business needs analysis and based on security best practices. Because a firewall relates to network security only, a firewall has little value unless the overall security policy is properly defined. A network security policy defines which services will be explicitly allowed or denied, how these services will be used, and the exceptions to these rules. Every rule in the network security policy should be implemented on a firewall, a remote access server (RAS), or both. Generally, a firewall uses one of the following methods.
Everything not specifically permitted is denied
This approach blocks all traffic between two networks except for those services and applications that are permitted. Therefore, each desired service and application is implemented one by one. No service or application that might be a potential hole in the firewall is permitted. This is the most secure method, denying services and applications unless explicitly allowed by the administrator. However, from the users' point of view, it might be more restrictive and less convenient.
Everything not specifically denied is permitted
This approach allows all traffic between two networks except for those services and applications that are denied. Therefore, each untrusted or potentially harmful service or application is denied one by one. Although this is a flexible and convenient method for the users, it can potentially cause some serious security problems, especially as new applications are introduced into the environment.
Remote access servers should provide authentication of users and should ideally also provide for limiting certain users to certain systems and networks within the corporate intranet (authorization). Remote access servers must also determine whether a user is roaming (can connect from multiple remote locations) or stationary (can connect only from a single remote location), and whether the server should use callback for particular users after they are properly authenticated. Generally, anonymous access should, at best, be granted only to servers in a demilitarized zone (DMZ). All services within a corporate intranet should require at least password authentication and appropriate access control. Direct access from the outside should always be authenticated and accounted.
3.15.4 A firewall concept
A firewall is a system (or group of systems) that enforces a security policy between a secure internal network and an untrusted network such as the Internet. Firewalls tend to be seen as a protection between the Internet and a private network. But generally speaking, a firewall should be considered as a means to divide the world into two or more networks: one or more secure networks and one or more non-secure networks. A firewall can be a PC, a router, a midrange, a mainframe, a UNIX workstation, or a combination of these that determines which information or services can be accessed from the outside and who is permitted to use the information and
services from outside. Generally, a firewall is installed at the point where the secure internal network and untrusted external network meet, which is also known as a choke point. In order to understand how a firewall works, consider the network to be a building to which access must be controlled. The building has a lobby as the only entry point. In this lobby, receptionists welcome visitors, security guards watch visitors,
video cameras record visitor actions, and badge readers authenticate visitors who enter the building.
Although these procedures can work well to control access to the building, if an unauthorized person succeeds in entering, there is no way to protect the building against this intruder’s actions. However, if the intruder’s movements are monitored, it can be possible to detect any suspicious activity.
Similarly, a firewall is designed to protect the information resources of the organization by controlling the access between the internal secure network and the untrusted external network. However, it is important to note that even if the firewall is designed to permit trusted data to pass through, deny vulnerable services, and prevent the internal network from outside attacks, a newly created attack can penetrate the firewall at any time. The network administrator must examine all logs and alarms generated by the firewall on a regular basis; otherwise, it is generally not possible to protect the internal network from outside attacks.
4 User Datagram Protocol (UDP)
UDP is a standard protocol with STD number 6, described by RFC 768 – User Datagram Protocol. Its status is standard, and almost every TCP/IP implementation intended for transferring small data units, or data that can afford some loss (such as multimedia streaming), will include UDP.
UDP is basically an application interface to IP. It adds no reliability, flow control, or error recovery to IP. It simply serves as a multiplexer/demultiplexer for sending and receiving datagrams, using ports to direct the datagrams. UDP thus provides a mechanism for one application to send a datagram to another. The UDP layer can be regarded as extremely thin and is, consequently, very efficient, but it requires the application to take responsibility for error recovery and the like. Applications sending datagrams to a host need to identify a target that is more specific than the IP address, because datagrams are normally directed to certain processes and not to the system as a whole. UDP provides this by using ports.
In a certain sense, sending a message to a remote host and getting a reply back is a lot like making a function call in a programming language. In both cases you start with one or more parameters and you get back a result. This observation has led people to try to arrange request-reply interactions on networks to be cast in the form of procedure calls. Such an arrangement makes network applications much easier to program and more familiar to deal with. For example, imagine a procedure named get_IP_address(host_name) that works by sending a UDP packet to a DNS server and waiting for the reply, timing out and trying again if one is not forthcoming quickly enough. In this way, all the details of networking can be hidden from the programmer.
The key work in this area was done by Birrell and Nelson (1984). In a nutshell, what Birrell and Nelson suggested was allowing programs to call procedures located on remote hosts. When a process on machine 1 calls a procedure on machine 2, the calling process on 1 is suspended and execution of the called procedure takes place on 2. Information can be transported from the caller to the callee in the parameters and can come back in the procedure result. No message passing is visible to the programmer. This technique is known as RPC (Remote Procedure Call) and has become the basis for many networking applications. Traditionally, the calling procedure is known as the client and the called procedure is known as the server, and we will use those names here too.
The idea behind RPC is to make a remote procedure call look as much as possible like a local one. In the simplest form, to call a remote procedure, the client program must be bound with a small library procedure, called the client stub, that represents the server procedure in the client’s address space. Similarly, the server is bound with a procedure called the server stub. These procedures hide the fact that the procedure call from the client to the server is not local. The key item to note here is that the client procedure, written by the user, just makes a normal (i.e., local) procedure call to the client stub, which has the same name as the server procedure. Since the client procedure and client stub are in the same address space, the parameters are passed in the usual way. Similarly, the server procedure is called by a procedure in its address space with the parameters it expects. To the server procedure, nothing is unusual. In this way, instead of I/O being done on sockets, network communication is done by faking a normal procedure call.
Despite the conceptual elegance of RPC, there are a few snakes hiding in the grass. A big one is the use of pointer parameters. Normally, passing a pointer to a procedure is not a problem. The called procedure can use the pointer in the same way the caller can, because both procedures live in the same virtual address space. With RPC, passing pointers is impossible because the client and server are in different address spaces. In some cases, tricks can be used to make it possible to pass pointers. Suppose that the first parameter is a pointer to an integer, k. The client stub can marshal k and send it along to the server. The server stub then creates a pointer to k and passes it to the server procedure, just as it expects. When the server procedure returns control to the server stub, the latter sends k back to the client, where the new k is copied over the old one, just in case the server changed it. In effect, the standard calling sequence of call-by-reference has been replaced by copy-restore. Unfortunately, this trick does not always work, for example, if the pointer points to a graph or other complex data structure. For this reason, some restrictions must be placed on parameters to procedures called remotely.
A second problem is that in weakly typed languages, like C, it is perfectly legal to write a procedure that computes the inner product of two vectors (arrays) without specifying how large either one is. Each could be terminated by a special value known only to the calling and called procedures. Under these circumstances, it is essentially impossible for the client stub to marshal the parameters: it has no way of determining how large they are.
A third problem is that it is not always possible to deduce the types of the parameters, not even from a formal specification or the code itself. An example is printf, which may have any number of parameters (at least one), and the parameters can be an arbitrary mixture of integers, shorts, longs, characters, strings, floating-point numbers of various lengths, and other types. Trying to call printf as a remote procedure would be practically impossible because C is so permissive. However, a rule saying that RPC can be used provided that you do not program in C (or C++) would not be popular.
A fourth problem relates to the use of global variables. Normally, the calling and called procedures can communicate by using global variables, in addition to communicating via parameters. If the called procedure is moved to a remote machine, the code will fail because the global variables are no longer shared.
These problems are not meant to suggest that RPC is hopeless. In fact, it is widely used, but some restrictions are needed to make it work well in practice. Of course, RPC need not use UDP packets, but RPC and UDP are a good fit, and UDP is commonly used for RPC. However, when the parameters or results may be larger than the maximum UDP packet, or when the operation requested is not idempotent (i.e., cannot be repeated safely, such as incrementing a counter), it may be necessary to set up a TCP connection and send the request over it rather than use UDP.
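The client-stub/server-stub idea can be illustrated with a toy sketch in Python. This is not any real RPC system: the JSON wire format, the procedure table, and all names here are invented for the example, and a real stub would add retries and request identifiers on top of UDP's unreliability:

```python
import json
import socket
import threading

# Remotely callable procedures, looked up by name on the server side.
PROCEDURES = {"add": lambda a, b: a + b}

def serve(sock):
    """Server stub: unmarshal the request, call the local procedure,
    marshal the result back into a reply datagram."""
    while True:
        data, addr = sock.recvfrom(4096)
        req = json.loads(data)  # e.g. {"proc": "add", "args": [2, 3]}
        result = PROCEDURES[req["proc"]](*req["args"])
        sock.sendto(json.dumps({"result": result}).encode(), addr)

def rpc_add(server_addr, a, b):
    """Client stub: looks like a local call, but marshals the parameters,
    sends a UDP datagram, and waits for the reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(2.0)  # UDP is unreliable: a real stub would retry here
        s.sendto(json.dumps({"proc": "add", "args": [a, b]}).encode(), server_addr)
        reply, _ = s.recvfrom(4096)
        return json.loads(reply)["result"]

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))  # let the OS pick a free port
threading.Thread(target=serve, args=(srv,), daemon=True).start()
print(rpc_add(srv.getsockname(), 2, 3))  # 5
```

Note that `add` is idempotent, so a lost reply could safely be handled by resending the request; a non-idempotent operation would need the TCP-based approach mentioned above.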
[frame_center src="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP3-290x180.jpg" href="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP3.jpg"]
4.1 UDP Datagram Format
Each UDP datagram is sent within a single IP datagram. Although the IP datagram might be fragmented during transmission, the receiving IP implementation will reassemble it before presenting it to the UDP layer. All IP implementations are required to accept datagrams of 576 bytes, which means that, allowing for a maximum-size IP header of 60 bytes, a UDP datagram of 516 bytes is acceptable to all implementations. Many implementations will accept larger datagrams, but this is not guaranteed.
The UDP datagram has an 8-byte header.
[frame_center src="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP4-290x180.jpg" href="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP4.jpg"]
Source Port: Indicates the port of the sending process. It is the port to which replies are addressed.
Destination Port: Specifies the port of the destination process on the destination host.
Length: The length (in bytes) of this user datagram, including the header.
Checksum: An optional 16-bit one's complement of the one's complement sum of a pseudo-IP header, the UDP header, and the UDP data. The pseudo-IP header contains the source and destination IP addresses, the protocol, and the UDP length.
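Since all four header fields are 16 bits wide, the header can be packed with Python's `struct` module in network byte order. A minimal sketch, with an invented helper name and illustrative port values:

```python
import struct

def udp_header(src_port, dst_port, payload_len, checksum=0):
    """Pack the four 16-bit UDP header fields in network byte order.
    The Length field covers the 8-byte header plus the data."""
    length = 8 + payload_len
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

hdr = udp_header(src_port=53125, dst_port=53, payload_len=24)
print(len(hdr))                     # 8
print(struct.unpack("!HHHH", hdr))  # (53125, 53, 32, 0)
```

A checksum of 0 here means "no checksum supplied", matching the optional nature of the field described above.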
4.2 UDP application programming interface
The application interface offered by UDP is described in RFC 768. It provides for:
• The creation of new receive ports
• The receive operation, which returns the data bytes and an indication of the source port and source IP address
• The send operation, which has, as parameters, the data, source, and destination ports and addresses
The way this interface is implemented is left to the discretion of each vendor. Be aware that UDP and IP do not provide guaranteed delivery, flow control, or error recovery, so these must be provided by the application.
Standard applications using UDP include:
• Trivial File Transfer Protocol (TFTP)
• Domain Name System (DNS) name servers
• Remote Procedure Call (RPC), used by the Network File System
• Simple Network Management Protocol (SNMP)
• Lightweight Directory Access Protocol (LDAP)
4.3 UDP Basic Functions
The second protocol that occupies the host-to-host layer is UDP. As in the case of TCP, it
makes use of the underlying IP protocol to deliver its datagrams.
UDP is a ‘connectionless’ or non-connection-oriented protocol and does not require a
connection to be established between two machines prior to data transmission. It is
therefore said to be an ‘unreliable’ protocol – the word ‘unreliable’ used here as opposed
to ‘reliable’ in the case of TCP. As in the case of TCP, packets are still delivered to sockets or ports. However, no connection is established beforehand and therefore UDP cannot guarantee that packets are
retransmitted if faulty, received in the correct sequence, or even received at all. In view
of this, one might doubt the desirability of such an unreliable protocol. There are,
however, some good reasons for its existence. Sending a UDP datagram involves very little overhead in that there are no synchronization parameters, no priority options, no sequence numbers, no retransmit
timers, no delayed acknowledgement timers, and no retransmission of packets. The
header is small, and the protocol is quick and functionally streamlined. The only major
drawback is that delivery is not guaranteed. UDP is therefore used for communications
that involve broadcasts, for general network announcements, or for real-time data. A
particularly good application is with streaming video and streaming audio where low
transmission overheads are a prerequisite, and where retransmission of lost packets is not
only unnecessary but also definitely undesirable.
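The absence of connection setup can be seen in a minimal Python exchange over the loopback interface. A sketch (port 0 asks the OS for any free port; the message content is arbitrary):

```python
import socket

# Receiver: bind a datagram socket to a local port. There is no
# listen()/accept() step: UDP is connectionless, so the socket is
# ready to receive as soon as it is bound.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
addr = rx.getsockname()

# Sender: no connection is established beforehand; sendto() simply
# hands the datagram to IP. Delivery, ordering, and duplicate
# suppression are not guaranteed.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"hello", addr)

data, sender = rx.recvfrom(1024)
print(data)  # b'hello' (on loopback, loss is unlikely but still possible)
tx.close()
rx.close()
```

Compare this with TCP, where the same exchange would require connect() and accept() before any data could flow.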
4.4 The UDP Frame
The format of the UDP frame and the interpretation of its fields are described in RFC 768.
The frame consists of a header plus data and contains the following fields:
[frame_center src="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP5-290x180.jpg" href="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP5.jpg"]
Source port: 16 bits
This is an optional field. When meaningful, it indicates the port of the sending process,
and may be assumed to be the port to which a reply should be addressed in the absence of
any other information. If not used, a value of zero is inserted.
Destination port: 16 bits
As for the source port, but it identifies the receiving process on the destination host.
Message length: 16 bits
This is the length in bytes of this datagram including the header and the data. (This
means the minimum value of the length is eight.)
Checksum: 16 bits
This is the 16-bit one’s complement of the one’s complement sum of a pseudo header
of information from the IP header, the UDP header, and the data, padded with ‘0’ bytes at
the end (if necessary) to make a multiple of two bytes.
The pseudo header, conceptually prefixed to the UDP header, contains the source address, the destination address, the protocol, and the UDP length. As in the case of TCP,
this header is used for computational purposes only, and is NOT transmitted. This
information gives protection against misrouted datagrams. This checksum procedure is the same as is used in TCP.
[frame_center src="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP6-290x180.jpg" href="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP6.jpg"]
If the computed checksum is zero, it is transmitted as all ones (the equivalent in one's
complement arithmetic). An all-zero transmitted checksum value means that the
transmitter generated no checksum (for debugging, or for higher-level protocols that do
not require one).
UDP is numbered protocol 17 (21 octal) when used with the Internet Protocol.
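The checksum procedure above can be sketched in Python. This is an illustrative implementation of the one's complement arithmetic, not production code; the addresses and ports are arbitrary examples:

```python
import struct

def ones_complement_sum(data: bytes) -> int:
    """Sum 16-bit words with end-around carry (one's complement arithmetic)."""
    if len(data) % 2:
        data += b"\x00"  # pad to a multiple of two bytes
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return total

def udp_checksum(src_ip: bytes, dst_ip: bytes, udp_segment: bytes) -> int:
    # Pseudo header: source address, destination address, zero byte,
    # protocol 17, and UDP length. It is used for computation only
    # and is never transmitted.
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(udp_segment))
    csum = 0xFFFF & ~ones_complement_sum(pseudo + udp_segment)
    return csum if csum != 0 else 0xFFFF  # a computed zero is sent as all ones

# UDP segment with the checksum field still zero: header (8) + "hello" (5).
seg = struct.pack("!HHHH", 53125, 53, 13, 0) + b"hello"
print(hex(udp_checksum(b"\x7f\x00\x00\x01", b"\x7f\x00\x00\x01", seg)))
```

The receiver verifies a segment by summing the pseudo header plus the segment including its checksum; a valid segment sums to 0xFFFF.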
4.5 UDP Ports
To use UDP, an application must supply the IP address and UDP port number of the source and
destination applications. A port provides a location for sending messages. A unique number identifies
each port. UDP ports are distinct and separate from TCP ports even though some of them use the
same number. Just like TCP ports, UDP port numbers below 1024 are well-known ports that IANA assigns and controls.
[frame_center src="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP7-290x180.jpg" href="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP7.jpg"]
4.6 Packet Multiplexing and Demultiplexing
When a sending host sends an IPv4 or IPv6 packet, it includes information in the packet so that the
data within the packet can be delivered to the correct application on the destination. The inclusion of
identifiers so that data can be delivered to one of multiple entities in each layer of a layered architecture
is known as multiplexing. Multiplexing information for IP packets consists of identifying the node on the network, the IP upper layer protocol, and for TCP and UDP, the port corresponding to the application to which the data is destined. The destination host uses these identifiers to demultiplex, or deliver the data layer by layer, to the correct destination application. The IP packet also includes information for the destination host to send a response.
IP contains multiplexing information to do the following:
· Identify the sending node (the Source IP Address field in the IPv4 header or the Source Address field in the IPv6 header).
· Identify the destination node (the Destination IP Address field in the IPv4 header or the Destination
Address in the IPv6 header).
· Identify the upper layer protocol above the IPv4 or IPv6 Internet layer (the Protocol field in the IPv4
header or the Next Header field of the IPv6 header).
· For TCP segments and UDP messages, identify the application from which the message was sent (the
Source Port in the TCP or UDP header).
· For TCP segments and UDP messages, identify the application to which the message is destined (the
Destination Port in the TCP or UDP header).
TCP and UDP ports can use any number between 0 and 65,535. Port numbers for client-side
applications are typically dynamically assigned when there is a request for service, and IANA preassigns port numbers for well-known server-side applications.
All of this information is used to provide multiplexing information so that:
· The packet can be forwarded to the correct destination.
· The destination can use the packet payload to deliver the data to the correct application.
· The receiving application can send a response.
When a packet is sent, this information is used in the following ways:
· The routers that forward IPv4 or IPv6 packets use the Destination IP Address field in the IPv4 header or the Destination Address in the IPv6 header to deliver the packet to the correct node on the network.
· The destination node uses the Protocol field in the IPv4 header or the Next Header field of the IPv6
header to deliver the packet payload to the correct upper-layer protocol.
· For TCP segments and UDP messages, the destination node uses the Destination Port field in the TCP
or UDP header to demultiplex the data within the TCP segment or UDP message to the correct
application.
Here is a graphical illustration of IPv4 packet demultiplexing.
[frame_center src="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP8-290x180.jpg" href="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP8.jpg"]
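The two-stage dispatch (first on the IP Protocol field, then on the destination port) can be sketched as a toy demultiplexer; all the application names and port bindings here are invented for illustration, not a real stack:

```python
# Listening applications, keyed by destination port.
udp_listeners = {53: "dns_server", 69: "tftp_server"}
tcp_listeners = {80: "web_server"}

def demultiplex(protocol: int, dst_port: int) -> str:
    """Dispatch on the IP Protocol field (6 = TCP, 17 = UDP),
    then on the destination port, to find the receiving application."""
    table = {6: tcp_listeners, 17: udp_listeners}[protocol]
    app = table.get(dst_port)
    if app is None:
        return "unreachable"  # a real stack would send an ICMP error
    return app

print(demultiplex(17, 53))  # dns_server
print(demultiplex(6, 80))   # web_server
```

This also shows why TCP and UDP ports are independent namespaces: each protocol number selects its own port table before the port lookup happens.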
4.7 Application Programming Interfaces
Windows networking applications use two main application programming interfaces (APIs) to access TCP/UDP
services in Windows: Windows Sockets and NetBIOS. The figure below shows these APIs and the possible
data flows when using them.
[frame_center src="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP9-290×180.jpg" href="https://userhaven.com/home/wp-content/uploads/2014/11/TCP-UDP9.jpg"]
Some architectural differences between the Windows Sockets and NetBIOS APIs are the following:
· NetBIOS over TCP/IP (NetBT) is defined for operation over IPv4. Windows Sockets operates over both
IPv4 and IPv6.
· Windows Sockets applications can operate directly over the IPv4 or IPv6 Internet layers, without the
use of TCP or UDP. NetBIOS operates over TCP and UDP only.
Windows Sockets is a commonly used, modern API for networking applications in Windows. The
TCP/IP services and tools supplied with Windows are examples of Windows Sockets applications.
Windows Sockets provides services that allow applications to use a specific IP address and port, initiate
and accept a connection to a specific destination IP address and port, send and receive data, and close a
connection.
There are three types of sockets:
• A stream socket, which provides a two-way, reliable, sequenced, and unduplicated flow of data using TCP.
• A datagram socket, which provides bidirectional flow of data using UDP.
• A raw socket, which allows protocols to access IP directly, without using TCP or UDP.
A socket functions as an endpoint for network communication. An application creates a stream or
datagram socket by specifying three items: the IP address of the host, the type of service (TCP for
connection-based service and UDP for connectionless), and the port the application is using. Two
sockets, one for each end of the connection, form a bidirectional communications path. For raw
sockets, the application must specify the entire IP payload.
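The first two socket types can be demonstrated with Python's socket API, which follows the same Sockets model described above. This is a sketch; raw sockets are omitted because they require administrator privileges:

```python
import socket

# Sketch of stream and datagram sockets using Python's Sockets-style API.
# Raw sockets (the third type) need administrator privileges, so they are
# not created here.

stream = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # TCP
datagram = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP

# Binding fixes the local (IP address, port) half of the endpoint;
# port 0 asks the OS to assign a port dynamically, as clients typically do.
datagram.bind(("127.0.0.1", 0))
addr, port = datagram.getsockname()
print(addr, port)  # the OS-chosen dynamic port

stream.close()
datagram.close()
```

Each socket here is one endpoint; a second socket on the remote host would complete the bidirectional communications path the text describes.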
NetBIOS is an older API that provides name management, datagram, and session services to NetBIOS
applications. An application program that uses the NetBIOS interface API for network communication
can be run on any protocol implementation that supports the NetBIOS interface. Examples of Windows
applications and services that use NetBIOS are file and printer sharing and the Computer Browser service.
NetBIOS also defines a protocol that functions at the OSI Session layer. This layer is implemented by
the underlying protocol implementation, such as NetBIOS over TCP/IP (NetBT), which RFCs 1001 and
1002 define. The NetBIOS name service uses UDP port 137. The NetBIOS datagram service uses
UDP port 138. The NetBIOS session service uses TCP port 139.
For more information about NetBIOS and NetBT, see Chapter 11, “NetBIOS over TCP/IP.”
The ISO Transport Protocol Class 4 (or TP4) (see Table II) was designed for the same reasons as TCP. It provides a similar CO, reliable service over a CL unreliable network. TP4 relies on the same mechanisms as TCP, with the following differences. First, TP4 provides a CO message service rather than a CO byte-stream service.
Therefore, sequence numbers enumerate TPDUs rather than bytes. Next, sequence numbers are not initiated from a clock counter as in TCP [Stevens 1994], but rather start from 0. A destination reference number is used to distinguish between connections. This number is similar to the destination port number in TCP, but here the reference number maps onto the port number and can be chosen randomly or sequentially [Bertsekas and Gallager 1992]. Another important difference is that (at least in theory) a set of QoS parameters (see Section 3) can be negotiated for a TP4 connection. Other differences between TCP and TP4 are discussed in Piscitello
and Chapin.
The ISO Transport Protocol Class 0 (or TP0) (see Table II) was designed as a minimum transport protocol providing only those functions necessary to establish a connection, transfer data, and report protocol errors. TP0 was designed to operate on top of a CO reliable network service that also provides end-to-end flow control. TP0 does not even provide its own disconnection procedures; when the underlying network connection closes, TP0 closes with it. One interesting use of TP0 is that it can be employed to create an OSI transport service on TCP’s reliable byte-stream service, enabling OSI applications to run over TCP/IP networks [Rose and Cass 1987].
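The Rose and Cass approach works by prefixing each OSI TPDU with a small framing header before writing it to TCP's byte stream. A hedged sketch of that framing (a 4-byte header carrying version 3, a reserved byte, and a 16-bit big-endian total length, as in their RFC 1006 design):

```python
import struct

# Sketch of RFC 1006-style framing: each OSI TPDU is carried over TCP's
# byte stream behind a 4-byte header (version 3, reserved byte, 16-bit
# big-endian length of header plus payload).

def tpkt_wrap(tpdu: bytes) -> bytes:
    """Prefix a TPDU with the 4-byte framing header."""
    return struct.pack("!BBH", 3, 0, 4 + len(tpdu)) + tpdu

def tpkt_unwrap(frame: bytes) -> bytes:
    """Check the framing header and return the TPDU inside."""
    version, _reserved, length = struct.unpack("!BBH", frame[:4])
    if version != 3 or length != len(frame):
        raise ValueError("malformed frame")
    return frame[4:]

frame = tpkt_wrap(b"\xe0\x00\x00\x01\x00")  # an example TPDU
print(frame.hex())
print(tpkt_unwrap(frame).hex())
```

The framing restores message boundaries that TCP's byte-stream service discards, which is exactly what an OSI transport user expects.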
Network Block Transfer (NETBLT) (see Table III) was developed at MIT for high throughput bulk data transfer [Clark et al. 1987]. It is optimized to operate efficiently over long-delay links. NETBLT was designed originally to operate on top of IP, but can operate on top of any network protocol that provides a similar CL unreliable network service. Data exchange is realized via unidirectional connections. The unit of transmission is a buffer, several of which can be concurrently active to keep data flowing at a constant rate. Connection is established via a 2-way handshake during which buffer, TPDU and burst sizes are negotiated. Flow control is accomplished using buffers (transport-user-level control) and rate control (transport-protocol-level control). Either transport user of a connection can limit the flow of data by not providing a buffer. Additionally, NETBLT uses burst size and burst rate parameters to accomplish rate control. NETBLT uses selective retransmission for error recovery. After a transport sender has transmitted a whole buffer, it waits for a control TPDU from the transport receiver. This TPDU can be a RESEND, indicating lost TPDUs, or an OK, acknowledging the whole buffer. A
GO allows the transmission of another buffer. Instead of waiting after each buffer, a multiple buffering mechanism
can be used [Dupuy et al. 1992].
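NETBLT's burst-based rate control can be illustrated with a toy model (not NETBLT's actual wire behavior): TPDUs leave in bursts of `burst_size`, one burst every `burst_interval` seconds, so throughput is capped regardless of how much buffer space is available:

```python
# Toy model of burst-based rate control: TPDU i departs with burst number
# i // burst_size, and bursts are spaced burst_interval seconds apart.
# This caps the sending rate at burst_size / burst_interval TPDUs per second.

def departure_times(n_tpdus: int, burst_size: int, burst_interval: float):
    """Return the scheduled send time of each TPDU under rate control."""
    return [(i // burst_size) * burst_interval for i in range(n_tpdus)]

times = departure_times(n_tpdus=6, burst_size=2, burst_interval=0.5)
print(times)  # [0.0, 0.0, 0.5, 0.5, 1.0, 1.0]
```

This is the transport-protocol-level control mentioned above; the buffer-based, transport-user-level control acts independently by simply withholding the next buffer.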
The Versatile Message Transaction Protocol (VMTP) (see Table III) was designed at Stanford University to provide high performance communication service for distributed operating systems, including file access, remote procedure calls (RPC), real-time datagrams and multicast [Cheriton and Williamson 1989]. VMTP is a request-response protocol that uses timer-based connection management to provide communication
between network-visible entities. Each entity has a 64-bit identifier that is unique, stable, and independent of host address. The latter property allows entities to be migrated and handled independent
of network layer addressing, facilitating process migration and mobile and multihomed hosts [Cheriton and Williamson 1989]. Each request (and response) is identified by a transaction identifier. In the common case, a client increments its transaction identifier
and sends a request to a single server; the server sends back a response with the same (Client, Transaction) identifier. A response implicitly acknowledges the request, and each new request implicitly acknowledges the last response sent to this client by the server. Multicast is realized by sending to a group of servers. Datagram support is provided by indicating in the request that no response is expected. Additionally, VMTP provides a streaming mode in which an entity can issue a stream of requests, receiving the responses back asynchronously [Williamson and Cheriton 1989]. Flow control is achieved by a rate control scheme borrowed from NETBLT with negotiated interpacket delay time.
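The (Client, Transaction) matching described above can be sketched in a few lines; the class and method names here are hypothetical, not part of VMTP:

```python
# Hypothetical sketch of VMTP-style transaction identification: a client
# numbers each request, and a response is accepted only if it echoes the
# (client, transaction) pair of an outstanding request.

class Client:
    def __init__(self, client_id: int):
        self.client_id = client_id
        self.transaction = 0
        self.outstanding = set()

    def send_request(self):
        self.transaction += 1  # new transaction identifier
        key = (self.client_id, self.transaction)
        self.outstanding.add(key)
        return key

    def accept_response(self, key) -> bool:
        # A response implicitly acknowledges the matching request.
        if key in self.outstanding:
            self.outstanding.remove(key)
            return True
        return False  # duplicate or stale response is discarded

c = Client(client_id=7)
req = c.send_request()
print(c.accept_response(req))  # True: matches the outstanding request
print(c.accept_response(req))  # False: duplicate response
```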
Transaction TCP (T/TCP) (see Table III) is a backwards-compatible extension of TCP that provides efficient transaction-oriented service in addition to CO service [Braden 1992a; 1994]. The goal of
T/TCP is to allow each transaction to be efficiently performed as a single incarnation of a TCP connection. It introduces two major improvements over TCP. First, after an initial transaction is handled using a 3-way-handshake connection, subsequent transactions streamline connection establishment through the use of a 32-bit incarnation number, called a “connection count” (CC), carried in each TPDU. T/TCP uses the monotonically increasing CC values in initial CR-TPDUs to bypass the 3-way-handshake, using a mechanism called TCP Accelerated Open. With this mechanism, a transport entity needs to cache a small amount of state for each remote peer entity. The second improvement is that T/TCP shortens the delay in the TIME-WAIT state (see footnote 12 below). T/TCP defines three new TCP options, each of which carries one 32-bit CC value. These options accelerate connection setup for transactions. T/TCP includes all normal TCP semantics, and operates exactly
as TCP for all features other than connection establishment and termination.
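The Accelerated Open check can be modeled simply: the server caches the last CC value seen from each peer and skips the handshake only when a new request's CC is strictly larger. This is a toy model of the decision logic, not T/TCP's actual implementation:

```python
# Toy model of the "TCP Accelerated Open" decision: a server caches the
# last connection count (CC) seen from each client, and bypasses the
# 3-way handshake only for a strictly increasing CC.

cc_cache = {}

def accept_without_handshake(client: str, cc: int) -> bool:
    last = cc_cache.get(client)
    cc_cache[client] = cc
    if last is None or cc <= last:
        # Unknown peer or non-increasing CC: fall back to the
        # ordinary 3-way handshake.
        return False
    return True

print(accept_without_handshake("hostA", 100))  # False: first contact
print(accept_without_handshake("hostA", 101))  # True: CC increased
print(accept_without_handshake("hostA", 101))  # False: duplicate CC
```

The strictly-increasing test is what lets the server reject old duplicate connection requests without the handshake's round trip.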
Real-time Transport Protocol (RTP) was designed for real-time multiparticipant multimedia applications
[Schulzrinne 1996; Schulzrinne et al. 1996]. Even though RTP is called a transport protocol by its designers, this sometimes creates confusion, because RTP by itself does not provide a complete transport service. RTP TPDUs must be encapsulated within the TPDUs of another transport protocol that provides framing, checksums, and end-to-end delivery, such as UDP. The main transport layer functions performed by RTP itself are sequence numbering, used for loss detection and reordering, and timestamping, used for media synchronization and jitter estimation.
12 When a TCP entity performs an active close and sends the final ACK, that entity must remain in the TIME-WAIT state for twice the Maximum Segment Lifetime (MSL), the maximum time any TPDU can exist in the network before being discarded. This allows time for the other TCP entity to send and resend its final ACK in case the first copy is lost. This closing scheme prevents TPDUs of a closed connection from appearing in a subsequent connection between the same pair of TCP entities [Braden 1992b].
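The sequence-number and timestamp fields that carry RTP's transport-layer functions sit in a fixed 12-byte header, which can be packed as follows (a sketch based on the standard RTP header layout, not something given in the text):

```python
import struct

# Sketch of the fixed 12-byte RTP header: version/padding/extension/CSRC
# count in the first byte, marker + payload type in the second, then the
# 16-bit sequence number, 32-bit timestamp, and 32-bit SSRC identifier.

def rtp_header(seq: int, timestamp: int, ssrc: int, payload_type: int) -> bytes:
    vpxcc = 2 << 6               # version 2, no padding/extension, CC = 0
    m_pt = payload_type & 0x7F   # marker bit clear
    return struct.pack("!BBHII", vpxcc, m_pt, seq, timestamp, ssrc)

hdr = rtp_header(seq=1, timestamp=160, ssrc=0x1234, payload_type=0)
print(len(hdr), hdr.hex())  # 12 bytes; first byte 0x80 marks version 2
```

In practice this header would be placed inside a UDP message, which supplies the framing, checksum, and end-to-end delivery that RTP itself leaves out.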
The Xpress Transport Protocol’s design (XTP Version 4.0) (see Table IV) was coordinated within the XTP Forum to support a variety of applications such as multimedia distribution and distributed applications over WANs as well as LANs [Strayer et al. 1992]. Originally XTP was designed to be implemented in VLSI; hence it has a 64-bit alignment, a fixed-size header, and fields likely to control a TPDU’s initial processing located early in the header. However, no hardware implementations were ever built. XTP combines classic functions of TCP, UDP, and TP4, and adds new services such as transport multicast, multicast group management, priorities, rate
and burst control, and selectable error and flow control mechanisms. XTP can operate on top of network protocols such as IP or ISO CLNP, data link protocols such as 802.2, or directly on top of the AAL of ATM. XTP simply requires framing and end-to-end delivery from the underlying service. One of XTP’s most important features is the orthogonality it provides between communication paradigm, error control and flow control. An XTP user can choose any communication paradigm (CO, CL, or transaction-oriented) and whether or not to enable error control and/or flow control. XTP uses both window-based and rate-based flow control.
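XTP's orthogonality can be pictured as three independent knobs, where TCP-like and UDP-like behavior are just two of the possible combinations. The option names below are illustrative, not XTP's actual field names:

```python
from dataclasses import dataclass

# Illustration of XTP's orthogonal options: the communication paradigm and
# the error/flow control mechanisms are chosen independently, rather than
# being bundled together as in TCP (all on) or UDP (all off).
# The names here are illustrative, not XTP's actual fields.

@dataclass(frozen=True)
class XtpOptions:
    paradigm: str        # "CO", "CL", or "transaction"
    error_control: bool
    flow_control: bool

    def __post_init__(self):
        if self.paradigm not in ("CO", "CL", "transaction"):
            raise ValueError("unknown paradigm")

# Two of the eight-plus combinations an XTP user could select:
tcp_like = XtpOptions("CO", error_control=True, flow_control=True)
udp_like = XtpOptions("CL", error_control=False, flow_control=False)
print(tcp_like)
print(udp_like)
```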
To conclude with a few sentences, the Internet transport layer is equivalent to the Transport Layer in the OSI model, and it is implemented by TCP and the User Datagram Protocol (UDP). TCP provides reliable data transport, while UDP provides unreliable data transport. TCP (Transmission Control Protocol) is the most commonly used protocol on the Internet, and the reason for this is that TCP offers error correction: when TCP is used there is a “guaranteed delivery.” This guarantee comes from acknowledgments and retransmission. The receiver acknowledges the data it has received, and if a segment is lost or corrupted in transit, the sender retransmits it until the data arrives complete and identical to the original. TCP also uses flow control, which paces the sender so that it does not overwhelm the receiver. UDP (User Datagram Protocol) is another commonly used protocol on the Internet. However, UDP is rarely used to send important data such as webpages or database information; UDP is commonly used for streaming audio and video. Streaming media such as Windows Media audio files (.WMA), Real Player (.RM), and others use UDP because it offers speed. The reason UDP is faster than TCP is that it has no flow control or error correction: data lost or corrupted on the way is simply never recovered. Remember that UDP is only concerned with speed; this trade of reliability for speed is the main reason why streaming media is often not high quality.
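The contrast in the conclusion can be shown with a toy stop-and-wait model: the "reliable" sender retransmits each packet until it gets through, while the "unreliable" sender transmits once and accepts the loss. The 30% loss rate is an arbitrary illustration, not a measured figure:

```python
import random

# Toy model contrasting TCP-like and UDP-like delivery over a lossy
# network. "Reliable" retransmits until each packet is delivered;
# "unreliable" sends each packet exactly once. The 30% loss rate is an
# arbitrary illustration.

def send(packets, reliable: bool, loss_rate: float = 0.3, seed: int = 1):
    rng = random.Random(seed)  # seeded so the run is repeatable
    delivered, transmissions = [], 0
    for pkt in packets:
        while True:
            transmissions += 1
            if rng.random() >= loss_rate:  # the packet got through
                delivered.append(pkt)
                break
            if not reliable:               # UDP-like: no retransmission
                break
    return delivered, transmissions

data = list(range(10))
print(send(data, reliable=True))   # everything delivered, extra sends
print(send(data, reliable=False))  # exactly 10 sends, some data lost
```

This is the trade-off in miniature: the reliable mode pays in extra transmissions (and hence delay), the unreliable mode pays in missing data, which for streamed audio or video shows up as glitches rather than stalls.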
 Andrew S. Tanenbaum “Computer Networks” 4th Edition – Chapter 6
 Larry L. Peterson, Bruce S. Davie “Computer Networks: A Systems Approach, Second Edition”
 William Stallings “Data and Computer Communications (8th Edition)”
 Douglas E. Comer, David L. Stevens “Internetworking with TCP/IP” Vol. III
 W. Richard Stevens “TCP/IP Illustrated, Vol. 1: The Protocols (Addison-Wesley Professional Computing Series)”
 PDF presentation about TCP/IP (given in the materials used folder)
 Joseph Davies “TCP/IP Fundamentals for Microsoft Windows Server 2008”
 Martin W. Murhammer, Orcun Atakan, Stefan Bretz, Larry R. Pugh, Kazunari Suzuki, David H. Wood “TCP/IP Tutorial and Technical Overview”