Sunday, July 21, 2019

Internet Protocol Version 4 Analysis

Chapter 2: Literature Review

2.1 Introduction

Multimedia streaming over the Internet is transforming the communication, entertainment and interactive gaming industries. The web has become a popular medium for video streaming because the user does not have to wait for a large file to download before seeing the video or hearing the sound; instead, the media is sent in a continuous stream and played as it arrives. Streaming can also integrate other media formats, so text, video, audio, images and even live radio and TV broadcasts can all be delivered through a single medium. These applications place greater demands on bandwidth, latency and reliability than traditional data applications, and those demands must be met to support the growth of multimedia technology in the future [1]. Transporting multimedia traffic over networks is becoming more challenging because multimedia is becoming cheaper and is therefore used more and more. The problems of carrying multimedia flows over networks relate mainly to the bandwidth they require and to the strict maximum-delay requirements that must be met [2]. This is especially important when multimedia applications have to provide users with real-time interaction.

Because of the rapid growth of Internet usage and the requirements of new applications, IPv4 is no longer adequate for future networks. Many new devices, such as mobile phones, require an IP address to connect to the Internet, so a new protocol that provides new services is needed. To overcome these problems, a new version of the Internet Protocol was introduced: the Internet Protocol next generation (IPng, or IPv6), designed by the IETF [3] to replace the current version, IP version 4 (IPv4). IPv6 is designed to solve the problems of IPv4 by providing the same function without the same limitations; it is not totally different from IPv4. The differences between IPv6 and IPv4 fall into five major areas: addressing, routing, security, configuration and support for mobile devices [4]. As with any new development, the problems of the current Internet Protocol led researchers to develop new techniques to solve them. Changes were made to the current protocol, but they did not help much, and in the end this led to the development of a new protocol, known as IPv6 or IPng.

2.2 OSI 7 Layer

Computer networks are complex dynamic systems, and it is a difficult task to understand, design and implement one. Networking protocols must be defined for everything from low-level computer communication up to how application programs communicate. Dividing this work into layers simplifies the solution; the main idea behind layering is that each layer is responsible for a different task. The Open Systems Interconnection (OSI) Reference Model defines seven layers [5]:

Physical Layer. Deals, for instance, with the conversion of bits to electrical signals and bit-level synchronization.
Data Link Layer. Responsible for transmitting information across a link, detecting data corruption, and addressing.
Network Layer. Enables any two parties in the network to communicate with each other.
Transport Layer. Establishes reliable communication between a pair of end systems and deals with lost and duplicated packets.
Session Layer. Responsible for dialogue control and synchronization.
Presentation Layer. Its main task is to represent data in a way convenient for the user.
Application Layer. Covers applications such as Web browsing and file transfer.

The Network Layer is the most interesting layer in the context of this project; the following section gives a closer view of it.

2.3 Network Layer

As mentioned above, this layer is responsible for enabling communication between any two parties. The most widely used method for transporting data within and between communication networks is the Internet Protocol (IP).

2.3.1 Internet Protocol

IP is a protocol that provides a connectionless, unreliable, best-effort packet delivery system. More details on these network service types are given below [5]. In a connectionless model, each data packet is transferred independently of all others and carries the full source and destination addresses. It is worth mentioning that another type is the connection-oriented model; however, that model and its details are beyond the scope of this project and will not be pursued in this report. The reader can consult [5] for further information on this type of service. Unreliable delivery means that packets may be lost, delayed, duplicated, delivered out of order (in an order other than that in which they were sent), or damaged in transmission.

2.4 Internet Protocol Version 4

IPv4 is the current protocol for communication on the Internet and underlies most network communication today, such as TCP/IP and UDP/IP. The largest weakness of IPv4 is its address space [7]. Each IPv4 address is only 32 bits long and consists of two parts, a network identifier and a host identifier [5]. The standard way of displaying an IPv4 address is as the decimal values of its four octets, each separated by a period, for example 192.168.2.5. Traditionally [6], IP addresses were assigned using classful addressing. Five address classes were defined, A through E. A class A network can address 16,777,214 hosts, a class B network 65,534 hosts, and a class C network 254 hosts. Class D is reserved for multicasting and class E is a block of addresses reserved for future use [7]. Because the class D and E addresses are not used to address public hosts, the rest of the address range is carved up into classes A to C. As soon as a site connects to the Internet it is given an entire class C; if, as is common, the site only needs one or two addresses, over 200 addresses are wasted. Once a site has more than 254 addressable machines it needs an entire class B, wasting over 65,000 addresses, and so on. This allocation scheme is clearly inefficient and wastes much of a limited resource.
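To make the classful scheme concrete, the short Python sketch below (an illustration added here, not taken from the cited works) derives the class of a dotted-decimal address from its first octet; the per-network host counts match the figures quoted above.

```python
# Illustrative sketch of classful IPv4 addressing: the class is determined by the
# value of the first octet, and the usable hosts per network are 2^n - 2, where n
# is the number of host bits. Example values only; real deployments now use CIDR.
def classify_ipv4(address):
    first_octet = int(address.split(".")[0])
    if first_octet <= 127:
        return "A", 2**24 - 2   # 16,777,214 hosts per class A network
    if first_octet <= 191:
        return "B", 2**16 - 2   # 65,534 hosts per class B network
    if first_octet <= 223:
        return "C", 2**8 - 2    # 254 hosts per class C network
    if first_octet <= 239:
        return "D", None        # multicast; not used to address individual hosts
    return "E", None            # reserved for future use

print(classify_ipv4("192.168.2.5"))   # ('C', 254)
```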
2.4.1 Header

The header is part of the IP packet [5]. An IPv4 header contains a number of fields; brief explanations of each field are given below.

2.4.2.1 Version
This 4-bit field identifies the version of the IP datagram; for IPv4 it is set to 4.

2.4.2.2 Internet Header Length (IHL)
The Internet Header Length gives the length of the header.

2.4.2.3 Type of Service
In theory this one-octet field should indicate how the datagram is to be handled (for example its priority); in practice it has never really been used.

2.4.2.4 Total Length
The total length of the fragment, data plus header.

2.4.2.5 Identification
This field is used only for fragmentation. Its purpose is to let the destination node perform reassembly, which means the destination must know which fragments belong together, i.e. their source, destination and protocol fields should match.

2.4.2.6 Offset
The offset indicates where this fragment belongs in the reassembled packet. The field is part of the fragmentation mechanism and has similar vulnerabilities to the identification field.

2.4.2.7 Time to Live
TTL bounds how long the datagram may remain in the network, guaranteeing that no datagram exists forever.

2.4.2.8 Protocol
This field identifies the transport protocol, for example UDP or TCP. Since it can carry an arbitrary value indicating some protocol, encapsulation of one datagram inside another (IP tunneling) is possible.

2.4.2.9 Header Checksum
The checksum is used to detect transmission errors. This field was removed in IPv6.

2.4.2.10 Source Address
This field specifies the source address.

2.4.2.11 Destination Address
The destination address (4 octets long) is specified in this field. No attacks related to this field are known.

2.4.2.12 Options
This variable-size field was designed to extend IP communication. Several options are defined for it, among them security, source routing and route recording.

2.4.2.13 Padding
This variable-size field is used to pad the IP header with zeros when the header length is not a multiple of 32 bits.

2.5 Internet Protocol Version 6

IPv6 is a new version of the protocol, specified in RFC 2460 [5], that overcomes the weaknesses of the current protocol in certain respects. It uses a 128-bit address field, four times longer than IPv4 addresses. This address space removes one of the worst issues with IPv4, and IPv6 does not have address classes. In general, IPv4 and IPv6 share a similar basic framework but also have many differences. The most obvious difference at first sight is the addresses themselves. IPv6 addresses range from 0000:0000:0000:0000:0000:0000:0000:0000 to ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff. In addition to this preferred format, IPv6 addresses may be written in two shortened formats:

Omit leading zeros: an IPv6 address may be written with the leading zeros of each group omitted. For example, 1050:0000:0000:0000:0005:0600:300c:326b may be written as 1050:0:0:0:5:600:300c:326b.
Double colon: a series of zero groups may be replaced with a double colon (::). For example, ff06:0:0:0:0:0:0:c3 may be written as ff06::c3. The double colon may be used only once in an address.

IPv6 addresses are thus similar to IPv4 addresses except that they are 16 octets long, and a critical observation is that the present 32-bit IP addresses can be accommodated in IPv6 as a special case of IPv6 addressing. The standard representation of an IPv6 address is eight 16-bit hexadecimal values separated by colons. Not only does IPv6 use a different address representation, it also discards the earlier concept of network classes. Six-byte addresses are very popular in 802 LANs, and the next generation of LANs is expected to use the 8-byte address space specified by the Institute of Electrical and Electronics Engineers (IEEE) [9], which was one argument for making the new IP addresses at least 8 bytes long.
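As a quick illustration of the shortened notations described above (added here for clarity; the use of Python's standard ipaddress module is an assumption of this sketch, not something discussed in the cited sources), the same address can be converted between its full and compressed forms programmatically:

```python
# Illustrative sketch: Python's standard ipaddress module reproduces both shortening
# rules at once (leading zeros removed and the longest run of zero groups collapsed).
import ipaddress

addr = ipaddress.IPv6Address("1050:0000:0000:0000:0005:0600:300c:326b")
print(addr.compressed)   # 1050::5:600:300c:326b   (both shortening rules applied)
print(addr.exploded)     # 1050:0000:0000:0000:0005:0600:300c:326b   (full preferred form)

print(ipaddress.IPv6Address("ff06:0:0:0:0:0:0:c3").compressed)   # ff06::c3
```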
2.5.1 IPv6 Header

Some IPv4 header fields are excluded from IPv6 and some have been made optional; as a result, both the packet processing time and the packet header size are reduced. The header consists of two parts: the basic IPng header and the IPng extension headers.

2.5.2.1 Version
This 4-bit field, as in the IPv4 case, identifies the version of the IP datagram and is set to 6 here. The field is the same in both versions, the reasoning being that the two protocols must coexist during the transition period.

2.5.2.2 Flow Label
This field is 20 bits long and, as yet, no specific functionality has been assigned to it.

2.5.2.3 Payload Length
Only IPv6 has this field. Since the header length is constant in IPv6, a single field suffices; it replaces the IHL and Total Length fields of IPv4 and carries the length of the data, excluding the headers.

2.5.2.4 Next Header
The Next Header field replaces the Protocol field of the IPv4 header.

2.5.2.5 Hop Limit
This field is a hop count that is decremented at each hop; it redefines the Time to Live field of IPv4.

2.5.2.6 Source Address
The source address is carried in this field (16 octets long). No attacks related to this field have been reported.

2.5.2.7 Destination Address
This field (16 octets long) specifies the destination address. No attacks related to this field are known.

IPv6 brings major changes to the IP header. The IPv6 header is far more flexible and contains fewer fields, the number dropping from 13 to 8. Fewer header fields result in a cleaner header format and in Quality of Service (QoS) support that was not present in IPv4, and the IP option fields of the old header have been replaced by a set of optional extension headers. The efficiency of the IPv6 header can be seen by comparing address size to header size: even though an IPv6 address is four times as large as an IPv4 address, the header is only twice as large. Priority traffic, such as real-time audio or video, can be distinguished from lower-priority traffic through a priority field [8]. The experiment in [27] breaks down the various headers of both IPv4 and IPv6 and shows that the extra overhead incurred is minimal. In theory, the performance overhead between the two protocols is so small that the benefits of IPv6 should quickly outweigh the negatives.

Table 1: Packet breakdown and overhead incurred by header information
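As a rough, back-of-the-envelope illustration of this point (the header sizes below are the standard minimum values, while the 1000-byte payload is an assumption of this sketch rather than a figure from [27]):

```python
# Illustrative overhead calculation: per-packet header overhead of IPv4 versus IPv6
# when carrying an RTP/UDP media payload of an assumed size.
IPV4_HEADER = 20   # bytes, minimum header without options
IPV6_HEADER = 40   # bytes, fixed basic header
UDP_HEADER = 8
RTP_HEADER = 12

def overhead(ip_header, payload=1000):
    """Return header overhead as a percentage of the whole packet."""
    headers = ip_header + UDP_HEADER + RTP_HEADER
    return 100.0 * headers / (headers + payload)

print(f"IPv4: {overhead(IPV4_HEADER):.1f}% overhead")   # about 3.8%
print(f"IPv6: {overhead(IPV6_HEADER):.1f}% overhead")   # about 5.7%
```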
2.6 Streaming Overview

In recent years there has been a major increase in multimedia streaming applications such as audio and video broadcast over the Internet. The growing number of Internet subscribers with broadband access, both at work and at home, allows high-quality multimedia applications to be delivered to the user. However, the best-effort Internet is unreliable, with high packet loss and inconsistent packet arrival, and it does not provide any QoS control; this is a crucial issue when dealing with real-time multimedia traffic. Multimedia streaming is a real-time application covering audio and video that is stored on a streaming server and streamed to clients on request; examples include continuous media servers, digital libraries, and shopping and entertainment services. Before streaming, video was usually downloaded; because downloading video files took a long time, streaming was invented to avoid download delays and enhance the user experience. In streaming, video content is played as it arrives over the network, so there is no wait for a complete download. Real-time streaming has a timing constraint in that the data must be played continuously. If packet data do not arrive in time, playback pauses, causing stutter in the multimedia presentation that is definitely annoying to the user. For this reason multimedia streaming requires isochronous processing and end-to-end QoS [10]. The lack of QoS has not prevented the rapid growth of real-time streaming applications; this growth is expected to continue, and multimedia traffic will form a growing portion of the Internet load. The overall behavior of these applications will therefore have a significant impact on other Internet traffic.

2.7 Downloading Versus Streaming Applications

Downloading applications such as FTP involve downloading a file before it is viewed by the user. Examples of multimedia downloading are downloading an MP3 song to an iPod or other portable device, or downloading a video file to a computer via a P2P application such as BitTorrent. Downloading is usually the simplest and easiest way to deliver media to a user. However, it has two potentially important disadvantages for multimedia applications. First, a large buffer is required whenever a large media file, such as an MPEG-4 movie, is downloaded. Second, the time required for the download can be long (depending on network traffic), forcing the user to wait minutes or even hours before being able to view the content. Thus, while downloading is simple and robust, it provides only limited flexibility to users and to application designers. In contrast, streaming splits the media bit stream into separate packets that can be transmitted independently. This lets the receiver decode and play back the parts of the bit stream that have already been received, while the transmitter continues to send further multimedia data packets. It enables a low delay between the moment data is sent by the transmitter and the moment it is viewed by the user. Low delay is of paramount importance for interactive applications such as video conferencing, but it also matters for video on demand, where the user may want to change channels or programs quickly, and for live broadcast, where the delay must remain finite. Another advantage of streaming over downloading is its relatively low storage requirement and the increased flexibility it gives the user. However, streaming applications, unlike downloading applications, have deadlines and other timing requirements to ensure continuous real-time media playout, which creates new challenges in designing communication systems that best support multimedia streaming [12].

2.8 Standards/Protocols for Streaming

A good streaming protocol is required to achieve continuous playback quality with short delay when a user streams multimedia content over the Internet. A streaming protocol provides services such as transport and QoS control mechanisms, including quality adaptation, congestion control and error control, and it is built on top of the network-level and transport-level protocols. Multimedia streaming runs over IP networks and mainly uses the User Datagram Protocol (UDP), although some streaming applications use TCP. Like TCP, UDP is a transport-layer protocol, but it is connectionless.
UDP does not guarantee reliable transmission or in-order packet arrival; under UDP there is no guarantee that a packet will reach its destination at all [16]. UDP packets may be lost in the network when traffic is heavy, so UDP is not suitable for transfers where guaranteed delivery is important and is not used to send data such as web pages or database information. It is, however, commonly used for streaming audio and video: streaming formats such as Windows Media Audio (.WMA), RealPlayer (.RM) and others use UDP because of its speed. UDP is faster than TCP because it has no flow control and no error correction; data sent over the Internet is affected by collisions and errors will be present, but UDP is concerned only with speed, which is one reason why streamed media is not of the highest quality. Nevertheless, UDP is the ideal transport-layer protocol for streaming applications, whose priority is to move packets from the sender to the destination without the extra delay that retransmission of lost packets would introduce. Since UDP does not guarantee packet delivery, the client needs to rely on the Real-time Transport Protocol (RTP) [10].

RTP provides the low-level transport functions suitable for applications transmitting real-time data, such as video or audio, over multicast or unicast services. The RTP standard consists of two elementary services, transmitted over two different channels: the real-time transport protocol itself, which carries the data, and a control and monitoring channel named the RTP Control Protocol (RTCP) [13]. RTP packets are encapsulated within UDP datagrams, which gives high throughput and efficient bandwidth usage. An RTP data packet contains a 12-byte header followed by the payload, which can be a video frame, a set of audio samples, and so on. The header includes a payload type indicating the kind of data contained in the packet (e.g. JPEG video or MP3 audio), a 32-bit timestamp, and a sequence number that allows ordering and loss detection of RTP packets [11]. According to the standard [14], RTP streams can be transported over both UDP and TCP, with a strong preference for the datagram-oriented service offered by UDP. The primary function of RTCP is to provide feedback on the quality of the data distribution; this feedback can be used directly to control adaptive encodings and for fault diagnosis in the transmission. In summary, RTP is a data transfer protocol while RTCP is a control protocol.

The Real-Time Streaming Protocol (RTSP) [25] is a client-server signaling system based on messages in ASCII format. It establishes and controls one or more time-synchronized streams of continuous media such as audio and video. The protocol is intentionally similar in syntax and operation to HTTP and therefore inherits the option of using proxies, tunnels and caches. RTSP works well both for large audiences and for single-viewer media on demand, and it provides control functionality such as pause, fast forward, reverse and absolute positioning, much like a VCR remote control. The additional information needed in the negotiation is carried in the Session Description Protocol (SDP), sent as an attachment to the appropriate RTSP response [13].
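The following minimal sketch (an illustration added here, not code from the cited standards; the payload type, SSRC, clock rate and destination are made-up example values) shows how a sender might wrap encoded media chunks in the 12-byte RTP fixed header described above and transmit them as UDP datagrams:

```python
# Illustrative RTP-over-UDP sender: builds the 12-byte RTP fixed header
# (version, payload type, sequence number, timestamp, SSRC) and sends each
# media chunk as a UDP datagram. Example values only.
import socket
import struct
import time

def build_rtp_packet(payload, seq, timestamp, ssrc, payload_type=96):
    # Byte 0: version=2, padding=0, extension=0, CSRC count=0 -> 0x80
    # Byte 1: marker=0, payload type (96 is a common dynamic payload type)
    header = struct.pack("!BBHII", 0x80, payload_type, seq & 0xFFFF,
                         timestamp & 0xFFFFFFFF, ssrc)
    return header + payload

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # connectionless UDP socket
seq, clock_rate, ssrc = 0, 90000, 0x1234ABCD

for chunk in (b"frame-1", b"frame-2"):                    # stand-ins for encoded media
    ts = int(time.time() * clock_rate) & 0xFFFFFFFF       # 90 kHz media timestamp
    sock.sendto(build_rtp_packet(chunk, seq, ts, ssrc), ("127.0.0.1", 5004))
    seq += 1
```

A real sender would also exchange RTCP reports on a separate port and would advance the timestamp by the number of media samples per packet rather than sampling the wall clock.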
2.10 The Requirements for Multimedia Applications

Different multimedia applications have different QoS requirements, described by QoS parameters such as throughput, delay, delay variation (jitter) and packet loss. In most cases an application's QoS requirements can be determined from the user-level factors that affect the perceived quality of the application [17]. For example, experiments have concluded that, for acceptable quality, the one-way delay for interactive voice should be less than 250 ms; this figure includes the delays contributed by all components of the communication channel, such as source delay, transmission delay, network delay and destination delay. Several factors affect an application's QoS requirements: whether it is interactive or noninteractive, user and application characteristics (delay tolerance and intolerance, adaptive and nonadaptive characteristics) and application criticality (mission-critical versus non-mission-critical applications) [15]. These three factors are discussed in the following sections.

2.10.1 Interactive and Noninteractive Applications

An interactive application involves some form of interaction between two parties: people-to-people, people-to-machine or machine-to-machine. Examples of interactive applications include people-to-people applications such as IP telephony, interactive voice/video and videoconferencing; people-to-machine applications such as video-on-demand (VOD) and streaming audio/video; and machine-to-machine applications such as automatic machine control. The time elapsed between interactions is essential to the success of an interactive application, and the degree of interactivity determines the strictness of the delay requirement. For example, interactive voice applications, which involve human conversation in real time, have stringent delay requirements (on the order of milliseconds). Streaming (playback) video applications involve less interaction and do not require real-time response, so their delay requirements are more relaxed (on the order of seconds). Often an application's delay tolerance is determined by the users' delay tolerance (i.e. higher user tolerance leads to more relaxed delay requirements). Delay jitter is also relevant to QoS support for interactive tasks. Jitter can be corrected by de-jittering buffer techniques, as sketched below; however, the buffer adds delay to the original signal, which in turn affects the interactivity of the task. In general, an application with strict delay requirements also has strict delay jitter requirements [15].

2.10.2 Tolerance and Intolerance

Tolerance and intolerance are also key factors affecting the QoS parameter values required by the user. Latency tolerance or intolerance determines the strictness of the delay requirement; as already mentioned, streaming multimedia applications are more latency tolerant than interactive multimedia applications. The level of latency tolerance depends heavily on user satisfaction, user expectation, and the urgency of the application (for example, whether it is mission critical). Distortion tolerance, the commitment to application quality, depends on user satisfaction, user expectation, and the media types involved. For example, users are more tolerant of video distortion than of audio distortion, so during congestion the network should maintain the quality of the audio output in preference to the quality of the video output [15].
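As a minimal sketch of the de-jitter (playout) buffer idea mentioned in 2.10.1 (the 100 ms playout delay and the class interface are assumptions made for illustration, not values from the cited sources):

```python
# Illustrative de-jitter buffer: arriving packets are kept in sequence-number order
# and released only after a fixed playout delay has elapsed since their arrival,
# trading a little extra latency for smoother playback.
import heapq
import time

class PlayoutBuffer:
    def __init__(self, playout_delay=0.100):   # 100 ms of added latency (assumed)
        self.playout_delay = playout_delay
        self._heap = []                         # entries: (seq, arrival_time, data)

    def push(self, seq, data):
        heapq.heappush(self._heap, (seq, time.monotonic(), data))

    def pop_ready(self):
        """Return, in sequence order, the packets whose playout time has been reached."""
        ready, now = [], time.monotonic()
        while self._heap and now - self._heap[0][1] >= self.playout_delay:
            ready.append(heapq.heappop(self._heap)[2])
        return ready

buf = PlayoutBuffer()
buf.push(2, b"second")        # packets may arrive out of order
buf.push(1, b"first")
time.sleep(0.11)
print(buf.pop_ready())        # [b'first', b'second']
```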
2.10.3 Adaptive and Nonadaptive Characteristics

Adaptive and nonadaptive characteristics describe the mechanisms an application uses to adapt to QoS degradation; the common adaptive techniques are rate adaptation and delay adaptation. A rate-adaptive application can adjust the data rate it injects into the network. During network congestion, such applications reduce the data rate by dropping some packets, increasing the codec's data compression, or changing the multimedia properties. This may degrade the perceived quality, but keeps it within acceptable levels. Delay-adaptive applications tolerate a certain level of delay jitter by deploying a de-jitter buffer or an adaptive playback technique. Adaptation is triggered by some form of implicit or explicit feedback from the network or the end user [15].

2.10.4 Application Criticality

Mission-critical aspects reflect the importance of the application's usage, which determines the strictness of its QoS requirements; failing the mission may have disastrous consequences. For example:

Air Traffic Control Towers (ATCTs): the traffic controller is responsible for guiding the pilot through heading, takeoff and landing. The lives of the pilot and passengers may depend on the promptness and accuracy of the Air Traffic Control (ATC) system.
E-banking systems: failure of such a system may cause losses to the bank, and users are unable to perform online transactions (view account summary, account history and transaction status, manage cheques, transfer funds) or make online payments (loans, bills, credit cards) and other transactions.

2.10.6 Examples of Application Requirements

Video applications can be classified into two groups: interactive video (e.g. video conferencing, long-distance learning, remote surgery) and streaming video (e.g. RealVideo, Microsoft ASF, QuickTime, video on demand, HDTV). As shown in Table 2, the bandwidth requirements of video applications are relatively high and depend on the video codec.

Uncompressed HDTV: 1.5 Gbps
HDTV: 360 Mbps
Standard definition TV (SDTV): 270 Mbps
Compressed MPEG-2: 25-60 Mbps
Broadcast quality HDTV: 19.4 Mbps
MPEG-2 SDTV: 6 Mbps
MPEG-1: 1.5 Mbps
MPEG-4: 5 kbps to 4 Mbps
H.323 (H.263): 28 kbps to 1 Mbps
Table 2: Video codec bandwidth requirements [15]

2.11 Packet Delay

Delay has a direct impact on user satisfaction. Real-time media applications require the delivery of information from source to destination within a certain period of time. Long delays may cause data to miss its playback point, degrading the quality of service of the application, and can cause user frustration during interactive tasks. For example, the International Telecommunication Union (ITU) addresses network delay for voice applications in Recommendation G.114 and defines three bands of one-way delay, as shown in Table 3.

0-150 ms: Acceptable for most user applications.
150-400 ms: Acceptable provided that administrators are aware of the transmission time and the impact it has on the transmission quality of user applications.
> 400 ms: Unacceptable for general network planning purposes; however, in certain exceptional cases this limit may be exceeded.
Table 3: Standard delay limits for voice
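A trivial helper (added here for illustration; the band descriptions paraphrase Table 3) can map a measured one-way delay onto these bands:

```python
# Illustrative mapping of a measured one-way delay onto the three G.114 bands of Table 3.
def g114_band(one_way_delay_ms):
    if one_way_delay_ms <= 150:
        return "acceptable for most user applications"
    if one_way_delay_ms <= 400:
        return "acceptable if the impact on transmission quality is understood"
    return "unacceptable for general network planning"

for delay in (80, 250, 500):
    print(delay, "ms:", g114_band(delay))
```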
In the data transmission process, each packet moves from its source to its destination. The process of data transmission usually starts with a packet from a host.
