
IP Datagram

IP Datagrams: An IP datagram is a highly structured series of fields that are strictly defined. The two primary sections of an IP datagram are the header and the payload.


Header:



  1. Version: The very first field is four bits long and indicates which version of the Internet Protocol is being used. The most common version of IP is version four, or IPv4. Version six, or IPv6, is rapidly seeing more widespread adoption.
  2. Header Length: After the Version field, we have the Header Length field. This is also a four-bit field that declares how long the entire header is. When dealing with IPv4, this is almost always 20 bytes; in fact, 20 bytes is the minimum length of an IP header, since you couldn't fit all the data you need for a properly formatted IP header in any less space (a short parsing sketch follows this list).
  3. Service Type: These eight bits can be used to specify details about quality of service, or QoS, technologies. The important takeaway about QoS is that these services allow routers to make decisions about which IP datagrams may be more important than others.
  4. Total Length Field & Identification: The Total Length field is a 16-bit field used for exactly what it sounds like: to indicate the total length of the IP datagram it's attached to. The Identification field is a 16-bit number used to group messages together. IP datagrams have a maximum size, and you might already be able to figure out what it is. Since the Total Length field is 16 bits and indicates the size of an individual datagram, the maximum size of a single datagram is the largest number you can represent with 16 bits: 65,535 bytes. If the total amount of data that needs to be sent is larger than what can fit in a single datagram, the IP layer needs to split this data up into many individual packets. When this happens, the Identification field is used so that the receiving end understands that every packet with the same value in that field is part of the same transmission.
  5. Flags & Fragmentation Offset: The Flags field is used to indicate whether a datagram is allowed to be fragmented, or whether it has already been fragmented. Fragmentation is the process of taking a single IP datagram and splitting it up into several smaller datagrams. While most networks operate with similar settings for how large an IP datagram is allowed to be, this can sometimes be configured differently. If a datagram has to cross from a network allowing a larger datagram size to one with a smaller datagram size, it has to be fragmented into smaller datagrams. The Fragmentation Offset field contains values used by the receiving end to take all the parts of a fragmented packet and put them back together in the correct order.
  6. TTL (Time to Live): This is an 8-bit field that indicates how many router hops a datagram can traverse before it's thrown away. Every time a datagram reaches a new router, that router decrements the TTL field by one. Once this value reaches zero, a router knows it doesn't have to forward the datagram any further. The main purpose of this field is to make sure that when a routing misconfiguration causes an endless loop, datagrams don't spend all eternity trying to reach their destination.
  7. Protocol: This is another 8-bit field that contains data about which transport layer protocol is being used. The most common transport layer protocols are TCP and UDP.
  8. Header Checksum: This field is a checksum of the contents of the entire IP datagram header. It functions very much like the Ethernet checksum field. Since the TTL field has to be recomputed at every router a datagram touches, the checksum field necessarily changes, too (a checksum sketch follows this list).
  9. Source & Destination IP Address: These are two 32-bit fields containing the IP address of the sender and the IP address of the receiver.
  10. IP Options: This is an optional field used to set special characteristics for datagrams, primarily for testing purposes.
  11. Padding: The Padding field is just a series of zeros used to ensure the header is the correct total size.
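
To make the field layout above concrete, here is a minimal sketch in Python that unpacks the fixed 20-byte portion of an IPv4 header in the order just described. The helper name `parse_ipv4_header` and the returned dictionary keys are illustrative assumptions, not any particular library's API.

```python
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Unpack the fixed 20-byte portion of an IPv4 header (illustrative helper)."""
    if len(raw) < 20:
        raise ValueError("need at least 20 bytes for a minimal IPv4 header")

    # ! = network (big-endian) byte order; B = 1 byte, H = 2 bytes, I = 4 bytes
    (ver_ihl, service_type, total_length,
     identification, flags_fragment,
     ttl, protocol, checksum,
     src, dst) = struct.unpack("!BBHHHBBHII", raw[:20])

    return {
        "version":         ver_ihl >> 4,             # high 4 bits
        "header_length":   (ver_ihl & 0x0F) * 4,     # header length is in 32-bit words
        "service_type":    service_type,
        "total_length":    total_length,             # at most 65,535
        "identification":  identification,
        "flags":           flags_fragment >> 13,     # top 3 bits
        "fragment_offset": flags_fragment & 0x1FFF,  # low 13 bits
        "ttl":             ttl,
        "protocol":        protocol,                 # e.g. 6 = TCP, 17 = UDP
        "header_checksum": checksum,
        "source":      ".".join(str((src >> s) & 0xFF) for s in (24, 16, 8, 0)),
        "destination": ".".join(str((dst >> s) & 0xFF) for s in (24, 16, 8, 0)),
    }
```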
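And here is a minimal sketch of the 16-bit one's complement checksum that IPv4 uses for its Header Checksum field, again with an illustrative function name. Because the TTL byte is part of the data being summed, decrementing the TTL at each hop is exactly what forces every router to recompute this value.

```python
def ipv4_header_checksum(header: bytes) -> int:
    """One's complement sum over 16-bit words, with the stored checksum zeroed."""
    # Zero out the checksum field (bytes 10-11) before summing.
    data = header[:10] + b"\x00\x00" + header[12:]
    if len(data) % 2:                 # pad to an even length if needed
        data += b"\x00"

    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in

    return ~total & 0xFFFF
```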


