The internet is primarily made up of TCP packets, but packet loss is usually measured with connectionless protocols such as Internet Control Message Protocol (ICMP) or User Datagram Protocol (UDP). The reason is that TCP is a connection-oriented protocol: it automatically retransmits lost packets, so neither the user nor the application ever sees that loss occurred. With ICMP or UDP, an application or user can send one packet and expect exactly one packet back. If a UDP or ICMP packet is sent but no packet is received in return from the remote end, then packet loss likely occurred.
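The one-packet-out, one-packet-back idea can be sketched with UDP. The example below is a minimal illustration, not a production probe: it starts a local UDP echo server (a stand-in for a cooperative remote endpoint), sends a series of numbered datagrams, and counts each probe that receives no reply within a timeout as lost.

```python
import socket
import threading

def udp_echo_server(sock):
    # Echo every datagram back to whoever sent it.
    while True:
        try:
            data, addr = sock.recvfrom(1024)
        except OSError:
            break  # socket was closed; stop serving
        sock.sendto(data, addr)

def measure_loss(host, port, probes=10, timeout=1.0):
    """Send `probes` UDP datagrams; one reply is expected per probe.
    Any probe with no reply within `timeout` seconds counts as lost."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    lost = 0
    for seq in range(probes):
        sock.sendto(str(seq).encode(), (host, port))
        try:
            sock.recvfrom(1024)  # exactly one packet back per packet sent
        except socket.timeout:
            lost += 1            # no reply: treat the probe as lost
    sock.close()
    return 100.0 * lost / probes

# Demo against a local echo server, so the observed loss should be 0%.
server_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server_sock.bind(("127.0.0.1", 0))       # OS picks a free port
port = server_sock.getsockname()[1]
threading.Thread(target=udp_echo_server, args=(server_sock,), daemon=True).start()

loss_pct = measure_loss("127.0.0.1", port)
server_sock.close()
print(f"packet loss: {loss_pct:.0f}%")
```

On the loopback interface no packets should be dropped; against a real remote host over a lossy link, some probes would time out and the percentage would rise accordingly.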
For example, ping is a utility that uses ICMP (or, in some implementations, UDP) packets. In addition to measuring latency, ping also reports packet loss. If you open a terminal or command prompt and run the ping command against a remote server, you will see a percentage of packet loss when the command completes.
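That percentage can be pulled out of ping's output programmatically. The sketch below parses the summary line printed by Linux iputils ping (the exact wording varies between operating systems, so the regex is an assumption tied to that format); the sample string stands in for real command output.

```python
import re

def parse_ping_loss(output):
    """Extract the packet-loss percentage from a ping summary line,
    assuming the Linux iputils format: 'N% packet loss'."""
    match = re.search(r"(\d+(?:\.\d+)?)% packet loss", output)
    return float(match.group(1)) if match else None

# Illustrative summary line from a run like `ping -c 4 example.com`:
sample = "4 packets transmitted, 3 received, 25% packet loss, time 3004ms"
print(parse_ping_loss(sample))  # → 25.0
```

In practice you would capture the real output of ping (for example via `subprocess.run(["ping", "-c", "4", host], capture_output=True)`) and feed its stdout to the parser.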