Category Archives: Linux networking

ttcp vs iperf

You can use the Test TCP utility (TTCP) to measure TCP throughput over an IP path. To use it, start the receiver on one side of the path, then start the transmitter on the other side. The transmitting side sends a specified amount of data over TCP to the receiving side. At the end of the test, both sides report the number of bytes transmitted and the elapsed time, and you can use these figures to calculate the actual throughput of the link. For general information on TTCP, refer to Network Performance Testing with TTCP.

The TTCP utility can be effective in determining the actual bit rate of a particular WAN or modem connection. However, you can also use this feature to test the connection speed between any two devices with IP connectivity between them.

Server: ./ttcp -r -s -p 9999

Client: ./ttcp -t -p 9999 <server_ip> < /boot/vmlinuz
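The throughput calculation is just arithmetic on the two figures each side reports; a quick sketch with made-up numbers (16 MiB transferred in 8 seconds):

```shell
# Hypothetical ttcp results: bytes transferred and elapsed seconds
BYTES=16777216
ELAPSED=8

# throughput = bytes * 8 bits / seconds; divide by 1000 for kbit/s
echo "$(( BYTES * 8 / ELAPSED / 1000 )) kbit/s"   # prints "16777 kbit/s"
```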

iperf: test network performance

This tool measures network performance. Iperf was originally developed by NLANR/DAST as a modern alternative for measuring TCP and UDP bandwidth performance.

Server:
iperf -s -p 9999

Client:
iperf -c 192.168.122.150 -p 9999

Iperf features:

TCP
  • Measure bandwidth.
  • Report MSS/MTU size and observed read sizes.
  • Support for TCP window size via socket buffers.
  • Multi-threaded if pthreads or Win32 threads are available; client and server can have multiple simultaneous connections.

UDP
  • Client can create UDP streams of specified bandwidth.
  • Measure packet loss.
  • Measure delay jitter.
  • Multicast capable.
  • Multi-threaded if pthreads are available; client and server can have multiple simultaneous connections. (This doesn't work in Windows.)

General
  • Where appropriate, options can be specified with K (kilo-) and M (mega-) suffixes, so 128K instead of 131072 bytes.
  • Can run for a specified time, rather than a set amount of data to transfer.
  • Picks the best units for the size of data being reported.
  • Server handles multiple connections, rather than quitting after a single test.
  • Print periodic, intermediate bandwidth, jitter, and loss reports at specified intervals.
  • Run the server as a daemon.
  • Run the server as a Windows NT service.
  • Use representative streams to test how link layer compression affects your achievable bandwidth.
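The UDP features above are reached with -u; a sketch reusing the same hypothetical host and port as the TCP example (-b sets the offered bandwidth, and the server's report includes jitter and packet loss):

```shell
# Server: receive UDP and report bandwidth, jitter and packet loss
iperf -s -u -p 9999

# Client: offer an 8 Mbit/s UDP stream for 10 seconds,
# printing interim reports every 2 seconds (-i)
iperf -c 192.168.122.150 -u -p 9999 -b 8M -t 10 -i 2
```

This requires two live hosts, so treat it as a command sketch rather than something to paste blindly.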

tc: limit incoming bandwidth on OpenVZ

DEV=venet0
# remove any existing root qdisc (ignore the error if there is none)
tc qdisc del dev $DEV root 2>/dev/null
# CBQ root qdisc on a 100 Mbit link
tc qdisc add dev $DEV root handle 1: cbq avpkt 1000 bandwidth 100mbit
# class 1:1 capped at 256 kbit
tc class add dev $DEV parent 1: classid 1:1 cbq rate 256kbit allot 1500 prio 5 bounded isolated
# send traffic destined to X.X.X.X into class 1:1
tc filter add dev $DEV parent 1: protocol ip prio 16 u32 match ip dst X.X.X.X flowid 1:1
# fair queueing between flows inside the class
tc qdisc add dev $DEV parent 1:1 sfq perturb 10
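Once the rules are in place, the -s statistics commands show whether traffic is actually hitting the 256 kbit class; this is a read-only check against the same device:

```shell
DEV=venet0
# per-qdisc counters: 'dropped' and 'overlimits' rise when shaping kicks in
tc -s qdisc show dev $DEV
# per-class counters: bytes/packets sent through class 1:1
tc -s class show dev $DEV
# confirm the u32 filter is attached under the root qdisc
tc -s filter show dev $DEV parent 1:
```

These need the venet0 device (and usually root) to be present, so the output here depends on your container host.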

traceroute and tracepath

When you execute a traceroute command, your machine sends out 3 UDP packets with a TTL (Time-to-Live) of 1.  When those packets reach the next hop router, it decrements the TTL to 0 and therefore discards the packet.  It sends an ICMP Time-to-Live Exceeded in Transit message (Type 11, Code 0) back to your machine, with itself as the source address, so you now know the address of the first router in the path.

Next your machine sends 3 UDP packets with a TTL of 2.  The first router, which you already know, decrements the TTL to 1 and passes the packets on; the next router decrements it to 0, discards the packet, and sends the same ICMP Time-to-Live Exceeded message with its own address as the source.  You now know the first 2 routers in the path.

This continues until the packets reach the destination.  Since the UDP packets carry the destination address of the host you are probing, once they arrive the destination tries to deliver them to the destination port you chose.  Since that is an uncommon port, they will most likely be rejected with an ICMP Destination Unreachable (Type 3), Port Unreachable (Code 3) message.  Your machine interprets this reply as the last hop, so traceroute exits, giving you the hops between you and the destination.

The UDP packets are sent from a high port to another high port.  On a Linux box these ports are not the same, though both are usually in the 33000 range.  The source port stays the same throughout the session, while the destination port is increased by one for each packet sent.

One note: traceroute actually sends its probes one at a time.  It sends a UDP packet, waits for the returning ICMP message, sends the second packet, waits, and so on.

If during the session you receive * * *, it can mean that a router in the path does not return ICMP messages, that it returns them with a TTL too small to reach your machine, or that it is running buggy software.  After a * * * within the path, traceroute still increments the TTL by 1, continuing the path determination.
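tracepath produces a similar hop listing without needing root privileges, and additionally discovers the path MTU along the way. A quick sketch (the target is hypothetical; 127.0.0.1 works as a local sanity check):

```shell
TARGET=127.0.0.1
# -n: print addresses numerically, skipping reverse DNS lookups
tracepath -n "$TARGET"
```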

linux – check network card supported speed

ethtool eth0
Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Speed: 1000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        MDI-X: off
        Supports Wake-on: pumbg
        Wake-on: g
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: yes

curl usage

curl -IL "URL"

This will send a HEAD request (-I), follow all redirects (-L), and display some useful information at the end.

You can get more with a GET request:

curl -sL -w "%{http_code} %{url_effective}\n" "URL" -o /dev/null
  • url_effective
  • http_code
  • http_connect
  • time_total
  • time_namelookup
  • time_connect
  • time_pretransfer
  • time_redirect
  • time_starttransfer
  • size_download
  • size_upload
  • size_header
  • size_request
  • speed_download
  • speed_upload
  • content_type
  • num_connects
  • num_redirects
  • ftp_entry_path
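For example, the time_* variables give a quick latency breakdown of a single request ("URL" is a placeholder, as above):

```shell
curl -sL -o /dev/null \
     -w "dns:        %{time_namelookup}s\nconnect:    %{time_connect}s\nfirst byte: %{time_starttransfer}s\ntotal:      %{time_total}s\n" \
     "URL"
```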

IPADDR_START

cd /etc/sysconfig/network-scripts

ls ifcfg-eth0-range*

If you already have a range file, you will need to create a new one for the new range of IPs you are adding, e.g. 'nano ifcfg-eth0-range1'.  If you have one named range1, name the next range2 and so on.

ifcfg-eth0-range1

Place the following text in the file:

IPADDR_START=192.168.0.10
IPADDR_END=192.168.0.110
CLONENUM_START=0

Note: CLONENUM_START defines the alias number the range starts at.  If this is the second range file, you will need to set CLONENUM_START to a value higher than the highest alias already assigned.  To check what is currently in use, run 'ifconfig -a | grep eth0'.  This lists devices such as eth0:0, eth0:1, eth0:2, and so on.  If you are currently using up to eth0:16, set CLONENUM_START to 17 to assign the IPs correctly.
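A small helper can work out the next CLONENUM_START from the aliases already in use; a sketch fed with sample data (in practice, pipe in `ifconfig -a | grep -o 'eth0:[0-9]*'` instead):

```shell
# Sample alias list as grep would extract it (hypothetical)
ALIASES="eth0:0
eth0:1
eth0:16"

# highest alias number in use; the next range starts one above it
LAST=$(echo "$ALIASES" | cut -d: -f2 | sort -n | tail -1)
echo "CLONENUM_START=$(( LAST + 1 ))"   # prints "CLONENUM_START=17"
```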

packet loss on NIC interface

Default NIC settings are good for most cases; however, there are times when you need to do some performance tuning.
When you start to observe an increasing number of dropped RX packets, it means your system cannot process incoming packets fast enough. Check your monitoring system to correlate this issue with increased network traffic over the same period.
# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:22:19:50:ea:76
inet addr:192.168.x.x  Bcast:192.168.x.x  Mask:255.255.xxx.xxx
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:3208932 errors:0 dropped:19188 overruns:0 frame:0
TX packets:1543138 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000

First verify current NIC settings:
# ethtool -g eth0
Ring parameters for eth0:
Pre-set maximums:
RX:             1020
RX Mini:        0
RX Jumbo:       4080
TX:             255
Current hardware settings:
RX:             255
RX Mini:        0
RX Jumbo:       0
TX:             255

Increasing the RX ring buffer should fix this issue:
# ethtool -G eth0 rx 512

To make this setting persistent, add the command to the /etc/rc.local script.
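To confirm the larger ring actually helped, you can watch the kernel's drop counter in /sys instead of re-parsing ifconfig output; the interface name here is an assumption:

```shell
IFACE=eth0
# cumulative count of RX packets dropped by the kernel on this interface;
# if it stops growing under the same load, the bigger ring did its job
cat /sys/class/net/$IFACE/statistics/rx_dropped
```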