Hi,
Regarding the UDP tests, I have some doubts that I'd like to clarify, or at least discuss with someone else.
Consider the following setup for this subject:
- laptop connected to the router through a 2-metre Ethernet cable;
- iperf v2.0.5 running on both the router and the PC (changing only the mode);
- the OEM image has its qdisc set to hyfi_pfifo_fast:
root@OpenWrt:/# tc qdisc
qdisc hyfi_pfifo_fast 0: dev eth0 root refcnt 2 [Unknown qdisc, optlen=24]
qdisc hyfi_pfifo_fast 0: dev wlan0 root refcnt 2 [Unknown qdisc, optlen=24]
!!!Deficit -4, rta_len=48
qdisc pfifo_fast 0: dev wifi0 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
root@OpenWrt:/#
- both custom images use the default qdisc, fq_codel:
root@OpenWrt:QC-IPDock:/# tc qdisc
qdisc fq_codel 0: dev eth0 root refcnt 2 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Commands used:
- iperf in server mode: iperf -u -s
- iperf in client mode: iperf -u -c 192.168.2.x -b 1000M
Note: I am deliberately not changing the packet size or any other parameter.
On OEM image:
(running iperf as server on router)
root@OpenWrt:/# iperf -u -s
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 160 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.2.1 port 5001 connected with 192.168.2.2 port 49200
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 3] 0.0-10.0 sec 1.06 GBytes 911 Mbits/sec 0.016 ms 3813/777747 (0.49%)
[ 3] local 192.168.2.1 port 5001 connected with 192.168.2.2 port 49201
[ 3] 0.0- 9.9 sec 1.08 GBytes 935 Mbits/sec 0.022 ms 10950/796270 (1.4%)
[ 3] local 192.168.2.1 port 5001 connected with 192.168.2.2 port 49202
[ 3] 0.0-10.0 sec 1.06 GBytes 907 Mbits/sec 0.012 ms 2425/774281 (0.31%)
[ 3] local 192.168.2.1 port 5001 connected with 192.168.2.2 port 49203
[ 3] 0.0-10.2 sec 524 MBytes 429 Mbits/sec 15.062 ms 418154/791653 (53%)
[ 3] local 192.168.2.1 port 5001 connected with 192.168.2.2 port 49204
[ 3] 0.0- 9.1 sec 476 MBytes 440 Mbits/sec 0.028 ms 433265/772786 (56%)
[ 3] local 192.168.2.1 port 5001 connected with 192.168.2.2 port 49205
[ 3] 0.0- 9.1 sec 472 MBytes 435 Mbits/sec 0.024 ms 444554/781277 (57%)
[ 3] local 192.168.2.1 port 5001 connected with 192.168.2.2 port 49206
[ 3] 0.0-10.0 sec 542 MBytes 455 Mbits/sec 0.022 ms 375189/761780 (49%)
[ 3] local 192.168.2.1 port 5001 connected with 192.168.2.2 port 49207
[ 3] 0.0-10.0 sec 489 MBytes 410 Mbits/sec 0.036 ms 440635/789700 (56%)
[ 3] local 192.168.2.1 port 5001 connected with 192.168.2.2 port 49208
[ 3] 0.0- 9.9 sec 1.02 GBytes 883 Mbits/sec 0.023 ms 44287/791117 (5.6%)
[ 3] local 192.168.2.1 port 5001 connected with 192.168.2.2 port 49209
[ 3] 0.0-10.0 sec 1.09 GBytes 933 Mbits/sec 0.021 ms 1522/795344 (0.19%)
[ 3] local 192.168.2.1 port 5001 connected with 192.168.2.2 port 49210
[ 3] 0.0-10.0 sec 1.06 GBytes 914 Mbits/sec 0.019 ms 1015/777971 (0.13%)
(running iperf as client on router)
C:\iperf>iperf -u -s
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.2.2 port 5001 connected with 192.168.2.1 port 37917
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 3] 0.0-10.0 sec 590 MBytes 495 Mbits/sec 0.039 ms 904/421734 (0.21%)
[ 3] 0.00-9.99 sec 15 datagrams received out-of-order
[ 4] local 192.168.2.2 port 5001 connected with 192.168.2.1 port 49299
[ 4] 0.0-10.0 sec 601 MBytes 504 Mbits/sec 0.028 ms 1990/430620 (0.46%)
[ 3] local 192.168.2.2 port 5001 connected with 192.168.2.1 port 49679
[ 3] 0.0-10.0 sec 601 MBytes 505 Mbits/sec 0.023 ms 2105/430985 (0.49%)
[ 4] local 192.168.2.2 port 5001 connected with 192.168.2.1 port 48248
[ 4] 0.0-10.0 sec 602 MBytes 505 Mbits/sec 0.038 ms 960/430634 (0.22%)
[ 3] local 192.168.2.2 port 5001 connected with 192.168.2.1 port 57819
[ 3] 0.0-10.0 sec 562 MBytes 472 Mbits/sec 0.027 ms 1717/422803 (0.41%)
[ 4] local 192.168.2.2 port 5001 connected with 192.168.2.1 port 50314
[ 4] 0.0-10.0 sec 577 MBytes 484 Mbits/sec 0.042 ms 847/433097 (0.2%)
On my custom images the behavior is basically the same, so I will not pollute this post further with their outputs. The same thing happens on the 11ac interface.
I know I am setting the bandwidth slightly above the real maximum Ethernet capacity, but that is intentional, to stress the connection. I am also aware that UDP implements no congestion control or flow control (among other mechanisms), which is why I would expect some loss.
But is it normal to see such huge swings in the error rate (i.e. lost datagrams) between consecutive tests?
Has anyone else done similar UDP tests who can comment on this behavior and on these metrics, so I can get an idea of whether they are acceptable?
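As a sanity check on my own numbers: the loss percentage iperf prints can be recomputed from its Lost/Total column with a small awk one-liner (the two input pairs below are values taken from the OEM runs above), which at least confirms the reports are internally consistent:

```shell
# Recompute iperf's loss percentage from its "Lost/Total Datagrams" column.
# Input: one "lost total" pair per line (values from the OEM runs above).
printf '3813 777747\n418154 791653\n' |
    awk '{ printf "%d/%d -> %.2f%% lost\n", $1, $2, $1 / $2 * 100 }'
# -> 3813/777747 -> 0.49% lost
# -> 418154/791653 -> 52.82% lost
```

So the swing between consecutive runs really is from ~0.5% to over 50%, not a reporting artifact.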
I suspect that I can improve this throughput by changing the queue/network scheduling policy, but when I tried changing it to PRIO on my custom K4.1 image, I didn't see any improvement, or anything showing that the policy was working. Also, when trying to apply some of the commands described in the example at https://wiki.openwrt.org/doc/howto/pack … r.example1, I get some strange messages:
root@OpenWrt:QC-IPDock:/#
root@OpenWrt:QC-IPDock:/# iperf -u -s -D
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 160 KByte (default)
------------------------------------------------------------
Running Iperf Server as a daemon
The Iperf daemon process ID : 745
root@OpenWrt:QC-IPDock:/#
root@OpenWrt:QC-IPDock:/#
root@OpenWrt:QC-IPDock:/#
root@OpenWrt:QC-IPDock:/# [ 3] local 192.168.2.1 port 5001 connected with 192.168.2.2 port 60831
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 3] 0.0-10.0 sec 771 MBytes 646 Mbits/sec 0.017 ms 222700/772431 (29%)
[ 0] 0.0-10.0 sec 230 MBytes 193 Mbits/sec 0.005 ms 620530/784431 (79%)
[ 0] 0.0-10.0 sec 245 MBytes 207 Mbits/sec 0.013 ms 628874/803679 (78%)
[ 0] 0.0-10.0 sec 1.02 GBytes 879 Mbits/sec 0.006 ms 33244/780663 (4.3%)
[ 0] 0.0-10.0 sec 1.03 GBytes 886 Mbits/sec 0.006 ms 32538/785940 (4.1%)
[ 0] 0.0-10.2 sec 241 MBytes 197 Mbits/sec 15.081 ms 624693/796343 (78%)
root@OpenWrt:QC-IPDock:/#
root@OpenWrt:QC-IPDock:/#
root@OpenWrt:QC-IPDock:/# insmod /lib/modules/4.1.23/net_sched/sch_prio.ko
module is already loaded - sch_prio
root@OpenWrt:QC-IPDock:/#
root@OpenWrt:QC-IPDock:/# tc -s qdisc show dev eth0
qdisc fq_codel 0: root refcnt 2 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 10460 bytes 20 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
root@OpenWrt:QC-IPDock:/#
root@OpenWrt:QC-IPDock:/# tc qdisc add dev eth0 root handle 1: prio default 30
What is "default"?
Usage: ... prio bands NUMBER priomap P1 P2...[multiqueue]
root@OpenWrt:QC-IPDock:/# tc qdisc add dev eth0 root handle 1: prio
root@OpenWrt:QC-IPDock:/#
root@OpenWrt:QC-IPDock:/# tc -s qdisc show dev eth0
qdisc prio 1: root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
root@OpenWrt:QC-IPDock:/# tc class add dev eth0 parent 1: classid 1:1 prio rate 1000kbit
Error: Qdisc "prio" is classless.
root@OpenWrt:QC-IPDock:/#
root@OpenWrt:QC-IPDock:/# tc -s qdisc show dev eth0
qdisc prio 1: root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
root@OpenWrt:QC-IPDock:/#
root@OpenWrt:QC-IPDock:/# iperf -u -s -D
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 160 KByte (default)
------------------------------------------------------------
Running Iperf Server as a daemon
The Iperf daemon process ID : 756
root@OpenWrt:QC-IPDock:/#
root@OpenWrt:QC-IPDock:/# [ 3] local 192.168.2.1 port 5001 connected with 192.168.2.2 port 62121
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 3] 0.0-10.0 sec 730 MBytes 612 Mbits/sec 0.018 ms 266525/787019 (34%)
[ 0] 0.0-10.2 sec 694 MBytes 568 Mbits/sec 15.250 ms 307865/802608 (38%)
[ 0] 0.0-10.0 sec 714 MBytes 599 Mbits/sec 0.014 ms 282951/792288 (36%)
[ 0] 0.0-10.2 sec 709 MBytes 581 Mbits/sec 15.279 ms 283310/789091 (36%)
[ 0] 0.0-10.0 sec 906 MBytes 760 Mbits/sec 0.009 ms 133781/779729 (17%)
[ 0] 0.0-10.0 sec 730 MBytes 612 Mbits/sec 0.024 ms 279199/799639 (35%)
[ 0] 0.0-10.0 sec 723 MBytes 607 Mbits/sec 0.011 ms 269335/785359 (34%)
[ 0] 0.0-10.0 sec 899 MBytes 754 Mbits/sec 0.006 ms 143421/784914 (18%)
[ 0] 0.0-10.2 sec 751 MBytes 615 Mbits/sec 14.379 ms 258632/794064 (33%)
root@OpenWrt:QC-IPDock:/#
So, according to the information available at http://man.cx/tc and even in the "man tc" output, PRIO is a CLASSFUL qdisc, yet here tc complains that it is CLASSLESS...
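I wonder whether the real problem is that "default" and a per-class "rate" are HTB options rather than PRIO ones (as far as I can tell from the man pages, PRIO creates its band classes automatically when the qdisc is added and takes no rate at all). If so, something along these lines might be what the wiki example was actually after (untested sketch; the rates are placeholders for my setup):

```shell
# Untested sketch: HTB, unlike prio, accepts "default" and per-class "rate".
tc qdisc add dev eth0 root handle 1: htb default 30
tc class add dev eth0 parent 1: classid 1:1 htb rate 1000mbit
tc class add dev eth0 parent 1:1 classid 1:30 htb rate 900mbit ceil 1000mbit
tc -s qdisc show dev eth0   # verify the qdisc took effect
```

Can anyone confirm whether that is the right reading of the example?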
I was also unable to apply the same commands as stated on the OpenWrt link.
Also, the list of network scheduler modules produced by my OpenWrt kernel_menuconfig selection is limited to:
drwxrwxr-x 2 sjuliao sjuliao 4096 Oct 10 14:06 .
drwxrwxr-x 6 sjuliao sjuliao 4096 Oct 10 14:06 ..
-rw-r--r-- 1 sjuliao sjuliao 20264 Oct 10 14:02 sch_cbq.ko
-rw-r--r-- 1 sjuliao sjuliao 8700 Oct 10 14:02 sch_choke.ko
-rw-r--r-- 1 sjuliao sjuliao 8012 Oct 10 14:02 sch_codel.ko
-rw-r--r-- 1 sjuliao sjuliao 10200 Oct 10 14:02 sch_drr.ko
-rw-r--r-- 1 sjuliao sjuliao 8536 Oct 10 14:02 sch_dsmark.ko
-rw-r--r-- 1 sjuliao sjuliao 10452 Oct 10 14:02 sch_esfq.ko
-rw-r--r-- 1 sjuliao sjuliao 10924 Oct 10 14:02 sch_fq.ko
-rw-r--r-- 1 sjuliao sjuliao 10784 Oct 10 14:02 sch_gred.ko
-rw-r--r-- 1 sjuliao sjuliao 18972 Oct 10 14:02 sch_hfsc.ko
-rw-r--r-- 1 sjuliao sjuliao 8092 Oct 10 14:02 sch_hhf.ko
-rw-r--r-- 1 sjuliao sjuliao 19984 Oct 10 14:02 sch_htb.ko
-rw-r--r-- 1 sjuliao sjuliao 4168 Oct 10 12:42 sch_ingress.ko
-rw-r--r-- 1 sjuliao sjuliao 7148 Oct 10 14:02 sch_mqprio.ko
-rw-r--r-- 1 sjuliao sjuliao 7796 Oct 10 14:02 sch_multiq.ko
-rw-r--r-- 1 sjuliao sjuliao 12120 Oct 10 14:02 sch_netem.ko
-rw-r--r-- 1 sjuliao sjuliao 8380 Oct 10 14:02 sch_pie.ko
-rw-r--r-- 1 sjuliao sjuliao 4136 Oct 10 14:02 sch_plug.ko
-rw-r--r-- 1 sjuliao sjuliao 7640 Oct 10 14:02 sch_prio.ko
-rw-r--r-- 1 sjuliao sjuliao 16988 Oct 10 14:02 sch_qfq.ko
-rw-r--r-- 1 sjuliao sjuliao 9496 Oct 10 14:02 sch_red.ko
-rw-r--r-- 1 sjuliao sjuliao 9228 Oct 10 14:02 sch_sfb.ko
-rw-r--r-- 1 sjuliao sjuliao 13048 Oct 10 13:15 sch_sfq.ko
-rw-r--r-- 1 sjuliao sjuliao 10272 Oct 10 14:02 sch_tbf.ko
-rw-r--r-- 1 sjuliao sjuliao 9400 Oct 10 14:02 sch_teql.ko
So I have no sch_pfifo_fast (for instance), which is mentioned at https://wiki.openwrt.org/doc/howto/pack … ssfulqdisc. Is this normal?
Does it depend on some menuconfig selection?
Is there a tutorial or an easy way to set up these network schedulers without much effort, so I can check which ones perform better, or do I have to test them individually? In the latter case, can anyone suggest a good recipe for testing them one by one without depending too much on their specific parameters? My intention is to get a quick, overall idea of their general performance, and only then spend time tuning the most promising ones.
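For context, the kind of quick first pass I have in mind is just looping over candidate root qdiscs with their default parameters and running one UDP test per scheduler (untested sketch; the interface name, server address, and qdisc list are specific to my setup):

```shell
# Untested sketch: swap the root qdisc on eth0 and run one UDP iperf pass
# per scheduler, keeping only the summary line of each client report.
for q in fq_codel codel sfq pie prio; do
    tc qdisc replace dev eth0 root $q || continue  # skip schedulers that fail to load
    echo "=== $q ==="
    iperf -u -c 192.168.2.2 -b 1000M -t 10 | tail -n 1
done
tc qdisc replace dev eth0 root fq_codel            # restore the default qdisc
```

If something like this is a reasonable way to get a first ranking, I would tune only the winners afterwards.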
Kind regards
(Last edited by sjuliao on 10 Oct 2016, 16:41)