OpenWrt Forum Archive

Topic: Qos Script With Overhead Strange Values

The content of this topic has been archived on 21 Apr 2018. There are no obvious gaps in this topic, but there may still be some posts missing at the end.

I was having some problems with QoS when I set the values in /etc/config/qos to my real values: 256/128.
The main hfsc qdisc is created with -70kbit of rate, which is really strange. So, taking a look at /usr/lib/generate.sh, I saw this line.

[ "$overhead" = 1 ] && download=$(($download * 98 / 100 - (80 * 1024 / $download)))

Where is the logic in that line?

If I have a 600kbit link I will get 452kbit, losing 148kbit, which is almost 25%.

If I have a 1024kbit link I will get 923kbit, losing less than 10%.

Where does that formula come from? I think it is wrong...
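
To make it concrete, here is the same integer arithmetic the script does, just plugged with different link speeds (a quick shell sketch reproducing the formula above):

download=256;  echo $(($download * 98 / 100 - (80 * 1024 / $download)))   # prints -70
download=600;  echo $(($download * 98 / 100 - (80 * 1024 / $download)))   # prints 452
download=1024; echo $(($download * 98 / 100 - (80 * 1024 / $download)))   # prints 923

The 80*1024/$download term shrinks as the link gets faster, so slow links lose a much larger share and a 256kbit link even goes negative.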

Actual overhead heavily depends on the type of your line, so it can't easily be put into a single formula. Some schedulers like HTB have their own overhead parameters, i.e. they account for some overhead for every packet they send. That's the solution I'm using. However, this too does not necessarily match the actual overhead. Depending on the type of your line, calculating the actual overhead can be extremely complicated, especially when packets are first encapsulated into another protocol and then transferred in fixed-size cells and so on, so there isn't really any scheduler that gets it 100% right.

Yeah, I agree, but clearly there is some problem with that formula, because for a 256k link you will get -71kbit of rate...

If all that formula does is change the upload/download rate, it should just reduce it by a fixed percentage (and the overhead variable itself could represent the percentage you want to use). Anything else is just arbitrary. Anyway, that's how I do it in all my QoS scripts, like this: TOTAL_RATE=$((($RATE*(100-$RATE_SUB_PERCENT))/100)). I obtain the rate of my DSL line directly from my DSL modem and reduce it by 5 percent to be on the safe side. Additionally, the HTB scheduler I use also tries to account for per-packet overhead (in my case 'mpu 96 overhead 50'), as no matter how many percent you take off the rate, you can probably always exceed it through overhead if you send lots of very small packets.
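
In sketch form, with placeholder numbers (9000kbit and eth0 are not my actual values, just an example of the approach):

RATE=9000                   # line rate in kbit, as reported by the modem
RATE_SUB_PERCENT=5          # fixed safety margin in percent
TOTAL_RATE=$((($RATE*(100-$RATE_SUB_PERCENT))/100))    # 8550
tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:1 htb rate ${TOTAL_RATE}kbit mpu 96 overhead 50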

Where did you get mpu from?
I know PPPoE overhead is 44.

Correct settings depend on your specific situation. I simply took what they used in this thread http://www.opensubscriber.com/message/l … 03651.html - they're probably not the correct values for me either, but it works great nevertheless, so I haven't changed them yet. Either you get some useful information about overhead from your modem so you can actually determine optimal settings, or you have to try and guess what works best for you.

Well, I upgraded my line to 608kbit/160kbit, but even with a 512kbit ceil in HTB or HFSC, when I have P2P running on the other machine the response time gets terrible.
I made a P2P class with rate 12kbit ceil 512kbit prio 7 [minimum priority].
Do you have any suggestions?

Yeah, don't use the prio parameter of HTB if you can help it.

For more suggestions, I'd need to see the whole (HTB) setup; unfortunately I can't help with HFSC because I didn't get HFSC to work properly myself. I'm happy with my HTB setup since 2003 though.

RATE=608 --> 608*48/53 = 550kbit ~=536kbit
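# (the 48/53 factor is the ATM cell payload ratio: each 53-byte ATM cell carries 48 bytes of payload)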
OVERHEAD="mpu 96 overhead 48"
tc qdisc add dev imq0 root handle 1: htb default 40 r2q 1
tc class add dev imq0 parent 1: classid 1:1 htb rate 536kbit $OVERHEAD
tc class add dev imq0 parent 1:1 classid 1:10 htb prio 0 rate  64kbit ceil 536kbit $OVERHEAD [ack/syn/icmp/dns/ssh]
tc class add dev imq0 parent 1:1 classid 1:20 htb prio 1 rate 160kbit ceil 536kbit $OVERHEAD [skype only / small udp packets !p2p ]
tc class add dev imq0 parent 1:1 classid 1:30 htb prio 2 rate 288kbit ceil 536kbit $OVERHEAD [Well Known traffic ]
tc class add dev imq0 parent 1:1 classid 1:40 htb prio 7 rate  12kbit ceil 536kbit $OVERHEAD [P2P/Unknown traffic]
tc qdisc add dev imq0 parent 1:10 handle 100: esfq perturb 10 hash ctnatchg
tc qdisc add dev imq0 parent 1:20 handle 200: esfq perturb 10 hash ctnatchg
tc qdisc add dev imq0 parent 1:30 handle 300: esfq perturb 10 hash ctnatchg
tc qdisc add dev imq0 parent 1:40 handle 400: red min 2572 max 7716 burst 2 avpkt 1492 limit 30864 probability 0.95 [only p2p gets here]
tc filter add dev imq0 parent 1: prio 1 protocol ip handle 1 fw flowid 1:10
tc filter add dev imq0 parent 1: prio 2 protocol ip handle 2 fw flowid 1:20
tc filter add dev imq0 parent 1: prio 3 protocol ip handle 3 fw flowid 1:30
tc filter add dev imq0 parent 1: prio 4 protocol ip handle 4 fw flowid 1:40


The problem is that when P2P is downloading at full speed [536kbit], browsing is too slow...
Ping is not good either. I can see the packets going to the right class, so I cannot understand it.

ping www.uol.com.br
PING www.uol.com.br (200.221.2.45) 56(84) bytes of data.
64 bytes from home.uol.com.br (200.221.2.45): icmp_seq=1 ttl=53 time=329 ms
64 bytes from home.uol.com.br (200.221.2.45): icmp_seq=2 ttl=53 time=678 ms
64 bytes from home.uol.com.br (200.221.2.45): icmp_seq=3 ttl=53 time=411 ms
64 bytes from home.uol.com.br (200.221.2.45): icmp_seq=4 ttl=53 time=728 ms
64 bytes from home.uol.com.br (200.221.2.45): icmp_seq=5 ttl=53 time=611 ms
64 bytes from home.uol.com.br (200.221.2.45): icmp_seq=6 ttl=53 time=415 ms
64 bytes from home.uol.com.br (200.221.2.45): icmp_seq=7 ttl=53 time=558 ms
64 bytes from home.uol.com.br (200.221.2.45): icmp_seq=8 ttl=53 time=552 ms
64 bytes from home.uol.com.br (200.221.2.45): icmp_seq=9 ttl=53 time=613 ms
64 bytes from home.uol.com.br (200.221.2.45): icmp_seq=10 ttl=53 time=613 ms
64 bytes from home.uol.com.br (200.221.2.45): icmp_seq=11 ttl=53 time=579 ms

--- www.uol.com.br ping statistics ---
11 packets transmitted, 11 received, 0% packet loss, time 9997ms
rtt min/avg/max/mdev = 329.396/553.667/728.349/115.506 ms

Sorry that I'm not yet as knowledgeable as you are, but I stumbled upon this thread and your topic seems related.

I have a 10MBit DSL line, and with iptables cleared I get a net download of around 9600kbit/sec. The thing is, once QoS is active the downstream drops to about 50% of that. Only if I increase option download to 20000 do I get the same ballpark performance. Why is it that I have to *double* the rate here?

Slightly tangentially, is it even possible to really shape/queue the downstream? Servers send you data down the pipe, which cannot really be queued properly, right? What is this value for?

towolf wrote:

Servers send you data down the pipe, which cannot really be queued properly, right? What is this value for?

There are two reasons:
- Bottlenecking the connection at your own router may help prevent queues from building up elsewhere (important, as queues mean lag).
- Dropping packets of connections that send too much data (and choke your connection or other traffic) may cause the opposite side to slow down its sending of new data.

It's completely true that you do not have full control over what the provider sends down the pipe. In an optimal situation you would be able to configure traffic shaping on the provider's side. In a good situation you would be able to use more sophisticated methods for downstream shaping (like scaling TCP window sizes and the like). In the current situation all you can do is drop packets and hope that this prevents queues and slows the other side down.

Considering how suboptimal that situation is, it actually works quite well. Dropping a packet here and there is a low price to pay if you get good, balanced overall service quality in return.
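
As a rough illustration of the first point, the crudest form of downstream control is simply policing at the router, something like this (eth0 and the 9000kbit figure are only placeholders, and this is just to show the idea, not what qos-scripts actually installs):

tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 police rate 9000kbit burst 100k drop flowid :1

Everything arriving faster than the configured rate gets dropped, and well-behaved TCP senders back off in response.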

Even so, it's still much more important to shape upload properly, as there is only so much you can do with download shaping. So when rolling your own solution, think about the upload first.

Yeah, thanks. Still, the question remains how to set qos-scripts up. I'm not really fond of the idea of rolling my own custom solution (no time for yet another world of techno-stuff).

How do I determine proper values for

option upload
option download

My method was: measure FTP download and upload speed, 9600 and 960 kbps respectively.
Then fiddle with the values in /etc/config/qos while doing simultaneous bidirectional FTP, until I found that egress 750 and ingress 15600 work best together, i.e., I get simultaneous traffic close to line speed.

Why 15600? It makes no sense, does it?
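
For reference, my /etc/config/qos currently looks roughly like this (only upload and download are the values from above; the remaining options are just the package defaults as I understand them):

config interface wan
        option enabled     1
        option classgroup  "Default"
        option overhead    1
        option upload      750
        option download    15600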

ADDENDUM:

# qos-stat
#################
# EGRESS STATUS #
#################

class hfsc 1: root
Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
period 0 level 2

class hfsc 1:1 parent 1: sc m1 0bit d 0us m2 750000bit ul m1 0bit d 0us m2 750000bit
Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
period 11677 work 48789673 bytes level 1

class hfsc 1:10 parent 1:1 leaf 100: rt m1 437000bit d 1.0ms m2 75000bit ls m1 437000bit d 1.0ms m2 416000bit ul m1 0bit d 0us m2 750000bit
Sent 448257 bytes 8386 pkts (dropped 0, overlimits 0)
period 6157 work 448257 bytes rtwork 292321 bytes level 0

class hfsc 1:20 parent 1:1 leaf 200: rt m1 399000bit d 2.6ms m2 375000bit ls m1 399000bit d 2.6ms m2 208000bit ul m1 0bit d 0us m2 750000bit
Sent 4320 bytes 70 pkts (dropped 0, overlimits 0)
period 70 work 4320 bytes rtwork 4194 bytes level 0

class hfsc 1:30 parent 1:1 leaf 300: ls m1 0bit d 100.0ms m2 104000bit ul m1 0bit d 0us m2 750000bit
Sent 48344556 bytes 36254 pkts (dropped 497, overlimits 0)
backlog 5p
period 5755 work 48337096 bytes level 0

class hfsc 1:40 parent 1:1 leaf 400: ls m1 0bit d 200.0ms m2 20000bit ul m1 0bit d 0us m2 750000bit
Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
period 0 level 0

I'm still struggling to understand the rationale behind the classes/queues, mostly because all this looks really cryptic. I gather that hfsc 1:30 corresponds to the config class "Normal". I did a long FTP upload here.

How bad is that "dropped 497"? Do I have to make adjustments to reduce it? Sorry for being obnoxious, I'd really like to have a fire-and-forget solution.

(Last edited by towolf on 27 May 2008, 16:48)

The discussion might have continued from here.