OpenWrt Forum Archive

Topic: Custom x86 Image (NO LONGER MAINTAINED)

The content of this topic has been archived between 6 Sep 2015 and 6 May 2018. Unfortunately some posts – most likely complete pages – are missing.

Alex Atkin UK wrote:

I thought that at first but on reading up, maxfail is just how many times pppd should try to connect before dying and holdoff is how long to wait after a failure before retrying.

Seems to me neither of those things should cause it to hang if it genuinely dropped the connection.

That is good.  Things are looking up for the PPPoE side of your equation.

Just rebooted the router, it came back up fine.

Other than testing how well QoS works (or not), everything seems fine.

As I suspected, everything on the web feels snappier now.

Alex Atkin UK wrote:

Just rebooted the router, it came back up fine.

Other than testing how well QoS works (or not), everything seems fine.

As I suspected, everything on the web feels snappier now.

That is amazing news!  Do you have WiFi set up with the build as well?  I'm having trouble with my 2.4 GHz adapter failing to work after a period of time.  I am going to be trying out the experimental build that lacks ACPI and any real power management to see if it fixes my problem.

Sure, WiFi has worked all along, but as it's only for my laptop (although when I replace my phone next year, that will support 5 GHz too) it's not a top priority anyway.

Most of the network will still run off WiFi from the Buffalo, as it does an excellent job of handling 2.4 GHz clients and needs to stay on the network anyway so I have enough ethernet ports for everything.

I really wish I could have gotten a PCIe x1 card with a switch built-in.  I know PCI cards existed that did that, but they cost about 3x the price of the whole box itself as they are very rare - lol.

Alex Atkin UK wrote:

Sure, WiFi has worked all along, but as it's only for my laptop it's not a top priority anyway.

Most of the network will still run off WiFi from the Buffalo, as it does an excellent job of handling 2.4 GHz clients and needs to stay on the network anyway so I have enough ethernet ports for everything.

I really wish I could have gotten a PCIe x1 card with a switch built-in.  I know PCI cards existed that did that, but they cost about 3x the price of the whole box itself as they are very rare - lol.

I can appreciate that... If you do anything 2.4 GHz on the box let me know.  I'm curious whether you have similar issues to what I have.  I'm also a little irritated that it took so much shuffling of my jumpers to get it to where it is now.  You did mention power management stuff affecting WiFi, so I want to ensure that ACPI and anything else is disabled.  Hopefully an updated kernel will do the trick for me.

Well, all seems well here.  Downloading off Usenet at max speed while watching Netflix on PS3, flawlessly.  This is with IFB, which I can only assume is working based on these results, but it's a little confusing as I do not see any QoS marks in /proc/net/ip_conntrack.
http://csdprojects.co.uk/OpenWRT/Atom.png

Alex Atkin UK wrote:

Well, all seems well here.  Downloading off Usenet at max speed while watching Netflix on PS3, flawlessly.  This is with IFB, which I can only assume is working based on these results, but it's a little confusing as I do not see any QoS marks in /proc/net/ip_conntrack.
http://csdprojects.co.uk/OpenWRT/Atom.png

I am seeing weird crashes with collectd on my install.  Might be related?  I'm glad the IFB stuff is working.  Let me know if you think I can drop the IMQ stuff.  I'm planning on sticking with whatever OpenWRT uses by default.

Semi-bad news: The sysupgrade tool takes an image file and will likely do something of a "scorched earth" upgrade.  Because of this I will be starting on an upgrade-in-place tool that is meant to replace the standard sysupgrade process.  The tool will be designed to work with ext[3|4] and btrfs.  In the case of btrfs I will set it up to include support for snapshots.
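For anyone curious what a snapshot-backed upgrade-in-place might look like, here is a rough sketch.  This is purely illustrative - the subvolume names, mount points, and tarball path are assumptions, not how the actual tool will work:

```shell
#!/bin/sh
# Hypothetical btrfs upgrade-in-place flow (sketch only, requires root).
# Assumes the root filesystem is the subvolume "@" and the top-level
# btrfs volume is mounted at /mnt/btrfs_root.

# 1. Take a read-only snapshot of the current root as a rollback point.
btrfs subvolume snapshot -r /mnt/btrfs_root/@ \
    /mnt/btrfs_root/@pre-upgrade-$(date +%Y%m%d)

# 2. Unpack the new rootfs over the live root instead of reflashing,
#    preserving the UCI configuration.
tar -xzf /tmp/new-rootfs.tar.gz -C / --exclude='./etc/config/*'

# 3. If the upgrade goes wrong, the snapshot can be promoted back:
#    btrfs subvolume delete /mnt/btrfs_root/@
#    btrfs subvolume snapshot /mnt/btrfs_root/@pre-upgrade-YYYYMMDD \
#        /mnt/btrfs_root/@
```

On ext3/ext4 there are no snapshots, so the tool would presumably just do step 2 with a backup tarball taken beforehand.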

Good news!  My Jenkins instance looks like it is back to building correctly.  It is set up to auto-build and publish on the 1st and 15th of each month.  It runs against the main OpenWRT trunk and my stable branch.  Keep an eye out for auto-builds.  They will likely go online before I have a chance to update the thread with the pertinent information.

I will also run builds mid-cycle depending on what changes make it into the stable branch on Bitbucket (i.e. the upgrade-in-place tool going live).

Collectd I believe has crashed on every build so far, certainly the test builds.

I'm a little bothered at the idea that I cannot see IFB doing its work in /proc/net/ip_conntrack, as I am used to monitoring that to see how it's prioritising traffic and to see which things need tweaking for a better priority.  So I may yet switch back to IMQ, and would certainly prefer if we can keep support for both, as long as it doesn't hurt.

I think I need to find out who is working on the OpenWRT QoS support and find out what IFB can and cannot do, to know for certain if it's sufficient.
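As an aside, checking for QoS marks in the conntrack table can be done with a one-line filter.  Below, the same filter is run against two sample entries so the pipeline can be seen end to end (the sample lines are made up, not real output from this router):

```shell
# On the router itself you would read the real table:
#   grep -o 'mark=[0-9]*' /proc/net/ip_conntrack | grep -v '^mark=0$'
# Any output means at least one connection carries a nonzero QoS mark.
printf '%s\n' \
  'tcp 6 431999 ESTABLISHED src=192.168.1.2 dst=8.8.8.8 mark=20 use=2' \
  'udp 17 170 src=192.168.1.3 dst=8.8.4.4 mark=0 use=1' \
  | grep -o 'mark=[0-9]*' | grep -v '^mark=0$'
```

Only the first sample line survives the filter, since its mark is nonzero.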

Alex Atkin UK wrote:

Collectd I believe has crashed on every build so far, certainly the test builds.

I'm a little bothered at the idea that I cannot see IFB doing its work in /proc/net/ip_conntrack, as I am used to monitoring that to see how it's prioritising traffic and to see which things need tweaking for a better priority.  So I may yet switch back to IMQ, and would certainly prefer if we can keep support for both, as long as it doesn't hurt.

I think I need to find out who is working on the OpenWRT QoS support and find out what IFB can and cannot do, to know for certain if it's sufficient.

IMQ should be safe to leave in.  I won't make any changes.

Good to know.  As long as it doesn't break IFB it's the most logical choice really.  It's not like we need to worry about the IMQ module taking up precious RAM, like you would on ARM/MIPS.

Presumably OpenWRT does not change the kernel all that often, so it's not going to be a pain patching it all the time?

Still running on the test build, btw.  Haven't stress tested WiFi (I don't think it's really going to be possible with just one WiFi device), but otherwise it's not shown any problems so far.

Alex Atkin UK wrote:

Good to know.  As long as it doesn't break IFB it's the most logical choice really.  It's not like we need to worry about the IMQ module taking up precious RAM, like you would on ARM/MIPS.

Presumably OpenWRT does not change the kernel all that often, so it's not going to be a pain patching it all the time?

They've not changed it since I started tracking things closely at least 4 months ago.  I'm not really worried about it.  As long as there are good patches from the IMQ guys, we should be fine.

Alex Atkin UK wrote:

Still running on the test build, btw.  Haven't stress tested WiFi (I don't think it's really going to be possible with just one WiFi device), but otherwise it's not shown any problems so far.

My WiFi drops no matter what; it can sit idle for 12 hours and die, or it can die under load after 5 minutes.  Normally it lasts 8-12 hours before it stops working.  I can still see the access point, but no traffic flows.  I think it relates to the jumper-setting dance I had to do in order to get it even that far.  I didn't get a chance to fiddle with it this weekend, so hopefully this coming weekend I'll be able to figure it out.

Had my first pppd hang: the modem resynced and pppd didn't shut down.  I had to manually kill pppd on the router.

I have changed maxfail 0 to maxfail 10; hopefully that will stop it hanging in the future.
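For reference, the options discussed here live in pppd's options file (or get passed through the OpenWRT network config).  A minimal sketch of the relevant settings - maxfail 10 is the value from this thread, the holdoff value shown is just pppd's documented default:

```
# /etc/ppp/options (excerpt) - settings discussed above
# maxfail N : give up after N consecutive failed connection attempts
#             (0 = retry forever, which can mask a hung session)
maxfail 10
# holdoff S : wait S seconds after a failed attempt before redialling
holdoff 5
# persist   : redial after the link drops instead of exiting
persist
```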

Alex Atkin UK wrote:

Had my first pppd hang: the modem resynced and pppd didn't shut down.  I had to manually kill pppd on the router.

I have changed maxfail 0 to maxfail 10; hopefully that will stop it hanging in the future.

How is everything else working now that the MSI interrupts have been disabled?

Latency is excellent; I haven't had to put that freaky DMA hack on with this build.  Pinging my ISP's router at the other end of the PPP link is pretty amazing:

PING 1x5.255.2x6.252 (1x5.255.2x6.252): 56 data bytes
64 bytes from 1x5.255.2x6.252: seq=0 ttl=64 time=4.206 ms
64 bytes from 1x5.255.2x6.252: seq=1 ttl=64 time=3.905 ms
64 bytes from 1x5.255.2x6.252: seq=2 ttl=64 time=4.012 ms
64 bytes from 1x5.255.2x6.252: seq=3 ttl=64 time=3.938 ms
64 bytes from 1x5.255.2x6.252: seq=4 ttl=64 time=8.007 ms
64 bytes from 1x5.255.2x6.252: seq=5 ttl=64 time=3.875 ms
64 bytes from 1x5.255.2x6.252: seq=6 ttl=64 time=4.100 ms
64 bytes from 1x5.255.2x6.252: seq=7 ttl=64 time=4.013 ms
64 bytes from 1x5.255.2x6.252: seq=8 ttl=64 time=3.890 ms
64 bytes from 1x5.255.2x6.252: seq=9 ttl=64 time=4.120 ms

I did discover that I was right: IFB wasn't working.  Looks like this bug still hasn't been patched in trunk.  I have made the change myself now and it does seem to apply the connection mark.

Lord knows why Netflix still worked perfectly with QoS non-functional, though; I guess part of the reason it struggled before must have been the old router being overloaded?

The download graphs are much smoother on the Atom; on the Buffalo it was very bursty in nature with constant peaks, whereas it maintains a more constant data stream now.  It seems to have made a notable improvement on downloads off Xbox Live and PSN, although neither gets anywhere near even 1/3 of my connection speed.

Alex Atkin UK wrote:

Latency is excellent; I haven't had to put that freaky DMA hack on with this build.  Pinging my ISP's router at the other end of the PPP link is pretty amazing

I'm glad to hear it. 


Alex Atkin UK wrote:

I did discover that I was right: IFB wasn't working.  Looks like this bug still hasn't been patched in trunk.  I have made the change myself now and it does seem to apply the connection mark.

I've created a ticket for this item here and I'll be applying it to my sources soon.  Probably after I get my router back to stable.

Alex Atkin UK wrote:

Lord knows why Netflix still worked perfectly with QoS non-functional, though; I guess part of the reason it struggled before must have been the old router being overloaded?

The download graphs are much smoother on the Atom; on the Buffalo it was very bursty in nature with constant peaks, whereas it maintains a more constant data stream now.  It seems to have made a notable improvement on downloads off Xbox Live and PSN, although neither gets anywhere near even 1/3 of my connection speed.

This could very well be due to the Buffalo having trouble keeping up.  I moved over to the Atom because I was seeing utilization spikes on the original Netgear box I put into service.  I all but gave up on anything <1 GHz and 256 MB RAM for my router a few years ago [more?] now.  I had nothing but problems with the micro / embedded platforms like the Soekris and ALIX too.  They never seemed to be able to really keep up with my usage.

The qos-scripts patch above wasn't applied for a reason. Quote from nbd:

there's a limitation in ifb where it cannot run the per-packet netfilter rules before ingress shaping
that patch works around it by saving the connmark based on the result of per-packet rules, but that's not acceptable
it messes up conntrack vs per-packet rule handling
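For context on nbd's point: ingress shaping with IFB works by redirecting packets to the ifb device at the ingress hook, which runs before netfilter sees the packet - so per-packet iptables rules cannot classify traffic ahead of the shaper.  A rough sketch of the usual setup (interface names and rates are examples, not the qos-scripts code):

```shell
# Typical IFB ingress-shaping setup (illustrative, requires root).
# Incoming packets on eth0 are redirected to ifb0, where the shaping
# qdisc runs -- note this happens BEFORE netfilter/iptables rules,
# which is the limitation being described.
ip link add ifb0 type ifb          # (or: modprobe ifb on older kernels)
ip link set ifb0 up
tc qdisc add dev eth0 ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
    action mirred egress redirect dev ifb0
tc qdisc add dev ifb0 root handle 1: htb default 10
tc class add dev ifb0 parent 1: classid 1:10 htb rate 18mbit
```

The rejected patch worked around this by saving the result of the per-packet rules into the connmark and matching on that from tc, which is what jow quotes nbd as objecting to.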

jow wrote:

The qos-scripts patch above wasn't applied for a reason. Quote from nbd:

there's a limitation in ifb where it cannot run the per-packet netfilter rules before ingress shaping
that patch works around it by saving the connmark based on the result of per-packet rules, but that's not acceptable
it messes up conntrack vs per-packet rule handling

@Alex: I'm not as familiar with the QoS stuff, so I'm going to put this on hold given jow's comments.

Isn't that EXACTLY the limitation that the developers of IMQ use as their argument for why IFB is insufficient?

So knowing that, why is OpenWRT migrating to IFB at all?  It seems like it wasn't fully thought through.

I guess it's good we have IMQ support in this build after all.

OpenWrt migrated to IFB because keeping IMQ working for bleeding-edge kernels was a huge pain in the ass.  It might seem simple right now because the kernel does not change, but that is only because we're in the middle of preparing the release.

Understandable, except if IFB reduces the flexibility of the QoS implementation in OpenWRT it's a HUGE step backwards IMO.

Or am I wrong, and IFB can support 99% of the use cases that IMQ can?

I have been trying to get my head around this for weeks, as you can see on this thread: to use IFB or IMQ, that is the question.

I would rather stick with whatever OpenWRT has as standard for ease of building for the OP, but if it's a choice between an implementation that works and one that doesn't - it's pretty obvious which to choose then.

Alex Atkin UK wrote:

Understandable, except if IFB reduces the flexibility of the QoS implementation in OpenWRT it's a HUGE step backwards IMO.

Or am I wrong, and IFB can support 99% of the use cases that IMQ can?

There are also major concerns with time and management.  If the devs were spending too much time getting IMQ working on a given kernel, then it makes more sense to drop IMQ in favor of IFB.  Release management is a tricky business, and sometimes you have to fall back if the time investment grows too great.  From what jow posted, this sounds like one of those cases.

Alex Atkin UK wrote:

I have been trying to get my head around this for weeks, as you can see on this thread: to use IFB or IMQ, that is the question.

It looks like IFB is a bit newer and not quite as robust as the IMQ solution.  If you read through the IMQ site and other write-ups, it looks like this is similar to the zfs vs btrfs arguments.  Both systems have their place, and sometimes you'll need the one that's been around longer.

If my analogy is off base / wrong, please correct me and I'll update my post.

Alex Atkin UK wrote:

I would rather stick with whatever OpenWRT has as standard for ease of building for the OP, but if it's a choice between an implementation that works and one that doesn't - it's pretty obvious which to choose then.

Ease of building isn't a concern of mine at the moment.  I've done enough work building distros from sources that I'm not worried.  My worry is that the Atom boards and other equipment I am targeting are properly utilized first and foremost.  After that I want to make sure my needs and the needs of users are met.  I started with OpenWRT because it is the best Linux router distribution out there.  It is also being actively maintained and has a good community.

The great part about a community is we can do things that fall outside of the core scope of the project, like building OpenWRT for equipment that is well outside of their base target (Atom / C7 / XEON / whatever).  We can also take a closer look at setting up IMQ as a secondary package that is installable, but not on by default.  This way the default matches the documentation, but if you need to deviate, you can.

I have already set up IMQ as a set of packages, and the core can sit side by side with IFB.  The trick will be the QoS scripts package.  In order to finish off IMQ support, we will need to update the backfire QoS scripts package and set it up so you can have the IFB *or* IMQ version installed.  Once we have an updated QoS scripts package we will likely be good to go; if not, we play around with it and make it work in the end.

Keep in mind there is also a very large package change coming down the line.  We may end up filling in gaps by default even if there was no IFB / IMQ discussion.  I plan on keeping an eye on the package shuffle and hopefully I can step up and volunteer as maintainer of some.  Either way I expect to get stuck working on updating some packages as needs arise.

The problem is that, as I understand it from the documentation about IFB, the developer specifically said that IFB will NEVER work at the netfilter layer, as that was the whole reason for not putting IMQ in the kernel in the first place.
So IFB will never be a complete replacement for IMQ, which means OpenWRT will have downgraded their QoS implementation.

Seeing as one of the top reasons for me using custom firmware is the QoS functionality, that's a pretty huge problem IMO.  Granted, as long as your Atom build works well I have no reason to go back to a pre-built MIPS router, but it still means OpenWRT loses some value as a firmware replacement.

As for an IMQ package, the only thing I did was take the following files from backfire and slap them onto your trunk build:

/usr/bin/qos-start
/usr/bin/qos-stop
/usr/bin/qos-stat
/usr/lib/qos/generate.sh
/usr/lib/qos/tcrules.awk

That seems to be all there is to it.  Thanks to LuCI the configuration carries over without any changes.

Alex Atkin UK wrote:

The problem is that, as I understand it from the documentation about IFB, the developer specifically said that IFB will NEVER work at the netfilter layer, as that was the whole reason for not putting IMQ in the kernel in the first place.
So IFB will never be a complete replacement for IMQ, which means OpenWRT will have downgraded their QoS implementation.

Seeing as one of the top reasons for me using custom firmware is the QoS functionality, that's a pretty huge problem IMO.  Granted, as long as your Atom build works well I have no reason to go back to a pre-built MIPS router, but it still means OpenWRT loses some value as a firmware replacement.

As for an IMQ package, the only thing I did was take the following files from backfire and slap them onto your trunk build:

/usr/bin/qos-start
/usr/bin/qos-stop
/usr/bin/qos-stat
/usr/lib/qos/generate.sh
/usr/lib/qos/tcrules.awk

That seems to be all there is to it.  Thanks to LuCI the configuration carries over without any changes.

I'll take a look.  I was thinking of grabbing the backfire sources and using those as a base for a new qos-scripts-imq package.  I want to properly package the necessary bits so it is easier for everyone who wants to change over to IMQ.

Edit: Ticket for tracking is here

(Last edited by mcrosson on 19 Sep 2012, 23:17)

The only other thing I am aware of is the kernel module, and so far I haven't seen any issues with keeping both in there by default, as which QoS scripts you are using determines which module actually gets loaded.

I just switched from IFB to IMQ on my router without rebooting, and it doesn't seem to have caused any issues that I can see.  I just made sure I ran qos-stop before overwriting the scripts with the IMQ versions, then qos-start.
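The switch-over described above amounts to something like the following.  The five file paths are the ones listed earlier in the thread; the /tmp staging directory is just an example of where the backfire copies might be sitting:

```shell
# Swap the IFB qos-scripts for the backfire/IMQ versions on a live router.
# Assumes the backfire files were copied to /tmp/imq-qos/ beforehand.
/usr/bin/qos-stop                     # tear down the current (IFB) qdiscs

cp /tmp/imq-qos/qos-start /tmp/imq-qos/qos-stop /tmp/imq-qos/qos-stat \
    /usr/bin/
cp /tmp/imq-qos/generate.sh /tmp/imq-qos/tcrules.awk /usr/lib/qos/

/usr/bin/qos-start                    # bring QoS back up via IMQ
```

Running qos-stop first matters: it removes the qdiscs the old scripts created, so the new scripts start from a clean slate.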

Sorry, posts in this range are missing from our archive.