OpenWrt Forum Archive

Topic: Is 802.3ad (LACP) supported ?

The content of this topic has been archived on 30 Mar 2018. There are no obvious gaps in this topic, but there may still be some posts missing at the end.

I am trying to find out if OpenWRT supports link aggregation (interface bonding), specifically the standardized LACP (802.3ad). The Linux kernel has supported it for quite some time, but I have so far found no example for OpenWRT.

I have seen some comments here on the forum suggesting that it would have to be supported in silicon on the HW, which I think is incorrect.

If anyone has any sort of insight into this I would really appreciate it.

AFAIK it doesn't have to be supported by the NIC silicon; you should be able to channel up any type of NICs (within reasonable limits, e.g. both should be 1Gig or both 100M, not a mix). However, this needs to be supported on the switch side as well, where the naming conventions and config nomenclature vary from vendor to vendor.

Link aggregation also only makes sense if you have physically separate NICs, not just a switch VLANned into separate interfaces (VLANned switch ports still share a single link to the CPU, so bonding them gains nothing).

Off the top of my head, I don't know if configuring bonding is currently supported.


A quick note: you were asking explicitly about link aggregation as a bonding mode. Channel bonding has multiple modes (balance-rr, active-backup, broadcast, etc.) that can be used.
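As a side note, the active mode is easy to inspect once the bonding driver is loaded; a quick sketch (run as root on the device, assuming the driver is packaged as kmod-bonding):

```shell
# Loading the bonding driver creates bond0 automatically (one bond by default).
modprobe bonding

# The active mode and its numeric id can be read back from sysfs:
cat /sys/class/net/bond0/bonding/mode
# prints e.g. "balance-rr 0" until another mode such as 802.3ad is set
```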

Thanks for the reply. I have done some more research, and it seems that it is indeed possible, and that there is indeed no need for HW support. In another post here on the forum someone posted this as a way to do it:

root@lede:~# cat /etc/rc.local
# Put your custom commands here that should be executed once
# the system init finished. By default this file does nothing.

# bond0 is created automatically when the bonding module (kmod-bonding) loads.
# The mode can only be changed while the bond is down and has no slaves.
ifconfig bond0 down
echo 802.3ad > /sys/devices/virtual/net/bond0/bonding/mode
#echo balance-rr > /sys/devices/virtual/net/bond0/bonding/mode
echo fast > /sys/devices/virtual/net/bond0/bonding/lacp_rate
echo layer3+4 > /sys/devices/virtual/net/bond0/bonding/xmit_hash_policy
ifconfig bond0 up

# Enslave the physical interfaces (they must be down when added via sysfs).
echo +eth0 > /sys/devices/virtual/net/bond0/bonding/slaves
echo +eth1 > /sys/devices/virtual/net/bond0/bonding/slaves

exit 0

This uses an rc script, but I was hoping there would be a way to do it in the normal network config file (/etc/config/network).
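For the record, some builds and later OpenWrt/LEDE releases gained a netifd bonding proto (packaged roughly as `bonding` / `luci-proto-bonding`). If your image has it, the /etc/config/network entry might look something like the sketch below; the exact option names here are an assumption, so check the proto script shipped in your build:

```
config interface 'bond0'
	option proto 'bonding'
	option slaves 'eth0 eth1'
	option bonding_policy '802.3ad'
	option lacp_rate 'fast'
	option xmit_hash_policy 'layer3+4'
```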

The other thing is that it seems I need some kernel modules for this, but the list I've found is obsolete and the only module that currently exists is kmod-bonding; I'm not sure it is enough.

As for the switch side, I know it supports LACP and I will configure it once it comes to that. And yes, my strong preference is to use LACP rather than static link aggregation.
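As an aside, the switch end of this is usually only a few lines of config, though the nomenclature differs per vendor. A sketch in Cisco IOS terms (the port numbers are invented for illustration):

```
! "mode active" negotiates LACP; "mode on" would be a static (non-LACP) LAG
interface range GigabitEthernet1/0/1 - 2
 channel-group 1 mode active
```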

I will try to do some more research on this and report back here, but any insights are welcome.

I think kmod-bonding should be enough for the kernel side, along with the ifenslave package. What obsolete list have you found?


After a night of trials and reading, I think I have some leads on the issue:

  • kmod-bonding is enough, as kmod-mii is already installed by default

  • I can't find ifenslave in opkg

  • There aren't any examples of bond0 configuration in /etc/config/network (which implies that this is not how it is done?)

  • Changes to configuration in /sys/devices/virtual/net or /sys/class/net are not persistent

In some older docs (from 2011?) for the bonding module I found a recommendation to use iproute2 through the rc scripts. On some other page (not sure where) I read that ifenslave is a deprecated tool.
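Following up on that iproute2 recommendation: the rc.local approach above could be expressed with ip instead of the sysfs echoes, roughly as sketched below. Note this is only a sketch: BusyBox's built-in ip applet may not understand bond parameters, so the full iproute2 package (ip-full) would probably be needed:

```shell
# Create the bond and set its parameters in one step
# (requires full iproute2; BusyBox ip may lack 'type bond').
ip link add bond0 type bond mode 802.3ad lacp_rate fast xmit_hash_policy layer3+4

# Slaves must be down before they can be enslaved.
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0

ip link set bond0 up
```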

This leaves me with two options:

  • Use the rc scripts (which is quite bad, as the config will be spread across 3 or 4 config files)


  • Figure out whether ifenslave would make any difference (and whether it is still hidden somewhere in the repo)
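For the second option, checking whether ifenslave is hiding in any of the configured feeds is quick:

```shell
opkg update
# No output from the next line means ifenslave is not in any configured feed:
opkg list | grep -i ifenslave
```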

And again any insight would be really appreciated.

(Last edited by tnk on 10 Jan 2017, 01:30)

The discussion might have continued from here.