Linux LACP bonding with qemu

Before posting, READ the changelog, WATCH the videos and how-tos, and provide the following:
Your install: bare metal or ESXi, CPU model, RAM, HD, which EVE version you have, the output of uname -a, and any other info that might help us answer faster.

Moderator: mike

Post Reply
ldemon123
Posts: 11
Joined: Sat Feb 24, 2018 3:42 pm

Linux LACP bonding with qemu

Post by ldemon123 » Sat Sep 05, 2020 10:12 am

Hello all!

I have spent a lot of time on this, and I cannot bring up LACP bonding between two Linux hosts, or between a Linux host and a Cisco device.
Can anyone please tell me the right way to do it?

The best I have achieved is an LACP PDU sent every 46 (!) seconds on an Ubuntu server, even with lacp_rate=fast configured.
I then tried a CentOS server without any success at all; only periodic LACP frames were captured. On the other hand, an L3 bond works fine in active/backup mode with ARP monitoring between two Linux guests.

Has anyone achieved a working L2 LACP bond config on a Linux guest?
Thanks in advance.
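
For reference, the kind of setup I am trying to bring up on the Linux side is nothing exotic, roughly the following (interface names and the address are just examples, not my exact config):

# create an 802.3ad bond with fast LACP rate and enslave two NICs (slaves must be down first)
modprobe bonding
ip link add bond0 type bond mode 802.3ad miimon 100 lacp_rate fast
ip link set ens3 down
ip link set ens4 down
ip link set ens3 master bond0
ip link set ens4 master bond0
ip link set bond0 up
# example address, adjust to the lab
ip addr add 10.0.0.1/24 dev bond0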

ldemon123
Posts: 11
Joined: Sat Feb 24, 2018 3:42 pm

Re: Linux LACP bonding with qemu

Post by ldemon123 » Thu Sep 10, 2020 7:24 am

+upd

The tests above were done on the EVE-NG Community Edition.
I asked my colleagues about their experience, and they reported success with LACP between Windows and N9Kv on the EVE-NG Pro edition.

I will try LACP between Linux and N9Kv on the Pro edition and will update this post.

Uldis (UD)
Posts: 5083
Joined: Wed Mar 15, 2017 4:44 pm
Location: London
Contact:

Re: Linux LACP bonding with qemu

Post by Uldis (UD) » Thu Sep 10, 2020 12:05 pm

I have never done it on EVE Pro with a Linux node in the topology, but I can confirm that LACP from a WinServer 2016 node to a Nexus vPC works fine.
Uldis

ldemon123
Posts: 11
Joined: Sat Feb 24, 2018 3:42 pm

Re: Linux LACP bonding with qemu

Post by ldemon123 » Mon Sep 21, 2020 6:46 am

Hi,

I have got LACP bonding working with a Linux server.

# SERVER
1. Change the node's network interface driver in EVE-NG to e1000:

[root@localhost ~]# dmesg | grep e1000
[ 8.278641] e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI
[ 8.279379] e1000: Copyright (c) 1999-2006 Intel Corporation.
[ 8.279884] e1000: E1000 MODULE IS NOT SUPPORTED
[ 10.016230] e1000 0000:00:03.0 eth0: (PCI:33MHz:32-bit) 00:50:00:00:04:00
[ 10.016239] e1000 0000:00:03.0 eth0: Intel(R) PRO/1000 Network Connection
[ 11.189108] e1000 0000:00:04.0 eth1: (PCI:33MHz:32-bit) 00:50:00:00:04:01
[ 11.189772] e1000 0000:00:04.0 eth1: Intel(R) PRO/1000 Network Connection
[ 12.236497] e1000 0000:00:05.0 eth2: (PCI:33MHz:32-bit) 00:50:00:00:04:02
[ 12.237106] e1000 0000:00:05.0 eth2: Intel(R) PRO/1000 Network Connection
[ 12.248651] e1000 0000:00:03.0 ens3: renamed from eth0
[ 12.250711] e1000 0000:00:04.0 ens4: renamed from eth1
[ 12.255837] e1000 0000:00:05.0 ens5: renamed from eth2
[ 61.231456] e1000: ens3 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[ 61.338548] e1000: ens4 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[ 61.418442] e1000: ens5 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[ 63.810458] e1000: ens3 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[ 63.940840] e1000: ens4 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
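
To double-check from inside the guest that the e1000 driver is really in use (in addition to dmesg), ethtool can be queried per interface; the output varies, but the driver field should show something like this:

[root@localhost ~]# ethtool -i ens3 | grep ^driver
driver: e1000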

2. Configure a fake MAC address for the bond interface:
MACADDR=00:05:00:00:04:10

[root@localhost ~]# dmesg | tail -10
[ 62.304102] IPv6: ADDRCONF(NETDEV_CHANGE): ens4: link becomes ready
[ 62.308158] IPv6: ADDRCONF(NETDEV_CHANGE): ens5: link becomes ready
[ 63.810007] bond0: (slave ens3): Enslaving as a backup interface with an up link
[ 63.810458] e1000: ens3 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[ 63.881066] bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
[ 63.940840] e1000: ens4 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[ 63.980192] bond0: (slave ens4): Enslaving as a backup interface with an up link
[ 63.982355] bond0: (slave ens3): link status definitely up, 1000 Mbps full duplex
[ 63.982435] bond0: active interface up!
[ 63.982535] bond0: (slave ens4): link status definitely up, 1000 Mbps full duplex


[root@localhost ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200
Peer Notification Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 00:05:00:00:04:10
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 2
Actor Key: 9
Partner Key: 32783
Partner Mac Address: 00:23:04:ee:be:01

Slave Interface: ens3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:50:00:00:04:00
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
system mac address: 00:05:00:00:04:10
port key: 9
port priority: 255
port number: 1
port state: 61
details partner lacp pdu:
system priority: 32667
system mac address: 00:23:04:ee:be:01
oper key: 32783
port priority: 32768
port number: 261
port state: 61

Slave Interface: ens4
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:50:00:00:04:01
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
system mac address: 00:05:00:00:04:10
port key: 9
port priority: 255
port number: 2
port state: 61
details partner lacp pdu:
system priority: 32667
system mac address: 00:23:04:ee:be:01
oper key: 32783
port priority: 32768
port number: 16645
port state: 61

[root@localhost ~]# ip a s ens3
2: ens3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
link/ether 00:05:00:00:04:10 brd ff:ff:ff:ff:ff:ff
[root@localhost ~]# ip a s ens4
3: ens4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
link/ether 00:05:00:00:04:10 brd ff:ff:ff:ff:ff:ff
[root@localhost ~]# ip a s bond0
5: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:05:00:00:04:10 brd ff:ff:ff:ff:ff:ff

[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond-bond0
BONDING_OPTS="downdelay=200 miimon=100 mode=802.3ad updelay=200 use_carrier=0"
TYPE=Bond
MACADDR=00:05:00:00:04:10
BONDING_MASTER=yes
NAME=bond-bond0
UUID=25acfe57-1d57-48f9-bfb4-fc2b34afb154
DEVICE=bond0
ONBOOT=yes
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond-slave-ens3
TYPE=Ethernet
NAME=bond-slave-ens3
UUID=ddafec67-bac7-4d08-a078-09c1af3f327a
DEVICE=ens3
ONBOOT=yes
MASTER=bond0
SLAVE=yes
[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond-slave-ens4
TYPE=Ethernet
NAME=bond-slave-ens4
UUID=f7e908db-2aa6-49e0-b8a0-a50267837fd9
DEVICE=ens4
ONBOOT=yes
MASTER=bond0
SLAVE=yes
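
For reference, the same bond can also be created with nmcli instead of writing the ifcfg files by hand. Something along these lines should end up with an equivalent config (connection names match the files above; whether the fake MAC can be set directly on the bond connection depends on the NetworkManager version, so treat this as a sketch):

# bond with the same 802.3ad options as in BONDING_OPTS above
nmcli connection add type bond con-name bond-bond0 ifname bond0 bond.options "mode=802.3ad,miimon=100,updelay=200,downdelay=200"
# enslave the two NICs
nmcli connection add type ethernet con-name bond-slave-ens3 ifname ens3 master bond0 slave-type bond
nmcli connection add type ethernet con-name bond-slave-ens4 ifname ens4 master bond0 slave-type bond
nmcli connection up bond-bond0
# the MACADDR line I set directly in the ifcfg file as shown above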

# SWITCH

switch# sh vpc 15


vPC status
----------------------------------------------------------------------------
Id   Port          Status  Consistency  Reason     Active vlans
--   ------------  ------  -----------  ------     ---------------
15   Po15          up      success      success    1


Please check "show vpc consistency-parameters vpc <vpc-num>" for the
consistency reason of down vpc and for type-2 consistency reasons for
any vpc.

switch#

switch# sh lacp counters interface port-channel 15 ; sh clock
NOTE: Clear lacp counters to get accurate statistics

------------------------------------------------------------------------------
                     LACPDUs          Markers/Resp      LACPDUs
Port                 Sent    Recv     Recv    Sent      Pkts Err
------------------------------------------------------------------------------
port-channel15
Ethernet1/5          69      56       0       0         0

06:36:16.604 UTC Mon Sep 21 2020
Time source is NTP

switch# sh lacp counters interface port-channel 15 ; sh clock
NOTE: Clear lacp counters to get accurate statistics

------------------------------------------------------------------------------
                     LACPDUs          Markers/Resp      LACPDUs
Port                 Sent    Recv     Recv    Sent      Pkts Err
------------------------------------------------------------------------------
port-channel15
Ethernet1/5          70      57       0       0         0

2020 Sep 21 06:27:16 switch %ETHPORT-5-SPEED: Interface port-channel15, operational speed changed to 1 Gbps
2020 Sep 21 06:27:16 switch %ETHPORT-5-IF_DUPLEX: Interface port-channel15, operational duplex mode changed to Full
2020 Sep 21 06:27:16 switch %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface port-channel15, operational Receive Flow Control state changed to off
2020 Sep 21 06:27:16 switch %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface port-channel15, operational Transmit Flow Control state changed to off
2020 Sep 21 06:27:32 switch %ETH_PORT_CHANNEL-5-PORT_UP: port-channel15: Ethernet1/5 is up
2020 Sep 21 06:27:37 switch %ETH_PORT_CHANNEL-5-FOP_CHANGED: port-channel15: first operational port changed from none to Ethernet1/5
2020 Sep 21 06:27:37 switch %ETHPORT-5-IF_UP: Interface port-channel15 is up in mode trunk

38) FSM:<Ethernet1/5> Transition at 127604 usecs after Mon Sep 21 06:27:33 2020
Previous state: [LACP_ST_PORT_MEMBER_RECEIVE_ENABLED]
Triggered event: [LACP_EV_PARTNER_PDU_IN_SYNC]
Next state: [FSM_ST_NO_CHANGE]

39) FSM:<Ethernet1/5> Transition at 409432 usecs after Mon Sep 21 06:27:34 2020
Previous state: [LACP_ST_PORT_MEMBER_RECEIVE_ENABLED]
Triggered event: [LACP_EV_PARTNER_PDU_IN_SYNC]
Next state: [FSM_ST_NO_CHANGE]

40) FSM:<Ethernet1/5> Transition at 350459 usecs after Mon Sep 21 06:27:35 2020
Previous state: [LACP_ST_PORT_MEMBER_RECEIVE_ENABLED]
Triggered event: [LACP_EV_PARTNER_PDU_IN_SYNC]
Next state: [FSM_ST_NO_CHANGE]

41) FSM:<Ethernet1/5> Transition at 296945 usecs after Mon Sep 21 06:27:36 2020
Previous state: [LACP_ST_PORT_MEMBER_RECEIVE_ENABLED]
Triggered event: [LACP_EV_PARTNER_PDU_IN_SYNC]
Next state: [FSM_ST_NO_CHANGE]

42) FSM:<Ethernet1/5> Transition at 253793 usecs after Mon Sep 21 06:27:37 2020
Previous state: [LACP_ST_PORT_MEMBER_RECEIVE_ENABLED]
Triggered event: [LACP_EV_PARTNER_PDU_IN_SYNC_COLLECT_ENABLED_DISTRIBUTING_ENABLED]
Next state: [LACP_ST_WAIT_FOR_HW_TO_PROGRAM_TRANSMIT_PATH]

43) FSM:<Ethernet1/5> Transition at 260922 usecs after Mon Sep 21 06:27:37 2020
Previous state: [LACP_ST_WAIT_FOR_HW_TO_PROGRAM_TRANSMIT_PATH]
Triggered event: [LACP_EV_PORT_HW_PATH_ENABLED]
Next state: [LACP_ST_PORT_MEMBER_COLLECTING_AND_DISTRIBUTING_ENABLED]

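
The switch-side config is not pasted above, but it is nothing special; on each vPC peer the relevant part is roughly this (vPC domain and peer-link config omitted, port-channel and vPC numbers match the outputs above):

interface port-channel15
  switchport mode trunk
  vpc 15

interface Ethernet1/5
  switchport mode trunk
  channel-group 15 mode active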

Now I am trying to find a way to detect link failure on the server side
(it seems that Linux does not bring the slave link down when the LACP PDU exchange times out).
PS: maybe the only way is to monitor with ARP.
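
For the ARP monitor idea, the bonding options would look something like the line below (the target IP is just an example of an always-reachable address on the bond's subnet). As far as I know the kernel's ARP monitor is only usable in modes like active-backup, not in 802.3ad, so for the LACP bond miimon stays the only link monitor:

BONDING_OPTS="mode=active-backup arp_interval=1000 arp_ip_target=10.0.0.254"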

Post Reply