

Allow LPARs to inter-communicate faster than 1 Gb/sec

In the past, we thought that inter-LPAR communication could not be faster than 1 Gb/sec. Since (I believe) February of this year, this limit no longer applies. Below is a reprint from a document created by Doug Herman, IBM, which I may have already presented (buried) somewhere in here.

When using the Virtual I/O server (VIO) for virtualizing physical networks to client logical partitions, a Shared Ethernet Adapter (SEA) is configured by using the physical Ethernet adapter and the virtual Ethernet adapter to create a layer 2 bridge. One way to improve network performance is to use the “largesend” option on the VIO SEA and the client logical partitions.
The largesend feature allows sending large data packets over virtual Ethernet adapters without breaking up the packets into smaller MTU size packets. Starting with AIX 6.1 TL7-SP1 and AIX 7.1 TL0-SP1, the operating system supports the mtu_bypass attribute for the shared Ethernet adapter to provide a persistent way to enable the largesend feature:

ftp://public.dhe.ibm.com/common/ssi/ecm/en/pow03049usen/POW03049USEN.PDF
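
Before enabling it, you may want to confirm that your LPAR is at a supporting level and that the interface actually exposes the attribute. A quick check (en0 here is just an example interface name; substitute your own):

# oslevel -s
# lsattr -El en0 -a mtu_bypass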

Using largesend (mtu_bypass) on the AIX interfaces boosts throughput between logical partitions within the hypervisor of the Power server without additional processor utilization. Set “largesend” on the VIO SEA, and mtu_bypass (largesend) on the AIX LPAR interfaces. This lowers processor usage on both the sending AIX LPAR and the sending VIO when transferring to an outside machine. All MTU sizes remain at 1500; there is no requirement for jumbo frames. Some examples of largesend attributes for performance:

# ifconfig en0 largesend 
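
This takes effect immediately but does not persist across a reboot (the persistent mtu_bypass method is shown at the end of this post). To my knowledge, the runtime setting can be reverted with the corresponding negative flag:

# ifconfig en0 -largesend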

(LPAR to LPAR, virtual to virtual, in same machine single stream, binary FTP dd test)
1Gb per second without largesend
3.8Gb per second with largesend – Higher throughput
Processor utilization slightly higher on sender and slightly lower on receiver

largesend=1 on VIO SEA and largesend on client interfaces – Much lower processor utilization on sender and on sending VIO
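
For reference, the “binary FTP dd test” mentioned above can be reproduced with a plain FTP transfer fed by dd, along these lines (otherlpar is a placeholder host name; the put line matches the test quoted in the comments below):

# ftp otherlpar
ftp> bin
ftp> put "|dd if=/dev/zero bs=1m count=100" /dev/null

Because it streams zeroes through the TCP stack without touching disk on either side, it mostly measures the network path itself.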

VIO SEA physical adapters should have both large_send and large_receive set to yes:

$ lsdev -dev ent0 -attr |grep lar
large_receive yes Enable receive TCP segment aggregation True
large_send yes Enable hardware Transmit TCP segmentation True
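
If either attribute shows no, it can be changed from the VIO server. Since the physical adapter is in use by the SEA, one way is the -perm flag, which updates only the device database so the change applies at the next reboot (ent0 here stands for the physical adapter under the SEA):

$ chdev -dev ent0 -attr large_send=yes large_receive=yes -perm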

To make the settings permanent:

VIO Server:

$ chdev -dev ent# -attr largesend=1 large_receive=yes 

AIX LPAR

# chdev -l en# -a mtu_bypass=on
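
To verify on the LPAR afterwards (en0 again as an example interface), the output should look roughly like:

# lsattr -El en0 -a mtu_bypass
mtu_bypass on Enable/Disable largesend for virtual Ethernet True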



2 Responses


  1. MarkD:-) says

    Robert,

    thanks for the info!!!

    MarkD:-)

  2. Rob says

    http://gibsonnet.net/AIX/ibm/ATS_Tech_Talk_Optimizing_POWER7_and_AIX_Update_Dec_2012.pdf

    Steve Nasypany writes:
    “Driving 10 Gb throughput requires significant cpu resources (whether dedicated
    or shared adapter). Consuming two POWER7 CPUs is not unusual.”

That’s been my experience – CPU usage increases with throughput.

    “(LPAR to LPAR, virtual to virtual, in same machine single stream, binary FTP dd test)
    1Gb per second without largesend
    3.8Gb per second with largesend – Higher throughput
    Processor utilization slightly higher on sender”

    I’m wondering if:

    put "|dd if=/dev/zero bs=1m count=100" /dev/null

    is very light on the CPU, as there is little overhead there? Buffer copies
    of a million zeroes are just screaming for algorithm improvements.

    Try iperf and see if you use more CPU:

    http://www.perzl.org/aix/index.php?n=Main.iperf
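
    For example, something along these lines (iperf 2 syntax; serverlpar is a placeholder host name):

    server# iperf -s
    client# iperf -c serverlpar -t 60 -P 4

    A few parallel streams (-P) should make the CPU cost much more visible than a single FTP stream.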





