Re: File System Performance Loss in Genera 8.0 (16X slower!)



    Date: Sun 8 Jul 90 03:21:57-CDT
    From: AI.CLIVE@MCC.COM (Clive Dawson)

	    Date: Fri, 6 Jul 90 16:20 EDT
	    From: barmar@Think.COM (Barry Margolin)
	    Subject: File System Performance Loss in Genera 8.0 (16X slower!)
	    To: cogen@XN.LL.MIT.EDU
	    cc: SLUG@Warbucks.AI.SRI.COM

	    [...]
	    Another change that can affect things adversely is the support in 8.0
	    IP-TCP for reassembling large datagrams, which is used by 8.0 NFS.
	    The maximum packet size on an ethernet is about 1500 bytes, but NFS
	    will request 8Kbytes of the file at a time.  This datagram will be
	    split into a series of ethernet packets, but if just one of them isn't
	    received the entire series will have to be sent again (retransmission
	    occurs at the datagram level, not the packet level).  I was seeing
	    this when I was trying my above "benchmarks" on another machine (go
	    into Peek Network mode and check whether there are a bunch of
	    Reassembly Nodes -- this indicates you're getting lots of partial
	    datagrams).
						    barmar

    Yes!  We are seeing precisely the same thing.  Attempts to access an
    Ultrix system via NFS using Rel.8 will often result in long delays.
    Peek Network shows many Reassembly Nodes.   Can you elaborate on
    what 8.0 is doing differently regarding the reassembly of large
    packets?  

It's doing it.

In Genera 7.2 the largest IP datagram that could be received was just
under 2K bytes.  Since this isn't very large, Genera would keep its
request size small enough that it would never be fragmented.  It would
request 1K from hosts on the same network (the ethernet max packet size
is around 1500 bytes), and .5K from hosts on other networks (all
networks supporting IP are required to be able to handle packets of at
least .5K).
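The retransmission behavior quoted above is why those sub-MTU request
sizes were robust: losing any one fragment forces the entire datagram to
be resent, so the effective loss rate compounds with the fragment count.
A rough sketch (the 1% per-packet loss rate is illustrative, not a
figure from the message):

```python
# Probability that a datagram must be retransmitted, given that
# losing any one of its fragments loses the whole datagram.
# The per-packet loss rate is an assumed, illustrative figure.

def datagram_loss(fragments, packet_loss=0.01):
    """Chance that at least one of `fragments` packets is lost."""
    return 1 - (1 - packet_loss) ** fragments

print(f"{datagram_loss(1):.4f}")  # 1K request, one packet  -> 0.0100
print(f"{datagram_loss(3):.4f}")  # 4K request, 3 fragments -> 0.0297
print(f"{datagram_loss(6):.4f}")  # 8K request, 6 fragments -> 0.0585
```

A request that fits in a single packet only pays the single-packet loss
rate; a fragmented one pays nearly n times that.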

In Genera 8.0, IP is able to reassemble datagrams of arbitrary size.  By
default it requests 4K bytes of data when reading files from a server on
the local net (it still defaults to .5K for remote networks, presumably
because the likelihood of losing a packet is higher).  On an ethernet
this will be fragmented into at least three packets.
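The fragment arithmetic can be sketched as follows, assuming a 1500-byte
Ethernet MTU and a 20-byte IP header with no options (fragment payloads
must fall on 8-byte boundaries):

```python
# How many Ethernet packets an IP datagram fragments into.
# Assumes a 1500-byte MTU and a 20-byte IP header (no options);
# each non-final fragment's payload must be a multiple of 8 bytes.

def fragment_count(datagram_bytes, mtu=1500, ip_header=20):
    per_fragment = ((mtu - ip_header) // 8) * 8  # 1480 for Ethernet
    return -(-datagram_bytes // per_fragment)    # ceiling division

print(fragment_count(1024))  # old 1K request    -> 1 packet
print(fragment_count(4096))  # 8.0's 4K request  -> 3 packets
print(fragment_count(8192))  # an 8K NFS read    -> 6 packets
```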

	      Did something different happen in 7.2 (where we never
    had this problem)?  Does anybody have a cure for this?

Try setting NFS:*LOCAL-NETWORK-TRANSFER-SIZE* back down to 1024.
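In a Lisp Listener that would be something like the following (a sketch;
it assumes the variable is a special variable that can simply be set at
run time):

```lisp
;; Force NFS reads from servers on the local net back down to the
;; pre-8.0 1K request size, so each request fits in one Ethernet
;; packet and no IP fragmentation/reassembly is involved.
(setq nfs:*local-network-transfer-size* 1024)
```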

                                                barmar