
performance with >> 9 MW of memory



    Date: Mon, 9 Dec 1991 16:06-0000
    From: Scott McKay <SWM@SAPSUCKER.SCRC.Symbolics.COM>

	Date: Fri, 6 Dec 1991 17:48 EST
	From: p2@porter.asl.dialnet.symbolics.com (Peter Paine)

	    Date: Fri, 6 Dec 1991 20:24-0000
	    From: SWM@SAPSUCKER.SCRC.Symbolics.COM (Scott McKay)

		Date: Fri, 6 Dec 1991 14:42 EST
		From: pan@ATHENA.PANGARO.dialnet.symbolics.com (Paul Pangaro)

		I have heard second-hand that performance of Symbolics machines
		(certainly this was about L-machines, maybe G's too; that is part
		of the question) "degrades" with more than 9 MW of memory.  Does
		anybody know anything about this?  I have no other info.

	    Back in about 1986, a bug was discovered in the virtual memory system
	    that caused performance to degrade in machines with tons of memory.
	    It was fixed the same afternoon.  I know of no such problem any more,
	    but the story of that ancient bug is sometimes told for its amusement
	    value.  We have some substantially loaded machines in-house that have
	    the expected performance: fast.

	How loaded is loaded?
	What are the limits?

    I dunno, something like 14.5MW on a G-machine (3620, 3650).
    L-machines are smaller.
    I-machines are bigger.

Is the correct equation for a G-machine (* 14.5MW 36/8) = 65.25 Mbytes?
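
Working it out as a small Lisp sketch (the conversion function below is
just illustrative, not a system function; it assumes 36-bit words for
L- and G-machines):

  (defun megawords->megabytes (megawords bits-per-word)
    "Convert a memory size in megawords to megabytes."
    (* megawords (/ bits-per-word 8)))

  (megawords->megabytes 14.5 36)   ; => 65.25, the G-machine figure above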

I see on the MacIvory II Show Machine Configuration:

NuBus slot C: National Semiconductor NS8/16 Memory Expansion Card p/n
  NS8/16 (id #x10F) rev Rev. 2.21, 16MB total, 2.6MW in use

(* 2.6 40/8) = 13 Mbytes - where did the other 3 Mbytes go?
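
Applying the same illustrative conversion to the Ivory's 40-bit word,
plus one possible accounting for the gap (the 48-bit-per-word storage
figure is my assumption, not something the configuration display
reports):

  (megawords->megabytes 2.6 40)    ; => 13.0 Mbytes of word data

  ;; Assumption: if each 40-bit word is stored in a 48-bit (6-byte)
  ;; slot, 2.6MW occupies about 15.6MB, close to the 16MB total.
  (* 2.6 (/ 48 8))                 ; => ~15.6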
				     
It would be very useful to know the actual, precise limits for memory
expansion.