[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
David Gadbois architectural questions
Perhaps I can shed some light on your architectural questions.
Date: Fri, 2 Dec 88 12:32 CST
From: email@example.com (David Gadbois)
Date: Fri, 2 Dec 88 08:46 PST
From: DE@PHOENIX.SCH.Symbolics.COM (Doug Evans)
Also, it's pretty difficult to squeeze performance out of a 40-bit
computer with a 32-bit address space when it has to use a bus structure
with 36-bit data paths and 24-bit address paths.
I thought the address path was 28 bits, since the word organization is
(I'm told) 28-bit address + 6-bit tag + 2-bit cdr code or 32-bit
immediate data + 2-bit tag + 2-bit cdr code. If the address path is
24-bit, how do you deal with the extra 4 bits?
The 3600 has a 28-bit virtual address and a 24-bit physical address. Doug
was referring to the 24-bit physical address on the Lbus. It's fairly common
for virtual-memory computers to have fewer physical address bits than virtual
address bits, since you are likely to have more secondary storage (disk) than
primary storage. On the other hand, the Ivory has 32-bit addresses for both
virtual and physical memory, to keep things simple.
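As a rough illustration of the 3600-style word formats quoted above, here is a sketch in Python that packs and unpacks a 36-bit pointer word into its cdr-code, tag, and address fields. The exact bit ordering within the word is an assumption made for illustration; nothing in this thread specifies it.

```python
# Hypothetical field layout for a 3600-style 36-bit pointer word.
# Assumed order (high to low): 2-bit cdr code, 6-bit tag, 28-bit address.
# The ordering is an assumption for illustration, not documented here.

CDR_SHIFT, CDR_MASK = 34, 0x3
TAG_SHIFT, TAG_MASK = 28, 0x3F
ADDR_MASK = (1 << 28) - 1

def unpack_pointer_word(word):
    """Split an assumed pointer-format word into (cdr, tag, address)."""
    cdr = (word >> CDR_SHIFT) & CDR_MASK
    tag = (word >> TAG_SHIFT) & TAG_MASK
    addr = word & ADDR_MASK
    return cdr, tag, addr

def pack_pointer_word(cdr, tag, addr):
    """Inverse of unpack_pointer_word."""
    return (((cdr & CDR_MASK) << CDR_SHIFT)
            | ((tag & TAG_MASK) << TAG_SHIFT)
            | (addr & ADDR_MASK))
```

The immediate-data format (32-bit data + 2-bit tag + 2-bit cdr code) would be unpacked the same way with different shifts and masks.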
While I'm asking questions:
How does the MacIvory do 40-bit (or 48 -- see next question) words with
the 32-bit NuBus?
The MacIvory uses a pair of NuBus cycles to transfer its 48-bit words.
One cycle is a 32-bit cycle, the other is a 16-bit cycle.
It has a cache to minimize the performance impact of the slow NuBus.
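The two-cycle transfer can be sketched as follows. Which 16 bits travel in which cycle, and the helper names, are assumptions for illustration, not the MacIvory's actual bus protocol.

```python
def split_48bit_word(word):
    """Split a 48-bit memory word (40-bit word plus 8 ECC bits) into
    the two NuBus transfers: one 32-bit cycle and one 16-bit cycle.
    Which half goes first is an assumption here."""
    assert 0 <= word < (1 << 48)
    low32 = word & 0xFFFFFFFF        # sent in the 32-bit NuBus cycle
    high16 = (word >> 32) & 0xFFFF   # sent in the 16-bit cycle
    return low32, high16

def join_48bit_word(low32, high16):
    """Reassemble the word on the receiving side."""
    return (high16 << 32) | low32
```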
I may be confused by this, but the literature I've read about the
XL400 implies that its memory organization includes 8 bits of error
correction/detection, and that its Ivory board handles the ECD itself.
Is the MacIvory set up that way too?
All Symbolics machines use error-correcting memory to ensure reliability
(too bad some of the older machines don't have processors that are as
reliable as the memory!). In Ivory-based machines the error detection
and correction logic is integrated into the Ivory chip.
Since this is a substantial amount
of memory overhead (and maybe silicon, too), I was wondering what the
design argument was to add it in. Is the memory potentially bad enough
to require a lot of error-fixing overhead?
Dynamic RAMs, the cheapest form of memory, are not designed to be
perfectly reliable. Perfect reliability would make them many times more
expensive than they are, which hardware manufacturers consider a poor
tradeoff; I'm inclined to agree. The rate of errors in an
individual RAM chip is pretty low, but when you have hundreds or
thousands of RAMs in a system and thousands or millions of systems in
use, it adds up.
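To put rough numbers on "it adds up": with an assumed per-chip soft-error rate, the expected error rate scales linearly with chip count and installed base. Every figure below is an illustrative round number, not a measured rate for any Symbolics machine.

```python
# Illustrative arithmetic only -- all rates below are assumed round
# numbers, not measurements.
errors_per_chip_per_year = 0.01   # assumed: one soft error per chip per century
chips_per_machine = 300           # assumed: a few hundred DRAMs per system
machines_in_field = 10_000        # assumed installed base

per_machine = errors_per_chip_per_year * chips_per_machine
fleet_total = per_machine * machines_in_field
print(per_machine)   # expected errors per machine per year
print(fleet_total)   # expected errors across the whole fleet per year
```

Even a rate that is negligible for one chip becomes several errors per machine per year, and tens of thousands per year across a fleet.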
Dynamic RAMs have a small but finite error rate, so it's necessary to do
something to compensate. One approach is to use error-correcting memory
so the user is insulated from the errors. Another approach is to use
parity, so the machine crashes reliably when an error occurs instead of
doing something unpredictable. The last approach is to ignore the issue
and hope the problem won't happen often enough for the user to notice;
this can be viable in machines that have a small amount of memory (hence
a fairly low error rate) and users who won't know the difference.
Symbolics of course finds only the first approach acceptable, since we
think the computer should help you, not hinder you.
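As a toy illustration of the error-correcting approach, here is the classic Hamming(7,4) single-error-correcting code in Python. Real ECC memory uses a wider SEC-DED code (e.g. 8 check bits over a 40-bit word, giving the 20% overhead), but the principle, recomputing parity to locate and flip the bad bit, is the same.

```python
def encode(data):
    """Encode 4 data bits as a Hamming(7,4) codeword (positions 1..7).
    Parity bits sit at positions 1, 2, and 4."""
    c = [0] * 8                       # c[1..7]; c[0] unused
    c[3], c[5], c[6], c[7] = data
    c[1] = c[3] ^ c[5] ^ c[7]         # covers positions with bit 0 set
    c[2] = c[3] ^ c[6] ^ c[7]         # covers positions with bit 1 set
    c[4] = c[5] ^ c[6] ^ c[7]         # covers positions with bit 2 set
    return c[1:]

def decode(codeword):
    """Correct up to one flipped bit and return the 4 data bits."""
    c = [0] + list(codeword)          # 1-indexed positions 1..7
    syndrome = (((c[4] ^ c[5] ^ c[6] ^ c[7]) << 2)
                | ((c[2] ^ c[3] ^ c[6] ^ c[7]) << 1)
                | (c[1] ^ c[3] ^ c[5] ^ c[7]))
    if syndrome:                      # nonzero syndrome names the bad position
        c[syndrome] ^= 1
    return [c[3], c[5], c[6], c[7]]
```

A SEC-DED variant adds one overall parity bit so that double errors are detected (and reported) rather than miscorrected, which is what ECC memory actually ships.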
I think the 20% overhead for error correcting memory is well worthwhile.
Think of it as $50/megabyte and it doesn't seem very expensive insurance.
The overhead in chip area within the Ivory chip for ECC is quite small.
Also putting ECC on-chip makes it possible to do it in parallel with
computation, saving time. I'm rather surprised that some people are
still designing microprocessors without on-chip ECC as an option.
I think that the 3600 architecture allowed for up to 34 tags. The
Ivory, with a full 6-bit tag field (assuming you're still using 2 bits
out of the 40 for a cdr code), lets you have up to 64. Aren't there
some big incompatibility potentials here?
How much of your own code depends on the exact number of tag bits? None
of it, I bet. Almost all code is written at a higher level of abstraction.
A small amount of low-level Symbolics code is affected by this architectural
difference, but it isn't much.