[vcf-midatlantic] Original Macintosh Architecture Questions

David Riley fraveydank at gmail.com
Sat Jun 26 02:05:56 UTC 2021


On Jun 25, 2021, at 9:01 AM, John Heritage <john.heritage at gmail.com> wrote:
> 
> Wow!  Good articles and thanks David!  Since you mentioned you loved the original hardware design of the Mac, I hope you won't mind a few more questions.. 
> 
> Interesting on the IWM making it to the Mac,  I knew that was a really innovative chip on the Apple II, but didn't realize it also was used in early Macs.

The IWM was, essentially, an IC version of the Disk II controller (there are a few differences, but that's pretty close).  I think it came to the Mac first, since the Apple //e came after the Apple /// tanked, but I could have my timelines mixed up there.

> What was the reason the PC used 9 sectors per track while the Mac and Amiga used 10/11 sectors per track each on the 3.5" floppies?  (I assume the ST defaulted to 9 sectors per track to stay compatible with DOS disks..)  Was it because the PC technically predated the original Mac and Amiga/ST and the early drives/disks were risky with higher densities?  Did Apple trade anything off for 400KB floppies in the Mac?

The short answer is that 400/800K floppies used GCR encoding for their low-level format, while 360/720K (and also 1440K) floppies used MFM. They're just two different encoding methods; I don't think one is necessarily superior to the other, beyond the fact that you can pack a little more data into a track with GCR encoding.  MFM was the de facto standard on the IBM side for floppies (it's what the original IBM floppy controllers supported), so when Apple added HD capability with the SWIM chip, they went with MFM to be compatible with PCs (thus also gaining compatibility with DD 360/720K disks).
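To make the difference concrete, here's a rough sketch of the MFM encoding rule in Python -- not any particular controller's implementation, just the textbook rule: every data bit gets a clock bit in front of it, and the clock bit is 1 only when both neighboring data bits are 0 (so there's always a flux transition often enough to recover timing).

```python
def mfm_encode(data_bits, prev_bit=0):
    """Encode a sequence of data bits as MFM channel bits.

    Each data bit is preceded by a clock bit; the clock bit is 1
    only when both the previous and current data bits are 0.
    So MFM spends 2 channel bits per data bit, where Apple's
    6-and-2 GCR spends 8 channel bits per 6 data bits (~1.33).
    """
    out = []
    prev = prev_bit
    for bit in data_bits:
        clock = 1 if (prev == 0 and bit == 0) else 0
        out.extend([clock, bit])
        prev = bit
    return out

# Data 1,0,1 never triggers an inserted clock pulse:
print(mfm_encode([1, 0, 1]))   # [0, 1, 0, 0, 0, 1]
# A run of zeros gets clock pulses to keep the PLL locked:
print(mfm_encode([0, 0]))      # [1, 0, 1, 0]
```

The raw ratios (2 channel bits per data bit for MFM vs. ~1.33 for 6-and-2 GCR) aren't directly comparable on their own, since the two codes have different run-length constraints, but they're the root of why GCR could squeeze 400K out of a single side where MFM got 360K.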

The "ISM" (Integrated Sander Machine, AFAICT, as that part was designed by Wendell Sander) portion of the SWIM was basically a new decoding machine that could handle both MFM and GCR with a lot of the same circuitry and state machines. It's really a brilliant design; you can read the SWIM chip design docs on the Archive somewhere if you're curious.  Really neat stuff.  Later SWIM chips dropped the IWM portion entirely, if memory serves, because the ISM worked well enough for both modes and didn't require the CPU overhead that the IWM did (mostly owing to the tradeoffs that made the Disk II controller such simple hardware; it's basically a step away from bit-banged).

> Hardware wise, is it fair to say that the Mac probably cost less to manufacture in ~ 1985 than say the Amiga 1000 or Atari 520ST?    The Amiga had several custom chips, and even the ST seemed to have more ICs on board, though monitors were optional for both.  (I think in 1984 RAM was still very expensive so I can see why the 128KB Mac would be high, especially as an early adopter premium).   I always assumed the Mac premium was due to its software library and 'brand' (at least in the US) at the time, but curious if there were any hardware reasons for the cost.

I don't know if I'd say that... what the Amiga and ST cost in custom chips, the Mac probably made up for with rather expensive PALs, though they did at least use the mask-programmed version of them (HALs) once they'd finalized the design, which would have saved a fair amount at those volumes.  PALs were off the shelf, which made it much easier to prototype, but in volume, a custom chip can be cheaper.  I don't know the relative manufacturing prices, but I'd guess that the Mac cost a bit more to build with the CRT and supporting circuitry.

Of course, there's an article about exactly this, too: https://www.folklore.org/StoryView.py?project=Macintosh&story=Price_Fight.txt

> Last question -- were there 'fast ram' upgrades for the original Macs (say pre-1990) that would allow the 68000 access to local memory for faster execution than RAM on the shared bus?  or did all RAM have to be shared?   It looks like the ram size limitations on the early Macs were relatively close to what the 68000 could do.. 

The original Mac was a pretty simple machine; everything lived on the 68000 bus, for the most part (the 68000, as you may know, also had some special circuitry for directly addressing 6800-bus-compatible devices, which I think they used for the VIA and the Z8530 serial chip, possibly some others, but it was all still the same bus with some slightly different control signals).  I have a 128K with a Max II upgrade (a pass-through board for the CPU socket that expanded RAM to 1MB and added Mac Plus-ish ROMs; we always called it a "Mac Minus" when I was growing up), but as far as I know the most you could really do to speed up the RAM on that system would be to shift the timings like the Classic did.

The original 68K had a 24-bit address bus, so it could address up to 16MB, but not all of that could be RAM because you have to leave room for I/O devices and the ROM. Unlike Intel CPUs, the 68K didn't have the (IMO) silly notion of a separate I/O space, which really just amounts to one extra address bit behind a more complex chip select arrangement, reachable only through a separate, more limited set of instructions.  (These days, even on PCs, most PCIe memory is mapped into main memory space, but PCI, and thus PCIe, still has the notion of I/O space because it has to support legacy PC peripherals like the keyboard controller and serial ports that the architecture expects to find at I/O addresses.)  Instead, like most other computers, the 68K put all the I/O alongside the memory, and you need to make room for a reasonable amount of ROM, so 4MB wound up being the practical RAM limit you generally see in the Classic and SE, because then the chip select can switch on just the two high address bits.
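That "switch on the two high address bits" idea can be sketched in a few lines of Python.  This is a hypothetical decode, illustrating the scheme rather than the Mac's actual memory map (the real map had more regions and some overlays):

```python
RAM, ROM, IO = "RAM", "ROM", "I/O"

def chip_select(addr):
    """Hypothetical chip select for a 24-bit address space,
    decoded from only the top two address bits (A23, A22).
    Each region is therefore 4MB -- which is why 4MB is the
    natural RAM ceiling under this scheme."""
    top = (addr >> 22) & 0b11   # the two highest address bits
    if top == 0b00:
        return RAM              # 0x000000-0x3FFFFF: up to 4MB RAM
    elif top == 0b01:
        return ROM              # 0x400000-0x7FFFFF: ROM
    else:
        return IO               # everything above: memory-mapped I/O

print(chip_select(0x000000))   # RAM
print(chip_select(0x3FFFFF))   # RAM (last byte of the 4MB window)
print(chip_select(0x400000))   # ROM
print(chip_select(0xD00000))   # I/O
```

The appeal of decoding only two bits is that it takes almost no logic -- a couple of gates or one PAL term per select line -- at the cost of wasting most of each 4MB window on devices that need only a handful of addresses.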

The RAM chips in the 128K were 16x 4164 (64k x 1 bit) which were relatively new at the time; the Apple II and II+ (effectively the same board) used the older 4116s, which required additional +12v and -5v supplies and tended to die fairly often, while the 4164 only required 5v and was 4x the RAM in the same footprint (reclaiming the two supply pins left room for an additional address pin, which is used for both row and column addresses).  The 512K swapped the 4164s for 41256s, which as you might guess are 256k x 1 bit; those used the other bonus pin for another address line. Later boards are dual-purpose with some stuffing options determining whether they're 128K or 512K (a few extra resistors and I think one more address pin multiplexer that's absent from the 128K board for the additional address bit).
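The row/column multiplexing mentioned above is easy to show in miniature.  This sketch (my own illustration, not board-specific logic) splits a full DRAM address into the two halves that get presented on the same pins, and checks the capacity arithmetic for the 128K board:

```python
def mux_address(addr, pins=8):
    """Split a DRAM address into row and column halves that share
    the same multiplexed address pins: 8 pins for a 64Kx1 4164,
    9 pins for a 256Kx1 41256 (the reclaimed supply pin)."""
    mask = (1 << pins) - 1
    row = (addr >> pins) & mask   # latched by the /RAS strobe
    col = addr & mask             # latched by the /CAS strobe
    return row, col

print(mux_address(0xABCD))             # (0xAB, 0xCD) on a 4164
print(mux_address(0x1FFFF, pins=9))    # (0xFF, 0x1FF) on a 41256

# Capacity check: 8 multiplexed pins -> 2^16 one-bit cells per
# chip, and 16 chips give a 16-bit-wide 128KB array:
bytes_total = (1 << (2 * 8)) * 16 // 8
print(bytes_total)                     # 131072, i.e. 128KB
```

The same arithmetic with 9 pins gives 2^18 bits per 41256, so the 512K board is again exactly 16 chips.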

Later machines used 30-pin SIMMs, which made it a lot easier to pack more memory in a machine (the Max II board I mentioned is jam-packed full of ZIP memory chips, which were rather unpleasant things).  With a standard pinout, you can fit up to 16MB on a 30-pin SIMM, but it wouldn't have done much good for a machine with a 24-bit address bus, so the 68000 machines really only supported up to 1MB SIMMs (installed in pairs, because those are 8-bit SIMMs, which is why you never see a single 4MB SIMM on those machines).

It wasn't until the II series and its cousins (Quadras, LCs, etc.) that you started seeing memory buses decoupled from the processor bus.  The 68000 CPU bus was simple enough that you could lash everything to it with some PALs (still more complex than the 6800/6502-style bus of the Apple II, but only just); the later buses got much faster and more complex, requiring custom ASICs to interface, not to mention the variety of expansion interfaces, particularly NuBus, which also required complex adaptation.


- Dave



More information about the vcf-midatlantic mailing list