re: John Heritage Subject: Modem Error Correction - when/what baud/bps?
My questions are: is this a correct recollection? And when did error correction become pretty standard ...
My memory may have dropped a few bits, but I remember that error correction was first in the protocol the programs implemented, not in the modem. Early modems had very little signal processing; the first ones didn't even auto-dial! "Smart modems" added a microcontroller mostly for autodialing. That's why xmodem, uucp and such were developed: a checksum or CRC (cyclic redundancy check) was added to each block, and any error caused the entire block to be sent again. Once microcontrollers were cheap enough, error correction moved into the modem hardware (so long as both modems supported the same "standard").

There were several totally incompatible families of modems:
- async (home users, BBSes)
- synchronous (mainframe links)
- PEP (Telebit Trailblazers, favored by Unix/uucp)
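As a rough sketch of how those program-level schemes worked -- loosely modeled on XMODEM's 128-byte blocks with a one-byte checksum; the send_block callback and names here are just illustrative, not a faithful XMODEM implementation:

    # Toy illustration of program-level error correction: split the data into
    # fixed-size blocks, append a checksum, and resend any block the receiver
    # reports as corrupted.  Not a real XMODEM clone.

    BLOCK_SIZE = 128  # original XMODEM used 128-byte blocks

    def checksum(block: bytes) -> int:
        """XMODEM-style checksum: arithmetic sum of the bytes, modulo 256."""
        return sum(block) % 256

    def send_file(data: bytes, send_block, max_retries: int = 10) -> None:
        """Send `data` one block at a time.  `send_block(payload, csum)` is a
        hypothetical callback that transmits one block and returns True if the
        far end acknowledged it (ACK) or False if it asked for a resend (NAK)."""
        for offset in range(0, len(data), BLOCK_SIZE):
            block = data[offset:offset + BLOCK_SIZE].ljust(BLOCK_SIZE, b'\x1a')
            for attempt in range(max_retries):
                if send_block(block, checksum(block)):
                    break  # receiver's checksum matched; move on to the next block
            else:
                raise IOError("too many retries on block at offset %d" % offset)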
For example, an early 300 or 1200 baud "client and BBS" (or server) link was asynchronous: it would typically send the 8 bits of an ASCII byte plus an additional 2 bits -- a start bit and a stop bit ...
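A quick back-of-envelope calculation of that framing overhead, assuming the common 8-N-1 setup (one start bit, eight data bits, one stop bit):

    # Effective character throughput for asynchronous 8-N-1 framing:
    # each byte costs 1 start + 8 data + 1 stop = 10 bits on the wire.
    BITS_PER_CHAR = 1 + 8 + 1

    for line_rate in (300, 1200, 2400):  # bits per second
        chars_per_sec = line_rate / BITS_PER_CHAR
        overhead = (BITS_PER_CHAR - 8) / BITS_PER_CHAR
        print(f"{line_rate:5d} bps -> {chars_per_sec:6.1f} chars/sec "
              f"({overhead:.0%} of the line spent on framing)")

So a 300 bps line moves only about 30 characters per second, with 20% of the bits spent on framing rather than data.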
Once modems gained DSPs (digital signal processors), the data format you "fed" them externally bore little relationship to the symbols exchanged between the modems. External modems usually use asynchronous serial via RS-232 to a UART; internal modems sit on the parallel bus, so there are no start/stop bits on the host side.

Early modems were FSK (frequency-shift keying): one tone for a '1' (mark), another tone for a '0' (space) -- thus the museum's all-discrete-parts acoustic-coupled modem in the wood box. Faster modems modulate the phase of the carrier (PSK), and faster still vary phase and amplitude together (QAM), so each symbol carries several bits. See:
https://en.wikipedia.org/wiki/Phase-shift_keying
https://en.wikipedia.org/wiki/Constellation_diagram

enough?
-- Jeff Jonas
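P.S. For anyone who wants to play with the FSK idea, here's a toy sketch that renders a bit string as two alternating tones, using the Bell-103-style originate frequencies (1070 Hz = space/'0', 1270 Hz = mark/'1'). Purely illustrative; it doesn't implement any real modem standard:

    import math

    # Toy FSK illustration: one tone per bit, continuous phase across bits.
    SAMPLE_RATE = 8000            # audio samples per second
    BAUD = 300                    # symbols (here: bits) per second
    FREQ = {0: 1070.0, 1: 1270.0} # space / mark tones (Bell-103-style originate side)

    def fsk_samples(bits):
        """Yield audio samples (floats in [-1, 1]) for a sequence of bits."""
        samples_per_bit = SAMPLE_RATE // BAUD
        phase = 0.0
        for bit in bits:
            step = 2 * math.pi * FREQ[bit] / SAMPLE_RATE
            for _ in range(samples_per_bit):
                yield math.sin(phase)
                phase += step

    # Example: the ASCII letter 'A' (0x41) framed as 8-N-1, LSB first,
    # with a start bit (0) in front and a stop bit (1) behind.
    framed = [0] + [(0x41 >> i) & 1 for i in range(8)] + [1]
    audio = list(fsk_samples(framed))
    print(f"{len(framed)} bits -> {len(audio)} samples "
          f"({len(audio) / SAMPLE_RATE * 1000:.1f} ms of audio)")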