On 01/21/2018 07:25 AM, John Heritage via vcf-midatlantic wrote:
IIRC -- Part of the reason modem error correction was a big deal was that, in addition to improving transfer accuracy (i.e. fewer corrupted or lost bytes), error correction typically reduced the number of transmitted bits per byte from 10 to 8.
i.e. an early 300 or 1200 baud "client and BBS" (or server) setup would typically send the 8 bits of an ASCII byte* plus an additional 2 bits (a start and a stop bit, or similar), but later error-correcting modems (i.e. 9600 bps or higher) would only need the 8 bits.
In practice, a 2400 bps modem without error correction would be limited to ~240 characters per second, while the same modem with error correction would reach ~300 characters per second of raw throughput.
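(For concreteness, a minimal sketch of that framing arithmetic in Python -- assuming 8-N-1 async framing, i.e. 1 start + 8 data + 1 stop = 10 bits on the wire per character, versus 8-bit synchronous framing on an error-corrected link, and ignoring protocol overhead:)

    # Characters per second for a given line rate and bits sent per character.
    def chars_per_second(line_bps, bits_per_char):
        return line_bps / bits_per_char

    # Async 8-N-1: 1 start + 8 data + 1 stop = 10 bits on the wire per character.
    print(chars_per_second(2400, 10))  # 240.0 cps without error correction
    # Synchronous 8-bit framing under an error-correcting protocol.
    print(chars_per_second(2400, 8))   # 300.0 cps ceiling with error correction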
My questions are -- is this a correct recollection? And when did error correction become pretty standard -- was it the 2400 "baud" era, or was it more like 9600 bps and above?
The "MNP" protocols (Microcom Network Protocol) implemented the type of error correction that you're talking about in the 2400 baud era, and 2400 baud modems were the first I saw that stuff on. MNP-5 was wonderful (I had an early MNP-5 modem) because it added data compression, which not only compensated for the bandwidth-reducing error correction overhead, but got you a bit over 2400 baud too. Of course you had to have an MNP-5 capable modem on the other end, or it would just do regular 2400 baud phase-modulated tones. -Dave -- Dave McGuire, AK4HZ New Kensington, PA