Modem Error Correction - when/what baud/bps?
Techie questions --

IIRC -- Part of the reason modem error correction was a big deal was because in addition to improving transfer accuracy (i.e. less loss), error correction typically reduced the number of transmitted bits per byte from 10 to 8.

i.e. An early 300 or 1200 baud "client and BBS" (or server) would typically send the 8 bits for an ASCII byte* and an additional 2 bits -- a start and stop bit, or similar, but later error correcting modems (i.e. 9600bps or higher) would only need 8 bits.

In practice, a 2400bps modem without error correction would be limited to ~240 characters per second, while the same modem with error correction would be ~300 characters per second raw throughput.

My questions are -- is this a correct recollection? And when did error correction become pretty standard -- was it the 2400 "baud" era or was it more like 9600 bps and above?

I was lucky enough to go from 1200 bps to 9600 bps in one jump, though later someone gifted me a 2400 baud modem that also had error correction, and I used that for a second machine in the house.

Thanks,
John

*Yes, ASCII wasn't always a standard :)
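A quick back-of-the-envelope sketch of that arithmetic (Python, assuming the usual 8-N-1 async framing of 1 start bit + 8 data bits + 1 stop bit; the figures are illustrative, not measurements):

# Rough throughput estimate for async vs. synchronous-framed modem links.
# Illustrative only -- real throughput also depends on link-protocol frame
# overhead, retransmissions, and line quality, which are ignored here.

def chars_per_second(line_rate_bps, bits_per_char):
    """Effective character throughput for a given per-character framing."""
    return line_rate_bps / bits_per_char

# Classic async framing: 1 start + 8 data + 1 stop = 10 bits per character.
print(chars_per_second(2400, 10))  # -> 240.0 cps, the "no error correction" figure

# Error-correcting link protocols (MNP, V.42/LAPM) carry the data between
# modems in synchronous frames, so the per-character start/stop bits
# disappear on the wire:
print(chars_per_second(2400, 8))   # -> 300.0 cps, the "with error correction" figure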
On 01/21/2018 07:25 AM, John Heritage via vcf-midatlantic wrote:
> Techie questions --
>
> IIRC -- Part of the reason modem error correction was a big deal was because in addition to improving transfer accuracy (i.e. less loss), error correction typically reduced the number of transmitted bits per byte from 10 to 8.
>
> i.e. An early 300 or 1200 baud "client and BBS" (or server) would typically send the 8 bits for an ASCII byte* and an additional 2 bits -- a start and stop bit, or similar, but later error correcting modems (i.e. 9600bps or higher) would only need 8 bits.
>
> In practice, a 2400bps modem without error correction would be limited to ~240 characters per second, while the same modem with error correction would be ~300 characters per second raw throughput.
>
> My questions are -- is this a correct recollection? and when did error correction become pretty standard -- was it the 2400 "baud" era or was it more like 9600 bps and above?
I think it started with the 9600 baud era. I know it was there for 14.4 and above.
I was lucky enough to go from 1200 bps to 9600 bps in one jump, though later someone gifted me a 2400 baud modem that also had error correction, and I used that for a second machine in the house..
I started with 110 (300 baud frequencies but sending 110 baud). I really wasn't that technical with the modem until later. I seem to recall that when we hit 9600 we were no longer sending bits but rather analog that represented the bits (like 4 bits). But it also depended on the quadrature of the signal. Sorry, my memory is not so good on this subject anymore. I do recall that we had to stop saying bits per second because it was no longer really bits, it was baud.

--
Linux Home Automation    Neil Cherry    ncherry@linuxha.com
http://www.linuxha.com/         Main site
http://linuxha.blogspot.com/    My HA Blog
Author of: Linux Smart Homes For Dummies
On 01/21/2018 09:15 AM, Neil Cherry via vcf-midatlantic wrote:
> I seem to recall that when we hit 9600 we were no longer sending bits but rather analog that represented the bits (like 4 bits). But it also depended on the quadrature of the signal. Sorry, my memory is not so good on this subject anymore. I do recall that we had to stop saying bits per second because it was no longer really bits, it was baud.
They're modems... they've *always* sent analog that represented the bits. But when you say "like 4 bits", perhaps you're referring to the phase modulation types, which can send multiple bits simultaneously. In mainstream modems, that started with 1200 baud. That's why 300 baud sounds like a tone (which is all it is), and 1200 baud sounds like noise to human auditory perception.

-Dave

--
Dave McGuire, AK4HZ
New Kensington, PA
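To put rough numbers on the baud-vs-bps distinction (the specific standards and bits-per-symbol figures below are illustrative examples, not taken from the thread):

# Symbol rate (baud) vs. bit rate (bps): a modem sends symbols, and a symbol
# can encode more than one bit by combining phase and amplitude states.
# Figures below correspond to common ITU-T/Bell standards, for illustration.

def bit_rate(symbol_rate_baud, bits_per_symbol):
    """Line bit rate implied by a symbol rate and bits carried per symbol."""
    return symbol_rate_baud * bits_per_symbol

print(bit_rate(300, 1))   # Bell 103 / V.21 FSK:  300 baud x 1 bit/symbol  =  300 bps
print(bit_rate(600, 2))   # V.22 DPSK:            600 baud x 2 bits/symbol = 1200 bps
print(bit_rate(600, 4))   # V.22bis QAM:          600 baud x 4 bits/symbol = 2400 bps
print(bit_rate(2400, 4))  # V.32 QAM/TCM:        2400 baud x 4 data bits   = 9600 bps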
On Sun, Jan 21, 2018 at 07:25:34AM -0500, John Heritage via vcf-midatlantic wrote:
> In practice, a 2400bps modem without error correction would be limited to ~240 characters per second, while the same modem with error correction would be ~300 characters per second raw throughput.
>
> My questions are -- is this a correct recollection? and when did error correction become pretty standard -- was it the 2400 "baud" era or was it more like 9600 bps and above?
Looks like it was introduced during the 2400 bps era but became common at 9600 and above.

https://en.wikipedia.org/wiki/Microcom_Networking_Protocol
http://www.astro.rug.nl/~vogelaar/modems.html
On 01/21/2018 07:25 AM, John Heritage via vcf-midatlantic wrote:
> IIRC -- Part of the reason modem error correction was a big deal was because in addition to improving transfer accuracy (i.e. less loss), error correction typically reduced the number of transmitted bits per byte from 10 to 8.
>
> i.e. An early 300 or 1200 baud "client and BBS" (or server) would typically send the 8 bits for an ASCII byte* and an additional 2 bits -- a start and stop bit, or similar, but later error correcting modems (i.e. 9600bps or higher) would only need 8 bits.
>
> In practice, a 2400bps modem without error correction would be limited to ~240 characters per second, while the same modem with error correction would be ~300 characters per second raw throughput.
>
> My questions are -- is this a correct recollection? and when did error correction become pretty standard -- was it the 2400 "baud" era or was it more like 9600 bps and above?
The "MNP" protocols (Microcom Network Protocol) implemented the type of error correction that you're talking about in the 2400 baud era, and 2400 baud modems were the first I saw that stuff on. MNP-5 was wonderful (I had an early MNP-5 modem) because it added data compression, which not only compensated for the bandwidth-reducing error correction overhead, but got you a bit over 2400 baud too. Of course you had to have an MNP-5 capable modem on the other end, or it would just do regular 2400 baud phase-modulated tones. -Dave -- Dave McGuire, AK4HZ New Kensington, PA
participants (4)
- Dave McGuire
- David Gesswein
- John Heritage
- Neil Cherry