On 01/21/2018 07:25 AM, John Heritage via vcf-midatlantic wrote:
> Techie questions --
> IIRC -- part of the reason modem error correction was a big deal was that, in addition to improving transfer accuracy (i.e. less loss), error correction typically reduced the number of transmitted bits per byte from 10 to 8.
> That is, an early 300 or 1200 baud "client and BBS" (or server) would typically send the 8 bits of an ASCII byte plus an additional 2 bits -- a start and a stop bit, or similar -- but later error-correcting modems (e.g. 9600 bps or higher) would only need the 8 bits.
> In practice, a 2400 bps modem without error correction would be limited to ~240 characters per second, while the same modem with error correction would manage ~300 characters per second of raw throughput.
> My questions are: is this a correct recollection? And when did error correction become pretty standard -- was it the 2400 "baud" era, or was it more like 9600 bps and above?
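The quoted arithmetic works out if you assume standard 8N1 async framing (1 start + 8 data + 1 stop = 10 bits on the wire per character) versus the bare 8 data bits per character that an error-corrected (V.42/MNP-style) link carries between the modems. A minimal sketch, with idealized numbers:

```python
# Rough character throughput: async 8N1 framing (10 bits per character)
# versus an error-corrected link carrying just the 8 data bits per character.
# Idealized -- ignores line noise, retrains, and protocol overhead.

def chars_per_second(line_bps, bits_per_char):
    return line_bps / bits_per_char

for line_bps in (300, 1200, 2400, 9600):
    plain_cps = chars_per_second(line_bps, 10)  # start + 8 data + stop
    ecc_cps = chars_per_second(line_bps, 8)     # start/stop bits stripped on the link
    print(f"{line_bps:>5} bps: ~{plain_cps:.0f} cps plain, ~{ecc_cps:.0f} cps error-corrected")
```

At 2400 bps that gives the ~240 vs. ~300 characters per second mentioned above.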
I think it started with the 9600 baud era. I know it was there for 14.4 and above.
I was lucky enough to go from 1200 bps to 9600 bps in one jump, though later someone gifted me a 2400 baud modem that also had error correction, and I used that for a second machine in the house.
I started with 110 baud (the 300 baud frequencies, but sending at 110). I really wasn't that technical about the modem until later. I seem to recall that when we hit 9600 we were no longer sending individual bits but analog symbols that each represented several bits (something like 4 bits per symbol), and it also depended on the quadrature of the signal. Sorry, my memory is not so good on this subject anymore. I do recall that we had to stop using "baud" and "bits per second" interchangeably, because each signal change now carried more than one bit, so the symbol rate and the bit rate were no longer the same number.

--
Linux Home Automation         Neil Cherry       ncherry@linuxha.com
http://www.linuxha.com/                         Main site
http://linuxha.blogspot.com/                    My HA Blog
Author of:    Linux Smart Homes For Dummies
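A small addendum on that last point: bit rate = symbol rate (baud) times data bits per symbol. A minimal sketch, assuming the commonly cited ITU modem parameters (quoted from memory, not from the thread above):

```python
# Why "baud" and "bits per second" stopped being interchangeable:
# bit rate = symbol rate (baud) x data bits per symbol.
# Parameter values are the commonly cited ITU figures (assumed, not from the thread).

modems = [
    # (modem,          baud, data bits per symbol)
    ("V.22    1200",    600, 2),
    ("V.22bis 2400",    600, 4),
    ("V.32    9600",   2400, 4),
    ("V.32bis 14400",  2400, 6),
]

for name, baud, bits in modems:
    print(f"{name}: {baud} baud x {bits} bits/symbol = {baud * bits} bps")
```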