Techie questions -- IIRC, part of the reason modem error correction was a big deal was that, in addition to improving transfer accuracy (i.e. less data loss), it typically reduced the number of transmitted bits per byte from 10 to 8. An early 300 or 1200 baud connection between a client and a BBS (or server) would typically send the 8 bits of an ASCII byte* plus 2 extra bits -- a start bit and a stop bit, or similar -- while later error-correcting modems (9600 bps or higher) only needed 8 bits per byte on the wire. In practice, a 2400 bps modem without error correction would be limited to roughly 240 characters per second, while the same modem with error correction could manage roughly 300 characters per second of raw throughput.

My questions: is this a correct recollection? And when did error correction become pretty standard -- was it during the 2400 "baud" era, or was it more like 9600 bps and above?

I was lucky enough to go from 1200 bps to 9600 bps in one jump, though later someone gifted me a 2400 baud modem that also had error correction, and I used that for a second machine in the house.

Thanks,
John

*Yes, ASCII wasn't always a standard :)
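
P.S. For reference, here is the back-of-the-envelope math behind the 240 vs. 300 cps figures. It's just a sketch of my assumption -- that plain async framing is 8N1 (1 start + 8 data + 1 stop = 10 bits per byte on the wire) and that the error-correcting link carries only the 8 data bits per byte -- which is part of what I'm asking people to confirm or correct:

    # Rough characters-per-second math for a 2400 bps line.
    # Assumes 8N1 async framing (10 bits/byte) without error correction,
    # and 8 bits/byte once the modems handle framing themselves.
    line_rate_bps = 2400

    bits_per_byte_async = 10   # 1 start + 8 data + 1 stop
    bits_per_byte_ec = 8       # data bits only (my assumption)

    print(line_rate_bps / bits_per_byte_async)  # 240.0 cps
    print(line_rate_bps / bits_per_byte_ec)     # 300.0 cps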