link-editor fun and games before virtual memory (was C compiler)
The IDE isolated the programmers from full control of the compiler and its code generation options.
Once upon a time, programmers were REQUIRED to understand and sometimes manually configure the link-editor. That allowed for things like load-no-call: load debugging modules/subroutines that were not called anywhere, but were available when debugging.

Before virtual memory, there were overlays. The IBM 1130 made that simple with LOCAL (load on call) and SOCAL (system call load on call) directives. But only one was loaded at a time: one load-on-call could not call another. I think the IBM 360 allowed for specifying a tree of nested overlays.

When using the PIC18, I had to configure the linker to load my "C" code around the boot loader. 'tis the joy of not using absolute addresses for everything. .csect is the IBM pseudo-op for such things: http://www.ibm.com/support/knowledgecenter/ssw_aix_71/com.ibm.aix.alangref/i... Everyone had their own names for such stuff :-(

Anyone who hand-coded assembler did absolute addressing. I worked on a project where the assembler did not generate relocatable code. Everything was absolute addressing, requiring the build-meister to manually map out memory for all modules, storage sections, temporary scratch-pad areas, etc. It was easy to trash memory when anything changed size.

-- jeffj
On 01/21/2017 02:01 PM, Jeffrey Jonas via vcf-midatlantic wrote:
The IDE isolated the programmers from full control of the compiler and its code generation options.
Once upon a time, programmers were REQUIRED to understand and sometimes manually configure the link-editor.
We do that now, in the embedded world. And in kernel development, though that's a lot less common in terms of developer population. I moved some sections around in a GNU ld linker script just yesterday.
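[Moving sections around in a GNU ld script means editing the MEMORY and SECTIONS commands. A minimal illustrative fragment — the region names, origins, and sizes below are invented for the sketch, not taken from the thread:]

```ld
/* Illustrative GNU ld fragment; region names, origins, and sizes are invented. */
MEMORY
{
  FLASH (rx)  : ORIGIN = 0x08000000, LENGTH = 512K
  RAM   (rwx) : ORIGIN = 0x20000000, LENGTH = 128K
}

SECTIONS
{
  .text : { *(.text*) } > FLASH           /* code runs from flash */
  .data : { *(.data*) } > RAM AT> FLASH   /* load image in flash, run address in RAM */
  .bss  : { *(.bss*) *(COMMON) } > RAM    /* zero-initialized data */
}
```

[Re-homing a section is then just moving its output-section rule or changing the region after the `>`.]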
That allowed for things like load-no-call: load debugging modules/subroutines that were not called anywhere, but were available when debugging.
Mmm yes. :-) The story I told a day or two ago in Evan's thread, about reducing the size of a binary by properly configuring the toolchain, involved putting each function in its own named section (the compiler can do that automatically) and then GC-ing out the functions that aren't used. Otherwise all the functions in a given object file get included in the binary even if only one function in that object file is actually called.
Before virtual memory, there were overlays. The IBM 1130 made that simple with LOCAL (load on call) and SOCAL (system call load on call) directives. But only one was loaded at a time: one load-on-call could not call another.
I think the IBM 360 allowed for specifying a tree of nested overlays.
Very nice. One of the major PDP-11 OSs, RSX-11, has incredibly powerful overlay functionality to allow for very large programs. That's all controlled through TKB, the TasK Builder, which is RSX-11's analog of the linker.
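[For the record, the OS/360 linkage editor expressed that overlay tree with OVERLAY and INSERT control statements. A rough sketch from memory — segment and CSECT names are invented, and the column layout is not guaranteed to be exact:]

```
 ENTRY MAIN
 INSERT MAIN
 OVERLAY ONE
 INSERT PHASE1
 OVERLAY ONE
 INSERT PHASE2
```

[MAIN is the always-resident root; PHASE1 and PHASE2 both hang off the node named ONE, so they share the same origin and only one is in core at a time. Nesting further OVERLAY statements under a segment builds the tree.]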
When using the PIC18, I had to configure the linker to load my "C" code around the boot loader. 'tis the joy of not using absolute addresses for everything.
.csect is the IBM pseudo-op for such things: http://www.ibm.com/support/knowledgecenter/ssw_aix_71/com.ibm.aix.alangref/i... Everyone had their own names for such stuff :-(
Anyone who hand-coded assembler did absolute addressing. I worked on a project where the assembler did not generate relocatable code. Everything was absolute addressing, requiring the build-meister to manually map out memory for all modules, storage sections, temporary scratch-pad areas, etc. It was easy to trash memory when anything changed size.
Ugh, automatic section sizing and overflow warnings help us with that stuff nowadays, thank heaven.

     -Dave

--
Dave McGuire, AK4HZ
New Kensington, PA