What was the first mainstream advanced debugger?
What was the first mainstream development toolchain that provided advanced debugging facilities?
By "mainstream" I mean something that was available on home computers, not mainframes or super-expensive workstations. It also implies that it was commercially available (not custom-developed tools that never escaped the garage of some mad scientist).
My definition of "advanced debugging facilities" includes at least the following features:
- Setting breakpoints without having to define them explicitly in the source code (e.g. using the STOP statement in the middle of a GW-BASIC program does not qualify)
- Stepping through statements at the source-code level (not having to resort to low-level single-machine-instruction stepping)
- Dumping the call stack, showing the actual function names from the source code
- Showing values of global and local variables without having to look at raw memory / CPU registers and know the ABI specs by heart
Any language qualifies (well, any language that supports function calls, given the requirement on being able to view some call stack).
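For a concrete feel of what these facilities do under the hood, here is a minimal, purely illustrative sketch in modern Python: a trace hook steps through another function line by line and snapshots its local variables, without any STOP-style statement in the target code.

```python
import sys

def target(n):
    # Ordinary code under debug -- note: no STOP/breakpoint statements here.
    total = 0
    for i in range(n):
        total += i
    return total

trace_log = []  # (function name, line number, snapshot of locals)

def tracer(frame, event, arg):
    # Called by the interpreter for every event; we log only 'line'
    # events inside target(), i.e. source-level single-stepping.
    if event == "line" and frame.f_code.co_name == "target":
        trace_log.append((frame.f_code.co_name,
                          frame.f_lineno,
                          dict(frame.f_locals)))
    return tracer

sys.settrace(tracer)
result = target(3)
sys.settrace(None)

print(result)              # 3
print(len(trace_log) > 0)  # True
```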
software-development debugging
4
The question is at the same time overly broad and way too narrow, as it implies the use of a high-level language as well as stack-based function calls, and then narrows down to home computers - devices that could by definition only offer a very limited amount of the requested features. Any answer will be arguable at best.
– Raffzahn
Dec 6 at 20:46
15
Your definition of “advanced debugging” is troublesomely low-level-imperative oriented. (I wouldn't even call that advanced.) The first proper debuggers were arguably Lisp interpreters, of course not with breakpoints and such nonsense but with code-as-data and REPLs.
– leftaroundabout
Dec 6 at 22:28
6
FYI, "PC" is used to distinguish a single-user computer from a time-sharing computer. The Xerox Alto for example was called a personal computer even though it cost $32,000 in 1979 and was used for research.
– traal
Dec 6 at 23:19
8
I'm gonna guess fly swatter.
– jpmc26
Dec 7 at 0:53
8
Well, this isn't an answer, because it's way outside your definition of "mainstream". But I have to mention it as the grand-daddy of interactive debuggers. It's DDT, the debugger for the DEC PDP-1, a machine delivered in 1961. The debugger, built maybe in 1962, featured breakpoints, variable display in several formats, and code patching on the fly. It was certainly advanced for its time, and was likely an ancestor of the first debugger that meets your criteria. Wikipedia
– Walter Mitty
Dec 7 at 1:18
asked Dec 6 at 20:35 by dim
7 Answers
36 votes, accepted
The "Basic Programming" cartridge for the Atari 2600 came out in 1980, and it supports all of those except the first one. It had windows for the program, stack, variables, and output, which could be individually switched on and off (the cartridge would vertically stack as many enabled windows as would fit, stopping when all enabled windows were shown or it reached the bottom of the screen). Unfortunately, it had to fit in 4K of code space and run on a machine with 128 bytes of RAM (so the program, variables, stack, and output were limited to a total of 64 bytes), but I don't think I've seen anything that sophisticated on any more powerful machines prior to that.
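The window-stacking behavior described above can be reconstructed as roughly the following logic (a hypothetical Python sketch of the described behavior, not the cartridge's actual 6502 code; the window heights are invented for illustration):

```python
def visible_windows(enabled, heights, screen_height):
    # Stack enabled windows top to bottom, stopping as soon as the
    # next one would run past the bottom of the screen.
    shown, used = [], 0
    for name in enabled:
        if used + heights[name] > screen_height:
            break
        shown.append(name)
        used += heights[name]
    return shown

# Invented heights, in character rows, purely for illustration.
heights = {"program": 6, "stack": 3, "variables": 3, "output": 4}
print(visible_windows(["program", "stack", "variables", "output"], heights, 10))
# ['program', 'stack']
```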
LOL. Supercat, that's a great answer. ;) Made my day !
– Raffzahn
Dec 6 at 21:17
1
A very interesting answer in the trivia sense, but falls way short of being a mainstream programming environment.
– Brian H
Dec 7 at 1:15
@BrianH: While I agree it's hardly a mainstream programming environment, the inclusion of variable and stack windows suggests that Warren Robinett had some interesting ideas for what a development environment could do, which might have been implemented in more "mainstream" fashion had he been targeting a more powerful computer.
– supercat
Dec 7 at 5:59
1
@BrianH - Seems like it ought to count as "mainstream" to me. Basic was the language of choice on mass-marketed computers back then, and that cartridge was sold in Sears.
– T.E.D.
Dec 8 at 22:09
1
@BrianH: SO answers are expected to be useful to many people in addition to the person asking the original question. In many cases, people interested in wanting to know about the first X to do Y will be interested in when various features were invented. While Robinett's cartridge was a toy, it demonstrates that many concepts found in mainstream debuggers go back a long time.
– supercat
Dec 10 at 8:14
15 votes
The earliest innovator I know of was Manx. Manx made 'C' programming environments for early, low-cost computers like Apple ][, CP/M-80, MS-DOS, and Motorola 68000.
Manx Aztec C v1.06 had symbolic (source code) debugging support when released in early 1984. Manx provided a utility "SIDSYM", which was used along with their linker's ability to generate symbol tables, to feed source code and symbols into SID. SID was a debugger from Digital Research for CP/M-80 that supported symbol tables. The Aztec C data sheet from 1983 mentions this feature.
The system above was still pretty primitive by modern standards. The "killer feature" was a source-code level debugger that hid most of the machine code details and allowed you to interact with your 'C' code as written, and access run-time variables easily, and set conditional and regular breakpoints with ease. This is the basic user experience we all expect today in Visual Studio, etc.
Manx continued to evolve the source debugging feature. By 1988, their 'C' compiler for the Amiga included a fully visual GUI source debugger that was probably the most advanced for any low-cost PC or home computer at the time. The MS-DOS versions of Aztec 'C' for the 8086 also supported source debugging at the same time, but with a less advanced UI. This feature is described in the manual available online. This was before Borland first shipped their initial version of Turbo Debugger for MS-DOS, as an add-on product to their compilers. Other 3rd parties also made symbolic debuggers for Turbo Pascal, but none that I know of offered as advanced a solution as Manx Aztec C at the time.
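What a symbolic debugger reconstructs from the linker's symbol table - a call stack with real function names and variable values - can be illustrated with a short modern Python sketch (an analogy, not Aztec C's or SID's actual mechanism):

```python
import inspect

def dump_stack():
    # Walk the live call stack, collecting function names and their
    # local variables -- the view a source-level debugger presents
    # instead of raw return addresses.
    return [(info.function, dict(info.frame.f_locals))
            for info in inspect.stack()[1:]]  # skip dump_stack itself

def inner(x):
    y = x * 2
    return dump_stack()

def outer():
    a = 1
    return inner(a)

stack = outer()
names = [name for name, _ in stack]
print(names[:2])         # ['inner', 'outer']
print(stack[0][1]["y"])  # 2
```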
Needless to say, source-level debugging was a "killer feature" for low-cost development environments by 1989. It was not long after that major players like Borland and Microsoft would catch up and begin offering compilers with IDEs and integrated source debugging as we know them today.
You can download Manx Aztec C for various retro computers from the Aztec Museum.
You seem to be praising Aztec C for something SID actually did - Aztec C only provided the input for SID.
– tofro
Dec 8 at 23:49
@tofro Aztec C worked with SID in 1983. By 1988, they included sdb, an advanced source debugger for the time.
– Brian H
Dec 9 at 15:08
10 votes
Microsoft CodeView shipped in 1985 and has most of the features you're asking about (it's been a while, so I can't be certain it has everything, e.g. call stack walking). When I was doing Turbo C support for Borland in '87/'88, support for it was possibly the most requested feature.
You could do source level stepping with Turbo C & symdeb but it wasn't as nice.
Version 2 of Turbo C had a debugger (shipped late '88?) which used the 386 hardware support.
Both of these had dual monitor support which was nice.
What 386 features did the Turbo C 2 debugger use? I remember debugging on my 8086 without missing anything ;-). The dual monitor support was definitely very useful.
– Stephen Kitt
Dec 7 at 12:05
2
More memory (the debugger uses XMS), and hardware breakpoints on memory/IO reads and writes.
– PeterI
Dec 7 at 12:40
1
Ah, now I get it, I thought you were talking about the IDE’s built-in debugger, not Turbo Debugger.
– Stephen Kitt
Dec 7 at 13:45
1
My Borland Turbo C 2.0 Reference Guide has a copyright year of 1988.
– a CVn
Dec 8 at 14:17
1
@wizzwizz4 If the Reference Guide is copyrighted 1988, that's a pretty strong corroboration for the "shipped late '88?" in the answer. At least I would expect the two to largely coincide.
– a CVn
Dec 8 at 16:11
10 votes
This is less of a definitive answer than a musing about some seemingly underlying assumptions within the question. For the question itself, Supercat has already given a near-perfect answer. While it is of the typical kind where everyone knows it's wrong, it does tick all the boxes. I seriously would love to give it double points :)
Different Computer Classes
The most obvious issue with the question may be that it separates different 'classes' of computers - and, more importantly, implies that important advancements were made by some of them first. While it is true that mainframes implemented many solutions first, they were still tied to their repertoire of tools - command-line operation on terminals in almost all cases. And even within the mainframe world, solutions weren't developed for the fun of it, but when needed - much like with minis and micros in parallel or later.
For example, the AID facility of BS2000 (a /370-ish OS) already supported quite complex interactive debugging in the early 80s - but all based on very cryptic command-line arguments.
The need for more comfortable (I would prefer that term over 'advanced', which is more of a marketing blurb) debuggers grew, for one, out of more capable systems and more complex software, but more importantly out of changes in software development. This especially includes the need to dive into large chunks of code generated by less than well-known tools (compilers), imported from libraries, or, worst of all, inherited from prior developers.
This development happened independently of machine type or 'class'.
Source Level Debugging
Even early microcomputer systems offered basic debuggers that went past the all-hex handling of hex kits. For example, the original Apple II Monitor included convenient single-stepping and breakpoints, with disassembly of the instructions. And with the introduction of disk drives and assemblers, many additional tools came to hand, including source-level debuggers. That's the 1978..1980 time frame. Eventually the ORCA system formed a first plateau, supporting very convenient Assembly as well as Pascal and C - noteworthy here that it wasn't until the mid-1980s that C became a professionally supported choice on Apple II (or similar) systems.
In general, it was not only a missing need, but more so the missing capabilities of the systems in use that prevented more comfortable features. To join binary and source code, the latter needed to be at least indexed, something the quite limited memory of these systems could support, if at all, only for extremely small programs - even more so if the source code had to be displayed, which was impossible without multiple disk drives.
Besides Assembly (and the omnipresent BASIC), Pascal was the only other choice. Here the UCSD system was eventually the best known - and it made a great case for how to provide high-level support on small machines. The p-code included many references to the source/module structure by default, and thanks to its module structure, parts of the system could be swapped in on demand.
It wasn't until affordable machines had 512 KiB or more of RAM and hard drives by default that more comfortable features like those requested became possible in most programming environments.
Workstations are Anything But PCs
When comparing machines that were sold as 'workstations' in their time with others available, it becomes clear that there is no fundamental difference from other machines of the same era - except that they usually represent an upper-end configuration.
A Sinclair QL of 1984 used a 7.5 MHz 68008 with 128 KiB RAM and Microdrives, while a SUN 2/50 of the same year had a 10 MHz 68010 with 8 MiB RAM and (at least) one hard disk. The same point can be made for any other computer sold as a workstation - not to mention today's gamer PCs that outclass what many manufacturers sell as workstations :)
Home Computers are Computers at Home
While this sounds nice at first, it's a classic post-hoc fallacy. After all, that would make any computer someone may have at home a "home computer" - of course including any mainframe, mini, or workstation running z/OS or Unix. It's obvious this won't work as a portable classification - most definitely not, as moving the same computer into an office would right away turn it into an office computer, wouldn't it?
The location a computer is set up in cannot retroactively define a class.
Conclusion
The question contains a lot of unnecessary restrictions and implied assumptions that are neither clear nor helpful, and at the same time an unclear definition of exactly what environment is asked for.
It may be useful to drop these restrictions and clarify what kind of functions, and what representation of them, is being asked for.
1
We have a feature supporting upvoting twice. It's called a bounty. You might've heard of it! We've only had three bounties awarded so far (iirc) and four set, and none set on somebody else's question. You'd be the first.
– wizzwizz4♦
Dec 7 at 7:22
Thanks. And I'd be really interested in how the development environment of Smalltalk and Lisp looked in say 1980...
– Martin Schröder
Dec 7 at 9:29
1
@MartinSchröder - uniquely, Smalltalk standardised its development environment at the same time as standardising the language itself, so any modern (unextended) implementation of Smalltalk-80 should provide a development experience that is very similar to the original 1980 implementation. This means that if you download either GNU Smalltalk or Squeak and play with them, other than the extended GUI class frameworks, what you're using is pretty similar to what existed in 1980. Which is to say, a very high quality polished environment, albeit not one that was available on a "home" computer.
– Jules
Dec 7 at 20:17
1
The "other" answer also assumes that everyone believes "advanced" means "has lots more goo, much of which gets in the way of people who want a lot less goo." Or put another way, I did very good debugging in some pretty "extremely feature poor" debuggers and I was very happy with them. I didn't realize that I absolutely had to have source-level debugging ... which often worked very badly because the optimizer had moved my code all over the damned place. I was perfectly content to look at machine instructions. No, really. Much better than broken source-code debugging.
– Julie in Austin
Dec 7 at 20:26
10 votes
I'm including this answer in response to a couple of suggestions, although it does not meet some of the criteria laid out in the question. DDT for the DEC PDP-1 has to be recalled as probably the very first interactive debugger, and most interactive debuggers that came later were indirectly inspired by DDT. (see DDT Writeup).
DDT was built in late 1961 or early 1962 at MIT shortly after the arrival of the PDP-1 (serial number two). The author was Alan Kotok, who later went on to be the principal engineer of the DEC PDP-10, one of the main timesharing systems before the microcomputer made a single user machine economical.
DDT allowed the user to set a breakpoint, examine variables in several convenient formats, alter variables, disassemble instructions, and patch code. This allowed the human to make efficient use of the very scarce machine time available. Some of the features mentioned in the question are lacking, but DDT was very advanced for its timeframe. A more advanced version of DDT was later built for the PDP-6 and the PDP-10. The VAX debugger was probably inspired by these.
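The classic breakpoint mechanism in this lineage is instruction patching: save the instruction at the target address, overwrite it with a trap, and restore the original when resuming. A toy Python sketch of the idea (the TRAP byte and the code bytes are purely illustrative, not PDP-1 opcodes):

```python
TRAP = 0xCC  # illustrative trap byte (x86 INT3); not a PDP-1 opcode

class Patcher:
    def __init__(self, code):
        self.code = bytearray(code)  # writable copy of the program image
        self.saved = {}              # address -> original byte

    def set_breakpoint(self, addr):
        # Remember the original instruction, then plant the trap.
        self.saved[addr] = self.code[addr]
        self.code[addr] = TRAP

    def clear_breakpoint(self, addr):
        # Restore the original instruction so execution can resume.
        self.code[addr] = self.saved.pop(addr)

code = bytes([0x01, 0x02, 0x03])
p = Patcher(code)
p.set_breakpoint(1)
print(p.code[1] == TRAP)      # True
p.clear_breakpoint(1)
print(bytes(p.code) == code)  # True
```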
The PDP-1 (wikipedia) was probably the first commercial interactive computer intended as such. Even though only 53 were built, it had an enormous impact on later industries like video gaming (see Spacewar), word processing (see Expensive Typewriter), and even computer hacking (see Hackaday).
Steve Russell, author of Spacewar, later introduced a couple of prep school kids (Paul Allen and Bill Gates) to the PDP-10.
The PDP-1 computer predates the microchip, which made mainstream computing viable. But mainstream computers of the 1970s and beyond owe an enormous debt to the PDP-1. Without it, mainstream computing might have taken a very different path.
Woah. From the "expensive typewriter" manual, in chapter "Transfer commands": "N (Nightmare): Transfers control to the Nightmare version of the assembler CERTAINLY, which gets the source program directly from ET's text buffer. The resulting binary program is placed on drum field 1, the program is brought into the users core memory, ID is provided with the symbol table and control is returned to the ID, where a P (proceed) command will start the newly assembled program running.". I don't quite understand how all of this worked, but the guys surely had fun naming the programs and commands.
– dim
Dec 7 at 15:11
I didn't know about Expensive Typewriter, but the concept of having a "control mode" and a "text mode" sure reminds me a lot of the vi editor (which came out only a few years later).
– ChatterOne
Dec 7 at 15:23
I think the name "expensive typewriter" has been used more than once.
– Walter Mitty
Dec 7 at 18:58
TECO had a text mode that was terminated by the ESC character. Certain commands like Insert and Search initiated text mode.
– Walter Mitty
Dec 10 at 20:35
7 votes
THINK Pascal had an integrated debugger that meets all your criteria. You could mark stop points in the editor and then debug your compiled code using them. The debugger supported stepping, a meaningful call stack display, and a good list of the variables as structured typed information.
I think Manx Aztec C had these features much earlier.
– Brian H
Dec 7 at 3:12
1
If we're looking for the earliest usage, it would be from when it was called Lightspeed Pascal; I think it was introduced in 1986. The name was changed at some point to THINK Pascal. (I'm not sure if the name change was when Symantec bought it from THINK Technologies or if the name was changed earlier.)
– Mitchell Spector
Dec 8 at 21:31
6 votes
The extremely popular BBC Micro (1981-1986, still in retro use today) had an exceptionally powerful debugger, "Beebmon" by Watford Electronics, capable of breakpoints, IRQ trapping, code stepping, and so on. From a summary:
BEEBMON
A ROM-based machine code monitor for the BBC Micro. It enables machine code programs to be debugged and altered easily and quickly. Being a ROM, its commands are always readily available and occupy no user memory. Appears to take no base page and only one page of relocatable workspace (256 bytes) and no more anywhere in RAM. Beebmon can do more than any other machine code monitor currently on the market. The special features include facilities like: TABULATE, MODIFY, FILL, COPY, COMPARE, SEARCH (HEX & ASCII), CHECKSUM, DISASSEMBLE, RE-LOCATE and, by emulating the 6502 processor, SINGLE STEP, BREAKPOINTS ON READ/WRITE/EXECUTE OF LOCATION. Also BREAKPOINTS ON A, X & Y REGISTERS are provided. HAS WINDOWS INTO MEMORY & TEST WINDOWS. All this and more...
I used this to trace IRQ handling in the OS and other ROMs, and to reimplement disk and keyboard handlers for software which required modified handling. I think it came out very early on, around 1982, and certainly by 1983 (according to the source).
Description source: Watford Electronics flier, 1983, on 4corn.co.uk
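Because Beebmon emulated the 6502, every load and store passed through the monitor, which is what makes break-on-read/write of a location possible without hardware support. A toy Python sketch of that emulation-based watchpoint idea (hypothetical, not Beebmon's code):

```python
class WatchedMemory:
    # Every access goes through the emulator, so it can trap reads
    # and writes to watched addresses -- no hardware support needed.
    def __init__(self, size=256):
        self.mem = bytearray(size)
        self.read_watch = set()
        self.write_watch = set()
        self.hits = []  # log of (kind, address) watchpoint hits

    def load(self, addr):
        if addr in self.read_watch:
            self.hits.append(("read", addr))
        return self.mem[addr]

    def store(self, addr, value):
        if addr in self.write_watch:
            self.hits.append(("write", addr))
        self.mem[addr] = value & 0xFF

m = WatchedMemory()
m.write_watch.add(0x10)
m.store(0x10, 42)   # triggers the write watchpoint
m.read_watch.add(0x10)
v = m.load(0x10)    # triggers the read watchpoint
print(v, m.hits)    # 42 [('write', 16), ('read', 16)]
```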
7 Answers
7
active
oldest
votes
7 Answers
7
active
oldest
votes
active
oldest
votes
active
oldest
votes
up vote
36
down vote
accepted
The "Basic Programming" cartridge for the Atari 2600 came out in 1980 and it supports all of those except the first one. It had windows for the program, stack, variables, and output which could individually switched on and off (the cartridge would vertically stack as many enabled windows as would fit, stopping when all enabled windows were shown or it reached the bottom of the screen). Unfortunately, it had to fit in 4K of code space and run on a machine with 128 bytes of RAM (so the program, variables, stack, and output were limited to a total of 64 bytes), but I don't think I've seen anything that sophisticated on any more powerful machines prior to that.
LOL. Supercat, that's a great answer. ;) Made my day !
– Raffzahn
Dec 6 at 21:17
1
A very interesting answer in the trivia sense, but falls way short of being a mainstream programming environment.
– Brian H
Dec 7 at 1:15
@BrianH: While I agree it's hardly a mainstream programming environment, the inclusion of variable and stack windows suggest that Warren Robinett had some interesting ideas for what a development environment could do, which might have been implemented in more "mainstream" fashion had he been targeting a more powerful computer.
– supercat
Dec 7 at 5:59
1
@BrianH - Seems like it ought to count as "mainstream" to me. Basic was the language of choice on mass-marketed computers back then, and that cartridge was sold in Sears.
– T.E.D.
Dec 8 at 22:09
1
@BrianH: SO answers are expected to be useful to many people in addition to the person asking the original question. In many cases, people interested in wanting to know about the first X to do Y will be interested in when various features were invented. While Robinett's cartridge was a toy, it demonstrates that many concepts found in mainstream debuggers go back a long time.
– supercat
Dec 10 at 8:14
|
show 6 more comments
answered Dec 6 at 20:59
supercat
up vote
15
down vote
The earliest innovator I know of was Manx. Manx made 'C' programming environments for early, low-cost computers like Apple ][, CP/M-80, MS-DOS, and Motorola 68000.
Manx Aztec C v1.06 had symbolic (source code) debugging support when released in early 1984. Manx provided a utility "SIDSYM", which was used along with their linker's ability to generate symbol tables, to feed source code and symbols into SID. SID was a debugger from Digital Research for CP/M-80 that supported symbol tables. The Aztec C data sheet from 1983 mentions this feature.
The system above was still pretty primitive by modern standards. The "killer feature" was a source-code-level debugger that hid most of the machine-code details and let you interact with your 'C' code as written, access run-time variables easily, and set conditional and regular breakpoints with ease. This is the basic user experience we all expect today in Visual Studio, etc.
Manx continued to evolve the source-debugging feature. By 1988, their 'C' compiler for the Amiga included a fully visual GUI source debugger that was probably the most advanced for any low-cost PC or home computer at the time. The MS-DOS versions of Aztec 'C' for the 8086 also supported source debugging at the same time, but with a less advanced UI. This feature is described in the manual available online. This was before Borland first shipped their initial version of Turbo Debugger for MS-DOS as an add-on product to their compilers. Other third parties also made symbolic debuggers for Turbo Pascal, but none that I know of offered as advanced a solution as Manx Aztec C at the time.
Needless to say, source-level debugging was a "killer feature" for low-cost development environments by 1989. It was not long after that the major players like Borland and Microsoft would catch up and begin offering compilers with IDEs and integrated source debugging as we know it today.
You can download Manx Aztec C for various retro computers from the Aztec Museum.
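As a present-day illustration of what "source-level stepping with access to run-time variables" means mechanically (nothing Aztec-specific here - this is just Python's standard `sys.settrace` hook, the same mechanism Python debuggers are built on):

```python
import sys

def trace_lines(frame, event, arg):
    # On every 'line' event, report the function name, source line number,
    # and current local variables - the information a source-level
    # debugger shows while single-stepping.
    if event == "line":
        print(f"{frame.f_code.co_name} line {frame.f_lineno}: {frame.f_locals}")
    return trace_lines

def demo(n):
    total = 0
    for i in range(n):
        total += i
    return total

sys.settrace(trace_lines)   # switch single-step tracing on
result = demo(3)
sys.settrace(None)          # and off again
print(result)               # 0 + 1 + 2
```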
You seem to be praising Aztec C for something SID actually did - Aztec C only provided the input for SID.
– tofro
Dec 8 at 23:49
@tofro Aztec C worked with SID in 1983. By 1988, they included sdb, an advanced source debugger for the time.
– Brian H
Dec 9 at 15:08
add a comment |
edited Dec 7 at 13:17
answered Dec 7 at 2:11
Brian H
up vote
10
down vote
Microsoft CodeView shipped in 1985 and has most of the features you're asking about (it's been a while, so I can't be certain it has everything, e.g. call stack walking). When I was doing Turbo C support for Borland in '87/'88, support for it was possibly the most requested feature.
You could do source level stepping with Turbo C & symdeb but it wasn't as nice.
Version 2 of Turbo C had a debugger (shipped late '88?) which used the 386 hardware support.
Both of these had dual monitor support which was nice.
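The call-stack walking mentioned above - dumping the chain of function names as they appear in the source - can be illustrated with a modern sketch. This Python example (unrelated to CodeView or Turbo C; it just uses the standard `inspect` module) shows the kind of information a debugger's call-stack window displays:

```python
import inspect

def dump_stack():
    """Return the function names on the current call stack, innermost
    first - roughly what a debugger's 'call stack' window shows."""
    return [frame.function for frame in inspect.stack()]

def inner():
    return dump_stack()

def outer():
    return inner()

names = outer()
print(names[:3])  # -> ['dump_stack', 'inner', 'outer']
```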
What 386 features did the Turbo C 2 debugger use? I remember debugging on my 8086 without missing anything ;-). The dual monitor support was definitely very useful.
– Stephen Kitt
Dec 7 at 12:05
2
More memory (as the debugger uses XMS), hardware breakpoints on read / write memory/IO.
– PeterI
Dec 7 at 12:40
1
Ah, now I get it, I thought you were talking about the IDE’s built-in debugger, not Turbo Debugger.
– Stephen Kitt
Dec 7 at 13:45
1
My Borland Turbo C 2.0 Reference Guide has a copyright year of 1988.
– a CVn
Dec 8 at 14:17
1
@wizzwizz4 If the Reference Guide is copyrighted 1988, that's a pretty strong corroboration for the "shipped late '88?" in the answer. At least I would expect the two to largely coincide.
– a CVn
Dec 8 at 16:11
|
show 2 more comments
edited Dec 7 at 12:43
answered Dec 7 at 11:48
PeterI
up vote
10
down vote
This is less of a definitive answer than a musing about some seemingly underlying assumptions within the question. For the question itself, Supercat has already given a near-perfect answer. While it is of the typical kind where everyone knows it's wrong, it does tick all the boxes. I seriously would love to give it double points :)
Different Computer Classes
The most obvious issue with the question may be that it separates different 'classes' of computers - and, more importantly, implies that important advancements were made on them first. While it is true that mainframes implemented many solutions first, they were still tied to their repertoire of tools - like command-line operation on terminals in next to all cases. And even within the mainframe world, solutions weren't developed for the fun of it, but when needed. Much like with minis and micros, in parallel or later.
For example, the AID facility of BS2000 (a /370ish OS) already supported quite complex interactive debugging in the early 80s - but all based on very cryptic command-line arguments.
The need for more comfortable (I would prefer that term over 'advanced', which is more of a marketing blurb) debuggers grew for one out of more capable systems and more complex software, but more importantly due to changes in software development. This includes especially the need to dive into large chunks of code generated by less-than-well-known tools (compilers), imported from libraries, or, worst of all, inherited from prior developers.
This development happened independently of machine type or 'class'.
Source Level Debugging
Even early microcomputer systems offered basic debuggers that went past the all-hex handling of hex kits. For example, the original Apple II Monitor included convenient single-stepping and breakpoints, with disassembly of the instructions. And with the introduction of disk drives and assemblers, many additional tools came to hand, including source-level debuggers. That's the 1978..1980 time frame. Eventually the ORCA system marked a first plateau, supporting very convenient assembly as well as Pascal and C - noteworthy here that it wasn't until the mid-1980s that C became a professionally supported choice on Apple II (or similar) systems.
In general, it was not only the missing need, but more so the missing capabilities of the systems in use that prevented more comfortable features. To jointly access binary and source code, the latter needed to be at least indexed, a feature the quite limited memory of these systems could, if at all, only support for extremely small programs. Even more so if the source code had to be displayed - impossible without multiple disk drives.
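The "index" in question is essentially a line-number table mapping code addresses back to source lines - the core of every debug-info format. A minimal Python sketch of the idea (the addresses and line numbers are made up for illustration):

```python
# Toy line-number table: each entry says "code at this address and above
# (until the next entry) came from this source line".
line_table = [
    (0x0100, 1),
    (0x0107, 2),
    (0x0112, 4),
]

def addr_to_line(addr):
    """Find the source line for an address: the last table entry whose
    start address is at or below addr (None if addr precedes the code)."""
    line = None
    for start, src_line in line_table:
        if start <= addr:
            line = src_line
        else:
            break
    return line

print(addr_to_line(0x0109))  # inside the second entry's range -> 2
```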
Besides assembly (and the omnipresent BASIC), Pascal was the only other choice. Here the UCSD system was eventually the best known - and it made a great case for how to provide high-level support on small machines. The p-code included many references to the source/module structure by default, and due to its modular structure, parts of the system could be swapped in on demand.
It wasn't until affordable machines had, by default, a RAM size of 512 KiB or more, plus hard drives, that more comfortable features like those requested became possible for most programming environments.
Workstations are Anything But PCs
When comparing machines that were sold as 'workstations' in their time with others available, it becomes clear that there is no fundamental difference from other machines of the same era -- except that they usually represent an upper-end configuration.
A Sinclair QL of 1984 used a 7.5 MHz 68008 with 128 KiB RAM and Microdrives, while a SUN 2/50 of the same year had a 10 MHz 68010 with 8 MiB RAM and (at least) one hard drive. The same point can be made for any other computer sold as a workstation - not to mention today's gamer PCs that outclass what many manufacturers sell as workstations :)
Home Computers are Computers at Home
While this sounds nice at first, it's a classic post-hoc fallacy. After all, that would make any computer someone may have at home a "home computer" - of course including any mainframe, mini, or workstation running z/OS or Unix. It's obvious this won't work as a portable classification - most definitely since porting the same computer into an office would right away turn it into an office computer, wouldn't it?
The location where a computer is set up cannot retroactively define a class.
Conclusion
The question contains a lot of unnecessary restrictions and implied assumptions that are neither clear nor helpful, and at the same time an unclear definition of exactly what environment is being asked about.
It may be useful to drop these restrictions and clarify what kinds of functions, and what representation of them, are being asked for.
1
We have a feature supporting upvoting twice. It's called a bounty. You might've heard of it! We've only had three bounties awarded so far (iirc) and four set, and none set on somebody else's question. You'd be the first.
– wizzwizz4♦
Dec 7 at 7:22
Thanks. And I'd be really interested in how the development environment of Smalltalk and Lisp looked in say 1980...
– Martin Schröder
Dec 7 at 9:29
1
@MartinSchröder - uniquely, Smalltalk standardised its development environment at the same time as standardising the language itself, so any modern (unextended) implementation of Smalltalk-80 should provide a development experience that is very similar to the original 1980 implementation. This means that if you download either GNU Smalltalk or Squeak and play with them, other than the extended GUI class frameworks, what you're using is pretty similar to what existed in 1980. Which is to say, a very high quality polished environment, albeit not one that was available on a "home" computer.
– Jules
Dec 7 at 20:17
1
The "other" answer also assumes that everyone believes "advanced" means "has lots more goo, much of which gets in the way of people who want a lot less goo." Or put another way, I did very good debugging in some pretty "extremely feature poor" debuggers and I was very happy with them. I didn't realize that I absolutely had to have source-level debugging ... which often worked very badly because the optimizer had moved my code all over the damned place. I was perfectly content to look at machine instructions. No, really. Much better than broken source-code debugging.
– Julie in Austin
Dec 7 at 20:26
add a comment |
up vote
10
down vote
This is less of a definitive answer than a musing about some seemingly underlying assumptions within the question. For the question itself Supercat has already given a near perfect answer. While it is of the typical kind where everyone know it's wrong, it does tick all boxes. I seriously would love to give it double points :)
Different Computer Classes
The most obvious issue with the question may be that it separates different 'classes' of computers - and, more importantly, implies that the important advancements were made on them first. While it is true that mainframes implemented many solutions first, they were still tied to their repertoire of tools - in nearly all cases, command-line operation on terminals. And even within the mainframe world, solutions weren't developed for the fun of it, but when needed - much as on minis and micros, in parallel or later.
For example, the AID facility of BS2000 (a /370-ish OS) already supported quite complex interactive debugging in the early 80s - but all of it based on very cryptic command-line arguments.
The need for more comfortable debuggers (I'd prefer that term over 'advanced', which is more of a marketing blurb) grew partly out of more capable systems and more complex software, but more importantly out of changes in software development. This especially includes the need to dive into large chunks of code generated by less-than-well-known tools (compilers), imported from libraries or, worst of all, inherited from prior developers.
This development happened independently of machine type or 'class'.
Source Level Debugging
Early microcomputer systems already offered basic debuggers that went well beyond the all-hex handling of hex kits. For example, the original Apple II Monitor included convenient single-stepping and breakpoints, with disassembly of the instructions. And with the introduction of disk drives and assemblers, many additional tools came to hand, including source-level debuggers. That's the 1978..1980 time frame. This culminated in the ORCA system as a first plateau, supporting very convenient assembly as well as Pascal and C - noteworthy here that it wasn't until the mid 1980s that C became a professionally supported choice on Apple II (or similar) systems.
In general it was not only a missing need, but even more the missing capabilities of the systems in use, that prevented more comfortable features. To join binary and source code for access, the latter needed to be at least indexed, a feature the quite limited memory of these systems could support, if at all, only for extremely small programs - even more so if the source code had to be displayed. Impossible without multiple disk drives.
Besides assembly (and the omnipresent BASIC), Pascal was the only other choice. Here the UCSD system was eventually the best known - and it made a great case for how to provide high-level support on small machines. The p-code included many references to the source/module structure by default, and thanks to its module structure, parts of the system could be swapped in on demand.
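As an aside, the idea described here - executable code carrying references back to its source structure, as UCSD p-code did - is easy to see in a modern interpreter. A minimal sketch using CPython's dis module as an analogue (nothing here is UCSD-specific; it only illustrates the kind of line-table metadata a source-level debugger needs):

```python
import dis

def average(values):
    total = sum(values)
    count = len(values)
    return total / count

# Each pair is (bytecode offset, source line number) - the mapping a
# debugger uses to translate "where execution is" into "which statement".
mapping = list(dis.findlinestarts(average.__code__))
print(mapping)
```

Without such a table, a debugger can only show machine (or byte-) code addresses, which is exactly the limitation the memory-starved early micros were stuck with.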
It wasn't until affordable machines came by default with 512 KiB or more of RAM and with hard drives that more comfortable features became possible, and requested, in most programming environments.
Workstations are Anything But PCs
When comparing machines that were sold as 'workstations' in their time with others then available, it becomes clear that there is no fundamental difference from other machines of the same era -- except that they usually represent an upper-end configuration.
A Sinclair QL of 1984 used a 7.5 MHz 68008 with 128 KiB RAM and Microdrives, while a Sun 2/50 of the same year had a 10 MHz 68010 with 8 MiB RAM and (at least) one hard disk. The same point can be made for any other computer sold as a workstation - not to mention today's gamer PCs that outclass what many manufacturers sell as workstations :)
Home Computers are Computers at Home
While this sounds nice at first, it's a classic post-hoc fallacy. After all, it would make any computer someone happens to have at home a "home computer" - including, of course, any mainframe, mini or workstation running z/OS or Unix. It's obvious this won't work as a portable classification - most definitely since porting the same computer into an office would right away turn it into an office computer, wouldn't it?
The location a computer is set up in cannot retroactively define a class.
Conclusion
The question contains a lot of unnecessary restrictions and implied assumptions that are neither clear nor helpful, and at the same time an unclear definition of what environment exactly is being asked about.
It may be useful to drop these restrictions and clarify what kind of functions, and what presentation of them, is being asked for.
1
We have a feature supporting upvoting twice. It's called a bounty. You might've heard of it! We've only had three bounties awarded so far (iirc) and four set, and none set on somebody else's question. You'd be the first.
– wizzwizz4♦
Dec 7 at 7:22
Thanks. And I'd be really interested in how the development environment of Smalltalk and Lisp looked in say 1980...
– Martin Schröder
Dec 7 at 9:29
1
@MartinSchröder - uniquely, Smalltalk standardised its development environment at the same time as standardising the language itself, so any modern (unextended) implementation of Smalltalk-80 should provide a development experience that is very similar to the original 1980 implementation. This means that if you download either GNU Smalltalk or Squeak and play with them, other than the extended GUI class frameworks, what you're using is pretty similar to what existed in 1980. Which is to say, a very high quality polished environment, albeit not one that was available on a "home" computer.
– Jules
Dec 7 at 20:17
1
The "other" answer also assumes that everyone believes "advanced" means "has lots more goo, much of which gets in the way of people who want a lot less goo." Or put another way, I did very good debugging in some pretty "extremely feature poor" debuggers and I was very happy with them. I didn't realize that I absolutely had to have source-level debugging ... which often worked very badly because the optimizer had moved my code all over the damned place. I was perfectly content to look at machine instructions. No, really. Much better than broken source-code debugging.
– Julie in Austin
Dec 7 at 20:26
add a comment |
edited Dec 7 at 13:07
idspispopd
51
answered Dec 6 at 23:09
Raffzahn
44.6k5102179
up vote
10
down vote
I'm including this answer in response to a couple of suggestions, although it does not meet some of the criteria laid out in the question. DDT for the DEC PDP-1 has to be recalled as probably the very first interactive debugger, and most interactive debuggers that came later were, at least indirectly, inspired by DDT (see DDT Writeup).
DDT was built in late 1961 or early 1962 at MIT, shortly after the arrival of the PDP-1 (serial number two). The author was Alan Kotok, who later went on to be the principal engineer of the DEC PDP-10, one of the main timesharing systems before the microcomputer made a single-user machine economical.
DDT allowed the user to set breakpoints, examine variables in several convenient formats, alter variables, disassemble instructions, and patch code. This let the human make efficient use of the very scarce machine time available. Some of the features mentioned in the question are lacking, but DDT was very advanced for its time frame. A more advanced version of DDT was later built for the PDP-6 and the PDP-10; the VAX debugger was probably inspired by these.
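The break-and-inspect workflow DDT pioneered at the machine level is the same one later source-level debuggers offer. A minimal sketch of the idea on a modern interpreter, using Python's sys.settrace hook - purely illustrative (this is not how DDT worked, and the helper names are made up):

```python
import sys

def make_breakpoint(lineno, hits):
    """Trace function that records local variables whenever the given
    source line is about to execute - a breakpoint set from outside the
    program, with no STOP statement edited into the source."""
    def tracer(frame, event, arg):
        if event == "line" and frame.f_lineno == lineno:
            # Variables by name, no ABI knowledge or raw memory needed.
            hits.append(dict(frame.f_locals))
        return tracer
    return tracer

def target():
    total = 0
    for i in range(3):
        total += i   # pretend the user set a breakpoint on this line
    return total

hits = []
# Break on the 'total += i' line, computed relative to the function start.
sys.settrace(make_breakpoint(target.__code__.co_firstlineno + 3, hits))
result = target()
sys.settrace(None)
print(result, hits)
```

The sketch hits the chosen line on every loop iteration and captures the locals each time - exactly the "set a breakpoint, then examine variables" loop the question's criteria describe.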
The PDP-1 (Wikipedia) was probably the first commercial computer designed for interactive use. Even though only 53 were built, it had an enormous impact on later fields like video gaming (see Spacewar), word processing (see Expensive Typewriter), and even computer hacking (see Hackaday).
Steve Russell, author of Spacewar, later introduced a couple of prep school kids (Paul Allen and Bill Gates) to the PDP-10.
The PDP-1 computer predates the microchip, which made mainstream computing viable. But mainstream computers of the 1970s and beyond owe an enormous debt to the PDP-1. Without it, mainstream computing might have taken a very different path.
Woah. From the "expensive typewriter" manual, in chapter "Transfer commands": "N (Nightmare): Transfers control to the Nightmare version of the assembler CERTAINLY, which gets the source program directly from ET's text buffer. The resulting binary program is placed on drum field 1, the program is brought into the users core memory, ID is provided with the symbol table and control is returned to the ID, where a P (proceed) command will start the newly assembled program running.". I don't quite understand how all of this worked, but the guys surely had fun naming the programs and commands.
– dim
Dec 7 at 15:11
I didn't know about Expensive Typewriter, but the concept of having a "control mode" and a "text mode" sure reminds me a lot of the vi editor (which came out only a few years later).
– ChatterOne
Dec 7 at 15:23
I think the name "expensive typewriter" has been used more than once.
– Walter Mitty
Dec 7 at 18:58
TECO had a text mode that was terminated by the ESC character. Certain commands like Insert and Search initiated text mode.
– Walter Mitty
Dec 10 at 20:35
add a comment |
edited Dec 7 at 14:58
answered Dec 7 at 14:27
Walter Mitty
41828
Woah. From the "expensive typewriter" manual, in chapter "Transfer commands": "N (Nightmare): Transfers control to the Nightmare version of the assembler CERTAINLY, which gets the source program directly from ET's text buffer. The resulting binary program is placed on drum field 1, the program is brought into the users core memory, ID is provided with the symbol table and control is returned to the ID, where a P (proceed) command will start the newly assembled program running.". I don't quite uderstand how all of this worked, but the guys surely had fun naming the programs and commands.
– dim
Dec 7 at 15:11
I didn't know about Expensive typewriter but the concept of having a "control mode" and a "text mode" sure reminds me a lot the vi editor (which came out only a few years later).
– ChatterOne
Dec 7 at 15:23
I think the name "expensive typewriter" has been used more than once.
– Walter Mitty
Dec 7 at 18:58
TECO had a text mode that was terminated by the ESC character. Certain commands like Insert and Search initiated text ode.
– Walter Mitty
Dec 10 at 20:35
add a comment |
up vote
7
down vote
THINK Pascal had an integrated debugger that meets all your criteria. You could mark stop points in the editor and then debug your compiled code using them. The debugger supported stepping, a meaningful call stack display, and a good list of the variables as structured typed information.
I think Manx Aztec C had these features much earlier.
– Brian H
Dec 7 at 3:12
1
If we're looking for the earliest usage, it would be from when it was called Lightspeed Pascal; I think it was introduced in 1986. The name was changed at some point to THINK Pascal. (I'm not sure if the name change was when Symantec bought it from THINK Technologies or if the name was changed earlier.)
– Mitchell Spector
Dec 8 at 21:31
answered Dec 6 at 20:52
Michael Shopsin
954323
up vote
6
down vote
The extremely popular BBC Micro (1981-1986, still in retro use today) had an exceptionally powerful debugger, "Beebmon" by Watford Electronics, capable of breakpoints, IRQ trapping, code stepping, and so on. From a summary:
BEEBMON
A ROM based machine code monitor for BBC Micro.
It enables machine code programs to be debugged
and altered easily and quickly. Being a ROM, its
commands are always readily available and occupy no
user memory. Appears to take no base page and only one page of relocatable workspace (256 bytes) and
no more anywhere in RAM. Beebmon can do more than any other machine code monitors currently on the market. The special features include facilities like: TABULATE, MODIFY, FILL, COPY, COMPARE,
SEARCH (HEX & ASCII) CHECKSUM,
DISASSEMBLE, RE-LOCATE and, by emulating the 6502 processor, SINGLE STEP, BREAK POINTS ON
READ/WRITE/EXECUTE OF LOCATION. Also BREAKPOINTS ON A, X & Y REGISTERS are provided.
HAS WINDOWS INTO MEMORY & TEST WINDOWS. All this and more...
I used this to trace IRQ handling in the OS and other ROMs, and to reimplement disk and keyboard handlers for software which required modified handling. I think it came out very early on, around 1982, and certainly by 1983 (according to the source).
Description source: Watford Electronics flier, 1983, on 4corn.co.uk
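The flier's key trick is "by emulating the 6502 processor" Beebmon can stop on a read, write, or execute of a watched location: instead of letting the CPU run free, the monitor interprets each instruction itself and checks every memory access before performing it. The sketch below (a hypothetical illustration in Python with a toy two-opcode instruction set, not Beebmon's actual code) shows that emulate-to-watch idea:

```python
# Illustration of emulation-based watchpoints, the technique the Beebmon
# flier describes: the monitor interprets instructions itself, so it can
# inspect every memory access before it happens. Toy instruction set and
# function names are invented for this sketch.

def run_monitored(program, mem, watch_reads=(), watch_writes=()):
    """Interpret a tiny toy instruction set, recording watched accesses.

    program: list of ("LOAD", addr) or ("STORE", addr, value) tuples.
    Returns (hits, mem), where hits lists (pc, kind, addr) for each
    triggered watchpoint.
    """
    hits = []
    for pc, ins in enumerate(program):
        if ins[0] == "LOAD":
            _, addr = ins
            if addr in watch_reads:          # breakpoint on read of location
                hits.append((pc, "read", addr))
            _ = mem.get(addr, 0)             # perform the emulated read
        elif ins[0] == "STORE":
            _, addr, value = ins
            if addr in watch_writes:         # breakpoint on write of location
                hits.append((pc, "write", addr))
            mem[addr] = value                # perform the emulated write
    return hits, mem

prog = [("STORE", 0x70, 1), ("LOAD", 0x70), ("STORE", 0x80, 2)]
hits, mem = run_monitored(prog, {}, watch_reads={0x70}, watch_writes={0x80})
print(hits)  # each watched access is reported with its program counter
```

On a real 6502 without hardware watchpoints this interpretive approach is essentially the only way to trap arbitrary read/write accesses, which is why Beebmon pays the cost of emulating the processor rather than running code natively.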
edited Dec 10 at 18:20
cxw
1034
answered Dec 8 at 11:17
Stilez
1612
4
The question is at the same time overly broad and far too narrow: it implies the use of a high-level language as well as stack-based function calls, and then narrows down to home computers, devices that could by definition offer only a very limited subset of the requested features. Any answer will be arguable at best.
– Raffzahn
Dec 6 at 20:46
15
Your definition of “advanced debugging” is troublesomely low-level-imperative oriented. (I wouldn't even call that advanced.) The first proper debuggers were arguably Lisp interpreters, of course not with breakpoints and such nonsense but with code-as-data and REPLs.
– leftaroundabout
Dec 6 at 22:28
6
FYI, "PC" is used to distinguish a single-user computer from a time-sharing computer. The Xerox Alto for example was called a personal computer even though it cost $32,000 in 1979 and was used for research.
– traal
Dec 6 at 23:19
8
I'm gonna guess fly swatter.
– jpmc26
Dec 7 at 0:53
8
Well, this isn't an answer, because it's way outside your definition of "mainstream". But I have to mention it as the grand-daddy of interactive debuggers. It's DDT, the debugger for the DEC PDP-1, a machine delivered in 1961. The debugger, built maybe in 1962, featured breakpoints, variable display in several formats, and code patching on the fly. It was certainly advanced for its time, and was likely an ancestor of the first debugger that meets your criteria. Wikipedia
– Walter Mitty
Dec 7 at 1:18