The XKL Toad-1 System

The XKL Toad-1 System (hereafter “the Toad”) is an extended clone of the DECSYSTEM-20, the third generation of the PDP-10 family of computers from Digital Equipment Corporation. What does that mean? To answer that requires us to step back into the intertwined history of DEC, BBN,1 SAIL,2 and other parts of Stanford University’s computing community and environs.

It’s a long story. Get comfortable. I think it will be worth your time.

The PDP-10

The PDP-10 family (which includes the earlier PDP-6) is a typical mainframe computer of the mid-1960s. Like many science-oriented computers prior to IBM’s System/360 line, the PDP-10 architecture addressed binary words 36 bits long, rather than individual characters as was common in business-oriented systems. In instructions, memory addresses took up half of the 36-bit word; 18 bits is enough to address 262,144 locations, or 256KW, a very large memory in the days when each bit of magnetic core cost about $1.00.3 Typical installations had 64KW or 96KW attached. The KA-10 CPU in the first-generation PDP-104 could not handle any more memory than that.

Another important feature of the PDP-10 was timesharing, a facility by which each of a computer’s multiple users is given the illusion of being alone in interacting with the system. The PDP-6 was in fact the first commercially available system to feature interactive timesharing as a standard facility rather than as an added cost item.

TENEX

In the late 1960s, virtual memory was an important topic of research: How to use the much larger, less expensive capacity of direct access media such as disks and drums to extend the address space of computers, instead of the very expensive option of adding more core.5 One company looking at the issue was Bolt, Beranek & Newman, who were interested in demand-paged virtual memory (that is, viewing memory as made up of independently accessed chunks, or pages) and in what an operating system with access to such facilities would look like.

To facilitate this research, BBN created a pager which they attached to a DEC PDP-10, and began writing an operating system which they called TENEX, for “PDP-10 Executive”.6 TENEX was very different from Tops-10, the operating system provided by DEC, but was interactive in the same way as the older OS. The big difference was that more programs could run at the same time, because only the currently executing portions of each program needed to be present in the main (non-virtual) memory of the computer.

TENEX was a popular operating system, especially in university settings, so many PDP-10s had the BBN pager attached. In fact, the BBN pager was also used on a PDP-10 system which ran neither TENEX nor Tops-10, to wit, the WAITS system at SAIL.7

The DECsystem-10

The second generation of the PDP-10 underwent a name change, to the DECsystem-10, as well as gaining a faster new processor, the KI-10. This changed the way memory was handled, by adding a pager which divided memory up into 512-word blocks (“pages”). Programs were still restricted to 18 bits of address like previous generations, but the CPU could now handle 22 bits of address in the pager, so physical memory could be up to four megawords (4MW), 16 times as much as the KA-10.

This pager was not compatible with, and was much less capable than, the BBN device, although DEC provided a version of TENEX modified to work with the KI pager for customers willing to pay extra. Some customers considered this to be too little, too late.

SAIL and the Super FOONLY

In the late 1960s, computer operating systems were an object of study in the broader area of artificial intelligence research. This was true of the Stanford Artificial Intelligence Laboratory, for example, where the PDP-6 timesharing monitor8 had been heavily modified to make it more useful for AI researchers. When the PDP-10 came out three years later, SAIL acquired one, attached a BBN pager, and connected it to the PDP-6, modifying the monitor (now named Tops-10) to run on both CPUs, with the 10 handing jobs off to the 6 if they called for equipment attached to the latter. By 1972, the monitor had diverged so greatly from Tops-10 that it received a new name, WAITS.

But the hardware was old and slow, and a faster system was desired. The KI-10 processor was underpowered from the perspective of the SAIL researchers, so they began designing their own PDP-10 compatible system, the Super FOONLY.9 This design featured a BBN style pager and used very fast semiconductors10 in its circuitry. It also expanded the pager address to 22 bits, like the KI-10, so was capable of addressing up to 4MW of memory. Finally, unlike the DEC systems, this system was built around the use of a fast microcoded processor which implemented the PDP-10 architecture as firmware rather than as special purpose hardware.

DECSYSTEM-20 and TOPS-20

DEC was aware of the discontent with their new system among customers; to remedy the situation, they purchased the design of the Super FOONLY from Stanford, and hired a graduate student from SAIL to install and maintain the SUDS drawing system at DEC’s facilities in Massachusetts. The decision was made to keep the KI-10 pager design in the hardware, and to implement the BBN style pager in microcode.

Because of the demand for TENEX from a large part of their customer base, DEC also decided to port the BBN operating system to the new hardware based on the SAIL design. DEC added certain features to the new operating system which had been userland code libraries in TENEX, such as command processing, so that a single style of command handling was available to all programmers.

When DEC announced the new system as the DECSYSTEM-20, with its brand new operating system called TOPS-20, they fully expected that customers who wanted the new hardware would flock to it and would port all of their applications from Tops-10 to TOPS-20, even though the new OS did not support many older peripherals on which the existing applications relied. The customers rebelled, and DEC was forced to port Tops-10 to the new hardware, offering different microcode to support the older OS on the new KL-10 processor.

Code Name: Jupiter

DEC focused on expanding the capabilities of their flagship minicomputer line, the PDP-11 family, for the next few years, with a planned enhancement to take the line from 16 bit mini to 32 bit supermini. The end result was an entirely new family, the VAX, which offered virtual memory like the PDP-10 mainframes in a new lower cost package.

But DEC did not forget their mainframe customer base. They began designing a new PDP-10 system, intended to include enhanced peripherals, support more memory, and run much faster than the KL-10 in the then-current DEC-10/DEC-20 systems. As part of the design, codenamed “Jupiter”, the limited 18-bit address space of the older systems was upgraded to 30 bits, that is, a memory size of one gigaword (1GW = 1024MW), which was nearly 2.5 times the size of the equivalent VAX memory, and far larger than the memory sizes available in the IBM offerings of the period.

Based on the promise of the Jupiter systems, customers made do with the KL-10 systems which were available, often running multiple systems to make up for the lack of horsepower. Features were added to the KL, by changes to the microcode as well as by adding new hardware. The KL-10 was enhanced with the ability to address the new 30-bit address space, although the implementation was limited to addressing 23 bits (where the hardware only handled 22); thus, although a system maxed out at 4MW, virtual memory could make it look like 8MW.

DEC also created a minicomputer sized variant of the PDP-10 family, which they called the DECSYSTEM-2020. This was intended to extend the family into department sized entities, rather than the corporation sized mainframe members of the family.11 There was also some interest in creating a desktop variant; one young engineer was well known for pushing the idea of a “10 on a desk”, although his idea was never prototyped at DEC.

DEC canceled the Jupiter project, apparently destined to be named the DECSYSTEM-40, in May 1983, with an announcement to the Large Systems customers at the semiannual DECUS symposium. Customer outrage was so great that DEC agreed to continue hardware development on the KL-10 until 1988, and software development across the family until 1993.

Stanford University Network

In 1980, there were about a dozen sites at Stanford University which housed PDP-10 systems, mostly KL-10 systems running TOPS-20 but also places like SAIL, which had attached a KL-10 to the WAITS dual processor. Three of the TOPS-20 sites were the Computer Science Department (“CSD”), the Graduate School of Business (“GSB”), and the academic computing facility called LOTS.12

At this time, local-area networking was seen as a key element in the future of computing, and the director of LOTS (whom we’ll call “R”) wrote a white paper on the future of Ethernet13 on the campus. R also envisioned a student computer, what today we would call a workstation, which featured a megabyte of memory, a million pixels on the screen, a processor capable of executing a million instructions per second, and an Ethernet connection capable of transferring a million bits of data per second, which he called the “4M machine”.

Networking also excited the director of the CSD computer facility, whom we’ll call “L”.14 L designed an Ethernet interface for the KL-10 processors in the DEC-20s which were ubiquitous at Stanford. This was dubbed the Massbus-Ethernet Interface Subsystem, or MEIS,15 pronounced “maze”.

The director of the GSB computer facility, whom we’ll call “S”, was likewise interested in networking, as well as being a brilliant programmer herself. (Of some importance to the story is the fact that she was eventually married to L.) S assigned one of the programmers working for her to add code to the TOPS-20 operating system to support the MEIS, using the PUP protocols created at PARC for the Alto personal computer.16

The various DEC-20 systems were scattered across the Stanford campus, each one freestanding in a computer room. R, L, and S ran miles of 50-ohm coaxial cable, the medium of the original Ethernet, through the so-called steam tunnels under the campus, connecting all the new MEISes together. Now it was possible to transfer files between DEC-20s from the command line rather than by writing them to a tape and carrying them from one site to another. It was also possible to log in from one DEC-20 to another, but using one mainframe to connect to another seemed wasteful of resources on the source system, so L came up with a solution.

R’s dream of a 4M machine had borne fruit: While still at CSD, he had a graduate student create the design for the Stanford University Network processor board. L repurposed the SUN-1 board17 as the processor in a terminal interface processor (“EtherTIP”), in imitation of the TIPs used by systems connected to the ARPANET and to commercial networks like Tymnet and Telenet. Now, instead of wiring terminals directly to a single mainframe, and using the mainframe to connect from one place to another, the terminals could be wired to an EtherTIP and could freely connect to any system on the Ethernet.

A feature of the PUP protocols invented at PARC was the concept of internetworking, connecting two or more Ethernets together to make a larger network. This is done by using a computer connected to both networks to forward data from each to the other. At PARC, a dedicated Alto acted as the router for this purpose; L designated some of the SUN-1 based systems as routers rather than as EtherTIPs, and the Stanford network was complete.

Stanford University also supported a number of researchers who were given access to the ARPANET as part of their government sponsored research, so several of the PDP-10s on campus were connected to the ARPANET. When the ARPANET converted to using the TCP/IP protocols which had been developed for the purpose of bringing internetworking to wide area networks, our threesome were ready, and assigned programmers from CSD, GSB, and LOTS to make L’s Ethernet routers speak TCP/IP as well as PUP. TOPS-20 was also updated to use TCP/IP, by Stanford programmers as well as by DEC.

S and L saw a business opportunity in all this, and began a small company to sell the MEIS and the associated routers and TIPs to companies and universities who wanted to add Ethernet to their facilities. They saw this as a way to finance the development of L’s long-cherished dream of a desktop PDP-10. They eventually left Stanford as the company grew, as it had tapped the exploding networking market at just the right time. The company grew so large in fact that the board of directors discarded the plan to build L’s system, and so the founders left Cisco Systems to pursue other opportunities.

XKL

L moved to Redmond in 1990, where he founded XKL Systems Corporation. This company had as its business plan to build the “10 on a desk”. The product was codenamed “TOAD”, which is what L had been calling his idea for a decade and a half because “Ten On A Desktop” is a mouthful. He hired a small team of engineers, including his old friend R from Stanford, to build a system which implemented the full 30-bit address space which DEC had abandoned with the cancelled Jupiter project, and which included modern peripherals and networking capabilities.18 R was assigned as Chief Architect; his job was to ensure that the TOAD was fully compatible with the entire PDP-10 family, without necessarily replicating every bug in the earlier systems.

R also oversaw the port of TOPS-20 to the new hardware, although some boards19 had a pair of engineers assigned: One handled the detailed design and implementation of the board, while the other worked on the changes to the relevant portion of the operating system. R was responsible for the changes which related to the TOAD’s new bus architecture, as well as those relating to the much larger memory which the TOAD supported and the new CPU.20

The TOAD was supposed to come to market with a boring name, the “TD-1”, but ran into trademark issues. By that time, I was working at XKL, officially doing pre- and post-sales customer advocacy, but also working on the TOPS-20 port.21 Part of my customer advocacy duties was some low-key marketing; when we lost the proposed name, I pointed out that people had been hearing about L’s TOAD for years, and we should simply go with it; S, considered the unofficial “Arbiter of Taste” at XKL, agreed with me.22 We officially introduced the XKL Toad-1 System at a DECUS trade show in the spring of 1995.

First computers: Rich Alderson

I first learned to program on a 1401, a commercial computer from IBM. The particular system on which I learned FORTRAN IV had 12K characters in memory, a 1402 card reader/punch, a 1403 printer, and two 1311 disk drives with a whopping 400K character capacity. The character encoding was Binary Coded Decimal (BCD).

Note that I said “character” rather than “byte”. The 1401 came out in 1960, before the term byte came into broad use; originally, “byte” referred to portions of a longer memory word and did not refer to a specific size.

Characters are addressable in the 1401, and consist of 6 data bits, a parity bit, and a word mark bit which is used to define larger areas (“fields”) in memory. We’ll come back to the word mark in a moment.

The data bits in a character are labeled B-A-8-4-2-1. The B and A bits are called “zone bits”, and map fairly directly to zone punches on a Hollerith card to define alphabetic and special characters. The numeric bits directly encode the decimal digits 1-9 in binary; zero (“0”) is encoded specially, as 8+2, so that a space character can be represented as having no bits turned on at all. Alphabetics and special characters use a combination of numeric bits and one or both zones (e.g., “A” is B-A-1, “I” is B-A-8-1).
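
For the bit-twiddlers, the encoding is regular enough to sketch in a few lines of Python (parity and word mark bits omitted; the grouping of the letters follows the zone assignments just described):

B, A = 0b100000, 0b010000          # zone bits

def bcd(ch):
    """1401 BCD character code, data bits only (a simplified sketch)."""
    if ch == ' ': return 0          # space: no bits at all
    if ch == '0': return 8 + 2      # zero is special, so space can be all-zeros
    if ch.isdigit(): return int(ch) # 1-9 encode directly in the 8-4-2-1 bits
    i = ord(ch) - ord('A')
    if i < 9:  return B | A | (i + 1)   # A-I: both zones ("A" = B-A-1)
    if i < 18: return B | (i - 9 + 1)   # J-R: B zone only
    return A | (i - 18 + 2)             # S-Z: A zone, digits 2-9

assert bcd('A') == B | A | 1
assert bcd('I') == B | A | 8 | 1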

Data is operated on in fields, addressed by the highest numbered character in memory for each field. Processing of data begins at that location, and moves lower in memory for each step in the process: For example, addition starts with the 1’s place, then moves to the 10’s place, the 100’s place, etc. How do we know when to stop? This is the purpose of the word mark bit! The lowest numbered character in the field has the word mark turned on (set to 1), and the hardware takes notice of this and stops when that character is processed.
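
In Python terms, the add works something like this sketch (unsigned decimal only, fields of equal length; the real machine also handles signs, unequal lengths, and recomplementing):

digits = {}        # one decimal digit per memory address
wordmarks = set()  # addresses whose word mark bit is on

def add_fields(a_addr, b_addr):
    """Add field A into field B, scanning from the low-order (highest)
    addresses downward until B's word mark stops the operation."""
    carry = 0
    while True:
        total = digits[a_addr] + digits[b_addr] + carry
        digits[b_addr], carry = total % 10, total // 10
        if b_addr in wordmarks:   # word mark: high-order digit processed
            return
        a_addr -= 1
        b_addr -= 1

# 123 + 456: each field is addressed by its highest-numbered character
for i, d in enumerate((4, 5, 6)): digits[201 + i] = d
for i, d in enumerate((1, 2, 3)): digits[301 + i] = d
wordmarks.add(301)
add_fields(203, 303)              # field at 301-303 now holds 5, 7, 9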

All of this is invisible to a FORTRAN programmer, or a user of any other high level language such as COBOL or RPG, but it is critical to anyone who needs to write in a machine language representation like an assembler program. The 1401 comes with two assemblers, the earlier, more primitive Symbolic Programming System (SPS) and the more powerful Autocoder.

The 1401 instruction set consists of individual characters, chosen to be mnemonic wherever possible: “A” is the Add instruction, and “M” is the Move instruction which transfers data from one field to another. SPS uses the instruction characters directly, so that a programmer has to know each one, along with the numeric addresses of fields. Autocoder instead uses English words such as “ADD” and “MOVE” for instructions, and allows the use of names with lengths assigned in place of numeric addresses for fields.

Finally, there are three predefined fields in memory which are used to move data between the card reader, the card punch, and the printer: Word marks are permanently set at locations 1, 101, and 201; a MOVE from location 80 reads a card from the reader and puts the new data into the destination field, a MOVE into location 180 punches a card from the source field, and a MOVE into location 333 causes a 132 character line to be printed. (The first character in a printed line does things like skip to a new page or double space).

That’s the first computer I used, even though I didn’t learn the messy internals for several years!

Time-sharing in 12KW: Running TSS/8 On Real PDP-8 Hardware

But first, a little history

Digital Equipment Corporation’s PDP-8 minicomputer was a small but incredibly flexible little computer. Introduced in 1965 at a cost of $18,000, it created a new market for small computers, and soon PDP-8s found themselves used for all sorts of tasks: industrial control, laboratory data capture and analysis, word processing, software development, and education. They controlled San Francisco’s BART subway displays, ran the scoreboard at Fenway Park, and assisted in brain surgery.

They were also used in early forays into time-sharing systems. Time-sharing stood in stark contrast to the batch processing systems that were popular at the time: Whereas batch processing systems were generally hands-off (you’d submit a stack of punched cards to an operator and get your results back days later), a time-sharing system allowed multiple users to interact conversationally with a single computer at the same time. These systems did so by giving each user a tiny timeslice of the computer: each user’s program would run for a few hundred milliseconds before another user’s program would get a chance. This switching happens so quickly that it is imperceptible to users, providing the illusion that each user has the entire computer to themselves. Sharing the system in this manner allowed for more efficient use of computing resources in many cases.

TSS/8 was one such time-sharing endeavor, started as a research project at Carnegie-Mellon University in 1967. A PDP-8 system outfitted with 24KW of memory could comfortably support 20 simultaneous users. Each user got what appeared to them as a 4K PDP-8 system with which they were free to do whatever they pleased, and the system was (in theory, at least) impervious to user behavior: a badly behaved user program could not affect the system or other users.

With assistance from DEC, TSS/8 was fleshed out into a stable system and was made available to the world at large in 1968, eventually selling over a hundred copies. It was modestly popular in high schools and universities, where it provided a cost-effective means to provide computing resources for education. While it was never a widespread success and was eventually forgotten and supplanted on the PDP-8 by single-user operating systems like OS/8, TSS/8 was a significant development, as Gordon Bell notes:

“While only a hundred or so systems were sold, TSS/8 was significant because it established the notion that multiprogramming applied even to minicomputers. Until recently, TSS/8 was the lowest cost (per system and per user) and highest performance/cost timesharing system. A major side benefit of TSS/8 was the training of the implementors, who went on to implement the RSTS timesharing system for the PDP-11 based on the BASIC language.”

Gordon Bell, “Computer Engineering: A DEC View of Hardware Systems Design,” 1978

It is quite notable that DEC made such a system possible on a machine as small as the PDP-8: An effective time-sharing system requires assistance from the hardware to allow separation of privileges and isolation of processes — without these there would be no way to stop a user’s program from doing whatever it wanted to with the system: trampling on other users’ programs or wreaking havoc with system devices either maliciously or accidentally. So DEC had to go out of their way to support time-sharing on the PDP-8.

PDP-8 Time-Sharing Hardware

In combination with the MC8/I memory extension (which allowed up to 32KW of memory to be addressed by the PDP-8), the KT8/I was the hardware option that made this possible, and was available on the PDP-8/I as an option at its introduction. The KT8 option was made available for the original PDP-8 around this time as well.

So what does the KT8/I do (in combination with the MC8/I) that makes time-sharing on the PDP-8 feasible? First, it provides two privilege levels for program execution: Executive, and User. The PDP-8 normally runs at the Executive privilege level — at this level all instructions can be executed normally. Under the User privilege level, most instructions execute as normal, but certain instructions are forbidden and cause a trap. On the PDP-8, trappable instructions are:

  • IOTs (I/O Transfer instructions, generally used for controlling hardware and peripherals).
  • The HLT (Halt) instruction, which normally stops the processor.
  • The OSR and LAS instructions, which access the front panel’s switch register.

Under a time-sharing system such as TSS/8, the operating system’s kernel (or “Monitor” in TSS parlance) runs at the Executive privilege level. The Monitor can then control the hardware and deal with scheduling user processes.

User processes (or “Jobs” in TSS) run at the User level (as you might have guessed by the name). At this level, user programs can do whatever they want, but if one of the classes of instructions listed above is executed, the user’s program is suspended (the processor traps the instruction via an interrupt) and the PDP-8’s processor returns to the Monitor in Executive mode to deal with the privileged instruction. If the instruction is indeed one that a user program is not allowed to execute, the Monitor may choose to terminate the user program. In many cases, IOTs are used as a mechanism for user programs to request a service from the Monitor. For example, a user program might execute an IOT to open a file, type a character to the terminal, or send a message to another user. Executing this IOT causes a trap; the Monitor examines the trapped instruction and translates it into the appropriate action, after which it resumes execution of the user’s program in User mode.
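
In modern terms the flow looks roughly like this (a Python sketch with invented names; the real Monitor is, of course, PDP-8 assembly):

SERVICES = {}    # IOT device code -> Monitor service routine (illustrative)

def resume_in_user_mode(job): ...   # scheduler stub
def terminate(job): ...             # abort stub

def on_user_trap(job, instr):
    """Called when a user-mode job executes a trappable instruction."""
    if (instr & 0o7000) == 0o6000:            # opcode 6xxx: an IOT
        handler = SERVICES.get((instr >> 3) & 0o77)
        if handler:
            handler(job, instr)               # open a file, type a character, ...
            resume_in_user_mode(job)
            return
    terminate(job)                            # HLT, OSR/LAS, or an unknown IOT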

Thus the privileged Executive mode and the unprivileged User mode make it possible to build an operating system that can prevent user processes from interfering with the functioning of the system’s hardware. The MC8/I Memory Extension hardware provided the other piece: Compartmentalizing user processes so they can’t stomp on other user programs or the operating system itself.

A basic PDP-8 system has a 12-bit address space and is thus capable of addressing only 4KW of memory. The MC8/I allowed extending memory up to 32KW in 4KW fields of memory — it did so by providing a three-bit-wide Extended Memory Address register (which thus provided up to 8 fields). This did not provide a linear (flat) memory space: The PDP-8 processor could still only directly address 4096 words. But it did allow the processor to access data or execute instructions from any of these 8 fields of memory by executing a special IOT which caused future memory accesses and/or program instructions to come from a new field.
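
The address arithmetic is simple (an illustrative Python sketch, not a timing-accurate model):

MEM = [0] * (8 * 4096)      # 32KW: eight 4KW fields

def physical(field, addr12):
    """A 3-bit field register selects the 4KW field; the 12-bit
    address can only reach within that field."""
    assert 0 <= field < 8 and 0 <= addr12 < 4096
    return (field << 12) | addr12

MEM[physical(3, 0o200)] = 0o7402   # word 200 (octal) of field 3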

With this hardware assistance it becomes (relatively) trivial to limit a user program to stay within its own 4KW field: if it attempts to execute a memory management IOT to switch fields the KT8/I will cause a trap and the Monitor can either abort the user’s program or ensure that the field switch was a valid one (swapping in memory or moving things around to ensure that the right field is in the right place). (This latter proves to be significantly more difficult to do, for reasons I will spare you the details on. You’re welcome.)

This article’s supposed to be about running TSS/8 on a real PDP-8, so let’s talk about that, shall we?

Where were we. Oh yes, TSS/8.

TSS/8 was initially designed to run on a PDP-8/I (introduced 1968) or the original PDP-8 (1965) equipped with the following hardware at minimum:

  • 12KW of memory
  • KT8/I and MC8/I or equivalent
  • A programmable or line-time clock (KB8/I)
  • An RF08 or DF32 fixed head disc controller with up to four RS08s or DS32 fixed head disks

Optionally supported were the TC08 DECtape controller and a DC08 or PT08 terminal controller for connecting up multiple user terminals. As time went on, TSS/8 was extended to support the newer Omnibus PDP-8 models and peripherals: The PDP-8/e (1970), 8/f, 8/m and the 8/a introduced in 1976.

TSS/8 used an RF08 or DF32 disc for storing the operating system, swapping space, and the user filesystem. Of these the most critical application was swapping: each user on the system got 4KW of swap space on the disk for their current job. Since multiple users shared the system, there could be more user jobs than memory to hold them, so a user’s program would be swapped out to disk to allow another user’s program to run, then swapped back in at a later time. This made fast transfers with minimal latency essential: The RF08, being a fixed-head disk, had very little latency (averaging about 17ms due to rotational delays) and had a transfer rate of about 62KW/second.

Fixed head disks also had the advantage of being word addressable, unlike many later storage mechanisms which read data a sector at a time. This made transfers of small amounts of data (like filesystem structures) more efficient as only the necessary data needed to be transferred.

Our RF08 Controller with two RS08 drives (256KW capacity each)

We’ve wanted to get TSS/8 running at the museum for a long time. The biggest impediment to running TSS/8 on real hardware in this year of 2019 is the requirement for a fixed-head disk. There are not many RF08s or DF32s left in the world these days, and the ones that remain are difficult to keep operational in the long term. We have contemplated restoring a PDP-8/I and the one RF08 controller (with two RS08 discs) in our collection, or building an RF08 emulator, but I thought it would be an interesting exercise to get TSS/8 to run on the PDP-8/e we already have on exhibit on the second floor, with the hardware we already have restored and operational.

LCM+L’s PDP-8/e. RK05 drive on the left.

Our 8/e is outfitted with an RK05 drive, controlled by the usual RK8E Omnibus controller. The RK05 is a removable pack drive with a capacity of approximately 2.5MB and a transfer rate of 102KW/sec. On paper it didn’t seem infeasible to run a time-sharing system with an RK05 instead of an RF08 — each user’s 4K swap area transposes nicely to a single track on an RK05 (a single track is 16 sectors of 256 words, yielding 4096 words) and the capacity is larger than the maximum size for an RF08 controller (1.6MW vs 1.0MW). However, the seek time of the RK05 (10ms track-to-track, 50ms average, vs. no seek time on the RF08) means performance is going to be lower; the only question was by how much. My theory was that while the system would be slower it would still be usable. Only one way to find out, I figured.
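
The geometry math behind that hunch is easy to check (Python; the slot-per-track layout here is illustrative rather than TSS/8’s actual allocation):

WORDS_PER_SECTOR, SECTORS_PER_TRACK = 256, 16
assert WORDS_PER_SECTOR * SECTORS_PER_TRACK == 4096   # one 4KW swap image per track

def swap_slot_sectors(slot):
    """The run of RK05 sectors holding one job's 4KW swap image."""
    base = slot * SECTORS_PER_TRACK
    return range(base, base + SECTORS_PER_TRACK)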

Finding TSS/8 Sources

Of course, in order to modify the system it would be useful to have access to the original source code. Fortunately the heavy lifting here has already been done: John Wilson transcribed a set of source listings way back in the 1980s and made them available on the Internet in the early 2000s. Since then a couple of PDP-8 hackers (Brad Parker and Vincent Slyngstad) combined efforts to make those source listings build again, and made the results available here. Cloning that repository provides the sources and the tools necessary to assemble the TSS/8 source code and build an RF08 disk containing the resultant binaries along with a working TSS/8 filesystem. I began with this as a base and started in to hacking away.

Hacking Away

The first thing one notices when perusing the TSS/8 source is that it has comments. Lots of comments. Useful comments. I would like to extend my heartfelt thanks to the original authors of this code: you are the greatest.

Lookit’ them comments: That’s the way you do it!

There are two modules in TSS/8 that need modifications: INIT and TS8. Everything else builds on top of these. INIT is a combination of bootstrap, diagnostic, backup, and patching tool. Most of the time it’s used to cold boot the TSS/8 system: It reads TS8 into fields 0 and 1 of the PDP-8’s memory and starts executing it. TS8 is the TSS/8 Monitor (analogous to the “kernel” in modern parlance). It manages the hardware, schedules user jobs, and executes user requests.

It made sense to make changes to INIT first, since it brings up the rest of the system. These changes ended up being fairly straightforward as everything involved with booting the system read entire 4K tracks in at a time, nothing complicated. (I still have yet to modify the DECtape dump/restore routines, however.)

The code for TS8, the TSS/8 Monitor, lives in ts8.pal, and this is where the bulk of the code changes live. The Monitor contains the low-level disk I/O routines used by the rest of the system. I spent some time studying the code in ts8.pal to understand better what needed to be changed and it all boiled down to two sets of routines: one used for swapping processes in and out 4KW at a time, and one used for filesystem transfers of arbitrary size.

I started with the former as it seemed the less daunting task. The swapping code is given a 4K block of memory to transfer either to (“swapping out”) or from (“swapping in”) the fixed-head disk. For the DF32 and RF08 controllers this is simple: You just tell the controller “copy 4KW from here and put it over there” (more or less) and it goes off and does it and causes an interrupt to let the processor know when it’s done. Simple:

SWPIN,    0
     DCMA        /TO STOP THE DISC
     TAD SWINA   /RETURN ADDRESS FOR INTURRUPT CHAIN
     DCA I DSWATA    /SAVE IT
     TAD INTRC   /GET THE TRAC # TO BE READ IN
     IFZERO RF08-40 <     
     TAD SQREQ   /FIELD TO BE USED     
     DEAL     
     CLA     
     NOP     /JUST FOR PROPER LENGTH     >
     IFZERO RF08 <     
     DXAL     
     TAD SQREQ   /FIELD TO BE SWAPPED IN     
     TAD C0500   /ENABLE INTERRUPT ON ERROR AND ON COMPLETION     
     DIML     >
     DCA DSWC    /WORD COUNT
     CMA
     DCA DSMA    /CORE ADDRESS
     DMAR
     JMP I SWPIN

SWPTR,    JMP SWPERR      /OOPS
     TAD FINISH      /DID WE JUST SWAP IN OR OUT?
     SMA
     JMP SWPOK       /IN; SO WE'RE FINISHED
     CIA
     DCA FINISH      /SAVE IT
     JMS SWPIO       /START SWAP IN
     DISMIS          /GO BACK TO WHAT WE WERE DOING

For the RK05 things are a bit more complicated: The RK8E controller can only transfer data one sector (256 words) at a time, so my new swapping code would need to run 16 times (and be interrupted 16 times) in order to transfer a full 4KW. And it would have to keep track of the source and destination addresses manually. Obviously this code was going to take up more space, and space was already at a premium in this code (the TSS/8 Monitor gets a mere 8KW to do everything it needs to do). After fighting with the assembler and optimizing and testing things I came up with:

SWPIN, TAD SQREQ                 / GET FIELD TO BE SWAPPED IN
     TAD C0400                   / READ SECTOR, INTERRUPT
     DLDC                        / LOAD COMMAND REGISTER:
                                 / FIELD IS IN BITS 6-8;
                                 / INTERRUPTS ENABLED ON TRANSFER COMPLETE
                                 / OF A 256-WORD READ TO DRIVE ZERO.
     TAD     INTRC               / GET THE TRACK # TO READ FROM
     TAD     RKSWSE              / ADD SECTOR
     DLAG                        / LOAD ADDRESS, GO
     JMP I   SWPIT
     
 / FOR RK05:
 / ON EACH RETURN HERE, CHECK STATUS REG (ERROR OR SUCCESS MODIFIES
 / ENTRY ADDRESS TO SWPTR)
 / ON COMPLETION, INC. SECTOR COUNT, DO NEXT SECTOR.  ON LAST SECTOR
 / FINISH THE SWAP.
 SWPA,    SWPTR                  /RETURN ADDRESS AFTER SWAP
 
 SWPTR, JMP SWPERR      /OOPS
     TAD RKADR
     TAD C0400       /NEXT ADDRESS
     DCA RKADR
     TAD RKSWSE      /NEXT SECTOR
     IAC
     AND C0017   
     SNA             /SECTOR = 16? DONE?
     JMP SWFIN       /YEP, FINISH THINGS UP.
     DCA RKSWSE      /NO - DO NEXT SECTOR
     JMS SWPIO       /START NEXT SECTOR TRANSFER
     DISMIS          /GO BACK TO WHAT WE WERE DOING
 SWFIN, TAD FINISH   /DID WE JUST SWAP IN OR OUT?    
     SMA
     JMP SWPOK       /IN; SO WE'RE FINISHED
     CIA
     DCA FINISH      /SAVE IT
     JMS SWPIR       /START SWAP IN
     DISMIS          /GO BACK TO WHAT WE WERE DOING      
     

The above is only slightly larger than the original code. Like the original, it’s interrupt driven: SWPIN sets up a sector transfer then returns to the Monitor — the RK8E will interrupt the processor when this transfer is done, at which point the Monitor will jump to SWPTR to process it. SWPTR then determines if there are more sectors to transfer, and if so starts the next transfer, calculating the disk and memory addresses needed to do so.
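
For readers who don’t speak PAL, the shape of the new loop is roughly this (a Python restatement; the disk object and its start_read() are invented stand-ins for the RK8E):

class Swapper:
    """Interrupt-driven 4KW swap, one 256-word sector at a time."""
    def __init__(self, disk):
        self.disk = disk

    def swap_in(self, track, core_base):       # cf. SWPIN
        self.track, self.sector, self.addr = track, 0, core_base
        self.disk.start_read(self.track * 16, self.addr)

    def on_interrupt(self):                    # cf. SWPTR
        self.sector += 1
        if self.sector == 16:                  # all 4096 words moved
            return "done"
        self.addr += 256                       # next core address
        self.disk.start_read(self.track * 16 + self.sector, self.addr)
        return "busy"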

After testing this code, TSS/8 would initialize and prompt for a login, and then hang attempting to do a filesystem operation to read the password database. Time to move on to the other routine that needed to be changed: the filesystem transfer code. This ended up being considerably more complicated than the swapping routine. As mentioned earlier, the RF08 and DF32 disks are word-addressable, meaning that any arbitrary word at any address on disk can be accessed directly. And these controllers can transfer any amount of data from a single word to 4096 words in a single request. The RK05 can only transfer a sector’s worth of data (256 words) at once and transfers must start on a sector boundary (a multiple of 256 words). The TSS/8 filesystem code makes heavy use of the flexibility of the RF08/DF32, and user programs can request transfers of arbitrary lengths from arbitrary addresses as well. This means that the RK05 code I’m adding will need to do some heavy lifting in order to meet the needs of its callers.

Like the swapping code, a single request may require multiple sector transfers to complete. Further, the new code will need to have access to a private buffer 256 words in length for the transfer of a single RK05 sector — it cannot copy sector data directly to the caller’s destination like it does with the RF08/DF32 because that destination is not likely to be large enough. (Consider the case where the caller wants to read only one word!) So for a read operation, the steps necessary are:

  1. Given a word address for the data being requested from disk, calculate the RK05 sector S that word lives in. (i.e. divide the address by 256).
  2. Given the same, calculate the offset O in that sector that the data starts at (i.e. calculate the word address modulo 256)
  3. Start a read from the RK05 for sector S into the Monitor’s private sector buffer. Return to the Monitor and wait for an interrupt signalling completion.
  4. On receipt of an interrupt, calculate the length of the data to be copied from the private sector buffer into the caller’s memory (the data’s final destination). Calculate the length L as 256-O (i.e. copy up to the end of the sector we read.)
  5. Copy L words from offset O in the private sector buffer to the caller’s memory.
  6. Decrement the caller’s requested word count by L and see if any words remain to be transferred: If yes, increment the sector S, reset O to 0 (we start at the beginning of the next sector) and go back to step 3.
  7. If no more words to be transferred, we’re done and we can take a break. Whew.

Doing a Write is more complicated: Because the offset O may be in the middle of a sector, we have to do a read-modify-write cycle: Read the sector first into the private buffer, copy in the modified data at offset O in the buffer, and then write the whole buffer back to disk.
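
Here is the arithmetic from the steps above, plus the read-modify-write for unaligned writes, sketched in Python (read_sector and write_sector stand in for the RK8E transfer routines; the real code is the PDP-8 assembly in ts8.pal):

WPS = 256   # words per RK05 sector

def read_words(word_addr, count, read_sector):
    out = []
    s, o = divmod(word_addr, WPS)      # steps 1 and 2: sector S, offset O
    while count:
        buf = read_sector(s)           # step 3: one sector into the private buffer
        n = min(WPS - o, count)        # step 4: length L = 256 - O (capped)
        out.extend(buf[o:o + n])       # step 5: copy to the caller
        count -= n                     # step 6: more to go?
        s, o = s + 1, 0
    return out                         # step 7: done, take a break

def write_words(word_addr, data, read_sector, write_sector):
    s, o = divmod(word_addr, WPS)
    while data:
        buf = read_sector(s)           # read-modify-write for partial sectors
        n = min(WPS - o, len(data))
        buf[o:o + n] = data[:n]
        write_sector(s, buf)
        data = data[n:]
        s, o = s + 1, 0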

This code ended up not fitting in Field 0 of TS8 — I had to move it into Field 1 in order to have space for both the code and the private sector buffer. So as not to bore you I won’t paste the final code here (it’s pretty long) but if you’re curious you can see it starting around line 6994 of ts8.pal.

This code, while functional, has some obvious weaknesses and could be optimized: the read-modify-write cycle for write operations is only necessary for transfers that start at a non-sector boundary or are less than a sector in size. Repeated reads from the same sector could bypass the actual disk transfer (only the first read need actually hit the disk). Similarly, repeated writes to the same sector need only commit the sector to disk when a new sector is requested. I’m waiting to see how the system holds up under heavy use, and what disk usage patterns emerge, before undertaking these changes, premature optimization being the root of all evil and whatnot.

The first boot of TSS/8 on our PDP-8/e!

I tested all of these changes as I was writing them under SIMH, an excellent suite of emulators for a variety of systems including the PDP-8. When I was finally ready to try it on real hardware, I used David Gesswein’s dumprest tools to write the disk image out to a real RK05 pack, and toggled in the RK05 TSS/8 bootstrap I wrote to get INIT started. After a couple of weeks of working only under the emulator, it was a real relief when it started right up the first time on the real thing, let me tell you!

TSS/8 is currently running on the floor at the museum, servicing only two terminals. I’m in the process of adding six more KL8E asynchronous serial lines so that we can have eight users on the system — the hope is to make the system available online early next year so that people around the world can play with TSS/8 on real hardware.

I’ve also been working on tracking down more software to run on TSS/8. In addition to what was already available on the RF08 disk image (PALD, BASIC, FOCAL, FORTRAN, EDIT) I’ve dug up ALGOL, and ported CHEKMO II and LISP over. If anyone out there is sitting on TSS/8 code — listings, paper tape, disk pack, or DECtape, do drop me a line!

And if you’re so inclined, and have your own PDP-8 system with an RK05 you can grab the latest copy of my changes on our Github at https://github.com/livingcomputermuseum/cpus-pdp8 and give it a whirl. Comments, questions, and pull requests are always welcome!

A Journey Into the Ether: Debugging Star Microcode

Back in January I unleashed my latest emulation project Darkstar upon the world. At that time I knew it still had a few areas that needed more refinement, and a few areas that were very rough around the edges. The Star’s Ethernet controller fell into that latter category: No detailed documentation for the Ethernet controller has been unearthed, so my emulated version of it was based on a reading of the schematics and diagnostic microcode listings, along with a bit of guesswork.

Needless to say, it didn’t really work: The Ethernet controller could transmit packets just fine but it wasn’t very good at receiving them. I opted to release V1.0 of Darkstar despite this deficiency — while networking was an important part of Xerox’s computing legacy, there were still many interesting things that could be done with the emulator without it. I’d get the release out the door, take a short break, and then get back to debugging.

Turns out the break wasn’t exactly short — sometimes you get distracted by other shiny projects — but a couple of weeks back I finally got back to working on Darkstar and I started with an investigation of the Receiver end of the Ethernet interface — where were things going wrong?

The first thing I needed to do was come up with some way to see what was actually being received by the Star, at the macrocode level. While I lack sources for the Interlisp-D Ethernet microcode, I could see it running in Darkstar’s debugger, and it seemed to be picking up incoming packets, reading in the words of data from these packets and then finally shuffling them off to the main memory. From this point things got very opaque — what was the software (in this case the operating system itself) doing with that data, and why was it apparently not happy with it?

The trickiest part here was finding diagnostic software to run on the Star that could show me the raw Ethernet data being received, and after a long search through available Viewpoint, XDE, and Interlisp-D tools and finding nothing that met my needs I decided to write my own in Interlisp-D. The choice to use Interlisp-D was mainly due to the current lack of XDE compilers, but also because the Interlisp-D documentation covered exactly what I needed to accomplish, using the ETHERRECORDS library. I wrote some quick and dirty code to print out the contents of any packets coming in, and got… crickets. Nothing. NIL, as the Lisp folks say.

Hmm.

So I went back and watched the microcode read a packet in and while it was indeed pulling in data, upon closer inspection it was discarding the packet after the first few words. The microcode was checking that the packet’s Destination MAC address (which begins each Ethernet packet’s header) matched that of the Star’s MAC address and it was ascertaining that the packet in question wasn’t addressed to it. This is reasonable behavior, but the packets it was receiving from my test harness were all Broadcast packets, which use a destination address of ff:ff:ff:ff:ff:ff and which are, by definition, destined for all machines on the network — which is when I finally noticed that hey wait a minute… the words the microcode is reading in for the destination address aren’t all FF’s as they should be… and then I slapped my forehead when I saw what I had done:

Whoops.

I’d accidentally used the “PayloadData” field (which contains just the actual data in the packet) rather than the “Data” field (which contains the full packet including the Ethernet header). So the microcode was never seeing Ethernet headers at all, instead it was trying to interpret packet data as the header!

I fixed that and things were looking much, much better. I was able to configure TCP/IP on Interlisp-D and connect to a UNIX host and things were generally working, except when they weren’t. On rare occasions the Star would drop a single word (two bytes) from an incoming packet with no fanfare or errors:

The case of the missing words. Note the occasional loss of two characters in the above directory listing.

This was puzzling to say the least. After some investigation it became clear that the lost word was randomly positioned within the packet; it wasn’t lost at the beginning or end of the packet due to an off-by-one error or something not getting reset between packets. Further investigation indicated that without fail, the microcode was reading in each word from the packet via the ←EIData function (which reads the next incoming word from the Ethernet controller and puts it on the Central Processor’s X Bus). On the surface it looked like the microcode was reading each word in properly… but then why was one random word getting lost?

It was time to take a good close look at the microcode. I lack source code for the Interlisp-D Ethernet microcode but my hunch was that it would be pretty similar to that used in Pilot since no one in their right mind rewrites microcode unless they absolutely have to. I have some snippets of Pilot microcode, fortunately, and as luck would have it the important portions of it matched up with what Interlisp was using, notably the below loop:

{main input loop}
EInLoop: MAR ← E ← [rhE, E + 1], EtherDisp, BRANCH[$,EITooLong], c1;
MDR ← EIData, DISP4[ERead, 0C], c2;
ERead: EE ← EE - 1, ZeroBr, GOTO[EInLoop], c3, at[0C,10,ERead];
E ← uESize, GOTO[EReadEnd], c3, at[0D,10,ERead];
E ← EIData, uETemp2 ← EE, GOTO[ERCross], c3, at[0E,10,ERead];
E ← EIData, uETemp2 ← EE, L6←L6.ERCrossEnd, GOTO[ERCross], c3, at[0F,10,ERead];

The code starting with the label EInLoop (helpfully labeled “main input loop”) loads the Memory Address Register (MAR) with the address where the next word from the Ethernet packet will be stored; and the following line invokes ←EIData to read the word in and write it to memory via the Memory Data Register (MDR). The next instruction then decrements a word counter in a register named EE and loops back to EInLoop (“GOTO[EInLoop]”). (If this word counter underflows then the packet is too large for the microcode to handle and is abandoned.)

An important diversion is in order to discuss how branches work in Star microcode. By default, each microinstruction has an INIA (Initial Next Instruction Address) field that tells the processor where to find the next instruction to be executed. Microinstructions need not be ordered sequentially in memory, and in fact generally are not (this makes looking at a raw dump of microcode highly entertaining). At the end of every instruction, the processor looks at the INIA field and jumps to that address.

To enable conditional jumps, a microinstruction can specify one of several types of Branches or Dispatches. These cause the processor to modify the INIA of the next instruction by OR’ing in one or more bits based on a condition or status present during the current instruction. (This is then referred to as NIA, for Next Instruction Address). For example, the aforementioned word counter underflow is checked by the line:

ERead:    EE ← EE - 1, ZeroBr, GOTO[EInLoop], c3, at[0C,10,ERead];

The EE register is decremented by 1 and the ZeroBr field specifies a branch if the result of that operation was zero. If that was the case, then the INIA of the next instruction (at EInLoop) is modified — ZeroBr will OR a “1” into it.

EInLoop:    MAR ← E ← [rhE, E + 1], EtherDisp, BRANCH[$,EITooLong],    c1;

This branch is denoted by the BRANCH[$,EITooLong] assembler macro which denotes the two possible destinations of the branch. The dollar sign ($) indicates that in the case of no branch, the next sequential instruction should be executed, and that that instruction needs no special address. In the case of a branch (indicating underflow) the processor will jump to EITooLong instead.

Clear as mud? Good! So how does this loop exit under normal conditions? In the microcode instruction at EInLoop there is the clause EtherDisp. This causes a microcode dispatch — a multi-way jump — based on two status bits from the Ethernet controller. The least-significant bit in this status is the Attn bit, used to indicate that the Ethernet controller has something to report: A completed packet, a hardware error, etc. The other bit is always zero if the Ethernet controller is installed. (If it’s not present, the bit is always 1).

Just like a conditional branch, a dispatch modifies the INIA of the next instruction by ORing those status bits in to form the final NIA. The instruction following EInLoop is:

MDR ← EIData, DISP4[ERead, 0C],    c2;

The important part to us right now is the DISP4 assembler macro: this sets up a dispatch table starting with the label ERead, which it places at address 0x0C (binary: 1100). Note how the lower two bits in this address are clear, to allow branches and dispatches to OR modified bits in. In the case where EtherDisp specifies no special conditions (all bits zero) the INIA of this instruction is unmodified and left as 0x0C and the loop continues. In the case of a normal packet completion, EtherDisp will indicate that the Attn bit is set, ORing in 1, resulting in an NIA of 0x0D (binary: 1101).
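
Modeled in Python (my own restatement, not anything from the microcode assembler):

def nia(inia, attn, absent=False):
    """EtherDisp ORs two status bits into the INIA of the following
    instruction; Attn is the low bit, 'controller absent' the high bit."""
    return inia | (int(absent) << 1) | int(attn)

assert nia(0x0C, attn=False) == 0x0C   # no news: keep looping at ERead
assert nia(0x0C, attn=True)  == 0x0D   # Attn set: packet complete, exit loop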

This all looked pretty straightforward and I didn’t see any obvious way a single word could get lost here, so I looked at the other ways this loop could be exited — how do we get to the instruction at 0x0E (binary: 1110) from the dispatch caused by EtherDisp? At first this left me scratching my head — as mentioned earlier, the second bit masked in by EtherDisp is always zero! The clue is in what the instruction at 0x0E does: it jumps to a Page Cross handler for the routine.

This of course requires another brief (not so brief?) diversion into Central Processor minutiae. The Star’s Central Processor contains a simple mechanism for providing virtual memory via a Page Map, which maps virtual addresses to physical addresses. Each page is 256 words in size, and the CP has special safeguards in place to trap memory accesses that might cross a page boundary both to prevent illegal memory accesses and so the map can be maintained. In particular, any microinstruction that loads MAR via an ALU operation that causes a carry out of the low 8 bits (i.e. calculating an address that crosses a 256-word boundary) results in any memory access in the following instruction being aborted and a PageCross branch being taken. This allows the microcode to deal with Page Map-related activities (update access bits or cause a page fault, for example) before resuming the aborted memory access.
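
The trigger condition itself is just a carry check (a simplified Python sketch):

def page_cross(e):
    """True if E+1 carries out of the low 8 bits, i.e. the new address
    crosses a 256-word page; the next instruction's memory access is
    then aborted and a PageCross branch ORs a 2 into its NIA."""
    return (e & 0xFF) == 0xFF

assert not page_cross(0xFE)
assert page_cross(0xFF)   # exactly the case walked through below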

Whew. So, in the case of the code in question:

{main input loop}
EInLoop: MAR ← E ← [rhE, E + 1], EtherDisp, BRANCH[$,EITooLong], c1;
MDR ← EIData, DISP4[ERead, 0C], c2;
ERead: EE ← EE - 1, ZeroBr, GOTO[EInLoop], c3, at[0C,10,ERead];
E ← uESize, GOTO[EReadEnd], c3, at[0D,10,ERead];
E ← EIData, uETemp2 ← EE, GOTO[ERCross], c3, at[0E,10,ERead];
E ← EIData, uETemp2 ← EE, L6←L6.ERCrossEnd, GOTO[ERCross], c3, at[0F,10,ERead];

Imagine (if you will) that register E (the Ethernet controller microcode gets two whole CPU registers of its very own and their names are E and EE) contains 0xFF (255) and the processor is running the instruction at EInLoop.  The ALU adds 1 to it, resulting in 0x100 — this is a carry out from the low 8-bits and so a PageCross branch is forced during the next instruction.  A PageCross branch will OR a “2” into the INIA of the next instruction.

The next instruction attempts to store the next word from the Ethernet’s input FIFO into memory via the MDR←EIData operation but this store is aborted due to the Page Cross caused during the last instruction.  And at last, a 2 is ORed into INIA, causing a dispatch to 0x0E (binary: 1110).  So in answer to our (now much earlier) question:  The routine at 0x0E is invoked when a Page Cross occurs while reading in an Ethernet packet.  (How the code gets to the routine at 0x0F is left as an exercise to the reader.)

And as it turns out, it’s the instruction at 0x0E that’s triggering the bug in my emulated Ethernet controller. 

E ← EIData, uETemp2 ← EE, GOTO[ERCross],    c3, at[0E,10,ERead];

Note the E←EIData operation being invoked — it’s reading in the word from the Ethernet controller for a second time during this turn through the loop, and remember that the first time it did this, it threw the result away since the MDR← operation was canceled.  This second read is done with the intent to store the abandoned word away (in register E) until the Map operation is completed.

So what’s the issue here?  On the real hardware, those two ←EIData operations return the same data word rather than reading the next word from the input packet.  This is in fact one of the more clearly spelled-out details in the Ethernet schematic — it even explains why it’s happening! — one that I completely, entirely missed when writing the emulation:

Seems pretty clear to me…

Microinstructions in the Star’s Central Processor are grouped into clicks of three instructions each; a click’s worth of instructions execute atomically — they cannot be interrupted.  Each instruction in a click executes in a single cycle, referred to as Cycle 1, Cycle 2, and Cycle 3 (or c1, c2, and c3 for short).  You can see these cycles notated in the microcode snippet above.  Some microcode functions behave differently depending on what cycle they fall on.  ←EIData only loads in the next word from the Ethernet FIFO when executed during a c2; an ←EIData during c1 or c3 returns the last word loaded.  I had missed this detail, and as a result, my emulation caused any invocation of ←EIData to pull the next word from the FIFO.  As demonstrated above this nearly works, but causes a single word to be lost when a packet read crosses a page boundary.
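
The fix in Darkstar amounts to latching the last word read, something like this sketch (class and method names are mine, not Darkstar’s actual internals):

class EtherInput:
    """<-EIData semantics: only a c2 pops the input FIFO; a c1 or c3
    returns the word most recently read (a simplified model)."""
    def __init__(self):
        self.fifo = []    # words received from the wire
        self.last = 0

    def ei_data(self, cycle):
        if cycle == 2:    # per the schematic note: advance only in Cycle 2
            self.last = self.fifo.pop(0)
        return self.last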

I fixed the ←EIData issue in Darkstar and at long last, Ethernet is working properly.  I was even able to connect to one of the machines here at the museum:

The release on Github has been updated; grab a copy and let me know how it works for you!

If you’re interested in learning more about how the Star works at the microcode level, the Hardware Reference and Microcode Reference are a good starting point. Or drop me a line!

Introducing Darkstar: A Xerox Star Emulator

Star History and Development

The Xerox 8010 Information System (“Star”)

In 1981, Xerox released the Xerox 8010 Information System (codenamed “Dandelion” during development), commonly referred to as the Star. The Star took what Xerox learned from the research and experimentation done with the Alto at Xerox PARC and attempted to build a commercial product from it.  It was envisioned as the centerpiece of the office of the future, combining high-resolution graphics with the now-familiar mouse, Ethernet networking for sharing and collaborating, and Xerox’s laser printer technology for faithful “WYSIWYG” document reproduction.  The Star’s operating system (called “Star” at the outset, though later renamed “Viewpoint”) introduced the Desktop Metaphor to the world.  In combination with the Star’s unique keyboard it provided a flexible, intuitive environment for creating and collaborating on documents and mail in a networked office environment.

The Star’s Keyboard

Xerox later sold the Star hardware as the “Xerox 1108 Scientific Information Processor” – In this form it competed with Lisp workstations from Symbolics, LMI, and Texas Instruments in the burgeoning AI workstation market and while it wasn’t quite as powerful as any of their offerings it was considerably more affordable – and sometimes much smaller.  (The Symbolics 3600 workstation, c. 1983 was the size of a refrigerator and cost over $100,000).

The Star never sold well – it was expensive ($16,500 for a single workstation and most offices would need far more than just one) and despite being flexible and powerful, it was also quite slow. Unlike the IBM PC, which also made its debut in 1981 and would eventually sell millions, Xerox ended up selling somewhere in the neighborhood of 25,000 systems, making the task of finding a working Star a challenge these days.

Given its history and relationship to the Alto, the Star seemed appropriate for my next emulation project. (You can find the Alto emulator, ContrAlto, here). As with the Alto a substantial amount of detailed hardware documentation had been preserved and archived, making it possible to learn about the machine’s inner workings… except in a few rather important places:


From the March 1982 edition of the Dandelion Hardware Manual.  Still waiting for these sections to be written…

Fortunately, Al Kossow at Bitsavers was able to provide extra documentation that filled in most of the holes.  Cross-referencing all of this with the available schematics, it looked like there was enough information to make the project possible.

The Dandelion Hardware

The Star’s Central Processor (CP). Note the ALU (4xAM2901, top middle) and 4KW microcode store (bottom)

Much like the Alto, the Dandelion’s Central Processor (referred to as the “CP”) is microcoded, and, again like the Alto, this microcode is responsible for controlling various peripherals, including the display, Ethernet, and hard drive.  The CP is also responsible for executing bytecode macroinstructions.  These macroinstructions are what the Star’s user programs and operating systems are actually compiled to.  The CP is sometimes referred to as the “Mesa” processor because it was designed to efficiently execute Mesa bytecodes, but it was in no way limited to implementing just the Mesa instruction set: The Interlisp-D and Smalltalk systems defined their own microcode for executing their own bytecodes, custom-tailored and optimized to their environments.

Mesa was a strongly-typed “high-level language.” (Xerox hackers loved their puns…)  It originated on the Alto but quickly grew too large for it; a smaller, stripped-down Mesa called “Butte” (i.e. “a small mesa”) existed for the Alto but was still fairly unwieldy.  The Star’s primary operating system was written in Mesa, which allowed a set of very sophisticated tools to be developed in a relatively short period of time.

The Star architecture offloaded the control of lower-speed devices (the keyboard and mouse, serial ports, and the floppy drive) to an 8-bit Intel 8085-based I/O processor board, referred to as the IOP.  The IOP is responsible for booting the system: it runs basic diagnostics, loads microcode into the Central Processor and starts it running.  Once the CP is running, it takes over and works in tandem with the IOP.

Emulator Development

The Star’s I/O Processor (IOP). Intel 8085 is center-right.

Since the IOP brings the whole system up, it seemed the logical place to begin implementing the emulator.  I started with an emulation of the 8085 processor and hooked up the IOP ROMs and RAMs.  Since the first thing the IOP does at power up or reset is execute a vigorous set of self-tests, the IOP was, in effect, testing my work as I progressed, which was extremely helpful.  This is one important lesson Xerox learned from the Alto and applied to the Star: on-board diagnostics are a good thing.  The Alto had no diagnostic facilities built in, so if anything failed in a way that prevented the system from running, the only way to determine the fault was to get out the oscilloscope and the schematics and start probing.  On the Star, diagnostics and status are reported through a 4-digit LED display, the “Maintenance Panel” (or MP for short).  If the IOP finds a fault during testing, it presents a series of codes on this panel.  During a normal system boot, various codes are displayed to indicate progress.  The MP was the first I/O device I emulated on the IOP, for obvious reasons.
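Because the MP drove so much of my early debugging, it is worth showing how simple an emulated version of it can be.  This is just a sketch (the interface below is invented for illustration; the real IOP drives the panel through its own I/O logic):

// A minimal sketch of an emulated Maintenance Panel: a 4-digit display
// the IOP posts diagnostic and boot-progress codes to.
public class MaintenancePanel
{
    private int _code;

    // Called by the emulated IOP when it posts a new code (0-9999).
    public void WriteCode(int code)
    {
        _code = code;
        // Logging each code as it changes gives a quantitative view of
        // how far the self-tests and the boot process are getting.
        System.Console.WriteLine($"MP: {_code:D4}");
    }

    public int CurrentCode => _code;
}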

Development on the IOP progressed nicely for several weeks (and the codes reported in the emulated MP kept increasing, reflecting my progress in a quantitative way), and during this time I implemented a source-level debugger for the IOP’s 8085 code to help me along.  This was invaluable in working out what the IOP was trying to do and why it was failing to do so.  It allowed me to step through the original code, place breakpoints, and investigate the contents of the IOP’s registers and memory while the emulated system was running.

The IOP Debugger

Once the IOP self-tests were passing, the IOP emulation was running to the point where it attempted to actually boot the Central Processor!  This meant I had to shift gears and switch over to implementing an emulation of the CP and make it talk to the IOP. This is where the real fun began.

For the next couple of months I hunkered down and implemented a rough emulation of the CP, starting with the system’s 16-bit ALU (implemented with four 4-bit AM2901 ALU chips chained together).  The 2901 (see the top portion of the following diagram) forms the nexus of the processor; in addition to providing the processor’s 16 registers and basic arithmetic and logical operations, it is the primary data path between the “X bus” and “Y bus.”  The X bus provides inputs to the ALU from various sources: I/O devices, main memory, a handful of special-purpose register files, and the Mesa stack and bytecode buffer.  The ALU’s output connects to the Y bus, providing inputs back into these same components.

The Star Central Processor Data Paths
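In emulator form, that data path reduces to a simple shape: X-bus sources feed the ALU, and the ALU’s Y-bus output feeds back into the same components.  A rough sketch with invented names (the real microinstruction decode selects sources and destinations in a far more involved way):

public interface IXBusSource { ushort ReadX(); }            // device driving the X bus
public interface IYBusSink  { void WriteY(ushort value); }  // device receiving the Y bus

public class CentralProcessorSketch
{
    // Stand-in for the 4x AM2901 chain: 16 registers plus ALU operations.
    private readonly ushort[] _registers = new ushort[16];

    // One microinstruction's data movement: an X-bus source feeds the ALU,
    // which combines it with a register and drives the result onto the Y bus.
    public void Step(IXBusSource source, int reg, IYBusSink dest)
    {
        ushort x = source.ReadX();
        ushort y = (ushort)(x + _registers[reg]);   // e.g. an ADD microop
        _registers[reg] = y;                        // register write-back
        dest.WriteY(y);                             // result onto the Y bus
    }
}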

One of the major issues that confronted me almost immediately when writing the CP emulation was fidelity: how faithful to the hardware does this emulation need to be? This issue arose specifically because of two hardware details related to the ALU and its inputs:

  1. The AM2901 ALU has a set of flags that get raised based on the result of an ALU operation (for example, the “Carry” flag gets raised if the result of an operation causes a carry out from the most-significant bits). For arithmetic operations these flags make sense, but the 2901 also sets these flags as the result of logical operations. The meaning of the flags in these cases is opaque and of no real use to programmers (what does it mean for a “carry” flag to be set as a result of a logical OR?); they exist only as a side effect of the ALU’s internal logic. But they are documented in the spec sheet (see the picture below).
  2. With a 137ns clock cycle time, the CP pushes the underlying hardware to its limits. As a result, some combinations of input sources requested by a microinstruction will not produce valid results because the data simply cannot all make it to its destination in time. Some combinations will produce garbage in all bits, but some will be correct only in the lower nibble or byte of the result, with the upper bits being undefined. (This is due to the ALU in the CP being built from four 4-bit ALUs chained together.)
Logic equations for the “NOT R XOR S” ALU operation’s flags. What it means is an exercise left to the reader.

I spent a good deal of time pondering and experimenting. For #1, I decided to implement my ALU emulation with the assumption that Xerox’s microcode would not make use of the condition flags for non-arithmetic operations, as I could see no reason to make use of them for logical ops and implementing the equations for all of them would be computationally expensive, making the emulation slower. This ended up being a valid assumption for all logical ops except for OR — as it turns out, some microcode assumed that the Carry flag would be set appropriately for this class of operation. When this issue was found, I added the appropriate operations to my ALU implementation.

For #2 I assumed that if Xerox’s microcode made use of any “invalid” combinations of input sources, that it wouldn’t depend on the garbage portion of the results. (That is, if code made use of microinstructions that would only produce valid results in the lower 4 or 8 bits, the microcode would also only depend on the lower 4 or 8 bits generated.) Thus the emulated ALU always produces a complete, correct result across all 16-bits regardless of input source. This assumption appears to have held — I have encountered no real-world microcode that makes assumptions about undefined results thus far.

The above compromises were made for reasons of implementation simplicity and efficiency. The downside is that it is possible to write microcode that will behave differently on the emulation than on the real hardware. However, going through the time, trouble, and expense of a 100% accurate emulation did not seem worth it when no real microcode would ever require this level of accuracy. Emulation is full of trade-offs like this. It would be great to provide an emulation that is perfect in every respect, but sometimes compromises must be made.
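Concretely, the two compromises might look something like the sketch below: the result is always computed correctly across all 16 bits, and flags are only modeled for arithmetic operations plus the one logical operation (OR) that real microcode turned out to depend on.  The flag expression shown for OR is illustrative only, not the data-sheet equation pictured above:

public enum AluOp { Add, And, Or, Xor }

public class Am2901Sketch
{
    public bool Carry { get; private set; }

    public ushort Execute(AluOp op, ushort r, ushort s, bool carryIn)
    {
        switch (op)
        {
            case AluOp.Add:
                int sum = r + s + (carryIn ? 1 : 0);
                Carry = sum > 0xFFFF;               // true arithmetic carry out
                return (ushort)sum;

            case AluOp.Or:
                ushort result = (ushort)(r | s);
                Carry = result == 0xFFFF;           // illustrative stand-in for
                return result;                      // the data-sheet flag logic

            case AluOp.And: return (ushort)(r & s); // flags not modeled
            case AluOp.Xor: return (ushort)(r ^ s); // flags not modeled
            default:        return 0;
        }
    }
}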

I implemented a debugger and disassembler for the CP similar to the one I put together when emulating the IOP.  Emulation of the various X bus-related registers and devices followed, and slowly but surely the CP started passing boot diagnostics as I fixed bugs and implemented missing hardware.  Finally it reached the point where it moved from the diagnostic stage to executing the first Mesa bytecodes of the operating system – the Star was now executing real code!  At that time it seemed appropriate to implement the Star’s display controller so I could see what the Star was trying to tell me – and a few days and much debugging of the central processor later I was greeted with this display from the install floppy (and there was much rejoicing):

The emulated Star says “Hello” for the very first time

Following this I spent two weeks of late nights hacking — implementing the hard disk controller and fixing bugs.  The Star’s hard drive controller doesn’t use an off-the-shelf controller chip as this wasn’t an option at the time the Star was being developed in the late 1970s. It’s a very clever, minimal design with most of the heavy lifting being done in microcode rather than hardware. Thus the emulation has to work at a very low level, simulating (in a sense) the rotation of the platters and providing data from the disk as it moves under the heads, one word at a time (and at just the right time.)
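To give a flavor of what “very low level” means here, the sketch below models a track as an array of words and hands the emulated controller exactly one word per word-time as the platter spins.  Real geometry, sector headers, and timing are considerably more involved, and the names are invented:

// A minimal sketch of word-at-a-time rotational emulation, assuming a
// simplified track of N words and a fixed word time.
public class SpinningDiskSketch
{
    private readonly ushort[] _track;   // the words on the track under the heads
    private int _wordIndex;             // which word is passing the heads now

    public SpinningDiskSketch(ushort[] track) { _track = track; }

    // Called once per emulated word time.  The controller microcode sees
    // exactly one word per call, at the moment it passes under the heads.
    public ushort Tick()
    {
        ushort word = _track[_wordIndex];
        _wordIndex = (_wordIndex + 1) % _track.Length;
        return word;
    }
}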

During this period I also got to learn how Xerox’s hard disk formatting and diagnostic tools worked.  This involved some reverse engineering:  Xerox didn’t want end-users to be able to do destructive things with their hard disks so these tools were password protected.  If you needed your drive reformatted you called a Xerox service engineer and they came out to take care of it (for a minor service charge).  These days, these service engineers are in short supply for some reason.

Luckily, the passcodes are stored in plaintext on the floppy disk so they were easy to unearth.  For future reference, the password is “wizard” or “elf” (if you’re so inclined):

Having solved The Mystery of the Missing Passwords I was at last able to format a virtual hard disk and install Viewpoint, and after waiting nervously for the installation to finish I was rewarded with:

Viewpoint, at long last!

Everything looked good, until the hard disk immediately corrupted itself and the system crashed!  It was very encouraging to see a real operating system running (or nearly so), and over the following weeks I hammered out the remaining issues and started on a design for a real user interface for the emulator. 

I gave it a name: Darkstar.  It starts with a “D” (thus falling in line with the rest of the “D-Machines” produced by Xerox), contains “Star” in the name, and is also a nerdy reference to a cult-classic sci-fi film.  Perfect.

Getting Darkstar

Darkstar is available for download on our Github site and is open source under the BSD 2-Clause license.  It runs on Windows and on Unix systems using the Mono runtime.  It is still very much a work in progress.  Feedback, bug reports, and contributions are always welcome.

Fun with the Star

You’ve downloaded and installed Darkstar and have perused the documentation – now what?  Darkstar doesn’t come with any Xerox software, but pre-built hard disk images are available on Bitsavers (and for the more adventurous among you, piles of floppy disk images are available if you want to install something yourself).  Grab http://bitsavers.org/bits/Xerox/8010/8010_hd_images.zip — this contains hard disk images for Viewpoint 2.0, XDE 5.0, and the Harmony release of Interlisp-D.

You’ll probably want to start with Viewpoint; it’s the crowning achievement of the Star and it invented the desktop metaphor, with icons representing documents and folders. 

To boot Viewpoint successfully you will need to set the emulated Star’s time and date appropriately – Xerox imposed a very strict licensing scheme (referred to as Product Factoring), typically with licenses that expired monthly.  Without a valid license code, Viewpoint grants users a 6-day grace period, after which all programs are deactivated.

Since this is an emulation, we can control everything about the system so we can tell the emulated Star that it’s always just a few hours after the installation took place, bypassing the grace period expiration and allowing you to play with Viewpoint for as long as you like.  Set the date to Nov. 10, 1990 and start the system running.

Now wait.  The system is running diagnostics.

Keep waiting.  Viewpoint is loading.

Go get a coffee.

Seriously, it takes a while for Viewpoint to start up.  Xerox didn’t intend for users to reboot their Stars very often, apparently.  Once everything is loaded a graphic of a keyboard will start bouncing around the screen:

The Bouncing Keyboard

Press any key or click the mouse to get started and you will be presented with the Viewpoint Logon Option Sheet:

The Logon Option Sheet

You can log in with user name “user” and password “password”.  Hit the “Next” key (mapped to “Home” on your computer’s keyboard) to move between fields, or use the mouse to click in them.  Click on the “Start” button to log in, and in a few moments, there you are:

Initial Viewpoint Desktop

The world is your oyster.  Some things work as you expect – click on things to select them, double-click to open them.  Some things work a little differently – you can’t drag icons around with the mouse as you might expect: if you want to move them, use the “Move” key (F6) on your keyboard; if you want to copy them, use the “Copy” key (F4).  These two keys apply to most objects in the system: files, folders, graphical objects, you name it.  The Star made excellent use of the mouse, but it was also very keyboard-centric and employed a keyboard designed to work efficiently with the operating system and tools.  Documentation for the system is available online – check out the PDFs at http://bitsavers.org/pdf/xerox/viewpoint/VP_2.0/, as they’re worth a read to familiarize yourself with the system. 

If you want to write a new document, you can open up the “Blank Document” icon, click the “Edit” button and start writing your magnum opus:

Plagiarism?

One can change text and paragraph properties – font type, size, weight, and all sorts of other groovy things – by selecting text with the mouse (use the left mouse button to select the starting point, and the right to define the end) and pressing the “Prop’s” key (F8):

Mad Props

If you’re an artist or just want to pretend that you are one, open up the “Blank Canvas” icon on the desktop:

MSPaint.exe’s Great Grandpappy

Need to do some quick calculations?  Check out the Calculator accessory:

I’m the Operator with my Pocket Calculator
Help!

There are of course many more things that can be done with Viewpoint, far too many to cover here.  Check out the extensive documentation as linked previously, and also look at the online training and help available from within Viewpoint itself (check the “Help” tab in the upper-right corner.)

Viewpoint is only one of a handful of systems that can be run on Darkstar. Stay tuned for future installments, covering XDE and Interlisp-D!

Life with a 51-year-old CDC 6500

The CDC 6500 has led a rough life over the last 6 months or so: way back on the afternoon of July 2, 2018, I got an email from the CDC’s Power Control PLC telling me that it had to turn off the computer because the cooling water was too hot! A technician came out and found that the chiller was low on refrigerant. He brought it back up to the proper level, and went away. Next morning it was down again.

After much gnashing of teeth and tearing of hair, it was determined that the compressor in the chiller was bad. “We’ll have a new one in 5 weeks!” The new one turned out to be bad too, so another was ordered that was easier to get: only about 3 weeks, instead of the 8 that the official one took. That worked for a few weeks, and then the CDC went down again because the water was too hot.

This time it was very puzzling, because as long as the technician was here, it worked fine. He spent most of a morning watching it, decided it was OK, and left, but he didn’t make it to the freeway before it went down again. He came back and watched for the rest of the afternoon, and found that the main condenser fan would overheat and shut down, causing the backup fan to come on. The load wasn’t very high, so the backup fan had to cycle on and off while the main fan motor cooled off. This would go on for a while, till both motors were off at the same time; then the compressor would go over pressure because the condenser fans were off, and the chiller would stop cooling, resulting in the “Water too HOT” computer shutdown.

Another week went by waiting for replacement fan motors from the chiller manufacturer, with no luck. Eventually we gave up and got new fan motors locally, installed them, and the chiller has been working since. While the CDC didn’t seem to mind being off for 102 days for the compressor problem, it didn’t like being off for 3 weeks while we fiddled with the fans.

Both when it was off for 102 days, and this time, we found that Bay 1 was low on refrigerant. The first time we just filled it up, but the second time we looked closer, and found that there is a small leak where the power wires go into the Bay 1 compressor. The compressor manufacturer, the same guys that made the chiller’s compressor, will gladly sell us a new compressor, but the parts for the 50-year-old R12 compressor are no longer available. We are working on that, but I haven’t heard that we have found the parts yet.

Back to more recent times: now that the chiller is chillin’, and the CDC’s cooling system is coolin’, why isn’t the computer computing?

Let’s run some diagnostics and see what happens: I try to run my CDC diagnostic tape, but the machine complains that Central Memory doesn’t seem to be available. No, I didn’t run the real tape drives; I ran the imaginary one that uses a disk file on a PC to pretend to be a tape drive. Anyway, that didn’t work, so I flip the zillion or so Dead Start switches in my emulated Dead Start panel to fire up my Central Processor-based memory test, and get no display at all! This is distinctly unusual. Let’s try my PP-based Central Memory test: that seems to work till it finishes writing all of memory, then the display goes blank. Is there a pattern here?

I put a scope probe on the memory request line inside the memory controller in Chassis 3, and find that someone is requesting memory every possible cycle. There are four possible requestors: the Peripheral Processors as a group get a request line, each Central Processor gets a request line, and the missing Extended Core Memory gets a request. Let’s find out who it is: the PP’s aren’t doing it, neither of the CP’s are doing it, and the non-existent ECM isn’t doing it. Huh? Nobody wants it, but ALL of it is getting requested!

I am going to step back a little bit, and try to explain why it sometimes takes me a while to fix this beast. This machine was designed before there were any standards for logic diagrams. Every manufacturer had to come up with their own scheme for schematics. Here is one where I found a problem, but we will get to that in a bit.

Now when there are two squares, one above the other, with arrows from each going to the other, those are flip-flops. When you have a square or a circle with multiple arrows going into it, that is a gate. Which one is an “or” gate, and which one is an “and” gate? Sorry, you have to figure that out for yourself, because the CDC documentation says either one can be either one. The triangle with a number in it would be a test point on the edge of the module. The two overlapping circles, kind of like an ellipsis, indicate that it is a coax cable receiver, as opposed to a regular twisted pair signal. A “P” followed by a number indicates a pin of the module.

This module receives the PP read and write signals from the PP’s in chassis 1, on pins P19 and P24. On the right side of the diagram, you can see where all the pins connect. If we look at pin 24, we can see it connects to W07 wire 904, and pin 19 is connected to W07-903. The W “jacks” are coax cables; the other lettered signals go somewhere inside this chassis.

Really, what we are looking at here is that a circle or a square is the collector pull-up resistor of one or more silicon NPN transistors. The arrow heads are the bases of the transistors, and the line coming into each head has a base resistor in it. If there are three arrows coming into a square, like at the bottom, those three 2N2369 transistors all have their collectors tied together, with one pull-up resistor. Call me slow, but it took about 6 months before I felt I was at all fluent in reading these logic diagrams.

Now we have to talk about the Central Memory Architecture a bit. The CDC has 32 banks of 16K words of memory. Each of these banks is separate, and they can be interleaved any way the 4 requestors ask for them. At the moment, I am only running half of them, because there is something wrong with the other half. Each of these banks does a complete cycle in 1uS. The memory controller in chassis 3 can put out an address every 100nS, along with whether it is for a read or a write. This request goes to all banks in parallel. If the bank that it points to is available, he will send back an “accept” pulse, and that requestor is free to ask for something else. If the controller doesn’t get an “accept” he will ask again in about 400nS. There is a bunch of modules involved in this dance, and it is a big circle.
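That dance is easy to mis-picture, so here it is as a sketch, using emulated nanoseconds and the numbers from the text (a 100nS request slot, a roughly 400nS retry, and a 1uS bank cycle).  Purely illustrative:

public class BankSketch
{
    private long _busyUntil;   // ns timestamp when this bank is free again

    // Returns true (an "accept" pulse) if the bank can take the request.
    public bool Request(long now)
    {
        if (now < _busyUntil) return false;   // no accept; ask again later
        _busyUntil = now + 1000;              // one full core cycle: 1uS
        return true;
    }
}

public class RequestorSketch
{
    private long _retryAt;     // when to ask again after a missed accept

    // One requestor polling one bank; on no-accept, retry in about 400nS.
    public bool Poll(BankSketch bank, long now)
    {
        if (now < _retryAt) return false;
        if (bank.Request(now)) return true;   // accepted; requestor moves on
        _retryAt = now + 400;
        return false;
    }
}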

A little more background: this machine was designed before there was such a thing as plated-through holes on printed circuit boards. The two boards in each module were double sided. When they needed to get a signal to the other side of a PCB, they would put a tiny brass rivet in the via hole and solder both sides.

What I eventually found was that the signal from P23 of the module in 3L34 wasn’t making it to pin 15! There was a via rivet that wasn’t making its connection to the other side of the board. I re-soldered all the vias on that module, and now we were only requesting memory when someone wanted it!

Now that we can request memory and have a reasonable chance of it responding correctly, it is on to testing memory. I loaded up my CP-based test, and it ran… for a while. Then it quit, with a very strange error. The test uses a single bit, and its complement, to check every location of memory. It will read a location, compare it with what should be there, and put the difference in a second register. Normally I would expect a single bit error, or maybe 12 bits if a module failed that way. The result looked like 59 bad bits, with the error being exactly the same as what it read. Usually this is because the CPU that is running the test is mis-executing the compare instruction.
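The idea behind the test, in sketch form: the “difference” register is just the XOR of what was read with what should be there, so one set bit means one bad core, twelve might mean a dead module, and everything set suggests the comparison itself has gone sideways.  (A sketch only; the real test is CP machine code, and against a perfect array like this one the difference is always zero.)

public static class MemoryTestSketch
{
    public static ulong CheckLocation(ulong[] memory, int addr, int bit)
    {
        const ulong mask = (1UL << 60) - 1;     // 60-bit CDC word
        ulong pattern = (1UL << bit) & mask;    // the single test bit

        memory[addr] = pattern;                 // write, read back, compare
        ulong diff = (memory[addr] ^ pattern) & mask;

        ulong comp = ~pattern & mask;           // now the complement pattern
        memory[addr] = comp;
        diff |= (memory[addr] ^ comp) & mask;

        return diff;   // nonzero bits are the failing bits at this address
    }
}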

While I was thinking about that, I ran the Exchange Jump Test to see what that said. A PP can cause a CP to swap all its registers, including the Program Counter, with the contents of some memory that the PP points to. This is called an Exchange Jump. The whole process happens in about 2.6uS as it requests 16 banks of memory in sequence. This works the memory pretty hard. Exchange Jump Test (EJT) would fail after a while, and as I looked at the results, I noticed that it was usually failing on a certain bit in bank 7. I checked, and it was an original memory module. I looked at my bench and found I didn’t have any new ones assembled, so I had to put the sides on a couple of finished PCB assemblies and test them. I then swapped out the old memory in bank 7 with a new semiconductor memory, and EJT passed!
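In sketch form, an Exchange Jump is just a word-for-word swap between the CP’s state and a 16-word exchange package in central memory, with the real machine spreading those 16 references across the banks, which is what makes it such a good memory workout:

public static class ExchangeJumpSketch
{
    // Swap the CP's state (registers, including the program counter) with
    // the 16-word exchange package that the PP pointed us at.
    public static void Exchange(ulong[] cpState, ulong[] memory, int packageAddr)
    {
        for (int i = 0; i < 16; i++)
        {
            ulong t = cpState[i];
            cpState[i] = memory[packageAddr + i];
            memory[packageAddr + i] = t;
        }
    }
}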

I then checked to see if my CP-based memory test worked, and it did too. We are back in business after over 5 months. I am keeping my fingers crossed in the hope that the chiller stays alive for a while.

Bruce Sherry

Detecting Nothing

The IBM 360/30 gets stuck in a microcode loop.  The documentation indicates that a branch should be taken if the Z-bus is zero; the Z-bus is zero, so the branch should be taken.  The branch is not being taken.

A previous annoyance was that the microcode would stop at address 0xB46.  As the documentation indicates for that location, it is checking that a register is zero.  Hmm… checking for zero?  That is the same problem as the loop that isn’t stopping.  So I dug deeper into this stop.  There was a stuck bit!  And here is what I found:

The circuit is an AND-OR-Invert gate, and the output of the AND was high.  The above circuit is the AND gate.  If any of the inputs on the bottom go low, the output should go low.  The output was not going low.  However, there is nothing in this circuit to force it low; rather, it allows the output to go low.  So, the problem was the input to the OR gate:

Aha!  That transistor with an X on it is not good.  Fortunately, we have spares of this SLT module, and replacing it fixed the problem with the first microcode stop.  However, the microcode loop with the non-taken branch is still not working as documented.  Dig deeper…

New peripherals for old Computers

Five years ago, when we were getting done with restoring our PDP10-KI, we were running out of working disk drives to run it from. We were down to one set of replacement heads and two working drives, and we didn’t have a source for new ones. We found some folks who said they could rebuild the packs, but it turned out that they couldn’t re-write the servo surface, so if we lost that, we were in trouble.

Alright, what else can be done? The Digital RP06, the drive of choice for the KI, has lots of registers available from the MASBUS. The MASBUS is kind of like a UNIBUS, with a synchronous data channel for moving the actual data. We had been having difficulty keeping track of everything on an existing project, so I looked into doing things a different way.

My idea was to use an FPGA (Field Programmable Gate Array) to emulate the behavior of the control unit inside the RP06. This gives me the ease of writing software, but for hardware: no wires to change, no cuts, when I mess up the logic. The PC would be responsible for handling the actual data for the disk, or possibly tape.

I spent a while poking around the Internet looking for an FPGA card that would plug into a PC. There were a lot of expensive, and some less expensive evaluation boards out there. I eventually happened across http://mesanet.com/. I had heard of these folks before from my experiences with LinuxCNC, which runs my milling machine at home. These folks have been doing this for a long time, and since they have to deal with industrial environments and big motors, their products are very robust.

For the MASBUS Disk Emulator (MDE), I chose the Mesa 5i22 card, which has 96 I/O’s I can play with and a Spartan3 Xilinx FPGA. The 5i22 doesn’t remember what the Xilinx configuration is, so the PC pours in the correct bits each time.

Bob Armstrong, down in Silicon Valley, wrote all the software for the PC, and we eventually emulated RP06’s, RP07’s (which hold twice the data), and TU77 tape drives. Here is a picture of our main collection of MDE’s.

There are 3 Industrial PCs, and 8 MASBUS Cable Driver/Receiver boxes. These are running both PDP10-KL’s, and the PDP10-KS’s. There is a real TU78 tape drive, to the left of the MDE rack. The PDP10-KI, and the PDP11/70 each have their own located elsewhere in the museum.

I also used the 5i22 for all of the emulations that we needed for the CDC6500:

 

Here we have one Industrial PC, and 6 6000 series channel attach Driver/Receivers, along with one 3000 series channel Driver/Receiver. We emulate the dead start panel, the DD60 Display, Tape drives, disk drives, printers, card readers, card punches, the serial terminal interface and the 6681 channel converter so we could talk to the real 405 card reader. Bob Armstrong also wrote the PC code for all these emulations.

Jeff Kaylin has also used 5i22 cards on the Sigma 9, with Bob doing the software, and Craig Arno and Glen Hermannsfeldt used one to emulate the card reader and punch on the IBM 360/20.

All is not sweetness and light, however: after making the 5i22 for over 10 years, the parts got hard to obtain, so Mesa Electronics stopped production of that board. We ordered all their remaining stock of 5i22s. They make lots of other similar boards, though, so we also ordered some 7i61’s and 7i80’s to play with.

The 7i61 uses USB to talk to a PC, and has 96 I/O’s to play with. The 7i80 uses Ethernet to talk to the PC, but only has 72 I/O’s. To conserve 5i22’s, I converted my CDC 6681 6000-to-3000 channel converter to the 7i61, because it needs all 4 cables for 96 I/O’s. I used my code, along with open source code from Peter Wallace of Mesa Electronics, to load everything into the serial flash chip on the 7i61, so the PC is no longer involved in the 6681 emulation. After turning on the power to the Mesa card, it knows how to be a 6681 automagically! I no longer have to remember to type the proper incantation into the PC to get it loaded up; this is a GOOD thing!

After getting the CDC 6500 working, I had several broken modules that I wanted to fix, so I built a Module tester, using the 7i80 card:

You can see the 7i80-HD card under the cables down from the test card.

It took me a while to collect the appropriate bits of Peter Wallace’s code so that I could have the Ethernet interface live alongside my test code, all at the same time, but perseverance pays off, and it all works now. I have fixed 4 out of 5 broken UA modules, and I know why I am not going to fix the other UA, or the ED module that I replaced in CP1.

I have to confess: the module plugged into the tester is not really from the 6500, but it is the same form factor and technology.

I love these Mesa Electronics “Anything I/O” cards! As their name says, I can teach them to be most Anything!

Bruce Sherry 20180418.

That Pesky PS Module!

When we last left our hero, he had re-soldered all the via rivets on one of the 510 “PS” core memory sense amplifier modules in the CDC6500, and the machine was working.

That lasted about a day, and the memory went away again. What was wrong this time? You guessed it, bit 56 in bank 36 was bad again. Third time is the charm: I am going to replace this module! I head off looking for a spare PS module. Where did we put all those spare parts we got with the machine? Oh wait, we didn’t get any spares with the machine. Bummer!

This is where I get to practice my “MAD Skillz”, and make some spare PS modules. What does a PS module look like? What does CDC give me?

On the left side we have an actual schematic of one of the 4 amplifiers on the module, YAY! Having been around this block before, I take apart the offending module and check to see if it matches. Wiring-wise, yes, it matches, but the values have been changed to protect the innocent.

After a while playing with the newest version of Eagle, I ended up with this:

The easy part is done; now the fun starts! The circuitry for the module is split between two printed circuit boards: one that connects to the odd pins on the connector and has the odd-numbered transistors, and one for the even ones. Eagle really doesn’t understand this, so I have to fool it. First I put test points on each side of all components that go between the boards. I then have to add in the wire jumpers that also go between the boards, and I end up with another schematic:

I take this schematic and duplicate it into odd and even sides, then I write an Eagle script to delete all the between-board components, and all the test points that belong on the other board. Here is what the schematic for the odd board looks like; pretty terrible:

Now I have two schematics, PS_O and PS_E, and I do the PC layout thing. I have the original boards to use as an example, which I follow very closely so that timing and signal integrity will be as close to the original module as I can get them. Here are pictures of the odd and even layouts:

But wait, I’m not done yet! Remember Eagle doesn’t understand the whole module. I now have to verify that the two boards, together with all the components that go between, match that original schematic I started with.

I go over every line on the board pairs and the schematic, highlighting them as I go, until EVERYTHING is highlighted.

Done yet? Grumble, grumble: no! I have forgotten to identify which end of the diodes and polarized capacitors has the band on it! If I want them assembled properly, I guess I should do that before they go out to FAB!

Back to the PC mines…

Bruce Sherry 20180201 10:57AM

Chasing the Pesky Ratio!

It seems like I did something really silly! I had to come up with some goals for 2018. I hate this time of year; I think everybody does. OK, what can I put down that is measurable and achievable? How about keeping the CDC6500 running more than 50% of the time? That might work. Oops, did I hit the send button?

“Hey, Daiyu: How do I tell what users have been on the machine?” Daiyu Hurst is my systems programmer, who lives Back East somewhere. If it is on the other side of Montana from Seattle, it is just “Back East” to me. She lives in one of those “I” states, Indiana or Illinois, not Idaho; I know where that is. After a short pause, she found the appropriate incantations for me to utter, and we had a list of who was on the machine and when they logged in. I had to use Perl to filter out those lines, but that was pretty easy.

What is all this other gobble-de-gook in this file:

 03.14.06.AAAI005T. UCCO, 4.096KCHS. 
 03.16.09.AAAI005T. UCCO, 4.096KCHS. 
 03.18.11.AAAI005T. UCCO, 4.096KCHS. 
 03.20.12.AAAI005T. UCCO, 4.096KCHSUCCO, 4.096KCHS. 
 05.00.30.SYSTEM S. ARSY, 1, 98/01/24.
 05.00.30.SYSTEM S. ADPM, 11, NSD.
 05.00.30.SYSTEM S. ADDR, 01, LCM, 40.
 05.00.44.SYSTEM S. SDCI, 46.603SECS.:
 05.00.44.SYSTEM S. SDCA, 0.032SECS.:
 07.32.30.SYSTEM S. ARSY, 1, 98/01/24.
 07.32.30.SYSTEM S. ADPM, 11, NSD.
 07.32.30.SYSTEM S. ADDR, 01, LCM, 40.
 07.33.07.AAAI005T. ABUN, BRUCE, LCM.:
 07.33.37.SYSTEM S. SDCI, 116.108SECS.:
 07.33.37.SYSTEM S. SDCA, 0.078SECS.:
 07.33.37.SYSTEM S. SDCM, 0.005KUNS.:
 07.33.37.SYSTEM S. SDMR, 0.004KUNS.:

The line with “ARSY” in it is when I booted the machine, at 5:00 this morning, from home. It crashed before I got in, and I booted it again at 7:32. Then we get to 7:33:07 and the “ABUN” line, where I log in from telnet.

From the first few lines, we can see that the machine appeared to still be running and putting things in its accounting log at 3:20, but it crashed before it could print a message at about 3:22.

OK, from this I can mutter a few incantations at Perl and come up with something like:

1054 Booted on 98/01/23 @ 07.39.30
 Previous uptime: 0 days 5 hours 59 minutes
 Down time: 0 days 17 hours 28 minutes
 1065 Booted on 98/01/23 @ 13.38.30
 Previous uptime: 0 days 1 hours 23 minutes
 Down time: 0 days 4 hours 35 minutes
 1068 Booted on 98/01/23 @ 14.12.30
 Previous uptime: 0 days 0 hours 0 minutes
 Down time: 0 days 0 hours 33 minutes
 1392 New Date:98/01/24
 1498 Booted on 98/01/24 @ 05.00.30
 Previous uptime: 0 days 13 hours 7 minutes
 Down time: 0 days 1 hours 40 minutes
 1503 Booted on 98/01/24 @ 07.32.30
 Previous uptime: 0 days 0 hours 0 minutes
 Down time: 0 days 2 hours 31 minutes

Last uptime: 0 days 0 hours 1 minutes

Total uptime: 2 days, 1 hours 37 minutes in: 0 months 7 days 0 hours 14 minutes
Booted 15 times, upratio = 0.29

Here is where the hunt for the Pesky Ratio comes in: See that last line? In the last week, the CDC has been running 29% of the time. That isn’t even close to 50%. I KNOW the 6000 series were not the most reliable machines of their time, but really: 29%?
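The arithmetic behind that last line is nothing fancy: total uptime divided by total elapsed time. A quick sketch using the totals from the report above (my real script is Perl, not C#):

using System;

public static class UpRatio
{
    // upratio = time the machine was up / wall-clock time in the log window
    public static double Compute(TimeSpan up, TimeSpan elapsed)
        => up.TotalMinutes / elapsed.TotalMinutes;

    public static void Main()
    {
        var up      = new TimeSpan(2, 1, 37, 0);    // 2 days 1 hour 37 minutes
        var elapsed = new TimeSpan(7, 0, 14, 0);    // 7 days 0 hours 14 minutes
        Console.WriteLine($"upratio = {Compute(up, elapsed):F2}");   // 0.29
    }
}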

What has been going on? A week ago, I was having trouble keeping the machine going for more than a couple of minutes. Finally, it occurred to me I might see how the memory was doing, and it wasn’t doing well. It took me a while to find why bit 56 in bank 36 was bad. I had to explore the completely wrong end of the word for a while, before I realized that end worked, and I should have been looking at the other end. I chased it down to Sense Amplifier (PS) module 12M40. When I put it on the extender, the signal would come and go as I probed different places. I noticed that I had re-soldered a couple of via rivets before, so I re-soldered ALL the via rivets on the module.

What do I mean by “via rivets”? In those days, one of two things was true: either plated-through holes in printed circuit boards didn’t exist yet, or they were too expensive. None of the CDC 6500 modules I have looked at have plated-through holes. Most of the modules do have traces on both sides of the two PCBs that the module is made with. How did they get a signal from one side to the other? They put in a tiny brass rivet! Near as I can tell, all the soldering was done from the outside of the module, and most of the time the solder would flow to the top of the rivet somehow. Since I have found many of these rivets not conducting, I have to assume that the process wasn’t perfect.

After soldering all the rivets on this module, I put it back in the machine, and we were off and running. Monday, I booted the machine at 8:11, and it ran till 2:11. When I got in yesterday, the machine wouldn’t boot. Testing memory again found bit 56 in bank 36 bad again! I put module 12M40 on the extender, and the signal wasn’t there. I poked a spot with the scope, and it was there. I poked, prodded, squeezed, twisted and tweaked, and I couldn’t get it to fail.

This is three times for This Module! I like to keep the old modules if I can, but my Pesky Ratio is suffering here! I took the machine back down, and brought it back up with only 64K of memory, and pulled out the offending module:

There are 510 of these PS modules in the machine, three for each of the 170 storage modules, or about 10% of all the modules in the machine. Having a spare would be nice. My next task will be to make about 10 new PS modules.

In the time I have been writing this post, the display on the CDC has gone wonky again. This appears to happen when the Peripheral Processors (PP’s) forget how to skip on zero for a while. Once this happens, I can’t talk coherently to channel 10 or PP11. I have a few little tests that copy themselves to all the PP’s, and they will all work, except the last one: PP11.

I have yet to write a diagnostic that can catch the PP’s making the mistake that I can see on the logic analyzer once a day or so. Right now the solution seems to be to wait a while, and the problem will go away again. This is another reason why the Pesky Ratio is so difficult to hunt, but I fix what I can, when I can.

Onward: One bug at a time!