Letting the cat out of the bag!

Back in September 2018, we got a new addition to the LCM+L computer collection, but we didn’t talk about it. This was something we had been looking for, for 10 years or more. We knew where a couple of these machines were, but were not able to convince the owner of that collection to part with one. We kept looking, but the ones this collector had were the only ones we could find. Well, that isn’t quite true: Stephen Jones and I went down to the San Francisco Bay area to visit the Computer History Museum and photograph one they had in their warehouse, but they weren’t willing to part with it. We thought maybe we could build one.

Stephen went to Australia on a hunt, and came back with some bits of one of these, but not a whole machine.

Stephen finally got the owner of the big collection in Sweden on the phone, and got him, Peter Lothberg, to talk to us again about his collection in late spring last year. A bunch more negotiations happened. Paul Allen was brought in, and a contract was signed.

A couple of our archivists, Cynde Moya and Amelia Roberts, along with Stephen Jones and Jeff Kaylin from Engineering, went to Stockholm in August to go over the whole collection, pack up everything Peter was willing to sell us, put it in containers, and get them on a ship.

In September, the containers came in, and around 8 one morning, the first container arrived at our loading dock. Stephen sent Paul Allen a picture of the open container, and he was here by 8:30. He was very excited, because he had been waiting for one of these for over 10 years. He asked Stephen “Can you have it working in 3 weeks?” Stephen said something like “How about by the end of the year?”

Meet KATIA:

KATIA: a PDP10-KA, Serial Number 175.

The first thing we had to do to KATIA to get it running was to upgrade the power supplies. For a lot of our older mainframes, we have decided to leave the old supplies in place, and just move the wiring over to new efficient and reliable switching power supplies. David Cameron and I spent around a week figuring out how to do this, this time around, and I think it looks pretty good!

KATIA Bay 1 power supplies (sideways).

The silver bits are the new supplies and their mounting plates, mounted either in front of or next to the original power supply filter capacitors.

We also had to upgrade the power supplies in the memory, where we have designed new modules to replace the old modules, but the parts were scarce, so we just replaced all the old aluminum electrolytic capacitors in the existing modules.

We re-configured the main blowers to run on lazy American electrons (120V), as opposed to those very energetic (240V) European electrons that they were set up for.

We powered up the machine, and I started into the debug process. I started by adjusting all those new power supplies so that all the correct voltages appeared at the correct places, and they all shared the load as best I could.

Now, what can it do? One of the first things we will need is to be able to read diagnostics on the paper tape reader. With the KI, I had discovered this process requires the adder part of the CPU to work, so I fired up the Way-Back machine to recover how I had tested the KI back then. It only took two instructions: an ADDI (add immediate) and a JRST (jump). But wait, they have to be stored somewhere! I had to fix some memory.
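
For a sense of what that two-instruction test does, here is a rough sketch in Python. The ADDI/JRST mnemonics come straight from the test described above; the accumulator number, the bit numbering, and the fault model are only illustrative.

```python
# The real test is two PDP-10 instructions sitting in memory:
#     LOOP:  ADDI 1,1      ; add the immediate value 1 to accumulator 1
#            JRST LOOP     ; jump back and do it again
# Watching the accumulator lights, every bit should eventually toggle as
# carries ripple upward.  A stuck carry, like the one found here that could
# not propagate past a certain bit, shows up as lights that never change.

WORD_MASK = (1 << 36) - 1          # PDP-10 words are 36 bits

def adder_test(steps, carry_fault_bit=None):
    """Simulate the ADDI/JRST loop; optionally model a carry that dies."""
    ac = 0
    lights_ever_lit = 0
    for _ in range(steps):
        ac = (ac + 1) & WORD_MASK              # ADDI 1,1
        if carry_fault_bit is not None:
            ac &= (1 << carry_fault_bit) - 1   # carries never reach higher bits
        lights_ever_lit |= ac                  # record which lights ever came on
    return lights_ever_lit
```

With a healthy adder and enough iterations every light eventually comes on; with the fault modeled, nothing above the fault position ever does, which is the kind of symptom seen on KATIA's front panel.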

In PDP10s there are two kinds of memory: Accumulators (the first 16 locations of memory), and main memory. The Accumulators in most PDP10s live in special logic inside the CPU, called “Fast Accumulators”. I had to steal a board from one of the other 2 KAs that came with KATIA in order to get the Fast ACs to work.

I then started poking at the MG10 128K word memory box we had hooked up to KATIA. I got 32K of it working, and that was enough to run the paper tape diagnostics. I also had to replace the light bulb in the card reader.

Now to check the adder: I loaded the two instructions with the front panel, and no, the adder wasn’t working. It couldn’t propagate a carry past bit 20. After a bunch of poking around I found that there was some kind of a connection problem between cards 2A29 and 2A30, which are in chassis 2 (the middle one), on the top row, about 3/4 of the way from the left. Swapping them and swapping them back seems to fix it.

After about another 4 weeks, with plenty of tearing of hair and gnashing of teeth, lots of board swapping and transistor changing, I finally got KATIA to run the KA versions of diagnostics that we run from paper tape on the KI! From that point the tapes get too big for the paper tape reader to handle easily, so on the KI we run those from the DECtape drives.

It took another couple of weeks to get the DECtape drives and controllers working on both the KA and KI. We needed the KI to work so that we could write the proper diagnostics onto a DECtape for KATIA to read. The problem with that is that the KI hasn’t been able to boot in a few months. It passes all the paper tape diagnostics, but there is something fishy with the disk interfaces. Bother!

I fixed the rest of the MG10 so we had 128K words of memory working.

While I am working on the KI on a Thursday afternoon, it gets decided we should get KATIA down from the 3rd floor and on display in the 2nd floor computer room before opening on Friday! This involves detaching the DECtape cabinet from one end, the memory from the other end, and disconnecting a whole bunch of cables that go between chassis 3 and the other two chassis. We also have to make space by moving the IBM 360/30 out of where we want to put KATIA.

We start by disconnecting all the required stuff Thursday afternoon, getting the 360/30 put back together enough to move, and then we move everything.

By Friday all KATIA’s boxes have power again, and I carefully put all the cables back where they came from the day before. Half of the Museum shows up for powering up KATIA in its (her?) new home. I turn it on, and poke the Read-In button to load up my simple memory diagnostic, and… nothing happens!

Has the adder stopped working again? Well, yes. I wiggle the cards, and voila, KATIA can add again. Yay! I poke the Read-In button again and… not nothing! The tape goes through the reader, but the lights keep incrementing, even after the end of the tape has fallen out of the reader. THAT is not Correct!

Finally on Friday morning two WEEKS later, I have grovelled over the gospel according to DEC, and abased myself before the computer gods to almost understand how Read-In is supposed to work. Hmm, is that a string to pull on?

Read-In works by loading a BLKI instruction into the instruction register of the machine and executing it. PDP10s are NOT Reduced Instruction Set Computers! The BLKI instruction reads a memory location containing a pointer, adds 1 to each half of the data there, and puts it back in memory. It then does the I/O instruction implied by the “I”, using the right half of that memory location to point at where in memory to put the data it read from the paper tape reader. Then it looks at the left half of the pointer: if it is zero, it skips the next instruction; if it isn’t, it executes the next instruction.
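
Here is a small Python sketch of that pointer-word bookkeeping, following the description above. Half words are 18 bits; this is a sketch of the behavior as described here, not a reference PDP-10 implementation, and the names are invented.

```python
HALF = 0o777777        # an 18-bit half word (PDP-10 words are 36 bits)

def blki_step(memory, ptr_addr, read_tape_word):
    """One Read-In style BLKI iteration, per the description above.

    Returns True when the next instruction should be skipped
    (left half of the pointer reached zero), False otherwise.
    """
    ptr = memory[ptr_addr]
    left = ((ptr >> 18) + 1) & HALF            # add 1 to each half of the pointer...
    right = (ptr + 1) & HALF
    memory[ptr_addr] = (left << 18) | right    # ...and put it back in memory

    # The "I" part: store the word from the paper tape reader at the
    # address now in the right half of the pointer.
    memory[right] = read_tape_word()

    # Left half zero means the block is done: skip the next instruction.
    return left == 0
```

If the pointer word is never actually read and updated from memory, the count never runs out and the lights just keep incrementing, which matches the symptom above.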

As I was watching it try to do this with the oscilloscope, it didn’t appear that the part where it was reading the pointer location was taking as long as it should have, considering that pointer was located in core memory at the time, and reading and writing it should have taken a whole micro-second!

The I/O instruction was decoded over in chassis 3, but the instruction timing and memory control were over in chassis 1. Eventually I noticed that the part of the instruction it was skipping was the memory access, which was supposed to be started by a signal called “IOT BLK”, which wasn’t making it to chassis 1. I could see it in chassis 3! The chassis 3 end of the cable had been loosened to move the machine, so let’s take a look!

Cable 3E06. Note broken resistor near the bottom middle of the card.

I replaced the broken resistor, but it still didn’t work! OK let’s look farther up the cable, did anything else happen in the move?

3E06 to 1L44 cable.

That doesn’t look right! I suspect that somehow, in the move, this cable escaped momentarily enough to get caught under a caster and rubbed away, because it used to work, and the signal in question is one of the top two signals in the cable. The consequences of doing things in a hurry.

Do I pull this cable from one of the other machines we have or do I try to repair it?

Cable repaired.

As soon as this was done, KATIA went back to behaving for me. I ran all the diagnostics we had run, and a few more that we ran from DECtape on the KI, till I got to the one that also failed on the KI. We found the locations in this one that we had to patch for the KI, and it now runs too!

Did I get it done in 3 weeks? Unfortunately no, so Paul Allen didn’t get to see it run, and play with it. He must have known back when he asked about 3 weeks, because it was very close to three weeks from then that he left us.

KATIA: For you Paul! Sorry I didn’t get it working in time.

Bruce Sherry

Bendix G-15 – Solder Degradation

In the process of restoring the Bendix G-15, we have discovered a phenomenon that degrades the electrical connections which provide bias and signal flow, rendering the computer non-functional.

Failed Connections

Below is a group of photos which illuminate this failure mode, called “electromigration”. This process is caused by a continuous DC potential applied to a metal junction. Metal ions migrate in the direction of current flow. For most new machines, this is not a problem, as this process takes quite a number of years to progress to the point where the electrical connection is broken. At LCM+L, we get machines after they have run for a long time. Worse yet, since it is our intent to restore and run the machines for as long as we can, it is necessary to find a solution that requires this maintenance only every decade or so.

A failed connection ( as verified with an ohmmeter ). Please note the circular crack running around and just above the base of the circular conductor.
A similar failed connection.
This one hadn’t quite failed. You can see just a small connection at around 260 degrees. This connection will fail in a fairly short period of time.

Failing Solder

This same phenomenon plays out in the metal structure of the solder itself. The photos below show the before and after of solder restoration. In the first photo, the solder looks dull and mottled. This is due to the tin having migrated out, leaving only lead in the Tin/Lead solder formulations used until the early 2000s. The modern formulations are Tin/Silver/Copper and are much less likely to have metal ion migration.

The large resistors at the top show the effects of tin migration.
After removing the old solder and replacing it with a modern formulation, you can see the solder is smooth and bright, indicating good integrity.

Long, Repetitive Work

This restoration process took quite a while. After determining that all the tube modules in the machine were affected in this way, we simply set about removing a module and then removing and replacing the solder in all the high, continuous current sections.

An interesting article on solder, covering some of the topics mentioned here, can be found at: https://en.wikipedia.org/wiki/Solder

Xerox ALTO – Interesting Issue

In the process of restoring the Xerox ALTO, an interesting issue came up.

Background

We received our first ALTO in running condition and, after evaluation and testing, put it on the exhibit floor, available to the public. One afternoon about a year later, the machine suddenly froze and stopped functioning. It was taken off the floor and evaluated in one of our labs. When it became clear that power supply current was not flowing into random parts of the backplane, the focus shifted to the power supply rails. It was there that we were confronted with this phenomenon.

The ALTO Was A Prototype

Certain production and test details were left out of the ALTO. The amount of current running through individual pins supplying regulated DC to the logic is unusually high. Most of the time, power supply current is fed through as many pins as possible to reduce the total current running through any one pin. Because the ALTO was a prototype, the designers only used the minimum number of pins to do the job. This resulted in a phenomenon called “electro-migration”. It is the reverse of the process used for electroplating. In this instance, tin ions migrate away from the solder joints carrying the power supply current. The six pins in the center of the first photo show a mottled (instead of smooth) surface, and one of the pins (lower right) has a dark ring around it, indicating where the solder has totally migrated away from the connection. The second photo shows six pins where the tin has migrated away.

Example of electro-migration on Xerox ALTO power supply bus.
Another example. Here all six connections in the middle of this photo are compromised.

Confronted with preventing this in the future, LCM Engineering increased the surface area of the connections by soldering brass buss rails to all of the ALTO backplane power supply pins. This is shown in the photo below:

Brass buss rails soldered to ALTO backplane to increase power supply current capacity. The buss rails are the vertical elements running through the backplane.

Once this fix was applied, the ALTO was put back in service, and this phenomenon has not repeated itself.

The ALTO will be monitored to see if this phenomenon shows up again. This chapter has also been instructive for some of the other machines we are restoring. In those instances, it is extreme age, rather than prototype shortcuts, that is the causative factor.

Introducing Darkstar: A Xerox Star Emulator

Star History and Development

The Xerox 8010 Information System (“Star”)

In 1981, Xerox released the Xerox 8010 Information System (codenamed “Dandelion” during development), commonly referred to as the Star. The Star took what Xerox learned from the research and experimentation done with the Alto at Xerox PARC and attempted to build a commercial product from it.  It was envisioned as the centerpiece of the office of the future, combining high-resolution graphics with the now-familiar mouse, Ethernet networking for sharing and collaborating, and Xerox’s laser printer technology for faithful “WYSIWYG” document reproduction.  The Star’s operating system (called “Star” at the outset, though later renamed “Viewpoint”) introduced the Desktop Metaphor to the world.  In combination with the Star’s unique keyboard it provided a flexible, intuitive environment for creating and collaborating on documents and mail in a networked office environment.

The Star’s Keyboard

Xerox later sold the Star hardware as the “Xerox 1108 Scientific Information Processor” – In this form it competed with Lisp workstations from Symbolics, LMI, and Texas Instruments in the burgeoning AI workstation market and while it wasn’t quite as powerful as any of their offerings it was considerably more affordable – and sometimes much smaller.  (The Symbolics 3600 workstation, c. 1983 was the size of a refrigerator and cost over $100,000).

The Star never sold well – it was expensive ($16,500 for a single workstation and most offices would need far more than just one) and despite being flexible and powerful, it was also quite slow. Unlike the IBM PC, which also made its debut in 1981 and would eventually sell millions, Xerox ended up selling somewhere in the neighborhood of 25,000 systems, making the task of finding a working Star a challenge these days.

Given its history and relationship to the Alto, the Star seemed appropriate for my next emulation project. (You can find the Alto emulator, ContrAlto, here). As with the Alto a substantial amount of detailed hardware documentation had been preserved and archived, making it possible to learn about the machine’s inner workings… except in a few rather important places:


From the March 1982 edition of the Dandelion Hardware Manual.  Still waiting for these sections to be written…

Fortunately, Al Kossow at Bitsavers was able to provide extra documentation that filled in most of the holes.  Cross-referencing all of this with the available schematics, it looked like there was enough information to make the project possible.

The Dandelion Hardware

The Star’s Central Processor (CP). Note the ALU (4xAM2901, top middle) and 4KW microcode store (bottom)

Much like the Alto, the Dandelion’s Central Processor (referred to as the “CP”) is microcoded, and, again like the Alto, this microcode is responsible for controlling various peripherals, including the display, Ethernet, and hard drive.  The CP is also responsible for executing bytecode macroinstructions.  These macroinstructions are what the Star’s user programs and operating systems are actually compiled to.  The CP is sometimes referred to as the “Mesa” processor because it was designed to efficiently execute Mesa bytecodes, but it was in no way limited to implementing just the Mesa instruction set: The Interlisp-D and Smalltalk systems defined their own microcode for executing their own bytecodes, custom-tailored and optimized to their environments.

Mesa was a strongly-typed “high-level language.” (Xerox hackers loved their puns…) It originated on the Alto but quickly grew too large for it (a smaller, stripped-down Mesa called “Butte” (i.e. “a small Mesa”) existed for the Alto but was still fairly unwieldy.)  The Star’s primary operating system was written in Mesa, which allowed a set of very sophisticated tools to be developed in a relatively short period of time.

The Star architecture offloaded the control of lower-speed devices (the keyboard and mouse, serial ports, and the floppy drive) to an 8-bit Intel 8085-based I/O processor board, referred to as the IOP.  The IOP is responsible for booting the system: it runs basic diagnostics, loads microcode into the Central Processor and starts it running.  Once the CP is running, it takes over and works in tandem with the IOP.

Emulator Development

The Star’s I/O Processor (IOP). Intel 8085 is center-right.

Since the IOP brings the whole system up, it seemed that the IOP was the logical place to begin implementing the emulator.  I started with an emulation of the 8085 processor and hooked up the IOP ROMs and RAMs.  Since the first thing the IOP does at power up or reset is execute a vigorous set of self-tests, the IOP was, in effect, testing my work as I progressed which was extremely helpful.  This is one important lesson Xerox learned from the Alto and applied to the Star: on-board diagnostics are a good thing.  The Alto had no diagnostic facilities built in so if anything failed that prevented the system from running the only way to determine the fault was to get out the oscilloscope and the schematics and start probing.  On the Star, diagnostics and status are reported through a 4-digit LED display, the “Maintenance Panel” (or MP for short).  If the IOP finds a fault during testing, it presents a series of codes on this panel.  During a normal system boot, various codes are displayed to indicate progress.  The MP was the first I/O device I emulated on the IOP, for obvious reasons.
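
To give a feel for why the MP came first, here is a minimal sketch of what an emulated maintenance panel amounts to: a four-digit register the emulated IOP can write as it works through its self-tests. The names and the example code value are invented for illustration; this is not the actual Darkstar source.

```python
class MaintenancePanel:
    """Stand-in for the Star's four-digit MP display."""

    def __init__(self):
        self.code = 0

    def write(self, value):
        """Called from the emulated 8085's I/O dispatch when the IOP posts a code."""
        self.code = value % 10000
        print(f"MP {self.code:04d}")    # log boot/self-test progress

panel = MaintenancePanel()
panel.write(937)    # an arbitrary example value, not a real MP code
```

Every new, higher code the IOP posts is one more self-test stage the emulation has survived, which is what made the MP such a useful progress meter during development.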

Development on the IOP progressed nicely for several weeks (and the codes reported in the emulated MP kept increasing, reflecting my progress in a quantitative way) and during this time I implemented a source-level debugger for the IOP’s 8085 code to help me along.  This was invaluable in working out what the IOP was trying to do and why it was failing to do so.  It allowed me to step through the original code, place breakpoints, and investigate the contents of the IOP’s registers and memory while the emulated system was running.

The IOP Debugger

Once the IOP self-tests were passing, the IOP emulation was running to the point where it attempted to actually boot the Central Processor!  This meant I had to shift gears and switch over to implementing an emulation of the CP and make it talk to the IOP. This is where the real fun began.

For the next couple of months I hunkered down and implemented a rough emulation of the CP, starting with the system’s 16-bit ALU (implemented with four 4-bit AM2901 ALU chips chained together).  The 2901 (see top portion of the following diagram) forms the nexus of the processor; in addition to providing the processor’s 16 registers and basic arithmetic and logical operations, it is the primary data path between the “X bus” and “Y bus.”  The X Bus provides inputs to the ALU from various sources: I/O devices, main memory, a handful of special-purpose register files and the Mesa stack and bytecode buffer.  The ALU’s output connects to the Y bus, providing inputs back into these same components.

The Star Central Processor Data Paths
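
Structurally, that data path boils down to something like the following sketch: pick an X-bus source, run it through the ALU against a register, and fan the Y-bus result out to whatever destinations the microinstruction names. All the names here are invented, and this is heavily simplified from both the hardware and the actual Darkstar code.

```python
def cp_cycle(alu_op, registers, x_sources, y_sinks, mi):
    """One simplified CP microcycle: X bus -> ALU -> Y bus.

    mi is a dict describing the microinstruction: which X-bus source to read,
    which register feeds the other ALU leg, the ALU operation, and which
    Y-bus destinations latch the result.
    """
    x = x_sources[mi["x_source"]]()                          # I/O device, memory, stack, ...
    result = alu_op(mi["op"], registers[mi["reg"]], x) & 0xFFFF
    for dest in mi["y_dests"]:
        y_sinks[dest](result)                                # registers, memory, devices, ...
    return result
```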

One of the major issues I was confronted with nearly immediately when writing the CP emulation was one of fidelity: how faithful to the hardware does this emulation need to be? This issue arose specifically because of two hardware details related to the ALU and its inputs:

  1. The AM2901 ALU has a set of flags that get raised based on the result of an ALU operation (for example, the “Carry” flag gets raised if the result of an operation causes a carry out from the most-significant bits). For arithmetic operations these flags make sense, but the 2901 also sets these flags as the result of logical operations. The meaning of the flags in these cases is opaque and of no real use to programmers (what does it mean for a “carry” flag to be set as a result of a logical OR?); they exist only as a side-effect of the ALU’s internal logic. But they are documented in the spec sheet (see the picture below).
  2. With a 137ns clock cycle time, the CP pushes the underlying hardware to its limits. As a result, some combinations of input sources requested by a microinstruction will not produce valid results because the data simply cannot all make it to its destination on time. Some combinations will produce garbage in all bits, but some will be correct only in the lower nibble or byte of the result, with the upper bits being undefined. (This is due to the ALU in the CP being comprised of four 4-bit ALUs chained together.)
Logic equations for the “NOT R XOR S” ALU operation’s flags. What it means is an exercise left to the reader.

I spent a good deal of time pondering and experimenting. For #1, I decided to implement my ALU emulation with the assumption that Xerox’s microcode would not make use of the condition flags for non-arithmetic operations, as I could see no reason to make use of them for logical ops and implementing the equations for all of them would be computationally expensive, making the emulation slower. This ended up being a valid assumption for all logical ops except for OR — as it turns out, some microcode assumed that the Carry flag would be set appropriately for this class of operation. When this issue was found, I added the appropriate operations to my ALU implementation.

For #2 I assumed that if Xerox’s microcode made use of any “invalid” combinations of input sources, that it wouldn’t depend on the garbage portion of the results. (That is, if code made use of microinstructions that would only produce valid results in the lower 4 or 8 bits, the microcode would also only depend on the lower 4 or 8 bits generated.) Thus the emulated ALU always produces a complete, correct result across all 16-bits regardless of input source. This assumption appears to have held — I have encountered no real-world microcode that makes assumptions about undefined results thus far.
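
To make the nibble-by-nibble behavior concrete, here is a tiny Python sketch of four 4-bit slices chained by their carries, which is why a too-tight microinstruction can leave only the low nibble or byte of a result valid. This is illustrative only; it models addition, not the full 2901 function set.

```python
def add_4bit_slice(a, b, carry_in):
    """One 4-bit ALU slice: returns (4-bit sum, carry out)."""
    total = (a & 0xF) + (b & 0xF) + carry_in
    return total & 0xF, total >> 4

def add_16bit(x, y):
    """Four chained slices, like the four AM2901s in the CP."""
    result, carry = 0, 0
    for slice_no in range(4):                      # least significant slice first
        nibble, carry = add_4bit_slice(x >> (4 * slice_no),
                                       y >> (4 * slice_no), carry)
        result |= nibble << (4 * slice_no)
    return result, carry

print(add_16bit(0x1234, 0x0FCC))   # -> (0x2200, 0)

# The carry has to ripple from slice to slice, so when the data arrives too
# late the low slices may have settled while the upper ones have not, which
# is exactly the "valid only in the lower nibble or byte" case in item 2 above.
```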

The above compromises were made for reasons of implementation simplicity and efficiency. The downside is that it is possible to write microcode that will behave differently on the emulation than on the real hardware. However, going through the time, trouble, and expense of a 100% accurate emulation did not seem worth it when no real microcode would ever require this level of accuracy. Emulation is full of trade-offs like this. It would be great to provide an emulation that is perfect in every respect, but sometimes compromises must be made.

I implemented a debugger and disassembler for the CP similar to the one I put together when emulating the IOP.  Emulation of the various X bus-related registers and devices followed, and slowly but surely the CP started passing boot diagnostics as I fixed bugs and implemented missing hardware.  Finally it reached the point where it moved from the diagnostic stage to executing the first Mesa bytecodes of the operating system – the Star was now executing real code!  At that time it seemed appropriate to implement the Star’s display controller so I could see what the Star was trying to tell me – and a few days and much debugging of the central processor later I was greeted with this display from the install floppy (and there was much rejoicing):

The emulated Star says “Hello” for the very first time

Following this I spent two weeks of late nights hacking — implementing the hard disk controller and fixing bugs.  The Star’s hard drive controller doesn’t use an off-the-shelf controller chip as this wasn’t an option at the time the Star was being developed in the late 1970s. It’s a very clever, minimal design with most of the heavy lifting being done in microcode rather than hardware. Thus the emulation has to work at a very low level, simulating (in a sense) the rotation of the platters and providing data from the disk as it moves under the heads, one word at a time (and at just the right time.)
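
As a rough illustration of what “one word at a time, at just the right time” means, here is a much-simplified Python sketch: the emulated platter keeps turning on its own schedule, and the controller microcode gets whichever word happens to be under the head. The names, geometry, and contents are invented; this is not the actual Darkstar implementation.

```python
class SpinningTrack:
    """Toy model of a rotating track delivering one word per word-time."""

    def __init__(self, words):
        self.words = words        # the track image as a list of words
        self.position = 0         # which word is currently under the head

    def tick(self):
        """Called once per simulated word time, ready or not."""
        word = self.words[self.position]
        self.position = (self.position + 1) % len(self.words)
        return word

track = SpinningTrack([0o1234, 0o5670, 0o0017])   # made-up contents
for _ in range(5):
    print(track.tick())    # the controller microcode must keep up with this stream
```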

During this period I also got to learn how Xerox’s hard disk formatting and diagnostic tools worked.  This involved some reverse engineering:  Xerox didn’t want end-users to be able to do destructive things with their hard disks so these tools were password protected.  If you needed your drive reformatted you called a Xerox service engineer and they came out to take care of it (for a minor service charge).  These days, these service engineers are in short supply for some reason.

Luckily, the passcodes are stored in plaintext on the floppy disk so they were easy to unearth.  For future reference, the password is “wizard” or “elf” (if you’re so inclined):

Having solved The Mystery of the Missing Passwords I was at last able to format a virtual hard disk and install Viewpoint, and after waiting nervously for the installation to finish I was rewarded with:

Viewpoint, at long last!

Everything looked good, until the hard disk immediately corrupted itself and the system crashed!  It was very encouraging to see a real operating system running (or nearly so), and over the following weeks I hammered out the remaining issues and started on a design for a real user interface for the emulator. 

I gave it a name: Darkstar.  It starts with a “D” (thus falling in line with the rest of the “D-Machines” produced by Xerox) contains “Star” in the name, and is also a nerdy reference to a cult-classic sci-fi film.  Perfect. 

Getting Darkstar

Darkstar is available for download on our Github site and is open source under the BSD 2-Clause license.  It runs on Windows and on Unix systems using the Mono runtime.  It is still very much a work in progress.  Feedback, bug reports, and contributions are always welcome.

Fun with the Star

You’ve downloaded and installed Darkstar and have perused the documentation – now what?  Darkstar doesn’t come with any Xerox software, but pre-built hard disk images are available on Bitsavers (and for the more adventurous among you, piles of floppy disk images are available if you want to install something yourself).  Grab http://bitsavers.org/bits/Xerox/8010/8010_hd_images.zip — this contains hard disk images for Viewpoint 2.0, XDE 5.0, and The Harmony release of Interlisp-D. 

You’ll probably want to start with Viewpoint; it’s the crowning achievement of the Star and it invented the desktop metaphor, with icons representing documents and folders. 

To boot Viewpoint successfully you will need to set the emulated Star’s time and date appropriately – Xerox imposed a very strict licensing scheme (referred to as Product Factoring) typically with licenses that expired monthly.  Without a valid license code, Viewpoint grants users a 6-day grace period, after which all programs are deactivated. 

Since this is an emulation, we can control everything about the system so we can tell the emulated Star that it’s always just a few hours after the installation took place, bypassing the grace period expiration and allowing you to play with Viewpoint for as long as you like.  Set the date to Nov. 10, 1990 and start the system running.

Now wait.  The system is running diagnostics.

Keep waiting.  Viewpoint is loading.

Go get a coffee.

Seriously, it takes a while for Viewpoint to start up.  Xerox didn’t intend for users to reboot their Stars very often, apparently.  Once everything is loaded a graphic of a keyboard will start bouncing around the screen:

The Bouncing Keyboard

Press any key or click the mouse to get started and you will be presented with the Viewpoint Logon Option Sheet:

The Logon Option Sheet

You can log in with user name “user” and password “password”.  Hit the “Next” key (mapped to “Home” on your computer’s keyboard) to move between fields, or use the mouse to click in them.  Click on the “Start” button to log in and in a few moments, there you are:

Initial Viewpoint Desktop

The world is your oyster.  Some things work as you expect – click on things to select them, double-click to open them.  Some things work a little differently – you can’t drag icons around with the mouse as you might expect: if you want to move them, use the “Move” key (F6) on your keyboard; if you want to copy them, use the “Copy” key (F4).  These two keys apply to most objects in the system: files, folders, graphical objects, you name it.  The Star made excellent use of the mouse, but it was also very keyboard-centric and employed a keyboard designed to work efficiently with the operating system and tools.  Documentation for the system is available online – check out the PDFs at http://bitsavers.org/pdf/xerox/viewpoint/VP_2.0/, as they’re worth a read to familiarize yourself with the system. 

If you want to write a new document, you can open up the “Blank Document” icon, click the “Edit” button and start writing your magnum opus:

Plagiarism?

One can change text and paragraph properties – font type, size, weight, and all other sorts of groovy things – by selecting text with the mouse (use the left mouse button to select the starting point, and the right button to define the end) and pressing the “Prop’s” key (F8):

Mad Props

If you’re an artist or just want to pretend that you are one, open up the “Blank Canvas” icon on the desktop:

MSPaint.exe’s Great Grandpappy

Need to do some quick calculations?  Check out the Calculator accessory:

I’m the Operator with my Pocket Calculator
Help!

There are of course many more things that can be done with Viewpoint, far too many to cover here.  Check out the extensive documentation as linked previously, and also look at the online training and help available from within Viewpoint itself (check the “Help” tab in the upper-right corner.)

Viewpoint is only one of a handful of systems that can be run on Darkstar. Stay tuned for future installments, covering XDE and Interlisp-D!

Bendix G-15 Vacuum Tubes

Early in the restoration and troubleshooting of the Bendix G-15 it was noted that tube filament failures occur with some regularity. It is not possible to observe working filaments on all the tube modules, as at least half the tubes have what is called a “getter coating” at the top of the tube, obscuring the filaments.

We hosted a subject matter expert to aid with troubleshooting the G-15, and he indicated that tube filament failures were the principal cause of machine downtime, usually about once per week. This invariably entailed up to a day of troubleshooting to find the offending tube(s).

Due to the above information and our own experience, it was decided to engineer a sensor and indicator system which would allow quick identification of the offending tube or tubes.

The configuration decided upon was a Hall effect sensor, coupled to a passive magnetic field concentrator ( wound ferrite core ) placed in the current path of each individual vacuum tube filament, that would light an LED when the tube filament was functional. Up to six sensors ( the largest complement of filaments in a tube module ) are packaged on a substrate which fits on each tube module and are powered by the filament voltage entering each module.

We gave the sensor package the acronym “FICUS”. It breaks down to FI = Filament, CU = Current, and S = Sensor.

Here is what a Hall effect sensor mated to a wound ferrite core looks like:

Below is a photo of a FICUS module in the process of being assembled:

Four element FICUS Module

Here is a completed FICUS module:

Note the mating connector and wires ready to attach to the vacuum tube module.

Here is the FICUS after the wires have been attached to the vacuum tube module:

Here is the completed vacuum tube/FICUS module ready to be plugged into the Bendix G-15:

Here is the view of the Vacuum Tube/FICUS module oriented as it would be in the G-15:

And finally, a couple of Vacuum Tube/FICUS modules in an operating G-15:

CDC Cooling update

In my last blog on the CDC, we were having a slow refrigerant leak in Bay 1, and we were waiting for parts. The new parts came, but they were wrong, then they came again, and they were wrong again. Eventually the right parts were delivered, so we took the CDC down yesterday morning to work on the cooling system.

Cole, from Hermanson, arrived at 7AM, and we proceeded to shuffle some of the chiller plumbing around so that I will always be able to restart the computer after a power failure. It took about an hour, and it seems to work fine!

The old compressor in the basement, showing where the power goes in.

A little later, Jeff Walcker, also from Hermanson, arrived to install the new compressor parts that had arrived. We opened the 3-phase 60Hz breaker for Bay 1, and Jeff started unhooking the power. We had discovered that the compressor was leaking where the power wires go into the case. As Jeff was taking it apart, all the little bits were turning into powder. The compressor is 52 years old. It was decided that we needed a new compressor. Bummer!

Contrary to recent experience, not only was a replacement compressor available, it was in stock, IN SEATTLE! Jeff was back from getting it almost before we were done deciding that was what we should do! Compared to 102 days for the chiller, this is amazing.

He did a bunch of wrestling with the compressors to get the new one in, and we don’t have the motor cooling loop hooked up yet, because the pipe didn’t want to go on the new one quite the same as it came off the old one.

New compressor installed.

You can see the disconnected copper cooling loop wrapped around the compressor in the above picture. Jeff will bring a slightly longer hose when he comes for our quarterly maintenance next month.

The CDC is happy again, and hopefully not leaking!

DEC Computer Power Supply Module Retrofit

In the process of troubleshooting our earliest machines, we had to replace large components called electrolytic capacitors. These are located in all the power supplies for any computer. We successfully replaced these devices and got the machines running. Recently, though, we have started to see these devices fail once more. They have a finite life of a maximum of 14 years. That means that we have to replace these devices every 10 to 14 years. Also, the larger capacitors are no longer manufactured, but can still be special-ordered. As it is our mission to have our computing hardware last for a lot longer than that, we did our research and engineered a replacement for the power supply modules these capacitors are found in. Our goal was to provide several decades of service without having to service these modules. The photos and descriptions below show the process:

Below is what the original power supply circuit board looked like:

When we strip out the circuit board and remove the heat sink, we get this:

We created, using a CAD program and a 3D printer, a plastic component mounting for the new components.

As you can see, the plastic mount fit perfectly into the old power module frame.

After populating the mount with all the components it looks like this.

Now we attach the modified heat sink to the original module frame.

Install the assembled component mount in the frame along with the modified heatsink and the new power module is complete.

One of the features of the module is that it has no solder connections; all of them are compression connections.  Wires are compressed into a square cross section using a stainless steel screw.  This provides very high reliability.

The upshot of all this work ( there were 38 modules in various machines ) is power supplies that are more efficient and have a rated MTBF ( mean time between failures ) of 40 years. These power supply modules draw 2/3 less power and produce 2/3 less heat, reducing the heat load on all the components in a machine. In addition, as a result of these changes, the total power savings per year is 250,000 kilowatt hours. Electricity rates in this area of Seattle are about 8 cents/kilowatt hour. That means a direct cost savings on our electric bill of $20,000 a year.
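
The dollar figure follows directly from the numbers above; a quick check:

```python
annual_savings_kwh = 250_000        # kWh saved per year, from the retrofit work
rate_dollars_per_kwh = 0.08         # roughly 8 cents/kWh in this part of Seattle
print(annual_savings_kwh * rate_dollars_per_kwh)   # 20000.0 dollars per year
```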

Life with a 51 year old CDC 6500

The CDC 6500 has led a rough life over the last 6 months or so: way back on the afternoon of July 2, 2018, I got an email from the CDC’s Power Control PLC telling me that it had to turn off the computer because the cooling water was too hot! A technician came out and found that the chiller was low on refrigerant. He brought it back up to the proper level, and went away. Next morning it was down again.

After much gnashing of teeth and tearing of hair, it was determined that the compressor in the chiller was bad. “We’ll have a new one in 5 weeks!” The new one turned out to be bad too, so another was ordered that was easier to get: only about 3 weeks, instead of the 8 that the official one took. That worked for a few weeks, and the CDC went down again because the water was too hot.

This time it was very puzzling because as long as the technician was here, it worked fine. He spent most of a morning watching it, decided it was OK, and left, but he didn’t make it to the freeway before it went down again. He came back, and watched for the rest of the afternoon, and found that the main condenser fan would overheat, and shut down, causing the backup fan to come on. The load wasn’t very high, so the backup fan had to cycle on and off, while the main fan motor cooled off. This would go on for a while, till both motors were off at the same time, and then the compressor would go over pressure because the condenser fans were off, and the chiller would stop cooling, resulting in the “Water too HOT” computer shutdown.

Another week went by waiting for replacement fan motors from the chiller manufacturer, with no luck. Eventually we gave up and got new fan motors locally, installed them and the chiller has been working since. While the CDC didn’t seem to mind being off for 102 days for the compressor problem, it didn’t like being off for 3 weeks while we fiddled with the fans.

Both when it was off for 102 days, and this time, we found that Bay 1 was low on refrigerant. The first time we just filled it up, but the second time we looked closer and found that there is a small leak where the power wires go into the Bay 1 compressor. The compressor manufacturer, the same guys that made the chiller’s compressor, will gladly sell us a new compressor, but the parts for the 50-year-old R12 compressor are no longer available. We are working on that, but I haven’t heard that we found the parts yet.

Back to more recent times: now that the chiller is chillin’, and the CDC’s cooling system is coolin’, why isn’t the computer computing?

Let’s run some diagnostics and see what happens: I try to run my CDC diagnostic tape, but the machine complains that Central Memory doesn’t seem to be available. No, I didn’t run the real tape drives; I ran the imaginary one that uses a disk file on a PC to pretend to be a tape drive. Anyway, that didn’t work, so I flip the zillion or so Dead Start switches in my emulated Dead Start panel to fire up my Central Processor based memory test, and get no display at all! This is distinctly unusual. Let’s try my PP based Central Memory test: That seems to work till it finishes writing all of memory, then the display goes blank. Is there a pattern here?

I put a scope probe on the memory request line inside the memory controller in Chassis 3, and find that someone is requesting memory every possible cycle. There are four possible requests: the Peripheral Processors as a group get a request line, each Central Processor gets a request line, and the missing Extended Core Memory gets a request. Let’s find out who it is: the PPs aren’t doing it, neither of the CPs is doing it, and the non-existent ECM isn’t doing it. Huh? Nobody wants it, but ALL of it is getting requested!

I am going to step back a little bit, and try to explain why it sometimes takes me a while to fix this beast. This machine was designed before there were any standards about logic diagrams. Every manufacturer had to come up with their own scheme for schematics. Here is one where I found a problem, but we will get to that in a bit.

Now when there are two squares with one above the other, and arrows from each going to the other, those are flip-flops. When you have a square, or a circle, with multiple arrows going into it, that is a gate. Which one is an “or” gate, and which one is an “and” gate? Sorry, you have to figure that out for yourself, because the CDC documentation says either one can be either one. The triangle with a number in it would be a test point on the edge of the module. The two overlapping circles, kind of like an ellipsis, indicate that it is a coax cable receiver, as opposed to a regular twisted pair signal. A “P” followed by a number indicates a pin of the module.

This module receives the PP read and write signals from the PPs in chassis 1, on pins P19 and P24. On the right side of the diagram, you can see where all the pins connect. If we look at pin 24, we can see it connects to W07 wire 904, and pin 19 is connected to W07-903. The W “Jacks” are coax cables; the other letter signals go somewhere inside this chassis.

Really, what we are looking at here is that a circle, or a square, is the collector pull-up resistor of one or more silicon NPN transistors. The arrowheads are the bases of the transistors, and the line coming into each head has a base resistor in it. If there are three arrows coming into a square, like at the bottom, those three 2N2369 transistors all have their collectors tied together, with one pull-up resistor. I could be slow, because it took about 6 months before I felt I was at all fluent in reading these logic diagrams.

Now we have to talk about the Central Memory Architecture a bit. The CDC has 32 banks of 16K words of memory. Each of these banks is separate, and they can be interleaved any way the 4 requestors ask for them. At the moment, I am only running half of them, because there is something wrong with the other half. Each of these banks does a complete cycle in 1uS. The memory controller in chassis 3 can put out an address every 100nS, along with whether it is for a read or a write. This request goes to all banks in parallel. If the bank that it points to is available, he will send back an “accept” pulse, and that requestor is free to ask for something else. If the controller doesn’t get an “accept” he will ask again in about 400nS. There is a bunch of modules involved in this dance, and it is a big circle.
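
Here is a toy Python model of that request/accept dance. The bank-selection rule, timings, and names are only illustrative; as noted above, the real interleave is configurable.

```python
BANKS = 32
BANK_CYCLE_NS = 1000      # each core bank needs a full microsecond per cycle
ISSUE_NS = 100            # the controller can put out a new address every 100 ns
RETRY_NS = 400            # and retries roughly 400 ns after a missing "accept"

def run_controller(addresses):
    """Serve a list of requested addresses through the banked memory."""
    bank_free_at = [0] * BANKS
    now = 0
    for addr in addresses:
        bank = addr % BANKS                       # illustrative interleave choice
        while bank_free_at[bank] > now:           # no "accept" pulse yet
            now += RETRY_NS                       # ask again a little later
        bank_free_at[bank] = now + BANK_CYCLE_NS  # bank accepts and goes busy
        now += ISSUE_NS                           # free to issue the next address
    return now

print(run_controller(range(64)), "ns to satisfy 64 interleaved requests")
```

The broken via found later (on the module in 3L34) effectively left this request machinery firing continuously, whether or not any of the four requestors had actually asked for anything.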

A little more background: This machine was designed before there was such a thing as plated-through holes on printed circuit boards. The two boards in each module were double sided. When they needed to get to the other side of a PCB, they would put a tiny brass rivet in the via hole and solder both sides.

What I eventually found was that the signal from P23 of the module in 3L34 wasn’t making it to pin 15! There was a via rivet that wasn’t making its connection to the other side of the board. I re-soldered all the vias on that module, and now we were only requesting memory when someone wanted it!

Now that we can request memory and have a reasonable chance of it responding correctly, it is on to testing memory. I loaded up my CP-based test, and it ran… for a while. Then it quit, with a very strange error. The test uses a single bit, and its complement, to check the existence of every location of memory. It will read a location, compare it with what should be there, and put the difference in a second register. Normally I would expect a single bit error, or maybe 12 bits if a module failed that way. The result looked like 59 bad bits, or the error being exactly the same as what it read. Usually this is because the CPU that is running the test is mis-executing the compare instruction.
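
The per-location logic of that test is simple; here is a hedged Python sketch of it. The word width is the CDC’s; the names and structure are only illustrative.

```python
WORD_MASK = (1 << 60) - 1        # CDC 6500 words are 60 bits

def check_location(memory, addr, bit, complement=False):
    """Write a single-bit pattern (or its complement), read it back,
    and return the difference, as the CP-based test does."""
    pattern = (1 << bit) & WORD_MASK
    if complement:
        pattern ^= WORD_MASK
    memory[addr] = pattern
    difference = memory[addr] ^ pattern      # lands in a second register
    return difference

# A healthy pass returns 0 everywhere; one stuck bit returns a single-bit
# difference, and a dead module might return 12.  A difference that looks like
# 59 bad bits, or that equals the word just read, points at the CPU
# mis-executing the compare rather than at the memory.
```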

While I was thinking about that, I ran Exchange Jump Test to see what that said. A PP can cause a CP to swap all its registers, including the Program Counter with the contents of some memory that the PP points to. This is called an Exchange Jump. The whole process happens in about 2.6uS as it requests 16 banks of memory in a sequence. This works the memory pretty hard. Exchange Jump Test (EJT) would fail after a while, and as I looked at the results, I noticed that it was usually failing a certain bit in bank 7. I checked, and noticed it was an original memory module, so I looked at my bench and found I didn’t have any new ones assembled, so I had to put the sides on a couple of finished PCB assemblies, and test them. I then swapped out the old memory in bank 7 with a new semiconductor memory, and EJT passed!
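
For reference, the exchange itself is just a swap of the CP’s register package with a block of memory; a tiny sketch follows. The names are invented, and the 16-word package size is taken from the 16-bank sequence described above.

```python
PACKAGE_WORDS = 16     # the exchange package occupies 16 words

def exchange_jump(cp_package, memory, block_addr):
    """Swap the CP's registers (program counter included) with memory."""
    for i in range(PACKAGE_WORDS):
        cp_package[i], memory[block_addr + i] = memory[block_addr + i], cp_package[i]
    return cp_package
```

Because those 16 words come from different banks in quick succession, EJT exercises the interleaved memory hard, which is how it flushed out the weak bit in bank 7.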

I then checked to see if my CP-based memory test worked, and it did too. We are back in business after over 5 months. I am keeping my fingers crossed in the hope that the chiller stays alive for a while.

Bruce Sherry

IBM at LCM+L

As anyone familiar with LCM+L knows, the museum initially grew out of Paul Allen’s personal collection of vintage computers. Many of the larger systems in the collection reflected his own experiences with computers beginning when he was still in high school. Among the systems he used then were System/360 mainframes manufactured by IBM, most of them stodgy batch processing systems with little appeal for a young man who had been exposed to interactive computing on systems from General Electric and Digital Equipment Corporation. There was, however, one member of the family which was different: IBM’s entry into the world of timeshared interactive computing, the System/360 Model 67.

The heart of the difference between the 360/67 and other members of the System/360 family is the operating system, composed of two independent parts. CP-67, the control program, provides timeshared access to all of the system’s features in the form of “virtual machines”; CMS, the Cambridge Monitor System, runs in each user’s own virtual machine and provides the interactive facilities for programming, text editing, and everything else the user might want to accomplish. The combination was known as CP/CMS.

I came to work for Paul Allen in 2003, to improve and expand his collection and eventually to turn it into a museum. The wish list we developed was large, and of course included several models of the System/360, including and especially the 360/67. The quixotically intense search met with minimal success for years because IBM almost never sold their large computers, instead leasing them to customers so as to control the supply: IBM did not want to compete against their own products for market share. This meant that retired systems rarely made their way into the hands of collectors; they were instead sold overseas, leased to new customers, or scrapped. For a while, the best we could do was the lights panel from the console of a 360/91 from which all circuitry (rich in gold) had been removed.

The first major break came with a story on the Australian Broadcasting Corporation’s web site about the impending demise of systems owned by the Australian Computer Museum Society. My colleague Keith Perez contacted the ACMS and learned that they owned a 360/40, which they were not interested in deaccessioning. This conversation continued for a while, then tapered off until 2011, when Keith encountered an acquaintance of Tony Epton, president of the ACMS, while on a business trip to Saint-Nazaire, France. The ensuing renewed discussions resulted in another colleague, Ian King, making a side trip to Perth in February before returning from a trip to Adelaide to have a look at an IBM 7090 system. Ian visited the barn in which the ACMS was storing two 360/40 systems, and recommended that we purchase one of them. The system arrived in Seattle in September 2011.

Once the 360/40 arrived, we brought in a retired IBM Customer Engineer to assess its prospects for restoration. At this point we learned something important about IBM mainframes of the 1960s and 1970s: No two are exactly alike, and without the system-specific Automated Logic Diagrams (ALDs) which document how it was assembled, the chances of restoring one to operating condition are greatly reduced. The former CE also noted the amount of dust caked on the circuitry–the system had been stored in a barn in a desert–which would decrease the likelihood of a successful restoration. He passed on the opportunity to work on the project.

In 2012, we acquired three IBM systems (a 360/20, a 360/44, and a 360/65) from the American Computer Museum in Montana, none in working condition: The internal disk drive in the Model 44 had broken loose from its housing and was held in place by a piece of rope, and the internal console cables of the Model 65 had all been cut. The 360/65 was particularly painful: More than a dozen bundles of 50 to 100 identical wires each were made useless. Neither system could be repaired with our facilities.

Bob Barnett, the museum’s business manager, also located a 360/65 in Virginia which belonged to one of the principals at Sine Nomine Associates, David Boyes. David had contacts within IBM who he believed could be helpful in arranging for LCM+L to obtain licenses for the software we wanted to run, and was eager to help us put up a large System/360.

The 360/20 is a 16-bit minicomputer only marginally related to the main System/360 line. As a stopgap, to be able to say we had a running System/360, the one we acquired from the American Computer Museum was restored to running condition by an enthusiastic pair of contractors, Glen Hermmannsfeldt and Craig Arno, with help from Keith Hayes and Josh Dersch of LCM+L; it was displayed in the Computer Room from 2015 to 2017, initially while the restoration work was done and then as an example of a small batch system. As is often done for vintage systems at LCM+L, virtual peripherals–a card reader and punch–were created for the 360/20.

By 2015, the desire for an IBM system capable of providing a timesharing experience led to the acquisition of a 4341 system from Paul Pierce of Portland, Oregon. By this time, we had established an ongoing dialogue with the team who had successfully restored an IBM 1401 at the Computer History Museum (CHM) in California. One of the members of the team introduced us to Fundamental Software Inc. Faced with the task of restoring 40-year-old tape and disk drives, or creating our own emulations, we decided that we would instead acquire an FSI FLEX-CUB to provide disks, tapes, and terminal services to the 4341.

Jeff Kaylin was given the task of making the 4341 CPU run. Beginning in July 2015, he spent seven months getting the power system into working condition; first power up was on 12 February 2016.

Once the system was working to this extent, we ordered a FLEX-CUB from FSI and began attaching 3278 terminals to the built-in controller for testing. Also at this time, David Boyes informed us that he had arranged licensing for the VM/SP HPO operating system for us.

The FLEX-CUB arrived at LCM+L on 1 June 2016, with a minimal VM/370 installation in place courtesy of our friends at FSI. After some phone consultations with FSI Support, we were able to IPL (Initial Program Load) the system into VM/370. Three weeks of getting additional terminals configured followed, with discussions of the OS configuration between FSI and me, replacements of capacitors and CRTs in terminals, and so on. Progress halted on 20 June, when Jeff arrived on a Monday morning to find the system halted with the words CHECK STOP and an error code on the console.

We obtained an 8in diskette with diagnostics from FSI. Memory tests showed that the memory was working; swapping of boards with spares commenced. The power sequence was a suspect for a long time. Jeff began making schematics for the various boards in order to understand where faults might occur that matched the diagnostic callouts. For two months, Jeff wrestled with the system with no progress.

Our consultant from CHM advised Cynde Moya, our Collections Manager, of the existence of 4341 and 4361 systems housed in a warehouse in Sacramento, California. I spoke with the owner, Daniel de Long, and learned that he had a working 4361 plus spares in the form of another 4361 and two 4331s. I traveled to Sacramento a week later to have a look, seeing the 4361 IPLed and running under DOS/VSE. After some discussion, the 4361 equipment began arriving at LCM+L on 2 November 2016.

In December 2016, Jeff began pulling the power supplies out of the 4361, to check the capacitors. All were within tolerance, but since 2004 our policy has always been to replace all aluminum electrolytic capacitors in any device we restore. The new capacitors were installed and the power supplies replaced in the chassis in the remaining weeks of 2016.

In mid-January 2017, the newly refurbished 4361 replaced the 4341 in the Computer Room. FSI, who have been very helpful throughout the project, advised us on how to cable the FLEX-CUB to the new system. A different power outlet was installed to accommodate the different plug on the 4361.

When the power button was pushed, the built-in floppy drives’ motor spun, but stopped as soon as the button was released. Jeff tried attaching the operator console, with no change in behavior. A phone call to Dan de Long revealed that the system was wired for 230V rather than 208V, necessitating either a change in the room wiring or a reconfiguration of the system’s power supplies; the latter was a simple matter of changing jumpers on four transformers to provide single-phase 208V, after which the system powered up and stayed up.

Power issues continued to plague Jeff. The first supply in the system would come up, with its test point providing 1.5V as expected, and all the proper voltages supplied; the second and third supplies showed no voltages. Going through the ALDs allowed him to trace through all four supplies with no luck in determining the problem.

After a couple of weeks, I suggested that Jeff contact Dan again, who pointed out that the system requires that a printer be attached in order to complete the power sequence. We ordered capacitors for the printer, and had additional outlets installed under the raised floor. The printer was ready to go a month later, after degraded old foam insulation was replaced along with the power supply rebuild.

With the printer installed, the system would now power up, but the printer would not stay powered on. A long correspondence, with pictures, commenced between Jeff and Dan. This went on from mid-March to mid-May, when a suggestion to swap the cables on the floppy disk drives led to the replacement of one drive. The system would now perform an Initial Microcode Load (“IML”), after which it suggested running the Problem Finder diagnostic tool. Progress! A few more days of fiddling about (bad breakers in the power supplies, etc.) led to the indicator lights on the console keyboard signalling “Power Complete”.

Jeff cabled the FLEX-CUB to the 4361, and changed some system settings on the console to allow it to run VM/370 instead of DOS/VSE. I sent the FLEX-CUB configuration which had been set up for the 4341 to Fundamental Software; they sent one back which had the proper incantations for the 4361 instead and installed it for us remotely.

After I checked over Jeff’s revised settings on the console, we tried to IPL the system, which could not find the configured IPL device. The Problem Finder tool likewise did not find it. I reviewed the FLEX-CUB configuration, and did not find anything problematic there, so stopped for the evening, asked Jeff to locate the Operating Procedures manual for the 4361, and sent pictures to FSI of the console screen showing the Unit Control Words (UCWs) defined for the devices attached to the system. The next day, I got back suggestions for updated UCWs and updated the settings on the console while Jeff moved the channel cables to their new places. Although the system still did not come up, it did report channel status on the console so we knew the system was alive.

The next day, I revised the UCWs again on advice from FSI, to change the controllers on all disks and tapes to 3880s. Several attempts to IPL the system were unsuccessful, but in the meantime we attached more 3278/3279 terminals and got the correct keyboards on them. A day later, after telling the system that the 3279-2A display was a 1052 Selectric-style printing terminal with no printer attached, and IPLing once more, we were prompted for date and time; FSI advised issuing the command CP ENABLE ALL to make the attached terminals live in the system. FSI did a little more configuration on the FLEX-CUB, and they and I were able to log on to the MAINT account! That was the end of May, 2017.

Now my task of installing a full operating system began. Several weeks of reading manuals ensued, along with the installation of the Hercules emulator14 on a Windows desktop and on a Linux server. By the end of June, 2017, I had the public domain VM/370 running on both, a task made simpler due in equal parts to the existence of turnkey installations and an active Hercules community.15 In particular, the members of the Hercules-VM group have been very helpful over the last year, offering suggestions, advice, software, and general excitement for our project.

I reached out to David Boyes to ask that he put us in touch with his IBM contact for licensing VM/SP, the preferred version of VM/CMS for our hardware. David wrote back to me that his contact was no longer at IBM, but that he would try to find us the proper person to talk to; he also told me that the tapes he had preserved had been shipped off to CHM a while back, and that he was asking that images be made. A week later, I had the name of IBM’s Product Manager for z/VM and Related Products,16 George Madl, and sent him a message outlining LCM+L’s mission and the place of the 4361 and VM/SP in the museum’s offerings. He forwarded the request to Glenda Ford in IBM’s licensing department. Glenda shepherded the request through IBM’s processes for four months and by mid-November had worked out a very favorable license with reasonable restrictions (no support, no commercial use of the system, no fees).

While waiting for an answer to the license question, I moved on with planning for VM/SP, starting with a review of the differences between VM/370 and VM/SP installation. As the weeks went by, I proposed a backup plan in which we would begin by installing VM/370, and upgrade to VM/SP when the licensing came through. This took us to the end of 2017.

In January 2018, with help from FSI, I configured eight 3350 disk drives on the 4361. As we worked together to finalize the new setup, they set up a production VM/370 system on three drives, along with an emulated card reader and punch and an emulated printer. (We even uncovered a bug in the FLEX-CUB software, so the benefit was not all in one direction!) I set up guest accounts for two users who had been asking since the 4341 restoration began, and collected their impressions.

For further planning, I returned to the Hercules emulator, looking at access to language processors and other utilities. I planned to provision our new VM/370 from the prebuilt Hercules disk images, so had to learn the ins and outs of DDR (the DASD Dump/Restore program).17 I added three more 3350 disks to the system, in order to hold the desired contents from the Hercules ready-built VM/370 system. I had to remember to re-IPL the system in order to make the new drives available; the 1970s had no concept of “plug-and-play” peripherals.

It became clear that the Hercules “6-pack” (made up of six 3350 disk images) was very tightly integrated, and the simplest way forward might be to install these images onto our FLEX-CUB disks via DDR. I consulted the H390-VM mailing list, which concurred with that idea. However, at this point two people came forward with offers of assistance.

One of the architects of the Hercules “6-pack VM” system had available the installation tapes for VM/SP Release 5, which was our original target for the 4361. He provided us with images of the tapes and images of 3350 disks onto which the installation files had been placed, and gave us a hand from the UK in getting things set up under Hercules.

The other is Drew Derbyshire, one of the VM/370 beta testers. Drew is a contract programmer with 10 years’ experience in the VM/CMS world, including a long stint working on the CP nucleus for IBM. He is also local to Seattle, and a member of LCM+L, so was well placed to help us move forward with the installation and configuration of VM/SP for our particular purpose.

On 1 March 2018, I was able to IPL the 4361 under VM/SP, having copied the installation disk images over to the FLEX-CUB with help from FSI and our helpers. These were still 3350s, so I created sixteen new 3380-K disk images on the FLEX-CUB, a total of just under 20GB of storage space,18 as the first step in making the system available to the public by 1 April.

At this point Drew, as a contractor, and I began a fruitful working relationship, trading configuration notes, ideas for further work, and so on. Drew set up a Hercules mimic of the 4361’s exact configuration in order to experiment when the museum was not open. This was helpful when the 4361’s disks were clobbered by configuration errors. Drew also did the artwork for the VM/SP splash page shown on display terminals connecting to the system.

Over the next 10 weeks, Drew and I built CP nucleuses19 with different parameter settings, different numbers of terminals defined, 3380 disks instead of 3350, and so on. In mid-May, the 4361 had a machine check, which Jeff and I traced down over the next week to a memory issue.20 Jeff pulled memory modules from the 4341 to replace those called out by the IBM diagnostics; I began backing up all the disks to tapes, taking the system down every night and bringing it up the next morning.

The interruption was annoying because the developer/maintainer of the Stanford Pascal Compiler was installing his program on the system when the memory fault occurred. Once that was repaired, Drew and he completed testing of the installation and declared it good.

I booted the 4361 on Friday evening, 18 May 2018, for a test run over the weekend. Drew accidentally crashed it from a remote location on Saturday morning, but brought it back up during open hours at LCM+L. The system ran for a week without incident, so I posted an invitation to the H390-VM list for anyone interested to apply for a beta account. This was as much to test the account management software Drew had written as to shine a light on any blind spots we had with regard to software for the casual user.

Since 1 June 2018, Drew has installed the PL/I Optimizing Compiler, Fortran/VS, and other pieces of software to make the system more hospitable. In addition, one of the beta test users installed a version of the IND$FILE file transfer program by cutting and pasting a hexadecimal dump of the binary program into his directory, then let us know so we could install it for general use. Drew has made great use of it to push updates from his Hercules testbed to the running 4361.

Future possibilities include installing RSCS and NJE, the remote spooling and networking subsystems for VM, to create a BITNET-style network site,21 and creating subsidiary virtual machines running other interactive operating systems such as the Michigan Terminal System or the McGill University MUSIC timesharing system, so stay tuned for further developments!

The DEC 340 Monitor, Ship It

Playing Spacewars.

Spacewars on a 30E Monitor

My last article explained that the DEC 340 Monitor pointed at and shot dots from an electron gun to light up spots on its screen. That was my magic chant: point and shoot, the method by which the DEC 340 drew its pictures as a collection of dots.

Every picture a DEC 340 ever showed was made of dots flashed onto its radar tube by the tube’s electron gun. Reproducing the technology that poked those dots onto the tube is what I do now. It is a fascinating puzzle to solve, even inspiring,

     Now is the winter of my discontent
     Made glorious summer by this DEC of 340;
     And all distracting clouds that lowered upon my dome
     Into the deep sea must be buried.

Or else I won’t get it done. I don’t have full documentation and parts are missing. It is a puzzle I don’t have all the pieces for; the challenge of coming up with the missing puzzle pieces and getting them to work and fit together inspires me, but that’s not enough. It must consume me with passion! A hero’s journey has begun and heroes can’t be distracted. There is no time for it. The devil is in the details and there are a lot of details in this project. Trying to keep track of all of them is challenging, so I make it fun. This blog series will catch up to the actual work of that challenge later, but for now I need to describe the scenery a bit more. I hope transistors will appear in my next installment, but I have to set the background for them or they aren’t very interesting characters. They can be captivating entertainers, but you have to get them in the right venue or they won’t amuse. More background is needed.

Designing the original DEC 340 may have been an easier job than mine. Don’t think that is a complaint; it is only an observation. The DEC 340 circuits were already in use in a preceding DEC monitor, a monitor that DEC designers had full documentation for. This was a monitor they could touch and turn on. The designers had drawings and net-lists which described how all the plug-in System Design Modules were interconnected to create the monitor. The most useful document I have is this one:

http://www.bitsavers.org/pdf/dec/graphics/H-340_Type_340_Precision_Incremental_CRT_System_Nov64.pdf

Many thanks to a reader who, upon reading the first article of my series, supplied this link. The copy I had must have been a copy of a copy of a copy and was nearly illegible. This one is crisp and clean. Many thanks.

The document is a maintenance manual intended for limited distribution. It has a lot of good information, but it is not an engineering file that would have everything I need to know to reconstruct a DEC 340. It contains material that gives a good understanding of how the DEC 340 works, and it is detailed enough to perform maintenance on the machine. It’s my Rosetta Stone. I’m using it to translate old to new, after enough pondering and understanding.

Rosetta Stone

DEC engineers had all the bright and shiny System Design Modules that they needed, the building blocks from which the DEC 340 is made. The DEC 340 was an evolutionary design, derived from the DEC 30E, an earlier DEC monitor. The original 340 design did not start from scratch; the designers had a working monitor and a clear goal. I have a DEC 30E manual similar to the DEC 340 manual. It has information that fills in some of the chips missing from my stone.

The goal seems to have been to improve DEC 30E performance. Other considerations may have motivated a new design, but circuitry differences tell me that performance enhancement was the big goal.

The 340 was built on top of the 30E design. The DEC 340 has additional circuitry the 30E did not have, but all the tube-driving circuits the DEC 30E had are found inside a DEC 340. A 30E is shown at the top of this article playing Spacewars. The DEC 30E used the same tube as the DEC 340, so the 30E manual was a good find: it documents the tube-neck circuitry better than the 340 manual does.

It may be that designing the 340 was an easier job because the 30E monitor already existed, but I’ll guess the original 340 designers were also told something like, ‘you’ll have it done by next Tuesday, right?’ That not actually being a question. A complication, of course, which at the end of the day would make their job a lot harder than mine. Especially if tomorrow were already Tuesday.

The 30E used the DEC point-and-shoot method of drawing dots on its screen like the DEC 340 and later DEC monitors did, but the 30E could not show as many dots on a screen as a 340. The DEC 340 uses tricks to get more dots on its screen. Displaying Spacewars makes for a great photo, but I don’t see a star field of dots as much of a performance test. I’d like to see how it did at rendering a few sentences; I suspect a few sentences would push 30E hardware to its limit. Magnetic deflection of Cathode Ray Tube beams is band-limited, meaning the yoke magnetic field can’t be changed as fast as we need it to be, to steer a pointed-at spot around as fast as we want. Spending a few moments considering how many dots are in a picture made up of all dots, and how much time is allowed to draw them all to prevent screen flicker, shows how fast an ideal DEC 340 tube yoke needs to be.

In a DEC 340, a square in the center of the CRT is used for display. It is a little less than 10 inches on a side, and each point in the square has a unique ten-bit X and Y address. Ten-bit addressing allows for 1024 X and 1024 Y values, or 1,048,576 addressable points in all. Spots outside this center screen square can’t be pointed at.

If only one fiftieth of a second is allowed before a screen must be redrawn or it flickers, an all-white screen requires pointing and shooting at more than fifty million dots a second, because there are over a million dots in an all-white screen. That allows less than twenty nanoseconds to draw a dot. Point-and-shoot magnetic deflection can’t keep up, so a flicker-free all-white screen on a DEC 340 is impossible. But a DEC 340 can draw a whiter screen than a DEC 30E can. Improvements were made.
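
For the curious, here is a quick back-of-the-envelope check of that dot budget, written as a few lines of Python; the fifty-redraws-a-second figure is the same guess used above, not a number from the DEC documentation.

    # Dot budget for a flicker-free all-white DEC 340 screen (rough sketch).
    points = 1024 * 1024            # ten-bit X and ten-bit Y addresses
    refresh_per_second = 50         # assumed redraw rate before flicker shows

    dots_per_second = points * refresh_per_second
    nanoseconds_per_dot = 1e9 / dots_per_second

    print(dots_per_second)          # about 52 million dots every second
    print(nanoseconds_per_dot)      # roughly 19 nanoseconds per dot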

In the computer room at the Living Computer Museum, in Seattle, Washington, there is a CDC 6500 with a working DD 60 display attached. The tubes in that display use electrostatic deflection, which does not have the inherent bandwidth problem that magnetic deflection has. The DD 60 displays are also large pieces of furniture. A tube which uses electrostatic deflection has to be about three times longer than it is wide, or the electron beam will only make a small square in the center of its screen. An electron beam has only a very brief time in which to be deflected as it passes between two electrically charged deflection plates, so the resulting deflection angle is limited.
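
Here is a rough sketch of where that “three times longer than it is wide” rule of thumb comes from. The ten-degree maximum deflection angle is my own assumption, typical of electrostatic tubes, not a figure from any DEC or CDC manual.

    # Rough electrostatic CRT geometry: a small deflection angle means the
    # tube needs a long throw to cover a wide screen.
    import math

    max_angle_degrees = 10.0        # assumed maximum deflection angle
    screen_width_inches = 10.0      # roughly the display square discussed above

    half_width = screen_width_inches / 2
    throw = half_width / math.tan(math.radians(max_angle_degrees))

    print(throw)                         # about 28 inches from plates to screen
    print(throw / screen_width_inches)   # about 2.8 times the screen width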

A vintage monitor made by Hewlett-Packard, the HP 1300, got around the geometry problem with a patented innovation: a grid inside its CRT which enlarges the picture on the screen. The Living Computer Museum has a DEC PDP computer with a working HP 1300 attached. It is a custom implementation used at the University of Oregon as a research device for many years. The HP 1300 was a monitor, but it was not a computer monitor; it requires external X and Y voltages along with a gate signal to move the beam around and to turn it on and off. The CRTs in HP 1300s look like old TV tubes, wider than they are tall, but internally they are more like oscilloscope tubes with electrostatic deflection plates. To me, they have a one-of-a-kind charm because of that innovation.

Electrostatic deflection, for whatever reason, was not an option for DEC, so the best that could be done with magnetic deflection was attempted. The goal was to make something useful. Thinking of a computer monitor as a device for reproducing photographic images, rather than as something that only outputs useful data, was at the time a figment of somebody’s imagination. Probably more than one somebody had the same figment, but practically speaking it was, for the time, a wild dream. Digital images could not be read from a sea of cheap memory as they are today. At the time the DEC 340 was made, everything in an image was programmed. There were no digital cameras. Computer monitors were a different animal, and people thought of them totally differently than they do today. Drawing a line took time; drawing two lines took more time. Not everything in a picture had the same ‘cost’. All modern computer monitor pictures do have the same cost, the time it takes to read an image from memory, and modern hardware has no problem keeping up with it. Things were different in the days of tube monitors like the DEC 340.

A DEC 30E takes thirty-five microseconds to set the X and Y magnetic fields to point to a spot on the screen. With 1,048,576 points in the square, that is more than 1000 times slower than we need to light up every point in the display square without any flicker. I guessed at fifty refreshes a second, but that is a reasonable figure. Improvement in response time is obviously desirable, and a 1000 to 1 ratio is a lot of room for improvement. Is the picture as bleak as it could be? The answer to that question, it turns out, is no.
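
The same sort of back-of-the-envelope check, under the same assumed fifty-redraw-per-second rate, shows how large that gap actually is.

    # How far 35-microsecond settling falls short of the ~19 ns per-dot budget.
    settle_seconds = 35e-6                    # DEC 30E deflection settling time
    budget_seconds = 1 / (1024 * 1024 * 50)   # time per dot at 50 redraws a second

    print(settle_seconds / budget_seconds)    # roughly 1,800 times too slow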

A DEC computer at the time would need several microseconds to set the X and Y values of a point. Several microseconds is not a lot different from the 35 microseconds it takes for a 30E magnetic field to settle on a pointed-at value, and certainly a lot less than a 1000 to 1 ratio. Not much 30E speed improvement is needed before the DEC computer the 30E is attached to, rather than the monitor, becomes the performance limit. This is like the weakest link in a chain being the link that defines how strong the chain is. Magnetic deflection would not be the limiting factor once the 30E could be made to outperform a DEC PDP computer. The DEC 340 attempted to implement that improvement. Two different things were done to get more spots on the screen.

The response time to settle the magnetic field between adjacent points was reduced by blending an open-loop response with a closed-loop response. That’s engineering talk most of you won’t have a clue about yet, and explaining it will be my concentration the next time I write an article installment. The 30E was an entirely closed-loop device. I’ll let that be my first sentence for the next installment and go from there.

The second thing that was done was to make the interface between the DEC 340 and a host computer more elaborate, so entire lines could be easily described. A program that draws lines and dots is obviously going to be shorter than one that draws just dots. The DEC 340 takes line segments and calculates the point positions on those lines by itself, drawing the lines as points. The DEC 340 is a programmable device able to take some of the burden of ‘vector’ programming away from the host computer it’s attached to. The DEC 340 could be set in different modes to ease information transfer from its host computer and free up CPU clock cycles. Description of these modes will wait for a future day because that has to do with device logic; for now I’m concentrating on the electronics that drive the CRT.
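
As a purely illustrative count (this is not the DEC 340’s actual data format, just a sketch of the idea), compare how much a host has to send to describe a 100-dot line as individual points versus as a single line segment the display expands on its own.

    # Illustrative only: how much the host sends for a 100-dot line.
    # Assumes every dot needs a full ten-bit X and ten-bit Y address,
    # and a line segment needs only its two endpoints.
    dots_in_line = 100
    bits_per_point = 10 + 10

    sent_as_points = dots_in_line * bits_per_point   # 2000 bits from the host
    sent_as_vector = 2 * bits_per_point              # 40 bits: just the endpoints

    print(sent_as_points, sent_as_vector)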

Once DEC marketing saw something describing improved performance, I’ll guess the pressure to ‘ship it’ inside the DEC ecosphere must have been intense. Getting into the details of the electronic circuits that produce that magnetic deflection speedup is what I’ll do in my next installment.