Connecting to the PDP-7

Two 55-pin connectors are available on the PDP-7 SN129. One provides 18 bits of output data, and the other can read 18 bits of input data.

My project was to solder 50-pin ribbon cables to the 55-pin connectors. This will allow us to easily connect to external logic.

55-pin round connector
Pin numbers spiral in, so it is best to begin soldering at the inside and work out. Here are the first four pairs of wires. The green wire is the 25th pair of the ribbon cable.

Each wire has heat-shrink tubing, both to insulate it and to provide structural strength.

55-pin round connector
This is the other connector, and being of the other gender, it spirals in the other direction. The final two wires are ready to be soldered.

Computer Maintenance Hell!

Back in October 2018, our PDP10-KI went down, and it didn’t want to come back up. I ran all the normal diagnostics, and they all worked, but TOPS-10 would hang when I tried to boot it. That is the definition of Computer Maintenance Hell: everything works, but the operating system won’t run!

Running the normal diagnostics sounds like an easy thing, but that isn’t always the case! The first bunch of diagnostics run from paper tape, and that is pretty easy. As we continue past DBKAG, the tapes don’t fit well in the reader, so we switch over to getting them off of DECtape. Therein lies the rub: the TD10 DECtape controller on the KI is almost always broken when I need it.

After much gnashing of teeth, and tearing of hair, there was enough blood on the floor for the dust bunnies to leave tracks in that pointed to what was wrong with the TD10, and we were off once again. I ran the rest of the usual diagnostics, and they all passed! Still didn’t boot.

I had plenty of things to keep me occupied, so our poor PDP10-KI didn’t get a lot of my attention. During our group session bringing up KATIA, we played with the KI some, and found that the KI didn’t like its memory! The KA liked it, but the KI didn’t! It would run the DDMMD memory diagnostic for about 10 or 15 minutes, then fail. The KA would happily run the KI’s memory well past where the KI would fail. Looking at the errors, it appeared as if things were getting confused about which particular bit of memory it was talking to. It would always start failing at location 0374000, where either it hadn’t inverted the contents of those locations, or it had done it twice.

Now it didn’t fail all the time. The part of the test that failed went through memory incrementing the address by a more significant bit than the LSB, then wrapping around to the LSB. When it started with the LSB, bit 35, everything was fine. It worked when it did bit 34. It had to get up to bit 25 before we had problems; it would fail between bits 25 and 21, while bits 20 through 18 worked too.

I spent quite a while trying to write a diagnostic that did what DDMMD was doing, but in a quick and repeatable way. I believe I got pretty close, but nothing I wrote would tickle the problem… bother!

Months have now passed, and I broke down and plugged in the logic analyzer. Most of the time, I use an oscilloscope as my main debug tool. ‘Scopes don’t lie as much as logic analyzers do! If the logic is working, producing 1’s and 0’s as it should, a logic analyzer is a good tool. When things are broken, sometimes you get a half or a third instead of a one or a zero, and this is where the ‘scope is better about telling the truth, and the logic analyzer will lie. Here the machine was pretty much working, at least the diagnostics thought so.

Here is one of the first logic analyzer traces I took, just showing the logic analyzer sample number, the memory operation, and the address:

1581 wr 626415
1601 wr 626435
1621 wr 626455
1641 wr 626475
1661 wr 626515
1681 wr 626535

I did a bunch of work with PERL to go through the 100MB of data that came out of the logic analyzer, and boil it down to what you see here.

Now it turns out that the way this part of DDMMD worked is that it would fill memory from the bottom to the top stepping by 1, complement each location using the funny addressing pattern, then verify from bottom to top normally. I added the top 8 bits from the CPU’s MA (Memory Address) register to the logic analyzer:

104873 rd 377774, 376
104903 rd 377775, 376
104933 rd 377776, 376
104963 rd 377777, 376
104989 rd 777000, 400 ***
105019 rd 400001, 400
105047 rd 400002, 400
105075 rd 400003, 400
105103 rd 400004, 400

This is where it is doing the final verification, and you can see something funny here: the upper bits from the MA register incremented like I expected them to, but when a whole bunch of them changed, the address going to the actual memory didn’t follow as quickly! Instead of going from 377777 to 400000, it went to 777000! Here we get into a bit of logic called the “Pager”.

A PDP10 can really only talk to 256K words of memory at one time. How can the KI use 4MW of memory? That is the Pager’s job: to translate the logical address that the CPU provides into a physical address in a hopefully larger memory. While running diagnostics, the Pager should be turned off, resulting in a maximum of 256KW of memory addressed directly from the MA register to the address lines going to memory. Something was going wrong here!
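
As a rough sketch (my own simplification in Python, not the real KI pager logic), the address path looks something like this:

# Rough model of the KI address path described above (a simplification).
# With the pager off, the physical address is just the MA register, so the
# CPU can reach at most 256K words.  With the pager on, the upper bits of
# the logical address select a page, and the translation supplies new upper
# bits for the larger physical memory.
PAGE_BITS = 9                       # PDP-10 pages are 512 words

def physical_address(ma, pager_on, page_table):
    if not pager_on:                # how diagnostics are supposed to run
        return ma                   # straight through: 256KW maximum
    page = ma >> PAGE_BITS          # upper bits of the logical address
    offset = ma & ((1 << PAGE_BITS) - 1)
    return (page_table[page] << PAGE_BITS) | offset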

I added another set of 8 probes from the logic analyzer, and started moving backwards from the physical address going to the memory to where the MA register fed into the Pager. When I got to the output of the CAMs, there was something I didn’t understand.

What is a CAM, you ask? CAM stands for “Content Addressable Memory”. You give it the logical address that you want, and it tells you whether it knows about that address, with a single match line for each location inside itself. All four of them.

I got lucky, the first group of 8 output bits looked like this:

536691 wr 360631, 360, 400
536793 wr 362631, 362, 100
536844 wr 363631, 362, 040
536896 wr 364631, 364, 020
536947 wr 365631, 364, 010
536999 wr 366631, 366, 004
537052 wr 367631, 366, 006

Near as I can tell, there should be only a single 1 in the right column. It is octal, so we can watch which location in the CAM has the data as the addresses change, and when we get to 367631, we get two ones! I believe that output should have been a 002, not 006!
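
For reference, here is a toy model (Python, names mine) of what a healthy four-entry CAM does. At most one match line comes up for any lookup, which is why two 1s in that column smells like hardware rather than software:

# Toy model of a 4-entry content-addressable memory.  Assuming no duplicate
# entries are ever loaded, a healthy CAM raises at most one match line for
# any lookup; a result like the 006 above, with two lines up at once, means
# the hardware is confused about which entry holds the address.
def cam_match(entries, lookup):
    lines = 0
    for i, tag in enumerate(entries):
        if tag == lookup:
            lines |= 1 << i             # one match line per stored location
    return lines

entries = [0o360, 0o362, 0o364, 0o366]      # example tags, made up
assert cam_match(entries, 0o364) == 0b0100  # exactly one line high
assert cam_match(entries, 0o777) == 0       # miss: no lines high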

That output came from board 2PR09, so I swapped it and 2PR08, and I couldn’t run the diagnostic at all due to a “Page Fail Trap Error”! Ah, I think we are very close here! I checked the inventory, and we didn’t have a record for an M260 board, so I stole one from one of the machines that came in in September, and Voila, the memory test passed! It can even run TOPS-10 if we don’t try to initialize its serial ports. This could be correct since we stole a bunch of its serial lines to use on KATIA while the KI was asleep.

OK, since the KI, the KA, and the CDC are all working, I seem to have made it out of Computer Maintenance Hell for now. Give them a little while; one of them will fail.

Bruce Sherry

Adventures in PDP10 land

The PDP10-KI went down sometime in the fall, maybe October. This is the machine just to the right of the CDC6500 as you come into the second floor computer room. I noticed this fairly quickly and tried to reboot it several times, but it would hang every time.

OK, it must be time to run diagnostics, which I proceeded to do. It passed all the diagnostics from DBKAA to DBKAH, which are the ones on paper tape, but the TD10 DECtape controller, just to the right of the console, had quit again, as it has done almost every time I need to use it.

This TD10 and I just do not get along very well. It always kicks me around the block several times before it will let me know what is wrong, so I can fix it. Because of this history, it sometimes takes a while for me to generate the gumption to work on it. This time was no exception, since I was busy trying to get the KA working. If you don’t know about gumption and gumption traps, you should read the classic “Zen and the Art of Motorcycle Maintenance”, which doesn’t really talk about motorcycles, or Zen much.

Back to our story: We fixed the KA, moved it down to the second floor computer room, fixed it again, and had a small gathering to boot it the first time. The CDC was being its normal self, but it was refusing to be down when I arrived at work (finding it down is my signal to drop everything and work on it).
I was running out of reasons to avoid working on the KI.

Contrary to normal behavior, it only took a few hours to figure out the problem with the TD10, so I could run the diagnostics that are very awkward to load from paper tape.

All the normal diagnostics ran except for DBKAL. I think it took a few days to remember that there is a diagnostic for which the binary we have is wrong, and that I have to patch a couple of locations after loading in order for it to work. Now DBKAL and DBKAM work. On to some more obscure ones. All the CPU ones seem to pass; does it boot now? No, it still hangs after the OS is loaded and we type “GO” to fire up timesharing.

What else can we test? We ran DDRHA, which tests the RH10 disk controller, but it passed. We then ran DDRPI, which tests the disk drives. Now we don’t really use the disk drives it is expecting to test; we use our MDE (Massbus Disk Emulator). We have been using this MDE for about 5 years now, but there could still be a bug hiding in there somewhere.

DDRPI looked like everything was fine for about 20 minutes, while it was doing register tests, seek tests, and all-ones-and-zeros tests. When it got to testing the surface of the disk, things started to go wrong. It would get an error where it looked like the data was misplaced, like it was reading the wrong sector or something like that.

How could that be? This thing has been working fine for over 5 years, in fact when we ran DDRPI from the KA, using the KI’s Memory, RH10s, DAS33, and MDE, EVERYTHING was FINE, even the surface test!

How about the memory? It passes my little MARCH memory test, from the KI or the KA. Dragging out the DECtape again, we loaded up DDMMD, which is one of the memory diagnostics. We fired it up, and it ran fine for about 15 minutes, whereupon it started spewing out errors. We have run this a BUNCH, and the errors usually seem to start at location 374000, and the data seems to be inverted from what it should be. The test complains about address bit 24.

We run the same test from the KA against the KI’s memory, and it works fine! It is handy to have the KA right across the room to enable this kind of testing.

What is really going on here? Let’s look at the console:

OK, TN=2 means it is doing the “Address” test. AS=F24 turns out to mean that it was doing fast addressing on address bit 24. “What does that mean?” you may ask. I did! After much grovelling over the DDMMD listing, and consulting with Rich Alderson, I found that they would fill memory with the address and its complement, and then go through reading a location, verifying it was correct, and writing the complement into it, then go back and read and verify that they all had the complement. Ah, but what about that F24 part? When they are reading and complementing the data, they start skipping locations by changing which address bit they increment first. The first time they do this, they use bit 35 as the LSB, but the next time through they shift the LSB over to bit 34, then 33, etc. When they get to bit 24, we run into this problem.
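
Here is a little sketch of that pass as I understand it (Python, my own reconstruction from the listing, so the details are approximate). Remember that PDP-10 bits are numbered 0 (most significant) through 35 (least significant), so F24 means the address is effectively stepping by 2 to the 11th before wrapping around:

# My reconstruction of the DDMMD "fast addressing" pass, not the actual code.
# With bit 35 as the LSB this is a normal sweep; with bit 24 playing the LSB
# the address steps by 2**11 words, wraps around, then bumps the low bits.
SIZE = 16                               # tiny stand-in for 18-bit addressing

def fast_order(step):
    """Visit every address once, incrementing by 'step' and wrapping."""
    for low in range(step):
        for high in range(SIZE // step):
            yield high * step + low

mem = [a for a in range(SIZE)]          # fill with the address
for a in fast_order(step=4):            # read, verify, write the complement
    assert mem[a] == a
    mem[a] = ~a & (SIZE - 1)
for a in range(SIZE):                   # then verify everything complemented
    assert mem[a] == (~a & (SIZE - 1))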

How do I figure out what is really going on here? I decided to write a version of my MARCH that does this, MRCHFA. It took a while, but I finally got it to work on the KA, and tried it on the KI. Unfortunately, the KI passed it too! What else are they doing differently? OK, more grovelling over the listing: they are stuffing all their inner loops down in the Fast ACs to speed them up. On to MRCHF3, which pushes the inner loops down to the Fast ACs. Does the KI fail that one? Nope!

I’m running out of ideas, so where do we go from here? I decide to just watch it for a while, and see what happens AFTER it starts to fail. I see it fail bit 24 from 374000 to 377777, then bits 23 and 22 over the same range, and then it starts failing location 700000 in the same way. Shortly thereafter, the program gives up in disgust, stops printing the results, and starts ignoring the errors. Now I just watch the lights on the ARM10 memory.

I got used to the way the lights blink while working on MRCHFA and MRCHF3: as the LSB moves up the address, the slow-blinking address lights follow it.

But wait, that isn’t what I am seeing: I see the lights increment from the bottom, do the FA thing, then increment from the bottom again, and then shift the LSB over. What is going on here? Is there anything on the ARM10 that will tell me anything? Yes, there are the read and write lights. While writing, both lights come on, but a read only lights the read light. After the FA thing, it just reads! Back to grovelling over the listing some more.

OK, what they do is fill memory from the bottom to the top, check and complement using the shifting LSB, and then check from the bottom to the top. That led to another new test, called THEIRS, which of course doesn’t catch the problem either. I am running out of hair to pull out here!

As I write this, both machines are happily running DDMMD against the other’s memory, with no errors. No happy ending… YET!

Letting the cat out of the bag!

Back in September 2018, we got a new addition to the LCM+L computer collection, but we didn’t talk about it. This was something that we had been looking for for 10 years or more. We knew where a couple of these machines were, but were not able to convince the owner of that collection to part with one. We kept looking, but the ones this collector had were the only ones we could find. Well, that isn’t quite true: Stephen Jones and I went down to the San Francisco Bay area to visit the Computer History Museum and photograph one they had in their warehouse, but they weren’t willing to part with it. We thought maybe we could build one.

Stephen went to Australia on a hunt, and came back with some bits of one of these, but not a whole machine.

Stephen finally got the owner of the big collection in Sweden on the phone, and got him, Peter Lothberg, to talk to us again about his collection in late spring last year. A bunch more negotiations happened. Paul Allen was brought in, and a contract was signed.

A couple of our Archivists, Cynde Moya and Amelia Roberts, along with Stephen Jones and Jeff Kaylin, from Engineering, went to Stockholm in August to go over the whole collection, pack up everything Peter was willing to sell us, put it in containers, and get them on a ship.

In September, the containers came in, and around 8 one morning, the first container arrived at our loading dock. Stephen sent Paul Allen a picture of the open container, and he was here by 8:30. He was very excited, because he had been waiting for one of these for over 10 years. He asked Stephen “Can you have it working in 3 weeks?” Stephen said something like “How about by the end of the year?”

Meet KATIA:

KATIA: a PDP10-KA, Serial Number 175.

The first thing we had to do to KATIA to get it running was to upgrade the power supplies. For a lot of our older mainframes, we have decided to leave the old supplies in place, and just move the wiring over to new efficient and reliable switching power supplies. David Cameron and I spent around a week figuring out how to do this, this time around, and I think it looks pretty good!

KATIA Bay 1 power supplies (sideways).

The silver bits are the new supplies and their mounting plates, mounted either in front of, or next to, the original power supply filter capacitors.

We also had to upgrade the power supplies in the memory. We have designed new modules to replace the old ones, but the parts were scarce, so we just replaced all the old aluminum electrolytic capacitors in the existing modules.

We re-configured the main blowers to run on lazy American electrons (120V), as opposed to those very energetic (240V) European electrons that they were set up for.

We powered up the machine, and I started into the debug process. I began by adjusting all those new power supplies so that all the correct voltages appeared at the correct places, and so that the supplies shared the load as evenly as I could manage.

Now, what can it do? One of the first things we will need is to be able to read diagnostics on the paper tape reader. With the KI, I had discovered this process requires the adder part of the CPU to work, and so I fired up the Way-Back machine to recover how I had tested the KI back then. It only took two instructions, an ADDI (add immediate) and a JRST (jump). But wait, they have to be stored somewhere! I had to fix some memory.

In PDP10’s there are two kinds of memory: the Accumulators (the first 16 locations of memory), and main memory. The Accumulators in most PDP10’s live in special logic inside the CPU, called “Fast Accumulators”. I had to steal a board from one of the other 2 KA’s that came with KATIA in order to get the Fast ACs to work.
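
In other words, the first sixteen addresses are special; roughly (a simplified sketch in Python):

# A simplification of PDP-10 addressing: locations 0-17 octal come from the
# fast accumulators inside the CPU, and everything above that goes out to
# main memory.
def read_word(addr, fast_acs, main_memory):
    return fast_acs[addr] if addr < 0o20 else main_memory[addr]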

I then started poking at the MG10 128K word memory box we had hooked up to KATIA. I got 32K of it working, and that was enough to run the paper tape diagnostics. I had to replace the light bulb in the card reader.

Now to check the adder: I loaded the two instructions with the front panel, and no, the adder wasn’t working. It couldn’t propagate a carry past bit 20. After a bunch of poking around I found that there was some kind of a connection problem between cards 2A29 and 2A30, which are in chassis 2 (the middle one), on the top row, about 3/4 of the way from the left. Swapping them and swapping them back seems to fix it.
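
For reference, the two-instruction test is just a tight counting loop; here is the idea sketched in Python (the real thing is PDP-10 code loaded from the front panel, and the exact operands are from memory):

# The idea of the two-instruction adder test.  The real loop is roughly:
#     LOOP:  ADDI AC,1     ; add an immediate 1 to an accumulator
#            JRST LOOP     ; jump back and do it again
# Watching the accumulator lights shows whether carries ripple all the way
# across the 36-bit word; on KATIA the carry would not make it past bit 20.
ac = 0
for _ in range(1 << 16):               # the real loop just runs forever
    ac = (ac + 1) & ((1 << 36) - 1)    # 36-bit add, wrapping like the hardware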

After about another 4 weeks, with plenty of tearing of hair and gnashing of teeth, lots of board swapping and transistor changing, I finally got KATIA to run the KA versions of diagnostics that we run from paper tape on the KI! From that point the tapes get too big for the paper tape reader to handle easily, so on the KI we run those from the DECtape drives.

It took another couple of weeks to get the DECtape drives and controllers working on both the KA and KI. We needed the KI to work so that we could write the proper diagnostics onto a DECtape for KATIA to read. The problem with that is that the KI hasn’t been able to boot in a few months. It passes all the paper tape diagnostics, but there is something fishy with the disk interfaces. Bother!

I fixed the rest of the MG10 so we had 128K words of memory working.

While I am working on the KI on a Thursday afternoon, it gets decided we should get KATIA down from the 3rd floor and on display in the 2nd floor computer room before opening on Friday! This involves detaching the DECtape cabinet from one end, the memory from the other end, and disconnecting a whole bunch of cables that go between chassis 3 and the other two chassis. We also have to make space by moving the IBM 360/30 out of where we want to put KATIA.

We start by disconnecting all the required stuff Thursday afternoon, getting the 360/30 put back together enough to move, and then we move everything.

By Friday all KATIA’s boxes have power again, and I carefully put all the cables back where they came from the day before. Half of the Museum shows up for powering up KATIA in its (her?) new home. I turn it on, and poke the Read-In button to load up my simple memory diagnostic, and… nothing happens!

Has the adder stopped working again? Well, yes. I wiggle the cards, and voila, KATIA can add again. Yay! I poke the Read-In button again and… not nothing! The tape goes through the reader, but the lights keep incrementing, even after the end of the tape has fallen out of the reader. THAT is not Correct!

Finally, on Friday morning two WEEKS later, I have grovelled over the gospel according to DEC, and abased myself before the computer gods, enough to almost understand how Read-In is supposed to work. Hmm, is that a string to pull on?

Read-In works by loading a BLKI instruction into the instruction register of the machine and executing it. PDP10’s are NOT Reduced Instruction Set Computers! The BLKI instruction reads a memory location containing a pointer, adds 1 to each half of the data there, and puts it back in memory. Then it does the I/O instruction implied by the “I”, using the right half of that memory location to point at where in memory to put the data it read from the paper tape reader. Then it looks at the left half of the pointer: if it is zero, it skips the next instruction; if it isn’t, it executes the next instruction.
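
To make the pointer bookkeeping concrete, here is a rough Python paraphrase of what one BLKI step does to that word (my paraphrase of the description above, not DEC’s logic):

# One step of the Read-In BLKI, as described above (a paraphrase).  A PDP-10
# word is 36 bits treated as two 18-bit halves; BLKI adds one to each half,
# puts the word back, then deposits the word just read from the tape at the
# address now in the right half.
HALF = (1 << 18) - 1

def blki_step(memory, ptr_addr, tape_word):
    left = (memory[ptr_addr] >> 18) & HALF        # word-count half
    right = memory[ptr_addr] & HALF               # address half
    left, right = (left + 1) & HALF, (right + 1) & HALF
    memory[ptr_addr] = (left << 18) | right       # updated pointer goes back
    memory[right] = tape_word                     # store the word just read
    return left == 0                              # this decides the skip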

As I was watching it try to do this with the oscilloscope, it didn’t appear that the part where it was reading the pointer location was taking as long as it should have, considering that pointer was located in core memory at the time, and reading and writing it should have taken a whole micro-second!

The I/O instruction was decoded over in chassis 3, but the instruction timing and memory control were over in chassis 1. Eventually I noticed that the part of the instruction it was skipping was the memory access, which was supposed to be started by a signal called “IOT BLK”, which wasn’t making it to chassis 1. I could see it in chassis 3! The chassis 3 end of the cable had been disconnected to move the machine, so let’s take a look!

Cable 3E06. Note broken resistor near the bottom middle of the card.

I replaced the broken resistor, but it still didn’t work! OK let’s look farther up the cable, did anything else happen in the move?

3E06 to 1L44 cable.

That doesn’t look right! I suspect that somehow, in the move, this cable escaped just long enough to get caught under a caster and rubbed through, because it used to work, and the signal in question is one of the top two signals in the cable. Such are the consequences of doing things in a hurry.

Do I pull this cable from one of the other machines we have or do I try to repair it?

Cable repaired.

As soon as this was done, KATIA went back to behaving for me. I ran all the diagnostics we had run, and a few more that we ran from DECtape on the KI, till I got to the one that also failed on the KI. We found the locations in this one that we had to patch for the KI, and it now runs too!

Did I get it done in 3 weeks? Unfortunately no, so Paul Allen didn’t get to see it run, and play with it. He must have known back when he asked about 3 weeks, because it was very close to three weeks from then that he left us.

KATIA: For you Paul! Sorry I didn’t get it working in time.

Bruce Sherry

Bendix G-15 – Solder Degradation

In the process of restoring the Bendix G-15, we have discovered a phenomenon that degrades the electrical connections which provide bias and signal flow, rendering the computer non-functional.

Failed Connections

Below is a group of photos which illuminate this failure mode, called “electromigration”. This process is caused by a continuous DC potential applied to a metal junction: metal ions migrate in the direction of current flow. For most new machines, this is not a problem, as the process takes quite a number of years to progress to the point where the electrical connection is broken. At LCM+L, we get machines after they have already run for a long time. Worse yet, since it is our intent to restore and run the machines for as long as we can, it is necessary to find a solution that allows maintenance to be needed only every decade or so.

A failed connection (as verified with an ohmmeter). Please note the circular crack running around and just above the base of the circular conductor.
A similar failed connection.
This one hadn’t quite failed. You can see just a small connection at around 260 degrees. This connection will fail in a fairly short period of time.

Failing Solder

This same phenomenon plays out in the metal structure of the solder itself. The photos below show the before and after of solder restoration. In the first photo, the solder looks dull and mottled. This is due to the tin having migrated out, leaving only lead in the Tin/Lead solder formulations used until the early 2000s. The modern formulations are Tin/Silver/Copper and are much less likely to have metal ion migration.

The large resistors at the top show the effects of tin migration.
After removing the old solder and replacing it with a modern formulation, you can see the solder is smooth and bright, indicating good integrity.

Long, Repetitive Work

This restoration process took quite a while. After determining that all the tube modules in the machine were affected in this way, we simply set about removing a module and then removing and replacing the solder in all the high, continuous current sections.

An interesting article on solder, covering some of the topics mentioned in this article, can be found at: https://en.wikipedia.org/wiki/Solder

Xerox ALTO – Interesting Issue

In the process of restoring the Xerox ALTO, an interesting issue came up.

Background

We received our first ALTO in running condition and, after evaluation and testing, put it on the exhibit floor, available to the public. One afternoon about a year later, the machine suddenly froze and stopped functioning. It was taken off the floor and evaluated in one of our labs. When it became clear that power supply current was not flowing into random parts of the backplane, the focus shifted to the power supply rails. It was there we were confronted with this phenomenon.

The ALTO Was A Prototype

Certain production and test details were left out of the ALTO. The amount of current running through individual pins supplying regulated DC to the logic is unusually high. Most of the time, power supply current is fed through as many pins as possible to reduce the total current running through any one pin. Because the ALTO was a prototype, the designers only used the minimum number of pins to do the job. This resulted in a phenomenon called “electro-migration”. It is the reverse of the process used for electroplating. In this instance, tin ions migrate away from the solder joints carrying the power supply current. The six pins in the center of the first photo show a mottled (instead of smooth) surface, and one of the pins has a dark ring around it indicating where the solder has totally migrated away from the connection (lower right pin). The second photo shows six pins where the tin has migrated away.

Example of electro-migration on Xerox ALTO power supply bus.
Another example. Here all six connections in the middle of this photo are compromised.

Confronted with preventing this in the future, LCM Engineering increased the surface area of the connections by soldering brass buss rails to all of the ALTO backplane power supply pins. This is shown in the photo below:

Brass buss rails soldered to ALTO backplane to increase power supply current capacity. The buss rails are the vertical elements running through the backplane.

Once this fix was applied, the ALTO was put back in service, and this phenomenon has not repeated itself.

The ALTO will be monitored to see if this phenomenon shows up again. This chapter has also been instructive for some of the other machines we are restoring. In those instances, it is extreme age, rather than something done for a prototype, that is the causative factor.

Bendix G-15 Vacuum Tubes

Early in the restoration and troubleshooting of the Bendix G-15 it was noted that tube filament failures occur with some regularity. It is not possible to observe working filaments on all the tube modules, as at least half the tubes have what is called a “getter coating” at the top of the tube, obscuring the filaments.
We hosted a subject matter expert to aid with troubleshooting the G-15, and he indicated that tube filament failures were the principal cause of machine downtime, usually about once per week. This invariably entailed up to a day of troubleshooting to find the offending tube(s).
Due to the above information and our own experience, it was decided to engineer a sensor and indicator system which would allow quick identification of the offending tube or tubes.
The configuration decided upon was a Hall effect sensor coupled to a passive magnetic field concentrator (a wound ferrite core), placed in the current path of each individual vacuum tube filament, that lights an LED when the tube filament is functional. Up to six sensors (the largest complement of filaments in a tube module) are packaged on a substrate which fits on each tube module and are powered by the filament voltage entering each module.

We gave the sensor package the acronym “FICUS”. It breaks down to FI = Filament, CU = Current, and S = Sensor.

Here is what a Hall effect sensor mated to a wound ferrite core looks like:

Below is a photo of a FICUS module in the process of being assembled:

Four element FICUS Module

Here is a completed FICUS module:

Note the mating connector and wires ready to attach to the vacuum tube module.

Here is the FICUS after the wires have been attached to the vacuum tube module:

Here is the completed vacuum tube/FICUS module ready to be plugged into the Bendix G-15:

Here is the view of the Vacuum Tube/FICUS module oriented as it would be in the G-15:

And finally, a couple of Vacuum Tube/FICUS modules in an operating G-15:

CDC Cooling update

In my last blog on the CDC, we were having a slow refrigerant leak in Bay 1, and we were waiting for parts. The new parts came, but they were wrong; then they came again, and they were wrong again. Eventually the right parts were delivered, so we took the CDC down yesterday morning to work on the cooling system.

Cole, from Hermanson, arrived at 7 AM, and we proceeded to shuffle some of the chiller plumbing around so that I will always be able to restart the computer after a power failure. It took about an hour, and seems to work fine!

The old compressor in the basement, showing where the power goes in.

A little later, Jeff Walcker, also from Hermanson, arrived to install the new compressor parts. We opened the 3-phase 60 Hz breaker for Bay 1, and Jeff started unhooking the power. We had discovered that it was leaking where the compressor power wires go into the case, and as Jeff was taking it apart, all the little bits were turning into powder. The compressor is 52 years old. It was decided that we needed a new compressor. Bummer!

Contrary to recent experience, not only was a replacement compressor available, it was in stock, IN SEATTLE! Jeff was back from getting it almost before we were done deciding that was what we should do! Compared to 102 days for the chiller, this is amazing.

He did a bunch of wrestling with the compressors to get the new one in, but we don’t have the motor cooling loop hooked up yet, because the pipe didn’t want to go onto the new one quite the same way it came off the old one.

New compressor installed.

You can see the disconnected copper cooling loop wrapped around the compressor in the above picture. Jeff will bring a slightly longer hose when he comes for our quarterly maintenance next month.

The CDC is happy again, and hopefully not leaking!

DEC Computer Power Supply Module Retrofit

In the process of troubleshooting our earliest machines, we had to replace large components called electrolytic capacitors. These are located in all the power supplies of any computer. We successfully replaced these devices and got the machines running. Recently, though, we have started to see these devices fail once more. They have a finite life of at most 14 years, which means that we have to replace them every 10 to 14 years. Also, the larger capacitors are no longer manufactured, though they can still be special ordered. As it is our mission to have our computing hardware last a lot longer than that, we did our research and engineered a replacement for the power supply modules these capacitors are found in. Our goal was to provide several decades of service without having to service these modules. The photos and descriptions below show the process:

Below is what the original power supply circuit board looked like:

When we strip out the circuit board and remove the heat sink, we get this:

We created, using a CAD program and a 3D printer, a plastic component mounting for the new components.

As you can see, the plastic mount fit perfectly into the old power module frame.

After populating the mount with all the components it looks like this.

Now we attach the modified heat sink to the original module frame.

Install the assembled component mount in the frame along with the modified heatsink and the new power module is complete.

One of the features of the module is that it has no solder connections; all of them are compression connections. Wires are compressed into a square cross-section using a stainless steel screw. This provides very high reliability.

 

The upshot of all this work (there were 38 modules in various machines) is power supplies that are more efficient and have a rated MTBF (mean time before failure) of 40 years. These power supply modules draw 2/3 less power and produce 2/3 less heat, reducing the heat load on all the components in a machine. In addition, as a result of these changes, the total power savings per year is 250,000 kilowatt hours. Electricity rates in this area of Seattle are about 8 cents/kilowatt hour. That means a direct cost savings on our electric bill of $20,000 a year.
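
As a quick check on those numbers (Python, using the figures quoted above):

# Quick sanity check of the savings figure quoted above.
annual_kwh = 250_000                   # claimed yearly energy savings
rate = 0.08                            # dollars per kilowatt hour, roughly
print(f"${annual_kwh * rate:,.0f} per year")   # prints $20,000 per year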

Life with a 51 year old CDC 6500

The CDC 6500 has led a rough life over the last 6 months or so: way back on the afternoon of July 2, 2018, I got an email from the CDC’s Power Control PLC telling me that it had to turn off the computer because the cooling water was too hot! A technician came out and found that the chiller was low on refrigerant. He brought it back up to the proper level, and went away. Next morning it was down again.

After much gnashing of teeth and tearing of hair, it was determined that the compressor in the chiller was bad. “We’ll have a new one in 5 weeks!” The new one turned out to be bad too, so another one that was easier to get was ordered; it took only about 3 weeks, instead of the 8 that the official one took. That worked for a few weeks, and then the CDC went down again because the water was too hot.

This time it was very puzzling because as long as the technician was here, it worked fine. He spent most of a morning watching it, decided it was OK, and left, but he didn’t make it to the freeway before it went down again. He came back, and watched for the rest of the afternoon, and found that the main condenser fan would overheat, and shut down, causing the backup fan to come on. The load wasn’t very high, so the backup fan had to cycle on and off, while the main fan motor cooled off. This would go on for a while, till both motors were off at the same time, and then the compressor would go over pressure because the condenser fans were off, and the chiller would stop cooling, resulting in the “Water too HOT” computer shutdown.

Another week went by waiting for replacement fan motors from the chiller manufacturer, with no luck. Eventually we gave up and got new fan motors locally, installed them, and the chiller has been working since. While the CDC didn’t seem to mind being off for 102 days for the compressor problem, it didn’t like being off for 3 weeks while we fiddled with the fans.

Both when it was off for 102 days, and this time, we found that Bay 1 was low on refrigerant. The first time we just filled it up, but the second time we looked closer and found that there is a small leak where the power wires go into the Bay 1 compressor. The compressor manufacturer, the same guys that made the chiller’s compressor, will gladly sell us a new compressor, but the parts for the 50-year-old R12 compressor are no longer available. We are working on that, but I haven’t heard that we have found the parts yet.

Back to more recent times: now that the chiller is chillin’, and the CDC’s cooling system is coolin’, why isn’t the computer computing?

Let’s run some diagnostics and see what happens: I try to run my CDC diagnostic tape, but the machine complains that Central Memory doesn’t seem to be available. No, I didn’t run the real tape drives; I ran the imaginary one that uses a disk file on a PC to pretend to be a tape drive. Anyway, that didn’t work, so I flip the zillion or so Dead Start switches in my emulated Dead Start panel to fire up my Central Processor based memory test, and get no display at all! This is distinctly unusual. Let’s try my PP based Central Memory test: that seems to work till it finishes writing all of memory, then the display goes blank. Is there a pattern here?

I put a scope probe on the memory request line inside the memory controller in Chassis 3, and find that someone is requesting memory every possible cycle. There are four possible requestors: the Peripheral Processors as a group get a request line, each Central Processor gets a request line, and the missing Extended Core Memory gets a request line. Let’s find out who it is: the PPs aren’t doing it, neither of the CPs is doing it, and the non-existent ECM isn’t doing it. Huh? Nobody wants it, but ALL of it is getting requested!

I am going to step back a little bit, and try to explain why it sometimes takes me a while to fix this beast. This machine was designed before there were any standards about logic diagrams. Every manufacturer had to come up with their own scheme for schematics. Here is one where I found a problem, but we will get to that in a bit.

Now when there are two squares, one above the other, with arrows from each going to the other, those are flip-flops. When you have a square or a circle with multiple arrows going into it, that is a gate. Which one is an “or” gate, and which one is an “and” gate? Sorry, you have to figure that out for yourself, because the CDC documentation says either one can be either one. The triangle with a number in it would be a test point on the edge of the module. The two overlapping circles, kind of like an ellipsis, indicate that it is a coax cable receiver, as opposed to a regular twisted pair signal. A “P” followed by a number indicates a pin of the module.

This module receives the PP read and write signals from the PPs in chassis 1, on pins P19 and P24. On the right side of the diagram, you can see where all the pins connect. If we look at pin 24, we can see it connects to W07 wire 904, and pin 19 is connected to W07-903. The W “jacks” are coax cables; the other letter signals go somewhere inside this chassis.

Really, what we are looking at here is that a circle or a square is the collector pull-up resistor of one or more silicon NPN transistors. The arrowheads are the bases of the transistors, and the line coming into the head has a base resistor in it. If there are three arrows coming into a square, like at the bottom, those three 2N2369 transistors all have their collectors tied together, with one pull-up resistor. I could be slow, because it took about 6 months before I felt I was at all fluent in reading these logic diagrams.

Now we have to talk about the Central Memory architecture a bit. The CDC has 32 banks of 16K words of memory. Each of these banks is separate, and they can be interleaved any way the 4 requestors ask for them. At the moment, I am only running half of them, because there is something wrong with the other half. Each of these banks does a complete cycle in 1 microsecond. The memory controller in chassis 3 can put out an address every 100 nanoseconds, along with whether it is for a read or a write. This request goes to all banks in parallel. If the bank that it points to is available, it will send back an “accept” pulse, and that requestor is free to ask for something else. If the controller doesn’t get an “accept”, it will ask again in about 400 nanoseconds. There are a bunch of modules involved in this dance, and it is a big circle.
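
A toy version of that dance (the timing numbers come from the paragraph above; the structure is my own guess) helps picture the traffic when a request shows up every possible cycle:

# Toy model of the CDC memory request/accept handshake described above.
# The controller can issue an address every 100 ns, each bank is busy for a
# full 1 us cycle, and a refused request is retried roughly 400 ns later.
BANKS, BANK_CYCLE_NS, RETRY_NS = 32, 1000, 400

busy_until = [0] * BANKS                  # when each bank frees up again

def request(now_ns, address):
    """Return (accepted, when_to_retry) for a request arriving at now_ns."""
    bank = address % BANKS                # interleave: low bits pick the bank
    if busy_until[bank] <= now_ns:        # bank idle: send the accept pulse
        busy_until[bank] = now_ns + BANK_CYCLE_NS
        return True, None
    return False, now_ns + RETRY_NS       # no accept: ask again later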

A little more background: this machine was designed before there was such a thing as plated-through holes on printed circuit boards. The two boards in each module were double sided. What they did when they needed to get to the other side of a PCB was put a tiny brass rivet in the via hole and solder both sides.

What I eventually found was that the signal from P23 of the module in 3L34 wasn’t making it to pin 15! There was a via rivet that wasn’t making its connection to the other side of the board. I re-soldered all the vias on that module, and now we were only requesting memory when someone wanted it!

Now that we can request memory and have a reasonable chance of it responding correctly, it is on to testing memory. I loaded up my CP based test, and it ran… for a while. Then it quit, with a very strange error. The test uses a single bit and its complement to check the existence of every location of memory. It will read a location, compare it with what should be there, and put the difference in a second register. Normally I would expect a single bit error, or maybe 12 bits if a module failed that way. The result looked like 59 bad bits, or the error being exactly the same as what it read. Usually this is because the CPU that is running the test is mis-executing the compare instruction.
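
Said another way, the check boils down to an exclusive-OR of what came back against what should be there; a sketch:

# The comparison described above, restated: XOR the word read back against
# the expected word and keep the difference.  One bad memory bit shows up as
# a single 1; a difference equal to the data itself means every expected bit
# "failed" at once, which points at the compare, not the memory.
WORD_MASK = (1 << 60) - 1                 # CDC 6500 words are 60 bits wide

def check(read_back, expected):
    return (read_back ^ expected) & WORD_MASK   # 0 means the location passed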

While I was thinking about that, I ran the Exchange Jump Test to see what that said. A PP can cause a CP to swap all its registers, including the Program Counter, with the contents of some memory that the PP points to. This is called an Exchange Jump. The whole process happens in about 2.6 microseconds as it requests 16 banks of memory in a sequence. This works the memory pretty hard. Exchange Jump Test (EJT) would fail after a while, and as I looked at the results, I noticed that it was usually failing a certain bit in bank 7. I checked, and noticed it was an original memory module, so I looked at my bench and found I didn’t have any new ones assembled, so I had to put the sides on a couple of finished PCB assemblies and test them. I then swapped out the old memory in bank 7 with a new semiconductor memory, and EJT passed!
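
For what it’s worth, the exchange itself is just a block swap between the CP’s registers and an “exchange package” in central memory, roughly like this (simplified; sixteen words, one per bank, per the description above):

# Rough sketch of an exchange jump (simplified).  The PP points at a sixteen-
# word exchange package in central memory, and the CP swaps its register
# state, including the program counter, with that block.  Sixteen back-to-back
# memory references is what makes it such a good memory workout.
def exchange_jump(cp_registers, memory, package_addr):
    for i in range(16):
        cp_registers[i], memory[package_addr + i] = (
            memory[package_addr + i], cp_registers[i])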

I then checked to see if my CP based memory test worked, and it did too. We are back in business after over 5 months. I am keeping my fingers crossed in the hope that the chiller stays alive for a while.

Bruce Sherry