Computer Maintenance Hell!

Back in October 2018, our PDP10-KI went down, and it didn’t want to come back up. I ran all the normal diagnostics, and they all worked, but TOPS-10 would hang when I tried to boot it. That is the definition of Computer Maintenance Hell: everything works, but the operating system won’t run!

Running the normal diagnostics sounds like an easy thing, but that isn’t always the case! The first bunch of diagnostics run from paper tape, and that is pretty easy. As we continue past DBKAG, the tapes don’t fit well in the reader, so we switch over to getting them off of DECtape, and herein lies the rub: the TD10 DECtape controller on the KI is almost always broken when I need it.

After much gnashing of teeth and tearing of hair, there was enough blood on the floor for the dust bunnies to leave tracks that pointed to what was wrong with the TD10, and we were off once again. I ran the rest of the usual diagnostics, and they all passed! Still didn’t boot.

I had plenty of things to keep me occupied, so our poor PDP10-KI didn’t get a lot of my attention. During our group session bringing up KATIA, we played with the KI some, and found that the KI didn’t like its memory! The KA liked it, but the KI didn’t! It would run the DDMMD memory diagnostic for about 10 or 15 minutes, then fail. The KA would happily run the KI’s memory well past where the KI would fail. Looking at the errors, it appeared as if things were getting confused about which particular bit of memory it was talking to. It would always start failing at location 0374000, where either it hadn’t inverted the contents of those locations, or it had done it twice.

Now it didn’t fail all the time. The part of the test that failed went through memory incrementing the address by a more significant bit than the LSB, then wrapping around to the LSB. When it started with the LSB, bit 35, everything was fine. It worked when it did bit 34. It had to get up to bit 25 before we had problems: we would fail between bits 25 and 21, and bits 20 through 18 worked too.

I spent quite a while trying to write a diagnostic that did what DDMMD was doing, but in a quick and repeatable way. I believe I got pretty close, but nothing I wrote would tickle the problem… bother!

Months have now passed, and I broke down and plugged in the logic analyzer. Most of the time, I use an oscilloscope as my main debug tool. ‘Scopes don’t lie as much as logic analyzers do! If the logic is working, producing 1’s and 0’s as it should, a logic analyzer is a good tool. When things are broken, sometimes you get a half, or a third, instead of a one or a zero, and this is where the ‘scope is better about telling the truth, and the logic analyzer will lie. Here the machine was pretty much working; at least the diagnostics thought so.

Here is one of the first logic analyzer traces I took, just showing the logic analyzer sample number, the memory operation, and the address:

1581 wr 626415
1601 wr 626435
1621 wr 626455
1641 wr 626475
1661 wr 626515
1681 wr 626535

I did a bunch of work with Perl to go through the 100MB of data that came out of the logic analyzer, and boil it down to what you see here.
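
The boiling-down was nothing fancy; it looked roughly like this (a minimal sketch, assuming a CSV export with the sample number, the read and write strobes, and the address probes as separate columns, which is not exactly what the analyzer really spits out):

#!/usr/bin/perl
# Minimal sketch of the reduction, assuming a CSV export with a sample
# number, read and write strobes, and the address probes as separate
# columns.  The analyzer's real export format is messier than this.
use strict;
use warnings;

while (my $line = <>) {
    chomp $line;
    my ($sample, $rd, $wr, @abits) = split /,/, $line;
    next unless $rd || $wr;                  # keep only memory cycles
    my $addr = 0;
    $addr = ($addr << 1) | $_ for @abits;    # assemble the address, MSB first
    printf "%d %s %06o\n", $sample, $wr ? 'wr' : 'rd', $addr;
}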

Now it turns out that the way this part of DDMMD worked is that it would fill memory from the bottom to the top stepping by 1, complement each location using the funny addressing pattern, then verify from bottom to top normally. I added the top 8 bits from the CPU’s MA (Memory Address) register to the logic analyzer:

104873 rd 377774, 376
104903 rd 377775, 376
104933 rd 377776, 376
104963 rd 377777, 376
104989 rd 777000, 400 ***
105019 rd 400001, 400
105047 rd 400002, 400
105075 rd 400003, 400
105103 rd 400004, 400

This is where it is doing the final verification, and you can see something funny here: the upper bits from the MA register incremented like I expected them to, but when a whole bunch of them changed, the address going to the actual memory didn’t follow as quickly! Instead of going from 377777 to 400000, it went to 777000! Here we get into a bit of logic called the “Pager”.

A PDP10 can really only talk to 256K words of memory at one time. How can the KI use 4MW of memory? That is the Pager’s job! The Pager’s job is to translate the logical address that the CPU provides into a physical address in a hopefully larger memory. While running diagnostics, the Pager should be turned off, resulting in a maximum of 256KW of memory directly addressed from the MA register to the address lines going to memory. Something was going wrong here!
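
In rough Perl-flavored pseudocode, the Pager’s job amounts to something like this (a sketch of the idea only, using the KI’s 512-word pages; $page_table here is a stand-in for the real translation hardware):

# Sketch of the idea only: with the Pager off, the physical address is
# just the MA register; with it on, the upper bits pick out a page
# (512 words on the KI) and the page table supplies the physical page.
# $page_table here is a stand-in for the real translation hardware.
sub physical_address {
    my ($ma, $paging_on, $page_table) = @_;
    return $ma & 0777777 unless $paging_on;          # pager off: straight through
    my $page   = $ma >> 9;                           # which 512-word page
    my $offset = $ma & 0777;                         # word within the page
    return ($page_table->{$page} << 9) | $offset;    # physical page from the table
}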

I added another set of 8 probes from the logic analyzer, and started moving backwards from the physical address going to the memory, to where the MA register fed into the Pager. When I got to the output of the CAMs, there was something I didn’t understand.

What is a CAM, you ask? CAM stands for “Content Addressable Memory”. What you do is give it the logical address that you want, and it will tell you whether it knows about that address, with a single line for each location inside itself. All four of them.

I got lucky: the first group of 8 output bits looked like this:

536691 wr 360631, 360, 400
536793 wr 362631, 362, 100
536844 wr 363631, 362, 040
536896 wr 364631, 364, 020
536947 wr 365631, 364, 010
536999 wr 366631, 366, 004
537052 wr 367631, 366, 006

Near as I can tell, there should be only a single 1 in the right column. It is octal, so we can watch which location in the CAM has the data as the addresses change, and when we get to 367631, we get two ones! I believe that output should have been a 002, not 006!
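
A quick way to scan the whole reduced trace for that condition (a throwaway check against my ad-hoc trace format above):

#!/usr/bin/perl
# Throwaway check against the reduced trace above: the last octal column
# is the CAM match lines, and it should never have more than one bit set.
use strict;
use warnings;

while (<>) {
    my ($sample, $op, $addr, $sel, $match) = split /[,\s]+/;
    next unless defined $match;
    my $m = oct($match);
    print "multiple CAM hits at $addr: $match\n" if $m & ($m - 1);
}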

That output came from board 2PR09, so I swapped it and 2PR08, and then I couldn’t run the diagnostic at all due to a “Page Fail Trap Error”! Ah, I think we are very close here! I checked the inventory, and we didn’t have a record for an M260 board, so I stole one from one of the machines that came in in September, and voila, the memory test passed! It can even run TOPS-10 if we don’t try to initialize its serial ports. That could be expected, since we stole a bunch of its serial lines to use on KATIA while the KI was asleep.

OK, since the KI, the KA, and the CDC are all working, I seem to have made it out of Computer Maintenance Hell for now. Give them a little while, and one of them will fail.

Bruce Sherry

Adventures in PDP10 land

The PDP10-KI went down sometime in the fall, maybe October. This is the machine just to the right of the CDC6500 as you come into the second floor computer room. I noticed this fairly quickly and tried to reboot it several times, but it would hang every time.

OK, it must be time to run diagnostics, which I proceeded to do. It passed all the diagnostics from DBKAA to DBKAH, which are the ones on paper tape, but the TD10 DECtape controller, just to the right of the console, had quit again, as it has done almost every time I need to use it.

This TD10 and I just do not get along very well. It always kicks me around the block several times before it will let me know what is wrong, so I can fix it. Because of this history, it sometimes takes a while for me to generate the gumption to work on it. This time was no exception, since I was busy trying to get the KA working. If you don’t know about gumption and gumption traps, you should read the classic “Zen and the Art of Motorcycle Maintenance”, which doesn’t really talk about motorcycles, or Zen much.

Back to our story: We fixed the KA, moved it down to the second floor computer room, fixed it again, and had a small gathering to boot it the first time. The CDC was being its normal self, but was refusing to be down when I arrived at work, which is my signal to drop everything and work on it.
I was running out of reasons to avoid working on the KI.

Contrary to normal behavior, it only took a few hours to figure out the problem with the TD10, so I could run the diagnostics that are very awkward to load from paper tape.

All the normal diagnostics ran except for DBKAL. I think it took a few days to remember that there is a diagnostic for which the binary we have is wrong, and I have to patch a couple of locations after loading in order for it to work. Now DBKAL and DBKAM work. On to some more obscure ones. All the CPU ones seem to pass; does it boot now? No, it still hangs after the OS is loaded and we type “GO” to fire up timesharing.

What else can we test? We ran DDRHA, which tests the RH10 disk controller, but it passed. We then ran DDRPI, which tests the disk drives. Now we don’t really use the disk drives it is expecting to test; we use our MDE (MASSBUS Disk Emulator). We have been using this MDE for about 5 years now, but there could still be a bug hiding in there somewhere.

DDRPI looked like everything was fine for about 20 minutes, while it was doing register tests, seek tests, all ones and zeros tests. When it got to testing the surface of the disk, things started to go wrong. It would get an error where it looked like the data was misplaced, like it was reading the wrong sector or something like that.

How could that be? This thing has been working fine for over 5 years, in fact when we ran DDRPI from the KA, using the KI’s Memory, RH10s, DAS33, and MDE, EVERYTHING was FINE, even the surface test!

How about the memory? It passes my little MARCH memory test, from the KI or the KA. Dragging out the DECTape again, we loaded up DDMMD, which is one of the memory diagnostics. We fired it up, and it ran fine for about 15 minutes, whereupon it started spewing out errors. We have run this a BUNCH, and the errors usually seem to start at location 374000, and the data seems to be inverted from what it should be. The test complains about address bit 24.

We run the same test from the KA against the KI’s memory, and it works fine! It is handy to have the KA right across the room to enable this kind of testing.

What is really going on here? Let’s look at the console:

OK, TN=2, that means it is doing the “Address” test. AS=F24 turns out to mean that it was doing fast addressing on address bit 24. “What does that mean?” you may ask. I did! After much grovelling over the DDMMD listing, and consulting with Rich Alderson, I found that they would fill memory with the address and its complement, and then go through reading a location, verifying it was correct, and writing the complement in it, then go back and read and verify that they all had the complement. Ah, but what about that F24 part? When they are reading and complementing the data, they start skipping locations by changing which address bit they increment first. The first time they do this, they use bit 35 as the LSB, but the next time through they shift the LSB over to bit 34, then 33, etc. When they get to bit 24, we run into this problem.
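
My mental model of that part of the test ended up as something like this (a Perl re-creation of my understanding, scaled down to an 18-bit address space; the real diagnostic is PDP-10 code and fussier than this):

#!/usr/bin/perl
# A re-creation of my understanding of the pattern (not the real
# diagnostic): fill memory with its address, complement every location
# while stepping the address with bit $p playing the part of the LSB,
# then read it all back.  Scaled to an 18-bit (256K word) address space;
# $p counts from the right here, where DEC would call the LSB bit 35.
use strict;
use warnings;

my $ABITS = 18;
my $SIZE  = 1 << $ABITS;
my $WMASK = 0777777777777;                        # 36-bit word mask

sub fa_order {                # visit order when bit $p acts as the LSB:
    my ($p) = @_;             # the counter is rotated so its carry
    return map {              # ripples upward from bit $p and wraps
        (($_ << $p) | ($_ >> ($ABITS - $p))) & ($SIZE - 1)
    } 0 .. $SIZE - 1;
}

my @mem;
$mem[$_] = $_ for 0 .. $SIZE - 1;                 # 1) fill, bottom to top
$mem[$_] = ~$mem[$_] & $WMASK for fa_order(11);   # 2) complement in "fast" order
                                                  #    (DEC bit 24 = bit 11 here)
for my $a (0 .. $SIZE - 1) {                      # 3) verify, bottom to top
    printf "miscompare at %06o\n", $a if $mem[$a] != (~$a & $WMASK);
}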

How do I figure out what is really going on here? I decided to write a version of my MARCH that does this, MRCHFA. It took a while, but I finally got it to work on the KA, and tried it on the KI: unfortunately, the KI passed it too! What else are they doing differently? OK, more grovelling over the listing: they are stuffing all their inner loops down in the Fast ACs to speed them up. On to MRCHF3, which pushes the inner loops down into the Fast ACs. Does the KI fail that one? Nope!

I’m running out of ideas; where do we go from here? I decide to just watch it for a while, and see what happens AFTER it starts to fail. I see it fail bit 24 from 374000 to 377777, then both bits 23 and 22 over the same range, then it starts failing location 700000 in the same way. Shortly thereafter, the program gives up in disgust, stops printing the results, and starts ignoring the errors. Now I just watch the lights on the ARM10 memory.

I got used to the way the lights blink while working on MRCHFA and MRCHF3: as the LSB moves up the address, the slow blinking address follows it.

But wait, that isn’t what I am seeing: I see the lights increment from the bottom, do the FA thing, then increment from the bottom again, and then shift the LSB over. What is going on here? Is there anything on the ARM10 that will tell me anything? Yes, there are the read and write lights. While writing, both lights come on, but a read only lights the read light. After the FA thing, it just reads! Back to grovelling over the listing some more.

OK, what they do is: fill memory from the bottom to the top, check and complement using the shifting LSB, and then check from the bottom to the top. That became another new test, called THEIRS, which of course doesn’t catch the problem either. I am running out of hair to pull out here!

As I write this, both machines are running DDMMD against the other’s memory, happily, with no errors. No happy ending… YET!

Letting the cat out of the bag!

Back in September 2018, we got a new addition to the LCM+L computer collection, but we didn’t talk about it. This was something that we had been looking for for 10 years or more. We knew where a couple of these machines were, but were not able to convince the owner of that collection to part with one. We kept looking, but the ones this collector had were the only ones we could find. Well, that isn’t quite true: Stephen Jones and I went down to the San Francisco Bay Area to visit the Computer History Museum, and photograph one they had in their warehouse, but they weren’t willing to part with it. We thought maybe we could build one.

Stephen went to Australia on a hunt, and came back with some bits of one of these, but not a whole machine.

Stephen finally got the owner of the big collection in Sweden on the phone, and got him, Peter Lothberg, to talk to us again about his collection in late spring last year. A bunch more negotiations happened. Paul Allen was brought in, and a contract was signed.

A couple of our Archivists, Cynde Moya and Amelia Roberts, along with Stephen Jones and Jeff Kaylin from Engineering, went to Stockholm in August to go over the whole collection, pack up everything Peter was willing to sell us, put it in containers, and get them on a ship.

In September, the containers came in, and around 8 one morning, the first container arrived at our loading dock. Stephen sent Paul Allen a picture of the open container, and he was here by 8:30. He was very excited, because he had been waiting for one of these for over 10 years. He asked Stephen “Can you have it working in 3 weeks?” Stephen said something like “How about by the end of the year?”

Meet KATIA:

KATIA: a PDP10-KA, Serial Number 175.

The first thing we had to do to KATIA to get it running was to upgrade the power supplies. For a lot of our older mainframes, we have decided to leave the old supplies in place, and just move the wiring over to new efficient and reliable switching power supplies. David Cameron and I spent around a week figuring out how to do this, this time around, and I think it looks pretty good!

KATIA Bay 1 power supplies (sideways).

The silver bits are the new supplies and their mounting plates, mounted either in front of, or next to, the original power supply filter capacitors.

We also had to upgrade the power supplies in the memory. We had designed new modules to replace the old ones, but the parts were scarce, so we just replaced all the old aluminum electrolytic capacitors in the existing modules.

We re-configured the main blowers to run on lazy American electrons (120V), as opposed to those very energetic (240V) European electrons that they were set up for.

We powered up the machine, and I started into the debug process, beginning by adjusting all those new power supplies so that all the correct voltages appeared at the correct places, and they all shared the load as best I could.

Now, what can it do? One of the first things we would need is to be able to read diagnostics on the paper tape reader. With the KI, I had discovered this process requires the adder part of the CPU to work, so I fired up the Way-Back machine to recover how I had tested the KI back then. It only took two instructions, an ADDI (add immediate) and a JRST (jump). But wait, they have to be stored somewhere! I had to fix some memory.

In PDP10s there are two kinds of memory: Accumulators (the first 16 locations of memory) and main memory. The Accumulators in most PDP10s live in special logic inside the CPU, called “Fast Accumulators”. I had to steal a board from one of the other 2 KAs that came with KATIA in order to get the Fast ACs to work.

I then started poking at the MG10 128K word memory box we had hooked up to KATIA. I got 32K of it working, and that was enough to run the paper tape diagnostics. Had to replace the light bulb in the card reader.

Now to check the adder: I loaded the two instructions with the front panel, and no, the adder wasn’t working. It couldn’t propagate a carry past bit 20. After a bunch of poking around I found that there was some kind of a connection problem between cards 2A29 and 2A30, which are in chassis 2 (the middle one), on the top row, about 3/4 of the way from the left. Swapping them and swapping them back seems to fix it.

After about another 4 weeks, with plenty of tearing of hair and gnashing of teeth, lots of board swapping and transistor changing, I finally got KATIA to run the KA versions of diagnostics that we run from paper tape on the KI! From that point the tapes get too big for the paper tape reader to handle easily, so on the KI we run those from the DECtape drives.

It took another couple of weeks to get the DECtape drives and controllers working on both the KA and KI. We needed the KI to work so that we could write the proper diagnostics onto a DECtape for KATIA to read. The problem with that is that the KI hasn’t been able to boot in a few months. It passes all the paper tape diagnostics, but there is something fishy with the disk interfaces. Bother!

I fixed the rest of the MG10 so we had 128K words of memory working.

While I am working on the KI on a Thursday afternoon, it gets decided we should get KATIA down from the 3rd floor and on display in the 2nd floor computer room before opening on Friday! This involves detaching the DECtape cabinet from one end, the memory from the other end, and disconnecting a whole bunch of cables that go between chassis 3 and the other two chassis. We also have to make space by moving the IBM 360/30 out of where we want to put KATIA.

We start by disconnecting all the required stuff Thursday afternoon, and getting the 360/30 put back together enough to move, and we move everything.

By Friday all KATIA’s boxes have power again, and I carefully put all the cables back where they came from the day before. Half of the Museum shows up for powering up KATIA in its (her?) new home. I turn it on, and poke the Read-In button to load up my simple memory diagnostic, and… nothing happens!

Has the adder stopped working again? Well, yes. I wiggle the cards, and voila, KATIA can add again. Yay! I poke the Read-In button again and… not nothing! The tape goes through the reader, but the lights keep incrementing, even after the end of the tape has fallen out of the reader. THAT is not Correct!

Finally, on Friday morning two WEEKS later, I have grovelled over the gospel according to DEC, and abased myself before the computer gods enough to almost understand how Read-In is supposed to work. Hmm, is that a string to pull on?

Read-In works by loading a BLKI instruction into the instruction register of the machine and executing it. PDP10s are NOT Reduced Instruction Set Computers! The BLKI instruction reads a memory location, a pointer, adds 1 to each half of the data there, and puts it back in memory. It then does the I/O implied by the “I”, using the right half of that memory location to point at where in memory to put the data it read from the paper tape reader. Then it looks at the left half of the pointer: if it is zero, it skips the next instruction, and if it isn’t, it executes the next instruction.
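
In Perl-ish pseudocode, one step of that dance, as I just described it (a sketch, not what the hardware actually does internally):

# One step of the dance, as described above (a sketch, not the hardware):
# @$mem is a hypothetical memory image, $ptr_loc the address of the
# pointer word, $word the next word from the paper tape reader.
sub blki_step {
    my ($mem, $ptr_loc, $word) = @_;
    my $ptr   = $mem->[$ptr_loc];
    my $left  = ((($ptr >> 18) & 0777777) + 1) & 0777777;  # add 1 to each half
    my $right = (( $ptr        & 0777777) + 1) & 0777777;
    $mem->[$ptr_loc] = ($left << 18) | $right;             # put the pointer back
    $mem->[$right]   = $word;                              # store the word read in
    return $left == 0;    # left half zero: skip the next instruction (see above)
}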

As I was watching it try to do this with the oscilloscope, it didn’t appear that the part where it was reading the pointer location was taking as long as it should have, considering that pointer was located in core memory at the time, and reading and writing it should have taken a whole micro-second!

The I/O instruction was decoded over in chassis 3, but the instruction timing and memory control were over in chassis 1. Eventually I noticed that the part of the instruction it was skipping was the memory access, which was supposed to be started by a signal called “IOT BLK”, which wasn’t making it to chassis 1. I could see it in chassis 3! The chassis 3 end of the cable had been taken loose to move the machine; let’s take a look!

Cable 3E06. Note broken resistor near the bottom middle of the card.

I replaced the broken resistor, but it still didn’t work! OK let’s look farther up the cable, did anything else happen in the move?

3E06 to 1L44 cable.

That doesn’t look right! I suspect that somehow, in the move, this cable escaped momentarily enough to get caught under a caster and rubbed away, because it used to work, and the signal in question is one of the top two signals in the cable. The consequences of doing things in a hurry.

Do I pull this cable from one of the other machines we have or do I try to repair it?

Cable repaired.

As soon as this was done, KATIA went back to behaving for me. I ran all the diagnostics we had run, and a few more that we ran from DECtape on the KI, till I got to the one that also failed on the KI. We found the locations in this one that we had to patch for the KI, and it now runs too!

Did I get it done in 3 weeks? Unfortunately no, so Paul Allen didn’t get to see it run, and play with it. He must have known back when he asked about 3 weeks, because it was very close to three weeks from then that he left us.

KATIA: For you Paul! Sorry I didn’t get it working in time.

Bruce Sherry

CDC Cooling update

In my last blog on the CDC, we were having a slow refrigerant leak in Bay 1, and we were waiting for parts. The new parts came, but they were wrong, then they came again, and they were wrong again. Eventually the right parts were delivered, so we took the CDC down yesterday morning to work on the cooling system.

Cole, from Hermanson, arrived at 7AM, and we proceeded to shuffle some of the chiller plumbing around so I will always be able to restart the computer after a power fail. It took about an hour, and seems to work fine!

The old compressor in the basement, showing where the power goes in.

A little later, Jeff Walcker, also from Hermanson, arrived to install the new compressor parts that had come in. We opened the 3-phase 60Hz breaker for Bay 1, and Jeff started unhooking the power. We had discovered that it was leaking where the compressor power wires go into the case. As Jeff was taking it apart, all the little bits were turning into powder. The compressor is 52 years old. It was decided that we needed a new compressor. Bummer!

Contrary to recent experience, not only was a replacement compressor available, it was in stock, IN SEATTLE! Jeff was back from getting it almost before we were done deciding that was what we should do! Compared to 102 days for the chiller, this is amazing.

He did a bunch of wrestling with the compressors to get the new one in, and we don’t have the motor cooling loop hooked up yet, because the pipe didn’t want to go on the new one quite the same as it came off the old one.

New compressor installed.

You can see the disconnected copper cooling loop wrapped around the compressor in the above picture. Jeff will bring a slightly longer hose when he comes for our quarterly maintenance next month.

The CDC is happy again, and hopefully not leaking!

Life with a 51 year old CDC 6500

The CDC 6500 has led a rough life over the last 6 months or so: way back on the afternoon of July 2, 2018, I got an email from the CDC’s Power Control PLC telling me that it had to turn off the computer because the cooling water was too hot! A technician came out and found that the chiller was low on refrigerant. He brought it back up to the proper level, and went away. Next morning it was down again.

After much gnashing of teeth and tearing of hair, it was determined that the compressor in the chiller was bad. “We’ll have a new one in 5 weeks!” The new one turned out to be bad too, so another, easier to get, was ordered. That one took only about 3 weeks, instead of the 8 that the official one took. That worked for a few weeks, and the CDC went down again because the water was too hot.

This time it was very puzzling because as long as the technician was here, it worked fine. He spent most of a morning watching it, decided it was OK, and left, but he didn’t make it to the freeway before it went down again. He came back, and watched for the rest of the afternoon, and found that the main condenser fan would overheat, and shut down, causing the backup fan to come on. The load wasn’t very high, so the backup fan had to cycle on and off, while the main fan motor cooled off. This would go on for a while, till both motors were off at the same time, and then the compressor would go over pressure because the condenser fans were off, and the chiller would stop cooling, resulting in the “Water too HOT” computer shutdown.

Another week went by waiting for replacement fan motors from the chiller manufacturer, with no luck. Eventually we gave up and got new fan motors locally, installed them, and the chiller has been working since. While the CDC didn’t seem to mind being off for 102 days for the compressor problem, it didn’t like being off for 3 weeks while we fiddled with the fans.

Both when it was off for 102 days, and this time, we found that Bay 1 was low on refrigerant. The first time we just filled it up, but the second time we looked closer, and found that there is a small leak where the power wires go into the Bay 1 compressor. The compressor manufacturer, the same guys that made the chiller’s compressor, will gladly sell us a new compressor, but the parts for the 50-year-old R12 compressor are no longer available. We are working on that, but I haven’t heard that we have found the parts yet.

Back to more recent times: now that the chiller is chillin’, and the CDC’s cooling system is coolin’, why isn’t the computer computing?

Let’s run some diagnostics and see what happens: I try to run my CDC diagnostic tape, but the machine complains that Central Memory doesn’t seem to be available. No, I didn’t run the real tape drives; I ran the imaginary one that uses a disk file on a PC to pretend to be a tape drive. Anyway, that didn’t work, so I flip the zillion or so Dead Start switches in my emulated Dead Start panel to fire up my Central Processor based memory test, and get no display at all! This is distinctly unusual. Let’s try my PP based Central Memory test: that seems to work till it finishes writing all of memory, then the display goes blank. Is there a pattern here?

I put a scope probe on the memory request line inside the memory controller in Chassis 3, and find that someone is requesting memory every possible cycle. There are four possible requestors: the Peripheral Processors as a group get a request line, each Central Processor gets a request line, and the missing Extended Core Memory gets a request. Let’s find out who it is: the PPs aren’t doing it, neither of the CPs is doing it, and the non-existent ECM isn’t doing it. Huh? Nobody wants it, but ALL of it is getting requested!

I am going to step back a little bit, and try to explain why it sometimes takes me a while to fix this beast. This machine was designed before there were any standards about logic diagrams. Every manufacturer had to come up with their own scheme for schematics. Here is one where I found a problem, but we will get to that in a bit.

Now when there are two squares with one above the other, and arrows from each going to the other, those are flip-flops. When you have a square, or a circle, with multiple arrows going into it, that is a gate. Which one is an “or” gate, and which one is an “and” gate? Sorry, you have to figure that out for yourself, because the CDC documentation says either one can be either one. The triangle with a number in it would be a test point on the edge of the module. The two overlapping circles, kind of like an ellipsis, indicate that it is a coax cable receiver, as opposed to a regular twisted pair signal. A “P” followed by a number indicates a pin of the module.

This module receives the PP read and write signals from the PPs in chassis 1, on pins P19 and P24. On the right side of the diagram, you can see where all the pins connect. If we look at pin 24, we can see it connects to W07 wire 904, and pin 19 is connected to W07-903. The W “Jack”s are coax cables; the other letter signals go somewhere inside this chassis.

Really, what we are looking at here is that a circle, or a square, is the collector pull-up resistor of one or more silicon NPN transistors. The arrowheads are the bases of the transistors, and the line coming into the head has a base resistor in it. If there are three arrows coming into a square, like at the bottom, those three 2N2369 transistors all have their collectors tied together, with one pull-up resistor. I could be slow, because it took about 6 months before I felt I was at all fluent in reading these logic diagrams.

Now we have to talk about the Central Memory Architecture a bit. The CDC has 32 banks of 16K words of memory. Each of these banks is separate, and they can be interleaved any way the 4 requestors ask for them. At the moment, I am only running half of them, because there is something wrong with the other half. Each of these banks does a complete cycle in 1uS. The memory controller in chassis 3 can put out an address every 100nS, along with whether it is for a read or a write. This request goes to all banks in parallel. If the bank that it points to is available, he will send back an “accept” pulse, and that requestor is free to ask for something else. If the controller doesn’t get an “accept” he will ask again in about 400nS. There is a bunch of modules involved in this dance, and it is a big circle.
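
A toy model of that dance, using the numbers above (idealized, and ignoring the priority and interleave logic entirely):

# Toy model of the dance, using the numbers above: a bank ties itself up
# for 1000 nS once it accepts, and a requestor that gets no "accept"
# tries again about 400 nS later.  The priority network and the
# interleave logic are ignored completely.
my @busy_until = (0) x 32;                 # per bank: time (nS) when it frees up

sub request {
    my ($bank, $now) = @_;
    return 0 if $busy_until[$bank] > $now; # busy: no accept, caller retries
    $busy_until[$bank] = $now + 1000;      # accept: bank starts a 1uS cycle
    return 1;
}

my $t = 0;                                 # one requestor hammering on bank 7
$t += 400 until request(7, $t);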

A little more background: this machine was designed before there was such a thing as plated-through holes on printed circuit boards. The two boards in each module were double sided. What they did when they needed to get to the other side of a PCB was to put a tiny brass rivet in the via hole, and solder both sides.

What I eventually found was that the signal from P23 of the module in 3L34 wasn’t making it to pin 15! There was a via rivet that wasn’t making its connection to the other side of the board. I re-soldered all the vias on that module, and now we were only requesting memory when someone wanted it!

Now that we can request memory and have a reasonable chance of it responding correctly, it is on to testing memory. I loaded up my CP based test, and it ran… for a while. Then it quit, with a very strange error. The test uses a single bit, and its complement, to check the existence of every location of memory. It will read a location, compare it with what should be there, and put the difference in a second register. Normally I would expect a single bit error, or maybe 12 bits if a module failed that way. The result looked like 59 bad bits, or the error being exactly the same as what it read. Usually this is because the CPU that is running the test is mis-executing the compare instruction.

While I was thinking about that, I ran Exchange Jump Test to see what that said. A PP can cause a CP to swap all its registers, including the Program Counter, with the contents of some memory that the PP points to. This is called an Exchange Jump. The whole process happens in about 2.6uS as it requests 16 banks of memory in a sequence. This works the memory pretty hard. Exchange Jump Test (EJT) would fail after a while, and as I looked at the results, I noticed that it was usually failing a certain bit in bank 7. I checked, and noticed it was an original memory module, so I looked at my bench and found I didn’t have any new ones assembled, so I had to put the sides on a couple of finished PCB assemblies, and test them. I then swapped out the old memory in bank 7 with a new semiconductor memory, and EJT passed!

I then checked to see if my CP based memory test worked, and it did too. We are back in business after over 5 months. I am keeping my fingers crossed in the hope that the chiller stays alive for a while.

Bruce Sherry

Testing old technology

I have 4500 modules in the CDC 6500, and it isn’t always easy to debug them in the machine, because convincing the machine to wiggle its lines so I can check each transistor on a particular module is difficult.

In order to make this problem a little easier, I have built a cordwood module tester:

It has taken a couple of revisions to get the pin drivers correct, but it does useful things for me when I need it to. The three cables going out the back go to a Mesa Electronics 7i80 FPGA board, which I use for driving signals to the module under test. I have written some hardware for the FPGA, in VHDL, that knows how to talk to the tester board.

I have also written an assembler in Perl to assemble the stimulus commands I write into the appropriate VHDL that gets included in the FPGA. Here is a sample of what I feed into the assembler:

# P14 = T70 negative pulse
# P17 = clock gate
# set the pins:
p1=h p2=h p3=h p4=h p5=h p6=h p7=h p8=h p9=h p10=h p11=h p12=h p13=h p14=h p15=h p16=h p17=h
p18=h p19=h p20=h p21=h p22=h p23=h p24=h p25=h p26=h p27=h p28=h
# scope sync pulse on unused pin
p1=h
p1=L
# run clocks for a bit...
P17=h
P14=L
P14=h
P14=h
P14=h P17=L
P14=L

Each line represents what it should do for the next 20 nanoseconds. This results in a table that gets glued together with some header and trailer bits to create the VHDL input file. Here is the output listing:

 # P14 = T70 negative pulse
 # P17 = clock gate
 # set the pins:
 0 0ns 0xffe0000 p1=0 p2=0 p3=0 p4=0 p5=0 p6=0 p7=0 p8=0 p9=0 p10=0 p11=0 p12=0 p13=0 p14=0 p15=0 p16=0 p17=0
 1 20ns 0x0000000 p18=0 p19=0 p20=0 p21=0 p22=0 p23=0 p24=0 p25=0 p26=0 p27=0 p28=0
 # scope sync pulse on unused pin
 2 40ns 0x0000000 p1=0
 3 60ns 0x0000001 p1=1
 # run clocks for a bit...
 4 80ns 0x0000001 P17=0
 5 100ns 0x0002001 P14=1
 6 120ns 0x0000001 P14=0
 7 140ns 0x0000001 P14=0
 8 160ns 0x0010001 P14=0 P17=1
 9 180ns 0x0012001 P14=1
 10 200ns 0x0010001 P14=0
 11 220ns 0x0010001 P14=0

The listing shows the memory location, what time that instruction should execute, the actual bits in hexadecimal notation, and what the source file said.
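
The guts of the assembler are about what you would expect (a stripped-down sketch; the real script also writes the VHDL header and trailer, and its pin-to-bit mapping and output format have a few more wrinkles):

#!/usr/bin/perl
# Stripped-down sketch of the assembler's main loop: keep a 28-bit pin
# state, update it from each "pN=h/L" assignment, and emit one table
# entry per 20nS step.  The pin-to-bit mapping (p1 = bit 0) is assumed,
# and the VHDL header and trailer are left out.
use strict;
use warnings;

my $state = 0;
my $step  = 0;

while (my $line = <>) {
    chomp $line;
    if ($line =~ /^\s*#/ or $line !~ /\S/) {    # pass comments and blanks through
        print "$line\n";
        next;
    }
    for my $field (split ' ', $line) {
        my ($pin, $level) = $field =~ /^[Pp](\d+)=([HhLl])$/ or next;
        my $bit = $pin - 1;
        if (lc($level) eq 'h') { $state |=   1 << $bit  }
        else                   { $state &= ~(1 << $bit) }
    }
    printf " %d %dns 0x%07x %s\n", $step, $step * 20, $state, $line;
    $step++;
}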

This handy little tool has enabled me to fix 4 out of 5 of the UA modules I had replaced in the CDC, and to know what was wrong with the other one.

New peripherals for old Computers

Five years ago, when we were getting done with restoring our PDP10-KI, we were running out of working disk drives to run it from. We were down to one set of replacement heads and two working drives, and we didn’t have a source for new ones. We found some folks who said they could rebuild the packs, but it turned out that they couldn’t re-write the servo surface, so if we lost that we were in trouble.

Alright, what else can be done? The Digital RP06, the drive of choice for the KI, has lots of registers available from the MASSBUS. The MASSBUS is kind of like a UNIBUS, with a synchronous data channel for moving the actual data. We had been having difficulty keeping track of everything on an existing project, so I looked into doing things a different way.

My idea was to use an FPGA (Field Programmable Gate Array) to emulate the behavior of the control unit inside the RP06. This gives you the ease of writing software, but for hardware: no wires to change, no cuts, when I mess up the logic. The PC would be responsible for handling the actual data for the disk, or possibly tape.

I spent a while poking around the Internet looking for an FPGA card that would plug into a PC. There were a lot of expensive, and some less expensive evaluation boards out there. I eventually happened across http://mesanet.com/. I had heard of these folks before from my experiences with LinuxCNC, which runs my milling machine at home. These folks have been doing this for a long time, and since they have to deal with industrial environments and big motors, their products are very robust.

For the MASSBUS Disk Emulator (MDE), I chose the Mesa 5i22 card, which has 96 I/Os I can play with, and a Spartan-3 Xilinx FPGA. The 5i22 doesn’t remember what the Xilinx configuration is, so the PC pours in the correct bits each time.

Bob Armstrong, down in Silicon Valley, wrote all the software for the PC, and we eventually emulated RP06s, RP07s (which hold twice the data), and TU77 tape drives. Here is a picture of our main collection of MDEs.

There are 3 Industrial PCs and 8 MASSBUS Cable Driver/Receiver boxes. These are running both PDP10-KLs and the PDP10-KSs. There is a real TU78 tape drive to the left of the MDE rack. The PDP10-KI and the PDP11/70 each have their own, located elsewhere in the museum.

I also used the 5i22 for all of the emulations that we needed for the CDC6500:


Here we have one Industrial PC and six 6000 series channel attach Driver/Receivers, along with one 3000 series channel Driver/Receiver. We emulate the dead start panel, the DD60 display, tape drives, disk drives, printers, card readers, card punches, the serial terminal interface, and the 6681 channel converter so we could talk to the real 405 card reader. Bob Armstrong also wrote the PC code for all these emulations.

Jeff Kaylin has also used 5i22 cards on the Sigma 9, with Bob doing the software, and Craig Arno and Glen Hermannsfeldt used one to emulate the card reader and punch on the IBM 360/20.

All is not sweetness and light, however: after making the 5i22 for over 10 years, the parts are getting hard to obtain, so Mesa Electronics has stopped production of that board. We ordered all their remaining stock of 5i22s. They make lots of other similar boards, so we also ordered some 7i61s and 7i80s to play with.

The 7i61 uses USB to talk to a PC, and has 96 I/Os to play with. The 7i80 uses Ethernet to talk to the PC, but only has 72 I/Os. To conserve 5i22s, I converted my CDC 6681 6000-to-3000 channel converter to the 7i61, because it needs all 4 cables for 96 I/Os. I used my code, along with open source code from Peter Wallace of Mesa Electronics, to load my code into the serial flash chip on the 7i61, so the PC is no longer involved in the 6681 emulation. After turning on the power to the Mesa card, it knows how to be a 6681 automagically! I no longer have to remember to type the proper incantation into the PC to get it loaded up; this is a GOOD thing!

After getting the CDC 6500 working, I had several broken modules that I wanted to fix, so I built a Module tester, using the 7i80 card:

You can see the 7i80-HD card under the cables down from the test card.

It took me a while to collect the appropriate bits of Peter Wallace’s code, so that I could have the Ethernet interface live in with my test code, all at the same time, but perseverance pays off, and it all works now. I have fixed 4 out of 5 broken UA modules, and I know why I am not going to fix the other UA, and the ED module that I replaced in CP1.

I have to confess: the module plugged into the tester is not really from the 6500, but it is the same form factor and technology.

I love these Mesa Electronics “Anything I/O” cards! As their name says, I can teach them to be most Anything!

Bruce Sherry 20180418.

That Pesky PS Module!

When we last left our hero, he had re-soldered all the via rivets on one of the 510 “PS” core memory sense amplifier modules in the CDC6500, and the machine was working.

That lasted about a day, and the memory went away again. What was wrong this time? You guessed it, bit 56 in bank 36 was bad again. Third time is the charm: I am going to replace this module! I head off looking for a spare PS module. Where did we put all those spare parts we got with the machine? Oh wait, we didn’t get any spares with the machine. Bummer!

This is where I get to practice my “MAD Skillz”, and make some spare PS modules. What does a PS module look like? What does CDC give me?

On the left side we have an actual schematic of one of the 4 amplifiers on the module, YAY! Having been around this block before, I take apart the offending module and check to see if it matches. Wiring wise, yes it matches, but the values have been changed to protect the innocent.

After a while playing with the newest version of Eagle, I ended up with this:

The easy part is done; now the fun starts! The circuitry for the module is split between two printed circuit boards: one that connects to the odd pins on the connector and has the odd-numbered transistors, and one for the even bits. Eagle really doesn’t understand this, so I have to fool it. First I put test points on each side of all components that go between the boards. I then have to add in the wire jumpers that also go between the boards, and I end up with another schematic:

I take this schematic and duplicate it into odd and even sides, then I write an Eagle script to delete all the between-boards components and all the test points that belong on the other board. Here is what the schematic for the odd board looks like; pretty terrible:

Now I have two schematics: PS_O and PS_E, and I do the PC layout thing. I have the original boards to use as an example, which I follow very closely so that timing and signal integrity will be as close as I can to the original module. Here are pictures of the odd and even layouts:

But wait, I’m not done yet! Remember Eagle doesn’t understand the whole module. I now have to verify that the two boards, together with all the components that go between, match that original schematic I started with.

I go over every line on the board pair and the schematic, highlighting them as I go, until EVERYTHING is highlighted.

Done yet? Grumble, grumble: No! I have forgotten to identify which ends of the diodes and polarized capacitors have the band on them! If I want them assembled properly, I guess I should do that before they go out to FAB!

Back to the PC mines…

Bruce Sherry 20180201 10:57AM

Chasing the Pesky Ratio!

It seems like I did something really silly! I had to come up with some goals for 2018. I hate this time of year; I think everybody does. OK, what can I put down that is measurable and achievable? How about keeping the CDC6500 running more than 50% of the time? That might work. Oops, did I hit the send button?

“Hey, Daiyu: How do I tell what users have been on the machine?” Daiyu Hurst is my systems programmer, who lives Back East somewhere. If it is on the other side of Montana from Seattle, it is just “Back East” to me. She lives in one of those “I” states, Indiana or Illinois, not Idaho; I know where that is. After a short pause, she found the appropriate incantations for me to utter, and we have a list of who was on the machine and when they logged in. I had to use Perl to filter out those lines, but that was pretty easy.

What is all this other gobble-de-gook in this file:

 03.14.06.AAAI005T. UCCO, 4.096KCHS. 
 03.16.09.AAAI005T. UCCO, 4.096KCHS. 
 03.18.11.AAAI005T. UCCO, 4.096KCHS. 
 03.20.12.AAAI005T. UCCO, 4.096KCHSUCCO, 4.096KCHS. 
 05.00.30.SYSTEM S. ARSY, 1, 98/01/24.
 05.00.30.SYSTEM S. ADPM, 11, NSD.
 05.00.30.SYSTEM S. ADDR, 01, LCM, 40.
 05.00.44.SYSTEM S. SDCI, 46.603SECS.:
 05.00.44.SYSTEM S. SDCA, 0.032SECS.:
 07.32.30.SYSTEM S. ARSY, 1, 98/01/24.
 07.32.30.SYSTEM S. ADPM, 11, NSD.
 07.32.30.SYSTEM S. ADDR, 01, LCM, 40.
 07.33.07.AAAI005T. ABUN, BRUCE, LCM.:
 07.33.37.SYSTEM S. SDCI, 116.108SECS.:
 07.33.37.SYSTEM S. SDCA, 0.078SECS.:
 07.33.37.SYSTEM S. SDCM, 0.005KUNS.:
 07.33.37.SYSTEM S. SDMR, 0.004KUNS.:

The line with “ARSY” in it is when I booted the machine, at 5:00 this morning, from home. It crashed before I got in, and I booted it again at 7:32. Then we get to 7:33:07 and the “ABUN” line, where I log in from telnet.

From the first few lines we can see that the machine appeared to still be running and putting things in its accounting log at 3:20, but it crashed before it could print a message at about 3:22.

OK, from this I can mutter a few incantations at Perl, and come up with something like:

1054 Booted on 98/01/23 @ 07.39.30
 Previous uptime: 0 days 5 hours 59 minutes
 Down time: 0 days 17 hours 28 minutes
 1065 Booted on 98/01/23 @ 13.38.30
 Previous uptime: 0 days 1 hours 23 minutes
 Down time: 0 days 4 hours 35 minutes
 1068 Booted on 98/01/23 @ 14.12.30
 Previous uptime: 0 days 0 hours 0 minutes
 Down time: 0 days 0 hours 33 minutes
 1392 New Date:98/01/24
 1498 Booted on 98/01/24 @ 05.00.30
 Previous uptime: 0 days 13 hours 7 minutes
 Down time: 0 days 1 hours 40 minutes
 1503 Booted on 98/01/24 @ 07.32.30
 Previous uptime: 0 days 0 hours 0 minutes
 Down time: 0 days 2 hours 31 minutes

Last uptime: 0 days 0 hours 1 minutes

Total uptime: 2 days, 1 hours 37 minutes in: 0 months 7 days 0 hours 14 minutes
Booted 15 times, upratio = 0.29

Here is where the hunt for the Pesky Ratio comes in: See that last line? In the last week, the CDC has been running 29% of the time. That isn’t even close to 50%. I KNOW the 6000 series were not the most reliable machines of their time, but really: 29%?
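
For the curious, the Perl is roughly this shape (a simplified sketch; the real dayfile, with its date rollovers, needs more care than this):

#!/usr/bin/perl
# Rough shape of the incantation (a sketch; the real dayfile, with its
# date rollovers, needs more care): every ARSY line is a boot, and the
# last timestamp seen before it is when the previous run died.
use strict;
use warnings;

my ($up, $total, $boots) = (0, 0, 0);
my ($last, $boot);

while (<>) {
    my ($h, $m, $s) = /^\s*(\d\d)\.(\d\d)\.(\d\d)\./ or next;
    my $t = $h * 3600 + $m * 60 + $s;               # seconds since midnight
    if (/ARSY/) {                                   # a boot record
        $boots++;
        $up += $last - $boot if defined $boot && defined $last && $last >= $boot;
        $boot = $t;
    }
    $total += $t - $last if defined $last && $t >= $last;
    $last = $t;
}
printf "Booted %d times, upratio = %.2f\n", $boots, $total ? $up / $total : 0;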

What has been going on? A week ago, I was having trouble keeping the machine going for more than a couple of minutes. Finally, it occurred to me that I might see how the memory was doing, and it wasn’t doing well. It took me a while to find why bit 56 in bank 36 was bad. I had to explore the completely wrong end of the word for a while, before I realized that end worked, and I should have been looking at the other end. I chased it down to Sense Amplifier (PS) module 12M40. When I put it on the extender, the signal would come and go as I probed different places. I noticed that I had re-soldered a couple of via rivets before, so I re-soldered ALL the via rivets on the module.

What do I mean by “via rivets”? In those days, one of two things was true: either they didn’t have plated-through holes in printed circuit boards, or they were too expensive. None of the CDC 6500 modules I have looked at have plated-through holes. Most of the modules do have traces on both sides of the two PCBs that the module is made with. How did they get a signal from one side to the other? They put in a tiny brass rivet! Near as I can tell, all the soldering was done from the outside of the module, and most of the time the solder would flow to the top of the rivet somehow. Since I have found many of these rivets not conducting, I have to assume that the process wasn’t perfect.

After soldering all the rivets on this module, I put it back in the machine, and we were off and running. Monday, I booted the machine at 8:11, and it ran till 2:11. When I got in yesterday, the machine wouldn’t boot. Testing memory again found bit 56 in bank 36 bad again! I put module 12M40 on the extender, and the signal wasn’t there. I poked a spot with the scope, and it was there. I poked, prodded, squeezed, twisted and tweaked, and I couldn’t get it to fail.

This is three times for This Module! I like to keep the old modules if I can, but my Pesky Ratio is suffering here! I took the machine back down, and brought it back up with only 64K of memory, and pulled out the offending module:

There are 510 of these PS modules in the machine, three for each of the 170 storage modules, or about 10% of all the modules in the machine. Having a spare would be nice. My next task will be to make about 10 new PS modules.

In the time I have been writing this post, the display on the CDC has gone wonky again. This appears to happen when the Peripheral Processors (PPs) forget how to skip on zero for a while. Once this happens, I can’t talk coherently to channel 10 or PP11. I have a few little tests that copy themselves to all the PPs, and they will all work, except the last one: PP11.

I have yet to write a diagnostic that can catch the PPs making the mistake that I can see on the logic analyzer once a day or so. Right now the solution seems to be to wait a while, and the problem will go away again. This is another reason why the Pesky Ratio is so difficult to hunt, but I fix what I can, when I can.

Onward: One bug at a time!



The Hunt!

The CDC 6500 has been down since last Friday, so that will be a week in 3 hours. What have I been doing during that time? Let me tell you:

The first thing I noticed was that my PP memory test, called March, wasn’t working. The first real thing it does, after getting loaded into PP0, is copy itself to the next PP in line. In order to do that, it increments 3 instructions to point to the next channel from the one that got loaded in from the deadstart system. After it has self-modified its program properly, it runs those instructions to do the actual copy. The very first OAN instruction it tried to execute hung; this is not supposed to happen.

I spent 3 days looking at this problem before I started drawing timing diagrams of the channel address being selected by the various PPs. The PPs each have their own memory, but they all share the same execution hardware in chassis 1. This makes it a little hard to look at, as a PP is running 1uS cycles, the hardware is running 100nS cycles, and each PP gets a 100nS slot to do his thing. As I was looking at PP0’s slot time, and what channel he was trying to push some data to, it looked like it was getting done at the wrong time. When I plotted out 1uS of all the channel address bits, I finally noticed that PP0 was addressing channel 0, PP1 was addressing channel 1… and PP11 was addressing channel 11, and back to PP0.

The strange thing about that was that that is the way the system starts at deadstart time. Every PP sucks on the channel with his number. The deadstart panel lives off the end of channel 0; PP0 sucks up everything the deadstart panel puts on channel 0, stores it into memory, and when the panel disconnects, because it has run out of program to send, the PP starts executing the program.

Wait a minute here: the program was supposed to have incremented the 3 channel instructions so they would be pointing to channel 1; why is PP0 still looking at channel 0? Rats: the channel hardware is doing fine, but the increment isn’t working! Three days to prove something wasn’t the problem!

OK, so the increment isn’t working; what is it doing? I spent a while writing little bits of code to test various ways of incrementing a location of memory, and then Daiyu Hurst reminded me about a program she had generated for me that was a stand-alone version of the PP verification program that runs at the beginning of most deadstart tapes. OK, what does that do?

It hangs at location 6. It did that because it failed a ZJN (jump on zero) instruction. Why is that? The accumulator wasn’t zero. Hmm, instruction 1 was LDN 0, which loads the accumulator with 0! Why doesn’t that work? After another day or so, I prove to myself that it actually does work, and 0 gets loaded into the accumulator at the end of instruction 1. Another thing that isn’t the problem!

What’s next? The next instruction is UJN 2 (unconditional jump 2 locations forward), which, being at location 2, should jump to 4, which it does. It is not supposed to change the contents of the accumulator, but it does!

There are 2 inputs to the “A” adder: the A input is selected to be A, and the B input is zeros. All 12 of the inputs to the A side are zero. Wait: aren’t there 18 bits in the accumulator? What about those other 6 bits? Ah: bit 14 is a 1!

It will not sit still! I chase bit 14 for a while, and it starts working, but a different bit is failing now! I chased different bits around the loop for a while, put module K01 on the extender to look, and the test started passing! This worked for a while. I had the PPs test memory, and that worked, but if I had CP0 test memory, it didn’t like it. When I got back from lunch, it had gone back to failing my LDN 0 test. I put some secret sauce on the pins of module K01, and we are back to trying to run other diagnostics.

I remembered I was having trouble with the imaginary tape drives, so I tried booting from real tape, and I get to the part where it tests memory, and that fails. OK, we have some progress.

That was then, this is now, and we are back to failing LDN 0. I found that bit 0 of the “B” input of the A adder was not correct. It seems that a via rivet was not conducting from the collectors of Q30 and Q32 to the base of Q19 on my friend the QA module in K01. I resoldered all the via rivets, and the edge pins, just for good measure.

Central Memory still doesn’t work, but I can run some diagnostics again!

To paraphrase Sherlock Holmes: When you eliminate all the things the problem isn’t, you are left with what the problem is!

Bruce Sherry