The worlds created on the LivingComputers Minecraft server have boundaries. One may go from -1024 to 1023 in both horizontal directions.
If one gets outside the boundary, one may not move. And if one is far enough outside the boundary, one suffocates in a wall.
There are two ways to get outside the boundary: teleporting there (if you have that permission), and dismounting from transportation. Getting off a boat or a minecart may put you outside the boundary. If the boat or minecart is still in position, you may remount it and reenter the world.
Villagers may not pass through the boundary. However, it is possible to trade across it.
Water passes through the boundary.
Arrows pass through the boundary.
Trees grow through the boundary.
I believe that when blocks are destroyed, their drops may end up across the boundary, and that if you get close enough you can collect them.
I have yet to observe monsters across the boundary. I suspect I can be shot from across it, but I don’t know if a creeper can blow me up. (I’m not sure I want to find out.)
Dynamite will destroy blocks across the boundary. Water may be poured across the boundary (and fetched.) A creeper blowing up on my side of the boundary did not seem to destroy blocks across it.
Using setBlock() to place blocks requires some attention.
Blocks are placed at integer locations. You may pass a floating point value to the function, but it will be truncated to an integer.
It is this conversion to integer which may aggravate you:
int(x + dx) does not necessarily equal int(x) + int(dx). If you are placing blocks around the origin (0, y, 0), any dx with -1 < dx < 1 will be truncated to 0. Away from the origin, however, say at x = 14, x – 0.5 will go to 13, where x + 0.5 will go to 14.
I assume your x and z will be integers already, so make dx and dz integers before you add them to x and z.
setBlock(x+int(dx), y, z+int(dz), 0)
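The pitfall and the fix can be sketched in Python. This is a hedged example: `mc` stands in for an mcpi-style Minecraft connection whose `setBlock` takes (x, y, z, blockId), and `place_ring` is a hypothetical helper, not part of any server API.

```python
# Python's int() truncates toward zero, matching the conversion described above.

def place_ring(mc, x, y, z, block_id=0):
    """Hypothetical helper: place blocks at half-step offsets around (x, y, z).
    The offsets are truncated BEFORE being added to x and z, so the result
    is the same near the origin and far away from it."""
    for dx, dz in [(-0.5, 0.0), (0.5, 0.0), (0.0, -0.5), (0.0, 0.5)]:
        # Wrong: mc.setBlock(x + dx, y, z + dz, block_id)
        #   at x = 14, int(14 - 0.5) == 13, one cell off from int(14) + 0
        mc.setBlock(x + int(dx), y, z + int(dz), block_id)

# The asymmetry itself:
x = 14
assert int(x - 0.5) == 13   # truncating after the add moves the block
assert x + int(-0.5) == 14  # truncating dx first keeps it at x
```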
The CDC 6500 has been down since last Friday, so that will be a week in 3 hours. What have I been doing during that time? Let me tell you:
The first thing I noticed was that my PP memory test, called March, wasn’t working. The first real thing it does, after getting loaded into PP0, is copy itself to the next PP in line. In order to do that, it increments 3 instructions to point to the next channel from the one that got loaded in from the deadstart system. After it has self-modified its program properly, it runs those instructions to do the actual copy. The very first OAN instruction it tried to execute hung; this is not supposed to happen.
I spent 3 days looking at this problem before I started drawing timing diagrams of the channel address being selected by the various PPs. The PPs each have their own memory, but they all share the same execution hardware in chassis 1. This makes it a little hard to look at, as a PP is running 1 µs cycles, the hardware is running 100 ns cycles, and each PP gets a 100 ns slot to do his thing. As I was looking at PP0’s slot time, and what channel he was trying to push some data to, it looked like it was getting done at the wrong time. When I plotted out 1 µs of all the channel address bits, I finally noticed that PP0 was addressing channel 0, PP1 was addressing channel 1… and PP11 was addressing channel 11, and back to PP0.
The strange thing was that this is exactly the way the system starts at deadstart time. Every PP sucks on the channel with his number. The deadstart panel lives off the end of channel 0; PP0 sucks up everything the deadstart panel put on channel 0 and stores it into memory, and when the panel disconnects, because it has run out of program to send, the PP starts executing the program.
Wait a minute here: the program was supposed to have incremented the 3 channel instructions so they would be pointing to channel 1, so why is PP0 still looking at channel 0? Rats: the channel hardware is doing fine, but the increment isn’t working! Three days to prove something wasn’t the problem!
OK, so the increment isn’t working, what is it doing? I spent a while writing little bits of code to test various ways of incrementing a location of memory, and then Daiyu Hurst reminded me about a program she had generated for me that was a stand-alone version of the PP verification program that runs on the beginning of most deadstart tapes. OK, what does that do?
It hangs at location 6. It did that because it failed a ZJN (jump on zero) instruction. Why is that? The accumulator wasn’t zero. Hmm, instruction 1 was LDN 0, which loads the accumulator with 0! Why doesn’t that work? After another day or so, I proved to myself that it actually does work, and 0 gets loaded into the accumulator at the end of instruction 1. Another thing that isn’t the problem!
What’s next? The next instruction is UJN 2 (unconditional jump 2 locations forward), which, being at location 2, should jump to location 4, which it does. It is not supposed to change the contents of the accumulator, but it does!
There are 2 inputs to the “A” adder: the A input is selected to be A, and the B input is zeros. All 12 of the inputs to the A side are zero. Wait: aren’t there 18 bits in the accumulator? What about those other 6 bits? Ah: bit 14 is a 1!
It will not sit still! I chase bit 14 for a while, and it starts working, but a different bit is failing now! I chased different bits around the loop for a while, put module K01 on the extender to look, and the test started passing! This worked for a while. I had the PPs test memory, and that worked, but if I had CP0 test memory, it didn’t like it. When I got back from lunch, it had gone back to failing my LDN 0 test. I put some secret sauce on the pins of module K01, and we are back to trying to run other diagnostics.
I remembered I was having trouble with the imaginary tape drives, so I tried booting from real tape, and I get to the part where it tests memory, and that fails. OK, we have some progress.
That was then, this is now, and we are back to failing LDN 0. I found that bit 0 for the “B” input of the A adder was not correct. It seems that a via rivet was not conducting between the collectors of Q30 and Q32 and the base of Q19 on my friend the QA module in K01. I resoldered all the via rivets, and the edge pins, just for good measure.
Central Memory still doesn’t work, but I can run some diagnostics again!
To paraphrase Sherlock Holmes: When you eliminate all the things the problem isn’t, you are left with what the problem is!
In restoring the Bendix G-15 vacuum tube computer, I have uncovered a phenomenon which is requiring us to replace over 3000 germanium diodes. These diodes appear to have lost their hermetic seal, and the atmospheric contamination has caused their leakage current to rise to very high levels as they reach a normal operating ambient temperature of approximately 40 degrees C. Because these diodes are used in the clamp circuits that generate the 20 volt logic swing of the computer, the combined low impedance of the approximately 3000 diodes ends up shorting out the -20 volt power supply after 5 to 10 minutes of power-on time. We have replacement diodes on order, and this should resolve the power supply issue.
Interestingly, the failed diodes exhibit another phenomenon which this engineer hasn’t seen before. Hooking up a diode to an ohmmeter to measure its leakage, and heating the diode to about 40 degrees C, causes the leakage, measured as resistance, to drop from a few thousand ohms to a few tens of ohms. If the ohmmeter remains connected and the diode is allowed to cool to normal ambient, the low resistance measurement persists. If the ohmmeter is disconnected briefly and then reconnected, the diode returns to its nominal few thousand ohms.
A few months ago, our PDP10 Model 1095 (pictured) had just successfully booted the WAITS operating system and was running an early version of Ethernet. One afternoon, the PDP11-40 front-end computer (the unit with its chassis extended, on the left) stopped working, and I was tasked to find out what had happened and repair it. What followed was almost three months of difficult troubleshooting and repair.
What had happened was, one of the peripheral devices attached to the PDP11’s Unibus (a TC-11 DECtape controller at the left end of the machine) had had a power supply failure, causing the regulated 15 volt supply to rise to 28 volts. These supplies have an over-voltage crowbar circuit which is designed to shut down the supply by blowing a fuse if the supply ever goes into an over-voltage condition. This crowbar circuit failed, and a number of circuit boards in the PDP11 fried as a result.
Once I replaced and/or repaired the failed circuit boards, I upgraded the TC-11 power supply to a modern switcher which doesn’t have the failure mode described above.
With the hardware sorted out (this is a couple of weeks into troubleshooting), I set about trying to boot the WAITS operating system once again. A further snag cropped up at this point. WAITS wouldn’t fully start and would complain about a “pointer mismatch”. This points to the DTE-20 10-11 interface, but no combination of replacement boards would succeed in bringing up WAITS (except for a number of random times). The solution to this problem turned out to be an old bugaboo of the KL-10 processor. A number of the control devices do not fully initialize at power-on, as their reset lines do not go to all of the parts in a particular device. We have seen this phenomenon on the RH-20, but apparently the DTE-20 also has some hardware that doesn’t get initialized. I determined this by running the -10 side diagnostic for the DTE-20, and then booting WAITS successfully. This was after weeks of eliminating all other possibilities, as I and others were not aware that the DTE-20 had components that came up in an unknown state at power-on.
One side note: in upgrading the TC-11 power supply, it was found that the power controller that feeds line voltage to it had failed some time ago and been hacked to make it work without its contactor. A new contactor was ordered and installed.
It has been about 3 months, but we seem to have 128KW of memory on the CDC 6500 now.
From the picture you can see “CM = 303700”, which is just about 100KW free!
We have built 35 new Storage Modules, all of which work, and 32 of them are installed in the machine in place of Core Modules which were not happy. There are 10 more new Storage Modules in process, and the mostly assembled boards should be in next week. They will need their connector pins and pulse transformers installed. I will have to build more chassis sides and fronts for them, but that may take a while, as I still have 3 good modules, itching to go to work, sitting on my bench.
Something I find interesting in this photo, is the memory access patterns. It is hard to see in the picture, but bank 30 is at the top of chassis 11, and bank 34 is at the top of chassis 12. The left two LEDs in the new storage modules are the Read and Write indicators. The ones on chassis 12 are bright, and the ones in chassis 11 are off. The top rows are separated by 4 locations! The machine isn’t real busy, it just has two instances of a prime number program running, but still… Can’t see that with Core Modules.
Anyway, let’s see, both CPUs working: check! All of memory working: check! Real card reader working: check! Real tape drives working: oops, not at the moment. I guess I’m not done yet.
Our first floor ‘Tennis for Two’ exhibit has a new scope. Not really new though because chronologically it is lots older than the scope we were using. We had been using a Tektronix 465 scope which dates from no earlier than 1972. I remember this oscilloscope model well as I had one exactly like it on my bench at a previous job as ‘my scope’. The Tektronix 465 which had been displaying Tennis for Two was made at least 14 years after Tennis For Two had been invented. Not an ideal situation for a display certainly, but a lot better than no oscilloscope at all which was our alternative. It took a while to acquire and restore a period appropriate scope.
I wanted to use the same model oscilloscope which had displayed Tennis for Two when it made its debut at Brookhaven National Laboratory back in 1958, but which scope was it exactly? All I had to go on was this photograph.
The Tennis for Two oscilloscope is on the left side of the instrument stage. This next photo shows exactly where. This is actually two photographs photo-shopped together and the scope image detail portion was not photographed in 1958.
Enlarging a photo does not produce the detail shown above. The original photo is black and white and the enlargement is in color; obviously two photos were combined. In the original ‘Blade Runner’ movie, Agent Deckard used a device which allowed him to zoom in on a photograph without loss of picture quality, no matter what the magnification.
In real life that is simply not possible. In real life the ‘replicant’ from the Blade Runner movie would have lived another day, safe from Deckard, for this is what you get upon enlargement of our scope in the Tennis for Two photo: loss of detail.
The photo is too fuzzy to make out the white DuMont label just below the display screen in the photo center, or a model number. You can’t even tell where these labels are. I had to look at photos of countless old scopes to make a match. It turned out the diamond pattern of the central knobs below the screen is very distinctive, and only DuMont scopes of the ‘304’ model type have that pattern. Once I had identified the type of scope, I was able to find one on eBay and restore it. After physically acquiring the oscilloscope, certain features in my fuzzy photograph, such as the white DuMont label, made sense. Without a real scope to compare it to, these features would have remained a mystery.
Restoration involved, before I turned the scope on, replacing almost every capacitor which could have age-related issues: not all of them, but most. Minor troubleshooting then got the scope to work. The vertical amplifier had a bad connection on a calibration switch which would not let the vertical amplifier signal pass. As we don’t need to use the calibration switch, I simply bypassed the connection. If we were ever to calibrate this oscilloscope, we would not use the internal calibration signal; we would provide an external calibration signal anyway.
Here is the final result with the enlarged photograph I pasted on my office wall while I researched scope pictures. The shapes on some of the knobs are different but that is OK. DuMont mixed round knobs with pointed knobs of the ‘crows feet’ variety frequently. The scope we are using now and which I show here is an enhanced ‘304’ type called a DuMont 9559 and has options for doing RF measurements. I also acquired a plain Jane DuMont ‘304’ which has all crows feet type knobs. Physically the front panels are identical otherwise.
What matters most is not that the knobs absolutely match, but that we now display Tennis for Two on a period appropriate oscilloscope. The scope I restored was in better overall condition of the two I had acquired, and I suspect Brookhaven used top-of-the-line models in their research. We really can’t know what the exact model used was, so the DuMont 9559 is an appropriate choice.
Here is what has been going on on the CDC 6500: More Memory!
I’m sure the computer purity police will come and take me away, but this is what I have been doing. We are now up to 22 new storage module replacements, 17 of which you can see here. There are 3 more in chassis 9 and 10, and PP0 and PP1 each have one in chassis 1. I have 3 more to finish assembling when the parts come in later today.
Of the twenty units I have used in Central Memory, they are all involved in getting the First Location of banks 20 to 37 working, and that is all that works so far. If I try to test the second location, the first location fails. I don’t think the other three boards will get me through the second location, so I will probably be going out for more modules later today, or maybe next week.
My poor little milling machine has been working its bearings to the bone making Storage Modules sides and fronts. Since it doesn’t have “rigid tapping” (read automatic tapping), I started doing that by hand, but eventually I figured out a way to have the milling machine supply the energy to turn the tap, while I manually told it which direction to turn it.
So far, all the modules built have had the surface mount assembly done outside the Museum, and we have installed the pins and transformers. I may see about trying to convince other folks to do the through hole assembly and the machining.
Things are improving: we have moved from 65536 locations of memory to 65552 locations that work. Unfortunately it doesn’t work well enough to let the machine run with it out there; I still have to completely disable the upper 64K in order to have the machine boot.
The Alto proposal was a bold one, with a short time frame and lofty goals. In a few short months, Chuck Thacker and the rest of the crew had designed and implemented a complete computer encapsulated in just a handful of circuit boards. The processor consisted of two small boards (approximately 7″x10″) containing a mere 138 integrated circuits.
Peripheral controllers were similarly simple — single boards of the same size for each of the Display, Ethernet, and Disk.
How was such economy of hardware possible? Microcode.
Microcode and you:
In many computers from the Alto’s era, the architecture of the processor was hard-wired into the circuitry — there was dedicated hardware to fetch, decode, and execute each instruction in the processor’s repertoire. This meant that once designed and built, the computer’s instruction set could not be modified or extended, and any bugs in the hardware were very expensive to fix. It also meant that the hardware was more complicated — involving more components and longer development cycles. As processors became more and more advanced these limitations became more pressing.
Microcode and micro-programming helped solve these problems by replacing large swaths of control logic with software.
So What *is* microcode?
Microcode is software. It’s very low-level software, and it’s written in a language tailored to a very specific domain — the control of a specific set of hardware — but it’s software all the same. Let’s look at a small snippet of a program from the highest level, down to a microcode representation.
At the highest level, we have human language:
Add 5 to 6 and give me the result.
Which, if you’re able to read this blog, you are likely to understand the meaning of.
In a somewhat-high-level programming language, like C, this translates roughly to:
i = 5 + 6;
This isn’t too much different from the English statement above and if you’re familiar with programming or math at all, you can figure out that the above statement adds 5 to 6 and stores the result in the variable i.
In Nova assembly language, the above C program might look something like:
Now we’re starting to get a little bit further from the source material. This code consists of three memory locations (containing the operands and the result) and four instructions. Each of these instructions performs a small part of the “i=5+6” operation. “LDA” is Nova for “Load Accumulator” and loads 5 into Accumulator 0, and 6 into Accumulator 1.
“ADC” is Nova for “ADD with Carry.” It adds Accumulator 0 and Accumulator 1 together and stores the result in Accumulator 0. “STA” means “Store Accumulator” and puts the contents of Accumulator 0 (the result of 5+6) into a location in memory (designated as “I” in this example.)
In Alto microcode, a direct analog to the Nova instruction sequence above could be implemented as:
The number of instructions is larger (9 vs 4) and the microcode operates at a lower level than the equivalent Nova code.
The “MAR← FIVE” operation tells the Memory Address Register hardware to load a 16-bit address. The subsequent “NOP” here (and after every MAR← operation) is necessary to give the memory hardware time to accept the memory request. If this is omitted, unpredictable things will happen.
The “T← MD” operation puts the Memory Data register — the contents of the memory at the loaded location in the previous instruction — onto the ALU bus.
The “L← MD+T” instruction puts the memory data onto the ALU bus and tells the processor’s ALU to add the T register to it; and finally it causes the result to be stored in the L register.
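As an illustrative sketch (not real Alto timing, and with made-up octal addresses standing in for FIVE and SIX), the register transfers just described can be modeled in a few lines of Python:

```python
# Toy register-transfer model of the add sequence described above.
mem = {0o100: 5, 0o101: 6}  # assumed locations for FIVE and SIX
FIVE, SIX = 0o100, 0o101

MAR = FIVE                  # MAR<- FIVE: start the memory access
                            # NOP: wait one cycle for memory
MD = mem[MAR]               # memory delivers the word
T = MD                      # T<- MD: first operand into T
MAR = SIX                   # MAR<- SIX: start the second access
                            # NOP
MD = mem[MAR]
L = (MD + T) & 0xFFFF       # L<- MD+T: 16-bit ALU result into L
assert L == 11
```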
What you may notice in the above examples is that the Nova machine language instructions tell the computer as a whole what to do (i.e. load a register, or perform an addition) whereas the Alto microcode instructions tell the computer how to go about doing it (i.e. “put this data on the memory bus,” or “tell the ALU to add this to that.”) It directs the individual components of the processor (the memory bus, the ALU, the register file, and the shifter) to perform the right operations in the right order.
So what components is the Alto processor made of? Here’s a handy diagram of the Alto CPU:
The nexus of the processor is the Processor Bus, a 16-bit wide channel that most of the other components in the Alto connect to in order to transfer data. The microcode controls who gets to read and write from this channel and at what time.
The R and S registers are 16-bit register sets; the R bank contains 32 16-bit registers, and the S register bank contains up to 8 sets of 32 16-bit registers. These are general-purpose registers and are used to store data for any purpose by the microcode.
The L, M, and T registers are intermediate registers used to provide inputs and outputs to the ALU and the general-purpose registers.
The ALU (short for Arithmetic Logic Unit) hardware takes two 16-bit inputs (A and B), a Function (for example, ADD or XOR) and produces a 16-bit result. The A input comes directly from the Processor Bus (and thus can operate on data from a variety of sources) and the B input comes from the T register.
The Shifter applies a shift or rotate operation to the output of the ALU before it gets written back to the R registers.
The MAR (Memory Address Register) contains the 16-bit address for the current operation (read or write) to the Alto’s Main Memory. Data is transferred to and from the specified address via the Memory Data Bus.
Each of the components above and their interactions with each other are carefully controlled by the microcode. Each microcode instruction is a 32-bit word consisting of the following fields:
RSEL selects the general-purpose Register to be read from or written to.
ALUF selects the Function the ALU should apply to its A and B inputs.
BS selects what source on the Processor Bus should be sent to the ALU.
F1 and F2 are Special Functions that control the shifter, memory, branching operations, and other operations.
T and L specify whether the output from the ALU should be written back to the T and L intermediate registers.
And the NEXT field specifies the address of the next Microinstruction. Unlike most conventional machine language where by default each instruction follows the last, Alto microcode must explicitly specify the address of the next instruction to be executed.
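Putting the field widths together (5 + 4 + 3 + 4 + 4 + 1 + 1 + 10 = 32 bits), a microinstruction word can be unpacked mechanically. The Python sketch below places RSEL in the most significant bits, following the Alto Hardware Manual; treat the exact bit ordering as an assumption to check against the manual.

```python
def decode(word):
    """Split a 32-bit Alto microinstruction into its named fields."""
    return {
        "RSEL": (word >> 27) & 0o37,   # 5 bits: register select
        "ALUF": (word >> 23) & 0o17,   # 4 bits: ALU function
        "BS":   (word >> 20) & 0o7,    # 3 bits: bus source
        "F1":   (word >> 16) & 0o17,   # 4 bits: special function 1
        "F2":   (word >> 12) & 0o17,   # 4 bits: special function 2
        "T":    (word >> 11) & 1,      # 1 bit: load T register?
        "L":    (word >> 10) & 1,      # 1 bit: load L register?
        "NEXT": word & 0o1777,         # 10 bits: next microinstruction address
    }

# All-ones word: every field comes back saturated.
fields = decode(0xFFFFFFFF)
assert fields["RSEL"] == 0o37 and fields["NEXT"] == 0o1777
```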
The Alto is similar to many other computers of its era in that it uses microcode to interpret and execute a higher-level instruction set — in this case an instruction set very similar to that of the Data General Nova minicomputer (which was popular at PARC at the time). The Alto’s microcode was also directly involved in controlling hardware — more on this later.
This means the earlier microcode example is contrived — it shows what an addition might look like in microcode, but not what the execution of the equivalent Nova assembly code example actually entails. So as to provide full disclosure, here’s the microcode instruction sequence the Alto goes through when executing a single Nova instruction, the “LDA 0,FIVE” instruction from the earlier code sequence:
Whew. That’s 13 micro-instructions to execute a single Nova LDA instruction. (Keep in mind that each micro-instruction executes in 170 nanoseconds, meaning that these 13 micro-instructions execute in about 2.2 microseconds, which is within spitting distance of the instruction time on a real Nova.)
The above sequence takes the current instruction, decodes it as an LDA instruction and performs the operations needed to execute it. If it doesn’t make a whole lot of sense at first glance, this is entirely normal. For the hardcore hackers among us, the above code sequence is taken directly from the microcode listing available on Bitsavers, the hardware manual is here and if you want to explore it in depth, you can use the ContrAlto emulator to step through the code line by line and see how it all works.
The Alto used microcode to simplify implementation of its processor. This was state-of-the-art at the time, but not altogether novel. What was novel about the Alto is its use of the processor’s microcode engine to drive every part of the Alto, not just the CPU. The display, disk, Ethernet, and even the refreshing of the Alto’s dynamic RAM was driven by microcode running on the Alto’s processor. In essence, software replaced much of what would have been done in dedicated hardware on other computers.
The Alto was also unique in that it provided “CRAM” (Control RAM) – a microcode store that could be changed on the fly. This allowed the processor’s behavior to be extended or modified at the whim of the programmer – while the Alto was running.
To allow the devices in the Alto to share the Alto’s processor with minimal overhead, the Alto’s designers developed very simple cooperative task-switching hardware.
Conceptually, the processor is shared between sixteen microcode Tasks, with a priority assigned to each one: Task 0 is the lowest priority task, and Task 15 the highest. Each device in the Alto has one or more tasks dedicated to it. To make task switching fast and cheap to implement in hardware, the only state saved by the task-switching hardware for each Task is that task’s program counter — the MPC (“Micro Program Counter”).
Only one task is running on the processor at a time, and at any time the microcode for a task may invoke a “task switch” function (named, strangely enough, TASK). When a TASK instruction is executed, the processor selects the highest priority task that needs to run (is “Requesting Wakeup”) and switches to it by jumping to the instruction pointed to by that Task’s MPC.
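The selection rule can be sketched in Python. This is a toy model, not actual Alto hardware or its signal names: task numbers double as priorities, and only each task's MPC survives a switch.

```python
mpc = [0] * 16          # one saved micro program counter per task
requesting = set()      # tasks currently requesting wakeup

def task_switch(current_task, current_mpc):
    """On TASK: save the current MPC, then resume the highest-priority
    task that is requesting wakeup."""
    mpc[current_task] = current_mpc
    requesting.add(0)   # the Emulator Task (0) is always eligible
    nxt = max(requesting)
    return nxt, mpc[nxt]

# Example: the Display Word Task (9) executes TASK while the Disk Word
# Task (14) wants service; the disk task wins.
requesting.update({9, 14})
next_task, next_mpc = task_switch(9, 0o123)
assert next_task == 14
assert mpc[9] == 0o123  # task 9 resumes here next time it runs
```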
The Emulator Task (Task 0) is always eligible for wakeup but runs at the lowest priority, and can be interrupted at any time by hardware that needs attention. As suggested by the name, this task contains microcode that “emulates” the Nova instruction set — it fetches, interprets, and executes the instructions for the user software that runs on the Alto.
The Disk Word Task (Task 14) is the highest priority task implemented in a standard Alto. It needs to run at the highest priority because its main job is pulling in words of data off the Diablo 30 disk drive as they move under the read head.
The Display Word Task (Task 9) is responsible for picking up words out of the Alto’s memory and painting them on the display as the CRT’s electron beam moves across the screen.
Since the Alto’s Task system is cooperative (meaning that task switches only happen when microcode explicitly requests them), the microcode for each task must be careful to yield to other tasks every so often so that they don’t get starved of time. If the Disk Word Task is late in execution, data from the disk is corrupted. If the Display Word Task doesn’t get enough time, the display will flicker or display glitches. And since the Alto hardware saves only the MPC for each task, each task’s microcode must make sure that any important state is saved somewhere (for example, in the 32 general-purpose R registers) before the task switch happens.
This puts some interesting constraints on the microcode (and results in much hair-pulling on the part of the micro-coder) as you might imagine. But despite these limitations, and despite the extremely low-level details the microcode has to deal with, the wizards behind the Alto’s design managed to fit microcode implementing the basic Nova instruction set (with extensions for graphics operations), disk controller, Ethernet controller, display, and memory tasks into a 1,024 word microcode ROM. With three words to spare.
The Task-based microcode architecture, in tandem with the writable microcode Control RAM (CRAM) made the Alto a very flexible computer — new hardware could be quickly implemented and added, and the microcode to drive it could easily be loaded and debugged. Entirely new instruction sets were devised, experimented with, and revised quickly. Applications could load custom microcode to accelerate graphics rendering.
In the next article, we will go into depth on the rest of the Alto’s hardware.