Introducing Darkstar: A Xerox Star Emulator

Star History and Development

The Xerox 8010 Information System (“Star”)

In 1981, Xerox released the Xerox 8010 Information System (codenamed “Dandelion” during development), commonly referred to as the Star. The Star took what Xerox had learned from the research and experimentation done with the Alto at Xerox PARC and attempted to build a commercial product from it. It was envisioned as the centerpiece of the office of the future, combining high-resolution graphics with the now-familiar mouse, Ethernet networking for sharing and collaboration, and Xerox’s laser printer technology for faithful “WYSIWYG” document reproduction. The Star’s operating system (called “Star” at the outset, though later renamed “Viewpoint”) introduced the Desktop Metaphor to the world. In combination with the Star’s unique keyboard, it provided a flexible, intuitive environment for creating and collaborating on documents and mail in a networked office.

The Star’s Keyboard

Xerox later sold the Star hardware as the “Xerox 1108 Scientific Information Processor.” In this form it competed with Lisp workstations from Symbolics, LMI, and Texas Instruments in the burgeoning AI workstation market, and while it wasn’t quite as powerful as any of their offerings, it was considerably more affordable – and sometimes much smaller. (The Symbolics 3600 workstation, c. 1983, was the size of a refrigerator and cost over $100,000.)

The Star never sold well – it was expensive ($16,500 for a single workstation and most offices would need far more than just one) and despite being flexible and powerful, it was also quite slow. Unlike the IBM PC, which also made its debut in 1981 and would eventually sell millions, Xerox ended up selling somewhere in the neighborhood of 25,000 systems, making the task of finding a working Star a challenge these days.

Given its history and relationship to the Alto, the Star seemed appropriate for my next emulation project. (You can find the Alto emulator, ContrAlto, here). As with the Alto a substantial amount of detailed hardware documentation had been preserved and archived, making it possible to learn about the machine’s inner workings… except in a few rather important places:


From the March 1982 edition of the Dandelion Hardware Manual.  Still waiting for these sections to be written…

Fortunately, Al Kossow at Bitsavers was able to provide extra documentation that filled in most of the holes.  Cross-referencing all of this with the available schematics, it looked like there was enough information to make the project possible.

The Dandelion Hardware

The Star’s Central Processor (CP). Note the ALU (4xAM2901, top middle) and 4KW microcode store (bottom)

Much like the Alto, the Dandelion’s Central Processor (referred to as the “CP”) is microcoded, and, again like the Alto, this microcode is responsible for controlling various peripherals, including the display, Ethernet, and hard drive.  The CP is also responsible for executing bytecode macroinstructions.  These macroinstructions are what the Star’s user programs and operating systems are actually compiled to.  The CP is sometimes referred to as the “Mesa” processor because it was designed to efficiently execute Mesa bytecodes, but it was in no way limited to implementing just the Mesa instruction set: The Interlisp-D and Smalltalk systems defined their own microcode for executing their own bytecodes, custom-tailored and optimized to their environments.
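
To make “the microcode implements the bytecode set” a little more concrete, here is a deliberately tiny sketch of the idea in Python: each macroinstruction opcode simply selects a routine (standing in for a microcode sequence) from a dispatch table. The opcodes and routines below are invented for illustration and have nothing to do with Darkstar’s actual structure or the real Mesa instruction set.

def op_push_const(state, operand):
    state["stack"].append(operand)

def op_add(state, _operand):
    b, a = state["stack"].pop(), state["stack"].pop()
    state["stack"].append((a + b) & 0xFFFF)

DISPATCH = {0x01: op_push_const, 0x02: op_add}        # invented opcodes

def run(bytecodes):
    state = {"stack": []}
    i = 0
    while i < len(bytecodes):
        opcode, operand = bytecodes[i], bytecodes[i + 1]
        DISPATCH[opcode](state, operand)              # "run the microcode" for this opcode
        i += 2
    return state["stack"]

print(run([0x01, 2, 0x01, 3, 0x02, 0]))               # -> [5]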

Mesa was a strongly-typed “high-level language.” (Xerox hackers loved their puns…) It originated on the Alto but quickly grew too large for it (a smaller, stripped-down Mesa called “Butte” (i.e. “a small Mesa”) existed for the Alto but was still fairly unwieldy.)  The Star’s primary operating system was written in Mesa, which allowed a set of very sophisticated tools to be developed in a relatively short period of time.

The Star architecture offloaded the control of lower-speed devices (the keyboard and mouse, serial ports, and the floppy drive) to an 8-bit Intel 8085-based I/O processor board, referred to as the IOP.  The IOP is responsible for booting the system: it runs basic diagnostics, loads microcode into the Central Processor and starts it running.  Once the CP is running, it takes over and works in tandem with the IOP.

Emulator Development

The Star’s I/O Processor (IOP). Intel 8085 is center-right.

Since the IOP brings the whole system up, it seemed like the logical place to begin implementing the emulator.  I started with an emulation of the 8085 processor and hooked up the IOP ROMs and RAMs.  Since the first thing the IOP does at power up or reset is execute a vigorous set of self-tests, the IOP was, in effect, testing my work as I progressed, which was extremely helpful.  This is one important lesson Xerox learned from the Alto and applied to the Star: on-board diagnostics are a good thing.  The Alto had no diagnostic facilities built in, so if anything failed that prevented the system from running, the only way to determine the fault was to get out the oscilloscope and the schematics and start probing.  On the Star, diagnostics and status are reported through a 4-digit LED display, the “Maintenance Panel” (or MP for short).  If the IOP finds a fault during testing, it presents a series of codes on this panel.  During a normal system boot, various codes are displayed to indicate progress.  The MP was the first I/O device I emulated on the IOP, for obvious reasons.
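
As a flavor of how simple the MP is to emulate, here is a bare-bones sketch (invented interface, not Darkstar’s actual code): the IOP writes a value, and the emulator remembers and displays it as a four-digit code.

class MaintenancePanel:
    """Bare-bones emulated Maintenance Panel (invented interface)."""
    def __init__(self):
        self.code = 0                       # the 4-digit code currently displayed

    def write(self, value):
        self.code = value % 10000
        print(f"MP: {self.code:04d}")       # a real emulator would draw this in its UI

mp = MaintenancePanel()
mp.write(147)                               # a made-up progress code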

Development on the IOP progressed nicely for several weeks (and the codes reported in the emulated MP kept increasing, reflecting my progress in a quantitative way) and during this time I implemented a source-level debugger for the IOP’s 8085 code to help me along.  This was invaluable in working out what the IOP was trying to do and why it was failing to do so.  It allowed me to step through the original code, place breakpoints, and investigate the contents of the IOP’s registers and memory while the emulated system was running.

The IOP Debugger

Once the IOP self-tests were passing, the IOP emulation was running to the point where it attempted to actually boot the Central Processor!  This meant I had to shift gears and switch over to implementing an emulation of the CP and make it talk to the IOP. This is where the real fun began.

For the next couple of months I hunkered down and implemented a rough emulation of the CP, starting with the system’s 16-bit ALU (implemented with four 4-bit AM2901 ALU chips chained together).  The 2901 (see the top portion of the following diagram) forms the nexus of the processor; in addition to providing the processor’s 16 registers and basic arithmetic and logical operations, it is the primary data path between the “X bus” and “Y bus.”  The X bus provides inputs to the ALU from various sources: I/O devices, main memory, a handful of special-purpose register files, and the Mesa stack and bytecode buffer.  The ALU’s output connects to the Y bus, providing inputs back into these same components.

The Star Central Processor Data Paths
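
Here is a loose sketch of that X bus / ALU / Y bus round trip as an emulator might model it. The microinstruction fields, source names, and register names below are all invented for illustration; the real microinstruction format is considerably more involved.

MASK16 = 0xFFFF

def alu_cycle(registers, x_sources, microword):
    x = x_sources[microword["x_source"]]()            # I/O device, memory data, stack, ...
    a = registers[microword["reg"]]
    if microword["fn"] == "add":
        result = (a + x) & MASK16
    else:                                             # "or" in this toy model
        result = (a | x) & MASK16
    registers[microword["dest"]] = result             # the Y bus feeds these components back
    return result

regs = {"R0": 0x0010, "R1": 0x0000}
sources = {"MD": lambda: 0x1234}                      # stand-in for a memory-data source
print(hex(alu_cycle(regs, sources,
                    {"x_source": "MD", "reg": "R0", "fn": "add", "dest": "R1"})))   # 0x1244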

One of the major issues I was confronted with nearly immediately when writing the CP emulation was one of fidelity: how faithful to the hardware does this emulation need to be? This issue arose specifically because of two hardware details related to the ALU and its inputs:

  1. The AM2901 ALU has a set of flags that get raised based on the result of an ALU operation (for example, the “Carry” flag gets raised if the result of an operation causes a carry out from the most-significant bits). For arithmetic operations these flags make sense, but the 2901 also sets them as the result of logical operations. The meaning of the flags in these cases is opaque and of no real use to programmers (what does it mean for a “carry” flag to be set as a result of a logical OR?); they exist only as a side-effect of the ALU’s internal logic. But they are documented in the spec sheet (see the picture below).
  2. With a 137ns clock cycle time, the CP pushes the underlying hardware to its limits. As a result, some combinations of input sources requested by a microinstruction will not produce valid results because the data simply cannot all make it to its destination on time. Some combinations will produce garbage in all bits, but some will be correct only in the lower nibble or byte of the result, with the upper bits being undefined. (This is due to the ALU in the CP being comprised of four 4-bit ALUs chained together.)
Logic equations for the “NOT R XOR S” ALU operation’s flags. What they mean is left as an exercise for the reader.

I spent a good deal of time pondering and experimenting. For #1, I decided to implement my ALU emulation with the assumption that Xerox’s microcode would not make use of the condition flags for non-arithmetic operations, as I could see no reason to make use of them for logical ops and implementing the equations for all of them would be computationally expensive, making the emulation slower. This ended up being a valid assumption for all logical ops except for OR — as it turns out, some microcode assumed that the Carry flag would be set appropriately for this class of operation. When this issue was found, I added the appropriate operations to my ALU implementation.

For #2 I assumed that if Xerox’s microcode made use of any “invalid” combinations of input sources, it wouldn’t depend on the garbage portion of the results. (That is, if code made use of microinstructions that would only produce valid results in the lower 4 or 8 bits, the microcode would also only depend on the lower 4 or 8 bits generated.) Thus the emulated ALU always produces a complete, correct result across all 16 bits regardless of input source. This assumption appears to have held — I have encountered no real-world microcode that makes assumptions about undefined results thus far.

The above compromises were made for reasons of implementation simplicity and efficiency. The downside is that it is possible to write microcode that will behave differently on the emulation than on the real hardware. However, going through the time, trouble, and expense of a 100% accurate emulation did not seem worth it when no real microcode would ever require this level of accuracy. Emulation is full of trade-offs like this. It would be great to provide an emulation that is perfect in every respect, but sometimes compromises must be made.

I implemented a debugger and disassembler for the CP similar to the one I put together when emulating the IOP.  Emulation of the various X bus-related registers and devices followed, and slowly but surely the CP started passing boot diagnostics as I fixed bugs and implemented missing hardware.  Finally it reached the point where it moved from the diagnostic stage to executing the first Mesa bytecodes of the operating system – the Star was now executing real code!  At that time it seemed appropriate to implement the Star’s display controller so I could see what the Star was trying to tell me – and a few days and much debugging of the central processor later I was greeted with this display from the install floppy (and there was much rejoicing):

The emulated Star says “Hello” for the very first time

Following this I spent two weeks of late nights hacking — implementing the hard disk controller and fixing bugs.  The Star’s hard drive controller doesn’t use an off-the-shelf controller chip as this wasn’t an option at the time the Star was being developed in the late 1970s. It’s a very clever, minimal design with most of the heavy lifting being done in microcode rather than hardware. Thus the emulation has to work at a very low level, simulating (in a sense) the rotation of the platters and providing data from the disk as it moves under the heads, one word at a time (and at just the right time.)
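
Conceptually, the emulation looks something like this toy sketch (my own illustration, not Darkstar’s actual code): the emulator tracks which word of the track is currently under the heads and advances that position on a fixed schedule, so the controller microcode sees data arrive at roughly the rate a real spinning platter would deliver it.

class SpinningTrack:
    """Toy model of one disk track rotating under the heads."""
    def __init__(self, words):
        self.words = words               # the 16-bit words recorded on this track
        self.position = 0                # index of the word currently under the heads

    def next_word(self):
        # Advance one word time and return the word now passing the heads.
        word = self.words[self.position]
        self.position = (self.position + 1) % len(self.words)
        return word

track = SpinningTrack(list(range(8)))
print([track.next_word() for _ in range(10)])    # wraps around as the platter keeps spinning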

During this period I also got to learn how Xerox’s hard disk formatting and diagnostic tools worked.  This involved some reverse engineering:  Xerox didn’t want end-users to be able to do destructive things with their hard disks so these tools were password protected.  If you needed your drive reformatted you called a Xerox service engineer and they came out to take care of it (for a minor service charge).  These days, these service engineers are in short supply for some reason.

Luckily, the passcodes are stored in plaintext on the floppy disk so they were easy to unearth.  For future reference, the password is “wizard” or “elf” (if you’re so inclined):

Having solved The Mystery of the Missing Passwords I was at last able to format a virtual hard disk and install Viewpoint, and after waiting nervously for the installation to finish I was rewarded with:

Viewpoint, at long last!

Everything looked good until the hard disk immediately corrupted itself and the system crashed!  Still, it was very encouraging to see a real operating system running (or nearly so), and over the following weeks I hammered out the remaining issues and started on a design for a real user interface for the emulator.

I gave it a name: Darkstar.  It starts with a “D” (thus falling in line with the rest of the “D-Machines” produced by Xerox) contains “Star” in the name, and is also a nerdy reference to a cult-classic sci-fi film.  Perfect. 

Getting Darkstar

Darkstar is available for download on our Github site and is open source under the BSD 2-Clause license.  It runs on Windows and on Unix systems using the Mono runtime.  It is still very much a work in progress.  Feedback, bug reports, and contributions are always welcome.

Fun with the Star

You’ve downloaded and installed Darkstar and have perused the documentation – now what?  Darkstar doesn’t come with any Xerox software, but pre-built hard disk images are available on Bitsavers (and for the more adventurous among you, piles of floppy disk images are available if you want to install something yourself).  Grab http://bitsavers.org/bits/Xerox/8010/8010_hd_images.zip — this contains hard disk images for Viewpoint 2.0, XDE 5.0, and The Harmony release of Interlisp-D. 

You’ll probably want to start with Viewpoint; it’s the crowning achievement of the Star and it invented the desktop metaphor, with icons representing documents and folders. 

To boot Viewpoint successfully you will need to set the emulated Star’s time and date appropriately – Xerox imposed a very strict licensing scheme (referred to as Product Factoring), with licenses that typically expired monthly.  Without a valid license code, Viewpoint grants users a 6-day grace period, after which all programs are deactivated.

Since this is an emulation, we can control everything about the system so we can tell the emulated Star that it’s always just a few hours after the installation took place, bypassing the grace period expiration and allowing you to play with Viewpoint for as long as you like.  Set the date to Nov. 10, 1990 and start the system running.

Now wait.  The system is running diagnostics.

Keep waiting.  Viewpoint is loading.

Go get a coffee.

Seriously, it takes a while for Viewpoint to start up.  Xerox didn’t intend for users to reboot their Stars very often, apparently.  Once everything is loaded a graphic of a keyboard will start bouncing around the screen:

The Bouncing Keyboard

Press any key or click the mouse to get started and you will be presented with the Viewpoint Logon Option Sheet:

The Logon Option Sheet

You can log in with user name “user” and password “password”.  Hit the “Next” key (mapped to “Home” on your computer’s keyboard) to move between fields, or use the mouse to click in them.  Click on the “Start” button to log in, and in a few moments, there you are:

Initial Viewpoint Desktop

The world is your oyster.  Some things work as you expect – click on things to select them, double-click to open them.  Some things work a little differently – you can’t drag icons around with the mouse as you might expect: if you want to move them, use the “Move” key (F6) on your keyboard; if you want to copy them, use the “Copy” key (F4).  These two keys apply to most objects in the system: files, folders, graphical objects, you name it.  The Star made excellent use of the mouse, but it was also very keyboard-centric and employed a keyboard designed to work efficiently with the operating system and tools.  Documentation for the system is available online – check out the PDFs at http://bitsavers.org/pdf/xerox/viewpoint/VP_2.0/, as they’re worth a read to familiarize yourself with the system. 

If you want to write a new document, you can open up the “Blank Document” icon, click the “Edit” button and start writing your magnum opus:

Plagiarism?

One can change text and paragraph properties – font type, size, weight, and all sorts of other groovy things – by selecting text with the mouse (use the left mouse button to select the starting point and the right button to define the end) and pressing the “Prop’s” key (F8):

Mad Props

If you’re an artist or just want to pretend that you are one, open up the “Blank Canvas” icon on the desktop:

MSPaint.exe’s Great Grandpappy

Need to do some quick calculations?  Check out the Calculator accessory:

I’m the Operator with my Pocket Calculator
Help!

There are of course many more things that can be done with Viewpoint, far too many to cover here.  Check out the extensive documentation as linked previously, and also look at the online training and help available from within Viewpoint itself (check the “Help” tab in the upper-right corner.)

Viewpoint is only one of a handful of systems that can be run on Darkstar. Stay tuned for future installments, covering XDE and Interlisp-D!

IBM at LCM+L

As anyone familiar with LCM+L knows, the museum initially grew out of Paul Allen’s personal collection of vintage computers. Many of the larger systems in the collection reflected his own experiences with computers, beginning when he was still in high school. Among the systems he used then were System/360 mainframes manufactured by IBM, most of them stodgy batch processing systems with little appeal for a young man who had been exposed to interactive computing on systems from General Electric and Digital Equipment Corporation. There was, however, one member of the family which was different: IBM’s entry into the world of timeshared interactive computing, the System/360 Model 67.

The heart of the difference between the 360/67 and other members of the System/360 family is the operating system, composed of two independent parts. CP-67, the control program, provides timeshared access to all of the system’s features in the form of “virtual machines”; CMS, the Cambridge Monitor System, runs in each user’s own virtual machine and provides the interactive facilities for programming, text editing, and everything else the user might want to accomplish. The combination was known as CP/CMS.

I came to work for Paul Allen in 2003, to improve and expand his collection and eventually to turn it into a museum. The wish list we developed was large, and of course included several models of the System/360, including and especially the 360/67. The quixotically intense search met with minimal success for years because IBM almost never sold their large computers, instead leasing them to customers so as to control the supply: IBM did not want to compete against their own products for market share. This meant that retired systems rarely made their way into the hands of collectors; they were instead sold overseas, leased to new customers, or scrapped. For a while, the best we could do was the lights panel from the console of a 360/91 from which all circuitry (rich in gold) had been removed.

The first major break came with a story on the Australian Broadcasting Corporation’s web site about the impending demise of systems owned by the Australian Computer Museum Society. My colleague Keith Perez contacted the ACMS and learned that they owned a 360/40, which they were not interested in deaccessioning. This conversation continued for a while, then tapered off until 2011, when Keith encountered an acquaintance of Tony Epton, president of the ACMS, while on a business trip to Saint-Nazaire, France. The ensuing renewed discussions resulted in another colleague, Ian King, making a side trip to Perth in February, on his way back from a trip to Adelaide to look at an IBM 7090 system. Ian visited the barn in which the ACMS was storing two 360/40 systems, and recommended that we purchase one of them. The system arrived in Seattle in September 2011.

Once the 360/40 arrived, we brought in a retired IBM Customer Engineer to assess its prospects for restoration. At this point we learned something important about IBM mainframes of the 1960s and 1970s: No two are exactly alike, and without the system-specific Automated Logic Diagrams (ALDs), which document how it was assembled, the chances of restoring one to operating condition are greatly reduced. The former CE also noted the amount of dust caked on the circuitry (the system had been stored in a barn in a desert), which would decrease the likelihood of a successful restoration. He passed on the opportunity to work on the project.

In 2012, we acquired three IBM systems (a 360/20, a 360/44, and a 360/65) from the American Computer Museum in Montana, none in working condition: The internal disk drive in the Model 44 had broken loose from its housing and was held in place by a piece of rope, and the internal console cables of the Model 65 had all been cut. The 360/65 was particularly painful: More than a dozen bundles of 50 to 100 identical wires each were made useless. Neither system could be repaired with our facilities.

Bob Barnett, the museum’s business manager, also located a 360/65 in Virginia which belonged to one of the principals at Sine Nomine Associates, David Boyes. David had contacts within IBM who he believed could be helpful in arranging for LCM+L to obtain licenses for the software we wanted to run, and was eager to help us put up a large System/360.

The 360/20 is a 16-bit minicomputer only marginally related to the main System/360 line. As a stopgap, to be able to say we had a running System/360, the one we acquired from the American Computer Museum was restored to running condition by an enthusiastic pair of contractors, Glen Hermmannsfeldt and Craig Arno, with help from Keith Hayes and Josh Dersch of LCM+L; it was displayed in the Computer Room from 2015 to 2017, initially while the restoration work was done and then as an example of a small batch system. As is often done for vintage systems at LCM+L, virtual peripherals–a card reader and punch–were created for the 360/20.

By 2015, the desire for an IBM system capable of providing a timesharing experience led to the acquisition of a 4341 system from Paul Pierce of Portland, Oregon. By this time, we had established an ongoing dialogue with the team who had successfully restored an IBM 1401 at the Computer History Museum (CHM) in California. One of the members of the team introduced us to Fundamental Software Inc. Faced with the task of restoring 40-year-old tape and disk drives, or creating our own emulations, we decided that we would instead acquire an FSI FLEX-CUB to provide disks, tapes, and terminal services to the 4341.

Jeff Kaylin was given the task of making the 4341 CPU run. Beginning in July 2015, he spent seven months getting the power system into working condition; first power up was on 12 February 2016.

Once the system was working to this extent, we ordered a FLEX-CUB from FSI and began attaching 3278 terminals to the built-in controller for testing. Also at this time, David Boyes informed us that he had arranged licensing for the VM/SP HPO operating system for us.

The FLEX-CUB arrived at LCM+L on 1 June 2016, with a minimal VM/370 installation in place courtesy of our friends at FSI. After some phone consultations with FSI Support, we were able to IPL (Initial Program Load) the system into VM/370. Three weeks of getting additional terminals configured followed, with discussions of the OS configuration between FSI and me, replacements of capacitors and CRTs in terminals, and so on. Progress halted on 20 June, when Jeff arrived on a Monday morning to find the system halted with the words CHECK STOP and an error code on the console.

We obtained an 8in diskette with diagnostics from FSI. Memory tests showed that the memory was working; swapping of boards with spares commenced. The power sequence was a suspect for a long time. Jeff began making schematics for the various boards in order to understand where faults might occur that matched the diagnostic callouts. For two months, Jeff wrestled with the system with no progress.

Our consultant from CHM advised Cynde Moya, our Collections Manager, of the existence of 4341 and 4361 systems housed in a warehouse in Sacramento, California. I spoke with the owner, Daniel de Long, and learned that he had a working 4361 plus spares in the form of another 4361 and two 4331s. I traveled to Sacramento a week later to have a look, seeing the 4361 IPLed and running under DOS/VSE. After some discussion, the 4361 equipment began arriving at LCM+L on 2 November 2016.

In December 2016, Jeff began pulling the power supplies out of the 4361, to check the capacitors. All were within tolerance, but since 2004 our policy has always been to replace all aluminum electrolytic capacitors in any device we restore. The new capacitors were installed and the power supplies replaced in the chassis in the remaining weeks of 2016.

In mid-January 2017, the newly refurbished 4361 replaced the 4341 in the Computer Room. FSI, who have been very helpful throughout the project, advised us on how to cable the FLEX-CUB to the new system. A different power outlet was installed to accommodate the different plug on the 4361.

When the power button was pushed, the built-in floppy drives’ motor spun, but stopped as soon as the button was released. Jeff tried attaching the operator console, with no change in behavior. A phone call to Dan de Long revealed that the system was wired for 230V rather than 208V, necessitating either a change in the room wiring or a reconfiguration of the system’s power supplies; the latter was a simple matter of changing jumpers on four transformers to provide single-phase 208V, after which the system powered up and stayed up.

Power issues continued to plague Jeff. The first supply in the system would come up, with its test point providing 1.5V as expected and all the proper voltages supplied; the second and third supplies showed no voltages. Going through the ALDs allowed him to trace through all four supplies with no luck in determining the problem.

After a couple of weeks, I suggested that Jeff contact Dan again, who pointed out that the system requires that a printer be attached in order to complete the power sequence. We ordered capacitors for the printer, and had additional outlets installed under the raised floor. The printer was ready to go a month later, after its degraded old foam insulation was replaced along with the power supply rebuild.

With the printer installed, the system would now power up, but the printer would not stay powered on. A long correspondence, with pictures, commenced between Jeff and Dan. This went on from mid-March to mid-May, when a suggestion to swap the cables on the floppy disk drives led to the replacement of one drive. The system would now perform an Initial Microcode Load (“IML”), after which it suggested running the Problem Finder diagnostic tool. Progress! A few more days of fiddling about (bad breakers in the power supplies, etc.) led to the indicator lights on the console keyboard signalling “Power Complete”.

Jeff cabled the FLEX-CUB to the 4361, and changed some system settings on the console to allow it to run VM/370 instead of DOS/VSE. I sent the FLEX-CUB configuration which had been set up for the 4341 to Fundamental Software; they sent one back which had the proper incantations for the 4361 instead and installed it for us remotely.

After I checked over Jeff’s revised settings on the console, we tried to IPL the system, which could not find the configured IPL device. The Problem Finder tool likewise did not find it. I reviewed the FLEX-CUB configuration, and did not find anything problematic there, so stopped for the evening, asked Jeff to locate the Operating Procedures manual for the 4361, and sent pictures to FSI of the console screen showing the Unit Control Words (UCWs) defined for the devices attached to the system. The next day, I got back suggestions for updated UCWs and updated the settings on the console while Jeff moved the channel cables to their new places. Although the system still did not come up, it did report channel status on the console so we knew the system was alive.

The next day, I revised the UCWs again on advice from FSI, to change the controllers on all disks and tapes to 3880s. Several attempts to IPL the system were unsuccessful, but in the meantime we attached more 3278/3279 terminals and got the correct keyboards on them. A day later, after telling the system that the 3279-2A display was a 1052 Selectric-style printing terminal with no printer attached and another IPL, we were prompted for date and time; FSI advised issuing the command CP ENABLE ALL to make the attached terminals live in the system. FSI did a little more configuration on the FLEX-CUB, and they and I were able to log on to the MAINT account! That was the end of May, 2017.

Now my task of installing a full operating system began. Several weeks of reading manuals ensued, along with the installation of the Hercules emulator on a Windows desktop and on a Linux server. By the end of June, 2017, I had the public domain VM/370 running on both, a task made simpler due in equal parts to the existence of turnkey installations and an active Hercules community. In particular, the members of the Hercules-VM group have been very helpful over the last year, offering suggestions, advice, software, and general excitement for our project.

I reached out to David Boyes to ask that he put us in touch with his IBM contact for licensing VM/SP, the preferred version of VM/CMS for our hardware. David wrote back to me that his contact was no longer at IBM, but that he would try to find us the proper person to talk to; he also told me that the tapes he had preserved had been shipped off to CHM a while back, and that he was asking that images be made. A week later, I had the name of IBM’s Product Manager for z/VM and Related Products, George Madl, and sent him a message outlining LCM+L’s mission and the place of the 4361 and VM/SP in the museum’s offerings. He forwarded the request to Glenda Ford in IBM’s licensing department. Glenda shepherded the request through IBM’s processes for four months and by mid-November had worked out a very favorable license with reasonable restrictions (no support, no commercial use of the system, no fees).

While waiting for an answer to the license question, I moved on with planning for VM/SP, starting with a review of the differences between VM/370 and VM/SP installation. As the weeks went by, I proposed a backup plan in which we would begin by installing VM/370, and upgrade to VM/SP when the licensing came through. This took us to the end of 2017.

In January 2018, with help from FSI, I configured eight 3350 disk drives on the 4361. As we worked together to finalize the new setup, they set up a production VM/370 system on three drives, along with an emulated card reader and punch and an emulated printer. (We even uncovered a bug in the FLEX-CUB software, so the benefit was not all in one direction!) I set up guest accounts for two users who had been asking since the 4341 restoration began, and collected their impressions.

For further planning, I returned to the Hercules emulator, looking at access to language processors and other utilities. I planned to provision our new VM/370 from the prebuilt Hercules disk images, so had to learn the ins and outs of DDR (the DASD Dump/Restore program). I added three more 3350 disks to the system, in order to hold the desired contents from the Hercules ready-built VM/370 system. I had to remember to re-IPL the system in order to make the new drives available; the 1970s had no concept of “plug-and-play” peripherals.

It became clear that the integration of the Hercules “6-pack” (made up of six 3350 disk images) was very tight, and the simplest way forward might be to install these images onto our FLEX-CUB disks via DDR. I consulted with the H390-VM mailing list, who concurred in that idea. However, at this point two people came forward with offers of assistance.

One of the architects of the Hercules “6-pack VM” system had available the installation tapes for VM/SP Release 5, which was our original target for the 4361. He provided us with images of the tapes and images of 3350 disks onto which the installation files had been placed, and gave us a hand from the UK in getting things set up under Hercules.

The other is Drew Derbyshire, one of the VM/370 beta testers. Drew is a contract programmer with 10 years’ experience in the VM/CMS world, including a long stint working on the CP nucleus for IBM. He is also local to Seattle, and a member of LCM+L, so was well placed to help us move forward with the installation and configuration of VM/SP for our particular purpose.

On 1 March 2018, I was able to IPL the 4361 under VM/SP, having copied the installation disk images over to the FLEX-CUB with help from FSI and our helpers. These were still 3350s, so I created sixteen new 3380-K disk images on the FLEX-CUB, a total of just under 20GB of storage space, as the first step in making the system available to the public by 1 April.

At this point Drew, as a contractor, and I began a fruitful working relationship, trading configuration notes, ideas for further work, and so on. Drew set up a Hercules mimic of the 4361’s exact configuration in order to experiment when the museum was not open. This was helpful when the 4361’s disks were clobbered due to errors in configurations, and Drew did the artwork for the VM/SP splash page on display terminals connecting to the system.

Over the next 10 weeks, Drew and I built CP nucleuses with different parameter settings, different numbers of terminals defined, 3380 disks instead of 3350, and so on. In mid-May, the 4361 had a machine check, which Jeff and I traced down over the next week to a memory issue. Jeff pulled memory modules from the 4341 to replace those called out by the IBM diagnostics; I began backing up all the disks to tapes, taking the system down every night and bringing it up the next morning.

The interruption was annoying because the developer/maintainer of the Stanford Pascal Compiler was installing his program on the system when the memory fault occurred. Once that was repaired, Drew and he completed testing of the installation and declared it good.

I booted the 4361 on Friday evening, 18 May 2018, for a test run over the weekend. Drew accidentally crashed it from a remote location on Saturday morning, but brought it back up during open hours at LCM+L. The system ran for a week without incident, so I posted an invitation to the H390-VM list for anyone interested to apply for a beta account. This was as much to test the account management software Drew had written as to shine a light on any blind spots we had with regard to software for the casual user.

Since 1 June 2018, Drew has installed the PL/I Optimizing Compiler, Fortran/VS, and other pieces of software to make the system more hospitable. In addition, one of the beta test users installed a version of the IND$FILE file transfer program by cutting and pasting a hexadecimal dump of the binary program into his directory, then let us know about it to install for general use. Drew has made great use of it to make updates from his Hercules testbed to the running 4361.

Future possibilities include installing RSCS and NJE, the remote-job entry subsystem for VM, to create a BITNET-style network site, and creating subsidiary virtual machines running other interactive operating systems such as the Michigan Terminal System or the McGill University MUSIC timesharing system, so stay tuned for further developments!

What I Learned Mapping Minecraft Worlds

Minecraft is getting a little stale for me now.  I’ve done my exploring, and exploiting.  Nothing left but… to look at the database!

Each Minecraft world has its own folder in the save directory, with other subfolders and a lot of data files.  I noticed that each map created in the world is a separate file, that the file is in GZip format, and that there is a library for Python to look at them.  So, I would tediously explore the world, creating maps, and then look at the database.  The worlds we have on our server are limited in size to plus and minus 1024 blocks.  Each map displays 128 x 128 pixels; a pixel is one block at the closest zoom, and can be 16 x 16 blocks at the furthest zoom.  Plus, the maps have a funny center offset of 64.  That means at the highest resolution, to cover our explorable world, 17 x 17 maps are needed.  First, I would go to -1024, -1024 and create my first map (being Map_0).  Then I would move 128 east and do it 16 more times (ending at location 1024, -1024).  Then I’d go back to -1024 and go south 128.  That would be Map_17, and so it goes until Map_288.  How tedious (more about that later).  But then I have 289 maps, all of which I know how to unpack, and I could then generate a map!
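
For anyone who wants to poke at these files themselves, here is a sketch of reading one map file, assuming the “NBT” package from PyPI (pip install NBT) and the usual map tag layout; the path and world name are placeholders.

from nbt import nbt                                   # pip install NBT

map_file = nbt.NBTFile("saves/MyWorld/data/map_0.dat")   # gzipped NBT, parsed for us
data = map_file["data"]
x_center = data["xCenter"].value
z_center = data["zCenter"].value
scale = data["scale"].value                   # 0 = closest zoom, one block per pixel
colors = data["colors"].value                 # 128 x 128 map color indices

print(f"center ({x_center}, {z_center}), scale {scale}, {len(colors)} pixels")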

But I don’t stop there, oh no!  I can also decode other files in the database to find height information, region information, and the locations of interesting structures — although adding labels and making points visible is still a hand-done process.  *** Something Learned ***  PNG files support transparent areas.  Alas, Microsoft products seem not to support them.  However, one may drag a PNG picture into PowerPoint, use the Picture Tools Format tab, go to Color, and choose Set Transparent Color.  Then click on the background color of your picture and it will disappear!  Then you may right-click on the picture and save it.  Having transparent areas will let me overlay features on my map.
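
If you would rather script it, here is a rough alternative to the PowerPoint trick, assuming the Pillow package (pip install Pillow) and assuming the top-left corner pixel is the background color:

from PIL import Image                          # pip install Pillow

img = Image.open("overlay.png").convert("RGBA")
background = img.getpixel((0, 0))              # assume the top-left pixel is the background color
pixels = [(0, 0, 0, 0) if p[:3] == background[:3] else p for p in img.getdata()]
img.putdata(pixels)
img.save("overlay_transparent.png")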

*** Something Learned ***  How do you make pictures with overlays?  Welcome to CSS.  The key is to place all of the images at the same absolute position.

img
{
  position: absolute;   /* stack every image at the same spot */
  top: 36px;
  left: 0px;
}

#base
{
  z-index: 10;          /* explicit stacking order for the base map */
}

I also assigned z-index to my photos — just to be sure.

In HTML I installed buttons, stacked my maps: Topological, Regional, Altitude; stacked my overlays: Mine Rails, Spawners, Features, and Grid.  Here is the link to a world I want to become our new Amplified world: http://wilkinsfreeman.info/mc/layers.htm

Back to tedious map making.  The reason I stay with Office 2010 products is that later versions reduce functionality for security reasons.  With Excel 2010, I am still able to make system calls and use AppActivate and SendKeys.  Let me tell you how I automated map making.

The first task is to get into the Minecraft world.  Then one must be able to go into creative mode.  My program teleports at altitude, so it is wise to start out flying, looking in an interesting direction, and with nothing in your hands or inventory.  Running the program requires pausing Minecraft, and I use chat for that.  So, the first thing my program needs to do is get out of chat.  I put in a lot of time delays using Now and While loops.  Now returns the time to the second, so the loop below waits until the clock ticks over to the next second (at most one second).

t = Now
While t = Now
  DoEvents
Wend
AppActivate "Minecraft 1.12.2", False
SendKeys "~"

The ~ is the way to send Enter.  AppActivate selects Minecraft as the active window, and then SendKeys sends the Enter.

Next I set up loops to go from -1024 to 1024 in steps of 128 in both directions.  I found that sending the complete command to Minecraft sometimes doesn’t work, so I send the / to start the command, wait, then send the rest of the command.  The first command teleports me to the chosen location, the next command gives me an empty map, then I right click to use the map, then I send Q to throw the map away (since there will be 289 maps, I can’t hold them all).

I have found that a one second wait after the transport may not be enough.  My current program (not quite perfect) goes “/” one second “tp -1023 160 -1023~” two seconds “/” one second “give sigma9 map~” two seconds, right-mouse click, four seconds, “Q” and wait two seconds.  That’s 53 minutes.

' Module-level declarations needed for the mouse clicks below
' (32-bit Office; use "Declare PtrSafe" with 64-bit Office).
Private Declare Sub mouse_event Lib "user32" (ByVal dwFlags As Long, _
    ByVal dx As Long, ByVal dy As Long, ByVal cButtons As Long, ByVal dwExtraInfo As Long)
Private Const MOUSEEVENTF_RIGHTDOWN = &H8
Private Const MOUSEEVENTF_RIGHTUP = &H10

Sub MakeMaps()   ' name is mine; the original listing omitted the Sub header
Dim x As Long, z As Long, t As Date, q As String

For z = -1024 To 1024 Step 128
  For x = -1024 To 1024 Step 128
    ' Teleport to the next map location
    AppActivate "Minecraft 1.12.2", False
    q = Trim(Str(x)) & " 160 " & Trim(Str(z))
    SendKeys "/"
    t = Now
    While t = Now
      DoEvents
    Wend
    SendKeys "tp " & q & "~"
    ' Wait two seconds for the teleport to finish
    t = Now
    While t = Now
      DoEvents
    Wend
    t = Now
    While t = Now
      DoEvents
    Wend
    ' Ask for an empty map
    AppActivate "Minecraft 1.12.2", False
    SendKeys "/"
    t = Now
    While t = Now
      DoEvents
    Wend
    SendKeys "give sigma9 map~"
    t = Now
    While t = Now
      DoEvents
    Wend
    t = Now
    While t = Now
      DoEvents
    Wend
    ' Right-click to use (fill in) the map: send a down event
    AppActivate "Minecraft 1.12.2", False
    mouse_event MOUSEEVENTF_RIGHTDOWN, 0&, 0&, 0&, 0&
    ' and an up
    mouse_event MOUSEEVENTF_RIGHTUP, 0&, 0&, 0&, 0&
    ' Wait four seconds for the map to render
    t = Now
    While t = Now
      DoEvents
    Wend
    t = Now
    While t = Now
      DoEvents
    Wend
    t = Now
    While t = Now
      DoEvents
    Wend
    t = Now
    While t = Now
      DoEvents
    Wend
    ' Throw the finished map away so the inventory doesn't fill up
    AppActivate "Minecraft 1.12.2", False
    SendKeys "q"
    t = Now
    While t = Now
      DoEvents
    Wend
    t = Now
    While t = Now
      DoEvents
    Wend
  Next x
Next z
End Sub

I have found that it is wise to do the AppActivate before the mouse events.  One would think one would only need one AppActivate, but no.  Of course, one must then NOT TOUCH anything for the whole hour.  Do not have e-mail open, do not plug in USB, do not lose power.  It is okay to touch the mouse slightly if you want to be sure your computer doesn’t go to sleep.

The point of this exercise is to find interesting worlds.  I prefer a world with multiple region types, several villages, and a temple and witch’s hut in the explorable part.  I also like to have water near spawn so I can make a quick getaway.  The world in the map example only has 3 villages, which is disappointing, but it has a lovely ocean (not too much of one) and two witch huts just right there.  The desert temple is in the upper-right corner, but it is buried under the amplified terrain.

I will be exploring more world possibilities.  Try LCM+L for the seed and you will get Antarctica.

Fixing 40-year-old Software Bugs, Part One

The museum had a big event a few weeks ago, celebrating the 45th anniversary of the first “Intergalactic Spacewar Olympics.”  Just a couple of weeks before said event, the museum acquired a beautiful Digital Equipment Corporation Lab-8/e minicomputer, and I thought it would be an interesting challenge to get the system restored and running Spacewar in time for the event.

As is fairly obvious to you DEC-heads out there, the Lab-8/e was a PDP-8/e minicomputer in a snazzy green outfit.  It came equipped with scads of analog hardware for capturing and replaying laboratory data, and a small Tektronix scope for displaying information.  What makes this machine perfect for the PDP-8 version of Spacewar is the inclusion of the VC8E Point Plotting controller and the KE8E Extended Arithmetic Element (or EAE).  The VC8E is used by Spacewar to draw the game’s graphics on a display; the EAE is used to make the various rotations and translations done by the game’s code fast enough to be fun.

The restoration was an incredibly painless process.  I started with the power supply, which worked wonderfully after I replaced its 40+ year old capacitors, and from there it was a matter of testing and debugging the CPU and analog hardware.  There were a few minor faults, but in a few days everything was looking good, so I moved on to getting Spacewar running.

But which version to choose?  There are a number of Spacewar variants for the PDP-8, but I decided upon this version, helpfully archived on David Gesswein’s lovely PDP-8 site.  It has the advantage of being fairly advanced with lots of interesting options, and the source code is adaptable for a variety of different configurations — it’ll run on everything from a PDP-12 with a VR12 to a PDP-8/e with a VC8E.

I was able to assemble the source file into a binary tape image suited for our Lab-8/e’s hardware using the Palbart assembler.  The Lab-8/e has a VC8E display and the DK8-EP programmable clock installed.  (The clock is used to keep the game running at a constant frame rate; without it the game speed would vary depending on how much stuff was onscreen and how much work the CPU has to do.)  These are selected by defining VC8E=1 and DKEP=1 in the source file.

Loading and running the program yielded an empty display, though the CPU was running *something*.  This was disappointing, but did I really think it’d be that easy?  After some futzing about I noticed that if I hit a key on the Lab-8/e’s terminal, the Tektronix screen would light up briefly for a single frame of the game, and then go dark again.  Very puzzling.

My immediate suspicion was that the DK8-EP programmable clock wasn’t interrupting the CPU. The DK8-EP’s clock can be set to interrupt after a specified interval has elapsed, and Spacewar uses this functionality to keep the game running at a steady speed — every time the clock interrupts, the screen is redrawn and the game’s state is updated.  (Technically, due to the way interrupts are handled by the Spacewar code, an interrupt from any device will cause the screen to be redrawn — which is why input from the terminal was causing the screen flash.)

I dug out the DK8-EP diagnostics and loaded them onto the Lab-8/e.  The DK8-EP passed with flying colors, but Spacewar was still a no go.  I decided to take a closer look at the Spacewar code, specifically the code that sets up the DK8-EP.  That code looks like this (with PDP-12 conditional code elided):

/SUBROUTINE TO START UP CLOCK
/MAY BE HARDWARE DEPENDENT
/THIS IS FOR KW12A CLOCK - PDP12
/OR PROGRAMABLE PDP8E CLOCK DK8EP
   CLSK=6131      /SKIP IF CLOCK
   CLLR=6132      /LOAD CONTROL
   CLAB=6133      /AC TO BUFFER PRESET
   CLEN=6134      /LOAD ENABLE
   CLSA=6135      /BIT RESET FLAGS

STCLK, 0
   CLA CLL        /JUST IN CASE   
   TAD (-40       /ABOUT 30CPS
   CLAB           /LOAD PRSET
   CLA CLL
   TAD (5300      /INTR ON CLOCK - 1KC
   CLLR
   CLA CLL
   JMP I STCLK

The part relevant to our issue is the CLLR IOT instruction, which is used to load the DK8-EP’s clock control register with the contents of the 8’s Accumulator register (in this case, loaded with the value 5300 octal by the previous instruction).  The comments suggest that this sets a 1 kHz clock rate, with an interrupt every time the clock overflows.

I dug out a copy of the programming manual for the DK8-EP from the 1972 edition of the “PDP-8 Small Computer Handbook” (which you can find here if you’re so inclined).  Pages 7-28 and 7-29 reveal the following information:

DK8-EP Nitty Gritty

The instruction we’re interested in is CLDE (octal 6132), “Set Clock Enable Register Per AC” (the Spacewar code defines this as CLLR).  The value set in the AC by the Spacewar code (from the octal value 5300) decodes as:

  • Bit 0 set: Enables clock overflow to cause an interrupt.
  • Bits 1 & 2 set to 01: Counter runs at selected rate.
  • Bits 3, 4 & 5 set to 011: 1 kHz clock rate.

(Keep in mind that the PDP-8, like many minicomputers from the era, numbers its bits in the opposite order of today’s convention, so the MSB is bit 0, and the LSB is bit 11.)  So the comments in the code appear to be correct: the code sets up the clock to interrupt, and it should be enabled and running at a 1Khz rate.  Why wasn’t it interrupting?  I wrote a simple test program to verify the behavior outside of Spacewar, just in case it was doing something unexpected that was affecting the clock.  It behaved identically.  At this point I was beyond confused.

But wait: The diagnostic was passing — what was it doing to make interrupts happen?

DK8E-EP Diagnostic Listing

The above is a snippet of code from the DK8E family diagnostic listing, used to test whether a clock overflow causes an interrupt as expected.  The JMS I XIOTF instruction at location 2431 jumps to a subroutine that executes a CLOE IOT to set the Clock Enable Register with the contents in AC calculated in the preceding instruction.  (Wait, CLOE?  I thought the mnemonic was supposed to be CLDE?)  The three TAD instructions at locations 2426-2430 define the Clock Enable Register bits.  The total sum is 4610 octal, which means (again referring to the 1972 Small Computer Handbook):

  • Bit 0 set: Enables clock overflow to cause an interrupt.
  • Bits 1 & 2 unset: Counter runs at selected rate, and overflows every 4096 counts.
  • Bits 3, 4 & 5 set to 110: 1 MHz clock rate.
  • Bit 8 set: Events in Channels 1, 2, or 3 cause an interrupt request and overflow.

So this seems pretty similar to what the Spacewar code does (at a different clock rate), with one major difference: bit 8 is set.  Based on the description in the Small Computer Handbook, having bit 8 set doesn’t make a lot of sense: this test isn’t testing channels 1, 2, or 3, and this code doesn’t configure those channels either.  Also, the CLOE vs. CLDE mnemonic difference is odd.

All the same, the bit is set and the diagnostic does pass.  What happens if I set that Clock Enable Register bit in the Spacewar code?  Changing the TAD (5300 instruction to TAD (5310 is a simple enough matter (why, I don’t even need to reassemble it, I can just toggle the new bits in via the front panel!) and lo and behold… it works.
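
If you want to double-check the bit fiddling yourself, here is a tiny helper (my own, in Python; not part of any period tooling) that decodes these octal values using the PDP-8’s MSB-first bit numbering described earlier:

def pdp8_bits(value):
    """Return the 12 bits of a PDP-8 word, bit 0 (the MSB) first."""
    return [(value >> (11 - i)) & 1 for i in range(12)]

for octal in ("5300", "5310", "4610"):
    print(octal, pdp8_bits(int(octal, 8)))

# 5300: bit 0 set, bits 3-5 = 011, bit 8 clear  (Spacewar's original value)
# 5310: the same, but with bit 8 set            (the fixed value)
# 4610: bits 3-5 = 110, bit 8 set               (the diagnostic's value)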

But why doesn’t the code make any sense?  I thought perhaps there might have been a different revision of the hardware or a different set of documentation so I took a look around and finally found the following at the end of the DK8-EP engineering drawings:

The Real Instructions

Oh hey look at that why don’t you.  Bit 8’s description is a bit more elaborate here: “Enabled events in channels 1, 2, or 3 or an enabled overflow (bit 0) cause an interrupt request when bit 0 is set to a one.” And per this manual, setting bit 0 doesn’t enable interrupts at all! To add insult to injury, on the very next page we have this:

More Real Info

That’s definitely CLOE, not CLDE.  The engineering drawings date from January 1972 (first revision in 1971), while the 1972 edition of the PDP-8 Small Computer Handbook has a copyright of 1971, so they’re from approximately the same time period.  I suspect that the programming information given in the Small Computer Handbook was simply poorly transcribed from the engineering documentation…  and then Spacewar was written using it as a reference.  Given that this version of Spacewar supports a multitude of different hardware (including four different kinds of programmable clocks), there is a good chance it was never actually tested with a DK8-EP.  Or perhaps there actually was a hardware change removing the requirement for bit 8 being set, though I can find no evidence of one.

So with that bug fixed, all’s well and our hero can ride off into the sunset in the general direction of the 2017 Intergalactic Spacewar Olympics, playing Spacewar all the way.  Right?  Not so fast, we’re not out of the woods yet.  Stay tuned for PART TWO!

The Xerox Alto: An Introduction

In this series of articles I’ll go into the Alto in detail — the low-level hardware implementation and microcode, the software and languages and ideas that this hardware made possible, and the environment at Xerox PARC where the Alto was designed and used.  But before we dive down to the bedrock, let’s start with a bit of background.

Alto History

In the early 1970s, Alan Kay, a computer scientist at PARC, had a vision of a personal portable computer that he called the Dynabook.  In many ways, the Dynabook resembled modern laptops or tablets – it was to weigh under two pounds, contain a keyboard, display, and pointing device, and have a tablet form factor.  The goal of the Dynabook was to be “a personal computer for children of all ages,” a portable educational computer.  Kay’s vision wasn’t technically feasible at the time – but it was a driving force for the research he led both during his tenure at PARC and beyond.

Alan Kay’s Dynabook. Image courtesy of Wikipedia.

In 1972 Kay proposed the idea of building a small personal computer (which he termed “KiddiComp”) to allow experimenting with the kinds of ideas he envisioned for the future Dynabook – in particular user interface design, education, and computer literacy for children.  This was centered around a programming language he called “Smalltalk,” which made use of hardware that was unique for the time: a high-resolution bitmapped display with a pointing device.  Unfortunately, when Dr. Kay submitted a proposal to the management at PARC to get a few KiddiComp systems built, he was denied funding.

Enter Butler Lampson and Chuck Thacker.  When PARC’s request for the purchase of a DEC PDP-10 for their research work was turned down in 1971, these two brilliant engineers figured they could design and build their own PDP-10 within eighteen months.  And they did – MAXC (pronounced “Max”) was a microcoded recreation of the PDP-10 architecture using semiconductor memory rather than core, and a pair of them were used at PARC for the next decade.  By 1972 Lampson and Thacker were both itching for a new project.

From Alan Kay’s “Early History of Smalltalk”:

In Sept [1972] … Butler and Chuck came over and asked: "Do you have any money?" I said, 
"Yes, about $230K for NOVAs and CGs. Why?" They said, "How would you like us to build 
your little machine for you?" I said, "I'd like it fine. What is it?" Butler said: 
"I want a '$500 PDP-10', Chuck wants a '10 times faster NOVA', and you want a 'kiddicomp'. 
What do you need on it?" I told them most of the results we had gotten from the fonts, 
painting, resolution, animation, and music studies. I asked where this had come from 
all of a sudden and Butler told me that they wanted to do it anyway, that Executive "X" 
[the executive who shot down Kay’s KiddiComp request] was away for a few months on a 
"task force" so maybe they could "Sneak it in", and that Chuck had a bet with Bill Vitic 
that he could do a whole machine in just 3 months. "Oh," I said.

Thacker and crew started the skunkworks project on November 22, 1972 and by April of 1973 the first Interim Dynabook (aka Alto) was up and running and displaying graphics:

The first Alto. From Alan Kay’s “Early History of Smalltalk.”

The Alto featured a bitmapped display of 606×808 pixels with the approximate dimensions of an 8.5”x11” sheet of paper, 64KW (in 16-bit words) of semiconductor memory, a microcode clock rate of 170 ns (approximately 6 MHz), and local storage of 2.5MB on a removable pack.  In 1974 the design and implementation of Ethernet networking was completed, and it became standard Alto hardware.  All of this was implemented in a couple hundred ICs and fit under a desk.  Over the next 10 years, approximately 2,000 Altos were manufactured for use within PARC and at research labs and universities around the world.

The Alto was designed as a research vessel for efforts within PARC, and in that regard it was an astounding success.  Over the next decade, the Alto was involved in experiments in:

  • Human/Computer interaction
  • Education
  • Programming languages (BCPL, Smalltalk, Lisp, and Mesa)
  • Networking and distributed computing
  • Desktop publishing, word-processing and laser printing
  • The Graphical User Interface (GUI)
  • Computer-generated music and audio
  • Computer-generated graphics and animation
  • Computer-aided circuit design

Most famously, the Alto (running Smalltalk) served as the inspiration for the modern Graphical User Interface.  As the story goes, in 1979 teams from both Apple and Microsoft saw what PARC had been working on and were inspired, integrating aspects of what they saw into the Macintosh and Windows, respectively.

Smalltalk was important in early computer education and HCI studies and lives on to this day in the Squeak programming language.

The Mesa programming language originated on the Alto and was instrumental in the development of the Xerox Star desktop environment.

The Bravo word processor introduced WYSIWYG editing to the world and is in many respects the great-grandfather of Microsoft Word – Bravo’s author Charles Simonyi took what he’d learned from Bravo with him to Microsoft when he left PARC in the early 1980s.

The Ethernet networking research done with the Alto at PARC led to the definition of Ethernet as an industry standard and was a major influence on the design of TCP/IP and associated networking protocols.

The Alto’s success was due in no small part to its clever, minimalist hardware design.  This design made the Alto extremely flexible and easily adaptable to whatever crazy idea or experiment that needed to be investigated.

In the forthcoming articles in this series, I will go into detail about the Alto’s hardware and demonstrate how the Alto achieved this flexibility.

Apple 1 INTEGER BASIC #2

Okay, if the Apple 1 computer does not have screen addressing, how am I doing SPIROGRAPH?

7000 REM SET PIXEL (X, Y) 0-39, 0-47
7010 Z = 1 : Q = V(10*(Y/2)+X/4+1) * 2
7020 FOR S = 1 TO 4 - X MOD 4
7030 Z = Z * 4: Q = Q / 4
7040 NEXT S
7050 Q = Q MOD 4: Z = Z / 4
7060 IF Y MOD 2 = 0 THEN 7080
7070 IF Q MOD 4 = 0 THEN V(10*(Y/2)+X/4+1) = V(10*(Y/2)+X/4+1) + Z: RETURN
7080 IF Q MOD 4 = 0 THEN V(10*(Y/2)+X/4+1) = V(10*(Y/2)+X/4+1) + Z * 2: RETURN

8000 REM PRINT SCREEN 40 * 48; TBTBTBTB
8010 FOR Y = 0 TO 23
8020 FOR X = 0 TO 39
8030 IF Y = 23 AND X = 39 THEN RETURN
8040 Z = V(Y * 10 + X / 4 + 1) * 4
8050 FOR S = 1 TO 4 - X MOD 4
8060 Z = Z / 4
8070 NEXT S
8080 Z = Z MOD 4
8090 IF Z = 3 THEN PRINT ":";
8100 IF Z = 2 THEN PRINT "'";
8110 IF Z = 1 THEN PRINT ",";
8120 IF Z = 0 THEN PRINT " ";
8130 NEXT X
8140 NEXT Y

As the screen is 40 x 24 characters, it would be nice to make that 40 x 48 — using ' , and : to double the vertical resolution.

First I made an array of 240 integers.  It would have been nice to make an array of 40 x 48 integers, but there isn’t enough memory, and two-dimensional arrays are not supported.  So I had to fit two vertical dots and four horizontal dots into one integer.  Then, using shift-by-divide and the MOD function, I could calculate what I wanted on the screen and finally print it all out at once.  However, I didn’t print out the last character, as that would have scrolled the screen.
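
Here is the same packing idea sketched in Python rather than Integer BASIC (my own rough equivalent, not a line-for-line port; the mapping of dot pairs to characters is an assumption):

WIDTH, HEIGHT = 40, 48
V = [0] * ((WIDTH // 4) * (HEIGHT // 2))          # 10 * 24 = 240 packed cells

def set_dot(x, y):
    cell = (y // 2) * (WIDTH // 4) + x // 4
    shift = 2 * (x % 4)                           # which 2-bit column inside the cell
    bit = 2 if y % 2 == 0 else 1                  # upper or lower dot of the character cell
    V[cell] |= bit << shift

def render():
    chars = " ,':"                                # 0 = blank, 1 = lower, 2 = upper, 3 = both
    rows = []
    for row in range(HEIGHT // 2):
        line = ""
        for x in range(WIDTH):
            value = (V[row * (WIDTH // 4) + x // 4] >> (2 * (x % 4))) & 3
            line += chars[value]
        rows.append(line)
    return "\n".join(rows)

set_dot(5, 7)                                     # light one dot...
print(render().splitlines()[3])                   # ...and show the character row containing it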

Spirograph is round, so the full 48 lines vertically are not needed.  And this is important, because the program actually loads but does not run, giving an out-of-memory error.  Deleting some of the many REMarks frees up enough space to run.