ApolloHoax.net

Apollo Discussions => The Hoax Theory => Topic started by: Everett on November 26, 2019, 12:18:39 PM

Title: "The AGC wan't powerful enough"
Post by: Everett on November 26, 2019, 12:18:39 PM
A pretty common (but strangely never elaborated on... ::)) HB claim is that the Apollo Guidance Computer wasn't powerful enough. Reading (in years-old threads, so I can't actually reply) some HBs say "it had the power of a pocket calculator" makes me want to scream "WHAT DO YOU THINK A GUIDANCE COMPUTER IS? IT'S A GLORIFIED CALCULATOR HOOKED UP TO AN AUTOPILOT!"
Excuse me there, wanted to get that out. Ahem. Anyway, it seems (presumably younger) people think everything has to be done with a computer. Some prominent HB (the kind with a book/DVD to sell) called it "just enough to change to the spin cycle on the washing machine." Oddly enough, while my washer has a lot of computer options built into it (which ironically makes it more complicated to use), my dryer from the 80's can change cycles all by itself, and it doesn't have a computer in it. It's also far more reliable, and has outlasted two consecutive computer-containing washers. People seem to forget (or really were born far after) what you can do without any computer at all. Somebody on the DI forum claimed 2 KB of RAM and 32 KB of ROM isn't enough to run a string of neon lights. Well, I'm a bit of a historian in a bunch of fields, and I'm pretty sure they had big blinking/etc. neon lights in the 50's, and I don't think those used computers... ::)

If anything, it's more powerful than it needs to be. The alignment procedure to correct gyro drift had (IIRC) the computer orient the ship to where its current gyro alignment thought a star should be; the sextant was then used to correct for where it actually was. Really, as long as there were attitude display counters with enough digits, there's no reason the ship couldn't be manually maneuvered to an attitude an astronaut looked up from a sheet of paper; he could then figure out the deviation and manually enter the actual attitude into the computer. The math also actually isn't as complex as I thought it was; I'd always assumed it was calculus, but looking at BobB's site showed it's only (dense) algebra. It's worth pointing out that on Apollo 11, simply running the rendezvous radar during descent actually exceeded the AGC's capacity. It's also worth pointing out that it was not reprogrammable; all the programs were hardwired in (literally!) when they were made. (This would probably be a good place to show a photograph of the "programs", the core rope memory modules.) As for the digital autopilot part, my response is more along the lines of "cool, you made a digital one." Completely analog electro-mechanical autopilots capable of doing basically the same job had been standard issue in airliner cockpits for two decades by that point.
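To make the "glorified calculator hooked up to an autopilot" point concrete, here is a minimal sketch of what the core of a digital attitude-hold loop amounts to. The gains, units and physics are made up for illustration; this has nothing to do with the actual AGC code, it just shows that the job is "read error, do arithmetic, command torque, repeat":

```python
def attitude_hold_step(angle, rate, target, kp=2.0, kd=4.0):
    """One digital-autopilot step: a simple proportional-derivative
    law turning attitude error into a commanded torque (toy units)."""
    error = target - angle
    return kp * error - kd * rate

def simulate(angle, rate, target, inertia=1.0, dt=0.1, steps=200):
    """Euler-integrate a rigid body under the commanded torque."""
    for _ in range(steps):
        torque = attitude_hold_step(angle, rate, target)
        rate += (torque / inertia) * dt
        angle += rate * dt
    return angle, rate
```

Run against even this crude physics model, the loop settles on the target attitude; that loop plus the guidance arithmetic is essentially the job description.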

Since most people don't know about it (probably since most people who used them firsthand are dead by now), it's actually somewhat surprising what analog electro-mechanical computers can do. I've flown (in a flight simulator, but with full-system accuracy) a 1947-vintage airliner from just after takeoff in New York all the way to final approach in Paris using a 1947-vintage autopilot. All it does is pitch hold, heading hold, altitude hold, and perfectly coordinated turns. (Better than I can.) You can use the turn handle to change heading and then it'll hold that, you can adjust the pitch it's supposed to hold to climb or descend (along with manual control of engine power), and if you're at the altitude you want you can set it to hold it. (The navigation didn't involve said autopilot.) 0 KB of RAM, 0 KB of storage. And that's really all you need. Heck, the Soyuz capsule didn't get any digital computer until the 21st century! And it was more automated (cam timer systems) than US craft of the same vintage.

Just going by what was in widespread use, analog computers can be a lot more capable than people think. For an example of what an analog computer could do all the way back in 1935, check out this page:
http://www.navweaps.com/index_tech/tech-056.php
Basically, a mechanical computer that handled fire control calculations for anti-aircraft fire (and surface fire) in all three axes, automatically aiming the guns so that the shell would arrive where the aircraft will be by the time it gets there. It has a gyro to correct for ship roll and such, takes into account wind speed and direction and gun wear (input manually), and even compensates for the Coriolis effect! Radars elsewhere on the ship were capable of generating inputs themselves, which would feed directly into the fire control computer, which could aim the guns all on its own, no human intervention required. And it's not the size of a room either; it's 3 ft x 4 ft x 6 ft.
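For the curious, the heart of that "aim where the aircraft will be" problem can be sketched digitally in a few lines. This is a toy version only (constant target velocity, constant shell speed, no gravity or ballistics, made-up numbers), solved by fixed-point iteration where the Mark-style computers used cams and gears:

```python
import math

def intercept_time(target_pos, target_vel, shell_speed, iters=50):
    """Solve for t such that a constant-speed shell meets a
    constant-velocity target: t = |P + V*t| / s, iterated to a
    fixed point (converges fast when the target is slower than
    the shell)."""
    t = 0.0
    for _ in range(iters):
        x = target_pos[0] + target_vel[0] * t
        y = target_pos[1] + target_vel[1] * t
        z = target_pos[2] + target_vel[2] * t
        t = math.sqrt(x * x + y * y + z * z) / shell_speed
    return t

def aim_point(target_pos, target_vel, shell_speed):
    """Where to point the guns: the target's predicted position."""
    t = intercept_time(target_pos, target_vel, shell_speed)
    return tuple(p + v * t for p, v in zip(target_pos, target_vel))
```

For a target 8,000 m out at 3,000 m altitude closing at 100 m/s against an 800 m/s shell, the solution comes out at roughly a 9.6-second time of flight; the mechanical computer solved the same geometry continuously, in real time.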
Title: Re: "The AGC wan't powerful enough"
Post by: Obviousman on November 26, 2019, 03:27:21 PM
These are the range / fire control computers from the USS Alabama and the USS Drum, both WW2-vintage vessels.

Even in the mid-1980s, when I did my Navigator training, we were still using the Ground Position Indicator Mark 7 (GPI Mk7). This was a mechanical computer that gave you wind velocity (manual wind or via Doppler), true air speed (TAS), ground speed, position along & across planned track, or just plain latitude / longitude. Hook that up to an autopilot and you had a complete navigation system, so what is so strange about the AGC?

(As an aside, the mechanical GPI Mk7 used a spherical resolver to account for latitude changes. Since pretty much all navigation sorties went in / out on the 360 radial until 50 nm from East Sale, the resolvers (think a metal ball, something like a golf ball but smooth) ended up with a groove worn in them. This meant it was not unusual to be approaching the 360 at 50 when all of a sudden the GPI would 'jump' to the 360 radial!)

And if you have a penchant for 50s / 60s British nav and bombing systems, you'll enjoy this... and should let her know what an outstanding job she has done:

http://www.tatjavanvark.nl/tvve/dduck0.html
Title: Re: "The AGC wan't powerful enough"
Post by: Abaddon on November 26, 2019, 08:43:26 PM
A pretty common (but strangely never elaborated on... ::)) HB claim is that the Apollo Guidance Computer wasn't powerful enough. Reading (in years-old threads, so I can't actually reply) some HBs say "it had the power of a pocket calculator" makes me want to scream "WHAT DO YOU THINK A GUIDANCE COMPUTER IS? IT'S A GLORIFIED CALCULATOR HOOKED UP TO AN AUTOPILOT!"
<respectful snip for brevity>

Functionally, it was exactly a calculator. It was never designed to add up one's taxes, or do one's math homework; it was designed to operate a CSM/Lunar Module to achieve a lunar landing. That is a very different task.

Add to that the simple fact that there exists a gaggle of people out there that have reconstructed that very same AGC to perform exactly as specified just for the sheer hell of it.

Hell, I will admit that I harbour a desire to do likewise. Alas there is never the time. While I lack the time to do it, many have already done it and demonstrated that it does exactly what it says on the tin.

Bottom line is that this is a classic argument from ignorance. "If I can't figure it out, then nobody else can". Oh really? I think not. Well, that's a lie. I know not.
Title: Re: "The AGC wan't powerful enough"
Post by: MBDK on November 27, 2019, 04:31:39 AM
Add to that the simple fact that there exists a gaggle of people out there that have reconstructed that very same AGC to perform exactly as specified just for the sheer hell of it.
Indeed.  I forget how/where I acquired this link (may have even gotten it from one of this board's members), but here is a page where you can see the step-by-step process of how one group did it, and how to build your own, if you are able (money, time, tools, etc.) and so inclined.  There are also links to programming guides for it on that page.

https://www.instructables.com/id/Open-Apollo-Guidance-Computer-DSKY/

Edit:  Forgot to include the link.  *sheepish grin*
Title: Re: "The AGC wan't powerful enough"
Post by: smartcooky on November 28, 2019, 06:36:34 AM
Just going by what was in widespread use, analog computers can be a lot more capable than people think. For an example of what an analog computer could do all the way back in 1935, check out this page:
http://www.navweaps.com/index_tech/tech-056.php
Basically, a mechanical computer that handled fire control calculations for anti-aircraft fire (and surface fire) in all three axes, automatically aiming the guns so that the shell would arrive where the aircraft will be by the time it gets there. It has a gyro to correct for ship roll and such, takes into account wind speed and direction and gun wear (input manually), and even compensates for the Coriolis effect! Radars elsewhere on the ship were capable of generating inputs themselves, which would feed directly into the fire control computer, which could aim the guns all on its own, no human intervention required. And it's not the size of a room either; it's 3 ft x 4 ft x 6 ft.

Back in the dim dark ages of the early 1970's, one of the systems I used to service and repair was something called an "Air Position and Mileage System". It was fitted to the De Havilland Devon and Heron aircraft, the Bristol Freighter and the English Electric Canberra. This was a devilishly clever bit of kit - in effect an electro-mechanical computer that solved the dead-reckoning navigation formula and provided the aircrew with an approximate latitude and longitude. This was from before the days of inertial navigation, and when GPS stood for General Purpose Shelter.

It consisted of two major parts:

1. The Air Mileage Unit used an artificial "pitot balance pressure" produced by a motor-driven fan and compared it with the pressure from the pitot head. By varying the speed of the fan motor to balance the two pressures against each other, the system made the average speed of the fan motor an electrical value for the true airspeed at any altitude within the aircraft's operational parameters.

2. The Air Position Unit sat below the navigator's table, and drove the Air Position Indicator (located on the Nav's instrument panel).  On take-off, it would be set to the known Lat & Long; thereafter, it took the electrical pitot input, combined it with electrical signals from the gyrocompass, and fed the result via Bowden cables to the API to drive latitude and longitude counters. The system was reasonably accurate (later models were able to account for drift angle caused by wind). The counters could be reset manually when definite visual or other fixes were obtained.

All of this was achieved with a combination of gears, synchros and electric motors... not a computer chip in sight!
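Both halves of that system can be sketched in software to show how little "computing" is involved. All the constants below are made up for illustration (they are not the real unit's calibrations): first a servo loop that balances a fan's pressure against the pitot pressure, then a dead-reckoning integrator driving latitude/longitude counters:

```python
import math

EARTH_NM_PER_DEG = 60.0  # one degree of latitude is about 60 nautical miles

def balance_fan(pitot_pressure, k=0.5, gain=0.001, steps=2000):
    """Air Mileage Unit idea: nudge the fan speed until the fan's
    pressure (k * speed^2, a toy model) balances the pitot pressure.
    The balanced speed is then a direct analogue of true airspeed."""
    speed = 1.0
    for _ in range(steps):
        fan_pressure = k * speed * speed
        speed += gain * (pitot_pressure - fan_pressure)
    return speed

def dead_reckon(lat, lon, tas_knots, heading_deg, hours, dt=0.01):
    """Air Position Unit idea: resolve airspeed through the
    gyrocompass heading and integrate it into lat/long counters."""
    for _ in range(int(round(hours / dt))):
        h = math.radians(heading_deg)
        lat += tas_knots * math.cos(h) * dt / EARTH_NM_PER_DEG
        lon += (tas_knots * math.sin(h) * dt /
                (EARTH_NM_PER_DEG * math.cos(math.radians(lat))))
    return lat, lon
```

An hour due east at 300 knots from 40N moves you 300 nm of easting, which the longitude counter scales by the local convergence, exactly what the spherical resolver did mechanically.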
Title: Re: "The AGC wan't powerful enough"
Post by: raven on November 28, 2019, 07:35:36 PM
One thing of note is, at least from my fairly cursory examination of the material, the early moon landing conspiracy claims don't mention a lack of computing power as being a showstopper.
It's only later, as home computers and similar devices got more and more powerful (I have a pretty out-of-date smartphone, a Samsung Galaxy S5, and it still has far more computing power than everything involved in the whole Apollo program combined), that this claim started to crop up. To those who say "But a simulation of the lunar landing takes X power, yet they only had Y!" I say, reality has its own weight. How much computation does it take to simulate dropping a rubber ball? How much does it take to drop a ball? Infinitely less than the former.
Really, the whole thing can be debunked by a high school dropout, namely myself, so it's definitely scraping the barrel.
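The rubber-ball point above can even be quantified with a toy sketch (made-up time step; simple Euler integration, not any particular simulator): a computer "dropping" a ball from 5 m grinds through on the order of a thousand arithmetic steps before the ball reaches the floor, while reality does it for free.

```python
def simulate_drop(height_m, dt=0.001, g=9.81):
    """Count the integration steps a computer spends 'dropping' a
    ball that reality drops with zero computation: semi-implicit
    Euler until the ball reaches the floor."""
    y, v, steps = height_m, 0.0, 0
    while y > 0.0:
        v += g * dt      # gravity accelerates the ball
        y -= v * dt      # ball falls a little further
        steps += 1
    return steps
```

And that is with no graphics, no bounce, no air resistance; the simulation always costs more than the thing itself.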
Title: Re: "The AGC wan't powerful enough"
Post by: smartcooky on November 29, 2019, 04:50:27 AM
One thing of note is, at least from my fairly cursory examination of the material, the early moon landing conspiracy claims don't mention a lack of computing power as being a showstopper.
It's only later, as home computers and similar devices got more and more powerful (I have a pretty out-of-date smartphone, a Samsung Galaxy S5, and it still has far more computing power than everything involved in the whole Apollo program combined), that this claim started to crop up. To those who say "But a simulation of the lunar landing takes X power, yet they only had Y!" I say, reality has its own weight. How much computation does it take to simulate dropping a rubber ball? How much does it take to drop a ball? Infinitely less than the former.
Really, the whole thing can be debunked by a high school dropout, namely myself, so it's definitely scraping the barrel.

It's probably the case that it would have taken more "computer power" to fake the moon landings in 1969 than it did to actually go there.
Title: Re: "The AGC wan't powerful enough"
Post by: JayUtah on November 29, 2019, 01:04:47 PM
You can never bore me by talking about electromechanical computers.  I think I was a preteen the first time I encountered one, an elevation computer in a derelict WWII tank.  There's something viscerally pleasing about turning knobs and watching dials magically move back and forth in response.  Then recently I had the pleasure of engaging with one of the docents aboard USS Iowa who happened to have been an operator for the fire control computers during his active service.  There's nothing more exciting to me than to see these old guys' eyes light up when they realize you understand how their machinery worked, and when you can work through the trigonometry with them, and also roll up your sleeves and get into the guts of the mechanisms.

I think there's a Clavius page that explains why simulating something is actually more computationally intensive than just coping with the inputs nature provides you.  And yes, the modern desktop computer running a game-type simulation also has to make the graphics to simulate the things that simply exist as physical objects in the actual LM and mind their own business.  For some middle ground, try the old Lunar Lander game.  Those were electronic, to be sure, but did not use general-purpose programmable CPUs.  Most of those games were simple state machines implemented with TTL logic.  The "program" was soldered onto the board.  I bring this up because the AGC, as a stored-program computer, was already more sophisticated than that.

My software guys are adapting code originally intended for an embedded device to run on general-purpose computers.  When I hear a groan from their cubicles, it's usually because they've run across yet another programming construct that presumes the program is running in real-time mode on a dedicated CPU, and they'll have to make risky architectural changes to get it to run on a time-sharing system.  This factors into the comparison because the AGC software could rightly presume it was the only software running on the hardware.
Title: Re: "The AGC wan't powerful enough"
Post by: Abaddon on November 29, 2019, 04:09:54 PM
You can never bore me by talking about electromechanical computers.  I think I was a preteen the first time I encountered one, an elevation computer in a derelict WWII tank.  There's something viscerally pleasing about turning knobs and watching dials magically move back and forth in response.  Then recently I had the pleasure of engaging with one of the docents aboard USS Iowa who happened to have been an operator for the fire control computers during his active service.  There's nothing more exciting to me than to see these old guys' eyes light up when they realize you understand how their machinery worked, and when you can work through the trigonometry with them, and also roll up your sleeves and get into the guts of the mechanisms.

I think there's a Clavius page that explains why simulating something is actually more computationally intensive than just coping with the inputs nature provides you.  And yes, the modern desktop computer running a game-type simulation also has to make the graphics to simulate the things that simply exist as physical objects in the actual LM and mind their own business.  For some middle ground, try the old Lunar Lander game.  Those were electronic, to be sure, but did not use general-purpose programmable CPUs.  Most of those games were simple state machines implemented with TTL logic.  The "program" was soldered onto the board.  I bring this up because the AGC, as a stored-program computer, was already more sophisticated than that.

My software guys are adapting code originally intended for an embedded device to run on general-purpose computers.  When I hear a groan from their cubicles, it's usually because they've run across yet another programming construct that presumes the program is running in real-time mode on a dedicated CPU, and they'll have to make risky architectural changes to get it to run on a time-sharing system.  This factors into the comparison because the AGC software could rightly presume it was the only software running on the hardware.
Yup. Early PCs for example had a problem with IRQ conflicts. The AGC avoids such things because there is nothing else to generate an IRQ.
Title: Re: "The AGC wan't powerful enough"
Post by: smartcooky on November 29, 2019, 04:44:14 PM
For some middle ground, try the old Lunar Lander game.  Those were electronic, to be sure, but did not use general-purpose programmable CPUs.  Most of those games were simple state machines implemented with TTL logic.  The "program" was soldered onto the board.  I bring this up because the AGC, as a stored-program computer, was already more sophisticated than that.

I had a Lunar Lander game that could be programmed into my old HP-25C programmable calculator. IIRC, you had to punch in about 40 lines of code, and then play the game by repeatedly deciding how long to burn the descent engine (0 = engine off).

There were no graphics, of course; you had to work out what was happening by interpreting the red numeric LED display which, IIRC, showed the velocity, altitude and fuel remaining.

I'll bet that program used craploads more "computing power" than actually letting the LM fall out of the lunar sky to land at Tranquillity Base!
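The guts of that calculator game fit in a few lines even today. This is a generic reconstruction with invented constants (gravity, thrust-per-unit-of-fuel), not the actual HP-25C listing: each "turn", the player picks a burn, and gravity and thrust fight it out for one time step.

```python
def lander_step(alt, vel, fuel, burn, dt=1.0, g=1.62, thrust_per_unit=3.0):
    """One round of a text lunar-lander game (toy constants).
    vel < 0 means descending; land safely by reaching alt == 0
    with a small |vel|."""
    burn = min(burn, fuel)              # can't burn fuel you don't have
    accel = thrust_per_unit * burn - g  # net upward acceleration
    vel += accel * dt
    alt += vel * dt
    fuel -= burn
    return max(alt, 0.0), vel, fuel
```

Loop that with keyboard input and a print of altitude/velocity/fuel and you have the whole game, which, as noted, is already more "simulation" than the real LM ever had to do about its own gravity.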
Title: Re: "The AGC wan't powerful enough"
Post by: ka9q on November 29, 2019, 11:03:05 PM
My software guys are adapting code originally intended for an embedded device to run on general-purpose computers.  When I hear a groan from their cubicles, it's usually because they've run across yet another programming construct that presumes the program is running in real-time mode on a dedicated CPU, and they'll have to make risky architectural changes to get it to run on a time-sharing system.  This factors into the comparison because the AGC software could rightly presume it was the only software running on the hardware.
I could ask why you're doing this, given the low cost and ubiquity of microcontrollers and small microprocessors you could dedicate to the task, thus avoiding the real-time challenges just as the original did...
Title: Re: "The AGC wan't powerful enough"
Post by: smartcooky on November 30, 2019, 02:04:45 AM
You can never bore me by talking about electromechanical computers.  I think I was a preteen the first time I encountered one, an elevation computer in a derelict WWII tank.  There's something viscerally pleasing about turning knobs and watching dials magically move back and forth in response.  Then recently I had the pleasure of engaging with one of the docents aboard USS Iowa who happened to have been an operator for the fire control computers during his active service.  There's nothing more exciting to me than to see these old guys' eyes light up when they realize you understand how their machinery worked, and when you can work through the trigonometry with them, and also roll up your sleeves and get into the guts of the mechanisms.

I think there's a Clavius page that explains why simulating something is actually more computationally intensive than just coping with the inputs nature provides you.  And yes, the modern desktop computer running a game-type simulation also has to make the graphics to simulate the things that simply exist as physical objects in the actual LM and mind their own business.  For some middle ground, try the old Lunar Lander game.  Those were electronic, to be sure, but did not use general-purpose programmable CPUs.  Most of those games were simple state machines implemented with TTL logic.  The "program" was soldered onto the board.  I bring this up because the AGC, as a stored-program computer, was already more sophisticated than that.

My software guys are adapting code originally intended for an embedded device to run on general-purpose computers.  When I hear a groan from their cubicles, it's usually because they've run across yet another programming construct that presumes the program is running in real-time mode on a dedicated CPU, and they'll have to make risky architectural changes to get it to run on a time-sharing system.  This factors into the comparison because the AGC software could rightly presume it was the only software running on the hardware.
Yup. Early PCs for example had a problem with IRQ conflicts. The AGC avoids such things because there is nothing else to generate an IRQ.

Nonetheless, you could overload the core memory by giving it too much to do; hence the 1201 & 1202 alarms.
Title: Re: "The AGC wan't powerful enough"
Post by: Abaddon on November 30, 2019, 08:10:35 AM
You can never bore me by talking about electromechanical computers.  I think I was a preteen the first time I encountered one, an elevation computer in a derelict WWII tank.  There's something viscerally pleasing about turning knobs and watching dials magically move back and forth in response.  Then recently I had the pleasure of engaging with one of the docents aboard USS Iowa who happened to have been an operator for the fire control computers during his active service.  There's nothing more exciting to me than to see these old guys' eyes light up when they realize you understand how their machinery worked, and when you can work through the trigonometry with them, and also roll up your sleeves and get into the guts of the mechanisms.

I think there's a Clavius page that explains why simulating something is actually more computationally intensive than just coping with the inputs nature provides you.  And yes, the modern desktop computer running a game-type simulation also has to make the graphics to simulate the things that simply exist as physical objects in the actual LM and mind their own business.  For some middle ground, try the old Lunar Lander game.  Those were electronic, to be sure, but did not use general-purpose programmable CPUs.  Most of those games were simple state machines implemented with TTL logic.  The "program" was soldered onto the board.  I bring this up because the AGC, as a stored-program computer, was already more sophisticated than that.

My software guys are adapting code originally intended for an embedded device to run on general-purpose computers.  When I hear a groan from their cubicles, it's usually because they've run across yet another programming construct that presumes the program is running in real-time mode on a dedicated CPU, and they'll have to make risky architectural changes to get it to run on a time-sharing system.  This factors into the comparison because the AGC software could rightly presume it was the only software running on the hardware.
Yup. Early PCs for example had a problem with IRQ conflicts. The AGC avoids such things because there is nothing else to generate an IRQ.

Nonetheless, you could overload the core memory by giving it too much to do; hence the 1201 & 1202 alarms.
Sure, but that is a hardware limitation, not a code limitation. And it was, ironically, implemented via interrupts.
Title: Re: "The AGC wan't powerful enough"
Post by: rocketman on November 30, 2019, 09:08:26 AM
So how much power do they think is required?

Boeing 747s flew before the moon landing, so I guess the state of computing power at the time was adequate for a large jet airliner, but inadequate for a spacecraft.  Unless Boeing 747s are fake too.

Much of the computing power used in a lot of modern applications is for a high resolution graphics interface.  If you don't have that, it's a lot easier.
Title: Re: "The AGC wan't powerful enough"
Post by: Zakalwe on November 30, 2019, 12:44:47 PM
No discussion of this can be complete without acknowledging Charles "Doc" Draper and his work on inertial navigation systems. As early as 1953, he developed autopilots that allowed a bomber to fly from Boston to California with no input from the pilot.

https://www.lindahall.org/charles-stark-draper/



“Doc, can you design a guidance system that will take men to the moon and back safely?”

“Yes.”

“Well, when will it be ready?”

“It will be ready when you need it, Mr. Webb.”

“Well, how will I know that it'll work?”

“I'll go along with it.”


Don Eyles' book "Sunburst and Luminary (https://www.sunburstandluminary.com/SLhome.html)" is a fascinating source of info on how the navigation systems were developed. Also, Digital Apollo (https://www.amazon.co.uk/Digital-Apollo-Human-Machine-Spaceflight/dp/0262516101/ref=sr_1_1?keywords=digital+apollo&qid=1575135789&sr=8-1)

Title: Re: "The AGC wan't powerful enough"
Post by: Abaddon on November 30, 2019, 02:49:03 PM
So how much power do they think is required?
Nobody knows. The claim is that the AGC was insufficient and there it stops. What, in their opinion, would constitute sufficient power is a desert populated by tumbleweeds.

Boeing 747s flew before the moon landing, so I guess the state of computing power at the time was adequate for a large jet airliner, but inadequate for a spacecraft.  Unless Boeing 747s are fake too.
Well, they must be. If nobody could fly the first Apollo landing because it had not been tested, then nobody could have flown the first 747 untested either. Same stupid logic makes all aircraft impossible.

Much of the computing power used in a lot of modern applications is for a high resolution graphics interface.  If you don't have that, it's a lot easier.
Yup. Running a 7-segment LED display is a trivial issue compared to a modern graphics card. Indeed, while you can run an AGC simulator on your PC, most of the time your PC will be doing something else. This is not the case with an actual AGC. It does nothing else at all. It doesn't have to.
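Just how trivial a numeric display is compared to a graphics pipeline can be shown in a few lines. The bit patterns below use the standard seven-segment labelling (bit 0 = segment a through bit 6 = segment g); the function name is just for illustration:

```python
# Segment bits, 0b_gfedcba, standard seven-segment digit encodings.
SEGMENTS = {
    "0": 0b0111111, "1": 0b0000110, "2": 0b1011011, "3": 0b1001111,
    "4": 0b1100110, "5": 0b1101101, "6": 0b1111101, "7": 0b0000111,
    "8": 0b1111111, "9": 0b1101111,
}

def drive_display(text):
    """Return the per-digit segment patterns for a numeric readout:
    one table lookup per digit is the entire 'graphics pipeline'."""
    return [SEGMENTS[ch] for ch in text]
```

One lookup per digit, versus the millions of shaded pixels per frame a modern GPU pushes; that is the gulf between the DSKY's display workload and a desktop's.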
Title: Re: "The AGC wan't powerful enough"
Post by: JayUtah on November 30, 2019, 04:19:34 PM
I could ask why you're doing this, given the low cost and ubiquity of microcontrollers and small microprocessors you could dedicate to the task...

It's a legacy algorithm the customer wants to run in its supercomputing cloud.  I gather they don't need the real-time capacity anymore, just the SIMD multiprocessing synchronization.  There is actually quite a lot of this sort of conversion happening, where data are gathered in the field with comparatively dumb methods and then uploaded for central processing.  Previously the data would have been processed locally.  This particular one would be a trivial job if the code had been written cleanly to start with.  From the bits I've seen, it's poorly modularized.

The point was that code like
Code: [Select]
while ( true )
   ;
while you're doing nothing but waiting for an interrupt was often acceptable in embedded systems.  In more primitive architectures like the AGC, you can't do any of the various things modern embedded microprocessors do, like going to sleep.  I'm guessing an "idle" CPU in the AGC would just be running a tight loop.  In a modern time-sharing system, the proper thing to do would be to yield the CPU and let the scheduler decide what to do.
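That contrast can be sketched in a modern language. This is purely illustrative (Python and `threading.Event` standing in for "an interrupt arriving"), not anything from the code Jay describes: the embedded-style version spins and burns the CPU, while the time-sharing version blocks and lets the scheduler run something else.

```python
import threading

def busy_wait(flag):
    """Embedded style: spin until the 'interrupt' arrives.  Fine
    when you own the CPU, hostile on a time-sharing system."""
    spins = 0
    while not flag.is_set():
        spins += 1          # burns CPU doing nothing useful
    return spins

def polite_wait(flag):
    """Time-sharing style: yield to the scheduler until signalled."""
    flag.wait()             # blocks; the OS runs something else
    return 0
```

Signal the flag from another thread a few milliseconds later and `busy_wait` will have spun through a large number of wasted iterations that `polite_wait` simply slept through.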
Title: Re: "The AGC wan't powerful enough"
Post by: JayUtah on November 30, 2019, 04:35:13 PM
Sure, but that is a hardware limitation, not a code limitation. Which was, ironically, implemented by interrupts.

Having only four bits for an interrupt bus was extremely short-sighted of Intel.  But the 1201 and 1202 alarms raise issues my senior software guys are always getting on the juniors' case over:  just because something is happening under the hood doesn't mean you get to ignore it.  Every option incurs overhead.  In the AGC case, they had worked out on paper how much time the computer would normally be expected to spend servicing interrupts.  And they planned the rest of the code with that "fixed" overhead in mind.  When the number of radar interrupts doubled unexpectedly, this had a cascade effect on the rest of the combined hardware-software system.

Yes, you could have solved some of the Apollo 11 problems in hardware.  Provide more erasable memory, and you get more core sets.  That way a program is less likely to complain that it can't find an empty core set and raise a program alarm.  Similarly, tie the radar power supply clocks together and you'll never get duplicate interrupts.  But that's not how embedded systems designers think.  They're writing code for a computing instrument they designed, and whose operational behavior is something they can know intimately.  Today developers are taught to write portable code.  Those techniques seem ill-suited to even modern embedded systems design.  The AGC programmers wrote code that exactly fit their machine.  This was more the norm.
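The core-set idea behind those alarms can be sketched as a toy model. This is not the actual Executive's logic, and the pool size of 7 is illustrative: the point is simply that with a fixed pool, "no core set free" has to surface as an alarm rather than a silent failure.

```python
class ExecutiveAlarm(Exception):
    """Toy stand-in for a 1202-style 'no core sets available' alarm."""

class Executive:
    """Toy model of a fixed pool of 'core sets': every scheduled job
    claims one; when the pool is exhausted the only honest answer is
    an alarm (on Apollo, followed by a software restart)."""
    def __init__(self, core_sets=7):
        self.free = core_sets
        self.jobs = []

    def schedule(self, name):
        if self.free == 0:
            raise ExecutiveAlarm("no free core set for " + name)
        self.free -= 1
        self.jobs.append(name)

    def finish(self, name):
        self.jobs.remove(name)
        self.free += 1
```

Fill the pool with planned work, then let an unplanned flood of radar-servicing jobs pile in, and the alarm fires exactly as budgeted overhead turns out not to be fixed after all.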
Title: Re: "The AGC wan't powerful enough"
Post by: Abaddon on November 30, 2019, 05:21:04 PM
Sure, but that is a hardware limitation, not a code limitation. Which was, ironically, implemented by interrupts.

Having only four bits for an interrupt bus was extremely short-sighted of Intel.
Sure, in hindsight. Remember the notion that nobody could possibly need more than 640k of RAM? Remember extended memory up to 1 meg of RAM? Put bluntly, developers will find a way to consume all available resources; it's just how it is.

But the 1201 and 1202 alarms raise issues my senior software guys are always getting on the juniors' case over:  just because something is happening under the hood doesn't mean you get to ignore it.  Every option incurs overhead.  In the AGC case, they had worked out on paper how much time the computer would normally be expected to spend servicing interrupts.  And they planned the rest of the code with that "fixed" overhead in mind.  When the number of radar interrupts doubled unexpectedly, this had a cascade effect on the rest of the hardware-software system combined.
I have always regarded the 1201 and 1202 as the AGC equivalent of "I don't know".

Yes, you could have solved some of the Apollo 11 problems in hardware.  Provide more erasable memory, and you get more core sets.  That way a program is less likely to complain that it can't find an empty core set and raise a program alarm.  Similarly, always tie the radar power supply clocks and you'll never get duplicate interrupts.  But that's not how embedded systems designers think.  They're writing code for a computing instrument they designed, and whose operational behavior is something they can know intimately.
And that makes sense in context. "I don't know" is a valid answer to any question. I would rather have that than an outright crash on a lunar mission.

Today developers are taught to write portable code.  Those techniques seem ill-suited to even modern embedded systems design.  The AGC programmers wrote code that exactly fit their machine.  This was more the norm.
Ah. That is a personal bugbear of mine. Cross-platform developers are becoming divorced from the actual hardware. The hardware matters no matter how one slices it. Why else would we have Apples and PCs? And software developed for each? Not to mention Linux.

On top of that we have a gaggle of devs working on apps for smartphones and such devices.

To my mind, an understanding of the underlying hardware is essential. Not so much to the new young dudes, apparently.
Title: Re: "The AGC wan't powerful enough"
Post by: raven on November 30, 2019, 09:23:30 PM
It's probably the case that it would have taken more "computer power" to fake the Moon landings in 1969 than it did to actually go there.
Given that I've heard the claim it used CGI (a preposterous idea, since CGI would have needed to be decades ahead of its time to have even a hope of achieving what was seen on TV, let alone in the photographs), I have no doubt of that.
Title: Re: "The AGC wan't powerful enough"
Post by: JayUtah on December 01, 2019, 04:45:09 PM
Sure, in hindsight.

I remember it being discussed at the time.  Remember also, as you say, segmented memory addressing, a flash-in-the-pan technique that was very quickly superseded by flat address spaces in other, better architectures.  But we got stuck with segment registers for far longer than needed simply because Intel cemented it in place.  In deference to your defense, I'll point out that the AGC had a pretty bonkers memory banking scheme.  For all its beauty, a lot of it too was obsolete right out of the gates (multiple puns intended).  I'll quit ragging on Intel.

Quote
Remember the notion that nobody could possibly need more than 640k RAM?

Apocryphal statement.  But yes, the history of computer technology is the history of people making bizarrely wrong guesses about what the future would bring.  It wasn't too long ago that 15 teraFLOPS was a pretty fast computer.  Almost half my senior software staff comes from the gaming industry.  Those guys know how to push hardware.  But they also know how to analyze the hardware and optimize for it so that they don't push beyond the hardware.  That talent is what I wish were more prevalent in the software industry, and I think that's what the AGC programmers exemplified.

Quote
I have always regarded the 1201 and 1202 as the AGC equivalent of "I don't know".
[...]
"I don't know" is a valid answer to any question. I would rather have that than an outright crash on a lunar mission.

I had to read this a couple of times before I understood it enough to agree with it.  Yes, I think an important consideration in any critical system -- however designed and built -- is not to promise (or insinuate) anything it can't deliver.  So on the one hand, an automated system should never behave as if it has things well in hand when it can know it doesn't.  On the other hand, it should do its best to fail gracefully.  And by that I mean fall back to successively less capable modes of operation rather than stop suddenly altogether.  Even sophisticated automotive controllers often have a "limp mode" that provides basic engine operation.  And for PGNS there was AGS.  But especially with highly qualified pilots, you don't want to err on the side of suppressing failure indications in a misguided attempt to limp along as if nothing was wrong.  One can make the case that certain large airframe manufacturers need to learn that lesson anew.
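The fallback ladder described here can be sketched in a few lines of C. This is purely illustrative -- the mode names, the health flags, and both functions are invented for the example, not anything from actual PGNS/AGS logic. The point is only that each failure steps the system down one rung of capability rather than halting it.

```c
/* Illustrative sketch only -- not flight code.  A control system that
 * degrades through successively less capable modes instead of
 * stopping outright, in the spirit of PGNS -> AGS -> hand-flying. */

enum mode { MODE_PRIMARY, MODE_BACKUP, MODE_MANUAL, MODE_NONE };

/* Fall back exactly one rung when the current mode fails. */
enum mode degrade(enum mode current)
{
    switch (current) {
    case MODE_PRIMARY: return MODE_BACKUP;  /* e.g. PGNS fails -> AGS */
    case MODE_BACKUP:  return MODE_MANUAL;  /* AGS fails -> hand-fly  */
    default:           return MODE_NONE;    /* nothing left to try    */
    }
}

/* Pick the most capable mode whose health flag is still good. */
enum mode select_mode(int primary_ok, int backup_ok)
{
    enum mode m = MODE_PRIMARY;
    if (!primary_ok)
        m = degrade(m);
    if (m == MODE_BACKUP && !backup_ok)
        m = degrade(m);
    return m;
}
```

The key design property is that the caller always gets *some* mode back, plus an honest indication of how degraded it is -- never a silent pretense that everything is fine.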

The way the AGC was architected, we could discuss forever what a "crash" means, in the computer sense.  But the real genius was that while 1201 and 1202 simply signaled symptoms in terms of undesirable, detectable software states, a human could make a judgment.  The AGC didn't know why the Executive was overloaded, or why there were no available core sets.  That level of introspection was not provided by the programmers.  But Steve Bales knew.  Which is to say, he knew that the consequences of not running certain periodic tasks would be an accumulation of uncorrected error, but that as long as that condition did not persist, the entire vehicle could stay within flight tolerances even though not strictly within the designated deadband.  It's the equivalent of taking your eyes off the road for a minute to fiddle with the radio.  It's naturally not as safe as maintaining situational awareness, but it can be tolerated briefly.

Quote
Ah. That is a personal bugbear of mine. Cross platform developers are becoming divorced from the actual hardware. The hardware matters no matter how one slices it.

Yeah, there's a lot of potential discussion to be had there, and if we had it I'd like the more professional software developers to weigh in on it.  I've rarely seen reuse done well, even with the best of intentions.  I've rarely seen portability done well, but I'm sure some of the open-source community could easily come up with good examples.  What irks me above all are some programmers who come from a certain language culture (which shall remain nameless) who are blithely unaware that any sort of hardware exists at all.  A few of these people -- a very few, thankfully -- seem to have no idea whatsoever how computers work.

That said, as ka9q points out, often the right answer is simply to throw more silicon at the problem.  If $2,000 worth of additional RAM will solve someone's problem in as much time as it takes to install the SIMMs, then why would any conscientious engineering company expend ten times that much or more in programmer time and money to bring the present solution under the existing hardware umbrella?  For many classes of problems in computing, there are severe limits to what can be optimized by programmers burning their neurons long into the night.  I've seen talented programmers achieve factors (not just margins) of increased resource efficiency -- admittedly in originally poor code.  But I've also seen expensive software improvement efforts that result in only marginal increases in performance or efficiency, sometimes at the expense of correctness in a complicated algorithm.  Whatever else an algorithm is, it has to be correct.

I've found that electrical engineers take a very different approach to software than computer scientists.  Historically they write only embedded software, and they don't think for a moment that they can change the hardware without also having to change the software, or that the software they write for one gadget will transfer seamlessly to some other gadget.  The commercial reality of reuse and standardization is changing this, but if you want to talk just in terms of what EEs think software is, it's instructive.
Title: Re: "The AGC wan't powerful enough"
Post by: bknight on December 02, 2019, 03:10:29 PM
I can remember Derek's repeat of John's theory that the AGNC wasn't up to snuff in two threads (UM and here) prior to A13.  Of course he could not present any evidence to that end, but his buddy that worked for Hughes let him know.  ::)

We're going on six months past the A11 anniversary and still no dramatic proof that Apollo was fake, and no D to show his stuff.
Title: Re: "The AGC wan't powerful enough"
Post by: Zakalwe on December 02, 2019, 04:36:14 PM
I can remember Derek's repeat of John's theory that the AGNC wasn't up to snuff in two threads (UM and here) prior to A13.  Of course he could not present any evidence to that end, but his buddy that worked for Hughes let him know.  ::)

We're going on six months past the A11 anniversary and still no dramatic proof that Apollo was fake, and no D to show his stuff.

Who was the other loon that was convinced that Aldrin was going to break down and admit it was all a hoax on the eve of (IIRC) the 40th anniversary (or was it the 45th)? We're still waiting for that one to happen too....
Title: Re: "The AGC wan't powerful enough"
Post by: Abaddon on December 02, 2019, 05:56:00 PM
Sure, in hindsight.

I remember it being discussed at the time.  Remember also, as you say, segmented memory addressing, a flash-in-the-pan technique that was very quickly superseded by flat address spaces in other, better architectures.  But we got stuck with segment registers for far longer than needed simply because Intel cemented it in place.  In deference to your defense, I'll point out that the AGC had a pretty bonkers memory banking scheme.  For all its beauty, a lot of it too was obsolete right out of the gates (multiple puns intended).  I'll quit ragging on Intel.
My opinion on that is that it was a triumph of marketing over engineering.
Quote
Remember the notion that nobody could possibly need more than 640k RAM?

Apocryphal statement.  But yes, the history of computer technology is the history of people making bizarrely wrong guesses about what the future would bring.  It wasn't too long ago that 15 teraFLOPS was a pretty fast computer.  Almost half my senior software staff comes from the gaming industry.  Those guys know how to push hardware.  But they also know how to analyze the hardware and optimize for it so that they don't push beyond the hardware.  That talent is what I wish were more prevalent in the software industry, and I think that's what the AGC programmers exemplified.
Alas, it is still all too common.

Quote
I have always regarded the 1201 and 1202 as the AGC equivalent of "I don't know".
[...]
"I don't know" is a valid answer to any question. I would rather have that than an outright crash on a lunar mission.

I had to read this a couple of times before I understood it enough to agree with it.  Yes, I think an important consideration in any critical system -- however designed and built -- is not to promise (or insinuate) anything it can't deliver.  So on the one hand, an automated system should never behave as if it has things well in hand when it can know it doesn't.  On the other hand, it should do its best to fail gracefully.  And by that I mean fall back to successively less capable modes of operation rather than stop suddenly altogether.  Even sophisticated automotive controllers often have a "limp mode" that provides basic engine operation.  And for PGNS there was AGS.  But especially with highly qualified pilots, you don't want to err on the side of suppressing failure indications in a misguided attempt to limp along as if nothing was wrong.  One can make the case that certain large airframe manufacturers need to learn that lesson anew.
Ah, the semi-mythical graceful exit. There still exists a surprising amount of software that refuses point-blank to provide any data in the event of something catastrophic. Too often one sees a "helpful" message like "There was an error." and that's it. Nothing else.

The way the AGC was architected, we could discuss forever what a "crash" means, in the computer sense.  But the real genius was that while 1201 and 1202 simply signaled symptoms in terms of undesirable, detectable software states, a human could make a judgment.  The AGC didn't know why the Executive was overloaded, or why there were no available core sets.  That level of introspection was not provided by the programmers.  But Steve Bales knew.  Which is to say, he knew that the consequences of not running certain periodic tasks would be an accumulation of uncorrected error, but that as long as that condition did not persist, the entire vehicle could stay within flight tolerances even though not strictly within the designated deadband.  It's the equivalent of taking your eyes off the road for a minute to fiddle with the radio.  It's naturally not as safe as maintaining situational awareness, but it can be tolerated briefly.
Albeit he had backroom engineers supporting him, it was still a huge call to make.

Quote
Ah. That is a personal bugbear of mine. Cross platform developers are becoming divorced from the actual hardware. The hardware matters no matter how one slices it.

Yeah, there's a lot of potential discussion to be had there, and if we had it I'd like the more professional software developers to weigh in on it.  I've rarely seen reuse done well, even with the best of intentions.  I've rarely seen portability done well, but I'm sure some of the open-source community could easily come up with good examples.  What irks me above all are some programmers who come from a certain language culture (which shall remain nameless) who are blithely unaware that any sort of hardware exists at all.  A few of these people -- a very few, thankfully -- seem to have no idea whatsoever how computers work.
Fundamental SQL is not half bad at that, right up until one gets to the more esoteric procedure calls.

That said, as ka9q points out, often the right answer is simply to throw more silicon at the problem.  If $2,000 worth of additional RAM will solve someone's problem in as much time as it takes to install the SIMMs, then why would any conscientious engineering company expend ten times that much or more in programmer time and money to bring the present solution under the existing hardware umbrella?
Hence the need to develop a minimum spec, and stick to it whatever marketing might say.

For many classes of problems in computing, there are severe limits to what can be optimized by programmers burning their neurons long into the night.  I've seen talented programmers achieve factors (not just margins) of increased resource efficiency -- admittedly in originally poor code.  But I've also seen expensive software improvement efforts that result in only marginal increases in performance or efficiency, sometimes at the expense of correctness in a complicated algorithm.  Whatever else an algorithm is, it has to be correct.
And that is a tightrope to walk. Chuck hardware or programming resources at the problem? I have been fortunate with most of my clients in that when I make that call, they simply say "OK" based on a solid case and authorise whatever loot is needed for whichever route forward. But not everybody is so fortunate.

I've found that electrical engineers take a very different approach to software than computer scientists.  Historically they write only embedded software, and they don't think for a moment that they can change the hardware without also having to change the software, or that the software they write for one gadget will transfer seamlessly to some other gadget.  The commercial reality of reuse and standardization is changing this, but if you want to talk just in terms of what EEs think software is, it's instructive.
I lack much experience with EEs in the programming realm, so cannot comment much. What little I have suggests that they are concerned largely with naturally embedded solutions such as the PLCs that control traffic lights, for example. I learned those ropes some 35 years ago and have not used them since, nor would I make the attempt. But I know it is an art to operate within those constraints.
Title: Re: "The AGC wan't powerful enough"
Post by: Abaddon on December 02, 2019, 06:01:01 PM
I can remember Derek's repeat of John's theory that the AGNC wasn't up to snuff in two threads (UM and here) prior to A13.  Of course he could not present any evidence to that end, but his buddy that worked for Hughes let him know.  ::)

We're going on six months past the A11 anniversary and still no dramatic proof that Apollo was fake, and no D to show his stuff.

Who was the other loon that was convinced that Aldrin was going to break down and admit it was all a hoax on the eve of (IIRC) the 40th anniversary (or was it the 45th)? We're still waiting for that one to happen too....

Oh, that is ringing some bells. Can't dredge up the name from the memory banks. Sorry.
Title: Re: "The AGC wan't powerful enough"
Post by: Zakalwe on December 03, 2019, 03:59:44 AM

Who was the other loon that was convinced that Aldrin was going to break down and admit it was all a hoax on the eve of (IIRC) the 40th anniversary (or was it the 45th)? We're still waiting for that one to happen too....

Oh, that is ringing some bells. Can't dredge up the name from the memory banks. Sorry.

Was it the world's most boring accountant who had Neil Armstrong's ghost appearing to him in his dreams? Or the "atomic bombs don't work" loon? After a while they all start to blur into a mushy mess of stupidity, resentment and ignorance!  ;D
Title: Re: "The AGC wan't powerful enough"
Post by: rocketman on December 03, 2019, 06:20:05 AM
Or the "atomic bombs don't work" loon?

Seems like that one should have negative consequences on one's likelihood of survival, at least back in the days of more active testing.
Title: Re: "The AGC wan't powerful enough"
Post by: ApolloEnthusiast on December 03, 2019, 08:04:56 AM
We're going on six months past the A11 anniversary and still no dramatic proof that Apollo was fake, and no D to show his stuff.
He probably meant the 60th anniversary and typed 50th by mistake.  We'll see in another 10 years  ::)
Title: Re: "The AGC wan't powerful enough"
Post by: bknight on December 03, 2019, 08:24:41 AM
We're going on six months past the A11 anniversary and still no dramatic proof that Apollo was fake, and no D to show his stuff.
He probably meant the 60th anniversary and typed 50th by mistake.  We'll see in another 10 years  ::)
LOL. Well, he indicated he was a physicist and could have made a fat-fingered entry. No, he stated the same BS in two threads (UM and here), so I conclude he believed that Apollo didn't land on the Moon, except A14, A15 and A16.
Title: Re: "The AGC wan't powerful enough"
Post by: ka9q on December 03, 2019, 09:23:42 AM
I have always regarded the 1201 and 1202 as the AGC equivalent of "I don't know".
They really mean "I'm running out of real time". The specific ways that's detected don't matter. Adding more memory wouldn't have fixed the problem because running out of memory was only a symptom of the real problem.

This became instantly obvious to me when I saw the recent YouTube demo of a recreated AGC re-running the Apollo 11 landing with spurious rendezvous radar interrupts added. The COMP ACTY light went on almost continuously, as it had to. Had I been in Aldrin's position I like to think that I would have immediately realized what was going on, though not why or whether I could continue.  Any real-time computer system MUST have some idle time left over, or it won't keep up.
Title: Re: "The AGC wan't powerful enough"
Post by: ka9q on December 03, 2019, 09:44:03 AM
The point was that code like
Code: [Select]
while ( true )
   ;
while you're doing nothing but waiting for an interrupt was often acceptable in embedded systems.
That's bad news even in embedded systems. It runs up the power and heat. If you're testing a memory location it also hits memory (and any cache) very hard, depriving any other cores of those access cycles. But even in general purpose multitasking operating systems you sometimes want to busy wait when you know the wait will be so short that it would take much more time to call the scheduler to release the CPU, only to come right back to where you left off. For this reason, Intel (and other CPU architectures, I suppose) added the PAUSE instruction for use inside spinloops. It's like a NOOP (no operation) that takes some small but unspecified time to complete, greatly reducing CPU power consumption.

On an Intel system, if you simply wanted to wait for an interrupt (without actually testing anything) you'd use the HLT (halt) instruction. It continues after an interrupt has been serviced.
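For the curious, here's roughly what that spin-wait looks like in C on a modern compiler. This is a hedged sketch, not anything AGC-related: `cpu_pause` and `spin_wait` are names I've made up, and the pause hint simply compiles away on non-x86 targets.

```c
/* Sketch of a "polite" spin-wait, per the discussion above.
 * The PAUSE hint is x86-specific; elsewhere this degrades to a
 * plain spin.  Illustrative only. */
#include <stdatomic.h>

static inline void cpu_pause(void)
{
#if defined(__x86_64__) || defined(__i386__)
    __builtin_ia32_pause();   /* emits the PAUSE instruction */
#endif
}

/* Spin until another context sets *flag.  Suitable only when the
 * wait is known to be very short; for anything longer, yield to
 * the scheduler instead of burning cycles here. */
void spin_wait(atomic_int *flag)
{
    while (atomic_load_explicit(flag, memory_order_acquire) == 0)
        cpu_pause();
}
```

Without the pause hint this is exactly the `while (true) ;` pattern quoted earlier, with all the power, heat, and memory-traffic downsides that go with it.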
Title: Re: "The AGC wan't powerful enough"
Post by: JayUtah on December 03, 2019, 12:09:33 PM
They really mean "I'm running out of real time". The specific ways that's detected don't matter. Adding more memory wouldn't have fixed the problem because running out of memory was only a symptom of the real problem.

Right.  Adding more erasable memory means more core sets available, which only fixes the immediately reported symptom.  Since the underlying cause remains unaddressed, the problem is likely to behave like the proverbial bag of bread dough:  the underlying problem will just manifest itself elsewhere, in a different way.  In a way, that's what happened.  1201 and 1202 both mean a resource shortfall.  In one case the OS was reporting that when the interrupt happened to tell the AGC to run its real-time tasks, they were still running from the previous interrupt.  In the other case, the OS was telling a new task that it couldn't have a core set because they were full -- and they were full because the real-time tasks were taking too long to complete.  All of that was the result of too much time being spent reading unexpected data from the radar.
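To make the core-set shortfall concrete, here's a toy model in C. This is an analogy only, emphatically not AGC code: the figure of seven core sets is from memory, the alarm constant is just the octal number the DSKY displayed, and both function names are invented.

```c
/* Toy model of the Executive's core-set bookkeeping -- an analogy,
 * not actual AGC code.  When every core set is in use, scheduling a
 * new job raises the 1202-style alarm instead of failing silently. */

#define NUM_CORE_SETS 7       /* the Executive had seven, IIRC   */
#define ALARM_1202    01202   /* octal, as shown on the DSKY     */

static int core_sets_in_use = 0;

/* Returns 0 on success, ALARM_1202 if no core set is free. */
int schedule_job(void)
{
    if (core_sets_in_use >= NUM_CORE_SETS)
        return ALARM_1202;
    core_sets_in_use++;
    return 0;
}

/* A job completing releases its core set back to the pool. */
void finish_job(void)
{
    if (core_sets_in_use > 0)
        core_sets_in_use--;
}
```

The model makes the symptom/cause distinction visible: the alarm fires because the pool is empty, but the pool is only empty because jobs upstream are taking too long to call `finish_job` -- which is exactly the radar-interrupt story.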

Quote
Had I been in Aldrin's position I like to think that I would have immediately realized what was going on, though not why or whether I could continue.

Yeah, it's not clear whether Aldrin apprehended what was going on.  But in general the troubleshooting checklist for flying still has Aviate as the prime directive.  If the craft is flyable notwithstanding the warnings, you have to consider whether the best plan is to press on to the landing and then try to troubleshoot once you're on the ground.

One thing Aldrin had that not all pilots do is the flight controllers and their back rooms.  As soon as the program alarm is reported, you can hear one of the flight controllers say, "Same thing we had."  He's referring to the ill-fated simulation where the controllers had first been presented with the program alarms and had incorrectly called for an abort.  Chagrined, the story goes that they delved into the program alarms and had playbooks for all of them.  So when pressed for a recommendation, they were able to give a go.  Then you hear them trying to troubleshoot:  "It seems to happen whenever we have a 16 68 up," and later, "Noun 68 may well be the problem here."  Verb 16 means to display the specified noun data on the DSKY and update it at intervals.  In order to do a semi-continuous update, you have to schedule a task to be woken up at those intervals.  That lengthens the list of things that have to be done on a periodic basis.

The controllers were methodical, but initially wrong.  Neither verb 16 nor noun 68 was the problem.  They were what revealed the problem.  But for the floating radar clocks (the root cause), nothing would have been wrong with Aldrin wanting to monitor noun 68.  I too like to think I'd be adept at gathering all the information and making sound decisions.  But historically even well-trained, highly-skilled people are often fairly bad at it.  The history of watching engineers and operators respond to emergencies suggests they do largely what Bales and his colleagues did:  they hypothesize a de minimis cause and then unconsciously filter new data according to whether it fits that hypothesis.

Quote
Any real-time computer system MUST have some idle time left over, or it won't keep up.

Indeed, even those with a fixed duty cycle.  The AGC didn't have a fixed duty cycle:  the operator could add real-time tasks to it willy-nilly, and the different modes of operation changed the cycle.  For example, in unaccelerated flight, the digital autopilot operated much more leisurely.
Title: Re: "The AGC wan't powerful enough"
Post by: rocketman on December 03, 2019, 12:37:33 PM
Does anyone here know about, or is anyone involved with, this?

https://www.ibiblio.org/apollo/
Title: Re: "The AGC wan't powerful enough"
Post by: JayUtah on December 03, 2019, 12:59:14 PM
That's bad news even in embedded systems. It runs up the power and heat.

Right; it would only work in the dumbest of control systems.  You could do it, but it would still be inadvisable.  But in a time-sharing system, it's anathema except for the reason you gave, where you can know that scheduler overhead is worse in that case.  Linux, I believe, has a spinlock for just that purpose.

Quote
On an Intel system, if you simply wanted to wait for an interrupt (without actually testing anything) you'd use the HLT (halt) instruction. It continues after an interrupt has been serviced.

That's good to know.  I didn't have any specific architecture in mind when I was writing.  But you bring up an important point:  these days "embedded" is quite likely to mean "battery-powered," such as in a consumer handset.  Things like power usage and heat are extremely important.  However, "embedded" can also mean "industrial controller."  In that case it's hooked up to an almost limitless power supply, and its thermal environment may be extreme.  What you may want from that is for it to be dumb and rugged.
Title: Re: "The AGC wan't powerful enough"
Post by: JayUtah on December 03, 2019, 01:00:45 PM
Does anyone here know about, or is anyone involved with, this?

https://www.ibiblio.org/apollo/

Yes, back in the day I was somewhat involved with it.  I didn't write any of the code, but I corresponded with the guy who was writing it and I tested the end result.  If you want to write code to run on the AGC, I highly endorse it.
Title: Re: "The AGC wan't powerful enough"
Post by: bknight on December 03, 2019, 01:08:09 PM
They really mean "I'm running out of real time". The specific ways that's detected don't matter. Adding more memory wouldn't have fixed the problem because running out of memory was only a symptom of the real problem.

Right.  Adding more erasable memory means more core sets available, which only fixes the immediately reported symptom.  Since the underlying cause remains unaddressed, the problem is likely to behave like the proverbial bag of bread dough:  the underlying problem will just manifest itself elsewhere, in a different way.  In a way, that's what happened.  1201 and 1202 both mean a resource shortfall.  In one case the OS was reporting that when the interrupt happened to tell the AGC to run its real-time tasks, they were still running from the previous interrupt.  In the other case, the OS was telling a new task that it couldn't have a core set because they were full -- and they were full because the real-time tasks were taking too long to complete.  All of that was the result of too much time being spent reading unexpected data from the radar.

Quote
Had I been in Aldrin's position I like to think that I would have immediately realized what was going on, though not why or whether I could continue.

Yeah, it's not clear whether Aldrin apprehended what was going on.  But in general the troubleshooting checklist for flying still has Aviate as the prime directive.  If the craft is flyable notwithstanding the warnings, you have to consider whether the best plan is to press on to the landing and then try to troubleshoot once you're on the ground.

One thing Aldrin had that not all pilots do is the flight controllers and their back rooms.  As soon as the program alarm is reported, you can hear one of the flight controllers say, "Same thing we had."  He's referring to the ill-fated simulation where the controllers had first been presented with the program alarms and had incorrectly called for an abort.  Chagrined, the story goes that they delved into the program alarms and had playbooks for all of them.  So when pressed for a recommendation, they were able to give a go.  Then you hear them trying to troubleshoot:  "It seems to happen whenever we have a 16 68 up," and later, "Noun 68 may well be the problem here."  Verb 16 means to display the specified noun data on the DSKY and update it at intervals.  In order to do a semi-continuous update, you have to schedule a task to be woken up at those intervals.  That lengthens the list of things that have to be done on a periodic basis.

The controllers were methodical, but initially wrong.  Neither verb 16 nor noun 68 was the problem.  They were what revealed the problem.  But for the floating radar clocks (the root cause), nothing would have been wrong with Aldrin wanting to monitor noun 68.  I too like to think I'd be adept at gathering all the information and making sound decisions.  But historically even well-trained, highly-skilled people are often fairly bad at it.  The history of watching engineers and operators respond to emergencies suggests they do largely what Bales and his colleagues did:  they hypothesize a de minimis cause and then unconsciously filter new data according to whether it fits that hypothesis.

Quote
Any real-time computer system MUST have some idle time left over, or it won't keep up.

Indeed, even those with a fixed duty cycle.  The AGC didn't have a fixed duty cycle:  the operator could add real-time tasks to it willy-nilly, and the different modes of operation changed the cycle.  For example, in unaccelerated flight, the digital autopilot operated much more leisurely.

Also, reading on in the ALSJ page on the A12 landing, there is a passage noting that Pete had a lot of trouble nulling out the horizontal movements during the descent.  The guys at MIT rewrote the AGC program to null out the lateral movements automatically, IIRC.
Title: Re: "The AGC wan't powerful enough"
Post by: ka9q on December 03, 2019, 04:33:55 PM
Yeah, it's not clear whether Aldrin apprehended what was going on.  But in general the troubleshooting checklist for flying still has Aviate as the prime directive.  If the craft is flyable notwithstanding the warnings, you have to consider whether the best plan is to press on to the landing and then try to troubleshoot once you're on the ground.
Well, that was the big question, wasn't it? Would the craft remain flyable notwithstanding the warnings? Especially during the dead-man zone within the last 100 m or so when an abort wouldn't succeed?
Quote
One thing Aldrin had that not all pilots do is the flight controllers and their back rooms.  As soon as the program alarm is reported, you can hear one of the flight controllers say, "Same thing we had."  He's referring to the ill-fated simulation where the controllers had first been presented with the program alarms and had incorrectly called for an abort.  Chagrined, the story goes that they delved into the program alarms and had playbooks for all of them.  So when pressed for a recommendation, they were able to give a go.

I can't think of a better example of the aphorism "Luck favors the prepared". Gene Kranz discusses all this at length in Failure Is Not an Option. What Kranz had his controllers do after that simulation is what programmers call achieving high code coverage: making sure that you test every possible decision path to make sure it does the right thing. One interesting way to achieve this is called fuzzing. You just throw random garbage at the program. If it crashes (or runs out of time in a real-time system), you've got a problem to fix. You don't expect the program to do anything useful, of course; you only want to know if it will crash, which it should never do regardless of its input. In a sense, the Apollo simulation supervisors were fuzzing the controller/astronaut system.

Fuzzing isn't the only thing you do, of course; it is simply an adjunct to methodical analysis (code "walk-throughs"). But it has an uncanny ability to reveal problems that only become obvious in hindsight. Accident investigations often do the same thing, but they're a little more costly.
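A minimal fuzzing harness really is that simple. The C sketch below is illustrative only: `parse_command` is a made-up stand-in for whatever is under test, and the single property checked is that random garbage never crashes it -- we don't care what it returns, only that it returns.

```c
/* Minimal fuzzing sketch.  parse_command() is an invented toy
 * "parser" standing in for the system under test; the harness just
 * throws random bytes at it and relies on any crash aborting the
 * whole run. */
#include <stdlib.h>

/* Toy parser: accepts "V<digits>" commands, cleanly rejects the
 * rest.  Returns 0 on success, -1 on rejection. */
int parse_command(const unsigned char *buf, size_t len)
{
    if (len < 2 || buf[0] != 'V')
        return -1;
    for (size_t i = 1; i < len; i++)
        if (buf[i] < '0' || buf[i] > '9')
            return -1;
    return 0;
}

/* Feed `iterations` random buffers to the parser; return how many
 * it accepted.  A crash here means the parser failed the test. */
int fuzz(unsigned seed, int iterations)
{
    unsigned char buf[16];
    int accepted = 0;
    srand(seed);
    for (int i = 0; i < iterations; i++) {
        size_t len = (size_t)(rand() % (int)sizeof buf);
        for (size_t j = 0; j < len; j++)
            buf[j] = (unsigned char)(rand() % 256);
        if (parse_command(buf, len) == 0)
            accepted++;
    }
    return accepted;
}
```

Production fuzzers (coverage-guided ones especially) are far cleverer about generating inputs, but the contract is the same: the program must fail cleanly on every input, no matter how absurd.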

You know, I could probably teach a course based entirely on NTSB reports and what they reveal about human, engineering and system failures.

Quote
The controllers were methodical, but initially wrong.  Neither verb 16 nor noun 68 was the problem.
I can't fault them for that. They didn't know why the AGC was running out of cycles; that took an engineering investigation. (I think MIT figured it out while they were on the moon, and the ascent checklist was modified to turn off the rendezvous radar.) But they probably did already know during the landing that 16 68 was pretty compute intensive, and turning it off would relieve the load. It was a good call.

Quote
The history of watching engineers and operators respond to emergencies suggests they do largely what Bales and his colleagues did:  they hypothesize a de minimis cause and then unconsciously filter new data according to whether it fits that hypothesis.
Yeah, and this is why I teach my students about fault trees and how they help you avoid jumping to conclusions. But I think you're still a little hard on the Apollo 11 flight controllers. They succeeded, didn't they?

Quote
Quote
Any real-time computer system MUST have some idle time left over, or it won't keep up.

Indeed, even those with a fixed duty cycle.  The AGC didn't have a fixed duty cycle:  the operator could add real-time tasks to it willy-nilly, and the different modes of operation changed the cycle.  For example, in unaccelerated flight, the digital autopilot operated much more leisurely.
Yeah. So you have to test for the worst case with every possible task running to make sure you have enough spare cycles. If you don't, you have to carefully plan which tasks are allowed to run.
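That worst-case budgeting can be sketched in a few lines. The task names and timing numbers below are entirely made up for illustration; the point is just that each task contributes its worst-case execution time divided by its period, and the sum must leave margin.

```python
# Hypothetical task set: (name, worst-case execution time in ms, period in ms).
# These numbers are illustrative, not real AGC figures.
tasks = [
    ("digital_autopilot", 18.0, 100.0),
    ("servicer",          35.0, 200.0),
    ("display_update",     8.0, 100.0),
    ("radar_read",        12.0,  80.0),
]

def utilization(task_set):
    """Worst-case CPU utilization: each task contributes wcet/period."""
    return sum(wcet / period for _, wcet, period in task_set)

u = utilization(tasks)
print(f"worst-case load: {u:.0%}")
if u >= 0.9:  # the 10% margin here is an arbitrary illustration
    print("not enough spare cycles -- re-plan which tasks may run together")
```

If the sum comes out too close to 100%, you're back to the second option: deciding which tasks are allowed to run at the same time.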
Title: Re: "The AGC wan't powerful enough"
Post by: ka9q on December 03, 2019, 05:01:12 PM
But in a time-sharing system, it's anathema except for the reason you gave, where you can know that scheduler overhead is worse in that case.  Linux, I believe, has a spinlock for just that purpose.
Yup, that's what I had in mind. The spin loop is used on things like waiting for access to a shared variable. Only one thread of execution at a time can be allowed to modify a shared variable, but these accesses are so quick (a few instructions) that a spinwait is faster and more efficient. You still use the PAUSE instruction, though. Also the usual case is that nobody else is using the variable (it is unlocked) so you don't have to wait at all.
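Python's threading.Lock blocks rather than spins (and the GIL changes the economics entirely), but it sketches the shared-variable discipline being described: only one thread at a time may be inside the few-instruction critical section, and in the common uncontended case the acquire costs almost nothing.

```python
import threading

counter = 0
lock = threading.Lock()

def bump(n):
    """Increment the shared counter n times, protecting each
    read-modify-write with the lock."""
    global counter
    for _ in range(n):
        with lock:          # critical section: just a few instructions
            counter += 1

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 -- no updates lost
```

Without the lock, the interleaved read-modify-write sequences can lose updates; with it, the count always comes out exact.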

By the way, you can use the halt instruction as a "slow pause" instruction. Even if you're not waiting for an interrupt yourself, one will always come along (the system clock timer if nothing else). You wouldn't do this in Linux except when the system is completely idle because the wait will almost certainly be long enough to make it worthwhile to invoke the scheduler.

Quote
But you bring up an important point:  these days "embedded" is quite likely to mean "battery-powered," such as in a consumer handset.  Things like power usage and heat are extremely important.  However, "embedded" can also mean "industrial controller."  In that case it's hooked up to an almost limitless power supply, and its thermal environment may be extreme.  What you may want from that is for it to be dumb and rugged.
I worked for Qualcomm, so to me "embedded" naturally implies the former case (small, battery powered, extremely energy-starved).

Title: Re: "The AGC wan't powerful enough"
Post by: ka9q on December 03, 2019, 05:04:39 PM
Reading on in the ALSJ page on the A11 landing, there is a passage noting that Pete had a lot of trouble nulling out the horizontal movements during the A12 landing.  The guys at MIT rewrote the AGC program to null out the lateral movements automatically, IIRC.
And yet all the Apollo commanders prided themselves on "manually" landing the LM. There was no such thing as a fully manual landing mode...
Title: Re: "The AGC wan't powerful enough"
Post by: JayUtah on December 03, 2019, 06:02:55 PM
Well, that was the big question, wasn't it? Would the craft remain flyable notwithstanding the warnings?

Yep.  You can hear the urgency in Armstrong's voice when he asks for them to rule on the 1202.

Quote
One interesting way to achieve this is called fuzzing. You just throw random garbage at the program.

We fuzz our software extensively.  The joke goes that the testing department for a bar tests for patrons asking for 1 beer, -1 beer, 9999999 beers, 0 beers, and "dog" beers.  Then the whole thing blows up when someone asks to use the restroom.

Quote
In a sense, the Apollo simulation supervisors were fuzzing the controller/astronaut system.

Dunno...  If I were going to fuzz the Net-1/MOCR setup, I'd have a mariachi band suddenly appear right at DOI.

Quote
You know, I could probably teach a course based entirely on NTSB reports and what they reveal about human, engineering and system failures.

I've taken such courses, based largely on those kinds of sources.  There are also a couple of good books written by sociologists who study how critical decision-makers work in technical environments.

Quote
But they probably did already know during the landing that 16 68 was pretty compute intensive...

Except that I don't think it is.  Updating noun 68 is intensive, and it happens anyway as part of the landing tasks.  I don't think displaying it is, even if it's as often as once per second.  What I gather from the analysis is that it was just enough to tip things over.  If you're running at 99% capacity and you add an extra 2%, the nonlinear response is what gets you.  You don't get a 1202 at 99% but you get one at 101%.
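A toy model makes that nonlinearity visible. Nothing here is the real Executive; the single-unit capacity is a simplification, and job_slots=7 is only a nod to the AGC's limited supply of core sets for queued jobs. At 99% load a backlog never forms; at 101% it grows a little every cycle until the job queue overflows.

```python
def run_executive(load_fraction, cycles=1000, job_slots=7):
    """Toy fixed-capacity executive. Each cycle brings `load_fraction`
    of a cycle's worth of work; unfinished work carries over as backlog.
    The alarm fires when the backlog exceeds the available job slots."""
    backlog = 0.0
    for _ in range(cycles):
        backlog = max(0.0, backlog + load_fraction - 1.0)  # one cycle of capacity
        if backlog > job_slots:
            return "1202 ALARM"
    return "nominal"

print(run_executive(0.99))  # nominal: the backlog never accumulates
print(run_executive(1.01))  # 1202 ALARM: 1% over capacity is enough, given time
```

Below capacity the system runs forever; even 1% above it, failure is only a matter of how many cycles you wait.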

Quote
...and turning it off would relieve the load. It was a good call.

Yes.  If the recommendation based on initial analysis is that the crew has to reduce the load on the Executive, then any real-time tasks that can be eliminated should be.  But the snippets we hear on the FD loop have them speculating what's so special about noun 68.  This would have been a wrong direction to go, but as you say, they had no urgency beyond stabilizing the current thing.

A similar situation happened during the fatal Columbia re-entry.  As temperature sensors and other sensors started going offline, the flight controllers were looking for systemic commonalities.  It wasn't until much later in the troubleshooting process that they realized all those sensors were going offline because they were being destroyed -- the commonality was that they were in the rapidly heating part of the orbiter.

Quote
Yeah, and this is why I teach my students about fault trees and how they help you avoid jumping to conclusions. But I think you're still a little hard on the Apollo 11 flight controllers. They succeeded, didn't they?

They did, and in the final analysis that's all that matters.  I merely bring it up as an example of de minimis thinking.  The saving grace is that de minimis remedies aren't doomed to immediate failure.  Also, in their further defense, there is a theory of operating complex systems that says you apply only the minimal effective remedy.  You don't fix more than what the data say are broken.

Quote
Yeah. So you have to test for the worst case with every possible task running to make sure you have enough spare cycles. If you don't, you have to carefully plan which tasks are allowed to run.

And I gather MIT generally took the latter approach.  The specs for the computer had to be locked down at a certain point, but afterwards people started realizing what a useful gadget the computer was and gave it more and more tasks to do.
Title: Re: "The AGC wan't powerful enough"
Post by: Obviousman on December 03, 2019, 08:14:49 PM
And yet all the Apollo commanders prided themselves on "manually" landing the LM. There was no such thing as a fully manual landing mode...

I thought P67 was a full "manual" landing?
Title: Re: "The AGC wan't powerful enough"
Post by: smartcooky on December 04, 2019, 05:51:35 AM
Dunno...  If I were going to fuzz the Net-1/MOCR setup, I'd have a mariachi band suddenly appear right at DOI.

I just can't pass up this opportunity...

https://www.youtube.com/watch?v=VBTRp80Q64U

Title: Re: "The AGC wan't powerful enough"
Post by: bknight on December 04, 2019, 09:38:12 AM
Reading on in the ALSJ page on the A11 landing, there is a passage noting that Pete had a lot of trouble nulling out the horizontal movements during the A12 landing.  The guys at MIT rewrote the AGC program to null out the lateral movements automatically, IIRC.
And yet all the Apollo commanders prided themselves on "manually" landing the LM. There was no such thing as a fully manual landing mode...

Of course they did; all but one were jet jockeys, and the majority flew off carriers.
Title: Re: "The AGC wan't powerful enough"
Post by: JayUtah on December 04, 2019, 11:12:44 AM
I just can't pass up this opportunity...

Good on you.  I figured Confuse-A-Cat might have been too obscure a reference.

The idea behind fuzzing is that you're not necessarily trying to exercise some specific means.  Take lock-picking, for example.  The overt approach is to twist the lock barrel to apply shear to the lock pins, then use a tiny pick to raise each pin to the appropriate position.  The constant shear holds it there while you work on the other pins.  But there's also a sawtooth tool you can just randomly slide in and out while you hold the barrel in torsion, and it sort of randomly raises and lowers the pins.  It's often much faster than the explicit method, and it requires less skill.  This would be equivalent to fuzzing the lock.

Now consider the AGC.  The DSKY operates by sending a specific byte to the computer when each key is pressed.  Modern keyboards work pretty much the same way.  The uplink channel is simply a virtual DSKY.  The modem receives digital values that are fed to the AGC as keystrokes, allowing anything that can be done on the DSKY to be done remotely.  An explicit plan to take over an Apollo spacecraft might involve feeding it keystrokes that tell the computer to do something deleterious, like orient the ship for gimbal lock or apply RCS hardover commands.  There are probably innumerable ways to sabotage an Apollo mission by hijacking the AGC uplink.  Fuzzing, on the other hand, would be simply feeding random words over this channel, to see if any of them accidentally put the computer into an unusable state.  It's exactly equivalent to button-mashing the DSKY.

It would be impossible to program the AGC to recognize and reject all the possible sequences of improper input.  But it can apply strict controls on what it accepts as proper input.  And anything outside the canonical sequences like "verb-key, digit, digit, noun-key, digit, digit, enter-key" essentially puts the AGC into a mode where it doesn't accept any more input until the key-release button is pressed.  This makes the computer harder to operate because there's no equivalent to the backspace key.  But it's safer in that fuzzy input will very quickly get caught, and all subsequent input rejected, without impairing the background operation of the software.  I'm sure ka9q can talk at length about similar discriminators in his technologies.
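Here's a hypothetical sketch of that discrimination scheme; it is not the actual AGC input logic, just the shape of the idea. Only the canonical key sequence is accepted, and any deviation locks out all further input until a KEY REL, so fuzzy input gets caught quickly without disturbing the background work.

```python
# Canonical DSKY-style entry: verb-key, digit, digit, noun-key, digit,
# digit, enter-key. Anything else locks the input until KEY REL.
CANONICAL = ["VERB", "DIGIT", "DIGIT", "NOUN", "DIGIT", "DIGIT", "ENTER"]

def feed_keys(keys):
    """Return the complete commands accepted from a stream of keystrokes."""
    accepted = []
    pos = 0          # position within the canonical sequence
    locked = False   # set on improper input; cleared only by KEY REL
    buffer = []
    for key in keys:
        if locked:
            if key == "KEYREL":
                locked, pos, buffer = False, 0, []
            continue     # all other input is rejected while locked
        kind = "DIGIT" if key.isdigit() else key
        if kind == CANONICAL[pos]:
            buffer.append(key)
            pos += 1
            if pos == len(CANONICAL):
                accepted.append(" ".join(buffer))
                pos, buffer = 0, []
        else:
            locked = True  # fuzzy input caught; wait for KEY REL
    return accepted

print(feed_keys(["VERB", "1", "6", "NOUN", "6", "8", "ENTER"]))
print(feed_keys(["VERB", "1", "NOUN", "9", "9", "ENTER"]))  # bad key locks it out
```

Note the tradeoff described above: there's no backspace, so one wrong key throws away the whole entry, but random input can never reach the point of commanding anything.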

Even better, this channel is disabled unless the crew explicitly switches the Uplink switch to Accept.  This too has tradeoffs.  What if the crew are incapacitated and can't enable ground control of the computer?
Title: Re: "The AGC wan't powerful enough"
Post by: ka9q on December 04, 2019, 11:17:25 AM
And yet all the Apollo commanders prided themselves on "manually" landing the LM. There was no such thing as a fully manual landing mode...

I thought P67 was a full "manual" landing?
Even in P67 the astronauts' inputs are still being passed through the computer. The LM could not be landed without a functioning computer.
Title: Re: "The AGC wan't powerful enough"
Post by: ka9q on December 04, 2019, 11:33:15 AM
A similar situation happened during the fatal Columbia re-entry.  As temperature sensors and other sensors started going offline, the flight controllers were looking for systemic commonalities.  It wasn't until much later in the troubleshooting process that they realized all those sensors were going offline because they were being destroyed -- the commonality was that they were in the rapidly heating part of the orbiter.
Again I think you're being hard on the controllers. They were looking for commonality because they knew that individual sensors, especially mechanical ones like temperature and pressure, often fail for many different reasons and it's difficult or impossible to incorporate fast, reliable self-test features. The last thing you want is to take some drastic action on the basis of false input, so the first thing you must do is to determine if the reading(s) are real. It's difficult to know whether a single non-nominal reading is real or not, so they rely on redundancy -- one of the most time-tested ways to detect and overcome random failures. If a set of nearby temperature sensors give similar out-of-range readings, and those sensors don't share a lot of support hardware ("no commonality"), the readings are probably real.

Of course, none of this really mattered in the Columbia disaster; there was absolutely nothing the ground could have done to prevent it once the entry began.
Title: Re: "The AGC wan't powerful enough"
Post by: ka9q on December 04, 2019, 11:43:17 AM
I just can't pass up this opportunity...
Good on you.  I figured Confuse-A-Cat might have been too obscure a reference.
Not to a Python (Monty) fan like me.
Quote
Even better, this channel is disabled unless the crew explicitly switches the Uplink switch to Accept.  This too has tradeoffs.  What if the crew are incapacitated and can't enable ground control of the computer?
The uplink enable switch was put there for a specific reason -- preventing the Russians (or anyone else) from injecting false commands. It was a manual form of the (in)famous CRM-114 device in "Doctor Strangelove". Today we'd do it with cryptographic authentication, but that wasn't practical in the 1960s.
Title: Re: "The AGC wan't powerful enough"
Post by: JayUtah on December 04, 2019, 11:58:30 AM
I thought P67 was a full "manual" landing?

It's billed as such, especially since the ROD switch operated as a direct throttle setting control in that program.  Someone will have to verify what mode the DAP was in, as opposed to either Auto or Att Hold for P66.  P67 was removed starting with Apollo 13.  It's important to realize the AGC was not out of the loop in P67 because, obviously, we're referring to an AGC program.

In P66, the ROD switch selects the descent rate.  In P67 it selects the throttle setting in discrete intervals.  Each still involves the computer, because at its heart the ROD switch is just a switch with two affirmative outputs:  up, or down.  How the two bits representing that input are interpreted is still a matter of software.  In P66, the up-bit, if newly set, is interpreted by the software as adjusting the set-point delta-h to be 0.5 foot per second slower than its current setting.  That value in turn is used in a software control law, along with other parameters, to derive a control output that represents a change in throttle setting.  The control law suddenly sees an "error" between the desired sink rate and the deduced sink rate.  It has to increase thrust in order to slow the sink rate, then decrease thrust again to maintain a steady descent once the desired sink rate is achieved.

In P67, a newly set up-bit bypasses the control law and simply adds a fixed incremental value to the register in the AGC that represents the throttle setting.  The reason this was initially contemplated is that it's very hard for something to go wrong with that logic in the computer, presuming the computer hardware is minimally functional.  An alternate implementation could conceivably bypass the computer altogether and connect the switch directly to the pintle position servo on the DPS, using appropriate electronics.  But in reality, if the AGC became entirely inoperative, the only real remedy was the Abort button, allowing AGS to take over and disabling PGNS completely.
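The two interpretations of the same up/down bit might be sketched like this. All the constants and the proportional law below are illustrative stand-ins, not flight software: P66 moves a set-point that a control law then chases, while P67 bumps the throttle register directly.

```python
P66_ROD_STEP_FPS = 0.5     # set-point change per click (as cited above)
P67_THROTTLE_STEP = 0.05   # arbitrary fixed throttle increment for this sketch
GAIN = 0.1                 # arbitrary proportional gain for the toy control law

def p66_click(state, up):
    """P66: a click moves the descent-rate set-point; a control law then
    turns the sink-rate 'error' into a throttle change."""
    state["target_sink_fps"] += -P66_ROD_STEP_FPS if up else P66_ROD_STEP_FPS
    error = state["sink_fps"] - state["target_sink_fps"]
    state["throttle"] += GAIN * error   # descending too fast -> more thrust
    return state

def p67_click(state, up):
    """P67: a click bypasses the control law and bumps the throttle
    register by a fixed increment."""
    state["throttle"] += P67_THROTTLE_STEP if up else -P67_THROTTLE_STEP
    return state

s66 = p66_click({"sink_fps": 3.0, "target_sink_fps": 3.0, "throttle": 0.4}, up=True)
s67 = p67_click({"throttle": 0.4}, up=True)
print(s66["target_sink_fps"], s66["throttle"])
print(s67["throttle"])
```

Same two input bits, very different software between the switch and the engine: one path is hard to break precisely because there's almost nothing in it.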

In contrast, the joysticks on both spacecraft had a hardover mode.  Normally the position of the joystick was just a digital value given to the computer.  That is, it required a computer to interpret that digital value in terms of what mode the software was told to obey.  There's no one right way.  If I move the joystick to the right, it's clear that means some variant of "roll to the right."  But one of the possible questions is what happens when I let go and the joystick springs back to the detent?  Does that mean I've achieved the desired roll rate, or the desired roll attitude?  In case something goes wrong with the logic that lets that question be answered in whatever way the pilot selects, moving the joystick all the way to the right clicks the hardover switch on that extreme, and that's connected directly, electrically, to the wires that control the solenoid valves on the roll-specific RCS jets.  Unless something is very wrong with the spacecraft, it would be hard for that method of control to become inoperative.  Now it would be quite hard to fly the LM in that mode, but it could be done.

One of the questions ka9q and I are debating is the notion of flyability in the face of computer failure.  One theory of responding to an exigent crisis is to take whatever action leads you to the most achievable stable state.  What exactly that state looks like depends on what point of your mission you're in.  The space shuttle had an abort-to-orbit mode during the ascent.  This is because ascent is not a stable state.  Some degree of control must be actively maintained in order to avoid catastrophe.  Orbit is a stable state -- even a slapdash orbit.  It can be maintained for some time simply by passive physics.  The point is to achieve a state in which diagnosis and contemplation have time to occur, without constantly having to address exigencies.  Similarly Apollo 13 had an exigent life-support crisis; a stable state had to be achieved first before moving on.

The 120x alarms occurred early in the descent.  In the face of dire failure then, the most readily achievable stable state is a return to lunar orbit.  Not so later in the flight, where aborts become more dodgy.  If the LM remained minimally flyable (compare Apollo 10), it may have been a more stable situation to fly to a landing -- any landing.  Sitting comfortably on the lunar surface is a stable state.
Title: Re: "The AGC wan't powerful enough"
Post by: JayUtah on December 04, 2019, 12:19:17 PM
Again I think you're being hard on the controllers.

I'm not trying to be hard on anyone, and I agree the controllers acted appropriately.  We tend to think this way, even when we haven't been trained to do it.

Quote
The last thing you want is to take some drastic action on the basis of false input, so the first thing you must do is to determine if the reading(s) are real.

Correct.  Catastrophizing is the other extreme, and it's harmful too -- perhaps even more so.  Even more to the point, we prefer that operators do nothing if they don't understand the whole set of indications.  If the readings don't describe a coherent state of the system, don't change the state of the system so as to wreck what coherence there is, or may emerge.

My argument here is a bit hindsightedly circular.  I don't mean to say that operators shouldn't first vet the indications, nor that they shouldn't take clear action on the basis of some exigent subset of those indications.  In all the cases I'm looking at, where operator action mattered, the operators achieved an acceptable outcome, largely through quick-yet-methodical analysis.  (I haven't talked about Three Mile Island yet, though.)  But what happens inevitably is that people will start a deeper troubleshooting exercise while they're still dealing with exigency.  There isn't often a clear demarcation between when operators stop swerving and start trying to figure out what's wrong with the steering.  What we discover, when these types of situation do result in failure, is that the operators erred on the side of de minimis thinking when there were indications that a larger problem was afoot.  They downplayed nonconforming indications.  But this is only when they failed.  It's not a question of what operators typically do and why; it's a matter of what operators did wrong when they failed, and why.  They don't always fail, even when they start going down the de minimis hypothesization path.  Initially yes, you apply de minimis thinking because, as you say, most problems have minimal causes.  It's what you do when new indications stop adding up that interests me.

We don't tend to err on the side of catastrophization.  We tend to do too little, not too much.  This is really only an academic conclusion; it colors how we try to train people to operate complex systems, but it doesn't mean we habitually suck at it.
Title: Re: "The AGC wan't powerful enough"
Post by: JayUtah on December 04, 2019, 12:28:52 PM
It was a manual form of the (in)famous CRM-114 device in "Doctor Strangelove". Today we'd do it with cryptographic authentication, but that wasn't practical in the 1960s.

One of our in-house cryptographic software modules is titled "crm114".  But by the same token, that's why some of our nuclear warheads have purely mechanical fusing and detonation mechanisms.  There's no way to electronically fuzz something that doesn't rely on electronics to operate.  If I were defending against a nuclear attack, I'd certainly want to see if I could get the warhead to detonate early, at a relatively safe altitude.
Title: Re: "The AGC wan't powerful enough"
Post by: Obviousman on December 04, 2019, 03:30:48 PM
If I move the joystick to the right, it's clear that means some variant of "roll to the right."  But one of the possible questions is what happens when I let go and the joystick springs back to the detent?  Does that mean I've achieved the desired roll rate, or the desired roll attitude?

Weren't they two of the modes available in the Mercury AFCS? One that simply stopped the thrust and the roll / pitch / yaw rate remained at whatever it was, and the other mode held the attitude when the hand controller was returned to a neutral position?

IIRC the astronauts preferred the latter mode, but it used significantly more fuel, and I think it got removed from later missions for that reason.
Title: Re: "The AGC wan't powerful enough"
Post by: Abaddon on December 04, 2019, 05:53:54 PM
Good on you.  I figured Confuse-A-Cat might have been too obscure a reference.

The idea behind fuzzing is that you're not necessarily trying to exercise some specific means.  Take lock-picking, for example.  The overt approach is to twist the lock barrel to apply shear to the lock pins, then use a tiny pick to raise each pin to the appropriate position.  The constant shear holds it there while you work on the other pins.  But there's also a sawtooth tool you can just randomly slide in and out while you hold the barrel in torsion, and it sort of randomly raises and lowers the pins.  It's often much faster than the explicit method, and it requires less skill.  This would be equivalent to fuzzing the lock.

That is amusing. I pick locks as a hobby. I have a nice set of Southord's picks and craft my own. To me, it is simply another form of puzzle. Your summary is spot on. The "sawtooth" tool to which you refer is known as a rake. While it is, to me, a hobby (I have a long-held interest in puzzles of all kinds), it occasionally comes in handy. "Raking" will instantly open any piece of common office furniture. In extremis, I have improvised with a paper clip and nail scissors. Still works like a charm.
Title: Re: "The AGC wan't powerful enough"
Post by: Obviousman on December 04, 2019, 06:38:28 PM
It was a manual form of the (in)famous CRM-114 device in "Doctor Strangelove". Today we'd do it with cryptographic authentication, but that wasn't practical in the 1960s.

One of our in-house cryptographic software modules is titled "crm114".  But by the same token, that's why some of our nuclear warheads have purely mechanical fusing and detonation mechanisms.  There's no way to electronically fuzz something that doesn't rely on electronics to operate.  If I were defending against a nuclear attack, I'd certainly want to see if I could get the warhead to detonate early, at a relatively safe altitude.

Off topic but I love some of the obscure homages you see in movies et al. One of my favourites is in Star Trek: First Contact, when they are trying to detach the deflector dish and have to use a manual control unit labelled 'AE35'.... which was the name of the transmitter component in 2001: A Space Odyssey that HAL says was going to malfunction.
Title: Re: "The AGC wan't powerful enough"
Post by: JayUtah on December 04, 2019, 08:24:02 PM
...a manual control unit labelled 'AE35'

As I get more and more into conceptual design for entertainment, I find more and more places to put such Easter eggs.  So do my colleagues.  When the crew of Serenity go to planet Miranda, the rescue ship is designated C57D.  A surprising number of designers are also big fans of Space: 1999.
Title: Re: "The AGC wan't powerful enough"
Post by: Everett on December 05, 2019, 02:51:46 PM
Actually, I'd say the astronauts 'did' manually land the LM, even if it had a fly-by-wire system the controls ran through. If you say they didn't simply because of that, then by the same logic it's impossible to 'manually' land even a fully functioning 777 on an empty dry salt lake by looking out the window, since the pilot's controls still go through a computer. The LM case is more complicated, since the astronauts used modes that, for example, hold a constant vertical speed, but they were still telling it exactly what to do and which way to fly at every moment. That's opposed to, say, an autoland on an airplane, where the computer decides what to do to follow the ILS signal. To put it another way, in the LM the astronauts gave the computer moment-to-moment commands (automated only in some aspects), while an automatic landing hands the computer the whole task.

Am I making any sense?
Title: Re: "The AGC wan't powerful enough"
Post by: smartcooky on December 05, 2019, 07:09:24 PM
It was a manual form of the (in)famous CRM-114 device in "Doctor Strangelove". Today we'd do it with cryptographic authentication, but that wasn't practical in the 1960s.

One of our in-house cryptographic software modules is titled "crm114".  But by the same token, that's why some of our nuclear warheads have purely mechanical fusing and detonation mechanisms.  There's no way to electronically fuzz something that doesn't rely on electronics to operate.  If I were defending against a nuclear attack, I'd certainly want to see if I could get the warhead to detonate early, at a relatively safe altitude.

Off topic but I love some of the obscure homages you see in movies et al. One of my favourites is in Star Trek: First Contact, when they are trying to detach the deflector dish and have to use a manual control unit labelled 'AE35'.... which was the name of the transmitter component in 2001: A Space Odyssey that HAL says was going to malfunction.

Some of the "messaging" can be more direct than that, with the programmer literally talking to whoever is using the program.

Back in the 1980's, I was quite heavily involved in programming Apple ][+ computers to analyse data from the output of a photoelectric photometer being used to observe variable stars. This programming was done in 6502 "assembly language" using a screen not unlike this one...

(https://i.imgur.com/PMgqqyj.png)

... and directly keying in sequences of hexadecimal numbers. I got quite good at "seeing" sequences of hex numbers, reading them as groups and recognizing them as program instructions. In the middle are the sequences of hex numbers (00 to FF), with their ASCII interpretations in the right column.

One day, I was examining a subroutine written by another programmer for a different purpose, and trying to figure out how to integrate that subroutine into what I was doing - in other words, I was hacking his program and trying to pinch part of it to use it for my program. While doing this, I suddenly hit a sequence of hexadecimal numbers that I didn't immediately recognize as program instructions - they were letters, and when I looked over in the right column, I got a message from the programmer....




WHAT THE HELL ARE YOU DOING POKING AROUND IN HERE!

     
Title: Re: "The AGC wan't powerful enough"
Post by: bknight on December 05, 2019, 10:45:45 PM
Caught with your hand in the cookie jar.  :o
Title: Re: "The AGC wan't powerful enough"
Post by: raven on December 06, 2019, 06:39:59 PM
I laughed, thank you, a delightful little Easter egg.