Author Topic: Boeing Starliner  (Read 15775 times)

Offline cjameshuff

  • Mars
  • ***
  • Posts: 372
Re: Boeing Starliner
« Reply #15 on: September 03, 2021, 06:43:37 PM »
Bezos likes to describe the company in terms of the tortoise and hare fable, even incorporating tortoises into the "coat of arms", but they don't seem to understand that the tortoise didn't win because it was slow; it won despite being slow, and only because the hare didn't take the race seriously. It's a poorly fitting metaphor anyway: Blue Origin is actually more like the hare in that they keep stopping, except they never raced ahead in the first place. Boeing's lost all ability to move without someone dangling a government contract in front of them. SpaceX sometimes falls down but always bounces right back, and occasionally veers off on a wild tangent, but they're quick to abandon bad ideas when they find better ones.

What Bezos calls "skipping steps" has allowed SpaceX to gain a wide range of experience, explore a variety of different approaches, and develop multiple generations of successful launch systems, while Blue Origin is still working on getting New Glenn off the ground and Boeing is still trying to pretend everything's fine the way it is. Boeing's a fat old hog that won't budge without a carrot waved in front of its nose. SpaceX is more like a pack of hounds frantically scouring the landscape for a trail. And Blue Origin is doing a fair imitation of an unidentifiable piece of roadkill, breeding pests and emitting foul odors.

Offline Jeff Raven

  • Venus
  • **
  • Posts: 93
Re: Boeing Starliner
« Reply #16 on: September 05, 2021, 02:01:03 AM »
I was given a paper to read from Boeing on some outfitting they did to one of their derivatives. Kind of reads like a high school report what with the language. Things like, "engineers were mesmerised by the video."

Definitely not something you'd see in a major engineering publication.

As someone who teaches scientific writing, I find that example truly horrifying.

Offline Jeff Raven

  • Venus
  • **
  • Posts: 93
Re: Boeing Starliner
« Reply #17 on: September 05, 2021, 02:19:52 AM »

The second problem was performance - all the backend services were doing screen management over a network, which was already slow.  Add to this that the APIs were exceedingly primitive.  Like I said, I could only add or remove one row at a time from the table I was responsible for, and with network latency each command took about a quarter second to execute.  Not a problem when you're monitoring 5 or 10 objects, deadly when you're managing a couple hundred (it took on the order of 5 minutes to update the table if you were managing hundreds of reports).  Under realistic scenarios the whole thing just fell down hard.  We suggested to the Boeing team that they add calls to perform bulk updates (or at least a call to delete all the rows in the table), but those suggestions were dismissed as being "too hard". 

How this got past review and approved (especially since it couldn't satisfy some key functional requirements) is a mystery to me, meaning it was never really reviewed.  Someone at Boeing should have kicked that design back with "are you f___ing kidding me" stamped on every page in red ink, but didn't. 

At the time I told myself it was because this wasn't something in a critical path so they weren't putting their best team on it, but it was part of a larger modernization program and proved to be an accurate representation of the overall effort. 

A thousand years ago I started college as a computer science major (I switched out of it after a little over a year). One of the primary things my teachers drilled into us - other than that you damn well better proof your work, anticipate every possible human error, no matter how small or unlikely, and provide a fix for it, and put comments in everywhere - was that you should make every bit of code as fast and efficient as possible. It's not a big deal if your bit takes a quarter-second (back then that would have been blazing fast) until you realize it might be a module in a larger program, one which calls your little slice of code 100 times a day. And if that code could be executed in a tenth of a second instead, that's a 60% time saving, multiplied by 100, and that adds up to real differences. An idealized view? Perhaps. But I mentioned that story to my nephew, who has a doctorate in computer science and teaches it, and he said that's still the standard, and something he also tells his students.

And, as you know, with the complexity of programs, a seemingly minor module or path can somehow find its way into a very critical thread, so their failure to check this stuff and kick it back with that red stamp (love the imagery) really can't be excused.

Offline JayUtah

  • Neptune
  • ****
  • Posts: 3787
    • Clavius
Re: Boeing Starliner
« Reply #18 on: September 05, 2021, 05:48:51 PM »
Boeing's lost all ability to move without someone dangling a government contract in front of them.

The majority of Boeing's profits are still from private-sector business.  Without a customer or a potential customer driving specific forms of innovation, it's hard to make the case that Boeing should use its investor funds to do the same kinds of things as younger companies are doing.  It has a fair amount to do with Boeing having become a dysfunctional company.  But it has more to do with Boeing being a publicly-held company.
"Facts are stubborn things." --John Adams

Offline jfb

  • Mars
  • ***
  • Posts: 396
Re: Boeing Starliner
« Reply #19 on: September 06, 2021, 01:12:43 AM »

The second problem was performance - all the backend services were doing screen management over a network, which was already slow.  Add to this that the APIs were exceedingly primitive.  Like I said, I could only add or remove one row at a time from the table I was responsible for, and with network latency each command took about a quarter second to execute.  Not a problem when you're monitoring 5 or 10 objects, deadly when you're managing a couple hundred (it took on the order of 5 minutes to update the table if you were managing hundreds of reports).  Under realistic scenarios the whole thing just fell down hard.  We suggested to the Boeing team that they add calls to perform bulk updates (or at least a call to delete all the rows in the table), but those suggestions were dismissed as being "too hard". 

How this got past review and approved (especially since it couldn't satisfy some key functional requirements) is a mystery to me, meaning it was never really reviewed.  Someone at Boeing should have kicked that design back with "are you f___ing kidding me" stamped on every page in red ink, but didn't. 

At the time I told myself it was because this wasn't something in a critical path so they weren't putting their best team on it, but it was part of a larger modernization program and proved to be an accurate representation of the overall effort. 

A thousand years ago I started college as a computer science major (I switched out of it after a little over a year). One of the primary things my teachers drilled into us - other than that you damn well better proof your work, anticipate every possible human error, no matter how small or unlikely, and provide a fix for it, and put comments in everywhere - was that you should make every bit of code as fast and efficient as possible. It's not a big deal if your bit takes a quarter-second (back then that would have been blazing fast) until you realize it might be a module in a larger program, one which calls your little slice of code 100 times a day. And if that code could be executed in a tenth of a second instead, that's a 60% time saving, multiplied by 100, and that adds up to real differences. An idealized view? Perhaps. But I mentioned that story to my nephew, who has a doctorate in computer science and teaches it, and he said that's still the standard, and something he also tells his students.

My mantra has been:

It doesn’t matter how fast your code is if it’s wrong.
It doesn’t matter how fast your code is if you can’t maintain it.
It doesn’t matter how fast your code is if it’s a malware vector.
It doesn’t matter how fast your code is if it falls over when someone sneezes in the next room.

Speed matters, but those things matter more.  Yes, if you can shave a couple of cycles out of each iteration of a tight loop that executes thousands of times, that’s worth doing.   Shaving a couple of cycles off a subroutine that executes exactly once at startup, however, isn’t worth the effort. 

Different war story on the maintainability front.  We were making a sonar simulator for a Navy lab.  We handled modeling the noise spectra and the movement of objects in the simulation.  A second group built a massive DSP to take our noise spectra and turn them into the signals you’d get from a towed array.  A third contractor was responsible for a 3D graphical display.

At one point we were asked to look at the graphics code to see if we could speed it up (this was written in C on a Silicon Graphics  system using (not-yet-open) GL).  A small program, only about 5000 lines or so, but they were all in main.  The original author believed that using actual subroutines was too inefficient, so he used gotos to branch all over the place - something like 15 gotos in all. It took my coworker a couple of weeks to puzzle out the flow of control. 

The code was so tightly coupled with itself we could not make any changes without breaking something.  We tried compiling with level 1 optimization - the compiler ate all the RAM, then ate all the swap space and the system panicked.  This code was literally unmaintainable.

We finally gave the lab two choices - let us rewrite the whole thing from the keel up, or buy faster hardware.

They wound up buying faster hardware.
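
For anyone who's never seen that style in the wild, here's a tiny made-up sketch of the difference (nothing like the real code, which I couldn't share anyway) - the first version is roughly how that program was structured, the second is what any of us would have written:

Code:
#include <stdio.h>

/* Hypothetical sketch, version 1: everything in main, flow managed with gotos.
   Even in a toy you have to chase the labels around to follow it. */
int main(void)
{
    int frame = 0;

    goto init;

draw:
    printf("draw frame %d\n", frame);
    frame++;
    if (frame < 3)
        goto draw;
    goto cleanup;

init:
    printf("set up display lists\n");
    goto draw;

cleanup:
    printf("free display lists\n");
    return 0;
}

/* Version 2: the same flow with ordinary subroutines - a little call
   overhead, vastly easier to follow and to change. */
static void init_scene(void)    { printf("set up display lists\n"); }
static void draw_frame(int f)   { printf("draw frame %d\n", f); }
static void cleanup_scene(void) { printf("free display lists\n"); }

void render_scene(void)
{
    init_scene();
    for (int f = 0; f < 3; f++)
        draw_frame(f);
    cleanup_scene();
}

Now imagine version 1 at 5000 lines.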

Quote
And, as you know, with the complexity of programs, a seemingly minor module or path can somehow find its way into a very critical thread, so their failure to check this stuff and kick it back with that red stamp (love the imagery) really can't be excused.

Yup.  But again, the performance wasn’t the main problem, it was the absolute inability to satisfy a basic functional requirement that was so boggling.  How that got by anyone is an absolute travesty.

Offline Peter B

  • Saturn
  • ****
  • Posts: 1268
Re: Boeing Starliner
« Reply #20 on: September 06, 2021, 03:44:47 AM »
I was given a paper to read from Boeing on some outfitting they did to one of their derivatives. Kind of reads like a high school report what with the language. Things like, "engineers were mesmerised by the video."

Definitely not something you'd see in a major engineering publication.

Out of interest, is the paper publicly available?

Offline grmcdorman

  • Earth
  • ***
  • Posts: 145
Re: Boeing Starliner
« Reply #21 on: September 06, 2021, 09:59:41 AM »
With respect to optimization, premature optimization is also bad. Yes, you can write your code in a way that would require a complete rewrite to make it more efficient, but you can also spend effort optimizing code that doesn't need it - and as jfb says, that can result in unmaintainable code. There is often a balance to strike between clarity (and stability) and maximum speed.

One approach is to try to write things reasonably efficiently, but with a focus on clarity/stability/maintainability. Then run a performance analysis tool to see where you need to focus your optimization efforts.
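
As a crude illustration (a toy sketch, not a real profiler - in practice you'd reach for something like gprof or perf): time the major phases first, and only then decide what's worth optimizing.

Code:
#include <stdio.h>
#include <time.h>

/* Toy stand-ins for a program's phases; the loop bodies just burn time. */
static void load_data(void)    { for (volatile long i = 0; i < 2000000L; i++) ; }
static void process_data(void) { for (volatile long i = 0; i < 80000000L; i++) ; }
static void write_report(void) { for (volatile long i = 0; i < 4000000L; i++) ; }

/* Measure one phase; a real profiler does this per function, automatically. */
static double timed(void (*phase)(void))
{
    clock_t start = clock();
    phase();
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void)
{
    printf("load:    %.3f s\n", timed(load_data));
    printf("process: %.3f s\n", timed(process_data));
    printf("report:  %.3f s\n", timed(write_report));
    /* Whatever dominates is where clarity can be traded for speed;
       everything else stays simple and maintainable. */
    return 0;
}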

Offline Glom

  • Saturn
  • ****
  • Posts: 1102
Re: Boeing Starliner
« Reply #22 on: September 06, 2021, 11:03:32 AM »
I was given a paper to read from Boeing on some outfitting they did to one of their derivatives. Kind of reads like a high school report what with the language. Things like, "engineers were mesmerised by the video."

Definitely not something you'd see in a major engineering publication.

Out of interest, is the paper publicly available?

Not sure. It is related to a military product so I don't really want to reveal any details to avoid trouble.

With respect to optimization, premature optimization is also bad. Yes, you can write your code in a way that would require a complete rewrite to make it more efficient, but you can also spend effort optimizing code that doesn't need it - and as jfb says, that can result in unmaintainable code. There is often a balance to strike between clarity (and stability) and maximum speed.

One approach is to try to write things reasonably efficiently, but with a focus on clarity/stability/maintainability. Then run a performance analysis tool to see where you need to focus your optimization efforts.

What are some examples of being too optimised?

Offline Peter B

  • Saturn
  • ****
  • Posts: 1268
Re: Boeing Starliner
« Reply #23 on: September 06, 2021, 11:55:00 AM »
I was given a paper to read from Boeing on some outfitting they did to one of their derivatives. Kind of reads like a high school report what with the language. Things like, "engineers were mesmerised by the video."

Definitely not something you'd see in a major engineering publication.

Out of interest, is the paper publicly available?

Not sure. It is related to a military product so I don't really want to reveal any details to avoid trouble.

Yeah, no worries.

Quote
With respect to optimization, premature optimization is also bad. Yes, you can write your code in a way that would require a complete rewrite to make it more efficient, but you can also spend effort optimizing code that doesn't need it - and as jfb says, that can result in unmaintainable code. There is often a balance to strike between clarity (and stability) and maximum speed.

One approach is to try to write things reasonably efficiently, but with a focus on clarity/stability/maintainability. Then run a performance analysis tool to see where you need to focus your optimization efforts.

What are some examples of being too optimised?

I assume it would be over-optimising to spend a lot of time refining code which ends up being deleted.

Like that Everyday Astronaut video walking around the SpaceX Starship - the first iteration of the first stage has grid fins that don't fold into the side of the booster during launch. Presumably once they've worked out what the grid fins should look like or even if they're needed, then they can work out how they can be designed to fold down into the side of the booster.

Offline cjameshuff

  • Mars
  • ***
  • Posts: 372
Re: Boeing Starliner
« Reply #24 on: September 06, 2021, 01:35:05 PM »
Boeing's lost all ability to move without someone dangling a government contract in front of them.

The majority of Boeing's profits are still from private-sector business.  Without a customer or a potential customer driving specific forms of innovation, it's hard to make the case that Boeing should use its investor funds to do the same kinds of things as younger companies are doing.  It has a fair amount to do with Boeing having become a dysfunctional company.  But it has more to do with Boeing being a publicly-held company.

I was mainly speaking of the portion of their business that more or less overlaps what SpaceX and Blue Origin are doing. Publicly held companies can still take risks, try to reach new markets, or start projects that let them improve their capabilities. Boeing used to do so, even as recently as the Delta IV, which they initially had commercial ambitions for. But now they seem content to let ULA handle Vulcan (with costs and risks shared with Lockheed, and government contracts paying for much of the development) while not supporting anything more ambitious than an updated launcher to replace Delta IV and Atlas V. SLS is purely a government project with no commercial applications whatsoever, and they've shown no real interest in doing anything with Starliner beyond the NASA contract. They make commercial satellite buses, but they're content with minor variations on what they've done before. They no longer seem interested in developing new products or capabilities on their own initiative.

Offline cjameshuff

  • Mars
  • ***
  • Posts: 372
Re: Boeing Starliner
« Reply #25 on: September 06, 2021, 02:04:23 PM »
I was given a paper to read from Boeing on some outfitting they did to one of their derivatives. Kind of reads like a high school report what with the language. Things like, "engineers were mesmerised by the video."

Definitely not something you'd see in a major engineering publication.

Out of interest, is the paper publicly available?

Not sure. It is related to a military product so I don't really want to reveal any details to avoid trouble.

With respect to optimization, premature optimization is also bad. Yes, you can write your code in a way that would require a complete rewrite to make it more efficient, but you can also spend effort optimizing code that doesn't need it - and as jfb says, that can result in unmaintainable code. There is often a balance to strike between clarity (and stability) and maximum speed.

One approach is to try to write things reasonably efficiently, but with a focus on clarity/stability/maintainability. Then run a performance analysis tool to see where you need to focus your optimization efforts.

What are some examples of being too optimised?

Optimizing a spin-delay function?

In realtime code, you have to guarantee that you have sufficient processing time for the worst case. You may have only microseconds before you have to finish one item of work and start the next. There's not really any point in optimizing the other cases, and it's sometimes actually better to write things so they always take the same amount of time. The processing cycles you consume have to be available for your use anyway, and it's easier to verify that your code can actually do the job in the time available if it always takes the same amount of time to run. If you do have different paths that take different amounts of time, any effort spent optimizing for time is wasted unless it's spent on the worst-case path.
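
A trivial sketch of what I mean (hypothetical, not from any real system): the first comparison bails out early and is faster on average, but its worst case depends on the data; the second always does the same work, so its timing is trivial to budget for.

Code:
#include <stdio.h>
#include <stddef.h>

/* Early-exit compare: fast on average, but worst-case time depends on
   where the first mismatch falls. */
int equal_early_exit(const unsigned char *a, const unsigned char *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i])
            return 0;
    return 1;
}

/* Fixed-work compare: always touches every byte, so every call takes
   essentially the same time - the case you verified is the case you get. */
int equal_fixed_time(const unsigned char *a, const unsigned char *b, size_t n)
{
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= (unsigned char)(a[i] ^ b[i]);
    return diff == 0;
}

int main(void)
{
    unsigned char x[4] = "abc", y[4] = "abd";
    printf("%d %d\n", equal_early_exit(x, y, 4), equal_fixed_time(x, y, 4));
    return 0;
}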

Other times, you don't have the space to do things the fastest possible way, and have to make tradeoffs that sacrifice performance to do things in less space, such as by recomputing things as needed. Or you usually need to process data in large batches, and take advantage of GPU acceleration or SIMD instructions that don't perform nearly as well when you only need to do a small number of operations. It might take a thousand times longer for those cases, but since they're small and rare you'll never save more than a fraction of a second by optimizing for them. And init code in a long-running daemon or interactive program isn't really worth optimizing unless it's taking long enough that startup time is actually an issue. It might be months or years before you run that code again.

And in general, you can usually make a much bigger difference by a change of approach at a higher level than you can in micro-optimizing every little piece of code. Implementing a more appropriate data structure or algorithm can make a bigger difference than any amount of time spent optimizing a search of an unsorted list.
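
A made-up example of that last point: no amount of loop tweaking makes the first approach below competitive with the second once the list is large and you're searching it repeatedly.

Code:
#include <stdio.h>
#include <stdlib.h>

enum { N = 1000000, LOOKUPS = 10000 };

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* O(n) per lookup, no matter how cleverly the loop is written. */
static int linear_contains(const int *v, size_t n, int key)
{
    for (size_t i = 0; i < n; i++)
        if (v[i] == key)
            return 1;
    return 0;
}

int main(void)
{
    int *v = malloc(N * sizeof *v);
    if (!v)
        return 1;
    for (int i = 0; i < N; i++)
        v[i] = rand();

    /* Approach 1: micro-optimize the scan all you like, it's still
       ~N/2 comparisons per lookup, times LOOKUPS lookups. */
    int hits = 0;
    for (int i = 0; i < LOOKUPS; i++)
        hits += linear_contains(v, N, i);

    /* Approach 2: change the approach - pay for one sort, then every
       lookup is O(log n) via the standard library's binary search. */
    qsort(v, N, sizeof *v, cmp_int);
    int hits2 = 0;
    for (int i = 0; i < LOOKUPS; i++) {
        int key = i;
        hits2 += bsearch(&key, v, N, sizeof *v, cmp_int) != NULL;
    }

    printf("%d %d\n", hits, hits2); /* same answer, very different runtime */
    free(v);
    return 0;
}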

Offline molesworth

  • Mars
  • ***
  • Posts: 349
  • the curse of st custards
Re: Boeing Starliner
« Reply #26 on: September 06, 2021, 04:50:31 PM »
Different war story on the maintainability front.  We were making a sonar simulator for a Navy lab.  We handled modeling the noise spectra and the movement of objects in the simulation.  A second group built a massive DSP to take our noise spectra and turn them into the signals you’d get from a towed array.  A third contractor was responsible for a 3D graphical display.

At one point we were asked to look at the graphics code to see if we could speed it up (this was written in C on a Silicon Graphics  system using (not-yet-open) GL).  A small program, only about 5000 lines or so, but they were all in main.  The original author believed that using actual subroutines was too inefficient, so he used gotos to branch all over the place - something like 15 gotos in all. It took my coworker a couple of weeks to puzzle out the flow of control. 

The code was so tightly coupled with itself we could not make any changes without breaking something.  We tried compiling with level 1 optimization - the compiler ate all the RAM, then ate all the swap space and the system panicked.  This code was literally unmaintainable.

We finally gave the lab two choices - let us rewrite the whole thing from the keel up, or buy faster hardware.

They wound up buying faster hardware.

That takes me back to my days developing simulator graphics, also on SGI hardware (no, it wasn't for a US Navy lab  ;) ).  Luckily our team were all pretty good, but we still did have to be aware of the temptation of premature optimisation.  It is, as the saying goes, the root of all evil...

After a few years doing that I moved into games development, which took optimisation to a whole other level!!
Days spent at sea are not deducted from one's allotted span - Phoenician proverb

Offline jfb

  • Mars
  • ***
  • Posts: 396
Re: Boeing Starliner
« Reply #27 on: September 06, 2021, 04:52:15 PM »
What are some examples of being too optimised?

Stuff I’ve run across - replicating the behavior of a library function in your code (badly) to avoid the overhead of a function call, using bitwise operations in place of arithmetic operators, using pointer notation instead of array subscript notation, being clever with bit packing, etc. 

Being clever with bit packing caused some heartburn for Mac programmers back in the ‘80s.  The original MacOS ran on a Motorola 68000 CPU, which had a 32-bit word size, but only 24 lines on the address bus.  Pointer types (used to store address values) were 32 bits wide, with the upper 8 bits unused.  Enterprising Mac programmers would stuff data into that uppermost byte to get the most out of that precious 128k of RAM.  Of course, when the 68020 came out, it used a 32-line address bus, so all that code had to be rewritten. 
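
The modern descendant of that trick looks something like this (a hypothetical sketch - it "works" only as long as the assumption about unused bits holds, which is exactly what bit the Mac programmers):

Code:
#include <stdio.h>
#include <stdint.h>

#define FLAG_DIRTY 0x1u

/* Stash a flag in a pointer bit we assume the hardware ignores - the same
   gamble as using the 68000's unused top address byte. Here the assumption
   is that an int is aligned, so the low bit of its address is always zero. */
static void *tag_pointer(void *p, unsigned flags)
{
    return (void *)((uintptr_t)p | flags);
}

static void *untag_pointer(void *p)
{
    return (void *)((uintptr_t)p & ~(uintptr_t)FLAG_DIRTY);
}

int main(void)
{
    int value = 42;
    void *tagged = tag_pointer(&value, FLAG_DIRTY);

    printf("flag:  %u\n", (unsigned)((uintptr_t)tagged & FLAG_DIRTY));
    printf("value: %d\n", *(int *)untag_pointer(tagged));
    return 0;
}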


Offline molesworth

  • Mars
  • ***
  • Posts: 349
  • the curse of st custards
Re: Boeing Starliner
« Reply #28 on: September 07, 2021, 06:31:12 AM »
I guess we've wandered way off topic from the thread's original subject, but I think the most outstanding piece of optimisation I've ever seen, and one which still blows my mind, is the Fast Inverse Square Root. The code looks like complete nonsense but gives a result within a percent or two of the actual value, and back in the days when Pentium processors were top of the range it was a huge performance improvement, especially for graphics processing where 1/sqrt(x) is used a lot. It held its own against general-purpose libraries for years, although modern processors now have dedicated instructions for approximate reciprocal square roots.

A good article on it is - https://medium.com/hard-mode/the-legendary-fast-inverse-square-root-e51fee3b49d9 - see if you can figure out how it works before reading the explanation  :) 
(Warning - code contains sweary word...)
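
For anyone who doesn't want to click through, the heart of it (as it appears in the Quake III source, minus the sweary comment; the annotations are mine) is roughly this:

Code:
float Q_rsqrt(float number)
{
    long i;
    float x2, y;
    const float threehalfs = 1.5F;

    x2 = number * 0.5F;
    y  = number;
    i  = *(long *)&y;                       /* reinterpret the float's bits as an integer */
    i  = 0x5f3759df - (i >> 1);             /* the magic constant gives a first guess at 1/sqrt(y) */
    y  = *(float *)&i;                      /* back to float */
    y  = y * (threehalfs - (x2 * y * y));   /* one round of Newton-Raphson to refine it */

    return y;
}

(The pointer casts are the kind of type punning a modern compiler will grumble about, but that's half the charm.)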
Days spent at sea are not deducted from one's allotted span - Phoenician proverb

Offline smartcooky

  • Uranus
  • ****
  • Posts: 1959
Re: Boeing Starliner
« Reply #29 on: September 07, 2021, 07:46:02 AM »
What are some examples of being too optimised?

Stuff I’ve run across - replicating the behavior of a library function in your code (badly) to avoid the overhead of a function call, using bitwise operations in place of arithmetic operators, using pointer notation instead of array subscript notation, being clever with bit packing, etc. 

Being clever with bit packing caused some heartburn for Mac programmers back in the ‘80s.  The original MacOS ran on a Motorola 68000 CPU, which had a 32-bit word size, but only 24 lines on the address bus.  Pointer types (used to store address values) were 32 bits wide, with the upper 8 bits unused.  Enterprising Mac programmers would stuff data into that uppermost byte to get the most out of that precious 128k of RAM.  Of course, when the 68020 came out, it used a 32-line address bus, so all that code had to be rewritten. 



Try dealing with anything to do with a CDP1802 (aka RCA1802). Talk about heartburn!

It was an arse of a thing to work with and troubleshoot, and even worse in the application we used it for - the Canadian Marconi AN/APN510 Doppler GS/DA radar. This piece of equipment used the 8-bit CDP1802 in a 16-bit application. The data and address buses on the circuit boards shared the same 8 physical lines, with the high- and low-order bytes of each multiplexed onto them. That made a normal logic probe impossible to use, because at any given moment the logic state of the pin you were probing could have been representing any of four things: say you were probing bit 8 of the bus, you wouldn't know whether you were seeing that bit of the high- or low-order byte of the data bus or of the address bus.

What we had instead was a special-to-type logic probe (it had a special name that I can't recall) that clipped onto a DIL CMOS chip, with a seven-segment display and a row of LEDs at the top. One of the pins picked up the multiplex sync pulse, and there were two micro-switch buttons on the side - the four possible combinations (both in, both out, one in one out, or the reverse) gave you the 1 or 0 bits on the LEDs and the hex value on the seven-segment display for whichever part of the bus you had selected with the micro-switches.

It was a true nightmare to fault find.

NOTE: Apparently the CDP1802 was used in a number of space probes such as Magellan, Galileo and the HST.
« Last Edit: September 07, 2021, 07:48:05 AM by smartcooky »
If you're not a scientist but you think you've destroyed the foundation of a vast scientific edifice with 10 minutes of Googling, you might want to consider the possibility that you're wrong.