Author Topic: Hunchback aka inquisitivemind.  (Read 125800 times)

Offline pzkpfw

  • Venus
  • **
  • Posts: 44
Re: Hunchback aka inquisitivemind.
« Reply #210 on: July 14, 2012, 09:19:24 PM »
In contrast, abusing a table gives you immediate, controllable, reliable results with a fraction of the work, despite being the "wrong" way: it uses a semantic structure for data (the table) to do layout, which should be purely a matter of presentation.

Amen. The next time someone hassles me for using tables, I'm telling JayUtah they program COBOL.

Offline ka9q

  • Neptune
  • ****
  • Posts: 3014
Re: Hunchback aka inquisitivemind.
« Reply #211 on: July 14, 2012, 09:24:49 PM »
Forth?
It was widely used for a time (probably still is) in certain old embedded systems. The onboard computer on the AMSAT Phase-III spacecraft (Oscars 10, 13 & 40) ran a German variant of Forth called IPS. Yes, it was incomprehensible but it was compact. That was important when you only had a 400 bps link to an 1802 computer with 16KB of space, and you couldn't just reload the whole thing every time you wanted to change or add something.

The fact that Forth is so compact and easy to implement but impossible to understand does suggest it as a good intermediate language. As cjameshuff says, like PostScript and PDF.
« Last Edit: July 14, 2012, 09:46:34 PM by ka9q »

Offline ka9q

  • Neptune
  • ****
  • Posts: 3014
Re: Hunchback aka inquisitivemind.
« Reply #212 on: July 14, 2012, 10:11:34 PM »
The Java language is not terrible, but is unnecessarily verbose, omitting useful features like operator overloading,
My colleagues kept saying that operator overloading wasn't very useful, but it was what finally prompted me to learn C++ after programming in C for many years. I have two different applications that can really use it: algebraic forward error correction algorithms like Reed-Solomon codes that operate over Galois (finite) fields, and orbital tracking programs that do a lot of analytic geometry, especially vector operations. Being able to define '+' and '*' over things other than just integers and floating-point numbers can really make the code more readable, and that should be the primary goal of any good programming language.
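
For instance, here's a toy sketch of the vector case (illustrative names, not my actual tracking code):

Code: [Select]
#include <iostream>

// A minimal 3-vector with overloaded operators.
struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3 &o) const { return {x + o.x, y + o.y, z + o.z}; }
    double operator*(const Vec3 &o) const { return x*o.x + y*o.y + z*o.z; } // dot product
};

// Scalar multiple, so expressions like t*v read naturally.
Vec3 operator*(double s, const Vec3 &v) { return {s*v.x, s*v.y, s*v.z}; }

int main() {
    Vec3 r{7000.0, 0.0, 0.0};  // position, km
    Vec3 v{0.0, 7.5, 0.0};     // velocity, km/s
    Vec3 r2 = r + 60.0 * v;    // reads like the vector equation itself
    std::cout << r2.x << " " << r2.y << " " << r2.z << "\n";
    std::cout << "r.v = " << (r * v) << "\n";
}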

Problem is, I had already written these applications in C, so I knew how fast they could run. I could never get my C++ version of my Reed-Solomon decoder to run faster than about 25% of the speed of my C version. So while the C++ version is an excellent teaching tool -- the formulas look just like they do in the textbooks -- I am still not sure how useful it is for real programs with real performance requirements.



Offline cjameshuff

  • Mars
  • ***
  • Posts: 372
Re: Hunchback aka inquisitivemind.
« Reply #213 on: July 14, 2012, 10:41:43 PM »
Problem is, I had already written these applications in C, so I knew how fast they could run. I could never get my C++ version of my Reed-Solomon decoder to run faster than about 25% of the speed of my C version. So while the C++ version is an excellent teaching tool -- the formulas look just like they do in the textbooks -- I am still not sure how useful it is for real programs with real performance requirements.

The performance impact shouldn't be anything like that. There are a few things you could be doing wrong... passing by value when you should be passing by reference, etc.

This is one downside to C++... it's easy to accidentally write code that works but does things like unnecessarily copying complex objects or initializing them in an inefficient way. You should usually be able to avoid these pitfalls, however.
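
For example (hypothetical types, nothing from ka9q's code), these two functions are called identically, but the first silently copies the entire coefficient vector on every call:

Code: [Select]
#include <cstdint>
#include <vector>

struct Poly { std::vector<std::uint8_t> coeff; };

// Pass by value: copies the whole vector on every call.
// (Plain integer arithmetic stands in for the field arithmetic here;
// the point is only the hidden copy.)
std::uint8_t eval_by_value(Poly p, std::uint8_t x) {
    std::uint8_t acc = 0;
    for (std::uint8_t c : p.coeff) acc = std::uint8_t(acc * x + c);
    return acc;
}

// Pass by reference-to-const: same call syntax, no copy.
std::uint8_t eval_by_ref(const Poly &p, std::uint8_t x) {
    std::uint8_t acc = 0;
    for (std::uint8_t c : p.coeff) acc = std::uint8_t(acc * x + c);
    return acc;
}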

Offline ka9q

  • Neptune
  • ****
  • Posts: 3014
Re: Hunchback aka inquisitivemind.
« Reply #214 on: July 15, 2012, 06:11:12 AM »
That's what I thought. But I was handicapped by working at an extremely low level, overloading the '+' and '*' operators to implement Galois field arithmetic. A Galois field is a finite set with "add" and "multiply" operations that behave much like regular arithmetic, the main difference being that Galois field arithmetic never overflows: all arguments are permitted and produce results in the set (i.e., it's closed). Just like regular arithmetic it has "+" and "*" operations with identity elements "0" and "1" respectively, multiplication distributes over addition, and exponentiation behaves like repeated multiplication. All this is what makes it a field.

In the usual representation of each set element, polynomial form, "addition" consists merely of XORing the two binary values. Simple enough.

"Multiplication" is a little more complicated. It can be done directly in hardware with feedback shift registers but that's tedious in software. Just as in regular arithmetic you can multiply by adding logarithms (the principle of a slide rule) so the usual software approach is to convert the arguments from polynomial form to index (log) form with a lookup table, add the logs with regular addition and convert back to polynomial form with an antilog table. Multiplication by zero is a special case; as with regular numbers you can't take the logarithm of zero so you have to test and set the result to 0 if either argument is 0.

An algebraic code like Reed-Solomon (used on the Compact Disc, among many other things) can be implemented over a Galois field of any size, but GF(256) (8-bit field elements) is by far the most popular because it's a natural fit to most computers. So I'm doing lots of very simple operations on 8-bit values, and the per-operation C++ overhead is substantial. I played with it for some time trying to minimize the amount of copying, but as I said I never got it faster than about 25% of the speed of my existing C code.
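
To make that concrete, here's a bare-bones sketch of the table method -- not my decoder, and 0x11D is just one common choice of primitive polynomial for GF(256):

Code: [Select]
#include <array>
#include <cstdint>

// Log/antilog tables for GF(256), built from the primitive polynomial
// 0x11D = x^8 + x^4 + x^3 + x^2 + 1, a common Reed-Solomon choice.
struct GF256Tables {
    std::array<std::uint8_t, 256> log{};
    std::array<std::uint8_t, 255> alog{};

    GF256Tables() {
        unsigned x = 1;
        for (unsigned i = 0; i < 255; ++i) {
            alog[i] = std::uint8_t(x);
            log[x]  = std::uint8_t(i);
            x <<= 1;                    // multiply by the generator alpha
            if (x & 0x100) x ^= 0x11D;  // reduce mod the field polynomial
        }
    }

    // "Addition" is just XOR of the polynomial representations.
    std::uint8_t add(std::uint8_t a, std::uint8_t b) const { return a ^ b; }

    // Multiply by adding logs; zero has no log and must be special-cased.
    std::uint8_t mul(std::uint8_t a, std::uint8_t b) const {
        if (a == 0 || b == 0) return 0;
        return alog[(log[a] + log[b]) % 255];
    }
};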

Maybe this is just one of those things best done in C; it's a shame because once you implement the Galois field classes (which also includes polynomial division) the C++ is so much smaller and easier to read, just like the coding theory textbooks. When you do it in C, the coding algorithm becomes lost in all the C implementation details.
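
Here's roughly what I mean, sketched with a shift-and-XOR multiply standing in for the table lookup (again illustrative, not my real code):

Code: [Select]
#include <cstdint>
#include <vector>

// Shift-and-XOR ("feedback shift register") multiply in GF(256), mod 0x11D.
static std::uint8_t gf_mul(std::uint8_t a, std::uint8_t b) {
    unsigned acc = 0, aa = a;
    for (; b; b >>= 1) {
        if (b & 1) acc ^= aa;
        aa <<= 1;
        if (aa & 0x100) aa ^= 0x11D;
    }
    return std::uint8_t(acc);
}

// With an element class, the decoder math reads like the textbook.
struct GF {
    std::uint8_t v;
    GF operator+(GF o) const { return {std::uint8_t(v ^ o.v)}; }
    GF operator*(GF o) const { return {gf_mul(v, o.v)}; }
};

// Syndrome S_j = r(alpha^j), evaluated with Horner's rule.
GF syndrome(const std::vector<GF> &r, GF alpha_j) {
    GF s{0};
    for (auto it = r.rbegin(); it != r.rend(); ++it)
        s = s * alpha_j + *it;   // the formula is visible in the code
    return s;
}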

Offline LunarOrbit

  • Administrator
  • Saturn
  • *****
  • Posts: 1046
    • ApolloHoax.net
Re: Hunchback aka inquisitivemind.
« Reply #215 on: July 15, 2012, 12:44:04 PM »
Web design/development is a nightmare without CSS and some form of server-side processing (PHP or ASP) to include common page elements (headers, footers, etc.) into pages.

The company I work for has taken on clients with existing websites that were created years ago using MS FrontPage (or possibly MS Word). There is no consistency in the fonts because the various style changes were made to individual words or paragraphs rather than to the HTML tags. And without the ability to use server-side includes, the headers and footers have to be hard-coded into each and every page. If you want to add a new menu link to the header, you have to change every single page rather than just making the change in one header file that is automatically included on all of the other pages.

I guess what I'm saying is that static HTML is fine for small websites, but it becomes a lot of work to maintain for large sites, and that wastes a lot of your time. My other website (TheSpaceRace.com) was static HTML with just a bit of PHP up until maybe 2 or 3 years ago, when I started using WordPress. I'll recommend WordPress to anyone... it frees up all of the time I used to spend fiddling with the HTML, so now I can focus more on the actual content. For example, I don't have to worry about changing the "Next Page" and "Previous Page" links on 50+ pages every time I add a new page, because WordPress handles that for me automatically.
It suddenly struck me that that tiny pea, pretty and blue, was the Earth.
I put up my thumb and shut one eye, and my thumb blotted out the planet Earth.
I didn't feel like a giant. I felt very, very small.
- Neil Armstrong (1930-2012)

Offline Glom

  • Saturn
  • ****
  • Posts: 1102
Re: Hunchback aka inquisitivemind.
« Reply #216 on: July 15, 2012, 01:21:50 PM »
I'm still reeling from the scathing attacks on Perl.  Since moving out of reservoir engineering I don't use it as much anymore, but I still use it a little for manipulating text files, e.g. collating several CSV files into a single CSV that I use in visualisation tools.

As a scripting language, I find it the easiest to just write and run of the ones I know.  It is true that the code so often looks like someone smashed their head on the keyboard, and that's before we get into its use of regular expressions.  I love regular expressions.  I've used them now in a number of languages, and they fit most organically into Perl.  I'm still not clear on the whole meaning of greedy and global, though.  Ease of using hashes is also neat.

Why no boolean though?

Offline gillianren

  • Uranus
  • ****
  • Posts: 2211
    • My Letterboxd journal
Re: Hunchback aka inquisitivemind.
« Reply #217 on: July 15, 2012, 02:34:20 PM »
If I remember correctly, I knew a guy who helped write a reference book on Perl.  I don't know quite why; several of the people in that writing group were tech types of one stripe or another, but it was a long time ago, and the only thing I remember for sure is that he submitted the glossary to our group.  It was full of things like "call by volume: summon your children by throwing encyclopedias at them."
"This sounds like a job for Bipolar Bear . . . but I just can't seem to get out of bed!"

"Conspiracy theories are an irresistible labour-saving device in the face of complexity."  --Henry Louis Gates

Offline JayUtah

  • Neptune
  • ****
  • Posts: 3787
    • Clavius
Re: Hunchback aka inquisitivemind.
« Reply #219 on: July 16, 2012, 10:48:09 AM »
Amen. The next time someone hassles me for using tables, I'm telling JayUtah they program COBOL.

Going back to what I said about graphic artists, in the Olden Days the HTML table was the only fine-grained control a web designer had over the relative positions of objects in an HTML document.  The web quickly outgrew its "dressed-up text" character and became a page-layout environment, and the HTML table was the only standardized element (as distinct from the browser-dependent nightmares) that let you arrange things with pixel-level accuracy.

Netscape Navigator became the de facto reference implementation, and other browser developers went into conniptions attempting to duplicate the results of Netscape's table layout engine.  The layout of an HTML table is non-trivial and almost doesn't have a closed form.  Try it sometime, but not without a good supply of breakable objects you can do without and some vodka.  Netscape didn't necessarily get it right, but its idiosyncrasies simply won on popularity.

And today the HTML table is still the simplest, most universal way to lay out a page with deterministic juxtapositions.  As long as the W3C insists on the linear-flow model (à la TeX and LaTeX) for HTML documents, people will (and should) continue to use it.
"Facts are stubborn things." --John Adams

Offline Glom

  • Saturn
  • ****
  • Posts: 1102
Re: Hunchback aka inquisitivemind.
« Reply #220 on: July 16, 2012, 11:30:27 AM »
The sad thing is that it's true.  When I was preparing some visualisations of a programme concept using HTML/CSS, I tried to be a good boy and not use tables to lay out all the components.  I was never able to get it quite right.

On the other hand, using tables hard-codes the layout.  Using CSS for layout makes it pluggable.

Offline JayUtah

  • Neptune
  • ****
  • Posts: 3787
    • Clavius
Re: Hunchback aka inquisitivemind.
« Reply #221 on: July 16, 2012, 03:34:57 PM »
I'm still reeling from the scathing attacks on Perl.

Well, it is what it is.

Quote
...but I still use it a little for manipulating text files, e.g. collating several CSV files into a single CSV that I use in visualisation tools.

No other scripting language is better at manipulating text.  This is Perl's strong suit.

Quote
As a scripting language, I find it is the easiest to just write and run of the ones I know.

Perl fulfills the role for Unix that REXX does in the IBM mainframe world.  And I too will write a 50-line Perl script to automate some simple task, or as an ad hoc solution.  With a little experience, everyone can keep enough Perl in their heads to quick-draw it like a gun and get past some obstacle.

The fact that Perl has clear strengths doesn't stop it from having crippling weaknesses, and from being grossly misused.  It's the weakness and misuse that we're railing against.  And part of that problem is the strong Perl evangelism community that tends to respond to clear failures in the language as if they were shortcomings in programmer skill and knowledge.  That's a pretty entrenched defense.

Quote
I love regular expressions.  I've used them now in a number of languages and they fit the most organically into Perl.

Regular expressions are the heart of text processing in any language.  Their elementary integration into Perl is one of its strengths.  The propensity to compose write-only (i.e., forever thereafter illegible) regular expressions to implement some rule is one of the weaknesses, not of the language but of how it's commonly used.

Quote
I'm still not clear of the whole meaning of greedy and global though.

Greedy means to match the longest possible string in the input sentence, for any one invocation of the matching algorithm.  This is most often what you want, and what Perl does by default.  However there are real-world cases where the most straightforward-looking regular expression doesn't do what you want.

Consider the task of extracting all the HTML tags from a sentence.  A tag is an open angle bracket "<", followed possibly by the end-tag marker "/", followed by some upper or lower case text, followed by the closing angle bracket ">".  So a viable regular expression in the classical syntax would be '</?[a-zA-Z]+>', read as "< followed by zero or one / followed by one or more alphabetic characters followed by >".  But in fact HTML tags may contain qualifiers that in turn may contain arbitrary text.  So you'd be tempted to expand the meat of the expression to '</?.+>', thinking that the engine will stop accepting the wildcard "." (i.e., match any character) when it sees the closing bracket >.

But in the marked-up sentence

We <em>want</em> to succeed

you want your expression to match just the first tag,

<em>

but under greedy rules it will match

<em>want</em>

because the > in the initial tag also matches under the '.+' element, and the algorithm detects a longer matchable string ending with the final > of the closing tag.

The greediness question is therefore one of affinity where departure from one parsing state is ambiguous.  In the world of actual machines, think of the interaction between your turn signals and hazard lights.  When your hazard lights are on, the turn signals don't work.  The engineers specifically gave greater "affinity" (or precedence, if you prefer) to the hazard lights when resolving contention for the control signal.  Similarly you can choose whether your regular expression engine will give greater affinity to the continuing case or to a succeeding case.  Greedy means the continuing case has greater affinity.

A better way to write the expression is '</?[^>]+>' which means "< followed by an optional / followed by one or more characters that aren't >, followed by >".  While better suited to the task, it gets criticism because it's not very clear.  And these expressions become even less clear when the expression gets more complicated.  What, for example, would the expression look like when you recall that > may legitimately appear inside a string literal that's an argument to a qualifier?
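
To make the difference concrete, here it is sketched in C++'s std::regex (ECMAScript grammar) rather than Perl, simply to match the other code in this thread; the greediness semantics are the same:

Code: [Select]
#include <iostream>
#include <regex>
#include <string>

int main() {
    const std::string s = "We <em>want</em> to succeed";
    std::smatch m;

    std::regex greedy("</?.+>");        // '.+' runs to the last '>' it can reach
    if (std::regex_search(s, m, greedy))
        std::cout << m.str() << "\n";   // prints: <em>want</em>

    std::regex lazy("</?.+?>");         // non-greedy: stop at the first '>'
    if (std::regex_search(s, m, lazy))
        std::cout << m.str() << "\n";   // prints: <em>

    std::regex careful("</?[^>]+>");    // '[^>]+' can never cross a '>'
    if (std::regex_search(s, m, careful))
        std::cout << m.str() << "\n";   // prints: <em>
}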

Global means that an input sentence may contain several distinct matchable strings (i.e., separated by non-matching symbols); a global match returns a list of matching substrings or lets you restart the search where the previous one left off, and a global replacement replaces all matching substrings rather than just the first.  In my example above, the sentence contains two substrings that match the simple HTML tag expression, but the desirable non-greedy match will only catch the first one.  A global match would let us deal with the set of matches, or restart the search on the rest of the sentence ("want...") so that we also catch the closing tag.  For regex-based rewrites, the "g" qualifier says "replace all matching substrings, not just the first."
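
And the global case, sketched the same way: std::sregex_iterator visits every non-overlapping match the way Perl's /g does, and std::regex_replace rewrites them all:

Code: [Select]
#include <iostream>
#include <regex>
#include <string>

int main() {
    const std::string s = "We <em>want</em> to succeed";
    std::regex tag("</?[^>]+>");

    // Global match: visits both <em> and </em>.
    for (std::sregex_iterator it(s.begin(), s.end(), tag), end; it != end; ++it)
        std::cout << it->str() << "\n";

    // Global replace: strips every tag, not just the first.
    std::cout << std::regex_replace(s, tag, "") << "\n";  // We want to succeed
}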

Quote
Ease of using hashes is also neat.

You can argue that all possible data structures can be composed using only a container and an association list, which is the theory behind Perl data structures.  However, Perl's array-flattening pretty much eliminates that.  And when you add references, composing any meaningful data structure in Perl is a nightmare of dereferences, delimiters, and context-changing coercions.  That said, the simple hash is your friend.

Quote
Why no boolean though?

Because in Perl it's a context, not a data type.  Despite all the other contexts that also have associated intrinsic data types.
"Facts are stubborn things." --John Adams

Offline JayUtah

  • Neptune
  • ****
  • Posts: 3787
    • Clavius
Re: Hunchback aka inquisitivemind.
« Reply #222 on: July 16, 2012, 03:56:25 PM »
[Forth is a]nother language that requires you to pull out your brain and re-seat it sideways.

Couldn't have said it better.  The compactness issues are not irrelevant; sometimes those are the constraints.  The AGC had to sacrifice code clarity in order to fit it into the box.

But in general, the primary purpose of a computer program in source form is to express design intent to another programmer.  Compilers and interpreters are generally lenient in what they'll accept as the specification of a program, but human eyes and brains are too easily fooled and require a clear expression.  Hence a language that forces you to think in an unnatural way about how a problem is being solved puts you at an immediate disadvantage.  Yes, stack-based languages are easy to interpret, but that's not how humans think.

Quote
For a very similar language, look at PostScript.

Indeed, and programming anything but the simplest list of function closures in PostScript is a nightmare.  I once irritated an entire office floor by programming a fractal computation in PostScript and sending it to the printer.  Printers are notoriously ineffective at number crunching, so after two hours of running we finally killed the job.

One can program in stack-based languages, but one generally shouldn't if one expects the result to be commercially successful.
"Facts are stubborn things." --John Adams

Offline JayUtah

  • Neptune
  • ****
  • Posts: 3787
    • Clavius
Re: Hunchback aka inquisitivemind.
« Reply #223 on: July 16, 2012, 06:02:03 PM »
More on Perl:
https://sites.google.com/site/steveyegge2/ancient-languages-perl

Quite a bit on the rantish side, but he raises some good points.

Quote
All languages suck. At this junction in human history, the best thing you can do as a programmer (other than perhaps write your own language) is not to get too attached to any particular one, and choose them as appropriate for the task at hand, since each language has its niche.

My point exactly.  A software engineer may rank his proficiency in various languages, but to develop a language affinity to the point of evangelism is tantamount to saying you have to build spacecraft out of concrete.

Quote
To alleviate the situation, in Perl 5, Larry [Wall] added references, which are sort of like C pointers, but more confusing, and with none of the inherent value of pointers.

A legitimate criticism against C/C++ is the difficulty of pointers and pointer expressions.  Modern languages have fallen all over themselves in the rush to avoid the low-level ugliness of actual machine addresses and underlying storage.  Yet in the C world, pointers are highly necessary, especially in C's system-programming role.  However, Perl takes everything that was bad about pointers in C, but leaves us without the ability easily to debug them.  No wonder, then, that Perl programs that make heavy use of references are now all of a sudden suffering huge heap bloat because the garbage-collector (that magical piece of runtime software that's supposed to free the programmer from the drudgery and error-prone tedium of memory management) can't figure out whether something is freeable or not.

Quote
Perl also has "contexts", which means that you can't trust a single line of Perl code that you ever read.

This is the single biggest criticism I have against Perl.  In its attempt to be a weakly-typed language, it has succeeded only in being an inscrutably and dangerously typed language.  I can't count the number of times my clients had to deal with obscure bugs caused by code in which the inferred in-context type didn't match the actual in-context type, and so the value was silently misinterpreted.
"Facts are stubborn things." --John Adams

Offline JayUtah

  • Neptune
  • ****
  • Posts: 3787
    • Clavius
Re: Hunchback aka inquisitivemind.
« Reply #224 on: July 16, 2012, 06:23:32 PM »
...the C++ is so much smaller and easier to read, just like the coding theory textbooks. When you do it in C, the coding algorithm becomes lost in all the C implementation details.

In my instruction this is what we call "transparency at the application level," or "transparent application logic."

Now if you're copying algorithms out of books, it often can be one of those cases where you work until you get it right, then you encapsulate it and never deal with it again.  Many of our computational kernels such as LS-DYNA work this way, even though they're traceable to academic papers.  And we have control-logic code that works this way.

But it's hard when you're implementing a mathematical construct, and you plan later to write new programs that implement new applications of the underlying algebra.  Debugging those programs is indeed a matter of being able to see your new formulas expressed in the code.  This is what we mean by transparency at the application level.  You want the rules of a business operation or the laws of a scientific model to be visible in the code so that they can be verified by inspection and modified with confidence.  But when an operation analogous to addition has to be represented as a function call, it loses that transparency.

Code: [Select]
vec3 tensor = anchor_pt + direc_vec;

is more comprehensible than

Code: [Select]
vec3 tensor;
vec3_copy( &tensor, (vec3 *) &anchor_pt );
vec3_add( &tensor, &direc_vec );

I programmed lots of design-engineering code in C++ using classes to represent 2D and 3D points and vectors in Euclidean or projective space.  It's especially awesome to know that an E3 point is structurally and conceptually equivalent to an E3 vector (hence can be used somewhat interchangeably), that a point or vector can be promoted in dimensionality by supplying a zero in the new dimension (i.e., a standard cast), and that straightforward conversions exist between projective and Euclidean points.  Encoding all those equivalences and conversions in your object system lets you program at the mechanical-bushing level, not the coordinate-goes-where? level.
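
A sketch of what encoding those equivalences can look like (hypothetical class names, not my production geometry library):

Code: [Select]
struct PointE2 { double x, y; };

struct PointE3 {
    double x, y, z;
    PointE3(double x_, double y_, double z_) : x(x_), y(y_), z(z_) {}
    // Promote an E2 point by supplying a zero in the new dimension.
    explicit PointE3(const PointE2 &p) : x(p.x), y(p.y), z(0.0) {}
};

// Homogeneous (projective) point, with conversions to and from Euclidean.
struct PointP3 {
    double x, y, z, w;
    explicit PointP3(const PointE3 &p) : x(p.x), y(p.y), z(p.z), w(1.0) {}
    PointE3 euclidean() const { return PointE3(x / w, y / w, z / w); } // requires w != 0
};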

But yes, efficiency.  The sad fact is that the alleged near-C performance of C++ code is typically achieved only if you eschew many of the useful features of C++.  Transparency at the application level can only happen if you accept inefficiency at the machine-code level.  While you can optimize with clever use of references and coding that avoids temporaries, there is overhead.
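
For example (with a hypothetical vec3 again), compound assignment sidesteps the temporary that the pretty operator+ form constructs; modern compilers can often elide the copies, but then you're relying on the optimizer:

Code: [Select]
struct vec3 {
    double x, y, z;
    vec3 &operator+=(const vec3 &o) { x += o.x; y += o.y; z += o.z; return *this; }
};

// operator+ builds and returns a new object.
inline vec3 operator+(vec3 a, const vec3 &b) { return a += b; }

void example(const vec3 &anchor_pt, const vec3 &direc_vec) {
    vec3 t = anchor_pt + direc_vec;  // constructs a temporary via operator+
    vec3 u = anchor_pt;              // one copy...
    u += direc_vec;                  // ...then mutate in place, no temporary
    (void)t; (void)u;
}
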
"Facts are stubborn things." --John Adams