Post by Baron von Lotsov on Mar 2, 2023 12:56:40 GMT
It's what I have suspected for a very long time.
First of all, to be clear: a job is something that achieves a beneficial result, and the rules are how you do the job. Ostensibly the rules are there so that you do the job the best way. The job can be anything - the rules are set out in advance and are meant to cope with any job within their scope. I've got an example from programming which clearly illustrates the trouble with rules. In the example we have a set of programming rules taught to students on how to program "the correct way" and produce what they call "clean code".
These rules are things like: avoid branching with if statements, because the CPU can only look ahead until it gets to a branch; make each function do only one thing; don't repeat code; and things like that. In industry this is often called "best practice".
In many jobs it is difficult to assess relative performance because other factors come into it, such as which human did the work. With computers, though, we can get an objective measure of the efficiency of the job by measuring how long the same computer takes to do it with each method. Just to be even more devious, we take the very example of clean code given to students to follow. We do it the example way, and then we do it again ignoring these rules. The results speak for themselves, and they are absolutely shocking. Indeed, anyone following these rules is working as a complete fuckwit. The first breaking of the rules gives a 2x speed-up. Add one bit more complexity to the example clean code and try again - that is, apply the same methodology to a slightly more complicated task - and breaking the rules gives a 10x speed-up. If this were an industry that could be sped up 10x, the product could be produced for a tenth of the price, because time is money.
Here's the scary video that shows you.
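To give a flavour of the kind of comparison involved, here is a rough sketch of my own (not the exact code from the video): the "clean" version with a class hierarchy and virtual calls, and a rule-breaking version with a flat struct and a switch.

#include <cstdint>
#include <memory>
#include <vector>

// --- The "clean code" way: a hierarchy and virtual dispatch ---
struct Shape {
    virtual ~Shape() = default;
    virtual float Area() const = 0;
};

struct Square : Shape {
    float side;
    explicit Square(float s) : side(s) {}
    float Area() const override { return side * side; }
};

struct Circle : Shape {
    float radius;
    explicit Circle(float r) : radius(r) {}
    float Area() const override { return 3.14159265f * radius * radius; }
};

float TotalAreaClean(const std::vector<std::unique_ptr<Shape>>& shapes) {
    float total = 0.0f;
    for (const auto& s : shapes)
        total += s->Area();   // one virtual call per shape
    return total;
}

// --- Breaking the rules: one flat struct, one switch, no virtual dispatch ---
enum class ShapeKind : std::uint32_t { Square, Circle };

struct FlatShape {
    ShapeKind kind;
    float size;   // side for squares, radius for circles
};

float AreaOf(const FlatShape& s) {
    switch (s.kind) {
        case ShapeKind::Square: return s.size * s.size;
        case ShapeKind::Circle: return 3.14159265f * s.size * s.size;
    }
    return 0.0f;
}

float TotalAreaFlat(const std::vector<FlatShape>& shapes) {
    float total = 0.0f;
    for (const auto& s : shapes)
        total += AreaOf(s);   // direct call, easy for the compiler to inline
    return total;
}

Time both totals over the same big array of shapes on the same machine and that is the kind of benchmark that produces the gap I'm talking about.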
Post by Orac on Mar 2, 2023 15:12:51 GMT
This doesn't surprise me much
Things like object opaqueness and polymorphism probably have a huge complexity overhead at the 'assembler' level (i.e. the number of instructions performed)
It doesn't shock me that the overhead is ~ ten times
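A toy example of my own (actual compiler output will of course vary):

struct Base {
    virtual ~Base() = default;
    virtual int Get() const { return 1; }
};

struct Derived final : Base {
    int Get() const override { return 2; }
};

int ViaVirtual(const Base& b) {
    // Typically compiles to: load the vtable pointer, load the function
    // pointer, make an indirect call - and the callee usually can't be inlined.
    return b.Get() + 1;
}

int Direct(const Derived& d) {
    // Derived is final, so the compiler knows the exact function; this
    // usually inlines down to a couple of instructions with no call at all.
    return d.Get() + 1;
}

Multiply that per-call difference across millions of tiny calls and a factor of ten stops looking strange.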
Post by Baron von Lotsov on Mar 3, 2023 21:43:14 GMT
Well, according to the inventor of C++, polymorphism can be done without any indirection, or anything that amounts to additional CPU cycles, because it is all figured out at compile time. That's the theory, but we can see the reality is not that, for all sorts of complicated reasons which in theory shouldn't exist. We're probably talking about Microsoft, known for its hideously inefficient software, and most likely that applies to the tools as well.
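For what it's worth, here is a sketch (names are mine) of what "figured out at compile time" can look like in C++ - templates give you polymorphism with no vtable and no indirect call:

#include <vector>

struct Square { float side;   float Area() const { return side * side; } };
struct Circle { float radius; float Area() const { return 3.14159265f * radius * radius; } };

// One generic function works for any type with an Area() member. The compiler
// stamps out a separate, fully concrete version per type, so every call is
// direct and can be inlined - no vtable, no indirection.
template <typename ShapeT>
float TotalArea(const std::vector<ShapeT>& shapes) {
    float total = 0.0f;
    for (const auto& s : shapes)
        total += s.Area();
    return total;
}

The catch is that this only works when the concrete types are known at compile time. The moment you want one container of mixed shapes decided at runtime, you are back to virtual dispatch (or a switch), and the indirection comes back.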
But I'm not blaming that individual company so much as the system of rules taught to all programmers in the schools that teach it. It's a specific example of a wider problem: rules and procedures are far less efficient than thinking on the job in the context of the actual problem to be solved, rather than the textbook version. It is why British industry is so crap. In industry's case it is ISO 9000 and its successors.
Post by Orac on Mar 4, 2023 2:50:47 GMT
I think the problem is that the compiler has to take a generalised approach that will work in every case - and in doing this, you end up with gubbins.
It depends on your priority - for a lot of code, readability, extensibility and ease of fault-finding are a lot more important than performance. Clearly 10x is quite a hit though lol. However, it can seem a worthwhile compromise when people are faced with a spaghetti monster whose performance is effectively zero because nobody can (or dares to try to) fix anything.
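Even the per-object cost of that generalised machinery is easy to check - a little example of mine (figures assume a typical 64-bit compiler):

#include <cstdio>

struct Plain      { float x; };                          // just the data: 4 bytes
struct WithVtable { float x; virtual void Touch() {} };  // data plus a hidden vtable pointer

int main() {
    // Typically prints 4 and 16: the virtual function adds an 8-byte vptr
    // to every object, plus padding for alignment.
    std::printf("sizeof(Plain)=%zu sizeof(WithVtable)=%zu\n",
                sizeof(Plain), sizeof(WithVtable));
    return 0;
}

That hidden pointer in every object also means fewer objects per cache line, which feeds into the kind of slowdown being measured.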
Post by Baron von Lotsov on Mar 5, 2023 21:52:19 GMT
Right, he did mention the creation of some virtual tables in memory. Can you imagine it doing that for a function that is one line of code? Originally the idea was that the compiler would see what is going on and produce exactly the same machine code as if all the lines were in one function. I'm writing a compiler myself and this is what I do: if you call a function in my code, it gets linked in just the same way as if you hadn't.
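As a sketch of what I mean (my own example, assuming optimisation is turned on):

// A plain one-line function: the optimiser normally folds it into the caller,
// producing the same machine code as if the line had been written there.
inline float Half(float x) { return x * 0.5f; }

float AverageDirect(float a, float b) {
    return Half(a + b);   // usually identical to: return (a + b) * 0.5f;
}

// The same one-liner behind a virtual table: the compiler generally has to
// keep the vtable lookup and an indirect call, because any subclass could
// override Half().
struct Halver {
    virtual ~Halver() = default;
    virtual float Half(float x) const { return x * 0.5f; }
};

float AverageVirtual(const Halver& h, float a, float b) {
    return h.Half(a + b);
}

The plain one-liner costs nothing once it's inlined; put it behind a virtual table and you generally pay for the lookup and the call every single time.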
The reason here is compelling. It's a system that generalises, which is what any kind of rule does. A rule does not enumerate every possible combination of reality - you would have millions of them - so instead it groups a load of things together and says "if one of these, then do this". When we run a firm in this way it becomes hideously inefficient. You get particular paths of communication: perhaps a star topology with the boss in the centre, where everyone has to ask the boss everything, like a central server; or, if the organisation is larger, a hierarchy plus a large rule book.
There is another way though, which is that the person doing the job takes the goals and works out for themselves what to do in each situation. That is a distributed intelligence architecture. Corporations in the West, though, tend to go for the master rule book approach. It's like the compiler that has to handle everything. If they can't see the flaw in software, where we can run benchmarks and measure it objectively, they sure won't have it figured out in corporate management.
A 10x hit is only the start. Let's say the compiler is 10x inefficient and your code is 10x inefficient, and there are probably a few other layers like that too. The factors multiply - 10x times 10x is already 100x - so you've got a geometric progression going.