flat assembler
Message board for the users of flat assembler.

flat assembler > Heap > HLLs suck!

f0dder



Joined: 19 Feb 2004
Posts: 3170
Location: Denmark
kohlrak wrote:
f0dder wrote:
Well, once again, I'm not a fan of sloppy software.

What I'm arguing is that there's a big difference between being sub-optimal and outright sloppy.

I'm trying to bait you into discerning what that is. Smile

Smile - it's not something that can be easily defined and set in stone, it really depends on the application type, the target audience, et cetera. It's something I usually judge on a per-app basis.

kohlrak wrote:
Unfortunately, most people rely too much on other things (the compiler and such) to handle that for them, even though those things won't do it for them. They see it run fine enough on their dev machine (which, in many cases at bigger companies, tends to be much better than their clients' PCs) with no other programs running alongside it, then they release it as "faster and better" than the previous version.
Dunno if it's "most people", but we definitely do see it happen too much.

Pidgin bloated and buggy? Haven't run into any bugs myself (I only use it for MSN, other parts could be buggy), and as for bloated... after removing locales I don't need (I've kept da, en_GB, uk even though I probably only need either en_GB or uk) the total install size is down to 14 megs. I could figure out which DLLs are responsible for protocols I don't need and shave off a few megs extra... Yes, it could be done smaller, and it seems they link statically to libc which adds in a bit more weight, and apart from pidgin itself there's the GTK runtime files (feels like somewhat of a hit on Windows, but on *u*x you'd find a lot more programs sharing those runtime files).

All in all, it could've been worse. Haven't tried pidgin on low-spec hardware though, so dunno how bad it is there... I did notice that adding aspell plugin introduced a slight amount of lag even on my workstation though, so I'm considering turning that off again.

kohlrak wrote:
Depends on how big the program is and how deep the calls go, really. If it goes deep enough, one could significantly waste time, especially in a loop (make a bunch of function calls at a depth of at least 10 in a loop, all because the coder is coding for their screen size).
Too much of a thing is obviously never good. But I very much do prefer splitting larger chunks of code in smaller functions, and I'm not afraid of nesting a bit. The compiler will do inlining, and if it doesn't I'll discover it when profiling. If the profiling doesn't find hot spots, even non-inlined 20-level-deep nesting isn't going to matter - something that deep probably would be an example of bad code design, though Wink

I find that (proper) splitting into functions goes a long way towards writing easy-to-maintain code, and while there's no such thing as fully self-documenting source code, properly split and named functions can make a lot of comments superfluous.
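
A minimal sketch of that point in C (hypothetical names; typical GCC/MSVC behaviour at -O2 assumed): small static helpers like these are normally inlined, so the split costs nothing at run time while keeping the source readable.
Code:
#include <stddef.h>

/* Hypothetical helpers: tiny, static, single-purpose. At -O2 a typical
   compiler inlines both into sum_of_squares, so the call overhead the
   nesting seems to imply usually never reaches the binary. */
static int square(int x)          { return x * x; }
static int add_to(int acc, int x) { return acc + x; }

int sum_of_squares(const int *v, size_t n)
{
    int acc = 0;
    for (size_t i = 0; i < n; i++)
        acc = add_to(acc, square(v[i]));
    return acc;
}

If profiling ever shows those calls surviving and hurting, that one hot spot can still be flattened by hand.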

kohlrak wrote:
Also, I'm more fond of dynamic linking than static linking (I don't want to spend hours waiting for the install because they're afraid to use the DLL files provided with Windows).
Once again, depends Smile - I pretty much do prefer linking against runtimes dynamically, but there can be redistributable issues that complicate matters.
Post 04 Dec 2009, 07:31
kohlrak



Joined: 21 Jul 2006
Posts: 1421
Location: Uncle Sam's Pad
f0dder wrote:
kohlrak wrote:
f0dder wrote:
Well, once again, I'm not a fan of sloppy software.

What I'm arguing is that there's a big difference between being sub-optimal and outright sloppy.

I'm trying to bait you into discerning what that is. Smile

Smile - it's not something that can be easily defined and set in stone, it really depends on the application type, the target audience, et cetera. It's something I usually judge on a per-app basis.


I've got nothing over you, except a few exaggerated examples, but fair enough, it's really not that important.

Quote:
kohlrak wrote:
Unfortunately, most people rely too much on other things (the compiler and such) to handle that for them, even though those things won't do it for them. They see it run fine enough on their dev machine (which, in many cases at bigger companies, tends to be much better than their clients' PCs) with no other programs running alongside it, then they release it as "faster and better" than the previous version.
Dunno if it's "most people", but we definitely do see it happen too much.


Indeed... Crying or Very sad

Quote:
Pidgin bloated and buggy? Haven't run into any bugs myself (I only use it for MSN, other parts could be buggy), and as for bloated... after removing locales I don't need (I've kept da, en_GB, uk even though I probably only need either en_GB or uk) the total install size is down to 14 megs. I could figure out which DLLs are responsible for protocols I don't need and shave off a few megs extra... Yes, it could be done smaller, and it seems they link statically to libc which adds in a bit more weight, and apart from pidgin itself there's the GTK runtime files (feels like somewhat of a hit on Windows, but on *u*x you'd find a lot more programs sharing those runtime files).


Lots of random crashes. Even MySpace Messenger was a 1-meg beta some time ago, and from the way it was distributed I doubt it was a release build rather than a debug build. And that thing was reasonably slow and buggy. Pidgin itself isn't necessarily slow, but I really don't think it needs to be that big... Not that it's a major problem, but with all the random crashes, I just have this gut feeling that I'd flip if I saw the source, because to have this much trouble finding bugs, it must be some kind of mess. Rolling Eyes

Quote:
All in all, it could've been worse. Haven't tried pidgin on low-spec hardware though, so dunno how bad it is there... I did notice that adding aspell plugin introduced a slight amount of lag even on my workstation though, so I'm considering turning that off again.


I can't exactly drop that one. I really need that plugin out of all of them. Laughing I was happy to find it in Firefox as well when I got Ubuntu.

Quote:
kohlrak wrote:
Depends on how big the program is and how deep the calls go, really. If it goes deep enough, one could significantly waste time, especially in a loop (make a bunch of function calls at at least a depth of 10 in a loop, all because the coder is coding for their screen size).
Too much of a thing is obviously never good. But I very much do prefer splitting larger chunks of code in smaller functions, and I'm not afraid of nesting a bit. The compiler will do inlining, and if it doesn't I'll discover it when profiling. If the profiling doesn't find hot spots, even non-inlined 20-level-deep nesting isn't going to matter - something that deep probably would be an example of bad code design, though Wink


Depends on your compiler. I was messing around with IDA Pro and found that this isn't the case with GCC (that was the same time I realized how sloppy GCC really is [not horribly inefficient compared to what it COULD be, but still sloppy]). Anyway, as for bad design, that's unfortunately the most common case I come across with HLL code to begin with, but that's not the fault of the language... Wink

Quote:
I find that (proper) splitting into functions goes a long way towards writing easy-to-maintain code, and while there's no such thing as fully self-documenting source code, properly split and named functions can make a lot of comments superfluous.


Indeed true.

Quote:
kohlrak wrote:
Also, I'm more fond of dynamic linking than static linking (I don't want to spend hours waiting for the install because they're afraid to use the DLL files provided with Windows).
Once again, depends Smile - I pretty much do prefer linking against runtimes dynamically, but there can be redistributable issues that complicate matters.


Such as copyright issues and the like. If libraries are properly maintained (see GTK for a bad example), then aside from the copyright issues, most of your issues go away. This also makes updating a particular part of an application much faster over the internet. If only package managers could learn from Mercurial (a main package, then update packages that would only contain the updated binaries), things would be even better.
Post 04 Dec 2009, 08:43
DOS386



Joined: 08 Dec 2006
Posts: 1904
Azu wrote:

Re: HLLs suck! Wtf is this? Found it in d3d9.dll

Code:
add    esp, 0Ch   ; dead code: esp is overwritten by the next instruction anyway
mov    esp, ebp

Evil or Very Mad


Really ???

Code:
5640  55                push ebp
5641  89E5              mov  ebp,esp
    


Guess where I found it Laughing

2 hints:

- The code DOESN'T WORK Laughing

- The "author" is more likely an Optimizing Compiler than Assembly Artist Laughing

EDIT: reduced code, see links below


Last edited by DOS386 on 27 Dec 2009, 01:36; edited 1 time in total
Post 11 Dec 2009, 01:42
LocoDelAssembly
Your code has a bug


Joined: 06 May 2005
Posts: 4633
Location: Argentina
Quote:

5873 F7DB neg ebx
5875 83D600 adc esi,0 ; !!!

AMD's NEG documentation wrote:
If the value is 0, the instruction clears the CF flag to 0; otherwise, it sets CF to 1.


Quote:

Guess where I found it

Some badly configured Delphi compiler?
Post 11 Dec 2009, 02:09
f0dder



Joined: 19 Feb 2004
Posts: 3170
Location: Denmark
Isn't "add reg, 1" vs. "inc reg" an optimization on P4?
Post 11 Dec 2009, 08:31
LocoDelAssembly
Your code has a bug


Joined: 06 May 2005
Posts: 4633
Location: Argentina
Yes, although I think it is misapplied in all cases, as there doesn't seem to be any carry flag dependency (for either reading or writing).
Post 11 Dec 2009, 14:34
DOS386



Joined: 08 Dec 2006
Posts: 1904
LocoDelAssembly wrote:
Quote:
Guess where I found it
Some badly set Delphi compiler?


NO. GCC (MinGW) and MPLAYER Shocked

http://pastebin.com/m31384347
http://pastebin.com/m2b5d7a67
http://www.bttr-software.de/forum/board_entry.php?id=7504

Code:
  o16 nop    ; shock
  rep ret    ; shock
    
Post 25 Dec 2009, 12:54
f0dder



Joined: 19 Feb 2004
Posts: 3170
Location: Denmark
I wonder which GCC version is used (I don't expect MinGW is still stuck with GCC 3.x?) and which compiler flags... obviously a -march >= ppro has been selected, considering the use of CMOVxx instructions.
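
For reference, a minimal sketch (hypothetical function name; the exact output depends on the GCC version and flags) of the kind of source that gets a CMOVcc once the -march target allows it:
Code:
/* With something like:  gcc -O2 -march=i686 -S min.c
   GCC typically compiles the select below to a cmov instead of a
   compare-and-branch; with -march=i386 it has to fall back to a branch,
   since CMOV only exists from the PPro/i686 onwards. */
int min_int(int a, int b)
{
    return (a < b) ? a : b;
}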

Btw, you seem to be consistently using "DGJPP" to refer to "DJGPP" - is there a reason for that, or is it just a persistent typo?
Post 25 Dec 2009, 14:25
vid
Verbosity in development


Joined: 05 Sep 2003
Posts: 7108
Location: Slovakia
DOS386 wrote:
NO. GCC (MinGW) and MPLAYER Shocked

http://pastebin.com/m31384347
http://pastebin.com/m2b5d7a67


Did you ever hear about optimisation by aligning loops and funcs?
Post 25 Dec 2009, 17:20
kalambong



Joined: 08 Nov 2008
Posts: 165
Put it this way: I make sure the programs I write can run on PIII machines without being too horribly slow or bloated.

But then I am not an optimization freak Very Happy
Post 26 Dec 2009, 02:55
DustWolf



Joined: 26 Jan 2006
Posts: 373
Location: Ljubljana, Slovenia
f0dder wrote:
Where to draw the line, for me, depends on the application. I don't like the fact that generic things like word processors have become as sloppy as they have - both OpenOffice and the later versions of MS Office are beasts. But a piece of specialized business app being a bit on the heavy end? Doesn't matter (apart from personal pride) if it won't work comfortably on a PIII, if it's never going to run on anything but dualcore machines.


Typically I would read a complete thread before responding but... YEARGH! That one just ticks me off!! Shocked

I am a sysadmin and our company had to downgrade from MS Office 2007 to MS Office 2003, because it could no longer process the volume of data our laboratories work with daily in decent time even on our newest, fastest machines (yeah, our offices run Phenom IIs, due to me). And this was not something only a zealot would notice, the users themselves came to me telling me that Office 2007 just won't cut it.

Realistic enough for you?!

Yeah, sure, MS can afford it. No problem. If you can afford to fuck with the user, why care?

#$&"#%!
Post 26 Dec 2009, 17:17
Borsuc



Joined: 29 Dec 2005
Posts: 2468
Location: Bucharest, Romania
Unoptimized programs using the CPU too much increase entropy. Razz
Post 26 Dec 2009, 18:01
DustWolf



Joined: 26 Jan 2006
Posts: 373
Location: Ljubljana, Slovenia
I think we've all been dancing around the main issue -- which is what Azu pointed out and wanted to discuss initially. It's not about that one instruction. It's about the state of HLL affairs and the code they produce. On so many levels.


* On a very low level, the C compiler is essentially a Duplo-block builder. All of the low-level fans here have been bashing that fact, and all of the high-level fans here haven't actually seen a problem with it. The C compiler takes pre-written blocks of code and assembles them per the programmer's instructions.

If the programmer is lucky, some universal optimizations are applied to the interfaces between the blocks and if the programmer is indeed very lucky, those optimizations don't break the program (see gcc manual regarding recommended -O3 usage). See, despite being capable of some optimization, the compiler doesn't know the big picture (and that, my friends, is why a string-based switch statement in C can result in a 50 Meg hello-world program in MSVC, consisting mostly of padding).

And quite honestly, despite you all focusing on the optimization that is done, a lot of this optimization is in fact never done. And this results in a lot of facepalm-invoking inter-block interfaces, one of which Azu has found.

And in the long run, this is why some of us apparently more unreasonable programmers come to conclude that making a real-life nuclear power plant out of Duplo blocks, easier as it may be, is a very, very bad idea.


* On a slightly higher level there is a completely different problem. As I program in HLLs (ever since I learned FASM), I find myself feeling like I'm programming workarounds to fit my intentions into the compiler's language, instead of just making what I want. My camp kinda realizes that most HLL programmers don't realize this, because they have never experienced the freedom of being able to just make the computer do what you want. But the fact that these HLL programmers actively argue that programming everything in their twisted little abstract workaroundy way is BETTER is just Stupid to us, and ultimately self-destructive. Because what ultimately matters is the end result and not something else.

And the point of the little rant in the above paragraph is that ASM ultimately lets you express your intentions more directly. And this automatically means better-performing code. Forget what the optimizer can do for you: if it doesn't understand what you're trying to do (and it does not), it can't make your code better than you can. It's not about how the code does what it does, it's about what it does.

Not only can some things just not be done directly in HLLs, but when a programmer codes in ASM and tries to optimize in ASM, (s)he understands much better the problems the computer faces when executing the instructions, and can thus rework the algorithm while optimizing in such a way that it's much easier for the computer and at the same time still does what (s)he wanted it to do.

If any of you guys have or had an education in the area of informatics, this bit was actually explained to you in theory and you should be keenly aware of it: when a program is dumbed down into an HLL, some information is lost, and it cannot be magically re-created by the optimizer no matter how good it is.


* And on a much higher, long-term level, it has been proven again and again that HLLs promote bad coding practices. In most cases it is not sloppy programmers that should be blamed for poorly-performing code, it's the compiler, which is designed in such a way that it obscures important behaviour characteristics and makes them non-deterministic. Programmers who learn only these languages thus do not know when they are walking into a future nightmare, because they do not have an accurate idea of what is going on in the background, nor do they have any influence over it within their language.

And last but not least, it's about bloody time you people updated your beliefs. C might have been a good idea 25 years ago. ASM might have been a bad idea 25 years ago. Right now, C is the material representation of the concept of "bleh"; it is neither easy to understand nor intuitive, as an HLL should be, nor does it give the programmer control over what he is making. Right now, FASM is an excellent language suited for large projects, with a powerful macro engine that rivals C's function framework while maintaining a pure ASM soul, giving the programmer total control over everything.

Just my 2 cents.

EDIT: And oh, for the record. I use HLLs a lot. My current very interesting project includes code from both FASM and web-based XML/XSLT. Some HLLs are good for some things, but not all HLLs are good for anything.

LP,
Jure
Post 26 Dec 2009, 18:17
Borsuc



Joined: 29 Dec 2005
Posts: 2468
Location: Bucharest, Romania
I disagree about C being "bleh"... most of your rant, while true, focused more on the higher-level languages. C, being in the 'middle', supposedly has fewer problems, especially regarding sloppy code, and the programmer still has to know what he is coding; sometimes even assembly knowledge helps when programming in C.

To me, saying C is 'bleh' seems like an excuse to dismiss it, since it doesn't have many of the pitfalls of the other HLLs in comparison with asm. Asm is more low-level, of course, but C is not a high-level language; it's sort of middle-level. There's even C--.

_________________
Previously known as The_Grey_Beast
Post 26 Dec 2009, 19:12
f0dder



Joined: 19 Feb 2004
Posts: 3170
Location: Denmark
DustWolf wrote:
f0dder wrote:
Where to draw the line, for me, depends on the application. I don't like the fact that generic things like word processors have become as sloppy as they have - both OpenOffice and the later versions of MS Office are beasts. But a piece of specialized business app being a bit on the heavy end? Doesn't matter (apart from personal pride) if it won't work comfortably on a PIII, if it's never going to run on anything but dualcore machines.


Typically I would read a complete thread before responding but... YEARGH! That one just ticks me off!! Shocked

I am a sysadmin and our company had to downgrade from MS Office 2007 to MS Office 2003, because it could no longer process the volume of data our laboratories work with daily in decent time even on our newest, fastest machines (yeah, our offices run Phenom IIs, due to me). And this was not something only a zealot would notice, the users themselves came to me telling me that Office 2007 just won't cut it.

Realistic enough for you?!

Yeah, sure, MS can afford it. No problem. If you can afford to fuck with the user, why care?

#$&"#%!
Would you consider your usage scenario something that standard users of Office do, though?

DustWolf wrote:
If the programmer is lucky, some universal optimizations are applied to the interfaces between the blocks and if the programmer is indeed very lucky, those optimizations don't break the program (see gcc manual regarding recommended -O3 usage)
If the optimizer breaks your code, then either:
1) you've found a compiler bug that should be reported. Shame on the compiler writers.
2) you're doing (too) tricky stuff and are breaking language rules or depending on non-standard behavior. Shame on you.

In either case, obviously your test suite will pick up the error before the bug ends up in the wild. You do write test suites for your real-world software, don't you? Wink
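
A minimal sketch of case 2 (hypothetical function, typical GCC behaviour assumed) - code like this often "works" at -O0 and changes behaviour at -O2, and the bug is in the code, not in the optimizer:
Code:
/* Writing through a float* that actually points at an int violates C's
   aliasing rules. Under strict aliasing (on by default at -O2) the
   compiler may assume the store through f cannot change *i and keep the
   old value, so -O0 and -O2 can give different results. */
int alias_trap(int *i, float *f)
{
    *i = 1;
    *f = 2.0f;      /* undefined if f really aliases *i */
    return *i;      /* an optimizing compiler may fold this to 1 */
}
/* e.g. alias_trap(&x, (float *)&x): the bug is in the caller, not in GCC;
   -fno-strict-aliasing or a memcpy-based pun makes it well-defined. */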

DustWolf wrote:
See, despite being capable of some optimization, the compiler doesn't know the big picture (and that, my friends, is why a string-based switch statement in C can result in a 50 Meg hello-world program in MSVC, consisting mostly of padding).
String-based switch statement? Care to elaborate?

_________________
carpe noctem
Post 26 Dec 2009, 21:04
DustWolf



Joined: 26 Jan 2006
Posts: 373
Location: Ljubljana, Slovenia
Borsuc wrote:
To me, saying C is 'bleh' seems like an excuse to dismiss it, since it doesn't have many of the pitfalls of the other HLLs in comparison with asm. Asm is more low-level, of course, but C is not a high-level language; it's sort of middle-level. There's even C--.


Please stop with the neologisms. The Wikipedia people might not agree with me because they are too young to tell, but C is a classic high-level language, the definition of one being that the language is an abstraction of the code underneath. C is an abstraction powerhouse, and C++ is a degree further down that line.

My decision to describe C as "bleh", though, is based solely on the contrast between the impression of physics students (who had to do real-life math in the programming language and were forced to endure its striking un-intuitiveness while appreciating its ability to get work done more reliably) and the perspective experienced C programmers like to take of the language (they claim the opposite). Bleh is what I call a good-for-nothing fossil of the past.

Although Delphi is worse, admittedly.

EDIT: Though yeah, my "rant" (which is actually just inflicted reality) covers many other HLLs as well and applies much more strongly to many proprietary HLLs.

LP,
Jure


Last edited by DustWolf on 27 Dec 2009, 02:15; edited 1 time in total
Post 27 Dec 2009, 01:38
DOS386



Joined: 08 Dec 2006
Posts: 1904
vid wrote:
Did you ever hear about optimisation by aligning loops and funcs?


Did you ever look at the FASM source? I can't find such "optimisations" inside Idea
Post 27 Dec 2009, 01:42
DustWolf



Joined: 26 Jan 2006
Posts: 373
Location: Ljubljana, Slovenia
f0dder wrote:
Would you consider your usage scenario something that standard users of Office do, though?


I would call it a relevant real-life example. I have no idea what other users of Office tend to use it for, although it is perfectly possible they don't use it for stuff it obviously doesn't do well, such as getting real work done, for example.

Quote:
DustWolf wrote:
If the programmer is lucky, some universal optimizations are applied to the interfaces between the blocks and if the programmer is indeed very lucky, those optimizations don't break the program (see gcc manual regarding recommended -O3 usage)
If the optimizer breaks your code, then either:
1) you've found a compiler bug that should be reported. Shame on the compiler writers.
2) you're doing (too) tricky stuff and are breaking language rules or depending on non-standard behavior. Shame on you.

In either case, obviously your test suite will pick up the error before the bug ends up in the wild. You do write test suites for your real-world software, don't you? Wink


My bracketed statements are not provided solely for the entertainment of those who agree with me. DO read the manual and its recommendation regarding -O3 optimization. It tends to break working code (then again, I read that manual a few years back... the note might be gone by now. Usually the recommendation not to use -O3 is included in the READMEs of software sources).

Quote:
String-based switch statement? Care to elaborate?


Yep. It may no longer be relevant, as I only made that particular mistake once. Thanks to MSVC's superior optimization skills, a switch statement is compiled into an array of jumps, where the index into this list corresponds to the input value (an ingenious way to break a perfectly good optimization method for switch statements, if you ask me, but then again this is Microsoft). This means that if the switch cases are 1, 2, 3, then the jumps in the table are sequential... jmp case1, jmp case2, jmp case3. However, of course, if you use strings, the strings are interpreted by the compiler as what they are: binary sequences of varying length that can be interpreted as numbers. The distance between an "antelope" and a "zebra" is a very, very big number, resulting in a very, very long jump table... resulting in a 50 MB hello-world file (well, actually the size depends on the strings used, but you get the picture).

Try it.
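
For what it's worth, a minimal sketch of the jump-table lowering being described (illustrative names; the exact code generated depends on the compiler and on how dense the case values are):
Code:
/* Dense case values (1..7) let the compiler index a small table of jump
   targets. Widely spread values would need a huge, mostly-padding table,
   so compilers normally fall back to compare/branch chains or a binary
   search instead of emitting one. */
const char *day_name(int day)
{
    switch (day) {
    case 1:  return "monday";
    case 2:  return "tuesday";
    case 3:  return "wednesday";
    case 4:  return "thursday";
    case 5:  return "friday";
    case 6:  return "saturday";
    case 7:  return "sunday";
    default: return "unknown";
    }
}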

EDIT: To provide some background info on this... as I was working on my kernel, my friends and I have been reverse-engineering a lot of Windows code (sorry, we can't get VMWare to comply any other way; they don't follow standards), and we have found tons of the nonsense that Azu has discovered, all over the place. Some of these un-optimizations are far more profound than the single-instruction quirk Azu has pointed out. We weren't writing them down so that I could show them to you, but it was enough to let us know just what serious issues you can get away with in a very large project.

LP,
Jure
Post 27 Dec 2009, 01:48
revolution
When all else fails, read the source


Joined: 24 Aug 2004
Posts: 16702
Location: In your JS exploiting you and your system
But all of that is not the fault of the HLL, it is the fault of the compilers. Different compilers give different results. You can't blame the HLL for problems with the compilers (or the user's settings for them).
Post 27 Dec 2009, 02:50
DustWolf



Joined: 26 Jan 2006
Posts: 373
Location: Ljubljana, Slovenia
revolution wrote:
But all of that is not the fault of the HLL, it is the fault of the compilers. Different compilers give different results. You can't blame the HLL for problems with the compilers (or the user's settings for them).


The HLL is obviously designed in such a way that it enables bugs like those. Saying that a list of real-life bugs does not apply to the language because a flawless-world theory does not imply their existence is bizarre, to say the least.

You know, come to think of it, dictatorship is good for people! Not in reality, but you can't really blame the errors of all of these imperfect world leaders on the system of dictatorship. They obviously did it wrong.

My, why do I even consider that HLLs may be to blame. Rolling Eyes

LP,
Jure
Post 27 Dec 2009, 03:04

