flat assembler
Message board for the users of flat assembler.
> Main > Compilation speed
LocoDelAssembly 23 Aug 2006, 02:42
Was there too much swapping, or just too much CPU usage?
RedGhost 23 Aug 2006, 04:15
I have never had fasm take more than about a second to compile anything, but I have a very fast computer.
Perhaps the preprocessor logic takes more time than just assembling instructions. _________________ redghost.ca
ChrisLeslie 23 Aug 2006, 04:30
Quote: Was there too much swapping, or just too much CPU usage? Judging from the amount of disk activity, I think it was continually swapping to disk. I did not check CPU usage. After all, it is running on Win95 with 32 MB, so that is likely the reason. If I compile on my fast computer at work (after shooing away the kangaroos from the front door, and without the boss knowing!) it is very fast, as normal. But I think I will still continue to program at a lower level for my exercise. Chris
dead_body 23 Aug 2006, 08:07
You may download fasmpre.exe and see what is wrong in your macros, and why they are so slow.
Tomasz Grysztar 23 Aug 2006, 08:39
With extensive use of macros it's quite normal - especially with ones as heavy as win32ax (which from the beginning were the ones where features matter more than speed). However, I still use some of the macros in the fasmw sources, even though I sometimes compile it on an old P60 machine.
On the other hand, the first versions of fasm took a few minutes to compile themselves on a 386 processor - and at the time I thought that was fast!
vid 23 Aug 2006, 10:45
It is expected: those macros are "doing everything for you the smart way", and that, performed in an "interpreted scripting language", is of course very slow.
As Tomasz said, those macros provide functionality at the cost of speed. One great advantage of fasm is that you can always do things yourself; you don't have to use the higher-level constructs.
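To illustrate the point about macro cost, here is a minimal, purely hypothetical sketch (the macro names are made up, not from win32ax): every invocation of a nested macro makes the preprocessor re-expand the whole body textually, before a single instruction is assembled.

```asm
; Hypothetical example: nested macros multiply preprocessing work.
macro inner a {
        mov eax, a
        add ebx, eax
}
macro outer a, b {
        inner a          ; inner's body is re-expanded here...
        inner b          ; ...and again here, on every use of outer
}
outer 1, 2               ; the preprocessor rewrites four instructions
outer 3, 4               ; four more - the cost grows with each use site
```

With deep nesting, this text substitution compounds, which is why macro-heavy sources can slow down far more than the raw instruction count would suggest.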
LocoDelAssembly 23 Aug 2006, 12:15
Since he said there was too much disk activity, the problem here was that fasm used more memory than was physically available, and disk accesses are many times slower than RAM accesses.
vid 23 Aug 2006, 12:17
Good point - try decreasing fasm's memory setting to the minimal needed value.
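For the console versions of fasm, the memory limit can be set from the command line with the `-m` switch (the value is in kilobytes); a sketch, assuming the file names are yours:

```
fasm program.asm program.exe -m 4096
```

Capping it near the minimum the source actually needs should keep fasm from pushing a 32 MB machine into swapping. (In fasmw, the equivalent setting is in the options rather than on a command line.)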
ChrisLeslie 12 Sep 2006, 08:57
The slowness was coming from an overuse of macros, with way too much macro nesting. I have proceduralised much of the application and now the compilation speed is normal, even with the Win32 headers and Tomasz's .if macros. This is a lesson for me to avoid going overboard with macros when designing an application. The situation is not so apparent with a fast computer, but I would rather solve a problem with technique than by resorting to a faster computer!
Thanks for all the support. Chris
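The trade-off behind "proceduralising" can be sketched in a few lines of fasm (hypothetical helper, not from the actual project): a macro is re-expanded, and its bytes duplicated, at every use site, while a procedure is assembled once and each use is just a call.

```asm
; Hypothetical sketch: the same helper as a macro and as a procedure.
macro clamp_eax {            ; expanded inline at every use
        cmp eax, 255
        jbe @f
        mov eax, 255
    @@:
}

clamp_eax_proc:              ; assembled once, called from anywhere
        cmp eax, 255
        jbe .done
        mov eax, 255
    .done:
        ret

        clamp_eax            ; preprocessor work + code bytes per use
        call clamp_eax_proc  ; no preprocessor work per use
```

The call costs a few cycles at run time, but at build time the preprocessor only ever sees the procedure's body once, which is exactly where the compilation-speed win comes from.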
rugxulo 12 Sep 2006, 15:25
Well, you are to be congratulated, since most people (myself included) often just "do what works" even if there is a faster way. We need to break this habit and keep pushing ourselves to improve, especially with tasks that we perform often (e.g., compression, re-assembling, etc.). But often there are no easy solutions.
For example, ZOO 2.01 (zoo a mytest *.txt) is faster than ZOO 2.10 doing the same thing. Why? I dunno, but if speed is important, use 2.01. Also, four different HA variants I tried (HA C--, HA NT, HA DOS, and LGHA) all performed differently. (See here for Matt Mahoney's compression benchmark.) I know of several timing utils (the first is Win32-based, the others are DOS): Timer, UPCT, RunTime, and DJGPP's Redir.
P.S. ChrisLeslie, if you still have the old macro-bloated version of your project, try it under pure DOS with the DOS version of fasm, and see if the time is still 30+ seconds.
f0dder 12 Sep 2006, 22:12
Quote:
Better compression ratio in ZOO 2.10?
rugxulo 12 Sep 2006, 23:58
f0dder, there does seem to be a (very, very) small improvement (< 2k, in my wimpy test) in the default compression method in 2.10 vs. the older 2.01, but I hardly think this warrants more than double the compression time.
<EDIT> I'm not talking about the new high-compression method introduced, but the old standard compression method. </EDIT>
Copyright © 1999-2024, Tomasz Grysztar. Also on GitHub, YouTube.
Website powered by rwasa.