flat assembler
Message board for the users of flat assembler.
Index > Compiler Internals > db 0. A bug or not a bug?
LocoDelAssembly 28 Feb 2012, 16:21
Sorry for the off-topic, but why is my macro expanded correctly? I forgot to use forward. :?
Anyway, corrected and simplified:
Code:
macro def [name, size]
{
 forward
 local n
 rept 1 s:(size)-1 \{ n equ s \} ; to avoid having the assembler calculate (size - 1) twice each time
 macro signExtend.#name v.out, v.in \{ v.out = v.in or (0 - v.in shr n) shl n \}
}

def byte, 8,\
    word, 16,\
    dword, 32,\
    qword, 64

purge def
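The arithmetic inside that macro (with n = size - 1, the expression v.out = v.in or (0 - v.in shr n) shl n) can be cross-checked outside fasm. Here is a rough Python model of the same formula; Python's unbounded two's-complement integers stand in for fasm's arithmetic, they are not fasm itself:

```python
def sign_extend(value, size):
    """Model of: v.out = v.in or (0 - v.in shr n) shl n, with n = size - 1.

    'value' is assumed to be the unsigned representation of a 'size'-bit
    number; the result is the mathematically signed value.
    """
    n = size - 1
    # if the sign bit is set, (0 - (value >> n)) << n contributes an
    # infinite tail of 1-bits above bit n; otherwise it contributes nothing
    return value | ((0 - (value >> n)) << n)

# the sign bit propagates for negative inputs and is a no-op otherwise
assert sign_extend(0xFF, 8) == -1
assert sign_extend(0x80, 8) == -128
assert sign_extend(0x7F, 8) == 127
assert sign_extend(0xFFFF, 16) == -1
```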
l_inc 28 Feb 2012, 16:30
LocoDelAssembly
Because forward is the default block expansion behaviour for macros with a variable number of arguments. For example, you didn't use forward within the rept macro block either, but it is still expanded correctly.
Tomasz Grysztar 28 Feb 2012, 16:52
LocoDelAssembly wrote: Would this if-less version work and be compatible with any future extra precision?

LocoDelAssembly wrote: But still, I think something like "load signed" should be native, and also store and data definition directives should support both signed and unsigned for easy range checking (if nothing is specified then the default is used, which is the union of both, if I understand the current behavior correctly)

l_inc wrote: The purpose is to sort the original values passed to the macro.

For example, if you have "-1" and "$FFFF'FFFF'FFFF'FFFF" values, they have the same qword representation (both correct, because fasm allows composite ranges for values), so you will not be able to distinguish them when you operate on the qword outputs. But in reality one of those numbers is negative and the other is positive (and very large). Corrected fasm will respect that when operating on those values directly. But when asked to truncate a value into a qword, it is going to create the same representation for both of them, as that is what truncation does (just as it already does with "dword" etc.).
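The point about "-1" and "$FFFF'FFFF'FFFF'FFFF" sharing one qword representation is easy to verify with ordinary unbounded integers (Python here as a stand-in for the arithmetic, not for fasm's engine):

```python
QWORD_MASK = (1 << 64) - 1  # truncation to 64 bits, as dq / load qword would do

a = -1
b = 0xFFFF_FFFF_FFFF_FFFF

# as mathematical values they differ...
assert a != b
# ...but truncated to a qword they become indistinguishable
assert a & QWORD_MASK == b & QWORD_MASK == 0xFFFF_FFFF_FFFF_FFFF
```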
LocoDelAssembly 28 Feb 2012, 17:43
Tomasz, it seems I'm a bit confused about how you'll implement the extra precision. What would the following code display?
Code:
alpha = alpha or -1
if alpha < 1
 display "bit 65th set" ; This is what I'm expecting
else
 display "bit 65th not set"
end if

As for store, it's not that it needs special handling, it's just for the programmers' convenience: for instance, "db signed -1" should assemble, while "db unsigned -1" should fail, and "db signed 255" should also fail.

PS: My macro is supposed to sign-extend the value and make sure fasm detects the number as negative if the sign bit was set in the original (smaller than fasm's internal representation) size. Am I right in assuming that fasm will continue to have two's complement arithmetic and bit-wise logic?

PS2: Thanks l_inc. I found it strange initially because "macro arg, [args]" doesn't default to forward and args evaluates as the full list, but of course this is a different case, and that was the source of my confusion.
Tomasz Grysztar 28 Feb 2012, 18:04
LocoDelAssembly wrote: Tomasz, it seems I'm a bit confused about how you'll implement the extra precision. What would the following code display?

LocoDelAssembly wrote: If the code above does what I expect, why would shl cause overflow? Shouldn't it behave like multiplying by a power of 2?

LocoDelAssembly wrote: As for store, it's not that it needs special handling, it's just for the programmers' convenience: for instance, "db signed -1" should assemble, while "db unsigned -1" should fail, and "db signed 255" should also fail.

LocoDelAssembly wrote: PS: My macro is supposed to sign-extend the value and make sure fasm detects the number as negative if the sign bit was set in the original (smaller than fasm's internal representation) size. Am I right in assuming that fasm will continue to have two's complement arithmetic and bit-wise logic?

But I will probably keep the special handling of the "not", "xor" and "shr" operators when operating in a "sized" environment. So they will behave 2-adically only when no size context is specified.
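The "2-adic" behaviour of not, xor and shr mentioned here is the same behaviour that arbitrary-precision two's-complement integers exhibit, so Python's ints make a convenient model (this illustrates the arithmetic being discussed, not fasm's actual engine):

```python
# 'not' of a non-negative number has infinitely many leading 1-bits,
# i.e. the result is negative in two's complement
assert ~0 == -1
assert ~0xFFFF_FFFF_FFFF_FFFF == -(1 << 64)

# 'shr' of a negative number keeps shifting in 1-bits (arithmetic shift)
assert (-1) >> 63 == -1
assert (-8) >> 1 == -4

# 'xor' with -1 (all bits set) is the same operation as 'not'
assert (5 ^ -1) == ~5 == -6
```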
Tomasz Grysztar 28 Feb 2012, 18:11
LocoDelAssembly: oh, OK, I misread your macro. What it does should not cause problems, though it was a little strange to me when I first looked at it. There will be no overflow there.
l_inc 28 Feb 2012, 18:55
Tomasz Grysztar
Quote: This is not possible with old fasm, either. You sort only the qword representations of the computed values.

This is possible because the qword representation is the complete representation of a (zero-based) value in the current fasm implementation, which is going to be changed.

Quote: For example, if you have "-1" and "$FFFF'FFFF'FFFF'FFFF" values, they have the same qword representation

Yes. And they are also equal: in the current fasm implementation it's just the same value. Therefore sorting with load-stores would preserve the values.
Tomasz Grysztar 28 Feb 2012, 21:37
l_inc wrote: This is possible, because the qword representation is the complete representation of a (zero-based) value in the current fasm implementation, which is going to be changed.

And still, the qword representation is not the same thing as the original value, even though fasm's bug made it seem so.

l_inc wrote: Yes. And they are also equal. In current fasm implementation it's just the same value.

In fact, my goal is to have a specification that defines results in such a way that they are independent of the internal precision (only some of them being unobtainable because of overflow errors). This should also pave the way for the future 129-bit (or larger) precision in fasm 2.0 (when/if it finally comes to life).
l_inc 28 Feb 2012, 22:05
Tomasz Grysztar
My argumentation is over. Now I'm at least sure you're aware of the possible concerns. However, I also think that consistent behaviour across different arithmetic word sizes is better than the ability to easily store and load values without losing the sign bit. Nevertheless, IMHO the default arithmetic word size has to be part of the language specification, because, as you could see, it has a significant impact on how programs behave and therefore should be kept in mind by the programmer.
Tomasz Grysztar 28 Feb 2012, 22:10
I'm planning to update the documentation with a very unequivocal specification of how the values will be computed, but I will leave the possible overflow errors under the clause "when the assembler is not able to maintain the precision required to give the right result", as this may depend on what internal precision the given implementation has. But this will follow fasm's general rule that when it gives a result, it does its best to ensure the result is correct, and if it is not able to, it signals an error (overflow in this case).
l_inc 28 Feb 2012, 23:49
Tomasz Grysztar
Quote: I will leave the possible overflow errors under clause "when assembler is not able to maintain the precision required to give the right result"

This significantly reduces the programming freedom. Example (back to displaying numbers, but no more store-load): there are definitely many ways to implement displaying a decimal representation of a value, but I prefer to find the lowest power of ten above the value and then keep reducing the power, so that I get the digits directly in display order. So this is a simplified version of the previously discussed macro:
Code:
macro dispDec num*
{
 local digCount,tenPow,number
 number = num
 if number < 0
  display '-'
  number = -number
 end if
 digCount = 0
 tenPow = 1
 while tenPow <= number
  tenPow = tenPow*10
  digCount = digCount + 1
 end while
 if digCount = 0
  digCount = 1
  tenPow = 10
 end if
 repeat digCount
  tenPow = tenPow/10
  display number/tenPow + '0'
  number = number mod tenPow
 end repeat
}

The problem with this macro is that the highest value it is able to display is 10^18-1, while theoretically any value up to 1 shl 63 - 1 can be passed. So, being a laborious programmer, I want to improve the macro, which is achieved by the following little fix (one special case still remains, but I leave it aside):
Code:
macro dispDec num*
{
 local digCount,tenPow,number
 number = num
 if number < 0
  display '-'
  number = -number
 end if
 digCount = 0
 tenPow = 1
 while tenPow <= number & tenPow < 1000000000000000000 ; an additional check for higher values!
  tenPow = tenPow*10
  digCount = digCount + 1
 end while
 if digCount = 0
  digCount = 1
  tenPow = 10
 end if
 ; some more additional checks
 if tenPow > number
  tenPow = tenPow/10
 else
  digCount = digCount + 1
 end if
 repeat digCount
  display number/tenPow + '0'
  number = number mod tenPow
  tenPow = tenPow/10
 end repeat
}

Now it is able to show the correct result even for the call dispDec 0x7fffffffffffffff, which is the highest possible positive number. And in this way my macro is going to work always.

Now a fly in the ointment comes. As long as the author does not specify the internal assembler word size, he's forcing me to find the value of 1000000000000000000 empirically. Well, not a problem. But then the author changes the assembler's internal word size and states that it is my responsibility for relying on undocumented features, and therefore my fault that nothing works anymore. So don't you think it's better to make such a crucial thing as the highest possible precision a part of the specification? Wouldn't you be astonished if Intel said: "Oh, eax is a register of some size which we won't document, but if you exceed that size, we will notify you with a triple fault"? Because that's exactly your current approach: "I leave many things implementation dependent and undocumented, forcing the programmers to investigate them empirically, but at least I'm then able to change those things, making half of all written macros inoperative".
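For readers who want to follow the improved macro's control flow without an assembler at hand, here is a direct Python transcription of it (display replaced by appending characters; fasm's integer division and mod on non-negative values match Python's // and % here; this is a model of the algorithm, not of fasm):

```python
TEN_POW_CAP = 1_000_000_000_000_000_000  # 10**18, the largest power of ten below 2**63

def disp_dec(num):
    """Transcription of the improved dispDec macro: digits are emitted
    most-significant first, without ever computing a power of ten > 10**18."""
    out = ""
    number = num
    if number < 0:
        out += "-"
        number = -number
    dig_count, ten_pow = 0, 1
    while ten_pow <= number and ten_pow < TEN_POW_CAP:  # the additional check
        ten_pow *= 10
        dig_count += 1
    if dig_count == 0:          # special case: number = 0
        dig_count, ten_pow = 1, 10
    # "some more additional checks" from the macro
    if ten_pow > number:
        ten_pow //= 10
    else:
        dig_count += 1          # the capped case: one more digit than counted
    for _ in range(dig_count):
        out += chr(number // ten_pow + ord("0"))
        number %= ten_pow
        ten_pow //= 10
    return out

assert disp_dec(0) == "0"
assert disp_dec(-823543) == "-823543"
assert disp_dec(0x7FFFFFFFFFFFFFFF) == "9223372036854775807"
```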
Tomasz Grysztar 29 Feb 2012, 00:16
As an interesting side note, your first macro is able to display "0x7fffffffffffffff" when assembled with my development fasm. This:
Code:
macro dispDec num*
{
 local digCount,tenPow,number
 number = num
 if number < 0
  display '-'
  number = -number
 end if
 digCount = 0
 tenPow = 1
 while tenPow <= number
  tenPow = tenPow*10
  digCount = digCount + 1
 end while
 if digCount = 0
  digCount = 1
  tenPow = 10
 end if
 repeat digCount
  tenPow = tenPow/10
  display number/tenPow + '0'
  number = number mod tenPow
 end repeat
}

dispDec 0x7fffffffffffffff

l_inc wrote: Now a fly in the ointment comes. As long as the author does not specify the internal assembler word size, he's forcing me to find the value of 1000000000000000000 empirically. Well, not a problem. But after all the author is changing the assembler internal word size and then states, that's my responsibility for using undocumented features and therefore for the fact, that nothing works anymore.

l_inc wrote: So don't you think it's better to make such a crucial thing like the highest possible precision a part of specification? Wouldn't you be astonished if Intel said: "Oh, eax is a register of some size which we won't document, but if you exceed that size, we will notify you with a triple fault"? Because that's exactly your current approach: "I leave many things implementation dependent and undocumented forcing the programmers to empirically investigate them, but at least I'm then able to change those things making the half of all written macros inoperative".

The precision will be guaranteed to cover at least the range of all the standard use cases, currently 64-bit, because we have the DQ directive, which should work for all values that fit into a qword. If fasm allows a bit more range (like, currently, the additional range down to -(1 shl 64)) and is therefore able to correctly calculate a few more kinds of expressions, that is just an additional bonus. I don't think you really need to take such bonus ranges into consideration when writing a simple display macro.

You may throw in an assertion in case someone uses it with a non-standard value (so that it causes an error, just like when someone tries to display "0+ebx*4" with it, or some external symbol, etc.).
l_inc 29 Feb 2012, 00:39
Tomasz Grysztar
Quote: How would anything stop working? Your display macro would still be able to display all the values it was able to; it may not be able to display some new super-large numbers that the assembler is suddenly also able to calculate with, but it wasn't able to display them earlier either.

For a user of the macro it may appear differently: "Earlier I could display any possible value with this macro. Now it displays crap." And the reason for needing higher values could be, for example, moving from a 64-bit architecture to a 128-bit architecture, so that the virtual addresses that need to be displayed lie far above the used limit of 1000000000000000000. So the conclusion is: the macro worked, now it doesn't. Anyway, it's not about that simple dispDec; that's just an example that you're free to generalize. But I understand your viewpoint.

P.S. I appreciate that you continue to answer my posts, even though I may appear very stubborn.
Tomasz Grysztar 29 Feb 2012, 09:13
We looked at this from quite different perspectives. I was thinking about the programmer who wants to use some values in his code, and who generally should be able to specify any value in the allowed range when he uses it as an immediate in an instruction, or in data, etc. For such a programmer it would not be a problem if the assembler could compute an even larger range of values, just as someone writing a 16-bit program will probably never need values larger than 32 bits, and they would overflow anyway if he tried to put them anywhere in his code (he may not even know that the DQ directive exists). From that perspective it is not really important what the maximum range is.

And yours was the point of view of a writer of a macro who would like to guarantee that his macro will work correctly with whatever the user provides (as long as it is some expression generally accepted by fasm). Well, as long as fasm continues to be developed, this approach has its problems anyway: new features may get added, and the old macro may still fail to accommodate them (though with the use of "eqtype" and the new "relativeto" it may be able to detect and signal that it has met something unknown); the extension of range is just another example. Why not go for something a bit less aspiring, like "a macro that displays decimal values in signed 64-bit range", etc.?

l_inc wrote: P.S. I appreciate, that you continue to answer my posts, even though I may appear very stubborn.
l_inc 01 Mar 2012, 00:26
Tomasz Grysztar
Quote: Why not go for something a bit less aspiring, like "a macro that displays decimal values in signed 64-bit range", etc.?

That's definitely a possible approach. I just wanted you to consider how much programming freedom you restrict when you define some aspects of the compiler's behaviour to be implementation dependent.
Tomasz Grysztar 01 Mar 2012, 21:45
The new version is out, here is a little test snippet that shows how the revised engine is working:
Code:
macro signExtend.qword v.out, v.in
{
 if ~ v.in and 8000000000000000h
  v.out = v.in
 else
  v.out = v.in and 7FFFFFFFFFFFFFFFh - 8000000000000000h
 end if
}

x = qword -823543
signExtend.qword a,x

assert x>0
assert a<0

_x dq x
_a dq a

load xd qword from _x
load ad qword from _a

assert xd=ad

You can also replace my macro with the ones proposed by LocoDelAssembly, and it should work, too.
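A note on reading the else branch: if I read fasm's operator priorities correctly, and binds tighter than subtraction, so the expression parses as (v.in and 7FFFFFFFFFFFFFFFh) - 8000000000000000h, i.e. keep the low 63 bits and subtract 2^63. A sketch of the same computation with plain integers (Python as a model of the arithmetic, not of fasm):

```python
QWORD_SIGN = 0x8000_0000_0000_0000

def sign_extend_qword(v):
    # if ~ v.in and 8000000000000000h -> sign bit clear, keep the value
    if not (v & QWORD_SIGN):
        return v
    # else keep the low 63 bits and subtract 2**63:
    # (v.in and 7FFFFFFFFFFFFFFFh) - 8000000000000000h
    return (v & 0x7FFF_FFFF_FFFF_FFFF) - QWORD_SIGN

x = (-823543) & 0xFFFF_FFFF_FFFF_FFFF  # the qword representation, a large positive value
a = sign_extend_qword(x)

assert x > 0           # matches: assert x>0
assert a == -823543    # matches: assert a<0, recovering the original value
```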
l_inc 04 Mar 2012, 14:55
Tomasz Grysztar
Is this the expected behaviour?
Code:
MAX_QWORD_SIGNED equ 0x7fffffffffffffff

;Test 1. Compiles OK.
myvar1 = -MAX_QWORD_SIGNED
myvar1 = myvar1*2-2

;Test 2. Compiles OK.
myvar2 = -MAX_QWORD_SIGNED-1
myvar2 = myvar2 shl 1

;Test 3. Fails.
myvar3 = -MAX_QWORD_SIGNED-1
myvar3 = myvar3*2
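With unbounded integers, all three tests compute exactly the same value, -(1 shl 64); the difference lies only in which of fasm's evaluation paths accepts that value. Illustrative Python (a model of the mathematics, not of fasm's range checks):

```python
MAX_QWORD_SIGNED = 0x7FFF_FFFF_FFFF_FFFF

# Test 1
myvar1 = -MAX_QWORD_SIGNED
myvar1 = myvar1 * 2 - 2

# Test 2
myvar2 = -MAX_QWORD_SIGNED - 1
myvar2 = myvar2 << 1

# Test 3
myvar3 = -MAX_QWORD_SIGNED - 1
myvar3 = myvar3 * 2

# mathematically the three results are identical
assert myvar1 == myvar2 == myvar3 == -(1 << 64)
```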
revolution 04 Mar 2012, 15:02
l_inc wrote: Is this the expected behaviour?

I have already discussed this with Tomasz and he is aware of this result. Note this also:
Code:
x = not 0xffffffffffffffff
x = x * 1 ;error: value out of range.
x = x / 1 ;error: value out of range.
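revolution's snippet hits the same boundary: a 2-adic not of a full qword lands exactly at -(1 shl 64), one step below the signed 64-bit range, as plain two's-complement arithmetic shows (Python as a model; the "value out of range" behaviour itself is fasm's):

```python
x = ~0xFFFF_FFFF_FFFF_FFFF

# the 2-adic 'not' flips the infinite tail of zero bits into ones,
# giving a value just below the signed 64-bit range
assert x == -(1 << 64)
assert x < -(2 ** 63)  # outside [-2**63, 2**63 - 1], hence the range errors
```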
l_inc 04 Mar 2012, 15:20
revolution
OK. Thank you. But I'd like to note the difference between "expected" and "known": the latter does not mean it's not a bug, and it could still be corrected later. The question arises from the normal expectation that multiplication by 2 and a left shift by 1 behave the same way.
Copyright © 1999-2024, Tomasz Grysztar. Also on GitHub, YouTube.