flat assembler
Message board for the users of flat assembler.
Compiler Internals > why is 2*2.5 not the same as 5.0 in fasmg?
Tomasz Grysztar 29 Aug 2017, 16:14
There were some mistakes in the implementation of rounding. Please try version "hwq8c".
tthsqe 30 Aug 2017, 20:25
Your version hwq8c does 2*2.5=5.0 but there are still some issues.
Code:
; when b/a is exactly representable by the float data
; type, I don't think we should get an error here
a = 3
b = 6
if (float a)*((float b)/(float a)) = float b
else
    err 'oops'
end if

So I traced through your code. Correct me if I am wrong on the following points:
(1) floats are represented by a 32-bit exponent and a 32n-bit mantissa,
(2) the highest bit in the mantissa of a non-zero float is always 1,
(3) a*b is calculated by computing 64n bits but rounding on only 32n + 32 bits,
(4) a/b is calculated as a*(1/b).

I like (1) and especially (2). GMP does not do (2). I see problems with (3) and (4), with (4) being the most severe. Here is the problem with (4):
Code:
1/3 = 0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa * 2^-129
6   = 0xc0000000000000000000000000000000 * 2^-125
6*(1/3) = 0x7fffffffffffffffffffffffffffffff80000000000000000000000000000000 * 2^-254

This last number is going to get shifted to 0xffffffffffffffffffffffffffffffff * 2^-127.

As for (3), why not do the rounding on all 64n bits? The current code will not round correctly when the bits of lesser significance than mantissa_tail affect the rounding. I am fairly certain this set of cases is nonempty.

If you want to fix (4), you are probably going to have to do the grade-school algorithm. (I went to grade school but didn't learn this algorithm properly there.) I have some notes on how to do this properly and I could share them with you. This would save you the headache if you don't have this working already. Because of (2), this is going to be slightly easier in fasmg than it was for me.

This may sound pedantic, but I would surely prize accuracy from the assembler over speed.
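To show what I mean for (3), here is a minimal C sketch of rounding on all the product bits. It is a toy single-word model, not fasmg's 32n-bit code: mul_round_ne is a hypothetical helper, and it assumes gcc/clang's unsigned __int128 and normalized (top-bit-set) inputs:
Code:
#include <stdint.h>

/* Toy model of exact rounding: multiply two normalized 64-bit mantissas,
   keep the high 64 bits of the full product, and round to nearest with
   ties to even using EVERY discarded bit. */
static uint64_t mul_round_ne(uint64_t a, uint64_t b, int *exp)
{
    unsigned __int128 p = (unsigned __int128)a * b;
    if (!(p >> 127)) { p <<= 1; (*exp)--; }       /* renormalize the product */
    uint64_t hi = (uint64_t)(p >> 64);            /* bits we keep */
    uint64_t lo = (uint64_t)p;                    /* ALL bits we discard */
    const uint64_t half = 1ULL << 63;
    if (lo > half || (lo == half && (hi & 1))) {  /* nearest, ties to even */
        hi++;
        if (hi == 0) { hi = half; (*exp)++; }     /* rounding carried out of the top */
    }
    return hi;
}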
tthsqe 30 Aug 2017, 20:31
Do you allow others to submit patches to fasmg?
Tomasz Grysztar 30 Aug 2017, 22:18
tthsqe wrote: Correct me if I am wrong on the following points:

tthsqe wrote: As for (3), why not do the rounding on all 64n bits?

tthsqe wrote: If you want to fix (4), you are probably going to have to do the grade-school algorithm. (I went to grade school but didn't learn this algorithm properly there.)
Tomasz Grysztar 31 Aug 2017, 15:51
The problem here is: if I start making little tweaks like that, it would actually be better to rewrite the entire module with this different goal in mind. In its current form it is based on a premise similar to the one used by fasm 1 - that the precision problems are "buffered" by maintaining more bits of the mantissa than required for the intended applications. In fasm 1 you could not convert a float into a value larger than 80 bits, but internally it was computing more bits, just so it could be a little reckless with precision problems, knowing that they were going to be cushioned by the additional bits. In fasmg this "guaranteed" (as stated by the manual) precision is the 128-bit representation. But, since fasmg also allows some basic arithmetic operations on these values, it is not surprising that one could expect a more rigorous approach.

The alternative I had in mind was a kind of "adaptive precision" floating point, where the size of the mantissa could grow indefinitely as long as it was possible to keep the exact result. I guess that variant would be more satisfying to you, but I dropped it thinking that it is not really necessary for an assembler. On the other hand, if it were available, it could probably create some new possible uses of fasmg.
tthsqe wrote: Do you allow others to submit patches to fasmg?
tthsqe 31 Aug 2017, 16:14
By allowing floating point operations into fasmg you have opened a new can of worms. As I see it, the floating point operations should be reliable since they are now a part of the language. I will submit to you a reworked floating point module and see if you like it. It will take a while but will have exact rounding (with ties going to even) for +, -, *, and /.
Tomasz Grysztar 31 Aug 2017, 17:59
What is your opinion on adaptive precision then? If I were to go in the direction of better precision, I'd prefer to go all the way.

Not anytime soon, though. When I had the time to work on the floats module, I still dismissed the idea as too complex, and just made one that I deemed enough for general assembly needs. There are so many other interesting things to do with fasmg...
tthsqe 31 Aug 2017, 18:46
By adaptive precision I hope you mean arbitrary precision, in which case the complicated question of how to request the desired precision needs to be addressed. You could have another keyword used like "setfloatprec n", in which case all subsequent operations would be performed with at least n bits. Of course the floats stored in memory at any point in time could have many different precisions, but the calculations would always produce floats with the last set value of n. An assembler with arbitrary precision floats as well as ints would be nice, though the latter can be used to simulate the former.
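For comparison, MPFR already works this way: a global default precision that every subsequently initialized float picks up. This is a sketch for illustration only, not a proposal for fasmg's actual syntax:
Code:
#include <stdio.h>
#include <mpfr.h>   /* link with -lmpfr -lgmp */

int main(void)
{
    mpfr_set_default_prec(256);                /* the "setfloatprec 256" analogue */
    mpfr_t third;
    mpfr_init(third);                          /* picks up the default precision */
    mpfr_set_ui(third, 1, MPFR_RNDN);
    mpfr_div_ui(third, third, 3, MPFR_RNDN);   /* 1/3 rounded to 256 bits */
    mpfr_printf("1/3 ~ %.60Rf\n", third);
    mpfr_clear(third);
    return 0;
}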
Tomasz Grysztar 31 Aug 2017, 18:52
No, what I meant by "adaptive precision" is a bit more specific. Not only would the precision be adjustable, but in the case of operations that could give an exact result, the exact result would be produced even if it required the precision to be increased. For example, multiplication of two floats with precision n would produce a number with precision up to 2n (it might be smaller if the mantissa ends with a bunch of zeros). And when adding a very small number to a very large one, the precision would have to grow tremendously to contain the exact result.
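To make this concrete, here is a toy C model with hypothetical types: a single 64-bit word stands in for a mantissa that could grow, and the trimmed product is assumed to still fit in it:
Code:
#include <stdint.h>
#include <stdio.h>

/* Toy adaptive precision: multiply exactly into double width, then trim
   trailing zero bits so the stored precision grows only when the exact
   result really needs it. */
typedef struct { uint64_t man; int exp; } afloat;   /* value = man * 2^exp */

static afloat amul(afloat x, afloat y)
{
    unsigned __int128 p = (unsigned __int128)x.man * y.man; /* exact, up to 2n bits */
    afloat r = { 0, x.exp + y.exp };
    while (p && !(p & 1)) { p >>= 1; r.exp++; }             /* trim trailing zeros */
    r.man = (uint64_t)p;    /* assume the trimmed product still fits in this toy */
    return r;
}

int main(void)
{
    afloat a = { 5, -1 };   /* 2.5 = 101b * 2^-1 */
    afloat b = { 2,  0 };   /* 2.0 */
    afloat c = amul(a, b);
    printf("%llu * 2^%d\n", (unsigned long long)c.man, c.exp);  /* 5 * 2^0 */
}

Note that 2*2.5 from the title of this thread comes out exactly as 5 * 2^0, that is, 5.0.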
In fact, my current implementation could be extended to become what I describe above, because at the time I was writing it I still had the idea in mind, even if I considered it impractical.

tthsqe wrote: An assembler with arbitrary precision floats as well as ints would be nice, though the latter can be used to simulate the former.
tthsqe 31 Aug 2017, 19:59
So you are essentially talking about arithmetic over Q, since correct handling of 1 + 2^-1000 while choking on 2/3 would be considered strange behaviour by almost any user. Even with such adaptive capabilities, there would still have to be a limit on the sizes of the numerators and denominators, otherwise calculations can explode quickly. So when a rational calculation goes over the limit, you would have to choose a smaller denominator.
Tomasz Grysztar 31 Aug 2017, 20:09
tthsqe wrote: So you are essentially talking about arithmetic over Q since correct handling of 1 + 2^-1000 while choking on 2/3 would be considered strange behaviour by almost any user.

Speaking of strange choices, for arithmetic over Q the 2-adic approach would be much more in my spirit. This is in fact something I worked on for my PhD thesis, which I sadly never finished.
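To give a small taste of it: in 2-adic arithmetic every odd integer has an exact reciprocal, and its low bits can be computed by Newton iteration, as in this C sketch (inv2adic is a hypothetical helper, not anything in fasmg):
Code:
#include <stdint.h>
#include <stdio.h>

/* 2-adic inverse of an odd d, modulo 2^64, by Newton iteration. */
static uint64_t inv2adic(uint64_t d)    /* d must be odd */
{
    uint64_t x = d;                     /* correct to 3 bits: d*d == 1 (mod 8) */
    for (int i = 0; i < 5; i++)         /* 3 -> 6 -> 12 -> 24 -> 48 -> 96 bits */
        x *= 2 - d * x;                 /* each step doubles the correct bits */
    return x;                           /* d * x == 1 (mod 2^64) */
}

int main(void)
{
    uint64_t third = inv2adic(3);
    printf("1/3 mod 2^64 = 0x%016llx, check 3*(1/3) = %llu\n",
           (unsigned long long)third, (unsigned long long)(third * 3));
}

The printed inverse is 0xaaaaaaaaaaaaaaab, the low 64 bits of the 2-adic expansion ...010101011 of 1/3.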
tthsqe 31 Aug 2017, 20:29
Ok. So fasmg should have an interesting floating point format. It doesn't need to be fast, since anyone desiring speed can simulate fixed point with ints. Since I am not familiar with completions as much as I would like to be, I would go with 2^e × a/b where a and b are odd. This isn't that interesting, but it's light years ahead of the boring 1.blahblah × 2^e format, which can't represent most interesting numbers. If log a + log b ever gets too big, you would choose the last convergent in the CF partial quotients that fits. Good idea?
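Here is a rough C sketch of that fallback: walk the continued fraction convergents of the exact value and keep the last one whose denominator still fits (semiconvergents can sometimes do slightly better, which I ignore; cf_limit is a hypothetical helper):
Code:
#include <stdint.h>
#include <stdio.h>

/* Last continued-fraction convergent of num/den with denominator <= max_den. */
static void cf_limit(uint64_t num, uint64_t den, uint64_t max_den,
                     uint64_t *out_p, uint64_t *out_q)
{
    uint64_t h0 = 0, h1 = 1, k0 = 1, k1 = 0;   /* standard CF recurrence seeds */
    while (den) {
        uint64_t a = num / den, r = num % den; /* next partial quotient */
        uint64_t h2 = a * h1 + h0, k2 = a * k1 + k0;
        if (k2 > max_den) break;               /* next convergent is too big */
        h0 = h1; h1 = h2; k0 = k1; k1 = k2;
        num = den; den = r;
    }
    *out_p = h1; *out_q = k1;
}

int main(void)
{
    uint64_t p, q;
    cf_limit(3141592653589793ULL, 1000000000000000ULL, 1000, &p, &q);
    printf("%llu/%llu\n", (unsigned long long)p, (unsigned long long)q); /* 355/113 */
}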
tthsqe 05 Sep 2017, 12:41
Tomasz,

The realisation that you must have long division, because you have "/" and "mod" for big ints, took me to line 3000 or so of expressions.inc. I think the reason people frequently don't read your code is that it is close to inscrutable. For multiplication by more than one segment, I see a two-way split of the input terms and then a three-point evaluation. However, I can't follow your long division algorithm. Can you follow mine? https://github.com/tthsqe12/cas/blob/master/idiv.cpp#L78 (most of the functions used in there are self-evident, and _estq is defined at the beginning of the file)

Could you give some hints on how your division works? Does it produce the quotient segments in order from high to low? What is the complexity?
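For reference, that two-way split with three-point evaluation is Karatsuba's identity. A toy C version on a single 64-bit word looks like this (the same algebra drives the multi-segment case; uses gcc/clang's unsigned __int128):
Code:
#include <stdint.h>
#include <stdio.h>

typedef unsigned __int128 u128;

/* 64x64 -> 128 multiply from three 32x32 products instead of four. */
static void kara_mul64(uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo)
{
    uint64_t a0 = (uint32_t)a, a1 = a >> 32;
    uint64_t b0 = (uint32_t)b, b1 = b >> 32;
    uint64_t z0 = a0 * b0;                 /* evaluation at x = 0 */
    uint64_t z2 = a1 * b1;                 /* evaluation at x = infinity */
    /* evaluation at x = 1: the sums are 33 bits, so the product needs
       up to 66 bits; a real multi-limb version tracks the carry */
    u128 z1 = (u128)(a0 + a1) * (b0 + b1) - z0 - z2;
    u128 p = ((u128)z2 << 64) + (z1 << 32) + z0;
    *hi = (uint64_t)(p >> 64);
    *lo = (uint64_t)p;
}

int main(void)
{
    uint64_t hi, lo;
    kara_mul64(0xdeadbeefcafebabeULL, 0x0123456789abcdefULL, &hi, &lo);
    u128 ref = (u128)0xdeadbeefcafebabeULL * 0x0123456789abcdefULL;
    printf("%s\n", (hi == (uint64_t)(ref >> 64) && lo == (uint64_t)ref)
                   ? "match" : "mismatch");
    return 0;
}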
Tomasz Grysztar 05 Sep 2017, 13:21
tthsqe wrote: Could you give some hints on how your division works? Does it produce the quotient segments in order from high to low? What is the complexity?
tthsqe 06 Sep 2017, 20:49
Quote: This is a revised version of the algorithm used by fasm 1 for division by a 64-bit number, it computes iterated approximations until the remainder is smaller than the divisor. The first approximation is made with baseline division by a 32-bit value (made from the most significant bits of the divisor).

So it seems you have never implemented a proper grade-school division algorithm! I reworked the arbitrary precision floats in my cas project so that rounding is exact and to the nearest even (this of course then requires that the highest bit in the mantissa is set, like in your implementation). The result is clean, simple C code that I hope you can translate into the fasmg core. If I get it thoroughly tested, would you be willing to take some time to translate the code into fasmg?
Tomasz Grysztar 06 Sep 2017, 21:43
tthsqe wrote: So it seems you have never implemented a proper grade-school division algorithm!

tthsqe wrote: I reworked the arbitrary precision floats in my cas project so that rounding is exact and to the nearest even (this of course then requires that the highest bit in the mantissa is set, like in your implementation). The result is clean, simple C code that I hope you can translate into the fasmg core. If I get it thoroughly tested, would you be willing to take some time to translate the code into fasmg?
tthsqe 06 Sep 2017, 23:59
Alright. I hope you can see the value of exact rounding and that we can see it in fasmg sometime (or some other interesting float format). As for code size and temporary workspace, division and multiplication are about the same for the former and equal for the latter.
Tomasz Grysztar 07 Sep 2017, 15:05
tthsqe wrote: As for code size and temporary workspace, division and multiplication are about the same for the former and equal for the latter.

One might think that Karatsuba multiplication would end up taking more space in the executable, but because the code consists of multiple calls to functions that are available anyway, it ended up being quite concise and I was happy with it. A similar thing happened with the division. And, if you look closer, what this algorithm actually achieves is not that much different from classic long division. On average every iteration refines about 32 bits (so one digit, if we measure in machine words) of the result, starting from the top (as this is just a repeatedly refined approximation).
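In C-like terms, the spirit of it is something like this sketch (not the actual fasmg routine, which works on 32n-bit numbers; this toy divides a 128-bit value by a 64-bit one, uses gcc/clang's unsigned __int128, and assumes the quotient fits in 64 bits):
Code:
#include <stdint.h>

typedef unsigned __int128 u128;

/* Division by iterated approximation: each pass estimates a quotient
   chunk by dividing the current remainder by the divisor's top 32 bits
   (rounded up, so the chunk never overshoots), subtracts, and repeats
   until the remainder is smaller than the divisor. Each pass refines
   roughly 32 more bits of the quotient, from the top down. */
static uint64_t div_refine(u128 n, uint64_t d, uint64_t *rem)
{
    int s = 0;
    uint64_t dtop = d;
    while (dtop >> 32) { dtop >>= 1; s++; }  /* keep the top <= 32 bits of d */
    dtop++;                                  /* overestimate d: chunks stay low */
    uint64_t q = 0;
    while (n >= d) {
        uint64_t est = (uint64_t)((n >> s) / dtop);  /* cheap approximation */
        if (est == 0) est = 1;               /* force progress near the end */
        q += est;
        n -= (u128)est * d;                  /* refine the remainder */
    }
    *rem = (uint64_t)n;
    return q;
}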
Tomasz Grysztar 12 Aug 2018, 21:01
Tomasz Grysztar wrote: In my case translating the code would probably end up changing to simply writing my own variant in assembly.

I have also updated the floating point converter (used to generate things like single precision floats with DD, etc.) so that it uses the same rounding mode. Everything should be more consistent now.