flat assembler
Message board for the users of flat assembler.
hopcode 12 May 2013, 09:45
1000 iterations: OK.
1,000,000 iterations: OK. If you have doubts about bpp interpolation at more than 4 Bio. iterations, then enjoy this: http://www.foerstemann.name/labor/zoom999.html (translate the "details" there using Google, and browse the clips on YouTube).
tthsqe 12 May 2013, 10:58
Hey - I'm glad someone else has found phaumann's page.
I was able to reproduce his technique and speed it up by about one order of magnitude for very deep zooms. I created a higher quality video similar to his 10^999 video here: http://www.youtube.com/watch?v=ohzJV980PIQ
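For readers wondering how zooms far beyond double precision can run quickly at all: one well-known approach (perturbation against a single high-precision reference orbit; whether it is the exact technique used in the video is not stated here) iterates only a small per-pixel difference in ordinary doubles, using Delta_{n+1} = 2*Z_n*Delta_n + Delta_n^2 + delta_0. A minimal sketch, with the reference orbit computed in plain double purely to keep it self-contained; a real deep zoom would compute that orbit in multi-precision:

Code:
    /* Perturbation sketch: compute one reference orbit Z_n, then per pixel
     * iterate only the difference Delta_n in doubles:
     *   Delta_{n+1} = 2*Z_n*Delta_n + Delta_n^2 + delta0
     * NOTE: the reference orbit is done in plain double here only to keep the
     * sketch self-contained; a real deep zoom needs it in multi-precision. */
    #include <complex.h>
    #include <stdio.h>

    #define MAXITER 1000

    int main(void)
    {
        /* reference point chosen inside the set so its orbit stays bounded (assumed) */
        double complex C = 0.0 + 1.0 * I;
        double complex Zref[MAXITER];

        double complex Z = 0;
        for (int n = 0; n < MAXITER; n++) {
            Zref[n] = Z;
            Z = Z * Z + C;
        }

        /* one nearby pixel: c = C + delta0, iterate only Delta_n */
        double complex delta0 = 1e-12 + 1e-12 * I;
        double complex D = 0;                 /* Delta_0 = 0 because z_0 = Z_0 = 0 */
        int n;
        for (n = 0; n < MAXITER; n++) {
            double complex z = Zref[n] + D;   /* full value, only for the bailout test */
            if (cabs(z) > 2.0)
                break;
            D = 2.0 * Zref[n] * D + D * D + delta0;
        }
        if (n == MAXITER)
            printf("pixel did not escape within %d iterations\n", MAXITER);
        else
            printf("pixel escaped after %d iterations\n", n);
        return 0;
    }

The point is that the reference orbit only has to be computed once per image, while the per-pixel Delta stays small enough for hardware doubles over a useful range of iterations.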
fredlllll 12 May 2013, 15:43
No need to translate, I'm German.
Well, I think this technique is fine for slideshows or videos. If I see that right, you increased the iteration count while zooming deeper, because the gradient "shrinks", is that right? But I want to render, for example, a 4000000x3000000-pixel Mandelbrot with an appropriate number of iterations (I think 1000 would do), so I don't think this optimization would help in that case.
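For context, the "iterations" here are just the per-pixel escape-time loop, and the maximum count (1000 in the example above) is a per-image choice independent of how many pixels get rendered. A minimal sketch of one pixel's computation; the sample coordinate and the bailout radius 2 are the usual textbook choices, not values from this thread:

Code:
    /* Minimal escape-time iteration for a single Mandelbrot pixel.
     * maxiter = 1000 as suggested above; the coordinate is just an example. */
    #include <stdio.h>

    int main(void)
    {
        const int maxiter = 1000;
        double cr = -0.743, ci = 0.131;   /* example point c = cr + ci*i (assumed) */
        double zr = 0.0, zi = 0.0;
        int n = 0;

        while (n < maxiter && zr * zr + zi * zi <= 4.0) {   /* bailout |z| > 2 */
            double t = zr * zr - zi * zi + cr;              /* z = z^2 + c */
            zi = 2.0 * zr * zi + ci;
            zr = t;
            n++;
        }
        printf("escape count: %d of %d\n", n, maxiter);
        return 0;
    }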
hopcode 13 May 2013, 04:01
Fantastic, great stuff, tthsqe.
Now a note about the wheel being reinvented. We should distinguish two things:
- the surface of the set
- the window on the set

The surface is the mapping of a physical space; the window is the point of view, the lens moving over the surface.

http://www.mrob.com/pub/muency/roundofferror.html wrote: "Iteration involves multiplication and addition (Z = Z^2 + C = Z*Z + C) and the multiplication [...]"

That is about the "picture", i.e. the window on the surface, so it is valid only for the window. I cannot say, for example, whether the 26 days of calculation on tthsqe's Mandelbrot were done using double precision or at least a 3Kb-precision vector (for the 10^-1006), because what I see on YouTube is always a window, a point of view on the surface. I do not need many colours there, simply because of the non-linear output of the iterated function. And the self-similarity of the set may induce the same approximation error, illusorily broken by an eventual use of double precision and then restored by the self-similarity property of the set (also, we cannot perceive that error in that window).

I also disagree substantially with the quote above: it may lead one to think bignum precision is not worthwhile. Anyway, that stated difference does not add much to the exploration capability of an engine. You can test the contrary online here (it may freeze your browser): http://galileo.phys.virginia.edu/~re8m/mandelbrotset.html. 500, 1000, 2000 or 10000 iterations look the same, with no better perception of the colouring. You zoom again and again, and 500, 1000, 2000 or 10000 iterations still look the same, because of the self-similarity property of the set (the background coming into the foreground). If those different iteration counts look visually the same on the same window, it does not mean the calculations are done correctly; or better said, it does not mean that after zooming you can unzoom without breaking the property of the set through bad round-off accumulated while zooming. You may have 1000 iterations in double precision in a 1024x1024 window focused on part of a surface of 10^64 pixel area, or the same working at 4Kb vector precision; what I enjoy on the display is always 1024x1024 and the self-similarity of the set. That does not mean that, over the whole 10^64 pixel area, the calculations are done correctly.

Requirements:
- description of precision bits
- window size
- surface size
- iterations per pixel

Optional:
- approximation error
- cycles per pixel
- CPU used
- total calculation time

EDIT/ADD: a bit like the oriental saying: you square the whole tree, but you cannot see the apple on it; or you square the apple, and you cannot out-square the whole tree at the same time.
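To put the window/surface distinction in concrete terms: the window is a fixed-size pixel grid, and zooming only changes the scale at which those pixels sample the surface (the complex plane). A minimal sketch; the centre, zoom factor and 1024x1024 window are assumed example values:

Code:
    /* Mapping a fixed window (pixel grid) onto the "surface" (complex plane).
     * The window stays 1024x1024; only pixel_size changes with the zoom, which
     * is why the display always looks like "1024x1024 plus self-similarity"
     * no matter how large the underlying surface is. Example values assumed. */
    #include <stdio.h>

    int main(void)
    {
        const int win_w = 1024, win_h = 1024;          /* window: what we see  */
        double center_re = -0.743643, center_im = 0.131825;
        double zoom = 1e12;                            /* magnification factor */

        double span = 4.0 / zoom;                      /* width of the window on the surface */
        double pixel_size = span / win_w;              /* spacing between neighbouring pixels */

        /* complex coordinate of pixel (x, y) in the window */
        int x = 0, y = 0;
        double re = center_re + (x - win_w / 2) * pixel_size;
        double im = center_im - (y - win_h / 2) * pixel_size;

        printf("pixel (%d,%d) -> c = %.20g + %.20g i\n", x, y, re, im);
        printf("pixel spacing on the surface: %.3g\n", pixel_size);
        return 0;
    }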
fredlllll 13 May 2013, 08:41
Are we still talking about my cloud thing? Because I don't think you understood me.
I want to use the Google API, which expects single tiles for the whole picture:
- zoom 1: 1 big tile
- zoom 2: 4 tiles inside the big tile
- zoom 3: 4 tiles inside each of the above
and so on. How do the window and surface size help here?
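A tile pyramid of the kind described (one tile at the top, each tile splitting into four at the next level) means a tile at zoom level z covers a 1/2^(z-1) by 1/2^(z-1) fraction of the whole picture, using the numbering above where zoom 1 is the single big tile. A minimal sketch that turns a tile index into the complex-plane rectangle that tile has to render; the overall bounds [-2,2] x [-2,2] and the 256-pixel tile size are assumptions:

Code:
    /* Tile pyramid sketch: at zoom level z (numbered as in the post, zoom 1 =
     * one big tile) there are 2^(z-1) x 2^(z-1) tiles. Given a tile index
     * (z, tx, ty) this computes the complex-plane rectangle for that tile.
     * The bounds [-2,2] x [-2,2] and the 256-pixel tile size are assumptions. */
    #include <stdio.h>

    #define TILE_PIXELS 256

    int main(void)
    {
        int z = 3, tx = 2, ty = 1;                 /* example tile (assumed) */

        int tiles_per_side = 1 << (z - 1);         /* 1, 2, 4, 8, ... */
        double full_min = -2.0, full_max = 2.0;    /* whole-picture bounds (assumed) */
        double tile_span = (full_max - full_min) / tiles_per_side;

        double re0 = full_min + tx * tile_span;    /* left edge of this tile */
        double im0 = full_max - ty * tile_span;    /* top edge of this tile  */
        double pixel_size = tile_span / TILE_PIXELS;

        printf("zoom %d: %d x %d tiles, each covering %.6g units\n",
               z, tiles_per_side, tiles_per_side, tile_span);
        printf("tile (%d,%d): re in [%.6g, %.6g], im in [%.6g, %.6g], pixel size %.6g\n",
               tx, ty, re0, re0 + tile_span, im0 - tile_span, im0, pixel_size);
        return 0;
    }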
hopcode 13 May 2013, 12:10
Requirements update:
- window size
- surface size / zooming goal
- iterations per pixel
- cycles per pixel
- CPU used
- total calculation time
- pixels per thread
- number of colours

Precision has the priority. I think that code is nonetheless a good template to start with and improve for a live-click engine; it could work as a backend CGI. From my hints about memory management and a bit more theory, I am sure it is now possible to make each single thread at least ~4x faster under the same 10^2 zoom conditions. In fact, memory is currently accessed so badly that, I estimate, running a single thread should be faster than running 4, even at, say, 1600 iterations, and should not heat up the CPU.

I don't know your opinion, but I am not satisfied with the arbitrary-precision math libraries out there. In the meantime, if you have any idea about precision management, we can discuss it here or in another thread; otherwise the thread may be marked as solved. I will enjoy hearing from you + a bit more of the theory, without testing or writing a single line of code.

Cheers,
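On the precision-management question: when the existing libraries do not satisfy, the usual starting point is a fixed-point format built from one or more 64-bit limbs, with multiplication going through a wider intermediate. A minimal single-limb sketch (Q3.60 format; the __int128 type is a GCC/Clang extension and is assumed available); a real deep-zoom engine would extend the same idea to many limbs:

Code:
    /* Fixed-point sketch: a Q3.60 format (3 integer bits, 60 fraction bits) in
     * one signed 64-bit limb, multiplied through a 128-bit intermediate. This
     * only slightly exceeds double's 52-bit mantissa; the point is the
     * representation, which a real engine would extend to many 64-bit limbs.
     * __int128 is a GCC/Clang extension (assumed available). */
    #include <stdint.h>
    #include <stdio.h>

    typedef int64_t fx_t;
    #define FX_FRAC 60

    static fx_t fx_from_double(double x) { return (fx_t)(x * (double)(1ULL << FX_FRAC)); }
    static double fx_to_double(fx_t x)   { return (double)x / (double)(1ULL << FX_FRAC); }
    static fx_t fx_mul(fx_t a, fx_t b)   { return (fx_t)(((__int128)a * b) >> FX_FRAC); }

    int main(void)
    {
        /* one Mandelbrot step z = z^2 + c in fixed point, example values assumed */
        fx_t cr = fx_from_double(-0.75), ci = fx_from_double(0.1);
        fx_t zr = fx_from_double(0.3),   zi = fx_from_double(-0.2);

        fx_t zr2 = fx_mul(zr, zr), zi2 = fx_mul(zi, zi);
        fx_t new_zr = zr2 - zi2 + cr;
        fx_t new_zi = fx_mul(zr, zi) * 2 + ci;

        printf("fixed point: z -> %.18f + %.18f i\n",
               fx_to_double(new_zr), fx_to_double(new_zi));
        printf("double     : z -> %.18f + %.18f i\n",
               0.3 * 0.3 - (-0.2) * (-0.2) + (-0.75), 2 * 0.3 * (-0.2) + 0.1);
        return 0;
    }

The two printed lines should agree to roughly double precision; the gain only appears once the same scheme is extended to more limbs.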
tthsqe 13 May 2013, 15:17
One should not be afraid and feel the need to increase precision as the iteration count increases. Rather, precision should be sufficient to distinguish between pixels in your image. The reason is that the quadratic polynomial z^2+c splits the plane into two Fatou domains (usually the coloured domain for infinity and the black domain for 0) on which z^2+c behaves regularly.
Precision only has priority to the extent that it can distinguish between pixels in your image. For example, in the blocky picture posted from Mandelbrot Explorer, the zoom was about 43 (a zoom factor of roughly 2^43), and since the image was divided into about 1000 = 2^10 pixels, that gives a precision of 53 bits, which is exactly the limit of double precision. If you want to go deeper than double precision, you have to create your own multi-precision arithmetic, which is what I spent some time doing. The program that I used to create the deep zooms goes up to 64 qwords (about 1500 decimal digits), and at each zoom level I used only enough qwords to distinguish the pixels at that zoom level.
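The rule of thumb above can be written down directly: bits needed ≈ log2(zoom factor) + log2(pixels across), e.g. about 43 + 10 = 53 for the picture mentioned, right at the edge of the double-precision mantissa. A small sketch of that calculation; the variable names and the base span of 4 are illustrative, not taken from any program in the thread:

Code:
    /* How many mantissa bits are needed just to tell neighbouring pixels apart:
     *   bits ~ log2(zoom) + log2(pixels_across)
     * With the values quoted above (zoom ~ 2^43, ~2^10 pixels across) that
     * gives ~53 bits, i.e. the double-precision mantissa. Names illustrative. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double zoom = pow(2.0, 43);     /* magnification relative to the initial view */
        double pixels_across = 1000.0;  /* ~2^10 pixels in the image width */
        double base_span = 4.0;         /* width of the initial, unzoomed view (assumed) */

        double pixel_spacing = base_span / (zoom * pixels_across);
        double bits_needed = log2(zoom) + log2(pixels_across);

        printf("pixel spacing on the complex plane: %.3g\n", pixel_spacing);
        printf("bits needed to distinguish pixels : %.1f (double provides 53)\n",
               bits_needed);
        return 0;
    }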
hopcode 13 May 2013, 18:26
Exactly, B-N = ~44, and this information is fundamental before evaluating an algorithm! You should also state the precision bits involved in the YouTube description, for those who don't know the theory; otherwise it would be just like any other YouTube clip. 10^2 is too small a factor, because one of the goals of exploring the set should be that, in those few minutes spent enjoying it, one catches (for those like me who didn't know the theory behind it) another glimpse of the magnitude of the cosmos. BTW: I downloaded the whole Munafo website, because there are interesting numbers and papers on it.
Cheers,