flat assembler
Message board for the users of flat assembler.

Index > Main > Help with converting Mandelbrot C to assembly (speed optim.)

fredlllll



Joined: 17 Apr 2013
Posts: 56
fredlllll 12 May 2013, 09:21
As you can see from the pictures above, I already know that. But what about double precision?
And I don't think those programs work in real time (like the Maps API would).
I also want to support arbitrary precision with the cloud, maybe for a little bit of money, because arbitrary precision takes far more effort than double.
hopcode



Joined: 04 Mar 2008
Posts: 563
Location: Germany
hopcode 12 May 2013, 09:45
1,000 iterations: OK.
1,000,000 iterations: OK.
If you doubt the bpp interpolations at more than 4 billion iterations, well, Wink
enjoy this: http://www.foerstemann.name/labor/zoom999.html
Translate the "details" there using Google, and browse the clips on YouTube.

_________________
⠓⠕⠏⠉⠕⠙⠑
tthsqe



Joined: 20 May 2009
Posts: 767
tthsqe 12 May 2013, 10:58
Hey - I'm glad someone else has found phaumann's page. Smile
I was able to reproduce his technique and speed it up by about one order of magnitude for very deep zooms. I created a higher quality video similar to his 10^999 video here:
http://www.youtube.com/watch?v=ohzJV980PIQ
fredlllll



Joined: 17 Apr 2013
Posts: 56
fredlllll 12 May 2013, 15:43
No need to translate, I'm German. Wink
Well, I think this technique is fine for slideshows or videos. And if I see that right, you increased your iteration count while zooming deeper, is that right? Because the gradient "shrinks".
But I want, for example, to render the Mandelbrot set at 4,000,000 x 3,000,000 pixels with an appropriate number of iterations (I think 1,000 would do), so I don't think this optimization would help in that case.
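
For reference, the inner loop under discussion is the classic escape-time iteration. A minimal double-precision sketch in C (the sample point and the 1,000-iteration dwell limit are only illustrative values, not anything fixed in this thread):

Code:
#include <stdio.h>

/* Escape-time count for one pixel: iterate z = z^2 + c and return the
   iteration at which |z| exceeds 2, or max_iter if it never escapes. */
static int mandel(double cr, double ci, int max_iter)
{
    double zr = 0.0, zi = 0.0;
    for (int n = 0; n < max_iter; n++) {
        double zr2 = zr * zr, zi2 = zi * zi;
        if (zr2 + zi2 > 4.0)        /* |z|^2 > 4 means |z| > 2 */
            return n;
        zi = 2.0 * zr * zi + ci;    /* Im(z^2 + c), uses old zr */
        zr = zr2 - zi2 + cr;        /* Re(z^2 + c) */
    }
    return max_iter;
}

int main(void)
{
    printf("%d\n", mandel(-0.75, 0.1, 1000)); /* a point near the boundary */
    return 0;
}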
hopcode



Joined: 04 Mar 2008
Posts: 563
Location: Germany
hopcode 13 May 2013, 04:01
Fantastic, great stuff, tthsqe Smile e234 is my favourite!
Now a note about the wheel actually being reinvented Wink

We should distinguish two things:
- the surface of the set
- the window on the set

The surface is the mapping of a physical space; the window is the point of view, the lens moving over the surface.
http://www.mrob.com/pub/muency/roundofferror.html wrote:
Iteration involves multiplication and addition (Z = Z^2 + C = Z*Z + C), and the multiplication is the main source of the growth of the error term. A good rough estimate is that the error doubles with each iteration. This means that if our numbers have B binary digits of precision, then after N iterations only the first B-N binary digits are accurate. So an accurate picture of the Mandelbrot Set with a dwell limit of D and a grid spacing (distance between adjacent pixels) of 2^-N can be accurately drawn using N+D bits of accuracy in the math. Since most Mandelbrot views use dwell limits like 1000 or 10,000, and most floating point math libraries only support about 50 bits of accuracy, there seems to be a problem. What are the pictures we're looking at — are they actually related in any way to the actual appearance of the Mandelbrot Set? In fact, they are. Experiments show that even when you write a 1000-bit floating point library, the pictures look the same as they did with 50-bit floating point. A few pixels differ here and there, but there are no noticeable differences even when you look very closely. The fundamental reason for this is that the floating point roundoff is effectively insignificant compared to the non-linear mapping induced by the (chaotic) iteration itself. Unless an iterate Zn happens to fall on or near a critical point, the mapping actually serves to diminish the errors from previous steps, and so the total error ends up being only about twice the roundoff from a single iteration.
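
As a concrete reading of that N+D rule, a minimal C sketch (the numbers in main are illustrative examples, not figures from this thread):

Code:
#include <math.h>
#include <stdio.h>

/* Munafo's rule of thumb: with grid spacing 2^-N and dwell limit D,
   drawing the picture accurately takes about N + D bits, because the
   roundoff error roughly doubles (costs one bit) per iteration. */
static int bits_needed(double pixel_spacing, int dwell_limit)
{
    int n = (int)ceil(-log2(pixel_spacing)); /* spacing 2^-N gives N */
    return n + dwell_limit;
}

int main(void)
{
    /* Example: pixel spacing 2^-53 (the double-precision limit) and a
       dwell limit of 1000 would need about 1053 bits by this rule. */
    printf("%d bits\n", bits_needed(pow(2.0, -53), 1000));
    return 0;
}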

That is about the "picture", i.e. the window on the surface, and it is valid only for the window. I cannot say, for example, whether the 26 days of calculation on tthsqe's Mandelbrot were done using double precision or with at least a 3 KB precision vector (for the 10^-1006), because what I see there on YouTube is always a window, a point of view on the surface. I do not need many colours there, simply because of the non-linear output of the iterating function. And the self-similarity of the set may induce the same error approximation: seemingly broken by a possible use of double precision, then restored by the self-similarity property of the set (also, we cannot perceive that error in that window).

Also, I substantially disagree with the quote above: it may lead one to think that bignum precision is not worthwhile. In any case, the stated difference does not add much to the exploration capability of an engine.

You can test the contrary online here (it may freeze the browser):
http://galileo.phys.virginia.edu/~re8m/mandelbrotset.html

500, 1,000, 2,000 or 10,000 iterations all look the same, with no better perception of the colouring. You zoom again and again, and 500, 1,000, 2,000 or 10,000 iterations are still always the same, because of the self-similarity of the set (the background coming into the foreground). If those different iteration counts look visually the same in the same window, this does not mean the calculations are done correctly; or, better said, it does not mean that after zooming you can unzoom without breaking the property of the set through bad rounding-off while zooming.

You may have 1,000 iterations at double precision in a 1024 x 1024 window focused on part of a surface with a 10^64 pixel area, or the same thing working with 4 KB vector precision: what I enjoy on the display is always 1024 x 1024 and the self-similarity of the set. This does not mean that, on the whole 10^64 pixel area, the calculations are done correctly.

Requirements:

    - description of precision bits
    - window size
    - surface size
    - iterations per pixel

Optional:

    - approx. error
    - cycles per pixel
    - CPU used
    - total calculation time


EDIT/ADD:
A bit like the oriental saying:
you square the whole tree, but you cannot see the apple on it;
or you square the apple, and you cannot out-square the whole tree at the same time.

_________________
⠓⠕⠏⠉⠕⠙⠑
fredlllll



Joined: 17 Apr 2013
Posts: 56
fredlllll 13 May 2013, 08:41
Are we still talking about my cloud thing? Because I don't think you understood me.

I want to use the Google API, which expects single tiles for the whole picture:
zoom 1: 1 big tile
zoom 2: 4 tiles in the big tile
zoom 3: 4 tiles in each of the above
and so on. How do the window and surface size help here?
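
For what it's worth, that tile pyramid maps to regions of the complex plane in a straightforward way. A hedged C sketch (the root rectangle and the function name are assumptions for illustration; the real Google scheme numbers levels from 0 and has its own coordinate conventions):

Code:
#include <stdio.h>

/* Hypothetical tile-to-region mapping: at zoom level z there are
   2^(z-1) x 2^(z-1) tiles (zoom 1 = 1 tile, zoom 2 = 4, zoom 3 = 16),
   each covering an equal share of an assumed root rectangle. */
typedef struct { double re0, re1, im0, im1; } region;

static region tile_region(int zoom, int tx, int ty)
{
    const region root = { -2.5, 1.5, -2.0, 2.0 }; /* assumed root view */
    int n = 1 << (zoom - 1);                      /* tiles per side */
    double w = (root.re1 - root.re0) / n;
    double h = (root.im1 - root.im0) / n;
    region r = { root.re0 + tx * w, root.re0 + (tx + 1) * w,
                 root.im0 + ty * h, root.im0 + (ty + 1) * h };
    return r;
}

int main(void)
{
    region r = tile_region(3, 2, 1); /* one of the 16 tiles at zoom 3 */
    printf("re [%g, %g]  im [%g, %g]\n", r.re0, r.re1, r.im0, r.im1);
    return 0;
}

Each tile is then rendered independently, which is what makes the scheme easy to farm out to cloud workers.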
hopcode



Joined: 04 Mar 2008
Posts: 563
Location: Germany
hopcode 13 May 2013, 12:10
Requirements update:
    - description of precision bits
    - window size
    - surface size / zooming goal
    - iterations per pixel
Optional:
    - approx. error
    - cycles per pixel
    - CPU used
    - total calculation time
    - pixels per thread
    - number of colours
tthsqe's code cannot correctly explore beyond about a 1/2 * 10^2 zoom, as a matter of approximation error; that also agrees with the Munafo theory on float precision I quoted above.
Precision has the priority.
I think that code is nonetheless a good template to start with and improve into a live-click engine; it could work as a backend CGI. From my hints about memory management and a bit more theory,
I am now sure it is possible to make each single thread at least ~4x faster under the same 10^2 zoom conditions.

The fact for now is that memory is accessed so badly that, I estimate, running a single thread should be faster than four, even at, say, 1600 iterations, and it should not warm the CPU.
I don't know your opinion, but I am not satisfied with the math precision libraries out there. In the meanwhile, if you have any ideas about precision management,
we can discuss them here or in another thread. Otherwise, the thread may be marked as solved.

I will enjoy hearing from you, plus a bit more of the theory Smile and that, in the meanwhile, is what is of relevance for me, without testing or writing a single line of code.
Cheers,
Smile

_________________
⠓⠕⠏⠉⠕⠙⠑
tthsqe



Joined: 20 May 2009
Posts: 767
tthsqe 13 May 2013, 15:17
One should not be afraid and feel the need to increase precision as the iteration count increases. Rather, precision should be sufficient to distinguish between the pixels in your image. The reason for this is that the quadratic polynomial z^2+c splits the plane into two Fatou domains (usually a colored domain for infinity and a black domain for 0) on which z^2+c behaves regularly.

precision only has the priority to the extent that it can distinguish between pixels in your image

For example, in the blocky picture posted from Mandelbrot Explorer, the zoom was about 2^43, and since the view was divided into about 1000 = 2^10 pixels, that gives a required precision of 43 + 10 = 53 bits, which is exactly the limit of double precision.

If you want to go deeper than double precision, you have to create your own multi-precision arithmetic, which is what I spent some time doing.

The program that I used to create the deep zooms goes up to 64 qwords (about 1500 decimal digits) and at each zoom level I used only enough qwords to distinguish the pixels at that zoom level.
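
That sizing rule can be sketched numerically (a reconstruction from the numbers in this post, not tthsqe's actual code): the pixel spacing at a given zoom fixes the bits you need, rounded up to whole qwords.

Code:
#include <stdio.h>

/* With a zoom of 2^zoom_bits spread over 2^image_bits pixels per side,
   adjacent pixels differ at about bit zoom_bits + image_bits, so that
   many bits (rounded up to 64-bit qwords) separate them. */
static int qwords_needed(int zoom_bits, int image_bits)
{
    int bits = zoom_bits + image_bits;
    return (bits + 63) / 64;          /* round up to whole qwords */
}

int main(void)
{
    /* The example above: zoom ~2^43 over ~2^10 pixels is 53 bits,
       still within one qword (and within double precision). */
    printf("%d qword(s)\n", qwords_needed(43, 10));
    /* A 10^999 zoom is about 3319 bits of zoom: */
    printf("%d qwords\n", qwords_needed(3319, 10));
    return 0;
}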
hopcode



Joined: 04 Mar 2008
Posts: 563
Location: Germany
hopcode 13 May 2013, 18:26
Exactly, B-N = ~44, and this information is fundamental before evaluating an algo! You should also state the precision bits involved in the YouTube description,
for those who don't know the theory. Otherwise it would be just like any other YouTube clip.

10^2 is... too small a factor, because one of the goals of exploring the set should be, in those few minutes while enjoying it
(for those like me who didn't know the theory behind it), another way to grasp the magnitude of the cosmos.

BTW: I downloaded the whole Munafo website, because there are interesting numbers and papers on it.
Cheers,
Very Happy

_________________
⠓⠕⠏⠉⠕⠙⠑