flat assembler
Message board for the users of flat assembler.

Index > Main > problem with sign

Tomasz Grysztar

Joined: 16 Jun 2003
Posts: 7796
Location: Kraków, Poland
revolution wrote:
Indeed PI was proved irrational a long time ago (...)
However it is still not known whether it is a normal number.
Post 30 Oct 2013, 13:50
revolution
When all else fails, read the source

Joined: 24 Aug 2004
Posts: 17664
Location: In your JS exploiting you and your system
Tomasz Grysztar wrote:
revolution wrote:
Indeed PI was proved irrational a long time ago (...)
However it is still not known whether it is a normal number.
Okay, seems that we need to normalise it then. Just to be sure.
Post 30 Oct 2013, 14:45
Tomasz Grysztar
revolution wrote:
Okay, seems that we need to normalise it then. Just to be sure.
This may not be easy if one wanted to have a proven result. For example the binary Liouville number 0.110001000000000000000001... is irrational and transcendental, like Pi - however it would not be easy to normalize it. For example the "Von Neumann whitening" would produce a sequence of identical bits out of it.
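A quick sketch (an editorial illustration, not code from this thread) of why von Neumann whitening degenerates here: the 1-bits of the binary Liouville number sit at positions n!, which are even for n >= 2, so every surviving bit pair is 01 and the extractor emits only zeros.

```python
# Von Neumann whitening applied to the binary Liouville number
# 0.110001000000000000000001...: bit k (1-based) is 1 iff k = n! for some n.
from math import factorial

def liouville_bits(nbits):
    facts, n = set(), 1
    while factorial(n) <= nbits:
        facts.add(factorial(n))
        n += 1
    return [1 if k in facts else 0 for k in range(1, nbits + 1)]

def von_neumann(bits):
    # 10 -> 1, 01 -> 0, 00 and 11 -> discarded
    return [a for a, b in zip(bits[0::2], bits[1::2]) if a != b]

print(von_neumann(liouville_bits(1000)))  # → [0, 0, 0, 0]
```

For n >= 2 the factorial n! is even, so each rare 1 lands in the second slot of its pair; the output is a constant stream of zeros, exactly the sequence of identical bits described above.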
Post 30 Oct 2013, 15:14
revolution
Tomasz Grysztar wrote:
This may not be easy if one wanted to have a proven result. For example the binary Liouville number 0.110001000000000000000001... is irrational and transcendental, like Pi - however it would not be easy to normalize it. For example the "Von Neumann whitening" would produce a sequence of identical bits out of it.
Simplistic whitening schemes are out then. But something like a keyed hash could probably overcome such limitations.
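A minimal sketch of the keyed-hash idea (an editorial illustration; the function name and block size are arbitrary choices, not anything proposed in the thread): compress fixed-size blocks of the biased stream with HMAC-SHA256 under a secret key, mixing in the block index so repeated input blocks do not yield repeated output.

```python
# Keyed-hash whitening sketch: each 64-byte block of a biased stream is
# compressed to 32 bytes via HMAC-SHA256; the block index is mixed in so
# identical input blocks still produce distinct output blocks.
import hmac
import hashlib

def keyed_whiten(stream: bytes, key: bytes, block: int = 64) -> bytes:
    out = bytearray()
    for i in range(0, len(stream) - block + 1, block):
        msg = i.to_bytes(8, "big") + stream[i:i + block]
        out += hmac.new(key, msg, hashlib.sha256).digest()
    return bytes(out)

biased = bytes([0, 0, 0, 1] * 64)          # 256 bytes, heavily biased input
white = keyed_whiten(biased, b"secret key")
print(len(white))                           # → 128 (4 blocks x 32 bytes)
```

Unlike von Neumann whitening this cannot collapse into constant output on the Liouville pattern, but the result is only computationally indistinguishable from random; it carries no more true entropy than the input had.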
Post 30 Oct 2013, 15:24
hopcode

Joined: 04 Mar 2008
Posts: 563
Location: Germany
I imagine a keyed hash may succeed only locally, whitening a relatively small stream of data that exposes what has been identified as "bias". Three seconds of a whitened stream from a Geiger counter in a 100 m² room may be indistinguishable from three seconds of whitened output
coming from an unknown generator.

After listening to an hour of the stream from that same unknown source, it is relatively simple to distinguish the output of a whitened Geiger counter,
which we know to be biased but uncorrelated, from that of a hashing system, which is unbiased but has lost the correlation with the (fingerprinted) nature
of that bestial number above, which escapes the base-b expansion behaviour of normal numbers.
(In this case the fingerprint is the expansion marked at the n!-th digits, but other properties may be present in the stream too.)

In both cases, also, we would never say "randomness", because that same "randomness" depends on locality and,
as in the link above, on the interval over which "any finite pattern of numbers occurs with the expected limiting frequency".

That same expansion may turn out to be normal (i.e. following the expectation, standard deviation, etc.) in a wider system. In the same way,
extending the Geiger counter's area from 100 m² to 1 km², the probability of clicks cannot end up being random, simply because the observations become more
and more consistent with the events over the 1 km² area.

I imagine the fact that pi has not yet been proved normal may turn out to be an advantage for those
who write crypto code:

    1) because the strength of the algorithm depends directly on that (never completed) proof
    (this already happens with any not-yet-analysed crypto algorithm, before someone breaks it);

    2) because once we have evidence that pi is normal, we will automatically have
    the perfect key-hashing algorithm; both would rely on a determined base-b expansion of numbers.


All this measuring and whitening does not take into account the fact that we are normally inclined
to think of "randomness" as an absolute concept, as "unpredictability". On the contrary, we must admit it is bound
to the conditions of the system in which the signals and data are observed and recorded.
Otherwise, everything may continue as it is:

the keyed hash will work properly, the Geiger counter will emit biased random data,
and randomness will remain what it is, in philosophy too:
a dumb, ping-pong effort between quantum determinism and quantum idealism,
so to speak. Very Happy

Cheers,

_________________
⠓⠕⠏⠉⠕⠙⠑
Post 31 Oct 2013, 11:07
hopcode
Tips or tricks, Very Happy: it is something like an LCG, and what I like most:
it is reversible (after tweaking it a bit)
it is multiplicative

The results seem promising, but the theory is not yet implemented the way I intend.
It is very slow at the moment. I am sharing only the results, because I would like
to peel this potato by myself. And that is the fun part Very Happy
Code:
-------------------------------------
Entropy = 7.989643 bits per byte.
Optimum compression would reduce the size
of this 16384 byte file by 0 percent.

Chi square distribution for 16384 samples is 234.31, and randomly
would exceed this value 81.92 percent of the times.

Arithmetic mean value of data bytes is 126.1403 (127.5 = random).
Monte Carlo value for Pi is 3.194139194 (error 1.67 percent).
Serial correlation coefficient is 0.002701 (totally uncorrelated = 0.0).

-------------------------------------
Entropy = 7.994758 bits per byte.
Optimum compression would reduce the size
of this 32768 byte file by 0 percent.

Chi square distribution for 32768 samples is 238.28, and randomly
would exceed this value 76.65 percent of the times.

Arithmetic mean value of data bytes is 126.8322 (127.5 = random).
Monte Carlo value for Pi is 3.152536166 (error 0.35 percent).
Serial correlation coefficient is -0.002910 (totally uncorrelated = 0.0).

--------------------------------------
Entropy = 7.997050 bits per byte.
Optimum compression would reduce the size
of this 65536 byte file by 0 percent.

Chi square distribution for 65536 samples is 268.51, and randomly
would exceed this value 26.85 percent of the times.

Arithmetic mean value of data bytes is 126.7138 (127.5 = random).
Monte Carlo value for Pi is 3.155099799 (error 0.43 percent).
Serial correlation coefficient is 0.001993 (totally uncorrelated = 0.0).

---------------------------------------
Entropy = 7.998566 bits per byte.
Optimum compression would reduce the size
of this 131072 byte file by 0 percent.

Chi square distribution for 131072 samples is 260.91, and randomly
would exceed this value 38.62 percent of the times.

Arithmetic mean value of data bytes is 126.9442 (127.5 = random).
Monte Carlo value for Pi is 3.145433738 (error 0.12 percent).
Serial correlation coefficient is 0.002855 (totally uncorrelated = 0.0).

---------------------------------------
Entropy = 7.999312 bits per byte.
Optimum compression would reduce the size
of this 262144 byte file by 0 percent.

Chi square distribution for 262144 samples is 251.13, and randomly
would exceed this value 55.68 percent of the times.

Arithmetic mean value of data bytes is 127.0865 (127.5 = random).
Monte Carlo value for Pi is 3.145342184 (error 0.12 percent).
Serial correlation coefficient is 0.001801 (totally uncorrelated = 0.0).

---------------------------------------
Entropy = 7.999609 bits per byte.
Optimum compression would reduce the size
of this 524288 byte file by 0 percent.

Chi square distribution for 524288 samples is 284.21, and randomly
would exceed this value 10.09 percent of the times.

Arithmetic mean value of data bytes is 127.3320 (127.5 = random).
Monte Carlo value for Pi is 3.139080578 (error 0.08 percent).
Serial correlation coefficient is 0.000191 (totally uncorrelated = 0.0).

----------------------------------------
Entropy = 7.999843 bits per byte.
Optimum compression would reduce the size
of this 1048576 byte file by 0 percent.

Chi square distribution for 1048576 samples is 228.00, and randomly
would exceed this value 88.70 percent of the times.

Arithmetic mean value of data bytes is 127.3755 (127.5 = random).
Monte Carlo value for Pi is 3.140202103 (error 0.04 percent).
Serial correlation coefficient is -0.001312 (totally uncorrelated = 0.0).

----------------------------------------
Entropy = 7.999920 bits per byte.
Optimum compression would reduce the size
of this 2097152 byte file by 0 percent.

Chi square distribution for 2097152 samples is 233.88, and randomly
would exceed this value 82.45 percent of the times.

Arithmetic mean value of data bytes is 127.4360 (127.5 = random).
Monte Carlo value for Pi is 3.143305915 (error 0.05 percent).
Serial correlation coefficient is -0.000797 (totally uncorrelated = 0.0).

------------------------------------
Entropy = 7.999957 bits per byte.
Optimum compression would reduce the size
of this 4194304 byte file by 0 percent.

Chi square distribution for 4194304 samples is 252.43, and randomly
would exceed this value 53.36 percent of the times.

Arithmetic mean value of data bytes is 127.4706 (127.5 = random).
Monte Carlo value for Pi is 3.143122810 (error 0.05 percent).
Serial correlation coefficient is -0.000971 (totally uncorrelated = 0.0).

------------------------------------
Entropy = 7.999976 bits per byte.
Optimum compression would reduce the size
of this 8388608 byte file by 0 percent.

Chi square distribution for 8388608 samples is 273.55, and randomly
would exceed this value 20.28 percent of the times.

Arithmetic mean value of data bytes is 127.4586 (127.5 = random).
Monte Carlo value for Pi is 3.144422327 (error 0.09 percent).
Serial correlation coefficient is -0.000550 (totally uncorrelated = 0.0).
    

The chi-square statistic always stays within a little more than one standard deviation of the mean.
I didn't play with it much; I would like to take some time to think about it again before the NIST
tests. The results you see in this thread are the only runs I tried.
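As an editorial sanity check on the ENT figures above: with 256 possible byte values the chi-square statistic has 255 degrees of freedom, hence mean 255 and standard deviation sqrt(2 * 255), about 22.6, so the ten runs can be expressed in sigmas:

```python
# z-scores of the ten chi-square values reported by ENT above
# (df = 255 for byte-valued samples: mean = df, sigma = sqrt(2*df)).
from math import sqrt

DF = 255
sigma = sqrt(2 * DF)                      # ≈ 22.58

chi2 = [234.31, 238.28, 268.51, 260.91, 251.13,
        284.21, 228.00, 233.88, 252.43, 273.55]
z = [(x - DF) / sigma for x in chi2]
print(round(max(abs(v) for v in z), 2))   # → 1.29
```

So the worst run (the 284.21 sample) sits about 1.3 sigma from the mean, which is entirely unremarkable for ten draws.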

Generally: please don't LOL Very Happy, because I don't yet know whether it is possible.
The idea is as follows:

I start with a 32-dimensional hypercube on a stream of digits of pi.
Because a 0-dimensional hypercube is a point and a 1-dimensional hypercube
is a line, if I "translate" the current dimension of my hypercube I should
get back a new hypercube of 32+1 dimensions.
OK, I "translate" it, considering the minimum distance between the old points and the new ones.
It should (I think) be in some way a constant distance.

My seed works on the rules for the convergence of pi (our hypercube is a special
one, born from pi) as the method for keeping the distances constant.

The sum of the old point values in the old dimension
and the new ones should always stay within the digit expansion of pi.

If these sums are correct, they will not break the relation: we would always have
the same constant distance, but new and newer dimensions stacked on the initial vector,
getting back all the properties of pi for free (the goal).

It is a bit of a sci-fi algorithm, I know, but I like it very much.
I tested it only once, on 16 MB of data with the diehard tests, and it seems it passed them all!

Cheers,
Very Happy

Post 01 Nov 2013, 18:11
revolution
Actually, the world is not hurting for a new PRNG. There are already lots of great ones out there (and a lot of crappy ones out there too). It is just a matter of pointing people to the right places to help them see that the first PRNG they encounter is not the only type available.

The other point that needs mentioning is that passing all the various tests (like diehard et al) is no guarantee of anything. Lots of PRNGs can pass all the tests. What is more important is the implementation. If it is implemented badly then you lose all the advantages and won't even know it.
Post 02 Nov 2013, 00:05
hopcode
revolution wrote:
...There are already lots of great ones out there...
My idea is to collect them in one thread here, or in the wiki.
I have started the thread Randomness reloaded on the x64lab message board; if anyone is interested, you may visit it.
Some assembly is required Wink
I will attach links, useful papers, some templates of my own to start with, and general results there, from time to time... at ease... no hurry.

Cheers,
Very Happy

Post 02 Nov 2013, 16:46
hopcode
...Morning... Very Happy
I am applying my analysis R script to RND sources, and to Fibonacci (as an example),
against my "pi-cubed" idea for a first visual plotted comparison. Links/tools are at http://board.x64lab.net/viewtopic.php?p=96#p96

Post 11 Nov 2013, 05:35
hopcode
Added a 32-bit Fibonacci LFSR assembly implementation and test.
Compile it with fasm:
http://board.x64lab.net/viewtopic.php?f=2&t=80
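hopcode's implementation is the fasm source at the link; as a language-neutral sketch of the same construction, here is a 32-bit Fibonacci LFSR in Python using the well-known maximal-length taps 32, 22, 2, 1 (an assumption; the linked code may use different taps).

```python
# 32-bit Fibonacci LFSR sketch with taps 32, 22, 2, 1
# (polynomial x^32 + x^22 + x^2 + x + 1, a known maximal-length choice).
def lfsr32(state: int) -> int:
    # feedback bit = XOR of the tap positions (0-indexed: 31, 21, 1, 0),
    # shifted in at the top while the register shifts right
    fb = (state ^ (state >> 1) ^ (state >> 21) ^ (state >> 31)) & 1
    return ((state >> 1) | (fb << 31)) & 0xFFFFFFFF

state = 0xACE1                 # any nonzero seed works
bits = []
for _ in range(8):
    state = lfsr32(state)
    bits.append(state & 1)     # emit the low bit each step
print(bits)
```

A Fibonacci LFSR XORs several taps together to form one feedback bit; the Galois form, which applies the taps in parallel during the shift, is usually the faster choice in assembly.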
OK, I won't post any more updates about "Randomness reloaded" here; you know the link.
Happy tweaking
Very Happy

Post 12 Nov 2013, 18:24


Copyright © 1999-2020, Tomasz Grysztar.