flat assembler
Message board for the users of flat assembler.

Index > Windows > Disable SMT for my process?

Azu 04 Sep 2009, 09:41
Tomasz Grysztar wrote:
Azu wrote:
lim ceiling(N/16)/N does not equal 1

True, it does not - it equals 1/16.

Azu wrote:
It equals 0.0650406504

You haven't answered my earlier question. Do you know what a limit is?
My point is that it will never be exactly 1/16, since it isn't linear. The higher the n, the closer it gets, but it can never be exactly 1/16 (except, of course, when n is a multiple of 16, like I said; then it might be O(n), but that is only true when it's a multiple of 16.. and you said there can't be arbitrary conditions attached to it, so it can't be O(n); it must work for everything, all the way to infinity)




P.S. if an algorithm can't have an arbitrary limit and still work with big-O notation (and thus my "O(1) SIMD while n is less than 17" doesn't work), then doesn't that prevent any algorithm on the x86 architecture from using big-O, since they are always limited to 32 bits or 64 bits of possible memory to use as input? They all have an arbitrary limitation on how much input! So they should all be incompatible with big-O, always.. and if not, then it is acceptable that my SIMD algorithm "is O(1) when there are under 17 inputs"! Very Happy
Tomasz Grysztar 04 Sep 2009, 09:50
Azu wrote:
My point is it will never be exactly 1/16 since it isn't linear.
It doesn't have to be exactly 1/16. But the limit IS exactly 1/16. That's what the word "asymptotic" is about.
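As an aside, the standard squeeze bound makes this precise (nothing here beyond elementary limits):

\[
\frac{1}{16} \;\le\; \frac{\lceil N/16 \rceil}{N} \;\le\; \frac{N/16 + 1}{N} \;=\; \frac{1}{16} + \frac{1}{N} \;\xrightarrow[N \to \infty]{}\; \frac{1}{16}
\]

The ratio itself never has to equal 1/16 for any particular N; only its limit does.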

Azu wrote:
P.S. if an algorithm can't have an arbitrary limit and still work with big-O notation (and thus my "O(1) SIMD while n is less than 17" doesn't work), then doesn't that prevent any algorithm on the x86 architecture from using big-O, since they are always limited to 32 bits or 64 bits of possible memory to use as input? They all have an arbitrary limitation on how much input! So they should all be incompatible with big-O, always.. and if not, then it is acceptable that my SIMD algorithm "is O(1) when there are under 17 inputs"! Very Happy

That's a different story, and it certainly is possible that there are some algorithms that have better speed in terms of O() than the one we use, but not really for any size of data that we can fit in our computers.

However in most cases the difference will be in favor of lesser complexity much earlier.

Consider two examples. First, an algorithm that for N input bytes needs to make N^2 additions; second, an algorithm that for N input bytes needs to make 1000000000*N additions. The first one has complexity O(N^2), and the second one has complexity O(N).
Clearly, for data smaller than 1000000000 bytes the first one is going to be faster, so we won't use the second one, even though it is linear in theory.

Second example: we have two algorithms, the first one for N input bytes needs to make N^2 additions, and the second one only needs 16*N. The first one would still be faster for inputs smaller than 16 bytes, but the second one is much better for anything bigger.
And sometimes it is the speed on really big inputs that matters - for the small ones the execution time is usually so short that you won't see a difference anyway.

So, O() notation only tells us that O(N) is eventually going to be faster than O(N^2). It doesn't tell us exactly when.

As for algorithms that have the same complexity, like both being O(N) in your example, the O() notation doesn't really tell us anything about which one is faster, or when.
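A minimal sketch in C (illustrative only - it just tabulates the operation counts from the two examples above, it doesn't measure any real algorithm) makes the crossover points visible:

Code:
#include <stdio.h>

/* Operation counts from the examples above:
   A: N*N additions           -> O(N^2)
   B: 1000000000*N additions  -> O(N), huge constant factor
   C: 16*N additions          -> O(N), small constant factor */
int main(void)
{
    const unsigned long long sizes[] = { 8ULL, 16ULL, 1000ULL, 1000000000ULL, 2000000000ULL };
    for (int i = 0; i < 5; i++) {
        unsigned long long n = sizes[i];
        printf("N=%-11llu  N^2=%-20llu  1e9*N=%-20llu  16*N=%llu\n",
               n, n * n, 1000000000ULL * n, 16ULL * n);
    }
    return 0;
}

For N below 16 the N^2 column is the smallest; past 16 it loses to 16*N, and only past 1000000000 does it lose to the linear algorithm with the huge constant.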


Azu 04 Sep 2009, 09:52
But in those examples, whenever you change N, it changes how much work must be done. I think that is why they scale linearly. But for mine, you can change N without changing how many times it runs.. so it is illogical to call mine O(n), I think..


If a function is O(n), then shouldn't it do less work when n=1 than when n=2?


Tomasz Grysztar 04 Sep 2009, 09:53
Azu wrote:
But in those examples, whenever you change N, it changes how much work is done. I think that is why they scale linearly. But for mine, you can change N without changing how many times it goes..

Only locally. And it doesn't really change a thing on the large scale.
Azu 04 Sep 2009, 09:56
That is subjective.

I could say

h=Planck constant
O(N*(N*h))

"Doesn't seem to change much at large scale"

But does that make it O(N*N)? No.. it scales differently..


Tomasz Grysztar 04 Sep 2009, 09:56
Azu wrote:
If a function is O(n) then shouldn't it work less when n=1 than when n=2?
No, O() doesn't say anything about n=1 and n=2. It only says something about asymptotic behavior. It is even possible to have an O(n) function that is sometimes smaller for N+1 than it is for N.
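A made-up cost function (mine, purely for illustration) shows how that can happen:

\[
T(N) = N + 100\,(N \bmod 2), \qquad T(N) \le N + 100 \le 101\,N \ \text{ for } N \ge 1,
\]

so T(N) = O(N), and yet T(5) = 105 is larger than T(6) = 6.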
Azu 04 Sep 2009, 10:00
What is the point of big-O notation if it doesn't say how many times the algorithm needs to run? What does it say then??? Confused



It doesn't reflect how fast it is.
It doesn't reflect how algorithmically complex it is.
It doesn't reflect how big it is.

What does it reflect then?


Tomasz Grysztar 04 Sep 2009, 10:01
Azu wrote:
That is subjective.

I could say

h=Planck constant
O(N*(N*h))

"Doesn't seem to change much at large scale"

But does that make it O(N*N)? No.. it scales differently..

I don't really know what you're trying to say here.
When speaking of "large scale", I meant (perhaps not being precise enough) the asymptotic behavior.
As long as you don't really understand what a limit is, you are not going to understand O() notation either.
Madis731 04 Sep 2009, 10:01
I am REALLY, TRULY sorry for my post about big O-notation Sad I didn't know that my joke would carry you thus far.

Sorry! Sad
Azu 04 Sep 2009, 10:02
By limit you mean that if you make n big enough, it will work the way you say.


I say that no matter how big you make it, it won't. There will always be runs of 16 in a row where the loop count doesn't go up. So it cannot be O(n)..


Tomasz Grysztar 04 Sep 2009, 10:03
Azu wrote:
What is the point of big-O notation if it doesn't say how many times the algorithm needs to run? What does it say then??? Confused

It does say that an algorithm of smaller complexity is going to be faster than one of greater complexity for sufficiently large N. It doesn't say anything about how large this N must be for that to become true, though - you have to figure it out yourself.
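For reference, the standard textbook definition is exactly where that unspecified threshold lives:

\[
f(N) = O(g(N)) \iff \exists\, C > 0 \ \exists\, N_0 \ \forall N \ge N_0 : \ f(N) \le C \cdot g(N)
\]

The definition only promises that some C and N_0 exist; it says nothing about how small they are.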
Tomasz Grysztar 04 Sep 2009, 10:05
Azu wrote:
Even if you took it all the way up to infinity it wouldn't be the same.. unless infinity is a multiple of 16.. so how "large" do you mean? Confused

"Unless infinity is a multiple of 16". Nice one. Smile

Don't you really want to learn anything about mathematical limits? They would really help you understand this, and I don't think it's possible to understand O() without knowing them well.
Azu 04 Sep 2009, 10:06
Tomasz Grysztar wrote:
Azu wrote:
What is the point of big-O notation if it doesn't say how many times the algorithm needs to run? What does it say then??? Confused

It does say that an algorithm of smaller complexity is going to be faster than one of greater complexity for sufficiently large N. It doesn't say anything about how large this N must be for that to become true, though - you have to figure it out yourself.
???


It can't be O(n) then, since in mine n can change without the number of times it runs changing.


Tomasz Grysztar wrote:
Azu wrote:
Even if you took it all the way up to infinity it wouldn't be the same.. unless infinity is a multiple of 16.. so how "large" do you mean? Confused

"Unless infinity is a multiple of 16". Nice one. Smile

Don't you really want to learn anything about mathematical limits? They would really help you understand this, and I don't think it's possible to understand O() without knowing them well.
If you really think it'll help, sure. All I know about them so far is they are a way to make subjective approximations and use them to say "I win" if you think the results you get are "close enough that they may as well be right" (whoever defines that..) in math debates. Idea
Like to say "ceil(N/X)/N=(N/X)/N=N/N/X=1/X so the floor and ceiling functions are useless and can't be used" even though they aren't equal.

Is this a good understanding so far? Or am I missing something?
Borsuc 04 Sep 2009, 15:26
Azu, please think of the derivative, or rather a partial derivative (which is a derivative in ONE variable).

Take "N" in this instance. If you have 1000000000000000000000*N, do you know what the derivative will say about 1000000000000000000000? NOTHING it is ignored because it doesn't change, the result is a constant. As N becomes bigger and bigger (asymptote), the 1000000000000000000000 will be dwarfed and insignificant, no matter how big it is, because it doesn't change.

An O(N) function means that the change is linear: the change in the parameter. Constants don't matter. O(1) means that the function always, absolutely always takes the same amount of work to compute, that is, the same number of steps, because no matter how you change N, O(1) doesn't even use it, and never changes.

You can observe that conditional functions are not described easily with mathematics -- because their behavior might change based on some conditions. An upper limit is also a condition. A function theoretically can accept an infinite number of Ns, bigger and bigger. Theoretically, since that's what O(N) is about.

If you have conditions, it breaks things up. I mean, if you have a function that uses bubble sort for odd N and quicksort for even N, it's impossible to assign it a single Big-O class, because it has conditions.

Anything that does NOT change with N is insignificant as N gets bigger and bigger, that is, as N changes. Anything that doesn't change will be dwarfed in theory. That is why 10000000000*N and N are the same, both linear.

And it doesn't tell you the execution speed. It tells you how complex the function is as you change N.
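One way to state that with limits rather than derivatives (my wording of the same idea):

\[
\lim_{N \to \infty} \frac{c \cdot N}{N} = c \ \text{ (a finite constant for any fixed } c > 0\text{)}, \qquad \lim_{N \to \infty} \frac{N^2}{N} = \infty,
\]

so c*N and N grow at the same rate and are both O(N), while N^2 genuinely outgrows every linear function.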

I bolded the emphasis. Hope it's clearer now. Cheers Smile

Madis731 wrote:
I am REALLY, TRULY sorry for my post about big O-notation Sad I didn't know that my joke would carry you thus far.

Sorry! Sad
Hey why be sorry for bringing up an educational subject? Smile
Azu 04 Sep 2009, 15:31
But with O(10000000000*N), the code/loops/complexity/whatever the hell you call it.. changes every time you change N. So yes, it makes sense to just put O(N) for that.


But if it only changes every 16.. I think it should be O(ceil(N/16)).. since with N changing from 1 through 16 it is still just run once.. but at 17 it becomes twice.. etc..


Borsuc 04 Sep 2009, 15:32
Azu wrote:
But it goes in steps of 16.. if it was O(n), then it would do 16 times less if given 1 input than it would if given 16.. but it does the same either way..
But it is always 16, it doesn't change with N. It is a constant, don't you get it?

The difference in speed will always be a factor of 16, for example. It doesn't change with N, so the relationship is the same when N=1 as when N=99999999: a linear relationship.

If the difference were 2 times when N=2 and 4 times when N=3, then it would be more like an exponential relationship.
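The bounds behind that claim, written out (standard inequalities, not from the post itself):

\[
\frac{N}{16} \;\le\; \left\lceil \frac{N}{16} \right\rceil \;\le\; \frac{N}{16} + 1 \;\le\; N \qquad (N \ge 2),
\]

so ceil(N/16) is squeezed between two linear functions of N: writing O(ceil(N/16)) and writing O(N) name the same complexity class.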
Azu 04 Sep 2009, 15:35
Borsuc wrote:
Azu wrote:
But it goes in steps of 16.. if it was O(n), then it would do 16 times less if given 1 input than it would if given 16.. but it does the same either way..
But it is always 16, it doesn't change with N. It is a constant, don't you get it?

The difference in speed will always be 16 times slower, for example. It doesn't change with N, so it has the same relationship as when N=1 as when N=99999999, a linear relationship.

If, when N=2 the difference was 2, and when N=3, the difference was 4 times, then it was more like exponential relationship.
Put these into a calculator


1/16
2/16
3/16
4/16
5/16
6/16
7/16
8/16
9/16
10/16
11/16
12/16
13/16
14/16
15/16
16/16
17/16


And compare to these


ceiling(1/16)
ceiling(2/16)
ceiling(3/16)
ceiling(4/16)
ceiling(5/16)
ceiling(6/16)
ceiling(7/16)
ceiling(8/16)
ceiling(9/16)
ceiling(10/16)
ceiling(11/16)
ceiling(12/16)
ceiling(13/16)
ceiling(14/16)
ceiling(15/16)
ceiling(16/16)
ceiling(17/16)


And then tell me there is no difference..



You see, in the first one, every time N changes, you get a different result. But in the second one you don't. Because it is a different algorithm...
Borsuc 04 Sep 2009, 15:36
You're working with numbers that are too small. When N approaches infinity (asymptotically), a difference of 16 is insignificant. Smile
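A tiny C sketch (mine, not from the post) that prints ceil(N/16)/N for growing N shows the ratio settling toward 1/16 = 0.0625:

Code:
#include <stdio.h>

/* Print ceil(N/16)/N for growing N that is never a multiple of 16.
   The ratio never equals 1/16 = 0.0625 exactly for these N,
   but it gets arbitrarily close as N grows - that is the limit. */
int main(void)
{
    unsigned long long n = 3;                      /* odd, so never divisible by 16 */
    for (int i = 0; i < 8; i++) {
        unsigned long long loops = (n + 15) / 16;  /* ceil(N/16) */
        printf("N=%-10llu loops=%-9llu loops/N=%.10f\n",
               n, loops, (double)loops / (double)n);
        n = n * 10 + 3;                            /* 3, 33, 333, ... (always odd) */
    }
    return 0;
}

The loop count stays flat over each run of 16, but divided by N it still heads to 1/16 - and that ratio is all the asymptotic classification cares about.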
Azu 04 Sep 2009, 15:38
Way before N approaches infinity, you run out of RAM, or Windows BSODs, or both. This will happen before you get even one tenth of the way to infinity! Razz
Borsuc 04 Sep 2009, 15:39
Mathematics doesn't have RAM, it has abstractions. Big-O tells you the mathematical relationship of the function as it changes.

That means the derivative. If you have c+N, where N increases, and take the derivative, c will be ignored (completely) because it doesn't change. (for ANY number c no matter how big)

The derivative of a constant is 0. O(1) is a constant, hence it doesn't change.
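Written out (standard calculus, nothing specific to this thread):

\[
\frac{d}{dN}\,c = 0, \qquad \frac{d}{dN}\,(c + N) = 1 \quad \text{for any constant } c,
\]

so an additive constant leaves no trace in the rate of change, no matter how large it is.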