flat assembler
Message board for the users of flat assembler.

Index > Heap > 192 GB Tower of Power

madmatt



Joined: 07 Oct 2003
Posts: 1045
Location: Michigan, USA
Now THIS is a GAMING computer!! Cool Cool: http://blogs.zdnet.com/gadgetreviews/?p=2617

*EDIT*
Quote:
So just how much does 192GB cost? Well, you do the math: $1,800 for hardware plus $40,800 for 12 RAM modules plus $8,000 or so for Dell’s customary 20% markup. Max that sucker out, and you’re looking at a potential $50,000 price tag.

There's ALWAYS a down side! Very Happy

_________________
Gimme a sledge hammer! I'LL FIX IT!
Post 28 Mar 2009, 00:17
revolution
When all else fails, read the source


Joined: 24 Aug 2004
Posts: 17342
Location: In your JS exploiting you and your system
Do people really waste their money on this kind of stuff? What game can make good use of it anyway?
Post 28 Mar 2009, 03:47
f0dder



Joined: 19 Feb 2004
Posts: 3170
Location: Denmark
Not a single game of today could.

And even moving game data files to a ramdisk doesn't help that much wrt loading speed (tested with farcry2... yes, I have enough RAM) for a lot of games.

But hey, 192GB in an x86 workstation would kinda give you some bragging rights Smile
Post 28 Mar 2009, 06:31
madmatt



Joined: 07 Oct 2003
Posts: 1045
Location: Michigan, USA
Imagine the power it would take to run this thing maxed all the way Shocked , 192GB ram, a top of the line video and sound card, surround sound speaker system, HD monitor, etc! You'll see the city lights flickering for miles around! Very Happy
Post 28 Mar 2009, 07:41
tom tobias



Joined: 09 Sep 2003
Posts: 1320
Location: usa
"conventional"-- i.e. 'typical' or 'ordinary', not superdeluxe, custom designed, extravagantly expensive--desktop computer:

year -------------conventional amount of memory in gigabytes RAM

1980.................0.004
2005.................4.0
2030.................4,000.0

Most of the computing solutions, here at FASM forum, and throughout the world, continue to be based upon the 1980's model of memory availability, hence, one continues to read, as recently as last week here on FASM forum, about how many "bytes" of memory could be saved by employing this or that instruction, with this or that shortcut, to "save time", meaning, to reduce the execution time from 0.004 milliseconds to 0.003 milliseconds, with a consequence that the operator of the program perceived exactly zero difference between the two solutions, but if the slower version, which required more memory (gasp, horrors, eek there's a mouse in the house) happened to be more intuitive, and less cryptic, think how much more easily it would be modified by future users..........
Ah, some folks never change....
Smile
Post 29 Mar 2009, 10:54
revolution
When all else fails, read the source


Joined: 24 Aug 2004
Posts: 17342
Location: In your JS exploiting you and your system
tom tobias wrote:
Most of the computing solutions, here at FASM forum, and throughout the world, continue to be based upon the 1980's model of memory availability, hence, one continues to read, as recently as last week here on FASM forum, about how many "bytes" of memory could be saved by employing this or that instruction, with this or that shortcut, to "save time", meaning, to reduce the execution time from 0.004 milliseconds to 0.003 milliseconds, with a consequence that the operator of the program perceived exactly zero difference between the two solutions, but if the slower version, which required more memory (gasp, horrors, eek there's a mouse in the house) happened to be more intuitive, and less cryptic, think how much more easily it would be modified by future users..........
Your usage model assumes particular things that are not in evidence. I completely agree with you IF each and every piece of code is run only a few times. BUT not every piece of code is run under that model. Some code is run quadrillions of times in some applications. Shaving a microsecond or more here and there can give a huge leap in performance in those cases. Your example above shows a 33% boost in throughput (4us --> 3us), which could mean that the runtime situation only has to install 3/4 of the previously needed blade servers to meet the same performance level. This would save capital costs, electricity costs and maintenance costs. SDRAM capacity may be larger today but the L1 cache is still quite tiny; saving bytes in the critical loops can mean the difference between fitting in the cache or not, potentially saving time-expensive accesses to the relatively slow L2 or the even slower SDRAM modules.
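The blade-server arithmetic in that post can be sketched in a few lines. The operations-per-second target below is a made-up illustrative number, not anything from the thread; only the 4us/3us figures come from the post:

```python
# Sketch of the throughput argument: cutting per-operation time from
# 4 us to 3 us means each server does the same work in 3/4 the time,
# so only 3/4 as many servers are needed for the same load.

OPS_REQUIRED = 1_000_000      # hypothetical target ops/second for the service
slow_op_s = 4e-6              # 4 microseconds per operation (original code)
fast_op_s = 3e-6              # 3 microseconds per operation (optimised code)

servers_slow = OPS_REQUIRED * slow_op_s   # server-seconds of work per second
servers_fast = OPS_REQUIRED * fast_op_s

print(servers_fast / servers_slow)        # 0.75 -> a quarter of the fleet saved
```

The ratio is independent of the load figure, which is why the post can talk about "3/4 of the previously needed blade servers" without naming a workload.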
Post 29 Mar 2009, 12:43
tom tobias



Joined: 09 Sep 2003
Posts: 1320
Location: usa
revolution wrote:
...saving bytes in the critical loops can mean the difference between fitting in the cache or not, potentially saving time-expensive accesses to the relatively slow L2 or the even slower SDRAM modules.
Thank you for this excellent rejoinder. As FASM forum members will remember, revolution is one of the pre-eminent supporters of use of FASM on architectures OTHER than Intel x86. Therefore, I found this comment of particular significance, for, the crux of the message is related NOT to memory, but rather to cpu architecture.

By "cache", revolution here refers to private memory, i.e. memory available to only one cpu, or only one "core", and not to any other, nor to the DMA controller.

Again we note the fetish of conserving memory so that the entire program would "fit" into cache, a tic or mannerism which to me, is just remarkable. Here's a guy, (I am not certain of that!) very talented, obviously gifted, yet, stuck back in the era of 1980's computing. As brilliant as this chap is, he cannot imagine a day when "cache" is obsolete, when each cpu or core, has enormous quantities of memory available, all at the same high speed access, as "cache" is today. Yes, saving memory was a good rationale, about 30 years ago. I do not accept the validity of that rationale today.
Post 29 Mar 2009, 20:30
revolution
When all else fails, read the source


Joined: 24 Aug 2004
Posts: 17342
Location: In your JS exploiting you and your system
But this use of memory in the L1-cache is still very important today. If L1 gets bigger the CPU makers have to make it slower to access so it becomes a trade-off. They can't keep it small to make it fast and at the same time keep it large to make it useful.

I have two applications today that make use of the L1 cache to obtain the proper performance level. If the L1 cache was slower, or the code didn't fit into the L1, then there would need to be more CPUs in the system to make up for the performance drop. As you can imagine this would increase costs and complexity. So, instead, a cheaper and more reliable solution was to optimise the code for size and speed.

There is no reason to expect that every application revolves around a user clicking buttons and waiting for feedback on the display. In most cases that usage model, as I have already agreed, cannot make good profit from optimised coding techniques. But that is only one usage scenario and I would also argue that it is not the most common usage scenario. I think by far more code and CPUs are used in embedded, space limited, applications. Embedded CPUs come in all sorts of types which include x86 based and non-x86 based systems.

I'm not sure if you are aware of the speed differences for the various memories. So just to give an idea: L1 access time ~ 3 clock cycles, L2 ~ 9, SDRAM ~ 40-50, HDD >100000. SDRAM is slow, and making it larger doesn't help matters. I don't care if it becomes 1 petabyte, that won't help me in my embedded applications.
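Those latency figures can be turned into an average-access-time estimate. The hit rates below are invented for illustration (the post gives only the per-level latencies); the point is just how strongly the average is dominated by L1 as long as the working set fits:

```python
# Effective memory access time from the latencies quoted in the post
# (L1 ~ 3 cycles, L2 ~ 9, SDRAM taken as ~ 45, the middle of 40-50).
# The hit fractions are hypothetical, not measurements.

L1, L2, SDRAM = 3, 9, 45          # cycles per access at each level
l1_hit, l2_hit = 0.95, 0.04       # assumed hit fractions; remainder goes to SDRAM

avg = l1_hit * L1 + l2_hit * L2 + (1 - l1_hit - l2_hit) * SDRAM
print(avg)   # ~3.66 cycles on average while the code fits in L1
```

Push even a few percent more accesses out to SDRAM and the average climbs fast, which is the whole argument for keeping critical loops small.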
Post 30 Mar 2009, 00:44
tom tobias



Joined: 09 Sep 2003
Posts: 1320
Location: usa
revolution wrote:
...SDRAM is slow, and making it larger doesn't help matters. I don't care if it becomes 1 petabyte, that won't help me in my embedded applications.

"slow" ???
slower than what? The notion of "cache" became popular, when, 15 years ago? At what speed did the ordinary desktop (not embedded) computer operate, when cache was first introduced?

Were not the first Pentium CPUs out there in the early 90's running at a speed of 60 MHz? Which improvement accounted for the greatest reduction in execution time, cache, or CPU speed? If cache, according to your data, requires 3 clock cycles, while SDRAM requires fifteen times as many clock cycles, then does this improvement outweigh the delta comparing CPU speed from the era when cache was introduced to the present time? Hint: NO. 100 MHz is 20 times slower than the slowest current desktop computer.

Embedded applications: wow. There's a whole kettle of fish:

first of all, embedded applications RARELY suffer from speed problems. Typically, the engineering task is rather to REDUCE costs, not execution times.

secondly, the thread topic of interest was focused on a "huge" memory for a desktop computer, not an embedded design.

Thirdly, I have never before encountered an embedded design problem for which a particular cpu was eliminated because its cache memory was inadequate. Back in the olden times, dare I write, "the prehistoric era" in which I dwell, we solved "embedded" problems with controllers, flip flops, nand gates, and other discrete components, rarely requiring CISC cpus. We DID use CISC cpus for embedded applications, BUT NOT because of the cache, but because of the superiority of the tools, emulators, editors, compilers, all of which facilitated getting the product to market faster. The critical time thus being not simple program execution, for which embedded applications are renowned, but rather human time reduction, human labor costs decreased: those were the design imperatives, not cache memory latency versus sdram access time....

Fourthly, and finally, electric power reduction, for 98% of all embedded applications, in my experience, is a thousand fold more important as a consideration, than the speed of cache versus sdram. Since cache rich cpus consume a lot more electricity than cpus sans cache, guess which design wins in 99% of all embedded applications: THE LEAST EXPENSIVE.

With regard to your main point, that sdram is fifteen times slower to access than cache, allow me to speculate for the future:
a. newly designed "cpu" integrated into the memory controller itself, will change that relationship, ultimately the entire memory will be cache.
b. in the intermediate future, cpu architecture dictates the relationship between motherboard components, including sdram, hence, newer cpu designs will reduce the existing bottleneck, and sdram access times will decrease in newer cpus in the future.
c. the main point, for today, is simple: in the case of cpu-memory architecture issues, it is wise to commence with analysis of the time REQUIRED to complete a particular task, rather than to commence with a design based on a prejudice of trying to REDUCE the execution time, given that MOST tasks are already executed more quickly than the human operator can respond. In other words, saving clock cycles must be demonstrated as having utility. If, on the other hand, as is GENERALLY the case, one saves clock cycles at the expense of readability, then future modification of the code becomes impossible, and COSTS SKYROCKET.

Money, money, money.
Post 30 Mar 2009, 10:31
revolution
When all else fails, read the source


Joined: 24 Aug 2004
Posts: 17342
Location: In your JS exploiting you and your system
tom tobias wrote:
first of all, embedded applications RARELY suffer from speed problems
Agreed, but it seems you do acknowledge that sometimes they do. Good, because in my case they definitely do, and that is the case I am discussing. I don't think it is fair to assume that everyone here encounters the same usage requirements that you do. I haven't programmed for a PC platform for the past 9 months, but that doesn't mean I have not programmed for anything. Quite the opposite: I have spent nearly all my time making embedded tasks run faster with optimisations to save power and time. And why is saving power important? Because they run on batteries and have to meet certain QOS standards without draining the battery in 5 minutes.
tom tobias wrote:
Since cache rich cpus consume a lot more electricity than cpus sans cache ...
My experience with this clearly differs from yours. With cache the CPUs consume much less power for a given task. This is because accessing external data buses is more power-expensive than the internal cache buses. The thing that is perhaps confusing you is that generally we ask the cached CPU to do more tasks (because they can) and thus appear to use more power, but in fact each individual task takes less power.
tom tobias wrote:
given that MOST tasks are already executed more quickly than the human operator can respond
Assumes that the system actually has human operators. Once again, not the usage model I am discussing. Your comments tend to suggest that the whole programming world is a PC with a human operator pushing buttons. Will you at least acknowledge that others may have differing usage models?
tom tobias wrote:
Money, money, money.
Must be funny ...



*EDIT*

Here is a nice quote from one of the processor manuals:
Quote:
For example, suppose a program is accessing data items D1, D2, ..., Dn cyclically, and that all of these data items happen to use the same cache set. With round-robin replacement in an m-way set-associative cache, the program is liable to get:

• nearly 100% cache hits on these data items when n ≤ m
• 0% cache hits as soon as n becomes m+1 or greater.

In other words, a minor increase in the amount of data being processed can lead to a major change in how effective the cache is.
This is why fitting code/data into the cache is important. Once you become bigger than your cache then the cache becomes much less useful. This is why saving bytes in code can be important in certain situations.
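The cliff described in that manual quote is easy to reproduce with a toy model: one m-way set with round-robin replacement, accessed cyclically by n items that all map to it. This is a sketch of the textbook behaviour, not a model of any particular CPU's cache:

```python
# Simulate one m-way set-associative cache set with round-robin
# replacement, accessed cyclically by data items D1..Dn that all
# map to the same set (the scenario from the manual quote above).

def hit_rate(n, m, rounds=100):
    ways = [None] * m          # contents of the m ways of the set
    victim = 0                 # round-robin replacement pointer
    hits = accesses = 0
    for _ in range(rounds):
        for item in range(n):  # cyclic access pattern D1, D2, ..., Dn
            accesses += 1
            if item in ways:
                hits += 1
            else:
                ways[victim] = item        # evict the round-robin victim
                victim = (victim + 1) % m
    return hits / accesses

print(hit_rate(4, 4))  # 0.99 -- everything fits, only the 4 cold misses
print(hit_rate(5, 4))  # 0.0  -- each miss evicts exactly the item needed next
```

Going from n = m to n = m + 1 drops the hit rate from near 100% to exactly 0%, which is why "a minor increase in the amount of data" can have such a drastic effect, and why shaving bytes to stay under the limit can matter.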

I really don't understand tom tobias' argument that SDRAM can become the cache of the future. I would like to suggest that tom tobias try a test and disable the cache in his PC and see how much of a slowdown is perceived. You can't discount the benefit of cache and further you also can't discount the benefit of properly optimised code when used in the proper environment.


Last edited by revolution on 05 Apr 2009, 18:59; edited 1 time in total
Post 30 Mar 2009, 11:22
Borsuc



Joined: 29 Dec 2005
Posts: 2466
Location: Bucharest, Romania
tom tobias wrote:
"conventional"-- i.e. 'typical' or 'ordinary', not superdeluxe, custom designed, extravagantly expensive--desktop computer:

year -------------conventional amount of memory in gigabytes RAM

1980.................0.004
2005.................4.0
2030.................4,000.0

Most of the computing solutions, here at FASM forum, and throughout the world, continue to be based upon the 1980's model of memory availability, hence, one continues to read, as recently as last week here on FASM forum, about how many "bytes" of memory could be saved by employing this or that instruction, with this or that shortcut, to "save time", meaning, to reduce the execution time from 0.004 milliseconds to 0.003 milliseconds, with a consequence that the operator of the program perceived exactly zero difference between the two solutions
Whoa, slow down a bit will ya? Who said we don't perceive differences? I mean, that's precisely what measuring how long it takes does, right? Print some information on the screen -- the time it took to execute something, down to millisecond precision -- which is perceived by us (I mean that information). Twisted Evil

Ha gotcha!

tom tobias wrote:
but if the slower version, which required more memory (gasp, horrors, eek there's a mouse in the house) happened to be more intuitive, and less cryptic, think how much more easily it would be modified by future users..........
Not to show disrespect, but we are supposed to make better software. As evolution dictates, the weaker (and slower) software should be eliminated.

Let me give you some advice: "You give a man a fish, you'll help him eat for a day. You teach him how to fish, and you'll help him eat for the rest of his life."

Apply that here. "You give a man a simple-to-understand program, he'll remain dumb when reading optimized/better (cryptic in your opinion) code FOREVER."

I mean, you don't seriously suggest that having someone translate a language into terms you can understand is going to help you more than learning that language yourself?

Think about this tom: how are you SUPPOSED to be able to READ optimized code when you DON'T EVEN WANT TO LEARN IT? You have to USE it to LEARN it. Not AVOID it like you do. Why are you surprised you don't understand it then? LMAO.

How are you going to improve if you are not willing to do it? Cryptic code is also useful for stimulating the brain and "exercising" it, so it can make sense even out of code that is hard to read for "normal" untrained people.

And we all know, those who don't exercise/train their brains don't deserve it, according to evolution theory Razz

If anything, such "cryptic" code is useful for improving your brain capacity. Period.



EDIT: You need an imperfect body, something that makes you think 5 times slower, so you can see how good an "unoptimized" design looks like. If you then say that you are unfortunate for being that way, I'll reply with "Man, Nature just wanted your design to be 'easy to understand' you know?" Razz

_________________
Previously known as The_Grey_Beast
Post 30 Mar 2009, 15:39
Azu



Joined: 16 Dec 2008
Posts: 1159
tom tobias wrote:
"conventional"-- i.e. 'typical' or 'ordinary', not superdeluxe, custom designed, extravagantly expensive--desktop computer:

year -------------conventional amount of memory in gigabytes RAM

1980.................0.004
2005.................4.0
2030.................4,000.0

Most of the computing solutions, here at FASM forum, and throughout the world, continue to be based upon the 1980's model of memory availability, hence, one continues to read, as recently as last week here on FASM forum, about how many "bytes" of memory could be saved by employing this or that instruction, with this or that shortcut, to "save time", meaning, to reduce the execution time from 0.004 milliseconds to 0.003 milliseconds, with a consequence that the operator of the program perceived exactly zero difference between the two solutions, but if the slower version, which required more memory (gasp, horrors, eek there's a mouse in the house) happened to be more intuitive, and less cryptic, think how much more easily it would be modified by future users..........
Ah, some folks never change....
Smile
If your program consists of just one single instruction, then yes, the difference will obviously be unnoticeable.

Most programs nowadays consist of LOTS of instructions, though. So that "0.004 milliseconds to 0.003 milliseconds" becomes "40 minutes to 30 minutes".


That "8.0 bytes to 2.0 bytes" of disk space becomes "80 gigabytes to 20 gigabytes".

Etc.


But, no.. it would make the code look (gasp, horror) TECHNICAL! OH NOEZ... because it's not like you could just use the equ (or, god forbid, macro) statements to just make your code look however you want while not being bloated to hell, right?
That would be too "new" and "weird".

Ah, some folks never change....
Sad


tom tobias wrote:
revolution wrote:
L1 access time ~ 3 clock cycles, L2 ~ 9, SDRAM ~ 40-50, HDD >100000. SDRAM is slow, and making it larger doesn't help matters. I don't care if it becomes 1 petabyte, that won't help me in my embedded applications.

"slow" ???
slower than what?
Slower than the things that he just listed as being faster than it, maybe?

Consider hiring a professional translator to read posts for you instead of relying on Babel Fish.

Or take some basic English classes.


tom tobias wrote:
The notion of "cache" became popular, when, 15 years ago?
Throw out your car, then. It relies on the notion of "combustion" (which was discovered by cavemen, thousands of years ago)..


tom tobias wrote:
Fourthly, and finally, electric power reduction
Guess which program consumes less power.. The 10 megabyte one that requires billions of clock cycles per second, or the 2 megabyte one that does the same thing with a few million clock cycles per second?


tom tobias wrote:
"cpu" integrated into the memory controller itself, will change that relationship, ultimately the entire memory will be cache.
Don't you mean memory controller integrated into the CPU? And SDRAM is still much slower.

tom tobias wrote:
the main point, for today, is simple: in the case of cpu-memory architecture issues, it is wise to commence with analysis of the time REQUIRED to complete a particular task, rather than to commence with a design based on a prejudice of trying to REDUCE the execution time, given that MOST tasks are already executed more quickly than the human operator can respond.
I hope people like you are NEVER allowed to mess with the DirectX and/or OpenGL source codes. Please, God, please.


I can see it now.. "ya sure all 3D apps will only ever run at 2 FPS but who cares at least it was easy to make lol!".. Shocked


Hint;
Production time = only done once for the program. In the case of libraries, not even done for every program.
Execution time = done over, and over, and over, constantly, for everyone using it, until it is obsoleted.


Last edited by Azu on 05 Apr 2009, 09:03; edited 1 time in total
Post 04 Apr 2009, 09:49
Borsuc



Joined: 29 Dec 2005
Posts: 2466
Location: Bucharest, Romania
How does the memory controller on the chip make RAM so much faster? It's still limited in speed because it's far away and needs its own clock, since it doesn't come pre-packaged with the CPU. Therefore it is a lot slower than the cache.

And light speed is still a limiting factor. It's a lot when you consider the limits being pushed at the quantum level. Any electric field "moves" at the speed of light.

I think what it does is offload the chipset load when, for example, stuff has to be sent to the video card and RAM simultaneously...

This was addressed to tom btw.
Post 04 Apr 2009, 19:24
mattst88



Joined: 12 May 2006
Posts: 260
Location: South Carolina
Azu wrote:
Slower then the things that he just listed as being faster then it, maybe?

Consider hiring a professional translator to read posts for you instead of relying on Babel Fish.

Or take some basic English classes.


Dude, really?

_________________
My x86 Instruction Reference -- includes SSE, SSE2, SSE3, SSSE3, SSE4 instructions.
Assembly Programmer's Journal
Post 04 Apr 2009, 23:03
LocoDelAssembly
Your code has a bug


Joined: 06 May 2005
Posts: 4633
Location: Argentina
Thanks for posting that mattst88. I was always wondering whether those who type "then" are making a typo or using it as a synonym of "than" when the context allows it, something similar to using "an" instead of "a" when the next word starts with a vowel.

So, is it always a typo?
Post 04 Apr 2009, 23:37
revolution
When all else fails, read the source


Joined: 24 Aug 2004
Posts: 17342
Location: In your JS exploiting you and your system
LocoDelAssembly wrote:
Thanks for posting that mattst88. I was always wondering whether those who type "then" are making a typo or using it as a synonym of "than" when the context allows it, something similar to using "an" instead of "a" when the next word starts with a vowel.

So, is it always a typo?
Generally I think yes. I usually just ignore typos, I try to read past then and get the intent of the message. I make a lot of then myself also but I don't mind being shown my errors and will correct then if I spot then.

A different error is shown here: then <---> them. There are lots of ways to make typos

[edit]Reason: Correct my typo :p


Last edited by revolution on 05 Apr 2009, 15:37; edited 1 time in total
Post 05 Apr 2009, 00:55
mattst88



Joined: 12 May 2006
Posts: 260
Location: South Carolina
LocoDelAssembly:

Yes, it's wrong. Almost always this mistake is made because the person actually doesn't know the difference. That is, they don't hear a difference in the two words when spoken.

than is used with comparisons. E.g., FASM has fewer bugs now than it had last year.

then is used in conditional phrases and other situations. E.g., If I make lots of English mistakes, then I shouldn't tell others to learn English.

LocoDelAssembly and revolution,

I'd say it is not a typo in the vast majority of cases -- the user simply does not know the difference in the two words.
Post 05 Apr 2009, 15:26
revolution
When all else fails, read the source


Joined: 24 Aug 2004
Posts: 17342
Location: In your JS exploiting you and your system
But I have always known the difference between 'then' and 'than' but I still make the occasional typing error. Also, I know the difference between 'your' and 'you're', but again I occasionally make the typing error and put the wrong one.

I find that I am often thinking ahead to the next sentence while I am typing the current one and don't fully concentrate on getting the proper spelling down. It is just a case of being too quick to type and not taking time to read it thoroughly.
Post 05 Apr 2009, 15:44
mattst88



Joined: 12 May 2006
Posts: 260
Location: South Carolina
revolution wrote:
But I have always known the difference between 'then' and 'than' but I still make the occasional typing error. Also, I know the difference between 'your' and 'you're', but again I occasionally make the typing error and put the wrong one.


In that case, it is a typo. I was making the distinction between a typo and not knowing the difference in the words.

I'm not interested in doing it myself, but I'd be willing to bet that, for example, in Azu's 229 posts (at the time of this writing) he has made the then instead of than mistake in the vast majority of cases.

_________________
My x86 Instruction Reference -- includes SSE, SSE2, SSE3, SSSE3, SSE4 instructions.
Assembly Programmer's Journal
Post 05 Apr 2009, 18:39
revolution
When all else fails, read the source


Joined: 24 Aug 2004
Posts: 17342
Location: In your JS exploiting you and your system
Well, let's hope that some others reading here have been educated a small amount and they now know the difference between then and than.

OR, in typo mode:

Well, lets hope thet some other reading hear have bene educated a smell amount and thay now no the difference between then and then.
Post 05 Apr 2009, 19:19


Copyright © 1999-2020, Tomasz Grysztar. Also on YouTube, Twitter.

Website powered by rwasa.