flat assembler
Message board for the users of flat assembler.

Index > Main > GPU double precision coding?

Kuemmel



Joined: 30 Jan 2006
Posts: 200
Location: Stuttgart, Germany
Kuemmel 20 Apr 2008, 11:23
Hi people,

I've read at
http://www.theinquirer.net/gb/inquirer/news/2007/10/11/g92-rv670-gpgpu-monsters
that recent graphics cards can do double precision floating point...

...so a possible 'victim' ;) for me to do some fractals on the GPU. Single precision wasn't worth trying... has anybody ever tried GPU coding (even if it's just passing the instructions to the GPU from x86 assembler... however stupid that might be)?

I see some old talk here on the forum about GPUs, but nothing much done... maybe it's something more likely to be written in high-level languages?
edfed



Joined: 20 Feb 2006
Posts: 4353
Location: Now
edfed 20 Apr 2008, 11:55
OpenGL
f0dder



Joined: 19 Feb 2004
Posts: 3175
Location: Denmark
f0dder 20 Apr 2008, 12:59
Google for nvidia's CUDA.

GPUs definitely aren't x86 :) , and the "assembly language" you program in is an abstraction of what the GPU really executes...
kandamun



Joined: 20 Jul 2005
Posts: 25
kandamun 21 Apr 2008, 09:27
I recently read this: http://www.codeproject.com/KB/graphics/GPUNN.aspx
It might not be exactly on topic, but it seems interesting.
revolution
When all else fails, read the source


Joined: 24 Aug 2004
Posts: 20445
Location: In your JS exploiting you and your system
revolution 21 Apr 2008, 12:31
GPUs have some potential for computation-intensive tasks, but you have to make sure you buy one that has all the information available on how to use it. With some of the cards out there you will have a lot of trouble finding all the necessary info.
f0dder 21 Apr 2008, 13:17
revolution wrote:
GPUs have some potential for computation-intensive tasks, but you have to make sure you buy one that has all the information available on how to use it. With some of the cards out there you will have a lot of trouble finding all the necessary info.
Hm? Isn't it generally "CUDA or nothing" (or at least some manual use of shaders)? Or are you suggesting struggling to find the hardware and register information and programming the GPUs to the metal? :)

_________________
carpe noctem
revolution 21 Apr 2008, 13:30
f0dder wrote:
Hm? Isn't it generally "CUDA or nothing" (or at least some manual use of shaders)? Or are you suggesting struggling to find the hardware and register information and programming the GPUs to the metal? :)
Definitely "to the metal"; who programs any other way?
edfed 21 Apr 2008, 13:32
I think that if you want to code for a GPU, as revolution said, you first have to make a selection based on the availability of the programmer's manuals.

Nvidia would be preferred over ATI, for example, because they give out the technical reference for their products; their products are open, and that's why Linux supports Nvidia.

GPUs are microprocessors exactly like x86 chips are, so, just as with x86, you need to know a lot of things before you can code: instruction set, memory mapping, memory model, mechanisms, etc.

OpenGL is a library incorporated in the BIOS of the GPU cards; it is a standard, but it is limited because you cannot access the elementary instructions one by one.
f0dder 21 Apr 2008, 13:38
revolution wrote:
f0dder wrote:
Hm? Isn't it generally "CUDA or nothing" (or at least some manual use of shaders)? Or are you suggesting struggling to find the hardware and register information and programming the GPUs to the metal? :)
Definitely "to the metal"; who programs any other way?
The GPGPU stuff I've seen has either targeted shader assembly language directly or used one of the higher-level shading languages; CUDA seems to be where it's at.

edfed wrote:
Nvidia would be preferred over ATI, for example, because they give out the technical reference for their products; their products are open, and that's why Linux supports Nvidia.
Other way around, dude. ATi have released partial documentation for some of their GPUs (and I couldn't find it with a quick search a couple of months ago); nvidia has only said that they might release something. Intel has released full documentation for some of their integrated graphics chips. Nvidia's Linux drivers are binary-only.

edfed wrote:
OpenGL is a library incorporated in the BIOS of the GPU cards; it is a standard, but it is limited because you cannot access the elementary instructions one by one.
Uuuuh... no. OpenGL is a bunch of software libraries, not part of the card's BIOS.

Kuemmel 21 Apr 2008, 18:12
I found the "Mandelbrot" app at

http://www.nvidia.com/object/cuda_sample_graphics-interop.html

...I just can't run it on my f**king old passively cooled Nvidia chip ;)

Anybody out there care to run it on Linux or Windows and tell us what it is like and how fast? Do they have some speed measurement?

It's really stupid that it's only single precision, as you can't go deep into a Mandelbrot fractal with that poor precision... I also found claims on the net that this single precision isn't even compliant with international standards... it seems I may have to wait a year or so until double precision is finally implemented and available more widely and cheaply...
Borsuc



Joined: 29 Dec 2005
Posts: 2465
Location: Bucharest, Romania
Borsuc 22 Apr 2008, 16:33
Kuemmel wrote:
It's really stupid that it's only single precision, as you can't go deep into a Mandelbrot fractal with that poor precision...
Make an asm program for that in software :P
bitRAKE



Joined: 21 Jul 2003
Posts: 4073
Location: vpcmpistri
bitRAKE 23 Apr 2008, 05:02
He already has: http://board.flatassembler.net/topic.php?t=5122 :roll:
(...and doing quite well, I should say!)

_________________
¯\(°_o)/¯ “languages are not safe - uses can be” Bjarne Stroustrup
Copyright © 1999-2025, Tomasz Grysztar. Also on GitHub, YouTube.

Website powered by rwasa.