flat assembler
Message board for the users of flat assembler.
Windows > Rewrite GDI32 function
Kazyaka 21 Nov 2011, 16:42
Hello,
I have been studying the use of bitmaps for some time. I'm using GDI32.dll and I wonder whether it is possible to rewrite one function from this library (written in C), plus a few of its subfunctions, directly in my program. It would speed it up a lot! I've searched for the GDI32 source code and found this: http://source.winehq.org/WineAPI/gdi32.html The function I want to rewrite is SetDIBitsToDevice. What do you think about my idea? Maybe someone knows a faster method of displaying a bitmap on screen from a bit array?
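For reference, here is a minimal C sketch (not from the thread) of what such a SetDIBitsToDevice call looks like for a 32-bpp top-down pixel buffer; the buffer name, dimensions and helper name are made up, and the HDC is assumed to come from GetDC() or BeginPaint():
Code:
#include <windows.h>

#define WIDTH  640
#define HEIGHT 480

static DWORD pixels[WIDTH * HEIGHT];    /* 0x00RRGGBB values, stored top-down */

/* Copy the whole pixel buffer to the device context in one call. */
void blit_pixels(HDC hdc)
{
    BITMAPINFO bmi;
    ZeroMemory(&bmi, sizeof(bmi));
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = WIDTH;
    bmi.bmiHeader.biHeight      = -HEIGHT;   /* negative height = top-down DIB */
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    SetDIBitsToDevice(hdc,
                      0, 0,                  /* destination x, y */
                      WIDTH, HEIGHT,         /* width, height to copy */
                      0, 0,                  /* source x, y */
                      0, HEIGHT,             /* first scan line, number of lines */
                      pixels, &bmi, DIB_RGB_COLORS);
}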
AsmGuru62 21 Nov 2011, 17:09
I am thinking that GDI takes advantage of hardware acceleration - how can you be faster than that?
AsmGuru62 22 Nov 2011, 14:44
It is most likely possible, but I've never done it, so I can't advise on that.
Personally, I find the simple BitBlt() API fast enough. In reality, how many CPU cycles can you really save by going to the hardware without GDI32?
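For comparison, a rough C sketch (not from the thread) of the BitBlt() path mentioned above: draw into a memory DC backed by a bitmap, then copy it to the window DC. The function and parameter names are illustrative only:
Code:
#include <windows.h>

/* Copy a prepared back-buffer bitmap onto the window's DC. */
void blit_backbuffer(HDC hdcWindow, HBITMAP hBackBuffer, int width, int height)
{
    HDC hdcMem = CreateCompatibleDC(hdcWindow);
    HGDIOBJ old = SelectObject(hdcMem, hBackBuffer);

    /* 1:1 copy of the back buffer onto the window */
    BitBlt(hdcWindow, 0, 0, width, height, hdcMem, 0, 0, SRCCOPY);

    SelectObject(hdcMem, old);
    DeleteDC(hdcMem);
}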
Kazyaka 22 Nov 2011, 14:54
@AsmGuru62
I need to do a speed test to find out how many cycles I can save. I think I'll need help from someone experienced. I prefer SetDIBitsToDevice; it's the fastest method of displaying a bitmap using GDI32.
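A possible shape for such a speed test (a C sketch, not from the thread): time a batch of calls with QueryPerformanceCounter and average. blit_pixels() stands in for whatever display routine is being measured:
Code:
#include <windows.h>
#include <stdio.h>

void blit_pixels(HDC hdc);   /* hypothetical routine under test (see earlier sketch) */

void time_blit(HDC hdc)
{
    LARGE_INTEGER freq, t0, t1;
    const int N = 1000;
    int i;

    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);
    for (i = 0; i < N; i++)
        blit_pixels(hdc);
    GdiFlush();              /* flush any batched GDI calls before stopping the clock */
    QueryPerformanceCounter(&t1);

    printf("%.3f ms per call\n",
           (double)(t1.QuadPart - t0.QuadPart) * 1000.0 / freq.QuadPart / N);
}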
pabloreda 22 Nov 2011, 18:45
I use SetDIBitsToDevice and it is really fast, BUT if you find a better approach, count on me for testing.
Under WINE (Linux) I have a speed problem, but I'm not sure whether it is this call. If you want to go deeper, take a look at http://www.directfb.org/index.php
f0dder 22 Nov 2011, 21:53
Kazyaka wrote: I've searched for some simple function and I've analyzed Sleep (Kernel32). Using it looks like:
Instead of approaching optimization from an "oh, this theoretically seems to have a lot of overhead, let me try optimizing it right away!" angle, you should measure where your performance bottlenecks are and start optimizing there. If it turns out you spend any substantial time in SetDIBitsToDevice(), you probably aren't doing much at all in your own code.
_________________
- carpe noctem
Kazyaka 23 Nov 2011, 15:09
Thanks for your comments. They gave me a lot to think about.
@f0dder About the delay in micro- and milliseconds: it was only a simple example, I won't use it. If someone has something to add, feel free to post.
f0dder 23 Nov 2011, 16:08
Kazyaka wrote: About the delay in micro- and milliseconds: it was only a simple example, I won't use it.
It's also relevant for other situations, though, where it might seem more reasonable to optimize code; for instance *A versus *W API calls. When you call a *A API, it converts your ASCII to UTF-16, calls the *W version, and converts the results (if any) back to ASCII. This seems pretty wasteful, but in real-life situations you're unlikely to be able to benchmark any noticeable speed difference. So if you have working ASCII code, don't spend time rewriting it to Unicode for speed benefits - at least not without benchmarking your code and making sure that the A->W->A overhead is measurable and large enough to warrant the time spent. (But do consider doing it anyway for internationalization reasons, if applicable.)
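As a small illustration of the *A/*W pair described above (a C sketch, not from the thread; the file name is arbitrary), both calls end up doing the same work - the A version merely converts the string to UTF-16 and forwards to the W version:
Code:
#include <windows.h>

void ansi_vs_wide(void)
{
    /* ANSI entry point: the string is converted to UTF-16 internally,
       then the W version does the actual work. */
    HANDLE hA = CreateFileA("test.txt", GENERIC_READ, FILE_SHARE_READ,
                            NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);

    /* Wide entry point: no conversion step. */
    HANDLE hW = CreateFileW(L"test.txt", GENERIC_READ, FILE_SHARE_READ,
                            NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);

    if (hA != INVALID_HANDLE_VALUE) CloseHandle(hA);
    if (hW != INVALID_HANDLE_VALUE) CloseHandle(hW);
}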