flat assembler
Message board for the users of flat assembler.

Index > High Level Languages > gif libraries / examples?

vivik



Joined: 29 Oct 2016
Posts: 671
vivik 20 Apr 2018, 05:51
@DimonSoft
No language but C/C++ is flexible enough to do what I want. Hearing about other languages when talking about decompression and codecs is weird. Other languages trade code quality for development speed (even C itself does, to a lesser extent), so many low-level tricks are considered evil in them; comfort is their main reason for existing.

Build systems kinda suck though, I'm working on it.
DimonSoft



Joined: 03 Mar 2010
Posts: 1228
Location: Belarus
DimonSoft 20 Apr 2018, 16:10
vivik wrote:
@DimonSoft
No language but C/C++ is flexible enough to do what I want. Hearing about other languages when talking about decompression and codecs is weird. Other languages trade code quality for development speed (even C itself does, to a lesser extent), so many low-level tricks are considered evil in them; comfort is their main reason for existing.

I wonder what is so special about C/C++ that other languages are not capable of, especially for compression and multimedia. Both of these fields are absolutely language-agnostic. Is it just that most existing libraries are written in these languages?
vivik



Joined: 29 Oct 2016
Posts: 671
vivik 21 Apr 2018, 05:56
Interpreted and JIT languages (Python, Java, C#, Lisp, Forth) add an extra delay when loading the executable and bring extra dependencies. Golang and D use garbage collection, which introduces an extra pause from time to time. Pascal, Delphi and Fortran are probably comparable in performance; I'm just too lazy to learn them, they ain't that much better.

They have their uses, but they aren't really necessary for small programs; they actually harm small programs. It's not like anybody cares about that, though.

Speed-critical code is usually written in C plus assembly; it usually pays off to tune it manually.

The thing is, I'm currently studying Windows, and my best bet right now is the Windows-provided compilers (that is, the C/C++, Visual Basic and C# compilers). I can't expect other compilers to provide good support for Windows-specific things, like structured exception handling.
DimonSoft



Joined: 03 Mar 2010
Posts: 1228
Location: Belarus
DimonSoft 21 Apr 2018, 08:10
vivik wrote:
Interpreted and JIT languages (Python, Java, C#, Lisp, Forth) add an extra delay when loading the executable and bring extra dependencies. Golang and D use garbage collection, which introduces an extra pause from time to time. Pascal, Delphi and Fortran are probably comparable in performance; I'm just too lazy to learn them, they ain't that much better.

Well, I don’t think garbage languages are suitable either. As for the laziness… Sitting in the world of bad design just because it works and is used by thousands of hamsters? I don’t know, not my way definitely.

I do NOT insist that you should switch to another language; my off-topic post was just about how C[++] folks tend to avoid best practices and then find themselves happily solving the issues that arise.
vivik



Joined: 29 Oct 2016
Posts: 671
vivik 21 Apr 2018, 11:33
I know nothing about those best practices. They usually prefer simplicity to speed, and it looks stupid to me. It's not what I'm learning C for.

What are the best practices for making C libraries? There are probably none, or there are three different ways competing with each other. Such a mess.
DimonSoft



Joined: 03 Mar 2010
Posts: 1228
Location: Belarus
DimonSoft 22 Apr 2018, 08:22
vivik wrote:
I know nothing about those best practices.

Well, something like fixing all the places that cause compiler warnings. Or, referring to your new topic on the forum, checking if all the build configurations actually build.
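For example, a made-up fragment like this compiles silently with default settings but gets flagged as soon as warnings are enabled (say, -Wall -Wextra with GCC or Clang), and fixing such places costs almost nothing:
Code:
#include <stddef.h>

unsigned count_nonzero(const int *v, size_t n)
{
    unsigned hits = 0;
    for (int i = 0; i < n; i++)   /* -Wsign-compare: int compared with size_t */
        if (v[i])
            hits++;
    return hits;
}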

vivik wrote:
They usually prefer simplicity to speed, and it looks stupid to me. It's not what I'm learning C for.

Garbage [collection] languages? Well, maybe. Others definitely don’t. If you prefer speed to simplicity, why not use assembly?

vivik wrote:
What are the best practices for making C libraries? There are probably none, or there are three different ways competing with each other. Such a mess.

Exactly. The funny thing is that very few actually do anything about it besides saying it’s a mess, like switching to other languages that have more specific conventions and are much better at interoperability.
vivik



Joined: 29 Oct 2016
Posts: 671
vivik 22 Apr 2018, 08:41
Wish somebody would pay me for maintaining those libraries.
rugxulo



Joined: 09 Aug 2005
Posts: 2341
Location: Usono (aka, USA)
rugxulo 23 Apr 2018, 06:27
vivik wrote:
By the way, back in 2012 I downloaded a few pieces of pixelart from pixiv. I converted all gif images to png. Most of them became smaller, but a few became bigger instead. Here are those "anomalies".

I don't know what the cause is exactly; I guess the GIF header is smaller than the PNG header. And not all images benefit from PNG's "vertical compression"; the simple horizontal compression of GIF is enough for them.


PNG was meant to replace GIF (whose LZW compression was patented back then, although it's unencumbered nowadays). So PNG didn't use LZW but Deflate instead (which, as you probably know, is also used by gzip and Zip, among others).
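If you're curious, here's a tiny made-up sketch of running Deflate over an indexed-pixel buffer with zlib, which is roughly what a PNG encoder does after filtering each scanline (a GIF encoder would run LZW over the same kind of bytes instead):
Code:
#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>   /* link with -lz */

int main(void)
{
    unsigned char pixels[64 * 64];              /* pretend 64x64 8-bit indexed image */
    for (size_t i = 0; i < sizeof pixels; i++)
        pixels[i] = (unsigned char)(i % 7);     /* some repetitive content */

    uLongf outLen = compressBound(sizeof pixels);
    unsigned char *out = malloc(outLen);
    if (!out || compress2(out, &outLen, pixels, sizeof pixels, 9) != Z_OK)
        return 1;

    printf("%zu bytes -> %lu bytes of Deflate data\n", sizeof pixels, outLen);
    free(out);
    return 0;
}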
rugxulo



Joined: 09 Aug 2005
Posts: 2341
Location: Usono (aka, USA)
rugxulo 23 Apr 2018, 06:37
vivik wrote:
@DimonSoft
No language but C/C++ is flexible enough to do what I want.


Don't be naive, there are many useful languages, most of which can self-compile (e.g. FPC). See the TIOBE Index. There are many people who can live 90% (or even 100%) without C/C++ entirely (ahem, Oberon).

vivik wrote:

Hearing about other languages when talking about decompression and codecs is weird.


How is it weird? If it works, it works. I know of an older port of LZMA to FPC, and they also have Deflate / Zip support (used by their installer, IIRC). I also once tried to encourage GNU Emacs to port some existing (Allegro?? Franz??) Lisp Deflate code so that all its man pages could be auto-(de)compressed.

vivik wrote:

Other languages trade code quality for development speed (even C itself does, to a lesser extent), so many low-level tricks are considered evil in them; comfort is their main reason for existing.


Comfort? As opposed to discomfort? Who prefers explicit discomfort? Yes, tools should facilitate your work, not hinder you.

What tricks can't you live without? I know that everything has limits, even C, but most people seem to get along just fine.

Other languages besides C/C++ are definitely not "slow" (although, as revolution correctly points out every so often, that entirely depends on private implementation details which will absolutely change/regress, without warning, with subsequent releases of CPU, compiler, OS, etc. Thus, such "speed results" are temporary and shouldn't be relied upon; prefer stable and well-tested algorithms instead).

vivik wrote:

Build systems kinda suck though, I'm working on it.


Just simplify the process to where you don't even need a complex build system. No, I don't mean making everything a single file, but avoid obscure tricks. It is possible to write a very simple, yet portable, makefile.
rugxulo



Joined: 09 Aug 2005
Posts: 2341
Location: Usono (aka, USA)
rugxulo 23 Apr 2018, 07:20
vivik wrote:
Interpreted and JIT languages (Python, Java, C#, Lisp, Forth) add an extra delay when loading the executable and bring extra dependencies.


There are native, optimizing compilers for Java, Lisp, Forth, Python, Rexx, etc.

vivik wrote:

Golang and D use garbage collection, which introduces an extra pause from time to time.


IIRC, they have heavily improved it in recent Go releases.

D actually makes g.c. optional in most cases.

vivik wrote:

Pascal, Delphi and Fortran are probably comparable in performance; I'm just too lazy to learn them, they ain't that much better.


I sympathize and don't blame you for not wanting to learn "yet another" language. Just use what you know; it's "good enough" for most things. It's not reasonable to jump ship at every fad.

However, "ain't much better" is slightly wrong. Of course they have some advantages. All Pascal variants have stricter, and safer, arrays, which are thus easier to optimize; C99 added "restrict" to somewhat (manually) mitigate that in C. And of course all Pascal derivatives (beyond the original) have better modularity, though C++ is fairly close to (eventually?) adding modules itself. That speeds up and simplifies builds.
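A trivial made-up example of that C99 workaround: restrict lets the compiler assume dst and src never overlap (roughly the guarantee Pascal-style arrays give it by construction), so it can keep values in registers or vectorize the loop; without it, every store to dst could potentially alias src and force reloads:
Code:
void scale(float *restrict dst, const float *restrict src, int n, float k)
{
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * k;
}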

vivik wrote:

They have their uses, but they aren't really necessary for small programs; they actually harm small programs. It's not like anybody cares about that, though.


Small? What is small? FPC has a smart linker, so they do care. But it's hard to worry when modern computers have L2 and L3 caches several megabytes in size. So even Intel and AMD are aware of it and are helping to reduce any slowdowns.

vivik wrote:

Speed-critical code is usually written in C plus assembly; it usually pays off to tune it manually.


Not unless your backend code generator is really naive, which is rare. Few people are better than the compiler writer(s) themselves. It's not the compiler that's smart but rather the author.

OS, CPU, and compilers can, and will, regress behind your back without warning in future revisions. No amount of testing prevents all bugs and inconveniences. It's best not to focus on semi-private implementation details. Focus on a good algorithm instead. Everything else is volatile. What's fast in one release may become slow later (due to bugfixes, added features, obsoletion, security hardening, or just careless developers). Things change, but it's not always for the better.

vivik wrote:

The thing is, I'm currently studying Windows, and my best bet right now is the Windows-provided compilers (that is, the C/C++, Visual Basic and C# compilers). I can't expect other compilers to provide good support for Windows-specific things, like structured exception handling.


I already mentioned that FPC supposedly supports SEH, but if that's not good enough, okay.

BTW, Windows has many editions, and while it's easy to think of the desktop as pervasive, nothing lasts forever. Even Windows has changed a lot over the years. I'm not saying you can't study it, but don't put all your eggs in one basket, if you can help it.
rugxulo



Joined: 09 Aug 2005
Posts: 2341
Location: Usono (aka, USA)
rugxulo 23 Apr 2018, 07:26
vivik wrote:
Wish somebody would pay me for maintaining those libraries.


Why, so you can buy a faster rig? I have old stuff too, as mentioned, and it's a blessing in disguise. New machines mask slowdowns and inefficiencies, lulling you into a false sense of security. Slower hardware actually encourages better algorithms!

But anyways, know your audience. Go find a place that really needs (or wants) GIFs and PNGs. Presumably FASM isn't a good forum for that (since there is a very small attachment limit).
revolution
When all else fails, read the source


Joined: 24 Aug 2004
Posts: 20289
Location: In your JS exploiting you and your system
revolution 23 Apr 2018, 07:31
rugxulo wrote:
... (although, as revolution correctly points out every so often, that entirely depends on private implementation details which will absolutely change/regress, without warning, with subsequent releases of CPU, compiler, OS, etc. Thus, such "speed results" are temporary and shouldn't be relied upon; prefer stable and well-tested algorithms instead).
Very Happy
DimonSoft



Joined: 03 Mar 2010
Posts: 1228
Location: Belarus
DimonSoft 23 Apr 2018, 21:00
Speaking about changes in performance with new CPU releases, I’d say that these days a lot of the cool and smart tricks that used to be something to boast about in super-optimizing C[++] compilers don’t really pay for themselves anymore: CPU manufacturers optimize for typical code sequences, not for cool tricks.

Especially funny is that most of C[++] optimizations became possible because of undefined behaviour, which has always been more of a gun to shoot yourself with than a useful feature.
revolution
When all else fails, read the source


Joined: 24 Aug 2004
Posts: 20289
Location: In your JS exploiting you and your system
revolution 24 Apr 2018, 01:28
DimonSoft wrote:
Speaking about changes in performance with new CPU releases, I’d say that these days a lot of the cool and smart tricks that used to be something to boast about <snip> CPU manufacturers optimize for typical code sequences, not for cool tricks.
This is true for 64-bit ARM. The instruction set was radically changed from the original 32-bit one. A lot of neat "tricks" became impossible. And the reason for the big change ... the HLL compilers could not generate code that took advantage of the advanced instruction modes. So, in effect, the HLL compilers have shaped the design of the CPU, not the other way around. So now the 64-bit ARM instructions are mostly just basic stuff that HLL compilers find easy to generate.
rugxulo



Joined: 09 Aug 2005
Posts: 2341
Location: Usono (aka, USA)
rugxulo 24 Apr 2018, 04:36
I've mentioned the ENTER/LEAVE slowdown here before (I think). I have at least three old compilers (Oberon-M, 1991; TMT Pascal, 2002; Virtual Pascal, 2004) that all generate those for nested procedures. For whatever reason, it's four times slower on my (barely) modern machines, while it (allegedly) was actually faster on 586 and older CPUs. Only one compiler offers to avoid it, preferring the older 8086-style instructions.

Granted, some compilers don't use it anyways for various reasons (since I think you're limited to like 32 nesting levels), e.g. FPC. (Even TP 5.5 freeware doesn't use it, but I have no idea about newer ones that had $G+ 286 code generation. Weird when unoptimized TP is much faster than 32-bit code.) My point is that it's a regression, and I don't know why. Some obscure bug that could only be fixed in slow microcode??

Dunno, and I don't have any good (tiny, assembly) example to propagate to test on newer CPUs. I'd almost be tempted to post the VP/Win32 .EXE, but then I almost forgot that antiviruses hate everything unorthodox, so it makes no sense to even waste our time. I would love to know if newer CPUs (Coffee Lake or Ryzen++) still have that problem or not.

Of course, it only matters in something executed for a long time or many times; otherwise you won't notice a few extra seconds here or there. So most people probably don't notice. Well, most compilers don't use it anymore. It just seems silly. Micro-optimizations of plain CPU instructions rarely make much difference, so it's almost not worth focusing on, but this is one rare case where it actually is noticeable.

Of course, there are many other reasons for slowdown. And personally, this is all just my curiosity since I don't really "need" the speed. It's better to focus on portability or features or some other practical aspect. Still, I find it weird.
alexfru



Joined: 23 Mar 2014
Posts: 80
alexfru 24 Apr 2018, 05:52
DimonSoft wrote:
Especially funny is that most of C[++] optimizations became possible because of undefined behaviour, which has always been more of a gun to shoot yourself with than a useful feature.


I'm not sure I agree about the word most. Better computers (faster and with more RAM and storage) allow compiler writers to employ more powerful code analysis and optimizations. Then there are some language features that enable more compile-time optimizations like templates, constexpr, etc. IOW, show me the numbers! Smile
DimonSoft



Joined: 03 Mar 2010
Posts: 1228
Location: Belarus
DimonSoft 24 Apr 2018, 07:16
alexfru wrote:
DimonSoft wrote:
Especially funny is that most of C[++] optimizations became possible because of undefined behaviour, which has always been more of a gun to shoot yourself with than a useful feature.


<…> show me the numbers! Smile

No need to show the numbers. Even the simplest cases of undefined behaviour, like “signed integer overflow is UB”, make it possible to optimize out checks like if (a + 1 > a). Not to mention “NULL pointer dereferencing is UB”, which allows an optimizing compiler to optimize out half of your program if you skip a check for NULL.

I haven’t really counted the optimizations (they’re somewhat different in different compilers), but a lot of them really do rely on the fact that in cases of UB a compiler is allowed to do anything.
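A tiny self-contained illustration of the first case (function names made up; the actual behaviour depends on the compiler and optimization level):
Code:
#include <limits.h>
#include <stdio.h>

/* Signed overflow is UB, so an optimizing compiler may fold this whole test
   to a constant 1, silently removing what looks like an overflow check. */
int next_is_larger(int a)
{
    return a + 1 > a;
}

/* Unsigned wrap-around is well defined, so here the test has to stay
   and really returns 0 for UINT_MAX. */
int next_is_larger_u(unsigned a)
{
    return a + 1 > a;
}

int main(void)
{
    printf("%d\n", next_is_larger(INT_MAX));    /* often prints 1 at -O2 */
    printf("%d\n", next_is_larger_u(UINT_MAX)); /* always prints 0 */
    return 0;
}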
alexfru



Joined: 23 Mar 2014
Posts: 80
alexfru 24 Apr 2018, 10:00
DimonSoft wrote:
alexfru wrote:
DimonSoft wrote:
Especially funny is that most of C[++] optimizations became possible because of undefined behaviour, which has always been more of a gun to shoot yourself with than a useful feature.


<…> show me the numbers! Smile

No need to show the numbers. Even the simplest cases of undefined behaviour, like “signed integer overflow is UB”, make it possible to optimize out checks like if (a + 1 > a). Not to mention “NULL pointer dereferencing is UB”, which allows an optimizing compiler to optimize out half of your program if you skip a check for NULL.

I haven’t really counted the optimizations (they’re somewhat different in different compilers), but a lot of them really do rely on the fact that in cases of UB a compiler is allowed to do anything.


These aren't good examples. If the first is a check for signed overflow, it's broken by definition, irrespective of what the compiler does. Likewise, if your code is dereferencing NULL pointers, it's broken by definition. If you don't agree with these definitions, you're not writing in C/C++ or you're not doing it right.

OTOH, if you have code like this:
Code:
void foo(char* p) {
  if (p) do_something_with(*p);
}
void bar(char* p) {
  if (p) foo(p);
}
    


The compiler may inline foo() into bar() and remove one NULL check, yielding:
Code:
void foo(char* p) {
  if (p) do_something_with(*p);
}
void bar(char* p) {
  if (p) do_something_with(*p);
}
    

Or it may even make bar an alias for foo instead of generating separate code.

No undefined behavior here, just a regular optimization, for which the compiler needs to analyze code and data flow, which requires the relevant algorithms, which in turn need additional resources such as memory/storage and CPU cycles.
In C++ programs, where lots of classes and class methods are common, such optimizations are very, very effective.

As for UB in general, things would've been a bit easier and sometimes more practical on the human side of programming if the degree or effects of undefined behavior had been contained.
vivik



Joined: 29 Oct 2016
Posts: 671
vivik 24 Apr 2018, 19:52
Everything I'm saying about programming languages is just my personal quirks. My goal is minimal size (just for the hell of it, and for studying the resulting binary) and using Windows-specific features properly. If you know of a language that generates Windows binaries of 4 kilobytes or smaller, let me know.

I managed to compile and use giflib, and decoded a simple image. I also modified it a bit so that it decodes the first frame of an animation and reports whether the GIF has only one frame or more. Can't play animations yet (I need to parse some fields, and I'm just not used to DirectDraw yet).

Also, I'll add "int _GifError;" everywhere near "#include <gif_lib.h>", because there is "extern int _GifError;" in "gif_lib_private.h". Version 5 (I use version 4 now) changed the way errors are reported: they added an extra *Error parameter to all the main function calls. I'm not sure which way is better, though; the extern should be faster, maybe.
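Roughly, the two styles look like this (just a sketch with a made-up file name; the v4 part relies on that _GifError global, the v5 part on the extra out-parameter plus GifErrorString()):

Code:
#include <gif_lib.h>

#if !defined(GIFLIB_MAJOR) || GIFLIB_MAJOR < 5
/* giflib 4.x style: errors land in a global declared in gif_lib_private.h */
extern int _GifError;

GifFileType *open_gif(const char *name)
{
    GifFileType *gif = DGifOpenFileName(name);
    if (!gif) {
        /* _GifError now holds a D_GIF_ERR_* code */
    }
    return gif;
}
#else
/* giflib 5.x style: every opener takes an extra int *Error out-parameter */
GifFileType *open_gif(const char *name)
{
    int error = 0;
    GifFileType *gif = DGifOpenFileName(name, &error);
    if (!gif) {
        const char *msg = GifErrorString(error); /* human-readable description */
        (void)msg;
    }
    return gif;
}
#endif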

Oh, and I also added a DGifOpenFileNameW function, just so it can accept file names in Unicode. It uses _wfopen instead of that weird thing it used before. I'll change it to something else later.
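Something along these lines (just a sketch, using _wopen rather than _wfopen) should also work without touching giflib's own file handling, since the descriptor can be handed straight to DGifOpenFileHandle (version 4 signature; version 5 takes an extra int *Error):

Code:
#include <fcntl.h>    /* _O_RDONLY, _O_BINARY */
#include <io.h>       /* _wopen */
#include <wchar.h>
#include <gif_lib.h>

GifFileType *DGifOpenFileNameW(const wchar_t *path)
{
    int fd = _wopen(path, _O_RDONLY | _O_BINARY);
    if (fd == -1)
        return NULL;
    /* giflib fdopen()s the descriptor and takes it over from here */
    return DGifOpenFileHandle(fd);
}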

Here is the code for decoding:

Code:
                        GifFileType *gifFile = DGifOpenFileNameW(L"25__5__x256_rgb_a.gif");

                        //NOW
                        //DGifSlurp(gifFile);

                        int number_of_frames;
                        DGifSlurp_firstonly(gifFile, &number_of_frames);

                        int animation_frame = 0;



                        BYTE* p = (BYTE*)ddsd.lpSurface;

                        BYTE* in_cursor = gifFile->SavedImages[animation_frame].RasterBits;

                        GifColorType* Colors;

                        ColorMapObject* ColorMap = gifFile->SavedImages[animation_frame].ImageDesc.ColorMap;
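                        // prefer the frame's local color map; fall back to the global screen color map if the frame has none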

                        if (!ColorMap) goto try_global;

                        Colors = ColorMap->Colors;

                        if (Colors) goto done; //else try_global

try_global:
                                
                        Colors = gifFile->SColorMap->Colors;

done:

                        int pitch_diff;
                        pitch_diff = ddsd.lPitch - ddsd.dwWidth * 4;
                        //LATER: if (is_rgb)
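                        // lPitch is the surface stride in bytes; with 32-bit pixels, pitch_diff is the row padding to skip after each scanline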

                        if (pitch_diff == 0) {
                                for (int y = 0; y < ddsd.dwHeight; y++) {
                                        for (int x = 0; x < ddsd.dwWidth; x++) {
                                                GifColorType* color = &Colors[*in_cursor++];
                                                *p++ = color->Blue;
                                                *p++ = color->Green;
                                                *p++ = color->Red;
                                                *p++ = 0;
                                        }
                                }
                        } else {
                                for (int y = 0; y < ddsd.dwHeight; y++) {
                                        for (int x = 0; x < ddsd.dwWidth; x++) {
                                                GifColorType* color = &Colors[*in_cursor++];
                                                *p++ = color->Blue;
                                                *p++ = color->Green;
                                                *p++ = color->Red;
                                                *p++ = 0;
                                        }
                                        p += pitch_diff;
                                }
                        }

                        DGifCloseFile(gifFile);    


I'm curious whether I can decompress the first GIF frame, free all (or most of) giflib's state, and then continue with decompressing the second frame and onward. I know that GIF needs the previous frame to decompress the next one (because a frame can update only part of the canvas), but does it need anything else, like code tables and such? I should just read the code...

I should probably post some sort of instructions on how to compile it; right now it's just a collection of notes. Not now, I need to get comfy with it first. Good thing you only have to figure it out once per lifetime.
rugxulo



Joined: 09 Aug 2005
Posts: 2341
Location: Usono (aka, USA)
rugxulo 25 Apr 2018, 01:25
vivik wrote:
Everything I'm saying about programming languages is just my personal quirks. My goal is minimal size (just for the hell of it, and for studying the resulting binary) and using Windows-specific features properly. If you know of a language that generates Windows binaries of 4 kilobytes or smaller, let me know.


I already quoted that old 2011 article about Delphi ... but I personally don't see the point. Hey, I'm sympathetic, believe me. Email attachment limits, slow (or metered) bandwidth, small disks, whatever. There are tons of reasons not to be wasteful. But 4 kb is literally nothing. You seem to scorn even 30 kb as bloated!! Even I can't agree there. Heck, the Google logo (.PNG) on its homepage is apparently 13 kb! So you're wasting your time.

It's not wrong to wonder, question things, try to make things better. Sure, some inefficiencies exist. But when you're already relying on MSVC (which I've never downloaded; it's huge, well over a 1 GB download) and Windows (even XP was a bloated 1.5 GB or so, SP3 added more, and Vista and newer are like 10x worse), it's "almost" pointless.

If Windows is the only OS your potential customers run, they're presumably on fast (broadband) Internet just for official patches/bugfixes. And they're already using gigs of space, so what's a few kb more??

Again, it's not wrong to try to minimize, and I'm sympathetic. But 4 kb (or even 4 MB) is a rounding error, not significant in the least! That doesn't mean you can't keep your eyes peeled, but expecting modern compilers to do better than 100 kb is usually asking too much.

tcc (TinyC) 0.9.27 for Win32:
console "Hello, world!" using printf() from MSVCRT.DLL: 2048 bytes
GUI "Hello Windows!" (..\examples\hello_win.c using windows.h): 4096 bytes

The same Win32 console program compiled by OpenWatcom 1.9 (N.B. no MSVCRT used) is 28 kb. Using puts() instead makes it only 25 kb.

EDIT: Virtual Pascal outputs 11 kb simple console "hello" PE/Win32 program. It too has a smartlinker built-in and has inline (586 only) assembly supporting old Delphi 2 syntax. You're honestly not going to get much better than that for 32-bit code (not using MSVCRT.DLL).

EDIT#2: SmallerC seems to be 20 kb (printf) and 12 kb (puts), respectively, on Win32. That is hardly world-ending (again, not using MSVCRT.DLL).

EDIT#3: I had to first "restore" (SmallerC's) smlrpp.exe out of MS Security Essentials' quarantine because they suck. Very annoying (yet again), but they're far from the only overzealous antivirus. If anything, they probably penalize smaller .EXEs more than others since those are easier to hide or disseminate.