flat assembler
Message board for the users of flat assembler.
Terminal Video Series: Modern Linux Assembly Language
redsock 07 Dec 2019, 22:37
Inspired by all of the fine work by Tomasz on his YouTube channel, I have created the first in a series of videos about Linux Assembly Language.
I start the series by comparing assembly language with 12 other languages. https://2ton.com.au/videos/tvs_part1/
revolution 08 Dec 2019, 07:48
Isn't there a way to compile C without any of the standard libraries, using some command line switch?
I've seen other code around that compiles to almost raw assembly, without any of the normal C startup or initialisation code included.
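A minimal sketch of the kind of thing being described (assuming GCC on x86-64 Linux; this is not code from the thread): with -nostdlib -static -ffreestanding none of the usual C startup/initialisation code is linked in, so you provide your own _start and invoke the kernel's syscalls directly.
Code:
/* hello.c - freestanding sketch, assuming GCC on x86-64 Linux.
   Build without libc or the CRT startup code:
     gcc -O2 -nostdlib -static -ffreestanding -o hello hello.c */
static long sys_write(int fd, const void *buf, unsigned long len)
{
    long ret;
    __asm__ volatile ("syscall"
                      : "=a"(ret)
                      : "a"(1), "D"(fd), "S"(buf), "d"(len)  /* write = 1 */
                      : "rcx", "r11", "memory");
    return ret;
}

static void sys_exit(int code)
{
    __asm__ volatile ("syscall" : : "a"(60), "D"(code)       /* exit = 60 */
                      : "rcx", "r11");
    __builtin_unreachable();
}

void _start(void)
{
    static const char msg[] = "hello\n";
    sys_write(1, msg, sizeof(msg) - 1);
    sys_exit(0);
}
The resulting binary pulls in nothing from libc; all it does at runtime is the two syscalls.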
ProMiNick 08 Dec 2019, 08:32
redsock, it would be good if direct links to the videos were present here too, e.g. https://2ton.com.au/videos/tvs_part1/tvs_part1.mp4.
Thanks, you are one of those whose tutorials I am willing to watch as well (I am interested in yours about networking & Linux). That only leaves revolution (with something about ARM & AARCH64).
redsock 09 Dec 2019, 10:01
I will be sure to add direct download links for future videos in the series. From the logs of all of the visitors so far, it looks like people are happily extracting it from the HTML anyway.
vid 09 Dec 2019, 11:02
As for the "technical" aspect of the video, I like it very much. Especially the live writing / compiling / running the code, complete with keyboard sounds... The way you express things is concise, and easily understandable. I like that you don't shy away from using all kinds of tools available, whichever seems handy at the time. It makes the video also a sort-of tutorial. Also, you have the "radio narrator" voice, pleasant to listen to, that doesn't get boring fast. This style of presentation suits you very well. I wonder, how much preparation and editing did you have to do, to create such video?
However, I object to the criteria used for the comparison.

On the main question of "why" one would choose a language, speed and size are not considered important by most people. The most important factor is how hard it is to produce code that does what you want it to. That is more often a question of the availability of third-party libraries, and the volume of boilerplate code you are forced to write around the actual functionality (formatting data into complicated structures, memory management, error handling, etc.). As you said in the beginning, your comparison is only a partial answer, and in this case a very small, unimportant part.

As for the speed/size/complexity comparison itself: the number of instructions / syscalls in linearly-executed code doesn't really matter. Such a criterion only matters in highly nested innermost loops that get executed a zillion times a second. Otherwise, any kind of linear code size/complexity increase is practically unimportant. 10 vs. 100'000 instructions make no real difference unless the code is executed hundreds of thousands of times a second.

In the real world, the speed difference is more a question of overall software architecture than low-level optimization. How many levels of abstraction are there? Does some component do the same lookup over and over, instead of caching results, because of its "nice" interface that hides implementation details? In this sense, yes, higher-level languages do encourage more abstraction, which tends to hide the underlying complexity and cause measurably slower execution. But even here, the "practical usability" factor usually wins over speed. That's why people prefer them.

I'd suggest you try to write an application that does something complex and useful in the real world. For example, something like 'grep' with regexp support. And make sure it is written in a robust way: no limit on line length, with proper error handling and reporting everywhere. It should distinguish between file-not-found and file-not-accessible errors, produce a nice error message if a network file is disconnected in the middle of parsing, or if memory runs out, etc. Test how well it scales with extremely long regexps, input files larger than available memory, etc. That would make a more meaningful real-world comparison.
redsock 09 Dec 2019, 11:26
Thank you for your praise re: the technical aspect and composition. I did one full pass of the video using a single stereo microphone, and my DasKeyboard Ultimate with Cherry MX Blues in it blasted out over the entire soundtrack. After several hours of attempting to filter the keyboard noise, I decided to record a separate narration and just muted/suppressed the audio on the original take. All of the console/typing was done in a single pass without breaks or edits.
I design and write code for a living, and have for just over three decades. Whether that gives me a pedestal to stand on or not, I'll address your points:

1) There were no criteria used for the comparison. The phrase I used to begin it was "what matters here is relative time between the languages." What I actually said verbatim from the outset is that my "answer as to why one might choose assembly language" is a -partial one-, and I titled it "Baby Steps" to really drive home the point. Further in the video, I directly mention that lots of people don't care one bit about the metrics I presented. Does this invalidate them entirely? I think not. I stand by my conclusion that this -is- a starting point for the discussion. Fade in to some dreamy-eyed discourse: imagine a system that was not populated by busybox or glibc-tools, but instead by the absolute-bare-minimum timings required to get something done. *That* was the point, and I think you may have missed it.

2) You commented that the number of instructions / syscalls isn't important. What I actually said aloud in the video was: task-clock is the useful metric. How long did it actually take? If I have 1.5M runs of something to do, are you suggesting that the microsecond/millisecond measures aren't important? Hmmm, I say. In -my- real world, I have to decide how often, when, and how much of something gets run to warrant how much effort I put into it.

I will be sure to address your sentiment more directly when I post the next in the series. I think you may be happier with the next in my planned series. Cheers
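On the task-clock point, for anyone wanting to reproduce that kind of measurement: perf stat ./program reports a task-clock counter directly, and a rough in-process analogue on Linux is to compare CLOCK_MONOTONIC (wall time) against CLOCK_PROCESS_CPUTIME_ID (CPU time actually consumed). A small sketch in C, with an arbitrary busy loop standing in for the workload (this is not code from the video):
Code:
/* Sketch: wall-clock time vs. CPU time consumed, assuming Linux/POSIX clocks.
   CLOCK_PROCESS_CPUTIME_ID is a close analogue of perf's task-clock counter. */
#include <stdio.h>
#include <time.h>

static double seconds(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
    struct timespec wall0, wall1, cpu0, cpu1;
    clock_gettime(CLOCK_MONOTONIC, &wall0);
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &cpu0);

    /* workload under test goes here */
    volatile unsigned long sum = 0;
    for (unsigned long i = 0; i < 100000000UL; i++)
        sum += i;

    clock_gettime(CLOCK_MONOTONIC, &wall1);
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &cpu1);

    printf("wall: %.6f s, cpu: %.6f s\n",
           seconds(wall0, wall1), seconds(cpu0, cpu1));
    return 0;
}
Multiplying the per-run CPU figure by the number of runs (the 1.5M in the example above) is what decides whether the optimisation effort pays off.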
vid 09 Dec 2019, 12:08
Amazing that it was done in a single take. I'd make so many mistakes.
re 1: Indeed, I was heeding your call for a discussion with my objections to the choice of metrics. I don't think the question of choosing metrics is subjective: either a metric reflects some practical real-world difference, in which case it is useful, or it doesn't and thus isn't. The metric you chose (number of instructions / syscalls), as I explained, doesn't correspond well to any practical difference and so it's not useful. Even the task-clock metric is not automatically practical; its usefulness depends on the particular use case. If the purpose of the software is only to print a string when the user manually starts the application, the user doesn't "feel" the difference between 0.01s and 0.1s, so such a difference doesn't matter. Only if the same software gets used by another tool automatically, hundreds of times in a loop, and the time difference becomes visible, does the difference start to matter. A similar approach could be used for judging your imagined system: how much effort would it cost to develop vs. what benefit would it bring to users? Given how extremely taxing it would be to develop software for such a system, would it be worth it, even if it ran fast? The wealth of available software turns out to be a more important factor than speed, for most people.

re 2: The important part is "If I have 1.5M runs of something to do". That is the question which needs to be answered before choosing the right tool. If you don't have those 1.5M runs, there's no reason to waste your time on a harder-to-use tool/language, even if it produces a theoretically faster result. That's why you didn't bother to write your speed-testing script in ASM. It only makes sense to choose the harder-to-use tool when you do have those 1.5M runs. And even then, the difference between ASM and C is so small that there is very little practical case for using ASM, aside from personal preference. It's the overall software architecture that affects speed the most, not linear differences like the quality of code optimization. That was my main point.

Anyway, I am looking forward to seeing this topic addressed in the next part(s).
redsock 09 Dec 2019, 12:17
We are in agreement, stay tuned
revolution 09 Dec 2019, 16:05
The kind of thinking "Just use a tool that makes it simple for me to get a task done" leads to the terrible sort of culture we have around web design. You can use JS libraries to make some useless fancy menus and whatnot. It is easy to deploy and get it working. But the end result is that the whole world suffers: everyone is required to download megabytes of JS code, to have powerful machines to run the fifteen layers of abstraction, and to expose themselves to malvertising and breaches, just so that some web designer can be lazy.
We can try to justify it by saying that it isn't me who runs the code 15M times, it is distributed to 15M users' machines. But all that does is distribute the inefficiency to someone else. If redsock can make people more aware of things like this then I am all for it. It might encourage people to spend a bit more time making things work better. And perhaps we can save the human race from self-destruction via global warming caused by overworked machines running crappy HLL code all day long.
bitRAKE 11 Dec 2019, 14:46
Nice start, redsock - got me intrigued to see what's cooking.
<Rant> Over a billion Android devices trying to update - size matters. Multi-gigabyte games practically requiring a complete reinstall when they update - size matters. Why am I downloading 35 megabytes for a mouse driver? You'd think it was an advanced AI that detected repetitive motion to create macros automatically. No, it's a marketing virus. Save the planet? We can't even remember we live in four dimensions. Just throw some more LEDs on it and call it good. </Rant>
ganuonglachanh 14 Dec 2019, 11:32
Thanks redsock, waiting for part 2
Furs 14 Dec 2019, 15:08
revolution wrote: The kind of thinking "Just use a tool that makes it simple for me to get a task done" leads to the terrible sort of culture we have around web design. [...]
Even 0.1s vs 0.01s matters, because of wasted energy (especially on battery) and heat, even if it's not noticeable to a human... But even worse is when someone says 0.1s is fine because most people won't notice. He makes it into a library. 100 different guys do the same thing, because after all, their development machine and their tests are what matter. Then these libraries end up being used by thousands, and the user has to sit through loading that takes something like 5 extra seconds longer than it should. I am tired of lazy developers.
redsock 17 Jan 2020, 04:01
I have just recently uploaded the latest installment of my Terminal Video Series on Modern Linux Assembly Language.
In this one, I analyse a 12GB XML dump from the Japanese Wikipedia with some quite interesting (IMO) results, and do the same in a few other languages as well. Please let me know what you think. This one is a little longer than the last, but I don't think I could have cut that code much faster. https://2ton.com.au/videos/tvs_part2/ Cheers!
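As a rough illustration of what the bare bones of such a scan involve (this is not the code from the video, and the choice to count '<' bytes with a 1 MiB buffer is arbitrary, for illustration only): stream the file with read() and count as you go, rather than trying to hold 12GB in memory.
Code:
/* Sketch only: stream a large XML dump and count '<' bytes (tag openings). */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <dump.xml>\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    static char buf[1 << 20];          /* 1 MiB read buffer */
    unsigned long long tags = 0;
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf))) > 0) {
        const char *p = buf, *end = buf + n;
        while ((p = memchr(p, '<', end - p)) != NULL) {
            tags++;
            p++;
        }
    }
    if (n < 0)
        perror("read");
    close(fd);
    printf("tags: %llu\n", tags);
    return 0;
}
Built with gcc -O2, a loop like this tends to be limited by how fast the file can be read rather than by the counting itself.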
revolution 17 Jan 2020, 17:47
It is no surprise that the interpreted languages are much slower.
But I think that GCC did a reasonable job there. Did you try with different -O levels?
bitRAKE 17 Jan 2020, 21:27
Memory bound tasks can only get so fast. Should be able to reach the read bandwidth. Given the nature of the data, it might even be faster to decompress and do the counts.
redsock 18 Jan 2020, 00:09
bitRAKE wrote: Memory bound tasks can only get so fast. Should be able to reach the read bandwidth. Given the nature of the data, it might even be faster to decompress and do the counts.
revolution wrote: Did you try with different -O levels?
guignol 21 Jan 2020, 07:12
Is there a direct link to a video file?
ProMiNick 21 Jan 2020, 07:45
guignol, https://2ton.com.au/videos/tvs_part2/tvs_part2.mp4
I guess all the next ones will follow the same pattern, https://2ton.com.au/videos/X/X.mp4 where X is tvs_part1, tvs_part2 ... and so on. If you use some obsolete browser version it will download the file instead of displaying its content (in modern browser versions, open the context menu over the video and choose "Save as").