flat assembler
Message board for the users of flat assembler.
OS Construction > rewrite linux in asm with fasm
revolution 03 Oct 2010, 15:53
One of the major strengths of Linux is portability.
Another major strength is the number of people involved in updating, fixing and expanding it. If you make an assembly version of Linux you would have to be prepared to forgo both of those strengths. What do you hope to gain by rewriting Linux in assembly? What problem are you trying to solve? If there is a clear reason why you want to do it, then perhaps one of two things could happen: 1) you gain the help of others once they see a problem that needs solving, or 2) a different solution is proposed that solves the issue and doesn't require nearly so much work. If your problem is just the lengthy compile time, then I am not so sure fasm can solve that for you. Once a few macros and includes are involved, the compile time can become very lengthy even for relatively small sources (a few MB only). Also, fasm is limited to ~2GB of memory at runtime; I doubt that would be enough to build all of Linux in a single command.
edfed 03 Oct 2010, 23:57
OK then, just imagine the size of the source code for a light Linux, like DSL,
but fully optimised. Of course, the driver interface should stay identical to the original Linux, but with asm, redesign the whole Linux kernel, with one goal: fit in 640K, so it can be loaded in low memory and leave max memory minus 1MB free for applications and maybe device drivers. Optimise Linux not for portability, but for x86-exclusive execution, fast and efficient. I think it is really possible to divide the size of the Linux kernel by four if it is made in asm, and of course to restudy the structure: make shortcuts in the software, suppress layers in the kernel... Nothing really new, but interesting to redo. The first Linux was in asm, made by a single coder; it was just a multitask switch.
Tyler 04 Oct 2010, 02:02
Quote:
References?
Octavio 04 Oct 2010, 11:27
edfed wrote:
About 15 years ago I compiled the kernel in less than one hour using a Pentium 120MHz. Are you sure you have a fast machine? If this is true, how could Linux developers work with it? I usually build my OS in 0.1 seconds using an ultra_hiper_fast netbook. About your idea: try to do it first with some very small application, and then post again.
ManOfSteel 04 Oct 2010, 12:02
Octavio wrote: about 15 years ago i compiled the kernel in less than one hour using a pentium 120Mhz.
Octavio, 15 years ago the Linux kernel was ~300,000 lines. Nowadays it has become horribly fat, at more than 13,000,000 lines and growing. They are seriously in need of a stricter development process.
Octavio wrote: If this is true, how could linux developers work with it?
Of course, everything is compilable with a really fast multicore machine, by telling make to use multiple jobs, by using tools like ccache, and by having /usr/obj and /usr/src mounted on an md/tmpfs, etc.
Octavio wrote: About your idea, try to do it first with some very small application, and then post again.
I have done so with relatively small applications such as WMs or curses games, and the results are positive. C compilers can sometimes "miss" things, produce ugly "optimizations", or compile code in really weird ways that only a human coder can spot and fix manually.
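For reference, the kind of build setup described above can be sketched roughly like this (assuming a Linux host with ccache installed; the paths, tmpfs size, and job count are illustrative, not a recipe from this thread):

```shell
# Put the build output on a tmpfs so object files live in RAM
# (illustrative mount point; requires root).
sudo mount -t tmpfs -o size=4G tmpfs /usr/obj

# Route the compiler through ccache so unchanged translation units
# are served from cache instead of being recompiled.
export CC="ccache gcc"

# Run one make job per CPU core, with output redirected to the tmpfs.
make -j"$(nproc)" O=/usr/obj
```

This is a build/configuration fragment, not something runnable on its own; the point is that the speedups come from parallelism (-j), caching (ccache), and keeping intermediates off the disk (tmpfs).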
Coty 04 Oct 2010, 18:14
edfed wrote: the first linux was in asm, made by a single coder, it was just a multitask switch.
Actually, it has always been in C: http://www.kernel.org/pub/linux/kernel/Historic/ However, its boot loader engaged p-mode and was written in asm using GAS. (Possibly two-stage, I don't remember.)
Octavio wrote: about 15 years ago i compiled the kernel in less than one hour using a pentium 120Mhz. Are you sure you have a fast machine?
Quote: I usually build my Os in 0.1 seconds using an ultra_hiper_fast netbook.
Do you use FASM or GCC?
vid 04 Oct 2010, 22:00
Tyler wrote:
from edfed?
edfed 04 Oct 2010, 22:26
From a book I read on campus during my early studies of OS construction.
I don't remember the reference, but that is it. It spoke about the first developments from Linus, when he said on the channel something like: "I am writing an OS compatible with Unix, I need the list of standard functions." That step was in asm, but he rewrote it in C a very short time after.
rugxulo 05 Oct 2010, 04:38
Octavio!!! My online friend!!! If only I wasn't such a sucky coder, I would have helped you (and your OS) more.

Anyway, back to boring other OSes. If you want a fast and small OS, you can't use Linux. Part of the blame is their lousy style, part of it is just entropy, and part is GCC's fault. Try the faster-to-build -O1 (or older GCCs with older kernels, esp. GCC 2.95.3). It'd probably be faster to disable building all the dumb drivers; I think that's where most of the bloat comes from. (There have been, barely, some minor efforts at building a slim kernel, but most people just give up or go to a really old one like 2.2.x or even 2.0.x, eek.)

One guy told me (a year ago?) that he can rebuild his FreeBSD kernel in 30 mins (modern? machine). Another guy with a quad-core 64-bit machine and tons of RAM (8 GB?) could rebuild Linux with X11 and everything in like 12 hours (Gentoo?). I am able to rebuild FreeDOS' kernel in 90 secs (old old P166), and the shell itself takes only a few minutes longer, heh. Of course, that's a "glorified bootloader" (as some erroneously say), and even that is written in (convoluted?) C. :-/ On this semi-modern P4 2.40 GHz Celeron, I can build GCC 2.7.2.3 (core, C only) for DJGPP in like two minutes. Yet when I tried building GCC 4.1.2 (with C++ and GM2) under the latest Cygwin, it took forever (and didn't really work; it needed libstdc++ for some reason, which didn't build, ugh)! Wirth's revision of his Modula-2 compiler built itself in like 45 secs "back in the day" (early '80s). Of course, he built Lilith entirely in M2.

If you want a small, reasonable OS, you'll have to use OctaOS, Menuet, or DexOS. If you want a POSIX-compatible OS that's small and easy to build, try Minix. (N.B. I haven't tried 3.x much, but 2.x is quite small and nice for its purposes and has its compiler included with full system sources. Even the compiler is FOSS now.) If you really wanted a lean Linux, you should look at BasicLinux, BlueFlops, DeLi, Slackware, tomsrtbt, Slitaz, DSL, tinycore, etc. (EDIT: Even better, LFS, aka Linux From Scratch!)

Even default non-X11 *BSDs are hundreds of megs, ugh.

P.S. Almost forgot OberonOS, heh, Wirth's magnum opus: kernel and compiler in like 200 KB, mostly fully open-sourced in his book, though it's a bit weird (and I haven't delved into it... yet, heh, I'm only 20 years too late). Obligatory link: http://www.cs.inf.ethz.ch/~wirth/Articles/LeanSoftware.pdf
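The "disable all the dumb drivers" approach can be sketched with kbuild's own config targets (this assumes a Linux kernel source tree; the targets are standard kbuild ones, but the job count and -O1 flag are illustrative):

```shell
# Start from the smallest possible configuration: everything off.
make allnoconfig

# Interactively re-enable only the drivers your hardware actually needs.
make menuconfig

# Build with one job per core; -O1 (suggested above as faster to
# compile than the default) can be passed via KCFLAGS.
make -j"$(nproc)" KCFLAGS=-O1
```

This is a configuration sketch rather than a runnable script; most of the size and build-time bloat really does come from the driver tree, so a pruned config shrinks both.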
ManOfSteel 05 Oct 2010, 12:20
rugxulo wrote: Even default non-X11 *BSDs are hundreds of megs, ugh.
Let us talk about FreeBSD... The setup (base + kernel + man pages) itself is ~130MB. A full system with X and regular desktop applications included can be as small as 2GB on disk if you choose your applications wisely, and a "clean" system is barely more than 500MB. You can customize the base system and kernel and remove anything you would never use. The GENERIC kernel is less than 40MB with modules included (11MB alone), and can very easily be reduced to half of that (5-6MB) if only the required device drivers are included. FreeBSD is many times smaller than most GNU/Linux distros of the same type (general-purpose desktop or server). By the way, FreeBSD also has special stripped-down versions for embedded systems. Please, find any other system (of the same type) that is as complete and stable, yet smaller on disk.
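The kernel trimming described above follows FreeBSD's standard custom-kernel procedure, roughly like this (the config name MYKERNEL is a placeholder; this assumes the system sources are installed under /usr/src on an amd64 machine):

```shell
# Kernel configs live in the per-architecture conf directory.
cd /usr/src/sys/amd64/conf

# Start from GENERIC and strip the device drivers you will never use.
cp GENERIC MYKERNEL
# ... edit MYKERNEL, deleting unneeded 'device' and 'options' lines ...

# Build and install the trimmed kernel from the source tree root.
cd /usr/src
make buildkernel KERNCONF=MYKERNEL
make installkernel KERNCONF=MYKERNEL
```

A configuration fragment, not a standalone script; the point is that dropping unused 'device' lines is what gets GENERIC's ~11MB down to the 5-6MB mentioned above.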
revolution 05 Oct 2010, 12:26
ManOfSteel: Now edfed will suggest rewriting FreeBSD in asm with fasm.
TmX 05 Oct 2010, 16:06
ManOfSteel wrote:
Really? Right now I use Arch Linux in VirtualBox. The disk size is about 1.5 GB (mostly development tools and X11, no desktop environments though). Maybe I should give it a try...
bitRAKE 05 Oct 2010, 18:40
Wow, here is an incredible historical overview.
http://www.cs.inf.ethz.ch/~wirth/Articles/GoodIdeas_origFig.pdf ...thanks for the direction, rugxulo. People are always wondering why the instructions exist the way they do, and this paper provides some good insight: the good, the bad, and the ugly.
Quote: Hence, hardware protection appears as a crutch, covering only a small part of possible transgressions, and dispensable if software were implemented safely. It appears now as having been a good idea in its time that should have been superseded in the meantime.
ManOfSteel 05 Oct 2010, 18:44
@TmX: It is quite similar in size. Note that the 2GB I mentioned includes the ~450MB full source (core system + kernel). And even though I have no DE, I still use relatively big applications such as Blender and GIMP.
edfed 05 Oct 2010, 20:49
revolution wrote: ManOfSteel: Now edfed will suggest to write freeBSD in asm with fasm.
Why not? If it can run GIMP, it is OK. The fasmw IDE compiles itself in... 1.4 seconds on my PIII @ 600MHz... wow! I am happy to see that I am not alone in finding Linux and the others very bloated. It is a pity, because the goal of these systems is to be open source, but where is the "open source" philosophy when you need a full day to compile, without being sure of the result? Can you imagine how asm code can take 6MB when compiled, without data, drivers, etc.? It is just incredible to see the waste of resources. I suppose many Linux developers don't care about physical limitations, and think:
linux devers and users wrote:
What is the advantage of owning a quad core with 2GB of RAM if the OS it runs needs 1GB for itself? I have just one or two things to say.
ManOfSteel 05 Oct 2010, 21:54
edfed wrote: i am not alone to find linux and others very bloated.
What "others"? The Linux *kernel*, yes, definitely. Most DEs, sure. *Some* GNU/Linux distros that are marketed for the general population as replacements for Windows and that support every freakin' webcam and WLAN chip on the market, hell yeah. Bloat comes from this me-too approach.

But many projects are not like that. Many have sound development processes. Many put stability before features. Many release the code when it is ready, not right on the deadline even if the code is buggy and ugly and half broken. And some projects do take minimalism to the limit.

Also, nothing prevents you from removing the extra parts. Nothing prevents you from building custom kernels and removing everything but what you need. Hell, nothing prevents you from choosing the most basic OS and adding the most minimalistic applications, again stripping them down to the bare minimum when you build them. But there is a cost many people are not ready to "pay".
Tyler 06 Oct 2010, 01:18
edfed wrote:
rugxulo 06 Oct 2010, 02:52
Last I checked, Vista (Home Premium on up, not that joke called Home Basic) and Win7 both need at least 1 GB of RAM and 16 GB of HD to run. OpenSUSE, last I checked, also "recommended" 1 GB. I know for a fact that XP can run in much, much less, but they want to phase that out because "it's old". Part of the problem is (lack of) drivers, some of which only work (or don't) on newer hardware. Yeah, you win some, but mostly you just lose. Does even MS know what all that space is for??? I doubt it. (And to think we thought Win9x or XP were bloated, sheesh.) Even Ubuntu needs gigs and gigs for a full install. (BTW, I'm on Lucid Puppy 5.01 now.)

A review of some *BSD I read (perhaps NetBSD) bragged about how it could run in a bare console in "as little as 40 MB"... doing what??? That's a lot of RAM, but the problem is that nobody knows and nobody cares. One guy with an ultra-modern rig told me, "Compiling is fast enough, it doesn't need to be faster", ugh. Surely he doesn't compile much, then. GCC is slow as molasses, and the whole C header fiasco (headers always being reparsed) doesn't help. Plus, G++ with -O2 can easily use 150 MB alone for just one file if you're not careful. It's not that asm is so perfect and HLLs are so bad, but a lot of compilers are abandoned at "good enough" when in truth they could be 10x better. FPC is actually a good example of not resting on their laurels, but nobody's perfect anyway.

In short, there's a tradeoff in everything, and it takes a lot of sweat and elbow grease to get things to work properly. Still, we can't be too complacent, even if others are, because things are not nearly as perfect as we'd like to pretend.

P.S. "HDs and RAM are cheap enough, CPUs are fast enough"... then I dare you to disable all optimizations, avoid all makefiles (partial rebuilding), and decompress everything. It's really not thousands of times better these days, as they claim. Even with vast enhancements and improvements, many things have regressed or gotten more complex behind the scenes. It's sad when you have to run DOSBox (emulation) because modern Windows (even 32-bit!) can't run simple DOS cmdline apps (despite V86 mode), esp. when it's obvious that your 15-year-old PC can run them faster.
TmX 06 Oct 2010, 04:03
edfed wrote:
Well, the Linux kernel is portable to many CPU architectures and supports a lot of hardware, hence the bloat. FreeDOS and Menuet are probably not as bloated as Linux, but on the other hand they only run on x86... There are always trade-offs to consider.
Copyright © 1999-2025, Tomasz Grysztar. Also on GitHub, YouTube.