flat assembler
Message board for the users of flat assembler.
OS Construction > Why don't modern OSes use Ring1 and/or Ring2?
dap 13 May 2008, 19:45
I think the problem is portability: they need to make it work on machines which don't provide as many separation levels. Maybe there is an elegant way to make the additional rings optional, so that the OS would still work on other architectures.
edfed 13 May 2008, 22:31
hem...
i think the problem is more about productivity. the main philosophy of M$ is to earn money as soon as possible. so when they build a "new" OS, they make the kernel and basta; right away they make remakes of their poor applications. coding for the 4 rings is very hard to do. it would have to be done in 4 steps instead of the current 2 steps. i invite every fasm coder to code not fast, but very well... one month to make a simple routine is not that much, assuming that afterwards it will work forever, until the planet explodes.
1st) build a strong kernel with a set of libraries.
2nd) build a strong device driver interface and a set of libraries.
3rd) build a strong set of APIs and a set of libraries.
4th) build a very strong library to use all the 3 steps above, and a lot of applications to f*ck windows.
then, we use all the rings. another reason, as dapounet stated, is to increase portability... but we all know that nowadays x86 is the MAIN architecture, and tends to be the only one.

Last edited by edfed on 22 May 2008, 18:59; edited 1 time in total
Octavio 15 May 2008, 08:44
revolution wrote: Why not just simply put all the drivers into Ring1? That way the kernel can't be touched.

Because there is no possible protection against drivers. Protection is implemented only in the CPU, not in the other chips, so you can't prevent a driver from accessing all memory using a DMA (busmaster) chip. x86 CPUs have many features that nobody uses simply because they are not needed.
vid 15 May 2008, 09:32
It is portability. You need to remember that most "real" software (OSes included) nowadays is not x86-only.
Ring 1 is sometimes used for virtualization of ring 0 by virtual machines, though.
revolution 15 May 2008, 09:38
Octavio wrote:
revolution 15 May 2008, 09:42
vid wrote: It is portability. You need to remember that most "real" software (OSes included) nowadays is not x86-only.

vid wrote: Ring 1 is sometimes used for virtualization of ring 0 by virtual machines, though.
vid 15 May 2008, 10:09
Quote: I suppose that malware targets would most likely be Windows/Unix/Linux and probably something like 99% of those OSes are run on x86

Oh yeah, for malware, running on the most common case is enough, and i wouldn't be surprised if it used some of those modes.

Quote: There is already a lot of CPU specific code in the 'portable' OSes, so much so that a small piece of extra code for using other rings in x86 would not seem to be a hardship.

It's not about an "extra per-CPU piece of code". It is a matter of a different design. If you rely on hardware for protection of various kernel parts, that design will face quite ugly problems on other architectures. Unless you really want to restrict yourself to x86, doing this is a bad idea. You will need some custom protection mechanisms in the OS anyway, and they are perfectly suitable for doing what ring 1 can.
revolution 15 May 2008, 10:53
vid wrote: It's not about an "extra per-CPU piece of code". It is a matter of a different design. If you rely on hardware for protection of various kernel parts, that design will face quite ugly problems on other architectures.
vid 15 May 2008, 11:03
Quote: You can use a simple fallback position: if not x86, then just use the two-level protection, system/user. But if it is x86, then use the four-level protection code.

What would it be good for, if two-level protection runs just fine? This way you also don't have to maintain, update, and test two different mechanisms.
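revolution's fallback idea could be sketched as a per-architecture table mapping logical protection domains to hardware rings (a hypothetical sketch only — the domain names and the ARCH_X86 macro are made up here, not from any real OS):

```c
/* Hypothetical sketch of the "fallback" design: the kernel thinks in
   logical protection domains, and an arch-specific table maps them to
   hardware rings. On x86 the four domains get four rings; elsewhere
   everything except user code collapses into the supervisor level. */
enum domain { DOM_KERNEL, DOM_DRIVER, DOM_SERVICE, DOM_USER, DOM_COUNT };

#ifdef ARCH_X86
static const int ring_of[DOM_COUNT] = { 0, 1, 2, 3 }; /* four x86 rings */
#else
static const int ring_of[DOM_COUNT] = { 0, 0, 0, 3 }; /* supervisor/user only */
#endif

int domain_ring(enum domain d) { return ring_of[d]; }
```

vid's objection still bites, though: with this table the kernel cannot *rely* on, say, DOM_DRIVER being unable to touch DOM_KERNEL memory, because on the two-level targets that guarantee evaporates.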
revolution 15 May 2008, 11:13
vid wrote: What would it be good for, if two-level protection runs just fine? You also don't have to maintain, update, and test two different mechanisms.
edfed 15 May 2008, 11:17
one other problem is the TSS.
which system uses this hardware mechanism?

ring protection: assuming the OSes will be developed for x86 exclusively, why not use the rings? Windows, linux and unix exist for many different architectures, but... there is not only Windoze, linux and unix. there are many other systems that are coded in asm for x86. and i am pretty sure you (vid) have an x86 as your main computer. don't forget we code in asm with fasm for x86. so speaking about "portability" is off topic. the main problem is not to adapt code for other architectures, but to code for this dear x86 arch.
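For what it's worth, the TSS is one concrete cost of the extra rings: even an OS that never uses hardware task switching needs one TSS per CPU, because an interrupt that moves inward to a more privileged ring loads its new stack pointer from it. A rough C sketch of the relevant fields, assuming the 32-bit TSS layout (the selection logic below is simplified):

```c
#include <stdint.h>

/* Partial sketch of the 32-bit x86 TSS: only the privilege-stack
   fields. ss0/esp0 is enough for a ring 0/3 OS; an OS that also runs
   code in rings 1 and 2 must keep ss1/esp1 and ss2/esp2 valid too,
   since inward ring transitions load the target ring's stack here. */
struct tss_stacks {
    uint32_t esp0; uint32_t ss0; /* stack used on entry to ring 0 */
    uint32_t esp1; uint32_t ss1; /* only needed if ring 1 is used */
    uint32_t esp2; uint32_t ss2; /* only needed if ring 2 is used */
};

/* Which stack pointer the CPU would load when entering target_ring
   from less privileged code (simplified model). */
uint32_t entry_esp(const struct tss_stacks *t, unsigned target_ring) {
    switch (target_ring) {
    case 0:  return t->esp0;
    case 1:  return t->esp1;
    case 2:  return t->esp2;
    default: return 0; /* there is no inward transition to ring 3 */
    }
}
```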
revolution 15 May 2008, 11:23
edfed wrote: the main problem is not to adapt code for other arch, but to code for this dear x86 arch.
LocoDelAssembly 15 May 2008, 20:34
Excuse my ignorance, but can 4-level protection be efficiently implemented in a paged flat memory model environment?
System86 15 May 2008, 21:56
OS/2 (at least the 16-bit version) used ring 2 for applications that had direct I/O access (IOPL=2). Anyway, Windows NT used only rings 0/3 because of portability to machines which had only 2 rings. Almost all Windows NT systems are x86, though, so I think the focus on portability when designing NT was harmful.
Also, an interrupt can be set to be callable via a software int only from code running at a privilege level numerically less than or equal to the DPL (descriptor privilege level) of its IDT entry. It is possible to make an interrupt that can only be called at ring 0, 1, or 2, but not 3. Windows could therefore let certain calls be made by ring 2 code but not ring 3 code, so under such a model OS extensions running at ring 2 could call OS services (like maybe low-level disk access) that normal code can't.
revolution 16 May 2008, 00:46
LocoDelAssembly wrote: Excuse my ignorance but can 4-level protection be efficiently implemented on paged flat memory model environment?
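For reference on why the question is awkward: in a flat (non-segmented) model the only protection left is the paging unit's, and an x86 page-table entry carries a single User/Supervisor bit. A simplified sketch of that check (ignoring CR0.WP, write permissions per access type, and so on):

```c
#include <stdint.h>

/* Sketch: x86 page-table entry protection bits. Paging knows only two
   privilege classes: the U/S bit (bit 2) marks a page as user or
   supervisor. Rings 0, 1 and 2 all count as "supervisor", so paging
   alone cannot separate a ring 1 driver from the ring 0 kernel. */
#define PTE_PRESENT (1u << 0)
#define PTE_WRITE   (1u << 1)
#define PTE_USER    (1u << 2)

/* Can code at the given CPL read a page with these PTE flags?
   (Simplified: read access only, no WP handling.) */
static int page_accessible(uint32_t pte, unsigned cpl) {
    if (!(pte & PTE_PRESENT)) return 0;
    if (cpl == 3 && !(pte & PTE_USER)) return 0; /* user vs supervisor only */
    return 1; /* CPL 0-2 are all "supervisor" to the paging unit */
}
```

So at page granularity rings 1 and 2 buy nothing: ring 1 code can read any supervisor page, including the kernel's. Four-way separation needs segment limits and DPLs, which a flat model gives up.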
tom tobias 16 May 2008, 22:04
LocoDelAssembly wrote: ...efficiently implemented on paged flat memory model...

Previously, I argued against paging: http://board.flatassembler.net/topic.php?t=8433&postdays=0&postorder=asc&start=0

In that thread, fudder had inquired why some folks dislike paging, and I responded by expressing my opinion that paging is obsolete, a method of compensating for a LACK of physical memory. Paging was a very important technique thirty years ago, when memory was expensive. It remains my opinion that a good (i.e. much simpler) CPU architecture would eliminate paging, placing the burden of swapping out to disk on the programmer, instead of hampering every CPU with this unneeded overhead. Believe me, in 1978 none of us could imagine a machine that cost a few hundred dollars with two gigabytes of RAM.

How many days could one run the internet on a machine with no paging, assuming 16 hours of continual usage per day? How much physical memory does one require, per hour of use, accessing different web sites, responding to forum questions, and answering email? I suppose, without giving the matter much thought, that one needs 16 megabytes of memory per page accessed, and assuming that one wished to retain in memory twenty different web pages at all times, without overwriting any page, then one would need about 320 megabytes. If one watched real-time video at 30 frames per second, assuming overwriting (plus clearing any location not overwritten), one would need another dozen megabytes, but still there would be ample memory left for a variety of calculations, tables, and so on... The OS itself, particularly if based on the new CPU's linear model, should not require a significant space.

In short, I believe that we are overwhelmed by our willingness to accept 1970s-era thinking, when we ought to be jettisoning those ideas and starting anew, commencing with the existing memory controller.
daniel.lewis 22 May 2008, 01:18
Well, we also have virtualization to thank. The AMD Pacifica and Intel VT instruction sets mean that you can, if you so desire, completely isolate different execution environments.

To me, the virtual machine multiplexes and nothing else, and the OS provides the HAL. Exokernel theory, anyone? So VMWare, Virtual Iron and Xen need an OS that can be used to host programs running within a single security context (no paging, no rings). So we need an OS which is lean and doesn't have the overhead of multiplexing. Since we already require Pacifica or VT, we can assume the LAPIC, no FDD, no PIO, no segmentation, SSE2, no dial-up, ethernet, and that USB is dominant. For the hell of it, why not limit it to x86-64?

I'm working on it. Let me know if you want to be too.
edfed 22 May 2008, 18:58
Quote: why not limit it to x86-64

because i (and many other computers) don't have it. there are a lot of x86-32 machines still around, and it will stay that way for a long time. 32 bits forever????
System86 22 May 2008, 19:47
2 questions:
1. Do Bochs/QEMU/other emulators support VMs inside the emulated CPU? I'm NOT talking about Bochs using a VM to boost performance; I'm asking if any emulator lets you run a hypervisor within the emulator.
2. Can Intel VT/Pacifica be used in real mode?
Copyright © 1999-2024, Tomasz Grysztar. Also on GitHub, YouTube.
Website powered by rwasa.