Message board for the users of flat assembler.
> OS Construction > Building a memory bitmap for 64bit
neville 10 Dec 2008, 02:57
Depends what you mean by "big". And today's "big" is tomorrow's "small" is next week's "tiny". In FAMOS I allocate memory in multiples of 512kB chunks (I call them "smegs" = semi-megs). It also depends on how applications are written, which can make memory management easier or harder. On the contrary, paging can produce physical RAM fragmentation, but the OS would be unaware of it.

> neville: as discussed in this thread, not having a paging mechanism poses a lot of problems... like getting your memory fragmented. Allocating big fixed chunks at application startup isn't really a viable solution for typical real-world applications...

I don't use pages, not even physical ones, only flat linear memory. But yes, there would be a speed difference, larger or smaller depending on many factors. Sometimes I'm sure it could be noticeable.

> Yes, a CPU has to access physical memory, but can you feel a speed difference (on current, real-world x86 hardware) between accessing (physical) pages that are scattered all over RAM vs. physical pages that are allocated contiguously?

I just designed it. Here: MOV CR0.PG,0

> And designing an alternative to paging, for a theoretical CPU, is non-trivial.

A very trivial alternative to paging, and it works.
FAMOS - the first memory operating system
Chewy509 11 Dec 2008, 06:40
> (I call them "smegs" = semi-megs).

Umm, you might want to come up with a better term than 'smegs': http://answers.yahoo.com/question/index?qid=20071103081129AANMN53
f0dder 11 Dec 2008, 08:03
> Sometimes I'm sure it could be noticeable.

Have you quantified this, or are you just guessing?
Might work for a small hobby kernel or very specific embedded OS, but isn't really suitable for a generic OS, imho.
- carpe noctem
calpol2004 11 Dec 2008, 22:11
Ah, linked lists! A good idea, and a definite candidate, but I'm currently toying with this idea:
I will use a bitmap, but instead of using 4KB blocks I'll use 4MB blocks, so the bitmap would be 1024 times smaller. Through simple tricks, such as starting scans from the last unallocated section of blocks, and once that's used up, scanning from the end of the last allocated block, finding a spare block becomes very fast and resistant to fragmentation.
Of course this on its own is reckless: lots of RAM would be allocated but little of it used. So I'm going to use heaps, with a heap-allocate call where, instead of allocating a whole 4MB block, you allocate part of a block that's already allocated. This removes the issue of programs allocating a 4MB block for something a few KB big; the heap simply expands by another block as needed. The advantage is that programs can allocate large pieces of memory quickly with memalloc (a block of x*4MB) BUT can also allocate small pieces quickly by allocating within already-allocated blocks.
Searching the heap's bitmap would be quick too, as it's the size of the heap, not the address space (1024 bits for 4KB blocks inside a 4MB heap; with MMX/SSE that's what, 2-4 instructions?).
The main issues are fragmentation of the heap and some wasted RAM. Since the heap itself is paged, I will have to deal with the fragmentation somehow, and since many processes' heaps won't always be full, there will be some wasted RAM, but that's the cost of improved processor performance.
This can kill some low-end systems though. A PC with 32MB of RAM, for example, would lose a large percentage of its memory to bitmaps, page tables and partially filled heaps. It feels like I'm building a system for a computer with a 100MHz CPU but 1GB of RAM (exaggerating :p), if you get what I'm saying. Finding the balance between memory usage and CPU speed is difficult.
Some stuff to think about. IMO the design choices you make when building a kernel are much harder than the actual code. I do need to think over some alternative methods though, like linked lists.
bitRAKE 12 Dec 2008, 02:30
How do you plan to handle process stacks? Will they grow dynamically from the heap?
calpol2004 12 Dec 2008, 08:54
Well, I was just going to have a fixed stack, 4KB in size. Keeping the stack on a heap would be better, though I'm unsure how it would grow dynamically.
Detecting when a process's stack is about to fill up might be difficult (4KB is quite big; I'd imagine only an error would exceed it, but I hate restrictions, even ones like this). Perhaps check the value of the process's esp register on each task switch? But that isn't very clean. There must be some way of causing an exception when it overflows: give the ss selector its own GDT entry for each process, catch the GPF, and grow the stack from there. I don't like that so much, it's messy, but it seems like the only reliable way.
f0dder 12 Dec 2008, 11:20
calpol2004: you don't feel like supporting multiple threads per process?
calpol2004 12 Dec 2008, 16:30
Haven't really thought about it; I'm much more concerned with making a functional memory manager.
I should probably treat each thread as a process, with lower quantums for threads within threads, and even lower for threads in a thread in a thread. Just have each one carry the same process identifier and share the same address space... Then there's the fact that they all have to have their own stack. More complex than I think, no doubt.
Nothing I do with the memory manager and heaps atm should affect how I'd use my task scheduler. At least I hope it won't. One thing at a time.
edfed 12 Dec 2008, 17:05
A bitmap is good in one case: hard disk sector mapping.
0 = free
1 = occupied/dead
Above this, there is the FILE SYSTEM.
Windows uses the file system, not a bitmap, to say how much space is used by a file/folder; and to find the free sectors, it has to scan the entire tree.
Linux is different: to know where the free sectors are, it has a bitmap. That's the better solution; it uses only 1/4096th of the drive. (As for how it counts how many sectors are free, I don't know, I don't use Linux.)
For memory it's different, as even a single byte can be allocated to a segment descriptor. Don't think about paging for now; there is the segmentation mechanism to allocate memory.
So I conclude that a memory bitmap is not a good idea: it uses too much memory.
I prefer to create a string: a chain of [freebytescount] and [usedbytescount] values, e.g.
0 free sectors
123 used sectors
432324 free bytes
4321434 used bytes
Another way to know how much memory is used, and where, is a RAM file system.
The OS, application, or anything else shall allocate the memory when loading something. The amount is calculated from the file size or header; this header is a designation from the file system, a system structure indicating what the file is.
When memory is requested, the MMU will seek in the freeram structure to find a free chunk, then allocate it by adding a new object to the freeram structure, and load the file into it.
When the file is discarded, its memory shall be freed. So, to close an application, you shall do it via a function that links file operations / disk operations / memory operations.
baldr 12 Dec 2008, 17:45
What does the computer look like from a process's point of view (in your execution environment)?
Can it support dynamic structures like heap/stack?
Multiple threads of execution?
Memory-mapped I/O (a linear frame buffer should be considered too; I don't think VBE updates the MTRRs for the LFB)?
There is much more information about the virtual address space to be described than a simple "present"/"not present" approach can capture…
"Don't belong. Never join. Think for yourself. Peace." – Victor Stone.
f0dder 12 Dec 2008, 22:30
edfed: linux filesystems (even ext) are moving away from the bitmap approach and towards extents-based approaches. Guess what NTFS uses (and has used for the past zillion years)?
Copyright © 1999-2023, Tomasz Grysztar. Also on GitHub, YouTube.
Website powered by rwasa.