flat assembler
Message board for the users of flat assembler.

Index > OS Construction > Building a memory bitmap for 64bit

neville



Joined: 13 Jul 2008
Posts: 507
Location: New Zealand
neville 10 Dec 2008, 02:57
f0dder wrote:
Quote:
neville: as discussed in this thread, not having a paging mechanism poses a lot of problems... like getting your memory fragmented. Allocating big fixed chunks at application startup isn't really a viable solution for typical real-world applications...
Depends what you mean by "big". And today's "big" is tomorrow's "small" is next week's "tiny". In FAMOS I allocate memory in multiples of 512kB chunks (I call them "smegs" = semi-megs). It also depends on how applications are written - to make memory management easier or harder. On the contrary, paging can produce physical RAM fragmentation but the OS would be unaware of it.
Quote:
Yes, a CPU has to access physical memory, but can you feel a speed difference (on current, real-world x86 hardware) between accessing (physical) pages that are scattered all over RAM vs. physical pages that are allocated contiguously?
I don't use pages, not even physical ones, only flat linear memory... Smile . But yes, there would be a speed difference, larger or smaller depending on many factors. Sometimes I'm sure it could be noticeable.
Quote:
And designing an alternative to paging, for a theoretical CPU, is non-trivial.
I just designed it. Here: MOV CR0.PG,0
Very trivial alternative to paging which works Wink

_________________
FAMOS - the first memory operating system
Chewy509



Joined: 19 Jun 2003
Posts: 297
Location: Bris-vegas, Australia
Chewy509 11 Dec 2008, 06:40
Quote:
(I call them "smegs" = semi-megs).


Umm, you might want to come up with a better term than 'smegs': http://answers.yahoo.com/question/index?qid=20071103081129AANMN53

Shocked
f0dder



Joined: 19 Feb 2004
Posts: 3175
Location: Denmark
f0dder 11 Dec 2008, 08:03
neville wrote:
f0dder wrote:
Quote:
Yes, a CPU has to access physical memory, but can you feel a speed difference (on current, real-world x86 hardware) between accessing (physical) pages that are scattered all over RAM vs. physical pages that are allocated contiguously?
I don't use pages, not even physical ones, only flat linear memory... Smile . But yes, there would be a speed difference, larger or smaller depending on many factors. Sometimes I'm sure it could be noticeable.
Have you quantified this, or are you just guessing?

Quote:
Quote:
And designing an alternative to paging, for a theoretical CPU, is non-trivial.
I just designed it. Here: MOV CR0.PG,0
Very trivial alternative to paging which works Wink
Might work for a small hobby kernel or very specific embedded OS, but isn't really suitable for a generic OS, imho.

_________________
Image - carpe noctem
calpol2004



Joined: 16 Dec 2004
Posts: 110
calpol2004 11 Dec 2008, 22:11
Ah, linked lists! A good idea, a definite candidate, but I'm currently toying with this idea:

I will use a bitmap, but with 4MB blocks instead of 4kB blocks, so the bitmap would be 1024 times smaller. Simple tricks, such as starting each scan from the last freed run of blocks, and once that's used up scanning from the end of the last allocated block, should make finding a spare block very fast and resistant to fragmentation.

Of course this on its own is reckless: lots of RAM would be allocated but little of it used. So I'm going to use heaps, with a heap-allocate call where instead of allocating a new 4MB block you allocate part of a block that's already allocated. This removes the issue of programs allocating a 4MB block for something a few kB big, and the heap can be expanded by another block as needed. The advantage is that programs can allocate large pieces of memory quickly with memalloc (a block of x*4MB) BUT can also allocate small pieces of memory quickly from within those blocks.

Searching the heap's bitmap would be quick too, as it's sized to the heap rather than the whole address space (1024 bits for 4kB sub-blocks inside a 4MB heap; with MMX/SSE that's what, a handful of instructions?).

The main issues are fragmentation within the heap and some wasted RAM. Since the heap itself is paged I will have to deal with the fragmentation somehow, and since many process heaps won't always be full there will be some wasted RAM, but that's the cost of improved processor performance.

This can kill some low-end systems though. A PC with 32MB of RAM, for example, would lose a large percentage of its memory to bitmaps, page tables and partially filled heaps. It feels like I'm building a system for a computer with a 100MHz CPU but 1GB of RAM (exaggerating :p), if you get what I'm saying. Finding the balance between memory usage and CPU speed is difficult.

Some stuff to think about. IMO the design choices you make when building a kernel are much harder than the actual code. I do need to think over some alternative methods though, like linked lists.
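A minimal C sketch of the block-bitmap idea above: one bit per 4MB block, with a next-fit scan that resumes from where the last allocation left off instead of always restarting at bit 0. All names and sizes here are illustrative, not from the post.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: one bit per 4 MB block, next-fit scan.
 * NUM_BLOCKS = 1024 covers 4 GB of RAM at 4 MB per block. */
#define NUM_BLOCKS 1024
static uint8_t bitmap[NUM_BLOCKS / 8];
static unsigned next_hint = 0;       /* where the last scan left off */

/* Returns a free block index (physical addr = index * 4 MB), or -1. */
static int block_alloc(void)
{
    for (unsigned n = 0; n < NUM_BLOCKS; n++) {
        unsigned i = (next_hint + n) % NUM_BLOCKS;
        if (!(bitmap[i / 8] & (1u << (i % 8)))) {
            bitmap[i / 8] |= 1u << (i % 8);   /* mark as used */
            next_hint = (i + 1) % NUM_BLOCKS; /* resume here next time */
            return (int)i;
        }
    }
    return -1;                       /* out of memory */
}

static void block_free(int i)
{
    bitmap[i / 8] &= ~(1u << (i % 8));
}
```

The next-fit hint is what gives the fragmentation resistance described above: recently freed low blocks are not immediately reused, so allocations tend to spread forward until the scan wraps around.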
bitRAKE



Joined: 21 Jul 2003
Posts: 4159
Location: vpcmpistri
bitRAKE 12 Dec 2008, 02:30
How do you plan to handle process stacks? Will they grow dynamically from the heap?
calpol2004



Joined: 16 Dec 2004
Posts: 110
calpol2004 12 Dec 2008, 08:54
Well, I was just going to have a fixed stack, 4kB in size. But keeping the stack on a heap would be better; how it would grow dynamically I'm unsure of, though.

Detecting when a process's stack is about to fill up might be difficult (4kB is quite big, and I'd imagine only an error would exceed it, but I hate restrictions, even ones like this). Perhaps check the value of the process's ESP register on each task switch? But that isn't very clean. There must be some way of causing an exception when it overflows: give each process's SS selector its own GDT entry with a limit, and catch the resulting GPF, then grow the stack dynamically. I don't like that so much, it's messy, but it seems like the only reliable way.
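The "check ESP on each task switch" idea could be sketched roughly like this in C. The structure and names are made up for illustration; a guard page (or the segment-limit trick) is the more reliable mechanism since it catches the overflow at the moment it happens rather than at the next switch.

```c
#include <assert.h>
#include <stdint.h>

#define STACK_RED_ZONE 256           /* bytes of headroom before we complain */

/* Hypothetical per-task record: stack bounds plus the ESP value
 * saved when the task was switched out. Stack grows downward. */
struct task {
    uint32_t saved_esp;              /* captured at switch-out */
    uint32_t stack_bottom;           /* lowest valid stack address */
    uint32_t stack_top;              /* initial ESP */
};

/* The scheduler could refuse to resume (or grow the stack of) any task
 * whose saved ESP has crossed into the red zone near the bottom. */
static int stack_ok(const struct task *t)
{
    return t->saved_esp > t->stack_bottom + STACK_RED_ZONE &&
           t->saved_esp <= t->stack_top;
}
```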
f0dder



Joined: 19 Feb 2004
Posts: 3175
Location: Denmark
f0dder 12 Dec 2008, 11:20
calpol2004: you don't feel like supporting multiple threads per process?
calpol2004



Joined: 16 Dec 2004
Posts: 110
calpol2004 12 Dec 2008, 16:30
Haven't really thought about it; I'm much more concerned with making a functional memory manager.

I should probably treat each thread as a process, with lower quantums for threads within threads, and even lower for threads within a thread within a thread. Each one would just carry the same process identifier and share the same address space... Then there's the fact that they all have to have their own stack. More complex than I think, no doubt.

Nothing I do with the memory manager and heaps at the moment should affect how I'd use my task scheduler. At least I hope it won't. One thing at a time.
edfed



Joined: 20 Feb 2006
Posts: 4354
Location: Now
edfed 12 Dec 2008, 17:05
A bitmap is good in one case: hard disk sector mapping.
0 = free
1 = occupied/dead

Above this, there is the FILE SYSTEM.

Windows uses the file system to say how much space is used by a file/folder, not a bitmap. What's more, to find where the free sectors are, it will scan the entire tree.

Linux is different: to know where the free sectors are, it has a bitmap. That's the better solution; it uses only 1/4096th of the drive.

Memory is different, since as little as ONE byte can be allocated to a segment descriptor. Don't think about paging for now; there is the segmentation mechanism to allocate memory.

How to count how many sectors are free, I don't know; I don't use Linux.

So I conclude that a memory bitmap is not a good idea: it uses too much memory.
I prefer to create a string: a chain of [freebytescount] and [usedbytescount] pairs.
For example:
0,123,432324,4321434,4343,343,1234,1,2,1,4324,123,2345234234,2134,34,4
0 free bytes
123 used bytes
432324 free bytes
4321434 used bytes
etc. etc...
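The alternating free/used chain above is a run-length encoding, and searching it could look like this minimal C sketch (function and variable names are illustrative, not from the post): even positions hold free-run lengths, odd positions hold used-run lengths, and a scan accumulates offsets until a free run is big enough.

```c
#include <assert.h>
#include <stddef.h>

/* Return the start offset of the first free run of at least `want`
 * bytes, or -1 if none exists. Even indices are free runs, odd
 * indices are used runs, as in the chain described above. */
static long find_free(const unsigned long *runs, size_t nruns,
                      unsigned long want)
{
    unsigned long offset = 0;
    for (size_t i = 0; i < nruns; i++) {
        if (i % 2 == 0 && runs[i] >= want)   /* even index: a free run */
            return (long)offset;
        offset += runs[i];                   /* skip past this run */
    }
    return -1;
}
```

The trade-off against a bitmap: the chain is tiny when memory is unfragmented, but it grows with the number of allocations, and splitting or merging runs on every alloc/free needs list surgery that a bitmap avoids.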

Another way to know how much memory is used, and where, is a RAM file system.

The OS, application, or anything else shall allocate the memory when loading something. The amount is calculated from the file size or header; this header is a designation from the file system, a system structure indicating what the file is.

When memory is requested, the memory manager will seek in the freeram structure to find a free chunk of memory.

Then it allocates the memory, records a new object in the freeram structure, and loads the file into it.

When the file is discarded, its memory shall be freed. So, to close an application, you should do it via a function that links file operations / disk operations / memory operations.

Let's go!
baldr



Joined: 19 Mar 2008
Posts: 1651
baldr 12 Dec 2008, 17:45
calpol2004,

How does the computer look from the process's point of view (in your execution environment)?
Can it support dynamic structures like heap/stack?
Multiple threads of execution?
Memory-mapped files?
User/supervisor separation?
Memory-mapped I/O (a linear frame buffer should be considered too; I don't think VBE updates the MTRRs for the LFB)?

There is much more information about a virtual address space to be described than a simple "present"/"not present" approach…

_________________
"Don't belong. Never join. Think for yourself. Peace." – Victor Stone.
f0dder



Joined: 19 Feb 2004
Posts: 3175
Location: Denmark
f0dder 12 Dec 2008, 22:30
edfed: Linux filesystems (even the ext family) are moving away from the bitmap approach and towards extent-based approaches. Guess what NTFS uses (and has used for the past zillion years)? Smile
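For contrast with the bitmap discussion above, here is a minimal C sketch of what "extents" means: a file (or free-space) map becomes a short list of (start, length) runs, so a contiguous 1000-block file costs one entry instead of 1000 bits. This is purely illustrative and not the actual ext4 or NTFS on-disk format.

```c
#include <assert.h>
#include <stdint.h>

/* One contiguous run of blocks. */
struct extent {
    uint64_t start;                  /* first physical block of the run */
    uint64_t length;                 /* number of contiguous blocks */
};

/* Map a logical block number to a physical block by walking the
 * extent list; returns -1 past end of file. */
static int64_t extent_lookup(const struct extent *ext, int n,
                             uint64_t logical)
{
    for (int i = 0; i < n; i++) {
        if (logical < ext[i].length)
            return (int64_t)(ext[i].start + logical);
        logical -= ext[i].length;    /* skip this run, keep the remainder */
    }
    return -1;
}
```

The appeal is exactly the one argued in this thread: when allocations are mostly contiguous, extents are far more compact than bitmaps, but badly fragmented storage degenerates into one extent per block.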
Copyright © 1999-2025, Tomasz Grysztar.