flat assembler
Message board for the users of flat assembler.
Index
> Compiler Internals > [sug] targeted allocation of memory in Windows fasm
revolution 11 Jan 2015, 23:27
I have RAM, but if one process commits it all while using only a small portion, then no one else gets to use the rest. How is that not a waste?
revolution 12 Jan 2015, 01:03
A pagefile only gives the illusion of having more RAM. There is no difference between simply buying extra RAM and using a pagefile to pretend there is more RAM: in both cases you will eventually exhaust the supply of available memory if all programs are wasteful and act as though the RAM supply is infinite.
Perhaps the only "disadvantage" of disabling the pagefile is that I have to buy more RAM to compensate. With 2GB physical RAM + 2GB pagefile I can only run the same set of programs as with 4GB physical + 0GB pagefile. I became tired of Windows continually swapping stuff to the pagefile, so I installed more RAM instead. Now if only all of my programs would stop committing unneeded gigabytes of it, I wouldn't need a pagefile. The worst offender is actually not fasm, because I always use the -m option; the worst offenders are all those .NET managed monstrosities. I avoid those whenever possible.
l_inc 12 Jan 2015, 01:34
revolution
Quote: A pagefile only gives the illusion of having more RAM
Sure. Hence the name: virtual memory, which is by definition an illusion of (a large amount of) memory. An important point here is that it's better to waste an illusion (a portion of a number) than the real memory, which (you are right) is wasted if the illusion is not engaged.
Quote: Perhaps the only "disadvantage" of disabling the pagefile is that I have to buy more RAM to compensate
It is actually a huge disadvantage. But I wouldn't say it's the only one. Pagefiles are also important for crashdumps: if you have the pagefile disabled, you won't be able to figure out why your system crashed.
_________________
Faith is a superposition of knowledge and fallacy
revolution 12 Jan 2015, 04:23
The last time my system crashed was something like 2006.
l_inc 12 Jan 2015, 12:50
revolution
Lucky you. I get BSoDs much more often, caused by all kinds of third-party hardware drivers. The last ones (about a month ago) were caused by a Qualcomm ethernet adapter driver.
cod3b453 12 Jan 2015, 20:27
The 0x7FFC0000 figure would make sense: despite potentially having 3GB of user memory, Windows still doesn't like an allocation spanning the 2GB boundary, so you have a maximum of 2GB minus the existing allocations, and minus 4MB sounds about right (code + data + Win32 & Win64 thread stacks). I found this out the hard way at work a little while back.
JohnFound 12 Jan 2015, 23:14
BTW, do you guys really need such huge memory blocks? Fresh IDE needs 256K for compilation, and it is the biggest program I have ever written - 328000 lines of code. What do you actually compile?
l_inc 13 Jan 2015, 01:59
cod3b453
Quote: despite the potential of having 3GB user memory it still doesn't like spanning the 2GB boundary
Such problems could theoretically be possible on a 64-bit system if one remembers canonical addressing and that addresses are signed values. Still, Windows has no problem crossing the 2GB boundary with a single allocation (just checked on both 64-bit Win7 and Win7 with a 3:1 split), even though the allocated region is then non-contiguous on a 64-bit system. However, I don't believe revolution is able to allocate a contiguous range of more than 2GB using two VirtualAlloc calls, because even on 64-bit Windows no such region exists in a 32-bit process. Not even a combined region of the kind xxx to 0x7fffffff plus 0xffffffff80000000 to yyy. That's because there's a user-kernel shared data region residing in the upper kilobytes of the lower half of the address space, and because of the TEB, PEB and other regions residing at the top of the accessible address space. Btw., in a 64-bit process, allocating more than 2GB in a single call is not a problem.
JohnFound
How much memory would you expect the following snippet to eat up?
Code:
struc equexpand [vals] { common match x,vals \{ . equ x \} }
a equ a a
a equexpand a
a equexpand a
a equexpand a
a equexpand a
a equexpand a

P.S. I've blurred the incorrect information (sure, the addresses are not sign-extended to 64 bits in compatibility mode).
_________________
Faith is a superposition of knowledge and fallacy
revolution 13 Jan 2015, 03:27
l_inc wrote:
However I don't believe revolution to be able to allocate a contiguous range of more than 2GB using two VirtualAlloc calls, because even on a 64 bit Windows no such region exists in a 32 bit process. Not even a combined region of kind: xxx to 0x7fffffff and 0xffffffff80000000 to yyy. That's because there's a user-kernel shared data region residing in the upper kilobytes of the lower half of the addressing space and because of the TEB, PEB and other regions residing at the top of the accessible addressing space.
Even with a single allocation of 0x7ffc0000 it already spans across the 2GB boundary. See my example code. For my system it starts at 0x7fff0000 and goes to 0xfffb0000.
l_inc wrote:
Btw. in a 64 bit process allocating more than 2GB in a single call is not a problem.
fasm is a 32-bit program ATM.
JohnFound 13 Jan 2015, 06:10
l_inc, I am aware that memory-eating constructions are possible, but they are usually not useful for practical programming. Just a play for fun and education with FASM macro features.
revolution 13 Jan 2015, 09:44
JohnFound wrote: BTW, do you guys really need such huge memory blocks? Fresh IDE needs 256K for compilation and it is the biggest program I ever wrote - 328000 lines of code. What you actually compile?
Code:
;...
.data

static_alloc1 rd 1 shl 28 ;this is going to need 1GB
static_alloc2 rd 1 shl 20 ;and another 4MB

;...
l_inc 13 Jan 2015, 12:59
revolution
Quote: Even with a single allocation of 0x7ffc0000 it already spans across the 2GB boundary. See my example code
I don't think I understand who you're talking to, because that's pretty much what I said.
Quote: For my system it starts at 0x7fff0000 and goes to 0xfffb0000
That's because this is the largest contiguous free virtual address range. The range directly below 0x7fff0000 is already reserved for the shared user-kernel area, and the range above 0xfffb0000 is reserved for the PEB/TEB-related structures. Therefore you won't be able to allocate the ranges contiguously even with multiple calls to VirtualAlloc, which you claimed to be able to do.
Quote: fasm is a 32-bit program ATM
This is not related to the discussion of whether there is a 0x7ffc0000 limitation on a single allocation.
_________________
Faith is a superposition of knowledge and fallacy
revolution 13 Jan 2015, 13:15
l_inc wrote:
That's because this is the largest contiguous free virtual address range. A range directly below 0x7fff0000 is already reserved for the shared user-kernel area and a range above 0xfffb0000 is reserved for the PEB/TEB related structures. Therefore you won't be able to allocate the ranges contiguously even with multiple calls to VirtualAlloc, which you claimed to be able to do.
l_inc 13 Jan 2015, 13:19
revolution
Again, this is a completely different discussion. So I guess we have closed the question of whether Windows explicitly imposes a limit on the largest single allocation: it's just a matter of chance whether you hit a previously reserved range.
_________________
Faith is a superposition of knowledge and fallacy
revolution 13 Jan 2015, 13:35
I am quite comfortable with the 2GB limit, actually. But if there is a need to increase it, then it can be done with a second allocation, and we can use the "additional_memory" section to contain the other part. It would also require doing two range checks in the exception code if we want to be able to catch unexpected memory accesses (which I think is desirable), and an additional call to free the memory upon exit.
l_inc 13 Jan 2015, 13:57
revolution
The question is about practical relevance vs. implementation complexity. E.g., it would be good to have if nothing had to be done for it. If you say you are comfortable with 2GB (somewhat less, actually), then that's a point against the practical relevance.
JohnFound
Quote: memory eating constructions are possible
The example was supposed to show that it's not always obvious whether one or another construct is memory hungry. If you see a rept large_value, you can be sure memory is being wasted, but other cases are less obvious.
Quote: but they usually are not useful for practical programming
That's really about being cautious while implementing macros. The recently popped-up discussion about lists of strings shows that it's not always trivial to stay thrifty. E.g., self-redefining (including recursive) macros can be quite memory hungry, so one had better avoid those.
_________________
Faith is a superposition of knowledge and fallacy
revolution 13 Jan 2015, 14:07
l_inc wrote:
The question is about practical relevance vs. implementation complexity. E.g., good to have if nothing's to be done for that. If you say you are comfortable with 2GB (somewhat less actually) then it's a minus point to the practical relevance.
My palindromic post number is 12321
revolution 16 Jan 2015, 14:44
Please see further ahead in the thread for updated code with enhancements and bug fixes.
Attached are the updated SYSTEM.INC file and the cumulative patch file, which allocate memory on 64-bit systems in two parts, giving up to 3GB of space. If you specify a memory usage value on the command line, it will drop back to the previous allocation algorithm, with a single allocated section and a 3:1 split between the main and additional parts. On 32-bit systems without the /3GB switch, the code will allocate only one section, as previously.
revolution 13 Apr 2015, 15:48
There is a bug in the previous implementation: if one of the source files is zero bytes in length, it erroneously shows an out-of-memory error.
It can be fixed by adding these three lines (marked with +) to SYSTEM.INC:
Code:
	jz	file_error
	clc
	ret
read:
	mov	ebp,ecx
+	test	ecx,ecx
+	jz	.zero_length
	push	edx
	push	PAGE_READWRITE
	push	MEM_COMMIT
	push	ecx		;allocate the required number of bytes
	push	edx		;at this address
	call	[VirtualAlloc]
	pop	edx
	test	eax,eax
	jz	out_of_memory
+    .zero_length:
	push	0
	push	bytes_count
	push	ebp
	push	edx
	push	ebx
Copyright © 1999-2024, Tomasz Grysztar.