flat assembler
Message board for the users of flat assembler.

flat assembler > Compiler Internals > [sug] targeted allocation of memory in Windows fasm

Goto page Previous  1, 2, 3  Next
revolution
When all else fails, read the source


Joined: 24 Aug 2004
Posts: 16782
Location: In your JS exploiting you and your system
I have RAM, but if one process commits it all while using only a small portion of it, then no one else gets to use the rest. How is that not a waste?
Post 11 Jan 2015, 23:27
l_inc



Joined: 23 Oct 2009
Posts: 881
revolution
You are right: it is a waste if you disable pagefiles, which is a huge disadvantage of disabling them. Otherwise committing merely increases the commit charge and everyone else still gets the memory. The whole of physical memory can still be used by others; there is no increase in the swapping and page fault rates, and hence no overall system performance degradation.
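The accounting being described can be sketched as a toy model (a Python illustration, not Windows' actual algorithm; all numbers are made up for the example):

```python
# Toy model of commit accounting: the commit limit is RAM + pagefile.
# Committing charges against that limit, but physical pages are only
# consumed once the memory is actually touched.
ram = 2 * 1024          # MB of physical RAM
pagefile = 2 * 1024     # MB of pagefile
commit_limit = ram + pagefile

committed = 3 * 1024    # a process commits 3GB...
touched = 64            # ...but only ever touches 64MB of it

assert committed <= commit_limit             # the commit succeeds
remaining_commit = commit_limit - committed  # what others may still commit
free_ram = ram - touched                     # physical RAM still available
assert remaining_commit == 1024
assert free_ram == 2 * 1024 - 64
```

With a pagefile, the untouched commit only eats "a portion of a number"; with the pagefile disabled, the commit limit equals physical RAM and the same commit really does lock others out.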

_________________
Faith is a superposition of knowledge and fallacy
Post 12 Jan 2015, 00:36
revolution
A pagefile only gives the illusion of having more RAM. There is no difference between simply buying extra RAM and using a pagefile to pretend there is more RAM. In both cases you will eventually exhaust the supply of available memory if all programs are wasteful and act like the RAM supply is infinite.

Perhaps the only "disadvantage" of disabling the pagefile is that I have to buy more RAM to compensate. With 2GB physical RAM + 2GB pagefile I can only run the same set of programs as with 4GB physical + 0GB pagefile. I became tired of Windows continually swapping stuff to the pagefile, so I installed more RAM instead. Now if only all of my programs would stop committing unneeded gigabytes of it, I wouldn't need a pagefile at all. The worst offender is not actually fasm, because I always use the -m option; the worst offenders are all those .NET managed monstrosities. I avoid those whenever possible.
Post 12 Jan 2015, 01:03
l_inc
revolution
Quote:
A pagefile only gives the illusion of having more RAM

Sure. Hence the name: virtual memory, which is by definition an illusion of (a large amount of) memory. The important point here is that it's better to waste an illusion (a portion of a number) than the real memory, which, as you rightly say, is wasted if the illusion is not engaged.
Quote:
Perhaps the only "disadvantage" of disabling the pagefile is that I have to buy more RAM to compensate

It is actually a huge disadvantage. But I wouldn't say it's the only one. Pagefiles are important for crash dumps: if you have the pagefile disabled, you won't be able to figure out why your system crashed.

Post 12 Jan 2015, 01:34
revolution
The last time my system crashed was something like 2006.
Post 12 Jan 2015, 04:23
l_inc
revolution
Lucky you. I get BSoDs much more often, with all kinds of third-party hardware drivers. Last time (about a month ago) they were caused by a Qualcomm Ethernet adapter driver.

Post 12 Jan 2015, 12:50
cod3b453



Joined: 25 Aug 2004
Posts: 619
A limit of 0x7FFC0000 would make sense: despite the potential of having 3GB of user memory, Windows still doesn't like a single allocation spanning the 2GB boundary, so you have a maximum of 2GB minus the existing allocations, and -4MB sounds about right (code + data + Win32 & Win64 thread stacks). I found this out the hard way at work a little while back.
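The arithmetic on that figure is easy to check; 0x7FFC0000 falls exactly 256 KB short of the 2GB line:

```python
# Quick arithmetic on the 0x7FFC0000 figure: its distance to the 2GB
# boundary, i.e. the space lost below 2GB in that measurement.
two_gb = 0x80000000
max_alloc = 0x7FFC0000
gap = two_gb - max_alloc
assert gap == 0x40000
assert gap == 256 * 1024   # 256 KB shy of the full 2GB
```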
Post 12 Jan 2015, 20:27
JohnFound



Joined: 16 Jun 2003
Posts: 3500
Location: Bulgaria
BTW, do you guys really need such huge memory blocks? Fresh IDE needs 256K for compilation, and it is the biggest program I have ever written: 328000 lines of code. What do you actually compile?
Post 12 Jan 2015, 23:14
l_inc
cod3b453
Quote:
despite the potential of having 3GB user memory it still doesn't like spanning the 2GB boundary

Such problems could theoretically be possible on a 64-bit system if one remembers canonical addressing and that addresses are signed values. Still, Windows has no problem crossing the 2GB boundary with a single allocation (just checked on both 64-bit Win7 and Win7 with a 3:1 split), even though the allocated region is then non-contiguous on a 64-bit system.

However I don't believe revolution is able to allocate a contiguous range of more than 2GB using two VirtualAlloc calls, because even on 64-bit Windows no such region exists in a 32-bit process. Not even a combined region of the kind xxx to 0x7fffffff plus 0xffffffff80000000 to yyy. That's because there's a user-kernel shared data region residing in the upper kilobytes of the lower half of the address space, and because the TEB, PEB and other regions reside at the top of the accessible address space. BTW, in a 64-bit process, allocating more than 2GB in a single call is not a problem.
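The signed-address point can be illustrated numerically: 0xffffffff80000000 is exactly the 64-bit sign extension of the 32-bit address 0x80000000 (a quick check; as the P.S. below notes, compatibility-mode addresses are not actually sign-extended, so this is only the arithmetic behind the hypothetical combined region):

```python
# Sign-extend a 32-bit value to 64 bits, as a signed pointer would be.
def sext32(addr):
    return (addr - (1 << 32)) & ((1 << 64) - 1) if addr & 0x80000000 else addr

# The upper half of a 32-bit address space maps to the top of the 64-bit one:
assert sext32(0x80000000) == 0xFFFFFFFF80000000
assert sext32(0x7FFFFFFF) == 0x000000007FFFFFFF
```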

JohnFound
How much memory would you expect the following snippet to eat up?
Code:
struc equexpand [vals] { common match x,vals \{ . equ x \} }
a equ a a
a equexpand a
a equexpand a
a equexpand a
a equexpand a
a equexpand a    
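One plausible model of why this explodes (an assumption about the preprocessor's expansion order, not verified against the fasm source): the value of `a` is expanded once when passed as a macro argument, and `equ` expands each resulting token again when redefining, so every `a equexpand a` roughly squares the token count:

```python
# Toy model (assumption: each redefinition squares the token count of `a`,
# because the value is expanded both as a macro argument and again by equ).
tokens = 2                  # after "a equ a a"
growth = [tokens]
for _ in range(5):          # the five "a equexpand a" lines
    tokens = tokens ** 2
    growth.append(tokens)
assert growth == [2, 4, 16, 256, 65536, 2 ** 32]   # billions of tokens by the end
```

Even if only simple doubling occurred, the growth would still be exponential; under the squaring model five innocent-looking lines already demand more memory than any 32-bit process can provide.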


P.S. I've blurred incorrect information (sure, the addresses are not sign-extended to 64-bit in a compatibility mode).

Post 13 Jan 2015, 01:59
revolution
l_inc wrote:
However I don't believe revolution is able to allocate a contiguous range of more than 2GB using two VirtualAlloc calls, because even on 64-bit Windows no such region exists in a 32-bit process. Not even a combined region of the kind xxx to 0x7fffffff plus 0xffffffff80000000 to yyy. That's because there's a user-kernel shared data region residing in the upper kilobytes of the lower half of the address space, and because the TEB, PEB and other regions reside at the top of the accessible address space.
Even a single allocation of 0x7ffc0000 already spans the 2GB boundary. See my example code in the first post of this topic: it already crosses 0x80000000. For my system it starts at 0x7fff0000 and goes to 0xfffb0000.
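The quoted range is itself a quick consistency check: its length is exactly the 0x7ffc0000 allocation, and it straddles 0x80000000:

```python
# The range reported above: base and end of the single allocation.
start, end = 0x7FFF0000, 0xFFFB0000
assert end - start == 0x7FFC0000     # the full allocation size
assert start < 0x80000000 < end      # straddles the 2GB boundary
```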
l_inc wrote:
Btw. in a 64 bit process allocating more than 2GB in a single call is not a problem.
fasm is a 32-bit program ATM. If it ever goes to 64-bit then many things dealing with address spaces would become non-problems.
Post 13 Jan 2015, 03:27
JohnFound
l_inc, I am aware that memory-eating constructions are possible, but they usually are not useful for practical programming. Just play for fun and education with FASM macro features.
Post 13 Jan 2015, 06:10
revolution
JohnFound wrote:
BTW, do you guys really need such huge memory blocks? Fresh IDE needs 256K for compilation, and it is the biggest program I have ever written: 328000 lines of code. What do you actually compile?
I don't think I've ever used 2GB but I have needed a large space in some circumstances.
Code:
;...
.data
  static_alloc1 rd 1 shl 28 ;this is going to need 1GB
  static_alloc2 rd 1 shl 20 ;and another 4MB
;...    
Anyhow, why not maximise the allocation when it doesn't harm the actual usage? It is available if we ever need it.
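Since rd reserves dwords (4 bytes each), the sizes in the comments above check out:

```python
DWORD = 4  # rd reserves 4-byte units
assert (1 << 28) * DWORD == 1 << 30          # static_alloc1: 1GB
assert (1 << 20) * DWORD == 4 * 1024 * 1024  # static_alloc2: 4MB
```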
Post 13 Jan 2015, 09:44
l_inc
revolution
Quote:
Even a single allocation of 0x7ffc0000 already spans the 2GB boundary. See my example code

I don't think I understand who you're talking to, because that's pretty much what I said.
Quote:
For my system it starts at 0x7fff0000 and goes to 0xfffb0000

That's because this is the largest contiguous free virtual address range. A range directly below 0x7fff0000 is already reserved for the shared user-kernel area and a range above 0xfffb0000 is reserved for the PEB/TEB related structures. Therefore you won't be able to allocate the ranges contiguously even with multiple calls to VirtualAlloc, which you claimed to be able to do.
Quote:
fasm is a 32-bit program ATM

This is not related to the discussion of whether there is a 0x7ffc0000 limitation for a single allocation.

Post 13 Jan 2015, 12:59
revolution
l_inc wrote:
That's because this is the largest contiguous free virtual address range. A range directly below 0x7fff0000 is already reserved for the shared user-kernel area and a range above 0xfffb0000 is reserved for the PEB/TEB related structures. Therefore you won't be able to allocate the ranges contiguously even with multiple calls to VirtualAlloc, which you claimed to be able to do.
You are referring to user-allocated space. There are system-allocated pages as well, but all of the gaps can be closed, thus making a large contiguous range. So one's code needs to be careful about relying upon faults to detect out-of-bounds memory accesses. This is why I placed the range checks in the code I posted: to make sure that we are doing what we expect.
Post 13 Jan 2015, 13:15
l_inc
revolution
Again, this is a completely different discussion. So I guess we have closed the question of whether Windows explicitly imposes a limit on the largest single allocation. It's just a matter of the chance of hitting a previously reserved range.

Post 13 Jan 2015, 13:19
revolution
I am actually quite comfortable with the 2GB limit. But if there is a need to increase it, that can be done with a second allocation, and we can use the "additional_memory" section to contain the other part. It would also require doing two range checks in the exception code if we want to be able to catch unexpected memory accesses (which I think is desirable), and an additional call to free the memory upon exit.
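The two range checks could look like this in outline (a sketch only: the region addresses here are made up, and the real test would live in fasm's exception handler, in assembly):

```python
# Sketch: with two separately allocated regions, the exception handler's
# bounds test must consult both ranges (addresses are hypothetical).
regions = [(0x10000000, 0x50000000), (0x60000000, 0x90000000)]

def is_expected_access(addr):
    return any(start <= addr < end for start, end in regions)

assert is_expected_access(0x20000000)       # inside the first region
assert not is_expected_access(0x55000000)   # the gap between regions: a real fault
```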
Post 13 Jan 2015, 13:35
l_inc
revolution
The question is about practical relevance vs. implementation complexity. E.g., it would be good to have if nothing needed to be done for it. If you say you are comfortable with 2GB (somewhat less, actually), then that's a minus point for its practical relevance.

JohnFound
Quote:
memory eating constructions are possible

The example was supposed to show that it's not always obvious whether one or another construct is memory hungry. With something like rept large_value you can be sure a lot of memory is consumed, but here it is far less apparent.
Quote:
but they usually are not useful for practical programming

That's really about being cautious while implementing macros. The recent discussion about lists of strings shows that it's not always trivial to stay thrifty. E.g., self-redefining (including recursive) macros can be quite memory hungry, so one had better avoid those.

Post 13 Jan 2015, 13:57
revolution
l_inc wrote:
The question is about practical relevance vs. implementation complexity. E.g., it would be good to have if nothing needed to be done for it. If you say you are comfortable with 2GB (somewhat less, actually), then that's a minus point for its practical relevance.
For me, yes. But I am not the only user of fasm. Others may have needs or desires for more space.

My palindromic post number is 12321
Post 13 Jan 2015, 14:07
revolution
Please see further ahead in the thread for updated code with enhancements and bug fixes.

Attached are the updated SYSTEM.INC file and the cumulative patch file, which allocates memory on 64-bit systems in two parts, giving up to 3GB of space. If you specify a memory usage value on the command line then it drops back to the previous allocation algorithm, with a single allocated section and a 3:1 split between the main and additional parts.

On 32-bit systems without the /3GB switch the code will allocate only one section as previously.


Description: SYSTEM.INC giving larger memory availability
Download
Filename: SYSTEM.INC
Filesize: 10.31 KB
Downloaded: 246 Time(s)

Description: Cumulative patch file
Download
Filename: Patch.txt
Filesize: 5.39 KB
Downloaded: 280 Time(s)

Post 16 Jan 2015, 14:44
revolution
There is a bug with the previous implementation. If one of the source files is zero bytes in length then it erroneously shows an out of memory error.

It can be fixed by adding these three lines to SYSTEM.INC:
Code:
        jz      file_error
        clc
        ret
 read:
        mov     ebp,ecx
+       test    ecx,ecx
+       jz      .zero_length
        push    edx
        push    PAGE_READWRITE
        push    MEM_COMMIT
        push    ecx             ;allocate the required number of bytes
        push    edx             ;at this address
        call    [VirtualAlloc]
        pop     edx
        test    eax,eax
        jz      out_of_memory
+    .zero_length:
        push    0
        push    bytes_count
        push    ebp
        push    edx
        push    ebx    
Attached are the updated files.
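The logic of the fix, in outline (a Python model of the control flow only, with a stand-in allocator; presumably VirtualAlloc rejects a zero-size commit request, which the old code then misreported as out of memory):

```python
# Model of the patched read path: a zero-byte file leads to a zero-size
# commit request, so the patch skips the commit entirely in that case
# (the added "test ecx,ecx / jz .zero_length" guard).
def read_file(size, virtual_alloc):
    if size == 0:            # nothing to commit, nothing to read
        return b""
    if virtual_alloc(size) is None:
        raise MemoryError("out of memory")
    return b"\0" * size      # stand-in for the actual ReadFile call

# An allocator that, like VirtualAlloc, fails on a zero-size request:
failing_alloc = lambda size: None if size == 0 else object()
assert read_file(0, failing_alloc) == b""   # no longer an out-of-memory error
```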


Description: Cumulative patch file
Download
Filename: Patch.txt
Filesize: 5.27 KB
Downloaded: 265 Time(s)

Description: SYSTEM.INC giving larger memory availability
Download
Filename: SYSTEM.INC
Filesize: 10.43 KB
Downloaded: 224 Time(s)

Post 13 Apr 2015, 15:48




Copyright © 1999-2019, Tomasz Grysztar.

Powered by rwasa.