flat assembler
Message board for the users of flat assembler.
> Main > Memory resizing strategy
vid 01 Dec 2010, 13:11
Whenever allocation becomes a problem (speed- or size-wise), the developer should write custom memory management suited to the particular task. Otherwise, I think a plain *1.5 is just fine. No single strategy is best for every possible case, and there is no need to think about solutions to nonexistent problems.
JohnFound 01 Dec 2010, 13:31
Custom memory management will not solve the problem, because the problem is not tied to any particular memory manager.
As for the plain *1.5 - as a former Delphi programmer, I can only agree with this approach. But many C and C++ people will claim a plain *2 is better and faster... (It is also possible to use a plain *1.25 or *1.125, which are at least as easy to compute and cause very little overallocation.)
vid 01 Dec 2010, 14:43
I meant that when this causes real problems for a particular application, then that application shouldn't use a generic memory manager. So I don't see any reason to spend too much thought on this.
edfed 01 Dec 2010, 15:58
push dword [count]   ; count of bytes
call malloc          ; find first available memory chunk
pop  dword [fptr]    ; far pointer to memory chunk

It appears that the problem is malloc only.
f0dder 02 Dec 2010, 08:14
Don't have any links on hand, but I recall being told that the *2 factor was chosen after a fair amount of research and testing - that for generic purposes it was the best compromise between not wasting too much memory and not wasting too much [s]memory[/s] time on reallocations.

You'll have to keep in mind that wasted memory is only part of the problem - consider memory fragmentation. It might actually be a much better strategy to overshoot the allocation by a fair amount if that reduces the number of fragmented, small-to-the-point-of-being-unusable free holes in your heap.

I kinda agree with vid anyway - if memory allocation becomes a legitimate concern in your application, move to custom allocators: custom resize strategies, pooled allocation, you name it.

Also, if you need a lot of resizing, consider not using a linear-layout data structure. The C++ deque container, for instance, would typically allocate in "linked chunks":

Quote: Both vectors and deques provide a very similar interface and can be used for similar purposes, but internally they work in quite different ways: while vectors are very similar to a plain array that grows by reallocating all of its elements in a single block when its capacity is exhausted, the elements of a deque can be divided into several chunks of storage, with the class keeping track of all this information and providing uniform access to the elements. Deques are therefore a little more complex internally, but this generally allows them to grow more efficiently than vectors with automatically managed capacity, especially for large sequences, because massive reallocations are avoided.

Last edited by f0dder on 02 Dec 2010, 09:34; edited 1 time in total
f0dder 02 Dec 2010, 08:17
edfed wrote: push dword[count] ;count of bytes

_________________
- carpe noctem
JohnFound 02 Dec 2010, 09:02
Quote: Don't have any links on hand, but I recall being told that the *2 factor had been chosen after a fair amount of research and testing - that for generic purposes it was the best compromise between not wasting too much memory and not wasting too much memory.

Well, it would be interesting to read some arguments for this opinion, and some texts on what was tested and how. Actually, I know that for specific tasks a custom memory manager is best. Here I am talking specifically about "general purpose" memory allocation strategies.
sinsi 02 Dec 2010, 09:30
f0dder wrote: best compromise between not wasting too much memory and not wasting too much memory.

So it was arbitrary then
f0dder 02 Dec 2010, 09:35
JohnFound wrote: Well, it would be interesting to read some arguments about this opinion and some texts what was tested and how.
sinsi wrote:

_________________
- carpe noctem
JohnFound 02 Dec 2010, 10:36
BTW, the initial size of the memory block matters too in the size*multiplier approach. The main problem with smaller multipliers is that the buffer grows too slowly at small sizes. Actually, an approach where the multiplier is large at the beginning and then becomes smaller could improve memory usage without hurting speed too much...
But the problem with this variant is that we don't know when to make the multiplier smaller, because we don't know the final size of the memory. On the other hand, if we knew the final size, we could allocate it all at once. What if we computed some statistics in the allocation procedure and then optimized the resize strategy using those statistics?
f0dder 02 Dec 2010, 10:46
JohnFound wrote: What if we compute some statistics on the allocation procedure and then optimize the resize strategy using this statistics.

_________________
- carpe noctem
bogdanontanu 02 Dec 2010, 15:50
I usually use something like new_size = old_size + empiric_constant; in my C/C++ programs,
with empiric_constant = N*sizeof(ONE_ITEM); where N is calculated from my experience and tests with the algorithm at hand. From my experience I have found that new_size = old_size*2; is too much for real-life production code.
Kain 03 Dec 2010, 22:58
If I don't know what to expect, I use (size needed * 2) + padding,
where padding is large enough to provide overflow space for moving instructions.
drobole 04 Dec 2010, 01:23
How about leaving it to the client to decide the block size?
Copyright © 1999-2024, Tomasz Grysztar. Also on GitHub, YouTube.
Website powered by rwasa.