Message board for the users of flat assembler.
> Heap > Intel's TSX thing
the only benefit I can see in Intel TSX ( http://en.wikipedia.org/wiki/Intel_TSX )
comes from the read/write consistency on a single cache line (64 bytes).
the last good instruction on older processors was CMPXCHG16B, allowing a perfect 16-byte lock.
I didn't read the docs carefully,
nor can I test the new instructions XACQUIRE, XTEST, etc. (by the way, are they in FASM already?), so I may be missing something, ok.
now, granted there are improvements from the fact that L1 set associativity has been scaled to 8 on Haswell,
you still have to reinvent the wheel to ensure coherence for transactions involving more than one line of data and/or
for accesses spanning the whole set.
this interesting document about Haswell
confirms to me that on older processors it is exactly the same, only relative to the L1 size or associativity.
(I have talked here on the board about my own 3C rule, which gives empirical guideline formulas)
Intel states this (from http://software.intel.com/en-us/blogs/2013/06/23/tsx-fallback-paths ); I quote it again:
"If you already have a good lockless algorithm implemented, in many cases it may be fast"
but that is just the point: if I had a good algorithm, I would simply scale it to Haswell's properties,
also avoiding reinventing the wheel with that new technology!
what I definitely don't need is their "fast-path" library, or a presumed GCC compiling capability.
also, if you can point me to documents explaining/showing the benefits of this
TSX thing, I will be very thankful.
|08 Dec 2013, 07:07||
Acquire and release mechanisms are good in certain situations, and lock mechanisms are also good in certain situations. It is a mistake to think that one method will be better in all situations. One must look at one's requirements and decide, based on those, which mechanism is best suited.
Note: These are by no means the only two possible mechanisms available for synchronisation, but they are the most commonly used and most easily understood in most cases.
|08 Dec 2013, 07:22||
Interesting. IBM also came out with this after many decades of supporting many other locking and serialization mechanisms. I wonder if it has to do with demand from database or high throughput transaction code. Because as far as I can see (on IBM, where I work) we really don't need this. IBM has a guaranteed consistency model that doesn't require memory barriers. I realize this is different from Intel and various other microprocessors.
|17 Dec 2013, 18:55||
Copyright © 1999-2020, Tomasz Grysztar. Also on YouTube, Twitter.
Website powered by rwasa.