flat assembler
Message board for the users of flat assembler.

Index > Heap > Petabox and the death of NTFS

bitRAKE
[image attachment]
Post 26 Feb 2009, 17:05
revolution
Maybe it will be the memristor that will be the basis of our personal petabox?

Self-Programming Hybrid Memristor/Transistor Circuit Could Continue Moore's Law
Post 26 Feb 2009, 17:42
comrade
bitRAKE wrote:
MichaelH wrote:
Will that be enough space to install Windows 10?
Sure, the Windows 10 Professional Service version will only need about half that and a fiber connection for Windows Update to suck on. Monthly dues will reflect data usage as MS will store all your documents. We'll go from cloud computing to vacuum computing - it'll be out of this world.


Actually Windows 7 has a smaller disk footprint than Windows Vista. Here is a good blog post with an interesting chart of where the disk-space goes: http://blogs.msdn.com/e7/archive/2008/11/19/disk-space.aspx

And NTFS is far too well written to just be thrown out the door. How easy do you think it is to add transactional support (which came with Vista)?
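For what it's worth, a minimal sketch of what that transactional support (TxF) looks like from user mode - the KtmW32/kernel32 calls are real, but the file path is hypothetical and error handling is trimmed:

Code:
#include <windows.h>
#include <ktmw32.h>   /* CreateTransaction, CommitTransaction; link with KtmW32.lib */

int main(void)
{
    /* Create a kernel transaction object. */
    HANDLE txn = CreateTransaction(NULL, NULL, 0, 0, 0, 0, NULL);
    if (txn == INVALID_HANDLE_VALUE) return 1;

    /* Hypothetical file: all writes through this handle belong to the transaction. */
    HANDLE file = CreateFileTransactedW(L"C:\\temp\\txf-demo.txt",
                                        GENERIC_WRITE, 0, NULL, CREATE_ALWAYS,
                                        FILE_ATTRIBUTE_NORMAL, NULL,
                                        txn, NULL, NULL);
    if (file != INVALID_HANDLE_VALUE) {
        DWORD written;
        WriteFile(file, "hello", 5, &written, NULL);
        CloseHandle(file);
        CommitTransaction(txn);   /* or RollbackTransaction(txn) to undo everything */
    }
    CloseHandle(txn);
    return 0;
}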

Post 27 Feb 2009, 14:37
revolution
comrade wrote:
Actually Windows 7 has a smaller disk footprint than Windows Vista. Here is a good blog post with an interesting chart of where the disk-space goes: http://blogs.msdn.com/e7/archive/2008/11/19/disk-space.aspx
Thanks for the nice link. Some good reading there.
comrade wrote:
And NTFS is far too well written to just be thrown out the door.
Yes, indeed, I very much agree with that. NTFS is the most stable and robust FS I have ever used. But with large storage "just around the corner" NTFS will need to be upgraded to something else. I hope that the "something else" will be just as robust and reliable.
Post 27 Feb 2009, 14:46
f0dder
revolution: perhaps pools of NTFS (or other filesystems) with a logical layer on top? I'm not sure petabyte storage makes much sense wrt. traditional files-and-folders structure?
Post 27 Feb 2009, 15:16
revolution
f0dder wrote:
revolution: perhaps pools of NTFS (or other filesystems) with a logical layer on top? I'm not sure petabyte storage makes much sense wrt. traditional files-and-folders structure?
Well MS tried to introduce the "cabinet" idea, and failed. But what would replace the file/folder paradigm? More drives (C:, D:, E:, ... ZZ:)? More desktops? More of what?
Post 27 Feb 2009, 15:43
Borsuc
revolution wrote:
Maybe it will be the memristor that will be the basis of our personal petabox?

Self-Programming Hybrid Memristor/Transistor Circuit Could Continue Moore's Law
Nah, at that price it seems more like RAM or Flash than huge storage (read: hard drives).

Also, that 3nm density article you linked a while back got me thinking. If a square inch holds 1 TB with that thing on it, then a 5-by-5-inch one only has... 25 TB?

Add some 100 layers (as in hard disks) if you want, or shrink it to 1 nm (let's say, the limit). So 25*3*100 = 7500 TB = 7.5 PB.

Still far from the 16-exabyte limit of NTFS (or is there a lower limit? I remember reading somewhere that 16 exabytes is the limit).

The obvious problem would be that increasing the number of layers would make it:

- bigger
- more power-hungry
- less reliable

It's already pretty big (5 by 5 inches, hmm).

Post 27 Feb 2009, 20:50
revolution
Borsuc wrote:
Also, that 3nm density article you linked a while back got me thinking. If a square inch holds 1 TB with that thing on it, then a 5-by-5-inch one only has... 25 TB?

Add some 100 layers (as in hard disks) if you want, or shrink it to 1 nm (let's say, the limit). So 25*3*100 = 7500 TB = 7.5 PB.

Still far from the 16-exabyte limit of NTFS (or is there a lower limit? I remember reading somewhere that 16 exabytes is the limit).
But you need to consider the performance and usability of a volume that size under NTFS.

You would certainly have to rewrite the NTFS driver in WinXP because it is limited to 2^32-1 clusters (even though the NTFS spec theoretically allows 2^64-1 clusters). But if you didn't want to raise the total cluster limit in the driver, then you would be forced to increase the cluster size. Your example of 7.5 PB would mean the cluster size has to be at least 2 MB. Efficiency would drop dramatically with such a large cluster size.
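A rough check of that figure - a sketch only, taking 1 PB = 2^50 bytes and the 2^32-1 cluster cap from the post above:

Code:
#include <stdio.h>

int main(void)
{
    const double volume_bytes = 7.5 * 1125899906842624.0; /* 7.5 PB, taking 1 PB = 2^50 bytes */
    const double max_clusters = 4294967295.0;             /* 2^32 - 1, the WinXP driver limit */

    double min_cluster = volume_bytes / max_clusters;     /* smallest cluster that still fits */
    printf("minimum cluster size: %.0f bytes (~%.2f MiB)\n",
           min_cluster, min_cluster / (1024.0 * 1024.0));
    /* prints roughly 1966080 bytes, ~1.88 MiB -> the next power-of-two cluster is 2 MiB,
       matching the estimate above */
    return 0;
}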


Last edited by revolution on 28 Feb 2009, 17:04; edited 1 time in total
Post 28 Feb 2009, 09:47
Borsuc
Well, if your hard drive has that much storage, then the files on it are going to be huge anyway. Aligning to 2 MB isn't that much of a problem, and even if it is, you can use NTFS compression :P

However, the bandwidth for such huge amounts of data could be a problem, if not technically, then consumption-wise.

I also made a stupid mistake (I should multiply the result by 3 again, because 1 nm cells are 9 times denser than 3 nm cells on a surface), but it's still far off.
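For reference, a sketch of the corrected figure, using the numbers from the earlier posts (1 TB per square inch at 3 nm, a 5 by 5 inch surface, 100 layers, cells shrunk to 1 nm):

Code:
#include <stdio.h>

int main(void)
{
    double tb_per_sq_inch = 1.0;        /* claimed density at 3 nm */
    double area_sq_inch   = 5.0 * 5.0;  /* a 5 x 5 inch surface    */
    double shrink_gain    = 3.0 * 3.0;  /* 3 nm -> 1 nm is 9x areal density */
    double layers         = 100.0;

    double total_tb = tb_per_sq_inch * area_sq_inch * shrink_gain * layers;
    printf("%.0f TB = %.1f PB\n", total_tb, total_tb / 1000.0);
    /* 22500 TB = 22.5 PB (decimal, as in the thread) --
       still nowhere near the 16 EB NTFS ceiling */
    return 0;
}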

Post 28 Feb 2009, 16:59
revolution
Theoretical minimum space requirement for a petabyte (2^53 bits):

Assume that the decoding and control logic adds some sort of overhead that is not too large. For this post let's say it doubles the required storage space. That makes 2^54 bits.

Assume that the storage medium is a three-dimensional cube. That makes 2^18 bits along each edge.

Assume that each storage bit is (3nm)^3 in size. That makes each side 2^18 * 3nm, or 786432nm, or 0.786432mm.

So based upon those assumptions, we could store a petabyte, and associated decode and control logic, in a cube with sides less than 0.8mm.
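The same arithmetic as a small sketch (the 2x logic overhead and the (3 nm)^3 cell are the assumptions stated above):

Code:
#include <stdio.h>
#include <math.h>

int main(void)
{
    double bits          = pow(2.0, 53);      /* one petabyte = 2^50 bytes = 2^53 bits  */
    double with_logic    = 2.0 * bits;        /* assume decode/control logic doubles it */
    double bits_per_edge = cbrt(with_logic);  /* cube root -> bits along one edge       */
    double edge_nm       = bits_per_edge * 3.0; /* each cell assumed (3 nm)^3           */

    printf("bits per edge: %.0f, edge length: %.0f nm (%.3f mm)\n",
           bits_per_edge, edge_nm, edge_nm / 1e6);
    /* 262144 bits per edge, 786432 nm, ~0.786 mm */
    return 0;
}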

Don't ask me how the 3d structure is made or powered, it is just a theoretical construct.

Possible? Probable? Impossible? Likely?
Post 01 Mar 2009, 08:34
bitRAKE
Three edges of the cube take input commands and the output happens on the other three edges - some kind of cellular automaton propagates through the cube. One could create simulations of smaller cubes to determine an efficient communication protocol - heat build-up needs to be kept down.

Construction would need to be incredibly precise.
Post 01 Mar 2009, 15:59
Borsuc
Hmm, I seem to get different results, but I can't find the problem :? (feel free to point out a fatal flaw in my calculations :D):

A cm fits 10^7 objects of 1 nm, and 10^7/3 objects of 3 nm.
Of course, since it's a cube, we do it in three dimensions, i.e. 10^21 / 27 bits.
Also, a byte is 8 bits, so we have to divide by 8.

That means about 4.6 exabytes (for a 1 cm cube); somehow I expected more, judging by your calculations.
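A quick sketch confirming that count (pure storage only, 3 nm cells filling a 1 cm cube, no decode/control overhead):

Code:
#include <stdio.h>

int main(void)
{
    double cells_per_cm = 1e7 / 3.0;   /* 3 nm cells along 1 cm */
    double bits  = cells_per_cm * cells_per_cm * cells_per_cm;  /* = 10^21 / 27 */
    double bytes = bits / 8.0;

    printf("%.2e bytes, i.e. about %.1f exabytes\n", bytes, bytes / 1e18);
    /* ~4.63e18 bytes, about 4.6 EB -- pure storage, no room for addressing logic */
    return 0;
}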

I'm not sure if 'writing' (not reading) to the actual cube could be done well. And by that I don't mean performance, but maintenance (i.e. avoiding errors).

Post 01 Mar 2009, 16:42
f0dder
Borsuc: just a small note: NTFS compression only works with cluster sizes <= 4K :)
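For what it's worth, per-file compression is toggled with an FSCTL; a minimal sketch, with a hypothetical path and no real error handling:

Code:
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical file to compress. */
    HANDLE h = CreateFileW(L"C:\\temp\\example.dat",
                           GENERIC_READ | GENERIC_WRITE,
                           FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    USHORT fmt = COMPRESSION_FORMAT_DEFAULT;  /* COMPRESSION_FORMAT_NONE turns it off */
    DWORD  ret = 0;
    BOOL ok = DeviceIoControl(h, FSCTL_SET_COMPRESSION,
                              &fmt, sizeof(fmt), NULL, 0, &ret, NULL);
    printf("compression %s\n", ok ? "enabled" : "failed (e.g. cluster size > 4K)");

    CloseHandle(h);
    return 0;
}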

Post 01 Mar 2009, 17:16
bitRAKE
Borsuc wrote:
I'm not sure if 'writing' (not reading) to the actual cube could be done well. And by that I don't mean performance, but maintenance (i.e. avoiding errors).
The polymerization error rate is less than 1 in a billion base pairs (I read that somewhere), and nature has been working on the problem for quite some time. Furthermore, this process happens in a rather volatile environment!

Post 01 Mar 2009, 18:04
revolution
Borsuc wrote:
Hmm, I seem to get different results, but I can't find the problem :? (feel free to point out a fatal flaw in my calculations :D):

A cm fits 10^7 objects of 1 nm, and 10^7/3 objects of 3 nm.
Of course, since it's a cube, we do it in three dimensions, i.e. 10^21 / 27 bits.
Also, a byte is 8 bits, so we have to divide by 8.

That means about 4.6 exabytes (for a 1 cm cube); somehow I expected more, judging by your calculations.
Your calculations look fine to me. The only thing different is that you left no space for the addressing and control logic; you have pure storage without any overhead.
Post 01 Mar 2009, 23:39
Borsuc
f0dder wrote:
Borsuc: just a small note: NTFS compression only works with cluster sizes <= 4K :)
Damn :P
I didn't know that.

Post 02 Mar 2009, 19:09
f0dder
Also, it doesn't work very well for often-changing files - well, at least not if you dislike heavy file fragmentation :). But for mostly read-only files, it's OK.
Post 02 Mar 2009, 19:30
Borsuc
Sure, I know that, because it has to recompress each cluster and may need to rewrite the whole file (not fitting, different size, fragmentation).
Post 02 Mar 2009, 19:32
f0dder
Borsuc wrote:
Sure, I know that, because it has to recompress each cluster and may need to rewrite the whole file (not fitting, different size, fragmentation).
It actually works a lot differently than that :), which is also part of the reason why you can end up with a zillion fragments. Have a look at this link for some more information.

Post 02 Mar 2009, 19:45
revolution
Somehow I think that with petabyte-sized storage we won't need to use compression. But sparse files will still likely be useful in some circumstances.

Also, depending upon the storage type, fragmentation might not be an issue. If the storage is random access then no one will care. If it stays with mechanically moving devices (HDDs) then we will all care.
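A minimal sketch of the sparse-file side of that - the FSCTLs are real, but the path and the 64 MB figure are made up, and error handling is trimmed:

Code:
#include <windows.h>
#include <winioctl.h>

int main(void)
{
    /* Hypothetical file: mark it sparse and give it a large logical size
       without allocating clusters for the unwritten range. */
    HANDLE h = CreateFileW(L"C:\\temp\\sparse-demo.dat", GENERIC_READ | GENERIC_WRITE,
                           0, NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    DWORD ret = 0;
    DeviceIoControl(h, FSCTL_SET_SPARSE, NULL, 0, NULL, 0, &ret, NULL);

    LARGE_INTEGER size;
    size.QuadPart = 64 * 1024 * 1024;   /* 64 MB logical size, ~0 bytes on disk */
    SetFilePointerEx(h, size, NULL, FILE_BEGIN);
    SetEndOfFile(h);

    /* Punching a hole in an already-written region works the same way: */
    FILE_ZERO_DATA_INFORMATION zero;
    zero.FileOffset.QuadPart      = 0;
    zero.BeyondFinalZero.QuadPart = size.QuadPart;
    DeviceIoControl(h, FSCTL_SET_ZERO_DATA, &zero, sizeof(zero), NULL, 0, &ret, NULL);

    CloseHandle(h);
    return 0;
}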
Post 03 Mar 2009, 12:07
Copyright © 1999-2020, Tomasz Grysztar.
