flat assembler
Message board for the users of flat assembler.

Index > Heap > shred, clearing hard disk data

sleepsleep
Joined: 05 Oct 2006
Posts: 8906
shred it, or could I just copy a 4 GB Debian ISO 100 times?
Post 13 Dec 2010, 19:21
vid
Verbosity in development
Joined: 05 Sep 2003
Posts: 7105
Location: Slovakia
In theory, shred.

In practice, unless someone extremely powerful is after you, copying a 4 GB ISO should work well enough. Still, shred should be faster and easier to do, so why not use it?
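For reference, a single random pass over the raw device might look like this (a sketch; /dev/sdX stands in for the actual disk):
Code:
# one random pass over the whole device; -v shows progress
shred -v -n 1 /dev/sdX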
Post 13 Dec 2010, 19:32
sleepsleep
Joined: 05 Oct 2006
Posts: 8906
I see, thanks vid... "faster" is the key. Gonna try Linux shred on my NTFS partition...
Post 13 Dec 2010, 19:34
ManOfSteel
Joined: 02 Feb 2005
Posts: 1154
Does shred support setting the block size like dd does?
Post 13 Dec 2010, 19:40
sleepsleep
Joined: 05 Oct 2006
Posts: 8906
Code:
crunchbang@crunchbang:~$ shred --help
Usage: shred [OPTION]... FILE...
Overwrite the specified FILE(s) repeatedly, in order to make it harder
for even very expensive hardware probing to recover the data.

Mandatory arguments to long options are mandatory for short options too.
  -f, --force    change permissions to allow writing if necessary
  -n, --iterations=N  overwrite N times instead of the default (3)
      --random-source=FILE  get random bytes from FILE
  -s, --size=N   shred this many bytes (suffixes like K, M, G accepted)
  -u, --remove   truncate and remove file after overwriting
  -v, --verbose  show progress
  -x, --exact    do not round file sizes up to the next full block;
                   this is the default for non-regular files
  -z, --zero     add a final overwrite with zeros to hide shredding
      --help     display this help and exit
      --version  output version information and exit

... no idea, ManOfSteel.
Post 13 Dec 2010, 19:44
sleepsleep
Joined: 05 Oct 2006
Posts: 8906
maybe this is enough
Code:
dd if=/dev/zero of=/dev/sda

It needs about 8 hours to dd-zero the whole 500 GB, lol... speed is around 16 MB/s.
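(That estimate checks out: 500 GB at 16 MB/s is about 500000 MB / 16 MB/s ≈ 31250 s, or roughly 8.7 hours.)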

now I understand why people need higher-RPM drives :)
Post 13 Dec 2010, 20:53
bazik
Joined: 28 Jul 2003
Posts: 34
Location: .de
sleepsleep wrote:
maybe this is enough
Code:
dd if=/dev/zero of=/dev/sda

It needs about 8 hours to dd-zero the whole 500 GB, lol... speed is around 16 MB/s.

You should set the block size (bs=) to a reasonable value for current hardware, as the default is 512 bytes, which really slows down the whole process. I'd suggest 4 MB; on drives with huge caches (32 MB+) you'd do even better with 12 MB or more.
Post 13 Dec 2010, 20:59
ManOfSteel
Joined: 02 Feb 2005
Posts: 1154
Try
Code:
dd if=/dev/zero of=/dev/whatever bs=8M

instead.

You may want to test it a few times to find the most appropriate (efficient) block size.
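A rough way to compare block sizes (a sketch; /tmp/ddtest is just a scratch path, and conv=fdatasync forces the data to disk so the reported speed is honest):
Code:
# write the same 256 MB at several block sizes; dd reports MB/s when each run ends
dd if=/dev/zero of=/tmp/ddtest bs=64K count=4096 conv=fdatasync
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 conv=fdatasync
dd if=/dev/zero of=/tmp/ddtest bs=8M count=32 conv=fdatasync
rm /tmp/ddtest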
Post 13 Dec 2010, 21:00
sleepsleep
Joined: 05 Oct 2006
Posts: 8906
lol, eheheheh... :p
thanks bazik & ManOfSteel, you saved my precious time to "sleep"... :p

btw, this is what actually convinced me:

16 Systems - The Great Zero Challenge
http://16s.us/zero/

moving at 108 MB/s now... :) good, hopefully I can make it before 9 AM :)
Post 13 Dec 2010, 21:07
Tyler
Joined: 19 Nov 2009
Posts: 1216
Location: NC, USA
shred writes multiple passes (we'll say n). By default all n passes are random data; with -z, a final pass of zeros is added on top to hide the shredding. Multiple writes are supposed to make recovery harder. Also, shred != dd: dd acts on devices, while shred acts on files as the file system presents them. See man shred, and especially the warning.

man shred wrote:

CAUTION: Note that shred relies on a very important assumption: that
the file system overwrites data in place. This is the traditional way
to do things, but many modern file system designs do not satisfy this
assumption. The following are examples of file systems on which shred
is not effective, or is not guaranteed to be effective in all file system modes:

* log-structured or journaled file systems, such as those supplied with
AIX and Solaris (and JFS, ReiserFS, XFS, Ext3, etc.)

* file systems that write redundant data and carry on even if some
writes fail, such as RAID-based file systems

* file systems that make snapshots, such as Network Appliance's NFS
server

* file systems that cache in temporary locations, such as NFS version 3
clients

* compressed file systems

In the case of ext3 file systems, the above disclaimer applies (and
shred is thus of limited effectiveness) only in data=journal mode,
which journals file data in addition to just metadata. In both the
data=ordered (default) and data=writeback modes, shred works as usual.
Ext3 journaling modes can be changed by adding the data=something
option to the mount options for a particular file system in the
/etc/fstab file, as documented in the mount man page (man mount).

In addition, file system backups and remote mirrors may contain copies
of the file that cannot be removed, and that will allow a shredded file
to be recovered later.
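To make the pass structure concrete, here is a sketch (again with /dev/sdX as a placeholder): -n sets the number of random passes, and -z appends the final zero pass described in the --help output earlier.
Code:
# three random passes, then one pass of zeros to hide the shredding
shred -v -n 3 -z /dev/sdX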


Maybe
Code:
dd if=/dev/random of=/dev/whatever bs=8M
dd if=/dev/zero of=/dev/whatever bs=8M
Post 13 Dec 2010, 21:27
bazik
Joined: 28 Jul 2003
Posts: 34
Location: .de
/dev/urandom is faster.
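(A quick way to see the gap, with an arbitrary 100 MB sample; /dev/random would stall here waiting for entropy, which is exactly why it is unusable for filling a disk:)
Code:
# read 100 MB of pseudorandom data and discard it; dd prints the rate at the end
dd if=/dev/urandom of=/dev/null bs=1M count=100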
Post 13 Dec 2010, 21:40
sleepsleep
Joined: 05 Oct 2006
Posts: 8906
it is getting slower... weird..
Code:
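# SIGUSR1 makes GNU dd print its current I/O statistics; watch re-sends it every 60 s to the dd process (PID 12039)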
sudo watch -n 60 kill -USR1 12039
~~~
229664215040 bytes (230 GB) copied, 2293.42 s, 100 MB/s
28033+3 records in
28033+3 records out
235175530496 bytes (235 GB) copied, 2353.47 s, 99.9 MB/s
28683+3 records in
28683+3 records out
240628125696 bytes (241 GB) copied, 2413.5 s, 99.7 MB/s
29333+3 records in
29333+3 records out
246080720896 bytes (246 GB) copied, 2473.46 s, 99.5 MB/s
29983+3 records in
29983+3 records out
251533316096 bytes (252 GB) copied, 2533.48 s, 99.3 MB/s
30632+3 records in
30632+3 records out
256977522688 bytes (257 GB) copied, 2593.43 s, 99.1 MB/s
31277+3 records in
31277+3 records out
262388174848 bytes (262 GB) copied, 2653.43 s, 98.9 MB/s
Post 13 Dec 2010, 21:50
Tyler
Joined: 19 Nov 2009
Posts: 1216
Location: NC, USA
/dev/urandom ain't truly random, it's pseudorandom.
Post 13 Dec 2010, 21:52
revolution
When all else fails, read the source
Joined: 24 Aug 2004
Posts: 17287
Location: In your JS exploiting you and your system
If you are concerned about journalling FSes messing up the shred operation, then use something like TrueCrypt to encrypt the whole drive. The encryption bypasses the file system entirely and scrambles the underlying raw sectors. Just use a long, un-rememberable random password, and then no one can recover the data.

All the stuff about requiring multiple passes is a misunderstanding from some time ago about old MFM and RLL drives being recoverable with very specialised hardware. This does not apply any more to new drive technologies. One pass of writing any data (random, or all zeros, or all ones, whatever) is enough to make all previous data unrecoverable. The encoding formats and the physical magnetic domain size are so close to limits that any previous data is rendered impossible to recover no matter how expensive the recovery equipment is.
Post 14 Dec 2010, 00:00
sleepsleep
Joined: 05 Oct 2006
Posts: 8906
thanks for the info, revolution.

so, probably the fastest way to make a hard disk "unreadable" is to encrypt the whole drive with a long, un-rememberable random password... (btw, how long would it take to encrypt 500 GB?)

then after that, clear the hard disk partitions, and the effect should be like an all-zero hard disk drive, am I right?

but would encrypting 500 GB be faster than zeroing all of /dev/sda? I don't know, but I'd love to know...
Post 14 Dec 2010, 02:24
revolution
When all else fails, read the source
Joined: 24 Aug 2004
Posts: 17287
Location: In your JS exploiting you and your system
The time needed to encrypt the whole disk with new data (i.e. a fresh format) is the same as for normal writing. (Note that "whole disk" means partitions are not involved; they sit at a higher level of the hierarchy.) In TrueCrypt you select the whole disk, tick the quick format option and start, then just forget the password and wait for it to finish. This guarantees that everything gets overwritten: partitions, pagefile/swapfile, hibernation file, journalling logs, recycle bins, etc.

The time taken depends on the interface speed to the HDD. My 1 TB drives take 12 hours over a USB2 connection, but an internal ATA connection would be a lot quicker, limited only by the HDD itself.
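(As a rough check: 1 TB in 12 hours is about 10^12 bytes / 43200 s ≈ 23 MB/s, which is in line with real-world USB 2.0 throughput.)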
Post 14 Dec 2010, 02:36
Tyler
Joined: 19 Nov 2009
Posts: 1216
Location: NC, USA
If what he says is true, you wouldn't need to encrypt the drive before zeroing it; just zeroing should be fine. Zeroing should also be faster, because it requires negligible processor time compared to encryption.
Post 14 Dec 2010, 03:17
revolution
When all else fails, read the source
Joined: 24 Aug 2004
Posts: 17287
Location: In your JS exploiting you and your system
Except that zeroing goes through the FS. And you can't guarantee to catch every part of the disk that you want to catch.

Also zeroing is not any faster. The CPU can generate "random" encrypted data faster than the HDD can accept it. So the limitation is the HDD, not the data generation software.
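(One way to sanity-check that claim, using openssl as a stand-in for TrueCrypt's cipher; the cipher choice and password here are illustrative only:)
Code:
# encrypt 1 GB of zeros in memory and throw it away; the final dd reports the rate
dd if=/dev/zero bs=1M count=1024 \
  | openssl enc -aes-256-cbc -pass pass:throwaway -nosalt \
  | dd of=/dev/null bs=1M

If that pipeline reports a rate well above the disk's write speed, the HDD really is the bottleneck.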


Last edited by revolution on 14 Dec 2010, 03:22; edited 1 time in total
Post 14 Dec 2010, 03:21
Tyler
Joined: 19 Nov 2009
Posts: 1216
Location: NC, USA
You do realize that zeroing the partition by zeroing the block device destroys the file system altogether, don't you? And zeroing /dev/sda will zero the whole hard drive, making all the other partitions, if there are any, also unrecoverable. Just asking.
Post 14 Dec 2010, 03:22
revolution
When all else fails, read the source
Joined: 24 Aug 2004
Posts: 17287
Location: In your JS exploiting you and your system
Tyler wrote:
You do realize that zeroing the partition by zeroing the block device destroys the file system altogether, don't you? And zeroing /dev/sda will zero the whole hard drive, making all the other partitions, if there are any, also unrecoverable. Just asking.
If that is the case, then that would also work. It seems even easier than the TrueCrypt approach.
Post 14 Dec 2010, 03:24

Copyright © 1999-2020, Tomasz Grysztar.