flat assembler
Message board for the users of flat assembler.

flat assembler > Heap > vivik's twitter

DimonSoft



Joined: 03 Mar 2010
Posts: 420
Location: Belarus
vivik wrote:
It's quite easy and fast to delete the end of a file by just changing its size. But I wish I could also delete some bits at the beginning or in the middle of the file, to use fragmentation to speed up my algorithms.

Deleting the beginning could be useful, but not deleting the middle. HDDs are fast only when they read data sequentially, which is why they get defragmented from time to time. This artificial fragmentation can speed some operations up, but it will hurt performance in the long run. You should just ignore the no-longer-useful data in the middle for some period of time, and do clean-ups once a day or so.

I know some file system can do this, but I forgot the name. I read about it in the Microsoft docs somewhere; good luck finding it twice.

How do I delete bytes from the beginning of a file?
Post 29 Sep 2018, 07:38
vivik



Joined: 29 Oct 2016
Posts: 485
tl;dr: we wouldn't add this feature to NTFS because it doesn't play well with other features it has.

Well, I'll have to use a raw partition instead. No file system is the fastest file system. Or just use very large files: a filesystem inside a file.
Post 29 Sep 2018, 08:19
DimonSoft



Joined: 03 Mar 2010
Posts: 420
Location: Belarus
In fact, it doesn’t play well with multitasking. It’s a valid question: what should happen to the file offsets held by one program when another program truncates the file at the beginning?

If it keeps the old offsets, how should writes to the truncated part be handled? Should they be ignored, possibly losing valuable user data? Or should the other program be notified? How? By means of return values, which programs often ignore? A separate notification mechanism? Which one? Window messages are a bad choice because of console applications and services; synchronization objects are a bad choice because they would require the additional complexity of multi-threaded synchronization, since you don’t want your GUI thread to wait.

If it gets the new offsets, you have the same problem: how would the other program know it should recalculate all its offsets, including cached ones?

What if the file format itself uses file offsets in its fields, which is quite common? Will it be faster to rewrite all those fields, possibly scattered throughout the whole file, than to implement custom file shrinking in the program that knows about the details of the format?

How will you make old software aware of the possibility of file offset changes?

Now, how many programs actually need the feature? Are those three really worth making all the other programs bother (a lot!)?

Note that none of these questions are related to NTFS in any way.
Post 29 Sep 2018, 15:39
vivik



Joined: 29 Oct 2016
Posts: 485
How will it work with multitasking? Add an extra variable called "offset_change" that is writable by one thread/process and readable by all others; you can use lockless multithreading in cases like that. "offset_change" indicates how many bytes were deleted at the beginning of the file. (Actually, it's better to put it into ReadFile itself: it would return an extra int showing how much data from the beginning of the file wasn't returned because it had already been deleted. It already returns the total amount of data read, so it would also return "data_skipped".)

Offsets don't change if the beginning of the file was deleted. When you reopen the file for reading or writing, it will again start from zero.
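
A minimal sketch of what that interface could look like. To be clear, ReadFile2 and dataSkipped are made-up names, nothing like this exists in Win32; the stub just wraps the real ReadFile so the sketch compiles, a real implementation would have to live in the kernel.

Code:
/* Hypothetical API: ReadFile2 and dataSkipped are invented names.
   The stub wraps the real ReadFile so this compiles; an actual
   implementation would live in the kernel. */
#include <windows.h>

static BOOL ReadFile2(HANDLE h, void *buf, DWORD toRead,
                      DWORD *bytesRead, DWORD *dataSkipped)
{
    /* The kernel would report here how many bytes at the front of the
       file were deleted under this handle since the last read. */
    *dataSkipped = 0;
    return ReadFile(h, buf, toRead, bytesRead, NULL);
}

A caller that sees dataSkipped > 0 would know that every file offset it has cached is now stale by that many bytes and must be rebased.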

About file formats: well, I think headers should be stored in a separate file. Or in the same file, but padded to something like a 4 KB boundary.

With mp4, the headers are stored either at the beginning or at the end of the file. When they're at the end, you can't start playing the video until the file has been downloaded completely from the remote site, unless your browser downloads the end of the file first / you enable the option "download start and end of the file first" in qbittorrent. But placing the headers at the beginning requires rewriting the entire file from start to finish and recalculating all offsets. It's complicated to do in one pass: during encoding you don't know how many keyframes you need (unless you are a cheap bastard like youtube or pornhub and just put a keyframe every 4 or 5 seconds even for still images), and thus you don't know how much space your header will need. There are some other formats too; I guess they are used on CD/DVD disks, they don't use a global header and are closer to ogg, I don't know the details.

With matroshka (matroska, but it's clothing), there is something similar. You can put the cue information (used to speed up seeking: pairs of "time" / "file offset") either at the beginning or at the end of the file. It is possible to write matroska in one pass by telling ffmpeg how much space you think the header will take, and it will just reserve that space. Also, ffmpeg always uses 8 bytes for any number whose size it doesn't know ahead of time, even if 1 or 2 would be enough (matroska uses a system similar to utf-8: bigger numbers take more space); fixing those numbers requires a second pass and rewriting that 4 GB file again from start to finish, so nobody really bothers with it.
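
For reference, the matroska/EBML number encoding mentioned above works roughly like this (my own sketch, not ffmpeg code): the position of the first set bit in the first byte gives the total length, so a writer that doesn't know a value ahead of time has to reserve the worst-case 8 bytes.

Code:
#include <stdint.h>
#include <stddef.h>

/* Encode an EBML variable-length integer (matroska).  A value that
   fits in 7 bits takes 1 byte (1xxxxxxx), 14 bits take 2 bytes
   (01xxxxxx xxxxxxxx), and so on up to 8 bytes.  All-ones patterns
   are reserved, hence the >=.  Returns the number of bytes written. */
static size_t ebml_write_vint(uint64_t v, uint8_t out[8])
{
    size_t len = 1;
    while (len < 8 && v >= (((uint64_t)1 << (7 * len)) - 1))
        len++;
    for (size_t i = 0; i < len; i++)
        out[i] = (uint8_t)(v >> (8 * (len - 1 - i)));
    out[0] |= (uint8_t)(0x80 >> (len - 1));  /* length marker bit */
    return len;
}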

With ogg and opus files, seeking is more fun. You just go to a random place in the file and search for the string "OggS"; each page starts with it. Then you validate that it's not just a random coincidence by reading the rest of the page header (I don't understand this part). In theory you could slightly speed this up by using a header, but it looks like it works well enough for ogg as is, due to the small page size.
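
That resync is simple enough to sketch (stdio version, my own illustration): jump to a guess, scan forward for the 4-byte pattern, and remember a real demuxer still verifies the candidate page (version byte, header CRC) before trusting it.

Code:
#include <stdio.h>
#include <string.h>

/* Seek roughly to `guess`, then scan forward for the "OggS" pattern
   that starts every Ogg page.  The 4 bytes can occur in the payload
   by coincidence, so the caller must still validate the page header. */
static long ogg_resync(FILE *f, long guess)
{
    unsigned char w[4] = {0};
    int c;
    long pos;
    fseek(f, guess, SEEK_SET);
    for (pos = guess; (c = fgetc(f)) != EOF; pos++) {
        memmove(w, w + 1, 3);
        w[3] = (unsigned char)c;
        if (pos >= guess + 3 && memcmp(w, "OggS", 4) == 0)
            return pos - 3;     /* offset of the candidate page header */
    }
    return -1;                  /* nothing found after guess */
}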

mp4 and matroska could actually be sped up slightly if the file system could insert 4 KB chunks of data into the middle of a file; right now you have to rewrite the files in their entirety in those cases. It would introduce fragmentation, so it requires a lot of micromanagement and restrictions to do right (like: nothing but audio on partition A, first 20 KB reserved for metadata, etc.). Of course no filesystem does that, but raw partitions are relatively easy to use.

The biggest issue I have with this is what to do if some disk space goes bad. File systems seem to rewrite the data at some other location, transparently to the user. But if I use raw offsets for everything, it gets complicated. I need to look at the DVD-RW format; it's not as scary a problem as it seems. I will most likely have a copy of everything on two disks, so my biggest problem is noticing problems in time to fix them before I lose anything.

About "bothering other programs": you could introduce a flag "CAN_DELETE_BEGINNING" in CreateFileA, so old programs wouldn't even notice the change.
Post 30 Sep 2018, 07:33
DimonSoft



Joined: 03 Mar 2010
Posts: 420
Location: Belarus
vivik wrote:
How will it work with multitasking? Add an extra variable called "offset_change" that is writable by one thread/process and readable by all others; you can use lockless multithreading in cases like that. "offset_change" indicates how many bytes were deleted at the beginning of the file. (Actually, it's better to put it into ReadFile itself: it would return an extra int showing how much data from the beginning of the file wasn't returned because it had already been deleted. It already returns the total amount of data read, so it would also return "data_skipped".)

Doesn’t work. It’s multitasking. What if a read starts while the file is complete, the first part happens to be read successfully, then a request to shrink the file is executed, and after that the read has to proceed? You might get a gap in the data you’re trying to read.

Even if you serialize file reads (serializing a 1 GB read before a thousand 4 KB reads works like magic, very, very slow and laggy magic!), you have another problem: making the old software aware of the change in the API. You can’t say that only new software will use the new API: the same file might be, and one day will be, read by both a new and an old program simultaneously. You have to somehow propagate the information to the old program in a way that will not break any old program out there. You can’t choose one way of handling ReadFile and declare the other ways wrong and no longer supported: it’s like asking them to use the Space Shuttle to rescue the Apollo 13 crew, when they didn’t know what a Space Shuttle was back then.

vivik wrote:
Offsets don't change if the beginning of the file was deleted. When you reopen the file for reading or writing, it will again start from zero.

When you reopen. What happens to programs that already have the file open?

Quote:
About file formats: well, I think headers should be stored in a separate file. Or in the same file, but padded to something like a 4 KB boundary.

Good luck making all the vendors adopt your idea. Especially with file formats that are going to be supported by tiny microcontrollers that don’t have 4 KB of memory in total.

Good luck making your users copy both files. A certain fruity company seems to have tried something similar: OSes that don’t know about this stuff just show 2 files, and one of them is enough to work with the document, so users won’t bother saving the other one.

vivik wrote:
With mp4, the headers are stored either at the beginning or at the end of the file. When they're at the end, you can't start playing the video until the file has been downloaded completely from the remote site, unless your browser downloads the end of the file first / you enable the option "download start and end of the file first" in qbittorrent. But placing the headers at the beginning requires rewriting the entire file from start to finish and recalculating all offsets. It's complicated to do in one pass: during encoding you don't know how many keyframes you need (unless you are a cheap bastard like youtube or pornhub and just put a keyframe every 4 or 5 seconds even for still images), and thus you don't know how much space your header will need. There are some other formats too; I guess they are used on CD/DVD disks, they don't use a global header and are closer to ogg, I don't know the details.

ZIP’s main header is at the end of the file. Unfortunately, it ends with a variable-length comment field, so there’s no reliable way to find the beginning of the header block. You just have to guess by comparing signatures, as if a signature could never be part of the comment itself.

Even if your header at the end of the file is fixed-size, there’s another problem: how do you tell the difference between a well-formed file and a file that wasn’t downloaded to the end but happens to have something header-like at the end of the downloaded part?
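
What the guessing looks like in code, as a sketch: the end-of-central-directory record is 22 bytes plus a comment of up to 65535 bytes, so you scan backwards from the end for the PK\x05\x06 signature and hope the comment doesn’t happen to contain it.

Code:
#include <stdio.h>

/* Find ZIP's end-of-central-directory record by scanning backwards
   from EOF.  The record is 22 bytes plus an up-to-65535-byte comment,
   and nothing stops the comment from containing the signature itself,
   hence "guess". */
static long zip_find_eocd(FILE *f)
{
    long size, pos, stop;
    unsigned char sig[4];
    fseek(f, 0, SEEK_END);
    size = ftell(f);
    stop = size - 22 - 65535;
    if (stop < 0) stop = 0;
    for (pos = size - 22; pos >= stop; pos--) {
        fseek(f, pos, SEEK_SET);
        if (fread(sig, 1, 4, f) == 4 &&
            sig[0] == 'P' && sig[1] == 'K' && sig[2] == 5 && sig[3] == 6)
            return pos;         /* candidate EOCD offset */
    }
    return -1;
}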

Quote:
With ogg and opus files, seeking is more fun. You just go to a random place in the file and search for the string "OggS"; each page starts with it. Then you validate that it's not just a random coincidence by reading the rest of the page header (I don't understand this part). In theory you could slightly speed this up by using a header, but it looks like it works well enough for ogg as is, due to the small page size.

The same goes for MP3, just with a smaller signature. It works there because accidentally taking something frame-like for a frame doesn’t cause significant changes in the result: you’ll hardly hear one very short frame-length piece of sound that doesn’t sound right. It doesn’t work for formats where all the information should be lossless by its nature.

vivik wrote:
mp4 and matroska could actually be sped up slightly if the file system could insert 4 KB chunks of data into the middle of a file; right now you have to rewrite the files in their entirety in those cases. It would introduce fragmentation, so it requires a lot of micromanagement and restrictions to do right (like: nothing but audio on partition A, first 20 KB reserved for metadata, etc.). Of course no filesystem does that, but raw partitions are relatively easy to use.

Why 4 KB? Why not 8? 16? What if you happen to be 1 byte off the block size? Having a file almost twice the needed size just to make it faster to write? Reading becomes a nightmare, since the file might have gaps all over it, not to mention that the fragmentation might turn reading such a file into jumping all over the disk.

vivik wrote:
The biggest issue I have with this is what to do if some disk space goes bad. File systems seem to rewrite the data at some other location, transparently to the user. But if I use raw offsets for everything, it gets complicated. I need to look at the DVD-RW format; it's not as scary a problem as it seems. I will most likely have a copy of everything on two disks, so my biggest problem is noticing problems in time to fix them before I lose anything.

Critical cases are critical cases until valid use cases work well.

vivik wrote:
About "bothering other programs": you could introduce a flag "CAN_DELETE_BEGINNING" in CreateFileA, so old programs wouldn't even notice the change.

An old program opens file A without the flag (it doesn’t know about it). A new program opens file A and trims the file at the beginning. What did you gain with the flag? How do you make the old program understand what happened?
Post 30 Sep 2018, 09:14
vivik



Joined: 29 Oct 2016
Posts: 485
Why would you want to open a file in more than one program/thread in the first place? Shouldn't you use shared memory for this?

If a file is opened with CAN_DELETE_BEGINNING, it can't be opened without this flag, and vice versa.

New programs don't need to open the same files as the old ones, or they will use the old method of opening them. No issues whatsoever here, other than more code in the OS.
Post 30 Sep 2018, 16:29
Furs



Joined: 04 Mar 2016
Posts: 1260
@DimonSoft: Solving it is easier than you think. It's just not very worthwhile.

In Unix and Linux, a file can be removed while it is open in another application. No, it's not a multitasking nightmare: the file still exists on the filesystem and still takes up space. It's only deleted when the last link to it is gone.

The filesystem itself has names (directory entries), each of which is a link to the file's inode, i.e. its actual contents (you can have more than one filename linking to the same contents -- so-called hardlinks).

But other links to the file include file descriptors from apps which opened it (handles in Windows), and the file's name is just one such link (albeit non-volatile and persistent across reboots). If you remove the file's name while an app still has it open, a link still exists, so the file is not marked free until that link is closed. Yes, it's not accessible by filename anymore, but the file's contents still exist and aren't freed!

In this scheme, when you truncate the beginning of the file, the file's name would be linked to that truncation, so that any new app opening it via the filename gets the truncated file.

But existing apps which have it open will operate on the old file (the same file, still extended at the beginning), since they are linked to the old version. Only when those handles get closed will the file's beginning actually get "freed" on disk.
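
That behaviour is easy to demonstrate on any Unix; a self-contained sketch:

Code:
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Unlink semantics: removing the name does not remove the file while
   a descriptor (another link) to it is still open. */
int main(void)
{
    char buf[6] = {0};
    int fd = open("demo.txt", O_RDWR | O_CREAT | O_TRUNC, 0644);
    write(fd, "hello", 5);
    unlink("demo.txt");         /* the name is gone, the data is not */
    lseek(fd, 0, SEEK_SET);
    read(fd, buf, 5);           /* still reads "hello" */
    printf("%s\n", buf);
    close(fd);                  /* last link closed: space is freed */
    return 0;
}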



Or you can just deny access, like vivik said, with the flag. No problem whatsoever. It's just not worth it.
Post 30 Sep 2018, 18:18
DimonSoft



Joined: 03 Mar 2010
Posts: 420
Location: Belarus
vivik wrote:
Why would you want to open a file in more than one program/thread in the first place? Shouldn't you use shared memory for this?

If a file is opened with CAN_DELETE_BEGINNING, it can't be opened without this flag, and vice versa.

Some files are just too large to be mapped into the virtual address space completely, so shared memory is not a solution. What you suggest is basically an exclusive lock on the file. It works, but it has its disadvantages; otherwise we wouldn’t have a separate sharing parameter in CreateFile.
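
(The usual workaround for files too large to map is a sliding view; a sketch with error handling elided, and note the view offset must be a multiple of the allocation granularity, 64 KB on x86/x64:)

Code:
#include <windows.h>

/* Map only a window of a huge file instead of the whole thing.
   Error handling elided for brevity. */
static void *map_window(HANDLE hFile, ULONGLONG offset, SIZE_T size,
                        HANDLE *hMapOut)
{
    *hMapOut = CreateFileMapping(hFile, NULL, PAGE_READONLY, 0, 0, NULL);
    return MapViewOfFile(*hMapOut, FILE_MAP_READ,
                         (DWORD)(offset >> 32), (DWORD)offset, size);
}
/* ...use the view, then UnmapViewOfFile(view); CloseHandle(*hMapOut); */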

vivik wrote:
New programs don't need to open the same files as the old ones, or they will use the old method of opening them. No issues whatsoever here, other than more code in the OS.

Now how many applications really need that? Note that using such a feature makes your code much more complex: at the very least you have to update all the file offsets you’ve ever cached in your program whenever you perform such a truncation.

What you would gain is some performance improvement in a rare use case. And now you have a new piece of code in the OS to be tested for functional correctness and, later, for backwards compatibility. And you have even more complex logic behind the file handling functions, propagating into your software.

Now, there’s one more thing: if you implement shrinking at the beginning, you’d better also implement extending at the beginning. And since the whole thing is limited to cluster-sized pieces (for some definition of a cluster), you end up having your files split into such pieces and always being aware of them. And then it turns out it is cheaper to just mark a piece as free to reuse: this saves the time spent looking for a free cluster and avoids fragmentation. Which can be implemented in the files we currently have, with even smaller granularity, suitable for your task, and without bothering with all that stuff.

What I’m trying to say is that such a feature is not implemented in popular OSes for a reason. If it were so useful, it would have been implemented, even in spite of the technical difficulties (with many limitations). Another thing is that none of the reasons actually have anything to do with NTFS.

Furs wrote:
But existing apps which have it open will operate on the old file (the same file, still extended at the beginning), since they are linked to the old version. Only when those handles get closed will the file's beginning actually get "freed" on disk.

Which brings the problem of “I’ve deleted this data but I can still see it in another program. WTF?” That’s technically obvious, but semantically the situation is somewhat different from file deletion: when you delete a file, you’re basically saying “it’s OK if you delete this file, I don’t care about it anymore”; file deletion doesn’t imply changes to a shared object. But in the case of shrinking, you’re explicitly changing the object being shared, so the changes are expected to be visible to everyone who owns a handle to the object. Leaving such pieces available for quite some time might look different from shrinking at the end.

Besides, leaving them available means that attempts to free some disk space by shrinking older records in a log file (one of the use cases for the feature) would not work if there’s more than one handle to the file.
Post 30 Sep 2018, 20:28
Furs



Joined: 04 Mar 2016
Posts: 1260
DimonSoft wrote:
Which brings the problem of “I’ve deleted this data but I can still see it in another program. WTF?” That’s technically obvious, but semantically the situation is somewhat different from file deletion: when you delete a file, you’re basically saying “it’s OK if you delete this file, I don’t care about it anymore”; file deletion doesn’t imply changes to a shared object. But in the case of shrinking, you’re explicitly changing the object being shared, so the changes are expected to be visible to everyone who owns a handle to the object. Leaving such pieces available for quite some time might look different from shrinking at the end.

Besides, leaving them available means that attempts to free some disk space by shrinking older records in a log file (one of the use cases for the feature) would not work if there’s more than one handle to the file.
You could add some API to update the file based on deletions like this. Obviously an app has to be aware of it; there's no other way.

How do you delete a file and expect the other app to know about it without having code written for that? Makes no sense to me. In Windows, this thing is simply prohibited (access denied), and in Unix it's what I explained: the file still exists until all handles (file descriptors) are closed.
Post 30 Sep 2018, 21:43
vivik



Joined: 29 Oct 2016
Posts: 485
>But existing apps which have it open will operate on the old file (the same file, still extended at the beginning), since they are linked to the old version. Only when those handles get closed will the file's beginning actually get "freed" on disk.

Or the space is freed immediately, and apps that have the handle open will be notified of it by receiving data_skipped > 0.

Parallelising hdd access is a fruitless idea if you have only one hdd; it can only read one thing at a time anyway.

Thinking that a thing isn't implemented yet because it's useless is wrong; more often it's just not useful enough / too complex to bother with. People will still buy a video game if it looks good; they can tolerate long loading times. I'm not interested in good-enough solutions, I'm interested in the best solutions. I wouldn't be here otherwise.

>What I’m trying to say is that such a feature is not implemented in popular OSes for a reason. If it were so useful, it would have been implemented, even in spite of the technical difficulties (with many limitations). Another thing is that none of the reasons actually have anything to do with NTFS.

It is implemented in some server version of Windows, I just forgot the name. I read about it on MSDN somewhere.

>And then it turns out it is cheaper to just mark a piece as free to reuse: this saves the time spent looking for a free cluster and avoids fragmentation. Which can be implemented in the files we currently have, with even smaller granularity, suitable for your task, and without bothering with all that stuff.

Are you talking about sparse files? I heard that in NTFS I can zero out some space in a file, and it will take up nothing on the hdd. That solves one part of the problem (removing the beginning of a file), introduces a new one (the file keeps growing in size ad infinitum, with no good way to shrink it back), and ignores another one (reducing fragmentation without a full hdd defragmentation, keeping hdd access sequential).
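
For the record, the NTFS mechanism in question is two DeviceIoControl calls; a minimal sketch without error handling:

Code:
#include <windows.h>
#include <winioctl.h>

/* Punch a hole in an NTFS file: mark it sparse, then declare a byte
   range as zero so the clusters under it are deallocated.  The file
   size does not change; reads of the hole return zeroes. */
static void punch_hole(HANDLE h, LONGLONG from, LONGLONG to)
{
    DWORD ret;
    FILE_ZERO_DATA_INFORMATION z;
    DeviceIoControl(h, FSCTL_SET_SPARSE, NULL, 0, NULL, 0, &ret, NULL);
    z.FileOffset.QuadPart = from;
    z.BeyondFinalZero.QuadPart = to;
    DeviceIoControl(h, FSCTL_SET_ZERO_DATA, &z, sizeof(z),
                    NULL, 0, &ret, NULL);
}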

I'd limit each chunk of hdd space to one thread, and teach that thread to share the memory it got from there with other threads when needed, probably with some copy-on-write functionality or just read-only. Pretty much doing all the file caching manually. Probably I'd limit hdd access to one thread. I'd limit the OS to one thread. I'd go back to DOS. Something tells me all the hardware we use was made for DOS. Heh, especially multicore processors.

I should be coding right now, but I'm still confused by cmake and visual studio projects, and playing 天結いキャッスルマイスター feels more rewarding. I feel like I broke the game by grinding this much. Still no porn, even though it's been on my hdd for 3 weeks already? It's very plot-heavy; I was like "stop talking and let me play already" at some point. I'm quite enjoying it.

Somehow the turn-based games that you can savescum the fuck out of are the most enjoyable for me. Immediate feedback for everything. But it becomes quite a chore if nothing new happens for a while; makes me wish somebody would play it for me, makes me want to automate it.
Post 01 Oct 2018, 06:30
DimonSoft



Joined: 03 Mar 2010
Posts: 420
Location: Belarus
Furs wrote:
You could add some API to update the file based on deletions like this. Obviously an app has to be aware of it; there's no other way.

Which is yet another reason to abandon implementing a corner-case feature that can already be implemented more effectively with existing APIs, just with a bit more code in the application that needs the feature.

Quote:
How do you delete a file and expect the other app to know about it without having code written for that? Makes no sense to me. In Windows, this thing is simply prohibited (access denied), and in Unix it's what I explained: the file still exists until all handles (file descriptors) are closed.

If you scroll the page up, you’ll see that’s exactly what I’ve said.

vivik wrote:
Or the space is freed immediately, and apps that have the handle open will be notified of it by receiving data_skipped > 0.

Which requires changing the API and updating old software.

vivik wrote:
Parallelising hdd access is a fruitless idea if you have only one hdd; it can only read one thing at a time anyway.

ReadFile might not require HDD access if the requested data is in the cache. So, while parallelising HDD access is not a good idea, parallelising ReadFile calls is.

vivik wrote:
Thinking that a thing isn't implemented yet because it's useless is wrong; more often it's just not useful enough / too complex to bother with. People will still buy a video game if it looks good; they can tolerate long loading times. I'm not interested in good-enough solutions, I'm interested in the best solutions. I wouldn't be here otherwise.

I’ve never said it is useless. It is just too much of a corner case, and, as everywhere in programming, avoiding the complexity by changing the algorithm might give you an even larger performance gain, so the total benefit of implementing the feature is smaller than the complexity that would have to be dealt with. This is the reason for not having it, not some NTFS detail. We obviously can go on and dance on the rake of complexity, we just don’t really need to.

vivik wrote:
It is implemented in some server version of Windows, I just forgot the name. I read about it on MSDN somewhere.

Isn’t it something with the word “sparse” in it? Does the implementation really differ by just a single parameter to ReadFile/WriteFile?

vivik wrote:
Are you talking about sparse files? I heard that in NTFS I can zero out some space in a file, and it will take up nothing on the hdd. That solves one part of the problem (removing the beginning of a file), introduces a new one (the file keeps growing in size ad infinitum, with no good way to shrink it back), and ignores another one (reducing fragmentation without a full hdd defragmentation, keeping hdd access sequential).

No, I’m not. I’m trying to say that there are two use cases. First: you’re always writing the file as a stream of bytes, without really requiring random access. In this case you gain nothing from cheaper shrinking of the file at the beginning. Second: you’re accessing the file randomly. In this case you already have to have some means of marking certain pieces of the file as free, since you might delete items not only from the beginning, but from the middle of the file as well. And this marking mechanism will have a smaller granularity, will not require special support from the OS (thus being more portable), and will be more efficient, like any specific solution compared to a generic one.

vivik wrote:
I'd limit each chunk of hdd space to one thread, and teach that thread to share the memory it got from there with other threads when needed, probably with some copy-on-write functionality or just read-only. Pretty much doing all the file caching manually. Probably I'd limit hdd access to one thread. I'd limit the OS to one thread. I'd go back to DOS. Something tells me all the hardware we use was made for DOS. Heh, especially multicore processors.

I like this piece :)
Post 01 Oct 2018, 08:09
vivik



Joined: 29 Oct 2016
Posts: 485
>Isn’t it something with the word “sparse” in it? Does the implementation really differ by just a single parameter to ReadFile/WriteFile?

It's a file system, and it's not NTFS. That's all I remember. It's not a sparse file. I probably saved a link somewhere, but I forgot where. It allowed adding or deleting data in the middle of a file.

Okay, found it, I guess: https://docs.microsoft.com/en-us/windows/desktop/fileio/block-cloning. It's ReFS. Look around this page in the features. It may or may not include deleting the beginning.
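
For reference, the call behind that article is FSCTL_DUPLICATE_EXTENTS_TO_FILE; a sketch based on the documented structures (offsets and length must be cluster-aligned, and the shared clusters are copy-on-write):

Code:
#include <windows.h>
#include <winioctl.h>

/* ReFS block cloning: make byteCount bytes of hTarget at targetOff
   refer to the same on-disk clusters as hSource at sourceOff. */
static BOOL clone_range(HANDLE hSource, HANDLE hTarget,
                        LONGLONG sourceOff, LONGLONG targetOff,
                        LONGLONG byteCount)
{
    DWORD ret;
    DUPLICATE_EXTENTS_DATA d;
    d.FileHandle = hSource;
    d.SourceFileOffset.QuadPart = sourceOff;
    d.TargetFileOffset.QuadPart = targetOff;
    d.ByteCount.QuadPart = byteCount;
    return DeviceIoControl(hTarget, FSCTL_DUPLICATE_EXTENTS_TO_FILE,
                           &d, sizeof(d), NULL, 0, &ret, NULL);
}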

Who cares, raw access works for me. I'm not going to write an OS, so all this API talk is completely unrelated. I do want to write a filesystem, though.

"This is complex, it shouldn't be done", sounds like somebody is just lazy.

>No, I’m not. I’m trying to say that there are two use cases. First: you’re always writing the file as a stream of bytes, without really requiring random access. In this case you gain nothing from cheaper shrinking of the file at the beginning. Second: you’re accessing the file randomly. In this case you already have to have some means of marking certain pieces of the file as free, since you might delete items not only from the beginning, but from the middle of the file as well. And this marking mechanism will have a smaller granularity, will not require special support from the OS (thus being more portable), and will be more efficient, like any specific solution compared to a generic one.

The fuck does that mean, "marking pieces of the file as free"? You mean "deleting the middle of a file"? The same thing I'm proposing?
Post 01 Oct 2018, 20:45
DimonSoft



Joined: 03 Mar 2010
Posts: 420
Location: Belarus
vivik wrote:
Okay, found it, I guess: https://docs.microsoft.com/en-us/windows/desktop/fileio/block-cloning. It's ReFS. Look around this page in the features. It may or may not include deleting the beginning.

Doesn’t seem to be the case. From what I can tell from the article, it’s all about letting the OS know when you create copies of relatively large chunks of data on disk, and deferring the disk space allocation until it is actually needed (copy-on-write). It looks like an easy way to implement deduplication, which is also available in file systems like ZFS, for example.

vivik wrote:
"This is complex, it shouldn't be done", sounds like somebody is just lazy.

For me it’s more a case of Occam’s Razor and the rule of –100 points. Engineering is about tradeoffs, after all.

vivik wrote:
The fuck does that mean, "marking pieces of the file as free"? You mean "deleting the middle of a file"? The same thing I'm proposing?

I’m saying that if a file may be changed not only by shrinking it at the end, then the file is almost certainly accessed in a random manner, not sequentially. And then implementing deletion of pieces from inside the file at the file system level may not be the best solution. The file system only knows about storage units and file metadata, while the application knows what the data means and can operate on smaller chunks of data than the file system can.

Such a file may already have some kind of linked list, or bitmap, or whatever lets the application know which pieces of the file should be interpreted which way. Say, if it’s a table in a database, the file format will already allow the DBMS to mark certain records in the table as unused. Those records are usually smaller than a file system cluster. Implementing all the necessary bookkeeping at the file system level means that you either have to stop at cluster-sized chunks or choose a smaller unit (down to a single byte for the most universal solution). And this will have its overhead even for the applications that don’t need the feature.
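
A toy illustration of that kind of in-file bookkeeping (my own sketch, not any particular DBMS): a flag per record plus an intrusive free list rooted in the file header, which gives record-sized granularity instead of cluster-sized.

Code:
#include <stdint.h>

/* Toy on-disk layout for "mark records free instead of shrinking":
   freed records keep their bytes but are flagged and threaded onto a
   free list.  Deleting a record sets REC_FREE and links it into
   free_head; inserting pops the list, or appends if it is empty. */
#define REC_FREE 1u

struct file_header {
    uint64_t free_head;     /* offset of first free record, 0 = none */
};

struct record {
    uint32_t flags;         /* REC_FREE when the record is deleted */
    uint64_t next_free;     /* next free record, valid under REC_FREE */
    char     payload[100];  /* fixed-size record body */
};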

So yes, I’ve suggested a solution to your original problem, but it is basically a suggestion to solve local problems with local solutions.
Post 01 Oct 2018, 21:20
vivik



Joined: 29 Oct 2016
Posts: 485
https://en.wikipedia.org/wiki/B-tree

The data structure heavily used in ReFS.

Here is more info on what ReFS actually is: https://blogs.msdn.microsoft.com/b8/2012/01/16/building-the-next-generation-file-system-for-windows-refs/

You know, if I know that a file is going to have gaps in it, I can create multiple files instead of one, and then decrease the size of one individual file. The question is: will defragmentation and the OS place such files sequentially or not?
Post 02 Oct 2018, 04:47
vivik



Joined: 29 Oct 2016
Posts: 485
Code:
1>------ Build started: Project: simd, Configuration: Debug Win32 ------
2>------ Build started: Project: INSTALL, Configuration: Debug Win32 ------
2>-- Install configuration: "Debug"
2>-- Installing: C:/install/lib/turbojpeg-staticd.lib
2>CMake Error at cmake_install.cmake:48 (file):
2>  file INSTALL cannot find
2>  "C:/build/lib-img/libjpeg-turbo-1.5.3/tjbench-static.exe".    


Well, duh, it's in libjpeg-turbo-1.5.3/Debug. Why does cmake search for it in the wrong place?

The problem is around here:

Code:
install(TARGETS turbojpeg-static ARCHIVE DESTINATION lib)

Code:
install(PROGRAMS ${CMAKE_CURRENT_BINARY_DIR}/tjbench-static.exe
        DESTINATION bin RENAME tjbench.exe)

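A guess at the cause and fix: Visual Studio is a multi-config generator, so the binary lands in a per-config subdirectory (Debug/, Release/) while the install rule hardcodes the build directory. Assuming a CMake recent enough to allow generator expressions in install(PROGRAMS), letting CMake resolve the target's real path should work:

Code:
# $<TARGET_FILE:...> expands to the actual output path, per-config
# subdirectory included, instead of a hardcoded guess.
install(PROGRAMS "$<TARGET_FILE:tjbench-static>"
        DESTINATION bin RENAME tjbench.exe)
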

Last edited by vivik on 02 Oct 2018, 10:22; edited 1 time in total
Post 02 Oct 2018, 06:43
vivik



Joined: 29 Oct 2016
Posts: 485
Code:
# This does nothing except when using MinGW.  CMAKE_BUILD_TYPE has no meaning
# in Visual Studio, and it always defaults to Debug when using NMake.
if(NOT CMAKE_BUILD_TYPE)
  # set(CMAKE_BUILD_TYPE Release)
  ##MINE
  set(CMAKE_BUILD_TYPE "Debug")
endif()
    


Hm, why do I bother then?

Wait, there is this later:

Code:
set(CMAKE_DEBUG_POSTFIX "-d")
    


I need to check whether this actually works: that vs2017 adds the postfix in debug and doesn't in release. Edit: yep, it works.


Last edited by vivik on 02 Oct 2018, 09:37; edited 1 time in total
Post 02 Oct 2018, 07:53
vivik



Joined: 29 Oct 2016
Posts: 485
cmake visual studio different linking paths for debug and release

https://stackoverflow.com/questions/2209929/linking-different-libraries-for-debug-and-release-builds-in-cmake-on-windows

Code:
target_link_libraries ( app
    debug ${Boost_FILESYSTEM_LIBRARY_DEBUG}
    optimized ${Boost_FILESYSTEM_LIBRARY_RELEASE} )

target_link_libraries ( app
    debug ${Boost_LOG_LIBRARY_DEBUG}
    optimized ${Boost_LOG_LIBRARY_RELEASE} )

target_link_libraries ( app
    debug ${Boost_PROGRAM_OPTIONS_LIBRARY_DEBUG}
    optimized ${Boost_PROGRAM_OPTIONS_LIBRARY_RELEASE} )
    
Post 02 Oct 2018, 07:56
DimonSoft



Joined: 03 Mar 2010
Posts: 420
Location: Belarus
vivik wrote:
You know, if I know that a file is going to have gaps in it, I can create multiple files instead of one, and then decrease the size of one individual file. The question is: will defragmentation and the OS place such files sequentially or not?

The difference is that having gaps in a single file makes it possible to add records without allocating disk space. Your solution of using separate files for pieces means that you deallocate space each time there’s a new gap, and when you decide to fill the gap, you have to allocate more disk space, which may fail, unlike when you haven’t freed it and just keep it reserved.

Besides, all the gap handling becomes expensive. Imagine there’s a new gap in one of such piece files: you’ll have to copy the beginning of the source file to a separate file (a disk space allocation), then copy the end of the source file to another separate file (another disk space allocation), then remove the source file. A similar procedure applies to filling a gap. And then you have to think about file names, about users being able to copy all the necessary files properly, about maintaining the logical connection between the files…
Post 02 Oct 2018, 08:20
vivik



Joined: 29 Oct 2016
Posts: 485
Yeah, it's a shit solution; support from the file system is necessary for this to be done well. Or just manual reads and writes on a raw partition.
Post 02 Oct 2018, 09:37
vivik



Joined: 29 Oct 2016
Posts: 485
Interesting how cmake can find zlibd.lib but can't find zlib-d.lib. I wanted to change CMAKE_DEBUG_POSTFIX to "-d" because it looks more readable, but oh well.

Fucking java conquered the world while I wasn't looking. Everyone only hires for java or c#, wtf. Wake me up if somebody hires for c.
Post 02 Oct 2018, 09:51


Copyright © 1999-2018, Tomasz Grysztar.
