Joined: 05 Apr 2005
Posted: Sun May 01, 2005 8:54 pm    Post subject: My discussion with szaka (main author of ntfsclone)
Thank you very much for your criticism; I really like constructive discussions.
I will try to answer all your questions.
szaka wrote about zsplit:
> That one doesn't have explicit bad block support. Ntfsclone analyzes NTFS structures and it doesn't even
> try to save sectors known to be faulty. This way it's MUCH more gentle to damaged or dying disks, which
> means that one has a much greater chance to rescue his/her data compared to dd, dd-rescue, device image,
You are right, and I have already mentioned this in the FAQ section on my homepage: zsplit does not analyze the particular file system. The main idea of device-image tools is file-system independence, so it is not possible for zsplit to recognize sectors that are marked as bad at the file-system level. But this is not dramatic at all! Zsplit does not need the file-system marker to establish whether a particular sector is bad or not. The recognition is based solely on one fact: if a sector is readable, it is a good sector; otherwise it is a bad sector. Normally, in the case of an unreadable sector, the reading process is aborted with an I/O error. With the option -r ("--noerror"), the unreadable sector is replaced with an empty sector right after the I/O error and the reading process continues its job. If you read the source code of my tool carefully, you will see how this is realized. So, your statement above is definitely wrong.
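To make the idea concrete, here is a minimal sketch of the "--noerror" behaviour described above. This is an illustration only, not zsplit's actual code: the block size, function name, and the simple seek-past-the-bad-block recovery are my assumptions for the example.

```python
BLOCK_SIZE = 512  # assumed sector size; the real tool's block size may differ


def read_with_noerror(dev, out):
    """Copy dev to out block by block.

    When a block cannot be read (I/O error), skip it on the input side and
    write a zero-filled "empty" block instead, so copying continues.
    Returns the number of unreadable blocks encountered."""
    bad = 0
    while True:
        try:
            block = dev.read(BLOCK_SIZE)
        except OSError:
            # unreadable sector: step over it and substitute an empty block
            dev.seek(dev.tell() + BLOCK_SIZE)
            block = b"\x00" * BLOCK_SIZE
            bad += 1
        if not block:
            break
        out.write(block)
    return bad
```

The point of the example is exactly the one made above: the tool decides "good or bad" purely by whether the read succeeds, with no file-system knowledge involved.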
> Also some quick notes to your FAQ at http://www.device-image.de/main_faq.htm
> 1) The "undocumented NTFS" remark isn't true. You're misguided by common misconceptions. See my comments
> about the topic for example on http://www.partimage.org/forums/viewtopic.php?t=315
The following part is taken from the FAQ of the Linux-NTFS project:
BEGIN OF CITATION-------------------------------------------------------------
Microsoft haven't released any documentation about the internals of NTFS, so we had to reverse engineer the filesystem from scratch. The method was roughly:
1. Look at the volume with a hex editor
2. Perform some operation, e.g. create a file
3. Use the hex editor to look for changes
4. Classify and document the changes
5. Repeat steps 1-4 forever
If this sounds like a lot of work, then you probably understand how hard the task has been. We now understand pretty much everything about NTFS and we have documented it for the benefit of others: http://linux-ntfs.sourceforge.net/ntfs/index.html
END OF CITATION----------------------------------------------------------------
The last sentence, "We now understand pretty much everything...", says in other words that although a great deal of information has been gathered during the hard work of reverse engineering, one still cannot claim complete knowledge of the structure and all features of the NTFS file system. This is the main reason behind my decision to stay independent of the file system. Only with full (100%) information about a file-system structure can we create 100% reliable backup/restore software whose backup procedure reads and saves at the file-system level. If the information is not complete, the backup/restore procedure is not reliable enough.
On the other hand, I am very impressed by the hard work of reverse engineering which people have done in order to enable support of the NTFS file system on Linux. I am sure that it is very important work and it should be continued under all circumstances.
> Partimage does have incomplete and dangerous NTFS code but ntfsclone doesn't and it never did. I know this
> because I'm the main author and we took extreme efforts to prevent any kind of data loss during
> 2) Partimage's and ntfsclone's main advantage is that they save only used data and not everything so they
> are faster and the output is better compressible. They understand the filesystem hence they can have
> further advantages, like the above bad sector related one.
I have never argued that ntfsclone has dangerous NTFS code. Approximately a year ago I began to read and study the code of partimage and found that, although the software concept was very attractive to me, some implemented features were not satisfactory. For example: overloaded security. I could run the partimage client only after making some changes to the initial authorization as the partimag user. Furthermore, I found that the size of a partition/device readable by partimage could not be bigger than 4 gigabytes (or gibibytes in the new notation). This was caused by the use of utility functions from the zlib library (I am sure the 4-gigabyte restriction comes from these functions), which are implemented on top of the basic stream-oriented functions. All my efforts to contact the author(s) of partimage in order to discuss this issue were unsuccessful.
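The distinction drawn above, between whole-file utility wrappers and the basic stream-oriented compression functions, can be sketched as follows. This is my own illustration (in Python, using its zlib binding), not partimage or zsplit code: a streaming loop processes one chunk at a time and never needs a file offset, so it is free of the 4 GiB limit that 32-bit offset types impose on offset-based utility wrappers.

```python
import zlib


def stream_compress(src, dst, chunk_size=64 * 1024):
    """Compress src to dst chunk by chunk with zlib's streaming API.

    Because only one chunk is in memory at a time and no offsets are
    tracked, the total input size is unlimited in principle."""
    comp = zlib.compressobj()
    total_in = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        total_in += len(chunk)
        dst.write(comp.compress(chunk))
    dst.write(comp.flush())  # emit any data still buffered in the stream
    return total_in
```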
Also, nearly a year ago, I tried your software ntfsclone. Theoretically it is a very good approach..., but sorry, this tool did not do the job for me. I tried it several times, and then I decided to write a tool which would work for me. Zsplit/unzsplit is the first realization of my idea of independence from the file system.
> 3) Partimage and ntfsclone ignores geometries, though people interpret geometry several ways and it's not
> clear what you really mean.
By geometry I mean the differences in C/H/S (cylinder/head/sector) addressing, which matter if you restore to a device with a different C/H/S layout. But now I see this issue is no longer relevant because of LBA (logical block addressing). You are right, I should correct this issue in the FAQ section. Thank you for this hint.
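For readers unfamiliar with the two schemes: LBA numbers all sectors linearly, while C/H/S splits the address into three geometry-dependent parts. The standard translation can be written in a few lines; the 255/63 defaults below are the common BIOS translation geometry, an assumption for illustration, since real drives may report different values.

```python
def chs_to_lba(c, h, s, heads_per_cyl=255, sectors_per_track=63):
    """Standard CHS-to-LBA translation (sector numbers start at 1).

    The result depends on heads_per_cyl and sectors_per_track, which is
    exactly why a raw restore to a device with a different C/H/S layout
    used to be a problem before LBA became universal."""
    return (c * heads_per_cyl + h) * sectors_per_track + (s - 1)
```

For example, the same C/H/S triple maps to a different LBA as soon as the geometry parameters change, which is the incompatibility mentioned above.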
> 4) "Permanent experimental NTFS support": not true. Ntfsclone works reliably for over two years now.
Please see above.
> 5) "FAT file-system could be compressed to 40%-50% of the original size, NTFS file-system is instead less
> compressive". Not true. Things may depend on the default block size. Otherwise compressed NTFS metadatas
> are between 0.3-5 MB for any sizes of disks today in use, so given the same amount of data, the NTFS
> overhead is close to neglectable compared to FAT. Experience also confirms this, for example ntfsclone
> outputs almost exclusively can be compressed to 40%-50% of the original size (unless volume is already
> compressed by NTFS).
Dear Szaka, please switch from thinking in file-system terms to raw-device terms. When I say "the NTFS file system is instead less compressive", I mean that a raw image of such a file system (not an image built by reading the file-system information) is really less compressible than a raw image of a FAT file system. This is reality, proven by many tests which I have done to determine the final size of a compressed raw image.
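A toy illustration of why the two views differ: a raw image includes the unused sectors, so its compressibility depends on what those sectors happen to contain, not only on the file-system metadata. This example (my own, with made-up data) contrasts freshly zeroed free space with free space still holding old, high-entropy data.

```python
import random
import zlib

# One megabyte of "free space" in two states a raw image might capture:
random.seed(0)
zeroed_free_space = bytes(1024 * 1024)  # freshly wiped: all zeroes
leftover_data = bytes(random.getrandbits(8) for _ in range(1024 * 1024))

print(len(zlib.compress(zeroed_free_space)))  # tiny: zeroes compress almost away
print(len(zlib.compress(leftover_data)))      # roughly 1 MiB: barely shrinks
```

So two raw images of the same size and the same file-system type can compress very differently, which is why raw-image compression ratios and file-system-level ratios are not directly comparable.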
This discussion on the partimage forum can be seen here: