zlib

TaskMaster(Posted 2013) [#1]
What is the chance you could wrap the zlib library? With or without the minizip extension.


Htbaa(Posted 2013) [#2]
Isn't zlib already available as pub.zlib?


TaskMaster(Posted 2013) [#3]
I thought that when I was searching for compression libraries as well. I saw the zlib library web page and said "Wait, isn't that in pub.zlib?" But when I looked I found that pub.zlib is just a tiny portion of zlib. zlib has a lot more functionality than what is available in pub.zlib.
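For illustration, this is roughly the streaming side of zlib in C, the z_stream/deflate interface that (as far as I can tell) pub.zlib doesn't expose, since it only wraps the one-shot helpers. Buffer size and error handling here are illustrative:

/* Compress one FILE* to another with zlib's streaming interface
   (a sketch; real code would check deflate()'s return value). */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

#define CHUNK 16384

int deflate_file(FILE *in, FILE *out)
{
    unsigned char inbuf[CHUNK], outbuf[CHUNK];
    z_stream strm;
    memset(&strm, 0, sizeof(strm));

    if (deflateInit(&strm, Z_DEFAULT_COMPRESSION) != Z_OK)
        return -1;

    int flush;
    do {
        strm.avail_in = (uInt)fread(inbuf, 1, CHUNK, in);
        strm.next_in = inbuf;
        flush = feof(in) ? Z_FINISH : Z_NO_FLUSH;

        /* run deflate() until it stops filling the output buffer */
        do {
            strm.avail_out = CHUNK;
            strm.next_out = outbuf;
            deflate(&strm, flush);
            fwrite(outbuf, 1, CHUNK - strm.avail_out, out);
        } while (strm.avail_out == 0);
    } while (flush != Z_FINISH);

    deflateEnd(&strm);
    return 0;
}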


Brucey(Posted 2013) [#4]
Direct TStream support would be a bit difficult, given that TStream is driven by Ints.

A libarchive module is probably the way forward…

… watch this space.


Brucey(Posted 2013) [#5]
> zlib has a lot more functionality than what is available in pub.zlib.

What kind of functionality are you needing, exactly?


TaskMaster(Posted 2013) [#6]
I am trying to compress files into a zip (or other archive) on the fly: lots of files, including files larger than the 2GB limit imposed by BlitzMax's Int-based file streams.

Currently I am just copying all of the files to a directory tree. I would like to push them into an archive on the fly. The program I am writing is a backup program.

I use the Windows API for copying the files, so that files larger than 2GB can be copied.
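Something along these lines, sketched in C (the paths are just placeholders):

/* CopyFileW goes through the OS, so a 32-bit Int offset never
   enters the picture and files past 2GB copy fine. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* FALSE = don't fail if the destination already exists */
    if (!CopyFileW(L"\\\\server\\share\\big.dat",
                   L"D:\\backup\\big.dat", FALSE)) {
        fprintf(stderr, "CopyFileW failed: %lu\n", GetLastError());
        return 1;
    }
    return 0;
}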


Brucey(Posted 2013) [#7]
On Windows, it seems 64-bit seek/tell support requires msvcr80 as a minimum. Adding that as a requirement doesn't bother me much, since it dates back to 2005 or somewhere equally in the distant past.


Derron(Posted 2013) [#8]
So instead of using a library, you could consider an external/3rd-party tool that can be controlled from the command line (7zip and others).


bye
Ron


Brucey(Posted 2013) [#9]
Not really.
I've got no problems using libraries. It's more fun anyway :-)

libarchive turns out to be quite useful, and has a nice API to talk to a variety of different compression systems.
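For example, writing a file into a zip goes something like this: a minimal sketch against libarchive's C API, with the archive name, path, data, and permissions as placeholders and error checking trimmed.

/* Create a zip and write one file entry into it with libarchive. */
#include <archive.h>
#include <archive_entry.h>
#include <string.h>

int main(void)
{
    struct archive *a = archive_write_new();
    archive_write_set_format_zip(a);
    archive_write_open_filename(a, "backup.zip");

    const char *data = "hello";
    struct archive_entry *entry = archive_entry_new();
    archive_entry_set_pathname(entry, "docs/hello.txt");
    archive_entry_set_size(entry, strlen(data));
    archive_entry_set_filetype(entry, AE_IFREG);
    archive_entry_set_perm(entry, 0644);

    archive_write_header(a, entry);          /* entry metadata first */
    archive_write_data(a, data, strlen(data));  /* then its bytes */
    archive_entry_free(entry);

    archive_write_close(a);
    archive_write_free(a);
    return 0;
}

In a backup run you would loop the entry/header/data steps over every file, keeping the one archive handle open throughout.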


Derron(Posted 2013) [#10]
OK, then I misinterpreted that "doesn't bother me ... since ..." as irony.


bye
Ron


TaskMaster(Posted 2013) [#11]
Calling a third-party tool a couple hundred thousand times during a complete backup would probably make my backups take 5 times longer than necessary. I would rather leave the files uncompressed in a directory tree than do that. Ideally, though, I would open an archive at the backup destination and add files to it on the fly.

Currently, I am just copying all of them and maintaining the directory tree. With NTFS compression on, it isn't so bad, but when it is hundreds of thousands of files, the OS handles it badly when you need to move or copy them around. One large archive file is much easier to manage.


Derron(Posted 2013) [#12]
File handling in Win32 BlitzMax is slow compared to Linux (especially when you need the file access/creation/... dates).

If you know the file size, you could use the external tool only for >2GB files. For smaller ones you use the internal functions. And when compressing >2GB of data, the time to spawn an external process should be negligible.


bye
Ron


Brucey(Posted 2013) [#13]
I'm all for more choices...


Derron(Posted 2013) [#14]
More choices which people will use to stress your nerves ("ahh Brucey, XYZ is not working/cannot compile/is missing ...") - if you, erm, _like_ to torture yourself that way :P.

Say if you need one to crack the binary whip.


bye
Ron


TaskMaster(Posted 2013) [#15]
Actually, since BMax is not being updated anymore, it is easier for Brucey to keep his modules up to date. :)

I know I am glad I don't have to go change stuff in my ifsoGUI module because of BlitzMax changes anymore.

Derron, I'm not sure what you mean by calling an external function only for big files. I want either to put ALL of the backed-up files into one archive or not archive them at all, not archive individual files within the backup tree. If I put them in an archive, it would be best to do it programmatically, so I can open the archive, add files to it while backing up, then close it when I am done. If I used an external 7zip call or some such thing, I would have to keep invoking it to add each additional file as I went along. That is what would be cumbersome and add a lot of time to a process that can already take a few hours to back up a server share with 550,000 files.


Brucey(Posted 2013) [#16]
After being annoyed by my test app asking for msvcr80.dll when I introduced stream support (ah, so it did bother me!), I had a rummage through the mingw headers and found another way to get long file support for seek/tell, which appears to work (for now).
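For reference, MinGW's stdio.h declares fseeko64/ftello64, which work against plain msvcrt rather than msvcr80; possibly that is the route meant here. A MinGW-only sketch:

/* 64-bit seek/tell without pulling in msvcr80, unlike _fseeki64.
   Compiles under MinGW only; error handling trimmed. */
#include <stdio.h>

long long file_size64(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    fseeko64(f, 0, SEEK_END);      /* seek with a 64-bit offset */
    long long size = ftello64(f);  /* returns off64_t (64-bit) */
    fclose(f);
    return size;
}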


Derron(Posted 2013) [#17]
@TaskMaster.

I just meant: for each bigger file, you tell an external packer to "add a file to an existing archive"; for all smaller ones you use your internal routine. This will introduce many file open/close calls, unless you sort by file size before archiving.
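A rough sketch of the idea in Windows-flavoured C; the threshold, the 7z command line, and the internal routine are all illustrative (and add_with_internal_routine is hypothetical):

/* Route each file by size: shell out to 7z for anything past the
   32-bit Int ceiling, keep everything else on the internal path. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

#define INT_LIMIT 2147483647LL   /* 2^31 - 1 */

void archive_file(const char *path)
{
    struct _stati64 st;
    if (_stati64(path, &st) != 0)
        return;

    if (st.st_size > INT_LIMIT) {
        /* big file: append it to the archive with an external packer */
        char cmd[1024];
        snprintf(cmd, sizeof(cmd), "7z a backup.7z \"%s\"", path);
        system(cmd);
    } else {
        /* small file: hypothetical internal, stream-based routine */
        /* add_with_internal_routine(path); */
    }
}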

But maybe this idea can go into the fog of forgotten tales ... if Brucey's "way" is the right one.


bye
Ron


TaskMaster(Posted 2013) [#18]
Ah, I see what you mean now. That would make sense, except that the lack of 64-bit file support keeps the existing zip modules from accessing a zip file that is larger than 2GB, not just the individual files you want to put into it.

Anyway, I don't really mind not archiving the files, at least that method works. Archiving just simplifies some stuff.

If I copy a file, I then have to redate it: set the dates and times (created, modified, last accessed) of the newly copied file to match the original's. That isn't difficult, but it adds a step of opening the source file, reading the dates, opening the destination file, and writing the dates, for each file. If the files were put into an archive, those dates would be maintained automatically by the archiving/dearchiving process.
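The redating step via the Windows API looks roughly like this (error handling trimmed):

/* Read the three timestamps off the source and stamp them onto
   the copy. GetFileTime/SetFileTime take creation, last-access,
   and last-write times, in that order. */
#include <windows.h>

void copy_file_times(const wchar_t *src, const wchar_t *dst)
{
    FILETIME created, accessed, modified;

    HANDLE hSrc = CreateFileW(src, GENERIC_READ, FILE_SHARE_READ, NULL,
                              OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    GetFileTime(hSrc, &created, &accessed, &modified);
    CloseHandle(hSrc);

    HANDLE hDst = CreateFileW(dst, FILE_WRITE_ATTRIBUTES, 0, NULL,
                              OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    SetFileTime(hDst, &created, &accessed, &modified);
    CloseHandle(hDst);
}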

Also, as I stated before, if I decided to delete or move an archive directory, the file system handles one file MUCH MUCH better than it does 500,000 files and directories. Even if it is a huge file.

Anyway, it isn't keeping me from moving forward, and if an archiving solution comes about, I will be able to implement it easily.