Archived
This forum has been archived. Please start a new discussion on GitHub.
Use Ice to send large files (C++)?
How do I use Ice to send/receive large files in C++? Thanks!
Comments
Please see this post regarding our support policy here on these forums:
http://www.zeroc.com/vbulletin/showthread.php?t=1697
I also recommend searching these forums for similar questions.
Large Files
Unless something has changed recently, there is no real "streaming" facility in Ice that I am aware of. We package TIFF files up into a sequence of bytes. It does eat memory, but only until the dispatch has completed. Anyway, memory is cheap these days.
If you want to transmit large files you must know that the Ice.MessageSizeMax property controls the maximum size of a message (from the Ice programming book, section 27.3, page 611):

```
Ice.MessageSizeMax=2048  # allow large messages up to 2 MB (value is in KB)
```

If you plan to transmit very large files, I think the best approach is to divide the file into chunks and transmit the chunks, something like:

```slice
module Files
{
    sequence<byte> Bytes;
    dictionary<long, Bytes> ChunkMap;

    interface FileIO
    {
        idempotent void writeChunk(Bytes bytesSeq); // append a chunk at the end
        idempotent void clear();                    // delete all chunks
        nonmutating Bytes readChunk(long pos);      // return the chunk at pos
        nonmutating int countChunks();              // return the number of chunks
    };

    class File implements FileIO
    {
        int maxChunkSize; // controls the maximum size of a chunk
        ChunkMap chunks;  // stores all chunks
    };
};
```
xdm wrote: If you want to transmit large files you must know that the Ice.MessageSizeMax property controls the maximum size of a message (from the Ice programming book, section 27.3, page 611). If you plan to transmit very large files, I think the best approach is to divide the file into chunks and transmit the chunks.
Thank you!
But in this case, pushing every char of a large file into a vector must be terrible! It must waste a lot of time and memory. Is there some improvement or suggestion? Thanks!
There can be some improvements, depending on what exactly you are trying to do.
You can read the source code of IcePatch to see how the ZeroC guys built their patch service, which includes file transmission and other nice features.
In my experience, network bandwidth is the factor that limits speed, not memory.
In my previous example I store all chunks in a map by value; you can change this to store chunks by reference, so that only the chunks currently being transmitted are held in memory.
thelONE wrote: Thank you!
But in this case, pushing every char of a large file into a vector must be terrible! It must waste a lot of time and memory. Is there some improvement or suggestion? Thanks!
The amount of time required to allocate and copy some memory is totally insignificant compared to the amount of time required to send the data over the network.