Priority: P2: Important
Affects Version/s: 4.8.2
Fix Version/s: None
A certain file would always arrive corrupted on the receiving end when transferred through an instance of QHttpMultiPart. It was always the same small contiguous run of bytes that was corrupted (though the byte values themselves were random garbage), and always at the very end of the data in the part. I was able to confirm that the receiving side was not the culprit: Wireshark showed the corruption was already present in the data in transit.
Also, this corruption occurred when the size of the data part, including the boundary and headers, was just a hair over a multiple of 16K. By "a hair", I mean anywhere from 1 to, I think, 43 bytes over (just about the size of a boundary marker).
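For what it's worth, the trigger condition can be restated numerically. The following standalone helper is my own sketch, not Qt code; the 16K chunk size and the boundary length are assumptions based on the observations above:

```cpp
#include <cassert>

// Hypothetical helper, not Qt code: returns true when totalSize is
// 1..(boundaryLen + 6) bytes past a multiple of an assumed 16K read
// chunk -- the window in which the corruption was observed. The "+ 6"
// covers the two hyphens and the CR/LF pairs around the boundary line.
bool inDangerWindow(long long totalSize, long long boundaryLen)
{
    const long long chunk = 16 * 1024;
    const long long over = totalSize % chunk;
    return over >= 1 && over <= boundaryLen + 6;
}
```

With an assumed 37-byte boundary, the window is 1 to 43 bytes past each 16K multiple, which matches the sizes I observed.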
I believe the reason is that the part index, as calculated in QHttpMultiPartIODevice::readData(char *data, qint64 maxSize), is incremented prematurely, because the logic there doesn't take the leading boundary bytes into account. I confirmed this by trying out the following hack (the '+ 6' accounts for the hyphens, CRs, and LFs):
// O R I G I N A L (broken)
//while (index < multiPart->parts.count() &&
//       readPointer >= partOffsets.at(index) + multiPart->parts.at(index).d->size())

// F I X (now takes the boundary length into consideration)
while (index < multiPart->parts.count() &&
       readPointer >= partOffsets.at(index) + multiPart->parts.at(index).d->size()
                          + multiPart->boundary.count() + 6)
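To make the off-by-boundary-length effect concrete, here is a simplified standalone model of that loop (my own sketch, not the Qt source; the name partIndexAt and the layout assumption that each part carries boundary-length + 6 bytes of framing are mine):

```cpp
#include <cassert>
#include <vector>

// Simplified model of the index-advance loop in
// QHttpMultiPartIODevice::readData(). Part i is assumed to occupy
// [offset_i, offset_i + size_i + framing), where framing is the
// boundary marker overhead (boundary length + 6 for the hyphens,
// CRs, and LFs). With accountForBoundary == false this reproduces
// the broken condition; with true, the fixed one.
int partIndexAt(long long readPointer,
                const std::vector<long long>& partOffsets,
                const std::vector<long long>& partSizes,
                long long boundaryLen,
                bool accountForBoundary)
{
    int index = 0;
    const long long framing = accountForBoundary ? boundaryLen + 6 : 0;
    while (index < static_cast<int>(partSizes.size()) &&
           readPointer >= partOffsets[index] + partSizes[index] + framing)
        ++index;
    return index;
}
```

With a single 100-byte part and a 40-byte boundary, a read pointer at offset 120 is still inside the part's framed region (bytes 0 through 145), yet the broken condition has already advanced the index past it.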
I've attached the small test app that I used to troubleshoot this issue. It's a simple console app that acts as both server and client in one. It takes two command-line parameters: <input file> <destination file>. The client portion reads the input file and sends it to the server via a QHttpMultiPart instance. The server accepts the connection, writes the raw request data to the destination file, and terminates after a short period of idle time. The attached 16K.txt file is sized to trigger this bug.