fix processing of large bulks (above 2GB)

- protocol parsing (processMultibulkBuffer) was limited to 32-bit positions in the buffer,
  and readQueryFromClient had a potential overflow
- rioWriteBulkCount used int, although rioWriteBulkString passed it a size_t
- several places in sds.c used int for string length or index
- bugfix in RM_SaveAuxField (the return value was 1 or -1, not the length)
- RM_SaveStringBuffer was limited to a 32-bit length
Oran Agra
2017-12-21 11:10:48 +02:00
parent 0b561883b4
commit 60a4f12f8b
8 changed files with 39 additions and 33 deletions

@@ -310,7 +310,7 @@ void rioSetAutoSync(rio *r, off_t bytes) {
  * generating the Redis protocol for the Append Only File. */
 /* Write multi bulk count in the format: "*<count>\r\n". */
-size_t rioWriteBulkCount(rio *r, char prefix, int count) {
+size_t rioWriteBulkCount(rio *r, char prefix, long count) {
     char cbuf[128];
     int clen;