start_server now uses the return value of Tcl exec to get the server pid,
however this introduced timing-dependent errors: much of the testing
code base assumed the server to be actually up and running when
start_server returns.
So the old code that waits to see the pid in the log file was restored.
zmalloc(0) caused a non-zero allocation to actually be triggered, since
with standard libc malloc we prepend our own zmalloc header for memory
tracking, so the returned pointer is at the end of the block and not in
the middle. This triggers a false positive when testing with Valgrind.
When the inline protocol args count is 0, we now avoid reallocating
c->argv, preventing the issue from happening.
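The shape of the fix, as a sketch against the inline protocol parsing in
networking.c (simplified, not the literal patch):

    /* Replace c->argv only when the inline command actually has
     * arguments, so zmalloc(0) is never reached for an empty line. */
    if (argc) {
        if (c->argv) zfree(c->argv);
        c->argv = zmalloc(sizeof(robj*) * argc);
    }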
Basically: test that we can load cmsgpack and run some sanity checks to
make sure pack/unpack works properly. We also have a bonus test for
circular encoding and decoding, because I was curious how it worked.
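A round trip of the kind the sanity checks exercise looks like this
(illustrative session, not taken from the test suite):

127.0.0.1:6379> eval "local s = cmsgpack.pack({1, 2, 'three'}); local t = cmsgpack.unpack(s); assert(t[3] == 'three'); return t[1]" 0
(integer) 1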
Main reasons for upgrade:
- Remove a warning when building Redis
- Add multi pack/unpack
- Improve memory usage and use Lua allocator properly
- Fix some edge case encoding/decoding bugs
cjson calls isinf, but some Solaris versions don't have isinf
even with the attempted fix we have in deps/Makefile.
We can harmlessly include the Redis solarisfixes.h header to
give cjson proper isinf.
Note: cjson has a compile-time setting for using their own defined
isinf, but the Redis definition in solarisfixes.h is more complete.
Fixes antirez#1620
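The change boils down to something like the following in lua_cjson.c
(guard and relative path are assumptions, not the exact patch):

    #ifdef __sun
    /* Solaris may lack isinf(); borrow the Redis compat macros. */
    #include "../../../src/solarisfixes.h"
    #endif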
The new cjson has some improvements over our current version, including
increased platform compatibility, a new resource limit to restrict
decode depth, and better invalid number handling.
One minor change was required to deps/Makefile because this version
of cjson doesn't export itself globally, so we added a quick little
define of -DENABLE_CJSON_GLOBAL.
cjson now has an optional higher-performing float parsing interface,
but we are not including it (g_fmt.c, dtoa.c) because it requires
declaring endianness at compile time.
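A quick illustration of cjson from scripting (example session):

127.0.0.1:6379> eval "return cjson.encode({1, 2, 'three'})" 0
"[1,2,\"three\"]"
127.0.0.1:6379> eval "return cjson.decode('[1,2,3]')[2]" 0
(integer) 2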
This commit is exactly lua_cjson.c from 2.1.0 with one minor
change of altering the two Lua includes for local search
instead of system-wide importing.
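Concretely, assuming the stock 2.1.0 headers, the change is just:

    -#include <lua.h>
    -#include <lauxlib.h>
    +#include "lua.h"
    +#include "lauxlib.h"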
A few people have written custom C commands because bit
manipulation isn't exposed through Lua. Let's give
them Mike Pall's bitop.
This adds bitop 1.0.2 (2012-05-08) from http://bitop.luajit.org/
bitop is imported as "bit" into the global namespace.
New Lua commands: bit.tobit, bit.tohex, bit.bnot, bit.band, bit.bor, bit.bxor,
bit.lshift, bit.rshift, bit.arshift, bit.rol, bit.ror, bit.bswap
Verification that it works (the asserts would abort on error, so (nil) is the correct reply):
127.0.0.1:6379> eval "assert(bit.tobit(1) == 1); assert(bit.band(1) == 1); assert(bit.bxor(1,2) == 3); assert(bit.bor(1,2,4,8,16,32,64,128) == 255)" 0
(nil)
127.0.0.1:6379> eval 'assert(0x7fffffff == 2147483647, "broken hex literals"); assert(0xffffffff == -1 or 0xffffffff == 2^32-1, "broken hex literals"); assert(tostring(-1) == "-1", "broken tostring()"); assert(tostring(0xffffffff) == "-1" or tostring(0xffffffff) == "4294967295", "broken tostring()")' 0
(nil)
Tests also integrated into the scripting tests and can be run with:
./runtest --single unit/scripting
Tests are excerpted from `bittest.lua` included in the bitop distribution.
RDB EOF detection was relying on the final part of the RDB transfer to
be a magic 40-byte EOF marker. However as the slave is put online
immediately, and because of socket timeouts, the replication stream is
actually contiguous with the RDB file.
This means that to detect the EOF correctly we should either:
1) Scan all the stream searching for the mark. Sucks CPU-wise.
2) Start to send the replication stream only after an acknowledge.
3) Implement a proper chunked encoding.
For now solution "2" was picked, so in the case of diskless replication
the master does not start sending the stream of commands ASAP. We wait
for the first REPLCONF ACK command from the slave, which certifies that
the slave correctly loaded the RDB file and is ready to receive more
data.
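A rough sketch of the gating on the master side (loosely modeled on
replication.c; field and function names are assumptions):

    /* After the child finishes the socket transfer the slave is marked
     * online, but installing the write handler, and thus starting the
     * command stream, is deferred until the first REPLCONF ACK. */
    slave->replstate = REDIS_REPL_ONLINE;
    slave->repl_put_online_on_ack = 1;

    /* Later, in the REPLCONF ACK handler: */
    if (c->replstate == REDIS_REPL_ONLINE && c->repl_put_online_on_ack)
        putSlaveOnline(c); /* install handler, stream starts flowing */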
The old DEBUG POPULATE form for automatic creation of test keys is:
DEBUG POPULATE <count>
Now an additional form is available:
DEBUG POPULATE <count> <prefix>
When prefix is not specified, it defaults to "key", so the keys are
named incrementally from key:0 to key:<count-1>. Otherwise the specified
prefix is used instead of "key".
The command is useful in order to populate different Redis instances
with key names guaranteed not to collide. There are other debugging
uses: for example it is possible to add N additional keys using a count
of N and a random prefix at every call.
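For example, on an empty instance (illustrative session):

127.0.0.1:6379> DEBUG POPULATE 1000 testkey
OK
127.0.0.1:6379> DBSIZE
(integer) 1000
127.0.0.1:6379> EXISTS testkey:999
(integer) 1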
This caused BGSAVE to be needlessly triggered a second time when we
switch from the socket to the disk target via the command
CONFIG SET repl-diskless-sync no
and there is already a slave waiting for the BGSAVE to start.
Comments were also clarified about what is happening.
EWOULDBLOCK with the fdset rio target is returned when we try to write
but the send timeout socket option triggered an error. Better to
translate the error into something the user can actually recognize as a
timeout.
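A minimal sketch of the translation in the fdset write path (assuming
SO_SNDTIMEO is set on the slave sockets; not the literal patch):

    /* With blocking sockets, EWOULDBLOCK here can only come from the
     * SO_SNDTIMEO send timeout, so report it as a plain timeout. */
    if (retval <= 0 && errno == EWOULDBLOCK) errno = ETIMEDOUT;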
We need to make sure that a child -> slaves transfer cannot continue forever.
We use the same timeout used as global replication timeout, which is
documented to also affect I/O operations during bulk transfers.
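Presumably this maps onto the send timeout mechanism above; a hedged
sketch of applying the replication timeout to a slave FD:

    /* Assumption: server.repl_timeout (seconds) bounds each write(). */
    struct timeval tv = { .tv_sec = server.repl_timeout, .tv_usec = 0 };
    setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv));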
Performing a socket write() for each RDB rio API write call was
extremely inefficient, so now rio has minimal buffering capabilities.
Writes are accumulated into a buffer, and only when a given limit is
reached are they actually written to the N slave FDs.
Trivia: rio lacked support for buffering since our targets were:
1) Memory buffers.
2) C standard I/O.
Both were buffered already.
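A self-contained sketch of the buffering idea, with invented names and
a made-up flush threshold (the actual rio fdset target differs in its
details):

    #include <string.h>
    #include <unistd.h>

    #define BUF_LIMIT (64*1024) /* hypothetical flush threshold */

    struct fdset_target {
        int *fds;            /* the N slave sockets */
        int numfds;
        char buf[BUF_LIMIT];
        size_t buflen;
    };

    /* Write the accumulated buffer to every FD, then reset it. */
    static int fdset_flush(struct fdset_target *t) {
        for (int j = 0; j < t->numfds; j++) {
            size_t done = 0;
            while (done < t->buflen) {
                ssize_t n = write(t->fds[j], t->buf+done, t->buflen-done);
                if (n <= 0) return -1; /* real code tracks per-FD errors */
                done += n;
            }
        }
        t->buflen = 0;
        return 0;
    }

    /* Accumulate data, flushing whenever the buffer fills up. */
    static int fdset_write(struct fdset_target *t, const void *p,
                           size_t len) {
        const char *c = p;
        while (len) {
            size_t room = sizeof(t->buf) - t->buflen;
            size_t n = len < room ? len : room;
            memcpy(t->buf + t->buflen, c, n);
            t->buflen += n; c += n; len -= n;
            if (t->buflen == sizeof(t->buf) && fdset_flush(t) == -1)
                return -1;
        }
        return 0;
    }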
This is useful for normal replication in order to refresh the slave
when we are persisting on disk, but with diskless replication the slave
is already receiving data from the child while in WAIT_BGSAVE_END state.