This should not be an issue given that the precision here is one second,
at least as long as we use a decent HZ value so that the cached time is
refreshed often enough. For this reason the cached time is only used if
HZ is >= 10.
Commands may use the cached time when the precision is not vital. It is
a good idea to refresh it from time to time during the execution of
long-running scripts or transactions composed of many commands.
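As a rough illustration of the idea (only a sketch: the cached_mstime
variable and the helper names below are assumptions, not the actual
Redis symbols):

    #include <sys/time.h>

    /* Cached clock, refreshed HZ times per second by the server cron. */
    static long long cached_mstime;

    /* Precise wall clock time in milliseconds (calls gettimeofday()). */
    static long long mstime(void) {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return (long long)tv.tv_sec * 1000 + tv.tv_usec / 1000;
    }

    /* Refresh the cached time. Called by the cron, and from time to time
     * in the middle of long-running scripts or big transactions, so that
     * commands reading cached_mstime don't observe a stale value. */
    static void refreshCachedTime(void) {
        cached_mstime = mstime();
    }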
mstime() is not going to change significantly across 16 iterations, so
now we update `now` with the current time in milliseconds only when we
check whether the time limit was reached.
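The pattern, sketched (reusing the mstime() helper from the sketch
above; the 25 ms budget and the empty loop body are assumptions):

    /* Scan loop that refreshes 'now' only once every 16 iterations,
     * right before testing the time limit, instead of calling mstime()
     * (and therefore gettimeofday()) at every single iteration. */
    static void scanWithTimeLimit(void) {
        long long timelimit_ms = 25;            /* assumed budget */
        long long start = mstime(), now = start;
        int iteration = 0, timedout = 0;

        while (!timedout) {
            /* ... do one small unit of work here ... */
            if ((++iteration & 0xf) == 0) {     /* every 16 iterations */
                now = mstime();
                if (now - start > timelimit_ms) timedout = 1;
            }
        }
    }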
When a key expires far in the future compared to the cached time in
server.mstime, calling mstime(), which in turn calls gettimeofday(), is
not very useful. However, when we are near the expire time, we want the
additional precision.
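A sketch of the resulting check (again reusing cached_mstime and
mstime() from the first sketch; the one-second "near" margin is an
arbitrary assumption):

    /* Return 1 if a key whose expire is set at 'when_ms' (unix time in
     * milliseconds) is already expired. Far from the expire we trust
     * the cached clock; only when we are close to it do we pay for the
     * extra gettimeofday() precision. */
    static int keyIsExpired(long long when_ms) {
        long long now = cached_mstime;
        if (when_ms - now < 1000) now = mstime();   /* near: be precise */
        return now >= when_ms;
    }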
This commit is related to issue #2552.
This commit makes it possible to avoid two mstime() calls inside the
call() function, when the following conditions are all true:
1. slowlog is disabled.
2. latency monitoring is disabled.
3. command time accounting is disabled.
Note that '3' was not configurable: this patch just disables it without
really allowing the user to turn it on, since this is currently an
experiment. If the commit is merged into unstable, proper support to
configure this parameter will be added.
Related to issue #2552.
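The shape of the change, sketched (not the literal call() code: the
function and parameter names below are assumptions, and mstime() is the
helper from the first sketch):

    /* Execute a command, measuring its execution time only when some
     * consumer of that measurement (slowlog, latency monitor, per-command
     * time accounting) is actually enabled. */
    static void callCommand(void (*proc)(void),
                            int slowlog_enabled,
                            int latency_monitor_enabled,
                            int time_accounting_enabled)
    {
        int need_timing = slowlog_enabled || latency_monitor_enabled ||
                          time_accounting_enabled;
        long long start = need_timing ? mstime() : 0;

        proc();                       /* run the actual command */

        if (need_timing) {
            long long duration = mstime() - start;
            (void)duration;           /* feed slowlog / latency / stats */
        }
    }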
This fixes issue #2535, which was actually a hiredis library bug (I
submitted an issue and a fix to the redis/hiredis repo as well).
When an asynchronous hiredis connection subscribes to a Pub/Sub channel
and gets an error, and in other related conditions, the function
redisProcessCallbacks() enters a code path where the link is
disconnected; however, the function returns before freeing the allocated
reply object. This causes a memory leak. The leak was trivial to trigger
in Redis Sentinel, which uses hiredis, every time we tried to subscribe
to an instance that required a password while the Sentinel was configured
either with the wrong password or with no password at all. In this case,
the -AUTH error caused the leaking code path to be executed.
It was verified with Valgrind that after this change the leak no longer
happens in Sentinel with a misconfigured authentication password.
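In a much simplified form, the class of bug looks like this
(hypothetical types and helpers, NOT the hiredis code; the point is only
that the early-return path must free the reply):

    #include <stdlib.h>

    typedef struct { int must_disconnect; } conn;

    static void *readReply(conn *c) { (void)c; return malloc(16); }
    static void runCallback(conn *c, void *reply) { (void)c; (void)reply; }
    static void disconnect(conn *c) { (void)c; }

    static int processCallbacks(conn *c) {
        void *reply = readReply(c);
        if (reply == NULL) return 0;        /* nothing to process yet */

        if (c->must_disconnect) {
            disconnect(c);
            free(reply);   /* the fix: free before the early return,
                              otherwise each error leaks one reply */
            return -1;
        }
        runCallback(c, reply);
        free(reply);
        return 0;
    }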
The master was closing the connection if the RDB transfer took a long
time, and it also sent PINGs to the slave before it got the initial ACK,
in which case the slave wouldn't be able to find the EOF marker.
Bug as old as Redis and blocking operations. It's hard to trigger since
it only happens on an instance role switch, but the results are quite bad
since an inconsistency between master and slave is created.
How to trigger the bug is a good description of the bug itself.
1. Client does "BLPOP mylist 0" in master.
2. Master is turned into slave, that replicates from New-Master.
3. Client does "LPUSH mylist foo" in New-Master.
4. New-Master propagates write to slave.
5. Slave receives the LPUSH, the blocked client gets served.
Now Master "mylist" key has "foo", Slave "mylist" key is empty.
Highlights:
* At step "2" above, the client remains attached, basically escaping any
check performed during command dispatch: read only slave, in that case.
* At step "5" the slave (that was the master), serves the blocked client
consuming a list element, which is not consumed on the master side.
This scenario is technically likely to happen during failovers, however
since Redis Sentinel already disconnects clients using the CLIENT
command when changing the role of the instance, the bug is avoided in
Sentinel deployments.
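For reference, the mitigation amounts to something like the following
sketch (hypothetical types and names, not the actual Redis or Sentinel
code): when the role switches to slave, drop the clients that are still
blocked so they cannot be served by replicated writes later.

    typedef struct client {
        struct client *next;
        int blocked;                 /* waiting on BLPOP & friends */
    } client;

    static void disconnectClient(client *c) { (void)c; /* close + free */ }

    static void onRoleSwitchToSlave(client *clients) {
        while (clients != NULL) {
            client *next = clients->next;
            if (clients->blocked) disconnectClient(clients);
            clients = next;
        }
    }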
Closes #2473.
--stat mode already reconnected automatically if the server was no
longer available. This is useful since it is an interactive mode used
for debugging; the same applies to the --latency and --latency-dist
modes, so now both use the reconnecting command execution as well.
The reconnection code was modified to use basic VT100 escape sequences
in order to play better with the different kinds of output on the screen
when the reconnection happens, and to hide the reconnection attempt
output once the connection is finally re-established.
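A sketch of the output handling (the escape sequences are standard
VT100/ANSI ones; the helper names are assumptions, not the literal
redis-cli code):

    #include <stdio.h>

    /* Print the reconnection attempts on a single, reused line. */
    static void showReconnectAttempt(int attempt) {
        /* \r: back to column 0, \x1b[K: erase to the end of the line. */
        printf("\rReconnecting... attempt %d\x1b[K", attempt);
        fflush(stdout);
    }

    /* Wipe that line once the connection is established again, so the
     * normal --stat / --latency output is not polluted. */
    static void hideReconnectMessage(void) {
        printf("\r\x1b[K");
        fflush(stdout);
    }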
So far I was not able to find a color palette within the 256 colors
which is not confusing. However I believe it is a feasible task, so I
will try harder later.
This test was extremely slow on Linux, since in Tcl we can't easily
enable tcp-nodelay, so the busy loop used to take *a lot* of time with
bigger writes. Fixed using pipelining.
The rationale is that when re-entering, it is likely due to Lua
debugging hooks. Returning an error would be ignored in most cases,
going totally unnoticed. With the log, at least we leave a trace.
Related to issue #2302.
The cleanup code expects that if 'di' is not NULL, it is a valid
iterator that should be freed.
The result of this bug was a crash of the AOF rewriting process if an
error occurred after the DB data was written and the iterator was no
longer valid.
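The invariant, sketched with hypothetical helpers (not the literal
rewriteAppendOnlyFile() code):

    #include <stdlib.h>

    typedef struct { int db; } dictIterator;

    static dictIterator *getIterator(int db) {
        dictIterator *di = malloc(sizeof(*di));
        di->db = db;
        return di;
    }
    static void releaseIterator(dictIterator *di) { free(di); }
    static int dumpDb(dictIterator *di) { (void)di; return 0; }
    static int flushAndSync(void) { return 0; }

    static int rewriteSketch(int dbnum) {
        dictIterator *di = NULL;

        for (int j = 0; j < dbnum; j++) {
            di = getIterator(j);
            if (dumpDb(di) != 0) goto werr;
            releaseIterator(di);
            di = NULL;          /* the fix: don't leave a dangling pointer */
        }
        if (flushAndSync() != 0) goto werr; /* error after the DB data */
        return 0;

    werr:
        /* Cleanup expects: di != NULL means a valid, not yet released
         * iterator. Without the reset above this would free it twice. */
        if (di) releaseIterator(di);
        return -1;
    }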
1. The server unixtime may remain stale while loading the AOF, so the
ETA is not updated correctly.
2. The number of processed bytes was not initialized.
3. Possible division by zero condition (likely cause of issue #1932).
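A defensive ETA computation along these lines could look like the
sketch below (hypothetical helper, not the literal Redis code): the
processed-bytes counter starts from zero and the division is guarded.

    /* Estimate the remaining loading time in seconds. Returns 0 when no
     * estimate is possible yet (no time elapsed or no bytes processed),
     * instead of dividing by zero. */
    static double loadingEtaSeconds(long long start_ms, long long now_ms,
                                    long long processed, long long total) {
        long long elapsed_ms = now_ms - start_ms;
        if (elapsed_ms <= 0 || processed <= 0) return 0;
        double bytes_per_sec = (double)processed / ((double)elapsed_ms / 1000.0);
        return (double)(total - processed) / bytes_per_sec;
    }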
Otherwise there are security risks, especially when providing Redis as
a service: the user may "sniff" for admin commands renamed to an
unguessable string via rename-command in redis.conf.
People mostly use SORT against lists, but our prior
behavior was pretending lists were an unordered bag
requiring a forced-sort when no sort was requested.
We can just use the native list ordering to ensure
consistency across replication and scripting calls.
Closes #2079. Closes #545 (again).