If a large number of keys all expire at about the same time, the
"active" expired keys collection cycle used to block for as long as
the percentage of already expired keys was >= 25% of the total
population of keys with an expire set.
This could block the server even for many seconds in order to reclaim
memory ASAP. The new algorithm uses at most a few milliseconds per
cycle: even if this means reclaiming the memory less promptly, it
also means a more responsive server.
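A minimal, self-contained sketch of the idea over a toy key table;
TIME_LIMIT_US and SAMPLE are assumed values here, and the real
activeExpireCycle() differs in detail:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

#define TIME_LIMIT_US 1000 /* assumed budget: ~1 ms per cycle */
#define SAMPLE 20          /* assumed keys sampled per iteration */
#define NKEYS 100000

static long long expire_at[NKEYS]; /* 0 = key already deleted */

static long long ustime(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (long long)tv.tv_sec * 1000000 + tv.tv_usec;
}

static void expire_cycle(long long now) {
    long long start = ustime();
    for (;;) {
        int expired = 0;
        for (int i = 0; i < SAMPLE; i++) {
            int k = rand() % NKEYS;
            if (expire_at[k] && expire_at[k] <= now) {
                expire_at[k] = 0; /* reclaim the key */
                expired++;
            }
        }
        /* Stop when expired keys become sparse (< 25% of the sample)... */
        if (expired < SAMPLE / 4) break;
        /* ...or when the per-cycle budget is gone: reclaiming a bit
         * later beats blocking the event loop for seconds. */
        if (ustime() - start > TIME_LIMIT_US) break;
    }
}

int main(void) {
    srand(42);
    for (int i = 0; i < NKEYS; i++) expire_at[i] = 1 + rand() % 2000;
    for (int c = 0; c < 100; c++) expire_cycle(1000); /* short cycles */
    int alive = 0;
    for (int i = 0; i < NKEYS; i++) if (expire_at[i]) alive++;
    printf("keys still holding an expire: %d\n", alive);
    return 0;
}
```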
Every key matched by a KEYS call is checked for expiration. When the
key has an expire set, the call to `getExpire` will assert that the
key also exists in the main dictionary. This in turn causes a
rehashing step to be executed. Rehashing a dictionary while an
iterator is active may result in the iterator emitting duplicate
entries, or not emitting some entries at all. Using a safe iterator
prevents the rehash step from running.
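A sketch of the pattern against the Redis dict API
(dictGetSafeIterator(), dictNext() and dictReleaseIterator() are the
real entry points; the body is illustrative, not a standalone
program):

```c
#include "dict.h" /* Redis hash table implementation */

/* Iterate the keyspace the way KEYS does after the fix. A *safe*
 * iterator registers itself with the dict, and incremental rehash
 * steps are skipped while any safe iterator is live. So lookups done
 * during the scan (such as the getExpire() existence assert) cannot
 * trigger a rehash that would make the iterator return duplicates or
 * skip entries. */
void scan_all_keys(dict *d) {
    dictIterator *di = dictGetSafeIterator(d); /* not dictGetIterator() */
    dictEntry *de;

    while ((de = dictNext(di)) != NULL) {
        void *key = dictGetKey(de);
        /* ... expire check and reply emission would happen here ... */
        (void)key;
    }
    dictReleaseIterator(di);
}
```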
Now it uses the new wait_for_condition testing primitive.
The wait_for_condition implementation was also fixed in this commit
to properly escape the expr command and its argument.
Two limits are added:
1) Up to SLOWLOG_ENTRY_MAX_ARGV arguments are logged.
2) Up to SLOWLOG_ENTRY_MAX_STRING bytes per argument are logged.
Additionally, slowlog-max-len now defaults to 128 (it was 1024).
The number of remaining arguments / bytes is logged in the entry
so that the user can better understand the nature of the logged
command.
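A minimal sketch of the trimming idea, with assumed limit values and
plain C strings; the real logic lives in slowlog.c and differs in
detail:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SLOWLOG_ENTRY_MAX_ARGV   32   /* assumed value for the sketch */
#define SLOWLOG_ENTRY_MAX_STRING 128  /* assumed value for the sketch */

/* Build the argv array stored in a slowlog entry: cap the number of
 * arguments and the length of each one, recording what was trimmed so
 * the entry still conveys the shape of the original command. */
char **slowlog_trim_argv(int argc, char **argv, int *out_argc) {
    int n = argc <= SLOWLOG_ENTRY_MAX_ARGV ? argc : SLOWLOG_ENTRY_MAX_ARGV;
    char **out = malloc(sizeof(char *) * n);
    char buf[256];

    for (int i = 0; i < n; i++) {
        if (i == SLOWLOG_ENTRY_MAX_ARGV - 1 && argc > SLOWLOG_ENTRY_MAX_ARGV) {
            /* Last slot summarizes the arguments that were dropped. */
            snprintf(buf, sizeof(buf), "... (%d more arguments)",
                     argc - SLOWLOG_ENTRY_MAX_ARGV + 1);
            out[i] = strdup(buf);
        } else if (strlen(argv[i]) > SLOWLOG_ENTRY_MAX_STRING) {
            /* Long argument: keep a prefix and note the trimmed bytes. */
            size_t extra = strlen(argv[i]) - SLOWLOG_ENTRY_MAX_STRING;
            snprintf(buf, sizeof(buf), "%.*s... (%zu more bytes)",
                     SLOWLOG_ENTRY_MAX_STRING, argv[i], extra);
            out[i] = strdup(buf);
        } else {
            out[i] = strdup(argv[i]);
        }
    }
    *out_argc = n;
    return out;
}
```

Trimming at entry-creation time keeps the memory used by each slowlog
entry bounded no matter how large the offending command was.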
Use fieldobj itself as the sentinel of whether a field object is in
use, instead of the field length, which may be confusing both for
people and for the compiler (it emits a warning).
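A sketch of the two styles, with hypothetical names; the point is
only that testing the pointer reads better than inferring liveness
from a length:

```c
typedef struct robj robj; /* stand-in for the Redis object type */

void handle_field(robj *fieldobj, int fieldlen) {
    /* Before: the length doubles as an "is the object in use?" flag,
     * which obscures the intent and can leave the compiler thinking
     * fieldobj is used uninitialized. */
    if (fieldlen) { /* ... use fieldobj ... */ }

    /* After: the pointer itself is the sentinel. */
    if (fieldobj != NULL) { /* ... use fieldobj ... */ }
}
```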
lookupKeyByPattern() was implemented with a trick to speed up the
lookup process: allocating two fake Redis objects on the stack.
However, now that we propagate expires to the slave as DEL
operations, the lookup of the key may result in a call to
expireIfNeeded() with the stack-allocated object as argument, which
may in turn use it to create the protocol to send to the slave. But
since these fake objects are inherently read-only, this is a problem.
As a side effect of this fix there are no longer size limits on the
pattern used with the GET/BY option of SORT.
See https://github.com/antirez/redis/issues/460 for bug details.
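A sketch of the hazard and the shape of the fix; the types and
declarations below are simplified stand-ins (createStringObject(),
decrRefCount() and lookupKeyRead() are real Redis helpers, declared
here only to make the fragment self-describing):

```c
#include <stddef.h>

typedef struct robj robj;       /* simplified stand-in */
typedef struct redisDb redisDb; /* simplified stand-in */

robj *createStringObject(const char *ptr, size_t len); /* heap object */
void  decrRefCount(robj *o);
robj *lookupKeyRead(redisDb *db, robj *key); /* may call expireIfNeeded() */

robj *lookup_by_pattern(redisDb *db, const char *name, size_t len) {
    /* Old trick: build a fake robj on the stack to skip an allocation.
     * That breaks once expireIfNeeded() may hand the key object to the
     * replication link to build a DEL: the fake object is read-only
     * and vanishes when this function returns. */

    /* Fix: pay for a real, reference-counted object instead. */
    robj *keyobj = createStringObject(name, len);
    robj *val = lookupKeyRead(db, keyobj);
    decrRefCount(keyobj);
    return val;
}
```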