5103 Commits

Author SHA1 Message Date
Amin Mesbah
7928f578e2 Use geohash limit defines in constraint check
Slight robustness improvement, especially if the limit values are
changed, as was suggested in antirez/redis#4291 [1].

[1] https://github.com/antirez/redis/pull/4291
2018-09-14 12:36:34 +02:00
Jeffrey Lovitz
bb2bed7866 CLI Help text loop verifies arg count 2018-09-14 12:36:34 +02:00
youjiali1995
246980d091 sentinel: fix randomized sentinelTimer. 2018-09-14 12:36:34 +02:00
youjiali1995
fa7de8c499 bio: fix bioWaitStepOfType. 2018-09-14 12:36:34 +02:00
Weiliang Li
7642f9d517 fix usage typo in redis-cli 2018-09-14 12:36:33 +02:00
antirez
a72af0eac6 Redis 5.0 RC5. 2018-09-06 13:04:23 +02:00
antirez
3e1fb5ff4c Use commands (effects) replication by default in scripts.
See issue #5250 and issue #5292 for more info.
2018-09-05 19:55:52 +02:00
antirez
c6c71abe55 Safer script stop condition on OOM.
Here the idea is that we do not want freeMemoryIfNeeded() to propagate a
DEL command before the script and change what happens in the script
execution once it reaches the slave. For example see this potential
issue (in the words of @soloestoy):

On master, we run the following script:

    if redis.call('get','key')
    then
        redis.call('set','xxx','yyy')
    end
    redis.call('set','c','d')

Then, when Redis attempts to execute redis.call('set','xxx','yyy'), it calls freeMemoryIfNeeded(), and 'key' may get evicted. Since redis.call('set','xxx','yyy') has already been executed on the master, the script is replicated to the slave anyway.

But the slave received "DEL key" before the script (and ignores maxmemory itself), so when it runs the script the 'get' fails and 'xxx' is never set: afterwards the master has xxx and c, while the slave has only the single key c.

Note that this patch (and other related work) was authored collaboratively in
issue #5250 with the help of @soloestoy and @oranagra.

Related to issue #5250.
2018-09-05 19:55:49 +02:00
antirez
dfbce91a14 Propagate read-only scripts as SCRIPT LOAD.
See issue #5250 and the new comments added to the code in this commit
for details.
2018-09-05 19:55:45 +02:00
antirez
1705e42eb2 Don't perform eviction when re-entering the event loop.
Related to #5250.
2018-09-05 19:55:42 +02:00
antirez
a0dd6f822b Clarify why remaining may be zero in readQueryFromClient().
See #5304.
2018-09-05 19:55:38 +02:00
zhaozhao.zz
2eed31a59e networking: fix unexpected negative or zero readlen
To avoid copying buffers when creating a large Redis Object that
exceeds PROTO_IOBUF_LEN (32KB), we read only the remaining data we
need, which may be less than PROTO_IOBUF_LEN. But the remaining length
may be zero, if bulklen+2 equals sdslen(c->querybuf), e.g. in a
CLIENT PAUSE context.

For example:

Time1:

python
>>> import os, socket
>>> server="127.0.0.1"
>>> port=6379
>>> data1="*3\r\n$3\r\nset\r\n$1\r\na\r\n$33000\r\n"
>>> data2="".join("x" for _ in range(33000)) + "\r\n"
>>> data3="\n\n"
>>> s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>>> s.settimeout(10)
>>> s.connect((server, port))
>>> s.send(data1)
28

Time2:

redis-cli client pause 10000

Time3:

>>> s.send(data2)
33002
>>> s.send(data3)
2
>>> s.send(data3)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
socket.error: [Errno 104] Connection reset by peer

To fix that, we should check if remaining is greater than zero.
2018-09-05 19:55:34 +02:00
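The guard this commit describes can be sketched in Python (an illustrative model of the readlen logic, not the actual C from networking.c; the constant values below are assumptions for the sketch):

```python
PROTO_IOBUF_LEN = 1024 * 16      # default read chunk size (assumed value)
PROTO_MBULK_BIG_ARG = 1024 * 32  # "large bulk" threshold (assumed value)

def compute_readlen(bulklen, querybuf_len):
    """How many bytes to request from read() while receiving a big bulk arg."""
    readlen = PROTO_IOBUF_LEN
    if bulklen >= PROTO_MBULK_BIG_ARG:
        # Read only what is still missing from this bulk (+2 for the CRLF).
        remaining = bulklen + 2 - querybuf_len
        # The fix: shrink readlen only when remaining is strictly positive.
        # Under CLIENT PAUSE, bulklen+2 can equal sdslen(c->querybuf), and a
        # zero-length read() returning 0 would be misread as a closed peer.
        if 0 < remaining < readlen:
            readlen = remaining
    return readlen
```

With the payload from the transcript above, once all 33002 bytes of data2 have arrived the remaining length is exactly zero, and the guard keeps readlen at its default instead of requesting a 0-byte read.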
antirez
37fb606cdc Merge branch '5.0' of github.com:/antirez/redis into 5.0 2018-09-04 13:13:47 +02:00
zhaozhao.zz
1898e6ce7f networking: optimize parsing large bulk greater than 32k
If we are going to read a large object from the network, try to make
it likely that it will start at the c->querybuf boundary, so that we
can optimize object creation by avoiding a large copy of data.

But do this only when the unparsed data is less than or equal to
ll+2. If its length is greater than ll+2, trimming querybuf is just a
waste of time, because at that point querybuf contains more than just
our bulk.

It's easy to reproduce the issue:

Time1: call `client pause 10000` on slave.

Time2: redis-benchmark -t set -r 10000 -d 33000 -n 10000.

The slave then hangs after 10 seconds.
2018-09-04 12:54:16 +02:00
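The trimming condition from this commit can be sketched like this (illustrative Python; the function name is an assumption modeled on the commit text, not the real C code):

```python
def should_trim_querybuf(unparsed_len, bulklen):
    # Trim querybuf (sdsrange) so the incoming bulk starts at the buffer
    # boundary only when everything still unparsed belongs to this bulk
    # (at most the bulk payload plus its trailing CRLF). If more data is
    # pending, the buffer also holds later pipelined commands, and moving
    # it around is wasted work -- which is what made the paused slave hang.
    return unparsed_len <= bulklen + 2
```

For example, with a 33000-byte bulk, a buffer holding two whole queued commands (unparsed length well above 33002) should not be trimmed.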
antirez
82fc63d151 Unblocked clients API refactoring. See #4418. 2018-09-04 12:54:14 +02:00
zhaozhao.zz
839bb52cc2 if master is already unblocked, do not unblock it twice 2018-09-04 12:54:09 +02:00
zhaozhao.zz
2e1cd82d96 fix multiple unblock for clientsArePaused() 2018-09-04 12:54:02 +02:00
antirez
17233080c3 Make pending buffer processing safe for CLIENT_MASTER client.
Related to #5305.
2018-09-04 12:53:56 +02:00
antirez
8bf42f6031 After slave Lua script leaves busy state, re-process the master buffer.
Technically speaking we don't really need to put the master client in
the list of clients that need to be processed, since in practice the
PING commands from the master will take care of it; however, it is
conceptually saner to do so.
2018-09-04 12:53:48 +02:00
antirez
c2b104c73c While the slave is busy, just accumulate master input.
Processing commands from the master while the slave is in a busy state
is not correct; however, we also cannot just reply -BUSY to the
replication stream commands coming from the master. The correct solution
is to stop processing data from the master, accumulate the stream into
the buffers, and resume processing later.

Related to #5297.
2018-09-04 12:53:41 +02:00
antirez
7b75f4ae7e Allow scripts to timeout even if from the master instance.
However, scripts sent by the master will be impossible to kill.

Related to #5297.
2018-09-04 12:53:38 +02:00
antirez
adc4e031bf Allow scripts to timeout on slaves as well.
See reasoning in #5297.
2018-09-04 12:53:35 +02:00
dejun.xdj
20ec1f0ced Revise the comments of latency command. 2018-09-04 12:53:32 +02:00
Chris Lamb
8e5423eb85 Correct "did not received" -> "did not receive" typos/grammar. 2018-09-04 12:53:28 +02:00
Sascha Roland
eea0d3c50a #5299 Fix blocking XREAD for streams that ran dry
The conclusion that an XREAD request can be answered synchronously when
the stream's last_id is larger than the passed last-received-id
parameter assumes that entries must be present which could be returned
immediately.
This assumption fails for streams that once contained entries but are
now empty, because the entries were removed by XDEL.

As a result, the client is answered synchronously with an empty result,
instead of blocking until new entries arrive.
An additional check for a non-empty stream is required.
2018-08-29 19:12:29 +02:00
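The corrected condition amounts to the following (hedged Python sketch, not the actual t_stream.c code; IDs are modeled as (ms, seq) tuples so they compare correctly):

```python
def xread_can_answer_sync(stream_length, stream_last_id, last_received_id):
    # last_id alone is not enough: XDEL can empty a stream while leaving
    # last_id high, in which case XREAD must block for new entries instead
    # of returning an empty result immediately. Hence the extra check that
    # the stream actually holds entries.
    return stream_length > 0 and stream_last_id > last_received_id
```

A stream whose entries were all deleted (length 0, last_id still (5, 0)) no longer satisfies the condition, so the client blocks as expected.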
zhaozhao.zz
5ad888ba17 Supplement to PR #4835, just take info/memory/command as random commands 2018-08-29 12:29:28 +02:00
zhaozhao.zz
d928487f2b some commands' flags should be set correctly, issue #4834 2018-08-29 12:29:26 +02:00
antirez
02d729b492 Make slave-ignore-maxmemory configurable. 2018-08-29 12:29:16 +02:00
antirez
447da44d96 Introduce repl_slave_ignore_maxmemory flag internally.
Note: this breaks backward compatibility with Redis 4, since now slaves
by default are exact copies of masters and do not try to evict keys
independently.
2018-08-29 12:29:14 +02:00
antirez
868b29252b Better variable meaning in processCommand(). 2018-08-29 12:29:00 +02:00
antirez
319f2ee659 Re-apply rebased #2358. 2018-08-29 12:28:55 +02:00
zhaozhao.zz
22c166da5a block: format code 2018-08-29 12:28:43 +02:00
zhaozhao.zz
c03c591330 block: rewrite BRPOPLPUSH as RPOPLPUSH to propagate 2018-08-29 12:28:39 +02:00
zhaozhao.zz
fcd5ef1624 networking: make setProtocolError simple and clear
The setProtocolError() function just records the protocol error
details in the server log and flags the client as CLIENT_CLOSE_AFTER_REPLY.
It no longer touches querybuf with sdsrange, because that is
done after protocol parsing.
2018-08-29 12:28:35 +02:00
zhaozhao.zz
656e4b2f9d networking: just move qb_pos instead of sdsrange in processInlineBuffer 2018-08-29 12:28:31 +02:00
zhaozhao.zz
2c7972cec8 networking: just return C_OK if multibulk processing saw a <= 0 length. 2018-08-29 12:28:25 +02:00
zhaozhao.zz
aff86fa1f5 pipeline: do not sdsrange querybuf unless all commands processed
This is an optimization for pipeline processing; we discussed the
problem in issue #5229: clients may be paused by the `CLIENT
PAUSE` command, letting querybuf grow very large, and the cost of
the memmove performed by sdsrange after each parsed command becomes
horrible. The optimization is to parse all complete commands in
querybuf first, and only then call sdsrange once.
2018-08-29 12:28:19 +02:00
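The idea can be sketched as follows (illustrative Python over inline-style commands; the real code advances a qb_pos offset into the sds buffer rather than slicing Python bytes):

```python
def process_buffer(querybuf):
    """Parse every complete command, then trim the buffer exactly once."""
    pos = 0      # plays the role of qb_pos: advanced instead of memmoving
    parsed = []
    while True:
        end = querybuf.find(b"\r\n", pos)
        if end == -1:
            break                     # incomplete trailing command, keep it
        parsed.append(querybuf[pos:end])
        pos = end + 2                 # no per-command sdsrange/memmove here
    return parsed, querybuf[pos:]     # single trim at the very end
```

For example, process_buffer(b"PING\r\nPING\r\nPI") parses both complete commands and keeps only the incomplete b"PI" tail, with one trim instead of one per command.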
Chris Lamb
45a6c5be2a Use SOURCE_DATE_EPOCH over unreproducible uname + date calls.
See <https://reproducible-builds.org/specs/source-date-epoch/> for more
details.

Signed-off-by: Chris Lamb <chris@chris-lamb.co.uk>
2018-08-29 12:28:16 +02:00
dejun.xdj
b59f04a099 Streams: ID of xclaim command starts from the sixth argument. 2018-08-29 12:28:09 +02:00
shenlongxing
a3f2437bdd Fix stream command params 2018-08-29 12:28:06 +02:00
antirez
df9112354b Fix AOF comment to report the current behavior.
Related to #5201.
2018-08-29 12:28:04 +02:00
antirez
5b06bdf457 Redis 5.0 RC4. 2018-08-03 16:46:06 +02:00
antirez
fbbcc6a657 Streams IDs parsing refactoring.
Related to #5184.
2018-08-02 18:34:42 +02:00
antirez
63addc5c1e Fix zslUpdateScore() edge case.
When the element's new score is the same as that of the prev/next node,
the lexicographical order kicks in, so we can safely update the node in
place only when the new score falls strictly between the adjacent nodes'
scores, never equal to either of them.

Technically speaking we could do extra checks to verify that, even when
the score equals that of an adjacent node, the update can still happen
in place, but this rarely happens, so it is probably not worth the added
complexity.

Related to #5179.
2018-08-02 18:34:34 +02:00
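The in-place condition described above can be sketched as (hedged Python; the real zslUpdateScore() works on skiplist nodes in C, with None here standing in for a missing backward/forward neighbour):

```python
def can_update_in_place(prev_score, new_score, next_score):
    # Updating the node in place is safe only when the new score lies
    # strictly between the neighbours' scores: on equality, the
    # lexicographical order of the elements decides the position, so the
    # node might have to move and must be deleted and reinserted instead.
    return (prev_score is None or prev_score < new_score) and \
           (next_score is None or new_score < next_score)
```

For instance, moving a node's score to exactly match its successor's score fails the check, forcing the reinsertion path.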
antirez
724740cc19 More commenting of zslUpdateScore(). 2018-08-02 18:34:34 +02:00
antirez
ddc87eef4f Explain what's the point of zslUpdateScore() in top comment. 2018-08-02 18:34:34 +02:00
antirez
741f29ea52 Remove old commented zslUpdateScore() from source. 2018-08-02 18:34:34 +02:00
antirez
201168368a Optimize zslUpdateScore() as asked in #5179. 2018-08-02 18:34:34 +02:00
antirez
8c297e8b43 zsetAdd() refactored adding zslUpdateScore(). 2018-08-02 18:34:34 +02:00
dejun.xdj
bd2f3f6bb1 Streams: rearrange the usage of '-' and '+' IDs in stream commands. 2018-08-02 18:34:34 +02:00