4968 Commits

Author SHA1 Message Date
antirez
8a02f61b27 MIGRATE: test more corner cases. 2015-12-13 10:17:37 +01:00
antirez
32e940c084 MIGRATE: Fix new argument rewriting refcount handling. 2015-12-13 10:17:33 +01:00
antirez
c460458d57 MIGRATE: fix replies processing and argument rewriting.
We need to process replies even after an error, in order to delete the
keys that were successfully transferred. Argument rewriting was also
fixed, since it was broken in several ways: now a fresh argument vector
is created and set if at least one key transfer is acknowledged.
2015-12-13 10:17:26 +01:00
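The reply-processing fix described above can be sketched as follows (a hypothetical Python model, not the Redis C source): replies for all keys are consumed even after an error, so keys acknowledged by the target can still be deleted locally.

```python
def process_migrate_replies(replies, keys):
    """Return (keys_to_delete, first_error) given one reply per key.

    Illustrative model: even when some key fails, we keep scanning the
    remaining replies so acknowledged keys are still deleted locally.
    """
    to_delete = []
    first_error = None
    for key, reply in zip(keys, replies):
        if reply == "+OK":
            to_delete.append(key)    # acknowledged: safe to delete locally
        elif first_error is None:
            first_error = reply      # remember the error, keep scanning
    return to_delete, first_error
```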
antirez
5912afc9d4 Test: pipelined MIGRATE tests added. 2015-12-13 10:16:46 +01:00
antirez
9276d78776 Pipelined multiple keys MIGRATE. 2015-12-13 10:16:03 +01:00
antirez
3ab63b5cc6 Cluster: redis-trib migrate default timeout set to 60 sec. 2015-12-11 11:00:39 +01:00
Salvatore Sanfilippo
71bf5604c6 Merge pull request #2918 from danmaz74/3.0
redis-trib.rb: --timeout XXXXX option added to fix and reshard
2015-12-11 10:57:05 +01:00
antirez
bf09e58d9d Cluster: replica migration with delay.
We wait a fixed amount of time (currently 5 seconds), much greater than
the usual Cluster node-to-node communication latency, before migrating.
This way, when a failover occurs, before detecting the new master as a
target for migration we give its natural slaves (the slaves of the
failed over master) time to announce they switched to the new master,
preventing a useless migration operation.
2015-12-11 09:26:15 +01:00
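A minimal sketch of the delayed-migration check described above (illustrative Python, not the actual cluster implementation; the names are assumptions):

```python
MIGRATION_DELAY = 5.0  # seconds, per the commit message above

def should_migrate(now, orphaned_since, orphan_slave_count):
    """Only migrate to a master that has been without slaves for longer
    than MIGRATION_DELAY, so its natural slaves have time to announce
    they switched to it after a failover."""
    if orphan_slave_count > 0:
        return False                  # master already has slaves again
    return (now - orphaned_since) >= MIGRATION_DELAY
```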
antirez
5ad4f7e0b1 Cluster: more reliable migration tests.
The old version was modeled with two failovers; however, after the
first one it is possible that another slave will migrate to the new
master, since for some time the new master is not backed by any slave.
Probably there should be some pause after a failover, before the
migration. Anyway the test is simpler this way, and depends less on
timing.
2015-12-10 13:00:18 +01:00
antirez
711bf140f3 Fix merge of cluster migrate-to flag. 2015-12-10 09:31:28 +01:00
antirez
6007ea3bcb Cluster: more reliable replicas migration test. 2015-12-10 09:30:27 +01:00
antirez
6d5d8d10a9 Remove debugging message left there by mistake. 2015-12-10 09:30:22 +01:00
antirez
2e43bcffaf Fix replicas migration by adding a new flag.
Some time ago I broke replicas migration (reported in #2924).
The idea was to prevent masters without replicas from getting replicas
because of replica migration; I remember it creating issues with tests,
but there is no clue in the commit message about why it was so
undesirable.

However, as a side effect, my patch totally ruined the concept of
replicas migration, since we want it to work also for instances that,
technically, never had slaves in the past: promoted slaves.

So now the ability to be targeted by replicas migration is instead
controlled by a new flag, "migrate-to". It only applies to masters, and
is set in the following two cases:

1. When a master gets a slave, it is set.
2. When a slave turns into a master because of a failover, it is set.

This way replicas migration targets only masters that used to have
slaves, plus slaves that were promoted to masters (whose old master,
obviously, used to have slaves).

The new flag is only internal, and is never exposed in the output nor
persisted in the nodes configuration, since all the information needed
to handle it is implicit in the cluster configuration we already have.
2015-12-10 09:30:13 +01:00
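The two rules for the new flag can be modeled like this (a toy Python sketch of the behavior listed above; the names are illustrative, not the C identifiers):

```python
class Node:
    """Toy cluster node carrying only the fields relevant here."""
    def __init__(self):
        self.is_master = False
        self.migrate_to = False   # internal flag, never persisted

def on_gains_slave(master):
    master.migrate_to = True      # case 1: a master gets a slave

def on_failover_promotion(slave):
    slave.is_master = True
    slave.migrate_to = True       # case 2: a slave is promoted to master

def is_migration_target(node):
    # The flag only has meaning for masters.
    return node.is_master and node.migrate_to
```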
daniele
47bd2a09b4 redis-trib.rb: --timeout XXXXX option added to fix and reshard commands. Defaults to 15000 milliseconds 2015-12-06 22:47:57 +01:00
antirez
4f7d1e46cf Fix renamed define after merge. 2015-11-27 16:10:34 +01:00
antirez
3626699f1f Handle wait3() errors.
My guess was that wait3() with WNOHANG could never return -1 and an
error. However issue #2897 may possibly indicate that this could happen
under unclear conditions. While we try to understand this better, it is
better to handle a return value of -1 explicitly; otherwise, when a
BGREWRITE is in progress but wait3() returns -1, the effect is to match
the first branch of the if/else block (since server.rdb_child_pid is
-1) and call backgroundSaveDoneHandler() without a good reason, which
will, in turn, crash the Redis server with an assertion.
2015-11-27 16:09:49 +01:00
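The corrected branching can be sketched like this (illustrative Python mirroring the logic described above, not the C source): with an explicit -1 branch first, a wait3() error can no longer be mistaken for the RDB child exiting when server.rdb_child_pid is also -1.

```python
def handle_wait3(pid, rdb_child_pid, aof_child_pid):
    """Decide what a wait3(WNOHANG)-style return value means.

    pid == -1 is an error, pid == 0 means no child changed state;
    the -1 check must come first, since rdb_child_pid is -1 when no
    RDB child exists and would otherwise match an error return.
    """
    if pid == -1:
        return "log-error"          # the new explicit error branch
    if pid == 0:
        return "no-child-exited"
    if pid == rdb_child_pid:
        return "rdb-done"
    if pid == aof_child_pid:
        return "aof-done"
    return "unknown-child"
```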
antirez
fe71dffbf2 Redis Cluster: hint about validity factor when slave can't failover. 2015-11-27 11:34:30 +01:00
antirez
8e491b1708 Remove "s" flag for MIGRATE in command table.
Maybe there are legitimate use cases for MIGRATE inside Lua scripts, at
least for now. When the command is executed in an asynchronous fashion
(planned), it is possible we'll no longer be able to permit it from
within Lua scripts.
2015-11-17 15:40:47 +01:00
antirez
3da69a9f22 Update redis-cli help and the script to generate it. 2015-11-17 15:40:18 +01:00
antirez
d4f55990f8 Fix MIGRATE entry in command table.
Thanks to Oran Agra (@oranagra) for reporting. Key extraction would not
work otherwise, and it does not make sense to keep wrong data in the
command table.
2015-11-17 15:35:47 +01:00
antirez
28fb193ccd Fix error reply in subscribed Pub/Sub mode.
PING is now a valid command to issue in this context.
2015-11-09 11:12:03 +01:00
antirez
c5f9f199df CONTRIBUTING updated. 2015-10-27 12:07:54 +01:00
antirez
c7ec1a367a Redis 3.0.5 3.0.5 2015-10-15 15:44:54 +02:00
David Thomson
ab1f8ea508 Add back blank line 2015-10-15 13:06:31 +02:00
David Thomson
1fa63a78bc Update import command to optionally use copy and replace parameters 2015-10-15 13:06:31 +02:00
antirez
568c83dda7 Cluster: redis-trib fix, coverage for migrating=1 case.
Kinda related to #2770.
2015-10-15 13:06:31 +02:00
antirez
892b1c3c58 Redis.conf example: make clear user must pass its path as argument. 2015-10-15 12:46:17 +02:00
antirez
cbf6614c1a Regression test for issue #2813. 2015-10-15 11:25:19 +02:00
antirez
7cb8481053 Move end-comment of handshake states.
By mistake I missed the last handshake state.
Related to issue #2813.
2015-10-15 10:22:13 +02:00
antirez
8242d069f1 Make clear that slave handshake states must be ordered.
Make sure that people from the future will not break this rule.
Related to issue #2813.
2015-10-15 10:22:13 +02:00
antirez
6ef80f4ed2 Minor changes to PR #2813.
* Function to test for slave handshake renamed slaveIsInHandshakeState.
* Function no longer accepts arguments since it always tests the
  same global state.
* Test for state translated to a range test since defines are guaranteed
  to stay in order in the future.
* Use the new function in the ROLE command implementation as well.
2015-10-15 10:22:13 +02:00
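The range test mentioned above (slaveIsInHandshakeState testing global state against ordered defines) might look like this as a sketch. This is illustrative Python; the state names and values are assumptions, not the actual defines, which live in the C headers and are guaranteed to stay in order:

```python
# Illustrative replication states, in the order they occur.
REPL_CONNECT = 0          # must connect to master (not handshake)
REPL_CONNECTING = 1       # -- handshake states start here --
REPL_RECEIVE_PONG = 2
REPL_SEND_PSYNC = 3
REPL_RECEIVE_PSYNC = 4    # -- handshake states end here --
REPL_TRANSFER = 5
REPL_CONNECTED = 6

def slave_is_in_handshake_state(repl_state):
    # A single range test; valid only because the defines above are
    # guaranteed to remain consecutive and ordered.
    return REPL_CONNECTING <= repl_state <= REPL_RECEIVE_PSYNC
```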
Kevin McGehee
dc03e4c51b Fix master timeout during handshake
This change allows a slave to properly time out a dead master during
the extended asynchronous synchronization state machine.  Now, slaves
will record their last interaction with the master and apply the
replication timeout before a response to the PSYNC request is received.
2015-10-15 10:22:13 +02:00
antirez
30978004b3 redis-cli pipe mode: don't stay in the write loop forever.
The code was broken and resulted in redis-cli --pipe, most of the
time, writing everything received on the standard input to the Redis
connection socket without ever reading back the replies, until all the
content to write was written.

This means that Redis had to accumulate all the output in the output
buffers of the client, consuming a lot of memory.

Fixed thanks to the original report of anomalies in the behavior
provided by Twitter user @fsaintjacques.
2015-09-30 16:27:19 +02:00
antirez
652e662d1a Test: fix false positive in HSTRLEN test.
HINCRBY* tests later used the value "tmp" that was sometimes generated
by the random key generation function. The result was overwriting what
Tcl expected to be inside Redis with another value, causing the next
HSTRLEN test to fail.
2015-09-15 09:38:26 +02:00
antirez
a0ff29bcf2 Test: MOVE expire test improved.
Related to #2765.
2015-09-14 12:37:13 +02:00
antirez
e2c0d89662 MOVE re-add TTL check fixed.
getExpire() returns -1 when no expire exists.

Related to #2765.
2015-09-14 12:36:34 +02:00
antirez
5b6c764711 MOVE now can move TTL metadata as well.
MOVE was not able to move the TTL: when a key was moved into a different
database number, it became persistent as if PERSIST was used.

In some incredible way (I guess almost nobody uses Redis MOVE) this bug
remained unnoticed inside Redis internals for many years.
Finally Andy Grunwald discovered it and opened an issue.

This commit fixes the bug and adds a regression test.

Close #2765.
2015-09-14 12:32:23 +02:00
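A toy model of the fix (hypothetical Python, with dicts playing the role of Redis databases): the TTL now travels with the key instead of being dropped.

```python
def move_key(src, dst, key):
    """MOVE a key between toy databases shaped as
    {'data': {...}, 'expires': {...}}; returns True on success.

    The fix modeled here: any TTL is carried over to the target db
    instead of leaving the key persistent."""
    if key not in src['data'] or key in dst['data']:
        return False                       # MOVE fails, as in Redis
    dst['data'][key] = src['data'].pop(key)
    ttl = src['expires'].pop(key, None)    # None plays the role of -1
    if ttl is not None:
        dst['expires'][key] = ttl          # TTL moves with the key
    return True
```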
antirez
a74ef35f07 Release note typo fixed: senitel -> sentinel. 2015-09-08 10:41:03 +02:00
antirez
4698284b41 Redis 3.0.4. 3.0.4 2015-09-08 10:02:10 +02:00
antirez
718a826c51 Sentinel: command arity check added where missing. 2015-09-08 09:33:39 +02:00
Rogerio Goncalves
3915e1c71a Check args before running ckquorum. Fixes issue #2635 2015-09-08 09:28:25 +02:00
antirez
ce4c17308e Fix merge issues in 490847c. 2015-09-07 17:29:21 +02:00
antirez
490847c681 Undo slaves state change on failed rdbSaveToSlavesSockets().
As Oran Agra suggested, in startBgsaveForReplication() when the BGSAVE
attempt returns an error, we scan the list of slaves in order to remove
them since there is no way to serve them currently.

However we check for the replication state BGSAVE_START, which was
modified by rdbSaveToSlavesSockets() before forking. So when the fork
fails, the state of the slaves remains BGSAVE_END and no cleanup is
performed.

This commit fixes the problem by making rdbSaveToSlavesSockets() able to
undo the state change on fork failure.
2015-09-07 16:21:24 +02:00
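The undo logic can be sketched as follows (a simplified Python model of the behavior described above, not the C implementation; the state names are abbreviations of the real defines):

```python
WAIT_BGSAVE_START, WAIT_BGSAVE_END = 0, 1

def rdb_save_to_slaves_sockets(slaves, fork_ok):
    """Simplified model: the state is flipped before forking, and with
    this fix flipped back when fork() fails, so the caller's error path
    (which scans for WAIT_BGSAVE_START) can still find the slaves."""
    flipped = []
    for s in slaves:
        if s['state'] == WAIT_BGSAVE_START:
            s['state'] = WAIT_BGSAVE_END
            flipped.append(s)
    if not fork_ok:
        for s in flipped:
            s['state'] = WAIT_BGSAVE_START   # undo on fork failure
        return False
    return True
```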
antirez
c20218eb57 Sentinel: fix bug in config rewriting during failover
We have a check to rewrite the config properly when a failover is in
progress, in order to add the current (already failed over) master as
slave, and don't include in the slave list the promoted slave itself.

However there was an issue: the variable with the right address was
computed but never used when the code was modified, and no tests are
available for this feature, for two reasons:

1. The Sentinel unit test currently does not test Sentinel's ability to
persist its state at all.
2. It is a very hard state to trigger, since it lasts only a short time
in the context of the testing framework.

However this feature should be covered in the test in some way.

The bug was found by @badboy using the clang static analyzer.

Effects of the bug on safety of Sentinel
===

This bug results in severe issues in the following case:

1. A Sentinel is elected leader.
2. During the failover, it persists a wrong config with a known-slave
entry listing the master address.
3. The Sentinel crashes and restarts, reading invalid configuration from
disk.
4. It sees that the slave now does not obey the logical configuration
(it should replicate from the current master), so it sends a SLAVEOF
command to the master (since the slave and master addresses are the
same), creating a replication loop (an attempt to replicate from
itself) which Redis is currently unable to detect.
5. This means that the master is no longer available because of the bug.

However the lack of availability should be only transient (at least
in my tests, but other states could be possible where the problem
is not recovered automatically) because:

6. Sentinels treat masters that report themselves as slaves as failing.
7. A new failover is triggered, and a slave is promoted to master.

Bug lifetime
===

The bug is there forever. Commit 16237d78 actually tried to fix the bug
but in the wrong way (the computed variable was never used! My fault).
So this bug has been there basically since the start of Sentinel.

Since the bug is hard to trigger, I remember only a few reports
matching this condition. Also in automated tests, where instances were
stopped and restarted multiple times automatically, I remember hitting
this issue; however I was not able to reproduce it, nor to determine,
with the information I had at the time, what was causing the issue.
2015-09-07 15:52:44 +02:00
antirez
34d87be519 Sentinel: clarify effect of resetting failover_start_time. 2015-09-07 15:52:33 +02:00
ubuntu
0513de624c SCAN iter parsing changed from atoi to strtoull 2015-09-07 13:25:30 +02:00
antirez
49f2f691cb Log client details on SLAVEOF command having an effect. 2015-08-21 15:30:49 +02:00
antirez
c2ff9de31b startBgsaveForReplication(): handle waiting slaves state change.
Before this commit, after triggering a BGSAVE it was up to the caller of
startBgsaveForReplication() to handle slaves in WAIT_BGSAVE_START in
order to update them accordingly. However when the replication target is
the socket, this is not possible, since the process of updating the
slaves and sending the FULLRESYNC reply must be coupled with the process
of starting an RDB save (the reason is, we need to send the FULLRESYNC
reply and spawn a child that will start to send RDB data to the slaves
ASAP).

This commit moves the responsibility of handling slaves in
WAIT_BGSAVE_START to startBgsaveForReplication(), so that for both
diskless and disk-based replication we have the same chain of
responsibility. In order to accommodate such a change, syncCommand()
also needs to put the client in the slave list ASAP (just after the
initial checks) and not at the end, so that startBgsaveForReplication()
can find the new slave already in the list.

Another related change is what happens if the BGSAVE fails because of
fork() or other errors: we now remove the slave from the list of slaves
and send an error, scheduling the slave connection to be terminated.

As a side effect of this change the following errors found by
Oran Agra are fixed (thanks!):

1. rdbSaveToSlavesSockets() on a failed fork will get the slaves cleaned
up; otherwise they remain in a wrong state forever, since we set them up
for full resync before actually trying to fork.

2. updateSlavesWaitingBgsave() with replication target set as "socket"
was broken since the function changed the slaves state from
WAIT_BGSAVE_START to WAIT_BGSAVE_END via
replicationSetupSlaveForFullResync(), so later rdbSaveToSlavesSockets()
will not find any slave in the right state (WAIT_BGSAVE_START) to feed.
2015-08-21 11:55:14 +02:00
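The new chain of responsibility can be sketched like this (a simplified Python model of the behavior described, not the C implementation):

```python
WAIT_BGSAVE_START, WAIT_BGSAVE_END = 0, 1

def start_bgsave_for_replication(slaves, fork_ok):
    """Model: this function itself now handles slaves waiting for the
    BGSAVE to start, for both success and failure paths."""
    if not fork_ok:
        # BGSAVE failed: remove waiting slaves (their connections would
        # be scheduled for termination after an error reply).
        slaves[:] = [s for s in slaves if s['state'] != WAIT_BGSAVE_START]
        return False
    for s in slaves:
        if s['state'] == WAIT_BGSAVE_START:
            s['state'] = WAIT_BGSAVE_END   # FULLRESYNC sent, child feeds them
    return True
```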
antirez
c5a8c8e907 Fixed issues introduced during last merge. 2015-08-20 10:21:50 +02:00
antirez
4363fa1d76 Force slaves to resync after unsuccessful PSYNC.
Using chained replication, where C is a slave of B which is in turn a
slave of A, if B reconnects the replication link with A but discovers it
is no longer possible to PSYNC, the slaves of B must be disconnected and
PSYNC not allowed, since the new B dataset may be completely different
after the synchronization with the master.

Note that there are various semantic differences in the way this is
handled now compared to the past. In the past the semantics were:

1. When a slave lost the connection with its master, the chained slaves
were disconnected ASAP. This is not needed, since after a successful
PSYNC with the master the slaves can continue and don't need to resync
in turn.

2. However after a failed PSYNC the replication backlog was not reset,
so a slave was able to PSYNC successfully even if the instance had done
a full sync with its master, and now contained an entirely different
data set.

Now chained slaves are instead not disconnected when the slave loses the
connection with its master, but only when it is forced to do a full SYNC
with its master. This means that if the slave that has chained slaves
does a successful PSYNC, all its slaves can continue without troubles.

See issue #2694 for more details.
2015-08-20 10:19:42 +02:00
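The new rule can be summarized with a small sketch (illustrative Python, under the semantics described above): only a forced full SYNC invalidates the chained slaves and the backlog; a mere link drop or a successful PSYNC no longer does.

```python
def on_master_link_event(event, chained_slaves, backlog):
    """event is one of 'link-lost', 'psync-ok', 'full-sync'.
    Returns the (chained_slaves, backlog) to keep afterwards."""
    if event == 'full-sync':
        # The dataset may change entirely: disconnect chained slaves and
        # reset the backlog so they cannot PSYNC against stale data.
        return [], None
    # Link lost or partial resync: chained slaves continue undisturbed.
    return chained_slaves, backlog
```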