Commit Graph

4206 Commits

Author SHA1 Message Date
06894b8fe4 PFCOUNT: support multiple keys 2015-09-08 09:22:11 +02:00
7bac7d37cb Fix merge issues in 490847c. 2015-09-07 17:29:43 +02:00
9155cdc3d6 Undo slaves state change on failed rdbSaveToSlavesSockets().
As Oran Agra suggested, in startBgsaveForReplication() when the BGSAVE
attempt returns an error, we scan the list of slaves in order to remove
them since there is no way to serve them currently.

However we check for the replication state BGSAVE_START, which was
modified by rdbSaveToSlavesSockets() before forking. So when fork()
fails, the state of the slaves remains BGSAVE_END and no cleanup is
performed.

This commit fixes the problem by making rdbSaveToSlavesSockets() able to
undo the state change on fork failure.
2015-09-07 16:21:33 +02:00
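A minimal C sketch of the idea in the commit above, assuming simplified
stand-in types rather than the real Redis structures (the actual logic
lives in rdbSaveToSlavesSockets() and uses the server's slave list and
the SLAVE_STATE_* constants): the state flip happens before fork(), so
it must be rolled back when fork() fails.

    /* Hedged sketch, not the real Redis code: simplified types. */
    #include <unistd.h>

    enum replstate { WAIT_BGSAVE_START, WAIT_BGSAVE_END };

    struct slave {
        enum replstate replstate;
        int flipped;              /* did *this* call change the state? */
        struct slave *next;
    };

    int save_to_slave_sockets(struct slave *slaves) {
        /* Promote waiting slaves before forking, as the real code does. */
        for (struct slave *s = slaves; s; s = s->next) {
            s->flipped = (s->replstate == WAIT_BGSAVE_START);
            if (s->flipped) s->replstate = WAIT_BGSAVE_END;
        }
        if (fork() == -1) {
            /* fork() failed: undo the change, so the caller (which scans
             * for WAIT_BGSAVE_START) can detach and clean up the slaves. */
            for (struct slave *s = slaves; s; s = s->next)
                if (s->flipped) s->replstate = WAIT_BGSAVE_START;
            return -1;
        }
        return 0; /* child transfers the RDB payload (omitted here) */
    }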
8c8a7cded9 Sentinel: fix bug in config rewriting during failover
We have a check to rewrite the config properly when a failover is in
progress, in order to add the current (already failed over) master as a
slave, and to avoid including the promoted slave itself in the slave
list.

However there was an issue: the variable with the right address was
computed but never used when the code was modified, and no tests are
available for this feature, for two reasons:

1. The Sentinel unit test currently does not test Sentinel's ability to
persist its state at all.
2. It is a very hard-to-trigger state, since it lasts for a short time
in the context of the testing framework.

However this feature should be covered in the test in some way.

The bug was found by @badboy using the clang static analyzer.

Effects of the bug on safety of Sentinel
===

This bug results in severe issues in the following case:

1. A Sentinel is elected leader.
2. During the failover, it persists a wrong config with a known-slave
entry listing the master address.
3. The Sentinel crashes and restarts, reading invalid configuration from
disk.
4. It sees that the slave does not obey the logical configuration (it
should replicate from the current master), so it sends a SLAVEOF
command to the master itself (since the listed slave address is the
master's address), creating a replication loop (an attempt to replicate
from itself) which Redis is currently unable to detect.
5. This means that the master is no longer available because of the bug.

However the lack of availability should be only transient (at least in
my tests, although other states may be possible where the problem is
not recovered automatically), because:

6. Sentinels treat masters that report themselves as slaves as failing.
7. A new failover is triggered, and a slave is promoted to master.

Bug lifetime
===

The bug has been there forever. Commit 16237d78 actually tried to fix
it, but in the wrong way (the computed variable was never used! My
fault). So this bug has existed basically since the start of Sentinel.

Since the bug is hard to trigger, I remember only a few reports
matching this condition. Also in automated tests where instances were
stopped and restarted multiple times automatically, I remember hitting
this issue; however, I was not able to reproduce it, nor to determine,
with the information I had at the time, what was causing it.
2015-09-07 15:53:08 +02:00
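For illustration, a hypothetical guard against the loop described in
step 4 above; per the commit message, the Redis of this era could not
detect an instance being told to replicate from itself, so this check
is an assumption of the sketch, not code from the repository.

    #include <string.h>

    /* Hypothetical check, not present in Redis at this point: refuse a
     * SLAVEOF that would point an instance at its own address. */
    int slaveof_targets_self(const char *my_ip, int my_port,
                             const char *master_ip, int master_port) {
        return my_port == master_port && strcmp(my_ip, master_ip) == 0;
    }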
c47854615d Sentinel: clarify effect of resetting failover_start_time. 2015-09-07 15:53:08 +02:00
3ff2d65ffa SCAN iter parsing changed from atoi to strtoull 2015-09-07 13:25:38 +02:00
194b7e2186 Force slaves to resync after unsuccessful PSYNC.
Using chained replication where C is slave of B which is in turn slave of
A, if B reconnects the replication link with A but discovers it is no
longer possible to PSYNC, slaves of B must be disconnected and PSYNC
not allowed, since the new B dataset may be completely different after
the synchronization with the master.

Note that there are various semantic differences in the way this is
handled now compared to the past. In the past the semantics were:

1. When a slave lost its connection with the master, it disconnected
the chained slaves ASAP. This is not needed, since after a successful
PSYNC with the master the chained slaves can continue and don't need to
resync in turn.

2. However, after a failed PSYNC the replication backlog was not reset,
so a slave was able to PSYNC successfully even if the instance had done
a full sync with its master and now contained an entirely different
data set.

Now instead chained slaves are not disconnected when the slave loses
the connection with its master, but only when it is forced to full SYNC
with its master. This means that if a slave with chained slaves
completes a successful PSYNC, all its slaves can continue without
trouble.

See issue #2694 for more details.
2015-08-21 15:32:04 +02:00
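A simplified C sketch of the new policy described above, with
illustrative helper names that stand in for the real replication.c
logic:

    /* Hedged sketch: chained slaves survive a lost master link and are
     * dropped only when this instance is forced into a full SYNC, after
     * which its data set may be entirely different. */
    static void disconnect_chained_slaves(void) { /* drop every sub-slave */ }
    static void reset_replication_backlog(void)  { /* void PSYNC history */ }

    void on_master_link_lost(void) {
        /* Old behavior disconnected chained slaves here; now we don't,
         * since a later successful PSYNC lets them continue untouched. */
    }

    void on_forced_full_sync(void) {
        disconnect_chained_slaves();  /* their offsets are now meaningless */
        reset_replication_backlog();  /* so no stale partial resync succeeds */
    }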
730f7c5f5e replicationHandleMasterDisconnection() belongs to replication.c. 2015-08-21 15:32:01 +02:00
12d2a89410 flushSlavesOutputBuffers(): details clarified via comments.
Talking with @oranagra, we had to reason a little to understand whether
this function could ever flush the output buffers of the wrong slaves:
slaves that are in the online state but are not actually ready to
receive writes before their first ACK is received (this happens with
diskless replication).

Next time we'll just read this comment.
2015-08-21 15:31:59 +02:00
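The subtle case the new comment documents, as a hedged sketch with
illustrative field names (later Redis versions have a field called
repl_put_online_on_ack for this, but treat the names here as
assumptions of the sketch):

    struct slave_view {
        int online;              /* replication state reached ONLINE */
        int waiting_first_ack;   /* diskless: writable only after ACK */
    };

    /* Flushing the output buffer of an online but not-yet-acked slave
     * would be wrong; this predicate captures the distinction. */
    int slave_ready_for_buffer_flush(const struct slave_view *s) {
        return s->online && !s->waiting_first_ack;
    }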
7a02677097 Log client details on SLAVEOF command having an effect. 2015-08-21 15:31:11 +02:00
5630eeb12b startBgsaveForReplication(): handle waiting slaves state change.
Before this commit, after triggering a BGSAVE it was up to the caller
of startBgsaveForReplication() to handle slaves in WAIT_BGSAVE_START in
order to update them accordingly. However when the replication target
is the socket, this is not possible, since the process of updating the
slaves and sending the FULLRESYNC reply must be coupled with the
process of starting an RDB save (the reason is, we need to send the
FULLRESYNC reply and spawn a child that will start to send RDB data to
the slaves ASAP).

This commit moves the responsibility of handling slaves in
WAIT_BGSAVE_START to startBgsaveForReplication(), so that for both
diskless and disk-based replication we have the same chain of
responsibility. In order to accommodate this change, syncCommand() also
needs to put the client in the slave list ASAP (just after the initial
checks) and not at the end, so that startBgsaveForReplication() can
find the new slave already in the list.

Another related change is what happens if the BGSAVE fails because of
fork() or other errors: we now remove the slave from the list of slaves
and send an error, scheduling the slave connection to be terminated.

As a side effect of this change the following errors found by
Oran Agra are fixed (thanks!):

1. rdbSaveToSlavesSockets() on failed fork will get the slaves cleaned
up, otherwise they remain in a wrong state forever since we set them up
for full resync before actually trying to fork.

2. updateSlavesWaitingBgsave() with the replication target set to
"socket" was broken, since the function changed the slaves' state from
WAIT_BGSAVE_START to WAIT_BGSAVE_END via
replicationSetupSlaveForFullResync(), so later rdbSaveToSlavesSockets()
would not find any slave in the right state (WAIT_BGSAVE_START) to
feed.
2015-08-21 11:55:30 +02:00
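A hedged, simplified sketch of the reshuffled responsibility: the slave
joins the list first, then startBgsaveForReplication() both starts the
save and settles every WAIT_BGSAVE_START slave, for disk and socket
targets alike. Stand-in types and stubs, not the committed code:

    enum { WAIT_BGSAVE_START, WAIT_BGSAVE_END };
    struct slave { int replstate; struct slave *next; };

    static struct slave *slaves;  /* the server's slave list (stand-in) */
    static int  bgsave_start(int to_sockets) { (void)to_sockets; return 0; }
    static void send_fullresync(struct slave *s) { s->replstate = WAIT_BGSAVE_END; }
    static void send_error_and_close(struct slave *s) { (void)s; }

    int start_bgsave_for_replication(int to_sockets) {
        int err = bgsave_start(to_sockets);      /* disk- or socket-target */
        for (struct slave *s = slaves; s; s = s->next) {
            if (s->replstate != WAIT_BGSAVE_START) continue;
            if (!err) send_fullresync(s);        /* reply + state change */
            else      send_error_and_close(s);   /* cannot serve: drop it */
        }
        return err;
    }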
a7e9b38a48 slaveTryPartialResynchronization and syncWithMaster: better synergy.
It is simpler if removing the read event handler from the FD is up to
slaveTryPartialResynchronization(); after all, it is only called in the
context of syncWithMaster().

This commit also makes sure that on error all the event handlers are
removed from the socket before closing it.
2015-08-07 12:19:21 +02:00
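The cleanup pattern the second paragraph describes, sketched against
Redis' in-tree ae event-loop API; a sketch under that assumption, not
the committed diff:

    #include <unistd.h>
    #include "ae.h"   /* Redis' event loop header */

    /* On error: remove every handler before closing, so no callback can
     * ever fire on a dead file descriptor. */
    static void abort_sync_socket(aeEventLoop *el, int fd) {
        aeDeleteFileEvent(el, fd, AE_READABLE);
        aeDeleteFileEvent(el, fd, AE_WRITABLE);
        close(fd);
    }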
d4e4bd039a syncWithMaster(): non blocking state machine. 2015-08-07 12:18:29 +02:00
c9df63c103 Fixed issues introduced during last merge. 2015-08-06 17:33:04 +02:00
ce3a2d085b startBgsaveForReplication(): log what you really do. 2015-08-06 12:36:13 +02:00
6974e69f35 Replication: add REPLCONF CAPA EOF support.
Add the concept of slave capabilities to Redis: the slave now presents
itself to the Redis master with a set of capabilities in the form:

    REPLCONF capa SOMECAPA capa OTHERCAPA ...

This has the effect of setting slave->slave_capa with the corresponding
SLAVE_CAPA macros that the master can test later to understand whether
the slave will understand certain formats and protocols of the
replication process. This makes it much simpler to introduce new
replication capabilities in the future in a way that doesn't break old
slaves or masters.

This patch was designed and implemented together with Oran Agra
(@oranagra).
2015-08-06 12:36:13 +02:00
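A hedged sketch of how such a handshake can be parsed into a capability
bitmask; SLAVE_CAPA_EOF mirrors the macro family named above, while the
parsing loop itself is a simplification:

    #include <strings.h>   /* strcasecmp */

    #define SLAVE_CAPA_NONE 0
    #define SLAVE_CAPA_EOF  (1 << 0)  /* understands diskless EOF streaming */

    /* argv holds the REPLCONF arguments: "capa", "eof", "capa", ... */
    int parse_replconf_capa(int argc, char **argv) {
        int capa = SLAVE_CAPA_NONE;
        for (int i = 0; i + 1 < argc; i += 2) {
            if (strcasecmp(argv[i], "capa") != 0) continue;
            if (strcasecmp(argv[i + 1], "eof") == 0) capa |= SLAVE_CAPA_EOF;
            /* Unknown capabilities are ignored: that is what keeps the
             * scheme forward compatible with old masters and slaves. */
        }
        return capa;
    }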
be56e4cf33 Fix synchronous readline "\n" handling.
Our function to read a line with a timeout handles newlines as requests
to refresh the timeout; however, because of a bug in the loop logic,
the code kept subtracting from the remaining buffer size every time a
newline was received. Fixed by this commit.
2015-08-05 17:00:04 +02:00
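A hedged, self-contained reconstruction of the fixed behavior, with
poll() standing in for the real timeout machinery: a leading newline
refreshes the timeout without being counted against the remaining
buffer.

    #include <poll.h>
    #include <unistd.h>

    /* Wait until fd is readable or timeout_ms expires. */
    static int wait_readable(int fd, int timeout_ms) {
        struct pollfd p = { .fd = fd, .events = POLLIN };
        return poll(&p, 1, timeout_ms);
    }

    static long read_line_with_timeout(int fd, char *buf, long size) {
        long nread = 0;
        while (nread < size - 1) {
            char c;
            if (wait_readable(fd, 1000) <= 0) return -1;  /* timed out */
            if (read(fd, &c, 1) != 1) return -1;
            if (c == '\n' && nread == 0) continue;  /* refresh timeout only:
                                                       the bug also charged
                                                       these against size */
            if (c == '\n') break;
            buf[nread++] = c;
        }
        buf[nread] = '\0';
        return nread;
    }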
a67d67b561 Fix replication slave pings period.
For PINGs we use the period configured by the user, but for the
newlines sent to slaves waiting for an RDB to be created (including
slaves waiting for the FULLRESYNC reply) we need to ping at a frequency
of 1 second, since the timeout is fixed and needs to be refreshed.
2015-08-05 17:00:04 +02:00
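As a sketch, assuming a once-per-second cron (an assumption of this
example, not the commit's exact mechanism):

    static void ping_online_slaves(void)       { /* real PING to slaves */ }
    static void newline_to_waiting_slaves(void) { /* "\n" keepalive */ }

    /* Called once per second. PINGs honor the configured period; the
     * "\n" heartbeat for pre-sync slaves is fixed at 1 second, because
     * the read timeout on the other side is fixed and must keep being
     * refreshed. */
    void replication_heartbeats(long long seconds, int ping_period) {
        if (seconds % ping_period == 0) ping_online_slaves();
        newline_to_waiting_slaves();
    }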
9a5560f4c3 Fix RDB encoding test for new csvdump format. 2015-08-05 14:05:34 +02:00
6da198cdf1 Remove slave state change handled by replicationSetupSlaveForFullResync(). 2015-08-05 14:00:25 +02:00
39994c2493 Make sure we re-emit SELECT after each new slave full sync setup.
In previous commits we moved the FULLRESYNC to the moment we start the
BGSAVE, so that the offset we provide is the right one. However this
also means that we need to re-emit the SELECT statement every time a new
slave starts to accumulate the changes.

To obtain this effect in a cleaner way, the function that sends the
FULLRESYNC reply was overloaded with the more important role of also
doing this and changing the slave state. So it was renamed to
replicationSetupSlaveForFullResync() to better reflect what it does now.
2015-08-05 14:00:22 +02:00
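A minimal sketch of the mechanism (the slaveseldb field name matches
Redis' replication code, but the surrounding struct is a stand-in):
invalidating the cached SELECTed db forces the next propagated write to
be preceded by SELECT.

    /* Stand-in for the real server struct. */
    static struct { int slaveseldb; } server;

    void on_new_slave_full_sync_setup(void) {
        server.slaveseldb = -1;  /* no db may be assumed selected anymore,
                                    so propagation re-emits SELECT first */
    }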
a89326f0f6 Test: csvdump now scans all DBs. 2015-08-05 14:00:18 +02:00
b2ff48ef19 Don't send SELECT to slaves in WAIT_BGSAVE_START state. 2015-08-05 14:00:15 +02:00
7967f1bca6 PSYNC test: also test the vanilla SYNC. 2015-08-05 14:00:11 +02:00
e684e7266c syncCommand() comments improved. 2015-08-05 14:00:08 +02:00
4b010572cd PSYNC initial offset fix.
This commit attempts to fix a bug involving PSYNC and diskless
replication (currently experimental) found by Yuval Inbar from Redis
Labs, which was later found to have even more far-reaching effects (the
bug also exists when diskless replication is off).

The gist of the bug is that a Redis master replies with +FULLRESYNC to
a PSYNC attempt that fails and requires a full resynchronization.
However, the baseline offset sent along with FULLRESYNC was always the
current master replication offset. This is not OK, because there are
many reasons that may delay the RDB file creation. And... guess what:
the master offset we communicate must be the one of the time the RDB
was created. So for example:

1) When the BGSAVE for replication is delayed since there is one
   already but is not good for replication.
2) When the BGSAVE is not needed as we attach one currently ongoing.
3) When because of diskless replication the BGSAVE is delayed.

In all the above cases the PSYNC reply is wrong and the slave may
reconnect later claiming to need a wrong offset: this may cause data
corruption later.
2015-08-05 14:00:04 +02:00
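A hedged, simplified sketch of the fix: the offset sent with
+FULLRESYNC is stamped when the RDB that will actually be served starts
being created, not when the PSYNC arrives. Names are illustrative:

    struct slave_ctx { long long psync_initial_offset; };

    static long long master_repl_offset;  /* advances with every write */

    /* Pre-fix, the reply used master_repl_offset at PSYNC time, which
     * can be far ahead of the data set the eventual RDB will contain. */
    void on_bgsave_started_for(struct slave_ctx *s) {
        s->psync_initial_offset = master_repl_offset; /* snapshot point */
        /* reply now: +FULLRESYNC <runid> <psync_initial_offset> */
    }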
dc4d24440f Test PSYNC with diskless replication.
Thanks to Oran Agra from Redis Labs for providing this patch.
2015-08-05 14:00:00 +02:00
d3688e8b68 Merge pull request #2666 from Abioy/2.8
bugfix: errno might change before logging
2015-07-17 10:44:18 +02:00
88a38fe3d3 Merge pull request #2678 from Kiemes/fix-config-resetstat
Fix: aof_delayed_fsync is not reset, fixes #2677
2015-07-17 10:39:02 +02:00
cb98ae12f6 Fix: aof_delayed_fsync is not reset
aof_delayed_fsync was not set to 0 when calling CONFIG RESETSTAT
2015-07-16 11:57:07 +02:00
9c7f98521e bugfix: errno might change before logging
Signed-off-by: Yongyue Sun <abioy.sun@gmail.com>
2015-07-13 14:59:05 +08:00
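The class of bug fixed here, as a minimal sketch: anything called while
building the log line (formatting, allocation, the logger itself) may
clobber errno, so it has to be captured first.

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    void log_io_error(const char *what) {
        int saved = errno;   /* capture before any other library call */
        fprintf(stderr, "%s: %s\n", what, strerror(saved));
    }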
fbb9d619f7 Fix 2.8.21 release notes to give full credits. 2015-06-04 12:00:21 +02:00
7f8b865a65 Redis 2.8.21 2.8.21 2015-06-04 11:32:24 +02:00
700b863f13 hide access to debug table 2015-06-03 13:36:02 +02:00
5a1b22ad7f disable loading lua bytecode 2015-06-03 13:36:02 +02:00
1eeb9bd714 Scripting: Lua cmsgpack lib updated to include str8 support 2015-06-03 08:45:56 +02:00
22ee2f9cd5 adding a sentinel command: "flushconfig"
This new command triggers a config flush to save the in-memory config
to disk. This is useful when a configuration management system or a
package manager wipes out your Sentinel config while the process is
still running and has not yet been restarted. It can also be useful for
scripting a backup-and-migrate or clone of a running Sentinel.
2015-05-25 12:06:57 +02:00
cdedad2322 Sentinel: CKQUORUM tests 2015-05-19 12:27:26 +02:00
76412ec9c0 Sentinel: SENTINEL CKQUORUM command
A way for monitoring systems to check that Sentinel is technically able
to reach the quorum and failover, using the currently visible Sentinels.
2015-05-19 12:27:26 +02:00
f42fcff6d2 Rewrite smoveCommand test with ternary operator 2015-05-15 17:39:39 +02:00
19382c8be6 uphold the smove contract to return 0 when the element is not a member of the source set, even if source=dest 2015-05-15 17:39:39 +02:00
bea224308b protocol error log should be seen at debug/verbose level 2015-05-15 17:06:45 +02:00
22979836e5 Fix 2.8.20 changelog typo 2015-05-05 11:50:30 +02:00
11934a9f32 Redis 2.8.20 2.8.20 2015-05-05 11:14:33 +02:00
8db3969971 sdsfree x and y 2015-05-04 13:03:06 +02:00
98756d4c0d fix doc example 2015-05-04 13:03:06 +02:00
7316fda33a fix typo 2015-05-04 13:03:06 +02:00
a5bada19be update copyright year 2015-05-04 12:56:52 +02:00
f0ab4fd6fd Making sentinel flush config on +slave
Originally, only the +slave event which occurs when a slave is
reconfigured during sentinelResetMasterAndChangeAddress triggers a
flush of the config to disk. However, newly discovered slaves
apparently don't trigger this flush, even though they do trigger the
+slave event issuance.

So if you start up a Sentinel, add a master, then add a slave to the
master (as a way to reproduce it), you'll see the +slave event issued,
but the Sentinel config won't be updated with the known-slave entry.

This change makes Sentinel flush the config if a new slave is detected
in sentinelRefreshInstanceInfo.
2015-05-04 12:55:30 +02:00
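A hedged sketch of the fix with stub helpers; the names are
illustrative, and the real change sits inside
sentinelRefreshInstanceInfo():

    #include <stdio.h>

    struct master;  /* opaque stand-in */

    static int  known_slave(struct master *m, const char *a) { (void)m; (void)a; return 0; }
    static void add_known_slave(struct master *m, const char *a) { (void)m; (void)a; }
    static void sentinel_flush_config(void) { /* rewrite + fsync config */ }

    void on_slave_discovered(struct master *m, const char *addr) {
        if (known_slave(m, addr)) return;
        add_known_slave(m, addr);
        printf("+slave %s\n", addr);  /* the event was already emitted */
        sentinel_flush_config();      /* the missing piece: persist it */
    }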
e4c54498db Sentinel: remove useless sentinelFlushConfig() call
Rewriting the config in the loop that adds slaves back after a master
reset, in order to handle switching to another master, is useless: it
just adds latency, since there is an fsync call in the inner loop,
without providing any additional guarantee. On the contrary, if the
server crashes after the first loop iteration, we end up with just a
single slave entry, losing all the other information.

It is wiser to rewrite the config at the end when the full new
state is configured.
2015-05-04 12:55:30 +02:00
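A minimal sketch of the change, with illustrative names: one rewrite,
with its fsync, after the whole list is rebuilt, so a mid-loop crash
can no longer persist a one-slave config.

    struct addr { char ip[46]; int port; };

    static void add_known_slave_addr(const struct addr *a) { (void)a; }
    static void sentinel_flush_config(void) { /* rewrite + fsync */ }

    void reset_master_and_readd_slaves(const struct addr *slaves, int n) {
        for (int i = 0; i < n; i++) {
            add_known_slave_addr(&slaves[i]);
            /* old code flushed here: n fsyncs, and a torn single-entry
             * config if we crashed between iterations */
        }
        sentinel_flush_config();  /* new code: persist once, fully built */
    }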