package consensus

import (
	"context"
	"sync"
	"testing"
	"time"

	"github.com/stretchr/testify/require"

	crypto "github.com/tendermint/go-crypto"
	data "github.com/tendermint/go-wire/data"

	"github.com/tendermint/tendermint/p2p"
	"github.com/tendermint/tendermint/types"
	cmn "github.com/tendermint/tmlibs/common"
)

func init() {
	config = ResetConfig("consensus_byzantine_test")
}

//----------------------------------------------
// byzantine failures

// 4 validators. 1 is byzantine. The other three are partitioned into A (1 val) and B (2 vals).
// byzantine validator sends conflicting proposals into A and B,
// and prevotes/precommits on both of them.
// B sees a commit, A doesn't.
// Byzantine validator refuses to prevote.
// Heal partition and ensure A sees the commit
func TestByzantine(t *testing.T) {
	N := 4
	logger := consensusLogger()
	css := randConsensusNet(N, "consensus_byzantine_test", newMockTickerFunc(false), newCounter)

	// give the byzantine validator a normal ticker
	css[0].SetTimeoutTicker(NewTimeoutTicker())

	switches := make([]*p2p.Switch, N)
	p2pLogger := logger.With("module", "p2p")
	for i := 0; i < N; i++ {
		switches[i] = p2p.NewSwitch(config.P2P)
		switches[i].SetLogger(p2pLogger.With("validator", i))
	}

	eventChans := make([]chan interface{}, N)
	reactors := make([]p2p.Reactor, N)
	for i := 0; i < N; i++ {
		if i == 0 {
			css[i].privValidator = NewByzantinePrivValidator(css[i].privValidator)
			// make byzantine
			css[i].decideProposal = func(j int) func(int, int) {
				return func(height, round int) {
					byzantineDecideProposalFunc(t, height, round, css[j], switches[j])
				}
			}(i)
			css[i].doPrevote = func(height, round int) {}
		}
		eventBus := types.NewEventBus()
		eventBus.SetLogger(logger.With("module", "events", "validator", i))
		_, err := eventBus.Start()
		require.NoError(t, err)
		defer eventBus.Stop()

		eventChans[i] = make(chan interface{}, 1)
		err = eventBus.Subscribe(context.Background(), testSubscriber, types.EventQueryNewBlock, eventChans[i])
		require.NoError(t, err)

		conR := NewConsensusReactor(css[i], true) // so we don't start the consensus states
		conR.SetLogger(logger.With("validator", i))
		conR.SetEventBus(eventBus)

		var conRI p2p.Reactor // nolint: gotype
		conRI = conR
		if i == 0 {
			conRI = NewByzantineReactor(conR)
		}
		reactors[i] = conRI
	}
	defer func() {
		for _, r := range reactors {
			if rr, ok := r.(*ByzantineReactor); ok {
				rr.reactor.Switch.Stop()
			} else {
				r.(*ConsensusReactor).Switch.Stop()
			}
		}
	}()

	p2p.MakeConnectedSwitches(config.P2P, N, func(i int, s *p2p.Switch) *p2p.Switch {
		// ignore the new switch s; we already made ours
		switches[i].AddReactor("CONSENSUS", reactors[i])
		return switches[i]
	}, func(sws []*p2p.Switch, i, j int) {
		// the network starts partitioned with a globally active adversary
		if i != 0 {
			return
		}
		p2p.Connect2Switches(sws, i, j)
	})
	// start the state machines
	byzR := reactors[0].(*ByzantineReactor)
	s := byzR.reactor.conS.GetState()
	byzR.reactor.SwitchToConsensus(s, 0)
	for i := 1; i < N; i++ {
		cr := reactors[i].(*ConsensusReactor)
		cr.SwitchToConsensus(cr.conS.GetState(), 0)
	}

	// byz proposer sends one block to peers[0]
	// and the other block to peers[1] and peers[2].
	// note: the order of peers and switches does not match.
	peers := switches[0].Peers().List()
	ind0 := getSwitchIndex(switches, peers[0])
	ind1 := getSwitchIndex(switches, peers[1])
	ind2 := getSwitchIndex(switches, peers[2])

	// connect the 2 peers in the larger partition
	p2p.Connect2Switches(switches, ind1, ind2)

	// wait for someone in the big partition to make a block
	<-eventChans[ind2]

	t.Log("A block has been committed. Healing partition")

	// connect the partitions
	p2p.Connect2Switches(switches, ind0, ind1)
	p2p.Connect2Switches(switches, ind0, ind2)

	// wait till everyone makes the first new block
	// (one of them already has)
	wg := new(sync.WaitGroup)
	wg.Add(2)
	for i := 1; i < N-1; i++ {
		go func(j int) {
			<-eventChans[j]
			wg.Done()
		}(i)
	}

	done := make(chan struct{})
	go func() {
		wg.Wait()
		close(done)
	}()

	tick := time.NewTicker(time.Second * 10)
	select {
	case <-done:
	case <-tick.C:
		for i, reactor := range reactors {
			t.Log(cmn.Fmt("Consensus Reactor %v", i))
			t.Log(cmn.Fmt("%v", reactor))
		}
		t.Fatalf("Timed out waiting for all validators to commit first block")
	}
}
//-------------------------------
// byzantine consensus functions

func byzantineDecideProposalFunc(t *testing.T, height, round int, cs *ConsensusState, sw *p2p.Switch) {
	// byzantine user should create two proposals and try to split the vote.
	// Avoid sending on internalMsgQueue and running consensus state.

	// Create a new proposal block from state/txs from the mempool.
	block1, blockParts1 := cs.createProposalBlock()
	polRound, polBlockID := cs.Votes.POLInfo()
	proposal1 := types.NewProposal(height, round, blockParts1.Header(), polRound, polBlockID)
	if err := cs.privValidator.SignProposal(cs.state.ChainID, proposal1); err != nil {
		t.Error(err)
	}

	// Create a second proposal block from state/txs from the mempool.
	block2, blockParts2 := cs.createProposalBlock()
	polRound, polBlockID = cs.Votes.POLInfo()
	proposal2 := types.NewProposal(height, round, blockParts2.Header(), polRound, polBlockID)
	if err := cs.privValidator.SignProposal(cs.state.ChainID, proposal2); err != nil {
		t.Error(err)
	}

	block1Hash := block1.Hash()
	block2Hash := block2.Hash()

	// broadcast conflicting proposals/block parts to peers
	peers := sw.Peers().List()
	t.Logf("Byzantine: broadcasting conflicting proposals to %d peers", len(peers))
	for i, peer := range peers {
		if i < len(peers)/2 {
			go sendProposalAndParts(height, round, cs, peer, proposal1, block1Hash, blockParts1)
		} else {
			go sendProposalAndParts(height, round, cs, peer, proposal2, block2Hash, blockParts2)
		}
	}
}
func sendProposalAndParts(height, round int, cs *ConsensusState, peer p2p.Peer, proposal *types.Proposal, blockHash []byte, parts *types.PartSet) {
	// proposal
	msg := &ProposalMessage{Proposal: proposal}
	peer.Send(DataChannel, struct{ ConsensusMessage }{msg})

	// parts
	for i := 0; i < parts.Total(); i++ {
		part := parts.GetPart(i)
		msg := &BlockPartMessage{
			Height: height, // This tells peer that this part applies to us.
			Round:  round,  // This tells peer that this part applies to us.
			Part:   part,
		}
		peer.Send(DataChannel, struct{ ConsensusMessage }{msg})
	}

	// votes
	cs.mtx.Lock()
	prevote, _ := cs.signVote(types.VoteTypePrevote, blockHash, parts.Header())
	precommit, _ := cs.signVote(types.VoteTypePrecommit, blockHash, parts.Header())
	cs.mtx.Unlock()

	peer.Send(VoteChannel, struct{ ConsensusMessage }{&VoteMessage{prevote}})
	peer.Send(VoteChannel, struct{ ConsensusMessage }{&VoteMessage{precommit}})
}
//----------------------------------------
// byzantine consensus reactor

type ByzantineReactor struct {
	cmn.Service
	reactor *ConsensusReactor
}

func NewByzantineReactor(conR *ConsensusReactor) *ByzantineReactor {
	return &ByzantineReactor{
		Service: conR,
		reactor: conR,
	}
}

func (br *ByzantineReactor) SetSwitch(s *p2p.Switch) { br.reactor.SetSwitch(s) }

func (br *ByzantineReactor) GetChannels() []*p2p.ChannelDescriptor { return br.reactor.GetChannels() }

func (br *ByzantineReactor) AddPeer(peer p2p.Peer) {
	if !br.reactor.IsRunning() {
		return
	}

	// Create peerState for peer
	peerState := NewPeerState(peer).SetLogger(br.reactor.Logger)
	peer.Set(types.PeerStateKey, peerState)

	// Send our state to peer.
	// If we're fast_syncing, broadcast a RoundStepMessage later upon SwitchToConsensus().
	if !br.reactor.fastSync {
		br.reactor.sendNewRoundStepMessages(peer)
	}
}

func (br *ByzantineReactor) RemovePeer(peer p2p.Peer, reason interface{}) {
	br.reactor.RemovePeer(peer, reason)
}

func (br *ByzantineReactor) Receive(chID byte, peer p2p.Peer, msgBytes []byte) {
	br.reactor.Receive(chID, peer, msgBytes)
}
//----------------------------------------
// byzantine privValidator

type ByzantinePrivValidator struct {
	types.Signer

	pv types.PrivValidator
}

// NewByzantinePrivValidator returns a priv validator that will sign anything.
func NewByzantinePrivValidator(pv types.PrivValidator) *ByzantinePrivValidator {
	return &ByzantinePrivValidator{
		Signer: pv.(*types.PrivValidatorFS).Signer,
		pv:     pv,
	}
}

func (privVal *ByzantinePrivValidator) GetAddress() data.Bytes {
	return privVal.pv.GetAddress()
}

func (privVal *ByzantinePrivValidator) GetPubKey() crypto.PubKey {
	return privVal.pv.GetPubKey()
}

func (privVal *ByzantinePrivValidator) SignVote(chainID string, vote *types.Vote) (err error) {
	vote.Signature, err = privVal.Sign(types.SignBytes(chainID, vote))
	return err
}

func (privVal *ByzantinePrivValidator) SignProposal(chainID string, proposal *types.Proposal) (err error) {
	proposal.Signature, err = privVal.Sign(types.SignBytes(chainID, proposal))
	return err
}

func (privVal *ByzantinePrivValidator) SignHeartbeat(chainID string, heartbeat *types.Heartbeat) (err error) {
	heartbeat.Signature, err = privVal.Sign(types.SignBytes(chainID, heartbeat))
	return err
}

func (privVal *ByzantinePrivValidator) String() string {
	return cmn.Fmt("PrivValidator{%X}", privVal.GetAddress())
}