package consensus

import (
	"context"
	"fmt"
	"os"
	"path"
	"runtime"
	"runtime/pprof"
	"sync"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	abcicli "github.com/tendermint/tendermint/abci/client"
	"github.com/tendermint/tendermint/abci/example/kvstore"
	abci "github.com/tendermint/tendermint/abci/types"
	cfg "github.com/tendermint/tendermint/config"
	cstypes "github.com/tendermint/tendermint/consensus/types"
	cmn "github.com/tendermint/tendermint/libs/common"
	"github.com/tendermint/tendermint/libs/log"
	mempl "github.com/tendermint/tendermint/mempool"
	"github.com/tendermint/tendermint/p2p"
	"github.com/tendermint/tendermint/p2p/mock"
	sm "github.com/tendermint/tendermint/state"
	"github.com/tendermint/tendermint/store"
	"github.com/tendermint/tendermint/types"
	dbm "github.com/tendermint/tm-db"
)

//----------------------------------------------
// in-process testnets

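// startConsensusNet connects n consensus reactors via in-process switches,
// subscribes each validator's event bus to new-block events, and only then
// starts the state machines. It returns the reactors, the block subscriptions,
// and the event buses so callers can shut everything down with stopConsensusNet.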
func startConsensusNet(t *testing.T, css []*ConsensusState, n int) (
	[]*ConsensusReactor,
	[]types.Subscription,
	[]*types.EventBus,
) {
	reactors := make([]*ConsensusReactor, n)
	blocksSubs := make([]types.Subscription, 0)
	eventBuses := make([]*types.EventBus, n)
	for i := 0; i < n; i++ {
		/*logger, err := tmflags.ParseLogLevel("consensus:info,*:error", logger, "info")
		if err != nil { t.Fatal(err) }*/
		reactors[i] = NewConsensusReactor(css[i], true) // so we don't start the consensus states
		reactors[i].SetLogger(css[i].Logger)

		// eventBus is already started with the cs
		eventBuses[i] = css[i].eventBus
		reactors[i].SetEventBus(eventBuses[i])

		blocksSub, err := eventBuses[i].Subscribe(context.Background(), testSubscriber, types.EventQueryNewBlock)
		require.NoError(t, err)
		blocksSubs = append(blocksSubs, blocksSub)

		if css[i].state.LastBlockHeight == 0 { // simulate handling InitChain in handshake
			sm.SaveState(css[i].blockExec.DB(), css[i].state)
		}
	}

	// make connected switches and start all reactors
	p2p.MakeConnectedSwitches(config.P2P, n, func(i int, s *p2p.Switch) *p2p.Switch {
		s.AddReactor("CONSENSUS", reactors[i])
		s.SetLogger(reactors[i].conS.Logger.With("module", "p2p"))
		return s
	}, p2p.Connect2Switches)

	// now that everyone is connected, start the state machines.
	// If we started the state machines before everyone was connected,
	// we'd block when the cs fires NewBlockEvent and the peers are trying to start their reactors.
	// TODO: is this still true with new pubsub?
	for i := 0; i < n; i++ {
		s := reactors[i].conS.GetState()
		reactors[i].SwitchToConsensus(s, 0)
	}
	return reactors, blocksSubs, eventBuses
}

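// stopConsensusNet stops the switches and event buses started by startConsensusNet.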
func stopConsensusNet(logger log.Logger, reactors []*ConsensusReactor, eventBuses []*types.EventBus) {
	logger.Info("stopConsensusNet", "n", len(reactors))
	for i, r := range reactors {
		logger.Info("stopConsensusNet: Stopping ConsensusReactor", "i", i)
		r.Switch.Stop()
	}
	for i, b := range eventBuses {
		logger.Info("stopConsensusNet: Stopping eventBus", "i", i)
		b.Stop()
	}
	logger.Info("stopConsensusNet: DONE", "n", len(reactors))
}

// Ensure a testnet makes blocks
func TestReactorBasic(t *testing.T) {
	N := 4
	css, cleanup := randConsensusNet(N, "consensus_reactor_test", newMockTickerFunc(true), newCounter)
	defer cleanup()
	reactors, blocksSubs, eventBuses := startConsensusNet(t, css, N)
	defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)

	// wait till everyone makes the first new block
	timeoutWaitGroup(t, N, func(j int) {
		<-blocksSubs[j].Out()
	}, css)
}

// Ensure we can process blocks with evidence
func TestReactorWithEvidence(t *testing.T) {
	types.RegisterMockEvidences(cdc)
	types.RegisterMockEvidences(types.GetCodec())

	nValidators := 4
	testName := "consensus_reactor_test"
	tickerFunc := newMockTickerFunc(true)
	appFunc := newCounter

	// heed the advice from https://www.sandimetz.com/blog/2016/1/20/the-wrong-abstraction
	// to unroll unwieldy abstractions. Here we duplicate the code from:
	// css := randConsensusNet(N, "consensus_reactor_test", newMockTickerFunc(true), newCounter)
	genDoc, privVals := randGenesisDoc(nValidators, false, 30)
	css := make([]*ConsensusState, nValidators)
	logger := consensusLogger()
	for i := 0; i < nValidators; i++ {
		stateDB := dbm.NewMemDB() // each state needs its own db
		state, _ := sm.LoadStateFromDBOrGenesisDoc(stateDB, genDoc)
		thisConfig := ResetConfig(fmt.Sprintf("%s_%d", testName, i))
		defer os.RemoveAll(thisConfig.RootDir)
		ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
		app := appFunc()
		vals := types.TM2PB.ValidatorUpdates(state.Validators)
		app.InitChain(abci.RequestInitChain{Validators: vals})

		pv := privVals[i]
		// duplicate code from:
		// css[i] = newConsensusStateWithConfig(thisConfig, state, privVals[i], app)
		blockDB := dbm.NewMemDB()
		blockStore := store.NewBlockStore(blockDB)

		// one for mempool, one for consensus
		mtx := new(sync.Mutex)
		proxyAppConnMem := abcicli.NewLocalClient(mtx, app)
		proxyAppConnCon := abcicli.NewLocalClient(mtx, app)

		// Make Mempool
		mempool := mempl.NewCListMempool(thisConfig.Mempool, proxyAppConnMem, 0)
		mempool.SetLogger(log.TestingLogger().With("module", "mempool"))
		if thisConfig.Consensus.WaitForTxs() {
			mempool.EnableTxsAvailable()
		}

		// mock the evidence pool
		// everyone includes evidence of another double signing
		vIdx := (i + 1) % nValidators
		addr := privVals[vIdx].GetPubKey().Address()
		evpool := newMockEvidencePool(addr)

		// Make ConsensusState
		blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyAppConnCon, mempool, evpool)
		cs := NewConsensusState(thisConfig.Consensus, state, blockExec, blockStore, mempool, evpool)
		cs.SetLogger(log.TestingLogger().With("module", "consensus"))
		cs.SetPrivValidator(pv)

		eventBus := types.NewEventBus()
		eventBus.SetLogger(log.TestingLogger().With("module", "events"))
		eventBus.Start()
		cs.SetEventBus(eventBus)

		cs.SetTimeoutTicker(tickerFunc())
		cs.SetLogger(logger.With("validator", i, "module", "consensus"))

		css[i] = cs
	}

	reactors, blocksSubs, eventBuses := startConsensusNet(t, css, nValidators)
	defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)

	// wait till everyone makes the first new block with no evidence
	timeoutWaitGroup(t, nValidators, func(j int) {
		msg := <-blocksSubs[j].Out()
		block := msg.Data().(types.EventDataNewBlock).Block
		assert.True(t, len(block.Evidence.Evidence) == 0)
	}, css)

	// second block should have evidence
	timeoutWaitGroup(t, nValidators, func(j int) {
		msg := <-blocksSubs[j].Out()
		block := msg.Data().(types.EventDataNewBlock).Block
		assert.True(t, len(block.Evidence.Evidence) > 0)
	}, css)
}

// mock evidence pool returns no evidence for block 1,
// and returns one piece for all higher blocks. The one piece
// is for a given validator at block 1.
type mockEvidencePool struct {
	height int
	ev     []types.Evidence
}

func newMockEvidencePool(val []byte) *mockEvidencePool {
	return &mockEvidencePool{
		ev: []types.Evidence{types.NewMockGoodEvidence(1, 1, val)},
	}
}

// NOTE: maxBytes is ignored
func (m *mockEvidencePool) PendingEvidence(maxBytes int64) []types.Evidence {
	if m.height > 0 {
		return m.ev
	}
	return nil
}

func (m *mockEvidencePool) AddEvidence(types.Evidence) error { return nil }

func (m *mockEvidencePool) Update(block *types.Block, state sm.State) {
	if m.height > 0 {
		if len(block.Evidence.Evidence) == 0 {
			panic("block has no evidence")
		}
	}
	m.height++
}

func (m *mockEvidencePool) IsCommitted(types.Evidence) bool { return false }

//------------------------------------

// Ensure a testnet makes blocks when there are txs
func TestReactorCreatesBlockWhenEmptyBlocksFalse(t *testing.T) {
	N := 4
	css, cleanup := randConsensusNet(N, "consensus_reactor_test", newMockTickerFunc(true), newCounter,
		func(c *cfg.Config) {
			c.Consensus.CreateEmptyBlocks = false
		})
	defer cleanup()
	reactors, blocksSubs, eventBuses := startConsensusNet(t, css, N)
	defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)

	// send a tx
	if err := assertMempool(css[3].txNotifier).CheckTx([]byte{1, 2, 3}, nil); err != nil {
		t.Error(err)
	}

	// wait till everyone makes the first new block
	timeoutWaitGroup(t, N, func(j int) {
		<-blocksSubs[j].Out()
	}, css)
}

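// Ensure Receive does not panic when InitPeer has been called but AddPeer has not.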
func TestReactorReceiveDoesNotPanicIfAddPeerHasntBeenCalledYet(t *testing.T) {
	N := 1
	css, cleanup := randConsensusNet(N, "consensus_reactor_test", newMockTickerFunc(true), newCounter)
	defer cleanup()
	reactors, _, eventBuses := startConsensusNet(t, css, N)
	defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)

	var (
		reactor = reactors[0]
		peer    = mock.NewPeer(nil)
		msg     = cdc.MustMarshalBinaryBare(&HasVoteMessage{Height: 1, Round: 1, Index: 1, Type: types.PrevoteType})
	)

	reactor.InitPeer(peer)

	// simulate switch calling Receive before AddPeer
	assert.NotPanics(t, func() {
		reactor.Receive(StateChannel, peer, msg)
		reactor.AddPeer(peer)
	})
}

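// Ensure Receive panics when InitPeer has not been called first.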
func TestReactorReceivePanicsIfInitPeerHasntBeenCalledYet(t *testing.T) {
	N := 1
	css, cleanup := randConsensusNet(N, "consensus_reactor_test", newMockTickerFunc(true), newCounter)
	defer cleanup()
	reactors, _, eventBuses := startConsensusNet(t, css, N)
	defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)

	var (
		reactor = reactors[0]
		peer    = mock.NewPeer(nil)
		msg     = cdc.MustMarshalBinaryBare(&HasVoteMessage{Height: 1, Round: 1, Index: 1, Type: types.PrevoteType})
	)

	// we should call InitPeer here

	// simulate switch calling Receive before AddPeer
	assert.Panics(t, func() {
		reactor.Receive(StateChannel, peer, msg)
	})
}

// Test we record stats about votes and block parts from other peers.
func TestReactorRecordsVotesAndBlockParts(t *testing.T) {
	N := 4
	css, cleanup := randConsensusNet(N, "consensus_reactor_test", newMockTickerFunc(true), newCounter)
	defer cleanup()
	reactors, blocksSubs, eventBuses := startConsensusNet(t, css, N)
	defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)

	// wait till everyone makes the first new block
	timeoutWaitGroup(t, N, func(j int) {
		<-blocksSubs[j].Out()
	}, css)

	// Get peer
	peer := reactors[1].Switch.Peers().List()[0]
	// Get peer state
	ps := peer.Get(types.PeerStateKey).(*PeerState)

	assert.Equal(t, true, ps.VotesSent() > 0, "number of votes sent should have increased")
	assert.Equal(t, true, ps.BlockPartsSent() > 0, "number of block parts sent should have increased")
}

//-------------------------------------------------------------
// ensure we can make blocks despite cycling a validator set
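
// Ensure blocks keep being produced while the voting power of a single
// validator is changed several times.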
func TestReactorVotingPowerChange(t *testing.T) {
	nVals := 4
	logger := log.TestingLogger()
	css, cleanup := randConsensusNet(nVals, "consensus_voting_power_changes_test", newMockTickerFunc(true), newPersistentKVStore)
	defer cleanup()
	reactors, blocksSubs, eventBuses := startConsensusNet(t, css, nVals)
	defer stopConsensusNet(logger, reactors, eventBuses)

	// map of active validators
	activeVals := make(map[string]struct{})
	for i := 0; i < nVals; i++ {
		addr := css[i].privValidator.GetPubKey().Address()
		activeVals[string(addr)] = struct{}{}
	}

	// wait till everyone makes block 1
	timeoutWaitGroup(t, nVals, func(j int) {
		<-blocksSubs[j].Out()
	}, css)

	//---------------------------------------------------------------------------
	logger.Debug("---------------------------- Testing changing the voting power of one validator a few times")

	val1PubKey := css[0].privValidator.GetPubKey()
	val1PubKeyABCI := types.TM2PB.PubKey(val1PubKey)
	updateValidatorTx := kvstore.MakeValSetChangeTx(val1PubKeyABCI, 25)
	previousTotalVotingPower := css[0].GetRoundState().LastValidators.TotalVotingPower()

	waitForAndValidateBlock(t, nVals, activeVals, blocksSubs, css, updateValidatorTx)
	waitForAndValidateBlockWithTx(t, nVals, activeVals, blocksSubs, css, updateValidatorTx)
	waitForAndValidateBlock(t, nVals, activeVals, blocksSubs, css)
	waitForAndValidateBlock(t, nVals, activeVals, blocksSubs, css)

	if css[0].GetRoundState().LastValidators.TotalVotingPower() == previousTotalVotingPower {
		t.Fatalf("expected voting power to change (before: %d, after: %d)", previousTotalVotingPower, css[0].GetRoundState().LastValidators.TotalVotingPower())
	}

	updateValidatorTx = kvstore.MakeValSetChangeTx(val1PubKeyABCI, 2)
	previousTotalVotingPower = css[0].GetRoundState().LastValidators.TotalVotingPower()

	waitForAndValidateBlock(t, nVals, activeVals, blocksSubs, css, updateValidatorTx)
	waitForAndValidateBlockWithTx(t, nVals, activeVals, blocksSubs, css, updateValidatorTx)
	waitForAndValidateBlock(t, nVals, activeVals, blocksSubs, css)
	waitForAndValidateBlock(t, nVals, activeVals, blocksSubs, css)

	if css[0].GetRoundState().LastValidators.TotalVotingPower() == previousTotalVotingPower {
		t.Fatalf("expected voting power to change (before: %d, after: %d)", previousTotalVotingPower, css[0].GetRoundState().LastValidators.TotalVotingPower())
	}

	updateValidatorTx = kvstore.MakeValSetChangeTx(val1PubKeyABCI, 26)
	previousTotalVotingPower = css[0].GetRoundState().LastValidators.TotalVotingPower()

	waitForAndValidateBlock(t, nVals, activeVals, blocksSubs, css, updateValidatorTx)
	waitForAndValidateBlockWithTx(t, nVals, activeVals, blocksSubs, css, updateValidatorTx)
	waitForAndValidateBlock(t, nVals, activeVals, blocksSubs, css)
	waitForAndValidateBlock(t, nVals, activeVals, blocksSubs, css)

	if css[0].GetRoundState().LastValidators.TotalVotingPower() == previousTotalVotingPower {
		t.Fatalf("expected voting power to change (before: %d, after: %d)", previousTotalVotingPower, css[0].GetRoundState().LastValidators.TotalVotingPower())
	}
}

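// Ensure blocks keep being produced while validators are added, have their
// voting power updated, and are removed again.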
func TestReactorValidatorSetChanges(t *testing.T) {
	nPeers := 7
	nVals := 4
	css, _, _, cleanup := randConsensusNetWithPeers(nVals, nPeers, "consensus_val_set_changes_test", newMockTickerFunc(true), newPersistentKVStoreWithPath)
	defer cleanup()
	logger := log.TestingLogger()

	reactors, blocksSubs, eventBuses := startConsensusNet(t, css, nPeers)
	defer stopConsensusNet(logger, reactors, eventBuses)

	// map of active validators
	activeVals := make(map[string]struct{})
	for i := 0; i < nVals; i++ {
		addr := css[i].privValidator.GetPubKey().Address()
		activeVals[string(addr)] = struct{}{}
	}

	// wait till everyone makes block 1
	timeoutWaitGroup(t, nPeers, func(j int) {
		<-blocksSubs[j].Out()
	}, css)

	//---------------------------------------------------------------------------
	logger.Info("---------------------------- Testing adding one validator")

	newValidatorPubKey1 := css[nVals].privValidator.GetPubKey()
	valPubKey1ABCI := types.TM2PB.PubKey(newValidatorPubKey1)
	newValidatorTx1 := kvstore.MakeValSetChangeTx(valPubKey1ABCI, testMinPower)

	// wait till everyone makes block 2
	// ensure the commit includes all validators
	// send newValTx to change vals in block 3
	waitForAndValidateBlock(t, nPeers, activeVals, blocksSubs, css, newValidatorTx1)

	// wait till everyone makes block 3.
	// it includes the commit for block 2, which is by the original validator set
	waitForAndValidateBlockWithTx(t, nPeers, activeVals, blocksSubs, css, newValidatorTx1)

	// wait till everyone makes block 4.
	// it includes the commit for block 3, which is by the original validator set
	waitForAndValidateBlock(t, nPeers, activeVals, blocksSubs, css)

	// the commits for block 4 should be with the updated validator set
	activeVals[string(newValidatorPubKey1.Address())] = struct{}{}

	// wait till everyone makes block 5
	// it includes the commit for block 4, which should have the updated validator set
	waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, blocksSubs, css)

	//---------------------------------------------------------------------------
	logger.Info("---------------------------- Testing changing the voting power of one validator")

	updateValidatorPubKey1 := css[nVals].privValidator.GetPubKey()
	updatePubKey1ABCI := types.TM2PB.PubKey(updateValidatorPubKey1)
	updateValidatorTx1 := kvstore.MakeValSetChangeTx(updatePubKey1ABCI, 25)
	previousTotalVotingPower := css[nVals].GetRoundState().LastValidators.TotalVotingPower()

	waitForAndValidateBlock(t, nPeers, activeVals, blocksSubs, css, updateValidatorTx1)
	waitForAndValidateBlockWithTx(t, nPeers, activeVals, blocksSubs, css, updateValidatorTx1)
	waitForAndValidateBlock(t, nPeers, activeVals, blocksSubs, css)
	waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, blocksSubs, css)

	if css[nVals].GetRoundState().LastValidators.TotalVotingPower() == previousTotalVotingPower {
		t.Errorf("expected voting power to change (before: %d, after: %d)", previousTotalVotingPower, css[nVals].GetRoundState().LastValidators.TotalVotingPower())
	}

	//---------------------------------------------------------------------------
	logger.Info("---------------------------- Testing adding two validators at once")

	newValidatorPubKey2 := css[nVals+1].privValidator.GetPubKey()
	newVal2ABCI := types.TM2PB.PubKey(newValidatorPubKey2)
	newValidatorTx2 := kvstore.MakeValSetChangeTx(newVal2ABCI, testMinPower)

	newValidatorPubKey3 := css[nVals+2].privValidator.GetPubKey()
	newVal3ABCI := types.TM2PB.PubKey(newValidatorPubKey3)
	newValidatorTx3 := kvstore.MakeValSetChangeTx(newVal3ABCI, testMinPower)

	waitForAndValidateBlock(t, nPeers, activeVals, blocksSubs, css, newValidatorTx2, newValidatorTx3)
	waitForAndValidateBlockWithTx(t, nPeers, activeVals, blocksSubs, css, newValidatorTx2, newValidatorTx3)
	waitForAndValidateBlock(t, nPeers, activeVals, blocksSubs, css)
	activeVals[string(newValidatorPubKey2.Address())] = struct{}{}
	activeVals[string(newValidatorPubKey3.Address())] = struct{}{}
	waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, blocksSubs, css)

	//---------------------------------------------------------------------------
	logger.Info("---------------------------- Testing removing two validators at once")

	removeValidatorTx2 := kvstore.MakeValSetChangeTx(newVal2ABCI, 0)
	removeValidatorTx3 := kvstore.MakeValSetChangeTx(newVal3ABCI, 0)

	waitForAndValidateBlock(t, nPeers, activeVals, blocksSubs, css, removeValidatorTx2, removeValidatorTx3)
	waitForAndValidateBlockWithTx(t, nPeers, activeVals, blocksSubs, css, removeValidatorTx2, removeValidatorTx3)
	waitForAndValidateBlock(t, nPeers, activeVals, blocksSubs, css)
	delete(activeVals, string(newValidatorPubKey2.Address()))
	delete(activeVals, string(newValidatorPubKey3.Address()))
	waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, blocksSubs, css)
}

// Check we can make blocks with skip_timeout_commit=false
func TestReactorWithTimeoutCommit(t *testing.T) {
	N := 4
	css, cleanup := randConsensusNet(N, "consensus_reactor_with_timeout_commit_test", newMockTickerFunc(false), newCounter)
	defer cleanup()
	// override default SkipTimeoutCommit == true for tests
	for i := 0; i < N; i++ {
		css[i].config.SkipTimeoutCommit = false
	}

	reactors, blocksSubs, eventBuses := startConsensusNet(t, css, N-1)
	defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)

	// wait till everyone makes the first new block
	timeoutWaitGroup(t, N-1, func(j int) {
		<-blocksSubs[j].Out()
	}, css)
}

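// waitForAndValidateBlock waits for the next block on every subscription,
// validates it against the set of active validators, and submits the given
// txs to each node's mempool.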
func waitForAndValidateBlock(
	t *testing.T,
	n int,
	activeVals map[string]struct{},
	blocksSubs []types.Subscription,
	css []*ConsensusState,
	txs ...[]byte,
) {
	timeoutWaitGroup(t, n, func(j int) {
		css[j].Logger.Debug("waitForAndValidateBlock")
		msg := <-blocksSubs[j].Out()
		newBlock := msg.Data().(types.EventDataNewBlock).Block
		css[j].Logger.Debug("waitForAndValidateBlock: Got block", "height", newBlock.Height)
		err := validateBlock(newBlock, activeVals)
		assert.Nil(t, err)
		for _, tx := range txs {
			err := assertMempool(css[j].txNotifier).CheckTx(tx, nil)
			assert.Nil(t, err)
		}
	}, css)
}

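// waitForAndValidateBlockWithTx validates blocks on every subscription until
// all of the given txs have been seen, in order, possibly spread over
// multiple blocks.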
func waitForAndValidateBlockWithTx(
	t *testing.T,
	n int,
	activeVals map[string]struct{},
	blocksSubs []types.Subscription,
	css []*ConsensusState,
	txs ...[]byte,
) {
	timeoutWaitGroup(t, n, func(j int) {
		ntxs := 0
	BLOCK_TX_LOOP:
		for {
			css[j].Logger.Debug("waitForAndValidateBlockWithTx", "ntxs", ntxs)
			msg := <-blocksSubs[j].Out()
			newBlock := msg.Data().(types.EventDataNewBlock).Block
			css[j].Logger.Debug("waitForAndValidateBlockWithTx: Got block", "height", newBlock.Height)
			err := validateBlock(newBlock, activeVals)
			assert.Nil(t, err)

			// check that txs match the txs we're waiting for.
			// note they could be spread over multiple blocks,
			// but they should be in order.
			for _, tx := range newBlock.Data.Txs {
				assert.EqualValues(t, txs[ntxs], tx)
				ntxs++
			}

			if ntxs == len(txs) {
				break BLOCK_TX_LOOP
			}
		}
	}, css)
}

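// waitForBlockWithUpdatedValsAndValidateIt reads new blocks on every
// subscription until it sees one produced by the updated validator set.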
func waitForBlockWithUpdatedValsAndValidateIt (
t * testing . T ,
n int ,
updatedVals map [ string ] struct { } ,
blocksSubs [ ] types . Subscription ,
css [ ] * ConsensusState ,
) {
2018-01-19 00:14:35 -05:00
timeoutWaitGroup ( t , n , func ( j int ) {
		var newBlock *types.Block
	LOOP:
		for {
			css[j].Logger.Debug("waitForBlockWithUpdatedValsAndValidateIt")
			msg := <-blocksSubs[j].Out()
			newBlock = msg.Data().(types.EventDataNewBlock).Block
			if newBlock.LastCommit.Size() == len(updatedVals) {
				css[j].Logger.Debug("waitForBlockWithUpdatedValsAndValidateIt: Got block", "height", newBlock.Height)
				break LOOP
			} else {
				css[j].Logger.Debug("waitForBlockWithUpdatedValsAndValidateIt: Got block with no new validators. Skipping", "height", newBlock.Height)
			}
		}
		err := validateBlock(newBlock, updatedVals)
		assert.Nil(t, err)
	}, css)
}
// validateBlock checks that the block's LastCommit matches the given set of
// active validators: the commit must have exactly their size and contain only
// their votes. Expects high synchrony!
func validateBlock(block *types.Block, activeVals map[string]struct{}) error {
	if block.LastCommit.Size() != len(activeVals) {
		return fmt.Errorf("Commit size doesn't match number of active validators. Got %d, expected %d", block.LastCommit.Size(), len(activeVals))
	}

	for _, vote := range block.LastCommit.Precommits {
		if _, ok := activeVals[string(vote.ValidatorAddress)]; !ok {
			return fmt.Errorf("Found vote for inactive validator %X", vote.ValidatorAddress)
		}
	}
	return nil
}
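
// timeoutWaitGroup runs f(i) for each of the n validators concurrently and
// waits for all of them to return. On timeout it logs every validator's
// round state, dumps goroutine profiles and stack traces, and panics.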
func timeoutWaitGroup(t *testing.T, n int, f func(int), css []*ConsensusState) {
	wg := new(sync.WaitGroup)
	wg.Add(n)
	for i := 0; i < n; i++ {
		go func(j int) {
			f(j)
			wg.Done()
		}(i)
	}

	done := make(chan struct{})
	go func() {
		wg.Wait()
		close(done)
	}()

	// We're running many nodes in-process, possibly in a virtual machine,
	// and spewing debug messages - making a block could take a while.
	timeout := time.Second * 300

	select {
	case <-done:
	case <-time.After(timeout):
		for i, cs := range css {
			t.Log("#################")
			t.Log("Validator", i)
			t.Log(cs.GetRoundState())
			t.Log("")
		}
		os.Stdout.Write([]byte("pprof.Lookup('goroutine'):\n"))
		pprof.Lookup("goroutine").WriteTo(os.Stdout, 1)
		capture()
		panic("Timed out waiting for all validators to commit a block")
	}
}
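
// capture prints a stack trace of all current goroutines to stdout.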
func capture() {
	trace := make([]byte, 10240000)
	count := runtime.Stack(trace, true)
	fmt.Printf("Stack of %d bytes: %s\n", count, trace)
}

//-------------------------------------------------------------
// Ensure basic validation of structs is functioning

func TestNewRoundStepMessageValidateBasic(t *testing.T) {
	testCases := []struct {
		testName               string
		messageHeight          int64
		messageRound           int
		messageStep            cstypes.RoundStepType
		messageLastCommitRound int
		expectErr              bool
	}{
		{"Valid Message", 0, 0, 0x01, 1, false},
		{"Invalid Message", -1, 0, 0x01, 1, true},
		{"Invalid Message", 0, -1, 0x01, 1, true},
		{"Invalid Message", 0, 0, 0x00, 1, true},
		{"Invalid Message", 0, 0, 0x00, 0, true},
		{"Invalid Message", 1, 0, 0x01, 0, true},
	}
	for _, tc := range testCases {
		tc := tc
		t.Run(tc.testName, func(t *testing.T) {
			message := NewRoundStepMessage{
				Height:          tc.messageHeight,
				Round:           tc.messageRound,
				Step:            tc.messageStep,
				LastCommitRound: tc.messageLastCommitRound,
			}
			assert.Equal(t, tc.expectErr, message.ValidateBasic() != nil, "Validate Basic had an unexpected result")
		})
	}
}

func TestNewValidBlockMessageValidateBasic(t *testing.T) {
	testBitArray := cmn.NewBitArray(1)
	testCases := []struct {
		testName          string
		messageHeight     int64
		messageRound      int
		messageBlockParts *cmn.BitArray
		expectErr         bool
	}{
		{"Valid Message", 0, 0, testBitArray, false},
		{"Invalid Message", -1, 0, testBitArray, true},
		{"Invalid Message", 0, -1, testBitArray, true},
		{"Invalid Message", 0, 0, cmn.NewBitArray(0), true},
	}
	for _, tc := range testCases {
		tc := tc
		t.Run(tc.testName, func(t *testing.T) {
			message := NewValidBlockMessage{
				Height:     tc.messageHeight,
				Round:      tc.messageRound,
				BlockParts: tc.messageBlockParts,
			}
			message.BlockPartsHeader.Total = 1
			assert.Equal(t, tc.expectErr, message.ValidateBasic() != nil, "Validate Basic had an unexpected result")
		})
	}
}

func TestProposalPOLMessageValidateBasic(t *testing.T) {
	testBitArray := cmn.NewBitArray(1)
	testCases := []struct {
		testName                string
		messageHeight           int64
		messageProposalPOLRound int
		messageProposalPOL      *cmn.BitArray
		expectErr               bool
	}{
		{"Valid Message", 0, 0, testBitArray, false},
		{"Invalid Message", -1, 0, testBitArray, true},
		{"Invalid Message", 0, -1, testBitArray, true},
		{"Invalid Message", 0, 0, cmn.NewBitArray(0), true},
	}
	for _, tc := range testCases {
		tc := tc
		t.Run(tc.testName, func(t *testing.T) {
			message := ProposalPOLMessage{
				Height:           tc.messageHeight,
				ProposalPOLRound: tc.messageProposalPOLRound,
				ProposalPOL:      tc.messageProposalPOL,
			}
			assert.Equal(t, tc.expectErr, message.ValidateBasic() != nil, "Validate Basic had an unexpected result")
		})
	}
}

func TestBlockPartMessageValidateBasic(t *testing.T) {
	testPart := new(types.Part)
	testCases := []struct {
		testName      string
		messageHeight int64
		messageRound  int
		messagePart   *types.Part
		expectErr     bool
	}{
		{"Valid Message", 0, 0, testPart, false},
		{"Invalid Message", -1, 0, testPart, true},
		{"Invalid Message", 0, -1, testPart, true},
	}
	for _, tc := range testCases {
		tc := tc
		t.Run(tc.testName, func(t *testing.T) {
			message := BlockPartMessage{
				Height: tc.messageHeight,
				Round:  tc.messageRound,
				Part:   tc.messagePart,
			}
			assert.Equal(t, tc.expectErr, message.ValidateBasic() != nil, "Validate Basic had an unexpected result")
		})
	}

	message := BlockPartMessage{Height: 0, Round: 0, Part: new(types.Part)}
	message.Part.Index = -1
	assert.Equal(t, true, message.ValidateBasic() != nil, "Validate Basic had an unexpected result")
}

func TestHasVoteMessageValidateBasic(t *testing.T) {
	const (
		validSignedMsgType   types.SignedMsgType = 0x01
		invalidSignedMsgType types.SignedMsgType = 0x03
	)
	testCases := []struct {
		testName      string
		messageHeight int64
		messageRound  int
		messageType   types.SignedMsgType
		messageIndex  int
		expectErr     bool
	}{
		{"Valid Message", 0, 0, validSignedMsgType, 0, false},
		{"Invalid Message", -1, 0, validSignedMsgType, 0, true},
		{"Invalid Message", 0, -1, validSignedMsgType, 0, true},
		{"Invalid Message", 0, 0, invalidSignedMsgType, 0, true},
		{"Invalid Message", 0, 0, validSignedMsgType, -1, true},
	}
	for _, tc := range testCases {
		tc := tc
		t.Run(tc.testName, func(t *testing.T) {
			message := HasVoteMessage{
				Height: tc.messageHeight,
				Round:  tc.messageRound,
				Type:   tc.messageType,
				Index:  tc.messageIndex,
			}
			assert.Equal(t, tc.expectErr, message.ValidateBasic() != nil, "Validate Basic had an unexpected result")
		})
	}
}

func TestVoteSetMaj23MessageValidateBasic(t *testing.T) {
	const (
		validSignedMsgType   types.SignedMsgType = 0x01
		invalidSignedMsgType types.SignedMsgType = 0x03
	)
	validBlockID := types.BlockID{}
	invalidBlockID := types.BlockID{
		Hash: cmn.HexBytes{},
		PartsHeader: types.PartSetHeader{
			Total: -1,
			Hash:  cmn.HexBytes{},
		},
	}
	testCases := []struct {
		testName       string
		messageHeight  int64
		messageRound   int
		messageType    types.SignedMsgType
		messageBlockID types.BlockID
		expectErr      bool
	}{
		{"Valid Message", 0, 0, validSignedMsgType, validBlockID, false},
		{"Invalid Message", -1, 0, validSignedMsgType, validBlockID, true},
		{"Invalid Message", 0, -1, validSignedMsgType, validBlockID, true},
		{"Invalid Message", 0, 0, invalidSignedMsgType, validBlockID, true},
		{"Invalid Message", 0, 0, validSignedMsgType, invalidBlockID, true},
	}
	for _, tc := range testCases {
		tc := tc
		t.Run(tc.testName, func(t *testing.T) {
			message := VoteSetMaj23Message{
				Height:  tc.messageHeight,
				Round:   tc.messageRound,
				Type:    tc.messageType,
				BlockID: tc.messageBlockID,
			}
			assert.Equal(t, tc.expectErr, message.ValidateBasic() != nil, "Validate Basic had an unexpected result")
		})
	}
}

func TestVoteSetBitsMessageValidateBasic(t *testing.T) {
	const (
		validSignedMsgType   types.SignedMsgType = 0x01
		invalidSignedMsgType types.SignedMsgType = 0x03
	)
	validBlockID := types.BlockID{}
	invalidBlockID := types.BlockID{
		Hash: cmn.HexBytes{},
		PartsHeader: types.PartSetHeader{
			Total: -1,
			Hash:  cmn.HexBytes{},
		},
	}
	testBitArray := cmn.NewBitArray(1)
	testCases := []struct {
		testName       string
		messageHeight  int64
		messageRound   int
		messageType    types.SignedMsgType
		messageBlockID types.BlockID
		messageVotes   *cmn.BitArray
		expectErr      bool
	}{
		{"Valid Message", 0, 0, validSignedMsgType, validBlockID, testBitArray, false},
		{"Invalid Message", -1, 0, validSignedMsgType, validBlockID, testBitArray, true},
		{"Invalid Message", 0, -1, validSignedMsgType, validBlockID, testBitArray, true},
		{"Invalid Message", 0, 0, invalidSignedMsgType, validBlockID, testBitArray, true},
		{"Invalid Message", 0, 0, validSignedMsgType, invalidBlockID, testBitArray, true},
	}
	for _, tc := range testCases {
		tc := tc
		t.Run(tc.testName, func(t *testing.T) {
			message := VoteSetBitsMessage{
				Height: tc.messageHeight,
				Round:  tc.messageRound,
				Type:   tc.messageType,
				// Votes: tc.messageVotes,
				BlockID: tc.messageBlockID,
			}
			assert.Equal(t, tc.expectErr, message.ValidateBasic() != nil, "Validate Basic had an unexpected result")
		})
	}
}