Merge pull request #3288 from tendermint/release/v0.30.0
Release/v0.30.0
CHANGELOG.md (49 changed lines)

@@ -1,5 +1,52 @@
# Changelog

## v0.30.0

*February 8th, 2019*

This release fixes yet another issue with the proposer selection algorithm.
We hope it's the last one, but we won't be surprised if it's not.
We plan to one day expose the selection algorithm more directly to
the application ([\#3285](https://github.com/tendermint/tendermint/issues/3285)), and even to support randomness ([\#763](https://github.com/tendermint/tendermint/issues/763)).
For more, see issues marked
[proposer-selection](https://github.com/tendermint/tendermint/labels/proposer-selection).

This release also includes a fix to prevent Tendermint from including the same
piece of evidence in more than one block. This issue was reported by @chengwenxi in our
[bug bounty program](https://hackerone.com/tendermint).

### BREAKING CHANGES:

* Apps
  - [state] [\#3222](https://github.com/tendermint/tendermint/issues/3222)
    Duplicate updates for the same validator are forbidden. Apps must ensure
    that a given `ResponseEndBlock.ValidatorUpdates` contains only one entry per pubkey.

* Go API
  - [types] [\#3222](https://github.com/tendermint/tendermint/issues/3222)
    Remove `Add` and `Update` methods from `ValidatorSet` in favor of the new
    `UpdateWithChangeSet`. This allows updates to be applied as a set, instead of
    one at a time (see the sketch after this list).

* Block Protocol
  - [state] [\#3286](https://github.com/tendermint/tendermint/issues/3286) Blocks that include already committed evidence are invalid.

* P2P Protocol
  - [consensus] [\#3222](https://github.com/tendermint/tendermint/issues/3222)
    Validator updates are applied as a set, instead of one at a time, thus
    impacting the proposer priority calculation. This ensures that the proposer
    selection algorithm does not depend on the order of updates in
    `ResponseEndBlock.ValidatorUpdates`.
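For Go users of `ValidatorSet`, a minimal migration sketch (illustrative only, not part of the changelog; the keys and voting powers below are made up, only the API shapes come from this release):

```go
// Illustrative migration from the removed Add/Update methods to
// UpdateWithChangeSet. A voting power of 0 marks a removal.
package main

import (
    "fmt"

    "github.com/tendermint/tendermint/crypto/ed25519"
    "github.com/tendermint/tendermint/types"
)

func main() {
    pubKeyA := ed25519.GenPrivKey().PubKey()
    pubKeyB := ed25519.GenPrivKey().PubKey()

    vals := types.NewValidatorSet([]*types.Validator{
        types.NewValidator(pubKeyA, 10),
    })

    // Before v0.30.0 (methods removed in this release):
    //   vals.Add(types.NewValidator(pubKeyB, 5))
    //   vals.Update(types.NewValidator(pubKeyA, 20))

    // v0.30.0: apply all changes as one set.
    changes := []*types.Validator{
        types.NewValidator(pubKeyA, 20), // update an existing validator
        types.NewValidator(pubKeyB, 5),  // add a new validator
    }
    if err := vals.UpdateWithChangeSet(changes); err != nil {
        fmt.Println("invalid change set:", err)
    }
}
```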
### IMPROVEMENTS:
- [crypto] [\#3279](https://github.com/tendermint/tendermint/issues/3279) Use `btcec.S256().N` directly instead of hard coding a copy.

### BUG FIXES:
- [state] [\#3222](https://github.com/tendermint/tendermint/issues/3222) Fix validator set updates so they are applied as a set, rather
  than one at a time. This makes the proposer selection algorithm independent of
  the order of updates in `ResponseEndBlock.ValidatorUpdates`.
- [evidence] [\#3286](https://github.com/tendermint/tendermint/issues/3286) Don't add committed evidence to evidence pool.

## v0.29.2

*February 7th, 2019*
@@ -11,7 +58,7 @@ Special thanks to external contributors on this release:
`crypto` packages:
- p2p:
  - Partial fix for MITM attacks on the p2p connection. MITM conditions may
    still exist. See \#3010.
    still exist. See [\#3010](https://github.com/tendermint/tendermint/issues/3010).
- crypto:
  - Eliminate our fork of `btcd` and use the `btcd/btcec` library directly for
    native secp256k1 signing. Note we still modify the signature encoding to
@@ -1,4 +1,4 @@
## v0.30
## v0.31.0

**

@@ -6,6 +6,16 @@ Special thanks to external contributors on this release:

### BREAKING CHANGES:

* CLI/RPC/Config

* Apps

* Go API

* Blockchain Protocol

* P2P Protocol

### FEATURES:

### IMPROVEMENTS:
UPGRADING.md (23 changed lines)
@@ -3,6 +3,29 @@
This guide provides steps to be followed when you upgrade your applications to
a newer version of Tendermint Core.

## v0.30.0

This release contains a breaking change to both the block and p2p protocols;
however, it may be compatible with blockchains created with v0.29.0, depending on
the chain history. If your blockchain has not included any pieces of evidence,
or no piece of evidence has been included in more than one block,
and if your application has never returned multiple updates
for the same validator in a single block, then v0.30.0 will work fine with
blockchains created with v0.29.0.

The p2p protocol change is to fix the proposer selection algorithm again.
Note that proposer selection is purely a p2p concern right
now, since the algorithm is only relevant during real-time consensus.
This change is thus compatible with v0.29.0, but
all nodes must be upgraded to avoid disagreements on the proposer.

### Applications

Applications must ensure they do not return duplicates in
`ResponseEndBlock.ValidatorUpdates`. A pubkey must only appear once per set of
updates. Duplicates will cause irrecoverable failure. If you have a very good
reason why we shouldn't do this, please open an issue.
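A hedged sketch of one way an ABCI application could guard against this before returning EndBlock. The keep-the-last-update policy and the map key format are illustrative assumptions, not something prescribed by Tendermint:

```go
// Defensive de-duplication of validator updates before they are placed in
// ResponseEndBlock.ValidatorUpdates.
package app

import (
    abci "github.com/tendermint/tendermint/abci/types"
)

// dedupeValidatorUpdates keeps only the last update seen for each pubkey,
// preserving the order in which pubkeys first appeared.
func dedupeValidatorUpdates(updates []abci.ValidatorUpdate) []abci.ValidatorUpdate {
    lastByKey := make(map[string]abci.ValidatorUpdate)
    var order []string
    for _, u := range updates {
        key := u.PubKey.Type + "/" + string(u.PubKey.Data)
        if _, seen := lastByKey[key]; !seen {
            order = append(order, key)
        }
        lastByKey[key] = u // a later update for the same pubkey wins
    }
    deduped := make([]abci.ValidatorUpdate, 0, len(order))
    for _, key := range order {
        deduped = append(deduped, lastByKey[key])
    }
    return deduped
}
```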
## v0.29.0

This release contains some breaking changes to the block and p2p protocols,
@@ -95,13 +95,13 @@ func newBlockchainReactor(logger log.Logger, genDoc *types.GenesisDoc, privVals

    // let's add some blocks in
    for blockHeight := int64(1); blockHeight <= maxBlockHeight; blockHeight++ {
        lastCommit := &types.Commit{}
        lastCommit := types.NewCommit(types.BlockID{}, nil)
        if blockHeight > 1 {
            lastBlockMeta := blockStore.LoadBlockMeta(blockHeight - 1)
            lastBlock := blockStore.LoadBlock(blockHeight - 1)

            vote := makeVote(&lastBlock.Header, lastBlockMeta.BlockID, state.Validators, privVals[0]).CommitSig()
            lastCommit = &types.Commit{Precommits: []*types.CommitSig{vote}, BlockID: lastBlockMeta.BlockID}
            lastCommit = types.NewCommit(lastBlockMeta.BlockID, []*types.CommitSig{vote})
        }

        thisBlock := makeBlock(blockHeight, state, lastCommit)
@@ -23,11 +23,8 @@ import (

// make a Commit with a single vote containing just the height and a timestamp
func makeTestCommit(height int64, timestamp time.Time) *types.Commit {
    return &types.Commit{
        Precommits: []*types.CommitSig{
            {Height: height, Timestamp: timestamp},
        },
    }
    commitSigs := []*types.CommitSig{{Height: height, Timestamp: timestamp}}
    return types.NewCommit(types.BlockID{}, commitSigs)
}

func makeStateAndBlockStore(logger log.Logger) (sm.State, *BlockStore) {
@@ -61,7 +61,7 @@ func resetFilePV(privValKeyFile, privValStateFile string, logger log.Logger) {
    } else {
        pv := privval.GenFilePV(privValKeyFile, privValStateFile)
        pv.Save()
        logger.Info("Generated private validator file", "file", "keyFile", privValKeyFile,
        logger.Info("Generated private validator file", "keyFile", privValKeyFile,
            "stateFile", privValStateFile)
    }
}
|
@ -211,6 +211,7 @@ func (m *mockEvidencePool) Update(block *types.Block, state sm.State) {
|
||||
}
|
||||
m.height++
|
||||
}
|
||||
func (m *mockEvidencePool) IsCommitted(types.Evidence) bool { return false }
|
||||
|
||||
//------------------------------------
|
||||
|
||||
|
@@ -537,10 +537,8 @@ func makeBlockchainFromWAL(wal WAL) ([]*types.Block, []*types.Commit, error) {
        }
    case *types.Vote:
        if p.Type == types.PrecommitType {
            thisBlockCommit = &types.Commit{
                BlockID:    p.BlockID,
                Precommits: []*types.CommitSig{p.CommitSig()},
            }
            commitSigs := []*types.CommitSig{p.CommitSig()}
            thisBlockCommit = types.NewCommit(p.BlockID, commitSigs)
        }
    }
}
@@ -954,7 +954,7 @@ func (cs *ConsensusState) createProposalBlock() (block *types.Block, blockParts
    if cs.Height == 1 {
        // We're creating a proposal for the first block.
        // The commit is empty, but not nil.
        commit = &types.Commit{}
        commit = types.NewCommit(types.BlockID{}, nil)
    } else if cs.LastCommit.HasTwoThirdsMajority() {
        // Make the commit from LastCommit
        commit = cs.LastCommit.MakeCommit()
@@ -53,11 +53,8 @@ func BenchmarkRoundStateDeepCopy(b *testing.B) {
        Data: types.Data{
            Txs: txs,
        },
        Evidence: types.EvidenceData{},
        LastCommit: &types.Commit{
            BlockID:    blockID,
            Precommits: precommits,
        },
        Evidence:   types.EvidenceData{},
        LastCommit: types.NewCommit(blockID, precommits),
    }
    parts := block.MakePartSet(4096)
    // Random Proposal
@@ -14,8 +14,7 @@ import (
// see:
// - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/signature_nocgo.go#L90-L93
// - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/crypto.go#L39
var secp256k1N, _ = new(big.Int).SetString("fffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141", 16)
var secp256k1halfN = new(big.Int).Div(secp256k1N, big.NewInt(2))
var secp256k1halfN = new(big.Int).Rsh(secp256k1.S256().N, 1)

// Sign creates an ECDSA signature on curve Secp256k1, using SHA256 on the msg.
// The returned signature will be of the form R || S (in lower-S form).
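The new `secp256k1halfN` uses a right shift instead of a division; since the curve order N is odd, both yield the same floor(N/2). A standalone check, assuming the `btcec` dependency referenced above:

```go
// Verifies (illustratively) that N >> 1 == N / 2 (floor) for the odd
// secp256k1 group order N.
package main

import (
    "fmt"
    "math/big"

    "github.com/btcsuite/btcd/btcec"
)

func main() {
    n := btcec.S256().N
    byShift := new(big.Int).Rsh(n, 1)
    byDiv := new(big.Int).Div(n, big.NewInt(2))
    fmt.Println(byShift.Cmp(byDiv) == 0) // true
}
```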
@@ -28,7 +28,8 @@ type EvidencePool struct {
    state sm.State
}

func NewEvidencePool(stateDB dbm.DB, evidenceStore *EvidenceStore) *EvidencePool {
func NewEvidencePool(stateDB, evidenceDB dbm.DB) *EvidencePool {
    evidenceStore := NewEvidenceStore(evidenceDB)
    evpool := &EvidencePool{
        stateDB: stateDB,
        state:   sm.LoadState(stateDB),
@@ -132,6 +133,12 @@ func (evpool *EvidencePool) MarkEvidenceAsCommitted(height int64, evidence []typ

}

// IsCommitted returns true if we have already seen this exact evidence and it is already marked as committed.
func (evpool *EvidencePool) IsCommitted(evidence types.Evidence) bool {
    ei := evpool.evidenceStore.getEvidenceInfo(evidence)
    return ei.Evidence != nil && ei.Committed
}

func (evpool *EvidencePool) removeEvidence(height, maxAge int64, blockEvidenceMap map[string]struct{}) {
    for e := evpool.evidenceList.Front(); e != nil; e = e.Next() {
        ev := e.Value.(types.Evidence)
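A minimal caller-side sketch of the new constructor signature (two databases instead of a pre-built store) and the `IsCommitted` query; the in-memory evidence DB and the wrapper function are illustrative only:

```go
// Builds a pool from a state DB and an evidence DB and asks whether a piece
// of evidence was already committed in an earlier block.
package demo

import (
    "github.com/tendermint/tendermint/evidence"
    dbm "github.com/tendermint/tendermint/libs/db"
    "github.com/tendermint/tendermint/types"
)

func alreadyCommitted(stateDB dbm.DB, ev types.Evidence) bool {
    evidenceDB := dbm.NewMemDB() // placeholder; a real node uses a persistent DB
    pool := evidence.NewEvidencePool(stateDB, evidenceDB)
    return pool.IsCommitted(ev)
}
```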
@@ -56,8 +56,8 @@ func TestEvidencePool(t *testing.T) {
    valAddr := []byte("val1")
    height := int64(5)
    stateDB := initializeValidatorState(valAddr, height)
    store := NewEvidenceStore(dbm.NewMemDB())
    pool := NewEvidencePool(stateDB, store)
    evidenceDB := dbm.NewMemDB()
    pool := NewEvidencePool(stateDB, evidenceDB)

    goodEvidence := types.NewMockGoodEvidence(height, 0, valAddr)
    badEvidence := types.MockBadEvidence{goodEvidence}
@@ -84,3 +84,24 @@ func TestEvidencePool(t *testing.T) {
    assert.Nil(t, err)
    assert.Equal(t, 1, pool.evidenceList.Len())
}

func TestEvidencePoolIsCommitted(t *testing.T) {
    // Initialization:
    valAddr := []byte("validator_address")
    height := int64(42)
    stateDB := initializeValidatorState(valAddr, height)
    evidenceDB := dbm.NewMemDB()
    pool := NewEvidencePool(stateDB, evidenceDB)

    // evidence not seen yet:
    evidence := types.NewMockGoodEvidence(height, 0, valAddr)
    assert.False(t, pool.IsCommitted(evidence))

    // evidence seen but not yet committed:
    assert.NoError(t, pool.AddEvidence(evidence))
    assert.False(t, pool.IsCommitted(evidence))

    // evidence seen and committed:
    pool.MarkEvidenceAsCommitted(height, []types.Evidence{evidence})
    assert.True(t, pool.IsCommitted(evidence))
}
@@ -37,8 +37,8 @@ func makeAndConnectEvidenceReactors(config *cfg.Config, stateDBs []dbm.DB) []*Ev
    logger := evidenceLogger()
    for i := 0; i < N; i++ {

        store := NewEvidenceStore(dbm.NewMemDB())
        pool := NewEvidencePool(stateDBs[i], store)
        evidenceDB := dbm.NewMemDB()
        pool := NewEvidencePool(stateDBs[i], evidenceDB)
        reactors[i] = NewEvidenceReactor(pool)
        reactors[i].SetLogger(logger.With("validator", i))
    }
@@ -117,32 +117,33 @@ func (store *EvidenceStore) listEvidence(prefixKey string, maxNum int64) (eviden
    return evidence
}

// GetEvidence fetches the evidence with the given height and hash.
func (store *EvidenceStore) GetEvidence(height int64, hash []byte) *EvidenceInfo {
// GetEvidenceInfo fetches the EvidenceInfo with the given height and hash.
// If not found, ei.Evidence is nil.
func (store *EvidenceStore) GetEvidenceInfo(height int64, hash []byte) EvidenceInfo {
    key := keyLookupFromHeightAndHash(height, hash)
    val := store.db.Get(key)

    if len(val) == 0 {
        return nil
        return EvidenceInfo{}
    }
    var ei EvidenceInfo
    err := cdc.UnmarshalBinaryBare(val, &ei)
    if err != nil {
        panic(err)
    }
    return &ei
    return ei
}

// AddNewEvidence adds the given evidence to the database.
// It returns false if the evidence is already stored.
func (store *EvidenceStore) AddNewEvidence(evidence types.Evidence, priority int64) bool {
    // check if we already have seen it
    ei_ := store.GetEvidence(evidence.Height(), evidence.Hash())
    if ei_ != nil && ei_.Evidence != nil {
    ei := store.getEvidenceInfo(evidence)
    if ei.Evidence != nil {
        return false
    }

    ei := EvidenceInfo{
    ei = EvidenceInfo{
        Committed: false,
        Priority:  priority,
        Evidence:  evidence,
@@ -165,6 +166,11 @@ func (store *EvidenceStore) AddNewEvidence(evidence types.Evidence, priority int
// MarkEvidenceAsBroadcasted removes evidence from Outqueue.
func (store *EvidenceStore) MarkEvidenceAsBroadcasted(evidence types.Evidence) {
    ei := store.getEvidenceInfo(evidence)
    if ei.Evidence == nil {
        // nothing to do; we did not store the evidence yet (AddNewEvidence):
        return
    }
    // remove from the outqueue
    key := keyOutqueue(evidence, ei.Priority)
    store.db.Delete(key)
}
@@ -177,8 +183,12 @@ func (store *EvidenceStore) MarkEvidenceAsCommitted(evidence types.Evidence) {
    pendingKey := keyPending(evidence)
    store.db.Delete(pendingKey)

    ei := store.getEvidenceInfo(evidence)
    ei.Committed = true
    // committed EvidenceInfo doesn't need priority
    ei := EvidenceInfo{
        Committed: true,
        Evidence:  evidence,
        Priority:  0,
    }

    lookupKey := keyLookup(evidence)
    store.db.SetSync(lookupKey, cdc.MustMarshalBinaryBare(ei))
@@ -187,13 +197,7 @@ func (store *EvidenceStore) MarkEvidenceAsCommitted(evidence types.Evidence) {
//---------------------------------------------------
// utils

// getEvidenceInfo is convenience for calling GetEvidenceInfo if we have the full evidence.
func (store *EvidenceStore) getEvidenceInfo(evidence types.Evidence) EvidenceInfo {
    key := keyLookup(evidence)
    var ei EvidenceInfo
    b := store.db.Get(key)
    err := cdc.UnmarshalBinaryBare(b, &ei)
    if err != nil {
        panic(err)
    }
    return ei
    return store.GetEvidenceInfo(evidence.Height(), evidence.Hash())
}
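Because `GetEvidenceInfo` now returns a value rather than a pointer, "not found" is signalled by a zero `EvidenceInfo`. A hypothetical caller (assumed to live alongside `EvidenceStore` in the evidence package) checks the `Evidence` field instead of a nil pointer:

```go
// Presence check under the new value-returning API.
package evidence

func hasEvidence(store *EvidenceStore, height int64, hash []byte) bool {
    // Before this change: ei := store.GetEvidence(height, hash); found meant ei != nil.
    ei := store.GetEvidenceInfo(height, hash)
    return ei.Evidence != nil
}
```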
@@ -27,6 +27,21 @@ func TestStoreAddDuplicate(t *testing.T) {
    assert.False(added)
}

func TestStoreCommitDuplicate(t *testing.T) {
    assert := assert.New(t)

    db := dbm.NewMemDB()
    store := NewEvidenceStore(db)

    priority := int64(10)
    ev := types.NewMockGoodEvidence(2, 1, []byte("val1"))

    store.MarkEvidenceAsCommitted(ev)

    added := store.AddNewEvidence(ev, priority)
    assert.False(added)
}

func TestStoreMark(t *testing.T) {
    assert := assert.New(t)

@@ -46,7 +61,7 @@ func TestStoreMark(t *testing.T) {
    assert.True(added)

    // get the evidence. verify. should be uncommitted
    ei := store.GetEvidence(ev.Height(), ev.Hash())
    ei := store.GetEvidenceInfo(ev.Height(), ev.Hash())
    assert.Equal(ev, ei.Evidence)
    assert.Equal(priority, ei.Priority)
    assert.False(ei.Committed)
@@ -72,9 +87,10 @@ func TestStoreMark(t *testing.T) {
    assert.Equal(0, len(pendingEv))

    // evidence should show committed
    ei = store.GetEvidence(ev.Height(), ev.Hash())
    newPriority := int64(0)
    ei = store.GetEvidenceInfo(ev.Height(), ev.Hash())
    assert.Equal(ev, ei.Evidence)
    assert.Equal(priority, ei.Priority)
    assert.Equal(newPriority, ei.Priority)
    assert.True(ei.Committed)
}
@ -80,12 +80,8 @@ func (pkz privKeys) signHeader(header *types.Header, first, last int) *types.Com
|
||||
vote := makeVote(header, vset, pkz[i])
|
||||
commitSigs[vote.ValidatorIndex] = vote.CommitSig()
|
||||
}
|
||||
|
||||
res := &types.Commit{
|
||||
BlockID: types.BlockID{Hash: header.Hash()},
|
||||
Precommits: commitSigs,
|
||||
}
|
||||
return res
|
||||
blockID := types.BlockID{Hash: header.Hash()}
|
||||
return types.NewCommit(blockID, commitSigs)
|
||||
}
|
||||
|
||||
func makeVote(header *types.Header, valset *types.ValidatorSet, key crypto.PrivKey) *types.Vote {
|
||||
|
@ -70,7 +70,7 @@ func TestValidateBlock(t *testing.T) {
|
||||
},
|
||||
signedHeader: types.SignedHeader{
|
||||
Header: &types.Header{Height: 11},
|
||||
Commit: &types.Commit{BlockID: types.BlockID{Hash: []byte("0xDEADBEEF")}},
|
||||
Commit: types.NewCommit(types.BlockID{Hash: []byte("0xDEADBEEF")}, nil),
|
||||
},
|
||||
wantErr: "Data hash doesn't match header",
|
||||
},
|
||||
@ -81,7 +81,7 @@ func TestValidateBlock(t *testing.T) {
|
||||
},
|
||||
signedHeader: types.SignedHeader{
|
||||
Header: &types.Header{Height: 11},
|
||||
Commit: &types.Commit{BlockID: types.BlockID{Hash: []byte("DEADBEEF")}},
|
||||
Commit: types.NewCommit(types.BlockID{Hash: []byte("DEADBEEF")}, nil),
|
||||
},
|
||||
},
|
||||
// End Header.Data hash mismatch test
|
||||
@ -169,7 +169,7 @@ func TestValidateBlockMeta(t *testing.T) {
|
||||
ValidatorsHash: []byte("Tendermint"),
|
||||
Time: testTime2,
|
||||
},
|
||||
Commit: &types.Commit{BlockID: types.BlockID{Hash: []byte("DEADBEEF")}},
|
||||
Commit: types.NewCommit(types.BlockID{Hash: []byte("DEADBEEF")}, nil),
|
||||
},
|
||||
wantErr: "Headers don't match",
|
||||
},
|
||||
@ -188,7 +188,7 @@ func TestValidateBlockMeta(t *testing.T) {
|
||||
ValidatorsHash: []byte("Tendermint-x"),
|
||||
Time: testTime2,
|
||||
},
|
||||
Commit: &types.Commit{BlockID: types.BlockID{Hash: []byte("DEADBEEF")}},
|
||||
Commit: types.NewCommit(types.BlockID{Hash: []byte("DEADBEEF")}, nil),
|
||||
},
|
||||
wantErr: "Headers don't match",
|
||||
},
|
||||
|
@ -345,8 +345,7 @@ func NewNode(config *cfg.Config,
|
||||
return nil, err
|
||||
}
|
||||
evidenceLogger := logger.With("module", "evidence")
|
||||
evidenceStore := evidence.NewEvidenceStore(evidenceDB)
|
||||
evidencePool := evidence.NewEvidencePool(stateDB, evidenceStore)
|
||||
evidencePool := evidence.NewEvidencePool(stateDB, evidenceDB)
|
||||
evidencePool.SetLogger(evidenceLogger)
|
||||
evidenceReactor := evidence.NewEvidenceReactor(evidencePool)
|
||||
evidenceReactor.SetLogger(evidenceLogger)
|
||||
|
@ -227,11 +227,10 @@ func TestCreateProposalBlock(t *testing.T) {
|
||||
mempool.SetLogger(logger)
|
||||
|
||||
// Make EvidencePool
|
||||
types.RegisterMockEvidencesGlobal()
|
||||
types.RegisterMockEvidencesGlobal() // XXX!
|
||||
evidence.RegisterMockEvidences()
|
||||
evidenceDB := dbm.NewMemDB()
|
||||
evidenceStore := evidence.NewEvidenceStore(evidenceDB)
|
||||
evidencePool := evidence.NewEvidencePool(stateDB, evidenceStore)
|
||||
evidencePool := evidence.NewEvidencePool(stateDB, evidenceDB)
|
||||
evidencePool.SetLogger(logger)
|
||||
|
||||
// fill the evidence pool with more evidence
|
||||
@ -261,7 +260,7 @@ func TestCreateProposalBlock(t *testing.T) {
|
||||
evidencePool,
|
||||
)
|
||||
|
||||
commit := &types.Commit{}
|
||||
commit := types.NewCommit(types.BlockID{}, nil)
|
||||
block, _ := blockExec.CreateProposalBlock(
|
||||
height,
|
||||
state, commit,
|
||||
@ -270,7 +269,6 @@ func TestCreateProposalBlock(t *testing.T) {
|
||||
|
||||
err = blockExec.ValidateBlock(state, block)
|
||||
assert.NoError(t, err)
|
||||
|
||||
}
|
||||
|
||||
func state(nVals int, height int64) (sm.State, dbm.DB) {
|
||||
|
@ -2,7 +2,6 @@ package state
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
abci "github.com/tendermint/tendermint/abci/types"
|
||||
@ -95,7 +94,6 @@ func (blockExec *BlockExecutor) CreateProposalBlock(
|
||||
txs := blockExec.mempool.ReapMaxBytesMaxGas(maxDataBytes, maxGas)
|
||||
|
||||
return state.MakeBlock(height, txs, commit, evidence, proposerAddr)
|
||||
|
||||
}
|
||||
|
||||
// ValidateBlock validates the given block against the given state.
|
||||
@ -103,7 +101,7 @@ func (blockExec *BlockExecutor) CreateProposalBlock(
|
||||
// Validation does not mutate state, but does require historical information from the stateDB,
|
||||
// ie. to verify evidence from a validator at an old height.
|
||||
func (blockExec *BlockExecutor) ValidateBlock(state State, block *types.Block) error {
|
||||
return validateBlock(blockExec.db, state, block)
|
||||
return validateBlock(blockExec.evpool, blockExec.db, state, block)
|
||||
}
|
||||
|
||||
// ApplyBlock validates the block against the state, executes it against the app,
|
||||
@ -143,7 +141,7 @@ func (blockExec *BlockExecutor) ApplyBlock(state State, blockID types.BlockID, b
|
||||
return state, err
|
||||
}
|
||||
if len(validatorUpdates) > 0 {
|
||||
blockExec.logger.Info("Updates to validators", "updates", makeValidatorUpdatesLogString(validatorUpdates))
|
||||
blockExec.logger.Info("Updates to validators", "updates", types.ValidatorListString(validatorUpdates))
|
||||
}
|
||||
|
||||
// Update the state with the block and responses.
|
||||
@ -368,70 +366,6 @@ func validateValidatorUpdates(abciUpdates []abci.ValidatorUpdate,
|
||||
return nil
|
||||
}
|
||||
|
||||
// If more or equal than 1/3 of total voting power changed in one block, then
|
||||
// a light client could never prove the transition externally. See
|
||||
// ./lite/doc.go for details on how a light client tracks validators.
|
||||
func updateValidators(currentSet *types.ValidatorSet, updates []*types.Validator) error {
|
||||
for _, valUpdate := range updates {
|
||||
// should already have been checked
|
||||
if valUpdate.VotingPower < 0 {
|
||||
return fmt.Errorf("Voting power can't be negative %v", valUpdate)
|
||||
}
|
||||
|
||||
address := valUpdate.Address
|
||||
_, val := currentSet.GetByAddress(address)
|
||||
// valUpdate.VotingPower is ensured to be non-negative in validation method
|
||||
if valUpdate.VotingPower == 0 { // remove val
|
||||
_, removed := currentSet.Remove(address)
|
||||
if !removed {
|
||||
return fmt.Errorf("Failed to remove validator %X", address)
|
||||
}
|
||||
} else if val == nil { // add val
|
||||
// make sure we do not exceed MaxTotalVotingPower by adding this validator:
|
||||
totalVotingPower := currentSet.TotalVotingPower()
|
||||
updatedVotingPower := valUpdate.VotingPower + totalVotingPower
|
||||
overflow := updatedVotingPower > types.MaxTotalVotingPower || updatedVotingPower < 0
|
||||
if overflow {
|
||||
return fmt.Errorf(
|
||||
"Failed to add new validator %v. Adding it would exceed max allowed total voting power %v",
|
||||
valUpdate,
|
||||
types.MaxTotalVotingPower)
|
||||
}
|
||||
// TODO: issue #1558 update spec according to the following:
|
||||
// Set ProposerPriority to -C*totalVotingPower (with C ~= 1.125) to make sure validators can't
|
||||
// unbond/rebond to reset their (potentially previously negative) ProposerPriority to zero.
|
||||
//
|
||||
// Contract: totalVotingPower < MaxTotalVotingPower to ensure ProposerPriority does
|
||||
// not exceed the bounds of int64.
|
||||
//
|
||||
// Compute ProposerPriority = -1.125*totalVotingPower == -(totalVotingPower + (totalVotingPower >> 3)).
|
||||
valUpdate.ProposerPriority = -(totalVotingPower + (totalVotingPower >> 3))
|
||||
added := currentSet.Add(valUpdate)
|
||||
if !added {
|
||||
return fmt.Errorf("Failed to add new validator %v", valUpdate)
|
||||
}
|
||||
} else { // update val
|
||||
// make sure we do not exceed MaxTotalVotingPower by updating this validator:
|
||||
totalVotingPower := currentSet.TotalVotingPower()
|
||||
curVotingPower := val.VotingPower
|
||||
updatedVotingPower := totalVotingPower - curVotingPower + valUpdate.VotingPower
|
||||
overflow := updatedVotingPower > types.MaxTotalVotingPower || updatedVotingPower < 0
|
||||
if overflow {
|
||||
return fmt.Errorf(
|
||||
"Failed to update existing validator %v. Updating it would exceed max allowed total voting power %v",
|
||||
valUpdate,
|
||||
types.MaxTotalVotingPower)
|
||||
}
|
||||
|
||||
updated := currentSet.Update(valUpdate)
|
||||
if !updated {
|
||||
return fmt.Errorf("Failed to update validator %X to %v", address, valUpdate)
|
||||
}
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// updateState returns a new State updated according to the header and responses.
|
||||
func updateState(
|
||||
state State,
|
||||
@ -448,7 +382,7 @@ func updateState(
|
||||
// Update the validator set with the latest abciResponses.
|
||||
lastHeightValsChanged := state.LastHeightValidatorsChanged
|
||||
if len(validatorUpdates) > 0 {
|
||||
err := updateValidators(nValSet, validatorUpdates)
|
||||
err := nValSet.UpdateWithChangeSet(validatorUpdates)
|
||||
if err != nil {
|
||||
return state, fmt.Errorf("Error changing validator set: %v", err)
|
||||
}
|
||||
@ -552,13 +486,3 @@ func ExecCommitBlock(
|
||||
// ResponseCommit has no error or log, just data
|
||||
return res.Data, nil
|
||||
}
|
||||
|
||||
// Make pretty string for validatorUpdates logging
|
||||
func makeValidatorUpdatesLogString(vals []*types.Validator) string {
|
||||
chunks := make([]string, len(vals))
|
||||
for i, val := range vals {
|
||||
chunks[i] = fmt.Sprintf("%s:%d", val.Address, val.VotingPower)
|
||||
}
|
||||
|
||||
return strings.Join(chunks, ",")
|
||||
}
|
||||
|
@ -79,7 +79,7 @@ func TestBeginBlockValidators(t *testing.T) {
|
||||
}
|
||||
|
||||
for _, tc := range testCases {
|
||||
lastCommit := &types.Commit{BlockID: prevBlockID, Precommits: tc.lastCommitPrecommits}
|
||||
lastCommit := types.NewCommit(prevBlockID, tc.lastCommitPrecommits)
|
||||
|
||||
// block for height 2
|
||||
block, _ := state.MakeBlock(2, makeTxs(2), lastCommit, nil, state.Validators.GetProposer().Address)
|
||||
@ -139,7 +139,7 @@ func TestBeginBlockByzantineValidators(t *testing.T) {
|
||||
commitSig0 := (&types.Vote{ValidatorIndex: 0, Timestamp: now, Type: types.PrecommitType}).CommitSig()
|
||||
commitSig1 := (&types.Vote{ValidatorIndex: 1, Timestamp: now}).CommitSig()
|
||||
commitSigs := []*types.CommitSig{commitSig0, commitSig1}
|
||||
lastCommit := &types.Commit{BlockID: prevBlockID, Precommits: commitSigs}
|
||||
lastCommit := types.NewCommit(prevBlockID, commitSigs)
|
||||
for _, tc := range testCases {
|
||||
|
||||
block, _ := state.MakeBlock(10, makeTxs(2), lastCommit, nil, state.Validators.GetProposer().Address)
|
||||
@ -280,7 +280,7 @@ func TestUpdateValidators(t *testing.T) {
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
updates, err := types.PB2TM.ValidatorUpdates(tc.abciUpdates)
|
||||
assert.NoError(t, err)
|
||||
err = updateValidators(tc.currentSet, updates)
|
||||
err = tc.currentSet.UpdateWithChangeSet(updates)
|
||||
if tc.shouldErr {
|
||||
assert.Error(t, err)
|
||||
} else {
|
||||
|
@@ -80,10 +80,13 @@ type BlockStore interface {
// evidence pool

// EvidencePool defines the EvidencePool interface used by the ConsensusState.
// Get/Set/Commit
type EvidencePool interface {
    PendingEvidence(int64) []types.Evidence
    AddEvidence(types.Evidence) error
    Update(*types.Block, State)
    // IsCommitted indicates if this evidence was already marked committed in another block.
    IsCommitted(types.Evidence) bool
}

// MockMempool is an empty implementation of a Mempool, useful for testing.
@@ -92,3 +95,4 @@ type MockEvidencePool struct{}
func (m MockEvidencePool) PendingEvidence(int64) []types.Evidence { return nil }
func (m MockEvidencePool) AddEvidence(types.Evidence) error       { return nil }
func (m MockEvidencePool) Update(*types.Block, State)             {}
func (m MockEvidencePool) IsCommitted(types.Evidence) bool        { return false }
|
@ -322,7 +322,8 @@ func TestProposerFrequency(t *testing.T) {
|
||||
vals := make([]*types.Validator, N)
|
||||
totalVotePower := int64(0)
|
||||
for j := 0; j < N; j++ {
|
||||
votePower := int64(cmn.RandInt() % maxPower)
|
||||
// make sure votePower > 0
|
||||
votePower := int64(cmn.RandInt()%maxPower) + 1
|
||||
totalVotePower += votePower
|
||||
privVal := types.NewMockPV()
|
||||
pubKey := privVal.GetPubKey()
|
||||
@ -424,49 +425,71 @@ func TestProposerPriorityDoesNotGetResetToZero(t *testing.T) {
|
||||
require.Equal(t, len(updatedState2.NextValidators.Validators), 2)
|
||||
_, updatedVal1 := updatedState2.NextValidators.GetByAddress(val1PubKey.Address())
|
||||
_, addedVal2 := updatedState2.NextValidators.GetByAddress(val2PubKey.Address())
|
||||
|
||||
// adding a validator should not lead to a ProposerPriority equal to zero (unless the combination of averaging and
|
||||
// incrementing would cause so; which is not the case here)
|
||||
totalPowerBefore2 := curTotal
|
||||
// while adding we compute prio == -1.125 * total:
|
||||
wantVal2ProposerPrio := -(totalPowerBefore2 + (totalPowerBefore2 >> 3))
|
||||
wantVal2ProposerPrio = wantVal2ProposerPrio + val2VotingPower
|
||||
// then increment:
|
||||
// Steps from adding new validator:
|
||||
// 0 - val1 prio is 0, TVP after add:
|
||||
wantVal1Prio := int64(0)
|
||||
totalPowerAfter := val1VotingPower + val2VotingPower
|
||||
// mostest:
|
||||
wantVal2ProposerPrio = wantVal2ProposerPrio - totalPowerAfter
|
||||
avg := big.NewInt(0).Add(big.NewInt(val1VotingPower), big.NewInt(wantVal2ProposerPrio))
|
||||
// 1. Add - Val2 should be initially added with (-123) =>
|
||||
wantVal2Prio := -(totalPowerAfter + (totalPowerAfter >> 3))
|
||||
// 2. Scale - noop
|
||||
// 3. Center - with avg, resulting val2:-61, val1:62
|
||||
avg := big.NewInt(0).Add(big.NewInt(wantVal1Prio), big.NewInt(wantVal2Prio))
|
||||
avg.Div(avg, big.NewInt(2))
|
||||
wantVal2ProposerPrio = wantVal2ProposerPrio - avg.Int64()
|
||||
wantVal1Prio := 0 + val1VotingPower - avg.Int64()
|
||||
wantVal2Prio = wantVal2Prio - avg.Int64() // -61
|
||||
wantVal1Prio = wantVal1Prio - avg.Int64() // 62
|
||||
|
||||
// 4. Steps from IncrementProposerPriority
|
||||
wantVal1Prio = wantVal1Prio + val1VotingPower // 72
|
||||
wantVal2Prio = wantVal2Prio + val2VotingPower // 39
|
||||
wantVal1Prio = wantVal1Prio - totalPowerAfter // -38 as val1 is proposer
|
||||
|
||||
assert.Equal(t, wantVal1Prio, updatedVal1.ProposerPriority)
|
||||
assert.Equal(t, wantVal2ProposerPrio, addedVal2.ProposerPriority)
|
||||
assert.Equal(t, wantVal2Prio, addedVal2.ProposerPriority)
|
||||
|
||||
// Updating a validator does not reset the ProposerPriority to zero:
|
||||
// 1. Add - Val2 VotingPower change to 1 =>
|
||||
updatedVotingPowVal2 := int64(1)
|
||||
updateVal := abci.ValidatorUpdate{PubKey: types.TM2PB.PubKey(val2PubKey), Power: updatedVotingPowVal2}
|
||||
validatorUpdates, err = types.PB2TM.ValidatorUpdates([]abci.ValidatorUpdate{updateVal})
|
||||
assert.NoError(t, err)
|
||||
|
||||
// this will cause the diff of priorities (31)
|
||||
// this will cause the diff of priorities (77)
|
||||
// to be larger than threshold == 2*totalVotingPower (22):
|
||||
updatedState3, err := updateState(updatedState2, blockID, &block.Header, abciResponses, validatorUpdates)
|
||||
assert.NoError(t, err)
|
||||
|
||||
require.Equal(t, len(updatedState3.NextValidators.Validators), 2)
|
||||
_, prevVal1 := updatedState3.Validators.GetByAddress(val1PubKey.Address())
|
||||
_, prevVal2 := updatedState3.Validators.GetByAddress(val2PubKey.Address())
|
||||
_, updatedVal1 = updatedState3.NextValidators.GetByAddress(val1PubKey.Address())
|
||||
_, updatedVal2 := updatedState3.NextValidators.GetByAddress(val2PubKey.Address())
|
||||
|
||||
// divide previous priorities by 2 == CEIL(31/22) as diff > threshold:
|
||||
expectedVal1PrioBeforeAvg := prevVal1.ProposerPriority/2 + prevVal1.VotingPower
|
||||
wantVal2ProposerPrio = wantVal2ProposerPrio/2 + updatedVotingPowVal2
|
||||
// val1 will be proposer:
|
||||
total := val1VotingPower + updatedVotingPowVal2
|
||||
expectedVal1PrioBeforeAvg = expectedVal1PrioBeforeAvg - total
|
||||
avgI64 := (wantVal2ProposerPrio + expectedVal1PrioBeforeAvg) / 2
|
||||
wantVal2ProposerPrio = wantVal2ProposerPrio - avgI64
|
||||
wantVal1Prio = expectedVal1PrioBeforeAvg - avgI64
|
||||
assert.Equal(t, wantVal2ProposerPrio, updatedVal2.ProposerPriority)
|
||||
_, updatedVal1 = updatedState3.NextValidators.GetByAddress(val1PubKey.Address())
|
||||
// 2. Scale
|
||||
// old prios: v1(10):-38, v2(1):39
|
||||
wantVal1Prio = prevVal1.ProposerPriority
|
||||
wantVal2Prio = prevVal2.ProposerPriority
|
||||
// scale to diffMax = 22 = 2 * tvp, diff=39-(-38)=77
|
||||
// new totalPower
|
||||
totalPower := updatedVal1.VotingPower + updatedVal2.VotingPower
|
||||
dist := wantVal2Prio - wantVal1Prio
|
||||
// ratio := (dist + 2*totalPower - 1) / 2*totalPower = 98/22 = 4
|
||||
ratio := (dist + 2*totalPower - 1) / (2 * totalPower)
|
||||
// v1(10):-38/4, v2(1):39/4
|
||||
wantVal1Prio /= ratio // -9
|
||||
wantVal2Prio /= ratio // 9
|
||||
|
||||
// 3. Center - noop
|
||||
// 4. IncrementProposerPriority() ->
|
||||
// v1(10):-9+10, v2(1):9+1 -> v2 proposer so subsract tvp(11)
|
||||
// v1(10):1, v2(1):-1
|
||||
wantVal2Prio += updatedVal2.VotingPower // 10 -> prop
|
||||
wantVal1Prio += updatedVal1.VotingPower // 1
|
||||
wantVal2Prio -= totalPower // -1
|
||||
|
||||
assert.Equal(t, wantVal2Prio, updatedVal2.ProposerPriority)
|
||||
assert.Equal(t, wantVal1Prio, updatedVal1.ProposerPriority)
|
||||
}
|
||||
|
||||
@ -527,22 +550,22 @@ func TestProposerPriorityProposerAlternates(t *testing.T) {
|
||||
_, oldVal1 := updatedState2.Validators.GetByAddress(val1PubKey.Address())
|
||||
_, updatedVal2 := updatedState2.NextValidators.GetByAddress(val2PubKey.Address())
|
||||
|
||||
totalPower = val1VotingPower // no update
|
||||
v2PrioWhenAddedVal2 := -(totalPower + (totalPower >> 3))
|
||||
v2PrioWhenAddedVal2 = v2PrioWhenAddedVal2 + val1VotingPower // -11 + 10 == -1
|
||||
v1PrioWhenAddedVal2 := oldVal1.ProposerPriority + val1VotingPower // -10 + 10 == 0
|
||||
totalPower = 2 * val1VotingPower // now we have to validators with that power
|
||||
v1PrioWhenAddedVal2 = v1PrioWhenAddedVal2 - totalPower // mostest
|
||||
// have to express the AVG in big.Ints as -1/2 == -1 in big.Int while -1/2 == 0 in int64
|
||||
avgSum := big.NewInt(0).Add(big.NewInt(v2PrioWhenAddedVal2), big.NewInt(v1PrioWhenAddedVal2))
|
||||
avg := avgSum.Div(avgSum, big.NewInt(2))
|
||||
expectedVal2Prio := v2PrioWhenAddedVal2 - avg.Int64()
|
||||
totalPower = 2 * val1VotingPower // 10 + 10
|
||||
expectedVal1Prio := oldVal1.ProposerPriority + val1VotingPower - avg.Int64() - totalPower
|
||||
// val1's ProposerPriority story: -10 (see above) + 10 (voting pow) - (-1) (avg) - 20 (total) == -19
|
||||
// 1. Add
|
||||
val2VotingPower := val1VotingPower
|
||||
totalPower = val1VotingPower + val2VotingPower // 20
|
||||
v2PrioWhenAddedVal2 := -(totalPower + (totalPower >> 3)) // -22
|
||||
// 2. Scale - noop
|
||||
// 3. Center
|
||||
avgSum := big.NewInt(0).Add(big.NewInt(v2PrioWhenAddedVal2), big.NewInt(oldVal1.ProposerPriority))
|
||||
avg := avgSum.Div(avgSum, big.NewInt(2)) // -11
|
||||
expectedVal2Prio := v2PrioWhenAddedVal2 - avg.Int64() // -11
|
||||
expectedVal1Prio := oldVal1.ProposerPriority - avg.Int64() // 11
|
||||
// 4. Increment
|
||||
expectedVal2Prio = expectedVal2Prio + val2VotingPower // -11 + 10 = -1
|
||||
expectedVal1Prio = expectedVal1Prio + val1VotingPower // 11 + 10 == 21
|
||||
expectedVal1Prio = expectedVal1Prio - totalPower // 1, val1 proposer
|
||||
|
||||
assert.EqualValues(t, expectedVal1Prio, updatedVal1.ProposerPriority)
|
||||
// val2 prio when added: -(totalVotingPower + (totalVotingPower >> 3)) == -11
|
||||
// -> -11 + 10 (voting power) - (-1) (avg) == 0
|
||||
assert.EqualValues(t, expectedVal2Prio, updatedVal2.ProposerPriority, "unexpected proposer priority for validator: %v", updatedVal2)
|
||||
|
||||
validatorUpdates, err = types.PB2TM.ValidatorUpdates(abciResponses.EndBlock.ValidatorUpdates)
|
||||
@ -551,34 +574,40 @@ func TestProposerPriorityProposerAlternates(t *testing.T) {
|
||||
updatedState3, err := updateState(updatedState2, blockID, &block.Header, abciResponses, validatorUpdates)
|
||||
assert.NoError(t, err)
|
||||
|
||||
// proposer changes from now on (every iteration) as long as there are no changes in the validator set:
|
||||
assert.NotEqual(t, updatedState3.Validators.Proposer.Address, updatedState3.NextValidators.Proposer.Address)
|
||||
assert.Equal(t, updatedState3.Validators.Proposer.Address, updatedState3.NextValidators.Proposer.Address)
|
||||
|
||||
assert.Equal(t, updatedState3.Validators, updatedState2.NextValidators)
|
||||
_, updatedVal1 = updatedState3.NextValidators.GetByAddress(val1PubKey.Address())
|
||||
_, oldVal1 = updatedState3.Validators.GetByAddress(val1PubKey.Address())
|
||||
_, updatedVal2 = updatedState3.NextValidators.GetByAddress(val2PubKey.Address())
|
||||
_, oldVal2 := updatedState3.Validators.GetByAddress(val2PubKey.Address())
|
||||
|
||||
// val2 will be proposer:
|
||||
assert.Equal(t, val2PubKey.Address(), updatedState3.NextValidators.Proposer.Address)
|
||||
// val1 will still be proposer:
|
||||
assert.Equal(t, val1PubKey.Address(), updatedState3.NextValidators.Proposer.Address)
|
||||
|
||||
// check if expected proposer prio is matched:
|
||||
// Increment
|
||||
expectedVal2Prio2 := expectedVal2Prio + val2VotingPower // -1 + 10 = 9
|
||||
expectedVal1Prio2 := expectedVal1Prio + val1VotingPower // 1 + 10 == 11
|
||||
expectedVal1Prio2 = expectedVal1Prio2 - totalPower // -9, val1 proposer
|
||||
|
||||
expectedVal1Prio2 := oldVal1.ProposerPriority + val1VotingPower
|
||||
expectedVal2Prio2 := oldVal2.ProposerPriority + val1VotingPower - totalPower
|
||||
avgSum = big.NewInt(expectedVal1Prio + expectedVal2Prio)
|
||||
avg = avgSum.Div(avgSum, big.NewInt(2))
|
||||
expectedVal1Prio -= avg.Int64()
|
||||
expectedVal2Prio -= avg.Int64()
|
||||
|
||||
// -19 + 10 - 0 (avg) == -9
|
||||
assert.EqualValues(t, expectedVal1Prio2, updatedVal1.ProposerPriority, "unexpected proposer priority for validator: %v", updatedVal2)
|
||||
// 0 + 10 - 0 (avg) - 20 (total) == -10
|
||||
assert.EqualValues(t, expectedVal2Prio2, updatedVal2.ProposerPriority, "unexpected proposer priority for validator: %v", updatedVal2)
|
||||
|
||||
// no changes in voting power and both validators have same voting power
|
||||
// -> proposers should alternate:
|
||||
oldState := updatedState3
|
||||
abciResponses = &ABCIResponses{
|
||||
EndBlock: &abci.ResponseEndBlock{ValidatorUpdates: nil},
|
||||
}
|
||||
validatorUpdates, err = types.PB2TM.ValidatorUpdates(abciResponses.EndBlock.ValidatorUpdates)
|
||||
require.NoError(t, err)
|
||||
|
||||
oldState, err = updateState(oldState, blockID, &block.Header, abciResponses, validatorUpdates)
|
||||
assert.NoError(t, err)
|
||||
expectedVal1Prio2 = 1
|
||||
expectedVal2Prio2 = -1
|
||||
expectedVal1Prio = -9
|
||||
expectedVal2Prio = 9
|
||||
|
||||
for i := 0; i < 1000; i++ {
|
||||
// no validator updates:
|
||||
abciResponses := &ABCIResponses{
|
||||
|
@@ -13,7 +13,7 @@ import (
//-----------------------------------------------------
// Validate block

func validateBlock(stateDB dbm.DB, state State, block *types.Block) error {
func validateBlock(evidencePool EvidencePool, stateDB dbm.DB, state State, block *types.Block) error {
    // Validate internal consistency.
    if err := block.ValidateBasic(); err != nil {
        return err
@@ -145,6 +145,9 @@ func validateBlock(stateDB dbm.DB, state State, block *types.Block) error {
        if err := VerifyEvidence(stateDB, state, ev); err != nil {
            return types.NewErrEvidenceInvalid(ev, err)
        }
        if evidencePool != nil && evidencePool.IsCommitted(ev) {
            return types.NewErrEvidenceInvalid(ev, errors.New("evidence was already committed"))
        }
    }

    // NOTE: We can't actually verify it's the right proposer because we dont
@@ -185,6 +188,8 @@ func VerifyEvidence(stateDB dbm.DB, state State, evidence types.Evidence) error
    // The address must have been an active validator at the height.
    // NOTE: we will ignore evidence from H if the key was not a validator
    // at H, even if it is a validator at some nearby H'
    // XXX: this makes lite-client bisection as is unsafe
    // See https://github.com/tendermint/tendermint/issues/3244
    ev := evidence
    height, addr := ev.Height(), ev.Address()
    _, val := valset.GetByAddress(addr)
|
@ -121,6 +121,31 @@ func TestValidateBlockEvidence(t *testing.T) {
|
||||
require.True(t, ok)
|
||||
}
|
||||
|
||||
// always returns true if asked if any evidence was already committed.
|
||||
type mockEvPoolAlwaysCommitted struct{}
|
||||
|
||||
func (m mockEvPoolAlwaysCommitted) PendingEvidence(int64) []types.Evidence { return nil }
|
||||
func (m mockEvPoolAlwaysCommitted) AddEvidence(types.Evidence) error { return nil }
|
||||
func (m mockEvPoolAlwaysCommitted) Update(*types.Block, State) {}
|
||||
func (m mockEvPoolAlwaysCommitted) IsCommitted(types.Evidence) bool { return true }
|
||||
|
||||
func TestValidateFailBlockOnCommittedEvidence(t *testing.T) {
|
||||
var height int64 = 1
|
||||
state, stateDB := state(1, int(height))
|
||||
|
||||
blockExec := NewBlockExecutor(stateDB, log.TestingLogger(), nil, nil, mockEvPoolAlwaysCommitted{})
|
||||
// A block with a couple pieces of evidence passes.
|
||||
block := makeBlock(state, height)
|
||||
addr, _ := state.Validators.GetByIndex(0)
|
||||
alreadyCommittedEvidence := types.NewMockGoodEvidence(height, 0, addr)
|
||||
block.Evidence.Evidence = []types.Evidence{alreadyCommittedEvidence}
|
||||
block.EvidenceHash = block.Evidence.Hash()
|
||||
err := blockExec.ValidateBlock(state, block)
|
||||
|
||||
require.Error(t, err)
|
||||
require.IsType(t, err, &types.ErrEvidenceInvalid{})
|
||||
}
|
||||
|
||||
/*
|
||||
TODO(#2589):
|
||||
- test unmarshalling BlockParts that are too big into a Block that
|
||||
|
@ -490,8 +490,8 @@ func (cs *CommitSig) String() string {
|
||||
}
|
||||
|
||||
// toVote converts the CommitSig to a vote.
|
||||
// Once CommitSig has fewer fields than vote,
|
||||
// converting to a Vote will require more information.
|
||||
// TODO: deprecate for #1648. Converting to Vote will require
|
||||
// access to ValidatorSet.
|
||||
func (cs *CommitSig) toVote() *Vote {
|
||||
if cs == nil {
|
||||
return nil
|
||||
@ -509,18 +509,30 @@ type Commit struct {
|
||||
BlockID BlockID `json:"block_id"`
|
||||
Precommits []*CommitSig `json:"precommits"`
|
||||
|
||||
// Volatile
|
||||
// memoized in first call to corresponding method
|
||||
// NOTE: can't memoize in constructor because constructor
|
||||
// isn't used for unmarshaling
|
||||
height int64
|
||||
round int
|
||||
hash cmn.HexBytes
|
||||
bitArray *cmn.BitArray
|
||||
}
|
||||
|
||||
// NewCommit returns a new Commit with the given blockID and precommits.
|
||||
// TODO: memoize ValidatorSet in constructor so votes can be easily reconstructed
|
||||
// from CommitSig after #1648.
|
||||
func NewCommit(blockID BlockID, precommits []*CommitSig) *Commit {
|
||||
return &Commit{
|
||||
BlockID: blockID,
|
||||
Precommits: precommits,
|
||||
}
|
||||
}
|
||||
|
||||
// VoteSignBytes constructs the SignBytes for the given CommitSig.
|
||||
// The only unique part of the SignBytes is the Timestamp - all other fields
|
||||
// signed over are otherwise the same for all validators.
|
||||
func (commit *Commit) VoteSignBytes(chainID string, cs *CommitSig) []byte {
|
||||
return cs.toVote().SignBytes(chainID)
|
||||
return commit.ToVote(cs).SignBytes(chainID)
|
||||
}
|
||||
|
||||
// memoizeHeightRound memoizes the height and round of the commit using
|
||||
@ -543,27 +555,20 @@ func (commit *Commit) memoizeHeightRound() {
|
||||
|
||||
// ToVote converts a CommitSig to a Vote.
|
||||
// If the CommitSig is nil, the Vote will be nil.
|
||||
// When CommitSig is reduced to contain fewer fields,
|
||||
// this will need access to the ValidatorSet to properly
|
||||
// reconstruct the vote.
|
||||
func (commit *Commit) ToVote(cs *CommitSig) *Vote {
|
||||
// TODO: use commit.validatorSet to reconstruct vote
|
||||
// and deprecate .toVote
|
||||
return cs.toVote()
|
||||
}
|
||||
|
||||
// Height returns the height of the commit
|
||||
func (commit *Commit) Height() int64 {
|
||||
if len(commit.Precommits) == 0 {
|
||||
return 0
|
||||
}
|
||||
commit.memoizeHeightRound()
|
||||
return commit.height
|
||||
}
|
||||
|
||||
// Round returns the round of the commit
|
||||
func (commit *Commit) Round() int {
|
||||
if len(commit.Precommits) == 0 {
|
||||
return 0
|
||||
}
|
||||
commit.memoizeHeightRound()
|
||||
return commit.round
|
||||
}
|
||||
@ -595,9 +600,10 @@ func (commit *Commit) BitArray() *cmn.BitArray {
|
||||
}
|
||||
|
||||
// GetByIndex returns the vote corresponding to a given validator index.
|
||||
// Panics if `index >= commit.Size()`.
|
||||
// Implements VoteSetReader.
|
||||
func (commit *Commit) GetByIndex(index int) *Vote {
|
||||
return commit.Precommits[index].toVote()
|
||||
return commit.ToVote(commit.Precommits[index])
|
||||
}
|
||||
|
||||
// IsCommit returns true if there is at least one vote.
|
||||
|
@ -3,6 +3,7 @@ package types
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"strings"
|
||||
|
||||
"github.com/tendermint/tendermint/crypto"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
@ -68,6 +69,16 @@ func (v *Validator) String() string {
|
||||
v.ProposerPriority)
|
||||
}
|
||||
|
||||
// ValidatorListString returns a prettified validator list for logging purposes.
|
||||
func ValidatorListString(vals []*Validator) string {
|
||||
chunks := make([]string, len(vals))
|
||||
for i, val := range vals {
|
||||
chunks[i] = fmt.Sprintf("%s:%d", val.Address, val.VotingPower)
|
||||
}
|
||||
|
||||
return strings.Join(chunks, ",")
|
||||
}
|
||||
|
||||
// Bytes computes the unique encoding of a validator with a given voting power.
|
||||
// These are the bytes that gets hashed in consensus. It excludes address
|
||||
// as its redundant with the pubkey. This also excludes ProposerPriority
|
||||
|
@@ -12,14 +12,20 @@ import (
    cmn "github.com/tendermint/tendermint/libs/common"
)

// The maximum allowed total voting power.
// It needs to be sufficiently small to, in all cases::
// MaxTotalVotingPower - the maximum allowed total voting power.
// It needs to be sufficiently small to, in all cases:
// 1. prevent clipping in incrementProposerPriority()
// 2. let (diff+diffMax-1) not overflow in IncrementPropposerPriotity()
// 2. let (diff+diffMax-1) not overflow in IncrementProposerPriority()
// (Proof of 1 is tricky, left to the reader).
// It could be higher, but this is sufficiently large for our purposes,
// and leaves room for defensive purposes.
const MaxTotalVotingPower = int64(math.MaxInt64) / 8
// PriorityWindowSizeFactor - is a constant that when multiplied with the total voting power gives
// the maximum allowed distance between validator priorities.

const (
    MaxTotalVotingPower      = int64(math.MaxInt64) / 8
    PriorityWindowSizeFactor = 2
)

// ValidatorSet represent a set of *Validator at a given height.
// The validators can be fetched by address or index.
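The `(diff+diffMax-1)` expression mentioned above is the integer form of `ceil(diff/diffMax)`, the ratio by which priorities are rescaled so their spread stays within `PriorityWindowSizeFactor * totalVotingPower`. A standalone sketch of just that step, using numbers taken from the proposer-priority tests elsewhere in this diff (total power 11, so diffMax = 22):

```go
// Integer ceiling-division rescaling of proposer priorities.
package main

import "fmt"

func ceilDiv(a, b int64) int64 { return (a + b - 1) / b }

func main() {
    priorities := []int64{-38, 39} // example spread from the tests
    diffMax := int64(22)           // PriorityWindowSizeFactor * totalVotingPower

    diff := priorities[1] - priorities[0] // 77
    if diff > diffMax {
        ratio := ceilDiv(diff, diffMax) // ceil(77/22) = 4
        for i := range priorities {
            priorities[i] /= ratio
        }
    }
    fmt.Println(priorities) // [-9 9]
}
```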
@ -42,19 +48,17 @@ type ValidatorSet struct {
|
||||
// NewValidatorSet initializes a ValidatorSet by copying over the
|
||||
// values from `valz`, a list of Validators. If valz is nil or empty,
|
||||
// the new ValidatorSet will have an empty list of Validators.
|
||||
// The addresses of validators in `valz` must be unique otherwise the
|
||||
// function panics.
|
||||
func NewValidatorSet(valz []*Validator) *ValidatorSet {
|
||||
validators := make([]*Validator, len(valz))
|
||||
for i, val := range valz {
|
||||
validators[i] = val.Copy()
|
||||
}
|
||||
sort.Sort(ValidatorsByAddress(validators))
|
||||
vals := &ValidatorSet{
|
||||
Validators: validators,
|
||||
vals := &ValidatorSet{}
|
||||
err := vals.updateWithChangeSet(valz, false)
|
||||
if err != nil {
|
||||
panic(fmt.Sprintf("cannot create validator set: %s", err))
|
||||
}
|
||||
if len(valz) > 0 {
|
||||
vals.IncrementProposerPriority(1)
|
||||
}
|
||||
|
||||
return vals
|
||||
}
|
||||
|
||||
@ -74,6 +78,9 @@ func (vals *ValidatorSet) CopyIncrementProposerPriority(times int) *ValidatorSet
|
||||
// proposer. Panics if validator set is empty.
|
||||
// `times` must be positive.
|
||||
func (vals *ValidatorSet) IncrementProposerPriority(times int) {
|
||||
if vals.IsNilOrEmpty() {
|
||||
panic("empty validator set")
|
||||
}
|
||||
if times <= 0 {
|
||||
panic("Cannot call IncrementProposerPriority with non-positive times")
|
||||
}
|
||||
@ -81,20 +88,23 @@ func (vals *ValidatorSet) IncrementProposerPriority(times int) {
|
||||
// Cap the difference between priorities to be proportional to 2*totalPower by
|
||||
// re-normalizing priorities, i.e., rescale all priorities by multiplying with:
|
||||
// 2*totalVotingPower/(maxPriority - minPriority)
|
||||
diffMax := 2 * vals.TotalVotingPower()
|
||||
diffMax := PriorityWindowSizeFactor * vals.TotalVotingPower()
|
||||
vals.RescalePriorities(diffMax)
|
||||
vals.shiftByAvgProposerPriority()
|
||||
|
||||
var proposer *Validator
|
||||
// call IncrementProposerPriority(1) times times:
|
||||
for i := 0; i < times; i++ {
|
||||
proposer = vals.incrementProposerPriority()
|
||||
}
|
||||
vals.shiftByAvgProposerPriority()
|
||||
|
||||
vals.Proposer = proposer
|
||||
}
|
||||
|
||||
func (vals *ValidatorSet) RescalePriorities(diffMax int64) {
|
||||
if vals.IsNilOrEmpty() {
|
||||
panic("empty validator set")
|
||||
}
|
||||
// NOTE: This check is merely a sanity check which could be
|
||||
// removed if all tests would init. voting power appropriately;
|
||||
// i.e. diffMax should always be > 0
|
||||
@ -102,7 +112,7 @@ func (vals *ValidatorSet) RescalePriorities(diffMax int64) {
|
||||
return
|
||||
}
|
||||
|
||||
// Caculating ceil(diff/diffMax):
|
||||
// Calculating ceil(diff/diffMax):
|
||||
// Re-normalization is performed by dividing by an integer for simplicity.
|
||||
// NOTE: This may make debugging priority issues easier as well.
|
||||
diff := computeMaxMinPriorityDiff(vals)
|
||||
@ -146,6 +156,9 @@ func (vals *ValidatorSet) computeAvgProposerPriority() int64 {
|
||||
|
||||
// compute the difference between the max and min ProposerPriority of that set
|
||||
func computeMaxMinPriorityDiff(vals *ValidatorSet) int64 {
|
||||
if vals.IsNilOrEmpty() {
|
||||
panic("empty validator set")
|
||||
}
|
||||
max := int64(math.MinInt64)
|
||||
min := int64(math.MaxInt64)
|
||||
for _, v := range vals.Validators {
|
||||
@ -173,21 +186,31 @@ func (vals *ValidatorSet) getValWithMostPriority() *Validator {
|
||||
}
|
||||
|
||||
func (vals *ValidatorSet) shiftByAvgProposerPriority() {
|
||||
if vals.IsNilOrEmpty() {
|
||||
panic("empty validator set")
|
||||
}
|
||||
avgProposerPriority := vals.computeAvgProposerPriority()
|
||||
for _, val := range vals.Validators {
|
||||
val.ProposerPriority = safeSubClip(val.ProposerPriority, avgProposerPriority)
|
||||
}
|
||||
}
|
||||
|
||||
// Makes a copy of the validator list
|
||||
func validatorListCopy(valsList []*Validator) []*Validator {
|
||||
if valsList == nil {
|
||||
return nil
|
||||
}
|
||||
valsCopy := make([]*Validator, len(valsList))
|
||||
for i, val := range valsList {
|
||||
valsCopy[i] = val.Copy()
|
||||
}
|
||||
return valsCopy
|
||||
}
|
||||
|
||||
// Copy each validator into a new ValidatorSet
|
||||
func (vals *ValidatorSet) Copy() *ValidatorSet {
|
||||
validators := make([]*Validator, len(vals.Validators))
|
||||
for i, val := range vals.Validators {
|
||||
// NOTE: must copy, since IncrementProposerPriority updates in place.
|
||||
validators[i] = val.Copy()
|
||||
}
|
||||
return &ValidatorSet{
|
||||
Validators: validators,
|
||||
Validators: validatorListCopy(vals.Validators),
|
||||
Proposer: vals.Proposer,
|
||||
totalVotingPower: vals.totalVotingPower,
|
||||
}
|
||||
@ -284,57 +307,6 @@ func (vals *ValidatorSet) Hash() []byte {
|
||||
return merkle.SimpleHashFromByteSlices(bzs)
|
||||
}
|
||||
|
||||
// Add adds val to the validator set and returns true. It returns false if val
|
||||
// is already in the set.
|
||||
func (vals *ValidatorSet) Add(val *Validator) (added bool) {
|
||||
val = val.Copy()
|
||||
idx := sort.Search(len(vals.Validators), func(i int) bool {
|
||||
return bytes.Compare(val.Address, vals.Validators[i].Address) <= 0
|
||||
})
|
||||
if idx >= len(vals.Validators) {
|
||||
vals.Validators = append(vals.Validators, val)
|
||||
// Invalidate cache
|
||||
vals.Proposer = nil
|
||||
vals.totalVotingPower = 0
|
||||
return true
|
||||
} else if bytes.Equal(vals.Validators[idx].Address, val.Address) {
|
||||
return false
|
||||
} else {
|
||||
newValidators := make([]*Validator, len(vals.Validators)+1)
|
||||
copy(newValidators[:idx], vals.Validators[:idx])
|
||||
newValidators[idx] = val
|
||||
copy(newValidators[idx+1:], vals.Validators[idx:])
|
||||
vals.Validators = newValidators
|
||||
// Invalidate cache
|
||||
vals.Proposer = nil
|
||||
vals.totalVotingPower = 0
|
||||
return true
|
||||
}
|
||||
}
|
||||
|
||||
// Update updates the ValidatorSet by copying in the val.
|
||||
// If the val is not found, it returns false; otherwise,
|
||||
// it returns true. The val.ProposerPriority field is ignored
|
||||
// and unchanged by this method.
|
||||
func (vals *ValidatorSet) Update(val *Validator) (updated bool) {
|
||||
index, sameVal := vals.GetByAddress(val.Address)
|
||||
if sameVal == nil {
|
||||
return false
|
||||
}
|
||||
// Overwrite the ProposerPriority so it doesn't change.
|
||||
// During block execution, the val passed in here comes
|
||||
// from ABCI via PB2TM.ValidatorUpdates. Since ABCI
|
||||
// doesn't know about ProposerPriority, PB2TM.ValidatorUpdates
|
||||
// uses the default value of 0, which would cause issues for
|
||||
// proposer selection every time a validator's voting power changes.
|
||||
val.ProposerPriority = sameVal.ProposerPriority
|
||||
vals.Validators[index] = val.Copy()
|
||||
// Invalidate cache
|
||||
vals.Proposer = nil
|
||||
vals.totalVotingPower = 0
|
||||
return true
|
||||
}
|
||||
|
||||
// Remove deletes the validator with address. It returns the validator removed
|
||||
// and true. If returns nil and false if validator is not present in the set.
|
||||
func (vals *ValidatorSet) Remove(address []byte) (val *Validator, removed bool) {
|
||||
@ -366,6 +338,253 @@ func (vals *ValidatorSet) Iterate(fn func(index int, val *Validator) bool) {
|
||||
}
|
||||
}
|
||||
|
||||
// Checks changes against duplicates, splits the changes into updates and removals, sorts them by address.
//
// Returns:
// updates, removals - the sorted lists of updates and removals
// err - non-nil if duplicate entries or entries with negative voting power are seen
//
// No changes are made to 'origChanges'.
func processChanges(origChanges []*Validator) (updates, removals []*Validator, err error) {
    // Make a deep copy of the changes and sort by address
    changes := validatorListCopy(origChanges)
    sort.Sort(ValidatorsByAddress(changes))

    removals = make([]*Validator, 0, len(changes))
    updates = make([]*Validator, 0, len(changes))
    var prevAddr Address

    // Scan changes by address and append valid validators to updates or removals lists
    for _, valUpdate := range changes {
        if bytes.Equal(valUpdate.Address, prevAddr) {
            err = fmt.Errorf("duplicate entry %v in %v", valUpdate, changes)
            return nil, nil, err
        }
        if valUpdate.VotingPower < 0 {
            err = fmt.Errorf("voting power can't be negative %v", valUpdate)
            return nil, nil, err
        }
        if valUpdate.VotingPower == 0 {
            removals = append(removals, valUpdate)
        } else {
            updates = append(updates, valUpdate)
        }
        prevAddr = valUpdate.Address
    }
    return updates, removals, err
}

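// Illustrative sketch, not part of this commit: how a change set is split by
// processChanges. An entry with voting power 0 ends up in 'removals', everything
// else in 'updates', and both lists come back sorted by address. Assumes the
// types package context and the newValidator test helper (address bytes, power)
// used in the tests further down.
func exampleProcessChanges() {
    changes := []*Validator{
        newValidator([]byte("v2"), 0),  // power 0 -> removal
        newValidator([]byte("v1"), 15), // positive power -> update (or addition)
    }
    updates, removals, err := processChanges(changes)
    if err != nil {
        panic(err)
    }
    fmt.Printf("updates=%v removals=%v\n", updates, removals)
}
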
// Verifies a list of updates against a validator set, making sure the allowed
// total voting power would not be exceeded if these updates were applied to the set.
// It also computes the total voting power of the set that would result after the updates but
// before the removals.
//
// Returns:
// updatedTotalVotingPower - the new total voting power if these updates would be applied
// err - non-nil if the maximum allowed total voting power would be exceeded
//
// 'updates' should be a list of proper validator changes, i.e. they have been scanned
// by processChanges for duplicates and invalid values.
// No changes are made to the validator set 'vals'.
func verifyUpdates(updates []*Validator, vals *ValidatorSet) (updatedTotalVotingPower int64, err error) {

    // Scan the updates, compute new total voting power, check for overflow
    updatedTotalVotingPower = vals.TotalVotingPower()

    for _, valUpdate := range updates {
        address := valUpdate.Address
        _, val := vals.GetByAddress(address)
        if val == nil {
            // new validator, add its voting power to the total
            updatedTotalVotingPower += valUpdate.VotingPower
        } else {
            // updated validator, add the difference in power to the total
            updatedTotalVotingPower += valUpdate.VotingPower - val.VotingPower
        }

        if updatedTotalVotingPower < 0 {
            err = fmt.Errorf(
                "failed to add/update validator with negative voting power %v",
                valUpdate)
            return 0, err
        }
        overflow := updatedTotalVotingPower > MaxTotalVotingPower
        if overflow {
            err = fmt.Errorf(
                "failed to add/update validator %v, total voting power would exceed the max allowed %v",
                valUpdate, MaxTotalVotingPower)
            return 0, err
        }
    }

    return updatedTotalVotingPower, nil
}

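// Illustrative sketch, not part of this commit: the total that verifyUpdates
// computes. A brand new validator contributes its full power, an existing one
// only the difference against its current power. Assumes the types package
// context and the newValidator test helper.
func exampleVerifyUpdates() {
    vals := NewValidatorSet([]*Validator{
        newValidator([]byte("v1"), 10),
        newValidator([]byte("v2"), 20),
    }) // current total: 30
    updates := []*Validator{
        newValidator([]byte("v2"), 25), // existing: contributes 25-20 = +5
        newValidator([]byte("v3"), 40), // new: contributes +40
    }
    newTotal, err := verifyUpdates(updates, vals)
    fmt.Println(newTotal, err) // 75 <nil>
}
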
// Computes the proposer priority for the validators not present in the set based on 'updatedTotalVotingPower'.
// Leaves unchanged the priorities of validators that are updated.
//
// 'updates' parameter must be a list of unique validators to be added or updated.
// No changes are made to the validator set 'vals'.
func computeNewPriorities(updates []*Validator, vals *ValidatorSet, updatedTotalVotingPower int64) int {

    numNew := 0
    // Scan and update the proposerPriority for newly added and updated validators
    for _, valUpdate := range updates {
        address := valUpdate.Address
        _, val := vals.GetByAddress(address)
        if val == nil {
            // add val
            // Set ProposerPriority to -C*totalVotingPower (with C ~= 1.125) to make sure validators can't
            // un-bond and then re-bond to reset their (potentially previously negative) ProposerPriority to zero.
            //
            // Contract: updatedVotingPower < MaxTotalVotingPower to ensure ProposerPriority does
            // not exceed the bounds of int64.
            //
            // Compute ProposerPriority = -1.125*totalVotingPower == -(updatedVotingPower + (updatedVotingPower >> 3)).
            valUpdate.ProposerPriority = -(updatedTotalVotingPower + (updatedTotalVotingPower >> 3))
            numNew++
        } else {
            valUpdate.ProposerPriority = val.ProposerPriority
        }
    }

    return numNew
}

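// Illustrative sketch, not part of this commit: the starting priority a brand
// new validator receives. With an updated total voting power of 8000, the
// shift-based formula -(P + P>>3) gives -9000, i.e. roughly -1.125 * 8000.
func exampleNewValidatorPriority() {
    updatedTotalVotingPower := int64(8000)
    priority := -(updatedTotalVotingPower + (updatedTotalVotingPower >> 3))
    fmt.Println(priority) // -9000
}
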
// Merges the vals' validator list with the updates list.
// When two elements with the same address are seen, the one from updates is selected.
// Expects updates to be a list of updates sorted by address with no duplicates or errors,
// must have been validated with verifyUpdates() and priorities computed with computeNewPriorities().
func (vals *ValidatorSet) applyUpdates(updates []*Validator) {

    existing := make([]*Validator, len(vals.Validators))
    copy(existing, vals.Validators)

    merged := make([]*Validator, len(existing)+len(updates))
    i := 0

    for len(existing) > 0 && len(updates) > 0 {
        if bytes.Compare(existing[0].Address, updates[0].Address) < 0 {
            merged[i] = existing[0]
            existing = existing[1:]
        } else {
            merged[i] = updates[0]
            if bytes.Equal(existing[0].Address, updates[0].Address) {
                // validator present in both, advance existing
                existing = existing[1:]
            }
            updates = updates[1:]
        }
        i++
    }

    for j := 0; j < len(existing); j++ {
        merged[i] = existing[j]
        i++
    }

    for j := 0; j < len(updates); j++ {
        merged[i] = updates[j]
        i++
    }

    vals.Validators = merged[:i]
    vals.totalVotingPower = 0
}

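// Illustrative sketch, not part of this commit: applyUpdates is a plain merge of
// two address-sorted lists in which an update wins on an address match. Priority
// computation is skipped here for brevity; in real use computeNewPriorities runs
// first. Assumes the types package test context and the newValidator and
// getValidatorResults helpers shown further down.
func exampleApplyUpdates() {
    vals := NewValidatorSet([]*Validator{
        newValidator([]byte("v1"), 10),
        newValidator([]byte("v3"), 30),
    })
    updates := []*Validator{
        newValidator([]byte("v2"), 20), // inserted between v1 and v3
        newValidator([]byte("v3"), 33), // replaces the existing v3 entry
    }
    vals.applyUpdates(updates)
    fmt.Println(getValidatorResults(vals.Validators)) // [{v1 10} {v2 20} {v3 33}]
}
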
// Checks that the validators to be removed are part of the validator set.
// No changes are made to the validator set 'vals'.
func verifyRemovals(deletes []*Validator, vals *ValidatorSet) error {

    for _, valUpdate := range deletes {
        address := valUpdate.Address
        _, val := vals.GetByAddress(address)
        if val == nil {
            return fmt.Errorf("failed to find validator %X to remove", address)
        }
    }
    return nil
}

// Removes the validators specified in 'deletes' from validator set 'vals'.
// Should not fail as verification has been done before.
func (vals *ValidatorSet) applyRemovals(deletes []*Validator) {

    for _, valUpdate := range deletes {
        address := valUpdate.Address
        _, removed := vals.Remove(address)
        if !removed {
            // Should never happen
            panic(fmt.Sprintf("failed to remove validator %X", address))
        }
    }
}

// UpdateWithChangeSet attempts to update the validator set with 'changes'.
// It performs the following steps:
// - validates the changes making sure there are no duplicates and splits them into updates and deletes
// - verifies that applying the changes will not result in errors
// - computes the total voting power BEFORE removals to ensure that in the next steps the relative priorities
//   across old and newly added validators are fair
// - computes the priorities of new validators against the final set
// - applies the updates against the validator set
// - applies the removals against the validator set
// - performs scaling and centering of priority values
// If an error is detected during the verification steps, it is returned and the validator set
// is not changed.
func (vals *ValidatorSet) UpdateWithChangeSet(changes []*Validator) error {
    return vals.updateWithChangeSet(changes, true)
}

// Main function used by UpdateWithChangeSet() and NewValidatorSet().
// If 'allowDeletes' is false then delete operations (entries with voting power 0)
// are not allowed and an error is returned if they are present in 'changes'.
func (vals *ValidatorSet) updateWithChangeSet(changes []*Validator, allowDeletes bool) error {

    if len(changes) <= 0 {
        return nil
    }

    // Check for duplicates within changes, split in 'updates' and 'deletes' lists (sorted)
    updates, deletes, err := processChanges(changes)
    if err != nil {
        return err
    }

    if !allowDeletes && len(deletes) != 0 {
        err = fmt.Errorf("cannot process validators with voting power 0: %v", deletes)
        return err
    }

    // Verify that applying the 'deletes' against 'vals' will not result in error.
    if err := verifyRemovals(deletes, vals); err != nil {
        return err
    }

    // Verify that applying the 'updates' against 'vals' will not result in error.
    updatedTotalVotingPower, err := verifyUpdates(updates, vals)
    if err != nil {
        return err
    }

    // Compute the priorities for updates
    numNewValidators := computeNewPriorities(updates, vals, updatedTotalVotingPower)
    if len(vals.Validators)+numNewValidators <= len(deletes) {
        err = fmt.Errorf("applying the validator changes would result in empty set")
        return err
    }

    // Apply updates and removals
    vals.applyUpdates(updates)
    vals.applyRemovals(deletes)

    // Scale and center
    vals.RescalePriorities(PriorityWindowSizeFactor * vals.TotalVotingPower())
    vals.shiftByAvgProposerPriority()

    return nil
}

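// Illustrative sketch, not part of this commit: a typical caller-side use of
// UpdateWithChangeSet. One change set can mix power changes, additions and
// removals (voting power 0); if any verification step fails, the receiver is
// left untouched. Assumes the types package context and the newValidator helper.
func exampleUpdateWithChangeSet() {
    vals := NewValidatorSet([]*Validator{
        newValidator([]byte("v1"), 10),
        newValidator([]byte("v2"), 20),
    })
    changes := []*Validator{
        newValidator([]byte("v1"), 15), // power change
        newValidator([]byte("v2"), 0),  // removal
        newValidator([]byte("v3"), 30), // addition
    }
    if err := vals.UpdateWithChangeSet(changes); err != nil {
        panic(err)
    }
    // The set now holds v1(15) and v3(30), with priorities rescaled and centered.
}
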
// Verify that +2/3 of the set had signed the given signBytes.
func (vals *ValidatorSet) VerifyCommit(chainID string, blockID BlockID, height int64, commit *Commit) error {

@ -4,6 +4,7 @@ import (
    "bytes"
    "fmt"
    "math"
    "math/rand"
    "strings"
    "testing"
    "testing/quick"
@ -45,31 +46,29 @@ func TestValidatorSetBasic(t *testing.T) {
    assert.Nil(t, vset.Hash())

    // add

    val = randValidator_(vset.TotalVotingPower())
    assert.True(t, vset.Add(val))
    assert.NoError(t, vset.UpdateWithChangeSet([]*Validator{val}))

    assert.True(t, vset.HasAddress(val.Address))
    idx, val2 := vset.GetByAddress(val.Address)
    idx, _ = vset.GetByAddress(val.Address)
    assert.Equal(t, 0, idx)
    assert.Equal(t, val, val2)
    addr, val2 = vset.GetByIndex(0)
    addr, _ = vset.GetByIndex(0)
    assert.Equal(t, []byte(val.Address), addr)
    assert.Equal(t, val, val2)
    assert.Equal(t, 1, vset.Size())
    assert.Equal(t, val.VotingPower, vset.TotalVotingPower())
    assert.Equal(t, val, vset.GetProposer())
    assert.NotNil(t, vset.Hash())
    assert.NotPanics(t, func() { vset.IncrementProposerPriority(1) })
    assert.Equal(t, val.Address, vset.GetProposer().Address)

    // update
    assert.False(t, vset.Update(randValidator_(vset.TotalVotingPower())))
    val = randValidator_(vset.TotalVotingPower())
    assert.NoError(t, vset.UpdateWithChangeSet([]*Validator{val}))
    _, val = vset.GetByAddress(val.Address)
    val.VotingPower += 100
    proposerPriority := val.ProposerPriority
    // Mimic update from types.PB2TM.ValidatorUpdates which does not know about ProposerPriority
    // and hence defaults to 0.

    val.ProposerPriority = 0
    assert.True(t, vset.Update(val))
    assert.NoError(t, vset.UpdateWithChangeSet([]*Validator{val}))
    _, val = vset.GetByAddress(val.Address)
    assert.Equal(t, proposerPriority, val.ProposerPriority)

@ -116,8 +115,9 @@ func BenchmarkValidatorSetCopy(b *testing.B) {
    for i := 0; i < 1000; i++ {
        privKey := ed25519.GenPrivKey()
        pubKey := privKey.PubKey()
        val := NewValidator(pubKey, 0)
        if !vset.Add(val) {
        val := NewValidator(pubKey, 10)
        err := vset.UpdateWithChangeSet([]*Validator{val})
        if err != nil {
            panic("Failed to add validator")
        }
    }
@ -284,7 +284,7 @@ func randPubKey() crypto.PubKey {
func randValidator_(totalVotingPower int64) *Validator {
    // this modulo limits the ProposerPriority/VotingPower to stay in the
    // bounds of MaxTotalVotingPower minus the already existing voting power:
    val := NewValidator(randPubKey(), cmn.RandInt64()%(MaxTotalVotingPower-totalVotingPower))
    val := NewValidator(randPubKey(), int64(cmn.RandUint64()%uint64((MaxTotalVotingPower-totalVotingPower))))
    val.ProposerPriority = cmn.RandInt64() % (MaxTotalVotingPower - totalVotingPower)
    return val
}
@ -563,18 +563,12 @@ func TestValidatorSetVerifyCommit(t *testing.T) {
    sig, err := privKey.Sign(vote.SignBytes(chainID))
    assert.NoError(t, err)
    vote.Signature = sig
    commit := &Commit{
        BlockID:    blockID,
        Precommits: []*CommitSig{vote.CommitSig()},
    }
    commit := NewCommit(blockID, []*CommitSig{vote.CommitSig()})

    badChainID := "notmychainID"
    badBlockID := BlockID{Hash: []byte("goodbye")}
    badHeight := height + 1
    badCommit := &Commit{
        BlockID:    blockID,
        Precommits: []*CommitSig{nil},
    }
    badCommit := NewCommit(blockID, []*CommitSig{nil})

    // test some error cases
    // TODO: test more cases!
@ -599,3 +593,357 @@ func TestValidatorSetVerifyCommit(t *testing.T) {
    err = vset.VerifyCommit(chainID, blockID, height, commit)
    assert.Nil(t, err)
}

func TestEmptySet(t *testing.T) {

    var valList []*Validator
    valSet := NewValidatorSet(valList)
    assert.Panics(t, func() { valSet.IncrementProposerPriority(1) })
    assert.Panics(t, func() { valSet.RescalePriorities(100) })
    assert.Panics(t, func() { valSet.shiftByAvgProposerPriority() })
    assert.Panics(t, func() { assert.Zero(t, computeMaxMinPriorityDiff(valSet)) })
    valSet.GetProposer()

    // Add to empty set
    v1 := newValidator([]byte("v1"), 100)
    v2 := newValidator([]byte("v2"), 100)
    valList = []*Validator{v1, v2}
    assert.NoError(t, valSet.UpdateWithChangeSet(valList))
    verifyValidatorSet(t, valSet)

    // Delete all validators from set
    v1 = newValidator([]byte("v1"), 0)
    v2 = newValidator([]byte("v2"), 0)
    delList := []*Validator{v1, v2}
    assert.Error(t, valSet.UpdateWithChangeSet(delList))

    // Attempt delete from empty set
    assert.Error(t, valSet.UpdateWithChangeSet(delList))

}

func TestUpdatesForNewValidatorSet(t *testing.T) {

    v1 := newValidator([]byte("v1"), 100)
    v2 := newValidator([]byte("v2"), 100)
    valList := []*Validator{v1, v2}
    valSet := NewValidatorSet(valList)
    verifyValidatorSet(t, valSet)

    // Verify duplicates are caught in NewValidatorSet() and it panics
    v111 := newValidator([]byte("v1"), 100)
    v112 := newValidator([]byte("v1"), 123)
    v113 := newValidator([]byte("v1"), 234)
    valList = []*Validator{v111, v112, v113}
    assert.Panics(t, func() { NewValidatorSet(valList) })

    // Verify set including validator with voting power 0 cannot be created
    v1 = newValidator([]byte("v1"), 0)
    v2 = newValidator([]byte("v2"), 22)
    v3 := newValidator([]byte("v3"), 33)
    valList = []*Validator{v1, v2, v3}
    assert.Panics(t, func() { NewValidatorSet(valList) })

    // Verify set including validator with negative voting power cannot be created
    v1 = newValidator([]byte("v1"), 10)
    v2 = newValidator([]byte("v2"), -20)
    v3 = newValidator([]byte("v3"), 30)
    valList = []*Validator{v1, v2, v3}
    assert.Panics(t, func() { NewValidatorSet(valList) })

}

type testVal struct {
    name  string
    power int64
}

func TestValSetUpdatesBasicTestsExecute(t *testing.T) {
    valSetUpdatesBasicTests := []struct {
        startVals    []testVal
        updateVals   []testVal
        expectedVals []testVal
        expError     bool
    }{
        // Operations that should result in error
        0: { // updates leading to overflows
            []testVal{{"v1", 10}, {"v2", 10}},
            []testVal{{"v1", math.MaxInt64}},
            []testVal{{"v1", 10}, {"v2", 10}},
            true},
        1: { // duplicate entries in changes
            []testVal{{"v1", 10}, {"v2", 10}},
            []testVal{{"v1", 11}, {"v1", 22}},
            []testVal{{"v1", 10}, {"v2", 10}},
            true},
        2: { // duplicate entries in removes
            []testVal{{"v1", 10}, {"v2", 10}},
            []testVal{{"v1", 0}, {"v1", 0}},
            []testVal{{"v1", 10}, {"v2", 10}},
            true},
        3: { // duplicate entries in removes + changes
            []testVal{{"v1", 10}, {"v2", 10}},
            []testVal{{"v1", 0}, {"v2", 20}, {"v2", 30}, {"v1", 0}},
            []testVal{{"v1", 10}, {"v2", 10}},
            true},
        4: { // update with negative voting power
            []testVal{{"v1", 10}, {"v2", 10}},
            []testVal{{"v1", -123}},
            []testVal{{"v1", 10}, {"v2", 10}},
            true},
        5: { // delete non existing validator
            []testVal{{"v1", 10}, {"v2", 10}},
            []testVal{{"v3", 0}},
            []testVal{{"v1", 10}, {"v2", 10}},
            true},

        // Operations that should be successful
        6: { // no changes
            []testVal{{"v1", 10}, {"v2", 10}},
            []testVal{},
            []testVal{{"v1", 10}, {"v2", 10}},
            false},
        7: { // voting power changes
            []testVal{{"v1", 10}, {"v2", 10}},
            []testVal{{"v1", 11}, {"v2", 22}},
            []testVal{{"v1", 11}, {"v2", 22}},
            false},
        8: { // add new validators
            []testVal{{"v1", 10}, {"v2", 20}},
            []testVal{{"v3", 30}, {"v4", 40}},
            []testVal{{"v1", 10}, {"v2", 20}, {"v3", 30}, {"v4", 40}},
            false},
        9: { // delete validators
            []testVal{{"v1", 10}, {"v2", 20}, {"v3", 30}},
            []testVal{{"v2", 0}},
            []testVal{{"v1", 10}, {"v3", 30}},
            false},
        10: { // delete all validators
            []testVal{{"v1", 10}, {"v2", 20}, {"v3", 30}},
            []testVal{{"v1", 0}, {"v2", 0}, {"v3", 0}},
            []testVal{{"v1", 10}, {"v2", 20}, {"v3", 30}},
            true},
    }

    for i, tt := range valSetUpdatesBasicTests {
        // create a new set and apply updates, keeping copies for the checks
        valSet := createNewValidatorSet(tt.startVals)
        valSetCopy := valSet.Copy()
        valList := createNewValidatorList(tt.updateVals)
        valListCopy := validatorListCopy(valList)
        err := valSet.UpdateWithChangeSet(valList)

        if tt.expError {
            // for errors check the validator set has not been changed
            assert.Error(t, err, "test %d", i)
            assert.Equal(t, valSet, valSetCopy, "test %v", i)
        } else {
            assert.NoError(t, err, "test %d", i)
        }
        // check the parameter list has not changed
        assert.Equal(t, valList, valListCopy, "test %v", i)

        // check the final validator list is as expected and the set is properly scaled and centered.
        assert.Equal(t, getValidatorResults(valSet.Validators), tt.expectedVals, "test %v", i)
        verifyValidatorSet(t, valSet)
    }
}

func getValidatorResults(valList []*Validator) []testVal {
    testList := make([]testVal, len(valList))
    for i, val := range valList {
        testList[i].name = string(val.Address)
        testList[i].power = val.VotingPower
    }
    return testList
}

// Test that different permutations of an update give the same result.
func TestValSetUpdatesOrderTestsExecute(t *testing.T) {
    // startVals - initial validators to create the set with
    // updateVals - a sequence of updates to be applied to the set.
    // updateVals is shuffled a number of times during testing to check for same resulting validator set.
    valSetUpdatesOrderTests := []struct {
        startVals  []testVal
        updateVals []testVal
    }{
        0: { // order of changes should not matter, the final validator sets should be the same
            []testVal{{"v1", 10}, {"v2", 10}, {"v3", 30}, {"v4", 40}},
            []testVal{{"v1", 11}, {"v2", 22}, {"v3", 33}, {"v4", 44}}},

        1: { // order of additions should not matter
            []testVal{{"v1", 10}, {"v2", 20}},
            []testVal{{"v3", 30}, {"v4", 40}, {"v5", 50}, {"v6", 60}}},

        2: { // order of removals should not matter
            []testVal{{"v1", 10}, {"v2", 20}, {"v3", 30}, {"v4", 40}},
            []testVal{{"v1", 0}, {"v3", 0}, {"v4", 0}}},

        3: { // order of mixed operations should not matter
            []testVal{{"v1", 10}, {"v2", 20}, {"v3", 30}, {"v4", 40}},
            []testVal{{"v1", 0}, {"v3", 0}, {"v2", 22}, {"v5", 50}, {"v4", 44}}},
    }

    for i, tt := range valSetUpdatesOrderTests {
        // create a new set and apply updates
        valSet := createNewValidatorSet(tt.startVals)
        valSetCopy := valSet.Copy()
        valList := createNewValidatorList(tt.updateVals)
        assert.NoError(t, valSetCopy.UpdateWithChangeSet(valList))

        // save the result as expected for next updates
        valSetExp := valSetCopy.Copy()

        // perform at most 20 permutations on the updates and call UpdateWithChangeSet()
        n := len(tt.updateVals)
        maxNumPerms := cmn.MinInt(20, n*n)
        for j := 0; j < maxNumPerms; j++ {
            // create a copy of original set and apply a random permutation of updates
            valSetCopy := valSet.Copy()
            valList := createNewValidatorList(permutation(tt.updateVals))

            // check there was no error and the set is properly scaled and centered.
            assert.NoError(t, valSetCopy.UpdateWithChangeSet(valList),
                "test %v failed for permutation %v", i, valList)
            verifyValidatorSet(t, valSetCopy)

            // verify the resulting set is the same as the expected one
            assert.Equal(t, valSetCopy, valSetExp,
                "test %v failed for permutation %v", i, valList)
        }
    }
}

// This tests the private validator_set.go:applyUpdates() function, used only for additions and changes.
// Should perform a proper merge of updatedVals and startVals.
func TestValSetApplyUpdatesTestsExecute(t *testing.T) {
    valSetUpdatesBasicTests := []struct {
        startVals    []testVal
        updateVals   []testVal
        expectedVals []testVal
    }{
        // additions
        0: { // prepend
            []testVal{{"v4", 44}, {"v5", 55}},
            []testVal{{"v1", 11}},
            []testVal{{"v1", 11}, {"v4", 44}, {"v5", 55}}},
        1: { // append
            []testVal{{"v4", 44}, {"v5", 55}},
            []testVal{{"v6", 66}},
            []testVal{{"v4", 44}, {"v5", 55}, {"v6", 66}}},
        2: { // insert
            []testVal{{"v4", 44}, {"v6", 66}},
            []testVal{{"v5", 55}},
            []testVal{{"v4", 44}, {"v5", 55}, {"v6", 66}}},
        3: { // insert multi
            []testVal{{"v4", 44}, {"v6", 66}, {"v9", 99}},
            []testVal{{"v5", 55}, {"v7", 77}, {"v8", 88}},
            []testVal{{"v4", 44}, {"v5", 55}, {"v6", 66}, {"v7", 77}, {"v8", 88}, {"v9", 99}}},
        // changes
        4: { // head
            []testVal{{"v1", 111}, {"v2", 22}},
            []testVal{{"v1", 11}},
            []testVal{{"v1", 11}, {"v2", 22}}},
        5: { // tail
            []testVal{{"v1", 11}, {"v2", 222}},
            []testVal{{"v2", 22}},
            []testVal{{"v1", 11}, {"v2", 22}}},
        6: { // middle
            []testVal{{"v1", 11}, {"v2", 222}, {"v3", 33}},
            []testVal{{"v2", 22}},
            []testVal{{"v1", 11}, {"v2", 22}, {"v3", 33}}},
        7: { // multi
            []testVal{{"v1", 111}, {"v2", 222}, {"v3", 333}},
            []testVal{{"v1", 11}, {"v2", 22}, {"v3", 33}},
            []testVal{{"v1", 11}, {"v2", 22}, {"v3", 33}}},
        // additions and changes
        8: {
            []testVal{{"v1", 111}, {"v2", 22}},
            []testVal{{"v1", 11}, {"v3", 33}, {"v4", 44}},
            []testVal{{"v1", 11}, {"v2", 22}, {"v3", 33}, {"v4", 44}}},
    }

    for i, tt := range valSetUpdatesBasicTests {
        // create a new validator set with the start values
        valSet := createNewValidatorSet(tt.startVals)

        // applyUpdates() with the update values
        valList := createNewValidatorList(tt.updateVals)
        valSet.applyUpdates(valList)

        // check the new list of validators for proper merge
        assert.Equal(t, getValidatorResults(valSet.Validators), tt.expectedVals, "test %v", i)
        verifyValidatorSet(t, valSet)
    }
}

func permutation(valList []testVal) []testVal {
    if len(valList) == 0 {
        return nil
    }
    permList := make([]testVal, len(valList))
    perm := rand.Perm(len(valList))
    for i, v := range perm {
        permList[v] = valList[i]
    }
    return permList
}

func createNewValidatorList(testValList []testVal) []*Validator {
    valList := make([]*Validator, 0, len(testValList))
    for _, val := range testValList {
        valList = append(valList, newValidator([]byte(val.name), val.power))
    }
    return valList
}

func createNewValidatorSet(testValList []testVal) *ValidatorSet {
    valList := createNewValidatorList(testValList)
    valSet := NewValidatorSet(valList)
    return valSet
}

func verifyValidatorSet(t *testing.T, valSet *ValidatorSet) {
    // verify that the set's cached total voting power equals the sum of all validators' voting powers
    tvp := valSet.TotalVotingPower()
    assert.Equal(t, valSet.totalVotingPower, tvp,
        "expected TVP %d. Got %d, valSet=%s", tvp, valSet.totalVotingPower, valSet)

    // verify that validator priorities are centered
    l := int64(len(valSet.Validators))
    tpp := valSet.TotalVotingPower()
    assert.True(t, tpp <= l || tpp >= -l,
        "expected total priority in (-%d, %d). Got %d", l, l, tpp)

    // verify that priorities are scaled
    dist := computeMaxMinPriorityDiff(valSet)
    assert.True(t, dist <= PriorityWindowSizeFactor*tvp,
        "expected priority distance < %d. Got %d", PriorityWindowSizeFactor*tvp, dist)
}

func BenchmarkUpdates(b *testing.B) {
    const (
        n = 100
        m = 2000
    )
    // Init with n validators
    vs := make([]*Validator, n)
    for j := 0; j < n; j++ {
        vs[j] = newValidator([]byte(fmt.Sprintf("v%d", j)), 100)
    }
    valSet := NewValidatorSet(vs)
    l := len(valSet.Validators)

    // Make m new validators
    newValList := make([]*Validator, m)
    for j := 0; j < m; j++ {
        newValList[j] = newValidator([]byte(fmt.Sprintf("v%d", j+l)), 1000)
    }
    b.ResetTimer()

    for i := 0; i < b.N; i++ {
        // Add m validators to valSetCopy
        valSetCopy := valSet.Copy()
        assert.NoError(b, valSetCopy.UpdateWithChangeSet(newValList))
    }
}

@ -545,10 +545,7 @@ func (voteSet *VoteSet) MakeCommit() *Commit {
    for i, v := range voteSet.votes {
        commitSigs[i] = v.CommitSig()
    }
    return &Commit{
        BlockID:    *voteSet.maj23,
        Precommits: commitSigs,
    }
    return NewCommit(*voteSet.maj23, commitSigs)
}

//--------------------------------------------------------------------------------

@ -20,7 +20,7 @@ const (
    // Must be a string because scripts like dist.sh read this file.
    // XXX: Don't change the name of this variable or you will break
    // automation :)
    TMCoreSemVer = "0.29.2"
    TMCoreSemVer = "0.30.0"

    // ABCISemVer is the semantic version of the ABCI library
    ABCISemVer = "0.15.0"
@ -38,10 +38,12 @@ func (p Protocol) Uint64() uint64 {

var (
    // P2PProtocol versions all p2p behaviour and msgs.
    P2PProtocol Protocol = 6
    // This includes proposer selection.
    P2PProtocol Protocol = 7

    // BlockProtocol versions all block data structures and processing.
    BlockProtocol Protocol = 9
    // This includes validity of blocks and state updates.
    BlockProtocol Protocol = 10
)

//------------------------------------------------------------------------