Compare commits

...

11 Commits

Author SHA1 Message Date
Ethan Buchman
d9d4f3e629 Prepare v0.29.0 (#3184)
* update changelog and upgrading

* add note about max voting power in abci spec

* update version

* changelog
2019-01-21 19:32:10 -05:00
Ethan Buchman
7a8aeff4b0 update spec for Merkle RFC 6962 (#3175)
* spec: specify when MerkleRoot is on hashes

* remove unnecessary hash methods

* update changelog

* fix test
2019-01-21 10:02:57 -05:00
Ethan Buchman
de5a6010f0 fix DynamicVerifier for large validator set changes (#3171)
* base verifier: bc->bv and check chainid

* improve some comments

* comments in dynamic verifier

* fix comment in doc about BaseVerifier

It requires the validator set to perfectly match.

* failing test for #2862

* move errTooMuchChange to types. fixes #2862

* changelog, comments

* ic -> dv

* update comment, link to issue
2019-01-21 09:21:04 -05:00
Ethan Buchman
da95f4aa6d mempool: enforce maxMsgSize limit in CheckTx (#3168)
- fixes #3008
- reactor requires that encoded messages be less than maxMsgSize
- requires that the size of the tx plus amino overhead not exceed maxMsgSize
2019-01-20 17:27:49 -05:00
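
For illustration, a minimal Go sketch of the guard this commit describes (the `maxMsgSize` and `maxTxSize` constants match the mempool reactor diff further down; `checkTxSize` is a hypothetical helper, the real check lives in `Mempool.CheckTx`):

```go
package main

import "fmt"

const (
	maxMsgSize = 1048576        // 1MB reactor message limit
	maxTxSize  = maxMsgSize - 8 // leave room for the amino overhead of TxMessage
)

// checkTxSize rejects txs whose encoded TxMessage would exceed maxMsgSize.
func checkTxSize(tx []byte) error {
	if len(tx) > maxTxSize {
		return fmt.Errorf("tx too large, max size is %d", maxTxSize)
	}
	return nil
}

func main() {
	fmt.Println(checkTxSize(make([]byte, maxTxSize)))   // <nil>
	fmt.Println(checkTxSize(make([]byte, maxTxSize+1))) // tx too large, ...
}
```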
Ethan Buchman
4f8769175e [types] hash of ConsensusParams includes only a subset of fields (#3165)
* types: dont hash entire ConsensusParams

* update encoding spec

* update blockchain spec

* spec: consensus params hash

* changelog
2019-01-19 16:08:57 -05:00
Ismail Khoffi
40c887baf7 Normalize priorities to not exceed total voting power (#3049)
* more proposer priority tests

 - test that we don't reset to zero when updating / adding
 - test that same power validators alternate

* add another test to track / simulate similar behaviour as in #2960

* address some of Chris' review comments

* address some more of Chris' review comments

* temporarily pushing branch with the following changes:
The total power might change if:
   - a validator is added
   - a validator is removed
   - a validator is updated

Decrement the accums (of all validators) directly after any of these events
(by the inverse of the change)

* Fix 2960 by re-normalizing / scaling priorities to be in bounds of total
power, additionally:

 - remove heap where it doesn't make sense
 - avg. only at the end of IncrementProposerPriority instead of each
   iteration
 - update (and slightly improve)
   TestAveragingInIncrementProposerPriorityWithVotingPower to reflect
   above changes


* fix tests

* add comment

* update changelog pending & some minor changes

* comment about division will floor the result & fix typo

* Update TestLargeGenesisValidator:
 - remove TODO and increase large genesis validator's voting power
accordingly

* move changelog entry to P2P Protocol

* Ceil instead of flooring when dividing & update test

* quickly fix failing TestProposerPriorityDoesNotGetResetToZero:

 - divide by Ceil((maxPriority - minPriority) / 2*totalVotingPower)

* fix typo: rename getValWitMostPriority -> getValWithMostPriority

* test proposer frequencies

* return absolute value for diff. keep testing

* use for loop for div

* cleanup, more tests

* spellcheck

* get rid of using floats: manually ceil where necessary

* Remove float, simplify, fix tests to match chris's proof (#3157)
2019-01-19 15:55:08 -05:00
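
For illustration, a rough Go sketch of the re-normalization idea described in the commit message above (hypothetical `validator` struct and `scalePriorities` helper; the real logic lives in `types.ValidatorSet`):

```go
package main

import "fmt"

// validator is a hypothetical stand-in for types.Validator, for illustration only.
type validator struct {
	Address          string
	VotingPower      int64
	ProposerPriority int64
}

// scalePriorities divides all priorities by ceil((max-min) / (2*totalVotingPower))
// whenever the spread exceeds 2*totalVotingPower, keeping priorities in bounds
// without using floats.
func scalePriorities(vals []validator, totalVotingPower int64) {
	if len(vals) == 0 {
		return
	}
	maxP, minP := vals[0].ProposerPriority, vals[0].ProposerPriority
	for _, v := range vals[1:] {
		if v.ProposerPriority > maxP {
			maxP = v.ProposerPriority
		}
		if v.ProposerPriority < minP {
			minP = v.ProposerPriority
		}
	}
	diff, bound := maxP-minP, 2*totalVotingPower
	if diff <= bound || bound <= 0 {
		return
	}
	ratio := (diff + bound - 1) / bound // ceil without floats
	for i := range vals {
		vals[i].ProposerPriority /= ratio
	}
}

func main() {
	vals := []validator{
		{"v1", 10, 1000000},
		{"v2", 10, -1000000},
	}
	scalePriorities(vals, 20)
	fmt.Println(vals) // priorities scaled down to stay within 2*totalVotingPower
}
```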
Ethan Buchman
d3e8889411 update btcd fork for v0.1.1 (#3164)
* update btcd fork for v0.1.1

* changelog
2019-01-19 14:08:41 -05:00
Ethan Buchman
d17969e378 Merge pull request #3154 from tendermint/master
Merge master back to develop
2019-01-18 12:39:21 -05:00
Ethan Buchman
07263298bd Merge pull request #3153 from tendermint/release/v0.28.1
Release/v0.28.1
2019-01-18 12:38:48 -05:00
Ethan Buchman
5a2e69df81 changelog and version 2019-01-18 12:11:02 -05:00
Ethan Buchman
8fd8f800d0 Bucky/fix evidence halt (#34)
* consensus: createProposalBlock function

* blockExecutor.CreateProposalBlock

- factored out of consensus pkg into a method on blockExec
- new private interfaces for mempool ("txNotifier") and evpool with one function each
- consensus tests still require more mempool methods

* failing test for CreateProposalBlock

* Fix bug in including evidence in block

* evidence: change maxBytes to maxSize

* MaxEvidencePerBlock

- changed to return both the max number and the max bytes
- preparation for #2590

* changelog

* fix linter

* Fix from review

Co-Authored-By: ebuchman <ethan@coinculture.info>
2019-01-17 21:46:40 -05:00
45 changed files with 1124 additions and 418 deletions

View File

@@ -1,5 +1,90 @@
# Changelog
## v0.29.0
*January 21, 2019*
Special thanks to external contributors on this release:
@bradyjoestar, @kunaldhariwal, @gauthamzz, @hrharder
This release is primarily about making some breaking changes to
the Block protocol version before Cosmos launch, and about fixing more issues
in the proposer selection algorithm discovered on Cosmos testnets.
The Block protocol changes include using a standard Merkle tree format (RFC 6962),
fixing some inconsistencies between field orders in Vote and Proposal structs,
and constraining the hash of the ConsensusParams to include only a few fields.
The proposer selection algorithm saw significant progress,
including a [formal proof by @cwgoes for the base-case in Idris](https://github.com/cwgoes/tm-proposer-idris)
and a [much more detailed specification (still in progress) by
@ancazamfir](https://github.com/tendermint/tendermint/pull/3140).
Fixes to the proposer selection algorithm include normalizing the proposer
priorities to mitigate the effects of large changes to the validator set.
That said, we just discovered [another bug](https://github.com/tendermint/tendermint/issues/3181),
which will be fixed in the next breaking release.
While we are trying to stabilize the Block protocol to preserve compatibility
with old chains, there may be some final changes yet to come before Cosmos
launch as we continue to audit and test the software.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
* CLI/RPC/Config
* Apps
- [state] [\#3049](https://github.com/tendermint/tendermint/issues/3049) Total voting power of the validator set is upper bounded by
`MaxInt64 / 8`. Apps must ensure they do not return changes to the validator
set that cause this maximum to be exceeded.
* Go API
- [node] [\#3082](https://github.com/tendermint/tendermint/issues/3082) MetricsProvider now requires you to pass a chain ID
- [types] [\#2713](https://github.com/tendermint/tendermint/issues/2713) Rename `TxProof.LeafHash` to `TxProof.Leaf`
- [crypto/merkle] [\#2713](https://github.com/tendermint/tendermint/issues/2713) `SimpleProof.Verify` takes a `leaf` instead of a
`leafHash` and performs the hashing itself
* Blockchain Protocol
* [crypto/merkle] [\#2713](https://github.com/tendermint/tendermint/issues/2713) Merkle trees now match the RFC 6962 specification
* [types] [\#3078](https://github.com/tendermint/tendermint/issues/3078) Re-order Timestamp and BlockID in CanonicalVote so it's
consistent with CanonicalProposal (BlockID comes
first)
* [types] [\#3165](https://github.com/tendermint/tendermint/issues/3165) Hash of ConsensusParams only includes BlockSize.MaxBytes and
BlockSize.MaxGas
* P2P Protocol
- [consensus] [\#3049](https://github.com/tendermint/tendermint/issues/3049) Normalize priorities so they do not exceed `2*TotalVotingPower`, mitigating unfair proposer selection that heavily prefers earlier-joined validators when a large, early-bonded validator unbonds
### FEATURES:
### IMPROVEMENTS:
- [rpc] [\#3065](https://github.com/tendermint/tendermint/issues/3065) Return maxPerPage (100), not defaultPerPage (30) if `per_page` is greater than the max 100.
- [instrumentation] [\#3082](https://github.com/tendermint/tendermint/issues/3082) Add `chain_id` label for all metrics
### BUG FIXES:
- [crypto] [\#3164](https://github.com/tendermint/tendermint/issues/3164) Update `btcd` fork for rare signRFC6979 bug
- [lite] [\#3171](https://github.com/tendermint/tendermint/issues/3171) Fix verifying large validator set changes
- [log] [\#3125](https://github.com/tendermint/tendermint/issues/3125) Fix year format
- [mempool] [\#3168](https://github.com/tendermint/tendermint/issues/3168) Limit tx size to fit in the max reactor msg size
- [scripts] [\#3147](https://github.com/tendermint/tendermint/issues/3147) Fix json2wal for large block parts (@bradyjoestar)
## v0.28.1
*January 18th, 2019*
Special thanks to external contributors on this release:
@HaoyangLiu
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BUG FIXES:
- [consensus] Fix consensus halt from proposing blocks with too much evidence
## v0.28.0
*January 16th, 2019*

View File

@@ -1,4 +1,4 @@
## v0.29.0
## v0.30.0
*TBD*
@@ -7,23 +7,17 @@ Special thanks to external contributors on this release:
### BREAKING CHANGES:
* CLI/RPC/Config
- [types] consistent field order of `CanonicalVote` and `CanonicalProposal`
* Apps
* Go API
- [node] \#3082 MetricsProvider now requires you to pass a chain ID
* Blockchain Protocol
* [merkle] \#2713 Merkle trees now match the RFC 6962 specification
* P2P Protocol
### FEATURES:
### IMPROVEMENTS:
- [rpc] \#3065 return maxPerPage (100), not defaultPerPage (30) if `per_page` is greater than the max 100.
- [instrumentation] \#3082 add 'chain_id' label for all metrics
### BUG FIXES:
- [log] \#3060 fix year format

Gopkg.lock generated
View File

@@ -361,11 +361,12 @@
revision = "6b91fda63f2e36186f1c9d0e48578defb69c5d43"
[[projects]]
digest = "1:605b6546f3f43745695298ec2d342d3e952b6d91cdf9f349bea9315f677d759f"
digest = "1:83f5e189eea2baad419a6a410984514266ff690075759c87e9ede596809bd0b8"
name = "github.com/tendermint/btcd"
packages = ["btcec"]
pruneopts = "UT"
revision = "e5840949ff4fff0c56f9b6a541e22b63581ea9df"
revision = "80daadac05d1cd29571fccf27002d79667a88b58"
version = "v0.1.1"
[[projects]]
digest = "1:ad9c4c1a4e7875330b1f62906f2830f043a23edb5db997e3a5ac5d3e6eadf80a"

View File

@@ -75,6 +75,10 @@
name = "github.com/prometheus/client_golang"
version = "^0.9.1"
[[constraint]]
name = "github.com/tendermint/btcd"
version = "v0.1.1"
###################################
## Some repos dont have releases.
## Pin to revision
@@ -92,9 +96,6 @@
name = "github.com/btcsuite/btcutil"
revision = "d4cc87b860166d00d6b5b9e0d3b3d71d6088d4d4"
[[constraint]]
name = "github.com/tendermint/btcd"
revision = "e5840949ff4fff0c56f9b6a541e22b63581ea9df"
[[constraint]]
name = "github.com/rcrowley/go-metrics"

View File

@@ -3,6 +3,28 @@
This guide provides steps to be followed when you upgrade your applications to
a newer version of Tendermint Core.
## v0.29.0
This release contains some breaking changes to the block and p2p protocols,
and will not be compatible with any previous versions of the software, primarily
due to changes in how various data structures are hashed.
Any implementations of Tendermint blockchain verification, including lite clients,
will need to be updated. For specific details:
- [Merkle tree](./docs/spec/blockchain/encoding.md#merkle-trees)
- [ConsensusParams](./docs/spec/blockchain/state.md#consensusparams)
There was also a small change to field ordering in the vote struct. Any
implementations of an out-of-process validator (like a Key-Management Server)
will need to be updated. For specific details:
- [Vote](https://github.com/tendermint/tendermint/blob/develop/docs/spec/consensus/signing.md#votes)
Finally, the proposer selection algorithm continues to evolve. See the
[work-in-progress
specification](https://github.com/tendermint/tendermint/pull/3140).
For everything else, please see the [CHANGELOG](./CHANGELOG.md#v0.29.0).
## v0.28.0
This release breaks the format for the `priv_validator.json` file

View File

@@ -10,6 +10,7 @@ import (
"github.com/tendermint/tendermint/abci/example/code"
abci "github.com/tendermint/tendermint/abci/types"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
)
@@ -17,12 +18,17 @@ func init() {
config = ResetConfig("consensus_mempool_test")
}
// for testing
func assertMempool(txn txNotifier) sm.Mempool {
return txn.(sm.Mempool)
}
func TestMempoolNoProgressUntilTxsAvailable(t *testing.T) {
config := ResetConfig("consensus_mempool_txs_available_test")
config.Consensus.CreateEmptyBlocks = false
state, privVals := randGenesisState(1, false, 10)
cs := newConsensusStateWithConfig(config, state, privVals[0], NewCounterApplication())
cs.mempool.EnableTxsAvailable()
assertMempool(cs.txNotifier).EnableTxsAvailable()
height, round := cs.Height, cs.Round
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock)
startTestRound(cs, height, round)
@@ -40,7 +46,7 @@ func TestMempoolProgressAfterCreateEmptyBlocksInterval(t *testing.T) {
config.Consensus.CreateEmptyBlocksInterval = ensureTimeout
state, privVals := randGenesisState(1, false, 10)
cs := newConsensusStateWithConfig(config, state, privVals[0], NewCounterApplication())
cs.mempool.EnableTxsAvailable()
assertMempool(cs.txNotifier).EnableTxsAvailable()
height, round := cs.Height, cs.Round
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock)
startTestRound(cs, height, round)
@@ -55,7 +61,7 @@ func TestMempoolProgressInHigherRound(t *testing.T) {
config.Consensus.CreateEmptyBlocks = false
state, privVals := randGenesisState(1, false, 10)
cs := newConsensusStateWithConfig(config, state, privVals[0], NewCounterApplication())
cs.mempool.EnableTxsAvailable()
assertMempool(cs.txNotifier).EnableTxsAvailable()
height, round := cs.Height, cs.Round
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock)
newRoundCh := subscribe(cs.eventBus, types.EventQueryNewRound)
@@ -91,7 +97,7 @@ func deliverTxsRange(cs *ConsensusState, start, end int) {
for i := start; i < end; i++ {
txBytes := make([]byte, 8)
binary.BigEndian.PutUint64(txBytes, uint64(i))
err := cs.mempool.CheckTx(txBytes, nil)
err := assertMempool(cs.txNotifier).CheckTx(txBytes, nil)
if err != nil {
panic(fmt.Sprintf("Error after CheckTx: %v", err))
}
@@ -141,7 +147,7 @@ func TestMempoolRmBadTx(t *testing.T) {
// Try to send the tx through the mempool.
// CheckTx should not err, but the app should return a bad abci code
// and the tx should get removed from the pool
err := cs.mempool.CheckTx(txBytes, func(r *abci.Response) {
err := assertMempool(cs.txNotifier).CheckTx(txBytes, func(r *abci.Response) {
if r.GetCheckTx().Code != code.CodeTypeBadNonce {
t.Fatalf("expected checktx to return bad nonce, got %v", r)
}
@@ -153,7 +159,7 @@ func TestMempoolRmBadTx(t *testing.T) {
// check for the tx
for {
txs := cs.mempool.ReapMaxBytesMaxGas(int64(len(txBytes)), -1)
txs := assertMempool(cs.txNotifier).ReapMaxBytesMaxGas(int64(len(txBytes)), -1)
if len(txs) == 0 {
emptyMempoolCh <- struct{}{}
return

View File

@@ -225,7 +225,7 @@ func TestReactorCreatesBlockWhenEmptyBlocksFalse(t *testing.T) {
defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)
// send a tx
if err := css[3].mempool.CheckTx([]byte{1, 2, 3}, nil); err != nil {
if err := assertMempool(css[3].txNotifier).CheckTx([]byte{1, 2, 3}, nil); err != nil {
//t.Fatal(err)
}
@@ -448,7 +448,7 @@ func waitForAndValidateBlock(t *testing.T, n int, activeVals map[string]struct{}
err := validateBlock(newBlock, activeVals)
assert.Nil(t, err)
for _, tx := range txs {
err := css[j].mempool.CheckTx(tx, nil)
err := assertMempool(css[j].txNotifier).CheckTx(tx, nil)
assert.Nil(t, err)
}
}, css)

View File

@@ -137,7 +137,7 @@ func (pb *playback) replayReset(count int, newStepCh chan interface{}) error {
pb.cs.Wait()
newCS := NewConsensusState(pb.cs.config, pb.genesisState.Copy(), pb.cs.blockExec,
pb.cs.blockStore, pb.cs.mempool, pb.cs.evpool)
pb.cs.blockStore, pb.cs.txNotifier, pb.cs.evpool)
newCS.SetEventBus(pb.cs.eventBus)
newCS.startForReplay()

View File

@@ -87,7 +87,7 @@ func sendTxs(cs *ConsensusState, ctx context.Context) {
return
default:
tx := []byte{byte(i)}
cs.mempool.CheckTx(tx, nil)
assertMempool(cs.txNotifier).CheckTx(tx, nil)
i++
}
}

View File

@@ -57,6 +57,16 @@ func (ti *timeoutInfo) String() string {
return fmt.Sprintf("%v ; %d/%d %v", ti.Duration, ti.Height, ti.Round, ti.Step)
}
// interface to the mempool
type txNotifier interface {
TxsAvailable() <-chan struct{}
}
// interface to the evidence pool
type evidencePool interface {
AddEvidence(types.Evidence) error
}
// ConsensusState handles execution of the consensus algorithm.
// It processes votes and proposals, and upon reaching agreement,
// commits blocks to the chain and executes them against the application.
@@ -68,11 +78,18 @@ type ConsensusState struct {
config *cfg.ConsensusConfig
privValidator types.PrivValidator // for signing votes
// services for creating and executing blocks
blockExec *sm.BlockExecutor
// store blocks and commits
blockStore sm.BlockStore
mempool sm.Mempool
evpool sm.EvidencePool
// create and execute blocks
blockExec *sm.BlockExecutor
// notify us if txs are available
txNotifier txNotifier
// add evidence to the pool
// when it's detected
evpool evidencePool
// internal state
mtx sync.RWMutex
@@ -128,15 +145,15 @@ func NewConsensusState(
state sm.State,
blockExec *sm.BlockExecutor,
blockStore sm.BlockStore,
mempool sm.Mempool,
evpool sm.EvidencePool,
txNotifier txNotifier,
evpool evidencePool,
options ...StateOption,
) *ConsensusState {
cs := &ConsensusState{
config: config,
blockExec: blockExec,
blockStore: blockStore,
mempool: mempool,
txNotifier: txNotifier,
peerMsgQueue: make(chan msgInfo, msgQueueSize),
internalMsgQueue: make(chan msgInfo, msgQueueSize),
timeoutTicker: NewTimeoutTicker(),
@@ -484,7 +501,7 @@ func (cs *ConsensusState) updateToState(state sm.State) {
// If state isn't further out than cs.state, just ignore.
// This happens when SwitchToConsensus() is called in the reactor.
// We don't want to reset e.g. the Votes, but we still want to
// signal the new round step, because other services (eg. mempool)
// signal the new round step, because other services (eg. txNotifier)
// depend on having an up-to-date peer state!
if !cs.state.IsEmpty() && (state.LastBlockHeight <= cs.state.LastBlockHeight) {
cs.Logger.Info("Ignoring updateToState()", "newHeight", state.LastBlockHeight+1, "oldHeight", cs.state.LastBlockHeight+1)
@@ -599,7 +616,7 @@ func (cs *ConsensusState) receiveRoutine(maxSteps int) {
var mi msgInfo
select {
case <-cs.mempool.TxsAvailable():
case <-cs.txNotifier.TxsAvailable():
cs.handleTxsAvailable()
case mi = <-cs.peerMsgQueue:
cs.wal.Write(mi)
@@ -921,20 +938,8 @@ func (cs *ConsensusState) createProposalBlock() (block *types.Block, blockParts
return
}
maxBytes := cs.state.ConsensusParams.BlockSize.MaxBytes
maxGas := cs.state.ConsensusParams.BlockSize.MaxGas
// bound evidence to 1/10th of the block
evidence := cs.evpool.PendingEvidence(types.MaxEvidenceBytesPerBlock(maxBytes))
// Mempool validated transactions
txs := cs.mempool.ReapMaxBytesMaxGas(types.MaxDataBytes(
maxBytes,
cs.state.Validators.Size(),
len(evidence),
), maxGas)
proposerAddr := cs.privValidator.GetPubKey().Address()
block, parts := cs.state.MakeBlock(cs.Height, txs, commit, evidence, proposerAddr)
return block, parts
return cs.blockExec.CreateProposalBlock(cs.Height, cs.state, commit, proposerAddr)
}
// Enter: `timeoutPropose` after entering Propose.

View File

@@ -166,6 +166,11 @@ the tags will be hashed into the next block header.
The application may set the validator set during InitChain, and update it during
EndBlock.
Note that the maximum total power of the validator set is bounded by
`MaxTotalVotingPower = MaxInt64 / 8`. Applications are responsible for ensuring
they do not make changes to the validator set that cause it to exceed this
limit.
### InitChain
ResponseInitChain can return a list of validators.
@@ -206,6 +211,7 @@ following rules:
- if the validator does not already exist, it will be added to the validator
set with the given power
- if the validator does already exist, its power will be adjusted to the given power
- the total power of the new validator set must not exceed MaxTotalVotingPower
Note the updates returned in block `H` will only take effect at block `H+2`.
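
For illustration, a sketch in Go of the kind of check an application can perform before returning validator updates. It assumes a simple address-to-power map; `checkValidatorUpdates` is a hypothetical helper, not part of the ABCI API:

```go
package main

import (
	"errors"
	"fmt"
	"math"
)

// MaxTotalVotingPower mirrors the bound described above: MaxInt64 / 8.
const MaxTotalVotingPower int64 = math.MaxInt64 / 8

// checkValidatorUpdates applies updates (power 0 removes a validator) to the
// current powers and verifies the resulting total stays within the bound.
func checkValidatorUpdates(current, updates map[string]int64) error {
	next := make(map[string]int64, len(current))
	for addr, power := range current {
		next[addr] = power
	}
	for addr, power := range updates {
		if power == 0 {
			delete(next, addr)
			continue
		}
		next[addr] = power
	}
	var total int64
	for _, power := range next {
		total += power
	}
	if total > MaxTotalVotingPower {
		return errors.New("validator updates would exceed MaxTotalVotingPower")
	}
	return nil
}

func main() {
	current := map[string]int64{"val1": 10, "val2": 20}
	updates := map[string]int64{"val2": 0, "val3": 5}
	fmt.Println(checkValidatorUpdates(current, updates)) // <nil>
}
```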

View File

@@ -51,7 +51,7 @@ type Header struct {
// hashes of block data
LastCommitHash []byte // commit from validators from the last block
DataHash []byte // Merkle root of transactions
DataHash []byte // MerkleRoot of transaction hashes
// hashes from the app output from the prev block
ValidatorsHash []byte // validators for the current block
@@ -83,25 +83,27 @@ type Version struct {
## BlockID
The `BlockID` contains two distinct Merkle roots of the block.
The first, used as the block's main hash, is the Merkle root
of all the fields in the header. The second, used for secure gossipping of
the block during consensus, is the Merkle root of the complete serialized block
cut into parts. The `BlockID` includes these two hashes, as well as the number of
parts.
The first, used as the block's main hash, is the MerkleRoot
of all the fields in the header (ie. `MerkleRoot(header)`).
The second, used for secure gossipping of the block during consensus,
is the MerkleRoot of the complete serialized block
cut into parts (ie. `MerkleRoot(MakeParts(block))`).
The `BlockID` includes these two hashes, as well as the number of
parts (ie. `len(MakeParts(block))`).
```go
type BlockID struct {
Hash []byte
Parts PartsHeader
PartsHeader PartSetHeader
}
type PartsHeader struct {
Hash []byte
type PartSetHeader struct {
Total int32
Hash []byte
}
```
TODO: link to details of merkle sums.
See [MerkleRoot](/docs/spec/blockchain/encoding.md#MerkleRoot) for details.
## Time
@@ -142,12 +144,12 @@ The vote includes information about the validator signing it.
```go
type Vote struct {
Type SignedMsgType // byte
Type byte
Height int64
Round int
Timestamp time.Time
BlockID BlockID
ValidatorAddress Address
Timestamp Time
ValidatorAddress []byte
ValidatorIndex int
Signature []byte
}
@@ -160,8 +162,8 @@ a _precommit_ has `vote.Type == 2`.
## Signature
Signatures in Tendermint are raw bytes representing the underlying signature.
The only signature scheme currently supported for Tendermint validators is
ED25519. The signature is the raw 64-byte ED25519 signature.
See the [signature spec](/docs/spec/blockchain/encoding.md#key-types) for more.
## EvidenceData
@@ -188,6 +190,8 @@ type DuplicateVoteEvidence struct {
}
```
See the [pubkey spec](/docs/spec/blockchain/encoding.md#key-types) for more.
## Validation
Here we describe the validation rules for every element in a block.
@@ -205,7 +209,7 @@ the current version of the `state` corresponds to the state
after executing transactions from the `prevBlock`.
Elements of an object are accessed as expected,
ie. `block.Header`.
See [here](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/state.md) for the definition of `state`.
See the [definition of `State`](/docs/spec/blockchain/state.md).
### Header
@@ -284,28 +288,25 @@ The first block has `block.Header.TotalTxs = block.Header.NumberTxs`.
LastBlockID is the previous block's BlockID:
```go
prevBlockParts := MakeParts(prevBlock, state.LastConsensusParams.BlockGossip.BlockPartSize)
prevBlockParts := MakeParts(prevBlock)
block.Header.LastBlockID == BlockID {
Hash: SimpleMerkleRoot(prevBlock.Header),
Hash: MerkleRoot(prevBlock.Header),
PartsHeader{
Hash: SimpleMerkleRoot(prevBlockParts),
Hash: MerkleRoot(prevBlockParts),
Total: len(prevBlockParts),
},
}
```
Note: it depends on the ConsensusParams,
which are held in the `state` and may be updated by the application.
The first block has `block.Header.LastBlockID == BlockID{}`.
### LastCommitHash
```go
block.Header.LastCommitHash == SimpleMerkleRoot(block.LastCommit)
block.Header.LastCommitHash == MerkleRoot(block.LastCommit.Precommits)
```
Simple Merkle root of the votes included in the block.
MerkleRoot of the votes included in the block.
These are the votes that committed the previous block.
The first block has `block.Header.LastCommitHash == []byte{}`
@@ -313,37 +314,42 @@ The first block has `block.Header.LastCommitHash == []byte{}`
### DataHash
```go
block.Header.DataHash == SimpleMerkleRoot(block.Txs.Txs)
block.Header.DataHash == MerkleRoot(Hashes(block.Txs.Txs))
```
Simple Merkle root of the transactions included in the block.
MerkleRoot of the hashes of transactions included in the block.
Note the transactions are hashed before being included in the Merkle tree,
so the leaves of the Merkle tree are the hashes, not the transactions
themselves. This is because transaction hashes are regularly used as identifiers for
transactions.
### ValidatorsHash
```go
block.ValidatorsHash == SimpleMerkleRoot(state.Validators)
block.ValidatorsHash == MerkleRoot(state.Validators)
```
Simple Merkle root of the current validator set that is committing the block.
MerkleRoot of the current validator set that is committing the block.
This can be used to validate the `LastCommit` included in the next block.
### NextValidatorsHash
```go
block.NextValidatorsHash == SimpleMerkleRoot(state.NextValidators)
block.NextValidatorsHash == MerkleRoot(state.NextValidators)
```
Simple Merkle root of the next validator set that will be the validator set that commits the next block.
MerkleRoot of the next validator set that will be the validator set that commits the next block.
This is included so that the current validator set gets a chance to sign the
next validator set's Merkle root.
### ConsensusParamsHash
### ConsensusHash
```go
block.ConsensusParamsHash == TMHASH(amino(state.ConsensusParams))
block.ConsensusHash == state.ConsensusParams.Hash()
```
Hash of the amino-encoded consensus parameters.
Hash of the amino-encoding of a subset of the consensus parameters.
### AppHash
@@ -358,20 +364,20 @@ The first block has `block.Header.AppHash == []byte{}`.
### LastResultsHash
```go
block.ResultsHash == SimpleMerkleRoot(state.LastResults)
block.ResultsHash == MerkleRoot(state.LastResults)
```
Simple Merkle root of the results of the transactions in the previous block.
MerkleRoot of the results of the transactions in the previous block.
The first block has `block.Header.ResultsHash == []byte{}`.
## EvidenceHash
```go
block.EvidenceHash == SimpleMerkleRoot(block.Evidence)
block.EvidenceHash == MerkleRoot(block.Evidence)
```
Simple Merkle root of the evidence of Byzantine behaviour included in this block.
MerkleRoot of the evidence of Byzantine behaviour included in this block.
### ProposerAddress

View File

@@ -30,6 +30,12 @@ For example, the byte-array `[0xA, 0xB]` would be encoded as `0x020A0B`,
while a byte-array containing 300 entries beginning with `[0xA, 0xB, ...]` would
be encoded as `0xAC020A0B...` where `0xAC02` is the UVarint encoding of 300.
## Hashing
Tendermint uses `SHA256` as its hash function.
Objects are always Amino encoded before being hashed.
So `SHA256(obj)` is short for `SHA256(AminoEncode(obj))`.
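
For illustration, a minimal Go sketch of the rule above, assuming the input bytes are already the amino encoding of some object (here the example encoding of `[0xA, 0xB]` from the previous section):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// hashEncoded returns SHA256 of an already amino-encoded object, i.e. the
// SHA256(AminoEncode(obj)) rule described above.
func hashEncoded(encoded []byte) []byte {
	h := sha256.Sum256(encoded)
	return h[:]
}

func main() {
	encoded := []byte{0x02, 0x0A, 0x0B} // amino encoding of the byte-array [0xA, 0xB]
	fmt.Printf("%X\n", hashEncoded(encoded))
}
```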
## Public Key Cryptography
Tendermint uses Amino to distinguish between different types of private keys,
@@ -68,23 +74,27 @@ For example, the 33-byte (or 0x21-byte in hex) Secp256k1 pubkey
would be encoded as
`EB5AE98721020BD40F225A57ED383B440CF073BC5539D0341F5767D2BF2D78406D00475A2EE9`
### Addresses
### Key Types
Addresses for each public key types are computed as follows:
Each type specifies its own pubkey, address, and signature format.
#### Ed25519
First 20-bytes of the SHA256 hash of the raw 32-byte public key:
TODO: pubkey
The address is the first 20-bytes of the SHA256 hash of the raw 32-byte public key:
```
address = SHA256(pubkey)[:20]
```
NOTE: before v0.22.0, this was the RIPEMD160 of the Amino encoded public key.
The signature is the raw 64-byte ED25519 signature.
#### Secp256k1
RIPEMD160 hash of the SHA256 hash of the OpenSSL compressed public key:
TODO: pubkey
The address is the RIPEMD160 hash of the SHA256 hash of the OpenSSL compressed public key:
```
address = RIPEMD160(SHA256(pubkey))
@@ -92,12 +102,21 @@ address = RIPEMD160(SHA256(pubkey))
This is the same as Bitcoin.
The signature is the 64-byte concatenation of ECDSA `r` and `s` (ie. `r || s`),
where `s` is lexicographically less than its inverse, to prevent malleability.
This is like Ethereum, but without the extra byte for pubkey recovery, since
Tendermint assumes the pubkey is always provided anyway.
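
For illustration, a self-contained Go sketch of the two address derivations above (placeholder key bytes; not the actual `crypto` package code):

```go
package main

import (
	"crypto/sha256"
	"fmt"

	"golang.org/x/crypto/ripemd160"
)

// ed25519Address: first 20 bytes of SHA256 of the raw 32-byte public key.
func ed25519Address(pubkey []byte) []byte {
	h := sha256.Sum256(pubkey)
	return h[:20]
}

// secp256k1Address: RIPEMD160(SHA256(33-byte compressed public key)).
func secp256k1Address(pubkey []byte) []byte {
	sha := sha256.Sum256(pubkey)
	r := ripemd160.New()
	r.Write(sha[:])
	return r.Sum(nil)
}

func main() {
	edPub := make([]byte, 32)   // placeholder raw ed25519 pubkey
	secpPub := make([]byte, 33) // placeholder compressed secp256k1 pubkey
	fmt.Printf("%X\n%X\n", ed25519Address(edPub), secp256k1Address(secpPub))
}
```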
#### Multisig
TODO
## Other Common Types
### BitArray
The BitArray is used in block headers and some consensus messages to signal
whether or not something was done by each validator. BitArray is represented
The BitArray is used in some consensus messages to represent votes received from
validators, or parts received in a block. It is represented
with a struct containing the number of bits (`Bits`) and the bit-array itself
encoded in base64 (`Elems`).
@@ -119,24 +138,27 @@ representing `1` and `0`. Ie. the BitArray `10110` would be JSON encoded as
Part is used to break up blocks into pieces that can be gossiped in parallel
and securely verified using a Merkle tree of the parts.
Part contains the index of the part in the larger set (`Index`), the actual
underlying data of the part (`Bytes`), and a simple Merkle proof that the part is contained in
the larger set (`Proof`).
Part contains the index of the part (`Index`), the actual
underlying data of the part (`Bytes`), and a Merkle proof that the part is contained in
the set (`Proof`).
```go
type Part struct {
Index int
Bytes byte[]
Proof byte[]
Bytes []byte
Proof SimpleProof
}
```
See details of SimpleProof, below.
### MakeParts
Encode an object using Amino and slice it into parts.
Tendermint uses a part size of 65536 bytes.
```go
func MakeParts(obj interface{}, partSize int) []Part
func MakeParts(block Block) []Part
```
## Merkle Trees
@@ -144,12 +166,12 @@ func MakeParts(obj interface{}, partSize int) []Part
For an overview of Merkle trees, see
[wikipedia](https://en.wikipedia.org/wiki/Merkle_tree)
We use the RFC 6962 specification of a merkle tree, instantiated with sha256 as the hash function.
We use the RFC 6962 specification of a merkle tree, with sha256 as the hash function.
Merkle trees are used throughout Tendermint to compute a cryptographic digest of a data structure.
The differences between RFC 6962 and the simplest form of a merkle tree are that:
1) leaf nodes and inner nodes have different hashes.
This is to prevent a proof to an inner node, claiming that it is the hash of the leaf.
1) leaf nodes and inner nodes have different hashes.
This is for "second pre-image resistance", to prevent a proof to an inner node from being valid as the proof of a leaf.
The leaf nodes are `SHA256(0x00 || leaf_data)`, and inner nodes are `SHA256(0x01 || left_hash || right_hash)`.
2) When the number of items isn't a power of two, the left half of the tree is as big as it could be.
@@ -173,46 +195,74 @@ The differences between RFC 6962 and the simplest form a merkle tree are that:
h0 h1 h2 h3 h0 h1 h2 h3 h4 h5
```
### Simple Merkle Root
### MerkleRoot
The function `MerkleRoot` is a simple recursive function defined as follows:
```go
func MerkleRootFromLeafs(leafs [][]byte) []byte{
// SHA256(0x00 || leaf)
func leafHash(leaf []byte) []byte {
return tmhash.Sum(append(0x00, leaf...))
}
// SHA256(0x01 || left || right)
func innerHash(left []byte, right []byte) []byte {
return tmhash.Sum(append(0x01, append(left, right...)...))
}
// largest power of 2 less than k
func getSplitPoint(k int) { ... }
func MerkleRoot(items [][]byte) []byte{
switch len(items) {
case 0:
return nil
case 1:
return leafHash(leafs[0]) // SHA256(0x00 || leafs[0])
return leafHash(items[0])
default:
k := getSplitPoint(len(items)) // largest power of two smaller than items
left := MerkleRootFromLeafs(items[:k])
right := MerkleRootFromLeafs(items[k:])
return innerHash(left, right) // SHA256(0x01 || left || right)
k := getSplitPoint(len(items))
left := MerkleRoot(items[:k])
right := MerkleRoot(items[k:])
return innerHash(left, right)
}
}
```
Note: we will abuse notation and invoke `SimpleMerkleRoot` with arguments of type `struct` or type `[]struct`.
For `struct` arguments, we compute a `[][]byte` containing the hash of each
Note: `MerkleRoot` operates on items which are arbitrary byte arrays, not
necessarily hashes. For items which need to be hashed first, we introduce the
`Hashes` function:
```
func Hashes(items [][]byte) [][]byte {
return SHA256 of each item
}
```
Note: we will abuse notation and invoke `MerkleRoot` with arguments of type `struct` or type `[]struct`.
For `struct` arguments, we compute a `[][]byte` containing the amino encoding of each
field in the struct, in the same order the fields appear in the struct.
For `[]struct` arguments, we compute a `[][]byte` by hashing the individual `struct` elements.
For `[]struct` arguments, we compute a `[][]byte` by amino encoding the individual `struct` elements.
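
For illustration, a self-contained Go version of the recursive `MerkleRoot` pseudocode above (a sketch, not the actual `crypto/merkle` package):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// leafHash returns SHA256(0x00 || leaf).
func leafHash(leaf []byte) []byte {
	h := sha256.Sum256(append([]byte{0x00}, leaf...))
	return h[:]
}

// innerHash returns SHA256(0x01 || left || right).
func innerHash(left, right []byte) []byte {
	data := append([]byte{0x01}, left...)
	h := sha256.Sum256(append(data, right...))
	return h[:]
}

// getSplitPoint returns the largest power of two strictly less than k (k > 1).
func getSplitPoint(k int) int {
	p := 1
	for p*2 < k {
		p *= 2
	}
	return p
}

// merkleRoot computes the RFC 6962 style root of the given items.
func merkleRoot(items [][]byte) []byte {
	switch len(items) {
	case 0:
		return nil
	case 1:
		return leafHash(items[0])
	default:
		k := getSplitPoint(len(items))
		return innerHash(merkleRoot(items[:k]), merkleRoot(items[k:]))
	}
}

func main() {
	items := [][]byte{[]byte("a"), []byte("b"), []byte("c")}
	fmt.Printf("%X\n", merkleRoot(items))
}
```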
### Simple Merkle Proof
Proof that a leaf is in a Merkle tree consists of a simple structure:
Proof that a leaf is in a Merkle tree is composed as follows:
```golang
type SimpleProof struct {
Total int
Index int
LeafHash []byte
Aunts [][]byte
}
```
Which is verified using the following:
Which is verified as follows:
```golang
func (proof SimpleProof) Verify(index, total int, leafHash, rootHash []byte) bool {
computedHash := computeHashFromAunts(index, total, leafHash, proof.Aunts)
func (proof SimpleProof) Verify(rootHash []byte, leaf []byte) bool {
assert(proof.LeafHash == leafHash(leaf))
computedHash := computeHashFromAunts(proof.Index, proof.Total, proof.LeafHash, proof.Aunts)
return computedHash == rootHash
}
@@ -230,22 +280,14 @@ func computeHashFromAunts(index, total int, leafHash []byte, innerHashes [][]byt
if index < numLeft {
leftHash := computeHashFromAunts(index, numLeft, leafHash, innerHashes[:len(innerHashes)-1])
assert(leftHash != nil)
return SimpleHashFromTwoHashes(leftHash, innerHashes[len(innerHashes)-1])
return innerHash(leftHash, innerHashes[len(innerHashes)-1])
}
rightHash := computeHashFromAunts(index-numLeft, total-numLeft, leafHash, innerHashes[:len(innerHashes)-1])
assert(rightHash != nil)
return SimpleHashFromTwoHashes(innerHashes[len(innerHashes)-1], rightHash)
return innerHash(innerHashes[len(innerHashes)-1], rightHash)
}
```
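
For illustration, a self-contained Go sketch of the verification recursion above, checked against a two-leaf tree (the leaf/inner hashing helpers are repeated from the `MerkleRoot` sketch to keep it runnable; this is not the actual `crypto/merkle` package):

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

func leafHash(leaf []byte) []byte { // SHA256(0x00 || leaf)
	h := sha256.Sum256(append([]byte{0x00}, leaf...))
	return h[:]
}

func innerHash(left, right []byte) []byte { // SHA256(0x01 || left || right)
	data := append([]byte{0x01}, left...)
	h := sha256.Sum256(append(data, right...))
	return h[:]
}

func getSplitPoint(k int) int { // largest power of two strictly less than k
	p := 1
	for p*2 < k {
		p *= 2
	}
	return p
}

// computeHashFromAunts recomputes the root from a leaf hash and its aunt
// hashes, ordered from closest to the leaf up to the top of the tree.
func computeHashFromAunts(index, total int, leafH []byte, aunts [][]byte) []byte {
	if total == 1 {
		return leafH
	}
	numLeft := getSplitPoint(total)
	last := aunts[len(aunts)-1]
	if index < numLeft {
		left := computeHashFromAunts(index, numLeft, leafH, aunts[:len(aunts)-1])
		return innerHash(left, last)
	}
	right := computeHashFromAunts(index-numLeft, total-numLeft, leafH, aunts[:len(aunts)-1])
	return innerHash(last, right)
}

func main() {
	// Two-leaf tree: root = innerHash(leafHash(a), leafHash(b)).
	a, b := []byte("a"), []byte("b")
	root := innerHash(leafHash(a), leafHash(b))
	// Proof for leaf "a" (index 0 of 2): its only aunt is leafHash(b).
	ok := bytes.Equal(computeHashFromAunts(0, 2, leafHash(a), [][]byte{leafHash(b)}), root)
	fmt.Println(ok) // true
}
```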
### Simple Tree with Dictionaries
The Simple Tree is used to merkelize a list of items, so to merkelize a
(short) dictionary of key-value pairs, encode the dictionary as an
ordered list of `KVPair` structs. The block hash is such a hash
derived from all the fields of the block `Header`. The state hash is
similarly derived.
### IAVL+ Tree
Because Tendermint only uses a Simple Merkle Tree, application developers are expected to use their own Merkle tree in their applications. For example, the IAVL+ Tree - an immutable self-balancing binary tree for persisting application state - is used by the [Cosmos SDK](https://github.com/cosmos/cosmos-sdk/blob/develop/docs/sdk/core/multistore.md)
@@ -297,4 +339,6 @@ type CanonicalVote struct {
The field ordering and the fixed sized encoding for the first three fields is optimized to ease parsing of SignBytes
in HSMs. It creates fixed offsets for relevant fields that need to be read in this context.
See [#1622](https://github.com/tendermint/tendermint/issues/1622) for more details.
For more details, see the [signing spec](/docs/spec/consensus/signing.md).
Also, see the motivating discussion in
[#1622](https://github.com/tendermint/tendermint/issues/1622).

View File

@@ -60,7 +60,7 @@ When hashing the Validator struct, the address is not included,
because it is redundant with the pubkey.
The `state.Validators`, `state.LastValidators`, and `state.NextValidators` must always be sorted by validator address,
so that there is a canonical order for computing the SimpleMerkleRoot.
so that there is a canonical order for computing the MerkleRoot.
We also define a `TotalVotingPower` function, to return the total voting power:
@@ -78,6 +78,8 @@ func TotalVotingPower(vals []Validators) int64{
ConsensusParams define various limits for blockchain data structures.
Like validator sets, they are set during genesis and can be updated by the application through ABCI.
When hashed, only a subset of the params are included, to allow the params to
evolve without breaking the header.
```go
type ConsensusParams struct {
@@ -86,6 +88,18 @@ type ConsensusParams struct {
Validator
}
type hashedParams struct {
BlockMaxBytes int64
BlockMaxGas int64
}
func (params ConsensusParams) Hash() []byte {
SHA256(hashedParams{
BlockMaxBytes: params.BlockSize.MaxBytes,
BlockMaxGas: params.BlockSize.MaxGas,
})
}
type BlockSize struct {
MaxBytes int64
MaxGas int64

View File

@@ -57,10 +57,10 @@ func (evpool *EvidencePool) PriorityEvidence() []types.Evidence {
return evpool.evidenceStore.PriorityEvidence()
}
// PendingEvidence returns uncommitted evidence up to maxBytes.
// If maxBytes is -1, all evidence is returned.
func (evpool *EvidencePool) PendingEvidence(maxBytes int64) []types.Evidence {
return evpool.evidenceStore.PendingEvidence(maxBytes)
// PendingEvidence returns up to maxNum uncommitted evidence.
// If maxNum is -1, all evidence is returned.
func (evpool *EvidencePool) PendingEvidence(maxNum int64) []types.Evidence {
return evpool.evidenceStore.PendingEvidence(maxNum)
}
// State returns the current state of the evpool.

View File

@@ -86,26 +86,26 @@ func (store *EvidenceStore) PriorityEvidence() (evidence []types.Evidence) {
return l
}
// PendingEvidence returns known uncommitted evidence up to maxBytes.
// If maxBytes is -1, all evidence is returned.
func (store *EvidenceStore) PendingEvidence(maxBytes int64) (evidence []types.Evidence) {
return store.listEvidence(baseKeyPending, maxBytes)
// PendingEvidence returns up to maxNum known, uncommitted evidence.
// If maxNum is -1, all evidence is returned.
func (store *EvidenceStore) PendingEvidence(maxNum int64) (evidence []types.Evidence) {
return store.listEvidence(baseKeyPending, maxNum)
}
// listEvidence lists the evidence for the given prefix key up to maxBytes.
// listEvidence lists up to maxNum pieces of evidence for the given prefix key.
// It is wrapped by PriorityEvidence and PendingEvidence for convenience.
// If maxBytes is -1, there's no cap on the size of returned evidence.
func (store *EvidenceStore) listEvidence(prefixKey string, maxBytes int64) (evidence []types.Evidence) {
var bytes int64
// If maxNum is -1, there's no cap on the amount of evidence returned.
func (store *EvidenceStore) listEvidence(prefixKey string, maxNum int64) (evidence []types.Evidence) {
var count int64
iter := dbm.IteratePrefix(store.db, []byte(prefixKey))
defer iter.Close()
for ; iter.Valid(); iter.Next() {
val := iter.Value()
if maxBytes > 0 && bytes+int64(len(val)) > maxBytes {
if count == maxNum {
return evidence
}
bytes += int64(len(val))
count++
var ei EvidenceInfo
err := cdc.UnmarshalBinaryBare(val, &ei)

View File

@@ -13,3 +13,8 @@ func init() {
cryptoAmino.RegisterAmino(cdc)
types.RegisterEvidences(cdc)
}
// For testing purposes only
func RegisterMockEvidences() {
types.RegisterMockEvidences(cdc)
}

View File

@@ -35,34 +35,40 @@ func NewBaseVerifier(chainID string, height int64, valset *types.ValidatorSet) *
}
// Implements Verifier.
func (bc *BaseVerifier) ChainID() string {
return bc.chainID
func (bv *BaseVerifier) ChainID() string {
return bv.chainID
}
// Implements Verifier.
func (bc *BaseVerifier) Verify(signedHeader types.SignedHeader) error {
func (bv *BaseVerifier) Verify(signedHeader types.SignedHeader) error {
// We can't verify commits older than bc.height.
if signedHeader.Height < bc.height {
// We can't verify commits for a different chain.
if signedHeader.ChainID != bv.chainID {
return cmn.NewError("BaseVerifier chainID is %v, cannot verify chainID %v",
bv.chainID, signedHeader.ChainID)
}
// We can't verify commits older than bv.height.
if signedHeader.Height < bv.height {
return cmn.NewError("BaseVerifier height is %v, cannot verify height %v",
bc.height, signedHeader.Height)
bv.height, signedHeader.Height)
}
// We can't verify with the wrong validator set.
if !bytes.Equal(signedHeader.ValidatorsHash,
bc.valset.Hash()) {
return lerr.ErrUnexpectedValidators(signedHeader.ValidatorsHash, bc.valset.Hash())
bv.valset.Hash()) {
return lerr.ErrUnexpectedValidators(signedHeader.ValidatorsHash, bv.valset.Hash())
}
// Do basic sanity checks.
err := signedHeader.ValidateBasic(bc.chainID)
err := signedHeader.ValidateBasic(bv.chainID)
if err != nil {
return cmn.ErrorWrap(err, "in verify")
}
// Check commit signatures.
err = bc.valset.VerifyCommit(
bc.chainID, signedHeader.Commit.BlockID,
err = bv.valset.VerifyCommit(
bv.chainID, signedHeader.Commit.BlockID,
signedHeader.Height, signedHeader.Commit)
if err != nil {
return cmn.ErrorWrap(err, "in verify")

View File

@@ -8,7 +8,7 @@ import (
"github.com/tendermint/tendermint/types"
)
// FullCommit is a signed header (the block header and a commit that signs it),
// FullCommit contains a SignedHeader (the block header and a commit that signs it),
// the validator set which signed the commit, and the next validator set. The
// next validator set (which is proven from the block header) allows us to
// revert to block-by-block updating of lite Verifier's latest validator set,

View File

@@ -13,6 +13,9 @@ import (
"github.com/tendermint/tendermint/types"
)
var _ PersistentProvider = (*DBProvider)(nil)
// DBProvider stores commits and validator sets in a DB.
type DBProvider struct {
logger log.Logger
label string

View File

@@ -53,10 +53,6 @@ SignedHeader, and that the SignedHeader was to be signed by the exact given
validator set, and that the height of the commit is at least height (or
greater).
SignedHeader.Commit may be signed by a different validator set, it can get
verified with a BaseVerifier as long as sufficient signatures from the
previous validator set are present in the commit.
DynamicVerifier - this Verifier implements an auto-update and persistence
strategy to verify any SignedHeader of the blockchain.

View File

@@ -18,12 +18,17 @@ var _ Verifier = (*DynamicVerifier)(nil)
// "source" provider to obtain the needed FullCommits to securely sync with
// validator set changes. It stores properly validated data on the
// "trusted" local system.
// TODO: make this single threaded and create a new
// ConcurrentDynamicVerifier that wraps it with concurrency.
// see https://github.com/tendermint/tendermint/issues/3170
type DynamicVerifier struct {
logger log.Logger
chainID string
// These are only properly validated data, from local system.
logger log.Logger
// Already validated, stored locally
trusted PersistentProvider
// This is a source of new info, like a node rpc, or other import method.
// New info, like a node rpc, or other import method.
source Provider
// pending map to synchronize concurrent verification requests
@@ -35,8 +40,8 @@ type DynamicVerifier struct {
// trusted provider to store validated data and the source provider to
// obtain missing data (e.g. FullCommits).
//
// The trusted provider should be a CacheProvider, MemProvider or
// files.Provider. The source provider should be a client.HTTPProvider.
// The trusted provider should be a DBProvider.
// The source provider should be a client.HTTPProvider.
func NewDynamicVerifier(chainID string, trusted PersistentProvider, source Provider) *DynamicVerifier {
return &DynamicVerifier{
logger: log.NewNopLogger(),
@@ -47,68 +52,71 @@ func NewDynamicVerifier(chainID string, trusted PersistentProvider, source Provi
}
}
func (ic *DynamicVerifier) SetLogger(logger log.Logger) {
func (dv *DynamicVerifier) SetLogger(logger log.Logger) {
logger = logger.With("module", "lite")
ic.logger = logger
ic.trusted.SetLogger(logger)
ic.source.SetLogger(logger)
dv.logger = logger
dv.trusted.SetLogger(logger)
dv.source.SetLogger(logger)
}
// Implements Verifier.
func (ic *DynamicVerifier) ChainID() string {
return ic.chainID
func (dv *DynamicVerifier) ChainID() string {
return dv.chainID
}
// Implements Verifier.
//
// If the validators have changed since the last known time, it looks to
// ic.trusted and ic.source to prove the new validators. On success, it will
// try to store the SignedHeader in ic.trusted if the next
// dv.trusted and dv.source to prove the new validators. On success, it will
// try to store the SignedHeader in dv.trusted if the next
// validator can be sourced.
func (ic *DynamicVerifier) Verify(shdr types.SignedHeader) error {
func (dv *DynamicVerifier) Verify(shdr types.SignedHeader) error {
// Performs synchronization for multi-threaded verification at the same height.
ic.mtx.Lock()
if pending := ic.pendingVerifications[shdr.Height]; pending != nil {
ic.mtx.Unlock()
dv.mtx.Lock()
if pending := dv.pendingVerifications[shdr.Height]; pending != nil {
dv.mtx.Unlock()
<-pending // pending is chan struct{}
} else {
pending := make(chan struct{})
ic.pendingVerifications[shdr.Height] = pending
dv.pendingVerifications[shdr.Height] = pending
defer func() {
close(pending)
ic.mtx.Lock()
delete(ic.pendingVerifications, shdr.Height)
ic.mtx.Unlock()
dv.mtx.Lock()
delete(dv.pendingVerifications, shdr.Height)
dv.mtx.Unlock()
}()
ic.mtx.Unlock()
dv.mtx.Unlock()
}
//Get the exact trusted commit for h, and if it is
// equal to shdr, then don't even verify it,
// and just return nil.
trustedFCSameHeight, err := ic.trusted.LatestFullCommit(ic.chainID, shdr.Height, shdr.Height)
// equal to shdr, then it's already trusted, so
// just return nil.
trustedFCSameHeight, err := dv.trusted.LatestFullCommit(dv.chainID, shdr.Height, shdr.Height)
if err == nil {
// If the trusted commit was loaded successfully and equals shdr, then don't verify it,
// just return nil.
if bytes.Equal(trustedFCSameHeight.SignedHeader.Hash(), shdr.Hash()) {
ic.logger.Info(fmt.Sprintf("Load full commit at height %d from cache, there is no need to verify.", shdr.Height))
dv.logger.Info(fmt.Sprintf("Load full commit at height %d from cache, there is no need to verify.", shdr.Height))
return nil
}
} else if !lerr.IsErrCommitNotFound(err) {
// Return error if it is not CommitNotFound error
ic.logger.Info(fmt.Sprintf("Encountered unknown error in loading full commit at height %d.", shdr.Height))
dv.logger.Info(fmt.Sprintf("Encountered unknown error in loading full commit at height %d.", shdr.Height))
return err
}
// Get the latest known full commit <= h-1 from our trusted providers.
// The full commit at h-1 contains the valset to sign for h.
h := shdr.Height - 1
trustedFC, err := ic.trusted.LatestFullCommit(ic.chainID, 1, h)
prevHeight := shdr.Height - 1
trustedFC, err := dv.trusted.LatestFullCommit(dv.chainID, 1, prevHeight)
if err != nil {
return err
}
if trustedFC.Height() == h {
// sync up to the prevHeight and assert our latest NextValidatorSet
// is the ValidatorSet for the SignedHeader
if trustedFC.Height() == prevHeight {
// Return error if valset doesn't match.
if !bytes.Equal(
trustedFC.NextValidators.Hash(),
@@ -118,11 +126,12 @@ func (ic *DynamicVerifier) Verify(shdr types.SignedHeader) error {
shdr.Header.ValidatorsHash)
}
} else {
// If valset doesn't match...
if !bytes.Equal(trustedFC.NextValidators.Hash(),
// If valset doesn't match, try to update
if !bytes.Equal(
trustedFC.NextValidators.Hash(),
shdr.Header.ValidatorsHash) {
// ... update.
trustedFC, err = ic.updateToHeight(h)
trustedFC, err = dv.updateToHeight(prevHeight)
if err != nil {
return err
}
@@ -137,14 +146,21 @@ func (ic *DynamicVerifier) Verify(shdr types.SignedHeader) error {
}
// Verify the signed header using the matching valset.
cert := NewBaseVerifier(ic.chainID, trustedFC.Height()+1, trustedFC.NextValidators)
cert := NewBaseVerifier(dv.chainID, trustedFC.Height()+1, trustedFC.NextValidators)
err = cert.Verify(shdr)
if err != nil {
return err
}
// By now, the SignedHeader is fully validated and we're synced up to
// SignedHeader.Height - 1. To sync to SignedHeader.Height, we need
// the validator set at SignedHeader.Height + 1 so we can verify the
// SignedHeader.NextValidatorSet.
// TODO: is the ValidateFull below mostly redundant with the BaseVerifier.Verify above?
// See https://github.com/tendermint/tendermint/issues/3174.
// Get the next validator set.
nextValset, err := ic.source.ValidatorSet(ic.chainID, shdr.Height+1)
nextValset, err := dv.source.ValidatorSet(dv.chainID, shdr.Height+1)
if lerr.IsErrUnknownValidators(err) {
// Ignore this error.
return nil
@@ -160,31 +176,31 @@ func (ic *DynamicVerifier) Verify(shdr types.SignedHeader) error {
}
// Validate the full commit. This checks the cryptographic
// signatures of Commit against Validators.
if err := nfc.ValidateFull(ic.chainID); err != nil {
if err := nfc.ValidateFull(dv.chainID); err != nil {
return err
}
// Trust it.
return ic.trusted.SaveFullCommit(nfc)
return dv.trusted.SaveFullCommit(nfc)
}
// verifyAndSave will verify if this is a valid source full commit given the
// best match trusted full commit, and if good, persist to ic.trusted.
// best match trusted full commit, and if good, persist to dv.trusted.
// Returns ErrTooMuchChange when >2/3 of trustedFC did not sign sourceFC.
// Panics if trustedFC.Height() >= sourceFC.Height().
func (ic *DynamicVerifier) verifyAndSave(trustedFC, sourceFC FullCommit) error {
func (dv *DynamicVerifier) verifyAndSave(trustedFC, sourceFC FullCommit) error {
if trustedFC.Height() >= sourceFC.Height() {
panic("should not happen")
}
err := trustedFC.NextValidators.VerifyFutureCommit(
sourceFC.Validators,
ic.chainID, sourceFC.SignedHeader.Commit.BlockID,
dv.chainID, sourceFC.SignedHeader.Commit.BlockID,
sourceFC.SignedHeader.Height, sourceFC.SignedHeader.Commit,
)
if err != nil {
return err
}
return ic.trusted.SaveFullCommit(sourceFC)
return dv.trusted.SaveFullCommit(sourceFC)
}
// updateToHeight will use divide-and-conquer to find a path to h.
@@ -192,29 +208,30 @@ func (ic *DynamicVerifier) verifyAndSave(trustedFC, sourceFC FullCommit) error {
// for height h, using repeated applications of bisection if necessary.
//
// Returns ErrCommitNotFound if source provider doesn't have the commit for h.
func (ic *DynamicVerifier) updateToHeight(h int64) (FullCommit, error) {
func (dv *DynamicVerifier) updateToHeight(h int64) (FullCommit, error) {
// Fetch latest full commit from source.
sourceFC, err := ic.source.LatestFullCommit(ic.chainID, h, h)
sourceFC, err := dv.source.LatestFullCommit(dv.chainID, h, h)
if err != nil {
return FullCommit{}, err
}
// Validate the full commit. This checks the cryptographic
// signatures of Commit against Validators.
if err := sourceFC.ValidateFull(ic.chainID); err != nil {
return FullCommit{}, err
}
// If sourceFC.Height() != h, we can't do it.
if sourceFC.Height() != h {
return FullCommit{}, lerr.ErrCommitNotFound()
}
// Validate the full commit. This checks the cryptographic
// signatures of Commit against Validators.
if err := sourceFC.ValidateFull(dv.chainID); err != nil {
return FullCommit{}, err
}
// Verify latest FullCommit against trusted FullCommits
FOR_LOOP:
for {
// Fetch latest full commit from trusted.
trustedFC, err := ic.trusted.LatestFullCommit(ic.chainID, 1, h)
trustedFC, err := dv.trusted.LatestFullCommit(dv.chainID, 1, h)
if err != nil {
return FullCommit{}, err
}
@@ -224,21 +241,21 @@ FOR_LOOP:
}
// Try to update to full commit with checks.
err = ic.verifyAndSave(trustedFC, sourceFC)
err = dv.verifyAndSave(trustedFC, sourceFC)
if err == nil {
// All good!
return sourceFC, nil
}
// Handle special case when err is ErrTooMuchChange.
if lerr.IsErrTooMuchChange(err) {
if types.IsErrTooMuchChange(err) {
// Divide and conquer.
start, end := trustedFC.Height(), sourceFC.Height()
if !(start < end) {
panic("should not happen")
}
mid := (start + end) / 2
_, err = ic.updateToHeight(mid)
_, err = dv.updateToHeight(mid)
if err != nil {
return FullCommit{}, err
}
@@ -249,8 +266,8 @@ FOR_LOOP:
}
}
func (ic *DynamicVerifier) LastTrustedHeight() int64 {
fc, err := ic.trusted.LatestFullCommit(ic.chainID, 1, 1<<63-1)
func (dv *DynamicVerifier) LastTrustedHeight() int64 {
fc, err := dv.trusted.LatestFullCommit(dv.chainID, 1, 1<<63-1)
if err != nil {
panic("should not happen")
}

View File

@@ -10,6 +10,7 @@ import (
dbm "github.com/tendermint/tendermint/libs/db"
log "github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/types"
)
func TestInquirerValidPath(t *testing.T) {
@@ -70,6 +71,70 @@ func TestInquirerValidPath(t *testing.T) {
assert.Nil(err, "%+v", err)
}
func TestDynamicVerify(t *testing.T) {
trust := NewDBProvider("trust", dbm.NewMemDB())
source := NewDBProvider("source", dbm.NewMemDB())
// 10 commits with one valset, 1 to change,
// 10 commits with the next one
n1, n2 := 10, 10
nCommits := n1 + n2 + 1
maxHeight := int64(nCommits)
fcz := make([]FullCommit, nCommits)
// gen the 2 val sets
chainID := "dynamic-verifier"
power := int64(10)
keys1 := genPrivKeys(5)
vals1 := keys1.ToValidators(power, 0)
keys2 := genPrivKeys(5)
vals2 := keys2.ToValidators(power, 0)
// make some commits with the first
for i := 0; i < n1; i++ {
fcz[i] = makeFullCommit(int64(i), keys1, vals1, vals1, chainID)
}
// update the val set
fcz[n1] = makeFullCommit(int64(n1), keys1, vals1, vals2, chainID)
// make some commits with the new one
for i := n1 + 1; i < nCommits; i++ {
fcz[i] = makeFullCommit(int64(i), keys2, vals2, vals2, chainID)
}
// Save everything in the source
for _, fc := range fcz {
source.SaveFullCommit(fc)
}
// Initialize a Verifier with the initial state.
err := trust.SaveFullCommit(fcz[0])
require.Nil(t, err)
ver := NewDynamicVerifier(chainID, trust, source)
ver.SetLogger(log.TestingLogger())
// fetch the latest from the source
latestFC, err := source.LatestFullCommit(chainID, 1, maxHeight)
require.NoError(t, err)
// try to update to the latest
err = ver.Verify(latestFC.SignedHeader)
require.NoError(t, err)
}
func makeFullCommit(height int64, keys privKeys, vals, nextVals *types.ValidatorSet, chainID string) FullCommit {
height += 1
consHash := []byte("special-params")
appHash := []byte(fmt.Sprintf("h=%d", height))
resHash := []byte(fmt.Sprintf("res=%d", height))
return keys.GenFullCommit(
chainID, height, nil,
vals, nextVals,
appHash, consHash, resHash, 0, len(keys))
}
func TestInquirerVerifyHistorical(t *testing.T) {
assert, require := assert.New(t), require.New(t)
trust := NewDBProvider("trust", dbm.NewMemDB())

View File

@@ -25,12 +25,6 @@ func (e errUnexpectedValidators) Error() string {
e.got, e.want)
}
type errTooMuchChange struct{}
func (e errTooMuchChange) Error() string {
return "Insufficient signatures to validate due to valset changes"
}
type errUnknownValidators struct {
chainID string
height int64
@@ -85,22 +79,6 @@ func IsErrUnexpectedValidators(err error) bool {
return false
}
//-----------------
// ErrTooMuchChange
// ErrTooMuchChange indicates that the underlying validator set was changed by >1/3.
func ErrTooMuchChange() error {
return cmn.ErrorWrap(errTooMuchChange{}, "")
}
func IsErrTooMuchChange(err error) bool {
if err_, ok := err.(cmn.Error); ok {
_, ok := err_.Data().(errTooMuchChange)
return ok
}
return false
}
//-----------------
// ErrUnknownValidators

View File

@@ -6,6 +6,8 @@ import (
"github.com/tendermint/tendermint/types"
)
var _ PersistentProvider = (*multiProvider)(nil)
// multiProvider allows you to place one or more caches in front of a source
// Provider. It runs through them in order until a match is found.
type multiProvider struct {

View File

@@ -1,7 +1,7 @@
package lite
import (
log "github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/types"
)

View File

@@ -65,6 +65,9 @@ var (
// ErrMempoolIsFull means Tendermint & an application can't handle that much load
ErrMempoolIsFull = errors.New("Mempool is full")
// ErrTxTooLarge means the tx is too big to be sent in a message to other peers
ErrTxTooLarge = fmt.Errorf("Tx too large. Max size is %d", maxTxSize)
)
// ErrPreCheck is returned when tx is too big
@@ -309,6 +312,13 @@ func (mem *Mempool) CheckTx(tx types.Tx, cb func(*abci.Response)) (err error) {
return ErrMempoolIsFull
}
// The size of the corresponding amino-encoded TxMessage
// can't be larger than the maxMsgSize, otherwise we can't
// relay it to peers.
if len(tx) > maxTxSize {
return ErrTxTooLarge
}
if mem.preCheck != nil {
if err := mem.preCheck(tx); err != nil {
return ErrPreCheck{err}

View File

@@ -14,10 +14,12 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
amino "github.com/tendermint/go-amino"
"github.com/tendermint/tendermint/abci/example/counter"
"github.com/tendermint/tendermint/abci/example/kvstore"
abci "github.com/tendermint/tendermint/abci/types"
cfg "github.com/tendermint/tendermint/config"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/proxy"
"github.com/tendermint/tendermint/types"
@@ -394,6 +396,60 @@ func TestMempoolCloseWAL(t *testing.T) {
require.Equal(t, 1, len(m3), "expecting the wal match in")
}
// Size of the amino encoded TxMessage is the length of the
// encoded byte array, plus 1 for the struct field, plus 4
// for the amino prefix.
func txMessageSize(tx types.Tx) int {
return amino.ByteSliceSize(tx) + 1 + 4
}
func TestMempoolMaxMsgSize(t *testing.T) {
app := kvstore.NewKVStoreApplication()
cc := proxy.NewLocalClientCreator(app)
mempl := newMempoolWithApp(cc)
testCases := []struct {
len int
err bool
}{
// check small txs. no error
{10, false},
{1000, false},
{1000000, false},
// check around maxTxSize
// changes from no error to error
{maxTxSize - 2, false},
{maxTxSize - 1, false},
{maxTxSize, false},
{maxTxSize + 1, true},
{maxTxSize + 2, true},
// check around maxMsgSize. all error
{maxMsgSize - 1, true},
{maxMsgSize, true},
{maxMsgSize + 1, true},
}
for i, testCase := range testCases {
caseString := fmt.Sprintf("case %d, len %d", i, testCase.len)
tx := cmn.RandBytes(testCase.len)
err := mempl.CheckTx(tx, nil)
msg := &TxMessage{tx}
encoded := cdc.MustMarshalBinaryBare(msg)
require.Equal(t, len(encoded), txMessageSize(tx), caseString)
if !testCase.err {
require.True(t, len(encoded) <= maxMsgSize, caseString)
require.NoError(t, err, caseString)
} else {
require.True(t, len(encoded) > maxMsgSize, caseString)
require.Equal(t, err, ErrTxTooLarge, caseString)
}
}
}
func checksumIt(data []byte) string {
h := md5.New()
h.Write(data)

View File

@@ -6,7 +6,6 @@ import (
"time"
amino "github.com/tendermint/go-amino"
abci "github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/libs/clist"
"github.com/tendermint/tendermint/libs/log"
@@ -18,8 +17,10 @@ import (
const (
MempoolChannel = byte(0x30)
maxMsgSize = 1048576 // 1MB TODO make it configurable
peerCatchupSleepIntervalMS = 100 // If peer is behind, sleep this amount
maxMsgSize = 1048576 // 1MB TODO make it configurable
maxTxSize = maxMsgSize - 8 // account for amino overhead of TxMessage
peerCatchupSleepIntervalMS = 100 // If peer is behind, sleep this amount
)
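As a quick illustration of where the 8-byte budget comes from (a standalone sketch, assuming the same framing the txMessageSize helper in the test above uses: a 4-byte amino prefix, a 1-byte struct field key, and a uvarint-encoded length):
package main

import (
	"encoding/binary"
	"fmt"
)

// Overhead of an amino-encoded TxMessage for a ~1MB tx:
// 4 (prefix) + 1 (field key) + uvarint length (3 bytes for values up to 2^21-1).
func main() {
	const maxMsgSize = 1048576
	buf := make([]byte, binary.MaxVarintLen64)
	n := binary.PutUvarint(buf, maxMsgSize) // 3
	fmt.Println(4 + 1 + n)                  // 8 == maxMsgSize - maxTxSize
}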
// MempoolReactor handles mempool tx broadcasting amongst peers.
@@ -98,11 +99,6 @@ func (memR *MempoolReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) {
}
}
// BroadcastTx is an alias for Mempool.CheckTx. Broadcasting itself happens in peer routines.
func (memR *MempoolReactor) BroadcastTx(tx types.Tx, cb func(*abci.Response)) error {
return memR.Mempool.CheckTx(tx, cb)
}
// PeerState describes the state of a peer.
type PeerState interface {
GetHeight() int64

View File

@@ -15,10 +15,14 @@ import (
"github.com/tendermint/tendermint/abci/example/kvstore"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/crypto/ed25519"
"github.com/tendermint/tendermint/evidence"
cmn "github.com/tendermint/tendermint/libs/common"
dbm "github.com/tendermint/tendermint/libs/db"
"github.com/tendermint/tendermint/libs/log"
mempl "github.com/tendermint/tendermint/mempool"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/privval"
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
tmtime "github.com/tendermint/tendermint/types/time"
@@ -192,3 +196,110 @@ func testFreeAddr(t *testing.T) string {
return fmt.Sprintf("127.0.0.1:%d", ln.Addr().(*net.TCPAddr).Port)
}
// create a proposal block using real and full
// mempool and evidence pool and validate it.
func TestCreateProposalBlock(t *testing.T) {
config := cfg.ResetTestRoot("node_create_proposal")
cc := proxy.NewLocalClientCreator(kvstore.NewKVStoreApplication())
proxyApp := proxy.NewAppConns(cc)
err := proxyApp.Start()
require.Nil(t, err)
defer proxyApp.Stop()
logger := log.TestingLogger()
var height int64 = 1
state, stateDB := state(1, height)
maxBytes := 16384
state.ConsensusParams.BlockSize.MaxBytes = int64(maxBytes)
proposerAddr, _ := state.Validators.GetByIndex(0)
// Make Mempool
memplMetrics := mempl.PrometheusMetrics("node_test")
mempool := mempl.NewMempool(
config.Mempool,
proxyApp.Mempool(),
state.LastBlockHeight,
mempl.WithMetrics(memplMetrics),
mempl.WithPreCheck(sm.TxPreCheck(state)),
mempl.WithPostCheck(sm.TxPostCheck(state)),
)
mempool.SetLogger(logger)
// Make EvidencePool
types.RegisterMockEvidencesGlobal()
evidence.RegisterMockEvidences()
evidenceDB := dbm.NewMemDB()
evidenceStore := evidence.NewEvidenceStore(evidenceDB)
evidencePool := evidence.NewEvidencePool(stateDB, evidenceStore)
evidencePool.SetLogger(logger)
// fill the evidence pool with more evidence
// than can fit in a block
minEvSize := 12
numEv := (maxBytes / types.MaxEvidenceBytesDenominator) / minEvSize
for i := 0; i < numEv; i++ {
ev := types.NewMockRandomGoodEvidence(1, proposerAddr, cmn.RandBytes(minEvSize))
err := evidencePool.AddEvidence(ev)
assert.NoError(t, err)
}
// fill the mempool with more txs
// than can fit in a block
txLength := 1000
for i := 0; i < maxBytes/txLength; i++ {
tx := cmn.RandBytes(txLength)
err := mempool.CheckTx(tx, nil)
assert.NoError(t, err)
}
blockExec := sm.NewBlockExecutor(
stateDB,
logger,
proxyApp.Consensus(),
mempool,
evidencePool,
)
commit := &types.Commit{}
block, _ := blockExec.CreateProposalBlock(
height,
state, commit,
proposerAddr,
)
err = blockExec.ValidateBlock(state, block)
assert.NoError(t, err)
}
func state(nVals int, height int64) (sm.State, dbm.DB) {
vals := make([]types.GenesisValidator, nVals)
for i := 0; i < nVals; i++ {
secret := []byte(fmt.Sprintf("test%d", i))
pk := ed25519.GenPrivKeyFromSecret(secret)
vals[i] = types.GenesisValidator{
pk.PubKey().Address(),
pk.PubKey(),
1000,
fmt.Sprintf("test%d", i),
}
}
s, _ := sm.MakeGenesisState(&types.GenesisDoc{
ChainID: "test-chain",
Validators: vals,
AppHash: nil,
})
// save validators to db for 2 heights
stateDB := dbm.NewMemDB()
sm.SaveState(stateDB, s)
for i := 1; i < int(height); i++ {
s.LastBlockHeight++
s.LastValidators = s.Validators.Copy()
sm.SaveState(stateDB, s)
}
return s, stateDB
}

View File

@@ -72,7 +72,7 @@ func PrometheusMetrics(namespace string, labelsAndValues ...string) *Metrics {
// NopMetrics returns no-op Metrics.
func NopMetrics() *Metrics {
return &Metrics{
Peers: discard.NewGauge(),
Peers: discard.NewGauge(),
PeerReceiveBytesTotal: discard.NewCounter(),
PeerSendBytesTotal: discard.NewCounter(),
PeerPendingSendBytes: discard.NewGauge(),

View File

@@ -29,7 +29,8 @@ type BlockExecutor struct {
// events
eventBus types.BlockEventPublisher
// update these with block results after commit
// manage the mempool lock during commit
// and update both with block results after commit.
mempool Mempool
evpool EvidencePool
@@ -73,6 +74,31 @@ func (blockExec *BlockExecutor) SetEventBus(eventBus types.BlockEventPublisher)
blockExec.eventBus = eventBus
}
// CreateProposalBlock calls state.MakeBlock with evidence from the evpool
// and txs from the mempool. The max bytes must be big enough to fit the commit.
// Up to 1/10th of the block space is allocated for maximum-sized evidence.
// The rest is given to txs, up to the max gas.
func (blockExec *BlockExecutor) CreateProposalBlock(
height int64,
state State, commit *types.Commit,
proposerAddr []byte,
) (*types.Block, *types.PartSet) {
maxBytes := state.ConsensusParams.BlockSize.MaxBytes
maxGas := state.ConsensusParams.BlockSize.MaxGas
// Fetch a limited amount of valid evidence
maxNumEvidence, _ := types.MaxEvidencePerBlock(maxBytes)
evidence := blockExec.evpool.PendingEvidence(maxNumEvidence)
// Fetch a limited amount of valid txs
maxDataBytes := types.MaxDataBytes(maxBytes, state.Validators.Size(), len(evidence))
txs := blockExec.mempool.ReapMaxBytesMaxGas(maxDataBytes, maxGas)
return state.MakeBlock(height, txs, commit, evidence, proposerAddr)
}
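A back-of-the-envelope split of that block space (an illustration only, using the 16384-byte MaxBytes from TestCreateProposalBlock above): at most one tenth goes to evidence, and the remainder, minus header, commit and amino overhead, is what ReapMaxBytesMaxGas may fill with txs.
package main

import "fmt"

func main() {
	maxBytes := int64(16384)
	evidenceBudget := maxBytes / 10                      // MaxEvidenceBytesDenominator == 10
	fmt.Println(evidenceBudget, maxBytes-evidenceBudget) // 1638 14746
}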
// ValidateBlock validates the given block against the given state.
// If the block is invalid, it returns an error.
// Validation does not mutate state, but does require historical information from the stateDB,

View File

@@ -3,6 +3,7 @@ package state
import (
"bytes"
"fmt"
"math"
"math/big"
"testing"
@@ -264,14 +265,133 @@ func TestOneValidatorChangesSaveLoad(t *testing.T) {
}
}
func TestProposerFrequency(t *testing.T) {
// some explicit test cases
testCases := []struct {
powers []int64
}{
// 2 vals
{[]int64{1, 1}},
{[]int64{1, 2}},
{[]int64{1, 100}},
{[]int64{5, 5}},
{[]int64{5, 100}},
{[]int64{50, 50}},
{[]int64{50, 100}},
{[]int64{1, 1000}},
// 3 vals
{[]int64{1, 1, 1}},
{[]int64{1, 2, 3}},
{[]int64{1, 2, 3}},
{[]int64{1, 1, 10}},
{[]int64{1, 1, 100}},
{[]int64{1, 10, 100}},
{[]int64{1, 1, 1000}},
{[]int64{1, 10, 1000}},
{[]int64{1, 100, 1000}},
// 4 vals
{[]int64{1, 1, 1, 1}},
{[]int64{1, 2, 3, 4}},
{[]int64{1, 1, 1, 10}},
{[]int64{1, 1, 1, 100}},
{[]int64{1, 1, 1, 1000}},
{[]int64{1, 1, 10, 100}},
{[]int64{1, 1, 10, 1000}},
{[]int64{1, 1, 100, 1000}},
{[]int64{1, 10, 100, 1000}},
}
for caseNum, testCase := range testCases {
// run each case 5 times to sample different
// initial priorities
for i := 0; i < 5; i++ {
valSet := genValSetWithPowers(testCase.powers)
testProposerFreq(t, caseNum, valSet)
}
}
// some random test cases with up to 300 validators
maxVals := 100
maxPower := 1000
nTestCases := 5
for i := 0; i < nTestCases; i++ {
N := cmn.RandInt() % maxVals
vals := make([]*types.Validator, N)
totalVotePower := int64(0)
for j := 0; j < N; j++ {
votePower := int64(cmn.RandInt() % maxPower)
totalVotePower += votePower
privVal := types.NewMockPV()
pubKey := privVal.GetPubKey()
val := types.NewValidator(pubKey, votePower)
val.ProposerPriority = cmn.RandInt64()
vals[j] = val
}
valSet := types.NewValidatorSet(vals)
valSet.RescalePriorities(totalVotePower)
testProposerFreq(t, i, valSet)
}
}
// new val set with given powers and random initial priorities
func genValSetWithPowers(powers []int64) *types.ValidatorSet {
size := len(powers)
vals := make([]*types.Validator, size)
totalVotePower := int64(0)
for i := 0; i < size; i++ {
totalVotePower += powers[i]
val := types.NewValidator(ed25519.GenPrivKey().PubKey(), powers[i])
val.ProposerPriority = cmn.RandInt64()
vals[i] = val
}
valSet := types.NewValidatorSet(vals)
valSet.RescalePriorities(totalVotePower)
return valSet
}
// test a proposer appears as frequently as expected
func testProposerFreq(t *testing.T, caseNum int, valSet *types.ValidatorSet) {
N := valSet.Size()
totalPower := valSet.TotalVotingPower()
// run the proposer selection and track frequencies
runMult := 1
runs := int(totalPower) * runMult
freqs := make([]int, N)
for i := 0; i < runs; i++ {
prop := valSet.GetProposer()
idx, _ := valSet.GetByAddress(prop.Address)
freqs[idx] += 1
valSet.IncrementProposerPriority(1)
}
// assert frequencies match expected (max off by 1)
for i, freq := range freqs {
_, val := valSet.GetByIndex(i)
expectFreq := int(val.VotingPower) * runMult
gotFreq := freq
abs := int(math.Abs(float64(expectFreq - gotFreq)))
// max bound on expected vs seen freq was proven
// to be 1 for the 2 validator case in
// https://github.com/cwgoes/tm-proposer-idris
// and inferred to generalize to N-1
bound := N - 1
require.True(t, abs <= bound, fmt.Sprintf("Case %d val %d (%d): got %d, expected %d", caseNum, i, N, gotFreq, expectFreq))
}
}
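The proportionality this test checks can be reproduced with a tiny standalone simulation (a sketch using only the bare increment rule: add voting power to every priority, pick the max as proposer, subtract the total from it; rescaling and averaging are omitted here):
package main

import "fmt"

func main() {
	powers := []int64{1, 3}
	prios := make([]int64, len(powers))
	freqs := make([]int, len(powers))
	var total int64
	for _, p := range powers {
		total += p
	}
	for round := int64(0); round < total; round++ {
		for i, p := range powers {
			prios[i] += p // every validator gains its voting power
		}
		best := 0
		for i := range prios {
			if prios[i] > prios[best] {
				best = i
			}
		}
		freqs[best]++        // the highest priority proposes
		prios[best] -= total // and pays the total voting power
	}
	fmt.Println(freqs) // [1 3]: frequencies match voting powers
}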
// TestProposerPriorityDoesNotGetResetToZero assert that we preserve accum when calling updateState
// see https://github.com/tendermint/tendermint/issues/2718
func TestProposerPriorityDoesNotGetResetToZero(t *testing.T) {
// assert that we preserve accum when calling updateState:
// https://github.com/tendermint/tendermint/issues/2718
tearDown, _, state := setupTestCase(t)
defer tearDown(t)
origVotingPower := int64(10)
val1VotingPower := int64(10)
val1PubKey := ed25519.GenPrivKey().PubKey()
val1 := &types.Validator{Address: val1PubKey.Address(), PubKey: val1PubKey, VotingPower: origVotingPower}
val1 := &types.Validator{Address: val1PubKey.Address(), PubKey: val1PubKey, VotingPower: val1VotingPower}
state.Validators = types.NewValidatorSet([]*types.Validator{val1})
state.NextValidators = state.Validators
@@ -288,8 +408,9 @@ func TestProposerPriorityDoesNotGetResetToZero(t *testing.T) {
require.NoError(t, err)
updatedState, err := updateState(state, blockID, &block.Header, abciResponses, validatorUpdates)
assert.NoError(t, err)
assert.Equal(t, -origVotingPower, updatedState.NextValidators.Validators[0].ProposerPriority)
curTotal := val1VotingPower
// one increment step and one validator: 0 + power - total_power == 0
assert.Equal(t, 0+val1VotingPower-curTotal, updatedState.NextValidators.Validators[0].ProposerPriority)
// add a validator
val2PubKey := ed25519.GenPrivKey().PubKey()
@@ -301,22 +422,33 @@ func TestProposerPriorityDoesNotGetResetToZero(t *testing.T) {
assert.NoError(t, err)
require.Equal(t, len(updatedState2.NextValidators.Validators), 2)
_, updatedVal1 := updatedState2.NextValidators.GetByAddress(val1PubKey.Address())
_, addedVal2 := updatedState2.NextValidators.GetByAddress(val2PubKey.Address())
// adding a validator should not lead to a ProposerPriority equal to zero (unless the combination of averaging and
// incrementing would cause so; which is not the case here)
totalPowerBefore2 := origVotingPower // 10
wantVal2ProposerPrio := -(totalPowerBefore2 + (totalPowerBefore2 >> 3)) + val2VotingPower // 89
avg := (0 + wantVal2ProposerPrio) / 2 // 44
wantVal2ProposerPrio -= avg // 45
totalPowerAfter := origVotingPower + val2VotingPower // 110
wantVal2ProposerPrio -= totalPowerAfter // -65
assert.Equal(t, wantVal2ProposerPrio, addedVal2.ProposerPriority) // not zero == -65
totalPowerBefore2 := curTotal
// while adding we compute prio == -1.125 * total:
wantVal2ProposerPrio := -(totalPowerBefore2 + (totalPowerBefore2 >> 3))
wantVal2ProposerPrio = wantVal2ProposerPrio + val2VotingPower
// then increment:
totalPowerAfter := val1VotingPower + val2VotingPower
// mostest:
wantVal2ProposerPrio = wantVal2ProposerPrio - totalPowerAfter
avg := big.NewInt(0).Add(big.NewInt(val1VotingPower), big.NewInt(wantVal2ProposerPrio))
avg.Div(avg, big.NewInt(2))
wantVal2ProposerPrio = wantVal2ProposerPrio - avg.Int64()
wantVal1Prio := 0 + val1VotingPower - avg.Int64()
assert.Equal(t, wantVal1Prio, updatedVal1.ProposerPriority)
assert.Equal(t, wantVal2ProposerPrio, addedVal2.ProposerPriority)
// Updating a validator does not reset the ProposerPriority to zero:
updatedVotingPowVal2 := int64(1)
updateVal := abci.ValidatorUpdate{PubKey: types.TM2PB.PubKey(val2PubKey), Power: updatedVotingPowVal2}
validatorUpdates, err = types.PB2TM.ValidatorUpdates([]abci.ValidatorUpdate{updateVal})
assert.NoError(t, err)
// this will cause the diff of priorities (31)
// to be larger than threshold == 2*totalVotingPower (22):
updatedState3, err := updateState(updatedState2, blockID, &block.Header, abciResponses, validatorUpdates)
assert.NoError(t, err)
@@ -324,11 +456,18 @@ func TestProposerPriorityDoesNotGetResetToZero(t *testing.T) {
_, prevVal1 := updatedState3.Validators.GetByAddress(val1PubKey.Address())
_, updatedVal2 := updatedState3.NextValidators.GetByAddress(val2PubKey.Address())
expectedVal1PrioBeforeAvg := prevVal1.ProposerPriority + prevVal1.VotingPower // -44 + 10 == -34
wantVal2ProposerPrio = wantVal2ProposerPrio + updatedVotingPowVal2 // -64
avg = (wantVal2ProposerPrio + expectedVal1PrioBeforeAvg) / 2 // (-64-34)/2 == -49
wantVal2ProposerPrio = wantVal2ProposerPrio - avg // -15
assert.Equal(t, wantVal2ProposerPrio, updatedVal2.ProposerPriority) // -15
// divide previous priorities by 2 == CEIL(31/22) as diff > threshold:
expectedVal1PrioBeforeAvg := prevVal1.ProposerPriority/2 + prevVal1.VotingPower
wantVal2ProposerPrio = wantVal2ProposerPrio/2 + updatedVotingPowVal2
// val1 will be proposer:
total := val1VotingPower + updatedVotingPowVal2
expectedVal1PrioBeforeAvg = expectedVal1PrioBeforeAvg - total
avgI64 := (wantVal2ProposerPrio + expectedVal1PrioBeforeAvg) / 2
wantVal2ProposerPrio = wantVal2ProposerPrio - avgI64
wantVal1Prio = expectedVal1PrioBeforeAvg - avgI64
assert.Equal(t, wantVal2ProposerPrio, updatedVal2.ProposerPriority)
_, updatedVal1 = updatedState3.NextValidators.GetByAddress(val1PubKey.Address())
assert.Equal(t, wantVal1Prio, updatedVal1.ProposerPriority)
}
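Why the expected values above are averaged via math/big rather than plain int64 (a standalone note, echoing the comment further down about -1/2): Go's integer division truncates toward zero, while big.Int.Div rounds toward negative infinity, which is the behaviour the priority averaging relies on for negative sums.
package main

import (
	"fmt"
	"math/big"
)

func main() {
	fmt.Println(int64(-1) / 2) // 0 (truncated toward zero)
	sum := big.NewInt(-1)
	fmt.Println(sum.Div(sum, big.NewInt(2))) // -1 (rounded toward negative infinity)
}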
func TestProposerPriorityProposerAlternates(t *testing.T) {
@@ -338,9 +477,9 @@ func TestProposerPriorityProposerAlternates(t *testing.T) {
// have the same voting power (and the 2nd was added later).
tearDown, _, state := setupTestCase(t)
defer tearDown(t)
origVotinPower := int64(10)
val1VotingPower := int64(10)
val1PubKey := ed25519.GenPrivKey().PubKey()
val1 := &types.Validator{Address: val1PubKey.Address(), PubKey: val1PubKey, VotingPower: origVotinPower}
val1 := &types.Validator{Address: val1PubKey.Address(), PubKey: val1PubKey, VotingPower: val1VotingPower}
// reset state validators to above validator
state.Validators = types.NewValidatorSet([]*types.Validator{val1})
@@ -361,12 +500,14 @@ func TestProposerPriorityProposerAlternates(t *testing.T) {
assert.NoError(t, err)
// 0 + 10 (initial prio) - 10 (avg) - 10 (mostest - total) = -10
assert.Equal(t, -origVotinPower, updatedState.NextValidators.Validators[0].ProposerPriority)
totalPower := val1VotingPower
wantVal1Prio := 0 + val1VotingPower - totalPower
assert.Equal(t, wantVal1Prio, updatedState.NextValidators.Validators[0].ProposerPriority)
assert.Equal(t, val1PubKey.Address(), updatedState.NextValidators.Proposer.Address)
// add a validator with the same voting power as the first
val2PubKey := ed25519.GenPrivKey().PubKey()
updateAddVal := abci.ValidatorUpdate{PubKey: types.TM2PB.PubKey(val2PubKey), Power: origVotinPower}
updateAddVal := abci.ValidatorUpdate{PubKey: types.TM2PB.PubKey(val2PubKey), Power: val1VotingPower}
validatorUpdates, err = types.PB2TM.ValidatorUpdates([]abci.ValidatorUpdate{updateAddVal})
assert.NoError(t, err)
@@ -386,16 +527,18 @@ func TestProposerPriorityProposerAlternates(t *testing.T) {
_, oldVal1 := updatedState2.Validators.GetByAddress(val1PubKey.Address())
_, updatedVal2 := updatedState2.NextValidators.GetByAddress(val2PubKey.Address())
totalPower := origVotinPower
totalPower = val1VotingPower // no update
v2PrioWhenAddedVal2 := -(totalPower + (totalPower >> 3))
v2PrioWhenAddedVal2 = v2PrioWhenAddedVal2 + origVotinPower // -11 + 10 == -1
v1PrioWhenAddedVal2 := oldVal1.ProposerPriority + origVotinPower // -10 + 10 == 0
v2PrioWhenAddedVal2 = v2PrioWhenAddedVal2 + val1VotingPower // -11 + 10 == -1
v1PrioWhenAddedVal2 := oldVal1.ProposerPriority + val1VotingPower // -10 + 10 == 0
totalPower = 2 * val1VotingPower // now we have two validators with that power
v1PrioWhenAddedVal2 = v1PrioWhenAddedVal2 - totalPower // mostest
// have to express the AVG in big.Ints as -1/2 == -1 in big.Int while -1/2 == 0 in int64
avgSum := big.NewInt(0).Add(big.NewInt(v2PrioWhenAddedVal2), big.NewInt(v1PrioWhenAddedVal2))
avg := avgSum.Div(avgSum, big.NewInt(2))
expectedVal2Prio := v2PrioWhenAddedVal2 - avg.Int64()
totalPower = 2 * origVotinPower // 10 + 10
expectedVal1Prio := oldVal1.ProposerPriority + origVotinPower - avg.Int64() - totalPower
totalPower = 2 * val1VotingPower // 10 + 10
expectedVal1Prio := oldVal1.ProposerPriority + val1VotingPower - avg.Int64() - totalPower
// val1's ProposerPriority story: -10 (see above) + 10 (voting pow) - (-1) (avg) - 20 (total) == -19
assert.EqualValues(t, expectedVal1Prio, updatedVal1.ProposerPriority)
// val2 prio when added: -(totalVotingPower + (totalVotingPower >> 3)) == -11
@@ -421,10 +564,12 @@ func TestProposerPriorityProposerAlternates(t *testing.T) {
assert.Equal(t, val2PubKey.Address(), updatedState3.NextValidators.Proposer.Address)
// check if expected proposer prio is matched:
avgSum = big.NewInt(oldVal1.ProposerPriority + origVotinPower + oldVal2.ProposerPriority + origVotinPower)
expectedVal1Prio2 := oldVal1.ProposerPriority + val1VotingPower
expectedVal2Prio2 := oldVal2.ProposerPriority + val1VotingPower - totalPower
avgSum = big.NewInt(expectedVal1Prio + expectedVal2Prio)
avg = avgSum.Div(avgSum, big.NewInt(2))
expectedVal1Prio2 := oldVal1.ProposerPriority + origVotinPower - avg.Int64()
expectedVal2Prio2 := oldVal2.ProposerPriority + origVotinPower - avg.Int64() - totalPower
expectedVal1Prio -= avg.Int64()
expectedVal2Prio -= avg.Int64()
// -19 + 10 - 0 (avg) == -9
assert.EqualValues(t, expectedVal1Prio2, updatedVal1.ProposerPriority, "unexpected proposer priority for validator: %v", updatedVal2)
@@ -468,9 +613,8 @@ func TestProposerPriorityProposerAlternates(t *testing.T) {
func TestLargeGenesisValidator(t *testing.T) {
tearDown, _, state := setupTestCase(t)
defer tearDown(t)
// TODO: increase genesis voting power to sth. more close to MaxTotalVotingPower with changes that
// fix with tendermint/issues/2960; currently, the last iteration would take forever though
genesisVotingPower := int64(types.MaxTotalVotingPower / 100000000000000)
genesisVotingPower := int64(types.MaxTotalVotingPower / 1000)
genesisPubKey := ed25519.GenPrivKey().PubKey()
// fmt.Println("genesis addr: ", genesisPubKey.Address())
genesisVal := &types.Validator{Address: genesisPubKey.Address(), PubKey: genesisPubKey, VotingPower: genesisVotingPower}
@@ -494,11 +638,11 @@ func TestLargeGenesisValidator(t *testing.T) {
blockID := types.BlockID{block.Hash(), block.MakePartSet(testPartSize).Header()}
updatedState, err := updateState(oldState, blockID, &block.Header, abciResponses, validatorUpdates)
// no changes in voting power (ProposerPrio += VotingPower == 0 in 1st round; than shiftByAvg == no-op,
// no changes in voting power (ProposerPrio += VotingPower == Voting in 1st round; then shiftByAvg == 0,
// then -Total == -Voting)
// -> no change in ProposerPrio (stays -Total == -VotingPower):
// -> no change in ProposerPrio (stays zero):
assert.EqualValues(t, oldState.NextValidators, updatedState.NextValidators)
assert.EqualValues(t, -genesisVotingPower, updatedState.NextValidators.Proposer.ProposerPriority)
assert.EqualValues(t, 0, updatedState.NextValidators.Proposer.ProposerPriority)
oldState = updatedState
}
@@ -508,7 +652,6 @@ func TestLargeGenesisValidator(t *testing.T) {
// see how long it takes until the effect wears off and both begin to alternate
// see: https://github.com/tendermint/tendermint/issues/2960
firstAddedValPubKey := ed25519.GenPrivKey().PubKey()
// fmt.Println("first added addr: ", firstAddedValPubKey.Address())
firstAddedValVotingPower := int64(10)
firstAddedVal := abci.ValidatorUpdate{PubKey: types.TM2PB.PubKey(firstAddedValPubKey), Power: firstAddedValVotingPower}
validatorUpdates, err := types.PB2TM.ValidatorUpdates([]abci.ValidatorUpdate{firstAddedVal})
@@ -598,10 +741,33 @@ func TestLargeGenesisValidator(t *testing.T) {
}
count++
}
// first proposer change happens after this many iters; we probably want to lower this number:
// TODO: change with https://github.com/tendermint/tendermint/issues/2960
firstProposerChangeExpectedAfter := 438
updatedState = curState
// the proposer changes after this number of blocks
firstProposerChangeExpectedAfter := 1
assert.Equal(t, firstProposerChangeExpectedAfter, count)
// store proposers here to see if we see them again in the same order:
numVals := len(updatedState.Validators.Validators)
proposers := make([]*types.Validator, numVals)
for i := 0; i < 100; i++ {
// no updates:
abciResponses := &ABCIResponses{
EndBlock: &abci.ResponseEndBlock{ValidatorUpdates: nil},
}
validatorUpdates, err := types.PB2TM.ValidatorUpdates(abciResponses.EndBlock.ValidatorUpdates)
require.NoError(t, err)
block := makeBlock(updatedState, updatedState.LastBlockHeight+1)
blockID := types.BlockID{block.Hash(), block.MakePartSet(testPartSize).Header()}
updatedState, err = updateState(updatedState, blockID, &block.Header, abciResponses, validatorUpdates)
if i > numVals { // expect proposers to cycle through after the first iteration (of numVals blocks):
if proposers[i%numVals] == nil {
proposers[i%numVals] = updatedState.NextValidators.Proposer
} else {
assert.Equal(t, proposers[i%numVals], updatedState.NextValidators.Proposer)
}
}
}
}
func TestStoreLoadValidatorsIncrementsProposerPriority(t *testing.T) {

View File

@@ -133,10 +133,11 @@ func validateBlock(stateDB dbm.DB, state State, block *types.Block) error {
}
// Limit the amount of evidence
maxEvidenceBytes := types.MaxEvidenceBytesPerBlock(state.ConsensusParams.BlockSize.MaxBytes)
evidenceBytes := int64(len(block.Evidence.Evidence)) * types.MaxEvidenceBytes
if evidenceBytes > maxEvidenceBytes {
return types.NewErrEvidenceOverflow(maxEvidenceBytes, evidenceBytes)
maxNumEvidence, _ := types.MaxEvidencePerBlock(state.ConsensusParams.BlockSize.MaxBytes)
numEvidence := int64(len(block.Evidence.Evidence))
if numEvidence > maxNumEvidence {
return types.NewErrEvidenceOverflow(maxNumEvidence, numEvidence)
}
// Validate all evidence.

View File

@@ -109,10 +109,9 @@ func TestValidateBlockEvidence(t *testing.T) {
// A block with too much evidence fails.
maxBlockSize := state.ConsensusParams.BlockSize.MaxBytes
maxEvidenceBytes := types.MaxEvidenceBytesPerBlock(maxBlockSize)
maxEvidence := maxEvidenceBytes / types.MaxEvidenceBytes
require.True(t, maxEvidence > 2)
for i := int64(0); i < maxEvidence; i++ {
maxNumEvidence, _ := types.MaxEvidencePerBlock(maxBlockSize)
require.True(t, maxNumEvidence > 2)
for i := int64(0); i < maxNumEvidence; i++ {
block.Evidence.Evidence = append(block.Evidence.Evidence, goodEvidence)
}
block.EvidenceHash = block.Evidence.Hash()

View File

@@ -323,16 +323,17 @@ func MaxDataBytes(maxBytes int64, valsCount, evidenceCount int) int64 {
}
// MaxDataBytesUnknownEvidence returns the maximum size of block's data when
// evidence count is unknown. MaxEvidenceBytesPerBlock will be used as the size
// evidence count is unknown. MaxEvidencePerBlock will be used for the size
// of evidence.
//
// XXX: Panics on negative result.
func MaxDataBytesUnknownEvidence(maxBytes int64, valsCount int) int64 {
_, maxEvidenceBytes := MaxEvidencePerBlock(maxBytes)
maxDataBytes := maxBytes -
MaxAminoOverheadForBlock -
MaxHeaderBytes -
int64(valsCount)*MaxVoteBytes -
MaxEvidenceBytesPerBlock(maxBytes)
maxEvidenceBytes
if maxDataBytes < 0 {
panic(fmt.Sprintf(
@@ -637,6 +638,7 @@ func (commit *Commit) StringIndented(indent string) string {
//-----------------------------------------------------------------------------
// SignedHeader is a header along with the commits that prove it.
// It is the basis of the lite client.
type SignedHeader struct {
*Header `json:"header"`
Commit *Commit `json:"commit"`

View File

@@ -36,8 +36,8 @@ func (err *ErrEvidenceInvalid) Error() string {
// ErrEvidenceOverflow is for when there is too much evidence in a block.
type ErrEvidenceOverflow struct {
MaxBytes int64
GotBytes int64
MaxNum int64
GotNum int64
}
// NewErrEvidenceOverflow returns a new ErrEvidenceOverflow where got > max.
@@ -47,7 +47,7 @@ func NewErrEvidenceOverflow(max, got int64) *ErrEvidenceOverflow {
// Error returns a string representation of the error.
func (err *ErrEvidenceOverflow) Error() string {
return fmt.Sprintf("Too much evidence: Max %d bytes, got %d bytes", err.MaxBytes, err.GotBytes)
return fmt.Sprintf("Too much evidence: Max %d, got %d", err.MaxNum, err.GotNum)
}
//-------------------------------------------
@@ -72,13 +72,23 @@ func RegisterEvidences(cdc *amino.Codec) {
func RegisterMockEvidences(cdc *amino.Codec) {
cdc.RegisterConcrete(MockGoodEvidence{}, "tendermint/MockGoodEvidence", nil)
cdc.RegisterConcrete(MockRandomGoodEvidence{}, "tendermint/MockRandomGoodEvidence", nil)
cdc.RegisterConcrete(MockBadEvidence{}, "tendermint/MockBadEvidence", nil)
}
// MaxEvidenceBytesPerBlock returns the maximum evidence size per block -
// 1/10th of the maximum block size.
func MaxEvidenceBytesPerBlock(blockMaxBytes int64) int64 {
return blockMaxBytes / 10
const (
MaxEvidenceBytesDenominator = 10
)
// MaxEvidencePerBlock returns the maximum number of evidences
// allowed in the block and their maximum total size (limited to 1/10th
// of the maximum block size).
// TODO: change to a constant, or to a fraction of the validator set size.
// See https://github.com/tendermint/tendermint/issues/2590
func MaxEvidencePerBlock(blockMaxBytes int64) (int64, int64) {
maxBytes := blockMaxBytes / MaxEvidenceBytesDenominator
maxNum := maxBytes / MaxEvidenceBytes
return maxNum, maxBytes
}
//-------------------------------------------
@@ -193,6 +203,25 @@ func (dve *DuplicateVoteEvidence) ValidateBasic() error {
//-----------------------------------------------------------------
// UNSTABLE
type MockRandomGoodEvidence struct {
MockGoodEvidence
randBytes []byte
}
var _ Evidence = &MockRandomGoodEvidence{}
// UNSTABLE
func NewMockRandomGoodEvidence(height int64, address []byte, randBytes []byte) MockRandomGoodEvidence {
return MockRandomGoodEvidence{
MockGoodEvidence{height, address}, randBytes,
}
}
func (e MockRandomGoodEvidence) Hash() []byte {
return []byte(fmt.Sprintf("%d-%x", e.Height_, e.randBytes))
}
// UNSTABLE
type MockGoodEvidence struct {
Height_ int64

View File

@@ -22,6 +22,14 @@ type ConsensusParams struct {
Validator ValidatorParams `json:"validator"`
}
// HashedParams is a subset of ConsensusParams.
// It is amino encoded and hashed into
// the Header.ConsensusHash.
type HashedParams struct {
BlockMaxBytes int64
BlockMaxGas int64
}
// BlockSizeParams define limits on the block size.
type BlockSizeParams struct {
MaxBytes int64 `json:"max_bytes"`
@@ -116,13 +124,16 @@ func (params *ConsensusParams) Validate() error {
return nil
}
// Hash returns a hash of the parameters to store in the block header
// No Merkle tree here, only three values are hashed here
// thus benefit from saving space < drawbacks from proofs' overhead
// Revisit this function if new fields are added to ConsensusParams
// Hash returns a hash of a subset of the parameters to store in the block header.
// Only the Block.MaxBytes and Block.MaxGas are included in the hash.
// This allows the ConsensusParams to evolve more without breaking the block
// protocol. No need for a Merkle tree here, just a small struct to hash.
func (params *ConsensusParams) Hash() []byte {
hasher := tmhash.New()
bz := cdcEncode(params)
bz := cdcEncode(HashedParams{
params.BlockSize.MaxBytes,
params.BlockSize.MaxGas,
})
if bz == nil {
panic("cannot fail to encode ConsensusParams")
}

View File

@@ -3,25 +3,20 @@ package types
import (
abci "github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/crypto/merkle"
"github.com/tendermint/tendermint/crypto/tmhash"
cmn "github.com/tendermint/tendermint/libs/common"
)
//-----------------------------------------------------------------------------
// ABCIResult is the deterministic component of a ResponseDeliverTx.
// TODO: add Tags
// TODO: add tags and other fields
// https://github.com/tendermint/tendermint/issues/1007
type ABCIResult struct {
Code uint32 `json:"code"`
Data cmn.HexBytes `json:"data"`
}
// Hash returns the canonical hash of the ABCIResult
func (a ABCIResult) Hash() []byte {
bz := tmhash.Sum(cdcEncode(a))
return bz
}
// Bytes returns the amino encoded ABCIResult
func (a ABCIResult) Bytes() []byte {
return cdcEncode(a)
}

View File

@@ -16,20 +16,21 @@ func TestABCIResults(t *testing.T) {
e := ABCIResult{Code: 14, Data: []byte("foo")}
f := ABCIResult{Code: 14, Data: []byte("bar")}
// Nil and []byte{} should produce the same hash.
require.Equal(t, a.Hash(), a.Hash())
require.Equal(t, b.Hash(), b.Hash())
require.Equal(t, a.Hash(), b.Hash())
// Nil and []byte{} should produce the same bytes
require.Equal(t, a.Bytes(), a.Bytes())
require.Equal(t, b.Bytes(), b.Bytes())
require.Equal(t, a.Bytes(), b.Bytes())
// a and b should be the same, don't go in results.
results := ABCIResults{a, c, d, e, f}
// Make sure each result hashes properly.
// Make sure each result serializes differently
var last []byte
for i, res := range results {
h := res.Hash()
assert.NotEqual(t, last, h, "%d", i)
last = h
assert.Equal(t, last, a.Bytes()) // first one is empty
for i, res := range results[1:] {
bz := res.Bytes()
assert.NotEqual(t, last, bz, "%d", i)
last = bz
}
// Make sure that we can get a root hash from results and verify proofs.

View File

@@ -4,8 +4,6 @@ import (
"bytes"
"fmt"
"github.com/tendermint/tendermint/crypto/tmhash"
"github.com/tendermint/tendermint/crypto"
cmn "github.com/tendermint/tendermint/libs/common"
)
@@ -70,12 +68,6 @@ func (v *Validator) String() string {
v.ProposerPriority)
}
// Hash computes the unique ID of a validator with a given voting power.
// It excludes the ProposerPriority value, which changes with every round.
func (v *Validator) Hash() []byte {
return tmhash.Sum(v.Bytes())
}
// Bytes computes the unique encoding of a validator with a given voting power.
// These are the bytes that gets hashed in consensus. It excludes address
// as its redundant with the pubkey. This also excludes ProposerPriority

View File

@@ -13,13 +13,13 @@ import (
)
// The maximum allowed total voting power.
// We set the ProposerPriority of freshly added validators to -1.125*totalVotingPower.
// To compute 1.125*totalVotingPower efficiently, we do:
// totalVotingPower + (totalVotingPower >> 3) because
// x + (x >> 3) = x + x/8 = x * (1 + 0.125).
// MaxTotalVotingPower is the largest int64 `x` with the property that `x + (x >> 3)` is
// still in the bounds of int64.
const MaxTotalVotingPower = int64(8198552921648689607)
// It needs to be sufficiently small to, in all cases:
// 1. prevent clipping in incrementProposerPriority()
// 2. let (diff+diffMax-1) not overflow in IncrementProposerPriority()
// (Proof of 1 is tricky, left to the reader).
// It could be higher, but this is sufficiently large for our purposes,
// and leaves room as a safety margin.
const MaxTotalVotingPower = int64(math.MaxInt64) / 8
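A standalone sanity check (not part of the diff) of the headroom this bound leaves: even at the maximum total voting power, the 1.125*total magnitude used when adding a validator stays far below MaxInt64.
package main

import (
	"fmt"
	"math"
)

func main() {
	total := int64(math.MaxInt64) / 8
	added := total + (total >> 3)             // |-1.125*total|, the priority magnitude of a freshly added validator
	fmt.Println(added, added < math.MaxInt64) // 1297036692682702846 true
}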
// ValidatorSet represent a set of *Validator at a given height.
// The validators can be fetched by address or index.
@@ -78,44 +78,57 @@ func (vals *ValidatorSet) IncrementProposerPriority(times int) {
panic("Cannot call IncrementProposerPriority with non-positive times")
}
const shiftEveryNthIter = 10
// Cap the difference between priorities to be proportional to 2*totalPower by
// re-normalizing priorities, i.e., rescale all priorities by multiplying with:
// 2*totalVotingPower/(maxPriority - minPriority)
diffMax := 2 * vals.TotalVotingPower()
vals.RescalePriorities(diffMax)
var proposer *Validator
// call IncrementProposerPriority(1) times times:
for i := 0; i < times; i++ {
shiftByAvgProposerPriority := i%shiftEveryNthIter == 0
proposer = vals.incrementProposerPriority(shiftByAvgProposerPriority)
}
isShiftedAvgOnLastIter := (times-1)%shiftEveryNthIter == 0
if !isShiftedAvgOnLastIter {
validatorsHeap := cmn.NewHeap()
vals.shiftByAvgProposerPriority(validatorsHeap)
proposer = vals.incrementProposerPriority()
}
vals.shiftByAvgProposerPriority()
vals.Proposer = proposer
}
func (vals *ValidatorSet) incrementProposerPriority(subAvg bool) *Validator {
for _, val := range vals.Validators {
// Check for overflow for sum.
val.ProposerPriority = safeAddClip(val.ProposerPriority, val.VotingPower)
func (vals *ValidatorSet) RescalePriorities(diffMax int64) {
// NOTE: This is merely a sanity check that could be
// removed if all tests initialized voting power appropriately;
// i.e. diffMax should always be > 0
if diffMax == 0 {
return
}
validatorsHeap := cmn.NewHeap()
if subAvg { // shift by avg ProposerPriority
vals.shiftByAvgProposerPriority(validatorsHeap)
} else { // just update the heap
// Calculating ceil(diff/diffMax):
// Re-normalization is performed by dividing by an integer for simplicity.
// NOTE: This may make debugging priority issues easier as well.
diff := computeMaxMinPriorityDiff(vals)
ratio := (diff + diffMax - 1) / diffMax
if ratio > 1 {
for _, val := range vals.Validators {
validatorsHeap.PushComparable(val, proposerPriorityComparable{val})
val.ProposerPriority /= ratio
}
}
}
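For example (numbers match the comment in TestProposerPriorityDoesNotGetResetToZero above), a priority spread of 31 against diffMax = 2*totalVotingPower = 22 gives a ratio of 2, so every ProposerPriority is halved before incrementing:
package main

import "fmt"

func main() {
	diff, diffMax := int64(31), int64(22)
	ratio := (diff + diffMax - 1) / diffMax // integer ceil(diff / diffMax)
	fmt.Println(ratio)                      // 2
}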
func (vals *ValidatorSet) incrementProposerPriority() *Validator {
for _, val := range vals.Validators {
// Check for overflow for sum.
newPrio := safeAddClip(val.ProposerPriority, val.VotingPower)
val.ProposerPriority = newPrio
}
// Decrement the validator with most ProposerPriority:
mostest := validatorsHeap.Peek().(*Validator)
mostest := vals.getValWithMostPriority()
// mind underflow
mostest.ProposerPriority = safeSubClip(mostest.ProposerPriority, vals.TotalVotingPower())
return mostest
}
// should not be called on an empty validator set
func (vals *ValidatorSet) computeAvgProposerPriority() int64 {
n := int64(len(vals.Validators))
sum := big.NewInt(0)
@@ -131,11 +144,38 @@ func (vals *ValidatorSet) computeAvgProposerPriority() int64 {
panic(fmt.Sprintf("Cannot represent avg ProposerPriority as an int64 %v", avg))
}
func (vals *ValidatorSet) shiftByAvgProposerPriority(validatorsHeap *cmn.Heap) {
// compute the difference between the max and min ProposerPriority of that set
func computeMaxMinPriorityDiff(vals *ValidatorSet) int64 {
max := int64(math.MinInt64)
min := int64(math.MaxInt64)
for _, v := range vals.Validators {
if v.ProposerPriority < min {
min = v.ProposerPriority
}
if v.ProposerPriority > max {
max = v.ProposerPriority
}
}
diff := max - min
if diff < 0 {
return -1 * diff
} else {
return diff
}
}
func (vals *ValidatorSet) getValWithMostPriority() *Validator {
var res *Validator
for _, val := range vals.Validators {
res = res.CompareProposerPriority(val)
}
return res
}
func (vals *ValidatorSet) shiftByAvgProposerPriority() {
avgProposerPriority := vals.computeAvgProposerPriority()
for _, val := range vals.Validators {
val.ProposerPriority = safeSubClip(val.ProposerPriority, avgProposerPriority)
validatorsHeap.PushComparable(val, proposerPriorityComparable{val})
}
}
@@ -373,8 +413,7 @@ func (vals *ValidatorSet) VerifyCommit(chainID string, blockID BlockID, height i
if talliedVotingPower > vals.TotalVotingPower()*2/3 {
return nil
}
return fmt.Errorf("Invalid commit -- insufficient voting power: got %v, needed %v",
talliedVotingPower, vals.TotalVotingPower()*2/3+1)
return errTooMuchChange{talliedVotingPower, vals.TotalVotingPower()*2/3 + 1}
}
// VerifyFutureCommit will check to see if the set would be valid with a different
@@ -456,12 +495,37 @@ func (vals *ValidatorSet) VerifyFutureCommit(newSet *ValidatorSet, chainID strin
}
if oldVotingPower <= oldVals.TotalVotingPower()*2/3 {
return cmn.NewError("Invalid commit -- insufficient old voting power: got %v, needed %v",
oldVotingPower, oldVals.TotalVotingPower()*2/3+1)
return errTooMuchChange{oldVotingPower, oldVals.TotalVotingPower()*2/3 + 1}
}
return nil
}
//-----------------
// ErrTooMuchChange
func IsErrTooMuchChange(err error) bool {
switch err_ := err.(type) {
case cmn.Error:
_, ok := err_.Data().(errTooMuchChange)
return ok
case errTooMuchChange:
return true
default:
return false
}
}
type errTooMuchChange struct {
got int64
needed int64
}
func (e errTooMuchChange) Error() string {
return fmt.Sprintf("Invalid commit -- insufficient old voting power: got %v, needed %v", e.got, e.needed)
}
//----------------
func (vals *ValidatorSet) String() string {
return vals.StringIndented("")
}
@@ -508,20 +572,6 @@ func (valz ValidatorsByAddress) Swap(i, j int) {
valz[j] = it
}
//-------------------------------------
// Use with Heap for sorting validators by ProposerPriority
type proposerPriorityComparable struct {
*Validator
}
// We want to find the validator with the greatest ProposerPriority.
func (ac proposerPriorityComparable) Less(o interface{}) bool {
other := o.(proposerPriorityComparable).Validator
larger := ac.CompareProposerPriority(other)
return bytes.Equal(larger.Address, ac.Address)
}
//----------------------------------------
// For testing

View File

@@ -392,10 +392,16 @@ func TestAveragingInIncrementProposerPriority(t *testing.T) {
func TestAveragingInIncrementProposerPriorityWithVotingPower(t *testing.T) {
// Other than TestAveragingInIncrementProposerPriority this is a more complete test showing
// how each ProposerPriority changes in relation to the validator's voting power respectively.
// average is zero in each round:
vp0 := int64(10)
vp1 := int64(1)
vp2 := int64(1)
total := vp0 + vp1 + vp2
avg := (vp0 + vp1 + vp2 - total) / 3
vals := ValidatorSet{Validators: []*Validator{
{Address: []byte{0}, ProposerPriority: 0, VotingPower: 10},
{Address: []byte{1}, ProposerPriority: 0, VotingPower: 1},
{Address: []byte{2}, ProposerPriority: 0, VotingPower: 1}}}
{Address: []byte{0}, ProposerPriority: 0, VotingPower: vp0},
{Address: []byte{1}, ProposerPriority: 0, VotingPower: vp1},
{Address: []byte{2}, ProposerPriority: 0, VotingPower: vp2}}}
tcs := []struct {
vals *ValidatorSet
wantProposerPrioritys []int64
@@ -407,95 +413,89 @@ func TestAveragingInIncrementProposerPriorityWithVotingPower(t *testing.T) {
vals.Copy(),
[]int64{
// Accum+VotingPower-Avg:
0 + 10 - 12 - 4, // mostest will be subtracted by total voting power (12)
0 + 1 - 4,
0 + 1 - 4},
0 + vp0 - total - avg, // mostest will be subtracted by total voting power (12)
0 + vp1,
0 + vp2},
1,
vals.Validators[0]},
1: {
vals.Copy(),
[]int64{
(0 + 10 - 12 - 4) + 10 - 12 + 4, // this will be mostest on 2nd iter, too
(0 + 1 - 4) + 1 + 4,
(0 + 1 - 4) + 1 + 4},
(0 + vp0 - total) + vp0 - total - avg, // this will be mostest on 2nd iter, too
(0 + vp1) + vp1,
(0 + vp2) + vp2},
2,
vals.Validators[0]}, // increment twice -> expect average to be subtracted twice
2: {
vals.Copy(),
[]int64{
((0 + 10 - 12 - 4) + 10 - 12) + 10 - 12 + 4, // still mostest
((0 + 1 - 4) + 1) + 1 + 4,
((0 + 1 - 4) + 1) + 1 + 4},
0 + 3*(vp0-total) - avg, // still mostest
0 + 3*vp1,
0 + 3*vp2},
3,
vals.Validators[0]},
3: {
vals.Copy(),
[]int64{
0 + 4*(10-12) + 4 - 4, // still mostest
0 + 4*1 + 4 - 4,
0 + 4*1 + 4 - 4},
0 + 4*(vp0-total), // still mostest
0 + 4*vp1,
0 + 4*vp2},
4,
vals.Validators[0]},
4: {
vals.Copy(),
[]int64{
0 + 4*(10-12) + 10 + 4 - 4, // 4 iters was mostest
0 + 5*1 - 12 + 4 - 4, // now this val is mostest for the 1st time (hence -12==totalVotingPower)
0 + 5*1 + 4 - 4},
0 + 4*(vp0-total) + vp0, // 4 iters was mostest
0 + 5*vp1 - total, // now this val is mostest for the 1st time (hence -12==totalVotingPower)
0 + 5*vp2},
5,
vals.Validators[1]},
5: {
vals.Copy(),
[]int64{
0 + 6*10 - 5*12 + 4 - 4, // mostest again
0 + 6*1 - 12 + 4 - 4, // mostest once up to here
0 + 6*1 + 4 - 4},
0 + 6*vp0 - 5*total, // mostest again
0 + 6*vp1 - total, // mostest once up to here
0 + 6*vp2},
6,
vals.Validators[0]},
6: {
vals.Copy(),
[]int64{
0 + 7*10 - 6*12 + 4 - 4, // in 7 iters this val is mostest 6 times
0 + 7*1 - 12 + 4 - 4, // in 7 iters this val is mostest 1 time
0 + 7*1 + 4 - 4},
0 + 7*vp0 - 6*total, // in 7 iters this val is mostest 6 times
0 + 7*vp1 - total, // in 7 iters this val is mostest 1 time
0 + 7*vp2},
7,
vals.Validators[0]},
7: {
vals.Copy(),
[]int64{
0 + 8*10 - 7*12 + 4 - 4, // mostest
0 + 8*1 - 12 + 4 - 4,
0 + 8*1 + 4 - 4},
0 + 8*vp0 - 7*total, // mostest again
0 + 8*vp1 - total,
0 + 8*vp2},
8,
vals.Validators[0]},
8: {
vals.Copy(),
[]int64{
0 + 9*10 - 7*12 + 4 - 4,
0 + 9*1 - 12 + 4 - 4,
0 + 9*1 - 12 + 4 - 4}, // mostest
0 + 9*vp0 - 7*total,
0 + 9*vp1 - total,
0 + 9*vp2 - total}, // mostest
9,
vals.Validators[2]},
9: {
vals.Copy(),
[]int64{
0 + 10*10 - 8*12 + 4 - 4, // after 10 iters this is mostest again
0 + 10*1 - 12 + 4 - 4, // after 6 iters this val is "mostest" once and not in between
0 + 10*1 - 12 + 4 - 4}, // in between 10 iters this val is "mostest" once
0 + 10*vp0 - 8*total, // after 10 iters this is mostest again
0 + 10*vp1 - total, // after 6 iters this val is "mostest" once and not in between
0 + 10*vp2 - total}, // in between 10 iters this val is "mostest" once
10,
vals.Validators[0]},
10: {
vals.Copy(),
[]int64{
// shift twice inside incrementProposerPriority (shift every 10th iter);
// don't shift at the end of IncremenctProposerPriority
// last avg should be zero because
// ProposerPriority of validator 0: (0 + 11*10 - 8*12 - 4) == 10
// ProposerPriority of validator 1 and 2: (0 + 11*1 - 12 - 4) == -5
// and (10 + 5 - 5) / 3 == 0
0 + 11*10 - 8*12 - 4 - 12 - 0,
0 + 11*1 - 12 - 4 - 0, // after 6 iters this val is "mostest" once and not in between
0 + 11*1 - 12 - 4 - 0}, // after 10 iters this val is "mostest" once
0 + 11*vp0 - 9*total,
0 + 11*vp1 - total, // after 6 iters this val is "mostest" once and not in between
0 + 11*vp2 - total}, // after 10 iters this val is "mostest" once
11,
vals.Validators[0]},
}

View File

@@ -20,3 +20,8 @@ func RegisterBlockAmino(cdc *amino.Codec) {
func GetCodec() *amino.Codec {
return cdc
}
// For testing purposes only
func RegisterMockEvidencesGlobal() {
RegisterMockEvidences(cdc)
}

View File

@@ -18,7 +18,7 @@ const (
// TMCoreSemVer is the current version of Tendermint Core.
// It's the Semantic Version of the software.
// Must be a string because scripts like dist.sh read this file.
TMCoreSemVer = "0.28.0"
TMCoreSemVer = "0.29.0"
// ABCISemVer is the semantic version of the ABCI library
ABCISemVer = "0.15.0"
@@ -36,10 +36,10 @@ func (p Protocol) Uint64() uint64 {
var (
// P2PProtocol versions all p2p behaviour and msgs.
P2PProtocol Protocol = 5
P2PProtocol Protocol = 6
// BlockProtocol versions all block data structures and processing.
BlockProtocol Protocol = 8
BlockProtocol Protocol = 9
)
//------------------------------------------------------------------------