lint markdown docs using stop-words and write-good linters (#2195)

* lint docs with write-good, stop-words

* remove package-lock.json

* update changelog

* fix wrong paragraph formatting

* fix some docs formatting

* fix docs format

* fix abci spec format
Authored by Peng Zhong on 2018-08-27 15:33:46 +08:00; committed by Anton Kaliaev
parent 8a84593c02
commit 20e35654c6
53 changed files with 3440 additions and 831 deletions


@@ -57,7 +57,7 @@ is malicious or faulty.
A commit in Tendermint is a set of signed messages from more than 2/3 of
the total weight of the current Validator set. Validators take turns proposing
blocks and voting on them. Once enough votes are received, the block is considered
committed. These votes are included in the _next_ block as proof that the previous block
was committed - they cannot be included in the current block, as that block has already been
created.
@@ -71,8 +71,8 @@ of the latest state of the blockchain. To achieve this, it embeds
cryptographic commitments to certain information in the block "header".
This information includes the contents of the block (eg. the transactions),
the validator set committing the block, as well as the various results returned by the application.
Note, however, that block execution only occurs _after_ a block is committed.
Thus, application results can only be included in the _next_ block.
Also note that information like the transaction results and the validator set are never
directly included in the block - only their cryptographic digests (Merkle roots) are.
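
To make the digest rule concrete, a header carries only fixed-size hashes of the variable-size structures; the following Go fragment is a hedged sketch (field names are modeled on the spec's `Header`, but the selection here is illustrative):

```go
// Sketch: the header commits to large structures only via Merkle roots.
// Field names are illustrative, not the authoritative Header definition.
type Header struct {
	DataHash        []byte // Merkle root of the block's transactions
	ValidatorsHash  []byte // Merkle root of the validator set committing this block
	LastResultsHash []byte // Merkle root of results from executing the previous block
}
```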


@@ -104,8 +104,8 @@ type Vote struct {
```
There are two types of votes:
a _prevote_ has `vote.Type == 1` and
a _precommit_ has `vote.Type == 2`.
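
As a minimal illustration (the constant names below are made up; the spec fixes only the numeric values):

```go
// The two vote types described above; identifiers are illustrative.
const (
	VoteTypePrevote   byte = 1 // a prevote has vote.Type == 1
	VoteTypePrecommit byte = 2 // a precommit has vote.Type == 2
)
```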
## Signature
@@ -162,10 +162,10 @@ We refer to certain globally available objects:
`prevBlock` is the `block` at the previous height,
and `state` keeps track of the validator set, the consensus parameters
and other results from the application. At the point when `block` is the block under consideration,
the current version of the `state` corresponds to the state
after executing transactions from the `prevBlock`.
Elements of an object are accessed as expected,
ie. `block.Header`.
See [here](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/state.md) for the definition of `state`.
### Header
@@ -288,6 +288,7 @@ This can be used to validate the `LastCommit` included in the next block.
```go
block.NextValidatorsHash == SimpleMerkleRoot(state.NextValidators)
```
Simple Merkle root of the next validator set that will be the validator set that commits the next block.
Modifications to the validator set are defined by the application.
@@ -427,11 +428,8 @@ Execute(s State, app ABCIApp, block Block) State {
AppHash: AppHash,
LastValidators: state.Validators,
Validators: state.NextValidators,
NextValidators: UpdateValidators(state.NextValidators, ValidatorChanges),
ConsensusParams: UpdateConsensusParams(state.ConsensusParams, ConsensusParamChanges),
}
}
```


@@ -48,33 +48,33 @@ spec](https://github.com/tendermint/go-amino#computing-the-prefix-and-disambigua
In what follows, we provide the type names and prefix bytes directly.
Notice that when encoding byte-arrays, the length of the byte-array is appended
to the PrefixBytes. Thus the encoding of a byte array becomes `<PrefixBytes> <Length> <ByteArray>`. In other words, to encode any type listed below you do not need to be
familiar with amino encoding.
You can simply use the table below and concatenate Prefix || Length (of raw bytes) || raw bytes
(where || stands for byte concatenation).
| Type | Name | Prefix | Length | Notes |
| ------------------ | ----------------------------- | ---------- | -------- | ----- |
| PubKeyEd25519 | tendermint/PubKeyEd25519 | 0x1624DE64 | 0x20 | |
| PubKeySecp256k1 | tendermint/PubKeySecp256k1 | 0xEB5AE987 | 0x21 | |
| PrivKeyEd25519 | tendermint/PrivKeyEd25519 | 0xA3288910 | 0x40 | |
| PrivKeySecp256k1 | tendermint/PrivKeySecp256k1 | 0xE1B0F79B | 0x20 | |
| SignatureEd25519 | tendermint/SignatureEd25519 | 0x2031EA53 | 0x40 | |
| SignatureSecp256k1 | tendermint/SignatureSecp256k1 | 0x7FC4A495 | variable | |
### Examples
1. For example, the 33-byte (or 0x21-byte in hex) Secp256k1 pubkey
`020BD40F225A57ED383B440CF073BC5539D0341F5767D2BF2D78406D00475A2EE9`
would be encoded as
`EB5AE98221020BD40F225A57ED383B440CF073BC5539D0341F5767D2BF2D78406D00475A2EE9`
2. For example, the variable size Secp256k1 signature (in this particular example 70 or 0x46 bytes)
`304402201CD4B8C764D2FD8AF23ECFE6666CA8A53886D47754D951295D2D311E1FEA33BF02201E0F906BB1CF2C30EAACFFB032A7129358AFF96B9F79B06ACFFB18AC90C2ADD7`
would be encoded as
`16E1FEEA46304402201CD4B8C764D2FD8AF23ECFE6666CA8A53886D47754D951295D2D311E1FEA33BF02201E0F906BB1CF2C30EAACFFB032A7129358AFF96B9F79B06ACFFB18AC90C2ADD7`
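
The Prefix || Length || raw-bytes recipe can be checked with a short Go sketch (it assumes the length fits in a single varint byte, i.e. is under 0x80, which holds for every entry in the table above):

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// encodeAmino concatenates Prefix || Length || raw bytes as described above.
// A single length byte suffices for lengths below 0x80.
func encodeAmino(prefix, raw []byte) []byte {
	out := append([]byte{}, prefix...)
	out = append(out, byte(len(raw)))
	return append(out, raw...)
}

func main() {
	prefix, _ := hex.DecodeString("EB5AE987") // PubKeySecp256k1 prefix
	pubkey, _ := hex.DecodeString("020BD40F225A57ED383B440CF073BC5539D0341F5767D2BF2D78406D00475A2EE9")
	fmt.Printf("%X\n", encodeAmino(prefix, pubkey))
	// Output matches the first example:
	// EB5AE98221020BD40F225A57ED383B440CF073BC5539D0341F5767D2BF2D78406D00475A2EE9
}
```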
### Addresses
@@ -152,28 +152,27 @@ func MakeParts(obj interface{}, partSize int) []Part
For an overview of Merkle trees, see
[wikipedia](https://en.wikipedia.org/wiki/Merkle_tree)
A Simple Tree is a simple compact binary tree for a static list of items. Simple Merkle trees are used in numerous places in Tendermint to compute a cryptographic digest of a data structure. In a Simple Tree, the transactions and validation signatures of a block are hashed using this simple merkle tree logic.
If the number of items is not a power of two, the tree will not be full
and some leaf nodes will be at different levels. Simple Tree tries to
keep both sides of the tree the same size, but the left side may be one
greater, for example:
```
Simple Tree with 6 items           Simple Tree with 7 items

              *                              *
             / \                            / \
           /     \                        /     \
         /         \                    /         \
       /             \                /             \
      *               *              *               *
     / \             / \            / \             / \
    /   \           /   \          /   \           /   \
   /     \         /     \        /     \         /     \
  *       h2      *       h5     *       *       *       h6
 / \             / \            / \     / \     / \
h0  h1          h3  h4         h0  h1  h2  h3  h4  h5
```
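
The split rule fits in a few lines of Go; this is a hedged sketch (the real spec hashes amino-encoded inner nodes, so plain SHA-256 over concatenation stands in here):

```go
import "crypto/sha256"

// simpleMerkleRoot sketches the rule above: for an odd count,
// the left half receives the extra item.
func simpleMerkleRoot(hashes [][]byte) []byte {
	switch len(hashes) {
	case 0:
		return nil
	case 1:
		return hashes[0] // leaves are assumed to be hashed already
	default:
		mid := (len(hashes) + 1) / 2 // left side may be one greater
		left := simpleMerkleRoot(hashes[:mid])
		right := simpleMerkleRoot(hashes[mid:])
		h := sha256.Sum256(append(left, right...))
		return h[:]
	}
}
```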
@@ -224,7 +223,6 @@ For `[]struct` arguments, we compute a `[][]byte` by hashing the individual `str
Proof that a leaf is in a Merkle tree consists of a simple structure:
```
type SimpleProof struct {
    Aunts [][]byte
}
```
@@ -265,8 +263,8 @@ func computeHashFromAunts(index, total int, leafHash []byte, innerHashes [][]byt
The Simple Tree is used to merkelize a list of items, so to merkelize a
(short) dictionary of key-value pairs, encode the dictionary as an
ordered list of `KVPair` structs. The block hash is such a hash
derived from all the fields of the block `Header`. The state hash is
similarly derived.
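
A minimal Go sketch of that recipe, reusing `simpleMerkleRoot` from above (the per-pair encoding is simplified to byte concatenation; the spec amino-encodes each `KVPair`):

```go
import (
	"crypto/sha256"
	"sort"
)

// kvPairsRoot merkleizes a short dictionary as an ordered list of
// key-value pairs, per the description above.
func kvPairsRoot(m map[string][]byte) []byte {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic ordering of the dictionary
	leaves := make([][]byte, 0, len(keys))
	for _, k := range keys {
		leaf := sha256.Sum256(append([]byte(k), m[k]...))
		leaves = append(leaves, leaf[:])
	}
	return simpleMerkleRoot(leaves)
}
```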
### IAVL+ Tree
@@ -300,7 +298,6 @@ For instance, an ED25519 PubKey would look like:
Where the `"value"` is the base64 encoding of the raw pubkey bytes, and the
`"type"` is the full disfix bytes for Ed25519 pubkeys.
### Signed Messages
Signed messages (eg. votes, proposals) in the consensus are encoded using Amino-JSON, rather than in the standard binary format.


@@ -75,7 +75,6 @@ func TotalVotingPower(vals []Validators) int64{
}
```
### ConsensusParams
TODO


@@ -1,56 +1,53 @@
# BFT time in Tendermint
Tendermint provides a deterministic, Byzantine fault-tolerant, source of time.
Time in Tendermint is defined with the Time field of the block header.
It satisfies the following properties:
- Time Monotonicity: Time is monotonically increasing, i.e., given
a header H1 for height h1 and a header H2 for height `h2 = h1 + 1`, `H1.Time < H2.Time`.
- Time Validity: Given a set of Commit votes that forms the `block.LastCommit` field, a range of
valid values for the Time field of the block header is defined only by
Precommit messages (from the LastCommit field) sent by correct processes, i.e.,
a faulty process cannot arbitrarily increase the Time value.
In the context of Tendermint, time is of type int64 and denotes UNIX time in milliseconds, i.e.,
corresponds to the number of milliseconds since January 1, 1970. Before defining rules that need to be enforced by the
Tendermint consensus protocol so that the properties above hold, we introduce the following definition:
- median of a set of `Vote` messages is equal to the median of `Vote.Time` fields of the corresponding `Vote` messages,
where the value of `Vote.Time` is counted a number of times proportional to the process's voting power. As in Tendermint
the voting power is not uniform (one process one vote), a vote message is actually an aggregator of the same votes whose
number is equal to the voting power of the process that has cast the corresponding vote message.
Let's consider the following example:
- we have four processes p1, p2, p3 and p4, with the following voting power distribution (p1, 23), (p2, 27), (p3, 10)
and (p4, 10). The total voting power is 70 (`N = 3f+1`, where `N` is the total voting power, and `f` is the maximum voting
power of the faulty processes), so we assume that the faulty processes have at most 23 of voting power.
Furthermore, we have the following vote messages in some LastCommit field (we ignore all fields except Time field):
- (p1, 100), (p2, 98), (p3, 1000), (p4, 500). We assume that p3 and p4 are faulty processes. Let's assume that the
`block.LastCommit` message contains votes of processes p2, p3 and p4. Median is then chosen the following way:
the value 98 is counted 27 times, the value 1000 is counted 10 times and the value 500 is counted also 10 times.
So the median value will be the value 98. No matter what set of messages with at least `2f+1` voting power we
choose, the median value will always be between the values sent by correct processes.
We ensure Time Monotonicity and Time Validity properties by the following rules:
- let rs denote `RoundState` (consensus internal state) of some process. Then
`rs.ProposalBlock.Header.Time == median(rs.LastCommit) && rs.Proposal.Timestamp == rs.ProposalBlock.Header.Time`.
- Furthermore, when creating the `vote` message, the following rules for determining `vote.Time` field should hold:
- if `rs.Proposal` is defined then
`vote.Time = max(rs.Proposal.Timestamp + 1, time.Now())`, where `time.Now()`
denotes local Unix time in milliseconds.
- if `rs.Proposal` is not defined and `rs.Votes` contains +2/3 of the corresponding vote messages (votes for the
current height and round, and with the corresponding type (`Prevote` or `Precommit`)), then
`vote.Time = max(median(getVotes(rs.Votes, vote.Height, vote.Round, vote.Type)), time.Now())`,
where `getVotes` function returns the votes for particular `Height`, `Round` and `Type`.
The second rule is relevant for the case when a process jumps to a higher round upon receiving +2/3 votes for a higher
round, but the corresponding `Proposal` message for the higher round hasn't been received yet.
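
The worked example can be reproduced with a short Go sketch of the weighted median (types and names here are illustrative, not the spec's):

```go
package main

import (
	"fmt"
	"sort"
)

type weightedTime struct {
	time  int64 // Vote.Time in milliseconds
	power int64 // voting power of the sender
}

// weightedMedian counts each Vote.Time a number of times proportional
// to the sender's voting power, as defined above.
func weightedMedian(votes []weightedTime) int64 {
	sort.Slice(votes, func(i, j int) bool { return votes[i].time < votes[j].time })
	var total, acc int64
	for _, v := range votes {
		total += v.power
	}
	for _, v := range votes {
		acc += v.power
		if acc > total/2 {
			return v.time
		}
	}
	return votes[len(votes)-1].time
}

func main() {
	// LastCommit with votes of p2 (98, power 27), p3 (1000, 10), p4 (500, 10).
	fmt.Println(weightedMedian([]weightedTime{{98, 27}, {1000, 10}, {500, 10}}))
	// Prints 98, matching the example above.
}
```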


@@ -2,31 +2,31 @@
## Terms
- The network is composed of optionally connected _nodes_. Nodes
directly connected to a particular node are called _peers_.
- The consensus process in deciding the next block (at some _height_
`H`) is composed of one or many _rounds_.
- `NewHeight`, `Propose`, `Prevote`, `Precommit`, and `Commit`
represent state machine states of a round. (aka `RoundStep` or
just "step").
- A node is said to be _at_ a given height, round, and step, or at
`(H,R,S)`, or at `(H,R)` in short to omit the step.
- To _prevote_ or _precommit_ something means to broadcast a [prevote
vote](https://godoc.org/github.com/tendermint/tendermint/types#Vote)
or [first precommit
vote](https://godoc.org/github.com/tendermint/tendermint/types#FirstPrecommit)
for something.
- A vote _at_ `(H,R)` is a vote signed with the bytes for `H` and `R`
included in its [sign-bytes](block-structure.html#vote-sign-bytes).
- _+2/3_ is short for "more than 2/3"
- _1/3+_ is short for "1/3 or more"
- A set of +2/3 of prevotes for a particular block or `<nil>` at
`(H,R)` is called a _proof-of-lock-change_ or _PoLC_ for short.
## State Machine Overview
At each height of the blockchain a round-based protocol is run to
determine the next block. Each round is composed of three _steps_
(`Propose`, `Prevote`, and `Precommit`), along with two special steps
`Commit` and `NewHeight`.
@@ -36,22 +36,22 @@ In the optimal scenario, the order of steps is:
NewHeight -> (Propose -> Prevote -> Precommit)+ -> Commit -> NewHeight ->...
```
The sequence `(Propose -> Prevote -> Precommit)` is called a _round_.
There may be more than one round required to commit a block at a given
height. Examples for why more rounds may be required include:
- The designated proposer was not online.
- The block proposed by the designated proposer was not valid.
- The block proposed by the designated proposer did not propagate
in time.
- The block proposed was valid, but +2/3 of prevotes for the proposed
block were not received in time for enough validator nodes by the
time they reached the `Precommit` step. Even though +2/3 of prevotes
are necessary to progress to the next step, at least one validator
may have voted `<nil>` or maliciously voted for something else.
- The block proposed was valid, and +2/3 of prevotes were received for
enough nodes, but +2/3 of precommits for the proposed block were not
received for enough validator nodes.
Some of these problems are resolved by moving onto the next round &
proposer. Others are resolved by increasing certain round timeout
@@ -80,14 +80,13 @@ parameters over each successive round.
+--------------------------------------------------------------------+
```
# Background Gossip
A node may not have a corresponding validator private key, but it
nevertheless plays an active role in the consensus process by relaying
relevant meta-data, proposals, blocks, and votes to its peers. A node
that has the private keys of an active validator and is engaged in
signing votes is called a _validator-node_. All nodes (not just
validator-nodes) have an associated state (the current height, round,
and step) and work to make progress.
@@ -97,21 +96,21 @@ epidemic gossip protocol is implemented among some of these channels to
bring peers up to speed on the most recent state of consensus. For
example,
- Nodes gossip `PartSet` parts of the current round's proposer's
proposed block. A LibSwift inspired algorithm is used to quickly
broadcast blocks across the gossip network.
- Nodes gossip prevote/precommit votes. A node `NODE_A` that is ahead
of `NODE_B` can send `NODE_B` prevotes or precommits for `NODE_B`'s
current (or future) round to enable it to progress forward.
- Nodes gossip prevotes for the proposed PoLC (proof-of-lock-change)
round if one is proposed.
- Nodes gossip to nodes lagging in blockchain height with block
[commits](https://godoc.org/github.com/tendermint/tendermint/types#Commit)
for older blocks.
- Nodes opportunistically gossip `HasVote` messages to hint peers what
votes it already has.
- Nodes broadcast their current state to all neighboring peers. (but
is not gossiped further)
There's more, but let's not get ahead of ourselves here.
@@ -144,14 +143,14 @@ and all prevotes at `PoLC-Round`. --> goto `Prevote(H,R)` - After
Upon entering `Prevote`, each validator broadcasts its prevote vote.
- First, if the validator is locked on a block since `LastLockRound`
but now has a PoLC for something else at round `PoLC-Round` where
`LastLockRound < PoLC-Round < R`, then it unlocks.
- If the validator is still locked on a block, it prevotes that.
- Else, if the proposed block from `Propose(H,R)` is good, it
prevotes that.
- Else, if the proposal is invalid or wasn't received on time, it
prevotes `<nil>`.
The `Prevote` step ends: - After +2/3 prevotes for a particular block or
`<nil>`. --> goto `Precommit(H,R)` - After `timeoutPrevote` after
@@ -161,11 +160,12 @@ receiving any +2/3 prevotes. --> goto `Precommit(H,R)` - After
### Precommit Step (height:H,round:R)
Upon entering `Precommit`, each validator broadcasts its precommit vote.
- If the validator has a PoLC at `(H,R)` for a particular block `B`, it
  (re)locks (or changes lock to) and precommits `B` and sets
  `LastLockRound = R`.
- Else, if the validator has a PoLC at `(H,R)` for `<nil>`, it unlocks
  and precommits `<nil>`.
- Else, it keeps the lock unchanged and precommits `<nil>`.
A precommit for `<nil>` means "I didn't see a PoLC for this round, but I
did get +2/3 prevotes and waited a bit".
@@ -177,24 +177,24 @@ conditions](#common-exit-conditions)
### Common exit conditions
- After +2/3 precommits for a particular block. --> goto
`Commit(H)`
- After any +2/3 prevotes received at `(H,R+x)`. --> goto
`Prevote(H,R+x)`
- After any +2/3 precommits received at `(H,R+x)`. --> goto
`Precommit(H,R+x)`
### Commit Step (height:H)
- Set `CommitTime = now()`
- Wait until block is received. --> goto `NewHeight(H+1)`
### NewHeight Step (height:H)
- Move `Precommits` to `LastCommit` and increment height.
- Set `StartTime = CommitTime+timeoutCommit`
- Wait until `StartTime` to receive straggler commits. --> goto
`Propose(H,0)`
## Proofs
@@ -236,20 +236,20 @@ Further, define the JSet at height `H` of a set of validators `VSet` to
be the union of the JSets for each validator in `VSet`. For a given
commit by honest validators at round `R` for block `B` we can construct
a JSet to justify the commit for `B` at `R`. We say that a JSet
_justifies_ a commit at `(H,R)` if all the committers (validators in the
commit-set) are each justified in the JSet with no duplicitous vote
signatures (by the committers).
- **Lemma**: When a fork is detected by the existence of two
conflicting [commits](./validators.html#commiting-a-block), the
union of the JSets for both commits (if they can be compiled) must
include double-signing by at least 1/3+ of the validator set.
**Proof**: The commit cannot be at the same round, because that
would immediately imply double-signing by 1/3+. Take the union of
the JSets of both commits. If there is no double-signing by at least
1/3+ of the validator set in the union, then no honest validator
could have precommitted any different block after the first commit.
Yet, +2/3 did. Reductio ad absurdum.
As a corollary, when there is a fork, an external process can determine
the blame by requiring each validator to justify all of its round votes.


@@ -1,14 +1,14 @@
# Light client
A light client is a process that connects to the Tendermint Full Node(s) and then tries to verify the Merkle proofs
about the blockchain application. In this document we describe mechanisms that ensure that the Tendermint light client
has the same level of security as Full Node processes (without being itself a Full Node).
To be able to validate a Merkle proof, a light client needs to validate the blockchain header that contains the root app hash.
Validating a blockchain header in Tendermint consists in verifying that the header is committed (signed) by >2/3 of the
voting power of the corresponding validator set. As the validator set is a dynamic set (it is changing), one of the
core functionalities of the light client is updating the current validator set, that is then used to verify the
blockchain header, and further the corresponding Merkle proofs.
For the purpose of this light client specification, we assume that the Tendermint Full Node exposes the following functions over
Tendermint RPC:
@@ -19,51 +19,50 @@ Validators(height int64) (ResultValidators, error) // returns validator set for
LastHeader(valSetNumber int64) (SignedHeader, error) // returns last header signed by the validator set with the given validator set number
type SignedHeader struct {
    Header       Header
    Commit       Commit
    ValSetNumber int64
}
type ResultValidators struct {
    BlockHeight int64
    Validators  []Validator
    // time the current validator set is initialised, i.e., time of the last validator change before header BlockHeight
    ValSetTime int64
}
```
We assume that Tendermint keeps track of the validator set changes and that each time a validator set is changed it is
being assigned the next sequence number. We can call this number the validator set sequence number. Tendermint also remembers
the Time from the header when the next validator set is initialised (starts to be in power), and we refer to this time
as validator set init time.
Furthermore, we assume that each validator set change is signed (committed) by the current validator set. More precisely,
given a block `H` that contains transactions that are modifying the current validator set, the Merkle root hash of the next
validator set (modified based on transactions from block H) will be in block `H+1` (and signed by the current validator
set), and then starting from the block `H+2`, it will be signed by the next validator set.
Note that the real Tendermint RPC API is slightly different (for example, response messages contain more data and function
names are slightly different); we shortened (and modified) it for the purpose of this document to make the spec more
clear and simple. Furthermore, note that in case of the third function, the returned header has `ValSetNumber` equal to
`valSetNumber+1`.
Locally, the light client manages the following state:
```golang
valSet []Validator // current validator set (last known and verified validator set)
valSetNumber int64 // sequence number of the current validator set
valSetHash []byte // hash of the current validator set
valSetTime int64 // time when the current validator set is initialised
```
The light client is initialised with the trusted validator set, for example based on the known validator set hash,
validator set sequence number and the validator set init time.
The core of the light client logic is captured by the VerifyAndUpdate function that is used to 1) verify if the given header is valid,
and 2) update the validator set (when the given header is valid and it is more recent than the seen headers).
```golang
VerifyAndUpdate(signedHeader SignedHeader):
assertThat signedHeader.valSetNumber >= valSetNumber
if isValid(signedHeader) and signedHeader.Header.Time <= valSetTime + UNBONDING_PERIOD then
setValidatorSet(signedHeader)
return true
@@ -76,7 +75,7 @@ isValid(signedHeader SignedHeader):
assertThat Hash(valSetOfTheHeader) == signedHeader.Header.ValSetHash
assertThat signedHeader is passing basic validation
if votingPower(signedHeader.Commit) > 2/3 * votingPower(valSetOfTheHeader) then return true
else
return false
setValidatorSet(signedHeader SignedHeader):
@@ -85,7 +84,7 @@ setValidatorSet(signedHeader SignedHeader):
valSet = nextValSet.Validators
valSetHash = signedHeader.Header.ValidatorsHash
valSetNumber = signedHeader.ValSetNumber
valSetTime = nextValSet.ValSetTime
votingPower(commit Commit):
votingPower = 0
@@ -96,9 +95,9 @@ votingPower(commit Commit):
votingPower(validatorSet []Validator):
for each validator in validatorSet do:
votingPower += validator.VotingPower
return votingPower
updateValidatorSet(valSetNumberOfTheHeader):
while valSetNumber != valSetNumberOfTheHeader do
signedHeader = LastHeader(valSetNumber)
@@ -110,5 +109,5 @@ updateValidatorSet(valSetNumberOfTheHeader):
Note that in the logic above we assume that the light client will always go upward with respect to header verifications,
i.e., that it will always be used to verify more recent headers. In case a light client needs to be used to verify older
headers (go backward) the same mechanisms and similar logic can be used. In case a call to the FullNode or subsequent
checks fail, a light client needs to implement some recovery strategy, for example connecting to another FullNode.
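
The `votingPower(signedHeader.Commit) > 2/3 * votingPower(valSetOfTheHeader)` test is typically done in integer arithmetic to avoid rounding; a minimal Go sketch (names are ours, not the spec's):

```go
// hasQuorum reports whether the commit carries more than 2/3 of the
// validator set's voting power; 3*commit > 2*valSet avoids floats.
func hasQuorum(commitPower, valSetPower int64) bool {
	return 3*commitPower > 2*valSetPower
}
```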


@@ -4,10 +4,10 @@
`MConnection` is a multiplex connection that supports multiple independent streams
with distinct quality of service guarantees atop a single TCP connection.
Each stream is known as a `Channel` and each `Channel` has a globally unique _byte id_.
Each `Channel` also has a relative priority that determines the quality of service
of the `Channel` compared to other `Channel`s.
The _byte id_ and the relative priorities of each `Channel` are configured upon
initialization of the connection.
The `MConnection` supports three packet types:
@@ -53,13 +53,14 @@ Messages are chosen for a batch one at a time from the channel with the lowest r
## Sending Messages
There are two methods for sending messages:
```go
func (m MConnection) Send(chID byte, msg interface{}) bool {}
func (m MConnection) TrySend(chID byte, msg interface{}) bool {}
```
`Send(chID, msg)` is a blocking call that waits until `msg` is successfully queued
for the channel with the given id byte `chID`. The message `msg` is serialized
using the `tendermint/wire` submodule's `WriteBinary()` reflection routine.
`TrySend(chID, msg)` is a nonblocking call that queues the message msg in the channel
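
A hypothetical usage sketch of the two send paths (the channel ID, message and wrapper function are illustrative, not part of the API above):

```go
// sendOnStateChannel shows the blocking and nonblocking paths.
func sendOnStateChannel(mconn *MConnection, msg interface{}) {
	const stateChannel = byte(0x20) // illustrative channel ID

	if !mconn.Send(stateChannel, msg) {
		// blocking path failed: the connection is shutting down
	}
	if !mconn.TrySend(stateChannel, msg) {
		// nonblocking path: the channel's queue was full; the caller
		// decides whether to drop the message or retry later
	}
}
```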
@@ -76,8 +77,8 @@ and other higher level thread-safe data used by the reactors.
## Switch/Reactor
The `Switch` handles peer connections and exposes an API to receive incoming messages
on `Reactors`. Each `Reactor` is responsible for handling incoming messages of one
or more `Channels`. So while sending outgoing messages is typically performed on the peer,
incoming messages are received on the reactor.


@@ -17,8 +17,9 @@ See [the peer-exchange docs](https://github.com/tendermint/tendermint/blob/maste
## New Full Node
A new node needs a few things to connect to the network:
- a list of seeds, which can be provided to Tendermint via config file or flags,
or hardcoded into the software by in-process apps
- a `ChainID`, also called `Network` at the p2p layer
- a recent block height, H, and hash, HASH, for the blockchain.


@@ -29,26 +29,26 @@ Both handshakes have configurable timeouts (they should complete quickly).
Tendermint implements the Station-to-Station protocol
using X25519 keys for Diffie-Hellman key-exchange and chacha20poly1305 for encryption.
It goes as follows:
- generate an ephemeral X25519 keypair
- send the ephemeral public key to the peer
- wait to receive the peer's ephemeral public key
- compute the Diffie-Hellman shared secret using the peers ephemeral public key and our ephemeral private key
- generate two keys to use for encryption (sending and receiving) and a challenge for authentication as follows:
- create a hkdf-sha256 instance with the key being the diffie hellman shared secret, and info parameter as
`TENDERMINT_SECRET_CONNECTION_KEY_AND_CHALLENGE_GEN`
- get 96 bytes of output from hkdf-sha256
- if we had the smaller ephemeral pubkey, use the first 32 bytes for the key for receiving, the second 32 bytes for sending; else the opposite
- use the last 32 bytes of output for the challenge
- use a separate nonce for receiving and sending. Both nonces start at 0, and should support the full 96 bit nonce range
- all communications from now on are encrypted in 1024 byte frames,
using the respective secret and nonce. Each nonce is incremented by one after each use.
- we now have an encrypted channel, but still need to authenticate
- sign the common challenge obtained from the hkdf with our persistent private key
- send the amino encoded persistent pubkey and signature to the peer
- wait to receive the persistent public key and signature from the peer
- verify the signature on the challenge using the peer's persistent public key
If this is an outgoing connection (we dialed the peer) and we used a peer ID,
then finally verify that the peer's persistent public key corresponds to the peer ID we dialed,
ie. `peer.PubKey.Address() == <ID>`.
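
The key-derivation bullets translate fairly directly into Go; here is a hedged sketch using golang.org/x/crypto/hkdf (function and variable names are ours, not Tendermint's):

```go
import (
	"crypto/sha256"
	"io"

	"golang.org/x/crypto/hkdf"
)

// deriveSecrets sketches the derivation above: 96 bytes of HKDF-SHA256
// output split into a receive key, a send key and the challenge.
// weHaveSmallerPubkey reflects the ephemeral-pubkey comparison above.
func deriveSecrets(dhSecret []byte, weHaveSmallerPubkey bool) (recv, send, challenge [32]byte) {
	r := hkdf.New(sha256.New, dhSecret, nil,
		[]byte("TENDERMINT_SECRET_CONNECTION_KEY_AND_CHALLENGE_GEN"))
	var out [96]byte
	if _, err := io.ReadFull(r, out[:]); err != nil {
		panic(err) // cannot fail for 96 bytes; sketch only
	}
	if weHaveSmallerPubkey {
		copy(recv[:], out[0:32])  // first 32 bytes: receiving key
		copy(send[:], out[32:64]) // second 32 bytes: sending key
	} else {
		copy(send[:], out[0:32])
		copy(recv[:], out[32:64])
	}
	copy(challenge[:], out[64:96]) // last 32 bytes: authentication challenge
	return
}
```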
@@ -69,7 +69,6 @@ an optional whitelist which can be managed through the ABCI app -
if the whitelist is enabled and the peer does not qualify, the connection is
terminated.
### Tendermint Version Handshake
The Tendermint Version Handshake allows the peers to exchange their NodeInfo:
@@ -89,6 +88,7 @@ type NodeInfo struct {
```
The connection is disconnected if:
- `peer.NodeInfo.ID` is not equal `peerConn.ID`
- `peer.NodeInfo.Version` is not formatted as `X.X.X` where X are integers known as Major, Minor, and Revision
- `peer.NodeInfo.Version` Major is not the same as ours
@@ -97,7 +97,6 @@ The connection is disconnected if:
- `peer.NodeInfo.ListenAddr` is malformed or is a DNS host that cannot be
resolved
At this point, if we have not disconnected, the peer is valid.
It is added to the switch and hence all reactors via the `AddPeer` method.
Note that each reactor may handle multiple channels.
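
For illustration, the `X.X.X` version rule could be checked along these lines (a hedged Go sketch; the helper name and exact policy are assumptions based on the list above):

```go
import (
	"strconv"
	"strings"
)

// compatibleVersion sketches the checks above: the peer's version must
// be three integers "Major.Minor.Revision", and Major must match ours.
func compatibleVersion(ours, theirs string) bool {
	po, pt := strings.Split(ours, "."), strings.Split(theirs, ".")
	if len(pt) != 3 {
		return false
	}
	for _, part := range pt {
		if _, err := strconv.Atoi(part); err != nil {
			return false
		}
	}
	return po[0] == pt[0] // disconnect if Major versions differ
}
```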


@@ -1,46 +1,46 @@
## Blockchain Reactor
- coordinates the pool for syncing
- coordinates the store for persistence
- coordinates the playing of blocks towards the app using a sm.BlockExecutor
- handles switching between fastsync and consensus
- it is a p2p.BaseReactor
- starts the pool.Start() and its poolRoutine()
- registers all the concrete types and interfaces for serialisation
### poolRoutine
- listens to these channels:
- pool requests blocks from a specific peer by posting to requestsCh, block reactor then sends
a &bcBlockRequestMessage for a specific height
- pool signals timeout of a specific peer by posting to timeoutsCh
- switchToConsensusTicker to periodically try and switch to consensus
- trySyncTicker to periodically check if we have fallen behind and then catch-up sync
- if there aren't any new blocks available on the pool it skips syncing
- tries to sync the app by taking downloaded blocks from the pool, gives them to the app and stores
them on disk
- implements Receive which is called by the switch/peer
- calls AddBlock on the pool when it receives a new block from a peer
## Block Pool
- responsible for downloading blocks from peers
- makeRequestersRoutine()
- removes timeout peers
- starts new requesters by calling makeNextRequester()
- requestRoutine():
- picks a peer and sends the request, then blocks until:
- pool is stopped by listening to pool.Quit
- requester is stopped by listening to Quit
- request is redone
- we receive a block
- gotBlockCh is strange
## Block Store
- persists blocks to disk
# TODO
- How does the switch from bcR to conR happen? Does conR persist blocks to disk too?
- What is the interaction between the consensus and blockchain reactors?


@@ -46,11 +46,11 @@ type bcStatusResponseMessage struct {
## Architecture and algorithm
The Blockchain reactor is organised as a set of concurrent tasks:
- Receive routine of Blockchain Reactor
- Task for creating Requesters
- Set of Requesters tasks and
- Controller task.
![Blockchain Reactor Architecture Diagram](img/bc-reactor.png)
@@ -58,41 +58,39 @@ The Blockchain reactor is organised as a set of concurrent tasks:
These are the core data structures necessary to provide the Blockchain Reactor logic.
Requester data structure is used to track assignment of request for `block` at position `height` to a peer with id equal to `peerID`.
```go
type Requester {
mtx Mutex
block Block
height int64
peerID p2p.ID
redoChannel chan struct{}
}
```
Pool is core data structure that stores last executed block (`height`), assignment of requests to peers (`requesters`), current height for each peer and number of pending requests for each peer (`peers`), maximum peer height, etc.
```go
type Pool {
mtx Mutex
requesters map[int64]*Requester
height int64
peers map[p2p.ID]*Peer
maxPeerHeight int64
numPending int32
store BlockStore
requestsChannel chan<- BlockRequest
errorsChannel chan<- peerError
}
```
Peer data structure stores for each peer current `height` and number of pending requests sent to the peer (`numPending`), etc.
```go
type Peer struct {
id p2p.ID
height int64
numPending int32
timeout *time.Timer
@@ -100,202 +98,202 @@ type Peer struct {
}
```
BlockRequest is internal data structure used to denote current mapping of request for a block at some `height` to a peer (`PeerID`).
```go
type BlockRequest {
Height int64
PeerID p2p.ID
}
```
### Receive routine of Blockchain Reactor
It is executed upon message reception on the BlockchainChannel inside p2p receive routine. There is a separate p2p receive routine (and therefore receive routine of the Blockchain Reactor) executed for each peer. Note that try to send will not block (returns immediately) if outgoing buffer is full.
```go
handleMsg(pool, m):
    upon receiving bcBlockRequestMessage m from peer p:
        block = load block for height m.Height from pool.store
        if block != nil then
            try to send BlockResponseMessage(block) to p
        else
            try to send bcNoBlockResponseMessage(m.Height) to p

    upon receiving bcBlockResponseMessage m from peer p:
        pool.mtx.Lock()
        requester = pool.requesters[m.Height]
        if requester == nil then
            error("peer sent us a block we didn't expect")
            continue

        if requester.block == nil and requester.peerID == p then
            requester.block = m
            pool.numPending -= 1 // atomic decrement
            peer = pool.peers[p]
            if peer != nil then
                peer.numPending--
                if peer.numPending == 0 then
                    peer.timeout.Stop()
                    // NOTE: we don't send Quit signal to the corresponding requester task!
                else
                    trigger peer timeout to expire after peerTimeout
        pool.mtx.Unlock()

    upon receiving bcStatusRequestMessage m from peer p:
        try to send bcStatusResponseMessage(pool.store.Height)

    upon receiving bcStatusResponseMessage m from peer p:
        pool.mtx.Lock()
        peer = pool.peers[p]
        if peer != nil then
            peer.height = m.height
        else
            peer = create new Peer data structure with id = p and height = m.Height
            pool.peers[p] = peer

        if m.Height > pool.maxPeerHeight then
            pool.maxPeerHeight = m.Height
        pool.mtx.Unlock()

onTimeout(p):
    send error message to pool error channel
    peer = pool.peers[p]
    peer.didTimeout = true
```
### Requester tasks
Requester task is responsible for fetching a single block at position `height`.
```go
fetchBlock(height, pool):
    while true do
        peerID = nil
        block = nil
        peer = pickAvailablePeer(height)
        peerID = peer.id

        enqueue BlockRequest(height, peerID) to pool.requestsChannel
        redo = false
        while !redo do
            select {
                upon receiving Quit message do
                    return
                upon receiving message on redoChannel do
                    mtx.Lock()
                    pool.numPending++
                    redo = true
                    mtx.UnLock()
            }

pickAvailablePeer(height):
    selectedPeer = nil
    while selectedPeer = nil do
        pool.mtx.Lock()
        for each peer in pool.peers do
            if !peer.didTimeout and peer.numPending < maxPendingRequestsPerPeer and peer.height >= height then
                peer.numPending++
                selectedPeer = peer
                break
        pool.mtx.Unlock()
        if selectedPeer = nil then
            sleep requestIntervalMS
    return selectedPeer
```
### Task for creating Requesters
This task is responsible for continuously creating and starting Requester tasks.
```go
createRequesters(pool):
    while true do
        if !pool.isRunning then break
        if pool.numPending < maxPendingRequests or size(pool.requesters) < maxTotalRequesters then
            pool.mtx.Lock()
            nextHeight = pool.height + size(pool.requesters)
            requester = create new requester for height nextHeight
            pool.requesters[nextHeight] = requester
            pool.numPending += 1 // atomic increment
            start requester task
            pool.mtx.Unlock()
        else
            sleep requestIntervalMS
        pool.mtx.Lock()
        for each peer in pool.peers do
            if !peer.didTimeout && peer.numPending > 0 && peer.curRate < minRecvRate then
                send error on pool error channel
                peer.didTimeout = true
            if peer.didTimeout then
                for each requester in pool.requesters do
                    if requester.getPeerID() == peer then
                        enqueue msg on requester's redoChannel
                delete(pool.peers, peerID)
        pool.mtx.Unlock()
```
### Main blockchain reactor controller task
```go
main(pool):
    create trySyncTicker with interval trySyncIntervalMS
    create statusUpdateTicker with interval statusUpdateIntervalSeconds
    create switchToConsensusTicker with interval switchToConsensusIntervalSeconds

    while true do
        select {
            upon receiving BlockRequest(Height, Peer) on pool.requestsChannel:
                try to send bcBlockRequestMessage(Height) to Peer

            upon receiving error(peer) on errorsChannel:
                stop peer for error

            upon receiving message on statusUpdateTickerChannel:
                broadcast bcStatusRequestMessage(bcR.store.Height) // message sent in a separate routine

            upon receiving message on switchToConsensusTickerChannel:
                pool.mtx.Lock()
                receivedBlockOrTimedOut = pool.height > 0 || (time.Now() - pool.startTime) > 5 Seconds
                ourChainIsLongestAmongPeers = pool.maxPeerHeight == 0 || pool.height >= pool.maxPeerHeight
                haveSomePeers = size of pool.peers > 0
                pool.mtx.Unlock()
                if haveSomePeers && receivedBlockOrTimedOut && ourChainIsLongestAmongPeers then
                    switch to consensus mode

            upon receiving message on trySyncTickerChannel:
                for i = 0; i < 10; i++ do
                    pool.mtx.Lock()
                    firstBlock = pool.requesters[pool.height].block
                    secondBlock = pool.requesters[pool.height+1].block
                    if firstBlock == nil or secondBlock == nil then continue
                    pool.mtx.Unlock()
                    verify firstBlock using LastCommit from secondBlock
                    if verification failed
                        pool.mtx.Lock()
                        peerID = pool.requesters[pool.height].peerID
                        redoRequestsForPeer(peerID)
                        delete(pool.peers, peerID)
                        stop peer peerID for error
                        pool.mtx.Unlock()
                    else
                        delete(pool.requesters, pool.height)
                        save firstBlock to store
                        pool.height++
                        execute firstBlock
        }

redoRequestsForPeer(pool, peerID):
    for each requester in pool.requesters do
        if requester.getPeerID() == peerID
            enqueue msg on redoChannel for requester
```
## Channels
Defines `maxMsgSize` for the maximum size of incoming messages,


@@ -1,49 +1,48 @@
# Consensus Reactor
Consensus Reactor defines a reactor for the consensus service. It contains the ConsensusState service that
manages the state of the Tendermint consensus internal state machine.
When Consensus Reactor is started, it starts Broadcast Routine which starts ConsensusState service.
Furthermore, for each peer that is added to the Consensus Reactor, it creates (and manages) the known peer state
(that is used extensively in gossip routines) and starts the following three routines for the peer p:
Gossip Data Routine, Gossip Votes Routine and QueryMaj23Routine. Finally, Consensus Reactor is responsible
for decoding messages received from a peer and for adequate processing of the message depending on its type and content.
The processing normally consists of updating the known peer state and for some messages
(`ProposalMessage`, `BlockPartMessage` and `VoteMessage`) also forwarding message to ConsensusState module
for further processing. In the following text we specify the core functionality of those separate unit of executions
that are part of the Consensus Reactor.
The processing normally consists of updating the known peer state and for some messages
(`ProposalMessage`, `BlockPartMessage` and `VoteMessage`) also forwarding message to ConsensusState module
for further processing. In the following text we specify the core functionality of those separate unit of executions
that are part of the Consensus Reactor.
## ConsensusState service
Consensus State handles execution of the Tendermint BFT consensus algorithm. It processes votes and proposals,
and upon reaching agreement, commits blocks to the chain and executes them against the application.
The internal state machine receives input from peers, the internal validator and from a timer.

Inside Consensus State we have the following units of execution: Timeout Ticker and Receive Routine.
Timeout Ticker is a timer that schedules timeouts conditional on the height/round/step that are processed
by the Receive Routine.
### Receive Routine of the ConsensusState service
Receive Routine of the ConsensusState handles messages which may cause internal consensus state transitions.
It is the only routine that updates RoundState that contains internal consensus state.
Updates (state transitions) happen on timeouts, complete proposals, and 2/3 majorities.
It receives messages from peers, internal validators and from Timeout Ticker
and invokes the corresponding handlers, potentially updating the RoundState.
The details of the protocol (together with formal proofs of correctness) implemented by the Receive Routine are
discussed in a separate document. For the purposes of this document
it is sufficient to understand that the Receive Routine manages and updates the RoundState data structure that is
then extensively used by the gossip routines to determine what information should be sent to peer processes.
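
As a rough sketch (channel and handler names here are illustrative, not the exact API), the Receive Routine has the shape of a single goroutine that owns the RoundState and serializes all transitions:

```go
package consensus

// Stand-in types for this sketch; the real ConsensusState is richer.
type msgInfo struct{}
type timeoutInfo struct{}

type timeoutTicker struct{ ch chan timeoutInfo }

func (t *timeoutTicker) Chan() <-chan timeoutInfo { return t.ch }

type ConsensusState struct {
	peerMsgQueue     chan msgInfo
	internalMsgQueue chan msgInfo
	timeoutTicker    *timeoutTicker
}

func (cs *ConsensusState) handleMsg(m msgInfo)          { /* update RoundState */ }
func (cs *ConsensusState) handleTimeout(ti timeoutInfo) { /* height/round/step timeouts */ }

// receiveRoutine is the single goroutine that owns RoundState and
// serializes all state transitions.
func (cs *ConsensusState) receiveRoutine() {
	for {
		select {
		case m := <-cs.peerMsgQueue:
			cs.handleMsg(m) // proposals, block parts and votes from peers
		case m := <-cs.internalMsgQueue:
			cs.handleMsg(m) // our own validator's proposals and votes
		case ti := <-cs.timeoutTicker.Chan():
			cs.handleTimeout(ti) // timeouts conditioned on height/round/step
		}
	}
}
```
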
## Round State
RoundState defines the internal consensus state. It contains height, round, round step, a current validator set,
a proposal and proposal block for the current round, locked round and block (if some block is being locked), set of
received votes and last commit and last validators set.
```golang
type RoundState struct {
Height int64
Round int
Step RoundStepType
Validators ValidatorSet
Proposal Proposal
ProposalBlock Block
ProposalBlockParts PartSet
LockedRound int
LockedBlock Block
LockedBlockParts PartSet
Votes HeightVoteSet
LastCommit VoteSet
LastValidators ValidatorSet
}
```
Internally, consensus will run as a state machine with the following states:

- RoundStepNewHeight
- RoundStepNewRound
- RoundStepPropose
- RoundStepPrevote
- RoundStepPrevoteWait
- RoundStepPrecommit
- RoundStepPrecommitWait
- RoundStepCommit

## Peer Round State

Peer round state contains the known state of a peer. It is being updated by the Receive routine of
Consensus Reactor and by the gossip routines upon sending a message to the peer.

```golang
type PeerRoundState struct {
Height int64 // Height peer is at
Round int // Round peer is at, -1 if unknown.
Step RoundStepType // Step peer is at
Proposal bool // True if peer has proposal for this round
ProposalBlockPartsHeader PartSetHeader
ProposalBlockParts BitArray
ProposalPOLRound int // Proposal's POL round. -1 if none.
ProposalPOL BitArray // nil until ProposalPOLMessage received.
Prevotes BitArray // All votes peer has for this round
Precommits BitArray // All precommits peer has for this round
LastCommitRound int // Round of commit for last height. -1 if none.
LastCommit BitArray // All commit precommits of commit for last height.
CatchupCommitRound int // Round that we have commit for. Not necessarily unique. -1 if none.
CatchupCommit BitArray // All commit precommits peer has for this height & CatchupCommitRound
}
```
## Receive method of Consensus reactor
The entry point of the Consensus reactor is a receive method. When a message is received from a peer p,
normally the peer round state is updated correspondingly, and some messages
are passed for further processing, for example to ConsensusState service. We now specify the processing of messages
in the receive method of Consensus reactor for each message type. In the following message handlers, `rs` and `prs` denote
`RoundState` and `PeerRoundState`, respectively.
### NewRoundStepMessage handler

```
handleMessage(msg):
if msg is from smaller height/round/step then return
// Just remember these values.
prsHeight = prs.Height
prsRound = prs.Round
prsCatchupCommitRound = prs.CatchupCommitRound
prsCatchupCommit = prs.CatchupCommit
Update prs with values from msg
if prs.Height or prs.Round has been updated then
reset Proposal related fields of the peer state
if prs.Round has been updated and msg.Round == prsCatchupCommitRound then
prs.Precommits = prsCatchupCommit
if prs.Height has been updated then
if prsHeight+1 == msg.Height && prsRound == msg.LastCommitRound then
prs.LastCommitRound = msg.LastCommitRound
prs.LastCommit = prs.Precommits
} else {
prs.LastCommitRound = msg.LastCommitRound
prs.LastCommit = nil
}
Reset prs.CatchupCommitRound and prs.CatchupCommit
```

### CommitStepMessage handler

```
handleMessage(msg):
if prs.Height == msg.Height then
prs.ProposalBlockPartsHeader = msg.BlockPartsHeader
prs.ProposalBlockParts = msg.BlockParts
```

### HasVoteMessage handler

```
handleMessage(msg):
if prs.Height == msg.Height then
prs.setHasVote(msg.Height, msg.Round, msg.Type, msg.Index)
```

### VoteSetMaj23Message handler

```
handleMessage(msg):
if prs.Height == msg.Height then
Record in rs that a peer claims to have +2/3 majority for msg.BlockID
Send VoteSetBitsMessage showing votes node has for that BlockID
```
### ProposalMessage handler
```
handleMessage(msg):
if prs.Height != msg.Height || prs.Round != msg.Round || prs.Proposal then return
prs.Proposal = true
prs.ProposalBlockPartsHeader = msg.BlockPartsHeader
prs.ProposalBlockParts = empty set
prs.ProposalPOLRound = msg.POLRound
prs.ProposalPOL = nil
Send msg through internal peerMsgQueue to ConsensusState service
```

### ProposalPOLMessage handler

```
handleMessage(msg):
if prs.Height != msg.Height or prs.ProposalPOLRound != msg.ProposalPOLRound then return
prs.ProposalPOL = msg.ProposalPOL
```

### BlockPartMessage handler

```
handleMessage(msg):
if prs.Height != msg.Height || prs.Round != msg.Round then return
Record in prs that peer has block part msg.Part.Index
Send msg through internal peerMsgQueue to ConsensusState service
```

### VoteMessage handler

```
handleMessage(msg):
Record in prs that a peer knows vote with index msg.vote.ValidatorIndex for particular height and round
Send msg through internal peerMsgQueue to ConsensusState service
```

### VoteSetBitsMessage handler

```
handleMessage(msg):
Update prs for the bit-array of votes peer claims to have for the msg.BlockID
```
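
To make the bookkeeping in the handlers above concrete, here is a hedged Go sketch of what `prs.setHasVote` might do; `BitArray`, the vote-type constants (prevote = 1, precommit = 2, as in the block spec) and the trimmed-down struct are simplified stand-ins for the real types:

```go
package consensus

// Minimal stand-ins for this sketch; the real BitArray and vote-type
// constants live in the Tendermint codebase.
type BitArray []bool

func (ba BitArray) SetIndex(i int, v bool) {
	if i >= 0 && i < len(ba) {
		ba[i] = v
	}
}

const (
	VoteTypePrevote   = byte(0x01)
	VoteTypePrecommit = byte(0x02)
)

type PeerRoundState struct {
	Height     int64
	Round      int
	Prevotes   BitArray
	Precommits BitArray
}

// setHasVote flips the bit for the validator's index in the bit-array
// matching (round, type). This sketch only tracks the current round;
// the real implementation also handles LastCommit and catchup commits.
func (prs *PeerRoundState) setHasVote(height int64, round int, voteType byte, index int) {
	if height != prs.Height || round != prs.Round {
		return
	}
	switch voteType {
	case VoteTypePrevote:
		prs.Prevotes.SetIndex(index, true)
	case VoteTypePrecommit:
		prs.Precommits.SetIndex(index, true)
	}
}
```
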
## Gossip Data Routine
It is used to send the following messages to the peer: `BlockPartMessage`, `ProposalMessage` and
`ProposalPOLMessage` on the DataChannel. The gossip data routine is based on the local RoundState (`rs`)
and the known PeerRoundState (`prs`). The routine repeats forever the logic shown below:
```
1a) if rs.ProposalBlockPartsHeader == prs.ProposalBlockPartsHeader and the peer does not have all the proposal parts then
      Part = pick a random proposal block part the peer does not have
      Send BlockPartMessage(rs.Height, rs.Round, Part) to the peer on the DataChannel
      if send returns true, record that the peer knows the corresponding block Part
      Continue

1b) if (0 < prs.Height) and (prs.Height < rs.Height) then
      help peer catch up using gossipDataForCatchup function
      Continue

1c) if (rs.Height != prs.Height) or (rs.Round != prs.Round) then
      Sleep PeerGossipSleepDuration
      Continue

// at this point rs.Height == prs.Height and rs.Round == prs.Round

1d) if (rs.Proposal != nil and !prs.Proposal) then
      Send ProposalMessage(rs.Proposal) to the peer
      if send returns true, record that the peer knows Proposal
      if 0 <= rs.Proposal.POLRound then
        polRound = rs.Proposal.POLRound
        prevotesBitArray = rs.Votes.Prevotes(polRound).BitArray()
        Send ProposalPOLMessage(rs.Height, polRound, prevotesBitArray)
      Continue

2)  Sleep PeerGossipSleepDuration
```
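
The recurring step "pick a random proposal block part the peer does not have" is a bit-array difference followed by a random choice. A minimal sketch, with a simplified `BitArray` stand-in:

```go
package gossip

import "math/rand"

// BitArray is a minimal stand-in for the one in the Tendermint codebase.
type BitArray []bool

// pickRandomMissing returns an index i such that ours[i] && !theirs[i],
// or -1 if the peer already has every part we have.
func pickRandomMissing(ours, theirs BitArray) int {
	var candidates []int
	for i := range ours {
		if ours[i] && (i >= len(theirs) || !theirs[i]) {
			candidates = append(candidates, i)
		}
	}
	if len(candidates) == 0 {
		return -1
	}
	return candidates[rand.Intn(len(candidates))]
}
```
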
### Gossip Data For Catchup
This function is responsible for helping the peer catch up if it is at a smaller height (prs.Height < rs.Height).
The function executes the following logic:

    if peer does not have all block parts for prs.ProposalBlockPart then
      blockMeta = Load Block Metadata for height prs.Height from blockStore
      if (blockMeta.BlockID.PartsHeader != prs.ProposalBlockPartsHeader) then
        Sleep PeerGossipSleepDuration
        return
      Part = pick a random proposal block part the peer does not have
      Send BlockPartMessage(prs.Height, prs.Round, Part) to the peer on the DataChannel
      if send returns true, record that the peer knows the corresponding block Part
      return
    else Sleep PeerGossipSleepDuration
## Gossip Votes Routine
It is used to send the following message: `VoteMessage` on the VoteChannel.
The gossip votes routine is based on the local RoundState (`rs`)
and the known PeerRoundState (`prs`). The routine repeats forever the logic shown below:
```
1a) if rs.Height == prs.Height then
      if prs.Step == RoundStepNewHeight then
        vote = random vote from rs.LastCommit the peer does not have
        Send VoteMessage(vote) to the peer
        if send returns true, continue

      if prs.Step <= RoundStepPrevote and prs.Round != -1 and prs.Round <= rs.Round then
        Prevotes = rs.Votes.Prevotes(prs.Round)
        vote = random vote from Prevotes the peer does not have
        Send VoteMessage(vote) to the peer
        if send returns true, continue

      if prs.Step <= RoundStepPrecommit and prs.Round != -1 and prs.Round <= rs.Round then
        Precommits = rs.Votes.Precommits(prs.Round)
        vote = random vote from Precommits the peer does not have
        Send VoteMessage(vote) to the peer
        if send returns true, continue

      if prs.ProposalPOLRound != -1 then
        PolPrevotes = rs.Votes.Prevotes(prs.ProposalPOLRound)
        vote = random vote from PolPrevotes the peer does not have
        Send VoteMessage(vote) to the peer
        if send returns true, continue

1b) if prs.Height != 0 and rs.Height == prs.Height+1 then
      vote = random vote from rs.LastCommit peer does not have
      Send VoteMessage(vote) to the peer
      if send returns true, continue

1c) if prs.Height != 0 and rs.Height >= prs.Height+2 then
      Commit = get commit from BlockStore for prs.Height
      vote = random vote from Commit the peer does not have
      Send VoteMessage(vote) to the peer
      if send returns true, continue

2)  Sleep PeerGossipSleepDuration
```
## QueryMaj23Routine
It is used to send the following message: `VoteSetMaj23Message`. `VoteSetMaj23Message` is sent to indicate that a given
BlockID has seen +2/3 votes. This routine is based on the local RoundState (`rs`) and the known PeerRoundState
(`prs`). The routine repeats forever the logic shown below.
```
Send m to peer
Sleep PeerQueryMaj23SleepDuration
1d) if prs.CatchupCommitRound != -1 and 0 < prs.Height and
      prs.Height <= blockStore.Height() then
Commit = LoadCommit(prs.Height)
m = VoteSetMaj23Message(prs.Height,Commit.Round,Precommit,Commit.blockId)
Send m to peer
      Sleep PeerQueryMaj23SleepDuration

2)  Sleep PeerQueryMaj23SleepDuration
```

## Broadcast routine
The Broadcast routine subscribes to an internal event bus to receive new round steps, vote messages and proposal
heartbeat messages, and broadcasts messages to peers upon receiving those events.
It broadcasts `NewRoundStepMessage` or `CommitStepMessage` upon new round state event. Note that
broadcasting these messages does not depend on the PeerRoundState; it is sent on the StateChannel.
Upon receiving VoteMessage it broadcasts `HasVoteMessage` message to its peers on the StateChannel.
`ProposalHeartbeatMessage` is sent the same way on the StateChannel.
## Channels
Defines 4 channels: state, data, vote and vote_set_bits. Each channel
has `SendQueueCapacity` and `RecvBufferCapacity` and
`RecvMessageCapacity` set to `maxMsgSize`.
Sending incorrectly encoded data will result in stopping the peer.

Validators in Tendermint communicate by a peer-to-peer gossiping protocol. Each validator is connected
only to a subset of processes called peers. By the gossiping protocol, a validator sends to its peers
all needed information (`ProposalMessage`, `VoteMessage` and `BlockPartMessage`) so they can
reach agreement on some block, and also obtain the content of the chosen block (block parts). As
part of the gossiping protocol, processes also send auxiliary messages that inform peers about the
executed steps of the core consensus algorithm (`NewRoundStepMessage` and `CommitStepMessage`), and

# Proposer selection procedure in Tendermint
This document specifies the Proposer Selection Procedure that is used in Tendermint to choose a round proposer.
As Tendermint is a "leader-based" protocol, the proposer selection is critical for its correct functioning.
Let us denote by `proposer_p(h,r)` the process returned by the Proposer Selection Procedure at the process p, at height h
and round r. Then the Proposer Selection procedure should fulfill the following properties:
`Agreement`: Given a validator set V, and two honest validators,
p and q, for each height h, and each round r,
proposer_p(h,r) = proposer_q(h,r)
`Liveness`: In every consecutive sequence of rounds of size K (K is a system parameter), at least a
single round has an honest proposer.
`Fairness`: The proposer selection is proportional to the validator voting power, i.e., a validator with more
voting power is selected more frequently, proportional to its power. More precisely, given a set of processes
with the total voting power N, during a sequence of rounds of size N, every process is proposer in a number of rounds
equal to its voting power.
We now look at a few particular cases to understand better how fairness should be implemented.
If we have 4 processes with the following voting power distribution (p0,4), (p1, 2), (p2, 2), (p3, 2) at some round r,
fairness requires that, over a long sequence of rounds, p0 is selected as proposer twice as often as each of p1, p2 and p3.

Let us now consider the following scenario, where the total voting power of faulty processes is aggregated in a single process
p0: (p0,3), (p1, 1), (p2, 1), (p3, 1), (p4, 1), (p5, 1), (p6, 1), (p7, 1).
In this case the sequence of proposer selections looks like this:
`p0, p1, p2, p3, p0, p4, p5, p6, p7, p0, p0, p1, p2, p3, p0, p4, p5, p6, p7, p0, etc`
In this case, we see that the number of rounds coordinated by a faulty process is proportional to its voting power.
We also consider the case where voting power is uniformly distributed among processes, i.e., we have 10 processes
each with voting power of 1, and there are 3 faulty processes with consecutive addresses,
for example the first 3 processes are faulty. Then the sequence looks like this:
`p0, p1, p2, p3, p4, p5, p6, p7, p8, p9, p0, p1, p2, p3, p4, p5, p6, p7, p8, p9, etc`
In this case, we have 3 consecutive rounds with a faulty proposer.
One special case we consider is the case where a single honest process p0 has most of the voting power, for example:
(p0,100), (p1, 2), (p2, 3), (p3, 4). Then the sequence of proposer selection looks like this:
`p0, p0, p0, p0, p0, p0, p0, p0, p0, p0, p0, p0, p0, p1, p0, p0, p0, p0, p0, etc`
This basically means that almost all rounds have the same proposer. But in this case, the process p0 anyway has enough
voting power to decide whatever it wants, so the fact that it coordinates almost all rounds seems correct.
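
Tendermint achieves this proportionality with a weighted round-robin scheme of the following flavor (the sketch below is illustrative, not the exact implementation): each round, every validator's accumulator grows by its voting power, the validator with the largest accumulator proposes, and the proposer's accumulator is decremented by the total voting power.

```go
package main

import "fmt"

type val struct {
	name  string
	power int64
	accum int64
}

// propose runs one round of the weighted round-robin: grow every
// accumulator by its power, pick the maximum, charge it the total power.
func propose(vals []*val) *val {
	var total int64
	var best *val
	for _, v := range vals {
		v.accum += v.power
		total += v.power
		if best == nil || v.accum > best.accum {
			best = v
		}
	}
	best.accum -= total
	return best
}

func main() {
	vals := []*val{{"p0", 3, 0}, {"p1", 1, 0}, {"p2", 1, 0}, {"p3", 1, 0}}
	for i := 0; i < 8; i++ {
		fmt.Print(propose(vals).name, " ")
	}
	fmt.Println() // prints: p0 p1 p0 p2 p3 p0 p0 p1
}
```

Running this for powers (3,1,1,1) reproduces the proportionality discussed above: p0, holding half of the total power, proposes in half of the rounds.
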

Look at the concurrency model this uses...
- Receiving CheckTx
- Broadcasting new tx
- Interfaces with consensus engine, reap/update while checking
- Calling the ABCI app (ordering. callbacks. how proxy works alongside the blockchain proxy which actually writes blocks)

## Recheck Empty

Flag: `--mempool.recheck_empty=false`
Environment: `TM_MEMPOOL_RECHECK_EMPTY=false`
Config:
```
[mempool]
recheck_empty = false
```
## Recheck
`--mempool.recheck=false` (default: true)

There are two sides to the mempool state:
- External: get, check, and broadcast new transactions
- Internal: return valid transaction, update list after block commit
## External functionality
External functionality is exposed via network interfaces
to potentially untrusted actors.
- CheckTx - triggered via RPC or P2P
- Broadcast - gossip messages after a successful check
## Internal functionality
Internal functionality is exposed via method calls to other
code compiled into the tendermint binary.
- Reap - get tx to propose in next block
- Update - remove tx that were included in last block
- ABCI.CheckTx - call ABCI app to validate the tx
What does it provide the consensus reactor?
What guarantees does it need from the ABCI app?
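
A minimal Go interface sketch of the two sides listed above; the method names are simplified, not the exact tendermint mempool API:

```go
package mempool

type Tx []byte

// Mempool sketches the two-sided functionality described above.
type Mempool interface {
	// External side: validate and admit a transaction
	// (triggered via RPC or P2P); valid txs are then gossiped.
	CheckTx(tx Tx) error

	// Internal side: called from code compiled into the tendermint binary.
	Reap(maxTxs int) []Tx                      // txs to propose in the next block
	Update(height int64, committed []Tx) error // drop txs included in the last block
}
```
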

Request (`POST http://gaia.zone:26657/`):
```json
{
"id": "",
"jsonrpc": "2.0",
"method": "broadcast_sync",
"params": {
"id": "",
"jsonrpc": "2.0",
"method": "broadcast_sync",
"params": {
"tx": "F012A4BC68..."
}
}
}
```

Response:
```json
{
"error": "",
"result": {
"hash": "E39AAB7A537ABAA237831742DCE1117F187C3C52",
"log": "",
"data": "",
"code": 0
},
"id": "",
"jsonrpc": "2.0"
"error": "",
"result": {
"hash": "E39AAB7A537ABAA237831742DCE1117F187C3C52",
"log": "",
"data": "",
"code": 0
},
"id": "",
"jsonrpc": "2.0"
}
```
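
For illustration, the same request issued from Go using only the standard library (`gaia.zone` is the placeholder host from the example above):

```go
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// JSON-RPC envelope copied from the request example above.
	body := []byte(`{
	  "id": "",
	  "jsonrpc": "2.0",
	  "method": "broadcast_sync",
	  "params": { "tx": "F012A4BC68..." }
	}`)

	resp, err := http.Post("http://gaia.zone:26657/", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err) // gaia.zone is a placeholder; expect an error unless it resolves
	}
	defer resp.Body.Close()

	out, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // expect the hash/log/data/code envelope shown above
}
```
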

## Select Peers to Exchange
When we're asked for peers, we select them as follows (a sketch in code follows the list):
- select at most `maxGetSelection` peers
- try to select at least `minGetSelection` peers - if we have less than that, select them all.
- select a random, unbiased `getSelectionPercent` of the peers
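
An illustrative sketch of this selection; the three constants are the knobs named above, and the values given here are assumptions for the example, not normative:

```go
package pex

import "math/rand"

// Assumed example values for the knobs named in the text above.
const (
	maxGetSelection     = 250
	minGetSelection     = 32
	getSelectionPercent = 23
)

// selectPeers returns a random, unbiased sample of the peer list,
// clamped between minGetSelection and maxGetSelection.
func selectPeers(peers []string) []string {
	n := len(peers) * getSelectionPercent / 100
	if n < minGetSelection {
		n = minGetSelection
	}
	if n > maxGetSelection {
		n = maxGetSelection
	}
	if n > len(peers) {
		n = len(peers) // fewer peers than the minimum: select them all
	}
	rand.Shuffle(len(peers), func(i, j int) { peers[i], peers[j] = peers[j], peers[i] })
	return peers[:n]
}
```
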
See the [trustmetric](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-006-trust-metric.md)
and [trustmetric usage](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-007-trust-metric-usage.md)
architecture docs for more details.

Thus, during Commit, it is safe to reset the QueryState and the CheckTxState to the latest committed state.
Note, however, that it is not possible to send transactions to Tendermint during Commit - if your app
tries to send a `/broadcast_tx` to Tendermint during Commit, it will deadlock.
## EndBlock Validator Updates
Updates to the Tendermint validator set can be made by returning `Validator`
objects in the `ResponseEndBlock`:

```
message Validator {
  bytes address
  PubKey pub_key
  int64 power
}

message PubKey {
string type
bytes data
}
```
The `pub_key` currently supports two types:
- `type = "ed25519" and `data = <raw 32-byte public key>`
- `type = "secp256k1" and `data = <33-byte OpenSSL compressed public key>`
- `type = "ed25519" and`data = <raw 32-byte public key>`
- `type = "secp256k1" and `data = <33-byte OpenSSL compressed public key>`
If the address is provided, it must match the address of the pubkey, as
specified [here](/docs/spec/blockchain/encoding.md#Addresses)

Validator updates must obey the following rules:
- if power is 0, the validator must already exist, and will be removed from the
validator set
- if power is non-0:
  - if the validator does not already exist, it will be added to the validator
    set with the given power
  - if the validator does already exist, its power will be adjusted to the given power (see the sketch below)
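
A small sketch of these rules applied to a map from pubkey (in string form) to voting power; purely illustrative, not the Tendermint implementation:

```go
package validators

import "errors"

// applyUpdate enforces the update rules listed above on a toy
// representation of the validator set.
func applyUpdate(set map[string]int64, pubKey string, power int64) error {
	if power < 0 {
		return errors.New("voting power must be non-negative")
	}
	if power == 0 {
		// Removal: the validator must already exist.
		if _, ok := set[pubKey]; !ok {
			return errors.New("cannot remove a validator that is not in the set")
		}
		delete(set, pubKey)
		return nil
	}
	// Non-zero power: add a new validator, or adjust an existing one.
	set[pubKey] = power
	return nil
}
```
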
## InitChain Validator Updates

## Peer Filters
When Tendermint connects to a peer, it sends two queries to the ABCI application
using the following paths, with no additional data:
- `/p2p/filter/addr/<IP:PORT>`, where `<IP:PORT>` denotes the IP address and
  the port of the connection
- `p2p/filter/id/<ID>`, where `<ID>` is the peer node ID (ie. the
  pubkey.Address() for the peer's PubKey)
If either of these queries returns a non-zero ABCI code, Tendermint will refuse
to connect to the peer.
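
As a sketch of the application side, a Query handler might answer these paths as follows; the banned-address bookkeeping is an assumption for the example, and only the request/response shapes come from the ABCI types package:

```go
package app

import (
	"strings"

	abcitypes "github.com/tendermint/tendermint/abci/types"
)

// Example bookkeeping: addresses and node IDs we refuse to peer with.
// This is an assumption for the sketch, not part of the ABCI spec.
var (
	bannedAddrs = map[string]bool{"1.2.3.4:26656": true}
	bannedIDs   = map[string]bool{}
)

// Query answers the two filter paths described above; any non-zero Code
// makes Tendermint refuse the connection.
func Query(req abcitypes.RequestQuery) abcitypes.ResponseQuery {
	switch {
	case strings.HasPrefix(req.Path, "/p2p/filter/addr/"):
		if bannedAddrs[strings.TrimPrefix(req.Path, "/p2p/filter/addr/")] {
			return abcitypes.ResponseQuery{Code: 1}
		}
	case strings.HasPrefix(req.Path, "p2p/filter/id/"):
		if bannedIDs[strings.TrimPrefix(req.Path, "p2p/filter/id/")] {
			return abcitypes.ResponseQuery{Code: 1}
		}
	}
	return abcitypes.ResponseQuery{Code: 0} // zero: allow the peer
}
```
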

## Handshake

On startup, Tendermint calls Info on the Query connection to get the latest
committed state of the app. The app MUST return information consistent with the
last block it successfully completed Commit for.
If the app successfully committed block H but not H+1, then `last_block_height = H` and `last_block_app_hash = <hash returned by Commit for block H>`. If the app
failed during the Commit of block H, then `last_block_height = H-1` and
`last_block_app_hash = <hash returned by Commit for block H-1, which is the hash in the header of block H>`.
We now distinguish three heights, and describe how Tendermint syncs itself with
the app.

If `storeBlockHeight > stateBlockHeight+1`, panic
Now, the meat:
If `storeBlockHeight == stateBlockHeight && appBlockHeight < storeBlockHeight`,
replay all blocks in full from `appBlockHeight` to `storeBlockHeight`.
This happens if we completed processing the block, but the app forgot its height.
If `storeBlockHeight == stateBlockHeight && appBlockHeight == storeBlockHeight`, we're done
This happens if we crashed at an opportune spot.
If `storeBlockHeight == stateBlockHeight+1`
This happens if we started processing the block but didn't finish.
If `appBlockHeight < stateBlockHeight`
    replay all blocks in full from `appBlockHeight` to `storeBlockHeight-1`,
    and replay the block at `storeBlockHeight` using the WAL.
This happens if the app forgot the last block it committed.
If `appBlockHeight == stateBlockHeight`,
    replay the last block (storeBlockHeight) in full.
This happens if we crashed before the app finished Commit.
If `appBlockHeight == storeBlockHeight`,
    update the state using the saved ABCI responses but don't run the block against the real app.
This happens if we crashed after the app finished Commit but before Tendermint saved the state.
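
The same decision tree restated compactly as a Go sketch (descriptive strings stand in for the actual replay actions):

```go
package replay

// replayDecision maps the three heights to the recovery action described
// above; the sanity checks earlier in the text exclude other combinations.
func replayDecision(storeH, stateH, appH int64) string {
	switch {
	case storeH == stateH && appH < storeH:
		return "replay all blocks from appH to storeH in full"
	case storeH == stateH && appH == storeH:
		return "done: we crashed at an opportune spot"
	case storeH == stateH+1 && appH < stateH:
		return "replay appH..storeH-1 in full, then storeH from the WAL"
	case storeH == stateH+1 && appH == stateH:
		return "replay the last block (storeH) in full"
	case storeH == stateH+1 && appH == storeH:
		return "update state from the saved ABCI responses; don't re-run the block"
	default:
		return "invalid combination: excluded by the sanity checks"
	}
}
```
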