Mirror of https://github.com/fluencelabs/tendermint (synced 2025-07-22 07:41:57 +00:00)

Compare commits: v0.27.0-de...v0.27.3 (42 commits)
Commits in this range (SHA1):

0138530df2, 0533c73a50, 1beb45511c, 4a568fcedb, b3141d7d02, 9a6dd96cba,
9fa959619a, 1f09818770, e4806f980b, b53a2712df, a75dab492c, 7c9e767e1f,
f82a8ff73a, ae275d791e, f5cca9f121, 3fbe9f235a, f7e463f6d3, bc2a9b20c0,
9e075d8dd5, 8003786c9a, 2594cec116, df32ea4be5, f69e2c6d6c, d5d0d2bd77,
41eaf0e31d, 68b467886a, 2f64717bb5, c4a1cfc5c2, 0f96bea41d, 9c236ffd6c,
9f8761d105, 5413c11150, a14fd8eba0, 1bb7e31d63, 222b8978c8, d9a1aad5c5,
8ef0c2681d, c4d93fd27b, dc2a338d96, 725ed7969a, 44b769b1ac, 380afaa678
@@ -7,6 +7,13 @@ defaults: &defaults
   environment:
     GOBIN: /tmp/workspace/bin

+docs_update_config: &docs_update_config
+  working_directory: ~/repo
+  docker:
+    - image: tendermint/docs_deployment
+  environment:
+    AWS_REGION: us-east-1
+
 jobs:
   setup_dependencies:
     <<: *defaults
@@ -339,10 +346,25 @@ jobs:
           name: upload
           command: bash .circleci/codecov.sh -f coverage.txt

+  deploy_docs:
+    <<: *docs_update_config
+    steps:
+      - checkout
+      - run:
+          name: Trigger website build
+          command: |
+            chamber exec tendermint -- start_website_build
+
 workflows:
   version: 2
   test-suite:
     jobs:
+      - deploy_docs:
+          filters:
+            branches:
+              only:
+                - master
+                - develop
       - setup_dependencies
       - lint:
          requires:
CHANGELOG.md (121 lines changed)

@@ -1,13 +1,128 @@
 # Changelog

+## v0.27.3
+
+*December 16th, 2018*
+
+### BREAKING CHANGES:
+
+* Go API
+
+- [dep] [\#3027](https://github.com/tendermint/tendermint/issues/3027) Revert to mainline Go crypto library, eliminating the modified
+  `bcrypt.GenerateFromPassword`
+
+## v0.27.2
+
+*December 16th, 2018*
+
+### IMPROVEMENTS:
+
+- [node] [\#3025](https://github.com/tendermint/tendermint/issues/3025) Validate NodeInfo addresses on startup.
+
+### BUG FIXES:
+
+- [p2p] [\#3025](https://github.com/tendermint/tendermint/pull/3025) Revert to using defers in addrbook. Fixes deadlocks in pex and consensus upon invalid ExternalAddr/ListenAddr configuration.
+
+## v0.27.1
+
+*December 15th, 2018*
+
+Special thanks to external contributors on this release:
+@danil-lashin, @hleb-albau, @james-ray, @leo-xinwang
+
+### FEATURES:
+- [rpc] [\#2964](https://github.com/tendermint/tendermint/issues/2964) Add `UnconfirmedTxs(limit)` and `NumUnconfirmedTxs()` methods to HTTP/Local clients (@danil-lashin)
+- [docs] [\#3004](https://github.com/tendermint/tendermint/issues/3004) Enable full-text search on docs pages
+
+### IMPROVEMENTS:
+- [consensus] [\#2971](https://github.com/tendermint/tendermint/issues/2971) Return error if ValidatorSet is empty after InitChain
+  (@leo-xinwang)
+- [ci/cd] [\#3005](https://github.com/tendermint/tendermint/issues/3005) Updated CircleCI job to trigger website build when docs are updated
+- [docs] Various updates
+
+### BUG FIXES:
+- [cmd] [\#2983](https://github.com/tendermint/tendermint/issues/2983) `testnet` command always sets `addr_book_strict = false`
+- [config] [\#2980](https://github.com/tendermint/tendermint/issues/2980) Fix CORS options formatting
+- [kv indexer] [\#2912](https://github.com/tendermint/tendermint/issues/2912) Don't ignore key when executing CONTAINS
+- [mempool] [\#2961](https://github.com/tendermint/tendermint/issues/2961) Call `notifyTxsAvailable` if there're txs left after committing a block, but recheck=false
+- [mempool] [\#2994](https://github.com/tendermint/tendermint/issues/2994) Reject txs with negative GasWanted
+- [p2p] [\#2990](https://github.com/tendermint/tendermint/issues/2990) Fix a bug where seeds don't disconnect from a peer after 3h
+- [consensus] [\#3006](https://github.com/tendermint/tendermint/issues/3006) Save state after InitChain only when stateHeight is also 0 (@james-ray)
+
+## v0.27.0
+
+*December 5th, 2018*
+
+Special thanks to external contributors on this release:
+@danil-lashin, @srmo
+
+Special thanks to @dlguddus for discovering a [major
+issue](https://github.com/tendermint/tendermint/issues/2718#issuecomment-440888677)
+in the proposer selection algorithm.
+
+Friendly reminder, we have a [bug bounty
+program](https://hackerone.com/tendermint).
+
+This release is primarily about fixes to the proposer selection algorithm
+in preparation for the [Cosmos Game of
+Stakes](https://blog.cosmos.network/the-game-of-stakes-is-open-for-registration-83a404746ee6).
+It also makes use of the `ConsensusParams.Validator.PubKeyTypes` to restrict the
+key types that can be used by validators, and removes the `Heartbeat` consensus
+message.
+
+### BREAKING CHANGES:
+
+* CLI/RPC/Config
+- [rpc] [\#2932](https://github.com/tendermint/tendermint/issues/2932) Rename `accum` to `proposer_priority`
+
+* Go API
+- [db] [\#2913](https://github.com/tendermint/tendermint/pull/2913)
+  ReverseIterator API change: start < end, and end is exclusive.
+- [types] [\#2932](https://github.com/tendermint/tendermint/issues/2932) Rename `Validator.Accum` to `Validator.ProposerPriority`
+
+* Blockchain Protocol
+- [state] [\#2714](https://github.com/tendermint/tendermint/issues/2714) Validators can now only use pubkeys allowed within
+  ConsensusParams.Validator.PubKeyTypes
+
+* P2P Protocol
+- [consensus] [\#2871](https://github.com/tendermint/tendermint/issues/2871)
+  Remove *ProposalHeartbeat* message as it serves no real purpose (@srmo)
+- [state] Fixes for proposer selection:
+  - [\#2785](https://github.com/tendermint/tendermint/issues/2785) Accum for new validators is `-1.125*totalVotingPower` instead of 0
+  - [\#2941](https://github.com/tendermint/tendermint/issues/2941) val.Accum is preserved during ValidatorSet.Update to avoid being
+    reset to 0
+
+### IMPROVEMENTS:
+
+- [state] [\#2929](https://github.com/tendermint/tendermint/issues/2929) Minor refactor of updateState logic (@danil-lashin)
+- [node] [\#2959](https://github.com/tendermint/tendermint/issues/2959) Allow node to start even if software's BlockProtocol is
+  different from state's BlockProtocol
+- [pex] [\#2959](https://github.com/tendermint/tendermint/issues/2959) Pex reactor logger uses `module=pex`
+
+### BUG FIXES:
+
+- [p2p] [\#2968](https://github.com/tendermint/tendermint/issues/2968) Panic on transport error rather than continuing to run but not
+  accept new connections
+- [p2p] [\#2969](https://github.com/tendermint/tendermint/issues/2969) Fix mismatch in peer count between `/net_info` and the prometheus
+  metrics
+- [rpc] [\#2408](https://github.com/tendermint/tendermint/issues/2408) `/broadcast_tx_commit`: Fix "interface conversion: interface {} in nil, not EventDataTx" panic (could happen if somebody sent a tx using `/broadcast_tx_commit` while Tendermint was being stopped)
+- [state] [\#2785](https://github.com/tendermint/tendermint/issues/2785) Fix accum for new validators to be `-1.125*totalVotingPower`
+  instead of 0, forcing them to wait before becoming the proposer. Also:
+  - do not batch clip
+  - keep accums averaged near 0
+- [txindex/kv] [\#2925](https://github.com/tendermint/tendermint/issues/2925) Don't return false positives when range searching for a prefix of a tag value
+- [types] [\#2938](https://github.com/tendermint/tendermint/issues/2938) Fix regression in v0.26.4 where we panic on empty
+  genDoc.Validators
+- [types] [\#2941](https://github.com/tendermint/tendermint/issues/2941) Preserve val.Accum during ValidatorSet.Update to avoid it being
+  reset to 0 every time a validator is updated
+
 ## v0.26.4

 *November 27th, 2018*

 Special thanks to external contributors on this release:
-ackratos, goolAdapter, james-ray, joe-bowman, kostko,
-nagarajmanjunath, tomtau
+@ackratos, @goolAdapter, @james-ray, @joe-bowman, @kostko,
+@nagarajmanjunath, @tomtau


 Friendly reminder, we have a [bug bounty
 program](https://hackerone.com/tendermint).
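As a usage illustration for the v0.27.1 entry above that adds `UnconfirmedTxs(limit)` and `NumUnconfirmedTxs()` to the HTTP/Local clients ([\#2964]), here is a minimal sketch against the rpc/client HTTP client of this release line. The import path, constructor, and result field names are assumptions for illustration, not a verified API listing.

```go
package main

import (
	"fmt"

	client "github.com/tendermint/tendermint/rpc/client" // assumed import path
)

func main() {
	// Connect to a local node's RPC endpoint (assumed constructor signature).
	c := client.NewHTTP("tcp://localhost:26657", "/websocket")

	// Fetch up to 10 transactions currently sitting in the mempool.
	res, err := c.UnconfirmedTxs(10)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d unconfirmed txs returned\n", len(res.Txs))

	// Or just ask for the count.
	n, err := c.NumUnconfirmedTxs()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d txs in the mempool\n", n.N)
}
```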
@@ -1,14 +1,9 @@
-# Pending
+## v0.27.4

-## v0.27.0
-
 *TBD*

 Special thanks to external contributors on this release:

-Friendly reminder, we have a [bug bounty
-program](https://hackerone.com/tendermint).
-
 ### BREAKING CHANGES:

 * CLI/RPC/Config
@@ -17,18 +12,13 @@ program](https://hackerone.com/tendermint).

 * Go API

-- [db] [\#2913](https://github.com/tendermint/tendermint/pull/2913) ReverseIterator API change -- start < end, and end is exclusive.

 * Blockchain Protocol
-* [state] \#2714 Validators can now only use pubkeys allowed within ConsensusParams.ValidatorParams

 * P2P Protocol

 ### FEATURES:

 ### IMPROVEMENTS:
-- [consensus] [\#2871](https://github.com/tendermint/tendermint/issues/2871) Remove *ProposalHeartbeat* infrastructure as it serves no real purpose

 ### BUG FIXES:
-- [types] \#2938 Fix regression in v0.26.4 where we panic on empty
-genDoc.Validators
Gopkg.lock (generated; 5 lines changed)

@@ -376,7 +376,7 @@
   version = "v0.14.1"

 [[projects]]
-  digest = "1:72b71e3a29775e5752ed7a8012052a3dee165e27ec18cedddae5288058f09acf"
+  digest = "1:00d2b3e64cdc3fa69aa250dfbe4cc38c4837d4f37e62279be2ae52107ffbbb44"
   name = "golang.org/x/crypto"
   packages = [
     "bcrypt",
@@ -397,8 +397,7 @@
     "salsa20/salsa",
   ]
   pruneopts = "UT"
-  revision = "3764759f34a542a3aef74d6b02e35be7ab893bba"
-  source = "github.com/tendermint/crypto"
+  revision = "505ab145d0a99da450461ae2c1a9f6cd10d1f447"

 [[projects]]
   digest = "1:d36f55a999540d29b6ea3c2ea29d71c76b1d9853fdcd3e5c5cb4836f2ba118f1"

@@ -81,8 +81,7 @@

 [[constraint]]
   name = "golang.org/x/crypto"
-  source = "github.com/tendermint/crypto"
-  revision = "3764759f34a542a3aef74d6b02e35be7ab893bba"
+  revision = "505ab145d0a99da450461ae2c1a9f6cd10d1f447"

 [[override]]
   name = "github.com/jmhodges/levigo"
Makefile (1 line changed)

@@ -294,6 +294,7 @@ build-linux:
 build-docker-localnode:
 	cd networks/local
 	make
+	cd -

 # Run a 4-node testnet locally
 localnet-start: localnet-stop
UPGRADING.md (45 lines changed)

@@ -3,9 +3,50 @@
 This guide provides steps to be followed when you upgrade your applications to
 a newer version of Tendermint Core.

+## v0.27.0
+
+This release contains some breaking changes to the block and p2p protocols,
+but does not change any core data structures, so it should be compatible with
+existing blockchains from the v0.26 series that only used Ed25519 validator keys.
+Blockchains using Secp256k1 for validators will not be compatible. This is due
+to the fact that we now enforce which key types validators can use as a
+consensus param. The default is Ed25519, and Secp256k1 must be activated
+explicitly.
+
+It is recommended to upgrade all nodes at once to avoid incompatibilities at the
+peer layer - namely, the heartbeat consensus message has been removed (only
+relevant if `create_empty_blocks=false` or `create_empty_blocks_interval > 0`),
+and the proposer selection algorithm has changed. Since proposer information is
+never included in the blockchain, this change only affects the peer layer.
+
+### Go API Changes
+
+#### libs/db
+
+The ReverseIterator API has changed the meaning of `start` and `end`.
+Before, iteration was from `start` to `end`, where
+`start > end`. Now, iteration is from `end` to `start`, where `start < end`.
+The iterator also excludes `end`. This change allows a simplified and more
+intuitive logic, aligning the semantic meaning of `start` and `end` in the
+`Iterator` and `ReverseIterator`.
+
+### Applications
+
+This release enforces a new consensus parameter, the
+ValidatorParams.PubKeyTypes. Applications must ensure that they only return
+validator updates with the allowed PubKeyTypes. If a validator update includes a
+pubkey type that is not included in the ConsensusParams.Validator.PubKeyTypes,
+block execution will fail and the consensus will halt.
+
+By default, only Ed25519 pubkeys may be used for validators. Enabling
+Secp256k1 requires explicit modification of the ConsensusParams.
+Please update your application accordingly (ie. restrict validators to only be
+able to use Ed25519 keys, or explicitly add additional key types to the genesis
+file).
+
 ## v0.26.0

-New 0.26.0 release contains a lot of changes to core data types and protocols. It is not
+This release contains a lot of changes to core data types and protocols. It is not
 compatible to the old versions and there is no straight forward way to update
 old data to be compatible with the new version.

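To make the new ReverseIterator semantics described above concrete, here is a minimal sketch, assuming the libs/db in-memory backend and its `Iterator`/`ReverseIterator` methods (`Valid`, `Next`, `Key`, `Close`) as they exist in this release line; treat the exact package path and method set as assumptions rather than a definitive reference.

```go
package main

import (
	"fmt"

	dbm "github.com/tendermint/tendermint/libs/db" // assumed import path for libs/db
)

func main() {
	db := dbm.NewMemDB()
	for _, k := range []string{"a", "b", "c", "d"} {
		db.Set([]byte(k), []byte("v"))
	}

	// New semantics (v0.27.0): start < end, and end is exclusive.
	// This walks the keys in the range [b, d) backwards: "c", then "b".
	it := db.ReverseIterator([]byte("b"), []byte("d"))
	defer it.Close()
	for ; it.Valid(); it.Next() {
		fmt.Printf("%s\n", it.Key())
	}

	// Before v0.27.0 the same range would have been requested the other way
	// around, i.e. ReverseIterator(end, start) with start > end.
}
```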
@@ -67,7 +108,7 @@ For more information, see:

 ### Go API Changes

-#### crypto.merkle
+#### crypto/merkle

 The `merkle.Hasher` interface was removed. Functions which used to take `Hasher`
 now simply take `[]byte`. This means that any objects being Merklized should be
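As a rough illustration of the ValidatorParams.PubKeyTypes requirement described in the UPGRADING notes above, the sketch below builds an Ed25519 validator update and rejects updates whose key type is not in the allowed list. The abci struct fields and the "ed25519" type string are assumptions based on the ABCI types of this release line, not an authoritative API reference.

```go
package main

import (
	"fmt"

	abci "github.com/tendermint/tendermint/abci/types"
)

// allowedPubKeyTypes mirrors what ConsensusParams.Validator.PubKeyTypes would
// contain with the default settings (Ed25519 only).
var allowedPubKeyTypes = []string{"ed25519"}

// checkUpdate returns an error when a validator update uses a disallowed key type;
// with such an update, block execution would fail and consensus would halt.
func checkUpdate(v abci.ValidatorUpdate) error {
	for _, t := range allowedPubKeyTypes {
		if v.PubKey.Type == t {
			return nil
		}
	}
	return fmt.Errorf("validator update uses pubkey type %q, not in ConsensusParams", v.PubKey.Type)
}

func main() {
	// A validator update an application might return from EndBlock.
	update := abci.ValidatorUpdate{
		PubKey: abci.PubKey{Type: "ed25519", Data: make([]byte, 32)}, // 32-byte ed25519 pubkey placeholder
		Power:  10,
	}
	if err := checkUpdate(update); err != nil {
		panic(err)
	}
	fmt.Println("validator update ok")
}
```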
@@ -127,14 +127,31 @@ func testnetFiles(cmd *cobra.Command, args []string) error {
 		}
 	}

+	// Gather persistent peer addresses.
+	var (
+		persistentPeers string
+		err             error
+	)
 	if populatePersistentPeers {
-		err := populatePersistentPeersInConfigAndWriteIt(config)
+		persistentPeers, err = persistentPeersString(config)
 		if err != nil {
 			_ = os.RemoveAll(outputDir)
 			return err
 		}
 	}

+	// Overwrite default config.
+	for i := 0; i < nValidators+nNonValidators; i++ {
+		nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
+		config.SetRoot(nodeDir)
+		config.P2P.AddrBookStrict = false
+		if populatePersistentPeers {
+			config.P2P.PersistentPeers = persistentPeers
+		}
+
+		cfg.WriteConfigFile(filepath.Join(nodeDir, "config", "config.toml"), config)
+	}
+
 	fmt.Printf("Successfully initialized %v node directories\n", nValidators+nNonValidators)
 	return nil
 }
@@ -157,28 +174,16 @@ func hostnameOrIP(i int) string {
 	return fmt.Sprintf("%s%d", hostnamePrefix, i)
 }

-func populatePersistentPeersInConfigAndWriteIt(config *cfg.Config) error {
+func persistentPeersString(config *cfg.Config) (string, error) {
 	persistentPeers := make([]string, nValidators+nNonValidators)
 	for i := 0; i < nValidators+nNonValidators; i++ {
 		nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
 		config.SetRoot(nodeDir)
 		nodeKey, err := p2p.LoadNodeKey(config.NodeKeyFile())
 		if err != nil {
-			return err
+			return "", err
 		}
 		persistentPeers[i] = p2p.IDAddressString(nodeKey.ID(), fmt.Sprintf("%s:%d", hostnameOrIP(i), p2pPort))
 	}
-	persistentPeersList := strings.Join(persistentPeers, ",")
-
-	for i := 0; i < nValidators+nNonValidators; i++ {
-		nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
-		config.SetRoot(nodeDir)
-		config.P2P.PersistentPeers = persistentPeersList
-		config.P2P.AddrBookStrict = false
-
-		// overwrite default config
-		cfg.WriteConfigFile(filepath.Join(nodeDir, "config", "config.toml"), config)
-	}
-
-	return nil
+	return strings.Join(persistentPeers, ","), nil
 }
@@ -283,7 +283,7 @@ type RPCConfig struct {

 	// Maximum number of simultaneous connections.
 	// Does not include RPC (HTTP&WebSocket) connections. See max_open_connections
-	// If you want to accept more significant number than the default, make sure
+	// If you want to accept a larger number than the default, make sure
 	// you increase your OS limits.
 	// 0 - unlimited.
 	GRPCMaxOpenConnections int `mapstructure:"grpc_max_open_connections"`
@@ -293,7 +293,7 @@ type RPCConfig struct {

 	// Maximum number of simultaneous connections (including WebSocket).
 	// Does not include gRPC connections. See grpc_max_open_connections
-	// If you want to accept more significant number than the default, make sure
+	// If you want to accept a larger number than the default, make sure
 	// you increase your OS limits.
 	// 0 - unlimited.
 	// Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
@@ -774,12 +774,12 @@ type InstrumentationConfig struct {
 	PrometheusListenAddr string `mapstructure:"prometheus_listen_addr"`

 	// Maximum number of simultaneous connections.
-	// If you want to accept more significant number than the default, make sure
+	// If you want to accept a larger number than the default, make sure
 	// you increase your OS limits.
 	// 0 - unlimited.
 	MaxOpenConnections int `mapstructure:"max_open_connections"`

-	// Tendermint instrumentation namespace.
+	// Instrumentation namespace.
 	Namespace string `mapstructure:"namespace"`
 }

@@ -125,13 +125,13 @@ laddr = "{{ .RPC.ListenAddress }}"
 # A list of origins a cross-domain request can be executed from
 # Default value '[]' disables cors support
 # Use '["*"]' to allow any origin
-cors_allowed_origins = "{{ .RPC.CORSAllowedOrigins }}"
+cors_allowed_origins = [{{ range .RPC.CORSAllowedOrigins }}{{ printf "%q, " . }}{{end}}]

 # A list of methods the client is allowed to use with cross-domain requests
-cors_allowed_methods = "{{ .RPC.CORSAllowedMethods }}"
+cors_allowed_methods = [{{ range .RPC.CORSAllowedMethods }}{{ printf "%q, " . }}{{end}}]

 # A list of non simple headers the client is allowed to use with cross-domain requests
-cors_allowed_headers = "{{ .RPC.CORSAllowedHeaders }}"
+cors_allowed_headers = [{{ range .RPC.CORSAllowedHeaders }}{{ printf "%q, " . }}{{end}}]

 # TCP or UNIX socket address for the gRPC server to listen on
 # NOTE: This server only supports /broadcast_tx_commit
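The CORS fix above (#2980) renders each []string option as a TOML array instead of a quoted string. A small, self-contained sketch of what that template fragment produces, using only the standard text/template package; the CORSAllowedMethods field name here is just illustrative.

```go
package main

import (
	"os"
	"text/template"
)

func main() {
	// Same pattern as the config template above: quote each element and
	// emit a TOML array (TOML tolerates the trailing comma).
	const frag = `cors_allowed_methods = [{{ range .CORSAllowedMethods }}{{ printf "%q, " . }}{{end}}]` + "\n"

	t := template.Must(template.New("cors").Parse(frag))
	data := struct{ CORSAllowedMethods []string }{[]string{"HEAD", "GET", "POST"}}

	// Prints: cors_allowed_methods = ["HEAD", "GET", "POST", ]
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
```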
@@ -139,7 +139,7 @@ grpc_laddr = "{{ .RPC.GRPCListenAddress }}"

 # Maximum number of simultaneous connections.
 # Does not include RPC (HTTP&WebSocket) connections. See max_open_connections
-# If you want to accept more significant number than the default, make sure
+# If you want to accept a larger number than the default, make sure
 # you increase your OS limits.
 # 0 - unlimited.
 # Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
@@ -151,7 +151,7 @@ unsafe = {{ .RPC.Unsafe }}

 # Maximum number of simultaneous connections (including WebSocket).
 # Does not include gRPC connections. See grpc_max_open_connections
-# If you want to accept more significant number than the default, make sure
+# If you want to accept a larger number than the default, make sure
 # you increase your OS limits.
 # 0 - unlimited.
 # Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
@@ -269,8 +269,8 @@ blocktime_iota = "{{ .Consensus.BlockTimeIota }}"
 # What indexer to use for transactions
 #
 # Options:
-#   1) "null" (default)
-#   2) "kv" - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
+#   1) "null"
+#   2) "kv" (default) - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
 indexer = "{{ .TxIndex.Indexer }}"

 # Comma-separated list of tags to index (by default the only tag is "tx.hash")
@@ -302,7 +302,7 @@ prometheus = {{ .Instrumentation.Prometheus }}
 prometheus_listen_addr = "{{ .Instrumentation.PrometheusListenAddr }}"

 # Maximum number of simultaneous connections.
-# If you want to accept more significant number than the default, make sure
+# If you want to accept a larger number than the default, make sure
 # you increase your OS limits.
 # 0 - unlimited.
 max_open_connections = {{ .Instrumentation.MaxOpenConnections }}
@@ -11,7 +11,6 @@ import (
 	"time"

 	abci "github.com/tendermint/tendermint/abci/types"
-	"github.com/tendermint/tendermint/version"
 	//auto "github.com/tendermint/tendermint/libs/autofile"
 	cmn "github.com/tendermint/tendermint/libs/common"
 	dbm "github.com/tendermint/tendermint/libs/db"
@@ -20,6 +19,7 @@ import (
 	"github.com/tendermint/tendermint/proxy"
 	sm "github.com/tendermint/tendermint/state"
 	"github.com/tendermint/tendermint/types"
+	"github.com/tendermint/tendermint/version"
 )

 var crc32c = crc32.MakeTable(crc32.Castagnoli)
@@ -247,6 +247,7 @@ func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {

 	// Set AppVersion on the state.
 	h.initialState.Version.Consensus.App = version.Protocol(res.AppVersion)
+	sm.SaveState(h.stateDB, h.initialState)

 	// Replay blocks up to the latest in the blockstore.
 	_, err = h.ReplayBlocks(h.initialState, appHash, blockHeight, proxyApp)
@@ -295,6 +296,7 @@ func (h *Handshaker) ReplayBlocks(
 		return nil, err
 	}

+	if stateBlockHeight == 0 { //we only update state when we are in initial state
 	// If the app returned validators or consensus params, update the state.
 	if len(res.Validators) > 0 {
 		vals, err := types.PB2TM.ValidatorUpdates(res.Validators)
@@ -303,12 +305,19 @@ func (h *Handshaker) ReplayBlocks(
 		}
 		state.Validators = types.NewValidatorSet(vals)
 		state.NextValidators = types.NewValidatorSet(vals)
+	} else {
+		// If validator set is not set in genesis and still empty after InitChain, exit.
+		if len(h.genDoc.Validators) == 0 {
+			return nil, fmt.Errorf("Validator set is nil in genesis and still empty after InitChain")
 		}
+	}

 	if res.ConsensusParams != nil {
 		state.ConsensusParams = types.PB2TM.ConsensusParams(res.ConsensusParams)
 	}
 	sm.SaveState(h.stateDB, state)
 	}
+	}

 	// First handle edge cases and constraints on the storeBlockHeight.
 	if storeBlockHeight == 0 {
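The ReplayBlocks change above now errors out when the genesis validator set is empty and InitChain also returns no validators. Here is a minimal sketch of the application side, assuming the abci/types package of this release line (BaseApplication, RequestInitChain, ResponseInitChain); the field and method names are assumptions for illustration rather than a definitive implementation.

```go
package app

import (
	abci "github.com/tendermint/tendermint/abci/types"
)

// App embeds BaseApplication so the unimplemented ABCI methods get no-op defaults.
type App struct {
	abci.BaseApplication
}

// InitChain may return a validator set. Returning an empty slice means
// "keep the validators from the genesis file"; if genesis also has none,
// the handshaker above now fails instead of starting with no validators.
func (app *App) InitChain(req abci.RequestInitChain) abci.ResponseInitChain {
	return abci.ResponseInitChain{
		// Echo back the genesis validators unchanged (or substitute your own,
		// restricted to the allowed pubkey types).
		Validators: req.Validators,
	}
}
```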
@@ -5,7 +5,7 @@ import (
 	"fmt"
 	"io/ioutil"

-	"golang.org/x/crypto/openpgp/armor" // forked to github.com/tendermint/crypto
+	"golang.org/x/crypto/openpgp/armor"
 )

 func EncodeArmor(blockType string, headers map[string]string, data []byte) string {
@@ -7,7 +7,7 @@ import (
 	"io"

 	amino "github.com/tendermint/go-amino"
-	"golang.org/x/crypto/ed25519" // forked to github.com/tendermint/crypto
+	"golang.org/x/crypto/ed25519"

 	"github.com/tendermint/tendermint/crypto"
 	"github.com/tendermint/tendermint/crypto/tmhash"
@@ -3,7 +3,7 @@ package crypto
 import (
 	"crypto/sha256"

-	"golang.org/x/crypto/ripemd160" // forked to github.com/tendermint/crypto
+	"golang.org/x/crypto/ripemd160"
 )

 func Sha256(bytes []byte) []byte {
@@ -9,7 +9,7 @@ import (

 	secp256k1 "github.com/tendermint/btcd/btcec"
 	amino "github.com/tendermint/go-amino"
-	"golang.org/x/crypto/ripemd160" // forked to github.com/tendermint/crypto
+	"golang.org/x/crypto/ripemd160"

 	"github.com/tendermint/tendermint/crypto"
 )
@@ -8,7 +8,7 @@ import (
 	"errors"
 	"fmt"

-	"golang.org/x/crypto/chacha20poly1305" // forked to github.com/tendermint/crypto
+	"golang.org/x/crypto/chacha20poly1305"
 )

 // Implements crypto.AEAD
@@ -4,7 +4,7 @@ import (
 	"errors"
 	"fmt"

-	"golang.org/x/crypto/nacl/secretbox" // forked to github.com/tendermint/crypto
+	"golang.org/x/crypto/nacl/secretbox"

 	"github.com/tendermint/tendermint/crypto"
 	cmn "github.com/tendermint/tendermint/libs/common"
@@ -6,7 +6,7 @@ import (
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"

-	"golang.org/x/crypto/bcrypt" // forked to github.com/tendermint/crypto
+	"golang.org/x/crypto/bcrypt"

 	"github.com/tendermint/tendermint/crypto"
 )
@@ -30,9 +30,7 @@ func TestSimpleWithKDF(t *testing.T) {

 	plaintext := []byte("sometext")
 	secretPass := []byte("somesecret")
-	salt := []byte("somesaltsomesalt") // len 16
-	// NOTE: we use a fork of x/crypto so we can inject our own randomness for salt
-	secret, err := bcrypt.GenerateFromPassword(salt, secretPass, 12)
+	secret, err := bcrypt.GenerateFromPassword(secretPass, 12)
 	if err != nil {
 		t.Error(err)
 	}
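For reference, the mainline golang.org/x/crypto/bcrypt API that the test above now uses derives the salt internally, so callers pass only the password and a cost. A small usage sketch:

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/bcrypt"
)

func main() {
	password := []byte("somesecret")

	// Mainline bcrypt generates its own random salt; only password and cost are supplied.
	hash, err := bcrypt.GenerateFromPassword(password, 12)
	if err != nil {
		panic(err)
	}

	// Verification uses the salt embedded in the stored hash.
	if err := bcrypt.CompareHashAndPassword(hash, password); err != nil {
		panic(err)
	}
	fmt.Println("password ok")
}
```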
@@ -8,7 +8,17 @@ module.exports = {
     lineNumbers: true
   },
   themeConfig: {
-    lastUpdated: "Last Updated",
+    repo: "tendermint/tendermint",
+    editLinks: true,
+    docsDir: "docs",
+    docsBranch: "develop",
+    editLinkText: 'Edit this page on Github',
+    lastUpdated: true,
+    algolia: {
+      apiKey: '59f0e2deb984aa9cdf2b3a5fd24ac501',
+      indexName: 'tendermint',
+      debug: false
+    },
     nav: [{ text: "Back to Tendermint", link: "https://tendermint.com" }],
     sidebar: [
       {
@@ -12,10 +12,10 @@ respectively.

 ## How It Works

-There is a Jenkins job listening for changes in the `/docs` directory, on both
+There is a CircleCI job listening for changes in the `/docs` directory, on both
 the `master` and `develop` branches. Any updates to files in this directory
 on those branches will automatically trigger a website deployment. Under the hood,
-a private website repository has make targets consumed by a standard Jenkins task.
+the private website repository has a `make build-docs` target consumed by a CircleCI job in that repo.

 ## README

@@ -93,6 +93,10 @@ python -m SimpleHTTPServer 8080

 then navigate to localhost:8080 in your browser.

+## Search
+
+We are using [Algolia](https://www.algolia.com) to power full-text search. This uses a public API search-only key in the `config.js` as well as a [tendermint.json](https://github.com/algolia/docsearch-configs/blob/master/configs/tendermint.json) configuration file that we can update with PRs.
+
 ## Consistency

 Because the build processes are identical (as is the information contained herein), this file should be kept in sync as
@@ -5,7 +5,7 @@ Tendermint blockchain application.

 The following diagram provides a superb example:

-<https://drive.google.com/open?id=1yR2XpRi9YCY9H9uMfcw8-RMJpvDyvjz9>
+

 The end-user application here is the Cosmos Voyager, at the bottom left.
 Voyager communicates with a REST API exposed by a local Light-Client
@@ -122,7 +122,7 @@
   ],
   "abciServers": [
     {
-      "name": "abci",
+      "name": "go-abci",
       "url": "https://github.com/tendermint/tendermint/tree/master/abci",
       "language": "Go",
       "author": "Tendermint"
@@ -133,6 +133,12 @@
       "language": "Javascript",
       "author": "Tendermint"
     },
+    {
+      "name": "rust-tsp",
+      "url": "https://github.com/tendermint/rust-tsp",
+      "language": "Rust",
+      "author": "Tendermint"
+    },
     {
       "name": "cpp-tmsp",
       "url": "https://github.com/mdyring/cpp-tmsp",
@@ -164,7 +170,7 @@
       "author": "Dave Bryson"
     },
     {
-      "name": "tm-abci",
+      "name": "tm-abci (fork of py-abci with async IO)",
       "url": "https://github.com/SoftblocksCo/tm-abci",
       "language": "Python",
       "author": "Softblocks"
@@ -175,5 +181,13 @@
       "language": "Javascript",
       "author": "Dennis McKinnon"
     }
+  ],
+  "aminoLibraries": [
+    {
+      "name": "JS-Amino",
+      "url": "https://github.com/TanNgocDo/Js-Amino",
+      "language": "Javascript",
+      "author": "TanNgocDo"
+    }
   ]
 }
@@ -1,11 +1,9 @@
 # Ecosystem

 The growing list of applications built using various pieces of the
-Tendermint stack can be found at:
+Tendermint stack can be found at the [ecosystem page](https://tendermint.com/ecosystem).

-- https://tendermint.com/ecosystem
-
-We thank the community for their contributions thus far and welcome the
+We thank the community for their contributions and welcome the
 addition of new projects. A pull request can be submitted to [this
 file](https://github.com/tendermint/tendermint/blob/master/docs/app-dev/ecosystem.json)
 to include your project.
docs/imgs/cosmos-tendermint-stack-4k.jpg (new binary file, 625 KiB; not shown)
@@ -70,10 +70,6 @@ Tendermint is in essence similar software, but with two key differences:
 the application logic that's right for them, from key-value store to
 cryptocurrency to e-voting platform and beyond.

-The layout of this Tendermint website content is also ripped directly
-and without shame from [consul.io](https://www.consul.io/) and the other
-[Hashicorp sites](https://www.hashicorp.com/#tools).
-
 ### Bitcoin, Ethereum, etc.

 Tendermint emerged in the tradition of cryptocurrencies like Bitcoin,
@@ -1,20 +1,17 @@
 # Docker Compose

-With Docker Compose, we can spin up local testnets in a single command:
+With Docker Compose, you can spin up local testnets with a single command.

-```
-make localnet-start
-```
-
 ## Requirements

-- [Install tendermint](/docs/install.md)
-- [Install docker](https://docs.docker.com/engine/installation/)
-- [Install docker-compose](https://docs.docker.com/compose/install/)
+1. [Install tendermint](/docs/install.md)
+2. [Install docker](https://docs.docker.com/engine/installation/)
+3. [Install docker-compose](https://docs.docker.com/compose/install/)

 ## Build

-Build the `tendermint` binary and the `tendermint/localnode` docker image.
+Build the `tendermint` binary and, optionally, the `tendermint/localnode`
+docker image.

 Note the binary will be mounted into the container so it can be updated without
 rebuilding the image.
@@ -25,11 +22,10 @@ cd $GOPATH/src/github.com/tendermint/tendermint
 # Build the linux binary in ./build
 make build-linux

-# Build tendermint/localnode image
+# (optionally) Build tendermint/localnode image
 make build-docker-localnode
 ```

-
 ## Run a testnet

 To start a 4 node testnet run:
@@ -38,9 +34,13 @@ To start a 4 node testnet run:
 make localnet-start
 ```

-The nodes bind their RPC servers to ports 26657, 26660, 26662, and 26664 on the host.
+The nodes bind their RPC servers to ports 26657, 26660, 26662, and 26664 on the
+host.

 This file creates a 4-node network using the localnode image.
-The nodes of the network expose their P2P and RPC endpoints to the host machine on ports 26656-26657, 26659-26660, 26661-26662, and 26663-26664 respectively.
+The nodes of the network expose their P2P and RPC endpoints to the host machine
+on ports 26656-26657, 26659-26660, 26661-26662, and 26663-26664 respectively.

 To update the binary, just rebuild it and restart the nodes:

@@ -52,34 +52,40 @@ make localnet-start

 ## Configuration

-The `make localnet-start` creates files for a 4-node testnet in `./build` by calling the `tendermint testnet` command.
+The `make localnet-start` creates files for a 4-node testnet in `./build` by
+calling the `tendermint testnet` command.

-The `./build` directory is mounted to the `/tendermint` mount point to attach the binary and config files to the container.
+The `./build` directory is mounted to the `/tendermint` mount point to attach
+the binary and config files to the container.

-For instance, to create a single node testnet:
+To change the number of validators / non-validators change the `localnet-start` Makefile target:
+
+```
+localnet-start: localnet-stop
+	@if ! [ -f build/node0/config/genesis.json ]; then docker run --rm -v $(CURDIR)/build:/tendermint:Z tendermint/localnode testnet --v 5 --n 3 --o . --populate-persistent-peers --starting-ip-address 192.167.10.2 ; fi
+	docker-compose up
+```
+
+The command now will generate config files for 5 validators and 3
+non-validators network.
+
+Before running it, don't forget to cleanup the old files:

 ```
 cd $GOPATH/src/github.com/tendermint/tendermint

 # Clear the build folder
-rm -rf ./build
-
-# Build binary
-make build-linux
-
-# Create configuration
-docker run -e LOG="stdout" -v `pwd`/build:/tendermint tendermint/localnode testnet --o . --v 1
-
-#Run the node
-docker run -v `pwd`/build:/tendermint tendermint/localnode
-
+rm -rf ./build/node*
 ```

 ## Logging

-Log is saved under the attached volume, in the `tendermint.log` file. If the `LOG` environment variable is set to `stdout` at start, the log is not saved, but printed on the screen.
+Log is saved under the attached volume, in the `tendermint.log` file. If the
+`LOG` environment variable is set to `stdout` at start, the log is not saved,
+but printed on the screen.

 ## Special binaries

-If you have multiple binaries with different names, you can specify which one to run with the BINARY environment variable. The path of the binary is relative to the attached volume.
+If you have multiple binaries with different names, you can specify which one
+to run with the `BINARY` environment variable. The path of the binary is relative
+to the attached volume.
@@ -14,31 +14,31 @@ please submit them to our [bug bounty](https://tendermint.com/security)!

 ### Data Structures

-- [Encoding and Digests](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/encoding.md)
-- [Blockchain](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/blockchain.md)
-- [State](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/state.md)
+- [Encoding and Digests](./blockchain/encoding.md)
+- [Blockchain](./blockchain/blockchain.md)
+- [State](./blockchain/state.md)

 ### Consensus Protocol

-- [Consensus Algorithm](/docs/spec/consensus/consensus.md)
-- [Creating a proposal](/docs/spec/consensus/creating-proposal.md)
-- [Time](/docs/spec/consensus/bft-time.md)
-- [Light-Client](/docs/spec/consensus/light-client.md)
+- [Consensus Algorithm](./consensus/consensus.md)
+- [Creating a proposal](./consensus/creating-proposal.md)
+- [Time](./consensus/bft-time.md)
+- [Light-Client](./consensus/light-client.md)

 ### P2P and Network Protocols

-- [The Base P2P Layer](https://github.com/tendermint/tendermint/tree/master/docs/spec/p2p): multiplex the protocols ("reactors") on authenticated and encrypted TCP connections
-- [Peer Exchange (PEX)](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/pex): gossip known peer addresses so peers can find each other
-- [Block Sync](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/block_sync): gossip blocks so peers can catch up quickly
-- [Consensus](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/consensus): gossip votes and block parts so new blocks can be committed
-- [Mempool](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/mempool): gossip transactions so they get included in blocks
-- Evidence: Forthcoming, see [this issue](https://github.com/tendermint/tendermint/issues/2329).
+- [The Base P2P Layer](./p2p/): multiplex the protocols ("reactors") on authenticated and encrypted TCP connections
+- [Peer Exchange (PEX)](./reactors/pex/): gossip known peer addresses so peers can find each other
+- [Block Sync](./reactors/block_sync/): gossip blocks so peers can catch up quickly
+- [Consensus](./reactors/consensus/): gossip votes and block parts so new blocks can be committed
+- [Mempool](./reactors/mempool/): gossip transactions so they get included in blocks
+- [Evidence](./reactors/evidence/): sending invalid evidence will stop the peer

 ### Software

-- [ABCI](/docs/spec/software/abci.md): Details about interactions between the
+- [ABCI](./software/abci.md): Details about interactions between the
   application and consensus engine over ABCI
-- [Write-Ahead Log](/docs/spec/software/wal.md): Details about how the consensus
+- [Write-Ahead Log](./software/wal.md): Details about how the consensus
   engine preserves data and recovers from crash failures

 ## Overview
@@ -36,22 +36,26 @@ db_backend = "leveldb"
|
|||||||
# Database directory
|
# Database directory
|
||||||
db_dir = "data"
|
db_dir = "data"
|
||||||
|
|
||||||
# Output level for logging
|
# Output level for logging, including package level options
|
||||||
log_level = "state:info,*:error"
|
log_level = "main:info,state:info,*:error"
|
||||||
|
|
||||||
# Output format: 'plain' (colored text) or 'json'
|
# Output format: 'plain' (colored text) or 'json'
|
||||||
log_format = "plain"
|
log_format = "plain"
|
||||||
|
|
||||||
##### additional base config options #####
|
##### additional base config options #####
|
||||||
|
|
||||||
# The ID of the chain to join (should be signed with every transaction and vote)
|
|
||||||
chain_id = ""
|
|
||||||
|
|
||||||
# Path to the JSON file containing the initial validator set and other meta data
|
# Path to the JSON file containing the initial validator set and other meta data
|
||||||
genesis_file = "genesis.json"
|
genesis_file = "config/genesis.json"
|
||||||
|
|
||||||
# Path to the JSON file containing the private key to use as a validator in the consensus protocol
|
# Path to the JSON file containing the private key to use as a validator in the consensus protocol
|
||||||
priv_validator_file = "priv_validator.json"
|
priv_validator_file = "config/priv_validator.json"
|
||||||
|
|
||||||
|
# TCP or UNIX socket address for Tendermint to listen on for
|
||||||
|
# connections from an external PrivValidator process
|
||||||
|
priv_validator_laddr = ""
|
||||||
|
|
||||||
|
# Path to the JSON file containing the private key to use for node authentication in the p2p protocol
|
||||||
|
node_key_file = "config/node_key.json"
|
||||||
|
|
||||||
# Mechanism to connect to the ABCI application: socket | grpc
|
# Mechanism to connect to the ABCI application: socket | grpc
|
||||||
abci = "socket"
|
abci = "socket"
|
||||||
@@ -74,13 +78,13 @@ laddr = "tcp://0.0.0.0:26657"
|
|||||||
# A list of origins a cross-domain request can be executed from
|
# A list of origins a cross-domain request can be executed from
|
||||||
# Default value '[]' disables cors support
|
# Default value '[]' disables cors support
|
||||||
# Use '["*"]' to allow any origin
|
# Use '["*"]' to allow any origin
|
||||||
cors_allowed_origins = "[]"
|
cors_allowed_origins = []
|
||||||
|
|
||||||
# A list of methods the client is allowed to use with cross-domain requests
|
# A list of methods the client is allowed to use with cross-domain requests
|
||||||
cors_allowed_methods = "[HEAD GET POST]"
|
cors_allowed_methods = ["HEAD", "GET", "POST"]
|
||||||
|
|
||||||
 # A list of non simple headers the client is allowed to use with cross-domain requests
-cors_allowed_headers = "[Origin Accept Content-Type X-Requested-With X-Server-Time]"
+cors_allowed_headers = ["Origin", "Accept", "Content-Type", "X-Requested-With", "X-Server-Time"]

 # TCP or UNIX socket address for the gRPC server to listen on
 # NOTE: This server only supports /broadcast_tx_commit
@@ -88,7 +92,7 @@ grpc_laddr = ""

 # Maximum number of simultaneous connections.
 # Does not include RPC (HTTP&WebSocket) connections. See max_open_connections
-# If you want to accept more significant number than the default, make sure
+# If you want to accept a larger number than the default, make sure
 # you increase your OS limits.
 # 0 - unlimited.
 # Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
@@ -100,7 +104,7 @@ unsafe = false

 # Maximum number of simultaneous connections (including WebSocket).
 # Does not include gRPC connections. See grpc_max_open_connections
-# If you want to accept more significant number than the default, make sure
+# If you want to accept a larger number than the default, make sure
 # you increase your OS limits.
 # 0 - unlimited.
 # Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
@@ -113,6 +117,12 @@ max_open_connections = 900
 # Address to listen for incoming connections
 laddr = "tcp://0.0.0.0:26656"

+# Address to advertise to peers for them to dial
+# If empty, will use the same port as the laddr,
+# and will introspect on the listener or use UPnP
+# to figure out the address.
+external_address = ""
+
 # Comma separated list of seed nodes to connect to
 seeds = ""

@@ -123,7 +133,7 @@ persistent_peers = ""
 upnp = false

 # Path to address book
-addr_book_file = "addrbook.json"
+addr_book_file = "config/addrbook.json"

 # Set true for strict address routability rules
 # Set false for private or local networks
@@ -171,26 +181,26 @@ dial_timeout = "3s"

 recheck = true
 broadcast = true
-wal_dir = "data/mempool.wal"
+wal_dir = ""

 # size of the mempool
-size = 100000
+size = 5000

 # size of the cache (used to filter transactions we saw earlier)
-cache_size = 100000
+cache_size = 10000

 ##### consensus configuration options #####
 [consensus]

 wal_file = "data/cs.wal/wal"

-timeout_propose = "3000ms"
+timeout_propose = "3s"
 timeout_propose_delta = "500ms"
-timeout_prevote = "1000ms"
+timeout_prevote = "1s"
 timeout_prevote_delta = "500ms"
-timeout_precommit = "1000ms"
+timeout_precommit = "1s"
 timeout_precommit_delta = "500ms"
-timeout_commit = "1000ms"
+timeout_commit = "1s"

 # Make progress as soon as we have all the precommits (as if TimeoutCommit = 0)
 skip_timeout_commit = false
@@ -201,10 +211,10 @@ create_empty_blocks_interval = "0s"

 # Reactor sleep duration parameters
 peer_gossip_sleep_duration = "100ms"
-peer_query_maj23_sleep_duration = "2000ms"
+peer_query_maj23_sleep_duration = "2s"

 # Block time parameters. Corresponds to the minimum time increment between consecutive blocks.
-blocktime_iota = "1000ms"
+blocktime_iota = "1s"

 ##### transactions indexer configuration options #####
 [tx_index]
@@ -245,7 +255,7 @@ prometheus = false
 prometheus_listen_addr = ":26660"

 # Maximum number of simultaneous connections.
-# If you want to accept a more significant number than the default, make sure
+# If you want to accept a larger number than the default, make sure
 # you increase your OS limits.
 # 0 - unlimited.
 max_open_connections = 3
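For readers who configure nodes programmatically rather than through `config.toml`, the same defaults should be reachable from Go. This is a minimal sketch that assumes the v0.27 `config` package exposes these fields under their usual names (`Mempool.Size`, `Mempool.CacheSize`, `Consensus.TimeoutCommit`, `P2P.ExternalAddress`); verify against your vendored version before relying on it.

```go
package main

import (
	"fmt"

	cfg "github.com/tendermint/tendermint/config"
)

func main() {
	// Start from the built-in defaults, which should mirror the documented
	// values above (size = 5000, cache_size = 10000, timeout_commit = "1s").
	c := cfg.DefaultConfig()
	fmt.Println(c.Mempool.Size, c.Mempool.CacheSize)
	fmt.Println(c.Consensus.TimeoutCommit)

	// external_address has no default; set it explicitly when the node sits
	// behind NAT (the address below is a placeholder).
	c.P2P.ExternalAddress = "tcp://1.2.3.4:26656"
}
```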
@@ -1,6 +1,7 @@
 package log

 import (
+	"io"
 	"os"
 	"testing"

@@ -19,12 +20,22 @@ var (
 // inside a test (not in the init func) because
 // verbose flag only set at the time of testing.
 func TestingLogger() Logger {
+	return TestingLoggerWithOutput(os.Stdout)
+}
+
+// TestingLoggerWOutput returns a TMLogger which writes to (w io.Writer) if testing being run
+// with the verbose (-v) flag, NopLogger otherwise.
+//
+// Note that the call to TestingLoggerWithOutput(w io.Writer) must be made
+// inside a test (not in the init func) because
+// verbose flag only set at the time of testing.
+func TestingLoggerWithOutput(w io.Writer) Logger {
 	if _testingLogger != nil {
 		return _testingLogger
 	}

 	if testing.Verbose() {
-		_testingLogger = NewTMLogger(NewSyncWriter(os.Stdout))
+		_testingLogger = NewTMLogger(NewSyncWriter(w))
 	} else {
 		_testingLogger = NewNopLogger()
 	}
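The new `TestingLoggerWithOutput(w io.Writer)` lets a test direct log output into any writer instead of stdout. A minimal usage sketch follows; note that, per the code above, the logger is cached in `_testingLogger` after the first call, so the writer only takes effect for the first logger created in the test binary, and only when `-v` is set.

```go
package log_test

import (
	"bytes"
	"testing"

	"github.com/tendermint/tendermint/libs/log"
)

func TestCaptureLogOutput(t *testing.T) {
	var buf bytes.Buffer

	// Writes land in buf only when the test run is verbose (-v); otherwise a
	// no-op logger is returned.
	logger := log.TestingLoggerWithOutput(&buf)
	logger.Info("processed block", "height", 1)

	t.Logf("captured: %s", buf.String())
}
```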
@@ -108,6 +108,10 @@ func PostCheckMaxGas(maxGas int64) PostCheckFunc {
 		if maxGas == -1 {
 			return nil
 		}
+		if res.GasWanted < 0 {
+			return fmt.Errorf("gas wanted %d is negative",
+				res.GasWanted)
+		}
 		if res.GasWanted > maxGas {
 			return fmt.Errorf("gas wanted %d is greater than max gas %d",
 				res.GasWanted, maxGas)
@@ -486,11 +490,15 @@ func (mem *Mempool) ReapMaxBytesMaxGas(maxBytes, maxGas int64) types.Txs {
 			return txs
 		}
 		totalBytes += int64(len(memTx.tx)) + aminoOverhead
-		// Check total gas requirement
-		if maxGas > -1 && totalGas+memTx.gasWanted > maxGas {
+		// Check total gas requirement.
+		// If maxGas is negative, skip this check.
+		// Since newTotalGas < masGas, which
+		// must be non-negative, it follows that this won't overflow.
+		newTotalGas := totalGas + memTx.gasWanted
+		if maxGas > -1 && newTotalGas > maxGas {
 			return txs
 		}
-		totalGas += memTx.gasWanted
+		totalGas = newTotalGas
 		txs = append(txs, memTx.tx)
 	}
 	return txs
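The reaping change accumulates into a fresh `newTotalGas` and only commits it after the bound check, so the running total that earlier iterations already validated is never advanced past `maxGas`. The same pattern in isolation, as a standalone sketch with hypothetical inputs rather than the real mempool types:

```go
package mempoolsketch

// reapByGas keeps selecting items until adding the next one would push the
// running gas total past maxGas (a negative maxGas disables the bound).
func reapByGas(txGas []int64, maxGas int64) []int64 {
	var picked []int64
	var totalGas int64
	for _, g := range txGas {
		// Compute the candidate total first; commit it only if it stays
		// within the bound, mirroring the newTotalGas change above.
		newTotalGas := totalGas + g
		if maxGas > -1 && newTotalGas > maxGas {
			return picked
		}
		totalGas = newTotalGas
		picked = append(picked, g)
	}
	return picked
}
```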
@@ -548,13 +556,18 @@ func (mem *Mempool) Update(
 	// Remove committed transactions.
 	txsLeft := mem.removeTxs(txs)

-	// Recheck mempool txs if any txs were committed in the block
-	if mem.config.Recheck && len(txsLeft) > 0 {
+	// Either recheck non-committed txs to see if they became invalid
+	// or just notify there're some txs left.
+	if len(txsLeft) > 0 {
+		if mem.config.Recheck {
 			mem.logger.Info("Recheck txs", "numtxs", len(txsLeft), "height", height)
 			mem.recheckTxs(txsLeft)
 			// At this point, mem.txs are being rechecked.
 			// mem.recheckCursor re-scans mem.txs and possibly removes some txs.
 			// Before mem.Reap(), we should wait for mem.recheckCursor to be nil.
+		} else {
+			mem.notifyTxsAvailable()
+		}
 	}

 	// Update metrics
node/node.go
@@ -210,13 +210,18 @@ func NewNode(config *cfg.Config,
 	// what happened during block replay).
 	state = sm.LoadState(stateDB)

-	// Ensure the state's block version matches that of the software.
+	// Log the version info.
+	logger.Info("Version info",
+		"software", version.TMCoreSemVer,
+		"block", version.BlockProtocol,
+		"p2p", version.P2PProtocol,
+	)
+
+	// If the state and software differ in block version, at least log it.
 	if state.Version.Consensus.Block != version.BlockProtocol {
-		return nil, fmt.Errorf(
-			"Block version of the software does not match that of the state.\n"+
-				"Got version.BlockProtocol=%v, state.Version.Consensus.Block=%v",
-			version.BlockProtocol,
-			state.Version.Consensus.Block,
+		logger.Info("Software and state have different block protocols",
+			"software", version.BlockProtocol,
+			"state", state.Version.Consensus.Block,
 		)
 	}

@@ -343,9 +348,8 @@ func NewNode(config *cfg.Config,
 	indexerService := txindex.NewIndexerService(txIndexer, eventBus)
 	indexerService.SetLogger(logger.With("module", "txindex"))

-	var (
-		p2pLogger = logger.With("module", "p2p")
-		nodeInfo  = makeNodeInfo(
+	p2pLogger := logger.With("module", "p2p")
+	nodeInfo, err := makeNodeInfo(
 		config,
 		nodeKey.ID(),
 		txIndexer,
@@ -356,7 +360,9 @@ func NewNode(config *cfg.Config,
 			state.Version.Consensus.App,
 		),
 	)
-	)
+	if err != nil {
+		return nil, err
+	}

 	// Setup Transport.
 	var (
@@ -454,7 +460,7 @@ func NewNode(config *cfg.Config,
 			Seeds:    splitAndTrimEmpty(config.P2P.Seeds, ",", " "),
 			SeedMode: config.P2P.SeedMode,
 		})
-		pexReactor.SetLogger(p2pLogger)
+		pexReactor.SetLogger(logger.With("module", "pex"))
 		sw.AddReactor("PEX", pexReactor)
 	}

@@ -777,7 +783,7 @@ func makeNodeInfo(
 	txIndexer txindex.TxIndexer,
 	chainID string,
 	protocolVersion p2p.ProtocolVersion,
-) p2p.NodeInfo {
+) (p2p.NodeInfo, error) {
 	txIndexerStatus := "on"
 	if _, ok := txIndexer.(*null.TxIndex); ok {
 		txIndexerStatus = "off"
@@ -812,7 +818,8 @@ func makeNodeInfo(

 	nodeInfo.ListenAddr = lAddr

-	return nodeInfo
+	err := nodeInfo.Validate()
+	return nodeInfo, err
 }

 //------------------------------------------------------------------------------
@@ -160,6 +160,7 @@ func NewMConnectionWithConfig(conn net.Conn, chDescs []*ChannelDescriptor, onRec
 		onReceive: onReceive,
 		onError:   onError,
 		config:    config,
+		created:   time.Now(),
 	}

 	// Create channels
@@ -10,7 +10,6 @@ import (
 	"net"
 	"time"

-	// forked to github.com/tendermint/crypto
 	"golang.org/x/crypto/chacha20poly1305"
 	"golang.org/x/crypto/curve25519"
 	"golang.org/x/crypto/nacl/box"
@@ -175,6 +175,9 @@ func (na *NetAddress) Same(other interface{}) bool {

 // String representation: <ID>@<IP>:<PORT>
 func (na *NetAddress) String() string {
+	if na == nil {
+		return "<nil-NetAddress>"
+	}
 	if na.str == "" {
 		addrStr := na.DialString()
 		if na.ID != "" {
@@ -186,6 +189,9 @@ func (na *NetAddress) String() string {
 }

 func (na *NetAddress) DialString() string {
+	if na == nil {
+		return "<nil-NetAddress>"
+	}
 	return net.JoinHostPort(
 		na.IP.String(),
 		strconv.FormatUint(uint64(na.Port), 10),
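With these guards, formatting a nil `*NetAddress` (for example in a log line) returns a placeholder string instead of panicking. The same nil-receiver-safe `String()` pattern on a hypothetical type, just to show why it composes well with `fmt`:

```go
package main

import "fmt"

// Addr is a hypothetical stand-in for a pointer type that may be nil in logs.
type Addr struct{ host string }

// String is safe to call on a nil receiver, mirroring the NetAddress change.
func (a *Addr) String() string {
	if a == nil {
		return "<nil-Addr>"
	}
	return a.host
}

func main() {
	var a *Addr
	fmt.Println(a) // prints "<nil-Addr>" instead of panicking inside String()
}
```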
@@ -36,7 +36,7 @@ type nodeInfoAddress interface {
 // nodeInfoTransport validates a nodeInfo and checks
 // our compatibility with it. It's for use in the handshake.
 type nodeInfoTransport interface {
-	ValidateBasic() error
+	Validate() error
 	CompatibleWith(other NodeInfo) error
 }

@@ -103,7 +103,7 @@ func (info DefaultNodeInfo) ID() ID {
 	return info.ID_
 }

-// ValidateBasic checks the self-reported DefaultNodeInfo is safe.
+// Validate checks the self-reported DefaultNodeInfo is safe.
 // It returns an error if there
 // are too many Channels, if there are any duplicate Channels,
 // if the ListenAddr is malformed, or if the ListenAddr is a host name
@@ -116,7 +116,7 @@ func (info DefaultNodeInfo) ID() ID {
 // International clients could then use punycode (or we could use
 // url-encoding), and we just need to be careful with how we handle that in our
 // clients. (e.g. off by default).
-func (info DefaultNodeInfo) ValidateBasic() error {
+func (info DefaultNodeInfo) Validate() error {

 	// ID is already validated.

@@ -12,7 +12,7 @@ func TestNodeInfoValidate(t *testing.T) {

 	// empty fails
 	ni := DefaultNodeInfo{}
-	assert.Error(t, ni.ValidateBasic())
+	assert.Error(t, ni.Validate())

 	channels := make([]byte, maxNumChannels)
 	for i := 0; i < maxNumChannels; i++ {
@@ -68,13 +68,13 @@ func TestNodeInfoValidate(t *testing.T) {
 	// test case passes
 	ni = testNodeInfo(nodeKey.ID(), name).(DefaultNodeInfo)
 	ni.Channels = channels
-	assert.NoError(t, ni.ValidateBasic())
+	assert.NoError(t, ni.Validate())

 	for _, tc := range testCases {
 		ni := testNodeInfo(nodeKey.ID(), name).(DefaultNodeInfo)
 		ni.Channels = channels
 		tc.malleateNodeInfo(&ni)
-		err := ni.ValidateBasic()
+		err := ni.Validate()
 		if tc.expectErr {
 			assert.Error(t, err, tc.testName)
 		} else {
@@ -98,13 +98,15 @@ func (ps *PeerSet) Get(peerKey ID) Peer {
 }

 // Remove discards peer by its Key, if the peer was previously memoized.
-func (ps *PeerSet) Remove(peer Peer) {
+// Returns true if the peer was removed, and false if it was not found.
+// in the set.
+func (ps *PeerSet) Remove(peer Peer) bool {
 	ps.mtx.Lock()
 	defer ps.mtx.Unlock()

 	item := ps.lookup[peer.ID()]
 	if item == nil {
-		return
+		return false
 	}

 	index := item.index
@@ -116,7 +118,7 @@ func (ps *PeerSet) Remove(peer Peer) {
 	if index == len(ps.list)-1 {
 		ps.list = newList
 		delete(ps.lookup, peer.ID())
-		return
+		return true
 	}

 	// Replace the popped item with the last item in the old list.
@@ -127,6 +129,7 @@ func (ps *PeerSet) Remove(peer Peer) {
 	lastPeerItem.index = index
 	ps.list = newList
 	delete(ps.lookup, peer.ID())
+	return true
 }

 // Size returns the number of unique items in the peerSet.
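`Remove` now reports whether the peer was actually present, which lets callers keep counters consistent (the switch hunks further down use it exactly this way for the peers gauge). A self-contained sketch of that call pattern, with stand-in types rather than the real `p2p` ones:

```go
package peersketch

// gauge matches the Add method used by the switch's peer metric.
type gauge interface{ Add(delta float64) }

// set is a stand-in for the peer set: Remove reports whether the id was present.
type set interface{ Remove(id string) bool }

// removeAndTrack decrements the gauge only when the peer was really removed,
// so removing an already-absent peer cannot drive the counter negative.
func removeAndTrack(peers set, g gauge, id string) {
	if peers.Remove(id) {
		g.Add(-1)
	}
}
```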
@@ -60,13 +60,15 @@ func TestPeerSetAddRemoveOne(t *testing.T) {
 	n := len(peerList)
 	// 1. Test removing from the front
 	for i, peerAtFront := range peerList {
-		peerSet.Remove(peerAtFront)
+		removed := peerSet.Remove(peerAtFront)
+		assert.True(t, removed)
 		wantSize := n - i - 1
 		for j := 0; j < 2; j++ {
 			assert.Equal(t, false, peerSet.Has(peerAtFront.ID()), "#%d Run #%d: failed to remove peer", i, j)
 			assert.Equal(t, wantSize, peerSet.Size(), "#%d Run #%d: failed to remove peer and decrement size", i, j)
 			// Test the route of removing the now non-existent element
-			peerSet.Remove(peerAtFront)
+			removed := peerSet.Remove(peerAtFront)
+			assert.False(t, removed)
 		}
 	}

@@ -81,7 +83,8 @@ func TestPeerSetAddRemoveOne(t *testing.T) {
 	// b) In reverse, remove each element
 	for i := n - 1; i >= 0; i-- {
 		peerAtEnd := peerList[i]
-		peerSet.Remove(peerAtEnd)
+		removed := peerSet.Remove(peerAtEnd)
+		assert.True(t, removed)
 		assert.Equal(t, false, peerSet.Has(peerAtEnd.ID()), "#%d: failed to remove item at end", i)
 		assert.Equal(t, i, peerSet.Size(), "#%d: differing sizes after peerSet.Remove(atEndPeer)", i)
 	}
@@ -105,7 +108,8 @@ func TestPeerSetAddRemoveMany(t *testing.T) {
 	}

 	for i, peer := range peers {
-		peerSet.Remove(peer)
+		removed := peerSet.Remove(peer)
+		assert.True(t, removed)
 		if peerSet.Has(peer.ID()) {
 			t.Errorf("Failed to remove peer")
 		}
@@ -162,26 +162,29 @@ func (a *addrBook) FilePath() string {

 // AddOurAddress one of our addresses.
 func (a *addrBook) AddOurAddress(addr *p2p.NetAddress) {
-	a.Logger.Info("Add our address to book", "addr", addr)
 	a.mtx.Lock()
+	defer a.mtx.Unlock()
+
+	a.Logger.Info("Add our address to book", "addr", addr)
 	a.ourAddrs[addr.String()] = struct{}{}
-	a.mtx.Unlock()
 }

 // OurAddress returns true if it is our address.
 func (a *addrBook) OurAddress(addr *p2p.NetAddress) bool {
 	a.mtx.Lock()
+	defer a.mtx.Unlock()
+
 	_, ok := a.ourAddrs[addr.String()]
-	a.mtx.Unlock()
 	return ok
 }

 func (a *addrBook) AddPrivateIDs(IDs []string) {
 	a.mtx.Lock()
+	defer a.mtx.Unlock()
+
 	for _, id := range IDs {
 		a.privateIDs[p2p.ID(id)] = struct{}{}
 	}
-	a.mtx.Unlock()
 }

 // AddAddress implements AddrBook
@@ -191,6 +194,7 @@ func (a *addrBook) AddPrivateIDs(IDs []string) {
 func (a *addrBook) AddAddress(addr *p2p.NetAddress, src *p2p.NetAddress) error {
 	a.mtx.Lock()
 	defer a.mtx.Unlock()
+
 	return a.addAddress(addr, src)
 }

@@ -198,6 +202,7 @@ func (a *addrBook) AddAddress(addr *p2p.NetAddress, src *p2p.NetAddress) error {
 func (a *addrBook) RemoveAddress(addr *p2p.NetAddress) {
 	a.mtx.Lock()
 	defer a.mtx.Unlock()
+
 	ka := a.addrLookup[addr.ID]
 	if ka == nil {
 		return
@@ -211,14 +216,16 @@ func (a *addrBook) RemoveAddress(addr *p2p.NetAddress) {
 func (a *addrBook) IsGood(addr *p2p.NetAddress) bool {
 	a.mtx.Lock()
 	defer a.mtx.Unlock()
+
 	return a.addrLookup[addr.ID].isOld()
 }

 // HasAddress returns true if the address is in the book.
 func (a *addrBook) HasAddress(addr *p2p.NetAddress) bool {
 	a.mtx.Lock()
+	defer a.mtx.Unlock()
+
 	ka := a.addrLookup[addr.ID]
-	a.mtx.Unlock()
 	return ka != nil
 }

@@ -292,6 +299,7 @@ func (a *addrBook) PickAddress(biasTowardsNewAddrs int) *p2p.NetAddress {
 func (a *addrBook) MarkGood(addr *p2p.NetAddress) {
 	a.mtx.Lock()
 	defer a.mtx.Unlock()
+
 	ka := a.addrLookup[addr.ID]
 	if ka == nil {
 		return
@@ -306,6 +314,7 @@ func (a *addrBook) MarkGood(addr *p2p.NetAddress) {
 func (a *addrBook) MarkAttempt(addr *p2p.NetAddress) {
 	a.mtx.Lock()
 	defer a.mtx.Unlock()
+
 	ka := a.addrLookup[addr.ID]
 	if ka == nil {
 		return
@@ -461,12 +470,13 @@ ADDRS_LOOP:

 // ListOfKnownAddresses returns the new and old addresses.
 func (a *addrBook) ListOfKnownAddresses() []*knownAddress {
-	addrs := []*knownAddress{}
 	a.mtx.Lock()
+	defer a.mtx.Unlock()
+
+	addrs := []*knownAddress{}
 	for _, addr := range a.addrLookup {
 		addrs = append(addrs, addr.copy())
 	}
-	a.mtx.Unlock()
 	return addrs
 }

@@ -476,6 +486,7 @@ func (a *addrBook) ListOfKnownAddresses() []*knownAddress {
 func (a *addrBook) Size() int {
 	a.mtx.Lock()
 	defer a.mtx.Unlock()
+
 	return a.size()
 }

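Every address-book method above now pairs `Lock` with an immediate `defer Unlock` instead of unlocking manually before each return. The benefit is that the mutex is released on every exit path, including early returns and panics. A minimal sketch of the pattern on a hypothetical type:

```go
package addrsketch

import "sync"

// book is a hypothetical stand-in for a map guarded by one mutex.
type book struct {
	mtx   sync.Mutex
	addrs map[string]struct{}
}

// Has releases the lock on every return path because the unlock is deferred
// right after the lock is taken; no path can leave the mutex held.
func (b *book) Has(addr string) bool {
	b.mtx.Lock()
	defer b.mtx.Unlock()

	_, ok := b.addrs[addr]
	return ok
}
```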
@@ -211,7 +211,9 @@ func (sw *Switch) OnStop() {
 	// Stop peers
 	for _, p := range sw.peers.List() {
 		p.Stop()
-		sw.peers.Remove(p)
+		if sw.peers.Remove(p) {
+			sw.metrics.Peers.Add(float64(-1))
+		}
 	}

 	// Stop reactors
@@ -299,8 +301,9 @@ func (sw *Switch) StopPeerGracefully(peer Peer) {
 }

 func (sw *Switch) stopAndRemovePeer(peer Peer, reason interface{}) {
-	sw.peers.Remove(peer)
+	if sw.peers.Remove(peer) {
 		sw.metrics.Peers.Add(float64(-1))
+	}
 	peer.Stop()
 	for _, reactor := range sw.reactors {
 		reactor.RemovePeer(peer, reason)
@@ -505,6 +508,12 @@ func (sw *Switch) acceptRoutine() {
 				"err", err,
 				"numPeers", sw.peers.Size(),
 			)
+			// We could instead have a retry loop around the acceptRoutine,
+			// but that would need to stop and let the node shutdown eventually.
+			// So might as well panic and let process managers restart the node.
+			// There's no point in letting the node run without the acceptRoutine,
+			// since it won't be able to accept new connections.
+			panic(fmt.Errorf("accept routine exited: %v", err))
 		}

 		break
@@ -3,10 +3,17 @@ package p2p
 import (
 	"bytes"
 	"fmt"
+	"io/ioutil"
+	"net/http"
+	"net/http/httptest"
+	"regexp"
+	"strconv"
 	"sync"
 	"testing"
 	"time"

+	stdprometheus "github.com/prometheus/client_golang/prometheus"
+
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"

@@ -335,6 +342,54 @@ func TestSwitchStopsNonPersistentPeerOnError(t *testing.T) {
 	assert.False(p.IsRunning())
 }

+func TestSwitchStopPeerForError(t *testing.T) {
+	s := httptest.NewServer(stdprometheus.UninstrumentedHandler())
+	defer s.Close()
+
+	scrapeMetrics := func() string {
+		resp, _ := http.Get(s.URL)
+		buf, _ := ioutil.ReadAll(resp.Body)
+		return string(buf)
+	}
+
+	namespace, subsystem, name := config.TestInstrumentationConfig().Namespace, MetricsSubsystem, "peers"
+	re := regexp.MustCompile(namespace + `_` + subsystem + `_` + name + ` ([0-9\.]+)`)
+	peersMetricValue := func() float64 {
+		matches := re.FindStringSubmatch(scrapeMetrics())
+		f, _ := strconv.ParseFloat(matches[1], 64)
+		return f
+	}
+
+	p2pMetrics := PrometheusMetrics(namespace)
+
+	// make two connected switches
+	sw1, sw2 := MakeSwitchPair(t, func(i int, sw *Switch) *Switch {
+		// set metrics on sw1
+		if i == 0 {
+			opt := WithMetrics(p2pMetrics)
+			opt(sw)
+		}
+		return initSwitchFunc(i, sw)
+	})
+
+	assert.Equal(t, len(sw1.Peers().List()), 1)
+	assert.EqualValues(t, 1, peersMetricValue())
+
+	// send messages to the peer from sw1
+	p := sw1.Peers().List()[0]
+	p.Send(0x1, []byte("here's a message to send"))
+
+	// stop sw2. this should cause the p to fail,
+	// which results in calling StopPeerForError internally
+	sw2.Stop()
+
+	// now call StopPeerForError explicitly, eg. from a reactor
+	sw1.StopPeerForError(p, fmt.Errorf("some err"))
+
+	assert.Equal(t, len(sw1.Peers().List()), 0)
+	assert.EqualValues(t, 0, peersMetricValue())
+}
+
 func TestSwitchReconnectsToPersistentPeer(t *testing.T) {
 	assert, require := assert.New(t), require.New(t)

@@ -24,7 +24,7 @@ type mockNodeInfo struct {

 func (ni mockNodeInfo) ID() ID                              { return ni.addr.ID }
 func (ni mockNodeInfo) NetAddress() *NetAddress             { return ni.addr }
-func (ni mockNodeInfo) ValidateBasic() error                { return nil }
+func (ni mockNodeInfo) Validate() error                     { return nil }
 func (ni mockNodeInfo) CompatibleWith(other NodeInfo) error { return nil }

 func AddPeerToSwitch(sw *Switch, peer Peer) {
@@ -184,7 +184,7 @@ func MakeSwitch(

 	// TODO: let the config be passed in?
 	sw := initSwitch(i, NewSwitch(cfg, t, opts...))
-	sw.SetLogger(log.TestingLogger())
+	sw.SetLogger(log.TestingLogger().With("switch", i))
 	sw.SetNodeKey(&nodeKey)

 	ni := nodeInfo.(DefaultNodeInfo)
@@ -350,7 +350,7 @@ func (mt *MultiplexTransport) upgrade(
 		}
 	}

-	if err := nodeInfo.ValidateBasic(); err != nil {
+	if err := nodeInfo.Validate(); err != nil {
 		return nil, nil, ErrRejected{
 			conn: c,
 			err:  err,
@@ -109,6 +109,24 @@ func (c *HTTP) broadcastTX(route string, tx types.Tx) (*ctypes.ResultBroadcastTx
 	return result, nil
 }

+func (c *HTTP) UnconfirmedTxs(limit int) (*ctypes.ResultUnconfirmedTxs, error) {
+	result := new(ctypes.ResultUnconfirmedTxs)
+	_, err := c.rpc.Call("unconfirmed_txs", map[string]interface{}{"limit": limit}, result)
+	if err != nil {
+		return nil, errors.Wrap(err, "unconfirmed_txs")
+	}
+	return result, nil
+}
+
+func (c *HTTP) NumUnconfirmedTxs() (*ctypes.ResultUnconfirmedTxs, error) {
+	result := new(ctypes.ResultUnconfirmedTxs)
+	_, err := c.rpc.Call("num_unconfirmed_txs", map[string]interface{}{}, result)
+	if err != nil {
+		return nil, errors.Wrap(err, "num_unconfirmed_txs")
+	}
+	return result, nil
+}
+
 func (c *HTTP) NetInfo() (*ctypes.ResultNetInfo, error) {
 	result := new(ctypes.ResultNetInfo)
 	_, err := c.rpc.Call("net_info", map[string]interface{}{}, result)
@@ -93,3 +93,9 @@ type NetworkClient interface {
 type EventsClient interface {
 	types.EventBusSubscriber
 }
+
+// MempoolClient shows us data about current mempool state.
+type MempoolClient interface {
+	UnconfirmedTxs(limit int) (*ctypes.ResultUnconfirmedTxs, error)
+	NumUnconfirmedTxs() (*ctypes.ResultUnconfirmedTxs, error)
+}
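Both the HTTP client above and the Local client below now satisfy this `MempoolClient` interface. A minimal usage sketch against a local node; the address, limit, and start/stop handling follow the patterns shown in the RPC doc comments further down, and the exact field names (`Txs`, `N`) come from the result type used in the tests:

```go
package main

import (
	"fmt"
	"log"

	"github.com/tendermint/tendermint/rpc/client"
)

func main() {
	c := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
	if err := c.Start(); err != nil {
		log.Fatal(err)
	}
	defer c.Stop()

	// List up to 10 transactions currently sitting in the mempool.
	res, err := c.UnconfirmedTxs(10)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("unconfirmed txs returned:", len(res.Txs))

	// Or just ask for the count.
	n, err := c.NumUnconfirmedTxs()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("mempool size:", n.N)
}
```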
@@ -76,6 +76,14 @@ func (Local) BroadcastTxSync(tx types.Tx) (*ctypes.ResultBroadcastTx, error) {
 	return core.BroadcastTxSync(tx)
 }

+func (Local) UnconfirmedTxs(limit int) (*ctypes.ResultUnconfirmedTxs, error) {
+	return core.UnconfirmedTxs(limit)
+}
+
+func (Local) NumUnconfirmedTxs() (*ctypes.ResultUnconfirmedTxs, error) {
+	return core.NumUnconfirmedTxs()
+}
+
 func (Local) NetInfo() (*ctypes.ResultNetInfo, error) {
 	return core.NetInfo()
 }
@@ -281,6 +281,42 @@ func TestBroadcastTxCommit(t *testing.T) {
 	}
 }

+func TestUnconfirmedTxs(t *testing.T) {
+	_, _, tx := MakeTxKV()
+
+	mempool := node.MempoolReactor().Mempool
+	_ = mempool.CheckTx(tx, nil)
+
+	for i, c := range GetClients() {
+		mc, ok := c.(client.MempoolClient)
+		require.True(t, ok, "%d", i)
+		txs, err := mc.UnconfirmedTxs(1)
+		require.Nil(t, err, "%d: %+v", i, err)
+		assert.Exactly(t, types.Txs{tx}, types.Txs(txs.Txs))
+	}
+
+	mempool.Flush()
+}
+
+func TestNumUnconfirmedTxs(t *testing.T) {
+	_, _, tx := MakeTxKV()
+
+	mempool := node.MempoolReactor().Mempool
+	_ = mempool.CheckTx(tx, nil)
+	mempoolSize := mempool.Size()
+
+	for i, c := range GetClients() {
+		mc, ok := c.(client.MempoolClient)
+		require.True(t, ok, "%d", i)
+		res, err := mc.NumUnconfirmedTxs()
+		require.Nil(t, err, "%d: %+v", i, err)
+
+		assert.Equal(t, mempoolSize, res.N)
+	}
+
+	mempool.Flush()
+}
+
 func TestTx(t *testing.T) {
 	// first we broadcast a tx
 	c := getHTTPClient()
@@ -15,6 +15,11 @@ import (
 //
 // ```go
 // client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
+// err := client.Start()
+// if err != nil {
+//   // handle error
+// }
+// defer client.Stop()
 // result, err := client.ABCIQuery("", "abcd", true)
 // ```
 //
@@ -69,6 +74,11 @@ func ABCIQuery(path string, data cmn.HexBytes, height int64, prove bool) (*ctype
 //
 // ```go
 // client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
+// err := client.Start()
+// if err != nil {
+//   // handle error
+// }
+// defer client.Stop()
 // info, err := client.ABCIInfo()
 // ```
 //
@@ -18,6 +18,11 @@ import (
 //
 // ```go
 // client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
+// err := client.Start()
+// if err != nil {
+//   // handle error
+// }
+// defer client.Stop()
 // info, err := client.BlockchainInfo(10, 10)
 // ```
 //
@@ -123,6 +128,11 @@ func filterMinMax(height, min, max, limit int64) (int64, int64, error) {
 //
 // ```go
 // client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
+// err := client.Start()
+// if err != nil {
+//   // handle error
+// }
+// defer client.Stop()
 // info, err := client.Block(10)
 // ```
 //
@@ -235,6 +245,11 @@ func Block(heightPtr *int64) (*ctypes.ResultBlock, error) {
 //
 // ```go
 // client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
+// err := client.Start()
+// if err != nil {
+//   // handle error
+// }
+// defer client.Stop()
 // info, err := client.Commit(11)
 // ```
 //
@@ -329,6 +344,11 @@ func Commit(heightPtr *int64) (*ctypes.ResultCommit, error) {
 //
 // ```go
 // client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
+// err := client.Start()
+// if err != nil {
+//   // handle error
+// }
+// defer client.Stop()
 // info, err := client.BlockResults(10)
 // ```
 //
@@ -16,6 +16,11 @@ import (
 //
 // ```go
 // client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
+// err := client.Start()
+// if err != nil {
+//   // handle error
+// }
+// defer client.Stop()
 // state, err := client.Validators()
 // ```
 //
@@ -67,6 +72,11 @@ func Validators(heightPtr *int64) (*ctypes.ResultValidators, error) {
 //
 // ```go
 // client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
+// err := client.Start()
+// if err != nil {
+//   // handle error
+// }
+// defer client.Stop()
 // state, err := client.DumpConsensusState()
 // ```
 //
@@ -225,6 +235,11 @@ func DumpConsensusState() (*ctypes.ResultDumpConsensusState, error) {
 //
 // ```go
 // client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
+// err := client.Start()
+// if err != nil {
+//   // handle error
+// }
+// defer client.Stop()
 // state, err := client.ConsensusState()
 // ```
 //
@@ -273,6 +288,11 @@ func ConsensusState() (*ctypes.ResultConsensusState, error) {
 //
 // ```go
 // client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
+// err := client.Start()
+// if err != nil {
+//   // handle error
+// }
+// defer client.Stop()
 // state, err := client.ConsensusParams()
 // ```
 //
@@ -55,6 +55,10 @@ import (
 //
 // client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
 // err := client.Start()
+// if err != nil {
+//   // handle error
+// }
+// defer client.Stop()
 // ctx, cancel := context.WithTimeout(context.Background(), timeout)
 // defer cancel()
 // query := query.MustParse("tm.event = 'Tx' AND tx.height = 3")
@@ -118,6 +122,10 @@ func Subscribe(wsCtx rpctypes.WSRPCContext, query string) (*ctypes.ResultSubscri
 // ```go
 // client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
 // err := client.Start()
+// if err != nil {
+//   // handle error
+// }
+// defer client.Stop()
 // err = client.Unsubscribe("test-client", query)
 // ```
 //
@@ -158,6 +166,10 @@ func Unsubscribe(wsCtx rpctypes.WSRPCContext, query string) (*ctypes.ResultUnsub
 // ```go
 // client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
 // err := client.Start()
+// if err != nil {
+//   // handle error
+// }
+// defer client.Stop()
 // err = client.UnsubscribeAll("test-client")
 // ```
 //
@@ -13,6 +13,11 @@ import (
 //
 // ```go
 // client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
+// err := client.Start()
+// if err != nil {
+//   // handle error
+// }
+// defer client.Stop()
 // result, err := client.Health()
 // ```
 //
@@ -24,6 +24,11 @@ import (
 //
 // ```go
 // client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
+// err := client.Start()
+// if err != nil {
+//   // handle error
+// }
+// defer client.Stop()
 // result, err := client.BroadcastTxAsync("123")
 // ```
 //
@@ -64,6 +69,11 @@ func BroadcastTxAsync(tx types.Tx) (*ctypes.ResultBroadcastTx, error) {
 //
 // ```go
 // client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
+// err := client.Start()
+// if err != nil {
+//   // handle error
+// }
+// defer client.Stop()
 // result, err := client.BroadcastTxSync("456")
 // ```
 //
@@ -118,6 +128,11 @@ func BroadcastTxSync(tx types.Tx) (*ctypes.ResultBroadcastTx, error) {
 //
 // ```go
 // client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
+// err := client.Start()
+// if err != nil {
+//   // handle error
+// }
+// defer client.Stop()
 // result, err := client.BroadcastTxCommit("789")
 // ```
 //
@@ -198,7 +213,10 @@ func BroadcastTxCommit(tx types.Tx) (*ctypes.ResultBroadcastTxCommit, error) {
 	// TODO: configurable?
 	var deliverTxTimeout = rpcserver.WriteTimeout / 2
 	select {
-	case deliverTxResMsg := <-deliverTxResCh: // The tx was included in a block.
+	case deliverTxResMsg, ok := <-deliverTxResCh: // The tx was included in a block.
+		if !ok {
+			return nil, errors.New("Error on broadcastTxCommit: expected DeliverTxResult, got nil. Did the Tendermint stop?")
+		}
 		deliverTxRes := deliverTxResMsg.(types.EventDataTx)
 		return &ctypes.ResultBroadcastTxCommit{
 			CheckTx: *checkTxRes,
|
|||||||
//
|
//
|
||||||
// ```go
|
// ```go
|
||||||
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
|
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
|
||||||
|
// err := client.Start()
|
||||||
|
// if err != nil {
|
||||||
|
// // handle error
|
||||||
|
// }
|
||||||
|
// defer client.Stop()
|
||||||
// result, err := client.UnconfirmedTxs()
|
// result, err := client.UnconfirmedTxs()
|
||||||
// ```
|
// ```
|
||||||
//
|
//
|
||||||
@@ -263,6 +286,11 @@ func UnconfirmedTxs(limit int) (*ctypes.ResultUnconfirmedTxs, error) {
|
|||||||
//
|
//
|
||||||
// ```go
|
// ```go
|
||||||
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
|
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
|
||||||
|
// err := client.Start()
|
||||||
|
// if err != nil {
|
||||||
|
// // handle error
|
||||||
|
// }
|
||||||
|
// defer client.Stop()
|
||||||
// result, err := client.UnconfirmedTxs()
|
// result, err := client.UnconfirmedTxs()
|
||||||
// ```
|
// ```
|
||||||
//
|
//
|
||||||
|
@@ -17,6 +17,11 @@ import (
 //
 // ```go
 // client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
+// err := client.Start()
+// if err != nil {
+//   // handle error
+// }
+// defer client.Stop()
 // info, err := client.NetInfo()
 // ```
 //
@@ -95,6 +100,11 @@ func UnsafeDialPeers(peers []string, persistent bool) (*ctypes.ResultDialPeers,
 //
 // ```go
 // client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
+// err := client.Start()
+// if err != nil {
+//   // handle error
+// }
+// defer client.Stop()
 // genesis, err := client.Genesis()
 // ```
 //
@@ -20,6 +20,11 @@ import (
 //
 // ```go
 // client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
+// err := client.Start()
+// if err != nil {
+//   // handle error
+// }
+// defer client.Stop()
 // result, err := client.Status()
 // ```
 //
@@ -21,6 +21,11 @@ import (
 //
 // ```go
 // client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
+// err := client.Start()
+// if err != nil {
+//   // handle error
+// }
+// defer client.Stop()
 // tx, err := client.Tx([]byte("2B8EC32BA2579B3B8606E42C06DE2F7AFA2556EF"), true)
 // ```
 //
@@ -115,6 +120,11 @@ func Tx(hash []byte, prove bool) (*ctypes.ResultTx, error) {
 //
 // ```go
 // client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
+// err := client.Start()
+// if err != nil {
+//   // handle error
+// }
+// defer client.Stop()
 // q, err := tmquery.New("account.owner='Ivan'")
 // tx, err := client.TxSearch(q, true)
 // ```
@@ -172,10 +172,10 @@ func (txi *TxIndex) Search(q *query.Query) ([]*types.TxResult, error) {

 	for _, r := range ranges {
 		if !hashesInitialized {
-			hashes = txi.matchRange(r, []byte(r.key))
+			hashes = txi.matchRange(r, startKey(r.key))
 			hashesInitialized = true
 		} else {
-			hashes = intersect(hashes, txi.matchRange(r, []byte(r.key)))
+			hashes = intersect(hashes, txi.matchRange(r, startKey(r.key)))
 		}
 	}
 }
@@ -190,10 +190,10 @@ func (txi *TxIndex) Search(q *query.Query) ([]*types.TxResult, error) {
 	}

 		if !hashesInitialized {
-			hashes = txi.match(c, startKey(c, height))
+			hashes = txi.match(c, startKeyForCondition(c, height))
 			hashesInitialized = true
 		} else {
-			hashes = intersect(hashes, txi.match(c, startKey(c, height)))
+			hashes = intersect(hashes, txi.match(c, startKeyForCondition(c, height)))
 		}
 	}

@@ -332,18 +332,18 @@ func isRangeOperation(op query.Operator) bool {
 	}
 }

-func (txi *TxIndex) match(c query.Condition, startKey []byte) (hashes [][]byte) {
+func (txi *TxIndex) match(c query.Condition, startKeyBz []byte) (hashes [][]byte) {
 	if c.Op == query.OpEqual {
-		it := dbm.IteratePrefix(txi.store, startKey)
+		it := dbm.IteratePrefix(txi.store, startKeyBz)
 		defer it.Close()
 		for ; it.Valid(); it.Next() {
 			hashes = append(hashes, it.Value())
 		}
 	} else if c.Op == query.OpContains {
-		// XXX: doing full scan because startKey does not apply here
-		// For example, if startKey = "account.owner=an" and search query = "accoutn.owner CONSISTS an"
-		// we can't iterate with prefix "account.owner=an" because we might miss keys like "account.owner=Ulan"
-		it := txi.store.Iterator(nil, nil)
+		// XXX: startKey does not apply here.
+		// For example, if startKey = "account.owner/an/" and search query = "accoutn.owner CONTAINS an"
+		// we can't iterate with prefix "account.owner/an/" because we might miss keys like "account.owner/Ulan/"
+		it := dbm.IteratePrefix(txi.store, startKey(c.Tag))
 		defer it.Close()
 		for ; it.Valid(); it.Next() {
 			if !isTagKey(it.Key()) {
@@ -359,14 +359,14 @@ func (txi *TxIndex) match(c query.Condition, startKey []byte) (hashes [][]byte)
 	return
 }

-func (txi *TxIndex) matchRange(r queryRange, prefix []byte) (hashes [][]byte) {
+func (txi *TxIndex) matchRange(r queryRange, startKey []byte) (hashes [][]byte) {
 	// create a map to prevent duplicates
 	hashesMap := make(map[string][]byte)

 	lowerBound := r.lowerBoundValue()
 	upperBound := r.upperBoundValue()

-	it := dbm.IteratePrefix(txi.store, prefix)
+	it := dbm.IteratePrefix(txi.store, startKey)
 	defer it.Close()
 LOOP:
 	for ; it.Valid(); it.Next() {
@@ -409,16 +409,6 @@ LOOP:
///////////////////////////////////////////////////////////////////////////////
// Keys

-func startKey(c query.Condition, height int64) []byte {
-	var key string
-	if height > 0 {
-		key = fmt.Sprintf("%s/%v/%d/", c.Tag, c.Operand, height)
-	} else {
-		key = fmt.Sprintf("%s/%v/", c.Tag, c.Operand)
-	}
-	return []byte(key)
-}
-
 func isTagKey(key []byte) bool {
 	return strings.Count(string(key), tagKeySeparator) == 3
 }
@@ -429,11 +419,36 @@ func extractValueFromKey(key []byte) string {
 }

 func keyForTag(tag cmn.KVPair, result *types.TxResult) []byte {
-	return []byte(fmt.Sprintf("%s/%s/%d/%d", tag.Key, tag.Value, result.Height, result.Index))
+	return []byte(fmt.Sprintf("%s/%s/%d/%d",
+		tag.Key,
+		tag.Value,
+		result.Height,
+		result.Index,
+	))
 }

 func keyForHeight(result *types.TxResult) []byte {
-	return []byte(fmt.Sprintf("%s/%d/%d/%d", types.TxHeightKey, result.Height, result.Height, result.Index))
+	return []byte(fmt.Sprintf("%s/%d/%d/%d",
+		types.TxHeightKey,
+		result.Height,
+		result.Height,
+		result.Index,
+	))
+}
+
+func startKeyForCondition(c query.Condition, height int64) []byte {
+	if height > 0 {
+		return startKey(c.Tag, c.Operand, height)
+	}
+	return startKey(c.Tag, c.Operand)
+}
+
+func startKey(fields ...interface{}) []byte {
+	var b bytes.Buffer
+	for _, f := range fields {
+		b.Write([]byte(fmt.Sprintf("%v", f) + tagKeySeparator))
+	}
+	return b.Bytes()
 }

 ///////////////////////////////////////////////////////////////////////////////
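The refactor collapses the old condition-specific key builder into a variadic `startKey` that joins any fields with the tag separator, so the CONTAINS scan (tag only), equality/range matches (tag plus operand), and height-scoped lookups all derive their iteration prefix from one helper. A quick sketch of the resulting prefixes, assuming the separator is `/` as the key format above implies:

```go
package main

import (
	"bytes"
	"fmt"
)

const tagKeySeparator = "/" // assumed value of the package-level separator

// startKey mirrors the variadic builder introduced in the diff above.
func startKey(fields ...interface{}) []byte {
	var b bytes.Buffer
	for _, f := range fields {
		b.Write([]byte(fmt.Sprintf("%v", f) + tagKeySeparator))
	}
	return b.Bytes()
}

func main() {
	fmt.Println(string(startKey("account.owner")))        // "account.owner/"      (CONTAINS scan)
	fmt.Println(string(startKey("account.owner", "Ivan"))) // "account.owner/Ivan/" (equality/range)
	fmt.Println(string(startKey("tx.height", 3, 3)))       // "tx.height/3/3/"      (height-scoped)
}
```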
@@ -89,8 +89,10 @@ func TestTxSearch(t *testing.T) {
|
|||||||
{"account.date >= TIME 2013-05-03T14:45:00Z", 0},
|
{"account.date >= TIME 2013-05-03T14:45:00Z", 0},
|
||||||
// search using CONTAINS
|
// search using CONTAINS
|
||||||
{"account.owner CONTAINS 'an'", 1},
|
{"account.owner CONTAINS 'an'", 1},
|
||||||
// search using CONTAINS
|
// search for non existing value using CONTAINS
|
||||||
{"account.owner CONTAINS 'Vlad'", 0},
|
{"account.owner CONTAINS 'Vlad'", 0},
|
||||||
|
// search using the wrong tag (of numeric type) using CONTAINS
|
||||||
|
{"account.number CONTAINS 'Iv'", 0},
|
||||||
}
|
}
|
||||||
|
|
||||||
for _, tc := range testCases {
|
for _, tc := range testCases {
|
||||||
@@ -126,7 +128,7 @@ func TestTxSearchOneTxWithMultipleSameTagsButDifferentValues(t *testing.T) {
 }
 
 func TestTxSearchMultipleTxs(t *testing.T) {
-	allowedTags := []string{"account.number"}
+	allowedTags := []string{"account.number", "account.number.id"}
 	indexer := NewTxIndex(db.NewMemDB(), IndexTags(allowedTags))
 
 	// indexed first, but bigger height (to test the order of transactions)
@@ -160,6 +162,17 @@ func TestTxSearchMultipleTxs(t *testing.T) {
 	err = indexer.Index(txResult3)
 	require.NoError(t, err)
 
+	// indexed fourth (to test we don't include txs with similar tags)
+	// https://github.com/tendermint/tendermint/issues/2908
+	txResult4 := txResultWithTags([]cmn.KVPair{
+		{Key: []byte("account.number.id"), Value: []byte("1")},
+	})
+	txResult4.Tx = types.Tx("Mike's account")
+	txResult4.Height = 2
+	txResult4.Index = 2
+	err = indexer.Index(txResult4)
+	require.NoError(t, err)
+
 	results, err := indexer.Search(query.MustParse("account.number >= 1"))
 	assert.NoError(t, err)
 
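The `account.number.id` transaction guards against the regression tracked in [#2908](https://github.com/tendermint/tendermint/issues/2908): the indexer locates matches by scanning stored keys that share a prefix, so a prefix that is not terminated by the separator would also pick up tags that merely start with the same characters. Below is a hypothetical, standalone illustration of that false positive (the key layout borrowed from `keyForTag` above):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Stored keys in the form tag/value/height/index.
	keys := []string{
		"account.number/1/1/1",
		"account.number.id/1/2/2", // different tag that shares the prefix
	}

	bare := "account.number"             // un-terminated prefix: matches both keys
	terminated := "account.number" + "/" // separator-terminated: matches only the intended tag

	for _, k := range keys {
		fmt.Printf("%-26s bare=%-5v terminated=%v\n",
			k, strings.HasPrefix(k, bare), strings.HasPrefix(k, terminated))
	}
}
```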
@@ -19,7 +19,7 @@ import (
 // x + (x >> 3) = x + x/8 = x * (1 + 0.125).
 // MaxTotalVotingPower is the largest int64 `x` with the property that `x + (x >> 3)` is
 // still in the bounds of int64.
-const MaxTotalVotingPower = 8198552921648689607
+const MaxTotalVotingPower = int64(8198552921648689607)
 
 // ValidatorSet represent a set of *Validator at a given height.
 // The validators can be fetched by address or index.
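As a quick sanity check (not from the repository), the constant sits exactly at the boundary the comment describes: adding one eighth of it lands on `math.MaxInt64` without overflow.

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	var x int64 = 8198552921648689607 // MaxTotalVotingPower
	sum := x + (x >> 3)               // x * 1.125, rounded down
	fmt.Println(sum == math.MaxInt64) // true
	fmt.Println(sum)                  // 9223372036854775807
}
```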
@@ -80,7 +80,7 @@ func (vals *ValidatorSet) IncrementProposerPriority(times int) {
 
 	const shiftEveryNthIter = 10
 	var proposer *Validator
-	// call IncrementAccum(1) times times:
+	// call IncrementProposerPriority(1) times times:
 	for i := 0; i < times; i++ {
 		shiftByAvgProposerPriority := i%shiftEveryNthIter == 0
 		proposer = vals.incrementProposerPriority(shiftByAvgProposerPriority)
@@ -272,13 +272,22 @@ func (vals *ValidatorSet) Add(val *Validator) (added bool) {
 	}
 }
 
-// Update updates val and returns true. It returns false if val is not present
-// in the set.
+// Update updates the ValidatorSet by copying in the val.
+// If the val is not found, it returns false; otherwise,
+// it returns true. The val.ProposerPriority field is ignored
+// and unchanged by this method.
 func (vals *ValidatorSet) Update(val *Validator) (updated bool) {
 	index, sameVal := vals.GetByAddress(val.Address)
 	if sameVal == nil {
 		return false
 	}
+	// Overwrite the ProposerPriority so it doesn't change.
+	// During block execution, the val passed in here comes
+	// from ABCI via PB2TM.ValidatorUpdates. Since ABCI
+	// doesn't know about ProposerPriority, PB2TM.ValidatorUpdates
+	// uses the default value of 0, which would cause issues for
+	// proposer selection every time a validator's voting power changes.
+	val.ProposerPriority = sameVal.ProposerPriority
 	vals.Validators[index] = val.Copy()
 	// Invalidate cache
 	vals.Proposer = nil
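A usage sketch of the documented behavior (assuming the public `types` and `crypto/ed25519` packages of this release, not code taken from the diff): an update whose `ProposerPriority` is zero, as delivered by `PB2TM.ValidatorUpdates`, changes the voting power but leaves the accumulated priority untouched.

```go
package main

import (
	"fmt"

	"github.com/tendermint/tendermint/crypto/ed25519"
	"github.com/tendermint/tendermint/types"
)

func main() {
	priv := ed25519.GenPrivKey()
	v1 := types.NewValidator(priv.PubKey(), 10)
	v2 := types.NewValidator(ed25519.GenPrivKey().PubKey(), 1)
	vset := types.NewValidatorSet([]*types.Validator{v1, v2})

	// Let some proposer priority accumulate.
	vset.IncrementProposerPriority(3)
	_, before := vset.GetByAddress(v1.Address)

	// An ABCI-style update: new voting power, ProposerPriority left at 0.
	vset.Update(types.NewValidator(priv.PubKey(), 20))

	_, after := vset.GetByAddress(v1.Address)
	fmt.Println(after.VotingPower)                                 // 20
	fmt.Println(after.ProposerPriority == before.ProposerPriority) // true
}
```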
@@ -45,7 +45,8 @@ func TestValidatorSetBasic(t *testing.T) {
 	assert.Nil(t, vset.Hash())
 
 	// add
-	val = randValidator_()
+	val = randValidator_(vset.TotalVotingPower())
 	assert.True(t, vset.Add(val))
 	assert.True(t, vset.HasAddress(val.Address))
 	idx, val2 := vset.GetByAddress(val.Address)
@@ -61,12 +62,19 @@ func TestValidatorSetBasic(t *testing.T) {
 	assert.NotPanics(t, func() { vset.IncrementProposerPriority(1) })
 
 	// update
-	assert.False(t, vset.Update(randValidator_()))
-	val.VotingPower = 100
+	assert.False(t, vset.Update(randValidator_(vset.TotalVotingPower())))
+	_, val = vset.GetByAddress(val.Address)
+	val.VotingPower += 100
+	proposerPriority := val.ProposerPriority
+	// Mimic update from types.PB2TM.ValidatorUpdates which does not know about ProposerPriority
+	// and hence defaults to 0.
+	val.ProposerPriority = 0
 	assert.True(t, vset.Update(val))
+	_, val = vset.GetByAddress(val.Address)
+	assert.Equal(t, proposerPriority, val.ProposerPriority)
 
 	// remove
-	val2, removed := vset.Remove(randValidator_().Address)
+	val2, removed := vset.Remove(randValidator_(vset.TotalVotingPower()).Address)
 	assert.Nil(t, val2)
 	assert.False(t, removed)
 	val2, removed = vset.Remove(val.Address)
@@ -273,16 +281,20 @@ func randPubKey() crypto.PubKey {
 	return ed25519.PubKeyEd25519(pubKey)
 }
 
-func randValidator_() *Validator {
-	val := NewValidator(randPubKey(), cmn.RandInt64())
-	val.ProposerPriority = cmn.RandInt64() % MaxTotalVotingPower
+func randValidator_(totalVotingPower int64) *Validator {
+	// this modulo limits the ProposerPriority/VotingPower to stay in the
+	// bounds of MaxTotalVotingPower minus the already existing voting power:
+	val := NewValidator(randPubKey(), cmn.RandInt64()%(MaxTotalVotingPower-totalVotingPower))
+	val.ProposerPriority = cmn.RandInt64() % (MaxTotalVotingPower - totalVotingPower)
 	return val
 }
 
 func randValidatorSet(numValidators int) *ValidatorSet {
 	validators := make([]*Validator, numValidators)
+	totalVotingPower := int64(0)
 	for i := 0; i < numValidators; i++ {
-		validators[i] = randValidator_()
+		validators[i] = randValidator_(totalVotingPower)
+		totalVotingPower += validators[i].VotingPower
 	}
 	return NewValidatorSet(validators)
 }
@@ -335,7 +347,174 @@ func TestAvgProposerPriority(t *testing.T) {
 		got := tc.vs.computeAvgProposerPriority()
 		assert.Equal(t, tc.want, got, "test case: %v", i)
 	}
+}
+
+func TestAveragingInIncrementProposerPriority(t *testing.T) {
+	// Test that the averaging works as expected inside of IncrementProposerPriority.
+	// Each validator comes with zero voting power which simplifies reasoning about
+	// the expected ProposerPriority.
+	tcs := []struct {
+		vs    ValidatorSet
+		times int
+		avg   int64
+	}{
+		0: {ValidatorSet{
+			Validators: []*Validator{
+				{Address: []byte("a"), ProposerPriority: 1},
+				{Address: []byte("b"), ProposerPriority: 2},
+				{Address: []byte("c"), ProposerPriority: 3}}},
+			1, 2},
+		1: {ValidatorSet{
+			Validators: []*Validator{
+				{Address: []byte("a"), ProposerPriority: 10},
+				{Address: []byte("b"), ProposerPriority: -10},
+				{Address: []byte("c"), ProposerPriority: 1}}},
+			// this should average twice but the average should be 0 after the first iteration
+			// (voting power is 0 -> no changes)
+			11, 1 / 3},
+		2: {ValidatorSet{
+			Validators: []*Validator{
+				{Address: []byte("a"), ProposerPriority: 100},
+				{Address: []byte("b"), ProposerPriority: -10},
+				{Address: []byte("c"), ProposerPriority: 1}}},
+			1, 91 / 3},
+	}
+	for i, tc := range tcs {
+		// work on copy to have the old ProposerPriorities:
+		newVset := tc.vs.CopyIncrementProposerPriority(tc.times)
+		for _, val := range tc.vs.Validators {
+			_, updatedVal := newVset.GetByAddress(val.Address)
+			assert.Equal(t, updatedVal.ProposerPriority, val.ProposerPriority-tc.avg, "test case: %v", i)
+		}
+	}
+}
+
+func TestAveragingInIncrementProposerPriorityWithVotingPower(t *testing.T) {
+	// Other than TestAveragingInIncrementProposerPriority this is a more complete test showing
+	// how each ProposerPriority changes in relation to the validator's voting power respectively.
+	vals := ValidatorSet{Validators: []*Validator{
+		{Address: []byte{0}, ProposerPriority: 0, VotingPower: 10},
+		{Address: []byte{1}, ProposerPriority: 0, VotingPower: 1},
+		{Address: []byte{2}, ProposerPriority: 0, VotingPower: 1}}}
+	tcs := []struct {
+		vals                  *ValidatorSet
+		wantProposerPrioritys []int64
+		times                 int
+		wantProposer          *Validator
+	}{
+
+		0: {
+			vals.Copy(),
+			[]int64{
+				// Acumm+VotingPower-Avg:
+				0 + 10 - 12 - 4, // mostest will be subtracted by total voting power (12)
+				0 + 1 - 4,
+				0 + 1 - 4},
+			1,
+			vals.Validators[0]},
+		1: {
+			vals.Copy(),
+			[]int64{
+				(0 + 10 - 12 - 4) + 10 - 12 + 4, // this will be mostest on 2nd iter, too
+				(0 + 1 - 4) + 1 + 4,
+				(0 + 1 - 4) + 1 + 4},
+			2,
+			vals.Validators[0]}, // increment twice -> expect average to be subtracted twice
+		2: {
+			vals.Copy(),
+			[]int64{
+				((0 + 10 - 12 - 4) + 10 - 12) + 10 - 12 + 4, // still mostest
+				((0 + 1 - 4) + 1) + 1 + 4,
+				((0 + 1 - 4) + 1) + 1 + 4},
+			3,
+			vals.Validators[0]},
+		3: {
+			vals.Copy(),
+			[]int64{
+				0 + 4*(10-12) + 4 - 4, // still mostest
+				0 + 4*1 + 4 - 4,
+				0 + 4*1 + 4 - 4},
+			4,
+			vals.Validators[0]},
+		4: {
+			vals.Copy(),
+			[]int64{
+				0 + 4*(10-12) + 10 + 4 - 4, // 4 iters was mostest
+				0 + 5*1 - 12 + 4 - 4,       // now this val is mostest for the 1st time (hence -12==totalVotingPower)
+				0 + 5*1 + 4 - 4},
+			5,
+			vals.Validators[1]},
+		5: {
+			vals.Copy(),
+			[]int64{
+				0 + 6*10 - 5*12 + 4 - 4, // mostest again
+				0 + 6*1 - 12 + 4 - 4,    // mostest once up to here
+				0 + 6*1 + 4 - 4},
+			6,
+			vals.Validators[0]},
+		6: {
+			vals.Copy(),
+			[]int64{
+				0 + 7*10 - 6*12 + 4 - 4, // in 7 iters this val is mostest 6 times
+				0 + 7*1 - 12 + 4 - 4,    // in 7 iters this val is mostest 1 time
+				0 + 7*1 + 4 - 4},
+			7,
+			vals.Validators[0]},
+		7: {
+			vals.Copy(),
+			[]int64{
+				0 + 8*10 - 7*12 + 4 - 4, // mostest
+				0 + 8*1 - 12 + 4 - 4,
+				0 + 8*1 + 4 - 4},
+			8,
+			vals.Validators[0]},
+		8: {
+			vals.Copy(),
+			[]int64{
+				0 + 9*10 - 7*12 + 4 - 4,
+				0 + 9*1 - 12 + 4 - 4,
+				0 + 9*1 - 12 + 4 - 4}, // mostest
+			9,
+			vals.Validators[2]},
+		9: {
+			vals.Copy(),
+			[]int64{
+				0 + 10*10 - 8*12 + 4 - 4, // after 10 iters this is mostest again
+				0 + 10*1 - 12 + 4 - 4,    // after 6 iters this val is "mostest" once and not in between
+				0 + 10*1 - 12 + 4 - 4},   // in between 10 iters this val is "mostest" once
+			10,
+			vals.Validators[0]},
+		10: {
+			vals.Copy(),
+			[]int64{
+				// shift twice inside incrementProposerPriority (shift every 10th iter);
+				// don't shift at the end of IncremenctProposerPriority
+				// last avg should be zero because
+				// ProposerPriority of validator 0: (0 + 11*10 - 8*12 - 4) == 10
+				// ProposerPriority of validator 1 and 2: (0 + 11*1 - 12 - 4) == -5
+				// and (10 + 5 - 5) / 3 == 0
+				0 + 11*10 - 8*12 - 4 - 12 - 0,
+				0 + 11*1 - 12 - 4 - 0,  // after 6 iters this val is "mostest" once and not in between
+				0 + 11*1 - 12 - 4 - 0}, // after 10 iters this val is "mostest" once
+			11,
+			vals.Validators[0]},
+	}
+	for i, tc := range tcs {
+		tc.vals.IncrementProposerPriority(tc.times)
+
+		assert.Equal(t, tc.wantProposer.Address, tc.vals.GetProposer().Address,
+			"test case: %v",
+			i)
+
+		for valIdx, val := range tc.vals.Validators {
+			assert.Equal(t,
+				tc.wantProposerPrioritys[valIdx],
+				val.ProposerPriority,
+				"test case: %v, validator: %v",
+				i,
+				valIdx)
+		}
+	}
 }
 
 func TestSafeAdd(t *testing.T) {
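To make the first table entry above concrete: reading off the expected values, one increment appears to add each validator's voting power to its priority (10, 1, 1), subtract the average of 4 from everyone, and charge the chosen proposer ("mostest", validator 0) the total voting power of 12. The priorities therefore end up at (0+10-4-12, 0+1-4, 0+1-4) = (-6, -3, -3), which is exactly what case `0:` asserts.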
@@ -18,7 +18,7 @@ const (
 	// TMCoreSemVer is the current version of Tendermint Core.
 	// It's the Semantic Version of the software.
 	// Must be a string because scripts like dist.sh read this file.
-	TMCoreSemVer = "0.26.4"
+	TMCoreSemVer = "0.27.3"
 
 	// ABCISemVer is the semantic version of the ABCI library
 	ABCISemVer = "0.15.0"
@@ -36,10 +36,10 @@ func (p Protocol) Uint64() uint64 {
 
 var (
 	// P2PProtocol versions all p2p behaviour and msgs.
-	P2PProtocol Protocol = 4
+	P2PProtocol Protocol = 5
 
 	// BlockProtocol versions all block data structures and processing.
-	BlockProtocol Protocol = 7
+	BlockProtocol Protocol = 8
 )
 
 //------------------------------------------------------------------------