Compare commits


42 Commits

Author SHA1 Message Date
Ethan Buchman
1e1ca15bcc Merge pull request #3062 from tendermint/release/v0.27.4
Release/v0.27.4
2018-12-21 16:35:45 -05:00
Ethan Buchman
c6604b5a9b changelog and version 2018-12-21 16:31:28 -05:00
Anton Kaliaev
c510f823e7 mempool: move tx to back, not front (#3036)
because we pop txs from the front if the cache is full

Refs #3035
2018-12-21 16:28:21 -05:00
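For context, the cache here is an LRU list where the front holds the oldest entries and eviction pops from the front, so a re-seen tx has to be moved to the back rather than the front. A minimal Go sketch of that behavior (illustrative only, not the actual mempool cache code):

```go
package main

import (
	"container/list"
	"fmt"
)

// txCache is a minimal LRU-style cache sketch: the front of the list holds
// the least recently used entries, which are popped when the cache is full.
type txCache struct {
	size int
	m    map[string]*list.Element
	l    *list.List // front = oldest, back = newest
}

func newTxCache(size int) *txCache {
	return &txCache{size: size, m: make(map[string]*list.Element), l: list.New()}
}

// Push returns false if tx was already cached; in that case the entry is
// moved to the back (the fix from #3036), so it is not evicted prematurely.
func (c *txCache) Push(tx string) bool {
	if e, ok := c.m[tx]; ok {
		c.l.MoveToBack(e) // previously moved to the front, which broke LRU eviction
		return false
	}
	if c.l.Len() >= c.size {
		oldest := c.l.Front() // pop from the front when full
		c.l.Remove(oldest)
		delete(c.m, oldest.Value.(string))
	}
	c.m[tx] = c.l.PushBack(tx)
	return true
}

func main() {
	c := newTxCache(2)
	c.Push("a")
	c.Push("b")
	c.Push("a") // re-seen: moved to the back
	c.Push("c") // evicts "b", not "a"
	fmt.Println(c.m["a"] != nil, c.m["b"] != nil) // true false
}
```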
Ethan Buchman
0138530df2 Merge pull request #3032 from tendermint/release/v0.27.3
Release/v0.27.3
2018-12-16 14:30:23 -05:00
Ethan Buchman
0533c73a50 crypto: revert to mainline Go crypto lib (#3027)
* crypto: revert to mainline Go crypto lib

We used to use a fork for a modified bcrypt so we could pass our own
randomness, but this was largely unnecessary, unused, and a burden.
So now we just use the mainline Go crypto lib.

* changelog

* fix tests

* version and changelog
2018-12-16 14:19:38 -05:00
Ethan Buchman
1beb45511c Merge pull request #3031 from tendermint/master
Merge pull request #3030 from tendermint/release/v0.27.2
2018-12-16 14:13:57 -05:00
Ethan Buchman
4a568fcedb Merge pull request #3030 from tendermint/release/v0.27.2
Release/v0.27.2
2018-12-16 14:13:30 -05:00
Ethan Buchman
b3141d7d02 makeNodeInfo returns error (#3029)
* makeNodeInfo returns error

* version and changelog
2018-12-16 14:05:58 -05:00
Jae Kwon
9a6dd96cba Revert to using defers in addrbook. (#3025)
* Revert to using defers in addrbook.  ValidateBasic->Validate since it requires DNS

* Update CHANGELOG_PENDING
2018-12-16 12:27:16 -05:00
Ethan Buchman
9fa959619a Merge pull request #3026 from tendermint/master
Merge pull request #3023 from tendermint/release/v0.27.1
2018-12-16 12:25:19 -05:00
Ethan Buchman
1f09818770 Merge pull request #3023 from tendermint/release/v0.27.1
Release/v0.27.1
2018-12-16 12:24:54 -05:00
Ethan Buchman
e4806f980b Bucky/v0.27.1 (#3022)
* update changelog

* linkify

* changelog and version
2018-12-15 15:58:09 -05:00
Anton Kaliaev
b53a2712df docs: networks/docker-compose: small fixes (#3017) 2018-12-15 15:38:13 -05:00
Hleb Albau
a75dab492c #2980 fix cors doc (#3013) 2018-12-15 15:32:35 -05:00
Ethan Buchman
7c9e767e1f 2980 fix cors (#3021)
* config: cors options are arrays of strings, not strings

Fixes #2980

* docs: update tendermint-core/configuration.html page

* set allow_duplicate_ip to false

* in `tendermint testnet`, set allow_duplicate_ip to true

Refs #2712

* fixes after Ismail's review

* Revert "set allow_duplicate_ip to false"

This reverts commit 24c1094ebcf2bd35f2642a44d7a1e5fb5c178fb1.
2018-12-15 15:26:27 -05:00
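The gist of the fix: the CORS options in the RPC config are slices of strings, and the config.toml template (see the toml.go diff below) renders them as TOML arrays rather than quoted single strings. A rough Go sketch under those assumptions, with a hypothetical renderTOMLArray helper standing in for the real template:

```go
package main

import (
	"fmt"
	"strings"
)

// rpcCORSConfig mirrors the shape of the CORS options after the fix:
// arrays of strings, not single strings.
type rpcCORSConfig struct {
	CORSAllowedOrigins []string
	CORSAllowedMethods []string
	CORSAllowedHeaders []string
}

// renderTOMLArray renders a string slice the way the template change does,
// e.g. ["*", ] — a sketch, not the real template code.
func renderTOMLArray(ss []string) string {
	var b strings.Builder
	b.WriteString("[")
	for _, s := range ss {
		fmt.Fprintf(&b, "%q, ", s)
	}
	b.WriteString("]")
	return b.String()
}

func main() {
	cfg := rpcCORSConfig{
		CORSAllowedOrigins: []string{"*"},
		CORSAllowedMethods: []string{"HEAD", "GET", "POST"},
	}
	fmt.Println("cors_allowed_origins =", renderTOMLArray(cfg.CORSAllowedOrigins))
	fmt.Println("cors_allowed_methods =", renderTOMLArray(cfg.CORSAllowedMethods))
}
```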
JamesRay
f82a8ff73a During replay, when appHeight==0, only saveState if stateHeight is also 0 (#3006)
* optimize addProposalBlockPart

* optimize addProposalBlockPart

* if ProposalBlockParts and LockedBlockParts both exist, let LockedBlockParts overwrite ProposalBlockParts.

* fix tryAddBlock

* broadcast lockedBlockParts with higher priority

* when appHeight==0, it's better to fetch genDoc than state.validators.

* not save state if replay from height 1

* only save state if replay from height 1 when stateHeight is also 1

* only save state if replay from height 1 when stateHeight is also 1

* only save state if replay from height 0 when stateHeight is also 0

* handshake info's response version only updates when stateHeight==0

* save the handshake responseInfo appVersion
2018-12-15 14:33:30 -05:00
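The substance of the change shows up in the consensus/replay.go diff further down: the InitChain response only overwrites and saves state when the stored state height is still 0. A simplified sketch of that guard, with reduced, hypothetical types in place of the real Handshaker:

```go
// Sketch of the replay guard from #3006 (simplified types, not the real
// Handshaker): only overwrite and save state from the InitChain response
// when the stored state is still at height 0.
package main

import (
	"errors"
	"fmt"
)

type state struct {
	Height     int64
	Validators []string
}

type initChainResponse struct {
	Validators []string
}

func applyInitChain(st *state, res initChainResponse, genesisValidators []string) error {
	if st.Height != 0 {
		// Replaying on top of existing state: don't clobber it.
		return nil
	}
	if len(res.Validators) > 0 {
		st.Validators = res.Validators
	} else if len(genesisValidators) == 0 {
		// Neither genesis nor InitChain provided validators.
		return errors.New("validator set is nil in genesis and still empty after InitChain")
	}
	// saveState(st) would happen here, only on the height==0 path.
	return nil
}

func main() {
	st := &state{Height: 0}
	fmt.Println(applyInitChain(st, initChainResponse{Validators: []string{"val1"}}, nil), st.Validators)
}
```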
Anton Kaliaev
ae275d791e mempool: notifyTxsAvailable if there're txs left, but recheck=false (#2991)
(left after committing a block)

Fixes #2961

--------------
ORIGINAL ISSUE

Tendermint version : 0.26.4-b771798d

ABCI app : kv-store

Environment:

OS (e.g. from /etc/os-release): macOS 10.14.1

What happened: Set mempool.recheck = false and create empty block = false in
config.toml. When transactions get added right after a new empty block is
proposed but before it is committed, the proposer won't propose a new block
for those transactions immediately. Those transactions are stuck in the
mempool until a new transaction is added and triggers the proposer.

What you expected to happen: If there is a transaction left in the
mempool, a new block should be proposed immediately.

Have you tried the latest version: yes

How to reproduce it (as minimally and precisely as possible): Fire two
transactions using broadcast_tx_sync with a specific delay between them.
(You may need to try multiple times before the right delay is found; on
my machine the delay is 0.98s.)

Logs (paste a small part showing an error (< 10 lines) or link a
pastebin, gist, etc. containing more of the log file):
https://pastebin.com/0Wt6uhPF

Config (you can paste only the changes you've made): [mempool] recheck =
false create_empty_block = false

Anything else we need to know: In mempool.go, we found that the proposer
will immediately propose a new block if:

the last committed block has some transaction (causing the AppHash to change),
or mem.notifyTxsAvailable() is called. Our scenario is as follows.

A transaction is fired; it creates 1 block with 1 tx (lines 1-4 in
the log) and 1 empty block. After the empty block is proposed but before
it is committed, a second transaction is fired and added to the mempool
(lines 8-16). Now, since the last committed block is empty and
mem.notifyTxsAvailable() is called only if mempool.recheck = true, the
proposer won't immediately propose a new block, causing the second
transaction to be stuck in the mempool until another transaction is added
and triggers mem.notifyTxsAvailable().
2018-12-15 14:27:02 -05:00
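In rough terms, the fix makes the mempool's post-commit update call notifyTxsAvailable when recheck is disabled but transactions remain. A simplified Go sketch of that shape (hypothetical types, not the actual mempool implementation):

```go
package main

import "fmt"

type mempool struct {
	txs           []string
	recheck       bool
	notifiedAvail bool
}

func (m *mempool) notifyTxsAvailable() {
	m.notifiedAvail = true
	fmt.Println("txs available: proposer can build a new block")
}

func (m *mempool) recheckTxs() {
	// re-runs CheckTx on the remaining txs; ends by notifying availability
	m.notifyTxsAvailable()
}

// Update is called after a block is committed with the set of txs it included.
func (m *mempool) Update(committed map[string]bool) {
	remaining := m.txs[:0]
	for _, tx := range m.txs {
		if !committed[tx] {
			remaining = append(remaining, tx)
		}
	}
	m.txs = remaining

	if m.recheck {
		m.recheckTxs()
	} else if len(m.txs) > 0 {
		// The fix: previously nothing happened here when recheck was false,
		// so leftover txs waited for the next incoming tx to trigger a block.
		m.notifyTxsAvailable()
	}
}

func main() {
	m := &mempool{txs: []string{"tx2"}, recheck: false}
	m.Update(map[string]bool{}) // empty block committed, tx2 left over
}
```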
Zach
f5cca9f121 docs: update DOCS_README (#3019)
Co-Authored-By: zramsay <zach.ramsay@gmail.com>
2018-12-15 23:01:28 +04:00
Zach
3fbe9f235a docs: add edit on Github links (#3014) 2018-12-14 09:32:09 +04:00
mircea-c
f7e463f6d3 circleci: add a job to automatically update docs (#3005) 2018-12-12 13:48:53 +04:00
Dev Ojha
bc2a9b20c0 mempool: add a comment and missing changelog entry (#2996)
Refs #2994
2018-12-12 13:31:35 +04:00
Zach
9e075d8dd5 docs: enable full-text search (#3004) 2018-12-12 13:20:02 +04:00
Zach
8003786c9a docs: fixes from 'first time' review (#2999) 2018-12-11 22:21:54 +04:00
Daniil Lashin
2594cec116 add UnconfirmedTxs/NumUnconfirmedTxs methods to HTTP/Local clients (#2964) 2018-12-11 12:41:02 +04:00
Dev Ojha
df32ea4be5 Make testing logger that doesn't write to stdout (#2997) 2018-12-11 12:17:21 +04:00
Anton Kaliaev
f69e2c6d6c p2p: set MConnection#created during init (#2990)
Fixes #2715

In crawlPeersRoutine, which runs when seedMode is enabled, there is
logic that disconnects a peer at 3-hour intervals based on a duration
value. The duration value is calculated from the created field of
MConnection. When MConnection is created for the first time, the created
value is never initialized, so instead of being disconnected every 3
hours, the peer is disconnected immediately every time. As a result,
normal nodes connect to the seed node and are disconnected right away,
so address exchange does not work properly.

https://github.com/tendermint/tendermint/blob/master/p2p/pex/pex_reactor.go#L629
does not work correctly. I think the created variable at
https://github.com/tendermint/tendermint/blob/master/p2p/conn/connection.go#L148
is missing the current time setting.
2018-12-10 15:24:58 -05:00
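A minimal sketch of the described fix (not the real MConnection code): stamp the connection's created time at construction so the seed's 3-hour age check measures a real duration instead of comparing against the zero time.

```go
package main

import (
	"fmt"
	"time"
)

// conn is a stand-in for MConnection with only the field relevant here.
type conn struct {
	created time.Time // must be set during init, otherwise it stays the zero time
}

func newConn() *conn {
	return &conn{created: time.Now()} // the fix: initialize created
}

// shouldDisconnect mirrors the seed-mode age check: with created left as the
// zero value, time.Since(created) is enormous and every peer looks "too old"
// immediately, so seeds drop peers right after connecting.
func shouldDisconnect(c *conn, maxAge time.Duration) bool {
	return time.Since(c.created) > maxAge
}

func main() {
	uninitialized := &conn{}
	fresh := newConn()
	const maxAge = 3 * time.Hour
	fmt.Println(shouldDisconnect(uninitialized, maxAge)) // true: dropped immediately
	fmt.Println(shouldDisconnect(fresh, maxAge))         // false: kept for up to 3h
}
```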
Dev Ojha
d5d0d2bd77 Make mempool fail txs with negative gas wanted (#2994)
This is only one part of #2989. We also need to fix the application,
and add rules to consensus to ensure this.
2018-12-10 12:56:49 -05:00
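A sketch of what rejecting negative gas looks like on the mempool side (hypothetical helper and types; as the message notes, the application and consensus rules also need to enforce this):

```go
package main

import (
	"errors"
	"fmt"
)

// checkTxResponse is a stand-in for the ABCI CheckTx response fields we need.
type checkTxResponse struct {
	Code      uint32
	GasWanted int64
}

// validateCheckTx rejects txs whose CheckTx response reports negative gas,
// since a negative GasWanted would corrupt mempool gas accounting.
func validateCheckTx(res checkTxResponse) error {
	if res.GasWanted < 0 {
		return errors.New("gas wanted cannot be negative")
	}
	if res.Code != 0 {
		return fmt.Errorf("tx rejected with code %d", res.Code)
	}
	return nil
}

func main() {
	fmt.Println(validateCheckTx(checkTxResponse{GasWanted: -1}))  // rejected
	fmt.Println(validateCheckTx(checkTxResponse{GasWanted: 100})) // <nil>
}
```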
Anton Kaliaev
41eaf0e31d turn off strict routability every time (#2983)
Previously, we were turning it off only when the --populate-persistent-peers
flag was used, which is obviously incorrect.

Fixes https://github.com/cosmos/cosmos-sdk/issues/2983
2018-12-09 13:29:51 -05:00
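The corresponding change appears later in the cmd/tendermint/commands/testnet.go diff: the config overwrite loop sets AddrBookStrict to false for every node regardless of the --populate-persistent-peers flag. A reduced sketch of that loop, with hypothetical names for the surrounding pieces:

```go
// Sketch of the testnet config overwrite loop after #2983: AddrBookStrict is
// set to false for every node, not only when persistent peers are populated.
package main

import "fmt"

type p2pConfig struct {
	AddrBookStrict  bool
	PersistentPeers string
}

func writeTestnetConfigs(n int, persistentPeers string, populate bool) []p2pConfig {
	cfgs := make([]p2pConfig, n)
	for i := range cfgs {
		cfgs[i].AddrBookStrict = false // always, since testnets use non-routable IPs
		if populate {
			cfgs[i].PersistentPeers = persistentPeers
		}
		// cfg.WriteConfigFile(...) would happen here for node i
	}
	return cfgs
}

func main() {
	cfgs := writeTestnetConfigs(4, "id0@10.0.0.1:26656,id1@10.0.0.2:26656", false)
	fmt.Println(cfgs[0].AddrBookStrict) // false even without --populate-persistent-peers
}
```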
Zach
68b467886a docs: relative links in docs/spec/readme.md, js-amino lib (#2977)
Co-Authored-By: zramsay <zach.ramsay@gmail.com>
2018-12-07 19:41:19 +04:00
Leo Wang
2f64717bb5 return an error if validator set is empty in genesis file and after InitChain (#2971)
Fixes #2951
2018-12-07 12:30:58 +04:00
Anton Kaliaev
c4a1cfc5c2 don't ignore key when executing CONTAINS (#2924)
Fixes #2912
2018-12-07 12:28:02 +04:00
Ethan Buchman
0f96bea41d Merge pull request #2976 from tendermint/master
Merge pull request #2975 from tendermint/release/v0.27.0
2018-12-05 16:51:04 -05:00
Ethan Buchman
9c236ffd6c Merge pull request #2975 from tendermint/release/v0.27.0
Release/v0.27.0
2018-12-05 16:50:36 -05:00
Ethan Buchman
9f8761d105 update changelog and upgrading (#2974) 2018-12-05 15:19:30 -05:00
Anton Kaliaev
5413c11150 kv indexer: add separator to start key when matching ranges (#2925)
* kv indexer: add separator to start key when matching ranges

to avoid including false positives

Refs #2908

* refactor code

* add a test case
2018-12-05 14:21:46 -05:00
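The reason a separator is needed: range queries build a key prefix from the tag and value, and without a trailing separator a search for value 10 would also prefix-match keys stored for 100. A small sketch under an assumed `tag/value/...` key layout (not the indexer's exact encoding):

```go
package main

import (
	"fmt"
	"strings"
)

const tagKeySeparator = "/"

// startKey builds the prefix used when range-matching a tag value. Appending
// the separator prevents false positives: "account.number/10/" does not
// prefix-match keys stored for value 100.
func startKey(tag, value string) string {
	return strings.Join([]string{tag, value, ""}, tagKeySeparator)
}

func main() {
	keyFor10 := "account.number/10/3C9A/5"   // stored key for value 10
	keyFor100 := "account.number/100/77BF/8" // stored key for value 100

	prefix := startKey("account.number", "10")
	fmt.Println(strings.HasPrefix(keyFor10, prefix))  // true
	fmt.Println(strings.HasPrefix(keyFor100, prefix)) // false: no false positive
}
```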
Ethan Buchman
a14fd8eba0 p2p: fix peer count mismatch #2332 (#2969)
* p2p: test case for peer count mismatch #2332

* p2p: fix peer count mismatch #2332

* changelog

* use httptest.Server to scrape Prometheus metrics
2018-12-05 07:32:27 -05:00
Ethan Buchman
1bb7e31d63 p2p: panic on transport error (#2968)
* p2p: panic on transport error

Addresses #2823. Currently, the acceptRoutine exits if the transport returns
an error trying to accept a new connection. Once this happens, the node
can't accept any new connections. So here, we panic instead. While we
could potentially be more intelligent by rerunning the acceptRoutine, the
error may indicate something more fundamental (eg. file descriptor limit)
that requires a restart anyway. We can leave it to process managers to
handle that restart, and notify operators about the panic.

* changelog
2018-12-04 19:16:06 -05:00
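A sketch of the accept-loop shape described above, using a hypothetical transport interface: instead of returning when Accept fails, the routine panics so a process manager can restart the node.

```go
package main

import (
	"errors"
	"fmt"
)

// transport is a stand-in for the p2p transport's accepting side.
type transport interface {
	Accept() (peer string, err error)
}

// acceptRoutine sketches the change from #2968: on a transport error we panic
// rather than silently exiting the loop and never accepting peers again.
func acceptRoutine(t transport) {
	for {
		p, err := t.Accept()
		if err != nil {
			panic(fmt.Sprintf("accept routine exited with error: %v", err))
		}
		fmt.Println("accepted peer:", p)
	}
}

// failingTransport always errors, e.g. after hitting the file descriptor limit.
type failingTransport struct{}

func (failingTransport) Accept() (string, error) {
	return "", errors.New("too many open files")
}

func main() {
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("panicked as expected:", r)
		}
	}()
	acceptRoutine(failingTransport{})
}
```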
Ethan Buchman
222b8978c8 Minor log changes (#2959)
* node: allow state and code to have diff block versions

* node: pex is a log module
2018-12-04 08:30:29 -05:00
Anton Kaliaev
d9a1aad5c5 docs: add client#Start/Stop to examples in RPC docs (#2939)
follow-up on https://github.com/tendermint/tendermint/pull/2936
2018-12-03 16:17:06 +04:00
Anton Kaliaev
8ef0c2681d check if deliverTxResCh is still open, return an err otherwise (#2947)
deliverTxResCh, like any other eventBus (pubsub) channel, is closed when
eventBus is stopped. We must check if the channel is still open. The
alternative approach is to not close any channels, which seems a bit
odd.

Fixes #2408
2018-12-03 16:15:36 +04:00
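The fix relies on the standard Go idiom for detecting a closed channel: a two-value receive reports ok == false once the pubsub channel is closed, and the handler returns an error instead of treating the zero value as a real event. A minimal sketch (hypothetical types, not the actual RPC code):

```go
package main

import (
	"errors"
	"fmt"
)

type eventDataTx struct{ Height int64 }

// waitForDeliverTx sketches the broadcast_tx_commit wait: if the events
// channel was closed (eventBus stopped during node shutdown), return an
// error instead of treating the zero value as a real event and panicking.
func waitForDeliverTx(deliverTxResCh <-chan eventDataTx) (eventDataTx, error) {
	msg, ok := <-deliverTxResCh
	if !ok {
		return eventDataTx{}, errors.New("deliverTxResCh closed: eventBus was stopped")
	}
	return msg, nil
}

func main() {
	ch := make(chan eventDataTx)
	close(ch) // simulate the eventBus being stopped
	_, err := waitForDeliverTx(ch)
	fmt.Println(err)
}
```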
Ismail Khoffi
c4d93fd27b explicitly type MaxTotalVotingPower to int64 (#2953) 2018-11-30 14:43:16 -05:00
Ethan Buchman
dc2a338d96 Bucky/v0.27.0 (#2950)
* update changelog

* changelog, upgrading, version
2018-11-30 14:05:16 -05:00
76 changed files with 1241 additions and 928 deletions


@@ -7,6 +7,13 @@ defaults: &defaults
environment:
GOBIN: /tmp/workspace/bin
docs_update_config: &docs_update_config
working_directory: ~/repo
docker:
- image: tendermint/docs_deployment
environment:
AWS_REGION: us-east-1
jobs:
setup_dependencies:
<<: *defaults
@@ -339,10 +346,25 @@ jobs:
name: upload
command: bash .circleci/codecov.sh -f coverage.txt
deploy_docs:
<<: *docs_update_config
steps:
- checkout
- run:
name: Trigger website build
command: |
chamber exec tendermint -- start_website_build
workflows:
version: 2
test-suite:
jobs:
- deploy_docs:
filters:
branches:
only:
- master
- develop
- setup_dependencies
- lint:
requires:


@@ -1,13 +1,138 @@
# Changelog
## v0.27.4
*December 21st, 2018*
### BUG FIXES:
- [mempool] [\#3036](https://github.com/tendermint/tendermint/issues/3036) Fix
LRU cache by popping the least recently used item when the cache is full,
not the most recently used one!
## v0.27.3
*December 16th, 2018*
### BREAKING CHANGES:
* Go API
- [dep] [\#3027](https://github.com/tendermint/tendermint/issues/3027) Revert to mainline Go crypto library, eliminating the modified
`bcrypt.GenerateFromPassword`
## v0.27.2
*December 16th, 2018*
### IMPROVEMENTS:
- [node] [\#3025](https://github.com/tendermint/tendermint/issues/3025) Validate NodeInfo addresses on startup.
### BUG FIXES:
- [p2p] [\#3025](https://github.com/tendermint/tendermint/pull/3025) Revert to using defers in addrbook. Fixes deadlocks in pex and consensus upon invalid ExternalAddr/ListenAddr configuration.
## v0.27.1
*December 15th, 2018*
Special thanks to external contributors on this release:
@danil-lashin, @hleb-albau, @james-ray, @leo-xinwang
### FEATURES:
- [rpc] [\#2964](https://github.com/tendermint/tendermint/issues/2964) Add `UnconfirmedTxs(limit)` and `NumUnconfirmedTxs()` methods to HTTP/Local clients (@danil-lashin)
- [docs] [\#3004](https://github.com/tendermint/tendermint/issues/3004) Enable full-text search on docs pages
### IMPROVEMENTS:
- [consensus] [\#2971](https://github.com/tendermint/tendermint/issues/2971) Return error if ValidatorSet is empty after InitChain
(@leo-xinwang)
- [ci/cd] [\#3005](https://github.com/tendermint/tendermint/issues/3005) Updated CircleCI job to trigger website build when docs are updated
- [docs] Various updates
### BUG FIXES:
- [cmd] [\#2983](https://github.com/tendermint/tendermint/issues/2983) `testnet` command always sets `addr_book_strict = false`
- [config] [\#2980](https://github.com/tendermint/tendermint/issues/2980) Fix CORS options formatting
- [kv indexer] [\#2912](https://github.com/tendermint/tendermint/issues/2912) Don't ignore key when executing CONTAINS
- [mempool] [\#2961](https://github.com/tendermint/tendermint/issues/2961) Call `notifyTxsAvailable` if there're txs left after committing a block, but recheck=false
- [mempool] [\#2994](https://github.com/tendermint/tendermint/issues/2994) Reject txs with negative GasWanted
- [p2p] [\#2990](https://github.com/tendermint/tendermint/issues/2990) Fix a bug where seeds don't disconnect from a peer after 3h
- [consensus] [\#3006](https://github.com/tendermint/tendermint/issues/3006) Save state after InitChain only when stateHeight is also 0 (@james-ray)
## v0.27.0
*December 5th, 2018*
Special thanks to external contributors on this release:
@danil-lashin, @srmo
Special thanks to @dlguddus for discovering a [major
issue](https://github.com/tendermint/tendermint/issues/2718#issuecomment-440888677)
in the proposer selection algorithm.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
This release is primarily about fixes to the proposer selection algorithm
in preparation for the [Cosmos Game of
Stakes](https://blog.cosmos.network/the-game-of-stakes-is-open-for-registration-83a404746ee6).
It also makes use of the `ConsensusParams.Validator.PubKeyTypes` to restrict the
key types that can be used by validators, and removes the `Heartbeat` consensus
message.
### BREAKING CHANGES:
* CLI/RPC/Config
- [rpc] [\#2932](https://github.com/tendermint/tendermint/issues/2932) Rename `accum` to `proposer_priority`
* Go API
- [db] [\#2913](https://github.com/tendermint/tendermint/pull/2913)
ReverseIterator API change: start < end, and end is exclusive.
- [types] [\#2932](https://github.com/tendermint/tendermint/issues/2932) Rename `Validator.Accum` to `Validator.ProposerPriority`
* Blockchain Protocol
- [state] [\#2714](https://github.com/tendermint/tendermint/issues/2714) Validators can now only use pubkeys allowed within
ConsensusParams.Validator.PubKeyTypes
* P2P Protocol
- [consensus] [\#2871](https://github.com/tendermint/tendermint/issues/2871)
Remove *ProposalHeartbeat* message as it serves no real purpose (@srmo)
- [state] Fixes for proposer selection:
- [\#2785](https://github.com/tendermint/tendermint/issues/2785) Accum for new validators is `-1.125*totalVotingPower` instead of 0
- [\#2941](https://github.com/tendermint/tendermint/issues/2941) val.Accum is preserved during ValidatorSet.Update to avoid being
reset to 0
### IMPROVEMENTS:
- [state] [\#2929](https://github.com/tendermint/tendermint/issues/2929) Minor refactor of updateState logic (@danil-lashin)
- [node] [\#2959](https://github.com/tendermint/tendermint/issues/2959) Allow node to start even if software's BlockProtocol is
different from state's BlockProtocol
- [pex] [\#2959](https://github.com/tendermint/tendermint/issues/2959) Pex reactor logger uses `module=pex`
### BUG FIXES:
- [p2p] [\#2968](https://github.com/tendermint/tendermint/issues/2968) Panic on transport error rather than continuing to run but not
accept new connections
- [p2p] [\#2969](https://github.com/tendermint/tendermint/issues/2969) Fix mismatch in peer count between `/net_info` and the prometheus
metrics
- [rpc] [\#2408](https://github.com/tendermint/tendermint/issues/2408) `/broadcast_tx_commit`: Fix "interface conversion: interface {} in nil, not EventDataTx" panic (could happen if somebody sent a tx using `/broadcast_tx_commit` while Tendermint was being stopped)
- [state] [\#2785](https://github.com/tendermint/tendermint/issues/2785) Fix accum for new validators to be `-1.125*totalVotingPower`
instead of 0, forcing them to wait before becoming the proposer. Also:
- do not batch clip
- keep accums averaged near 0
- [txindex/kv] [\#2925](https://github.com/tendermint/tendermint/issues/2925) Don't return false positives when range searching for a prefix of a tag value
- [types] [\#2938](https://github.com/tendermint/tendermint/issues/2938) Fix regression in v0.26.4 where we panic on empty
genDoc.Validators
- [types] [\#2941](https://github.com/tendermint/tendermint/issues/2941) Preserve val.Accum during ValidatorSet.Update to avoid it being
reset to 0 every time a validator is updated
## v0.26.4
*November 27th, 2018*
Special thanks to external contributors on this release:
ackratos, goolAdapter, james-ray, joe-bowman, kostko,
nagarajmanjunath, tomtau
@ackratos, @goolAdapter, @james-ray, @joe-bowman, @kostko,
@nagarajmanjunath, @tomtau
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).


@@ -1,49 +1,24 @@
# Pending
## v0.27.0
## v0.27.4
*TBD*
Special thanks to external contributors on this release:
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
* CLI/RPC/Config
- [rpc] \#2932 Rename `accum` to `proposer_priority`
* Apps
* Go API
- [db] [\#2913](https://github.com/tendermint/tendermint/pull/2913)
ReverseIterator API change -- start < end, and end is exclusive.
- [types] \#2932 Rename `Validator.Accum` to `Validator.ProposerPriority`
* Blockchain Protocol
- [state] \#2714 Validators can now only use pubkeys allowed within
ConsensusParams.ValidatorParams
* P2P Protocol
- [consensus] [\#2871](https://github.com/tendermint/tendermint/issues/2871)
Remove *ProposalHeartbeat* message as it serves no real purpose
- [state] Fixes for proposer selection:
- \#2785 Accum for new validators is `-1.125*totalVotingPower` instead of 0
- \#2941 val.Accum is preserved during ValidatorSet.Update to avoid being
reset to 0
### FEATURES:
- [privval] \#1181 Split immutable and mutable parts of priv_validator.json
### IMPROVEMENTS:
### BUG FIXES:
- [types] \#2938 Fix regression in v0.26.4 where we panic on empty
genDoc.Validators
- [state] \#2785 Fix accum for new validators to be `-1.125*totalVotingPower`
instead of 0, forcing them to wait before becoming the proposer. Also:
- do not batch clip
- keep accums averaged near 0
- [types] \#2941 Preserve val.Accum during ValidatorSet.Update to avoid it being
reset to 0 every time a validator is updated

Gopkg.lock (generated)

@@ -376,7 +376,7 @@
version = "v0.14.1"
[[projects]]
digest = "1:72b71e3a29775e5752ed7a8012052a3dee165e27ec18cedddae5288058f09acf"
digest = "1:00d2b3e64cdc3fa69aa250dfbe4cc38c4837d4f37e62279be2ae52107ffbbb44"
name = "golang.org/x/crypto"
packages = [
"bcrypt",
@@ -397,8 +397,7 @@
"salsa20/salsa",
]
pruneopts = "UT"
revision = "3764759f34a542a3aef74d6b02e35be7ab893bba"
source = "github.com/tendermint/crypto"
revision = "505ab145d0a99da450461ae2c1a9f6cd10d1f447"
[[projects]]
digest = "1:d36f55a999540d29b6ea3c2ea29d71c76b1d9853fdcd3e5c5cb4836f2ba118f1"


@@ -81,8 +81,7 @@
[[constraint]]
name = "golang.org/x/crypto"
source = "github.com/tendermint/crypto"
revision = "3764759f34a542a3aef74d6b02e35be7ab893bba"
revision = "505ab145d0a99da450461ae2c1a9f6cd10d1f447"
[[override]]
name = "github.com/jmhodges/levigo"


@@ -294,6 +294,7 @@ build-linux:
build-docker-localnode:
cd networks/local
make
cd -
# Run a 4-node testnet locally
localnet-start: localnet-stop


@@ -3,9 +3,50 @@
This guide provides steps to be followed when you upgrade your applications to
a newer version of Tendermint Core.
## v0.27.0
This release contains some breaking changes to the block and p2p protocols,
but does not change any core data structures, so it should be compatible with
existing blockchains from the v0.26 series that only used Ed25519 validator keys.
Blockchains using Secp256k1 for validators will not be compatible. This is due
to the fact that we now enforce which key types validators can use as a
consensus param. The default is Ed25519, and Secp256k1 must be activated
explicitly.
It is recommended to upgrade all nodes at once to avoid incompatibilities at the
peer layer - namely, the heartbeat consensus message has been removed (only
relevant if `create_empty_blocks=false` or `create_empty_blocks_interval > 0`),
and the proposer selection algorithm has changed. Since proposer information is
never included in the blockchain, this change only affects the peer layer.
### Go API Changes
#### libs/db
The ReverseIterator API has changed the meaning of `start` and `end`.
Before, iteration was from `start` to `end`, where
`start > end`. Now, iteration is from `end` to `start`, where `start < end`.
The iterator also excludes `end`. This change allows a simplified and more
intuitive logic, aligning the semantic meaning of `start` and `end` in the
`Iterator` and `ReverseIterator`.
### Applications
This release enforces a new consensus parameter, the
ValidatorParams.PubKeyTypes. Applications must ensure that they only return
validator updates with the allowed PubKeyTypes. If a validator update includes a
pubkey type that is not included in the ConsensusParams.Validator.PubKeyTypes,
block execution will fail and the consensus will halt.
By default, only Ed25519 pubkeys may be used for validators. Enabling
Secp256k1 requires explicit modification of the ConsensusParams.
Please update your application accordingly (ie. restrict validators to only be
able to use Ed25519 keys, or explicitly add additional key types to the genesis
file).
## v0.26.0
New 0.26.0 release contains a lot of changes to core data types and protocols. It is not
This release contains a lot of changes to core data types and protocols. It is not
compatible to the old versions and there is no straight forward way to update
old data to be compatible with the new version.
@@ -67,7 +108,7 @@ For more information, see:
### Go API Changes
#### crypto.merkle
#### crypto/merkle
The `merkle.Hasher` interface was removed. Functions which used to take `Hasher`
now simply take `[]byte`. This means that any objects being Merklized should be


@@ -13,10 +13,9 @@ import (
func main() {
var (
addr = flag.String("addr", ":26659", "Address of client to connect to")
chainID = flag.String("chain-id", "mychain", "chain id")
privValKeyPath = flag.String("priv-key", "", "priv val key file path")
privValStatePath = flag.String("priv-state", "", "priv val state file path")
addr = flag.String("addr", ":26659", "Address of client to connect to")
chainID = flag.String("chain-id", "mychain", "chain id")
privValPath = flag.String("priv", "", "priv val file path")
logger = log.NewTMLogger(
log.NewSyncWriter(os.Stdout),
@@ -28,11 +27,10 @@ func main() {
"Starting private validator",
"addr", *addr,
"chainID", *chainID,
"privKeyPath", *privValKeyPath,
"privStatePath", *privValStatePath,
"privPath", *privValPath,
)
pv := privval.LoadFilePV(*privValKeyPath, *privValStatePath)
pv := privval.LoadFilePV(*privValPath)
rs := privval.NewRemoteSigner(
logger,


@@ -17,7 +17,7 @@ var GenValidatorCmd = &cobra.Command{
}
func genValidator(cmd *cobra.Command, args []string) {
pv := privval.GenFilePV("", "")
pv := privval.GenFilePV("")
jsbz, err := cdc.MarshalJSON(pv)
if err != nil {
panic(err)


@@ -4,6 +4,7 @@ import (
"fmt"
"github.com/spf13/cobra"
cfg "github.com/tendermint/tendermint/config"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/p2p"
@@ -25,18 +26,15 @@ func initFiles(cmd *cobra.Command, args []string) error {
func initFilesWithConfig(config *cfg.Config) error {
// private validator
privValKeyFile := config.PrivValidatorKeyFile()
privValStateFile := config.PrivValidatorStateFile()
privValFile := config.PrivValidatorFile()
var pv *privval.FilePV
if cmn.FileExists(privValKeyFile) {
pv = privval.LoadFilePV(privValKeyFile, privValStateFile)
logger.Info("Found private validator", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
if cmn.FileExists(privValFile) {
pv = privval.LoadFilePV(privValFile)
logger.Info("Found private validator", "path", privValFile)
} else {
pv = privval.GenFilePV(privValKeyFile, privValStateFile)
pv = privval.GenFilePV(privValFile)
pv.Save()
logger.Info("Generated private validator", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
logger.Info("Generated private validator", "path", privValFile)
}
nodeKeyFile := config.NodeKeyFile()


@@ -27,20 +27,19 @@ var ResetPrivValidatorCmd = &cobra.Command{
// XXX: this is totally unsafe.
// it's only suitable for testnets.
func resetAll(cmd *cobra.Command, args []string) {
ResetAll(config.DBDir(), config.P2P.AddrBookFile(), config.PrivValidatorKeyFile(),
config.PrivValidatorStateFile(), logger)
ResetAll(config.DBDir(), config.P2P.AddrBookFile(), config.PrivValidatorFile(), logger)
}
// XXX: this is totally unsafe.
// it's only suitable for testnets.
func resetPrivValidator(cmd *cobra.Command, args []string) {
resetFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile(), logger)
resetFilePV(config.PrivValidatorFile(), logger)
}
// ResetAll removes the privValidator and address book files plus all data.
// Exported so other CLI tools can use it.
func ResetAll(dbDir, addrBookFile, privValKeyFile, privValStateFile string, logger log.Logger) {
resetFilePV(privValKeyFile, privValStateFile, logger)
func ResetAll(dbDir, addrBookFile, privValFile string, logger log.Logger) {
resetFilePV(privValFile, logger)
removeAddrBook(addrBookFile, logger)
if err := os.RemoveAll(dbDir); err == nil {
logger.Info("Removed all blockchain history", "dir", dbDir)
@@ -49,17 +48,15 @@ func ResetAll(dbDir, addrBookFile, privValKeyFile, privValStateFile string, logg
}
}
func resetFilePV(privValKeyFile, privValStateFile string, logger log.Logger) {
if _, err := os.Stat(privValKeyFile); err == nil {
pv := privval.LoadFilePV(privValKeyFile, privValStateFile)
func resetFilePV(privValFile string, logger log.Logger) {
if _, err := os.Stat(privValFile); err == nil {
pv := privval.LoadFilePV(privValFile)
pv.Reset()
logger.Info("Reset private validator file to genesis state", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
logger.Info("Reset private validator file to genesis state", "file", privValFile)
} else {
pv := privval.GenFilePV(privValKeyFile, privValStateFile)
pv := privval.GenFilePV(privValFile)
pv.Save()
logger.Info("Generated private validator file", "file", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
logger.Info("Generated private validator file", "file", privValFile)
}
}


@@ -16,7 +16,7 @@ var ShowValidatorCmd = &cobra.Command{
}
func showValidator(cmd *cobra.Command, args []string) {
privValidator := privval.LoadOrGenFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile())
privValidator := privval.LoadOrGenFilePV(config.PrivValidatorFile())
pubKeyJSONBytes, _ := cdc.MarshalJSON(privValidator.GetPubKey())
fmt.Println(string(pubKeyJSONBytes))
}


@@ -85,18 +85,11 @@ func testnetFiles(cmd *cobra.Command, args []string) error {
_ = os.RemoveAll(outputDir)
return err
}
err = os.MkdirAll(filepath.Join(nodeDir, "data"), nodeDirPerm)
if err != nil {
_ = os.RemoveAll(outputDir)
return err
}
initFilesWithConfig(config)
pvKeyFile := filepath.Join(nodeDir, config.BaseConfig.PrivValidatorKey)
pvStateFile := filepath.Join(nodeDir, config.BaseConfig.PrivValidatorState)
pv := privval.LoadFilePV(pvKeyFile, pvStateFile)
pvFile := filepath.Join(nodeDir, config.BaseConfig.PrivValidator)
pv := privval.LoadFilePV(pvFile)
genVals[i] = types.GenesisValidator{
Address: pv.GetPubKey().Address(),
PubKey: pv.GetPubKey(),
@@ -134,14 +127,31 @@ func testnetFiles(cmd *cobra.Command, args []string) error {
}
}
// Gather persistent peer addresses.
var (
persistentPeers string
err error
)
if populatePersistentPeers {
err := populatePersistentPeersInConfigAndWriteIt(config)
persistentPeers, err = persistentPeersString(config)
if err != nil {
_ = os.RemoveAll(outputDir)
return err
}
}
// Overwrite default config.
for i := 0; i < nValidators+nNonValidators; i++ {
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
config.SetRoot(nodeDir)
config.P2P.AddrBookStrict = false
if populatePersistentPeers {
config.P2P.PersistentPeers = persistentPeers
}
cfg.WriteConfigFile(filepath.Join(nodeDir, "config", "config.toml"), config)
}
fmt.Printf("Successfully initialized %v node directories\n", nValidators+nNonValidators)
return nil
}
@@ -164,28 +174,16 @@ func hostnameOrIP(i int) string {
return fmt.Sprintf("%s%d", hostnamePrefix, i)
}
func populatePersistentPeersInConfigAndWriteIt(config *cfg.Config) error {
func persistentPeersString(config *cfg.Config) (string, error) {
persistentPeers := make([]string, nValidators+nNonValidators)
for i := 0; i < nValidators+nNonValidators; i++ {
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
config.SetRoot(nodeDir)
nodeKey, err := p2p.LoadNodeKey(config.NodeKeyFile())
if err != nil {
return err
return "", err
}
persistentPeers[i] = p2p.IDAddressString(nodeKey.ID(), fmt.Sprintf("%s:%d", hostnameOrIP(i), p2pPort))
}
persistentPeersList := strings.Join(persistentPeers, ",")
for i := 0; i < nValidators+nNonValidators; i++ {
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
config.SetRoot(nodeDir)
config.P2P.PersistentPeers = persistentPeersList
config.P2P.AddrBookStrict = false
// overwrite default config
cfg.WriteConfigFile(filepath.Join(nodeDir, "config", "config.toml"), config)
}
return nil
return strings.Join(persistentPeers, ","), nil
}


@@ -35,24 +35,15 @@ var (
defaultConfigFileName = "config.toml"
defaultGenesisJSONName = "genesis.json"
defaultPrivValKeyName = "priv_validator_key.json"
defaultPrivValStateName = "priv_validator_state.json"
defaultPrivValName = "priv_validator.json"
defaultNodeKeyName = "node_key.json"
defaultAddrBookName = "addrbook.json"
defaultConfigFilePath = filepath.Join(defaultConfigDir, defaultConfigFileName)
defaultGenesisJSONPath = filepath.Join(defaultConfigDir, defaultGenesisJSONName)
defaultPrivValKeyPath = filepath.Join(defaultConfigDir, defaultPrivValKeyName)
defaultPrivValStatePath = filepath.Join(defaultDataDir, defaultPrivValStateName)
defaultNodeKeyPath = filepath.Join(defaultConfigDir, defaultNodeKeyName)
defaultAddrBookPath = filepath.Join(defaultConfigDir, defaultAddrBookName)
)
var (
oldPrivVal = "priv_validator.json"
oldPrivValPath = filepath.Join(defaultConfigDir, oldPrivVal)
defaultConfigFilePath = filepath.Join(defaultConfigDir, defaultConfigFileName)
defaultGenesisJSONPath = filepath.Join(defaultConfigDir, defaultGenesisJSONName)
defaultPrivValPath = filepath.Join(defaultConfigDir, defaultPrivValName)
defaultNodeKeyPath = filepath.Join(defaultConfigDir, defaultNodeKeyName)
defaultAddrBookPath = filepath.Join(defaultConfigDir, defaultAddrBookName)
)
// Config defines the top level configuration for a Tendermint node
@@ -169,10 +160,7 @@ type BaseConfig struct {
Genesis string `mapstructure:"genesis_file"`
// Path to the JSON file containing the private key to use as a validator in the consensus protocol
PrivValidatorKey string `mapstructure:"priv_validator_key_file"`
// Path to the JSON file containing the last sign state of a validator
PrivValidatorState string `mapstructure:"priv_validator_state_file"`
PrivValidator string `mapstructure:"priv_validator_file"`
// TCP or UNIX socket address for Tendermint to listen on for
// connections from an external PrivValidator process
@@ -195,20 +183,19 @@ type BaseConfig struct {
// DefaultBaseConfig returns a default base configuration for a Tendermint node
func DefaultBaseConfig() BaseConfig {
return BaseConfig{
Genesis: defaultGenesisJSONPath,
PrivValidatorKey: defaultPrivValKeyPath,
PrivValidatorState: defaultPrivValStatePath,
NodeKey: defaultNodeKeyPath,
Moniker: defaultMoniker,
ProxyApp: "tcp://127.0.0.1:26658",
ABCI: "socket",
LogLevel: DefaultPackageLogLevels(),
LogFormat: LogFormatPlain,
ProfListenAddress: "",
FastSync: true,
FilterPeers: false,
DBBackend: "leveldb",
DBPath: "data",
Genesis: defaultGenesisJSONPath,
PrivValidator: defaultPrivValPath,
NodeKey: defaultNodeKeyPath,
Moniker: defaultMoniker,
ProxyApp: "tcp://127.0.0.1:26658",
ABCI: "socket",
LogLevel: DefaultPackageLogLevels(),
LogFormat: LogFormatPlain,
ProfListenAddress: "",
FastSync: true,
FilterPeers: false,
DBBackend: "leveldb",
DBPath: "data",
}
}
@@ -231,20 +218,9 @@ func (cfg BaseConfig) GenesisFile() string {
return rootify(cfg.Genesis, cfg.RootDir)
}
// PrivValidatorKeyFile returns the full path to the priv_validator_key.json file
func (cfg BaseConfig) PrivValidatorKeyFile() string {
return rootify(cfg.PrivValidatorKey, cfg.RootDir)
}
// PrivValidatorFile returns the full path to the priv_validator_state.json file
func (cfg BaseConfig) PrivValidatorStateFile() string {
return rootify(cfg.PrivValidatorState, cfg.RootDir)
}
// OldPrivValidatorFile returns the full path of the priv_validator.json from pre v0.28.0.
// TODO: eventually remove.
func (cfg BaseConfig) OldPrivValidatorFile() string {
return rootify(oldPrivValPath, cfg.RootDir)
// PrivValidatorFile returns the full path to the priv_validator.json file
func (cfg BaseConfig) PrivValidatorFile() string {
return rootify(cfg.PrivValidator, cfg.RootDir)
}
// NodeKeyFile returns the full path to the node_key.json file
@@ -307,7 +283,7 @@ type RPCConfig struct {
// Maximum number of simultaneous connections.
// Does not include RPC (HTTP&WebSocket) connections. See max_open_connections
// If you want to accept more significant number than the default, make sure
// If you want to accept a larger number than the default, make sure
// you increase your OS limits.
// 0 - unlimited.
GRPCMaxOpenConnections int `mapstructure:"grpc_max_open_connections"`
@@ -317,7 +293,7 @@ type RPCConfig struct {
// Maximum number of simultaneous connections (including WebSocket).
// Does not include gRPC connections. See grpc_max_open_connections
// If you want to accept more significant number than the default, make sure
// If you want to accept a larger number than the default, make sure
// you increase your OS limits.
// 0 - unlimited.
// Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
@@ -798,12 +774,12 @@ type InstrumentationConfig struct {
PrometheusListenAddr string `mapstructure:"prometheus_listen_addr"`
// Maximum number of simultaneous connections.
// If you want to accept more significant number than the default, make sure
// If you want to accept a larger number than the default, make sure
// you increase your OS limits.
// 0 - unlimited.
MaxOpenConnections int `mapstructure:"max_open_connections"`
// Tendermint instrumentation namespace.
// Instrumentation namespace.
Namespace string `mapstructure:"namespace"`
}


@@ -95,10 +95,7 @@ log_format = "{{ .BaseConfig.LogFormat }}"
genesis_file = "{{ js .BaseConfig.Genesis }}"
# Path to the JSON file containing the private key to use as a validator in the consensus protocol
priv_validator_key_file = "{{ js .BaseConfig.PrivValidatorKey }}"
# Path to the JSON file containing the last sign state of a validator
priv_validator_state_file = "{{ js .BaseConfig.PrivValidatorState }}"
priv_validator_file = "{{ js .BaseConfig.PrivValidator }}"
# TCP or UNIX socket address for Tendermint to listen on for
# connections from an external PrivValidator process
@@ -128,13 +125,13 @@ laddr = "{{ .RPC.ListenAddress }}"
# A list of origins a cross-domain request can be executed from
# Default value '[]' disables cors support
# Use '["*"]' to allow any origin
cors_allowed_origins = "{{ .RPC.CORSAllowedOrigins }}"
cors_allowed_origins = [{{ range .RPC.CORSAllowedOrigins }}{{ printf "%q, " . }}{{end}}]
# A list of methods the client is allowed to use with cross-domain requests
cors_allowed_methods = "{{ .RPC.CORSAllowedMethods }}"
cors_allowed_methods = [{{ range .RPC.CORSAllowedMethods }}{{ printf "%q, " . }}{{end}}]
# A list of non simple headers the client is allowed to use with cross-domain requests
cors_allowed_headers = "{{ .RPC.CORSAllowedHeaders }}"
cors_allowed_headers = [{{ range .RPC.CORSAllowedHeaders }}{{ printf "%q, " . }}{{end}}]
# TCP or UNIX socket address for the gRPC server to listen on
# NOTE: This server only supports /broadcast_tx_commit
@@ -142,7 +139,7 @@ grpc_laddr = "{{ .RPC.GRPCListenAddress }}"
# Maximum number of simultaneous connections.
# Does not include RPC (HTTP&WebSocket) connections. See max_open_connections
# If you want to accept more significant number than the default, make sure
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
# Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
@@ -154,7 +151,7 @@ unsafe = {{ .RPC.Unsafe }}
# Maximum number of simultaneous connections (including WebSocket).
# Does not include gRPC connections. See grpc_max_open_connections
# If you want to accept more significant number than the default, make sure
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
# Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
@@ -272,8 +269,8 @@ blocktime_iota = "{{ .Consensus.BlockTimeIota }}"
# What indexer to use for transactions
#
# Options:
# 1) "null" (default)
# 2) "kv" - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
# 1) "null"
# 2) "kv" (default) - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
indexer = "{{ .TxIndex.Indexer }}"
# Comma-separated list of tags to index (by default the only tag is "tx.hash")
@@ -305,7 +302,7 @@ prometheus = {{ .Instrumentation.Prometheus }}
prometheus_listen_addr = "{{ .Instrumentation.PrometheusListenAddr }}"
# Maximum number of simultaneous connections.
# If you want to accept more significant number than the default, make sure
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
max_open_connections = {{ .Instrumentation.MaxOpenConnections }}
@@ -345,8 +342,7 @@ func ResetTestRoot(testName string) *Config {
baseConfig := DefaultBaseConfig()
configFilePath := filepath.Join(rootDir, defaultConfigFilePath)
genesisFilePath := filepath.Join(rootDir, baseConfig.Genesis)
privKeyFilePath := filepath.Join(rootDir, baseConfig.PrivValidatorKey)
privStateFilePath := filepath.Join(rootDir, baseConfig.PrivValidatorState)
privFilePath := filepath.Join(rootDir, baseConfig.PrivValidator)
// Write default config file if missing.
if !cmn.FileExists(configFilePath) {
@@ -356,8 +352,7 @@ func ResetTestRoot(testName string) *Config {
cmn.MustWriteFile(genesisFilePath, []byte(testGenesis), 0644)
}
// we always overwrite the priv val
cmn.MustWriteFile(privKeyFilePath, []byte(testPrivValidatorKey), 0644)
cmn.MustWriteFile(privStateFilePath, []byte(testPrivValidatorState), 0644)
cmn.MustWriteFile(privFilePath, []byte(testPrivValidator), 0644)
config := TestConfig().SetRoot(rootDir)
return config
@@ -379,7 +374,7 @@ var testGenesis = `{
"app_hash": ""
}`
var testPrivValidatorKey = `{
var testPrivValidator = `{
"address": "A3258DCBF45DCA0DF052981870F2D1441A36D145",
"pub_key": {
"type": "tendermint/PubKeyEd25519",
@@ -388,11 +383,8 @@ var testPrivValidatorKey = `{
"priv_key": {
"type": "tendermint/PrivKeyEd25519",
"value": "EVkqJO/jIXp3rkASXfh9YnyToYXRXhBr6g9cQVxPFnQBP/5povV4HTjvsy530kybxKHwEi85iU8YL0qQhSYVoQ=="
}
}`
var testPrivValidatorState = `{
"height": "0",
"round": "0",
"step": 0
},
"last_height": "0",
"last_round": "0",
"last_step": 0
}`


@@ -60,7 +60,7 @@ func TestEnsureTestRoot(t *testing.T) {
// TODO: make sure the cfg returned and testconfig are the same!
baseConfig := DefaultBaseConfig()
ensureFiles(t, rootDir, defaultDataDir, baseConfig.Genesis, baseConfig.PrivValidatorKey, baseConfig.PrivValidatorState)
ensureFiles(t, rootDir, defaultDataDir, baseConfig.Genesis, baseConfig.PrivValidator)
}
func checkConfig(configFile string) bool {


@@ -6,18 +6,14 @@ import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
"path"
"reflect"
"sort"
"sync"
"testing"
"time"
"github.com/go-kit/kit/log/term"
abcicli "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/example/counter"
"github.com/tendermint/tendermint/abci/example/kvstore"
abci "github.com/tendermint/tendermint/abci/types"
bc "github.com/tendermint/tendermint/blockchain"
cfg "github.com/tendermint/tendermint/config"
@@ -31,6 +27,11 @@ import (
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
tmtime "github.com/tendermint/tendermint/types/time"
"github.com/tendermint/tendermint/abci/example/counter"
"github.com/tendermint/tendermint/abci/example/kvstore"
"github.com/go-kit/kit/log/term"
)
const (
@@ -280,10 +281,9 @@ func newConsensusStateWithConfigAndBlockStore(thisConfig *cfg.Config, state sm.S
}
func loadPrivValidator(config *cfg.Config) *privval.FilePV {
privValidatorKeyFile := config.PrivValidatorKeyFile()
ensureDir(filepath.Dir(privValidatorKeyFile), 0700)
privValidatorStateFile := config.PrivValidatorStateFile()
privValidator := privval.LoadOrGenFilePV(privValidatorKeyFile, privValidatorStateFile)
privValidatorFile := config.PrivValidatorFile()
ensureDir(path.Dir(privValidatorFile), 0700)
privValidator := privval.LoadOrGenFilePV(privValidatorFile)
privValidator.Reset()
return privValidator
}
@@ -591,7 +591,7 @@ func randConsensusNet(nValidators int, testName string, tickerFunc func() Timeou
for _, opt := range configOpts {
opt(thisConfig)
}
ensureDir(filepath.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
app := appFunc()
vals := types.TM2PB.ValidatorUpdates(state.Validators)
app.InitChain(abci.RequestInitChain{Validators: vals})
@@ -612,21 +612,16 @@ func randConsensusNetWithPeers(nValidators, nPeers int, testName string, tickerF
stateDB := dbm.NewMemDB() // each state needs its own db
state, _ := sm.LoadStateFromDBOrGenesisDoc(stateDB, genDoc)
thisConfig := ResetConfig(fmt.Sprintf("%s_%d", testName, i))
ensureDir(filepath.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
var privVal types.PrivValidator
if i < nValidators {
privVal = privVals[i]
} else {
tempKeyFile, err := ioutil.TempFile("", "priv_validator_key_")
tempFile, err := ioutil.TempFile("", "priv_validator_")
if err != nil {
panic(err)
}
tempStateFile, err := ioutil.TempFile("", "priv_validator_state_")
if err != nil {
panic(err)
}
privVal = privval.GenFilePV(tempKeyFile.Name(), tempStateFile.Name())
privVal = privval.GenFilePV(tempFile.Name())
}
app := appFunc()


@@ -11,7 +11,6 @@ import (
"time"
abci "github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/version"
//auto "github.com/tendermint/tendermint/libs/autofile"
cmn "github.com/tendermint/tendermint/libs/common"
dbm "github.com/tendermint/tendermint/libs/db"
@@ -20,6 +19,7 @@ import (
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
"github.com/tendermint/tendermint/version"
)
var crc32c = crc32.MakeTable(crc32.Castagnoli)
@@ -247,6 +247,7 @@ func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
// Set AppVersion on the state.
h.initialState.Version.Consensus.App = version.Protocol(res.AppVersion)
sm.SaveState(h.stateDB, h.initialState)
// Replay blocks up to the latest in the blockstore.
_, err = h.ReplayBlocks(h.initialState, appHash, blockHeight, proxyApp)
@@ -295,19 +296,27 @@ func (h *Handshaker) ReplayBlocks(
return nil, err
}
// If the app returned validators or consensus params, update the state.
if len(res.Validators) > 0 {
vals, err := types.PB2TM.ValidatorUpdates(res.Validators)
if err != nil {
return nil, err
if stateBlockHeight == 0 { //we only update state when we are in initial state
// If the app returned validators or consensus params, update the state.
if len(res.Validators) > 0 {
vals, err := types.PB2TM.ValidatorUpdates(res.Validators)
if err != nil {
return nil, err
}
state.Validators = types.NewValidatorSet(vals)
state.NextValidators = types.NewValidatorSet(vals)
} else {
// If validator set is not set in genesis and still empty after InitChain, exit.
if len(h.genDoc.Validators) == 0 {
return nil, fmt.Errorf("Validator set is nil in genesis and still empty after InitChain")
}
}
state.Validators = types.NewValidatorSet(vals)
state.NextValidators = types.NewValidatorSet(vals)
if res.ConsensusParams != nil {
state.ConsensusParams = types.PB2TM.ConsensusParams(res.ConsensusParams)
}
sm.SaveState(h.stateDB, state)
}
if res.ConsensusParams != nil {
state.ConsensusParams = types.PB2TM.ConsensusParams(res.ConsensusParams)
}
sm.SaveState(h.stateDB, state)
}
// First handle edge cases and constraints on the storeBlockHeight.


@@ -319,7 +319,7 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
walFile := tempWALWithData(walBody)
config.Consensus.SetWalFile(walFile)
privVal := privval.LoadFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile())
privVal := privval.LoadFilePV(config.PrivValidatorFile())
wal, err := NewWAL(walFile)
require.NoError(t, err)
@@ -633,7 +633,7 @@ func TestInitChainUpdateValidators(t *testing.T) {
clientCreator := proxy.NewLocalClientCreator(app)
config := ResetConfig("proxy_test_")
privVal := privval.LoadFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile())
privVal := privval.LoadFilePV(config.PrivValidatorFile())
stateDB, state, store := stateAndStore(config, privVal.GetPubKey(), 0x0)
oldValAddr := state.Validators.Validators[0].Address


@@ -40,9 +40,8 @@ func WALGenerateNBlocks(wr io.Writer, numBlocks int) (err error) {
// COPY PASTE FROM node.go WITH A FEW MODIFICATIONS
// NOTE: we can't import node package because of circular dependency.
// NOTE: we don't do handshake so need to set state.Version.Consensus.App directly.
privValidatorKeyFile := config.PrivValidatorKeyFile()
privValidatorStateFile := config.PrivValidatorStateFile()
privValidator := privval.LoadOrGenFilePV(privValidatorKeyFile, privValidatorStateFile)
privValidatorFile := config.PrivValidatorFile()
privValidator := privval.LoadOrGenFilePV(privValidatorFile)
genDoc, err := types.GenesisDocFromFile(config.GenesisFile())
if err != nil {
return errors.Wrap(err, "failed to read genesis file")


@@ -5,7 +5,7 @@ import (
"fmt"
"io/ioutil"
"golang.org/x/crypto/openpgp/armor" // forked to github.com/tendermint/crypto
"golang.org/x/crypto/openpgp/armor"
)
func EncodeArmor(blockType string, headers map[string]string, data []byte) string {


@@ -7,7 +7,7 @@ import (
"io"
amino "github.com/tendermint/go-amino"
"golang.org/x/crypto/ed25519" // forked to github.com/tendermint/crypto
"golang.org/x/crypto/ed25519"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/tmhash"


@@ -3,7 +3,7 @@ package crypto
import (
"crypto/sha256"
"golang.org/x/crypto/ripemd160" // forked to github.com/tendermint/crypto
"golang.org/x/crypto/ripemd160"
)
func Sha256(bytes []byte) []byte {


@@ -9,7 +9,7 @@ import (
secp256k1 "github.com/tendermint/btcd/btcec"
amino "github.com/tendermint/go-amino"
"golang.org/x/crypto/ripemd160" // forked to github.com/tendermint/crypto
"golang.org/x/crypto/ripemd160"
"github.com/tendermint/tendermint/crypto"
)


@@ -8,7 +8,7 @@ import (
"errors"
"fmt"
"golang.org/x/crypto/chacha20poly1305" // forked to github.com/tendermint/crypto
"golang.org/x/crypto/chacha20poly1305"
)
// Implements crypto.AEAD


@@ -4,7 +4,7 @@ import (
"errors"
"fmt"
"golang.org/x/crypto/nacl/secretbox" // forked to github.com/tendermint/crypto
"golang.org/x/crypto/nacl/secretbox"
"github.com/tendermint/tendermint/crypto"
cmn "github.com/tendermint/tendermint/libs/common"


@@ -6,7 +6,7 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/crypto/bcrypt" // forked to github.com/tendermint/crypto
"golang.org/x/crypto/bcrypt"
"github.com/tendermint/tendermint/crypto"
)
@@ -30,9 +30,7 @@ func TestSimpleWithKDF(t *testing.T) {
plaintext := []byte("sometext")
secretPass := []byte("somesecret")
salt := []byte("somesaltsomesalt") // len 16
// NOTE: we use a fork of x/crypto so we can inject our own randomness for salt
secret, err := bcrypt.GenerateFromPassword(salt, secretPass, 12)
secret, err := bcrypt.GenerateFromPassword(secretPass, 12)
if err != nil {
t.Error(err)
}


@@ -8,7 +8,17 @@ module.exports = {
lineNumbers: true
},
themeConfig: {
lastUpdated: "Last Updated",
repo: "tendermint/tendermint",
editLinks: true,
docsDir: "docs",
docsBranch: "develop",
editLinkText: 'Edit this page on Github',
lastUpdated: true,
algolia: {
apiKey: '59f0e2deb984aa9cdf2b3a5fd24ac501',
indexName: 'tendermint',
debug: false
},
nav: [{ text: "Back to Tendermint", link: "https://tendermint.com" }],
sidebar: [
{


@@ -12,10 +12,10 @@ respectively.
## How It Works
There is a Jenkins job listening for changes in the `/docs` directory, on both
There is a CircleCI job listening for changes in the `/docs` directory, on both
the `master` and `develop` branches. Any updates to files in this directory
on those branches will automatically trigger a website deployment. Under the hood,
a private website repository has make targets consumed by a standard Jenkins task.
the private website repository has a `make build-docs` target consumed by a CircleCI job in that repo.
## README
@@ -93,6 +93,10 @@ python -m SimpleHTTPServer 8080
then navigate to localhost:8080 in your browser.
## Search
We are using [Algolia](https://www.algolia.com) to power full-text search. This uses a public API search-only key in the `config.js` as well as a [tendermint.json](https://github.com/algolia/docsearch-configs/blob/master/configs/tendermint.json) configuration file that we can update with PRs.
## Consistency
Because the build processes are identical (as is the information contained herein), this file should be kept in sync as


@@ -5,7 +5,7 @@ Tendermint blockchain application.
The following diagram provides a superb example:
<https://drive.google.com/open?id=1yR2XpRi9YCY9H9uMfcw8-RMJpvDyvjz9>
![](../imgs/cosmos-tendermint-stack-4k.jpg)
The end-user application here is the Cosmos Voyager, at the bottom left.
Voyager communicates with a REST API exposed by a local Light-Client


@@ -181,5 +181,13 @@
"language": "Javascript",
"author": "Dennis McKinnon"
}
],
"aminoLibraries": [
{
"name": "JS-Amino",
"url": "https://github.com/TanNgocDo/Js-Amino",
"language": "Javascript",
"author": "TanNgocDo"
}
]
}


@@ -1,11 +1,9 @@
# Ecosystem
The growing list of applications built using various pieces of the
Tendermint stack can be found at:
Tendermint stack can be found at the [ecosystem page](https://tendermint.com/ecosystem).
- https://tendermint.com/ecosystem
We thank the community for their contributions thus far and welcome the
We thank the community for their contributions and welcome the
addition of new projects. A pull request can be submitted to [this
file](https://github.com/tendermint/tendermint/blob/master/docs/app-dev/ecosystem.json)
to include your project.

Binary image file added (625 KiB); not shown.


@@ -70,10 +70,6 @@ Tendermint is in essence similar software, but with two key differences:
the application logic that's right for them, from key-value store to
cryptocurrency to e-voting platform and beyond.
The layout of this Tendermint website content is also ripped directly
and without shame from [consul.io](https://www.consul.io/) and the other
[Hashicorp sites](https://www.hashicorp.com/#tools).
### Bitcoin, Ethereum, etc.
Tendermint emerged in the tradition of cryptocurrencies like Bitcoin,


@@ -1,20 +1,17 @@
# Docker Compose
With Docker Compose, we can spin up local testnets in a single command:
```
make localnet-start
```
With Docker Compose, you can spin up local testnets with a single command.
## Requirements
- [Install tendermint](/docs/install.md)
- [Install docker](https://docs.docker.com/engine/installation/)
- [Install docker-compose](https://docs.docker.com/compose/install/)
1. [Install tendermint](/docs/install.md)
2. [Install docker](https://docs.docker.com/engine/installation/)
3. [Install docker-compose](https://docs.docker.com/compose/install/)
## Build
Build the `tendermint` binary and the `tendermint/localnode` docker image.
Build the `tendermint` binary and, optionally, the `tendermint/localnode`
docker image.
Note the binary will be mounted into the container so it can be updated without
rebuilding the image.
@@ -25,11 +22,10 @@ cd $GOPATH/src/github.com/tendermint/tendermint
# Build the linux binary in ./build
make build-linux
# Build tendermint/localnode image
# (optionally) Build tendermint/localnode image
make build-docker-localnode
```
## Run a testnet
To start a 4 node testnet run:
@@ -38,9 +34,13 @@ To start a 4 node testnet run:
make localnet-start
```
The nodes bind their RPC servers to ports 26657, 26660, 26662, and 26664 on the host.
The nodes bind their RPC servers to ports 26657, 26660, 26662, and 26664 on the
host.
This file creates a 4-node network using the localnode image.
The nodes of the network expose their P2P and RPC endpoints to the host machine on ports 26656-26657, 26659-26660, 26661-26662, and 26663-26664 respectively.
The nodes of the network expose their P2P and RPC endpoints to the host machine
on ports 26656-26657, 26659-26660, 26661-26662, and 26663-26664 respectively.
To update the binary, just rebuild it and restart the nodes:
@@ -52,34 +52,40 @@ make localnet-start
## Configuration
The `make localnet-start` creates files for a 4-node testnet in `./build` by calling the `tendermint testnet` command.
The `make localnet-start` creates files for a 4-node testnet in `./build` by
calling the `tendermint testnet` command.
The `./build` directory is mounted to the `/tendermint` mount point to attach the binary and config files to the container.
The `./build` directory is mounted to the `/tendermint` mount point to attach
the binary and config files to the container.
For instance, to create a single node testnet:
To change the number of validators / non-validators change the `localnet-start` Makefile target:
```
localnet-start: localnet-stop
@if ! [ -f build/node0/config/genesis.json ]; then docker run --rm -v $(CURDIR)/build:/tendermint:Z tendermint/localnode testnet --v 5 --n 3 --o . --populate-persistent-peers --starting-ip-address 192.167.10.2 ; fi
docker-compose up
```
The command now will generate config files for 5 validators and 3
non-validators network.
Before running it, don't forget to cleanup the old files:
```
cd $GOPATH/src/github.com/tendermint/tendermint
# Clear the build folder
rm -rf ./build
# Build binary
make build-linux
# Create configuration
docker run -e LOG="stdout" -v `pwd`/build:/tendermint tendermint/localnode testnet --o . --v 1
#Run the node
docker run -v `pwd`/build:/tendermint tendermint/localnode
rm -rf ./build/node*
```
## Logging
Log is saved under the attached volume, in the `tendermint.log` file. If the `LOG` environment variable is set to `stdout` at start, the log is not saved, but printed on the screen.
Log is saved under the attached volume, in the `tendermint.log` file. If the
`LOG` environment variable is set to `stdout` at start, the log is not saved,
but printed on the screen.
## Special binaries
If you have multiple binaries with different names, you can specify which one to run with the BINARY environment variable. The path of the binary is relative to the attached volume.
If you have multiple binaries with different names, you can specify which one
to run with the `BINARY` environment variable. The path of the binary is relative
to the attached volume.

View File

@@ -14,31 +14,31 @@ please submit them to our [bug bounty](https://tendermint.com/security)!
### Data Structures
- [Encoding and Digests](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/encoding.md)
- [Blockchain](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/blockchain.md)
- [State](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/state.md)
- [Encoding and Digests](./blockchain/encoding.md)
- [Blockchain](./blockchain/blockchain.md)
- [State](./blockchain/state.md)
### Consensus Protocol
- [Consensus Algorithm](/docs/spec/consensus/consensus.md)
- [Creating a proposal](/docs/spec/consensus/creating-proposal.md)
- [Time](/docs/spec/consensus/bft-time.md)
- [Light-Client](/docs/spec/consensus/light-client.md)
- [Consensus Algorithm](./consensus/consensus.md)
- [Creating a proposal](./consensus/creating-proposal.md)
- [Time](./consensus/bft-time.md)
- [Light-Client](./consensus/light-client.md)
### P2P and Network Protocols
- [The Base P2P Layer](https://github.com/tendermint/tendermint/tree/master/docs/spec/p2p): multiplex the protocols ("reactors") on authenticated and encrypted TCP connections
- [Peer Exchange (PEX)](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/pex): gossip known peer addresses so peers can find each other
- [Block Sync](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/block_sync): gossip blocks so peers can catch up quickly
- [Consensus](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/consensus): gossip votes and block parts so new blocks can be committed
- [Mempool](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/mempool): gossip transactions so they get included in blocks
- Evidence: Forthcoming, see [this issue](https://github.com/tendermint/tendermint/issues/2329).
- [The Base P2P Layer](./p2p/): multiplex the protocols ("reactors") on authenticated and encrypted TCP connections
- [Peer Exchange (PEX)](./reactors/pex/): gossip known peer addresses so peers can find each other
- [Block Sync](./reactors/block_sync/): gossip blocks so peers can catch up quickly
- [Consensus](./reactors/consensus/): gossip votes and block parts so new blocks can be committed
- [Mempool](./reactors/mempool/): gossip transactions so they get included in blocks
- [Evidence](./reactors/evidence/): sending invalid evidence will stop the peer
### Software
- [ABCI](/docs/spec/software/abci.md): Details about interactions between the
- [ABCI](./software/abci.md): Details about interactions between the
application and consensus engine over ABCI
- [Write-Ahead Log](/docs/spec/software/wal.md): Details about how the consensus
- [Write-Ahead Log](./software/wal.md): Details about how the consensus
engine preserves data and recovers from crash failures
## Overview

View File

@@ -36,22 +36,26 @@ db_backend = "leveldb"
# Database directory
db_dir = "data"
# Output level for logging
log_level = "state:info,*:error"
# Output level for logging, including package level options
log_level = "main:info,state:info,*:error"
# Output format: 'plain' (colored text) or 'json'
log_format = "plain"
##### additional base config options #####
# The ID of the chain to join (should be signed with every transaction and vote)
chain_id = ""
# Path to the JSON file containing the initial validator set and other meta data
genesis_file = "genesis.json"
genesis_file = "config/genesis.json"
# Path to the JSON file containing the private key to use as a validator in the consensus protocol
priv_validator_file = "priv_validator.json"
priv_validator_file = "config/priv_validator.json"
# TCP or UNIX socket address for Tendermint to listen on for
# connections from an external PrivValidator process
priv_validator_laddr = ""
# Path to the JSON file containing the private key to use for node authentication in the p2p protocol
node_key_file = "config/node_key.json"
# Mechanism to connect to the ABCI application: socket | grpc
abci = "socket"
@@ -74,13 +78,13 @@ laddr = "tcp://0.0.0.0:26657"
# A list of origins a cross-domain request can be executed from
# Default value '[]' disables cors support
# Use '["*"]' to allow any origin
cors_allowed_origins = "[]"
cors_allowed_origins = []
# A list of methods the client is allowed to use with cross-domain requests
cors_allowed_methods = "[HEAD GET POST]"
cors_allowed_methods = ["HEAD", "GET", "POST"]
# A list of non simple headers the client is allowed to use with cross-domain requests
cors_allowed_headers = "[Origin Accept Content-Type X-Requested-With X-Server-Time]"
cors_allowed_headers = ["Origin", "Accept", "Content-Type", "X-Requested-With", "X-Server-Time"]
# TCP or UNIX socket address for the gRPC server to listen on
# NOTE: This server only supports /broadcast_tx_commit
@@ -88,7 +92,7 @@ grpc_laddr = ""
# Maximum number of simultaneous connections.
# Does not include RPC (HTTP&WebSocket) connections. See max_open_connections
# If you want to accept more significant number than the default, make sure
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
# Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
@@ -100,7 +104,7 @@ unsafe = false
# Maximum number of simultaneous connections (including WebSocket).
# Does not include gRPC connections. See grpc_max_open_connections
# If you want to accept more significant number than the default, make sure
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
# Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
@@ -113,6 +117,12 @@ max_open_connections = 900
# Address to listen for incoming connections
laddr = "tcp://0.0.0.0:26656"
# Address to advertise to peers for them to dial
# If empty, will use the same port as the laddr,
# and will introspect on the listener or use UPnP
# to figure out the address.
external_address = ""
# Comma separated list of seed nodes to connect to
seeds = ""
@@ -123,7 +133,7 @@ persistent_peers = ""
upnp = false
# Path to address book
addr_book_file = "addrbook.json"
addr_book_file = "config/addrbook.json"
# Set true for strict address routability rules
# Set false for private or local networks
@@ -171,26 +181,26 @@ dial_timeout = "3s"
recheck = true
broadcast = true
wal_dir = "data/mempool.wal"
wal_dir = ""
# size of the mempool
size = 100000
size = 5000
# size of the cache (used to filter transactions we saw earlier)
cache_size = 100000
cache_size = 10000
##### consensus configuration options #####
[consensus]
wal_file = "data/cs.wal/wal"
timeout_propose = "3000ms"
timeout_propose = "3s"
timeout_propose_delta = "500ms"
timeout_prevote = "1000ms"
timeout_prevote = "1s"
timeout_prevote_delta = "500ms"
timeout_precommit = "1000ms"
timeout_precommit = "1s"
timeout_precommit_delta = "500ms"
timeout_commit = "1000ms"
timeout_commit = "1s"
# Make progress as soon as we have all the precommits (as if TimeoutCommit = 0)
skip_timeout_commit = false
@@ -201,10 +211,10 @@ create_empty_blocks_interval = "0s"
# Reactor sleep duration parameters
peer_gossip_sleep_duration = "100ms"
peer_query_maj23_sleep_duration = "2000ms"
peer_query_maj23_sleep_duration = "2s"
# Block time parameters. Corresponds to the minimum time increment between consecutive blocks.
blocktime_iota = "1000ms"
blocktime_iota = "1s"
##### transactions indexer configuration options #####
[tx_index]
@@ -245,7 +255,7 @@ prometheus = false
prometheus_listen_addr = ":26660"
# Maximum number of simultaneous connections.
# If you want to accept a more significant number than the default, make sure
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
max_open_connections = 3

View File

@@ -1,6 +1,7 @@
package log
import (
"io"
"os"
"testing"
@@ -19,12 +20,22 @@ var (
// inside a test (not in the init func) because
// verbose flag only set at the time of testing.
func TestingLogger() Logger {
return TestingLoggerWithOutput(os.Stdout)
}
// TestingLoggerWithOutput returns a TMLogger which writes to w if tests are being run
// with the verbose (-v) flag, and a NopLogger otherwise.
//
// Note that the call to TestingLoggerWithOutput(w io.Writer) must be made
// inside a test (not in the init func) because
// verbose flag only set at the time of testing.
func TestingLoggerWithOutput(w io.Writer) Logger {
if _testingLogger != nil {
return _testingLogger
}
if testing.Verbose() {
_testingLogger = NewTMLogger(NewSyncWriter(os.Stdout))
_testingLogger = NewTMLogger(NewSyncWriter(w))
} else {
_testingLogger = NewNopLogger()
}

View File

@@ -108,6 +108,10 @@ func PostCheckMaxGas(maxGas int64) PostCheckFunc {
if maxGas == -1 {
return nil
}
if res.GasWanted < 0 {
return fmt.Errorf("gas wanted %d is negative",
res.GasWanted)
}
if res.GasWanted > maxGas {
return fmt.Errorf("gas wanted %d is greater than max gas %d",
res.GasWanted, maxGas)
@@ -486,11 +490,15 @@ func (mem *Mempool) ReapMaxBytesMaxGas(maxBytes, maxGas int64) types.Txs {
return txs
}
totalBytes += int64(len(memTx.tx)) + aminoOverhead
// Check total gas requirement
if maxGas > -1 && totalGas+memTx.gasWanted > maxGas {
// Check total gas requirement.
// If maxGas is negative, skip this check.
// Since newTotalGas < maxGas, which
// must be non-negative, it follows that this won't overflow.
newTotalGas := totalGas + memTx.gasWanted
if maxGas > -1 && newTotalGas > maxGas {
return txs
}
totalGas += memTx.gasWanted
totalGas = newTotalGas
txs = append(txs, memTx.tx)
}
return txs
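
The overflow-aware accumulation above can be illustrated with a small standalone sketch (plain Go, not the mempool types); a negative cap means unlimited, and the running total is only advanced after the comparison passes:

```go
package main

import "fmt"

type tx struct{ gasWanted int64 }

// reapByGas keeps transactions until adding the next one would exceed maxGas.
// A negative maxGas disables the check entirely.
func reapByGas(txs []tx, maxGas int64) []tx {
	var out []tx
	var totalGas int64
	for _, t := range txs {
		newTotalGas := totalGas + t.gasWanted
		if maxGas > -1 && newTotalGas > maxGas {
			return out
		}
		totalGas = newTotalGas
		out = append(out, t)
	}
	return out
}

func main() {
	fmt.Println(reapByGas([]tx{{3}, {4}, {5}}, 8)) // keeps only the first two txs
}
```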
@@ -548,13 +556,18 @@ func (mem *Mempool) Update(
// Remove committed transactions.
txsLeft := mem.removeTxs(txs)
// Recheck mempool txs if any txs were committed in the block
if mem.config.Recheck && len(txsLeft) > 0 {
mem.logger.Info("Recheck txs", "numtxs", len(txsLeft), "height", height)
mem.recheckTxs(txsLeft)
// At this point, mem.txs are being rechecked.
// mem.recheckCursor re-scans mem.txs and possibly removes some txs.
// Before mem.Reap(), we should wait for mem.recheckCursor to be nil.
// Either recheck non-committed txs to see if they became invalid
// or just notify there're some txs left.
if len(txsLeft) > 0 {
if mem.config.Recheck {
mem.logger.Info("Recheck txs", "numtxs", len(txsLeft), "height", height)
mem.recheckTxs(txsLeft)
// At this point, mem.txs are being rechecked.
// mem.recheckCursor re-scans mem.txs and possibly removes some txs.
// Before mem.Reap(), we should wait for mem.recheckCursor to be nil.
} else {
mem.notifyTxsAvailable()
}
}
// Update metrics
@@ -663,7 +676,7 @@ func (cache *mapTxCache) Push(tx types.Tx) bool {
// Use the tx hash in the cache
txHash := sha256.Sum256(tx)
if moved, exists := cache.map_[txHash]; exists {
cache.list.MoveToFront(moved)
cache.list.MoveToBack(moved)
return false
}
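
The `MoveToBack` change matters because the cache evicts entries from the front when it is full; a tiny `container/list` sketch, with made-up tx names, shows the effect:

```go
package main

import (
	"container/list"
	"fmt"
)

func main() {
	l := list.New()
	txA := l.PushBack("txA")
	l.PushBack("txB")

	// txA is seen again: moving it to the back means it will be evicted last,
	// since eviction pops elements from the front.
	l.MoveToBack(txA)

	fmt.Println(l.Front().Value) // "txB" is now first in line for eviction
}
```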

View File

@@ -7,7 +7,6 @@ import (
"net"
"net/http"
_ "net/http/pprof"
"os"
"strings"
"time"
@@ -87,26 +86,8 @@ func DefaultNewNode(config *cfg.Config, logger log.Logger) (*Node, error) {
if err != nil {
return nil, err
}
// Convert old PrivValidator if it exists.
oldPrivVal := config.OldPrivValidatorFile()
newPrivValKey := config.PrivValidatorKeyFile()
newPrivValState := config.PrivValidatorStateFile()
if _, err := os.Stat(oldPrivVal); !os.IsNotExist(err) {
oldPV, err := privval.LoadOldFilePV(oldPrivVal)
if err != nil {
return nil, fmt.Errorf("Error reading OldPrivValidator from %v: %v\n", oldPrivVal, err)
}
logger.Info("Upgrading PrivValidator file",
"old", oldPrivVal,
"newKey", newPrivValKey,
"newState", newPrivValState,
)
oldPV.Upgrade(newPrivValKey, newPrivValState)
}
return NewNode(config,
privval.LoadOrGenFilePV(newPrivValKey, newPrivValState),
privval.LoadOrGenFilePV(config.PrivValidatorFile()),
nodeKey,
proxy.DefaultClientCreator(config.ProxyApp, config.ABCI, config.DBDir()),
DefaultGenesisDocProviderFunc(config),
@@ -229,13 +210,18 @@ func NewNode(config *cfg.Config,
// what happened during block replay).
state = sm.LoadState(stateDB)
// Ensure the state's block version matches that of the software.
// Log the version info.
logger.Info("Version info",
"software", version.TMCoreSemVer,
"block", version.BlockProtocol,
"p2p", version.P2PProtocol,
)
// If the state and software differ in block version, at least log it.
if state.Version.Consensus.Block != version.BlockProtocol {
return nil, fmt.Errorf(
"Block version of the software does not match that of the state.\n"+
"Got version.BlockProtocol=%v, state.Version.Consensus.Block=%v",
version.BlockProtocol,
state.Version.Consensus.Block,
logger.Info("Software and state have different block protocols",
"software", version.BlockProtocol,
"state", state.Version.Consensus.Block,
)
}
@@ -362,20 +348,21 @@ func NewNode(config *cfg.Config,
indexerService := txindex.NewIndexerService(txIndexer, eventBus)
indexerService.SetLogger(logger.With("module", "txindex"))
var (
p2pLogger = logger.With("module", "p2p")
nodeInfo = makeNodeInfo(
config,
nodeKey.ID(),
txIndexer,
genDoc.ChainID,
p2p.NewProtocolVersion(
version.P2PProtocol, // global
state.Version.Consensus.Block,
state.Version.Consensus.App,
),
)
p2pLogger := logger.With("module", "p2p")
nodeInfo, err := makeNodeInfo(
config,
nodeKey.ID(),
txIndexer,
genDoc.ChainID,
p2p.NewProtocolVersion(
version.P2PProtocol, // global
state.Version.Consensus.Block,
state.Version.Consensus.App,
),
)
if err != nil {
return nil, err
}
// Setup Transport.
var (
@@ -473,7 +460,7 @@ func NewNode(config *cfg.Config,
Seeds: splitAndTrimEmpty(config.P2P.Seeds, ",", " "),
SeedMode: config.P2P.SeedMode,
})
pexReactor.SetLogger(p2pLogger)
pexReactor.SetLogger(logger.With("module", "pex"))
sw.AddReactor("PEX", pexReactor)
}
@@ -796,7 +783,7 @@ func makeNodeInfo(
txIndexer txindex.TxIndexer,
chainID string,
protocolVersion p2p.ProtocolVersion,
) p2p.NodeInfo {
) (p2p.NodeInfo, error) {
txIndexerStatus := "on"
if _, ok := txIndexer.(*null.TxIndex); ok {
txIndexerStatus = "off"
@@ -831,7 +818,8 @@ func makeNodeInfo(
nodeInfo.ListenAddr = lAddr
return nodeInfo
err := nodeInfo.Validate()
return nodeInfo, err
}
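
The shape of the change is: build the value, call Validate, and hand any error to the caller instead of returning a possibly invalid value. A standalone sketch of the pattern, with illustrative names rather than the Tendermint types:

```go
package main

import (
	"errors"
	"fmt"
)

type nodeInfo struct{ listenAddr string }

func (ni nodeInfo) Validate() error {
	if ni.listenAddr == "" {
		return errors.New("empty ListenAddr")
	}
	return nil
}

// makeNodeInfo constructs the value and surfaces validation errors
// rather than panicking or silently returning broken data.
func makeNodeInfo(addr string) (nodeInfo, error) {
	ni := nodeInfo{listenAddr: addr}
	err := ni.Validate()
	return ni, err
}

func main() {
	if _, err := makeNodeInfo(""); err != nil {
		fmt.Println("invalid node info:", err)
	}
}
```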
//------------------------------------------------------------------------------

View File

@@ -160,6 +160,7 @@ func NewMConnectionWithConfig(conn net.Conn, chDescs []*ChannelDescriptor, onRec
onReceive: onReceive,
onError: onError,
config: config,
created: time.Now(),
}
// Create channels

View File

@@ -10,7 +10,6 @@ import (
"net"
"time"
// forked to github.com/tendermint/crypto
"golang.org/x/crypto/chacha20poly1305"
"golang.org/x/crypto/curve25519"
"golang.org/x/crypto/nacl/box"

View File

@@ -175,6 +175,9 @@ func (na *NetAddress) Same(other interface{}) bool {
// String representation: <ID>@<IP>:<PORT>
func (na *NetAddress) String() string {
if na == nil {
return "<nil-NetAddress>"
}
if na.str == "" {
addrStr := na.DialString()
if na.ID != "" {
@@ -186,6 +189,9 @@ func (na *NetAddress) String() string {
}
func (na *NetAddress) DialString() string {
if na == nil {
return "<nil-NetAddress>"
}
return net.JoinHostPort(
na.IP.String(),
strconv.FormatUint(uint64(na.Port), 10),
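
The nil checks added to `String` and `DialString` rely on Go's nil-receiver semantics; a standalone sketch (illustrative type, not `NetAddress`):

```go
package main

import "fmt"

type addr struct{ host string }

// String is safe to call on a nil *addr because the method itself
// guards against the nil receiver before touching any fields.
func (a *addr) String() string {
	if a == nil {
		return "<nil-addr>"
	}
	return a.host
}

func main() {
	var a *addr
	fmt.Println(a.String()) // prints "<nil-addr>" instead of panicking
}
```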

View File

@@ -36,7 +36,7 @@ type nodeInfoAddress interface {
// nodeInfoTransport validates a nodeInfo and checks
// our compatibility with it. It's for use in the handshake.
type nodeInfoTransport interface {
ValidateBasic() error
Validate() error
CompatibleWith(other NodeInfo) error
}
@@ -103,7 +103,7 @@ func (info DefaultNodeInfo) ID() ID {
return info.ID_
}
// ValidateBasic checks the self-reported DefaultNodeInfo is safe.
// Validate checks the self-reported DefaultNodeInfo is safe.
// It returns an error if there
// are too many Channels, if there are any duplicate Channels,
// if the ListenAddr is malformed, or if the ListenAddr is a host name
@@ -116,7 +116,7 @@ func (info DefaultNodeInfo) ID() ID {
// International clients could then use punycode (or we could use
// url-encoding), and we just need to be careful with how we handle that in our
// clients. (e.g. off by default).
func (info DefaultNodeInfo) ValidateBasic() error {
func (info DefaultNodeInfo) Validate() error {
// ID is already validated.

View File

@@ -12,7 +12,7 @@ func TestNodeInfoValidate(t *testing.T) {
// empty fails
ni := DefaultNodeInfo{}
assert.Error(t, ni.ValidateBasic())
assert.Error(t, ni.Validate())
channels := make([]byte, maxNumChannels)
for i := 0; i < maxNumChannels; i++ {
@@ -68,13 +68,13 @@ func TestNodeInfoValidate(t *testing.T) {
// test case passes
ni = testNodeInfo(nodeKey.ID(), name).(DefaultNodeInfo)
ni.Channels = channels
assert.NoError(t, ni.ValidateBasic())
assert.NoError(t, ni.Validate())
for _, tc := range testCases {
ni := testNodeInfo(nodeKey.ID(), name).(DefaultNodeInfo)
ni.Channels = channels
tc.malleateNodeInfo(&ni)
err := ni.ValidateBasic()
err := ni.Validate()
if tc.expectErr {
assert.Error(t, err, tc.testName)
} else {

View File

@@ -98,13 +98,15 @@ func (ps *PeerSet) Get(peerKey ID) Peer {
}
// Remove discards peer by its Key, if the peer was previously memoized.
func (ps *PeerSet) Remove(peer Peer) {
// It returns true if the peer was removed, and false if it was not found
// in the set.
func (ps *PeerSet) Remove(peer Peer) bool {
ps.mtx.Lock()
defer ps.mtx.Unlock()
item := ps.lookup[peer.ID()]
if item == nil {
return
return false
}
index := item.index
@@ -116,7 +118,7 @@ func (ps *PeerSet) Remove(peer Peer) {
if index == len(ps.list)-1 {
ps.list = newList
delete(ps.lookup, peer.ID())
return
return true
}
// Replace the popped item with the last item in the old list.
@@ -127,6 +129,7 @@ func (ps *PeerSet) Remove(peer Peer) {
lastPeerItem.index = index
ps.list = newList
delete(ps.lookup, peer.ID())
return true
}
// Size returns the number of unique items in the peerSet.

View File

@@ -60,13 +60,15 @@ func TestPeerSetAddRemoveOne(t *testing.T) {
n := len(peerList)
// 1. Test removing from the front
for i, peerAtFront := range peerList {
peerSet.Remove(peerAtFront)
removed := peerSet.Remove(peerAtFront)
assert.True(t, removed)
wantSize := n - i - 1
for j := 0; j < 2; j++ {
assert.Equal(t, false, peerSet.Has(peerAtFront.ID()), "#%d Run #%d: failed to remove peer", i, j)
assert.Equal(t, wantSize, peerSet.Size(), "#%d Run #%d: failed to remove peer and decrement size", i, j)
// Test removing the now non-existent element
peerSet.Remove(peerAtFront)
removed := peerSet.Remove(peerAtFront)
assert.False(t, removed)
}
}
@@ -81,7 +83,8 @@ func TestPeerSetAddRemoveOne(t *testing.T) {
// b) In reverse, remove each element
for i := n - 1; i >= 0; i-- {
peerAtEnd := peerList[i]
peerSet.Remove(peerAtEnd)
removed := peerSet.Remove(peerAtEnd)
assert.True(t, removed)
assert.Equal(t, false, peerSet.Has(peerAtEnd.ID()), "#%d: failed to remove item at end", i)
assert.Equal(t, i, peerSet.Size(), "#%d: differing sizes after peerSet.Remove(atEndPeer)", i)
}
@@ -105,7 +108,8 @@ func TestPeerSetAddRemoveMany(t *testing.T) {
}
for i, peer := range peers {
peerSet.Remove(peer)
removed := peerSet.Remove(peer)
assert.True(t, removed)
if peerSet.Has(peer.ID()) {
t.Errorf("Failed to remove peer")
}

View File

@@ -162,26 +162,29 @@ func (a *addrBook) FilePath() string {
// AddOurAddress adds one of our addresses.
func (a *addrBook) AddOurAddress(addr *p2p.NetAddress) {
a.Logger.Info("Add our address to book", "addr", addr)
a.mtx.Lock()
defer a.mtx.Unlock()
a.Logger.Info("Add our address to book", "addr", addr)
a.ourAddrs[addr.String()] = struct{}{}
a.mtx.Unlock()
}
// OurAddress returns true if it is our address.
func (a *addrBook) OurAddress(addr *p2p.NetAddress) bool {
a.mtx.Lock()
defer a.mtx.Unlock()
_, ok := a.ourAddrs[addr.String()]
a.mtx.Unlock()
return ok
}
func (a *addrBook) AddPrivateIDs(IDs []string) {
a.mtx.Lock()
defer a.mtx.Unlock()
for _, id := range IDs {
a.privateIDs[p2p.ID(id)] = struct{}{}
}
a.mtx.Unlock()
}
// AddAddress implements AddrBook
@@ -191,6 +194,7 @@ func (a *addrBook) AddPrivateIDs(IDs []string) {
func (a *addrBook) AddAddress(addr *p2p.NetAddress, src *p2p.NetAddress) error {
a.mtx.Lock()
defer a.mtx.Unlock()
return a.addAddress(addr, src)
}
@@ -198,6 +202,7 @@ func (a *addrBook) AddAddress(addr *p2p.NetAddress, src *p2p.NetAddress) error {
func (a *addrBook) RemoveAddress(addr *p2p.NetAddress) {
a.mtx.Lock()
defer a.mtx.Unlock()
ka := a.addrLookup[addr.ID]
if ka == nil {
return
@@ -211,14 +216,16 @@ func (a *addrBook) RemoveAddress(addr *p2p.NetAddress) {
func (a *addrBook) IsGood(addr *p2p.NetAddress) bool {
a.mtx.Lock()
defer a.mtx.Unlock()
return a.addrLookup[addr.ID].isOld()
}
// HasAddress returns true if the address is in the book.
func (a *addrBook) HasAddress(addr *p2p.NetAddress) bool {
a.mtx.Lock()
defer a.mtx.Unlock()
ka := a.addrLookup[addr.ID]
a.mtx.Unlock()
return ka != nil
}
@@ -292,6 +299,7 @@ func (a *addrBook) PickAddress(biasTowardsNewAddrs int) *p2p.NetAddress {
func (a *addrBook) MarkGood(addr *p2p.NetAddress) {
a.mtx.Lock()
defer a.mtx.Unlock()
ka := a.addrLookup[addr.ID]
if ka == nil {
return
@@ -306,6 +314,7 @@ func (a *addrBook) MarkGood(addr *p2p.NetAddress) {
func (a *addrBook) MarkAttempt(addr *p2p.NetAddress) {
a.mtx.Lock()
defer a.mtx.Unlock()
ka := a.addrLookup[addr.ID]
if ka == nil {
return
@@ -461,12 +470,13 @@ ADDRS_LOOP:
// ListOfKnownAddresses returns the new and old addresses.
func (a *addrBook) ListOfKnownAddresses() []*knownAddress {
addrs := []*knownAddress{}
a.mtx.Lock()
defer a.mtx.Unlock()
addrs := []*knownAddress{}
for _, addr := range a.addrLookup {
addrs = append(addrs, addr.copy())
}
a.mtx.Unlock()
return addrs
}
@@ -476,6 +486,7 @@ func (a *addrBook) ListOfKnownAddresses() []*knownAddress {
func (a *addrBook) Size() int {
a.mtx.Lock()
defer a.mtx.Unlock()
return a.size()
}
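
The addrbook change swaps manual `Unlock` calls for `defer`, which keeps the mutex released on every return path; a minimal sketch of the pattern (illustrative type, not the addrbook):

```go
package main

import (
	"fmt"
	"sync"
)

type book struct {
	mtx   sync.Mutex
	addrs map[string]bool
}

// has acquires the lock and defers the unlock, so adding early returns
// later cannot accidentally leave the mutex held.
func (b *book) has(addr string) bool {
	b.mtx.Lock()
	defer b.mtx.Unlock()
	return b.addrs[addr]
}

func main() {
	b := &book{addrs: map[string]bool{"1.2.3.4:26656": true}}
	fmt.Println(b.has("1.2.3.4:26656"))
}
```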

View File

@@ -211,7 +211,9 @@ func (sw *Switch) OnStop() {
// Stop peers
for _, p := range sw.peers.List() {
p.Stop()
sw.peers.Remove(p)
if sw.peers.Remove(p) {
sw.metrics.Peers.Add(float64(-1))
}
}
// Stop reactors
@@ -299,8 +301,9 @@ func (sw *Switch) StopPeerGracefully(peer Peer) {
}
func (sw *Switch) stopAndRemovePeer(peer Peer, reason interface{}) {
sw.peers.Remove(peer)
sw.metrics.Peers.Add(float64(-1))
if sw.peers.Remove(peer) {
sw.metrics.Peers.Add(float64(-1))
}
peer.Stop()
for _, reactor := range sw.reactors {
reactor.RemovePeer(peer, reason)
@@ -505,6 +508,12 @@ func (sw *Switch) acceptRoutine() {
"err", err,
"numPeers", sw.peers.Size(),
)
// We could instead have a retry loop around the acceptRoutine,
// but that would need to stop and let the node shut down eventually.
// So might as well panic and let process managers restart the node.
// There's no point in letting the node run without the acceptRoutine,
// since it won't be able to accept new connections.
panic(fmt.Errorf("accept routine exited: %v", err))
}
break

View File

@@ -3,10 +3,17 @@ package p2p
import (
"bytes"
"fmt"
"io/ioutil"
"net/http"
"net/http/httptest"
"regexp"
"strconv"
"sync"
"testing"
"time"
stdprometheus "github.com/prometheus/client_golang/prometheus"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -335,6 +342,54 @@ func TestSwitchStopsNonPersistentPeerOnError(t *testing.T) {
assert.False(p.IsRunning())
}
func TestSwitchStopPeerForError(t *testing.T) {
s := httptest.NewServer(stdprometheus.UninstrumentedHandler())
defer s.Close()
scrapeMetrics := func() string {
resp, _ := http.Get(s.URL)
buf, _ := ioutil.ReadAll(resp.Body)
return string(buf)
}
namespace, subsystem, name := config.TestInstrumentationConfig().Namespace, MetricsSubsystem, "peers"
re := regexp.MustCompile(namespace + `_` + subsystem + `_` + name + ` ([0-9\.]+)`)
peersMetricValue := func() float64 {
matches := re.FindStringSubmatch(scrapeMetrics())
f, _ := strconv.ParseFloat(matches[1], 64)
return f
}
p2pMetrics := PrometheusMetrics(namespace)
// make two connected switches
sw1, sw2 := MakeSwitchPair(t, func(i int, sw *Switch) *Switch {
// set metrics on sw1
if i == 0 {
opt := WithMetrics(p2pMetrics)
opt(sw)
}
return initSwitchFunc(i, sw)
})
assert.Equal(t, len(sw1.Peers().List()), 1)
assert.EqualValues(t, 1, peersMetricValue())
// send messages to the peer from sw1
p := sw1.Peers().List()[0]
p.Send(0x1, []byte("here's a message to send"))
// stop sw2. this should cause the p to fail,
// which results in calling StopPeerForError internally
sw2.Stop()
// now call StopPeerForError explicitly, eg. from a reactor
sw1.StopPeerForError(p, fmt.Errorf("some err"))
assert.Equal(t, len(sw1.Peers().List()), 0)
assert.EqualValues(t, 0, peersMetricValue())
}
func TestSwitchReconnectsToPersistentPeer(t *testing.T) {
assert, require := assert.New(t), require.New(t)

View File

@@ -24,7 +24,7 @@ type mockNodeInfo struct {
func (ni mockNodeInfo) ID() ID { return ni.addr.ID }
func (ni mockNodeInfo) NetAddress() *NetAddress { return ni.addr }
func (ni mockNodeInfo) ValidateBasic() error { return nil }
func (ni mockNodeInfo) Validate() error { return nil }
func (ni mockNodeInfo) CompatibleWith(other NodeInfo) error { return nil }
func AddPeerToSwitch(sw *Switch, peer Peer) {
@@ -184,7 +184,7 @@ func MakeSwitch(
// TODO: let the config be passed in?
sw := initSwitch(i, NewSwitch(cfg, t, opts...))
sw.SetLogger(log.TestingLogger())
sw.SetLogger(log.TestingLogger().With("switch", i))
sw.SetNodeKey(&nodeKey)
ni := nodeInfo.(DefaultNodeInfo)

View File

@@ -350,7 +350,7 @@ func (mt *MultiplexTransport) upgrade(
}
}
if err := nodeInfo.ValidateBasic(); err != nil {
if err := nodeInfo.Validate(); err != nil {
return nil, nil, ErrRejected{
conn: c,
err: err,

View File

@@ -1,80 +0,0 @@
package privval
import (
"io/ioutil"
"os"
"github.com/tendermint/tendermint/crypto"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/types"
)
// OldFilePV is the old version of the FilePV, pre v0.28.0.
type OldFilePV struct {
Address types.Address `json:"address"`
PubKey crypto.PubKey `json:"pub_key"`
LastHeight int64 `json:"last_height"`
LastRound int `json:"last_round"`
LastStep int8 `json:"last_step"`
LastSignature []byte `json:"last_signature,omitempty"`
LastSignBytes cmn.HexBytes `json:"last_signbytes,omitempty"`
PrivKey crypto.PrivKey `json:"priv_key"`
filePath string
}
// LoadOldFilePV loads an OldFilePV from the filePath.
func LoadOldFilePV(filePath string) (*OldFilePV, error) {
pvJSONBytes, err := ioutil.ReadFile(filePath)
if err != nil {
return nil, err
}
pv := &OldFilePV{}
err = cdc.UnmarshalJSON(pvJSONBytes, &pv)
if err != nil {
return nil, err
}
// overwrite pubkey and address for convenience
pv.PubKey = pv.PrivKey.PubKey()
pv.Address = pv.PubKey.Address()
pv.filePath = filePath
return pv, nil
}
// Upgrade converts the OldFilePV to the new FilePV, separating the immutable and mutable components,
// and persisting them to the keyFilePath and stateFilePath, respectively.
// It renames the original file by adding ".bak".
func (oldFilePV *OldFilePV) Upgrade(keyFilePath, stateFilePath string) *FilePV {
privKey := oldFilePV.PrivKey
pvKey := FilePVKey{
PrivKey: privKey,
PubKey: privKey.PubKey(),
Address: privKey.PubKey().Address(),
filePath: keyFilePath,
}
pvState := FilePVLastSignState{
Height: oldFilePV.LastHeight,
Round: oldFilePV.LastRound,
Step: oldFilePV.LastStep,
Signature: oldFilePV.LastSignature,
SignBytes: oldFilePV.LastSignBytes,
filePath: stateFilePath,
}
// Save the new PV files
pv := &FilePV{
Key: pvKey,
LastSignState: pvState,
}
pv.Save()
// Rename the old PV file
err := os.Rename(oldFilePV.filePath, oldFilePV.filePath+".bak")
if err != nil {
panic(err)
}
return pv
}

View File

@@ -5,6 +5,7 @@ import (
"errors"
"fmt"
"io/ioutil"
"sync"
"time"
"github.com/tendermint/tendermint/crypto"
@@ -34,195 +35,126 @@ func voteToStep(vote *types.Vote) int8 {
}
}
//-------------------------------------------------------------------------------
// FilePVKey stores the immutable part of PrivValidator.
type FilePVKey struct {
Address types.Address `json:"address"`
PubKey crypto.PubKey `json:"pub_key"`
PrivKey crypto.PrivKey `json:"priv_key"`
filePath string
}
// Save persists the FilePVKey to its filePath.
func (pvKey FilePVKey) Save() {
outFile := pvKey.filePath
if outFile == "" {
panic("Cannot save PrivValidator key: filePath not set")
}
jsonBytes, err := cdc.MarshalJSONIndent(pvKey, "", " ")
if err != nil {
panic(err)
}
err = cmn.WriteFileAtomic(outFile, jsonBytes, 0600)
if err != nil {
panic(err)
}
}
//-------------------------------------------------------------------------------
// FilePVLastSignState stores the mutable part of PrivValidator.
type FilePVLastSignState struct {
Height int64 `json:"height"`
Round int `json:"round"`
Step int8 `json:"step"`
Signature []byte `json:"signature,omitempty"`
SignBytes cmn.HexBytes `json:"signbytes,omitempty"`
filePath string
}
// CheckHRS checks the given height, round, step (HRS) against that of the
// FilePVLastSignState. It returns an error if the arguments constitute a regression,
// or if they match but the SignBytes are empty.
// The returned boolean indicates whether the last Signature should be reused -
// it returns true if the HRS matches the arguments and the SignBytes are not empty (indicating
// we have already signed for this HRS, and can reuse the existing signature).
// It panics if the HRS matches the arguments, there's a SignBytes, but no Signature.
func (lss *FilePVLastSignState) CheckHRS(height int64, round int, step int8) (bool, error) {
if lss.Height > height {
return false, errors.New("Height regression")
}
if lss.Height == height {
if lss.Round > round {
return false, errors.New("Round regression")
}
if lss.Round == round {
if lss.Step > step {
return false, errors.New("Step regression")
} else if lss.Step == step {
if lss.SignBytes != nil {
if lss.Signature == nil {
panic("pv: Signature is nil but SignBytes is not!")
}
return true, nil
}
return false, errors.New("No SignBytes found")
}
}
}
return false, nil
}
// Save persists the FilePvLastSignState to its filePath.
func (lss *FilePVLastSignState) Save() {
outFile := lss.filePath
if outFile == "" {
panic("Cannot save FilePVLastSignState: filePath not set")
}
jsonBytes, err := cdc.MarshalJSONIndent(lss, "", " ")
if err != nil {
panic(err)
}
err = cmn.WriteFileAtomic(outFile, jsonBytes, 0600)
if err != nil {
panic(err)
}
}
//-------------------------------------------------------------------------------
// FilePV implements PrivValidator using data persisted to disk
// to prevent double signing.
// NOTE: the directories containing pv.Key.filePath and pv.LastSignState.filePath must already exist.
// NOTE: the directory containing the pv.filePath must already exist.
// It includes the LastSignature and LastSignBytes so we don't lose the signature
// if the process crashes after signing but before the resulting consensus message is processed.
type FilePV struct {
Key FilePVKey
LastSignState FilePVLastSignState
}
Address types.Address `json:"address"`
PubKey crypto.PubKey `json:"pub_key"`
LastHeight int64 `json:"last_height"`
LastRound int `json:"last_round"`
LastStep int8 `json:"last_step"`
LastSignature []byte `json:"last_signature,omitempty"`
LastSignBytes cmn.HexBytes `json:"last_signbytes,omitempty"`
PrivKey crypto.PrivKey `json:"priv_key"`
// GenFilePV generates a new validator with randomly generated private key
// and sets the filePaths, but does not call Save().
func GenFilePV(keyFilePath, stateFilePath string) *FilePV {
privKey := ed25519.GenPrivKey()
return &FilePV{
Key: FilePVKey{
Address: privKey.PubKey().Address(),
PubKey: privKey.PubKey(),
PrivKey: privKey,
filePath: keyFilePath,
},
LastSignState: FilePVLastSignState{
Step: stepNone,
filePath: stateFilePath,
},
}
}
// LoadFilePV loads a FilePV from the filePaths. The FilePV handles double
// signing prevention by persisting data to the stateFilePath. If the filePaths
// do not exist, the FilePV must be created manually and saved.
func LoadFilePV(keyFilePath, stateFilePath string) *FilePV {
keyJSONBytes, err := ioutil.ReadFile(keyFilePath)
if err != nil {
cmn.Exit(err.Error())
}
pvKey := FilePVKey{}
err = cdc.UnmarshalJSON(keyJSONBytes, &pvKey)
if err != nil {
cmn.Exit(fmt.Sprintf("Error reading PrivValidator key from %v: %v\n", keyFilePath, err))
}
// overwrite pubkey and address for convenience
pvKey.PubKey = pvKey.PrivKey.PubKey()
pvKey.Address = pvKey.PubKey.Address()
pvKey.filePath = keyFilePath
stateJSONBytes, err := ioutil.ReadFile(stateFilePath)
if err != nil {
cmn.Exit(err.Error())
}
pvState := FilePVLastSignState{}
err = cdc.UnmarshalJSON(stateJSONBytes, &pvState)
if err != nil {
cmn.Exit(fmt.Sprintf("Error reading PrivValidator state from %v: %v\n", stateFilePath, err))
}
pvState.filePath = stateFilePath
return &FilePV{
Key: pvKey,
LastSignState: pvState,
}
}
// LoadOrGenFilePV loads a FilePV from the given filePaths
// or else generates a new one and saves it to the filePaths.
func LoadOrGenFilePV(keyFilePath, stateFilePath string) *FilePV {
var pv *FilePV
if cmn.FileExists(keyFilePath) {
pv = LoadFilePV(keyFilePath, stateFilePath)
} else {
pv = GenFilePV(keyFilePath, stateFilePath)
pv.Save()
}
return pv
// For persistence.
// Overloaded for testing.
filePath string
mtx sync.Mutex
}
// GetAddress returns the address of the validator.
// Implements PrivValidator.
func (pv *FilePV) GetAddress() types.Address {
return pv.Key.Address
return pv.Address
}
// GetPubKey returns the public key of the validator.
// Implements PrivValidator.
func (pv *FilePV) GetPubKey() crypto.PubKey {
return pv.Key.PubKey
return pv.PubKey
}
// GenFilePV generates a new validator with randomly generated private key
// and sets the filePath, but does not call Save().
func GenFilePV(filePath string) *FilePV {
privKey := ed25519.GenPrivKey()
return &FilePV{
Address: privKey.PubKey().Address(),
PubKey: privKey.PubKey(),
PrivKey: privKey,
LastStep: stepNone,
filePath: filePath,
}
}
// LoadFilePV loads a FilePV from the filePath. The FilePV handles double
// signing prevention by persisting data to the filePath. If the filePath does
// not exist, the FilePV must be created manually and saved.
func LoadFilePV(filePath string) *FilePV {
pvJSONBytes, err := ioutil.ReadFile(filePath)
if err != nil {
cmn.Exit(err.Error())
}
pv := &FilePV{}
err = cdc.UnmarshalJSON(pvJSONBytes, &pv)
if err != nil {
cmn.Exit(fmt.Sprintf("Error reading PrivValidator from %v: %v\n", filePath, err))
}
// overwrite pubkey and address for convenience
pv.PubKey = pv.PrivKey.PubKey()
pv.Address = pv.PubKey.Address()
pv.filePath = filePath
return pv
}
// LoadOrGenFilePV loads a FilePV from the given filePath
// or else generates a new one and saves it to the filePath.
func LoadOrGenFilePV(filePath string) *FilePV {
var pv *FilePV
if cmn.FileExists(filePath) {
pv = LoadFilePV(filePath)
} else {
pv = GenFilePV(filePath)
pv.Save()
}
return pv
}
// Save persists the FilePV to disk.
func (pv *FilePV) Save() {
pv.mtx.Lock()
defer pv.mtx.Unlock()
pv.save()
}
func (pv *FilePV) save() {
outFile := pv.filePath
if outFile == "" {
panic("Cannot save PrivValidator: filePath not set")
}
jsonBytes, err := cdc.MarshalJSONIndent(pv, "", " ")
if err != nil {
panic(err)
}
err = cmn.WriteFileAtomic(outFile, jsonBytes, 0600)
if err != nil {
panic(err)
}
}
// Reset resets all fields in the FilePV.
// NOTE: Unsafe!
func (pv *FilePV) Reset() {
var sig []byte
pv.LastHeight = 0
pv.LastRound = 0
pv.LastStep = 0
pv.LastSignature = sig
pv.LastSignBytes = nil
pv.Save()
}
// SignVote signs a canonical representation of the vote, along with the
// chainID. Implements PrivValidator.
func (pv *FilePV) SignVote(chainID string, vote *types.Vote) error {
pv.mtx.Lock()
defer pv.mtx.Unlock()
if err := pv.signVote(chainID, vote); err != nil {
return fmt.Errorf("Error signing vote: %v", err)
}
@@ -232,63 +164,65 @@ func (pv *FilePV) SignVote(chainID string, vote *types.Vote) error {
// SignProposal signs a canonical representation of the proposal, along with
// the chainID. Implements PrivValidator.
func (pv *FilePV) SignProposal(chainID string, proposal *types.Proposal) error {
pv.mtx.Lock()
defer pv.mtx.Unlock()
if err := pv.signProposal(chainID, proposal); err != nil {
return fmt.Errorf("Error signing proposal: %v", err)
}
return nil
}
// Save persists the FilePV to disk.
func (pv *FilePV) Save() {
pv.Key.Save()
pv.LastSignState.Save()
}
// returns error if HRS regression or no LastSignBytes. returns true if HRS is unchanged
func (pv *FilePV) checkHRS(height int64, round int, step int8) (bool, error) {
if pv.LastHeight > height {
return false, errors.New("Height regression")
}
// Reset resets all fields in the FilePV.
// NOTE: Unsafe!
func (pv *FilePV) Reset() {
var sig []byte
pv.LastSignState.Height = 0
pv.LastSignState.Round = 0
pv.LastSignState.Step = 0
pv.LastSignState.Signature = sig
pv.LastSignState.SignBytes = nil
pv.Save()
}
if pv.LastHeight == height {
if pv.LastRound > round {
return false, errors.New("Round regression")
}
// String returns a string representation of the FilePV.
func (pv *FilePV) String() string {
return fmt.Sprintf("PrivValidator{%v LH:%v, LR:%v, LS:%v}", pv.GetAddress(), pv.LastSignState.Height, pv.LastSignState.Round, pv.LastSignState.Step)
if pv.LastRound == round {
if pv.LastStep > step {
return false, errors.New("Step regression")
} else if pv.LastStep == step {
if pv.LastSignBytes != nil {
if pv.LastSignature == nil {
panic("pv: LastSignature is nil but LastSignBytes is not!")
}
return true, nil
}
return false, errors.New("No LastSignature found")
}
}
}
return false, nil
}
//------------------------------------------------------------------------------------
// signVote checks if the vote is good to sign and sets the vote signature.
// It may need to set the timestamp as well if the vote is otherwise the same as
// a previously signed vote (ie. we crashed after signing but before the vote hit the WAL).
func (pv *FilePV) signVote(chainID string, vote *types.Vote) error {
height, round, step := vote.Height, vote.Round, voteToStep(vote)
signBytes := vote.SignBytes(chainID)
lss := pv.LastSignState
sameHRS, err := lss.CheckHRS(height, round, step)
sameHRS, err := pv.checkHRS(height, round, step)
if err != nil {
return err
}
signBytes := vote.SignBytes(chainID)
// We might crash before writing to the wal,
// causing us to try to re-sign for the same HRS.
// If signbytes are the same, use the last signature.
// If they only differ by timestamp, use last timestamp and signature
// Otherwise, return error
if sameHRS {
if bytes.Equal(signBytes, lss.SignBytes) {
vote.Signature = lss.Signature
} else if timestamp, ok := checkVotesOnlyDifferByTimestamp(lss.SignBytes, signBytes); ok {
if bytes.Equal(signBytes, pv.LastSignBytes) {
vote.Signature = pv.LastSignature
} else if timestamp, ok := checkVotesOnlyDifferByTimestamp(pv.LastSignBytes, signBytes); ok {
vote.Timestamp = timestamp
vote.Signature = lss.Signature
vote.Signature = pv.LastSignature
} else {
err = fmt.Errorf("Conflicting data")
}
@@ -296,7 +230,7 @@ func (pv *FilePV) signVote(chainID string, vote *types.Vote) error {
}
// It passed the checks. Sign the vote
sig, err := pv.Key.PrivKey.Sign(signBytes)
sig, err := pv.PrivKey.Sign(signBytes)
if err != nil {
return err
}
@@ -310,27 +244,24 @@ func (pv *FilePV) signVote(chainID string, vote *types.Vote) error {
// a previously signed proposal ie. we crashed after signing but before the proposal hit the WAL).
func (pv *FilePV) signProposal(chainID string, proposal *types.Proposal) error {
height, round, step := proposal.Height, proposal.Round, stepPropose
signBytes := proposal.SignBytes(chainID)
lss := pv.LastSignState
sameHRS, err := lss.CheckHRS(height, round, step)
sameHRS, err := pv.checkHRS(height, round, step)
if err != nil {
return err
}
signBytes := proposal.SignBytes(chainID)
// We might crash before writing to the wal,
// causing us to try to re-sign for the same HRS.
// If signbytes are the same, use the last signature.
// If they only differ by timestamp, use last timestamp and signature
// Otherwise, return error
if sameHRS {
if bytes.Equal(signBytes, lss.SignBytes) {
proposal.Signature = lss.Signature
} else if timestamp, ok := checkProposalsOnlyDifferByTimestamp(lss.SignBytes, signBytes); ok {
if bytes.Equal(signBytes, pv.LastSignBytes) {
proposal.Signature = pv.LastSignature
} else if timestamp, ok := checkProposalsOnlyDifferByTimestamp(pv.LastSignBytes, signBytes); ok {
proposal.Timestamp = timestamp
proposal.Signature = lss.Signature
proposal.Signature = pv.LastSignature
} else {
err = fmt.Errorf("Conflicting data")
}
@@ -338,7 +269,7 @@ func (pv *FilePV) signProposal(chainID string, proposal *types.Proposal) error {
}
// It passed the checks. Sign the proposal
sig, err := pv.Key.PrivKey.Sign(signBytes)
sig, err := pv.PrivKey.Sign(signBytes)
if err != nil {
return err
}
@@ -351,15 +282,20 @@ func (pv *FilePV) signProposal(chainID string, proposal *types.Proposal) error {
func (pv *FilePV) saveSigned(height int64, round int, step int8,
signBytes []byte, sig []byte) {
pv.LastSignState.Height = height
pv.LastSignState.Round = round
pv.LastSignState.Step = step
pv.LastSignState.Signature = sig
pv.LastSignState.SignBytes = signBytes
pv.LastSignState.Save()
pv.LastHeight = height
pv.LastRound = round
pv.LastStep = step
pv.LastSignature = sig
pv.LastSignBytes = signBytes
pv.save()
}
//-----------------------------------------------------------------------------------------
// String returns a string representation of the FilePV.
func (pv *FilePV) String() string {
return fmt.Sprintf("PrivValidator{%v LH:%v, LR:%v, LS:%v}", pv.GetAddress(), pv.LastHeight, pv.LastRound, pv.LastStep)
}
//-------------------------------------
// returns the timestamp from the lastSignBytes.
// returns true if the only difference in the votes is their timestamp.
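
After this revert the validator key and last-sign state live in a single JSON file again; a hedged usage sketch of the restored API (the path is illustrative):

```go
package main

import (
	"fmt"

	"github.com/tendermint/tendermint/privval"
)

func main() {
	// Loads the file if it exists, otherwise generates a new key and saves it.
	pv := privval.LoadOrGenFilePV("/tmp/priv_validator.json")
	fmt.Println("validator address:", pv.GetAddress())
}
```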

View File

@@ -18,72 +18,36 @@ import (
func TestGenLoadValidator(t *testing.T) {
assert := assert.New(t)
tempKeyFile, err := ioutil.TempFile("", "priv_validator_key_")
tempFile, err := ioutil.TempFile("", "priv_validator_")
require.Nil(t, err)
tempStateFile, err := ioutil.TempFile("", "priv_validator_state_")
require.Nil(t, err)
privVal := GenFilePV(tempKeyFile.Name(), tempStateFile.Name())
privVal := GenFilePV(tempFile.Name())
height := int64(100)
privVal.LastSignState.Height = height
privVal.LastHeight = height
privVal.Save()
addr := privVal.GetAddress()
privVal = LoadFilePV(tempKeyFile.Name(), tempStateFile.Name())
privVal = LoadFilePV(tempFile.Name())
assert.Equal(addr, privVal.GetAddress(), "expected privval addr to be the same")
assert.Equal(height, privVal.LastSignState.Height, "expected privval.LastHeight to have been saved")
assert.Equal(height, privVal.LastHeight, "expected privval.LastHeight to have been saved")
}
func TestLoadOrGenValidator(t *testing.T) {
assert := assert.New(t)
tempKeyFile, err := ioutil.TempFile("", "priv_validator_key_")
tempFile, err := ioutil.TempFile("", "priv_validator_")
require.Nil(t, err)
tempStateFile, err := ioutil.TempFile("", "priv_validator_state_")
require.Nil(t, err)
tempKeyFilePath := tempKeyFile.Name()
if err := os.Remove(tempKeyFilePath); err != nil {
tempFilePath := tempFile.Name()
if err := os.Remove(tempFilePath); err != nil {
t.Error(err)
}
tempStateFilePath := tempStateFile.Name()
if err := os.Remove(tempStateFilePath); err != nil {
t.Error(err)
}
privVal := LoadOrGenFilePV(tempKeyFilePath, tempStateFilePath)
privVal := LoadOrGenFilePV(tempFilePath)
addr := privVal.GetAddress()
privVal = LoadOrGenFilePV(tempKeyFilePath, tempStateFilePath)
privVal = LoadOrGenFilePV(tempFilePath)
assert.Equal(addr, privVal.GetAddress(), "expected privval addr to be the same")
}
func TestUnmarshalValidatorState(t *testing.T) {
assert, require := assert.New(t), require.New(t)
// create some fixed values
serialized := `{
"height": "1",
"round": "1",
"step": 1
}`
val := FilePVLastSignState{}
err := cdc.UnmarshalJSON([]byte(serialized), &val)
require.Nil(err, "%+v", err)
// make sure the values match
assert.EqualValues(val.Height, 1)
assert.EqualValues(val.Round, 1)
assert.EqualValues(val.Step, 1)
// export it and make sure it is the same
out, err := cdc.MarshalJSON(val)
require.Nil(err, "%+v", err)
assert.JSONEq(serialized, string(out))
}
func TestUnmarshalValidatorKey(t *testing.T) {
func TestUnmarshalValidator(t *testing.T) {
assert, require := assert.New(t), require.New(t)
// create some fixed values
@@ -103,19 +67,22 @@ func TestUnmarshalValidatorKey(t *testing.T) {
"type": "tendermint/PubKeyEd25519",
"value": "%s"
},
"last_height": "0",
"last_round": "0",
"last_step": 0,
"priv_key": {
"type": "tendermint/PrivKeyEd25519",
"value": "%s"
}
}`, addr, pubB64, privB64)
val := FilePVKey{}
val := FilePV{}
err := cdc.UnmarshalJSON([]byte(serialized), &val)
require.Nil(err, "%+v", err)
// make sure the values match
assert.EqualValues(addr, val.Address)
assert.EqualValues(pubKey, val.PubKey)
assert.EqualValues(addr, val.GetAddress())
assert.EqualValues(pubKey, val.GetPubKey())
assert.EqualValues(privKey, val.PrivKey)
// export it and make sure it is the same
@@ -127,12 +94,9 @@ func TestUnmarshalValidatorKey(t *testing.T) {
func TestSignVote(t *testing.T) {
assert := assert.New(t)
tempKeyFile, err := ioutil.TempFile("", "priv_validator_key_")
tempFile, err := ioutil.TempFile("", "priv_validator_")
require.Nil(t, err)
tempStateFile, err := ioutil.TempFile("", "priv_validator_state_")
require.Nil(t, err)
privVal := GenFilePV(tempKeyFile.Name(), tempStateFile.Name())
privVal := GenFilePV(tempFile.Name())
block1 := types.BlockID{[]byte{1, 2, 3}, types.PartSetHeader{}}
block2 := types.BlockID{[]byte{3, 2, 1}, types.PartSetHeader{}}
@@ -140,7 +104,7 @@ func TestSignVote(t *testing.T) {
voteType := byte(types.PrevoteType)
// sign a vote for first time
vote := newVote(privVal.Key.Address, 0, height, round, voteType, block1)
vote := newVote(privVal.Address, 0, height, round, voteType, block1)
err = privVal.SignVote("mychainid", vote)
assert.NoError(err, "expected no error signing vote")
@@ -150,10 +114,10 @@ func TestSignVote(t *testing.T) {
// now try some bad votes
cases := []*types.Vote{
newVote(privVal.Key.Address, 0, height, round-1, voteType, block1), // round regression
newVote(privVal.Key.Address, 0, height-1, round, voteType, block1), // height regression
newVote(privVal.Key.Address, 0, height-2, round+4, voteType, block1), // height regression and different round
newVote(privVal.Key.Address, 0, height, round, voteType, block2), // different block
newVote(privVal.Address, 0, height, round-1, voteType, block1), // round regression
newVote(privVal.Address, 0, height-1, round, voteType, block1), // height regression
newVote(privVal.Address, 0, height-2, round+4, voteType, block1), // height regression and different round
newVote(privVal.Address, 0, height, round, voteType, block2), // different block
}
for _, c := range cases {
@@ -172,12 +136,9 @@ func TestSignVote(t *testing.T) {
func TestSignProposal(t *testing.T) {
assert := assert.New(t)
tempKeyFile, err := ioutil.TempFile("", "priv_validator_key_")
tempFile, err := ioutil.TempFile("", "priv_validator_")
require.Nil(t, err)
tempStateFile, err := ioutil.TempFile("", "priv_validator_state_")
require.Nil(t, err)
privVal := GenFilePV(tempKeyFile.Name(), tempStateFile.Name())
privVal := GenFilePV(tempFile.Name())
block1 := types.BlockID{[]byte{1, 2, 3}, types.PartSetHeader{5, []byte{1, 2, 3}}}
block2 := types.BlockID{[]byte{3, 2, 1}, types.PartSetHeader{10, []byte{3, 2, 1}}}
@@ -214,12 +175,9 @@ func TestSignProposal(t *testing.T) {
}
func TestDifferByTimestamp(t *testing.T) {
tempKeyFile, err := ioutil.TempFile("", "priv_validator_key_")
tempFile, err := ioutil.TempFile("", "priv_validator_")
require.Nil(t, err)
tempStateFile, err := ioutil.TempFile("", "priv_validator_state_")
require.Nil(t, err)
privVal := GenFilePV(tempKeyFile.Name(), tempStateFile.Name())
privVal := GenFilePV(tempFile.Name())
block1 := types.BlockID{[]byte{1, 2, 3}, types.PartSetHeader{5, []byte{1, 2, 3}}}
height, round := int64(10), 1
@@ -250,7 +208,7 @@ func TestDifferByTimestamp(t *testing.T) {
{
voteType := byte(types.PrevoteType)
blockID := types.BlockID{[]byte{1, 2, 3}, types.PartSetHeader{}}
vote := newVote(privVal.Key.Address, 0, height, round, voteType, blockID)
vote := newVote(privVal.Address, 0, height, round, voteType, blockID)
err := privVal.SignVote("mychainid", vote)
assert.NoError(t, err, "expected no error signing vote")

View File

@@ -109,6 +109,24 @@ func (c *HTTP) broadcastTX(route string, tx types.Tx) (*ctypes.ResultBroadcastTx
return result, nil
}
func (c *HTTP) UnconfirmedTxs(limit int) (*ctypes.ResultUnconfirmedTxs, error) {
result := new(ctypes.ResultUnconfirmedTxs)
_, err := c.rpc.Call("unconfirmed_txs", map[string]interface{}{"limit": limit}, result)
if err != nil {
return nil, errors.Wrap(err, "unconfirmed_txs")
}
return result, nil
}
func (c *HTTP) NumUnconfirmedTxs() (*ctypes.ResultUnconfirmedTxs, error) {
result := new(ctypes.ResultUnconfirmedTxs)
_, err := c.rpc.Call("num_unconfirmed_txs", map[string]interface{}{}, result)
if err != nil {
return nil, errors.Wrap(err, "num_unconfirmed_txs")
}
return result, nil
}
func (c *HTTP) NetInfo() (*ctypes.ResultNetInfo, error) {
result := new(ctypes.ResultNetInfo)
_, err := c.rpc.Call("net_info", map[string]interface{}{}, result)

View File

@@ -93,3 +93,9 @@ type NetworkClient interface {
type EventsClient interface {
types.EventBusSubscriber
}
// MempoolClient shows us data about current mempool state.
type MempoolClient interface {
UnconfirmedTxs(limit int) (*ctypes.ResultUnconfirmedTxs, error)
NumUnconfirmedTxs() (*ctypes.ResultUnconfirmedTxs, error)
}
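
A hedged usage sketch of the new mempool endpoints via the HTTP client, assuming a node is running locally on the default RPC address:

```go
package main

import (
	"fmt"

	"github.com/tendermint/tendermint/rpc/client"
)

func main() {
	c := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")

	res, err := c.NumUnconfirmedTxs()
	if err != nil {
		panic(err)
	}
	fmt.Println("unconfirmed txs:", res.N)

	txs, err := c.UnconfirmedTxs(10) // return at most 10 txs
	if err != nil {
		panic(err)
	}
	fmt.Println("returned:", len(txs.Txs))
}
```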

View File

@@ -76,6 +76,14 @@ func (Local) BroadcastTxSync(tx types.Tx) (*ctypes.ResultBroadcastTx, error) {
return core.BroadcastTxSync(tx)
}
func (Local) UnconfirmedTxs(limit int) (*ctypes.ResultUnconfirmedTxs, error) {
return core.UnconfirmedTxs(limit)
}
func (Local) NumUnconfirmedTxs() (*ctypes.ResultUnconfirmedTxs, error) {
return core.NumUnconfirmedTxs()
}
func (Local) NetInfo() (*ctypes.ResultNetInfo, error) {
return core.NetInfo()
}

View File

@@ -281,6 +281,42 @@ func TestBroadcastTxCommit(t *testing.T) {
}
}
func TestUnconfirmedTxs(t *testing.T) {
_, _, tx := MakeTxKV()
mempool := node.MempoolReactor().Mempool
_ = mempool.CheckTx(tx, nil)
for i, c := range GetClients() {
mc, ok := c.(client.MempoolClient)
require.True(t, ok, "%d", i)
txs, err := mc.UnconfirmedTxs(1)
require.Nil(t, err, "%d: %+v", i, err)
assert.Exactly(t, types.Txs{tx}, types.Txs(txs.Txs))
}
mempool.Flush()
}
func TestNumUnconfirmedTxs(t *testing.T) {
_, _, tx := MakeTxKV()
mempool := node.MempoolReactor().Mempool
_ = mempool.CheckTx(tx, nil)
mempoolSize := mempool.Size()
for i, c := range GetClients() {
mc, ok := c.(client.MempoolClient)
require.True(t, ok, "%d", i)
res, err := mc.NumUnconfirmedTxs()
require.Nil(t, err, "%d: %+v", i, err)
assert.Equal(t, mempoolSize, res.N)
}
mempool.Flush()
}
func TestTx(t *testing.T) {
// first we broadcast a tx
c := getHTTPClient()

View File

@@ -15,6 +15,11 @@ import (
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// result, err := client.ABCIQuery("", "abcd", true)
// ```
//
@@ -69,6 +74,11 @@ func ABCIQuery(path string, data cmn.HexBytes, height int64, prove bool) (*ctype
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// info, err := client.ABCIInfo()
// ```
//

View File

@@ -18,6 +18,11 @@ import (
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// info, err := client.BlockchainInfo(10, 10)
// ```
//
@@ -123,6 +128,11 @@ func filterMinMax(height, min, max, limit int64) (int64, int64, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// info, err := client.Block(10)
// ```
//
@@ -235,6 +245,11 @@ func Block(heightPtr *int64) (*ctypes.ResultBlock, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// info, err := client.Commit(11)
// ```
//
@@ -329,6 +344,11 @@ func Commit(heightPtr *int64) (*ctypes.ResultCommit, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// info, err := client.BlockResults(10)
// ```
//

View File

@@ -16,6 +16,11 @@ import (
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// state, err := client.Validators()
// ```
//
@@ -67,6 +72,11 @@ func Validators(heightPtr *int64) (*ctypes.ResultValidators, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// state, err := client.DumpConsensusState()
// ```
//
@@ -225,6 +235,11 @@ func DumpConsensusState() (*ctypes.ResultDumpConsensusState, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// state, err := client.ConsensusState()
// ```
//
@@ -273,6 +288,11 @@ func ConsensusState() (*ctypes.ResultConsensusState, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// state, err := client.ConsensusParams()
// ```
//

View File

@@ -55,6 +55,10 @@ import (
//
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// ctx, cancel := context.WithTimeout(context.Background(), timeout)
// defer cancel()
// query := query.MustParse("tm.event = 'Tx' AND tx.height = 3")
@@ -118,6 +122,10 @@ func Subscribe(wsCtx rpctypes.WSRPCContext, query string) (*ctypes.ResultSubscri
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// err = client.Unsubscribe("test-client", query)
// ```
//
@@ -158,6 +166,10 @@ func Unsubscribe(wsCtx rpctypes.WSRPCContext, query string) (*ctypes.ResultUnsub
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// err = client.UnsubscribeAll("test-client")
// ```
//

View File

@@ -13,6 +13,11 @@ import (
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// result, err := client.Health()
// ```
//

View File

@@ -24,6 +24,11 @@ import (
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// result, err := client.BroadcastTxAsync("123")
// ```
//
@@ -64,6 +69,11 @@ func BroadcastTxAsync(tx types.Tx) (*ctypes.ResultBroadcastTx, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// result, err := client.BroadcastTxSync("456")
// ```
//
@@ -118,6 +128,11 @@ func BroadcastTxSync(tx types.Tx) (*ctypes.ResultBroadcastTx, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// result, err := client.BroadcastTxCommit("789")
// ```
//
@@ -198,7 +213,10 @@ func BroadcastTxCommit(tx types.Tx) (*ctypes.ResultBroadcastTxCommit, error) {
// TODO: configurable?
var deliverTxTimeout = rpcserver.WriteTimeout / 2
select {
case deliverTxResMsg := <-deliverTxResCh: // The tx was included in a block.
case deliverTxResMsg, ok := <-deliverTxResCh: // The tx was included in a block.
if !ok {
return nil, errors.New("Error on broadcastTxCommit: expected DeliverTxResult, got nil. Did the Tendermint stop?")
}
deliverTxRes := deliverTxResMsg.(types.EventDataTx)
return &ctypes.ResultBroadcastTxCommit{
CheckTx: *checkTxRes,
@@ -225,6 +243,11 @@ func BroadcastTxCommit(tx types.Tx) (*ctypes.ResultBroadcastTxCommit, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// result, err := client.UnconfirmedTxs()
// ```
//
@@ -263,6 +286,11 @@ func UnconfirmedTxs(limit int) (*ctypes.ResultUnconfirmedTxs, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// result, err := client.NumUnconfirmedTxs()
// ```
//

View File

@@ -17,6 +17,11 @@ import (
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// info, err := client.NetInfo()
// ```
//
@@ -95,6 +100,11 @@ func UnsafeDialPeers(peers []string, persistent bool) (*ctypes.ResultDialPeers,
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// genesis, err := client.Genesis()
// ```
//

View File

@@ -20,6 +20,11 @@ import (
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// result, err := client.Status()
// ```
//

View File

@@ -21,6 +21,11 @@ import (
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// tx, err := client.Tx([]byte("2B8EC32BA2579B3B8606E42C06DE2F7AFA2556EF"), true)
// ```
//
@@ -115,6 +120,11 @@ func Tx(hash []byte, prove bool) (*ctypes.ResultTx, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// q, err := tmquery.New("account.owner='Ivan'")
// tx, err := client.TxSearch(q, true)
// ```

View File

@@ -119,9 +119,8 @@ func NewTendermint(app abci.Application) *nm.Node {
config := GetConfig()
logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))
logger = log.NewFilter(logger, log.AllowError())
pvKeyFile := config.PrivValidatorKeyFile()
pvKeyStateFile := config.PrivValidatorStateFile()
pv := privval.LoadOrGenFilePV(pvKeyFile, pvKeyStateFile)
pvFile := config.PrivValidatorFile()
pv := privval.LoadOrGenFilePV(pvFile)
papp := proxy.NewLocalClientCreator(app)
nodeKey, err := p2p.LoadOrGenNodeKey(config.NodeKeyFile())
if err != nil {

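The helper above now builds the validator from two files (key and state) instead of one. A standalone usage sketch of the split layout; the paths are illustrative only and the temporary directory is just for the example:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"

	"github.com/tendermint/tendermint/privval"
)

func main() {
	// Illustrative locations only; a real node takes these from
	// config.PrivValidatorKeyFile() and config.PrivValidatorStateFile().
	dir, err := ioutil.TempDir("", "pv-example")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	keyFile := filepath.Join(dir, "priv_validator_key.json")
	stateFile := filepath.Join(dir, "priv_validator_state.json")

	// Generates a fresh key/state pair if the files do not exist yet,
	// otherwise loads them.
	pv := privval.LoadOrGenFilePV(keyFile, stateFile)
	fmt.Println("validator address:", pv.GetAddress())
}
```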
View File

@@ -1,41 +0,0 @@
package main
import (
"fmt"
"os"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/privval"
)
var (
logger = log.NewTMLogger(log.NewSyncWriter(os.Stdout))
)
func main() {
args := os.Args[1:]
if len(args) != 3 {
fmt.Println("Expected three args: <old path> <new key path> <new state path>")
fmt.Println("Eg. ~/.tendermint/config/priv_validator.json ~/.tendermint/config/priv_validator_key.json ~/.tendermint/data/priv_validator_state.json")
os.Exit(1)
}
err := loadAndUpgrade(args[0], args[1], args[2])
if err != nil {
fmt.Println(err)
os.Exit(1)
}
}
func loadAndUpgrade(oldPVPath, newPVKeyPath, newPVStatePath string) error {
oldPV, err := privval.LoadOldFilePV(oldPVPath)
if err != nil {
return fmt.Errorf("Error reading OldPrivValidator from %v: %v\n", oldPVPath, err)
}
logger.Info("Upgrading PrivValidator file",
"old", oldPVPath,
"newKey", newPVKeyPath,
"newState", newPVStatePath,
)
oldPV.Upgrade(newPVKeyPath, newPVStatePath)
return nil
}

View File

@@ -1,111 +0,0 @@
package main
import (
"io/ioutil"
"os"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/privval"
)
const oldPrivvalContent = `{
"address": "1D8089FAFDFAE4A637F3D616E17B92905FA2D91D",
"pub_key": {
"type": "tendermint/PubKeyEd25519",
"value": "r3Yg2AhDZ745CNTpavsGU+mRZ8WpRXqoJuyqjN8mJq0="
},
"last_height": "5",
"last_round": "0",
"last_step": 3,
"last_signature": "CTr7b9ZQlrJJf+12rPl5t/YSCUc/KqV7jQogCfFJA24e7hof69X6OMT7eFLVQHyodPjD/QTA298XHV5ejxInDQ==",
"last_signbytes": "750802110500000000000000220B08B398F3E00510F48DA6402A480A20FC258973076512999C3E6839A22E9FBDB1B77CF993E8A9955412A41A59D4CAD312240A20C971B286ACB8AAA6FCA0365EB0A660B189EDC08B46B5AF2995DEFA51A28D215B10013211746573742D636861696E2D533245415533",
"priv_key": {
"type": "tendermint/PrivKeyEd25519",
"value": "7MwvTGEWWjsYwjn2IpRb+GYsWi9nnFsw8jPLLY1UtP6vdiDYCENnvjkI1Olq+wZT6ZFnxalFeqgm7KqM3yYmrQ=="
}
}`
func TestLoadAndUpgrade(t *testing.T) {
oldFilePath := initTmpOldFile(t)
defer os.Remove(oldFilePath)
newStateFile, err := ioutil.TempFile("", "priv_validator_state*.json")
defer os.Remove(newStateFile.Name())
require.NoError(t, err)
newKeyFile, err := ioutil.TempFile("", "priv_validator_key*.json")
defer os.Remove(newKeyFile.Name())
require.NoError(t, err)
emptyOldFile, err := ioutil.TempFile("", "priv_validator_empty*.json")
require.NoError(t, err)
defer os.Remove(emptyOldFile.Name())
type args struct {
oldPVPath string
newPVKeyPath string
newPVStatePath string
}
tests := []struct {
name string
args args
wantErr bool
}{
{"successful upgrade",
args{oldPVPath: oldFilePath, newPVKeyPath: newKeyFile.Name(), newPVStatePath: newStateFile.Name()},
false,
},
{"unsuccessful upgrade: empty old privval file",
args{oldPVPath: emptyOldFile.Name(), newPVKeyPath: newKeyFile.Name(), newPVStatePath: newStateFile.Name()},
true,
},
{"unsuccessful upgrade: invalid new paths (1/3)",
args{oldPVPath: emptyOldFile.Name(), newPVKeyPath: "", newPVStatePath: newStateFile.Name()},
true,
},
{"unsuccessful upgrade: invalid new paths (2/3)",
args{oldPVPath: emptyOldFile.Name(), newPVKeyPath: newKeyFile.Name(), newPVStatePath: ""},
true,
},
{"unsuccessful upgrade: invalid new paths (3/3)",
args{oldPVPath: emptyOldFile.Name(), newPVKeyPath: "", newPVStatePath: ""},
true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := loadAndUpgrade(tt.args.oldPVPath, tt.args.newPVKeyPath, tt.args.newPVStatePath)
if tt.wantErr {
assert.Error(t, err)
} else {
assert.NoError(t, err)
upgradedPV := privval.LoadFilePV(tt.args.newPVKeyPath, tt.args.newPVStatePath)
oldPV, err := privval.LoadOldFilePV(tt.args.oldPVPath + ".bak")
require.NoError(t, err)
assert.Equal(t, oldPV.Address, upgradedPV.Key.Address)
assert.Equal(t, oldPV.Address, upgradedPV.GetAddress())
assert.Equal(t, oldPV.PubKey, upgradedPV.Key.PubKey)
assert.Equal(t, oldPV.PubKey, upgradedPV.GetPubKey())
assert.Equal(t, oldPV.PrivKey, upgradedPV.Key.PrivKey)
assert.Equal(t, oldPV.LastHeight, upgradedPV.LastSignState.Height)
assert.Equal(t, oldPV.LastRound, upgradedPV.LastSignState.Round)
assert.Equal(t, oldPV.LastSignature, upgradedPV.LastSignState.Signature)
assert.Equal(t, oldPV.LastSignBytes, upgradedPV.LastSignState.SignBytes)
assert.Equal(t, oldPV.LastStep, upgradedPV.LastSignState.Step)
}
})
}
}
func initTmpOldFile(t *testing.T) string {
tmpfile, err := ioutil.TempFile("", "priv_validator_*.json")
require.NoError(t, err)
t.Logf("created test file %s", tmpfile.Name())
_, err = tmpfile.WriteString(oldPrivvalContent)
require.NoError(t, err)
return tmpfile.Name()
}

scripts/wire2amino.go (new file, 182 lines)
View File

@@ -0,0 +1,182 @@
package main
import (
"encoding/json"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"time"
"github.com/tendermint/go-amino"
"github.com/tendermint/tendermint/crypto/ed25519"
cryptoAmino "github.com/tendermint/tendermint/crypto/encoding/amino"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/privval"
"github.com/tendermint/tendermint/types"
)
type GenesisValidator struct {
PubKey Data `json:"pub_key"`
Power int64 `json:"power"`
Name string `json:"name"`
}
type Genesis struct {
GenesisTime time.Time `json:"genesis_time"`
ChainID string `json:"chain_id"`
ConsensusParams *types.ConsensusParams `json:"consensus_params,omitempty"`
Validators []GenesisValidator `json:"validators"`
AppHash cmn.HexBytes `json:"app_hash"`
AppState json.RawMessage `json:"app_state,omitempty"`
AppOptions json.RawMessage `json:"app_options,omitempty"` // DEPRECATED
}
type NodeKey struct {
PrivKey Data `json:"priv_key"`
}
type PrivVal struct {
Address cmn.HexBytes `json:"address"`
LastHeight int64 `json:"last_height"`
LastRound int `json:"last_round"`
LastStep int8 `json:"last_step"`
PubKey Data `json:"pub_key"`
PrivKey Data `json:"priv_key"`
}
type Data struct {
Type string `json:"type"`
Data cmn.HexBytes `json:"data"`
}
func convertNodeKey(cdc *amino.Codec, jsonBytes []byte) ([]byte, error) {
var nodeKey NodeKey
err := json.Unmarshal(jsonBytes, &nodeKey)
if err != nil {
return nil, err
}
var privKey ed25519.PrivKeyEd25519
copy(privKey[:], nodeKey.PrivKey.Data)
nodeKeyNew := p2p.NodeKey{privKey}
bz, err := cdc.MarshalJSON(nodeKeyNew)
if err != nil {
return nil, err
}
return bz, nil
}
func convertPrivVal(cdc *amino.Codec, jsonBytes []byte) ([]byte, error) {
var privVal PrivVal
err := json.Unmarshal(jsonBytes, &privVal)
if err != nil {
return nil, err
}
var privKey ed25519.PrivKeyEd25519
copy(privKey[:], privVal.PrivKey.Data)
var pubKey ed25519.PubKeyEd25519
copy(pubKey[:], privVal.PubKey.Data)
privValNew := privval.FilePV{
Address: pubKey.Address(),
PubKey: pubKey,
LastHeight: privVal.LastHeight,
LastRound: privVal.LastRound,
LastStep: privVal.LastStep,
PrivKey: privKey,
}
bz, err := cdc.MarshalJSON(privValNew)
if err != nil {
return nil, err
}
return bz, nil
}
func convertGenesis(cdc *amino.Codec, jsonBytes []byte) ([]byte, error) {
var genesis Genesis
err := json.Unmarshal(jsonBytes, &genesis)
if err != nil {
return nil, err
}
genesisNew := types.GenesisDoc{
GenesisTime: genesis.GenesisTime,
ChainID: genesis.ChainID,
ConsensusParams: genesis.ConsensusParams,
// Validators
AppHash: genesis.AppHash,
AppState: genesis.AppState,
}
if genesis.AppOptions != nil {
genesisNew.AppState = genesis.AppOptions
}
for _, v := range genesis.Validators {
var pubKey ed25519.PubKeyEd25519
copy(pubKey[:], v.PubKey.Data)
genesisNew.Validators = append(
genesisNew.Validators,
types.GenesisValidator{
PubKey: pubKey,
Power: v.Power,
Name: v.Name,
},
)
}
bz, err := cdc.MarshalJSON(genesisNew)
if err != nil {
return nil, err
}
return bz, nil
}
func main() {
cdc := amino.NewCodec()
cryptoAmino.RegisterAmino(cdc)
args := os.Args[1:]
if len(args) != 1 {
fmt.Println("Please specify a file to convert")
os.Exit(1)
}
filePath := args[0]
fileName := filepath.Base(filePath)
fileBytes, err := ioutil.ReadFile(filePath)
if err != nil {
panic(err)
}
var bz []byte
switch fileName {
case "node_key.json":
bz, err = convertNodeKey(cdc, fileBytes)
case "priv_validator.json":
bz, err = convertPrivVal(cdc, fileBytes)
case "genesis.json":
bz, err = convertGenesis(cdc, fileBytes)
default:
fmt.Println("Expected file name to be in (node_key.json, priv_validator.json, genesis.json)")
os.Exit(1)
}
if err != nil {
panic(err)
}
fmt.Println(string(bz))
}
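The converter depends on registering the crypto concrete types with the amino codec before marshalling, as done at the top of main. A minimal standalone sketch of that step, assuming the same go-amino and tendermint crypto packages used by the script:

```go
package main

import (
	"fmt"

	"github.com/tendermint/go-amino"
	"github.com/tendermint/tendermint/crypto/ed25519"
	cryptoAmino "github.com/tendermint/tendermint/crypto/encoding/amino"
)

func main() {
	cdc := amino.NewCodec()
	cryptoAmino.RegisterAmino(cdc) // registers PubKeyEd25519, PrivKeyEd25519, etc.

	var pub ed25519.PubKeyEd25519
	bz, err := cdc.MarshalJSON(pub)
	if err != nil {
		panic(err)
	}
	// Registered concrete types are wrapped in the amino type/value envelope,
	// e.g. {"type":"tendermint/PubKeyEd25519","value":"..."}.
	fmt.Println(string(bz))
}
```

Since the script prints the re-encoded JSON to stdout, its output can simply be redirected into the new config location for each of the three recognized files.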

View File

@@ -172,10 +172,10 @@ func (txi *TxIndex) Search(q *query.Query) ([]*types.TxResult, error) {
for _, r := range ranges {
if !hashesInitialized {
hashes = txi.matchRange(r, []byte(r.key))
hashes = txi.matchRange(r, startKey(r.key))
hashesInitialized = true
} else {
hashes = intersect(hashes, txi.matchRange(r, []byte(r.key)))
hashes = intersect(hashes, txi.matchRange(r, startKey(r.key)))
}
}
}
@@ -190,10 +190,10 @@ func (txi *TxIndex) Search(q *query.Query) ([]*types.TxResult, error) {
}
if !hashesInitialized {
hashes = txi.match(c, startKey(c, height))
hashes = txi.match(c, startKeyForCondition(c, height))
hashesInitialized = true
} else {
hashes = intersect(hashes, txi.match(c, startKey(c, height)))
hashes = intersect(hashes, txi.match(c, startKeyForCondition(c, height)))
}
}
@@ -332,18 +332,18 @@ func isRangeOperation(op query.Operator) bool {
}
}
func (txi *TxIndex) match(c query.Condition, startKey []byte) (hashes [][]byte) {
func (txi *TxIndex) match(c query.Condition, startKeyBz []byte) (hashes [][]byte) {
if c.Op == query.OpEqual {
it := dbm.IteratePrefix(txi.store, startKey)
it := dbm.IteratePrefix(txi.store, startKeyBz)
defer it.Close()
for ; it.Valid(); it.Next() {
hashes = append(hashes, it.Value())
}
} else if c.Op == query.OpContains {
// XXX: doing full scan because startKey does not apply here
// For example, if startKey = "account.owner=an" and search query = "accoutn.owner CONSISTS an"
// we can't iterate with prefix "account.owner=an" because we might miss keys like "account.owner=Ulan"
it := txi.store.Iterator(nil, nil)
// XXX: startKey does not apply here.
// For example, if startKey = "account.owner/an/" and search query = "account.owner CONTAINS an"
// we can't iterate with prefix "account.owner/an/" because we might miss keys like "account.owner/Ulan/"
it := dbm.IteratePrefix(txi.store, startKey(c.Tag))
defer it.Close()
for ; it.Valid(); it.Next() {
if !isTagKey(it.Key()) {
@@ -359,14 +359,14 @@ func (txi *TxIndex) match(c query.Condition, startKey []byte) (hashes [][]byte)
return
}
func (txi *TxIndex) matchRange(r queryRange, prefix []byte) (hashes [][]byte) {
func (txi *TxIndex) matchRange(r queryRange, startKey []byte) (hashes [][]byte) {
// create a map to prevent duplicates
hashesMap := make(map[string][]byte)
lowerBound := r.lowerBoundValue()
upperBound := r.upperBoundValue()
it := dbm.IteratePrefix(txi.store, prefix)
it := dbm.IteratePrefix(txi.store, startKey)
defer it.Close()
LOOP:
for ; it.Valid(); it.Next() {
@@ -409,16 +409,6 @@ LOOP:
///////////////////////////////////////////////////////////////////////////////
// Keys
func startKey(c query.Condition, height int64) []byte {
var key string
if height > 0 {
key = fmt.Sprintf("%s/%v/%d/", c.Tag, c.Operand, height)
} else {
key = fmt.Sprintf("%s/%v/", c.Tag, c.Operand)
}
return []byte(key)
}
func isTagKey(key []byte) bool {
return strings.Count(string(key), tagKeySeparator) == 3
}
@@ -429,11 +419,36 @@ func extractValueFromKey(key []byte) string {
}
func keyForTag(tag cmn.KVPair, result *types.TxResult) []byte {
return []byte(fmt.Sprintf("%s/%s/%d/%d", tag.Key, tag.Value, result.Height, result.Index))
return []byte(fmt.Sprintf("%s/%s/%d/%d",
tag.Key,
tag.Value,
result.Height,
result.Index,
))
}
func keyForHeight(result *types.TxResult) []byte {
return []byte(fmt.Sprintf("%s/%d/%d/%d", types.TxHeightKey, result.Height, result.Height, result.Index))
return []byte(fmt.Sprintf("%s/%d/%d/%d",
types.TxHeightKey,
result.Height,
result.Height,
result.Index,
))
}
func startKeyForCondition(c query.Condition, height int64) []byte {
if height > 0 {
return startKey(c.Tag, c.Operand, height)
}
return startKey(c.Tag, c.Operand)
}
func startKey(fields ...interface{}) []byte {
var b bytes.Buffer
for _, f := range fields {
b.Write([]byte(fmt.Sprintf("%v", f) + tagKeySeparator))
}
return b.Bytes()
}
///////////////////////////////////////////////////////////////////////////////
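For reference, a sketch of how the variadic startKey above composes iterator prefixes, assuming tagKeySeparator is "/" (consistent with the keyForTag format string in this file):

```go
package main

import (
	"bytes"
	"fmt"
)

const tagKeySeparator = "/"

// startKey mirrors the helper above: each field is rendered with %v and
// terminated by the separator, producing prefixes like "account.owner/Ivan/".
func startKey(fields ...interface{}) []byte {
	var b bytes.Buffer
	for _, f := range fields {
		b.Write([]byte(fmt.Sprintf("%v", f) + tagKeySeparator))
	}
	return b.Bytes()
}

func main() {
	fmt.Println(string(startKey("account.owner", "Ivan"))) // account.owner/Ivan/
	fmt.Println(string(startKey("account.number", 1, 12))) // account.number/1/12/
}
```

The trailing separator is what keeps a prefix scan on "account.number" from also matching keys indexed under "account.number.id", the collision exercised by the new test case for issue #2908 below.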

View File

@@ -89,8 +89,10 @@ func TestTxSearch(t *testing.T) {
{"account.date >= TIME 2013-05-03T14:45:00Z", 0},
// search using CONTAINS
{"account.owner CONTAINS 'an'", 1},
// search using CONTAINS
// search for a non-existent value using CONTAINS
{"account.owner CONTAINS 'Vlad'", 0},
// search using CONTAINS on a tag with a numeric value (no match expected)
{"account.number CONTAINS 'Iv'", 0},
}
for _, tc := range testCases {
@@ -126,7 +128,7 @@ func TestTxSearchOneTxWithMultipleSameTagsButDifferentValues(t *testing.T) {
}
func TestTxSearchMultipleTxs(t *testing.T) {
allowedTags := []string{"account.number"}
allowedTags := []string{"account.number", "account.number.id"}
indexer := NewTxIndex(db.NewMemDB(), IndexTags(allowedTags))
// indexed first, but bigger height (to test the order of transactions)
@@ -160,6 +162,17 @@ func TestTxSearchMultipleTxs(t *testing.T) {
err = indexer.Index(txResult3)
require.NoError(t, err)
// indexed fourth (to test we don't include txs with similar tags)
// https://github.com/tendermint/tendermint/issues/2908
txResult4 := txResultWithTags([]cmn.KVPair{
{Key: []byte("account.number.id"), Value: []byte("1")},
})
txResult4.Tx = types.Tx("Mike's account")
txResult4.Height = 2
txResult4.Index = 2
err = indexer.Index(txResult4)
require.NoError(t, err)
results, err := indexer.Search(query.MustParse("account.number >= 1"))
assert.NoError(t, err)

View File

@@ -19,7 +19,7 @@ import (
// x + (x >> 3) = x + x/8 = x * (1 + 0.125).
// MaxTotalVotingPower is the largest int64 `x` with the property that `x + (x >> 3)` is
// still in the bounds of int64.
const MaxTotalVotingPower = 8198552921648689607
const MaxTotalVotingPower = int64(8198552921648689607)
// ValidatorSet represent a set of *Validator at a given height.
// The validators can be fetched by address or index.
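The typed constant makes the bound explicit. A quick arithmetic check of the comment at the top of this hunk (a sketch, not part of the diff):

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	const maxTotalVotingPower = int64(8198552921648689607)
	// Per the comment above, x + (x >> 3) must stay within int64;
	// for this x the sum lands exactly on MaxInt64.
	fmt.Println(maxTotalVotingPower+(maxTotalVotingPower>>3) == math.MaxInt64) // true
}
```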

View File

@@ -18,7 +18,7 @@ const (
// TMCoreSemVer is the current version of Tendermint Core.
// It's the Semantic Version of the software.
// Must be a string because scripts like dist.sh read this file.
TMCoreSemVer = "0.26.4"
TMCoreSemVer = "0.27.4"
// ABCISemVer is the semantic version of the ABCI library
ABCISemVer = "0.15.0"
@@ -36,10 +36,10 @@ func (p Protocol) Uint64() uint64 {
var (
// P2PProtocol versions all p2p behaviour and msgs.
P2PProtocol Protocol = 4
P2PProtocol Protocol = 5
// BlockProtocol versions all block data structures and processing.
BlockProtocol Protocol = 7
BlockProtocol Protocol = 8
)
//------------------------------------------------------------------------