Mirror of https://github.com/fluencelabs/tendermint (synced 2025-07-22 07:41:57 +00:00)

Compare commits: 59 commits, v0.28.0-de ... v0.29.1
Commits (SHA1):
4d7b29cd8f, 90970d0ddc, bb0a9b3d6d, 8992596192, c20fbed2f7, c4157549ab, fbd1e79465, 1efacaa8d3, 98b42e9eb2, 2449bf7300,
3362da0a69, 4514842a63, a97d6995c9, d9d4f3e629, 7a8aeff4b0, de5a6010f0, da95f4aa6d, 4f8769175e, 40c887baf7, d3e8889411,
d17969e378, 07263298bd, 5a2e69df81, f5f1416a14, 4d36647eea, 8fd8f800d0, 87991059aa, c69dbb25ce, bc8874020f, 55d7238708,
4a037f9fe6, aa40cfcbb9, 6d6d103f15, 239ebe2076, 0cba0e11b5, d4e6720541, dcb8f88525, a2a62c9be6, 3191ee8bad, 308b7e3bbe,
73ea5effe5, d1afa0ed6c, ca00cd6a78, 4daca1a634, bc00a032c1, 5f4d8e031e, 7b2c4bb493, 5f93220c61, ec53ce359b, 1f68318875,
1ccc0918f5, fc031d980b, 1895cde590, be00cd1add, a6011c007d, ef94a322b8, 7f607d0ce2, 81c51cd4fc, 51094f9417
@@ -240,7 +240,7 @@ jobs:
for pkg in $(go list github.com/tendermint/tendermint/... | circleci tests split --split-by=timings); do
id=$(basename "$pkg")

GOCACHE=off go test -timeout 5m -race -coverprofile=/tmp/workspace/profiles/$id.out -covermode=atomic "$pkg" | tee "/tmp/logs/$id-$RANDOM.log"
GOCACHE=off go test -v -timeout 5m -race -coverprofile=/tmp/workspace/profiles/$id.out -covermode=atomic "$pkg" | tee "/tmp/logs/$id-$RANDOM.log"
done
- persist_to_workspace:
root: /tmp/workspace
CHANGELOG.md (168 changed lines)

@@ -1,5 +1,168 @@
# Changelog

## v0.29.1

*January 24, 2019*

Special thanks to external contributors on this release:
@infinytum, @gauthamzz

This release contains two important fixes: one for p2p layer where we sometimes
were not closing connections and one for consensus layer where consensus with
no empty blocks (`create_empty_blocks = false`) could halt.

Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).

### IMPROVEMENTS:
- [pex] [\#3037](https://github.com/tendermint/tendermint/issues/3037) Only log "Reached max attempts to dial" once
- [rpc] [\#3159](https://github.com/tendermint/tendermint/issues/3159) Expose
`triggered_timeout_commit` in the `/dump_consensus_state`

### BUG FIXES:
- [consensus] [\#3199](https://github.com/tendermint/tendermint/issues/3199) Fix consensus halt with no empty blocks from not resetting triggeredTimeoutCommit
- [p2p] [\#2967](https://github.com/tendermint/tendermint/issues/2967) Fix file descriptor leak

## v0.29.0

*January 21, 2019*

Special thanks to external contributors on this release:
@bradyjoestar, @kunaldhariwal, @gauthamzz, @hrharder

This release is primarily about making some breaking changes to
the Block protocol version before Cosmos launch, and to fixing more issues
in the proposer selection algorithm discovered on Cosmos testnets.

The Block protocol changes include using a standard Merkle tree format (RFC 6962),
fixing some inconsistencies between field orders in Vote and Proposal structs,
and constraining the hash of the ConsensusParams to include only a few fields.

The proposer selection algorithm saw significant progress,
including a [formal proof by @cwgoes for the base-case in Idris](https://github.com/cwgoes/tm-proposer-idris)
and a [much more detailed specification (still in progress) by
@ancazamfir](https://github.com/tendermint/tendermint/pull/3140).

Fixes to the proposer selection algorithm include normalizing the proposer
priorities to mitigate the effects of large changes to the validator set.
That said, we just discovered [another bug](https://github.com/tendermint/tendermint/issues/3181),
which will be fixed in the next breaking release.

While we are trying to stabilize the Block protocol to preserve compatibility
with old chains, there may be some final changes yet to come before Cosmos
launch as we continue to audit and test the software.

Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).

### BREAKING CHANGES:

* CLI/RPC/Config

* Apps
- [state] [\#3049](https://github.com/tendermint/tendermint/issues/3049) Total voting power of the validator set is upper bounded by
`MaxInt64 / 8`. Apps must ensure they do not return changes to the validator
set that cause this maximum to be exceeded.

* Go API
- [node] [\#3082](https://github.com/tendermint/tendermint/issues/3082) MetricsProvider now requires you to pass a chain ID
- [types] [\#2713](https://github.com/tendermint/tendermint/issues/2713) Rename `TxProof.LeafHash` to `TxProof.Leaf`
- [crypto/merkle] [\#2713](https://github.com/tendermint/tendermint/issues/2713) `SimpleProof.Verify` takes a `leaf` instead of a
`leafHash` and performs the hashing itself

* Blockchain Protocol
* [crypto/merkle] [\#2713](https://github.com/tendermint/tendermint/issues/2713) Merkle trees now match the RFC 6962 specification
* [types] [\#3078](https://github.com/tendermint/tendermint/issues/3078) Re-order Timestamp and BlockID in CanonicalVote so it's
consistent with CanonicalProposal (BlockID comes
first)
* [types] [\#3165](https://github.com/tendermint/tendermint/issues/3165) Hash of ConsensusParams only includes BlockSize.MaxBytes and
BlockSize.MaxGas

* P2P Protocol
- [consensus] [\#3049](https://github.com/tendermint/tendermint/issues/3049) Normalize priorities to not exceed `2*TotalVotingPower` to mitigate unfair proposer selection
heavily preferring earlier joined validators in the case of an early bonded large validator unbonding

### FEATURES:

### IMPROVEMENTS:
- [rpc] [\#3065](https://github.com/tendermint/tendermint/issues/3065) Return maxPerPage (100), not defaultPerPage (30) if `per_page` is greater than the max 100.
- [instrumentation] [\#3082](https://github.com/tendermint/tendermint/issues/3082) Add `chain_id` label for all metrics

### BUG FIXES:
- [crypto] [\#3164](https://github.com/tendermint/tendermint/issues/3164) Update `btcd` fork for rare signRFC6979 bug
- [lite] [\#3171](https://github.com/tendermint/tendermint/issues/3171) Fix verifying large validator set changes
- [log] [\#3125](https://github.com/tendermint/tendermint/issues/3125) Fix year format
- [mempool] [\#3168](https://github.com/tendermint/tendermint/issues/3168) Limit tx size to fit in the max reactor msg size
- [scripts] [\#3147](https://github.com/tendermint/tendermint/issues/3147) Fix json2wal for large block parts (@bradyjoestar)
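
The two `crypto/merkle` changes above (\#2713) mean callers now hand `SimpleProof.Verify` the raw leaf bytes rather than a precomputed hash, and leaves and inner nodes are hashed with RFC 6962 domain prefixes. A minimal sketch of the new call pattern, assuming the package's `SimpleProofsFromByteSlices` helper (not shown in this diff):

```go
package main

import (
	"fmt"

	"github.com/tendermint/tendermint/crypto/merkle"
)

func main() {
	items := [][]byte{[]byte("tx1"), []byte("tx2"), []byte("tx3")}

	// Leaves are raw byte slices; the tree hashes them internally
	// as tmhash(0x00 || leaf) per RFC 6962.
	rootHash, proofs := merkle.SimpleProofsFromByteSlices(items)

	// Since v0.29.0, Verify takes the raw leaf and hashes it itself
	// (previously it expected the precomputed leaf hash).
	if err := proofs[1].Verify(rootHash, items[1]); err != nil {
		fmt.Println("proof invalid:", err)
		return
	}
	fmt.Printf("leaf 1 verifies against root %X\n", rootHash)
}
```
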
## v0.28.1

*January 18th, 2019*

Special thanks to external contributors on this release:
@HaoyangLiu

Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).

### BUG FIXES:
- [consensus] Fix consensus halt from proposing blocks with too much evidence

## v0.28.0

*January 16th, 2019*

Special thanks to external contributors on this release:
@fmauricios, @gianfelipe93, @husio, @needkane, @srmo, @yutianwu

This release is primarily about upgrades to the `privval` system -
separating the `priv_validator.json` into distinct config and data files, and
refactoring the socket validator to support reconnections.

**Note:** Please backup your existing `priv_validator.json` before using this
version.

See [UPGRADING.md](UPGRADING.md) for more details.

### BREAKING CHANGES:

* CLI/RPC/Config
- [cli] Removed `--proxy_app=dummy` option. Use `kvstore` (`persistent_kvstore`) instead.
- [cli] Renamed `--proxy_app=nilapp` to `--proxy_app=noop`.
- [config] [\#2992](https://github.com/tendermint/tendermint/issues/2992) `allow_duplicate_ip` is now set to false
- [privval] [\#1181](https://github.com/tendermint/tendermint/issues/1181) Split `priv_validator.json` into immutable (`config/priv_validator_key.json`) and mutable (`data/priv_validator_state.json`) parts (@yutianwu)
- [privval] [\#2926](https://github.com/tendermint/tendermint/issues/2926) Split up `PubKeyMsg` into `PubKeyRequest` and `PubKeyResponse` to be consistent with other message types
- [privval] [\#2923](https://github.com/tendermint/tendermint/issues/2923) Listen for unix socket connections instead of dialing them

* Apps

* Go API
- [types] [\#2981](https://github.com/tendermint/tendermint/issues/2981) Remove `PrivValidator.GetAddress()`

* Blockchain Protocol

* P2P Protocol

### FEATURES:
- [rpc] [\#3052](https://github.com/tendermint/tendermint/issues/3052) Include peer's remote IP in `/net_info`

### IMPROVEMENTS:
- [consensus] [\#3086](https://github.com/tendermint/tendermint/issues/3086) Log peerID on ignored votes (@srmo)
- [docs] [\#3061](https://github.com/tendermint/tendermint/issues/3061) Added specification for signing consensus msgs at
./docs/spec/consensus/signing.md
- [privval] [\#2948](https://github.com/tendermint/tendermint/issues/2948) Memoize pubkey so it's only requested once on startup
- [privval] [\#2923](https://github.com/tendermint/tendermint/issues/2923) Retry RemoteSigner connections on error

### BUG FIXES:

- [build] [\#3085](https://github.com/tendermint/tendermint/issues/3085) Fix `Version` field in build scripts (@husio)
- [crypto/multisig] [\#3102](https://github.com/tendermint/tendermint/issues/3102) Fix multisig keys address length
- [crypto/encoding] [\#3101](https://github.com/tendermint/tendermint/issues/3101) Fix `PubKeyMultisigThreshold` unmarshalling into `crypto.PubKey` interface
- [p2p/conn] [\#3111](https://github.com/tendermint/tendermint/issues/3111) Make SecretConnection thread safe
- [rpc] [\#3053](https://github.com/tendermint/tendermint/issues/3053) Fix internal error in `/tx_search` when results are empty
(@gianfelipe93)
- [types] [\#2926](https://github.com/tendermint/tendermint/issues/2926) Do not panic if retrieving the privval's public key fails

## v0.27.4

*December 21st, 2018*

@@ -17,9 +180,8 @@

### BREAKING CHANGES:

* Go API

- [dep] [\#3027](https://github.com/tendermint/tendermint/issues/3027) Revert to mainline Go crypto library, eliminating the modified
`bcrypt.GenerateFromPassword`
- [dep] [\#3027](https://github.com/tendermint/tendermint/issues/3027) Revert to mainline Go crypto library, eliminating the modified
`bcrypt.GenerateFromPassword`

## v0.27.2
@@ -1,4 +1,4 @@
## v0.27.4
## v0.30.0

*TBD*

@@ -7,30 +7,17 @@ Special thanks to external contributors on this release:
### BREAKING CHANGES:

* CLI/RPC/Config
- [cli] Removed `node` `--proxy_app=dummy` option. Use `kvstore` (`persistent_kvstore`) instead.
- [cli] Renamed `node` `--proxy_app=nilapp` to `--proxy_app=noop`.
- [config] \#2992 `allow_duplicate_ip` is now set to false

- [privval] \#2926 split up `PubKeyMsg` into `PubKeyRequest` and `PubKeyResponse` to be consistent with other message types

* Apps

* Go API
- [types] \#2926 memoize consensus public key on initialization of remote signer and return the memoized key on
`PrivValidator.GetPubKey()` instead of requesting it again
- [types] \#2981 Remove `PrivValidator.GetAddress()`
* Go API

* Blockchain Protocol

* P2P Protocol
- multiple connections from the same IP are now disabled by default (see `allow_duplicate_ip` config option)

### FEATURES:
- [privval] \#1181 Split immutable and mutable parts of priv_validator.json

### IMPROVEMENTS:

### BUG FIXES:
- [types] \#2926 do not panic if retrieving the private validator's public key fails
- [crypto/multisig] \#3102 fix multisig keys address length
- [crypto/encoding] \#3101 Fix `PubKeyMultisigThreshold` unmarshalling into `crypto.PubKey` interface
Gopkg.lock (generated, 5 changed lines)

@@ -361,11 +361,12 @@
revision = "6b91fda63f2e36186f1c9d0e48578defb69c5d43"

[[projects]]
digest = "1:605b6546f3f43745695298ec2d342d3e952b6d91cdf9f349bea9315f677d759f"
digest = "1:83f5e189eea2baad419a6a410984514266ff690075759c87e9ede596809bd0b8"
name = "github.com/tendermint/btcd"
packages = ["btcec"]
pruneopts = "UT"
revision = "e5840949ff4fff0c56f9b6a541e22b63581ea9df"
revision = "80daadac05d1cd29571fccf27002d79667a88b58"
version = "v0.1.1"

[[projects]]
digest = "1:ad9c4c1a4e7875330b1f62906f2830f043a23edb5db997e3a5ac5d3e6eadf80a"
@@ -75,6 +75,10 @@
name = "github.com/prometheus/client_golang"
version = "^0.9.1"

[[constraint]]
name = "github.com/tendermint/btcd"
version = "v0.1.1"

###################################
## Some repos dont have releases.
## Pin to revision
@@ -92,9 +96,6 @@
name = "github.com/btcsuite/btcutil"
revision = "d4cc87b860166d00d6b5b9e0d3b3d71d6088d4d4"

[[constraint]]
name = "github.com/tendermint/btcd"
revision = "e5840949ff4fff0c56f9b6a541e22b63581ea9df"

[[constraint]]
name = "github.com/rcrowley/go-metrics"
Makefile (4 changed lines)

@@ -292,9 +292,7 @@ build-linux:
GOOS=linux GOARCH=amd64 $(MAKE) build

build-docker-localnode:
cd networks/local
make
cd -
@cd networks/local && make

# Run a 4-node testnet locally
localnet-start: localnet-stop
README.md (93 changed lines)

@@ -1,8 +1,8 @@
# Tendermint

[Byzantine-Fault Tolerant](https://en.wikipedia.org/wiki/Byzantine_fault_tolerance)
[State Machine Replication](https://en.wikipedia.org/wiki/State_machine_replication).
Or [Blockchain](https://en.wikipedia.org/wiki/Blockchain_(database)) for short.
[State Machines](https://en.wikipedia.org/wiki/State_machine_replication).
Or [Blockchain](https://en.wikipedia.org/wiki/Blockchain_(database)), for short.

[](https://github.com/tendermint/tendermint/releases/latest)
- [Remote cluster using terraform and ansible](/docs/networks/terraform-and-ansible.md)
- [Join the Cosmos testnet](https://cosmos.network/testnet)

## Resources

### Tendermint Core

For details about the blockchain data structures and the p2p protocols, see the
the [Tendermint specification](/docs/spec).

For details on using the software, see the [documentation](/docs/) which is also
hosted at: https://tendermint.com/docs/

### Tools

Benchmarking and monitoring is provided by `tm-bench` and `tm-monitor`, respectively.
Their code is found [here](/tools) and these binaries need to be built seperately.
Additional documentation is found [here](/docs/tools).

### Sub-projects

* [Amino](http://github.com/tendermint/go-amino), a reflection-based improvement on proto3
* [IAVL](http://github.com/tendermint/iavl), Merkleized IAVL+ Tree implementation

### Applications

* [Cosmos SDK](http://github.com/cosmos/cosmos-sdk); a cryptocurrency application framework
* [Ethermint](http://github.com/cosmos/ethermint); Ethereum on Tendermint
* [Many more](https://tendermint.com/ecosystem)

### Research

* [The latest gossip on BFT consensus](https://arxiv.org/abs/1807.04938)
* [Master's Thesis on Tendermint](https://atrium.lib.uoguelph.ca/xmlui/handle/10214/9769)
* [Original Whitepaper](https://tendermint.com/static/docs/tendermint.pdf)
* [Blog](https://blog.cosmos.network/tendermint/home)

## Contributing

Yay open source! Please see our [contributing guidelines](CONTRIBUTING.md).
Please abide by the [Code of Conduct](CODE_OF_CONDUCT.md) in all interactions,
and the [contributing guidelines](CONTRIBUTING.md) when submitting code.

Join the larger community on the [forum](https://forum.cosmos.network/) and the [chat](https://riot.im/app/#/room/#tendermint:matrix.org).

To learn more about the structure of the software, watch the [Developer
Sessions](https://www.youtube.com/playlist?list=PLdQIb0qr3pnBbG5ZG-0gr3zM86_s8Rpqv)
and read some [Architectural
Decision Records](https://github.com/tendermint/tendermint/tree/master/docs/architecture).

Learn more by reading the code and comparing it to the
[specification](https://github.com/tendermint/tendermint/tree/develop/docs/spec).

## Versioning

### SemVer
### Semantic Versioning

Tendermint uses [SemVer](http://semver.org/) to determine when and how the version changes.
Tendermint uses [Semantic Versioning](http://semver.org/) to determine when and how the version changes.
According to SemVer, anything in the public API can change at any time before version 1.0.0

To provide some stability to Tendermint users in these 0.X.X days, the MINOR version is used
@@ -145,8 +122,40 @@ data into the new chain.
However, any bump in the PATCH version should be compatible with existing histories
(if not please open an [issue](https://github.com/tendermint/tendermint/issues)).

For more information on upgrading, see [here](./UPGRADING.md)
For more information on upgrading, see [UPGRADING.md](./UPGRADING.md)

## Code of Conduct
## Resources

### Tendermint Core

For details about the blockchain data structures and the p2p protocols, see the
[Tendermint specification](/docs/spec).

For details on using the software, see the [documentation](/docs/) which is also
hosted at: https://tendermint.com/docs/

### Tools

Benchmarking and monitoring is provided by `tm-bench` and `tm-monitor`, respectively.
Their code is found [here](/tools) and these binaries need to be built seperately.
Additional documentation is found [here](/docs/tools).

### Sub-projects

* [Amino](http://github.com/tendermint/go-amino), reflection-based proto3, with
interfaces
* [IAVL](http://github.com/tendermint/iavl), Merkleized IAVL+ Tree implementation

### Applications

* [Cosmos SDK](http://github.com/cosmos/cosmos-sdk); a cryptocurrency application framework
* [Ethermint](http://github.com/cosmos/ethermint); Ethereum on Tendermint
* [Many more](https://tendermint.com/ecosystem)

### Research

* [The latest gossip on BFT consensus](https://arxiv.org/abs/1807.04938)
* [Master's Thesis on Tendermint](https://atrium.lib.uoguelph.ca/xmlui/handle/10214/9769)
* [Original Whitepaper](https://tendermint.com/static/docs/tendermint.pdf)
* [Blog](https://blog.cosmos.network/tendermint/home)

Please read, understand and adhere to our [code of conduct](CODE_OF_CONDUCT.md).
UPGRADING.md (71 changed lines)

@@ -3,6 +3,77 @@
This guide provides steps to be followed when you upgrade your applications to
a newer version of Tendermint Core.

## v0.29.0

This release contains some breaking changes to the block and p2p protocols,
and will not be compatible with any previous versions of the software, primarily
due to changes in how various data structures are hashed.

Any implementations of Tendermint blockchain verification, including lite clients,
will need to be updated. For specific details:
- [Merkle tree](./docs/spec/blockchain/encoding.md#merkle-trees)
- [ConsensusParams](./docs/spec/blockchain/state.md#consensusparams)

There was also a small change to field ordering in the vote struct. Any
implementations of an out-of-process validator (like a Key-Management Server)
will need to be updated. For specific details:
- [Vote](https://github.com/tendermint/tendermint/blob/develop/docs/spec/consensus/signing.md#votes)

Finally, the proposer selection algorithm continues to evolve. See the
[work-in-progress
specification](https://github.com/tendermint/tendermint/pull/3140).

For everything else, please see the [CHANGELOG](./CHANGELOG.md#v0.29.0).

## v0.28.0

This release breaks the format for the `priv_validator.json` file
and the protocol used for the external validator process.
It is compatible with v0.27.0 blockchains (neither the BlockProtocol nor the
P2PProtocol have changed).

Please read carefully for details about upgrading.

**Note:** Backup your `config/priv_validator.json`
before proceeding.

### `priv_validator.json`

The `config/priv_validator.json` is now two files:
`config/priv_validator_key.json` and `data/priv_validator_state.json`.
The former contains the key material, the later contains the details on the last
message signed.

When running v0.28.0 for the first time, it will back up any pre-existing
`priv_validator.json` file and proceed to split it into the two new files.
Upgrading should happen automatically without problem.

To upgrade manually, use the provided `privValUpgrade.go` script, with exact paths for the old
`priv_validator.json` and the locations for the two new files. It's recomended
to use the default paths, of `config/priv_validator_key.json` and
`data/priv_validator_state.json`, respectively:

```
go run scripts/privValUpgrade.go <old-path> <new-key-path> <new-state-path>
```
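
For example, with the default `$TMHOME` of `~/.tendermint`, a manual run would look like the following (the paths are illustrative; substitute your own home directory):

```
go run scripts/privValUpgrade.go \
  ~/.tendermint/config/priv_validator.json \
  ~/.tendermint/config/priv_validator_key.json \
  ~/.tendermint/data/priv_validator_state.json
```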

### External validator signers

The Unix and TCP implementations of the remote signing validator
have been consolidated into a single implementation.
Thus in both cases, the external process is expected to dial
Tendermint. This is different from how Unix sockets used to work, where
Tendermint dialed the external process.

The `PubKeyMsg` was also split into separate `Request` and `Response` types
for consistency with other messages.

Note that the TCP sockets don't yet use a persistent key,
so while they're encrypted, they can't yet be properly authenticated.
See [#3105](https://github.com/tendermint/tendermint/issues/3105).
Note the Unix socket has neither encryption nor authentication, but will
add a shared-secret in [#3099](https://github.com/tendermint/tendermint/issues/3099).
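
A minimal sketch of the new dial-out pattern, mirroring the updated `cmd/priv_val_server` shown further down (the address, chain ID, and file paths here are illustrative):

```go
package main

import (
	"os"
	"time"

	"github.com/tendermint/tendermint/crypto/ed25519"
	"github.com/tendermint/tendermint/libs/log"
	"github.com/tendermint/tendermint/privval"
)

func main() {
	logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))
	pv := privval.LoadFilePV("config/priv_validator_key.json", "data/priv_validator_state.json")

	// The external signer now dials Tendermint; use DialUnixFn for unix sockets.
	dialer := privval.DialTCPFn("127.0.0.1:26659", 3*time.Second, ed25519.GenPrivKey())
	rs := privval.NewRemoteSigner(logger, "my-chain-id", pv, dialer)
	if err := rs.Start(); err != nil {
		panic(err)
	}
	select {} // serve signing requests until the process is killed
}
```
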
## v0.27.0

This release contains some breaking changes to the block and p2p protocols,
@@ -311,7 +311,7 @@ func TestLoadBlockPart(t *testing.T) {
gotPart, _, panicErr := doFn(loadPart)
require.Nil(t, panicErr, "an existent and proper block should not panic")
require.Nil(t, res, "a properly saved block should return a proper block")
require.Equal(t, gotPart.(*types.Part).Hash(), part1.Hash(),
require.Equal(t, gotPart.(*types.Part), part1,
"expecting successful retrieval of previously saved block")
}
@@ -3,6 +3,7 @@ package main
import (
"flag"
"os"
"time"

"github.com/tendermint/tendermint/crypto/ed25519"
cmn "github.com/tendermint/tendermint/libs/common"
@@ -34,13 +35,20 @@ func main() {

pv := privval.LoadFilePV(*privValKeyPath, *privValStatePath)

rs := privval.NewRemoteSigner(
logger,
*chainID,
*addr,
pv,
ed25519.GenPrivKey(),
)
var dialer privval.Dialer
protocol, address := cmn.ProtocolAndAddress(*addr)
switch protocol {
case "unix":
dialer = privval.DialUnixFn(address)
case "tcp":
connTimeout := 3 * time.Second // TODO
dialer = privval.DialTCPFn(address, connTimeout, ed25519.GenPrivKey())
default:
logger.Error("Unknown protocol", "protocol", protocol)
os.Exit(1)
}

rs := privval.NewRemoteSigner(logger, *chainID, pv, dialer)
err := rs.Start()
if err != nil {
panic(err)
@@ -10,6 +10,7 @@ import (
|
||||
|
||||
"github.com/tendermint/tendermint/abci/example/code"
|
||||
abci "github.com/tendermint/tendermint/abci/types"
|
||||
sm "github.com/tendermint/tendermint/state"
|
||||
"github.com/tendermint/tendermint/types"
|
||||
)
|
||||
|
||||
@@ -17,12 +18,17 @@ func init() {
|
||||
config = ResetConfig("consensus_mempool_test")
|
||||
}
|
||||
|
||||
// for testing
|
||||
func assertMempool(txn txNotifier) sm.Mempool {
|
||||
return txn.(sm.Mempool)
|
||||
}
|
||||
|
||||
func TestMempoolNoProgressUntilTxsAvailable(t *testing.T) {
|
||||
config := ResetConfig("consensus_mempool_txs_available_test")
|
||||
config.Consensus.CreateEmptyBlocks = false
|
||||
state, privVals := randGenesisState(1, false, 10)
|
||||
cs := newConsensusStateWithConfig(config, state, privVals[0], NewCounterApplication())
|
||||
cs.mempool.EnableTxsAvailable()
|
||||
assertMempool(cs.txNotifier).EnableTxsAvailable()
|
||||
height, round := cs.Height, cs.Round
|
||||
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock)
|
||||
startTestRound(cs, height, round)
|
||||
@@ -40,7 +46,7 @@ func TestMempoolProgressAfterCreateEmptyBlocksInterval(t *testing.T) {
|
||||
config.Consensus.CreateEmptyBlocksInterval = ensureTimeout
|
||||
state, privVals := randGenesisState(1, false, 10)
|
||||
cs := newConsensusStateWithConfig(config, state, privVals[0], NewCounterApplication())
|
||||
cs.mempool.EnableTxsAvailable()
|
||||
assertMempool(cs.txNotifier).EnableTxsAvailable()
|
||||
height, round := cs.Height, cs.Round
|
||||
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock)
|
||||
startTestRound(cs, height, round)
|
||||
@@ -55,7 +61,7 @@ func TestMempoolProgressInHigherRound(t *testing.T) {
|
||||
config.Consensus.CreateEmptyBlocks = false
|
||||
state, privVals := randGenesisState(1, false, 10)
|
||||
cs := newConsensusStateWithConfig(config, state, privVals[0], NewCounterApplication())
|
||||
cs.mempool.EnableTxsAvailable()
|
||||
assertMempool(cs.txNotifier).EnableTxsAvailable()
|
||||
height, round := cs.Height, cs.Round
|
||||
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock)
|
||||
newRoundCh := subscribe(cs.eventBus, types.EventQueryNewRound)
|
||||
@@ -91,7 +97,7 @@ func deliverTxsRange(cs *ConsensusState, start, end int) {
|
||||
for i := start; i < end; i++ {
|
||||
txBytes := make([]byte, 8)
|
||||
binary.BigEndian.PutUint64(txBytes, uint64(i))
|
||||
err := cs.mempool.CheckTx(txBytes, nil)
|
||||
err := assertMempool(cs.txNotifier).CheckTx(txBytes, nil)
|
||||
if err != nil {
|
||||
panic(fmt.Sprintf("Error after CheckTx: %v", err))
|
||||
}
|
||||
@@ -141,7 +147,7 @@ func TestMempoolRmBadTx(t *testing.T) {
|
||||
// Try to send the tx through the mempool.
|
||||
// CheckTx should not err, but the app should return a bad abci code
|
||||
// and the tx should get removed from the pool
|
||||
err := cs.mempool.CheckTx(txBytes, func(r *abci.Response) {
|
||||
err := assertMempool(cs.txNotifier).CheckTx(txBytes, func(r *abci.Response) {
|
||||
if r.GetCheckTx().Code != code.CodeTypeBadNonce {
|
||||
t.Fatalf("expected checktx to return bad nonce, got %v", r)
|
||||
}
|
||||
@@ -153,7 +159,7 @@ func TestMempoolRmBadTx(t *testing.T) {
|
||||
|
||||
// check for the tx
|
||||
for {
|
||||
txs := cs.mempool.ReapMaxBytesMaxGas(int64(len(txBytes)), -1)
|
||||
txs := assertMempool(cs.txNotifier).ReapMaxBytesMaxGas(int64(len(txBytes)), -1)
|
||||
if len(txs) == 0 {
|
||||
emptyMempoolCh <- struct{}{}
|
||||
return
|
||||
|
@@ -8,7 +8,11 @@ import (
|
||||
stdprometheus "github.com/prometheus/client_golang/prometheus"
|
||||
)
|
||||
|
||||
const MetricsSubsystem = "consensus"
|
||||
const (
|
||||
// MetricsSubsystem is a subsystem shared by all metrics exposed by this
|
||||
// package.
|
||||
MetricsSubsystem = "consensus"
|
||||
)
|
||||
|
||||
// Metrics contains metrics exposed by this package.
|
||||
type Metrics struct {
|
||||
@@ -50,101 +54,107 @@ type Metrics struct {
|
||||
}
|
||||
|
||||
// PrometheusMetrics returns Metrics build using Prometheus client library.
|
||||
func PrometheusMetrics(namespace string) *Metrics {
|
||||
// Optionally, labels can be provided along with their values ("foo",
|
||||
// "fooValue").
|
||||
func PrometheusMetrics(namespace string, labelsAndValues ...string) *Metrics {
|
||||
labels := []string{}
|
||||
for i := 0; i < len(labelsAndValues); i += 2 {
|
||||
labels = append(labels, labelsAndValues[i])
|
||||
}
|
||||
return &Metrics{
|
||||
Height: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
|
||||
Namespace: namespace,
|
||||
Subsystem: MetricsSubsystem,
|
||||
Name: "height",
|
||||
Help: "Height of the chain.",
|
||||
}, []string{}),
|
||||
}, labels).With(labelsAndValues...),
|
||||
Rounds: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
|
||||
Namespace: namespace,
|
||||
Subsystem: MetricsSubsystem,
|
||||
Name: "rounds",
|
||||
Help: "Number of rounds.",
|
||||
}, []string{}),
|
||||
}, labels).With(labelsAndValues...),
|
||||
|
||||
Validators: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
|
||||
Namespace: namespace,
|
||||
Subsystem: MetricsSubsystem,
|
||||
Name: "validators",
|
||||
Help: "Number of validators.",
|
||||
}, []string{}),
|
||||
}, labels).With(labelsAndValues...),
|
||||
ValidatorsPower: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
|
||||
Namespace: namespace,
|
||||
Subsystem: MetricsSubsystem,
|
||||
Name: "validators_power",
|
||||
Help: "Total power of all validators.",
|
||||
}, []string{}),
|
||||
}, labels).With(labelsAndValues...),
|
||||
MissingValidators: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
|
||||
Namespace: namespace,
|
||||
Subsystem: MetricsSubsystem,
|
||||
Name: "missing_validators",
|
||||
Help: "Number of validators who did not sign.",
|
||||
}, []string{}),
|
||||
}, labels).With(labelsAndValues...),
|
||||
MissingValidatorsPower: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
|
||||
Namespace: namespace,
|
||||
Subsystem: MetricsSubsystem,
|
||||
Name: "missing_validators_power",
|
||||
Help: "Total power of the missing validators.",
|
||||
}, []string{}),
|
||||
}, labels).With(labelsAndValues...),
|
||||
ByzantineValidators: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
|
||||
Namespace: namespace,
|
||||
Subsystem: MetricsSubsystem,
|
||||
Name: "byzantine_validators",
|
||||
Help: "Number of validators who tried to double sign.",
|
||||
}, []string{}),
|
||||
}, labels).With(labelsAndValues...),
|
||||
ByzantineValidatorsPower: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
|
||||
Namespace: namespace,
|
||||
Subsystem: MetricsSubsystem,
|
||||
Name: "byzantine_validators_power",
|
||||
Help: "Total power of the byzantine validators.",
|
||||
}, []string{}),
|
||||
}, labels).With(labelsAndValues...),
|
||||
|
||||
BlockIntervalSeconds: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
|
||||
Namespace: namespace,
|
||||
Subsystem: MetricsSubsystem,
|
||||
Name: "block_interval_seconds",
|
||||
Help: "Time between this and the last block.",
|
||||
}, []string{}),
|
||||
}, labels).With(labelsAndValues...),
|
||||
|
||||
NumTxs: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
|
||||
Namespace: namespace,
|
||||
Subsystem: MetricsSubsystem,
|
||||
Name: "num_txs",
|
||||
Help: "Number of transactions.",
|
||||
}, []string{}),
|
||||
}, labels).With(labelsAndValues...),
|
||||
BlockSizeBytes: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
|
||||
Namespace: namespace,
|
||||
Subsystem: MetricsSubsystem,
|
||||
Name: "block_size_bytes",
|
||||
Help: "Size of the block.",
|
||||
}, []string{}),
|
||||
}, labels).With(labelsAndValues...),
|
||||
TotalTxs: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
|
||||
Namespace: namespace,
|
||||
Subsystem: MetricsSubsystem,
|
||||
Name: "total_txs",
|
||||
Help: "Total number of transactions.",
|
||||
}, []string{}),
|
||||
}, labels).With(labelsAndValues...),
|
||||
CommittedHeight: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
|
||||
Namespace: namespace,
|
||||
Subsystem: MetricsSubsystem,
|
||||
Name: "latest_block_height",
|
||||
Help: "The latest block height.",
|
||||
}, []string{}),
|
||||
}, labels).With(labelsAndValues...),
|
||||
FastSyncing: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
|
||||
Namespace: namespace,
|
||||
Subsystem: MetricsSubsystem,
|
||||
Name: "fast_syncing",
|
||||
Help: "Whether or not a node is fast syncing. 1 if yes, 0 if no.",
|
||||
}, []string{}),
|
||||
}, labels).With(labelsAndValues...),
|
||||
BlockParts: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
|
||||
Namespace: namespace,
|
||||
Subsystem: MetricsSubsystem,
|
||||
Name: "block_parts",
|
||||
Help: "Number of blockparts transmitted by peer.",
|
||||
}, []string{"peer_id"}),
|
||||
}, append(labels, "peer_id")).With(labelsAndValues...),
|
||||
}
|
||||
}
|
||||
|
||||
|
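
The signature change above means label name/value pairs can now be threaded through every metric constructor; this is how the `chain_id` label from \#3082 gets attached. A hedged sketch of a call site (the namespace and the wiring in `node` are assumptions, not shown in this diff):

```go
package main

import (
	"fmt"

	cs "github.com/tendermint/tendermint/consensus"
)

func main() {
	chainID := "test-chain-1" // illustrative; the node passes the real chain ID

	// Every metric created by this constructor carries chain_id="test-chain-1".
	metrics := cs.PrometheusMetrics("tendermint", "chain_id", chainID)
	metrics.Height.Set(1)

	fmt.Println("consensus metrics registered with a chain_id label")
}
```
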
@@ -225,7 +225,7 @@ func TestReactorCreatesBlockWhenEmptyBlocksFalse(t *testing.T) {
|
||||
defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)
|
||||
|
||||
// send a tx
|
||||
if err := css[3].mempool.CheckTx([]byte{1, 2, 3}, nil); err != nil {
|
||||
if err := assertMempool(css[3].txNotifier).CheckTx([]byte{1, 2, 3}, nil); err != nil {
|
||||
//t.Fatal(err)
|
||||
}
|
||||
|
||||
@@ -448,7 +448,7 @@ func waitForAndValidateBlock(t *testing.T, n int, activeVals map[string]struct{}
|
||||
err := validateBlock(newBlock, activeVals)
|
||||
assert.Nil(t, err)
|
||||
for _, tx := range txs {
|
||||
err := css[j].mempool.CheckTx(tx, nil)
|
||||
err := assertMempool(css[j].txNotifier).CheckTx(tx, nil)
|
||||
assert.Nil(t, err)
|
||||
}
|
||||
}, css)
|
||||
|
@@ -137,7 +137,7 @@ func (pb *playback) replayReset(count int, newStepCh chan interface{}) error {
|
||||
pb.cs.Wait()
|
||||
|
||||
newCS := NewConsensusState(pb.cs.config, pb.genesisState.Copy(), pb.cs.blockExec,
|
||||
pb.cs.blockStore, pb.cs.mempool, pb.cs.evpool)
|
||||
pb.cs.blockStore, pb.cs.txNotifier, pb.cs.evpool)
|
||||
newCS.SetEventBus(pb.cs.eventBus)
|
||||
newCS.startForReplay()
|
||||
|
||||
|
@@ -87,7 +87,7 @@ func sendTxs(cs *ConsensusState, ctx context.Context) {
|
||||
return
|
||||
default:
|
||||
tx := []byte{byte(i)}
|
||||
cs.mempool.CheckTx(tx, nil)
|
||||
assertMempool(cs.txNotifier).CheckTx(tx, nil)
|
||||
i++
|
||||
}
|
||||
}
|
||||
|
@@ -57,6 +57,16 @@ func (ti *timeoutInfo) String() string {
|
||||
return fmt.Sprintf("%v ; %d/%d %v", ti.Duration, ti.Height, ti.Round, ti.Step)
|
||||
}
|
||||
|
||||
// interface to the mempool
|
||||
type txNotifier interface {
|
||||
TxsAvailable() <-chan struct{}
|
||||
}
|
||||
|
||||
// interface to the evidence pool
|
||||
type evidencePool interface {
|
||||
AddEvidence(types.Evidence) error
|
||||
}
|
||||
|
||||
// ConsensusState handles execution of the consensus algorithm.
|
||||
// It processes votes and proposals, and upon reaching agreement,
|
||||
// commits blocks to the chain and executes them against the application.
|
||||
@@ -68,16 +78,22 @@ type ConsensusState struct {
|
||||
config *cfg.ConsensusConfig
|
||||
privValidator types.PrivValidator // for signing votes
|
||||
|
||||
// services for creating and executing blocks
|
||||
blockExec *sm.BlockExecutor
|
||||
// store blocks and commits
|
||||
blockStore sm.BlockStore
|
||||
mempool sm.Mempool
|
||||
evpool sm.EvidencePool
|
||||
|
||||
// create and execute blocks
|
||||
blockExec *sm.BlockExecutor
|
||||
|
||||
// notify us if txs are available
|
||||
txNotifier txNotifier
|
||||
|
||||
// add evidence to the pool
|
||||
// when it's detected
|
||||
evpool evidencePool
|
||||
|
||||
// internal state
|
||||
mtx sync.RWMutex
|
||||
cstypes.RoundState
|
||||
triggeredTimeoutPrecommit bool
|
||||
state sm.State // State until height-1.
|
||||
|
||||
// state changes may be triggered by: msgs from peers,
|
||||
@@ -128,15 +144,15 @@ func NewConsensusState(
|
||||
state sm.State,
|
||||
blockExec *sm.BlockExecutor,
|
||||
blockStore sm.BlockStore,
|
||||
mempool sm.Mempool,
|
||||
evpool sm.EvidencePool,
|
||||
txNotifier txNotifier,
|
||||
evpool evidencePool,
|
||||
options ...StateOption,
|
||||
) *ConsensusState {
|
||||
cs := &ConsensusState{
|
||||
config: config,
|
||||
blockExec: blockExec,
|
||||
blockStore: blockStore,
|
||||
mempool: mempool,
|
||||
txNotifier: txNotifier,
|
||||
peerMsgQueue: make(chan msgInfo, msgQueueSize),
|
||||
internalMsgQueue: make(chan msgInfo, msgQueueSize),
|
||||
timeoutTicker: NewTimeoutTicker(),
|
||||
@@ -484,7 +500,7 @@ func (cs *ConsensusState) updateToState(state sm.State) {
|
||||
// If state isn't further out than cs.state, just ignore.
|
||||
// This happens when SwitchToConsensus() is called in the reactor.
|
||||
// We don't want to reset e.g. the Votes, but we still want to
|
||||
// signal the new round step, because other services (eg. mempool)
|
||||
// signal the new round step, because other services (eg. txNotifier)
|
||||
// depend on having an up-to-date peer state!
|
||||
if !cs.state.IsEmpty() && (state.LastBlockHeight <= cs.state.LastBlockHeight) {
|
||||
cs.Logger.Info("Ignoring updateToState()", "newHeight", state.LastBlockHeight+1, "oldHeight", cs.state.LastBlockHeight+1)
|
||||
@@ -599,7 +615,7 @@ func (cs *ConsensusState) receiveRoutine(maxSteps int) {
|
||||
var mi msgInfo
|
||||
|
||||
select {
|
||||
case <-cs.mempool.TxsAvailable():
|
||||
case <-cs.txNotifier.TxsAvailable():
|
||||
cs.handleTxsAvailable()
|
||||
case mi = <-cs.peerMsgQueue:
|
||||
cs.wal.Write(mi)
|
||||
@@ -715,6 +731,7 @@ func (cs *ConsensusState) handleTxsAvailable() {
|
||||
cs.mtx.Lock()
|
||||
defer cs.mtx.Unlock()
|
||||
// we only need to do this for round 0
|
||||
cs.enterNewRound(cs.Height, 0)
|
||||
cs.enterPropose(cs.Height, 0)
|
||||
}
|
||||
|
||||
@@ -765,7 +782,7 @@ func (cs *ConsensusState) enterNewRound(height int64, round int) {
|
||||
cs.ProposalBlockParts = nil
|
||||
}
|
||||
cs.Votes.SetRound(round + 1) // also track next round (round+1) to allow round-skipping
|
||||
cs.triggeredTimeoutPrecommit = false
|
||||
cs.TriggeredTimeoutPrecommit = false
|
||||
|
||||
cs.eventBus.PublishEventNewRound(cs.NewRoundEvent())
|
||||
cs.metrics.Rounds.Set(float64(round))
|
||||
@@ -921,20 +938,8 @@ func (cs *ConsensusState) createProposalBlock() (block *types.Block, blockParts
|
||||
return
|
||||
}
|
||||
|
||||
maxBytes := cs.state.ConsensusParams.BlockSize.MaxBytes
|
||||
maxGas := cs.state.ConsensusParams.BlockSize.MaxGas
|
||||
// bound evidence to 1/10th of the block
|
||||
evidence := cs.evpool.PendingEvidence(types.MaxEvidenceBytesPerBlock(maxBytes))
|
||||
// Mempool validated transactions
|
||||
txs := cs.mempool.ReapMaxBytesMaxGas(types.MaxDataBytes(
|
||||
maxBytes,
|
||||
cs.state.Validators.Size(),
|
||||
len(evidence),
|
||||
), maxGas)
|
||||
proposerAddr := cs.privValidator.GetPubKey().Address()
|
||||
block, parts := cs.state.MakeBlock(cs.Height, txs, commit, evidence, proposerAddr)
|
||||
|
||||
return block, parts
|
||||
return cs.blockExec.CreateProposalBlock(cs.Height, cs.state, commit, proposerAddr)
|
||||
}
|
||||
|
||||
// Enter: `timeoutPropose` after entering Propose.
|
||||
@@ -1123,12 +1128,12 @@ func (cs *ConsensusState) enterPrecommit(height int64, round int) {
|
||||
func (cs *ConsensusState) enterPrecommitWait(height int64, round int) {
|
||||
logger := cs.Logger.With("height", height, "round", round)
|
||||
|
||||
if cs.Height != height || round < cs.Round || (cs.Round == round && cs.triggeredTimeoutPrecommit) {
|
||||
if cs.Height != height || round < cs.Round || (cs.Round == round && cs.TriggeredTimeoutPrecommit) {
|
||||
logger.Debug(
|
||||
fmt.Sprintf(
|
||||
"enterPrecommitWait(%v/%v): Invalid args. "+
|
||||
"Current state is Height/Round: %v/%v/, triggeredTimeoutPrecommit:%v",
|
||||
height, round, cs.Height, cs.Round, cs.triggeredTimeoutPrecommit))
|
||||
"Current state is Height/Round: %v/%v/, TriggeredTimeoutPrecommit:%v",
|
||||
height, round, cs.Height, cs.Round, cs.TriggeredTimeoutPrecommit))
|
||||
return
|
||||
}
|
||||
if !cs.Votes.Precommits(round).HasTwoThirdsAny() {
|
||||
@@ -1138,7 +1143,7 @@ func (cs *ConsensusState) enterPrecommitWait(height int64, round int) {
|
||||
|
||||
defer func() {
|
||||
// Done enterPrecommitWait:
|
||||
cs.triggeredTimeoutPrecommit = true
|
||||
cs.TriggeredTimeoutPrecommit = true
|
||||
cs.newStep()
|
||||
}()
|
||||
|
||||
|
@@ -1279,6 +1279,71 @@ func TestCommitFromPreviousRound(t *testing.T) {
|
||||
ensureNewRound(newRoundCh, height+1, 0)
|
||||
}
|
||||
|
||||
type fakeTxNotifier struct {
|
||||
ch chan struct{}
|
||||
}
|
||||
|
||||
func (n *fakeTxNotifier) TxsAvailable() <-chan struct{} {
|
||||
return n.ch
|
||||
}
|
||||
|
||||
func (n *fakeTxNotifier) Notify() {
|
||||
n.ch <- struct{}{}
|
||||
}
|
||||
|
||||
func TestStartNextHeightCorrectly(t *testing.T) {
|
||||
cs1, vss := randConsensusState(4)
|
||||
cs1.config.SkipTimeoutCommit = false
|
||||
cs1.txNotifier = &fakeTxNotifier{ch: make(chan struct{})}
|
||||
|
||||
vs2, vs3, vs4 := vss[1], vss[2], vss[3]
|
||||
height, round := cs1.Height, cs1.Round
|
||||
|
||||
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
|
||||
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
|
||||
|
||||
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
|
||||
newBlockHeader := subscribe(cs1.eventBus, types.EventQueryNewBlockHeader)
|
||||
addr := cs1.privValidator.GetPubKey().Address()
|
||||
voteCh := subscribeToVoter(cs1, addr)
|
||||
|
||||
// start round and wait for propose and prevote
|
||||
startTestRound(cs1, height, round)
|
||||
ensureNewRound(newRoundCh, height, round)
|
||||
|
||||
ensureNewProposal(proposalCh, height, round)
|
||||
rs := cs1.GetRoundState()
|
||||
theBlockHash := rs.ProposalBlock.Hash()
|
||||
theBlockParts := rs.ProposalBlockParts.Header()
|
||||
|
||||
ensurePrevote(voteCh, height, round)
|
||||
validatePrevote(t, cs1, round, vss[0], theBlockHash)
|
||||
|
||||
signAddVotes(cs1, types.PrevoteType, theBlockHash, theBlockParts, vs2, vs3, vs4)
|
||||
|
||||
ensurePrecommit(voteCh, height, round)
|
||||
// the proposed block should now be locked and our precommit added
|
||||
validatePrecommit(t, cs1, round, round, vss[0], theBlockHash, theBlockHash)
|
||||
|
||||
rs = cs1.GetRoundState()
|
||||
|
||||
// add precommits
|
||||
signAddVotes(cs1, types.PrecommitType, nil, types.PartSetHeader{}, vs2)
|
||||
signAddVotes(cs1, types.PrecommitType, theBlockHash, theBlockParts, vs3)
|
||||
signAddVotes(cs1, types.PrecommitType, theBlockHash, theBlockParts, vs4)
|
||||
|
||||
ensureNewBlockHeader(newBlockHeader, height, theBlockHash)
|
||||
|
||||
rs = cs1.GetRoundState()
|
||||
assert.True(t, rs.TriggeredTimeoutPrecommit)
|
||||
|
||||
cs1.txNotifier.(*fakeTxNotifier).Notify()
|
||||
|
||||
ensureNewTimeout(timeoutProposeCh, height+1, round, cs1.config.TimeoutPropose.Nanoseconds())
|
||||
rs = cs1.GetRoundState()
|
||||
assert.False(t, rs.TriggeredTimeoutPrecommit, "triggeredTimeoutPrecommit should be false at the beginning of each round")
|
||||
}
|
||||
|
||||
//------------------------------------------------------------------------------------------
|
||||
// SlashingSuite
|
||||
// TODO: Slashing
|
||||
|
@@ -65,25 +65,26 @@ func (rs RoundStepType) String() string {
|
||||
// NOTE: Not thread safe. Should only be manipulated by functions downstream
|
||||
// of the cs.receiveRoutine
|
||||
type RoundState struct {
|
||||
Height int64 `json:"height"` // Height we are working on
|
||||
Round int `json:"round"`
|
||||
Step RoundStepType `json:"step"`
|
||||
StartTime time.Time `json:"start_time"`
|
||||
CommitTime time.Time `json:"commit_time"` // Subjective time when +2/3 precommits for Block at Round were found
|
||||
Validators *types.ValidatorSet `json:"validators"`
|
||||
Proposal *types.Proposal `json:"proposal"`
|
||||
ProposalBlock *types.Block `json:"proposal_block"`
|
||||
ProposalBlockParts *types.PartSet `json:"proposal_block_parts"`
|
||||
LockedRound int `json:"locked_round"`
|
||||
LockedBlock *types.Block `json:"locked_block"`
|
||||
LockedBlockParts *types.PartSet `json:"locked_block_parts"`
|
||||
ValidRound int `json:"valid_round"` // Last known round with POL for non-nil valid block.
|
||||
ValidBlock *types.Block `json:"valid_block"` // Last known block of POL mentioned above.
|
||||
ValidBlockParts *types.PartSet `json:"valid_block_parts"` // Last known block parts of POL metnioned above.
|
||||
Votes *HeightVoteSet `json:"votes"`
|
||||
CommitRound int `json:"commit_round"` //
|
||||
LastCommit *types.VoteSet `json:"last_commit"` // Last precommits at Height-1
|
||||
LastValidators *types.ValidatorSet `json:"last_validators"`
|
||||
Height int64 `json:"height"` // Height we are working on
|
||||
Round int `json:"round"`
|
||||
Step RoundStepType `json:"step"`
|
||||
StartTime time.Time `json:"start_time"`
|
||||
CommitTime time.Time `json:"commit_time"` // Subjective time when +2/3 precommits for Block at Round were found
|
||||
Validators *types.ValidatorSet `json:"validators"`
|
||||
Proposal *types.Proposal `json:"proposal"`
|
||||
ProposalBlock *types.Block `json:"proposal_block"`
|
||||
ProposalBlockParts *types.PartSet `json:"proposal_block_parts"`
|
||||
LockedRound int `json:"locked_round"`
|
||||
LockedBlock *types.Block `json:"locked_block"`
|
||||
LockedBlockParts *types.PartSet `json:"locked_block_parts"`
|
||||
ValidRound int `json:"valid_round"` // Last known round with POL for non-nil valid block.
|
||||
ValidBlock *types.Block `json:"valid_block"` // Last known block of POL mentioned above.
|
||||
ValidBlockParts *types.PartSet `json:"valid_block_parts"` // Last known block parts of POL metnioned above.
|
||||
Votes *HeightVoteSet `json:"votes"`
|
||||
CommitRound int `json:"commit_round"` //
|
||||
LastCommit *types.VoteSet `json:"last_commit"` // Last precommits at Height-1
|
||||
LastValidators *types.ValidatorSet `json:"last_validators"`
|
||||
TriggeredTimeoutPrecommit bool `json:"triggered_timeout_precommit"`
|
||||
}
|
||||
|
||||
// Compressed version of the RoundState for use in RPC
|
||||
|
crypto/merkle/hash.go (new file, 21 lines)

@@ -0,0 +1,21 @@
package merkle

import (
"github.com/tendermint/tendermint/crypto/tmhash"
)

// TODO: make these have a large predefined capacity
var (
leafPrefix = []byte{0}
innerPrefix = []byte{1}
)

// returns tmhash(0x00 || leaf)
func leafHash(leaf []byte) []byte {
return tmhash.Sum(append(leafPrefix, leaf...))
}

// returns tmhash(0x01 || left || right)
func innerHash(left []byte, right []byte) []byte {
return tmhash.Sum(append(innerPrefix, append(left, right...)...))
}
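
A small standalone illustration of why these prefixes matter (the helpers are re-implemented here because the ones above are unexported): hashing the concatenation of two child hashes as a leaf can no longer reproduce an inner node, which is the second-preimage resistance property RFC 6962 provides across tree levels.

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/tendermint/tendermint/crypto/tmhash"
)

// Illustrative copies of the unexported helpers above.
func leafHash(leaf []byte) []byte {
	return tmhash.Sum(append([]byte{0}, leaf...))
}

func innerHash(left, right []byte) []byte {
	return tmhash.Sum(append([]byte{1}, append(left, right...)...))
}

func main() {
	left, right := leafHash([]byte("Hello")), leafHash([]byte("World"))
	node := innerHash(left, right)

	// A forged "leaf" built from the two child hashes concatenated.
	forged := leafHash(append(left, right...))

	fmt.Printf("inner node:  %X\nforged leaf: %X\nequal: %v\n",
		node, forged, bytes.Equal(node, forged)) // equal: false
}
```
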
@@ -71,11 +71,11 @@ func (op SimpleValueOp) Run(args [][]byte) ([][]byte, error) {
|
||||
hasher.Write(value) // does not error
|
||||
vhash := hasher.Sum(nil)
|
||||
|
||||
bz := new(bytes.Buffer)
|
||||
// Wrap <op.Key, vhash> to hash the KVPair.
|
||||
hasher = tmhash.New()
|
||||
encodeByteSlice(hasher, []byte(op.key)) // does not error
|
||||
encodeByteSlice(hasher, []byte(vhash)) // does not error
|
||||
kvhash := hasher.Sum(nil)
|
||||
encodeByteSlice(bz, []byte(op.key)) // does not error
|
||||
encodeByteSlice(bz, []byte(vhash)) // does not error
|
||||
kvhash := leafHash(bz.Bytes())
|
||||
|
||||
if !bytes.Equal(kvhash, op.Proof.LeafHash) {
|
||||
return nil, cmn.NewError("leaf hash mismatch: want %X got %X", op.Proof.LeafHash, kvhash)
|
||||
|
97
crypto/merkle/rfc6962_test.go
Normal file
97
crypto/merkle/rfc6962_test.go
Normal file
@@ -0,0 +1,97 @@
|
||||
package merkle
|
||||
|
||||
// Copyright 2016 Google Inc. All Rights Reserved.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
// These tests were taken from https://github.com/google/trillian/blob/master/merkle/rfc6962/rfc6962_test.go,
|
||||
// and consequently fall under the above license.
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/hex"
|
||||
"testing"
|
||||
|
||||
"github.com/tendermint/tendermint/crypto/tmhash"
|
||||
)
|
||||
|
||||
func TestRFC6962Hasher(t *testing.T) {
|
||||
_, leafHashTrail := trailsFromByteSlices([][]byte{[]byte("L123456")})
|
||||
leafHash := leafHashTrail.Hash
|
||||
_, leafHashTrail = trailsFromByteSlices([][]byte{[]byte{}})
|
||||
emptyLeafHash := leafHashTrail.Hash
|
||||
for _, tc := range []struct {
|
||||
desc string
|
||||
got []byte
|
||||
want string
|
||||
}{
|
||||
// Since creating a merkle tree of no leaves is unsupported here, we skip
|
||||
// the corresponding trillian test vector.
|
||||
|
||||
// Check that the empty hash is not the same as the hash of an empty leaf.
|
||||
// echo -n 00 | xxd -r -p | sha256sum
|
||||
{
|
||||
desc: "RFC6962 Empty Leaf",
|
||||
want: "6e340b9cffb37a989ca544e6bb780a2c78901d3fb33738768511a30617afa01d"[:tmhash.Size*2],
|
||||
got: emptyLeafHash,
|
||||
},
|
||||
// echo -n 004C313233343536 | xxd -r -p | sha256sum
|
||||
{
|
||||
desc: "RFC6962 Leaf",
|
||||
want: "395aa064aa4c29f7010acfe3f25db9485bbd4b91897b6ad7ad547639252b4d56"[:tmhash.Size*2],
|
||||
got: leafHash,
|
||||
},
|
||||
// echo -n 014E3132334E343536 | xxd -r -p | sha256sum
|
||||
{
|
||||
desc: "RFC6962 Node",
|
||||
want: "aa217fe888e47007fa15edab33c2b492a722cb106c64667fc2b044444de66bbb"[:tmhash.Size*2],
|
||||
got: innerHash([]byte("N123"), []byte("N456")),
|
||||
},
|
||||
} {
|
||||
t.Run(tc.desc, func(t *testing.T) {
|
||||
wantBytes, err := hex.DecodeString(tc.want)
|
||||
if err != nil {
|
||||
t.Fatalf("hex.DecodeString(%x): %v", tc.want, err)
|
||||
}
|
||||
if got, want := tc.got, wantBytes; !bytes.Equal(got, want) {
|
||||
t.Errorf("got %x, want %x", got, want)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestRFC6962HasherCollisions(t *testing.T) {
|
||||
// Check that different leaves have different hashes.
|
||||
leaf1, leaf2 := []byte("Hello"), []byte("World")
|
||||
_, leafHashTrail := trailsFromByteSlices([][]byte{leaf1})
|
||||
hash1 := leafHashTrail.Hash
|
||||
_, leafHashTrail = trailsFromByteSlices([][]byte{leaf2})
|
||||
hash2 := leafHashTrail.Hash
|
||||
if bytes.Equal(hash1, hash2) {
|
||||
t.Errorf("Leaf hashes should differ, but both are %x", hash1)
|
||||
}
|
||||
// Compute an intermediate subtree hash.
|
||||
_, subHash1Trail := trailsFromByteSlices([][]byte{hash1, hash2})
|
||||
subHash1 := subHash1Trail.Hash
|
||||
// Check that this is not the same as a leaf hash of their concatenation.
|
||||
preimage := append(hash1, hash2...)
|
||||
_, forgedHashTrail := trailsFromByteSlices([][]byte{preimage})
|
||||
forgedHash := forgedHashTrail.Hash
|
||||
if bytes.Equal(subHash1, forgedHash) {
|
||||
t.Errorf("Hasher is not second-preimage resistant")
|
||||
}
|
||||
// Swap the order of nodes and check that the hash is different.
|
||||
_, subHash2Trail := trailsFromByteSlices([][]byte{hash2, hash1})
|
||||
subHash2 := subHash2Trail.Hash
|
||||
if bytes.Equal(subHash1, subHash2) {
|
||||
t.Errorf("Subtree hash does not depend on the order of leaves")
|
||||
}
|
||||
}
|
@@ -13,14 +13,14 @@ func TestSimpleMap(t *testing.T) {
|
||||
values []string // each string gets converted to []byte in test
|
||||
want string
|
||||
}{
|
||||
{[]string{"key1"}, []string{"value1"}, "321d150de16dceb51c72981b432b115045383259b1a550adf8dc80f927508967"},
|
||||
{[]string{"key1"}, []string{"value2"}, "2a9e4baf321eac99f6eecc3406603c14bc5e85bb7b80483cbfc75b3382d24a2f"},
|
||||
{[]string{"key1"}, []string{"value1"}, "a44d3cc7daba1a4600b00a2434b30f8b970652169810d6dfa9fb1793a2189324"},
|
||||
{[]string{"key1"}, []string{"value2"}, "0638e99b3445caec9d95c05e1a3fc1487b4ddec6a952ff337080360b0dcc078c"},
|
||||
// swap order with 2 keys
|
||||
{[]string{"key1", "key2"}, []string{"value1", "value2"}, "c4d8913ab543ba26aa970646d4c99a150fd641298e3367cf68ca45fb45a49881"},
|
||||
{[]string{"key2", "key1"}, []string{"value2", "value1"}, "c4d8913ab543ba26aa970646d4c99a150fd641298e3367cf68ca45fb45a49881"},
|
||||
{[]string{"key1", "key2"}, []string{"value1", "value2"}, "8fd19b19e7bb3f2b3ee0574027d8a5a4cec370464ea2db2fbfa5c7d35bb0cff3"},
|
||||
{[]string{"key2", "key1"}, []string{"value2", "value1"}, "8fd19b19e7bb3f2b3ee0574027d8a5a4cec370464ea2db2fbfa5c7d35bb0cff3"},
|
||||
// swap order with 3 keys
|
||||
{[]string{"key1", "key2", "key3"}, []string{"value1", "value2", "value3"}, "b23cef00eda5af4548a213a43793f2752d8d9013b3f2b64bc0523a4791196268"},
|
||||
{[]string{"key1", "key3", "key2"}, []string{"value1", "value3", "value2"}, "b23cef00eda5af4548a213a43793f2752d8d9013b3f2b64bc0523a4791196268"},
|
||||
{[]string{"key1", "key2", "key3"}, []string{"value1", "value2", "value3"}, "1dd674ec6782a0d586a903c9c63326a41cbe56b3bba33ed6ff5b527af6efb3dc"},
|
||||
{[]string{"key1", "key3", "key2"}, []string{"value1", "value3", "value2"}, "1dd674ec6782a0d586a903c9c63326a41cbe56b3bba33ed6ff5b527af6efb3dc"},
|
||||
}
|
||||
for i, tc := range tests {
|
||||
db := newSimpleMap()
|
||||
|
@@ -5,7 +5,6 @@ import (
|
||||
"errors"
|
||||
"fmt"
|
||||
|
||||
"github.com/tendermint/tendermint/crypto/tmhash"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
)
|
||||
|
||||
@@ -67,7 +66,8 @@ func SimpleProofsFromMap(m map[string][]byte) (rootHash []byte, proofs map[strin
|
||||
|
||||
// Verify that the SimpleProof proves the root hash.
|
||||
// Check sp.Index/sp.Total manually if needed
|
||||
func (sp *SimpleProof) Verify(rootHash []byte, leafHash []byte) error {
|
||||
func (sp *SimpleProof) Verify(rootHash []byte, leaf []byte) error {
|
||||
leafHash := leafHash(leaf)
|
||||
if sp.Total < 0 {
|
||||
return errors.New("Proof total must be positive")
|
||||
}
|
||||
@@ -128,19 +128,19 @@ func computeHashFromAunts(index int, total int, leafHash []byte, innerHashes [][
|
||||
if len(innerHashes) == 0 {
|
||||
return nil
|
||||
}
|
||||
numLeft := (total + 1) / 2
|
||||
numLeft := getSplitPoint(total)
|
||||
if index < numLeft {
|
||||
leftHash := computeHashFromAunts(index, numLeft, leafHash, innerHashes[:len(innerHashes)-1])
|
||||
if leftHash == nil {
|
||||
return nil
|
||||
}
|
||||
return simpleHashFromTwoHashes(leftHash, innerHashes[len(innerHashes)-1])
|
||||
return innerHash(leftHash, innerHashes[len(innerHashes)-1])
|
||||
}
|
||||
rightHash := computeHashFromAunts(index-numLeft, total-numLeft, leafHash, innerHashes[:len(innerHashes)-1])
|
||||
if rightHash == nil {
|
||||
return nil
|
||||
}
|
||||
return simpleHashFromTwoHashes(innerHashes[len(innerHashes)-1], rightHash)
|
||||
return innerHash(innerHashes[len(innerHashes)-1], rightHash)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -182,12 +182,13 @@ func trailsFromByteSlices(items [][]byte) (trails []*SimpleProofNode, root *Simp
|
||||
case 0:
|
||||
return nil, nil
|
||||
case 1:
|
||||
trail := &SimpleProofNode{tmhash.Sum(items[0]), nil, nil, nil}
|
||||
trail := &SimpleProofNode{leafHash(items[0]), nil, nil, nil}
|
||||
return []*SimpleProofNode{trail}, trail
|
||||
default:
|
||||
lefts, leftRoot := trailsFromByteSlices(items[:(len(items)+1)/2])
|
||||
rights, rightRoot := trailsFromByteSlices(items[(len(items)+1)/2:])
|
||||
rootHash := simpleHashFromTwoHashes(leftRoot.Hash, rightRoot.Hash)
|
||||
k := getSplitPoint(len(items))
|
||||
lefts, leftRoot := trailsFromByteSlices(items[:k])
|
||||
rights, rightRoot := trailsFromByteSlices(items[k:])
|
||||
rootHash := innerHash(leftRoot.Hash, rightRoot.Hash)
|
||||
root := &SimpleProofNode{rootHash, nil, nil, nil}
|
||||
leftRoot.Parent = root
|
||||
leftRoot.Right = rightRoot
|
||||
|
@@ -1,23 +1,9 @@
|
||||
package merkle
|
||||
|
||||
import (
|
||||
"github.com/tendermint/tendermint/crypto/tmhash"
|
||||
"math/bits"
|
||||
)
|
||||
|
||||
// simpleHashFromTwoHashes is the basic operation of the Merkle tree: Hash(left | right).
|
||||
func simpleHashFromTwoHashes(left, right []byte) []byte {
|
||||
var hasher = tmhash.New()
|
||||
err := encodeByteSlice(hasher, left)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = encodeByteSlice(hasher, right)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
return hasher.Sum(nil)
|
||||
}
|
||||
|
||||
// SimpleHashFromByteSlices computes a Merkle tree where the leaves are the byte slice,
|
||||
// in the provided order.
|
||||
func SimpleHashFromByteSlices(items [][]byte) []byte {
|
||||
@@ -25,11 +11,12 @@ func SimpleHashFromByteSlices(items [][]byte) []byte {
|
||||
case 0:
|
||||
return nil
|
||||
case 1:
|
||||
return tmhash.Sum(items[0])
|
||||
return leafHash(items[0])
|
||||
default:
|
||||
left := SimpleHashFromByteSlices(items[:(len(items)+1)/2])
|
||||
right := SimpleHashFromByteSlices(items[(len(items)+1)/2:])
|
||||
return simpleHashFromTwoHashes(left, right)
|
||||
k := getSplitPoint(len(items))
|
||||
left := SimpleHashFromByteSlices(items[:k])
|
||||
right := SimpleHashFromByteSlices(items[k:])
|
||||
return innerHash(left, right)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -44,3 +31,17 @@ func SimpleHashFromMap(m map[string][]byte) []byte {
|
||||
}
|
||||
return sm.Hash()
|
||||
}
|
||||
|
||||
// getSplitPoint returns the largest power of 2 less than length
|
||||
func getSplitPoint(length int) int {
|
||||
if length < 1 {
|
||||
panic("Trying to split a tree with size < 1")
|
||||
}
|
||||
uLength := uint(length)
|
||||
bitlen := bits.Len(uLength)
|
||||
k := 1 << uint(bitlen-1)
|
||||
if k == length {
|
||||
k >>= 1
|
||||
}
|
||||
return k
|
||||
}
|
||||
|
@@ -34,7 +34,6 @@ func TestSimpleProof(t *testing.T) {
|
||||
|
||||
// For each item, check the trail.
|
||||
for i, item := range items {
|
||||
itemHash := tmhash.Sum(item)
|
||||
proof := proofs[i]
|
||||
|
||||
// Check total/index
|
||||
@@ -43,30 +42,53 @@ func TestSimpleProof(t *testing.T) {
|
||||
require.Equal(t, proof.Total, total, "Unmatched totals: %d vs %d", proof.Total, total)
|
||||
|
||||
// Verify success
|
||||
err := proof.Verify(rootHash, itemHash)
|
||||
require.NoError(t, err, "Verificatior failed: %v.", err)
|
||||
err := proof.Verify(rootHash, item)
|
||||
require.NoError(t, err, "Verification failed: %v.", err)
|
||||
|
||||
// Trail too long should make it fail
|
||||
origAunts := proof.Aunts
|
||||
proof.Aunts = append(proof.Aunts, cmn.RandBytes(32))
|
||||
err = proof.Verify(rootHash, itemHash)
|
||||
err = proof.Verify(rootHash, item)
|
||||
require.Error(t, err, "Expected verification to fail for wrong trail length")
|
||||
|
||||
proof.Aunts = origAunts
|
||||
|
||||
// Trail too short should make it fail
|
||||
proof.Aunts = proof.Aunts[0 : len(proof.Aunts)-1]
|
||||
err = proof.Verify(rootHash, itemHash)
|
||||
err = proof.Verify(rootHash, item)
|
||||
require.Error(t, err, "Expected verification to fail for wrong trail length")
|
||||
|
||||
proof.Aunts = origAunts
|
||||
|
||||
// Mutating the itemHash should make it fail.
|
||||
err = proof.Verify(rootHash, MutateByteSlice(itemHash))
|
||||
err = proof.Verify(rootHash, MutateByteSlice(item))
|
||||
require.Error(t, err, "Expected verification to fail for mutated leaf hash")
|
||||
|
||||
// Mutating the rootHash should make it fail.
|
||||
err = proof.Verify(MutateByteSlice(rootHash), itemHash)
|
||||
err = proof.Verify(MutateByteSlice(rootHash), item)
|
||||
require.Error(t, err, "Expected verification to fail for mutated root hash")
|
||||
}
|
||||
}
|
||||
|
||||
func Test_getSplitPoint(t *testing.T) {
|
||||
tests := []struct {
|
||||
length int
|
||||
want int
|
||||
}{
|
||||
{1, 0},
|
||||
{2, 1},
|
||||
{3, 2},
|
||||
{4, 2},
|
||||
{5, 4},
|
||||
{10, 8},
|
||||
{20, 16},
|
||||
{100, 64},
|
||||
{255, 128},
|
||||
{256, 128},
|
||||
{257, 256},
|
||||
}
|
||||
for _, tt := range tests {
|
||||
got := getSplitPoint(tt.length)
|
||||
require.Equal(t, tt.want, got, "getSplitPoint(%d) = %v, want %v", tt.length, got, tt.want)
|
||||
}
|
||||
}
|
||||
|
@@ -45,6 +45,6 @@ Tendermint.
|
||||
See the following for more extensive documentation:
|
||||
|
||||
- [Interchain Standard for the Light-Client REST API](https://github.com/cosmos/cosmos-sdk/pull/1028)
|
||||
- [Tendermint RPC Docs](https://tendermint.github.io/slate/)
|
||||
- [Tendermint RPC Docs](https://tendermint.com/rpc/)
|
||||
- [Tendermint in Production](../tendermint-core/running-in-production.md)
|
||||
- [ABCI spec](./abci-spec.md)
|
||||
|
@@ -63,6 +63,13 @@
|
||||
"author": "Zach Balder",
|
||||
"description": "Public service reporting and tracking"
|
||||
},
|
||||
{
|
||||
"name": "ParadigmCore",
|
||||
"url": "https://github.com/ParadigmFoundation/ParadigmCore",
|
||||
"language": "TypeScript",
|
||||
"author": "Paradigm Labs",
|
||||
"description": "Reference implementation of the Paradigm Protocol, and OrderStream network client."
|
||||
},
|
||||
{
|
||||
"name": "Passchain",
|
||||
"url": "https://github.com/trusch/passchain",
|
||||
|
@@ -78,7 +78,7 @@ endpoint:
|
||||
curl "localhost:26657/tx_search?query=\"account.name='igor'\"&prove=true"
|
||||
```
|
||||
|
||||
Check out [API docs](https://tendermint.github.io/slate/?shell#txsearch)
|
||||
Check out [API docs](https://tendermint.com/rpc/#txsearch)
|
||||
for more information on query syntax and other options.
|
||||
|
||||
## Subscribing to transactions
|
||||
@@ -97,5 +97,5 @@ by providing a query to `/subscribe` RPC endpoint.
|
||||
}
|
||||
```
|
||||
|
||||
Check out [API docs](https://tendermint.github.io/slate/#subscribe) for
|
||||
Check out [API docs](https://tendermint.com/rpc/#subscribe) for
|
||||
more information on query syntax and other options.
|
||||
|
@@ -7,6 +7,7 @@
|
||||
28-08-2018: Third version after Ethan's comments
|
||||
30-08-2018: AminoOverheadForBlock => MaxAminoOverheadForBlock
|
||||
31-08-2018: Bounding evidence and chain ID
|
||||
13-01-2019: Add section on MaxBytes vs MaxDataBytes
|
||||
|
||||
## Context
|
||||
|
||||
@@ -20,6 +21,32 @@ We should just remove MaxTxs all together and stick with MaxBytes, and have a
|
||||
But we can't just reap BlockSize.MaxBytes, since MaxBytes is for the entire block,
|
||||
not for the txs inside the block. There's extra amino overhead + the actual
|
||||
headers on top of the actual transactions + evidence + last commit.
|
||||
We could also consider using a MaxDataBytes instead of or in addition to MaxBytes.
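As a rough sketch of what a MaxDataBytes helper would have to account for (all names and overhead constants below are illustrative assumptions, not the actual API): the tx budget is whatever remains of MaxBytes after subtracting worst-case amino framing, the header, the last commit, and evidence.

```go
// Illustrative worst-case overheads; the real values would live in the types package.
const (
	assumedMaxAminoOverheadForBlock int64 = 1024 // amino framing
	assumedMaxHeaderBytes           int64 = 512  // worst-case header
	assumedMaxVoteBytes             int64 = 200  // one precommit in the last commit
	assumedMaxEvidenceBytes         int64 = 400  // one piece of evidence
)

// maxDataBytes returns how many bytes of txs fit into a block of maxBytes total.
func maxDataBytes(maxBytes int64, valsCount, evidenceCount int) int64 {
	return maxBytes -
		assumedMaxAminoOverheadForBlock -
		assumedMaxHeaderBytes -
		int64(valsCount)*assumedMaxVoteBytes -
		int64(evidenceCount)*assumedMaxEvidenceBytes
}
```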
|
||||
|
||||
## MaxBytes vs MaxDataBytes
|
||||
|
||||
The [PR #3045](https://github.com/tendermint/tendermint/pull/3045) suggested
|
||||
additional clarity/justification was necessary here, with respect to the use
|
||||
of MaxDataBytes in addition to, or instead of, MaxBytes.
|
||||
|
||||
MaxBytes provides a clear limit on the total size of a block that requires no
|
||||
additional calculation if you want to use it to bound resource usage, and there
|
||||
have been considerable discussions about optimizing Tendermint around 1MB blocks.
|
||||
Regardless, we need some maximum on the size of a block so we can avoid
|
||||
unmarshalling blocks that are too big during the consensus, and it seems more
|
||||
straightforward to provide a single fixed number for this rather than a
|
||||
computation of "MaxDataBytes + everything else you need to make room for
|
||||
(signatures, evidence, header)". MaxBytes provides a simple bound so we can
|
||||
always say "blocks are less than X MB".
|
||||
|
||||
Having both MaxBytes and MaxDataBytes feels like unnecessary complexity. It's
|
||||
not particularly surprising for MaxBytes to imply the maximum size of the
|
||||
entire block (not just txs); one just has to know that a block includes the header,
txs, evidence, and votes. For more fine-grained control over the txs included in the
|
||||
block, there is the MaxGas. In practice, the MaxGas may be expected to do most of
|
||||
the tx throttling, and the MaxBytes to just serve as an upper bound on the total
|
||||
size. Applications can use MaxGas as a MaxDataBytes by just taking the gas for
|
||||
every tx to be its size in bytes.
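For example, a minimal sketch of that trick, assuming the ABCI Go `Application` interface of this release (illustrative only, not prescribed by the ADR):

```go
import abci "github.com/tendermint/tendermint/abci/types"

// CheckTx reports GasWanted equal to the tx size in bytes, so that setting
// ConsensusParams.BlockSize.MaxGas effectively acts as a MaxDataBytes.
func (app *Application) CheckTx(tx []byte) abci.ResponseCheckTx {
	return abci.ResponseCheckTx{
		Code:      0, // CodeTypeOK
		GasWanted: int64(len(tx)),
	}
}
```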
|
||||
|
||||
## Proposed solution
|
||||
|
||||
@@ -61,7 +88,7 @@ MaxXXX stayed the same.
|
||||
|
||||
## Status
|
||||
|
||||
Proposed.
|
||||
Accepted.
|
||||
|
||||
## Consequences
|
||||
|
||||
|
@@ -126,6 +126,312 @@ func TestConsensusXXX(t *testing.T) {
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
## Consensus Executor
|
||||
|
||||
## Consensus Core
|
||||
|
||||
```go
|
||||
type Event interface{}
|
||||
|
||||
type EventNewHeight struct {
|
||||
Height int64
|
||||
ValidatorId int
|
||||
}
|
||||
|
||||
type EventNewRound HeightAndRound
|
||||
|
||||
type EventProposal struct {
|
||||
Height int64
|
||||
Round int
|
||||
Timestamp Time
|
||||
BlockID BlockID
|
||||
POLRound int
|
||||
Sender int
|
||||
}
|
||||
|
||||
type Majority23PrevotesBlock struct {
|
||||
Height int64
|
||||
Round int
|
||||
BlockID BlockID
|
||||
}
|
||||
|
||||
type Majority23PrecommitBlock struct {
|
||||
Height int64
|
||||
Round int
|
||||
BlockID BlockID
|
||||
}
|
||||
|
||||
type HeightAndRound struct {
|
||||
Height int64
|
||||
Round int
|
||||
}
|
||||
|
||||
type Majority23PrevotesAny HeightAndRound
|
||||
type Majority23PrecommitAny HeightAndRound
|
||||
type TimeoutPropose HeightAndRound
|
||||
type TimeoutPrevotes HeightAndRound
|
||||
type TimeoutPrecommit HeightAndRound
|
||||
|
||||
|
||||
type Message interface{}
|
||||
|
||||
type MessageProposal struct {
|
||||
Height int64
|
||||
Round int
|
||||
BlockID BlockID
|
||||
POLRound int
|
||||
}
|
||||
|
||||
type VoteType int
|
||||
|
||||
const (
|
||||
VoteTypeUnknown VoteType = iota
|
||||
Prevote
|
||||
Precommit
|
||||
)
|
||||
|
||||
|
||||
type MessageVote struct {
|
||||
Height int64
|
||||
Round int
|
||||
BlockID BlockID
|
||||
Type VoteType
|
||||
}
|
||||
|
||||
|
||||
type MessageDecision struct {
|
||||
Height int64
|
||||
Round int
|
||||
BlockID BlockID
|
||||
}
|
||||
|
||||
type TriggerTimeout struct {
|
||||
Height int64
|
||||
Round int
|
||||
Duration Duration
|
||||
}
|
||||
|
||||
|
||||
type RoundStep int
|
||||
|
||||
const (
|
||||
RoundStepUnknown RoundStep = iota
|
||||
RoundStepPropose
|
||||
RoundStepPrevote
|
||||
RoundStepPrecommit
|
||||
RoundStepCommit
|
||||
)
|
||||
|
||||
type State struct {
|
||||
Height int64
|
||||
Round int
|
||||
Step RoundStep
|
||||
LockedValue BlockID
|
||||
LockedRound int
|
||||
ValidValue BlockID
|
||||
ValidRound int
|
||||
ValidatorId int
|
||||
ValidatorSetSize int
|
||||
}
|
||||
|
||||
func proposer(height int64, round int) int {}
|
||||
func getValue() BlockID {}
|
||||
|
||||
func Consensus(event Event, state State) (State, Message, TriggerTimeout) {
|
||||
msg = nil
|
||||
timeout = nil
|
||||
switch event := event.(type) {
|
||||
case EventNewHeight:
|
||||
if event.Height > state.Height {
|
||||
state.Height = event.Height
|
||||
state.Round = -1
|
||||
state.Step = RoundStepPropose
|
||||
state.LockedValue = nil
|
||||
state.LockedRound = -1
|
||||
state.ValidValue = nil
|
||||
state.ValidRound = -1
|
||||
state.ValidatorId = event.ValidatorId
|
||||
}
|
||||
return state, msg, timeout
|
||||
|
||||
case EventNewRound:
|
||||
if event.Height == state.Height and event.Round > state.Round {
|
||||
state.Round = event.Round
|
||||
state.Step = RoundStepPropose
|
||||
if proposer(state.Height, state.Round) == state.ValidatorId {
|
||||
proposal = state.ValidValue
|
||||
if proposal == nil {
|
||||
proposal = getValue()
|
||||
}
|
||||
msg = MessageProposal { state.Height, state.Round, proposal, state.ValidRound }
|
||||
}
|
||||
timeout = TriggerTimeout { state.Height, state.Round, timeoutPropose(state.Round) }
|
||||
}
|
||||
return state, msg, timeout
|
||||
|
||||
case EventProposal:
|
||||
if event.Height == state.Height and event.Round == state.Round and
|
||||
event.Sender == proposer(state.Height, state.Round) and state.Step == RoundStepPropose {
|
||||
if event.POLRound >= state.LockedRound or event.BlockID == state.LockedValue or state.LockedRound == -1 {
|
||||
msg = MessageVote { state.Height, state.Round, event.BlockID, Prevote }
|
||||
}
|
||||
state.Step = RoundStepPrevote
|
||||
}
|
||||
return state, msg, timeout
|
||||
|
||||
case TimeoutPropose:
|
||||
if event.Height == state.Height and event.Round == state.Round and state.Step == RoundStepPropose {
|
||||
msg = MessageVote { state.Height, state.Round, nil, Prevote }
|
||||
state.Step = RoundStepPrevote
|
||||
}
|
||||
return state, msg, timeout
|
||||
|
||||
case Majority23PrevotesBlock:
|
||||
if event.Height == state.Height and event.Round == state.Round and state.Step >= RoundStepPrevote and event.Round > state.ValidRound {
|
||||
state.ValidRound = event.Round
|
||||
state.ValidValue = event.BlockID
|
||||
if state.Step == RoundStepPrevote {
|
||||
state.LockedRound = event.Round
|
||||
state.LockedValue = event.BlockID
|
||||
msg = MessageVote { state.Height, state.Round, event.BlockID, Precommit }
|
||||
state.Step = RoundStepPrecommit
|
||||
}
|
||||
}
|
||||
return state, msg, timeout
|
||||
|
||||
case Majority23PrevotesAny:
|
||||
if event.Height == state.Height and event.Round == state.Round and state.Step == RoundStepPrevote {
|
||||
timeout = TriggerTimeout { state.Height, state.Round, timeoutPrevote(state.Round) }
|
||||
}
|
||||
return state, msg, timeout
|
||||
|
||||
case TimeoutPrevote:
|
||||
if event.Height == state.Height and event.Round == state.Round and state.Step == RoundStepPrevote {
|
||||
msg = MessageVote { state.Height, state.Round, nil, Precommit }
|
||||
state.Step = RoundStepPrecommit
|
||||
}
|
||||
return state, msg, timeout
|
||||
|
||||
case Majority23PrecommitBlock:
|
||||
if event.Height == state.Height {
|
||||
state.Step = RoundStepCommit
|
||||
state.LockedValue = event.BlockID
|
||||
}
|
||||
return state, msg, timeout
|
||||
|
||||
case Majority23PrecommitAny:
|
||||
if event.Height == state.Height and event.Round == state.Round {
|
||||
timeout = TriggerTimeout { state.Height, state.Round, timeoutPrecommit(state.Round) }
|
||||
}
|
||||
return state, msg, timeout
|
||||
|
||||
case TimeoutPrecommit:
|
||||
if event.Height == state.Height and event.Round == state.Round {
|
||||
state.Round = state.Round + 1
|
||||
}
|
||||
return state, msg, timeout
|
||||
}
|
||||
}
|
||||
|
||||
func ConsensusExecutor() {
|
||||
proposal = nil
|
||||
votes = HeightVoteSet { Height: 1 }
|
||||
state = State {
|
||||
Height: 1
|
||||
Round: 0
|
||||
Step: RoundStepPropose
|
||||
LockedValue: nil
|
||||
LockedRound: -1
|
||||
ValidValue: nil
|
||||
ValidRound: -1
|
||||
}
|
||||
|
||||
event = EventNewHeight {1, id}
|
||||
state, msg, timeout = Consensus(event, state)
|
||||
|
||||
event = EventNewRound {state.Height, 0}
|
||||
state, msg, timeout = Consensus(event, state)
|
||||
|
||||
if msg != nil {
|
||||
send msg
|
||||
}
|
||||
|
||||
if timeout != nil {
|
||||
trigger timeout
|
||||
}
|
||||
|
||||
for {
|
||||
select {
|
||||
case message := <- msgCh:
|
||||
switch msg := message.(type) {
|
||||
case MessageProposal:
|
||||
|
||||
case MessageVote:
|
||||
if msg.Height == state.Height {
|
||||
newVote = votes.AddVote(msg)
|
||||
if newVote {
|
||||
switch msg.Type {
|
||||
case Prevote:
|
||||
prevotes = votes.Prevotes(msg.Round)
|
||||
if prevotes.WeakCertificate() and msg.Round > state.Round {
|
||||
event = EventNewRound { msg.Height, msg.Round }
|
||||
state, msg, timeout = Consensus(event, state)
|
||||
state = handleStateChange(state, msg, timeout)
|
||||
}
|
||||
|
||||
if blockID, ok = prevotes.TwoThirdsMajority(); ok and blockID != nil {
|
||||
if msg.Round == state.Round and hasBlock(blockID) {
|
||||
event = Majority23PrevotesBlock { msg.Height, msg.Round, blockID }
|
||||
state, msg, timeout = Consensus(event, state)
|
||||
state = handleStateChange(state, msg, timeout)
|
||||
}
|
||||
if proposal != nil and proposal.POLRound == msg.Round and hasBlock(blockID) {
|
||||
event = EventProposal {
|
||||
Height: state.Height
|
||||
Round: state.Round
|
||||
BlockID: blockID
|
||||
POLRound: proposal.POLRound
|
||||
Sender: message.Sender
|
||||
}
|
||||
state, msg, timeout = Consensus(event, state)
|
||||
state = handleStateChange(state, msg, timeout)
|
||||
}
|
||||
}
|
||||
|
||||
if prevotes.HasTwoThirdsAny() and msg.Round == state.Round {
|
||||
event = Majority23PrevotesAny { msg.Height, msg.Round }
|
||||
state, msg, timeout = Consensus(event, state)
|
||||
state = handleStateChange(state, msg, timeout)
|
||||
}
|
||||
|
||||
case Precommit:
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
case timeout := <- timeoutCh:
|
||||
|
||||
case block := <- blockCh:
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func handleStateChange(state, msg, timeout) State {
|
||||
if state.Step == RoundStepCommit {
|
||||
state = ExecuteBlock(state.LockedValue)
|
||||
}
|
||||
if msg != nil {
|
||||
send msg
|
||||
}
|
||||
if timeout != nil {
|
||||
trigger timeout
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
### Implementation roadmap
|
||||
|
||||
* implement proposed implementation
|
||||
|
@@ -5,6 +5,8 @@ Author: Anton Kaliaev (@melekes)
|
||||
## Changelog
|
||||
|
||||
02-10-2018: Initial draft
|
||||
16-01-2019: Second version based on our conversation with Jae
|
||||
17-01-2019: Third version explaining how new design solves current issues
|
||||
|
||||
## Context
|
||||
|
||||
@@ -40,7 +42,14 @@ goroutines can be used to avoid uncontrolled memory growth.
|
||||
|
||||
In certain cases, this is what you want. But in our case, because we need
|
||||
strict ordering of events (if event A was published before B, the guaranteed
|
||||
delivery order will be A -> B), we can't use goroutines.
|
||||
delivery order will be A -> B), we can't publish msg in a new goroutine every time.
|
||||
|
||||
We can also have a goroutine per subscriber, although we'd need to be careful
|
||||
with the number of subscribers. It's more difficult to implement as well +
|
||||
unclear if we'll benefit from it (because we'd be forced to create N additional
|
||||
channels to distribute msg to these goroutines).
|
||||
|
||||
### Non-blocking send
|
||||
|
||||
There is also a question of whether we should have a non-blocking send:
|
||||
|
||||
@@ -56,15 +65,14 @@ for each subscriber {
|
||||
```
|
||||
|
||||
This fixes the "slow client problem", but there is no way for a slow client to
|
||||
know if it had missed a message. On the other hand, if we're going to stick
|
||||
with blocking send, **devs must always ensure subscriber's handling code does not
|
||||
block**. As you can see, there is an implicit choice between ordering guarantees
|
||||
and using goroutines.
|
||||
know if it had missed a message. We could return a second channel and close it
|
||||
to indicate subscription termination. On the other hand, if we're going to
|
||||
stick with blocking send, **devs must always ensure subscriber's handling code
|
||||
does not block**, which is a hard task to put on their shoulders.
|
||||
|
||||
The interim option is to run a pool of goroutines for a single message and wait for all
|
||||
goroutines to finish. This will solve "slow client problem", but we'd still
|
||||
have to wait `max(goroutine_X_time)` before we can publish the next message.
|
||||
My opinion: not worth doing.
|
||||
|
||||
### Channels vs Callbacks
|
||||
|
||||
@@ -76,8 +84,6 @@ memory leaks and/or memory usage increase.
|
||||
|
||||
Go channels are de-facto standard for carrying data between goroutines.
|
||||
|
||||
**Question: Is it worth switching to callback functions?**
|
||||
|
||||
### Why `Subscribe()` accepts an `out` channel?
|
||||
|
||||
Because in our tests, we create buffered channels (cap: 1). Alternatively, we
|
||||
@@ -85,27 +91,89 @@ can make capacity an argument.
|
||||
|
||||
## Decision
|
||||
|
||||
Change Subscribe() function to return out channel:
|
||||
Change Subscribe() function to return a `Subscription` struct:
|
||||
|
||||
```go
|
||||
// outCap can be used to set capacity of out channel (unbuffered by default).
|
||||
Subscribe(ctx context.Context, clientID string, query Query, outCap... int) (out <-chan interface{}, err error) {
|
||||
type Subscription struct {
|
||||
// private fields
|
||||
}
|
||||
|
||||
func (s *Subscription) Out() <-chan MsgAndTags
|
||||
func (s *Subscription) Cancelled() <-chan struct{}
|
||||
func (s *Subscription) Err() error
|
||||
```
|
||||
|
||||
It's more idiomatic since we're closing it during Unsubscribe/UnsubscribeAll calls.
|
||||
Out returns a channel onto which messages and tags are published.
|
||||
Unsubscribe/UnsubscribeAll does not close the channel, to prevent clients from
receiving a nil message.
|
||||
|
||||
Also, we should make tags available to subscribers:
|
||||
Cancelled returns a channel that's closed when the subscription is terminated
|
||||
and is supposed to be used in a select statement.
|
||||
|
||||
If Cancelled is not closed yet, Err() returns nil.
|
||||
If Cancelled is closed, Err returns a non-nil error explaining why:
|
||||
Unsubscribed if the subscriber chooses to unsubscribe,
OutOfCapacity if the subscriber is not pulling messages fast enough and the Out channel becomes full.
|
||||
After Err returns a non-nil error, successive calls to Err() return the same error.
|
||||
|
||||
```go
|
||||
subscription, err := pubsub.Subscribe(...)
|
||||
if err != nil {
|
||||
// ...
|
||||
}
|
||||
for {
|
||||
select {
|
||||
case msgAndTags := <-subscription.Out():
|
||||
// ...
|
||||
case <-subscription.Cancelled():
|
||||
return subscription.Err()
|
||||
	}
}
|
||||
```
|
||||
|
||||
Make Out() channel buffered (cap: 1) by default. In most cases, we want to
|
||||
terminate the slow subscriber. Only in rare cases, we want to block the pubsub
|
||||
(e.g. when debugging consensus). This should lower the chances of the pubsub
|
||||
being frozen.
|
||||
|
||||
```go
|
||||
// outCap can be used to set capacity of Out channel (1 by default). Set to 0
|
||||
// for unbuffered channel (WARNING: it may block the pubsub).
|
||||
Subscribe(ctx context.Context, clientID string, query Query, outCap... int) (Subscription, error) {
|
||||
```
|
||||
|
||||
Also, Out() channel should return tags along with a message:
|
||||
|
||||
```go
|
||||
type MsgAndTags struct {
|
||||
Msg interface{}
|
||||
Tags TagMap
|
||||
}
|
||||
|
||||
// outCap can be used to set capacity of out channel (unbuffered by default).
|
||||
Subscribe(ctx context.Context, clientID string, query Query, outCap... int) (out <-chan MsgAndTags, err error) {
|
||||
```
|
||||
|
||||
to inform clients of which Tags were used with Msg.
|
||||
|
||||
### How this new design solves the current issues?
|
||||
|
||||
https://github.com/tendermint/tendermint/issues/951 (https://github.com/tendermint/tendermint/issues/1880)
|
||||
|
||||
Because of the non-blocking send, a situation where we deadlock is no longer
possible. If the client stops reading messages, it will be removed.
|
||||
|
||||
https://github.com/tendermint/tendermint/issues/1879
|
||||
|
||||
MsgAndTags is used now instead of a plain message.
|
||||
|
||||
### Future problems and their possible solutions
|
||||
|
||||
https://github.com/tendermint/tendermint/issues/2826
|
||||
|
||||
One question I am still pondering: how to prevent pubsub from slowing
|
||||
down consensus. We can increase the pubsub queue size (which is 0 now). Also,
|
||||
it's probably a good idea to limit the total number of subscribers.
|
||||
|
||||
This can be done automatically. Say we set the queue size to 1000 and, when it's >=
|
||||
80% full, refuse new subscriptions.
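A sketch of that automatic guard (purely illustrative; the queue-length arguments and the error value are assumptions, not part of the pubsub package):

```go
import "errors"

var errTooManySubscribers = errors.New("pubsub queue is almost full; refusing new subscriptions")

// canAcceptSubscription refuses new subscriptions once the internal
// message queue is at least 80% full.
func canAcceptSubscription(queueLen, queueCap int) error {
	if queueCap > 0 && queueLen*5 >= queueCap*4 { // queueLen >= 0.8 * queueCap
		return errTooManySubscribers
	}
	return nil
}
```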
|
||||
|
||||
## Status
|
||||
|
||||
In review
|
||||
@@ -116,7 +184,10 @@ In review
|
||||
|
||||
- more idiomatic interface
|
||||
- subscribers know what tags msg was published with
|
||||
- subscribers aware of the reason their subscription was cancelled
|
||||
|
||||
### Negative
|
||||
|
||||
- (since v1) no concurrency when it comes to publishing messages
|
||||
|
||||
### Neutral
|
||||
|
@@ -4,7 +4,7 @@ With Docker Compose, you can spin up local testnets with a single command.
|
||||
|
||||
## Requirements
|
||||
|
||||
1. [Install tendermint](/docs/install.md)
|
||||
1. [Install tendermint](/docs/introduction/install.md)
|
||||
2. [Install docker](https://docs.docker.com/engine/installation/)
|
||||
3. [Install docker-compose](https://docs.docker.com/compose/install/)
|
||||
|
||||
@@ -78,6 +78,78 @@ cd $GOPATH/src/github.com/tendermint/tendermint
|
||||
rm -rf ./build/node*
|
||||
```
|
||||
|
||||
## Configuring abci containers
|
||||
|
||||
To use your own abci applications with 4-node setup edit the [docker-compose.yaml](https://github.com/tendermint/tendermint/blob/develop/docker-compose.yml) file and add image to your abci application.
|
||||
|
||||
```
|
||||
abci0:
|
||||
container_name: abci0
|
||||
image: "abci-image"
|
||||
build:
|
||||
context: .
|
||||
dockerfile: abci.Dockerfile
|
||||
command: <insert command to run your abci application>
|
||||
networks:
|
||||
localnet:
|
||||
ipv4_address: 192.167.10.6
|
||||
|
||||
abci1:
|
||||
container_name: abci1
|
||||
image: "abci-image"
|
||||
build:
|
||||
context: .
|
||||
dockerfile: abci.Dockerfile
|
||||
command: <insert command to run your abci application>
|
||||
networks:
|
||||
localnet:
|
||||
ipv4_address: 192.167.10.7
|
||||
|
||||
abci2:
|
||||
container_name: abci2
|
||||
image: "abci-image"
|
||||
build:
|
||||
context: .
|
||||
dockerfile: abci.Dockerfile
|
||||
command: <insert command to run your abci application>
|
||||
networks:
|
||||
localnet:
|
||||
ipv4_address: 192.167.10.8
|
||||
|
||||
abci3:
|
||||
container_name: abci3
|
||||
image: "abci-image"
|
||||
build:
|
||||
context: .
|
||||
dockerfile: abci.Dockerfile
|
||||
command: <insert command to run your abci application>
|
||||
networks:
|
||||
localnet:
|
||||
ipv4_address: 192.167.10.9
|
||||
|
||||
```
|
||||
|
||||
Override the [command](https://github.com/tendermint/tendermint/blob/master/networks/local/localnode/Dockerfile#L12) in each node to connect to its ABCI application.
|
||||
|
||||
```
|
||||
node0:
|
||||
container_name: node0
|
||||
image: "tendermint/localnode"
|
||||
ports:
|
||||
- "26656-26657:26656-26657"
|
||||
environment:
|
||||
- ID=0
|
||||
- LOG=$${LOG:-tendermint.log}
|
||||
volumes:
|
||||
- ./build:/tendermint:Z
|
||||
command: node --proxy_app=tcp://abci0:26658
|
||||
networks:
|
||||
localnet:
|
||||
ipv4_address: 192.167.10.2
|
||||
```
|
||||
|
||||
Do the same for node1, node2 and node3, then [run the testnet](https://github.com/tendermint/tendermint/blob/master/docs/networks/docker-compose.md#run-a-testnet).
|
||||
|
||||
## Logging
|
||||
|
||||
Log is saved under the attached volume, in the `tendermint.log` file. If the
|
||||
|
@@ -166,6 +166,11 @@ the tags will be hashed into the next block header.
|
||||
The application may set the validator set during InitChain, and update it during
|
||||
EndBlock.
|
||||
|
||||
Note that the maximum total power of the validator set is bounded by
|
||||
`MaxTotalVotingPower = MaxInt64 / 8`. Applications are responsible for ensuring
|
||||
they do not make changes to the validator set that cause it to exceed this
|
||||
limit.
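A sketch of the kind of guard an application might run before emitting validator updates (the helper itself is an assumption; only the `MaxInt64 / 8` bound comes from the note above):

```go
import "math"

const maxTotalVotingPower = int64(math.MaxInt64) / 8

// withinPowerLimit checks that the total power of the resulting validator set
// stays within MaxTotalVotingPower. newPowers holds each validator's power
// after applying the proposed updates.
func withinPowerLimit(newPowers []int64) bool {
	var total int64
	for _, p := range newPowers {
		total += p
	}
	return total <= maxTotalVotingPower
}
```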
|
||||
|
||||
### InitChain
|
||||
|
||||
ResponseInitChain can return a list of validators.
|
||||
@@ -206,6 +211,7 @@ following rules:
|
||||
- if the validator does not already exist, it will be added to the validator
|
||||
set with the given power
|
||||
- if the validator does already exist, its power will be adjusted to the given power
|
||||
- the total power of the new validator set must not exceed MaxTotalVotingPower
|
||||
|
||||
Note the updates returned in block `H` will only take effect at block `H+2`.
|
||||
|
||||
|
@@ -51,7 +51,7 @@ type Header struct {
|
||||
|
||||
// hashes of block data
|
||||
LastCommitHash []byte // commit from validators from the last block
|
||||
DataHash []byte // Merkle root of transactions
|
||||
DataHash []byte // MerkleRoot of transaction hashes
|
||||
|
||||
// hashes from the app output from the prev block
|
||||
ValidatorsHash []byte // validators for the current block
|
||||
@@ -83,25 +83,27 @@ type Version struct {
|
||||
## BlockID
|
||||
|
||||
The `BlockID` contains two distinct Merkle roots of the block.
|
||||
The first, used as the block's main hash, is the Merkle root
|
||||
of all the fields in the header. The second, used for secure gossipping of
|
||||
the block during consensus, is the Merkle root of the complete serialized block
|
||||
cut into parts. The `BlockID` includes these two hashes, as well as the number of
|
||||
parts.
|
||||
The first, used as the block's main hash, is the MerkleRoot
|
||||
of all the fields in the header (ie. `MerkleRoot(header)`).
|
||||
The second, used for secure gossipping of the block during consensus,
|
||||
is the MerkleRoot of the complete serialized block
|
||||
cut into parts (ie. `MerkleRoot(MakeParts(block))`).
|
||||
The `BlockID` includes these two hashes, as well as the number of
|
||||
parts (ie. `len(MakeParts(block))`).
|
||||
|
||||
```go
|
||||
type BlockID struct {
|
||||
Hash []byte
|
||||
Parts PartsHeader
|
||||
PartsHeader PartSetHeader
|
||||
}
|
||||
|
||||
type PartsHeader struct {
|
||||
Hash []byte
|
||||
type PartSetHeader struct {
|
||||
Total int32
|
||||
Hash []byte
|
||||
}
|
||||
```
|
||||
|
||||
TODO: link to details of merkle sums.
|
||||
See [MerkleRoot](/docs/spec/blockchain/encoding.md#MerkleRoot) for details.
|
||||
|
||||
## Time
|
||||
|
||||
@@ -109,10 +111,6 @@ Tendermint uses the
|
||||
[Google.Protobuf.WellKnownTypes.Timestamp](https://developers.google.com/protocol-buffers/docs/reference/csharp/class/google/protobuf/well-known-types/timestamp)
|
||||
format, which uses two integers, one for Seconds and for Nanoseconds.
|
||||
|
||||
NOTE: there is currently a small divergence between Tendermint and the
|
||||
Google.Protobuf.WellKnownTypes.Timestamp that should be resolved. See [this
|
||||
issue](https://github.com/tendermint/go-amino/issues/223) for details.
|
||||
|
||||
## Data
|
||||
|
||||
Data is just a wrapper for a list of transactions, where transactions are
|
||||
@@ -146,12 +144,12 @@ The vote includes information about the validator signing it.
|
||||
|
||||
```go
|
||||
type Vote struct {
|
||||
Type SignedMsgType // byte
|
||||
Type byte
|
||||
Height int64
|
||||
Round int
|
||||
Timestamp time.Time
|
||||
BlockID BlockID
|
||||
ValidatorAddress Address
|
||||
Timestamp Time
|
||||
ValidatorAddress []byte
|
||||
ValidatorIndex int
|
||||
Signature []byte
|
||||
}
|
||||
@@ -164,8 +162,8 @@ a _precommit_ has `vote.Type == 2`.
|
||||
## Signature
|
||||
|
||||
Signatures in Tendermint are raw bytes representing the underlying signature.
|
||||
The only signature scheme currently supported for Tendermint validators is
|
||||
ED25519. The signature is the raw 64-byte ED25519 signature.
|
||||
|
||||
See the [signature spec](/docs/spec/blockchain/encoding.md#key-types) for more.
|
||||
|
||||
## EvidenceData
|
||||
|
||||
@@ -192,6 +190,8 @@ type DuplicateVoteEvidence struct {
|
||||
}
|
||||
```
|
||||
|
||||
See the [pubkey spec](/docs/spec/blockchain/encoding.md#key-types) for more.
|
||||
|
||||
## Validation
|
||||
|
||||
Here we describe the validation rules for every element in a block.
|
||||
@@ -209,7 +209,7 @@ the current version of the `state` corresponds to the state
|
||||
after executing transactions from the `prevBlock`.
|
||||
Elements of an object are accessed as expected,
|
||||
ie. `block.Header`.
|
||||
See [here](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/state.md) for the definition of `state`.
|
||||
See the [definition of `State`](/docs/spec/blockchain/state.md).
|
||||
|
||||
### Header
|
||||
|
||||
@@ -288,28 +288,25 @@ The first block has `block.Header.TotalTxs = block.Header.NumberTxs`.
|
||||
LastBlockID is the previous block's BlockID:
|
||||
|
||||
```go
|
||||
prevBlockParts := MakeParts(prevBlock, state.LastConsensusParams.BlockGossip.BlockPartSize)
|
||||
prevBlockParts := MakeParts(prevBlock)
|
||||
block.Header.LastBlockID == BlockID {
|
||||
Hash: SimpleMerkleRoot(prevBlock.Header),
|
||||
Hash: MerkleRoot(prevBlock.Header),
|
||||
PartsHeader{
|
||||
Hash: SimpleMerkleRoot(prevBlockParts),
|
||||
Hash: MerkleRoot(prevBlockParts),
|
||||
Total: len(prevBlockParts),
|
||||
},
|
||||
}
|
||||
```
|
||||
|
||||
Note: it depends on the ConsensusParams,
|
||||
which are held in the `state` and may be updated by the application.
|
||||
|
||||
The first block has `block.Header.LastBlockID == BlockID{}`.
|
||||
|
||||
### LastCommitHash
|
||||
|
||||
```go
|
||||
block.Header.LastCommitHash == SimpleMerkleRoot(block.LastCommit)
|
||||
block.Header.LastCommitHash == MerkleRoot(block.LastCommit.Precommits)
|
||||
```
|
||||
|
||||
Simple Merkle root of the votes included in the block.
|
||||
MerkleRoot of the votes included in the block.
|
||||
These are the votes that committed the previous block.
|
||||
|
||||
The first block has `block.Header.LastCommitHash == []byte{}`
|
||||
@@ -317,37 +314,42 @@ The first block has `block.Header.LastCommitHash == []byte{}`
|
||||
### DataHash
|
||||
|
||||
```go
|
||||
block.Header.DataHash == SimpleMerkleRoot(block.Txs.Txs)
|
||||
block.Header.DataHash == MerkleRoot(Hashes(block.Txs.Txs))
|
||||
```
|
||||
|
||||
Simple Merkle root of the transactions included in the block.
|
||||
MerkleRoot of the hashes of transactions included in the block.
|
||||
|
||||
Note the transactions are hashed before being included in the Merkle tree,
|
||||
so the leaves of the Merkle tree are the hashes, not the transactions
|
||||
themselves. This is because transaction hashes are regularly used as identifiers for
|
||||
transactions.
|
||||
|
||||
### ValidatorsHash
|
||||
|
||||
```go
|
||||
block.ValidatorsHash == SimpleMerkleRoot(state.Validators)
|
||||
block.ValidatorsHash == MerkleRoot(state.Validators)
|
||||
```
|
||||
|
||||
Simple Merkle root of the current validator set that is committing the block.
|
||||
MerkleRoot of the current validator set that is committing the block.
|
||||
This can be used to validate the `LastCommit` included in the next block.
|
||||
|
||||
### NextValidatorsHash
|
||||
|
||||
```go
|
||||
block.NextValidatorsHash == SimpleMerkleRoot(state.NextValidators)
|
||||
block.NextValidatorsHash == MerkleRoot(state.NextValidators)
|
||||
```
|
||||
|
||||
Simple Merkle root of the next validator set that will be the validator set that commits the next block.
|
||||
MerkleRoot of the next validator set that will be the validator set that commits the next block.
|
||||
This is included so that the current validator set gets a chance to sign the
|
||||
next validator sets Merkle root.
|
||||
|
||||
### ConsensusParamsHash
|
||||
### ConsensusHash
|
||||
|
||||
```go
|
||||
block.ConsensusParamsHash == TMHASH(amino(state.ConsensusParams))
|
||||
block.ConsensusHash == state.ConsensusParams.Hash()
|
||||
```
|
||||
|
||||
Hash of the amino-encoded consensus parameters.
|
||||
Hash of the amino-encoding of a subset of the consensus parameters.
|
||||
|
||||
### AppHash
|
||||
|
||||
@@ -362,20 +364,20 @@ The first block has `block.Header.AppHash == []byte{}`.
|
||||
### LastResultsHash
|
||||
|
||||
```go
|
||||
block.ResultsHash == SimpleMerkleRoot(state.LastResults)
|
||||
block.ResultsHash == MerkleRoot(state.LastResults)
|
||||
```
|
||||
|
||||
Simple Merkle root of the results of the transactions in the previous block.
|
||||
MerkleRoot of the results of the transactions in the previous block.
|
||||
|
||||
The first block has `block.Header.ResultsHash == []byte{}`.
|
||||
|
||||
## EvidenceHash
|
||||
|
||||
```go
|
||||
block.EvidenceHash == SimpleMerkleRoot(block.Evidence)
|
||||
block.EvidenceHash == MerkleRoot(block.Evidence)
|
||||
```
|
||||
|
||||
Simple Merkle root of the evidence of Byzantine behaviour included in this block.
|
||||
MerkleRoot of the evidence of Byzantine behaviour included in this block.
|
||||
|
||||
### ProposerAddress
|
||||
|
||||
|
@@ -30,6 +30,12 @@ For example, the byte-array `[0xA, 0xB]` would be encoded as `0x020A0B`,
|
||||
while a byte-array containing 300 entires beginning with `[0xA, 0xB, ...]` would
|
||||
be encoded as `0xAC020A0B...` where `0xAC02` is the UVarint encoding of 300.
|
||||
|
||||
## Hashing
|
||||
|
||||
Tendermint uses `SHA256` as its hash function.
|
||||
Objects are always Amino encoded before being hashed.
|
||||
So `SHA256(obj)` is short for `SHA256(AminoEncode(obj))`.
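A minimal sketch of that convention, assuming a go-amino codec and the `tmhash` wrapper around SHA256:

```go
import (
	amino "github.com/tendermint/go-amino"
	"github.com/tendermint/tendermint/crypto/tmhash"
)

var cdc = amino.NewCodec()

// hashObj computes SHA256(AminoEncode(obj)).
func hashObj(obj interface{}) []byte {
	bz, err := cdc.MarshalBinaryBare(obj)
	if err != nil {
		panic(err)
	}
	return tmhash.Sum(bz)
}
```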
|
||||
|
||||
## Public Key Cryptography
|
||||
|
||||
Tendermint uses Amino to distinguish between different types of private keys,
|
||||
@@ -68,23 +74,27 @@ For example, the 33-byte (or 0x21-byte in hex) Secp256k1 pubkey
|
||||
would be encoded as
|
||||
`EB5AE98721020BD40F225A57ED383B440CF073BC5539D0341F5767D2BF2D78406D00475A2EE9`
|
||||
|
||||
### Addresses
|
||||
### Key Types
|
||||
|
||||
Addresses for each public key types are computed as follows:
|
||||
Each type specifies its own pubkey, address, and signature format.
|
||||
|
||||
#### Ed25519
|
||||
|
||||
First 20-bytes of the SHA256 hash of the raw 32-byte public key:
|
||||
TODO: pubkey
|
||||
|
||||
The address is the first 20-bytes of the SHA256 hash of the raw 32-byte public key:
|
||||
|
||||
```
|
||||
address = SHA256(pubkey)[:20]
|
||||
```
|
||||
|
||||
NOTE: before v0.22.0, this was the RIPEMD160 of the Amino encoded public key.
|
||||
The signature is the raw 64-byte ED25519 signature.
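As a concrete sketch of the address rule above, using only the standard library (not the actual `tendermint/crypto` code):

```go
import "crypto/sha256"

// ed25519Address returns the first 20 bytes of SHA256(pubkey),
// where pubkey is the raw 32-byte Ed25519 public key.
func ed25519Address(pubkey [32]byte) []byte {
	h := sha256.Sum256(pubkey[:])
	return h[:20]
}
```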
|
||||
|
||||
#### Secp256k1
|
||||
|
||||
RIPEMD160 hash of the SHA256 hash of the OpenSSL compressed public key:
|
||||
TODO: pubkey
|
||||
|
||||
The address is the RIPEMD160 hash of the SHA256 hash of the OpenSSL compressed public key:
|
||||
|
||||
```
|
||||
address = RIPEMD160(SHA256(pubkey))
|
||||
@@ -92,12 +102,21 @@ address = RIPEMD160(SHA256(pubkey))
|
||||
|
||||
This is the same as Bitcoin.
|
||||
|
||||
The signature is the 64-byte concatenation of ECDSA `r` and `s` (ie. `r || s`),
|
||||
where `s` is lexicographically less than its inverse, to prevent malleability.
|
||||
This is like Ethereum, but without the extra byte for pubkey recovery, since
|
||||
Tendermint assumes the pubkey is always provided anyway.
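A sketch of that low-`s` rule (a generic ECDSA detail, not code quoted from the Tendermint crypto package): if `s` falls in the upper half of the group order `n`, it is replaced by `n - s` before serializing.

```go
import "math/big"

// lowS maps s into the lower half of the group order n, so that of the two
// equivalent signatures (r, s) and (r, n-s) only one form is accepted.
func lowS(s, n *big.Int) *big.Int {
	half := new(big.Int).Rsh(n, 1) // n / 2
	if s.Cmp(half) > 0 {
		return new(big.Int).Sub(n, s)
	}
	return s
}
```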
|
||||
|
||||
#### Multisig
|
||||
|
||||
TODO
|
||||
|
||||
## Other Common Types
|
||||
|
||||
### BitArray
|
||||
|
||||
The BitArray is used in block headers and some consensus messages to signal
|
||||
whether or not something was done by each validator. BitArray is represented
|
||||
The BitArray is used in some consensus messages to represent votes received from
|
||||
validators, or parts received in a block. It is represented
|
||||
with a struct containing the number of bits (`Bits`) and the bit-array itself
|
||||
encoded in base64 (`Elems`).
|
||||
|
||||
@@ -119,24 +138,27 @@ representing `1` and `0`. Ie. the BitArray `10110` would be JSON encoded as
|
||||
Part is used to break up blocks into pieces that can be gossiped in parallel
|
||||
and securely verified using a Merkle tree of the parts.
|
||||
|
||||
Part contains the index of the part in the larger set (`Index`), the actual
|
||||
underlying data of the part (`Bytes`), and a simple Merkle proof that the part is contained in
|
||||
the larger set (`Proof`).
|
||||
Part contains the index of the part (`Index`), the actual
|
||||
underlying data of the part (`Bytes`), and a Merkle proof that the part is contained in
|
||||
the set (`Proof`).
|
||||
|
||||
```go
|
||||
type Part struct {
|
||||
Index int
|
||||
Bytes byte[]
|
||||
Proof byte[]
|
||||
Bytes []byte
|
||||
Proof SimpleProof
|
||||
}
|
||||
```
|
||||
|
||||
See details of SimpleProof, below.
|
||||
|
||||
### MakeParts
|
||||
|
||||
Encode an object using Amino and slice it into parts.
|
||||
Tendermint uses a part size of 65536 bytes.
|
||||
|
||||
```go
|
||||
func MakeParts(obj interface{}, partSize int) []Part
|
||||
func MakeParts(block Block) []Part
|
||||
```
|
||||
|
||||
## Merkle Trees
|
||||
@@ -144,12 +166,17 @@ func MakeParts(obj interface{}, partSize int) []Part
|
||||
For an overview of Merkle trees, see
|
||||
[wikipedia](https://en.wikipedia.org/wiki/Merkle_tree)
|
||||
|
||||
A Simple Tree is a simple compact binary tree for a static list of items. Simple Merkle trees are used in numerous places in Tendermint to compute a cryptographic digest of a data structure. In a Simple Tree, the transactions and validation signatures of a block are hashed using this simple merkle tree logic.
|
||||
We use the RFC 6962 specification of a merkle tree, with sha256 as the hash function.
|
||||
Merkle trees are used throughout Tendermint to compute a cryptographic digest of a data structure.
|
||||
The differences between RFC 6962 and the simplest form of a merkle tree are that:
|
||||
|
||||
If the number of items is not a power of two, the tree will not be full
|
||||
and some leaf nodes will be at different levels. Simple Tree tries to
|
||||
keep both sides of the tree the same size, but the left side may be one
|
||||
greater, for example:
|
||||
1) leaf nodes and inner nodes have different hashes.
|
||||
This is for "second pre-image resistance", to prevent the proof of an inner node from being valid as the proof of a leaf.
|
||||
The leaf nodes are `SHA256(0x00 || leaf_data)`, and inner nodes are `SHA256(0x01 || left_hash || right_hash)`.
|
||||
|
||||
2) When the number of items isn't a power of two, the left half of the tree is as big as it could be.
|
||||
(the largest power of two less than the number of items). This allows new leaves to be added with less
|
||||
recomputation. For example:
|
||||
|
||||
```
|
||||
Simple Tree with 6 items Simple Tree with 7 items
|
||||
@@ -163,68 +190,79 @@ greater, for example:
|
||||
/ \ / \ / \ / \
|
||||
/ \ / \ / \ / \
|
||||
/ \ / \ / \ / \
|
||||
* h2 * h5 * * * h6
|
||||
/ \ / \ / \ / \ / \
|
||||
h0 h1 h3 h4 h0 h1 h2 h3 h4 h5
|
||||
* * h4 h5 * * * h6
|
||||
/ \ / \ / \ / \ / \
|
||||
h0 h1 h2 h3 h0 h1 h2 h3 h4 h5
|
||||
```
|
||||
|
||||
Tendermint always uses the `TMHASH` hash function, which is equivalent to
|
||||
SHA256:
|
||||
### MerkleRoot
|
||||
|
||||
```
|
||||
func TMHASH(bz []byte) []byte {
|
||||
return SHA256(bz)
|
||||
}
|
||||
```
|
||||
|
||||
### Simple Merkle Root
|
||||
|
||||
The function `SimpleMerkleRoot` is a simple recursive function defined as follows:
|
||||
The function `MerkleRoot` is a simple recursive function defined as follows:
|
||||
|
||||
```go
|
||||
func SimpleMerkleRoot(hashes [][]byte) []byte{
|
||||
switch len(hashes) {
|
||||
case 0:
|
||||
return nil
|
||||
case 1:
|
||||
return hashes[0]
|
||||
default:
|
||||
left := SimpleMerkleRoot(hashes[:(len(hashes)+1)/2])
|
||||
right := SimpleMerkleRoot(hashes[(len(hashes)+1)/2:])
|
||||
return SimpleConcatHash(left, right)
|
||||
}
|
||||
// SHA256(0x00 || leaf)
|
||||
func leafHash(leaf []byte) []byte {
|
||||
return tmhash.Sum(append([]byte{0x00}, leaf...))
|
||||
}
|
||||
|
||||
func SimpleConcatHash(left, right []byte) []byte{
|
||||
left = encodeByteSlice(left)
|
||||
right = encodeByteSlice(right)
|
||||
return TMHASH(append(left, right))
|
||||
// SHA256(0x01 || left || right)
|
||||
func innerHash(left []byte, right []byte) []byte {
|
||||
return tmhash.Sum(append([]byte{0x01}, append(left, right...)...))
|
||||
}
|
||||
|
||||
// largest power of 2 less than k
|
||||
func getSplitPoint(k int) int { ... }
|
||||
|
||||
func MerkleRoot(items [][]byte) []byte{
|
||||
switch len(items) {
|
||||
case 0:
|
||||
return nil
|
||||
case 1:
|
||||
return leafHash(items[0])
|
||||
default:
|
||||
k := getSplitPoint(len(items))
|
||||
left := MerkleRoot(items[:k])
|
||||
right := MerkleRoot(items[k:])
|
||||
return innerHash(left, right)
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Note that the leaves are Amino encoded as byte-arrays (ie. simple Uvarint length
|
||||
prefix) before being concatenated together and hashed.
|
||||
Note: `MerkleRoot` operates on items which are arbitrary byte arrays, not
|
||||
necessarily hashes. For items which need to be hashed first, we introduce the
|
||||
`Hashes` function:
|
||||
|
||||
Note: we will abuse notion and invoke `SimpleMerkleRoot` with arguments of type `struct` or type `[]struct`.
|
||||
For `struct` arguments, we compute a `[][]byte` containing the hash of each
|
||||
```
|
||||
func Hashes(items [][]byte) [][]byte {
|
||||
	hashes := make([][]byte, len(items))
	for i, item := range items {
		hashes[i] = tmhash.Sum(item) // SHA256 of each item
	}
	return hashes
|
||||
}
|
||||
```
|
||||
|
||||
Note: we will abuse notation and invoke `MerkleRoot` with arguments of type `struct` or type `[]struct`.
|
||||
For `struct` arguments, we compute a `[][]byte` containing the amino encoding of each
|
||||
field in the struct, in the same order the fields appear in the struct.
|
||||
For `[]struct` arguments, we compute a `[][]byte` by hashing the individual `struct` elements.
|
||||
For `[]struct` arguments, we compute a `[][]byte` by amino encoding the individual `struct` elements.
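An illustrative sketch of the `struct` convention, assuming a global amino codec `cdc` and the `MerkleRoot` function defined above (the struct and field set here are hypothetical):

```go
// headerLike is a stand-in for a struct whose MerkleRoot we want.
type headerLike struct {
	ChainID string
	Height  int64
}

// merkleRootOfStruct amino-encodes each field in declaration order and
// takes the MerkleRoot of the resulting byte slices.
func merkleRootOfStruct(h headerLike) []byte {
	fields := [][]byte{
		cdc.MustMarshalBinaryBare(h.ChainID),
		cdc.MustMarshalBinaryBare(h.Height),
	}
	return MerkleRoot(fields)
}
```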
|
||||
|
||||
### Simple Merkle Proof
|
||||
|
||||
Proof that a leaf is in a Merkle tree consists of a simple structure:
|
||||
Proof that a leaf is in a Merkle tree is composed as follows:
|
||||
|
||||
```
|
||||
```golang
|
||||
type SimpleProof struct {
|
||||
Total int
|
||||
Index int
|
||||
LeafHash []byte
|
||||
Aunts [][]byte
|
||||
}
|
||||
```
|
||||
|
||||
Which is verified using the following:
|
||||
Which is verified as follows:
|
||||
|
||||
```
|
||||
func (proof SimpleProof) Verify(index, total int, leafHash, rootHash []byte) bool {
|
||||
computedHash := computeHashFromAunts(index, total, leafHash, proof.Aunts)
|
||||
```golang
|
||||
func (proof SimpleProof) Verify(rootHash []byte, leaf []byte) bool {
|
||||
assert(proof.LeafHash == leafHash(leaf))
|
||||
|
||||
computedHash := computeHashFromAunts(proof.Index, proof.Total, proof.LeafHash, proof.Aunts)
|
||||
return computedHash == rootHash
|
||||
}
|
||||
|
||||
@@ -238,26 +276,18 @@ func computeHashFromAunts(index, total int, leafHash []byte, innerHashes [][]byt
|
||||
|
||||
assert(len(innerHashes) > 0)
|
||||
|
||||
numLeft := (total + 1) / 2
|
||||
numLeft := getSplitPoint(total) // largest power of 2 less than total
|
||||
if index < numLeft {
|
||||
leftHash := computeHashFromAunts(index, numLeft, leafHash, innerHashes[:len(innerHashes)-1])
|
||||
assert(leftHash != nil)
|
||||
return SimpleHashFromTwoHashes(leftHash, innerHashes[len(innerHashes)-1])
|
||||
return innerHash(leftHash, innerHashes[len(innerHashes)-1])
|
||||
}
|
||||
rightHash := computeHashFromAunts(index-numLeft, total-numLeft, leafHash, innerHashes[:len(innerHashes)-1])
|
||||
assert(rightHash != nil)
|
||||
return SimpleHashFromTwoHashes(innerHashes[len(innerHashes)-1], rightHash)
|
||||
return innerHash(innerHashes[len(innerHashes)-1], rightHash)
|
||||
}
|
||||
```
|
||||
|
||||
### Simple Tree with Dictionaries
|
||||
|
||||
The Simple Tree is used to merkelize a list of items, so to merkelize a
|
||||
(short) dictionary of key-value pairs, encode the dictionary as an
|
||||
ordered list of `KVPair` structs. The block hash is such a hash
|
||||
derived from all the fields of the block `Header`. The state hash is
|
||||
similarly derived.
|
||||
|
||||
### IAVL+ Tree
|
||||
|
||||
Because Tendermint only uses a Simple Merkle Tree, application developers are expect to use their own Merkle tree in their applications. For example, the IAVL+ Tree - an immutable self-balancing binary tree for persisting application state is used by the [Cosmos SDK](https://github.com/cosmos/cosmos-sdk/blob/develop/docs/sdk/core/multistore.md)
|
||||
@@ -301,12 +331,14 @@ type CanonicalVote struct {
|
||||
Type byte
|
||||
Height int64 `binary:"fixed64"`
|
||||
Round int64 `binary:"fixed64"`
|
||||
Timestamp time.Time
|
||||
BlockID CanonicalBlockID
|
||||
Timestamp time.Time
|
||||
ChainID string
|
||||
}
|
||||
```
|
||||
|
||||
The field ordering and the fixed sized encoding for the first three fields is optimized to ease parsing of SignBytes
|
||||
in HSMs. It creates fixed offsets for relevant fields that need to be read in this context.
|
||||
See [#1622](https://github.com/tendermint/tendermint/issues/1622) for more details.
|
||||
For more details, see the [signing spec](/docs/spec/consensus/signing.md).
|
||||
Also, see the motivating discussion in
|
||||
[#1622](https://github.com/tendermint/tendermint/issues/1622).
|
||||
|
@@ -60,7 +60,7 @@ When hashing the Validator struct, the address is not included,
|
||||
because it is redundant with the pubkey.
|
||||
|
||||
The `state.Validators`, `state.LastValidators`, and `state.NextValidators` must always be sorted by validator address,
|
||||
so that there is a canonical order for computing the SimpleMerkleRoot.
|
||||
so that there is a canonical order for computing the MerkleRoot.
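A sketch of enforcing that canonical order, assuming a `Validator` type that exposes its address as a byte slice:

```go
import (
	"bytes"
	"sort"
)

// Validator is assumed here to expose its address as raw bytes; see the spec above.
type Validator struct {
	Address []byte
	// other fields elided
}

// sortByAddress puts validators into the canonical order used when computing the MerkleRoot.
func sortByAddress(vals []Validator) {
	sort.Slice(vals, func(i, j int) bool {
		return bytes.Compare(vals[i].Address, vals[j].Address) < 0
	})
}
```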
|
||||
|
||||
We also define a `TotalVotingPower` function, to return the total voting power:
|
||||
|
||||
@@ -78,6 +78,8 @@ func TotalVotingPower(vals []Validators) int64{
|
||||
|
||||
ConsensusParams define various limits for blockchain data structures.
|
||||
Like validator sets, they are set during genesis and can be updated by the application through ABCI.
|
||||
When hashed, only a subset of the params are included, to allow the params to
|
||||
evolve without breaking the header.
|
||||
|
||||
```go
|
||||
type ConsensusParams struct {
|
||||
@@ -86,6 +88,18 @@ type ConsensusParams struct {
|
||||
Validator
|
||||
}
|
||||
|
||||
type hashedParams struct {
|
||||
BlockMaxBytes int64
|
||||
BlockMaxGas int64
|
||||
}
|
||||
|
||||
func (params ConsensusParams) Hash() []byte {
|
||||
SHA256(hashedParams{
|
||||
BlockMaxBytes: params.BlockSize.MaxBytes,
|
||||
BlockMaxGas: params.BlockSize.MaxGas,
|
||||
})
|
||||
}
|
||||
|
||||
type BlockSize struct {
|
||||
MaxBytes int64
|
||||
MaxGas int64
|
||||
|
@@ -59,9 +59,9 @@ type PartSetHeader struct {
|
||||
```
|
||||
|
||||
To be included in a valid vote or proposal, BlockID must either represent a `nil` block, or a complete one.
|
||||
We introduce two methods, `BlockID.IsNil()` and `BlockID.IsComplete()` for these cases, respectively.
|
||||
We introduce two methods, `BlockID.IsZero()` and `BlockID.IsComplete()` for these cases, respectively.
|
||||
|
||||
`BlockID.IsNil()` returns true for BlockID `b` if each of the following
|
||||
`BlockID.IsZero()` returns true for BlockID `b` if each of the following
|
||||
are true:
|
||||
|
||||
```
|
||||
@@ -81,7 +81,7 @@ len(b.PartsHeader.Hash) == 32
|
||||
|
||||
## Proposals
|
||||
|
||||
The structure of a propsal for signing looks like:
|
||||
The structure of a proposal for signing looks like:
|
||||
|
||||
```
|
||||
type CanonicalProposal struct {
|
||||
@@ -118,8 +118,8 @@ type CanonicalVote struct {
|
||||
Type SignedMsgType // type alias for byte
|
||||
Height int64 `binary:"fixed64"`
|
||||
Round int64 `binary:"fixed64"`
|
||||
Timestamp time.Time
|
||||
BlockID BlockID
|
||||
Timestamp time.Time
|
||||
ChainID string
|
||||
}
|
||||
```
|
||||
@@ -130,7 +130,7 @@ A vote is valid if each of the following lines evaluates to true for vote `v`:
|
||||
v.Type == 0x1 || v.Type == 0x2
|
||||
v.Height > 0
|
||||
v.Round >= 0
|
||||
v.BlockID.IsNil() || v.BlockID.IsValid()
|
||||
v.BlockID.IsZero() || v.BlockID.IsComplete()
|
||||
```
|
||||
|
||||
In other words, a vote is valid for signing if it contains the type of a Prevote
|
||||
|
@@ -57,10 +57,10 @@ func (evpool *EvidencePool) PriorityEvidence() []types.Evidence {
|
||||
return evpool.evidenceStore.PriorityEvidence()
|
||||
}
|
||||
|
||||
// PendingEvidence returns uncommitted evidence up to maxBytes.
|
||||
// If maxBytes is -1, all evidence is returned.
|
||||
func (evpool *EvidencePool) PendingEvidence(maxBytes int64) []types.Evidence {
|
||||
return evpool.evidenceStore.PendingEvidence(maxBytes)
|
||||
// PendingEvidence returns up to maxNum uncommitted evidence.
|
||||
// If maxNum is -1, all evidence is returned.
|
||||
func (evpool *EvidencePool) PendingEvidence(maxNum int64) []types.Evidence {
|
||||
return evpool.evidenceStore.PendingEvidence(maxNum)
|
||||
}
|
||||
|
||||
// State returns the current state of the evpool.
|
||||
|
@@ -86,26 +86,26 @@ func (store *EvidenceStore) PriorityEvidence() (evidence []types.Evidence) {
|
||||
return l
|
||||
}
|
||||
|
||||
// PendingEvidence returns known uncommitted evidence up to maxBytes.
|
||||
// If maxBytes is -1, all evidence is returned.
|
||||
func (store *EvidenceStore) PendingEvidence(maxBytes int64) (evidence []types.Evidence) {
|
||||
return store.listEvidence(baseKeyPending, maxBytes)
|
||||
// PendingEvidence returns up to maxNum known, uncommitted evidence.
|
||||
// If maxNum is -1, all evidence is returned.
|
||||
func (store *EvidenceStore) PendingEvidence(maxNum int64) (evidence []types.Evidence) {
|
||||
return store.listEvidence(baseKeyPending, maxNum)
|
||||
}
|
||||
|
||||
// listEvidence lists the evidence for the given prefix key up to maxBytes.
|
||||
// listEvidence lists up to maxNum pieces of evidence for the given prefix key.
|
||||
// It is wrapped by PriorityEvidence and PendingEvidence for convenience.
|
||||
// If maxBytes is -1, there's no cap on the size of returned evidence.
|
||||
func (store *EvidenceStore) listEvidence(prefixKey string, maxBytes int64) (evidence []types.Evidence) {
|
||||
var bytes int64
|
||||
// If maxNum is -1, there's no cap on the size of returned evidence.
|
||||
func (store *EvidenceStore) listEvidence(prefixKey string, maxNum int64) (evidence []types.Evidence) {
|
||||
var count int64
|
||||
iter := dbm.IteratePrefix(store.db, []byte(prefixKey))
|
||||
defer iter.Close()
|
||||
for ; iter.Valid(); iter.Next() {
|
||||
val := iter.Value()
|
||||
|
||||
if maxBytes > 0 && bytes+int64(len(val)) > maxBytes {
|
||||
if count == maxNum {
|
||||
return evidence
|
||||
}
|
||||
bytes += int64(len(val))
|
||||
count++
|
||||
|
||||
var ei EvidenceInfo
|
||||
err := cdc.UnmarshalBinaryBare(val, &ei)
|
||||
|
@@ -13,3 +13,8 @@ func init() {
|
||||
cryptoAmino.RegisterAmino(cdc)
|
||||
types.RegisterEvidences(cdc)
|
||||
}
|
||||
|
||||
// For testing purposes only
|
||||
func RegisterMockEvidences() {
|
||||
types.RegisterMockEvidences(cdc)
|
||||
}
|
||||
|
@@ -90,7 +90,7 @@ func (l tmfmtLogger) Log(keyvals ...interface{}) error {
|
||||
// D - first character of the level, uppercase (ASCII only)
|
||||
// [2016-05-02|11:06:44.322] - our time format (see https://golang.org/src/time/format.go)
|
||||
// Stopping ... - message
|
||||
enc.buf.WriteString(fmt.Sprintf("%c[%s] %-44s ", lvl[0]-32, time.Now().Format("2016-01-02|15:04:05.000"), msg))
|
||||
enc.buf.WriteString(fmt.Sprintf("%c[%s] %-44s ", lvl[0]-32, time.Now().Format("2006-01-02|15:04:05.000"), msg))
|
||||
|
||||
if module != unknown {
|
||||
enc.buf.WriteString("module=" + module + " ")
|
||||
|
@@ -35,34 +35,40 @@ func NewBaseVerifier(chainID string, height int64, valset *types.ValidatorSet) *
 }
 
 // Implements Verifier.
-func (bc *BaseVerifier) ChainID() string {
-	return bc.chainID
+func (bv *BaseVerifier) ChainID() string {
+	return bv.chainID
 }
 
 // Implements Verifier.
-func (bc *BaseVerifier) Verify(signedHeader types.SignedHeader) error {
+func (bv *BaseVerifier) Verify(signedHeader types.SignedHeader) error {
 
-	// We can't verify commits older than bc.height.
-	if signedHeader.Height < bc.height {
+	// We can't verify commits for a different chain.
+	if signedHeader.ChainID != bv.chainID {
+		return cmn.NewError("BaseVerifier chainID is %v, cannot verify chainID %v",
+			bv.chainID, signedHeader.ChainID)
+	}
+
+	// We can't verify commits older than bv.height.
+	if signedHeader.Height < bv.height {
 		return cmn.NewError("BaseVerifier height is %v, cannot verify height %v",
-			bc.height, signedHeader.Height)
+			bv.height, signedHeader.Height)
 	}
 
 	// We can't verify with the wrong validator set.
 	if !bytes.Equal(signedHeader.ValidatorsHash,
-		bc.valset.Hash()) {
-		return lerr.ErrUnexpectedValidators(signedHeader.ValidatorsHash, bc.valset.Hash())
+		bv.valset.Hash()) {
+		return lerr.ErrUnexpectedValidators(signedHeader.ValidatorsHash, bv.valset.Hash())
 	}
 
 	// Do basic sanity checks.
-	err := signedHeader.ValidateBasic(bc.chainID)
+	err := signedHeader.ValidateBasic(bv.chainID)
 	if err != nil {
 		return cmn.ErrorWrap(err, "in verify")
 	}
 
 	// Check commit signatures.
-	err = bc.valset.VerifyCommit(
-		bc.chainID, signedHeader.Commit.BlockID,
+	err = bv.valset.VerifyCommit(
+		bv.chainID, signedHeader.Commit.BlockID,
 		signedHeader.Height, signedHeader.Commit)
 	if err != nil {
 		return cmn.ErrorWrap(err, "in verify")
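As a usage note for the renamed receiver, here is a hedged sketch of driving BaseVerifier directly; the chain ID, height, validator set and signed header are placeholders that must come from a trusted source, and the snippet is not part of this diff.

```go
package lightclient

import (
	"github.com/tendermint/tendermint/lite"
	"github.com/tendermint/tendermint/types"
)

// verifyOnce is a sketch: valset is assumed to be the exact validator set
// that should have signed sh (e.g. taken from an already-verified FullCommit).
func verifyOnce(chainID string, height int64, valset *types.ValidatorSet, sh types.SignedHeader) error {
	bv := lite.NewBaseVerifier(chainID, height, valset)
	// Fails if sh belongs to another chain, is older than height, names a
	// different validator set, or lacks +2/3 of the voting power.
	return bv.Verify(sh)
}
```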
@@ -8,7 +8,7 @@ import (
 	"github.com/tendermint/tendermint/types"
 )
 
-// FullCommit is a signed header (the block header and a commit that signs it),
+// FullCommit contains a SignedHeader (the block header and a commit that signs it),
 // the validator set which signed the commit, and the next validator set. The
 // next validator set (which is proven from the block header) allows us to
 // revert to block-by-block updating of lite Verifier's latest validator set,
@@ -13,6 +13,9 @@ import (
 	"github.com/tendermint/tendermint/types"
 )
 
+var _ PersistentProvider = (*DBProvider)(nil)
+
+// DBProvider stores commits and validator sets in a DB.
 type DBProvider struct {
 	logger log.Logger
 	label  string
lite/doc.go
@@ -15,9 +15,7 @@ for you, so you can just build nice applications.
 We design for clients who have no strong trust relationship with any Tendermint
 node, just the blockchain and validator set as a whole.
 
-# Data structures
-
-## SignedHeader
+SignedHeader
 
 SignedHeader is a block header along with a commit -- enough validator
 precommit-vote signatures to prove its validity (> 2/3 of the voting power)
@@ -42,7 +40,7 @@ The FullCommit is also declared in this package as a convenience structure,
 which includes the SignedHeader along with the full current and next
 ValidatorSets.
 
-## Verifier
+Verifier
 
 A Verifier validates a new SignedHeader given the currently known state. There
 are two different types of Verifiers provided.
@@ -53,42 +51,35 @@ SignedHeader, and that the SignedHeader was to be signed by the exact given
 validator set, and that the height of the commit is at least height (or
 greater).
 
 SignedHeader.Commit may be signed by a different validator set, it can get
 verified with a BaseVerifier as long as sufficient signatures from the
 previous validator set are present in the commit.
 
 DynamicVerifier - this Verifier implements an auto-update and persistence
 strategy to verify any SignedHeader of the blockchain.
 
-## Provider and PersistentProvider
+Provider and PersistentProvider
 
 A Provider allows us to store and retrieve the FullCommits.
 
-```go
-type Provider interface {
-	// LatestFullCommit returns the latest commit with
-	// minHeight <= height <= maxHeight.
-	// If maxHeight is zero, returns the latest where
-	// minHeight <= height.
-	LatestFullCommit(chainID string, minHeight, maxHeight int64) (FullCommit, error)
-}
-```
+type Provider interface {
+	// LatestFullCommit returns the latest commit with
+	// minHeight <= height <= maxHeight.
+	// If maxHeight is zero, returns the latest where
+	// minHeight <= height.
+	LatestFullCommit(chainID string, minHeight, maxHeight int64) (FullCommit, error)
+}
 
 * client.NewHTTPProvider - query Tendermint rpc.
 
 A PersistentProvider is a Provider that also allows for saving state. This is
 used by the DynamicVerifier for persistence.
 
-```go
-type PersistentProvider interface {
-	Provider
-
-	// SaveFullCommit saves a FullCommit (without verification).
-	SaveFullCommit(fc FullCommit) error
-}
-```
+type PersistentProvider interface {
+	Provider
+
+	// SaveFullCommit saves a FullCommit (without verification).
+	SaveFullCommit(fc FullCommit) error
+}
 
 * DBProvider - persistence provider for use with any libs/DB.
 
 * MultiProvider - combine multiple providers.
 
 The suggested use for local light clients is client.NewHTTPProvider(...) for
@@ -97,7 +88,7 @@ dbm.NewMemDB()), NewDBProvider("label", db.NewFileDB(...))) to store confirmed
 full commits (Trusted)
 
 
-# How We Track Validators
+How We Track Validators
 
 Unless you want to blindly trust the node you talk with, you need to trace
 every response back to a hash in a block header and validate the commit
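Pulling the pieces of this doc change together, here is a rough sketch of the suggested wiring for a local light client: an RPC-backed source Provider plus a trusted, persistent DBProvider feeding a DynamicVerifier. The RPC address, the DB label, and the exact NewHTTPProvider argument list are assumptions for illustration.

```go
package lightnode

import (
	dbm "github.com/tendermint/tendermint/libs/db"
	"github.com/tendermint/tendermint/lite"
	certclient "github.com/tendermint/tendermint/lite/client"
)

// newVerifier is a sketch, not the package's own code.
func newVerifier(chainID string) *lite.DynamicVerifier {
	// Source of new signed headers and validator sets (argument list assumed).
	source := certclient.NewHTTPProvider(chainID, "tcp://localhost:26657")
	// Trusted, persistent store for full commits we have already verified.
	trusted := lite.NewDBProvider("trusted.lite", dbm.NewMemDB())
	return lite.NewDynamicVerifier(chainID, trusted, source)
}
```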
@@ -18,12 +18,17 @@ var _ Verifier = (*DynamicVerifier)(nil)
|
||||
// "source" provider to obtain the needed FullCommits to securely sync with
|
||||
// validator set changes. It stores properly validated data on the
|
||||
// "trusted" local system.
|
||||
// TODO: make this single threaded and create a new
|
||||
// ConcurrentDynamicVerifier that wraps it with concurrency.
|
||||
// see https://github.com/tendermint/tendermint/issues/3170
|
||||
type DynamicVerifier struct {
|
||||
logger log.Logger
|
||||
chainID string
|
||||
// These are only properly validated data, from local system.
|
||||
logger log.Logger
|
||||
|
||||
// Already validated, stored locally
|
||||
trusted PersistentProvider
|
||||
// This is a source of new info, like a node rpc, or other import method.
|
||||
|
||||
// New info, like a node rpc, or other import method.
|
||||
source Provider
|
||||
|
||||
// pending map to synchronize concurrent verification requests
|
||||
@@ -35,8 +40,8 @@ type DynamicVerifier struct {
|
||||
// trusted provider to store validated data and the source provider to
|
||||
// obtain missing data (e.g. FullCommits).
|
||||
//
|
||||
// The trusted provider should a CacheProvider, MemProvider or
|
||||
// files.Provider. The source provider should be a client.HTTPProvider.
|
||||
// The trusted provider should be a DBProvider.
|
||||
// The source provider should be a client.HTTPProvider.
|
||||
func NewDynamicVerifier(chainID string, trusted PersistentProvider, source Provider) *DynamicVerifier {
|
||||
return &DynamicVerifier{
|
||||
logger: log.NewNopLogger(),
|
||||
@@ -47,68 +52,71 @@ func NewDynamicVerifier(chainID string, trusted PersistentProvider, source Provi
|
||||
}
|
||||
}
|
||||
|
||||
func (ic *DynamicVerifier) SetLogger(logger log.Logger) {
|
||||
func (dv *DynamicVerifier) SetLogger(logger log.Logger) {
|
||||
logger = logger.With("module", "lite")
|
||||
ic.logger = logger
|
||||
ic.trusted.SetLogger(logger)
|
||||
ic.source.SetLogger(logger)
|
||||
dv.logger = logger
|
||||
dv.trusted.SetLogger(logger)
|
||||
dv.source.SetLogger(logger)
|
||||
}
|
||||
|
||||
// Implements Verifier.
|
||||
func (ic *DynamicVerifier) ChainID() string {
|
||||
return ic.chainID
|
||||
func (dv *DynamicVerifier) ChainID() string {
|
||||
return dv.chainID
|
||||
}
|
||||
|
||||
// Implements Verifier.
|
||||
//
|
||||
// If the validators have changed since the last known time, it looks to
|
||||
// ic.trusted and ic.source to prove the new validators. On success, it will
|
||||
// try to store the SignedHeader in ic.trusted if the next
|
||||
// dv.trusted and dv.source to prove the new validators. On success, it will
|
||||
// try to store the SignedHeader in dv.trusted if the next
|
||||
// validator can be sourced.
|
||||
func (ic *DynamicVerifier) Verify(shdr types.SignedHeader) error {
|
||||
func (dv *DynamicVerifier) Verify(shdr types.SignedHeader) error {
|
||||
|
||||
// Performs synchronization for multi-threads verification at the same height.
|
||||
ic.mtx.Lock()
|
||||
if pending := ic.pendingVerifications[shdr.Height]; pending != nil {
|
||||
ic.mtx.Unlock()
|
||||
dv.mtx.Lock()
|
||||
if pending := dv.pendingVerifications[shdr.Height]; pending != nil {
|
||||
dv.mtx.Unlock()
|
||||
<-pending // pending is chan struct{}
|
||||
} else {
|
||||
pending := make(chan struct{})
|
||||
ic.pendingVerifications[shdr.Height] = pending
|
||||
dv.pendingVerifications[shdr.Height] = pending
|
||||
defer func() {
|
||||
close(pending)
|
||||
ic.mtx.Lock()
|
||||
delete(ic.pendingVerifications, shdr.Height)
|
||||
ic.mtx.Unlock()
|
||||
dv.mtx.Lock()
|
||||
delete(dv.pendingVerifications, shdr.Height)
|
||||
dv.mtx.Unlock()
|
||||
}()
|
||||
ic.mtx.Unlock()
|
||||
dv.mtx.Unlock()
|
||||
}
|
||||
|
||||
//Get the exact trusted commit for h, and if it is
|
||||
// equal to shdr, then don't even verify it,
|
||||
// and just return nil.
|
||||
trustedFCSameHeight, err := ic.trusted.LatestFullCommit(ic.chainID, shdr.Height, shdr.Height)
|
||||
// equal to shdr, then it's already trusted, so
|
||||
// just return nil.
|
||||
trustedFCSameHeight, err := dv.trusted.LatestFullCommit(dv.chainID, shdr.Height, shdr.Height)
|
||||
if err == nil {
|
||||
// If loading trust commit successfully, and trust commit equal to shdr, then don't verify it,
|
||||
// just return nil.
|
||||
if bytes.Equal(trustedFCSameHeight.SignedHeader.Hash(), shdr.Hash()) {
|
||||
ic.logger.Info(fmt.Sprintf("Load full commit at height %d from cache, there is not need to verify.", shdr.Height))
|
||||
dv.logger.Info(fmt.Sprintf("Load full commit at height %d from cache, there is not need to verify.", shdr.Height))
|
||||
return nil
|
||||
}
|
||||
} else if !lerr.IsErrCommitNotFound(err) {
|
||||
// Return error if it is not CommitNotFound error
|
||||
ic.logger.Info(fmt.Sprintf("Encountered unknown error in loading full commit at height %d.", shdr.Height))
|
||||
dv.logger.Info(fmt.Sprintf("Encountered unknown error in loading full commit at height %d.", shdr.Height))
|
||||
return err
|
||||
}
|
||||
|
||||
// Get the latest known full commit <= h-1 from our trusted providers.
|
||||
// The full commit at h-1 contains the valset to sign for h.
|
||||
h := shdr.Height - 1
|
||||
trustedFC, err := ic.trusted.LatestFullCommit(ic.chainID, 1, h)
|
||||
prevHeight := shdr.Height - 1
|
||||
trustedFC, err := dv.trusted.LatestFullCommit(dv.chainID, 1, prevHeight)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if trustedFC.Height() == h {
|
||||
// sync up to the prevHeight and assert our latest NextValidatorSet
|
||||
// is the ValidatorSet for the SignedHeader
|
||||
if trustedFC.Height() == prevHeight {
|
||||
// Return error if valset doesn't match.
|
||||
if !bytes.Equal(
|
||||
trustedFC.NextValidators.Hash(),
|
||||
@@ -118,11 +126,12 @@ func (ic *DynamicVerifier) Verify(shdr types.SignedHeader) error {
|
||||
shdr.Header.ValidatorsHash)
|
||||
}
|
||||
} else {
|
||||
// If valset doesn't match...
|
||||
if !bytes.Equal(trustedFC.NextValidators.Hash(),
|
||||
// If valset doesn't match, try to update
|
||||
if !bytes.Equal(
|
||||
trustedFC.NextValidators.Hash(),
|
||||
shdr.Header.ValidatorsHash) {
|
||||
// ... update.
|
||||
trustedFC, err = ic.updateToHeight(h)
|
||||
trustedFC, err = dv.updateToHeight(prevHeight)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@@ -137,14 +146,21 @@ func (ic *DynamicVerifier) Verify(shdr types.SignedHeader) error {
|
||||
}
|
||||
|
||||
// Verify the signed header using the matching valset.
|
||||
cert := NewBaseVerifier(ic.chainID, trustedFC.Height()+1, trustedFC.NextValidators)
|
||||
cert := NewBaseVerifier(dv.chainID, trustedFC.Height()+1, trustedFC.NextValidators)
|
||||
err = cert.Verify(shdr)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// By now, the SignedHeader is fully validated and we're synced up to
|
||||
// SignedHeader.Height - 1. To sync to SignedHeader.Height, we need
|
||||
// the validator set at SignedHeader.Height + 1 so we can verify the
|
||||
// SignedHeader.NextValidatorSet.
|
||||
// TODO: is the ValidateFull below mostly redundant with the BaseVerifier.Verify above?
|
||||
// See https://github.com/tendermint/tendermint/issues/3174.
|
||||
|
||||
// Get the next validator set.
|
||||
nextValset, err := ic.source.ValidatorSet(ic.chainID, shdr.Height+1)
|
||||
nextValset, err := dv.source.ValidatorSet(dv.chainID, shdr.Height+1)
|
||||
if lerr.IsErrUnknownValidators(err) {
|
||||
// Ignore this error.
|
||||
return nil
|
||||
@@ -160,31 +176,31 @@ func (ic *DynamicVerifier) Verify(shdr types.SignedHeader) error {
|
||||
}
|
||||
// Validate the full commit. This checks the cryptographic
|
||||
// signatures of Commit against Validators.
|
||||
if err := nfc.ValidateFull(ic.chainID); err != nil {
|
||||
if err := nfc.ValidateFull(dv.chainID); err != nil {
|
||||
return err
|
||||
}
|
||||
// Trust it.
|
||||
return ic.trusted.SaveFullCommit(nfc)
|
||||
return dv.trusted.SaveFullCommit(nfc)
|
||||
}
|
||||
|
||||
// verifyAndSave will verify if this is a valid source full commit given the
|
||||
// best match trusted full commit, and if good, persist to ic.trusted.
|
||||
// best match trusted full commit, and if good, persist to dv.trusted.
|
||||
// Returns ErrTooMuchChange when >2/3 of trustedFC did not sign sourceFC.
|
||||
// Panics if trustedFC.Height() >= sourceFC.Height().
|
||||
func (ic *DynamicVerifier) verifyAndSave(trustedFC, sourceFC FullCommit) error {
|
||||
func (dv *DynamicVerifier) verifyAndSave(trustedFC, sourceFC FullCommit) error {
|
||||
if trustedFC.Height() >= sourceFC.Height() {
|
||||
panic("should not happen")
|
||||
}
|
||||
err := trustedFC.NextValidators.VerifyFutureCommit(
|
||||
sourceFC.Validators,
|
||||
ic.chainID, sourceFC.SignedHeader.Commit.BlockID,
|
||||
dv.chainID, sourceFC.SignedHeader.Commit.BlockID,
|
||||
sourceFC.SignedHeader.Height, sourceFC.SignedHeader.Commit,
|
||||
)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return ic.trusted.SaveFullCommit(sourceFC)
|
||||
return dv.trusted.SaveFullCommit(sourceFC)
|
||||
}
|
||||
|
||||
// updateToHeight will use divide-and-conquer to find a path to h.
|
||||
@@ -192,29 +208,30 @@ func (ic *DynamicVerifier) verifyAndSave(trustedFC, sourceFC FullCommit) error {
|
||||
// for height h, using repeated applications of bisection if necessary.
|
||||
//
|
||||
// Returns ErrCommitNotFound if source provider doesn't have the commit for h.
|
||||
func (ic *DynamicVerifier) updateToHeight(h int64) (FullCommit, error) {
|
||||
func (dv *DynamicVerifier) updateToHeight(h int64) (FullCommit, error) {
|
||||
|
||||
// Fetch latest full commit from source.
|
||||
sourceFC, err := ic.source.LatestFullCommit(ic.chainID, h, h)
|
||||
sourceFC, err := dv.source.LatestFullCommit(dv.chainID, h, h)
|
||||
if err != nil {
|
||||
return FullCommit{}, err
|
||||
}
|
||||
|
||||
// Validate the full commit. This checks the cryptographic
|
||||
// signatures of Commit against Validators.
|
||||
if err := sourceFC.ValidateFull(ic.chainID); err != nil {
|
||||
return FullCommit{}, err
|
||||
}
|
||||
|
||||
// If sourceFC.Height() != h, we can't do it.
|
||||
if sourceFC.Height() != h {
|
||||
return FullCommit{}, lerr.ErrCommitNotFound()
|
||||
}
|
||||
|
||||
// Validate the full commit. This checks the cryptographic
|
||||
// signatures of Commit against Validators.
|
||||
if err := sourceFC.ValidateFull(dv.chainID); err != nil {
|
||||
return FullCommit{}, err
|
||||
}
|
||||
|
||||
// Verify latest FullCommit against trusted FullCommits
|
||||
FOR_LOOP:
|
||||
for {
|
||||
// Fetch latest full commit from trusted.
|
||||
trustedFC, err := ic.trusted.LatestFullCommit(ic.chainID, 1, h)
|
||||
trustedFC, err := dv.trusted.LatestFullCommit(dv.chainID, 1, h)
|
||||
if err != nil {
|
||||
return FullCommit{}, err
|
||||
}
|
||||
@@ -224,21 +241,21 @@ FOR_LOOP:
|
||||
}
|
||||
|
||||
// Try to update to full commit with checks.
|
||||
err = ic.verifyAndSave(trustedFC, sourceFC)
|
||||
err = dv.verifyAndSave(trustedFC, sourceFC)
|
||||
if err == nil {
|
||||
// All good!
|
||||
return sourceFC, nil
|
||||
}
|
||||
|
||||
// Handle special case when err is ErrTooMuchChange.
|
||||
if lerr.IsErrTooMuchChange(err) {
|
||||
if types.IsErrTooMuchChange(err) {
|
||||
// Divide and conquer.
|
||||
start, end := trustedFC.Height(), sourceFC.Height()
|
||||
if !(start < end) {
|
||||
panic("should not happen")
|
||||
}
|
||||
mid := (start + end) / 2
|
||||
_, err = ic.updateToHeight(mid)
|
||||
_, err = dv.updateToHeight(mid)
|
||||
if err != nil {
|
||||
return FullCommit{}, err
|
||||
}
|
||||
@@ -249,8 +266,8 @@ FOR_LOOP:
|
||||
}
|
||||
}
|
||||
|
||||
func (ic *DynamicVerifier) LastTrustedHeight() int64 {
|
||||
fc, err := ic.trusted.LatestFullCommit(ic.chainID, 1, 1<<63-1)
|
||||
func (dv *DynamicVerifier) LastTrustedHeight() int64 {
|
||||
fc, err := dv.trusted.LatestFullCommit(dv.chainID, 1, 1<<63-1)
|
||||
if err != nil {
|
||||
panic("should not happen")
|
||||
}
|
||||
|
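The renamed updateToHeight above keeps its divide-and-conquer shape: when a single jump from the last trusted height to h fails because more than 1/3 of the validators changed, it first updates to the midpoint and retries. A standalone, hedged sketch of that bisection idea (the helper names are stand-ins, not this package's API):

```go
package litesync

import "errors"

// errTooMuchChange stands in for the real types.IsErrTooMuchChange check.
var errTooMuchChange = errors.New("too much validator change")

// syncTo mirrors the bisection in DynamicVerifier.updateToHeight: verify is a
// stand-in for "verify the source commit at `to` against what we trust at
// `from`" (roughly verifyAndSave).
func syncTo(from, to int64, verify func(from, to int64) error) error {
	err := verify(from, to)
	if err == nil || !errors.Is(err, errTooMuchChange) {
		return err
	}
	mid := (from + to) / 2
	if mid == from {
		return err // cannot split the range any further
	}
	// Trust the midpoint first, then retry the second half.
	if err := syncTo(from, mid, verify); err != nil {
		return err
	}
	return syncTo(mid, to, verify)
}
```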
@@ -10,6 +10,7 @@ import (
|
||||
|
||||
dbm "github.com/tendermint/tendermint/libs/db"
|
||||
log "github.com/tendermint/tendermint/libs/log"
|
||||
"github.com/tendermint/tendermint/types"
|
||||
)
|
||||
|
||||
func TestInquirerValidPath(t *testing.T) {
|
||||
@@ -70,6 +71,70 @@ func TestInquirerValidPath(t *testing.T) {
|
||||
assert.Nil(err, "%+v", err)
|
||||
}
|
||||
|
||||
func TestDynamicVerify(t *testing.T) {
|
||||
trust := NewDBProvider("trust", dbm.NewMemDB())
|
||||
source := NewDBProvider("source", dbm.NewMemDB())
|
||||
|
||||
// 10 commits with one valset, 1 to change,
|
||||
// 10 commits with the next one
|
||||
n1, n2 := 10, 10
|
||||
nCommits := n1 + n2 + 1
|
||||
maxHeight := int64(nCommits)
|
||||
fcz := make([]FullCommit, nCommits)
|
||||
|
||||
// gen the 2 val sets
|
||||
chainID := "dynamic-verifier"
|
||||
power := int64(10)
|
||||
keys1 := genPrivKeys(5)
|
||||
vals1 := keys1.ToValidators(power, 0)
|
||||
keys2 := genPrivKeys(5)
|
||||
vals2 := keys2.ToValidators(power, 0)
|
||||
|
||||
// make some commits with the first
|
||||
for i := 0; i < n1; i++ {
|
||||
fcz[i] = makeFullCommit(int64(i), keys1, vals1, vals1, chainID)
|
||||
}
|
||||
|
||||
// update the val set
|
||||
fcz[n1] = makeFullCommit(int64(n1), keys1, vals1, vals2, chainID)
|
||||
|
||||
// make some commits with the new one
|
||||
for i := n1 + 1; i < nCommits; i++ {
|
||||
fcz[i] = makeFullCommit(int64(i), keys2, vals2, vals2, chainID)
|
||||
}
|
||||
|
||||
// Save everything in the source
|
||||
for _, fc := range fcz {
|
||||
source.SaveFullCommit(fc)
|
||||
}
|
||||
|
||||
// Initialize a Verifier with the initial state.
|
||||
err := trust.SaveFullCommit(fcz[0])
|
||||
require.Nil(t, err)
|
||||
ver := NewDynamicVerifier(chainID, trust, source)
|
||||
ver.SetLogger(log.TestingLogger())
|
||||
|
||||
// fetch the latest from the source
|
||||
latestFC, err := source.LatestFullCommit(chainID, 1, maxHeight)
|
||||
require.NoError(t, err)
|
||||
|
||||
// try to update to the latest
|
||||
err = ver.Verify(latestFC.SignedHeader)
|
||||
require.NoError(t, err)
|
||||
|
||||
}
|
||||
|
||||
func makeFullCommit(height int64, keys privKeys, vals, nextVals *types.ValidatorSet, chainID string) FullCommit {
|
||||
height += 1
|
||||
consHash := []byte("special-params")
|
||||
appHash := []byte(fmt.Sprintf("h=%d", height))
|
||||
resHash := []byte(fmt.Sprintf("res=%d", height))
|
||||
return keys.GenFullCommit(
|
||||
chainID, height, nil,
|
||||
vals, nextVals,
|
||||
appHash, consHash, resHash, 0, len(keys))
|
||||
}
|
||||
|
||||
func TestInquirerVerifyHistorical(t *testing.T) {
|
||||
assert, require := assert.New(t), require.New(t)
|
||||
trust := NewDBProvider("trust", dbm.NewMemDB())
|
||||
|
@@ -25,12 +25,6 @@ func (e errUnexpectedValidators) Error() string {
 		e.got, e.want)
 }
 
-type errTooMuchChange struct{}
-
-func (e errTooMuchChange) Error() string {
-	return "Insufficient signatures to validate due to valset changes"
-}
-
 type errUnknownValidators struct {
 	chainID string
 	height  int64
@@ -85,22 +79,6 @@ func IsErrUnexpectedValidators(err error) bool {
 	return false
 }
 
-//-----------------
-// ErrTooMuchChange
-
-// ErrTooMuchChange indicates that the underlying validator set was changed by >1/3.
-func ErrTooMuchChange() error {
-	return cmn.ErrorWrap(errTooMuchChange{}, "")
-}
-
-func IsErrTooMuchChange(err error) bool {
-	if err_, ok := err.(cmn.Error); ok {
-		_, ok := err_.Data().(errTooMuchChange)
-		return ok
-	}
-	return false
-}
-
 //-----------------
 // ErrUnknownValidators
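The helpers removed above follow the wrap-and-inspect error pattern that the rest of lite/errors.go keeps using (see IsErrUnexpectedValidators): a private sentinel type is wrapped with cmn.ErrorWrap and later recovered through Data(). A hedged, generic restatement of that pattern with an illustrative sentinel name:

```go
package lerrsketch

import (
	cmn "github.com/tendermint/tendermint/libs/common"
)

// errNotFound is an illustrative sentinel, not a type from the lite package.
type errNotFound struct{}

func (errNotFound) Error() string { return "not found" }

// ErrNotFound wraps the sentinel so callers also get a stack trace.
func ErrNotFound() error {
	return cmn.ErrorWrap(errNotFound{}, "")
}

// IsErrNotFound reports whether err carries the sentinel as its Data payload.
func IsErrNotFound(err error) bool {
	if cerr, ok := err.(cmn.Error); ok {
		_, ok := cerr.Data().(errNotFound)
		return ok
	}
	return false
}
```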
@@ -6,6 +6,8 @@ import (
 	"github.com/tendermint/tendermint/types"
 )
 
+var _ PersistentProvider = (*multiProvider)(nil)
+
 // multiProvider allows you to place one or more caches in front of a source
 // Provider. It runs through them in order until a match is found.
 type multiProvider struct {
@@ -1,7 +1,7 @@
 package lite
 
 import (
-	log "github.com/tendermint/tendermint/libs/log"
+	"github.com/tendermint/tendermint/libs/log"
 	"github.com/tendermint/tendermint/types"
 )
@@ -10,6 +10,7 @@ import (
 	"github.com/stretchr/testify/require"
 
 	"github.com/tendermint/tendermint/abci/example/kvstore"
+	"github.com/tendermint/tendermint/crypto/merkle"
 	"github.com/tendermint/tendermint/lite"
 	certclient "github.com/tendermint/tendermint/lite/client"
 	nm "github.com/tendermint/tendermint/node"
@@ -143,12 +144,13 @@ func TestTxProofs(t *testing.T) {
 	require.NotNil(err)
 	require.Contains(err.Error(), "not found")
 
-	// Now let's check with the real tx hash.
+	// Now let's check with the real tx root hash.
 	key = types.Tx(tx).Hash()
 	res, err = cl.Tx(key, true)
 	require.NoError(err, "%#v", err)
 	require.NotNil(res)
-	err = res.Proof.Validate(key)
+	keyHash := merkle.SimpleHashFromByteSlices([][]byte{key})
+	err = res.Proof.Validate(keyHash)
 	assert.NoError(err, "%#v", err)
 
 	commit, err := GetCertifiedCommit(br.Height, cl, cert)
@@ -65,6 +65,9 @@ var (
 
 	// ErrMempoolIsFull means Tendermint & an application can't handle that much load
 	ErrMempoolIsFull = errors.New("Mempool is full")
+
+	// ErrTxTooLarge means the tx is too big to be sent in a message to other peers
+	ErrTxTooLarge = fmt.Errorf("Tx too large. Max size is %d", maxTxSize)
 )
 
 // ErrPreCheck is returned when tx is too big
@@ -309,6 +312,13 @@ func (mem *Mempool) CheckTx(tx types.Tx, cb func(*abci.Response)) (err error) {
 		return ErrMempoolIsFull
 	}
 
+	// The size of the corresponding amino-encoded TxMessage
+	// can't be larger than the maxMsgSize, otherwise we can't
+	// relay it to peers.
+	if len(tx) > maxTxSize {
+		return ErrTxTooLarge
+	}
+
 	if mem.preCheck != nil {
 		if err := mem.preCheck(tx); err != nil {
 			return ErrPreCheck{err}
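The new guard only bounds len(tx), but the budget behind maxTxSize = maxMsgSize - 8 comes from the encoded TxMessage: amino.ByteSliceSize(tx) adds a 3-byte uvarint length prefix for a ~1 MB slice, plus 1 byte for the struct field key and 4 bytes of amino type prefix, as the test helper later in this diff spells out. A small sketch of that arithmetic (standalone, not the package's code):

```go
package main

import (
	"fmt"

	amino "github.com/tendermint/go-amino"
)

// txMessageSize mirrors the helper added in mempool_test.go.
func txMessageSize(tx []byte) int {
	return amino.ByteSliceSize(tx) + 1 + 4
}

func main() {
	const maxMsgSize = 1048576       // from mempool/reactor.go
	const maxTxSize = maxMsgSize - 8 // leaves room for the encoding overhead
	tx := make([]byte, maxTxSize)
	// 3 (uvarint length) + 1 (field key) + 4 (amino prefix) = 8 bytes, so a
	// maxTxSize transaction encodes to exactly maxMsgSize.
	fmt.Println(txMessageSize(tx) <= maxMsgSize) // true
}
```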
@@ -14,10 +14,12 @@ import (
|
||||
"github.com/stretchr/testify/assert"
|
||||
|
||||
"github.com/stretchr/testify/require"
|
||||
amino "github.com/tendermint/go-amino"
|
||||
"github.com/tendermint/tendermint/abci/example/counter"
|
||||
"github.com/tendermint/tendermint/abci/example/kvstore"
|
||||
abci "github.com/tendermint/tendermint/abci/types"
|
||||
cfg "github.com/tendermint/tendermint/config"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
"github.com/tendermint/tendermint/proxy"
|
||||
"github.com/tendermint/tendermint/types"
|
||||
@@ -394,6 +396,60 @@ func TestMempoolCloseWAL(t *testing.T) {
|
||||
require.Equal(t, 1, len(m3), "expecting the wal match in")
|
||||
}
|
||||
|
||||
// Size of the amino encoded TxMessage is the length of the
|
||||
// encoded byte array, plus 1 for the struct field, plus 4
|
||||
// for the amino prefix.
|
||||
func txMessageSize(tx types.Tx) int {
|
||||
return amino.ByteSliceSize(tx) + 1 + 4
|
||||
}
|
||||
|
||||
func TestMempoolMaxMsgSize(t *testing.T) {
|
||||
app := kvstore.NewKVStoreApplication()
|
||||
cc := proxy.NewLocalClientCreator(app)
|
||||
mempl := newMempoolWithApp(cc)
|
||||
|
||||
testCases := []struct {
|
||||
len int
|
||||
err bool
|
||||
}{
|
||||
// check small txs. no error
|
||||
{10, false},
|
||||
{1000, false},
|
||||
{1000000, false},
|
||||
|
||||
// check around maxTxSize
|
||||
// changes from no error to error
|
||||
{maxTxSize - 2, false},
|
||||
{maxTxSize - 1, false},
|
||||
{maxTxSize, false},
|
||||
{maxTxSize + 1, true},
|
||||
{maxTxSize + 2, true},
|
||||
|
||||
// check around maxMsgSize. all error
|
||||
{maxMsgSize - 1, true},
|
||||
{maxMsgSize, true},
|
||||
{maxMsgSize + 1, true},
|
||||
}
|
||||
|
||||
for i, testCase := range testCases {
|
||||
caseString := fmt.Sprintf("case %d, len %d", i, testCase.len)
|
||||
|
||||
tx := cmn.RandBytes(testCase.len)
|
||||
err := mempl.CheckTx(tx, nil)
|
||||
msg := &TxMessage{tx}
|
||||
encoded := cdc.MustMarshalBinaryBare(msg)
|
||||
require.Equal(t, len(encoded), txMessageSize(tx), caseString)
|
||||
if !testCase.err {
|
||||
require.True(t, len(encoded) <= maxMsgSize, caseString)
|
||||
require.NoError(t, err, caseString)
|
||||
} else {
|
||||
require.True(t, len(encoded) > maxMsgSize, caseString)
|
||||
require.Equal(t, err, ErrTxTooLarge, caseString)
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
func checksumIt(data []byte) string {
|
||||
h := md5.New()
|
||||
h.Write(data)
|
||||
|
@@ -7,7 +7,11 @@ import (
 	stdprometheus "github.com/prometheus/client_golang/prometheus"
 )
 
-const MetricsSubsytem = "mempool"
+const (
+	// MetricsSubsystem is a subsystem shared by all metrics exposed by this
+	// package.
+	MetricsSubsystem = "mempool"
+)
 
 // Metrics contains metrics exposed by this package.
 // see MetricsProvider for descriptions.
@@ -23,33 +27,39 @@ type Metrics struct {
|
||||
}
|
||||
|
||||
// PrometheusMetrics returns Metrics build using Prometheus client library.
|
||||
func PrometheusMetrics(namespace string) *Metrics {
|
||||
// Optionally, labels can be provided along with their values ("foo",
|
||||
// "fooValue").
|
||||
func PrometheusMetrics(namespace string, labelsAndValues ...string) *Metrics {
|
||||
labels := []string{}
|
||||
for i := 0; i < len(labelsAndValues); i += 2 {
|
||||
labels = append(labels, labelsAndValues[i])
|
||||
}
|
||||
return &Metrics{
|
||||
Size: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
|
||||
Namespace: namespace,
|
||||
Subsystem: MetricsSubsytem,
|
||||
Subsystem: MetricsSubsystem,
|
||||
Name: "size",
|
||||
Help: "Size of the mempool (number of uncommitted transactions).",
|
||||
}, []string{}),
|
||||
}, labels).With(labelsAndValues...),
|
||||
TxSizeBytes: prometheus.NewHistogramFrom(stdprometheus.HistogramOpts{
|
||||
Namespace: namespace,
|
||||
Subsystem: MetricsSubsytem,
|
||||
Subsystem: MetricsSubsystem,
|
||||
Name: "tx_size_bytes",
|
||||
Help: "Transaction sizes in bytes.",
|
||||
Buckets: stdprometheus.ExponentialBuckets(1, 3, 17),
|
||||
}, []string{}),
|
||||
}, labels).With(labelsAndValues...),
|
||||
FailedTxs: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
|
||||
Namespace: namespace,
|
||||
Subsystem: MetricsSubsytem,
|
||||
Subsystem: MetricsSubsystem,
|
||||
Name: "failed_txs",
|
||||
Help: "Number of failed transactions.",
|
||||
}, []string{}),
|
||||
}, labels).With(labelsAndValues...),
|
||||
RecheckTimes: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
|
||||
Namespace: namespace,
|
||||
Subsystem: MetricsSubsytem,
|
||||
Subsystem: MetricsSubsystem,
|
||||
Name: "recheck_times",
|
||||
Help: "Number of times transactions are rechecked in the mempool.",
|
||||
}, []string{}),
|
||||
}, labels).With(labelsAndValues...),
|
||||
}
|
||||
}
|
||||
|
||||
|
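The mempool PrometheusMetrics constructor in the hunk above now takes optional label/value pairs, which is what lets node.go (further down in this diff) stamp every metric with the chain ID. A hedged usage sketch; the namespace and chain ID strings are examples only:

```go
package main

import (
	mempl "github.com/tendermint/tendermint/mempool"
)

func main() {
	// Every metric emitted through this value carries the constant label
	// chain_id="test-chain" in addition to the "tendermint" namespace.
	metrics := mempl.PrometheusMetrics("tendermint", "chain_id", "test-chain")
	metrics.Size.Set(0) // e.g. report an empty mempool
}
```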
@@ -6,7 +6,6 @@ import (
 	"time"
 
-	amino "github.com/tendermint/go-amino"
 	abci "github.com/tendermint/tendermint/abci/types"
 	"github.com/tendermint/tendermint/libs/clist"
 	"github.com/tendermint/tendermint/libs/log"
@@ -18,8 +17,10 @@ import (
 const (
 	MempoolChannel = byte(0x30)
 
-	maxMsgSize                 = 1048576 // 1MB TODO make it configurable
-	peerCatchupSleepIntervalMS = 100     // If peer is behind, sleep this amount
+	maxMsgSize = 1048576        // 1MB TODO make it configurable
+	maxTxSize  = maxMsgSize - 8 // account for amino overhead of TxMessage
+
+	peerCatchupSleepIntervalMS = 100 // If peer is behind, sleep this amount
 )
 
 // MempoolReactor handles mempool tx broadcasting amongst peers.
@@ -98,11 +99,6 @@ func (memR *MempoolReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) {
 	}
 }
 
-// BroadcastTx is an alias for Mempool.CheckTx. Broadcasting itself happens in peer routines.
-func (memR *MempoolReactor) BroadcastTx(tx types.Tx, cb func(*abci.Response)) error {
-	return memR.Mempool.CheckTx(tx, cb)
-}
-
 // PeerState describes the state of a peer.
 type PeerState interface {
 	GetHeight() int64
node/node.go
@@ -117,15 +117,17 @@ func DefaultNewNode(config *cfg.Config, logger log.Logger) (*Node, error) {
 }
 
 // MetricsProvider returns a consensus, p2p and mempool Metrics.
-type MetricsProvider func() (*cs.Metrics, *p2p.Metrics, *mempl.Metrics, *sm.Metrics)
+type MetricsProvider func(chainID string) (*cs.Metrics, *p2p.Metrics, *mempl.Metrics, *sm.Metrics)
 
 // DefaultMetricsProvider returns Metrics build using Prometheus client library
 // if Prometheus is enabled. Otherwise, it returns no-op Metrics.
 func DefaultMetricsProvider(config *cfg.InstrumentationConfig) MetricsProvider {
-	return func() (*cs.Metrics, *p2p.Metrics, *mempl.Metrics, *sm.Metrics) {
+	return func(chainID string) (*cs.Metrics, *p2p.Metrics, *mempl.Metrics, *sm.Metrics) {
 		if config.Prometheus {
-			return cs.PrometheusMetrics(config.Namespace), p2p.PrometheusMetrics(config.Namespace),
-				mempl.PrometheusMetrics(config.Namespace), sm.PrometheusMetrics(config.Namespace)
+			return cs.PrometheusMetrics(config.Namespace, "chain_id", chainID),
+				p2p.PrometheusMetrics(config.Namespace, "chain_id", chainID),
+				mempl.PrometheusMetrics(config.Namespace, "chain_id", chainID),
+				sm.PrometheusMetrics(config.Namespace, "chain_id", chainID)
 		}
 		return cs.NopMetrics(), p2p.NopMetrics(), mempl.NopMetrics(), sm.NopMetrics()
 	}
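Because MetricsProvider is now a function of the chain ID, callers ask for metrics only once the genesis doc is known; a minimal sketch (the instrumentation config constructor and chain ID are assumptions):

```go
package main

import (
	cfg "github.com/tendermint/tendermint/config"
	nm "github.com/tendermint/tendermint/node"
)

func main() {
	// DefaultInstrumentationConfig is assumed from the config package; a real
	// node passes the chain ID from its genesis doc, as NewNode does below.
	provider := nm.DefaultMetricsProvider(cfg.DefaultInstrumentationConfig())
	csMetrics, p2pMetrics, memplMetrics, smMetrics := provider("test-chain")
	_, _, _, _ = csMetrics, p2pMetrics, memplMetrics, smMetrics
}
```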
@@ -274,7 +276,7 @@ func NewNode(config *cfg.Config,
 		consensusLogger.Info("This node is not a validator", "addr", addr, "pubKey", pubKey)
 	}
 
-	csMetrics, p2pMetrics, memplMetrics, smMetrics := metricsProvider()
+	csMetrics, p2pMetrics, memplMetrics, smMetrics := metricsProvider(genDoc.ChainID)
 
 	// Make MempoolReactor
 	mempool := mempl.NewMempool(
@@ -878,16 +880,20 @@ func createAndStartPrivValidatorSocketClient(
|
||||
listenAddr string,
|
||||
logger log.Logger,
|
||||
) (types.PrivValidator, error) {
|
||||
var pvsc types.PrivValidator
|
||||
var listener net.Listener
|
||||
|
||||
protocol, address := cmn.ProtocolAndAddress(listenAddr)
|
||||
ln, err := net.Listen(protocol, address)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
switch protocol {
|
||||
case "unix":
|
||||
pvsc = privval.NewIPCVal(logger.With("module", "privval"), address)
|
||||
listener = privval.NewUnixListener(ln)
|
||||
case "tcp":
|
||||
// TODO: persist this key so external signer
|
||||
// can actually authenticate us
|
||||
pvsc = privval.NewTCPVal(logger.With("module", "privval"), listenAddr, ed25519.GenPrivKey())
|
||||
listener = privval.NewTCPListener(ln, ed25519.GenPrivKey())
|
||||
default:
|
||||
return nil, fmt.Errorf(
|
||||
"Wrong listen address: expected either 'tcp' or 'unix' protocols, got %s",
|
||||
@@ -895,10 +901,9 @@ func createAndStartPrivValidatorSocketClient(
|
||||
)
|
||||
}
|
||||
|
||||
if pvsc, ok := pvsc.(cmn.Service); ok {
|
||||
if err := pvsc.Start(); err != nil {
|
||||
return nil, errors.Wrap(err, "failed to start")
|
||||
}
|
||||
pvsc := privval.NewSocketVal(logger.With("module", "privval"), listener)
|
||||
if err := pvsc.Start(); err != nil {
|
||||
return nil, errors.Wrap(err, "failed to start private validator")
|
||||
}
|
||||
|
||||
return pvsc, nil
|
||||
|
@@ -15,10 +15,14 @@ import (
|
||||
"github.com/tendermint/tendermint/abci/example/kvstore"
|
||||
cfg "github.com/tendermint/tendermint/config"
|
||||
"github.com/tendermint/tendermint/crypto/ed25519"
|
||||
"github.com/tendermint/tendermint/evidence"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
dbm "github.com/tendermint/tendermint/libs/db"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
mempl "github.com/tendermint/tendermint/mempool"
|
||||
"github.com/tendermint/tendermint/p2p"
|
||||
"github.com/tendermint/tendermint/privval"
|
||||
"github.com/tendermint/tendermint/proxy"
|
||||
sm "github.com/tendermint/tendermint/state"
|
||||
"github.com/tendermint/tendermint/types"
|
||||
tmtime "github.com/tendermint/tendermint/types/time"
|
||||
@@ -122,25 +126,25 @@ func TestNodeSetPrivValTCP(t *testing.T) {
|
||||
config := cfg.ResetTestRoot("node_priv_val_tcp_test")
|
||||
config.BaseConfig.PrivValidatorListenAddr = addr
|
||||
|
||||
rs := privval.NewRemoteSigner(
|
||||
dialer := privval.DialTCPFn(addr, 100*time.Millisecond, ed25519.GenPrivKey())
|
||||
pvsc := privval.NewRemoteSigner(
|
||||
log.TestingLogger(),
|
||||
config.ChainID(),
|
||||
addr,
|
||||
types.NewMockPV(),
|
||||
ed25519.GenPrivKey(),
|
||||
dialer,
|
||||
)
|
||||
privval.RemoteSignerConnDeadline(5 * time.Millisecond)(rs)
|
||||
|
||||
go func() {
|
||||
err := rs.Start()
|
||||
err := pvsc.Start()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
}()
|
||||
defer rs.Stop()
|
||||
defer pvsc.Stop()
|
||||
|
||||
n, err := DefaultNewNode(config, log.TestingLogger())
|
||||
require.NoError(t, err)
|
||||
assert.IsType(t, &privval.TCPVal{}, n.PrivValidator())
|
||||
assert.IsType(t, &privval.SocketVal{}, n.PrivValidator())
|
||||
}
|
||||
|
||||
// address without a protocol must result in error
|
||||
@@ -161,25 +165,25 @@ func TestNodeSetPrivValIPC(t *testing.T) {
|
||||
config := cfg.ResetTestRoot("node_priv_val_tcp_test")
|
||||
config.BaseConfig.PrivValidatorListenAddr = "unix://" + tmpfile
|
||||
|
||||
rs := privval.NewIPCRemoteSigner(
|
||||
dialer := privval.DialUnixFn(tmpfile)
|
||||
pvsc := privval.NewRemoteSigner(
|
||||
log.TestingLogger(),
|
||||
config.ChainID(),
|
||||
tmpfile,
|
||||
types.NewMockPV(),
|
||||
dialer,
|
||||
)
|
||||
privval.IPCRemoteSignerConnDeadline(3 * time.Second)(rs)
|
||||
|
||||
done := make(chan struct{})
|
||||
go func() {
|
||||
defer close(done)
|
||||
n, err := DefaultNewNode(config, log.TestingLogger())
|
||||
require.NoError(t, err)
|
||||
assert.IsType(t, &privval.IPCVal{}, n.PrivValidator())
|
||||
assert.IsType(t, &privval.SocketVal{}, n.PrivValidator())
|
||||
}()
|
||||
|
||||
err := rs.Start()
|
||||
err := pvsc.Start()
|
||||
require.NoError(t, err)
|
||||
defer rs.Stop()
|
||||
defer pvsc.Stop()
|
||||
|
||||
<-done
|
||||
}
|
||||
@@ -192,3 +196,110 @@ func testFreeAddr(t *testing.T) string {
|
||||
|
||||
return fmt.Sprintf("127.0.0.1:%d", ln.Addr().(*net.TCPAddr).Port)
|
||||
}
|
||||
|
||||
// create a proposal block using real and full
|
||||
// mempool and evidence pool and validate it.
|
||||
func TestCreateProposalBlock(t *testing.T) {
|
||||
config := cfg.ResetTestRoot("node_create_proposal")
|
||||
cc := proxy.NewLocalClientCreator(kvstore.NewKVStoreApplication())
|
||||
proxyApp := proxy.NewAppConns(cc)
|
||||
err := proxyApp.Start()
|
||||
require.Nil(t, err)
|
||||
defer proxyApp.Stop()
|
||||
|
||||
logger := log.TestingLogger()
|
||||
|
||||
var height int64 = 1
|
||||
state, stateDB := state(1, height)
|
||||
maxBytes := 16384
|
||||
state.ConsensusParams.BlockSize.MaxBytes = int64(maxBytes)
|
||||
proposerAddr, _ := state.Validators.GetByIndex(0)
|
||||
|
||||
// Make Mempool
|
||||
memplMetrics := mempl.PrometheusMetrics("node_test")
|
||||
mempool := mempl.NewMempool(
|
||||
config.Mempool,
|
||||
proxyApp.Mempool(),
|
||||
state.LastBlockHeight,
|
||||
mempl.WithMetrics(memplMetrics),
|
||||
mempl.WithPreCheck(sm.TxPreCheck(state)),
|
||||
mempl.WithPostCheck(sm.TxPostCheck(state)),
|
||||
)
|
||||
mempool.SetLogger(logger)
|
||||
|
||||
// Make EvidencePool
|
||||
types.RegisterMockEvidencesGlobal()
|
||||
evidence.RegisterMockEvidences()
|
||||
evidenceDB := dbm.NewMemDB()
|
||||
evidenceStore := evidence.NewEvidenceStore(evidenceDB)
|
||||
evidencePool := evidence.NewEvidencePool(stateDB, evidenceStore)
|
||||
evidencePool.SetLogger(logger)
|
||||
|
||||
// fill the evidence pool with more evidence
|
||||
// than can fit in a block
|
||||
minEvSize := 12
|
||||
numEv := (maxBytes / types.MaxEvidenceBytesDenominator) / minEvSize
|
||||
for i := 0; i < numEv; i++ {
|
||||
ev := types.NewMockRandomGoodEvidence(1, proposerAddr, cmn.RandBytes(minEvSize))
|
||||
err := evidencePool.AddEvidence(ev)
|
||||
assert.NoError(t, err)
|
||||
}
|
||||
|
||||
// fill the mempool with more txs
|
||||
// than can fit in a block
|
||||
txLength := 1000
|
||||
for i := 0; i < maxBytes/txLength; i++ {
|
||||
tx := cmn.RandBytes(txLength)
|
||||
err := mempool.CheckTx(tx, nil)
|
||||
assert.NoError(t, err)
|
||||
}
|
||||
|
||||
blockExec := sm.NewBlockExecutor(
|
||||
stateDB,
|
||||
logger,
|
||||
proxyApp.Consensus(),
|
||||
mempool,
|
||||
evidencePool,
|
||||
)
|
||||
|
||||
commit := &types.Commit{}
|
||||
block, _ := blockExec.CreateProposalBlock(
|
||||
height,
|
||||
state, commit,
|
||||
proposerAddr,
|
||||
)
|
||||
|
||||
err = blockExec.ValidateBlock(state, block)
|
||||
assert.NoError(t, err)
|
||||
|
||||
}
|
||||
|
||||
func state(nVals int, height int64) (sm.State, dbm.DB) {
|
||||
vals := make([]types.GenesisValidator, nVals)
|
||||
for i := 0; i < nVals; i++ {
|
||||
secret := []byte(fmt.Sprintf("test%d", i))
|
||||
pk := ed25519.GenPrivKeyFromSecret(secret)
|
||||
vals[i] = types.GenesisValidator{
|
||||
pk.PubKey().Address(),
|
||||
pk.PubKey(),
|
||||
1000,
|
||||
fmt.Sprintf("test%d", i),
|
||||
}
|
||||
}
|
||||
s, _ := sm.MakeGenesisState(&types.GenesisDoc{
|
||||
ChainID: "test-chain",
|
||||
Validators: vals,
|
||||
AppHash: nil,
|
||||
})
|
||||
|
||||
// save validators to db for 2 heights
|
||||
stateDB := dbm.NewMemDB()
|
||||
sm.SaveState(stateDB, s)
|
||||
|
||||
for i := 1; i < int(height); i++ {
|
||||
s.LastBlockHeight++
|
||||
s.LastValidators = s.Validators.Copy()
|
||||
sm.SaveState(stateDB, s)
|
||||
}
|
||||
return s, stateDB
|
||||
}
|
||||
|
@@ -4,8 +4,8 @@ The p2p package provides an abstraction around peer-to-peer communication.
 
 Docs:
 
-- [Connection](https://github.com/tendermint/tendermint/blob/master/docs/spec/docs/spec/p2p/connection.md) for details on how connections and multiplexing work
-- [Peer](https://github.com/tendermint/tendermint/blob/master/docs/spec/docs/spec/p2p/peer.md) for details on peer ID, handshakes, and peer exchange
-- [Node](https://github.com/tendermint/tendermint/blob/master/docs/spec/docs/spec/p2p/node.md) for details about different types of nodes and how they should work
-- [Pex](https://github.com/tendermint/tendermint/blob/master/docs/spec/docs/spec/reactors/pex/pex.md) for details on peer discovery and exchange
-- [Config](https://github.com/tendermint/tendermint/blob/master/docs/spec/docs/spec/p2p/config.md) for details on some config option
+- [Connection](https://github.com/tendermint/tendermint/blob/master/docs/spec/p2p/connection.md) for details on how connections and multiplexing work
+- [Peer](https://github.com/tendermint/tendermint/blob/master/docs/spec/p2p/peer.md) for details on peer ID, handshakes, and peer exchange
+- [Node](https://github.com/tendermint/tendermint/blob/master/docs/spec/p2p/node.md) for details about different types of nodes and how they should work
+- [Pex](https://github.com/tendermint/tendermint/blob/master/docs/spec/reactors/pex/pex.md) for details on peer discovery and exchange
+- [Config](https://github.com/tendermint/tendermint/blob/master/docs/spec/p2p/config.md) for details on some config option
@@ -8,6 +8,7 @@ import (
|
||||
"errors"
|
||||
"io"
|
||||
"net"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"golang.org/x/crypto/chacha20poly1305"
|
||||
@@ -27,20 +28,36 @@ const aeadSizeOverhead = 16 // overhead of poly 1305 authentication tag
|
||||
const aeadKeySize = chacha20poly1305.KeySize
|
||||
const aeadNonceSize = chacha20poly1305.NonceSize
|
||||
|
||||
// SecretConnection implements net.conn.
|
||||
// SecretConnection implements net.Conn.
|
||||
// It is an implementation of the STS protocol.
|
||||
// Note we do not (yet) assume that a remote peer's pubkey
|
||||
// is known ahead of time, and thus we are technically
|
||||
// still vulnerable to MITM. (TODO!)
|
||||
// See docs/sts-final.pdf for more info
|
||||
// See https://github.com/tendermint/tendermint/blob/0.1/docs/sts-final.pdf for
|
||||
// details on the protocol.
|
||||
//
|
||||
// Consumers of the SecretConnection are responsible for authenticating
|
||||
// the remote peer's pubkey against known information, like a nodeID.
|
||||
// Otherwise they are vulnerable to MITM.
|
||||
// (TODO(ismail): see also https://github.com/tendermint/tendermint/issues/3010)
|
||||
type SecretConnection struct {
|
||||
conn io.ReadWriteCloser
|
||||
recvBuffer []byte
|
||||
recvNonce *[aeadNonceSize]byte
|
||||
sendNonce *[aeadNonceSize]byte
|
||||
|
||||
// immutable
|
||||
recvSecret *[aeadKeySize]byte
|
||||
sendSecret *[aeadKeySize]byte
|
||||
remPubKey crypto.PubKey
|
||||
conn io.ReadWriteCloser
|
||||
|
||||
// net.Conn must be thread safe:
|
||||
// https://golang.org/pkg/net/#Conn.
|
||||
// Since we have internal mutable state,
|
||||
// we need mtxs. But recv and send states
|
||||
// are independent, so we can use two mtxs.
|
||||
// All .Read are covered by recvMtx,
|
||||
// all .Write are covered by sendMtx.
|
||||
recvMtx sync.Mutex
|
||||
recvBuffer []byte
|
||||
recvNonce *[aeadNonceSize]byte
|
||||
|
||||
sendMtx sync.Mutex
|
||||
sendNonce *[aeadNonceSize]byte
|
||||
}
|
||||
|
||||
// MakeSecretConnection performs handshake and returns a new authenticated
|
||||
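The rewritten comment above makes the contract explicit: MakeSecretConnection encrypts the link but does not authenticate it, so the caller must compare the remote static pubkey against the node ID it meant to reach. A hedged sketch of that check (the surrounding function and names are assumptions, not code from this file):

```go
package p2psketch

import (
	"fmt"
	"net"

	"github.com/tendermint/tendermint/crypto"
	"github.com/tendermint/tendermint/p2p"
	tmconn "github.com/tendermint/tendermint/p2p/conn"
)

// dialSecure assumes c is an established TCP connection and expectedID is the
// node ID taken from the address we dialed.
func dialSecure(c net.Conn, ourKey crypto.PrivKey, expectedID p2p.ID) (*tmconn.SecretConnection, error) {
	sc, err := tmconn.MakeSecretConnection(c, ourKey)
	if err != nil {
		return nil, err
	}
	// Without this check the encrypted channel could still terminate at a
	// man in the middle.
	if p2p.PubKeyToID(sc.RemotePubKey()) != expectedID {
		_ = sc.Close()
		return nil, fmt.Errorf("unexpected peer %v", p2p.PubKeyToID(sc.RemotePubKey()))
	}
	return sc, nil
}
```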
@@ -109,9 +126,12 @@ func (sc *SecretConnection) RemotePubKey() crypto.PubKey {
|
||||
return sc.remPubKey
|
||||
}
|
||||
|
||||
// Writes encrypted frames of `sealedFrameSize`
|
||||
// CONTRACT: data smaller than dataMaxSize is read atomically.
|
||||
// Writes encrypted frames of `totalFrameSize + aeadSizeOverhead`.
|
||||
// CONTRACT: data smaller than dataMaxSize is written atomically.
|
||||
func (sc *SecretConnection) Write(data []byte) (n int, err error) {
|
||||
sc.sendMtx.Lock()
|
||||
defer sc.sendMtx.Unlock()
|
||||
|
||||
for 0 < len(data) {
|
||||
var frame = make([]byte, totalFrameSize)
|
||||
var chunk []byte
|
||||
@@ -130,6 +150,7 @@ func (sc *SecretConnection) Write(data []byte) (n int, err error) {
|
||||
if err != nil {
|
||||
return n, errors.New("Invalid SecretConnection Key")
|
||||
}
|
||||
|
||||
// encrypt the frame
|
||||
var sealedFrame = make([]byte, aeadSizeOverhead+totalFrameSize)
|
||||
aead.Seal(sealedFrame[:0], sc.sendNonce[:], frame, nil)
|
||||
@@ -147,23 +168,30 @@ func (sc *SecretConnection) Write(data []byte) (n int, err error) {
|
||||
|
||||
// CONTRACT: data smaller than dataMaxSize is read atomically.
|
||||
func (sc *SecretConnection) Read(data []byte) (n int, err error) {
|
||||
sc.recvMtx.Lock()
|
||||
defer sc.recvMtx.Unlock()
|
||||
|
||||
// read off and update the recvBuffer, if non-empty
|
||||
if 0 < len(sc.recvBuffer) {
|
||||
n = copy(data, sc.recvBuffer)
|
||||
sc.recvBuffer = sc.recvBuffer[n:]
|
||||
return
|
||||
}
|
||||
|
||||
// read off the conn
|
||||
sealedFrame := make([]byte, totalFrameSize+aeadSizeOverhead)
|
||||
_, err = io.ReadFull(sc.conn, sealedFrame)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
aead, err := chacha20poly1305.New(sc.recvSecret[:])
|
||||
if err != nil {
|
||||
return n, errors.New("Invalid SecretConnection Key")
|
||||
}
|
||||
sealedFrame := make([]byte, totalFrameSize+aeadSizeOverhead)
|
||||
_, err = io.ReadFull(sc.conn, sealedFrame)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
// decrypt the frame
|
||||
// decrypt the frame.
|
||||
// reads and updates the sc.recvNonce
|
||||
var frame = make([]byte, totalFrameSize)
|
||||
_, err = aead.Open(frame[:0], sc.recvNonce[:], sealedFrame, nil)
|
||||
if err != nil {
|
||||
@@ -172,12 +200,13 @@ func (sc *SecretConnection) Read(data []byte) (n int, err error) {
|
||||
incrNonce(sc.recvNonce)
|
||||
// end decryption
|
||||
|
||||
// copy checkLength worth into data,
|
||||
// set recvBuffer to the rest.
|
||||
var chunkLength = binary.LittleEndian.Uint32(frame) // read the first four bytes
|
||||
if chunkLength > dataMaxSize {
|
||||
return 0, errors.New("chunkLength is greater than dataMaxSize")
|
||||
}
|
||||
var chunk = frame[dataLenSize : dataLenSize+chunkLength]
|
||||
|
||||
n = copy(data, chunk)
|
||||
sc.recvBuffer = chunk[n:]
|
||||
return
|
||||
|
@@ -7,10 +7,12 @@ import (
|
||||
"fmt"
|
||||
"io"
|
||||
"log"
|
||||
"net"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strconv"
|
||||
"strings"
|
||||
"sync"
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
@@ -98,6 +100,69 @@ func TestSecretConnectionHandshake(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func TestConcurrentWrite(t *testing.T) {
|
||||
fooSecConn, barSecConn := makeSecretConnPair(t)
|
||||
fooWriteText := cmn.RandStr(dataMaxSize)
|
||||
|
||||
// write from two routines.
|
||||
// should be safe from race according to net.Conn:
|
||||
// https://golang.org/pkg/net/#Conn
|
||||
n := 100
|
||||
wg := new(sync.WaitGroup)
|
||||
wg.Add(3)
|
||||
go writeLots(t, wg, fooSecConn, fooWriteText, n)
|
||||
go writeLots(t, wg, fooSecConn, fooWriteText, n)
|
||||
|
||||
// Consume reads from bar's reader
|
||||
readLots(t, wg, barSecConn, n*2)
|
||||
wg.Wait()
|
||||
|
||||
if err := fooSecConn.Close(); err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
}
|
||||
|
||||
func TestConcurrentRead(t *testing.T) {
|
||||
fooSecConn, barSecConn := makeSecretConnPair(t)
|
||||
fooWriteText := cmn.RandStr(dataMaxSize)
|
||||
n := 100
|
||||
|
||||
// read from two routines.
|
||||
// should be safe from race according to net.Conn:
|
||||
// https://golang.org/pkg/net/#Conn
|
||||
wg := new(sync.WaitGroup)
|
||||
wg.Add(3)
|
||||
go readLots(t, wg, fooSecConn, n/2)
|
||||
go readLots(t, wg, fooSecConn, n/2)
|
||||
|
||||
// write to bar
|
||||
writeLots(t, wg, barSecConn, fooWriteText, n)
|
||||
wg.Wait()
|
||||
|
||||
if err := fooSecConn.Close(); err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
}
|
||||
|
||||
func writeLots(t *testing.T, wg *sync.WaitGroup, conn net.Conn, txt string, n int) {
|
||||
defer wg.Done()
|
||||
for i := 0; i < n; i++ {
|
||||
_, err := conn.Write([]byte(txt))
|
||||
if err != nil {
|
||||
t.Fatalf("Failed to write to fooSecConn: %v", err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func readLots(t *testing.T, wg *sync.WaitGroup, conn net.Conn, n int) {
|
||||
readBuffer := make([]byte, dataMaxSize)
|
||||
for i := 0; i < n; i++ {
|
||||
_, err := conn.Read(readBuffer)
|
||||
assert.NoError(t, err)
|
||||
}
|
||||
wg.Done()
|
||||
}
|
||||
|
||||
func TestSecretConnectionReadWrite(t *testing.T) {
|
||||
fooConn, barConn := makeKVStoreConnPair()
|
||||
fooWrites, barWrites := []string{}, []string{}
|
||||
|
@@ -11,6 +11,7 @@ type ConnSet interface {
 	HasIP(net.IP) bool
 	Set(net.Conn, []net.IP)
 	Remove(net.Conn)
+	RemoveAddr(net.Addr)
 }
 
 type connSetItem struct {
@@ -62,6 +63,13 @@ func (cs *connSet) Remove(c net.Conn) {
 	delete(cs.conns, c.RemoteAddr().String())
 }
 
+func (cs *connSet) RemoveAddr(addr net.Addr) {
+	cs.Lock()
+	defer cs.Unlock()
+
+	delete(cs.conns, addr.String())
+}
+
 func (cs *connSet) Set(c net.Conn, ips []net.IP) {
 	cs.Lock()
 	defer cs.Unlock()
@@ -55,6 +55,16 @@ func (p *peer) RemoteIP() net.IP {
 	return net.ParseIP("127.0.0.1")
 }
 
+// Addr always returns tcp://localhost:8800.
+func (p *peer) RemoteAddr() net.Addr {
+	return &net.TCPAddr{IP: net.ParseIP("127.0.0.1"), Port: 8800}
+}
+
+// CloseConn always returns nil.
+func (p *peer) CloseConn() error {
+	return nil
+}
+
 // Status always returns empry connection status.
 func (p *peer) Status() tmconn.ConnectionStatus {
 	return tmconn.ConnectionStatus{}
@@ -7,7 +7,11 @@ import (
|
||||
stdprometheus "github.com/prometheus/client_golang/prometheus"
|
||||
)
|
||||
|
||||
const MetricsSubsystem = "p2p"
|
||||
const (
|
||||
// MetricsSubsystem is a subsystem shared by all metrics exposed by this
|
||||
// package.
|
||||
MetricsSubsystem = "p2p"
|
||||
)
|
||||
|
||||
// Metrics contains metrics exposed by this package.
|
||||
type Metrics struct {
|
||||
@@ -24,45 +28,51 @@ type Metrics struct {
|
||||
}
|
||||
|
||||
// PrometheusMetrics returns Metrics build using Prometheus client library.
|
||||
func PrometheusMetrics(namespace string) *Metrics {
|
||||
// Optionally, labels can be provided along with their values ("foo",
|
||||
// "fooValue").
|
||||
func PrometheusMetrics(namespace string, labelsAndValues ...string) *Metrics {
|
||||
labels := []string{}
|
||||
for i := 0; i < len(labelsAndValues); i += 2 {
|
||||
labels = append(labels, labelsAndValues[i])
|
||||
}
|
||||
return &Metrics{
|
||||
Peers: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
|
||||
Namespace: namespace,
|
||||
Subsystem: MetricsSubsystem,
|
||||
Name: "peers",
|
||||
Help: "Number of peers.",
|
||||
}, []string{}),
|
||||
}, labels).With(labelsAndValues...),
|
||||
PeerReceiveBytesTotal: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
|
||||
Namespace: namespace,
|
||||
Subsystem: MetricsSubsystem,
|
||||
Name: "peer_receive_bytes_total",
|
||||
Help: "Number of bytes received from a given peer.",
|
||||
}, []string{"peer_id"}),
|
||||
}, append(labels, "peer_id")).With(labelsAndValues...),
|
||||
PeerSendBytesTotal: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
|
||||
Namespace: namespace,
|
||||
Subsystem: MetricsSubsystem,
|
||||
Name: "peer_send_bytes_total",
|
||||
Help: "Number of bytes sent to a given peer.",
|
||||
}, []string{"peer_id"}),
|
||||
}, append(labels, "peer_id")).With(labelsAndValues...),
|
||||
PeerPendingSendBytes: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
|
||||
Namespace: namespace,
|
||||
Subsystem: MetricsSubsystem,
|
||||
Name: "peer_pending_send_bytes",
|
||||
Help: "Number of pending bytes to be sent to a given peer.",
|
||||
}, []string{"peer_id"}),
|
||||
}, append(labels, "peer_id")).With(labelsAndValues...),
|
||||
NumTxs: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
|
||||
Namespace: namespace,
|
||||
Subsystem: MetricsSubsystem,
|
||||
Name: "num_txs",
|
||||
Help: "Number of transactions submitted by each peer.",
|
||||
}, []string{"peer_id"}),
|
||||
}, append(labels, "peer_id")).With(labelsAndValues...),
|
||||
}
|
||||
}
|
||||
|
||||
// NopMetrics returns no-op Metrics.
|
||||
func NopMetrics() *Metrics {
|
||||
return &Metrics{
|
||||
Peers: discard.NewGauge(),
|
||||
Peers: discard.NewGauge(),
|
||||
PeerReceiveBytesTotal: discard.NewCounter(),
|
||||
PeerSendBytesTotal: discard.NewCounter(),
|
||||
PeerPendingSendBytes: discard.NewGauge(),
|
||||
|
p2p/peer.go
@@ -18,15 +18,18 @@ type Peer interface {
|
||||
cmn.Service
|
||||
FlushStop()
|
||||
|
||||
ID() ID // peer's cryptographic ID
|
||||
RemoteIP() net.IP // remote IP of the connection
|
||||
ID() ID // peer's cryptographic ID
|
||||
RemoteIP() net.IP // remote IP of the connection
|
||||
RemoteAddr() net.Addr // remote address of the connection
|
||||
|
||||
IsOutbound() bool // did we dial the peer
|
||||
IsPersistent() bool // do we redial this peer when we disconnect
|
||||
|
||||
CloseConn() error // close original connection
|
||||
|
||||
NodeInfo() NodeInfo // peer's info
|
||||
Status() tmconn.ConnectionStatus
|
||||
OriginalAddr() *NetAddress
|
||||
OriginalAddr() *NetAddress // original address for outbound peers
|
||||
|
||||
Send(byte, []byte) bool
|
||||
TrySend(byte, []byte) bool
|
||||
@@ -296,6 +299,11 @@ func (p *peer) hasChannel(chID byte) bool {
|
||||
return false
|
||||
}
|
||||
|
||||
// CloseConn closes original connection. Used for cleaning up in cases where the peer had not been started at all.
|
||||
func (p *peer) CloseConn() error {
|
||||
return p.peerConn.conn.Close()
|
||||
}
|
||||
|
||||
//---------------------------------------------------
|
||||
// methods only used for testing
|
||||
// TODO: can we remove these?
|
||||
@@ -305,8 +313,8 @@ func (pc *peerConn) CloseConn() {
|
||||
pc.conn.Close() // nolint: errcheck
|
||||
}
|
||||
|
||||
// Addr returns peer's remote network address.
|
||||
func (p *peer) Addr() net.Addr {
|
||||
// RemoteAddr returns peer's remote network address.
|
||||
func (p *peer) RemoteAddr() net.Addr {
|
||||
return p.peerConn.conn.RemoteAddr()
|
||||
}
|
||||
|
||||
|
@@ -30,6 +30,8 @@ func (mp *mockPeer) Get(s string) interface{} { return s }
|
||||
func (mp *mockPeer) Set(string, interface{}) {}
|
||||
func (mp *mockPeer) RemoteIP() net.IP { return mp.ip }
|
||||
func (mp *mockPeer) OriginalAddr() *NetAddress { return nil }
|
||||
func (mp *mockPeer) RemoteAddr() net.Addr { return &net.TCPAddr{IP: mp.ip, Port: 8800} }
|
||||
func (mp *mockPeer) CloseConn() error { return nil }
|
||||
|
||||
// Returns a mock peer
|
||||
func newMockPeer(ip net.IP) *mockPeer {
|
||||
|
@@ -39,7 +39,7 @@ func TestPeerBasic(t *testing.T) {
|
||||
assert.False(p.IsPersistent())
|
||||
p.persistent = true
|
||||
assert.True(p.IsPersistent())
|
||||
assert.Equal(rp.Addr().DialString(), p.Addr().String())
|
||||
assert.Equal(rp.Addr().DialString(), p.RemoteAddr().String())
|
||||
assert.Equal(rp.ID(), p.ID())
|
||||
}
|
||||
|
||||
@@ -137,9 +137,9 @@ type remotePeer struct {
|
||||
PrivKey crypto.PrivKey
|
||||
Config *config.P2PConfig
|
||||
addr *NetAddress
|
||||
quit chan struct{}
|
||||
channels cmn.HexBytes
|
||||
listenAddr string
|
||||
listener net.Listener
|
||||
}
|
||||
|
||||
func (rp *remotePeer) Addr() *NetAddress {
|
||||
@@ -159,25 +159,45 @@ func (rp *remotePeer) Start() {
|
||||
if e != nil {
|
||||
golog.Fatalf("net.Listen tcp :0: %+v", e)
|
||||
}
|
||||
rp.listener = l
|
||||
rp.addr = NewNetAddress(PubKeyToID(rp.PrivKey.PubKey()), l.Addr())
|
||||
rp.quit = make(chan struct{})
|
||||
if rp.channels == nil {
|
||||
rp.channels = []byte{testCh}
|
||||
}
|
||||
go rp.accept(l)
|
||||
go rp.accept()
|
||||
}
|
||||
|
||||
func (rp *remotePeer) Stop() {
|
||||
close(rp.quit)
|
||||
rp.listener.Close()
|
||||
}
|
||||
|
||||
func (rp *remotePeer) accept(l net.Listener) {
|
||||
func (rp *remotePeer) Dial(addr *NetAddress) (net.Conn, error) {
|
||||
conn, err := addr.DialTimeout(1 * time.Second)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
pc, err := testInboundPeerConn(conn, rp.Config, rp.PrivKey)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
_, err = handshake(pc.conn, time.Second, rp.nodeInfo())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return conn, err
|
||||
}
|
||||
|
||||
func (rp *remotePeer) accept() {
|
||||
conns := []net.Conn{}
|
||||
|
||||
for {
|
||||
conn, err := l.Accept()
|
||||
conn, err := rp.listener.Accept()
|
||||
if err != nil {
|
||||
golog.Fatalf("Failed to accept conn: %+v", err)
|
||||
golog.Printf("Failed to accept conn: %+v", err)
|
||||
for _, conn := range conns {
|
||||
_ = conn.Close()
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
pc, err := testInboundPeerConn(conn, rp.Config, rp.PrivKey)
|
||||
@@ -185,31 +205,20 @@ func (rp *remotePeer) accept(l net.Listener) {
|
||||
golog.Fatalf("Failed to create a peer: %+v", err)
|
||||
}
|
||||
|
||||
_, err = handshake(pc.conn, time.Second, rp.nodeInfo(l))
|
||||
_, err = handshake(pc.conn, time.Second, rp.nodeInfo())
|
||||
if err != nil {
|
||||
golog.Fatalf("Failed to perform handshake: %+v", err)
|
||||
}
|
||||
|
||||
conns = append(conns, conn)
|
||||
|
||||
select {
|
||||
case <-rp.quit:
|
||||
for _, conn := range conns {
|
||||
if err := conn.Close(); err != nil {
|
||||
golog.Fatal(err)
|
||||
}
|
||||
}
|
||||
return
|
||||
default:
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (rp *remotePeer) nodeInfo(l net.Listener) NodeInfo {
|
||||
func (rp *remotePeer) nodeInfo() NodeInfo {
|
||||
return DefaultNodeInfo{
|
||||
ProtocolVersion: defaultProtocolVersion,
|
||||
ID_: rp.Addr().ID,
|
||||
ListenAddr: l.Addr().String(),
|
||||
ListenAddr: rp.listener.Addr().String(),
|
||||
Network: "testing",
|
||||
Version: "1.2.3-rc0-deadbeef",
|
||||
Channels: rp.channels,
|
||||
|
@@ -471,7 +471,11 @@ func (r *PEXReactor) dialPeer(addr *p2p.NetAddress) {
|
||||
attempts, lastDialed := r.dialAttemptsInfo(addr)
|
||||
|
||||
if attempts > maxAttemptsToDial {
|
||||
r.Logger.Error("Reached max attempts to dial", "addr", addr, "attempts", attempts)
|
||||
// Do not log the message if the addr gets readded.
|
||||
if attempts+1 == maxAttemptsToDial {
|
||||
r.Logger.Info("Reached max attempts to dial", "addr", addr, "attempts", attempts)
|
||||
r.attemptsToDial.Store(addr.DialString(), _attemptsToDial{attempts + 1, time.Now()})
|
||||
}
|
||||
r.book.MarkBad(addr)
|
||||
return
|
||||
}
|
||||
|
@@ -404,6 +404,8 @@ func (mockPeer) TrySend(byte, []byte) bool { return false }
|
||||
func (mockPeer) Set(string, interface{}) {}
|
||||
func (mockPeer) Get(string) interface{} { return nil }
|
||||
func (mockPeer) OriginalAddr() *p2p.NetAddress { return nil }
|
||||
func (mockPeer) RemoteAddr() net.Addr { return &net.TCPAddr{IP: net.ParseIP("127.0.0.1"), Port: 8800} }
|
||||
func (mockPeer) CloseConn() error { return nil }
|
||||
|
||||
func assertPeersWithTimeout(
|
||||
t *testing.T,
|
||||
|
@@ -210,6 +210,7 @@ func (sw *Switch) OnStart() error {
|
||||
func (sw *Switch) OnStop() {
|
||||
// Stop peers
|
||||
for _, p := range sw.peers.List() {
|
||||
sw.transport.Cleanup(p)
|
||||
p.Stop()
|
||||
if sw.peers.Remove(p) {
|
||||
sw.metrics.Peers.Add(float64(-1))
|
||||
@@ -304,6 +305,7 @@ func (sw *Switch) stopAndRemovePeer(peer Peer, reason interface{}) {
|
||||
if sw.peers.Remove(peer) {
|
||||
sw.metrics.Peers.Add(float64(-1))
|
||||
}
|
||||
sw.transport.Cleanup(peer)
|
||||
peer.Stop()
|
||||
for _, reactor := range sw.reactors {
|
||||
reactor.RemovePeer(peer, reason)
|
||||
@@ -529,13 +531,16 @@ func (sw *Switch) acceptRoutine() {
|
||||
"max", sw.config.MaxNumInboundPeers,
|
||||
)
|
||||
|
||||
_ = p.Stop()
|
||||
sw.transport.Cleanup(p)
|
||||
|
||||
continue
|
||||
}
|
||||
|
||||
if err := sw.addPeer(p); err != nil {
|
||||
_ = p.Stop()
|
||||
sw.transport.Cleanup(p)
|
||||
if p.IsRunning() {
|
||||
_ = p.Stop()
|
||||
}
|
||||
sw.Logger.Info(
|
||||
"Ignoring inbound connection: error while adding peer",
|
||||
"err", err,
|
||||
@@ -593,7 +598,10 @@ func (sw *Switch) addOutboundPeerWithConfig(
|
||||
}
|
||||
|
||||
if err := sw.addPeer(p); err != nil {
|
||||
_ = p.Stop()
|
||||
sw.transport.Cleanup(p)
|
||||
if p.IsRunning() {
|
||||
_ = p.Stop()
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
@@ -628,7 +636,8 @@ func (sw *Switch) filterPeer(p Peer) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// addPeer starts up the Peer and adds it to the Switch.
|
||||
// addPeer starts up the Peer and adds it to the Switch. An error is returned if
// the peer is filtered out, fails to start, or can't be added.
|
||||
func (sw *Switch) addPeer(p Peer) error {
|
||||
if err := sw.filterPeer(p); err != nil {
|
||||
return err
|
||||
@@ -636,11 +645,15 @@ func (sw *Switch) addPeer(p Peer) error {
|
||||
|
||||
p.SetLogger(sw.Logger.With("peer", p.NodeInfo().NetAddress()))
|
||||
|
||||
// All good. Start peer
|
||||
// Handle the shut down case where the switch has stopped but we're
|
||||
// concurrently trying to add a peer.
|
||||
if sw.IsRunning() {
|
||||
// All good. Start peer
|
||||
if err := sw.startInitPeer(p); err != nil {
|
||||
return err
|
||||
}
|
||||
} else {
|
||||
sw.Logger.Error("Won't start a peer - switch is not running", "peer", p)
|
||||
}
|
||||
|
||||
// Add the peer to .peers.
|
||||
|
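Every place the switch drops a peer in the hunks above now also releases the transport-level connection, which is the file descriptor leak fix from this release. The helper below is a hypothetical sketch, not part of the patch, that captures the repeated pattern (Cleanup always, Stop only if the peer was actually started); the function name is an assumption.

// removePeerConn is a hypothetical helper (not in the diff) illustrating the
// pattern applied in OnStop, stopAndRemovePeer, acceptRoutine and
// addOutboundPeerWithConfig: always release the transport resources for the
// peer (closing its connection), and only Stop peers that are running.
func removePeerConn(sw *Switch, p Peer) {
	sw.transport.Cleanup(p)
	if p.IsRunning() {
		_ = p.Stop()
	}
}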
@@ -3,7 +3,9 @@ package p2p
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"net"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"regexp"
|
||||
@@ -13,7 +15,6 @@ import (
|
||||
"time"
|
||||
|
||||
stdprometheus "github.com/prometheus/client_golang/prometheus"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
@@ -477,6 +478,58 @@ func TestSwitchFullConnectivity(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func TestSwitchAcceptRoutine(t *testing.T) {
|
||||
cfg.MaxNumInboundPeers = 5
|
||||
|
||||
// make switch
|
||||
sw := MakeSwitch(cfg, 1, "testing", "123.123.123", initSwitchFunc)
|
||||
err := sw.Start()
|
||||
require.NoError(t, err)
|
||||
defer sw.Stop()
|
||||
|
||||
remotePeers := make([]*remotePeer, 0)
|
||||
assert.Equal(t, 0, sw.Peers().Size())
|
||||
|
||||
// 1. check we connect up to MaxNumInboundPeers
|
||||
for i := 0; i < cfg.MaxNumInboundPeers; i++ {
|
||||
rp := &remotePeer{PrivKey: ed25519.GenPrivKey(), Config: cfg}
|
||||
remotePeers = append(remotePeers, rp)
|
||||
rp.Start()
|
||||
c, err := rp.Dial(sw.NodeInfo().NetAddress())
|
||||
require.NoError(t, err)
|
||||
// spawn a reading routine to prevent connection from closing
|
||||
go func(c net.Conn) {
|
||||
for {
|
||||
one := make([]byte, 1)
|
||||
_, err := c.Read(one)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
}
|
||||
}(c)
|
||||
}
|
||||
time.Sleep(10 * time.Millisecond)
|
||||
assert.Equal(t, cfg.MaxNumInboundPeers, sw.Peers().Size())
|
||||
|
||||
// 2. check we close new connections if we already have MaxNumInboundPeers peers
|
||||
rp := &remotePeer{PrivKey: ed25519.GenPrivKey(), Config: cfg}
|
||||
rp.Start()
|
||||
conn, err := rp.Dial(sw.NodeInfo().NetAddress())
|
||||
require.NoError(t, err)
|
||||
// check conn is closed
|
||||
one := make([]byte, 1)
|
||||
conn.SetReadDeadline(time.Now().Add(10 * time.Millisecond))
|
||||
_, err = conn.Read(one)
|
||||
assert.Equal(t, io.EOF, err)
|
||||
assert.Equal(t, cfg.MaxNumInboundPeers, sw.Peers().Size())
|
||||
rp.Stop()
|
||||
|
||||
// stop remote peers
|
||||
for _, rp := range remotePeers {
|
||||
rp.Stop()
|
||||
}
|
||||
}
|
||||
|
||||
func BenchmarkSwitchBroadcast(b *testing.B) {
|
||||
s1, s2 := MakeSwitchPair(b, func(i int, sw *Switch) *Switch {
|
||||
// Make bar reactors of bar channels each
|
||||
|
@@ -247,17 +247,35 @@ func testNodeInfo(id ID, name string) NodeInfo {
|
||||
}
|
||||
|
||||
func testNodeInfoWithNetwork(id ID, name, network string) NodeInfo {
|
||||
port, err := getFreePort()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
return DefaultNodeInfo{
|
||||
ProtocolVersion: defaultProtocolVersion,
|
||||
ID_: id,
|
||||
ListenAddr: fmt.Sprintf("127.0.0.1:%d", cmn.RandIntn(64512)+1023),
|
||||
ListenAddr: fmt.Sprintf("127.0.0.1:%d", port),
|
||||
Network: network,
|
||||
Version: "1.2.3-rc0-deadbeef",
|
||||
Channels: []byte{testCh},
|
||||
Moniker: name,
|
||||
Other: DefaultNodeInfoOther{
|
||||
TxIndex: "on",
|
||||
RPCAddress: fmt.Sprintf("127.0.0.1:%d", cmn.RandIntn(64512)+1023),
|
||||
RPCAddress: fmt.Sprintf("127.0.0.1:%d", port),
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func getFreePort() (int, error) {
|
||||
addr, err := net.ResolveTCPAddr("tcp", "localhost:0")
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
|
||||
l, err := net.ListenTCP("tcp", addr)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
defer l.Close()
|
||||
return l.Addr().(*net.TCPAddr).Port, nil
|
||||
}
|
||||
|
@@ -52,6 +52,9 @@ type Transport interface {
|
||||
|
||||
// Dial connects to the Peer for the address.
|
||||
Dial(NetAddress, peerConfig) (Peer, error)
|
||||
|
||||
// Cleanup any resources associated with Peer.
|
||||
Cleanup(Peer)
|
||||
}
|
||||
|
||||
// transportLifecycle bundles the methods for callers to control start and stop
|
||||
@@ -274,6 +277,13 @@ func (mt *MultiplexTransport) acceptPeers() {
|
||||
}
|
||||
}
|
||||
|
||||
// Cleanup removes the given address from the connections set and
|
||||
// closes the connection.
|
||||
func (mt *MultiplexTransport) Cleanup(peer Peer) {
|
||||
mt.conns.RemoveAddr(peer.RemoteAddr())
|
||||
_ = peer.CloseConn()
|
||||
}
|
||||
|
||||
func (mt *MultiplexTransport) cleanup(c net.Conn) error {
|
||||
mt.conns.Remove(c)
|
||||
|
||||
@@ -418,12 +428,6 @@ func (mt *MultiplexTransport) wrapPeer(
|
||||
PeerMetrics(cfg.metrics),
|
||||
)
|
||||
|
||||
// Wait for Peer to Stop so we can cleanup.
|
||||
go func(c net.Conn) {
|
||||
<-p.Quit()
|
||||
_ = mt.cleanup(c)
|
||||
}(c)
|
||||
|
||||
return p
|
||||
}
|
||||
|
||||
|
238
privval/client.go
Normal file
@@ -0,0 +1,238 @@
|
||||
package privval
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"net"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/tendermint/tendermint/crypto"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
"github.com/tendermint/tendermint/types"
|
||||
)
|
||||
|
||||
const (
|
||||
defaultConnHeartBeatSeconds = 2
|
||||
defaultDialRetries = 10
|
||||
)
|
||||
|
||||
// Socket errors.
|
||||
var (
|
||||
ErrUnexpectedResponse = errors.New("received unexpected response")
|
||||
)
|
||||
|
||||
var (
|
||||
connHeartbeat = time.Second * defaultConnHeartBeatSeconds
|
||||
)
|
||||
|
||||
// SocketValOption sets an optional parameter on the SocketVal.
|
||||
type SocketValOption func(*SocketVal)
|
||||
|
||||
// SocketValHeartbeat sets the period on which to check the liveness of the
|
||||
// connected Signer connections.
|
||||
func SocketValHeartbeat(period time.Duration) SocketValOption {
|
||||
return func(sc *SocketVal) { sc.connHeartbeat = period }
|
||||
}
|
||||
|
||||
// SocketVal implements PrivValidator.
|
||||
// It listens for an external process to dial in and uses
|
||||
// the socket to request signatures.
|
||||
type SocketVal struct {
|
||||
cmn.BaseService
|
||||
|
||||
listener net.Listener
|
||||
|
||||
// ping
|
||||
cancelPing chan struct{}
|
||||
pingTicker *time.Ticker
|
||||
connHeartbeat time.Duration
|
||||
|
||||
// signer is mutable since it can be
|
||||
// reset if the connection fails.
|
||||
// failures are detected by a background
|
||||
// ping routine.
|
||||
// Methods on the underlying net.Conn itself
|
||||
// are already goroutine safe.
|
||||
mtx sync.RWMutex
|
||||
signer *RemoteSignerClient
|
||||
}
|
||||
|
||||
// Check that SocketVal implements PrivValidator.
|
||||
var _ types.PrivValidator = (*SocketVal)(nil)
|
||||
|
||||
// NewSocketVal returns an instance of SocketVal.
|
||||
func NewSocketVal(
|
||||
logger log.Logger,
|
||||
listener net.Listener,
|
||||
) *SocketVal {
|
||||
sc := &SocketVal{
|
||||
listener: listener,
|
||||
connHeartbeat: connHeartbeat,
|
||||
}
|
||||
|
||||
sc.BaseService = *cmn.NewBaseService(logger, "SocketVal", sc)
|
||||
|
||||
return sc
|
||||
}
|
||||
|
||||
//--------------------------------------------------------
|
||||
// Implement PrivValidator
|
||||
|
||||
// GetPubKey implements PrivValidator.
|
||||
func (sc *SocketVal) GetPubKey() crypto.PubKey {
|
||||
sc.mtx.RLock()
|
||||
defer sc.mtx.RUnlock()
|
||||
return sc.signer.GetPubKey()
|
||||
}
|
||||
|
||||
// SignVote implements PrivValidator.
|
||||
func (sc *SocketVal) SignVote(chainID string, vote *types.Vote) error {
|
||||
sc.mtx.RLock()
|
||||
defer sc.mtx.RUnlock()
|
||||
return sc.signer.SignVote(chainID, vote)
|
||||
}
|
||||
|
||||
// SignProposal implements PrivValidator.
|
||||
func (sc *SocketVal) SignProposal(chainID string, proposal *types.Proposal) error {
|
||||
sc.mtx.RLock()
|
||||
defer sc.mtx.RUnlock()
|
||||
return sc.signer.SignProposal(chainID, proposal)
|
||||
}
|
||||
|
||||
//--------------------------------------------------------
|
||||
// More thread safe methods proxied to the signer
|
||||
|
||||
// Ping is used to check connection health.
|
||||
func (sc *SocketVal) Ping() error {
|
||||
sc.mtx.RLock()
|
||||
defer sc.mtx.RUnlock()
|
||||
return sc.signer.Ping()
|
||||
}
|
||||
|
||||
// Close closes the underlying net.Conn.
|
||||
func (sc *SocketVal) Close() {
|
||||
sc.mtx.RLock()
|
||||
defer sc.mtx.RUnlock()
|
||||
if sc.signer != nil {
|
||||
if err := sc.signer.Close(); err != nil {
|
||||
sc.Logger.Error("OnStop", "err", err)
|
||||
}
|
||||
}
|
||||
|
||||
if sc.listener != nil {
|
||||
if err := sc.listener.Close(); err != nil {
|
||||
sc.Logger.Error("OnStop", "err", err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
//--------------------------------------------------------
|
||||
// Service start and stop
|
||||
|
||||
// OnStart implements cmn.Service.
|
||||
func (sc *SocketVal) OnStart() error {
|
||||
if closed, err := sc.reset(); err != nil {
|
||||
sc.Logger.Error("OnStart", "err", err)
|
||||
return err
|
||||
} else if closed {
|
||||
return fmt.Errorf("listener is closed")
|
||||
}
|
||||
|
||||
// Start a routine to keep the connection alive
|
||||
sc.cancelPing = make(chan struct{}, 1)
|
||||
sc.pingTicker = time.NewTicker(sc.connHeartbeat)
|
||||
go func() {
|
||||
for {
|
||||
select {
|
||||
case <-sc.pingTicker.C:
|
||||
err := sc.Ping()
|
||||
if err != nil {
|
||||
sc.Logger.Error("Ping", "err", err)
|
||||
if err == ErrUnexpectedResponse {
|
||||
return
|
||||
}
|
||||
|
||||
closed, err := sc.reset()
|
||||
if err != nil {
|
||||
sc.Logger.Error("Reconnecting to remote signer failed", "err", err)
|
||||
continue
|
||||
}
|
||||
if closed {
|
||||
sc.Logger.Info("listener is closing")
|
||||
return
|
||||
}
|
||||
|
||||
sc.Logger.Info("Re-created connection to remote signer", "impl", sc)
|
||||
}
|
||||
case <-sc.cancelPing:
|
||||
sc.pingTicker.Stop()
|
||||
return
|
||||
}
|
||||
}
|
||||
}()
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// OnStop implements cmn.Service.
|
||||
func (sc *SocketVal) OnStop() {
|
||||
if sc.cancelPing != nil {
|
||||
close(sc.cancelPing)
|
||||
}
|
||||
sc.Close()
|
||||
}
|
||||
|
||||
//--------------------------------------------------------
|
||||
// Connection and signer management
|
||||
|
||||
// waits to accept and sets a new connection.
|
||||
// connection is closed in OnStop.
|
||||
// returns true if the listener is closed
|
||||
// (ie. it returns a nil conn).
|
||||
func (sc *SocketVal) reset() (closed bool, err error) {
|
||||
sc.mtx.Lock()
|
||||
defer sc.mtx.Unlock()
|
||||
|
||||
// first check if the conn already exists and close it.
|
||||
if sc.signer != nil {
|
||||
if err := sc.signer.Close(); err != nil {
|
||||
sc.Logger.Error("error closing socket val connection during reset", "err", err)
|
||||
}
|
||||
}
|
||||
|
||||
// wait for a new conn
|
||||
conn, err := sc.acceptConnection()
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
|
||||
// listener is closed
|
||||
if conn == nil {
|
||||
return true, nil
|
||||
}
|
||||
|
||||
sc.signer, err = NewRemoteSignerClient(conn)
|
||||
if err != nil {
|
||||
// failed to fetch the pubkey. close out the connection.
|
||||
if err := conn.Close(); err != nil {
|
||||
sc.Logger.Error("error closing connection", "err", err)
|
||||
}
|
||||
return false, err
|
||||
}
|
||||
return false, nil
|
||||
}
|
||||
|
||||
// Attempt to accept a connection.
|
||||
// Times out after the listener's acceptDeadline
|
||||
func (sc *SocketVal) acceptConnection() (net.Conn, error) {
|
||||
conn, err := sc.listener.Accept()
|
||||
if err != nil {
|
||||
if !sc.IsRunning() {
|
||||
return nil, nil // Ignore error from listener closing.
|
||||
}
|
||||
return nil, err
|
||||
}
|
||||
return conn, nil
|
||||
}
|
461
privval/client_test.go
Normal file
@@ -0,0 +1,461 @@
|
||||
package privval
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"net"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
"github.com/tendermint/tendermint/crypto/ed25519"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
|
||||
"github.com/tendermint/tendermint/types"
|
||||
)
|
||||
|
||||
var (
|
||||
testAcceptDeadline = defaultAcceptDeadlineSeconds * time.Second
|
||||
|
||||
testConnDeadline = 100 * time.Millisecond
|
||||
testConnDeadline2o3 = 66 * time.Millisecond // 2/3 of the other one
|
||||
|
||||
testHeartbeatTimeout = 10 * time.Millisecond
|
||||
testHeartbeatTimeout3o2 = 6 * time.Millisecond // 3/2 of the other one
|
||||
)
|
||||
|
||||
type socketTestCase struct {
|
||||
addr string
|
||||
dialer Dialer
|
||||
}
|
||||
|
||||
func socketTestCases(t *testing.T) []socketTestCase {
|
||||
tcpAddr := fmt.Sprintf("tcp://%s", testFreeTCPAddr(t))
|
||||
unixFilePath, err := testUnixAddr()
|
||||
require.NoError(t, err)
|
||||
unixAddr := fmt.Sprintf("unix://%s", unixFilePath)
|
||||
return []socketTestCase{
|
||||
socketTestCase{
|
||||
addr: tcpAddr,
|
||||
dialer: DialTCPFn(tcpAddr, testConnDeadline, ed25519.GenPrivKey()),
|
||||
},
|
||||
socketTestCase{
|
||||
addr: unixAddr,
|
||||
dialer: DialUnixFn(unixFilePath),
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func TestSocketPVAddress(t *testing.T) {
|
||||
for _, tc := range socketTestCases(t) {
|
||||
// Execute the test within a closure to ensure the deferred statements
|
||||
// are called between each for loop iteration, for isolated test cases.
|
||||
func() {
|
||||
var (
|
||||
chainID = cmn.RandStr(12)
|
||||
sc, rs = testSetupSocketPair(t, chainID, types.NewMockPV(), tc.addr, tc.dialer)
|
||||
)
|
||||
defer sc.Stop()
|
||||
defer rs.Stop()
|
||||
|
||||
serverAddr := rs.privVal.GetPubKey().Address()
|
||||
clientAddr := sc.GetPubKey().Address()
|
||||
|
||||
assert.Equal(t, serverAddr, clientAddr)
|
||||
}()
|
||||
}
|
||||
}
|
||||
|
||||
func TestSocketPVPubKey(t *testing.T) {
|
||||
for _, tc := range socketTestCases(t) {
|
||||
func() {
|
||||
var (
|
||||
chainID = cmn.RandStr(12)
|
||||
sc, rs = testSetupSocketPair(t, chainID, types.NewMockPV(), tc.addr, tc.dialer)
|
||||
)
|
||||
defer sc.Stop()
|
||||
defer rs.Stop()
|
||||
|
||||
clientKey := sc.GetPubKey()
|
||||
|
||||
privvalPubKey := rs.privVal.GetPubKey()
|
||||
|
||||
assert.Equal(t, privvalPubKey, clientKey)
|
||||
}()
|
||||
}
|
||||
}
|
||||
|
||||
func TestSocketPVProposal(t *testing.T) {
|
||||
for _, tc := range socketTestCases(t) {
|
||||
func() {
|
||||
var (
|
||||
chainID = cmn.RandStr(12)
|
||||
sc, rs = testSetupSocketPair(t, chainID, types.NewMockPV(), tc.addr, tc.dialer)
|
||||
|
||||
ts = time.Now()
|
||||
privProposal = &types.Proposal{Timestamp: ts}
|
||||
clientProposal = &types.Proposal{Timestamp: ts}
|
||||
)
|
||||
defer sc.Stop()
|
||||
defer rs.Stop()
|
||||
|
||||
require.NoError(t, rs.privVal.SignProposal(chainID, privProposal))
|
||||
require.NoError(t, sc.SignProposal(chainID, clientProposal))
|
||||
assert.Equal(t, privProposal.Signature, clientProposal.Signature)
|
||||
}()
|
||||
}
|
||||
}
|
||||
|
||||
func TestSocketPVVote(t *testing.T) {
|
||||
for _, tc := range socketTestCases(t) {
|
||||
func() {
|
||||
var (
|
||||
chainID = cmn.RandStr(12)
|
||||
sc, rs = testSetupSocketPair(t, chainID, types.NewMockPV(), tc.addr, tc.dialer)
|
||||
|
||||
ts = time.Now()
|
||||
vType = types.PrecommitType
|
||||
want = &types.Vote{Timestamp: ts, Type: vType}
|
||||
have = &types.Vote{Timestamp: ts, Type: vType}
|
||||
)
|
||||
defer sc.Stop()
|
||||
defer rs.Stop()
|
||||
|
||||
require.NoError(t, rs.privVal.SignVote(chainID, want))
|
||||
require.NoError(t, sc.SignVote(chainID, have))
|
||||
assert.Equal(t, want.Signature, have.Signature)
|
||||
}()
|
||||
}
|
||||
}
|
||||
|
||||
func TestSocketPVVoteResetDeadline(t *testing.T) {
|
||||
for _, tc := range socketTestCases(t) {
|
||||
func() {
|
||||
var (
|
||||
chainID = cmn.RandStr(12)
|
||||
sc, rs = testSetupSocketPair(t, chainID, types.NewMockPV(), tc.addr, tc.dialer)
|
||||
|
||||
ts = time.Now()
|
||||
vType = types.PrecommitType
|
||||
want = &types.Vote{Timestamp: ts, Type: vType}
|
||||
have = &types.Vote{Timestamp: ts, Type: vType}
|
||||
)
|
||||
defer sc.Stop()
|
||||
defer rs.Stop()
|
||||
|
||||
time.Sleep(testConnDeadline2o3)
|
||||
|
||||
require.NoError(t, rs.privVal.SignVote(chainID, want))
|
||||
require.NoError(t, sc.SignVote(chainID, have))
|
||||
assert.Equal(t, want.Signature, have.Signature)
|
||||
|
||||
// This would exceed the deadline if it was not extended by the previous message
|
||||
time.Sleep(testConnDeadline2o3)
|
||||
|
||||
require.NoError(t, rs.privVal.SignVote(chainID, want))
|
||||
require.NoError(t, sc.SignVote(chainID, have))
|
||||
assert.Equal(t, want.Signature, have.Signature)
|
||||
}()
|
||||
}
|
||||
}
|
||||
|
||||
func TestSocketPVVoteKeepalive(t *testing.T) {
|
||||
for _, tc := range socketTestCases(t) {
|
||||
func() {
|
||||
var (
|
||||
chainID = cmn.RandStr(12)
|
||||
sc, rs = testSetupSocketPair(t, chainID, types.NewMockPV(), tc.addr, tc.dialer)
|
||||
|
||||
ts = time.Now()
|
||||
vType = types.PrecommitType
|
||||
want = &types.Vote{Timestamp: ts, Type: vType}
|
||||
have = &types.Vote{Timestamp: ts, Type: vType}
|
||||
)
|
||||
defer sc.Stop()
|
||||
defer rs.Stop()
|
||||
|
||||
time.Sleep(testConnDeadline * 2)
|
||||
|
||||
require.NoError(t, rs.privVal.SignVote(chainID, want))
|
||||
require.NoError(t, sc.SignVote(chainID, have))
|
||||
assert.Equal(t, want.Signature, have.Signature)
|
||||
}()
|
||||
}
|
||||
}
|
||||
|
||||
func TestSocketPVDeadline(t *testing.T) {
|
||||
for _, tc := range socketTestCases(t) {
|
||||
func() {
|
||||
var (
|
||||
listenc = make(chan struct{})
|
||||
thisConnTimeout = 100 * time.Millisecond
|
||||
sc = newSocketVal(log.TestingLogger(), tc.addr, thisConnTimeout)
|
||||
)
|
||||
|
||||
go func(sc *SocketVal) {
|
||||
defer close(listenc)
|
||||
|
||||
// Note: the TCP connection times out at the accept() phase,
|
||||
// whereas the Unix domain sockets connection times out while
|
||||
// attempting to fetch the remote signer's public key.
|
||||
assert.True(t, IsConnTimeout(sc.Start()))
|
||||
|
||||
assert.False(t, sc.IsRunning())
|
||||
}(sc)
|
||||
|
||||
for {
|
||||
_, err := cmn.Connect(tc.addr)
|
||||
if err == nil {
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
<-listenc
|
||||
}()
|
||||
}
|
||||
}
|
||||
|
||||
func TestRemoteSignVoteErrors(t *testing.T) {
|
||||
for _, tc := range socketTestCases(t) {
|
||||
func() {
|
||||
var (
|
||||
chainID = cmn.RandStr(12)
|
||||
sc, rs = testSetupSocketPair(t, chainID, types.NewErroringMockPV(), tc.addr, tc.dialer)
|
||||
|
||||
ts = time.Now()
|
||||
vType = types.PrecommitType
|
||||
vote = &types.Vote{Timestamp: ts, Type: vType}
|
||||
)
|
||||
defer sc.Stop()
|
||||
defer rs.Stop()
|
||||
|
||||
err := sc.SignVote("", vote)
|
||||
require.Equal(t, err.(*RemoteSignerError).Description, types.ErroringMockPVErr.Error())
|
||||
|
||||
err = rs.privVal.SignVote(chainID, vote)
|
||||
require.Error(t, err)
|
||||
err = sc.SignVote(chainID, vote)
|
||||
require.Error(t, err)
|
||||
}()
|
||||
}
|
||||
}
|
||||
|
||||
func TestRemoteSignProposalErrors(t *testing.T) {
|
||||
for _, tc := range socketTestCases(t) {
|
||||
func() {
|
||||
var (
|
||||
chainID = cmn.RandStr(12)
|
||||
sc, rs = testSetupSocketPair(t, chainID, types.NewErroringMockPV(), tc.addr, tc.dialer)
|
||||
|
||||
ts = time.Now()
|
||||
proposal = &types.Proposal{Timestamp: ts}
|
||||
)
|
||||
defer sc.Stop()
|
||||
defer rs.Stop()
|
||||
|
||||
err := sc.SignProposal("", proposal)
|
||||
require.Equal(t, err.(*RemoteSignerError).Description, types.ErroringMockPVErr.Error())
|
||||
|
||||
err = rs.privVal.SignProposal(chainID, proposal)
|
||||
require.Error(t, err)
|
||||
|
||||
err = sc.SignProposal(chainID, proposal)
|
||||
require.Error(t, err)
|
||||
}()
|
||||
}
|
||||
}
|
||||
|
||||
func TestErrUnexpectedResponse(t *testing.T) {
|
||||
for _, tc := range socketTestCases(t) {
|
||||
func() {
|
||||
var (
|
||||
logger = log.TestingLogger()
|
||||
chainID = cmn.RandStr(12)
|
||||
readyc = make(chan struct{})
|
||||
errc = make(chan error, 1)
|
||||
|
||||
rs = NewRemoteSigner(
|
||||
logger,
|
||||
chainID,
|
||||
types.NewMockPV(),
|
||||
tc.dialer,
|
||||
)
|
||||
sc = newSocketVal(logger, tc.addr, testConnDeadline)
|
||||
)
|
||||
|
||||
testStartSocketPV(t, readyc, sc)
|
||||
defer sc.Stop()
|
||||
RemoteSignerConnDeadline(time.Millisecond)(rs)
|
||||
RemoteSignerConnRetries(100)(rs)
|
||||
// we do not want to Start() the remote signer here and instead use the connection to
|
||||
// reply with intentionally wrong replies below:
|
||||
rsConn, err := rs.connect()
|
||||
defer rsConn.Close()
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, rsConn)
|
||||
// send over public key to get the remote signer running:
|
||||
go testReadWriteResponse(t, &PubKeyResponse{}, rsConn)
|
||||
<-readyc
|
||||
|
||||
// Proposal:
|
||||
go func(errc chan error) {
|
||||
errc <- sc.SignProposal(chainID, &types.Proposal{})
|
||||
}(errc)
|
||||
// read request and write wrong response:
|
||||
go testReadWriteResponse(t, &SignedVoteResponse{}, rsConn)
|
||||
err = <-errc
|
||||
require.Error(t, err)
|
||||
require.Equal(t, err, ErrUnexpectedResponse)
|
||||
|
||||
// Vote:
|
||||
go func(errc chan error) {
|
||||
errc <- sc.SignVote(chainID, &types.Vote{})
|
||||
}(errc)
|
||||
// read request and write wrong response:
|
||||
go testReadWriteResponse(t, &SignedProposalResponse{}, rsConn)
|
||||
err = <-errc
|
||||
require.Error(t, err)
|
||||
require.Equal(t, err, ErrUnexpectedResponse)
|
||||
}()
|
||||
}
|
||||
}
|
||||
|
||||
func TestRetryConnToRemoteSigner(t *testing.T) {
|
||||
for _, tc := range socketTestCases(t) {
|
||||
func() {
|
||||
var (
|
||||
logger = log.TestingLogger()
|
||||
chainID = cmn.RandStr(12)
|
||||
readyc = make(chan struct{})
|
||||
|
||||
rs = NewRemoteSigner(
|
||||
logger,
|
||||
chainID,
|
||||
types.NewMockPV(),
|
||||
tc.dialer,
|
||||
)
|
||||
thisConnTimeout = testConnDeadline
|
||||
sc = newSocketVal(logger, tc.addr, thisConnTimeout)
|
||||
)
|
||||
// Ping every:
|
||||
SocketValHeartbeat(testHeartbeatTimeout)(sc)
|
||||
|
||||
RemoteSignerConnDeadline(testConnDeadline)(rs)
|
||||
RemoteSignerConnRetries(10)(rs)
|
||||
|
||||
testStartSocketPV(t, readyc, sc)
|
||||
defer sc.Stop()
|
||||
require.NoError(t, rs.Start())
|
||||
assert.True(t, rs.IsRunning())
|
||||
|
||||
<-readyc
|
||||
time.Sleep(testHeartbeatTimeout * 2)
|
||||
|
||||
rs.Stop()
|
||||
rs2 := NewRemoteSigner(
|
||||
logger,
|
||||
chainID,
|
||||
types.NewMockPV(),
|
||||
tc.dialer,
|
||||
)
|
||||
// let some pings pass
|
||||
time.Sleep(testHeartbeatTimeout3o2)
|
||||
require.NoError(t, rs2.Start())
|
||||
assert.True(t, rs2.IsRunning())
|
||||
defer rs2.Stop()
|
||||
|
||||
// give the client some time to re-establish the conn to the remote signer
|
||||
// should see something like this in the logs:
|
||||
//
|
||||
// E[10016-01-10|17:12:46.128] Ping err="remote signer timed out"
|
||||
// I[10016-01-10|17:16:42.447] Re-created connection to remote signer impl=SocketVal
|
||||
time.Sleep(testConnDeadline * 2)
|
||||
}()
|
||||
}
|
||||
}
|
||||
|
||||
func newSocketVal(logger log.Logger, addr string, connDeadline time.Duration) *SocketVal {
|
||||
proto, address := cmn.ProtocolAndAddress(addr)
|
||||
ln, err := net.Listen(proto, address)
|
||||
logger.Info("Listening at", "proto", proto, "address", address)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
var svln net.Listener
|
||||
if proto == "unix" {
|
||||
unixLn := NewUnixListener(ln)
|
||||
UnixListenerAcceptDeadline(testAcceptDeadline)(unixLn)
|
||||
UnixListenerConnDeadline(connDeadline)(unixLn)
|
||||
svln = unixLn
|
||||
} else {
|
||||
tcpLn := NewTCPListener(ln, ed25519.GenPrivKey())
|
||||
TCPListenerAcceptDeadline(testAcceptDeadline)(tcpLn)
|
||||
TCPListenerConnDeadline(connDeadline)(tcpLn)
|
||||
svln = tcpLn
|
||||
}
|
||||
return NewSocketVal(logger, svln)
|
||||
}
|
||||
|
||||
func testSetupSocketPair(
|
||||
t *testing.T,
|
||||
chainID string,
|
||||
privValidator types.PrivValidator,
|
||||
addr string,
|
||||
dialer Dialer,
|
||||
) (*SocketVal, *RemoteSigner) {
|
||||
var (
|
||||
logger = log.TestingLogger()
|
||||
privVal = privValidator
|
||||
readyc = make(chan struct{})
|
||||
rs = NewRemoteSigner(
|
||||
logger,
|
||||
chainID,
|
||||
privVal,
|
||||
dialer,
|
||||
)
|
||||
|
||||
thisConnTimeout = testConnDeadline
|
||||
sc = newSocketVal(logger, addr, thisConnTimeout)
|
||||
)
|
||||
|
||||
SocketValHeartbeat(testHeartbeatTimeout)(sc)
|
||||
RemoteSignerConnDeadline(testConnDeadline)(rs)
|
||||
RemoteSignerConnRetries(1e6)(rs)
|
||||
|
||||
testStartSocketPV(t, readyc, sc)
|
||||
|
||||
require.NoError(t, rs.Start())
|
||||
assert.True(t, rs.IsRunning())
|
||||
|
||||
<-readyc
|
||||
|
||||
return sc, rs
|
||||
}
|
||||
|
||||
func testReadWriteResponse(t *testing.T, resp RemoteSignerMsg, rsConn net.Conn) {
|
||||
_, err := readMsg(rsConn)
|
||||
require.NoError(t, err)
|
||||
|
||||
err = writeMsg(rsConn, resp)
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
func testStartSocketPV(t *testing.T, readyc chan struct{}, sc *SocketVal) {
|
||||
go func(sc *SocketVal) {
|
||||
require.NoError(t, sc.Start())
|
||||
assert.True(t, sc.IsRunning())
|
||||
|
||||
readyc <- struct{}{}
|
||||
}(sc)
|
||||
}
|
||||
|
||||
// testFreeTCPAddr claims a free port so we don't block on listener being ready.
|
||||
func testFreeTCPAddr(t *testing.T) string {
|
||||
ln, err := net.Listen("tcp", "127.0.0.1:0")
|
||||
require.NoError(t, err)
|
||||
defer ln.Close()
|
||||
|
||||
return fmt.Sprintf("127.0.0.1:%d", ln.Addr().(*net.TCPAddr).Port)
|
||||
}
|
21
privval/doc.go
Normal file
@@ -0,0 +1,21 @@
/*

Package privval provides different implementations of the types.PrivValidator.

FilePV

FilePV is the simplest implementation and developer default. It uses one file for the private key and another to store state.

SocketVal

SocketVal establishes a connection to an external process, like a Key Management Server (KMS), using a socket.
SocketVal listens for the external KMS process to dial in.
SocketVal takes a listener, which determines the type of connection
(ie. encrypted over tcp, or unencrypted over unix).

RemoteSigner

RemoteSigner runs in the external signing process: it dials into SocketVal and responds to signature requests using its underlying PrivValidator.

*/
package privval
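The package documentation above describes both ends of the socket-based setup. The following is a minimal wiring sketch, not part of the diff, using only constructors introduced in this changeset; the listen address, port, and chain ID are assumptions made for illustration.

// Sketch: validator side listens via SocketVal, signer side dials via RemoteSigner.
package main

import (
	"net"
	"time"

	"github.com/tendermint/tendermint/crypto/ed25519"
	"github.com/tendermint/tendermint/libs/log"
	"github.com/tendermint/tendermint/privval"
	"github.com/tendermint/tendermint/types"
)

func main() {
	logger := log.TestingLogger()

	// Validator side: listen and wait for the external signer to dial in.
	// The address and port are assumptions for this example.
	ln, err := net.Listen("tcp", "127.0.0.1:26659")
	if err != nil {
		panic(err)
	}
	pv := privval.NewSocketVal(logger, privval.NewTCPListener(ln, ed25519.GenPrivKey()))
	go func() {
		// Blocks until the remote signer connects (or the accept deadline expires).
		if err := pv.Start(); err != nil {
			panic(err)
		}
	}()
	defer pv.Stop()

	// Signer side (normally a separate process, e.g. a KMS): dial the validator
	// and answer signing requests with a mock private validator.
	dialer := privval.DialTCPFn("tcp://127.0.0.1:26659", 3*time.Second, ed25519.GenPrivKey())
	rs := privval.NewRemoteSigner(logger, "test-chain", types.NewMockPV(), dialer)
	if err := rs.Start(); err != nil {
		panic(err)
	}
	defer rs.Stop()

	time.Sleep(time.Second) // give the handshake a moment before exiting
}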
@@ -22,6 +22,7 @@ const (
|
||||
stepPrecommit int8 = 3
|
||||
)
|
||||
|
||||
// A vote is either stepPrevote or stepPrecommit.
|
||||
func voteToStep(vote *types.Vote) int8 {
|
||||
switch vote.Type {
|
||||
case types.PrevoteType:
|
||||
@@ -29,7 +30,7 @@ func voteToStep(vote *types.Vote) int8 {
|
||||
case types.PrecommitType:
|
||||
return stepPrecommit
|
||||
default:
|
||||
cmn.PanicSanity("Unknown vote type")
|
||||
panic("Unknown vote type")
|
||||
return 0
|
||||
}
|
||||
}
|
123
privval/ipc.go
@@ -1,123 +0,0 @@
|
||||
package privval
|
||||
|
||||
import (
|
||||
"net"
|
||||
"time"
|
||||
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
"github.com/tendermint/tendermint/types"
|
||||
)
|
||||
|
||||
// IPCValOption sets an optional parameter on the SocketPV.
|
||||
type IPCValOption func(*IPCVal)
|
||||
|
||||
// IPCValConnTimeout sets the read and write timeout for connections
|
||||
// from external signing processes.
|
||||
func IPCValConnTimeout(timeout time.Duration) IPCValOption {
|
||||
return func(sc *IPCVal) { sc.connTimeout = timeout }
|
||||
}
|
||||
|
||||
// IPCValHeartbeat sets the period on which to check the liveness of the
|
||||
// connected Signer connections.
|
||||
func IPCValHeartbeat(period time.Duration) IPCValOption {
|
||||
return func(sc *IPCVal) { sc.connHeartbeat = period }
|
||||
}
|
||||
|
||||
// IPCVal implements PrivValidator, it uses a unix socket to request signatures
|
||||
// from an external process.
|
||||
type IPCVal struct {
|
||||
cmn.BaseService
|
||||
*RemoteSignerClient
|
||||
|
||||
addr string
|
||||
|
||||
connTimeout time.Duration
|
||||
connHeartbeat time.Duration
|
||||
|
||||
conn net.Conn
|
||||
cancelPing chan struct{}
|
||||
pingTicker *time.Ticker
|
||||
}
|
||||
|
||||
// Check that IPCVal implements PrivValidator.
|
||||
var _ types.PrivValidator = (*IPCVal)(nil)
|
||||
|
||||
// NewIPCVal returns an instance of IPCVal.
|
||||
func NewIPCVal(
|
||||
logger log.Logger,
|
||||
socketAddr string,
|
||||
) *IPCVal {
|
||||
sc := &IPCVal{
|
||||
addr: socketAddr,
|
||||
connTimeout: connTimeout,
|
||||
connHeartbeat: connHeartbeat,
|
||||
}
|
||||
|
||||
sc.BaseService = *cmn.NewBaseService(logger, "IPCVal", sc)
|
||||
|
||||
return sc
|
||||
}
|
||||
|
||||
// OnStart implements cmn.Service.
|
||||
func (sc *IPCVal) OnStart() error {
|
||||
err := sc.connect()
|
||||
if err != nil {
|
||||
sc.Logger.Error("OnStart", "err", err)
|
||||
return err
|
||||
}
|
||||
|
||||
sc.RemoteSignerClient, err = NewRemoteSignerClient(sc.conn)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Start a routine to keep the connection alive
|
||||
sc.cancelPing = make(chan struct{}, 1)
|
||||
sc.pingTicker = time.NewTicker(sc.connHeartbeat)
|
||||
go func() {
|
||||
for {
|
||||
select {
|
||||
case <-sc.pingTicker.C:
|
||||
err := sc.Ping()
|
||||
if err != nil {
|
||||
sc.Logger.Error("Ping", "err", err)
|
||||
}
|
||||
case <-sc.cancelPing:
|
||||
sc.pingTicker.Stop()
|
||||
return
|
||||
}
|
||||
}
|
||||
}()
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// OnStop implements cmn.Service.
|
||||
func (sc *IPCVal) OnStop() {
|
||||
if sc.cancelPing != nil {
|
||||
close(sc.cancelPing)
|
||||
}
|
||||
|
||||
if sc.conn != nil {
|
||||
if err := sc.conn.Close(); err != nil {
|
||||
sc.Logger.Error("OnStop", "err", err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (sc *IPCVal) connect() error {
|
||||
la, err := net.ResolveUnixAddr("unix", sc.addr)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
conn, err := net.DialUnix("unix", nil, la)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
sc.conn = newTimeoutConn(conn, sc.connTimeout)
|
||||
|
||||
return nil
|
||||
}
|
@@ -1,132 +0,0 @@
|
||||
package privval
|
||||
|
||||
import (
|
||||
"io"
|
||||
"net"
|
||||
"time"
|
||||
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
"github.com/tendermint/tendermint/types"
|
||||
)
|
||||
|
||||
// IPCRemoteSignerOption sets an optional parameter on the IPCRemoteSigner.
|
||||
type IPCRemoteSignerOption func(*IPCRemoteSigner)
|
||||
|
||||
// IPCRemoteSignerConnDeadline sets the read and write deadline for connections
|
||||
// from external signing processes.
|
||||
func IPCRemoteSignerConnDeadline(deadline time.Duration) IPCRemoteSignerOption {
|
||||
return func(ss *IPCRemoteSigner) { ss.connDeadline = deadline }
|
||||
}
|
||||
|
||||
// IPCRemoteSignerConnRetries sets the amount of attempted retries to connect.
|
||||
func IPCRemoteSignerConnRetries(retries int) IPCRemoteSignerOption {
|
||||
return func(ss *IPCRemoteSigner) { ss.connRetries = retries }
|
||||
}
|
||||
|
||||
// IPCRemoteSigner is a RPC implementation of PrivValidator that listens on a unix socket.
|
||||
type IPCRemoteSigner struct {
|
||||
cmn.BaseService
|
||||
|
||||
addr string
|
||||
chainID string
|
||||
connDeadline time.Duration
|
||||
connRetries int
|
||||
privVal types.PrivValidator
|
||||
|
||||
listener *net.UnixListener
|
||||
}
|
||||
|
||||
// NewIPCRemoteSigner returns an instance of IPCRemoteSigner.
|
||||
func NewIPCRemoteSigner(
|
||||
logger log.Logger,
|
||||
chainID, socketAddr string,
|
||||
privVal types.PrivValidator,
|
||||
) *IPCRemoteSigner {
|
||||
rs := &IPCRemoteSigner{
|
||||
addr: socketAddr,
|
||||
chainID: chainID,
|
||||
connDeadline: time.Second * defaultConnDeadlineSeconds,
|
||||
connRetries: defaultDialRetries,
|
||||
privVal: privVal,
|
||||
}
|
||||
|
||||
rs.BaseService = *cmn.NewBaseService(logger, "IPCRemoteSigner", rs)
|
||||
|
||||
return rs
|
||||
}
|
||||
|
||||
// OnStart implements cmn.Service.
|
||||
func (rs *IPCRemoteSigner) OnStart() error {
|
||||
err := rs.listen()
|
||||
if err != nil {
|
||||
err = cmn.ErrorWrap(err, "listen")
|
||||
rs.Logger.Error("OnStart", "err", err)
|
||||
return err
|
||||
}
|
||||
|
||||
go func() {
|
||||
for {
|
||||
conn, err := rs.listener.AcceptUnix()
|
||||
if err != nil {
|
||||
rs.Logger.Error("AcceptUnix", "err", err)
|
||||
return
|
||||
}
|
||||
go rs.handleConnection(conn)
|
||||
}
|
||||
}()
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// OnStop implements cmn.Service.
|
||||
func (rs *IPCRemoteSigner) OnStop() {
|
||||
if rs.listener != nil {
|
||||
if err := rs.listener.Close(); err != nil {
|
||||
rs.Logger.Error("OnStop", "err", cmn.ErrorWrap(err, "closing listener failed"))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (rs *IPCRemoteSigner) listen() error {
|
||||
la, err := net.ResolveUnixAddr("unix", rs.addr)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
rs.listener, err = net.ListenUnix("unix", la)
|
||||
|
||||
return err
|
||||
}
|
||||
|
||||
func (rs *IPCRemoteSigner) handleConnection(conn net.Conn) {
|
||||
for {
|
||||
if !rs.IsRunning() {
|
||||
return // Ignore error from listener closing.
|
||||
}
|
||||
|
||||
// Reset the connection deadline
|
||||
conn.SetDeadline(time.Now().Add(rs.connDeadline))
|
||||
|
||||
req, err := readMsg(conn)
|
||||
if err != nil {
|
||||
if err != io.EOF {
|
||||
rs.Logger.Error("handleConnection", "err", err)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
res, err := handleRequest(req, rs.chainID, rs.privVal)
|
||||
|
||||
if err != nil {
|
||||
// only log the error; we'll reply with an error in res
|
||||
rs.Logger.Error("handleConnection", "err", err)
|
||||
}
|
||||
|
||||
err = writeMsg(conn, res)
|
||||
if err != nil {
|
||||
rs.Logger.Error("handleConnection", "err", err)
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
@@ -1,147 +0,0 @@
|
||||
package privval
|
||||
|
||||
import (
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
"github.com/tendermint/tendermint/types"
|
||||
)
|
||||
|
||||
func TestIPCPVVote(t *testing.T) {
|
||||
var (
|
||||
chainID = cmn.RandStr(12)
|
||||
sc, rs = testSetupIPCSocketPair(t, chainID, types.NewMockPV())
|
||||
|
||||
ts = time.Now()
|
||||
vType = types.PrecommitType
|
||||
want = &types.Vote{Timestamp: ts, Type: vType}
|
||||
have = &types.Vote{Timestamp: ts, Type: vType}
|
||||
)
|
||||
defer sc.Stop()
|
||||
defer rs.Stop()
|
||||
|
||||
require.NoError(t, rs.privVal.SignVote(chainID, want))
|
||||
require.NoError(t, sc.SignVote(chainID, have))
|
||||
assert.Equal(t, want.Signature, have.Signature)
|
||||
}
|
||||
|
||||
func TestIPCPVVoteResetDeadline(t *testing.T) {
|
||||
var (
|
||||
chainID = cmn.RandStr(12)
|
||||
sc, rs = testSetupIPCSocketPair(t, chainID, types.NewMockPV())
|
||||
|
||||
ts = time.Now()
|
||||
vType = types.PrecommitType
|
||||
want = &types.Vote{Timestamp: ts, Type: vType}
|
||||
have = &types.Vote{Timestamp: ts, Type: vType}
|
||||
)
|
||||
defer sc.Stop()
|
||||
defer rs.Stop()
|
||||
|
||||
time.Sleep(3 * time.Millisecond)
|
||||
|
||||
require.NoError(t, rs.privVal.SignVote(chainID, want))
|
||||
require.NoError(t, sc.SignVote(chainID, have))
|
||||
assert.Equal(t, want.Signature, have.Signature)
|
||||
|
||||
// This would exceed the deadline if it was not extended by the previous message
|
||||
time.Sleep(3 * time.Millisecond)
|
||||
|
||||
require.NoError(t, rs.privVal.SignVote(chainID, want))
|
||||
require.NoError(t, sc.SignVote(chainID, have))
|
||||
assert.Equal(t, want.Signature, have.Signature)
|
||||
}
|
||||
|
||||
func TestIPCPVVoteKeepalive(t *testing.T) {
|
||||
var (
|
||||
chainID = cmn.RandStr(12)
|
||||
sc, rs = testSetupIPCSocketPair(t, chainID, types.NewMockPV())
|
||||
|
||||
ts = time.Now()
|
||||
vType = types.PrecommitType
|
||||
want = &types.Vote{Timestamp: ts, Type: vType}
|
||||
have = &types.Vote{Timestamp: ts, Type: vType}
|
||||
)
|
||||
defer sc.Stop()
|
||||
defer rs.Stop()
|
||||
|
||||
time.Sleep(10 * time.Millisecond)
|
||||
|
||||
require.NoError(t, rs.privVal.SignVote(chainID, want))
|
||||
require.NoError(t, sc.SignVote(chainID, have))
|
||||
assert.Equal(t, want.Signature, have.Signature)
|
||||
}
|
||||
|
||||
func testSetupIPCSocketPair(
|
||||
t *testing.T,
|
||||
chainID string,
|
||||
privValidator types.PrivValidator,
|
||||
) (*IPCVal, *IPCRemoteSigner) {
|
||||
addr, err := testUnixAddr()
|
||||
require.NoError(t, err)
|
||||
|
||||
var (
|
||||
logger = log.TestingLogger()
|
||||
privVal = privValidator
|
||||
readyc = make(chan struct{})
|
||||
rs = NewIPCRemoteSigner(
|
||||
logger,
|
||||
chainID,
|
||||
addr,
|
||||
privVal,
|
||||
)
|
||||
sc = NewIPCVal(
|
||||
logger,
|
||||
addr,
|
||||
)
|
||||
)
|
||||
|
||||
IPCValConnTimeout(5 * time.Millisecond)(sc)
|
||||
IPCValHeartbeat(time.Millisecond)(sc)
|
||||
|
||||
IPCRemoteSignerConnDeadline(time.Millisecond * 5)(rs)
|
||||
|
||||
testStartIPCRemoteSigner(t, readyc, rs)
|
||||
|
||||
<-readyc
|
||||
|
||||
require.NoError(t, sc.Start())
|
||||
assert.True(t, sc.IsRunning())
|
||||
|
||||
return sc, rs
|
||||
}
|
||||
|
||||
func testStartIPCRemoteSigner(t *testing.T, readyc chan struct{}, rs *IPCRemoteSigner) {
|
||||
go func(rs *IPCRemoteSigner) {
|
||||
require.NoError(t, rs.Start())
|
||||
assert.True(t, rs.IsRunning())
|
||||
|
||||
readyc <- struct{}{}
|
||||
}(rs)
|
||||
}
|
||||
|
||||
func testUnixAddr() (string, error) {
|
||||
f, err := ioutil.TempFile("/tmp", "nettest")
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
addr := f.Name()
|
||||
err = f.Close()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
err = os.Remove(addr)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
return addr, nil
|
||||
}
|
@@ -4,7 +4,6 @@ import (
|
||||
"fmt"
|
||||
"io"
|
||||
"net"
|
||||
"sync"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
|
||||
@@ -14,31 +13,41 @@ import (
|
||||
"github.com/tendermint/tendermint/types"
|
||||
)
|
||||
|
||||
// RemoteSignerClient implements PrivValidator, it uses a socket to request signatures
|
||||
// Socket errors.
|
||||
var (
|
||||
ErrConnTimeout = errors.New("remote signer timed out")
|
||||
)
|
||||
|
||||
// RemoteSignerClient implements PrivValidator.
|
||||
// It uses a net.Conn to request signatures
|
||||
// from an external process.
|
||||
type RemoteSignerClient struct {
|
||||
conn net.Conn
|
||||
conn net.Conn
|
||||
|
||||
// memoized
|
||||
consensusPubKey crypto.PubKey
|
||||
mtx sync.Mutex
|
||||
}
|
||||
|
||||
// Check that RemoteSignerClient implements PrivValidator.
|
||||
var _ types.PrivValidator = (*RemoteSignerClient)(nil)
|
||||
|
||||
// NewRemoteSignerClient returns an instance of RemoteSignerClient.
|
||||
func NewRemoteSignerClient(
|
||||
conn net.Conn,
|
||||
) (*RemoteSignerClient, error) {
|
||||
sc := &RemoteSignerClient{
|
||||
conn: conn,
|
||||
}
|
||||
pubKey, err := sc.getPubKey()
|
||||
func NewRemoteSignerClient(conn net.Conn) (*RemoteSignerClient, error) {
|
||||
|
||||
// retrieve and memoize the consensus public key once.
|
||||
pubKey, err := getPubKey(conn)
|
||||
if err != nil {
|
||||
return nil, cmn.ErrorWrap(err, "error while retrieving public key for remote signer")
|
||||
}
|
||||
// retrieve and memoize the consensus public key once:
|
||||
sc.consensusPubKey = pubKey
|
||||
return sc, nil
|
||||
return &RemoteSignerClient{
|
||||
conn: conn,
|
||||
consensusPubKey: pubKey,
|
||||
}, nil
|
||||
}
|
||||
|
||||
// Close calls Close on the underlying net.Conn.
|
||||
func (sc *RemoteSignerClient) Close() error {
|
||||
return sc.conn.Close()
|
||||
}
|
||||
|
||||
// GetPubKey implements PrivValidator.
|
||||
@@ -46,16 +55,14 @@ func (sc *RemoteSignerClient) GetPubKey() crypto.PubKey {
|
||||
return sc.consensusPubKey
|
||||
}
|
||||
|
||||
func (sc *RemoteSignerClient) getPubKey() (crypto.PubKey, error) {
|
||||
sc.mtx.Lock()
|
||||
defer sc.mtx.Unlock()
|
||||
|
||||
err := writeMsg(sc.conn, &PubKeyRequest{})
|
||||
// not thread-safe (only called on startup).
|
||||
func getPubKey(conn net.Conn) (crypto.PubKey, error) {
|
||||
err := writeMsg(conn, &PubKeyRequest{})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
res, err := readMsg(sc.conn)
|
||||
res, err := readMsg(conn)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@@ -73,9 +80,6 @@ func (sc *RemoteSignerClient) getPubKey() (crypto.PubKey, error) {
|
||||
|
||||
// SignVote implements PrivValidator.
|
||||
func (sc *RemoteSignerClient) SignVote(chainID string, vote *types.Vote) error {
|
||||
sc.mtx.Lock()
|
||||
defer sc.mtx.Unlock()
|
||||
|
||||
err := writeMsg(sc.conn, &SignVoteRequest{Vote: vote})
|
||||
if err != nil {
|
||||
return err
|
||||
@@ -103,9 +107,6 @@ func (sc *RemoteSignerClient) SignProposal(
|
||||
chainID string,
|
||||
proposal *types.Proposal,
|
||||
) error {
|
||||
sc.mtx.Lock()
|
||||
defer sc.mtx.Unlock()
|
||||
|
||||
err := writeMsg(sc.conn, &SignProposalRequest{Proposal: proposal})
|
||||
if err != nil {
|
||||
return err
|
||||
@@ -129,9 +130,6 @@ func (sc *RemoteSignerClient) SignProposal(
|
||||
|
||||
// Ping is used to check connection health.
|
||||
func (sc *RemoteSignerClient) Ping() error {
|
||||
sc.mtx.Lock()
|
||||
defer sc.mtx.Unlock()
|
||||
|
||||
err := writeMsg(sc.conn, &PingRequest{})
|
||||
if err != nil {
|
||||
return err
|
||||
@@ -260,3 +258,18 @@ func handleRequest(req RemoteSignerMsg, chainID string, privVal types.PrivValida
|
||||
|
||||
return res, err
|
||||
}
|
||||
|
||||
// IsConnTimeout returns a boolean indicating whether the error is known to
|
||||
// report that a connection timeout occurred. This detects both fundamental
|
||||
// network timeouts, as well as ErrConnTimeout errors.
|
||||
func IsConnTimeout(err error) bool {
|
||||
if cmnErr, ok := err.(cmn.Error); ok {
|
||||
if cmnErr.Data() == ErrConnTimeout {
|
||||
return true
|
||||
}
|
||||
}
|
||||
if _, ok := err.(timeoutError); ok {
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
90
privval/remote_signer_test.go
Normal file
@@ -0,0 +1,90 @@
|
||||
package privval
|
||||
|
||||
import (
|
||||
"net"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
"github.com/tendermint/tendermint/crypto/ed25519"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
"github.com/tendermint/tendermint/types"
|
||||
)
|
||||
|
||||
// TestRemoteSignerRetryTCPOnly will test connection retry attempts over TCP. We
|
||||
// don't need this for Unix sockets because the OS instantly knows the state of
|
||||
// both ends of the socket connection. This basically causes the
|
||||
// RemoteSigner.dialer() call inside RemoteSigner.connect() to return
|
||||
// successfully immediately, putting an instant stop to any retry attempts.
|
||||
func TestRemoteSignerRetryTCPOnly(t *testing.T) {
|
||||
var (
|
||||
attemptc = make(chan int)
|
||||
retries = 2
|
||||
)
|
||||
|
||||
ln, err := net.Listen("tcp", "127.0.0.1:0")
|
||||
require.NoError(t, err)
|
||||
|
||||
go func(ln net.Listener, attemptc chan<- int) {
|
||||
attempts := 0
|
||||
|
||||
for {
|
||||
conn, err := ln.Accept()
|
||||
require.NoError(t, err)
|
||||
|
||||
err = conn.Close()
|
||||
require.NoError(t, err)
|
||||
|
||||
attempts++
|
||||
|
||||
if attempts == retries {
|
||||
attemptc <- attempts
|
||||
break
|
||||
}
|
||||
}
|
||||
}(ln, attemptc)
|
||||
|
||||
rs := NewRemoteSigner(
|
||||
log.TestingLogger(),
|
||||
cmn.RandStr(12),
|
||||
types.NewMockPV(),
|
||||
DialTCPFn(ln.Addr().String(), testConnDeadline, ed25519.GenPrivKey()),
|
||||
)
|
||||
defer rs.Stop()
|
||||
|
||||
RemoteSignerConnDeadline(time.Millisecond)(rs)
|
||||
RemoteSignerConnRetries(retries)(rs)
|
||||
|
||||
assert.Equal(t, rs.Start(), ErrDialRetryMax)
|
||||
|
||||
select {
|
||||
case attempts := <-attemptc:
|
||||
assert.Equal(t, retries, attempts)
|
||||
case <-time.After(100 * time.Millisecond):
|
||||
t.Error("expected remote to observe connection attempts")
|
||||
}
|
||||
}
|
||||
|
||||
func TestIsConnTimeoutForFundamentalTimeouts(t *testing.T) {
|
||||
// Generate a networking timeout
|
||||
dialer := DialTCPFn(testFreeTCPAddr(t), time.Millisecond, ed25519.GenPrivKey())
|
||||
_, err := dialer()
|
||||
assert.Error(t, err)
|
||||
assert.True(t, IsConnTimeout(err))
|
||||
}
|
||||
|
||||
func TestIsConnTimeoutForWrappedConnTimeouts(t *testing.T) {
|
||||
dialer := DialTCPFn(testFreeTCPAddr(t), time.Millisecond, ed25519.GenPrivKey())
|
||||
_, err := dialer()
|
||||
assert.Error(t, err)
|
||||
err = cmn.ErrorWrap(ErrConnTimeout, err.Error())
|
||||
assert.True(t, IsConnTimeout(err))
|
||||
}
|
||||
|
||||
func TestIsConnTimeoutForNonTimeoutErrors(t *testing.T) {
|
||||
assert.False(t, IsConnTimeout(cmn.ErrorWrap(ErrDialRetryMax, "max retries exceeded")))
|
||||
assert.False(t, IsConnTimeout(errors.New("completely irrelevant error")))
|
||||
}
|
@@ -5,6 +5,7 @@ import (
|
||||
"net"
|
||||
"time"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
"github.com/tendermint/tendermint/crypto/ed25519"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
@@ -12,6 +13,11 @@ import (
|
||||
"github.com/tendermint/tendermint/types"
|
||||
)
|
||||
|
||||
// Socket errors.
|
||||
var (
|
||||
ErrDialRetryMax = errors.New("dialed maximum retries")
|
||||
)
|
||||
|
||||
// RemoteSignerOption sets an optional parameter on the RemoteSigner.
|
||||
type RemoteSignerOption func(*RemoteSigner)
|
||||
|
||||
@@ -26,38 +32,64 @@ func RemoteSignerConnRetries(retries int) RemoteSignerOption {
|
||||
return func(ss *RemoteSigner) { ss.connRetries = retries }
|
||||
}
|
||||
|
||||
// RemoteSigner implements PrivValidator by dialing to a socket.
|
||||
// RemoteSigner dials using its dialer and responds to any
|
||||
// signature requests using its privVal.
|
||||
type RemoteSigner struct {
|
||||
cmn.BaseService
|
||||
|
||||
addr string
|
||||
chainID string
|
||||
connDeadline time.Duration
|
||||
connRetries int
|
||||
privKey ed25519.PrivKeyEd25519
|
||||
privVal types.PrivValidator
|
||||
|
||||
conn net.Conn
|
||||
dialer Dialer
|
||||
conn net.Conn
|
||||
}
|
||||
|
||||
// NewRemoteSigner returns an instance of RemoteSigner.
|
||||
// Dialer dials a remote address and returns a net.Conn or an error.
|
||||
type Dialer func() (net.Conn, error)
|
||||
|
||||
// DialTCPFn dials the given tcp addr, using the given connTimeout and privKey for the
|
||||
// authenticated encryption handshake.
|
||||
func DialTCPFn(addr string, connTimeout time.Duration, privKey ed25519.PrivKeyEd25519) Dialer {
|
||||
return func() (net.Conn, error) {
|
||||
conn, err := cmn.Connect(addr)
|
||||
if err == nil {
|
||||
err = conn.SetDeadline(time.Now().Add(connTimeout))
|
||||
}
|
||||
if err == nil {
|
||||
conn, err = p2pconn.MakeSecretConnection(conn, privKey)
|
||||
}
|
||||
return conn, err
|
||||
}
|
||||
}
|
||||
|
||||
// DialUnixFn dials the given unix socket.
|
||||
func DialUnixFn(addr string) Dialer {
|
||||
return func() (net.Conn, error) {
|
||||
unixAddr := &net.UnixAddr{addr, "unix"}
|
||||
return net.DialUnix("unix", nil, unixAddr)
|
||||
}
|
||||
}
|
||||
|
||||
// NewRemoteSigner return a RemoteSigner that will dial using the given
|
||||
// dialer and respond to any signature requests over the connection
|
||||
// using the given privVal.
|
||||
func NewRemoteSigner(
|
||||
logger log.Logger,
|
||||
chainID, socketAddr string,
|
||||
chainID string,
|
||||
privVal types.PrivValidator,
|
||||
privKey ed25519.PrivKeyEd25519,
|
||||
dialer Dialer,
|
||||
) *RemoteSigner {
|
||||
rs := &RemoteSigner{
|
||||
addr: socketAddr,
|
||||
chainID: chainID,
|
||||
connDeadline: time.Second * defaultConnDeadlineSeconds,
|
||||
connRetries: defaultDialRetries,
|
||||
privKey: privKey,
|
||||
privVal: privVal,
|
||||
dialer: dialer,
|
||||
}
|
||||
|
||||
rs.BaseService = *cmn.NewBaseService(logger, "RemoteSigner", rs)
|
||||
|
||||
return rs
|
||||
}
|
||||
|
||||
@@ -68,6 +100,7 @@ func (rs *RemoteSigner) OnStart() error {
|
||||
rs.Logger.Error("OnStart", "err", err)
|
||||
return err
|
||||
}
|
||||
rs.conn = conn
|
||||
|
||||
go rs.handleConnection(conn)
|
||||
|
||||
@@ -91,36 +124,11 @@ func (rs *RemoteSigner) connect() (net.Conn, error) {
|
||||
if retries != rs.connRetries {
|
||||
time.Sleep(rs.connDeadline)
|
||||
}
|
||||
|
||||
conn, err := cmn.Connect(rs.addr)
|
||||
conn, err := rs.dialer()
|
||||
if err != nil {
|
||||
rs.Logger.Error(
|
||||
"connect",
|
||||
"addr", rs.addr,
|
||||
"err", err,
|
||||
)
|
||||
|
||||
rs.Logger.Error("dialing", "err", err)
|
||||
continue
|
||||
}
|
||||
|
||||
if err := conn.SetDeadline(time.Now().Add(connTimeout)); err != nil {
|
||||
rs.Logger.Error(
|
||||
"connect",
|
||||
"err", err,
|
||||
)
|
||||
continue
|
||||
}
|
||||
|
||||
conn, err = p2pconn.MakeSecretConnection(conn, rs.privKey)
|
||||
if err != nil {
|
||||
rs.Logger.Error(
|
||||
"connect",
|
||||
"err", err,
|
||||
)
|
||||
|
||||
continue
|
||||
}
|
||||
|
||||
return conn, nil
|
||||
}
|
||||
|
||||
@@ -139,7 +147,7 @@ func (rs *RemoteSigner) handleConnection(conn net.Conn) {
|
||||
req, err := readMsg(conn)
|
||||
if err != nil {
|
||||
if err != io.EOF {
|
||||
rs.Logger.Error("handleConnection", "err", err)
|
||||
rs.Logger.Error("handleConnection readMsg", "err", err)
|
||||
}
|
||||
return
|
||||
}
|
||||
@@ -148,12 +156,12 @@ func (rs *RemoteSigner) handleConnection(conn net.Conn) {
|
||||
|
||||
if err != nil {
|
||||
// only log the error; we'll reply with an error in res
|
||||
rs.Logger.Error("handleConnection", "err", err)
|
||||
rs.Logger.Error("handleConnection handleRequest", "err", err)
|
||||
}
|
||||
|
||||
err = writeMsg(conn, res)
|
||||
if err != nil {
|
||||
rs.Logger.Error("handleConnection", "err", err)
|
||||
rs.Logger.Error("handleConnection writeMsg", "err", err)
|
||||
return
|
||||
}
|
||||
}
|
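Because RemoteSigner now takes a Dialer rather than a raw address, transports other than TCP and unix sockets can be plugged in. The snippet below is a rough illustration, not part of the diff, and the helper name is hypothetical; it builds a Dialer backed by an in-memory pipe, which can be handy for exercising RemoteSigner in tests.

// DialPipeFn is a hypothetical helper (not in the patch). It returns a Dialer
// backed by net.Pipe together with the other end of the pipe, so a
// RemoteSigner can be driven entirely in-memory.
func DialPipeFn() (Dialer, net.Conn) {
	clientConn, signerConn := net.Pipe()
	dialer := func() (net.Conn, error) { return signerConn, nil }
	return dialer, clientConn
}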
184
privval/socket.go
Normal file
@@ -0,0 +1,184 @@
|
||||
package privval
|
||||
|
||||
import (
|
||||
"net"
|
||||
"time"
|
||||
|
||||
"github.com/tendermint/tendermint/crypto/ed25519"
|
||||
p2pconn "github.com/tendermint/tendermint/p2p/conn"
|
||||
)
|
||||
|
||||
const (
|
||||
defaultAcceptDeadlineSeconds = 3
|
||||
defaultConnDeadlineSeconds = 3
|
||||
)
|
||||
|
||||
// timeoutError can be used to check if an error returned from the net package
|
||||
// was due to a timeout.
|
||||
type timeoutError interface {
|
||||
Timeout() bool
|
||||
}
|
||||
|
||||
//------------------------------------------------------------------
|
||||
// TCP Listener
|
||||
|
||||
// TCPListenerOption sets an optional parameter on the tcpListener.
|
||||
type TCPListenerOption func(*tcpListener)
|
||||
|
||||
// TCPListenerAcceptDeadline sets the deadline for the listener.
|
||||
// A zero time value disables the deadline.
|
||||
func TCPListenerAcceptDeadline(deadline time.Duration) TCPListenerOption {
|
||||
return func(tl *tcpListener) { tl.acceptDeadline = deadline }
|
||||
}
|
||||
|
||||
// TCPListenerConnDeadline sets the read and write deadline for connections
|
||||
// from external signing processes.
|
||||
func TCPListenerConnDeadline(deadline time.Duration) TCPListenerOption {
|
||||
return func(tl *tcpListener) { tl.connDeadline = deadline }
|
||||
}
|
||||
|
||||
// tcpListener implements net.Listener.
|
||||
var _ net.Listener = (*tcpListener)(nil)
|
||||
|
||||
// tcpListener wraps a *net.TCPListener to standardise protocol timeouts
|
||||
// and potentially other tuning parameters. It also returns encrypted connections.
|
||||
type tcpListener struct {
|
||||
*net.TCPListener
|
||||
|
||||
secretConnKey ed25519.PrivKeyEd25519
|
||||
|
||||
acceptDeadline time.Duration
|
||||
connDeadline time.Duration
|
||||
}
|
||||
|
||||
// NewTCPListener returns a listener that accepts authenticated encrypted connections
|
||||
// using the given secretConnKey and the default timeout values.
|
||||
func NewTCPListener(ln net.Listener, secretConnKey ed25519.PrivKeyEd25519) *tcpListener {
|
||||
return &tcpListener{
|
||||
TCPListener: ln.(*net.TCPListener),
|
||||
secretConnKey: secretConnKey,
|
||||
acceptDeadline: time.Second * defaultAcceptDeadlineSeconds,
|
||||
connDeadline: time.Second * defaultConnDeadlineSeconds,
|
||||
}
|
||||
}
|
||||
|
||||
// Accept implements net.Listener.
|
||||
func (ln *tcpListener) Accept() (net.Conn, error) {
|
||||
err := ln.SetDeadline(time.Now().Add(ln.acceptDeadline))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
tc, err := ln.AcceptTCP()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Wrap the conn in our timeout and encryption wrappers
|
||||
timeoutConn := newTimeoutConn(tc, ln.connDeadline)
|
||||
secretConn, err := p2pconn.MakeSecretConnection(timeoutConn, ln.secretConnKey)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return secretConn, nil
|
||||
}
|
||||
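A hedged construction sketch, mirroring the test helpers added later in this changeset; the address and durations are placeholders:

    ln, err := net.Listen("tcp", "127.0.0.1:0")
    if err != nil {
        panic(err)
    }
    tcpLn := NewTCPListener(ln, ed25519.GenPrivKey())
    TCPListenerAcceptDeadline(3 * time.Second)(tcpLn)
    TCPListenerConnDeadline(3 * time.Second)(tcpLn)
    // Accept blocks until a signer dials in or the accept deadline fires; the
    // returned conn is already encrypted and deadline-wrapped.
    conn, err := tcpLn.Accept()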
|
||||
//------------------------------------------------------------------
|
||||
// Unix Listener
|
||||
|
||||
// unixListener implements net.Listener.
|
||||
var _ net.Listener = (*unixListener)(nil)
|
||||
|
||||
type UnixListenerOption func(*unixListener)
|
||||
|
||||
// UnixListenerAcceptDeadline sets the deadline for the listener.
|
||||
// A zero time value disables the deadline.
|
||||
func UnixListenerAcceptDeadline(deadline time.Duration) UnixListenerOption {
|
||||
return func(ul *unixListener) { ul.acceptDeadline = deadline }
|
||||
}
|
||||
|
||||
// UnixListenerConnDeadline sets the read and write deadline for connections
|
||||
// from external signing processes.
|
||||
func UnixListenerConnDeadline(deadline time.Duration) UnixListenerOption {
|
||||
return func(ul *unixListener) { ul.connDeadline = deadline }
|
||||
}
|
||||
|
||||
// unixListener wraps a *net.UnixListener to standardise protocol timeouts
|
||||
// and potentially other tuning parameters. It returns unencrypted connections.
|
||||
type unixListener struct {
|
||||
*net.UnixListener
|
||||
|
||||
acceptDeadline time.Duration
|
||||
connDeadline time.Duration
|
||||
}
|
||||
|
||||
// NewUnixListener returns a listener that accepts unencrypted connections
|
||||
// using the default timeout values.
|
||||
func NewUnixListener(ln net.Listener) *unixListener {
|
||||
return &unixListener{
|
||||
UnixListener: ln.(*net.UnixListener),
|
||||
acceptDeadline: time.Second * defaultAcceptDeadlineSeconds,
|
||||
connDeadline: time.Second * defaultConnDeadlineSeconds,
|
||||
}
|
||||
}
|
||||
|
||||
// Accept implements net.Listener.
|
||||
func (ln *unixListener) Accept() (net.Conn, error) {
|
||||
err := ln.SetDeadline(time.Now().Add(ln.acceptDeadline))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
tc, err := ln.AcceptUnix()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Wrap the conn in our timeout wrapper
|
||||
conn := newTimeoutConn(tc, ln.connDeadline)
|
||||
|
||||
// TODO: wrap in something that authenticates
|
||||
// with a MAC - https://github.com/tendermint/tendermint/issues/3099
|
||||
|
||||
return conn, nil
|
||||
}
|
||||
|
||||
//------------------------------------------------------------------
|
||||
// Connection
|
||||
|
||||
// timeoutConn implements net.Conn.
|
||||
var _ net.Conn = (*timeoutConn)(nil)
|
||||
|
||||
// timeoutConn wraps a net.Conn to standardise protocol timeouts / deadline resets.
|
||||
type timeoutConn struct {
|
||||
net.Conn
|
||||
|
||||
connDeadline time.Duration
|
||||
}
|
||||
|
||||
// newTimeoutConn returns an instance of timeoutConn.
|
||||
func newTimeoutConn(
|
||||
conn net.Conn,
|
||||
connDeadline time.Duration) *timeoutConn {
|
||||
return &timeoutConn{
|
||||
conn,
|
||||
connDeadline,
|
||||
}
|
||||
}
|
||||
|
||||
// Read implements net.Conn.
|
||||
func (c timeoutConn) Read(b []byte) (n int, err error) {
|
||||
// Reset deadline
|
||||
c.Conn.SetReadDeadline(time.Now().Add(c.connDeadline))
|
||||
|
||||
return c.Conn.Read(b)
|
||||
}
|
||||
|
||||
// Write implements net.Conn.
|
||||
func (c timeoutConn) Write(b []byte) (n int, err error) {
|
||||
// Reset deadline
|
||||
c.Conn.SetWriteDeadline(time.Now().Add(c.connDeadline))
|
||||
|
||||
return c.Conn.Write(b)
|
||||
}
|
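Worth noting about the wrapper above (an editorial aside, not diff content): because every Read and Write pushes the deadline forward, a connection stays healthy for as long as traffic keeps arriving within connDeadline of the previous operation, while a silent peer eventually trips a timeout. A rough sketch, with conn and the duration as assumptions:

    tc := newTimeoutConn(conn, 3*time.Second)
    buf := make([]byte, 1024)
    n, err := tc.Read(buf) // the read deadline is reset to now+3s immediately before this call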
133
privval/socket_test.go
Normal file
@@ -0,0 +1,133 @@
|
||||
package privval
|
||||
|
||||
import (
|
||||
"io/ioutil"
|
||||
"net"
|
||||
"os"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/tendermint/tendermint/crypto/ed25519"
|
||||
)
|
||||
|
||||
//-------------------------------------------
|
||||
// helper funcs
|
||||
|
||||
func newPrivKey() ed25519.PrivKeyEd25519 {
|
||||
return ed25519.GenPrivKey()
|
||||
}
|
||||
|
||||
//-------------------------------------------
|
||||
// tests
|
||||
|
||||
type listenerTestCase struct {
|
||||
description string // For test reporting purposes.
|
||||
listener net.Listener
|
||||
dialer Dialer
|
||||
}
|
||||
|
||||
// testUnixAddr will attempt to obtain a platform-independent temporary file
|
||||
// name for a Unix socket
|
||||
func testUnixAddr() (string, error) {
|
||||
f, err := ioutil.TempFile("", "tendermint-privval-test-*")
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
addr := f.Name()
|
||||
f.Close()
|
||||
os.Remove(addr)
|
||||
return addr, nil
|
||||
}
|
||||
|
||||
func tcpListenerTestCase(t *testing.T, acceptDeadline, connectDeadline time.Duration) listenerTestCase {
|
||||
ln, err := net.Listen("tcp", "127.0.0.1:0")
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
tcpLn := NewTCPListener(ln, newPrivKey())
|
||||
TCPListenerAcceptDeadline(acceptDeadline)(tcpLn)
|
||||
TCPListenerConnDeadline(connectDeadline)(tcpLn)
|
||||
return listenerTestCase{
|
||||
description: "TCP",
|
||||
listener: tcpLn,
|
||||
dialer: DialTCPFn(ln.Addr().String(), testConnDeadline, newPrivKey()),
|
||||
}
|
||||
}
|
||||
|
||||
func unixListenerTestCase(t *testing.T, acceptDeadline, connectDeadline time.Duration) listenerTestCase {
|
||||
addr, err := testUnixAddr()
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
ln, err := net.Listen("unix", addr)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
unixLn := NewUnixListener(ln)
|
||||
UnixListenerAcceptDeadline(acceptDeadline)(unixLn)
|
||||
UnixListenerConnDeadline(connectDeadline)(unixLn)
|
||||
return listenerTestCase{
|
||||
description: "Unix",
|
||||
listener: unixLn,
|
||||
dialer: DialUnixFn(addr),
|
||||
}
|
||||
}
|
||||
|
||||
func listenerTestCases(t *testing.T, acceptDeadline, connectDeadline time.Duration) []listenerTestCase {
|
||||
return []listenerTestCase{
|
||||
tcpListenerTestCase(t, acceptDeadline, connectDeadline),
|
||||
unixListenerTestCase(t, acceptDeadline, connectDeadline),
|
||||
}
|
||||
}
|
||||
|
||||
func TestListenerAcceptDeadlines(t *testing.T) {
|
||||
for _, tc := range listenerTestCases(t, time.Millisecond, time.Second) {
|
||||
_, err := tc.listener.Accept()
|
||||
opErr, ok := err.(*net.OpError)
|
||||
if !ok {
|
||||
t.Fatalf("for %s listener, have %v, want *net.OpError", tc.description, err)
|
||||
}
|
||||
|
||||
if have, want := opErr.Op, "accept"; have != want {
|
||||
t.Errorf("for %s listener, have %v, want %v", tc.description, have, want)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestListenerConnectDeadlines(t *testing.T) {
|
||||
for _, tc := range listenerTestCases(t, time.Second, time.Millisecond) {
|
||||
readyc := make(chan struct{})
|
||||
donec := make(chan struct{})
|
||||
go func(ln net.Listener) {
|
||||
defer close(donec)
|
||||
|
||||
c, err := ln.Accept()
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
<-readyc
|
||||
|
||||
time.Sleep(2 * time.Millisecond)
|
||||
|
||||
msg := make([]byte, 200)
|
||||
_, err = c.Read(msg)
|
||||
opErr, ok := err.(*net.OpError)
|
||||
if !ok {
|
||||
t.Fatalf("for %s listener, have %v, want *net.OpError", tc.description, err)
|
||||
}
|
||||
|
||||
if have, want := opErr.Op, "read"; have != want {
|
||||
t.Errorf("for %s listener, have %v, want %v", tc.description, have, want)
|
||||
}
|
||||
}(tc.listener)
|
||||
|
||||
_, err := tc.dialer()
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
close(readyc)
|
||||
<-donec
|
||||
}
|
||||
}
|
216
privval/tcp.go
@@ -1,216 +0,0 @@
|
||||
package privval
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"net"
|
||||
"time"
|
||||
|
||||
"github.com/tendermint/tendermint/crypto/ed25519"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
p2pconn "github.com/tendermint/tendermint/p2p/conn"
|
||||
"github.com/tendermint/tendermint/types"
|
||||
)
|
||||
|
||||
const (
|
||||
defaultAcceptDeadlineSeconds = 3
|
||||
defaultConnDeadlineSeconds = 3
|
||||
defaultConnHeartBeatSeconds = 2
|
||||
defaultDialRetries = 10
|
||||
)
|
||||
|
||||
// Socket errors.
|
||||
var (
|
||||
ErrDialRetryMax = errors.New("dialed maximum retries")
|
||||
ErrConnTimeout = errors.New("remote signer timed out")
|
||||
ErrUnexpectedResponse = errors.New("received unexpected response")
|
||||
)
|
||||
|
||||
var (
|
||||
acceptDeadline = time.Second * defaultAcceptDeadlineSeconds
|
||||
connTimeout = time.Second * defaultConnDeadlineSeconds
|
||||
connHeartbeat = time.Second * defaultConnHeartBeatSeconds
|
||||
)
|
||||
|
||||
// TCPValOption sets an optional parameter on the SocketPV.
|
||||
type TCPValOption func(*TCPVal)
|
||||
|
||||
// TCPValAcceptDeadline sets the deadline for the TCPVal listener.
|
||||
// A zero time value disables the deadline.
|
||||
func TCPValAcceptDeadline(deadline time.Duration) TCPValOption {
|
||||
return func(sc *TCPVal) { sc.acceptDeadline = deadline }
|
||||
}
|
||||
|
||||
// TCPValConnTimeout sets the read and write timeout for connections
|
||||
// from external signing processes.
|
||||
func TCPValConnTimeout(timeout time.Duration) TCPValOption {
|
||||
return func(sc *TCPVal) { sc.connTimeout = timeout }
|
||||
}
|
||||
|
||||
// TCPValHeartbeat sets the period on which to check the liveness of the
|
||||
// connected Signer connections.
|
||||
func TCPValHeartbeat(period time.Duration) TCPValOption {
|
||||
return func(sc *TCPVal) { sc.connHeartbeat = period }
|
||||
}
|
||||
|
||||
// TCPVal implements PrivValidator, it uses a socket to request signatures
|
||||
// from an external process.
|
||||
type TCPVal struct {
|
||||
cmn.BaseService
|
||||
*RemoteSignerClient
|
||||
|
||||
addr string
|
||||
acceptDeadline time.Duration
|
||||
connTimeout time.Duration
|
||||
connHeartbeat time.Duration
|
||||
privKey ed25519.PrivKeyEd25519
|
||||
|
||||
conn net.Conn
|
||||
listener net.Listener
|
||||
cancelPing chan struct{}
|
||||
pingTicker *time.Ticker
|
||||
}
|
||||
|
||||
// Check that TCPVal implements PrivValidator.
|
||||
var _ types.PrivValidator = (*TCPVal)(nil)
|
||||
|
||||
// NewTCPVal returns an instance of TCPVal.
|
||||
func NewTCPVal(
|
||||
logger log.Logger,
|
||||
socketAddr string,
|
||||
privKey ed25519.PrivKeyEd25519,
|
||||
) *TCPVal {
|
||||
sc := &TCPVal{
|
||||
addr: socketAddr,
|
||||
acceptDeadline: acceptDeadline,
|
||||
connTimeout: connTimeout,
|
||||
connHeartbeat: connHeartbeat,
|
||||
privKey: privKey,
|
||||
}
|
||||
|
||||
sc.BaseService = *cmn.NewBaseService(logger, "TCPVal", sc)
|
||||
|
||||
return sc
|
||||
}
|
||||
|
||||
// OnStart implements cmn.Service.
|
||||
func (sc *TCPVal) OnStart() error {
|
||||
if err := sc.listen(); err != nil {
|
||||
sc.Logger.Error("OnStart", "err", err)
|
||||
return err
|
||||
}
|
||||
|
||||
conn, err := sc.waitConnection()
|
||||
if err != nil {
|
||||
sc.Logger.Error("OnStart", "err", err)
|
||||
return err
|
||||
}
|
||||
|
||||
sc.conn = conn
|
||||
sc.RemoteSignerClient, err = NewRemoteSignerClient(sc.conn)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Start a routine to keep the connection alive
|
||||
sc.cancelPing = make(chan struct{}, 1)
|
||||
sc.pingTicker = time.NewTicker(sc.connHeartbeat)
|
||||
go func() {
|
||||
for {
|
||||
select {
|
||||
case <-sc.pingTicker.C:
|
||||
err := sc.Ping()
|
||||
if err != nil {
|
||||
sc.Logger.Error(
|
||||
"Ping",
|
||||
"err", err,
|
||||
)
|
||||
}
|
||||
case <-sc.cancelPing:
|
||||
sc.pingTicker.Stop()
|
||||
return
|
||||
}
|
||||
}
|
||||
}()
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// OnStop implements cmn.Service.
|
||||
func (sc *TCPVal) OnStop() {
|
||||
if sc.cancelPing != nil {
|
||||
close(sc.cancelPing)
|
||||
}
|
||||
|
||||
if sc.conn != nil {
|
||||
if err := sc.conn.Close(); err != nil {
|
||||
sc.Logger.Error("OnStop", "err", err)
|
||||
}
|
||||
}
|
||||
|
||||
if sc.listener != nil {
|
||||
if err := sc.listener.Close(); err != nil {
|
||||
sc.Logger.Error("OnStop", "err", err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (sc *TCPVal) acceptConnection() (net.Conn, error) {
|
||||
conn, err := sc.listener.Accept()
|
||||
if err != nil {
|
||||
if !sc.IsRunning() {
|
||||
return nil, nil // Ignore error from listener closing.
|
||||
}
|
||||
return nil, err
|
||||
|
||||
}
|
||||
|
||||
conn, err = p2pconn.MakeSecretConnection(conn, sc.privKey)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return conn, nil
|
||||
}
|
||||
|
||||
func (sc *TCPVal) listen() error {
|
||||
ln, err := net.Listen(cmn.ProtocolAndAddress(sc.addr))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
sc.listener = newTCPTimeoutListener(
|
||||
ln,
|
||||
sc.acceptDeadline,
|
||||
sc.connTimeout,
|
||||
sc.connHeartbeat,
|
||||
)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// waitConnection uses the configured wait timeout to error if no external
|
||||
// process connects in the time period.
|
||||
func (sc *TCPVal) waitConnection() (net.Conn, error) {
|
||||
var (
|
||||
connc = make(chan net.Conn, 1)
|
||||
errc = make(chan error, 1)
|
||||
)
|
||||
|
||||
go func(connc chan<- net.Conn, errc chan<- error) {
|
||||
conn, err := sc.acceptConnection()
|
||||
if err != nil {
|
||||
errc <- err
|
||||
return
|
||||
}
|
||||
|
||||
connc <- conn
|
||||
}(connc, errc)
|
||||
|
||||
select {
|
||||
case conn := <-connc:
|
||||
return conn, nil
|
||||
case err := <-errc:
|
||||
return nil, err
|
||||
}
|
||||
}
|
@@ -1,90 +0,0 @@
|
||||
package privval
|
||||
|
||||
import (
|
||||
"net"
|
||||
"time"
|
||||
)
|
||||
|
||||
// timeoutError can be used to check if an error returned from the netp package
|
||||
// was due to a timeout.
|
||||
type timeoutError interface {
|
||||
Timeout() bool
|
||||
}
|
||||
|
||||
// tcpTimeoutListener implements net.Listener.
|
||||
var _ net.Listener = (*tcpTimeoutListener)(nil)
|
||||
|
||||
// tcpTimeoutListener wraps a *net.TCPListener to standardise protocol timeouts
|
||||
// and potentially other tuning parameters.
|
||||
type tcpTimeoutListener struct {
|
||||
*net.TCPListener
|
||||
|
||||
acceptDeadline time.Duration
|
||||
connDeadline time.Duration
|
||||
period time.Duration
|
||||
}
|
||||
|
||||
// timeoutConn wraps a net.Conn to standardise protocol timeouts / deadline resets.
|
||||
type timeoutConn struct {
|
||||
net.Conn
|
||||
|
||||
connDeadline time.Duration
|
||||
}
|
||||
|
||||
// newTCPTimeoutListener returns an instance of tcpTimeoutListener.
|
||||
func newTCPTimeoutListener(
|
||||
ln net.Listener,
|
||||
acceptDeadline, connDeadline time.Duration,
|
||||
period time.Duration,
|
||||
) tcpTimeoutListener {
|
||||
return tcpTimeoutListener{
|
||||
TCPListener: ln.(*net.TCPListener),
|
||||
acceptDeadline: acceptDeadline,
|
||||
connDeadline: connDeadline,
|
||||
period: period,
|
||||
}
|
||||
}
|
||||
|
||||
// newTimeoutConn returns an instance of newTCPTimeoutConn.
|
||||
func newTimeoutConn(
|
||||
conn net.Conn,
|
||||
connDeadline time.Duration) *timeoutConn {
|
||||
return &timeoutConn{
|
||||
conn,
|
||||
connDeadline,
|
||||
}
|
||||
}
|
||||
|
||||
// Accept implements net.Listener.
|
||||
func (ln tcpTimeoutListener) Accept() (net.Conn, error) {
|
||||
err := ln.SetDeadline(time.Now().Add(ln.acceptDeadline))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
tc, err := ln.AcceptTCP()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Wrap the conn in our timeout wrapper
|
||||
conn := newTimeoutConn(tc, ln.connDeadline)
|
||||
|
||||
return conn, nil
|
||||
}
|
||||
|
||||
// Read implements net.Listener.
|
||||
func (c timeoutConn) Read(b []byte) (n int, err error) {
|
||||
// Reset deadline
|
||||
c.Conn.SetReadDeadline(time.Now().Add(c.connDeadline))
|
||||
|
||||
return c.Conn.Read(b)
|
||||
}
|
||||
|
||||
// Write implements net.Listener.
|
||||
func (c timeoutConn) Write(b []byte) (n int, err error) {
|
||||
// Reset deadline
|
||||
c.Conn.SetWriteDeadline(time.Now().Add(c.connDeadline))
|
||||
|
||||
return c.Conn.Write(b)
|
||||
}
|
@@ -1,65 +0,0 @@
|
||||
package privval
|
||||
|
||||
import (
|
||||
"net"
|
||||
"testing"
|
||||
"time"
|
||||
)
|
||||
|
||||
func TestTCPTimeoutListenerAcceptDeadline(t *testing.T) {
|
||||
ln, err := net.Listen("tcp", "127.0.0.1:0")
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
ln = newTCPTimeoutListener(ln, time.Millisecond, time.Second, time.Second)
|
||||
|
||||
_, err = ln.Accept()
|
||||
opErr, ok := err.(*net.OpError)
|
||||
if !ok {
|
||||
t.Fatalf("have %v, want *net.OpError", err)
|
||||
}
|
||||
|
||||
if have, want := opErr.Op, "accept"; have != want {
|
||||
t.Errorf("have %v, want %v", have, want)
|
||||
}
|
||||
}
|
||||
|
||||
func TestTCPTimeoutListenerConnDeadline(t *testing.T) {
|
||||
ln, err := net.Listen("tcp", "127.0.0.1:0")
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
ln = newTCPTimeoutListener(ln, time.Second, time.Millisecond, time.Second)
|
||||
|
||||
donec := make(chan struct{})
|
||||
go func(ln net.Listener) {
|
||||
defer close(donec)
|
||||
|
||||
c, err := ln.Accept()
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
time.Sleep(2 * time.Millisecond)
|
||||
|
||||
msg := make([]byte, 200)
|
||||
_, err = c.Read(msg)
|
||||
opErr, ok := err.(*net.OpError)
|
||||
if !ok {
|
||||
t.Fatalf("have %v, want *net.OpError", err)
|
||||
}
|
||||
|
||||
if have, want := opErr.Op, "read"; have != want {
|
||||
t.Errorf("have %v, want %v", have, want)
|
||||
}
|
||||
}(ln)
|
||||
|
||||
_, err = net.Dial("tcp", ln.Addr().String())
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
<-donec
|
||||
}
|
@@ -1,397 +0,0 @@
|
||||
package privval
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"net"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
"github.com/tendermint/tendermint/crypto/ed25519"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
|
||||
p2pconn "github.com/tendermint/tendermint/p2p/conn"
|
||||
"github.com/tendermint/tendermint/types"
|
||||
)
|
||||
|
||||
func TestSocketPVAddress(t *testing.T) {
|
||||
var (
|
||||
chainID = cmn.RandStr(12)
|
||||
sc, rs = testSetupSocketPair(t, chainID, types.NewMockPV())
|
||||
)
|
||||
defer sc.Stop()
|
||||
defer rs.Stop()
|
||||
|
||||
serverAddr := rs.privVal.GetPubKey().Address()
|
||||
clientAddr := sc.GetPubKey().Address()
|
||||
|
||||
assert.Equal(t, serverAddr, clientAddr)
|
||||
}
|
||||
|
||||
func TestSocketPVPubKey(t *testing.T) {
|
||||
var (
|
||||
chainID = cmn.RandStr(12)
|
||||
sc, rs = testSetupSocketPair(t, chainID, types.NewMockPV())
|
||||
)
|
||||
defer sc.Stop()
|
||||
defer rs.Stop()
|
||||
|
||||
clientKey, err := sc.getPubKey()
|
||||
require.NoError(t, err)
|
||||
|
||||
privvalPubKey := rs.privVal.GetPubKey()
|
||||
|
||||
assert.Equal(t, privvalPubKey, clientKey)
|
||||
}
|
||||
|
||||
func TestSocketPVProposal(t *testing.T) {
|
||||
var (
|
||||
chainID = cmn.RandStr(12)
|
||||
sc, rs = testSetupSocketPair(t, chainID, types.NewMockPV())
|
||||
|
||||
ts = time.Now()
|
||||
privProposal = &types.Proposal{Timestamp: ts}
|
||||
clientProposal = &types.Proposal{Timestamp: ts}
|
||||
)
|
||||
defer sc.Stop()
|
||||
defer rs.Stop()
|
||||
|
||||
require.NoError(t, rs.privVal.SignProposal(chainID, privProposal))
|
||||
require.NoError(t, sc.SignProposal(chainID, clientProposal))
|
||||
assert.Equal(t, privProposal.Signature, clientProposal.Signature)
|
||||
}
|
||||
|
||||
func TestSocketPVVote(t *testing.T) {
|
||||
var (
|
||||
chainID = cmn.RandStr(12)
|
||||
sc, rs = testSetupSocketPair(t, chainID, types.NewMockPV())
|
||||
|
||||
ts = time.Now()
|
||||
vType = types.PrecommitType
|
||||
want = &types.Vote{Timestamp: ts, Type: vType}
|
||||
have = &types.Vote{Timestamp: ts, Type: vType}
|
||||
)
|
||||
defer sc.Stop()
|
||||
defer rs.Stop()
|
||||
|
||||
require.NoError(t, rs.privVal.SignVote(chainID, want))
|
||||
require.NoError(t, sc.SignVote(chainID, have))
|
||||
assert.Equal(t, want.Signature, have.Signature)
|
||||
}
|
||||
|
||||
func TestSocketPVVoteResetDeadline(t *testing.T) {
|
||||
var (
|
||||
chainID = cmn.RandStr(12)
|
||||
sc, rs = testSetupSocketPair(t, chainID, types.NewMockPV())
|
||||
|
||||
ts = time.Now()
|
||||
vType = types.PrecommitType
|
||||
want = &types.Vote{Timestamp: ts, Type: vType}
|
||||
have = &types.Vote{Timestamp: ts, Type: vType}
|
||||
)
|
||||
defer sc.Stop()
|
||||
defer rs.Stop()
|
||||
|
||||
time.Sleep(3 * time.Millisecond)
|
||||
|
||||
require.NoError(t, rs.privVal.SignVote(chainID, want))
|
||||
require.NoError(t, sc.SignVote(chainID, have))
|
||||
assert.Equal(t, want.Signature, have.Signature)
|
||||
|
||||
// This would exceed the deadline if it was not extended by the previous message
|
||||
time.Sleep(3 * time.Millisecond)
|
||||
|
||||
require.NoError(t, rs.privVal.SignVote(chainID, want))
|
||||
require.NoError(t, sc.SignVote(chainID, have))
|
||||
assert.Equal(t, want.Signature, have.Signature)
|
||||
}
|
||||
|
||||
func TestSocketPVVoteKeepalive(t *testing.T) {
|
||||
var (
|
||||
chainID = cmn.RandStr(12)
|
||||
sc, rs = testSetupSocketPair(t, chainID, types.NewMockPV())
|
||||
|
||||
ts = time.Now()
|
||||
vType = types.PrecommitType
|
||||
want = &types.Vote{Timestamp: ts, Type: vType}
|
||||
have = &types.Vote{Timestamp: ts, Type: vType}
|
||||
)
|
||||
defer sc.Stop()
|
||||
defer rs.Stop()
|
||||
|
||||
time.Sleep(10 * time.Millisecond)
|
||||
|
||||
require.NoError(t, rs.privVal.SignVote(chainID, want))
|
||||
require.NoError(t, sc.SignVote(chainID, have))
|
||||
assert.Equal(t, want.Signature, have.Signature)
|
||||
}
|
||||
|
||||
func TestSocketPVDeadline(t *testing.T) {
|
||||
var (
|
||||
addr = testFreeAddr(t)
|
||||
listenc = make(chan struct{})
|
||||
sc = NewTCPVal(
|
||||
log.TestingLogger(),
|
||||
addr,
|
||||
ed25519.GenPrivKey(),
|
||||
)
|
||||
)
|
||||
|
||||
TCPValConnTimeout(100 * time.Millisecond)(sc)
|
||||
|
||||
go func(sc *TCPVal) {
|
||||
defer close(listenc)
|
||||
|
||||
assert.Equal(t, sc.Start().(cmn.Error).Data(), ErrConnTimeout)
|
||||
|
||||
assert.False(t, sc.IsRunning())
|
||||
}(sc)
|
||||
|
||||
for {
|
||||
conn, err := cmn.Connect(addr)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
|
||||
_, err = p2pconn.MakeSecretConnection(
|
||||
conn,
|
||||
ed25519.GenPrivKey(),
|
||||
)
|
||||
if err == nil {
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
<-listenc
|
||||
}
|
||||
|
||||
func TestRemoteSignerRetry(t *testing.T) {
|
||||
var (
|
||||
attemptc = make(chan int)
|
||||
retries = 2
|
||||
)
|
||||
|
||||
ln, err := net.Listen("tcp", "127.0.0.1:0")
|
||||
require.NoError(t, err)
|
||||
|
||||
go func(ln net.Listener, attemptc chan<- int) {
|
||||
attempts := 0
|
||||
|
||||
for {
|
||||
conn, err := ln.Accept()
|
||||
require.NoError(t, err)
|
||||
|
||||
err = conn.Close()
|
||||
require.NoError(t, err)
|
||||
|
||||
attempts++
|
||||
|
||||
if attempts == retries {
|
||||
attemptc <- attempts
|
||||
break
|
||||
}
|
||||
}
|
||||
}(ln, attemptc)
|
||||
|
||||
rs := NewRemoteSigner(
|
||||
log.TestingLogger(),
|
||||
cmn.RandStr(12),
|
||||
ln.Addr().String(),
|
||||
types.NewMockPV(),
|
||||
ed25519.GenPrivKey(),
|
||||
)
|
||||
defer rs.Stop()
|
||||
|
||||
RemoteSignerConnDeadline(time.Millisecond)(rs)
|
||||
RemoteSignerConnRetries(retries)(rs)
|
||||
|
||||
assert.Equal(t, rs.Start(), ErrDialRetryMax)
|
||||
|
||||
select {
|
||||
case attempts := <-attemptc:
|
||||
assert.Equal(t, retries, attempts)
|
||||
case <-time.After(100 * time.Millisecond):
|
||||
t.Error("expected remote to observe connection attempts")
|
||||
}
|
||||
}
|
||||
|
||||
func TestRemoteSignVoteErrors(t *testing.T) {
|
||||
var (
|
||||
chainID = cmn.RandStr(12)
|
||||
sc, rs = testSetupSocketPair(t, chainID, types.NewErroringMockPV())
|
||||
|
||||
ts = time.Now()
|
||||
vType = types.PrecommitType
|
||||
vote = &types.Vote{Timestamp: ts, Type: vType}
|
||||
)
|
||||
defer sc.Stop()
|
||||
defer rs.Stop()
|
||||
|
||||
err := writeMsg(sc.conn, &SignVoteRequest{Vote: vote})
|
||||
require.NoError(t, err)
|
||||
|
||||
res, err := readMsg(sc.conn)
|
||||
require.NoError(t, err)
|
||||
|
||||
resp := *res.(*SignedVoteResponse)
|
||||
require.NotNil(t, resp.Error)
|
||||
require.Equal(t, resp.Error.Description, types.ErroringMockPVErr.Error())
|
||||
|
||||
err = rs.privVal.SignVote(chainID, vote)
|
||||
require.Error(t, err)
|
||||
err = sc.SignVote(chainID, vote)
|
||||
require.Error(t, err)
|
||||
}
|
||||
|
||||
func TestRemoteSignProposalErrors(t *testing.T) {
|
||||
var (
|
||||
chainID = cmn.RandStr(12)
|
||||
sc, rs = testSetupSocketPair(t, chainID, types.NewErroringMockPV())
|
||||
|
||||
ts = time.Now()
|
||||
proposal = &types.Proposal{Timestamp: ts}
|
||||
)
|
||||
defer sc.Stop()
|
||||
defer rs.Stop()
|
||||
|
||||
err := writeMsg(sc.conn, &SignProposalRequest{Proposal: proposal})
|
||||
require.NoError(t, err)
|
||||
|
||||
res, err := readMsg(sc.conn)
|
||||
require.NoError(t, err)
|
||||
|
||||
resp := *res.(*SignedProposalResponse)
|
||||
require.NotNil(t, resp.Error)
|
||||
require.Equal(t, resp.Error.Description, types.ErroringMockPVErr.Error())
|
||||
|
||||
err = rs.privVal.SignProposal(chainID, proposal)
|
||||
require.Error(t, err)
|
||||
|
||||
err = sc.SignProposal(chainID, proposal)
|
||||
require.Error(t, err)
|
||||
}
|
||||
|
||||
func TestErrUnexpectedResponse(t *testing.T) {
|
||||
var (
|
||||
addr = testFreeAddr(t)
|
||||
logger = log.TestingLogger()
|
||||
chainID = cmn.RandStr(12)
|
||||
readyc = make(chan struct{})
|
||||
errc = make(chan error, 1)
|
||||
|
||||
rs = NewRemoteSigner(
|
||||
logger,
|
||||
chainID,
|
||||
addr,
|
||||
types.NewMockPV(),
|
||||
ed25519.GenPrivKey(),
|
||||
)
|
||||
sc = NewTCPVal(
|
||||
logger,
|
||||
addr,
|
||||
ed25519.GenPrivKey(),
|
||||
)
|
||||
)
|
||||
|
||||
testStartSocketPV(t, readyc, sc)
|
||||
defer sc.Stop()
|
||||
RemoteSignerConnDeadline(time.Millisecond)(rs)
|
||||
RemoteSignerConnRetries(100)(rs)
|
||||
// we do not want to Start() the remote signer here and instead use the connection to
|
||||
// reply with intentionally wrong replies below:
|
||||
rsConn, err := rs.connect()
|
||||
defer rsConn.Close()
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, rsConn)
|
||||
// send over public key to get the remote signer running:
|
||||
go testReadWriteResponse(t, &PubKeyResponse{}, rsConn)
|
||||
<-readyc
|
||||
|
||||
// Proposal:
|
||||
go func(errc chan error) {
|
||||
errc <- sc.SignProposal(chainID, &types.Proposal{})
|
||||
}(errc)
|
||||
// read request and write wrong response:
|
||||
go testReadWriteResponse(t, &SignedVoteResponse{}, rsConn)
|
||||
err = <-errc
|
||||
require.Error(t, err)
|
||||
require.Equal(t, err, ErrUnexpectedResponse)
|
||||
|
||||
// Vote:
|
||||
go func(errc chan error) {
|
||||
errc <- sc.SignVote(chainID, &types.Vote{})
|
||||
}(errc)
|
||||
// read request and write wrong response:
|
||||
go testReadWriteResponse(t, &SignedProposalResponse{}, rsConn)
|
||||
err = <-errc
|
||||
require.Error(t, err)
|
||||
require.Equal(t, err, ErrUnexpectedResponse)
|
||||
}
|
||||
|
||||
func testSetupSocketPair(
|
||||
t *testing.T,
|
||||
chainID string,
|
||||
privValidator types.PrivValidator,
|
||||
) (*TCPVal, *RemoteSigner) {
|
||||
var (
|
||||
addr = testFreeAddr(t)
|
||||
logger = log.TestingLogger()
|
||||
privVal = privValidator
|
||||
readyc = make(chan struct{})
|
||||
rs = NewRemoteSigner(
|
||||
logger,
|
||||
chainID,
|
||||
addr,
|
||||
privVal,
|
||||
ed25519.GenPrivKey(),
|
||||
)
|
||||
sc = NewTCPVal(
|
||||
logger,
|
||||
addr,
|
||||
ed25519.GenPrivKey(),
|
||||
)
|
||||
)
|
||||
|
||||
TCPValConnTimeout(5 * time.Millisecond)(sc)
|
||||
TCPValHeartbeat(2 * time.Millisecond)(sc)
|
||||
RemoteSignerConnDeadline(5 * time.Millisecond)(rs)
|
||||
RemoteSignerConnRetries(1e6)(rs)
|
||||
|
||||
testStartSocketPV(t, readyc, sc)
|
||||
|
||||
require.NoError(t, rs.Start())
|
||||
assert.True(t, rs.IsRunning())
|
||||
|
||||
<-readyc
|
||||
|
||||
return sc, rs
|
||||
}
|
||||
|
||||
func testReadWriteResponse(t *testing.T, resp RemoteSignerMsg, rsConn net.Conn) {
|
||||
_, err := readMsg(rsConn)
|
||||
require.NoError(t, err)
|
||||
|
||||
err = writeMsg(rsConn, resp)
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
func testStartSocketPV(t *testing.T, readyc chan struct{}, sc *TCPVal) {
|
||||
go func(sc *TCPVal) {
|
||||
require.NoError(t, sc.Start())
|
||||
assert.True(t, sc.IsRunning())
|
||||
|
||||
readyc <- struct{}{}
|
||||
}(sc)
|
||||
}
|
||||
|
||||
// testFreeAddr claims a free port so we don't block on listener being ready.
|
||||
func testFreeAddr(t *testing.T) string {
|
||||
ln, err := net.Listen("tcp", "127.0.0.1:0")
|
||||
require.NoError(t, err)
|
||||
defer ln.Close()
|
||||
|
||||
return fmt.Sprintf("127.0.0.1:%d", ln.Addr().(*net.TCPAddr).Port)
|
||||
}
|
@@ -428,5 +428,10 @@ func TestTxSearch(t *testing.T) {
|
||||
if len(result.Txs) == 0 {
|
||||
t.Fatal("expected a lot of transactions")
|
||||
}
|
||||
|
||||
// query a non-existent tx with page 1 and txsPerPage 1
|
||||
result, err = c.TxSearch("app.creator='Cosmoshi Neetowoko'", true, 1, 1)
|
||||
require.Nil(t, err, "%+v", err)
|
||||
require.Len(t, result.Txs, 0)
|
||||
}
|
||||
}
|
||||
|
@@ -53,6 +53,7 @@ func NetInfo() (*ctypes.ResultNetInfo, error) {
|
||||
NodeInfo: nodeInfo,
|
||||
IsOutbound: peer.IsOutbound(),
|
||||
ConnectionStatus: peer.Status(),
|
||||
RemoteIP: peer.RemoteIP(),
|
||||
})
|
||||
}
|
||||
// TODO: Should we include PersistentPeers and Seeds in here?
|
||||
|
@@ -149,8 +149,19 @@ func validatePage(page, perPage, totalCount int) int {
|
||||
}
|
||||
|
||||
func validatePerPage(perPage int) int {
|
||||
if perPage < 1 || perPage > maxPerPage {
|
||||
if perPage < 1 {
|
||||
return defaultPerPage
|
||||
} else if perPage > maxPerPage {
|
||||
return maxPerPage
|
||||
}
|
||||
return perPage
|
||||
}
|
||||
|
||||
func validateSkipCount(page, perPage int) int {
|
||||
skipCount := (page - 1) * perPage
|
||||
if skipCount < 0 {
|
||||
return 0
|
||||
}
|
||||
|
||||
return skipCount
|
||||
}
|
||||
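A hedged illustration of the two helpers above after this change (defaultPerPage and maxPerPage are package constants, left symbolic here):

    validatePerPage(0)              // -> defaultPerPage, as before
    validatePerPage(maxPerPage + 1) // -> maxPerPage (previously reset to defaultPerPage)
    validateSkipCount(1, 30)        // -> 0: page 1 never skips results
    validateSkipCount(3, 30)        // -> 60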
|
@@ -47,7 +47,6 @@ func TestPaginationPage(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestPaginationPerPage(t *testing.T) {
|
||||
|
||||
cases := []struct {
|
||||
totalCount int
|
||||
perPage int
|
||||
@@ -59,7 +58,7 @@ func TestPaginationPerPage(t *testing.T) {
|
||||
{5, defaultPerPage, defaultPerPage},
|
||||
{5, maxPerPage - 1, maxPerPage - 1},
|
||||
{5, maxPerPage, maxPerPage},
|
||||
{5, maxPerPage + 1, defaultPerPage},
|
||||
{5, maxPerPage + 1, maxPerPage},
|
||||
}
|
||||
|
||||
for _, c := range cases {
|
||||
|
@@ -201,10 +201,11 @@ func TxSearch(query string, prove bool, page, perPage int) (*ctypes.ResultTxSear
|
||||
totalCount := len(results)
|
||||
perPage = validatePerPage(perPage)
|
||||
page = validatePage(page, perPage, totalCount)
|
||||
skipCount := (page - 1) * perPage
|
||||
skipCount := validateSkipCount(page, perPage)
|
||||
|
||||
apiResults := make([]*ctypes.ResultTx, cmn.MinInt(perPage, totalCount-skipCount))
|
||||
var proof types.TxProof
|
||||
// if there's no tx in the results array, we don't need to loop through the apiResults array
|
||||
for i := 0; i < len(apiResults); i++ {
|
||||
r := results[skipCount+i]
|
||||
height := r.Height
|
||||
|
@@ -2,6 +2,7 @@ package core_types
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"net"
|
||||
"time"
|
||||
|
||||
abci "github.com/tendermint/tendermint/abci/types"
|
||||
@@ -110,6 +111,7 @@ type Peer struct {
|
||||
NodeInfo p2p.DefaultNodeInfo `json:"node_info"`
|
||||
IsOutbound bool `json:"is_outbound"`
|
||||
ConnectionStatus p2p.ConnectionStatus `json:"connection_status"`
|
||||
RemoteIP net.IP `json:"remote_ip"`
|
||||
}
|
||||
|
||||
// Validators for a height
|
||||
|
@@ -45,7 +45,10 @@ func main() {
|
||||
}
|
||||
defer walFile.Close()
|
||||
|
||||
br := bufio.NewReader(f)
|
||||
// the length of tendermint/wal/MsgInfo in the wal.json may exceed the defaultBufSize(4096) of bufio
|
||||
// because of the byte array in BlockPart
|
||||
// leading to unmarshal error: unexpected end of JSON input
|
||||
br := bufio.NewReaderSize(f, 2*types.BlockPartSizeBytes)
|
||||
dec := cs.NewWALEncoder(walFile)
|
||||
|
||||
for {
|
||||
|
@@ -29,7 +29,8 @@ type BlockExecutor struct {
|
||||
// events
|
||||
eventBus types.BlockEventPublisher
|
||||
|
||||
// update these with block results after commit
|
||||
// manage the mempool lock during commit
|
||||
// and update both with block results after commit.
|
||||
mempool Mempool
|
||||
evpool EvidencePool
|
||||
|
||||
@@ -73,6 +74,31 @@ func (blockExec *BlockExecutor) SetEventBus(eventBus types.BlockEventPublisher)
|
||||
blockExec.eventBus = eventBus
|
||||
}
|
||||
|
||||
// CreateProposalBlock calls state.MakeBlock with evidence from the evpool
|
||||
// and txs from the mempool. The max bytes must be big enough to fit the commit.
|
||||
// Up to 1/10th of the block space is allocated for maximum-sized evidence.
|
||||
// The rest is given to txs, up to the max gas.
|
||||
func (blockExec *BlockExecutor) CreateProposalBlock(
|
||||
height int64,
|
||||
state State, commit *types.Commit,
|
||||
proposerAddr []byte,
|
||||
) (*types.Block, *types.PartSet) {
|
||||
|
||||
maxBytes := state.ConsensusParams.BlockSize.MaxBytes
|
||||
maxGas := state.ConsensusParams.BlockSize.MaxGas
|
||||
|
||||
// Fetch a limited amount of valid evidence
|
||||
maxNumEvidence, _ := types.MaxEvidencePerBlock(maxBytes)
|
||||
evidence := blockExec.evpool.PendingEvidence(maxNumEvidence)
|
||||
|
||||
// Fetch a limited amount of valid txs
|
||||
maxDataBytes := types.MaxDataBytes(maxBytes, state.Validators.Size(), len(evidence))
|
||||
txs := blockExec.mempool.ReapMaxBytesMaxGas(maxDataBytes, maxGas)
|
||||
|
||||
return state.MakeBlock(height, txs, commit, evidence, proposerAddr)
|
||||
|
||||
}
|
||||
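As a rough worked example (the numbers are illustrative, not taken from this changeset): with a MaxBytes of 1,000,000 bytes, MaxEvidencePerBlock reserves on the order of 100,000 bytes (1/10th) for maximum-sized evidence, and MaxDataBytes hands the remainder, less header and commit overhead for the current validator set, to the mempool reap, which is additionally capped by MaxGas.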
|
||||
// ValidateBlock validates the given block against the given state.
|
||||
// If the block is invalid, it returns an error.
|
||||
// Validation does not mutate state, but does require historical information from the stateDB,
|
||||
|
@@ -7,14 +7,26 @@ import (
|
||||
stdprometheus "github.com/prometheus/client_golang/prometheus"
|
||||
)
|
||||
|
||||
const MetricsSubsystem = "state"
|
||||
const (
|
||||
// MetricsSubsystem is a subsystem shared by all metrics exposed by this
|
||||
// package.
|
||||
MetricsSubsystem = "state"
|
||||
)
|
||||
|
||||
// Metrics contains metrics exposed by this package.
|
||||
type Metrics struct {
|
||||
// Time between BeginBlock and EndBlock.
|
||||
BlockProcessingTime metrics.Histogram
|
||||
}
|
||||
|
||||
func PrometheusMetrics(namespace string) *Metrics {
|
||||
// PrometheusMetrics returns Metrics built using the Prometheus client library.
|
||||
// Optionally, labels can be provided along with their values ("foo",
|
||||
// "fooValue").
|
||||
func PrometheusMetrics(namespace string, labelsAndValues ...string) *Metrics {
|
||||
labels := []string{}
|
||||
for i := 0; i < len(labelsAndValues); i += 2 {
|
||||
labels = append(labels, labelsAndValues[i])
|
||||
}
|
||||
return &Metrics{
|
||||
BlockProcessingTime: prometheus.NewHistogramFrom(stdprometheus.HistogramOpts{
|
||||
Namespace: namespace,
|
||||
@@ -22,10 +34,11 @@ func PrometheusMetrics(namespace string) *Metrics {
|
||||
Name: "block_processing_time",
|
||||
Help: "Time between BeginBlock and EndBlock in ms.",
|
||||
Buckets: stdprometheus.LinearBuckets(1, 10, 10),
|
||||
}, []string{}),
|
||||
}, labels).With(labelsAndValues...),
|
||||
}
|
||||
}
|
||||
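A brief usage sketch of the variadic labels added above (namespace and label values are placeholders; callers outside the package would qualify the call as state.PrometheusMetrics):

    m := PrometheusMetrics("tendermint", "chain_id", "my-chain")
    m.BlockProcessingTime.Observe(42) // every observation now carries chain_id="my-chain"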
|
||||
// NopMetrics returns no-op Metrics.
|
||||
func NopMetrics() *Metrics {
|
||||
return &Metrics{
|
||||
BlockProcessingTime: discard.NewHistogram(),
|
||||
|
Some files were not shown because too many files have changed in this diff