Mirror of https://github.com/fluencelabs/tendermint (synced 2025-07-15 20:41:37 +00:00)
Compare commits
507 Commits
SHA1:

c8a2bdf78b 3cd604562c 7c6c0dba53 ec2f3f49ef 8bba7c64bc 0467de890a 0ae0155cba f4feb7703b 0e68638af3 d3e276bf80
fc585bcdec 2a24ae90c1 f1c8489270 2d10c8f15b 106cdb74e5 22b038810a 45750e1b29 26419fba28 ac0123d249 f4ff66de30
747b73cb95 161e100a24 3ae738f453 d14d4a2527 860da464df 4e2000abfe 5834a59816 b28b76ddf7 91e4f4b786 9b554fb2c4
f97ead4f5f 5af22d6ee6 1d16df6a92 e7bc946760 cf1e1f5899 2f8372d629 d84e4effce 0c1b91b762 c8990d06d9 b0a55882b2
d1fa44e816 199ea40980 66fc476e1e 6b347200d9 8a908a7cf9 0bcc11c9bc 0247a21a93 cf1f483526 3f9aa8d8fa d6d1f8512d
2b2c233977 7640e6a29f b0ca8a0872 9e767771fc 6c8d7a8c19 15ef57c6d0 f37c502fd8 945b0e6eca 84a0a1987c 11b68f1934
202d9a2c0c bf84e82577 abca9a2d61 d34286c421 bb2bdbc0e1 e7747f7d66 7a5060dc52 426379dc47 cd0fd06b0d 4e3488c677
061ad355bb 2679b7554b 62c9cad484 4cbdbbaac9 2919bc3f7f 1c01671ec6 fe632ea32a 5b368252ac 8cca953590 4b4a2029c4
6aa85357b6 eae62ec09b 18d96266bc 4529fd6787 4a99a2a07d 4b63b3aa0b fc860c3a07 2f147ec000 0a7a190cd1 3366dfe32a
baff4bd8cc fb109db33d 2f5971532e ab13806276 3ae26bd6e6 27ef3489a0 b6eb275b22 57cc8ab977 99034904f8 a0ffcbcee4
260affd037 d7b1b8d3d5 50129ad8ac 5c9cb5e6a2 4051391039 8f3bd3f209 85816877c6 87087b8acd 775bb85efb 21ce5856b3
f5226e0008 a745fe2eed 5f3048bd09 6a5818e107 dfdfd6c98e 3090b05eb4 ee674f919f 813bb6af96 aecbff725f 6679fef2be
c070ed056a 2c6ed302b7 0eb85161aa 940145b368 a30315276b 6366eb9d99 44e967184a 2ec425ae4b 0d7d16005a 5b5cbaa66a
03550c7076 930fde056a 8d758560d8 7b87cdaed8 c2f97e6454 88eb3e7af0 949211a137 39d8da3536 ae27e85bf7 f2d19162d2
d36e118bf6 02c1aef48b 6f3d9b4be3 f06cc6630b 8171628ee5 1cb76625d3 ebeadfc57e cca597a9c0 940db715f4 ec2b038493
ba0cb4f10e e764a180d8 bc19e7843c f57f97c4bd b32474bb1a 0a20e8f268 9d4d939b89 c1e6e73bb1 620c957a44 64ce7eef16
fc7915ab4c 26aaa283a9 a29c67563c 17f7a9b510 3df5fd21cd 99076f1942 3368eeb03e 68237911ba f9e4f6eb6b 8b74a8d6ac
08f84cd712 452d10f368 8b3fb743cf 7667e11973 53a5498fc5 e4d52401cf 9670519a21 b1485b181a 47a6928890 c1e167e330
e2b3b5b58c e6b70baae0 b40aa91b41 a13b17ec6c b32a507a1b d6e01e8cee c79ba3c349 12ca972761 657ad214cb 39acf1c5e8
075ae1e301 705d51aa42 ef0493ddf3 1b455883d2 e4897b7bdd 37f86f9518 28fc15028a 179d6062e4 c521f385a6 b8214fce66
03a14d8342 48638eaa20 170777300e 32311acd01 b9cbaf8f10 8d86b6c2d2 555f560ecd dba4815616 cf42611187 90c691df2b
13fa23c568 124c58d48f a034600024 e0e600df05 92f5ae5a84 d57ddec302 bb3dc10f24 2cc50938a4 ba475d3128 ed81fb54ec
8481da2405 ecb7303e35 9cb45eb7df 6855e62f0a 17b61db40a 7b52499463 f67f99c227 0430ebf95c f602de437e a573b20888
488ae529ad 6e823c6e87 7d35500e6b a17105fd46 b289d2baf4 f2e0abf1dc 528154f1a2 bc71840f06 cd15b677ec 1acb12edf5
008de93bbe 4e834baa9a 96e0e4ab5a 381fe19335 ff99ca7cdf 60f95cd9ea 28bbeac763 e97e0bacd1 abfdfe67e8 992371b4cf
07eeddc5e1 3b70c89e07 444db4c242 cb845ebff5 6112578d07 ae68fcb78a 8d8d63c94c 1d6f00859d 397251b0f4 537b0dfa1a
0acca7fe69 bac60f2067 f82b7e2a13 9e6d088757 c915719f85 f55135578c 139eca0177 a8e625e99d a92a32b862 9da5cd0180
a6f2e502e7 69d8c2e554 70ba608850 75182f7205 41caa4415c c611cc7268 d14ec7d7d2 4ca19e33c2 1a0db878bf 53eb9aca2b
caccabd4e5 d0e0ac5fac 7d81a3f4a5 f95b7529eb 6a4fd46479 b01b1e4758 014b0b9944 5904f6df8b 0f293bfc2b cfbedec719
666ae244b3 c13e93d63e c2585b5525 1d021c2790 cc418e5dab c7acdfadf2 869d873d5c 3271634e7a 4854c231e1 7a18fa887d
6c4a0f9363 f7731d38f6 df3f4de7c3 10c43c9edc fe4b53a463 5b1f987ed1 48d778c4b3 7d086e9524 6e9433c7a8 39299e5cc1
eeab0efa56 77e45756f2 9cdcffbe4b 50850cf8a2 35587658cd 4661c98c17 7928659f70 f80f6445a6 336c2f4fe1 bfcb40bf6b
051c2701ab 028ee58580 801e3dfacf 4171bd3bae 73fb1c3a17 d65234ed51 58c5df729b 632cc918b4 f870a49f42 d844799b3b
3ea1145486 d4716fc03c 16227594ef 65cdb07f0c eb73e82279 d6fbfddddd 1339a44402 b8215d8ac8 289d92c97d 6c39c77fc5
69c3a7640b e8b0458f16 6b89639f90 9b25f7325a 0093f9877a cf0b5d3715 616f7e74db 14c812a39c 1e52751344 d7ac6e516a
c2436c46e6 e3585a6eb0 38608b1b0f 2b634dab32 91acc51cd1 dc54ba67e4 35521b553a 70a744558c 4b789ff7e9 306657a118
be765e4cb9 b5857da877 3ad055ef3a 3d00c477fc c2912d612a a3c7525249 f81025631e 9c03c58de2 0ffd60b8cf d5baa6601c
19eeef0aad e76392e330 a1cc9ac642 67c3af3bf8 843e1ed400 4bca6bf6f5 960b25408f 45bc106de7 d151e36ea8 56cada6a0c
a0b2d77bef 030fd00232 d21f39160f 4265a94bfe 652d1e3de8 b33cff4cb7 e0fe84a856 783ffdb7fd cb3ac6987e 5a83e58428
3f02ab0ead 99c58fc561 a86df17ceb 5d04ccbe51 61dc357bb3 d7cb2f850d bfe0a4a8ac 0ec7909ec3 b5b912e2c4 9504a593e9
f8f28c8942 8fc7d63cf8 c513649df4 a6911825b0 eddabab5e4 3eee69de2d 068d83bce8 7f649ccf23 808b830942 d669816a1b
e40689b9cc 709cf18aef e57cad6c3f 4f94caa1b9 78a682e4b6 21d030dbfb 72da553ed9 b78606d94f a6f719a402 e0fbd148ef
2f91289880 462b755a60 0a2ecaa393 dedf03bb81 64f056b57d 90df9fa1bf eae6e6381e 04a18e0a97 06aece31cf e0296d6c3c
5ffb5f01cc 8576ad58bd c4860f6c29 850310b034 a29c781295 599673690c 7deda53b7c 5ecae52bf1 ac2d0edb2f ae632654d2
88f5f21dbb 49e5510953 a6644f7477 10265d8667 8be708fe5b 5facfafbd5 11761d1769 2c14488b93 c127bce73b af79a2a59e
ee66476d62 40f9261d48 69205594cc d943f66abc b2385b46cf 2af32d6665 5c58db3bb4 24a9491203 5511bd8e85 f1ca2b3a3a
10fcefe346 0bfc11f1ba 96998a5498 2da5299924 83b40b25d6 05f30b3e28 116a61beb1 8c86bb8024 41bb2c2663 e7b9cd8ee8
3019b9f320 60872d7d7c a37c1143ca 4e08ee1833 7563870d11 12c5a57415 101680d603 d819d5d324 90cdffa067 080640a171
950a64f756 b50cef742d 4f50935aa2 44f62e5e27 59e89e7664 5d464364a8 2112299586 c771964a40 a199ec2813 a28b3fff49
5bcd95f01f c84494b36b 7e3a5b7ce8 fbfd11de2c 9657d183f8 b98098b1f0 1ae14e5a3d 7a92a3b729 457c688346 c609b18698
5ff0bb2100 90944bb1a2 07571741c5 14ccc8bc4c 5cb936fa00 c6f025f40e 5c4397ab30
.github/CODEOWNERS (vendored): 5 changed lines

@@ -1,5 +1,4 @@
# CODEOWNERS: https://help.github.com/articles/about-codeowners/

# Everything goes through Bucky. For now.
* @ebuchman

# Everything goes through Bucky and Anton. For now.
* @ebuchman @melekes
.github/PULL_REQUEST_TEMPLATE.md (vendored, new file): 6 lines

@@ -0,0 +1,6 @@
<!-- Thanks for filing a PR! Before hitting the button, please check the following items.-->

* [ ] Updated all relevant documentation in docs
* [ ] Updated all code comments where relevant
* [ ] Wrote tests
* [ ] Updated CHANGELOG.md
.gitignore (vendored): 1 changed line

@@ -17,6 +17,7 @@ test/logs
coverage.txt
docs/_build
docs/tools
*.log

scripts/wal2json/wal2json
scripts/cutWALUntil/cutWALUntil
CHANGELOG.md: 82 changed lines

@@ -3,9 +3,7 @@
## Roadmap

BREAKING CHANGES:
- Upgrade the header to support better proofs on validators, results, evidence, and possibly more
- Better support for injecting randomness
- Pass evidence/voteInfo through ABCI
- Upgrade consensus for more real-time use of evidence

FEATURES:

@@ -27,6 +25,86 @@ BUG FIXES:
- Graceful handling/recovery for apps that have non-determinism or fail to halt
- Graceful handling/recovery for violations of safety, or liveness

## 0.16.0 (February 20th, 2018)

BREAKING CHANGES:
- [config] use $TMHOME/config for all config and json files
- [p2p] old `--p2p.seeds` is now `--p2p.persistent_peers` (persistent peers to which TM will always connect)
- [p2p] `--p2p.seeds` is now only used for getting addresses (if addrbook is empty; not persistent)
- [p2p] NodeInfo: remove RemoteAddr and add Channels
  - we must have at least one overlapping channel with peer
  - we only send msgs for channels the peer advertised
- [p2p/conn] pong timeout
- [lite] comment out IAVL related code

FEATURES:
- [p2p] added new `/dial_peers&persistent=_` **unsafe** endpoint
- [p2p] persistent node key in `$TMHOME/config/node_key.json`
- [p2p] introduce peer ID and authenticate peers by ID using addresses like `ID@IP:PORT` (see the sketch after this section)
- [p2p/pex] new seed mode crawls the network and serves as a seed.
- [config] MempoolConfig.CacheSize
- [config] P2P.SeedMode (`--p2p.seed_mode`)

IMPROVEMENTS:
- [p2p/pex] stricter rules in the PEX reactor for better handling of abuse
- [p2p] various improvements to code structure including subpackages for `pex` and `conn`
- [docs] new spec!
- [all] speed up the tests!

BUG FIXES:
- [blockchain] StopPeerForError on timeout
- [consensus] StopPeerForError on a bad Maj23 message
- [state] flush mempool conn before calling commit
- [types] fix priv val signing things that only differ by timestamp
- [mempool] fix memory leak causing zombie peers
- [p2p/conn] fix potential deadlock
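A small illustration of the `ID@IP:PORT` scheme introduced above. The derivation here (hex of the first 20 bytes of SHA256 of the node's public key) and the helper names are assumptions made for the sketch, not necessarily what this release implements; only the address format itself comes from the changelog.

```
package main

import (
	"crypto/ed25519"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// peerID derives a printable ID from a node's public key.
// Hypothetical derivation: hex of the first 20 bytes of SHA256(pubkey).
func peerID(pub ed25519.PublicKey) string {
	sum := sha256.Sum256(pub)
	return hex.EncodeToString(sum[:20])
}

// dialString composes the ID@IP:PORT form used for persistent peers.
func dialString(id, hostPort string) string {
	return id + "@" + hostPort
}

// splitDialString recovers the ID and address, so the dialer can check
// that the remote key matches the expected ID after the handshake.
// The sketch assumes a well-formed input containing "@".
func splitDialString(s string) (id, hostPort string) {
	parts := strings.SplitN(s, "@", 2)
	return parts[0], parts[1]
}

func main() {
	pub, _, _ := ed25519.GenerateKey(nil)
	addr := dialString(peerID(pub), "127.0.0.1:46656")
	fmt.Println(addr)
	id, hp := splitDialString(addr)
	fmt.Println(id, hp)
}
```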
## 0.15.0 (December 29, 2017)

BREAKING CHANGES:
- [p2p] enable the Peer Exchange reactor by default
- [types] add Timestamp field to Proposal/Vote
- [types] add new fields to Header: TotalTxs, ConsensusParamsHash, LastResultsHash, EvidenceHash
- [types] add Evidence to Block
- [types] simplify ValidateBasic
- [state] updates to support changes to the header
- [state] Enforce <1/3 of validator set can change at a time

FEATURES:
- [state] Send indices of absent validators and addresses of byzantine validators in BeginBlock
- [state] Historical ConsensusParams and ABCIResponses
- [docs] Specification for the base Tendermint data structures.
- [evidence] New evidence reactor for gossiping and managing evidence
- [rpc] `/block_results?height=X` returns the DeliverTx results for a given height.

IMPROVEMENTS:
- [consensus] Better handling of corrupt WAL file

BUG FIXES:
- [lite] fix race
- [state] validate block.Header.ValidatorsHash
- [p2p] allow seed addresses to be prefixed with eg. `tcp://`
- [p2p] use consistent key to refer to peers so we don't try to connect to existing peers
- [cmd] fix `tendermint init` to ignore files that are there and generate files that aren't.

## 0.14.0 (December 11, 2017)

BREAKING CHANGES:
- consensus/wal: removed separator
- rpc/client: changed Subscribe/Unsubscribe/UnsubscribeAll funcs signatures to be identical to event bus.

FEATURES:
- new `tendermint lite` command (and `lite/proxy` pkg) for running a light-client RPC proxy.
  NOTE it is currently insecure and its APIs are not yet covered by semver

IMPROVEMENTS:
- rpc/client: can act as event bus subscriber (See https://github.com/tendermint/tendermint/issues/945).
- p2p: use exponential backoff from seconds to hours when attempting to reconnect to persistent peer (sketched after this section)
- config: moniker defaults to the machine's hostname instead of "anonymous"

BUG FIXES:
- p2p: no longer exit if one of the seed addresses is incorrect
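The exponential-backoff reconnect policy mentioned above, sketched in isolation. The base delay, cap, and jitter are illustrative values, not the ones the p2p package uses.

```
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// backoff returns the wait before reconnect attempt n, growing
// exponentially from a 1s base and capped at 3h. Base, cap, and
// jitter are assumptions for illustration.
func backoff(attempt int) time.Duration {
	const (
		base = time.Second
		max  = 3 * time.Hour
	)
	d := base
	for i := 0; i < attempt && d < max; i++ {
		d *= 2
	}
	if d > max {
		d = max
	}
	// add up to 10% jitter so peers don't reconnect in lockstep
	return d + time.Duration(rand.Int63n(int64(d)/10+1))
}

func main() {
	for n := 0; n <= 14; n++ {
		fmt.Printf("attempt %2d: wait ~%v\n", n, backoff(n).Round(time.Second))
	}
}
```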
## 0.13.0 (December 6, 2017)

BREAKING CHANGES:
@@ -42,15 +42,18 @@ Run `bash scripts/glide/status.sh` to get a list of vendored dependencies that m
## Vagrant

If you are a [Vagrant](https://www.vagrantup.com/) user, all you have to do to get started hacking Tendermint is:
If you are a [Vagrant](https://www.vagrantup.com/) user, you can get started hacking Tendermint with the commands below.

NOTE: In case you installed Vagrant in 2017, you might need to run
`vagrant box update` to upgrade to the latest `ubuntu/xenial64`.

```
vagrant up
vagrant ssh
cd ~/go/src/github.com/tendermint/tendermint
make test
```

## Testing

All repos should be hooked up to circle.

@@ -97,4 +100,4 @@ especially `go-p2p` and `go-rpc`, as their versions are referenced in tendermint
- push to hotfix-vX.X.X to run the extended integration tests on the CI
- merge hotfix-vX.X.X to master
- merge hotfix-vX.X.X to develop
- delete the hotfix-vX.X.X branch
@@ -1,8 +1,8 @@
FROM alpine:3.6

# This is the release of tendermint to pull in.
ENV TM_VERSION 0.12.0
ENV TM_SHA256SUM be17469e92f04fc2a3663f891da28edbaa6c37c4d2f746736571887f4790555a
ENV TM_VERSION 0.15.0
ENV TM_SHA256SUM 71cc271c67eca506ca492c8b90b090132f104bf5dbfe0af2702a50886e88de17

# Tendermint will be looking for genesis file in /tendermint (unless you change
# `genesis_file` in config.toml). You can put your config.toml and private
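The Dockerfile pins both a release version and its SHA256 digest, so a corrupted or tampered download fails the build. The same check expressed in Go (the file path is hypothetical; the digest is the one pinned above):

```
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

// verifySHA256 streams the file through SHA256 and compares the hex
// digest against the expected value, as the Dockerfile's checksum
// step presumably does with sha256sum.
func verifySHA256(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	// Illustrative path; substitute the downloaded release artifact.
	err := verifySHA256("tendermint.zip",
		"71cc271c67eca506ca492c8b90b090132f104bf5dbfe0af2702a50886e88de17")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("checksum OK")
}
```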
@@ -1,6 +1,9 @@
# Supported tags and respective `Dockerfile` links

- `0.12.0`, `latest` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/70d8afa6e952e24c573ece345560a5971bf2cc0e/DOCKER/Dockerfile)
- `0.15.0`, `latest` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/170777300ea92dc21a8aec1abc16cb51812513a4/DOCKER/Dockerfile)
- `0.13.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/a28b3fff49dce2fb31f90abb2fc693834e0029c2/DOCKER/Dockerfile)
- `0.12.1` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/457c688346b565e90735431619ca3ca597ef9007/DOCKER/Dockerfile)
- `0.12.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/70d8afa6e952e24c573ece345560a5971bf2cc0e/DOCKER/Dockerfile)
- `0.11.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/9177cc1f64ca88a4a0243c5d1773d10fba67e201/DOCKER/Dockerfile)
- `0.10.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/e5342f4054ab784b2cd6150e14f01053d7c8deb2/DOCKER/Dockerfile)
- `0.9.1`, `0.9` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/809e0e8c5933604ba8b2d096803ada7c5ec4dfd3/DOCKER/Dockerfile)
Makefile: 124 changed lines

@@ -1,32 +1,66 @@
GOTOOLS = \
	github.com/mitchellh/gox \
	github.com/tcnksm/ghr \
	github.com/alecthomas/gometalinter
	github.com/tendermint/glide \
	# gopkg.in/alecthomas/gometalinter.v2
PACKAGES=$(shell go list ./... | grep -v '/vendor/')
BUILD_TAGS?=tendermint
TMHOME = $${TMHOME:-$$HOME/.tendermint}
BUILD_FLAGS = -ldflags "-X github.com/tendermint/tendermint/version.GitCommit=`git rev-parse --short=8 HEAD`"
BUILD_FLAGS = -ldflags "-X github.com/tendermint/tendermint/version.GitCommit=`git rev-parse --short HEAD`"

all: check build test install
all: get_vendor_deps install test
check: check_tools get_vendor_deps

install:
	CGO_ENABLED=0 go install $(BUILD_FLAGS) ./cmd/tendermint

########################################
### Build

build:
	CGO_ENABLED=0 go build $(BUILD_FLAGS) -o build/tendermint ./cmd/tendermint/
	go build $(BUILD_FLAGS) -tags '$(BUILD_TAGS)' -o build/tendermint ./cmd/tendermint/

build_race:
	CGO_ENABLED=0 go build -race $(BUILD_FLAGS) -o build/tendermint ./cmd/tendermint
	go build -race $(BUILD_FLAGS) -tags '$(BUILD_TAGS)' -o build/tendermint ./cmd/tendermint

install:
	go install $(BUILD_FLAGS) -tags '$(BUILD_TAGS)' ./cmd/tendermint

########################################
### Distribution

# dist builds binaries for all platforms and packages them for distribution
dist:
	@BUILD_TAGS='$(BUILD_TAGS)' sh -c "'$(CURDIR)/scripts/dist.sh'"

########################################
### Tools & dependencies

check_tools:
	@# https://stackoverflow.com/a/25668869
	@echo "Found tools: $(foreach tool,$(notdir $(GOTOOLS)),\
	$(if $(shell which $(tool)),$(tool),$(error "No $(tool) in PATH")))"

get_tools:
	@echo "--> Installing tools"
	go get -u -v $(GOTOOLS)
	# @gometalinter.v2 --install

update_tools:
	@echo "--> Updating tools"
	@go get -u $(GOTOOLS)

get_vendor_deps:
	@rm -rf vendor/
	@echo "--> Running glide install"
	@glide install

draw_deps:
	@# requires brew install graphviz or apt-get install graphviz
	go get github.com/RobotsAndPencils/goviz
	@goviz -i github.com/tendermint/tendermint/cmd/tendermint -d 3 | dot -Tpng -o dependency-graph.png

########################################
### Testing

test:
	@echo "--> Running linter"
	@make metalinter_test
	@echo "--> Running go test"
	@go test $(PACKAGES)
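The `BUILD_FLAGS` lines above stamp the binary with the current commit by letting the linker overwrite a string variable at build time. A minimal standalone sketch of the receiving side; the real variable lives in the repo's `version` package, so `main.GitCommit` here is only for illustration.

```
package main

import "fmt"

// GitCommit is empty in the source and injected at link time, e.g.:
//   go build -ldflags "-X main.GitCommit=`git rev-parse --short HEAD`"
var GitCommit string

func main() {
	if GitCommit == "" {
		fmt.Println("version: dev (no commit stamped)")
		return
	}
	fmt.Println("version:", GitCommit)
}
```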
@@ -43,55 +77,28 @@ test_release:
test100:
	@for i in {1..100}; do make test; done

draw_deps:
	# requires brew install graphviz or apt-get install graphviz
	go get github.com/RobotsAndPencils/goviz
	@goviz -i github.com/tendermint/tendermint/cmd/tendermint -d 3 | dot -Tpng -o dependency-graph.png
vagrant_test:
	vagrant up
	vagrant ssh -c 'make install'
	vagrant ssh -c 'make test_race'
	vagrant ssh -c 'make test_integrations'

list_deps:
	@go list -f '{{join .Deps "\n"}}' ./... | \
	grep -v /vendor/ | sort | uniq | \
	xargs go list -f '{{if not .Standard}}{{.ImportPath}}{{end}}'

get_deps:
	@echo "--> Running go get"
	@go get -v -d $(PACKAGES)
	@go list -f '{{join .TestImports "\n"}}' ./... | \
	grep -v /vendor/ | sort | uniq | \
	xargs go get -v -d

update_deps:
	@echo "--> Updating dependencies"
	@go get -d -u ./...

get_vendor_deps:
	@hash glide 2>/dev/null || go get github.com/Masterminds/glide
	@rm -rf vendor/
	@echo "--> Running glide install"
	@glide install

update_tools:
	@echo "--> Updating tools"
	@go get -u $(GOTOOLS)

tools:
	@echo "--> Installing tools"
	@go get $(GOTOOLS)
	@gometalinter --install

########################################
### Formatting, linting, and vetting

metalinter:
	@gometalinter --vendor --deadline=600s --enable-all --disable=lll ./...
fmt:
	@go fmt ./...

metalinter_test:
	@gometalinter --vendor --deadline=600s --disable-all \
metalinter:
	@echo "--> Running linter"
	gometalinter.v2 --vendor --deadline=600s --disable-all \
	--enable=deadcode \
	--enable=gosimple \
	--enable=misspell \
	--enable=safesql \
	./...

	# --enable=gas \
	#--enable=gas \
	#--enable=maligned \
	#--enable=dupl \
	#--enable=errcheck \

@@ -99,7 +106,6 @@ metalinter_test:
	#--enable=gocyclo \
	#--enable=goimports \
	#--enable=golint \ <== comments on anything exported
	#--enable=gosimple \
	#--enable=gotype \
	#--enable=ineffassign \
	#--enable=interfacer \

@@ -113,4 +119,12 @@ metalinter_test:
	#--enable=vet \
	#--enable=vetshadow \

.PHONY: install build build_race dist test test_race test_integrations test100 draw_deps list_deps get_deps get_vendor_deps update_deps update_tools tools test_release
metalinter_all:
	@echo "--> Running linter (all)"
	gometalinter.v2 --vendor --deadline=600s --enable-all --disable=lll ./...

# To avoid unintended conflicts with file names, always add to .PHONY
# unless there is a reason not to.
# https://www.gnu.org/software/make/manual/html_node/Phony-Targets.html
.PHONY: check build build_race dist install check_tools get_tools update_tools get_vendor_deps draw_deps test test_race test_integrations test_release test100 vagrant_test fmt metalinter metalinter_all
@@ -26,6 +26,12 @@ and securely replicates it on many machines.

For more information, from introduction to install to application development, [Read The Docs](https://tendermint.readthedocs.io/en/master/).

## Minimum requirements

Requirement|Notes
---|---
Go version | Go1.9 or higher

## Install

To download pre-built binaries, see our [downloads page](https://tendermint.com/downloads).
Vagrantfile (vendored): 43 changed lines

@@ -2,7 +2,7 @@
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.box = "ubuntu/xenial64"

  config.vm.provider "virtualbox" do |v|
    v.memory = 4096

@@ -10,30 +10,43 @@ Vagrant.configure("2") do |config|
  end

  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y --no-install-recommends wget curl jq shellcheck bsdmainutils psmisc
    # add docker repo
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
    add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable"

    wget -qO- https://get.docker.com/ | sh
    usermod -a -G docker vagrant
    # and golang 1.9 support
    # official repo doesn't have race detection runtime...
    # add-apt-repository ppa:gophers/archive
    add-apt-repository ppa:longsleep/golang-backports

    # install base requirements
    apt-get update
    apt-get install -y --no-install-recommends wget curl jq zip \
      make shellcheck bsdmainutils psmisc
    apt-get install -y docker-ce golang-1.9-go
    apt-get install -y language-pack-en

    # cleanup
    apt-get autoremove -y

    apt-get install -y --no-install-recommends git
    curl -O https://storage.googleapis.com/golang/go1.9.linux-amd64.tar.gz
    tar -xvf go1.9.linux-amd64.tar.gz
    rm -rf /usr/local/go
    mv go /usr/local
    rm -f go1.9.linux-amd64.tar.gz
    mkdir -p /home/vagrant/go/bin
    echo 'export PATH=$PATH:/usr/local/go/bin:/home/vagrant/go/bin' >> /home/vagrant/.bash_profile
    # needed for docker
    usermod -a -G docker vagrant

    # set env variables
    echo 'export PATH=$PATH:/usr/lib/go-1.9/bin:/home/vagrant/go/bin' >> /home/vagrant/.bash_profile
    echo 'export GOPATH=/home/vagrant/go' >> /home/vagrant/.bash_profile

    echo 'export LC_ALL=en_US.UTF-8' >> /home/vagrant/.bash_profile
    echo 'cd go/src/github.com/tendermint/tendermint' >> /home/vagrant/.bash_profile

    mkdir -p /home/vagrant/go/bin
    mkdir -p /home/vagrant/go/src/github.com/tendermint
    ln -s /vagrant /home/vagrant/go/src/github.com/tendermint/tendermint

    chown -R vagrant:vagrant /home/vagrant/go
    chown vagrant:vagrant /home/vagrant/.bash_profile

    su - vagrant -c 'cd /home/vagrant/go/src/github.com/tendermint/tendermint && make get_vendor_deps'
    # get all deps and tools, ready to install/test
    su - vagrant -c 'source /home/vagrant/.bash_profile'
    su - vagrant -c 'cd /home/vagrant/go/src/github.com/tendermint/tendermint && make get_tools && make get_vendor_deps'
  SHELL
end
appveyor.yml (new file): 13 lines

@@ -0,0 +1,13 @@
version: 1.0.{build}
configuration: Release
platform:
- x64
- x86
clone_folder: c:\go\path\src\github.com\tendermint\tendermint
before_build:
- cmd: set GOPATH=%GOROOT%\path
- cmd: set PATH=%GOPATH%\bin;%PATH%
- cmd: make get_vendor_deps
build_script:
- cmd: make test
test: off
@@ -51,7 +51,7 @@ tendermint node \
	--proxy_app dummy \
	--p2p.laddr tcp://127.0.0.1:56666 \
	--rpc.laddr tcp://127.0.0.1:56667 \
	--p2p.seeds 127.0.0.1:56656 \
	--p2p.persistent_peers 127.0.0.1:56656 \
	--log_level error &

# wait for node to start up so we only count time where we are actually syncing
@@ -16,11 +16,10 @@ func BenchmarkEncodeStatusWire(b *testing.B) {
	b.StopTimer()
	pubKey := crypto.GenPrivKeyEd25519().PubKey()
	status := &ctypes.ResultStatus{
		NodeInfo: &p2p.NodeInfo{
			PubKey: pubKey.Unwrap().(crypto.PubKeyEd25519),
		NodeInfo: p2p.NodeInfo{
			PubKey:     pubKey,
			Moniker:    "SOMENAME",
			Network:    "SOMENAME",
			RemoteAddr: "SOMEADDR",
			ListenAddr: "SOMEADDR",
			Version:    "SOMEVER",
			Other:      []string{"SOMESTRING", "OTHERSTRING"},

@@ -42,12 +41,11 @@ func BenchmarkEncodeStatusWire(b *testing.B) {

func BenchmarkEncodeNodeInfoWire(b *testing.B) {
	b.StopTimer()
	pubKey := crypto.GenPrivKeyEd25519().PubKey().Unwrap().(crypto.PubKeyEd25519)
	nodeInfo := &p2p.NodeInfo{
	pubKey := crypto.GenPrivKeyEd25519().PubKey()
	nodeInfo := p2p.NodeInfo{
		PubKey:     pubKey,
		Moniker:    "SOMENAME",
		Network:    "SOMENAME",
		RemoteAddr: "SOMEADDR",
		ListenAddr: "SOMEADDR",
		Version:    "SOMEVER",
		Other:      []string{"SOMESTRING", "OTHERSTRING"},

@@ -63,12 +61,11 @@ func BenchmarkEncodeNodeInfoWire(b *testing.B) {

func BenchmarkEncodeNodeInfoBinary(b *testing.B) {
	b.StopTimer()
	pubKey := crypto.GenPrivKeyEd25519().PubKey().Unwrap().(crypto.PubKeyEd25519)
	nodeInfo := &p2p.NodeInfo{
	pubKey := crypto.GenPrivKeyEd25519().PubKey()
	nodeInfo := p2p.NodeInfo{
		PubKey:     pubKey,
		Moniker:    "SOMENAME",
		Network:    "SOMENAME",
		RemoteAddr: "SOMEADDR",
		ListenAddr: "SOMEADDR",
		Version:    "SOMEVER",
		Other:      []string{"SOMESTRING", "OTHERSTRING"},

@@ -87,11 +84,10 @@ func BenchmarkEncodeNodeInfoProto(b *testing.B) {
	b.StopTimer()
	pubKey := crypto.GenPrivKeyEd25519().PubKey().Unwrap().(crypto.PubKeyEd25519)
	pubKey2 := &proto.PubKey{Ed25519: &proto.PubKeyEd25519{Bytes: pubKey[:]}}
	nodeInfo := &proto.NodeInfo{
	nodeInfo := proto.NodeInfo{
		PubKey:     pubKey2,
		Moniker:    "SOMENAME",
		Network:    "SOMENAME",
		RemoteAddr: "SOMEADDR",
		ListenAddr: "SOMEADDR",
		Version:    "SOMEVER",
		Other:      []string{"SOMESTRING", "OTHERSTRING"},
@@ -1,18 +1,20 @@
package blockchain

import (
	"fmt"
	"math"
	"sync"
	"time"

	"github.com/tendermint/tendermint/types"
	cmn "github.com/tendermint/tmlibs/common"
	flow "github.com/tendermint/tmlibs/flowrate"
	"github.com/tendermint/tmlibs/log"

	"github.com/tendermint/tendermint/p2p"
	"github.com/tendermint/tendermint/types"
)

/*
eg, L = latency = 0.1s
P = num peers = 10
FN = num full nodes

@@ -22,7 +24,6 @@ eg, L = latency = 0.1s
B/S = CB/P/BS = 12.8 blocks/s

12.8 * 0.1 = 1.28 blocks on conn
*/

const (

@@ -30,7 +31,14 @@ const (
	maxTotalRequesters        = 1000
	maxPendingRequests        = maxTotalRequesters
	maxPendingRequestsPerPeer = 50
	minRecvRate               = 10240 // 10Kb/s

	// Minimum recv rate to ensure we're receiving blocks from a peer fast
	// enough. If a peer is not sending us data at at least that rate, we
	// consider them to have timed out and we disconnect.
	//
	// Assuming a DSL connection (not a good choice) 128 Kbps (upload) ~ 15 KB/s,
	// sending data across atlantic ~ 7.5 KB/s.
	minRecvRate = 7680
)

var peerTimeoutSeconds = time.Duration(15) // not const so we can override with tests
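The new comment block explains `minRecvRate` as a receive-rate floor, and `removeTimedoutPeers` (next hunk) disconnects peers that fall below it. A self-contained sketch of that policy, measuring the rate by hand instead of with `tmlibs/flowrate`; the meter type is an assumption for illustration.

```
package main

import (
	"fmt"
	"time"
)

const minRecvRate = 7680 // bytes/sec, the same floor as above

// peerMeter tracks how many bytes a peer has sent us since start.
type peerMeter struct {
	start time.Time
	bytes int64
}

func (m *peerMeter) record(n int) { m.bytes += int64(n) }

// curRate is bytes/sec averaged over the life of the meter.
func (m *peerMeter) curRate() int64 {
	elapsed := time.Since(m.start).Seconds()
	if elapsed == 0 {
		return 0
	}
	return int64(float64(m.bytes) / elapsed)
}

// timedOut mirrors the check above: a rate of 0 is ignored because
// the monitor can legitimately report 0 right after start.
func timedOut(m *peerMeter) bool {
	r := m.curRate()
	return r != 0 && r < minRecvRate
}

func main() {
	m := &peerMeter{start: time.Now().Add(-10 * time.Second)}
	m.record(20 * 1024) // 20 KB over ~10s is ~2 KB/s, below the floor
	fmt.Println("rate:", m.curRate(), "B/s, timed out:", timedOut(m))
}
```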
@@ -56,16 +64,16 @@ type BlockPool struct {
	height     int64 // the lowest key in requesters.
	numPending int32 // number of requests pending assignment or block response

	// peers
	peers map[string]*bpPeer
	peers map[p2p.ID]*bpPeer

	maxPeerHeight int64

	requestsCh chan<- BlockRequest
	timeoutsCh chan<- string
	timeoutsCh chan<- p2p.ID
}

func NewBlockPool(start int64, requestsCh chan<- BlockRequest, timeoutsCh chan<- string) *BlockPool {
func NewBlockPool(start int64, requestsCh chan<- BlockRequest, timeoutsCh chan<- p2p.ID) *BlockPool {
	bp := &BlockPool{
		peers: make(map[string]*bpPeer),
		peers: make(map[p2p.ID]*bpPeer),

		requesters: make(map[int64]*bpRequester),
		height:     start,

@@ -88,7 +96,6 @@ func (pool *BlockPool) OnStop() {}

// Run spawns requesters as needed.
func (pool *BlockPool) makeRequestersRoutine() {
	for {
		if !pool.IsRunning() {
			break

@@ -119,10 +126,13 @@ func (pool *BlockPool) removeTimedoutPeers() {
	for _, peer := range pool.peers {
		if !peer.didTimeout && peer.numPending > 0 {
			curRate := peer.recvMonitor.Status().CurRate
			// XXX remove curRate != 0
			// curRate can be 0 on start
			if curRate != 0 && curRate < minRecvRate {
				pool.sendTimeout(peer.id)
				pool.Logger.Error("SendTimeout", "peer", peer.id, "reason", "curRate too low")
				pool.Logger.Error("SendTimeout", "peer", peer.id,
					"reason", "peer is not sending us data fast enough",
					"curRate", fmt.Sprintf("%d KB/s", curRate/1024),
					"minRate", fmt.Sprintf("%d KB/s", minRecvRate/1024))
				peer.didTimeout = true
			}
		}

@@ -195,7 +205,8 @@ func (pool *BlockPool) PopRequest() {

// Invalidates the block at pool.height,
// Remove the peer and redo request from others.
func (pool *BlockPool) RedoRequest(height int64) {
// Returns the ID of the removed peer.
func (pool *BlockPool) RedoRequest(height int64) p2p.ID {
	pool.mtx.Lock()
	defer pool.mtx.Unlock()

@@ -205,12 +216,12 @@ func (pool *BlockPool) RedoRequest(height int64) {
		cmn.PanicSanity("Expected block to be non-nil")
	}
	// RemovePeer will redo all requesters associated with this peer.
	// TODO: record this malfeasance
	pool.removePeer(request.peerID)
	return request.peerID
}

// TODO: ensure that blocks come in order for each peer.
func (pool *BlockPool) AddBlock(peerID string, block *types.Block, blockSize int) {
func (pool *BlockPool) AddBlock(peerID p2p.ID, block *types.Block, blockSize int) {
	pool.mtx.Lock()
	defer pool.mtx.Unlock()

@@ -240,7 +251,7 @@ func (pool *BlockPool) MaxPeerHeight() int64 {
}

// Sets the peer's alleged blockchain height.
func (pool *BlockPool) SetPeerHeight(peerID string, height int64) {
func (pool *BlockPool) SetPeerHeight(peerID p2p.ID, height int64) {
	pool.mtx.Lock()
	defer pool.mtx.Unlock()

@@ -258,14 +269,14 @@ func (pool *BlockPool) SetPeerHeight(peerID string, height int64) {
	}
}

func (pool *BlockPool) RemovePeer(peerID string) {
func (pool *BlockPool) RemovePeer(peerID p2p.ID) {
	pool.mtx.Lock()
	defer pool.mtx.Unlock()

	pool.removePeer(peerID)
}

func (pool *BlockPool) removePeer(peerID string) {
func (pool *BlockPool) removePeer(peerID p2p.ID) {
	for _, requester := range pool.requesters {
		if requester.getPeerID() == peerID {
			if requester.getBlock() != nil {

@@ -321,14 +332,14 @@ func (pool *BlockPool) requestersLen() int64 {
	return int64(len(pool.requesters))
}

func (pool *BlockPool) sendRequest(height int64, peerID string) {
func (pool *BlockPool) sendRequest(height int64, peerID p2p.ID) {
	if !pool.IsRunning() {
		return
	}
	pool.requestsCh <- BlockRequest{height, peerID}
}

func (pool *BlockPool) sendTimeout(peerID string) {
func (pool *BlockPool) sendTimeout(peerID p2p.ID) {
	if !pool.IsRunning() {
		return
	}

@@ -357,7 +368,7 @@ func (pool *BlockPool) debug() string {

type bpPeer struct {
	pool        *BlockPool
	id          string
	id          p2p.ID
	recvMonitor *flow.Monitor

	height int64

@@ -368,7 +379,7 @@ type bpPeer struct {
	logger log.Logger
}

func newBPPeer(pool *BlockPool, peerID string, height int64) *bpPeer {
func newBPPeer(pool *BlockPool, peerID p2p.ID, height int64) *bpPeer {
	peer := &bpPeer{
		pool: pool,
		id:   peerID,

@@ -434,7 +445,7 @@ type bpRequester struct {
	redoCh chan struct{}

	mtx    sync.Mutex
	peerID string
	peerID p2p.ID
	block  *types.Block
}

@@ -458,7 +469,7 @@ func (bpr *bpRequester) OnStart() error {
}

// Returns true if the peer matches
func (bpr *bpRequester) setBlock(block *types.Block, peerID string) bool {
func (bpr *bpRequester) setBlock(block *types.Block, peerID p2p.ID) bool {
	bpr.mtx.Lock()
	if bpr.block != nil || bpr.peerID != peerID {
		bpr.mtx.Unlock()

@@ -477,7 +488,7 @@ func (bpr *bpRequester) getBlock() *types.Block {
	return bpr.block
}

func (bpr *bpRequester) getPeerID() string {
func (bpr *bpRequester) getPeerID() p2p.ID {
	bpr.mtx.Lock()
	defer bpr.mtx.Unlock()
	return bpr.peerID

@@ -502,7 +513,7 @@ func (bpr *bpRequester) requestRoutine() {
OUTER_LOOP:
	for {
		// Pick a peer to send request to.
		var peer *bpPeer = nil
		var peer *bpPeer
	PICK_PEER_LOOP:
		for {
			if !bpr.IsRunning() || !bpr.pool.IsRunning() {

@@ -523,10 +534,10 @@ OUTER_LOOP:
		// Send request and wait.
		bpr.pool.sendRequest(bpr.height, peer.id)
		select {
		case <-bpr.pool.Quit:
		case <-bpr.pool.Quit():
			bpr.Stop()
			return
		case <-bpr.Quit:
		case <-bpr.Quit():
			return
		case <-bpr.redoCh:
			bpr.reset()

@@ -534,10 +545,10 @@ OUTER_LOOP:
		case <-bpr.gotBlockCh:
			// We got the block, now see if it's good.
			select {
			case <-bpr.pool.Quit:
			case <-bpr.pool.Quit():
				bpr.Stop()
				return
			case <-bpr.Quit:
			case <-bpr.Quit():
				return
			case <-bpr.redoCh:
				bpr.reset()

@@ -551,5 +562,5 @@ OUTER_LOOP:

type BlockRequest struct {
	Height int64
	PeerID string
	PeerID p2p.ID
}
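Throughout this file, `<-bpr.Quit` becomes `<-bpr.Quit()`: the service's quit channel is now reached through an accessor rather than a public field. The pattern in isolation, assuming nothing about the real `tmlibs` BaseService beyond what the diff shows:

```
package main

import "fmt"

// service hides its quit channel behind an accessor so callers can
// only receive from it; the field itself stays unexported.
type service struct {
	quit chan struct{}
}

func newService() *service { return &service{quit: make(chan struct{})} }

// Quit returns a receive-only view of the quit channel.
func (s *service) Quit() <-chan struct{} { return s.quit }

// Stop closes the channel, waking every goroutine selecting on Quit().
func (s *service) Stop() { close(s.quit) }

func main() {
	s := newService()
	done := make(chan struct{})
	go func() {
		<-s.Quit() // blocks until Stop
		fmt.Println("worker: shutting down")
		close(done)
	}()
	s.Stop()
	<-done
}
```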
@@ -5,9 +5,11 @@ import (
	"testing"
	"time"

	"github.com/tendermint/tendermint/types"
	cmn "github.com/tendermint/tmlibs/common"
	"github.com/tendermint/tmlibs/log"

	"github.com/tendermint/tendermint/p2p"
	"github.com/tendermint/tendermint/types"
)

func init() {

@@ -15,14 +17,14 @@ func init() {
}

type testPeer struct {
	id     string
	id     p2p.ID
	height int64
}

func makePeers(numPeers int, minHeight, maxHeight int64) map[string]testPeer {
	peers := make(map[string]testPeer, numPeers)
func makePeers(numPeers int, minHeight, maxHeight int64) map[p2p.ID]testPeer {
	peers := make(map[p2p.ID]testPeer, numPeers)
	for i := 0; i < numPeers; i++ {
		peerID := cmn.RandStr(12)
		peerID := p2p.ID(cmn.RandStr(12))
		height := minHeight + rand.Int63n(maxHeight-minHeight)
		peers[peerID] = testPeer{peerID, height}
	}

@@ -32,7 +34,7 @@ func makePeers(numPeers int, minHeight, maxHeight int64) map[string]testPeer {
func TestBasic(t *testing.T) {
	start := int64(42)
	peers := makePeers(10, start+1, 1000)
	timeoutsCh := make(chan string, 100)
	timeoutsCh := make(chan p2p.ID, 100)
	requestsCh := make(chan BlockRequest, 100)
	pool := NewBlockPool(start, requestsCh, timeoutsCh)
	pool.SetLogger(log.TestingLogger())

@@ -89,7 +91,7 @@ func TestBasic(t *testing.T) {
func TestTimeout(t *testing.T) {
	start := int64(42)
	peers := makePeers(10, start+1, 1000)
	timeoutsCh := make(chan string, 100)
	timeoutsCh := make(chan p2p.ID, 100)
	requestsCh := make(chan BlockRequest, 100)
	pool := NewBlockPool(start, requestsCh, timeoutsCh)
	pool.SetLogger(log.TestingLogger())

@@ -127,7 +129,7 @@ func TestTimeout(t *testing.T) {

	// Pull from channels
	counter := 0
	timedOut := map[string]struct{}{}
	timedOut := map[p2p.ID]struct{}{}
	for {
		select {
		case peerID := <-timeoutsCh:
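These tests swap bare `string` peer keys for `p2p.ID`. A defined string type costs nothing at runtime but stops IDs and arbitrary strings from mixing silently; a tiny sketch:

```
package main

import "fmt"

// ID is a distinct string type, as in the p2p package.
type ID string

type peerInfo struct {
	height int64
}

func main() {
	peers := make(map[ID]peerInfo)
	peers[ID("deadbeefcafe")] = peerInfo{height: 42}

	// An explicit conversion is required to use a plain string:
	var raw string = "deadbeefcafe"
	// peers[raw] would not compile: cannot use raw (string) as ID.
	fmt.Println(peers[ID(raw)].height)
}
```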
@@ -3,16 +3,19 @@ package blockchain
import (
	"bytes"
	"errors"
	"fmt"
	"reflect"
	"sync"
	"time"

	wire "github.com/tendermint/go-wire"
	"github.com/tendermint/tendermint/p2p"
	"github.com/tendermint/tendermint/proxy"
	sm "github.com/tendermint/tendermint/state"
	"github.com/tendermint/tendermint/types"

	cmn "github.com/tendermint/tmlibs/common"
	"github.com/tendermint/tmlibs/log"

	"github.com/tendermint/tendermint/p2p"
	sm "github.com/tendermint/tendermint/state"
	"github.com/tendermint/tendermint/types"
)

const (

@@ -34,39 +37,48 @@ const (
type consensusReactor interface {
	// for when we switch from blockchain reactor and fast sync to
	// the consensus machine
	SwitchToConsensus(*sm.State, int)
	SwitchToConsensus(sm.State, int)
}

// BlockchainReactor handles long-term catchup syncing.
type BlockchainReactor struct {
	p2p.BaseReactor

	state        *sm.State
	proxyAppConn proxy.AppConnConsensus // same as consensus.proxyAppConn
	store        *BlockStore
	pool         *BlockPool
	fastSync     bool
	requestsCh   chan BlockRequest
	timeoutsCh   chan string
	mtx    sync.Mutex
	params types.ConsensusParams

	eventBus *types.EventBus
	// immutable
	initialState sm.State

	blockExec *sm.BlockExecutor
	store     *BlockStore
	pool      *BlockPool
	fastSync  bool

	requestsCh <-chan BlockRequest
	timeoutsCh <-chan p2p.ID
}

// NewBlockchainReactor returns new reactor instance.
func NewBlockchainReactor(state *sm.State, proxyAppConn proxy.AppConnConsensus, store *BlockStore, fastSync bool) *BlockchainReactor {
func NewBlockchainReactor(state sm.State, blockExec *sm.BlockExecutor, store *BlockStore,
	fastSync bool) *BlockchainReactor {

	if state.LastBlockHeight != store.Height() {
		cmn.PanicSanity(cmn.Fmt("state (%v) and store (%v) height mismatch", state.LastBlockHeight, store.Height()))
		cmn.PanicSanity(cmn.Fmt("state (%v) and store (%v) height mismatch", state.LastBlockHeight,
			store.Height()))
	}

	requestsCh := make(chan BlockRequest, defaultChannelCapacity)
	timeoutsCh := make(chan string, defaultChannelCapacity)
	timeoutsCh := make(chan p2p.ID, defaultChannelCapacity)
	pool := NewBlockPool(
		store.Height()+1,
		requestsCh,
		timeoutsCh,
	)
	bcR := &BlockchainReactor{
		state:        state,
		proxyAppConn: proxyAppConn,
		params:       state.ConsensusParams,
		initialState: state,
		blockExec:    blockExec,
		store:        store,
		pool:         pool,
		fastSync:     fastSync,

@@ -117,7 +129,8 @@ func (bcR *BlockchainReactor) GetChannels() []*p2p.ChannelDescriptor {

// AddPeer implements Reactor by sending our state to peer.
func (bcR *BlockchainReactor) AddPeer(peer p2p.Peer) {
	if !peer.Send(BlockchainChannel, struct{ BlockchainMessage }{&bcStatusResponseMessage{bcR.store.Height()}}) {
	if !peer.Send(BlockchainChannel,
		struct{ BlockchainMessage }{&bcStatusResponseMessage{bcR.store.Height()}}) {
		// doing nothing, will try later in `poolRoutine`
	}
	// peer is added to the pool once we receive the first

@@ -126,14 +139,16 @@ func (bcR *BlockchainReactor) AddPeer(peer p2p.Peer) {

// RemovePeer implements Reactor by removing peer from the pool.
func (bcR *BlockchainReactor) RemovePeer(peer p2p.Peer, reason interface{}) {
	bcR.pool.RemovePeer(peer.Key())
	bcR.pool.RemovePeer(peer.ID())
}

// respondToPeer loads a block and sends it to the requesting peer,
// if we have it. Otherwise, we'll respond saying we don't have it.
// According to the Tendermint spec, if all nodes are honest,
// no node should be requesting a block that's non-existent.
func (bcR *BlockchainReactor) respondToPeer(msg *bcBlockRequestMessage, src p2p.Peer) (queued bool) {
func (bcR *BlockchainReactor) respondToPeer(msg *bcBlockRequestMessage,
	src p2p.Peer) (queued bool) {

	block := bcR.store.LoadBlock(msg.Height)
	if block != nil {
		msg := &bcBlockResponseMessage{Block: block}

@@ -157,7 +172,6 @@ func (bcR *BlockchainReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte)

	bcR.Logger.Debug("Receive", "src", src, "chID", chID, "msg", msg)

	// TODO: improve logic to satisfy megacheck
	switch msg := msg.(type) {
	case *bcBlockRequestMessage:
		if queued := bcR.respondToPeer(msg, src); !queued {

@@ -165,16 +179,17 @@
		}
	case *bcBlockResponseMessage:
		// Got a block.
		bcR.pool.AddBlock(src.Key(), msg.Block, len(msgBytes))
		bcR.pool.AddBlock(src.ID(), msg.Block, len(msgBytes))
	case *bcStatusRequestMessage:
		// Send peer our state.
		queued := src.TrySend(BlockchainChannel, struct{ BlockchainMessage }{&bcStatusResponseMessage{bcR.store.Height()}})
		queued := src.TrySend(BlockchainChannel,
			struct{ BlockchainMessage }{&bcStatusResponseMessage{bcR.store.Height()}})
		if !queued {
			// sorry
		}
	case *bcStatusResponseMessage:
		// Got a peer status. Unverified.
		bcR.pool.SetPeerHeight(src.Key(), msg.Height)
		bcR.pool.SetPeerHeight(src.ID(), msg.Height)
	default:
		bcR.Logger.Error(cmn.Fmt("Unknown message type %v", reflect.TypeOf(msg)))
	}

@@ -183,7 +198,16 @@
// maxMsgSize returns the maximum allowable size of a
// message on the blockchain reactor.
func (bcR *BlockchainReactor) maxMsgSize() int {
	return bcR.state.Params.BlockSizeParams.MaxBytes + 2
	bcR.mtx.Lock()
	defer bcR.mtx.Unlock()
	return bcR.params.BlockSize.MaxBytes + 2
}

// updateConsensusParams updates the internal consensus params
func (bcR *BlockchainReactor) updateConsensusParams(params types.ConsensusParams) {
	bcR.mtx.Lock()
	defer bcR.mtx.Unlock()
	bcR.params = params
}

// Handle messages from the poolReactor telling the reactor what to do.

@@ -197,7 +221,8 @@ func (bcR *BlockchainReactor) poolRoutine() {

	blocksSynced := 0

	chainID := bcR.state.ChainID
	chainID := bcR.initialState.ChainID
	state := bcR.initialState

	lastHundred := time.Now()
	lastRate := 0.0

@@ -236,7 +261,7 @@ FOR_LOOP:
			bcR.pool.Stop()

			conR := bcR.Switch.Reactor("CONSENSUS").(consensusReactor)
			conR.SwitchToConsensus(bcR.state, blocksSynced)
			conR.SwitchToConsensus(state, blocksSynced)

			break FOR_LOOP
		}

@@ -251,33 +276,42 @@ FOR_LOOP:
				// We need both to sync the first block.
				break SYNC_LOOP
			}
			firstParts := first.MakePartSet(bcR.state.Params.BlockPartSizeBytes)
			firstParts := first.MakePartSet(state.ConsensusParams.BlockPartSizeBytes)
			firstPartsHeader := firstParts.Header()
			firstID := types.BlockID{first.Hash(), firstPartsHeader}
			// Finally, verify the first block using the second's commit
			// NOTE: we can probably make this more efficient, but note that calling
			// first.Hash() doesn't verify the tx contents, so MakePartSet() is
			// currently necessary.
			err := bcR.state.Validators.VerifyCommit(
				chainID, types.BlockID{first.Hash(), firstPartsHeader}, first.Height, second.LastCommit)
			err := state.Validators.VerifyCommit(
				chainID, firstID, first.Height, second.LastCommit)
			if err != nil {
				bcR.Logger.Error("Error in validation", "err", err)
				bcR.pool.RedoRequest(first.Height)
				peerID := bcR.pool.RedoRequest(first.Height)
				peer := bcR.Switch.Peers().Get(peerID)
				if peer != nil {
					bcR.Switch.StopPeerForError(peer, fmt.Errorf("BlockchainReactor validation error: %v", err))
				}
				break SYNC_LOOP
			} else {
				bcR.pool.PopRequest()

				// TODO: batch saves so we don't persist to disk every block
				bcR.store.SaveBlock(first, firstParts, second.LastCommit)

				// TODO: should we be firing events? need to fire NewBlock events manually ...
				// NOTE: we could improve performance if we
				// didn't make the app commit to disk every block
				// ... but we would need a way to get the hash without it persisting
				err := bcR.state.ApplyBlock(bcR.eventBus, bcR.proxyAppConn, first, firstPartsHeader, types.MockMempool{})
				// TODO: same thing for app - but we would need a way to
				// get the hash without persisting the state
				var err error
				state, err = bcR.blockExec.ApplyBlock(state, firstID, first)
				if err != nil {
					// TODO This is bad, are we zombie?
					cmn.PanicQ(cmn.Fmt("Failed to process committed block (%d:%X): %v", first.Height, first.Hash(), err))
					cmn.PanicQ(cmn.Fmt("Failed to process committed block (%d:%X): %v",
						first.Height, first.Hash(), err))
				}
				blocksSynced += 1
				blocksSynced++

				// update the consensus params
				bcR.updateConsensusParams(state.ConsensusParams)

				if blocksSynced%100 == 0 {
					lastRate = 0.9*lastRate + 0.1*(100/time.Since(lastHundred).Seconds())

@@ -288,7 +322,7 @@ FOR_LOOP:
				}
			}
			continue FOR_LOOP
		case <-bcR.Quit:
		case <-bcR.Quit():
			break FOR_LOOP
		}
	}

@@ -296,15 +330,11 @@ FOR_LOOP:

// BroadcastStatusRequest broadcasts `BlockStore` height.
func (bcR *BlockchainReactor) BroadcastStatusRequest() error {
	bcR.Switch.Broadcast(BlockchainChannel, struct{ BlockchainMessage }{&bcStatusRequestMessage{bcR.store.Height()}})
	bcR.Switch.Broadcast(BlockchainChannel,
		struct{ BlockchainMessage }{&bcStatusRequestMessage{bcR.store.Height()}})
	return nil
}

// SetEventBus sets event bus.
func (bcR *BlockchainReactor) SetEventBus(b *types.EventBus) {
	bcR.eventBus = b
}

//-----------------------------------------------------------------------------
// Messages
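Because `poolRoutine` now updates the consensus params after every applied block while `maxMsgSize` can be called from other goroutines, both go through the reactor's new mutex, as the hunks above show. The pattern in isolation (type and field names here are illustrative):

```
package main

import (
	"fmt"
	"sync"
)

type consensusParams struct {
	maxBlockBytes int
}

type reactor struct {
	mtx    sync.Mutex
	params consensusParams
}

// maxMsgSize reads the params under the lock, like the diff's version.
func (r *reactor) maxMsgSize() int {
	r.mtx.Lock()
	defer r.mtx.Unlock()
	return r.params.maxBlockBytes + 2
}

// updateConsensusParams replaces the params under the same lock.
func (r *reactor) updateConsensusParams(p consensusParams) {
	r.mtx.Lock()
	defer r.mtx.Unlock()
	r.params = p
}

func main() {
	r := &reactor{params: consensusParams{maxBlockBytes: 1024}}
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); r.updateConsensusParams(consensusParams{maxBlockBytes: 2048}) }()
	go func() { defer wg.Done(); _ = r.maxMsgSize() }()
	wg.Wait()
	fmt.Println("max msg size:", r.maxMsgSize())
}
```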
@@ -4,30 +4,35 @@ import (
	"testing"

	wire "github.com/tendermint/go-wire"

	cmn "github.com/tendermint/tmlibs/common"
	dbm "github.com/tendermint/tmlibs/db"
	"github.com/tendermint/tmlibs/log"

	cfg "github.com/tendermint/tendermint/config"
	"github.com/tendermint/tendermint/p2p"
	"github.com/tendermint/tendermint/proxy"
	sm "github.com/tendermint/tendermint/state"
	"github.com/tendermint/tendermint/types"
)

func newBlockchainReactor(maxBlockHeight int64) *BlockchainReactor {
	logger := log.TestingLogger()
func makeStateAndBlockStore(logger log.Logger) (sm.State, *BlockStore) {
	config := cfg.ResetTestRoot("blockchain_reactor_test")

	blockStore := NewBlockStore(dbm.NewMemDB())
	state, _ := sm.LoadStateFromDBOrGenesisFile(dbm.NewMemDB(), config.GenesisFile())
	return state, blockStore
}

	// Get State
	state, _ := sm.GetState(dbm.NewMemDB(), config.GenesisFile())
	state.SetLogger(logger.With("module", "state"))
	state.Save()
func newBlockchainReactor(logger log.Logger, maxBlockHeight int64) *BlockchainReactor {
	state, blockStore := makeStateAndBlockStore(logger)

	// Make the blockchainReactor itself
	fastSync := true
	bcReactor := NewBlockchainReactor(state.Copy(), nil, blockStore, fastSync)
	var nilApp proxy.AppConnConsensus
	blockExec := sm.NewBlockExecutor(dbm.NewMemDB(), log.TestingLogger(), nilApp,
		types.MockMempool{}, types.MockEvidencePool{})

	bcReactor := NewBlockchainReactor(state.Copy(), blockExec, blockStore, fastSync)
	bcReactor.SetLogger(logger.With("module", "blockchain"))

	// Next: we need to set a switch in order for peers to be added in

@@ -37,22 +42,22 @@ func newBlockchainReactor(maxBlockHeight int64) *BlockchainReactor {
	for blockHeight := int64(1); blockHeight <= maxBlockHeight; blockHeight++ {
		firstBlock := makeBlock(blockHeight, state)
		secondBlock := makeBlock(blockHeight+1, state)
		firstParts := firstBlock.MakePartSet(state.Params.BlockGossipParams.BlockPartSizeBytes)
		firstParts := firstBlock.MakePartSet(state.ConsensusParams.BlockGossip.BlockPartSizeBytes)
		blockStore.SaveBlock(firstBlock, firstParts, secondBlock.LastCommit)
	}

	return bcReactor
}

func TestNoBlockMessageResponse(t *testing.T) {
func TestNoBlockResponse(t *testing.T) {
	maxBlockHeight := int64(20)

	bcr := newBlockchainReactor(maxBlockHeight)
	bcr := newBlockchainReactor(log.TestingLogger(), maxBlockHeight)
	bcr.Start()
	defer bcr.Stop()

	// Add some peers in
	peer := newbcrTestPeer(cmn.RandStr(12))
	peer := newbcrTestPeer(p2p.ID(cmn.RandStr(12)))
	bcr.AddPeer(peer)

	chID := byte(0x01)

@@ -67,6 +72,8 @@ func TestNoBlockMessageResponse(t *testing.T) {
		{100, false},
	}

	// receive a request message from peer,
	// wait for our response to be received on the peer
	for _, tt := range tests {
		reqBlockMsg := &bcBlockRequestMessage{tt.height}
		reqBlockBytes := wire.BinaryBytes(struct{ BlockchainMessage }{reqBlockMsg})

@@ -90,6 +97,49 @@
	}
}

/*
// NOTE: This is too hard to test without
// an easy way to add test peer to switch
// or without significant refactoring of the module.
// Alternatively we could actually dial a TCP conn but
// that seems extreme.
func TestBadBlockStopsPeer(t *testing.T) {
	maxBlockHeight := int64(20)

	bcr := newBlockchainReactor(log.TestingLogger(), maxBlockHeight)
	bcr.Start()
	defer bcr.Stop()

	// Add some peers in
	peer := newbcrTestPeer(p2p.ID(cmn.RandStr(12)))

	// XXX: This doesn't add the peer to anything,
	// so it's hard to check that it's later removed
	bcr.AddPeer(peer)
	assert.True(t, bcr.Switch.Peers().Size() > 0)

	// send a bad block from the peer
	// default blocks already dont have commits, so should fail
	block := bcr.store.LoadBlock(3)
	msg := &bcBlockResponseMessage{Block: block}
	peer.Send(BlockchainChannel, struct{ BlockchainMessage }{msg})

	ticker := time.NewTicker(time.Millisecond * 10)
	timer := time.NewTimer(time.Second * 2)
LOOP:
	for {
		select {
		case <-ticker.C:
			if bcr.Switch.Peers().Size() == 0 {
				break LOOP
			}
		case <-timer.C:
			t.Fatal("Timed out waiting to disconnect peer")
		}
	}
}
*/

//----------------------------------------------
// utility funcs

@@ -100,37 +150,34 @@ func makeTxs(height int64) (txs []types.Tx) {
	return txs
}

func makeBlock(height int64, state *sm.State) *types.Block {
	prevHash := state.LastBlockID.Hash
	prevParts := types.PartSetHeader{}
	valHash := state.Validators.Hash()
	prevBlockID := types.BlockID{prevHash, prevParts}
	block, _ := types.MakeBlock(height, "test_chain", makeTxs(height),
		new(types.Commit), prevBlockID, valHash, state.AppHash, state.Params.BlockGossipParams.BlockPartSizeBytes)
func makeBlock(height int64, state sm.State) *types.Block {
	block, _ := state.MakeBlock(height, makeTxs(height), new(types.Commit))
	return block
}

// The Test peer
type bcrTestPeer struct {
	cmn.Service
	key string
	ch  chan interface{}
	cmn.BaseService
	id p2p.ID
	ch chan interface{}
}

var _ p2p.Peer = (*bcrTestPeer)(nil)

func newbcrTestPeer(key string) *bcrTestPeer {
	return &bcrTestPeer{
		Service: cmn.NewBaseService(nil, "bcrTestPeer", nil),
		key:     key,
		ch:      make(chan interface{}, 2),
func newbcrTestPeer(id p2p.ID) *bcrTestPeer {
	bcr := &bcrTestPeer{
		id: id,
		ch: make(chan interface{}, 2),
	}
	bcr.BaseService = *cmn.NewBaseService(nil, "bcrTestPeer", bcr)
	return bcr
}

func (tp *bcrTestPeer) lastValue() interface{} { return <-tp.ch }

func (tp *bcrTestPeer) TrySend(chID byte, value interface{}) bool {
	if _, ok := value.(struct{ BlockchainMessage }).BlockchainMessage.(*bcStatusResponseMessage); ok {
	if _, ok := value.(struct{ BlockchainMessage }).
		BlockchainMessage.(*bcStatusResponseMessage); ok {
		// Discard status response messages since they skew our results
		// We only want to deal with:
		// + bcBlockResponseMessage

@@ -142,9 +189,9 @@ func (tp *bcrTestPeer) TrySend(chID byte, value interface{}) bool {
}

func (tp *bcrTestPeer) Send(chID byte, data interface{}) bool { return tp.TrySend(chID, data) }
func (tp *bcrTestPeer) NodeInfo() *p2p.NodeInfo { return nil }
func (tp *bcrTestPeer) NodeInfo() p2p.NodeInfo { return p2p.NodeInfo{} }
func (tp *bcrTestPeer) Status() p2p.ConnectionStatus { return p2p.ConnectionStatus{} }
func (tp *bcrTestPeer) Key() string { return tp.key }
func (tp *bcrTestPeer) ID() p2p.ID { return tp.id }
func (tp *bcrTestPeer) IsOutbound() bool { return false }
func (tp *bcrTestPeer) IsPersistent() bool { return true }
func (tp *bcrTestPeer) Get(s string) interface{} { return s }
@@ -8,13 +8,15 @@ import (
|
||||
"sync"
|
||||
|
||||
wire "github.com/tendermint/go-wire"
|
||||
"github.com/tendermint/tendermint/types"
|
||||
|
||||
cmn "github.com/tendermint/tmlibs/common"
|
||||
dbm "github.com/tendermint/tmlibs/db"
|
||||
|
||||
"github.com/tendermint/tendermint/types"
|
||||
)
|
||||
|
||||
/*
|
||||
Simple low level store for blocks.
|
||||
BlockStore is a simple low level store for blocks.
|
||||
|
||||
There are three types of information stored:
|
||||
- BlockMeta: Meta information about each block
|
||||
@@ -23,7 +25,7 @@ There are three types of information stored:
|
||||
|
||||
Currently the precommit signatures are duplicated in the Block parts as
|
||||
well as the Commit. In the future this may change, perhaps by moving
|
||||
the Commit data outside the Block.
|
||||
the Commit data outside the Block. (TODO)
|
||||
|
||||
// NOTE: BlockStore methods will panic if they encounter errors
|
||||
// deserializing loaded data, indicating probable corruption on disk.
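
The comment above is effectively the whole contract of the store. As an aside, here is a minimal usage sketch of that API (not part of the diff; the function name is made up, and it assumes a block, its part set, and a seen commit built the way store_test.go below builds them):

// Sketch only: exercises the BlockStore API documented above.
func blockStoreUsageSketch(block *types.Block, parts *types.PartSet, seenCommit *types.Commit) {
	// memdb-backed store, as in the tests below
	bs := NewBlockStore(dbm.NewMemDB())

	// Panics unless block.Height == bs.Height()+1 and parts is complete.
	bs.SaveBlock(block, parts, seenCommit)

	_ = bs.LoadBlock(block.Height)           // the block itself; nil if absent
	_ = bs.LoadBlockMeta(block.Height)       // its BlockMeta
	_ = bs.LoadBlockPart(block.Height, 0)    // a single gossipable part
	_ = bs.LoadSeenCommit(block.Height)      // the commit we saw locally for this height
	_ = bs.LoadBlockCommit(block.Height - 1) // the LastCommit carried inside this block
}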
@@ -35,6 +37,8 @@ type BlockStore struct {
	height int64
}

+// NewBlockStore returns a new BlockStore with the given DB,
+// initialized to the last height that was committed to the DB.
func NewBlockStore(db dbm.DB) *BlockStore {
	bsjson := LoadBlockStoreStateJSON(db)
	return &BlockStore{

@@ -43,13 +47,16 @@ func NewBlockStore(db dbm.DB) *BlockStore {
	}
}

-// Height() returns the last known contiguous block height.
+// Height returns the last known contiguous block height.
func (bs *BlockStore) Height() int64 {
	bs.mtx.RLock()
	defer bs.mtx.RUnlock()
	return bs.height
}

+// GetReader returns the value associated with the given key wrapped in an io.Reader.
+// If no value is found, it returns nil.
+// It's mainly for use with wire.ReadBinary.
func (bs *BlockStore) GetReader(key []byte) io.Reader {
	bytez := bs.db.Get(key)
	if bytez == nil {

@@ -58,6 +65,8 @@ func (bs *BlockStore) GetReader(key []byte) io.Reader {
	return bytes.NewReader(bytez)
}

+// LoadBlock returns the block with the given height.
+// If no block is found for that height, it returns nil.
func (bs *BlockStore) LoadBlock(height int64) *types.Block {
	var n int
	var err error

@@ -81,6 +90,9 @@ func (bs *BlockStore) LoadBlock(height int64) *types.Block {
	return block
}

+// LoadBlockPart returns the Part at the given index
+// from the block at the given height.
+// If no part is found for the given height and index, it returns nil.
func (bs *BlockStore) LoadBlockPart(height int64, index int) *types.Part {
	var n int
	var err error

@@ -95,6 +107,8 @@ func (bs *BlockStore) LoadBlockPart(height int64, index int) *types.Part {
	return part
}

+// LoadBlockMeta returns the BlockMeta for the given height.
+// If no block is found for the given height, it returns nil.
func (bs *BlockStore) LoadBlockMeta(height int64) *types.BlockMeta {
	var n int
	var err error

@@ -109,8 +123,10 @@ func (bs *BlockStore) LoadBlockMeta(height int64) *types.BlockMeta {
	return blockMeta
}

-// The +2/3 and other Precommit-votes for block at `height`.
-// This Commit comes from block.LastCommit for `height+1`.
+// LoadBlockCommit returns the Commit for the given height.
+// This commit consists of the +2/3 and other Precommit-votes for block at `height`,
+// and it comes from the block.LastCommit for `height+1`.
+// If no commit is found for the given height, it returns nil.
func (bs *BlockStore) LoadBlockCommit(height int64) *types.Commit {
	var n int
	var err error

@@ -125,7 +141,9 @@ func (bs *BlockStore) LoadBlockCommit(height int64) *types.Commit {
	return commit
}

-// NOTE: the Precommit-vote heights are for the block at `height`
+// LoadSeenCommit returns the locally seen Commit for the given height.
+// This is useful when we've seen a commit, but there has not yet been
+// a new block at `height + 1` that includes this commit in its block.LastCommit.
func (bs *BlockStore) LoadSeenCommit(height int64) *types.Commit {
	var n int
	var err error

@@ -140,15 +158,19 @@ func (bs *BlockStore) LoadSeenCommit(height int64) *types.Commit {
	return commit
}

+// SaveBlock persists the given block, blockParts, and seenCommit to the underlying db.
// blockParts: Must be parts of the block
// seenCommit: The +2/3 precommits that were seen which committed at height.
// If all the nodes restart after committing a block,
// we need this to reload the precommits to catch-up nodes to the
// most recent height. Otherwise they'd stall at H-1.
func (bs *BlockStore) SaveBlock(block *types.Block, blockParts *types.PartSet, seenCommit *types.Commit) {
	if block == nil {
		cmn.PanicSanity("BlockStore can only save a non-nil block")
	}
	height := block.Height
-	if height != bs.Height()+1 {
-		cmn.PanicSanity(cmn.Fmt("BlockStore can only save contiguous blocks. Wanted %v, got %v", bs.Height()+1, height))
+	if g, w := height, bs.Height()+1; g != w {
+		cmn.PanicSanity(cmn.Fmt("BlockStore can only save contiguous blocks. Wanted %v, got %v", w, g))
	}
	if !blockParts.IsComplete() {
		cmn.PanicSanity(cmn.Fmt("BlockStore can only save complete block part sets"))

@@ -219,6 +241,7 @@ type BlockStoreStateJSON struct {
	Height int64
}

+// Save persists the blockStore state to the database as JSON.
func (bsj BlockStoreStateJSON) Save(db dbm.DB) {
	bytes, err := json.Marshal(bsj)
	if err != nil {

@@ -227,9 +250,11 @@ func (bsj BlockStoreStateJSON) Save(db dbm.DB) {
	db.SetSync(blockStoreKey, bytes)
}

+// LoadBlockStoreStateJSON returns the BlockStoreStateJSON as loaded from disk.
+// If no BlockStoreStateJSON was previously persisted, it returns the zero value.
func LoadBlockStoreStateJSON(db dbm.DB) BlockStoreStateJSON {
	bytes := db.Get(blockStoreKey)
-	if bytes == nil {
+	if len(bytes) == 0 {
		return BlockStoreStateJSON{
			Height: 0,
		}

blockchain/store_test.go | 424 (new file)
@@ -0,0 +1,424 @@
package blockchain

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"runtime/debug"
	"strings"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	wire "github.com/tendermint/go-wire"

	"github.com/tendermint/tmlibs/db"
	"github.com/tendermint/tmlibs/log"

	"github.com/tendermint/tendermint/types"
)

func TestLoadBlockStoreStateJSON(t *testing.T) {
	db := db.NewMemDB()

	bsj := &BlockStoreStateJSON{Height: 1000}
	bsj.Save(db)

	retrBSJ := LoadBlockStoreStateJSON(db)

	assert.Equal(t, *bsj, retrBSJ, "expected the retrieved DBs to match")
}

func TestNewBlockStore(t *testing.T) {
	db := db.NewMemDB()
	db.Set(blockStoreKey, []byte(`{"height": 10000}`))
	bs := NewBlockStore(db)
	assert.Equal(t, bs.Height(), int64(10000), "failed to properly parse blockstore")

	panicCausers := []struct {
		data    []byte
		wantErr string
	}{
		{[]byte("artful-doger"), "not unmarshal bytes"},
		{[]byte(" "), "unmarshal bytes"},
	}

	for i, tt := range panicCausers {
		// Expecting a panic here on trying to parse an invalid blockStore
		_, _, panicErr := doFn(func() (interface{}, error) {
			db.Set(blockStoreKey, tt.data)
			_ = NewBlockStore(db)
			return nil, nil
		})
		require.NotNil(t, panicErr, "#%d panicCauser: %q expected a panic", i, tt.data)
		assert.Contains(t, panicErr.Error(), tt.wantErr, "#%d data: %q", i, tt.data)
	}

	db.Set(blockStoreKey, nil)
	bs = NewBlockStore(db)
	assert.Equal(t, bs.Height(), int64(0), "expecting nil bytes to be unmarshaled alright")
}

func TestBlockStoreGetReader(t *testing.T) {
	db := db.NewMemDB()
	// Initial setup
	db.Set([]byte("Foo"), []byte("Bar"))
	db.Set([]byte("Foo1"), nil)

	bs := NewBlockStore(db)

	tests := [...]struct {
		key  []byte
		want []byte
	}{
		0: {key: []byte("Foo"), want: []byte("Bar")},
		1: {key: []byte("KnoxNonExistent"), want: nil},
		2: {key: []byte("Foo1"), want: []byte{}},
	}

	for i, tt := range tests {
		r := bs.GetReader(tt.key)
		if r == nil {
			assert.Nil(t, tt.want, "#%d: expected a nil want when the reader is nil", i)
			continue
		}
		slurp, err := ioutil.ReadAll(r)
		if err != nil {
			t.Errorf("#%d: unexpected Read err: %v", i, err)
		} else {
			assert.Equal(t, slurp, tt.want, "#%d: mismatch", i)
		}
	}
}

func freshBlockStore() (*BlockStore, db.DB) {
	db := db.NewMemDB()
	return NewBlockStore(db), db
}

var (
	state, _ = makeStateAndBlockStore(log.NewTMLogger(new(bytes.Buffer)))

	block       = makeBlock(1, state)
	partSet     = block.MakePartSet(2)
	part1       = partSet.GetPart(0)
	part2       = partSet.GetPart(1)
	seenCommit1 = &types.Commit{Precommits: []*types.Vote{{Height: 10,
		Timestamp: time.Now().UTC()}}}
)

// TODO: This test should be simplified ...

func TestBlockStoreSaveLoadBlock(t *testing.T) {
	state, bs := makeStateAndBlockStore(log.NewTMLogger(new(bytes.Buffer)))
	require.Equal(t, bs.Height(), int64(0), "initially the height should be zero")

	// check there are no blocks at various heights
	noBlockHeights := []int64{0, -1, 100, 1000, 2}
	for i, height := range noBlockHeights {
		if g := bs.LoadBlock(height); g != nil {
			t.Errorf("#%d: height(%d) got a block; want nil", i, height)
		}
	}

	// save a block
	block := makeBlock(bs.Height()+1, state)
	validPartSet := block.MakePartSet(2)
	seenCommit := &types.Commit{Precommits: []*types.Vote{{Height: 10,
		Timestamp: time.Now().UTC()}}}
	bs.SaveBlock(block, partSet, seenCommit)
	require.Equal(t, bs.Height(), block.Header.Height, "expecting the new height to be changed")

	incompletePartSet := types.NewPartSetFromHeader(types.PartSetHeader{Total: 2})
	uncontiguousPartSet := types.NewPartSetFromHeader(types.PartSetHeader{Total: 0})
	uncontiguousPartSet.AddPart(part2, false)

	header1 := types.Header{
		Height:  1,
		NumTxs:  100,
		ChainID: "block_test",
		Time:    time.Now(),
	}
	header2 := header1
	header2.Height = 4

	// End of setup, test data

	commitAtH10 := &types.Commit{Precommits: []*types.Vote{{Height: 10,
		Timestamp: time.Now().UTC()}}}
	tuples := []struct {
		block      *types.Block
		parts      *types.PartSet
		seenCommit *types.Commit
		wantErr    bool
		wantPanic  string

		corruptBlockInDB      bool
		corruptCommitInDB     bool
		corruptSeenCommitInDB bool
		eraseCommitInDB       bool
		eraseSeenCommitInDB   bool
	}{
		{
			block:      newBlock(&header1, commitAtH10),
			parts:      validPartSet,
			seenCommit: seenCommit1,
		},

		{
			block:     nil,
			wantPanic: "only save a non-nil block",
		},

		{
			block:     newBlock(&header2, commitAtH10),
			parts:     uncontiguousPartSet,
			wantPanic: "only save contiguous blocks", // and incomplete and uncontiguous parts
		},

		{
			block:     newBlock(&header1, commitAtH10),
			parts:     incompletePartSet,
			wantPanic: "only save complete block", // incomplete parts
		},

		{
			block:             newBlock(&header1, commitAtH10),
			parts:             validPartSet,
			seenCommit:        seenCommit1,
			corruptCommitInDB: true, // Corrupt the DB's commit entry
			wantPanic:         "rror reading commit",
		},

		{
			block:            newBlock(&header1, commitAtH10),
			parts:            validPartSet,
			seenCommit:       seenCommit1,
			wantPanic:        "rror reading block",
			corruptBlockInDB: true, // Corrupt the DB's block entry
		},

		{
			block:      newBlock(&header1, commitAtH10),
			parts:      validPartSet,
			seenCommit: seenCommit1,

			// Expecting no error and we want a nil back
			eraseSeenCommitInDB: true,
		},

		{
			block:      newBlock(&header1, commitAtH10),
			parts:      validPartSet,
			seenCommit: seenCommit1,

			corruptSeenCommitInDB: true,
			wantPanic:             "rror reading commit",
		},

		{
			block:      newBlock(&header1, commitAtH10),
			parts:      validPartSet,
			seenCommit: seenCommit1,

			// Expecting no error and we want a nil back
			eraseCommitInDB: true,
		},
	}

	type quad struct {
		block  *types.Block
		commit *types.Commit
		meta   *types.BlockMeta

		seenCommit *types.Commit
	}

	for i, tuple := range tuples {
		bs, db := freshBlockStore()
		// SaveBlock
		res, err, panicErr := doFn(func() (interface{}, error) {
			bs.SaveBlock(tuple.block, tuple.parts, tuple.seenCommit)
			if tuple.block == nil {
				return nil, nil
			}

			if tuple.corruptBlockInDB {
				db.Set(calcBlockMetaKey(tuple.block.Height), []byte("block-bogus"))
			}
			bBlock := bs.LoadBlock(tuple.block.Height)
			bBlockMeta := bs.LoadBlockMeta(tuple.block.Height)

			if tuple.eraseSeenCommitInDB {
				db.Delete(calcSeenCommitKey(tuple.block.Height))
			}
			if tuple.corruptSeenCommitInDB {
				db.Set(calcSeenCommitKey(tuple.block.Height), []byte("bogus-seen-commit"))
			}
			bSeenCommit := bs.LoadSeenCommit(tuple.block.Height)

			commitHeight := tuple.block.Height - 1
			if tuple.eraseCommitInDB {
				db.Delete(calcBlockCommitKey(commitHeight))
			}
			if tuple.corruptCommitInDB {
				db.Set(calcBlockCommitKey(commitHeight), []byte("foo-bogus"))
			}
			bCommit := bs.LoadBlockCommit(commitHeight)
			return &quad{block: bBlock, seenCommit: bSeenCommit, commit: bCommit,
				meta: bBlockMeta}, nil
		})

		if subStr := tuple.wantPanic; subStr != "" {
			if panicErr == nil {
				t.Errorf("#%d: want a non-nil panic", i)
			} else if got := panicErr.Error(); !strings.Contains(got, subStr) {
				t.Errorf("#%d:\n\tgotErr: %q\nwant substring: %q", i, got, subStr)
			}
			continue
		}

		if tuple.wantErr {
			if err == nil {
				t.Errorf("#%d: got nil error", i)
			}
			continue
		}

		assert.Nil(t, panicErr, "#%d: unexpected panic", i)
		assert.Nil(t, err, "#%d: expecting a nil error", i)
		qua, ok := res.(*quad)
		if !ok || qua == nil {
			t.Errorf("#%d: got nil quad back; gotType=%T", i, res)
			continue
		}
		if tuple.eraseSeenCommitInDB {
			assert.Nil(t, qua.seenCommit,
				"erased the seenCommit in the DB hence we should get back a nil seenCommit")
		}
		if tuple.eraseCommitInDB {
			assert.Nil(t, qua.commit,
				"erased the commit in the DB hence we should get back a nil commit")
		}
	}
}

func binarySerializeIt(v interface{}) []byte {
	var n int
	var err error
	buf := new(bytes.Buffer)
	wire.WriteBinary(v, buf, &n, &err)
	return buf.Bytes()
}

func TestLoadBlockPart(t *testing.T) {
	bs, db := freshBlockStore()
	height, index := int64(10), 1
	loadPart := func() (interface{}, error) {
		part := bs.LoadBlockPart(height, index)
		return part, nil
	}

	// Initially no contents.
	// 1. Requesting a non-existent block shouldn't fail
	res, _, panicErr := doFn(loadPart)
	require.Nil(t, panicErr, "a non-existent block part shouldn't cause a panic")
	require.Nil(t, res, "a non-existent block part should return nil")

	// 2. Next save a corrupted block then try to load it
	db.Set(calcBlockPartKey(height, index), []byte("Tendermint"))
	res, _, panicErr = doFn(loadPart)
	require.NotNil(t, panicErr, "expecting a non-nil panic")
	require.Contains(t, panicErr.Error(), "Error reading block part")

	// 3. A good block serialized and saved to the DB should be retrievable
	db.Set(calcBlockPartKey(height, index), binarySerializeIt(part1))
	gotPart, _, panicErr := doFn(loadPart)
	require.Nil(t, panicErr, "an existent and proper block should not panic")
	require.Nil(t, res, "a properly saved block should return a proper block")
	require.Equal(t, gotPart.(*types.Part).Hash(), part1.Hash(),
		"expecting successful retrieval of previously saved block")
}

func TestLoadBlockMeta(t *testing.T) {
	bs, db := freshBlockStore()
	height := int64(10)
	loadMeta := func() (interface{}, error) {
		meta := bs.LoadBlockMeta(height)
		return meta, nil
	}

	// Initially no contents.
	// 1. Requesting a non-existent blockMeta shouldn't fail
	res, _, panicErr := doFn(loadMeta)
	require.Nil(t, panicErr, "a non-existent blockMeta shouldn't cause a panic")
	require.Nil(t, res, "a non-existent blockMeta should return nil")

	// 2. Next save a corrupted blockMeta then try to load it
	db.Set(calcBlockMetaKey(height), []byte("Tendermint-Meta"))
	res, _, panicErr = doFn(loadMeta)
	require.NotNil(t, panicErr, "expecting a non-nil panic")
	require.Contains(t, panicErr.Error(), "Error reading block meta")

	// 3. A good blockMeta serialized and saved to the DB should be retrievable
	meta := &types.BlockMeta{}
	db.Set(calcBlockMetaKey(height), binarySerializeIt(meta))
	gotMeta, _, panicErr := doFn(loadMeta)
	require.Nil(t, panicErr, "an existent and proper block should not panic")
	require.Nil(t, res, "a properly saved blockMeta should return a proper blockMeta")
	require.Equal(t, binarySerializeIt(meta), binarySerializeIt(gotMeta),
		"expecting successful retrieval of previously saved blockMeta")
}

func TestBlockFetchAtHeight(t *testing.T) {
	state, bs := makeStateAndBlockStore(log.NewTMLogger(new(bytes.Buffer)))
	require.Equal(t, bs.Height(), int64(0), "initially the height should be zero")
	block := makeBlock(bs.Height()+1, state)

	partSet := block.MakePartSet(2)
	seenCommit := &types.Commit{Precommits: []*types.Vote{{Height: 10,
		Timestamp: time.Now().UTC()}}}

	bs.SaveBlock(block, partSet, seenCommit)
	require.Equal(t, bs.Height(), block.Header.Height, "expecting the new height to be changed")

	blockAtHeight := bs.LoadBlock(bs.Height())
	require.Equal(t, block.Hash(), blockAtHeight.Hash(),
		"expecting a successful load of the last saved block")

	blockAtHeightPlus1 := bs.LoadBlock(bs.Height() + 1)
	require.Nil(t, blockAtHeightPlus1, "expecting an unsuccessful load of Height()+1")
	blockAtHeightPlus2 := bs.LoadBlock(bs.Height() + 2)
	require.Nil(t, blockAtHeightPlus2, "expecting an unsuccessful load of Height()+2")
}

func doFn(fn func() (interface{}, error)) (res interface{}, err error, panicErr error) {
	defer func() {
		if r := recover(); r != nil {
			switch e := r.(type) {
			case error:
				panicErr = e
			case string:
				panicErr = fmt.Errorf("%s", e)
			default:
				if st, ok := r.(fmt.Stringer); ok {
					panicErr = fmt.Errorf("%s", st)
				} else {
					panicErr = fmt.Errorf("%s", debug.Stack())
				}
			}
		}
	}()

	res, err = fn()
	return res, err, panicErr
}
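
doFn is the panic-capture helper every table-driven store test above leans on. A small illustration of its contract (sketch only, not part of the diff; the function name is made up):

func doFnContractSketch() {
	res, err, panicErr := doFn(func() (interface{}, error) {
		panic("boom") // recovered by doFn and surfaced as panicErr
	})
	// res == nil, err == nil, and panicErr.Error() contains "boom";
	// this is exactly the shape the wantPanic substring checks above match on.
	_, _, _ = res, err, panicErr
}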

func newBlock(hdr *types.Header, lastCommit *types.Commit) *types.Block {
	return &types.Block{
		Header:     hdr,
		LastCommit: lastCommit,
	}
}
@@ -7,6 +7,7 @@ machine:
    GOPATH: "$HOME/.go_project"
    PROJECT_PARENT_PATH: "$GOPATH/src/github.com/$CIRCLE_PROJECT_USERNAME"
    PROJECT_PATH: "$PROJECT_PARENT_PATH/$CIRCLE_PROJECT_REPONAME"
+   PATH: "$HOME/.go_project/bin:${PATH}"
  hosts:
    localhost: 127.0.0.1

@@ -30,4 +31,5 @@ test:
    - cd "$PROJECT_PATH" && mv test_integrations.log "${CIRCLE_ARTIFACTS}"
    - cd "$PROJECT_PATH" && bash <(curl -s https://codecov.io/bash) -f coverage.txt
    - cd "$PROJECT_PATH" && mv coverage.txt "${CIRCLE_ARTIFACTS}"
-   - cd "$PROJECT_PATH" && cp test/logs/messages "${CIRCLE_ARTIFACTS}/docker_logs.txt"
+   - cd "$PROJECT_PATH" && cp test/logs/messages "${CIRCLE_ARTIFACTS}/docker.log"
+   - cd "${CIRCLE_ARTIFACTS}" && tar czf logs.tar.gz *.log

@@ -1,8 +1,6 @@
package commands

import (
-	"os"

	"github.com/spf13/cobra"

	"github.com/tendermint/tendermint/types"

@@ -17,29 +15,34 @@ var InitFilesCmd = &cobra.Command{
}

func initFiles(cmd *cobra.Command, args []string) {
	// private validator
	privValFile := config.PrivValidatorFile()
-	if _, err := os.Stat(privValFile); os.IsNotExist(err) {
-		privValidator := types.GenPrivValidatorFS(privValFile)
-		privValidator.Save()
-
-		genFile := config.GenesisFile()
-
-		if _, err := os.Stat(genFile); os.IsNotExist(err) {
-			genDoc := types.GenesisDoc{
-				ChainID: cmn.Fmt("test-chain-%v", cmn.RandStr(6)),
-			}
-			genDoc.Validators = []types.GenesisValidator{{
-				PubKey: privValidator.GetPubKey(),
-				Power:  10,
-			}}
-
-			if err := genDoc.SaveAs(genFile); err != nil {
-				panic(err)
-			}
-		}
-
-		logger.Info("Initialized tendermint", "genesis", config.GenesisFile(), "priv_validator", config.PrivValidatorFile())
+	var privValidator *types.PrivValidatorFS
+	if cmn.FileExists(privValFile) {
+		privValidator = types.LoadPrivValidatorFS(privValFile)
+		logger.Info("Found private validator", "path", privValFile)
	} else {
-		logger.Info("Already initialized", "priv_validator", config.PrivValidatorFile())
+		privValidator = types.GenPrivValidatorFS(privValFile)
+		privValidator.Save()
+		logger.Info("Generated private validator", "path", privValFile)
	}

+	// genesis file
+	genFile := config.GenesisFile()
+	if cmn.FileExists(genFile) {
+		logger.Info("Found genesis file", "path", genFile)
+	} else {
		genDoc := types.GenesisDoc{
			ChainID: cmn.Fmt("test-chain-%v", cmn.RandStr(6)),
		}
		genDoc.Validators = []types.GenesisValidator{{
			PubKey: privValidator.GetPubKey(),
			Power:  10,
		}}

		if err := genDoc.SaveAs(genFile); err != nil {
			panic(err)
		}
+		logger.Info("Generated genesis file", "path", genFile)
	}
}

cmd/tendermint/commands/lite.go | 60 (new file)
@@ -0,0 +1,60 @@
package commands

import (
	"github.com/spf13/cobra"

	cmn "github.com/tendermint/tmlibs/common"

	"github.com/tendermint/tendermint/lite/proxy"
	rpcclient "github.com/tendermint/tendermint/rpc/client"
)

// LiteCmd represents the base command when called without any subcommands
var LiteCmd = &cobra.Command{
	Use:   "lite",
	Short: "Run lite-client proxy server, verifying tendermint rpc",
	Long: `This node will run a secure proxy to a tendermint rpc server.

All calls that can be tracked back to a block header by a proof
will be verified before passing them back to the caller. Other than
that it will present the same interface as a full tendermint node,
just with added trust and running locally.`,
	RunE:         runProxy,
	SilenceUsage: true,
}

var (
	listenAddr string
	nodeAddr   string
	chainID    string
	home       string
)

func init() {
	LiteCmd.Flags().StringVar(&listenAddr, "laddr", ":8888", "Serve the proxy on the given port")
	LiteCmd.Flags().StringVar(&nodeAddr, "node", "localhost:46657", "Connect to a Tendermint node at this address")
	LiteCmd.Flags().StringVar(&chainID, "chain-id", "tendermint", "Specify the Tendermint chain ID")
	LiteCmd.Flags().StringVar(&home, "home-dir", ".tendermint-lite", "Specify the home directory")
}

func runProxy(cmd *cobra.Command, args []string) error {
	// First, connect a client
	node := rpcclient.NewHTTP(nodeAddr, "/websocket")

	cert, err := proxy.GetCertifier(chainID, home, nodeAddr)
	if err != nil {
		return err
	}
	sc := proxy.SecureClient(node, cert)

	err = proxy.StartProxy(sc, listenAddr, logger)
	if err != nil {
		return err
	}

	cmn.TrapSignal(func() {
		// TODO: close up shop
	})

	return nil
}
@@ -19,7 +19,7 @@ var ProbeUpnpCmd = &cobra.Command{
func probeUpnp(cmd *cobra.Command, args []string) error {
	capabilities, err := upnp.Probe(logger)
	if err != nil {
-		fmt.Println("Probe failed: %v", err)
+		fmt.Println("Probe failed: ", err)
	} else {
		fmt.Println("Probe success!")
		jsonBytes, err := json.Marshal(capabilities)
@@ -14,11 +14,15 @@ import (

var (
	config = cfg.DefaultConfig()
-	logger = log.NewTMLogger(log.NewSyncWriter(os.Stdout)).With("module", "main")
+	logger = log.NewTMLogger(log.NewSyncWriter(os.Stdout))
)

func init() {
-	RootCmd.PersistentFlags().String("log_level", config.LogLevel, "Log level")
+	registerFlagsRootCmd(RootCmd)
}

+func registerFlagsRootCmd(cmd *cobra.Command) {
+	cmd.PersistentFlags().String("log_level", config.LogLevel, "Log level")
+}

// ParseConfig retrieves the default environment configuration,

@@ -53,6 +57,7 @@ var RootCmd = &cobra.Command{
		if viper.GetBool(cli.TraceFlag) {
			logger = log.NewTracingLogger(logger)
		}
+		logger = logger.With("module", "main")
		return nil
	},
}
@@ -1,7 +1,10 @@
package commands

import (
+	"fmt"
+	"io/ioutil"
	"os"
+	"path/filepath"
+	"strconv"
	"testing"

@@ -12,6 +15,7 @@ import (

	cfg "github.com/tendermint/tendermint/config"
	"github.com/tendermint/tmlibs/cli"
+	cmn "github.com/tendermint/tmlibs/common"
)

var (

@@ -22,10 +26,8 @@ const (
	rootName = "root"
)

-// isolate provides a clean setup and returns a copy of RootCmd you can
-// modify in the test cases.
-// NOTE: it unsets all TM* env variables.
-func isolate(cmds ...*cobra.Command) cli.Executable {
+// clearConfig clears env vars, the given root dir, and resets viper.
+func clearConfig(dir string) {
	if err := os.Unsetenv("TMHOME"); err != nil {
		panic(err)
	}

@@ -33,75 +35,142 @@ func isolate(cmds ...*cobra.Command) cli.Executable {
		panic(err)
	}

+	if err := os.RemoveAll(dir); err != nil {
+		panic(err)
+	}
	viper.Reset()
	config = cfg.DefaultConfig()
-	r := &cobra.Command{
-		Use:               rootName,
-		PersistentPreRunE: RootCmd.PersistentPreRunE,
-	}
-	r.AddCommand(cmds...)
-	wr := cli.PrepareBaseCmd(r, "TM", defaultRoot)
-	return wr
}

-func TestRootConfig(t *testing.T) {
-	assert, require := assert.New(t), require.New(t)
-
-	// we pre-create a config file we can refer to in the rest of
-	// the test cases.
-	cvals := map[string]string{
-		"moniker":   "monkey",
-		"fast_sync": "false",
+// prepare new rootCmd
+func testRootCmd() *cobra.Command {
+	rootCmd := &cobra.Command{
+		Use:               RootCmd.Use,
+		PersistentPreRunE: RootCmd.PersistentPreRunE,
+		Run:               func(cmd *cobra.Command, args []string) {},
	}
-	// proper types of the above settings
-	cfast := false
-	conf, err := cli.WriteDemoConfig(cvals)
-	require.Nil(err)
+	registerFlagsRootCmd(rootCmd)
+	var l string
+	rootCmd.PersistentFlags().String("log", l, "Log")
+	return rootCmd
}

+func testSetup(rootDir string, args []string, env map[string]string) error {
+	clearConfig(defaultRoot)
+
+	rootCmd := testRootCmd()
+	cmd := cli.PrepareBaseCmd(rootCmd, "TM", defaultRoot)
+
+	// run with the args and env
+	args = append([]string{rootCmd.Use}, args...)
+	return cli.RunWithArgs(cmd, args, env)
+}

+func TestRootHome(t *testing.T) {
+	newRoot := filepath.Join(defaultRoot, "something-else")
+	cases := []struct {
+		args []string
+		env  map[string]string
+		root string
+	}{
+		{nil, nil, defaultRoot},
+		{[]string{"--home", newRoot}, nil, newRoot},
+		{nil, map[string]string{"TMHOME": newRoot}, newRoot},
+	}
+
+	for i, tc := range cases {
+		idxString := strconv.Itoa(i)
+
+		err := testSetup(defaultRoot, tc.args, tc.env)
+		require.Nil(t, err, idxString)
+
+		assert.Equal(t, tc.root, config.RootDir, idxString)
+		assert.Equal(t, tc.root, config.P2P.RootDir, idxString)
+		assert.Equal(t, tc.root, config.Consensus.RootDir, idxString)
+		assert.Equal(t, tc.root, config.Mempool.RootDir, idxString)
+	}
+}

func TestRootFlagsEnv(t *testing.T) {

	// defaults
	defaults := cfg.DefaultConfig()
-	dmax := defaults.P2P.MaxNumPeers
+	defaultLogLvl := defaults.LogLevel

	cases := []struct {
		args []string
		env  map[string]string
-		root     string
-		moniker  string
-		fastSync bool
-		maxPeer  int
+		logLevel string
	}{
-		{nil, nil, defaultRoot, defaults.Moniker, defaults.FastSync, dmax},
-		// try multiple ways of setting root (two flags, cli vs. env)
-		{[]string{"--home", conf}, nil, conf, cvals["moniker"], cfast, dmax},
-		{nil, map[string]string{"TMHOME": conf}, conf, cvals["moniker"], cfast, dmax},
-		// check setting p2p subflags two different ways
-		{[]string{"--p2p.max_num_peers", "420"}, nil, defaultRoot, defaults.Moniker, defaults.FastSync, 420},
-		{nil, map[string]string{"TM_P2P_MAX_NUM_PEERS": "17"}, defaultRoot, defaults.Moniker, defaults.FastSync, 17},
-		// try to set env that have no flags attached...
-		{[]string{"--home", conf}, map[string]string{"TM_MONIKER": "funny"}, conf, "funny", cfast, dmax},
+		{[]string{"--log", "debug"}, nil, defaultLogLvl},                 // wrong flag
+		{[]string{"--log_level", "debug"}, nil, "debug"},                 // right flag
+		{nil, map[string]string{"TM_LOW": "debug"}, defaultLogLvl},       // wrong env flag
+		{nil, map[string]string{"MT_LOG_LEVEL": "debug"}, defaultLogLvl}, // wrong env prefix
+		{nil, map[string]string{"TM_LOG_LEVEL": "debug"}, "debug"},       // right env
	}

	for idx, tc := range cases {
		i := strconv.Itoa(idx)
-		// test command that does nothing, except trigger unmarshalling in root
-		noop := &cobra.Command{
-			Use: "noop",
-			RunE: func(cmd *cobra.Command, args []string) error {
-				return nil
-			},
-		}
-		noop.Flags().Int("p2p.max_num_peers", defaults.P2P.MaxNumPeers, "")
-		cmd := isolate(noop)
	for i, tc := range cases {
		idxString := strconv.Itoa(i)

-		args := append([]string{rootName, noop.Use}, tc.args...)
-		err := cli.RunWithArgs(cmd, args, tc.env)
-		require.Nil(err, i)
-		assert.Equal(tc.root, config.RootDir, i)
-		assert.Equal(tc.root, config.P2P.RootDir, i)
-		assert.Equal(tc.root, config.Consensus.RootDir, i)
-		assert.Equal(tc.root, config.Mempool.RootDir, i)
-		assert.Equal(tc.moniker, config.Moniker, i)
-		assert.Equal(tc.fastSync, config.FastSync, i)
-		assert.Equal(tc.maxPeer, config.P2P.MaxNumPeers, i)
+		err := testSetup(defaultRoot, tc.args, tc.env)
+		require.Nil(t, err, idxString)

+		assert.Equal(t, tc.logLevel, config.LogLevel, idxString)
	}

}

func TestRootConfig(t *testing.T) {

	// write non-default config
	nonDefaultLogLvl := "abc:debug"
	cvals := map[string]string{
		"log_level": nonDefaultLogLvl,
	}

	cases := []struct {
		args []string
		env  map[string]string

		logLvl string
	}{
		{nil, nil, nonDefaultLogLvl},                                     // should load config
		{[]string{"--log_level=abc:info"}, nil, "abc:info"},              // flag overrides
		{nil, map[string]string{"TM_LOG_LEVEL": "abc:info"}, "abc:info"}, // env overrides
	}

	for i, tc := range cases {
		idxString := strconv.Itoa(i)
		clearConfig(defaultRoot)

		// XXX: path must match cfg.defaultConfigPath
		configFilePath := filepath.Join(defaultRoot, "config")
		err := cmn.EnsureDir(configFilePath, 0700)
		require.Nil(t, err)

		// write the non-defaults to a different path
		// TODO: support writing sub configs so we can test that too
		err = WriteConfigVals(configFilePath, cvals)
		require.Nil(t, err)

		rootCmd := testRootCmd()
		cmd := cli.PrepareBaseCmd(rootCmd, "TM", defaultRoot)

		// run with the args and env
		tc.args = append([]string{rootCmd.Use}, tc.args...)
		err = cli.RunWithArgs(cmd, tc.args, tc.env)
		require.Nil(t, err, idxString)

		assert.Equal(t, tc.logLvl, config.LogLevel, idxString)
	}
}

// WriteConfigVals writes a toml file with the given values.
// It returns an error if writing was impossible.
func WriteConfigVals(dir string, vals map[string]string) error {
	data := ""
	for k, v := range vals {
		data = data + fmt.Sprintf("%s = \"%s\"\n", k, v)
	}
	cfile := filepath.Join(dir, "config.toml")
	return ioutil.WriteFile(cfile, []byte(data), 0666)
}

@@ -29,8 +29,10 @@ func AddNodeFlags(cmd *cobra.Command) {
	// p2p flags
	cmd.Flags().String("p2p.laddr", config.P2P.ListenAddress, "Node listen address. (0.0.0.0:0 means any interface, any port)")
	cmd.Flags().String("p2p.seeds", config.P2P.Seeds, "Comma delimited host:port seed nodes")
+	cmd.Flags().String("p2p.persistent_peers", config.P2P.PersistentPeers, "Comma delimited host:port persistent peers")
	cmd.Flags().Bool("p2p.skip_upnp", config.P2P.SkipUPNP, "Skip UPNP configuration")
-	cmd.Flags().Bool("p2p.pex", config.P2P.PexReactor, "Enable Peer-Exchange (dev feature)")
+	cmd.Flags().Bool("p2p.pex", config.P2P.PexReactor, "Enable/disable Peer-Exchange")
+	cmd.Flags().Bool("p2p.seed_mode", config.P2P.SeedMode, "Enable/disable seed mode")

	// consensus flags
	cmd.Flags().Bool("consensus.create_empty_blocks", config.Consensus.CreateEmptyBlocks, "Set this to false to only produce blocks when there are txs or when the AppHash changes")
@@ -2,11 +2,12 @@ package commands

import (
	"fmt"
-	"path"
+	"path/filepath"
	"time"

	"github.com/spf13/cobra"

+	cfg "github.com/tendermint/tendermint/config"
	"github.com/tendermint/tendermint/types"
	cmn "github.com/tendermint/tmlibs/common"
)

@@ -35,6 +36,7 @@ var TestnetFilesCmd = &cobra.Command{
func testnetFiles(cmd *cobra.Command, args []string) {

	genVals := make([]types.GenesisValidator, nValidators)
+	defaultConfig := cfg.DefaultBaseConfig()

	// Initialize core dir and priv_validator.json's
	for i := 0; i < nValidators; i++ {

@@ -44,7 +46,7 @@ func testnetFiles(cmd *cobra.Command, args []string) {
			cmn.Exit(err.Error())
		}
		// Read priv_validator.json to populate vals
-		privValFile := path.Join(dataDir, mach, "priv_validator.json")
+		privValFile := filepath.Join(dataDir, mach, defaultConfig.PrivValidator)
		privVal := types.LoadPrivValidatorFS(privValFile)
		genVals[i] = types.GenesisValidator{
			PubKey: privVal.GetPubKey(),

@@ -63,7 +65,7 @@ func testnetFiles(cmd *cobra.Command, args []string) {
	// Write genesis file.
	for i := 0; i < nValidators; i++ {
		mach := cmn.Fmt("mach%d", i)
-		if err := genDoc.SaveAs(path.Join(dataDir, mach, "genesis.json")); err != nil {
+		if err := genDoc.SaveAs(filepath.Join(dataDir, mach, defaultConfig.Genesis)); err != nil {
			panic(err)
		}
	}

@@ -73,14 +75,15 @@ func testnetFiles(cmd *cobra.Command, args []string) {

// Initialize per-machine core directory
func initMachCoreDirectory(base, mach string) error {
-	dir := path.Join(base, mach)
+	dir := filepath.Join(base, mach)
	err := cmn.EnsureDir(dir, 0777)
	if err != nil {
		return err
	}

	// Create priv_validator.json file if not present
-	ensurePrivValidator(path.Join(dir, "priv_validator.json"))
+	defaultConfig := cfg.DefaultBaseConfig()
+	ensurePrivValidator(filepath.Join(dir, defaultConfig.PrivValidator))
	return nil

}

@@ -2,10 +2,12 @@ package main

import (
	"os"
+	"path/filepath"

	"github.com/tendermint/tmlibs/cli"

	cmd "github.com/tendermint/tendermint/cmd/tendermint/commands"
+	cfg "github.com/tendermint/tendermint/config"
	nm "github.com/tendermint/tendermint/node"
)

@@ -15,6 +17,7 @@ func main() {
		cmd.GenValidatorCmd,
		cmd.InitFilesCmd,
		cmd.ProbeUpnpCmd,
+		cmd.LiteCmd,
		cmd.ReplayCmd,
		cmd.ReplayConsoleCmd,
		cmd.ResetAllCmd,

@@ -36,7 +39,7 @@ func main() {
	// Create & start node
	rootCmd.AddCommand(cmd.NewRunNodeCmd(nodeFunc))

-	cmd := cli.PrepareBaseCmd(rootCmd, "TM", os.ExpandEnv("$HOME/.tendermint"))
+	cmd := cli.PrepareBaseCmd(rootCmd, "TM", os.ExpandEnv(filepath.Join("$HOME", cfg.DefaultTendermintDir)))
	if err := cmd.Execute(); err != nil {
		panic(err)
	}

config/config.go | 131
@@ -2,10 +2,35 @@ package config

import (
	"fmt"
+	"os"
	"path/filepath"
	"time"
)

// NOTE: Most of the structs & relevant comments + the
// default configuration options were used to manually
// generate the config.toml. Please reflect any changes
// made here in the defaultConfigTemplate constant in
// config/toml.go
+// NOTE: tmlibs/cli must know to look in the config dir!
+var (
+	DefaultTendermintDir = ".tendermint"
+	defaultConfigDir     = "config"
+	defaultDataDir       = "data"
+
+	defaultConfigFileName  = "config.toml"
+	defaultGenesisJSONName = "genesis.json"
+	defaultPrivValName     = "priv_validator.json"
+	defaultNodeKeyName     = "node_key.json"
+	defaultAddrBookName    = "addrbook.json"
+
+	defaultConfigFilePath  = filepath.Join(defaultConfigDir, defaultConfigFileName)
+	defaultGenesisJSONPath = filepath.Join(defaultConfigDir, defaultGenesisJSONName)
+	defaultPrivValPath     = filepath.Join(defaultConfigDir, defaultPrivValName)
+	defaultNodeKeyPath     = filepath.Join(defaultConfigDir, defaultNodeKeyName)
+	defaultAddrBookPath    = filepath.Join(defaultConfigDir, defaultAddrBookName)
+)

// Config defines the top level configuration for a Tendermint node
type Config struct {
	// Top level options use an anonymous struct

@@ -37,9 +62,9 @@ func TestConfig() *Config {
		BaseConfig: TestBaseConfig(),
		RPC:        TestRPCConfig(),
		P2P:        TestP2PConfig(),
-		Mempool:    DefaultMempoolConfig(),
+		Mempool:    TestMempoolConfig(),
		Consensus:  TestConsensusConfig(),
-		TxIndex:    DefaultTxIndexConfig(),
+		TxIndex:    TestTxIndexConfig(),
	}
}

@@ -58,19 +83,23 @@ func (cfg *Config) SetRoot(root string) *Config {

// BaseConfig defines the base configuration for a Tendermint node
type BaseConfig struct {

+	// chainID is unexposed and immutable but here for convenience
+	chainID string

	// The root directory for all data.
	// This should be set in viper so it can unmarshal into this struct
	RootDir string `mapstructure:"home"`

-	// The ID of the chain to join (should be signed with every transaction and vote)
-	ChainID string `mapstructure:"chain_id"`

-	// A JSON file containing the initial validator set and other meta data
+	// Path to the JSON file containing the initial validator set and other meta data
	Genesis string `mapstructure:"genesis_file"`

-	// A JSON file containing the private key to use as a validator in the consensus protocol
+	// Path to the JSON file containing the private key to use as a validator in the consensus protocol
	PrivValidator string `mapstructure:"priv_validator_file"`

+	// A JSON file containing the private key to use for p2p authenticated encryption
+	NodeKey string `mapstructure:"node_key_file"`

	// A custom human readable name for this node
	Moniker string `mapstructure:"moniker"`

@@ -103,12 +132,17 @@ type BaseConfig struct {
	DBPath string `mapstructure:"db_dir"`
}

+func (c BaseConfig) ChainID() string {
+	return c.chainID
+}

// DefaultBaseConfig returns a default base configuration for a Tendermint node
func DefaultBaseConfig() BaseConfig {
	return BaseConfig{
-		Genesis:       "genesis.json",
-		PrivValidator: "priv_validator.json",
-		Moniker:       "anonymous",
+		Genesis:       defaultGenesisJSONPath,
+		PrivValidator: defaultPrivValPath,
+		NodeKey:       defaultNodeKeyPath,
+		Moniker:       defaultMoniker,
		ProxyApp:      "tcp://127.0.0.1:46658",
		ABCI:          "socket",
		LogLevel:      DefaultPackageLogLevels(),

@@ -123,7 +157,7 @@ func DefaultBaseConfig() BaseConfig {
// TestBaseConfig returns a base configuration for testing a Tendermint node
func TestBaseConfig() BaseConfig {
	conf := DefaultBaseConfig()
-	conf.ChainID = "tendermint_test"
+	conf.chainID = "tendermint_test"
	conf.ProxyApp = "dummy"
	conf.FastSync = false
	conf.DBBackend = "memdb"

@@ -140,6 +174,11 @@ func (b BaseConfig) PrivValidatorFile() string {
	return rootify(b.PrivValidator, b.RootDir)
}

+// NodeKeyFile returns the full path to the node_key.json file
+func (b BaseConfig) NodeKeyFile() string {
+	return rootify(b.NodeKey, b.RootDir)
+}

// DBDir returns the full path to the database directory
func (b BaseConfig) DBDir() string {
	return rootify(b.DBPath, b.RootDir)

@@ -150,9 +189,10 @@ func DefaultLogLevel() string {
	return "error"
}

-// DefaultPackageLogLevels returns a default log level setting so all packages log at "error", while the `state` package logs at "info"
+// DefaultPackageLogLevels returns a default log level setting so all packages
+// log at "error", while the `state` and `main` packages log at "info"
func DefaultPackageLogLevels() string {
-	return fmt.Sprintf("state:info,*:%s", DefaultLogLevel())
+	return fmt.Sprintf("main:info,state:info,*:%s", DefaultLogLevel())
}
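
The string produced here is a comma-separated list of module:level pairs, with * as the catch-all. A throwaway sketch of how such a spec decomposes (this is not the parser tmlibs actually uses, and it assumes the standard strings package is imported):

// splitLogLevels("main:info,state:info,*:error")
//   => map["main":"info" "state":"info" "*":"error"]
func splitLogLevels(spec string) map[string]string {
	levels := make(map[string]string)
	for _, pair := range strings.Split(spec, ",") {
		if kv := strings.SplitN(pair, ":", 2); len(kv) == 2 {
			levels[kv[0]] = kv[1]
		}
	}
	return levels
}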

//-----------------------------------------------------------------------------

@@ -169,7 +209,7 @@ type RPCConfig struct {
	// NOTE: This server only supports /broadcast_tx_commit
	GRPCListenAddress string `mapstructure:"grpc_laddr"`

-	// Activate unsafe RPC commands like /dial_seeds and /unsafe_flush_mempool
+	// Activate unsafe RPC commands like /dial_persistent_peers and /unsafe_flush_mempool
	Unsafe bool `mapstructure:"unsafe"`
}

@@ -202,8 +242,13 @@ type P2PConfig struct {
	ListenAddress string `mapstructure:"laddr"`

	// Comma separated list of seed nodes to connect to
+	// We only use these if we can’t connect to peers in the addrbook
	Seeds string `mapstructure:"seeds"`

+	// Comma separated list of persistent peers to connect to
+	// We always connect to these
+	PersistentPeers string `mapstructure:"persistent_peers"`

	// Skip UPNP port forwarding
	SkipUPNP bool `mapstructure:"skip_upnp"`

@@ -213,9 +258,6 @@ type P2PConfig struct {
	// Set true for strict address routability rules
	AddrBookStrict bool `mapstructure:"addr_book_strict"`

-	// Set true to enable the peer-exchange reactor
-	PexReactor bool `mapstructure:"pex"`

	// Maximum number of peers to connect to
	MaxNumPeers int `mapstructure:"max_num_peers"`

@@ -230,19 +272,30 @@ type P2PConfig struct {

	// Rate at which packets can be received, in bytes/second
	RecvRate int64 `mapstructure:"recv_rate"`

+	// Set true to enable the peer-exchange reactor
+	PexReactor bool `mapstructure:"pex"`

+	// Seed mode, in which the node constantly crawls the network and looks for
+	// peers. If another node asks it for addresses, it responds and disconnects.
+	//
+	// Does not work if the peer-exchange reactor is disabled.
+	SeedMode bool `mapstructure:"seed_mode"`
}
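
How the new knobs above fit together, in one sketch (not part of the diff; the function name and addresses are made up, the field names are the ones introduced here):

func p2pConfigSketch() *P2PConfig {
	cfg := DefaultP2PConfig()
	cfg.Seeds = "1.2.3.4:46656,5.6.7.8:46656" // tried only when the address book yields no peers
	cfg.PersistentPeers = "9.9.9.9:46656"     // always (re)connected to
	cfg.PexReactor = true                     // SeedMode below does not work without PEX
	cfg.SeedMode = true                       // crawl the network, serve addresses, disconnect
	return cfg
}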

// DefaultP2PConfig returns a default configuration for the peer-to-peer layer
func DefaultP2PConfig() *P2PConfig {
	return &P2PConfig{
		ListenAddress:           "tcp://0.0.0.0:46656",
-		AddrBook:                "addrbook.json",
+		AddrBook:                defaultAddrBookPath,
		AddrBookStrict:          true,
		MaxNumPeers:             50,
		FlushThrottleTimeout:    100,
		MaxMsgPacketPayloadSize: 1024,   // 1 kB
		SendRate:                512000, // 500 kB/s
		RecvRate:                512000, // 500 kB/s
+		PexReactor:              true,
+		SeedMode:                false,
	}
}

@@ -251,6 +304,7 @@ func TestP2PConfig() *P2PConfig {
	conf := DefaultP2PConfig()
	conf.ListenAddress = "tcp://0.0.0.0:36656"
	conf.SkipUPNP = true
+	conf.FlushThrottleTimeout = 10
	return conf
}

@@ -269,6 +323,7 @@ type MempoolConfig struct {
	RecheckEmpty bool   `mapstructure:"recheck_empty"`
	Broadcast    bool   `mapstructure:"broadcast"`
	WalPath      string `mapstructure:"wal_dir"`
+	CacheSize    int    `mapstructure:"cache_size"`
}

// DefaultMempoolConfig returns a default configuration for the Tendermint mempool

@@ -277,10 +332,18 @@ func DefaultMempoolConfig() *MempoolConfig {
		Recheck:      true,
		RecheckEmpty: true,
		Broadcast:    true,
-		WalPath:      "data/mempool.wal",
+		WalPath:      filepath.Join(defaultDataDir, "mempool.wal"),
+		CacheSize:    100000,
	}
}

+// TestMempoolConfig returns a configuration for testing the Tendermint mempool
+func TestMempoolConfig() *MempoolConfig {
+	config := DefaultMempoolConfig()
+	config.CacheSize = 1000
+	return config
+}

// WalDir returns the full path to the mempool's write-ahead log
func (m *MempoolConfig) WalDir() string {
	return rootify(m.WalPath, m.RootDir)

@@ -297,7 +360,7 @@ type ConsensusConfig struct {
	WalLight bool   `mapstructure:"wal_light"`
	walFile  string // overrides WalPath if set

-	// All timeouts are in ms
+	// All timeouts are in milliseconds
	TimeoutPropose      int `mapstructure:"timeout_propose"`
	TimeoutProposeDelta int `mapstructure:"timeout_propose_delta"`
	TimeoutPrevote      int `mapstructure:"timeout_prevote"`

@@ -317,7 +380,7 @@ type ConsensusConfig struct {
	CreateEmptyBlocks         bool `mapstructure:"create_empty_blocks"`
	CreateEmptyBlocksInterval int  `mapstructure:"create_empty_blocks_interval"`

-	// Reactor sleep duration parameters are in ms
+	// Reactor sleep duration parameters are in milliseconds
	PeerGossipSleepDuration     int `mapstructure:"peer_gossip_sleep_duration"`
	PeerQueryMaj23SleepDuration int `mapstructure:"peer_query_maj23_sleep_duration"`
}

@@ -365,7 +428,7 @@ func (cfg *ConsensusConfig) PeerQueryMaj23Sleep() time.Duration {
// DefaultConsensusConfig returns a default configuration for the consensus service
func DefaultConsensusConfig() *ConsensusConfig {
	return &ConsensusConfig{
-		WalPath:             "data/cs.wal/wal",
+		WalPath:             filepath.Join(defaultDataDir, "cs.wal", "wal"),
		WalLight:            false,
		TimeoutPropose:      3000,
		TimeoutProposeDelta: 500,

@@ -387,7 +450,7 @@ func DefaultConsensusConfig() *ConsensusConfig {
// TestConsensusConfig returns a configuration for testing the consensus service
func TestConsensusConfig() *ConsensusConfig {
	config := DefaultConsensusConfig()
-	config.TimeoutPropose = 2000
+	config.TimeoutPropose = 100
	config.TimeoutProposeDelta = 1
	config.TimeoutPrevote = 10
	config.TimeoutPrevoteDelta = 1

@@ -395,6 +458,8 @@ func TestConsensusConfig() *ConsensusConfig {
	config.TimeoutPrecommitDelta = 1
	config.TimeoutCommit = 10
	config.SkipTimeoutCommit = true
+	config.PeerGossipSleepDuration = 5
+	config.PeerQueryMaj23SleepDuration = 250
	return config
}

@@ -446,6 +511,11 @@ func DefaultTxIndexConfig() *TxIndexConfig {
	}
}

+// TestTxIndexConfig returns a default configuration for the transaction indexer.
+func TestTxIndexConfig() *TxIndexConfig {
+	return DefaultTxIndexConfig()
+}

//-----------------------------------------------------------------------------
// Utils

@@ -456,3 +526,18 @@ func rootify(path, root string) string {
	}
	return filepath.Join(root, path)
}
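
rootify in one sketch: relative paths are joined under root, while (judging from the truncated hunk, the full check is not shown here) absolute paths presumably pass through unchanged:

func rootifySketch() {
	root := "/home/user/.tendermint"       // hypothetical root dir
	_ = rootify("data/cs.wal/wal", root)   // "/home/user/.tendermint/data/cs.wal/wal"
	_ = rootify("/etc/genesis.json", root) // absolute input, presumably returned as-is
}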

+//-----------------------------------------------------------------------------
+// Moniker
+
+var defaultMoniker = getDefaultMoniker()
+
+// getDefaultMoniker returns a default moniker, which is the host name. If runtime
+// fails to get the host name, "anonymous" will be returned.
+func getDefaultMoniker() string {
+	moniker, err := os.Hostname()
+	if err != nil {
+		moniker = "anonymous"
+	}
+	return moniker
+}

config/toml.go | 247
@@ -1,54 +1,224 @@
package config

import (
+	"bytes"
-	"os"
-	"path"
+	"path/filepath"
-	"strings"
+	"text/template"

	cmn "github.com/tendermint/tmlibs/common"
)

+var configTemplate *template.Template

+func init() {
+	var err error
+	if configTemplate, err = template.New("configFileTemplate").Parse(defaultConfigTemplate); err != nil {
+		panic(err)
+	}
+}

/****** these are for production settings ***********/

+// EnsureRoot creates the root, config, and data directories if they don't exist,
+// and panics if it fails.
func EnsureRoot(rootDir string) {
	if err := cmn.EnsureDir(rootDir, 0700); err != nil {
		cmn.PanicSanity(err.Error())
	}
-	if err := cmn.EnsureDir(rootDir+"/data", 0700); err != nil {
+	if err := cmn.EnsureDir(filepath.Join(rootDir, defaultConfigDir), 0700); err != nil {
		cmn.PanicSanity(err.Error())
	}
+	if err := cmn.EnsureDir(filepath.Join(rootDir, defaultDataDir), 0700); err != nil {
+		cmn.PanicSanity(err.Error())
+	}

-	configFilePath := path.Join(rootDir, "config.toml")
+	configFilePath := filepath.Join(rootDir, defaultConfigFilePath)

	// Write default config file if missing.
	if !cmn.FileExists(configFilePath) {
-		// Ask user for moniker
-		// moniker := cfg.Prompt("Type hostname: ", "anonymous")
-		cmn.MustWriteFile(configFilePath, []byte(defaultConfig("anonymous")), 0644)
+		writeConfigFile(configFilePath)
	}
}

-var defaultConfigTmpl = `# This is a TOML config file.
+// XXX: this func should probably be called by cmd/tendermint/commands/init.go
+// alongside the writing of the genesis.json and priv_validator.json
+func writeConfigFile(configFilePath string) {
+	var buffer bytes.Buffer

+	if err := configTemplate.Execute(&buffer, DefaultConfig()); err != nil {
+		panic(err)
+	}

+	cmn.MustWriteFile(configFilePath, buffer.Bytes(), 0644)
+}
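
writeConfigFile above is plain text/template execution into a buffer. The same mechanism as a standalone sketch, using only the standard library (the one-line template is a stand-in for the full defaultConfigTemplate):

package main

import (
	"bytes"
	"text/template"
)

func main() {
	// Parse once, execute into a buffer, write the bytes out:
	// the writeConfigFile pattern in miniature.
	tmpl := template.Must(template.New("cfg").Parse(`moniker = "{{ .Moniker }}"`))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, struct{ Moniker string }{"node0"}); err != nil {
		panic(err)
	}
	_ = buf.String() // `moniker = "node0"`
}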
|
||||
|
||||
+// Note: any changes to the comments/variables/mapstructure
+// must be reflected in the appropriate struct in config/config.go
+const defaultConfigTemplate = `# This is a TOML config file.
# For more information, see https://github.com/toml-lang/toml

-proxy_app = "tcp://127.0.0.1:46658"
-moniker = "__MONIKER__"
-fast_sync = true
-db_backend = "leveldb"
-log_level = "state:info,*:error"
+##### main base config options #####
+
+# TCP or UNIX socket address of the ABCI application,
+# or the name of an ABCI application compiled in with the Tendermint binary
+proxy_app = "{{ .BaseConfig.ProxyApp }}"
+
+# A custom human readable name for this node
+moniker = "{{ .BaseConfig.Moniker }}"
+
+# If this node is many blocks behind the tip of the chain, FastSync
+# allows them to catchup quickly by downloading blocks in parallel
+# and verifying their commits
+fast_sync = {{ .BaseConfig.FastSync }}
+
+# Database backend: leveldb | memdb
+db_backend = "{{ .BaseConfig.DBBackend }}"
+
+# Database directory
+db_path = "{{ .BaseConfig.DBPath }}"
+
+# Output level for logging, including package level options
+log_level = "{{ .BaseConfig.LogLevel }}"
+
+##### additional base config options #####
+
+# Path to the JSON file containing the initial validator set and other meta data
+genesis_file = "{{ .BaseConfig.Genesis }}"
+
+# Path to the JSON file containing the private key to use as a validator in the consensus protocol
+priv_validator_file = "{{ .BaseConfig.PrivValidator }}"
+
+# Path to the JSON file containing the private key to use for node authentication in the p2p protocol
+node_key_file = "{{ .BaseConfig.NodeKey}}"
+
+# Mechanism to connect to the ABCI application: socket | grpc
+abci = "{{ .BaseConfig.ABCI }}"
+
+# TCP or UNIX socket address for the profiling server to listen on
+prof_laddr = "{{ .BaseConfig.ProfListenAddress }}"
+
+# If true, query the ABCI app on connecting to a new peer
+# so the app can decide if we should keep the connection or not
+filter_peers = {{ .BaseConfig.FilterPeers }}
+
+##### advanced configuration options #####
+
+##### rpc server configuration options #####
[rpc]
-laddr = "tcp://0.0.0.0:46657"
+
+# TCP or UNIX socket address for the RPC server to listen on
+laddr = "{{ .RPC.ListenAddress }}"
+
+# TCP or UNIX socket address for the gRPC server to listen on
+# NOTE: This server only supports /broadcast_tx_commit
+grpc_laddr = "{{ .RPC.GRPCListenAddress }}"
+
+# Activate unsafe RPC commands like /dial_seeds and /unsafe_flush_mempool
+unsafe = {{ .RPC.Unsafe }}
+
+##### peer to peer configuration options #####
[p2p]
-laddr = "tcp://0.0.0.0:46656"
-seeds = ""
-`
-
-func defaultConfig(moniker string) string {
-	return strings.Replace(defaultConfigTmpl, "__MONIKER__", moniker, -1)
-}
+# Address to listen for incoming connections
+laddr = "{{ .P2P.ListenAddress }}"
+
+# Comma separated list of seed nodes to connect to
+seeds = ""
+
+# Comma separated list of nodes to keep persistent connections to
+persistent_peers = ""
+
+# Path to address book
+addr_book_file = "{{ .P2P.AddrBook }}"
+
+# Set true for strict address routability rules
+addr_book_strict = {{ .P2P.AddrBookStrict }}
+
+# Time to wait before flushing messages out on the connection, in ms
+flush_throttle_timeout = {{ .P2P.FlushThrottleTimeout }}
+
+# Maximum number of peers to connect to
+max_num_peers = {{ .P2P.MaxNumPeers }}
+
+# Maximum size of a message packet payload, in bytes
+max_msg_packet_payload_size = {{ .P2P.MaxMsgPacketPayloadSize }}
+
+# Rate at which packets can be sent, in bytes/second
+send_rate = {{ .P2P.SendRate }}
+
+# Rate at which packets can be received, in bytes/second
+recv_rate = {{ .P2P.RecvRate }}
+
+# Set true to enable the peer-exchange reactor
+pex = {{ .P2P.PexReactor }}
+
+# Seed mode, in which node constantly crawls the network and looks for
+# peers. If another node asks it for addresses, it responds and disconnects.
+#
+# Does not work if the peer-exchange reactor is disabled.
+seed_mode = {{ .P2P.SeedMode }}
+
+##### mempool configuration options #####
+[mempool]
+
+recheck = {{ .Mempool.Recheck }}
+recheck_empty = {{ .Mempool.RecheckEmpty }}
+broadcast = {{ .Mempool.Broadcast }}
+wal_dir = "{{ .Mempool.WalPath }}"
+
+##### consensus configuration options #####
+[consensus]
+
+wal_file = "{{ .Consensus.WalPath }}"
+wal_light = {{ .Consensus.WalLight }}
+
+# All timeouts are in milliseconds
+timeout_propose = {{ .Consensus.TimeoutPropose }}
+timeout_propose_delta = {{ .Consensus.TimeoutProposeDelta }}
+timeout_prevote = {{ .Consensus.TimeoutPrevote }}
+timeout_prevote_delta = {{ .Consensus.TimeoutPrevoteDelta }}
+timeout_precommit = {{ .Consensus.TimeoutPrecommit }}
+timeout_precommit_delta = {{ .Consensus.TimeoutPrecommitDelta }}
+timeout_commit = {{ .Consensus.TimeoutCommit }}
+
+# Make progress as soon as we have all the precommits (as if TimeoutCommit = 0)
+skip_timeout_commit = {{ .Consensus.SkipTimeoutCommit }}
+
+# BlockSize
+max_block_size_txs = {{ .Consensus.MaxBlockSizeTxs }}
+max_block_size_bytes = {{ .Consensus.MaxBlockSizeBytes }}
+
+# EmptyBlocks mode and possible interval between empty blocks in seconds
+create_empty_blocks = {{ .Consensus.CreateEmptyBlocks }}
+create_empty_blocks_interval = {{ .Consensus.CreateEmptyBlocksInterval }}
+
+# Reactor sleep duration parameters are in milliseconds
+peer_gossip_sleep_duration = {{ .Consensus.PeerGossipSleepDuration }}
+peer_query_maj23_sleep_duration = {{ .Consensus.PeerQueryMaj23SleepDuration }}
+
+##### transactions indexer configuration options #####
+[tx_index]
+
+# What indexer to use for transactions
+#
+# Options:
+# 1) "null" (default)
+# 2) "kv" - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
+indexer = "{{ .TxIndex.Indexer }}"
+
+# Comma-separated list of tags to index (by default the only tag is tx hash)
+#
+# It's recommended to index only a subset of tags due to possible memory
+# bloat. This is, of course, depends on the indexer's DB and the volume of
+# transactions.
+index_tags = "{{ .TxIndex.IndexTags }}"
+
+# When set to true, tells indexer to index all tags. Note this may be not
+# desirable (see the comment above). IndexTags has a precedence over
+# IndexAllTags (i.e. when given both, IndexTags will be indexed).
+index_all_tags = {{ .TxIndex.IndexAllTags }}
+`
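Because the generated file and the Config struct now share one source of truth (the mapstructure tags, per the note above), the output can be round-tripped. A hedged sketch using viper, which the tendermint CLI of this era relied on; the file location and the embedded-field access are assumptions:

```go
package main

import (
	"fmt"
	"path/filepath"

	"github.com/spf13/viper"

	cfg "github.com/tendermint/tendermint/config"
)

func main() {
	root := "/tmp/tm-root" // assumes cfg.EnsureRoot(root) already ran

	v := viper.New()
	v.SetConfigFile(filepath.Join(root, "config", "config.toml")) // assumed default path

	if err := v.ReadInConfig(); err != nil {
		panic(err)
	}

	conf := cfg.DefaultConfig()
	// Unmarshal matches TOML keys to the struct's mapstructure tags.
	if err := v.Unmarshal(conf); err != nil {
		panic(err)
	}
	fmt.Println(conf.Moniker, conf.P2P.ListenAddress)
}
```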
/****** these are for test settings ***********/

@@ -71,18 +241,21 @@ func ResetTestRoot(testName string) *Config {
	if err := cmn.EnsureDir(rootDir, 0700); err != nil {
		cmn.PanicSanity(err.Error())
	}
-	if err := cmn.EnsureDir(rootDir+"/data", 0700); err != nil {
+	if err := cmn.EnsureDir(filepath.Join(rootDir, defaultConfigDir), 0700); err != nil {
		cmn.PanicSanity(err.Error())
	}
+	if err := cmn.EnsureDir(filepath.Join(rootDir, defaultDataDir), 0700); err != nil {
+		cmn.PanicSanity(err.Error())
+	}

-	configFilePath := path.Join(rootDir, "config.toml")
-	genesisFilePath := path.Join(rootDir, "genesis.json")
-	privFilePath := path.Join(rootDir, "priv_validator.json")
+	baseConfig := DefaultBaseConfig()
+	configFilePath := filepath.Join(rootDir, defaultConfigFilePath)
+	genesisFilePath := filepath.Join(rootDir, baseConfig.Genesis)
+	privFilePath := filepath.Join(rootDir, baseConfig.PrivValidator)

	// Write default config file if missing.
	if !cmn.FileExists(configFilePath) {
-		// Ask user for moniker
-		cmn.MustWriteFile(configFilePath, []byte(testConfig("anonymous")), 0644)
+		writeConfigFile(configFilePath)
	}
	if !cmn.FileExists(genesisFilePath) {
		cmn.MustWriteFile(genesisFilePath, []byte(testGenesis), 0644)
@@ -94,28 +267,6 @@ func ResetTestRoot(testName string) *Config {
	return config
}

-var testConfigTmpl = `# This is a TOML config file.
-# For more information, see https://github.com/toml-lang/toml
-
-proxy_app = "dummy"
-moniker = "__MONIKER__"
-fast_sync = false
-db_backend = "memdb"
-log_level = "info"
-
-[rpc]
-laddr = "tcp://0.0.0.0:36657"
-
-[p2p]
-laddr = "tcp://0.0.0.0:36656"
-seeds = ""
-`
-
-func testConfig(moniker string) (testConfig string) {
-	testConfig = strings.Replace(testConfigTmpl, "__MONIKER__", moniker, -1)
-	return
-}
-
var testGenesis = `{
  "genesis_time": "0001-01-01T00:00:00.000Z",
  "chain_id": "tendermint_test",
config/toml_test.go

@@ -4,6 +4,7 @@ import (
	"io/ioutil"
	"os"
	"path/filepath"
+	"strings"
	"testing"

	"github.com/stretchr/testify/assert"
@@ -19,7 +20,7 @@ func ensureFiles(t *testing.T, rootDir string, files ...string) {
}

func TestEnsureRoot(t *testing.T) {
-	assert, require := assert.New(t), require.New(t)
+	require := require.New(t)

	// setup temp dir for test
	tmpDir, err := ioutil.TempDir("", "config-test")
@@ -30,15 +31,18 @@ func TestEnsureRoot(t *testing.T) {
	EnsureRoot(tmpDir)

	// make sure config is set properly
-	data, err := ioutil.ReadFile(filepath.Join(tmpDir, "config.toml"))
+	data, err := ioutil.ReadFile(filepath.Join(tmpDir, defaultConfigFilePath))
	require.Nil(err)
-	assert.Equal([]byte(defaultConfig("anonymous")), data)
+
+	if !checkConfig(string(data)) {
+		t.Fatalf("config file missing some information")
+	}

	ensureFiles(t, tmpDir, "data")
}

func TestEnsureTestRoot(t *testing.T) {
-	assert, require := assert.New(t), require.New(t)
+	require := require.New(t)

	testName := "ensureTestRoot"

@@ -47,11 +51,44 @@ func TestEnsureTestRoot(t *testing.T) {
	rootDir := cfg.RootDir

	// make sure config is set properly
-	data, err := ioutil.ReadFile(filepath.Join(rootDir, "config.toml"))
+	data, err := ioutil.ReadFile(filepath.Join(rootDir, defaultConfigFilePath))
	require.Nil(err)
-	assert.Equal([]byte(testConfig("anonymous")), data)
+
+	if !checkConfig(string(data)) {
+		t.Fatalf("config file missing some information")
+	}

	// TODO: make sure the cfg returned and testconfig are the same!

-	ensureFiles(t, rootDir, "data", "genesis.json", "priv_validator.json")
+	baseConfig := DefaultBaseConfig()
+	ensureFiles(t, rootDir, defaultDataDir, baseConfig.Genesis, baseConfig.PrivValidator)
}

+func checkConfig(configFile string) bool {
+	var valid bool
+
+	// list of words we expect in the config
+	var elems = []string{
+		"moniker",
+		"seeds",
+		"proxy_app",
+		"fast_sync",
+		"create_empty_blocks",
+		"peer",
+		"timeout",
+		"broadcast",
+		"send",
+		"addr",
+		"wal",
+		"propose",
+		"max",
+		"genesis",
+	}
+	for _, e := range elems {
+		if !strings.Contains(configFile, e) {
+			valid = false
+		} else {
+			valid = true
+		}
+	}
+	return valid
+}
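One quirk in checkConfig as committed: each iteration overwrites `valid`, so the function effectively reports only whether the last element of the list ("genesis") was found; an earlier missing word is masked by any later hit. A fail-fast variant (a sketch, not in the commit) avoids that:

```go
package config

import "strings"

// checkConfigStrict is a sketch of a stricter checkConfig: it returns
// false on the first missing word, so no miss can be masked.
func checkConfigStrict(configFile string, elems []string) bool {
	for _, e := range elems {
		if !strings.Contains(configFile, e) {
			return false
		}
	}
	return true
}
```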
consensus/byzantine_test.go

@@ -8,7 +8,6 @@ import (

	"github.com/stretchr/testify/require"
	crypto "github.com/tendermint/go-crypto"
-	data "github.com/tendermint/go-wire/data"
	"github.com/tendermint/tendermint/p2p"
	"github.com/tendermint/tendermint/types"
	cmn "github.com/tendermint/tmlibs/common"
@@ -33,7 +32,9 @@ func TestByzantine(t *testing.T) {
	css := randConsensusNet(N, "consensus_byzantine_test", newMockTickerFunc(false), newCounter)

	// give the byzantine validator a normal ticker
-	css[0].SetTimeoutTicker(NewTimeoutTicker())
+	ticker := NewTimeoutTicker()
+	ticker.SetLogger(css[0].Logger)
+	css[0].SetTimeoutTicker(ticker)

	switches := make([]*p2p.Switch, N)
	p2pLogger := logger.With("module", "p2p")
@@ -279,7 +280,7 @@ func NewByzantinePrivValidator(pv types.PrivValidator) *ByzantinePrivValidator {
	}
}

-func (privVal *ByzantinePrivValidator) GetAddress() data.Bytes {
+func (privVal *ByzantinePrivValidator) GetAddress() types.Address {
	return privVal.pv.GetAddress()
}

consensus/common_test.go

@@ -36,8 +36,8 @@ const (
)

// genesis, chain_id, priv_val
-var config *cfg.Config // NOTE: must be reset for each _test.go file
-var ensureTimeout = time.Second * 2
+var config *cfg.Config // NOTE: must be reset for each _test.go file
+var ensureTimeout = time.Second * 1 // must be in seconds because CreateEmptyBlocksInterval is

func ensureDir(dir string, mode os.FileMode) {
	if err := cmn.EnsureDir(dir, mode); err != nil {
@@ -74,10 +74,11 @@ func (vs *validatorStub) signVote(voteType byte, hash []byte, header types.PartS
		ValidatorAddress: vs.PrivValidator.GetAddress(),
		Height:           vs.Height,
		Round:            vs.Round,
+		Timestamp:        time.Now().UTC(),
		Type:             voteType,
		BlockID:          types.BlockID{hash, header},
	}
-	err := vs.PrivValidator.SignVote(config.ChainID, vote)
+	err := vs.PrivValidator.SignVote(config.ChainID(), vote)
	return vote, err
}

@@ -128,7 +129,7 @@ func decideProposal(cs1 *ConsensusState, vs *validatorStub, height int64, round
	// Make proposal
	polRound, polBlockID := cs1.Votes.POLInfo()
	proposal = types.NewProposal(height, round, blockParts.Header(), polRound, polBlockID)
-	if err := vs.SignProposal(config.ChainID, proposal); err != nil {
+	if err := vs.SignProposal(cs1.state.ChainID, proposal); err != nil {
		panic(err)
	}
	return
@@ -234,16 +235,16 @@ func subscribeToVoter(cs *ConsensusState, addr []byte) chan interface{} {
//-------------------------------------------------------------------------------
// consensus states

-func newConsensusState(state *sm.State, pv types.PrivValidator, app abci.Application) *ConsensusState {
+func newConsensusState(state sm.State, pv types.PrivValidator, app abci.Application) *ConsensusState {
	return newConsensusStateWithConfig(config, state, pv, app)
}

-func newConsensusStateWithConfig(thisConfig *cfg.Config, state *sm.State, pv types.PrivValidator, app abci.Application) *ConsensusState {
+func newConsensusStateWithConfig(thisConfig *cfg.Config, state sm.State, pv types.PrivValidator, app abci.Application) *ConsensusState {
	blockDB := dbm.NewMemDB()
	return newConsensusStateWithConfigAndBlockStore(thisConfig, state, pv, app, blockDB)
}

-func newConsensusStateWithConfigAndBlockStore(thisConfig *cfg.Config, state *sm.State, pv types.PrivValidator, app abci.Application, blockDB dbm.DB) *ConsensusState {
+func newConsensusStateWithConfigAndBlockStore(thisConfig *cfg.Config, state sm.State, pv types.PrivValidator, app abci.Application, blockDB dbm.DB) *ConsensusState {
	// Get BlockStore
	blockStore := bc.NewBlockStore(blockDB)

@@ -259,9 +260,14 @@ func newConsensusStateWithConfigAndBlockStore(thisConfig *cfg.Config, state *sm.
		mempool.EnableTxsAvailable()
	}

+	// mock the evidence pool
+	evpool := types.MockEvidencePool{}
+
	// Make ConsensusReactor
-	cs := NewConsensusState(thisConfig.Consensus, state, proxyAppConnCon, blockStore, mempool)
-	cs.SetLogger(log.TestingLogger())
+	stateDB := dbm.NewMemDB()
+	blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyAppConnCon, mempool, evpool)
+	cs := NewConsensusState(thisConfig.Consensus, state, blockExec, blockStore, mempool, evpool)
+	cs.SetLogger(log.TestingLogger().With("module", "consensus"))
	cs.SetPrivValidator(pv)

	eventBus := types.NewEventBus()
@@ -279,16 +285,6 @@ func loadPrivValidator(config *cfg.Config) *types.PrivValidatorFS {
	return privValidator
}

-func fixedConsensusStateDummy(config *cfg.Config, logger log.Logger) *ConsensusState {
-	stateDB := dbm.NewMemDB()
-	state, _ := sm.MakeGenesisStateFromFile(stateDB, config.GenesisFile())
-	state.SetLogger(logger.With("module", "state"))
-	privValidator := loadPrivValidator(config)
-	cs := newConsensusState(state, privValidator, dummy.NewDummyApplication())
-	cs.SetLogger(logger)
-	return cs
-}
-
func randConsensusState(nValidators int) (*ConsensusState, []*validatorStub) {
	// Get State
	state, privVals := randGenesisState(nValidators, false, 10)
@@ -296,7 +292,6 @@ func randConsensusState(nValidators int) (*ConsensusState, []*validatorStub) {
	vss := make([]*validatorStub, nValidators)

	cs := newConsensusState(state, privVals[0], counter.NewCounterApplication(true))
-	cs.SetLogger(log.TestingLogger())

	for i := 0; i < nValidators; i++ {
		vss[i] = NewValidatorStub(privVals[i], i)
@@ -342,18 +337,16 @@ func consensusLogger() log.Logger {
			}
		}
		return term.FgBgColor{}
-	})
+	}).With("module", "consensus")
}

func randConsensusNet(nValidators int, testName string, tickerFunc func() TimeoutTicker, appFunc func() abci.Application, configOpts ...func(*cfg.Config)) []*ConsensusState {
-	genDoc, privVals := randGenesisDoc(nValidators, false, 10)
+	genDoc, privVals := randGenesisDoc(nValidators, false, 30)
	css := make([]*ConsensusState, nValidators)
	logger := consensusLogger()
	for i := 0; i < nValidators; i++ {
-		db := dbm.NewMemDB() // each state needs its own db
-		state, _ := sm.MakeGenesisState(db, genDoc)
-		state.SetLogger(logger.With("module", "state", "validator", i))
-		state.Save()
+		stateDB := dbm.NewMemDB() // each state needs its own db
+		state, _ := sm.LoadStateFromDBOrGenesisDoc(stateDB, genDoc)
		thisConfig := ResetConfig(cmn.Fmt("%s_%d", testName, i))
		for _, opt := range configOpts {
			opt(thisConfig)
@@ -364,8 +357,8 @@ func randConsensusNet(nValidators int, testName string, tickerFunc func() Timeou
		app.InitChain(abci.RequestInitChain{Validators: vals})

		css[i] = newConsensusStateWithConfig(thisConfig, state, privVals[i], app)
-		css[i].SetLogger(logger.With("validator", i))
		css[i].SetTimeoutTicker(tickerFunc())
+		css[i].SetLogger(logger.With("validator", i, "module", "consensus"))
	}
	return css
}
@@ -376,10 +369,8 @@ func randConsensusNetWithPeers(nValidators, nPeers int, testName string, tickerF
	css := make([]*ConsensusState, nPeers)
	logger := consensusLogger()
	for i := 0; i < nPeers; i++ {
-		db := dbm.NewMemDB() // each state needs its own db
-		state, _ := sm.MakeGenesisState(db, genDoc)
-		state.SetLogger(logger.With("module", "state", "validator", i))
-		state.Save()
+		stateDB := dbm.NewMemDB() // each state needs its own db
+		state, _ := sm.LoadStateFromDBOrGenesisDoc(stateDB, genDoc)
		thisConfig := ResetConfig(cmn.Fmt("%s_%d", testName, i))
		ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
		var privVal types.PrivValidator
@@ -395,8 +386,8 @@ func randConsensusNetWithPeers(nValidators, nPeers int, testName string, tickerF
		app.InitChain(abci.RequestInitChain{Validators: vals})

		css[i] = newConsensusStateWithConfig(thisConfig, state, privVal, app)
-		css[i].SetLogger(logger.With("validator", i))
		css[i].SetTimeoutTicker(tickerFunc())
+		css[i].SetLogger(logger.With("validator", i, "module", "consensus"))
	}
	return css
}
@@ -426,19 +417,19 @@ func randGenesisDoc(numValidators int, randPower bool, minPower int64) (*types.G
		privValidators[i] = privVal
	}
	sort.Sort(types.PrivValidatorsByAddress(privValidators))

	return &types.GenesisDoc{
		GenesisTime: time.Now(),
-		ChainID:     config.ChainID,
+		ChainID:     config.ChainID(),
		Validators:  validators,
	}, privValidators
}

-func randGenesisState(numValidators int, randPower bool, minPower int64) (*sm.State, []*types.PrivValidatorFS) {
+func randGenesisState(numValidators int, randPower bool, minPower int64) (sm.State, []*types.PrivValidatorFS) {
	genDoc, privValidators := randGenesisDoc(numValidators, randPower, minPower)
+	s0, _ := sm.MakeGenesisState(genDoc)
	db := dbm.NewMemDB()
-	s0, _ := sm.MakeGenesisState(db, genDoc)
-	s0.SetLogger(log.TestingLogger().With("module", "state"))
-	s0.Save()
+	sm.SaveState(db, s0)
	return s0, privValidators
}

consensus/mempool_test.go

@@ -19,7 +19,7 @@ func init() {
	config = ResetConfig("consensus_mempool_test")
}

-func TestNoProgressUntilTxsAvailable(t *testing.T) {
+func TestMempoolNoProgressUntilTxsAvailable(t *testing.T) {
	config := ResetConfig("consensus_mempool_txs_available_test")
	config.Consensus.CreateEmptyBlocks = false
	state, privVals := randGenesisState(1, false, 10)
@@ -37,7 +37,7 @@ func TestNoProgressUntilTxsAvailable(t *testing.T) {
	ensureNoNewStep(newBlockCh)
}

-func TestProgressAfterCreateEmptyBlocksInterval(t *testing.T) {
+func TestMempoolProgressAfterCreateEmptyBlocksInterval(t *testing.T) {
	config := ResetConfig("consensus_mempool_txs_available_test")
	config.Consensus.CreateEmptyBlocksInterval = int(ensureTimeout.Seconds())
	state, privVals := randGenesisState(1, false, 10)
@@ -52,7 +52,7 @@ func TestProgressAfterCreateEmptyBlocksInterval(t *testing.T) {
	ensureNewStep(newBlockCh) // until the CreateEmptyBlocksInterval has passed
}

-func TestProgressInHigherRound(t *testing.T) {
+func TestMempoolProgressInHigherRound(t *testing.T) {
	config := ResetConfig("consensus_mempool_txs_available_test")
	config.Consensus.CreateEmptyBlocks = false
	state, privVals := randGenesisState(1, false, 10)
@@ -94,7 +94,7 @@ func deliverTxsRange(cs *ConsensusState, start, end int) {
	}
}

-func TestTxConcurrentWithCommit(t *testing.T) {
+func TestMempoolTxConcurrentWithCommit(t *testing.T) {
	state, privVals := randGenesisState(1, false, 10)
	cs := newConsensusState(state, privVals[0], NewCounterApplication())
	height, round := cs.Height, cs.Round
@@ -104,18 +104,19 @@ func TestTxConcurrentWithCommit(t *testing.T) {
	go deliverTxsRange(cs, 0, NTxs)

	startTestRound(cs, height, round)
-	ticker := time.NewTicker(time.Second * 20)
	for nTxs := 0; nTxs < NTxs; {
+		ticker := time.NewTicker(time.Second * 30)
		select {
		case b := <-newBlockCh:
-			nTxs += b.(types.TMEventData).Unwrap().(types.EventDataNewBlock).Block.Header.NumTxs
+			evt := b.(types.TMEventData).Unwrap().(types.EventDataNewBlock)
+			nTxs += int(evt.Block.Header.NumTxs)
		case <-ticker.C:
			panic("Timed out waiting to commit blocks with transactions")
		}
	}
}

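Moving the ticker inside the loop gives each block its own 30-second budget, but it also allocates a fresh ticker per iteration that is never stopped. That is harmless in a short test; the leak-free shape of the same idea uses a per-iteration timer that gets stopped (a sketch with hypothetical names, not part of the diff):

```go
package main

import "time"

// waitForTxs waits for `want` transactions, granting each block a fresh
// 30s deadline, and stops every timer it creates instead of leaking it.
func waitForTxs(newBlockCh <-chan int, want int) {
	for got := 0; got < want; {
		timer := time.NewTimer(30 * time.Second) // per-block deadline
		select {
		case n := <-newBlockCh:
			timer.Stop() // release the timer before the next iteration
			got += n
		case <-timer.C:
			panic("timed out waiting to commit blocks with transactions")
		}
	}
}

func main() {
	ch := make(chan int, 1)
	ch <- 1
	waitForTxs(ch, 1)
}
```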
-func TestRmBadTx(t *testing.T) {
+func TestMempoolRmBadTx(t *testing.T) {
	state, privVals := randGenesisState(1, false, 10)
	app := NewCounterApplication()
	cs := newConsensusState(state, privVals[0], app)
@@ -128,7 +129,7 @@ func TestRmBadTx(t *testing.T) {
	assert.False(t, resDeliver.IsErr(), cmn.Fmt("expected no error. got %v", resDeliver))

	resCommit := app.Commit()
-	assert.False(t, resCommit.IsErr(), cmn.Fmt("expected no error. got %v", resCommit))
+	assert.True(t, len(resCommit.Data) > 0)

	emptyMempoolCh := make(chan struct{})
	checkTxRespCh := make(chan struct{})
@@ -222,10 +223,10 @@ func txAsUint64(tx []byte) uint64 {
func (app *CounterApplication) Commit() abci.ResponseCommit {
	app.mempoolTxCount = app.txCount
	if app.txCount == 0 {
-		return abci.ResponseCommit{Code: code.CodeTypeOK}
+		return abci.ResponseCommit{}
	} else {
		hash := make([]byte, 8)
		binary.BigEndian.PutUint64(hash, uint64(app.txCount))
-		return abci.ResponseCommit{Code: code.CodeTypeOK, Data: hash}
+		return abci.ResponseCommit{Data: hash}
	}
}

consensus/reactor.go

@@ -82,7 +82,7 @@ func (conR *ConsensusReactor) OnStop() {

// SwitchToConsensus switches from fast_sync mode to consensus mode.
// It resets the state, turns off fast_sync, and starts the consensus state-machine
-func (conR *ConsensusReactor) SwitchToConsensus(state *sm.State, blocksSynced int) {
+func (conR *ConsensusReactor) SwitchToConsensus(state sm.State, blocksSynced int) {
	conR.Logger.Info("SwitchToConsensus")
	conR.conS.reconstructLastCommit(state)
	// NOTE: The line below causes broadcastNewRoundStepRoutine() to
@@ -205,7 +205,11 @@ func (conR *ConsensusReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte)
				return
			}
			// Peer claims to have a maj23 for some BlockID at H,R,S,
-			votes.SetPeerMaj23(msg.Round, msg.Type, ps.Peer.Key(), msg.BlockID)
+			err := votes.SetPeerMaj23(msg.Round, msg.Type, ps.Peer.ID(), msg.BlockID)
+			if err != nil {
+				conR.Switch.StopPeerForError(src, err)
+				return
+			}
			// Respond with a VoteSetBitsMessage showing which votes we have.
			// (and consequently shows which we don't have)
			var ourVotes *cmn.BitArray
@@ -242,12 +246,12 @@ func (conR *ConsensusReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte)
		switch msg := msg.(type) {
		case *ProposalMessage:
			ps.SetHasProposal(msg.Proposal)
-			conR.conS.peerMsgQueue <- msgInfo{msg, src.Key()}
+			conR.conS.peerMsgQueue <- msgInfo{msg, src.ID()}
		case *ProposalPOLMessage:
			ps.ApplyProposalPOLMessage(msg)
		case *BlockPartMessage:
			ps.SetHasProposalBlockPart(msg.Height, msg.Round, msg.Part.Index)
-			conR.conS.peerMsgQueue <- msgInfo{msg, src.Key()}
+			conR.conS.peerMsgQueue <- msgInfo{msg, src.ID()}
		default:
			conR.Logger.Error(cmn.Fmt("Unknown message type %v", reflect.TypeOf(msg)))
		}
@@ -267,7 +271,7 @@ func (conR *ConsensusReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte)
			ps.EnsureVoteBitArrays(height-1, lastCommitSize)
			ps.SetHasVote(msg.Vote)

-			cs.peerMsgQueue <- msgInfo{msg, src.Key()}
+			cs.peerMsgQueue <- msgInfo{msg, src.ID()}

		default:
			// don't punish (leave room for soft upgrades)
@@ -376,7 +380,7 @@ func (conR *ConsensusReactor) startBroadcastRoutine() error {
					edph := data.(types.TMEventData).Unwrap().(types.EventDataProposalHeartbeat)
					conR.broadcastProposalHeartbeatMessage(edph)
				}
-			case <-conR.Quit:
+			case <-conR.Quit():
				conR.eventBus.UnsubscribeAll(ctx, subscriber)
				return
			}
@@ -646,7 +650,6 @@ OUTER_LOOP:
			// Load the block commit for prs.Height,
			// which contains precommit signatures for prs.Height.
			commit := conR.conS.blockStore.LoadBlockCommit(prs.Height)
-			logger.Info("Loaded BlockCommit for catch-up", "height", prs.Height, "commit", commit)
			if ps.PickSendVote(commit) {
				logger.Debug("Picked Catchup commit to send", "height", prs.Height)
				continue OUTER_LOOP
@@ -916,6 +919,7 @@ func (ps *PeerState) SetHasProposalBlockPart(height int64, round int, index int)
func (ps *PeerState) PickSendVote(votes types.VoteSetReader) bool {
	if vote, ok := ps.PickVoteToSend(votes); ok {
		msg := &VoteMessage{vote}
+		ps.logger.Debug("Sending vote message", "ps", ps, "vote", vote)
		return ps.Peer.Send(VoteChannel, struct{ ConsensusMessage }{msg})
	}
	return false
@@ -1194,11 +1198,13 @@ func (ps *PeerState) String() string {

// StringIndented returns a string representation of the PeerState
func (ps *PeerState) StringIndented(indent string) string {
+	ps.mtx.Lock()
+	defer ps.mtx.Unlock()
	return fmt.Sprintf(`PeerState{
%s  Key %v
%s  PRS %v
%s}`,
-		indent, ps.Peer.Key(),
+		indent, ps.Peer.ID(),
		indent, ps.PeerRoundState.StringIndented(indent+"  "),
		indent)
}
@@ -1344,7 +1350,7 @@ type HasVoteMessage struct {

// String returns a string representation.
func (m *HasVoteMessage) String() string {
-	return fmt.Sprintf("[HasVote VI:%v V:{%v/%02d/%v} VI:%v]", m.Index, m.Height, m.Round, m.Type, m.Index)
+	return fmt.Sprintf("[HasVote VI:%v V:{%v/%02d/%v}]", m.Index, m.Height, m.Round, m.Type)
}

//-------------------------------------

consensus/reactor_test.go

@@ -4,17 +4,20 @@ import (
	"context"
	"fmt"
	"os"
	"runtime"
	"runtime/pprof"
	"sync"
	"testing"
	"time"

	"github.com/tendermint/abci/example/dummy"
	"github.com/tendermint/tmlibs/log"

	cfg "github.com/tendermint/tendermint/config"
	"github.com/tendermint/tendermint/p2p"
	"github.com/tendermint/tendermint/types"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

@@ -29,30 +32,24 @@ func startConsensusNet(t *testing.T, css []*ConsensusState, N int) ([]*Consensus
	reactors := make([]*ConsensusReactor, N)
	eventChans := make([]chan interface{}, N)
	eventBuses := make([]*types.EventBus, N)
-	logger := consensusLogger()
	for i := 0; i < N; i++ {
-		/*thisLogger, err := tmflags.ParseLogLevel("consensus:info,*:error", logger, "info")
+		/*logger, err := tmflags.ParseLogLevel("consensus:info,*:error", logger, "info")
		if err != nil { t.Fatal(err)}*/
-		thisLogger := logger
-
		reactors[i] = NewConsensusReactor(css[i], true) // so we dont start the consensus states
-		reactors[i].conS.SetLogger(thisLogger.With("validator", i))
-		reactors[i].SetLogger(thisLogger.With("validator", i))
-
-		eventBuses[i] = types.NewEventBus()
-		eventBuses[i].SetLogger(thisLogger.With("module", "events", "validator", i))
-		err := eventBuses[i].Start()
-		require.NoError(t, err)
+		reactors[i].SetLogger(css[i].Logger)
+
+		// eventBus is already started with the cs
+		eventBuses[i] = css[i].eventBus
		reactors[i].SetEventBus(eventBuses[i])

		eventChans[i] = make(chan interface{}, 1)
-		err = eventBuses[i].Subscribe(context.Background(), testSubscriber, types.EventQueryNewBlock, eventChans[i])
+		err := eventBuses[i].Subscribe(context.Background(), testSubscriber, types.EventQueryNewBlock, eventChans[i])
		require.NoError(t, err)
	}
	// make connected switches and start all reactors
	p2p.MakeConnectedSwitches(config.P2P, N, func(i int, s *p2p.Switch) *p2p.Switch {
		s.AddReactor("CONSENSUS", reactors[i])
+		s.SetLogger(reactors[i].conS.Logger.With("module", "p2p"))
		return s
	}, p2p.Connect2Switches)

@@ -67,25 +64,28 @@ func startConsensusNet(t *testing.T, css []*ConsensusState, N int) ([]*Consensus
	return reactors, eventChans, eventBuses
}

-func stopConsensusNet(reactors []*ConsensusReactor, eventBuses []*types.EventBus) {
-	for _, r := range reactors {
+func stopConsensusNet(logger log.Logger, reactors []*ConsensusReactor, eventBuses []*types.EventBus) {
+	logger.Info("stopConsensusNet", "n", len(reactors))
+	for i, r := range reactors {
+		logger.Info("stopConsensusNet: Stopping ConsensusReactor", "i", i)
		r.Switch.Stop()
	}
-	for _, b := range eventBuses {
+	for i, b := range eventBuses {
+		logger.Info("stopConsensusNet: Stopping eventBus", "i", i)
		b.Stop()
	}
+	logger.Info("stopConsensusNet: DONE", "n", len(reactors))
}

// Ensure a testnet makes blocks
-func TestReactor(t *testing.T) {
+func TestReactorBasic(t *testing.T) {
	N := 4
	css := randConsensusNet(N, "consensus_reactor_test", newMockTickerFunc(true), newCounter)
	reactors, eventChans, eventBuses := startConsensusNet(t, css, N)
-	defer stopConsensusNet(reactors, eventBuses)
+	defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)
	// wait till everyone makes the first new block
-	timeoutWaitGroup(t, N, func(wg *sync.WaitGroup, j int) {
+	timeoutWaitGroup(t, N, func(j int) {
		<-eventChans[j]
-		wg.Done()
	}, css)
}

@@ -97,7 +97,7 @@ func TestReactorProposalHeartbeats(t *testing.T) {
			c.Consensus.CreateEmptyBlocks = false
		})
	reactors, eventChans, eventBuses := startConsensusNet(t, css, N)
-	defer stopConsensusNet(reactors, eventBuses)
+	defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)
	heartbeatChans := make([]chan interface{}, N)
	var err error
	for i := 0; i < N; i++ {
@@ -106,9 +106,8 @@ func TestReactorProposalHeartbeats(t *testing.T) {
		require.NoError(t, err)
	}
	// wait till everyone sends a proposal heartbeat
-	timeoutWaitGroup(t, N, func(wg *sync.WaitGroup, j int) {
+	timeoutWaitGroup(t, N, func(j int) {
		<-heartbeatChans[j]
-		wg.Done()
	}, css)

	// send a tx
@@ -117,20 +116,20 @@ func TestReactorProposalHeartbeats(t *testing.T) {
	}

	// wait till everyone makes the first new block
-	timeoutWaitGroup(t, N, func(wg *sync.WaitGroup, j int) {
+	timeoutWaitGroup(t, N, func(j int) {
		<-eventChans[j]
-		wg.Done()
	}, css)
}

//-------------------------------------------------------------
// ensure we can make blocks despite cycling a validator set

-func TestVotingPowerChange(t *testing.T) {
+func TestReactorVotingPowerChange(t *testing.T) {
	nVals := 4
+	logger := log.TestingLogger()
	css := randConsensusNet(nVals, "consensus_voting_power_changes_test", newMockTickerFunc(true), newPersistentDummy)
	reactors, eventChans, eventBuses := startConsensusNet(t, css, nVals)
-	defer stopConsensusNet(reactors, eventBuses)
+	defer stopConsensusNet(logger, reactors, eventBuses)

	// map of active validators
	activeVals := make(map[string]struct{})
@@ -139,20 +138,19 @@ func TestVotingPowerChange(t *testing.T) {
	}

	// wait till everyone makes block 1
-	timeoutWaitGroup(t, nVals, func(wg *sync.WaitGroup, j int) {
+	timeoutWaitGroup(t, nVals, func(j int) {
		<-eventChans[j]
-		wg.Done()
	}, css)

	//---------------------------------------------------------------------------
-	t.Log("---------------------------- Testing changing the voting power of one validator a few times")
+	logger.Debug("---------------------------- Testing changing the voting power of one validator a few times")

	val1PubKey := css[0].privValidator.GetPubKey()
	updateValidatorTx := dummy.MakeValSetChangeTx(val1PubKey.Bytes(), 25)
	previousTotalVotingPower := css[0].GetRoundState().LastValidators.TotalVotingPower()

-	waitForAndValidateBlock(t, nVals, activeVals, eventChans, css, updateValidatorTx)
-	waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
+	waitForAndValidateBlockWithTx(t, nVals, activeVals, eventChans, css, updateValidatorTx)
+	waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
	waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)

@@ -164,7 +162,7 @@ func TestVotingPowerChange(t *testing.T) {
	previousTotalVotingPower = css[0].GetRoundState().LastValidators.TotalVotingPower()

-	waitForAndValidateBlock(t, nVals, activeVals, eventChans, css, updateValidatorTx)
-	waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
+	waitForAndValidateBlockWithTx(t, nVals, activeVals, eventChans, css, updateValidatorTx)
+	waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
	waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)

@@ -172,11 +170,11 @@ func TestVotingPowerChange(t *testing.T) {
		t.Fatalf("expected voting power to change (before: %d, after: %d)", previousTotalVotingPower, css[0].GetRoundState().LastValidators.TotalVotingPower())
	}

-	updateValidatorTx = dummy.MakeValSetChangeTx(val1PubKey.Bytes(), 100)
+	updateValidatorTx = dummy.MakeValSetChangeTx(val1PubKey.Bytes(), 26)
	previousTotalVotingPower = css[0].GetRoundState().LastValidators.TotalVotingPower()

-	waitForAndValidateBlock(t, nVals, activeVals, eventChans, css, updateValidatorTx)
-	waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
+	waitForAndValidateBlockWithTx(t, nVals, activeVals, eventChans, css, updateValidatorTx)
+	waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
	waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)

@@ -185,13 +183,15 @@ func TestVotingPowerChange(t *testing.T) {
	}
}

-func TestValidatorSetChanges(t *testing.T) {
+func TestReactorValidatorSetChanges(t *testing.T) {
	nPeers := 7
	nVals := 4
	css := randConsensusNetWithPeers(nVals, nPeers, "consensus_val_set_changes_test", newMockTickerFunc(true), newPersistentDummy)

+	logger := log.TestingLogger()
+
	reactors, eventChans, eventBuses := startConsensusNet(t, css, nPeers)
-	defer stopConsensusNet(reactors, eventBuses)
+	defer stopConsensusNet(logger, reactors, eventBuses)

	// map of active validators
	activeVals := make(map[string]struct{})
@@ -200,13 +200,12 @@ func TestValidatorSetChanges(t *testing.T) {
	}

	// wait till everyone makes block 1
-	timeoutWaitGroup(t, nPeers, func(wg *sync.WaitGroup, j int) {
+	timeoutWaitGroup(t, nPeers, func(j int) {
		<-eventChans[j]
-		wg.Done()
	}, css)

	//---------------------------------------------------------------------------
-	t.Log("---------------------------- Testing adding one validator")
+	logger.Info("---------------------------- Testing adding one validator")

	newValidatorPubKey1 := css[nVals].privValidator.GetPubKey()
	newValidatorTx1 := dummy.MakeValSetChangeTx(newValidatorPubKey1.Bytes(), testMinPower)
@@ -218,7 +217,7 @@ func TestValidatorSetChanges(t *testing.T) {

	// wait till everyone makes block 3.
	// it includes the commit for block 2, which is by the original validator set
-	waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
+	waitForAndValidateBlockWithTx(t, nPeers, activeVals, eventChans, css, newValidatorTx1)

	// wait till everyone makes block 4.
	// it includes the commit for block 3, which is by the original validator set
@@ -232,14 +231,14 @@ func TestValidatorSetChanges(t *testing.T) {
	waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, eventChans, css)

	//---------------------------------------------------------------------------
-	t.Log("---------------------------- Testing changing the voting power of one validator")
+	logger.Info("---------------------------- Testing changing the voting power of one validator")

	updateValidatorPubKey1 := css[nVals].privValidator.GetPubKey()
	updateValidatorTx1 := dummy.MakeValSetChangeTx(updateValidatorPubKey1.Bytes(), 25)
	previousTotalVotingPower := css[nVals].GetRoundState().LastValidators.TotalVotingPower()

-	waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css, updateValidatorTx1)
-	waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
+	waitForAndValidateBlockWithTx(t, nPeers, activeVals, eventChans, css, updateValidatorTx1)
+	waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
	waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, eventChans, css)

@@ -248,7 +247,7 @@ func TestValidatorSetChanges(t *testing.T) {
	}

	//---------------------------------------------------------------------------
-	t.Log("---------------------------- Testing adding two validators at once")
+	logger.Info("---------------------------- Testing adding two validators at once")

	newValidatorPubKey2 := css[nVals+1].privValidator.GetPubKey()
	newValidatorTx2 := dummy.MakeValSetChangeTx(newValidatorPubKey2.Bytes(), testMinPower)
@@ -257,20 +256,20 @@ func TestValidatorSetChanges(t *testing.T) {
	newValidatorTx3 := dummy.MakeValSetChangeTx(newValidatorPubKey3.Bytes(), testMinPower)

-	waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css, newValidatorTx2, newValidatorTx3)
-	waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
+	waitForAndValidateBlockWithTx(t, nPeers, activeVals, eventChans, css, newValidatorTx2, newValidatorTx3)
+	waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
	activeVals[string(newValidatorPubKey2.Address())] = struct{}{}
	activeVals[string(newValidatorPubKey3.Address())] = struct{}{}
	waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, eventChans, css)

	//---------------------------------------------------------------------------
-	t.Log("---------------------------- Testing removing two validators at once")
+	logger.Info("---------------------------- Testing removing two validators at once")

	removeValidatorTx2 := dummy.MakeValSetChangeTx(newValidatorPubKey2.Bytes(), 0)
	removeValidatorTx3 := dummy.MakeValSetChangeTx(newValidatorPubKey3.Bytes(), 0)

-	waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css, removeValidatorTx2, removeValidatorTx3)
-	waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
+	waitForAndValidateBlockWithTx(t, nPeers, activeVals, eventChans, css, removeValidatorTx2, removeValidatorTx3)
+	waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
	delete(activeVals, string(newValidatorPubKey2.Address()))
	delete(activeVals, string(newValidatorPubKey3.Address()))
@@ -287,61 +286,85 @@ func TestReactorWithTimeoutCommit(t *testing.T) {
	}

	reactors, eventChans, eventBuses := startConsensusNet(t, css, N-1)
-	defer stopConsensusNet(reactors, eventBuses)
+	defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)

	// wait till everyone makes the first new block
-	timeoutWaitGroup(t, N-1, func(wg *sync.WaitGroup, j int) {
+	timeoutWaitGroup(t, N-1, func(j int) {
		<-eventChans[j]
-		wg.Done()
	}, css)
}

func waitForAndValidateBlock(t *testing.T, n int, activeVals map[string]struct{}, eventChans []chan interface{}, css []*ConsensusState, txs ...[]byte) {
-	timeoutWaitGroup(t, n, func(wg *sync.WaitGroup, j int) {
-		defer wg.Done()
-
+	timeoutWaitGroup(t, n, func(j int) {
+		css[j].Logger.Debug("waitForAndValidateBlock")
		newBlockI, ok := <-eventChans[j]
		if !ok {
			return
		}
		newBlock := newBlockI.(types.TMEventData).Unwrap().(types.EventDataNewBlock).Block
-		t.Logf("Got block height=%v validator=%v", newBlock.Height, j)
+		css[j].Logger.Debug("waitForAndValidateBlock: Got block", "height", newBlock.Height)
		err := validateBlock(newBlock, activeVals)
-		if err != nil {
-			t.Fatal(err)
-		}
+		assert.Nil(t, err)
		for _, tx := range txs {
-			if err = css[j].mempool.CheckTx(tx, nil); err != nil {
-				t.Fatal(err)
-			}
+			css[j].mempool.CheckTx(tx, nil)
+			assert.Nil(t, err)
		}
	}, css)
}

+func waitForAndValidateBlockWithTx(t *testing.T, n int, activeVals map[string]struct{}, eventChans []chan interface{}, css []*ConsensusState, txs ...[]byte) {
+	timeoutWaitGroup(t, n, func(j int) {
+		ntxs := 0
+	BLOCK_TX_LOOP:
+		for {
+			css[j].Logger.Debug("waitForAndValidateBlockWithTx", "ntxs", ntxs)
+			newBlockI, ok := <-eventChans[j]
+			if !ok {
+				return
+			}
+			newBlock := newBlockI.(types.TMEventData).Unwrap().(types.EventDataNewBlock).Block
+			css[j].Logger.Debug("waitForAndValidateBlockWithTx: Got block", "height", newBlock.Height)
+			err := validateBlock(newBlock, activeVals)
+			assert.Nil(t, err)
+
+			// check that txs match the txs we're waiting for.
+			// note they could be spread over multiple blocks,
+			// but they should be in order.
+			for _, tx := range newBlock.Data.Txs {
+				assert.EqualValues(t, txs[ntxs], tx)
+				ntxs += 1
+			}
+
+			if ntxs == len(txs) {
+				break BLOCK_TX_LOOP
+			}
+		}
+
+	}, css)
+}
+
func waitForBlockWithUpdatedValsAndValidateIt(t *testing.T, n int, updatedVals map[string]struct{}, eventChans []chan interface{}, css []*ConsensusState) {
-	timeoutWaitGroup(t, n, func(wg *sync.WaitGroup, j int) {
-		defer wg.Done()
+	timeoutWaitGroup(t, n, func(j int) {

+		var newBlock *types.Block
+	LOOP:
+		for {
+			css[j].Logger.Debug("waitForBlockWithUpdatedValsAndValidateIt")
			newBlockI, ok := <-eventChans[j]
			if !ok {
				return
			}
			newBlock = newBlockI.(types.TMEventData).Unwrap().(types.EventDataNewBlock).Block
			if newBlock.LastCommit.Size() == len(updatedVals) {
-				t.Logf("Block with new validators height=%v validator=%v", newBlock.Height, j)
+				css[j].Logger.Debug("waitForBlockWithUpdatedValsAndValidateIt: Got block", "height", newBlock.Height)
				break LOOP
			} else {
-				t.Logf("Block with no new validators height=%v validator=%v. Skipping...", newBlock.Height, j)
+				css[j].Logger.Debug("waitForBlockWithUpdatedValsAndValidateIt: Got block with no new validators. Skipping", "height", newBlock.Height)
			}
		}

		err := validateBlock(newBlock, updatedVals)
-		if err != nil {
-			t.Fatal(err)
-		}
+		assert.Nil(t, err)
	}, css)
}

@@ -359,11 +382,14 @@ func validateBlock(block *types.Block, activeVals map[string]struct{}) error {
	return nil
}

-func timeoutWaitGroup(t *testing.T, n int, f func(*sync.WaitGroup, int), css []*ConsensusState) {
+func timeoutWaitGroup(t *testing.T, n int, f func(int), css []*ConsensusState) {
	wg := new(sync.WaitGroup)
	wg.Add(n)
	for i := 0; i < n; i++ {
-		go f(wg, i)
+		go func(j int) {
+			f(j)
+			wg.Done()
+		}(i)
	}

	done := make(chan struct{})
@@ -374,7 +400,7 @@ func timeoutWaitGroup(t *testing.T, n int, f func(*sync.WaitGroup, int), css []*

	// we're running many nodes in-process, possibly in in a virtual machine,
	// and spewing debug messages - making a block could take a while,
-	timeout := time.Second * 60
+	timeout := time.Second * 300

	select {
	case <-done:
@@ -385,7 +411,15 @@ func timeoutWaitGroup(t *testing.T, n int, f func(*sync.WaitGroup, int), css []*
			t.Log(cs.GetRoundState())
			t.Log("")
		}
		os.Stdout.Write([]byte("pprof.Lookup('goroutine'):\n"))
		pprof.Lookup("goroutine").WriteTo(os.Stdout, 1)
+		capture()
		panic("Timed out waiting for all validators to commit a block")
	}
}

+func capture() {
+	trace := make([]byte, 10240000)
+	count := runtime.Stack(trace, true)
+	fmt.Printf("Stack of %d bytes: %s\n", count, trace)
+}
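One nit in capture() as committed: runtime.Stack returns the number of bytes it wrote, but the Printf prints the whole 10 MB buffer, trailing zero bytes included. Slicing to count (a sketch, not in the commit) limits the output to the actual stack data:

```go
package main

import (
	"fmt"
	"runtime"
)

// capture dumps all goroutine stacks; trace[:count] restricts the print
// to the bytes runtime.Stack actually wrote.
func capture() {
	trace := make([]byte, 10240000)
	count := runtime.Stack(trace, true)
	fmt.Printf("Stack of %d bytes: %s\n", count, trace[:count])
}

func main() { capture() }
```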
consensus/replay.go

@@ -2,7 +2,6 @@ package consensus

import (
	"bytes"
-	"errors"
	"fmt"
	"hash/crc32"
	"io"
@@ -14,6 +13,7 @@ import (
	abci "github.com/tendermint/abci/types"
	//auto "github.com/tendermint/tmlibs/autofile"
	cmn "github.com/tendermint/tmlibs/common"
+	dbm "github.com/tendermint/tmlibs/db"
	"github.com/tendermint/tmlibs/log"

	"github.com/tendermint/tendermint/proxy"
@@ -61,21 +61,21 @@ func (cs *ConsensusState) readReplayMessage(msg *TimedWALMessage, newStepCh chan
			}
		}
	case msgInfo:
-		peerKey := m.PeerKey
-		if peerKey == "" {
-			peerKey = "local"
+		peerID := m.PeerID
+		if peerID == "" {
+			peerID = "local"
		}
		switch msg := m.Msg.(type) {
		case *ProposalMessage:
			p := msg.Proposal
			cs.Logger.Info("Replay: Proposal", "height", p.Height, "round", p.Round, "header",
-				p.BlockPartsHeader, "pol", p.POLRound, "peer", peerKey)
+				p.BlockPartsHeader, "pol", p.POLRound, "peer", peerID)
		case *BlockPartMessage:
-			cs.Logger.Info("Replay: BlockPart", "height", msg.Height, "round", msg.Round, "peer", peerKey)
+			cs.Logger.Info("Replay: BlockPart", "height", msg.Height, "round", msg.Round, "peer", peerID)
		case *VoteMessage:
			v := msg.Vote
			cs.Logger.Info("Replay: Vote", "height", v.Height, "round", v.Round, "type", v.Type,
-				"blockID", v.BlockID, "peer", peerKey)
+				"blockID", v.BlockID, "peer", peerID)
		}

		cs.handleMsg(m)
@@ -95,10 +95,13 @@ func (cs *ConsensusState) catchupReplay(csHeight int64) error {
	cs.replayMode = true
	defer func() { cs.replayMode = false }()

-	// Ensure that ENDHEIGHT for this height doesn't exist
-	// NOTE: This is just a sanity check. As far as we know things work fine without it,
-	// and Handshake could reuse ConsensusState if it weren't for this check (since we can crash after writing ENDHEIGHT).
-	gr, found, err := cs.wal.SearchForEndHeight(csHeight)
+	// Ensure that ENDHEIGHT for this height doesn't exist.
+	// NOTE: This is just a sanity check. As far as we know things work fine
+	// without it, and Handshake could reuse ConsensusState if it weren't for
+	// this check (since we can crash after writing ENDHEIGHT).
+	//
+	// Ignore data corruption errors since this is a sanity check.
+	gr, found, err := cs.wal.SearchForEndHeight(csHeight, &WALSearchOptions{IgnoreDataCorruptionErrors: true})
	if err != nil {
		return err
	}
@@ -112,14 +115,16 @@ func (cs *ConsensusState) catchupReplay(csHeight int64) error {
	}

	// Search for last height marker
-	gr, found, err = cs.wal.SearchForEndHeight(csHeight - 1)
+	//
+	// Ignore data corruption errors in previous heights because we only care about last height
+	gr, found, err = cs.wal.SearchForEndHeight(csHeight-1, &WALSearchOptions{IgnoreDataCorruptionErrors: true})
	if err == io.EOF {
		cs.Logger.Error("Replay: wal.group.Search returned EOF", "#ENDHEIGHT", csHeight-1)
	} else if err != nil {
		return err
	}
	if !found {
-		return errors.New(cmn.Fmt("Cannot replay height %d. WAL does not contain #ENDHEIGHT for %d.", csHeight, csHeight-1))
+		return fmt.Errorf("Cannot replay height %d. WAL does not contain #ENDHEIGHT for %d.", csHeight, csHeight-1)
	}
	defer gr.Close() // nolint: errcheck

@@ -132,9 +137,13 @@ func (cs *ConsensusState) catchupReplay(csHeight int64) error {
		msg, err = dec.Decode()
		if err == io.EOF {
			break
+		} else if IsDataCorruptionError(err) {
+			cs.Logger.Debug("data has been corrupted in last height of consensus WAL", "err", err, "height", csHeight)
+			panic(fmt.Sprintf("data has been corrupted (%v) in last height %d of consensus WAL", err, csHeight))
		} else if err != nil {
			return err
		}

		// NOTE: since the priv key is set when the msgs are received
		// it will attempt to eg double sign but we can just ignore it
		// since the votes will be replayed and we'll get to the next step
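The two searches above encode an invariant worth spelling out: to replay height H, the WAL must already contain #ENDHEIGHT for H-1 (the previous height completed) but must not yet contain it for H. A compressed sketch of that precondition as a standalone helper (hypothetical name, living in package consensus, using the same WAL API as the diff):

```go
// canReplay (hypothetical) captures catchupReplay's precondition:
// #ENDHEIGHT exists for h-1 but not yet for h.
func canReplay(wal WAL, h int64) (bool, error) {
	opts := &WALSearchOptions{IgnoreDataCorruptionErrors: true}

	gr, found, err := wal.SearchForEndHeight(h, opts)
	if err != nil {
		return false, err
	}
	if found {
		gr.Close() // height h already ended: nothing left to replay
		return false, nil
	}

	gr, found, err = wal.SearchForEndHeight(h-1, opts)
	if err != nil {
		return false, err
	}
	if found {
		gr.Close()
	}
	return found, nil
}
```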
@@ -178,15 +187,16 @@ func makeHeightSearchFunc(height int64) auto.SearchFunc {
// we were last and using the WAL to recover there

type Handshaker struct {
-	state  *sm.State
-	store  types.BlockStore
-	logger log.Logger
+	stateDB      dbm.DB
+	initialState sm.State
+	store        types.BlockStore
+	logger       log.Logger

	nBlocks int // number of blocks applied to the state
}

-func NewHandshaker(state *sm.State, store types.BlockStore) *Handshaker {
-	return &Handshaker{state, store, log.NewNopLogger(), 0}
+func NewHandshaker(stateDB dbm.DB, state sm.State, store types.BlockStore) *Handshaker {
+	return &Handshaker{stateDB, state, store, log.NewNopLogger(), 0}
}

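With the constructor now taking the state DB explicitly, wiring a Handshaker is short; a hedged sketch (the state, block store, and app connections are assumed to be loaded elsewhere, e.g. via sm.LoadStateFromDBOrGenesisDoc):

```go
package main

import (
	dbm "github.com/tendermint/tmlibs/db"
	"github.com/tendermint/tmlibs/log"

	"github.com/tendermint/tendermint/consensus"
	"github.com/tendermint/tendermint/proxy"
	sm "github.com/tendermint/tendermint/state"
	"github.com/tendermint/tendermint/types"
)

// handshake is a sketch of wiring the reworked Handshaker: the caller
// now supplies the state DB alongside the state itself.
func handshake(stateDB dbm.DB, state sm.State, store types.BlockStore, app proxy.AppConns) error {
	h := consensus.NewHandshaker(stateDB, state, store)
	h.SetLogger(log.NewNopLogger())
	return h.Handshake(app)
}
```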
func (h *Handshaker) SetLogger(l log.Logger) {
|
||||
@@ -202,7 +212,7 @@ func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
|
||||
// handshake is done via info request on the query conn
|
||||
res, err := proxyApp.Query().InfoSync(abci.RequestInfo{version.Version})
|
||||
if err != nil {
|
||||
return errors.New(cmn.Fmt("Error calling Info: %v", err))
|
||||
return fmt.Errorf("Error calling Info: %v", err)
|
||||
}
|
||||
|
||||
blockHeight := int64(res.LastBlockHeight)
|
||||
@@ -216,9 +226,9 @@ func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
|
||||
// TODO: check version
|
||||
|
||||
// replay blocks up to the latest in the blockstore
|
||||
_, err = h.ReplayBlocks(appHash, blockHeight, proxyApp)
|
||||
_, err = h.ReplayBlocks(h.initialState, appHash, blockHeight, proxyApp)
|
||||
if err != nil {
|
||||
return errors.New(cmn.Fmt("Error on replay: %v", err))
|
||||
return fmt.Errorf("Error on replay: %v", err)
|
||||
}
|
||||
|
||||
h.logger.Info("Completed ABCI Handshake - Tendermint and App are synced", "appHeight", blockHeight, "appHash", fmt.Sprintf("%X", appHash))
|
||||
@@ -230,23 +240,25 @@ func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
|
||||
|
||||
// Replay all blocks since appBlockHeight and ensure the result matches the current state.
|
||||
// Returns the final AppHash or an error
|
||||
func (h *Handshaker) ReplayBlocks(appHash []byte, appBlockHeight int64, proxyApp proxy.AppConns) ([]byte, error) {
|
||||
func (h *Handshaker) ReplayBlocks(state sm.State, appHash []byte, appBlockHeight int64, proxyApp proxy.AppConns) ([]byte, error) {
|
||||
|
||||
storeBlockHeight := h.store.Height()
|
||||
stateBlockHeight := h.state.LastBlockHeight
|
||||
stateBlockHeight := state.LastBlockHeight
|
||||
h.logger.Info("ABCI Replay Blocks", "appHeight", appBlockHeight, "storeHeight", storeBlockHeight, "stateHeight", stateBlockHeight)
|
||||
|
||||
// If appBlockHeight == 0 it means that we are at genesis and hence should send InitChain
|
||||
if appBlockHeight == 0 {
|
||||
validators := types.TM2PB.Validators(h.state.Validators)
|
||||
if _, err := proxyApp.Consensus().InitChainSync(abci.RequestInitChain{validators}); err != nil {
|
||||
validators := types.TM2PB.Validators(state.Validators)
|
||||
// TODO: get the genesis bytes (https://github.com/tendermint/tendermint/issues/1224)
|
||||
var genesisBytes []byte
|
||||
if _, err := proxyApp.Consensus().InitChainSync(abci.RequestInitChain{validators, genesisBytes}); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
// First handle edge cases and constraints on the storeBlockHeight
|
||||
if storeBlockHeight == 0 {
|
||||
return appHash, h.checkAppHash(appHash)
|
||||
return appHash, checkAppHash(state, appHash)
|
||||
|
||||
} else if storeBlockHeight < appBlockHeight {
|
||||
// the app should never be ahead of the store (but this is under app's control)
|
||||
@@ -261,6 +273,7 @@ func (h *Handshaker) ReplayBlocks(appHash []byte, appBlockHeight int64, proxyApp
|
||||
cmn.PanicSanity(cmn.Fmt("StoreBlockHeight (%d) > StateBlockHeight + 1 (%d)", storeBlockHeight, stateBlockHeight+1))
|
||||
}
|
||||
|
||||
var err error
|
||||
// Now either store is equal to state, or one ahead.
|
||||
// For each, consider all cases of where the app could be, given app <= store
|
||||
if storeBlockHeight == stateBlockHeight {
|
||||
@@ -268,11 +281,11 @@ func (h *Handshaker) ReplayBlocks(appHash []byte, appBlockHeight int64, proxyApp
|
||||
// Either the app is asking for replay, or we're all synced up.
|
||||
if appBlockHeight < storeBlockHeight {
|
||||
// the app is behind, so replay blocks, but no need to go through WAL (state is already synced to store)
|
||||
return h.replayBlocks(proxyApp, appBlockHeight, storeBlockHeight, false)
|
||||
return h.replayBlocks(state, proxyApp, appBlockHeight, storeBlockHeight, false)
|
||||
|
||||
} else if appBlockHeight == storeBlockHeight {
|
||||
// We're good!
|
||||
return appHash, h.checkAppHash(appHash)
|
||||
return appHash, checkAppHash(state, appHash)
|
||||
}
|
||||
|
||||
} else if storeBlockHeight == stateBlockHeight+1 {
|
||||
@@ -281,7 +294,7 @@ func (h *Handshaker) ReplayBlocks(appHash []byte, appBlockHeight int64, proxyApp
|
||||
if appBlockHeight < stateBlockHeight {
|
||||
// the app is further behind than it should be, so replay blocks
|
||||
// but leave the last block to go through the WAL
|
||||
return h.replayBlocks(proxyApp, appBlockHeight, storeBlockHeight, true)
|
||||
return h.replayBlocks(state, proxyApp, appBlockHeight, storeBlockHeight, true)
|
||||
|
||||
} else if appBlockHeight == stateBlockHeight {
|
||||
// We haven't run Commit (both the state and app are one block behind),
|
||||
@@ -289,14 +302,19 @@ func (h *Handshaker) ReplayBlocks(appHash []byte, appBlockHeight int64, proxyApp
|
||||
// NOTE: We could instead use the cs.WAL on cs.Start,
|
||||
// but we'd have to allow the WAL to replay a block that wrote it's ENDHEIGHT
|
||||
h.logger.Info("Replay last block using real app")
|
||||
return h.replayBlock(storeBlockHeight, proxyApp.Consensus())
|
||||
state, err = h.replayBlock(state, storeBlockHeight, proxyApp.Consensus())
|
||||
return state.AppHash, err
|
||||
|
||||
} else if appBlockHeight == storeBlockHeight {
|
||||
// We ran Commit, but didn't save the state, so replayBlock with mock app
|
||||
abciResponses := h.state.LoadABCIResponses()
|
||||
abciResponses, err := sm.LoadABCIResponses(h.stateDB, storeBlockHeight)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
mockApp := newMockProxyApp(appHash, abciResponses)
|
||||
h.logger.Info("Replay last block using mock app")
|
||||
return h.replayBlock(storeBlockHeight, mockApp)
|
||||
state, err = h.replayBlock(state, storeBlockHeight, mockApp)
|
||||
return state.AppHash, err
|
||||
}
|
||||
|
||||
}
|
||||
@@ -305,13 +323,14 @@ func (h *Handshaker) ReplayBlocks(appHash []byte, appBlockHeight int64, proxyApp
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
func (h *Handshaker) replayBlocks(proxyApp proxy.AppConns, appBlockHeight, storeBlockHeight int64, mutateState bool) ([]byte, error) {
|
||||
func (h *Handshaker) replayBlocks(state sm.State, proxyApp proxy.AppConns, appBlockHeight, storeBlockHeight int64, mutateState bool) ([]byte, error) {
|
||||
// App is further behind than it should be, so we need to replay blocks.
|
||||
// We replay all blocks from appBlockHeight+1.
|
||||
//
|
||||
// Note that we don't have an old version of the state,
|
||||
// so we by-pass state validation/mutation using sm.ExecCommitBlock.
|
||||
// This also means we won't be saving validator sets if they change during this period.
|
||||
// TODO: Load the historical information to fix this and just use state.ApplyBlock
|
||||
//
|
||||
// If mutateState == true, the final block is replayed with h.replayBlock()
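The loop this comment describes is elided by the hunk below. A minimal sketch of it, assuming `sm.ExecCommitBlock` takes the consensus connection, a block, and a logger and returns the resulting app hash (that exact signature is an assumption here, not confirmed by this diff):

```
// Sketch only: replay blocks appBlockHeight+1..finalBlock against the app,
// bypassing state validation/mutation as the comment above describes.
var appHash []byte
var err error
finalBlock := storeBlockHeight
if mutateState {
	finalBlock-- // leave the final block for h.replayBlock below
}
for i := appBlockHeight + 1; i <= finalBlock; i++ {
	h.logger.Info("Applying block", "height", i)
	block := h.store.LoadBlock(i)
	appHash, err = sm.ExecCommitBlock(proxyApp.Consensus(), block, h.logger)
	if err != nil {
		return nil, err
	}
	h.nBlocks++
}
```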

@@ -334,31 +353,37 @@ func (h *Handshaker) replayBlocks(proxyApp proxy.AppConns, appBlockHeight, store

if mutateState {
// sync the final block
return h.replayBlock(storeBlockHeight, proxyApp.Consensus())
state, err = h.replayBlock(state, storeBlockHeight, proxyApp.Consensus())
if err != nil {
return nil, err
}
appHash = state.AppHash
}

return appHash, h.checkAppHash(appHash)
return appHash, checkAppHash(state, appHash)
}

// ApplyBlock on the proxyApp with the last block.
func (h *Handshaker) replayBlock(height int64, proxyApp proxy.AppConnConsensus) ([]byte, error) {
mempool := types.MockMempool{}

func (h *Handshaker) replayBlock(state sm.State, height int64, proxyApp proxy.AppConnConsensus) (sm.State, error) {
block := h.store.LoadBlock(height)
meta := h.store.LoadBlockMeta(height)

if err := h.state.ApplyBlock(types.NopEventBus{}, proxyApp, block, meta.BlockID.PartsHeader, mempool); err != nil {
return nil, err
blockExec := sm.NewBlockExecutor(h.stateDB, h.logger, proxyApp, types.MockMempool{}, types.MockEvidencePool{})

var err error
state, err = blockExec.ApplyBlock(state, meta.BlockID, block)
if err != nil {
return sm.State{}, err
}

h.nBlocks += 1

return h.state.AppHash, nil
return state, nil
}

func (h *Handshaker) checkAppHash(appHash []byte) error {
if !bytes.Equal(h.state.AppHash, appHash) {
panic(errors.New(cmn.Fmt("Tendermint state.AppHash does not match AppHash after replay. Got %X, expected %X", appHash, h.state.AppHash)).Error())
func checkAppHash(state sm.State, appHash []byte) error {
if !bytes.Equal(state.AppHash, appHash) {
panic(fmt.Errorf("Tendermint state.AppHash does not match AppHash after replay. Got %X, expected %X", appHash, state.AppHash).Error())
}
return nil
}
@@ -400,5 +425,5 @@ func (mock *mockProxyApp) EndBlock(req abci.RequestEndBlock) abci.ResponseEndBlo
}

func (mock *mockProxyApp) Commit() abci.ResponseCommit {
return abci.ResponseCommit{Code: abci.CodeTypeOK, Data: mock.appHash}
return abci.ResponseCommit{Data: mock.appHash}
}
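Taken together, these hunks change the Handshaker from mutating a shared `*sm.State` to owning a state DB plus an immutable `sm.State` value. A minimal wiring sketch, assuming `dbType`, `config`, `clientCreator`, and `blockStore` are already set up as elsewhere in this diff:

```
// Sketch: construct the stateful Handshaker and hand it to the proxy app conns.
stateDB := dbm.NewDB("state", dbType, config.DBDir())
state, err := sm.MakeGenesisStateFromFile(config.GenesisFile())
if err != nil {
	cmn.Exit(err.Error())
}
handshaker := NewHandshaker(stateDB, state, blockStore)
proxyApp := proxy.NewAppConns(clientCreator, handshaker)
if err := proxyApp.Start(); err != nil {
	cmn.Exit(cmn.Fmt("Error starting proxy app conns: %v", err))
}
```

This mirrors the `newConsensusStateForReplay` changes in the next file.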

@@ -18,6 +18,7 @@ import (
"github.com/tendermint/tendermint/types"
cmn "github.com/tendermint/tmlibs/common"
dbm "github.com/tendermint/tmlibs/db"
"github.com/tendermint/tmlibs/log"
)

const (
@@ -104,11 +105,11 @@ type playback struct {
count int // how many lines/msgs into the file are we

// replays can be reset to beginning
fileName string // so we can close/reopen the file
genesisState *sm.State // so the replay session knows where to restart from
fileName string // so we can close/reopen the file
genesisState sm.State // so the replay session knows where to restart from
}

func newPlayback(fileName string, fp *os.File, cs *ConsensusState, genState *sm.State) *playback {
func newPlayback(fileName string, fp *os.File, cs *ConsensusState, genState sm.State) *playback {
return &playback{
cs: cs,
fp: fp,
@@ -123,7 +124,8 @@ func (pb *playback) replayReset(count int, newStepCh chan interface{}) error {
pb.cs.Stop()
pb.cs.Wait()

newCS := NewConsensusState(pb.cs.config, pb.genesisState.Copy(), pb.cs.proxyAppConn, pb.cs.blockStore, pb.cs.mempool)
newCS := NewConsensusState(pb.cs.config, pb.genesisState.Copy(), pb.cs.blockExec,
pb.cs.blockStore, pb.cs.mempool, pb.cs.evpool)
newCS.SetEventBus(pb.cs.eventBus)
newCS.startForReplay()

@@ -278,20 +280,21 @@ func (pb *playback) replayConsoleLoop() int {

// convenience for replay mode
func newConsensusStateForReplay(config cfg.BaseConfig, csConfig *cfg.ConsensusConfig) *ConsensusState {
dbType := dbm.DBBackendType(config.DBBackend)
// Get BlockStore
blockStoreDB := dbm.NewDB("blockstore", config.DBBackend, config.DBDir())
blockStoreDB := dbm.NewDB("blockstore", dbType, config.DBDir())
blockStore := bc.NewBlockStore(blockStoreDB)

// Get State
stateDB := dbm.NewDB("state", config.DBBackend, config.DBDir())
state, err := sm.MakeGenesisStateFromFile(stateDB, config.GenesisFile())
stateDB := dbm.NewDB("state", dbType, config.DBDir())
state, err := sm.MakeGenesisStateFromFile(config.GenesisFile())
if err != nil {
cmn.Exit(err.Error())
}

// Create proxyAppConn connection (consensus, mempool, query)
clientCreator := proxy.DefaultClientCreator(config.ProxyApp, config.ABCI, config.DBDir())
proxyApp := proxy.NewAppConns(clientCreator, NewHandshaker(state, blockStore))
proxyApp := proxy.NewAppConns(clientCreator, NewHandshaker(stateDB, state, blockStore))
err = proxyApp.Start()
if err != nil {
cmn.Exit(cmn.Fmt("Error starting proxy app conns: %v", err))
@@ -302,7 +305,11 @@ func newConsensusStateForReplay(config cfg.BaseConfig, csConfig *cfg.ConsensusCo
cmn.Exit(cmn.Fmt("Failed to start event bus: %v", err))
}

consensusState := NewConsensusState(csConfig, state.Copy(), proxyApp.Consensus(), blockStore, types.MockMempool{})
mempool, evpool := types.MockMempool{}, types.MockEvidencePool{}
blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyApp.Consensus(), mempool, evpool)

consensusState := NewConsensusState(csConfig, state.Copy(), blockExec,
blockStore, mempool, evpool)

consensusState.SetEventBus(eventBus)
return consensusState

@@ -44,13 +44,6 @@ func init() {
// the `Handshake Tests` are for failures in applying the block.
// With the help of the WAL, we can recover from it all!

// NOTE: Files in this dir are generated by running the `build.sh` therein.
// It's a simple way to generate wals for a single block, or multiple blocks, with random transactions,
// and different part sizes. The output is not deterministic.
// It should only have to be re-run if there is some breaking change to the consensus data structures (eg. blocks, votes)
// or to the behaviour of the app (eg. computes app hash differently)
var data_dir = path.Join(cmn.GoPath(), "src/github.com/tendermint/tendermint/consensus", "test_data")

//------------------------------------------------------------------------------------------
// WAL Tests

@@ -60,8 +53,7 @@ var data_dir = path.Join(cmn.GoPath(), "src/github.com/tendermint/tendermint/con

func startNewConsensusStateAndWaitForBlock(t *testing.T, lastBlockHeight int64, blockDB dbm.DB, stateDB dbm.DB) {
logger := log.TestingLogger()
state, _ := sm.GetState(stateDB, consensusReplayConfig.GenesisFile())
state.SetLogger(logger.With("module", "state"))
state, _ := sm.LoadStateFromDBOrGenesisFile(stateDB, consensusReplayConfig.GenesisFile())
privValidator := loadPrivValidator(consensusReplayConfig)
cs := newConsensusStateWithConfigAndBlockStore(consensusReplayConfig, state, privValidator, dummy.NewDummyApplication(), blockDB)
cs.SetLogger(logger)
@@ -72,9 +64,7 @@ func startNewConsensusStateAndWaitForBlock(t *testing.T, lastBlockHeight int64,

err := cs.Start()
require.NoError(t, err)
defer func() {
cs.Stop()
}()
defer cs.Stop()

// This is just a signal that we haven't halted; its not something contained
// in the WAL itself. Assuming the consensus state is running, replay of any
@@ -85,19 +75,19 @@ func startNewConsensusStateAndWaitForBlock(t *testing.T, lastBlockHeight int64,
require.NoError(t, err)
select {
case <-newBlockCh:
case <-time.After(10 * time.Second):
case <-time.After(60 * time.Second):
t.Fatalf("Timed out waiting for new block (see trace above)")
}
}

func sendTxs(cs *ConsensusState, ctx context.Context) {
i := 0
for {
for i := 0; i < 256; i++ {
select {
case <-ctx.Done():
return
default:
cs.mempool.CheckTx([]byte{byte(i)}, nil)
tx := []byte{byte(i)}
cs.mempool.CheckTx(tx, nil)
i++
}
}
@@ -107,23 +97,24 @@ func sendTxs(cs *ConsensusState, ctx context.Context) {
func TestWALCrash(t *testing.T) {
testCases := []struct {
name string
initFn func(*ConsensusState, context.Context)
initFn func(dbm.DB, *ConsensusState, context.Context)
heightToStop int64
}{
{"empty block",
func(cs *ConsensusState, ctx context.Context) {},
func(stateDB dbm.DB, cs *ConsensusState, ctx context.Context) {},
1},
{"block with a smaller part size",
func(cs *ConsensusState, ctx context.Context) {
func(stateDB dbm.DB, cs *ConsensusState, ctx context.Context) {
// XXX: is there a better way to change BlockPartSizeBytes?
params := cs.state.Params
params.BlockPartSizeBytes = 512
cs.state.Params = params
sendTxs(cs, ctx)
cs.state.ConsensusParams.BlockPartSizeBytes = 512
sm.SaveState(stateDB, cs.state)
go sendTxs(cs, ctx)
},
1},
{"many non-empty blocks",
sendTxs,
func(stateDB dbm.DB, cs *ConsensusState, ctx context.Context) {
go sendTxs(cs, ctx)
},
3},
}

@@ -134,7 +125,7 @@ func TestWALCrash(t *testing.T) {
}
}

func crashWALandCheckLiveness(t *testing.T, initFn func(*ConsensusState, context.Context), heightToStop int64) {
func crashWALandCheckLiveness(t *testing.T, initFn func(dbm.DB, *ConsensusState, context.Context), heightToStop int64) {
walPaniced := make(chan error)
crashingWal := &crashingWAL{panicCh: walPaniced, heightToStop: heightToStop}

@@ -147,8 +138,7 @@ LOOP:
// create consensus state from a clean slate
logger := log.NewNopLogger()
stateDB := dbm.NewMemDB()
state, _ := sm.MakeGenesisStateFromFile(stateDB, consensusReplayConfig.GenesisFile())
state.SetLogger(logger.With("module", "state"))
state, _ := sm.MakeGenesisStateFromFile(consensusReplayConfig.GenesisFile())
privValidator := loadPrivValidator(consensusReplayConfig)
blockDB := dbm.NewMemDB()
cs := newConsensusStateWithConfigAndBlockStore(consensusReplayConfig, state, privValidator, dummy.NewDummyApplication(), blockDB)
@@ -156,7 +146,7 @@ LOOP:

// start sending transactions
ctx, cancel := context.WithCancel(context.Background())
go initFn(cs, ctx)
initFn(stateDB, cs, ctx)

// clean up WAL file from the previous iteration
walFile := cs.config.WalFile()
@@ -253,8 +243,8 @@ func (w *crashingWAL) Save(m WALMessage) {
}

func (w *crashingWAL) Group() *auto.Group { return w.next.Group() }
func (w *crashingWAL) SearchForEndHeight(height int64) (gr *auto.GroupReader, found bool, err error) {
return w.next.SearchForEndHeight(height)
func (w *crashingWAL) SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error) {
return w.next.SearchForEndHeight(height, options)
}
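`SearchForEndHeight` now threads a `*WALSearchOptions` through to the underlying WAL. A short usage sketch, mirroring `makeBlockchainFromWAL` later in this diff (the `gr.Close()` and `io.EOF` handling are assumptions about the group-reader and decoder contracts, not confirmed here):

```
// Sketch: find where the ENDHEIGHT marker for height 0 was written,
// then decode WAL messages from that point on.
gr, found, err := wal.SearchForEndHeight(0, &WALSearchOptions{})
if err != nil || !found {
	// handle a WAL with no end-height marker
}
defer gr.Close() // assumption: caller closes the group reader
dec := NewWALDecoder(gr)
for {
	msg, err := dec.Decode()
	if err == io.EOF {
		break
	} else if err != nil {
		// handle decode error
	}
	_ = msg // process the message
}
```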

func (w *crashingWAL) Start() error { return w.next.Start() }
@@ -264,9 +254,13 @@ func (w *crashingWAL) Wait() { w.next.Wait() }
//------------------------------------------------------------------------------------------
// Handshake Tests

const (
NUM_BLOCKS = 6
)

var (
NUM_BLOCKS = 6 // number of blocks in the test_data/many_blocks.cswal
mempool = types.MockMempool{}
mempool = types.MockMempool{}
evpool = types.MockEvidencePool{}
)

//---------------------------------------
@@ -305,12 +299,12 @@ func TestHandshakeReplayNone(t *testing.T) {
}
}

func writeWAL(walMsgs []byte) string {
func tempWALWithData(data []byte) string {
walFile, err := ioutil.TempFile("", "wal")
if err != nil {
panic(fmt.Errorf("failed to create temp WAL file: %v", err))
}
_, err = walFile.Write(walMsgs)
_, err = walFile.Write(data)
if err != nil {
panic(fmt.Errorf("failed to write to temp WAL file: %v", err))
}
@@ -324,12 +318,11 @@ func writeWAL(walMsgs []byte) string {
func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
config := ResetConfig("proxy_test_")

// copy the many_blocks file
walBody, err := cmn.ReadFile(path.Join(data_dir, "many_blocks.cswal"))
walBody, err := WALWithNBlocks(NUM_BLOCKS)
if err != nil {
t.Fatal(err)
}
walFile := writeWAL(walBody)
walFile := tempWALWithData(walBody)
config.Consensus.SetWalFile(walFile)

privVal := types.LoadPrivValidatorFS(config.PrivValidatorFile())
@@ -342,17 +335,20 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
if err := wal.Start(); err != nil {
t.Fatal(err)
}
defer wal.Stop()

chain, commits, err := makeBlockchainFromWAL(wal)
if err != nil {
t.Fatalf(err.Error())
}

state, store := stateAndStore(config, privVal.GetPubKey())
stateDB, state, store := stateAndStore(config, privVal.GetPubKey())
store.chain = chain
store.commits = commits

// run the chain through state.ApplyBlock to build up the tendermint state
latestAppHash := buildTMStateFromChain(config, state, chain, mode)
state = buildTMStateFromChain(config, stateDB, state, chain, mode)
latestAppHash := state.AppHash

// make a new client creator
dummyApp := dummy.NewPersistentDummyApplication(path.Join(config.DBDir(), "2"))
@@ -361,16 +357,17 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
// run nBlocks against a new client to build up the app state.
// use a throwaway tendermint state
proxyApp := proxy.NewAppConns(clientCreator2, nil)
state, _ := stateAndStore(config, privVal.GetPubKey())
buildAppStateFromChain(proxyApp, state, chain, nBlocks, mode)
stateDB, state, _ := stateAndStore(config, privVal.GetPubKey())
buildAppStateFromChain(proxyApp, stateDB, state, chain, nBlocks, mode)
}

// now start the app using the handshake - it should sync
handshaker := NewHandshaker(state, store)
handshaker := NewHandshaker(stateDB, state, store)
proxyApp := proxy.NewAppConns(clientCreator2, handshaker)
if err := proxyApp.Start(); err != nil {
t.Fatalf("Error starting proxy app connections: %v", err)
}
defer proxyApp.Stop()

// get the latest app hash from the app
res, err := proxyApp.Query().InfoSync(abci.RequestInfo{""})
@@ -395,49 +392,55 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
}
}

func applyBlock(st *sm.State, blk *types.Block, proxyApp proxy.AppConns) {
testPartSize := st.Params.BlockPartSizeBytes
err := st.ApplyBlock(types.NopEventBus{}, proxyApp.Consensus(), blk, blk.MakePartSet(testPartSize).Header(), mempool)
func applyBlock(stateDB dbm.DB, st sm.State, blk *types.Block, proxyApp proxy.AppConns) sm.State {
testPartSize := st.ConsensusParams.BlockPartSizeBytes
blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyApp.Consensus(), mempool, evpool)

blkID := types.BlockID{blk.Hash(), blk.MakePartSet(testPartSize).Header()}
newState, err := blockExec.ApplyBlock(st, blkID, blk)
if err != nil {
panic(err)
}
return newState
}

func buildAppStateFromChain(proxyApp proxy.AppConns,
state *sm.State, chain []*types.Block, nBlocks int, mode uint) {
func buildAppStateFromChain(proxyApp proxy.AppConns, stateDB dbm.DB,
state sm.State, chain []*types.Block, nBlocks int, mode uint) {
// start a new app without handshake, play nBlocks blocks
if err := proxyApp.Start(); err != nil {
panic(err)
}
defer proxyApp.Stop()

validators := types.TM2PB.Validators(state.Validators)
if _, err := proxyApp.Consensus().InitChainSync(abci.RequestInitChain{validators}); err != nil {
// TODO: get the genesis bytes (https://github.com/tendermint/tendermint/issues/1224)
var genesisBytes []byte
if _, err := proxyApp.Consensus().InitChainSync(abci.RequestInitChain{validators, genesisBytes}); err != nil {
panic(err)
}

defer proxyApp.Stop()
switch mode {
case 0:
for i := 0; i < nBlocks; i++ {
block := chain[i]
applyBlock(state, block, proxyApp)
state = applyBlock(stateDB, state, block, proxyApp)
}
case 1, 2:
for i := 0; i < nBlocks-1; i++ {
block := chain[i]
applyBlock(state, block, proxyApp)
state = applyBlock(stateDB, state, block, proxyApp)
}

if mode == 2 {
// update the dummy height and apphash
// as if we ran commit but not
applyBlock(state, chain[nBlocks-1], proxyApp)
state = applyBlock(stateDB, state, chain[nBlocks-1], proxyApp)
}
}

}

func buildTMStateFromChain(config *cfg.Config, state *sm.State, chain []*types.Block, mode uint) []byte {
func buildTMStateFromChain(config *cfg.Config, stateDB dbm.DB, state sm.State, chain []*types.Block, mode uint) sm.State {
// run the whole chain against this client to build up the tendermint state
clientCreator := proxy.NewLocalClientCreator(dummy.NewPersistentDummyApplication(path.Join(config.DBDir(), "1")))
proxyApp := proxy.NewAppConns(clientCreator, nil) // sm.NewHandshaker(config, state, store, ReplayLastBlock))
@@ -447,35 +450,32 @@ func buildTMStateFromChain(config *cfg.Config, state *sm.State, chain []*types.B
defer proxyApp.Stop()

validators := types.TM2PB.Validators(state.Validators)
if _, err := proxyApp.Consensus().InitChainSync(abci.RequestInitChain{validators}); err != nil {
// TODO: get the genesis bytes (https://github.com/tendermint/tendermint/issues/1224)
var genesisBytes []byte
if _, err := proxyApp.Consensus().InitChainSync(abci.RequestInitChain{validators, genesisBytes}); err != nil {
panic(err)
}

var latestAppHash []byte

switch mode {
case 0:
// sync right up
for _, block := range chain {
applyBlock(state, block, proxyApp)
state = applyBlock(stateDB, state, block, proxyApp)
}

latestAppHash = state.AppHash
case 1, 2:
// sync up to the penultimate as if we stored the block.
// whether we commit or not depends on the appHash
for _, block := range chain[:len(chain)-1] {
applyBlock(state, block, proxyApp)
state = applyBlock(stateDB, state, block, proxyApp)
}

// apply the final block to a state copy so we can
// get the right next appHash but keep the state back
stateCopy := state.Copy()
applyBlock(stateCopy, chain[len(chain)-1], proxyApp)
latestAppHash = stateCopy.AppHash
applyBlock(stateDB, state, chain[len(chain)-1], proxyApp)
}

return latestAppHash
return state
}

//--------------------------
@@ -483,7 +483,7 @@ func buildTMStateFromChain(config *cfg.Config, state *sm.State, chain []*types.B

func makeBlockchainFromWAL(wal WAL) ([]*types.Block, []*types.Commit, error) {
// Search for height marker
gr, found, err := wal.SearchForEndHeight(0)
gr, found, err := wal.SearchForEndHeight(0, &WALSearchOptions{})
if err != nil {
return nil, nil, err
}
@@ -494,10 +494,13 @@ func makeBlockchainFromWAL(wal WAL) ([]*types.Block, []*types.Commit, error) {

// log.Notice("Build a blockchain by reading from the WAL")

var blockParts *types.PartSet
var blocks []*types.Block
var commits []*types.Commit

var thisBlockParts *types.PartSet
var thisBlockCommit *types.Commit
var height int64

dec := NewWALDecoder(gr)
for {
msg, err := dec.Decode()
@@ -513,42 +516,60 @@ func makeBlockchainFromWAL(wal WAL) ([]*types.Block, []*types.Commit, error) {
}

switch p := piece.(type) {
case *types.PartSetHeader:
case EndHeightMessage:
// if its not the first one, we have a full block
if blockParts != nil {
if thisBlockParts != nil {
var n int
block := wire.ReadBinary(&types.Block{}, blockParts.GetReader(), 0, &n, &err).(*types.Block)
block := wire.ReadBinary(&types.Block{}, thisBlockParts.GetReader(), 0, &n, &err).(*types.Block)
if err != nil {
panic(err)
}
if block.Height != height+1 {
panic(cmn.Fmt("read bad block from wal. got height %d, expected %d", block.Height, height+1))
}
commitHeight := thisBlockCommit.Precommits[0].Height
if commitHeight != height+1 {
panic(cmn.Fmt("commit doesnt match. got height %d, expected %d", commitHeight, height+1))
}
blocks = append(blocks, block)
commits = append(commits, thisBlockCommit)
height += 1
}
blockParts = types.NewPartSetFromHeader(*p)
case *types.PartSetHeader:
thisBlockParts = types.NewPartSetFromHeader(*p)
case *types.Part:
_, err := blockParts.AddPart(p, false)
_, err := thisBlockParts.AddPart(p, false)
if err != nil {
return nil, nil, err
}
case *types.Vote:
if p.Type == types.VoteTypePrecommit {
commit := &types.Commit{
thisBlockCommit = &types.Commit{
BlockID: p.BlockID,
Precommits: []*types.Vote{p},
}
commits = append(commits, commit)
}
}
}
// grab the last block too
var n int
block := wire.ReadBinary(&types.Block{}, blockParts.GetReader(), 0, &n, &err).(*types.Block)
block := wire.ReadBinary(&types.Block{}, thisBlockParts.GetReader(), 0, &n, &err).(*types.Block)
if err != nil {
panic(err)
}
if block.Height != height+1 {
panic(cmn.Fmt("read bad block from wal. got height %d, expected %d", block.Height, height+1))
}
commitHeight := thisBlockCommit.Precommits[0].Height
if commitHeight != height+1 {
panic(cmn.Fmt("commit doesnt match. got height %d, expected %d", commitHeight, height+1))
}
blocks = append(blocks, block)
commits = append(commits, thisBlockCommit)
return blocks, commits, nil
}

func readPieceFromWAL(msg *TimedWALMessage) interface{} {
// skip meta messages
if _, ok := msg.Msg.(EndHeightMessage); ok {
return nil
}

// for logging
switch m := msg.Msg.(type) {
case msgInfo:
@@ -560,19 +581,19 @@ func readPieceFromWAL(msg *TimedWALMessage) interface{} {
case *VoteMessage:
return msg.Vote
}
case EndHeightMessage:
return m
}

return nil
}

// fresh state and mock store
func stateAndStore(config *cfg.Config, pubKey crypto.PubKey) (*sm.State, *mockBlockStore) {
func stateAndStore(config *cfg.Config, pubKey crypto.PubKey) (dbm.DB, sm.State, *mockBlockStore) {
stateDB := dbm.NewMemDB()
state, _ := sm.MakeGenesisStateFromFile(stateDB, config.GenesisFile())
state.SetLogger(log.TestingLogger().With("module", "state"))

store := NewMockBlockStore(config, state.Params)
return state, store
state, _ := sm.MakeGenesisStateFromFile(config.GenesisFile())
store := NewMockBlockStore(config, state.ConsensusParams)
return stateDB, state, store
}
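`stateAndStore` now returns the backing state DB as well, and the test call sites thread it through to the handshaker and block executor. A sketch of the updated pattern, matching `testHandshakeReplay` above:

```
// Sketch: tests receive the stateDB alongside the state and mock store.
stateDB, state, store := stateAndStore(config, privVal.GetPubKey())
handshaker := NewHandshaker(stateDB, state, store)
proxyApp := proxy.NewAppConns(clientCreator2, handshaker)
```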

//----------------------------------

@@ -17,7 +17,7 @@ import (

cfg "github.com/tendermint/tendermint/config"
cstypes "github.com/tendermint/tendermint/consensus/types"
"github.com/tendermint/tendermint/proxy"
"github.com/tendermint/tendermint/p2p"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
)
@@ -47,8 +47,8 @@ var (

// msgs from the reactor which may update the state
type msgInfo struct {
Msg ConsensusMessage `json:"msg"`
PeerKey string `json:"peer_key"`
Msg ConsensusMessage `json:"msg"`
PeerID p2p.ID `json:"peer_key"`
}

// internally generated messages which may update the state
@@ -75,16 +75,18 @@ type ConsensusState struct {
privValidator types.PrivValidator // for signing votes

// services for creating and executing blocks
proxyAppConn proxy.AppConnConsensus
blockStore types.BlockStore
mempool types.Mempool
// TODO: encapsulate all of this in one "BlockManager"
blockExec *sm.BlockExecutor
blockStore types.BlockStore
mempool types.Mempool
evpool types.EvidencePool

// internal state
mtx sync.Mutex
cstypes.RoundState
state *sm.State // State until height-1.
state sm.State // State until height-1.

// state changes may be triggered by msgs from peers,
// state changes may be triggered by: msgs from peers,
// msgs from ourself, or by timeouts
peerMsgQueue chan msgInfo
internalMsgQueue chan msgInfo
@@ -113,10 +115,10 @@ type ConsensusState struct {
}

// NewConsensusState returns a new ConsensusState.
func NewConsensusState(config *cfg.ConsensusConfig, state *sm.State, proxyAppConn proxy.AppConnConsensus, blockStore types.BlockStore, mempool types.Mempool) *ConsensusState {
func NewConsensusState(config *cfg.ConsensusConfig, state sm.State, blockExec *sm.BlockExecutor, blockStore types.BlockStore, mempool types.Mempool, evpool types.EvidencePool) *ConsensusState {
cs := &ConsensusState{
config: config,
proxyAppConn: proxyAppConn,
blockExec: blockExec,
blockStore: blockStore,
mempool: mempool,
peerMsgQueue: make(chan msgInfo, msgQueueSize),
@@ -125,6 +127,7 @@ func NewConsensusState(config *cfg.ConsensusConfig, state *sm.State, proxyAppCon
done: make(chan struct{}),
doWALCatchup: true,
wal: nilWAL{},
evpool: evpool,
}
// set function defaults (may be overwritten before calling Start)
cs.decideProposal = cs.defaultDecideProposal
@@ -151,6 +154,7 @@ func (cs *ConsensusState) SetLogger(l log.Logger) {
// SetEventBus sets event bus.
func (cs *ConsensusState) SetEventBus(b *types.EventBus) {
cs.eventBus = b
cs.blockExec.SetEventBus(b)
}

// String returns a string.
@@ -160,7 +164,7 @@ func (cs *ConsensusState) String() string {
}

// GetState returns a copy of the chain state.
func (cs *ConsensusState) GetState() *sm.State {
func (cs *ConsensusState) GetState() sm.State {
cs.mtx.Lock()
defer cs.mtx.Unlock()
return cs.state.Copy()
@@ -300,17 +304,17 @@ func (cs *ConsensusState) OpenWAL(walFile string) (WAL, error) {

//------------------------------------------------------------
// Public interface for passing messages into the consensus state, possibly causing a state transition.
// If peerKey == "", the msg is considered internal.
// If peerID == "", the msg is considered internal.
// Messages are added to the appropriate queue (peer or internal).
// If the queue is full, the function may block.
// TODO: should these return anything or let callers just use events?

// AddVote inputs a vote.
func (cs *ConsensusState) AddVote(vote *types.Vote, peerKey string) (added bool, err error) {
if peerKey == "" {
func (cs *ConsensusState) AddVote(vote *types.Vote, peerID p2p.ID) (added bool, err error) {
if peerID == "" {
cs.internalMsgQueue <- msgInfo{&VoteMessage{vote}, ""}
} else {
cs.peerMsgQueue <- msgInfo{&VoteMessage{vote}, peerKey}
cs.peerMsgQueue <- msgInfo{&VoteMessage{vote}, peerID}
}

// TODO: wait for event?!
@@ -318,12 +322,12 @@ func (cs *ConsensusState) AddVote(vote *types.Vote, peerKey string) (added bool,
}

// SetProposal inputs a proposal.
func (cs *ConsensusState) SetProposal(proposal *types.Proposal, peerKey string) error {
func (cs *ConsensusState) SetProposal(proposal *types.Proposal, peerID p2p.ID) error {

if peerKey == "" {
if peerID == "" {
cs.internalMsgQueue <- msgInfo{&ProposalMessage{proposal}, ""}
} else {
cs.peerMsgQueue <- msgInfo{&ProposalMessage{proposal}, peerKey}
cs.peerMsgQueue <- msgInfo{&ProposalMessage{proposal}, peerID}
}

// TODO: wait for event?!
@@ -331,12 +335,12 @@ func (cs *ConsensusState) SetProposal(proposal *types.Proposal, peerKey string)
}

// AddProposalBlockPart inputs a part of the proposal block.
func (cs *ConsensusState) AddProposalBlockPart(height int64, round int, part *types.Part, peerKey string) error {
func (cs *ConsensusState) AddProposalBlockPart(height int64, round int, part *types.Part, peerID p2p.ID) error {

if peerKey == "" {
if peerID == "" {
cs.internalMsgQueue <- msgInfo{&BlockPartMessage{height, round, part}, ""}
} else {
cs.peerMsgQueue <- msgInfo{&BlockPartMessage{height, round, part}, peerKey}
cs.peerMsgQueue <- msgInfo{&BlockPartMessage{height, round, part}, peerID}
}

// TODO: wait for event?!
@@ -344,13 +348,13 @@ func (cs *ConsensusState) AddProposalBlockPart(height int64, round int, part *ty
}

// SetProposalAndBlock inputs the proposal and all block parts.
func (cs *ConsensusState) SetProposalAndBlock(proposal *types.Proposal, block *types.Block, parts *types.PartSet, peerKey string) error {
if err := cs.SetProposal(proposal, peerKey); err != nil {
func (cs *ConsensusState) SetProposalAndBlock(proposal *types.Proposal, block *types.Block, parts *types.PartSet, peerID p2p.ID) error {
if err := cs.SetProposal(proposal, peerID); err != nil {
return err
}
for i := 0; i < parts.Total(); i++ {
part := parts.GetPart(i)
if err := cs.AddProposalBlockPart(proposal.Height, proposal.Round, part, peerKey); err != nil {
if err := cs.AddProposalBlockPart(proposal.Height, proposal.Round, part, peerID); err != nil {
return err
}
}
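All public entry points now identify senders by `p2p.ID` rather than a raw peer-key string; the empty ID still routes to the internal queue. A sketch of feeding a proposal in from a peer, as the tests later in this diff do with a literal ID:

```
// Sketch: a non-empty peer ID goes to the peer queue, "" to the internal queue.
var peerID p2p.ID = "some peer" // assumption: p2p.ID is string-like, as used in the tests
if err := cs.SetProposalAndBlock(proposal, block, parts, peerID); err != nil {
	// the proposal or one of its block parts was rejected
}
```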
@@ -397,7 +401,7 @@ func (cs *ConsensusState) sendInternalMessage(mi msgInfo) {

// Reconstruct LastCommit from SeenCommit, which we saved along with the block,
// (which happens even before saving the state)
func (cs *ConsensusState) reconstructLastCommit(state *sm.State) {
func (cs *ConsensusState) reconstructLastCommit(state sm.State) {
if state.LastBlockHeight == 0 {
return
}
@@ -420,12 +424,12 @@ func (cs *ConsensusState) reconstructLastCommit(state *sm.State) {

// Updates ConsensusState and increments height to match that of state.
// The round becomes 0 and cs.Step becomes cstypes.RoundStepNewHeight.
func (cs *ConsensusState) updateToState(state *sm.State) {
func (cs *ConsensusState) updateToState(state sm.State) {
if cs.CommitRound > -1 && 0 < cs.Height && cs.Height != state.LastBlockHeight {
cmn.PanicSanity(cmn.Fmt("updateToState() expected state height of %v but found %v",
cs.Height, state.LastBlockHeight))
}
if cs.state != nil && cs.state.LastBlockHeight+1 != cs.Height {
if !cs.state.IsEmpty() && cs.state.LastBlockHeight+1 != cs.Height {
// This might happen when someone else is mutating cs.state.
// Someone forgot to pass in state.Copy() somewhere?!
cmn.PanicSanity(cmn.Fmt("Inconsistent cs.state.LastBlockHeight+1 %v vs cs.Height %v",
@@ -435,7 +439,7 @@ func (cs *ConsensusState) updateToState(state *sm.State) {
// If state isn't further out than cs.state, just ignore.
// This happens when SwitchToConsensus() is called in the reactor.
// We don't want to reset e.g. the Votes.
if cs.state != nil && (state.LastBlockHeight <= cs.state.LastBlockHeight) {
if !cs.state.IsEmpty() && (state.LastBlockHeight <= cs.state.LastBlockHeight) {
cs.Logger.Info("Ignoring updateToState()", "newHeight", state.LastBlockHeight+1, "oldHeight", cs.state.LastBlockHeight+1)
return
}
@@ -537,7 +541,7 @@ func (cs *ConsensusState) receiveRoutine(maxSteps int) {
// if the timeout is relevant to the rs
// go to the next step
cs.handleTimeout(ti, rs)
case <-cs.Quit:
case <-cs.Quit():

// NOTE: the internalMsgQueue may have signed messages from our
// priv_val that haven't hit the WAL, but its ok because
@@ -558,7 +562,7 @@ func (cs *ConsensusState) handleMsg(mi msgInfo) {
defer cs.mtx.Unlock()

var err error
msg, peerKey := mi.Msg, mi.PeerKey
msg, peerID := mi.Msg, mi.PeerID
switch msg := msg.(type) {
case *ProposalMessage:
// will not cause transition.
@@ -566,14 +570,14 @@ func (cs *ConsensusState) handleMsg(mi msgInfo) {
err = cs.setProposal(msg.Proposal)
case *BlockPartMessage:
// if the proposal is complete, we'll enterPrevote or tryFinalizeCommit
_, err = cs.addProposalBlockPart(msg.Height, msg.Part, peerKey != "")
_, err = cs.addProposalBlockPart(msg.Height, msg.Part, peerID != "")
if err != nil && msg.Round != cs.Round {
err = nil
}
case *VoteMessage:
// attempt to add the vote and dupeout the validator if its a duplicate signature
// if the vote gives us a 2/3-any or 2/3-one, we transition
err := cs.tryAddVote(msg.Vote, peerKey)
err := cs.tryAddVote(msg.Vote, peerID)
if err == ErrAddingVote {
// TODO: punish peer
}
@@ -588,7 +592,7 @@ func (cs *ConsensusState) handleMsg(mi msgInfo) {
cs.Logger.Error("Unknown msg type", reflect.TypeOf(msg))
}
if err != nil {
cs.Logger.Error("Error with msg", "type", reflect.TypeOf(msg), "peer", peerKey, "err", err, "msg", msg)
cs.Logger.Error("Error with msg", "type", reflect.TypeOf(msg), "peer", peerID, "err", err, "msg", msg)
}
}

@@ -767,17 +771,18 @@ func (cs *ConsensusState) enterPropose(height int64, round int) {
return
}

if !cs.isProposer() {
cs.Logger.Info("enterPropose: Not our turn to propose", "proposer", cs.Validators.GetProposer().Address, "privValidator", cs.privValidator)
if cs.Validators.HasAddress(cs.privValidator.GetAddress()) {
cs.Logger.Debug("This node is a validator")
} else {
cs.Logger.Debug("This node is not a validator")
}
} else {
// if not a validator, we're done
if !cs.Validators.HasAddress(cs.privValidator.GetAddress()) {
cs.Logger.Debug("This node is not a validator")
return
}
cs.Logger.Debug("This node is a validator")

if cs.isProposer() {
cs.Logger.Info("enterPropose: Our turn to propose", "proposer", cs.Validators.GetProposer().Address, "privValidator", cs.privValidator)
cs.Logger.Debug("This node is a validator")
cs.decideProposal(height, round)
} else {
cs.Logger.Info("enterPropose: Not our turn to propose", "proposer", cs.Validators.GetProposer().Address, "privValidator", cs.privValidator)
}
}

@@ -863,9 +868,10 @@ func (cs *ConsensusState) createProposalBlock() (block *types.Block, blockParts

// Mempool validated transactions
txs := cs.mempool.Reap(cs.config.MaxBlockSizeTxs)
return types.MakeBlock(cs.Height, cs.state.ChainID, txs, commit,
cs.state.LastBlockID, cs.state.Validators.Hash(),
cs.state.AppHash, cs.state.Params.BlockPartSizeBytes)
block, parts := cs.state.MakeBlock(cs.Height, txs, commit)
evidence := cs.evpool.PendingEvidence()
block.AddEvidence(evidence)
return block, parts
}

// Enter: `timeoutPropose` after entering Propose.
@@ -919,7 +925,7 @@ func (cs *ConsensusState) defaultDoPrevote(height int64, round int) {
}

// Validate proposal block
err := cs.state.ValidateBlock(cs.ProposalBlock)
err := cs.blockExec.ValidateBlock(cs.state, cs.ProposalBlock)
if err != nil {
// ProposalBlock is invalid, prevote nil.
logger.Error("enterPrevote: ProposalBlock is invalid", "err", err)
@@ -1027,7 +1033,7 @@ func (cs *ConsensusState) enterPrecommit(height int64, round int) {
if cs.ProposalBlock.HashesTo(blockID.Hash) {
cs.Logger.Info("enterPrecommit: +2/3 prevoted proposal block. Locking", "hash", blockID.Hash)
// Validate the block.
if err := cs.state.ValidateBlock(cs.ProposalBlock); err != nil {
if err := cs.blockExec.ValidateBlock(cs.state, cs.ProposalBlock); err != nil {
cmn.PanicConsensus(cmn.Fmt("enterPrecommit: +2/3 prevoted for an invalid block: %v", err))
}
cs.LockedRound = round
@@ -1162,7 +1168,7 @@ func (cs *ConsensusState) finalizeCommit(height int64) {
if !block.HashesTo(blockID.Hash) {
cmn.PanicSanity(cmn.Fmt("Cannot finalizeCommit, ProposalBlock does not hash to commit hash"))
}
if err := cs.state.ValidateBlock(block); err != nil {
if err := cs.blockExec.ValidateBlock(cs.state, block); err != nil {
cmn.PanicConsensus(cmn.Fmt("+2/3 committed an invalid block: %v", err))
}

@@ -1200,12 +1206,11 @@ func (cs *ConsensusState) finalizeCommit(height int64) {
// Create a copy of the state for staging
// and an event cache for txs
stateCopy := cs.state.Copy()
txEventBuffer := types.NewTxEventBuffer(cs.eventBus, block.NumTxs)

// Execute and commit the block, update and save the state, and update the mempool.
// All calls to the proxyAppConn come here.
// NOTE: the block.AppHash wont reflect these txs until the next block
err := stateCopy.ApplyBlock(txEventBuffer, cs.proxyAppConn, block, blockParts.Header(), cs.mempool)
var err error
stateCopy, err = cs.blockExec.ApplyBlock(stateCopy, types.BlockID{block.Hash(), blockParts.Header()}, block)
if err != nil {
cs.Logger.Error("Error on ApplyBlock. Did the application crash? Please restart tendermint", "err", err)
err := cmn.Kill()
@@ -1217,22 +1222,6 @@ func (cs *ConsensusState) finalizeCommit(height int64) {

fail.Fail() // XXX

// Fire event for new block.
// NOTE: If we fail before firing, these events will never fire
//
// TODO: Either
// * Fire before persisting state, in ApplyBlock
// * Fire on start up if we haven't written any new WAL msgs
// Both options mean we may fire more than once. Is that fine ?
cs.eventBus.PublishEventNewBlock(types.EventDataNewBlock{block})
cs.eventBus.PublishEventNewBlockHeader(types.EventDataNewBlockHeader{block.Header})
err = txEventBuffer.Flush()
if err != nil {
cs.Logger.Error("Failed to flush event buffer", "err", err)
}

fail.Fail() // XXX

// NewHeightStep!
cs.updateToState(stateCopy)

@@ -1305,7 +1294,7 @@ func (cs *ConsensusState) addProposalBlockPart(height int64, part *types.Part, v
var n int
var err error
cs.ProposalBlock = wire.ReadBinary(&types.Block{}, cs.ProposalBlockParts.GetReader(),
cs.state.Params.BlockSizeParams.MaxBytes, &n, &err).(*types.Block)
cs.state.ConsensusParams.BlockSize.MaxBytes, &n, &err).(*types.Block)
// NOTE: it's possible to receive complete proposal blocks for future rounds without having the proposal
cs.Logger.Info("Received complete proposal block", "height", cs.ProposalBlock.Height, "hash", cs.ProposalBlock.Hash())
if cs.Step == cstypes.RoundStepPropose && cs.isProposalComplete() {
@@ -1321,26 +1310,24 @@ func (cs *ConsensusState) addProposalBlockPart(height int64, part *types.Part, v
}

// Attempt to add the vote. if its a duplicate signature, dupeout the validator
func (cs *ConsensusState) tryAddVote(vote *types.Vote, peerKey string) error {
_, err := cs.addVote(vote, peerKey)
func (cs *ConsensusState) tryAddVote(vote *types.Vote, peerID p2p.ID) error {
_, err := cs.addVote(vote, peerID)
if err != nil {
// If the vote height is off, we'll just ignore it,
// But if it's a conflicting sig, broadcast evidence tx for slashing.
// But if it's a conflicting sig, add it to the cs.evpool.
// If it's otherwise invalid, punish peer.
if err == ErrVoteHeightMismatch {
return err
} else if _, ok := err.(*types.ErrVoteConflictingVotes); ok {
} else if voteErr, ok := err.(*types.ErrVoteConflictingVotes); ok {
if bytes.Equal(vote.ValidatorAddress, cs.privValidator.GetAddress()) {
cs.Logger.Error("Found conflicting vote from ourselves. Did you unsafe_reset a validator?", "height", vote.Height, "round", vote.Round, "type", vote.Type)
return err
}
cs.Logger.Error("Found conflicting vote. Publish evidence (TODO)", "height", vote.Height, "round", vote.Round, "type", vote.Type, "valAddr", vote.ValidatorAddress, "valIndex", vote.ValidatorIndex)

// TODO: track evidence for inclusion in a block

cs.evpool.AddEvidence(voteErr.DuplicateVoteEvidence)
return err
} else {
// Probably an invalid signature. Bad peer.
// Probably an invalid signature / Bad peer.
// Seems this can also err sometimes with "Unexpected step" - perhaps not from a bad peer ?
cs.Logger.Error("Error attempting to add vote", "err", err)
return ErrAddingVote
}
@@ -1350,7 +1337,7 @@ func (cs *ConsensusState) tryAddVote(vote *types.Vote, peerKey string) error {

//-----------------------------------------------------------------------------

func (cs *ConsensusState) addVote(vote *types.Vote, peerKey string) (added bool, err error) {
func (cs *ConsensusState) addVote(vote *types.Vote, peerID p2p.ID) (added bool, err error) {
cs.Logger.Debug("addVote", "voteHeight", vote.Height, "voteType", vote.Type, "valIndex", vote.ValidatorIndex, "csHeight", cs.Height)

// A precommit for the previous height?
@@ -1380,7 +1367,7 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerKey string) (added bool,
// A prevote/precommit for this height?
if vote.Height == cs.Height {
height := cs.Height
added, err = cs.Votes.AddVote(vote, peerKey)
added, err = cs.Votes.AddVote(vote, peerID)
if added {
cs.eventBus.PublishEventVote(types.EventDataVote{vote})

@@ -1466,6 +1453,7 @@ func (cs *ConsensusState) signVote(type_ byte, hash []byte, header types.PartSet
ValidatorIndex: valIndex,
Height: cs.Height,
Round: cs.Round,
Timestamp: time.Now().UTC(),
Type: type_,
BlockID: types.BlockID{hash, header},
}

@@ -55,7 +55,7 @@ x * TestHalt1 - if we see +2/3 precommits after timing out into new round, we sh
//----------------------------------------------------------------------------------------------------
// ProposeSuite

func TestProposerSelection0(t *testing.T) {
func TestStateProposerSelection0(t *testing.T) {
cs1, vss := randConsensusState(4)
height, round := cs1.Height, cs1.Round

@@ -89,7 +89,7 @@ func TestProposerSelection0(t *testing.T) {
}

// Now let's do it all again, but starting from round 2 instead of 0
func TestProposerSelection2(t *testing.T) {
func TestStateProposerSelection2(t *testing.T) {
cs1, vss := randConsensusState(4) // test needs more work for more than 3 validators

newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
@@ -118,7 +118,7 @@ func TestProposerSelection2(t *testing.T) {
}

// a non-validator should timeout into the prevote round
func TestEnterProposeNoPrivValidator(t *testing.T) {
func TestStateEnterProposeNoPrivValidator(t *testing.T) {
cs, _ := randConsensusState(1)
cs.SetPrivValidator(nil)
height, round := cs.Height, cs.Round
@@ -143,7 +143,7 @@ func TestEnterProposeNoPrivValidator(t *testing.T) {
}

// a validator should not timeout of the prevote round (TODO: unless the block is really big!)
func TestEnterProposeYesPrivValidator(t *testing.T) {
func TestStateEnterProposeYesPrivValidator(t *testing.T) {
cs, _ := randConsensusState(1)
height, round := cs.Height, cs.Round

@@ -179,12 +179,12 @@ func TestEnterProposeYesPrivValidator(t *testing.T) {
}
}

func TestBadProposal(t *testing.T) {
func TestStateBadProposal(t *testing.T) {
cs1, vss := randConsensusState(2)
height, round := cs1.Height, cs1.Round
vs2 := vss[1]

partSize := cs1.state.Params.BlockPartSizeBytes
partSize := cs1.state.ConsensusParams.BlockPartSizeBytes

proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
voteCh := subscribe(cs1.eventBus, types.EventQueryVote)
@@ -204,7 +204,7 @@ func TestBadProposal(t *testing.T) {
propBlock.AppHash = stateHash
propBlockParts := propBlock.MakePartSet(partSize)
proposal := types.NewProposal(vs2.Height, round, propBlockParts.Header(), -1, types.BlockID{})
if err := vs2.SignProposal(config.ChainID, proposal); err != nil {
if err := vs2.SignProposal(config.ChainID(), proposal); err != nil {
t.Fatal("failed to sign bad proposal", err)
}

@@ -239,7 +239,7 @@ func TestBadProposal(t *testing.T) {
// FullRoundSuite

// propose, prevote, and precommit a block
func TestFullRound1(t *testing.T) {
func TestStateFullRound1(t *testing.T) {
cs, vss := randConsensusState(1)
height, round := cs.Height, cs.Round

@@ -275,7 +275,7 @@ func TestFullRound1(t *testing.T) {
}

// nil is proposed, so prevote and precommit nil
func TestFullRoundNil(t *testing.T) {
func TestStateFullRoundNil(t *testing.T) {
cs, vss := randConsensusState(1)
height, round := cs.Height, cs.Round

@@ -293,7 +293,7 @@ func TestFullRoundNil(t *testing.T) {

// run through propose, prevote, precommit commit with two validators
// where the first validator has to wait for votes from the second
func TestFullRound2(t *testing.T) {
func TestStateFullRound2(t *testing.T) {
cs1, vss := randConsensusState(2)
vs2 := vss[1]
height, round := cs1.Height, cs1.Round
@@ -334,12 +334,12 @@ func TestFullRound2(t *testing.T) {

// two validators, 4 rounds.
// two vals take turns proposing. val1 locks on first one, precommits nil on everything else
func TestLockNoPOL(t *testing.T) {
func TestStateLockNoPOL(t *testing.T) {
cs1, vss := randConsensusState(2)
vs2 := vss[1]
height := cs1.Height

partSize := cs1.state.Params.BlockPartSizeBytes
partSize := cs1.state.ConsensusParams.BlockPartSizeBytes

timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
@@ -503,11 +503,11 @@ func TestLockNoPOL(t *testing.T) {
}

// 4 vals, one precommits, other 3 polka at next round, so we unlock and precomit the polka
func TestLockPOLRelock(t *testing.T) {
func TestStateLockPOLRelock(t *testing.T) {
cs1, vss := randConsensusState(4)
vs2, vs3, vs4 := vss[1], vss[2], vss[3]

partSize := cs1.state.Params.BlockPartSizeBytes
partSize := cs1.state.ConsensusParams.BlockPartSizeBytes

timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
@@ -618,11 +618,11 @@ func TestLockPOLRelock(t *testing.T) {
}

// 4 vals, one precommits, other 3 polka at next round, so we unlock and precomit the polka
func TestLockPOLUnlock(t *testing.T) {
func TestStateLockPOLUnlock(t *testing.T) {
cs1, vss := randConsensusState(4)
vs2, vs3, vs4 := vss[1], vss[2], vss[3]

partSize := cs1.state.Params.BlockPartSizeBytes
partSize := cs1.state.ConsensusParams.BlockPartSizeBytes

proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
@@ -715,11 +715,11 @@ func TestLockPOLUnlock(t *testing.T) {
// a polka at round 1 but we miss it
// then a polka at round 2 that we lock on
// then we see the polka from round 1 but shouldn't unlock
func TestLockPOLSafety1(t *testing.T) {
func TestStateLockPOLSafety1(t *testing.T) {
cs1, vss := randConsensusState(4)
vs2, vs3, vs4 := vss[1], vss[2], vss[3]

partSize := cs1.state.Params.BlockPartSizeBytes
partSize := cs1.state.ConsensusParams.BlockPartSizeBytes

proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
@@ -838,11 +838,11 @@ func TestLockPOLSafety1(t *testing.T) {

// What we want:
// dont see P0, lock on P1 at R1, dont unlock using P0 at R2
func TestLockPOLSafety2(t *testing.T) {
func TestStateLockPOLSafety2(t *testing.T) {
cs1, vss := randConsensusState(4)
vs2, vs3, vs4 := vss[1], vss[2], vss[3]

partSize := cs1.state.Params.BlockPartSizeBytes
partSize := cs1.state.ConsensusParams.BlockPartSizeBytes

proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
@@ -900,7 +900,7 @@ func TestLockPOLSafety2(t *testing.T) {

// in round 2 we see the polkad block from round 0
newProp := types.NewProposal(height, 2, propBlockParts0.Header(), 0, propBlockID1)
if err := vs3.SignProposal(config.ChainID, newProp); err != nil {
if err := vs3.SignProposal(config.ChainID(), newProp); err != nil {
t.Fatal(err)
}
if err := cs1.SetProposalAndBlock(newProp, propBlock0, propBlockParts0, "some peer"); err != nil {
@@ -937,7 +937,7 @@ func TestLockPOLSafety2(t *testing.T) {
// TODO: Slashing

/*
func TestSlashingPrevotes(t *testing.T) {
func TestStateSlashingPrevotes(t *testing.T) {
cs1, vss := randConsensusState(2)
vs2 := vss[1]

@@ -972,7 +972,7 @@ func TestSlashingPrevotes(t *testing.T) {
// XXX: Check for existence of Dupeout info
}

func TestSlashingPrecommits(t *testing.T) {
func TestStateSlashingPrecommits(t *testing.T) {
cs1, vss := randConsensusState(2)
vs2 := vss[1]

@@ -1017,11 +1017,11 @@ func TestSlashingPrecommits(t *testing.T) {

// 4 vals.
// we receive a final precommit after going into next round, but others might have gone to commit already!
func TestHalt1(t *testing.T) {
func TestStateHalt1(t *testing.T) {
cs1, vss := randConsensusState(4)
vs2, vs3, vs4 := vss[1], vss[2], vss[3]

partSize := cs1.state.Params.BlockPartSizeBytes
partSize := cs1.state.ConsensusParams.BlockPartSizeBytes

proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
|
||||
|
@@ -1,36 +0,0 @@
# Generating test data

To generate the data, run `build.sh`. See that script for more details.

Make sure to adjust the stepChanges in the testCases if the number of messages changes.
This sometimes happens for the `small_block2.cswal`, where the number of block parts changes between 4 and 5.

If you need to change the signatures, you can use a script as follows:
The privBytes comes from `config/tendermint_test/...`:

```
package main

import (
	"encoding/hex"
	"fmt"

	"github.com/tendermint/go-crypto"
)

func main() {
	signBytes, err := hex.DecodeString("7B22636861696E5F6964223A2274656E6465726D696E745F74657374222C22766F7465223A7B22626C6F636B5F68617368223A2242453544373939433846353044354645383533364334333932464443384537423342313830373638222C22626C6F636B5F70617274735F686561646572223A506172745365747B543A31204236323237323535464632307D2C22686569676874223A312C22726F756E64223A302C2274797065223A327D7D")
	if err != nil {
		panic(err)
	}
	privBytes, err := hex.DecodeString("27F82582AEFAE7AB151CFB01C48BB6C1A0DA78F9BDDA979A9F70A84D074EB07D3B3069C422E19688B45CBFAE7BB009FC0FA1B1EA86593519318B7214853803C8")
	if err != nil {
		panic(err)
	}
	privKey := crypto.PrivKeyEd25519{}
	copy(privKey[:], privBytes)
	signature := privKey.Sign(signBytes)
	fmt.Printf("Signature Bytes: %X\n", signature.Bytes())
}
```
@@ -1,148 +0,0 @@
#!/usr/bin/env bash

# Requires: killall command and jq JSON processor.

# Get the parent directory of where this script is.
SOURCE="${BASH_SOURCE[0]}"
while [ -h "$SOURCE" ] ; do SOURCE="$(readlink "$SOURCE")"; done
DIR="$( cd -P "$( dirname "$SOURCE" )/../.." && pwd )"

# Change into that dir because we expect that.
cd "$DIR" || exit 1

# Make sure we have a tendermint command.
if ! hash tendermint 2>/dev/null; then
  make install
fi

# Make sure we have a cutWALUntil binary.
cutWALUntil=./scripts/cutWALUntil/cutWALUntil
cutWALUntilDir=$(dirname $cutWALUntil)
if ! hash $cutWALUntil 2>/dev/null; then
  cd "$cutWALUntilDir" && go build && cd - || exit 1
fi

TMHOME=$(mktemp -d)
export TMHOME="$TMHOME"

if [[ ! -d "$TMHOME" ]]; then
  echo "Could not create temp directory"
  exit 1
else
  echo "TMHOME: ${TMHOME}"
fi

# TODO: eventually we should replace with `tendermint init --test`
DIR_TO_COPY=$HOME/.tendermint_test/consensus_state_test
if [ ! -d "$DIR_TO_COPY" ]; then
  echo "$DIR_TO_COPY does not exist. Please run: go test ./consensus"
  exit 1
fi
echo "==> Copying ${DIR_TO_COPY} to ${TMHOME} directory..."
cp -r "$DIR_TO_COPY"/* "$TMHOME"

# preserve original genesis file because later it will be modified (see small_block2)
cp "$TMHOME/genesis.json" "$TMHOME/genesis.json.bak"

function reset(){
  echo "==> Resetting tendermint..."
  tendermint unsafe_reset_all
  cp "$TMHOME/genesis.json.bak" "$TMHOME/genesis.json"
}

reset

# function empty_block(){
#   echo "==> Starting tendermint..."
#   tendermint node --proxy_app=persistent_dummy &> /dev/null &
#   sleep 5
#   echo "==> Killing tendermint..."
#   killall tendermint

#   echo "==> Copying WAL log..."
#   $cutWALUntil "$TMHOME/data/cs.wal/wal" 1 consensus/test_data/new_empty_block.cswal
#   mv consensus/test_data/new_empty_block.cswal consensus/test_data/empty_block.cswal

#   reset
# }

function many_blocks(){
  bash scripts/txs/random.sh 1000 36657 &> /dev/null &
  PID=$!
  echo "==> Starting tendermint..."
  tendermint node --proxy_app=persistent_dummy &> /dev/null &
  sleep 10
  echo "==> Killing tendermint..."
  kill -9 $PID
  killall tendermint

  echo "==> Copying WAL log..."
  $cutWALUntil "$TMHOME/data/cs.wal/wal" 6 consensus/test_data/new_many_blocks.cswal
  mv consensus/test_data/new_many_blocks.cswal consensus/test_data/many_blocks.cswal

  reset
}


# function small_block1(){
#   bash scripts/txs/random.sh 1000 36657 &> /dev/null &
#   PID=$!
#   echo "==> Starting tendermint..."
#   tendermint node --proxy_app=persistent_dummy &> /dev/null &
#   sleep 10
#   echo "==> Killing tendermint..."
#   kill -9 $PID
#   killall tendermint

#   echo "==> Copying WAL log..."
#   $cutWALUntil "$TMHOME/data/cs.wal/wal" 1 consensus/test_data/new_small_block1.cswal
#   mv consensus/test_data/new_small_block1.cswal consensus/test_data/small_block1.cswal

#   reset
# }


# # block part size = 512
# function small_block2(){
#   cat "$TMHOME/genesis.json" | jq '. + {consensus_params: {block_size_params: {max_bytes: 22020096}, block_gossip_params: {block_part_size_bytes: 512}}}' > "$TMHOME/new_genesis.json"
#   mv "$TMHOME/new_genesis.json" "$TMHOME/genesis.json"
#   bash scripts/txs/random.sh 1000 36657 &> /dev/null &
#   PID=$!
#   echo "==> Starting tendermint..."
#   tendermint node --proxy_app=persistent_dummy &> /dev/null &
#   sleep 5
#   echo "==> Killing tendermint..."
#   kill -9 $PID
#   killall tendermint

#   echo "==> Copying WAL log..."
#   $cutWALUntil "$TMHOME/data/cs.wal/wal" 1 consensus/test_data/new_small_block2.cswal
#   mv consensus/test_data/new_small_block2.cswal consensus/test_data/small_block2.cswal

#   reset
# }



case "$1" in
  # "small_block1")
  #   small_block1
  #   ;;
  # "small_block2")
  #   small_block2
  #   ;;
  # "empty_block")
  #   empty_block
  #   ;;
  "many_blocks")
    many_blocks
    ;;
  *)
    # small_block1
    # small_block2
    # empty_block
    many_blocks
esac

echo "==> Cleaning up..."
rm -rf "$TMHOME"
Binary file not shown.
@@ -68,7 +68,7 @@ func (t *timeoutTicker) Chan() <-chan timeoutInfo {
}

// ScheduleTimeout schedules a new timeout by sending on the internal tickChan.
// The timeoutRoutine is alwaya available to read from tickChan, so this won't block.
// The timeoutRoutine is always available to read from tickChan, so this won't block.
// The scheduling may fail if the timeoutRoutine has already scheduled a timeout for a later height/round/step.
func (t *timeoutTicker) ScheduleTimeout(ti timeoutInfo) {
t.tickChan <- ti
@@ -127,7 +127,7 @@ func (t *timeoutTicker) timeoutRoutine() {
// We can eliminate it by merging the timeoutRoutine into receiveRoutine
// and managing the timeouts ourselves with a millisecond ticker
go func(toi timeoutInfo) { t.tockChan <- toi }(ti)
case <-t.Quit:
case <-t.Quit():
return
}
}
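The comment fixed above states the design rule behind this ticker: ScheduleTimeout can always send because the routine is always ready to receive, and fired timeouts are handed off from a fresh goroutine so the loop keeps draining ticks. Below is a minimal, self-contained sketch of that tick/tock relay pattern; the channel names and the override-the-pending-timeout behavior are assumptions for illustration, not the Tendermint code itself.

```
package main

import (
	"fmt"
	"time"
)

// relay keeps draining tick so senders never block; when the timer fires,
// the result is handed to tock from a fresh goroutine so the loop can
// immediately accept the next tick.
func relay(tick <-chan time.Duration, tock chan<- struct{}, quit <-chan struct{}) {
	timer := time.NewTimer(time.Hour)
	timer.Stop()
	for {
		select {
		case d := <-tick:
			// a newer tick overrides any pending timeout
			if !timer.Stop() {
				select {
				case <-timer.C:
				default:
				}
			}
			timer.Reset(d)
		case <-timer.C:
			go func() { tock <- struct{}{} }()
		case <-quit:
			return
		}
	}
}

func main() {
	tick := make(chan time.Duration)
	tock := make(chan struct{})
	quit := make(chan struct{})
	go relay(tick, tock, quit)

	tick <- 10 * time.Millisecond // never blocks: relay is always receiving
	<-tock
	fmt.Println("timeout fired")
	close(quit)
}
```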
@@ -1,9 +1,11 @@
package types

import (
"fmt"
"strings"
"sync"

"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/types"
cmn "github.com/tendermint/tmlibs/common"
)
@@ -35,7 +37,7 @@ type HeightVoteSet struct {
mtx sync.Mutex
round int // max tracked round
roundVoteSets map[int]RoundVoteSet // keys: [0...round]
peerCatchupRounds map[string][]int // keys: peer.Key; values: at most 2 rounds
peerCatchupRounds map[p2p.ID][]int // keys: peer.ID; values: at most 2 rounds
}

func NewHeightVoteSet(chainID string, height int64, valSet *types.ValidatorSet) *HeightVoteSet {
@@ -53,7 +55,7 @@ func (hvs *HeightVoteSet) Reset(height int64, valSet *types.ValidatorSet) {
hvs.height = height
hvs.valSet = valSet
hvs.roundVoteSets = make(map[int]RoundVoteSet)
hvs.peerCatchupRounds = make(map[string][]int)
hvs.peerCatchupRounds = make(map[p2p.ID][]int)

hvs.addRound(0)
hvs.round = 0
@@ -101,8 +103,8 @@ func (hvs *HeightVoteSet) addRound(round int) {
}

// Duplicate votes return added=false, err=nil.
// By convention, peerKey is "" if origin is self.
func (hvs *HeightVoteSet) AddVote(vote *types.Vote, peerKey string) (added bool, err error) {
// By convention, peerID is "" if origin is self.
func (hvs *HeightVoteSet) AddVote(vote *types.Vote, peerID p2p.ID) (added bool, err error) {
hvs.mtx.Lock()
defer hvs.mtx.Unlock()
if !types.IsVoteTypeValid(vote.Type) {
@@ -110,10 +112,10 @@ func (hvs *HeightVoteSet) AddVote(vote *types.Vote, peerKey string) (added bool,
}
voteSet := hvs.getVoteSet(vote.Round, vote.Type)
if voteSet == nil {
if rndz := hvs.peerCatchupRounds[peerKey]; len(rndz) < 2 {
if rndz := hvs.peerCatchupRounds[peerID]; len(rndz) < 2 {
hvs.addRound(vote.Round)
voteSet = hvs.getVoteSet(vote.Round, vote.Type)
hvs.peerCatchupRounds[peerKey] = append(rndz, vote.Round)
hvs.peerCatchupRounds[peerID] = append(rndz, vote.Round)
} else {
// Peer has sent a vote that does not match our round,
// for more than one round. Bad peer!
@@ -206,15 +208,15 @@ func (hvs *HeightVoteSet) StringIndented(indent string) string {
// NOTE: if there are too many peers, or too much peer churn,
// this can cause memory issues.
// TODO: implement ability to remove peers too
func (hvs *HeightVoteSet) SetPeerMaj23(round int, type_ byte, peerID string, blockID types.BlockID) {
func (hvs *HeightVoteSet) SetPeerMaj23(round int, type_ byte, peerID p2p.ID, blockID types.BlockID) error {
hvs.mtx.Lock()
defer hvs.mtx.Unlock()
if !types.IsVoteTypeValid(type_) {
return
return fmt.Errorf("SetPeerMaj23: Invalid vote type %v", type_)
}
voteSet := hvs.getVoteSet(round, type_)
if voteSet == nil {
return
return nil // something we don't know about yet
}
voteSet.SetPeerMaj23(peerID, blockID)
return voteSet.SetPeerMaj23(peerID, blockID)
}
@@ -2,6 +2,7 @@ package types

import (
"testing"
"time"

cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/types"
@@ -17,7 +18,7 @@ func init() {
func TestPeerCatchupRounds(t *testing.T) {
valSet, privVals := types.RandValidatorSet(10, 1)

hvs := NewHeightVoteSet(config.ChainID, 1, valSet)
hvs := NewHeightVoteSet(config.ChainID(), 1, valSet)

vote999_0 := makeVoteHR(t, 1, 999, privVals, 0)
added, err := hvs.AddVote(vote999_0, "peer1")
@@ -54,10 +55,11 @@ func makeVoteHR(t *testing.T, height int64, round int, privVals []*types.PrivVal
ValidatorIndex: valIndex,
Height: height,
Round: round,
Timestamp: time.Now().UTC(),
Type: types.VoteTypePrecommit,
BlockID: types.BlockID{[]byte("fakehash"), types.PartSetHeader{}},
}
chainID := config.ChainID
chainID := config.ChainID()
err := privVal.SignVote(chainID, vote)
if err != nil {
panic(cmn.Fmt("Error signing vote: %v", err))
@@ -107,8 +107,8 @@ func (rs *RoundState) StringIndented(indent string) string {
%s  LockedRound: %v
%s  LockedBlock: %v %v
%s  Votes: %v
%s  LastCommit: %v
%s  LastValidators: %v
%s  LastCommit: %v
%s  LastValidators:%v
%s}`,
indent, rs.Height, rs.Round, rs.Step,
indent, rs.StartTime,
@@ -17,20 +17,21 @@ import (
cmn "github.com/tendermint/tmlibs/common"
)

const (
// must be greater than params.BlockGossip.BlockPartSizeBytes + a few bytes
maxMsgSizeBytes = 1024 * 1024 // 1MB
)

//--------------------------------------------------------
// types and functions for saving consensus messages

var (
walSeparator = []byte{55, 127, 6, 130} // 0x377f0682 - magic number
)

type TimedWALMessage struct {
Time time.Time `json:"time"` // for debugging purposes
Msg WALMessage `json:"msg"`
}

// EndHeightMessage marks the end of the given height inside WAL.
// @internal used by scripts/cutWALUntil util.
// @internal used by scripts/wal2json util.
type EndHeightMessage struct {
Height int64 `json:"height"`
}
@@ -52,7 +53,7 @@ var _ = wire.RegisterInterface(
type WAL interface {
Save(WALMessage)
Group() *auto.Group
SearchForEndHeight(height int64) (gr *auto.GroupReader, found bool, err error)
SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error)

Start() error
Stop() error
@@ -120,7 +121,7 @@ func (wal *baseWAL) Save(msg WALMessage) {
if wal.light {
// in light mode we only write new steps, timeouts, and our own votes (no proposals, block parts)
if mi, ok := msg.(msgInfo); ok {
if mi.PeerKey != "" {
if mi.PeerID != "" {
return
}
}
@@ -137,12 +138,18 @@ func (wal *baseWAL) Save(msg WALMessage) {
}
}

// WALSearchOptions are optional arguments to SearchForEndHeight.
type WALSearchOptions struct {
// IgnoreDataCorruptionErrors set to true will result in skipping data corruption errors.
IgnoreDataCorruptionErrors bool
}

// SearchForEndHeight searches for the EndHeightMessage with the height and
// returns an auto.GroupReader, whether it was found or not, and an error.
// Group reader will be nil if found equals false.
//
// CONTRACT: caller must close group reader.
func (wal *baseWAL) SearchForEndHeight(height int64) (gr *auto.GroupReader, found bool, err error) {
func (wal *baseWAL) SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error) {
var msg *TimedWALMessage

// NOTE: starting from the last file in the group because we're usually
@@ -162,7 +169,9 @@ func (wal *baseWAL) SearchForEndHeight(height int64) (gr *auto.GroupReader, foun
// check next file
break
}
if err != nil {
if options.IgnoreDataCorruptionErrors && IsDataCorruptionError(err) {
// do nothing
} else if err != nil {
gr.Close()
return nil, false, err
}
@@ -195,7 +204,7 @@ func NewWALEncoder(wr io.Writer) *WALEncoder {
}

// Encode writes the custom encoding of v to the stream.
func (enc *WALEncoder) Encode(v interface{}) error {
func (enc *WALEncoder) Encode(v *TimedWALMessage) error {
data := wire.BinaryBytes(v)

crc := crc32.Checksum(data, crc32c)
@@ -209,16 +218,30 @@ func (enc *WALEncoder) Encode(v interface{}) error {

_, err := enc.wr.Write(msg)

if err == nil {
// TODO [Anton Kaliaev 23 Oct 2017]: remove separator
_, err = enc.wr.Write(walSeparator)
}

return err
}

///////////////////////////////////////////////////////////////////////////////

// IsDataCorruptionError returns true if data has been corrupted inside WAL.
func IsDataCorruptionError(err error) bool {
_, ok := err.(DataCorruptionError)
return ok
}

// DataCorruptionError is an error that occurs if data on disk was corrupted.
type DataCorruptionError struct {
cause error
}

func (e DataCorruptionError) Error() string {
return fmt.Sprintf("DataCorruptionError[%v]", e.cause)
}

func (e DataCorruptionError) Cause() error {
return e.cause
}

// A WALDecoder reads and decodes custom-encoded WAL messages from an input
// stream. See WALEncoder for the format used.
//
@@ -237,7 +260,7 @@ func NewWALDecoder(rd io.Reader) *WALDecoder {
func (dec *WALDecoder) Decode() (*TimedWALMessage, error) {
b := make([]byte, 4)

n, err := dec.rd.Read(b)
_, err := dec.rd.Read(b)
if err == io.EOF {
return nil, err
}
@@ -247,7 +270,7 @@ func (dec *WALDecoder) Decode() (*TimedWALMessage, error) {
crc := binary.BigEndian.Uint32(b)

b = make([]byte, 4)
n, err = dec.rd.Read(b)
_, err = dec.rd.Read(b)
if err == io.EOF {
return nil, err
}
@@ -256,55 +279,40 @@ func (dec *WALDecoder) Decode() (*TimedWALMessage, error) {
}
length := binary.BigEndian.Uint32(b)

if length > maxMsgSizeBytes {
return nil, DataCorruptionError{fmt.Errorf("length %d exceeded maximum possible value of %d bytes", length, maxMsgSizeBytes)}
}

data := make([]byte, length)
n, err = dec.rd.Read(data)
_, err = dec.rd.Read(data)
if err == io.EOF {
return nil, err
}
if err != nil {
return nil, fmt.Errorf("not enough bytes for data: %v (want: %d, read: %v)", err, length, n)
return nil, fmt.Errorf("failed to read data: %v", err)
}

// check checksum before decoding data
actualCRC := crc32.Checksum(data, crc32c)
if actualCRC != crc {
return nil, fmt.Errorf("checksums do not match: (read: %v, actual: %v)", crc, actualCRC)
return nil, DataCorruptionError{fmt.Errorf("checksums do not match: (read: %v, actual: %v)", crc, actualCRC)}
}

var nn int
var res *TimedWALMessage // nolint: gosimple
res = wire.ReadBinary(&TimedWALMessage{}, bytes.NewBuffer(data), int(length), &nn, &err).(*TimedWALMessage)
if err != nil {
return nil, fmt.Errorf("failed to decode data: %v", err)
}

// TODO [Anton Kaliaev 23 Oct 2017]: remove separator
if err = readSeparator(dec.rd); err != nil {
return nil, err
return nil, DataCorruptionError{fmt.Errorf("failed to decode data: %v", err)}
}

return res, err
}

// readSeparator reads a separator from r. It returns any error from underlying
// reader or if it's not a separator.
func readSeparator(r io.Reader) error {
b := make([]byte, len(walSeparator))
_, err := r.Read(b)
if err != nil {
return fmt.Errorf("failed to read separator: %v", err)
}
if !bytes.Equal(b, walSeparator) {
return fmt.Errorf("not a separator: %v", b)
}
return nil
}

type nilWAL struct{}

func (nilWAL) Save(m WALMessage) {}
func (nilWAL) Group() *auto.Group { return nil }
func (nilWAL) SearchForEndHeight(height int64) (gr *auto.GroupReader, found bool, err error) {
func (nilWAL) SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error) {
return nil, false, nil
}
func (nilWAL) Start() error { return nil }
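The encoder/decoder changes above imply a simple record framing for each WAL entry: a 4-byte big-endian CRC-32 (Castagnoli), a 4-byte big-endian payload length (capped at maxMsgSizeBytes), the payload itself, and, for now, the 0x377f0682 separator. A minimal, self-contained sketch of just that framing follows; the helper names are assumptions, and the real code serializes a TimedWALMessage with go-wire rather than raw bytes.

```
package main

import (
	"bytes"
	"encoding/binary"
	"errors"
	"fmt"
	"hash/crc32"
	"io"
)

const maxMsgSizeBytes = 1024 * 1024 // 1MB, as in the diff above

var (
	crc32c       = crc32.MakeTable(crc32.Castagnoli)
	walSeparator = []byte{0x37, 0x7f, 0x06, 0x82}
)

// writeFrame emits crc(4) | length(4) | payload | separator.
func writeFrame(w io.Writer, payload []byte) error {
	var hdr [8]byte
	binary.BigEndian.PutUint32(hdr[0:4], crc32.Checksum(payload, crc32c))
	binary.BigEndian.PutUint32(hdr[4:8], uint32(len(payload)))
	if _, err := w.Write(hdr[:]); err != nil {
		return err
	}
	if _, err := w.Write(payload); err != nil {
		return err
	}
	_, err := w.Write(walSeparator)
	return err
}

// readFrame reads one frame back, verifying length bound and checksum.
func readFrame(r io.Reader) ([]byte, error) {
	var hdr [8]byte
	if _, err := io.ReadFull(r, hdr[:]); err != nil {
		return nil, err
	}
	crc := binary.BigEndian.Uint32(hdr[0:4])
	length := binary.BigEndian.Uint32(hdr[4:8])
	if length > maxMsgSizeBytes {
		return nil, errors.New("length exceeds maximum: corrupted frame")
	}
	payload := make([]byte, length)
	if _, err := io.ReadFull(r, payload); err != nil {
		return nil, err
	}
	if crc32.Checksum(payload, crc32c) != crc {
		return nil, errors.New("checksum mismatch: corrupted frame")
	}
	sep := make([]byte, len(walSeparator))
	if _, err := io.ReadFull(r, sep); err != nil {
		return nil, err
	}
	if !bytes.Equal(sep, walSeparator) {
		return nil, errors.New("missing frame separator")
	}
	return payload, nil
}

func main() {
	var buf bytes.Buffer
	if err := writeFrame(&buf, []byte("hello wal")); err != nil {
		panic(err)
	}
	payload, err := readFrame(&buf)
	if err != nil {
		panic(err)
	}
	fmt.Printf("decoded %q\n", payload)
}
```

Note the sketch uses io.ReadFull, since a bare Read may return fewer bytes than requested; the Decode in the diff relies on single Read calls, which is one reason the fuzzer below is useful.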
consensus/wal_fuzz.go (new file, 31 lines)
@@ -0,0 +1,31 @@
// +build gofuzz

package consensus

import (
"bytes"
"io"
)

func Fuzz(data []byte) int {
dec := NewWALDecoder(bytes.NewReader(data))
for {
msg, err := dec.Decode()
if err == io.EOF {
break
}
if err != nil {
if msg != nil {
panic("msg != nil on error")
}
return 0
}
var w bytes.Buffer
enc := NewWALEncoder(&w)
err = enc.Encode(msg)
if err != nil {
panic(err)
}
}
return 1
}
consensus/wal_generator.go (new file, 201 lines)
@@ -0,0 +1,201 @@
package consensus

import (
"bufio"
"bytes"
"fmt"
"math/rand"
"os"
"path/filepath"
"strings"
"time"

"github.com/pkg/errors"
"github.com/tendermint/abci/example/dummy"
bc "github.com/tendermint/tendermint/blockchain"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
auto "github.com/tendermint/tmlibs/autofile"
cmn "github.com/tendermint/tmlibs/common"
"github.com/tendermint/tmlibs/db"
"github.com/tendermint/tmlibs/log"
)

// WALWithNBlocks generates a consensus WAL. It does this by spinning up a
// stripped down version of node (proxy app, event bus, consensus state) with a
// persistent dummy application and special consensus wal instance
// (byteBufferWAL) and waits until numBlocks are created. Then it returns the WAL
// content.
func WALWithNBlocks(numBlocks int) (data []byte, err error) {
config := getConfig()

app := dummy.NewPersistentDummyApplication(filepath.Join(config.DBDir(), "wal_generator"))

logger := log.TestingLogger().With("wal_generator", "wal_generator")
logger.Info("generating WAL (last height msg excluded)", "numBlocks", numBlocks)

/////////////////////////////////////////////////////////////////////////////
// COPY PASTE FROM node.go WITH A FEW MODIFICATIONS
// NOTE: we can't import node package because of circular dependency
privValidatorFile := config.PrivValidatorFile()
privValidator := types.LoadOrGenPrivValidatorFS(privValidatorFile)
genDoc, err := types.GenesisDocFromFile(config.GenesisFile())
if err != nil {
return nil, errors.Wrap(err, "failed to read genesis file")
}
stateDB := db.NewMemDB()
blockStoreDB := db.NewMemDB()
state, err := sm.MakeGenesisState(genDoc)
if err != nil {
return nil, errors.Wrap(err, "failed to make genesis state")
}
blockStore := bc.NewBlockStore(blockStoreDB)
handshaker := NewHandshaker(stateDB, state, blockStore)
proxyApp := proxy.NewAppConns(proxy.NewLocalClientCreator(app), handshaker)
proxyApp.SetLogger(logger.With("module", "proxy"))
if err := proxyApp.Start(); err != nil {
return nil, errors.Wrap(err, "failed to start proxy app connections")
}
defer proxyApp.Stop()
eventBus := types.NewEventBus()
eventBus.SetLogger(logger.With("module", "events"))
if err := eventBus.Start(); err != nil {
return nil, errors.Wrap(err, "failed to start event bus")
}
defer eventBus.Stop()
mempool := types.MockMempool{}
evpool := types.MockEvidencePool{}
blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyApp.Consensus(), mempool, evpool)
consensusState := NewConsensusState(config.Consensus, state.Copy(), blockExec, blockStore, mempool, evpool)
consensusState.SetLogger(logger)
consensusState.SetEventBus(eventBus)
if privValidator != nil {
consensusState.SetPrivValidator(privValidator)
}
// END OF COPY PASTE
/////////////////////////////////////////////////////////////////////////////

// set consensus wal to buffered WAL, which will write all incoming msgs to buffer
var b bytes.Buffer
wr := bufio.NewWriter(&b)
numBlocksWritten := make(chan struct{})
wal := newByteBufferWAL(logger, NewWALEncoder(wr), int64(numBlocks), numBlocksWritten)
// see wal.go#103
wal.Save(EndHeightMessage{0})
consensusState.wal = wal

if err := consensusState.Start(); err != nil {
return nil, errors.Wrap(err, "failed to start consensus state")
}
defer consensusState.Stop()

select {
case <-numBlocksWritten:
wr.Flush()
return b.Bytes(), nil
case <-time.After(1 * time.Minute):
wr.Flush()
return b.Bytes(), fmt.Errorf("waited too long for tendermint to produce %d blocks (grep logs for `wal_generator`)", numBlocks)
}
}

// f**ing long, but unique for each test
func makePathname() string {
// get path
p, err := os.Getwd()
if err != nil {
panic(err)
}
// fmt.Println(p)
sep := string(filepath.Separator)
return strings.Replace(p, sep, "_", -1)
}

func randPort() int {
// returns between base and base + spread
base, spread := 20000, 20000
return base + rand.Intn(spread)
}

func makeAddrs() (string, string, string) {
start := randPort()
return fmt.Sprintf("tcp://0.0.0.0:%d", start),
fmt.Sprintf("tcp://0.0.0.0:%d", start+1),
fmt.Sprintf("tcp://0.0.0.0:%d", start+2)
}

// getConfig returns a config for test cases
func getConfig() *cfg.Config {
pathname := makePathname()
c := cfg.ResetTestRoot(fmt.Sprintf("%s_%d", pathname, cmn.RandInt()))

// and we use random ports to run in parallel
tm, rpc, grpc := makeAddrs()
c.P2P.ListenAddress = tm
c.RPC.ListenAddress = rpc
c.RPC.GRPCListenAddress = grpc
return c
}

// byteBufferWAL is a WAL which writes all msgs to a byte buffer. Writing stops
// when the heightToStop is reached. Client will be notified via
// signalWhenStopsTo channel.
type byteBufferWAL struct {
enc *WALEncoder
stopped bool
heightToStop int64
signalWhenStopsTo chan<- struct{}

logger log.Logger
}

// needed for determinism
var fixedTime, _ = time.Parse(time.RFC3339, "2017-01-02T15:04:05Z")

func newByteBufferWAL(logger log.Logger, enc *WALEncoder, nBlocks int64, signalStop chan<- struct{}) *byteBufferWAL {
return &byteBufferWAL{
enc: enc,
heightToStop: nBlocks,
signalWhenStopsTo: signalStop,
logger: logger,
}
}

// Save writes message to the internal buffer except when heightToStop is
// reached, in which case it will signal the caller via signalWhenStopsTo and
// skip writing.
func (w *byteBufferWAL) Save(m WALMessage) {
if w.stopped {
w.logger.Debug("WAL already stopped. Not writing message", "msg", m)
return
}

if endMsg, ok := m.(EndHeightMessage); ok {
w.logger.Debug("WAL write end height message", "height", endMsg.Height, "stopHeight", w.heightToStop)
if endMsg.Height == w.heightToStop {
w.logger.Debug("Stopping WAL at height", "height", endMsg.Height)
w.signalWhenStopsTo <- struct{}{}
w.stopped = true
return
}
}

w.logger.Debug("WAL Write Message", "msg", m)
err := w.enc.Encode(&TimedWALMessage{fixedTime, m})
if err != nil {
panic(fmt.Sprintf("failed to encode the msg %v", m))
}
}

func (w *byteBufferWAL) Group() *auto.Group {
panic("not implemented")
}
func (w *byteBufferWAL) SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error) {
return nil, false, nil
}

func (w *byteBufferWAL) Start() error { return nil }
func (w *byteBufferWAL) Stop() error { return nil }
func (w *byteBufferWAL) Wait() {}
@@ -3,7 +3,6 @@ package consensus
import (
"bytes"
"crypto/rand"
"path"
"sync"
"testing"
"time"
@@ -42,14 +41,20 @@ func TestWALEncoderDecoder(t *testing.T) {
}
}

func TestSearchForEndHeight(t *testing.T) {
wal, err := NewWAL(path.Join(data_dir, "many_blocks.cswal"), false)
func TestWALSearchForEndHeight(t *testing.T) {
walBody, err := WALWithNBlocks(6)
if err != nil {
t.Fatal(err)
}
walFile := tempWALWithData(walBody)

wal, err := NewWAL(walFile, false)
if err != nil {
t.Fatal(err)
}

h := int64(3)
gr, found, err := wal.SearchForEndHeight(h)
gr, found, err := wal.SearchForEndHeight(h, &WALSearchOptions{})
assert.NoError(t, err, cmn.Fmt("expected not to err on height %d", h))
assert.True(t, found, cmn.Fmt("expected to find end height for %d", h))
assert.NotNil(t, gr, "expected group not to be nil")
docs/.python-version (new file, 1 line)
@@ -0,0 +1 @@
2.7.14
@@ -12,6 +12,9 @@ BUILDDIR = _build
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

install:
@pip install -r requirements.txt

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
@@ -53,7 +53,7 @@ Now run ``abci-cli`` to see the list of commands:
-h, --help      help for abci-cli
-v, --verbose   print the command and results as if it were a console session

Use "abci-cli [command] --help" for more information about a command.
Use "abci-cli [command] --help" for more information about a command.


Dummy - First Example
@@ -66,14 +66,56 @@ The most important messages are ``deliver_tx``, ``check_tx``, and
``commit``, but there are others for convenience, configuration, and
information purposes.

Let's start a dummy application, which was installed at the same time as
``abci-cli`` above. The dummy just stores transactions in a merkle tree:
We'll start a dummy application, which was installed at the same time as
``abci-cli`` above. The dummy just stores transactions in a merkle tree.

Its code can be found `here <https://github.com/tendermint/abci/blob/master/cmd/abci-cli/abci-cli.go>`__ and looks like:

.. container:: toggle

   .. container:: header

      **Show/Hide Dummy Example**

   .. code-block:: go

      func cmdDummy(cmd *cobra.Command, args []string) error {
         logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))

         // Create the application - in memory or persisted to disk
         var app types.Application
         if flagPersist == "" {
            app = dummy.NewDummyApplication()
         } else {
            app = dummy.NewPersistentDummyApplication(flagPersist)
            app.(*dummy.PersistentDummyApplication).SetLogger(logger.With("module", "dummy"))
         }

         // Start the listener
         srv, err := server.NewServer(flagAddrD, flagAbci, app)
         if err != nil {
            return err
         }
         srv.SetLogger(logger.With("module", "abci-server"))
         if err := srv.Start(); err != nil {
            return err
         }

         // Wait forever
         cmn.TrapSignal(func() {
            // Cleanup
            srv.Stop()
         })
         return nil
      }

Start by running:

::

    abci-cli dummy

In another terminal, run
And in another terminal, run

::

@@ -187,6 +229,41 @@ Counter - Another Example
Now that we've got the hang of it, let's try another application, the
"counter" app.

Like the dummy app, its code can be found `here <https://github.com/tendermint/abci/blob/master/cmd/abci-cli/abci-cli.go>`__ and looks like:

.. container:: toggle

   .. container:: header

      **Show/Hide Counter Example**

   .. code-block:: go

      func cmdCounter(cmd *cobra.Command, args []string) error {

         app := counter.NewCounterApplication(flagSerial)

         logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))

         // Start the listener
         srv, err := server.NewServer(flagAddrC, flagAbci, app)
         if err != nil {
            return err
         }
         srv.SetLogger(logger.With("module", "abci-server"))
         if err := srv.Start(); err != nil {
            return err
         }

         // Wait forever
         cmn.TrapSignal(func() {
            // Cleanup
            srv.Stop()
         })
         return nil
      }


The counter app doesn't use a Merkle tree, it just counts how many times
we've sent a transaction, asked for a hash, or committed the state. The
result of ``commit`` is just the number of transactions sent.
@@ -261,7 +338,7 @@ But the ultimate flexibility comes from being able to write the
application easily in any language.

We have implemented the counter in a number of languages (see the
example directory).
`example directory <https://github.com/tendermint/abci/tree/master/example>`__).

To run the Node JS version, ``cd`` to ``example/js`` and run

@@ -289,4 +366,4 @@ its own pattern of messages.

For more information, see the `application developers
guide <./app-development.html>`__. For examples of running an ABCI
app with Tendermint, see the `getting started
guide <./getting-started.html>`__.
guide <./getting-started.html>`__. Next is the ABCI specification.
@@ -403,14 +403,16 @@ pick up from when it restarts. See information on the Handshake, below.
EndBlock
^^^^^^^^

The EndBlock request can be used to run some code at the end of every
block. Additionally, the response may contain a list of validators,
which can be used to update the validator set. To add a new validator or
update an existing one, simply include them in the list returned in the
EndBlock response. To remove one, include it in the list with a
``power`` equal to ``0``. Tendermint core will take care of updating the
validator set. Note validator set changes are only available in v0.8.0
and up.
The EndBlock request can be used to run some code at the end of every block.
Additionally, the response may contain a list of validators, which can be used
to update the validator set. To add a new validator or update an existing one,
simply include them in the list returned in the EndBlock response. To remove
one, include it in the list with a ``power`` equal to ``0``. Tendermint core
will take care of updating the validator set. Note the change in voting power
must be strictly less than 1/3 per block if you want a light client to be able
to prove the transition externally. See the `light client docs
<https://godoc.org/github.com/tendermint/tendermint/lite#hdr-How_We_Track_Validators>`__
for details on how it tracks validators.

.. container:: toggle

@@ -421,8 +423,8 @@ and up.

   .. code-block:: go

      // Update the validator set
      func (app *PersistentDummyApplication) EndBlock(height uint64) (resEndBlock types.ResponseEndBlock) {
         return types.ResponseEndBlock{Diffs: app.changes}
      func (app *PersistentDummyApplication) EndBlock(req types.RequestEndBlock) types.ResponseEndBlock {
         return types.ResponseEndBlock{ValidatorUpdates: app.ValUpdates}
      }

.. container:: toggle
@@ -22,30 +22,30 @@ The parameters are used to determine the validity of a block (and tx) via the un

```
type ConsensusParams struct {
BlockSizeParams
TxSizeParams
BlockGossipParams
BlockSize
TxSize
BlockGossip
}

type BlockSizeParams struct {
type BlockSize struct {
MaxBytes int
MaxTxs int
MaxGas int
}

type TxSizeParams struct {
type TxSize struct {
MaxBytes int
MaxGas int
}

type BlockGossipParams struct {
type BlockGossip struct {
BlockPartSizeBytes int
}
```

The `ConsensusParams` can evolve over time by adding new structs that cover different aspects of the consensus rules.

The `BlockPartSizeBytes` and the `BlockSizeParams.MaxBytes` are enforced to be greater than 0.
The `BlockPartSizeBytes` and the `BlockSize.MaxBytes` are enforced to be greater than 0.
The former because we need a part size, the latter so that we always have at least some sanity check over the size of blocks.

### ABCI
@@ -58,7 +58,7 @@ like the BlockPartSize, that the app shouldn't really know about.

#### EndBlock

The EndBlock response includes a `ConsensusParams`, which includes BlockSizeParams and TxSizeParams, but not BlockGossipParams.
The EndBlock response includes a `ConsensusParams`, which includes BlockSize and TxSize, but not BlockGossip.
Other param structs can be added to `ConsensusParams` in the future.
The `0` value is used to denote no change.
Any other value will update that parameter in the `State.ConsensusParams`, to be applied for the next block.
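A minimal sketch of that zero-means-no-change merge rule, using local copies of the ADR's (renamed) types; the field-by-field merge helper is illustrative, not the literal Tendermint code, and `BlockGossip` is omitted because the EndBlock response excludes it:

```
package main

import "fmt"

// Local copies of the ADR's type names, for illustration only.
type BlockSize struct {
	MaxBytes int
	MaxTxs   int
	MaxGas   int
}

type TxSize struct {
	MaxBytes int
	MaxGas   int
}

type ConsensusParams struct {
	BlockSize
	TxSize
}

// applyUpdates copies every non-zero field from the EndBlock response
// into the state's params; a 0 leaves the current value untouched.
func applyUpdates(cur, upd ConsensusParams) ConsensusParams {
	if upd.BlockSize.MaxBytes != 0 {
		cur.BlockSize.MaxBytes = upd.BlockSize.MaxBytes
	}
	if upd.BlockSize.MaxTxs != 0 {
		cur.BlockSize.MaxTxs = upd.BlockSize.MaxTxs
	}
	if upd.BlockSize.MaxGas != 0 {
		cur.BlockSize.MaxGas = upd.BlockSize.MaxGas
	}
	if upd.TxSize.MaxBytes != 0 {
		cur.TxSize.MaxBytes = upd.TxSize.MaxBytes
	}
	if upd.TxSize.MaxGas != 0 {
		cur.TxSize.MaxGas = upd.TxSize.MaxGas
	}
	return cur
}

func main() {
	cur := ConsensusParams{BlockSize{MaxBytes: 22020096, MaxTxs: 10000, MaxGas: 0}, TxSize{MaxBytes: 10240, MaxGas: 0}}
	upd := ConsensusParams{BlockSize: BlockSize{MaxTxs: 5000}} // only MaxTxs changes
	fmt.Printf("%+v\n", applyUpdates(cur, upd))
}
```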
@@ -82,4 +82,5 @@ Proposed.

### Neutral

- The TxSizeParams, which checks validity, may be in conflict with the config's `max_block_size_tx`, which determines proposal sizes
- The TxSize, which checks validity, may be in conflict with the config's `max_block_size_tx`, which determines proposal sizes

docs/architecture/adr-007-trust-metric-usage.md (new file, 103 lines)
@@ -0,0 +1,103 @@
# ADR 007: Trust Metric Usage Guide

## Context

Tendermint is required to monitor peer quality in order to inform its peer dialing and peer exchange strategies.

When a node first connects to the network, it is important that it can quickly find good peers.
Thus, while a node has fewer connections, it should prioritize connecting to higher quality peers.
As the node becomes well connected to the rest of the network, it can dial lesser known or lesser
quality peers and help assess their quality. Similarly, when queried for peers, a node should make
sure it doesn't return low quality peers.

Peer quality can be tracked using a trust metric that flags certain behaviours as good or bad. When enough
bad behaviour accumulates, we can mark the peer as bad and disconnect.
For example, when the PEXReactor makes a request for peers' network addresses from an already known peer and the returned network addresses are unreachable, this undesirable behavior should be tracked. Returning a few bad network addresses probably shouldn't cause a peer to be dropped, while excessive amounts of this behavior do qualify the peer for removal. The originally proposed approach and design document for the trust metric can be found in the [ADR 006](adr-006-trust-metric.md) document.

The trust metric implementation allows a developer to obtain a peer's trust metric from a trust metric store and to track good and bad events relevant to a peer's behavior; at any time, the metric can be queried for a current trust value. The current trust value is calculated with a formula that utilizes current behavior, previous behavior, and the change between the two. Current behavior is calculated as the percentage of good behavior within a time interval. The time interval is short; probably set between 30 seconds and 5 minutes. On the other hand, the historic data can estimate a peer's behavior over days' worth of tracking. At the end of a time interval, the current behavior becomes part of the historic data, and a new time interval begins with the good and bad counters reset to zero.

These are some important things to keep in mind regarding how the trust metric handles time intervals and scoring:
- Each new time interval begins with a perfect score
- Bad events quickly bring the score down and good events cause the score to slowly rise
- When the time interval is over, the percentage of good events becomes historic data.

Some useful information about the inner workings of the trust metric (a sketch of the interval scoring follows the list):
- When a trust metric is first instantiated, a timer (ticker) periodically fires in order to handle transitions between trust metric time intervals
- If a peer is disconnected from a node, the timer should be paused, since the node is no longer connected to that peer
- The ability to pause the metric is supported with the store **PeerDisconnected** method and the metric **Pause** method
- After a pause, if a good or bad event method is called on a metric, it automatically becomes unpaused and begins a new time interval.
|
||||
|
||||
The trust metric capability is now available, yet, it still leaves the question of how should it be applied throughout Tendermint in order to properly track the quality of peers?
|
||||
|
||||
### Proposed Process
|
||||
|
||||
Peers are managed using an address book and a trust metric:
|
||||
|
||||
- The address book keeps a record of peers and provides selection methods
|
||||
- The trust metric tracks the quality of the peers
|
||||
|
||||
#### Presence in Address Book
|
||||
|
||||
Outbound peers are added to the address book before they are dialed,
|
||||
and inbound peers are added once the peer connection is set up.
|
||||
Peers are also added to the address book when they are received in response to
|
||||
a pexRequestMessage.
|
||||
|
||||
While a node has less than `needAddressThreshold`, it will periodically request more,
|
||||
via pexRequestMessage, from randomly selected peers and from newly dialed outbound peers.
|
||||
|
||||
When a new address is added to an address book that has more than `0.5*needAddressThreshold` addresses,
|
||||
then with some low probability, a randomly chosen low quality peer is removed.
|
||||
|
||||
#### Outbound Peers
|
||||
|
||||
Peers attempt to maintain a minimum number of outbound connections by
|
||||
repeatedly querying the address book for peers to connect to.
|
||||
While a node has few to no outbound connections, the address book is biased to return
|
||||
higher quality peers. As the node increases the number of outbound connections,
|
||||
the address book is biased to return less-vetted or lower-quality peers.
|
||||
|
||||
#### Inbound Peers
|
||||
|
||||
Peers also maintain a maximum number of total connections, MaxNumPeers.
|
||||
If a peer has MaxNumPeers, new incoming connections will be accepted with low probability.
|
||||
When such a new connection is accepted, the peer disconnects from a probabilistically chosen low ranking peer
|
||||
so it does not exceed MaxNumPeers.
|
||||
|
||||
#### Peer Exchange
|
||||
|
||||
When a peer receives a pexRequestMessage, it returns a random sample of high quality peers from the address book. Peers with no score or low score should not be inclided in a response to pexRequestMessage.
|
||||
|
||||
#### Peer Quality
|
||||
|
||||
Peer quality is tracked in the connection and across the reactors by storing the TrustMetric in the peer's
|
||||
thread safe Data store.
|
||||
|
||||
Peer behaviour is then defined as one of the following:
|
||||
- Fatal - something outright malicious that causes us to disconnect the peer and ban it from the address book for some amount of time
|
||||
- Bad - Any kind of timeout, messages that don't unmarshal, fail other validity checks, or messages we didn't ask for or aren't expecting (usually worth one bad event)
|
||||
- Neutral - Unknown channels/message types/version upgrades (no good or bad events recorded)
|
||||
- Correct - Normal correct behavior (worth one good event)
|
||||
- Good - some random majority of peers per reactor sending us useful messages (worth more than one good event).
|
||||
|
||||
Note that Fatal behaviour causes us to remove the peer, and neutral behaviour does not affect the score.
|
||||
|
||||
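A minimal sketch of how those behaviour classes might map onto good/bad events (the event weights are assumptions; only the classification itself comes from the list above, and Fatal bypasses the metric entirely):

```
package main

import "fmt"

// eventRecorder is the subset of the metric API the mapping needs;
// the toy metric sketched earlier in this ADR satisfies it.
type eventRecorder interface {
	GoodEvent()
	BadEvent()
}

type behaviour int

const (
	behaviourFatal behaviour = iota
	behaviourBad
	behaviourNeutral
	behaviourCorrect
	behaviourGood
)

// record applies one observed behaviour; it reports whether the peer
// should be disconnected and banned outright.
func record(m eventRecorder, b behaviour) (ban bool) {
	switch b {
	case behaviourFatal:
		return true // disconnect and ban; no event recorded
	case behaviourBad:
		m.BadEvent() // e.g. timeouts, unparsable or unsolicited messages
	case behaviourCorrect:
		m.GoodEvent()
	case behaviourGood:
		m.GoodEvent() // weighted as more than one good event
		m.GoodEvent()
	case behaviourNeutral:
		// unknown channels/message types: no events recorded
	}
	return false
}

// counter is a stand-in recorder so the sketch runs on its own.
type counter struct{ good, bad int }

func (c *counter) GoodEvent() { c.good++ }
func (c *counter) BadEvent()  { c.bad++ }

func main() {
	c := &counter{}
	record(c, behaviourBad)
	record(c, behaviourGood)
	fmt.Printf("good=%d bad=%d ban=%v\n", c.good, c.bad, record(c, behaviourFatal))
}
```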
## Status

Proposed.

## Consequences

### Positive

- Bringing the address book and trust metric store together will cause the network to be built in a way that encourages greater security and reliability.

### Negative

- TBD

### Neutral

- Keep in mind that good events need to be recorded just as bad events do with this implementation.
docs/conf.py (45 lines changed)
@@ -171,29 +171,38 @@ texinfo_documents = [
'Database'),
]

repo = "https://raw.githubusercontent.com/tendermint/tools/"
branch = "master"
# ---- customization -------------------------

tools = "./tools"
assets = tools + "/assets"
tools_repo = "https://raw.githubusercontent.com/tendermint/tools/"
tools_branch = "master"

if os.path.isdir(tools) != True:
os.mkdir(tools)
if os.path.isdir(assets) != True:
os.mkdir(assets)
tools_dir = "./tools"
assets_dir = tools_dir + "/assets"

urllib.urlretrieve(repo+branch+'/ansible/README.rst', filename=tools+'/ansible.rst')
urllib.urlretrieve(repo+branch+'/ansible/assets/a_plus_t.png', filename=assets+'/a_plus_t.png')
if os.path.isdir(tools_dir) != True:
os.mkdir(tools_dir)
if os.path.isdir(assets_dir) != True:
os.mkdir(assets_dir)

urllib.urlretrieve(repo+branch+'/docker/README.rst', filename=tools+'/docker.rst')
urllib.urlretrieve(tools_repo+tools_branch+'/ansible/README.rst', filename=tools_dir+'/ansible.rst')
urllib.urlretrieve(tools_repo+tools_branch+'/ansible/assets/a_plus_t.png', filename=assets_dir+'/a_plus_t.png')

urllib.urlretrieve(repo+branch+'/mintnet-kubernetes/README.rst', filename=tools+'/mintnet-kubernetes.rst')
urllib.urlretrieve(repo+branch+'/mintnet-kubernetes/assets/gce1.png', filename=assets+'/gce1.png')
urllib.urlretrieve(repo+branch+'/mintnet-kubernetes/assets/gce2.png', filename=assets+'/gce2.png')
urllib.urlretrieve(repo+branch+'/mintnet-kubernetes/assets/statefulset.png', filename=assets+'/statefulset.png')
urllib.urlretrieve(repo+branch+'/mintnet-kubernetes/assets/t_plus_k.png', filename=assets+'/t_plus_k.png')
urllib.urlretrieve(tools_repo+tools_branch+'/docker/README.rst', filename=tools_dir+'/docker.rst')

urllib.urlretrieve(repo+branch+'/terraform-digitalocean/README.rst', filename=tools+'/terraform-digitalocean.rst')
urllib.urlretrieve(repo+branch+'/tm-bench/README.rst', filename=tools+'/benchmarking-and-monitoring.rst')
urllib.urlretrieve(tools_repo+tools_branch+'/mintnet-kubernetes/README.rst', filename=tools_dir+'/mintnet-kubernetes.rst')
urllib.urlretrieve(tools_repo+tools_branch+'/mintnet-kubernetes/assets/gce1.png', filename=assets_dir+'/gce1.png')
urllib.urlretrieve(tools_repo+tools_branch+'/mintnet-kubernetes/assets/gce2.png', filename=assets_dir+'/gce2.png')
urllib.urlretrieve(tools_repo+tools_branch+'/mintnet-kubernetes/assets/statefulset.png', filename=assets_dir+'/statefulset.png')
urllib.urlretrieve(tools_repo+tools_branch+'/mintnet-kubernetes/assets/t_plus_k.png', filename=assets_dir+'/t_plus_k.png')

urllib.urlretrieve(tools_repo+tools_branch+'/terraform-digitalocean/README.rst', filename=tools_dir+'/terraform-digitalocean.rst')
urllib.urlretrieve(tools_repo+tools_branch+'/tm-bench/README.rst', filename=tools_dir+'/benchmarking-and-monitoring.rst')
# the readme for below is included in tm-bench
# urllib.urlretrieve('https://raw.githubusercontent.com/tendermint/tools/master/tm-monitor/README.rst', filename='tools/tm-monitor.rst')

#### abci spec #################################

abci_repo = "https://raw.githubusercontent.com/tendermint/abci/"
abci_branch = "spec-docs"

urllib.urlretrieve(abci_repo+abci_branch+'/specification.rst', filename='abci-spec.rst')
@@ -13,7 +13,7 @@ It's relatively easy to setup a Tendermint cluster manually. The only
requirements for a particular Tendermint node are a private key for the
validator, stored as ``priv_validator.json``, and a list of the public
keys of all validators, stored as ``genesis.json``. These files should
be stored in ``~/.tendermint``, or wherever the ``$TMHOME`` variable
be stored in ``~/.tendermint/config``, or wherever the ``$TMHOME`` variable
might be set to.

Here are the steps to setting up a testnet manually:
@@ -24,15 +24,15 @@ Here are the steps to setting up a testnet manually:
``tendermint gen_validator``
4) Compile a list of public keys for each validator into a
``genesis.json`` file.
5) Run ``tendermint node --p2p.seeds=< seed addresses >`` on each node,
where ``< seed addresses >`` is a comma separated list of the IP:PORT
5) Run ``tendermint node --p2p.persistent_peers=< peer addresses >`` on each node,
where ``< peer addresses >`` is a comma separated list of the IP:PORT
combination for each node. The default port for Tendermint is
``46656``. Thus, if the IP addresses of your nodes were
``192.168.0.1, 192.168.0.2, 192.168.0.3, 192.168.0.4``, the command
would look like:
``tendermint node --p2p.seeds=192.168.0.1:46656,192.168.0.2:46656,192.168.0.3:46656,192.168.0.4:46656``.
``tendermint node --p2p.persistent_peers=192.168.0.1:46656,192.168.0.2:46656,192.168.0.3:46656,192.168.0.4:46656``.

After a few seconds, all the nodes should connect to eachother and start
After a few seconds, all the nodes should connect to each other and start
making blocks! For more information, see the Tendermint Networks section
of `the guide to using Tendermint <using-tendermint.html>`__.

@@ -48,7 +48,7 @@ Automated Deployment using Kubernetes

The `mintnet-kubernetes tool <https://github.com/tendermint/tools/tree/master/mintnet-kubernetes>`__
allows automating the deployment of a Tendermint network on an already
provisioned kubernetes cluster. For simple provisioning of a kubernetes
provisioned Kubernetes cluster. For simple provisioning of a Kubernetes
cluster, check out the `Google Cloud Platform <https://cloud.google.com/>`__.

Automated Deployment using Terraform and Ansible
@@ -1,122 +1,15 @@
|
||||
Tendermint Ecosystem
|
||||
====================
|
||||
|
||||
Below are the many applications built using various pieces of the Tendermint stack. We thank the community for their contributions thus far and welcome the addition of new projects. Feel free to submit a pull request to add your project!
|
||||
The growing list of applications built using various pieces of the Tendermint stack can be found at:
|
||||
|
||||
ABCI Applications
|
||||
-----------------
|
||||
* https://tendermint.com/ecosystem
|
||||
|
||||
Burrow
|
||||
^^^^^^
|
||||
We thank the community for their contributions thus far and welcome the addition of new projects. A pull request can be submitted to `this file <https://github.com/tendermint/tendermint/blob/master/docs/ecosystem.rst>`__ to include your project.
|
||||
|
||||
Ethereum Virtual Machine augmented with native permissioning scheme and global key-value store, written in Go, authored by Monax Industries, and incubated `by Hyperledger <https://github.com/hyperledger/burrow>`__.
|
||||
|
||||
cb-ledger
|
||||
^^^^^^^^^
|
||||
|
||||
Custodian Bank Ledger, integrating central banking with the blockchains of tomorrow, written in C++, and `authored by Block Finance <https://github.com/block-finance/cpp-abci>`__.
|
||||
|
||||
Clearchain
|
||||
^^^^^^^^^^
|
||||
|
||||
Application to manage a distributed ledger for money transfers that support multi-currency accounts, written in Go, and `authored by Allession Treglia <https://github.com/tendermint/clearchain>`__.
|
||||
|
||||
Comit
|
||||
^^^^^
|
||||
|
||||
Public service reporting and tracking, written in Go, and `authored by Zach Balder <https://github.com/zbo14/comit>`__.
|
||||
|
||||
Cosmos SDK
|
||||
^^^^^^^^^^
|
||||
|
||||
A prototypical account based crypto currency state machine supporting plugins, written in Go, and `authored by Cosmos <https://github.com/cosmos/cosmos-sdk>`__.
|
||||
|
||||
Ethermint
|
||||
^^^^^^^^^
|
||||
|
||||
The go-ethereum state machine run as a ABCI app, written in Go, `authored by Tendermint <https://github.com/tendermint/ethermint>`__.
|
||||
|
||||
IAVL
|
||||
^^^^
|
||||
|
||||
Immutable AVL+ tree with Merkle proofs, Written in Go, `authored by Tendermint <https://github.com/tendermint/iavl>`__.
|
||||
|
||||
Lotion
|
||||
^^^^^^
|
||||
|
||||
A Javascript microframework for building blockchain applications with Tendermint, written in Javascript, `authored by Judd Keppel of Tendermint <https://github.com/keppel/lotion>`__. See also `lotion-chat <https://github.com/keppel/lotion-chat>`__ and `lotion-coin <https://github.com/keppel/lotion-coin>`__ apps written using Lotion.
|
||||
|
||||
MerkleTree
|
||||
^^^^^^^^^^
|
||||
|
||||
Immutable AVL+ tree with Merkle proofs, Written in Java, `authored by jTendermint <https://github.com/jTendermint/MerkleTree>`__.
|
||||
|
||||
Passchain
|
||||
^^^^^^^^^
|
||||
|
||||
Passchain is a tool to securely store and share passwords, tokens and other short secrets, `authored by trusch <https://github.com/trusch/passchain>`__.
|
||||
|
||||
Passwerk
|
||||
^^^^^^^^
|
||||
|
||||
Encrypted storage web-utility backed by Tendermint, written in Go, `authored by Rigel Rozanski <https://github.com/rigelrozanski/passwerk>`__.
|
||||
|
||||
Py-Tendermint
|
||||
^^^^^^^^^^^^^
|
||||
|
||||
A Python microframework for building blockchain applications with Tendermint, written in Python, `authored by Dave Bryson <https://github.com/davebryson/py-tendermint>`__.
|
||||
|
||||
Stratumn
|
||||
^^^^^^^^
|
||||
|
||||
SDK for "Proof-of-Process" networks, written in Go, `authored by the Stratumn team <https://github.com/stratumn/sdk>`__.
|
||||
|
||||
TMChat
|
||||
^^^^^^
|
||||
|
||||
P2P chat using Tendermint, written in Java, `authored by wolfposd <https://github.com/wolfposd/TMChat>`__.
|
||||
|
||||
|
||||
ABCI Servers
------------

+--------------------------------------------------------------+--------------------+--------------+
| **Name**                                                     | **Author**         | **Language** |
+--------------------------------------------------------------+--------------------+--------------+
| `abci <https://github.com/tendermint/abci>`__                | Tendermint         | Go           |
+--------------------------------------------------------------+--------------------+--------------+
| `js abci <https://github.com/tendermint/js-abci>`__          | Tendermint         | JavaScript   |
+--------------------------------------------------------------+--------------------+--------------+
| `cpp-tmsp <https://github.com/block-finance/cpp-abci>`__     | Martin Dyring      | C++          |
+--------------------------------------------------------------+--------------------+--------------+
| `c-abci <https://github.com/chainx-org/c-abci>`__            | ChainX             | C            |
+--------------------------------------------------------------+--------------------+--------------+
| `jabci <https://github.com/jTendermint/jabci>`__             | jTendermint        | Java         |
+--------------------------------------------------------------+--------------------+--------------+
| `ocaml-tmsp <https://github.com/zbo14/ocaml-tmsp>`__         | Zach Balder        | OCaml        |
+--------------------------------------------------------------+--------------------+--------------+
| `abci_server <https://github.com/KrzysiekJ/abci_server>`__   | Krzysztof Jurewicz | Erlang       |
+--------------------------------------------------------------+--------------------+--------------+
| `rust-tsp <https://github.com/tendermint/rust-tsp>`__        | Adrian Brink       | Rust         |
+--------------------------------------------------------------+--------------------+--------------+
| `hs-abci <https://github.com/albertov/hs-abci>`__            | Alberto Gonzalez   | Haskell      |
+--------------------------------------------------------------+--------------------+--------------+
| `haskell-abci <https://github.com/cwgoes/haskell-abci>`__    | Christopher Goes   | Haskell      |
+--------------------------------------------------------------+--------------------+--------------+
| `Spearmint <https://github.com/dennismckinnon/spearmint>`__  | Dennis McKinnon    | JavaScript   |
+--------------------------------------------------------------+--------------------+--------------+
| `py-abci <https://github.com/davebryson/py-abci>`__          | Dave Bryson        | Python       |
+--------------------------------------------------------------+--------------------+--------------+

Deployment Tools
----------------

See `deploy testnets <./deploy-testnets.html>`__ for information about all the tools built by Tendermint. We have Kubernetes, Ansible, and Terraform integrations.

Cloudsoft built `brooklyn-tendermint <https://github.com/cloudsoft/brooklyn-tendermint>`__ for deploying a Tendermint testnet in Docker containers. It uses Clocker for Apache Brooklyn.

Dev Tools
---------

For upgrading from older to newer versions of Tendermint and to migrate your chain data, see `tm-migrator <https://github.com/hxzqlh/tm-tools>`__ written by @hxzqlh.

docs/examples/getting-started.md (new file)
@@ -0,0 +1,142 @@
# Tendermint

## Overview

This is a quick start guide. If you have a vague idea about how Tendermint works
and want to get started right away, continue. Otherwise, [review the documentation](http://tendermint.readthedocs.io/en/master/).

## Install

### Quick Install

Installation on a fresh Ubuntu 16.04 machine can be done with [this script](https://git.io/vNLfY), like so:

```
curl -L https://git.io/vNLfY | bash
source ~/.profile
```

WARNING: do not run the above on your local machine.

The script is also used to facilitate cluster deployment below.

### Manual Install

Requires:
- `go` minimum version 1.9
- `$GOPATH` environment variable must be set
- `$GOPATH/bin` must be on your `$PATH` (see https://github.com/tendermint/tendermint/wiki/Setting-GOPATH)

To install Tendermint, run:

```
go get github.com/tendermint/tendermint
cd $GOPATH/src/github.com/tendermint/tendermint
make get_tools && make get_vendor_deps
make install
```

Note that `go get` may return an error, but it can be ignored.

Confirm installation:

```
$ tendermint version
0.15.0-381fe19
```

## Initialization

Running:

```
tendermint init
```

will create the required files for a single, local node.

These files are found in `$HOME/.tendermint`:

```
$ ls $HOME/.tendermint

config.toml data genesis.json priv_validator.json
```

For a single, local node, no further configuration is required.
Configuring a cluster is covered further below.

## Local Node

Start tendermint with a simple in-process application:

```
tendermint node --proxy_app=dummy
```

and blocks will start to stream in:

```
I[01-06|01:45:15.592] Executed block module=state height=1 validTxs=0 invalidTxs=0
I[01-06|01:45:15.624] Committed state module=state height=1 txs=0 appHash=
```

Check the status with:

```
curl -s localhost:46657/status
```

### Sending Transactions

With the dummy app running, we can send transactions:

```
curl -s 'localhost:46657/broadcast_tx_commit?tx="abcd"'
```

and check that it worked with:

```
curl -s 'localhost:46657/abci_query?data="abcd"'
```

We can send transactions with a key and value too:

```
curl -s 'localhost:46657/broadcast_tx_commit?tx="name=satoshi"'
```

and query the key:

```
curl -s 'localhost:46657/abci_query?data="name"'
```

where the value is returned in hex.
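
The hex can be decoded with any standard library. A minimal sketch in Go (the `61626364` literal is the example value from above; substitute the `value` field from your own query response):

```
package main

import (
	"encoding/hex"
	"fmt"
)

func main() {
	// "61626364" is the hex encoding of the ASCII bytes "abcd",
	// as returned in the `value` field of the abci_query response.
	raw, err := hex.DecodeString("61626364")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(raw)) // prints: abcd
}
```
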
## Cluster of Nodes

First create four Ubuntu cloud machines. The following was tested on Digital Ocean Ubuntu 16.04 x64 (3GB/1CPU, 20GB SSD). We'll refer to their respective IP addresses below as IP1, IP2, IP3, IP4.

Then, `ssh` into each machine, and execute [this script](https://git.io/vNLfY):

```
curl -L https://git.io/vNLfY | bash
source ~/.profile
```

This will install `go` and other dependencies, get the Tendermint source code, then compile the `tendermint` binary.

Next, `cd` into `docs/examples`. Each command below should be run from each node, in sequence:

```
tendermint node --home ./node1 --proxy_app=dummy --p2p.seeds IP1:46656,IP2:46656,IP3:46656,IP4:46656
tendermint node --home ./node2 --proxy_app=dummy --p2p.seeds IP1:46656,IP2:46656,IP3:46656,IP4:46656
tendermint node --home ./node3 --proxy_app=dummy --p2p.seeds IP1:46656,IP2:46656,IP3:46656,IP4:46656
tendermint node --home ./node4 --proxy_app=dummy --p2p.seeds IP1:46656,IP2:46656,IP3:46656,IP4:46656
```

Note that after the third node is started, blocks will start to stream in because >2/3 of validators (defined in the `genesis.json`) have come online. Seeds can also be specified in the `config.toml`. See [this PR](https://github.com/tendermint/tendermint/pull/792) for more information about configuration options.

Transactions can then be sent as covered in the single, local node example above.

docs/examples/install_tendermint.sh (new file)
@@ -0,0 +1,32 @@

#!/usr/bin/env bash

# XXX: this script is meant to be used only on a fresh Ubuntu 16.04 instance
# and has only been tested on Digital Ocean

# get and unpack golang
curl -O https://storage.googleapis.com/golang/go1.9.2.linux-amd64.tar.gz
tar -xvf go1.9.2.linux-amd64.tar.gz

apt install make

## move go and add binary to path
mv go /usr/local
echo "export PATH=\$PATH:/usr/local/go/bin" >> ~/.profile

## create the GOPATH directory, set GOPATH and put on PATH
mkdir goApps
echo "export GOPATH=/root/goApps" >> ~/.profile
echo "export PATH=\$PATH:\$GOPATH/bin" >> ~/.profile

source ~/.profile

## get the code and move into it
REPO=github.com/tendermint/tendermint
go get $REPO
cd $GOPATH/src/$REPO

## build
git checkout v0.15.0
make get_tools
make get_vendor_deps
make install

docs/examples/node1/config.toml (new file)
@@ -0,0 +1,15 @@

# This is a TOML config file.
# For more information, see https://github.com/toml-lang/toml

proxy_app = "tcp://127.0.0.1:46658"
moniker = "penguin"
fast_sync = true
db_backend = "leveldb"
log_level = "state:info,*:error"

[rpc]
laddr = "tcp://0.0.0.0:46657"

[p2p]
laddr = "tcp://0.0.0.0:46656"
seeds = ""

docs/examples/node1/genesis.json (new file)
@@ -0,0 +1,42 @@

{
  "genesis_time": "0001-01-01T00:00:00Z",
  "chain_id": "test-chain-wt7apy",
  "validators": [
    {
      "pub_key": {
        "type": "ed25519",
        "data": "F08446C80A33E10D620E21450821B58D053778528F2B583D423B3E46EC647D30"
      },
      "power": 10,
      "name": "node1"
    },
    {
      "pub_key": {
        "type": "ed25519",
        "data": "A8423F70A9E512643B4B00F7C3701ECAD1F31B0A1FAA45852C41046353B9A07F"
      },
      "power": 10,
      "name": "node2"
    },
    {
      "pub_key": {
        "type": "ed25519",
        "data": "E52EFFAEDFE1D618ECDA71DE3B23592B3612CAABA0C10826E4C3120B2198C29A"
      },
      "power": 10,
      "name": "node3"
    },
    {
      "pub_key": {
        "type": "ed25519",
        "data": "2B8FC09C07955A02998DFE5AF1AAD1C44115ECA7635FF51A867CF4265D347C07"
      },
      "power": 10,
      "name": "node4"
    }
  ],
  "app_hash": ""
}

docs/examples/node1/priv_validator.json (new file)
@@ -0,0 +1,15 @@

{
  "address": "4DC2756029CE0D8F8C6C3E4C3CE6EE8C30AF352F",
  "pub_key": {
    "type": "ed25519",
    "data": "F08446C80A33E10D620E21450821B58D053778528F2B583D423B3E46EC647D30"
  },
  "last_height": 0,
  "last_round": 0,
  "last_step": 0,
  "last_signature": null,
  "priv_key": {
    "type": "ed25519",
    "data": "4D3648E1D93C8703E436BFF814728B6BD270CFDFD686DF5385E8ACBEB7BE2D7DF08446C80A33E10D620E21450821B58D053778528F2B583D423B3E46EC647D30"
  }
}

docs/examples/node2/config.toml (new file)
@@ -0,0 +1,15 @@

# This is a TOML config file.
# For more information, see https://github.com/toml-lang/toml

proxy_app = "tcp://127.0.0.1:46658"
moniker = "penguin"
fast_sync = true
db_backend = "leveldb"
log_level = "state:info,*:error"

[rpc]
laddr = "tcp://0.0.0.0:46657"

[p2p]
laddr = "tcp://0.0.0.0:46656"
seeds = ""

docs/examples/node2/genesis.json (new file)
@@ -0,0 +1,42 @@

{
  "genesis_time": "0001-01-01T00:00:00Z",
  "chain_id": "test-chain-wt7apy",
  "validators": [
    {
      "pub_key": {
        "type": "ed25519",
        "data": "F08446C80A33E10D620E21450821B58D053778528F2B583D423B3E46EC647D30"
      },
      "power": 10,
      "name": "node1"
    },
    {
      "pub_key": {
        "type": "ed25519",
        "data": "A8423F70A9E512643B4B00F7C3701ECAD1F31B0A1FAA45852C41046353B9A07F"
      },
      "power": 10,
      "name": "node2"
    },
    {
      "pub_key": {
        "type": "ed25519",
        "data": "E52EFFAEDFE1D618ECDA71DE3B23592B3612CAABA0C10826E4C3120B2198C29A"
      },
      "power": 10,
      "name": "node3"
    },
    {
      "pub_key": {
        "type": "ed25519",
        "data": "2B8FC09C07955A02998DFE5AF1AAD1C44115ECA7635FF51A867CF4265D347C07"
      },
      "power": 10,
      "name": "node4"
    }
  ],
  "app_hash": ""
}

docs/examples/node2/priv_validator.json (new file)
@@ -0,0 +1,15 @@

{
  "address": "DD6C63A762608A9DDD4A845657743777F63121D6",
  "pub_key": {
    "type": "ed25519",
    "data": "A8423F70A9E512643B4B00F7C3701ECAD1F31B0A1FAA45852C41046353B9A07F"
  },
  "last_height": 0,
  "last_round": 0,
  "last_step": 0,
  "last_signature": null,
  "priv_key": {
    "type": "ed25519",
    "data": "7B0DE666FF5E9B437D284BCE767F612381890C018B93B0A105D2E829A568DA6FA8423F70A9E512643B4B00F7C3701ECAD1F31B0A1FAA45852C41046353B9A07F"
  }
}

docs/examples/node3/config.toml (new file)
@@ -0,0 +1,15 @@

# This is a TOML config file.
# For more information, see https://github.com/toml-lang/toml

proxy_app = "tcp://127.0.0.1:46658"
moniker = "penguin"
fast_sync = true
db_backend = "leveldb"
log_level = "state:info,*:error"

[rpc]
laddr = "tcp://0.0.0.0:46657"

[p2p]
laddr = "tcp://0.0.0.0:46656"
seeds = ""

docs/examples/node3/genesis.json (new file)
@@ -0,0 +1,42 @@

{
  "genesis_time": "0001-01-01T00:00:00Z",
  "chain_id": "test-chain-wt7apy",
  "validators": [
    {
      "pub_key": {
        "type": "ed25519",
        "data": "F08446C80A33E10D620E21450821B58D053778528F2B583D423B3E46EC647D30"
      },
      "power": 10,
      "name": "node1"
    },
    {
      "pub_key": {
        "type": "ed25519",
        "data": "A8423F70A9E512643B4B00F7C3701ECAD1F31B0A1FAA45852C41046353B9A07F"
      },
      "power": 10,
      "name": "node2"
    },
    {
      "pub_key": {
        "type": "ed25519",
        "data": "E52EFFAEDFE1D618ECDA71DE3B23592B3612CAABA0C10826E4C3120B2198C29A"
      },
      "power": 10,
      "name": "node3"
    },
    {
      "pub_key": {
        "type": "ed25519",
        "data": "2B8FC09C07955A02998DFE5AF1AAD1C44115ECA7635FF51A867CF4265D347C07"
      },
      "power": 10,
      "name": "node4"
    }
  ],
  "app_hash": ""
}

docs/examples/node3/priv_validator.json (new file)
@@ -0,0 +1,15 @@

{
  "address": "6D6A1E313B407B5474106CA8759C976B777AB659",
  "pub_key": {
    "type": "ed25519",
    "data": "E52EFFAEDFE1D618ECDA71DE3B23592B3612CAABA0C10826E4C3120B2198C29A"
  },
  "last_height": 0,
  "last_round": 0,
  "last_step": 0,
  "last_signature": null,
  "priv_key": {
    "type": "ed25519",
    "data": "622432A370111A5C25CFE121E163FE709C9D5C95F551EDBD7A2C69A8545C9B76E52EFFAEDFE1D618ECDA71DE3B23592B3612CAABA0C10826E4C3120B2198C29A"
  }
}

docs/examples/node4/config.toml (new file)
@@ -0,0 +1,15 @@

# This is a TOML config file.
# For more information, see https://github.com/toml-lang/toml

proxy_app = "tcp://127.0.0.1:46658"
moniker = "penguin"
fast_sync = true
db_backend = "leveldb"
log_level = "state:info,*:error"

[rpc]
laddr = "tcp://0.0.0.0:46657"

[p2p]
laddr = "tcp://0.0.0.0:46656"
seeds = ""

docs/examples/node4/genesis.json (new file)
@@ -0,0 +1,42 @@

{
  "genesis_time": "0001-01-01T00:00:00Z",
  "chain_id": "test-chain-wt7apy",
  "validators": [
    {
      "pub_key": {
        "type": "ed25519",
        "data": "F08446C80A33E10D620E21450821B58D053778528F2B583D423B3E46EC647D30"
      },
      "power": 10,
      "name": "node1"
    },
    {
      "pub_key": {
        "type": "ed25519",
        "data": "A8423F70A9E512643B4B00F7C3701ECAD1F31B0A1FAA45852C41046353B9A07F"
      },
      "power": 10,
      "name": "node2"
    },
    {
      "pub_key": {
        "type": "ed25519",
        "data": "E52EFFAEDFE1D618ECDA71DE3B23592B3612CAABA0C10826E4C3120B2198C29A"
      },
      "power": 10,
      "name": "node3"
    },
    {
      "pub_key": {
        "type": "ed25519",
        "data": "2B8FC09C07955A02998DFE5AF1AAD1C44115ECA7635FF51A867CF4265D347C07"
      },
      "power": 10,
      "name": "node4"
    }
  ],
  "app_hash": ""
}

docs/examples/node4/priv_validator.json (new file)
@@ -0,0 +1,15 @@

{
  "address": "829A9663611D3DD88A3D84EA0249679D650A0755",
  "pub_key": {
    "type": "ed25519",
    "data": "2B8FC09C07955A02998DFE5AF1AAD1C44115ECA7635FF51A867CF4265D347C07"
  },
  "last_height": 0,
  "last_round": 0,
  "last_step": 0,
  "last_signature": null,
  "priv_key": {
    "type": "ed25519",
    "data": "0A604D1C9AE94A50150BF39E603239092F9392E4773F4D8F4AC1D86E6438E89E2B8FC09C07955A02998DFE5AF1AAD1C44115ECA7635FF51A867CF4265D347C07"
  }
}

@@ -40,7 +40,7 @@ dependencies:

Now you should have the ``abci-cli`` installed; you'll see
a couple of commands (``counter`` and ``dummy``) that are
example applications written in Go. See below for an application
written in JavaScript.

Now, let's run some apps!

@@ -49,7 +49,7 @@ Dummy - A First Example

The dummy app is a `Merkle
tree <https://en.wikipedia.org/wiki/Merkle_tree>`__ that just stores all
transactions. If the transaction contains an ``=``, e.g. ``key=value``,
then the ``value`` is stored under the ``key`` in the Merkle tree.
Otherwise, the full transaction bytes are stored as the key and the
value.

@@ -147,7 +147,7 @@ The result should look like:

Note the ``value`` in the result (``61626364``); this is the
hex-encoding of the ASCII of ``abcd``. You can verify this in
a python 2 shell by running ``"61626364".decode('hex')`` or in a python 3 shell by running ``import codecs; codecs.decode("61626364", 'hex').decode('ascii')``. Stay
tuned for a future release that `makes this output more human-readable <https://github.com/tendermint/abci/issues/32>`__.

Now let's try setting a different key and value:

@@ -53,6 +53,7 @@ Tendermint 102

   :maxdepth: 2

   abci-cli.rst
   abci-spec.rst
   app-architecture.rst
   app-development.rst
   how-to-read-logs.rst

@@ -9,7 +9,7 @@ To download pre-built binaries, see the `Download page <https://tendermint.com/d

From Source
-----------

You'll need ``go``, maybe ``glide``, and the Tendermint source code.

Install Go
^^^^^^^^^^

@@ -112,7 +112,7 @@ Motivation

Thus far, all blockchain "stacks" (such as `Bitcoin <https://github.com/bitcoin/bitcoin>`__) have had a monolithic design. That is, each blockchain stack is a single program that handles all the concerns of a decentralized ledger; this includes P2P connectivity, the "mempool" broadcasting of transactions, consensus on the most recent block, account balances, Turing-complete contracts, user-level permissions, etc.

Using a monolithic architecture is typically bad practice in computer science.
It makes it difficult to reuse components of the code, and attempts to do so result in complex maintenance procedures for forks of the codebase.
This is especially true when the codebase is not modular in design and suffers from "spaghetti code".

Another problem with monolithic design is that it limits you to the language of the blockchain stack (or vice versa). In the case of Ethereum which supports a Turing-complete bytecode virtual-machine, it limits you to languages that compile down to that bytecode; today, those are Serpent and Solidity.

@@ -1,57 +1,182 @@

Configuration
=============

Tendermint Core can be configured via a TOML file in
``$TMHOME/config/config.toml``. Some of these parameters can be overridden by
command-line flags. For most users, the options in the ``##### main base
configuration options #####`` section are intended to be modified, while the
config options further below are intended for advanced power users.

Config options
~~~~~~~~~~~~~~

The default configuration file created by ``tendermint init`` has all
the parameters set with their default values. It will look something
like the file below; however, double check by inspecting the
``config.toml`` created with your version of ``tendermint`` installed:

::

    # This is a TOML config file.
    # For more information, see https://github.com/toml-lang/toml

    ##### main base config options #####

    # TCP or UNIX socket address of the ABCI application,
    # or the name of an ABCI application compiled in with the Tendermint binary
    proxy_app = "tcp://127.0.0.1:46658"

    # A custom human readable name for this node
    moniker = "anonymous"

    # If this node is many blocks behind the tip of the chain, FastSync
    # allows them to catchup quickly by downloading blocks in parallel
    # and verifying their commits
    fast_sync = true

    # Database backend: leveldb | memdb
    db_backend = "leveldb"

    # Database directory
    db_path = "data"

    # Output level for logging
    log_level = "state:info,*:error"

    ##### additional base config options #####

    # The ID of the chain to join (should be signed with every transaction and vote)
    chain_id = ""

    # Path to the JSON file containing the initial validator set and other meta data
    genesis_file = "genesis.json"

    # Path to the JSON file containing the private key to use as a validator in the consensus protocol
    priv_validator_file = "priv_validator.json"

    # Mechanism to connect to the ABCI application: socket | grpc
    abci = "socket"

    # TCP or UNIX socket address for the profiling server to listen on
    prof_laddr = ""

    # If true, query the ABCI app on connecting to a new peer
    # so the app can decide if we should keep the connection or not
    filter_peers = false

    ##### advanced configuration options #####

    ##### rpc server configuration options #####
    [rpc]

    # TCP or UNIX socket address for the RPC server to listen on
    laddr = "tcp://0.0.0.0:46657"

    # TCP or UNIX socket address for the gRPC server to listen on
    # NOTE: This server only supports /broadcast_tx_commit
    grpc_laddr = ""

    # Activate unsafe RPC commands like /dial_seeds and /unsafe_flush_mempool
    unsafe = false

    ##### peer to peer configuration options #####
    [p2p]

    # Address to listen for incoming connections
    laddr = "tcp://0.0.0.0:46656"

    # Comma separated list of seed nodes to connect to
    seeds = ""

    # Comma separated list of nodes to keep persistent connections to
    persistent_peers = ""

    # Path to address book
    addr_book_file = "addrbook.json"

    # Set true for strict address routability rules
    addr_book_strict = true

    # Time to wait before flushing messages out on the connection, in ms
    flush_throttle_timeout = 100

    # Maximum number of peers to connect to
    max_num_peers = 50

    # Maximum size of a message packet payload, in bytes
    max_msg_packet_payload_size = 1024

    # Rate at which packets can be sent, in bytes/second
    send_rate = 512000

    # Rate at which packets can be received, in bytes/second
    recv_rate = 512000

    # Set true to enable the peer-exchange reactor
    pex = true

    # Seed mode, in which node constantly crawls the network and looks for
    # peers. If another node asks it for addresses, it responds and disconnects.
    #
    # Does not work if the peer-exchange reactor is disabled.
    seed_mode = false

    ##### mempool configuration options #####
    [mempool]

    recheck = true
    recheck_empty = true
    broadcast = true
    wal_dir = "data/mempool.wal"

    ##### consensus configuration options #####
    [consensus]

    wal_file = "data/cs.wal/wal"
    wal_light = false

    # All timeouts are in milliseconds
    timeout_propose = 3000
    timeout_propose_delta = 500
    timeout_prevote = 1000
    timeout_prevote_delta = 500
    timeout_precommit = 1000
    timeout_precommit_delta = 500
    timeout_commit = 1000

    # Make progress as soon as we have all the precommits (as if TimeoutCommit = 0)
    skip_timeout_commit = false

    # BlockSize
    max_block_size_txs = 10000
    max_block_size_bytes = 1

    # EmptyBlocks mode and possible interval between empty blocks in seconds
    create_empty_blocks = true
    create_empty_blocks_interval = 0

    # Reactor sleep duration parameters are in milliseconds
    peer_gossip_sleep_duration = 100
    peer_query_maj23_sleep_duration = 2000

    ##### transactions indexer configuration options #####
    [tx_index]

    # What indexer to use for transactions
    #
    # Options:
    #   1) "null" (default)
    #   2) "kv" - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
    indexer = "{{ .TxIndex.Indexer }}"

    # Comma-separated list of tags to index (by default the only tag is tx hash)
    #
    # It's recommended to index only a subset of tags due to possible memory
    # bloat. This, of course, depends on the indexer's DB and the volume of
    # transactions.
    index_tags = "{{ .TxIndex.IndexTags }}"

    # When set to true, tells the indexer to index all tags. Note this may not be
    # desirable (see the comment above). IndexTags has precedence over
    # IndexAllTags (i.e. when given both, IndexTags will be indexed).
    index_all_tags = {{ .TxIndex.IndexAllTags }}
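
As a quick illustration, the file can be parsed with any TOML library. A minimal
sketch in Go using the third-party ``github.com/BurntSushi/toml`` package (an
assumption for the example; Tendermint itself loads the file through its own
config package) might look like:

.. code:: go

    package main

    import (
        "fmt"

        "github.com/BurntSushi/toml" // assumed third-party TOML parser
    )

    // Only the handful of fields the example cares about; unknown keys
    // in config.toml are simply ignored by the decoder.
    type Config struct {
        ProxyApp string `toml:"proxy_app"`
        Moniker  string `toml:"moniker"`
        FastSync bool   `toml:"fast_sync"`
    }

    func main() {
        var cfg Config
        if _, err := toml.DecodeFile("config.toml", &cfg); err != nil {
            panic(err)
        }
        fmt.Printf("%s syncing=%v via %s\n", cfg.Moniker, cfg.FastSync, cfg.ProxyApp)
    }
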
docs/specification/corruption.rst (new file)
@@ -0,0 +1,69 @@

Corruption
==========

Important step
--------------

Make sure you have a backup of the Tendermint data directory.

Possible causes
---------------

Remember that most corruption is caused by hardware issues:

- RAID controllers with faulty / worn out battery backup, and an unexpected power loss
- Hard disk drives with write-back cache enabled, and an unexpected power loss
- Cheap SSDs with insufficient power-loss protection, and an unexpected power loss
- Defective RAM
- Defective or overheating CPU(s)

Other causes can be:

- Database systems configured with fsync=off and an OS crash or power loss
- Filesystems configured to use write barriers plus a storage layer that ignores write barriers. LVM is a particular culprit.
- Tendermint bugs
- Operating system bugs
- Admin error

  - directly modifying Tendermint data-directory contents

(Source: https://wiki.postgresql.org/wiki/Corruption)

WAL Corruption
--------------

If the consensus WAL is corrupted at the latest height and you are trying to start
Tendermint, replay will fail with a panic.

Recovering from data corruption can be hard and time-consuming. Here are two approaches you can take:

1) Delete the WAL file and restart Tendermint. It will attempt to sync with other peers.
2) Try to repair the WAL file manually:

   1. Create a backup of the corrupted WAL file:

      .. code:: bash

         cp "$TMHOME/data/cs.wal/wal" /tmp/corrupted_wal_backup

   2. Use ``./scripts/wal2json`` to create a human-readable version:

      .. code:: bash

         ./scripts/wal2json/wal2json "$TMHOME/data/cs.wal/wal" > /tmp/corrupted_wal

   3. Search for a "CORRUPTED MESSAGE" line.
   4. By looking at the previous message, the message after the corrupted one,
      and the logs, try to rebuild the message. If the subsequent
      messages are marked as corrupted too (this may happen if the length header
      got corrupted or some writes did not make it to the WAL ~ truncation),
      then remove all the lines starting from the corrupted one and restart
      Tendermint.

      .. code:: bash

         $EDITOR /tmp/corrupted_wal

   5. After editing, convert this file back into binary form by running:

      .. code:: bash

         ./scripts/json2wal/json2wal /tmp/corrupted_wal > "$TMHOME/data/cs.wal/wal"

@@ -1,7 +1,7 @@

Genesis
=======

The genesis.json file in ``$TMHOME/config`` defines the initial TendermintCore
state upon genesis of the blockchain (`see
definition <https://github.com/tendermint/tendermint/blob/master/types/genesis.go>`__).

@@ -5,8 +5,8 @@ Light clients are an important part of the complete blockchain system

for most applications. Tendermint provides unique speed and security
properties for light client applications.

See our `lite package
<https://godoc.org/github.com/tendermint/tendermint/lite>`__.

Overview
--------

docs/specification/new-spec/README.md (new file)
@@ -0,0 +1,78 @@

# Tendermint Specification

This is a markdown specification of the Tendermint blockchain.
It defines the base data structures, how they are validated,
and how they are communicated over the network.

XXX: this spec is a work in progress and not yet complete - see github
[issues](https://github.com/tendermint/tendermint/issues) and
[pull requests](https://github.com/tendermint/tendermint/pulls)
for more details.

If you find discrepancies between the spec and the code that
do not have an associated issue or pull request on github,
please submit them to our [bug bounty](https://tendermint.com/security)!

## Contents

### Data Structures

- [Overview](#overview)
- [Encoding and Digests](encoding.md)
- [Blockchain](blockchain.md)
- [State](state.md)

### P2P and Network Protocols

TODO: update links

- [The Base P2P Layer](p2p/README.md): multiplex the protocols ("reactors") on authenticated and encrypted TCP connections
- [Peer Exchange (PEX)](pex/README.md): gossip known peer addresses so peers can find each other
- [Block Sync](block_sync/README.md): gossip blocks so peers can catch up quickly
- [Consensus](consensus/README.md): gossip votes and block parts so new blocks can be committed
- [Mempool](mempool/README.md): gossip transactions so they get included in blocks
- [Evidence](evidence/README.md): TODO

### More

- [Light Client](light_client/README.md): TODO
- [Persistence](persistence/README.md): TODO

## Overview

Tendermint provides Byzantine Fault Tolerant State Machine Replication using
hash-linked batches of transactions. Such transaction batches are called "blocks".
Hence, Tendermint defines a "blockchain".

Each block in Tendermint has a unique index - its Height.
A block at `Height == H` can only be committed *after* the
block at `Height == H-1`.
Each block is committed by a known set of weighted Validators.
Membership and weighting within this set may change over time.
Tendermint guarantees the safety and liveness of the blockchain
so long as less than 1/3 of the total weight of the Validator set
is malicious or faulty.

A commit in Tendermint is a set of signed messages from more than 2/3 of
the total weight of the current Validator set. Validators take turns proposing
blocks and voting on them. Once enough votes are received, the block is considered
committed. These votes are included in the *next* block as proof that the previous block
was committed - they cannot be included in the current block, as that block has already been
created.

Once a block is committed, it can be executed against an application.
The application returns results for each of the transactions in the block.
The application can also return changes to be made to the validator set,
as well as a cryptographic digest of its latest state.

Tendermint is designed to enable efficient verification and authentication
of the latest state of the blockchain. To achieve this, it embeds
cryptographic commitments to certain information in the block "header".
This information includes the contents of the block (e.g. the transactions),
the validator set committing the block, as well as the various results returned by the application.
Note, however, that block execution only occurs *after* a block is committed.
Thus, application results can only be included in the *next* block.

Also note that information like the transaction results and the validator set are never
directly included in the block - only their cryptographic digests (Merkle roots) are.
Hence, verification of a block requires a separate data structure to store this information.
We call this the `State`. Block verification also requires access to the previous block.

docs/specification/new-spec/bft-time.md (new file)
@@ -0,0 +1,42 @@

# BFT time in Tendermint

Tendermint provides a deterministic, Byzantine fault-tolerant, source of time.
Time in Tendermint is defined with the Time field of the block header.

It satisfies the following properties:

- Time Monotonicity: Time is monotonically increasing, i.e., given
a header H1 for height h1 and a header H2 for height `h2 = h1 + 1`, `H1.Time < H2.Time`.
- Time Validity: Given a set of Commit votes that forms the `block.LastCommit` field, a range of
valid values for the Time field of the block header is defined only by
Precommit messages (from the LastCommit field) sent by correct processes, i.e.,
a faulty process cannot arbitrarily increase the Time value.

In the context of Tendermint, time is of type int64 and denotes UNIX time in milliseconds, i.e.,
corresponds to the number of milliseconds since January 1, 1970. Before defining rules that need to be enforced by the
Tendermint consensus protocol, so that the properties above hold, we introduce the following definition:

- the median of a set of `Vote` messages is equal to the median of the `Vote.Time` fields of the corresponding `Vote` messages

We ensure the Time Monotonicity and Time Validity properties by the following rules:

- let rs denote the `RoundState` (consensus internal state) of some process. Then
`rs.ProposalBlock.Header.Time == median(rs.LastCommit) &&
rs.Proposal.Timestamp == rs.ProposalBlock.Header.Time`.

- Furthermore, when creating the `vote` message, the following rules for determining the `vote.Time` field should hold:

    - if `rs.Proposal` is defined then
    `vote.Time = max(rs.Proposal.Timestamp + 1, time.Now())`, where `time.Now()`
    denotes local Unix time in milliseconds.

    - if `rs.Proposal` is not defined and `rs.Votes` contains +2/3 of the corresponding vote messages (votes for the
    current height and round, and with the corresponding type (`Prevote` or `Precommit`)), then

    `vote.Time = max(median(getVotes(rs.Votes, vote.Height, vote.Round, vote.Type)), time.Now())`,

    where the `getVotes` function returns the votes for a particular `Height`, `Round` and `Type`.
    The second rule is relevant for the case when a process jumps to a higher round upon receiving +2/3 votes for a higher
    round, but the corresponding `Proposal` message for the higher round hasn't been received yet.
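
To make the median rule concrete, here is a small sketch (not Tendermint's actual implementation; the `Vote` type is reduced to its timestamp, and voting-power weighting is omitted):

```go
package main

import (
	"fmt"
	"sort"
)

// Reduced Vote type for illustration: only the timestamp matters here.
type Vote struct {
	Time int64 // UNIX time in milliseconds
}

// medianTime returns the median of the Vote.Time fields.
// In Tendermint the median is taken over the commit votes; this
// unweighted version is a simplification.
func medianTime(votes []Vote) int64 {
	times := make([]int64, len(votes))
	for i, v := range votes {
		times[i] = v.Time
	}
	sort.Slice(times, func(i, j int) bool { return times[i] < times[j] })
	return times[len(times)/2]
}

func main() {
	lastCommit := []Vote{{Time: 1000}, {Time: 1002}, {Time: 2000}}
	// The proposer sets Header.Time = median(LastCommit): here 1002,
	// so one faulty vote (2000) cannot drag the block time arbitrarily.
	fmt.Println(medianTime(lastCommit))
}
```
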
docs/specification/new-spec/blockchain.md (new file)
@@ -0,0 +1,424 @@

# Tendermint Blockchain

Here we describe the data structures in the Tendermint blockchain and the rules for validating them.

## Data Structures

The Tendermint blockchain consists of a short list of basic data types:

- `Block`
- `Header`
- `Vote`
- `BlockID`
- `Signature`
- `Evidence`

## Block

A block consists of a header, a list of transactions, a list of votes (the commit),
and a list of evidence of malfeasance (i.e. signing conflicting votes).

```go
type Block struct {
    Header     Header
    Txs        [][]byte
    LastCommit []Vote
    Evidence   []Evidence
}
```

## Header

A block header contains metadata about the block and about the consensus, as well as commitments to
the data in the current block, the previous block, and the results returned by the application:

```go
type Header struct {
    // block metadata
    Version string // Version string
    ChainID string // ID of the chain
    Height  int64  // Current block height
    Time    int64  // UNIX time, in milliseconds

    // current block
    NumTxs         int64  // Number of txs in this block
    TxHash         []byte // SimpleMerkle of the block.Txs
    LastCommitHash []byte // SimpleMerkle of the block.LastCommit

    // previous block
    TotalTxs    int64   // prevBlock.TotalTxs + block.NumTxs
    LastBlockID BlockID // BlockID of prevBlock

    // application
    ResultsHash         []byte // SimpleMerkle of []abci.Result from prevBlock
    AppHash             []byte // Arbitrary state digest
    ValidatorsHash      []byte // SimpleMerkle of the ValidatorSet
    ConsensusParamsHash []byte // SimpleMerkle of the ConsensusParams

    // consensus
    Proposer     []byte // Address of the block proposer
    EvidenceHash []byte // SimpleMerkle of []Evidence
}
```

Further details on each of these fields are described below.

## BlockID

The `BlockID` contains two distinct Merkle roots of the block.
The first, used as the block's main hash, is the Merkle root
of all the fields in the header. The second, used for secure gossiping of
the block during consensus, is the Merkle root of the complete serialized block
cut into parts. The `BlockID` includes these two hashes, as well as the number of
parts.

```go
type BlockID struct {
    Hash  []byte
    Parts PartsHeader
}

type PartsHeader struct {
    Hash  []byte
    Total int32
}
```

## Vote

A vote is a signed message from a validator for a particular block.
The vote includes information about the validator signing it.

```go
type Vote struct {
    Timestamp int64
    Address   []byte
    Index     int
    Height    int64
    Round     int
    Type      int8
    BlockID   BlockID
    Signature Signature
}
```

There are two types of votes:
a *prevote* has `vote.Type == 1` and
a *precommit* has `vote.Type == 2`.

## Signature

Tendermint allows for multiple signature schemes to be used by prepending a single type-byte
to the signature bytes. Different signatures may also come with fixed or variable lengths.
Currently, Tendermint supports Ed25519 and Secp256k1.

### ED25519

An ED25519 signature has `Type == 0x1`. It looks like:

```go
// Implements Signature
type Ed25519Signature struct {
    Type      int8 = 0x1
    Signature [64]byte
}
```

where `Signature` is the 64-byte signature.

### Secp256k1

A `Secp256k1` signature has `Type == 0x2`. It looks like:

```go
// Implements Signature
type Secp256k1Signature struct {
    Type      int8 = 0x2
    Signature []byte
}
```

where `Signature` is the DER-encoded signature, i.e.:

```hex
0x30 <length of whole message> 0x02 <length of R> <R> 0x02 <length of S> <S>
```

## Evidence

TODO

## Validation

Here we describe the validation rules for every element in a block.
Blocks which do not satisfy these rules are considered invalid.

We abuse notation by using something that looks like Go, supplemented with English.
A statement such as `x == y` is an assertion - if it fails, the item is invalid.

We refer to certain globally available objects:
`block` is the block under consideration,
`prevBlock` is the `block` at the previous height,
and `state` keeps track of the validator set, the consensus parameters
and other results from the application.
Elements of an object are accessed as expected,
i.e. `block.Header`. See [here](state.md) for the definition of `state`.

### Header

A Header is valid if its corresponding fields are valid.

### Version

Arbitrary string.

### ChainID

Arbitrary constant string.

### Height

```go
block.Header.Height > 0
block.Header.Height == prevBlock.Header.Height + 1
```

The height is an incrementing integer. The first block has `block.Header.Height == 1`.

### Time

The median of the timestamps of the valid votes in the block.LastCommit.
Corresponds to the number of nanoseconds, with millisecond resolution, since January 1, 1970.

Note: the timestamp of a vote must be greater by at least one millisecond than that of the
block being voted on.

### NumTxs

```go
block.Header.NumTxs == len(block.Txs)
```

Number of transactions included in the block.

### TxHash

```go
block.Header.TxHash == SimpleMerkleRoot(block.Txs)
```

Simple Merkle root of the transactions in the block.

### LastCommitHash

```go
block.Header.LastCommitHash == SimpleMerkleRoot(block.LastCommit)
```

Simple Merkle root of the votes included in the block.
These are the votes that committed the previous block.

The first block has `block.Header.LastCommitHash == []byte{}`.

### TotalTxs

```go
block.Header.TotalTxs == prevBlock.Header.TotalTxs + block.Header.NumTxs
```

The cumulative sum of all transactions included in this blockchain.

The first block has `block.Header.TotalTxs = block.Header.NumTxs`.

### LastBlockID

LastBlockID is the previous block's BlockID:

```go
prevBlockParts := MakeParts(prevBlock, state.LastConsensusParams.BlockGossip.BlockPartSize)
block.Header.LastBlockID == BlockID {
    Hash: SimpleMerkleRoot(prevBlock.Header),
    PartsHeader{
        Hash: SimpleMerkleRoot(prevBlockParts),
        Total: len(prevBlockParts),
    },
}
```

Note: it depends on the ConsensusParams,
which are held in the `state` and may be updated by the application.

The first block has `block.Header.LastBlockID == BlockID{}`.

### ResultsHash

```go
block.ResultsHash == SimpleMerkleRoot(state.LastResults)
```

Simple Merkle root of the results of the transactions in the previous block.

The first block has `block.Header.ResultsHash == []byte{}`.

### AppHash

```go
block.AppHash == state.AppHash
```

Arbitrary byte array returned by the application after executing and committing the previous block.

The first block has `block.Header.AppHash == []byte{}`.

### ValidatorsHash

```go
block.ValidatorsHash == SimpleMerkleRoot(state.Validators)
```

Simple Merkle root of the current validator set that is committing the block.
This can be used to validate the `LastCommit` included in the next block.
May be updated by the application.

### ConsensusParamsHash

```go
block.ConsensusParamsHash == SimpleMerkleRoot(state.ConsensusParams)
```

Simple Merkle root of the consensus parameters.
May be updated by the application.

### Proposer

```go
block.Header.Proposer in state.Validators
```

Original proposer of the block. Must be a current validator.

NOTE: we also need to track the round.

## EvidenceHash

```go
block.EvidenceHash == SimpleMerkleRoot(block.Evidence)
```

Simple Merkle root of the evidence of Byzantine behaviour included in this block.

## Txs

Arbitrary length array of arbitrary length byte-arrays.

## LastCommit

The first height is an exception - it requires the LastCommit to be empty:

```go
if block.Header.Height == 1 {
    len(b.LastCommit) == 0
}
```

Otherwise, we require:

```go
len(block.LastCommit) == len(state.LastValidators)
talliedVotingPower := 0
for i, vote := range block.LastCommit {
    if vote == nil {
        continue
    }
    vote.Type == 2
    vote.Height == block.LastCommit.Height()
    vote.Round == block.LastCommit.Round()
    vote.BlockID == block.LastBlockID

    val := state.LastValidators[i]
    vote.Verify(block.ChainID, val.PubKey) == true

    talliedVotingPower += val.VotingPower
}

talliedVotingPower > (2/3) * TotalVotingPower(state.LastValidators)
```

Includes one (possibly nil) vote for every current validator.
Non-nil votes must be Precommits.
All votes must be for the same height and round.
All votes must be for the previous block.
All votes must have a valid signature from the corresponding validator.
The sum total of the voting power of the validators that voted
must be greater than 2/3 of the total voting power of the complete validator set.

### Vote

A vote is a signed message broadcast in the consensus for a particular block at a particular height and round.
When stored in the blockchain or propagated over the network, votes are encoded in TMBIN.
For signing, votes are encoded in JSON, and the ChainID is included, in the form of the `CanonicalSignBytes`.

We define a method `Verify` that returns `true` if the signature verifies against the pubkey for the CanonicalSignBytes
using the given ChainID:

```go
func (v Vote) Verify(chainID string, pubKey PubKey) bool {
    return pubKey.Verify(v.Signature, CanonicalSignBytes(chainID, v))
}
```

where `pubKey.Verify` performs the appropriate digital signature verification of the `pubKey`
against the given signature and message bytes.
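
As an illustration of this check, here is a minimal, self-contained sketch using Go's standard `crypto/ed25519` package (an assumption for the example - Tendermint's actual key types live in its own crypto library, and `canonicalSignBytes` below is a hypothetical stand-in for the real encoding):

```go
package main

import (
	"crypto/ed25519"
	"fmt"
)

// canonicalSignBytes is a hypothetical stand-in for CanonicalSignBytes:
// a deterministic, chain-id-prefixed encoding of the vote.
func canonicalSignBytes(chainID, vote string) []byte {
	return []byte(`{"chain_id":"` + chainID + `","vote":` + vote + `}`)
}

func main() {
	pub, priv, _ := ed25519.GenerateKey(nil)

	msg := canonicalSignBytes("my-chain-id", `{"height":3,"round":2,"type":2}`)
	sig := ed25519.Sign(priv, msg)

	// This is the shape of the Vote.Verify check: the signature over the
	// canonical sign bytes, verified against the validator's public key.
	fmt.Println(ed25519.Verify(pub, msg, sig)) // true
}
```
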
## Evidence

TODO

```
TODO
```

Every piece of evidence contains two conflicting votes from a single validator that
was active at the height indicated in the votes.
The votes must not be too old.

# Execution

Once a block is validated, it can be executed against the state.

The state follows this recursive equation:

```go
state(1) = InitialState
state(h+1) <- Execute(state(h), ABCIApp, block(h))
```

where `InitialState` includes the initial consensus parameters and validator set,
and `ABCIApp` is an ABCI application that can return results and changes to the validator
set (TODO). Execute is defined as:

```go
Execute(s State, app ABCIApp, block Block) State {
    // TODO: just spell out ApplyBlock here
    // and remove ABCIResponses struct.
    abciResponses := app.ApplyBlock(block)

    return State{
        LastResults:     abciResponses.DeliverTxResults,
        AppHash:         abciResponses.AppHash,
        Validators:      UpdateValidators(state.Validators, abciResponses.ValidatorChanges),
        LastValidators:  state.Validators,
        ConsensusParams: UpdateConsensusParams(state.ConsensusParams, abciResponses.ConsensusParamChanges),
    }
}

type ABCIResponses struct {
    DeliverTxResults      []Result
    ValidatorChanges      []Validator
    ConsensusParamChanges ConsensusParams
    AppHash               []byte
}
```

docs/specification/new-spec/encoding.md (new file)
@@ -0,0 +1,206 @@
# Tendermint Encoding

## Binary Serialization (TMBIN)

Tendermint aims to encode data structures in a manner similar to how the corresponding Go structs
are laid out in memory.
Variable length items are length-prefixed.
While the encoding was inspired by Go, it is easily implemented in other languages as well, given its intuitive design.

XXX: This is changing to use real varints and 4-byte-prefixes.
See https://github.com/tendermint/go-wire/tree/sdk2.

### Fixed Length Integers

Fixed length integers are encoded in Big-Endian using the specified number of bytes.
So `uint8` and `int8` use one byte, `uint16` and `int16` use two bytes,
`uint32` and `int32` use four bytes, and `uint64` and `int64` use eight bytes.

Negative integers are encoded via two's complement.

Examples:

```go
encode(uint8(6))  == [0x06]
encode(uint32(6)) == [0x00, 0x00, 0x00, 0x06]

encode(int8(-6))  == [0xFA]
encode(int32(-6)) == [0xFF, 0xFF, 0xFF, 0xFA]
```
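
These fixed-length encodings match what Go's standard `encoding/binary` package produces; a minimal sketch reproducing the examples above:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	buf := make([]byte, 4)

	// Big-endian, four bytes for a uint32.
	binary.BigEndian.PutUint32(buf, 6)
	fmt.Printf("% X\n", buf) // 00 00 00 06

	// Negative integers are two's complement: int32(-6) -> FF FF FF FA.
	binary.BigEndian.PutUint32(buf, uint32(int32(-6)))
	fmt.Printf("% X\n", buf) // FF FF FF FA
}
```
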
### Variable Length Integers

Variable length integers are encoded as length-prefixed Big-Endian integers.
The length-prefix consists of a single byte and corresponds to the length of the encoded integer.

Negative integers are encoded by flipping the leading bit of the length-prefix to a `1`.

Zero is encoded as `0x00`. It is not length-prefixed.

Examples:

```go
encode(uint(6))     == [0x01, 0x06]
encode(uint(70000)) == [0x03, 0x01, 0x11, 0x70]

encode(int(-6))     == [0xF1, 0x06]
encode(int(-70000)) == [0xF3, 0x01, 0x11, 0x70]

encode(int(0))      == [0x00]
```
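
A sketch implementing this scheme is below. Note it follows the example bytes for the negative-number marker (`0xF0 | length`, as in `0xF1`, `0xF3` above), which is an interpretation of the examples rather than an authoritative statement of the go-wire rules:

```go
package main

import "fmt"

// encodeVarint matches the examples above: a one-byte length prefix
// followed by the big-endian bytes of |n|; negative numbers mark the
// prefix with 0xF0 (assumed from the example bytes); zero is 0x00.
func encodeVarint(n int64) []byte {
	if n == 0 {
		return []byte{0x00}
	}
	neg := n < 0
	if neg {
		n = -n
	}
	// Big-endian bytes of the magnitude, without leading zeros.
	var body []byte
	for n > 0 {
		body = append([]byte{byte(n & 0xFF)}, body...)
		n >>= 8
	}
	prefix := byte(len(body))
	if neg {
		prefix |= 0xF0
	}
	return append([]byte{prefix}, body...)
}

func main() {
	fmt.Printf("% X\n", encodeVarint(6))      // 01 06
	fmt.Printf("% X\n", encodeVarint(70000))  // 03 01 11 70
	fmt.Printf("% X\n", encodeVarint(-6))     // F1 06
	fmt.Printf("% X\n", encodeVarint(-70000)) // F3 01 11 70
}
```
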
### Strings
|
||||
|
||||
An encoded string is length-prefixed followed by the underlying bytes of the string.
|
||||
The length-prefix is itself encoded as an `int`.
|
||||
|
||||
The empty string is encoded as `0x00`. It is not length-prefixed.
|
||||
|
||||
Examples:
|
||||
|
||||
```go
|
||||
encode("") == [0x00]
|
||||
encode("a") == [0x01, 0x01, 0x61]
|
||||
encode("hello") == [0x01, 0x05, 0x68, 0x65, 0x6C, 0x6C, 0x6F]
|
||||
encode("¥") == [0x01, 0x02, 0xC2, 0xA5]
|
||||
```
|
||||
|
||||
### Arrays (fixed length)
|
||||
|
||||
An encoded fix-lengthed array is the concatenation of the encoding of its elements.
|
||||
There is no length-prefix.
|
||||
|
||||
Examples:
|
||||
|
||||
```go
|
||||
encode([4]int8{1, 2, 3, 4}) == [0x01, 0x02, 0x03, 0x04]
|
||||
encode([4]int16{1, 2, 3, 4}) == [0x00, 0x01, 0x00, 0x02, 0x00, 0x03, 0x00, 0x04]
|
||||
encode([4]int{1, 2, 3, 4}) == [0x01, 0x01, 0x01, 0x02, 0x01, 0x03, 0x01, 0x04]
|
||||
encode([2]string{"abc", "efg"}) == [0x01, 0x03, 0x61, 0x62, 0x63, 0x01, 0x03, 0x65, 0x66, 0x67]
|
||||
```
|
||||
|
||||
### Slices (variable length)
|
||||
|
||||
An encoded variable-length array is length-prefixed followed by the concatenation of the encoding of
|
||||
its elements.
|
||||
The length-prefix is itself encoded as an `int`.
|
||||
|
||||
An empty slice is encoded as `0x00`. It is not length-prefixed.
|
||||
|
||||
Examples:
|
||||
|
||||
```go
|
||||
encode([]int8{}) == [0x00]
|
||||
encode([]int8{1, 2, 3, 4}) == [0x01, 0x04, 0x01, 0x02, 0x03, 0x04]
|
||||
encode([]int16{1, 2, 3, 4}) == [0x01, 0x04, 0x00, 0x01, 0x00, 0x02, 0x00, 0x03, 0x00, 0x04]
|
||||
encode([]int{1, 2, 3, 4}) == [0x01, 0x04, 0x01, 0x01, 0x01, 0x02, 0x01, 0x03, 0x01, 0x4]
|
||||
encode([]string{"abc", "efg"}) == [0x01, 0x02, 0x01, 0x03, 0x61, 0x62, 0x63, 0x01, 0x03, 0x65, 0x66, 0x67]
|
||||
```

### BitArray

A BitArray is encoded as an `int` giving the number of bits, followed by the array of `uint64`
elements that holds the bits.

```go
type BitArray struct {
    Bits  int
    Elems []uint64
}
```

### Time

Time is encoded as an `int64` of the number of nanoseconds since January 1, 1970,
rounded to the nearest millisecond.

Times before then are invalid.

Examples:

```go
encode(time.Time("Jan 1 00:00:00 UTC 1970"))           == [0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00]
encode(time.Time("Jan 1 00:00:01 UTC 1970"))           == [0x00, 0x00, 0x00, 0x00, 0x3B, 0x9A, 0xCA, 0x00] // 1,000,000,000 ns
encode(time.Time("Mon Jan 2 15:04:05 -0700 MST 2006")) == [0x0F, 0xC4, 0xBB, 0xC1, 0x53, 0x03, 0x12, 0x00]
```
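
To see where the last example comes from: the Unix time of `Mon Jan 2 15:04:05 -0700 MST 2006` is
1136239445 seconds, and the encoding is simply that count in nanoseconds as a Big-Endian 8-byte
integer. A quick Go check (our own snippet, not part of the spec):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	// 1136239445 s * 1e9 ns/s, written as a Big-Endian 8-byte integer.
	ns := uint64(1136239445) * 1_000_000_000
	var buf [8]byte
	binary.BigEndian.PutUint64(buf[:], ns)
	fmt.Printf("% X\n", buf[:]) // 0F C4 BB C1 53 03 12 00
}
```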

### Structs

An encoded struct is the concatenation of the encodings of its elements.
There is no length-prefix.

Examples:

```go
type MyStruct struct{
    A int
    B string
    C time.Time
}
encode(MyStruct{4, "hello", time.Time("Mon Jan 2 15:04:05 -0700 MST 2006")}) ==
    [0x01, 0x04, 0x01, 0x05, 0x68, 0x65, 0x6C, 0x6C, 0x6F, 0x0F, 0xC4, 0xBB, 0xC1, 0x53, 0x03, 0x12, 0x00]
```

## Merkle Trees

Simple Merkle trees are used in numerous places in Tendermint to compute a cryptographic digest of a data structure.

RIPEMD160 is always used as the hashing function.

The function `SimpleMerkleRoot` is a simple recursive function defined as follows:

```go
func SimpleMerkleRoot(hashes [][]byte) []byte {
    switch len(hashes) {
    case 0:
        return nil
    case 1:
        return hashes[0]
    default:
        left := SimpleMerkleRoot(hashes[:(len(hashes)+1)/2])
        right := SimpleMerkleRoot(hashes[(len(hashes)+1)/2:])
        return RIPEMD160(append(left, right...))
    }
}
```

Note: we abuse notation and call `SimpleMerkleRoot` with arguments of type `struct` or type `[]struct`.
For `struct` arguments, we compute a `[][]byte` by sorting elements of the `struct` according to
field name and then hashing them.
For `[]struct` arguments, we compute a `[][]byte` by hashing the individual `struct` elements.

## JSON (TMJSON)

Signed messages (eg. votes, proposals) in the consensus are encoded in TMJSON, rather than TMBIN.
TMJSON is JSON where `[]byte` are encoded as uppercase hex, rather than base64.

When signing, the elements of a message are sorted by key and the sorted message is embedded in an
outer JSON that includes a `chain_id` field.
We call this encoding the CanonicalSignBytes. For instance, CanonicalSignBytes for a vote would look
like:

```json
{"chain_id":"my-chain-id","vote":{"block_id":{"hash":"DEADBEEF","parts":{"hash":"BEEFDEAD","total":3}},"height":3,"round":2,"timestamp":1234567890,"type":2}}
```

Note how the fields within each level are sorted.

## Other

### MakeParts

Encode an object using TMBIN and slice it into parts.

```go
MakeParts(object, partSize)
```

### Part

```go
type Part struct {
    Index int
    Bytes []byte
    Proof []byte
}
```
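
As a rough sketch of how `MakeParts` could work (our own illustration; the real function also
computes the Merkle `Proof` for each part against the parts' root hash):

```go
package main

import "fmt"

type Part struct {
	Index int
	Bytes []byte
	Proof []byte
}

// makeParts slices an encoded byte blob into parts of at most partSize bytes.
// Proof generation (a Merkle proof of each part against the parts' root hash)
// is omitted here.
func makeParts(encoded []byte, partSize int) []Part {
	var parts []Part
	for i := 0; len(encoded) > 0; i++ {
		n := partSize
		if len(encoded) < n {
			n = len(encoded)
		}
		parts = append(parts, Part{Index: i, Bytes: encoded[:n]})
		encoded = encoded[n:]
	}
	return parts
}

func main() {
	parts := makeParts([]byte("hello world"), 4)
	fmt.Println(len(parts)) // 3
}
```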

File: `docs/specification/new-spec/p2p/config.md`

# P2P Config

Here we describe configuration options around the Peer Exchange.
These can be set using flags or via the `$TMHOME/config/config.toml` file.
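
For illustration, a hypothetical `config.toml` fragment setting the options described below
(assuming each `--p2p.X` flag maps to a key `X` under the `[p2p]` section; the addresses are
placeholders):

```
[p2p]
seed_mode = false
seeds = "1.2.3.4:46656,2.3.4.5:46656"
persistent_peers = "5.6.7.8:46656"
```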

## Seed Mode

`--p2p.seed_mode`

The node operates in seed mode. In seed mode, a node continuously crawls the network for peers,
and upon incoming connection shares some peers and disconnects.

## Seeds

`--p2p.seeds "1.2.3.4:46656,2.3.4.5:4444"`

Dials these seeds when we need more peers. They should return a list of peers and then disconnect.
If we already have enough peers in the address book, we may never need to dial them.

## Persistent Peers

`--p2p.persistent_peers "1.2.3.4:46656,2.3.4.5:46656"`

Dial these peers and auto-redial them if the connection fails.
These are intended to be trusted persistent peers that can help
anchor us in the p2p network. The auto-redial uses exponential
backoff and will give up after a day of trying to connect.

**Note:** If `seeds` and `persistent_peers` intersect,
the user will be warned that seeds may auto-close connections
and that the node may not be able to keep the connection persistent.

## Private Persistent Peers

`--p2p.private_persistent_peers "1.2.3.4:46656,2.3.4.5:46656"`

These are persistent peers that we do not add to the address book or
gossip to other peers. They stay private to us.

File: `docs/specification/new-spec/p2p/connection.md`

# P2P Multiplex Connection

## MConnection

`MConnection` is a multiplex connection that supports multiple independent streams
with distinct quality of service guarantees atop a single TCP connection.
Each stream is known as a `Channel` and each `Channel` has a globally unique *byte id*.
Each `Channel` also has a relative priority that determines the quality of service
of the `Channel` compared to other `Channel`s.
The *byte id* and the relative priorities of each `Channel` are configured upon
initialization of the connection.

The `MConnection` supports three packet types:

- Ping
- Pong
- Msg

### Ping and Pong

The ping and pong messages consist of writing a single byte to the connection; 0x1 and 0x2, respectively.

When we haven't received any messages on an `MConnection` in time `pingTimeout`, we send a ping message.
When a ping is received on the `MConnection`, a pong is sent in response only if there are no other messages
to send and the peer has not sent us too many pings (TODO).

If a pong or message is not received in sufficient time after a ping, we disconnect from the peer.

### Msg

Messages in channels are chopped into smaller `msgPacket`s for multiplexing.

```
type msgPacket struct {
	ChannelID byte
	EOF       byte // 1 means message ends here.
	Bytes     []byte
}
```

The `msgPacket` is serialized using [go-wire](https://github.com/tendermint/go-wire) and prefixed with 0x3.
The received `Bytes` of a sequential set of packets are appended together
until a packet with `EOF=1` is received, then the complete serialized message
is returned for processing by the `onReceive` function of the corresponding channel.
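
A minimal sketch of this reassembly (our own illustration, not the actual implementation):

```go
package main

import "fmt"

type msgPacket struct {
	ChannelID byte
	EOF       byte // 1 means message ends here.
	Bytes     []byte
}

// recvPacket buffers packet bytes for a channel and returns the complete
// message once a packet with EOF=1 arrives, or nil if more packets are needed.
func recvPacket(buf *[]byte, pkt msgPacket) []byte {
	*buf = append(*buf, pkt.Bytes...)
	if pkt.EOF == 1 {
		msg := *buf
		*buf = nil
		return msg // ready for the channel's onReceive function
	}
	return nil
}

func main() {
	var recving []byte
	recvPacket(&recving, msgPacket{ChannelID: 0x20, Bytes: []byte("hel")})
	msg := recvPacket(&recving, msgPacket{ChannelID: 0x20, EOF: 1, Bytes: []byte("lo")})
	fmt.Printf("%s\n", msg) // hello
}
```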

### Multiplexing

Messages are sent from a single `sendRoutine`, which loops over a select statement and results in the sending
of a ping, a pong, or a batch of data messages. The batch of data messages may include messages from multiple channels.
Message bytes are queued for sending in their respective channel, with each channel holding one unsent message at a time.
Messages are chosen for a batch one at a time from the channel with the lowest ratio of recently sent bytes to channel priority.
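
A toy sketch of that selection rule (field and function names are ours, not Tendermint's):

```go
package main

import "fmt"

type channelState struct {
	id           byte
	priority     int
	recentlySent int64 // decayed over time in the real implementation
}

// nextChannel picks the channel with the lowest ratio of recently sent
// bytes to priority, which is the next one allowed to add to the batch.
func nextChannel(chs []channelState) *channelState {
	var best *channelState
	for i := range chs {
		ch := &chs[i]
		if best == nil ||
			float64(ch.recentlySent)/float64(ch.priority) <
				float64(best.recentlySent)/float64(best.priority) {
			best = ch
		}
	}
	return best
}

func main() {
	chs := []channelState{
		{id: 0x20, priority: 5, recentlySent: 100},
		{id: 0x21, priority: 10, recentlySent: 100},
	}
	fmt.Printf("0x%X\n", nextChannel(chs).id) // 0x21: higher priority, same bytes sent
}
```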

## Sending Messages

There are two methods for sending messages:

```go
func (m MConnection) Send(chID byte, msg interface{}) bool {}
func (m MConnection) TrySend(chID byte, msg interface{}) bool {}
```

`Send(chID, msg)` is a blocking call that waits until `msg` is successfully queued
for the channel with the given id byte `chID`. The message `msg` is serialized
using the `tendermint/wire` submodule's `WriteBinary()` reflection routine.

`TrySend(chID, msg)` is a nonblocking call that queues the message `msg` in the channel
with the given id byte `chID` if the queue is not full; otherwise it returns false immediately.

`Send()` and `TrySend()` are also exposed for each `Peer`.

## Peer

Each peer has one `MConnection` instance, and includes other information such as whether the connection
was outbound, whether the connection should be recreated if it closes, various identity information about the node,
and other higher level thread-safe data used by the reactors.

## Switch/Reactor

The `Switch` handles peer connections and exposes an API to receive incoming messages
on `Reactors`. Each `Reactor` is responsible for handling incoming messages of one
or more `Channels`. So while sending outgoing messages is typically performed on the peer,
incoming messages are received on the reactor.

```go
// Declare a MyReactor reactor that handles messages on MyChannelID.
type MyReactor struct{}

func (reactor MyReactor) GetChannels() []*ChannelDescriptor {
	return []*ChannelDescriptor{&ChannelDescriptor{ID: MyChannelID, Priority: 1}}
}

func (reactor MyReactor) Receive(chID byte, peer *Peer, msgBytes []byte) {
	r, n, err := bytes.NewBuffer(msgBytes), new(int64), new(error)
	msgString := ReadString(r, n, err)
	fmt.Println(msgString)
}

// Other Reactor methods omitted for brevity
...

sw := NewSwitch([]Reactor{MyReactor{}})

...

// Send a random message to all outbound connections
for _, peer := range sw.Peers().List() {
	if peer.IsOutbound() {
		peer.Send(MyChannelID, "Here's a random message")
	}
}
```

File: `docs/specification/new-spec/p2p/node.md`

# Tendermint Peer Discovery

A Tendermint P2P network has different kinds of nodes with different requirements for connectivity to one another.
This document describes what kind of nodes Tendermint should enable and how they should work.

## Seeds

Seeds are the first point of contact for a new node.
They return a list of known active peers and then disconnect.

Seeds should operate full nodes with the PEX reactor in a "crawler" mode
that continuously explores to validate the availability of peers.

Seeds should only respond with some top percentile of the best peers they know about.
See [reputation](TODO) for details on peer quality.

## New Full Node

A new node needs a few things to connect to the network:
- a list of seeds, which can be provided to Tendermint via config file or flags,
or hardcoded into the software by in-process apps
- a `ChainID`, also called `Network` at the p2p layer
- a recent block height, `H`, and hash, `HASH`, for the blockchain

The values `H` and `HASH` must be received and corroborated by means external to Tendermint, and specific to the user - ie. via the user's trusted social consensus.
This requirement to validate `H` and `HASH` out-of-band and via social consensus
is the essential difference in security models between Proof-of-Work and Proof-of-Stake blockchains.

With the above, the node then queries some seeds for peers for its chain,
dials those peers, and runs the Tendermint protocols with those it successfully connects to.

When the node catches up to height `H`, it ensures the block hash matches `HASH`.
If not, Tendermint will exit, and the user must try again - either they are connected
to bad peers or their social consensus is invalid.

## Restarted Full Node

A node checks its address book on startup and attempts to connect to peers from there.
If it can't connect to any peers after some time, it falls back to the seeds to find more.

Restarted full nodes can run the `blockchain` or `consensus` reactor protocols to sync up
to the latest state of the blockchain from wherever they were last.
In a Proof-of-Stake context, if they are sufficiently far behind (greater than the length
of the unbonding period), they will need to validate a recent `H` and `HASH` out-of-band again
so they know they have synced the correct chain.

## Validator Node

A validator node is a node that interfaces with a validator signing key.
These nodes require the highest security, and should not accept incoming connections.
They should maintain outgoing connections to a controlled set of "Sentry Nodes" that serve
as their proxy shield to the rest of the network.

Validators that know and trust each other can accept incoming connections from one another and maintain direct private connectivity via VPN.

## Sentry Node

Sentry nodes are guardians of a validator node and provide it access to the rest of the network.
They should be well connected to other full nodes on the network.
Sentry nodes may be dynamic, but should maintain persistent connections to some evolving random subset of each other.
They should always expect to have direct incoming connections from the validator node and its backup(s).
They do not report the validator node's address in the PEX and
they may be more strict about the quality of peers they keep.

Sentry nodes belonging to validators that trust each other may wish to maintain persistent connections via VPN with one another, but only report each other sparingly in the PEX.

File: `docs/specification/new-spec/p2p/peer.md`

# Tendermint Peers

This document explains how Tendermint Peers are identified and how they connect to one another.

For details on peer discovery, see the [peer exchange (PEX) reactor doc](pex.md).

## Peer Identity

Tendermint peers are expected to maintain long-term persistent identities in the form of a public key.
Each peer has an ID defined as `peer.ID == peer.PubKey.Address()`, where `Address` uses the scheme defined in go-crypto.

A single peer ID can have multiple IP addresses associated with it.
TODO: define how to deal with this.

When attempting to connect to a peer, we use the PeerURL: `<ID>@<IP>:<PORT>`.
We will attempt to connect to the peer at IP:PORT, and verify,
via authenticated encryption, that it is in possession of the private key
corresponding to `<ID>`. This prevents man-in-the-middle attacks on the peer layer.

Peers can also be connected to without specifying an ID, ie. just `<IP>:<PORT>`.
In this case, the peer must be authenticated out-of-band of Tendermint,
for instance via VPN.

## Connections

All p2p connections use TCP.
Upon establishing a successful TCP connection with a peer,
two handshakes are performed: one for authenticated encryption, and one for Tendermint versioning.
Both handshakes have configurable timeouts (they should complete quickly).

### Authenticated Encryption Handshake

Tendermint implements the Station-to-Station protocol
using ED25519 keys for Diffie-Hellman key-exchange and NaCl SecretBox for encryption.
It goes as follows:
- generate an ephemeral ED25519 keypair
- send the ephemeral public key to the peer
- wait to receive the peer's ephemeral public key
- compute the Diffie-Hellman shared secret using the peer's ephemeral public key and our ephemeral private key
- generate two nonces to use for encryption (sending and receiving) as follows (see the sketch after this list):
    - sort the ephemeral public keys in ascending order and concatenate them
    - RIPEMD160 the result
    - append 4 empty bytes (extending the hash to 24-bytes)
    - the result is nonce1
    - flip the last bit of nonce1 to get nonce2
- if we had the smaller ephemeral pubkey, use nonce1 for receiving, nonce2 for sending;
else the opposite
- all communications from now on are encrypted using the shared secret and the nonces, where each nonce
increments by 2 every time it is used
- we now have an encrypted channel, but still need to authenticate
- generate a common challenge to sign:
    - SHA256 of the sorted (lowest first) and concatenated ephemeral pub keys
- sign the common challenge with our persistent private key
- send the go-wire encoded persistent pubkey and signature to the peer
- wait to receive the persistent public key and signature from the peer
- verify the signature on the challenge using the peer's persistent public key
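
A sketch of the nonce derivation step (our own illustration, using `golang.org/x/crypto/ripemd160`; the actual key sorting and handshake plumbing are omitted):

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/ripemd160"
)

// deriveNonces computes the two 24-byte SecretBox nonces from the ephemeral
// public keys, which must be passed in ascending (sorted) order.
func deriveNonces(loPubKey, hiPubKey []byte) (nonce1, nonce2 [24]byte) {
	h := ripemd160.New()
	h.Write(loPubKey) // keys are concatenated in ascending order
	h.Write(hiPubKey)
	copy(nonce1[:], h.Sum(nil)) // 20-byte hash; the remaining 4 bytes stay zero
	nonce2 = nonce1
	nonce2[23] ^= 0x01 // flip the last bit to get the second nonce
	return
}

func main() {
	lo := make([]byte, 32) // placeholder ephemeral keys
	hi := make([]byte, 32)
	hi[0] = 0x01
	n1, n2 := deriveNonces(lo, hi)
	fmt.Printf("% X\n% X\n", n1[:], n2[:])
}
```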

If this is an outgoing connection (we dialed the peer) and we used a peer ID,
then finally verify that the peer's persistent public key corresponds to the peer ID we dialed,
ie. `peer.PubKey.Address() == <ID>`.

The connection has now been authenticated. All traffic is encrypted.

Note: only the dialer can authenticate the identity of the peer,
but this is what we care about since when we join the network we wish to
ensure we have reached the intended peer (and are not being MITMd).

### Peer Filter

Before continuing, we check if the new peer has the same ID as ourselves or
an existing peer. If so, we disconnect.

We also check the peer's address and public key against
an optional whitelist which can be managed through the ABCI app -
if the whitelist is enabled and the peer does not qualify, the connection is
terminated.

### Tendermint Version Handshake

The Tendermint Version Handshake allows the peers to exchange their NodeInfo:

```golang
type NodeInfo struct {
    PubKey     crypto.PubKey
    Moniker    string
    Network    string
    RemoteAddr string
    ListenAddr string
    Version    string
    Channels   []int8
    Other      []string
}
```

The connection is disconnected if:
- `peer.NodeInfo.PubKey != peer.PubKey`
- `peer.NodeInfo.Version` is not formatted as `X.X.X` where X are integers known as Major, Minor, and Revision
- `peer.NodeInfo.Version` Major is not the same as ours
- `peer.NodeInfo.Version` Minor is not the same as ours
- `peer.NodeInfo.Network` is not the same as ours
- `peer.Channels` does not intersect with our known Channels.

At this point, if we have not disconnected, the peer is valid.
It is added to the switch and hence all reactors via the `AddPeer` method.
Note that each reactor may handle multiple channels.

## Connection Activity

Once a peer is added, incoming messages for a given reactor are handled through
that reactor's `Receive` method, and output messages are sent directly by the Reactors
on each peer. A typical reactor maintains per-peer go-routine(s) that handle this.

File: `docs/specification/new-spec/reactors/block_sync/impl.md`

## Blockchain Reactor

* coordinates the pool for syncing
* coordinates the store for persistence
* coordinates the playing of blocks towards the app using a sm.BlockExecutor
* handles switching between fastsync and consensus
* it is a p2p.BaseReactor
* starts the pool.Start() and its poolRoutine()
* registers all the concrete types and interfaces for serialisation

### poolRoutine

* listens to these channels:
    * pool requests blocks from a specific peer by posting to requestsCh, the block reactor then sends
      a &bcBlockRequestMessage for a specific height
    * pool signals timeout of a specific peer by posting to timeoutsCh
    * switchToConsensusTicker to periodically try and switch to consensus
    * trySyncTicker to periodically check if we have fallen behind and then catch-up sync
        * if there aren't any new blocks available on the pool it skips syncing
* tries to sync the app by taking downloaded blocks from the pool, gives them to the app and stores
  them on disk
* implements Receive which is called by the switch/peer
    * calls AddBlock on the pool when it receives a new block from a peer

## Block Pool

* responsible for downloading blocks from peers
* makeRequestersRoutine()
    * removes timed-out peers
    * starts new requesters by calling makeNextRequester()
* requestRoutine():
    * picks a peer and sends the request, then blocks until:
        * pool is stopped by listening to pool.Quit
        * requester is stopped by listening to Quit
        * request is redone
        * we receive a block
    * gotBlockCh is strange

## Block Store

* persists blocks to disk

# TODO

* How does the switch from bcR to conR happen? Does conR persist blocks to disk too?
* What is the interaction between the consensus and blockchain reactors?

File: `docs/specification/new-spec/reactors/block_sync/reactor.md`

# Blockchain Reactor

The Blockchain Reactor's high level responsibility is to enable peers who are
far behind the current state of the consensus to quickly catch up by downloading
many blocks in parallel, verifying their commits, and executing them against the
ABCI application.

Tendermint full nodes run the Blockchain Reactor as a service to provide blocks
to new nodes. New nodes run the Blockchain Reactor in "fast_sync" mode,
where they actively make requests for more blocks until they sync up.
Once caught up, "fast_sync" mode is disabled and the node switches to
using (and turns on) the Consensus Reactor.

## Message Types

```go
const (
    msgTypeBlockRequest    = byte(0x10)
    msgTypeBlockResponse   = byte(0x11)
    msgTypeNoBlockResponse = byte(0x12)
    msgTypeStatusResponse  = byte(0x20)
    msgTypeStatusRequest   = byte(0x21)
)
```

```go
type bcBlockRequestMessage struct {
    Height int64
}

type bcNoBlockResponseMessage struct {
    Height int64
}

type bcBlockResponseMessage struct {
    Block Block
}

type bcStatusRequestMessage struct {
    Height int64
}

type bcStatusResponseMessage struct {
    Height int64
}
```

## Protocol

TODO

# Consensus Reactor

Consensus Reactor defines a reactor for the consensus service. It contains the ConsensusState service that
manages the state of the Tendermint consensus internal state machine.
When Consensus Reactor is started, it starts Broadcast Routine which starts ConsensusState service.
Furthermore, for each peer that is added to the Consensus Reactor, it creates (and manages) the known peer state
(that is used extensively in gossip routines) and starts the following three routines for the peer p:
Gossip Data Routine, Gossip Votes Routine and QueryMaj23Routine. Finally, Consensus Reactor is responsible
for decoding messages received from a peer and for adequate processing of the message depending on its type and content.
The processing normally consists of updating the known peer state and, for some messages
(`ProposalMessage`, `BlockPartMessage` and `VoteMessage`), also forwarding the message to the ConsensusState module
for further processing. In the following text we specify the core functionality of those separate units of execution
that are part of the Consensus Reactor.

## ConsensusState service

Consensus State handles execution of the Tendermint BFT consensus algorithm. It processes votes and proposals,
and upon reaching agreement, commits blocks to the chain and executes them against the application.
The internal state machine receives input from peers, the internal validator and from a timer.

Inside Consensus State we have the following units of execution: Timeout Ticker and Receive Routine.
Timeout Ticker is a timer that schedules timeouts conditional on the height/round/step that are processed
by the Receive Routine.

### Receive Routine of the ConsensusState service

Receive Routine of the ConsensusState handles messages which may cause internal consensus state transitions.
It is the only routine that updates RoundState, which contains the internal consensus state.
Updates (state transitions) happen on timeouts, complete proposals, and 2/3 majorities.
It receives messages from peers, internal validators and from Timeout Ticker
and invokes the corresponding handlers, potentially updating the RoundState.
The details of the protocol (together with formal proofs of correctness) implemented by the Receive Routine are
discussed in a separate document (see [spec](https://github.com/tendermint/spec)). For understanding of this document
it is sufficient to understand that the Receive Routine manages and updates the RoundState data structure that is
then extensively used by the gossip routines to determine what information should be sent to peer processes.

## Round State

RoundState defines the internal consensus state. It contains height, round, round step, a current validator set,
a proposal and proposal block for the current round, locked round and block (if some block is being locked), set of
received votes, and last commit and last validators set.

```golang
type RoundState struct {
    Height             int64
    Round              int
    Step               RoundStepType
    Validators         ValidatorSet
    Proposal           Proposal
    ProposalBlock      Block
    ProposalBlockParts PartSet
    LockedRound        int
    LockedBlock        Block
    LockedBlockParts   PartSet
    Votes              HeightVoteSet
    LastCommit         VoteSet
    LastValidators     ValidatorSet
}
```

Internally, consensus will run as a state machine with the following states:

- RoundStepNewHeight
- RoundStepNewRound
- RoundStepPropose
- RoundStepProposeWait
- RoundStepPrevote
- RoundStepPrevoteWait
- RoundStepPrecommit
- RoundStepPrecommitWait
- RoundStepCommit

## Peer Round State

Peer round state contains the known state of a peer. It is being updated by the Receive routine of
Consensus Reactor and by the gossip routines upon sending a message to the peer.

```golang
type PeerRoundState struct {
    Height                   int64         // Height peer is at
    Round                    int           // Round peer is at, -1 if unknown.
    Step                     RoundStepType // Step peer is at
    Proposal                 bool          // True if peer has proposal for this round
    ProposalBlockPartsHeader PartSetHeader
    ProposalBlockParts       BitArray
    ProposalPOLRound         int      // Proposal's POL round. -1 if none.
    ProposalPOL              BitArray // nil until ProposalPOLMessage received.
    Prevotes                 BitArray // All votes peer has for this round
    Precommits               BitArray // All precommits peer has for this round
    LastCommitRound          int      // Round of commit for last height. -1 if none.
    LastCommit               BitArray // All commit precommits of commit for last height.
    CatchupCommitRound       int      // Round that we have commit for. Not necessarily unique. -1 if none.
    CatchupCommit            BitArray // All commit precommits peer has for this height & CatchupCommitRound
}
```

## Receive method of Consensus reactor

The entry point of the Consensus reactor is a receive method. When a message is received from a peer p,
normally the peer round state is updated correspondingly, and some messages
are passed for further processing, for example to ConsensusState service. We now specify the processing of messages
in the receive method of Consensus reactor for each message type. In the following message handlers, `rs` and `prs` denote
`RoundState` and `PeerRoundState`, respectively.

### NewRoundStepMessage handler

```
handleMessage(msg):
    if msg is from smaller height/round/step then return
    // Just remember these values.
    prsHeight = prs.Height
    prsRound = prs.Round
    prsCatchupCommitRound = prs.CatchupCommitRound
    prsCatchupCommit = prs.CatchupCommit

    Update prs with values from msg
    if prs.Height or prs.Round has been updated then
        reset Proposal related fields of the peer state
    if prs.Round has been updated and msg.Round == prsCatchupCommitRound then
        prs.Precommits = prsCatchupCommit
    if prs.Height has been updated then
        if prsHeight+1 == msg.Height && prsRound == msg.LastCommitRound then
            prs.LastCommitRound = msg.LastCommitRound
            prs.LastCommit = prs.Precommits
        else
            prs.LastCommitRound = msg.LastCommitRound
            prs.LastCommit = nil
        Reset prs.CatchupCommitRound and prs.CatchupCommit
```

### CommitStepMessage handler

```
handleMessage(msg):
    if prs.Height == msg.Height then
        prs.ProposalBlockPartsHeader = msg.BlockPartsHeader
        prs.ProposalBlockParts = msg.BlockParts
```

### HasVoteMessage handler

```
handleMessage(msg):
    if prs.Height == msg.Height then
        prs.setHasVote(msg.Height, msg.Round, msg.Type, msg.Index)
```

### VoteSetMaj23Message handler

```
handleMessage(msg):
    if prs.Height == msg.Height then
        Record in rs that the peer claims to have a 2/3 majority for msg.BlockID
        Send VoteSetBitsMessage showing votes this node has for that BlockID
```

### ProposalMessage handler

```
handleMessage(msg):
    if prs.Height != msg.Height || prs.Round != msg.Round || prs.Proposal then return
    prs.Proposal = true
    prs.ProposalBlockPartsHeader = msg.BlockPartsHeader
    prs.ProposalBlockParts = empty set
    prs.ProposalPOLRound = msg.POLRound
    prs.ProposalPOL = nil
    Send msg through internal peerMsgQueue to ConsensusState service
```

### ProposalPOLMessage handler

```
handleMessage(msg):
    if prs.Height != msg.Height or prs.ProposalPOLRound != msg.ProposalPOLRound then return
    prs.ProposalPOL = msg.ProposalPOL
```

### BlockPartMessage handler

```
handleMessage(msg):
    if prs.Height != msg.Height || prs.Round != msg.Round then return
    Record in prs that the peer has block part msg.Part.Index
    Send msg through internal peerMsgQueue to ConsensusState service
```

### VoteMessage handler

```
handleMessage(msg):
    Record in prs that the peer knows the vote with index msg.Vote.ValidatorIndex for the particular height and round
    Send msg through internal peerMsgQueue to ConsensusState service
```

### VoteSetBitsMessage handler

```
handleMessage(msg):
    Update prs for the bit-array of votes the peer claims to have for msg.BlockID
```

## Gossip Data Routine

It is used to send the following messages to the peer: `BlockPartMessage`, `ProposalMessage` and
`ProposalPOLMessage` on the DataChannel. The gossip data routine is based on the local RoundState (`rs`)
and the known PeerRoundState (`prs`). The routine repeats forever the logic shown below:

```
1a) if rs.ProposalBlockPartsHeader == prs.ProposalBlockPartsHeader and the peer does not have all the proposal parts then
        Part = pick a random proposal block part the peer does not have
        Send BlockPartMessage(rs.Height, rs.Round, Part) to the peer on the DataChannel
        if send returns true, record that the peer knows the corresponding block Part
        Continue

1b) if (0 < prs.Height) and (prs.Height < rs.Height) then
        help peer catch up using gossipDataForCatchup function
        Continue

1c) if (rs.Height != prs.Height) or (rs.Round != prs.Round) then
        Sleep PeerGossipSleepDuration
        Continue

    // at this point rs.Height == prs.Height and rs.Round == prs.Round
1d) if (rs.Proposal != nil and !prs.Proposal) then
        Send ProposalMessage(rs.Proposal) to the peer
        if send returns true, record that the peer knows Proposal
        if 0 <= rs.Proposal.POLRound then
            polRound = rs.Proposal.POLRound
            prevotesBitArray = rs.Votes.Prevotes(polRound).BitArray()
            Send ProposalPOLMessage(rs.Height, polRound, prevotesBitArray)
        Continue

2)  Sleep PeerGossipSleepDuration
```

### Gossip Data For Catchup

This function is responsible for helping the peer catch up if it is at a smaller height (prs.Height < rs.Height).
The function executes the following logic:

```
if peer does not have all block parts for prs.ProposalBlockPart then
    blockMeta = Load Block Metadata for height prs.Height from blockStore
    if blockMeta.BlockID.PartsHeader != prs.ProposalBlockPartsHeader then
        Sleep PeerGossipSleepDuration
        return
    Part = pick a random proposal block part the peer does not have
    Send BlockPartMessage(prs.Height, prs.Round, Part) to the peer on the DataChannel
    if send returns true, record that the peer knows the corresponding block Part
    return
else Sleep PeerGossipSleepDuration
```

## Gossip Votes Routine

It is used to send the following message: `VoteMessage` on the VoteChannel.
The gossip votes routine is based on the local RoundState (`rs`)
and the known PeerRoundState (`prs`). The routine repeats forever the logic shown below:

```
1a) if rs.Height == prs.Height then
        if prs.Step == RoundStepNewHeight then
            vote = random vote from rs.LastCommit the peer does not have
            Send VoteMessage(vote) to the peer
            if send returns true, continue

        if prs.Step <= RoundStepPrevote and prs.Round != -1 and prs.Round <= rs.Round then
            Prevotes = rs.Votes.Prevotes(prs.Round)
            vote = random vote from Prevotes the peer does not have
            Send VoteMessage(vote) to the peer
            if send returns true, continue

        if prs.Step <= RoundStepPrecommit and prs.Round != -1 and prs.Round <= rs.Round then
            Precommits = rs.Votes.Precommits(prs.Round)
            vote = random vote from Precommits the peer does not have
            Send VoteMessage(vote) to the peer
            if send returns true, continue

        if prs.ProposalPOLRound != -1 then
            PolPrevotes = rs.Votes.Prevotes(prs.ProposalPOLRound)
            vote = random vote from PolPrevotes the peer does not have
            Send VoteMessage(vote) to the peer
            if send returns true, continue

1b) if prs.Height != 0 and rs.Height == prs.Height+1 then
        vote = random vote from rs.LastCommit peer does not have
        Send VoteMessage(vote) to the peer
        if send returns true, continue

1c) if prs.Height != 0 and rs.Height >= prs.Height+2 then
        Commit = get commit from BlockStore for prs.Height
        vote = random vote from Commit the peer does not have
        Send VoteMessage(vote) to the peer
        if send returns true, continue

2)  Sleep PeerGossipSleepDuration
```

## QueryMaj23Routine

It is used to send the following message: `VoteSetMaj23Message`. `VoteSetMaj23Message` is sent to indicate that a given
BlockID has seen +2/3 votes. This routine is based on the local RoundState (`rs`) and the known PeerRoundState
(`prs`). The routine repeats forever the logic shown below.

```
1a) if rs.Height == prs.Height then
        Prevotes = rs.Votes.Prevotes(prs.Round)
        if there is a 2/3 majority for some blockId in Prevotes then
            m = VoteSetMaj23Message(prs.Height, prs.Round, Prevote, blockId)
            Send m to peer
            Sleep PeerQueryMaj23SleepDuration

1b) if rs.Height == prs.Height then
        Precommits = rs.Votes.Precommits(prs.Round)
        if there is a 2/3 majority for some blockId in Precommits then
            m = VoteSetMaj23Message(prs.Height, prs.Round, Precommit, blockId)
            Send m to peer
            Sleep PeerQueryMaj23SleepDuration

1c) if rs.Height == prs.Height and prs.ProposalPOLRound >= 0 then
        Prevotes = rs.Votes.Prevotes(prs.ProposalPOLRound)
        if there is a 2/3 majority for some blockId in Prevotes then
            m = VoteSetMaj23Message(prs.Height, prs.ProposalPOLRound, Prevote, blockId)
            Send m to peer
            Sleep PeerQueryMaj23SleepDuration

1d) if prs.CatchupCommitRound != -1 and 0 < prs.Height and
        prs.Height <= blockStore.Height() then
        Commit = LoadCommit(prs.Height)
        m = VoteSetMaj23Message(prs.Height, Commit.Round, Precommit, Commit.blockId)
        Send m to peer
        Sleep PeerQueryMaj23SleepDuration

2)  Sleep PeerQueryMaj23SleepDuration
```

## Broadcast routine

The Broadcast routine subscribes to an internal event bus to receive new round steps, vote messages and proposal
heartbeat messages, and broadcasts messages to peers upon receiving those events.
It broadcasts `NewRoundStepMessage` or `CommitStepMessage` upon a new round state event. Note that
broadcasting these messages does not depend on the PeerRoundState; they are sent on the StateChannel.
Upon receiving a VoteMessage it broadcasts a `HasVoteMessage` message to its peers on the StateChannel.
`ProposalHeartbeatMessage` is sent the same way on the StateChannel.

File: `docs/specification/new-spec/reactors/consensus/consensus.md`

# Tendermint Consensus Reactor

Tendermint Consensus is a distributed protocol executed by validator processes to agree on
the next block to be added to the Tendermint blockchain. The protocol proceeds in rounds, where
each round is an attempt to reach agreement on the next block. A round starts by having a dedicated
process (called proposer) suggesting to other processes what should be the next block with
the `ProposalMessage`.
The processes respond by voting for a block with `VoteMessage` (there are two kinds of vote
messages, prevote and precommit votes). Note that a proposal message is just a suggestion what the
next block should be; a validator might vote with a `VoteMessage` for a different block. If in some
round, enough processes vote for the same block, then this block is committed and later
added to the blockchain. `ProposalMessage` and `VoteMessage` are signed by the private key of the
validator. The internals of the protocol and how it ensures safety and liveness properties are
explained [here](https://github.com/tendermint/spec).

For efficiency reasons, validators in the Tendermint consensus protocol do not agree directly on the
block as the block size is big, i.e., they don't embed the block inside `Proposal` and
`VoteMessage`. Instead, they reach agreement on the `BlockID` (see `BlockID` definition in the
[Blockchain](blockchain.md) section) that uniquely identifies each block. The block itself is
disseminated to validator processes using a peer-to-peer gossiping protocol. It starts by having a
proposer first splitting a block into a number of block parts, that are then gossiped between
processes using `BlockPartMessage`.

Validators in Tendermint communicate by a peer-to-peer gossiping protocol. Each validator is connected
only to a subset of processes called peers. By the gossiping protocol, a validator sends to its peers
all needed information (`ProposalMessage`, `VoteMessage` and `BlockPartMessage`) so they can
reach agreement on some block, and also obtain the content of the chosen block (block parts). As
part of the gossiping protocol, processes also send auxiliary messages that inform peers about the
executed steps of the core consensus algorithm (`NewRoundStepMessage` and `CommitStepMessage`), and
also messages that inform peers what votes the process has seen (`HasVoteMessage`,
`VoteSetMaj23Message` and `VoteSetBitsMessage`). These messages are then used in the gossiping
protocol to determine what messages a process should send to its peers.

We now describe the content of each message exchanged during the Tendermint consensus protocol.

## ProposalMessage

ProposalMessage is sent when a new block is proposed. It is a suggestion of what the
next block in the blockchain should be.

```go
type ProposalMessage struct {
    Proposal Proposal
}
```

### Proposal

Proposal contains height and round for which this proposal is made, BlockID as a unique identifier
of the proposed block, timestamp, and two fields (POLRound and POLBlockID) that are needed for
termination of the consensus. The message is signed by the validator private key.

```go
type Proposal struct {
    Height     int64
    Round      int
    Timestamp  Time
    BlockID    BlockID
    POLRound   int
    POLBlockID BlockID
    Signature  Signature
}
```

NOTE: In the current version of Tendermint, the consensus value in a proposal is represented with
PartSetHeader, and with BlockID in the vote message. It should be aligned as suggested in this spec, as
BlockID contains PartSetHeader.

## VoteMessage

VoteMessage is sent to vote for some block (or to inform others that a process does not vote in the
current round). Vote is defined in the [Blockchain](blockchain.md) section and contains the validator's
information (validator address and index), height and round for which the vote is sent, vote type,
blockID if the process votes for some block (`nil` otherwise) and a timestamp when the vote is sent. The
message is signed by the validator private key.

```go
type VoteMessage struct {
    Vote Vote
}
```

## BlockPartMessage

BlockPartMessage is sent when gossipping a piece of the proposed block. It contains height, round
and the block part.

```go
type BlockPartMessage struct {
    Height int64
    Round  int
    Part   Part
}
```

## ProposalHeartbeatMessage

ProposalHeartbeatMessage is sent to signal that a node is alive and waiting for transactions
to be able to create the next block proposal.

```go
type ProposalHeartbeatMessage struct {
    Heartbeat Heartbeat
}
```

### Heartbeat

Heartbeat contains validator information (address and index),
height, round and sequence number. It is signed by the private key of the validator.

```go
type Heartbeat struct {
    ValidatorAddress []byte
    ValidatorIndex   int
    Height           int64
    Round            int
    Sequence         int
    Signature        Signature
}
```

## NewRoundStepMessage

NewRoundStepMessage is sent for every step transition during the core consensus algorithm execution.
It is used in the gossip part of the Tendermint protocol to inform peers about the current
height/round/step a process is in.

```go
type NewRoundStepMessage struct {
    Height                int64
    Round                 int
    Step                  RoundStepType
    SecondsSinceStartTime int
    LastCommitRound       int
}
```

## CommitStepMessage

CommitStepMessage is sent when an agreement on some block is reached. It contains the height for which
agreement is reached, the block parts header that describes the decided block and is used to obtain all
block parts, and a bit array of the block parts the sending process currently has, so its peers can know
which parts it is missing and send them.

```go
type CommitStepMessage struct {
    Height     int64
    BlockID    BlockID
    BlockParts BitArray
}
```

TODO: We use BlockID instead of BlockPartsHeader (in current implementation) for symmetry.

## ProposalPOLMessage

ProposalPOLMessage is sent when a previous block is re-proposed.
It is used to inform peers in what round the process learned of this block (ProposalPOLRound),
and what prevotes for the re-proposed block the process has.

```go
type ProposalPOLMessage struct {
    Height           int64
    ProposalPOLRound int
    ProposalPOL      BitArray
}
```

## HasVoteMessage

HasVoteMessage is sent to indicate that a particular vote has been received. It contains height,
round, vote type and the index of the validator that is the originator of the corresponding vote.

```go
type HasVoteMessage struct {
    Height int64
    Round  int
    Type   byte
    Index  int
}
```

## VoteSetMaj23Message

VoteSetMaj23Message is sent to indicate that a process has seen +2/3 votes for some BlockID.
It contains height, round, vote type and the BlockID.

```go
type VoteSetMaj23Message struct {
    Height  int64
    Round   int
    Type    byte
    BlockID BlockID
}
```

## VoteSetBitsMessage

VoteSetBitsMessage is sent to communicate the bit-array of votes a process has seen for a given
BlockID. It contains height, round, vote type, BlockID and a bit array of
the votes a process has.

```go
type VoteSetBitsMessage struct {
    Height  int64
    Round   int
    Type    byte
    BlockID BlockID
    Votes   BitArray
}
```

# Proposer selection procedure in Tendermint

This document specifies the Proposer Selection Procedure that is used in Tendermint to choose a round proposer.
As Tendermint is a "leader-based" protocol, the proposer selection is critical for its correct functioning.
Let `proposer_p(h,r)` denote the process returned by the Proposer Selection Procedure at process p, at height h
and round r. Then the Proposer Selection procedure should fulfill the following properties:

`Agreement`: Given a validator set V, and two honest validators,
p and q, for each height h, and each round r,
proposer_p(h,r) = proposer_q(h,r)

`Liveness`: In every consecutive sequence of rounds of size K (K is a system parameter), at least a
single round has an honest proposer.

`Fairness`: The proposer selection is proportional to the validator voting power, i.e., a validator with more
voting power is selected more frequently, proportional to its power. More precisely, given a set of processes
with the total voting power N, during a sequence of rounds of size N, every process is proposer in a number of rounds
equal to its voting power.

We now look at a few particular cases to understand better how fairness should be implemented.
If we have 4 processes with the following voting power distribution (p0,4), (p1, 2), (p2, 2), (p3, 2) at some round r,
we have the following sequence of proposer selections in the following rounds:

`p0, p1, p2, p3, p0, p0, p1, p2, p3, p0, p0, p1, p2, p3, p0, p0, p1, p2, p3, p0, etc`
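
One scheme with these properties (a sketch of the weighted round-robin idea, not necessarily the exact implementation) keeps a running accumulator per validator: each round, every validator's accumulator grows by its voting power, the validator with the largest accumulator proposes, and the chosen proposer's accumulator is decreased by the total voting power. For the distribution above it reproduces the sequence shown:

```go
package main

import "fmt"

type validator struct {
	name  string
	power int64
	accum int64
}

// nextProposer advances the accumulators by one round and returns the proposer.
// Ties are broken by keeping the earlier validator in the list.
func nextProposer(vals []validator) *validator {
	var total int64
	best := &vals[0]
	for i := range vals {
		vals[i].accum += vals[i].power // everyone accumulates its power
		total += vals[i].power
		if vals[i].accum > best.accum {
			best = &vals[i]
		}
	}
	best.accum -= total // the proposer pays the total voting power
	return best
}

func main() {
	vals := []validator{{"p0", 4, 0}, {"p1", 2, 0}, {"p2", 2, 0}, {"p3", 2, 0}}
	for i := 0; i < 10; i++ {
		fmt.Print(nextProposer(vals).name, " ")
	}
	fmt.Println() // p0 p1 p2 p3 p0 p0 p1 p2 p3 p0
}
```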

Let us now consider the following scenario where the total voting power of faulty processes is aggregated in a single process
p0: (p0,3), (p1, 1), (p2, 1), (p3, 1), (p4, 1), (p5, 1), (p6, 1), (p7, 1).
In this case the sequence of proposer selections looks like this:

`p0, p1, p2, p3, p0, p4, p5, p6, p7, p0, p0, p1, p2, p3, p0, p4, p5, p6, p7, p0, etc`

In this case, we see that the number of rounds coordinated by a faulty process is proportional to its voting power.
We also consider the case where voting power is uniformly distributed among processes, i.e., we have 10 processes
each with voting power of 1. And let us consider that there are 3 faulty processes with consecutive addresses,
for example the first 3 processes are faulty. Then the sequence looks like this:

`p0, p1, p2, p3, p4, p5, p6, p7, p8, p9, p0, p1, p2, p3, p4, p5, p6, p7, p8, p9, etc`

In this case, we have 3 consecutive rounds with a faulty proposer.
One special case we consider is the case where a single honest process p0 has most of the voting power, for example:
(p0,100), (p1, 2), (p2, 3), (p3, 4). Then the sequence of proposer selection looks like this:

`p0, p0, p0, p0, p0, p0, p0, p0, p0, p0, p0, p0, p0, p1, p0, p0, p0, p0, p0, etc`

This basically means that almost all rounds have the same proposer. But in this case, the process p0 has
enough voting power to decide whatever it wants anyway, so the fact that it coordinates almost all rounds seems correct.

File: `docs/specification/new-spec/reactors/mempool/README.md`

# Mempool Specification

This package contains documents specifying the functionality
of the mempool module.

Components:

* [Config](./config.md) - how to configure it
* [External Messages](./messages.md) - the messages we accept over p2p and rpc interfaces
* [Functionality](./functionality.md) - high-level description of the functionality it provides
* [Concurrency Model](./concurrency.md) - what guarantees we provide, what locks we require.

# Mempool Concurrency

Look at the concurrency model this uses...

* Receiving CheckTx
* Broadcasting new tx
* Interfaces with consensus engine, reap/update while checking
* Calling the ABCI app (ordering. callbacks. how proxy works alongside the blockchain proxy which actually writes blocks)

File: `docs/specification/new-spec/reactors/mempool/config.md`

# Mempool Configuration

Here we describe configuration options around the mempool.
For the purposes of this document, they are described
as command-line flags, but they can also be passed in as
environmental variables or in the config.toml file. The
following are all equivalent:

Flag: `--mempool.recheck_empty=false`

Environment: `TM_MEMPOOL_RECHECK_EMPTY=false`

Config:
```
[mempool]
recheck_empty = false
```

## Recheck

`--mempool.recheck=false` (default: true)

`--mempool.recheck_empty=false` (default: true)

Recheck determines if the mempool rechecks all pending
transactions after a block was committed. Once a block
is committed, the mempool removes all valid transactions
that were successfully included in the block.

If `recheck` is true, then it will rerun CheckTx on
all remaining transactions with the new block state.

If the block contained no transactions, it will skip the
recheck unless `recheck_empty` is true.

## Broadcast

`--mempool.broadcast=false` (default: true)

Determines whether this node gossips any valid transactions
that arrive in mempool. Default is to gossip anything that
passes checktx. If this is disabled, transactions are not
gossiped, but instead stored locally and added to the next
block this node proposes.

## WalDir

`--mempool.wal_dir=/tmp/gaia/mempool.wal` (default: $TM_HOME/data/mempool.wal)

This defines the directory where mempool writes the write-ahead
logs. These files can be used to reload unbroadcasted
transactions if the node crashes.

If the directory passed in is an absolute path, the wal file is
created there. If the directory is a relative path, the path is
appended to the home directory of the tendermint process to
generate an absolute path to the wal directory
(default `$HOME/.tendermint`, or set via `TM_HOME` or `--home`).

# Mempool Functionality

The mempool maintains a list of potentially valid transactions,
both to broadcast to other nodes, as well as to provide to the
consensus reactor when it is selected as the block proposer.

There are two sides to the mempool state:

* External: get, check, and broadcast new transactions
* Internal: return valid transactions, update list after block commit

## External functionality

External functionality is exposed via network interfaces
to potentially untrusted actors.

* CheckTx - triggered via RPC or P2P
* Broadcast - gossip messages after a successful check

## Internal functionality

Internal functionality is exposed via method calls to other
code compiled into the tendermint binary.

* Reap - get tx to propose in next block
* Update - remove tx that were included in last block
* ABCI.CheckTx - call ABCI app to validate the tx
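
A hypothetical sketch of that internal surface (method names follow the list above; the signatures are ours, not the actual interface):

```go
package main

// Tx is a raw transaction as submitted by users.
type Tx []byte

// Mempool is a sketch of the internal functionality described above.
type Mempool interface {
	// CheckTx validates the tx against the ABCI app and, if it passes,
	// adds it to the pool (and gossips it if broadcast is enabled).
	CheckTx(tx Tx) error

	// Reap returns up to maxTxs transactions to propose in the next block.
	Reap(maxTxs int) []Tx

	// Update removes the txs included in the last block and, depending on
	// the recheck config, reruns CheckTx on the remaining ones.
	Update(height int64, txs []Tx) error
}

func main() {}
```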

What does it provide the consensus reactor?
What guarantees does it need from the ABCI app?
(talk about interleaving processes in concurrency)

## Optimizations

Talk about the LRU cache to make sure we don't process any
tx that we have seen before