Compare commits
537 Commits
codecov.yml
@@ -19,3 +19,8 @@ coverage:
 comment:
   layout: "header, diff"
   behavior: default # update if exists else create new
+
+ignore:
+  - "docs"
+  - "*.md"
+  - "*.rst"
.editorconfig
@@ -8,10 +8,7 @@ end_of_line = lf
 insert_final_newline = true
 trim_trailing_whitespace = true

-[Makefile]
-indent_style = tab
-
-[*.sh]
+[*.{sh,Makefile}]
 indent_style = tab

 [*.proto]
.github/CODEOWNERS (vendored, new file, 5 lines)
@@ -0,0 +1,5 @@
+# CODEOWNERS: https://help.github.com/articles/about-codeowners/
+
+# Everything goes through Bucky. For now.
+* @ebuchman
.gitignore (vendored, 5 lines changed)
@@ -15,3 +15,8 @@ test/p2p/data/
 test/logs
 .glide
 coverage.txt
+docs/_build
+docs/tools
+
+scripts/wal2json/wal2json
+scripts/cutWALUntil/cutWALUntil
CHANGELOG.md (157 lines changed)
@@ -3,36 +3,135 @@
 ## Roadmap

 BREAKING CHANGES:
-- Upgrade the header to support better proves on validtors, results, evidence, and possibly more
+- Upgrade the header to support better proofs on validtors, results, evidence, and possibly more
 - Better support for injecting randomness
 - Pass evidence/voteInfo through ABCI
 - Upgrade consensus for more real-time use of evidence

 FEATURES:
 - Peer reputation management
-- Use the chain as its own CA for nodes and validators
-- Tooling to run multiple blockchains/apps, possibly in a single process
-- State syncing (without transaction replay)
-- Improved support for querying history and state
+- Use the chain as its own CA for nodes and validators
+- Tooling to run multiple blockchains/apps, possibly in a single process
+- State syncing (without transaction replay)
+- Add authentication and rate-limitting to the RPC

 IMPROVEMENTS:
-- Improve subtleties around mempool caching and logic
-- Consensus optimizations:
+- Improve subtleties around mempool caching and logic
+- Consensus optimizations:
   - cache block parts for faster agreement after round changes
   - propagate block parts rarest first
-- Better testing of the consensus state machine (ie. use a DSL)
+- Better testing of the consensus state machine (ie. use a DSL)
+- Auto compiled serialization/deserialization code instead of go-wire reflection

-BUG FIXES:
-- Graceful handling/recovery for apps that have non-determinism or fail to halt
+BUG FIXES:
+- Graceful handling/recovery for apps that have non-determinism or fail to halt
 - Graceful handling/recovery for violations of safety, or liveness

-## 0.10.4 (Septemeber 5, 2017)
+## 0.13.0 (December 6, 2017)
+
+BREAKING CHANGES:
+- abci: update to v0.8 using gogo/protobuf; includes tx tags, vote info in RequestBeginBlock, data.Bytes everywhere, use int64, etc.
+- types: block heights are now `int64` everywhere
+- types & node: EventSwitch and EventCache have been replaced by EventBus and EventBuffer; event types have been overhauled
+- node: EventSwitch methods now refer to EventBus
+- rpc/lib/types: RPCResponse is no longer a pointer; WSRPCConnection interface has been modified
+- rpc/client: WaitForOneEvent takes an EventsClient instead of types.EventSwitch
+- rpc/client: Add/RemoveListenerForEvent are now Subscribe/Unsubscribe
+- rpc/core/types: ResultABCIQuery wraps an abci.ResponseQuery
+- rpc: `/subscribe` and `/unsubscribe` take `query` arg instead of `event`
+- rpc: `/status` returns the LatestBlockTime in human readable form instead of in nanoseconds
+- mempool: cached transactions return an error instead of an ABCI response with BadNonce
+
+FEATURES:
+- rpc: new `/unsubscribe_all` WebSocket RPC endpoint
+- rpc: new `/tx_search` endpoint for filtering transactions by more complex queries
+- p2p/trust: new trust metric for tracking peers. See ADR-006
+- config: TxIndexConfig allows to set what DeliverTx tags to index

 IMPROVEMENTS:
-- docs: Added Slate docs to each rpc function (see rpc/core)
-- docs: Ported all website docs to Read The Docs
+- New asynchronous events system using `tmlibs/pubsub`
+- logging: Various small improvements
+- consensus: Graceful shutdown when app crashes
+- tests: Fix various non-deterministic errors
+- p2p: more defensive programming
+
+BUG FIXES:
+- consensus: fix panic where prs.ProposalBlockParts is not initialized
+- p2p: fix panic on bad channel
+
+## 0.12.1 (November 27, 2017)
+
+BUG FIXES:
+- upgrade tmlibs dependency to enable Windows builds for Tendermint
+
+## 0.12.0 (October 27, 2017)
+
+BREAKING CHANGES:
+- rpc/client: websocket ResultsCh and ErrorsCh unified in ResponsesCh.
+- rpc/client: ABCIQuery no longer takes `prove`
+- state: remove GenesisDoc from state.
+- consensus: new binary WAL format provides efficiency and uses checksums to detect corruption
+  - use scripts/wal2json to convert to json for debugging
+
+FEATURES:
+- new `certifiers` pkg contains the tendermint light-client library (name subject to change)!
+- rpc: `/genesis` includes the `app_options` .
+- rpc: `/abci_query` takes an additional `height` parameter to support historical queries.
+- rpc/client: new ABCIQueryWithOptions supports options like `trusted` (set false to get a proof) and `height` to query a historical height.
+
+IMPROVEMENTS:
+- rpc: `/genesis` result includes `app_options`
+- rpc/lib/client: add jitter to reconnects.
+- rpc/lib/types: `RPCError` satisfies the `error` interface.
+
+BUG FIXES:
+- rpc/client: fix ws deadlock after stopping
+- blockchain: fix panic on AddBlock when peer is nil
+- mempool: fix sending on TxsAvailable when a tx has been invalidated
+- consensus: dont run WAL catchup if we fast synced
+
+## 0.11.1 (October 10, 2017)
+
+IMPROVEMENTS:
+- blockchain/reactor: respondWithNoResponseMessage for missing height
+
+BUG FIXES:
+- rpc: fixed client WebSocket timeout
+- rpc: client now resubscribes on reconnection
+- rpc: fix panics on missing params
+- rpc: fix `/dump_consensus_state` to have normal json output (NOTE: technically breaking, but worth a bug fix label)
+- types: fixed out of range error in VoteSet.addVote
+- consensus: fix wal autofile via https://github.com/tendermint/tmlibs/blob/master/CHANGELOG.md#032-october-2-2017
+
+## 0.11.0 (September 22, 2017)
+
+BREAKING:
+- genesis file: validator `amount` is now `power`
+- abci: Info, BeginBlock, InitChain all take structs
+- rpc: various changes to match JSONRPC spec (http://www.jsonrpc.org/specification), including breaking ones:
+  - requests that previously returned HTTP code 4XX now return 200 with an error code in the JSONRPC.
+  - `rpctypes.RPCResponse` uses new `RPCError` type instead of `string`.
+- cmd: if there is no genesis, exit immediately instead of waiting around for one to show.
+- types: `Signer.Sign` returns an error.
+- state: every validator set change is persisted to disk, which required some changes to the `State` structure.
+- p2p: new `p2p.Peer` interface used for all reactor methods (instead of `*p2p.Peer` struct).
+
+FEATURES:
+- rpc: `/validators?height=X` allows querying of validators at previous heights.
+- rpc: Leaving the `height` param empty for `/block`, `/validators`, and `/commit` will return the value for the latest height.
+
+IMPROVEMENTS:
+- docs: Moved all docs from the website and tools repo in, converted to `.rst`, and cleaned up for presentation on `tendermint.readthedocs.io`
+
+BUG FIXES:
+- fix WAL openning issue on Windows
+
+## 0.10.4 (September 5, 2017)
+
+IMPROVEMENTS:
+- docs: Added Slate docs to each rpc function (see rpc/core)
+- docs: Ported all website docs to Read The Docs
 - config: expose some p2p params to tweak performance: RecvRate, SendRate, and MaxMsgPacketPayloadSize
 - rpc: Upgrade the websocket client and server, including improved auto reconnect, and proper ping/pong

@@ -72,7 +171,7 @@ IMPROVEMENTS:

 FEATURES:
 - Use `--trace` to get stack traces for logged errors
-- types: GenesisDoc.ValidatorHash returns the hash of the genesis validator set
+- types: GenesisDoc.ValidatorHash returns the hash of the genesis validator set
 - types: GenesisDocFromFile parses a GenesiDoc from a JSON file

 IMPROVEMENTS:

@@ -96,7 +195,7 @@ Also includes the Grand Repo-Merge of 2017.

 BREAKING CHANGES:

 - Config and Flags:
-  - The `config` map is replaced with a [`Config` struct](https://github.com/tendermint/tendermint/blob/master/config/config.go#L11),
+  - The `config` map is replaced with a [`Config` struct](https://github.com/tendermint/tendermint/blob/master/config/config.go#L11),
 containing substructs: `BaseConfig`, `P2PConfig`, `MempoolConfig`, `ConsensusConfig`, `RPCConfig`
   - This affects the following flags:
     - `--seeds` is now `--p2p.seeds`

@@ -109,16 +208,16 @@ containing substructs: `BaseConfig`, `P2PConfig`, `MempoolConfig`, `ConsensusCon
 ```
 [p2p]
 laddr="tcp://1.2.3.4:46656"

 [consensus]
 timeout_propose=1000
 ```
 - Use viper and `DefaultConfig() / TestConfig()` functions to handle defaults, and remove `config/tendermint` and `config/tendermint_test`
-  - Change some function and method signatures to
+  - Change some function and method signatures to
   - Change some [function and method signatures](https://gist.github.com/ebuchman/640d5fc6c2605f73497992fe107ebe0b) accomodate new config

 - Logger
-  - Replace static `log15` logger with a simple interface, and provide a new implementation using `go-kit`.
+  - Replace static `log15` logger with a simple interface, and provide a new implementation using `go-kit`.
 See our new [logging library](https://github.com/tendermint/tmlibs/log) and [blog post](https://tendermint.com/blog/abstracting-the-logger-interface-in-go) for more details
   - Levels `warn` and `notice` are removed (you may need to change them in your `config.toml`!)
   - Change some [function and method signatures](https://gist.github.com/ebuchman/640d5fc6c2605f73497992fe107ebe0b) to accept a logger

@@ -161,7 +260,7 @@ IMPROVEMENTS:
 - Limit `/blockchain_info` call to return a maximum of 20 blocks
 - Use `.Wrap()` and `.Unwrap()` instead of eg. `PubKeyS` for `go-crypto` types
 - RPC JSON responses use pretty printing (via `json.MarshalIndent`)
-- Color code different instances of the consensus for tests
+- Color code different instances of the consensus for tests
 - Isolate viper to `cmd/tendermint/commands` and do not read config from file for tests

@@ -189,7 +288,7 @@ IMPROVEMENTS:
 - WAL uses #ENDHEIGHT instead of #HEIGHT (#HEIGHT will stop working in 0.10.0)
 - Peers included via `--seeds`, under `seeds` in the config, or in `/dial_seeds` are now persistent, and will be reconnected to if the connection breaks

-BUG FIXES:
+BUG FIXES:

 - Fix bug in fast-sync where we stop syncing after a peer is removed, even if they're re-added later
 - Fix handshake replay to handle validator set changes and results of DeliverTx when we crash after app.Commit but before state.Save()

@@ -205,7 +304,7 @@ message RequestQuery{
 	bytes data = 1;
 	string path = 2;
 	uint64 height = 3;
-	bool prove = 4;
+	bool prove = 4;
 }

 message ResponseQuery{

@@ -229,7 +328,7 @@ type BlockMeta struct {
 }
 ```

-- `ValidatorSet.Proposer` is exposed as a field and persisted with the `State`. Use `GetProposer()` to initialize or update after validator-set changes.
+- `ValidatorSet.Proposer` is exposed as a field and persisted with the `State`. Use `GetProposer()` to initialize or update after validator-set changes.

 - `tendermint gen_validator` command output is now pure JSON

@@ -272,7 +371,7 @@ type BlockID struct {
 }
 ```

-- `Vote` data type now includes validator address and index:
+- `Vote` data type now includes validator address and index:

 ```
 type Vote struct {

@@ -292,7 +391,7 @@ type Vote struct {

 FEATURES:

-- New message type on the ConsensusReactor, `Maj23Msg`, for peers to alert others they've seen a Maj23,
+- New message type on the ConsensusReactor, `Maj23Msg`, for peers to alert others they've seen a Maj23,
 in order to track and handle conflicting votes intelligently to prevent Byzantine faults from causing halts:

 ```

@@ -315,7 +414,7 @@ IMPROVEMENTS:
 - Less verbose logging
 - Better test coverage (37% -> 49%)
 - Canonical SignBytes for signable types
-- Write-Ahead Log for Mempool and Consensus via tmlibs/autofile
+- Write-Ahead Log for Mempool and Consensus via tmlibs/autofile
 - Better in-process testing for the consensus reactor and byzantine faults
 - Better crash/restart testing for individual nodes at preset failure points, and of networks at arbitrary points
 - Better abstraction over timeout mechanics

@@ -395,7 +494,7 @@ FEATURES:
 - TMSP and RPC support TCP and UNIX sockets
 - Addition config options including block size and consensus parameters
 - New WAL mode `cswal_light`; logs only the validator's own votes
-- New RPC endpoints:
+- New RPC endpoints:
   - for starting/stopping profilers, and for updating config
   - `/broadcast_tx_commit`, returns when tx is included in a block, else an error
   - `/unsafe_flush_mempool`, empties the mempool

@@ -416,14 +515,14 @@ BUG FIXES:

 Strict versioning only began with the release of v0.7.0, in late summer 2016.
 The project itself began in early summer 2014 and was workable decentralized cryptocurrency software by the end of that year.
-Through the course of 2015, in collaboration with Eris Industries (now Monax Indsutries),
+Through the course of 2015, in collaboration with Eris Industries (now Monax Indsutries),
 many additional features were integrated, including an implementation from scratch of the Ethereum Virtual Machine.
 That implementation now forms the heart of [Burrow](https://github.com/hyperledger/burrow).
 In the later half of 2015, the consensus algorithm was upgraded with a more asynchronous design and a more deterministic and robust implementation.

-By late 2015, frustration with the difficulty of forking a large monolithic stack to create alternative cryptocurrency designs led to the
+By late 2015, frustration with the difficulty of forking a large monolithic stack to create alternative cryptocurrency designs led to the
 invention of the Application Blockchain Interface (ABCI), then called the Tendermint Socket Protocol (TMSP).
 The Ethereum Virtual Machine and various other transaction features were removed, and Tendermint was whittled down to a core consensus engine
-driving an application running in another process.
+driving an application running in another process.
 The ABCI interface and implementation were iterated on and improved over the course of 2016,
 until versioned history kicked in with v0.7.0.
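One of the 0.13.0 breaking changes above ("block heights are now `int64` everywhere") shows up concretely in the blockchain pool hunks at the end of this compare, where `int` fields and signatures become `int64`. A minimal sketch of why the fixed-width type matters (illustrative only, not code from this repository): Go's `int` is 32 bits on 32-bit platforms, so a 32-bit height wraps where an `int64` keeps counting.

```
package main

import "fmt"

func main() {
	// A height held in a 32-bit integer wraps past 2^31 - 1;
	// the same value in int64 does not.
	var h32 int32 = 1<<31 - 1
	var h64 int64 = 1<<31 - 1
	fmt.Println(h32 + 1) // -2147483648 (overflow wraps)
	fmt.Println(h64 + 1) // 2147483648
}
```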
CONTRIBUTING.md
@@ -1,6 +1,6 @@
 # Contributing

-Thank you for considering making contributions to Tendermint and related repositories (Basecoin, Merkleeyes, etc.)!
+Thank you for considering making contributions to Tendermint and related repositories! Start by taking a look at the [coding repo](https://github.com/tendermint/coding) for overall information on repository workflow and standards.

 Please follow standard github best practices: fork the repo, branch from the tip of develop, make some commits, and submit a pull request to develop. See the [open issues](https://github.com/tendermint/tendermint/issues) for things we need help with!

@@ -8,9 +8,9 @@ Please make sure to use `gofmt` before every commit - the easiest way to do this

 ## Forking

-Please note that Go requires code to live under absolute paths, which complicates forking.
-While my fork lives at `https://github.com/ebuchman/tendermint`,
-the code should never exist at `$GOPATH/src/github.com/ebuchman/tendermint`.
+Please note that Go requires code to live under absolute paths, which complicates forking.
+While my fork lives at `https://github.com/ebuchman/tendermint`,
+the code should never exist at `$GOPATH/src/github.com/ebuchman/tendermint`.
 Instead, we use `git remote` to add the fork as a new remote for the original repo,
 `$GOPATH/src/github.com/tendermint/tendermint `, and do all the work there.

@@ -38,11 +38,22 @@ We use [glide](https://github.com/masterminds/glide) to manage dependencies.
 That said, the master branch of every Tendermint repository should just build with `go get`, which means they should be kept up-to-date with their dependencies so we can get away with telling people they can just `go get` our software.
 Since some dependencies are not under our control, a third party may break our build, in which case we can fall back on `glide install`. Even for dependencies under our control, glide helps us keeps multiple repos in sync as they evolve. Anything with an executable, such as apps, tools, and the core, should use glide.

-Run `bash scripts/glide/status.sh` to get a list of vendored dependencies that may not be up-to-date.
+Run `bash scripts/glide/status.sh` to get a list of vendored dependencies that may not be up-to-date.
+
+## Vagrant
+
+If you are a [Vagrant](https://www.vagrantup.com/) user, all you have to do to get started hacking Tendermint is:
+
+```
+vagrant up
+vagrant ssh
+cd ~/go/src/github.com/tendermint/tendermint
+make test
+```

 ## Testing

-All repos should be hooked up to circle.
+All repos should be hooked up to circle.
 If they have `.go` files in the root directory, they will be automatically tested by circle using `go test -v -race ./...`. If not, they will need a `circle.yml`. Ideally, every repo has a `Makefile` that defines `make test` and includes its continuous integration status using a badge in the `README.md`.

@@ -75,3 +86,15 @@ especially `go-p2p` and `go-rpc`, as their versions are referenced in tendermint
 - push to release-vX.X.X to run the extended integration tests on the CI
 - merge to master
 - merge master back to develop
+
+### Hotfix Procedure:
+- start on `master`
+- checkout a new branch named hotfix-vX.X.X
+- make the required changes
+  - these changes should be small and an absolute necessity
+  - add a note to CHANGELOG.md
+- bumb versions
+- push to hotfix-vX.X.X to run the extended integration tests on the CI
+- merge hotfix-vX.X.X to master
+- merge hotfix-vX.X.X to develop
+- delete the hotfix-vX.X.X branch
DOCKER/Dockerfile
@@ -1,8 +1,8 @@
 FROM alpine:3.6

 # This is the release of tendermint to pull in.
-ENV TM_VERSION 0.10.0
-ENV TM_SHA256SUM a29852b8d51c00db93c87c3d148fa419a047abd38f32b2507a905805131acc19
+ENV TM_VERSION 0.12.0
+ENV TM_SHA256SUM be17469e92f04fc2a3663f891da28edbaa6c37c4d2f746736571887f4790555a

 # Tendermint will be looking for genesis file in /tendermint (unless you change
 # `genesis_file` in config.toml). You can put your config.toml and private
DOCKER/README.md
@@ -1,6 +1,8 @@
 # Supported tags and respective `Dockerfile` links

-- `0.10.0`, `latest` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/e5342f4054ab784b2cd6150e14f01053d7c8deb2/DOCKER/Dockerfile)
+- `0.12.0`, `latest` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/70d8afa6e952e24c573ece345560a5971bf2cc0e/DOCKER/Dockerfile)
+- `0.11.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/9177cc1f64ca88a4a0243c5d1773d10fba67e201/DOCKER/Dockerfile)
+- `0.10.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/e5342f4054ab784b2cd6150e14f01053d7c8deb2/DOCKER/Dockerfile)
 - `0.9.1`, `0.9`, [(Dockerfile)](https://github.com/tendermint/tendermint/blob/809e0e8c5933604ba8b2d096803ada7c5ec4dfd3/DOCKER/Dockerfile)
 - `0.9.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/d474baeeea6c22b289e7402449572f7c89ee21da/DOCKER/Dockerfile)
 - `0.8.0`, `0.8` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/bf64dd21fdb193e54d8addaaaa2ecf7ac371de8c/DOCKER/Dockerfile)

@@ -8,13 +10,24 @@

 `develop` tag points to the [develop](https://github.com/tendermint/tendermint/tree/develop) branch.

+# Quick reference
+
+* **Where to get help:**
+  https://tendermint.com/community
+
+* **Where to file issues:**
+  https://github.com/tendermint/tendermint/issues
+
+* **Supported Docker versions:**
+  [the latest release](https://github.com/moby/moby/releases) (down to 1.6 on a best-effort basis)
+
 # Tendermint

 Tendermint Core is Byzantine Fault Tolerant (BFT) middleware that takes a state transition machine, written in any programming language, and securely replicates it on many machines.

-For more background, see the [introduction](https://tendermint.com/intro).
+For more background, see the [introduction](https://tendermint.readthedocs.io/en/master/introduction.html).

-To get started developing applications, see the [application developers guide](https://tendermint.com/docs/guides/app-development).
+To get started developing applications, see the [application developers guide](https://tendermint.readthedocs.io/en/master/getting-started.html).

 # How to use this image

@@ -31,26 +44,12 @@ docker run -it --rm -v "/tmp:/tendermint" tendermint/tendermint node --proxy_app

 If you want to see many containers talking to each other, consider using [mintnet-kubernetes](https://github.com/tendermint/tools/tree/master/mintnet-kubernetes), which is a tool for running Tendermint-based applications on a Kubernetes cluster.

-# Supported Docker versions
-
-This image is officially supported on Docker version 1.13.1.
-
-Support for older versions (down to 1.6) is provided on a best-effort basis.
-
-Please see [the Docker installation documentation](https://docs.docker.com/installation/) for details on how to upgrade your Docker daemon.
-
 # License

 View [license information](https://raw.githubusercontent.com/tendermint/tendermint/master/LICENSE) for the software contained in this image.

 # User Feedback

 ## Issues

 If you have any problems with or questions about this image, please contact us through a [GitHub](https://github.com/tendermint/tendermint/issues) issue. If the issue is related to a CVE, please check for [a `cve-tracker` issue on the `official-images` repository](https://github.com/docker-library/official-images/issues?q=label%3Acve-tracker) first.

 You can also reach the image maintainers via [Slack](http://forum.tendermint.com:3000/).

 ## Contributing

 You are invited to contribute new features, fixes, or updates, large or small; we are always thrilled to receive pull requests, and do our best to process them as fast as we can.
(deleted file)
@@ -1 +0,0 @@
-The installation guide has moved to the [docs directory](docs/guides/install-from-source.md) in order to easily be rendered by the website. Please update your links accordingly.
Makefile (81 lines changed)
@@ -1,30 +1,32 @@
 GOTOOLS = \
 	github.com/mitchellh/gox \
 	github.com/Masterminds/glide \
-	honnef.co/go/tools/cmd/megacheck
+	github.com/tcnksm/ghr \
+	github.com/alecthomas/gometalinter

 PACKAGES=$(shell go list ./... | grep -v '/vendor/')
 BUILD_TAGS?=tendermint
 TMHOME = $${TMHOME:-$$HOME/.tendermint}

-all: install test
+BUILD_FLAGS = -ldflags "-X github.com/tendermint/tendermint/version.GitCommit=`git rev-parse --short HEAD`"

-install: get_vendor_deps
-	@go install --ldflags '-extldflags "-static"' \
-		--ldflags "-X github.com/tendermint/tendermint/version.GitCommit=`git rev-parse HEAD`" ./cmd/tendermint
+all: get_vendor_deps install test
+
+install:
+	CGO_ENABLED=0 go install $(BUILD_FLAGS) ./cmd/tendermint

 build:
-	go build \
-		--ldflags "-X github.com/tendermint/tendermint/version.GitCommit=`git rev-parse HEAD`" -o build/tendermint ./cmd/tendermint/
+	CGO_ENABLED=0 go build $(BUILD_FLAGS) -o build/tendermint ./cmd/tendermint/

 build_race:
-	go build -race -o build/tendermint ./cmd/tendermint
+	CGO_ENABLED=0 go build -race $(BUILD_FLAGS) -o build/tendermint ./cmd/tendermint

 # dist builds binaries for all platforms and packages them for distribution
 dist:
 	@BUILD_TAGS='$(BUILD_TAGS)' sh -c "'$(CURDIR)/scripts/dist.sh'"

 test:
+	@echo "--> Running linter"
+	@make metalinter_test
 	@echo "--> Running go test"
 	@go test $(PACKAGES)

@@ -35,6 +37,9 @@ test_race:
 test_integrations:
 	@bash ./test/test.sh

+test_release:
+	@go test -tags release $(PACKAGES)
+
 test100:
 	@for i in {1..100}; do make test; done

@@ -55,29 +60,57 @@ get_deps:
 	grep -v /vendor/ | sort | uniq | \
 	xargs go get -v -d

-get_vendor_deps: ensure_tools
+update_deps:
+	@echo "--> Updating dependencies"
+	@go get -d -u ./...
+
+get_vendor_deps:
+	@hash glide 2>/dev/null || go get github.com/Masterminds/glide
 	@rm -rf vendor/
 	@echo "--> Running glide install"
 	@glide install

-update_deps: tools
-	@echo "--> Updating dependencies"
-	@go get -d -u ./...
-
-revision:
-	-echo `git rev-parse --verify HEAD` > $(TMHOME)/revision
-	-echo `git rev-parse --verify HEAD` >> $(TMHOME)/revision_history
+update_tools:
+	@echo "--> Updating tools"
+	@go get -u $(GOTOOLS)

 tools:
-	go get -u -v $(GOTOOLS)
-
-ensure_tools:
-	go get $(GOTOOLS)
+	@echo "--> Installing tools"
+	@go get $(GOTOOLS)
+	@gometalinter --install

 ### Formatting, linting, and vetting

-megacheck:
-	@for pkg in ${PACKAGES}; do megacheck "$$pkg"; done
+metalinter:
+	@gometalinter --vendor --deadline=600s --enable-all --disable=lll ./...
+
+metalinter_test:
+	@gometalinter --vendor --deadline=600s --disable-all \
+		--enable=deadcode \
+		--enable=misspell \
+		--enable=safesql \
+		./...

-.PHONY: install build build_race dist test test_race test_integrations test100 draw_deps list_deps get_deps get_vendor_deps update_deps revision tools
+	# --enable=gas \
+	#--enable=maligned \
+	#--enable=dupl \
+	#--enable=errcheck \
+	#--enable=goconst \
+	#--enable=gocyclo \
+	#--enable=goimports \
+	#--enable=golint \ <== comments on anything exported
+	#--enable=gosimple \
+	#--enable=gotype \
+	#--enable=ineffassign \
+	#--enable=interfacer \
+	#--enable=megacheck \
+	#--enable=staticcheck \
+	#--enable=structcheck \
+	#--enable=unconvert \
+	#--enable=unparam \
+	#--enable=unused \
+	#--enable=varcheck \
+	#--enable=vet \
+	#--enable=vetshadow \
+
+.PHONY: install build build_race dist test test_race test_integrations test100 draw_deps list_deps get_deps get_vendor_deps update_deps update_tools tools test_release
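The new `BUILD_FLAGS` variable above stamps the current git commit into the binary using the Go linker's `-X` flag, which overwrites a package-level string variable at link time. A sketch of the receiving side (assumed shape; the actual `version` package in the repository may differ):

```
// Package version carries build metadata. GitCommit is empty unless the
// binary is built with something like:
//
//	go build -ldflags "-X github.com/tendermint/tendermint/version.GitCommit=`git rev-parse --short HEAD`"
package version

// GitCommit is filled in by the linker; -X can only set
// package-level string variables.
var GitCommit string
```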
README.md (82 lines changed)
@@ -8,27 +8,23 @@ Or [Blockchain](https://en.wikipedia.org/wiki/Blockchain_(database)) for short.
 [](https://godoc.org/github.com/tendermint/tendermint)
 [](https://github.com/moovweb/gvm)
 [](https://cosmos.rocket.chat/)
 [](https://github.com/tendermint/tendermint/blob/master/LICENSE)
 [](https://github.com/tendermint/tendermint)

-Branch | Tests | Coverage | Report Card
----------|-------|----------|-------------
-develop | [](https://circleci.com/gh/tendermint/tendermint/tree/develop) | [](https://codecov.io/gh/tendermint/tendermint) | [](https://goreportcard.com/report/github.com/tendermint/tendermint/tree/develop)
-master | [](https://circleci.com/gh/tendermint/tendermint/tree/master) | [](https://codecov.io/gh/tendermint/tendermint) | [](https://goreportcard.com/report/github.com/tendermint/tendermint/tree/master)
+Branch | Tests | Coverage
+----------|-------|----------
+master | [](https://circleci.com/gh/tendermint/tendermint/tree/master) | [](https://codecov.io/gh/tendermint/tendermint)
+develop | [](https://circleci.com/gh/tendermint/tendermint/tree/develop) | [](https://codecov.io/gh/tendermint/tendermint)

 _NOTE: This is alpha software. Please contact us if you intend to run it in production._

-Tendermint Core is Byzantine Fault Tolerant (BFT) middleware that takes a state transition machine, written in any programming language,
+Tendermint Core is Byzantine Fault Tolerant (BFT) middleware that takes a state transition machine - written in any programming language -
 and securely replicates it on many machines.

-For more background, see the [introduction](https://tendermint.com/intro).
-
-To get started developing applications, see the [application developers guide](https://tendermint.com/docs/guides/app-development).
-
-### Code of Conduct
-Please read, understand and adhere to our [code of conduct](CODE_OF_CONDUCT.md).
+For more information, from introduction to install to application development, [Read The Docs](https://tendermint.readthedocs.io/en/master/).

 ## Install

@@ -38,39 +34,73 @@ To install from source, you should be able to:

 `go get -u github.com/tendermint/tendermint/cmd/tendermint`

-For more details (or if it fails), see the [install guide](https://tendermint.com/docs/guides/install-from-source).
-
-## Contributing
-
-Yay open source! Please see our [contributing guidelines](CONTRIBUTING.md).
+For more details (or if it fails), [read the docs](https://tendermint.readthedocs.io/en/master/install.html).

 ## Resources

 ### Tendermint Core

-- [Introduction](https://tendermint.com/intro)
-- [Docs](https://tendermint.com/docs)
-- [Software using Tendermint](https://tendermint.com/ecosystem)
+All resources involving the use of, building application on, or developing for, tendermint, can be found at [Read The Docs](https://tendermint.readthedocs.io/en/master/). Additional information about some - and eventually all - of the sub-projects below, can be found at Read The Docs.

 ### Sub-projects

 * [ABCI](http://github.com/tendermint/abci), the Application Blockchain Interface
 * [Go-Wire](http://github.com/tendermint/go-wire), a deterministic serialization library
 * [Go-Crypto](http://github.com/tendermint/go-crypto), an elliptic curve cryptography library
-* [TmLibs](http://github.com/tendermint/tmlibs), an assortment of Go libraries
-* [Merkleeyes](http://github.com/tendermint/merkleeyes), a balanced, binary Merkle tree for ABCI apps
+* [TmLibs](http://github.com/tendermint/tmlibs), an assortment of Go libraries used internally
+* [IAVL](http://github.com/tendermint/iavl), Merkleized IAVL+ Tree implementation

 ### Tools
-* [Deployment, Benchmarking, and Monitoring](https://github.com/tendermint/tools)
+* [Deployment, Benchmarking, and Monitoring](http://tendermint.readthedocs.io/projects/tools/en/develop/index.html#tendermint-tools)

 ### Applications

-* [Ethermint](http://github.com/tendermint/ethermint): Ethereum on Tendermint
-* [Basecoin](http://github.com/tendermint/basecoin), a cryptocurrency application framework
+* [Ethermint](http://github.com/tendermint/ethermint); Ethereum on Tendermint
+* [Cosmos SDK](http://github.com/cosmos/cosmos-sdk); a cryptocurrency application framework
+* [Many more](https://tendermint.readthedocs.io/en/master/ecosystem.html#abci-applications)

 ### More

-* [Master's Thesis on Tendermint](https://atrium.lib.uoguelph.ca/xmlui/handle/10214/9769)
-* [Original Whitepaper](https://tendermint.com/static/docs/tendermint.pdf)
 * [Tendermint Blog](https://blog.cosmos.network/tendermint/home)
+* [Cosmos Blog](https://blog.cosmos.network)
+* [Original Whitepaper (out-of-date)](http://www.the-blockchain.com/docs/Tendermint%20Consensus%20without%20Mining.pdf)
+* [Master's Thesis on Tendermint](https://atrium.lib.uoguelph.ca/xmlui/handle/10214/9769)
+
+## Contributing
+
+Yay open source! Please see our [contributing guidelines](CONTRIBUTING.md).
+
+## Versioning
+
+### SemVer
+
+Tendermint uses [SemVer](http://semver.org/) to determine when and how the version changes.
+According to SemVer, anything in the public API can change at any time before version 1.0.0
+
+To provide some stability to Tendermint users in these 0.X.X days, the MINOR version is used
+to signal breaking changes across a subset of the total public API. This subset includes all
+interfaces exposed to other processes (cli, rpc, p2p, etc.), as well as parts of the following packages:
+
+- types
+- rpc/client
+- config
+- node
+
+Exported objects in these packages that are not covered by the versioning scheme
+are explicitly marked by `// UNSTABLE` in their go doc comment and may change at any time.
+Functions, types, and values in any other package may also change at any time.
+
+### Upgrades
+
+In an effort to avoid accumulating technical debt prior to 1.0.0,
+we do not guarantee that breaking changes (ie. bumps in the MINOR version)
+will work with existing tendermint blockchains. In these cases you will
+have to start a new blockchain, or write something custom to get the old
+data into the new chain.
+
+However, any bump in the PATCH version should be compatible with existing histories
+(if not please open an [issue](https://github.com/tendermint/tendermint/issues)).
+
+## Code of Conduct
+
+Please read, understand and adhere to our [code of conduct](CODE_OF_CONDUCT.md).
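The Versioning section added above says exports outside the SemVer guarantee are "explicitly marked by `// UNSTABLE` in their go doc comment". As a sketch, the marker is purely a doc-comment convention (hypothetical type, not from the repository):

```
package types

// ExampleState is exported for use by other packages in this repo.
// UNSTABLE
// This type is not covered by the MINOR-version guarantees described
// in the README and may change shape in any release.
type ExampleState struct {
	Height int64
	Round  int
}
```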
Vagrantfile (vendored, 7 lines changed)
@@ -17,11 +17,12 @@ Vagrant.configure("2") do |config|
     usermod -a -G docker vagrant
     apt-get autoremove -y

-    curl -O https://storage.googleapis.com/golang/go1.8.linux-amd64.tar.gz
-    tar -xvf go1.8.linux-amd64.tar.gz
+    apt-get install -y --no-install-recommends git
+
+    curl -O https://storage.googleapis.com/golang/go1.9.linux-amd64.tar.gz
+    tar -xvf go1.9.linux-amd64.tar.gz
     rm -rf /usr/local/go
     mv go /usr/local
-    rm -f go1.8.linux-amd64.tar.gz
+    rm -f go1.9.linux-amd64.tar.gz
     mkdir -p /home/vagrant/go/bin
     echo 'export PATH=$PATH:/usr/local/go/bin:/home/vagrant/go/bin' >> /home/vagrant/.bash_profile
     echo 'export GOPATH=/home/vagrant/go' >> /home/vagrant/.bash_profile
benchmarks/blockchain/.gitignore (vendored, new file, 2 lines)
@@ -0,0 +1,2 @@
+data
benchmarks/blockchain/localsync.sh (new executable file, 80 lines)
@@ -0,0 +1,80 @@
+#!/bin/bash
+
+DATA=$GOPATH/src/github.com/tendermint/tendermint/benchmarks/blockchain/data
+if [ ! -d $DATA ]; then
+  echo "no data found, generating a chain... (this only has to happen once)"
+
+  tendermint init --home $DATA
+  cp $DATA/config.toml $DATA/config2.toml
+  echo "
+  [consensus]
+  timeout_commit = 0
+  " >> $DATA/config.toml
+
+  echo "starting node"
+  tendermint node \
+    --home $DATA \
+    --proxy_app dummy \
+    --p2p.laddr tcp://127.0.0.1:56656 \
+    --rpc.laddr tcp://127.0.0.1:56657 \
+    --log_level error &
+
+  echo "making blocks for 60s"
+  sleep 60
+
+  mv $DATA/config2.toml $DATA/config.toml
+
+  kill %1
+
+  echo "done generating chain."
+fi
+
+# validator node
+HOME1=$TMPDIR$RANDOM$RANDOM
+cp -R $DATA $HOME1
+echo "starting validator node"
+tendermint node \
+  --home $HOME1 \
+  --proxy_app dummy \
+  --p2p.laddr tcp://127.0.0.1:56656 \
+  --rpc.laddr tcp://127.0.0.1:56657 \
+  --log_level error &
+sleep 1
+
+# downloader node
+HOME2=$TMPDIR$RANDOM$RANDOM
+tendermint init --home $HOME2
+cp $HOME1/genesis.json $HOME2
+printf "starting downloader node"
+tendermint node \
+  --home $HOME2 \
+  --proxy_app dummy \
+  --p2p.laddr tcp://127.0.0.1:56666 \
+  --rpc.laddr tcp://127.0.0.1:56667 \
+  --p2p.seeds 127.0.0.1:56656 \
+  --log_level error &
+
+# wait for node to start up so we only count time where we are actually syncing
+sleep 0.5
+while curl localhost:56667/status 2> /dev/null | grep "\"latest_block_height\": 0," > /dev/null
+do
+  printf '.'
+  sleep 0.2
+done
+echo
+
+echo "syncing blockchain for 10s"
+for i in {1..10}
+do
+  sleep 1
+  HEIGHT="$(curl localhost:56667/status 2> /dev/null \
+    | grep 'latest_block_height' \
+    | grep -o ' [0-9]*' \
+    | xargs)"
+  let 'RATE = HEIGHT / i'
+  echo "height: $HEIGHT, blocks/sec: $RATE"
+done
+
+kill %1
+kill %2
+rm -rf $HOME1 $HOME2
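localsync.sh decides whether the downloader node has started by curling the RPC `/status` endpoint and grepping the JSON for `latest_block_height`. The same probe sketched in Go with only the standard library; the struct keeps just the one field the script greps for, and the exact nesting of the JSON-RPC envelope is an assumption that varies across Tendermint versions:

```
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// statusResponse models only the field the shell script cares about.
type statusResponse struct {
	Result struct {
		LatestBlockHeight int64 `json:"latest_block_height"`
	} `json:"result"`
}

func main() {
	resp, err := http.Get("http://localhost:56667/status")
	if err != nil {
		fmt.Println("node not reachable yet:", err)
		return
	}
	defer resp.Body.Close()

	var s statusResponse
	if err := json.NewDecoder(resp.Body).Decode(&s); err != nil {
		fmt.Println("could not decode status:", err)
		return
	}
	fmt.Println("latest block height:", s.Result.LatestBlockHeight)
}
```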
@@ -2,11 +2,13 @@ package benchmarks

 import (
 	"testing"
+	"time"

 	"github.com/tendermint/go-crypto"
-	"github.com/tendermint/tendermint/p2p"
 	"github.com/tendermint/go-wire"
+
 	proto "github.com/tendermint/tendermint/benchmarks/proto"
+	"github.com/tendermint/tendermint/p2p"
 	ctypes "github.com/tendermint/tendermint/rpc/core/types"
 )

@@ -26,7 +28,7 @@ func BenchmarkEncodeStatusWire(b *testing.B) {
 		PubKey:            pubKey,
 		LatestBlockHash:   []byte("SOMEBYTES"),
 		LatestBlockHeight: 123,
-		LatestBlockTime:   1234,
+		LatestBlockTime:   time.Unix(0, 1234),
 	}
 	b.StartTimer()
@@ -1,8 +1,9 @@
 package benchmarks

 import (
-	. "github.com/tendermint/tmlibs/common"
 	"testing"
+
+	cmn "github.com/tendermint/tmlibs/common"
 )

 func BenchmarkSomething(b *testing.B) {

@@ -11,11 +12,11 @@ func BenchmarkSomething(b *testing.B) {
 	numChecks := 100000
 	keys := make([]string, numItems)
 	for i := 0; i < numItems; i++ {
-		keys[i] = RandStr(100)
+		keys[i] = cmn.RandStr(100)
 	}
 	txs := make([]string, numChecks)
 	for i := 0; i < numChecks; i++ {
-		txs[i] = RandStr(100)
+		txs[i] = cmn.RandStr(100)
 	}
 	b.StartTimer()
@@ -4,7 +4,7 @@ import (
 	"os"
 	"testing"

-	. "github.com/tendermint/tmlibs/common"
+	cmn "github.com/tendermint/tmlibs/common"
 )

 func BenchmarkFileWrite(b *testing.B) {

@@ -14,16 +14,20 @@ func BenchmarkFileWrite(b *testing.B) {
 	if err != nil {
 		b.Error(err)
 	}
-	testString := RandStr(200) + "\n"
+	testString := cmn.RandStr(200) + "\n"
 	b.StartTimer()

 	for i := 0; i < b.N; i++ {
-		file.Write([]byte(testString))
+		_, err := file.Write([]byte(testString))
+		if err != nil {
+			b.Error(err)
+		}
 	}

-	file.Close()
-	err = os.Remove("benchmark_file_write.out")
-	if err != nil {
+	if err := file.Close(); err != nil {
+		b.Error(err)
+	}
+	if err := os.Remove("benchmark_file_write.out"); err != nil {
 		b.Error(err)
 	}
 }
@@ -24,9 +24,6 @@ import bytes "bytes"

 import strings "strings"
 import github_com_gogo_protobuf_proto "github.com/gogo/protobuf/proto"
-import sort "sort"
-import strconv "strconv"
-import reflect "reflect"

 import io "io"

@@ -392,31 +389,6 @@ func (this *PubKeyEd25519) GoString() string {
 	s = append(s, "}")
 	return strings.Join(s, "")
 }
-func valueToGoStringTest(v interface{}, typ string) string {
-	rv := reflect.ValueOf(v)
-	if rv.IsNil() {
-		return "nil"
-	}
-	pv := reflect.Indirect(rv).Interface()
-	return fmt.Sprintf("func(v %v) *%v { return &v } ( %#v )", typ, typ, pv)
-}
-func extensionToGoStringTest(e map[int32]github_com_gogo_protobuf_proto.Extension) string {
-	if e == nil {
-		return "nil"
-	}
-	s := "map[int32]proto.Extension{"
-	keys := make([]int, 0, len(e))
-	for k := range e {
-		keys = append(keys, int(k))
-	}
-	sort.Ints(keys)
-	ss := []string{}
-	for _, k := range keys {
-		ss = append(ss, strconv.Itoa(k)+": "+e[int32(k)].GoString())
-	}
-	s += strings.Join(ss, ",") + "}"
-	return s
-}
 func (m *ResultStatus) Marshal() (data []byte, err error) {
 	size := m.Size()
 	data = make([]byte, size)

@@ -586,24 +558,6 @@ func (m *PubKeyEd25519) MarshalTo(data []byte) (int, error) {
 	return i, nil
 }

-func encodeFixed64Test(data []byte, offset int, v uint64) int {
-	data[offset] = uint8(v)
-	data[offset+1] = uint8(v >> 8)
-	data[offset+2] = uint8(v >> 16)
-	data[offset+3] = uint8(v >> 24)
-	data[offset+4] = uint8(v >> 32)
-	data[offset+5] = uint8(v >> 40)
-	data[offset+6] = uint8(v >> 48)
-	data[offset+7] = uint8(v >> 56)
-	return offset + 8
-}
-func encodeFixed32Test(data []byte, offset int, v uint32) int {
-	data[offset] = uint8(v)
-	data[offset+1] = uint8(v >> 8)
-	data[offset+2] = uint8(v >> 16)
-	data[offset+3] = uint8(v >> 24)
-	return offset + 4
-}
 func encodeVarintTest(data []byte, offset int, v uint64) int {
 	for v >= 1<<7 {
 		data[offset] = uint8(v&0x7f | 0x80)

@@ -689,9 +643,6 @@ func sovTest(x uint64) (n int) {
 	}
 	return n
 }
-func sozTest(x uint64) (n int) {
-	return sovTest(uint64((x << 1) ^ uint64((int64(x) >> 63))))
-}
 func (this *ResultStatus) String() string {
 	if this == nil {
 		return "nil"

@@ -742,14 +693,6 @@ func (this *PubKeyEd25519) String() string {
 	}, "")
 	return s
 }
-func valueToStringTest(v interface{}) string {
-	rv := reflect.ValueOf(v)
-	if rv.IsNil() {
-		return "nil"
-	}
-	pv := reflect.Indirect(rv).Interface()
-	return fmt.Sprintf("*%v", pv)
-}
 func (m *ResultStatus) Unmarshal(data []byte) error {
 	var hasFields [1]uint64
 	l := len(data)
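The `encodeFixed64Test` and `encodeFixed32Test` helpers deleted above wrote integers byte by byte in little-endian order; the regenerated gogo/protobuf code simply stopped emitting them. The standard library expresses the same operation directly (equivalent sketch, not part of the generated file):

```
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	buf := make([]byte, 12)
	// What encodeFixed64Test did with manual shifts:
	binary.LittleEndian.PutUint64(buf[0:8], 0x1122334455667788)
	// ...and encodeFixed32Test:
	binary.LittleEndian.PutUint32(buf[8:12], 0x99aabbcc)
	fmt.Printf("% x\n", buf) // 88 77 66 55 44 33 22 11 cc bb aa 99
}
```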
@@ -3,9 +3,8 @@ package main
 import (
 	"context"
 	"encoding/binary"
-	"time"
 	//"encoding/hex"
 	"fmt"
+	"time"

 	rpcclient "github.com/tendermint/tendermint/rpc/lib/client"
 	cmn "github.com/tendermint/tmlibs/common"

@@ -13,7 +12,7 @@ import (

 func main() {
 	wsc := rpcclient.NewWSClient("127.0.0.1:46657", "/websocket")
-	_, err := wsc.Start()
+	err := wsc.Start()
 	if err != nil {
 		cmn.Exit(err.Error())
 	}

@@ -22,7 +21,7 @@ func main() {
 	// Read a bunch of responses
 	go func() {
 		for {
-			_, ok := <-wsc.ResultsCh
+			_, ok := <-wsc.ResponsesCh
 			if !ok {
 				break
 			}
@@ -6,16 +6,30 @@ import (
 	"time"

 	"github.com/tendermint/tendermint/types"
-	. "github.com/tendermint/tmlibs/common"
+	cmn "github.com/tendermint/tmlibs/common"
 	flow "github.com/tendermint/tmlibs/flowrate"
 	"github.com/tendermint/tmlibs/log"
 )

+/*
+eg, L = latency = 0.1s
+	P = num peers = 10
+	FN = num full nodes
+	BS = 1kB block size
+	CB = 1 Mbit/s = 128 kB/s
+	CB/P = 12.8 kB
+	B/S = CB/P/BS = 12.8 blocks/s
+
+	12.8 * 0.1 = 1.28 blocks on conn
+*/
+
 const (
-	requestIntervalMS         = 250
-	maxTotalRequesters        = 300
+	requestIntervalMS         = 100
+	maxTotalRequesters        = 1000
 	maxPendingRequests        = maxTotalRequesters
-	maxPendingRequestsPerPeer = 75
+	maxPendingRequestsPerPeer = 50
 	minRecvRate               = 10240 // 10Kb/s
 )
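The comment added above the const block is a back-of-envelope sync model: a peer's share of bandwidth CB/P divided by block size BS gives blocks per second, and multiplying by latency L gives blocks in flight per connection. The same arithmetic written out (numbers taken from the comment itself):

```
package main

import "fmt"

func main() {
	const (
		latency   = 0.1   // L, seconds
		peers     = 10.0  // P
		blockSize = 1.0   // BS, kB
		bandwidth = 128.0 // CB, kB/s (1 Mbit/s)
	)
	perPeer := bandwidth / peers        // 12.8 kB/s from each peer
	blocksPerSec := perPeer / blockSize // 12.8 blocks/s
	inFlight := blocksPerSec * latency  // ~1.28 blocks on the wire per connection
	fmt.Println(blocksPerSec, inFlight)
}
```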
@@ -33,33 +47,34 @@ var peerTimeoutSeconds = time.Duration(15) // not const so we can override with
|
||||
*/
|
||||
|
||||
type BlockPool struct {
|
||||
BaseService
|
||||
cmn.BaseService
|
||||
startTime time.Time
|
||||
|
||||
mtx sync.Mutex
|
||||
// block requests
|
||||
requesters map[int]*bpRequester
|
||||
height int // the lowest key in requesters.
|
||||
requesters map[int64]*bpRequester
|
||||
height int64 // the lowest key in requesters.
|
||||
numPending int32 // number of requests pending assignment or block response
|
||||
// peers
|
||||
peers map[string]*bpPeer
|
||||
peers map[string]*bpPeer
|
||||
maxPeerHeight int64
|
||||
|
||||
requestsCh chan<- BlockRequest
|
||||
timeoutsCh chan<- string
|
||||
}
|
||||
|
||||
func NewBlockPool(start int, requestsCh chan<- BlockRequest, timeoutsCh chan<- string) *BlockPool {
|
||||
func NewBlockPool(start int64, requestsCh chan<- BlockRequest, timeoutsCh chan<- string) *BlockPool {
|
||||
bp := &BlockPool{
|
||||
peers: make(map[string]*bpPeer),
|
||||
|
||||
requesters: make(map[int]*bpRequester),
|
||||
requesters: make(map[int64]*bpRequester),
|
||||
height: start,
|
||||
numPending: 0,
|
||||
|
||||
requestsCh: requestsCh,
|
||||
timeoutsCh: timeoutsCh,
|
||||
}
|
||||
bp.BaseService = *NewBaseService(nil, "BlockPool", bp)
|
||||
bp.BaseService = *cmn.NewBaseService(nil, "BlockPool", bp)
|
||||
return bp
|
||||
}
|
||||
|
||||
@@ -69,16 +84,16 @@ func (pool *BlockPool) OnStart() error {
	return nil
}

func (pool *BlockPool) OnStop() {
	pool.BaseService.OnStop()
}
func (pool *BlockPool) OnStop() {}

// Run spawns requesters as needed.
func (pool *BlockPool) makeRequestersRoutine() {

	for {
		if !pool.IsRunning() {
			break
		}

		_, numPending, lenRequesters := pool.GetStatus()
		if numPending >= maxPendingRequests {
			// sleep for a bit.
@@ -117,7 +132,7 @@ func (pool *BlockPool) removeTimedoutPeers() {
	}
}

func (pool *BlockPool) GetStatus() (height int, numPending int32, lenRequesters int) {
func (pool *BlockPool) GetStatus() (height int64, numPending int32, lenRequesters int) {
	pool.mtx.Lock()
	defer pool.mtx.Unlock()

@@ -135,16 +150,10 @@ func (pool *BlockPool) IsCaughtUp() bool {
		return false
	}

	maxPeerHeight := 0
	for _, peer := range pool.peers {
		maxPeerHeight = MaxInt(maxPeerHeight, peer.height)
	}

	// some conditions to determine if we're caught up
	receivedBlockOrTimedOut := (pool.height > 0 || time.Since(pool.startTime) > 5*time.Second)
	ourChainIsLongestAmongPeers := maxPeerHeight == 0 || pool.height >= maxPeerHeight
	ourChainIsLongestAmongPeers := pool.maxPeerHeight == 0 || pool.height >= pool.maxPeerHeight
	isCaughtUp := receivedBlockOrTimedOut && ourChainIsLongestAmongPeers
	pool.Logger.Info(Fmt("IsCaughtUp: %v", isCaughtUp), "height", pool.height, "maxPeerHeight", maxPeerHeight)
	return isCaughtUp
}
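The IsCaughtUp change replaces the per-call scan over pool.peers with the cached pool.maxPeerHeight field maintained by SetPeerHeight further down. The predicate itself is worth spelling out on its own; a hedged sketch (field names mirror the diff, the standalone function is illustrative):

package main

import (
	"fmt"
	"time"
)

// caughtUp mirrors the two conditions IsCaughtUp checks after the diff:
// we made some progress (or waited out the startup grace period), and
// no peer reports a taller chain than ours.
func caughtUp(height, maxPeerHeight int64, startTime time.Time) bool {
	receivedBlockOrTimedOut := height > 0 || time.Since(startTime) > 5*time.Second
	ourChainIsLongestAmongPeers := maxPeerHeight == 0 || height >= maxPeerHeight
	return receivedBlockOrTimedOut && ourChainIsLongestAmongPeers
}

func main() {
	start := time.Now()
	fmt.Println(caughtUp(100, 100, start)) // true: at the peers' best height
	fmt.Println(caughtUp(50, 100, start))  // false: peers are 50 blocks ahead
}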
@@ -180,23 +189,24 @@ func (pool *BlockPool) PopRequest() {
		delete(pool.requesters, pool.height)
		pool.height++
	} else {
		PanicSanity(Fmt("Expected requester to pop, got nothing at height %v", pool.height))
		cmn.PanicSanity(cmn.Fmt("Expected requester to pop, got nothing at height %v", pool.height))
	}
}

// Invalidates the block at pool.height,
// removes the peer, and redoes the request from others.
func (pool *BlockPool) RedoRequest(height int) {
func (pool *BlockPool) RedoRequest(height int64) {
	pool.mtx.Lock()
	defer pool.mtx.Unlock()

	request := pool.requesters[height]
	pool.mtx.Unlock()

	if request.block == nil {
		PanicSanity("Expected block to be non-nil")
		cmn.PanicSanity("Expected block to be non-nil")
	}
	// RemovePeer will redo all requesters associated with this peer.
	// TODO: record this malfeasance
	pool.RemovePeer(request.peerID)
	pool.removePeer(request.peerID)
}

// TODO: ensure that blocks come in order for each peer.
@@ -206,20 +216,31 @@ func (pool *BlockPool) AddBlock(peerID string, block *types.Block, blockSize int

	requester := pool.requesters[block.Height]
	if requester == nil {
		// a block we didn't expect.
		// TODO: if height is too far ahead, punish peer
		return
	}

	if requester.setBlock(block, peerID) {
		pool.numPending--
		peer := pool.peers[peerID]
		peer.decrPending(blockSize)
		if peer != nil {
			peer.decrPending(blockSize)
		}
	} else {
		// Bad peer?
	}
}

// MaxPeerHeight returns the highest height reported by a peer.
func (pool *BlockPool) MaxPeerHeight() int64 {
	pool.mtx.Lock()
	defer pool.mtx.Unlock()
	return pool.maxPeerHeight
}
// Sets the peer's alleged blockchain height.
func (pool *BlockPool) SetPeerHeight(peerID string, height int) {
func (pool *BlockPool) SetPeerHeight(peerID string, height int64) {
	pool.mtx.Lock()
	defer pool.mtx.Unlock()

@@ -231,6 +252,10 @@ func (pool *BlockPool) SetPeerHeight(peerID string, height int) {
		peer.setLogger(pool.Logger.With("peer", peerID))
		pool.peers[peerID] = peer
	}

	if height > pool.maxPeerHeight {
		pool.maxPeerHeight = height
	}
}

func (pool *BlockPool) RemovePeer(peerID string) {
@@ -254,7 +279,7 @@ func (pool *BlockPool) removePeer(peerID string) {

// Pick an available peer with at least the given minHeight.
// If no peers are available, returns nil.
func (pool *BlockPool) pickIncrAvailablePeer(minHeight int) *bpPeer {
func (pool *BlockPool) pickIncrAvailablePeer(minHeight int64) *bpPeer {
	pool.mtx.Lock()
	defer pool.mtx.Unlock()

@@ -279,17 +304,24 @@ func (pool *BlockPool) makeNextRequester() {
	pool.mtx.Lock()
	defer pool.mtx.Unlock()

	nextHeight := pool.height + len(pool.requesters)
	nextHeight := pool.height + pool.requestersLen()
	request := newBPRequester(pool, nextHeight)
	request.SetLogger(pool.Logger.With("height", nextHeight))
	// request.SetLogger(pool.Logger.With("height", nextHeight))

	pool.requesters[nextHeight] = request
	pool.numPending++

	request.Start()
	err := request.Start()
	if err != nil {
		request.Logger.Error("Error starting request", "err", err)
	}
}
func (pool *BlockPool) sendRequest(height int, peerID string) {
func (pool *BlockPool) requestersLen() int64 {
	return int64(len(pool.requesters))
}

func (pool *BlockPool) sendRequest(height int64, peerID string) {
	if !pool.IsRunning() {
		return
	}
@@ -309,12 +341,13 @@ func (pool *BlockPool) debug() string {
	defer pool.mtx.Unlock()

	str := ""
	for h := pool.height; h < pool.height+len(pool.requesters); h++ {
	nextHeight := pool.height + pool.requestersLen()
	for h := pool.height; h < nextHeight; h++ {
		if pool.requesters[h] == nil {
			str += Fmt("H(%v):X ", h)
			str += cmn.Fmt("H(%v):X ", h)
		} else {
			str += Fmt("H(%v):", h)
			str += Fmt("B?(%v) ", pool.requesters[h].block != nil)
			str += cmn.Fmt("H(%v):", h)
			str += cmn.Fmt("B?(%v) ", pool.requesters[h].block != nil)
		}
	}
	return str
@@ -327,7 +360,7 @@ type bpPeer struct {
	id string
	recvMonitor *flow.Monitor

	height int
	height int64
	numPending int32
	timeout *time.Timer
	didTimeout bool
@@ -335,7 +368,7 @@ type bpPeer struct {
	logger log.Logger
}

func newBPPeer(pool *BlockPool, peerID string, height int) *bpPeer {
func newBPPeer(pool *BlockPool, peerID string, height int64) *bpPeer {
	peer := &bpPeer{
		pool: pool,
		id: peerID,
@@ -352,7 +385,7 @@ func (peer *bpPeer) setLogger(l log.Logger) {

func (peer *bpPeer) resetMonitor() {
	peer.recvMonitor = flow.New(time.Second, time.Second*40)
	var initialValue = float64(minRecvRate) * math.E
	initialValue := float64(minRecvRate) * math.E
	peer.recvMonitor.SetREMA(initialValue)
}

@@ -394,9 +427,9 @@ func (peer *bpPeer) onTimeout() {
//-------------------------------------

type bpRequester struct {
	BaseService
	cmn.BaseService
	pool *BlockPool
	height int
	height int64
	gotBlockCh chan struct{}
	redoCh chan struct{}

@@ -405,7 +438,7 @@ type bpRequester struct {
	block *types.Block
}

func newBPRequester(pool *BlockPool, height int) *bpRequester {
func newBPRequester(pool *BlockPool, height int64) *bpRequester {
	bpr := &bpRequester{
		pool: pool,
		height: height,
@@ -415,7 +448,7 @@ func newBPRequester(pool *BlockPool, height int) *bpRequester {
		peerID: "",
		block: nil,
	}
	bpr.BaseService = *NewBaseService(nil, "bpRequester", bpr)
	bpr.BaseService = *cmn.NewBaseService(nil, "bpRequester", bpr)
	return bpr
}

@@ -517,6 +550,6 @@ OUTER_LOOP:
//-------------------------------------

type BlockRequest struct {
	Height int
	Height int64
	PeerID string
}
@@ -6,7 +6,7 @@ import (
	"time"

	"github.com/tendermint/tendermint/types"
	. "github.com/tendermint/tmlibs/common"
	cmn "github.com/tendermint/tmlibs/common"
	"github.com/tendermint/tmlibs/log"
)

@@ -16,27 +16,32 @@ func init() {

type testPeer struct {
	id string
	height int
	height int64
}

func makePeers(numPeers int, minHeight, maxHeight int) map[string]testPeer {
func makePeers(numPeers int, minHeight, maxHeight int64) map[string]testPeer {
	peers := make(map[string]testPeer, numPeers)
	for i := 0; i < numPeers; i++ {
		peerID := RandStr(12)
		height := minHeight + rand.Intn(maxHeight-minHeight)
		peerID := cmn.RandStr(12)
		height := minHeight + rand.Int63n(maxHeight-minHeight)
		peers[peerID] = testPeer{peerID, height}
	}
	return peers
}

func TestBasic(t *testing.T) {
	start := 42
	start := int64(42)
	peers := makePeers(10, start+1, 1000)
	timeoutsCh := make(chan string, 100)
	requestsCh := make(chan BlockRequest, 100)
	pool := NewBlockPool(start, requestsCh, timeoutsCh)
	pool.SetLogger(log.TestingLogger())
	pool.Start()

	err := pool.Start()
	if err != nil {
		t.Error(err)
	}

	defer pool.Stop()

	// Introduce each peer.
@@ -82,13 +87,16 @@ func TestBasic(t *testing.T) {
}

func TestTimeout(t *testing.T) {
	start := 42
	start := int64(42)
	peers := makePeers(10, start+1, 1000)
	timeoutsCh := make(chan string, 100)
	requestsCh := make(chan BlockRequest, 100)
	pool := NewBlockPool(start, requestsCh, timeoutsCh)
	pool.SetLogger(log.TestingLogger())
	pool.Start()
	err := pool.Start()
	if err != nil {
		t.Error(err)
	}
	defer pool.Stop()

	for _, peer := range peers {
@@ -12,14 +12,15 @@ import (
	sm "github.com/tendermint/tendermint/state"
	"github.com/tendermint/tendermint/types"
	cmn "github.com/tendermint/tmlibs/common"
	"github.com/tendermint/tmlibs/log"
)

const (
	// BlockchainChannel is a channel for blocks and status updates (`BlockStore` height)
	BlockchainChannel = byte(0x40)

	defaultChannelCapacity = 100
	trySyncIntervalMS = 100
	defaultChannelCapacity = 1000
	trySyncIntervalMS = 50
	// stop syncing when last block's time is
	// within this much of the system time.
	// stopSyncingDurationMinutes = 10
@@ -28,13 +29,12 @@ const (
	statusUpdateIntervalSeconds = 10
	// check if we should switch to consensus reactor
	switchToConsensusIntervalSeconds = 1
	maxBlockchainResponseSize = types.MaxBlockSize + 2
)

type consensusReactor interface {
	// for when we switch from blockchain reactor and fast sync to
	// the consensus machine
	SwitchToConsensus(*sm.State)
	SwitchToConsensus(*sm.State, int)
}

// BlockchainReactor handles long-term catchup syncing.
@@ -49,14 +49,11 @@ type BlockchainReactor struct {
	requestsCh chan BlockRequest
	timeoutsCh chan string

	evsw types.EventSwitch
	eventBus *types.EventBus
}

// NewBlockchainReactor returns new reactor instance.
func NewBlockchainReactor(state *sm.State, proxyAppConn proxy.AppConnConsensus, store *BlockStore, fastSync bool) *BlockchainReactor {
	if state.LastBlockHeight == store.Height()-1 {
		store.height-- // XXX HACK, make this better
	}
	if state.LastBlockHeight != store.Height() {
		cmn.PanicSanity(cmn.Fmt("state (%v) and store (%v) height mismatch", state.LastBlockHeight, store.Height()))
	}
@@ -80,11 +77,19 @@ func NewBlockchainReactor(state *sm.State, proxyAppConn proxy.AppConnConsensus,
	return bcR
}

// OnStart implements BaseService
// SetLogger implements cmn.Service by setting the logger on reactor and pool.
func (bcR *BlockchainReactor) SetLogger(l log.Logger) {
	bcR.BaseService.Logger = l
	bcR.pool.Logger = l
}

// OnStart implements cmn.Service.
func (bcR *BlockchainReactor) OnStart() error {
	bcR.BaseReactor.OnStart()
	if err := bcR.BaseReactor.OnStart(); err != nil {
		return err
	}
	if bcR.fastSync {
		_, err := bcR.pool.Start()
		err := bcR.pool.Start()
		if err != nil {
			return err
		}
@@ -93,7 +98,7 @@ func (bcR *BlockchainReactor) OnStart() error {
	return nil
}

// OnStop implements BaseService
// OnStop implements cmn.Service.
func (bcR *BlockchainReactor) OnStop() {
	bcR.BaseReactor.OnStop()
	bcR.pool.Stop()
@@ -102,29 +107,49 @@ func (bcR *BlockchainReactor) OnStop() {

// GetChannels implements Reactor
func (bcR *BlockchainReactor) GetChannels() []*p2p.ChannelDescriptor {
	return []*p2p.ChannelDescriptor{
		&p2p.ChannelDescriptor{
		{
			ID: BlockchainChannel,
			Priority: 5,
			SendQueueCapacity: 100,
			Priority: 10,
			SendQueueCapacity: 1000,
		},
	}
}

// AddPeer implements Reactor by sending our state to peer.
func (bcR *BlockchainReactor) AddPeer(peer *p2p.Peer) {
func (bcR *BlockchainReactor) AddPeer(peer p2p.Peer) {
	if !peer.Send(BlockchainChannel, struct{ BlockchainMessage }{&bcStatusResponseMessage{bcR.store.Height()}}) {
		// doing nothing, will try later in `poolRoutine`
	}
	// peer is added to the pool once we receive the first
	// bcStatusResponseMessage from the peer and call pool.SetPeerHeight
}

// RemovePeer implements Reactor by removing peer from the pool.
func (bcR *BlockchainReactor) RemovePeer(peer *p2p.Peer, reason interface{}) {
	bcR.pool.RemovePeer(peer.Key)
func (bcR *BlockchainReactor) RemovePeer(peer p2p.Peer, reason interface{}) {
	bcR.pool.RemovePeer(peer.Key())
}
// respondToPeer loads a block and sends it to the requesting peer,
// if we have it. Otherwise, we'll respond saying we don't have it.
// According to the Tendermint spec, if all nodes are honest,
// no node should be requesting for a block that's non-existent.
func (bcR *BlockchainReactor) respondToPeer(msg *bcBlockRequestMessage, src p2p.Peer) (queued bool) {
	block := bcR.store.LoadBlock(msg.Height)
	if block != nil {
		msg := &bcBlockResponseMessage{Block: block}
		return src.TrySend(BlockchainChannel, struct{ BlockchainMessage }{msg})
	}

	bcR.Logger.Info("Peer asking for a block we don't have", "src", src, "height", msg.Height)

	return src.TrySend(BlockchainChannel, struct{ BlockchainMessage }{
		&bcNoBlockResponseMessage{Height: msg.Height},
	})
}

// Receive implements Reactor by handling 4 types of messages (look below).
func (bcR *BlockchainReactor) Receive(chID byte, src *p2p.Peer, msgBytes []byte) {
	_, msg, err := DecodeMessage(msgBytes)
func (bcR *BlockchainReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) {
	_, msg, err := DecodeMessage(msgBytes, bcR.maxMsgSize())
	if err != nil {
		bcR.Logger.Error("Error decoding message", "err", err)
		return
@@ -135,20 +160,12 @@ func (bcR *BlockchainReactor) Receive(chID byte, src *p2p.Peer, msgBytes []byte)
	// TODO: improve logic to satisfy megacheck
	switch msg := msg.(type) {
	case *bcBlockRequestMessage:
		// Got a request for a block. Respond with block if we have it.
		block := bcR.store.LoadBlock(msg.Height)
		if block != nil {
			msg := &bcBlockResponseMessage{Block: block}
			queued := src.TrySend(BlockchainChannel, struct{ BlockchainMessage }{msg})
			if !queued {
				// queue is full, just ignore.
			}
		} else {
			// TODO peer is asking for things we don't have.
		if queued := bcR.respondToPeer(msg, src); !queued {
			// Unfortunately not queued since the queue is full.
		}
	case *bcBlockResponseMessage:
		// Got a block.
		bcR.pool.AddBlock(src.Key, msg.Block, len(msgBytes))
		bcR.pool.AddBlock(src.Key(), msg.Block, len(msgBytes))
	case *bcStatusRequestMessage:
		// Send peer our state.
		queued := src.TrySend(BlockchainChannel, struct{ BlockchainMessage }{&bcStatusResponseMessage{bcR.store.Height()}})
@@ -157,12 +174,18 @@ func (bcR *BlockchainReactor) Receive(chID byte, src *p2p.Peer, msgBytes []byte)
		}
	case *bcStatusResponseMessage:
		// Got a peer status. Unverified.
		bcR.pool.SetPeerHeight(src.Key, msg.Height)
		bcR.pool.SetPeerHeight(src.Key(), msg.Height)
	default:
		bcR.Logger.Error(cmn.Fmt("Unknown message type %v", reflect.TypeOf(msg)))
	}
}

// maxMsgSize returns the maximum allowable size of a
// message on the blockchain reactor.
func (bcR *BlockchainReactor) maxMsgSize() int {
	return bcR.state.Params.BlockSizeParams.MaxBytes + 2
}
// Handle messages from the poolReactor telling the reactor what to do.
// NOTE: Don't sleep in the FOR_LOOP or otherwise slow it down!
// (Except for the SYNC_LOOP, which is the primary purpose and must be synchronous.)
@@ -172,6 +195,13 @@ func (bcR *BlockchainReactor) poolRoutine() {
	statusUpdateTicker := time.NewTicker(statusUpdateIntervalSeconds * time.Second)
	switchToConsensusTicker := time.NewTicker(switchToConsensusIntervalSeconds * time.Second)

	blocksSynced := 0

	chainID := bcR.state.ChainID

	lastHundred := time.Now()
	lastRate := 0.0

FOR_LOOP:
	for {
		select {
@@ -195,18 +225,18 @@ FOR_LOOP:
		}
		case <-statusUpdateTicker.C:
			// ask for status updates
			go bcR.BroadcastStatusRequest()
			go bcR.BroadcastStatusRequest() // nolint: errcheck
		case <-switchToConsensusTicker.C:
			height, numPending, _ := bcR.pool.GetStatus()
			height, numPending, lenRequesters := bcR.pool.GetStatus()
			outbound, inbound, _ := bcR.Switch.NumPeers()
			bcR.Logger.Info("Consensus ticker", "numPending", numPending, "total", len(bcR.pool.requesters),
			bcR.Logger.Debug("Consensus ticker", "numPending", numPending, "total", lenRequesters,
				"outbound", outbound, "inbound", inbound)
			if bcR.pool.IsCaughtUp() {
				bcR.Logger.Info("Time to switch to consensus reactor!", "height", height)
				bcR.pool.Stop()

				conR := bcR.Switch.Reactor("CONSENSUS").(consensusReactor)
				conR.SwitchToConsensus(bcR.state)
				conR.SwitchToConsensus(bcR.state, blocksSynced)

				break FOR_LOOP
			}
@@ -221,16 +251,16 @@ FOR_LOOP:
				// We need both to sync the first block.
				break SYNC_LOOP
			}
			firstParts := first.MakePartSet(types.DefaultBlockPartSize)
			firstParts := first.MakePartSet(bcR.state.Params.BlockPartSizeBytes)
			firstPartsHeader := firstParts.Header()
			// Finally, verify the first block using the second's commit
			// NOTE: we can probably make this more efficient, but note that calling
			// first.Hash() doesn't verify the tx contents, so MakePartSet() is
			// currently necessary.
			err := bcR.state.Validators.VerifyCommit(
				bcR.state.ChainID, types.BlockID{first.Hash(), firstPartsHeader}, first.Height, second.LastCommit)
				chainID, types.BlockID{first.Hash(), firstPartsHeader}, first.Height, second.LastCommit)
			if err != nil {
				bcR.Logger.Info("error in validation", "err", err)
				bcR.Logger.Error("Error in validation", "err", err)
				bcR.pool.RedoRequest(first.Height)
				break SYNC_LOOP
			} else {
@@ -242,11 +272,19 @@ FOR_LOOP:
				// NOTE: we could improve performance if we
				// didn't make the app commit to disk every block
				// ... but we would need a way to get the hash without it persisting
				err := bcR.state.ApplyBlock(bcR.evsw, bcR.proxyAppConn, first, firstPartsHeader, types.MockMempool{})
				err := bcR.state.ApplyBlock(bcR.eventBus, bcR.proxyAppConn, first, firstPartsHeader, types.MockMempool{})
				if err != nil {
					// TODO This is bad, are we zombie?
					cmn.PanicQ(cmn.Fmt("Failed to process committed block (%d:%X): %v", first.Height, first.Hash(), err))
				}
				blocksSynced += 1

				if blocksSynced%100 == 0 {
					lastRate = 0.9*lastRate + 0.1*(100/time.Since(lastHundred).Seconds())
					bcR.Logger.Info("Fast Sync Rate", "height", bcR.pool.height,
						"max_peer_height", bcR.pool.MaxPeerHeight(), "blocks/s", lastRate)
					lastHundred = time.Now()
				}
			}
		}
		continue FOR_LOOP
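The blocksSynced bookkeeping above feeds an exponentially weighted moving average: every 100 blocks the instantaneous rate (100 blocks over the elapsed wall time) is blended into lastRate with a 0.9/0.1 split. A small hedged sketch of that estimator on its own (names are illustrative):

package main

import (
	"fmt"
	"time"
)

// emaRate tracks a smoothed blocks-per-second figure the way the sync
// loop does: every `window` blocks, fold the latest observed rate into
// the running average with weight 0.1.
type emaRate struct {
	rate   float64
	last   time.Time
	count  int
	window int
}

func (e *emaRate) observe() {
	e.count++
	if e.count%e.window == 0 {
		instant := float64(e.window) / time.Since(e.last).Seconds()
		e.rate = 0.9*e.rate + 0.1*instant
		e.last = time.Now()
	}
}

func main() {
	e := &emaRate{last: time.Now(), window: 100}
	for i := 0; i < 300; i++ {
		time.Sleep(time.Millisecond) // stand-in for syncing one block
		e.observe()
	}
	fmt.Printf("smoothed rate: %.0f blocks/s\n", e.rate)
}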
@@ -262,19 +300,20 @@ func (bcR *BlockchainReactor) BroadcastStatusRequest() error {
	return nil
}

// SetEventSwitch implements events.Eventable
func (bcR *BlockchainReactor) SetEventSwitch(evsw types.EventSwitch) {
	bcR.evsw = evsw
// SetEventBus sets event bus.
func (bcR *BlockchainReactor) SetEventBus(b *types.EventBus) {
	bcR.eventBus = b
}

//-----------------------------------------------------------------------------
// Messages

const (
	msgTypeBlockRequest = byte(0x10)
	msgTypeBlockResponse = byte(0x11)
	msgTypeStatusResponse = byte(0x20)
	msgTypeStatusRequest = byte(0x21)
	msgTypeBlockRequest = byte(0x10)
	msgTypeBlockResponse = byte(0x11)
	msgTypeNoBlockResponse = byte(0x12)
	msgTypeStatusResponse = byte(0x20)
	msgTypeStatusRequest = byte(0x21)
)

// BlockchainMessage is a generic message for this reactor.
@@ -284,17 +323,18 @@ var _ = wire.RegisterInterface(
	struct{ BlockchainMessage }{},
	wire.ConcreteType{&bcBlockRequestMessage{}, msgTypeBlockRequest},
	wire.ConcreteType{&bcBlockResponseMessage{}, msgTypeBlockResponse},
	wire.ConcreteType{&bcNoBlockResponseMessage{}, msgTypeNoBlockResponse},
	wire.ConcreteType{&bcStatusResponseMessage{}, msgTypeStatusResponse},
	wire.ConcreteType{&bcStatusRequestMessage{}, msgTypeStatusRequest},
)

// DecodeMessage decodes BlockchainMessage.
// TODO: ensure that bz is completely read.
func DecodeMessage(bz []byte) (msgType byte, msg BlockchainMessage, err error) {
func DecodeMessage(bz []byte, maxSize int) (msgType byte, msg BlockchainMessage, err error) {
	msgType = bz[0]
	n := int(0)
	r := bytes.NewReader(bz)
	msg = wire.ReadBinary(struct{ BlockchainMessage }{}, r, maxBlockchainResponseSize, &n, &err).(struct{ BlockchainMessage }).BlockchainMessage
	msg = wire.ReadBinary(struct{ BlockchainMessage }{}, r, maxSize, &n, &err).(struct{ BlockchainMessage }).BlockchainMessage
	if err != nil && n != len(bz) {
		err = errors.New("DecodeMessage() had bytes left over")
	}
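Threading maxSize through DecodeMessage (fed by maxMsgSize above, i.e. the consensus-parameterized block size plus two bytes, presumably covering the type-byte prefix) replaces the fixed maxBlockchainResponseSize cap. The shape of the guard is simple to state on its own; a hedged sketch with illustrative names, not the wire package's actual API:

package main

import (
	"errors"
	"fmt"
)

// decodeCapped is an illustrative stand-in for the size-capped decode:
// reject oversized payloads before parsing, and insist that a successful
// parse consumed the whole buffer, mirroring the leftover-bytes check.
func decodeCapped(bz []byte, maxSize int, parse func([]byte) (int, error)) error {
	if len(bz) > maxSize {
		return fmt.Errorf("message of %d bytes exceeds cap of %d", len(bz), maxSize)
	}
	n, err := parse(bz)
	if err != nil {
		return err
	}
	if n != len(bz) {
		return errors.New("decode had bytes left over")
	}
	return nil
}

func main() {
	parse := func(bz []byte) (int, error) { return len(bz), nil } // trivial parser
	fmt.Println(decodeCapped(make([]byte, 10), 8, parse))         // rejected: over cap
	fmt.Println(decodeCapped(make([]byte, 4), 8, parse))          // <nil>
}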
@@ -304,13 +344,21 @@ func DecodeMessage(bz []byte) (msgType byte, msg BlockchainMessage, err error) {
//-------------------------------------

type bcBlockRequestMessage struct {
	Height int
	Height int64
}

func (m *bcBlockRequestMessage) String() string {
	return cmn.Fmt("[bcBlockRequestMessage %v]", m.Height)
}

type bcNoBlockResponseMessage struct {
	Height int64
}

func (brm *bcNoBlockResponseMessage) String() string {
	return cmn.Fmt("[bcNoBlockResponseMessage %d]", brm.Height)
}

//-------------------------------------

// NOTE: keep up-to-date with maxBlockchainResponseSize
@@ -325,7 +373,7 @@ func (m *bcBlockResponseMessage) String() string {
//-------------------------------------

type bcStatusRequestMessage struct {
	Height int
	Height int64
}

func (m *bcStatusRequestMessage) String() string {
@@ -335,7 +383,7 @@ func (m *bcStatusRequestMessage) String() string {
//-------------------------------------

type bcStatusResponseMessage struct {
	Height int
	Height int64
}

func (m *bcStatusResponseMessage) String() string {
blockchain/reactor_test.go (new file, 151 lines)
@@ -0,0 +1,151 @@
package blockchain

import (
	"testing"

	wire "github.com/tendermint/go-wire"
	cmn "github.com/tendermint/tmlibs/common"
	dbm "github.com/tendermint/tmlibs/db"
	"github.com/tendermint/tmlibs/log"

	cfg "github.com/tendermint/tendermint/config"
	"github.com/tendermint/tendermint/p2p"
	sm "github.com/tendermint/tendermint/state"
	"github.com/tendermint/tendermint/types"
)

func newBlockchainReactor(maxBlockHeight int64) *BlockchainReactor {
	logger := log.TestingLogger()
	config := cfg.ResetTestRoot("blockchain_reactor_test")

	blockStore := NewBlockStore(dbm.NewMemDB())

	// Get State
	state, _ := sm.GetState(dbm.NewMemDB(), config.GenesisFile())
	state.SetLogger(logger.With("module", "state"))
	state.Save()

	// Make the blockchainReactor itself
	fastSync := true
	bcReactor := NewBlockchainReactor(state.Copy(), nil, blockStore, fastSync)
	bcReactor.SetLogger(logger.With("module", "blockchain"))

	// Next: we need to set a switch in order for peers to be added in
	bcReactor.Switch = p2p.NewSwitch(cfg.DefaultP2PConfig())

	// Lastly: let's add some blocks in
	for blockHeight := int64(1); blockHeight <= maxBlockHeight; blockHeight++ {
		firstBlock := makeBlock(blockHeight, state)
		secondBlock := makeBlock(blockHeight+1, state)
		firstParts := firstBlock.MakePartSet(state.Params.BlockGossipParams.BlockPartSizeBytes)
		blockStore.SaveBlock(firstBlock, firstParts, secondBlock.LastCommit)
	}

	return bcReactor
}

func TestNoBlockMessageResponse(t *testing.T) {
	maxBlockHeight := int64(20)

	bcr := newBlockchainReactor(maxBlockHeight)
	bcr.Start()
	defer bcr.Stop()

	// Add some peers in
	peer := newbcrTestPeer(cmn.RandStr(12))
	bcr.AddPeer(peer)

	chID := byte(0x01)

	tests := []struct {
		height int64
		existent bool
	}{
		{maxBlockHeight + 2, false},
		{10, true},
		{1, true},
		{100, false},
	}

	for _, tt := range tests {
		reqBlockMsg := &bcBlockRequestMessage{tt.height}
		reqBlockBytes := wire.BinaryBytes(struct{ BlockchainMessage }{reqBlockMsg})
		bcr.Receive(chID, peer, reqBlockBytes)
		value := peer.lastValue()
		msg := value.(struct{ BlockchainMessage }).BlockchainMessage

		if tt.existent {
			if blockMsg, ok := msg.(*bcBlockResponseMessage); !ok {
				t.Fatalf("Expected to receive a block response for height %d", tt.height)
			} else if blockMsg.Block.Height != tt.height {
				t.Fatalf("Expected response to be for height %d, got %d", tt.height, blockMsg.Block.Height)
			}
		} else {
			if noBlockMsg, ok := msg.(*bcNoBlockResponseMessage); !ok {
				t.Fatalf("Expected to receive a no block response for height %d", tt.height)
			} else if noBlockMsg.Height != tt.height {
				t.Fatalf("Expected response to be for height %d, got %d", tt.height, noBlockMsg.Height)
			}
		}
	}
}

//----------------------------------------------
// utility funcs

func makeTxs(height int64) (txs []types.Tx) {
	for i := 0; i < 10; i++ {
		txs = append(txs, types.Tx([]byte{byte(height), byte(i)}))
	}
	return txs
}

func makeBlock(height int64, state *sm.State) *types.Block {
	prevHash := state.LastBlockID.Hash
	prevParts := types.PartSetHeader{}
	valHash := state.Validators.Hash()
	prevBlockID := types.BlockID{prevHash, prevParts}
	block, _ := types.MakeBlock(height, "test_chain", makeTxs(height),
		new(types.Commit), prevBlockID, valHash, state.AppHash, state.Params.BlockGossipParams.BlockPartSizeBytes)
	return block
}

// The Test peer
type bcrTestPeer struct {
	cmn.Service
	key string
	ch chan interface{}
}

var _ p2p.Peer = (*bcrTestPeer)(nil)

func newbcrTestPeer(key string) *bcrTestPeer {
	return &bcrTestPeer{
		Service: cmn.NewBaseService(nil, "bcrTestPeer", nil),
		key: key,
		ch: make(chan interface{}, 2),
	}
}

func (tp *bcrTestPeer) lastValue() interface{} { return <-tp.ch }

func (tp *bcrTestPeer) TrySend(chID byte, value interface{}) bool {
	if _, ok := value.(struct{ BlockchainMessage }).BlockchainMessage.(*bcStatusResponseMessage); ok {
		// Discard status response messages since they skew our results
		// We only want to deal with:
		// + bcBlockResponseMessage
		// + bcNoBlockResponseMessage
	} else {
		tp.ch <- value
	}
	return true
}

func (tp *bcrTestPeer) Send(chID byte, data interface{}) bool { return tp.TrySend(chID, data) }
func (tp *bcrTestPeer) NodeInfo() *p2p.NodeInfo { return nil }
func (tp *bcrTestPeer) Status() p2p.ConnectionStatus { return p2p.ConnectionStatus{} }
func (tp *bcrTestPeer) Key() string { return tp.key }
func (tp *bcrTestPeer) IsOutbound() bool { return false }
func (tp *bcrTestPeer) IsPersistent() bool { return true }
func (tp *bcrTestPeer) Get(s string) interface{} { return s }
func (tp *bcrTestPeer) Set(string, interface{}) {}
@@ -7,10 +7,10 @@ import (
	"io"
	"sync"

	. "github.com/tendermint/tmlibs/common"
	dbm "github.com/tendermint/tmlibs/db"
	"github.com/tendermint/go-wire"
	wire "github.com/tendermint/go-wire"
	"github.com/tendermint/tendermint/types"
	cmn "github.com/tendermint/tmlibs/common"
	dbm "github.com/tendermint/tmlibs/db"
)

/*
@@ -25,13 +25,14 @@ Currently the precommit signatures are duplicated in the Block parts as
well as the Commit. In the future this may change, perhaps by moving
the Commit data outside the Block.

Panics indicate probable corruption in the data
// NOTE: BlockStore methods will panic if they encounter errors
// deserializing loaded data, indicating probable corruption on disk.
*/
type BlockStore struct {
	db dbm.DB

	mtx sync.RWMutex
	height int
	height int64
}

func NewBlockStore(db dbm.DB) *BlockStore {
@@ -43,7 +44,7 @@ func NewBlockStore(db dbm.DB) *BlockStore {
}

// Height() returns the last known contiguous block height.
func (bs *BlockStore) Height() int {
func (bs *BlockStore) Height() int64 {
	bs.mtx.RLock()
	defer bs.mtx.RUnlock()
	return bs.height
@@ -57,7 +58,7 @@ func (bs *BlockStore) GetReader(key []byte) io.Reader {
	return bytes.NewReader(bytez)
}

func (bs *BlockStore) LoadBlock(height int) *types.Block {
func (bs *BlockStore) LoadBlock(height int64) *types.Block {
	var n int
	var err error
	r := bs.GetReader(calcBlockMetaKey(height))
@@ -66,7 +67,7 @@ func (bs *BlockStore) LoadBlock(height int) *types.Block {
	}
	blockMeta := wire.ReadBinary(&types.BlockMeta{}, r, 0, &n, &err).(*types.BlockMeta)
	if err != nil {
		PanicCrisis(Fmt("Error reading block meta: %v", err))
		cmn.PanicCrisis(cmn.Fmt("Error reading block meta: %v", err))
	}
	bytez := []byte{}
	for i := 0; i < blockMeta.BlockID.PartsHeader.Total; i++ {
@@ -75,12 +76,12 @@ func (bs *BlockStore) LoadBlock(height int) *types.Block {
	}
	block := wire.ReadBinary(&types.Block{}, bytes.NewReader(bytez), 0, &n, &err).(*types.Block)
	if err != nil {
		PanicCrisis(Fmt("Error reading block: %v", err))
		cmn.PanicCrisis(cmn.Fmt("Error reading block: %v", err))
	}
	return block
}

func (bs *BlockStore) LoadBlockPart(height int, index int) *types.Part {
func (bs *BlockStore) LoadBlockPart(height int64, index int) *types.Part {
	var n int
	var err error
	r := bs.GetReader(calcBlockPartKey(height, index))
@@ -89,12 +90,12 @@ func (bs *BlockStore) LoadBlockPart(height int, index int) *types.Part {
	}
	part := wire.ReadBinary(&types.Part{}, r, 0, &n, &err).(*types.Part)
	if err != nil {
		PanicCrisis(Fmt("Error reading block part: %v", err))
		cmn.PanicCrisis(cmn.Fmt("Error reading block part: %v", err))
	}
	return part
}

func (bs *BlockStore) LoadBlockMeta(height int) *types.BlockMeta {
func (bs *BlockStore) LoadBlockMeta(height int64) *types.BlockMeta {
	var n int
	var err error
	r := bs.GetReader(calcBlockMetaKey(height))
@@ -103,14 +104,14 @@ func (bs *BlockStore) LoadBlockMeta(height int) *types.BlockMeta {
	}
	blockMeta := wire.ReadBinary(&types.BlockMeta{}, r, 0, &n, &err).(*types.BlockMeta)
	if err != nil {
		PanicCrisis(Fmt("Error reading block meta: %v", err))
		cmn.PanicCrisis(cmn.Fmt("Error reading block meta: %v", err))
	}
	return blockMeta
}

// The +2/3 and other Precommit-votes for block at `height`.
// This Commit comes from block.LastCommit for `height+1`.
func (bs *BlockStore) LoadBlockCommit(height int) *types.Commit {
func (bs *BlockStore) LoadBlockCommit(height int64) *types.Commit {
	var n int
	var err error
	r := bs.GetReader(calcBlockCommitKey(height))
@@ -119,13 +120,13 @@ func (bs *BlockStore) LoadBlockCommit(height int) *types.Commit {
	}
	commit := wire.ReadBinary(&types.Commit{}, r, 0, &n, &err).(*types.Commit)
	if err != nil {
		PanicCrisis(Fmt("Error reading commit: %v", err))
		cmn.PanicCrisis(cmn.Fmt("Error reading commit: %v", err))
	}
	return commit
}

// NOTE: the Precommit-vote heights are for the block at `height`
func (bs *BlockStore) LoadSeenCommit(height int) *types.Commit {
func (bs *BlockStore) LoadSeenCommit(height int64) *types.Commit {
	var n int
	var err error
	r := bs.GetReader(calcSeenCommitKey(height))
@@ -134,7 +135,7 @@ func (bs *BlockStore) LoadSeenCommit(height int) *types.Commit {
	}
	commit := wire.ReadBinary(&types.Commit{}, r, 0, &n, &err).(*types.Commit)
	if err != nil {
		PanicCrisis(Fmt("Error reading commit: %v", err))
		cmn.PanicCrisis(cmn.Fmt("Error reading commit: %v", err))
	}
	return commit
}
@@ -147,10 +148,10 @@ func (bs *BlockStore) LoadSeenCommit(height int) *types.Commit {
func (bs *BlockStore) SaveBlock(block *types.Block, blockParts *types.PartSet, seenCommit *types.Commit) {
	height := block.Height
	if height != bs.Height()+1 {
		PanicSanity(Fmt("BlockStore can only save contiguous blocks. Wanted %v, got %v", bs.Height()+1, height))
		cmn.PanicSanity(cmn.Fmt("BlockStore can only save contiguous blocks. Wanted %v, got %v", bs.Height()+1, height))
	}
	if !blockParts.IsComplete() {
		PanicSanity(Fmt("BlockStore can only save complete block part sets"))
		cmn.PanicSanity(cmn.Fmt("BlockStore can only save complete block part sets"))
	}

	// Save block meta
@@ -184,9 +185,9 @@ func (bs *BlockStore) SaveBlock(block *types.Block, blockParts *types.PartSet, s
	bs.db.SetSync(nil, nil)
}

func (bs *BlockStore) saveBlockPart(height int, index int, part *types.Part) {
func (bs *BlockStore) saveBlockPart(height int64, index int, part *types.Part) {
	if height != bs.Height()+1 {
		PanicSanity(Fmt("BlockStore can only save contiguous blocks. Wanted %v, got %v", bs.Height()+1, height))
		cmn.PanicSanity(cmn.Fmt("BlockStore can only save contiguous blocks. Wanted %v, got %v", bs.Height()+1, height))
	}
	partBytes := wire.BinaryBytes(part)
	bs.db.Set(calcBlockPartKey(height, index), partBytes)
@@ -194,19 +195,19 @@ func (bs *BlockStore) saveBlockPart(height int, index int, part *types.Part) {

//-----------------------------------------------------------------------------

func calcBlockMetaKey(height int) []byte {
func calcBlockMetaKey(height int64) []byte {
	return []byte(fmt.Sprintf("H:%v", height))
}

func calcBlockPartKey(height int, partIndex int) []byte {
func calcBlockPartKey(height int64, partIndex int) []byte {
	return []byte(fmt.Sprintf("P:%v:%v", height, partIndex))
}

func calcBlockCommitKey(height int) []byte {
func calcBlockCommitKey(height int64) []byte {
	return []byte(fmt.Sprintf("C:%v", height))
}

func calcSeenCommitKey(height int) []byte {
func calcSeenCommitKey(height int64) []byte {
	return []byte(fmt.Sprintf("SC:%v", height))
}
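The calc*Key helpers define the BlockStore's keyspace in the underlying key-value DB: one string prefix per record kind, keyed by height (plus a part index for block parts). Since the scheme is plain string formatting, a short sketch shows the keys a block at height 42 with three parts would occupy (a sketch of the layout, not the store's API):

package main

import "fmt"

func main() {
	height := int64(42)
	fmt.Printf("meta:        H:%v\n", height) // block meta record
	for i := 0; i < 3; i++ {
		fmt.Printf("part %d:      P:%v:%v\n", i, height, i) // one key per block part
	}
	fmt.Printf("commit:      C:%v\n", height)  // +2/3 precommits, from height+1's LastCommit
	fmt.Printf("seen commit: SC:%v\n", height) // locally seen precommits
}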
@@ -215,13 +216,13 @@ func calcSeenCommitKey(height int) []byte {
var blockStoreKey = []byte("blockStore")

type BlockStoreStateJSON struct {
	Height int
	Height int64
}

func (bsj BlockStoreStateJSON) Save(db dbm.DB) {
	bytes, err := json.Marshal(bsj)
	if err != nil {
		PanicSanity(Fmt("Could not marshal state bytes: %v", err))
		cmn.PanicSanity(cmn.Fmt("Could not marshal state bytes: %v", err))
	}
	db.SetSync(blockStoreKey, bytes)
}
@@ -236,7 +237,7 @@ func LoadBlockStoreStateJSON(db dbm.DB) BlockStoreStateJSON {
	bsj := BlockStoreStateJSON{}
	err := json.Unmarshal(bytes, &bsj)
	if err != nil {
		PanicCrisis(Fmt("Could not unmarshal bytes: %X", bytes))
		cmn.PanicCrisis(cmn.Fmt("Could not unmarshal bytes: %X", bytes))
	}
	return bsj
}
@@ -9,19 +9,20 @@ import (
	"github.com/tendermint/tendermint/types"
)

var genValidatorCmd = &cobra.Command{
// GenValidatorCmd allows the generation of a keypair for a
// validator.
var GenValidatorCmd = &cobra.Command{
	Use: "gen_validator",
	Short: "Generate new validator keypair",
	Run: genValidator,
}

func init() {
	RootCmd.AddCommand(genValidatorCmd)
}

func genValidator(cmd *cobra.Command, args []string) {
	privValidator := types.GenPrivValidator()
	privValidatorJSONBytes, _ := json.MarshalIndent(privValidator, "", "\t")
	privValidator := types.GenPrivValidatorFS("")
	privValidatorJSONBytes, err := json.MarshalIndent(privValidator, "", "\t")
	if err != nil {
		panic(err)
	}
	fmt.Printf(`%v
`, string(privValidatorJSONBytes))
}
@@ -9,21 +9,17 @@ import (
	cmn "github.com/tendermint/tmlibs/common"
)

var initFilesCmd = &cobra.Command{
// InitFilesCmd initialises a fresh Tendermint Core instance.
var InitFilesCmd = &cobra.Command{
	Use: "init",
	Short: "Initialize Tendermint",
	Run: initFiles,
}

func init() {
	RootCmd.AddCommand(initFilesCmd)
}

func initFiles(cmd *cobra.Command, args []string) {
	privValFile := config.PrivValidatorFile()
	if _, err := os.Stat(privValFile); os.IsNotExist(err) {
		privValidator := types.GenPrivValidator()
		privValidator.SetFile(privValFile)
		privValidator := types.GenPrivValidatorFS(privValFile)
		privValidator.Save()

		genFile := config.GenesisFile()
@@ -32,12 +28,14 @@ func initFiles(cmd *cobra.Command, args []string) {
		genDoc := types.GenesisDoc{
			ChainID: cmn.Fmt("test-chain-%v", cmn.RandStr(6)),
		}
		genDoc.Validators = []types.GenesisValidator{types.GenesisValidator{
			PubKey: privValidator.PubKey,
			Amount: 10,
		genDoc.Validators = []types.GenesisValidator{{
			PubKey: privValidator.GetPubKey(),
			Power: 10,
		}}

		genDoc.SaveAs(genFile)
		if err := genDoc.SaveAs(genFile); err != nil {
			panic(err)
		}
	}

	logger.Info("Initialized tendermint", "genesis", config.GenesisFile(), "priv_validator", config.PrivValidatorFile())
@@ -9,16 +9,13 @@ import (
	"github.com/tendermint/tendermint/p2p/upnp"
)

var probeUpnpCmd = &cobra.Command{
// ProbeUpnpCmd adds capabilities to test the UPnP functionality.
var ProbeUpnpCmd = &cobra.Command{
	Use: "probe_upnp",
	Short: "Test UPnP functionality",
	RunE: probeUpnp,
}

func init() {
	RootCmd.AddCommand(probeUpnpCmd)
}

func probeUpnp(cmd *cobra.Command, args []string) error {
	capabilities, err := upnp.Probe(logger)
	if err != nil {
@@ -6,7 +6,8 @@ import (
	"github.com/tendermint/tendermint/consensus"
)

var replayCmd = &cobra.Command{
// ReplayCmd allows replaying of messages from the WAL.
var ReplayCmd = &cobra.Command{
	Use: "replay",
	Short: "Replay messages from WAL",
	Run: func(cmd *cobra.Command, args []string) {
@@ -14,15 +15,12 @@ var replayCmd = &cobra.Command{
	},
}

var replayConsoleCmd = &cobra.Command{
// ReplayConsoleCmd allows replaying of messages from the WAL in a
// console.
var ReplayConsoleCmd = &cobra.Command{
	Use: "replay_console",
	Short: "Replay messages from WAL in a console",
	Run: func(cmd *cobra.Command, args []string) {
		consensus.RunReplayFile(config.BaseConfig, config.Consensus, true)
	},
}

func init() {
	RootCmd.AddCommand(replayCmd)
	RootCmd.AddCommand(replayConsoleCmd)
}
@@ -9,21 +9,30 @@ import (
	"github.com/tendermint/tmlibs/log"
)

var resetAllCmd = &cobra.Command{
// ResetAllCmd removes the database of this Tendermint core
// instance.
var ResetAllCmd = &cobra.Command{
	Use: "unsafe_reset_all",
	Short: "(unsafe) Remove all the data and WAL, reset this node's validator",
	Run: resetAll,
}

var resetPrivValidatorCmd = &cobra.Command{
// ResetPrivValidatorCmd resets the private validator files.
var ResetPrivValidatorCmd = &cobra.Command{
	Use: "unsafe_reset_priv_validator",
	Short: "(unsafe) Reset this node's validator",
	Run: resetPrivValidator,
}

func init() {
	RootCmd.AddCommand(resetAllCmd)
	RootCmd.AddCommand(resetPrivValidatorCmd)
// ResetAll removes the privValidator files.
// Exported so other CLI tools can use it.
func ResetAll(dbDir, privValFile string, logger log.Logger) {
	resetPrivValidatorFS(privValFile, logger)
	if err := os.RemoveAll(dbDir); err != nil {
		logger.Error("Error removing directory", "err", err)
		return
	}
	logger.Info("Removed all data", "dir", dbDir)
}

// XXX: this is totally unsafe.
@@ -35,26 +44,17 @@ func resetAll(cmd *cobra.Command, args []string) {
// XXX: this is totally unsafe.
// it's only suitable for testnets.
func resetPrivValidator(cmd *cobra.Command, args []string) {
	resetPrivValidatorLocal(config.PrivValidatorFile(), logger)
	resetPrivValidatorFS(config.PrivValidatorFile(), logger)
}

// Exported so other CLI tools can use it
func ResetAll(dbDir, privValFile string, logger log.Logger) {
	resetPrivValidatorLocal(privValFile, logger)
	os.RemoveAll(dbDir)
	logger.Info("Removed all data", "dir", dbDir)
}

func resetPrivValidatorLocal(privValFile string, logger log.Logger) {
func resetPrivValidatorFS(privValFile string, logger log.Logger) {
	// Get PrivValidator
	var privValidator *types.PrivValidator
	if _, err := os.Stat(privValFile); err == nil {
		privValidator = types.LoadPrivValidator(privValFile)
		privValidator := types.LoadPrivValidatorFS(privValFile)
		privValidator.Reset()
		logger.Info("Reset PrivValidator", "file", privValFile)
	} else {
		privValidator = types.GenPrivValidator()
		privValidator.SetFile(privValFile)
		privValidator := types.GenPrivValidatorFS(privValFile)
		privValidator.Save()
		logger.Info("Generated PrivValidator", "file", privValFile)
	}
@@ -34,11 +34,12 @@ func ParseConfig() (*cfg.Config, error) {
	return conf, err
}

// RootCmd is the root command for Tendermint core.
var RootCmd = &cobra.Command{
	Use: "tendermint",
	Short: "Tendermint Core (BFT Consensus) in Go",
	PersistentPreRunE: func(cmd *cobra.Command, args []string) (err error) {
		if cmd.Name() == versionCmd.Name() {
		if cmd.Name() == VersionCmd.Name() {
			return nil
		}
		config, err = ParseConfig()
@@ -3,15 +3,15 @@ package commands
import (
	"os"
	"strconv"
	"testing"

	"github.com/spf13/cobra"
	"github.com/spf13/viper"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	cfg "github.com/tendermint/tendermint/config"
	"github.com/tendermint/tmlibs/cli"

	"testing"
)

var (
@@ -26,10 +26,12 @@ const (
// modify in the test cases.
// NOTE: it unsets all TM* env variables.
func isolate(cmds ...*cobra.Command) cli.Executable {
	os.Unsetenv("TMHOME")
	os.Unsetenv("TM_HOME")
	os.Unsetenv("TMROOT")
	os.Unsetenv("TM_ROOT")
	if err := os.Unsetenv("TMHOME"); err != nil {
		panic(err)
	}
	if err := os.Unsetenv("TM_HOME"); err != nil {
		panic(err)
	}

	viper.Reset()
	config = cfg.DefaultConfig()
@@ -70,7 +72,7 @@ func TestRootConfig(t *testing.T) {
		{nil, nil, defaultRoot, defaults.Moniker, defaults.FastSync, dmax},
		// try multiple ways of setting root (two flags, cli vs. env)
		{[]string{"--home", conf}, nil, conf, cvals["moniker"], cfast, dmax},
		{nil, map[string]string{"TMROOT": conf}, conf, cvals["moniker"], cfast, dmax},
		{nil, map[string]string{"TMHOME": conf}, conf, cvals["moniker"], cfast, dmax},
		// check setting p2p subflags two different ways
		{[]string{"--p2p.max_num_peers", "420"}, nil, defaultRoot, defaults.Moniker, defaults.FastSync, 420},
		{nil, map[string]string{"TM_P2P_MAX_NUM_PEERS": "17"}, defaultRoot, defaults.Moniker, defaults.FastSync, 17},
@@ -2,26 +2,12 @@ package commands

import (
	"fmt"
	"time"

	"github.com/spf13/cobra"

	"github.com/tendermint/tendermint/node"
	"github.com/tendermint/tendermint/types"
	cmn "github.com/tendermint/tmlibs/common"
	nm "github.com/tendermint/tendermint/node"
)

var runNodeCmd = &cobra.Command{
	Use: "node",
	Short: "Run the tendermint node",
	RunE: runNode,
}

func init() {
	AddNodeFlags(runNodeCmd)
	RootCmd.AddCommand(runNodeCmd)
}

// AddNodeFlags exposes some common configuration options on the command-line
// These are exposed for convenience of commands embedding a tendermint node
func AddNodeFlags(cmd *cobra.Command) {
@@ -50,40 +36,32 @@ func AddNodeFlags(cmd *cobra.Command) {
	cmd.Flags().Bool("consensus.create_empty_blocks", config.Consensus.CreateEmptyBlocks, "Set this to false to only produce blocks when there are txs or when the AppHash changes")
}

// Users wishing to:
// * Use an external signer for their validators
// * Supply an in-proc abci app
// should import github.com/tendermint/tendermint/node and implement
// their own run_node to call node.NewNode (instead of node.NewNodeDefault)
// with their custom priv validator and/or custom proxy.ClientCreator
func runNode(cmd *cobra.Command, args []string) error {
// NewRunNodeCmd returns the command that allows the CLI to start a
// node. It can be used with a custom PrivValidator and in-process ABCI application.
func NewRunNodeCmd(nodeProvider nm.NodeProvider) *cobra.Command {
	cmd := &cobra.Command{
		Use: "node",
		Short: "Run the tendermint node",
		RunE: func(cmd *cobra.Command, args []string) error {
			// Create & start node
			n, err := nodeProvider(config, logger)
			if err != nil {
				return fmt.Errorf("Failed to create node: %v", err)
			}

	// Wait until the genesis doc becomes available
	// This is for Mintnet compatibility.
	// TODO: If Mintnet gets deprecated or genesis_file is
	// always available, remove.
	genDocFile := config.GenesisFile()
	for !cmn.FileExists(genDocFile) {
		logger.Info(cmn.Fmt("Waiting for genesis file %v...", genDocFile))
		time.Sleep(time.Second)
			if err := n.Start(); err != nil {
				return fmt.Errorf("Failed to start node: %v", err)
			} else {
				logger.Info("Started node", "nodeInfo", n.Switch().NodeInfo())
			}

			// Trap signal, run forever.
			n.RunForever()

			return nil
		},
	}

	genDoc, err := types.GenesisDocFromFile(genDocFile)
	if err != nil {
		return err
	}
	config.ChainID = genDoc.ChainID

	// Create & start node
	n := node.NewNodeDefault(config, logger.With("module", "node"))
	if _, err := n.Start(); err != nil {
		return fmt.Errorf("Failed to start node: %v", err)
	} else {
		logger.Info("Started node", "nodeInfo", n.Switch().NodeInfo())
	}

	// Trap signal, run forever.
	n.RunForever()

	return nil
	AddNodeFlags(cmd)
	return cmd
}
@@ -9,18 +9,15 @@ import (
	"github.com/tendermint/tendermint/types"
)

var showValidatorCmd = &cobra.Command{
// ShowValidatorCmd adds capabilities for showing the validator info.
var ShowValidatorCmd = &cobra.Command{
	Use: "show_validator",
	Short: "Show this node's validator info",
	Run: showValidator,
}

func init() {
	RootCmd.AddCommand(showValidatorCmd)
}

func showValidator(cmd *cobra.Command, args []string) {
	privValidator := types.LoadOrGenPrivValidator(config.PrivValidatorFile(), logger)
	privValidator := types.LoadOrGenPrivValidatorFS(config.PrivValidatorFile())
	pubKeyJSONBytes, _ := data.ToJSON(privValidator.PubKey)
	fmt.Println(string(pubKeyJSONBytes))
}
@@ -7,16 +7,10 @@ import (

	"github.com/spf13/cobra"

	cmn "github.com/tendermint/tmlibs/common"
	"github.com/tendermint/tendermint/types"
	cmn "github.com/tendermint/tmlibs/common"
)

var testnetFilesCmd = &cobra.Command{
	Use: "testnet",
	Short: "Initialize files for a Tendermint testnet",
	Run: testnetFiles,
}

//flags
var (
	nValidators int
@@ -24,12 +18,18 @@ var (
)

func init() {
	testnetFilesCmd.Flags().IntVar(&nValidators, "n", 4,
	TestnetFilesCmd.Flags().IntVar(&nValidators, "n", 4,
		"Number of validators to initialize the testnet with")
	testnetFilesCmd.Flags().StringVar(&dataDir, "dir", "mytestnet",
	TestnetFilesCmd.Flags().StringVar(&dataDir, "dir", "mytestnet",
		"Directory to store initialization data for the testnet")
}

	RootCmd.AddCommand(testnetFilesCmd)
// TestnetFilesCmd allows initialisation of files for a
// Tendermint testnet.
var TestnetFilesCmd = &cobra.Command{
	Use: "testnet",
	Short: "Initialize files for a Tendermint testnet",
	Run: testnetFiles,
}

func testnetFiles(cmd *cobra.Command, args []string) {
@@ -45,10 +45,10 @@ func testnetFiles(cmd *cobra.Command, args []string) {
		}
		// Read priv_validator.json to populate vals
		privValFile := path.Join(dataDir, mach, "priv_validator.json")
		privVal := types.LoadPrivValidator(privValFile)
		privVal := types.LoadPrivValidatorFS(privValFile)
		genVals[i] = types.GenesisValidator{
			PubKey: privVal.PubKey,
			Amount: 1,
			PubKey: privVal.GetPubKey(),
			Power: 1,
			Name: mach,
		}
	}
@@ -63,7 +63,9 @@ func testnetFiles(cmd *cobra.Command, args []string) {
	// Write genesis file.
	for i := 0; i < nValidators; i++ {
		mach := cmn.Fmt("mach%d", i)
		genDoc.SaveAs(path.Join(dataDir, mach, "genesis.json"))
		if err := genDoc.SaveAs(path.Join(dataDir, mach, "genesis.json")); err != nil {
			panic(err)
		}
	}

	fmt.Println(cmn.Fmt("Successfully initialized %v node directories", nValidators))
@@ -87,7 +89,6 @@ func ensurePrivValidator(file string) {
	if cmn.FileExists(file) {
		return
	}
	privValidator := types.GenPrivValidator()
	privValidator.SetFile(file)
	privValidator := types.GenPrivValidatorFS(file)
	privValidator.Save()
}
@@ -8,14 +8,11 @@ import (
	"github.com/tendermint/tendermint/version"
)

var versionCmd = &cobra.Command{
// VersionCmd prints the Tendermint version.
var VersionCmd = &cobra.Command{
	Use: "version",
	Short: "Show version info",
	Run: func(cmd *cobra.Command, args []string) {
		fmt.Println(version.Version)
	},
}

func init() {
	RootCmd.AddCommand(versionCmd)
}
@@ -3,11 +3,41 @@ package main
|
||||
import (
|
||||
"os"
|
||||
|
||||
"github.com/tendermint/tendermint/cmd/tendermint/commands"
|
||||
"github.com/tendermint/tmlibs/cli"
|
||||
|
||||
cmd "github.com/tendermint/tendermint/cmd/tendermint/commands"
|
||||
nm "github.com/tendermint/tendermint/node"
|
||||
)
|
||||
|
||||
func main() {
|
||||
cmd := cli.PrepareBaseCmd(commands.RootCmd, "TM", os.ExpandEnv("$HOME/.tendermint"))
|
||||
cmd.Execute()
|
||||
rootCmd := cmd.RootCmd
|
||||
rootCmd.AddCommand(
|
||||
cmd.GenValidatorCmd,
|
||||
cmd.InitFilesCmd,
|
||||
cmd.ProbeUpnpCmd,
|
||||
cmd.ReplayCmd,
|
||||
cmd.ReplayConsoleCmd,
|
||||
cmd.ResetAllCmd,
|
||||
cmd.ResetPrivValidatorCmd,
|
||||
cmd.ShowValidatorCmd,
|
||||
cmd.TestnetFilesCmd,
|
||||
cmd.VersionCmd)
|
||||
|
||||
// NOTE:
|
||||
// Users wishing to:
|
||||
// * Use an external signer for their validators
|
||||
// * Supply an in-proc abci app
|
||||
// * Supply a genesis doc file from another source
|
||||
// * Provide their own DB implementation
|
||||
// can copy this file and use something other than the
|
||||
// DefaultNewNode function
|
||||
nodeFunc := nm.DefaultNewNode
|
||||
|
||||
// Create & start node
|
||||
rootCmd.AddCommand(cmd.NewRunNodeCmd(nodeFunc))
|
||||
|
||||
cmd := cli.PrepareBaseCmd(rootCmd, "TM", os.ExpandEnv("$HOME/.tendermint"))
|
||||
if err := cmd.Execute(); err != nil {
|
||||
panic(err)
|
||||
}
|
||||
}
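The NOTE in this new main.go is the extension point: a downstream binary copies the file and swaps DefaultNewNode for its own constructor. A condensed sketch of such a copy, using only the calls shown above (the intent comment is the only addition):

    rootCmd := cmd.RootCmd
    // Swap in a custom constructor here to use an external signer,
    // an in-proc ABCI app, another genesis source, or a different DB.
    nodeFunc := nm.DefaultNewNode
    rootCmd.AddCommand(cmd.NewRunNodeCmd(nodeFunc))
    baseCmd := cli.PrepareBaseCmd(rootCmd, "TM", os.ExpandEnv("$HOME/.tendermint"))
    if err := baseCmd.Execute(); err != nil {
        panic(err)
    }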

@@ -4,8 +4,6 @@ import (
"fmt"
"path/filepath"
"time"

"github.com/tendermint/tendermint/types" // TODO: remove
)

// Config defines the top level configuration for a Tendermint node
@@ -18,6 +16,7 @@ type Config struct {
P2P *P2PConfig `mapstructure:"p2p"`
Mempool *MempoolConfig `mapstructure:"mempool"`
Consensus *ConsensusConfig `mapstructure:"consensus"`
TxIndex *TxIndexConfig `mapstructure:"tx_index"`
}

// DefaultConfig returns a default configuration for a Tendermint node
@@ -28,6 +27,7 @@ func DefaultConfig() *Config {
P2P: DefaultP2PConfig(),
Mempool: DefaultMempoolConfig(),
Consensus: DefaultConsensusConfig(),
TxIndex: DefaultTxIndexConfig(),
}
}

@@ -39,6 +39,7 @@ func TestConfig() *Config {
P2P: TestP2PConfig(),
Mempool: DefaultMempoolConfig(),
Consensus: TestConsensusConfig(),
TxIndex: DefaultTxIndexConfig(),
}
}

@@ -95,9 +96,6 @@ type BaseConfig struct {
// so the app can decide if we should keep the connection or not
FilterPeers bool `mapstructure:"filter_peers"` // false

// What indexer to use for transactions
TxIndex string `mapstructure:"tx_index"`

// Database backend: leveldb | memdb
DBBackend string `mapstructure:"db_backend"`

@@ -117,7 +115,6 @@ func DefaultBaseConfig() BaseConfig {
ProfListenAddress: "",
FastSync: true,
FilterPeers: false,
TxIndex: "kv",
DBBackend: "leveldb",
DBPath: "data",
}

@@ -257,7 +254,7 @@ func TestP2PConfig() *P2PConfig {
return conf
}

// AddrBookFile returns the full path to the address bool
// AddrBookFile returns the full path to the address book
func (p *P2PConfig) AddrBookFile() string {
return rootify(p.AddrBook, p.RootDir)
}

@@ -320,10 +317,6 @@ type ConsensusConfig struct {
CreateEmptyBlocks bool `mapstructure:"create_empty_blocks"`
CreateEmptyBlocksInterval int `mapstructure:"create_empty_blocks_interval"`

// TODO: This probably shouldn't be exposed but it makes it
// easy to write tests for the wal/replay
BlockPartSize int `mapstructure:"block_part_size"`

// Reactor sleep duration parameters are in ms
PeerGossipSleepDuration int `mapstructure:"peer_gossip_sleep_duration"`
PeerQueryMaj23SleepDuration int `mapstructure:"peer_query_maj23_sleep_duration"`

@@ -386,7 +379,6 @@ func DefaultConsensusConfig() *ConsensusConfig {
MaxBlockSizeBytes: 1, // TODO
CreateEmptyBlocks: true,
CreateEmptyBlocksInterval: 0,
BlockPartSize: types.DefaultBlockPartSize, // TODO: we shouldnt be importing types
PeerGossipSleepDuration: 100,
PeerQueryMaj23SleepDuration: 2000,
}

@@ -419,6 +411,41 @@ func (c *ConsensusConfig) SetWalFile(walFile string) {
c.walFile = walFile
}

//-----------------------------------------------------------------------------
// TxIndexConfig

// TxIndexConfig defines the confuguration for the transaction
// indexer, including tags to index.
type TxIndexConfig struct {
// What indexer to use for transactions
//
// Options:
// 1) "null" (default)
// 2) "kv" - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
Indexer string `mapstructure:"indexer"`

// Comma-separated list of tags to index (by default the only tag is tx hash)
//
// It's recommended to index only a subset of tags due to possible memory
// bloat. This is, of course, depends on the indexer's DB and the volume of
// transactions.
IndexTags string `mapstructure:"index_tags"`

// When set to true, tells indexer to index all tags. Note this may be not
// desirable (see the comment above). IndexTags has a precedence over
// IndexAllTags (i.e. when given both, IndexTags will be indexed).
IndexAllTags bool `mapstructure:"index_all_tags"`
}

// DefaultTxIndexConfig returns a default configuration for the transaction indexer.
func DefaultTxIndexConfig() *TxIndexConfig {
return &TxIndexConfig{
Indexer: "kv",
IndexTags: "",
IndexAllTags: false,
}
}
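Wiring the new section into a node configuration then comes down to three fields; a small sketch (the tag name "tx.height" is illustrative, not taken from the diff):

    cfg := DefaultConfig()
    cfg.TxIndex.Indexer = "kv"          // or "null" to disable indexing
    cfg.TxIndex.IndexTags = "tx.height" // comma-separated; tx hash is indexed by default
    cfg.TxIndex.IndexAllTags = false    // IndexTags takes precedence when both are set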

//-----------------------------------------------------------------------------
// Utils

@@ -12,8 +12,12 @@ import (
/****** these are for production settings ***********/

func EnsureRoot(rootDir string) {
cmn.EnsureDir(rootDir, 0700)
cmn.EnsureDir(rootDir+"/data", 0700)
if err := cmn.EnsureDir(rootDir, 0700); err != nil {
cmn.PanicSanity(err.Error())
}
if err := cmn.EnsureDir(rootDir+"/data", 0700); err != nil {
cmn.PanicSanity(err.Error())
}

configFilePath := path.Join(rootDir, "config.toml")

@@ -53,21 +57,23 @@ func ResetTestRoot(testName string) *Config {
rootDir = filepath.Join(rootDir, testName)
// Remove ~/.tendermint_test_bak
if cmn.FileExists(rootDir + "_bak") {
err := os.RemoveAll(rootDir + "_bak")
if err != nil {
if err := os.RemoveAll(rootDir + "_bak"); err != nil {
cmn.PanicSanity(err.Error())
}
}
// Move ~/.tendermint_test to ~/.tendermint_test_bak
if cmn.FileExists(rootDir) {
err := os.Rename(rootDir, rootDir+"_bak")
if err != nil {
if err := os.Rename(rootDir, rootDir+"_bak"); err != nil {
cmn.PanicSanity(err.Error())
}
}
// Create new dir
cmn.EnsureDir(rootDir, 0700)
cmn.EnsureDir(rootDir+"/data", 0700)
if err := cmn.EnsureDir(rootDir, 0700); err != nil {
cmn.PanicSanity(err.Error())
}
if err := cmn.EnsureDir(rootDir+"/data", 0700); err != nil {
cmn.PanicSanity(err.Error())
}

configFilePath := path.Join(rootDir, "config.toml")
genesisFilePath := path.Join(rootDir, "genesis.json")

@@ -119,7 +125,7 @@ var testGenesis = `{
"type": "ed25519",
"data":"3B3069C422E19688B45CBFAE7BB009FC0FA1B1EA86593519318B7214853803C8"
},
"amount": 10,
"power": 10,
"name": ""
}
],

@@ -24,7 +24,7 @@ func TestEnsureRoot(t *testing.T) {
// setup temp dir for test
tmpDir, err := ioutil.TempDir("", "config-test")
require.Nil(err)
defer os.RemoveAll(tmpDir)
defer os.RemoveAll(tmpDir) // nolint: errcheck

// create root dir
EnsureRoot(tmpDir)
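Since cmn.EnsureDir now reports an error instead of failing silently, the repeated check-and-panic above could be folded into a tiny helper; a sketch under that assumption (the helper itself is not part of the diff):

    func ensureDirs(dirs ...string) {
        for _, dir := range dirs {
            if err := cmn.EnsureDir(dir, 0700); err != nil {
                cmn.PanicSanity(err.Error())
            }
        }
    }

    // usage: ensureDirs(rootDir, rootDir+"/data")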

@@ -1,14 +1,17 @@
package consensus

import (
"context"
"sync"
"testing"
"time"

"github.com/stretchr/testify/require"
crypto "github.com/tendermint/go-crypto"
data "github.com/tendermint/go-wire/data"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/types"
. "github.com/tendermint/tmlibs/common"
"github.com/tendermint/tmlibs/events"
cmn "github.com/tendermint/tmlibs/common"
)

func init() {

@@ -39,7 +42,43 @@ func TestByzantine(t *testing.T) {
switches[i].SetLogger(p2pLogger.With("validator", i))
}

eventChans := make([]chan interface{}, N)
reactors := make([]p2p.Reactor, N)
for i := 0; i < N; i++ {
if i == 0 {
css[i].privValidator = NewByzantinePrivValidator(css[i].privValidator)
// make byzantine
css[i].decideProposal = func(j int) func(int64, int) {
return func(height int64, round int) {
byzantineDecideProposalFunc(t, height, round, css[j], switches[j])
}
}(i)
css[i].doPrevote = func(height int64, round int) {}
}

eventBus := types.NewEventBus()
eventBus.SetLogger(logger.With("module", "events", "validator", i))
err := eventBus.Start()
require.NoError(t, err)
defer eventBus.Stop()

eventChans[i] = make(chan interface{}, 1)
err = eventBus.Subscribe(context.Background(), testSubscriber, types.EventQueryNewBlock, eventChans[i])
require.NoError(t, err)

conR := NewConsensusReactor(css[i], true) // so we dont start the consensus states
conR.SetLogger(logger.With("validator", i))
conR.SetEventBus(eventBus)

var conRI p2p.Reactor // nolint: gotype, gosimple
conRI = conR

if i == 0 {
conRI = NewByzantineReactor(conR)
}
reactors[i] = conRI
}

defer func() {
for _, r := range reactors {
if rr, ok := r.(*ByzantineReactor); ok {

@@ -49,40 +88,6 @@ func TestByzantine(t *testing.T) {
}
}
}()
eventChans := make([]chan interface{}, N)
eventLogger := logger.With("module", "events")
for i := 0; i < N; i++ {
if i == 0 {
css[i].privValidator = NewByzantinePrivValidator(css[i].privValidator.(*types.PrivValidator))
// make byzantine
css[i].decideProposal = func(j int) func(int, int) {
return func(height, round int) {
byzantineDecideProposalFunc(t, height, round, css[j], switches[j])
}
}(i)
css[i].doPrevote = func(height, round int) {}
}

eventSwitch := events.NewEventSwitch()
eventSwitch.SetLogger(eventLogger.With("validator", i))
_, err := eventSwitch.Start()
if err != nil {
t.Fatalf("Failed to start switch: %v", err)
}
eventChans[i] = subscribeToEvent(eventSwitch, "tester", types.EventStringNewBlock(), 1)

conR := NewConsensusReactor(css[i], true) // so we dont start the consensus states
conR.SetLogger(logger.With("validator", i))
conR.SetEventSwitch(eventSwitch)

var conRI p2p.Reactor
conRI = conR

if i == 0 {
conRI = NewByzantineReactor(conR)
}
reactors[i] = conRI
}

p2p.MakeConnectedSwitches(config.P2P, N, func(i int, s *p2p.Switch) *p2p.Switch {
// ignore new switch s, we already made ours

@@ -99,10 +104,10 @@ func TestByzantine(t *testing.T) {
// start the state machines
byzR := reactors[0].(*ByzantineReactor)
s := byzR.reactor.conS.GetState()
byzR.reactor.SwitchToConsensus(s)
byzR.reactor.SwitchToConsensus(s, 0)
for i := 1; i < N; i++ {
cr := reactors[i].(*ConsensusReactor)
cr.SwitchToConsensus(cr.conS.GetState())
cr.SwitchToConsensus(cr.conS.GetState(), 0)
}

// byz proposer sends one block to peers[0]

@@ -147,8 +152,8 @@ func TestByzantine(t *testing.T) {
case <-done:
case <-tick.C:
for i, reactor := range reactors {
t.Log(Fmt("Consensus Reactor %v", i))
t.Log(Fmt("%v", reactor))
t.Log(cmn.Fmt("Consensus Reactor %v", i))
t.Log(cmn.Fmt("%v", reactor))
}
t.Fatalf("Timed out waiting for all validators to commit first block")
}

@@ -157,7 +162,7 @@ func TestByzantine(t *testing.T) {
//-------------------------------
// byzantine consensus functions

func byzantineDecideProposalFunc(t *testing.T, height, round int, cs *ConsensusState, sw *p2p.Switch) {
func byzantineDecideProposalFunc(t *testing.T, height int64, round int, cs *ConsensusState, sw *p2p.Switch) {
// byzantine user should create two proposals and try to split the vote.
// Avoid sending on internalMsgQueue and running consensus state.

@@ -165,13 +170,17 @@ func byzantineDecideProposalFunc(t *testing.T, height, round int, cs *ConsensusS
block1, blockParts1 := cs.createProposalBlock()
polRound, polBlockID := cs.Votes.POLInfo()
proposal1 := types.NewProposal(height, round, blockParts1.Header(), polRound, polBlockID)
cs.privValidator.SignProposal(cs.state.ChainID, proposal1) // byzantine doesnt err
if err := cs.privValidator.SignProposal(cs.state.ChainID, proposal1); err != nil {
t.Error(err)
}

// Create a new proposal block from state/txs from the mempool.
block2, blockParts2 := cs.createProposalBlock()
polRound, polBlockID = cs.Votes.POLInfo()
proposal2 := types.NewProposal(height, round, blockParts2.Header(), polRound, polBlockID)
cs.privValidator.SignProposal(cs.state.ChainID, proposal2) // byzantine doesnt err
if err := cs.privValidator.SignProposal(cs.state.ChainID, proposal2); err != nil {
t.Error(err)
}

block1Hash := block1.Hash()
block2Hash := block2.Hash()

@@ -188,7 +197,7 @@ func byzantineDecideProposalFunc(t *testing.T, height, round int, cs *ConsensusS
}
}

func sendProposalAndParts(height, round int, cs *ConsensusState, peer *p2p.Peer, proposal *types.Proposal, blockHash []byte, parts *types.PartSet) {
func sendProposalAndParts(height int64, round int, cs *ConsensusState, peer p2p.Peer, proposal *types.Proposal, blockHash []byte, parts *types.PartSet) {
// proposal
msg := &ProposalMessage{Proposal: proposal}
peer.Send(DataChannel, struct{ ConsensusMessage }{msg})

@@ -218,7 +227,7 @@ func sendProposalAndParts(height, round int, cs *ConsensusState, peer *p2p.Peer,
// byzantine consensus reactor

type ByzantineReactor struct {
Service
cmn.Service
reactor *ConsensusReactor
}

@@ -231,14 +240,14 @@ func NewByzantineReactor(conR *ConsensusReactor) *ByzantineReactor {

func (br *ByzantineReactor) SetSwitch(s *p2p.Switch) { br.reactor.SetSwitch(s) }
func (br *ByzantineReactor) GetChannels() []*p2p.ChannelDescriptor { return br.reactor.GetChannels() }
func (br *ByzantineReactor) AddPeer(peer *p2p.Peer) {
func (br *ByzantineReactor) AddPeer(peer p2p.Peer) {
if !br.reactor.IsRunning() {
return
}

// Create peerState for peer
peerState := NewPeerState(peer)
peer.Data.Set(types.PeerStateKey, peerState)
peerState := NewPeerState(peer).SetLogger(br.reactor.Logger)
peer.Set(types.PeerStateKey, peerState)

// Send our state to peer.
// If we're fast_syncing, broadcast a RoundStepMessage later upon SwitchToConsensus().

@@ -246,10 +255,10 @@ func (br *ByzantineReactor) AddPeer(peer *p2p.Peer) {
br.reactor.sendNewRoundStepMessages(peer)
}
}
func (br *ByzantineReactor) RemovePeer(peer *p2p.Peer, reason interface{}) {
func (br *ByzantineReactor) RemovePeer(peer p2p.Peer, reason interface{}) {
br.reactor.RemovePeer(peer, reason)
}
func (br *ByzantineReactor) Receive(chID byte, peer *p2p.Peer, msgBytes []byte) {
func (br *ByzantineReactor) Receive(chID byte, peer p2p.Peer, msgBytes []byte) {
br.reactor.Receive(chID, peer, msgBytes)
}

@@ -257,51 +266,42 @@ func (br *ByzantineReactor) Receive(chID byte, peer *p2p.Peer, msgBytes []byte)
// byzantine privValidator

type ByzantinePrivValidator struct {
Address []byte `json:"address"`
types.Signer `json:"-"`
types.Signer

mtx sync.Mutex
pv types.PrivValidator
}

// Return a priv validator that will sign anything
func NewByzantinePrivValidator(pv *types.PrivValidator) *ByzantinePrivValidator {
func NewByzantinePrivValidator(pv types.PrivValidator) *ByzantinePrivValidator {
return &ByzantinePrivValidator{
Address: pv.Address,
Signer: pv.Signer,
Signer: pv.(*types.PrivValidatorFS).Signer,
pv: pv,
}
}

func (privVal *ByzantinePrivValidator) GetAddress() []byte {
return privVal.Address
func (privVal *ByzantinePrivValidator) GetAddress() data.Bytes {
return privVal.pv.GetAddress()
}

func (privVal *ByzantinePrivValidator) SignVote(chainID string, vote *types.Vote) error {
privVal.mtx.Lock()
defer privVal.mtx.Unlock()
func (privVal *ByzantinePrivValidator) GetPubKey() crypto.PubKey {
return privVal.pv.GetPubKey()
}

// Sign
vote.Signature = privVal.Sign(types.SignBytes(chainID, vote))
func (privVal *ByzantinePrivValidator) SignVote(chainID string, vote *types.Vote) (err error) {
vote.Signature, err = privVal.Sign(types.SignBytes(chainID, vote))
return err
}

func (privVal *ByzantinePrivValidator) SignProposal(chainID string, proposal *types.Proposal) (err error) {
proposal.Signature, _ = privVal.Sign(types.SignBytes(chainID, proposal))
return nil
}

func (privVal *ByzantinePrivValidator) SignProposal(chainID string, proposal *types.Proposal) error {
privVal.mtx.Lock()
defer privVal.mtx.Unlock()

// Sign
proposal.Signature = privVal.Sign(types.SignBytes(chainID, proposal))
return nil
}

func (privVal *ByzantinePrivValidator) SignHeartbeat(chainID string, heartbeat *types.Heartbeat) error {
privVal.mtx.Lock()
defer privVal.mtx.Unlock()

// Sign
heartbeat.Signature = privVal.Sign(types.SignBytes(chainID, heartbeat))
func (privVal *ByzantinePrivValidator) SignHeartbeat(chainID string, heartbeat *types.Heartbeat) (err error) {
heartbeat.Signature, _ = privVal.Sign(types.SignBytes(chainID, heartbeat))
return nil
}

func (privVal *ByzantinePrivValidator) String() string {
return Fmt("PrivValidator{%X}", privVal.Address)
return cmn.Fmt("PrivValidator{%X}", privVal.GetAddress())
}
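ByzantinePrivValidator now satisfies types.PrivValidator as an interface rather than wrapping a concrete struct, so any signer with the right method set plugs in. A sketch of a pass-through implementation, with the interface shape inferred only from the methods used above (not quoted from the source):

    type loggingPV struct{ inner types.PrivValidator }

    func (l *loggingPV) GetAddress() data.Bytes   { return l.inner.GetAddress() }
    func (l *loggingPV) GetPubKey() crypto.PubKey { return l.inner.GetPubKey() }
    func (l *loggingPV) SignVote(chainID string, vote *types.Vote) error {
        return l.inner.SignVote(chainID, vote)
    }
    func (l *loggingPV) SignProposal(chainID string, p *types.Proposal) error {
        return l.inner.SignProposal(chainID, p)
    }
    func (l *loggingPV) SignHeartbeat(chainID string, hb *types.Heartbeat) error {
        return l.inner.SignHeartbeat(chainID, hb)
    }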

@@ -1,35 +0,0 @@
package consensus

import (
"github.com/tendermint/tendermint/types"
)

// XXX: WARNING: these functions can halt the consensus as firing events is synchronous.
// Make sure to read off the channels, and in the case of subscribeToEventRespond, to write back on it

// NOTE: if chanCap=0, this blocks on the event being consumed
func subscribeToEvent(evsw types.EventSwitch, receiver, eventID string, chanCap int) chan interface{} {
// listen for event
ch := make(chan interface{}, chanCap)
types.AddListenerForEvent(evsw, receiver, eventID, func(data types.TMEventData) {
ch <- data
})
return ch
}

// NOTE: this blocks on receiving a response after the event is consumed
func subscribeToEventRespond(evsw types.EventSwitch, receiver, eventID string) chan interface{} {
// listen for event
ch := make(chan interface{})
types.AddListenerForEvent(evsw, receiver, eventID, func(data types.TMEventData) {
ch <- data
<-ch
})
return ch
}

func discardFromChan(ch chan interface{}, n int) {
for i := 0; i < n; i++ {
<-ch
}
}
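These event-switch helpers are deleted because the tests move to EventBus subscriptions. The replacement pattern, pieced together from the updated call sites (a sketch, assuming an already-started bus):

    ch := make(chan interface{}, 1)
    err := eventBus.Subscribe(context.Background(), "test-client", types.EventQueryVote, ch)
    if err != nil {
        panic(err)
    }
    v := <-ch
    vote := v.(types.TMEventData).Unwrap().(types.EventDataVote)
    _ = vote.Vote.ValidatorAddress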
@@ -2,6 +2,7 @@ package consensus

import (
"bytes"
"context"
"fmt"
"io/ioutil"
"os"

@@ -15,11 +16,12 @@ import (
abci "github.com/tendermint/abci/types"
bc "github.com/tendermint/tendermint/blockchain"
cfg "github.com/tendermint/tendermint/config"
cstypes "github.com/tendermint/tendermint/consensus/types"
mempl "github.com/tendermint/tendermint/mempool"
"github.com/tendermint/tendermint/p2p"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
. "github.com/tendermint/tmlibs/common"
cmn "github.com/tendermint/tmlibs/common"
dbm "github.com/tendermint/tmlibs/db"
"github.com/tendermint/tmlibs/log"

@@ -29,12 +31,16 @@ import (
"github.com/go-kit/kit/log/term"
)

const (
testSubscriber = "test-client"
)

// genesis, chain_id, priv_val
var config *cfg.Config // NOTE: must be reset for each _test.go file
var ensureTimeout = time.Second * 2

func ensureDir(dir string, mode os.FileMode) {
if err := EnsureDir(dir, mode); err != nil {
if err := cmn.EnsureDir(dir, mode); err != nil {
panic(err)
}
}

@@ -48,14 +54,14 @@ func ResetConfig(name string) *cfg.Config {

type validatorStub struct {
Index int // Validator index. NOTE: we don't assume validator set changes.
Height int
Height int64
Round int
*types.PrivValidator
types.PrivValidator
}

var testMinPower = 10
var testMinPower int64 = 10

func NewValidatorStub(privValidator *types.PrivValidator, valIndex int) *validatorStub {
func NewValidatorStub(privValidator types.PrivValidator, valIndex int) *validatorStub {
return &validatorStub{
Index: valIndex,
PrivValidator: privValidator,

@@ -65,7 +71,7 @@ func NewValidatorStub(privValidator *types.PrivValidator, valIndex int) *validat
func (vs *validatorStub) signVote(voteType byte, hash []byte, header types.PartSetHeader) (*types.Vote, error) {
vote := &types.Vote{
ValidatorIndex: vs.Index,
ValidatorAddress: vs.PrivValidator.Address,
ValidatorAddress: vs.PrivValidator.GetAddress(),
Height: vs.Height,
Round: vs.Round,
Type: voteType,

@@ -107,13 +113,13 @@ func incrementRound(vss ...*validatorStub) {
//-------------------------------------------------------------------------------
// Functions for transitioning the consensus state

func startTestRound(cs *ConsensusState, height, round int) {
func startTestRound(cs *ConsensusState, height int64, round int) {
cs.enterNewRound(height, round)
cs.startRoutines(0)
}

// Create proposal block from cs1 but sign it with vs
func decideProposal(cs1 *ConsensusState, vs *validatorStub, height, round int) (proposal *types.Proposal, block *types.Block) {
func decideProposal(cs1 *ConsensusState, vs *validatorStub, height int64, round int) (proposal *types.Proposal, block *types.Block) {
block, blockParts := cs1.createProposalBlock()
if block == nil { // on error
panic("error creating proposal block")

@@ -142,7 +148,7 @@ func signAddVotes(to *ConsensusState, voteType byte, hash []byte, header types.P
func validatePrevote(t *testing.T, cs *ConsensusState, round int, privVal *validatorStub, blockHash []byte) {
prevotes := cs.Votes.Prevotes(round)
var vote *types.Vote
if vote = prevotes.GetByAddress(privVal.Address); vote == nil {
if vote = prevotes.GetByAddress(privVal.GetAddress()); vote == nil {
panic("Failed to find prevote from validator")
}
if blockHash == nil {

@@ -159,7 +165,7 @@ func validatePrevote(t *testing.T, cs *ConsensusState, round int, privVal *valid
func validateLastPrecommit(t *testing.T, cs *ConsensusState, privVal *validatorStub, blockHash []byte) {
votes := cs.LastCommit
var vote *types.Vote
if vote = votes.GetByAddress(privVal.Address); vote == nil {
if vote = votes.GetByAddress(privVal.GetAddress()); vote == nil {
panic("Failed to find precommit from validator")
}
if !bytes.Equal(vote.BlockID.Hash, blockHash) {

@@ -170,7 +176,7 @@ func validateLastPrecommit(t *testing.T, cs *ConsensusState, privVal *validatorS
func validatePrecommit(t *testing.T, cs *ConsensusState, thisRound, lockRound int, privVal *validatorStub, votedBlockHash, lockedBlockHash []byte) {
precommits := cs.Votes.Precommits(thisRound)
var vote *types.Vote
if vote = precommits.GetByAddress(privVal.Address); vote == nil {
if vote = precommits.GetByAddress(privVal.GetAddress()); vote == nil {
panic("Failed to find precommit from validator")
}

@@ -207,11 +213,14 @@ func validatePrevoteAndPrecommit(t *testing.T, cs *ConsensusState, thisRound, lo

// genesis
func subscribeToVoter(cs *ConsensusState, addr []byte) chan interface{} {
voteCh0 := subscribeToEvent(cs.evsw, "tester", types.EventStringVote(), 1)
voteCh0 := make(chan interface{})
err := cs.eventBus.Subscribe(context.Background(), testSubscriber, types.EventQueryVote, voteCh0)
if err != nil {
panic(fmt.Sprintf("failed to subscribe %s to %v", testSubscriber, types.EventQueryVote))
}
voteCh := make(chan interface{})
go func() {
for {
v := <-voteCh0
for v := range voteCh0 {
vote := v.(types.TMEventData).Unwrap().(types.EventDataVote)
// we only fire for our own votes
if bytes.Equal(addr, vote.Vote.ValidatorAddress) {

@@ -225,13 +234,17 @@ func subscribeToVoter(cs *ConsensusState, addr []byte) chan interface{} {
//-------------------------------------------------------------------------------
// consensus states

func newConsensusState(state *sm.State, pv *types.PrivValidator, app abci.Application) *ConsensusState {
func newConsensusState(state *sm.State, pv types.PrivValidator, app abci.Application) *ConsensusState {
return newConsensusStateWithConfig(config, state, pv, app)
}

func newConsensusStateWithConfig(thisConfig *cfg.Config, state *sm.State, pv *types.PrivValidator, app abci.Application) *ConsensusState {
// Get BlockStore
func newConsensusStateWithConfig(thisConfig *cfg.Config, state *sm.State, pv types.PrivValidator, app abci.Application) *ConsensusState {
blockDB := dbm.NewMemDB()
return newConsensusStateWithConfigAndBlockStore(thisConfig, state, pv, app, blockDB)
}

func newConsensusStateWithConfigAndBlockStore(thisConfig *cfg.Config, state *sm.State, pv types.PrivValidator, app abci.Application, blockDB dbm.DB) *ConsensusState {
// Get BlockStore
blockStore := bc.NewBlockStore(blockDB)

// one for mempool, one for consensus

@@ -251,28 +264,28 @@ func newConsensusStateWithConfig(thisConfig *cfg.Config, state *sm.State, pv *ty
cs.SetLogger(log.TestingLogger())
cs.SetPrivValidator(pv)

evsw := types.NewEventSwitch()
evsw.SetLogger(log.TestingLogger().With("module", "events"))
cs.SetEventSwitch(evsw)
evsw.Start()
eventBus := types.NewEventBus()
eventBus.SetLogger(log.TestingLogger().With("module", "events"))
eventBus.Start()
cs.SetEventBus(eventBus)
return cs
}

func loadPrivValidator(config *cfg.Config) *types.PrivValidator {
func loadPrivValidator(config *cfg.Config) *types.PrivValidatorFS {
privValidatorFile := config.PrivValidatorFile()
ensureDir(path.Dir(privValidatorFile), 0700)
privValidator := types.LoadOrGenPrivValidator(privValidatorFile, log.TestingLogger())
privValidator := types.LoadOrGenPrivValidatorFS(privValidatorFile)
privValidator.Reset()
return privValidator
}

func fixedConsensusStateDummy() *ConsensusState {
func fixedConsensusStateDummy(config *cfg.Config, logger log.Logger) *ConsensusState {
stateDB := dbm.NewMemDB()
state := sm.MakeGenesisStateFromFile(stateDB, config.GenesisFile())
state.SetLogger(log.TestingLogger().With("module", "state"))
state, _ := sm.MakeGenesisStateFromFile(stateDB, config.GenesisFile())
state.SetLogger(logger.With("module", "state"))
privValidator := loadPrivValidator(config)
cs := newConsensusState(state, privValidator, dummy.NewDummyApplication())
cs.SetLogger(log.TestingLogger())
cs.SetLogger(logger)
return cs
}

@@ -296,7 +309,7 @@ func randConsensusState(nValidators int) (*ConsensusState, []*validatorStub) {

//-------------------------------------------------------------------------------

func ensureNoNewStep(stepCh chan interface{}) {
func ensureNoNewStep(stepCh <-chan interface{}) {
timer := time.NewTimer(ensureTimeout)
select {
case <-timer.C:

@@ -306,7 +319,7 @@ func ensureNoNewStep(stepCh chan interface{}) {
}
}

func ensureNewStep(stepCh chan interface{}) {
func ensureNewStep(stepCh <-chan interface{}) {
timer := time.NewTimer(ensureTimeout)
select {
case <-timer.C:

@@ -338,15 +351,19 @@ func randConsensusNet(nValidators int, testName string, tickerFunc func() Timeou
logger := consensusLogger()
for i := 0; i < nValidators; i++ {
db := dbm.NewMemDB() // each state needs its own db
state := sm.MakeGenesisState(db, genDoc)
state, _ := sm.MakeGenesisState(db, genDoc)
state.SetLogger(logger.With("module", "state", "validator", i))
state.Save()
thisConfig := ResetConfig(Fmt("%s_%d", testName, i))
thisConfig := ResetConfig(cmn.Fmt("%s_%d", testName, i))
for _, opt := range configOpts {
opt(thisConfig)
}
ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
css[i] = newConsensusStateWithConfig(thisConfig, state, privVals[i], appFunc())
app := appFunc()
vals := types.TM2PB.Validators(state.Validators)
app.InitChain(abci.RequestInitChain{Validators: vals})

css[i] = newConsensusStateWithConfig(thisConfig, state, privVals[i], app)
css[i].SetLogger(logger.With("validator", i))
css[i].SetTimeoutTicker(tickerFunc())
}

@@ -355,34 +372,38 @@ func randConsensusNet(nValidators int, testName string, tickerFunc func() Timeou

// nPeers = nValidators + nNotValidator
func randConsensusNetWithPeers(nValidators, nPeers int, testName string, tickerFunc func() TimeoutTicker, appFunc func() abci.Application) []*ConsensusState {
genDoc, privVals := randGenesisDoc(nValidators, false, int64(testMinPower))
genDoc, privVals := randGenesisDoc(nValidators, false, testMinPower)
css := make([]*ConsensusState, nPeers)
logger := consensusLogger()
for i := 0; i < nPeers; i++ {
db := dbm.NewMemDB() // each state needs its own db
state := sm.MakeGenesisState(db, genDoc)
state.SetLogger(log.TestingLogger().With("module", "state"))
state, _ := sm.MakeGenesisState(db, genDoc)
state.SetLogger(logger.With("module", "state", "validator", i))
state.Save()
thisConfig := ResetConfig(Fmt("%s_%d", testName, i))
thisConfig := ResetConfig(cmn.Fmt("%s_%d", testName, i))
ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
var privVal *types.PrivValidator
var privVal types.PrivValidator
if i < nValidators {
privVal = privVals[i]
} else {
privVal = types.GenPrivValidator()
_, tempFilePath := Tempfile("priv_validator_")
privVal.SetFile(tempFilePath)
_, tempFilePath := cmn.Tempfile("priv_validator_")
privVal = types.GenPrivValidatorFS(tempFilePath)
}

css[i] = newConsensusStateWithConfig(thisConfig, state, privVal, appFunc())
css[i].SetLogger(log.TestingLogger())
app := appFunc()
vals := types.TM2PB.Validators(state.Validators)
app.InitChain(abci.RequestInitChain{Validators: vals})

css[i] = newConsensusStateWithConfig(thisConfig, state, privVal, app)
css[i].SetLogger(logger.With("validator", i))
css[i].SetTimeoutTicker(tickerFunc())
}
return css
}
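Both helpers now seed the application's validator set before handing it to a consensus state; the new three-line pattern in isolation (all calls taken from the hunks above):

    app := appFunc()                                       // any abci.Application
    vals := types.TM2PB.Validators(state.Validators)       // convert to the ABCI/protobuf form
    app.InitChain(abci.RequestInitChain{Validators: vals}) // inform the app of its genesis validators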

func getSwitchIndex(switches []*p2p.Switch, peer *p2p.Peer) int {
func getSwitchIndex(switches []*p2p.Switch, peer p2p.Peer) int {
for i, s := range switches {
if bytes.Equal(peer.NodeInfo.PubKey.Address(), s.NodeInfo().PubKey.Address()) {
if bytes.Equal(peer.NodeInfo().PubKey.Address(), s.NodeInfo().PubKey.Address()) {
return i
}
}

@@ -393,14 +414,14 @@ func getSwitchIndex(switches []*p2p.Switch, peer *p2p.Peer) int {
//-------------------------------------------------------------------------------
// genesis

func randGenesisDoc(numValidators int, randPower bool, minPower int64) (*types.GenesisDoc, []*types.PrivValidator) {
func randGenesisDoc(numValidators int, randPower bool, minPower int64) (*types.GenesisDoc, []*types.PrivValidatorFS) {
validators := make([]types.GenesisValidator, numValidators)
privValidators := make([]*types.PrivValidator, numValidators)
privValidators := make([]*types.PrivValidatorFS, numValidators)
for i := 0; i < numValidators; i++ {
val, privVal := types.RandValidator(randPower, minPower)
validators[i] = types.GenesisValidator{
PubKey: val.PubKey,
Amount: val.VotingPower,
Power: val.VotingPower,
}
privValidators[i] = privVal
}

@@ -412,10 +433,10 @@ func randGenesisDoc(numValidators int, randPower bool, minPower int64) (*types.G
}, privValidators
}

func randGenesisState(numValidators int, randPower bool, minPower int64) (*sm.State, []*types.PrivValidator) {
func randGenesisState(numValidators int, randPower bool, minPower int64) (*sm.State, []*types.PrivValidatorFS) {
genDoc, privValidators := randGenesisDoc(numValidators, randPower, minPower)
db := dbm.NewMemDB()
s0 := sm.MakeGenesisState(db, genDoc)
s0, _ := sm.MakeGenesisState(db, genDoc)
s0.SetLogger(log.TestingLogger().With("module", "state"))
s0.Save()
return s0, privValidators

@@ -443,12 +464,12 @@ type mockTicker struct {
fired bool
}

func (m *mockTicker) Start() (bool, error) {
return true, nil
func (m *mockTicker) Start() error {
return nil
}

func (m *mockTicker) Stop() bool {
return true
func (m *mockTicker) Stop() error {
return nil
}

func (m *mockTicker) ScheduleTimeout(ti timeoutInfo) {

@@ -457,7 +478,7 @@ func (m *mockTicker) ScheduleTimeout(ti timeoutInfo) {
if m.onlyOnce && m.fired {
return
}
if ti.Step == RoundStepNewHeight {
if ti.Step == cstypes.RoundStepNewHeight {
m.c <- ti
m.fired = true
}
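With the Service-style lifecycle change above, Start and Stop now return a plain error rather than (bool, error) and bool. Any other TimeoutTicker stand-in follows the same shape; a sketch mirroring mockTicker (the noopTicker name is hypothetical, and further interface methods, if any, are omitted):

    type noopTicker struct{ c chan timeoutInfo }

    func (t *noopTicker) Start() error { return nil } // was (bool, error)
    func (t *noopTicker) Stop() error  { return nil } // was bool
    func (t *noopTicker) ScheduleTimeout(ti timeoutInfo) {} // drop all timeouts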

@@ -2,13 +2,17 @@ package consensus

import (
"encoding/binary"
"fmt"
"testing"
"time"

abci "github.com/tendermint/abci/types"
"github.com/tendermint/tendermint/types"
"github.com/stretchr/testify/assert"

. "github.com/tendermint/tmlibs/common"
"github.com/tendermint/abci/example/code"
abci "github.com/tendermint/abci/types"
cmn "github.com/tendermint/tmlibs/common"

"github.com/tendermint/tendermint/types"
)

func init() {

@@ -22,16 +26,15 @@ func TestNoProgressUntilTxsAvailable(t *testing.T) {
cs := newConsensusStateWithConfig(config, state, privVals[0], NewCounterApplication())
cs.mempool.EnableTxsAvailable()
height, round := cs.Height, cs.Round
newBlockCh := subscribeToEvent(cs.evsw, "tester", types.EventStringNewBlock(), 1)
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock)
startTestRound(cs, height, round)

ensureNewStep(newBlockCh) // first block gets committed
ensureNoNewStep(newBlockCh)
deliverTxsRange(cs, 0, 2)
deliverTxsRange(cs, 0, 1)
ensureNewStep(newBlockCh) // commit txs
ensureNewStep(newBlockCh) // commit updated app hash
ensureNoNewStep(newBlockCh)
}

func TestProgressAfterCreateEmptyBlocksInterval(t *testing.T) {

@@ -41,7 +44,7 @@ func TestProgressAfterCreateEmptyBlocksInterval(t *testing.T) {
cs := newConsensusStateWithConfig(config, state, privVals[0], NewCounterApplication())
cs.mempool.EnableTxsAvailable()
height, round := cs.Height, cs.Round
newBlockCh := subscribeToEvent(cs.evsw, "tester", types.EventStringNewBlock(), 1)
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock)
startTestRound(cs, height, round)

ensureNewStep(newBlockCh) // first block gets committed

@@ -56,9 +59,9 @@ func TestProgressInHigherRound(t *testing.T) {
cs := newConsensusStateWithConfig(config, state, privVals[0], NewCounterApplication())
cs.mempool.EnableTxsAvailable()
height, round := cs.Height, cs.Round
newBlockCh := subscribeToEvent(cs.evsw, "tester", types.EventStringNewBlock(), 1)
newRoundCh := subscribeToEvent(cs.evsw, "tester", types.EventStringNewRound(), 1)
timeoutCh := subscribeToEvent(cs.evsw, "tester", types.EventStringTimeoutPropose(), 1)
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock)
newRoundCh := subscribe(cs.eventBus, types.EventQueryNewRound)
timeoutCh := subscribe(cs.eventBus, types.EventQueryTimeoutPropose)
cs.setProposal = func(proposal *types.Proposal) error {
if cs.Height == 2 && cs.Round == 0 {
// dont set the proposal in round 0 so we timeout and

@@ -73,7 +76,7 @@ func TestProgressInHigherRound(t *testing.T) {
ensureNewStep(newRoundCh) // first round at first height
ensureNewStep(newBlockCh) // first block gets committed
ensureNewStep(newRoundCh) // first round at next height
deliverTxsRange(cs, 0, 2) // we deliver txs, but dont set a proposal so we get the next round
deliverTxsRange(cs, 0, 1) // we deliver txs, but dont set a proposal so we get the next round
<-timeoutCh
ensureNewStep(newRoundCh) // wait for the next round
ensureNewStep(newBlockCh) // now we can commit the block

@@ -86,17 +89,16 @@ func deliverTxsRange(cs *ConsensusState, start, end int) {
binary.BigEndian.PutUint64(txBytes, uint64(i))
err := cs.mempool.CheckTx(txBytes, nil)
if err != nil {
panic(Fmt("Error after CheckTx: %v", err))
panic(cmn.Fmt("Error after CheckTx: %v", err))
}
}
}

func TestTxConcurrentWithCommit(t *testing.T) {
state, privVals := randGenesisState(1, false, 10)
cs := newConsensusState(state, privVals[0], NewCounterApplication())
height, round := cs.Height, cs.Round
newBlockCh := subscribeToEvent(cs.evsw, "tester", types.EventStringNewBlock(), 1)
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock)

NTxs := 10000
go deliverTxsRange(cs, 0, NTxs)

@@ -121,41 +123,43 @@ func TestRmBadTx(t *testing.T) {
// increment the counter by 1
txBytes := make([]byte, 8)
binary.BigEndian.PutUint64(txBytes, uint64(0))
app.DeliverTx(txBytes)
app.Commit()

ch := make(chan struct{})
cbCh := make(chan struct{})
resDeliver := app.DeliverTx(txBytes)
assert.False(t, resDeliver.IsErr(), cmn.Fmt("expected no error. got %v", resDeliver))

resCommit := app.Commit()
assert.False(t, resCommit.IsErr(), cmn.Fmt("expected no error. got %v", resCommit))

emptyMempoolCh := make(chan struct{})
checkTxRespCh := make(chan struct{})
go func() {
// Try to send the tx through the mempool.
// CheckTx should not err, but the app should return a bad abci code
// and the tx should get removed from the pool
err := cs.mempool.CheckTx(txBytes, func(r *abci.Response) {
if r.GetCheckTx().Code != abci.CodeType_BadNonce {
if r.GetCheckTx().Code != code.CodeTypeBadNonce {
t.Fatalf("expected checktx to return bad nonce, got %v", r)
}
cbCh <- struct{}{}
checkTxRespCh <- struct{}{}
})
if err != nil {
t.Fatal("Error after CheckTx: %v", err)
t.Fatalf("Error after CheckTx: %v", err)
}

// check for the tx
for {
time.Sleep(time.Second)
txs := cs.mempool.Reap(1)
if len(txs) == 0 {
ch <- struct{}{}
return
emptyMempoolCh <- struct{}{}
}

time.Sleep(10 * time.Millisecond)
}
}()

// Wait until the tx returns
ticker := time.After(time.Second * 5)
select {
case <-cbCh:
case <-checkTxRespCh:
// success
case <-ticker:
t.Fatalf("Timed out waiting for tx to return")

@@ -164,7 +168,7 @@ func TestRmBadTx(t *testing.T) {
// Wait until the tx is removed
ticker = time.After(time.Second * 5)
select {
case <-ch:
case <-emptyMempoolCh:
// success
case <-ticker:
t.Fatalf("Timed out waiting for tx to be removed")

@@ -183,37 +187,45 @@ func NewCounterApplication() *CounterApplication {
return &CounterApplication{}
}

func (app *CounterApplication) Info() abci.ResponseInfo {
return abci.ResponseInfo{Data: Fmt("txs:%v", app.txCount)}
func (app *CounterApplication) Info(req abci.RequestInfo) abci.ResponseInfo {
return abci.ResponseInfo{Data: cmn.Fmt("txs:%v", app.txCount)}
}

func (app *CounterApplication) DeliverTx(tx []byte) abci.Result {
return runTx(tx, &app.txCount)
func (app *CounterApplication) DeliverTx(tx []byte) abci.ResponseDeliverTx {
txValue := txAsUint64(tx)
if txValue != uint64(app.txCount) {
return abci.ResponseDeliverTx{
Code: code.CodeTypeBadNonce,
Log: fmt.Sprintf("Invalid nonce. Expected %v, got %v", app.txCount, txValue)}
}
app.txCount += 1
return abci.ResponseDeliverTx{Code: code.CodeTypeOK}
}

func (app *CounterApplication) CheckTx(tx []byte) abci.Result {
return runTx(tx, &app.mempoolTxCount)
func (app *CounterApplication) CheckTx(tx []byte) abci.ResponseCheckTx {
txValue := txAsUint64(tx)
if txValue != uint64(app.mempoolTxCount) {
return abci.ResponseCheckTx{
Code: code.CodeTypeBadNonce,
Log: fmt.Sprintf("Invalid nonce. Expected %v, got %v", app.mempoolTxCount, txValue)}
}
app.mempoolTxCount += 1
return abci.ResponseCheckTx{Code: code.CodeTypeOK}
}

func runTx(tx []byte, countPtr *int) abci.Result {
count := *countPtr
func txAsUint64(tx []byte) uint64 {
tx8 := make([]byte, 8)
copy(tx8[len(tx8)-len(tx):], tx)
txValue := binary.BigEndian.Uint64(tx8)
if txValue != uint64(count) {
return abci.ErrBadNonce.AppendLog(Fmt("Invalid nonce. Expected %v, got %v", count, txValue))
}
*countPtr += 1
return abci.OK
return binary.BigEndian.Uint64(tx8)
}

func (app *CounterApplication) Commit() abci.Result {
func (app *CounterApplication) Commit() abci.ResponseCommit {
app.mempoolTxCount = app.txCount
if app.txCount == 0 {
return abci.OK
return abci.ResponseCommit{Code: code.CodeTypeOK}
} else {
hash := make([]byte, 8)
binary.BigEndian.PutUint64(hash, uint64(app.txCount))
return abci.NewResultOK(hash, "")
return abci.ResponseCommit{Code: code.CodeTypeOK, Data: hash}
}
}
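The CounterApplication rewrite tracks the ABCI move from the generic abci.Result to one response struct per method. Callers switch from error values to inspecting the typed response; a sketch of the calling side (txBytes is a placeholder):

    res := app.DeliverTx(txBytes)
    if res.IsErr() { // typed responses still expose IsErr()
        panic(cmn.Fmt("DeliverTx failed: %v", res))
    }
    commit := app.Commit()
    if commit.IsErr() {
        panic(cmn.Fmt("Commit failed: %v", commit))
    }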

@@ -2,16 +2,19 @@ package consensus

import (
"bytes"
"errors"
"context"
"fmt"
"reflect"
"sync"
"time"

"github.com/pkg/errors"

wire "github.com/tendermint/go-wire"
cmn "github.com/tendermint/tmlibs/common"
"github.com/tendermint/tmlibs/log"

cstypes "github.com/tendermint/tendermint/consensus/types"
"github.com/tendermint/tendermint/p2p"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"

@@ -33,10 +36,10 @@ type ConsensusReactor struct {
p2p.BaseReactor // BaseService + p2p.Switch

conS *ConsensusState
evsw types.EventSwitch

mtx sync.RWMutex
fastSync bool
eventBus *types.EventBus
}

// NewConsensusReactor returns a new ConsensusReactor with the given consensusState.

@@ -52,18 +55,22 @@ func NewConsensusReactor(consensusState *ConsensusState, fastSync bool) *Consens
// OnStart implements BaseService.
func (conR *ConsensusReactor) OnStart() error {
conR.Logger.Info("ConsensusReactor ", "fastSync", conR.FastSync())
conR.BaseReactor.OnStart()
if err := conR.BaseReactor.OnStart(); err != nil {
return err
}

// callbacks for broadcasting new steps and votes to peers
// upon their respective events (ie. uses evsw)
conR.registerEventCallbacks()
err := conR.startBroadcastRoutine()
if err != nil {
return err
}

if !conR.FastSync() {
_, err := conR.conS.Start()
err := conR.conS.Start()
if err != nil {
return err
}
}

return nil
}

@@ -75,7 +82,7 @@ func (conR *ConsensusReactor) OnStop() {

// SwitchToConsensus switches from fast_sync mode to consensus mode.
// It resets the state, turns off fast_sync, and starts the consensus state-machine
func (conR *ConsensusReactor) SwitchToConsensus(state *sm.State) {
func (conR *ConsensusReactor) SwitchToConsensus(state *sm.State, blocksSynced int) {
conR.Logger.Info("SwitchToConsensus")
conR.conS.reconstructLastCommit(state)
// NOTE: The line below causes broadcastNewRoundStepRoutine() to

@@ -86,31 +93,38 @@ func (conR *ConsensusReactor) SwitchToConsensus(state *sm.State) {
conR.fastSync = false
conR.mtx.Unlock()

conR.conS.Start()
if blocksSynced > 0 {
// dont bother with the WAL if we fast synced
conR.conS.doWALCatchup = false
}
err := conR.conS.Start()
if err != nil {
conR.Logger.Error("Error starting conS", "err", err)
}
}

// GetChannels implements Reactor
func (conR *ConsensusReactor) GetChannels() []*p2p.ChannelDescriptor {
// TODO optimize
return []*p2p.ChannelDescriptor{
&p2p.ChannelDescriptor{
{
ID: StateChannel,
Priority: 5,
SendQueueCapacity: 100,
},
&p2p.ChannelDescriptor{
{
ID: DataChannel, // maybe split between gossiping current block and catchup stuff
Priority: 10, // once we gossip the whole block there's nothing left to send until next height or round
SendQueueCapacity: 100,
RecvBufferCapacity: 50 * 4096,
},
&p2p.ChannelDescriptor{
{
ID: VoteChannel,
Priority: 5,
SendQueueCapacity: 100,
RecvBufferCapacity: 100 * 100,
},
&p2p.ChannelDescriptor{
{
ID: VoteSetBitsChannel,
Priority: 1,
SendQueueCapacity: 2,

@@ -120,14 +134,14 @@ func (conR *ConsensusReactor) GetChannels() []*p2p.ChannelDescriptor {
}

// AddPeer implements Reactor
func (conR *ConsensusReactor) AddPeer(peer *p2p.Peer) {
func (conR *ConsensusReactor) AddPeer(peer p2p.Peer) {
if !conR.IsRunning() {
return
}

// Create peerState for peer
peerState := NewPeerState(peer)
peer.Data.Set(types.PeerStateKey, peerState)
peerState := NewPeerState(peer).SetLogger(conR.Logger)
peer.Set(types.PeerStateKey, peerState)

// Begin routines for this peer.
go conR.gossipDataRoutine(peer, peerState)

@@ -142,12 +156,12 @@ func (conR *ConsensusReactor) AddPeer(peer *p2p.Peer) {
}

// RemovePeer implements Reactor
func (conR *ConsensusReactor) RemovePeer(peer *p2p.Peer, reason interface{}) {
func (conR *ConsensusReactor) RemovePeer(peer p2p.Peer, reason interface{}) {
if !conR.IsRunning() {
return
}
// TODO
//peer.Data.Get(PeerStateKey).(*PeerState).Disconnect()
//peer.Get(PeerStateKey).(*PeerState).Disconnect()
}

// Receive implements Reactor

@@ -156,7 +170,7 @@ func (conR *ConsensusReactor) RemovePeer(peer *p2p.Peer, reason interface{}) {
// Peer state updates can happen in parallel, but processing of
// proposals, block parts, and votes are ordered by the receiveRoutine
// NOTE: blocks on consensus state for proposals, block parts, and votes
func (conR *ConsensusReactor) Receive(chID byte, src *p2p.Peer, msgBytes []byte) {
func (conR *ConsensusReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) {
if !conR.IsRunning() {
conR.Logger.Debug("Receive", "src", src, "chId", chID, "bytes", msgBytes)
return

@@ -171,7 +185,7 @@ func (conR *ConsensusReactor) Receive(chID byte, src *p2p.Peer, msgBytes []byte)
conR.Logger.Debug("Receive", "src", src, "chId", chID, "msg", msg)

// Get peer states
ps := src.Data.Get(types.PeerStateKey).(*PeerState)
ps := src.Get(types.PeerStateKey).(*PeerState)

switch chID {
case StateChannel:

@@ -191,7 +205,7 @@ func (conR *ConsensusReactor) Receive(chID byte, src *p2p.Peer, msgBytes []byte)
return
}
// Peer claims to have a maj23 for some BlockID at H,R,S,
votes.SetPeerMaj23(msg.Round, msg.Type, ps.Peer.Key, msg.BlockID)
votes.SetPeerMaj23(msg.Round, msg.Type, ps.Peer.Key(), msg.BlockID)
// Respond with a VoteSetBitsMessage showing which votes we have.
// (and consequently shows which we don't have)
var ourVotes *cmn.BitArray

@@ -228,12 +242,12 @@ func (conR *ConsensusReactor) Receive(chID byte, src *p2p.Peer, msgBytes []byte)
switch msg := msg.(type) {
case *ProposalMessage:
ps.SetHasProposal(msg.Proposal)
conR.conS.peerMsgQueue <- msgInfo{msg, src.Key}
conR.conS.peerMsgQueue <- msgInfo{msg, src.Key()}
case *ProposalPOLMessage:
ps.ApplyProposalPOLMessage(msg)
case *BlockPartMessage:
ps.SetHasProposalBlockPart(msg.Height, msg.Round, msg.Part.Index)
conR.conS.peerMsgQueue <- msgInfo{msg, src.Key}
conR.conS.peerMsgQueue <- msgInfo{msg, src.Key()}
default:
conR.Logger.Error(cmn.Fmt("Unknown message type %v", reflect.TypeOf(msg)))
}

@@ -253,7 +267,7 @@ func (conR *ConsensusReactor) Receive(chID byte, src *p2p.Peer, msgBytes []byte)
ps.EnsureVoteBitArrays(height-1, lastCommitSize)
ps.SetHasVote(msg.Vote)

cs.peerMsgQueue <- msgInfo{msg, src.Key}
cs.peerMsgQueue <- msgInfo{msg, src.Key()}

default:
// don't punish (leave room for soft upgrades)

@@ -301,10 +315,10 @@ func (conR *ConsensusReactor) Receive(chID byte, src *p2p.Peer, msgBytes []byte)
}
}

// SetEventSwitch implements events.Eventable
func (conR *ConsensusReactor) SetEventSwitch(evsw types.EventSwitch) {
conR.evsw = evsw
conR.conS.SetEventSwitch(evsw)
// SetEventBus sets event bus.
func (conR *ConsensusReactor) SetEventBus(b *types.EventBus) {
conR.eventBus = b
conR.conS.SetEventBus(b)
}

// FastSync returns whether the consensus reactor is in fast-sync mode.

@@ -316,24 +330,60 @@ func (conR *ConsensusReactor) FastSync() bool {

//--------------------------------------

// Listens for new steps and votes,
// broadcasting the result to peers
func (conR *ConsensusReactor) registerEventCallbacks() {
// startBroadcastRoutine subscribes for new round steps, votes and proposal
// heartbeats using the event bus and starts a go routine to broadcasts events
// to peers upon receiving them.
func (conR *ConsensusReactor) startBroadcastRoutine() error {
const subscriber = "consensus-reactor"
ctx := context.Background()

types.AddListenerForEvent(conR.evsw, "conR", types.EventStringNewRoundStep(), func(data types.TMEventData) {
rs := data.Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
conR.broadcastNewRoundStep(rs)
})
// new round steps
stepsCh := make(chan interface{})
err := conR.eventBus.Subscribe(ctx, subscriber, types.EventQueryNewRoundStep, stepsCh)
if err != nil {
return errors.Wrapf(err, "failed to subscribe %s to %s", subscriber, types.EventQueryNewRoundStep)
}

types.AddListenerForEvent(conR.evsw, "conR", types.EventStringVote(), func(data types.TMEventData) {
edv := data.Unwrap().(types.EventDataVote)
conR.broadcastHasVoteMessage(edv.Vote)
})
// votes
votesCh := make(chan interface{})
err = conR.eventBus.Subscribe(ctx, subscriber, types.EventQueryVote, votesCh)
if err != nil {
return errors.Wrapf(err, "failed to subscribe %s to %s", subscriber, types.EventQueryVote)
}

types.AddListenerForEvent(conR.evsw, "conR", types.EventStringProposalHeartbeat(), func(data types.TMEventData) {
heartbeat := data.Unwrap().(types.EventDataProposalHeartbeat)
conR.broadcastProposalHeartbeatMessage(heartbeat)
})
// proposal heartbeats
heartbeatsCh := make(chan interface{})
err = conR.eventBus.Subscribe(ctx, subscriber, types.EventQueryProposalHeartbeat, heartbeatsCh)
if err != nil {
return errors.Wrapf(err, "failed to subscribe %s to %s", subscriber, types.EventQueryProposalHeartbeat)
}

go func() {
for {
select {
case data, ok := <-stepsCh:
if ok { // a receive from a closed channel returns the zero value immediately
edrs := data.(types.TMEventData).Unwrap().(types.EventDataRoundState)
conR.broadcastNewRoundStep(edrs.RoundState.(*cstypes.RoundState))
}
case data, ok := <-votesCh:
if ok {
edv := data.(types.TMEventData).Unwrap().(types.EventDataVote)
conR.broadcastHasVoteMessage(edv.Vote)
}
case data, ok := <-heartbeatsCh:
if ok {
edph := data.(types.TMEventData).Unwrap().(types.EventDataProposalHeartbeat)
conR.broadcastProposalHeartbeatMessage(edph)
}
case <-conR.Quit:
conR.eventBus.UnsubscribeAll(ctx, subscriber)
return
}
}
}()

return nil
}
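The subscribe-then-consume shape used by startBroadcastRoutine generalizes to any component that wants consensus events; a condensed sketch (the subscriber name and quit channel are illustrative):

    stepsCh := make(chan interface{})
    if err := eventBus.Subscribe(ctx, "my-consumer", types.EventQueryNewRoundStep, stepsCh); err != nil {
        return err
    }
    go func() {
        for {
            select {
            case data, ok := <-stepsCh:
                if !ok {
                    return
                }
                edrs := data.(types.TMEventData).Unwrap().(types.EventDataRoundState)
                _ = edrs // react to the new round step
            case <-quit:
                eventBus.UnsubscribeAll(ctx, "my-consumer")
                return
            }
        }
    }()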

 func (conR *ConsensusReactor) broadcastProposalHeartbeatMessage(heartbeat types.EventDataProposalHeartbeat) {
@@ -344,8 +394,7 @@ func (conR *ConsensusReactor) broadcastProposalHeartbeatMessage(heartbeat types.
 	conR.Switch.Broadcast(StateChannel, struct{ ConsensusMessage }{msg})
 }

-func (conR *ConsensusReactor) broadcastNewRoundStep(rs *RoundState) {
-
+func (conR *ConsensusReactor) broadcastNewRoundStep(rs *cstypes.RoundState) {
 	nrsMsg, csMsg := makeRoundStepMessages(rs)
 	if nrsMsg != nil {
 		conR.Switch.Broadcast(StateChannel, struct{ ConsensusMessage }{nrsMsg})
@@ -367,7 +416,7 @@ func (conR *ConsensusReactor) broadcastHasVoteMessage(vote *types.Vote) {
 	/*
 	// TODO: Make this broadcast more selective.
 	for _, peer := range conR.Switch.Peers().List() {
-		ps := peer.Data.Get(PeerStateKey).(*PeerState)
+		ps := peer.Get(PeerStateKey).(*PeerState)
 		prs := ps.GetRoundState()
 		if prs.Height == vote.Height {
 			// TODO: Also filter on round?
@@ -381,7 +430,7 @@ func (conR *ConsensusReactor) broadcastHasVoteMessage(vote *types.Vote) {
 	*/
 }

-func makeRoundStepMessages(rs *RoundState) (nrsMsg *NewRoundStepMessage, csMsg *CommitStepMessage) {
+func makeRoundStepMessages(rs *cstypes.RoundState) (nrsMsg *NewRoundStepMessage, csMsg *CommitStepMessage) {
 	nrsMsg = &NewRoundStepMessage{
 		Height: rs.Height,
 		Round:  rs.Round,
@@ -389,7 +438,7 @@ func makeRoundStepMessages(rs *RoundState) (nrsMsg *NewRoundStepMessage, csMsg *
 		SecondsSinceStartTime: int(time.Since(rs.StartTime).Seconds()),
 		LastCommitRound:       rs.LastCommit.Round(),
 	}
-	if rs.Step == RoundStepCommit {
+	if rs.Step == cstypes.RoundStepCommit {
 		csMsg = &CommitStepMessage{
 			Height:           rs.Height,
 			BlockPartsHeader: rs.ProposalBlockParts.Header(),
@@ -399,7 +448,7 @@ func makeRoundStepMessages(rs *RoundState) (nrsMsg *NewRoundStepMessage, csMsg *
 	return
 }

-func (conR *ConsensusReactor) sendNewRoundStepMessages(peer *p2p.Peer) {
+func (conR *ConsensusReactor) sendNewRoundStepMessages(peer p2p.Peer) {
 	rs := conR.conS.GetRoundState()
 	nrsMsg, csMsg := makeRoundStepMessages(rs)
 	if nrsMsg != nil {
@@ -410,7 +459,7 @@ func (conR *ConsensusReactor) sendNewRoundStepMessages(peer *p2p.Peer) {
 	}
 }

-func (conR *ConsensusReactor) gossipDataRoutine(peer *p2p.Peer, ps *PeerState) {
+func (conR *ConsensusReactor) gossipDataRoutine(peer p2p.Peer, ps *PeerState) {
 	logger := conR.Logger.With("peer", peer)

 OUTER_LOOP:
@@ -443,6 +492,18 @@ OUTER_LOOP:
 		// If the peer is on a previous height, help catch up.
 		if (0 < prs.Height) && (prs.Height < rs.Height) {
 			heightLogger := logger.With("height", prs.Height)
+
+			// if we never received the commit message from the peer, the block parts won't be initialized
+			if prs.ProposalBlockParts == nil {
+				blockMeta := conR.conS.blockStore.LoadBlockMeta(prs.Height)
+				if blockMeta == nil {
+					cmn.PanicCrisis(cmn.Fmt("Failed to load block %d when blockStore is at %d",
+						prs.Height, conR.conS.blockStore.Height()))
+				}
+				ps.InitProposalBlockParts(blockMeta.BlockID.PartsHeader)
+				// continue the loop since prs is a copy and not affected by this initialization
+				continue OUTER_LOOP
+			}
 			conR.gossipDataForCatchup(heightLogger, rs, prs, ps, peer)
 			continue OUTER_LOOP
 		}
@@ -491,8 +552,8 @@ OUTER_LOOP:
 		}
 	}

-func (conR *ConsensusReactor) gossipDataForCatchup(logger log.Logger, rs *RoundState,
-	prs *PeerRoundState, ps *PeerState, peer *p2p.Peer) {
+func (conR *ConsensusReactor) gossipDataForCatchup(logger log.Logger, rs *cstypes.RoundState,
+	prs *cstypes.PeerRoundState, ps *PeerState, peer p2p.Peer) {

 	if index, ok := prs.ProposalBlockParts.Not().PickRandom(); ok {
 		// Ensure that the peer's PartSetHeader is correct
@@ -522,9 +583,11 @@ func (conR *ConsensusReactor) gossipDataForCatchup(logger log.Logger, rs *RoundS
 			Round:  prs.Round, // Not our height, so it doesn't matter.
 			Part:   part,
 		}
-		logger.Debug("Sending block part for catchup", "round", prs.Round)
+		logger.Debug("Sending block part for catchup", "round", prs.Round, "index", index)
 		if peer.Send(DataChannel, struct{ ConsensusMessage }{msg}) {
 			ps.SetHasProposalBlockPart(prs.Height, prs.Round, index)
+		} else {
+			logger.Debug("Sending block part for catchup failed")
 		}
 		return
 	} else {
@@ -534,7 +597,7 @@ func (conR *ConsensusReactor) gossipDataForCatchup(logger log.Logger, rs *RoundS
 	}
 }
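
The catch-up path above picks a random block part that the peer is still missing via prs.ProposalBlockParts.Not().PickRandom(). A small, self-contained sketch of that selection idea, using a plain boolean slice as a stand-in for cmn.BitArray (the helper name is hypothetical):

package main

import (
	"fmt"
	"math/rand"
)

// pickRandomMissing returns a random index whose bit is unset in peerHas,
// i.e. a block part the peer still needs. This mirrors the intent of
// ProposalBlockParts.Not().PickRandom() in the catch-up gossip above.
func pickRandomMissing(peerHas []bool) (int, bool) {
	missing := []int{}
	for i, has := range peerHas {
		if !has {
			missing = append(missing, i)
		}
	}
	if len(missing) == 0 {
		return 0, false
	}
	return missing[rand.Intn(len(missing))], true
}

func main() {
	peerHas := []bool{true, false, true, false} // parts 1 and 3 are missing
	if idx, ok := pickRandomMissing(peerHas); ok {
		fmt.Println("send part", idx)
	}
}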

-func (conR *ConsensusReactor) gossipVotesRoutine(peer *p2p.Peer, ps *PeerState) {
+func (conR *ConsensusReactor) gossipVotesRoutine(peer p2p.Peer, ps *PeerState) {
 	logger := conR.Logger.With("peer", peer)

 	// Simple hack to throttle logs upon sleep.
@@ -606,24 +669,24 @@ OUTER_LOOP:
 	}
 }

-func (conR *ConsensusReactor) gossipVotesForHeight(logger log.Logger, rs *RoundState, prs *PeerRoundState, ps *PeerState) bool {
+func (conR *ConsensusReactor) gossipVotesForHeight(logger log.Logger, rs *cstypes.RoundState, prs *cstypes.PeerRoundState, ps *PeerState) bool {

 	// If there are lastCommits to send...
-	if prs.Step == RoundStepNewHeight {
+	if prs.Step == cstypes.RoundStepNewHeight {
 		if ps.PickSendVote(rs.LastCommit) {
 			logger.Debug("Picked rs.LastCommit to send")
 			return true
 		}
 	}
 	// If there are prevotes to send...
-	if prs.Step <= RoundStepPrevote && prs.Round != -1 && prs.Round <= rs.Round {
+	if prs.Step <= cstypes.RoundStepPrevote && prs.Round != -1 && prs.Round <= rs.Round {
 		if ps.PickSendVote(rs.Votes.Prevotes(prs.Round)) {
 			logger.Debug("Picked rs.Prevotes(prs.Round) to send", "round", prs.Round)
 			return true
 		}
 	}
 	// If there are precommits to send...
-	if prs.Step <= RoundStepPrecommit && prs.Round != -1 && prs.Round <= rs.Round {
+	if prs.Step <= cstypes.RoundStepPrecommit && prs.Round != -1 && prs.Round <= rs.Round {
 		if ps.PickSendVote(rs.Votes.Precommits(prs.Round)) {
 			logger.Debug("Picked rs.Precommits(prs.Round) to send", "round", prs.Round)
 			return true
@@ -644,7 +707,7 @@ func (conR *ConsensusReactor) gossipVotesForHeight(logger log.Logger, rs *RoundS

 // NOTE: `queryMaj23Routine` has a simple crude design since it only comes
 // into play for liveness when there's a signature DDoS attack happening.
-func (conR *ConsensusReactor) queryMaj23Routine(peer *p2p.Peer, ps *PeerState) {
+func (conR *ConsensusReactor) queryMaj23Routine(peer p2p.Peer, ps *PeerState) {
 	logger := conR.Logger.With("peer", peer)

 OUTER_LOOP:
@@ -743,7 +806,7 @@ func (conR *ConsensusReactor) StringIndented(indent string) string {
 	s := "ConsensusReactor{\n"
 	s += indent + "  " + conR.conS.StringIndented(indent+"  ") + "\n"
 	for _, peer := range conR.Switch.Peers().List() {
-		ps := peer.Data.Get(types.PeerStateKey).(*PeerState)
+		ps := peer.Get(types.PeerStateKey).(*PeerState)
 		s += indent + "  " + ps.StringIndented(indent+"  ") + "\n"
 	}
 	s += indent + "}"
@@ -752,54 +815,6 @@ func (conR *ConsensusReactor) StringIndented(indent string) string {

 //-----------------------------------------------------------------------------

-// PeerRoundState contains the known state of a peer.
-// NOTE: Read-only when returned by PeerState.GetRoundState().
-type PeerRoundState struct {
-	Height                   int                 // Height peer is at
-	Round                    int                 // Round peer is at, -1 if unknown.
-	Step                     RoundStepType       // Step peer is at
-	StartTime                time.Time           // Estimated start of round 0 at this height
-	Proposal                 bool                // True if peer has proposal for this round
-	ProposalBlockPartsHeader types.PartSetHeader //
-	ProposalBlockParts       *cmn.BitArray       //
-	ProposalPOLRound         int                 // Proposal's POL round. -1 if none.
-	ProposalPOL              *cmn.BitArray       // nil until ProposalPOLMessage received.
-	Prevotes                 *cmn.BitArray       // All votes peer has for this round
-	Precommits               *cmn.BitArray       // All precommits peer has for this round
-	LastCommitRound          int                 // Round of commit for last height. -1 if none.
-	LastCommit               *cmn.BitArray       // All commit precommits of commit for last height.
-	CatchupCommitRound       int                 // Round that we have commit for. Not necessarily unique. -1 if none.
-	CatchupCommit            *cmn.BitArray       // All commit precommits peer has for this height & CatchupCommitRound
-}
-
-// String returns a string representation of the PeerRoundState
-func (prs PeerRoundState) String() string {
-	return prs.StringIndented("")
-}
-
-// StringIndented returns a string representation of the PeerRoundState
-func (prs PeerRoundState) StringIndented(indent string) string {
-	return fmt.Sprintf(`PeerRoundState{
-%s  %v/%v/%v @%v
-%s  Proposal %v -> %v
-%s  POL      %v (round %v)
-%s  Prevotes   %v
-%s  Precommits %v
-%s  LastCommit %v (round %v)
-%s  Catchup    %v (round %v)
-%s}`,
-		indent, prs.Height, prs.Round, prs.Step, prs.StartTime,
-		indent, prs.ProposalBlockPartsHeader, prs.ProposalBlockParts,
-		indent, prs.ProposalPOL, prs.ProposalPOLRound,
-		indent, prs.Prevotes,
-		indent, prs.Precommits,
-		indent, prs.LastCommit, prs.LastCommitRound,
-		indent, prs.CatchupCommit, prs.CatchupCommitRound,
-		indent)
-}
-
 //-----------------------------------------------------------------------------

 var (
 	ErrPeerStateHeightRegression = errors.New("Error peer state height regression")
 	ErrPeerStateInvalidStartTime = errors.New("Error peer state invalid startTime")
@@ -808,17 +823,19 @@ var (
 // PeerState contains the known state of a peer, including its connection
 // and threadsafe access to its PeerRoundState.
 type PeerState struct {
-	Peer *p2p.Peer
+	Peer   p2p.Peer
+	logger log.Logger

 	mtx sync.Mutex
-	PeerRoundState
+	cstypes.PeerRoundState
 }

 // NewPeerState returns a new PeerState for the given Peer
-func NewPeerState(peer *p2p.Peer) *PeerState {
+func NewPeerState(peer p2p.Peer) *PeerState {
 	return &PeerState{
-		Peer: peer,
-		PeerRoundState: PeerRoundState{
+		Peer:   peer,
+		logger: log.NewNopLogger(),
+		PeerRoundState: cstypes.PeerRoundState{
 			Round:            -1,
 			ProposalPOLRound: -1,
 			LastCommitRound:  -1,
@@ -827,9 +844,14 @@ func NewPeerState(peer *p2p.Peer) *PeerState {
 	}
 }

+func (ps *PeerState) SetLogger(logger log.Logger) *PeerState {
+	ps.logger = logger
+	return ps
+}
+
 // GetRoundState returns an atomic snapshot of the PeerRoundState.
 // There's no point in mutating it since it won't change PeerState.
-func (ps *PeerState) GetRoundState() *PeerRoundState {
+func (ps *PeerState) GetRoundState() *cstypes.PeerRoundState {
 	ps.mtx.Lock()
 	defer ps.mtx.Unlock()
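
GetRoundState hands out a snapshot of the embedded PeerRoundState taken under the mutex, so callers can read it freely; mutating the result never touches the live PeerState. A minimal sketch of that copy-under-lock accessor (the real method returns a pointer to a fresh copy, this sketch returns the copy by value, same idea):

package main

import (
	"fmt"
	"sync"
)

type RoundState struct {
	Height int64
	Round  int
}

type PeerState struct {
	mtx sync.Mutex
	rs  RoundState
}

// Snapshot copies the state while holding the lock; the caller gets a
// value it can read, or even modify, without racing with writers.
func (ps *PeerState) Snapshot() RoundState {
	ps.mtx.Lock()
	defer ps.mtx.Unlock()
	return ps.rs // struct copy
}

func main() {
	ps := &PeerState{rs: RoundState{Height: 10, Round: 0}}
	snap := ps.Snapshot()
	snap.Round = 99                  // does not touch ps.rs
	fmt.Println(ps.Snapshot().Round) // still 0
}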

@@ -839,7 +861,7 @@ func (ps *PeerState) GetRoundState() *PeerRoundState {

 // GetHeight returns an atomic snapshot of the PeerRoundState's height
 // used by the mempool to ensure peers are caught up before broadcasting new txs
-func (ps *PeerState) GetHeight() int {
+func (ps *PeerState) GetHeight() int64 {
 	ps.mtx.Lock()
 	defer ps.mtx.Unlock()
 	return ps.PeerRoundState.Height
@@ -864,8 +886,21 @@ func (ps *PeerState) SetHasProposal(proposal *types.Proposal) {
 	ps.ProposalPOL = nil // Nil until ProposalPOLMessage received.
 }

+// InitProposalBlockParts initializes the peer's proposal block parts header and bit array.
+func (ps *PeerState) InitProposalBlockParts(partsHeader types.PartSetHeader) {
+	ps.mtx.Lock()
+	defer ps.mtx.Unlock()
+
+	if ps.ProposalBlockParts != nil {
+		return
+	}
+
+	ps.ProposalBlockPartsHeader = partsHeader
+	ps.ProposalBlockParts = cmn.NewBitArray(partsHeader.Total)
+}
+
 // SetHasProposalBlockPart sets the given block part index as known for the peer.
-func (ps *PeerState) SetHasProposalBlockPart(height int, round int, index int) {
+func (ps *PeerState) SetHasProposalBlockPart(height int64, round int, index int) {
 	ps.mtx.Lock()
 	defer ps.mtx.Unlock()
@@ -916,7 +951,7 @@ func (ps *PeerState) PickVoteToSend(votes types.VoteSetReader) (vote *types.Vote
 	return nil, false
 }

-func (ps *PeerState) getVoteBitArray(height, round int, type_ byte) *cmn.BitArray {
+func (ps *PeerState) getVoteBitArray(height int64, round int, type_ byte) *cmn.BitArray {
 	if !types.IsVoteTypeValid(type_) {
 		return nil
 	}
@@ -963,7 +998,7 @@ func (ps *PeerState) getVoteBitArray(height, round int, type_ byte) *cmn.BitArra
 }

 // 'round': A round for which we have a +2/3 commit.
-func (ps *PeerState) ensureCatchupCommitRound(height, round int, numValidators int) {
+func (ps *PeerState) ensureCatchupCommitRound(height int64, round int, numValidators int) {
 	if ps.Height != height {
 		return
 	}
@@ -989,13 +1024,13 @@ func (ps *PeerState) ensureCatchupCommitRound(height, round int, numValidators i
 // what votes this peer has received.
 // NOTE: It's important to make sure that numValidators actually matches
 // what the node sees as the number of validators for height.
-func (ps *PeerState) EnsureVoteBitArrays(height int, numValidators int) {
+func (ps *PeerState) EnsureVoteBitArrays(height int64, numValidators int) {
 	ps.mtx.Lock()
 	defer ps.mtx.Unlock()
 	ps.ensureVoteBitArrays(height, numValidators)
 }

-func (ps *PeerState) ensureVoteBitArrays(height int, numValidators int) {
+func (ps *PeerState) ensureVoteBitArrays(height int64, numValidators int) {
 	if ps.Height == height {
 		if ps.Prevotes == nil {
 			ps.Prevotes = cmn.NewBitArray(numValidators)
@@ -1024,9 +1059,9 @@ func (ps *PeerState) SetHasVote(vote *types.Vote) {
 	ps.setHasVote(vote.Height, vote.Round, vote.Type, vote.ValidatorIndex)
 }

-func (ps *PeerState) setHasVote(height int, round int, type_ byte, index int) {
-	logger := ps.Peer.Logger.With("peerRound", ps.Round, "height", height, "round", round)
-	logger.Debug("setHasVote(LastCommit)", "lastCommit", ps.LastCommit, "index", index)
+func (ps *PeerState) setHasVote(height int64, round int, type_ byte, index int) {
+	logger := ps.logger.With("peerH/R", cmn.Fmt("%d/%d", ps.Height, ps.Round), "H/R", cmn.Fmt("%d/%d", height, round))
+	logger.Debug("setHasVote", "type", type_, "index", index)

 	// NOTE: some may be nil BitArrays -> no side effects.
 	psVotes := ps.getVoteBitArray(height, round, type_)
@@ -1163,7 +1198,7 @@ func (ps *PeerState) StringIndented(indent string) string {
 %s  Key %v
 %s  PRS %v
 %s}`,
-		indent, ps.Peer.Key,
+		indent, ps.Peer.Key(),
 		indent, ps.PeerRoundState.StringIndented(indent+"  "),
 		indent)
 }
@@ -1218,9 +1253,9 @@ func DecodeMessage(bz []byte) (msgType byte, msg ConsensusMessage, err error) {
 // NewRoundStepMessage is sent for every step taken in the ConsensusState.
 // For every height/round/step transition
 type NewRoundStepMessage struct {
-	Height                int
+	Height                int64
 	Round                 int
-	Step                  RoundStepType
+	Step                  cstypes.RoundStepType
 	SecondsSinceStartTime int
 	LastCommitRound       int
 }
@@ -1235,7 +1270,7 @@ func (m *NewRoundStepMessage) String() string {

 // CommitStepMessage is sent when a block is committed.
 type CommitStepMessage struct {
-	Height           int
+	Height           int64
 	BlockPartsHeader types.PartSetHeader
 	BlockParts       *cmn.BitArray
 }
@@ -1261,7 +1296,7 @@ func (m *ProposalMessage) String() string {

 // ProposalPOLMessage is sent when a previous proposal is re-proposed.
 type ProposalPOLMessage struct {
-	Height           int
+	Height           int64
 	ProposalPOLRound int
 	ProposalPOL      *cmn.BitArray
 }
@@ -1275,7 +1310,7 @@ func (m *ProposalPOLMessage) String() string {

 // BlockPartMessage is sent when gossipping a piece of the proposed block.
 type BlockPartMessage struct {
-	Height int
+	Height int64
 	Round  int
 	Part   *types.Part
 }
@@ -1301,7 +1336,7 @@ func (m *VoteMessage) String() string {

 // HasVoteMessage is sent to indicate that a particular vote has been received.
 type HasVoteMessage struct {
-	Height int
+	Height int64
 	Round  int
 	Type   byte
 	Index  int
@@ -1316,7 +1351,7 @@ func (m *HasVoteMessage) String() string {

 // VoteSetMaj23Message is sent to indicate that a given BlockID has seen +2/3 votes.
 type VoteSetMaj23Message struct {
-	Height  int
+	Height  int64
 	Round   int
 	Type    byte
 	BlockID types.BlockID
@@ -1331,7 +1366,7 @@ func (m *VoteSetMaj23Message) String() string {

 // VoteSetBitsMessage is sent to communicate the bit-array of votes seen for the BlockID.
 type VoteSetBitsMessage struct {
-	Height  int
+	Height  int64
 	Round   int
 	Type    byte
 	BlockID types.BlockID
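
Every message struct in this series moves Height from int to int64. The diff does not state the motivation, but a plausible reading is that Go's int is platform-dependent (32 bits on 32-bit targets), while a protocol-level block height should have one fixed width everywhere. A tiny check of that platform difference:

package main

import (
	"fmt"
	"strconv"
)

func main() {
	// strconv.IntSize is 32 on 32-bit platforms and 64 on 64-bit ones,
	// which is why a wire-level height is better declared as an explicit int64.
	fmt.Println("int is", strconv.IntSize, "bits; int64 is always 64")
	var h int64 = 1 << 40 // valid as int64 on every platform
	fmt.Println(h)
}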

@@ -1,17 +1,21 @@
 package consensus

 import (
+	"context"
 	"fmt"
 	"os"
 	"runtime/pprof"
 	"sync"
 	"testing"
 	"time"

 	"github.com/tendermint/abci/example/dummy"
-	"github.com/tendermint/tmlibs/events"

 	cfg "github.com/tendermint/tendermint/config"
 	"github.com/tendermint/tendermint/p2p"
 	"github.com/tendermint/tendermint/types"
+
+	"github.com/stretchr/testify/require"
 )

 func init() {
@@ -21,27 +25,30 @@ func init() {
 //----------------------------------------------
 // in-process testnets

-func startConsensusNet(t *testing.T, css []*ConsensusState, N int, subscribeEventRespond bool) ([]*ConsensusReactor, []chan interface{}) {
+func startConsensusNet(t *testing.T, css []*ConsensusState, N int) ([]*ConsensusReactor, []chan interface{}, []*types.EventBus) {
 	reactors := make([]*ConsensusReactor, N)
 	eventChans := make([]chan interface{}, N)
+	eventBuses := make([]*types.EventBus, N)
 	logger := consensusLogger()
 	for i := 0; i < N; i++ {
+		/*thisLogger, err := tmflags.ParseLogLevel("consensus:info,*:error", logger, "info")
+		if err != nil { t.Fatal(err)}*/
+		thisLogger := logger

 		reactors[i] = NewConsensusReactor(css[i], true) // so we don't start the consensus states
-		reactors[i].SetLogger(logger.With("validator", i))
+		reactors[i].conS.SetLogger(thisLogger.With("validator", i))
+		reactors[i].SetLogger(thisLogger.With("validator", i))

-		eventSwitch := events.NewEventSwitch()
-		eventSwitch.SetLogger(logger.With("module", "events", "validator", i))
-		_, err := eventSwitch.Start()
-		if err != nil {
-			t.Fatalf("Failed to start switch: %v", err)
-		}
+		eventBuses[i] = types.NewEventBus()
+		eventBuses[i].SetLogger(thisLogger.With("module", "events", "validator", i))
+		err := eventBuses[i].Start()
+		require.NoError(t, err)

-		reactors[i].SetEventSwitch(eventSwitch)
-		if subscribeEventRespond {
-			eventChans[i] = subscribeToEventRespond(eventSwitch, "tester", types.EventStringNewBlock())
-		} else {
-			eventChans[i] = subscribeToEvent(eventSwitch, "tester", types.EventStringNewBlock(), 1)
-		}
+		reactors[i].SetEventBus(eventBuses[i])
+
+		eventChans[i] = make(chan interface{}, 1)
+		err = eventBuses[i].Subscribe(context.Background(), testSubscriber, types.EventQueryNewBlock, eventChans[i])
+		require.NoError(t, err)
 	}
 	// make connected switches and start all reactors
 	p2p.MakeConnectedSwitches(config.P2P, N, func(i int, s *p2p.Switch) *p2p.Switch {
@@ -52,25 +59,29 @@ func startConsensusNet(t *testing.T, css []*ConsensusState, N int, subscribeEven
 	// now that everyone is connected, start the state machines
 	// If we started the state machines before everyone was connected,
 	// we'd block when the cs fires NewBlockEvent and the peers are trying to start their reactors
+	// TODO: is this still true with new pubsub?
 	for i := 0; i < N; i++ {
 		s := reactors[i].conS.GetState()
-		reactors[i].SwitchToConsensus(s)
+		reactors[i].SwitchToConsensus(s, 0)
 	}
-	return reactors, eventChans
+	return reactors, eventChans, eventBuses
 }

-func stopConsensusNet(reactors []*ConsensusReactor) {
+func stopConsensusNet(reactors []*ConsensusReactor, eventBuses []*types.EventBus) {
 	for _, r := range reactors {
 		r.Switch.Stop()
 	}
+	for _, b := range eventBuses {
+		b.Stop()
+	}
 }
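
The start/stop helpers pair resource acquisition with a single deferred teardown, stopping the network switches first and the event buses second, presumably so that nothing publishes into an already-stopped bus. A generic sketch of that harness shape (types here are stand-ins, not the Tendermint ones):

package main

import "fmt"

type stopper interface{ Stop() }

type reactor struct{ id int }

func (r *reactor) Stop() { fmt.Println("reactor stopped:", r.id) }

type bus struct{ id int }

func (b *bus) Stop() { fmt.Println("bus stopped:", b.id) }

// stopAll mirrors stopConsensusNet: network components go down before
// the event buses they publish to.
func stopAll(reactors, buses []stopper) {
	for _, r := range reactors {
		r.Stop()
	}
	for _, b := range buses {
		b.Stop()
	}
}

func main() {
	rs := []stopper{&reactor{0}, &reactor{1}}
	bs := []stopper{&bus{0}, &bus{1}}
	defer stopAll(rs, bs)
	fmt.Println("test body runs here")
}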

 // Ensure a testnet makes blocks
 func TestReactor(t *testing.T) {
 	N := 4
 	css := randConsensusNet(N, "consensus_reactor_test", newMockTickerFunc(true), newCounter)
-	reactors, eventChans := startConsensusNet(t, css, N, false)
-	defer stopConsensusNet(reactors)
+	reactors, eventChans, eventBuses := startConsensusNet(t, css, N)
+	defer stopConsensusNet(reactors, eventBuses)
 	// wait till everyone makes the first new block
 	timeoutWaitGroup(t, N, func(wg *sync.WaitGroup, j int) {
 		<-eventChans[j]
@@ -85,11 +96,14 @@ func TestReactorProposalHeartbeats(t *testing.T) {
 		func(c *cfg.Config) {
 			c.Consensus.CreateEmptyBlocks = false
 		})
-	reactors, eventChans := startConsensusNet(t, css, N, false)
-	defer stopConsensusNet(reactors)
+	reactors, eventChans, eventBuses := startConsensusNet(t, css, N)
+	defer stopConsensusNet(reactors, eventBuses)
 	heartbeatChans := make([]chan interface{}, N)
+	var err error
 	for i := 0; i < N; i++ {
-		heartbeatChans[i] = subscribeToEvent(css[i].evsw, "tester", types.EventStringProposalHeartbeat(), 1)
+		heartbeatChans[i] = make(chan interface{}, 1)
+		err = eventBuses[i].Subscribe(context.Background(), testSubscriber, types.EventQueryProposalHeartbeat, heartbeatChans[i])
+		require.NoError(t, err)
 	}
 	// wait till everyone sends a proposal heartbeat
 	timeoutWaitGroup(t, N, func(wg *sync.WaitGroup, j int) {
@@ -98,7 +112,9 @@ func TestReactorProposalHeartbeats(t *testing.T) {
 	}, css)

 	// send a tx
-	css[3].mempool.CheckTx([]byte{1, 2, 3}, nil)
+	if err := css[3].mempool.CheckTx([]byte{1, 2, 3}, nil); err != nil {
+		//t.Fatal(err)
+	}

 	// wait till everyone makes the first new block
 	timeoutWaitGroup(t, N, func(wg *sync.WaitGroup, j int) {
@@ -113,8 +129,8 @@ func TestReactorProposalHeartbeats(t *testing.T) {
 func TestVotingPowerChange(t *testing.T) {
 	nVals := 4
 	css := randConsensusNet(nVals, "consensus_voting_power_changes_test", newMockTickerFunc(true), newPersistentDummy)
-	reactors, eventChans := startConsensusNet(t, css, nVals, true)
-	defer stopConsensusNet(reactors)
+	reactors, eventChans, eventBuses := startConsensusNet(t, css, nVals)
+	defer stopConsensusNet(reactors, eventBuses)

 	// map of active validators
 	activeVals := make(map[string]struct{})
@@ -125,14 +141,13 @@ func TestVotingPowerChange(t *testing.T) {
 	// wait till everyone makes block 1
 	timeoutWaitGroup(t, nVals, func(wg *sync.WaitGroup, j int) {
 		<-eventChans[j]
-		eventChans[j] <- struct{}{}
 		wg.Done()
 	}, css)

 	//---------------------------------------------------------------------------
 	t.Log("---------------------------- Testing changing the voting power of one validator a few times")

-	val1PubKey := css[0].privValidator.(*types.PrivValidator).PubKey
+	val1PubKey := css[0].privValidator.GetPubKey()
 	updateValidatorTx := dummy.MakeValSetChangeTx(val1PubKey.Bytes(), 25)
 	previousTotalVotingPower := css[0].GetRoundState().LastValidators.TotalVotingPower()
@@ -174,8 +189,9 @@ func TestValidatorSetChanges(t *testing.T) {
 	nPeers := 7
 	nVals := 4
 	css := randConsensusNetWithPeers(nVals, nPeers, "consensus_val_set_changes_test", newMockTickerFunc(true), newPersistentDummy)
-	reactors, eventChans := startConsensusNet(t, css, nPeers, true)
-	defer stopConsensusNet(reactors)
+
+	reactors, eventChans, eventBuses := startConsensusNet(t, css, nPeers)
+	defer stopConsensusNet(reactors, eventBuses)

 	// map of active validators
 	activeVals := make(map[string]struct{})
@@ -186,15 +202,14 @@ func TestValidatorSetChanges(t *testing.T) {
 	// wait till everyone makes block 1
 	timeoutWaitGroup(t, nPeers, func(wg *sync.WaitGroup, j int) {
 		<-eventChans[j]
-		eventChans[j] <- struct{}{}
 		wg.Done()
 	}, css)

 	//---------------------------------------------------------------------------
 	t.Log("---------------------------- Testing adding one validator")

-	newValidatorPubKey1 := css[nVals].privValidator.(*types.PrivValidator).PubKey
-	newValidatorTx1 := dummy.MakeValSetChangeTx(newValidatorPubKey1.Bytes(), uint64(testMinPower))
+	newValidatorPubKey1 := css[nVals].privValidator.GetPubKey()
+	newValidatorTx1 := dummy.MakeValSetChangeTx(newValidatorPubKey1.Bytes(), testMinPower)

 	// wait till everyone makes block 2
 	// ensure the commit includes all validators
@@ -214,19 +229,19 @@ func TestValidatorSetChanges(t *testing.T) {
 	// wait till everyone makes block 5
 	// it includes the commit for block 4, which should have the updated validator set
-	waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
+	waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, eventChans, css)

 	//---------------------------------------------------------------------------
 	t.Log("---------------------------- Testing changing the voting power of one validator")

-	updateValidatorPubKey1 := css[nVals].privValidator.(*types.PrivValidator).PubKey
+	updateValidatorPubKey1 := css[nVals].privValidator.GetPubKey()
 	updateValidatorTx1 := dummy.MakeValSetChangeTx(updateValidatorPubKey1.Bytes(), 25)
 	previousTotalVotingPower := css[nVals].GetRoundState().LastValidators.TotalVotingPower()

 	waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css, updateValidatorTx1)
 	waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
 	waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
-	waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
+	waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, eventChans, css)

 	if css[nVals].GetRoundState().LastValidators.TotalVotingPower() == previousTotalVotingPower {
 		t.Errorf("expected voting power to change (before: %d, after: %d)", previousTotalVotingPower, css[nVals].GetRoundState().LastValidators.TotalVotingPower())
@@ -235,18 +250,18 @@ func TestValidatorSetChanges(t *testing.T) {
 	//---------------------------------------------------------------------------
 	t.Log("---------------------------- Testing adding two validators at once")

-	newValidatorPubKey2 := css[nVals+1].privValidator.(*types.PrivValidator).PubKey
-	newValidatorTx2 := dummy.MakeValSetChangeTx(newValidatorPubKey2.Bytes(), uint64(testMinPower))
+	newValidatorPubKey2 := css[nVals+1].privValidator.GetPubKey()
+	newValidatorTx2 := dummy.MakeValSetChangeTx(newValidatorPubKey2.Bytes(), testMinPower)

-	newValidatorPubKey3 := css[nVals+2].privValidator.(*types.PrivValidator).PubKey
-	newValidatorTx3 := dummy.MakeValSetChangeTx(newValidatorPubKey3.Bytes(), uint64(testMinPower))
+	newValidatorPubKey3 := css[nVals+2].privValidator.GetPubKey()
+	newValidatorTx3 := dummy.MakeValSetChangeTx(newValidatorPubKey3.Bytes(), testMinPower)

 	waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css, newValidatorTx2, newValidatorTx3)
 	waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
 	waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
 	activeVals[string(newValidatorPubKey2.Address())] = struct{}{}
 	activeVals[string(newValidatorPubKey3.Address())] = struct{}{}
-	waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
+	waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, eventChans, css)

 	//---------------------------------------------------------------------------
 	t.Log("---------------------------- Testing removing two validators at once")
@@ -259,7 +274,7 @@ func TestValidatorSetChanges(t *testing.T) {
 	waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
 	delete(activeVals, string(newValidatorPubKey2.Address()))
 	delete(activeVals, string(newValidatorPubKey3.Address()))
-	waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
+	waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, eventChans, css)
 }

 // Check we can make blocks with skip_timeout_commit=false
@@ -271,8 +286,8 @@ func TestReactorWithTimeoutCommit(t *testing.T) {
 		css[i].config.SkipTimeoutCommit = false
 	}

-	reactors, eventChans := startConsensusNet(t, css, N-1, false)
-	defer stopConsensusNet(reactors)
+	reactors, eventChans, eventBuses := startConsensusNet(t, css, N-1)
+	defer stopConsensusNet(reactors, eventBuses)

 	// wait till everyone makes the first new block
 	timeoutWaitGroup(t, N-1, func(wg *sync.WaitGroup, j int) {
@@ -283,19 +298,50 @@ func TestReactorWithTimeoutCommit(t *testing.T) {

 func waitForAndValidateBlock(t *testing.T, n int, activeVals map[string]struct{}, eventChans []chan interface{}, css []*ConsensusState, txs ...[]byte) {
 	timeoutWaitGroup(t, n, func(wg *sync.WaitGroup, j int) {
-		newBlockI := <-eventChans[j]
+		defer wg.Done()
+
+		newBlockI, ok := <-eventChans[j]
+		if !ok {
+			return
+		}
 		newBlock := newBlockI.(types.TMEventData).Unwrap().(types.EventDataNewBlock).Block
-		t.Logf("[WARN] Got block height=%v validator=%v", newBlock.Height, j)
+		t.Logf("Got block height=%v validator=%v", newBlock.Height, j)
 		err := validateBlock(newBlock, activeVals)
 		if err != nil {
 			t.Fatal(err)
 		}
 		for _, tx := range txs {
-			css[j].mempool.CheckTx(tx, nil)
+			if err = css[j].mempool.CheckTx(tx, nil); err != nil {
+				t.Fatal(err)
+			}
 		}
 	}, css)
 }

+func waitForBlockWithUpdatedValsAndValidateIt(t *testing.T, n int, updatedVals map[string]struct{}, eventChans []chan interface{}, css []*ConsensusState) {
+	timeoutWaitGroup(t, n, func(wg *sync.WaitGroup, j int) {
+		defer wg.Done()
+
+		var newBlock *types.Block
+	LOOP:
+		for {
+			newBlockI, ok := <-eventChans[j]
+			if !ok {
+				return
+			}
+			newBlock = newBlockI.(types.TMEventData).Unwrap().(types.EventDataNewBlock).Block
+			if newBlock.LastCommit.Size() == len(updatedVals) {
+				t.Logf("Block with new validators height=%v validator=%v", newBlock.Height, j)
+				break LOOP
+			} else {
+				t.Logf("Block with no new validators height=%v validator=%v. Skipping...", newBlock.Height, j)
+			}
+		}
+
-		eventChans[j] <- struct{}{}
-		wg.Done()
+		err := validateBlock(newBlock, updatedVals)
+		if err != nil {
+			t.Fatal(err)
+		}
+	}, css)
+}
@@ -326,15 +372,20 @@ func timeoutWaitGroup(t *testing.T, n int, f func(*sync.WaitGroup, int), css []*
 		close(done)
 	}()

+	// we're running many nodes in-process, possibly in a virtual machine,
+	// and spewing debug messages - making a block could take a while,
+	timeout := time.Second * 60
+
 	select {
 	case <-done:
-	case <-time.After(time.Second * 10):
+	case <-time.After(timeout):
 		for i, cs := range css {
-			fmt.Println("#################")
-			fmt.Println("Validator", i)
-			fmt.Println(cs.GetRoundState())
-			fmt.Println("")
+			t.Log("#################")
+			t.Log("Validator", i)
+			t.Log(cs.GetRoundState())
+			t.Log("")
 		}
 		pprof.Lookup("goroutine").WriteTo(os.Stdout, 1)
 		panic("Timed out waiting for all validators to commit a block")
 	}
 }
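
timeoutWaitGroup is the standard "WaitGroup with a deadline" idiom: a helper goroutine waits and closes a done channel, and the caller selects between done and time.After. A self-contained sketch of just that mechanism, without the test plumbing:

package main

import (
	"fmt"
	"sync"
	"time"
)

// waitTimeout reports whether wg.Wait() finished before the deadline.
func waitTimeout(wg *sync.WaitGroup, d time.Duration) bool {
	done := make(chan struct{})
	go func() {
		wg.Wait()
		close(done)
	}()
	select {
	case <-done:
		return true
	case <-time.After(d):
		return false // here the test dumps round state and goroutine stacks
	}
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(j int) {
			defer wg.Done()
			time.Sleep(time.Duration(j) * 10 * time.Millisecond)
		}(i)
	}
	fmt.Println("all done in time:", waitTimeout(&wg, time.Second))
}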
|
||||
|
@@ -4,23 +4,26 @@ import (
|
||||
"bytes"
|
||||
"errors"
|
||||
"fmt"
|
||||
"hash/crc32"
|
||||
"io"
|
||||
"reflect"
|
||||
"strconv"
|
||||
"strings"
|
||||
//"strconv"
|
||||
//"strings"
|
||||
"time"
|
||||
|
||||
abci "github.com/tendermint/abci/types"
|
||||
wire "github.com/tendermint/go-wire"
|
||||
auto "github.com/tendermint/tmlibs/autofile"
|
||||
//auto "github.com/tendermint/tmlibs/autofile"
|
||||
cmn "github.com/tendermint/tmlibs/common"
|
||||
"github.com/tendermint/tmlibs/log"
|
||||
|
||||
"github.com/tendermint/tendermint/proxy"
|
||||
sm "github.com/tendermint/tendermint/state"
|
||||
"github.com/tendermint/tendermint/types"
|
||||
"github.com/tendermint/tendermint/version"
|
||||
)
|
||||
|
||||
var crc32c = crc32.MakeTable(crc32.Castagnoli)
|
||||
|
||||
// Functionality to replay blocks and messages on recovery from a crash.
|
||||
// There are two general failure scenarios: failure during consensus, and failure while applying the block.
|
||||
// The former is handled by the WAL, the latter by the proxyApp Handshake on restart,
|
||||
@@ -34,18 +37,11 @@ import (
|
||||
// as if it were received in receiveRoutine
|
||||
// Lines that start with "#" are ignored.
|
||||
// NOTE: receiveRoutine should not be running
|
||||
func (cs *ConsensusState) readReplayMessage(msgBytes []byte, newStepCh chan interface{}) error {
|
||||
// Skip over empty and meta lines
|
||||
if len(msgBytes) == 0 || msgBytes[0] == '#' {
|
||||
func (cs *ConsensusState) readReplayMessage(msg *TimedWALMessage, newStepCh chan interface{}) error {
|
||||
// skip meta messages
|
||||
if _, ok := msg.Msg.(EndHeightMessage); ok {
|
||||
return nil
|
||||
}
|
||||
var err error
|
||||
var msg TimedWALMessage
|
||||
wire.ReadJSON(&msg, msgBytes, &err)
|
||||
if err != nil {
|
||||
fmt.Println("MsgBytes:", msgBytes, string(msgBytes))
|
||||
return fmt.Errorf("Error reading json data: %v", err)
|
||||
}
|
||||
|
||||
// for logging
|
||||
switch m := msg.Msg.(type) {
|
||||
@@ -94,8 +90,7 @@ func (cs *ConsensusState) readReplayMessage(msgBytes []byte, newStepCh chan inte
|
||||
|
||||
// replay only those messages since the last block.
|
||||
// timeoutRoutine should run concurrently to read off tickChan
|
||||
func (cs *ConsensusState) catchupReplay(csHeight int) error {
|
||||
|
||||
func (cs *ConsensusState) catchupReplay(csHeight int64) error {
|
||||
// set replayMode
|
||||
cs.replayMode = true
|
||||
defer func() { cs.replayMode = false }()
|
||||
@@ -103,64 +98,47 @@ func (cs *ConsensusState) catchupReplay(csHeight int) error {
|
||||
// Ensure that ENDHEIGHT for this height doesn't exist
|
||||
// NOTE: This is just a sanity check. As far as we know things work fine without it,
|
||||
// and Handshake could reuse ConsensusState if it weren't for this check (since we can crash after writing ENDHEIGHT).
|
||||
gr, found, err := cs.wal.group.Search("#ENDHEIGHT: ", makeHeightSearchFunc(csHeight))
|
||||
gr, found, err := cs.wal.SearchForEndHeight(csHeight)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if gr != nil {
|
||||
gr.Close()
|
||||
if err := gr.Close(); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
if found {
|
||||
return errors.New(cmn.Fmt("WAL should not contain #ENDHEIGHT %d.", csHeight))
|
||||
return fmt.Errorf("WAL should not contain #ENDHEIGHT %d.", csHeight)
|
||||
}
|
||||
|
||||
// Search for last height marker
|
||||
gr, found, err = cs.wal.group.Search("#ENDHEIGHT: ", makeHeightSearchFunc(csHeight-1))
|
||||
gr, found, err = cs.wal.SearchForEndHeight(csHeight - 1)
|
||||
if err == io.EOF {
|
||||
cs.Logger.Error("Replay: wal.group.Search returned EOF", "#ENDHEIGHT", csHeight-1)
|
||||
// if we upgraded from 0.9 to 0.9.1, we may have #HEIGHT instead
|
||||
// TODO (0.10.0): remove this
|
||||
gr, found, err = cs.wal.group.Search("#HEIGHT: ", makeHeightSearchFunc(csHeight))
|
||||
if err == io.EOF {
|
||||
cs.Logger.Error("Replay: wal.group.Search returned EOF", "#HEIGHT", csHeight)
|
||||
return nil
|
||||
} else if err != nil {
|
||||
return err
|
||||
}
|
||||
} else if err != nil {
|
||||
return err
|
||||
} else {
|
||||
defer gr.Close()
|
||||
}
|
||||
if !found {
|
||||
// if we upgraded from 0.9 to 0.9.1, we may have #HEIGHT instead
|
||||
// TODO (0.10.0): remove this
|
||||
gr, _, err = cs.wal.group.Search("#HEIGHT: ", makeHeightSearchFunc(csHeight))
|
||||
if err == io.EOF {
|
||||
cs.Logger.Error("Replay: wal.group.Search returned EOF", "#HEIGHT", csHeight)
|
||||
return nil
|
||||
} else if err != nil {
|
||||
return err
|
||||
} else {
|
||||
defer gr.Close()
|
||||
}
|
||||
|
||||
// TODO (0.10.0): uncomment
|
||||
// return errors.New(cmn.Fmt("Cannot replay height %d. WAL does not contain #ENDHEIGHT for %d.", csHeight, csHeight-1))
|
||||
return errors.New(cmn.Fmt("Cannot replay height %d. WAL does not contain #ENDHEIGHT for %d.", csHeight, csHeight-1))
|
||||
}
|
||||
defer gr.Close() // nolint: errcheck
|
||||
|
||||
cs.Logger.Info("Catchup by replaying consensus messages", "height", csHeight)
|
||||
|
||||
var msg *TimedWALMessage
|
||||
dec := WALDecoder{gr}
|
||||
|
||||
for {
|
||||
line, err := gr.ReadLine()
|
||||
if err != nil {
|
||||
if err == io.EOF {
|
||||
break
|
||||
} else {
|
||||
return err
|
||||
}
|
||||
msg, err = dec.Decode()
|
||||
if err == io.EOF {
|
||||
break
|
||||
} else if err != nil {
|
||||
return err
|
||||
}
|
||||
// NOTE: since the priv key is set when the msgs are received
|
||||
// it will attempt to eg double sign but we can just ignore it
|
||||
// since the votes will be replayed and we'll get to the next step
|
||||
if err := cs.readReplayMessage([]byte(line), nil); err != nil {
|
||||
if err := cs.readReplayMessage(msg, nil); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
@@ -172,7 +150,8 @@ func (cs *ConsensusState) catchupReplay(csHeight int) error {
|
||||
|
||||
// Parses marker lines of the form:
|
||||
// #ENDHEIGHT: 12345
|
||||
func makeHeightSearchFunc(height int) auto.SearchFunc {
|
||||
/*
|
||||
func makeHeightSearchFunc(height int64) auto.SearchFunc {
|
||||
return func(line string) (int, error) {
|
||||
line = strings.TrimRight(line, "\n")
|
||||
parts := strings.Split(line, " ")
|
||||
@@ -191,7 +170,7 @@ func makeHeightSearchFunc(height int) auto.SearchFunc {
|
||||
return -1, nil
|
||||
}
|
||||
}
|
||||
}
|
||||
}*/
|
||||
|
||||
//----------------------------------------------
|
||||
// Recover from failure during block processing
|
||||
@@ -221,12 +200,15 @@ func (h *Handshaker) NBlocks() int {
|
||||
// TODO: retry the handshake/replay if it fails ?
|
||||
func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
|
||||
// handshake is done via info request on the query conn
|
||||
res, err := proxyApp.Query().InfoSync()
|
||||
res, err := proxyApp.Query().InfoSync(abci.RequestInfo{version.Version})
|
||||
if err != nil {
|
||||
return errors.New(cmn.Fmt("Error calling Info: %v", err))
|
||||
}
|
||||
|
||||
blockHeight := int(res.LastBlockHeight) // XXX: beware overflow
|
||||
blockHeight := int64(res.LastBlockHeight)
|
||||
if blockHeight < 0 {
|
||||
return fmt.Errorf("Got a negative last block height (%d) from the app", blockHeight)
|
||||
}
|
||||
appHash := res.LastBlockAppHash
|
||||
|
||||
h.logger.Info("ABCI Handshake", "appHeight", blockHeight, "appHash", fmt.Sprintf("%X", appHash))
|
||||
@@ -248,7 +230,7 @@ func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
|
||||
|
||||
// Replay all blocks since appBlockHeight and ensure the result matches the current state.
|
||||
// Returns the final AppHash or an error
|
||||
func (h *Handshaker) ReplayBlocks(appHash []byte, appBlockHeight int, proxyApp proxy.AppConns) ([]byte, error) {
|
||||
func (h *Handshaker) ReplayBlocks(appHash []byte, appBlockHeight int64, proxyApp proxy.AppConns) ([]byte, error) {
|
||||
|
||||
storeBlockHeight := h.store.Height()
|
||||
stateBlockHeight := h.state.LastBlockHeight
|
||||
@@ -257,7 +239,9 @@ func (h *Handshaker) ReplayBlocks(appHash []byte, appBlockHeight int, proxyApp p
|
||||
// If appBlockHeight == 0 it means that we are at genesis and hence should send InitChain
|
||||
if appBlockHeight == 0 {
|
||||
validators := types.TM2PB.Validators(h.state.Validators)
|
||||
proxyApp.Consensus().InitChainSync(validators)
|
||||
if _, err := proxyApp.Consensus().InitChainSync(abci.RequestInitChain{validators}); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
// First handle edge cases and constraints on the storeBlockHeight
|
||||
@@ -321,11 +305,14 @@ func (h *Handshaker) ReplayBlocks(appHash []byte, appBlockHeight int, proxyApp p
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
func (h *Handshaker) replayBlocks(proxyApp proxy.AppConns, appBlockHeight, storeBlockHeight int, mutateState bool) ([]byte, error) {
|
||||
func (h *Handshaker) replayBlocks(proxyApp proxy.AppConns, appBlockHeight, storeBlockHeight int64, mutateState bool) ([]byte, error) {
|
||||
// App is further behind than it should be, so we need to replay blocks.
|
||||
// We replay all blocks from appBlockHeight+1.
|
||||
//
|
||||
// Note that we don't have an old version of the state,
|
||||
// so we by-pass state validation/mutation using sm.ExecCommitBlock.
|
||||
// This also means we won't be saving validator sets if they change during this period.
|
||||
//
|
||||
// If mutateState == true, the final block is replayed with h.replayBlock()
|
||||
|
||||
var appHash []byte
|
||||
@@ -354,14 +341,13 @@ func (h *Handshaker) replayBlocks(proxyApp proxy.AppConns, appBlockHeight, store
|
||||
}
|
||||
|
||||
// ApplyBlock on the proxyApp with the last block.
|
||||
func (h *Handshaker) replayBlock(height int, proxyApp proxy.AppConnConsensus) ([]byte, error) {
|
||||
func (h *Handshaker) replayBlock(height int64, proxyApp proxy.AppConnConsensus) ([]byte, error) {
|
||||
mempool := types.MockMempool{}
|
||||
|
||||
var eventCache types.Fireable // nil
|
||||
block := h.store.LoadBlock(height)
|
||||
meta := h.store.LoadBlockMeta(height)
|
||||
|
||||
if err := h.state.ApplyBlock(eventCache, proxyApp, block, meta.BlockID.PartsHeader, mempool); err != nil {
|
||||
if err := h.state.ApplyBlock(types.NopEventBus{}, proxyApp, block, meta.BlockID.PartsHeader, mempool); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
@@ -373,7 +359,6 @@ func (h *Handshaker) replayBlock(height int, proxyApp proxy.AppConnConsensus) ([
|
||||
func (h *Handshaker) checkAppHash(appHash []byte) error {
|
||||
if !bytes.Equal(h.state.AppHash, appHash) {
|
||||
panic(errors.New(cmn.Fmt("Tendermint state.AppHash does not match AppHash after replay. Got %X, expected %X", appHash, h.state.AppHash)).Error())
|
||||
return nil
|
||||
}
|
||||
return nil
|
||||
}
|
||||
@@ -388,7 +373,10 @@ func newMockProxyApp(appHash []byte, abciResponses *sm.ABCIResponses) proxy.AppC
|
||||
abciResponses: abciResponses,
|
||||
})
|
||||
cli, _ := clientCreator.NewABCIClient()
|
||||
cli.Start()
|
||||
err := cli.Start()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
return proxy.NewAppConnConsensus(cli)
|
||||
}
|
||||
|
||||
@@ -400,21 +388,17 @@ type mockProxyApp struct {
|
||||
abciResponses *sm.ABCIResponses
|
||||
}
|
||||
|
||||
func (mock *mockProxyApp) DeliverTx(tx []byte) abci.Result {
|
||||
func (mock *mockProxyApp) DeliverTx(tx []byte) abci.ResponseDeliverTx {
|
||||
r := mock.abciResponses.DeliverTx[mock.txCount]
|
||||
mock.txCount += 1
|
||||
return abci.Result{
|
||||
r.Code,
|
||||
r.Data,
|
||||
r.Log,
|
||||
}
|
||||
return *r
|
||||
}
|
||||
|
||||
func (mock *mockProxyApp) EndBlock(height uint64) abci.ResponseEndBlock {
|
||||
func (mock *mockProxyApp) EndBlock(req abci.RequestEndBlock) abci.ResponseEndBlock {
|
||||
mock.txCount = 0
|
||||
return mock.abciResponses.EndBlock
|
||||
return *mock.abciResponses.EndBlock
|
||||
}
|
||||
|
||||
func (mock *mockProxyApp) Commit() abci.Result {
|
||||
return abci.NewResultOK(mock.appHash, "")
|
||||
func (mock *mockProxyApp) Commit() abci.ResponseCommit {
|
||||
return abci.ResponseCommit{Code: abci.CodeTypeOK, Data: mock.appHash}
|
||||
}
|
||||
|
@@ -2,12 +2,15 @@ package consensus
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"errors"
|
||||
"context"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
|
||||
bc "github.com/tendermint/tendermint/blockchain"
|
||||
cfg "github.com/tendermint/tendermint/config"
|
||||
"github.com/tendermint/tendermint/proxy"
|
||||
@@ -17,6 +20,11 @@ import (
|
||||
dbm "github.com/tendermint/tmlibs/db"
|
||||
)
|
||||
|
||||
const (
|
||||
// event bus subscriber
|
||||
subscriber = "replay-file"
|
||||
)
|
||||
|
||||
//--------------------------------------------------------
|
||||
// replay messages interactively or all at once
|
||||
|
||||
@@ -41,24 +49,39 @@ func (cs *ConsensusState) ReplayFile(file string, console bool) error {
|
||||
cs.startForReplay()
|
||||
|
||||
// ensure all new step events are regenerated as expected
|
||||
newStepCh := subscribeToEvent(cs.evsw, "replay-test", types.EventStringNewRoundStep(), 1)
|
||||
newStepCh := make(chan interface{}, 1)
|
||||
|
||||
ctx := context.Background()
|
||||
err := cs.eventBus.Subscribe(ctx, subscriber, types.EventQueryNewRoundStep, newStepCh)
|
||||
if err != nil {
|
||||
return errors.Errorf("failed to subscribe %s to %v", subscriber, types.EventQueryNewRoundStep)
|
||||
}
|
||||
defer cs.eventBus.Unsubscribe(ctx, subscriber, types.EventQueryNewRoundStep)
|
||||
|
||||
// just open the file for reading, no need to use wal
|
||||
fp, err := os.OpenFile(file, os.O_RDONLY, 0666)
|
||||
fp, err := os.OpenFile(file, os.O_RDONLY, 0600)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
pb := newPlayback(file, fp, cs, cs.state.Copy())
|
||||
defer pb.fp.Close()
|
||||
defer pb.fp.Close() // nolint: errcheck
|
||||
|
||||
var nextN int // apply N msgs in a row
|
||||
for pb.scanner.Scan() {
|
||||
var msg *TimedWALMessage
|
||||
for {
|
||||
if nextN == 0 && console {
|
||||
nextN = pb.replayConsoleLoop()
|
||||
}
|
||||
|
||||
if err := pb.cs.readReplayMessage(pb.scanner.Bytes(), newStepCh); err != nil {
|
||||
msg, err = pb.dec.Decode()
|
||||
if err == io.EOF {
|
||||
return nil
|
||||
} else if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if err := pb.cs.readReplayMessage(msg, newStepCh); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
@@ -76,9 +99,9 @@ func (cs *ConsensusState) ReplayFile(file string, console bool) error {
|
||||
type playback struct {
|
||||
cs *ConsensusState
|
||||
|
||||
fp *os.File
|
||||
scanner *bufio.Scanner
|
||||
count int // how many lines/msgs into the file are we
|
||||
fp *os.File
|
||||
dec *WALDecoder
|
||||
count int // how many lines/msgs into the file are we
|
||||
|
||||
// replays can be reset to beginning
|
||||
fileName string // so we can close/reopen the file
|
||||
@@ -91,33 +114,41 @@ func newPlayback(fileName string, fp *os.File, cs *ConsensusState, genState *sm.
|
||||
fp: fp,
|
||||
fileName: fileName,
|
||||
genesisState: genState,
|
||||
scanner: bufio.NewScanner(fp),
|
||||
dec: NewWALDecoder(fp),
|
||||
}
|
||||
}
|
||||
|
||||
// go back count steps by resetting the state and running (pb.count - count) steps
|
||||
func (pb *playback) replayReset(count int, newStepCh chan interface{}) error {
|
||||
|
||||
pb.cs.Stop()
|
||||
pb.cs.Wait()
|
||||
|
||||
newCS := NewConsensusState(pb.cs.config, pb.genesisState.Copy(), pb.cs.proxyAppConn, pb.cs.blockStore, pb.cs.mempool)
|
||||
newCS.SetEventSwitch(pb.cs.evsw)
|
||||
newCS.SetEventBus(pb.cs.eventBus)
|
||||
newCS.startForReplay()
|
||||
|
||||
pb.fp.Close()
|
||||
fp, err := os.OpenFile(pb.fileName, os.O_RDONLY, 0666)
|
||||
if err := pb.fp.Close(); err != nil {
|
||||
return err
|
||||
}
|
||||
fp, err := os.OpenFile(pb.fileName, os.O_RDONLY, 0600)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
pb.fp = fp
|
||||
pb.scanner = bufio.NewScanner(fp)
|
||||
pb.dec = NewWALDecoder(fp)
|
||||
count = pb.count - count
|
||||
fmt.Printf("Reseting from %d to %d\n", pb.count, count)
|
||||
pb.count = 0
|
||||
pb.cs = newCS
|
||||
for i := 0; pb.scanner.Scan() && i < count; i++ {
|
||||
if err := pb.cs.readReplayMessage(pb.scanner.Bytes(), newStepCh); err != nil {
|
||||
var msg *TimedWALMessage
|
||||
for i := 0; i < count; i++ {
|
||||
msg, err = pb.dec.Decode()
|
||||
if err == io.EOF {
|
||||
return nil
|
||||
} else if err != nil {
|
||||
return err
|
||||
}
|
||||
if err := pb.cs.readReplayMessage(msg, newStepCh); err != nil {
|
||||
return err
|
||||
}
|
||||
pb.count += 1
|
||||
@@ -180,10 +211,20 @@ func (pb *playback) replayConsoleLoop() int {
|
||||
// NOTE: "back" is not supported in the state machine design,
|
||||
// so we restart and replay up to
|
||||
|
||||
ctx := context.Background()
|
||||
// ensure all new step events are regenerated as expected
|
||||
newStepCh := subscribeToEvent(pb.cs.evsw, "replay-test", types.EventStringNewRoundStep(), 1)
|
||||
newStepCh := make(chan interface{}, 1)
|
||||
|
||||
err := pb.cs.eventBus.Subscribe(ctx, subscriber, types.EventQueryNewRoundStep, newStepCh)
|
||||
if err != nil {
|
||||
cmn.Exit(fmt.Sprintf("failed to subscribe %s to %v", subscriber, types.EventQueryNewRoundStep))
|
||||
}
|
||||
defer pb.cs.eventBus.Unsubscribe(ctx, subscriber, types.EventQueryNewRoundStep)
|
||||
|
||||
if len(tokens) == 1 {
|
||||
pb.replayReset(1, newStepCh)
|
||||
if err := pb.replayReset(1, newStepCh); err != nil {
|
||||
pb.cs.Logger.Error("Replay reset error", "err", err)
|
||||
}
|
||||
} else {
|
||||
i, err := strconv.Atoi(tokens[1])
|
||||
if err != nil {
|
||||
@@ -191,7 +232,9 @@ func (pb *playback) replayConsoleLoop() int {
|
||||
} else if i > pb.count {
|
||||
fmt.Printf("argument to back must not be larger than the current count (%d)\n", pb.count)
|
||||
} else {
|
||||
pb.replayReset(i, newStepCh)
|
||||
if err := pb.replayReset(i, newStepCh); err != nil {
|
||||
pb.cs.Logger.Error("Replay reset error", "err", err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -241,24 +284,26 @@ func newConsensusStateForReplay(config cfg.BaseConfig, csConfig *cfg.ConsensusCo
|
||||
|
||||
// Get State
|
||||
stateDB := dbm.NewDB("state", config.DBBackend, config.DBDir())
|
||||
state := sm.MakeGenesisStateFromFile(stateDB, config.GenesisFile())
|
||||
state, err := sm.MakeGenesisStateFromFile(stateDB, config.GenesisFile())
|
||||
if err != nil {
|
||||
cmn.Exit(err.Error())
|
||||
}
|
||||
|
||||
// Create proxyAppConn connection (consensus, mempool, query)
|
||||
clientCreator := proxy.DefaultClientCreator(config.ProxyApp, config.ABCI, config.DBDir())
|
||||
proxyApp := proxy.NewAppConns(clientCreator, NewHandshaker(state, blockStore))
|
||||
_, err := proxyApp.Start()
|
||||
err = proxyApp.Start()
|
||||
if err != nil {
|
||||
cmn.Exit(cmn.Fmt("Error starting proxy app conns: %v", err))
|
||||
}
|
||||
|
||||
// Make event switch
|
||||
eventSwitch := types.NewEventSwitch()
|
||||
if _, err := eventSwitch.Start(); err != nil {
|
||||
cmn.Exit(cmn.Fmt("Failed to start event switch: %v", err))
|
||||
eventBus := types.NewEventBus()
|
||||
if err := eventBus.Start(); err != nil {
|
||||
cmn.Exit(cmn.Fmt("Failed to start event bus: %v", err))
|
||||
}
|
||||
|
||||
consensusState := NewConsensusState(csConfig, state.Copy(), proxyApp.Consensus(), blockStore, types.MockMempool{})
|
||||
|
||||
consensusState.SetEventSwitch(eventSwitch)
|
||||
consensusState.SetEventBus(eventBus)
|
||||
return consensusState
|
||||
}
|
||||
|
@@ -2,19 +2,24 @@ package consensus
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"path"
|
||||
"strings"
|
||||
"runtime"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
"github.com/tendermint/abci/example/dummy"
|
||||
abci "github.com/tendermint/abci/types"
|
||||
crypto "github.com/tendermint/go-crypto"
|
||||
wire "github.com/tendermint/go-wire"
|
||||
auto "github.com/tendermint/tmlibs/autofile"
|
||||
cmn "github.com/tendermint/tmlibs/common"
|
||||
dbm "github.com/tendermint/tmlibs/db"
|
||||
|
||||
@@ -25,8 +30,10 @@ import (
|
||||
"github.com/tendermint/tmlibs/log"
|
||||
)
|
||||
|
||||
var consensusReplayConfig *cfg.Config
|
||||
|
||||
func init() {
|
||||
config = ResetConfig("consensus_replay_test")
|
||||
consensusReplayConfig = ResetConfig("consensus_replay_test")
|
||||
}
|
||||
|
||||
// These tests ensure we can always recover from failure at any part of the consensus process.
|
||||
@@ -39,11 +46,10 @@ func init() {
|
||||
|
||||
// NOTE: Files in this dir are generated by running the `build.sh` therein.
|
||||
// It's a simple way to generate wals for a single block, or multiple blocks, with random transactions,
|
||||
// and different part sizes. The output is not deterministic, and the stepChanges may need to be adjusted
|
||||
// after running it (eg. sometimes small_block2 will have 5 block parts, sometimes 6).
|
||||
// and different part sizes. The output is not deterministic.
|
||||
// It should only have to be re-run if there is some breaking change to the consensus data structures (eg. blocks, votes)
|
||||
// or to the behaviour of the app (eg. computes app hash differently)
|
||||
var data_dir = path.Join(cmn.GoPath, "src/github.com/tendermint/tendermint/consensus", "test_data")
|
||||
var data_dir = path.Join(cmn.GoPath(), "src/github.com/tendermint/tendermint/consensus", "test_data")
|
||||
|
||||
//------------------------------------------------------------------------------------------
|
||||
// WAL Tests
|
||||
@@ -52,223 +58,215 @@ var data_dir = path.Join(cmn.GoPath, "src/github.com/tendermint/tendermint/conse
|
||||
// and which ones we need the wal for - then we'd also be able to only flush the
|
||||
// wal writer when we need to, instead of with every message.
|
||||
|
||||
// the priv validator changes step at these lines for a block with 1 val and 1 part
|
||||
var baseStepChanges = []int{3, 6, 8}
|
||||
func startNewConsensusStateAndWaitForBlock(t *testing.T, lastBlockHeight int64, blockDB dbm.DB, stateDB dbm.DB) {
|
||||
logger := log.TestingLogger()
|
||||
state, _ := sm.GetState(stateDB, consensusReplayConfig.GenesisFile())
|
||||
state.SetLogger(logger.With("module", "state"))
|
||||
privValidator := loadPrivValidator(consensusReplayConfig)
|
||||
cs := newConsensusStateWithConfigAndBlockStore(consensusReplayConfig, state, privValidator, dummy.NewDummyApplication(), blockDB)
|
||||
cs.SetLogger(logger)
|
||||
|
||||
// test recovery from each line in each testCase
|
||||
var testCases = []*testCase{
|
||||
newTestCase("empty_block", baseStepChanges), // empty block (has 1 block part)
|
||||
newTestCase("small_block1", baseStepChanges), // small block with txs in 1 block part
|
||||
newTestCase("small_block2", []int{3, 11, 13}), // small block with txs across 6 smaller block parts
|
||||
}
|
||||
bytes, _ := ioutil.ReadFile(cs.config.WalFile())
|
||||
// fmt.Printf("====== WAL: \n\r%s\n", bytes)
|
||||
t.Logf("====== WAL: \n\r%s\n", bytes)
|
||||
|
||||
type testCase struct {
|
||||
name string
|
||||
log string //full cs wal
|
||||
stepMap map[int]int8 // map lines of log to privval step
|
||||
err := cs.Start()
|
||||
require.NoError(t, err)
|
||||
defer func() {
|
||||
cs.Stop()
|
||||
}()
|
||||
|
||||
proposeLine int
|
||||
prevoteLine int
|
||||
precommitLine int
|
||||
}
|
||||
|
||||
func newTestCase(name string, stepChanges []int) *testCase {
|
||||
if len(stepChanges) != 3 {
|
||||
panic(cmn.Fmt("a full wal has 3 step changes! Got array %v", stepChanges))
|
||||
}
|
||||
return &testCase{
|
||||
name: name,
|
||||
log: readWAL(path.Join(data_dir, name+".cswal")),
|
||||
stepMap: newMapFromChanges(stepChanges),
|
||||
|
||||
proposeLine: stepChanges[0],
|
||||
prevoteLine: stepChanges[1],
|
||||
precommitLine: stepChanges[2],
|
||||
}
|
||||
}
|
||||
|
||||
func newMapFromChanges(changes []int) map[int]int8 {
|
||||
changes = append(changes, changes[2]+1) // so we add the last step change to the map
|
||||
m := make(map[int]int8)
|
||||
var count int
|
||||
for changeNum, nextChange := range changes {
|
||||
for ; count < nextChange; count++ {
|
||||
m[count] = int8(changeNum)
|
||||
}
|
||||
}
|
||||
return m
|
||||
}

func readWAL(p string) string {
    b, err := ioutil.ReadFile(p)
    if err != nil {
        panic(err)
    }
    return string(b)
}

func writeWAL(walMsgs string) string {
    tempDir := os.TempDir()
    walDir := path.Join(tempDir, "/wal"+cmn.RandStr(12))
    walFile := path.Join(walDir, "wal")
    // Create WAL directory
    err := cmn.EnsureDir(walDir, 0700)
    if err != nil {
        panic(err)
    }
    // Write the needed WAL to file
    err = cmn.WriteFile(walFile, []byte(walMsgs), 0600)
    if err != nil {
        panic(err)
    }
    return walFile
}

func waitForBlock(newBlockCh chan interface{}, thisCase *testCase, i int) {
    after := time.After(time.Second * 10)
    // This is just a signal that we haven't halted; it's not something contained
    // in the WAL itself. Assuming the consensus state is running, replay of any
    // WAL, including the empty one, should eventually be followed by a new
    // block, or else something is wrong.
    newBlockCh := make(chan interface{}, 1)
    err = cs.eventBus.Subscribe(context.Background(), testSubscriber, types.EventQueryNewBlock, newBlockCh)
    require.NoError(t, err)
    select {
    case <-newBlockCh:
    case <-after:
        panic(cmn.Fmt("Timed out waiting for new block for case '%s' line %d", thisCase.name, i))
    case <-time.After(10 * time.Second):
        t.Fatalf("Timed out waiting for new block (see trace above)")
    }
}
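
Both the old and the new helper rest on the same Go idiom: a buffered event channel raced against a deadline in a `select`. A minimal, self-contained sketch of that pattern (the goroutine stands in for the consensus state publishing a new-block event; nothing here is Tendermint API):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	newBlockCh := make(chan interface{}, 1)

	// Stand-in for the consensus state publishing a new-block event.
	go func() {
		time.Sleep(100 * time.Millisecond)
		newBlockCh <- struct{}{}
	}()

	// Wait for either the event or the deadline, exactly as the test helpers do.
	select {
	case <-newBlockCh:
		fmt.Println("got new block event")
	case <-time.After(10 * time.Second):
		fmt.Println("timed out waiting for new block")
	}
}
```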

func runReplayTest(t *testing.T, cs *ConsensusState, walFile string, newBlockCh chan interface{},
    thisCase *testCase, i int) {

    cs.config.SetWalFile(walFile)
    started, err := cs.Start()
    if err != nil {
        t.Fatalf("Cannot start consensus: %v", err)
    }
    if !started {
        t.Error("Consensus did not start")
    }
    // Wait to make a new block.
    // This is just a signal that we haven't halted; it's not something contained in the WAL itself.
    // Assuming the consensus state is running, replay of any WAL, including the empty one,
    // should eventually be followed by a new block, or else something is wrong
    waitForBlock(newBlockCh, thisCase, i)
    cs.evsw.Stop()
    cs.Stop()
LOOP:
func sendTxs(cs *ConsensusState, ctx context.Context) {
    i := 0
    for {
        select {
        case <-newBlockCh:
        case <-ctx.Done():
            return
        default:
            break LOOP
        }
    }
    cs.Wait()
}

func toPV(pv PrivValidator) *types.PrivValidator {
    return pv.(*types.PrivValidator)
}

func setupReplayTest(t *testing.T, thisCase *testCase, nLines int, crashAfter bool) (*ConsensusState, chan interface{}, string, string) {
    t.Log("-------------------------------------")
    t.Logf("Starting replay test %v (of %d lines of WAL). Crash after = %v", thisCase.name, nLines, crashAfter)

    lineStep := nLines
    if crashAfter {
        lineStep -= 1
    }

    split := strings.Split(thisCase.log, "\n")
    lastMsg := split[nLines]

    // we write those lines up to (not including) one with the signature
    walFile := writeWAL(strings.Join(split[:nLines], "\n") + "\n")

    cs := fixedConsensusStateDummy()

    // set the last step according to when we crashed vs the wal
    toPV(cs.privValidator).LastHeight = 1 // first block
    toPV(cs.privValidator).LastStep = thisCase.stepMap[lineStep]

    t.Logf("[WARN] setupReplayTest LastStep=%v", toPV(cs.privValidator).LastStep)

    newBlockCh := subscribeToEvent(cs.evsw, "tester", types.EventStringNewBlock(), 1)

    return cs, newBlockCh, lastMsg, walFile
}

func readTimedWALMessage(t *testing.T, walMsg string) TimedWALMessage {
    var err error
    var msg TimedWALMessage
    wire.ReadJSON(&msg, []byte(walMsg), &err)
    if err != nil {
        t.Fatalf("Error reading json data: %v", err)
    }
    return msg
}

//-----------------------------------------------
// Test the log at every iteration, and set the privVal last step
// as if the log was written after signing, before the crash

func TestWALCrashAfterWrite(t *testing.T) {
    for _, thisCase := range testCases {
        split := strings.Split(thisCase.log, "\n")
        for i := 0; i < len(split)-1; i++ {
            cs, newBlockCh, _, walFile := setupReplayTest(t, thisCase, i+1, true)
            runReplayTest(t, cs, walFile, newBlockCh, thisCase, i+1)
            cs.mempool.CheckTx([]byte{byte(i)}, nil)
            i++
        }
    }
}

//-----------------------------------------------
// Test the log as if we crashed after signing but before writing.
// This relies on privValidator.LastSignature being set
// TestWALCrash uses crashing WAL to test we can recover from any WAL failure.
func TestWALCrash(t *testing.T) {
    testCases := []struct {
        name         string
        initFn       func(*ConsensusState, context.Context)
        heightToStop int64
    }{
        {"empty block",
            func(cs *ConsensusState, ctx context.Context) {},
            1},
        {"block with a smaller part size",
            func(cs *ConsensusState, ctx context.Context) {
                // XXX: is there a better way to change BlockPartSizeBytes?
                params := cs.state.Params
                params.BlockPartSizeBytes = 512
                cs.state.Params = params
                sendTxs(cs, ctx)
            },
            1},
        {"many non-empty blocks",
            sendTxs,
            3},
    }

func TestWALCrashBeforeWritePropose(t *testing.T) {
    for _, thisCase := range testCases {
        lineNum := thisCase.proposeLine
        // setup replay test where last message is a proposal
        cs, newBlockCh, proposalMsg, walFile := setupReplayTest(t, thisCase, lineNum, false)
        msg := readTimedWALMessage(t, proposalMsg)
        proposal := msg.Msg.(msgInfo).Msg.(*ProposalMessage)
        // Set LastSig
        toPV(cs.privValidator).LastSignBytes = types.SignBytes(cs.state.ChainID, proposal.Proposal)
        toPV(cs.privValidator).LastSignature = proposal.Proposal.Signature
        runReplayTest(t, cs, walFile, newBlockCh, thisCase, lineNum)
    for _, tc := range testCases {
        t.Run(tc.name, func(t *testing.T) {
            crashWALandCheckLiveness(t, tc.initFn, tc.heightToStop)
        })
    }
}

func TestWALCrashBeforeWritePrevote(t *testing.T) {
    for _, thisCase := range testCases {
        testReplayCrashBeforeWriteVote(t, thisCase, thisCase.prevoteLine, types.EventStringCompleteProposal())
func crashWALandCheckLiveness(t *testing.T, initFn func(*ConsensusState, context.Context), heightToStop int64) {
    walPaniced := make(chan error)
    crashingWal := &crashingWAL{panicCh: walPaniced, heightToStop: heightToStop}

    i := 1
LOOP:
    for {
        // fmt.Printf("====== LOOP %d\n", i)
        t.Logf("====== LOOP %d\n", i)

        // create consensus state from a clean slate
        logger := log.NewNopLogger()
        stateDB := dbm.NewMemDB()
        state, _ := sm.MakeGenesisStateFromFile(stateDB, consensusReplayConfig.GenesisFile())
        state.SetLogger(logger.With("module", "state"))
        privValidator := loadPrivValidator(consensusReplayConfig)
        blockDB := dbm.NewMemDB()
        cs := newConsensusStateWithConfigAndBlockStore(consensusReplayConfig, state, privValidator, dummy.NewDummyApplication(), blockDB)
        cs.SetLogger(logger)

        // start sending transactions
        ctx, cancel := context.WithCancel(context.Background())
        go initFn(cs, ctx)

        // clean up WAL file from the previous iteration
        walFile := cs.config.WalFile()
        os.Remove(walFile)

        // set crashing WAL
        csWal, err := cs.OpenWAL(walFile)
        require.NoError(t, err)
        crashingWal.next = csWal
        // reset the message counter
        crashingWal.msgIndex = 1
        cs.wal = crashingWal

        // start consensus state
        err = cs.Start()
        require.NoError(t, err)

        i++

        select {
        case err := <-walPaniced:
            t.Logf("WAL paniced: %v", err)

            // make sure we can make blocks after a crash
            startNewConsensusStateAndWaitForBlock(t, cs.Height, blockDB, stateDB)

            // stop consensus state and transactions sender (initFn)
            cs.Stop()
            cancel()

            // if we reached the required height, exit
            if _, ok := err.(ReachedHeightToStopError); ok {
                break LOOP
            }
        case <-time.After(10 * time.Second):
            t.Fatal("WAL did not panic for 10 seconds (check the log)")
        }
    }
}

func TestWALCrashBeforeWritePrecommit(t *testing.T) {
    for _, thisCase := range testCases {
        testReplayCrashBeforeWriteVote(t, thisCase, thisCase.precommitLine, types.EventStringPolka())
// crashingWAL is a WAL which crashes or rather simulates a crash during Save
// (before and after). It remembers a message for which we last panicked
// (lastPanicedForMsgIndex), so we don't panic for it in subsequent iterations.
type crashingWAL struct {
    next         WAL
    panicCh      chan error
    heightToStop int64

    msgIndex               int // current message index
    lastPanicedForMsgIndex int // last message for which we panicked
}

// WALWriteError indicates a WAL crash.
type WALWriteError struct {
    msg string
}

func (e WALWriteError) Error() string {
    return e.msg
}

// ReachedHeightToStopError indicates we've reached the required consensus
// height and may exit.
type ReachedHeightToStopError struct {
    height int64
}

func (e ReachedHeightToStopError) Error() string {
    return fmt.Sprintf("reached height to stop %d", e.height)
}
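
The two error types are used as signals, not just diagnostics: the test loop type-asserts on the received error to decide whether to retry (a simulated write failure) or exit (target height reached). A minimal sketch of that dispatch, using the same `ReachedHeightToStopError` shape:

```go
package main

import "fmt"

type ReachedHeightToStopError struct{ height int64 }

func (e ReachedHeightToStopError) Error() string {
	return fmt.Sprintf("reached height to stop %d", e.height)
}

// classify mirrors the type assertion in the test loop: stop-signal errors
// end the loop, anything else is treated as a simulated WAL write failure.
func classify(err error) string {
	if _, ok := err.(ReachedHeightToStopError); ok {
		return "stop: reached target height"
	}
	return "retry: simulated WAL write failure"
}

func main() {
	fmt.Println(classify(ReachedHeightToStopError{height: 3}))
	fmt.Println(classify(fmt.Errorf("failed to write msgInfo to WAL")))
}
```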

// Save simulates the WAL crashing by sending an error to the panicCh and then
// exiting the cs.receiveRoutine.
func (w *crashingWAL) Save(m WALMessage) {
    if endMsg, ok := m.(EndHeightMessage); ok {
        if endMsg.Height == w.heightToStop {
            w.panicCh <- ReachedHeightToStopError{endMsg.Height}
            runtime.Goexit()
        } else {
            w.next.Save(m)
        }
        return
    }

    if w.msgIndex > w.lastPanicedForMsgIndex {
        w.lastPanicedForMsgIndex = w.msgIndex
        _, file, line, _ := runtime.Caller(1)
        w.panicCh <- WALWriteError{fmt.Sprintf("failed to write %T to WAL (fileline: %s:%d)", m, file, line)}
        runtime.Goexit()
    } else {
        w.msgIndex++
        w.next.Save(m)
    }
}
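
`runtime.Goexit()` is the key trick here: unlike `panic`, it unwinds only the calling goroutine (the `receiveRoutine` that called `Save`), still runs its deferred functions, and never crashes the test process. A self-contained sketch of that behavior, independent of any Tendermint types:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	errCh := make(chan error, 1) // buffered, so the send never blocks
	var wg sync.WaitGroup
	wg.Add(1)

	go func() {
		defer wg.Done() // deferred calls still run after Goexit
		errCh <- fmt.Errorf("simulated WAL write failure")
		runtime.Goexit() // unwind this goroutine only; no panic propagates
		fmt.Println("never reached")
	}()

	wg.Wait()
	fmt.Println("received:", <-errCh)
}
```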

func testReplayCrashBeforeWriteVote(t *testing.T, thisCase *testCase, lineNum int, eventString string) {
    // setup replay test where last message is a vote
    cs, newBlockCh, voteMsg, walFile := setupReplayTest(t, thisCase, lineNum, false)
    types.AddListenerForEvent(cs.evsw, "tester", eventString, func(data types.TMEventData) {
        msg := readTimedWALMessage(t, voteMsg)
        vote := msg.Msg.(msgInfo).Msg.(*VoteMessage)
        // Set LastSig
        toPV(cs.privValidator).LastSignBytes = types.SignBytes(cs.state.ChainID, vote.Vote)
        toPV(cs.privValidator).LastSignature = vote.Vote.Signature
    })
    runReplayTest(t, cs, walFile, newBlockCh, thisCase, lineNum)
func (w *crashingWAL) Group() *auto.Group { return w.next.Group() }
func (w *crashingWAL) SearchForEndHeight(height int64) (gr *auto.GroupReader, found bool, err error) {
    return w.next.SearchForEndHeight(height)
}

func (w *crashingWAL) Start() error { return w.next.Start() }
func (w *crashingWAL) Stop() error  { return w.next.Stop() }
func (w *crashingWAL) Wait()        { w.next.Wait() }
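
These passthrough methods complete the decorator: `crashingWAL` satisfies the `WAL` interface by intercepting only `Save` and delegating everything else to `next`. A minimal sketch of the same shape over a deliberately reduced interface (the real `WAL` also exposes `Group`, `SearchForEndHeight`, `Start`, `Stop`, and `Wait`):

```go
package main

import "fmt"

// Reduced stand-in for the WAL interface, for illustration only.
type WAL interface {
	Save(msg string)
}

type fileWAL struct{}

func (fileWAL) Save(msg string) { fmt.Println("saved:", msg) }

// loggingWAL wraps another WAL, the same shape as crashingWAL:
// intercept Save, forward everything else to next.
type loggingWAL struct{ next WAL }

func (w loggingWAL) Save(msg string) {
	fmt.Println("about to save:", msg)
	w.next.Save(msg)
}

func main() {
	var wal WAL = loggingWAL{next: fileWAL{}}
	wal.Save("vote")
}
```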

//------------------------------------------------------------------------------------------
// Handshake Tests

var (
    NUM_BLOCKS = 6 // number of blocks in the test_data/many_blocks.cswal
    mempool    = types.MockMempool{}

    testPartSize int
)

//---------------------------------------
@@ -307,6 +305,21 @@ func TestHandshakeReplayNone(t *testing.T) {
    }
}

func writeWAL(walMsgs []byte) string {
    walFile, err := ioutil.TempFile("", "wal")
    if err != nil {
        panic(fmt.Errorf("failed to create temp WAL file: %v", err))
    }
    _, err = walFile.Write(walMsgs)
    if err != nil {
        panic(fmt.Errorf("failed to write to temp WAL file: %v", err))
    }
    if err := walFile.Close(); err != nil {
        panic(fmt.Errorf("failed to close temp WAL file: %v", err))
    }
    return walFile.Name()
}

// Make some blocks. Start a fresh app and apply nBlocks blocks. Then restart the app and sync it up with the remaining blocks
func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
    config := ResetConfig("proxy_test_")
@@ -316,18 +329,17 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
    if err != nil {
        t.Fatal(err)
    }
    walFile := writeWAL(string(walBody))
    walFile := writeWAL(walBody)
    config.Consensus.SetWalFile(walFile)

    privVal := types.LoadPrivValidator(config.PrivValidatorFile())
    testPartSize = config.Consensus.BlockPartSize
    privVal := types.LoadPrivValidatorFS(config.PrivValidatorFile())

    wal, err := NewWAL(walFile, false)
    if err != nil {
        t.Fatal(err)
    }
    wal.SetLogger(log.TestingLogger())
    if _, err := wal.Start(); err != nil {
    if err := wal.Start(); err != nil {
        t.Fatal(err)
    }
    chain, commits, err := makeBlockchainFromWAL(wal)
@@ -335,7 +347,7 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
        t.Fatalf(err.Error())
    }

    state, store := stateAndStore(config, privVal.PubKey)
    state, store := stateAndStore(config, privVal.GetPubKey())
    store.chain = chain
    store.commits = commits

@@ -349,19 +361,19 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
        // run nBlocks against a new client to build up the app state.
        // use a throwaway tendermint state
        proxyApp := proxy.NewAppConns(clientCreator2, nil)
        state, _ := stateAndStore(config, privVal.PubKey)
        state, _ := stateAndStore(config, privVal.GetPubKey())
        buildAppStateFromChain(proxyApp, state, chain, nBlocks, mode)
    }

    // now start the app using the handshake - it should sync
    handshaker := NewHandshaker(state, store)
    proxyApp := proxy.NewAppConns(clientCreator2, handshaker)
    if _, err := proxyApp.Start(); err != nil {
    if err := proxyApp.Start(); err != nil {
        t.Fatalf("Error starting proxy app connections: %v", err)
    }

    // get the latest app hash from the app
    res, err := proxyApp.Query().InfoSync()
    res, err := proxyApp.Query().InfoSync(abci.RequestInfo{""})
    if err != nil {
        t.Fatal(err)
    }
@@ -384,7 +396,8 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
}

func applyBlock(st *sm.State, blk *types.Block, proxyApp proxy.AppConns) {
    err := st.ApplyBlock(nil, proxyApp.Consensus(), blk, blk.MakePartSet(testPartSize).Header(), mempool)
    testPartSize := st.Params.BlockPartSizeBytes
    err := st.ApplyBlock(types.NopEventBus{}, proxyApp.Consensus(), blk, blk.MakePartSet(testPartSize).Header(), mempool)
    if err != nil {
        panic(err)
    }
@@ -393,12 +406,14 @@ func applyBlock(st *sm.State, blk *types.Block, proxyApp proxy.AppConns) {
func buildAppStateFromChain(proxyApp proxy.AppConns,
    state *sm.State, chain []*types.Block, nBlocks int, mode uint) {
    // start a new app without handshake, play nBlocks blocks
    if _, err := proxyApp.Start(); err != nil {
    if err := proxyApp.Start(); err != nil {
        panic(err)
    }

    validators := types.TM2PB.Validators(state.Validators)
    proxyApp.Consensus().InitChainSync(validators)
    if _, err := proxyApp.Consensus().InitChainSync(abci.RequestInitChain{validators}); err != nil {
        panic(err)
    }

    defer proxyApp.Stop()
    switch mode {
@@ -426,13 +441,15 @@ func buildTMStateFromChain(config *cfg.Config, state *sm.State, chain []*types.B
    // run the whole chain against this client to build up the tendermint state
    clientCreator := proxy.NewLocalClientCreator(dummy.NewPersistentDummyApplication(path.Join(config.DBDir(), "1")))
    proxyApp := proxy.NewAppConns(clientCreator, nil) // sm.NewHandshaker(config, state, store, ReplayLastBlock))
    if _, err := proxyApp.Start(); err != nil {
    if err := proxyApp.Start(); err != nil {
        panic(err)
    }
    defer proxyApp.Stop()

    validators := types.TM2PB.Validators(state.Validators)
    proxyApp.Consensus().InitChainSync(validators)
    if _, err := proxyApp.Consensus().InitChainSync(abci.RequestInitChain{validators}); err != nil {
        panic(err)
    }

    var latestAppHash []byte

@@ -464,36 +481,33 @@ func buildTMStateFromChain(config *cfg.Config, state *sm.State, chain []*types.B
//--------------------------
// utils for making blocks

func makeBlockchainFromWAL(wal *WAL) ([]*types.Block, []*types.Commit, error) {
func makeBlockchainFromWAL(wal WAL) ([]*types.Block, []*types.Commit, error) {
    // Search for height marker
    gr, found, err := wal.group.Search("#ENDHEIGHT: ", makeHeightSearchFunc(0))
    gr, found, err := wal.SearchForEndHeight(0)
    if err != nil {
        return nil, nil, err
    }
    if !found {
        return nil, nil, errors.New(cmn.Fmt("WAL does not contain height %d.", 1))
    }
    defer gr.Close()
    defer gr.Close() // nolint: errcheck

    // log.Notice("Build a blockchain by reading from the WAL")

    var blockParts *types.PartSet
    var blocks []*types.Block
    var commits []*types.Commit
    for {
        line, err := gr.ReadLine()
        if err != nil {
            if err == io.EOF {
                break
            } else {
                return nil, nil, err
            }
        }

        piece, err := readPieceFromWAL([]byte(line))
        if err != nil {
    dec := NewWALDecoder(gr)
    for {
        msg, err := dec.Decode()
        if err == io.EOF {
            break
        } else if err != nil {
            return nil, nil, err
        }

        piece := readPieceFromWAL(msg)
        if piece == nil {
            continue
        }
@@ -503,7 +517,7 @@ func makeBlockchainFromWAL(wal *WAL) ([]*types.Block, []*types.Commit, error) {
            // if it's not the first one, we have a full block
            if blockParts != nil {
                var n int
                block := wire.ReadBinary(&types.Block{}, blockParts.GetReader(), types.MaxBlockSize, &n, &err).(*types.Block)
                block := wire.ReadBinary(&types.Block{}, blockParts.GetReader(), 0, &n, &err).(*types.Block)
                blocks = append(blocks, block)
            }
            blockParts = types.NewPartSetFromHeader(*p)
@@ -524,22 +538,15 @@ func makeBlockchainFromWAL(wal *WAL) ([]*types.Block, []*types.Commit, error) {
    }
    // grab the last block too
    var n int
    block := wire.ReadBinary(&types.Block{}, blockParts.GetReader(), types.MaxBlockSize, &n, &err).(*types.Block)
    block := wire.ReadBinary(&types.Block{}, blockParts.GetReader(), 0, &n, &err).(*types.Block)
    blocks = append(blocks, block)
    return blocks, commits, nil
}

func readPieceFromWAL(msgBytes []byte) (interface{}, error) {
    // Skip over empty and meta lines
    if len(msgBytes) == 0 || msgBytes[0] == '#' {
        return nil, nil
    }
    var err error
    var msg TimedWALMessage
    wire.ReadJSON(&msg, msgBytes, &err)
    if err != nil {
        fmt.Println("MsgBytes:", msgBytes, string(msgBytes))
        return nil, fmt.Errorf("Error reading json data: %v", err)
func readPieceFromWAL(msg *TimedWALMessage) interface{} {
    // skip meta messages
    if _, ok := msg.Msg.(EndHeightMessage); ok {
        return nil
    }

    // for logging
@@ -547,23 +554,24 @@ func readPieceFromWAL(msgBytes []byte) (interface{}, error) {
    case msgInfo:
        switch msg := m.Msg.(type) {
        case *ProposalMessage:
            return &msg.Proposal.BlockPartsHeader, nil
            return &msg.Proposal.BlockPartsHeader
        case *BlockPartMessage:
            return msg.Part, nil
            return msg.Part
        case *VoteMessage:
            return msg.Vote, nil
            return msg.Vote
        }
    }
    return nil, nil

    return nil
}

// fresh state and mock store
func stateAndStore(config *cfg.Config, pubKey crypto.PubKey) (*sm.State, *mockBlockStore) {
    stateDB := dbm.NewMemDB()
    state := sm.MakeGenesisStateFromFile(stateDB, config.GenesisFile())
    state, _ := sm.MakeGenesisStateFromFile(stateDB, config.GenesisFile())
    state.SetLogger(log.TestingLogger().With("module", "state"))

    store := NewMockBlockStore(config)
    store := NewMockBlockStore(config, state.Params)
    return state, store
}

@@ -572,30 +580,31 @@ func stateAndStore(config *cfg.Config, pubKey crypto.PubKey) (*sm.State, *mockBl

type mockBlockStore struct {
    config  *cfg.Config
    params  types.ConsensusParams
    chain   []*types.Block
    commits []*types.Commit
}

// TODO: NewBlockStore(db.NewMemDB) ...
func NewMockBlockStore(config *cfg.Config) *mockBlockStore {
    return &mockBlockStore{config, nil, nil}
func NewMockBlockStore(config *cfg.Config, params types.ConsensusParams) *mockBlockStore {
    return &mockBlockStore{config, params, nil, nil}
}

func (bs *mockBlockStore) Height() int                       { return len(bs.chain) }
func (bs *mockBlockStore) LoadBlock(height int) *types.Block { return bs.chain[height-1] }
func (bs *mockBlockStore) LoadBlockMeta(height int) *types.BlockMeta {
func (bs *mockBlockStore) Height() int64                       { return int64(len(bs.chain)) }
func (bs *mockBlockStore) LoadBlock(height int64) *types.Block { return bs.chain[height-1] }
func (bs *mockBlockStore) LoadBlockMeta(height int64) *types.BlockMeta {
    block := bs.chain[height-1]
    return &types.BlockMeta{
        BlockID: types.BlockID{block.Hash(), block.MakePartSet(bs.config.Consensus.BlockPartSize).Header()},
        BlockID: types.BlockID{block.Hash(), block.MakePartSet(bs.params.BlockPartSizeBytes).Header()},
        Header:  block.Header,
    }
}
func (bs *mockBlockStore) LoadBlockPart(height int, index int) *types.Part { return nil }
func (bs *mockBlockStore) LoadBlockPart(height int64, index int) *types.Part { return nil }
func (bs *mockBlockStore) SaveBlock(block *types.Block, blockParts *types.PartSet, seenCommit *types.Commit) {
}
func (bs *mockBlockStore) LoadBlockCommit(height int) *types.Commit {
func (bs *mockBlockStore) LoadBlockCommit(height int64) *types.Commit {
    return bs.commits[height-1]
}
func (bs *mockBlockStore) LoadSeenCommit(height int) *types.Commit {
func (bs *mockBlockStore) LoadSeenCommit(height int64) *types.Commit {
    return bs.commits[height-1]
}

@@ -4,8 +4,8 @@ import (
    "bytes"
    "errors"
    "fmt"
    "path"
    "reflect"
    "runtime/debug"
    "sync"
    "time"

@@ -16,6 +16,7 @@ import (
    "github.com/tendermint/tmlibs/log"

    cfg "github.com/tendermint/tendermint/config"
    cstypes "github.com/tendermint/tendermint/consensus/types"
    "github.com/tendermint/tendermint/proxy"
    sm "github.com/tendermint/tendermint/state"
    "github.com/tendermint/tendermint/types"
@@ -38,124 +39,6 @@ var (
    ErrVoteHeightMismatch = errors.New("Error vote height mismatch")
)

//-----------------------------------------------------------------------------
// RoundStepType enum type

// RoundStepType enumerates the state of the consensus state machine
type RoundStepType uint8 // These must be numeric, ordered.

const (
    RoundStepNewHeight     = RoundStepType(0x01) // Wait til CommitTime + timeoutCommit
    RoundStepNewRound      = RoundStepType(0x02) // Setup new round and go to RoundStepPropose
    RoundStepPropose       = RoundStepType(0x03) // Did propose, gossip proposal
    RoundStepPrevote       = RoundStepType(0x04) // Did prevote, gossip prevotes
    RoundStepPrevoteWait   = RoundStepType(0x05) // Did receive any +2/3 prevotes, start timeout
    RoundStepPrecommit     = RoundStepType(0x06) // Did precommit, gossip precommits
    RoundStepPrecommitWait = RoundStepType(0x07) // Did receive any +2/3 precommits, start timeout
    RoundStepCommit        = RoundStepType(0x08) // Entered commit state machine
    // NOTE: RoundStepNewHeight acts as RoundStepCommitWait.
)

// String returns a string
func (rs RoundStepType) String() string {
    switch rs {
    case RoundStepNewHeight:
        return "RoundStepNewHeight"
    case RoundStepNewRound:
        return "RoundStepNewRound"
    case RoundStepPropose:
        return "RoundStepPropose"
    case RoundStepPrevote:
        return "RoundStepPrevote"
    case RoundStepPrevoteWait:
        return "RoundStepPrevoteWait"
    case RoundStepPrecommit:
        return "RoundStepPrecommit"
    case RoundStepPrecommitWait:
        return "RoundStepPrecommitWait"
    case RoundStepCommit:
        return "RoundStepCommit"
    default:
        return "RoundStepUnknown" // Cannot panic.
    }
}

//-----------------------------------------------------------------------------

// RoundState defines the internal consensus state.
// It is Immutable when returned from ConsensusState.GetRoundState()
// TODO: Actually, only the top pointer is copied,
// so access to field pointers is still racy
type RoundState struct {
    Height             int // Height we are working on
    Round              int
    Step               RoundStepType
    StartTime          time.Time
    CommitTime         time.Time // Subjective time when +2/3 precommits for Block at Round were found
    Validators         *types.ValidatorSet
    Proposal           *types.Proposal
    ProposalBlock      *types.Block
    ProposalBlockParts *types.PartSet
    LockedRound        int
    LockedBlock        *types.Block
    LockedBlockParts   *types.PartSet
    Votes              *HeightVoteSet
    CommitRound        int //
    LastCommit         *types.VoteSet // Last precommits at Height-1
    LastValidators     *types.ValidatorSet
}

// RoundStateEvent returns the H/R/S of the RoundState as an event.
func (rs *RoundState) RoundStateEvent() types.EventDataRoundState {
    edrs := types.EventDataRoundState{
        Height:     rs.Height,
        Round:      rs.Round,
        Step:       rs.Step.String(),
        RoundState: rs,
    }
    return edrs
}

// String returns a string
func (rs *RoundState) String() string {
    return rs.StringIndented("")
}

// StringIndented returns a string
func (rs *RoundState) StringIndented(indent string) string {
    return fmt.Sprintf(`RoundState{
%s  H:%v R:%v S:%v
%s  StartTime:     %v
%s  CommitTime:    %v
%s  Validators:    %v
%s  Proposal:      %v
%s  ProposalBlock: %v %v
%s  LockedRound:   %v
%s  LockedBlock:   %v %v
%s  Votes:         %v
%s  LastCommit:    %v
%s  LastValidators: %v
%s}`,
        indent, rs.Height, rs.Round, rs.Step,
        indent, rs.StartTime,
        indent, rs.CommitTime,
        indent, rs.Validators.StringIndented(indent+"  "),
        indent, rs.Proposal,
        indent, rs.ProposalBlockParts.StringShort(), rs.ProposalBlock.StringShort(),
        indent, rs.LockedRound,
        indent, rs.LockedBlockParts.StringShort(), rs.LockedBlock.StringShort(),
        indent, rs.Votes.StringIndented(indent+"  "),
        indent, rs.LastCommit.StringShort(),
        indent, rs.LastValidators.StringIndented(indent+"  "),
        indent)
}

// StringShort returns a string
func (rs *RoundState) StringShort() string {
    return fmt.Sprintf(`RoundState{H:%v R:%v S:%v ST:%v}`,
        rs.Height, rs.Round, rs.Step, rs.StartTime)
}

//-----------------------------------------------------------------------------

var (
@@ -170,24 +53,16 @@ type msgInfo struct {

// internally generated messages which may update the state
type timeoutInfo struct {
    Duration time.Duration `json:"duration"`
    Height   int           `json:"height"`
    Round    int           `json:"round"`
    Step     RoundStepType `json:"step"`
    Duration time.Duration         `json:"duration"`
    Height   int64                 `json:"height"`
    Round    int                   `json:"round"`
    Step     cstypes.RoundStepType `json:"step"`
}

func (ti *timeoutInfo) String() string {
    return fmt.Sprintf("%v ; %d/%d %v", ti.Duration, ti.Height, ti.Round, ti.Step)
}

// PrivValidator is a validator that can sign votes and proposals.
type PrivValidator interface {
    GetAddress() []byte
    SignVote(chainID string, vote *types.Vote) error
    SignProposal(chainID string, proposal *types.Proposal) error
    SignHeartbeat(chainID string, heartbeat *types.Heartbeat) error
}
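
Declaring `PrivValidator` as an interface (rather than a concrete struct) is what lets the replay tests swap in stubs without touching the file-backed validator. A minimal sketch of that idea over deliberately simplified stand-in types (the real interface takes `*types.Vote`, `*types.Proposal`, and `*types.Heartbeat`):

```go
package main

import "fmt"

// Simplified stand-in for types.Vote, for illustration only.
type Vote struct{ Signature []byte }

type PrivValidator interface {
	GetAddress() []byte
	SignVote(chainID string, vote *Vote) error
}

// mockPrivValidator stands in for the file-backed validator in tests:
// it "signs" deterministically and never touches disk.
type mockPrivValidator struct{ addr []byte }

func (m mockPrivValidator) GetAddress() []byte { return m.addr }

func (m mockPrivValidator) SignVote(chainID string, vote *Vote) error {
	vote.Signature = []byte("sig:" + chainID)
	return nil
}

func main() {
	var pv PrivValidator = mockPrivValidator{addr: []byte{0x01}}
	v := &Vote{}
	if err := pv.SignVote("test-chain", v); err != nil {
		fmt.Println("sign failed:", err)
		return
	}
	fmt.Printf("signed by %x: %s\n", pv.GetAddress(), v.Signature)
}
```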

// ConsensusState handles execution of the consensus algorithm.
// It processes votes and proposals, and upon reaching agreement,
// commits blocks to the chain and executes them against the application.
@@ -197,7 +72,7 @@ type ConsensusState struct {

    // config details
    config        *cfg.ConsensusConfig
    privValidator PrivValidator // for signing votes
    privValidator types.PrivValidator // for signing votes

    // services for creating and executing blocks
    proxyAppConn proxy.AppConnConsensus
@@ -206,7 +81,7 @@ type ConsensusState struct {

    // internal state
    mtx sync.Mutex
    RoundState
    cstypes.RoundState
    state *sm.State // State until height-1.

    // state changes may be triggered by msgs from peers,
@@ -215,21 +90,22 @@ type ConsensusState struct {
    internalMsgQueue chan msgInfo
    timeoutTicker    TimeoutTicker

    // we use PubSub to trigger msg broadcasts in the reactor,
    // we use eventBus to trigger msg broadcasts in the reactor,
    // and to notify external subscribers, eg. through a websocket
    evsw types.EventSwitch
    eventBus *types.EventBus

    // a Write-Ahead Log ensures we can recover from any kind of crash
    // and helps us avoid signing conflicting votes
    wal        *WAL
    replayMode bool // so we don't log signing errors during replay
    wal          WAL
    replayMode   bool // so we don't log signing errors during replay
    doWALCatchup bool // determines if we even try to do the catchup

    // for tests where we want to limit the number of transitions the state makes
    nSteps int

    // some functions can be overwritten for testing
    decideProposal func(height, round int)
    doPrevote      func(height, round int)
    decideProposal func(height int64, round int)
    doPrevote      func(height int64, round int)
    setProposal    func(proposal *types.Proposal) error

    // closed when we finish shutting down
@@ -247,6 +123,8 @@ func NewConsensusState(config *cfg.ConsensusConfig, state *sm.State, proxyAppCon
        internalMsgQueue: make(chan msgInfo, msgQueueSize),
        timeoutTicker:    NewTimeoutTicker(),
        done:             make(chan struct{}),
        doWALCatchup:     true,
        wal:              nilWAL{},
    }
    // set function defaults (may be overwritten before calling Start)
    cs.decideProposal = cs.defaultDecideProposal
@@ -270,9 +148,9 @@ func (cs *ConsensusState) SetLogger(l log.Logger) {
    cs.timeoutTicker.SetLogger(l)
}

// SetEventSwitch implements events.Eventable
func (cs *ConsensusState) SetEventSwitch(evsw types.EventSwitch) {
    cs.evsw = evsw
// SetEventBus sets event bus.
func (cs *ConsensusState) SetEventBus(b *types.EventBus) {
    cs.eventBus = b
}

// String returns a string.
@@ -289,26 +167,26 @@ func (cs *ConsensusState) GetState() *sm.State {
}

// GetRoundState returns a copy of the internal consensus state.
func (cs *ConsensusState) GetRoundState() *RoundState {
func (cs *ConsensusState) GetRoundState() *cstypes.RoundState {
    cs.mtx.Lock()
    defer cs.mtx.Unlock()
    return cs.getRoundState()
}

func (cs *ConsensusState) getRoundState() *RoundState {
func (cs *ConsensusState) getRoundState() *cstypes.RoundState {
    rs := cs.RoundState // copy
    return &rs
}

// GetValidators returns a copy of the current validators.
func (cs *ConsensusState) GetValidators() (int, []*types.Validator) {
func (cs *ConsensusState) GetValidators() (int64, []*types.Validator) {
    cs.mtx.Lock()
    defer cs.mtx.Unlock()
    return cs.state.LastBlockHeight, cs.state.Validators.Copy().Validators
}

// SetPrivValidator sets the private validator account for signing votes.
func (cs *ConsensusState) SetPrivValidator(priv PrivValidator) {
func (cs *ConsensusState) SetPrivValidator(priv types.PrivValidator) {
    cs.mtx.Lock()
    defer cs.mtx.Unlock()
    cs.privValidator = priv
@@ -322,7 +200,7 @@ func (cs *ConsensusState) SetTimeoutTicker(timeoutTicker TimeoutTicker) {
}

// LoadCommit loads the commit for a given height.
func (cs *ConsensusState) LoadCommit(height int) *types.Commit {
func (cs *ConsensusState) LoadCommit(height int64) *types.Commit {
    cs.mtx.Lock()
    defer cs.mtx.Unlock()
    if height == cs.blockStore.Height() {
@@ -334,26 +212,36 @@ func (cs *ConsensusState) LoadCommit(height int) *types.Commit {
// OnStart implements cmn.Service.
// It loads the latest state via the WAL, and starts the timeout and receive routines.
func (cs *ConsensusState) OnStart() error {

    walFile := cs.config.WalFile()
    if err := cs.OpenWAL(walFile); err != nil {
        cs.Logger.Error("Error loading ConsensusState wal", "err", err.Error())
        return err
    // we may set the WAL in testing before calling Start,
    // so only OpenWAL if it's still the nilWAL
    if _, ok := cs.wal.(nilWAL); ok {
        walFile := cs.config.WalFile()
        wal, err := cs.OpenWAL(walFile)
        if err != nil {
            cs.Logger.Error("Error loading ConsensusState wal", "err", err.Error())
            return err
        }
        cs.wal = wal
    }

    // we need the timeoutRoutine for replay so
    // we don't block on the tick chan.
    // we don't block on the tick chan.
    // NOTE: we will get a build up of garbage go routines
    // firing on the tockChan until the receiveRoutine is started
    // to deal with them (by that point, at most one will be valid)
    cs.timeoutTicker.Start()
    // firing on the tockChan until the receiveRoutine is started
    // to deal with them (by that point, at most one will be valid)
    err := cs.timeoutTicker.Start()
    if err != nil {
        return err
    }

    // we may have lost some votes if the process crashed
    // reload from consensus log to catchup
    if err := cs.catchupReplay(cs.Height); err != nil {
        cs.Logger.Error("Error on catchup replay. Proceeding to start ConsensusState anyway", "err", err.Error())
        // NOTE: if we ever do return an error here,
        // make sure to stop the timeoutTicker
    if cs.doWALCatchup {
        if err := cs.catchupReplay(cs.Height); err != nil {
            cs.Logger.Error("Error on catchup replay. Proceeding to start ConsensusState anyway", "err", err.Error())
            // NOTE: if we ever do return an error here,
            // make sure to stop the timeoutTicker
        }
    }

    // now start the receiveRoutine
@@ -369,7 +257,11 @@ func (cs *ConsensusState) OnStart() error {
// timeoutRoutine: receive requests for timeouts on tickChan and fire timeouts on tockChan
// receiveRoutine: serializes processing of proposals, block parts, votes; coordinates state transitions
func (cs *ConsensusState) startRoutines(maxSteps int) {
    cs.timeoutTicker.Start()
    err := cs.timeoutTicker.Start()
    if err != nil {
        cs.Logger.Error("Error starting timeout ticker", "err", err)
        return
    }
    go cs.receiveRoutine(maxSteps)
}

@@ -380,7 +272,7 @@ func (cs *ConsensusState) OnStop() {
    cs.timeoutTicker.Stop()

    // Make BaseService.Wait() wait until cs.wal.Wait()
    if cs.wal != nil && cs.IsRunning() {
    if cs.IsRunning() {
        cs.wal.Wait()
    }
}
@@ -393,25 +285,17 @@ func (cs *ConsensusState) Wait() {
}

// OpenWAL opens a file to log all consensus messages and timeouts for deterministic accountability
func (cs *ConsensusState) OpenWAL(walFile string) (err error) {
    err = cmn.EnsureDir(path.Dir(walFile), 0700)
    if err != nil {
        cs.Logger.Error("Error ensuring ConsensusState wal dir", "err", err.Error())
        return err
    }

    cs.mtx.Lock()
    defer cs.mtx.Unlock()
func (cs *ConsensusState) OpenWAL(walFile string) (WAL, error) {
    wal, err := NewWAL(walFile, cs.config.WalLight)
    if err != nil {
        return err
        cs.Logger.Error("Failed to open WAL for consensus state", "wal", walFile, "err", err)
        return nil, err
    }
    wal.SetLogger(cs.Logger.With("wal", walFile))
    if _, err := wal.Start(); err != nil {
        return err
    if err := wal.Start(); err != nil {
        return nil, err
    }
    cs.wal = wal
    return nil
    return wal, nil
}

//------------------------------------------------------------
@@ -447,7 +331,7 @@ func (cs *ConsensusState) SetProposal(proposal *types.Proposal, peerKey string)
}

// AddProposalBlockPart inputs a part of the proposal block.
func (cs *ConsensusState) AddProposalBlockPart(height, round int, part *types.Part, peerKey string) error {
func (cs *ConsensusState) AddProposalBlockPart(height int64, round int, part *types.Part, peerKey string) error {

    if peerKey == "" {
        cs.internalMsgQueue <- msgInfo{&BlockPartMessage{height, round, part}, ""}
@@ -461,35 +345,39 @@ func (cs *ConsensusState) AddProposalBlockPart(height, round int, part *types.Pa

// SetProposalAndBlock inputs the proposal and all block parts.
func (cs *ConsensusState) SetProposalAndBlock(proposal *types.Proposal, block *types.Block, parts *types.PartSet, peerKey string) error {
    cs.SetProposal(proposal, peerKey)
    if err := cs.SetProposal(proposal, peerKey); err != nil {
        return err
    }
    for i := 0; i < parts.Total(); i++ {
        part := parts.GetPart(i)
        cs.AddProposalBlockPart(proposal.Height, proposal.Round, part, peerKey)
        if err := cs.AddProposalBlockPart(proposal.Height, proposal.Round, part, peerKey); err != nil {
            return err
        }
    }
    return nil // TODO errors
    return nil
}

//------------------------------------------------------------
// internal functions for managing the state

func (cs *ConsensusState) updateHeight(height int) {
func (cs *ConsensusState) updateHeight(height int64) {
    cs.Height = height
}

func (cs *ConsensusState) updateRoundStep(round int, step RoundStepType) {
func (cs *ConsensusState) updateRoundStep(round int, step cstypes.RoundStepType) {
    cs.Round = round
    cs.Step = step
}

// enterNewRound(height, 0) at cs.StartTime.
func (cs *ConsensusState) scheduleRound0(rs *RoundState) {
func (cs *ConsensusState) scheduleRound0(rs *cstypes.RoundState) {
    //cs.Logger.Info("scheduleRound0", "now", time.Now(), "startTime", cs.StartTime)
    sleepDuration := rs.StartTime.Sub(time.Now())
    cs.scheduleTimeout(sleepDuration, rs.Height, 0, RoundStepNewHeight)
    sleepDuration := rs.StartTime.Sub(time.Now()) // nolint: gotype, gosimple
    cs.scheduleTimeout(sleepDuration, rs.Height, 0, cstypes.RoundStepNewHeight)
}

// Attempt to schedule a timeout (by sending timeoutInfo on the tickChan)
func (cs *ConsensusState) scheduleTimeout(duration time.Duration, height, round int, step RoundStepType) {
func (cs *ConsensusState) scheduleTimeout(duration time.Duration, height int64, round int, step cstypes.RoundStepType) {
    cs.timeoutTicker.ScheduleTimeout(timeoutInfo{duration, height, round, step})
}
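
The tick/tock split the comments describe (requests in on `tickChan`, expirations out on `tockChan`) is a small pattern worth seeing in isolation. A toy, self-contained sketch of a timeout routine with that shape; none of this is the real `TimeoutTicker` implementation, just the channel discipline it implies:

```go
package main

import (
	"fmt"
	"time"
)

type timeoutInfo struct {
	Duration time.Duration
	Height   int64
	Round    int
}

func main() {
	tickCh := make(chan timeoutInfo)
	tockCh := make(chan timeoutInfo)

	// Toy timeout routine: receive a request on tickCh, fire it on tockCh
	// after the duration elapses, roughly what the ticker behind
	// scheduleTimeout does (minus deduplication of stale timeouts).
	go func() {
		for ti := range tickCh {
			ti := ti // shadow the loop variable for the closure
			time.AfterFunc(ti.Duration, func() { tockCh <- ti })
		}
	}()

	tickCh <- timeoutInfo{Duration: 50 * time.Millisecond, Height: 1, Round: 0}
	ti := <-tockCh
	fmt.Printf("timeout fired for height %d round %d\n", ti.Height, ti.Round)
}
```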
|
||||
|
||||
@@ -514,7 +402,7 @@ func (cs *ConsensusState) reconstructLastCommit(state *sm.State) {
|
||||
return
|
||||
}
|
||||
seenCommit := cs.blockStore.LoadSeenCommit(state.LastBlockHeight)
|
||||
lastPrecommits := types.NewVoteSet(cs.state.ChainID, state.LastBlockHeight, seenCommit.Round(), types.VoteTypePrecommit, state.LastValidators)
|
||||
lastPrecommits := types.NewVoteSet(state.ChainID, state.LastBlockHeight, seenCommit.Round(), types.VoteTypePrecommit, state.LastValidators)
|
||||
for _, precommit := range seenCommit.Precommits {
|
||||
if precommit == nil {
|
||||
continue
|
||||
@@ -531,7 +419,7 @@ func (cs *ConsensusState) reconstructLastCommit(state *sm.State) {
|
||||
}
|
||||
|
||||
// Updates ConsensusState and increments height to match that of state.
|
||||
// The round becomes 0 and cs.Step becomes RoundStepNewHeight.
|
||||
// The round becomes 0 and cs.Step becomes cstypes.RoundStepNewHeight.
|
||||
func (cs *ConsensusState) updateToState(state *sm.State) {
|
||||
if cs.CommitRound > -1 && 0 < cs.Height && cs.Height != state.LastBlockHeight {
|
||||
cmn.PanicSanity(cmn.Fmt("updateToState() expected state height of %v but found %v",
|
||||
@@ -567,7 +455,7 @@ func (cs *ConsensusState) updateToState(state *sm.State) {
|
||||
|
||||
// RoundState fields
|
||||
cs.updateHeight(height)
|
||||
cs.updateRoundStep(0, RoundStepNewHeight)
|
||||
cs.updateRoundStep(0, cstypes.RoundStepNewHeight)
|
||||
if cs.CommitTime.IsZero() {
|
||||
// "Now" makes it easier to sync up dev nodes.
|
||||
// We add timeoutCommit to allow transactions
|
||||
@@ -585,7 +473,7 @@ func (cs *ConsensusState) updateToState(state *sm.State) {
|
||||
cs.LockedRound = 0
|
||||
cs.LockedBlock = nil
|
||||
cs.LockedBlockParts = nil
|
||||
cs.Votes = NewHeightVoteSet(state.ChainID, height, validators)
|
||||
cs.Votes = cstypes.NewHeightVoteSet(state.ChainID, height, validators)
|
||||
cs.CommitRound = -1
|
||||
cs.LastCommit = lastPrecommits
|
||||
cs.LastValidators = state.LastValidators
|
||||
@@ -600,9 +488,9 @@ func (cs *ConsensusState) newStep() {
|
||||
rs := cs.RoundStateEvent()
|
||||
cs.wal.Save(rs)
|
||||
cs.nSteps += 1
|
||||
// newStep is called by updateToStep in NewConsensusState before the evsw is set!
|
||||
if cs.evsw != nil {
|
||||
types.FireEventNewRoundStep(cs.evsw, rs)
|
||||
// newStep is called by updateToStep in NewConsensusState before the eventBus is set!
|
||||
if cs.eventBus != nil {
|
||||
cs.eventBus.PublishEventNewRoundStep(rs)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -617,7 +505,7 @@ func (cs *ConsensusState) newStep() {
|
||||
func (cs *ConsensusState) receiveRoutine(maxSteps int) {
|
||||
defer func() {
|
||||
if r := recover(); r != nil {
|
||||
cs.Logger.Error("CONSENSUS FAILURE!!!", "err", r)
|
||||
cs.Logger.Error("CONSENSUS FAILURE!!!", "err", r, "stack", string(debug.Stack()))
|
||||
}
|
||||
}()
|
||||
|
||||
@@ -656,9 +544,7 @@ func (cs *ConsensusState) receiveRoutine(maxSteps int) {
|
||||
// priv_val tracks LastSig
|
||||
|
||||
// close wal now that we're done writing to it
|
||||
if cs.wal != nil {
|
||||
cs.wal.Stop()
|
||||
}
|
||||
cs.wal.Stop()
|
||||
|
||||
close(cs.done)
|
||||
return
|
||||
@@ -706,7 +592,7 @@ func (cs *ConsensusState) handleMsg(mi msgInfo) {
|
||||
}
|
||||
}
|
||||
|
||||
func (cs *ConsensusState) handleTimeout(ti timeoutInfo, rs RoundState) {
|
||||
func (cs *ConsensusState) handleTimeout(ti timeoutInfo, rs cstypes.RoundState) {
|
||||
cs.Logger.Debug("Received tock", "timeout", ti.Duration, "height", ti.Height, "round", ti.Round, "step", ti.Step)
|
||||
|
||||
// timeouts must be for current height, round, step
|
||||
@@ -720,20 +606,20 @@ func (cs *ConsensusState) handleTimeout(ti timeoutInfo, rs RoundState) {
|
||||
defer cs.mtx.Unlock()
|
||||
|
||||
switch ti.Step {
|
||||
case RoundStepNewHeight:
|
||||
case cstypes.RoundStepNewHeight:
|
||||
// NewRound event fired from enterNewRound.
|
||||
// XXX: should we fire timeout here (for timeout commit)?
|
||||
cs.enterNewRound(ti.Height, 0)
|
||||
case RoundStepNewRound:
|
||||
case cstypes.RoundStepNewRound:
|
||||
cs.enterPropose(ti.Height, 0)
|
||||
case RoundStepPropose:
|
||||
types.FireEventTimeoutPropose(cs.evsw, cs.RoundStateEvent())
|
||||
case cstypes.RoundStepPropose:
|
||||
cs.eventBus.PublishEventTimeoutPropose(cs.RoundStateEvent())
|
||||
cs.enterPrevote(ti.Height, ti.Round)
|
||||
case RoundStepPrevoteWait:
|
||||
types.FireEventTimeoutWait(cs.evsw, cs.RoundStateEvent())
|
||||
case cstypes.RoundStepPrevoteWait:
|
||||
cs.eventBus.PublishEventTimeoutWait(cs.RoundStateEvent())
|
||||
cs.enterPrecommit(ti.Height, ti.Round)
|
||||
case RoundStepPrecommitWait:
|
||||
types.FireEventTimeoutWait(cs.evsw, cs.RoundStateEvent())
|
||||
case cstypes.RoundStepPrecommitWait:
|
||||
cs.eventBus.PublishEventTimeoutWait(cs.RoundStateEvent())
|
||||
cs.enterNewRound(ti.Height, ti.Round+1)
|
||||
default:
|
||||
panic(cmn.Fmt("Invalid timeout step: %v", ti.Step))
|
||||
@@ -741,7 +627,7 @@ func (cs *ConsensusState) handleTimeout(ti timeoutInfo, rs RoundState) {
|
||||
|
||||
}
|
||||
|
||||
func (cs *ConsensusState) handleTxsAvailable(height int) {
|
||||
func (cs *ConsensusState) handleTxsAvailable(height int64) {
|
||||
cs.mtx.Lock()
|
||||
defer cs.mtx.Unlock()
|
||||
// we only need to do this for round 0
|
||||
@@ -758,8 +644,8 @@ func (cs *ConsensusState) handleTxsAvailable(height int) {
|
||||
// Enter: +2/3 precommits for nil at (height,round-1)
|
||||
// Enter: +2/3 prevotes any or +2/3 precommits for block or any from (height, round)
|
||||
// NOTE: cs.StartTime was already set for height.
|
||||
func (cs *ConsensusState) enterNewRound(height int, round int) {
|
||||
if cs.Height != height || round < cs.Round || (cs.Round == round && cs.Step != RoundStepNewHeight) {
|
||||
func (cs *ConsensusState) enterNewRound(height int64, round int) {
|
||||
if cs.Height != height || round < cs.Round || (cs.Round == round && cs.Step != cstypes.RoundStepNewHeight) {
|
||||
cs.Logger.Debug(cmn.Fmt("enterNewRound(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
|
||||
return
|
||||
}
|
||||
@@ -780,7 +666,7 @@ func (cs *ConsensusState) enterNewRound(height int, round int) {
|
||||
// Setup new round
|
||||
// we don't fire newStep for this step,
|
||||
// but we fire an event, so update the round step first
|
||||
cs.updateRoundStep(round, RoundStepNewRound)
|
||||
cs.updateRoundStep(round, cstypes.RoundStepNewRound)
|
||||
cs.Validators = validators
|
||||
if round == 0 {
|
||||
// We've already reset these upon new height,
|
||||
@@ -793,7 +679,7 @@ func (cs *ConsensusState) enterNewRound(height int, round int) {
|
||||
}
|
||||
cs.Votes.SetRound(round + 1) // also track next round (round+1) to allow round-skipping
|
||||
|
||||
types.FireEventNewRound(cs.evsw, cs.RoundStateEvent())
|
||||
cs.eventBus.PublishEventNewRound(cs.RoundStateEvent())
|
||||
|
||||
// Wait for txs to be available in the mempool
|
||||
// before we enterPropose in round 0. If the last block changed the app hash,
|
||||
@@ -801,7 +687,7 @@ func (cs *ConsensusState) enterNewRound(height int, round int) {
|
||||
waitForTxs := cs.config.WaitForTxs() && round == 0 && !cs.needProofBlock(height)
|
||||
if waitForTxs {
|
||||
if cs.config.CreateEmptyBlocksInterval > 0 {
|
||||
cs.scheduleTimeout(cs.config.EmptyBlocksInterval(), height, round, RoundStepNewRound)
|
||||
cs.scheduleTimeout(cs.config.EmptyBlocksInterval(), height, round, cstypes.RoundStepNewRound)
|
||||
}
|
||||
go cs.proposalHeartbeat(height, round)
|
||||
} else {
|
||||
@@ -811,19 +697,16 @@ func (cs *ConsensusState) enterNewRound(height int, round int) {
|
||||
|
||||
// needProofBlock returns true on the first height (so the genesis app hash is signed right away)
|
||||
// and where the last block (height-1) caused the app hash to change
|
||||
func (cs *ConsensusState) needProofBlock(height int) bool {
|
||||
func (cs *ConsensusState) needProofBlock(height int64) bool {
|
||||
if height == 1 {
|
||||
return true
|
||||
}
|
||||
|
||||
lastBlockMeta := cs.blockStore.LoadBlockMeta(height - 1)
|
||||
if !bytes.Equal(cs.state.AppHash, lastBlockMeta.Header.AppHash) {
|
||||
return true
|
||||
}
|
||||
return false
|
||||
return !bytes.Equal(cs.state.AppHash, lastBlockMeta.Header.AppHash)
|
||||
}
|
||||
|
||||
func (cs *ConsensusState) proposalHeartbeat(height, round int) {
|
||||
func (cs *ConsensusState) proposalHeartbeat(height int64, round int) {
|
||||
counter := 0
|
||||
addr := cs.privValidator.GetAddress()
|
||||
valIndex, v := cs.Validators.GetByAddress(addr)
|
||||
@@ -831,10 +714,11 @@ func (cs *ConsensusState) proposalHeartbeat(height, round int) {
|
||||
// not a validator
|
||||
valIndex = -1
|
||||
}
|
||||
chainID := cs.state.ChainID
|
||||
for {
|
||||
rs := cs.GetRoundState()
|
||||
// if we've already moved on, no need to send more heartbeats
|
||||
if rs.Step > RoundStepNewRound || rs.Round > round || rs.Height > height {
|
||||
if rs.Step > cstypes.RoundStepNewRound || rs.Round > round || rs.Height > height {
|
||||
return
|
||||
}
|
||||
heartbeat := &types.Heartbeat{
|
||||
@@ -844,9 +728,8 @@ func (cs *ConsensusState) proposalHeartbeat(height, round int) {
|
||||
ValidatorAddress: addr,
|
||||
ValidatorIndex: valIndex,
|
||||
}
|
||||
cs.privValidator.SignHeartbeat(cs.state.ChainID, heartbeat)
|
||||
heartbeatEvent := types.EventDataProposalHeartbeat{heartbeat}
|
||||
types.FireEventProposalHeartbeat(cs.evsw, heartbeatEvent)
|
||||
cs.privValidator.SignHeartbeat(chainID, heartbeat)
|
||||
cs.eventBus.PublishEventProposalHeartbeat(types.EventDataProposalHeartbeat{heartbeat})
|
||||
counter += 1
|
||||
time.Sleep(proposalHeartbeatIntervalSeconds * time.Second)
|
||||
}
|
||||
@@ -855,8 +738,8 @@ func (cs *ConsensusState) proposalHeartbeat(height, round int) {
|
||||
// Enter (CreateEmptyBlocks): from enterNewRound(height,round)
|
||||
// Enter (CreateEmptyBlocks, CreateEmptyBlocksInterval > 0 ): after enterNewRound(height,round), after timeout of CreateEmptyBlocksInterval
|
||||
// Enter (!CreateEmptyBlocks) : after enterNewRound(height,round), once txs are in the mempool
|
||||
func (cs *ConsensusState) enterPropose(height int, round int) {
|
||||
if cs.Height != height || round < cs.Round || (cs.Round == round && RoundStepPropose <= cs.Step) {
|
||||
func (cs *ConsensusState) enterPropose(height int64, round int) {
|
||||
if cs.Height != height || round < cs.Round || (cs.Round == round && cstypes.RoundStepPropose <= cs.Step) {
|
||||
cs.Logger.Debug(cmn.Fmt("enterPropose(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
|
||||
return
|
||||
}
|
||||
@@ -864,7 +747,7 @@ func (cs *ConsensusState) enterPropose(height int, round int) {
|
||||
|
||||
defer func() {
|
||||
// Done enterPropose:
|
||||
cs.updateRoundStep(round, RoundStepPropose)
|
||||
cs.updateRoundStep(round, cstypes.RoundStepPropose)
|
||||
cs.newStep()
|
||||
|
||||
// If we have the whole proposal + POL, then goto Prevote now.
|
||||
@@ -876,7 +759,7 @@ func (cs *ConsensusState) enterPropose(height int, round int) {
|
||||
}()
|
||||
|
||||
// If we don't get the proposal and all block parts quick enough, enterPrevote
|
||||
cs.scheduleTimeout(cs.config.Propose(round), height, round, RoundStepPropose)
|
||||
cs.scheduleTimeout(cs.config.Propose(round), height, round, cstypes.RoundStepPropose)
|
||||
|
||||
// Nothing more to do if we're not a validator
|
||||
if cs.privValidator == nil {
|
||||
@@ -902,7 +785,7 @@ func (cs *ConsensusState) isProposer() bool {
|
||||
return bytes.Equal(cs.Validators.GetProposer().Address, cs.privValidator.GetAddress())
|
||||
}
|
||||
|
||||
func (cs *ConsensusState) defaultDecideProposal(height, round int) {
|
||||
func (cs *ConsensusState) defaultDecideProposal(height int64, round int) {
|
||||
var block *types.Block
|
||||
var blockParts *types.PartSet
|
||||
|
||||
@@ -921,8 +804,7 @@ func (cs *ConsensusState) defaultDecideProposal(height, round int) {
|
||||
// Make proposal
|
||||
polRound, polBlockID := cs.Votes.POLInfo()
|
||||
proposal := types.NewProposal(height, round, blockParts.Header(), polRound, polBlockID)
|
||||
err := cs.privValidator.SignProposal(cs.state.ChainID, proposal)
|
||||
if err == nil {
|
||||
if err := cs.privValidator.SignProposal(cs.state.ChainID, proposal); err == nil {
|
||||
// Set fields
|
||||
/* fields set by setProposal and addBlockPart
|
||||
cs.Proposal = proposal
|
||||
@@ -981,9 +863,9 @@ func (cs *ConsensusState) createProposalBlock() (block *types.Block, blockParts
|
||||
|
||||
// Mempool validated transactions
|
||||
txs := cs.mempool.Reap(cs.config.MaxBlockSizeTxs)
|
||||
|
||||
return types.MakeBlock(cs.Height, cs.state.ChainID, txs, commit,
|
||||
cs.state.LastBlockID, cs.state.Validators.Hash(), cs.state.AppHash, cs.config.BlockPartSize)
|
||||
cs.state.LastBlockID, cs.state.Validators.Hash(),
|
||||
cs.state.AppHash, cs.state.Params.BlockPartSizeBytes)
|
||||
}
|
||||
|
||||
// Enter: `timeoutPropose` after entering Propose.
|
||||
@@ -991,21 +873,21 @@ func (cs *ConsensusState) createProposalBlock() (block *types.Block, blockParts
 // Enter: any +2/3 prevotes for future round.
 // Prevote for LockedBlock if we're locked, or ProposalBlock if valid.
 // Otherwise vote nil.
-func (cs *ConsensusState) enterPrevote(height int, round int) {
-	if cs.Height != height || round < cs.Round || (cs.Round == round && RoundStepPrevote <= cs.Step) {
+func (cs *ConsensusState) enterPrevote(height int64, round int) {
+	if cs.Height != height || round < cs.Round || (cs.Round == round && cstypes.RoundStepPrevote <= cs.Step) {
 		cs.Logger.Debug(cmn.Fmt("enterPrevote(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
 		return
 	}
 
 	defer func() {
 		// Done enterPrevote:
-		cs.updateRoundStep(round, RoundStepPrevote)
+		cs.updateRoundStep(round, cstypes.RoundStepPrevote)
 		cs.newStep()
 	}()
 
 	// fire event for how we got here
 	if cs.isProposalComplete() {
-		types.FireEventCompleteProposal(cs.evsw, cs.RoundStateEvent())
+		cs.eventBus.PublishEventCompleteProposal(cs.RoundStateEvent())
 	} else {
 		// we received +2/3 prevotes for a future round
 		// TODO: catchup event?
@@ -1020,7 +902,7 @@ func (cs *ConsensusState) enterPrevote(height int, round int) {
 	// (so we have more time to try and collect +2/3 prevotes for a single block)
 }
 
-func (cs *ConsensusState) defaultDoPrevote(height int, round int) {
+func (cs *ConsensusState) defaultDoPrevote(height int64, round int) {
 	logger := cs.Logger.With("height", height, "round", round)
 	// If a block is locked, prevote that.
 	if cs.LockedBlock != nil {
@@ -1053,8 +935,8 @@ func (cs *ConsensusState) defaultDoPrevote(height int, round int) {
 }
 
 // Enter: any +2/3 prevotes at next round.
-func (cs *ConsensusState) enterPrevoteWait(height int, round int) {
-	if cs.Height != height || round < cs.Round || (cs.Round == round && RoundStepPrevoteWait <= cs.Step) {
+func (cs *ConsensusState) enterPrevoteWait(height int64, round int) {
+	if cs.Height != height || round < cs.Round || (cs.Round == round && cstypes.RoundStepPrevoteWait <= cs.Step) {
 		cs.Logger.Debug(cmn.Fmt("enterPrevoteWait(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
 		return
 	}
@@ -1065,12 +947,12 @@ func (cs *ConsensusState) enterPrevoteWait(height int, round int) {
 
 	defer func() {
 		// Done enterPrevoteWait:
-		cs.updateRoundStep(round, RoundStepPrevoteWait)
+		cs.updateRoundStep(round, cstypes.RoundStepPrevoteWait)
 		cs.newStep()
 	}()
 
 	// Wait for some more prevotes; enterPrecommit
-	cs.scheduleTimeout(cs.config.Prevote(round), height, round, RoundStepPrevoteWait)
+	cs.scheduleTimeout(cs.config.Prevote(round), height, round, cstypes.RoundStepPrevoteWait)
 }
 
 // Enter: `timeoutPrevote` after any +2/3 prevotes.
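All of the enterPrevote/enterPrevoteWait/enterPrecommit hunks above share one shape: a guard that rejects stale (height, round, step) transitions, plus a deferred step update so the transition is recorded exactly once. A minimal self-contained sketch of that pattern (toy types, not the real ConsensusState):

```go
package main

import "fmt"

type RoundStepType int

const (
	RoundStepPropose RoundStepType = iota
	RoundStepPrevote
)

type state struct {
	Height int64
	Round  int
	Step   RoundStepType
}

// enterPrevote-style guard: drop calls for the wrong height, an earlier
// round, or a step we have already reached; on success, advance the
// step in a deferred block.
func (s *state) enterPrevote(height int64, round int) {
	if s.Height != height || round < s.Round || (s.Round == round && RoundStepPrevote <= s.Step) {
		fmt.Printf("enterPrevote(%v/%v): invalid args, current %v/%v/%v\n", height, round, s.Height, s.Round, s.Step)
		return
	}
	defer func() {
		s.Round, s.Step = round, RoundStepPrevote
	}()
	// ... sign and broadcast the prevote here ...
}

func main() {
	s := &state{Height: 1}
	s.enterPrevote(1, 0) // first call advances the step
	s.enterPrevote(1, 0) // second call is rejected: step already reached
}
```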
@@ -1079,8 +961,8 @@ func (cs *ConsensusState) enterPrevoteWait(height int, round int) {
 // Lock & precommit the ProposalBlock if we have enough prevotes for it (a POL in this round)
 // else, unlock an existing lock and precommit nil if +2/3 of prevotes were nil,
 // else, precommit nil otherwise.
-func (cs *ConsensusState) enterPrecommit(height int, round int) {
-	if cs.Height != height || round < cs.Round || (cs.Round == round && RoundStepPrecommit <= cs.Step) {
+func (cs *ConsensusState) enterPrecommit(height int64, round int) {
+	if cs.Height != height || round < cs.Round || (cs.Round == round && cstypes.RoundStepPrecommit <= cs.Step) {
 		cs.Logger.Debug(cmn.Fmt("enterPrecommit(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
 		return
 	}
@@ -1089,7 +971,7 @@ func (cs *ConsensusState) enterPrecommit(height int, round int) {
 
 	defer func() {
 		// Done enterPrecommit:
-		cs.updateRoundStep(round, RoundStepPrecommit)
+		cs.updateRoundStep(round, cstypes.RoundStepPrecommit)
 		cs.newStep()
 	}()
 
@@ -1107,7 +989,7 @@ func (cs *ConsensusState) enterPrecommit(height int, round int) {
 	}
 
 	// At this point +2/3 prevoted for a particular block or nil
-	types.FireEventPolka(cs.evsw, cs.RoundStateEvent())
+	cs.eventBus.PublishEventPolka(cs.RoundStateEvent())
 
 	// the latest POLRound should be this round
 	polRound, _ := cs.Votes.POLInfo()
@@ -1124,7 +1006,7 @@ func (cs *ConsensusState) enterPrecommit(height int, round int) {
 		cs.LockedRound = 0
 		cs.LockedBlock = nil
 		cs.LockedBlockParts = nil
-		types.FireEventUnlock(cs.evsw, cs.RoundStateEvent())
+		cs.eventBus.PublishEventUnlock(cs.RoundStateEvent())
 	}
 	cs.signAddVote(types.VoteTypePrecommit, nil, types.PartSetHeader{})
 	return
@@ -1136,7 +1018,7 @@ func (cs *ConsensusState) enterPrecommit(height int, round int) {
 	if cs.LockedBlock.HashesTo(blockID.Hash) {
 		cs.Logger.Info("enterPrecommit: +2/3 prevoted locked block. Relocking")
 		cs.LockedRound = round
-		types.FireEventRelock(cs.evsw, cs.RoundStateEvent())
+		cs.eventBus.PublishEventRelock(cs.RoundStateEvent())
 		cs.signAddVote(types.VoteTypePrecommit, blockID.Hash, blockID.PartsHeader)
 		return
 	}
@@ -1151,7 +1033,7 @@ func (cs *ConsensusState) enterPrecommit(height int, round int) {
 		cs.LockedRound = round
 		cs.LockedBlock = cs.ProposalBlock
 		cs.LockedBlockParts = cs.ProposalBlockParts
-		types.FireEventLock(cs.evsw, cs.RoundStateEvent())
+		cs.eventBus.PublishEventLock(cs.RoundStateEvent())
 		cs.signAddVote(types.VoteTypePrecommit, blockID.Hash, blockID.PartsHeader)
 		return
 	}
@@ -1167,13 +1049,13 @@ func (cs *ConsensusState) enterPrecommit(height int, round int) {
 		cs.ProposalBlock = nil
 		cs.ProposalBlockParts = types.NewPartSetFromHeader(blockID.PartsHeader)
 	}
-	types.FireEventUnlock(cs.evsw, cs.RoundStateEvent())
+	cs.eventBus.PublishEventUnlock(cs.RoundStateEvent())
 	cs.signAddVote(types.VoteTypePrecommit, nil, types.PartSetHeader{})
 }
 
 // Enter: any +2/3 precommits for next round.
-func (cs *ConsensusState) enterPrecommitWait(height int, round int) {
-	if cs.Height != height || round < cs.Round || (cs.Round == round && RoundStepPrecommitWait <= cs.Step) {
+func (cs *ConsensusState) enterPrecommitWait(height int64, round int) {
+	if cs.Height != height || round < cs.Round || (cs.Round == round && cstypes.RoundStepPrecommitWait <= cs.Step) {
 		cs.Logger.Debug(cmn.Fmt("enterPrecommitWait(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
 		return
 	}
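The enterPrecommit hunks above carry the lock/relock/unlock logic described in the leading comment. A compact sketch of that three-way decision, with hypothetical inputs standing in for the polka hash, the locked block, and the current proposal:

```go
package main

import (
	"bytes"
	"fmt"
)

type precommitDecision string

// decidePrecommit condenses the branches of enterPrecommit: relock if +2/3
// prevoted our locked block, lock if they prevoted the current proposal,
// otherwise unlock and precommit nil. A nil polkaHash means "+2/3 prevoted nil".
func decidePrecommit(polkaHash, lockedHash, proposalHash []byte) precommitDecision {
	switch {
	case polkaHash == nil:
		return "unlock, precommit nil"
	case lockedHash != nil && bytes.Equal(polkaHash, lockedHash):
		return "relock, precommit block"
	case proposalHash != nil && bytes.Equal(polkaHash, proposalHash):
		return "lock proposal, precommit block"
	default:
		// +2/3 prevoted a block we don't have yet: unlock and fetch it.
		return "unlock, precommit nil, fetch block"
	}
}

func main() {
	fmt.Println(decidePrecommit([]byte("B"), []byte("B"), nil))     // relock
	fmt.Println(decidePrecommit([]byte("B"), nil, []byte("B")))     // lock
	fmt.Println(decidePrecommit(nil, []byte("A"), []byte("B")))     // unlock
}
```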
@@ -1184,18 +1066,18 @@ func (cs *ConsensusState) enterPrecommitWait(height int, round int) {
 
 	defer func() {
 		// Done enterPrecommitWait:
-		cs.updateRoundStep(round, RoundStepPrecommitWait)
+		cs.updateRoundStep(round, cstypes.RoundStepPrecommitWait)
 		cs.newStep()
 	}()
 
 	// Wait for some more precommits; enterNewRound
-	cs.scheduleTimeout(cs.config.Precommit(round), height, round, RoundStepPrecommitWait)
+	cs.scheduleTimeout(cs.config.Precommit(round), height, round, cstypes.RoundStepPrecommitWait)
 
 }
 
 // Enter: +2/3 precommits for block
-func (cs *ConsensusState) enterCommit(height int, commitRound int) {
-	if cs.Height != height || RoundStepCommit <= cs.Step {
+func (cs *ConsensusState) enterCommit(height int64, commitRound int) {
+	if cs.Height != height || cstypes.RoundStepCommit <= cs.Step {
 		cs.Logger.Debug(cmn.Fmt("enterCommit(%v/%v): Invalid args. Current step: %v/%v/%v", height, commitRound, cs.Height, cs.Round, cs.Step))
 		return
 	}
@@ -1204,7 +1086,7 @@ func (cs *ConsensusState) enterCommit(height int, commitRound int) {
 	defer func() {
 		// Done enterCommit:
 		// keep cs.Round the same, commitRound points to the right Precommits set.
-		cs.updateRoundStep(cs.Round, RoundStepCommit)
+		cs.updateRoundStep(cs.Round, cstypes.RoundStepCommit)
 		cs.CommitRound = commitRound
 		cs.CommitTime = time.Now()
 		cs.newStep()
@@ -1240,7 +1122,7 @@ func (cs *ConsensusState) enterCommit(height int, commitRound int) {
 }
 
 // If we have the block AND +2/3 commits for it, finalize.
-func (cs *ConsensusState) tryFinalizeCommit(height int) {
+func (cs *ConsensusState) tryFinalizeCommit(height int64) {
 	if cs.Height != height {
 		cmn.PanicSanity(cmn.Fmt("tryFinalizeCommit() cs.Height: %v vs height: %v", cs.Height, height))
 	}
@@ -1253,7 +1135,7 @@ func (cs *ConsensusState) tryFinalizeCommit(height int) {
 	if !cs.ProposalBlock.HashesTo(blockID.Hash) {
 		// TODO: this happens every time if we're not a validator (ugly logs)
 		// TODO: ^^ wait, why does it matter that we're a validator?
-		cs.Logger.Error("Attempt to finalize failed. We don't have the commit block.", "height", height, "proposal-block", cs.ProposalBlock.Hash(), "commit-block", blockID.Hash)
+		cs.Logger.Info("Attempt to finalize failed. We don't have the commit block.", "height", height, "proposal-block", cs.ProposalBlock.Hash(), "commit-block", blockID.Hash)
 		return
 	}
 
@@ -1261,9 +1143,9 @@ func (cs *ConsensusState) tryFinalizeCommit(height int) {
 	cs.finalizeCommit(height)
 }
 
-// Increment height and goto RoundStepNewHeight
-func (cs *ConsensusState) finalizeCommit(height int) {
-	if cs.Height != height || cs.Step != RoundStepCommit {
+// Increment height and goto cstypes.RoundStepNewHeight
+func (cs *ConsensusState) finalizeCommit(height int64) {
+	if cs.Height != height || cs.Step != cstypes.RoundStepCommit {
 		cs.Logger.Debug(cmn.Fmt("finalizeCommit(%v): Invalid args. Current step: %v/%v/%v", height, cs.Height, cs.Round, cs.Step))
 		return
 	}
@@ -1311,23 +1193,25 @@ func (cs *ConsensusState) finalizeCommit(height int) {
 	// WAL replay for blocks with an #ENDHEIGHT
 	// As is, ConsensusState should not be started again
 	// until we successfully call ApplyBlock (ie. here or in Handshake after restart)
-	if cs.wal != nil {
-		cs.wal.writeEndHeight(height)
-	}
+	cs.wal.Save(EndHeightMessage{height})
 
 	fail.Fail() // XXX
 
 	// Create a copy of the state for staging
 	// and an event cache for txs
 	stateCopy := cs.state.Copy()
-	eventCache := types.NewEventCache(cs.evsw)
+	txEventBuffer := types.NewTxEventBuffer(cs.eventBus, block.NumTxs)
 
 	// Execute and commit the block, update and save the state, and update the mempool.
 	// All calls to the proxyAppConn come here.
 	// NOTE: the block.AppHash wont reflect these txs until the next block
-	err := stateCopy.ApplyBlock(eventCache, cs.proxyAppConn, block, blockParts.Header(), cs.mempool)
+	err := stateCopy.ApplyBlock(txEventBuffer, cs.proxyAppConn, block, blockParts.Header(), cs.mempool)
 	if err != nil {
 		cs.Logger.Error("Error on ApplyBlock. Did the application crash? Please restart tendermint", "err", err)
+		err := cmn.Kill()
+		if err != nil {
+			cs.Logger.Error("Failed to kill this process - please do so manually", "err", err)
+		}
 		return
 	}
 
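Worth noting in the finalizeCommit hunk: the new cs.wal.Save(EndHeightMessage{height}) runs before ApplyBlock, so a crash between the two can be detected and recovered on restart via WAL replay. A minimal sketch of that write-ahead ordering, with a toy wal type and assumed behavior:

```go
package main

import "fmt"

// Toy write-ahead log: entries are persisted in order.
type wal struct{ entries []string }

func (w *wal) Save(msg string) { w.entries = append(w.entries, msg) }

// finalize records the end-of-height marker first, then applies the block.
// If apply fails (or the process dies), the marker tells the restart
// handshake where replay must resume.
func finalize(w *wal, height int64, apply func() error) error {
	w.Save(fmt.Sprintf("ENDHEIGHT %d", height)) // 1. persist intent
	if err := apply(); err != nil {             // 2. apply via the app connection
		return err
	}
	return nil
}

func main() {
	w := &wal{}
	_ = finalize(w, 1, func() error { return nil })
	fmt.Println(w.entries) // [ENDHEIGHT 1]
}
```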
@@ -1340,9 +1224,12 @@ func (cs *ConsensusState) finalizeCommit(height int) {
 	// * Fire before persisting state, in ApplyBlock
 	// * Fire on start up if we haven't written any new WAL msgs
 	// Both options mean we may fire more than once. Is that fine ?
-	types.FireEventNewBlock(cs.evsw, types.EventDataNewBlock{block})
-	types.FireEventNewBlockHeader(cs.evsw, types.EventDataNewBlockHeader{block.Header})
-	eventCache.Flush()
+	cs.eventBus.PublishEventNewBlock(types.EventDataNewBlock{block})
+	cs.eventBus.PublishEventNewBlockHeader(types.EventDataNewBlockHeader{block.Header})
+	err = txEventBuffer.Flush()
+	if err != nil {
+		cs.Logger.Error("Failed to flush event buffer", "err", err)
+	}
 
 	fail.Fail() // XXX
 
@@ -1357,7 +1244,7 @@ func (cs *ConsensusState) finalizeCommit(height int) {
 
 	// By here,
 	// * cs.Height has been increment to height+1
-	// * cs.Step is now RoundStepNewHeight
+	// * cs.Step is now cstypes.RoundStepNewHeight
 	// * cs.StartTime is set to when we will start round0.
 }
 
@@ -1375,8 +1262,8 @@ func (cs *ConsensusState) defaultSetProposal(proposal *types.Proposal) error {
 		return nil
 	}
 
-	// We don't care about the proposal if we're already in RoundStepCommit.
-	if RoundStepCommit <= cs.Step {
+	// We don't care about the proposal if we're already in cstypes.RoundStepCommit.
+	if cstypes.RoundStepCommit <= cs.Step {
 		return nil
 	}
 
@@ -1398,7 +1285,7 @@ func (cs *ConsensusState) defaultSetProposal(proposal *types.Proposal) error {
 
 // NOTE: block is not necessarily valid.
 // Asynchronously triggers either enterPrevote (before we timeout of propose) or tryFinalizeCommit, once we have the full block.
-func (cs *ConsensusState) addProposalBlockPart(height int, part *types.Part, verify bool) (added bool, err error) {
+func (cs *ConsensusState) addProposalBlockPart(height int64, part *types.Part, verify bool) (added bool, err error) {
 	// Blocks might be reused, so round mismatch is OK
 	if cs.Height != height {
 		return false, nil
@@ -1417,13 +1304,14 @@ func (cs *ConsensusState) addProposalBlockPart(height int, part *types.Part, ver
 		// Added and completed!
 		var n int
 		var err error
-		cs.ProposalBlock = wire.ReadBinary(&types.Block{}, cs.ProposalBlockParts.GetReader(), types.MaxBlockSize, &n, &err).(*types.Block)
+		cs.ProposalBlock = wire.ReadBinary(&types.Block{}, cs.ProposalBlockParts.GetReader(),
+			cs.state.Params.BlockSizeParams.MaxBytes, &n, &err).(*types.Block)
 		// NOTE: it's possible to receive complete proposal blocks for future rounds without having the proposal
 		cs.Logger.Info("Received complete proposal block", "height", cs.ProposalBlock.Height, "hash", cs.ProposalBlock.Hash())
-		if cs.Step == RoundStepPropose && cs.isProposalComplete() {
+		if cs.Step == cstypes.RoundStepPropose && cs.isProposalComplete() {
 			// Move onto the next step
 			cs.enterPrevote(height, cs.Round)
-		} else if cs.Step == RoundStepCommit {
+		} else if cs.Step == cstypes.RoundStepCommit {
 			// If we're waiting on the proposal block...
 			cs.tryFinalizeCommit(height)
 		}
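The addProposalBlockPart hunk decodes the completed part set with a size bound that now also comes from the consensus params. A self-contained toy of the reassembly idea, using only the standard library (the real code decodes with wire.ReadBinary over the PartSet reader):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

// readBlock mimics the reassembly step: once all parts have arrived, a
// reader over the concatenated parts is consumed, bounded by the max
// block size taken from the chain's consensus params.
func readBlock(parts [][]byte, maxBytes int) ([]byte, error) {
	r := io.LimitReader(bytes.NewReader(bytes.Join(parts, nil)), int64(maxBytes))
	return io.ReadAll(r)
}

func main() {
	parts := [][]byte{[]byte("header+"), []byte("txs")}
	blob, _ := readBlock(parts, 22020096) // hypothetical max_bytes value
	fmt.Printf("decoded %d bytes\n", len(blob))
}
```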
@@ -1468,7 +1356,7 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerKey string) (added bool,
 	// A precommit for the previous height?
 	// These come in while we wait timeoutCommit
 	if vote.Height+1 == cs.Height {
-		if !(cs.Step == RoundStepNewHeight && vote.Type == types.VoteTypePrecommit) {
+		if !(cs.Step == cstypes.RoundStepNewHeight && vote.Type == types.VoteTypePrecommit) {
 			// TODO: give the reason ..
 			// fmt.Errorf("tryAddVote: Wrong height, not a LastCommit straggler commit.")
 			return added, ErrVoteHeightMismatch
@@ -1476,12 +1364,12 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerKey string) (added bool,
 		added, err = cs.LastCommit.AddVote(vote)
 		if added {
 			cs.Logger.Info(cmn.Fmt("Added to lastPrecommits: %v", cs.LastCommit.StringShort()))
-			types.FireEventVote(cs.evsw, types.EventDataVote{vote})
+			cs.eventBus.PublishEventVote(types.EventDataVote{vote})
 
 			// if we can skip timeoutCommit and have all the votes now,
 			if cs.config.SkipTimeoutCommit && cs.LastCommit.HasAll() {
 				// go straight to new round (skip timeout commit)
-				// cs.scheduleTimeout(time.Duration(0), cs.Height, 0, RoundStepNewHeight)
+				// cs.scheduleTimeout(time.Duration(0), cs.Height, 0, cstypes.RoundStepNewHeight)
 				cs.enterNewRound(cs.Height, 0)
 			}
 		}
@@ -1494,7 +1382,7 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerKey string) (added bool,
 	height := cs.Height
 	added, err = cs.Votes.AddVote(vote, peerKey)
 	if added {
-		types.FireEventVote(cs.evsw, types.EventDataVote{vote})
+		cs.eventBus.PublishEventVote(types.EventDataVote{vote})
 
 		switch vote.Type {
 		case types.VoteTypePrevote:
@@ -1512,7 +1400,7 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerKey string) (added bool,
 				cs.LockedRound = 0
 				cs.LockedBlock = nil
 				cs.LockedBlockParts = nil
-				types.FireEventUnlock(cs.evsw, cs.RoundStateEvent())
+				cs.eventBus.PublishEventUnlock(cs.RoundStateEvent())
 			}
 		}
 		if cs.Round <= vote.Round && prevotes.HasTwoThirdsAny() {
@@ -1545,7 +1433,7 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerKey string) (added bool,
 			if cs.config.SkipTimeoutCommit && precommits.HasAll() {
 				// if we have all the votes now,
 				// go straight to new round (skip timeout commit)
-				// cs.scheduleTimeout(time.Duration(0), cs.Height, 0, RoundStepNewHeight)
+				// cs.scheduleTimeout(time.Duration(0), cs.Height, 0, cstypes.RoundStepNewHeight)
 				cs.enterNewRound(cs.Height, 0)
 			}
 
@@ -1606,7 +1494,7 @@ func (cs *ConsensusState) signAddVote(type_ byte, hash []byte, header types.Part
 
 //---------------------------------------------------------
 
-func CompareHRS(h1, r1 int, s1 RoundStepType, h2, r2 int, s2 RoundStepType) int {
+func CompareHRS(h1 int64, r1 int, s1 cstypes.RoundStepType, h2 int64, r2 int, s2 cstypes.RoundStepType) int {
 	if h1 < h2 {
 		return -1
 	} else if h1 > h2 {
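The diff cuts off inside CompareHRS; for reference, the function is a plain lexicographic comparison over (height, round, step), and the only semantic change here is the widened int64 height. A standalone sketch consistent with the signature shown:

```go
package main

import "fmt"

type RoundStepType uint8

// CompareHRS orders two (height, round, step) positions lexicographically:
// height first (now int64), then round, then step.
func CompareHRS(h1 int64, r1 int, s1 RoundStepType, h2 int64, r2 int, s2 RoundStepType) int {
	if h1 < h2 {
		return -1
	} else if h1 > h2 {
		return 1
	}
	if r1 < r2 {
		return -1
	} else if r1 > r2 {
		return 1
	}
	if s1 < s2 {
		return -1
	} else if s1 > s2 {
		return 1
	}
	return 0
}

func main() {
	fmt.Println(CompareHRS(2, 0, 1, 1, 3, 2)) // 1: height dominates round and step
}
```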
@@ -2,12 +2,16 @@ package consensus
 
 import (
 	"bytes"
+	"context"
+	"fmt"
 	"testing"
 	"time"
 
+	cstypes "github.com/tendermint/tendermint/consensus/types"
 	"github.com/tendermint/tendermint/types"
-	. "github.com/tendermint/tmlibs/common"
+	cmn "github.com/tendermint/tmlibs/common"
 	"github.com/tendermint/tmlibs/log"
+	tmpubsub "github.com/tendermint/tmlibs/pubsub"
 )
 
 func init() {
@@ -55,8 +59,8 @@ func TestProposerSelection0(t *testing.T) {
 	cs1, vss := randConsensusState(4)
 	height, round := cs1.Height, cs1.Round
 
-	newRoundCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringNewRound(), 1)
-	proposalCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringCompleteProposal(), 1)
+	newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
+	proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
 
 	startTestRound(cs1, height, round)
 
@@ -79,8 +83,8 @@ func TestProposerSelection0(t *testing.T) {
 	<-newRoundCh
 
 	prop = cs1.GetRoundState().Validators.GetProposer()
-	if !bytes.Equal(prop.Address, vss[1].Address) {
-		panic(Fmt("expected proposer to be validator %d. Got %X", 1, prop.Address))
+	if !bytes.Equal(prop.Address, vss[1].GetAddress()) {
+		panic(cmn.Fmt("expected proposer to be validator %d. Got %X", 1, prop.Address))
 	}
 }
 
@@ -88,7 +92,7 @@ func TestProposerSelection0(t *testing.T) {
 func TestProposerSelection2(t *testing.T) {
 	cs1, vss := randConsensusState(4) // test needs more work for more than 3 validators
 
-	newRoundCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringNewRound(), 1)
+	newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
 
 	// this time we jump in at round 2
 	incrementRound(vss[1:]...)
@@ -100,8 +104,8 @@ func TestProposerSelection2(t *testing.T) {
 	// everyone just votes nil. we get a new proposer each round
 	for i := 0; i < len(vss); i++ {
 		prop := cs1.GetRoundState().Validators.GetProposer()
-		if !bytes.Equal(prop.Address, vss[(i+2)%len(vss)].Address) {
-			panic(Fmt("expected proposer to be validator %d. Got %X", (i+2)%len(vss), prop.Address))
+		if !bytes.Equal(prop.Address, vss[(i+2)%len(vss)].GetAddress()) {
+			panic(cmn.Fmt("expected proposer to be validator %d. Got %X", (i+2)%len(vss), prop.Address))
		}
 
 		rs := cs1.GetRoundState()
@@ -120,7 +124,7 @@ func TestEnterProposeNoPrivValidator(t *testing.T) {
 	height, round := cs.Height, cs.Round
 
 	// Listen for propose timeout event
-	timeoutCh := subscribeToEvent(cs.evsw, "tester", types.EventStringTimeoutPropose(), 1)
+	timeoutCh := subscribe(cs.eventBus, types.EventQueryTimeoutPropose)
 
 	startTestRound(cs, height, round)
 
@@ -145,8 +149,8 @@ func TestEnterProposeYesPrivValidator(t *testing.T) {
 
 	// Listen for propose timeout event
 
-	timeoutCh := subscribeToEvent(cs.evsw, "tester", types.EventStringTimeoutPropose(), 1)
-	proposalCh := subscribeToEvent(cs.evsw, "tester", types.EventStringCompleteProposal(), 1)
+	timeoutCh := subscribe(cs.eventBus, types.EventQueryTimeoutPropose)
+	proposalCh := subscribe(cs.eventBus, types.EventQueryCompleteProposal)
 
 	cs.enterNewRound(height, round)
 	cs.startRoutines(3)
@@ -180,10 +184,10 @@ func TestBadProposal(t *testing.T) {
 	height, round := cs1.Height, cs1.Round
 	vs2 := vss[1]
 
-	partSize := config.Consensus.BlockPartSize
+	partSize := cs1.state.Params.BlockPartSizeBytes
 
-	proposalCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringCompleteProposal(), 1)
-	voteCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringVote(), 1)
+	proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
+	voteCh := subscribe(cs1.eventBus, types.EventQueryVote)
 
 	propBlock, _ := cs1.createProposalBlock() //changeProposer(t, cs1, vs2)
 
@@ -205,7 +209,9 @@ func TestBadProposal(t *testing.T) {
 	}
 
 	// set the proposal block
-	cs1.SetProposalAndBlock(proposal, propBlock, propBlockParts, "some peer")
+	if err := cs1.SetProposalAndBlock(proposal, propBlock, propBlockParts, "some peer"); err != nil {
+		t.Fatal(err)
+	}
 
 	// start the machine
 	startTestRound(cs1, height, round)
@@ -237,9 +243,17 @@ func TestFullRound1(t *testing.T) {
 	cs, vss := randConsensusState(1)
 	height, round := cs.Height, cs.Round
 
-	voteCh := subscribeToEvent(cs.evsw, "tester", types.EventStringVote(), 0)
-	propCh := subscribeToEvent(cs.evsw, "tester", types.EventStringCompleteProposal(), 1)
-	newRoundCh := subscribeToEvent(cs.evsw, "tester", types.EventStringNewRound(), 1)
+	// NOTE: buffer capacity of 0 ensures we can validate prevote and last commit
+	// before consensus can move to the next height (and cause a race condition)
+	cs.eventBus.Stop()
+	eventBus := types.NewEventBusWithBufferCapacity(0)
+	eventBus.SetLogger(log.TestingLogger().With("module", "events"))
+	cs.SetEventBus(eventBus)
+	eventBus.Start()
+
+	voteCh := subscribe(cs.eventBus, types.EventQueryVote)
+	propCh := subscribe(cs.eventBus, types.EventQueryCompleteProposal)
+	newRoundCh := subscribe(cs.eventBus, types.EventQueryNewRound)
 
 	startTestRound(cs, height, round)
 
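The TestFullRound1 hunk swaps in an event bus with buffer capacity 0 so the test can assert on each event before consensus races ahead to the next height. The same synchronization idea, shown with a bare unbuffered channel:

```go
package main

import "fmt"

// With no buffering, the publisher blocks until the consumer takes each
// event, so test-side assertions run before the producer can move on.
func main() {
	events := make(chan string) // cap 0: synchronous hand-off
	done := make(chan struct{})
	go func() {
		for e := range events {
			fmt.Println("validated:", e) // test-side assertion point
		}
		close(done)
	}()
	events <- "prevote"    // blocks until the test reads it
	events <- "lastCommit" // ditto
	close(events)
	<-done
}
```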
@@ -247,11 +261,9 @@ func TestFullRound1(t *testing.T) {
 
 	// grab proposal
 	re := <-propCh
-	propBlockHash := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState).ProposalBlock.Hash()
+	propBlockHash := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState).ProposalBlock.Hash()
 
 	<-voteCh // wait for prevote
-	// NOTE: voteChan cap of 0 ensures we can complete this
-	// before consensus can move to the next height (and cause a race condition)
 	validatePrevote(t, cs, round, vss[0], propBlockHash)
 
 	<-voteCh // wait for precommit
@@ -267,7 +279,7 @@ func TestFullRoundNil(t *testing.T) {
 	cs, vss := randConsensusState(1)
 	height, round := cs.Height, cs.Round
 
-	voteCh := subscribeToEvent(cs.evsw, "tester", types.EventStringVote(), 1)
+	voteCh := subscribe(cs.eventBus, types.EventQueryVote)
 
 	cs.enterPrevote(height, round)
 	cs.startRoutines(4)
@@ -286,8 +298,8 @@ func TestFullRound2(t *testing.T) {
 	vs2 := vss[1]
 	height, round := cs1.Height, cs1.Round
 
-	voteCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringVote(), 1)
-	newBlockCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringNewBlock(), 1)
+	voteCh := subscribe(cs1.eventBus, types.EventQueryVote)
+	newBlockCh := subscribe(cs1.eventBus, types.EventQueryNewBlock)
 
 	// start round and wait for propose and prevote
 	startTestRound(cs1, height, round)
@@ -327,13 +339,13 @@ func TestLockNoPOL(t *testing.T) {
 	vs2 := vss[1]
 	height := cs1.Height
 
-	partSize := config.Consensus.BlockPartSize
+	partSize := cs1.state.Params.BlockPartSizeBytes
 
-	timeoutProposeCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutPropose(), 1)
-	timeoutWaitCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutWait(), 1)
-	voteCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringVote(), 1)
-	proposalCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringCompleteProposal(), 1)
-	newRoundCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringNewRound(), 1)
+	timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
+	timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
+	voteCh := subscribe(cs1.eventBus, types.EventQueryVote)
+	proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
+	newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
 
 	/*
 		Round1 (cs1, B) // B B // B B2
@@ -344,7 +356,7 @@ func TestLockNoPOL(t *testing.T) {
 	cs1.startRoutines(0)
 
 	re := <-proposalCh
-	rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
+	rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
 	theBlockHash := rs.ProposalBlock.Hash()
 
 	<-voteCh // prevote
@@ -384,7 +396,7 @@ func TestLockNoPOL(t *testing.T) {
 
 	// now we're on a new round and not the proposer, so wait for timeout
 	re = <-timeoutProposeCh
-	rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
+	rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
 
 	if rs.ProposalBlock != nil {
 		panic("Expected proposal block to be nil")
@@ -428,11 +440,11 @@ func TestLockNoPOL(t *testing.T) {
 	incrementRound(vs2)
 
 	re = <-proposalCh
-	rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
+	rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
 
 	// now we're on a new round and are the proposer
 	if !bytes.Equal(rs.ProposalBlock.Hash(), rs.LockedBlock.Hash()) {
-		panic(Fmt("Expected proposal block to be locked block. Got %v, Expected %v", rs.ProposalBlock, rs.LockedBlock))
+		panic(cmn.Fmt("Expected proposal block to be locked block. Got %v, Expected %v", rs.ProposalBlock, rs.LockedBlock))
 	}
 
 	<-voteCh // prevote
@@ -468,7 +480,9 @@ func TestLockNoPOL(t *testing.T) {
 
 	// now we're on a new round and not the proposer
 	// so set the proposal block
-	cs1.SetProposalAndBlock(prop, propBlock, propBlock.MakePartSet(partSize), "")
+	if err := cs1.SetProposalAndBlock(prop, propBlock, propBlock.MakePartSet(partSize), ""); err != nil {
+		t.Fatal(err)
+	}
 
 	<-proposalCh
 	<-voteCh // prevote
@@ -493,16 +507,14 @@ func TestLockPOLRelock(t *testing.T) {
 	cs1, vss := randConsensusState(4)
 	vs2, vs3, vs4 := vss[1], vss[2], vss[3]
 
-	partSize := config.Consensus.BlockPartSize
+	partSize := cs1.state.Params.BlockPartSizeBytes
 
-	timeoutProposeCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutPropose(), 1)
-	timeoutWaitCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutWait(), 1)
-	proposalCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringCompleteProposal(), 1)
-	voteCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringVote(), 1)
-	newRoundCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringNewRound(), 1)
-	newBlockCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringNewBlockHeader(), 1)
-
-	t.Logf("vs2 last round %v", vs2.PrivValidator.LastRound)
+	timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
+	timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
+	proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
+	voteCh := subscribe(cs1.eventBus, types.EventQueryVote)
+	newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
+	newBlockCh := subscribe(cs1.eventBus, types.EventQueryNewBlockHeader)
 
 	// everything done from perspective of cs1
 
@@ -517,7 +529,7 @@ func TestLockPOLRelock(t *testing.T) {
 
 	<-newRoundCh
 	re := <-proposalCh
-	rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
+	rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
 	theBlockHash := rs.ProposalBlock.Hash()
 
 	<-voteCh // prevote
@@ -547,7 +559,9 @@ func TestLockPOLRelock(t *testing.T) {
 	<-timeoutWaitCh
 
 	//XXX: this isnt guaranteed to get there before the timeoutPropose ...
-	cs1.SetProposalAndBlock(prop, propBlock, propBlockParts, "some peer")
+	if err := cs1.SetProposalAndBlock(prop, propBlock, propBlockParts, "some peer"); err != nil {
+		t.Fatal(err)
+	}
 
 	<-newRoundCh
 	t.Log("### ONTO ROUND 1")
@@ -593,7 +607,7 @@ func TestLockPOLRelock(t *testing.T) {
 	be := <-newBlockCh
 	b := be.(types.TMEventData).Unwrap().(types.EventDataNewBlockHeader)
 	re = <-newRoundCh
-	rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
+	rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
 	if rs.Height != 2 {
 		panic("Expected height to increment")
 	}
@@ -608,13 +622,13 @@ func TestLockPOLUnlock(t *testing.T) {
 	cs1, vss := randConsensusState(4)
 	vs2, vs3, vs4 := vss[1], vss[2], vss[3]
 
-	partSize := config.Consensus.BlockPartSize
+	partSize := cs1.state.Params.BlockPartSizeBytes
 
-	proposalCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringCompleteProposal(), 1)
-	timeoutProposeCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutPropose(), 1)
-	timeoutWaitCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutWait(), 1)
-	newRoundCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringNewRound(), 1)
-	unlockCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringUnlock(), 1)
+	proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
+	timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
+	timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
+	newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
+	unlockCh := subscribe(cs1.eventBus, types.EventQueryUnlock)
 	voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
 
 	// everything done from perspective of cs1
@@ -629,7 +643,7 @@ func TestLockPOLUnlock(t *testing.T) {
 	startTestRound(cs1, cs1.Height, 0)
 	<-newRoundCh
 	re := <-proposalCh
-	rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
+	rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
 	theBlockHash := rs.ProposalBlock.Hash()
 
 	<-voteCh // prevote
@@ -655,11 +669,13 @@ func TestLockPOLUnlock(t *testing.T) {
 
 	// timeout to new round
 	re = <-timeoutWaitCh
-	rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
+	rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
 	lockedBlockHash := rs.LockedBlock.Hash()
 
 	//XXX: this isnt guaranteed to get there before the timeoutPropose ...
-	cs1.SetProposalAndBlock(prop, propBlock, propBlockParts, "some peer")
+	if err := cs1.SetProposalAndBlock(prop, propBlock, propBlockParts, "some peer"); err != nil {
+		t.Fatal(err)
+	}
 
 	<-newRoundCh
 	t.Log("#### ONTO ROUND 1")
@@ -703,19 +719,19 @@ func TestLockPOLSafety1(t *testing.T) {
 	cs1, vss := randConsensusState(4)
 	vs2, vs3, vs4 := vss[1], vss[2], vss[3]
 
-	partSize := config.Consensus.BlockPartSize
+	partSize := cs1.state.Params.BlockPartSizeBytes
 
-	proposalCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringCompleteProposal(), 1)
-	timeoutProposeCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutPropose(), 1)
-	timeoutWaitCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutWait(), 1)
-	newRoundCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringNewRound(), 1)
+	proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
+	timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
+	timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
+	newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
 	voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
 
 	// start round and wait for propose and prevote
 	startTestRound(cs1, cs1.Height, 0)
 	<-newRoundCh
 	re := <-proposalCh
-	rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
+	rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
 	propBlock := rs.ProposalBlock
 
 	<-voteCh // prevote
@@ -746,7 +762,9 @@ func TestLockPOLSafety1(t *testing.T) {
 	incrementRound(vs2, vs3, vs4)
 
 	//XXX: this isnt guaranteed to get there before the timeoutPropose ...
-	cs1.SetProposalAndBlock(prop, propBlock, propBlockParts, "some peer")
+	if err := cs1.SetProposalAndBlock(prop, propBlock, propBlockParts, "some peer"); err != nil {
+		t.Fatal(err)
+	}
 
 	<-newRoundCh
 	t.Log("### ONTO ROUND 1")
@@ -763,7 +781,7 @@ func TestLockPOLSafety1(t *testing.T) {
 		re = <-proposalCh
 	}
 
-	rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
+	rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
 
 	if rs.LockedBlock != nil {
 		panic("we should not be locked!")
@@ -803,7 +821,7 @@ func TestLockPOLSafety1(t *testing.T) {
 	// we should prevote what we're locked on
 	validatePrevote(t, cs1, 2, vss[0], propBlockHash)
 
-	newStepCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringNewRoundStep(), 1)
+	newStepCh := subscribe(cs1.eventBus, types.EventQueryNewRoundStep)
 
 	// add prevotes from the earlier round
 	addVotes(cs1, prevotes...)
@@ -824,13 +842,13 @@ func TestLockPOLSafety2(t *testing.T) {
 	cs1, vss := randConsensusState(4)
 	vs2, vs3, vs4 := vss[1], vss[2], vss[3]
 
-	partSize := config.Consensus.BlockPartSize
+	partSize := cs1.state.Params.BlockPartSizeBytes
 
-	proposalCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringCompleteProposal(), 1)
-	timeoutProposeCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutPropose(), 1)
-	timeoutWaitCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutWait(), 1)
-	newRoundCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringNewRound(), 1)
-	unlockCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringUnlock(), 1)
+	proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
+	timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
+	timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
+	newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
+	unlockCh := subscribe(cs1.eventBus, types.EventQueryUnlock)
 	voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
 
 	// the block for R0: gets polkad but we miss it
@@ -850,7 +868,7 @@ func TestLockPOLSafety2(t *testing.T) {
 
 	incrementRound(vs2, vs3, vs4)
 
-	cs1.updateRoundStep(0, RoundStepPrecommitWait)
+	cs1.updateRoundStep(0, cstypes.RoundStepPrecommitWait)
 
 	t.Log("### ONTO Round 1")
 	// jump in at round 1
@@ -858,7 +876,9 @@ func TestLockPOLSafety2(t *testing.T) {
 	startTestRound(cs1, height, 1)
 	<-newRoundCh
 
-	cs1.SetProposalAndBlock(prop1, propBlock1, propBlockParts1, "some peer")
+	if err := cs1.SetProposalAndBlock(prop1, propBlock1, propBlockParts1, "some peer"); err != nil {
+		t.Fatal(err)
+	}
 	<-proposalCh
 
 	<-voteCh // prevote
@@ -883,7 +903,9 @@ func TestLockPOLSafety2(t *testing.T) {
 	if err := vs3.SignProposal(config.ChainID, newProp); err != nil {
 		t.Fatal(err)
 	}
-	cs1.SetProposalAndBlock(newProp, propBlock0, propBlockParts0, "some peer")
+	if err := cs1.SetProposalAndBlock(newProp, propBlock0, propBlockParts0, "some peer"); err != nil {
+		t.Fatal(err)
+	}
 
 	// Add the pol votes
 	addVotes(cs1, prevotes...)
@@ -920,9 +942,9 @@ func TestSlashingPrevotes(t *testing.T) {
 	vs2 := vss[1]
 
 
-	proposalCh := subscribeToEvent(cs1.evsw,"tester",types.EventStringCompleteProposal() , 1)
-	timeoutWaitCh := subscribeToEvent(cs1.evsw,"tester",types.EventStringTimeoutWait() , 1)
-	newRoundCh := subscribeToEvent(cs1.evsw,"tester",types.EventStringNewRound() , 1)
+	proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
+	timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
+	newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
 	voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
 
 	// start round and wait for propose and prevote
@@ -931,7 +953,7 @@ func TestSlashingPrevotes(t *testing.T) {
 	re := <-proposalCh
 	<-voteCh // prevote
 
-	rs := re.(types.EventDataRoundState).RoundState.(*RoundState)
+	rs := re.(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
 
 	// we should now be stuck in limbo forever, waiting for more prevotes
 	// add one for a different block should cause us to go into prevote wait
@@ -955,9 +977,9 @@ func TestSlashingPrecommits(t *testing.T) {
 	vs2 := vss[1]
 
 
-	proposalCh := subscribeToEvent(cs1.evsw,"tester",types.EventStringCompleteProposal() , 1)
-	timeoutWaitCh := subscribeToEvent(cs1.evsw,"tester",types.EventStringTimeoutWait() , 1)
-	newRoundCh := subscribeToEvent(cs1.evsw,"tester",types.EventStringNewRound() , 1)
+	proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
+	timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
+	newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
 	voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
 
 	// start round and wait for propose and prevote
@@ -999,19 +1021,19 @@ func TestHalt1(t *testing.T) {
 	cs1, vss := randConsensusState(4)
 	vs2, vs3, vs4 := vss[1], vss[2], vss[3]
 
-	partSize := config.Consensus.BlockPartSize
+	partSize := cs1.state.Params.BlockPartSizeBytes
 
-	proposalCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringCompleteProposal(), 1)
-	timeoutWaitCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutWait(), 1)
-	newRoundCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringNewRound(), 1)
-	newBlockCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringNewBlock(), 1)
+	proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
+	timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
+	newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
+	newBlockCh := subscribe(cs1.eventBus, types.EventQueryNewBlock)
 	voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
 
 	// start round and wait for propose and prevote
 	startTestRound(cs1, cs1.Height, 0)
 	<-newRoundCh
 	re := <-proposalCh
-	rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
+	rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
 	propBlock := rs.ProposalBlock
 	propBlockParts := propBlock.MakePartSet(partSize)
 
@@ -1034,7 +1056,7 @@ func TestHalt1(t *testing.T) {
 	// timeout to new round
 	<-timeoutWaitCh
 	re = <-newRoundCh
-	rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
+	rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
 
 	t.Log("### ONTO ROUND 1")
 	/*Round2
@@ -1052,9 +1074,26 @@ func TestHalt1(t *testing.T) {
 	// receiving that precommit should take us straight to commit
 	<-newBlockCh
 	re = <-newRoundCh
-	rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
+	rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
 
 	if rs.Height != 2 {
 		panic("expected height to increment")
 	}
 }
 
+// subscribe subscribes test client to the given query and returns a channel with cap = 1.
+func subscribe(eventBus *types.EventBus, q tmpubsub.Query) <-chan interface{} {
+	out := make(chan interface{}, 1)
+	err := eventBus.Subscribe(context.Background(), testSubscriber, q, out)
+	if err != nil {
+		panic(fmt.Sprintf("failed to subscribe %s to %v", testSubscriber, q))
+	}
+	return out
+}
+
+// discardFromChan reads n values from the channel.
+func discardFromChan(ch <-chan interface{}, n int) {
+	for i := 0; i < n; i++ {
+		<-ch
+	}
+}
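The subscribe helper added at the end of the test file is what the hunks above use everywhere (e.g. newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound) followed by <-newRoundCh). A self-contained toy event bus showing the same subscribe-then-receive pattern, with hypothetical types rather than the tmlibs pubsub API:

```go
package main

import (
	"context"
	"fmt"
)

// Toy event bus: Subscribe registers an output channel for a query string.
type eventBus struct{ subs map[string][]chan interface{} }

func (b *eventBus) Subscribe(ctx context.Context, subscriber, query string, out chan interface{}) error {
	if b.subs == nil {
		b.subs = map[string][]chan interface{}{}
	}
	b.subs[query] = append(b.subs[query], out)
	return nil
}

func (b *eventBus) Publish(query string, ev interface{}) {
	for _, ch := range b.subs[query] {
		ch <- ev
	}
}

// subscribe mirrors the test helper: make a channel with cap 1, register
// it, and hand it back so the test can block on the next matching event.
func subscribe(b *eventBus, query string) <-chan interface{} {
	out := make(chan interface{}, 1)
	if err := b.Subscribe(context.Background(), "tester", query, out); err != nil {
		panic(fmt.Sprintf("failed to subscribe tester to %v", query))
	}
	return out
}

func main() {
	bus := &eventBus{}
	newRoundCh := subscribe(bus, "NewRound")
	bus.Publish("NewRound", "round 0")
	fmt.Println(<-newRoundCh) // round 0
}
```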
@@ -1,110 +1,148 @@
 #!/usr/bin/env bash
 
 # XXX: removes tendermint dir
+# Requires: killall command and jq JSON processor.
 
-cd "$GOPATH/src/github.com/tendermint/tendermint" || exit 1
+# Get the parent directory of where this script is.
+SOURCE="${BASH_SOURCE[0]}"
+while [ -h "$SOURCE" ] ; do SOURCE="$(readlink "$SOURCE")"; done
+DIR="$( cd -P "$( dirname "$SOURCE" )/../.." && pwd )"
+
+# Change into that dir because we expect that.
+cd "$DIR" || exit 1
+
+# Make sure we have a tendermint command.
 if ! hash tendermint 2>/dev/null; then
 	make install
 fi
 
-# specify a dir to copy
+# Make sure we have a cutWALUntil binary.
+cutWALUntil=./scripts/cutWALUntil/cutWALUntil
+cutWALUntilDir=$(dirname $cutWALUntil)
+if ! hash $cutWALUntil 2>/dev/null; then
+	cd "$cutWALUntilDir" && go build && cd - || exit 1
+fi
+
+TMHOME=$(mktemp -d)
+export TMHOME="$TMHOME"
+
+if [[ ! -d "$TMHOME" ]]; then
+	echo "Could not create temp directory"
+	exit 1
+else
+	echo "TMHOME: ${TMHOME}"
+fi
+
 # TODO: eventually we should replace with `tendermint init --test`
 DIR_TO_COPY=$HOME/.tendermint_test/consensus_state_test
+if [ ! -d "$DIR_TO_COPY" ]; then
+	echo "$DIR_TO_COPY does not exist. Please run: go test ./consensus"
+	exit 1
+fi
+echo "==> Copying ${DIR_TO_COPY} to ${TMHOME} directory..."
+cp -r "$DIR_TO_COPY"/* "$TMHOME"
 
-TMHOME="$HOME/.tendermint"
-rm -rf "$TMHOME"
-cp -r "$DIR_TO_COPY" "$TMHOME"
-cp $TMHOME/config.toml $TMHOME/config.toml.bak
+# preserve original genesis file because later it will be modified (see small_block2)
+cp "$TMHOME/genesis.json" "$TMHOME/genesis.json.bak"
 
 function reset(){
+	echo "==> Resetting tendermint..."
 	tendermint unsafe_reset_all
-	cp $TMHOME/config.toml.bak $TMHOME/config.toml
+	cp "$TMHOME/genesis.json.bak" "$TMHOME/genesis.json"
 }
 
 reset
 
 # empty block
-function empty_block(){
-	tendermint node --proxy_app=persistent_dummy &> /dev/null &
-	sleep 5
-	killall tendermint
+# function empty_block(){
+# 	echo "==> Starting tendermint..."
+# 	tendermint node --proxy_app=persistent_dummy &> /dev/null &
+# 	sleep 5
+# 	echo "==> Killing tendermint..."
+# 	killall tendermint
 
-	# /q would print up to and including the match, then quit.
-	# /Q doesn't include the match.
-	# http://unix.stackexchange.com/questions/11305/grep-show-all-the-file-up-to-the-match
-	sed '/ENDHEIGHT: 1/Q' ~/.tendermint/data/cs.wal/wal > consensus/test_data/empty_block.cswal
+# 	echo "==> Copying WAL log..."
+# 	$cutWALUntil "$TMHOME/data/cs.wal/wal" 1 consensus/test_data/new_empty_block.cswal
+# 	mv consensus/test_data/new_empty_block.cswal consensus/test_data/empty_block.cswal
 
-	reset
-}
+# 	reset
+# }
 
 # many blocks
 function many_blocks(){
-	bash scripts/txs/random.sh 1000 36657 &> /dev/null &
-	PID=$!
-	tendermint node --proxy_app=persistent_dummy &> /dev/null &
-	sleep 7
-	killall tendermint
-	kill -9 $PID
+	bash scripts/txs/random.sh 1000 36657 &> /dev/null &
+	PID=$!
+	echo "==> Starting tendermint..."
+	tendermint node --proxy_app=persistent_dummy &> /dev/null &
+	sleep 10
+	echo "==> Killing tendermint..."
+	kill -9 $PID
+	killall tendermint
 
-	sed '/ENDHEIGHT: 6/Q' ~/.tendermint/data/cs.wal/wal > consensus/test_data/many_blocks.cswal
+	echo "==> Copying WAL log..."
+	$cutWALUntil "$TMHOME/data/cs.wal/wal" 6 consensus/test_data/new_many_blocks.cswal
+	mv consensus/test_data/new_many_blocks.cswal consensus/test_data/many_blocks.cswal
 
 	reset
 }
 
 # small block 1
-function small_block1(){
-	bash scripts/txs/random.sh 1000 36657 &> /dev/null &
-	PID=$!
-	tendermint node --proxy_app=persistent_dummy &> /dev/null &
-	sleep 10
-	killall tendermint
-	kill -9 $PID
+# function small_block1(){
+# 	bash scripts/txs/random.sh 1000 36657 &> /dev/null &
+# 	PID=$!
+# 	echo "==> Starting tendermint..."
+# 	tendermint node --proxy_app=persistent_dummy &> /dev/null &
+# 	sleep 10
+# 	echo "==> Killing tendermint..."
+# 	kill -9 $PID
+# 	killall tendermint
 
-	sed '/ENDHEIGHT: 1/Q' ~/.tendermint/data/cs.wal/wal > consensus/test_data/small_block1.cswal
+# 	echo "==> Copying WAL log..."
+# 	$cutWALUntil "$TMHOME/data/cs.wal/wal" 1 consensus/test_data/new_small_block1.cswal
+# 	mv consensus/test_data/new_small_block1.cswal consensus/test_data/small_block1.cswal
 
-	reset
-}
+# 	reset
+# }
 
 # small block 2 (part size = 512)
-function small_block2(){
-	echo "" >> ~/.tendermint/config.toml
-	echo "block_part_size = 512" >> ~/.tendermint/config.toml
-	bash scripts/txs/random.sh 1000 36657 &> /dev/null &
-	PID=$!
-	tendermint node --proxy_app=persistent_dummy &> /dev/null &
-	sleep 5
-	killall tendermint
-	kill -9 $PID
+# # block part size = 512
+# function small_block2(){
+# 	cat "$TMHOME/genesis.json" | jq '. + {consensus_params: {block_size_params: {max_bytes: 22020096}, block_gossip_params: {block_part_size_bytes: 512}}}' > "$TMHOME/new_genesis.json"
+# 	mv "$TMHOME/new_genesis.json" "$TMHOME/genesis.json"
+# 	bash scripts/txs/random.sh 1000 36657 &> /dev/null &
+# 	PID=$!
+# 	echo "==> Starting tendermint..."
+# 	tendermint node --proxy_app=persistent_dummy &> /dev/null &
+# 	sleep 5
+# 	echo "==> Killing tendermint..."
+# 	kill -9 $PID
+# 	killall tendermint
 
-	sed '/ENDHEIGHT: 1/Q' ~/.tendermint/data/cs.wal/wal > consensus/test_data/small_block2.cswal
+# 	echo "==> Copying WAL log..."
+# 	$cutWALUntil "$TMHOME/data/cs.wal/wal" 1 consensus/test_data/new_small_block2.cswal
+# 	mv consensus/test_data/new_small_block2.cswal consensus/test_data/small_block2.cswal
 
-	reset
-}
+# 	reset
+# }
 
 
 case "$1" in
-"small_block1")
-	small_block1
-	;;
-"small_block2")
-	small_block2
-	;;
-"empty_block")
-	empty_block
-	;;
+# "small_block1")
+# 	small_block1
+# 	;;
+# "small_block2")
+# 	small_block2
+# 	;;
+# "empty_block")
+# 	empty_block
+# 	;;
 "many_blocks")
 	many_blocks
 	;;
 *)
-	small_block1
-	small_block2
-	empty_block
+	# small_block1
+	# small_block2
+	# empty_block
 	many_blocks
 esac
 
+
+echo "==> Cleaning up..."
+rm -rf "$TMHOME"
@@ -1,10 +0,0 @@
#ENDHEIGHT: 0
{"time":"2017-04-27T22:24:01.346Z","msg":[3,{"duration":972946821,"height":1,"round":0,"step":1}]}
{"time":"2017-04-27T22:24:01.349Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPropose"}]}
{"time":"2017-04-27T22:24:01.349Z","msg":[2,{"msg":[17,{"Proposal":{"height":1,"round":0,"block_parts_header":{"total":1,"hash":"ACED4A95DDEBD24E66A681F7EAB4CA22C4B8546D"},"pol_round":-1,"pol_block_id":{"hash":"","parts":{"total":0,"hash":""}},"signature":[1,"E785764AED6D92D7CC65C0A3A4ED9C8465198A05142C3E6C7F3EF601FDCD3A604900B77B7B87C046221EF99FD038A960398385BD5BBAA50EE4F86DE757B8F704"]}}],"peer_key":""}]}
{"time":"2017-04-27T22:24:01.350Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":0,"bytes":"0101010F74656E6465726D696E745F74657374010114B96165CF4496C00000000000000114354594CBFC1A7BCA1AD0050ED6AA010023EADA390001000100000000","proof":{"aunts":[]}}}],"peer_key":""}]}
{"time":"2017-04-27T22:24:01.351Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPrevote"}]}
{"time":"2017-04-27T22:24:01.351Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":1,"round":0,"type":1,"block_id":{"hash":"F3BBFBE7E4A5D619E2C498C3D1B912883786DD71","parts":{"total":1,"hash":"ACED4A95DDEBD24E66A681F7EAB4CA22C4B8546D"}},"signature":[1,"35C937C78D061ECDC3770982A1330C9AA7F6FEF00835C43DEB50B8FCF69A3EEF221E675EE5E469114F64E4FBBABA414EB9170E1025FC47D3F0EADE46767D2E00"]}}],"peer_key":""}]}
{"time":"2017-04-27T22:24:01.352Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPrecommit"}]}
{"time":"2017-04-27T22:24:01.352Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":1,"round":0,"type":2,"block_id":{"hash":"F3BBFBE7E4A5D619E2C498C3D1B912883786DD71","parts":{"total":1,"hash":"ACED4A95DDEBD24E66A681F7EAB4CA22C4B8546D"}},"signature":[1,"D1A7D27FCD5D352F3A3EDA8DE368520BC5B796662E32BCD8D91CDB8209A88DAF37CB7C4C93143D3C12B37C1435229268098CFFD0AD1400D88DA7606454692301"]}}],"peer_key":""}]}
{"time":"2017-04-27T22:24:01.352Z","msg":[1,{"height":1,"round":0,"step":"RoundStepCommit"}]}
@@ -1,15 +0,0 @@
#ENDHEIGHT: 0
{"time":"2017-04-27T22:23:56.310Z","msg":[3,{"duration":969732098,"height":1,"round":0,"step":1}]}
{"time":"2017-04-27T22:23:56.312Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPropose"}]}
{"time":"2017-04-27T22:23:56.312Z","msg":[2,{"msg":[17,{"Proposal":{"height":1,"round":0,"block_parts_header":{"total":6,"hash":"A3C176F13F5CBC7C48EE27A472800410C9D487DC"},"pol_round":-1,"pol_block_id":{"hash":"","parts":{"total":0,"hash":""}},"signature":[1,"7624F6E943B7A207E16D1FA87EA099BD924E930F98E7DECBC01DB37735C619409588A67C2EABA9845FD6B80FDB65ECFCDA5F0DEFCEF74B8C34DB8E0540480203"]}}],"peer_key":""}]}
{"time":"2017-04-27T22:23:56.312Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":0,"bytes":"0101010F74656E6465726D696E745F74657374010114B96164A30A118001620000000001141F6753D22BACA2180B1EADD722434EB28444D91D0114354594CBFC1A7BCA1AD0050ED6AA010023EADA3900010162011A3631363236333634333133303344363436333632363133313330011A3631363236333634333133313344363436333632363133313331011A3631363236333634333133323344363436333632363133313332011A3631363236333634333133333344363436333632363133313333011A3631363236333634333133343344363436333632363133313334011A3631363236333634333133353344363436333632363133313335011A3631363236333634333133363344363436333632363133313336011A3631363236333634333133373344363436333632363133313337011A3631363236333634333133383344363436333632363133313338011A3631363236333634333133393344363436333632363133313339011A3631363236333634333233303344363436333632363133323330011A3631363236333634333233313344363436333632363133323331011A3631363236333634333233323344363436333632363133323332011A3631363236333634333233333344363436333632363133323333011A3631363236333634333233343344363436333632363133323334011A36313632363336","proof":{"aunts":["49F4B71E3D7C457415069E2EA916DB12F67AA8D0","D35A72BEDAAAAC17045D7BFAAFA94C2EC0B0A4C2","705BC647374F3495EE73C3F44C21E9BDB4731738"]}}}],"peer_key":""}]}
{"time":"2017-04-27T22:23:56.312Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":1,"bytes":"34333233353344363436333632363133323335011A3631363236333634333233363344363436333632363133323336011A3631363236333634333233373344363436333632363133323337011A3631363236333634333233383344363436333632363133323338011A3631363236333634333233393344363436333632363133323339011A3631363236333634333333303344363436333632363133333330011A3631363236333634333333313344363436333632363133333331011A3631363236333634333333323344363436333632363133333332011A3631363236333634333333333344363436333632363133333333011A3631363236333634333333343344363436333632363133333334011A3631363236333634333333353344363436333632363133333335011A3631363236333634333333363344363436333632363133333336011A3631363236333634333333373344363436333632363133333337011A3631363236333634333333383344363436333632363133333338011A3631363236333634333333393344363436333632363133333339011A3631363236333634333433303344363436333632363133343330011A3631363236333634333433313344363436333632363133343331011A3631363236333634333433323344363436333632363133343332011A363136323633363433343333334436","proof":{"aunts":["5AD2A9A1A49A1FD6EF83F05FA4588F800B29DEF1","D35A72BEDAAAAC17045D7BFAAFA94C2EC0B0A4C2","705BC647374F3495EE73C3F44C21E9BDB4731738"]}}}],"peer_key":""}]}
{"time":"2017-04-27T22:23:56.312Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":2,"bytes":"3436333632363133343333011A3631363236333634333433343344363436333632363133343334011A3631363236333634333433353344363436333632363133343335011A3631363236333634333433363344363436333632363133343336011A3631363236333634333433373344363436333632363133343337011A3631363236333634333433383344363436333632363133343338011A3631363236333634333433393344363436333632363133343339011A3631363236333634333533303344363436333632363133353330011A3631363236333634333533313344363436333632363133353331011A3631363236333634333533323344363436333632363133353332011A3631363236333634333533333344363436333632363133353333011A3631363236333634333533343344363436333632363133353334011A3631363236333634333533353344363436333632363133353335011A3631363236333634333533363344363436333632363133353336011A3631363236333634333533373344363436333632363133353337011A3631363236333634333533383344363436333632363133353338011A3631363236333634333533393344363436333632363133353339011A3631363236333634333633303344363436333632363133363330011A3631363236333634333633313344363436333632363133","proof":{"aunts":["8B5786C3D871EE37B0F4B2DECAC39E157340DFBE","705BC647374F3495EE73C3F44C21E9BDB4731738"]}}}],"peer_key":""}]}
{"time":"2017-04-27T22:23:56.312Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":3,"bytes":"363331011A3631363236333634333633323344363436333632363133363332011A3631363236333634333633333344363436333632363133363333011A3631363236333634333633343344363436333632363133363334011A3631363236333634333633353344363436333632363133363335011A3631363236333634333633363344363436333632363133363336011A3631363236333634333633373344363436333632363133363337011A3631363236333634333633383344363436333632363133363338011A3631363236333634333633393344363436333632363133363339011A3631363236333634333733303344363436333632363133373330011A3631363236333634333733313344363436333632363133373331011A3631363236333634333733323344363436333632363133373332011A3631363236333634333733333344363436333632363133373333011A3631363236333634333733343344363436333632363133373334011A3631363236333634333733353344363436333632363133373335011A3631363236333634333733363344363436333632363133373336011A3631363236333634333733373344363436333632363133373337011A3631363236333634333733383344363436333632363133373338011A3631363236333634333733393344363436333632363133373339011A363136","proof":{"aunts":["56097661A1B2707588100586B3B1C2C8A51057D1","6DE889147DF528EEB5F7422E95DC45900CAFB619","247C721D5CEB90BB1FE389BA74C43DF0955E1647"]}}}],"peer_key":""}]}
{"time":"2017-04-27T22:23:56.312Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":4,"bytes":"3236333634333833303344363436333632363133383330011A3631363236333634333833313344363436333632363133383331011A3631363236333634333833323344363436333632363133383332011A3631363236333634333833333344363436333632363133383333011A3631363236333634333833343344363436333632363133383334011A3631363236333634333833353344363436333632363133383335011A3631363236333634333833363344363436333632363133383336011A3631363236333634333833373344363436333632363133383337011A3631363236333634333833383344363436333632363133383338011A3631363236333634333833393344363436333632363133383339011A3631363236333634333933303344363436333632363133393330011A3631363236333634333933313344363436333632363133393331011A3631363236333634333933323344363436333632363133393332011A3631363236333634333933333344363436333632363133393333011A3631363236333634333933343344363436333632363133393334011A3631363236333634333933353344363436333632363133393335011A3631363236333634333933363344363436333632363133393336011A3631363236333634333933373344363436333632363133393337011A3631363236333634333933","proof":{"aunts":["081D3DC5F11850851D5F0D760B98EE87BFA6B8B0","6DE889147DF528EEB5F7422E95DC45900CAFB619","247C721D5CEB90BB1FE389BA74C43DF0955E1647"]}}}],"peer_key":""}]}
{"time":"2017-04-27T22:23:56.313Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":5,"bytes":"383344363436333632363133393338011A3631363236333634333933393344363436333632363133393339011E363136323633363433313330333033443634363336323631333133303330011E363136323633363433313330333133443634363336323631333133303331011E363136323633363433313330333233443634363336323631333133303332011E363136323633363433313330333333443634363336323631333133303333011E363136323633363433313330333433443634363336323631333133303334011E363136323633363433313330333533443634363336323631333133303335011E363136323633363433313330333633443634363336323631333133303336011E3631363236333634333133303337334436343633363236313331333033370100000000","proof":{"aunts":["6AA912328C2B52EFA0ECE71F523E137E400EC484","247C721D5CEB90BB1FE389BA74C43DF0955E1647"]}}}],"peer_key":""}]}
{"time":"2017-04-27T22:23:56.314Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPrevote"}]}
{"time":"2017-04-27T22:23:56.314Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":1,"round":0,"type":1,"block_id":{"hash":"62371CF72F8662378691706DB256C833CF1AF81B","parts":{"total":6,"hash":"A3C176F13F5CBC7C48EE27A472800410C9D487DC"}},"signature":[1,"255906FAAA50C84E85DABF7DE73468E4F95DB4E46F598848145926E2FAD77CA682BF07E09E2F3EC81FFBD9A036B67914A3C02F819B69248D777AEBA792725907"]}}],"peer_key":""}]}
{"time":"2017-04-27T22:23:56.315Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPrecommit"}]}
{"time":"2017-04-27T22:23:56.315Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":1,"round":0,"type":2,"block_id":{"hash":"62371CF72F8662378691706DB256C833CF1AF81B","parts":{"total":6,"hash":"A3C176F13F5CBC7C48EE27A472800410C9D487DC"}},"signature":[1,"056CC15C748434D0A59B64B45CB56EDC1A437A426E68FA63DC7D61A7C17B0F768F207D81340D129A57C5A64195F8AFDD03B6BF28D7B2286290D61BCE88FCA304"]}}],"peer_key":""}]}
{"time":"2017-04-27T22:23:56.316Z","msg":[1,{"height":1,"round":0,"step":"RoundStepCommit"}]}
@@ -15,8 +15,8 @@ var (
// conditional on the height/round/step in the timeoutInfo.
// The timeoutInfo.Duration may be non-positive.
type TimeoutTicker interface {
-	Start() (bool, error)
-	Stop() bool
+	Start() error
+	Stop() error
	Chan() <-chan timeoutInfo       // on which to receive a timeout
	ScheduleTimeout(ti timeoutInfo) // reset the timer
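With the service refactor above, starting and stopping the ticker now both report errors. A minimal sketch of how a consumer might drive this interface (not part of the diff; `NewTimeoutTicker` is assumed to be the package's constructor for the concrete ticker, `types` to alias `consensus/types` as in the tests, and `handleTimeout` is a placeholder):

```go
ticker := NewTimeoutTicker()
if err := ticker.Start(); err != nil {
	return err
}
defer ticker.Stop() // nolint: errcheck

// (Re)arm the timer for the current height/round/step.
// Scheduling a new timeout resets any pending one.
ticker.ScheduleTimeout(timeoutInfo{
	Duration: time.Second,
	Height:   1,
	Round:    0,
	Step:     types.RoundStepPropose,
})

// Receive the timeout once it fires and feed it to the state machine.
ti := <-ticker.Chan()
handleTimeout(ti)
```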
@@ -1,11 +1,11 @@
-package consensus
+package types

import (
	"strings"
	"sync"

	"github.com/tendermint/tendermint/types"
-	. "github.com/tendermint/tmlibs/common"
+	cmn "github.com/tendermint/tmlibs/common"
)

type RoundVoteSet struct {
@@ -29,7 +29,7 @@ One for their LastCommit round, and another for the official commit round.
*/
type HeightVoteSet struct {
	chainID string
-	height  int
+	height  int64
	valSet  *types.ValidatorSet

	mtx sync.Mutex
@@ -38,7 +38,7 @@ type HeightVoteSet struct {
	peerCatchupRounds map[string][]int // keys: peer.Key; values: at most 2 rounds
}

-func NewHeightVoteSet(chainID string, height int, valSet *types.ValidatorSet) *HeightVoteSet {
+func NewHeightVoteSet(chainID string, height int64, valSet *types.ValidatorSet) *HeightVoteSet {
	hvs := &HeightVoteSet{
		chainID: chainID,
	}
@@ -46,7 +46,7 @@ func NewHeightVoteSet(chainID string, height int, valSet *types.ValidatorSet) *H
	return hvs
}

-func (hvs *HeightVoteSet) Reset(height int, valSet *types.ValidatorSet) {
+func (hvs *HeightVoteSet) Reset(height int64, valSet *types.ValidatorSet) {
	hvs.mtx.Lock()
	defer hvs.mtx.Unlock()

@@ -59,7 +59,7 @@ func (hvs *HeightVoteSet) Reset(height int, valSet *types.ValidatorSet) {
	hvs.round = 0
}

-func (hvs *HeightVoteSet) Height() int {
+func (hvs *HeightVoteSet) Height() int64 {
	hvs.mtx.Lock()
	defer hvs.mtx.Unlock()
	return hvs.height
@@ -76,7 +76,7 @@ func (hvs *HeightVoteSet) SetRound(round int) {
	hvs.mtx.Lock()
	defer hvs.mtx.Unlock()
	if hvs.round != 0 && (round < hvs.round+1) {
-		PanicSanity("SetRound() must increment hvs.round")
+		cmn.PanicSanity("SetRound() must increment hvs.round")
	}
	for r := hvs.round + 1; r <= round; r++ {
		if _, ok := hvs.roundVoteSets[r]; ok {
@@ -89,7 +89,7 @@ func (hvs *HeightVoteSet) SetRound(round int) {

func (hvs *HeightVoteSet) addRound(round int) {
	if _, ok := hvs.roundVoteSets[round]; ok {
-		PanicSanity("addRound() for an existing round")
+		cmn.PanicSanity("addRound() for an existing round")
	}
	// log.Debug("addRound(round)", "round", round)
	prevotes := types.NewVoteSet(hvs.chainID, hvs.height, round, types.VoteTypePrevote, hvs.valSet)
@@ -164,7 +164,7 @@ func (hvs *HeightVoteSet) getVoteSet(round int, type_ byte) *types.VoteSet {
	case types.VoteTypePrecommit:
		return rvs.Precommits
	default:
-		PanicSanity(Fmt("Unexpected vote type %X", type_))
+		cmn.PanicSanity(cmn.Fmt("Unexpected vote type %X", type_))
		return nil
	}
}
@@ -194,7 +194,7 @@ func (hvs *HeightVoteSet) StringIndented(indent string) string {
		voteSetString = roundVoteSet.Precommits.StringShort()
		vsStrings = append(vsStrings, voteSetString)
	}
-	return Fmt(`HeightVoteSet{H:%v R:0~%v
+	return cmn.Fmt(`HeightVoteSet{H:%v R:0~%v
%s  %v
%s}`,
		hvs.height, hvs.round,
@@ -1,14 +1,17 @@
-package consensus
+package types

import (
	"testing"

+	cfg "github.com/tendermint/tendermint/config"
	"github.com/tendermint/tendermint/types"
-	. "github.com/tendermint/tmlibs/common"
+	cmn "github.com/tendermint/tmlibs/common"
)

var config *cfg.Config // NOTE: must be reset for each _test.go file

func init() {
-	config = ResetConfig("consensus_height_vote_set_test")
+	config = cfg.ResetTestRoot("consensus_height_vote_set_test")
}

func TestPeerCatchupRounds(t *testing.T) {
@@ -44,10 +47,10 @@ func TestPeerCatchupRounds(t *testing.T) {

}

-func makeVoteHR(t *testing.T, height, round int, privVals []*types.PrivValidator, valIndex int) *types.Vote {
+func makeVoteHR(t *testing.T, height int64, round int, privVals []*types.PrivValidatorFS, valIndex int) *types.Vote {
	privVal := privVals[valIndex]
	vote := &types.Vote{
-		ValidatorAddress: privVal.Address,
+		ValidatorAddress: privVal.GetAddress(),
		ValidatorIndex:   valIndex,
		Height:           height,
		Round:            round,
@@ -57,7 +60,7 @@ func makeVoteHR(t *testing.T, height, round int, privVals []*types.PrivValidator
	chainID := config.ChainID
	err := privVal.SignVote(chainID, vote)
	if err != nil {
-		panic(Fmt("Error signing vote: %v", err))
+		panic(cmn.Fmt("Error signing vote: %v", err))
		return nil
	}
	return vote
consensus/types/reactor.go (new file, 57 lines)
@@ -0,0 +1,57 @@
package types

import (
	"fmt"
	"time"

	"github.com/tendermint/tendermint/types"
	cmn "github.com/tendermint/tmlibs/common"
)

//-----------------------------------------------------------------------------

// PeerRoundState contains the known state of a peer.
// NOTE: Read-only when returned by PeerState.GetRoundState().
type PeerRoundState struct {
	Height                   int64               // Height peer is at
	Round                    int                 // Round peer is at, -1 if unknown.
	Step                     RoundStepType       // Step peer is at
	StartTime                time.Time           // Estimated start of round 0 at this height
	Proposal                 bool                // True if peer has proposal for this round
	ProposalBlockPartsHeader types.PartSetHeader //
	ProposalBlockParts       *cmn.BitArray       //
	ProposalPOLRound         int                 // Proposal's POL round. -1 if none.
	ProposalPOL              *cmn.BitArray       // nil until ProposalPOLMessage received.
	Prevotes                 *cmn.BitArray       // All votes peer has for this round
	Precommits               *cmn.BitArray       // All precommits peer has for this round
	LastCommitRound          int                 // Round of commit for last height. -1 if none.
	LastCommit               *cmn.BitArray       // All commit precommits of commit for last height.
	CatchupCommitRound       int                 // Round that we have commit for. Not necessarily unique. -1 if none.
	CatchupCommit            *cmn.BitArray       // All commit precommits peer has for this height & CatchupCommitRound
}

// String returns a string representation of the PeerRoundState
func (prs PeerRoundState) String() string {
	return prs.StringIndented("")
}

// StringIndented returns a string representation of the PeerRoundState
func (prs PeerRoundState) StringIndented(indent string) string {
	return fmt.Sprintf(`PeerRoundState{
%s  %v/%v/%v @%v
%s  Proposal %v -> %v
%s  POL      %v (round %v)
%s  Prevotes   %v
%s  Precommits %v
%s  LastCommit %v (round %v)
%s  Catchup    %v (round %v)
%s}`,
		indent, prs.Height, prs.Round, prs.Step, prs.StartTime,
		indent, prs.ProposalBlockPartsHeader, prs.ProposalBlockParts,
		indent, prs.ProposalPOL, prs.ProposalPOLRound,
		indent, prs.Prevotes,
		indent, prs.Precommits,
		indent, prs.LastCommit, prs.LastCommitRound,
		indent, prs.CatchupCommit, prs.CatchupCommitRound,
		indent)
}
consensus/types/state.go (new file, 131 lines)
@@ -0,0 +1,131 @@
package types

import (
	"fmt"
	"time"

	"github.com/tendermint/tendermint/types"
)

//-----------------------------------------------------------------------------
// RoundStepType enum type

// RoundStepType enumerates the state of the consensus state machine
type RoundStepType uint8 // These must be numeric, ordered.

const (
	RoundStepNewHeight     = RoundStepType(0x01) // Wait til CommitTime + timeoutCommit
	RoundStepNewRound      = RoundStepType(0x02) // Setup new round and go to RoundStepPropose
	RoundStepPropose       = RoundStepType(0x03) // Did propose, gossip proposal
	RoundStepPrevote       = RoundStepType(0x04) // Did prevote, gossip prevotes
	RoundStepPrevoteWait   = RoundStepType(0x05) // Did receive any +2/3 prevotes, start timeout
	RoundStepPrecommit     = RoundStepType(0x06) // Did precommit, gossip precommits
	RoundStepPrecommitWait = RoundStepType(0x07) // Did receive any +2/3 precommits, start timeout
	RoundStepCommit        = RoundStepType(0x08) // Entered commit state machine
	// NOTE: RoundStepNewHeight acts as RoundStepCommitWait.
)

// String returns a string
func (rs RoundStepType) String() string {
	switch rs {
	case RoundStepNewHeight:
		return "RoundStepNewHeight"
	case RoundStepNewRound:
		return "RoundStepNewRound"
	case RoundStepPropose:
		return "RoundStepPropose"
	case RoundStepPrevote:
		return "RoundStepPrevote"
	case RoundStepPrevoteWait:
		return "RoundStepPrevoteWait"
	case RoundStepPrecommit:
		return "RoundStepPrecommit"
	case RoundStepPrecommitWait:
		return "RoundStepPrecommitWait"
	case RoundStepCommit:
		return "RoundStepCommit"
	default:
		return "RoundStepUnknown" // Cannot panic.
	}
}

//-----------------------------------------------------------------------------

// RoundState defines the internal consensus state.
// It is Immutable when returned from ConsensusState.GetRoundState()
// TODO: Actually, only the top pointer is copied,
// so access to field pointers is still racey
// NOTE: Not thread safe. Should only be manipulated by functions downstream
// of the cs.receiveRoutine
type RoundState struct {
	Height             int64 // Height we are working on
	Round              int
	Step               RoundStepType
	StartTime          time.Time
	CommitTime         time.Time // Subjective time when +2/3 precommits for Block at Round were found
	Validators         *types.ValidatorSet
	Proposal           *types.Proposal
	ProposalBlock      *types.Block
	ProposalBlockParts *types.PartSet
	LockedRound        int
	LockedBlock        *types.Block
	LockedBlockParts   *types.PartSet
	Votes              *HeightVoteSet
	CommitRound        int            //
	LastCommit         *types.VoteSet // Last precommits at Height-1
	LastValidators     *types.ValidatorSet
}

// RoundStateEvent returns the H/R/S of the RoundState as an event.
func (rs *RoundState) RoundStateEvent() types.EventDataRoundState {
	// XXX: copy the RoundState
	// if we want to avoid this, we may need synchronous events after all
	rs_ := *rs
	edrs := types.EventDataRoundState{
		Height:     rs.Height,
		Round:      rs.Round,
		Step:       rs.Step.String(),
		RoundState: &rs_,
	}
	return edrs
}

// String returns a string
func (rs *RoundState) String() string {
	return rs.StringIndented("")
}

// StringIndented returns a string
func (rs *RoundState) StringIndented(indent string) string {
	return fmt.Sprintf(`RoundState{
%s  H:%v R:%v S:%v
%s  StartTime:     %v
%s  CommitTime:    %v
%s  Validators:    %v
%s  Proposal:      %v
%s  ProposalBlock: %v %v
%s  LockedRound:   %v
%s  LockedBlock:   %v %v
%s  Votes:         %v
%s  LastCommit:    %v
%s  LastValidators: %v
%s}`,
		indent, rs.Height, rs.Round, rs.Step,
		indent, rs.StartTime,
		indent, rs.CommitTime,
		indent, rs.Validators.StringIndented(indent+"  "),
		indent, rs.Proposal,
		indent, rs.ProposalBlockParts.StringShort(), rs.ProposalBlock.StringShort(),
		indent, rs.LockedRound,
		indent, rs.LockedBlockParts.StringShort(), rs.LockedBlock.StringShort(),
		indent, rs.Votes.StringIndented(indent+"  "),
		indent, rs.LastCommit.StringShort(),
		indent, rs.LastValidators.StringIndented(indent+"  "),
		indent)
}

// StringShort returns a string
func (rs *RoundState) StringShort() string {
	return fmt.Sprintf(`RoundState{H:%v R:%v S:%v ST:%v}`,
		rs.Height, rs.Round, rs.Step, rs.StartTime)
}
@@ -1,7 +1,7 @@
package consensus

import (
-	. "github.com/tendermint/tmlibs/common"
+	cmn "github.com/tendermint/tmlibs/common"
)

// kind of arbitrary
@@ -10,4 +10,4 @@ var Major = "0" //
var Minor = "2" // replay refactor
var Revision = "2" // validation -> commit

-var Version = Fmt("v%s/%s.%s.%s", Spec, Major, Minor, Revision)
+var Version = cmn.Fmt("v%s/%s.%s.%s", Spec, Major, Minor, Revision)
consensus/wal.go (258 changed lines)
@@ -1,22 +1,40 @@
package consensus

import (
+	"bytes"
+	"encoding/binary"
+	"fmt"
+	"hash/crc32"
+	"io"
+	"path/filepath"
	"time"

+	"github.com/pkg/errors"
+
	wire "github.com/tendermint/go-wire"
	"github.com/tendermint/tendermint/types"
	auto "github.com/tendermint/tmlibs/autofile"
-	. "github.com/tendermint/tmlibs/common"
+	cmn "github.com/tendermint/tmlibs/common"
)

//--------------------------------------------------------
// types and functions for saving consensus messages

+var (
+	walSeparator = []byte{55, 127, 6, 130} // 0x377f0682 - magic number
+)
+
type TimedWALMessage struct {
-	Time time.Time  `json:"time"`
+	Time time.Time  `json:"time"` // for debugging purposes
	Msg  WALMessage `json:"msg"`
}

+// EndHeightMessage marks the end of the given height inside WAL.
+// @internal used by scripts/cutWALUntil util.
+type EndHeightMessage struct {
+	Height int64 `json:"height"`
+}
+
type WALMessage interface{}

var _ = wire.RegisterInterface(
@@ -24,81 +42,271 @@ var _ = wire.RegisterInterface(
	wire.ConcreteType{types.EventDataRoundState{}, 0x01},
	wire.ConcreteType{msgInfo{}, 0x02},
	wire.ConcreteType{timeoutInfo{}, 0x03},
+	wire.ConcreteType{EndHeightMessage{}, 0x04},
)

//--------------------------------------------------------
// Simple write-ahead logger

+// WAL is an interface for any write-ahead logger.
+type WAL interface {
+	Save(WALMessage)
+	Group() *auto.Group
+	SearchForEndHeight(height int64) (gr *auto.GroupReader, found bool, err error)
+
+	Start() error
+	Stop() error
+	Wait()
+}
+
// Write ahead logger writes msgs to disk before they are processed.
// Can be used for crash-recovery and deterministic replay
// TODO: currently the wal is overwritten during replay catchup
// give it a mode so it's either reading or appending - must read to end to start appending again
-type WAL struct {
-	BaseService
+type baseWAL struct {
+	cmn.BaseService

	group *auto.Group
	light bool // ignore block parts

+	enc *WALEncoder
}

-func NewWAL(walFile string, light bool) (*WAL, error) {
+func NewWAL(walFile string, light bool) (*baseWAL, error) {
+	err := cmn.EnsureDir(filepath.Dir(walFile), 0700)
+	if err != nil {
+		return nil, errors.Wrap(err, "failed to ensure WAL directory is in place")
+	}
+
	group, err := auto.OpenGroup(walFile)
	if err != nil {
		return nil, err
	}
-	wal := &WAL{
+	wal := &baseWAL{
		group: group,
		light: light,
+		enc:   NewWALEncoder(group),
	}
-	wal.BaseService = *NewBaseService(nil, "WAL", wal)
+	wal.BaseService = *cmn.NewBaseService(nil, "baseWAL", wal)
	return wal, nil
}

-func (wal *WAL) OnStart() error {
+func (wal *baseWAL) Group() *auto.Group {
+	return wal.group
+}
+
+func (wal *baseWAL) OnStart() error {
	size, err := wal.group.Head.Size()
	if err != nil {
		return err
	} else if size == 0 {
-		wal.writeEndHeight(0)
+		wal.Save(EndHeightMessage{0})
	}
-	_, err = wal.group.Start()
+	err = wal.group.Start()
	return err
}

-func (wal *WAL) OnStop() {
+func (wal *baseWAL) OnStop() {
	wal.BaseService.OnStop()
	wal.group.Stop()
}

// called in newStep and for each pass in receiveRoutine
-func (wal *WAL) Save(wmsg WALMessage) {
+func (wal *baseWAL) Save(msg WALMessage) {
	if wal == nil {
		return
	}

	if wal.light {
		// in light mode we only write new steps, timeouts, and our own votes (no proposals, block parts)
-		if mi, ok := wmsg.(msgInfo); ok {
+		if mi, ok := msg.(msgInfo); ok {
			if mi.PeerKey != "" {
				return
			}
		}
	}

-	// Write the wal message
-	var wmsgBytes = wire.JSONBytes(TimedWALMessage{time.Now(), wmsg})
-	err := wal.group.WriteLine(string(wmsgBytes))
-	if err != nil {
-		PanicQ(Fmt("Error writing msg to consensus wal. Error: %v \n\nMessage: %v", err, wmsg))
-	}
+	if err := wal.enc.Encode(&TimedWALMessage{time.Now(), msg}); err != nil {
+		cmn.PanicQ(cmn.Fmt("Error writing msg to consensus wal: %v \n\nMessage: %v", err, msg))
+	}

	// TODO: only flush when necessary
	if err := wal.group.Flush(); err != nil {
-		PanicQ(Fmt("Error flushing consensus wal buf to file. Error: %v \n", err))
+		cmn.PanicQ(cmn.Fmt("Error flushing consensus wal buf to file. Error: %v \n", err))
	}
}

+// SearchForEndHeight searches for the EndHeightMessage with the height and
+// returns an auto.GroupReader, whether it was found or not and an error.
+// Group reader will be nil if found equals false.
+//
+// CONTRACT: caller must close group reader.
+func (wal *baseWAL) SearchForEndHeight(height int64) (gr *auto.GroupReader, found bool, err error) {
+	var msg *TimedWALMessage
+
+	// NOTE: starting from the last file in the group because we're usually
+	// searching for the last height. See replay.go
+	min, max := wal.group.MinIndex(), wal.group.MaxIndex()
+	wal.Logger.Debug("Searching for height", "height", height, "min", min, "max", max)
+	for index := max; index >= min; index-- {
+		gr, err = wal.group.NewReader(index)
+		if err != nil {
+			return nil, false, err
+		}
+
+		dec := NewWALDecoder(gr)
+		for {
+			msg, err = dec.Decode()
+			if err == io.EOF {
+				// check next file
+				break
+			}
+			if err != nil {
+				gr.Close()
+				return nil, false, err
+			}
+
+			if m, ok := msg.Msg.(EndHeightMessage); ok {
+				if m.Height == height { // found
+					wal.Logger.Debug("Found", "height", height, "index", index)
+					return gr, true, nil
+				}
+			}
+		}
+		gr.Close()
+	}
+
+	return nil, false, nil
+}
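The interface above is what replay uses in practice: find the end of the previous height, then decode forward. A hedged sketch of that pattern (not part of the diff; it uses only the API shown above, and `process` stands in for the caller's replay logic):

```go
gr, found, err := wal.SearchForEndHeight(height - 1)
if err != nil {
	return err
}
if !found {
	return fmt.Errorf("WAL does not contain height %d", height)
}
defer gr.Close() // CONTRACT: caller must close the group reader

dec := NewWALDecoder(gr)
for {
	msg, err := dec.Decode()
	if err == io.EOF {
		break // end of WAL; replay is done
	} else if err != nil {
		return err
	}
	process(msg) // e.g. feed msg.Msg back into the state machine
}
```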
///////////////////////////////////////////////////////////////////////////////

// A WALEncoder writes custom-encoded WAL messages to an output stream.
//
// Format: 4 bytes CRC sum + 4 bytes length + arbitrary-length value (go-wire encoded)
type WALEncoder struct {
	wr io.Writer
}

// NewWALEncoder returns a new encoder that writes to wr.
func NewWALEncoder(wr io.Writer) *WALEncoder {
	return &WALEncoder{wr}
}

// Encode writes the custom encoding of v to the stream.
func (enc *WALEncoder) Encode(v interface{}) error {
	data := wire.BinaryBytes(v)

	crc := crc32.Checksum(data, crc32c)
	length := uint32(len(data))
	totalLength := 8 + int(length)

	msg := make([]byte, totalLength)
	binary.BigEndian.PutUint32(msg[0:4], crc)
	binary.BigEndian.PutUint32(msg[4:8], length)
	copy(msg[8:], data)

	_, err := enc.wr.Write(msg)

	if err == nil {
		// TODO [Anton Kaliaev 23 Oct 2017]: remove separator
		_, err = enc.wr.Write(walSeparator)
	}

	return err
}

///////////////////////////////////////////////////////////////////////////////

// A WALDecoder reads and decodes custom-encoded WAL messages from an input
// stream. See WALEncoder for the format used.
//
// It will also compare the checksums and make sure data size is equal to the
// length from the header. If that is not the case, error will be returned.
type WALDecoder struct {
	rd io.Reader
}

// NewWALDecoder returns a new decoder that reads from rd.
func NewWALDecoder(rd io.Reader) *WALDecoder {
	return &WALDecoder{rd}
}

// Decode reads the next custom-encoded value from its reader and returns it.
func (dec *WALDecoder) Decode() (*TimedWALMessage, error) {
	b := make([]byte, 4)

	n, err := dec.rd.Read(b)
	if err == io.EOF {
		return nil, err
	}
	if err != nil {
		return nil, fmt.Errorf("failed to read checksum: %v", err)
	}
	crc := binary.BigEndian.Uint32(b)

	b = make([]byte, 4)
	n, err = dec.rd.Read(b)
	if err == io.EOF {
		return nil, err
	}
	if err != nil {
		return nil, fmt.Errorf("failed to read length: %v", err)
	}
	length := binary.BigEndian.Uint32(b)

	data := make([]byte, length)
	n, err = dec.rd.Read(data)
	if err == io.EOF {
		return nil, err
	}
	if err != nil {
		return nil, fmt.Errorf("not enough bytes for data: %v (want: %d, read: %v)", err, length, n)
	}

	// check checksum before decoding data
	actualCRC := crc32.Checksum(data, crc32c)
	if actualCRC != crc {
		return nil, fmt.Errorf("checksums do not match: (read: %v, actual: %v)", crc, actualCRC)
	}

	var nn int
	var res *TimedWALMessage // nolint: gosimple
	res = wire.ReadBinary(&TimedWALMessage{}, bytes.NewBuffer(data), int(length), &nn, &err).(*TimedWALMessage)
	if err != nil {
		return nil, fmt.Errorf("failed to decode data: %v", err)
	}

	// TODO [Anton Kaliaev 23 Oct 2017]: remove separator
	if err = readSeparator(dec.rd); err != nil {
		return nil, err
	}

	return res, err
}

-func (wal *WAL) writeEndHeight(height int) {
-	wal.group.WriteLine(Fmt("#ENDHEIGHT: %v", height))
-
-	// TODO: only flush when necessary
-	if err := wal.group.Flush(); err != nil {
-		PanicQ(Fmt("Error flushing consensus wal buf to file. Error: %v \n", err))
-	}
-}

// readSeparator reads a separator from r. It returns any error from the underlying
// reader or if it's not a separator.
func readSeparator(r io.Reader) error {
	b := make([]byte, len(walSeparator))
	_, err := r.Read(b)
	if err != nil {
		return fmt.Errorf("failed to read separator: %v", err)
	}
	if !bytes.Equal(b, walSeparator) {
		return fmt.Errorf("not a separator: %v", b)
	}
	return nil
}

type nilWAL struct{}

func (nilWAL) Save(m WALMessage)  {}
func (nilWAL) Group() *auto.Group { return nil }
func (nilWAL) SearchForEndHeight(height int64) (gr *auto.GroupReader, found bool, err error) {
	return nil, false, nil
}
func (nilWAL) Start() error { return nil }
func (nilWAL) Stop() error  { return nil }
func (nilWAL) Wait()        {}

consensus/wal_test.go (new file, 127 lines)
@@ -0,0 +1,127 @@
package consensus

import (
	"bytes"
	"crypto/rand"
	"path"
	"sync"
	"testing"
	"time"

	wire "github.com/tendermint/go-wire"
	"github.com/tendermint/tendermint/consensus/types"
	tmtypes "github.com/tendermint/tendermint/types"
	cmn "github.com/tendermint/tmlibs/common"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestWALEncoderDecoder(t *testing.T) {
	now := time.Now()
	msgs := []TimedWALMessage{
		TimedWALMessage{Time: now, Msg: EndHeightMessage{0}},
		TimedWALMessage{Time: now, Msg: timeoutInfo{Duration: time.Second, Height: 1, Round: 1, Step: types.RoundStepPropose}},
	}

	b := new(bytes.Buffer)

	for _, msg := range msgs {
		b.Reset()

		enc := NewWALEncoder(b)
		err := enc.Encode(&msg)
		require.NoError(t, err)

		dec := NewWALDecoder(b)
		decoded, err := dec.Decode()
		require.NoError(t, err)

		assert.Equal(t, msg.Time.Truncate(time.Millisecond), decoded.Time)
		assert.Equal(t, msg.Msg, decoded.Msg)
	}
}

func TestSearchForEndHeight(t *testing.T) {
	wal, err := NewWAL(path.Join(data_dir, "many_blocks.cswal"), false)
	if err != nil {
		t.Fatal(err)
	}

	h := int64(3)
	gr, found, err := wal.SearchForEndHeight(h)
	assert.NoError(t, err, cmn.Fmt("expected not to err on height %d", h))
	assert.True(t, found, cmn.Fmt("expected to find end height for %d", h))
	assert.NotNil(t, gr, "expected group not to be nil")
	defer gr.Close()

	dec := NewWALDecoder(gr)
	msg, err := dec.Decode()
	assert.NoError(t, err, "expected to decode a message")
	rs, ok := msg.Msg.(tmtypes.EventDataRoundState)
	assert.True(t, ok, "expected message of type EventDataRoundState")
	assert.Equal(t, rs.Height, h+1, cmn.Fmt("wrong height"))
}

var initOnce sync.Once

func registerInterfacesOnce() {
	initOnce.Do(func() {
		var _ = wire.RegisterInterface(
			struct{ WALMessage }{},
			wire.ConcreteType{[]byte{}, 0x10},
		)
	})
}

func nBytes(n int) []byte {
	buf := make([]byte, n)
	n, _ = rand.Read(buf)
	return buf[:n]
}

func benchmarkWalDecode(b *testing.B, n int) {
	registerInterfacesOnce()

	buf := new(bytes.Buffer)
	enc := NewWALEncoder(buf)

	data := nBytes(n)
	enc.Encode(&TimedWALMessage{Msg: data, Time: time.Now().Round(time.Second)})

	encoded := buf.Bytes()

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		buf.Reset()
		buf.Write(encoded)
		dec := NewWALDecoder(buf)
		if _, err := dec.Decode(); err != nil {
			b.Fatal(err)
		}
	}
	b.ReportAllocs()
}

func BenchmarkWalDecode512B(b *testing.B) {
	benchmarkWalDecode(b, 512)
}

func BenchmarkWalDecode10KB(b *testing.B) {
	benchmarkWalDecode(b, 10*1024)
}
func BenchmarkWalDecode100KB(b *testing.B) {
	benchmarkWalDecode(b, 100*1024)
}
func BenchmarkWalDecode1MB(b *testing.B) {
	benchmarkWalDecode(b, 1024*1024)
}
func BenchmarkWalDecode10MB(b *testing.B) {
	benchmarkWalDecode(b, 10*1024*1024)
}
func BenchmarkWalDecode100MB(b *testing.B) {
	benchmarkWalDecode(b, 100*1024*1024)
}
func BenchmarkWalDecode1GB(b *testing.B) {
	benchmarkWalDecode(b, 1024*1024*1024)
}
docs/_static/custom_collapsible_code.css (new file, vendored, 17 lines)
@@ -0,0 +1,17 @@
.toggle {
    padding-bottom: 1em;
}

.toggle .header {
    display: block;
    clear: both;
    cursor: pointer;
}

.toggle .header:after {
    content: " ▼";
}

.toggle .header.open:after {
    content: " ▲";
}
docs/_static/custom_collapsible_code.js (new file, vendored, 10 lines)
@@ -0,0 +1,10 @@
let makeCodeBlocksCollapsible = function() {
    $(".toggle > *").hide();
    $(".toggle .header").show();
    $(".toggle .header").click(function() {
        $(this).parent().children().not(".header").toggle({"duration": 400});
        $(this).parent().children(".header").toggleClass("open");
    });
};
// we could use the }(); way if we had access to jQuery in HEAD, i.e. we would need to force the theme
// to load jQuery before our custom scripts
docs/_templates/layout.html (new file, vendored, 20 lines)
@@ -0,0 +1,20 @@
{% extends "!layout.html" %}

{% set css_files = css_files + ["_static/custom_collapsible_code.css"] %}

{# sadly, I didn't find a css style way to add custom JS to a list that is automagically added to head like CSS (above) #}
{% block extrahead %}
    <script type="text/javascript" src="_static/custom_collapsible_code.js"></script>
{% endblock %}

{% block footer %}
    <script type="text/javascript">
        $(document).ready(function() {
            // using this approach as we don't have access to the jQuery selectors
            // when executing the function on load in HEAD
            makeCodeBlocksCollapsible();
        });
    </script>
{% endblock %}
@@ -1,5 +1,5 @@
-Using the abci-cli
-==================
+Using ABCI-CLI
+==============

To facilitate testing and debugging of ABCI servers and simple apps, we
built a CLI, the ``abci-cli``, for sending ABCI messages from the
@@ -14,7 +14,7 @@ Next, install the ``abci-cli`` tool and example applications:

::

-    go get -u github.com/tendermint/abci/cmd/...
+    go get -u github.com/tendermint/abci/cmd/abci-cli

If this fails, you may need to use ``glide`` to get vendored
dependencies:
@@ -24,27 +24,37 @@ dependencies:

    go get github.com/Masterminds/glide
    cd $GOPATH/src/github.com/tendermint/abci
    glide install
-    go install ./cmd/...
+    go install ./cmd/abci-cli

-Now run ``abci-cli --help`` to see the list of commands:
+Now run ``abci-cli`` to see the list of commands:

::

-    COMMANDS:
-       batch       Run a batch of ABCI commands against an application
-       console     Start an interactive console for multiple commands
-       echo        Have the application echo a message
-       info        Get some info about the application
-       set_option  Set an option on the application
-       deliver_tx  Append a new tx to application
-       check_tx    Validate a tx
-       commit      Get application Merkle root hash
-       help, h     Shows a list of commands or help for one command
+    Usage:
+      abci-cli [command]
+
+    Available Commands:
+      batch       Run a batch of abci commands against an application
+      check_tx    Validate a tx
+      commit      Commit the application state and return the Merkle root hash
+      console     Start an interactive abci console for multiple commands
+      counter     ABCI demo example
+      deliver_tx  Deliver a new tx to the application
+      dummy       ABCI demo example
+      echo        Have the application echo a message
+      help        Help about any command
+      info        Get some info about the application
+      query       Query the application state
+      set_option  Set an option on the application
+
+    Flags:
+          --abci string      socket or grpc (default "socket")
+          --address string   address of application socket (default "tcp://127.0.0.1:46658")
+      -h, --help             help for abci-cli
+      -v, --verbose          print the command and results as if it were a console session
+
+    Use "abci-cli [command] --help" for more information about a command.

-    GLOBAL OPTIONS:
-       --address "tcp://127.0.0.1:46658"    address of application socket
-       --help, -h                           show help
-       --version, -v                        print the version

Dummy - First Example
---------------------
@@ -61,7 +71,7 @@ Let's start a dummy application, which was installed at the same time as

::

-    dummy
+    abci-cli dummy

In another terminal, run

@@ -70,8 +80,19 @@ In another terminal, run
    abci-cli echo hello
    abci-cli info

-The application should echo ``hello`` and give you some information
-about itself.
+You'll see something like:
+
+::
+
+    -> data: hello
+    -> data.hex: 68656C6C6F
+
+and:
+
+::
+
+    -> data: {"size":0}
+    -> data.hex: 7B2273697A65223A307D

An ABCI application must provide two things:

@@ -86,7 +107,7 @@ The server may be generic for a particular language, and we provide a
`reference implementation in
Golang <https://github.com/tendermint/abci/tree/master/server>`__. See
the `list of other ABCI
-implementations <https://tendermint.com/ecosystem>`__ for servers in
+implementations <./ecosystem.html>`__ for servers in
other languages.
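For a Go app, the server wiring is only a few lines. A minimal sketch
(assuming the ``server`` and ``example/dummy`` packages from the abci
repo; this snippet is not part of the doc's original text):

.. code-block:: go

    package main

    import (
        "github.com/tendermint/abci/example/dummy"
        "github.com/tendermint/abci/server"
        cmn "github.com/tendermint/tmlibs/common"
    )

    func main() {
        app := dummy.NewDummyApplication()
        // "socket" is the default --abci transport; "grpc" is the alternative
        srv, err := server.NewServer("tcp://127.0.0.1:46658", "socket", app)
        if err != nil {
            cmn.Exit(err.Error())
        }
        if err := srv.Start(); err != nil {
            cmn.Exit(err.Error())
        }
        // Run forever, stopping the server on SIGTERM/SIGINT
        cmn.TrapSignal(func() { srv.Stop() })
    }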
The handler is specific to the application, and may be arbitrary, so
@@ -109,36 +130,50 @@ Try running these commands:

::

    > echo hello
+   -> code: OK
    -> data: hello
+   -> data.hex: 0x68656C6C6F

    > info
+   -> code: OK
    -> data: {"size":0}
+   -> data.hex: 0x7B2273697A65223A307D

    > commit
-   -> data: 0x
+   -> code: OK

    > deliver_tx "abc"
    -> code: OK

    > info
+   -> code: OK
    -> data: {"size":1}
+   -> data.hex: 0x7B2273697A65223A317D

    > commit
-   -> data: 0x750502FC7E84BBD788ED589624F06CFA871845D1
+   -> code: OK
+   -> data.hex: 0x49DFD15CCDACDEAE9728CB01FBB5E8688CA58B91

    > query "abc"
    -> code: OK
-   -> data: {"index":0,"value":"abc","exists":true}
+   -> log: exists
+   -> height: 0
+   -> value: abc
+   -> value.hex: 616263

    > deliver_tx "def=xyz"
    -> code: OK

    > commit
-   -> data: 0x76393B8A182E450286B0694C629ECB51B286EFD5
+   -> code: OK
+   -> data.hex: 0x70102DB32280373FBF3F9F89DA2A20CE2CD62B0B

    > query "def"
    -> code: OK
-   -> data: {"index":1,"value":"xyz","exists":true}
+   -> log: exists
+   -> height: 0
+   -> value: xyz
+   -> value.hex: 78797A

Note that if we do ``deliver_tx "abc"`` it will store ``(abc, abc)``,
but if we do ``deliver_tx "abc=efg"`` it will store ``(abc, efg)``.
@@ -181,37 +216,39 @@ app:

::

-    counter
+    abci-cli counter

In another window, start the ``abci-cli console``:

::

    > set_option serial on
-   -> data: serial=on
+   -> code: OK

    > check_tx 0x00
    -> code: OK

    > check_tx 0xff
    -> code: OK

    > deliver_tx 0x00
    -> code: OK

    > check_tx 0x00
    -> code: BadNonce
    -> log: Invalid nonce. Expected >= 1, got 0

    > deliver_tx 0x01
    -> code: OK

    > deliver_tx 0x04
    -> code: BadNonce
    -> log: Invalid nonce. Expected 2, got 4

    > info
+   -> code: OK
    -> data: {"hashes":0,"txs":2}
+   -> data.hex: 0x7B22686173686573223A302C22747873223A327D

This is a very simple application, but between ``counter`` and
``dummy``, it's easy to see how you can build out arbitrary application
@@ -58,7 +58,7 @@ Tendermint Core RPC

The concept is that the ABCI app is completely hidden from the outside
world and only communicated through a tested and secured `interface
-exposed by the tendermint core <./rpc.html>`__. This interface
+exposed by the tendermint core <./specification/rpc.html>`__. This interface
exposes a lot of data on the block header and consensus process, which
is quite useful for externally verifying the system. It also includes
3(!) methods to broadcast a transaction (propose it for the blockchain,
@@ -142,6 +142,13 @@ It is unlikely that you will need to implement a client. For details of
our client, see
`here <https://github.com/tendermint/abci/tree/master/client>`__.

Most of the examples below are from `dummy application
<https://github.com/tendermint/abci/blob/master/example/dummy/dummy.go>`__,
which is a part of the abci repo. `persistent_dummy application
<https://github.com/tendermint/abci/blob/master/example/dummy/persistent_dummy.go>`__
is used to show ``BeginBlock``, ``EndBlock`` and ``InitChain``
example implementations.

Blockchain Protocol
-------------------

@@ -187,6 +194,38 @@ through all transactions in the mempool, removing any that were included
in the block, and re-run the rest using CheckTx against the post-Commit
mempool state.

.. container:: toggle

   .. container:: header

      **Show/Hide Go Example**

   .. code-block:: go

      func (app *DummyApplication) CheckTx(tx []byte) types.Result {
         return types.OK
      }

.. container:: toggle

   .. container:: header

      **Show/Hide Java Example**

   .. code-block:: java

      ResponseCheckTx requestCheckTx(RequestCheckTx req) {
         byte[] transaction = req.getTx().toByteArray();

         // validate transaction

         if (notValid) {
            return ResponseCheckTx.newBuilder().setCode(CodeType.BadNonce).setLog("invalid tx").build();
         } else {
            return ResponseCheckTx.newBuilder().setCode(CodeType.OK).build();
         }
      }

Consensus Connection
~~~~~~~~~~~~~~~~~~~~

@@ -215,6 +254,49 @@ The block header will be updated (TODO) to include some commitment to
the results of DeliverTx, be it a bitarray of non-OK transactions, or a
merkle root of the data returned by the DeliverTx requests, or both.

.. container:: toggle

   .. container:: header

      **Show/Hide Go Example**

   .. code-block:: go

      // tx is either "key=value" or just arbitrary bytes
      func (app *DummyApplication) DeliverTx(tx []byte) types.Result {
         parts := strings.Split(string(tx), "=")
         if len(parts) == 2 {
            app.state.Set([]byte(parts[0]), []byte(parts[1]))
         } else {
            app.state.Set(tx, tx)
         }
         return types.OK
      }

.. container:: toggle

   .. container:: header

      **Show/Hide Java Example**

   .. code-block:: java

      /**
       * Using Protobuf types from the protoc compiler, we always start with a byte[]
       */
      ResponseDeliverTx deliverTx(RequestDeliverTx request) {
         byte[] transaction = request.getTx().toByteArray();

         // validate your transaction

         if (notValid) {
            return ResponseDeliverTx.newBuilder().setCode(CodeType.BadNonce).setLog("transaction was invalid").build();
         } else {
            return ResponseDeliverTx.newBuilder().setCode(CodeType.OK).build();
         }
      }

Commit
^^^^^^

@@ -228,7 +310,7 @@ Commit, or there will be deadlock. Note also that all remaining
transactions in the mempool are replayed on the mempool connection
(CheckTx) following a commit.

-The Commit response includes a byte array, which is the deterministic
+The app should respond to the Commit request with a byte array, which is the deterministic
state root of the application. It is included in the header of the next
block. It can be used to provide easily verified Merkle-proofs of the
state of the application.
@@ -237,6 +319,36 @@ It is expected that the app will persist state to disk on Commit. The
option to have all transactions replayed from some previous block is the
job of the `Handshake <#handshake>`__.

.. container:: toggle

   .. container:: header

      **Show/Hide Go Example**

   .. code-block:: go

      func (app *DummyApplication) Commit() types.Result {
         hash := app.state.Hash()
         return types.NewResultOK(hash, "")
      }

.. container:: toggle

   .. container:: header

      **Show/Hide Java Example**

   .. code-block:: java

      ResponseCommit requestCommit(RequestCommit requestCommit) {

         // update the internal app-state
         byte[] newAppState = calculateAppState();

         // and return it to the node
         return ResponseCommit.newBuilder().setCode(CodeType.OK).setData(ByteString.copyFrom(newAppState)).build();
      }

BeginBlock
^^^^^^^^^^

@@ -248,6 +360,46 @@ The app should remember the latest height and header (ie. from which it
has run a successful Commit) so that it can tell Tendermint where to
pick up from when it restarts. See information on the Handshake, below.

.. container:: toggle

   .. container:: header

      **Show/Hide Go Example**

   .. code-block:: go

      // Track the block hash and header information
      func (app *PersistentDummyApplication) BeginBlock(params types.RequestBeginBlock) {
         // update latest block info
         app.blockHeader = params.Header

         // reset valset changes
         app.changes = make([]*types.Validator, 0)
      }

.. container:: toggle

   .. container:: header

      **Show/Hide Java Example**

   .. code-block:: java

      /*
       * all types come from protobuf definition
       */
      ResponseBeginBlock requestBeginBlock(RequestBeginBlock req) {

         Header header = req.getHeader();
         byte[] prevAppHash = header.getAppHash().toByteArray();
         long prevHeight = header.getHeight();
         long numTxs = header.getNumTxs();

         // run your pre-block logic. Maybe prepare a state snapshot, message components, etc

         return ResponseBeginBlock.newBuilder().build();
      }

EndBlock
^^^^^^^^

@@ -260,6 +412,40 @@ EndBlock response. To remove one, include it in the list with a
validator set. Note validator set changes are only available in v0.8.0
and up.

.. container:: toggle

   .. container:: header

      **Show/Hide Go Example**

   .. code-block:: go

      // Update the validator set
      func (app *PersistentDummyApplication) EndBlock(height uint64) (resEndBlock types.ResponseEndBlock) {
         return types.ResponseEndBlock{Diffs: app.changes}
      }

.. container:: toggle

   .. container:: header

      **Show/Hide Java Example**

   .. code-block:: java

      /*
       * Assume that one validator changes. The new validator has a power of 10
       */
      ResponseEndBlock requestEndBlock(RequestEndBlock req) {
         final long currentHeight = req.getHeight();
         final byte[] validatorPubKey = getValPubKey();

         ResponseEndBlock.Builder builder = ResponseEndBlock.newBuilder();
         builder.addDiffs(1, Types.Validator.newBuilder().setPower(10L).setPubKey(ByteString.copyFrom(validatorPubKey)).build());

         return builder.build();
      }

Query Connection
~~~~~~~~~~~~~~~~

@@ -281,6 +467,73 @@ cause Tendermint to not connect to the corresponding peer:

Note: these query formats are subject to change!

.. container:: toggle

   .. container:: header

      **Show/Hide Go Example**

   .. code-block:: go

      func (app *DummyApplication) Query(reqQuery types.RequestQuery) (resQuery types.ResponseQuery) {
         if reqQuery.Prove {
            value, proof, exists := app.state.Proof(reqQuery.Data)
            resQuery.Index = -1 // TODO make Proof return index
            resQuery.Key = reqQuery.Data
            resQuery.Value = value
            resQuery.Proof = proof
            if exists {
               resQuery.Log = "exists"
            } else {
               resQuery.Log = "does not exist"
            }
            return
         } else {
            index, value, exists := app.state.Get(reqQuery.Data)
            resQuery.Index = int64(index)
            resQuery.Value = value
            if exists {
               resQuery.Log = "exists"
            } else {
               resQuery.Log = "does not exist"
            }
            return
         }
      }

.. container:: toggle

   .. container:: header

      **Show/Hide Java Example**

   .. code-block:: java

      ResponseQuery requestQuery(RequestQuery req) {
         final boolean isProveQuery = req.getProve();
         final ResponseQuery.Builder responseBuilder = ResponseQuery.newBuilder();

         if (isProveQuery) {
            com.app.example.ProofResult proofResult = generateProof(req.getData().toByteArray());
            final byte[] proofAsByteArray = proofResult.getAsByteArray();

            responseBuilder.setProof(ByteString.copyFrom(proofAsByteArray));
            responseBuilder.setKey(req.getData());
            responseBuilder.setValue(ByteString.copyFrom(proofResult.getData()));
            responseBuilder.setLog(proofResult.getLogValue());
         } else {
            byte[] queryData = req.getData().toByteArray();

            final com.app.example.QueryResult result = generateQueryResult(queryData);

            responseBuilder.setIndex(result.getIndex());
            responseBuilder.setValue(ByteString.copyFrom(result.getValue()));
            responseBuilder.setLog(result.getLogValue());
         }

         return responseBuilder.build();
      }

Handshake
~~~~~~~~~

@@ -288,7 +541,7 @@ When the app or tendermint restarts, they need to sync to a common
height. When an ABCI connection is first established, Tendermint will
call ``Info`` on the Query connection. The response should contain the
LastBlockHeight and LastBlockAppHash - the former is the last block for
-the which the app ran Commit successfully, the latter is the response
+which the app ran Commit successfully, the latter is the response
from that Commit.

Using this information, Tendermint will determine what needs to be
@@ -297,3 +550,79 @@ the app are synced to the latest block height.

If the app returns a LastBlockHeight of 0, Tendermint will just replay
all blocks.

.. container:: toggle

   .. container:: header

      **Show/Hide Go Example**

   .. code-block:: go

      func (app *DummyApplication) Info(req types.RequestInfo) (resInfo types.ResponseInfo) {
         return types.ResponseInfo{Data: cmn.Fmt("{\"size\":%v}", app.state.Size())}
      }

.. container:: toggle

   .. container:: header

      **Show/Hide Java Example**

   .. code-block:: java

      ResponseInfo requestInfo(RequestInfo req) {
         final byte[] lastAppHash = getLastAppHash();
         final long lastHeight = getLastHeight();
         return ResponseInfo.newBuilder().setLastBlockAppHash(ByteString.copyFrom(lastAppHash)).setLastBlockHeight(lastHeight).build();
      }

Genesis
~~~~~~~

``InitChain`` will be called once upon the genesis. ``params`` includes the
initial validator set. Later on, it may be extended to take parts of the
consensus params.

.. container:: toggle

   .. container:: header

      **Show/Hide Go Example**

   .. code-block:: go

      // Save the validators in the merkle tree
      func (app *PersistentDummyApplication) InitChain(params types.RequestInitChain) {
         for _, v := range params.Validators {
            r := app.updateValidator(v)
            if r.IsErr() {
               app.logger.Error("Error updating validators", "r", r)
            }
         }
      }

.. container:: toggle

   .. container:: header

      **Show/Hide Java Example**

   .. code-block:: java

      /*
       * all types come from protobuf definition
       */
      ResponseInitChain requestInitChain(RequestInitChain req) {
         final int validatorsCount = req.getValidatorsCount();
         final List<Types.Validator> validatorsList = req.getValidatorsList();

         validatorsList.forEach((validator) -> {
            long power = validator.getPower();
            byte[] validatorPubKey = validator.getPubKey().toByteArray();

            // do something for validator setup in app
         });

         return ResponseInitChain.newBuilder().build();
      }
@@ -1,16 +0,0 @@
-# ABCI
-
-ABCI is an interface between the consensus/blockchain engine known as tendermint, and the application-specific business logic, known as an ABCi app.
-
-The tendermint core should run unchanged for all apps. Each app can customize it, the supported transactions, queries, even the validator sets and how to handle staking / slashing stake. This customization is achieved by implementing the ABCi app to send the proper information to the tendermint engine to perform as directed.
-
-To understand this decision better, think of the design of the tendermint engine.
-
-* A blockchain is simply consensus on a unique global ordering of events.
-* This consensus can efficiently be implemented using BFT and PoS
-* This code can be generalized to easily support a large number of blockchains
-* The block-chain specific code, the interpretation of the individual events, can be implemented by a 3rd party app without touching the consensus engine core
-* Use an efficient, language-agnostic layer to implement this (ABCi)
-
-
-Bucky, please make this doc real.
@@ -2,15 +2,4 @@

This is a location to record all high-level architecture decisions in the tendermint project. Not the implementation details, but the reasoning that happened. This should be refered to for guidance of the "right way" to extend the application. And if we notice that the original decisions were lacking, we should have another open discussion, record the new decisions here, and then modify the code to match.

-This is like our guide and mentor when Jae and Bucky are offline.... The concept comes from a [blog post](https://product.reverb.com/documenting-architecture-decisions-the-reverb-way-a3563bb24bd0#.78xhdix6t) that resonated among the team when Anton shared it.
-
-Each section of the code can have it's own markdown file in this directory, and please add a link to the readme.
-
-## Sections
-
-* [ABCI](./ABCI.md)
-* [go-merkle / merkleeyes](./merkle.md)
-* [Frey's thoughts on the data store](./merkle-frey.md)
-* basecoin
-* tendermint core (multiple sections)
-* ???
+Read up on the concept in this [blog post](https://product.reverb.com/documenting-architecture-decisions-the-reverb-way-a3563bb24bd0#.78xhdix6t).
@@ -1,4 +1,4 @@
-# ADR 2: Indexing
+# ADR 2: Event Subscription

## Context
@@ -1,4 +1,4 @@
-# ADR 1: Must an ABCI-app have an RPC server?
+# ADR 3: Must an ABCI-app have an RPC server?

## Context
38
docs/architecture/adr-004-historical-validators.md
Normal file
@@ -0,0 +1,38 @@
# ADR 004: Historical Validators

## Context

Right now, we can query the present validator set, but there is no history. If a node was offline for a long time, there is no way to reconstruct past validator sets. This is needed for the light client, and we agreed it requires an enhancement of the API.

## Decision

For every block, store a new structure that contains either the latest validator set, or the height of the last block for which the validator set changed. Note this is not the height of the block which returned the validator set change itself, but the next block, i.e. the first block it comes into effect for.

Storing the validators will be handled by the `state` package.

At some point in the future, we may consider more efficient storage in the case where the validators are updated frequently - for instance by only saving the diffs, rather than the whole set.

An alternative approach suggested keeping the validator set, or diffs of it, in a merkle IAVL tree. While it might afford cheaper proofs that a validator set has not changed, it would be more complex, and likely less efficient.
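As a rough illustration of the decision, here is a minimal sketch of the per-height record and the lookup indirection. All names below are hypothetical, not the actual `state` package API:

```go
package state

// ValidatorsInfo is a sketch of the structure stored for every height:
// either the validator set itself, or the height of the last block at
// which the set changed (the first height the current set took effect).
type ValidatorsInfo struct {
    ValidatorSet      []Validator // non-nil only at heights where the set changed
    LastHeightChanged int64
}

// Validator is a stand-in for the real validator type.
type Validator struct {
    Address     []byte
    VotingPower int64
}

// LoadValidators resolves the indirection: if the record for `height`
// carries no set, follow the pointer to the height where it last changed.
func LoadValidators(load func(height int64) ValidatorsInfo, height int64) []Validator {
    vi := load(height)
    if vi.ValidatorSet == nil {
        vi = load(vi.LastHeightChanged)
    }
    return vi.ValidatorSet
}
```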
## Status

Accepted.

## Consequences

### Positive

- Can query old validator sets, with proof.

### Negative

- Writes an extra structure to disk with every block.

### Neutral
docs/architecture/adr-005-consensus-params.md
@@ -0,0 +1,85 @@
# ADR 005: Consensus Params

## Context

Consensus-critical parameters controlling blockchain capacity have until now been hard coded, loaded from a local config, or neglected. Since they may need to be different in different networks, and potentially to evolve over time within networks, we seek to initialize them in a genesis file, and expose them through the ABCI.

While we have some specific parameters now, like maximum block and transaction size, we expect to have more in the future, such as a period over which evidence is valid, or the frequency of checkpoints.

## Decision

### ConsensusParams

No consensus-critical parameters should ever be found in the `config.toml`.

A new `ConsensusParams` is optionally included in the `genesis.json` file, and loaded into the `State`. Any items not included are set to their default value. A value of 0 is undefined (see ABCI, below). A value of -1 is used to indicate the parameter does not apply. The parameters are used to determine the validity of a block (and tx) via the union of all relevant parameters.
```go
type ConsensusParams struct {
    BlockSizeParams
    TxSizeParams
    BlockGossipParams
}

type BlockSizeParams struct {
    MaxBytes int
    MaxTxs   int
    MaxGas   int
}

type TxSizeParams struct {
    MaxBytes int
    MaxGas   int
}

type BlockGossipParams struct {
    BlockPartSizeBytes int
}
```
The `ConsensusParams` can evolve over time by adding new structs that cover different aspects of the consensus rules.

The `BlockPartSizeBytes` and the `BlockSizeParams.MaxBytes` are enforced to be greater than 0. The former because we need a part size, the latter so that we always have at least some sanity check over the size of blocks.

### ABCI

#### InitChain

InitChain currently takes the initial validator set. It should be extended to also take parts of the ConsensusParams. There is some case to be made for it to take the entire Genesis, except there may be things in the genesis, like the BlockPartSize, that the app shouldn't really know about.

#### EndBlock

The EndBlock response includes a `ConsensusParams`, which includes BlockSizeParams and TxSizeParams, but not BlockGossipParams. Other param structs can be added to `ConsensusParams` in the future. The `0` value is used to denote no change. Any other value will update that parameter in the `State.ConsensusParams`, to be applied for the next block. Tendermint should have hard-coded upper limits as sanity checks.
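A minimal sketch of the zero-means-no-change merge described above, reusing the `BlockSizeParams` struct defined earlier in this ADR (the helper name and the hard cap value are hypothetical):

```go
// applyBlockSizeUpdate merges an EndBlock update into the current params.
// A 0 field means "no change"; any other value replaces the current one.
// A hard-coded upper limit acts as a sanity check on what the app returns.
func applyBlockSizeUpdate(cur, update BlockSizeParams) BlockSizeParams {
    const maxBytesUpperLimit = 100 * 1024 * 1024 // hypothetical hard cap

    if update.MaxBytes != 0 && update.MaxBytes <= maxBytesUpperLimit {
        cur.MaxBytes = update.MaxBytes
    }
    if update.MaxTxs != 0 {
        cur.MaxTxs = update.MaxTxs
    }
    if update.MaxGas != 0 {
        cur.MaxGas = update.MaxGas
    }
    return cur
}
```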
## Status

Proposed.

## Consequences

### Positive

- Alternative capacity limits and consensus parameters can be specified without re-compiling the software.
- They can also change over time under the control of the application.

### Negative

- More exposed parameters means more complexity.
- Different rules at different heights in the blockchain complicate fast sync.

### Neutral

- The TxSizeParams, which checks validity, may be in conflict with the config's `max_block_size_tx`, which determines proposal sizes.
docs/architecture/adr-006-trust-metric.md
@@ -0,0 +1,238 @@
# ADR 006: Trust Metric Design

## Context

The proposed trust metric will allow Tendermint to maintain local trust rankings for peers it has directly interacted with, which can then be used to implement soft security controls. The calculations were obtained from the [TrustGuard](https://dl.acm.org/citation.cfm?id=1060808) project.

### Background

The Tendermint Core project developers would like to improve Tendermint security and reliability by keeping track of the level of trustworthiness peers have demonstrated within the peer-to-peer network. This way, undesirable outcomes from peers will not immediately result in them being dropped from the network (potentially causing drastic changes to take place). Instead, peers' behavior can be monitored with appropriate metrics, and they can be removed from the network once Tendermint Core is certain the peer is a threat. For example, when the PEXReactor makes a request for peers' network addresses from an already known peer, and the returned network addresses are unreachable, this untrustworthy behavior should be tracked. Returning a few bad network addresses probably shouldn't cause a peer to be dropped, while excessive amounts of this behavior do qualify the peer for being dropped.

Trust metrics can be circumvented by malicious nodes through the use of strategic oscillation techniques, which adapt the malicious node's behavior pattern in order to maximize its goals. For instance, if the malicious node learns that the time interval of the Tendermint trust metric is *X* hours, then it could wait *X* hours in-between malicious activities. We could try to combat this issue by increasing the interval length, yet this will make the system less adaptive to recent events.

Instead, having shorter intervals, but keeping a history of interval values, will give our metric the flexibility needed in order to keep the network stable, while also making it resilient against a strategic malicious node in the Tendermint peer-to-peer network. Also, the metric can access trust data over a rather long period of time while not greatly increasing its history size, by aggregating older history values over a larger number of intervals, and at the same time maintaining great precision for the recent intervals. This approach is referred to as fading memories, and closely resembles the way human beings remember their experiences. The trade-off to using history data is that the interval values should be preserved in-between executions of the node.

### References

S. Mudhakar, L. Xiong, and L. Liu, “TrustGuard: Countering Vulnerabilities in Reputation Management for Decentralized Overlay Networks,” in *Proceedings of the 14th International Conference on World Wide Web, pp. 422-431*, May 2005.
## Decision

The proposed trust metric will allow a developer to inform the trust metric store of all good and bad events relevant to a peer's behavior, and at any time, the metric can be queried for a peer's current trust ranking.

The three subsections below will cover the process being considered for calculating the trust ranking, the concept of the trust metric store, and the interface for the trust metric.

### Proposed Process

The proposed trust metric will count good and bad events relevant to the object, and calculate the percent of counters that are good over an interval with a predefined duration. This is the procedure that will continue for the life of the trust metric. When the trust metric is queried for the current **trust value**, a resilient equation will be utilized to perform the calculation.

The equation being proposed resembles a Proportional-Integral-Derivative (PID) controller used in control systems. The proportional component allows us to be sensitive to the value of the most recent interval, while the integral component allows us to incorporate trust values stored in the history data, and the derivative component allows us to give weight to sudden changes in the behavior of a peer. We compute the trust value of a peer in interval *i* based on its current trust ranking, its trust rating history prior to interval *i* (over the past *maxH* number of intervals), and its trust ranking fluctuation. We will break up the equation into the three components.
```math
(1) Proportional Value = a * R[i]
```

where *R*[*i*] denotes the raw trust value at time interval *i* (with *i* == 0 being the current time) and *a* is the weight applied to the contribution of the current reports. The next component of our equation uses a weighted sum over the last *maxH* intervals to calculate the history value for time *i*:

`H[i] = ` ![formula1](img/formula1.png)
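Spelled out in the notation of the other equations (an assumption on my part: the formula image above presumably depicts the normalized weighted sum used by TrustGuard, with the weights *Wk* defined in the next paragraph):

```math
H[i] = (W1*R[i-1] + W2*R[i-2] + ... + WmaxH*R[i-maxH]) / (W1 + W2 + ... + WmaxH)
```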
The weights can be chosen either optimistically or pessimistically. An optimistic weight creates larger weights for newer history data values, while the pessimistic weight creates larger weights for time intervals with lower scores. The default weights used during the calculation of the history value are optimistic and calculated as *Wk* = 0.8^*k*, for time interval *k*. With the history value available, we can now finish calculating the integral value:
```math
(2) Integral Value = b * H[i]
```

where *H*[*i*] denotes the history value at time interval *i* and *b* is the weight applied to the contribution of past performance for the object being measured. The derivative component will be calculated as follows:

```math
D[i] = R[i] - H[i]

(3) Derivative Value = c(D[i]) * D[i]
```

Where the value of *c* is selected based on the *D*[*i*] value relative to zero. The default selection process makes *c* equal to 0 unless *D*[*i*] is a negative value, in which case *c* is equal to 1. The result is that the maximum penalty is applied when current behavior is lower than previously experienced behavior. If the current behavior is better than the previously experienced behavior, then the Derivative Value has no impact on the trust value. With the three components brought together, our trust value equation is calculated as follows:

```math
TrustValue[i] = a * R[i] + b * H[i] + c(D[i]) * D[i]
```
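As a quick numeric illustration of the equation (the weight values here are made-up placeholders, not the metric's actual defaults):

```go
package main

import "fmt"

func main() {
    // Hypothetical inputs: the peer behaved well historically (H) but
    // poorly in the current interval (R), so the derivative term bites.
    r := 0.60 // R[i]: raw trust value for the current interval
    h := 0.90 // H[i]: history value over the past maxH intervals

    a, b := 0.4, 0.6 // placeholder proportional/integral weights

    d := r - h // D[i]
    c := 0.0   // c(D[i]): 0 unless current behavior dropped below history
    if d < 0 {
        c = 1.0
    }

    trustValue := a*r + b*h + c*d
    fmt.Printf("TrustValue[i] = %.2f\n", trustValue) // 0.4*0.6 + 0.6*0.9 - 0.3 = 0.48
}
```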
As a performance optimization that will keep the amount of raw interval data being saved to a reasonable size of *m*, while allowing us to represent 2^*m* - 1 history intervals, we can employ the fading memories technique, which trades space and time complexity for the precision of the history data values, by summarizing larger quantities of less recent values. While our equation above attempts to access up to *maxH* (which can be 2^*m* - 1), we will map those requests down to *m* values using equation 4 below:

```math
(4) j = index, where index > 0
```

Where *j* is one of the *(0, 1, 2, ..., m - 1)* indices used to access history interval data. Now we can access the raw intervals using the following calculations:

```math
R[0] = raw data for current time interval
```

`R[j] = ` ![formula2](img/formula2.png)
### Trust Metric Store

Similar to the P2P subsystem AddrBook, the trust metric store will maintain information relevant to Tendermint peers. Additionally, the trust metric store will ensure that trust metrics will only be active for peers that a node is currently and directly engaged with.

Reactors will provide a peer key to the trust metric store in order to retrieve the associated trust metric. The trust metric can then record new positive and negative events experienced by the reactor, as well as provide the current trust score calculated by the metric.

When the node is shutting down, the trust metric store will save history data for trust metrics associated with all known peers. This saved information allows experiences with a peer to be preserved across node executions, which can span a tracking window of days or weeks. The trust history data is loaded automatically during OnStart.

### Interface Detailed Design

Each trust metric allows for the recording of positive/negative events, querying the current trust value/score, and the stopping/pausing of tracking over time intervals. This can be seen below:
```go
// TrustMetric - keeps track of peer reliability
type TrustMetric struct {
    // Private elements.
}

// Pause tells the metric to pause recording data over time intervals.
// All method calls that indicate events will unpause the metric
func (tm *TrustMetric) Pause() {}

// Stop tells the metric to stop recording data over time intervals
func (tm *TrustMetric) Stop() {}

// BadEvents indicates that an undesirable event(s) took place
func (tm *TrustMetric) BadEvents(num int) {}

// GoodEvents indicates that a desirable event(s) took place
func (tm *TrustMetric) GoodEvents(num int) {}

// TrustValue gets the dependable trust value; always between 0 and 1
func (tm *TrustMetric) TrustValue() float64 {}

// TrustScore gets a score based on the trust value; always between 0 and 100
func (tm *TrustMetric) TrustScore() int {}

// NewMetric returns a trust metric with the default configuration
func NewMetric() *TrustMetric {}

//------------------------------------------------------------------------------------------------
// For example

tm := NewMetric()

tm.BadEvents(1)
score := tm.TrustScore()

tm.Stop()
```
Some of the trust metric parameters can be configured. The weight values should probably be left alone in most cases, yet the time durations for the tracking window and individual time interval should be considered.
```go
// TrustMetricConfig - Configures the weight functions and time intervals for the metric
type TrustMetricConfig struct {
    // Determines the percentage given to current behavior
    ProportionalWeight float64

    // Determines the percentage given to prior behavior
    IntegralWeight float64

    // The window of time that the trust metric will track events across.
    // This can be set to cover many days without issue
    TrackingWindow time.Duration

    // Each interval should be short for adaptability.
    // Less than 30 seconds is too sensitive,
    // and greater than 5 minutes will make the metric numb
    IntervalLength time.Duration
}

// DefaultConfig returns a config with values that have been tested and produce desirable results
func DefaultConfig() TrustMetricConfig {}

// NewMetricWithConfig returns a trust metric with a custom configuration
func NewMetricWithConfig(tmc TrustMetricConfig) *TrustMetric {}

//------------------------------------------------------------------------------------------------
// For example

config := TrustMetricConfig{
    TrackingWindow: time.Minute * 60 * 24, // one day
    IntervalLength: time.Minute * 2,
}

tm := NewMetricWithConfig(config)

tm.BadEvents(10)
tm.Pause()
tm.GoodEvents(1) // becomes active again
```
A trust metric store should be created with a DB that has persistent storage so it can save history data across node executions. All trust metrics instantiated by the store will be created with the provided TrustMetricConfig configuration.

When you attempt to fetch the trust metric for a peer, and an entry does not exist in the trust metric store, a new metric is automatically created and the entry made within the store.

In addition to the fetching method, GetPeerTrustMetric, the trust metric store provides a method to call when a peer has disconnected from the node. This is so the metric can be paused (history data will not be saved) for periods of time when the node is not having direct experiences with the peer.
```go
// TrustMetricStore - Manages all trust metrics for peers
type TrustMetricStore struct {
    cmn.BaseService

    // Private elements
}

// OnStart implements Service
func (tms *TrustMetricStore) OnStart() error {}

// OnStop implements Service
func (tms *TrustMetricStore) OnStop() {}

// NewTrustMetricStore returns a store that saves data to the DB
// and uses the config when creating new trust metrics
func NewTrustMetricStore(db dbm.DB, tmc TrustMetricConfig) *TrustMetricStore {}

// Size returns the number of entries in the trust metric store
func (tms *TrustMetricStore) Size() int {}

// GetPeerTrustMetric returns a trust metric by peer key
func (tms *TrustMetricStore) GetPeerTrustMetric(key string) *TrustMetric {}

// PeerDisconnected pauses the trust metric associated with the peer identified by the key
func (tms *TrustMetricStore) PeerDisconnected(key string) {}

//------------------------------------------------------------------------------------------------
// For example

db := dbm.NewDB("trusthistory", "goleveldb", dirPathStr)
tms := NewTrustMetricStore(db, DefaultConfig())

tm := tms.GetPeerTrustMetric(key)
tm.BadEvents(1)

tms.PeerDisconnected(key)
```
## Status

Approved.

## Consequences

### Positive

- The trust metric will allow Tendermint to make non-binary security and reliability decisions
- Will help Tendermint implement deterrents that provide soft security controls, yet avoid disruption on the network
- Will provide useful profiling information when analyzing performance over time related to peer interaction

### Negative

- Requires saving the trust metric history data across node executions

### Neutral

- Keep in mind that good events need to be recorded just as bad events do with this implementation
docs/architecture/adr-template.md
@@ -0,0 +1,16 @@
# ADR 000: Template for an ADR

## Context

## Decision

## Status

## Consequences

### Positive

### Negative

### Neutral
docs/architecture/img/formula1.png (new binary image, 9.6 KiB)
docs/architecture/img/formula2.png (new binary image, 5.8 KiB)
@@ -1,240 +0,0 @@
# Merkle data stores - Frey's proposal

## TL;DR

To allow the efficient creation of an ABCI app, tendermint wishes to provide a reference implementation of a key-value store that provides merkle proofs of the data. These proofs then quickly allow the ABCI app to provide an app hash to the consensus engine, as well as a full proof to any client.

This is equivalent to building a database, and I would propose designing it from the API first, then looking at how to implement this (or making an adapter from the API to existing implementations). Once we agree on the functionality and the interface, we can implement the API bindings, and then work on building adapters to existing merkle-ized data stores, or modifying the stores to support this interface.

We need to consider the API (both in-process and over the network), language bindings, maintaining handles to old state (and garbage collecting), persistence, security, providing merkle proofs, and general key-value store operations. To stay consistent with the blockchain's "single global order of operations", this data store should only allow one connection at a time to have write access.
## Overview

* **State**
  * There are two concepts of state, "committed state" and "working state"
  * The working state is only accessible from the ABCI app, allows writing, but does not need to support proofs.
  * When we commit the "working state", it becomes a new "committed state" and has an immutable root hash, provides proofs, and can be exposed to external clients.
* **Transactions**
  * The database always allows creating a read-only transaction at the last "committed state"; this transaction can serve read queries and proofs.
  * The database maintains all data to serve these read transactions until they are closed by the client (or time out). This allows the client(s) to determine how much old info is needed
  * The database can only support *at most* one writable transaction at a time. This makes it easy to enforce serializability, and attempting to start a second writable transaction may trigger a panic.
* **Functionality**
  * It must support efficient key-value operations (get/set/delete)
  * It must support returning merkle proofs for any "committed state"
  * It should support range queries on subsets of the key space if possible (ie. if the db doesn't hash keys)
  * It should also support listening to changes to a desired key via pub-sub or a similar method, so I can quickly notify you of a change to your balance without constant polling.
  * It may support other db-specific query types as an extension to this interface, as long as all specified actions maintain their meaning.
* **Interface**
  * This interface should be domain-specific - ie. designed just for this use case
  * It should present a simple go interface for embedding the data store in-process
  * It should create a gRPC/protobuf API for calling from any client
  * It should provide and maintain client adapters from our in-process interface to gRPC client calls for at least golang and Java (maybe more languages?)
  * It should provide and maintain server adapters from our gRPC calls to the in-process interface for golang at least (unless there is another server we wish to support)
* **Persistence**
  * It must support atomic persistence upon committing a new block. That is, upon crash recovery, the state is guaranteed to represent the state at the end of a complete block (along with a note of which height it was).
  * It must delay deletion of old data as long as there are open read-only transactions referring to it; thus we must maintain some sort of WAL to keep track of pending cleanup.
  * When a transaction is closed, or when we recover from a crash, it should clean up all no-longer-needed data to avoid memory/storage leaks.
* **Security and Auth**
  * If we allow connections over gRPC, we must consider these issues and allow both encryption (SSL), and some basic auth rules to prevent undesired access to the DB
  * This is client-specific and does not need to be supported in the in-process, embedded version.
## Details

Here we go more in-depth on each of the sections, explaining the reasoning and more details on the desired behavior. This document is only the high-level architecture and should support multiple implementations. When building out a specific implementation, a similar document should be provided for that repo, showing how it implements these concepts, and details about memory usage, storage, efficiency, etc.

### State

The current ABCI interface avoids this question a bit, and that has brought confusion. If I use `merkleeyes` to store data, which state is returned from `Query`? The current "working" state, which I would like to refer to in my ABCI application? Or the last committed state, which I would like to return to a client's query? Or an old state, which I may select based on height?

Right now, `merkleeyes` implements `Query` like a normal ABCI app and only returns committed state, which has led to problems and confusion. Thus, we need to be explicit about which state we want to view. Each viewer can then specify which state it wants to view. This allows the app to query the working state in DeliverTx, but the committed state in Query.

We can easily provide two global references for "last committed" and "current working" states. However, if we want to also allow querying of older commits... then we need some way to keep track of which ones are still in use, so we can garbage collect the unneeded ones. There is a non-trivial overhead in holding references to all past states, but a hard-coded solution (hold onto the last 5 commits) may also not support all clients. We should let the client define this somehow.
### Transactions

Transactions (in the typical database sense) are a clean and established solution to this issue. We can look at the [isolation levels](https://en.wikipedia.org/wiki/Isolation_(database_systems)#Serializable) which attempt to provide us things like "repeatable reads". That means if we open a transaction, and query some data 100 times while other processes are writing to the db, we get the same result each time. This transaction has a reference to its own local state from the time the transaction started. (We are referring to the highest isolation levels here, which correlate well with the blockchain use case.)

If we implement a read-only transaction as a reference to state at the time of creation of that transaction, we can then hold these references to various snapshots, one per block that we are interested in, and allow the client to multiplex queries and proofs from these various blocks.

If we continue using these concepts (which have informed 30+ years of server-side design), we can add a few nice features to our write transactions. The first of which is `Rollback` and `Commit`. That means all the changes we make in this transaction have no effect on the database until they are committed. And until they are committed, we can always abort if we detect an anomaly, returning to the last committed state with a rollback.

There is also a nice extension to this available on some database servers: basically, "nested" transactions or "savepoints". This means that within one transaction, you can open a subtransaction/savepoint and continue work. Later you have the option to commit or rollback all work since the savepoint/subtransaction, and then continue with the main transaction.

If you don't understand why this is useful, look at how basecoin needs to [hold cached state for AppTx](https://github.com/tendermint/basecoin/blob/master/state/execution.go#L126-L149), meaning that it rolls back all modifications if the AppTx returns an error. This was implemented as a wrapper in basecoin, but it is a reasonable thing to support in the DB interface itself (especially since the implementation becomes quite non-trivial as soon as you support range queries).

To give a bit more reference to this concept in practice, read about [Savepoints in PostgreSQL](https://www.postgresql.org/docs/current/static/tutorial-transactions.html) ([reference](https://www.postgresql.org/docs/current/static/sql-savepoint.html)) or [Nesting transactions in SQL Server](http://dba-presents.com/index.php/databases/sql-server/43-nesting-transactions-and-save-transaction-command) (TL;DR: scroll to the bottom, section "Real nesting transactions with SAVE TRANSACTION").
### Functionality

Merkle trees work with key-value pairs, so we should most importantly focus on the basic key-value operations. That is `Get`, `Set`, and `Remove`. We also need to return a merkle proof for any key, along with a root hash of the tree for committing state to the blockchain. This is just the basic merkle-tree stuff.

If it is possible with the implementation, it is nice to provide access to range queries. That is, return all values where the key is between X and Y. If you construct your keys wisely, it is possible to store one-to-many (1:N) relations this way. Eg. if we store blog posts under the key blog:`poster_id`:`sequence`, then I could search for all blog posts by a given `poster_id`, or even return just posts 10-19 from the given poster.

The construction of a tree that supports range queries was one of the [design decisions of go-merkle](https://github.com/tendermint/go-merkle/blob/master/README.md). It is also kind of possible with [ethereum's patricia trie](https://github.com/ethereum/wiki/wiki/Patricia-Tree) as long as the key is less than 32 bytes.

In addition to range queries, there is one more nice feature that we could add to our data store - listening to events. Depending on your context, this is "reactive programming", "event emitters", "notifications", etc... But the basic concept is that a client can listen for all changes to a given key (or set of keys), and receive a notification when this happens. This is very important to avoid [repeated polling and wasted queries](http://resthooks.org/) when a client simply wants to [detect changes](https://www.rethinkdb.com/blog/realtime-web/).

If the database provides access to some "listener" functionality, the app can choose to expose this to the external client via websockets, web hooks, http2 push events, android push notifications, etc.... But if we want to support modern client functionality, let's add support for this reactive paradigm in our DB interface.

**TODO** support for more advanced backends, eg. Bolt....
### Go Interface

I will start with a simple go interface to illustrate the in-process interface. Once there is agreement on how this looks, we can work out the gRPC bindings to support calling out of process. These interfaces are not finalized code, but I think they demonstrate the concepts better than text and provide a strawman to get feedback.
```go
// DB represents the committed state of a merkle-ized key-value store
type DB interface {
    // Snapshot returns a reference to the last committed state to use for
    // providing proofs. You must close it at the end to garbage collect
    // the historical state we hold on to for making these proofs.
    Snapshot() Prover

    // Begin starts a transaction - the only way to change state.
    // This will return an error if there is an open Transaction.
    Begin() (Transaction, error)

    // These callbacks are triggered when the Transaction is Committed
    // to the DB. They can be used to eg. notify clients via websockets when
    // their account balance changes.
    AddListener(key []byte, listener Listener)
    RemoveListener(listener Listener)
}

// DBReader represents a read-only connection to a snapshot of the db
type DBReader interface {
    // Queries on my local view
    Has(key []byte) (bool, error)
    Get(key []byte) (Model, error)
    GetRange(start, end []byte, ascending bool, limit int) ([]Model, error)
    Closer
}

// Prover is an interface that lets one query for Proofs, holding the
// data at a specific location in memory
type Prover interface {
    DBReader

    // Hash is the AppHash (RootHash) for this block
    Hash() (hash []byte)

    // Prove returns the data along with a merkle Proof.
    // Model and Proof are nil if not found
    Prove(key []byte) (Model, Proof, error)
}

// Transaction is a set of state changes to the DB to be applied atomically.
// There can only be one open transaction at a time, which may only have
// at most one subtransaction at a time.
// In short, at any time, there is exactly one object that can write to the
// DB, and we can use Subtransactions to group operations and roll them back
// together (kind of like `types.KVCache` from basecoin)
type Transaction interface {
    DBReader

    // Change the state - will raise an error immediately if this Transaction
    // is not holding the exclusive write lock
    Set(model Model) (err error)
    Remove(key []byte) (removed bool, err error)

    // Subtransaction starts a new subtransaction; rollback will not affect the
    // parent. Only on Commit are the changes applied to this transaction.
    // While the subtransaction exists, no write is allowed on the parent.
    // (You must Commit or Rollback the child to continue)
    Subtransaction() Transaction

    // Commit this transaction (or subtransaction); the parent reference is
    // now updated.
    // This only updates the persistent store if the top-level transaction commits
    // (You may have any number of nested sub transactions)
    Commit() error

    // Rollback ends the transaction and throws away all transaction-local state,
    // allowing the tree to prune those elements.
    // The parent transaction now recovers the write lock.
    Rollback()
}

// Listener registers callbacks on changes to the data store
type Listener interface {
    OnSet(key, value, oldValue []byte)
    OnRemove(key, oldValue []byte)
}

// Proof represents a merkle proof for a key
type Proof interface {
    RootHash() []byte
    Verify(key, value, root []byte) bool
}

type Model interface {
    Key() []byte
    Value() []byte
}

// Closer releases the reference to this state, allowing us to garbage collect.
// Make sure to call it before discarding.
type Closer interface {
    Close()
}
```
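To make the intended flow concrete, here is a minimal usage sketch against the interfaces above (the `runBlock` helper and its per-tx rollback policy are illustrative assumptions, not part of the proposal):

```go
// runBlock applies a block of writes inside one write transaction,
// wrapping each tx in a subtransaction so a bad tx can be rolled back
// without discarding the rest of the block (like basecoin's AppTx cache).
func runBlock(db DB, txs []Model) (appHash []byte, err error) {
    t, err := db.Begin() // errors if another write transaction is open
    if err != nil {
        return nil, err
    }

    for _, tx := range txs {
        sub := t.Subtransaction()
        if err := sub.Set(tx); err != nil {
            sub.Rollback() // discard only this tx's changes
            continue
        }
        if err := sub.Commit(); err != nil {
            return nil, err
        }
    }

    if err := t.Commit(); err != nil { // persists atomically
        return nil, err
    }

    snap := db.Snapshot() // proofs come from the new committed state
    defer snap.Close()    // release the reference so it can be garbage collected
    return snap.Hash(), nil
}
```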
### Remote Interface

The use-case of allowing out-of-process calls is very powerful - not just to provide a powerful merkle-ready data store to non-go applications.

If we allow the ABCI app to maintain the only writable connections, we can guarantee that all transactions are only processed through the tendermint consensus engine. We could then allow multiple "web server" machines "read-only" access and scale out the database reads, assuming the consensus engine, ABCI logic, and public key cryptography are more of a bottleneck than the database. We could even place the consensus engine, ABCI app, and data store on one machine, connected with unix sockets for security, and expose a tcp/ssl interface for reading the data, to scale out query processing over multiple machines.

But let us return our focus directly to the ABCI app (which is the most important use case). An app may well want to maintain 100 or 1000 snapshots of different heights to allow people to easily query many proofs at a given height without race conditions (very important for IBC, ask Jae). Thus, we should not require a separate TCP connection for each height, as this gets quite awkward with so many connections. Also, if we want to use gRPC, we should consider the connections potentially transient (although they are more efficient with keep-alive).

Thus, the wire encoding of a transaction or a snapshot should simply return a unique id. All methods on a `Prover` or `Transaction` over the wire can send this id along with the arguments for the method call. And we just need a hash map on the server to map this id to a state.

The only negative of not requiring a persistent tcp connection for each snapshot is that there is no auto-detection if the client crashes without explicitly closing the connections. Thus, I would suggest adding a `Ping` thread in the gRPC interface which keeps the Snapshot alive. If no ping is received within a server-defined time, it may automatically close those transactions. And if we consider a client with 500 snapshots that needs to ping each every 10 seconds, that is a lot of overhead, so we should design the ping to accept a list of IDs for the client and update them all. Or associate all snapshots with a clientID and then just send the clientID in the ping. (Please add other ideas on how to detect client crashes without persistent connections.)
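A rough sketch of that batched keep-alive from the client side, assuming the standard `context`, `time`, and `log` imports (the `ping` function and the id bookkeeping are hypothetical, since the gRPC service is not yet specified):

```go
// keepAlive pings the server on a fixed interval, sending every live
// snapshot/transaction id in one message instead of one ping per id.
func keepAlive(ctx context.Context, ping func(ids []uint64) error, liveIDs func() []uint64) {
    ticker := time.NewTicker(10 * time.Second)
    defer ticker.Stop()
    for {
        select {
        case <-ctx.Done():
            return
        case <-ticker.C:
            if err := ping(liveIDs()); err != nil {
                log.Println("keep-alive failed:", err) // the server may now expire our snapshots
            }
        }
    }
}
```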
To encourage adoption, we should provide a nice client that uses this gRPC interface (like we do with ABCI). For go, the client may have the exact same interface as the in-process version, just that the error call may return network errors, not just illegal operations. We should also add a client with a clean API for Java, since that seems to be popular among app developers in the current tendermint community. Other bindings can follow as we see the need in the server space.
### Persistence

Any data store worth its name should not lose all data on a crash. Even [redis provides some persistence](https://redis.io/topics/persistence) these days. Ideally, if the system crashes and restarts, it should have the data at the last block N that was committed. If the system crashes during the commit of block N+1, then the recovered state should be either block N or the completely committed block N+1, but no partial state between the two. Basically, the commit must be an atomic operation (even if updating 100's of records).

To avoid a lot of headaches ourselves, we can use an existing data store, such as leveldb, which provides `WriteBatch` to group all operations.
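For illustration, a minimal sketch of an atomic block commit using goleveldb's `WriteBatch` (the keys and the recorded height are placeholders):

```go
package main

import (
    "log"

    "github.com/syndtr/goleveldb/leveldb"
)

func main() {
    db, err := leveldb.OpenFile("merkle-demo.db", nil)
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Stage every write for the block, plus a note of which height it is.
    batch := new(leveldb.Batch)
    batch.Put([]byte("account/alice"), []byte("100"))
    batch.Put([]byte("account/bob"), []byte("42"))
    batch.Put([]byte("meta/height"), []byte("1024"))

    // Write applies the whole batch atomically: after a crash we recover
    // either all of block 1024 or none of it, never a partial commit.
    if err := db.Write(batch, nil); err != nil {
        log.Fatal(err)
    }
}
```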
The other issue is cleaning up old state. We cannot delete any information from our persistent store as long as any snapshot holds a reference to it (or else we get some panics when the data we query is not there). So, we need to store the outstanding deletions that we can perform when the snapshot is `Close`d. In addition, we must consider the case that the data store crashes with open snapshots. Thus, the info on outstanding deletions must also be persisted somewhere. Something like a "delete-behind log" (the opposite of a "write-ahead log").

This is not a concern of the generic interface, but each implementation should take care to handle this well to avoid accumulation of unused references in the data store and eventual data bloat.
#### Backing stores

It is way outside the scope of this project to build our own database that is capable of efficiently storing the data, providing multiple read-only snapshots at once, and saving it atomically. The best approach seems to be to select an existing database (ideally a simple one) that provides this functionality and build upon it, much like the current `go-merkle` implementation builds upon `leveldb`. After some research, here are the winners and losers:

**Winners**

* Leveldb - [provides consistent snapshots](https://ayende.com/blog/161705/reviewing-leveldb-part-xiii-smile-and-here-is-your-snapshot), and [provides tooling for building ACID compliance](http://codeofrob.com/entries/writing-a-transaction-manager-on-top-of-leveldb.html)
  * Note there are at least two solid implementations available in go - [goleveldb](https://github.com/syndtr/goleveldb) - a pure go implementation, and [levigo](https://github.com/jmhodges/levigo) - a go wrapper around leveldb.
  * Goleveldb is much easier to compile and cross-compile (not requiring cgo), while levigo (or cleveldb) seems to provide a significant performance boost (but I had trouble even running benchmarks)
* PostgreSQL - fully supports these ACID semantics if you call `SET TRANSACTION ISOLATION LEVEL SERIALIZABLE` at the beginning of a transaction (tested)
  * This may be total overkill unless we also want to make use of other features, like storing data in multiple columns with secondary indexes.
  * Trillian can show an example of [how to store a merkle tree in sql](https://github.com/google/trillian/blob/master/storage/mysql/tree_storage.go)

**Losers**

* Bolt - open [read-only snapshots can block writing](https://github.com/boltdb/bolt/issues/378)
* Mongo - [barely even supports atomic operations](https://docs.mongodb.com/manual/core/write-operations-atomicity/), much less multiple snapshots

**To investigate**

* [Trillian](https://github.com/google/trillian) - has a [persistent merkle tree interface](https://github.com/google/trillian/blob/master/storage/tree_storage.go) along with [backend storage with mysql](https://github.com/google/trillian/blob/master/storage/mysql/tree_storage.go), good inspiration for our design if not directly using it
* [Moss](https://github.com/couchbase/moss) - another key-value store in go, seems similar to leveldb, maybe compare with performance tests?
### Security

When allowing access out-of-process, we should provide different mechanisms to secure it. The first is the choice of binding to a local unix socket or a tcp port. The second is the optional use of ssl to encrypt the connection (very important over tcp). The third is authentication to control access to the database.

We may also want to consider the case of two server connections with different permissions, eg. a local unix socket that allows write access with no further credentials, and a public TCP connection with ssl and authentication that only provides read-only access.

The use of ssl is quite easy in go; we just need to generate and sign a certificate. So it is nice to be able to disable it for dev machines, but it is very important for production.

For authentication, let me sketch out a minimal solution. The server could just have a simple config file with key/bcrypt(password) pairs along with a read/write permission level, and read that upon startup. The client must provide a username and password in the HTTP headers when making the original HTTPS gRPC connection.
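A minimal sketch of the server-side check under that scheme (the type and function names are hypothetical; `bcrypt.CompareHashAndPassword` is the existing API from golang.org/x/crypto/bcrypt):

```go
import "golang.org/x/crypto/bcrypt"

// Permission is the access level granted to an authenticated user.
type Permission int

const (
    ReadOnly Permission = iota
    ReadWrite
)

// User mirrors one entry of the proposed config file:
// a name, a bcrypt hash of the password, and a permission level.
type User struct {
    Name         string
    PasswordHash []byte // bcrypt(password), never the plaintext
    Perm         Permission
}

// authenticate checks the credentials from the HTTP headers against
// the users loaded at startup, returning the granted permission.
func authenticate(users map[string]User, name, password string) (Permission, bool) {
    u, ok := users[name]
    if !ok {
        return ReadOnly, false
    }
    if bcrypt.CompareHashAndPassword(u.PasswordHash, []byte(password)) != nil {
        return ReadOnly, false
    }
    return u.Perm, true
}
```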
This is super minimal but provides some protection. Things like LDAP, OAuth, and single sign-on seem overkill and even potential security holes. Maybe there is another solution somewhere in the middle.
@@ -1,17 +0,0 @@
# Merkle data stores

To allow the efficient creation of an ABCI app, tendermint wishes to provide a reference implementation of a key-value store that provides merkle proofs of the data. These proofs then quickly allow the ABCI app to provide an app hash to the consensus engine, as well as a full proof to any client.

This engine is currently implemented in `go-merkle` with `merkleeyes` providing a language-agnostic binding via ABCI. It uses `tmlibs/db` bindings internally to persist data to leveldb.

What are some of the requirements of this store:

* It must support efficient key-value operations (get/set/delete)
* It must support persistence.
* We must only persist complete blocks, so when we come up after a crash we are at the state of block N or N+1, but not in-between these two states.
* It must allow us to read/write from one uncommitted state (working state), while serving other queries from the last committed state, with a way to determine which one to serve for each client.
* It must allow us to hold references to old state, to allow providing proofs from 20 blocks ago. We can define some limits as to the maximum time to hold this data.
* We provide an in-process binding in Go.
* We provide language-agnostic bindings when running the data store as its own process.
(three binary images, sizes unchanged: 43 KiB, 103 KiB, 2.4 MiB)
docs/conf.py
@@ -16,9 +16,10 @@
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
import urllib

import sphinx_rtd_theme
@@ -169,3 +170,30 @@ texinfo_documents = [
author, 'Tendermint', 'Byzantine Fault Tolerant Consensus.',
'Database'),
]

repo = "https://raw.githubusercontent.com/tendermint/tools/"
branch = "master"

tools = "./tools"
assets = tools + "/assets"

if os.path.isdir(tools) != True:
    os.mkdir(tools)
if os.path.isdir(assets) != True:
    os.mkdir(assets)

urllib.urlretrieve(repo+branch+'/ansible/README.rst', filename=tools+'/ansible.rst')
urllib.urlretrieve(repo+branch+'/ansible/assets/a_plus_t.png', filename=assets+'/a_plus_t.png')

urllib.urlretrieve(repo+branch+'/docker/README.rst', filename=tools+'/docker.rst')

urllib.urlretrieve(repo+branch+'/mintnet-kubernetes/README.rst', filename=tools+'/mintnet-kubernetes.rst')
urllib.urlretrieve(repo+branch+'/mintnet-kubernetes/assets/gce1.png', filename=assets+'/gce1.png')
urllib.urlretrieve(repo+branch+'/mintnet-kubernetes/assets/gce2.png', filename=assets+'/gce2.png')
urllib.urlretrieve(repo+branch+'/mintnet-kubernetes/assets/statefulset.png', filename=assets+'/statefulset.png')
urllib.urlretrieve(repo+branch+'/mintnet-kubernetes/assets/t_plus_k.png', filename=assets+'/t_plus_k.png')

urllib.urlretrieve(repo+branch+'/terraform-digitalocean/README.rst', filename=tools+'/terraform-digitalocean.rst')
urllib.urlretrieve(repo+branch+'/tm-bench/README.rst', filename=tools+'/benchmarking-and-monitoring.rst')
# the readme for below is included in tm-bench
# urllib.urlretrieve('https://raw.githubusercontent.com/tendermint/tools/master/tm-monitor/README.rst', filename='tools/tm-monitor.rst')
@@ -13,7 +13,7 @@ It's relatively easy to setup a Tendermint cluster manually. The only
requirements for a particular Tendermint node are a private key for the
validator, stored as ``priv_validator.json``, and a list of the public
keys of all validators, stored as ``genesis.json``. These files should
be stored in ``~/.tendermint``, or wherever the ``$TMROOT`` variable
be stored in ``~/.tendermint``, or wherever the ``$TMHOME`` variable
might be set to.

Here are the steps to setting up a testnet manually:
@@ -40,8 +40,51 @@ Automated Deployments
---------------------

While the manual deployment is easy enough, an automated deployment is
usually quicker. The below examples show different tools that can be used
for automated deployments.

Automated Deployment using Kubernetes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The `mintnet-kubernetes tool <https://github.com/tendermint/tools/tree/master/mintnet-kubernetes>`__
allows automating the deployment of a Tendermint network on an already
provisioned kubernetes cluster. For simple provisioning of a kubernetes
cluster, check out the `Google Cloud Platform <https://cloud.google.com/>`__.

Automated Deployment using Terraform and Ansible
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The `terraform-digitalocean tool <https://github.com/tendermint/tools/tree/master/terraform-digitalocean>`__
allows creating a set of servers on the DigitalOcean cloud.

The `ansible playbooks <https://github.com/tendermint/tools/tree/master/ansible>`__
allow creating and managing a ``basecoin`` or ``ethermint`` testnet on provisioned servers.

Package Deployment on Linux for developers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The ``tendermint`` and ``basecoin`` applications can be installed from RPM or DEB packages on
Linux machines for development purposes. The packages are configured to be validators on the
one-node network that the machine represents. The services are not started after installation,
giving you an opportunity to reconfigure the applications before starting them.

The Ansible playbooks in the previous section use this repository to install ``basecoin``.
After installation, additional steps are executed to make sure that the multi-node testnet has
the right configuration before starting.

Install from the CentOS/RedHat repository:

::

    rpm --import https://tendermint-packages.interblock.io/centos/7/os/x86_64/RPM-GPG-KEY-Tendermint
    wget -O /etc/yum.repos.d/tendermint.repo https://tendermint-packages.interblock.io/centos/7/os/x86_64/tendermint.repo
    yum install basecoin

Install from the Debian/Ubuntu repository:

::

    wget -O - https://tendermint-packages.interblock.io/centos/7/os/x86_64/RPM-GPG-KEY-Tendermint | apt-key add -
    wget -O /etc/apt/sources.list.d/tendermint.list https://tendermint-packages.interblock.io/debian/tendermint.list
    apt-get update && apt-get install basecoin
docs/ecosystem.rst
@@ -0,0 +1,122 @@
|
||||
Tendermint Ecosystem
|
||||
====================
|
||||
|
||||
Below are the many applications built using various pieces of the Tendermint stack. We thank the community for their contributions thus far and welcome the addition of new projects. Feel free to submit a pull request to add your project!
|
||||
|
||||
ABCI Applications
|
||||
-----------------
|
||||
|
||||
Burrow
|
||||
^^^^^^
|
||||
|
||||
Ethereum Virtual Machine augmented with native permissioning scheme and global key-value store, written in Go, authored by Monax Industries, and incubated `by Hyperledger <https://github.com/hyperledger/burrow>`__.
|
||||
|
||||
cb-ledger
|
||||
^^^^^^^^^
|
||||
|
||||
Custodian Bank Ledger, integrating central banking with the blockchains of tomorrow, written in C++, and `authored by Block Finance <https://github.com/block-finance/cpp-abci>`__.
|
||||
|
||||
Clearchain
|
||||
^^^^^^^^^^
|
||||
|
||||
Application to manage a distributed ledger for money transfers that support multi-currency accounts, written in Go, and `authored by Allession Treglia <https://github.com/tendermint/clearchain>`__.
|
||||
|
||||
Comit
|
||||
^^^^^
|
||||
|
||||
Public service reporting and tracking, written in Go, and `authored by Zach Balder <https://github.com/zbo14/comit>`__.
|
||||
|
||||
Cosmos SDK
|
||||
^^^^^^^^^^
|
||||
|
||||
A prototypical account based crypto currency state machine supporting plugins, written in Go, and `authored by Cosmos <https://github.com/cosmos/cosmos-sdk>`__.
|
||||
|
||||
Ethermint
|
||||
^^^^^^^^^
|
||||
|
||||
The go-ethereum state machine run as a ABCI app, written in Go, `authored by Tendermint <https://github.com/tendermint/ethermint>`__.
|
||||
|
||||
IAVL
|
||||
^^^^
|
||||
|
||||
Immutable AVL+ tree with Merkle proofs, Written in Go, `authored by Tendermint <https://github.com/tendermint/iavl>`__.
|
||||
|
||||
Lotion
|
||||
^^^^^^
|
||||
|
||||
A Javascript microframework for building blockchain applications with Tendermint, written in Javascript, `authored by Judd Keppel of Tendermint <https://github.com/keppel/lotion>`__. See also `lotion-chat <https://github.com/keppel/lotion-chat>`__ and `lotion-coin <https://github.com/keppel/lotion-coin>`__ apps written using Lotion.
|
||||
|
||||
MerkleTree
|
||||
^^^^^^^^^^
|
||||
|
||||
Immutable AVL+ tree with Merkle proofs, Written in Java, `authored by jTendermint <https://github.com/jTendermint/MerkleTree>`__.
|
||||
|
||||
Passchain
|
||||
^^^^^^^^^
|
||||
|
||||
Passchain is a tool to securely store and share passwords, tokens and other short secrets, `authored by trusch <https://github.com/trusch/passchain>`__.
|
||||
|
||||
Passwerk
|
||||
^^^^^^^^
|
||||
|
||||
Encrypted storage web-utility backed by Tendermint, written in Go, `authored by Rigel Rozanski <https://github.com/rigelrozanski/passwerk>`__.
|
||||
|
||||
Py-Tendermint
|
||||
^^^^^^^^^^^^^
|
||||
|
||||
A Python microframework for building blockchain applications with Tendermint, written in Python, `authored by Dave Bryson <https://github.com/davebryson/py-tendermint>`__.
|
||||
|
||||
Stratumn
|
||||
^^^^^^^^
|
||||
|
||||
SDK for "Proof-of-Process" networks, written in Go, `authored by the Stratumn team <https://github.com/stratumn/sdk>`__.
|
||||
|
||||
TMChat
|
||||
^^^^^^
|
||||
|
||||
P2P chat using Tendermint, written in Java, `authored by wolfposd <https://github.com/wolfposd/TMChat>`__.
|
||||
|
||||
|
||||
ABCI Servers
|
||||
------------
|
||||
|
||||
+------------------------------------------------------------------+--------------------+--------------+
|
||||
| **Name** | **Author** | **Language** |
|
||||
| | | |
|
||||
+------------------------------------------------------------------+--------------------+--------------+
|
||||
| `abci <https://github.com/tendermint/abci>`__ | Tendermint | Go |
|
||||
+------------------------------------------------------------------+--------------------+--------------+
|
||||
| `js abci <https://github.com/tendermint/js-abci>`__ | Tendermint | Javascript |
|
||||
+------------------------------------------------------------------+--------------------+--------------+
|
||||
| `cpp-tmsp <https://github.com/block-finance/cpp-abci>`__ | Martin Dyring | C++ |
|
||||
+------------------------------------------------------------------+--------------------+--------------+
|
||||
| `c-abci <https://github.com/chainx-org/c-abci>`__ | ChainX | C |
|
||||
+------------------------------------------------------------------+--------------------+--------------+
|
||||
| `jabci <https://github.com/jTendermint/jabci>`__ | jTendermint | Java |
|
||||
+------------------------------------------------------------------+--------------------+--------------+
|
||||
| `ocaml-tmsp <https://github.com/zbo14/ocaml-tmsp>`__ | Zach Balder | Ocaml |
|
||||
+------------------------------------------------------------------+--------------------+--------------+
|
||||
| `abci_server <https://github.com/KrzysiekJ/abci_server>`__ | Krzysztof Jurewicz | Erlang |
|
||||
+------------------------------------------------------------------+--------------------+--------------+
|
||||
| `rust-tsp <https://github.com/tendermint/rust-tsp>`__ | Adrian Brink | Rust |
|
||||
+------------------------------------------------------------------+--------------------+--------------+
|
||||
| `hs-abci <https://github.com/albertov/hs-abci>`__ | Alberto Gonzalez | Haskell |
|
||||
+------------------------------------------------------------------+--------------------+--------------+
|
||||
| `haskell-abci <https://github.com/cwgoes/haskell-abci>`__ | Christoper Goes | Haskell |
|
||||
+------------------------------------------------------------------+--------------------+--------------+
|
||||
| `Spearmint <https://github.com/dennismckinnon/spearmint>`__ | Dennis Mckinnon | Javascript |
|
||||
+------------------------------------------------------------------+--------------------+--------------+
|
||||
| `py-abci <https://github.com/davebryson/py-abci>`__ | Dave Bryson | Python |
|
||||
+------------------------------------------------------------------+--------------------+--------------+

Deployment Tools
----------------

See `deploy testnets <./deploy-testnets.html>`__ for information about all the tools built by Tendermint. We have Kubernetes, Ansible, and Terraform integrations.

Cloudsoft built `brooklyn-tendermint <https://github.com/cloudsoft/brooklyn-tendermint>`__ for deploying a Tendermint testnet in Docker containers. It uses Clocker for Apache Brooklyn.

Dev Tools
---------

For upgrading from older to newer versions of Tendermint and migrating your chain data, see `tm-migrator <https://github.com/hxzqlh/tm-tools>`__ written by @hxzqlh.

@@ -5,7 +5,7 @@ As a general purpose blockchain engine, Tendermint is agnostic to the

application you want to run. So, to run a complete blockchain that does
something useful, you must start two programs: one is Tendermint Core,
the other is your application, which can be written in any programming
language. Recall from `the intro to ABCI <introduction.rst#ABCI-Overview>`__ that
language. Recall from `the intro to ABCI <introduction.html#ABCI-Overview>`__ that
Tendermint Core handles all the p2p and consensus stuff, and just
forwards transactions to the application when they need to be validated,
or when they're ready to be committed to a block.
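
To make this split concrete, here is a minimal sketch of the request flow in plain Python. It is not the real ABCI wire protocol, and the class and method names are illustrative only; the driver loop at the bottom stands in for Tendermint Core.

::

    # Conceptual sketch of the Tendermint Core <-> application split.
    # NOT the ABCI wire protocol; all names here are illustrative.

    class KVStoreApp:
        def __init__(self):
            self.state = {}

        def check_tx(self, tx: bytes) -> int:
            # Mempool validation: return 0 for "valid, OK to gossip".
            return 0 if tx else 1

        def deliver_tx(self, tx: bytes) -> int:
            # Execute a transaction from a committed block. We assume
            # the dummy app's convention: "key=value", or key == value.
            key, _, value = tx.partition(b"=")
            self.state[key] = value or key
            return 0

        def commit(self) -> bytes:
            # Return a deterministic fingerprint of the current state.
            return str(sorted(self.state.items())).encode()

    # Stand-in for Tendermint Core: it orders transactions into a
    # block, then forwards them for validation and execution.
    app = KVStoreApp()
    block = [b"abcd", b"name=satoshi"]
    for tx in block:
        if app.check_tx(tx) == 0:
            app.deliver_tx(tx)
    print(app.commit())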

@@ -25,7 +25,7 @@ Then run

::

    go get -u github.com/tendermint/abci/cmd/...
    go get -u github.com/tendermint/abci/cmd/abci-cli

If there is an error, install and run the ``glide`` tool to pin the
dependencies:

@@ -35,20 +35,12 @@ dependencies:

    go get github.com/Masterminds/glide
    cd $GOPATH/src/github.com/tendermint/abci
    glide install
    go install ./cmd/...
    go install ./cmd/abci-cli

Now you should have the ``abci-cli`` plus two apps installed:

::

    dummy --help
    counter --help

These binaries are installed on ``$GOPATH/bin`` and all come from within
the ``./cmd/...`` directory of the abci repository.

Both of these example applications are in Go. See below for an
application written in Javascript.
Now you should have the ``abci-cli`` installed; you'll see
a couple of commands (``counter`` and ``dummy``) that are
example applications written in Go. See below for an application
written in Javascript.

Now, let's run some apps!

@@ -66,14 +58,14 @@ Let's start a dummy application.

::

    dummy
    abci-cli dummy

In another terminal, we can start Tendermint. If you have never run
Tendermint before, use:

::

    tendermint init
    tendermint node

If you have used Tendermint, you may want to reset the data for a new

@@ -107,31 +99,56 @@ like:

::

    {"jsonrpc":"2.0","id":"","result":[98,{"check_tx":{},"deliver_tx":{}}],"error":""}

The ``98`` is a type-byte, and can be ignored (it's useful for
serializing and deserializing arbitrary json). Otherwise, this result is
empty - there's nothing to report on and everything is OK.

::

    {
      "jsonrpc": "2.0",
      "id": "",
      "result": {
        "check_tx": {
          "code": 0,
          "data": "",
          "log": ""
        },
        "deliver_tx": {
          "code": 0,
          "data": "",
          "log": ""
        },
        "hash": "2B8EC32BA2579B3B8606E42C06DE2F7AFA2556EF",
        "height": 154
      }
    }

We can confirm that our transaction worked and the value got stored by
querying the app:

::

    curl -s 'localhost:46657/abci_query?data="abcd"&path=""&prove=false'
    curl -s 'localhost:46657/abci_query?data="abcd"'

The ``path`` and ``prove`` arguments can be ignored for now, and in a
future release can be left out. The result should look like:
The result should look like:

::

    {"jsonrpc":"2.0","id":"","result":[112,{"response":{"value":"61626364","log":"exists"}}],"error":""}
    {
      "jsonrpc": "2.0",
      "id": "",
      "result": {
        "response": {
          "code": 0,
          "index": 0,
          "key": "",
          "value": "61626364",
          "proof": "",
          "height": 0,
          "log": "exists"
        }
      }
    }

Again, the ``112`` is the type-byte. Note the ``value`` in the result
(``61626364``); this is the hex-encoding of the ASCII of ``abcd``. You
can verify this in a python shell by running
``"61626364".decode('hex')``. Stay tuned for a future release that makes
this output more human-readable ;).
Note the ``value`` in the result (``61626364``); this is the
hex-encoding of the ASCII of ``abcd``. You can verify this in
a python shell by running ``"61626364".decode('hex')``. Stay
tuned for a future release that `makes this output more human-readable <https://github.com/tendermint/abci/issues/32>`__.
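
Note that ``"61626364".decode('hex')`` is Python 2 syntax. In a Python 3 shell, the equivalent is:

::

    >>> bytes.fromhex("61626364").decode("ascii")
    'abcd'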

Now let's try setting a different key and value:

@@ -144,7 +161,7 @@ Now if we query for ``name``, we should get ``satoshi``, or

::

    curl -s 'localhost:46657/abci_query?data="name"&path=""&prove=false'
    curl -s 'localhost:46657/abci_query?data="name"'

Try some other transactions and queries to make sure everything is
working!
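
If you prefer to script these calls instead of using ``curl``, the same requests can be made from Python. This sketch assumes the third-party ``requests`` package is installed and a local node is listening on the default RPC port; the response layout matches the sample output above.

::

    import requests  # assumed third-party dependency

    BASE = "http://localhost:46657"

    # Send a tx, then query for the key it stored (mirrors the curl
    # commands above; string arguments are passed in quotes).
    r = requests.get(BASE + "/broadcast_tx_commit", params={"tx": '"name=satoshi"'})
    print(r.json()["result"]["deliver_tx"])

    q = requests.get(BASE + "/abci_query", params={"data": '"name"'})
    value = q.json()["result"]["response"]["value"]
    print(bytes.fromhex(value).decode("ascii"))  # -> satoshi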

@@ -153,7 +170,7 @@ Counter - Another Example
-------------------------

Now that we've got the hang of it, let's try another application, the
"counter" app.
**counter** app.

The counter app doesn't use a Merkle tree, it just counts how many times
we've sent a transaction, or committed the state.
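
The ``--serial`` behavior introduced next is easy to picture with a short sketch (plain Python, not the real Go implementation): in serial mode, each transaction must carry the next expected nonce.

::

    # Illustrative sketch of the counter app's --serial rule.
    class CounterApp:
        def __init__(self, serial=True):
            self.serial = serial
            self.tx_count = 0

        def deliver_tx(self, tx: bytes):
            value = int.from_bytes(tx, "big")
            if self.serial and value != self.tx_count:
                # Mirrors the error shown later in this section.
                return 3, "Invalid nonce. Expected %d, got %d" % (self.tx_count, value)
            self.tx_count += 1
            return 0, ""

    app = CounterApp()
    print(app.deliver_tx(b"\x00"))  # (0, '')
    print(app.deliver_tx(b"\x05"))  # (3, 'Invalid nonce. Expected 1, got 5')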

@@ -181,7 +198,7 @@ a flag:

::

    counter --serial
    abci-cli counter --serial

In another window, reset then start Tendermint:

@@ -204,14 +221,48 @@ the number ``1``. If instead, we try to send a ``5``, we get an error:

::

    > curl localhost:46657/broadcast_tx_commit?tx=0x05
    {"jsonrpc":"2.0","id":"","result":[98,{"check_tx":{},"deliver_tx":{"code":3,"log":"Invalid nonce. Expected 1, got 5"}}],"error":""}
    {
      "jsonrpc": "2.0",
      "id": "",
      "result": {
        "check_tx": {
          "code": 0,
          "data": "",
          "log": ""
        },
        "deliver_tx": {
          "code": 3,
          "data": "",
          "log": "Invalid nonce. Expected 1, got 5"
        },
        "hash": "33B93DFF98749B0D6996A70F64071347060DC19C",
        "height": 38
      }
    }

But if we send a ``1``, it works again:

::

    > curl localhost:46657/broadcast_tx_commit?tx=0x01
    {"jsonrpc":"2.0","id":"","result":[98,{"check_tx":{},"deliver_tx":{}}],"error":""}
    {
      "jsonrpc": "2.0",
      "id": "",
      "result": {
        "check_tx": {
          "code": 0,
          "data": "",
          "log": ""
        },
        "deliver_tx": {
          "code": 0,
          "data": "",
          "log": ""
        },
        "hash": "F17854A977F6FA7EEA1BD758E296710B86F72F3D",
        "height": 87
      }
    }

For more details on the ``broadcast_tx`` API, see `the guide on using
Tendermint <./using-tendermint.html>`__.

@@ -231,6 +282,7 @@ keep all our code under the ``$GOPATH``, so run:

    go get github.com/tendermint/js-abci &> /dev/null
    cd $GOPATH/src/github.com/tendermint/js-abci/example
    npm install
    cd ..

Kill the previous ``counter`` and ``tendermint`` processes. Now run the
app:

@@ -261,12 +313,12 @@ Neat, eh?

Basecoin - A More Interesting Example
-------------------------------------

We saved the best for last; the `Cosmos SDK <https://github.com/cosmos/cosmos-sdk>`__ is a general purpose framework for building cryptocurrencies. Unlike the``dummy`` and ``counter``, which are strictly for example purposes. The reference implementation of Cosmos SDK is ``basecoin``, which demonstrates how to use the building blocks of the Cosmos SDK.
We saved the best for last; the `Cosmos SDK <https://github.com/cosmos/cosmos-sdk>`__ is a general purpose framework for building cryptocurrencies, unlike the ``dummy`` and ``counter``, which are strictly for example purposes. The reference implementation of the Cosmos SDK is ``basecoin``, which demonstrates how to use the building blocks of the SDK.

The default ``basecoin`` application is a multi-asset cryptocurrency
that supports inter-blockchain communication. For more details on how
that supports inter-blockchain communication (IBC). For more details on how
basecoin works and how to use it, see our `basecoin
guide <https://github.com/cosmos/cosmos-sdk/blob/develop/docs/guide/basecoin-basics.md>`__
guide <http://cosmos-sdk.readthedocs.io/en/latest/basecoin-basics.html>`__.

In this tutorial you learned how to run applications using Tendermint
on a single node. You saw how applications could be written in different

docs/how-to-read-logs.rst (new file, 165 lines)
@@ -0,0 +1,165 @@

How to read logs
================

Walk through example
--------------------

We first create three connections (mempool, consensus and query) to the
application (locally running dummy in this case).

::

    I[10-04|13:54:27.364] Starting multiAppConn module=proxy impl=multiAppConn
    I[10-04|13:54:27.366] Starting localClient module=abci-client connection=query impl=localClient
    I[10-04|13:54:27.366] Starting localClient module=abci-client connection=mempool impl=localClient
    I[10-04|13:54:27.367] Starting localClient module=abci-client connection=consensus impl=localClient

Then Tendermint Core and the application perform a handshake.

::

    I[10-04|13:54:27.367] ABCI Handshake module=consensus appHeight=90 appHash=E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD
    I[10-04|13:54:27.368] ABCI Replay Blocks module=consensus appHeight=90 storeHeight=90 stateHeight=90
    I[10-04|13:54:27.368] Completed ABCI Handshake - Tendermint and App are synced module=consensus appHeight=90 appHash=E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD

After that, we start a few more things like the event switch and reactors, and
perform UPnP discovery in order to detect the IP address.

::

    I[10-04|13:54:27.374] Starting EventSwitch module=types impl=EventSwitch
    I[10-04|13:54:27.375] This node is a validator module=consensus
    I[10-04|13:54:27.379] Starting Node module=main impl=Node
    I[10-04|13:54:27.381] Local listener module=p2p ip=:: port=46656
    I[10-04|13:54:27.382] Getting UPNP external address module=p2p
    I[10-04|13:54:30.386] Could not perform UPNP discover module=p2p err="write udp4 0.0.0.0:38238->239.255.255.250:1900: i/o timeout"
    I[10-04|13:54:30.386] Starting DefaultListener module=p2p impl=Listener(@10.0.2.15:46656)
    I[10-04|13:54:30.387] Starting P2P Switch module=p2p impl="P2P Switch"
    I[10-04|13:54:30.387] Starting MempoolReactor module=mempool impl=MempoolReactor
    I[10-04|13:54:30.387] Starting BlockchainReactor module=blockchain impl=BlockchainReactor
    I[10-04|13:54:30.387] Starting ConsensusReactor module=consensus impl=ConsensusReactor
    I[10-04|13:54:30.387] ConsensusReactor module=consensus fastSync=false
    I[10-04|13:54:30.387] Starting ConsensusState module=consensus impl=ConsensusState
    I[10-04|13:54:30.387] Starting WAL module=consensus wal=/home/vagrant/.tendermint/data/cs.wal/wal impl=WAL
    I[10-04|13:54:30.388] Starting TimeoutTicker module=consensus impl=TimeoutTicker

Notice the second row where Tendermint Core reports that "This node is a
validator". It could also be just an observer (a regular node).

Next we replay all the messages from the WAL.

::

    I[10-04|13:54:30.390] Catchup by replaying consensus messages module=consensus height=91
    I[10-04|13:54:30.390] Replay: New Step module=consensus height=91 round=0 step=RoundStepNewHeight
    I[10-04|13:54:30.390] Replay: Done module=consensus

The "Started node" message signals that everything is ready for work.

::

    I[10-04|13:54:30.391] Starting RPC HTTP server on tcp socket 0.0.0.0:46657 module=rpc-server
    I[10-04|13:54:30.392] Started node module=main nodeInfo="NodeInfo{pk: PubKeyEd25519{DF22D7C92C91082324A1312F092AA1DA197FA598DBBFB6526E177003C4D6FD66}, moniker: anonymous, network: test-chain-3MNw2N [remote , listen 10.0.2.15:46656], version: 0.11.0-10f361fc ([wire_version=0.6.2 p2p_version=0.5.0 consensus_version=v1/0.2.2 rpc_version=0.7.0/3 tx_index=on rpc_addr=tcp://0.0.0.0:46657])}"

Next follows a standard block creation cycle, where we enter a new round,
propose a block, receive more than 2/3 of prevotes, then precommits and finally
have a chance to commit a block. For details, please refer to `Consensus
Overview <introduction.html#consensus-overview>`__
or `Byzantine Consensus Algorithm <specification.html>`__.

::

    I[10-04|13:54:30.393] enterNewRound(91/0). Current: 91/0/RoundStepNewHeight module=consensus
    I[10-04|13:54:30.393] enterPropose(91/0). Current: 91/0/RoundStepNewRound module=consensus
    I[10-04|13:54:30.393] enterPropose: Our turn to propose module=consensus proposer=125B0E3C5512F5C2B0E1109E31885C4511570C42 privValidator="PrivValidator{125B0E3C5512F5C2B0E1109E31885C4511570C42 LH:90, LR:0, LS:3}"
    I[10-04|13:54:30.394] Signed proposal module=consensus height=91 round=0 proposal="Proposal{91/0 1:21B79872514F (-1,:0:000000000000) {/10EDEDD7C84E.../}}"
    I[10-04|13:54:30.397] Received complete proposal block module=consensus height=91 hash=F671D562C7B9242900A286E1882EE64E5556FE9E
    I[10-04|13:54:30.397] enterPrevote(91/0). Current: 91/0/RoundStepPropose module=consensus
    I[10-04|13:54:30.397] enterPrevote: ProposalBlock is valid module=consensus height=91 round=0
    I[10-04|13:54:30.398] Signed and pushed vote module=consensus height=91 round=0 vote="Vote{0:125B0E3C5512 91/00/1(Prevote) F671D562C7B9 {/89047FFC21D8.../}}" err=null
    I[10-04|13:54:30.401] Added to prevote module=consensus vote="Vote{0:125B0E3C5512 91/00/1(Prevote) F671D562C7B9 {/89047FFC21D8.../}}" prevotes="VoteSet{H:91 R:0 T:1 +2/3:F671D562C7B9242900A286E1882EE64E5556FE9E:1:21B79872514F BA{1:X} map[]}"
    I[10-04|13:54:30.401] enterPrecommit(91/0). Current: 91/0/RoundStepPrevote module=consensus
    I[10-04|13:54:30.401] enterPrecommit: +2/3 prevoted proposal block. Locking module=consensus hash=F671D562C7B9242900A286E1882EE64E5556FE9E
    I[10-04|13:54:30.402] Signed and pushed vote module=consensus height=91 round=0 vote="Vote{0:125B0E3C5512 91/00/2(Precommit) F671D562C7B9 {/80533478E41A.../}}" err=null
    I[10-04|13:54:30.404] Added to precommit module=consensus vote="Vote{0:125B0E3C5512 91/00/2(Precommit) F671D562C7B9 {/80533478E41A.../}}" precommits="VoteSet{H:91 R:0 T:2 +2/3:F671D562C7B9242900A286E1882EE64E5556FE9E:1:21B79872514F BA{1:X} map[]}"
    I[10-04|13:54:30.404] enterCommit(91/0). Current: 91/0/RoundStepPrecommit module=consensus
    I[10-04|13:54:30.405] Finalizing commit of block with 0 txs module=consensus height=91 hash=F671D562C7B9242900A286E1882EE64E5556FE9E root=E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD
    I[10-04|13:54:30.405] Block{
      Header{
        ChainID: test-chain-3MNw2N
        Height: 91
        Time: 2017-10-04 13:54:30.393 +0000 UTC
        NumTxs: 0
        LastBlockID: F15AB8BEF9A6AAB07E457A6E16BC410546AA4DC6:1:D505DA273544
        LastCommit: 56FEF2EFDB8B37E9C6E6D635749DF3169D5F005D
        Data:
        Validators: CE25FBFF2E10C0D51AA1A07C064A96931BC8B297
        App: E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD
      }#F671D562C7B9242900A286E1882EE64E5556FE9E
      Data{

      }#
      Commit{
        BlockID: F15AB8BEF9A6AAB07E457A6E16BC410546AA4DC6:1:D505DA273544
        Precommits: Vote{0:125B0E3C5512 90/00/2(Precommit) F15AB8BEF9A6 {/FE98E2B956F0.../}}
      }#56FEF2EFDB8B37E9C6E6D635749DF3169D5F005D
    }#F671D562C7B9242900A286E1882EE64E5556FE9E module=consensus
    I[10-04|13:54:30.408] Executed block module=state height=91 validTxs=0 invalidTxs=0
    I[10-04|13:54:30.410] Committed state module=state height=91 txs=0 hash=E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD
    I[10-04|13:54:30.410] Recheck txs module=mempool numtxs=0 height=91

List of modules
---------------

Here is a list of the modules you may encounter in Tendermint's log, with a brief
overview of what each one does.

- ``abci-client`` As mentioned in `Application Development Guide
  <app-development.html#abci-design>`__,
  Tendermint acts as an ABCI client with respect to the application and
  maintains 3 connections: mempool, consensus and query. The code used by
  Tendermint Core can be found `here
  <https://github.com/tendermint/abci/tree/master/client>`__.

- ``blockchain``
  Provides storage, a pool (a group of peers), and a reactor for both storing and
  exchanging blocks between peers.

- ``consensus``
  The heart of Tendermint Core, which is the implementation of the consensus
  algorithm. Includes two "submodules": ``wal`` (write-ahead logging) for
  ensuring data integrity and ``replay`` to replay blocks and messages on
  recovery from a crash.

- ``events``
  Simple event notification system. The list of events can be found
  `here
  <https://github.com/tendermint/tendermint/blob/master/types/events.go>`__.
  You can subscribe to them by calling the ``subscribe`` RPC method.
  Refer to the `RPC docs
  <specification/rpc.html>`__
  for additional information.

- ``mempool``
  The mempool module handles all incoming transactions, whether they come
  from peers or from the application.

- ``p2p``
  Provides an abstraction around peer-to-peer communication. For more details,
  please check out the `README
  <https://github.com/tendermint/tendermint/blob/56c60fba2381e4ac41d2ae38a1eb6569acfbc095/p2p/README.md>`__.

- ``rpc``
  `Tendermint's RPC <specification/rpc.html>`__.

- ``rpc-server``
  RPC server. For implementation details, please read the `README <https://github.com/tendermint/tendermint/blob/master/rpc/lib/README.md>`__.

- ``state``
  Represents the latest state and the execution submodule, which executes
  blocks against the application.

- ``types``
  A collection of the publicly exposed types and methods to work with them.

@@ -8,24 +8,44 @@ Welcome to Tendermint!

.. image:: assets/tmint-logo-blue.png
   :height: 500px
   :width: 500px
   :height: 200px
   :width: 200px
   :align: center

Tendermint 101
--------------

.. maxdepth set to 2 for sexiness
.. but use 4 to upgrade overall documentation
.. toctree::
   :maxdepth: 2

   introduction.rst
   install.rst
   getting-started.rst
   deploy-testnets.rst
   using-tendermint.rst

Tendermint Ecosystem
--------------------

.. toctree::
   :maxdepth: 2

   ecosystem.rst

Tendermint Tools
----------------

.. the tools/ files are pulled in from the tools repo
.. see the bottom of conf.py
.. toctree::
   :maxdepth: 2

   deploy-testnets.rst
   tools/ansible.rst
   tools/docker.rst
   tools/mintnet-kubernetes.rst
   tools/terraform-digitalocean.rst
   tools/benchmarking-and-monitoring.rst

Tendermint 102
--------------

@@ -35,6 +55,7 @@ Tendermint 102

   abci-cli.rst
   app-architecture.rst
   app-development.rst
   how-to-read-logs.rst

Tendermint 201
--------------
@@ -1,17 +1,24 @@

Install from Source
===================
Install Tendermint
==================

This page provides instructions on installing Tendermint from source. To
download pre-built binaries, see the `Download page <https://tendermint.com/download>`__.
From Binary
-----------

To download pre-built binaries, see the `Download page <https://tendermint.com/download>`__.

From Source
-----------

You'll need ``go``, maybe ``glide``, and the tendermint source code.

Install Go
----------
^^^^^^^^^^

Make sure you have `installed Go <https://golang.org/doc/install>`__ and
set the ``GOPATH``.
set the ``GOPATH``. You should also put ``GOPATH/bin`` on your ``PATH``.

Install Tendermint
------------------
Get Source Code
^^^^^^^^^^^^^^^

You should be able to install the latest with a simple

@@ -19,13 +26,14 @@ You should be able to install the latest with a simple

::

    go get github.com/tendermint/tendermint/cmd/tendermint

Run ``tendermint --help`` for more.
Run ``tendermint --help`` and ``tendermint version`` to ensure your
installation worked.

If the installation failed, a dependency may been updated and become
If the installation failed, a dependency may have been updated and become
incompatible with the latest Tendermint master branch. We solve this
using the ``glide`` tool for dependency management.

Fist, install ``glide``:
First, install ``glide``:

::

@@ -45,7 +53,7 @@ still cloned to the correct location in the ``$GOPATH``.

The latest Tendermint Core version is now installed.

Reinstall
~~~~~~~~~
---------

If you already have Tendermint installed, and you make updates, simply

@@ -79,7 +87,7 @@ Since the third option just uses ``glide`` right away, it should always
work.

Troubleshooting
~~~~~~~~~~~~~~~
---------------

If ``go get`` failing bothers you, fetch the code using ``git``:

@@ -92,7 +100,7 @@ If ``go get`` failing bothers you, fetch the code using ``git``:

    go install ./cmd/tendermint

Run
~~~
^^^

To start a one-node blockchain with a simple in-process application:

@@ -86,10 +86,10 @@ And we plan to do the same for Bitcoin, ZCash, and various other deterministic a

Another example of a cryptocurrency application built on Tendermint is `the Cosmos network <http://cosmos.network>`__.

Fabric, Burrow
~~~~~~~~~~~~~~
Other Blockchain Projects
~~~~~~~~~~~~~~~~~~~~~~~~~

`Fabric <https://github.com/hyperledger/fabric>`__, takes a similar approach to Tendermint, but is more opinionated about how the state is managed,
`Fabric <https://github.com/hyperledger/fabric>`__ takes a similar approach to Tendermint, but is more opinionated about how the state is managed,
and requires that all application behaviour runs in potentially many docker containers, modules it calls "chaincode".
It uses an implementation of `PBFT <http://pmg.csail.mit.edu/papers/osdi99.pdf>`__.
from a team at IBM that is

@@ -146,17 +146,17 @@ The ABCI consists of 3 primary message types that get delivered from the core to

The messages are specified here: `ABCI Message Types <https://github.com/tendermint/abci#message-types>`__.

The `DeliverTx` message is the work horse of the application. Each transaction in the blockchain is delivered with this message. The application needs to validate each transaction received with the `DeliverTx` message against the current state, application protocol, and the cryptographic credentials of the transaction. A validated transaction then needs to update the application state — by binding a value into a key values store, or by updating the UTXO database, for instance.
The **DeliverTx** message is the work horse of the application. Each transaction in the blockchain is delivered with this message. The application needs to validate each transaction received with the **DeliverTx** message against the current state, application protocol, and the cryptographic credentials of the transaction. A validated transaction then needs to update the application state — by binding a value into a key-value store, or by updating the UTXO database, for instance.

The `CheckTx` message is similar to `DeliverTx`, but it's only for validating transactions. Tendermint Core's mempool first checks the validity of a transaction with `CheckTx`, and only relays valid transactions to its peers. For instance, an application may check an incrementing sequence number in the transaction and return an error upon `CheckTx` if the sequence number is old. Alternatively, they might use a capabilities based system that requires capabilities to be renewed with every transaction.
The **CheckTx** message is similar to **DeliverTx**, but it's only for validating transactions. Tendermint Core's mempool first checks the validity of a transaction with **CheckTx**, and only relays valid transactions to its peers. For instance, an application may check an incrementing sequence number in the transaction and return an error upon **CheckTx** if the sequence number is old. Alternatively, they might use a capabilities based system that requires capabilities to be renewed with every transaction.

The `Commit` message is used to compute a cryptographic commitment to the current application state, to be placed into the next block header. This has some handy properties. Inconsistencies in updating that state will now appear as blockchain forks which catches a whole class of programming errors. This also simplifies the development of secure lightweight clients, as Merkle-hash proofs can be verified by checking against the block hash, and that the block hash is signed by a quorum.
The **Commit** message is used to compute a cryptographic commitment to the current application state, to be placed into the next block header. This has some handy properties. Inconsistencies in updating that state will now appear as blockchain forks which catches a whole class of programming errors. This also simplifies the development of secure lightweight clients, as Merkle-hash proofs can be verified by checking against the block hash, and that the block hash is signed by a quorum.

There can be multiple ABCI socket connections to an application. Tendermint Core creates three ABCI connections to the application; one for the validation of transactions when broadcasting in the mempool, one for the consensus engine to run block proposals, and one more for querying the application state.
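
One practical consequence of the separate mempool and consensus connections is that an application typically keeps a scratch view of state for ``CheckTx`` that is distinct from the committed state updated by ``DeliverTx``. A sketch of that pattern in Python follows; the method names mirror the ABCI messages, but the structure is illustrative, not a prescribed API.

::

    import copy

    class App:
        def __init__(self):
            self.committed = {}     # updated by the consensus connection
            self.mempool_view = {}  # scratch state for the mempool connection

        def check_tx(self, key, value):
            # Validate against pending state without touching the
            # committed state; e.g. reject a key already in flight.
            if key in self.mempool_view:
                return 1
            self.mempool_view[key] = value
            return 0

        def deliver_tx(self, key, value):
            self.committed[key] = value
            return 0

        def commit(self):
            # After a block commits, rebase the mempool's view on the
            # new committed state before remaining txs are re-checked.
            self.mempool_view = copy.deepcopy(self.committed)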

It's probably evident that application designers need to design their message handlers very carefully to create a blockchain that does anything useful, but this architecture provides a place to start. The diagram below illustrates the flow of messages via ABCI.

.. figure:: images/abci.png
.. figure:: assets/abci.png

A Note on Determinism
~~~~~~~~~~~~~~~~~~~~~

@@ -169,7 +169,7 @@ Solidity on Ethereum is a great language of choice for blockchain applications b

* race conditions on threads (or avoiding threads altogether)
* the system clock
* uninitialized memory (in unsafe programming languages like C or C++)
* `floating point arithmetic <http://gafferongames.com/networking-for-game-programmers/floating-point-determinism/>`__.
* `floating point arithmetic <http://gafferongames.com/networking-for-game-programmers/floating-point-determinism/>`__
* language features that are random (e.g. map iteration in Go)
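
The same pitfall is easy to reproduce in Python, where the iteration order of a ``set`` of strings depends on a per-process hash seed, so any state derived from it could diverge across validators:

::

    # With hash randomization (the Python 3 default), this ordering
    # can differ from one interpreter run to the next.
    validators = {"alice", "bob", "carol"}
    print(list(validators))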

While programmers can avoid non-determinism by being careful, it is also possible to create a special linter or static analyzer for each language to check for determinism. In the future we may work with partners to create such tools.

@@ -180,19 +180,19 @@ Consensus Overview

Tendermint is an easy-to-understand, mostly asynchronous, BFT consensus protocol.
The protocol follows a simple state machine that looks like this:

.. figure:: images/consensus_logic.png
.. figure:: assets/consensus_logic.png

Participants in the protocol are called "validators";
Participants in the protocol are called **validators**;
they take turns proposing blocks of transactions and voting on them.
Blocks are committed in a chain, with one block at each "height".
A block may fail to be committed, in which case the protocol moves to the next "round",
Blocks are committed in a chain, with one block at each **height**.
A block may fail to be committed, in which case the protocol moves to the next **round**,
and a new validator gets to propose a block for that height.
Two stages of voting are required to successfully commit a block;
we call them "pre-vote" and "pre-commit".
we call them **pre-vote** and **pre-commit**.
A block is committed when more than 2/3 of validators pre-commit for the same block in the same round.
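
To make the two-thirds rule concrete, here is a small worked example with made-up voting powers; a block commits only when precommits account for strictly more than 2/3 of the total power.

::

    powers = {"v1": 10, "v2": 10, "v3": 10}   # total power = 30
    precommits = {"v1", "v2"}                  # 20 of 30 precommitted

    total = sum(powers.values())
    voted = sum(powers[v] for v in precommits)
    # Integer comparison avoids float rounding: voted > (2/3) * total.
    print(3 * voted > 2 * total)  # False; v3's precommit would commit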

There is a picture of a couple doing the polka because validators are doing something like a polka dance.
When more than two-thirds of the validators pre-vote for the same block, we call that a "polka".
When more than two-thirds of the validators pre-vote for the same block, we call that a **polka**.
Every pre-commit must be justified by a polka in the same round.

Validators may fail to commit a block for a number of reasons;

@@ -204,8 +204,8 @@ However, the rest of the protocol is asynchronous, and validators only make prog

A simplifying element of Tendermint is that it uses the same mechanism to commit a block as it does to skip to the next round.

Assuming less than one-third of the validators are Byzantine, Tendermint guarantees that safety will never be violated - that is, validators will never commit conflicting blocks at the same height.
To do this it introduces a few "locking" rules which modulate which paths can be followed in the flow diagram.
Once a validator precommits a block, it is "locked" on that block.
To do this it introduces a few **locking** rules which modulate which paths can be followed in the flow diagram.
Once a validator precommits a block, it is locked on that block.
Then,

1) it must prevote for the block it is locked on

@@ -228,4 +228,4 @@ The `Cosmos Network <http://cosmos.network>`__ is designed to use this Proof-of-

The following diagram is Tendermint in a (technical) nutshell. `See here for high resolution version <https://github.com/mobfoundry/hackatom/blob/master/tminfo.pdf>`__.

.. figure:: images/tm-transaction-flow.png
.. figure:: assets/tm-transaction-flow.png

@@ -2,19 +2,19 @@

Specification
#############

Here you'll find details of the Tendermint specification. See `the spec repo <https://github.com/tendermint/spec>`__ for upcoming material. Tendermint's types are produced by `godoc <https://godoc.org/github.com/tendermint/tendermint/types>`__
Here you'll find details of the Tendermint specification. See `the spec repo <https://github.com/tendermint/spec>`__ for upcoming material. Tendermint's types are produced by `godoc <https://godoc.org/github.com/tendermint/tendermint/types>`__.

.. toctree::
   :maxdepth: 2

   block-structure.rst
   byzantine-consensus-algorithm.rst
   configuration.rst
   fast-sync.rst
   genesis.rst
   light-client-protocol.rst
   merkle.rst
   rpc.rst
   secure-p2p.rst
   validators.rst
   wire-protocol.rst
   specification/block-structure.rst
   specification/byzantine-consensus-algorithm.rst
   specification/configuration.rst
   specification/fast-sync.rst
   specification/genesis.rst
   specification/light-client-protocol.rst
   specification/merkle.rst
   specification/rpc.rst
   specification/secure-p2p.rst
   specification/validators.rst
   specification/wire-protocol.rst

@@ -98,7 +98,7 @@ This is to protect anyone from swapping votes between chains to fake (or

frame) a validator. Also note that this ``chainID`` is in the
``genesis.json`` from *Tendermint*, not the ``genesis.json`` from the
basecoin app (`that is a different
chainID... <https://github.com/tendermint/basecoin/issues/32>`__).
chainID... <https://github.com/cosmos/cosmos-sdk/issues/32>`__).

Once we have those votes, and we calculated the proper `sign
bytes <https://godoc.org/github.com/tendermint/tendermint/types#Vote.WriteSignBytes>`__

@@ -136,7 +136,7 @@ Block Hash

The `block
hash <https://godoc.org/github.com/tendermint/tendermint/types#Block.Hash>`__
is the `Simple Tree hash <Merkle-Trees#simple-tree-with-dictionaries>`__
is the `Simple Tree hash <./merkle.html#simple-tree-with-dictionaries>`__
of the fields of the block ``Header`` encoded as a list of
``KVPair``\ s.
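
The "simple tree" here is a Merkle-style hash over the encoded header fields. The following sketch shows the shape of the computation only; Tendermint's actual key/value encoding and hash function differ.

::

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def simple_tree_hash(leaves):
        # Hash each leaf, then repeatedly hash pairs until one root
        # remains (an odd node is promoted unchanged).
        hashes = [h(leaf) for leaf in leaves]
        while len(hashes) > 1:
            nxt = []
            for i in range(0, len(hashes) - 1, 2):
                nxt.append(h(hashes[i] + hashes[i + 1]))
            if len(hashes) % 2 == 1:
                nxt.append(hashes[-1])
            hashes = nxt
        return hashes[0]

    # Header fields as sorted key/value pairs, hashed as one tree.
    header = {b"ChainID": b"test-chain", b"Height": b"91"}
    leaves = [k + b"=" + v for k, v in sorted(header.items())]
    print(simple_tree_hash(leaves).hex())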

@@ -30,14 +30,16 @@ The main config parameters are defined

- ``consensus.max_block_size_txs``: Maximum number of block txs.
  *Default*: ``10000``
- ``consensus.create_empty_blocks``: Create empty blocks w/o txs.
  *Default*: ``true``
- ``consensus.create_empty_blocks_interval``: Block creation interval, even if empty.
- ``consensus.timeout_*``: Various consensus timeout parameters
  **TODO**
- ``consensus.wal_file``: Consensus state WAL. *Default*:
  ``"$TMHOME/data/cswal"``
  ``"$TMHOME/data/cs.wal/wal"``
- ``consensus.wal_light``: Whether to use light-mode for Consensus
  state WAL. *Default*: ``false``

- ``mempool.*``: Various mempool parameters **TODO**
- ``mempool.*``: Various mempool parameters

- ``p2p.addr_book_file``: Peer address book. *Default*:
  ``"$TMHOME/addrbook.json"``. **NOT USED**

@@ -1,7 +1,7 @@

Genesis
=======

The genesis.json file in ``$TMROOT`` defines the initial TendermintCore
The genesis.json file in ``$TMHOME`` defines the initial TendermintCore
state upon genesis of the blockchain (`see
definition <https://github.com/tendermint/tendermint/blob/master/types/genesis.go>`__).

@@ -21,7 +21,7 @@ Fields

- ``validators``:

  - ``pub_key``: The first element specifies the pub\_key type. 1 ==
    Ed25519. The second element is the pubkey bytes.
  - ``amount``: The validator's voting power.
  - ``power``: The validator's voting power.
  - ``name``: Name of the validator (optional).

- ``app_hash``: The expected application hash (as returned by the
  ``Commit`` ABCI message) upon genesis. If the app's hash does not

@@ -41,7 +41,7 @@ Sample genesis.json

          1,
          "9BC5112CB9614D91CE423FA8744885126CD9D08D9FC9D1F42E552D662BAA411E"
        ],
        "amount": 1,
        "power": 1,
        "name": "mach1"
      },
      {

@@ -49,7 +49,7 @@ Sample genesis.json

          1,
          "F46A5543D51F31660D9F59653B4F96061A740FF7433E0DC1ECBC30BE8494DE06"
        ],
        "amount": 1,
        "power": 1,
        "name": "mach2"
      },
      {

@@ -57,7 +57,7 @@ Sample genesis.json

          1,
          "0E7B423C1635FD07C0FC3603B736D5D27953C1C6CA865BB9392CD79DE1A682BB"
        ],
        "amount": 1,
        "power": 1,
        "name": "mach3"
      },
      {

@@ -65,7 +65,7 @@ Sample genesis.json

          1,
          "4F49237B9A32EB50682EDD83C48CE9CDB1D02A7CFDADCFF6EC8C1FAADB358879"
        ],
        "amount": 1,
        "power": 1,
        "name": "mach4"
      }
    ],
@@ -16,7 +16,7 @@ The objective of the light client protocol is to get a

`block hash <./block-structure.html#block-hash>`__ where the commit
includes a majority of signatures from the last known validator set.
From there, all the application state is verifiable with `merkle
proofs <./merkle-trees#iavl-tree>`__.
proofs <./merkle.html#iavl-tree>`__.

Properties
----------