Compare commits

640 Commits

Author SHA1 Message Date
Ethan Buchman
6f9956990c Merge pull request #1377 from tendermint/release/0.17.1
Release/0.17.1
2018-03-27 11:24:48 -04:00
Ethan Buchman
9bf5862def types: fix genesis.AppStateJSON 2018-03-27 11:20:09 -04:00
Ethan Buchman
e1d98bb7f6 forgot bug fix in changelog 2018-03-27 10:06:30 -04:00
Ethan Buchman
e5cd006bce Merge pull request #1373 from tendermint/release/0.17.0
Release/0.17.0
2018-03-27 09:57:10 -04:00
Anton Kaliaev
58242e1b63 bump version one more time 2018-03-27 09:07:29 +02:00
Anton Kaliaev
4e86835163 update changelog for 0.17.0 release 2018-03-27 09:06:32 +02:00
Anton Kaliaev
ab4ac04c88 bump up the version 2018-03-26 22:07:07 +02:00
Anton Kaliaev
2c1887a635 update changelog 2018-03-26 22:06:58 +02:00
Anton Kaliaev
1c82281b77 make app_options -> app_state backwards compatible 2018-03-26 21:51:07 +02:00
Tomoya Ishizaki
43ac92b615 Changed to make line break easier to read (#1363) 2018-03-26 16:27:20 +02:00
Ethan Buchman
e3337d764a Merge pull request #1354 from tendermint/bucky/dep
update dep
2018-03-24 12:14:56 -04:00
Anton Kaliaev
214817ed17 do not add peer to switch if it fails to start 2018-03-23 13:31:48 +01:00
Anton Kaliaev
116a4ec705 temporary fix
I assume there is a deeper issue with how UnmarshalBinary works in
go-amino (i.e., when loading an array of some objects, an empty array
becomes []object{nil}). Note that when marshaling, the object is nil.
2018-03-23 12:47:02 +01:00
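A minimal Go sketch of the kind of temporary workaround this entry describes, assuming a decoded slice that may arrive as `[]*Object{nil}`; the `Object` type and `dropNils` helper are illustrative placeholders, not Tendermint code:

```go
package main

import "fmt"

// Object stands in for whatever type is being decoded; placeholder only.
type Object struct{ Name string }

// dropNils filters out nil entries left behind after decoding, the kind of
// temporary fix described above for the "empty array becomes []object{nil}"
// symptom.
func dropNils(in []*Object) []*Object {
	out := in[:0]
	for _, o := range in {
		if o != nil {
			out = append(out, o)
		}
	}
	return out
}

func main() {
	decoded := []*Object{nil}           // the symptom: an empty array decodes to a single nil element
	fmt.Println(len(dropNils(decoded))) // 0
}
```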
Ethan Buchman
bbaad22982 update dep 2018-03-23 10:27:00 +01:00
Anton Kaliaev
a7250af303 Exponential backoff follow up (#1349)
* document new functionality [ci skip]

Refs #1304

* add fixme [ci skip]

Refs #1304

* ensure that we dial peer after backoff duration

Refs #1304
2018-03-23 09:48:27 +01:00
Zach
6545a21369 docs/examples: update quick start guide (#1351) 2018-03-22 08:58:02 +01:00
Ethan Buchman
8c0c8e8e01 Merge pull request #1301 from tendermint/types-data+header+non-nil-panics
types: Hash invoked for nil Data and Header should not panic
2018-03-20 23:38:55 +01:00
Alexander Simmerl
79315efd1f Merge pull request #1341 from EugeneChung/develop
Remove unnecessary bytes.Compare() call
2018-03-20 16:27:06 +01:00
Eugene Chung
a61130aebb Remove unnecessary bytes.Compare() call 2018-03-20 23:43:18 +09:00
Alexander Simmerl
5a51a0ba06 Merge pull request #1337 from tendermint/1296-follow-up
Follow up for /health endpoint
2018-03-20 10:36:47 +01:00
Ethan Buchman
0d0b56739d Merge pull request #1335 from tendermint/zarko/1146-improve-bft-time-spec
Improve BFT time spec
2018-03-20 01:00:34 +01:00
Ethan Buchman
eb1816c9ff Merge pull request #1338 from tendermint/1266/xla-fix-flaky-testswitchreconnectstopersistentpeer
p2p: Keep reference to connections in test peer
2018-03-20 00:14:38 +01:00
Alexander Simmerl
50ae892d5e p2p: Keep reference to connections in test peer
We observed non-deterministic test failures in one of our switch tests,
which would happen if the GC ran between iterations of the accept loop.
As we don't hold any reference to the connection, the finalizer set up for
it might get triggered and the file handle closed. For the curious, check
the references on finalizers and variable scoping in the spec:

https://groups.google.com/forum/#!topic/golang-nuts/xWkhGJ5PY6c
https://groups.google.com/forum/#!topic/golang-nuts/d8aF4rAob7U/discussion
https://golang.org/ref/spec#Declarations_and_scope

Fixes #1266
2018-03-19 20:35:12 +01:00
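A minimal sketch of the pattern this fix describes, assuming a test accept loop; `acceptAll` is illustrative, not the actual switch test:

```go
package p2ptest

import "net"

// acceptAll keeps a reference to every accepted connection so the GC cannot
// run a finalizer that closes the underlying file handle between iterations
// of the accept loop.
func acceptAll(ln net.Listener, n int) ([]net.Conn, error) {
	conns := make([]net.Conn, 0, n)
	for i := 0; i < n; i++ {
		c, err := ln.Accept()
		if err != nil {
			return conns, err
		}
		conns = append(conns, c) // the held reference keeps the conn (and its fd) alive
	}
	return conns, nil
}
```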
Zarko Milosevic
5a79b3d74a Improve the spec to make explicit median computation based on voting power 2018-03-19 19:10:02 +01:00
Anton Kaliaev
460599ef75 fix comment 2018-03-19 20:01:43 +03:00
Anton Kaliaev
830bb72d6f add Health method to clients
Refs #1296
2018-03-19 20:01:43 +03:00
Anton Kaliaev
b11c26cc1c update CHANGELOG 2018-03-19 19:53:28 +03:00
Constantine
152290db7e Add /health rpc endpoint (#1306)
* Init `/health` rpc endpoint

* remove additional info from `/health` rpc endpoint

* Cleanup imports

* Added time threshold for health check

* Update rpc doc

* Remove unnecessary checks for blocktime creation lag

* Clean up of unnecessary config usage
2018-03-19 19:39:37 +03:00
Ethan Buchman
20b198681b Merge pull request #1328 from tendermint/bucky/add-vote-readability
addVote readability
2018-03-19 12:24:28 +01:00
Ethan Buchman
2bf106a1b3 Merge pull request #1333 from tendermint/1244-follow-up
consensus: fix tracking for MarkGood
2018-03-19 12:19:16 +01:00
Anton Kaliaev
2c445059f2 mark peer as good every blocksToContributeToBecomeGoodPeer blocks
if enough peers are marked good, eventually some will become unmarked, so
it is good to have a force that will continue to cycle them back into good
territory!

Refs #1317
2018-03-19 14:10:25 +03:00
Anton Kaliaev
d8b08cd943 return back panic in peer#onReceive
Refs #1317
2018-03-19 13:19:05 +03:00
Anton Kaliaev
ab59f64f57 test we record votes and block parts
Refs #1317
2018-03-19 13:17:11 +03:00
Anton Kaliaev
42e3457884 fix tracking of votes and blockparts to not allow old information
also remove mutex
Refs #1317
2018-03-19 13:17:06 +03:00
Anton Kaliaev
31f3dd42e7 mark peer as good only once
or should we do it every N blocks?
Refs #1317
2018-03-19 13:17:00 +03:00
Anton Kaliaev
5fab8e404d replace magic number with blocksToContributeToBecomeGoodPeer const
Refs #1317
2018-03-19 13:16:56 +03:00
Anton Kaliaev
701df09971 do not use keywords
Refs #1317
2018-03-19 13:16:02 +03:00
Ethan Buchman
d350da3135 config: fix private_peer_ids 2018-03-18 23:55:44 +01:00
Ethan Buchman
ab7dea4f20 consensus: return from errors sooner in addVote 2018-03-18 23:09:04 +01:00
Ethan Buchman
b297efb532 consensus: return from go-routine in test 2018-03-18 23:05:04 +01:00
Ethan Buchman
eaabdb5cac Merge pull request #1282 from tendermint/1126-private-peers
private peers
2018-03-18 22:53:57 +01:00
racin
066aee3045 Documentation: The character for 1/3 fraction could not be rendered in PDF on readthedocs. (#1326) 2018-03-18 22:44:38 +03:00
Ethan Buchman
ff1ec0260e Merge pull request #1318 from tendermint/bucky/testnet-cmd-fix
testnet cmd: ensure config dir exists. closes #1290
2018-03-17 00:05:30 +01:00
Emmanuel T Odeke
7cb3188fbc testnet cmd: ensure config dir exists. closes #1290 2018-03-16 14:26:43 +01:00
Alexander Simmerl
9b9022f8df privVal: Improve SocketClient network code (#1315)
As a follow-up to feedback from #1286, this change simplifies the connection
handling in the SocketClient and makes the communication via TCP more
robust. It introduces the tcpTimeoutListener to encapsulate accept and
i/o timeout handling as well as connection keep-alive; this type could
likely be upgraded to handle more fine-grained tuning of the TCP stack
(linger, nodelay, etc.) according to the properties we desire. The same
methods should be applied to the RemoteSigner, which will be overhauled
when the priv_val_server is fleshed out.

* require private key
* simplify connect logic
* break out conn upgrades to tcpTimeoutListener
* extend test coverage and simplify component setup
2018-03-16 16:32:17 +04:00
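A rough sketch, on the standard library only, of what a listener like the tcpTimeoutListener described here could look like; field names and behavior are assumptions, not the actual type:

```go
package privval

import (
	"net"
	"time"
)

// timeoutListener applies an accept deadline, a connection deadline, and
// keep-alive to every accepted connection.
type timeoutListener struct {
	*net.TCPListener
	acceptDeadline time.Duration
	connDeadline   time.Duration
	keepAlive      time.Duration
}

// Accept waits for the next TCP connection, then configures its deadlines
// and keep-alive before handing it to the caller.
func (l timeoutListener) Accept() (net.Conn, error) {
	if err := l.SetDeadline(time.Now().Add(l.acceptDeadline)); err != nil {
		return nil, err
	}
	tc, err := l.AcceptTCP()
	if err != nil {
		return nil, err
	}
	if err := tc.SetDeadline(time.Now().Add(l.connDeadline)); err != nil {
		return nil, err
	}
	if err := tc.SetKeepAlive(true); err != nil {
		return nil, err
	}
	if err := tc.SetKeepAlivePeriod(l.keepAlive); err != nil {
		return nil, err
	}
	return tc, nil
}
```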
Ethan Buchman
68e049d3af Merge pull request #1244 from tendermint/1147-using-mark-good-and-stop-peer-for-error
Using MarkGood and StopPeerForError
2018-03-15 18:58:29 +01:00
Anton Kaliaev
86ddf17db0 add a todo
Refs #1281
2018-03-15 11:58:20 +04:00
Anton Kaliaev
a655500047 fix copy-pasted comment [ci skip] 2018-03-15 11:58:20 +04:00
Anton Kaliaev
714f885dac mark peer as good if it contributed enough votes or block parts
Refs #1147
2018-03-15 11:58:20 +04:00
Anton Kaliaev
b0d8f552c5 return err if peer has sent a vote that does not match our round 2018-03-15 11:58:20 +04:00
Anton Kaliaev
63cb69cb96 comment out ErrAddingVote because it breaks byzantine_test 2018-03-15 11:58:20 +04:00
Anton Kaliaev
266974cb59 stop peer if it sends invalid vote 2018-03-15 11:58:20 +04:00
Anton Kaliaev
bcf54b0aa3 PanicSanity is deprecated 2018-03-15 11:58:20 +04:00
Anton Kaliaev
d86855ad7a stop peer if it sends us msg with unknown channel 2018-03-15 11:58:20 +04:00
Anton Kaliaev
d0c67bbe16 stop peer if evidence is not valid 2018-03-15 11:58:20 +04:00
Anton Kaliaev
4242352852 stop peer on decoding error 2018-03-15 11:58:19 +04:00
Anton Kaliaev
f299689573 return back defaultChannelCapacity 2018-03-15 11:58:19 +04:00
Anton Kaliaev
baf457e6d4 return error if peer sent us a block we didn't expect with a height too far ahead/behind 2018-03-15 11:58:19 +04:00
Anton Kaliaev
0c7e871ef0 [blockchain] replace timeoutsCh with more abstract errorsCh 2018-03-15 11:58:19 +04:00
Anton Kaliaev
87ce804b4a cmn.PanicSanity is deprecated 2018-03-15 11:58:19 +04:00
Anton Kaliaev
2a258a2c3f revert removing private peers from persistent 2018-03-15 11:55:30 +04:00
Anton Kaliaev
a40518c7da revert adding dial_peers's private flag 2018-03-15 11:55:30 +04:00
Anton Kaliaev
31deaa4a79 fix broken merge 2018-03-15 11:55:30 +04:00
Anton Kaliaev
736ea055a8 add a test for pex reactor 2018-03-15 11:55:30 +04:00
Anton Kaliaev
a39aec0bae rename private_peers to private_peer_ids to distinguish from peers 2018-03-15 11:55:30 +04:00
Anton Kaliaev
8bef3eb1f4 private peers
Refs #1126
2018-03-15 11:55:29 +04:00
Ethan Buchman
244d88dfda Merge pull request #1314 from tendermint/add-go-amino-as-source-to-wire
Add go amino as source to wire
2018-03-15 01:10:38 +01:00
Anton Kaliaev
76e1dd41e4 fix Makefile's .PHONY 2018-03-14 21:42:35 +04:00
Anton Kaliaev
e39187a063 add go-amino as source for go-wire 2018-03-14 21:42:17 +04:00
Ethan Buchman
cd2ba4aa7f Merge pull request #1286 from tendermint/feature/xla-priv-val-invert-dial
Invert privVal socket communication
2018-03-12 16:49:32 +01:00
Emmanuel T Odeke
0de19420f6 cmd/tendermint/commands/lite: add tcp scheme to address URLs (#1297)
Noticed while investigating
https://github.com/tendermint/tendermint/issues/970

As reported by @zramsay, we'd get a warning
from tendermint/rpc/lib because we were passing in
scheme-less addresses, so we now default to the "tcp" scheme.

Also by default, "node" (nodeAddr) has been set to:
  "tcp://localhost:46657"
instead of the bare:
  "localhost:46657"

This change just cleans up such warnings, as
they would spuriously spook users of a package, "lite",
that claims to be secure.
2018-03-12 10:03:11 +04:00
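A minimal sketch of the clean-up this entry describes; the `ensureScheme` helper is illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// ensureScheme defaults a scheme-less address to "tcp://" so downstream RPC
// code stops warning about it.
func ensureScheme(addr string) string {
	if strings.Contains(addr, "://") {
		return addr
	}
	return "tcp://" + addr
}

func main() {
	fmt.Println(ensureScheme("localhost:46657"))       // tcp://localhost:46657
	fmt.Println(ensureScheme("tcp://localhost:46657")) // unchanged
}
```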
Ethan Buchman
f3000d0c84 Merge pull request #1292 from tendermint/1125-exp-backoff-for-ensure-peers
exponential backoff for addrs in the address book
2018-03-11 19:57:49 +01:00
Anton Kaliaev
fc5b0471d9 use time.Since 2018-03-11 14:13:34 +04:00
Anton Kaliaev
264bce4ddd skip dialing based on last time dialed 2018-03-11 14:00:49 +04:00
Anton Kaliaev
0f41570c80 fixes from bucky's review 2018-03-11 13:22:37 +04:00
Emmanuel T Odeke
8723c91db9 types: Hash invoked for nil Data and Header should not panic
Fixes https://github.com/tendermint/tendermint/issues/1298
Fixes https://github.com/tendermint/tendermint/issues/1299

Found while writing tests in https://github.com/tendermint/tendermint/pull/1300
2018-03-10 21:44:08 -08:00
Anton Kaliaev
f85c8896d9 test pex_reactor's dialPeer 2018-03-09 16:23:52 +04:00
Anton Kaliaev
f0d4f56327 refactor pex_reactor tests 2018-03-09 16:02:24 +04:00
Anton Kaliaev
3d5c05e4e6 Merge pull request #1293 from tendermint/update-template
add 2 more points to ISSUE_TEMPLATE
2018-03-09 15:15:34 +04:00
Anton Kaliaev
018da09f14 do not run complete test suite on make
bad dev experience
2018-03-08 18:55:14 +04:00
Anton Kaliaev
60a64af28d add 2 more points to ISSUE_TEMPLATE
Refs #1291
2018-03-08 18:53:11 +04:00
Zach
13a2013229 Testing refactor for Jenkins (#1098)
* de-mystify tests & run them in parallel (#1031)

* test optimization for jenkins (#1093)

* makefile cleanup

* tests: split fast and slow go tests, closes #1055

* pr comments

* restore circle conditions

* fix need_abci

* ...

* docker run: no :Z for circle?

* Remove cmd breaking comment
2018-03-08 18:52:38 +04:00
Anton Kaliaev
1941b5c769 fixes from @xla's review 2018-03-08 16:31:44 +04:00
Anton Kaliaev
21e2c41c6b exponential backoff for addrs in the address book
Refs #1125
2018-03-08 14:04:26 +04:00
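A small sketch of the exponential backoff idea referenced here; the base, cap, and jitter values are assumptions, not the values used in the address book:

```go
package pex

import (
	"math"
	"math/rand"
	"time"
)

// backoffDuration grows the wait between dial attempts exponentially with the
// number of failed attempts, capped and with a little jitter so peers don't
// all retry in lockstep.
func backoffDuration(attempts int) time.Duration {
	base := 3 * time.Second
	d := time.Duration(float64(base) * math.Pow(2, float64(attempts)))
	if d > 30*time.Minute {
		d = 30 * time.Minute
	}
	return d + time.Duration(rand.Int63n(int64(time.Second)))
}
```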
Alexander Simmerl
589781721a Invert privVal socket communication
Follow-up to #1255, aligning with the expectation that the external
signing process connects to the node. The SocketClient will block on
start until one connection has been established; support for multiple
simultaneously connected signers is a planned future extension.

* SocketClient accepts connection
* PrivValSocketServer renamed to RemoteSigner
* extend tests
2018-03-07 12:37:05 +01:00
Ethan Buchman
2ce57a65ff Merge pull request #1284 from tendermint/feature/xla-follow-priv-val
Follow-ups to PrivValidator
2018-03-06 23:37:10 +01:00
Alexander Simmerl
2aa77025c3 Fix typo 2018-03-06 19:56:31 +01:00
Alexander Simmerl
8e1856a90a Use builtin panic 2018-03-06 19:56:31 +01:00
Alexander Simmerl
ca619c80b6 Stop privVal socket client on node shutdown 2018-03-06 19:56:30 +01:00
Alexander Simmerl
25ff699425 Improve method docs 2018-03-06 19:55:26 +01:00
Alexander Simmerl
879b4c0a2c Use common method to determine file existence 2018-03-06 19:55:26 +01:00
Ethan Buchman
45d07a3d0b Merge pull request #1283 from tendermint/feature/xla-run-integration-release
Speed up CircleCI builds
2018-03-06 19:38:08 +01:00
Admir Sabanovic
788354d81e fix typo (#1285) 2018-03-06 20:44:13 +04:00
Alexander Simmerl
b7ce89e568 Speed up CircleCI builds
To achieve faster feedback cycles for our feature PRs, this change
reduces the average build time from ~35 min to ~6 min by utilising CircleCI's
new 2.0 offering based on Docker and Nomad. We make use of parallel build
steps wherever possible so that the duration is determined by the
slowest test suite (p2p).

This is an intermediate step until we move our CI/CD completely
on-premise for more control and added security.
2018-03-06 17:36:44 +01:00
Anton Kaliaev
8d81a259c7 Merge pull request #1280 from tendermint/zach/explain-determinism
docs: add 'On Determinism'
2018-03-06 13:44:00 +04:00
Zach Ramsay
3019761204 docs: add document 'On Determinism'
closes https://github.com/tendermint/abci/issues/56
2018-03-06 15:19:59 +08:00
Anton Kaliaev
6120a4c5e4 Merge pull request #1256 from tendermint/feature/more-priv-val
PrivValidatorAddr -> PrivValidatorListenAddr. Update ADR008
2018-03-05 21:48:16 +04:00
Alexander Simmerl
533ed2a876 adr: Amend decisions for PrivValidator 2018-03-05 17:38:05 +01:00
Ethan Buchman
d4e4055d57 PrivValidatorAddr -> PrivValidatorListenAddr. Update ADR008 2018-03-05 17:11:43 +01:00
Alexander Simmerl
ee51ad8e29 Make RPC handler protocol agnostic (#1276) 2018-03-05 19:59:04 +04:00
Zach
bdd50c5f37 fix docs links & stuff (#1273)
* fix links in docs/spec etc, closes #1261

* spec: remove ref to non-existant repo

* codecov you weirdo
2018-03-05 16:30:36 +04:00
Anton Kaliaev
3d88612690 Merge pull request #1274 from tendermint/codecov
Codecov
2018-03-05 15:54:33 +04:00
Anton Kaliaev
630d54c95a return back threshold and ignore sections 2018-03-05 15:16:24 +04:00
Ethan Buchman
3cedd8cf07 Merge pull request #1265 from tendermint/bucky/new-wire-api
Bucky/new wire api
2018-03-02 10:56:24 -05:00
Ethan Buchman
929f326dd2 update dep 2018-03-02 10:59:10 -05:00
Ethan Buchman
ff8c648c23 types: uncomment some tests 2018-03-02 09:26:37 -05:00
Ethan Buchman
8bceb5ce36 Merge pull request #1233 from tendermint/feature/xla-dial-seed-without-timeout
p2p: if we have no peers we should dial seeds right away
2018-03-02 09:07:01 -05:00
Alexander Simmerl
8f2703e8b2 Dial seeds directly without potential peers
In order to improve the operator experience, we want the node to dial
seeds immediately if there are no peers to connect to. Until now, the
routine responsible for ensuring peers are connected would wait a
random amount of time, up to 30s (if not configured otherwise).
2018-03-02 12:55:01 +01:00
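A tiny sketch of the behavioral change described above, under the assumption that the ensure-peers routine picks a delay before each round; names are illustrative:

```go
package pex

import (
	"math/rand"
	"time"
)

// ensurePeersDelay returns zero when there are no outbound peers, so seeds are
// dialed right away instead of after the usual randomized wait of up to 30s.
func ensurePeersDelay(numOutboundPeers int) time.Duration {
	if numOutboundPeers == 0 {
		return 0 // dial seeds immediately
	}
	return time.Duration(rand.Intn(30)) * time.Second
}
```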
Ethan Buchman
c394eef7b8 types: TestValidatorSetVerifyCommit 2018-03-02 04:21:23 -05:00
Ethan Buchman
f9921ae362 types/validator_set_test: move funcs around 2018-03-02 03:52:44 -05:00
Ethan Frey
fff0c6cd8e Add app_state from genesis file in InitChain message 2018-03-02 03:46:04 -05:00
Ethan Buchman
59872bf335 update dep for new go-wire API 2018-03-02 02:28:53 -05:00
Ethan Buchman
656854186c state: fix txResult issue with UnmarshalBinary into ptr 2018-03-02 02:28:17 -05:00
Ethan Buchman
6596bff8ec types: bring back json.Marshal/Unmarshal for genesis/priv_val 2018-03-02 02:09:28 -05:00
Ethan Buchman
eaafd9d61c state: builds 2018-03-02 01:51:27 -05:00
Ethan Buchman
5378bfc5c7 types.SignBytes -> o.SignBytes 2018-03-02 01:50:17 -05:00
Ethan Buchman
abeeeeb611 types: fix validator_set_test issue with UnmarshalBinary into ptr 2018-03-02 01:49:59 -05:00
Ethan Buchman
ca3655a409 types: p2pID -> P2PID 2018-03-02 01:42:56 -05:00
Ethan Buchman
6cf5100645 types: working on tests... 2018-03-02 01:34:23 -05:00
Ethan Buchman
51628aea08 types: revert to old wire. builds 2018-03-02 01:33:38 -05:00
Ethan Buchman
3395f5fb0e types: builds 2018-03-02 01:28:38 -05:00
Ethan Buchman
d2cd079541 types: tests build 2018-03-02 01:28:21 -05:00
Ethan Buchman
fc35e3b8c5 wire: no codec yet 2018-03-02 01:27:52 -05:00
Ethan Buchman
085b4f5f2e add wire pkg with global codec 2018-03-02 01:26:38 -05:00
Ethan Buchman
fd58645dd2 types: remove dep on p2p 2018-03-02 01:26:03 -05:00
Ethan Buchman
200787ede2 types: update for new go-wire. WriteSignBytes -> SignBytes 2018-03-02 01:25:54 -05:00
Ethan Buchman
9cdba04fe9 Merge pull request #1142 from tendermint/add_valid_value_mechanism
Add support for ValidBlock mechanism for the simplest case
2018-03-01 23:33:46 -05:00
Anton Kaliaev
e92c87630d Merge pull request #1258 from tendermint/return-dummy-app
return back dummy & persistent_dummy as options for proxy_app
2018-03-01 15:14:07 +04:00
Zarko Milosevic
d4e93a6de3 Separate ValidBlock rule from unlocking rule 2018-03-01 11:42:22 +01:00
Zarko Milosevic
4670857c15 Add support for ValidBlock mechanism for the simplest case 2018-03-01 11:42:22 +01:00
Anton Kaliaev
e8d8aedd1f update changelog 2018-03-01 12:00:09 +04:00
Anton Kaliaev
87372da730 return back dummy & persistent_dummy as options for proxy_app 2018-03-01 11:54:08 +04:00
Anton Kaliaev
3b40b62d04 Merge pull request #1198 from tendermint/feature/genesisrawjson
SDK: AppOptions -> AppState
2018-03-01 00:55:13 +04:00
Anton Kaliaev
c41cbf2a07 add missing golang.org/x/net/netutil package 2018-02-28 23:44:18 +04:00
Anton Kaliaev
1a3faa8db1 add app_state field to docs 2018-02-28 23:44:10 +04:00
Anton Kaliaev
4ce79baac7 rename app_options to app_state in genesis_test 2018-02-28 23:44:10 +04:00
Anton Kaliaev
056b70b4ce update changelog 2018-02-28 23:44:10 +04:00
rigelrozanski
4806b3b9bf AppOptions -> AppStateJSON 2018-02-28 23:44:10 +04:00
Ethan Buchman
2a8f0000b2 Merge pull request #1250 from tendermint/ditch-glide
Ditch glide
2018-02-28 09:52:12 -05:00
Ethan Buchman
dd2d846c02 Merge pull request #1203 from tendermint/feature/priv_val
types/priv_validator package
2018-02-28 09:27:03 -05:00
Anton Kaliaev
2ae87eee4e style fixes from @xla 2018-02-28 12:24:26 +04:00
use-n-delete
4be23027ed adding recipe for minimalistic deps analysis (#1218) 2018-02-28 11:23:31 +04:00
Anton Kaliaev
c19bbb2403 switch back to parsing .lock file 2018-02-28 11:15:40 +04:00
Ethan Buchman
edb871f514 Merge pull request #1237 from tendermint/feature/priv_val_socket_client
privVal: Integrate socket client
2018-02-28 00:57:51 -05:00
Ethan Buchman
9c5937df96 Merge pull request #1247 from tendermint/feature/xla-integrate-codecov
Integrate CodeCov as change acceptance stage
2018-02-28 00:38:37 -05:00
Ethan Buchman
be6082df8e Merge pull request #1043 from tendermint/lite-binary-vs-linear-search+optimize-insertions
lite: memStoreProvider GetHeightBinarySearch method + fix ValKeys.signHeaders
2018-02-28 00:33:59 -05:00
Anton Kaliaev
66354de219 cd into tendermint before calling dep status 2018-02-27 18:47:53 +04:00
Alexander Simmerl
458a40f74e Integrate CodeCov as change acceptance stage
As a rough measure to keep quality up we want to integrate our code
coverage tooling to the point where it is required for changes to be
merged.

Example taken from https://github.com/cosmos/voyager/blob/develop/codecov.yaml
2018-02-27 15:39:28 +01:00
Anton Kaliaev
0821384ac6 update abci version 2018-02-27 18:34:32 +04:00
Anton Kaliaev
e01650f21d fix Dockerfile.develop 2018-02-27 18:02:40 +04:00
Anton Kaliaev
8dd06cf197 ditch glide 2018-02-27 18:02:40 +04:00
Anton Kaliaev
93732b4c1e remove old network tests
Refs #1249
2018-02-27 18:02:25 +04:00
Zach
2cc63069c6 rename dummy to kvstore (#1223)
* remove accidental binary

* docs: s/Dummy&dummy/KVStore&kvstore/g

* glide update to abci

* update abci import paths

* dummy begone, hello kvstore

* RequestInitChain needs genesisBytes

* glide update
2018-02-27 18:01:10 +04:00
Zaki Manian
6270ecef8c Switch to dep from glide for dependency management (#1243)
* Switch to dep from glide for dependency management

* Update CI dockerfile to use dep instead of glide

* Wrong file extension

* Run 'dep ensure' after copying code

* Install glide to handle abci dependencies in testing

* Use `dep ensure -vendor-only` to setup vendor directory before installing source code on ci
2018-02-27 15:59:50 +04:00
Ethan Buchman
9293ae76bf p2p: introduce peerConn to simplify peer creation (#1226)
* expose AuthEnc in the P2P config

if AuthEnc is true, dialed peers must have a node ID in the address and
it must match the persistent pubkey from the secret handshake.

Refs #1157

* fixes after my own review

* fix docs

* fix build failure

```
p2p/pex/pex_reactor_test.go:288:88: cannot use seed.NodeInfo().NetAddress() (type *p2p.NetAddress) as type string in array or slice literal
```

* p2p: introduce peerConn to simplify peer creation

* Introduce `peerConn` containing the known fields of `peer`
* `peer` only created in `sw.addPeer` once handshake is complete and NodeInfo is checked
* Eliminates some mutable variables and makes the code flow better
* Simplifies the `newXxxPeer` funcs
* Use ID instead of PubKey where possible.
        * SetPubKeyFilter -> SetIDFilter
        * nodeInfo.Validate takes ID
        * remove peer.PubKey()

* persistent node ids

* fixes from review

* test: use ip_plus_id.sh more

* fix invalid memory panic during fast_sync test

```
2018-02-21T06:30:05Z box887.localdomain docker/local_testnet_4[14907]: panic: runtime error: invalid memory address or nil pointer dereference
2018-02-21T06:30:05Z box887.localdomain docker/local_testnet_4[14907]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x98dd3e]
2018-02-21T06:30:05Z box887.localdomain docker/local_testnet_4[14907]:
2018-02-21T06:30:05Z box887.localdomain docker/local_testnet_4[14907]: goroutine 3432 [running]:
2018-02-21T06:30:05Z box887.localdomain docker/local_testnet_4[14907]: github.com/tendermint/tendermint/p2p.newOutboundPeerConn(0xc423fd1380, 0xc420933e00, 0x1, 0x1239a60, 0
xc420128c40, 0x2, 0x42caf6, 0xc42001f300, 0xc422831d98, 0xc4227951c0, ...)
2018-02-21T06:30:05Z box887.localdomain docker/local_testnet_4[14907]: #011/go/src/github.com/tendermint/tendermint/p2p/peer.go:123 +0x31e
2018-02-21T06:30:05Z box887.localdomain docker/local_testnet_4[14907]: github.com/tendermint/tendermint/p2p.(*Switch).addOutboundPeerWithConfig(0xc4200ad040, 0xc423fd1380, 0
xc420933e00, 0xc423f48801, 0x28, 0x2)
2018-02-21T06:30:05Z box887.localdomain docker/local_testnet_4[14907]: #011/go/src/github.com/tendermint/tendermint/p2p/switch.go:455 +0x12b
2018-02-21T06:30:05Z box887.localdomain docker/local_testnet_4[14907]: github.com/tendermint/tendermint/p2p.(*Switch).DialPeerWithAddress(0xc4200ad040, 0xc423fd1380, 0x1, 0x
0, 0x0)
2018-02-21T06:30:05Z box887.localdomain docker/local_testnet_4[14907]: #011/go/src/github.com/tendermint/tendermint/p2p/switch.go:371 +0xdc
2018-02-21T06:30:05Z box887.localdomain docker/local_testnet_4[14907]: github.com/tendermint/tendermint/p2p.(*Switch).reconnectToPeer(0xc4200ad040, 0x123e000, 0xc42007bb00)
2018-02-21T06:30:05Z box887.localdomain docker/local_testnet_4[14907]: #011/go/src/github.com/tendermint/tendermint/p2p/switch.go:290 +0x25f
2018-02-21T06:30:05Z box887.localdomain docker/local_testnet_4[14907]: created by github.com/tendermint/tendermint/p2p.(*Switch).StopPeerForError
2018-02-21T06:30:05Z box887.localdomain docker/local_testnet_4[14907]: #011/go/src/github.com/tendermint/tendermint/p2p/switch.go:256 +0x1b7
```
2018-02-27 15:54:40 +04:00
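A sketch of the direction of the peerConn refactor summarized above: a small struct carries what is known before the handshake, and a full peer is only built once the handshake completes and NodeInfo is checked. Field and type names are illustrative, not the real p2p types:

```go
package p2p

import "net"

// peerConnSketch holds the pre-handshake facts about a connection.
type peerConnSketch struct {
	outbound   bool
	persistent bool
	conn       net.Conn // may later be upgraded to a secret connection
}

// nodeInfoSketch stands in for the verified NodeInfo.
type nodeInfoSketch struct{ ID, Moniker string }

// peerSketch is only constructed by the switch after the handshake succeeds
// and the NodeInfo has been validated.
type peerSketch struct {
	peerConnSketch
	nodeInfo nodeInfoSketch
}

func newPeerSketch(pc peerConnSketch, ni nodeInfoSketch) *peerSketch {
	return &peerSketch{peerConnSketch: pc, nodeInfo: ni}
}
```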
Alexander Simmerl
74d3f7e1fd Integrate private validator socket client
Following ADR 008, the node will connect to an external
process to handle signing requests. Operation of the external process is
left to the user.

* introduce alias for PrivValidator interface on socket client
* integrate socket client in node
* structure tests
* remove unnecessary flag
2018-02-23 13:58:22 +01:00
Ethan Buchman
2fd023a239 remove accidental binary 2018-02-21 00:04:53 -05:00
Ethan Buchman
c8a2bdf78b Merge pull request #1225 from tendermint/release-v0.16.0
Release v0.16.0
2018-02-20 23:23:48 -05:00
Zach Ramsay
3cd604562c RequestInitChain needs genesisBytes 2018-02-21 03:43:47 +00:00
Zach Ramsay
7c6c0dba53 glide update 2018-02-21 03:32:02 +00:00
Ethan Buchman
ec2f3f49ef changelog date and version 2018-02-19 17:35:46 -05:00
Ethan Buchman
8bba7c64bc update version and changelog [ci skip] 2018-02-19 17:33:48 -05:00
Ethan Buchman
ffd2483e67 Merge pull request #1204 from tendermint/feature/priv_val_sockets
Feature/priv val sockets
2018-02-19 16:06:07 -05:00
Ethan Buchman
0467de890a Merge pull request #1202 from tendermint/restore-mempool-memory-leak-tests
restore mempool memory leak tests
2018-02-19 15:36:19 -05:00
Anton Kaliaev
0ae0155cba restore mempool memory leak tests 2018-02-19 15:34:33 -05:00
Ethan Buchman
f4feb7703b fix appHash log. closes #1207 2018-02-19 15:32:09 -05:00
Alexander Simmerl
a14aab67de Integrate PrivValidator socket server 2018-02-19 19:20:01 +01:00
Alexander Simmerl
106d804357 Correct config description 2018-02-14 02:51:05 +01:00
Alexander Simmerl
a1020307a0 Clean up flags 2018-02-14 02:41:16 +01:00
Alexander Simmerl
6c70b4ce05 Apply connection deadline consistently 2018-02-14 02:25:17 +01:00
Alexander Simmerl
2a292efb56 Return error for all PrivValidator methods
As calls to the private validator can involve side effects like network
communication, it is desirable for all methods to return an error rather
than break the control flow of the caller.

* adjust PrivValidator interface
2018-02-13 19:34:50 +01:00
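A sketch of the interface direction this entry describes, with every signing method returning an error so a network-backed signer cannot break the caller's control flow; method names and parameter types are simplified placeholders, not the exact Tendermint interface:

```go
package types

type (
	Address  []byte
	PubKey   []byte
	Vote     struct{}
	Proposal struct{}
)

// PrivValidatorSketch illustrates the adjusted interface: signing operations
// report failures via error values instead of panicking.
type PrivValidatorSketch interface {
	GetAddress() Address
	GetPubKey() PubKey
	SignVote(chainID string, vote *Vote) error
	SignProposal(chainID string, proposal *Proposal) error
}
```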
Alexander Simmerl
82b1a34a36 Separate connect logic
* break out connect functionality out of OnStart
* introduce max retries
2018-02-13 19:08:21 +01:00
Ethan Buchman
0e68638af3 update glide abci/tmlibs to develop 2018-02-12 19:13:36 -05:00
Ethan Buchman
d3e276bf80 Merge pull request #1209 from tendermint/1205-fixes-for-p2p-memory-leak-and-pong
Fixes for p2p memory leak and pong
2018-02-12 19:04:31 -05:00
Anton Kaliaev
fc585bcdec do not block when writing to pongTimeoutCh
Refs #1205
2018-02-12 17:04:07 +04:00
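A simplified sketch of the non-blocking write described here, assuming a pong timer started with time.AfterFunc; this is not the actual MConnection code:

```go
package conn

import "time"

// startPongTimer schedules a timeout signal; if nothing is currently draining
// pongTimeoutCh, the signal is dropped instead of blocking the callback.
func startPongTimer(pongTimeoutCh chan bool, pongTimeout time.Duration) *time.Timer {
	return time.AfterFunc(pongTimeout, func() {
		select {
		case pongTimeoutCh <- true:
		default: // nobody listening; don't block
		}
	})
}
```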
Anton Kaliaev
2a24ae90c1 fixes from Jae's review
1. remove pointer
2. add Quit() method to Service interface
2018-02-12 14:32:09 +04:00
Ethan Buchman
8da2a6a147 types/priv_validator: fixes for latest p2p and cmn 2018-02-09 17:24:30 -05:00
Alexander Simmerl
7d71e702d8 Integrate privVal client with node secret 2018-02-09 16:54:43 -05:00
Alexander Simmerl
38d18ca11a Harden tests 2018-02-09 16:53:17 -05:00
Alexander Simmerl
32d9563a15 Format and consolidate 2018-02-09 16:53:17 -05:00
Alexander Simmerl
18f7e52562 Use secret connection 2018-02-09 16:53:17 -05:00
Alexander Simmerl
fec541373d Correct server protocol 2018-02-09 16:53:17 -05:00
Alexander Simmerl
ff600e9aa0 wip: check error of wire read 2018-02-09 16:53:17 -05:00
Alexander Simmerl
a49357b19e wip: Avoid underscore in var name 2018-02-09 16:53:17 -05:00
Alexander Simmerl
4b997c29ee wip: fix nil pointer deference 2018-02-09 16:53:17 -05:00
Alexander Simmerl
d321839669 wip: fix code block in ADR 2018-02-09 16:53:17 -05:00
Alexander Simmerl
c27fda09dd wip: Comment types
* add comments to all public types
* fix comments to adhere to comment standards
2018-02-09 16:53:15 -05:00
Ethan Buchman
23eb84db35 wip: priv val via sockets 2018-02-09 16:52:58 -05:00
Ethan Buchman
bef91ea7fe adr-008-priv-validator 2018-02-09 16:52:05 -05:00
Ethan Buchman
459633fb4c types/priv_validator 2018-02-09 16:38:23 -05:00
Ethan Buchman
f1c8489270 Merge pull request #1201 from tendermint/1022-do-not-enforce-1/3-val-changes
do not enforce 1/3 validator power change
2018-02-09 16:15:38 -05:00
Ethan Buchman
2d10c8f15b Merge pull request #1095 from tendermint/804-p2p-issues
[p2p] Pong Timeout
2018-02-09 14:38:24 -05:00
Anton Kaliaev
106cdb74e5 do not enforce 1/3 validator power change
leave it to the app

Refs #1022
2018-02-09 23:30:04 +04:00
Anton Kaliaev
22b038810a do not block in recvRoutine 2018-02-09 23:03:26 +04:00
Anton Kaliaev
45750e1b29 fix race by sending signal instead of stopping pongTimer 2018-02-09 21:32:29 +04:00
Anton Kaliaev
26419fba28 refactor code plus add one more test
* extract stopPongTimer method
* TestMConnectionMultiplePings
2018-02-09 21:32:29 +04:00
Anton Kaliaev
ac0123d249 drain pongTimeoutCh and pongTimer's channel to prevent leaks 2018-02-09 21:32:29 +04:00
Anton Kaliaev
f4ff66de30 rewrite pong timer to use time.AfterFunc 2018-02-09 21:32:29 +04:00
Anton Kaliaev
747b73cb95 fix merge conflicts 2018-02-09 21:32:29 +04:00
Anton Kaliaev
161e100a24 close return channel when we're done
Benchmark results:

```
BenchmarkSwitchBroadcast-2         30000             71275 ns/op
--- BENCH: BenchmarkSwitchBroadcast-2
        switch_test.go:339: success: 1, failure: 0
        switch_test.go:339: success: 100, failure: 0
        switch_test.go:339: success: 10000, failure: 0
        switch_test.go:339: success: 30000, failure: 0
```
2018-02-09 21:32:29 +04:00
Anton Kaliaev
3ae738f453 increase timeouts 2018-02-09 21:32:29 +04:00
Anton Kaliaev
d14d4a2527 remove TryBroadcast 2018-02-09 21:32:29 +04:00
Anton Kaliaev
860da464df remove weird concurrency testing 2018-02-09 21:32:28 +04:00
Anton Kaliaev
4e2000abfe control order by sending msgs from one goroutine 2018-02-09 21:32:28 +04:00
Anton Kaliaev
5834a59816 read ping 2018-02-09 21:32:28 +04:00
Anton Kaliaev
b28b76ddf7 rename pingTimeout to pingInterval, pongTimer is now time.Timer 2018-02-09 21:32:28 +04:00
zbo14
91e4f4b786 ping/pong timeout in config 2018-02-09 21:32:28 +04:00
zbo14
9b554fb2c4 switch test modification 2018-02-09 21:32:28 +04:00
zbo14
f97ead4f5f prep for merge 2018-02-09 21:32:28 +04:00
zbo14
5af22d6ee6 remove SwitchEventNewPeer, SwitchEventDonePeer 2018-02-09 21:32:28 +04:00
zbo14
1d16df6a92 add test, TrySend in broadcast 2018-02-09 21:32:27 +04:00
Ethan Buchman
e7bc946760 Merge pull request #1200 from tendermint/update-deps
Update tmlibs & protobuf deps
2018-02-09 11:00:43 -05:00
Ethan Buchman
cf1e1f5899 Merge pull request #1194 from tendermint/1177-semantics-of-currate-low-msg
improve "curRate too low" message
2018-02-09 11:00:09 -05:00
Anton Kaliaev
2f8372d629 update protobuf 2018-02-09 13:58:36 +04:00
Anton Kaliaev
d84e4effce update tmlibs 2018-02-09 13:47:51 +04:00
Anton Kaliaev
0c1b91b762 revert back curRate != 0 2018-02-09 13:02:44 +04:00
Anton Kaliaev
c8990d06d9 remove curRate != 0 2018-02-09 12:01:13 +04:00
Anton Kaliaev
b0a55882b2 lower the minRecvRate
See https://github.com/tendermint/tendermint/issues/1177#issuecomment-363720118
2018-02-09 12:01:12 +04:00
Anton Kaliaev
d1fa44e816 improve "curRate too low" message
Refs #1177

Note on labels:
KB - 1024
kB - 1000
https://ux.stackexchange.com/questions/13815/files-size-units-kib-vs-kb-vs-kb
2018-02-09 12:01:12 +04:00
Ethan Buchman
199ea40980 Merge pull request #1196 from tendermint/1149-TestReactorValidatorSetChanges-fails-non-deterministically
WIP: TestReactorValidatorSetChanges fails non deterministically
2018-02-09 01:51:17 -05:00
Ethan Buchman
66fc476e1e Merge pull request #1173 from tendermint/memory-leak-in-reconnect-to-peer-2
fix memory leak in mempool reactor
2018-02-09 01:50:34 -05:00
Ethan Buchman
6b347200d9 Merge pull request #1197 from tendermint/1155-seed-mode-flag
add seed_mode flag (`--p2p.seed_mode`)
2018-02-08 17:48:39 -05:00
Ethan Buchman
8a908a7cf9 Merge pull request #1193 from tendermint/return-back-cmd-logging
With must be called on log.filter, otherwise "main" entries get filtered
2018-02-08 16:30:29 -05:00
Ethan Buchman
0bcc11c9bc Merge pull request #1191 from tendermint/event-buffer-slice-clearing-on-flush
types: TxEventBuffer.Flush now uses capacity preserving slice clearing idiom
2018-02-08 16:28:47 -05:00
Ethan Buchman
0247a21a93 Merge pull request #1086 from tendermint/966-deterministic-tooling-for-releases
Distribution improvements (freeze glide version + get rid of gox)
2018-02-08 15:26:11 -05:00
Anton Kaliaev
cf1f483526 add seed_mode flag (--p2p.seed_mode) 2018-02-08 17:20:55 +04:00
Anton Kaliaev
3f9aa8d8fa document that msgBytes in p2p/connection change 2018-02-08 13:25:26 +04:00
Anton Kaliaev
d6d1f8512d do not reset pingTimer
don't bother with this "only ping when we haven't heard from them". Let's
just always ping every peer from the sendRoutine every 10s no matter
what. If they don't pong within pongTimeout, disconnect :)
2018-02-08 13:08:11 +04:00
Anton Kaliaev
2b2c233977 write docs for Reactor interface 2018-02-08 13:07:40 +04:00
Ethan Buchman
7640e6a29f add some p2p TODOs 2018-02-08 12:46:04 +04:00
Anton Kaliaev
b0ca8a0872 With must be called on log.filter, otherwise "main" entries get filtered
Also, we should allow "main" module to log INFO messages like

```
I[02-07|07:57:25.074] Found private validator                      module=main path=/home/vagrant/.tendermint/config/priv_validator.json
I[02-07|07:57:25.076] Found genesis file                           module=main path=/home/vagrant/.tendermint/config/genesis.json
```

Refs https://github.com/cosmos/gaia/issues/118

**BEFORE**:
```
$ tendermint init

```

**AFTER**:
```
$ tendermint init
I[02-07|07:57:25.074] Found private validator                      module=main path=/home/vagrant/.tendermint/config/priv_validator.json
I[02-07|07:57:25.076] Found genesis file                           module=main path=/home/vagrant/.tendermint/config/genesis.json
```
2018-02-07 12:08:13 +04:00
Ethan Buchman
9e767771fc Merge pull request #1188 from ltfschoen/patch-1
Update getting-started.rst with Python 3 example
2018-02-06 12:53:47 -05:00
Anton Kaliaev
6c8d7a8c19 deterministic tooling for releases
get rid of gox

build target builds inside docker, dev-build - locally

Revert "build target builds inside docker, dev-build - locally"

This reverts commit 8ba89d5e8c5668e3839ff49952a9166d1158f6e8.

add build tags to make build/build_race/install

use tendermint's fork of glide instead of tar.gz

remove TMHOME unused var + set length for git hash

get rid of GOTOOLS_CHECK

fixes after review

zip

needed for distribution
2018-02-06 12:46:13 +04:00
Emmanuel Odeke
15ef57c6d0 types: TxEventBuffer.Flush now uses capacity preserving slice clearing idiom
Fixes https://github.com/tendermint/tendermint/issues/1189

On every TxEventBuffer.Flush() invocation, we were invoking:
  b.events = make([]EventDataTx, 0, b.capacity)
whose intention is to innocently clear the events slice but
maintain the underlying capacity.

However, this is memory and garbage collection intensive,
linear in the number of events added. If an attacker had access
to our code somehow, invoking .Flush() in tight loops would be a sure
way to cause huge GC pressure, and say they added about 1e9
events maliciously, every Flush() would take at least 3.2 seconds,
which is enough to now control our application.

The new capacity-preserving slice-clearing idiom
takes constant time regardless of the number of elements, with zero
allocations, so we are killing many birds with one stone, i.e.
  b.events = b.events[:0]

For benchmarking results, please see
https://gist.github.com/odeke-em/532c14ab67d71c9c0b95518a7a526058
for a reference on how things can get out of hand easily.
2018-02-05 23:34:15 -08:00
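A minimal sketch of the capacity-preserving clearing idiom the entry describes (placeholder types, not the real TxEventBuffer):

```go
package types

// EventDataTx is a placeholder for the real event type.
type EventDataTx struct{}

// TxEventBufferSketch holds buffered events up to a configured capacity.
type TxEventBufferSketch struct {
	events   []EventDataTx
	capacity int
}

// Flush clears the buffer while keeping the backing array, so it runs in
// constant time with zero allocations.
func (b *TxEventBufferSketch) Flush() {
	// before: b.events = make([]EventDataTx, 0, b.capacity) // reallocates on every call
	b.events = b.events[:0]
}
```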
Luke Schoen
f37c502fd8 Update getting-started.rst with Python 3 example 2018-02-06 16:00:51 +11:00
Anton Kaliaev
945b0e6eca cleanup glide.yaml 2018-02-05 22:53:44 +04:00
Anton Kaliaev
84a0a1987c comment out tests for now
https://github.com/tendermint/tendermint/pull/1173#issuecomment-363173047
2018-02-05 22:26:14 +04:00
Anton Kaliaev
11b68f1934 rewrite broadcastTxRoutine to use channels
https://play.golang.org/p/gN21yO9IRs3

```
func waitWithCancel(f func() *clist.CElement, ctx context.Context) *clist.CElement {
	el := make(chan *clist.CElement, 1)
	select {
	case el <- f():
```
will just run f() blockingly, so this doesn't change much in terms of behavior.
2018-02-05 16:36:26 +04:00
Anton Kaliaev
202d9a2c0c fix memory leak in mempool reactor
Leaking goroutine:
```
114 @ 0x42f2bc 0x42f3ae 0x440794 0x4403b9 0x468002 0x9fe32d 0x9ff78f 0xa025ed 0x45e571
```

Explanation:
It blocks on an empty clist forever, so unless there are txs coming in,
this goroutine will just sit there, holding onto the peer too.
If we're constantly reconnecting to some peer, old instances are not
garbage collected, leading to a memory leak.

Fixes https://github.com/cosmos/gaia/issues/108
Previous attempt https://github.com/tendermint/tendermint/pull/1156
2018-02-05 13:52:18 +04:00
Ethan Buchman
bf84e82577 Merge pull request #1184 from tendermint/sdk2-tmlibs-abci
Updates for tmlibs and abci (sdk2)
2018-02-03 10:45:54 -05:00
Ethan Buchman
abca9a2d61 woops - bring back glide.lock file 2018-02-03 03:59:16 -05:00
Ethan Buchman
d34286c421 minor fixes - tests pass 2018-02-03 03:54:49 -05:00
Anton Kaliaev
bb2bdbc0e1 add missing element (tag.Value) to keyForTag
encoded as %s. not sure this will work with raw bytes
2018-02-03 03:52:25 -05:00
Ethan Buchman
e7747f7d66 it compiles 2018-02-03 03:52:17 -05:00
Ethan Buchman
7a5060dc52 replace data.Bytes with cmn.HexBytes 2018-02-03 03:47:01 -05:00
Ethan Buchman
426379dc47 remove use of wire/nowriter 2018-02-03 03:39:14 -05:00
Ethan Buchman
cd0fd06b0d update for sdk2 libs. need to fix kv test
NOTE: we are only updating for tmlibs and abci
2018-02-03 03:35:02 -05:00
Ethan Buchman
4e3488c677 update types 2018-02-03 03:23:10 -05:00
Ethan Buchman
061ad355bb update glide 2018-02-03 03:22:56 -05:00
Ethan Buchman
2679b7554b lite: comment out iavl code - TODO #1183 2018-02-03 03:02:49 -05:00
Ethan Buchman
62c9cad484 Merge pull request #1180 from tendermint/1146-add-BFT-time-spec
Add BFT time spec
2018-02-01 16:38:05 -05:00
Zarko Milosevic
4cbdbbaac9 Add BFT time spec 2018-02-01 15:22:08 +01:00
Emmanuel Odeke
9ed296ae71 GetByHeight switches between linear & binary search on >=50 items
* GetByHeight will now switch to using binary search once
we have >=50 items.
* Feedback from @ebuchman to catch a missed spot where
we forgot about lazy sorting that the original code
assumed would always have sorted commits by height.
Added a lazy sorting routine here too.
A test as well to ensure that we always get the properly
sorted and last value.
2018-01-31 20:53:03 -07:00
Zach Ramsay
e8d0960cef nolint 2018-01-31 20:52:12 -07:00
Emmanuel Odeke
2023115ff8 lite: TestCacheGetsBestHeight with GetByHeight and GetByHeightBinarySearch
Addressing PR review requests from @melekes and @ebuchman to
add a test that checks that the heights returned from both are
the same, thus providing a perceptible equivalence of the
linear range search code vs the binary range search code.
2018-01-31 20:52:12 -07:00
Adrian Brink
7790ae9e6f Fix spelling mistake 2018-01-31 20:52:12 -07:00
Emmanuel Odeke
206da7a1b8 lite: < len(v) in for loop check, as per @melekes' recommendation
Also lazily load the commits so that loading only runs once, when
the benchmarks are activated, lest it slow down all the tests
2018-01-31 20:52:11 -07:00
Emmanuel Odeke
14eaba9ec3 lite: memStoreProvider GetHeightBinarySearch method + fix ValKeys.signHeaders
Updates #1021

* Implement a GetHeightBinarySearch method that looks for
the height using binary search, guaranteeing a
worst-case iteration count of O(log2(n)),
versus a worst-case iteration count of O(n) for the current linear search.

So if we had n=500 commits stored by height and sorted, to
trigger the worst case scenario for each, pass in
the most negative height you can find, e.g. -1:
Linear search: 500 iterations
Binary search: 9 iterations

with n=1000, qHeight = -1
Linear search: 1000 iterations
Binary search: 10 iterations

with n=1e6, qHeight = -1
Linear search: 1e6 iterations
Binary search: 20 iterations

Of course there are realistic expectations, e.g. a maximum number of
commits that may be saved, so linear search might be useful
for very small sets because it has less preparation overhead
and only ~2 types of comparisons, but nonetheless binary search
shines as soon as we start to hit, say, 50 commits to search from,
as you can see below:

```shell
$ go test -v -run=^$ -bench=MemStore
goos: darwin
goarch: amd64
pkg: github.com/tendermint/tendermint/lite
BenchmarkMemStoreProviderGetByHeightLinearSearch5-4     300000        6491 ns/op      1600 B/op       15 allocs/op
BenchmarkMemStoreProviderGetByHeightLinearSearch50-4      200000       12064 ns/op      1600 B/op       15 allocs/op
BenchmarkMemStoreProviderGetByHeightLinearSearch100-4      50000       32987 ns/op      1600 B/op       15 allocs/op
BenchmarkMemStoreProviderGetByHeightLinearSearch500-4       5000      395521 ns/op      1600 B/op       15 allocs/op
BenchmarkMemStoreProviderGetByHeightLinearSearch1000-4	       500     2940724 ns/op      1600 B/op       15 allocs/op
BenchmarkMemStoreProviderGetByHeightBinarySearch5-4     300000        6281 ns/op      1600 B/op       15 allocs/op
BenchmarkMemStoreProviderGetByHeightBinarySearch50-4      200000       10117 ns/op      1600 B/op       15 allocs/op
BenchmarkMemStoreProviderGetByHeightBinarySearch100-4     100000       18447 ns/op      1600 B/op       15 allocs/op
BenchmarkMemStoreProviderGetByHeightBinarySearch500-4      20000       89029 ns/op      1600 B/op       15 allocs/op
BenchmarkMemStoreProviderGetByHeightBinarySearch1000-4	      5000      265719 ns/op      1600 B/op       15 allocs/op
PASS
ok    github.com/tendermint/tendermint/lite 86.614s
$ go test -v -run=^$ -bench=MemStore
goos: darwin
goarch: amd64
pkg: github.com/tendermint/tendermint/lite
BenchmarkMemStoreProviderGetByHeightLinearSearch5-4     300000        6779 ns/op      1600 B/op       15 allocs/op
BenchmarkMemStoreProviderGetByHeightLinearSearch50-4      100000       12980 ns/op      1600 B/op       15 allocs/op
BenchmarkMemStoreProviderGetByHeightLinearSearch100-4      30000       43598 ns/op      1600 B/op       15 allocs/op
BenchmarkMemStoreProviderGetByHeightLinearSearch500-4       5000      377462 ns/op      1600 B/op       15 allocs/op
BenchmarkMemStoreProviderGetByHeightLinearSearch1000-4	       500     3278122 ns/op      1600 B/op       15 allocs/op
BenchmarkMemStoreProviderGetByHeightBinarySearch5-4     300000        7084 ns/op      1600 B/op       15 allocs/op
BenchmarkMemStoreProviderGetByHeightBinarySearch50-4      200000        9852 ns/op      1600 B/op       15 allocs/op
BenchmarkMemStoreProviderGetByHeightBinarySearch100-4     100000       19020 ns/op      1600 B/op       15 allocs/op
BenchmarkMemStoreProviderGetByHeightBinarySearch500-4      20000       99463 ns/op      1600 B/op       15 allocs/op
BenchmarkMemStoreProviderGetByHeightBinarySearch1000-4	      5000      259293 ns/op      1600 B/op       15 allocs/op
PASS
ok    github.com/tendermint/tendermint/lite 86.204s
```

which gives
```shell
$ benchstat old.txt new.txt
name                               old time/op    new time/op    delta
MemStoreProviderGetByHeight5-4       6.63µs ± 2%    6.68µs ± 6%   ~             (p=1.000 n=2+2)
MemStoreProviderGetByHeight50-4      12.5µs ± 4%    10.0µs ± 1%   ~             (p=0.333 n=2+2)
MemStoreProviderGetByHeight100-4     38.3µs ±14%    18.7µs ± 2%   ~             (p=0.333 n=2+2)
MemStoreProviderGetByHeight500-4      386µs ± 2%      94µs ± 6%   ~             (p=0.333 n=2+2)
MemStoreProviderGetByHeight1000-4    3.11ms ± 5%    0.26ms ± 1%   ~             (p=0.333 n=2+2)
```

If need be we can make a hybrid algorithm that switches between the
linear and binary search depending on the number of items.
This is reminiscent of Python's TimSort algorithm.
2018-01-31 20:52:11 -07:00
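A compact sketch of a binary search over height-sorted commits, in the spirit of the GetHeightBinarySearch method described above; types and semantics are simplified for illustration:

```go
package lite

import "sort"

// Commit is a stand-in for the stored full commit; only the height matters here.
type Commit struct{ Height int64 }

// getByHeightBinary returns the highest stored commit with Height <= h,
// assuming commits are sorted ascending by height, in O(log n) comparisons.
func getByHeightBinary(commits []Commit, h int64) (Commit, bool) {
	// index of the first commit with Height > h
	i := sort.Search(len(commits), func(i int) bool { return commits[i].Height > h })
	if i == 0 {
		return Commit{}, false // every stored commit is above h
	}
	return commits[i-1], true
}
```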
Ethan Buchman
2919bc3f7f Merge pull request #1178 from tendermint/nice-err-msg
improve error message
2018-01-31 21:32:58 -05:00
Zach Ramsay
1c01671ec6 improve vague error msg, closes #1158 2018-01-31 17:44:19 +00:00
Ethan Buchman
fe632ea32a spec: minor fixes 2018-01-26 17:26:33 -05:00
Zach Ramsay
5b368252ac spec: more fixes 2018-01-26 17:26:33 -05:00
Zach Ramsay
8cca953590 spec: remove notes, see #1152 2018-01-26 17:26:33 -05:00
Zach Ramsay
4b4a2029c4 spec: typos & other fixes 2018-01-26 17:26:33 -05:00
Ethan Buchman
6aa85357b6 Merge pull request #1160 from shapeshed/patch-1
Fix documentation typos
2018-01-26 17:17:43 -05:00
Ethan Buchman
eae62ec09b Merge branch 'develop' into patch-1 2018-01-26 17:17:28 -05:00
Ethan Buchman
18d96266bc Merge pull request #1140 from tendermint/feature/vagrant
Fix Vagrantfile
2018-01-26 17:12:06 -05:00
George Ornbo
4529fd6787 Fix documentation typos 2018-01-26 14:44:48 +00:00
Ethan Buchman
4a99a2a07d update contributing.md 2018-01-26 01:18:33 -05:00
Adrian Brink
4b63b3aa0b Switch to correct directory in Vagrant 2018-01-26 01:16:07 -05:00
Adrian Brink
fc860c3a07 Final Vagrantfile 2018-01-26 01:16:07 -05:00
Adrian Brink
2f147ec000 Remove upgrade step 2018-01-26 01:16:07 -05:00
Adrian Brink
0a7a190cd1 Fix vagrantfile
If you get an error, please run `vagrant box update`.
2018-01-26 01:16:07 -05:00
Ethan Buchman
3366dfe32a Merge pull request #1151 from tendermint/fix/p2p-stop-conn
p2p/conn: fix blocking on pong during quit and break out of loops
2018-01-25 02:45:06 -05:00
Ethan Buchman
baff4bd8cc p2p/conn: better handling for some stop conditions 2018-01-25 02:11:16 -05:00
Ethan Buchman
fb109db33d update changelog 2018-01-25 02:10:01 -05:00
Ethan Buchman
2f5971532e Merge pull request #1154 from tendermint/fix/consensus-tests
consensus: fix SetLogger in tests
2018-01-25 02:07:20 -05:00
Ethan Buchman
ab13806276 consensus: print go routines in failed test 2018-01-25 01:22:53 -05:00
Ethan Buchman
3ae26bd6e6 consensus: fix SetLogger in tests 2018-01-24 23:34:57 -05:00
Ethan Buchman
27ef3489a0 Merge pull request #1049 from tendermint/p2p-channels
p2p: add Channels to NodeInfo and don't send for unknown channels
2018-01-24 15:29:38 -05:00
Ethan Buchman
b6eb275b22 p2p: fix break in double loop 2018-01-24 14:27:37 -05:00
Ethan Buchman
57cc8ab977 Merge pull request #1143 from tendermint/1091-race-condition
call FlushSync before calling CommitSync
2018-01-24 14:22:43 -05:00
Ethan Buchman
99034904f8 p2p: fix tests for required channels 2018-01-23 23:45:51 -05:00
Ethan Buchman
a0ffcbcee4 Merge pull request #1137 from tendermint/docs-consolidate
WIP: docs consolidation
2018-01-23 23:45:20 -05:00
Ethan Buchman
260affd037 docs consolidation 2018-01-23 23:46:28 -05:00
Ethan Buchman
d7b1b8d3d5 Merge pull request #1129 from tendermint/addrbook
p2p: bust up into sub dirs
2018-01-23 23:10:50 -05:00
Ethan Buchman
50129ad8ac p2p: add Channels to NodeInfo and don't send for unknown channels 2018-01-23 22:43:56 -05:00
Ethan Buchman
5c9cb5e6a2 Merge pull request #1133 from tendermint/fix/stop-peer-for-error
StopPeerForError in blockchain and consensus
2018-01-23 22:26:52 -05:00
Ethan Buchman
4051391039 blockchain: test wip for hard to test functionality [ci skip] 2018-01-23 22:27:33 -05:00
Ethan Buchman
8f3bd3f209 p2p: addrBook.Save() on DialPeersAsync 2018-01-23 22:25:39 -05:00
Ethan Buchman
85816877c6 config: fix addrbook path to go in config 2018-01-23 22:21:17 -05:00
Ethan Buchman
87087b8acd consensus: minor cosmetic 2018-01-23 21:41:13 -05:00
Ethan Buchman
775bb85efb p2p/pex: wait to connect to all peers in reactor test 2018-01-23 21:30:53 -05:00
Ethan Buchman
21ce5856b3 p2p: notes about ListenAddr 2018-01-23 21:26:19 -05:00
Ethan Buchman
f5226e0008 Merge pull request #1144 from tendermint/create-logs-tarball
mercy for developers with slow Internet
2018-01-23 12:47:45 -05:00
Anton Kaliaev
a745fe2eed mercy for developers with slow Internet 2018-01-23 20:37:38 +04:00
Anton Kaliaev
5f3048bd09 call FlushSync before calling CommitSync
if we call it after, we might receive a "fresh" transaction from
  `broadcast_tx_sync` before old transactions (which were not
  committed).

Refs #1091

```
Commit is called with a lock on the mempool, meaning no calls to CheckTx
can start. However, since CheckTx is called async in the mempool
connection, some CheckTx might have already "sailed", when the lock is
released in the mempool and Commit proceeds.

Then, that spurious CheckTx has not yet "begun" in the ABCI app (stuck
in transport?). Instead, ABCI app manages to start to process the
Commit. Next, the spurious, "sailed" CheckTx happens in the wrong place.
```
2018-01-23 16:56:14 +04:00
Ethan Buchman
6a5818e107 Merge pull request #1138 from tendermint/small-fix
Small fix in example
2018-01-22 17:27:35 -05:00
Zarko Milosevic
dfdfd6c98e Small fix in example 2018-01-22 13:10:54 +01:00
Ethan Buchman
3090b05eb4 p2p: use conn.Close when peer is nil 2018-01-21 16:26:59 -05:00
Ethan Buchman
ee674f919f StopPeerForError in blockchain and consensus 2018-01-21 13:32:04 -05:00
Ethan Buchman
813bb6af96 Merge pull request #1092 from tendermint/add-consensus-reactor-doc
Add consensus reactor doc
2018-01-21 12:40:28 -05:00
Ethan Buchman
aecbff725f Merge pull request #1082 from tendermint/document-proposer-selection
Document proposer selection procedure
2018-01-21 12:39:43 -05:00
Ethan Buchman
6679fef2be Merge pull request #1056 from tendermint/feature/mempool-spec
WIP: Mempool specification
2018-01-21 12:39:10 -05:00
Ethan Buchman
c070ed056a Merge pull request #1051 from tendermint/feature/blockchain_reactor_docs
docs: Blockchain Reactor Documentation
2018-01-21 12:38:32 -05:00
Ethan Buchman
2c6ed302b7 minor changes [ci skip] 2018-01-21 12:36:46 -05:00
Adrian Brink
0eb85161aa More specification 2018-01-21 12:35:09 -05:00
Adrian Brink
940145b368 Bullet points for reactor and poolRoutine 2018-01-21 12:32:45 -05:00
Adrian Brink
a30315276b Formatting and documentation 2018-01-21 12:32:23 -05:00
Adrian Brink
6366eb9d99 Cleanup build and structure 2018-01-21 12:31:14 -05:00
Ethan Buchman
44e967184a p2p: tmconn->conn and types->p2p 2018-01-21 00:34:41 -05:00
Ethan Buchman
2ec425ae4b Merge pull request #1128 from tendermint/862-seed-crawler-mode
seed crawler mode
2018-01-20 23:35:26 -05:00
Ethan Buchman
0d7d16005a fixes 2018-01-20 21:44:30 -05:00
Ethan Buchman
5b5cbaa66a p2p: use sub dirs 2018-01-20 21:35:37 -05:00
Ethan Buchman
03550c7076 wip addrbook 2018-01-20 21:33:43 -05:00
Ethan Buchman
930fde056a p2p: add back lost func 2018-01-20 21:28:00 -05:00
Ethan Buchman
8d758560d8 p2p/trustmetric: non-deterministic test 2018-01-20 21:24:22 -05:00
Ethan Buchman
7b87cdaed8 p2p: seed disconnects after sending addrs 2018-01-20 21:24:22 -05:00
Ethan Buchman
c2f97e6454 p2p: seed mode fixes from rebase and review 2018-01-20 21:24:22 -05:00
Ethan Buchman
88eb3e7af0 some minor renames 2018-01-20 21:24:20 -05:00
caffix
949211a137 added a test for PEX reactor seed mode 2018-01-20 21:23:48 -05:00
Ethan Buchman
39d8da3536 docs: update getting started [ci skip] 2018-01-20 21:21:50 -05:00
Ethan Buchman
ae27e85bf7 add warnings about new spec 2018-01-20 21:20:15 -05:00
Ethan Buchman
f2d19162d2 fixes from caffix review 2018-01-20 21:20:09 -05:00
Zarko Milosevic
d36e118bf6 Add Consensus reactor spec 2018-01-19 19:57:08 +01:00
Ethan Buchman
02c1aef48b Merge pull request #1121 from tendermint/consensus-tests
Consensus tests
2018-01-19 12:50:32 -05:00
Ethan Buchman
6f3d9b4be3 fix race 2018-01-19 01:36:52 -05:00
Ethan Buchman
f06cc6630b mempool: cfg.CacheSize and expose InitWAL 2018-01-19 01:03:03 -05:00
Ethan Buchman
8171628ee5 make tests run faster 2018-01-19 00:59:09 -05:00
Ethan Buchman
1cb76625d3 consensus: rename test funcs 2018-01-19 00:59:09 -05:00
Ethan Buchman
ebeadfc57e dont run metalinter 2018-01-19 00:58:54 -05:00
Ethan Buchman
cca597a9c0 fix and test config file 2018-01-19 00:08:19 -05:00
Ethan Buchman
940db715f4 Merge pull request #1104 from tendermint/p2p-consolidate
WIP: P2P consolidate
2018-01-18 19:00:35 -05:00
Ethan Buchman
ec2b038493 Merge pull request #1103 from tendermint/1100-document-event-subscriptions
document event subscriptions
2018-01-18 19:00:01 -05:00
Ethan Buchman
ba0cb4f10e Merge pull request #1115 from tendermint/docs-tx-format
document tx formats
2018-01-18 18:57:23 -05:00
Ethan Buchman
e764a180d8 docs: fix tx formats [ci skip] 2018-01-18 18:58:33 -05:00
Ethan Buchman
bc19e7843c Merge branch 'develop' into p2p-consolidate 2018-01-18 18:30:37 -05:00
Ethan Buchman
f57f97c4bd fix bash tests 2018-01-18 17:40:12 -05:00
Ethan Buchman
b32474bb1a Merge pull request #1116 from tendermint/feature/prs
Create PULL_REQUEST_TEMPLATE.md
2018-01-18 13:52:18 -05:00
Adrian Brink
0a20e8f268 Create PULL_REQUEST_TEMPLATE.md 2018-01-18 13:53:18 -05:00
Zach Ramsay
9d4d939b89 docs: tx formats: closes #1083, #536 2018-01-18 15:37:56 +00:00
Ethan Buchman
c1e6e73bb1 Merge pull request #1108 from tendermint/zramsay-patch-1
Update p2p README
2018-01-15 10:31:31 -05:00
Ethan Buchman
620c957a44 fix test 2018-01-14 13:24:43 -05:00
Ethan Buchman
64ce7eef16 Merge pull request #1107 from tendermint/p2p-pex-abuse
better abuse handling in pex
2018-01-14 13:03:02 -05:00
Ethan Buchman
fc7915ab4c fixes from review 2018-01-14 13:03:57 -05:00
Zach Ramsay
26aaa283a9 p2p: remove deprecated Dockerfile 2018-01-14 13:51:28 +00:00
Zach
a29c67563c Update p2p README, closes #1102 2018-01-14 13:50:34 +00:00
Ethan Buchman
17f7a9b510 improve seed dialing logic 2018-01-14 03:56:15 -05:00
Ethan Buchman
3df5fd21cd better abuse handling in pex 2018-01-14 03:22:01 -05:00
Ethan Buchman
99076f1942 Merge pull request #1105 from tendermint/p2p-switch
cleanup switch
2018-01-14 01:20:13 -05:00
Ethan Buchman
3368eeb03e fix tests 2018-01-14 01:19:07 -05:00
Ethan Buchman
68237911ba NetAddress.Same checks ID or DialString 2018-01-14 01:15:37 -05:00
Ethan Buchman
f9e4f6eb6b reorder peer.go methods 2018-01-14 01:15:37 -05:00
Ethan Buchman
8b74a8d6ac NodeInfo not a pointer 2018-01-14 01:15:33 -05:00
Ethan Buchman
08f84cd712 a little more moving around 2018-01-13 23:56:57 -05:00
Ethan Buchman
452d10f368 cleanup switch 2018-01-13 17:37:52 -05:00
Ethan Buchman
8b3fb743cf Merge pull request #1048 from tendermint/p2p-id
P2P ID
2018-01-13 17:35:21 -05:00
Ethan Buchman
7667e11973 remove RemoteAddr from NodeInfo 2018-01-13 17:36:03 -05:00
Ethan Buchman
53a5498fc5 more fixes from review 2018-01-13 17:34:12 -05:00
Ethan Buchman
e4d52401cf some fixes from review 2018-01-13 16:06:51 -05:00
Ethan Buchman
9670519a21 remove PoW from ID 2018-01-13 15:50:59 -05:00
Ethan Buchman
b1485b181a Merge branch 'p2p-consolidate' into p2p-id 2018-01-13 15:20:23 -05:00
Ethan Buchman
47a6928890 Merge pull request #1030 from tendermint/864-distinguish-between-seeds-and-manual-peers
Distinguish between seeds and manual peers
2018-01-13 15:12:47 -05:00
Ethan Buchman
c1e167e330 note in trust metric test 2018-01-13 15:11:13 -05:00
Ethan Buchman
e2b3b5b58c dial_persistent_peers -> dial_peers with persistent option 2018-01-13 14:50:58 -05:00
Ethan Buchman
e6b70baae0 Merge branch 'develop' into 864-distinguish-between-seeds-and-manual-peers 2018-01-13 14:34:32 -05:00
Anton Kaliaev
b40aa91b41 document event subscriptions
Refs #1100
2018-01-12 12:53:34 -06:00
Ethan Buchman
a13b17ec6c Merge pull request #792 from tendermint/config
Config Improvements
2018-01-10 14:41:30 -05:00
Ethan Buchman
b32a507a1b Merge pull request #1088 from tendermint/1087-update-docker
Update docker image
2018-01-10 14:23:06 -05:00
Ethan Buchman
d6e01e8cee Merge pull request #1089 from tendermint/1075-document-underflow-overflow
document the maximum supported voting power due to overflow [ci skip]
2018-01-10 14:20:31 -05:00
Anton Kaliaev
c79ba3c349 document the maximum supported voting power due to overflow [ci skip]
Refs #1075
2018-01-10 14:21:26 -05:00
Anton Kaliaev
12ca972761 Merge branch 'develop' into 1087-update-docker 2018-01-10 11:08:14 -06:00
Zach Ramsay
657ad214cb p2p tests: put priv_val in right place 2018-01-10 16:30:00 +00:00
Zach
39acf1c5e8 Merge branch 'develop' into config 2018-01-10 14:21:24 +00:00
Anton Kaliaev
075ae1e301 minimal test for dialing seeds in pex reactor 2018-01-09 18:29:29 -06:00
Anton Kaliaev
705d51aa42 move dialSeedsIfAddrBookIsEmptyOrPEXFailedToConnect into PEX reactor 2018-01-09 17:54:29 -06:00
Anton Kaliaev
ef0493ddf3 rewrite Peers section of Using Tendermint guide [ci skip] 2018-01-09 17:54:28 -06:00
Anton Kaliaev
1b455883d2 readd /dial_seeds 2018-01-09 17:54:28 -06:00
Anton Kaliaev
e4897b7bdd rename manual peers to persistent peers 2018-01-09 16:18:05 -06:00
Anton Kaliaev
37f86f9518 update changelog [ci skip] 2018-01-09 16:04:07 -06:00
Anton Kaliaev
28fc15028a distinguish between seeds and manual peers in the config/flags
- we only use seeds if we can’t connect to peers in the addrbook.
- we always connect to nodes given in config/flags

Refs #864
2018-01-09 16:03:24 -06:00
Anton Kaliaev
179d6062e4 try to connect through addrbook before requesting peers from seeds
we only use seeds if we can’t connect to peers in the addrbook.

Refs #864
2018-01-09 16:03:24 -06:00
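A rough sketch of the dialing order these two commits describe: persistent peers are always dialed, the address book is tried first, and seeds are only consulted as a fallback. `P2PConfig`, `dialPeers`, and the dial callback are illustrative stand-ins, not the actual reactor API.
```go
package main

import "fmt"

// Illustrative config: persistent peers are always dialed, seeds are a fallback.
type P2PConfig struct {
	PersistentPeers []string // always dial and re-dial these
	Seeds           []string // only ask these for addresses if the address book yields nothing
}

// dialPeers sketches the ordering: persistent peers first, then the address
// book, and seeds only when no known addresses could be reached.
func dialPeers(cfg P2PConfig, addrBook []string, dial func(addr string) error) {
	for _, a := range cfg.PersistentPeers {
		if err := dial(a); err != nil {
			fmt.Println("will keep retrying persistent peer:", a, err)
		}
	}

	connected := 0
	for _, a := range addrBook {
		if err := dial(a); err == nil {
			connected++
		}
	}

	// Fall back to seeds only if the address book gave us nothing.
	if len(addrBook) == 0 || connected == 0 {
		for _, s := range cfg.Seeds {
			_ = dial(s) // a real node would request addresses via PEX here
		}
	}
}

func main() {
	cfg := P2PConfig{
		PersistentPeers: []string{"1.2.3.4:46656"},
		Seeds:           []string{"5.6.7.8:46656"},
	}
	dialPeers(cfg, nil, func(addr string) error {
		fmt.Println("dialing", addr)
		return nil
	})
}
```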
Zach
c521f385a6 add quick start guide (#1069) 2018-01-09 20:35:47 +00:00
Anton Kaliaev
b8214fce66 Merge branch 'develop' into 1087-update-docker 2018-01-09 12:21:23 -06:00
Ethan Buchman
03a14d8342 docs/p2p: updates from review (#1076) 2018-01-09 11:44:49 -06:00
Anton Kaliaev
48638eaa20 update docker readme 2018-01-09 11:05:15 -06:00
Anton Kaliaev
170777300e update docker
Closes #1087
2018-01-09 11:04:37 -06:00
Adrian Brink
32311acd01 Vulnerability in light client proxy (#1081)
* Vulnerability in light client proxy

When calling GetCertifiedCommit the light client proxy would call
Certify and even on error return the Commit as if it had been correctly
certified.

Now it returns the error correctly and returns an empty Commit on error.

* Improve names for clarity

The lite package now contains StaticCertifier, DynamicCertifier and
InquiringCertifier. This also changes the method receivers from one-letter
to two-letter names, which will make future refactoring easier and follows
the coding standards.

* Fix test failures

* Rename files

* remove dead code
2018-01-09 10:36:11 -06:00
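A minimal sketch of the fix described above, using simplified stand-in types rather than the real lite-client API: on a certification failure the error is returned together with an empty commit, instead of handing back the unverified commit.
```go
package main

import (
	"errors"
	"fmt"
)

// Commit and Certifier are simplified stand-ins for the lite-client types.
type Commit struct{ Height int64 }

type Certifier interface {
	Certify(Commit) error
}

// getCertifiedCommit illustrates the fix: on a certification failure we return
// the error and an empty Commit instead of passing the commit through as if
// it had been verified.
func getCertifiedCommit(c Commit, cert Certifier) (Commit, error) {
	if err := cert.Certify(c); err != nil {
		return Commit{}, err // before the fix, `c` was returned here anyway
	}
	return c, nil
}

type alwaysFail struct{}

func (alwaysFail) Certify(Commit) error { return errors.New("validator set mismatch") }

func main() {
	_, err := getCertifiedCommit(Commit{Height: 10}, alwaysFail{})
	fmt.Println("certification error surfaced:", err)
}
```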
Ethan Buchman
b9cbaf8f10 priv-val: fix timestamp for signing things that only differ by timestamp 2018-01-08 16:36:16 -05:00
Ethan Buchman
8d86b6c2d2 Merge pull request #1084 from tendermint/fix-make-dist-script
fix broken `make dist` target
2018-01-08 16:20:36 -05:00
Anton Kaliaev
555f560ecd fix broken make dist target 2018-01-08 13:13:47 -06:00
Zarko Milosevic
dba4815616 Define requirements of the proposer selection procedure 2018-01-08 14:10:01 +01:00
Ethan Buchman
cf42611187 Merge pull request #997 from tendermint/919-careful-with-validator-voting
check for overflow and underflow while choosing proposer
2018-01-07 16:48:39 -05:00
Ethan Buchman
90c691df2b Merge pull request #1063 from tendermint/feature/Issue1020
NewInquiring returns error instead of swallowing it
2018-01-07 16:30:33 -05:00
Adrian Brink
13fa23c568 Add error checking 2018-01-06 22:24:58 +01:00
Jae Kwon
124c58d48f Merge pull request #1071 from tendermint/revert-1053-feature/buildprocess
Revert "Changes to achieve a standardized build process and deterministic builds"
2018-01-05 22:42:19 -08:00
Jae Kwon
a034600024 Revert "Changes to achieve a standardized build process and deterministic builds" 2018-01-05 22:35:57 -08:00
Zach
e0e600df05 Merge branch 'develop' into config 2018-01-06 05:30:38 +00:00
Ethan Buchman
92f5ae5a84 fix vagrant [ci skip] 2018-01-05 22:26:10 -05:00
Ethan Buchman
d57ddec302 Merge pull request #1053 from tendermint/feature/buildprocess
Changes to achieve a standardized build process and deterministic builds
2018-01-05 21:13:42 -05:00
Greg Szabo
bb3dc10f24 Makefile improvements for deterministic builds based on Bucky's feedback 2018-01-05 14:22:13 -05:00
Greg Szabo
2cc50938a4 Merge branch 'develop' into feature/buildprocess 2018-01-05 13:01:40 -05:00
Adrian Brink
ba475d3128 Fix formatting 2018-01-05 13:25:58 +01:00
Adrian Brink
ed81fb54ec NewInquiring returns error instead of swallowing it 2018-01-05 13:24:16 +01:00
Ethan Buchman
8481da2405 Merge pull request #1046 from tendermint/abci-spec-docs
docs: add abci spec
2018-01-04 13:57:40 -05:00
Ethan Buchman
ecb7303e35 Merge branch 'develop' into abci-spec-docs 2018-01-04 13:57:08 -05:00
Ethan Frey
9cb45eb7df Add skeleton for functionality and concurrency 2018-01-04 19:08:03 +01:00
Ethan Buchman
6855e62f0a Merge pull request #1052 from tendermint/feature/p2p_docs
docs: P2P Documentation
2018-01-04 12:32:28 -05:00
Ethan Frey
17b61db40a Document p2p and rpc messages 2018-01-04 18:12:48 +01:00
Ethan Frey
7b52499463 Start writing mempool specification
Include overview and configuration options.
2018-01-04 17:11:35 +01:00
Greg Szabo
f67f99c227 Extended install document with docker option. Added extra checks to developer's build target. 2018-01-03 17:24:11 -05:00
Greg Szabo
0430ebf95c Makefile changes for cross-building and standardized builds using gox 2018-01-03 14:58:23 -05:00
Adrian Brink
f602de437e Move P2P docs into docs folder 2018-01-03 10:49:47 +01:00
Zach Ramsay
a573b20888 docs: add counter/dummy code snippets
closes https://github.com/tendermint/abci/issues/134
2018-01-03 01:23:38 +00:00
Ethan Buchman
488ae529ad p2p: authenticate peer ID 2018-01-01 23:23:11 -05:00
Ethan Buchman
6e823c6e87 p2p: support addr format ID@IP:PORT 2018-01-01 23:08:20 -05:00
Ethan Buchman
7d35500e6b p2p: add ID to NetAddress and use for AddrBook 2018-01-01 22:39:08 -05:00
Ethan Buchman
a17105fd46 p2p: peer.Key -> peer.ID 2018-01-01 22:39:05 -05:00
Ethan Buchman
b289d2baf4 persistent node key and ID 2018-01-01 21:21:42 -05:00
Ethan Buchman
f2e0abf1dc p2p: reorder some checks in addPeer; add comments to NodeInfo 2018-01-01 21:21:39 -05:00
Ethan Buchman
528154f1a2 p2p: PrivKey need not be Ed25519 2018-01-01 19:44:01 -05:00
Ethan Buchman
bc71840f06 more p2p docs 2018-01-01 16:30:36 -05:00
Zach Ramsay
cd15b677ec docs: add abci spec 2018-01-01 15:35:28 +00:00
Ethan Buchman
1acb12edf5 p2p docs 2017-12-31 17:11:09 -05:00
Ethan Buchman
008de93bbe Merge pull request #1039 from tendermint/add_consensus_spec
Spec of consensus/gossip protocol
2017-12-31 14:59:32 -05:00
Zach
4e834baa9a docs: update ecosystem.rst (#1037)
* docs: update ecosystem.rst

* typo [ci skip]
2017-12-31 13:54:50 +00:00
Zarko Milosevic
96e0e4ab5a Describe messages sent as part of consensus/gossip protocol 2017-12-30 21:18:12 +01:00
Ethan Buchman
381fe19335 changelog date [ci skip] 2017-12-29 17:51:19 -05:00
Ethan Buchman
ff99ca7cdf bump wal test timeout 2017-12-29 17:51:13 -05:00
Ethan Buchman
60f95cd9ea changelog and version 2017-12-29 11:28:44 -05:00
Ethan Buchman
28bbeac763 state: send byzantine validators in BeginBlock 2017-12-29 11:26:55 -05:00
Ethan Buchman
e97e0bacd1 update glide 2017-12-29 11:11:13 -05:00
Ethan Buchman
abfdfe67e8 test/p2p: add some timeouts 2017-12-29 11:02:47 -05:00
Ethan Buchman
992371b4cf Merge pull request #1035 from tendermint/readme-min-requirements
README: document the minimum Go version
2017-12-29 10:24:57 -05:00
Ethan Buchman
07eeddc5e1 Merge pull request #1015 from tendermint/state_funcs
state: move methods to funcs
2017-12-29 10:24:22 -05:00
Emmanuel Odeke
3b70c89e07 README: document the minimum Go version
Solidify in writing the minimum Go version that we support: Go 1.9.
2017-12-28 23:26:45 -07:00
Ethan Buchman
444db4c242 metalinter 2017-12-28 23:15:54 -05:00
Ethan Buchman
cb845ebff5 fix EvidencePool and VerifyEvidence 2017-12-28 23:15:54 -05:00
Ethan Buchman
6112578d07 ValidateBlock is a method on blockExec 2017-12-28 23:15:54 -05:00
Ethan Buchman
ae68fcb78a move fireEvents to ApplyBlock 2017-12-28 23:15:54 -05:00
Ethan Buchman
8d8d63c94c changelog 2017-12-28 23:15:54 -05:00
Ethan Buchman
1d6f00859d fixes from review 2017-12-28 23:15:54 -05:00
Ethan Buchman
397251b0f4 fix evidence 2017-12-28 23:15:54 -05:00
Ethan Buchman
537b0dfa1a use NopEventBus 2017-12-28 23:15:54 -05:00
Ethan Buchman
0acca7fe69 final updates for state 2017-12-28 23:15:54 -05:00
Ethan Buchman
bac60f2067 blockchain: update for new state 2017-12-28 23:15:54 -05:00
Ethan Buchman
f82b7e2a13 state: re-order funcs. fix tests 2017-12-28 23:15:54 -05:00
Ethan Buchman
9e6d088757 state: BlockExecutor 2017-12-28 23:15:54 -05:00
Ethan Buchman
c915719f85 *State->State; SetBlockAndValidators->NextState 2017-12-28 23:15:54 -05:00
Ethan Buchman
f55135578c state: move methods to funcs 2017-12-28 23:15:54 -05:00
Ethan Buchman
139eca0177 Merge pull request #1027 from tendermint/1026-error-signing-vote-wal-gen
change directory for each call, not only for each test
2017-12-28 16:08:52 -05:00
Ethan Buchman
a8e625e99d config: unexpose chainID 2017-12-28 20:49:02 +00:00
Zach Ramsay
a92a32b862 config: lil fixes 2017-12-28 20:49:02 +00:00
Zach Ramsay
9da5cd0180 rebase fix 2017-12-28 20:49:02 +00:00
Anton Kaliaev
a6f2e502e7 move genesis.json file into config dir 2017-12-28 20:49:02 +00:00
Anton Kaliaev
69d8c2e554 fixes after my own review 2017-12-28 20:49:02 +00:00
Zach Ramsay
70ba608850 config: write all default options to config file
config: test the default file

docs: spiff up config

config: minor fixes & comments

config: simplify test

config: use a separate config directory, #556

config: update docs & parameterize file paths

config: PR comments

config: use the default object

fix a rebase error
2017-12-28 20:49:02 +00:00
Anton Kaliaev
75182f7205 change directory for each call, not only for each test
Fixes #1026
2017-12-28 11:17:15 -06:00
Ethan Buchman
41caa4415c Merge pull request #1024 from tendermint/1023-tendermint-version
remove quotes from `tendermint version`
2017-12-28 10:14:39 -05:00
Anton Kaliaev
c611cc7268 remove quotes from tendermint version
before:
0.14.0-'88f5f21d'

after:
0.14.0-88f5f21d

Fixes #1023
2017-12-28 09:02:25 -06:00
Ethan Buchman
d14ec7d7d2 Merge pull request #1011 from tendermint/1006-panic-on-light-client-startup
protect memStoreProvider#byHash map by mutex
2017-12-27 15:30:59 -05:00
Anton Kaliaev
4ca19e33c2 add mutex to memStoreProvider
Fixes #1006

```
goroutine 230 [runnable]:
github.com/cosmos/gaia/vendor/golang.org/x/crypto/ripemd160.(*digest).Sum(0xc420ac2af0, 0x0, 0x0, 0x0, 0xc420c01360, 0xc420c01378, 0x20)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/golang.org/x/crypto/ripemd160/ripemd160.go:119 +0x2c9
github.com/cosmos/gaia/vendor/github.com/tendermint/tmlibs/merkle.KVPair.Hash(0x49447e8, 0x7, 0x47ff760, 0xc420048d50, 0xc420aba9a0, 0x14, 0x20)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tmlibs/merkle/simple_tree.go:122 +0x200
github.com/cosmos/gaia/vendor/github.com/tendermint/tmlibs/merkle.(*KVPair).Hash(0xc4200f7ae0, 0xc420aba9a0, 0x14, 0x20)
	<autogenerated>:1 +0x57
github.com/cosmos/gaia/vendor/github.com/tendermint/tmlibs/merkle.SimpleHashFromHashables(0xc420111900, 0x9, 0x10, 0x10, 0xc420ad06c0, 0xc420087500)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tmlibs/merkle/simple_tree.go:87 +0xaa
github.com/cosmos/gaia/vendor/github.com/tendermint/tmlibs/merkle.SimpleHashFromMap(0xc420b987e0, 0xc420b987e0, 0x4941892, 0x3)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tmlibs/merkle/simple_tree.go:96 +0x4d
github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/types.(*Header).Hash(0xc420a541a0, 0x4034f, 0xc420a28530, 0xa)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/types/block.go:177 +0x54a
github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/lite.Commit.ValidateBasic(0xc420a541a0, 0xc420a1e300, 0xc420a28530, 0xa, 0xb, 0x27)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/lite/commit.go:83 +0x1c9
github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/lite/files.(*provider).StoreCommit(0xc4202f2780, 0xc420a541a0, 0xc420a1e300, 0xc420a24870, 0x0, 0x0)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/lite/files/provider.go:71 +0x5e
github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/lite.cacheProvider.StoreCommit(0xc4202f27a0, 0x2, 0x2, 0xc420a541a0, 0xc420a1e300, 0xc420a24870, 0xc420a541a0, 0xc420a1e300)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/lite/provider.go:39 +0x8f
github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/lite.(*cacheProvider).StoreCommit(0xc4202f27c0, 0xc420a541a0, 0xc420a1e300, 0xc420a24870, 0x2, 0xc420a541a0)
	<autogenerated>:1 +0x70
github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/lite.NewInquiring(0xc420990610, 0xa, 0xc420a541a0, 0xc420a1e300, 0xc420a24870, 0x4f3eb80, 0xc4202f27c0, 0x4f3e5c0, 0xc4202f2840, 0xa)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/lite/inquirer.go:29 +0x5c
github.com/cosmos/gaia/vendor/github.com/cosmos/cosmos-sdk/client.GetCertifier(0xc420990610, 0xa, 0x4f3eb80, 0xc4202f27c0, 0x4f3e5c0, 0xc4202f2840, 0xc4200f68a0, 0xc420b21888, 0x470eece)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/cosmos/cosmos-sdk/client/common.go:47 +0x141
github.com/cosmos/gaia/vendor/github.com/cosmos/cosmos-sdk/client/commands.GetCertifier(0x4f46aa0, 0xc4200f68a0, 0xc420b218e0)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/cosmos/cosmos-sdk/client/commands/common.go:82 +0x83
github.com/cosmos/gaia/vendor/github.com/cosmos/cosmos-sdk/client/commands/query.GetWithProof(0xc420a260c0, 0x22, 0x30, 0x0, 0x4947dd6, 0xc42016a000, 0xc420b219a0, 0x4389d85, 0x0, 0x0, ...)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/cosmos/cosmos-sdk/client/commands/query/get.go:73 +0x52
github.com/cosmos/gaia/vendor/github.com/cosmos/cosmos-sdk/client/commands/query.Get(0xc420a260c0, 0x22, 0x30, 0x0, 0x1, 0x0, 0xc420b21a08, 0x43c9c91, 0xc4200a20f0, 0x4947dd6, ...)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/cosmos/cosmos-sdk/client/commands/query/get.go:64 +0x178
github.com/cosmos/gaia/vendor/github.com/cosmos/cosmos-sdk/client/commands/query.GetParsed(0xc420a260c0, 0x22, 0x30, 0x47d1440, 0xc420b984e0, 0x0, 0x1, 0x30, 0x1d, 0x40)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/cosmos/cosmos-sdk/client/commands/query/get.go:31 +0x5f
github.com/cosmos/gaia/vendor/github.com/cosmos/cosmos-sdk/modules/coin/rest.doQueryAccount(0x4f3d300, 0xc4201ee0e0, 0xc420111500)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/cosmos/cosmos-sdk/modules/coin/rest/handlers.go:66 +0x3a5
net/http.HandlerFunc.ServeHTTP(0x4a9d808, 0x4f3d300, 0xc4201ee0e0, 0xc420111500)
	/usr/local/go/src/net/http/server.go:1918 +0x44
github.com/cosmos/gaia/vendor/github.com/gorilla/mux.(*Router).ServeHTTP(0xc4201aab40, 0x4f3d300, 0xc4201ee0e0, 0xc420111500)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/gorilla/mux/mux.go:150 +0xed
net/http.serverHandler.ServeHTTP(0xc4209992b0, 0x4f3d300, 0xc4201ee0e0, 0xc420111300)
	/usr/local/go/src/net/http/server.go:2619 +0xb4
net/http.(*conn).serve(0xc4200b5540, 0x4f3df80, 0xc420086900)
	/usr/local/go/src/net/http/server.go:1801 +0x71d
created by net/http.(*Server).Serve
	/usr/local/go/src/net/http/server.go:2720 +0x288
```
2017-12-27 15:29:17 -05:00
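A minimal sketch of the pattern behind this fix, with a simplified stand-in for `memStoreProvider`: every access to the `byHash` map goes through a mutex, so concurrent handlers cannot trigger the race shown in the trace above.
```go
package main

import (
	"fmt"
	"sync"
)

// memStore is a minimal stand-in for an in-memory provider whose map is
// guarded by a mutex so concurrent RPC handlers cannot race on it.
type memStore struct {
	mtx    sync.Mutex
	byHash map[string][]byte
}

func newMemStore() *memStore {
	return &memStore{byHash: make(map[string][]byte)}
}

func (m *memStore) Store(hash string, commit []byte) {
	m.mtx.Lock()
	defer m.mtx.Unlock()
	m.byHash[hash] = commit
}

func (m *memStore) ByHash(hash string) ([]byte, bool) {
	m.mtx.Lock()
	defer m.mtx.Unlock()
	c, ok := m.byHash[hash]
	return c, ok
}

func main() {
	s := newMemStore()
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			s.Store(fmt.Sprintf("hash-%d", i), []byte("commit"))
		}(i)
	}
	wg.Wait()
	_, ok := s.ByHash("hash-3")
	fmt.Println("found:", ok)
}
```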
Ethan Buchman
1a0db878bf Merge pull request #1009 from tendermint/spec
Spec
2017-12-27 15:23:13 -05:00
Ethan Buchman
53eb9aca2b Merge pull request #592 from tendermint/evidence
track evidence, include in block
2017-12-27 15:22:24 -05:00
Ethan Buchman
caccabd4e5 spec: fixes from review 2017-12-27 15:14:49 -05:00
Ethan Buchman
d0e0ac5fac types: better error messages for votes 2017-12-27 14:46:24 -05:00
Ethan Buchman
7d81a3f4a5 address some comments from review 2017-12-27 01:27:03 -05:00
Ethan Buchman
f95b7529eb Merge pull request #999 from tendermint/feature/result-hash-header
Add results hash to header
2017-12-26 20:49:20 -05:00
Ethan Buchman
6a4fd46479 fixes from rebase 2017-12-26 20:42:34 -05:00
Ethan Buchman
b01b1e4758 remove unused var 2017-12-26 20:27:40 -05:00
Ethan Buchman
014b0b9944 evidence: reactor test 2017-12-26 20:27:40 -05:00
Ethan Buchman
5904f6df8b minor fixes from review 2017-12-26 20:27:40 -05:00
Ethan Buchman
0f293bfc2b remove some TODOs 2017-12-26 20:27:40 -05:00
Ethan Buchman
cfbedec719 evidence: reactor test 2017-12-26 20:27:40 -05:00
Ethan Buchman
666ae244b3 evidence: pool test 2017-12-26 20:27:40 -05:00
Ethan Buchman
c13e93d63e evidence: store tests and fixes 2017-12-26 20:27:40 -05:00
Ethan Buchman
c2585b5525 evidence_pool.go -> pool.go. remove old test files 2017-12-26 20:27:40 -05:00
Ethan Buchman
1d021c2790 update consensus/test_data/many_blocks.cswal 2017-12-26 20:27:40 -05:00
Ethan Buchman
cc418e5dab state.VerifyEvidence enforces EvidenceParams.MaxAge 2017-12-26 20:27:32 -05:00
Ethan Buchman
c7acdfadf2 evidence: more funcs in store.go 2017-12-26 20:27:32 -05:00
Ethan Buchman
869d873d5c state.ApplyBlock takes evpool and calls MarkEvidenceAsCommitted 2017-12-26 20:27:32 -05:00
Ethan Buchman
3271634e7a types: evidence cleanup 2017-12-26 20:26:21 -05:00
Ethan Buchman
4854c231e1 evidence store comments and cleanup 2017-12-26 20:26:21 -05:00
Ethan Buchman
7a18fa887d evidence linked with consensus/node. compiles 2017-12-26 20:26:21 -05:00
Ethan Buchman
6c4a0f9363 cleanup evidence pkg. state.VerifyEvidence 2017-12-26 20:26:21 -05:00
Ethan Buchman
f7731d38f6 some comments and cleanup 2017-12-26 20:25:14 -05:00
Ethan Buchman
df3f4de7c3 check evidence is from validator; some cleanup 2017-12-26 20:25:14 -05:00
Ethan Buchman
10c43c9edc introduce evidence store 2017-12-26 20:25:14 -05:00
Ethan Buchman
fe4b53a463 EvidencePool 2017-12-26 20:24:54 -05:00
Ethan Buchman
5b1f987ed1 mempool: remove Peer interface. use p2p.Peer 2017-12-26 20:24:12 -05:00
Ethan Buchman
48d778c4b3 types/params: introduce EvidenceParams 2017-12-26 20:24:12 -05:00
Ethan Buchman
7d086e9524 check if we already have evidence 2017-12-26 20:21:17 -05:00
Ethan Buchman
6e9433c7a8 post rebase fix 2017-12-26 20:21:17 -05:00
Ethan Buchman
39299e5cc1 consensus: note about duplicate evidence 2017-12-26 20:21:17 -05:00
Ethan Buchman
eeab0efa56 types: tx.go comments 2017-12-26 20:21:17 -05:00
Ethan Buchman
77e45756f2 types: Evidences for merkle hashing; Evidence.String() 2017-12-26 20:21:17 -05:00
Ethan Buchman
9cdcffbe4b types: comments; compiles; evidence test 2017-12-26 20:21:17 -05:00
Ethan Buchman
50850cf8a2 verify sigs on both votes; note about indices 2017-12-26 20:21:17 -05:00
Ethan Buchman
35587658cd verify evidence in block 2017-12-26 20:21:17 -05:00
Ethan Buchman
4661c98c17 add pubkey to conflicting vote evidence 2017-12-26 20:21:17 -05:00
Ethan Buchman
7928659f70 track evidence, include in block 2017-12-26 20:21:17 -05:00
Ethan Buchman
f80f6445a6 fix test 2017-12-26 20:15:09 -05:00
Ethan Buchman
336c2f4fe1 rpc: fix getHeight 2017-12-26 20:08:25 -05:00
Ethan Buchman
bfcb40bf6b validate block.ValidatorsHash 2017-12-26 20:00:45 -05:00
Ethan Buchman
051c2701ab remove LastConsensusParams 2017-12-26 19:56:39 -05:00
Ethan Buchman
028ee58580 call it LastResultsHash 2017-12-26 19:53:26 -05:00
Ethan Buchman
801e3dfacf rpc: getHeight helper function 2017-12-26 19:37:42 -05:00
Ethan Buchman
4171bd3bae fixes 2017-12-26 19:24:45 -05:00
Ethan Buchman
73fb1c3a17 consolidate saveResults/SaveABCIResponses 2017-12-26 19:24:45 -05:00
Ethan Frey
d65234ed51 Add /block_results?height=H as rpc endpoint
Expose it in rpc client
Move ABCIResults into tendermint/types from tendermint/state
2017-12-26 19:24:25 -05:00
Ethan Frey
58c5df729b Add ResultHash to header 2017-12-26 19:24:25 -05:00
Ethan Frey
632cc918b4 Save/Load Results for every height
Add some tests.
Behaves like saving the validator set, except results are always saved at each
height instead of storing a reference to the last height at which they changed.
2017-12-26 19:24:25 -05:00
Ethan Frey
f870a49f42 Add ABCIResults with Hash and Proof to State
State maintains LastResultsHash
Verify that we can produce unique hashes for each result,
and provide valid proofs from the root hash.
2017-12-26 19:24:25 -05:00
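A toy illustration (not the actual tmlibs simple Merkle tree) of what `LastResultsHash` commits to: each DeliverTx result is hashed and the hashes are folded into a single root, so individual results can later be proven against the header.
```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// abciResult is a simplified stand-in for a DeliverTx result.
type abciResult struct {
	Code uint32
	Data []byte
}

func (r abciResult) hash() []byte {
	h := sha256.Sum256(append([]byte{byte(r.Code)}, r.Data...))
	return h[:]
}

// resultsHash builds a toy Merkle-style root over the per-tx result hashes;
// the real code uses the tmlibs simple Merkle tree, this only shows the idea
// of committing to every result so proofs against the root are possible.
func resultsHash(results []abciResult) []byte {
	hashes := make([][]byte, len(results))
	for i, r := range results {
		hashes[i] = r.hash()
	}
	for len(hashes) > 1 {
		var next [][]byte
		for i := 0; i < len(hashes); i += 2 {
			if i+1 == len(hashes) {
				next = append(next, hashes[i]) // odd leaf carried up
				continue
			}
			h := sha256.Sum256(append(hashes[i], hashes[i+1]...))
			next = append(next, h[:])
		}
		hashes = next
	}
	if len(hashes) == 0 {
		return nil
	}
	return hashes[0]
}

func main() {
	root := resultsHash([]abciResult{{0, []byte("ok")}, {1, []byte("bad nonce")}})
	fmt.Printf("LastResultsHash (toy): %x\n", root)
}
```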
Ethan Buchman
d844799b3b Merge branch '950-enforce-less-13-val-changes-per-block' into develop 2017-12-26 19:22:21 -05:00
Ethan Buchman
3ea1145486 bring back test 2017-12-26 19:22:15 -05:00
Ethan Buchman
d4716fc03c state 2017-12-26 18:43:03 -05:00
Ethan Buchman
16227594ef notes about block 1 2017-12-26 16:33:42 -05:00
Ethan Buchman
65cdb07f0c merkle 2017-12-26 15:48:17 -05:00
Ethan Buchman
eb73e82279 encoding.md 2017-12-26 15:30:56 -05:00
Ethan Buchman
d6fbfddddd spec.md -> blockchain.md. some fixes 2017-12-26 15:30:27 -05:00
Anton Kaliaev
1339a44402 add safe*Clip funcs 2017-12-26 14:13:12 -06:00
Anton Kaliaev
b8215d8ac8 more test cases 2017-12-26 13:30:00 -06:00
Ethan Buchman
289d92c97d consensus: remove log stmt. closes #987 2017-12-26 10:41:31 -05:00
Ethan Buchman
6c39c77fc5 Merge pull request #996 from ricardohsd/types-add-tests-to-vote
Add more tests to types/vote.go
2017-12-26 10:32:07 -05:00
Anton Kaliaev
69c3a7640b add safeAdd & safeSub plus quickcheck tests 2017-12-25 18:39:14 -06:00
Anton Kaliaev
e8b0458f16 check for overflow and underflow while choosing proposer
Refs #919
2017-12-25 18:39:14 -06:00
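A minimal sketch of the overflow-aware arithmetic these commits add; `safeAddInt64` here is an assumed name, but the check itself is the standard pre-addition bounds test on `int64` voting power.
```go
package main

import (
	"fmt"
	"math"
)

// safeAddInt64 reports whether the sum overflowed or underflowed instead of
// silently wrapping around, which matters when accumulating voting power.
func safeAddInt64(a, b int64) (int64, bool) {
	if b > 0 && a > math.MaxInt64-b {
		return 0, true // overflow
	}
	if b < 0 && a < math.MinInt64-b {
		return 0, true // underflow
	}
	return a + b, false
}

func main() {
	sum, over := safeAddInt64(math.MaxInt64, 1)
	fmt.Println(sum, over) // 0 true

	sum, over = safeAddInt64(40, 2)
	fmt.Println(sum, over) // 42 false
}
```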
Anton Kaliaev
6b89639f90 update docs 2 [ci skip] 2017-12-25 17:58:15 -06:00
Anton Kaliaev
9b25f7325a update docs [ci skip] 2017-12-25 17:53:54 -06:00
Anton Kaliaev
0093f9877a change voting power change, not number of vals 2017-12-25 17:49:36 -06:00
Anton Kaliaev
cf0b5d3715 enforce <1/3 validator updates
Refs #950
2017-12-25 12:10:53 -06:00
Ethan Buchman
616f7e74db Merge pull request #1001 from tendermint/makefile
Cleaned up makefile
2017-12-25 12:09:10 -05:00
Ethan Buchman
14c812a39c tmlibs timer fix 2017-12-25 11:11:55 -05:00
Ethan Buchman
1e52751344 update tests for makefile 2017-12-25 10:24:41 -05:00
Jae Kwon
d7ac6e516a Cleaned up makefile 2017-12-23 02:23:05 -08:00
Ethan Buchman
c2436c46e6 Merge pull request #972 from tendermint/feature/enhance-endblock
Update EndBlock parameters
2017-12-22 01:30:58 -05:00
Ethan Buchman
e3585a6eb0 wip: tendermint specification 2017-12-21 22:36:37 -05:00
Ethan Buchman
38608b1b0f comment and tmlibs fix 2017-12-21 18:32:40 -05:00
Ethan Buchman
2b634dab32 Merge pull request #994 from tendermint/clean/block-validation
Clean/block validation
2017-12-21 17:53:11 -05:00
Ethan Buchman
91acc51cd1 fix test 2017-12-21 17:52:06 -05:00
Ethan Buchman
dc54ba67e4 state: TestValidateBlock 2017-12-21 17:51:03 -05:00
Ethan Buchman
35521b553a save historical consensus params 2017-12-21 17:46:25 -05:00
Ethan Buchman
70a744558c types: params.Update() 2017-12-21 17:00:52 -05:00
Ethan Buchman
4b789ff7e9 another cmn fix 2017-12-21 16:49:47 -05:00
Ethan Buchman
306657a118 no patience for metalinter right now 2017-12-21 16:49:47 -05:00
Ethan Buchman
be765e4cb9 update glide for cmn fixes 2017-12-21 16:49:47 -05:00
Ethan Buchman
b5857da877 forgot file 2017-12-21 16:49:47 -05:00
Ethan Buchman
3ad055ef3a fix randPort 2017-12-21 16:49:47 -05:00
Ethan Buchman
3d00c477fc separate block vs state based validation 2017-12-21 16:49:47 -05:00
Ethan Buchman
c2912d612a update glide 2017-12-21 16:49:47 -05:00
Ethan Buchman
a3c7525249 Merge pull request #993 from tendermint/984-priv-validator-signing
priv validator returns last sign bytes if h/r/s matches
2017-12-21 16:30:22 -05:00
Ethan Buchman
f81025631e update comment [ci skip] 2017-12-21 16:28:05 -05:00
Ethan Buchman
9c03c58de2 priv validator checks if only difference is timestamp; else error 2017-12-21 15:37:27 -05:00
Anton Kaliaev
0ffd60b8cf ValidatorSetUpdates -> ValidatorUpdates 2017-12-21 11:52:26 -06:00
Ricardo Domingos
d5baa6601c types: Add test for IsVoteTypeValid 2017-12-21 18:13:31 +01:00
Ricardo Domingos
19eeef0aad types: Rename exampleVote to examplePrecommit on vote_test
exampleVote doesn't express the type of the vote.
2017-12-21 18:13:31 +01:00
Ricardo Domingos
e76392e330 types: Update String() test to assert Prevote type 2017-12-20 23:21:30 +01:00
Anton Kaliaev
a1cc9ac642 priv validator returns last sign bytes if h/r/s matches
since now we have time in the msgs and we might crash between writing
the priv val file and writing to the WAL.

Refs #984
2017-12-20 14:41:43 -06:00
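A simplified sketch of the double-sign protection described here (the real code additionally tolerates a difference that is only in the timestamp): if the same height/round/step shows up again with the same sign bytes, the previously stored signature is returned instead of signing twice.
```go
package main

import (
	"bytes"
	"fmt"
)

// lastSignState is a simplified record of what the validator signed last.
type lastSignState struct {
	Height    int64
	Round     int
	Step      int8
	SignBytes []byte
	Signature []byte
}

// sign returns the previously produced signature when asked to sign the same
// height/round/step with the same bytes (e.g. after a crash between writing
// the priv validator file and the WAL); a conflicting payload is an error.
func (l *lastSignState) sign(height int64, round int, step int8, signBytes []byte, doSign func([]byte) []byte) ([]byte, error) {
	if height == l.Height && round == l.Round && step == l.Step {
		if bytes.Equal(signBytes, l.SignBytes) {
			return l.Signature, nil
		}
		return nil, fmt.Errorf("conflicting data at height %d/round %d/step %d", height, round, step)
	}
	sig := doSign(signBytes)
	*l = lastSignState{height, round, step, signBytes, sig}
	return sig, nil
}

func main() {
	var state lastSignState
	fakeSign := func(b []byte) []byte { return append([]byte("sig:"), b...) }

	sig1, _ := state.sign(1, 0, 2, []byte("vote"), fakeSign)
	sig2, _ := state.sign(1, 0, 2, []byte("vote"), fakeSign) // replay: same signature back
	fmt.Println(bytes.Equal(sig1, sig2))                     // true
}
```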
Emmanuel T Odeke
67c3af3bf8 cmd/tendermint: fix initialization file creation checks (#991)
* cmd/tendermint: fix initialization file creation checks

Fixes #989.

The original initialization sequence
```shell
tendermint unsafe_reset_all
tendermint init
tendermint node --proxy_app=dummy
```

used to fail with
```shell
ERROR: Failed to create node: Couldn't read GenesisDoc file: open
/Users/emmanuelodeke/.tendermint/genesis.json: no such file or directory
```

because the initialization sequence always assumed that the
genesisDoc would only be set if the privValidator was generated.

However, `tendermint unsafe_reset_all` only created the
`priv_validator.json` file, which meant that a subsequent `tendermint init`
would never create the `genesis.json` file; following the recommended
sequence would then fail because `genesis.json` was absent.

* cmd/tendermint: Load PrivValidatorFS if it exists, else generate it

Feedback from @melekes

* change logging messages for init cmd

Refs #989
2017-12-20 12:50:27 -06:00
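A rough sketch of the corrected `tendermint init` logic, with illustrative paths and payloads: each file is checked and created independently, so an existing `priv_validator.json` no longer suppresses creation of `genesis.json`.
```go
package main

import (
	"fmt"
	"os"
)

// initFiles sketches the corrected behavior: the private validator file and
// the genesis file are checked independently, so a prior `unsafe_reset_all`
// that left only priv_validator.json no longer prevents genesis.json from
// being created. Paths and file contents here are illustrative.
func initFiles(privValFile, genesisFile string) error {
	if _, err := os.Stat(privValFile); os.IsNotExist(err) {
		if err := os.WriteFile(privValFile, []byte("{}"), 0o600); err != nil {
			return err
		}
		fmt.Println("generated private validator:", privValFile)
	} else {
		fmt.Println("found private validator, keeping it:", privValFile)
	}

	if _, err := os.Stat(genesisFile); os.IsNotExist(err) {
		if err := os.WriteFile(genesisFile, []byte(`{"chain_id":"test-chain"}`), 0o600); err != nil {
			return err
		}
		fmt.Println("generated genesis file:", genesisFile)
	} else {
		fmt.Println("found genesis file, keeping it:", genesisFile)
	}
	return nil
}

func main() {
	if err := initFiles("priv_validator.json", "genesis.json"); err != nil {
		fmt.Println("init failed:", err)
		os.Exit(1)
	}
}
```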
Anton Kaliaev
843e1ed400 Updates -> ValidatorSetUpdates 2017-12-19 13:03:39 -06:00
Ethan Buchman
4bca6bf6f5 fix test 2017-12-19 12:30:34 -05:00
Ethan Frey
960b25408f Store LastConsensusHash in State as well
Update all BlockValidation so that it matches the last state
2017-12-19 12:28:08 -05:00
Ethan Frey
45bc106de7 Updated lite tests to set ConsensusHash in header 2017-12-19 12:28:08 -05:00
Ethan Frey
d151e36ea8 Add ConsensusHash to header 2017-12-19 12:28:08 -05:00
Ethan Frey
56cada6a0c Validate ConsensusParams returned from abci app 2017-12-19 12:28:08 -05:00
Ethan Frey
a0b2d77bef Add hash to ConsensusParams 2017-12-19 12:28:08 -05:00
Ethan Frey
030fd00232 Added tests for applying consensus param changes 2017-12-19 12:28:08 -05:00
Ethan Frey
d21f39160f Apply ConsensusParamChanges to state/State 2017-12-19 12:28:08 -05:00
Ethan Frey
4265a94bfe Update EndBlock parameters
* Update abci dependencies
* Modify references from Diffs to Changes
* Fixes issue #924
2017-12-19 12:28:08 -05:00
Ethan Buchman
652d1e3de8 Merge pull request #979 from tendermint/934-node-fails-to-parse-seeds
strip protocol if defined
2017-12-19 12:26:32 -05:00
Ethan Buchman
b33cff4cb7 Merge pull request #981 from tendermint/977-wal-generator
enable logging for wal_generator and set timeout to 1 min
2017-12-19 12:25:58 -05:00
Ethan Buchman
e0fe84a856 Merge branch 'develop' into 977-wal-generator 2017-12-19 11:11:26 -05:00
Ethan Buchman
783ffdb7fd Merge pull request #983 from tendermint/circle-testing
Circle testing
2017-12-19 11:09:22 -05:00
Ethan Buchman
cb3ac6987e remove some debugs 2017-12-19 10:11:37 -05:00
Anton Kaliaev
5a83e58428 stop eventBus 2017-12-17 20:16:02 -06:00
Anton Kaliaev
3f02ab0ead unidirectional channel 2017-12-16 22:20:07 -06:00
Anton Kaliaev
99c58fc561 enable logging for wal_generator and set timeout to 1 min
Refs #977
2017-12-16 21:59:10 -06:00
Ethan Buchman
a86df17ceb crank city 2017-12-16 19:55:04 -05:00
Ethan Buchman
5d04ccbe51 excessive logging. update tmlibs for timer fix 2017-12-16 19:16:08 -05:00
Ethan Buchman
61dc357bb3 test/p2p/kill_all: longer timeout 2017-12-16 13:36:52 -05:00
Ethan Buchman
d7cb2f850d more logs in p2p 2017-12-16 13:36:52 -05:00
Ethan Buchman
bfe0a4a8ac more logging 2017-12-16 13:36:52 -05:00
Ethan Buchman
0ec7909ec3 more logging in p2p and consensus 2017-12-16 13:36:52 -05:00
Ethan Buchman
b5b912e2c4 Merge remote-tracking branch 'origin/977-wal-generator' into develop 2017-12-16 13:36:32 -05:00
Ethan Buchman
9504a593e9 Merge pull request #980 from tendermint/fix-test-in-develop
add missing Timestamp to Votes
2017-12-15 22:11:47 -05:00
Anton Kaliaev
f8f28c8942 enable logging for wal_generator and set timeout to 1 min
Refs #977
2017-12-15 16:15:09 -06:00
Anton Kaliaev
8fc7d63cf8 add missing Timestamp to Votes
Fixes:

```
panic: Panicked on a Sanity Check: can't encode times below 1970 [recovered]
        panic: Panicked on a Sanity Check: can't encode times below 1970

goroutine 2042 [running]:
testing.tRunner.func1(0xc420e8c0f0)
        /usr/local/go/src/testing/testing.go:711 +0x5d9
panic(0xcd9e20, 0xc420c8c270)
        /usr/local/go/src/runtime/panic.go:491 +0x2a2
github.com/tendermint/tendermint/vendor/github.com/tendermint/tmlibs/common.PanicSanity(0xcd9e20, 0xf8ddd0)
        /go/src/github.com/tendermint/tendermint/vendor/github.com/tendermint/tmlibs/common/errors.go:26 +0x120
github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire.WriteTime(0x0, 0x0, 0x0, 0x1306440, 0xc4201607e0, 0xc420e31658, 0xc420e31680)
        /go/src/github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire/time.go:19 +0x11e
github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire.writeReflectBinary(0xdc9e40, 0xc4201fcf30, 0x199, 0x1317b80, 0xdc9e40, 0xc98451, 0x9, 0x0, 0xdc9e40, 0xc420ead9c0, ...)
        /go/src/github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire/reflect.go:525 +0x22f0
github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire.writeReflectBinary(0xd25400, 0xc42000e2e8, 0x196, 0x1317b80, 0xd92d60, 0xc9a195, 0xa, 0x0, 0xcc8ce0, 0xc420164920, ...)
        /go/src/github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire/reflect.go:530 +0x21e7
github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire.writeReflectBinary(0xcc8ce0, 0xc4209f95b8, 0x197, 0x1317b80, 0xcc8ce0, 0xc9a195, 0xa, 0x0, 0xcc8ce0, 0xc420164920, ...)
        /go/src/github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire/reflect.go:518 +0x2509
github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire.writeReflectBinary(0xd873e0, 0xc4209f9580, 0x16, 0x1317b80, 0xd79400, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
        /go/src/github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire/reflect.go:530 +0x21e7
github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire.WriteBinary(0xd873e0, 0xc4209f9580, 0x1306440, 0xc4201607e0, 0xc420e31658, 0xc420e31680)
        /go/src/github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire/wire.go:80 +0x15f
github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire.BinaryBytes(0xd873e0, 0xc4209f9580, 0x3, 0x8, 0xc420160798)
        /go/src/github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire/util.go:15 +0xb8
github.com/tendermint/tendermint/blockchain.(*BlockStore).SaveBlock(0xc4201342a0, 0xc420eac180, 0xc420130640, 0xc4209f9580)
        github.com/tendermint/tendermint/blockchain/_test/_obj_test/store.go:192 +0x439
github.com/tendermint/tendermint/blockchain.TestBlockStoreSaveLoadBlock(0xc420e8c0f0)
        /go/src/github.com/tendermint/tendermint/blockchain/store_test.go:128 +0x609
testing.tRunner(0xc420e8c0f0, 0xf20830)
        /usr/local/go/src/testing/testing.go:746 +0x16d
created by testing.(*T).Run
        /usr/local/go/src/testing/testing.go:789 +0x569
exit status 2
FAIL
```
2017-12-15 13:59:25 -06:00
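A minimal sketch of the fix: test votes get a real `Timestamp` so the wire encoder never sees the zero time, which it rejects as being below 1970. The `vote` struct here is a stand-in, not the actual type.
```go
package main

import (
	"fmt"
	"time"
)

// vote is a simplified stand-in; the real fix was to populate Timestamp on
// the votes built in tests so go-wire never encodes the zero time (year 1).
type vote struct {
	Height    int64
	Timestamp time.Time
}

func makeVote(height int64) vote {
	return vote{
		Height:    height,
		Timestamp: time.Now().Round(0).UTC(), // never leave this as the zero value
	}
}

func main() {
	v := makeVote(1)
	fmt.Println(v.Timestamp.IsZero(), v.Timestamp.Year() >= 1970) // false true
}
```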
Anton Kaliaev
c513649df4 strip protocol if defined
Fixes #934
2017-12-15 13:36:08 -06:00
Anton Kaliaev
a6911825b0 PanicCrisis is deprecated 2017-12-15 13:35:49 -06:00
Ethan Buchman
eddabab5e4 Merge pull request #965 from tendermint/573-handle-corrupt-wal-file
Handle corrupt WAL file
2017-12-15 14:33:16 -05:00
Ethan Buchman
3eee69de2d Merge pull request #954 from tendermint/668-send-absent-validators
Send absent validators
2017-12-15 13:55:52 -05:00
Ethan Buchman
068d83bce8 Merge pull request #677 from tendermint/blockchain-test-store
blockchain: add tests for BlockStore
2017-12-15 13:33:55 -05:00
Anton Kaliaev
7f649ccf23 fixes from Frey's review 2017-12-15 12:21:15 -06:00
Anton Kaliaev
808b830942 add a unit test
Refs #668
2017-12-15 12:13:02 -06:00
Anton Kaliaev
d669816a1b send absent validators in BeginBlock
Refs #668
2017-12-15 12:13:02 -06:00
Anton Kaliaev
e40689b9cc PanicCrisis is deprecated 2017-12-15 11:59:45 -06:00
Anton Kaliaev
709cf18aef add gofuzz test for consensus wal 2017-12-15 11:56:24 -06:00
Anton Kaliaev
e57cad6c3f correct maxMsgSizeBytes 2017-12-15 11:42:53 -06:00
Anton Kaliaev
4f94caa1b9 explain what to do in case of truncation [ci skip] 2017-12-15 11:11:21 -06:00
Ethan Buchman
78a682e4b6 blockchain: test fixes 2017-12-15 12:07:48 -05:00
Ethan Buchman
21d030dbfb Merge pull request #975 from tendermint/974-fix-test-in-develop
add missing Timestamp to Vote
2017-12-14 09:22:11 -05:00
Anton Kaliaev
72da553ed9 add missing Timestamp to Vote
Fixes #974
2017-12-13 22:24:06 -06:00
Anton Kaliaev
b78606d94f Merge pull request #967 from tendermint/feature/total-tx
Add TotalTx to block header
2017-12-13 17:09:48 -06:00
Ethan Frey
a6f719a402 Add tests for block validation 2017-12-13 19:54:16 +01:00
Anton Kaliaev
e0fbd148ef Merge pull request #958 from tendermint/pex-on-by-default
activate PEX reactor by default
2017-12-13 12:52:17 -06:00
Anton Kaliaev
2f91289880 update changelog [ci skip] 2017-12-13 12:26:12 -06:00
Ethan Buchman
462b755a60 activate PEX reactor by default 2017-12-13 12:25:48 -06:00
Anton Kaliaev
0a2ecaa393 Merge pull request #953 from tendermint/feature/time-fields
Add Timestamp to Proposal/Vote
2017-12-13 12:18:55 -06:00
Ethan Frey
dedf03bb81 Add TotalTx to block header, issue #952
Update state to keep track of this info.
Change function args as needed.
Make NumTx also an int64 for consistency.
2017-12-13 12:20:53 +01:00
Ethan Buchman
64f056b57d Merge branch '916-remove-sleeps-from-tests' into develop 2017-12-12 16:43:36 -05:00
Ethan Buchman
90df9fa1bf p2p/trust: remove extra channels 2017-12-12 16:43:19 -05:00
caffix
eae6e6381e trust metric is now a service and the test ticker has been added 2017-12-12 15:33:42 -05:00
Anton Kaliaev
04a18e0a97 briefly describe the recover process [ci skip] 2017-12-12 13:03:09 -06:00
Anton Kaliaev
06aece31cf lower the max message size 2017-12-12 13:02:40 -06:00
Ethan Buchman
e0296d6c3c consensus: fix makeBlockchainFromWAL 2017-12-12 12:14:15 -05:00
Ethan Frey
5ffb5f01cc Add more tests for Proposal/Vote serialization
String() and the Proposal remain valid after serializing.
To be safe, but mainly to increase test coverage for the PR
2017-12-12 12:59:51 +01:00
Ethan Frey
8576ad58bd Cleanup canonical json 2017-12-12 12:59:51 +01:00
Ethan Frey
c4860f6c29 Force CanonicalTime to UTC
fixes issue with vote serialization breaking the signatures
2017-12-12 12:59:51 +01:00
Ethan Frey
850310b034 Add test to isolate precommit failure
types/vote_test.go now checks signature on a serialized and
then deserialized vote. Turns out go-wire time encoding doesn't
respect timezones, and the signatures don't check out.
2017-12-12 12:59:51 +01:00
Ethan Frey
a29c781295 Add default timestamp to all instances of *types.Vote 2017-12-12 12:59:51 +01:00
Ethan Frey
599673690c Add timestamp to vote canonical encoding 2017-12-12 12:59:51 +01:00
Ethan Frey
7deda53b7c Add Timestamp to Proposal for issue #929
Store it as time.Time locally, encode it as RFC3339 with milliseconds
before signing the canonical form.
2017-12-12 12:59:51 +01:00
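A small sketch of the canonicalization these commits describe: the timestamp is forced to UTC and rendered in a fixed RFC3339-with-milliseconds layout before signing, so the signed bytes do not depend on the serializer's local timezone. The helper name is illustrative.
```go
package main

import (
	"fmt"
	"time"
)

// canonicalTime forces UTC and a fixed RFC3339-with-milliseconds layout so
// the bytes being signed are identical regardless of the local timezone.
func canonicalTime(t time.Time) string {
	return t.UTC().Format("2006-01-02T15:04:05.000Z")
}

func main() {
	loc := time.FixedZone("UTC+7", 7*60*60)
	t := time.Date(2017, 12, 12, 19, 0, 0, 123_000_000, loc)

	// The same instant canonicalizes to the same string in any zone,
	// so signatures verify after a serialize/deserialize round trip.
	fmt.Println(canonicalTime(t))
	fmt.Println(canonicalTime(t.In(time.UTC)))
}
```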
Ethan Buchman
5ecae52bf1 Merge branch 'master' into develop 2017-12-12 02:31:47 -05:00
Ethan Buchman
ac2d0edb2f Merge pull request #964 from tendermint/fix-gometa-makefile
fix gometalinter.v2 automatically
2017-12-12 02:26:18 -05:00
Petabyte Storage
ae632654d2 add tools check with short circuit 2017-12-11 23:00:18 -08:00
Petabyte Storage
49e5510953 remove tools from all 2017-12-11 21:44:53 -08:00
Anton Kaliaev
a6644f7477 remove gopath prefixes
it's safe because I added GOPATH to PATH earlier today
2017-12-11 23:05:22 -06:00
Anton Kaliaev
10265d8667 add tools to make all because it's required for test target 2017-12-11 23:02:42 -06:00
Petabyte Storage
8be708fe5b fix spelling and makefile gometalinter.v2 2017-12-11 20:48:15 -08:00
Anton Kaliaev
af79a2a59e fix error msg 2017-12-11 19:50:05 -06:00
Anton Kaliaev
ee66476d62 set max msg size
otherwise, it is easy to get an OutOfMemory panic (somebody could even
exploit this)
2017-12-11 19:48:57 -06:00
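A minimal sketch of the size check this commit adds, with an illustrative limit: the declared message length is validated against a cap before any buffer is allocated, so a malicious length cannot force an out-of-memory panic.
```go
package main

import (
	"bytes"
	"errors"
	"fmt"
	"io"
)

// maxMsgSizeBytes caps how much we are willing to read for a single message;
// the exact limit here is illustrative.
const maxMsgSizeBytes = 1024 * 1024

// readMsg refuses messages whose declared length exceeds the cap, instead of
// blindly allocating attacker-controlled amounts of memory.
func readMsg(r io.Reader, length int) ([]byte, error) {
	if length > maxMsgSizeBytes {
		return nil, errors.New("msg exceeds max size")
	}
	buf := make([]byte, length)
	_, err := io.ReadFull(r, buf)
	return buf, err
}

func main() {
	_, err := readMsg(bytes.NewReader([]byte("hi")), 10*1024*1024)
	fmt.Println(err) // msg exceeds max size
}
```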
Anton Kaliaev
40f9261d48 handle data corruption errors
Refs #573
2017-12-11 19:48:20 -06:00
Ethan Buchman
0bfc11f1ba blockchain: note about store tests needing simplification ... 2017-12-10 20:03:58 -05:00
Emmanuel Odeke
96998a5498 blockchain: Block creator helper for compressing tests as per @ebuchman 2017-12-10 19:58:22 -05:00
Emmanuel Odeke
2da5299924 blockchain: less fragile and involved tests for blockstore
With feedback from @ebuchman, to make the tests nicer
and less fragile.
2017-12-10 19:58:22 -05:00
Emmanuel Odeke
83b40b25d6 blockchain: deduplicate store header value tests 2017-12-10 19:57:06 -05:00
Emmanuel Odeke
05f30b3e28 blockchain: updated store docs/comments from review 2017-12-10 19:57:06 -05:00
Ethan Buchman
116a61beb1 blockchain: update store comments 2017-12-10 19:57:06 -05:00
Emmanuel Odeke
8c86bb8024 blockchain: add tests and more docs for BlockStore
Add tests for the store, covering to the fullest reasonable extent the
paths that can be taken by input arguments altering internal behavior,
as well as by mutating content in the DB.
2017-12-10 19:54:34 -05:00
caffix
44f62e5e27 built the WaitForStop functionality into the Stop method 2017-12-09 13:25:28 -05:00
caffix
5d464364a8 fixed the racy test and removed all the calls to Sleep 2017-12-08 15:51:18 -05:00
337 changed files with 17755 additions and 5798 deletions

.circleci/config.yml (new file)

@@ -0,0 +1,222 @@
version: 2
defaults: &defaults
working_directory: /go/src/github.com/tendermint/tendermint
docker:
- image: circleci/golang:1.10.0
environment:
GOBIN: /tmp/workspace/bin
jobs:
setup_dependencies:
<<: *defaults
steps:
- run: mkdir -p /tmp/workspace/bin
- run: mkdir -p /tmp/workspace/profiles
- checkout
- restore_cache:
keys:
- v1-pkg-cache
- run:
name: tools
command: |
export PATH="$GOBIN:$PATH"
make get_tools
- run:
name: dependencies
command: |
export PATH="$GOBIN:$PATH"
make get_vendor_deps
- run:
name: binaries
command: |
export PATH="$GOBIN:$PATH"
make install
- persist_to_workspace:
root: /tmp/workspace
paths:
- bin
- profiles
- save_cache:
key: v1-pkg-cache
paths:
- /go/pkg
- save_cache:
key: v1-tree-{{ .Environment.CIRCLE_SHA1 }}
paths:
- /go/src/github.com/tendermint/tendermint
setup_abci:
<<: *defaults
steps:
- attach_workspace:
at: /tmp/workspace
- restore_cache:
key: v1-pkg-cache
- restore_cache:
key: v1-tree-{{ .Environment.CIRCLE_SHA1 }}
- run:
name: Checkout abci
command: |
commit=$(bash scripts/dep_utils/parse.sh abci)
go get -v -u -d github.com/tendermint/abci/...
cd /go/src/github.com/tendermint/abci
git checkout "$commit"
- run:
working_directory: /go/src/github.com/tendermint/abci
name: Install abci
command: |
set -ex
export PATH="$GOBIN:$PATH"
make get_tools
make get_vendor_deps
make install
- run: ls -lah /tmp/workspace/bin
- persist_to_workspace:
root: /tmp/workspace
paths:
- "bin/abci*"
lint:
<<: *defaults
steps:
- attach_workspace:
at: /tmp/workspace
- restore_cache:
key: v1-pkg-cache
- restore_cache:
key: v1-tree-{{ .Environment.CIRCLE_SHA1 }}
- run:
name: metalinter
command: |
set -ex
export PATH="$GOBIN:$PATH"
make metalinter
test_apps:
<<: *defaults
steps:
- attach_workspace:
at: /tmp/workspace
- restore_cache:
key: v1-pkg-cache
- restore_cache:
key: v1-tree-{{ .Environment.CIRCLE_SHA1 }}
- run: sudo apt-get update && sudo apt-get install -y --no-install-recommends bsdmainutils
- run:
name: Run tests
command: bash test/app/test.sh
test_cover:
<<: *defaults
parallelism: 4
steps:
- attach_workspace:
at: /tmp/workspace
- restore_cache:
key: v1-pkg-cache
- restore_cache:
key: v1-tree-{{ .Environment.CIRCLE_SHA1 }}
- run:
name: Run tests
command: |
for pkg in $(go list github.com/tendermint/tendermint/... | grep -v /vendor/ | circleci tests split --split-by=timings); do
id=$(basename "$pkg")
go test -timeout 5m -race -coverprofile=/tmp/workspace/profiles/$id.out -covermode=atomic "$pkg"
done
- persist_to_workspace:
root: /tmp/workspace
paths:
- "profiles/*"
test_libs:
<<: *defaults
steps:
- attach_workspace:
at: /tmp/workspace
- restore_cache:
key: v1-pkg-cache
- restore_cache:
key: v1-tree-{{ .Environment.CIRCLE_SHA1 }}
- run:
name: Run tests
command: bash test/test_libs.sh
test_persistence:
<<: *defaults
steps:
- attach_workspace:
at: /tmp/workspace
- restore_cache:
key: v1-pkg-cache
- restore_cache:
key: v1-tree-{{ .Environment.CIRCLE_SHA1 }}
- run:
name: Run tests
command: bash test/persist/test_failure_indices.sh
test_p2p:
environment:
GOBIN: /home/circleci/.go_workspace/bin
GOPATH: /home/circleci/.go_workspace
machine:
image: circleci/classic:latest
steps:
- checkout
- run: mkdir -p $GOPATH/src/github.com/tendermint
- run: ln -sf /home/circleci/project $GOPATH/src/github.com/tendermint/tendermint
- run: bash test/circleci/p2p.sh
upload_coverage:
<<: *defaults
steps:
- attach_workspace:
at: /tmp/workspace
- restore_cache:
key: v1-tree-{{ .Environment.CIRCLE_SHA1 }}
- run:
name: gather
command: |
set -ex
echo "mode: atomic" > coverage.txt
for prof in $(ls /tmp/workspace/profiles/); do
tail -n +2 /tmp/workspace/profiles/"$prof" >> coverage.txt
done
- run:
name: upload
command: bash <(curl -s https://codecov.io/bash) -f coverage.txt
workflows:
version: 2
test-suite:
jobs:
- setup_dependencies
- setup_abci:
requires:
- setup_dependencies
- lint:
requires:
- setup_dependencies
- test_apps:
requires:
- setup_abci
- test_cover:
requires:
- setup_dependencies
- test_libs:
filters:
branches:
only:
- develop
- master
requires:
- setup_dependencies
- test_persistence:
requires:
- setup_abci
- test_p2p
- upload_coverage:
requires:
- test_cover

codecov.yml (removed)

@@ -1,26 +0,0 @@
#
# This codecov.yml is the default configuration for
# all repositories on Codecov. You may adjust the settings
# below in your own codecov.yml in your repository.
#
coverage:
precision: 2
round: down
range: 70...100
status:
# Learn more at https://codecov.io/docs#yaml_default_commit_status
project:
default:
threshold: 1% # allow this much decrease on project
changes: false
comment:
layout: "header, diff"
behavior: default # update if exists else create new
ignore:
- "docs"
- "*.md"
- "*.rst"

.github/ISSUE_TEMPLATE.md

@@ -37,5 +37,8 @@ in a case of bug.
**How to reproduce it** (as minimally and precisely as possible):
**Logs (you can paste a part showing an error or attach the whole file)**:
**`/dump_consensus_state` output for consensus bugs**
**Anything else we need to know**:

.github/PULL_REQUEST_TEMPLATE.md (new file)

@@ -0,0 +1,6 @@
<!-- Thanks for filing a PR! Before hitting the button, please check the following items.-->
* [ ] Updated all relevant documentation in docs
* [ ] Updated all code comments where relevant
* [ ] Wrote tests
* [ ] Updated CHANGELOG.md

.gitignore

@@ -17,6 +17,10 @@ test/logs
coverage.txt
docs/_build
docs/tools
*.log
scripts/wal2json/wal2json
scripts/cutWALUntil/cutWALUntil
.idea/
*.iml

CHANGELOG.md

@@ -3,9 +3,7 @@
## Roadmap
BREAKING CHANGES:
- Upgrade the header to support better proofs on validators, results, evidence, and possibly more
- Better support for injecting randomness
- Pass evidence/voteInfo through ABCI
- Upgrade consensus for more real-time use of evidence
FEATURES:
@@ -27,6 +25,101 @@ BUG FIXES:
- Graceful handling/recovery for apps that have non-determinism or fail to halt
- Graceful handling/recovery for violations of safety, or liveness
## 0.17.1 (March 27th, 2018)
BUG FIXES:
- [types] Actually support `app_state` in genesis as `AppStateJSON`
## 0.17.0 (March 27th, 2018)
BREAKING:
- [types] WriteSignBytes -> SignBytes
IMPROVEMENTS:
- [all] renamed `dummy` (`persistent_dummy`) to `kvstore` (`persistent_kvstore`) (name "dummy" is deprecated and will not work in the next breaking release)
- [docs] note on determinism (docs/determinism.rst)
- [genesis] `app_options` field is deprecated. please rename it to `app_state` in your genesis file(s). `app_options` will not work in the next breaking release
- [p2p] dial seeds directly without potential peers
- [p2p] exponential backoff for addrs in the address book
- [p2p] mark peer as good if it contributed enough votes or block parts
- [p2p] stop peer if it sends incorrect data, msg to unknown channel, msg we did not expect
- [p2p] when `auth_enc` is true, all dialed peers must have a node ID in their address
- [spec] various improvements
- switched from glide to dep internally for package management
- [wire] prep work for upgrading to new go-wire (which is now called go-amino)
FEATURES:
- [config] exposed `auth_enc` flag to enable/disable encryption
- [config] added the `--p2p.private_peer_ids` flag and `PrivatePeerIDs` config variable (see config for description)
- [rpc] added `/health` endpoint, which returns empty result for now
- [types/priv_validator] new format and socket client, allowing for remote signing
BUG FIXES:
- [consensus] fix liveness bug by introducing ValidBlock mechanism
## 0.16.0 (February 20th, 2018)
BREAKING CHANGES:
- [config] use $TMHOME/config for all config and json files
- [p2p] old `--p2p.seeds` is now `--p2p.persistent_peers` (persistent peers to which TM will always connect)
- [p2p] now `--p2p.seeds` only used for getting addresses (if addrbook is empty; not persistent)
- [p2p] NodeInfo: remove RemoteAddr and add Channels
- we must have at least one overlapping channel with peer
- we only send msgs for channels the peer advertised
- [p2p/conn] pong timeout
- [lite] comment out IAVL related code
FEATURES:
- [p2p] added new `/dial_peers&persistent=_` **unsafe** endpoint
- [p2p] persistent node key in `$TMHOME/config/node_key.json`
- [p2p] introduce peer ID and authenticate peers by ID using addresses like `ID@IP:PORT`
- [p2p/pex] new seed mode crawls the network and serves as a seed.
- [config] MempoolConfig.CacheSize
- [config] P2P.SeedMode (`--p2p.seed_mode`)
IMPROVEMENT:
- [p2p/pex] stricter rules in the PEX reactor for better handling of abuse
- [p2p] various improvements to code structure including subpackages for `pex` and `conn`
- [docs] new spec!
- [all] speed up the tests!
BUG FIX:
- [blockchain] StopPeerForError on timeout
- [consensus] StopPeerForError on a bad Maj23 message
- [state] flush mempool conn before calling commit
- [types] fix priv val signing things that only differ by timestamp
- [mempool] fix memory leak causing zombie peers
- [p2p/conn] fix potential deadlock
## 0.15.0 (December 29, 2017)
BREAKING CHANGES:
- [p2p] enable the Peer Exchange reactor by default
- [types] add Timestamp field to Proposal/Vote
- [types] add new fields to Header: TotalTxs, ConsensusParamsHash, LastResultsHash, EvidenceHash
- [types] add Evidence to Block
- [types] simplify ValidateBasic
- [state] updates to support changes to the header
- [state] Enforce <1/3 of validator set can change at a time
FEATURES:
- [state] Send indices of absent validators and addresses of byzantine validators in BeginBlock
- [state] Historical ConsensusParams and ABCIResponses
- [docs] Specification for the base Tendermint data structures.
- [evidence] New evidence reactor for gossiping and managing evidence
- [rpc] `/block_results?height=X` returns the DeliverTx results for a given height.
IMPROVEMENTS:
- [consensus] Better handling of corrupt WAL file
BUG FIXES:
- [lite] fix race
- [state] validate block.Header.ValidatorsHash
- [p2p] allow seed addresses to be prefixed with e.g. `tcp://`
- [p2p] use a consistent key to refer to peers so we don't try to connect to existing peers
- [cmd] fix `tendermint init` to ignore files that are there and generate files that aren't.
## 0.14.0 (December 11, 2017)
BREAKING CHANGES:

CONTRIBUTING.md

@@ -34,27 +34,44 @@ Please don't make Pull Requests to `master`.
## Dependencies
We use [glide](https://github.com/masterminds/glide) to manage dependencies.
That said, the master branch of every Tendermint repository should just build with `go get`, which means they should be kept up-to-date with their dependencies so we can get away with telling people they can just `go get` our software.
Since some dependencies are not under our control, a third party may break our build, in which case we can fall back on `glide install`. Even for dependencies under our control, glide helps us keeps multiple repos in sync as they evolve. Anything with an executable, such as apps, tools, and the core, should use glide.
We use [dep](https://github.com/golang/dep) to manage dependencies.
Run `bash scripts/glide/status.sh` to get a list of vendored dependencies that may not be up-to-date.
That said, the master branch of every Tendermint repository should just build
with `go get`, which means they should be kept up-to-date with their
dependencies so we can get away with telling people they can just `go get` our
software.
Since some dependencies are not under our control, a third party may break our
build, in which case we can fall back on `dep ensure` (or `make
get_vendor_deps`). Even for dependencies under our control, dep helps us to
keep multiple repos in sync as they evolve. Anything with an executable, such
as apps, tools, and the core, should use dep.
Run `dep status` to get a list of vendored dependencies that may not be
up-to-date.
## Vagrant
If you are a [Vagrant](https://www.vagrantup.com/) user, all you have to do to get started hacking Tendermint is:
If you are a [Vagrant](https://www.vagrantup.com/) user, you can get started
hacking Tendermint with the commands below.
NOTE: In case you installed Vagrant in 2017, you might need to run
`vagrant box update` to upgrade to the latest `ubuntu/xenial64`.
```
vagrant up
vagrant ssh
cd ~/go/src/github.com/tendermint/tendermint
make test
```
## Testing
All repos should be hooked up to circle.
If they have `.go` files in the root directory, they will be automatically tested by circle using `go test -v -race ./...`. If not, they will need a `circle.yml`. Ideally, every repo has a `Makefile` that defines `make test` and includes its continuous integration status using a badge in the `README.md`.
All repos should be hooked up to [CircleCI](https://circleci.com/).
If they have `.go` files in the root directory, they will be automatically
tested by circle using `go test -v -race ./...`. If not, they will need a
`circle.yml`. Ideally, every repo has a `Makefile` that defines `make test` and
includes its continuous integration status using a badge in the `README.md`.
## Branching Model and Release
@@ -97,4 +114,4 @@ especially `go-p2p` and `go-rpc`, as their versions are referenced in tendermint
- push to hotfix-vX.X.X to run the extended integration tests on the CI
- merge hotfix-vX.X.X to master
- merge hotfix-vX.X.X to develop
- delete the hotfix-vX.X.X branch
- delete the hotfix-vX.X.X branch

DOCKER/Dockerfile

@@ -1,8 +1,8 @@
FROM alpine:3.6
# This is the release of tendermint to pull in.
ENV TM_VERSION 0.13.0
ENV TM_SHA256SUM 36d773d4c2890addc61cc87a72c1e9c21c89516921b0defb0edfebde719b4b85
ENV TM_VERSION 0.15.0
ENV TM_SHA256SUM 71cc271c67eca506ca492c8b90b090132f104bf5dbfe0af2702a50886e88de17
# Tendermint will be looking for genesis file in /tendermint (unless you change
# `genesis_file` in config.toml). You can put your config.toml and private

DOCKER/Dockerfile.develop

@@ -18,9 +18,9 @@ RUN mkdir -p /go/src/github.com/tendermint/tendermint && \
cd /go/src/github.com/tendermint/tendermint && \
git clone https://github.com/tendermint/tendermint . && \
git checkout develop && \
make get_tools && \
make get_vendor_deps && \
make install && \
glide cc && \
cd - && \
rm -rf /go/src/github.com/tendermint/tendermint && \
apk del go build-base git
@@ -32,4 +32,4 @@ EXPOSE 46657
ENTRYPOINT ["tendermint"]
CMD ["node", "--moniker=`hostname`", "--proxy_app=dummy"]
CMD ["node", "--moniker=`hostname`", "--proxy_app=kvstore"]

DOCKER/README.md

@@ -1,6 +1,7 @@
# Supported tags and respective `Dockerfile` links
- `0.13.0`, `latest` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/a28b3fff49dce2fb31f90abb2fc693834e0029c2/DOCKER/Dockerfile)
- `0.15.0`, `latest` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/170777300ea92dc21a8aec1abc16cb51812513a4/DOCKER/Dockerfile)
- `0.13.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/a28b3fff49dce2fb31f90abb2fc693834e0029c2/DOCKER/Dockerfile)
- `0.12.1` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/457c688346b565e90735431619ca3ca597ef9007/DOCKER/Dockerfile)
- `0.12.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/70d8afa6e952e24c573ece345560a5971bf2cc0e/DOCKER/Dockerfile)
- `0.11.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/9177cc1f64ca88a4a0243c5d1773d10fba67e201/DOCKER/Dockerfile)
@@ -33,13 +34,13 @@ To get started developing applications, see the [application developers guide](h
# How to use this image
## Start one instance of the Tendermint core with the `dummy` app
## Start one instance of the Tendermint core with the `kvstore` app
A very simple example of a built-in app and Tendermint core in one container.
```
docker run -it --rm -v "/tmp:/tendermint" tendermint/tendermint init
docker run -it --rm -v "/tmp:/tendermint" tendermint/tendermint node --proxy_app=dummy
docker run -it --rm -v "/tmp:/tendermint" tendermint/tendermint node --proxy_app=kvstore
```
## mintnet-kubernetes

Gopkg.lock (generated, new file)

@@ -0,0 +1,386 @@
# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'.
[[projects]]
branch = "master"
name = "github.com/btcsuite/btcd"
packages = ["btcec"]
revision = "2be2f12b358dc57d70b8f501b00be450192efbc3"
[[projects]]
name = "github.com/davecgh/go-spew"
packages = ["spew"]
revision = "346938d642f2ec3594ed81d874461961cd0faa76"
version = "v1.1.0"
[[projects]]
branch = "master"
name = "github.com/ebuchman/fail-test"
packages = ["."]
revision = "95f809107225be108efcf10a3509e4ea6ceef3c4"
[[projects]]
name = "github.com/fortytw2/leaktest"
packages = ["."]
revision = "a5ef70473c97b71626b9abeda80ee92ba2a7de9e"
version = "v1.2.0"
[[projects]]
name = "github.com/fsnotify/fsnotify"
packages = ["."]
revision = "c2828203cd70a50dcccfb2761f8b1f8ceef9a8e9"
version = "v1.4.7"
[[projects]]
name = "github.com/go-kit/kit"
packages = [
"log",
"log/level",
"log/term"
]
revision = "4dc7be5d2d12881735283bcab7352178e190fc71"
version = "v0.6.0"
[[projects]]
name = "github.com/go-logfmt/logfmt"
packages = ["."]
revision = "390ab7935ee28ec6b286364bba9b4dd6410cb3d5"
version = "v0.3.0"
[[projects]]
name = "github.com/go-stack/stack"
packages = ["."]
revision = "259ab82a6cad3992b4e21ff5cac294ccb06474bc"
version = "v1.7.0"
[[projects]]
name = "github.com/gogo/protobuf"
packages = [
"gogoproto",
"jsonpb",
"proto",
"protoc-gen-gogo/descriptor",
"sortkeys",
"types"
]
revision = "1adfc126b41513cc696b209667c8656ea7aac67c"
version = "v1.0.0"
[[projects]]
name = "github.com/golang/protobuf"
packages = [
"proto",
"ptypes",
"ptypes/any",
"ptypes/duration",
"ptypes/timestamp"
]
revision = "925541529c1fa6821df4e44ce2723319eb2be768"
version = "v1.0.0"
[[projects]]
branch = "master"
name = "github.com/golang/snappy"
packages = ["."]
revision = "553a641470496b2327abcac10b36396bd98e45c9"
[[projects]]
name = "github.com/gorilla/websocket"
packages = ["."]
revision = "ea4d1f681babbce9545c9c5f3d5194a789c89f5b"
version = "v1.2.0"
[[projects]]
branch = "master"
name = "github.com/hashicorp/hcl"
packages = [
".",
"hcl/ast",
"hcl/parser",
"hcl/printer",
"hcl/scanner",
"hcl/strconv",
"hcl/token",
"json/parser",
"json/scanner",
"json/token"
]
revision = "f40e974e75af4e271d97ce0fc917af5898ae7bda"
[[projects]]
name = "github.com/inconshreveable/mousetrap"
packages = ["."]
revision = "76626ae9c91c4f2a10f34cad8ce83ea42c93bb75"
version = "v1.0"
[[projects]]
branch = "master"
name = "github.com/jmhodges/levigo"
packages = ["."]
revision = "c42d9e0ca023e2198120196f842701bb4c55d7b9"
[[projects]]
branch = "master"
name = "github.com/kr/logfmt"
packages = ["."]
revision = "b84e30acd515aadc4b783ad4ff83aff3299bdfe0"
[[projects]]
name = "github.com/magiconair/properties"
packages = ["."]
revision = "c3beff4c2358b44d0493c7dda585e7db7ff28ae6"
version = "v1.7.6"
[[projects]]
branch = "master"
name = "github.com/mitchellh/mapstructure"
packages = ["."]
revision = "00c29f56e2386353d58c599509e8dc3801b0d716"
[[projects]]
name = "github.com/pelletier/go-toml"
packages = ["."]
revision = "acdc4509485b587f5e675510c4f2c63e90ff68a8"
version = "v1.1.0"
[[projects]]
name = "github.com/pkg/errors"
packages = ["."]
revision = "645ef00459ed84a119197bfb8d8205042c6df63d"
version = "v0.8.0"
[[projects]]
name = "github.com/pmezard/go-difflib"
packages = ["difflib"]
revision = "792786c7400a136282c1664665ae0a8db921c6c2"
version = "v1.0.0"
[[projects]]
branch = "master"
name = "github.com/rcrowley/go-metrics"
packages = ["."]
revision = "8732c616f52954686704c8645fe1a9d59e9df7c1"
[[projects]]
name = "github.com/spf13/afero"
packages = [
".",
"mem"
]
revision = "bb8f1927f2a9d3ab41c9340aa034f6b803f4359c"
version = "v1.0.2"
[[projects]]
name = "github.com/spf13/cast"
packages = ["."]
revision = "8965335b8c7107321228e3e3702cab9832751bac"
version = "v1.2.0"
[[projects]]
name = "github.com/spf13/cobra"
packages = ["."]
revision = "7b2c5ac9fc04fc5efafb60700713d4fa609b777b"
version = "v0.0.1"
[[projects]]
branch = "master"
name = "github.com/spf13/jwalterweatherman"
packages = ["."]
revision = "7c0cea34c8ece3fbeb2b27ab9b59511d360fb394"
[[projects]]
name = "github.com/spf13/pflag"
packages = ["."]
revision = "e57e3eeb33f795204c1ca35f56c44f83227c6e66"
version = "v1.0.0"
[[projects]]
name = "github.com/spf13/viper"
packages = ["."]
revision = "b5e8006cbee93ec955a89ab31e0e3ce3204f3736"
version = "v1.0.2"
[[projects]]
name = "github.com/stretchr/testify"
packages = [
"assert",
"require"
]
revision = "12b6f73e6084dad08a7c6e575284b177ecafbc71"
version = "v1.2.1"
[[projects]]
branch = "master"
name = "github.com/syndtr/goleveldb"
packages = [
"leveldb",
"leveldb/cache",
"leveldb/comparer",
"leveldb/errors",
"leveldb/filter",
"leveldb/iterator",
"leveldb/journal",
"leveldb/memdb",
"leveldb/opt",
"leveldb/storage",
"leveldb/table",
"leveldb/util"
]
revision = "169b1b37be738edb2813dab48c97a549bcf99bb5"
[[projects]]
name = "github.com/tendermint/abci"
packages = [
"client",
"example/code",
"example/counter",
"example/kvstore",
"server",
"types"
]
revision = "46686763ba8ea595ede16530ed4a40fb38f49f94"
version = "v0.10.2"
[[projects]]
branch = "master"
name = "github.com/tendermint/ed25519"
packages = [
".",
"edwards25519",
"extra25519"
]
revision = "d8387025d2b9d158cf4efb07e7ebf814bcce2057"
[[projects]]
name = "github.com/tendermint/go-crypto"
packages = ["."]
revision = "c3e19f3ea26f5c3357e0bcbb799b0761ef923755"
version = "v0.5.0"
[[projects]]
name = "github.com/tendermint/go-wire"
packages = [
".",
"data"
]
revision = "fa721242b042ecd4c6ed1a934ee740db4f74e45c"
source = "github.com/tendermint/go-amino"
version = "v0.7.3"
[[projects]]
name = "github.com/tendermint/tmlibs"
packages = [
"autofile",
"cli",
"cli/flags",
"clist",
"common",
"db",
"flowrate",
"log",
"merkle",
"pubsub",
"pubsub/query",
"test"
]
revision = "24da7009c3d8c019b40ba4287495749e3160caca"
version = "v0.7.1"
[[projects]]
branch = "master"
name = "golang.org/x/crypto"
packages = [
"curve25519",
"nacl/box",
"nacl/secretbox",
"openpgp/armor",
"openpgp/errors",
"poly1305",
"ripemd160",
"salsa20/salsa"
]
revision = "88942b9c40a4c9d203b82b3731787b672d6e809b"
[[projects]]
branch = "master"
name = "golang.org/x/net"
packages = [
"context",
"http2",
"http2/hpack",
"idna",
"internal/timeseries",
"lex/httplex",
"trace"
]
revision = "6078986fec03a1dcc236c34816c71b0e05018fda"
[[projects]]
branch = "master"
name = "golang.org/x/sys"
packages = ["unix"]
revision = "91ee8cde435411ca3f1cd365e8f20131aed4d0a1"
[[projects]]
name = "golang.org/x/text"
packages = [
"collate",
"collate/build",
"internal/colltab",
"internal/gen",
"internal/tag",
"internal/triegen",
"internal/ucd",
"language",
"secure/bidirule",
"transform",
"unicode/bidi",
"unicode/cldr",
"unicode/norm",
"unicode/rangetable"
]
revision = "f21a4dfb5e38f5895301dc265a8def02365cc3d0"
version = "v0.3.0"
[[projects]]
branch = "master"
name = "google.golang.org/genproto"
packages = ["googleapis/rpc/status"]
revision = "f8c8703595236ae70fdf8789ecb656ea0bcdcf46"
[[projects]]
name = "google.golang.org/grpc"
packages = [
".",
"balancer",
"codes",
"connectivity",
"credentials",
"grpclb/grpc_lb_v1/messages",
"grpclog",
"internal",
"keepalive",
"metadata",
"naming",
"peer",
"resolver",
"stats",
"status",
"tap",
"transport"
]
revision = "5b3c4e850e90a4cf6a20ebd46c8b32a0a3afcb9e"
version = "v1.7.5"
[[projects]]
name = "gopkg.in/yaml.v2"
packages = ["."]
revision = "7f97868eec74b32b0982dd158a51a446d1da7eb5"
version = "v2.1.1"
[solve-meta]
analyzer-name = "dep"
analyzer-version = 1
inputs-digest = "4dca5dbd2d280d093d7c8fc423606ab86d6ad1b241b076a7716c2093b5a09231"
solver-name = "gps-cdcl"
solver-version = 1

Gopkg.toml (new file)

@@ -0,0 +1,95 @@
# Gopkg.toml example
#
# Refer to https://github.com/golang/dep/blob/master/docs/Gopkg.toml.md
# for detailed Gopkg.toml documentation.
#
# required = ["github.com/user/thing/cmd/thing"]
# ignored = ["github.com/user/project/pkgX", "bitbucket.org/user/project/pkgA/pkgY"]
#
# [[constraint]]
# name = "github.com/user/project"
# version = "1.0.0"
#
# [[constraint]]
# name = "github.com/user/project2"
# branch = "dev"
# source = "github.com/myfork/project2"
#
# [[override]]
# name = "github.com/x/y"
# version = "2.4.0"
#
# [prune]
# non-go = false
# go-tests = true
# unused-packages = true
[[constraint]]
branch = "master"
name = "github.com/ebuchman/fail-test"
[[constraint]]
branch = "master"
name = "github.com/fortytw2/leaktest"
[[constraint]]
name = "github.com/go-kit/kit"
version = "~0.6.0"
[[constraint]]
name = "github.com/gogo/protobuf"
version = "~1.0.0"
[[constraint]]
name = "github.com/golang/protobuf"
version = "~1.0.0"
[[constraint]]
name = "github.com/gorilla/websocket"
version = "~1.2.0"
[[constraint]]
name = "github.com/pkg/errors"
version = "~0.8.0"
[[constraint]]
branch = "master"
name = "github.com/rcrowley/go-metrics"
[[constraint]]
name = "github.com/spf13/cobra"
version = "~0.0.1"
[[constraint]]
name = "github.com/spf13/viper"
version = "~1.0.0"
[[constraint]]
name = "github.com/stretchr/testify"
version = "~1.2.1"
[[constraint]]
name = "github.com/tendermint/abci"
version = "~0.10.2"
[[constraint]]
name = "github.com/tendermint/go-crypto"
version = "~0.5.0"
[[constraint]]
name = "github.com/tendermint/go-wire"
source = "github.com/tendermint/go-amino"
version = "~0.7.3"
[[constraint]]
name = "github.com/tendermint/tmlibs"
version = "~0.7.1"
[[constraint]]
name = "google.golang.org/grpc"
version = "~1.7.3"
[prune]
go-tests = true
unused-packages = true

Makefile

@@ -1,41 +1,128 @@
GOTOOLS = \
github.com/mitchellh/gox \
github.com/tcnksm/ghr \
gopkg.in/alecthomas/gometalinter.v2
github.com/golang/dep/cmd/dep \
gopkg.in/alecthomas/gometalinter.v2
PACKAGES=$(shell go list ./... | grep -v '/vendor/')
BUILD_TAGS?=tendermint
TMHOME = $${TMHOME:-$$HOME/.tendermint}
BUILD_FLAGS = -ldflags "-X github.com/tendermint/tendermint/version.GitCommit=`git rev-parse --short=8 HEAD`"
BUILD_FLAGS = -ldflags "-X github.com/tendermint/tendermint/version.GitCommit=`git rev-parse --short HEAD`"
all: check build test install
all: get_vendor_deps install test
check: check_tools ensure_deps
install:
go install $(BUILD_FLAGS) ./cmd/tendermint
########################################
### Build
build:
go build $(BUILD_FLAGS) -o build/tendermint ./cmd/tendermint/
go build $(BUILD_FLAGS) -tags '$(BUILD_TAGS)' -o build/tendermint ./cmd/tendermint/
build_race:
go build -race $(BUILD_FLAGS) -o build/tendermint ./cmd/tendermint
go build -race $(BUILD_FLAGS) -tags '$(BUILD_TAGS)' -o build/tendermint ./cmd/tendermint
install:
go install $(BUILD_FLAGS) -tags '$(BUILD_TAGS)' ./cmd/tendermint
########################################
### Distribution
# dist builds binaries for all platforms and packages them for distribution
dist:
@BUILD_TAGS='$(BUILD_TAGS)' sh -c "'$(CURDIR)/scripts/dist.sh'"
test:
@echo "--> Running linter"
@make metalinter_test
@echo "--> Running go test"
@go test $(PACKAGES)
########################################
### Tools & dependencies
test_race:
@echo "--> Running go test --race"
@go test -v -race $(PACKAGES)
check_tools:
@# https://stackoverflow.com/a/25668869
@echo "Found tools: $(foreach tool,$(notdir $(GOTOOLS)),\
$(if $(shell which $(tool)),$(tool),$(error "No $(tool) in PATH")))"
get_tools:
@echo "--> Installing tools"
go get -u -v $(GOTOOLS)
@gometalinter.v2 --install
update_tools:
@echo "--> Updating tools"
@go get -u $(GOTOOLS)
#Run this from CI
get_vendor_deps:
@rm -rf vendor/
@echo "--> Running dep"
@dep ensure -vendor-only
#Run this locally.
ensure_deps:
@rm -rf vendor/
@echo "--> Running dep"
@dep ensure
draw_deps:
@# requires brew install graphviz or apt-get install graphviz
go get github.com/RobotsAndPencils/goviz
@goviz -i github.com/tendermint/tendermint/cmd/tendermint -d 3 | dot -Tpng -o dependency-graph.png
get_deps_bin_size:
@# Copy of build recipe with additional flags to perform binary size analysis
$(eval $(shell go build -work -a $(BUILD_FLAGS) -tags '$(BUILD_TAGS)' -o build/tendermint ./cmd/tendermint/ 2>&1))
@find $(WORK) -type f -name "*.a" | xargs -I{} du -hxs "{}" | sort -rh | sed -e s:${WORK}/::g > deps_bin_size.log
@echo "Results can be found here: $(CURDIR)/deps_bin_size.log"
########################################
### Testing
## required to be run first by most tests
build_docker_test_image:
docker build -t tester -f ./test/docker/Dockerfile .
### coverage, app, persistence, and libs tests
test_cover:
# run the go unit tests with coverage
bash test/test_cover.sh
test_apps:
# run the app tests using bash
# requires `abci-cli` and `tendermint` binaries installed
bash test/app/test.sh
test_persistence:
# run the persistence tests using bash
# requires `abci-cli` installed
docker run --name run_persistence -t tester bash test/persist/test_failure_indices.sh
# TODO undockerize
# bash test/persist/test_failure_indices.sh
test_p2p:
docker rm -f rsyslog || true
rm -rf test/logs || true
mkdir test/logs
cd test/
docker run -d -v "logs:/var/log/" -p 127.0.0.1:5514:514/udp --name rsyslog voxxit/rsyslog
cd ..
# requires 'tester' the image from above
bash test/p2p/test.sh tester
need_abci:
bash scripts/install_abci_apps.sh
test_integrations:
@bash ./test/test.sh
make build_docker_test_image
make get_tools
make get_vendor_deps
make install
make need_abci
make test_cover
make test_apps
make test_persistence
make test_p2p
test_libs:
# checkout every github.com/tendermint dir and run its tests
# NOTE: on release-* or master branches only (set by Jenkins)
docker run --name run_libs -t tester bash test/test_libs.sh
test_release:
@go test -tags release $(PACKAGES)
@@ -45,46 +132,32 @@ test100:
vagrant_test:
vagrant up
vagrant ssh -c 'make install'
vagrant ssh -c 'make test_race'
vagrant ssh -c 'make test_integrations'
draw_deps:
# requires brew install graphviz or apt-get install graphviz
go get github.com/RobotsAndPencils/goviz
@goviz -i github.com/tendermint/tendermint/cmd/tendermint -d 3 | dot -Tpng -o dependency-graph.png
### go tests
test:
@echo "--> Running go test"
@go test $(PACKAGES)
get_vendor_deps:
@hash glide 2>/dev/null || go get github.com/Masterminds/glide
@rm -rf vendor/
@echo "--> Running glide install"
@$(GOPATH)/bin/glide install
test_race:
@echo "--> Running go test --race"
@go test -v -race $(PACKAGES)
update_vendor_deps:
@$(GOPATH)/bin/glide update
update_tools:
@echo "--> Updating tools"
@go get -u $(GOTOOLS)
tools:
@echo "--> Installing tools"
@go get $(GOTOOLS)
$(GOPATH)/bin/gometalinter.v2 --install
########################################
### Formatting, linting, and vetting
metalinter:
$(GOPATH)/bin/gometalinter.v2 --vendor --deadline=600s --enable-all --disable=lll ./...
fmt:
@go fmt ./...
metalinter_test:
$(GOPATH)/bin/gometalinter.v2 --vendor --deadline=600s --disable-all \
metalinter:
@echo "--> Running linter"
@gometalinter.v2 --vendor --deadline=600s --disable-all \
--enable=deadcode \
--enable=gosimple \
--enable=misspell \
--enable=safesql \
./...
#--enable=gas \
#--enable=maligned \
#--enable=dupl \
@@ -106,4 +179,11 @@ metalinter_test:
#--enable=vet \
#--enable=vetshadow \
.PHONY: install build build_race dist test test_race test_integrations test100 draw_deps get_vendor_deps update_vendor_deps update_tools tools test_release
metalinter_all:
@echo "--> Running linter (all)"
gometalinter.v2 --vendor --deadline=600s --enable-all --disable=lll ./...
# To avoid unintended conflicts with file names, always add to .PHONY
# unless there is a reason not to.
# https://www.gnu.org/software/make/manual/html_node/Phony-Targets.html
.PHONY: check build build_race dist install check_tools get_tools update_tools get_vendor_deps draw_deps test_cover test_apps test_persistence test_p2p test test_race test_libs test_integrations test_release test100 vagrant_test fmt

README.md

@@ -26,6 +26,12 @@ and securely replicates it on many machines.
For more information, from introduction to install to application development, [Read The Docs](https://tendermint.readthedocs.io/en/master/).
## Minimum requirements
Requirement|Notes
---|---
Go version | Go1.9 or higher
## Install
To download pre-built binaries, see our [downloads page](https://tendermint.com/downloads).

Vagrantfile

@@ -21,29 +21,32 @@ Vagrant.configure("2") do |config|
# install base requirements
apt-get update
apt-get install -y --no-install-recommends wget curl jq \
apt-get install -y --no-install-recommends wget curl jq zip \
make shellcheck bsdmainutils psmisc
apt-get install -y docker-ce golang-1.9-go
apt-get install -y language-pack-en
# cleanup
apt-get autoremove -y
# needed for docker
usermod -a -G docker ubuntu
usermod -a -G docker vagrant
# use "EOF" not EOF to avoid variable substitution of $PATH
cat << "EOF" >> /home/ubuntu/.bash_profile
export PATH=$PATH:/usr/lib/go-1.9/bin:/home/ubuntu/go/bin
export GOPATH=/home/ubuntu/go
export LC_ALL=en_US.UTF-8
cd go/src/github.com/tendermint/tendermint
EOF
# set env variables
echo 'export PATH=$PATH:/usr/lib/go-1.9/bin:/home/vagrant/go/bin' >> /home/vagrant/.bash_profile
echo 'export GOPATH=/home/vagrant/go' >> /home/vagrant/.bash_profile
echo 'export LC_ALL=en_US.UTF-8' >> /home/vagrant/.bash_profile
echo 'cd go/src/github.com/tendermint/tendermint' >> /home/vagrant/.bash_profile
mkdir -p /home/ubuntu/go/bin
mkdir -p /home/ubuntu/go/src/github.com/tendermint
ln -s /vagrant /home/ubuntu/go/src/github.com/tendermint/tendermint
mkdir -p /home/vagrant/go/bin
mkdir -p /home/vagrant/go/src/github.com/tendermint
ln -s /vagrant /home/vagrant/go/src/github.com/tendermint/tendermint
chown -R ubuntu:ubuntu /home/ubuntu/go
chown ubuntu:ubuntu /home/ubuntu/.bash_profile
chown -R vagrant:vagrant /home/vagrant/go
chown vagrant:vagrant /home/vagrant/.bash_profile
# get all deps and tools, ready to install/test
su - ubuntu -c 'cd /home/ubuntu/go/src/github.com/tendermint/tendermint && make get_vendor_deps && make tools'
su - vagrant -c 'source /home/vagrant/.bash_profile'
su - vagrant -c 'cd /home/vagrant/go/src/github.com/tendermint/tendermint && make get_tools && make get_vendor_deps'
SHELL
end


@@ -14,7 +14,7 @@ if [ ! -d $DATA ]; then
echo "starting node"
tendermint node \
--home $DATA \
--proxy_app dummy \
--proxy_app kvstore \
--p2p.laddr tcp://127.0.0.1:56656 \
--rpc.laddr tcp://127.0.0.1:56657 \
--log_level error &
@@ -35,7 +35,7 @@ cp -R $DATA $HOME1
echo "starting validator node"
tendermint node \
--home $HOME1 \
--proxy_app dummy \
--proxy_app kvstore \
--p2p.laddr tcp://127.0.0.1:56656 \
--rpc.laddr tcp://127.0.0.1:56657 \
--log_level error &
@@ -48,10 +48,10 @@ cp $HOME1/genesis.json $HOME2
printf "starting downloader node"
tendermint node \
--home $HOME2 \
--proxy_app dummy \
--proxy_app kvstore \
--p2p.laddr tcp://127.0.0.1:56666 \
--rpc.laddr tcp://127.0.0.1:56667 \
--p2p.seeds 127.0.0.1:56656 \
--p2p.persistent_peers 127.0.0.1:56656 \
--log_level error &
# wait for node to start up so we only count time where we are actually syncing


@@ -16,11 +16,10 @@ func BenchmarkEncodeStatusWire(b *testing.B) {
b.StopTimer()
pubKey := crypto.GenPrivKeyEd25519().PubKey()
status := &ctypes.ResultStatus{
NodeInfo: &p2p.NodeInfo{
PubKey: pubKey.Unwrap().(crypto.PubKeyEd25519),
NodeInfo: p2p.NodeInfo{
PubKey: pubKey,
Moniker: "SOMENAME",
Network: "SOMENAME",
RemoteAddr: "SOMEADDR",
ListenAddr: "SOMEADDR",
Version: "SOMEVER",
Other: []string{"SOMESTRING", "OTHERSTRING"},
@@ -42,12 +41,11 @@ func BenchmarkEncodeStatusWire(b *testing.B) {
func BenchmarkEncodeNodeInfoWire(b *testing.B) {
b.StopTimer()
pubKey := crypto.GenPrivKeyEd25519().PubKey().Unwrap().(crypto.PubKeyEd25519)
nodeInfo := &p2p.NodeInfo{
pubKey := crypto.GenPrivKeyEd25519().PubKey()
nodeInfo := p2p.NodeInfo{
PubKey: pubKey,
Moniker: "SOMENAME",
Network: "SOMENAME",
RemoteAddr: "SOMEADDR",
ListenAddr: "SOMEADDR",
Version: "SOMEVER",
Other: []string{"SOMESTRING", "OTHERSTRING"},
@@ -63,12 +61,11 @@ func BenchmarkEncodeNodeInfoWire(b *testing.B) {
func BenchmarkEncodeNodeInfoBinary(b *testing.B) {
b.StopTimer()
pubKey := crypto.GenPrivKeyEd25519().PubKey().Unwrap().(crypto.PubKeyEd25519)
nodeInfo := &p2p.NodeInfo{
pubKey := crypto.GenPrivKeyEd25519().PubKey()
nodeInfo := p2p.NodeInfo{
PubKey: pubKey,
Moniker: "SOMENAME",
Network: "SOMENAME",
RemoteAddr: "SOMEADDR",
ListenAddr: "SOMEADDR",
Version: "SOMEVER",
Other: []string{"SOMESTRING", "OTHERSTRING"},
@@ -87,11 +84,10 @@ func BenchmarkEncodeNodeInfoProto(b *testing.B) {
b.StopTimer()
pubKey := crypto.GenPrivKeyEd25519().PubKey().Unwrap().(crypto.PubKeyEd25519)
pubKey2 := &proto.PubKey{Ed25519: &proto.PubKeyEd25519{Bytes: pubKey[:]}}
nodeInfo := &proto.NodeInfo{
nodeInfo := proto.NodeInfo{
PubKey: pubKey2,
Moniker: "SOMENAME",
Network: "SOMENAME",
RemoteAddr: "SOMEADDR",
ListenAddr: "SOMEADDR",
Version: "SOMEVER",
Other: []string{"SOMESTRING", "OTHERSTRING"},

blockchain/pool.go

@@ -1,18 +1,21 @@
package blockchain
import (
"errors"
"fmt"
"math"
"sync"
"time"
"github.com/tendermint/tendermint/types"
cmn "github.com/tendermint/tmlibs/common"
flow "github.com/tendermint/tmlibs/flowrate"
"github.com/tendermint/tmlibs/log"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/types"
)
/*
eg, L = latency = 0.1s
P = num peers = 10
FN = num full nodes
@@ -22,7 +25,6 @@ eg, L = latency = 0.1s
B/S = CB/P/BS = 12.8 blocks/s
12.8 * 0.1 = 1.28 blocks on conn
*/
const (
@@ -30,10 +32,20 @@ const (
maxTotalRequesters = 1000
maxPendingRequests = maxTotalRequesters
maxPendingRequestsPerPeer = 50
minRecvRate = 10240 // 10Kb/s
// Minimum recv rate to ensure we're receiving blocks from a peer fast
// enough. If a peer is not sending us data at at least that rate, we
// consider them to have timedout and we disconnect.
//
// Assuming a DSL connection (not a good choice) 128 Kbps (upload) ~ 15 KB/s,
// sending data across atlantic ~ 7.5 KB/s.
minRecvRate = 7680
// Maximum difference between current and new block's height.
maxDiffBetweenCurrentAndReceivedBlockHeight = 100
)
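For reference, the new minRecvRate of 7680 bytes/s follows from the arithmetic hinted at in the comment above. A minimal sketch of that calculation, assuming the comment's figures of roughly 15 KB/s effective upload on a 128 Kbps DSL line and about half of that across the Atlantic; the program is illustrative only and not part of the patch:

package main

import "fmt"

func main() {
	// ~128 Kbps upload is ~16 KB/s raw; the comment rounds to ~15 KB/s after overhead (assumed).
	const dslUploadKBps = 15.0
	// The comment assumes roughly half of that survives a transatlantic hop.
	const transatlanticKBps = dslUploadKBps / 2 // ~7.5 KB/s
	fmt.Println(int(transatlanticKBps * 1024)) // 7680 bytes/s, matching minRecvRate
}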
var peerTimeoutSeconds = time.Duration(15) // not const so we can override with tests
var peerTimeout = 15 * time.Second // not const so we can override with tests
/*
Peers self report their heights when we join the block pool.
@@ -56,23 +68,23 @@ type BlockPool struct {
height int64 // the lowest key in requesters.
numPending int32 // number of requests pending assignment or block response
// peers
peers map[string]*bpPeer
peers map[p2p.ID]*bpPeer
maxPeerHeight int64
requestsCh chan<- BlockRequest
timeoutsCh chan<- string
errorsCh chan<- peerError
}
func NewBlockPool(start int64, requestsCh chan<- BlockRequest, timeoutsCh chan<- string) *BlockPool {
func NewBlockPool(start int64, requestsCh chan<- BlockRequest, errorsCh chan<- peerError) *BlockPool {
bp := &BlockPool{
peers: make(map[string]*bpPeer),
peers: make(map[p2p.ID]*bpPeer),
requesters: make(map[int64]*bpRequester),
height: start,
numPending: 0,
requestsCh: requestsCh,
timeoutsCh: timeoutsCh,
errorsCh: errorsCh,
}
bp.BaseService = *cmn.NewBaseService(nil, "BlockPool", bp)
return bp
@@ -88,7 +100,6 @@ func (pool *BlockPool) OnStop() {}
// Run spawns requesters as needed.
func (pool *BlockPool) makeRequestersRoutine() {
for {
if !pool.IsRunning() {
break
@@ -119,10 +130,14 @@ func (pool *BlockPool) removeTimedoutPeers() {
for _, peer := range pool.peers {
if !peer.didTimeout && peer.numPending > 0 {
curRate := peer.recvMonitor.Status().CurRate
// XXX remove curRate != 0
// curRate can be 0 on start
if curRate != 0 && curRate < minRecvRate {
pool.sendTimeout(peer.id)
pool.Logger.Error("SendTimeout", "peer", peer.id, "reason", "curRate too low")
err := errors.New("peer is not sending us data fast enough")
pool.sendError(err, peer.id)
pool.Logger.Error("SendTimeout", "peer", peer.id,
"reason", err,
"curRate", fmt.Sprintf("%d KB/s", curRate/1024),
"minRate", fmt.Sprintf("%d KB/s", minRecvRate/1024))
peer.didTimeout = true
}
}
@@ -189,35 +204,43 @@ func (pool *BlockPool) PopRequest() {
delete(pool.requesters, pool.height)
pool.height++
} else {
cmn.PanicSanity(cmn.Fmt("Expected requester to pop, got nothing at height %v", pool.height))
panic(fmt.Sprintf("Expected requester to pop, got nothing at height %v", pool.height))
}
}
// Invalidates the block at pool.height,
// Remove the peer and redo request from others.
func (pool *BlockPool) RedoRequest(height int64) {
// Returns the ID of the removed peer.
func (pool *BlockPool) RedoRequest(height int64) p2p.ID {
pool.mtx.Lock()
defer pool.mtx.Unlock()
request := pool.requesters[height]
if request.block == nil {
cmn.PanicSanity("Expected block to be non-nil")
panic("Expected block to be non-nil")
}
// RemovePeer will redo all requesters associated with this peer.
// TODO: record this malfeasance
pool.removePeer(request.peerID)
return request.peerID
}
// TODO: ensure that blocks come in order for each peer.
func (pool *BlockPool) AddBlock(peerID string, block *types.Block, blockSize int) {
func (pool *BlockPool) AddBlock(peerID p2p.ID, block *types.Block, blockSize int) {
pool.mtx.Lock()
defer pool.mtx.Unlock()
requester := pool.requesters[block.Height]
if requester == nil {
// a block we didn't expect.
// TODO:if height is too far ahead, punish peer
pool.Logger.Info("peer sent us a block we didn't expect", "peer", peerID, "curHeight", pool.height, "blockHeight", block.Height)
diff := pool.height - block.Height
if diff < 0 {
diff *= -1
}
if diff > maxDiffBetweenCurrentAndReceivedBlockHeight {
pool.sendError(errors.New("peer sent us a block we didn't expect with a height too far ahead/behind"), peerID)
}
return
}
@@ -240,7 +263,7 @@ func (pool *BlockPool) MaxPeerHeight() int64 {
}
// Sets the peer's alleged blockchain height.
func (pool *BlockPool) SetPeerHeight(peerID string, height int64) {
func (pool *BlockPool) SetPeerHeight(peerID p2p.ID, height int64) {
pool.mtx.Lock()
defer pool.mtx.Unlock()
@@ -258,14 +281,14 @@ func (pool *BlockPool) SetPeerHeight(peerID string, height int64) {
}
}
func (pool *BlockPool) RemovePeer(peerID string) {
func (pool *BlockPool) RemovePeer(peerID p2p.ID) {
pool.mtx.Lock()
defer pool.mtx.Unlock()
pool.removePeer(peerID)
}
func (pool *BlockPool) removePeer(peerID string) {
func (pool *BlockPool) removePeer(peerID p2p.ID) {
for _, requester := range pool.requesters {
if requester.getPeerID() == peerID {
if requester.getBlock() != nil {
@@ -321,18 +344,18 @@ func (pool *BlockPool) requestersLen() int64 {
return int64(len(pool.requesters))
}
func (pool *BlockPool) sendRequest(height int64, peerID string) {
func (pool *BlockPool) sendRequest(height int64, peerID p2p.ID) {
if !pool.IsRunning() {
return
}
pool.requestsCh <- BlockRequest{height, peerID}
}
func (pool *BlockPool) sendTimeout(peerID string) {
func (pool *BlockPool) sendError(err error, peerID p2p.ID) {
if !pool.IsRunning() {
return
}
pool.timeoutsCh <- peerID
pool.errorsCh <- peerError{err, peerID}
}
// unused by tendermint; left for debugging purposes
@@ -357,7 +380,7 @@ func (pool *BlockPool) debug() string {
type bpPeer struct {
pool *BlockPool
id string
id p2p.ID
recvMonitor *flow.Monitor
height int64
@@ -368,7 +391,7 @@ type bpPeer struct {
logger log.Logger
}
func newBPPeer(pool *BlockPool, peerID string, height int64) *bpPeer {
func newBPPeer(pool *BlockPool, peerID p2p.ID, height int64) *bpPeer {
peer := &bpPeer{
pool: pool,
id: peerID,
@@ -391,9 +414,9 @@ func (peer *bpPeer) resetMonitor() {
func (peer *bpPeer) resetTimeout() {
if peer.timeout == nil {
peer.timeout = time.AfterFunc(time.Second*peerTimeoutSeconds, peer.onTimeout)
peer.timeout = time.AfterFunc(peerTimeout, peer.onTimeout)
} else {
peer.timeout.Reset(time.Second * peerTimeoutSeconds)
peer.timeout.Reset(peerTimeout)
}
}
@@ -419,8 +442,9 @@ func (peer *bpPeer) onTimeout() {
peer.pool.mtx.Lock()
defer peer.pool.mtx.Unlock()
peer.pool.sendTimeout(peer.id)
peer.logger.Error("SendTimeout", "reason", "onTimeout")
err := errors.New("peer did not send us anything")
peer.pool.sendError(err, peer.id)
peer.logger.Error("SendTimeout", "reason", err, "timeout", peerTimeout)
peer.didTimeout = true
}
@@ -434,7 +458,7 @@ type bpRequester struct {
redoCh chan struct{}
mtx sync.Mutex
peerID string
peerID p2p.ID
block *types.Block
}
@@ -458,7 +482,7 @@ func (bpr *bpRequester) OnStart() error {
}
// Returns true if the peer matches
func (bpr *bpRequester) setBlock(block *types.Block, peerID string) bool {
func (bpr *bpRequester) setBlock(block *types.Block, peerID p2p.ID) bool {
bpr.mtx.Lock()
if bpr.block != nil || bpr.peerID != peerID {
bpr.mtx.Unlock()
@@ -477,7 +501,7 @@ func (bpr *bpRequester) getBlock() *types.Block {
return bpr.block
}
func (bpr *bpRequester) getPeerID() string {
func (bpr *bpRequester) getPeerID() p2p.ID {
bpr.mtx.Lock()
defer bpr.mtx.Unlock()
return bpr.peerID
@@ -502,7 +526,7 @@ func (bpr *bpRequester) requestRoutine() {
OUTER_LOOP:
for {
// Pick a peer to send request to.
var peer *bpPeer = nil
var peer *bpPeer
PICK_PEER_LOOP:
for {
if !bpr.IsRunning() || !bpr.pool.IsRunning() {
@@ -523,10 +547,10 @@ OUTER_LOOP:
// Send request and wait.
bpr.pool.sendRequest(bpr.height, peer.id)
select {
case <-bpr.pool.Quit:
case <-bpr.pool.Quit():
bpr.Stop()
return
case <-bpr.Quit:
case <-bpr.Quit():
return
case <-bpr.redoCh:
bpr.reset()
@@ -534,10 +558,10 @@ OUTER_LOOP:
case <-bpr.gotBlockCh:
// We got the block, now see if it's good.
select {
case <-bpr.pool.Quit:
case <-bpr.pool.Quit():
bpr.Stop()
return
case <-bpr.Quit:
case <-bpr.Quit():
return
case <-bpr.redoCh:
bpr.reset()
@@ -551,5 +575,5 @@ OUTER_LOOP:
type BlockRequest struct {
Height int64
PeerID string
PeerID p2p.ID
}

blockchain/pool_test.go

@@ -5,24 +5,26 @@ import (
"testing"
"time"
"github.com/tendermint/tendermint/types"
cmn "github.com/tendermint/tmlibs/common"
"github.com/tendermint/tmlibs/log"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/types"
)
func init() {
peerTimeoutSeconds = time.Duration(2)
peerTimeout = 2 * time.Second
}
type testPeer struct {
id string
id p2p.ID
height int64
}
func makePeers(numPeers int, minHeight, maxHeight int64) map[string]testPeer {
peers := make(map[string]testPeer, numPeers)
func makePeers(numPeers int, minHeight, maxHeight int64) map[p2p.ID]testPeer {
peers := make(map[p2p.ID]testPeer, numPeers)
for i := 0; i < numPeers; i++ {
peerID := cmn.RandStr(12)
peerID := p2p.ID(cmn.RandStr(12))
height := minHeight + rand.Int63n(maxHeight-minHeight)
peers[peerID] = testPeer{peerID, height}
}
@@ -32,9 +34,9 @@ func makePeers(numPeers int, minHeight, maxHeight int64) map[string]testPeer {
func TestBasic(t *testing.T) {
start := int64(42)
peers := makePeers(10, start+1, 1000)
timeoutsCh := make(chan string, 100)
requestsCh := make(chan BlockRequest, 100)
pool := NewBlockPool(start, requestsCh, timeoutsCh)
errorsCh := make(chan peerError, 1000)
requestsCh := make(chan BlockRequest, 1000)
pool := NewBlockPool(start, requestsCh, errorsCh)
pool.SetLogger(log.TestingLogger())
err := pool.Start()
@@ -69,8 +71,8 @@ func TestBasic(t *testing.T) {
// Pull from channels
for {
select {
case peerID := <-timeoutsCh:
t.Errorf("timeout: %v", peerID)
case err := <-errorsCh:
t.Error(err)
case request := <-requestsCh:
t.Logf("Pulled new BlockRequest %v", request)
if request.Height == 300 {
@@ -89,9 +91,9 @@ func TestBasic(t *testing.T) {
func TestTimeout(t *testing.T) {
start := int64(42)
peers := makePeers(10, start+1, 1000)
timeoutsCh := make(chan string, 100)
requestsCh := make(chan BlockRequest, 100)
pool := NewBlockPool(start, requestsCh, timeoutsCh)
errorsCh := make(chan peerError, 1000)
requestsCh := make(chan BlockRequest, 1000)
pool := NewBlockPool(start, requestsCh, errorsCh)
pool.SetLogger(log.TestingLogger())
err := pool.Start()
if err != nil {
@@ -127,12 +129,13 @@ func TestTimeout(t *testing.T) {
// Pull from channels
counter := 0
timedOut := map[string]struct{}{}
timedOut := map[p2p.ID]struct{}{}
for {
select {
case peerID := <-timeoutsCh:
t.Logf("Peer %v timeouted", peerID)
if _, ok := timedOut[peerID]; !ok {
case err := <-errorsCh:
t.Log(err)
// consider error to be always timeout here
if _, ok := timedOut[err.peerID]; !ok {
counter++
if counter == len(peers) {
return // Done!

blockchain/reactor.go

@@ -3,24 +3,26 @@ package blockchain
import (
"bytes"
"errors"
"fmt"
"reflect"
"sync"
"time"
wire "github.com/tendermint/go-wire"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
cmn "github.com/tendermint/tmlibs/common"
"github.com/tendermint/tmlibs/log"
"github.com/tendermint/tendermint/p2p"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
)
const (
// BlockchainChannel is a channel for blocks and status updates (`BlockStore` height)
BlockchainChannel = byte(0x40)
defaultChannelCapacity = 1000
trySyncIntervalMS = 50
trySyncIntervalMS = 50
// stop syncing when last block's time is
// within this much of the system time.
// stopSyncingDurationMinutes = 10
@@ -34,44 +36,65 @@ const (
type consensusReactor interface {
// for when we switch from blockchain reactor and fast sync to
// the consensus machine
SwitchToConsensus(*sm.State, int)
SwitchToConsensus(sm.State, int)
}
type peerError struct {
err error
peerID p2p.ID
}
func (e peerError) Error() string {
return fmt.Sprintf("error with peer %v: %s", e.peerID, e.err.Error())
}
// BlockchainReactor handles long-term catchup syncing.
type BlockchainReactor struct {
p2p.BaseReactor
state *sm.State
proxyAppConn proxy.AppConnConsensus // same as consensus.proxyAppConn
store *BlockStore
pool *BlockPool
fastSync bool
requestsCh chan BlockRequest
timeoutsCh chan string
mtx sync.Mutex
params types.ConsensusParams
eventBus *types.EventBus
// immutable
initialState sm.State
blockExec *sm.BlockExecutor
store *BlockStore
pool *BlockPool
fastSync bool
requestsCh <-chan BlockRequest
errorsCh <-chan peerError
}
// NewBlockchainReactor returns new reactor instance.
func NewBlockchainReactor(state *sm.State, proxyAppConn proxy.AppConnConsensus, store *BlockStore, fastSync bool) *BlockchainReactor {
func NewBlockchainReactor(state sm.State, blockExec *sm.BlockExecutor, store *BlockStore,
fastSync bool) *BlockchainReactor {
if state.LastBlockHeight != store.Height() {
cmn.PanicSanity(cmn.Fmt("state (%v) and store (%v) height mismatch", state.LastBlockHeight, store.Height()))
panic(fmt.Sprintf("state (%v) and store (%v) height mismatch", state.LastBlockHeight,
store.Height()))
}
requestsCh := make(chan BlockRequest, defaultChannelCapacity)
timeoutsCh := make(chan string, defaultChannelCapacity)
const capacity = 1000 // must be bigger than peers count
requestsCh := make(chan BlockRequest, capacity)
errorsCh := make(chan peerError, capacity) // so we don't block in #Receive#pool.AddBlock
pool := NewBlockPool(
store.Height()+1,
requestsCh,
timeoutsCh,
errorsCh,
)
bcR := &BlockchainReactor{
state: state,
proxyAppConn: proxyAppConn,
params: state.ConsensusParams,
initialState: state,
blockExec: blockExec,
store: store,
pool: pool,
fastSync: fastSync,
requestsCh: requestsCh,
timeoutsCh: timeoutsCh,
errorsCh: errorsCh,
}
bcR.BaseReactor = *p2p.NewBaseReactor("BlockchainReactor", bcR)
return bcR
@@ -117,7 +140,8 @@ func (bcR *BlockchainReactor) GetChannels() []*p2p.ChannelDescriptor {
// AddPeer implements Reactor by sending our state to peer.
func (bcR *BlockchainReactor) AddPeer(peer p2p.Peer) {
if !peer.Send(BlockchainChannel, struct{ BlockchainMessage }{&bcStatusResponseMessage{bcR.store.Height()}}) {
if !peer.Send(BlockchainChannel,
struct{ BlockchainMessage }{&bcStatusResponseMessage{bcR.store.Height()}}) {
// doing nothing, will try later in `poolRoutine`
}
// peer is added to the pool once we receive the first
@@ -126,14 +150,16 @@ func (bcR *BlockchainReactor) AddPeer(peer p2p.Peer) {
// RemovePeer implements Reactor by removing peer from the pool.
func (bcR *BlockchainReactor) RemovePeer(peer p2p.Peer, reason interface{}) {
bcR.pool.RemovePeer(peer.Key())
bcR.pool.RemovePeer(peer.ID())
}
// respondToPeer loads a block and sends it to the requesting peer,
// if we have it. Otherwise, we'll respond saying we don't have it.
// According to the Tendermint spec, if all nodes are honest,
// no node should be requesting for a block that's non-existent.
func (bcR *BlockchainReactor) respondToPeer(msg *bcBlockRequestMessage, src p2p.Peer) (queued bool) {
func (bcR *BlockchainReactor) respondToPeer(msg *bcBlockRequestMessage,
src p2p.Peer) (queued bool) {
block := bcR.store.LoadBlock(msg.Height)
if block != nil {
msg := &bcBlockResponseMessage{Block: block}
@@ -151,13 +177,13 @@ func (bcR *BlockchainReactor) respondToPeer(msg *bcBlockRequestMessage, src p2p.
func (bcR *BlockchainReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) {
_, msg, err := DecodeMessage(msgBytes, bcR.maxMsgSize())
if err != nil {
bcR.Logger.Error("Error decoding message", "err", err)
bcR.Logger.Error("Error decoding message", "src", src, "chId", chID, "msg", msg, "err", err, "bytes", msgBytes)
bcR.Switch.StopPeerForError(src, err)
return
}
bcR.Logger.Debug("Receive", "src", src, "chID", chID, "msg", msg)
// TODO: improve logic to satisfy megacheck
switch msg := msg.(type) {
case *bcBlockRequestMessage:
if queued := bcR.respondToPeer(msg, src); !queued {
@@ -165,16 +191,17 @@ func (bcR *BlockchainReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte)
}
case *bcBlockResponseMessage:
// Got a block.
bcR.pool.AddBlock(src.Key(), msg.Block, len(msgBytes))
bcR.pool.AddBlock(src.ID(), msg.Block, len(msgBytes))
case *bcStatusRequestMessage:
// Send peer our state.
queued := src.TrySend(BlockchainChannel, struct{ BlockchainMessage }{&bcStatusResponseMessage{bcR.store.Height()}})
queued := src.TrySend(BlockchainChannel,
struct{ BlockchainMessage }{&bcStatusResponseMessage{bcR.store.Height()}})
if !queued {
// sorry
}
case *bcStatusResponseMessage:
// Got a peer status. Unverified.
bcR.pool.SetPeerHeight(src.Key(), msg.Height)
bcR.pool.SetPeerHeight(src.ID(), msg.Height)
default:
bcR.Logger.Error(cmn.Fmt("Unknown message type %v", reflect.TypeOf(msg)))
}
@@ -183,7 +210,16 @@ func (bcR *BlockchainReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte)
// maxMsgSize returns the maximum allowable size of a
// message on the blockchain reactor.
func (bcR *BlockchainReactor) maxMsgSize() int {
return bcR.state.Params.BlockSizeParams.MaxBytes + 2
bcR.mtx.Lock()
defer bcR.mtx.Unlock()
return bcR.params.BlockSize.MaxBytes + 2
}
// updateConsensusParams updates the internal consensus params
func (bcR *BlockchainReactor) updateConsensusParams(params types.ConsensusParams) {
bcR.mtx.Lock()
defer bcR.mtx.Unlock()
bcR.params = params
}
// Handle messages from the poolReactor telling the reactor what to do.
@@ -197,7 +233,8 @@ func (bcR *BlockchainReactor) poolRoutine() {
blocksSynced := 0
chainID := bcR.state.ChainID
chainID := bcR.initialState.ChainID
state := bcR.initialState
lastHundred := time.Now()
lastRate := 0.0
@@ -205,7 +242,7 @@ func (bcR *BlockchainReactor) poolRoutine() {
FOR_LOOP:
for {
select {
case request := <-bcR.requestsCh: // chan BlockRequest
case request := <-bcR.requestsCh:
peer := bcR.Switch.Peers().Get(request.PeerID)
if peer == nil {
continue FOR_LOOP // Peer has since been disconnected.
@@ -217,11 +254,10 @@ FOR_LOOP:
// The pool handles timeouts, just let it go.
continue FOR_LOOP
}
case peerID := <-bcR.timeoutsCh: // chan string
// Peer timed out.
peer := bcR.Switch.Peers().Get(peerID)
case err := <-bcR.errorsCh:
peer := bcR.Switch.Peers().Get(err.peerID)
if peer != nil {
bcR.Switch.StopPeerForError(peer, errors.New("BlockchainReactor Timeout"))
bcR.Switch.StopPeerForError(peer, err)
}
case <-statusUpdateTicker.C:
// ask for status updates
@@ -236,7 +272,7 @@ FOR_LOOP:
bcR.pool.Stop()
conR := bcR.Switch.Reactor("CONSENSUS").(consensusReactor)
conR.SwitchToConsensus(bcR.state, blocksSynced)
conR.SwitchToConsensus(state, blocksSynced)
break FOR_LOOP
}
@@ -251,33 +287,42 @@ FOR_LOOP:
// We need both to sync the first block.
break SYNC_LOOP
}
firstParts := first.MakePartSet(bcR.state.Params.BlockPartSizeBytes)
firstParts := first.MakePartSet(state.ConsensusParams.BlockPartSizeBytes)
firstPartsHeader := firstParts.Header()
firstID := types.BlockID{first.Hash(), firstPartsHeader}
// Finally, verify the first block using the second's commit
// NOTE: we can probably make this more efficient, but note that calling
// first.Hash() doesn't verify the tx contents, so MakePartSet() is
// currently necessary.
err := bcR.state.Validators.VerifyCommit(
chainID, types.BlockID{first.Hash(), firstPartsHeader}, first.Height, second.LastCommit)
err := state.Validators.VerifyCommit(
chainID, firstID, first.Height, second.LastCommit)
if err != nil {
bcR.Logger.Error("Error in validation", "err", err)
bcR.pool.RedoRequest(first.Height)
peerID := bcR.pool.RedoRequest(first.Height)
peer := bcR.Switch.Peers().Get(peerID)
if peer != nil {
bcR.Switch.StopPeerForError(peer, fmt.Errorf("BlockchainReactor validation error: %v", err))
}
break SYNC_LOOP
} else {
bcR.pool.PopRequest()
// TODO: batch saves so we dont persist to disk every block
bcR.store.SaveBlock(first, firstParts, second.LastCommit)
// TODO: should we be firing events? need to fire NewBlock events manually ...
// NOTE: we could improve performance if we
// didn't make the app commit to disk every block
// ... but we would need a way to get the hash without it persisting
err := bcR.state.ApplyBlock(bcR.eventBus, bcR.proxyAppConn, first, firstPartsHeader, types.MockMempool{})
// TODO: same thing for app - but we would need a way to
// get the hash without persisting the state
var err error
state, err = bcR.blockExec.ApplyBlock(state, firstID, first)
if err != nil {
// TODO This is bad, are we zombie?
cmn.PanicQ(cmn.Fmt("Failed to process committed block (%d:%X): %v", first.Height, first.Hash(), err))
cmn.PanicQ(cmn.Fmt("Failed to process committed block (%d:%X): %v",
first.Height, first.Hash(), err))
}
blocksSynced += 1
blocksSynced++
// update the consensus params
bcR.updateConsensusParams(state.ConsensusParams)
if blocksSynced%100 == 0 {
lastRate = 0.9*lastRate + 0.1*(100/time.Since(lastHundred).Seconds())
@@ -288,7 +333,7 @@ FOR_LOOP:
}
}
continue FOR_LOOP
case <-bcR.Quit:
case <-bcR.Quit():
break FOR_LOOP
}
}
@@ -296,15 +341,11 @@ FOR_LOOP:
// BroadcastStatusRequest broadcasts `BlockStore` height.
func (bcR *BlockchainReactor) BroadcastStatusRequest() error {
bcR.Switch.Broadcast(BlockchainChannel, struct{ BlockchainMessage }{&bcStatusRequestMessage{bcR.store.Height()}})
bcR.Switch.Broadcast(BlockchainChannel,
struct{ BlockchainMessage }{&bcStatusRequestMessage{bcR.store.Height()}})
return nil
}
// SetEventBus sets event bus.
func (bcR *BlockchainReactor) SetEventBus(b *types.EventBus) {
bcR.eventBus = b
}
//-----------------------------------------------------------------------------
// Messages

blockchain/reactor_test.go

@@ -4,30 +4,35 @@ import (
"testing"
wire "github.com/tendermint/go-wire"
cmn "github.com/tendermint/tmlibs/common"
dbm "github.com/tendermint/tmlibs/db"
"github.com/tendermint/tmlibs/log"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
)
func newBlockchainReactor(maxBlockHeight int64) *BlockchainReactor {
logger := log.TestingLogger()
func makeStateAndBlockStore(logger log.Logger) (sm.State, *BlockStore) {
config := cfg.ResetTestRoot("blockchain_reactor_test")
blockStore := NewBlockStore(dbm.NewMemDB())
state, _ := sm.LoadStateFromDBOrGenesisFile(dbm.NewMemDB(), config.GenesisFile())
return state, blockStore
}
// Get State
state, _ := sm.GetState(dbm.NewMemDB(), config.GenesisFile())
state.SetLogger(logger.With("module", "state"))
state.Save()
func newBlockchainReactor(logger log.Logger, maxBlockHeight int64) *BlockchainReactor {
state, blockStore := makeStateAndBlockStore(logger)
// Make the blockchainReactor itself
fastSync := true
bcReactor := NewBlockchainReactor(state.Copy(), nil, blockStore, fastSync)
var nilApp proxy.AppConnConsensus
blockExec := sm.NewBlockExecutor(dbm.NewMemDB(), log.TestingLogger(), nilApp,
types.MockMempool{}, types.MockEvidencePool{})
bcReactor := NewBlockchainReactor(state.Copy(), blockExec, blockStore, fastSync)
bcReactor.SetLogger(logger.With("module", "blockchain"))
// Next: we need to set a switch in order for peers to be added in
@@ -37,22 +42,22 @@ func newBlockchainReactor(maxBlockHeight int64) *BlockchainReactor {
for blockHeight := int64(1); blockHeight <= maxBlockHeight; blockHeight++ {
firstBlock := makeBlock(blockHeight, state)
secondBlock := makeBlock(blockHeight+1, state)
firstParts := firstBlock.MakePartSet(state.Params.BlockGossipParams.BlockPartSizeBytes)
firstParts := firstBlock.MakePartSet(state.ConsensusParams.BlockGossip.BlockPartSizeBytes)
blockStore.SaveBlock(firstBlock, firstParts, secondBlock.LastCommit)
}
return bcReactor
}
func TestNoBlockMessageResponse(t *testing.T) {
func TestNoBlockResponse(t *testing.T) {
maxBlockHeight := int64(20)
bcr := newBlockchainReactor(maxBlockHeight)
bcr := newBlockchainReactor(log.TestingLogger(), maxBlockHeight)
bcr.Start()
defer bcr.Stop()
// Add some peers in
peer := newbcrTestPeer(cmn.RandStr(12))
peer := newbcrTestPeer(p2p.ID(cmn.RandStr(12)))
bcr.AddPeer(peer)
chID := byte(0x01)
@@ -67,6 +72,8 @@ func TestNoBlockMessageResponse(t *testing.T) {
{100, false},
}
// receive a request message from peer,
// wait for our response to be received on the peer
for _, tt := range tests {
reqBlockMsg := &bcBlockRequestMessage{tt.height}
reqBlockBytes := wire.BinaryBytes(struct{ BlockchainMessage }{reqBlockMsg})
@@ -90,6 +97,49 @@ func TestNoBlockMessageResponse(t *testing.T) {
}
}
/*
// NOTE: This is too hard to test without
// an easy way to add test peer to switch
// or without significant refactoring of the module.
// Alternatively we could actually dial a TCP conn but
// that seems extreme.
func TestBadBlockStopsPeer(t *testing.T) {
maxBlockHeight := int64(20)
bcr := newBlockchainReactor(log.TestingLogger(), maxBlockHeight)
bcr.Start()
defer bcr.Stop()
// Add some peers in
peer := newbcrTestPeer(p2p.ID(cmn.RandStr(12)))
// XXX: This doesn't add the peer to anything,
// so it's hard to check that it's later removed
bcr.AddPeer(peer)
assert.True(t, bcr.Switch.Peers().Size() > 0)
// send a bad block from the peer
// default blocks already dont have commits, so should fail
block := bcr.store.LoadBlock(3)
msg := &bcBlockResponseMessage{Block: block}
peer.Send(BlockchainChannel, struct{ BlockchainMessage }{msg})
ticker := time.NewTicker(time.Millisecond * 10)
timer := time.NewTimer(time.Second * 2)
LOOP:
for {
select {
case <-ticker.C:
if bcr.Switch.Peers().Size() == 0 {
break LOOP
}
case <-timer.C:
t.Fatal("Timed out waiting to disconnect peer")
}
}
}
*/
//----------------------------------------------
// utility funcs
@@ -100,37 +150,34 @@ func makeTxs(height int64) (txs []types.Tx) {
return txs
}
func makeBlock(height int64, state *sm.State) *types.Block {
prevHash := state.LastBlockID.Hash
prevParts := types.PartSetHeader{}
valHash := state.Validators.Hash()
prevBlockID := types.BlockID{prevHash, prevParts}
block, _ := types.MakeBlock(height, "test_chain", makeTxs(height),
new(types.Commit), prevBlockID, valHash, state.AppHash, state.Params.BlockGossipParams.BlockPartSizeBytes)
func makeBlock(height int64, state sm.State) *types.Block {
block, _ := state.MakeBlock(height, makeTxs(height), new(types.Commit))
return block
}
// The Test peer
type bcrTestPeer struct {
cmn.Service
key string
ch chan interface{}
cmn.BaseService
id p2p.ID
ch chan interface{}
}
var _ p2p.Peer = (*bcrTestPeer)(nil)
func newbcrTestPeer(key string) *bcrTestPeer {
return &bcrTestPeer{
Service: cmn.NewBaseService(nil, "bcrTestPeer", nil),
key: key,
ch: make(chan interface{}, 2),
func newbcrTestPeer(id p2p.ID) *bcrTestPeer {
bcr := &bcrTestPeer{
id: id,
ch: make(chan interface{}, 2),
}
bcr.BaseService = *cmn.NewBaseService(nil, "bcrTestPeer", bcr)
return bcr
}
func (tp *bcrTestPeer) lastValue() interface{} { return <-tp.ch }
func (tp *bcrTestPeer) TrySend(chID byte, value interface{}) bool {
if _, ok := value.(struct{ BlockchainMessage }).BlockchainMessage.(*bcStatusResponseMessage); ok {
if _, ok := value.(struct{ BlockchainMessage }).
BlockchainMessage.(*bcStatusResponseMessage); ok {
// Discard status response messages since they skew our results
// We only want to deal with:
// + bcBlockResponseMessage
@@ -142,9 +189,9 @@ func (tp *bcrTestPeer) TrySend(chID byte, value interface{}) bool {
}
func (tp *bcrTestPeer) Send(chID byte, data interface{}) bool { return tp.TrySend(chID, data) }
func (tp *bcrTestPeer) NodeInfo() *p2p.NodeInfo { return nil }
func (tp *bcrTestPeer) NodeInfo() p2p.NodeInfo { return p2p.NodeInfo{} }
func (tp *bcrTestPeer) Status() p2p.ConnectionStatus { return p2p.ConnectionStatus{} }
func (tp *bcrTestPeer) Key() string { return tp.key }
func (tp *bcrTestPeer) ID() p2p.ID { return tp.id }
func (tp *bcrTestPeer) IsOutbound() bool { return false }
func (tp *bcrTestPeer) IsPersistent() bool { return true }
func (tp *bcrTestPeer) Get(s string) interface{} { return s }

blockchain/store.go

@@ -8,13 +8,15 @@ import (
"sync"
wire "github.com/tendermint/go-wire"
"github.com/tendermint/tendermint/types"
cmn "github.com/tendermint/tmlibs/common"
dbm "github.com/tendermint/tmlibs/db"
"github.com/tendermint/tendermint/types"
)
/*
Simple low level store for blocks.
BlockStore is a simple low level store for blocks.
There are three types of information stored:
- BlockMeta: Meta information about each block
@@ -23,7 +25,7 @@ There are three types of information stored:
Currently the precommit signatures are duplicated in the Block parts as
well as the Commit. In the future this may change, perhaps by moving
the Commit data outside the Block.
the Commit data outside the Block. (TODO)
// NOTE: BlockStore methods will panic if they encounter errors
// deserializing loaded data, indicating probable corruption on disk.
@@ -35,6 +37,8 @@ type BlockStore struct {
height int64
}
// NewBlockStore returns a new BlockStore with the given DB,
// initialized to the last height that was committed to the DB.
func NewBlockStore(db dbm.DB) *BlockStore {
bsjson := LoadBlockStoreStateJSON(db)
return &BlockStore{
@@ -43,13 +47,16 @@ func NewBlockStore(db dbm.DB) *BlockStore {
}
}
// Height() returns the last known contiguous block height.
// Height returns the last known contiguous block height.
func (bs *BlockStore) Height() int64 {
bs.mtx.RLock()
defer bs.mtx.RUnlock()
return bs.height
}
// GetReader returns the value associated with the given key wrapped in an io.Reader.
// If no value is found, it returns nil.
// It's mainly for use with wire.ReadBinary.
func (bs *BlockStore) GetReader(key []byte) io.Reader {
bytez := bs.db.Get(key)
if bytez == nil {
@@ -58,6 +65,8 @@ func (bs *BlockStore) GetReader(key []byte) io.Reader {
return bytes.NewReader(bytez)
}
// LoadBlock returns the block with the given height.
// If no block is found for that height, it returns nil.
func (bs *BlockStore) LoadBlock(height int64) *types.Block {
var n int
var err error
@@ -67,7 +76,7 @@ func (bs *BlockStore) LoadBlock(height int64) *types.Block {
}
blockMeta := wire.ReadBinary(&types.BlockMeta{}, r, 0, &n, &err).(*types.BlockMeta)
if err != nil {
cmn.PanicCrisis(cmn.Fmt("Error reading block meta: %v", err))
panic(fmt.Sprintf("Error reading block meta: %v", err))
}
bytez := []byte{}
for i := 0; i < blockMeta.BlockID.PartsHeader.Total; i++ {
@@ -76,11 +85,14 @@ func (bs *BlockStore) LoadBlock(height int64) *types.Block {
}
block := wire.ReadBinary(&types.Block{}, bytes.NewReader(bytez), 0, &n, &err).(*types.Block)
if err != nil {
cmn.PanicCrisis(cmn.Fmt("Error reading block: %v", err))
panic(fmt.Sprintf("Error reading block: %v", err))
}
return block
}
// LoadBlockPart returns the Part at the given index
// from the block at the given height.
// If no part is found for the given height and index, it returns nil.
func (bs *BlockStore) LoadBlockPart(height int64, index int) *types.Part {
var n int
var err error
@@ -90,11 +102,13 @@ func (bs *BlockStore) LoadBlockPart(height int64, index int) *types.Part {
}
part := wire.ReadBinary(&types.Part{}, r, 0, &n, &err).(*types.Part)
if err != nil {
cmn.PanicCrisis(cmn.Fmt("Error reading block part: %v", err))
panic(fmt.Sprintf("Error reading block part: %v", err))
}
return part
}
// LoadBlockMeta returns the BlockMeta for the given height.
// If no block is found for the given height, it returns nil.
func (bs *BlockStore) LoadBlockMeta(height int64) *types.BlockMeta {
var n int
var err error
@@ -104,13 +118,15 @@ func (bs *BlockStore) LoadBlockMeta(height int64) *types.BlockMeta {
}
blockMeta := wire.ReadBinary(&types.BlockMeta{}, r, 0, &n, &err).(*types.BlockMeta)
if err != nil {
cmn.PanicCrisis(cmn.Fmt("Error reading block meta: %v", err))
panic(fmt.Sprintf("Error reading block meta: %v", err))
}
return blockMeta
}
// The +2/3 and other Precommit-votes for block at `height`.
// This Commit comes from block.LastCommit for `height+1`.
// LoadBlockCommit returns the Commit for the given height.
// This commit consists of the +2/3 and other Precommit-votes for block at `height`,
// and it comes from the block.LastCommit for `height+1`.
// If no commit is found for the given height, it returns nil.
func (bs *BlockStore) LoadBlockCommit(height int64) *types.Commit {
var n int
var err error
@@ -120,12 +136,14 @@ func (bs *BlockStore) LoadBlockCommit(height int64) *types.Commit {
}
commit := wire.ReadBinary(&types.Commit{}, r, 0, &n, &err).(*types.Commit)
if err != nil {
cmn.PanicCrisis(cmn.Fmt("Error reading commit: %v", err))
panic(fmt.Sprintf("Error reading commit: %v", err))
}
return commit
}
// NOTE: the Precommit-vote heights are for the block at `height`
// LoadSeenCommit returns the locally seen Commit for the given height.
// This is useful when we've seen a commit, but there has not yet been
// a new block at `height + 1` that includes this commit in its block.LastCommit.
func (bs *BlockStore) LoadSeenCommit(height int64) *types.Commit {
var n int
var err error
@@ -135,20 +153,24 @@ func (bs *BlockStore) LoadSeenCommit(height int64) *types.Commit {
}
commit := wire.ReadBinary(&types.Commit{}, r, 0, &n, &err).(*types.Commit)
if err != nil {
cmn.PanicCrisis(cmn.Fmt("Error reading commit: %v", err))
panic(fmt.Sprintf("Error reading commit: %v", err))
}
return commit
}
// SaveBlock persists the given block, blockParts, and seenCommit to the underlying db.
// blockParts: Must be parts of the block
// seenCommit: The +2/3 precommits that were seen which committed at height.
// If all the nodes restart after committing a block,
// we need this to reload the precommits to catch-up nodes to the
// most recent height. Otherwise they'd stall at H-1.
func (bs *BlockStore) SaveBlock(block *types.Block, blockParts *types.PartSet, seenCommit *types.Commit) {
if block == nil {
cmn.PanicSanity("BlockStore can only save a non-nil block")
}
height := block.Height
if height != bs.Height()+1 {
cmn.PanicSanity(cmn.Fmt("BlockStore can only save contiguous blocks. Wanted %v, got %v", bs.Height()+1, height))
if g, w := height, bs.Height()+1; g != w {
cmn.PanicSanity(cmn.Fmt("BlockStore can only save contiguous blocks. Wanted %v, got %v", w, g))
}
if !blockParts.IsComplete() {
cmn.PanicSanity(cmn.Fmt("BlockStore can only save complete block part sets"))
@@ -219,6 +241,7 @@ type BlockStoreStateJSON struct {
Height int64
}
// Save persists the blockStore state to the database as JSON.
func (bsj BlockStoreStateJSON) Save(db dbm.DB) {
bytes, err := json.Marshal(bsj)
if err != nil {
@@ -227,9 +250,11 @@ func (bsj BlockStoreStateJSON) Save(db dbm.DB) {
db.SetSync(blockStoreKey, bytes)
}
// LoadBlockStoreStateJSON returns the BlockStoreStateJSON as loaded from disk.
// If no BlockStoreStateJSON was previously persisted, it returns the zero value.
func LoadBlockStoreStateJSON(db dbm.DB) BlockStoreStateJSON {
bytes := db.Get(blockStoreKey)
if bytes == nil {
if len(bytes) == 0 {
return BlockStoreStateJSON{
Height: 0,
}
@@ -237,7 +262,7 @@ func LoadBlockStoreStateJSON(db dbm.DB) BlockStoreStateJSON {
bsj := BlockStoreStateJSON{}
err := json.Unmarshal(bytes, &bsj)
if err != nil {
cmn.PanicCrisis(cmn.Fmt("Could not unmarshal bytes: %X", bytes))
panic(fmt.Sprintf("Could not unmarshal bytes: %X", bytes))
}
return bsj
}
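The doc comments added to store.go above state that the Load* methods return nil for missing heights and panic only on corrupt data. A minimal usage sketch under those assumptions, using the in-memory DB from tmlibs; it is illustrative only and not part of the patch:

package main

import (
	"fmt"

	"github.com/tendermint/tendermint/blockchain"
	dbm "github.com/tendermint/tmlibs/db"
)

func main() {
	// A fresh store over an empty in-memory DB reports height 0.
	store := blockchain.NewBlockStore(dbm.NewMemDB())
	fmt.Println(store.Height()) // 0

	// Loading a height that was never saved returns nil rather than panicking.
	fmt.Println(store.LoadBlock(1)) // <nil>
}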

blockchain/store_test.go (new file)

@@ -0,0 +1,424 @@
package blockchain
import (
"bytes"
"fmt"
"io/ioutil"
"runtime/debug"
"strings"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
wire "github.com/tendermint/go-wire"
"github.com/tendermint/tmlibs/db"
"github.com/tendermint/tmlibs/log"
"github.com/tendermint/tendermint/types"
)
func TestLoadBlockStoreStateJSON(t *testing.T) {
db := db.NewMemDB()
bsj := &BlockStoreStateJSON{Height: 1000}
bsj.Save(db)
retrBSJ := LoadBlockStoreStateJSON(db)
assert.Equal(t, *bsj, retrBSJ, "expected the retrieved DBs to match")
}
func TestNewBlockStore(t *testing.T) {
db := db.NewMemDB()
db.Set(blockStoreKey, []byte(`{"height": 10000}`))
bs := NewBlockStore(db)
assert.Equal(t, bs.Height(), int64(10000), "failed to properly parse blockstore")
panicCausers := []struct {
data []byte
wantErr string
}{
{[]byte("artful-doger"), "not unmarshal bytes"},
{[]byte(" "), "unmarshal bytes"},
}
for i, tt := range panicCausers {
// Expecting a panic here on trying to parse an invalid blockStore
_, _, panicErr := doFn(func() (interface{}, error) {
db.Set(blockStoreKey, tt.data)
_ = NewBlockStore(db)
return nil, nil
})
require.NotNil(t, panicErr, "#%d panicCauser: %q expected a panic", i, tt.data)
assert.Contains(t, panicErr.Error(), tt.wantErr, "#%d data: %q", i, tt.data)
}
db.Set(blockStoreKey, nil)
bs = NewBlockStore(db)
assert.Equal(t, bs.Height(), int64(0), "expecting nil bytes to be unmarshaled alright")
}
func TestBlockStoreGetReader(t *testing.T) {
db := db.NewMemDB()
// Initial setup
db.Set([]byte("Foo"), []byte("Bar"))
db.Set([]byte("Foo1"), nil)
bs := NewBlockStore(db)
tests := [...]struct {
key []byte
want []byte
}{
0: {key: []byte("Foo"), want: []byte("Bar")},
1: {key: []byte("KnoxNonExistent"), want: nil},
2: {key: []byte("Foo1"), want: []byte{}},
}
for i, tt := range tests {
r := bs.GetReader(tt.key)
if r == nil {
assert.Nil(t, tt.want, "#%d: expected a non-nil reader", i)
continue
}
slurp, err := ioutil.ReadAll(r)
if err != nil {
t.Errorf("#%d: unexpected Read err: %v", i, err)
} else {
assert.Equal(t, slurp, tt.want, "#%d: mismatch", i)
}
}
}
func freshBlockStore() (*BlockStore, db.DB) {
db := db.NewMemDB()
return NewBlockStore(db), db
}
var (
state, _ = makeStateAndBlockStore(log.NewTMLogger(new(bytes.Buffer)))
block = makeBlock(1, state)
partSet = block.MakePartSet(2)
part1 = partSet.GetPart(0)
part2 = partSet.GetPart(1)
seenCommit1 = &types.Commit{Precommits: []*types.Vote{{Height: 10,
Timestamp: time.Now().UTC()}}}
)
// TODO: This test should be simplified ...
func TestBlockStoreSaveLoadBlock(t *testing.T) {
state, bs := makeStateAndBlockStore(log.NewTMLogger(new(bytes.Buffer)))
require.Equal(t, bs.Height(), int64(0), "initially the height should be zero")
// check there are no blocks at various heights
noBlockHeights := []int64{0, -1, 100, 1000, 2}
for i, height := range noBlockHeights {
if g := bs.LoadBlock(height); g != nil {
t.Errorf("#%d: height(%d) got a block; want nil", i, height)
}
}
// save a block
block := makeBlock(bs.Height()+1, state)
validPartSet := block.MakePartSet(2)
seenCommit := &types.Commit{Precommits: []*types.Vote{{Height: 10,
Timestamp: time.Now().UTC()}}}
bs.SaveBlock(block, partSet, seenCommit)
require.Equal(t, bs.Height(), block.Header.Height, "expecting the new height to be changed")
incompletePartSet := types.NewPartSetFromHeader(types.PartSetHeader{Total: 2})
uncontiguousPartSet := types.NewPartSetFromHeader(types.PartSetHeader{Total: 0})
uncontiguousPartSet.AddPart(part2, false)
header1 := types.Header{
Height: 1,
NumTxs: 100,
ChainID: "block_test",
Time: time.Now(),
}
header2 := header1
header2.Height = 4
// End of setup, test data
commitAtH10 := &types.Commit{Precommits: []*types.Vote{{Height: 10,
Timestamp: time.Now().UTC()}}}
tuples := []struct {
block *types.Block
parts *types.PartSet
seenCommit *types.Commit
wantErr bool
wantPanic string
corruptBlockInDB bool
corruptCommitInDB bool
corruptSeenCommitInDB bool
eraseCommitInDB bool
eraseSeenCommitInDB bool
}{
{
block: newBlock(&header1, commitAtH10),
parts: validPartSet,
seenCommit: seenCommit1,
},
{
block: nil,
wantPanic: "only save a non-nil block",
},
{
block: newBlock(&header2, commitAtH10),
parts: uncontiguousPartSet,
wantPanic: "only save contiguous blocks", // and incomplete and uncontiguous parts
},
{
block: newBlock(&header1, commitAtH10),
parts: incompletePartSet,
wantPanic: "only save complete block", // incomplete parts
},
{
block: newBlock(&header1, commitAtH10),
parts: validPartSet,
seenCommit: seenCommit1,
corruptCommitInDB: true, // Corrupt the DB's commit entry
wantPanic: "rror reading commit",
},
{
block: newBlock(&header1, commitAtH10),
parts: validPartSet,
seenCommit: seenCommit1,
wantPanic: "rror reading block",
corruptBlockInDB: true, // Corrupt the DB's block entry
},
{
block: newBlock(&header1, commitAtH10),
parts: validPartSet,
seenCommit: seenCommit1,
// Erase the seen commit in the DB; expect no error and a nil seenCommit back
eraseSeenCommitInDB: true,
},
{
block: newBlock(&header1, commitAtH10),
parts: validPartSet,
seenCommit: seenCommit1,
corruptSeenCommitInDB: true,
wantPanic: "rror reading commit",
},
{
block: newBlock(&header1, commitAtH10),
parts: validPartSet,
seenCommit: seenCommit1,
// Erase the block commit in the DB; expect no error and a nil commit back
eraseCommitInDB: true,
},
}
type quad struct {
block *types.Block
commit *types.Commit
meta *types.BlockMeta
seenCommit *types.Commit
}
for i, tuple := range tuples {
bs, db := freshBlockStore()
// SaveBlock
res, err, panicErr := doFn(func() (interface{}, error) {
bs.SaveBlock(tuple.block, tuple.parts, tuple.seenCommit)
if tuple.block == nil {
return nil, nil
}
if tuple.corruptBlockInDB {
db.Set(calcBlockMetaKey(tuple.block.Height), []byte("block-bogus"))
}
bBlock := bs.LoadBlock(tuple.block.Height)
bBlockMeta := bs.LoadBlockMeta(tuple.block.Height)
if tuple.eraseSeenCommitInDB {
db.Delete(calcSeenCommitKey(tuple.block.Height))
}
if tuple.corruptSeenCommitInDB {
db.Set(calcSeenCommitKey(tuple.block.Height), []byte("bogus-seen-commit"))
}
bSeenCommit := bs.LoadSeenCommit(tuple.block.Height)
commitHeight := tuple.block.Height - 1
if tuple.eraseCommitInDB {
db.Delete(calcBlockCommitKey(commitHeight))
}
if tuple.corruptCommitInDB {
db.Set(calcBlockCommitKey(commitHeight), []byte("foo-bogus"))
}
bCommit := bs.LoadBlockCommit(commitHeight)
return &quad{block: bBlock, seenCommit: bSeenCommit, commit: bCommit,
meta: bBlockMeta}, nil
})
if subStr := tuple.wantPanic; subStr != "" {
if panicErr == nil {
t.Errorf("#%d: want a non-nil panic", i)
} else if got := panicErr.Error(); !strings.Contains(got, subStr) {
t.Errorf("#%d:\n\tgotErr: %q\nwant substring: %q", i, got, subStr)
}
continue
}
if tuple.wantErr {
if err == nil {
t.Errorf("#%d: got nil error", i)
}
continue
}
assert.Nil(t, panicErr, "#%d: unexpected panic", i)
assert.Nil(t, err, "#%d: unexpected error", i)
qua, ok := res.(*quad)
if !ok || qua == nil {
t.Errorf("#%d: got nil quad back; gotType=%T", i, res)
continue
}
if tuple.eraseSeenCommitInDB {
assert.Nil(t, qua.seenCommit,
"erased the seenCommit in the DB hence we should get back a nil seenCommit")
}
if tuple.eraseCommitInDB {
assert.Nil(t, qua.commit,
"erased the commit in the DB hence we should get back a nil commit")
}
}
}
func binarySerializeIt(v interface{}) []byte {
var n int
var err error
buf := new(bytes.Buffer)
wire.WriteBinary(v, buf, &n, &err)
return buf.Bytes()
}
func TestLoadBlockPart(t *testing.T) {
bs, db := freshBlockStore()
height, index := int64(10), 1
loadPart := func() (interface{}, error) {
part := bs.LoadBlockPart(height, index)
return part, nil
}
// Initially no contents.
// 1. Requesting a non-existent block part shouldn't fail
res, _, panicErr := doFn(loadPart)
require.Nil(t, panicErr, "a non-existent block part shouldn't cause a panic")
require.Nil(t, res, "a non-existent block part should return nil")
// 2. Next save a corrupted block then try to load it
db.Set(calcBlockPartKey(height, index), []byte("Tendermint"))
res, _, panicErr = doFn(loadPart)
require.NotNil(t, panicErr, "expecting a non-nil panic")
require.Contains(t, panicErr.Error(), "Error reading block part")
// 3. A good block serialized and saved to the DB should be retrievable
db.Set(calcBlockPartKey(height, index), binarySerializeIt(part1))
gotPart, _, panicErr := doFn(loadPart)
require.Nil(t, panicErr, "an existent and proper block should not panic")
require.Nil(t, res, "a properly saved block should return a proper block")
require.Equal(t, gotPart.(*types.Part).Hash(), part1.Hash(),
"expecting successful retrieval of previously saved block")
}
func TestLoadBlockMeta(t *testing.T) {
bs, db := freshBlockStore()
height := int64(10)
loadMeta := func() (interface{}, error) {
meta := bs.LoadBlockMeta(height)
return meta, nil
}
// Initially no contents.
// 1. Requesting a non-existent blockMeta shouldn't fail
res, _, panicErr := doFn(loadMeta)
require.Nil(t, panicErr, "a non-existent blockMeta shouldn't cause a panic")
require.Nil(t, res, "a non-existent blockMeta should return nil")
// 2. Next save a corrupted blockMeta then try to load it
db.Set(calcBlockMetaKey(height), []byte("Tendermint-Meta"))
res, _, panicErr = doFn(loadMeta)
require.NotNil(t, panicErr, "expecting a non-nil panic")
require.Contains(t, panicErr.Error(), "Error reading block meta")
// 3. A good blockMeta serialized and saved to the DB should be retrievable
meta := &types.BlockMeta{}
db.Set(calcBlockMetaKey(height), binarySerializeIt(meta))
gotMeta, _, panicErr := doFn(loadMeta)
require.Nil(t, panicErr, "an existent and proper block should not panic")
require.Nil(t, res, "a properly saved blockMeta should return a proper blocMeta ")
require.Equal(t, binarySerializeIt(meta), binarySerializeIt(gotMeta),
"expecting successful retrieval of previously saved blockMeta")
}
func TestBlockFetchAtHeight(t *testing.T) {
state, bs := makeStateAndBlockStore(log.NewTMLogger(new(bytes.Buffer)))
require.Equal(t, bs.Height(), int64(0), "initially the height should be zero")
block := makeBlock(bs.Height()+1, state)
partSet := block.MakePartSet(2)
seenCommit := &types.Commit{Precommits: []*types.Vote{{Height: 10,
Timestamp: time.Now().UTC()}}}
bs.SaveBlock(block, partSet, seenCommit)
require.Equal(t, bs.Height(), block.Header.Height, "expecting the new height to be changed")
blockAtHeight := bs.LoadBlock(bs.Height())
require.Equal(t, block.Hash(), blockAtHeight.Hash(),
"expecting a successful load of the last saved block")
blockAtHeightPlus1 := bs.LoadBlock(bs.Height() + 1)
require.Nil(t, blockAtHeightPlus1, "expecting an unsuccessful load of Height()+1")
blockAtHeightPlus2 := bs.LoadBlock(bs.Height() + 2)
require.Nil(t, blockAtHeightPlus2, "expecting an unsuccessful load of Height()+2")
}
func doFn(fn func() (interface{}, error)) (res interface{}, err error, panicErr error) {
defer func() {
if r := recover(); r != nil {
switch e := r.(type) {
case error:
panicErr = e
case string:
panicErr = fmt.Errorf("%s", e)
default:
if st, ok := r.(fmt.Stringer); ok {
panicErr = fmt.Errorf("%s", st)
} else {
panicErr = fmt.Errorf("%s", debug.Stack())
}
}
}
}()
res, err = fn()
return res, err, panicErr
}
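doFn converts a recovered panic into an error so the table-driven test above can assert on wantPanic substrings. A minimal illustrative usage (not part of the diff):
_, _, panicErr := doFn(func() (interface{}, error) {
panic("Error reading block")
})
// panicErr is non-nil and panicErr.Error() contains "Error reading block"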
func newBlock(hdr *types.Header, lastCommit *types.Commit) *types.Block {
return &types.Block{
Header: hdr,
LastCommit: lastCommit,
}
}


@@ -1,35 +0,0 @@
---
machine:
environment:
MACH_PREFIX: tendermint-test-mach
DOCKER_VERSION: 1.10.0
DOCKER_MACHINE_VERSION: 0.9.0
GOPATH: "$HOME/.go_project"
PROJECT_PARENT_PATH: "$GOPATH/src/github.com/$CIRCLE_PROJECT_USERNAME"
PROJECT_PATH: "$PROJECT_PARENT_PATH/$CIRCLE_PROJECT_REPONAME"
PATH: "$HOME/.go_project/bin:${PATH}"
hosts:
localhost: 127.0.0.1
dependencies:
override:
- curl -sSL https://s3.amazonaws.com/circle-downloads/install-circleci-docker.sh | sudo bash -s -- $DOCKER_VERSION
- sudo start docker
- sudo curl -sSL -o /usr/bin/docker-machine "https://github.com/docker/machine/releases/download/v$DOCKER_MACHINE_VERSION/docker-machine-`uname -s`-`uname -m`"; sudo chmod 0755 /usr/bin/docker-machine
- mkdir -p "$PROJECT_PARENT_PATH"
- ln -sf "$HOME/$CIRCLE_PROJECT_REPONAME/" "$PROJECT_PATH"
post:
- go version
- docker version
- docker-machine version
test:
override:
- cd "$PROJECT_PATH" && make tools && make get_vendor_deps && make metalinter_test
- cd "$PROJECT_PATH" && set -o pipefail && make test_integrations 2>&1 | tee test_integrations.log:
timeout: 1800
post:
- cd "$PROJECT_PATH" && mv test_integrations.log "${CIRCLE_ARTIFACTS}"
- cd "$PROJECT_PATH" && bash <(curl -s https://codecov.io/bash) -f coverage.txt
- cd "$PROJECT_PATH" && mv coverage.txt "${CIRCLE_ARTIFACTS}"
- cd "$PROJECT_PATH" && cp test/logs/messages "${CIRCLE_ARTIFACTS}/docker_logs.txt"


@@ -0,0 +1,53 @@
package main
import (
"flag"
"os"
crypto "github.com/tendermint/go-crypto"
cmn "github.com/tendermint/tmlibs/common"
"github.com/tendermint/tmlibs/log"
priv_val "github.com/tendermint/tendermint/types/priv_validator"
)
func main() {
var (
addr = flag.String("addr", ":46659", "Address of client to connect to")
chainID = flag.String("chain-id", "mychain", "chain id")
privValPath = flag.String("priv", "", "priv val file path")
logger = log.NewTMLogger(
log.NewSyncWriter(os.Stdout),
).With("module", "priv_val")
)
flag.Parse()
logger.Info(
"Starting private validator",
"addr", *addr,
"chainID", *chainID,
"privPath", *privValPath,
)
privVal := priv_val.LoadPrivValidatorJSON(*privValPath)
rs := priv_val.NewRemoteSigner(
logger,
*chainID,
*addr,
privVal,
crypto.GenPrivKeyEd25519(),
)
err := rs.Start()
if err != nil {
panic(err)
}
cmn.TrapSignal(func() {
err := rs.Stop()
if err != nil {
panic(err)
}
})
}
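An illustrative invocation of this remote signer, using the flags defined above (the binary name and file path are placeholders, not part of the diff):
// <remote-signer-binary> -addr 127.0.0.1:46659 -chain-id mychain -priv /path/to/priv_validator.json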


@@ -1,8 +1,6 @@
package commands
import (
"os"
"github.com/spf13/cobra"
"github.com/tendermint/tendermint/types"
@@ -17,29 +15,34 @@ var InitFilesCmd = &cobra.Command{
}
func initFiles(cmd *cobra.Command, args []string) {
// private validator
privValFile := config.PrivValidatorFile()
if _, err := os.Stat(privValFile); os.IsNotExist(err) {
privValidator := types.GenPrivValidatorFS(privValFile)
privValidator.Save()
genFile := config.GenesisFile()
if _, err := os.Stat(genFile); os.IsNotExist(err) {
genDoc := types.GenesisDoc{
ChainID: cmn.Fmt("test-chain-%v", cmn.RandStr(6)),
}
genDoc.Validators = []types.GenesisValidator{{
PubKey: privValidator.GetPubKey(),
Power: 10,
}}
if err := genDoc.SaveAs(genFile); err != nil {
panic(err)
}
}
logger.Info("Initialized tendermint", "genesis", config.GenesisFile(), "priv_validator", config.PrivValidatorFile())
var privValidator *types.PrivValidatorFS
if cmn.FileExists(privValFile) {
privValidator = types.LoadPrivValidatorFS(privValFile)
logger.Info("Found private validator", "path", privValFile)
} else {
logger.Info("Already initialized", "priv_validator", config.PrivValidatorFile())
privValidator = types.GenPrivValidatorFS(privValFile)
privValidator.Save()
logger.Info("Generated private validator", "path", privValFile)
}
// genesis file
genFile := config.GenesisFile()
if cmn.FileExists(genFile) {
logger.Info("Found genesis file", "path", genFile)
} else {
genDoc := types.GenesisDoc{
ChainID: cmn.Fmt("test-chain-%v", cmn.RandStr(6)),
}
genDoc.Validators = []types.GenesisValidator{{
PubKey: privValidator.GetPubKey(),
Power: 10,
}}
if err := genDoc.SaveAs(genFile); err != nil {
panic(err)
}
logger.Info("Generated genesis file", "path", genFile)
}
}


@@ -1,6 +1,9 @@
package commands
import (
"fmt"
"net/url"
"github.com/spf13/cobra"
cmn "github.com/tendermint/tmlibs/common"
@@ -32,12 +35,36 @@ var (
func init() {
LiteCmd.Flags().StringVar(&listenAddr, "laddr", ":8888", "Serve the proxy on the given port")
LiteCmd.Flags().StringVar(&nodeAddr, "node", "localhost:46657", "Connect to a Tendermint node at this address")
LiteCmd.Flags().StringVar(&nodeAddr, "node", "tcp://localhost:46657", "Connect to a Tendermint node at this address")
LiteCmd.Flags().StringVar(&chainID, "chain-id", "tendermint", "Specify the Tendermint chain ID")
LiteCmd.Flags().StringVar(&home, "home-dir", ".tendermint-lite", "Specify the home directory")
}
func ensureAddrHasSchemeOrDefaultToTCP(addr string) (string, error) {
u, err := url.Parse(addr)
if err != nil {
return "", err
}
switch u.Scheme {
case "tcp", "unix":
case "":
u.Scheme = "tcp"
default:
return "", fmt.Errorf("unknown scheme %q, use either tcp or unix", u.Scheme)
}
return u.String(), nil
}
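Illustrative behavior of the new helper, following the switch above (a sketch, not part of the diff):
// tcp:// and unix:// addresses pass through unchanged:
// ensureAddrHasSchemeOrDefaultToTCP("tcp://localhost:46657") -> "tcp://localhost:46657", nil
// any other scheme is rejected:
// ensureAddrHasSchemeOrDefaultToTCP("http://localhost:46657") -> "", error: unknown scheme "http", use either tcp or unix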
func runProxy(cmd *cobra.Command, args []string) error {
nodeAddr, err := ensureAddrHasSchemeOrDefaultToTCP(nodeAddr)
if err != nil {
return err
}
listenAddr, err := ensureAddrHasSchemeOrDefaultToTCP(listenAddr)
if err != nil {
return err
}
// First, connect a client
node := rpcclient.NewHTTP(nodeAddr, "/websocket")


@@ -19,7 +19,7 @@ var ProbeUpnpCmd = &cobra.Command{
func probeUpnp(cmd *cobra.Command, args []string) error {
capabilities, err := upnp.Probe(logger)
if err != nil {
fmt.Println("Probe failed: %v", err)
fmt.Println("Probe failed: ", err)
} else {
fmt.Println("Probe success!")
jsonBytes, err := json.Marshal(capabilities)


@@ -14,11 +14,15 @@ import (
var (
config = cfg.DefaultConfig()
logger = log.NewTMLogger(log.NewSyncWriter(os.Stdout)).With("module", "main")
logger = log.NewTMLogger(log.NewSyncWriter(os.Stdout))
)
func init() {
RootCmd.PersistentFlags().String("log_level", config.LogLevel, "Log level")
registerFlagsRootCmd(RootCmd)
}
func registerFlagsRootCmd(cmd *cobra.Command) {
cmd.PersistentFlags().String("log_level", config.LogLevel, "Log level")
}
// ParseConfig retrieves the default environment configuration,
@@ -53,6 +57,7 @@ var RootCmd = &cobra.Command{
if viper.GetBool(cli.TraceFlag) {
logger = log.NewTracingLogger(logger)
}
logger = logger.With("module", "main")
return nil
},
}


@@ -1,7 +1,10 @@
package commands
import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
"strconv"
"testing"
@@ -12,6 +15,7 @@ import (
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tmlibs/cli"
cmn "github.com/tendermint/tmlibs/common"
)
var (
@@ -22,89 +26,151 @@ const (
rootName = "root"
)
// isolate provides a clean setup and returns a copy of RootCmd you can
// modify in the test cases.
// NOTE: it unsets all TM* env variables.
func isolate(cmds ...*cobra.Command) cli.Executable {
// clearConfig clears env vars, the given root dir, and resets viper.
func clearConfig(dir string) {
if err := os.Unsetenv("TMHOME"); err != nil {
panic(err)
}
if err := os.Unsetenv("TM_HOME"); err != nil {
panic(err)
}
if err := os.RemoveAll(defaultRoot); err != nil {
if err := os.RemoveAll(dir); err != nil {
panic(err)
}
viper.Reset()
config = cfg.DefaultConfig()
r := &cobra.Command{
Use: rootName,
PersistentPreRunE: RootCmd.PersistentPreRunE,
}
r.AddCommand(cmds...)
wr := cli.PrepareBaseCmd(r, "TM", defaultRoot)
return wr
}
func TestRootConfig(t *testing.T) {
assert, require := assert.New(t), require.New(t)
// we pre-create a config file we can refer to in the rest of
// the test cases.
cvals := map[string]string{
"moniker": "monkey",
"fast_sync": "false",
// prepare new rootCmd
func testRootCmd() *cobra.Command {
rootCmd := &cobra.Command{
Use: RootCmd.Use,
PersistentPreRunE: RootCmd.PersistentPreRunE,
Run: func(cmd *cobra.Command, args []string) {},
}
// proper types of the above settings
cfast := false
conf, err := cli.WriteDemoConfig(cvals)
require.Nil(err)
registerFlagsRootCmd(rootCmd)
var l string
rootCmd.PersistentFlags().String("log", l, "Log")
return rootCmd
}
func testSetup(rootDir string, args []string, env map[string]string) error {
clearConfig(defaultRoot)
rootCmd := testRootCmd()
cmd := cli.PrepareBaseCmd(rootCmd, "TM", defaultRoot)
// run with the args and env
args = append([]string{rootCmd.Use}, args...)
return cli.RunWithArgs(cmd, args, env)
}
func TestRootHome(t *testing.T) {
newRoot := filepath.Join(defaultRoot, "something-else")
cases := []struct {
args []string
env map[string]string
root string
}{
{nil, nil, defaultRoot},
{[]string{"--home", newRoot}, nil, newRoot},
{nil, map[string]string{"TMHOME": newRoot}, newRoot},
}
for i, tc := range cases {
idxString := strconv.Itoa(i)
err := testSetup(defaultRoot, tc.args, tc.env)
require.Nil(t, err, idxString)
assert.Equal(t, tc.root, config.RootDir, idxString)
assert.Equal(t, tc.root, config.P2P.RootDir, idxString)
assert.Equal(t, tc.root, config.Consensus.RootDir, idxString)
assert.Equal(t, tc.root, config.Mempool.RootDir, idxString)
}
}
func TestRootFlagsEnv(t *testing.T) {
// defaults
defaults := cfg.DefaultConfig()
dmax := defaults.P2P.MaxNumPeers
defaultLogLvl := defaults.LogLevel
cases := []struct {
args []string
env map[string]string
root string
moniker string
fastSync bool
maxPeer int
logLevel string
}{
{nil, nil, defaultRoot, defaults.Moniker, defaults.FastSync, dmax},
// try multiple ways of setting root (two flags, cli vs. env)
{[]string{"--home", conf}, nil, conf, cvals["moniker"], cfast, dmax},
{nil, map[string]string{"TMHOME": conf}, conf, cvals["moniker"], cfast, dmax},
// check setting p2p subflags two different ways
{[]string{"--p2p.max_num_peers", "420"}, nil, defaultRoot, defaults.Moniker, defaults.FastSync, 420},
{nil, map[string]string{"TM_P2P_MAX_NUM_PEERS": "17"}, defaultRoot, defaults.Moniker, defaults.FastSync, 17},
// try to set env that have no flags attached...
{[]string{"--home", conf}, map[string]string{"TM_MONIKER": "funny"}, conf, "funny", cfast, dmax},
{[]string{"--log", "debug"}, nil, defaultLogLvl}, // wrong flag
{[]string{"--log_level", "debug"}, nil, "debug"}, // right flag
{nil, map[string]string{"TM_LOW": "debug"}, defaultLogLvl}, // wrong env flag
{nil, map[string]string{"MT_LOG_LEVEL": "debug"}, defaultLogLvl}, // wrong env prefix
{nil, map[string]string{"TM_LOG_LEVEL": "debug"}, "debug"}, // right env
}
for idx, tc := range cases {
i := strconv.Itoa(idx)
// test command that does nothing, except trigger unmarshalling in root
noop := &cobra.Command{
Use: "noop",
RunE: func(cmd *cobra.Command, args []string) error {
return nil
},
}
noop.Flags().Int("p2p.max_num_peers", defaults.P2P.MaxNumPeers, "")
cmd := isolate(noop)
for i, tc := range cases {
idxString := strconv.Itoa(i)
args := append([]string{rootName, noop.Use}, tc.args...)
err := cli.RunWithArgs(cmd, args, tc.env)
require.Nil(err, i)
assert.Equal(tc.root, config.RootDir, i)
assert.Equal(tc.root, config.P2P.RootDir, i)
assert.Equal(tc.root, config.Consensus.RootDir, i)
assert.Equal(tc.root, config.Mempool.RootDir, i)
assert.Equal(tc.moniker, config.Moniker, i)
assert.Equal(tc.fastSync, config.FastSync, i)
assert.Equal(tc.maxPeer, config.P2P.MaxNumPeers, i)
err := testSetup(defaultRoot, tc.args, tc.env)
require.Nil(t, err, idxString)
assert.Equal(t, tc.logLevel, config.LogLevel, idxString)
}
}
func TestRootConfig(t *testing.T) {
// write non-default config
nonDefaultLogLvl := "abc:debug"
cvals := map[string]string{
"log_level": nonDefaultLogLvl,
}
cases := []struct {
args []string
env map[string]string
logLvl string
}{
{nil, nil, nonDefaultLogLvl}, // should load config
{[]string{"--log_level=abc:info"}, nil, "abc:info"}, // flag over rides
{nil, map[string]string{"TM_LOG_LEVEL": "abc:info"}, "abc:info"}, // env over rides
}
for i, tc := range cases {
idxString := strconv.Itoa(i)
clearConfig(defaultRoot)
// XXX: path must match cfg.defaultConfigPath
configFilePath := filepath.Join(defaultRoot, "config")
err := cmn.EnsureDir(configFilePath, 0700)
require.Nil(t, err)
// write the non-defaults to a different path
// TODO: support writing sub configs so we can test that too
err = WriteConfigVals(configFilePath, cvals)
require.Nil(t, err)
rootCmd := testRootCmd()
cmd := cli.PrepareBaseCmd(rootCmd, "TM", defaultRoot)
// run with the args and env
tc.args = append([]string{rootCmd.Use}, tc.args...)
err = cli.RunWithArgs(cmd, tc.args, tc.env)
require.Nil(t, err, idxString)
assert.Equal(t, tc.logLvl, config.LogLevel, idxString)
}
}
// WriteConfigVals writes a toml file with the given values.
// It returns an error if writing was impossible.
func WriteConfigVals(dir string, vals map[string]string) error {
data := ""
for k, v := range vals {
data = data + fmt.Sprintf("%s = \"%s\"\n", k, v)
}
cfile := filepath.Join(dir, "config.toml")
return ioutil.WriteFile(cfile, []byte(data), 0666)
}


@@ -14,11 +14,14 @@ func AddNodeFlags(cmd *cobra.Command) {
// bind flags
cmd.Flags().String("moniker", config.Moniker, "Node Name")
// priv val flags
cmd.Flags().String("priv_validator_laddr", config.PrivValidatorListenAddr, "Socket address to listen on for connections from external priv_validator process")
// node flags
cmd.Flags().Bool("fast_sync", config.FastSync, "Fast blockchain syncing")
// abci flags
cmd.Flags().String("proxy_app", config.ProxyApp, "Proxy app address, or 'nilapp' or 'dummy' for local testing.")
cmd.Flags().String("proxy_app", config.ProxyApp, "Proxy app address, or 'nilapp' or 'kvstore' for local testing.")
cmd.Flags().String("abci", config.ABCI, "Specify abci transport (socket | grpc)")
// rpc flags
@@ -28,16 +31,19 @@ func AddNodeFlags(cmd *cobra.Command) {
// p2p flags
cmd.Flags().String("p2p.laddr", config.P2P.ListenAddress, "Node listen address. (0.0.0.0:0 means any interface, any port)")
cmd.Flags().String("p2p.seeds", config.P2P.Seeds, "Comma delimited host:port seed nodes")
cmd.Flags().String("p2p.seeds", config.P2P.Seeds, "Comma-delimited ID@host:port seed nodes")
cmd.Flags().String("p2p.persistent_peers", config.P2P.PersistentPeers, "Comma-delimited ID@host:port persistent peers")
cmd.Flags().Bool("p2p.skip_upnp", config.P2P.SkipUPNP, "Skip UPNP configuration")
cmd.Flags().Bool("p2p.pex", config.P2P.PexReactor, "Enable Peer-Exchange (dev feature)")
cmd.Flags().Bool("p2p.pex", config.P2P.PexReactor, "Enable/disable Peer-Exchange")
cmd.Flags().Bool("p2p.seed_mode", config.P2P.SeedMode, "Enable/disable seed mode")
cmd.Flags().String("p2p.private_peer_ids", config.P2P.PrivatePeerIDs, "Comma-delimited private peer IDs")
// consensus flags
cmd.Flags().Bool("consensus.create_empty_blocks", config.Consensus.CreateEmptyBlocks, "Set this to false to only produce blocks when there are txs or when the AppHash changes")
}
// NewRunNodeCmd returns the command that allows the CLI to start a
// node. It can be used with a custom PrivValidator and in-process ABCI application.
// NewRunNodeCmd returns the command that allows the CLI to start a node.
// It can be used with a custom PrivValidator and in-process ABCI application.
func NewRunNodeCmd(nodeProvider nm.NodeProvider) *cobra.Command {
cmd := &cobra.Command{
Use: "node",


@@ -0,0 +1,25 @@
package commands
import (
"fmt"
"github.com/spf13/cobra"
"github.com/tendermint/tendermint/p2p"
)
// ShowNodeIDCmd dumps node's ID to the standard output.
var ShowNodeIDCmd = &cobra.Command{
Use: "show_node_id",
Short: "Show this node's ID",
RunE: showNodeID,
}
func showNodeID(cmd *cobra.Command, args []string) error {
nodeKey, err := p2p.LoadOrGenNodeKey(config.NodeKeyFile())
if err != nil {
return err
}
fmt.Println(nodeKey.ID())
return nil
}


@@ -2,11 +2,12 @@ package commands
import (
"fmt"
"path"
"path/filepath"
"time"
"github.com/spf13/cobra"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/types"
cmn "github.com/tendermint/tmlibs/common"
)
@@ -35,6 +36,7 @@ var TestnetFilesCmd = &cobra.Command{
func testnetFiles(cmd *cobra.Command, args []string) {
genVals := make([]types.GenesisValidator, nValidators)
defaultConfig := cfg.DefaultBaseConfig()
// Initialize core dir and priv_validator.json's
for i := 0; i < nValidators; i++ {
@@ -44,7 +46,7 @@ func testnetFiles(cmd *cobra.Command, args []string) {
cmn.Exit(err.Error())
}
// Read priv_validator.json to populate vals
privValFile := path.Join(dataDir, mach, "priv_validator.json")
privValFile := filepath.Join(dataDir, mach, defaultConfig.PrivValidator)
privVal := types.LoadPrivValidatorFS(privValFile)
genVals[i] = types.GenesisValidator{
PubKey: privVal.GetPubKey(),
@@ -63,7 +65,7 @@ func testnetFiles(cmd *cobra.Command, args []string) {
// Write genesis file.
for i := 0; i < nValidators; i++ {
mach := cmn.Fmt("mach%d", i)
if err := genDoc.SaveAs(path.Join(dataDir, mach, "genesis.json")); err != nil {
if err := genDoc.SaveAs(filepath.Join(dataDir, mach, defaultConfig.Genesis)); err != nil {
panic(err)
}
}
@@ -73,14 +75,16 @@ func testnetFiles(cmd *cobra.Command, args []string) {
// Initialize per-machine core directory
func initMachCoreDirectory(base, mach string) error {
dir := path.Join(base, mach)
err := cmn.EnsureDir(dir, 0777)
// Create priv_validator.json file if not present
defaultConfig := cfg.DefaultBaseConfig()
dir := filepath.Join(base, mach)
privValPath := filepath.Join(dir, defaultConfig.PrivValidator)
dir = filepath.Dir(privValPath)
err := cmn.EnsureDir(dir, 0700)
if err != nil {
return err
}
// Create priv_validator.json file if not present
ensurePrivValidator(path.Join(dir, "priv_validator.json"))
ensurePrivValidator(privValPath)
return nil
}


@@ -2,10 +2,12 @@ package main
import (
"os"
"path/filepath"
"github.com/tendermint/tmlibs/cli"
cmd "github.com/tendermint/tendermint/cmd/tendermint/commands"
cfg "github.com/tendermint/tendermint/config"
nm "github.com/tendermint/tendermint/node"
)
@@ -22,6 +24,7 @@ func main() {
cmd.ResetPrivValidatorCmd,
cmd.ShowValidatorCmd,
cmd.TestnetFilesCmd,
cmd.ShowNodeIDCmd,
cmd.VersionCmd)
// NOTE:
@@ -37,7 +40,7 @@ func main() {
// Create & start node
rootCmd.AddCommand(cmd.NewRunNodeCmd(nodeFunc))
cmd := cli.PrepareBaseCmd(rootCmd, "TM", os.ExpandEnv("$HOME/.tendermint"))
cmd := cli.PrepareBaseCmd(rootCmd, "TM", os.ExpandEnv(filepath.Join("$HOME", cfg.DefaultTendermintDir)))
if err := cmd.Execute(); err != nil {
panic(err)
}

codecov.yml

@@ -0,0 +1,18 @@
coverage:
precision: 2
round: down
range: "70...100"
status:
project:
default:
threshold: 1%
patch: on
changes: off
comment:
layout: "diff, files"
behavior: default
require_changes: no
require_base: no
require_head: yes


@@ -7,6 +7,31 @@ import (
"time"
)
// NOTE: Most of the structs & relevant comments + the
// default configuration options were used to manually
// generate the config.toml. Please reflect any changes
// made here in the defaultConfigTemplate constant in
// config/toml.go
// NOTE: tmlibs/cli must know to look in the config dir!
var (
DefaultTendermintDir = ".tendermint"
defaultConfigDir = "config"
defaultDataDir = "data"
defaultConfigFileName = "config.toml"
defaultGenesisJSONName = "genesis.json"
defaultPrivValName = "priv_validator.json"
defaultNodeKeyName = "node_key.json"
defaultAddrBookName = "addrbook.json"
defaultConfigFilePath = filepath.Join(defaultConfigDir, defaultConfigFileName)
defaultGenesisJSONPath = filepath.Join(defaultConfigDir, defaultGenesisJSONName)
defaultPrivValPath = filepath.Join(defaultConfigDir, defaultPrivValName)
defaultNodeKeyPath = filepath.Join(defaultConfigDir, defaultNodeKeyName)
defaultAddrBookPath = filepath.Join(defaultConfigDir, defaultAddrBookName)
)
// Config defines the top level configuration for a Tendermint node
type Config struct {
// Top level options use an anonymous struct
@@ -38,9 +63,9 @@ func TestConfig() *Config {
BaseConfig: TestBaseConfig(),
RPC: TestRPCConfig(),
P2P: TestP2PConfig(),
Mempool: DefaultMempoolConfig(),
Mempool: TestMempoolConfig(),
Consensus: TestConsensusConfig(),
TxIndex: DefaultTxIndexConfig(),
TxIndex: TestTxIndexConfig(),
}
}
@@ -59,22 +84,30 @@ func (cfg *Config) SetRoot(root string) *Config {
// BaseConfig defines the base configuration for a Tendermint node
type BaseConfig struct {
// chainID is unexposed and immutable but here for convenience
chainID string
// The root directory for all data.
// This should be set in viper so it can unmarshal into this struct
RootDir string `mapstructure:"home"`
// The ID of the chain to join (should be signed with every transaction and vote)
ChainID string `mapstructure:"chain_id"`
// A JSON file containing the initial validator set and other meta data
// Path to the JSON file containing the initial validator set and other meta data
Genesis string `mapstructure:"genesis_file"`
// A JSON file containing the private key to use as a validator in the consensus protocol
// Path to the JSON file containing the private key to use as a validator in the consensus protocol
PrivValidator string `mapstructure:"priv_validator_file"`
// A JSON file containing the private key to use for p2p authenticated encryption
NodeKey string `mapstructure:"node_key_file"`
// A custom human readable name for this node
Moniker string `mapstructure:"moniker"`
// TCP or UNIX socket address for Tendermint to listen on for
// connections from an external PrivValidator process
PrivValidatorListenAddr string `mapstructure:"priv_validator_laddr"`
// TCP or UNIX socket address of the ABCI application,
// or the name of an ABCI application compiled in with the Tendermint binary
ProxyApp string `mapstructure:"proxy_app"`
@@ -104,11 +137,16 @@ type BaseConfig struct {
DBPath string `mapstructure:"db_dir"`
}
func (c BaseConfig) ChainID() string {
return c.chainID
}
// DefaultBaseConfig returns a default base configuration for a Tendermint node
func DefaultBaseConfig() BaseConfig {
return BaseConfig{
Genesis: "genesis.json",
PrivValidator: "priv_validator.json",
Genesis: defaultGenesisJSONPath,
PrivValidator: defaultPrivValPath,
NodeKey: defaultNodeKeyPath,
Moniker: defaultMoniker,
ProxyApp: "tcp://127.0.0.1:46658",
ABCI: "socket",
@@ -124,8 +162,8 @@ func DefaultBaseConfig() BaseConfig {
// TestBaseConfig returns a base configuration for testing a Tendermint node
func TestBaseConfig() BaseConfig {
conf := DefaultBaseConfig()
conf.ChainID = "tendermint_test"
conf.ProxyApp = "dummy"
conf.chainID = "tendermint_test"
conf.ProxyApp = "kvstore"
conf.FastSync = false
conf.DBBackend = "memdb"
return conf
@@ -141,6 +179,11 @@ func (b BaseConfig) PrivValidatorFile() string {
return rootify(b.PrivValidator, b.RootDir)
}
// NodeKeyFile returns the full path to the node_key.json file
func (b BaseConfig) NodeKeyFile() string {
return rootify(b.NodeKey, b.RootDir)
}
// DBDir returns the full path to the database directory
func (b BaseConfig) DBDir() string {
return rootify(b.DBPath, b.RootDir)
@@ -151,9 +194,10 @@ func DefaultLogLevel() string {
return "error"
}
// DefaultPackageLogLevels returns a default log level setting so all packages log at "error", while the `state` package logs at "info"
// DefaultPackageLogLevels returns a default log level setting so all packages
// log at "error", while the `state` and `main` packages log at "info"
func DefaultPackageLogLevels() string {
return fmt.Sprintf("state:info,*:%s", DefaultLogLevel())
return fmt.Sprintf("main:info,state:info,*:%s", DefaultLogLevel())
}
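For reference, a minimal runnable sketch of the new default (illustrative, not part of the diff):
package main
import (
"fmt"
cfg "github.com/tendermint/tendermint/config"
)
func main() {
// With DefaultLogLevel() returning "error" as shown above, this prints
// "main:info,state:info,*:error".
fmt.Println(cfg.DefaultPackageLogLevels())
}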
//-----------------------------------------------------------------------------
@@ -170,7 +214,7 @@ type RPCConfig struct {
// NOTE: This server only supports /broadcast_tx_commit
GRPCListenAddress string `mapstructure:"grpc_laddr"`
// Activate unsafe RPC commands like /dial_seeds and /unsafe_flush_mempool
// Activate unsafe RPC commands like /dial_persistent_peers and /unsafe_flush_mempool
Unsafe bool `mapstructure:"unsafe"`
}
@@ -203,8 +247,13 @@ type P2PConfig struct {
ListenAddress string `mapstructure:"laddr"`
// Comma separated list of seed nodes to connect to
// We only use these if we can't connect to peers in the addrbook
Seeds string `mapstructure:"seeds"`
// Comma separated list of nodes to keep persistent connections to
// Do not add private peers to this list if you don't want them advertised
PersistentPeers string `mapstructure:"persistent_peers"`
// Skip UPNP port forwarding
SkipUPNP bool `mapstructure:"skip_upnp"`
@@ -214,9 +263,6 @@ type P2PConfig struct {
// Set true for strict address routability rules
AddrBookStrict bool `mapstructure:"addr_book_strict"`
// Set true to enable the peer-exchange reactor
PexReactor bool `mapstructure:"pex"`
// Maximum number of peers to connect to
MaxNumPeers int `mapstructure:"max_num_peers"`
@@ -231,19 +277,37 @@ type P2PConfig struct {
// Rate at which packets can be received, in bytes/second
RecvRate int64 `mapstructure:"recv_rate"`
// Set true to enable the peer-exchange reactor
PexReactor bool `mapstructure:"pex"`
// Seed mode, in which node constantly crawls the network and looks for
// peers. If another node asks it for addresses, it responds and disconnects.
//
// Does not work if the peer-exchange reactor is disabled.
SeedMode bool `mapstructure:"seed_mode"`
// Authenticated encryption
AuthEnc bool `mapstructure:"auth_enc"`
// Comma separated list of peer IDs to keep private (will not be gossiped to other peers)
PrivatePeerIDs string `mapstructure:"private_peer_ids"`
}
// DefaultP2PConfig returns a default configuration for the peer-to-peer layer
func DefaultP2PConfig() *P2PConfig {
return &P2PConfig{
ListenAddress: "tcp://0.0.0.0:46656",
AddrBook: "addrbook.json",
AddrBook: defaultAddrBookPath,
AddrBookStrict: true,
MaxNumPeers: 50,
FlushThrottleTimeout: 100,
MaxMsgPacketPayloadSize: 1024, // 1 kB
SendRate: 512000, // 500 kB/s
RecvRate: 512000, // 500 kB/s
PexReactor: true,
SeedMode: false,
AuthEnc: true,
}
}
@@ -252,6 +316,7 @@ func TestP2PConfig() *P2PConfig {
conf := DefaultP2PConfig()
conf.ListenAddress = "tcp://0.0.0.0:36656"
conf.SkipUPNP = true
conf.FlushThrottleTimeout = 10
return conf
}
@@ -270,6 +335,7 @@ type MempoolConfig struct {
RecheckEmpty bool `mapstructure:"recheck_empty"`
Broadcast bool `mapstructure:"broadcast"`
WalPath string `mapstructure:"wal_dir"`
CacheSize int `mapstructure:"cache_size"`
}
// DefaultMempoolConfig returns a default configuration for the Tendermint mempool
@@ -278,10 +344,18 @@ func DefaultMempoolConfig() *MempoolConfig {
Recheck: true,
RecheckEmpty: true,
Broadcast: true,
WalPath: "data/mempool.wal",
WalPath: filepath.Join(defaultDataDir, "mempool.wal"),
CacheSize: 100000,
}
}
// TestMempoolConfig returns a configuration for testing the Tendermint mempool
func TestMempoolConfig() *MempoolConfig {
config := DefaultMempoolConfig()
config.CacheSize = 1000
return config
}
// WalDir returns the full path to the mempool's write-ahead log
func (m *MempoolConfig) WalDir() string {
return rootify(m.WalPath, m.RootDir)
@@ -298,7 +372,7 @@ type ConsensusConfig struct {
WalLight bool `mapstructure:"wal_light"`
walFile string // overrides WalPath if set
// All timeouts are in ms
// All timeouts are in milliseconds
TimeoutPropose int `mapstructure:"timeout_propose"`
TimeoutProposeDelta int `mapstructure:"timeout_propose_delta"`
TimeoutPrevote int `mapstructure:"timeout_prevote"`
@@ -318,7 +392,7 @@ type ConsensusConfig struct {
CreateEmptyBlocks bool `mapstructure:"create_empty_blocks"`
CreateEmptyBlocksInterval int `mapstructure:"create_empty_blocks_interval"`
// Reactor sleep duration parameters are in ms
// Reactor sleep duration parameters are in milliseconds
PeerGossipSleepDuration int `mapstructure:"peer_gossip_sleep_duration"`
PeerQueryMaj23SleepDuration int `mapstructure:"peer_query_maj23_sleep_duration"`
}
@@ -366,7 +440,7 @@ func (cfg *ConsensusConfig) PeerQueryMaj23Sleep() time.Duration {
// DefaultConsensusConfig returns a default configuration for the consensus service
func DefaultConsensusConfig() *ConsensusConfig {
return &ConsensusConfig{
WalPath: "data/cs.wal/wal",
WalPath: filepath.Join(defaultDataDir, "cs.wal", "wal"),
WalLight: false,
TimeoutPropose: 3000,
TimeoutProposeDelta: 500,
@@ -388,7 +462,7 @@ func DefaultConsensusConfig() *ConsensusConfig {
// TestConsensusConfig returns a configuration for testing the consensus service
func TestConsensusConfig() *ConsensusConfig {
config := DefaultConsensusConfig()
config.TimeoutPropose = 2000
config.TimeoutPropose = 100
config.TimeoutProposeDelta = 1
config.TimeoutPrevote = 10
config.TimeoutPrevoteDelta = 1
@@ -396,6 +470,8 @@ func TestConsensusConfig() *ConsensusConfig {
config.TimeoutPrecommitDelta = 1
config.TimeoutCommit = 10
config.SkipTimeoutCommit = true
config.PeerGossipSleepDuration = 5
config.PeerQueryMaj23SleepDuration = 250
return config
}
@@ -447,6 +523,11 @@ func DefaultTxIndexConfig() *TxIndexConfig {
}
}
// TestTxIndexConfig returns a default configuration for the transaction indexer.
func TestTxIndexConfig() *TxIndexConfig {
return DefaultTxIndexConfig()
}
//-----------------------------------------------------------------------------
// Utils


@@ -1,52 +1,231 @@
package config
import (
"bytes"
"os"
"path"
"path/filepath"
"strings"
"text/template"
cmn "github.com/tendermint/tmlibs/common"
)
var configTemplate *template.Template
func init() {
var err error
if configTemplate, err = template.New("configFileTemplate").Parse(defaultConfigTemplate); err != nil {
panic(err)
}
}
/****** these are for production settings ***********/
// EnsureRoot creates the root, config, and data directories if they don't exist,
// and panics if it fails.
func EnsureRoot(rootDir string) {
if err := cmn.EnsureDir(rootDir, 0700); err != nil {
cmn.PanicSanity(err.Error())
}
if err := cmn.EnsureDir(rootDir+"/data", 0700); err != nil {
if err := cmn.EnsureDir(filepath.Join(rootDir, defaultConfigDir), 0700); err != nil {
cmn.PanicSanity(err.Error())
}
if err := cmn.EnsureDir(filepath.Join(rootDir, defaultDataDir), 0700); err != nil {
cmn.PanicSanity(err.Error())
}
configFilePath := path.Join(rootDir, "config.toml")
configFilePath := filepath.Join(rootDir, defaultConfigFilePath)
// Write default config file if missing.
if !cmn.FileExists(configFilePath) {
cmn.MustWriteFile(configFilePath, []byte(defaultConfig(defaultMoniker)), 0644)
writeConfigFile(configFilePath)
}
}
var defaultConfigTmpl = `# This is a TOML config file.
// XXX: this func should probably be called by cmd/tendermint/commands/init.go
// alongside the writing of the genesis.json and priv_validator.json
func writeConfigFile(configFilePath string) {
var buffer bytes.Buffer
if err := configTemplate.Execute(&buffer, DefaultConfig()); err != nil {
panic(err)
}
cmn.MustWriteFile(configFilePath, buffer.Bytes(), 0644)
}
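A minimal sketch of how EnsureRoot and writeConfigFile play together under the new layout (illustrative, not part of the diff; the root path is a placeholder):
package main
import "github.com/tendermint/tendermint/config"
func main() {
// Creates /tmp/mynode/config/ and /tmp/mynode/data/ (0700), then renders the
// template below into /tmp/mynode/config/config.toml if that file is missing.
config.EnsureRoot("/tmp/mynode")
}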
// Note: any changes to the comments/variables/mapstructure
// must be reflected in the appropriate struct in config/config.go
const defaultConfigTemplate = `# This is a TOML config file.
# For more information, see https://github.com/toml-lang/toml
proxy_app = "tcp://127.0.0.1:46658"
moniker = "__MONIKER__"
fast_sync = true
db_backend = "leveldb"
log_level = "state:info,*:error"
##### main base config options #####
# TCP or UNIX socket address of the ABCI application,
# or the name of an ABCI application compiled in with the Tendermint binary
proxy_app = "{{ .BaseConfig.ProxyApp }}"
# A custom human readable name for this node
moniker = "{{ .BaseConfig.Moniker }}"
# If this node is many blocks behind the tip of the chain, FastSync
# allows it to catch up quickly by downloading blocks in parallel
# and verifying their commits
fast_sync = {{ .BaseConfig.FastSync }}
# Database backend: leveldb | memdb
db_backend = "{{ .BaseConfig.DBBackend }}"
# Database directory
db_path = "{{ .BaseConfig.DBPath }}"
# Output level for logging, including package level options
log_level = "{{ .BaseConfig.LogLevel }}"
##### additional base config options #####
# Path to the JSON file containing the initial validator set and other meta data
genesis_file = "{{ .BaseConfig.Genesis }}"
# Path to the JSON file containing the private key to use as a validator in the consensus protocol
priv_validator_file = "{{ .BaseConfig.PrivValidator }}"
# Path to the JSON file containing the private key to use for node authentication in the p2p protocol
node_key_file = "{{ .BaseConfig.NodeKey}}"
# Mechanism to connect to the ABCI application: socket | grpc
abci = "{{ .BaseConfig.ABCI }}"
# TCP or UNIX socket address for the profiling server to listen on
prof_laddr = "{{ .BaseConfig.ProfListenAddress }}"
# If true, query the ABCI app on connecting to a new peer
# so the app can decide if we should keep the connection or not
filter_peers = {{ .BaseConfig.FilterPeers }}
##### advanced configuration options #####
##### rpc server configuration options #####
[rpc]
laddr = "tcp://0.0.0.0:46657"
# TCP or UNIX socket address for the RPC server to listen on
laddr = "{{ .RPC.ListenAddress }}"
# TCP or UNIX socket address for the gRPC server to listen on
# NOTE: This server only supports /broadcast_tx_commit
grpc_laddr = "{{ .RPC.GRPCListenAddress }}"
# Activate unsafe RPC commands like /dial_seeds and /unsafe_flush_mempool
unsafe = {{ .RPC.Unsafe }}
##### peer to peer configuration options #####
[p2p]
laddr = "tcp://0.0.0.0:46656"
seeds = ""
`
func defaultConfig(moniker string) string {
return strings.Replace(defaultConfigTmpl, "__MONIKER__", moniker, -1)
}
# Address to listen for incoming connections
laddr = "{{ .P2P.ListenAddress }}"
# Comma separated list of seed nodes to connect to
seeds = ""
# Comma separated list of nodes to keep persistent connections to
# Do not add private peers to this list if you don't want them advertised
persistent_peers = ""
# Path to address book
addr_book_file = "{{ .P2P.AddrBook }}"
# Set true for strict address routability rules
addr_book_strict = {{ .P2P.AddrBookStrict }}
# Time to wait before flushing messages out on the connection, in ms
flush_throttle_timeout = {{ .P2P.FlushThrottleTimeout }}
# Maximum number of peers to connect to
max_num_peers = {{ .P2P.MaxNumPeers }}
# Maximum size of a message packet payload, in bytes
max_msg_packet_payload_size = {{ .P2P.MaxMsgPacketPayloadSize }}
# Rate at which packets can be sent, in bytes/second
send_rate = {{ .P2P.SendRate }}
# Rate at which packets can be received, in bytes/second
recv_rate = {{ .P2P.RecvRate }}
# Set true to enable the peer-exchange reactor
pex = {{ .P2P.PexReactor }}
# Seed mode, in which node constantly crawls the network and looks for
# peers. If another node asks it for addresses, it responds and disconnects.
#
# Does not work if the peer-exchange reactor is disabled.
seed_mode = {{ .P2P.SeedMode }}
# Authenticated encryption
auth_enc = {{ .P2P.AuthEnc }}
# Comma separated list of peer IDs to keep private (will not be gossiped to other peers)
private_peer_ids = "{{ .P2P.PrivatePeerIDs }}"
##### mempool configuration options #####
[mempool]
recheck = {{ .Mempool.Recheck }}
recheck_empty = {{ .Mempool.RecheckEmpty }}
broadcast = {{ .Mempool.Broadcast }}
wal_dir = "{{ .Mempool.WalPath }}"
##### consensus configuration options #####
[consensus]
wal_file = "{{ .Consensus.WalPath }}"
wal_light = {{ .Consensus.WalLight }}
# All timeouts are in milliseconds
timeout_propose = {{ .Consensus.TimeoutPropose }}
timeout_propose_delta = {{ .Consensus.TimeoutProposeDelta }}
timeout_prevote = {{ .Consensus.TimeoutPrevote }}
timeout_prevote_delta = {{ .Consensus.TimeoutPrevoteDelta }}
timeout_precommit = {{ .Consensus.TimeoutPrecommit }}
timeout_precommit_delta = {{ .Consensus.TimeoutPrecommitDelta }}
timeout_commit = {{ .Consensus.TimeoutCommit }}
# Make progress as soon as we have all the precommits (as if TimeoutCommit = 0)
skip_timeout_commit = {{ .Consensus.SkipTimeoutCommit }}
# BlockSize
max_block_size_txs = {{ .Consensus.MaxBlockSizeTxs }}
max_block_size_bytes = {{ .Consensus.MaxBlockSizeBytes }}
# EmptyBlocks mode and possible interval between empty blocks in seconds
create_empty_blocks = {{ .Consensus.CreateEmptyBlocks }}
create_empty_blocks_interval = {{ .Consensus.CreateEmptyBlocksInterval }}
# Reactor sleep duration parameters are in milliseconds
peer_gossip_sleep_duration = {{ .Consensus.PeerGossipSleepDuration }}
peer_query_maj23_sleep_duration = {{ .Consensus.PeerQueryMaj23SleepDuration }}
##### transactions indexer configuration options #####
[tx_index]
# What indexer to use for transactions
#
# Options:
# 1) "null" (default)
# 2) "kv" - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
indexer = "{{ .TxIndex.Indexer }}"
# Comma-separated list of tags to index (by default the only tag is tx hash)
#
# It's recommended to index only a subset of tags due to possible memory
# bloat. This, of course, depends on the indexer's DB and the volume of
# transactions.
index_tags = "{{ .TxIndex.IndexTags }}"
# When set to true, tells indexer to index all tags. Note this may not be
# desirable (see the comment above). IndexTags takes precedence over
# IndexAllTags (i.e. when given both, IndexTags will be indexed).
index_all_tags = {{ .TxIndex.IndexAllTags }}
`
/****** these are for test settings ***********/
@@ -69,17 +248,21 @@ func ResetTestRoot(testName string) *Config {
if err := cmn.EnsureDir(rootDir, 0700); err != nil {
cmn.PanicSanity(err.Error())
}
if err := cmn.EnsureDir(rootDir+"/data", 0700); err != nil {
if err := cmn.EnsureDir(filepath.Join(rootDir, defaultConfigDir), 0700); err != nil {
cmn.PanicSanity(err.Error())
}
if err := cmn.EnsureDir(filepath.Join(rootDir, defaultDataDir), 0700); err != nil {
cmn.PanicSanity(err.Error())
}
configFilePath := path.Join(rootDir, "config.toml")
genesisFilePath := path.Join(rootDir, "genesis.json")
privFilePath := path.Join(rootDir, "priv_validator.json")
baseConfig := DefaultBaseConfig()
configFilePath := filepath.Join(rootDir, defaultConfigFilePath)
genesisFilePath := filepath.Join(rootDir, baseConfig.Genesis)
privFilePath := filepath.Join(rootDir, baseConfig.PrivValidator)
// Write default config file if missing.
if !cmn.FileExists(configFilePath) {
cmn.MustWriteFile(configFilePath, []byte(testConfig(defaultMoniker)), 0644)
writeConfigFile(configFilePath)
}
if !cmn.FileExists(genesisFilePath) {
cmn.MustWriteFile(genesisFilePath, []byte(testGenesis), 0644)
@@ -91,28 +274,6 @@ func ResetTestRoot(testName string) *Config {
return config
}
var testConfigTmpl = `# This is a TOML config file.
# For more information, see https://github.com/toml-lang/toml
proxy_app = "dummy"
moniker = "__MONIKER__"
fast_sync = false
db_backend = "memdb"
log_level = "info"
[rpc]
laddr = "tcp://0.0.0.0:36657"
[p2p]
laddr = "tcp://0.0.0.0:36656"
seeds = ""
`
func testConfig(moniker string) (testConfig string) {
testConfig = strings.Replace(testConfigTmpl, "__MONIKER__", moniker, -1)
return
}
var testGenesis = `{
"genesis_time": "0001-01-01T00:00:00.000Z",
"chain_id": "tendermint_test",


@@ -4,6 +4,7 @@ import (
"io/ioutil"
"os"
"path/filepath"
"strings"
"testing"
"github.com/stretchr/testify/assert"
@@ -19,7 +20,7 @@ func ensureFiles(t *testing.T, rootDir string, files ...string) {
}
func TestEnsureRoot(t *testing.T) {
assert, require := assert.New(t), require.New(t)
require := require.New(t)
// setup temp dir for test
tmpDir, err := ioutil.TempDir("", "config-test")
@@ -30,15 +31,18 @@ func TestEnsureRoot(t *testing.T) {
EnsureRoot(tmpDir)
// make sure config is set properly
data, err := ioutil.ReadFile(filepath.Join(tmpDir, "config.toml"))
data, err := ioutil.ReadFile(filepath.Join(tmpDir, defaultConfigFilePath))
require.Nil(err)
assert.Equal([]byte(defaultConfig(defaultMoniker)), data)
if !checkConfig(string(data)) {
t.Fatalf("config file missing some information")
}
ensureFiles(t, tmpDir, "data")
}
func TestEnsureTestRoot(t *testing.T) {
assert, require := assert.New(t), require.New(t)
require := require.New(t)
testName := "ensureTestRoot"
@@ -47,11 +51,44 @@ func TestEnsureTestRoot(t *testing.T) {
rootDir := cfg.RootDir
// make sure config is set properly
data, err := ioutil.ReadFile(filepath.Join(rootDir, "config.toml"))
data, err := ioutil.ReadFile(filepath.Join(rootDir, defaultConfigFilePath))
require.Nil(err)
assert.Equal([]byte(testConfig(defaultMoniker)), data)
if !checkConfig(string(data)) {
t.Fatalf("config file missing some information")
}
// TODO: make sure the cfg returned and testconfig are the same!
ensureFiles(t, rootDir, "data", "genesis.json", "priv_validator.json")
baseConfig := DefaultBaseConfig()
ensureFiles(t, rootDir, defaultDataDir, baseConfig.Genesis, baseConfig.PrivValidator)
}
func checkConfig(configFile string) bool {
var valid bool
// list of words we expect in the config
var elems = []string{
"moniker",
"seeds",
"proxy_app",
"fast_sync",
"create_empty_blocks",
"peer",
"timeout",
"broadcast",
"send",
"addr",
"wal",
"propose",
"max",
"genesis",
}
for _, e := range elems {
if !strings.Contains(configFile, e) {
valid = false
break
} else {
valid = true
}
}
return valid
}


@@ -8,7 +8,6 @@ import (
"github.com/stretchr/testify/require"
crypto "github.com/tendermint/go-crypto"
data "github.com/tendermint/go-wire/data"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/types"
cmn "github.com/tendermint/tmlibs/common"
@@ -33,7 +32,9 @@ func TestByzantine(t *testing.T) {
css := randConsensusNet(N, "consensus_byzantine_test", newMockTickerFunc(false), newCounter)
// give the byzantine validator a normal ticker
css[0].SetTimeoutTicker(NewTimeoutTicker())
ticker := NewTimeoutTicker()
ticker.SetLogger(css[0].Logger)
css[0].SetTimeoutTicker(ticker)
switches := make([]*p2p.Switch, N)
p2pLogger := logger.With("module", "p2p")
@@ -45,9 +46,9 @@ func TestByzantine(t *testing.T) {
eventChans := make([]chan interface{}, N)
reactors := make([]p2p.Reactor, N)
for i := 0; i < N; i++ {
// make first val byzantine
if i == 0 {
css[i].privValidator = NewByzantinePrivValidator(css[i].privValidator)
// make byzantine
css[i].decideProposal = func(j int) func(int64, int) {
return func(height int64, round int) {
byzantineDecideProposalFunc(t, height, round, css[j], switches[j])
@@ -73,9 +74,11 @@ func TestByzantine(t *testing.T) {
var conRI p2p.Reactor // nolint: gotype, gosimple
conRI = conR
// make first val byzantine
if i == 0 {
conRI = NewByzantineReactor(conR)
}
reactors[i] = conRI
}
@@ -114,19 +117,19 @@ func TestByzantine(t *testing.T) {
// and the other block to peers[1] and peers[2].
// note peers and switches order don't match.
peers := switches[0].Peers().List()
// partition A
ind0 := getSwitchIndex(switches, peers[0])
// partition B
ind1 := getSwitchIndex(switches, peers[1])
ind2 := getSwitchIndex(switches, peers[2])
// connect the 2 peers in the larger partition
p2p.Connect2Switches(switches, ind1, ind2)
// wait for someone in the big partition to make a block
// wait for someone in the big partition (B) to make a block
<-eventChans[ind2]
t.Log("A block has been committed. Healing partition")
// connect the partitions
p2p.Connect2Switches(switches, ind0, ind1)
p2p.Connect2Switches(switches, ind0, ind2)
@@ -279,7 +282,7 @@ func NewByzantinePrivValidator(pv types.PrivValidator) *ByzantinePrivValidator {
}
}
func (privVal *ByzantinePrivValidator) GetAddress() data.Bytes {
func (privVal *ByzantinePrivValidator) GetAddress() types.Address {
return privVal.pv.GetAddress()
}
@@ -288,17 +291,17 @@ func (privVal *ByzantinePrivValidator) GetPubKey() crypto.PubKey {
}
func (privVal *ByzantinePrivValidator) SignVote(chainID string, vote *types.Vote) (err error) {
vote.Signature, err = privVal.Sign(types.SignBytes(chainID, vote))
vote.Signature, err = privVal.Sign(vote.SignBytes(chainID))
return err
}
func (privVal *ByzantinePrivValidator) SignProposal(chainID string, proposal *types.Proposal) (err error) {
proposal.Signature, _ = privVal.Sign(types.SignBytes(chainID, proposal))
proposal.Signature, _ = privVal.Sign(proposal.SignBytes(chainID))
return nil
}
func (privVal *ByzantinePrivValidator) SignHeartbeat(chainID string, heartbeat *types.Heartbeat) (err error) {
heartbeat.Signature, _ = privVal.Sign(types.SignBytes(chainID, heartbeat))
heartbeat.Signature, _ = privVal.Sign(heartbeat.SignBytes(chainID))
return nil
}


@@ -26,7 +26,7 @@ import (
"github.com/tendermint/tmlibs/log"
"github.com/tendermint/abci/example/counter"
"github.com/tendermint/abci/example/dummy"
"github.com/tendermint/abci/example/kvstore"
"github.com/go-kit/kit/log/term"
)
@@ -36,8 +36,8 @@ const (
)
// genesis, chain_id, priv_val
var config *cfg.Config // NOTE: must be reset for each _test.go file
var ensureTimeout = time.Second * 2
var config *cfg.Config // NOTE: must be reset for each _test.go file
var ensureTimeout = time.Second * 1 // must be a whole number of seconds because CreateEmptyBlocksInterval is measured in seconds
func ensureDir(dir string, mode os.FileMode) {
if err := cmn.EnsureDir(dir, mode); err != nil {
@@ -50,7 +50,7 @@ func ResetConfig(name string) *cfg.Config {
}
//-------------------------------------------------------------------------------
// validator stub (a dummy consensus peer we control)
// validator stub (a kvstore consensus peer we control)
type validatorStub struct {
Index int // Validator index. NOTE: we don't assume validator set changes.
@@ -74,10 +74,11 @@ func (vs *validatorStub) signVote(voteType byte, hash []byte, header types.PartS
ValidatorAddress: vs.PrivValidator.GetAddress(),
Height: vs.Height,
Round: vs.Round,
Timestamp: time.Now().UTC(),
Type: voteType,
BlockID: types.BlockID{hash, header},
}
err := vs.PrivValidator.SignVote(config.ChainID, vote)
err := vs.PrivValidator.SignVote(config.ChainID(), vote)
return vote, err
}
@@ -128,7 +129,7 @@ func decideProposal(cs1 *ConsensusState, vs *validatorStub, height int64, round
// Make proposal
polRound, polBlockID := cs1.Votes.POLInfo()
proposal = types.NewProposal(height, round, blockParts.Header(), polRound, polBlockID)
if err := vs.SignProposal(config.ChainID, proposal); err != nil {
if err := vs.SignProposal(cs1.state.ChainID, proposal); err != nil {
panic(err)
}
return
@@ -234,16 +235,16 @@ func subscribeToVoter(cs *ConsensusState, addr []byte) chan interface{} {
//-------------------------------------------------------------------------------
// consensus states
func newConsensusState(state *sm.State, pv types.PrivValidator, app abci.Application) *ConsensusState {
func newConsensusState(state sm.State, pv types.PrivValidator, app abci.Application) *ConsensusState {
return newConsensusStateWithConfig(config, state, pv, app)
}
func newConsensusStateWithConfig(thisConfig *cfg.Config, state *sm.State, pv types.PrivValidator, app abci.Application) *ConsensusState {
func newConsensusStateWithConfig(thisConfig *cfg.Config, state sm.State, pv types.PrivValidator, app abci.Application) *ConsensusState {
blockDB := dbm.NewMemDB()
return newConsensusStateWithConfigAndBlockStore(thisConfig, state, pv, app, blockDB)
}
func newConsensusStateWithConfigAndBlockStore(thisConfig *cfg.Config, state *sm.State, pv types.PrivValidator, app abci.Application, blockDB dbm.DB) *ConsensusState {
func newConsensusStateWithConfigAndBlockStore(thisConfig *cfg.Config, state sm.State, pv types.PrivValidator, app abci.Application, blockDB dbm.DB) *ConsensusState {
// Get BlockStore
blockStore := bc.NewBlockStore(blockDB)
@@ -259,9 +260,14 @@ func newConsensusStateWithConfigAndBlockStore(thisConfig *cfg.Config, state *sm.
mempool.EnableTxsAvailable()
}
// mock the evidence pool
evpool := types.MockEvidencePool{}
// Make ConsensusReactor
cs := NewConsensusState(thisConfig.Consensus, state, proxyAppConnCon, blockStore, mempool)
cs.SetLogger(log.TestingLogger())
stateDB := dbm.NewMemDB()
blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyAppConnCon, mempool, evpool)
cs := NewConsensusState(thisConfig.Consensus, state, blockExec, blockStore, mempool, evpool)
cs.SetLogger(log.TestingLogger().With("module", "consensus"))
cs.SetPrivValidator(pv)
eventBus := types.NewEventBus()
@@ -279,16 +285,6 @@ func loadPrivValidator(config *cfg.Config) *types.PrivValidatorFS {
return privValidator
}
func fixedConsensusStateDummy(config *cfg.Config, logger log.Logger) *ConsensusState {
stateDB := dbm.NewMemDB()
state, _ := sm.MakeGenesisStateFromFile(stateDB, config.GenesisFile())
state.SetLogger(logger.With("module", "state"))
privValidator := loadPrivValidator(config)
cs := newConsensusState(state, privValidator, dummy.NewDummyApplication())
cs.SetLogger(logger)
return cs
}
func randConsensusState(nValidators int) (*ConsensusState, []*validatorStub) {
// Get State
state, privVals := randGenesisState(nValidators, false, 10)
@@ -296,7 +292,6 @@ func randConsensusState(nValidators int) (*ConsensusState, []*validatorStub) {
vss := make([]*validatorStub, nValidators)
cs := newConsensusState(state, privVals[0], counter.NewCounterApplication(true))
cs.SetLogger(log.TestingLogger())
for i := 0; i < nValidators; i++ {
vss[i] = NewValidatorStub(privVals[i], i)
@@ -342,18 +337,16 @@ func consensusLogger() log.Logger {
}
}
return term.FgBgColor{}
})
}).With("module", "consensus")
}
func randConsensusNet(nValidators int, testName string, tickerFunc func() TimeoutTicker, appFunc func() abci.Application, configOpts ...func(*cfg.Config)) []*ConsensusState {
genDoc, privVals := randGenesisDoc(nValidators, false, 10)
genDoc, privVals := randGenesisDoc(nValidators, false, 30)
css := make([]*ConsensusState, nValidators)
logger := consensusLogger()
for i := 0; i < nValidators; i++ {
db := dbm.NewMemDB() // each state needs its own db
state, _ := sm.MakeGenesisState(db, genDoc)
state.SetLogger(logger.With("module", "state", "validator", i))
state.Save()
stateDB := dbm.NewMemDB() // each state needs its own db
state, _ := sm.LoadStateFromDBOrGenesisDoc(stateDB, genDoc)
thisConfig := ResetConfig(cmn.Fmt("%s_%d", testName, i))
for _, opt := range configOpts {
opt(thisConfig)
@@ -364,8 +357,8 @@ func randConsensusNet(nValidators int, testName string, tickerFunc func() Timeou
app.InitChain(abci.RequestInitChain{Validators: vals})
css[i] = newConsensusStateWithConfig(thisConfig, state, privVals[i], app)
css[i].SetLogger(logger.With("validator", i))
css[i].SetTimeoutTicker(tickerFunc())
css[i].SetLogger(logger.With("validator", i, "module", "consensus"))
}
return css
}
@@ -376,10 +369,8 @@ func randConsensusNetWithPeers(nValidators, nPeers int, testName string, tickerF
css := make([]*ConsensusState, nPeers)
logger := consensusLogger()
for i := 0; i < nPeers; i++ {
db := dbm.NewMemDB() // each state needs its own db
state, _ := sm.MakeGenesisState(db, genDoc)
state.SetLogger(logger.With("module", "state", "validator", i))
state.Save()
stateDB := dbm.NewMemDB() // each state needs its own db
state, _ := sm.LoadStateFromDBOrGenesisDoc(stateDB, genDoc)
thisConfig := ResetConfig(cmn.Fmt("%s_%d", testName, i))
ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
var privVal types.PrivValidator
@@ -395,8 +386,8 @@ func randConsensusNetWithPeers(nValidators, nPeers int, testName string, tickerF
app.InitChain(abci.RequestInitChain{Validators: vals})
css[i] = newConsensusStateWithConfig(thisConfig, state, privVal, app)
css[i].SetLogger(logger.With("validator", i))
css[i].SetTimeoutTicker(tickerFunc())
css[i].SetLogger(logger.With("validator", i, "module", "consensus"))
}
return css
}
@@ -426,19 +417,19 @@ func randGenesisDoc(numValidators int, randPower bool, minPower int64) (*types.G
privValidators[i] = privVal
}
sort.Sort(types.PrivValidatorsByAddress(privValidators))
return &types.GenesisDoc{
GenesisTime: time.Now(),
ChainID: config.ChainID,
ChainID: config.ChainID(),
Validators: validators,
}, privValidators
}
func randGenesisState(numValidators int, randPower bool, minPower int64) (*sm.State, []*types.PrivValidatorFS) {
func randGenesisState(numValidators int, randPower bool, minPower int64) (sm.State, []*types.PrivValidatorFS) {
genDoc, privValidators := randGenesisDoc(numValidators, randPower, minPower)
s0, _ := sm.MakeGenesisState(genDoc)
db := dbm.NewMemDB()
s0, _ := sm.MakeGenesisState(db, genDoc)
s0.SetLogger(log.TestingLogger().With("module", "state"))
s0.Save()
sm.SaveState(db, s0)
return s0, privValidators
}
@@ -497,7 +488,7 @@ func newCounter() abci.Application {
return counter.NewCounterApplication(true)
}
func newPersistentDummy() abci.Application {
dir, _ := ioutil.TempDir("/tmp", "persistent-dummy")
return dummy.NewPersistentDummyApplication(dir)
func newPersistentKVStore() abci.Application {
dir, _ := ioutil.TempDir("/tmp", "persistent-kvstore")
return kvstore.NewPersistentKVStoreApplication(dir)
}

View File

@@ -19,7 +19,7 @@ func init() {
config = ResetConfig("consensus_mempool_test")
}
func TestNoProgressUntilTxsAvailable(t *testing.T) {
func TestMempoolNoProgressUntilTxsAvailable(t *testing.T) {
config := ResetConfig("consensus_mempool_txs_available_test")
config.Consensus.CreateEmptyBlocks = false
state, privVals := randGenesisState(1, false, 10)
@@ -37,7 +37,7 @@ func TestNoProgressUntilTxsAvailable(t *testing.T) {
ensureNoNewStep(newBlockCh)
}
func TestProgressAfterCreateEmptyBlocksInterval(t *testing.T) {
func TestMempoolProgressAfterCreateEmptyBlocksInterval(t *testing.T) {
config := ResetConfig("consensus_mempool_txs_available_test")
config.Consensus.CreateEmptyBlocksInterval = int(ensureTimeout.Seconds())
state, privVals := randGenesisState(1, false, 10)
@@ -52,7 +52,7 @@ func TestProgressAfterCreateEmptyBlocksInterval(t *testing.T) {
ensureNewStep(newBlockCh) // until the CreateEmptyBlocksInterval has passed
}
func TestProgressInHigherRound(t *testing.T) {
func TestMempoolProgressInHigherRound(t *testing.T) {
config := ResetConfig("consensus_mempool_txs_available_test")
config.Consensus.CreateEmptyBlocks = false
state, privVals := randGenesisState(1, false, 10)
@@ -94,7 +94,7 @@ func deliverTxsRange(cs *ConsensusState, start, end int) {
}
}
func TestTxConcurrentWithCommit(t *testing.T) {
func TestMempoolTxConcurrentWithCommit(t *testing.T) {
state, privVals := randGenesisState(1, false, 10)
cs := newConsensusState(state, privVals[0], NewCounterApplication())
height, round := cs.Height, cs.Round
@@ -104,18 +104,19 @@ func TestTxConcurrentWithCommit(t *testing.T) {
go deliverTxsRange(cs, 0, NTxs)
startTestRound(cs, height, round)
ticker := time.NewTicker(time.Second * 20)
for nTxs := 0; nTxs < NTxs; {
ticker := time.NewTicker(time.Second * 30)
select {
case b := <-newBlockCh:
nTxs += b.(types.TMEventData).Unwrap().(types.EventDataNewBlock).Block.Header.NumTxs
evt := b.(types.TMEventData).Unwrap().(types.EventDataNewBlock)
nTxs += int(evt.Block.Header.NumTxs)
case <-ticker.C:
panic("Timed out waiting to commit blocks with transactions")
}
}
}
func TestRmBadTx(t *testing.T) {
func TestMempoolRmBadTx(t *testing.T) {
state, privVals := randGenesisState(1, false, 10)
app := NewCounterApplication()
cs := newConsensusState(state, privVals[0], app)
@@ -128,7 +129,7 @@ func TestRmBadTx(t *testing.T) {
assert.False(t, resDeliver.IsErr(), cmn.Fmt("expected no error. got %v", resDeliver))
resCommit := app.Commit()
assert.False(t, resCommit.IsErr(), cmn.Fmt("expected no error. got %v", resCommit))
assert.True(t, len(resCommit.Data) > 0)
emptyMempoolCh := make(chan struct{})
checkTxRespCh := make(chan struct{})
@@ -151,6 +152,7 @@ func TestRmBadTx(t *testing.T) {
txs := cs.mempool.Reap(1)
if len(txs) == 0 {
emptyMempoolCh <- struct{}{}
return
}
time.Sleep(10 * time.Millisecond)
}
@@ -222,10 +224,10 @@ func txAsUint64(tx []byte) uint64 {
func (app *CounterApplication) Commit() abci.ResponseCommit {
app.mempoolTxCount = app.txCount
if app.txCount == 0 {
return abci.ResponseCommit{Code: code.CodeTypeOK}
return abci.ResponseCommit{}
} else {
hash := make([]byte, 8)
binary.BigEndian.PutUint64(hash, uint64(app.txCount))
return abci.ResponseCommit{Code: code.CodeTypeOK, Data: hash}
return abci.ResponseCommit{Data: hash}
}
}
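For reference, a tiny self-contained sketch (not part of the diff) of the app-hash encoding used in Commit above: the transaction count is written as an 8-byte big-endian value, so a count of 5 yields the hash 0000000000000005.
package main
import (
	"encoding/binary"
	"fmt"
)
func main() {
	// Same encoding as CounterApplication.Commit above: 8-byte big-endian tx count.
	hash := make([]byte, 8)
	binary.BigEndian.PutUint64(hash, 5)
	fmt.Printf("%X\n", hash) // prints 0000000000000005
}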

View File

@@ -27,6 +27,8 @@ const (
VoteSetBitsChannel = byte(0x23)
maxConsensusMessageSize = 1048576 // 1MB; NOTE/TODO: keep in sync with types.PartSet sizes.
blocksToContributeToBecomeGoodPeer = 10000
)
//-----------------------------------------------------------------------------
@@ -82,7 +84,7 @@ func (conR *ConsensusReactor) OnStop() {
// SwitchToConsensus switches from fast_sync mode to consensus mode.
// It resets the state, turns off fast_sync, and starts the consensus state-machine
func (conR *ConsensusReactor) SwitchToConsensus(state *sm.State, blocksSynced int) {
func (conR *ConsensusReactor) SwitchToConsensus(state sm.State, blocksSynced int) {
conR.Logger.Info("SwitchToConsensus")
conR.conS.reconstructLastCommit(state)
// NOTE: The line below causes broadcastNewRoundStepRoutine() to
@@ -179,7 +181,7 @@ func (conR *ConsensusReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte)
_, msg, err := DecodeMessage(msgBytes)
if err != nil {
conR.Logger.Error("Error decoding message", "src", src, "chId", chID, "msg", msg, "err", err, "bytes", msgBytes)
// TODO punish peer?
conR.Switch.StopPeerForError(src, err)
return
}
conR.Logger.Debug("Receive", "src", src, "chId", chID, "msg", msg)
@@ -205,7 +207,11 @@ func (conR *ConsensusReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte)
return
}
// Peer claims to have a maj23 for some BlockID at H,R,S,
votes.SetPeerMaj23(msg.Round, msg.Type, ps.Peer.Key(), msg.BlockID)
err := votes.SetPeerMaj23(msg.Round, msg.Type, ps.Peer.ID(), msg.BlockID)
if err != nil {
conR.Switch.StopPeerForError(src, err)
return
}
// Respond with a VoteSetBitsMessage showing which votes we have.
// (and consequently shows which we don't have)
var ourVotes *cmn.BitArray
@@ -242,12 +248,15 @@ func (conR *ConsensusReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte)
switch msg := msg.(type) {
case *ProposalMessage:
ps.SetHasProposal(msg.Proposal)
conR.conS.peerMsgQueue <- msgInfo{msg, src.Key()}
conR.conS.peerMsgQueue <- msgInfo{msg, src.ID()}
case *ProposalPOLMessage:
ps.ApplyProposalPOLMessage(msg)
case *BlockPartMessage:
ps.SetHasProposalBlockPart(msg.Height, msg.Round, msg.Part.Index)
conR.conS.peerMsgQueue <- msgInfo{msg, src.Key()}
if numBlocks := ps.RecordBlockPart(msg); numBlocks%blocksToContributeToBecomeGoodPeer == 0 {
conR.Switch.MarkPeerAsGood(src)
}
conR.conS.peerMsgQueue <- msgInfo{msg, src.ID()}
default:
conR.Logger.Error(cmn.Fmt("Unknown message type %v", reflect.TypeOf(msg)))
}
@@ -266,8 +275,11 @@ func (conR *ConsensusReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte)
ps.EnsureVoteBitArrays(height, valSize)
ps.EnsureVoteBitArrays(height-1, lastCommitSize)
ps.SetHasVote(msg.Vote)
if blocks := ps.RecordVote(msg.Vote); blocks%blocksToContributeToBecomeGoodPeer == 0 {
conR.Switch.MarkPeerAsGood(src)
}
cs.peerMsgQueue <- msgInfo{msg, src.Key()}
cs.peerMsgQueue <- msgInfo{msg, src.ID()}
default:
// don't punish (leave room for soft upgrades)
@@ -376,7 +388,7 @@ func (conR *ConsensusReactor) startBroadcastRoutine() error {
edph := data.(types.TMEventData).Unwrap().(types.EventDataProposalHeartbeat)
conR.broadcastProposalHeartbeatMessage(edph)
}
case <-conR.Quit:
case <-conR.Quit():
conR.eventBus.UnsubscribeAll(ctx, subscriber)
return
}
@@ -646,7 +658,6 @@ OUTER_LOOP:
// Load the block commit for prs.Height,
// which contains precommit signatures for prs.Height.
commit := conR.conS.blockStore.LoadBlockCommit(prs.Height)
logger.Info("Loaded BlockCommit for catch-up", "height", prs.Height, "commit", commit)
if ps.PickSendVote(commit) {
logger.Debug("Picked Catchup commit to send", "height", prs.Height)
continue OUTER_LOOP
@@ -828,6 +839,21 @@ type PeerState struct {
mtx sync.Mutex
cstypes.PeerRoundState
stats *peerStateStats
}
// peerStateStats holds internal statistics for a peer.
type peerStateStats struct {
lastVoteHeight int64
votes int
lastBlockPartHeight int64
blockParts int
}
func (pss peerStateStats) String() string {
return fmt.Sprintf("peerStateStats{votes: %d, blockParts: %d}", pss.votes, pss.blockParts)
}
// NewPeerState returns a new PeerState for the given Peer
@@ -841,6 +867,7 @@ func NewPeerState(peer p2p.Peer) *PeerState {
LastCommitRound: -1,
CatchupCommitRound: -1,
},
stats: &peerStateStats{},
}
}
@@ -916,6 +943,7 @@ func (ps *PeerState) SetHasProposalBlockPart(height int64, round int, index int)
func (ps *PeerState) PickSendVote(votes types.VoteSetReader) bool {
if vote, ok := ps.PickVoteToSend(votes); ok {
msg := &VoteMessage{vote}
ps.logger.Debug("Sending vote message", "ps", ps, "vote", vote)
return ps.Peer.Send(VoteChannel, struct{ ConsensusMessage }{msg})
}
return false
@@ -1051,6 +1079,43 @@ func (ps *PeerState) ensureVoteBitArrays(height int64, numValidators int) {
}
}
// RecordVote updates internal statistics for this peer by recording the vote.
// It returns the total number of votes (1 per block). This essentially means
// the number of blocks for which the peer has been sending us votes.
func (ps *PeerState) RecordVote(vote *types.Vote) int {
if ps.stats.lastVoteHeight >= vote.Height {
return ps.stats.votes
}
ps.stats.lastVoteHeight = vote.Height
ps.stats.votes += 1
return ps.stats.votes
}
// VotesSent returns the number of blocks for which the peer has been sending us
// votes.
func (ps *PeerState) VotesSent() int {
return ps.stats.votes
}
// RecordBlockPart updates internal statistics for this peer by recording the block part.
// It returns the total number of block parts (1 per block). This essentially means
// the number of blocks for which the peer has been sending us block parts.
func (ps *PeerState) RecordBlockPart(bp *BlockPartMessage) int {
if ps.stats.lastBlockPartHeight >= bp.Height {
return ps.stats.blockParts
}
ps.stats.lastBlockPartHeight = bp.Height
ps.stats.blockParts += 1
return ps.stats.blockParts
}
// BlockPartsSent returns the number of blocks for which the peer has been sending
// us block parts.
func (ps *PeerState) BlockPartsSent() int {
return ps.stats.blockParts
}
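A minimal, hypothetical illustration (not from the diff) of the once-per-height counting used by RecordVote and RecordBlockPart above: repeated messages for an already-counted height do not increase the totals, so the reactor's modulo check against blocksToContributeToBecomeGoodPeer only fires once a peer has contributed to that many distinct blocks.
package main
import "fmt"
// stats mirrors the counting logic of peerStateStats above (illustrative only).
type stats struct {
	lastHeight int64
	count      int
}
func (s *stats) record(height int64) int {
	if s.lastHeight >= height {
		return s.count // already counted a contribution for this (or a later) height
	}
	s.lastHeight = height
	s.count++
	return s.count
}
func main() {
	s := &stats{}
	fmt.Println(s.record(2)) // 1: first contribution for height 2
	fmt.Println(s.record(2)) // 1: same height, not counted again
	fmt.Println(s.record(3)) // 2: a new height is counted
}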
// SetHasVote sets the given vote as known by the peer
func (ps *PeerState) SetHasVote(vote *types.Vote) {
ps.mtx.Lock()
@@ -1194,12 +1259,16 @@ func (ps *PeerState) String() string {
// StringIndented returns a string representation of the PeerState
func (ps *PeerState) StringIndented(indent string) string {
ps.mtx.Lock()
defer ps.mtx.Unlock()
return fmt.Sprintf(`PeerState{
%s Key %v
%s PRS %v
%s Key %v
%s PRS %v
%s Stats %v
%s}`,
indent, ps.Peer.Key(),
indent, ps.Peer.ID(),
indent, ps.PeerRoundState.StringIndented(indent+" "),
indent, ps.stats,
indent)
}
@@ -1344,7 +1413,7 @@ type HasVoteMessage struct {
// String returns a string representation.
func (m *HasVoteMessage) String() string {
return fmt.Sprintf("[HasVote VI:%v V:{%v/%02d/%v} VI:%v]", m.Index, m.Height, m.Round, m.Type, m.Index)
return fmt.Sprintf("[HasVote VI:%v V:{%v/%02d/%v}]", m.Index, m.Height, m.Round, m.Type)
}
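As a quick sanity check on the corrected format string (hypothetical values, not part of the diff):
package main
import "fmt"
func main() {
	// Illustrative values only, to show the corrected output shape.
	fmt.Printf("[HasVote VI:%v V:{%v/%02d/%v}]\n", 3, int64(100), 2, byte(1))
	// prints: [HasVote VI:3 V:{100/02/1}]
}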
//-------------------------------------

View File

@@ -4,17 +4,23 @@ import (
"context"
"fmt"
"os"
"runtime"
"runtime/pprof"
"sync"
"testing"
"time"
"github.com/tendermint/abci/example/dummy"
"github.com/tendermint/abci/example/kvstore"
wire "github.com/tendermint/tendermint/wire"
cmn "github.com/tendermint/tmlibs/common"
"github.com/tendermint/tmlibs/log"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/p2p"
p2pdummy "github.com/tendermint/tendermint/p2p/dummy"
"github.com/tendermint/tendermint/types"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -29,30 +35,24 @@ func startConsensusNet(t *testing.T, css []*ConsensusState, N int) ([]*Consensus
reactors := make([]*ConsensusReactor, N)
eventChans := make([]chan interface{}, N)
eventBuses := make([]*types.EventBus, N)
logger := consensusLogger()
for i := 0; i < N; i++ {
/*thisLogger, err := tmflags.ParseLogLevel("consensus:info,*:error", logger, "info")
/*logger, err := tmflags.ParseLogLevel("consensus:info,*:error", logger, "info")
if err != nil { t.Fatal(err)}*/
thisLogger := logger
reactors[i] = NewConsensusReactor(css[i], true) // so we don't start the consensus states
reactors[i].conS.SetLogger(thisLogger.With("validator", i))
reactors[i].SetLogger(thisLogger.With("validator", i))
eventBuses[i] = types.NewEventBus()
eventBuses[i].SetLogger(thisLogger.With("module", "events", "validator", i))
err := eventBuses[i].Start()
require.NoError(t, err)
reactors[i].SetLogger(css[i].Logger)
// eventBus is already started with the cs
eventBuses[i] = css[i].eventBus
reactors[i].SetEventBus(eventBuses[i])
eventChans[i] = make(chan interface{}, 1)
err = eventBuses[i].Subscribe(context.Background(), testSubscriber, types.EventQueryNewBlock, eventChans[i])
err := eventBuses[i].Subscribe(context.Background(), testSubscriber, types.EventQueryNewBlock, eventChans[i])
require.NoError(t, err)
}
// make connected switches and start all reactors
p2p.MakeConnectedSwitches(config.P2P, N, func(i int, s *p2p.Switch) *p2p.Switch {
s.AddReactor("CONSENSUS", reactors[i])
s.SetLogger(reactors[i].conS.Logger.With("module", "p2p"))
return s
}, p2p.Connect2Switches)
@@ -67,25 +67,28 @@ func startConsensusNet(t *testing.T, css []*ConsensusState, N int) ([]*Consensus
return reactors, eventChans, eventBuses
}
func stopConsensusNet(reactors []*ConsensusReactor, eventBuses []*types.EventBus) {
for _, r := range reactors {
func stopConsensusNet(logger log.Logger, reactors []*ConsensusReactor, eventBuses []*types.EventBus) {
logger.Info("stopConsensusNet", "n", len(reactors))
for i, r := range reactors {
logger.Info("stopConsensusNet: Stopping ConsensusReactor", "i", i)
r.Switch.Stop()
}
for _, b := range eventBuses {
for i, b := range eventBuses {
logger.Info("stopConsensusNet: Stopping eventBus", "i", i)
b.Stop()
}
logger.Info("stopConsensusNet: DONE", "n", len(reactors))
}
// Ensure a testnet makes blocks
func TestReactor(t *testing.T) {
func TestReactorBasic(t *testing.T) {
N := 4
css := randConsensusNet(N, "consensus_reactor_test", newMockTickerFunc(true), newCounter)
reactors, eventChans, eventBuses := startConsensusNet(t, css, N)
defer stopConsensusNet(reactors, eventBuses)
defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)
// wait till everyone makes the first new block
timeoutWaitGroup(t, N, func(wg *sync.WaitGroup, j int) {
timeoutWaitGroup(t, N, func(j int) {
<-eventChans[j]
wg.Done()
}, css)
}
@@ -97,7 +100,7 @@ func TestReactorProposalHeartbeats(t *testing.T) {
c.Consensus.CreateEmptyBlocks = false
})
reactors, eventChans, eventBuses := startConsensusNet(t, css, N)
defer stopConsensusNet(reactors, eventBuses)
defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)
heartbeatChans := make([]chan interface{}, N)
var err error
for i := 0; i < N; i++ {
@@ -106,9 +109,8 @@ func TestReactorProposalHeartbeats(t *testing.T) {
require.NoError(t, err)
}
// wait till everyone sends a proposal heartbeat
timeoutWaitGroup(t, N, func(wg *sync.WaitGroup, j int) {
timeoutWaitGroup(t, N, func(j int) {
<-heartbeatChans[j]
wg.Done()
}, css)
// send a tx
@@ -117,20 +119,126 @@ func TestReactorProposalHeartbeats(t *testing.T) {
}
// wait till everyone makes the first new block
timeoutWaitGroup(t, N, func(wg *sync.WaitGroup, j int) {
timeoutWaitGroup(t, N, func(j int) {
<-eventChans[j]
wg.Done()
}, css)
}
// Test we record block parts from other peers
func TestReactorRecordsBlockParts(t *testing.T) {
// create dummy peer
peer := p2pdummy.NewPeer()
ps := NewPeerState(peer).SetLogger(log.TestingLogger())
peer.Set(types.PeerStateKey, ps)
// create reactor
css := randConsensusNet(1, "consensus_reactor_records_block_parts_test", newMockTickerFunc(true), newPersistentKVStore)
reactor := NewConsensusReactor(css[0], false) // so we don't start the consensus states
reactor.SetEventBus(css[0].eventBus)
reactor.SetLogger(log.TestingLogger())
sw := p2p.MakeSwitch(cfg.DefaultP2PConfig(), 1, "testing", "123.123.123", func(i int, sw *p2p.Switch) *p2p.Switch { return sw })
reactor.SetSwitch(sw)
err := reactor.Start()
require.NoError(t, err)
defer reactor.Stop()
// 1) new block part
parts := types.NewPartSetFromData(cmn.RandBytes(100), 10)
msg := &BlockPartMessage{
Height: 2,
Round: 0,
Part: parts.GetPart(0),
}
bz, err := wire.MarshalBinary(struct{ ConsensusMessage }{msg})
require.NoError(t, err)
reactor.Receive(DataChannel, peer, bz)
assert.Equal(t, 1, ps.BlockPartsSent(), "number of block parts sent should have increased by 1")
// 2) block part with the same height, but different round
msg.Round = 1
bz, err = wire.MarshalBinary(struct{ ConsensusMessage }{msg})
require.NoError(t, err)
reactor.Receive(DataChannel, peer, bz)
assert.Equal(t, 1, ps.BlockPartsSent(), "number of block parts sent should stay the same")
// 3) block part from earlier height
msg.Height = 1
msg.Round = 0
bz, err = wire.MarshalBinary(struct{ ConsensusMessage }{msg})
require.NoError(t, err)
reactor.Receive(DataChannel, peer, bz)
assert.Equal(t, 1, ps.BlockPartsSent(), "number of block parts sent should stay the same")
}
// Test we record votes from other peers
func TestReactorRecordsVotes(t *testing.T) {
// create dummy peer
peer := p2pdummy.NewPeer()
ps := NewPeerState(peer).SetLogger(log.TestingLogger())
peer.Set(types.PeerStateKey, ps)
// create reactor
css := randConsensusNet(1, "consensus_reactor_records_votes_test", newMockTickerFunc(true), newPersistentKVStore)
reactor := NewConsensusReactor(css[0], false) // so we don't start the consensus states
reactor.SetEventBus(css[0].eventBus)
reactor.SetLogger(log.TestingLogger())
sw := p2p.MakeSwitch(cfg.DefaultP2PConfig(), 1, "testing", "123.123.123", func(i int, sw *p2p.Switch) *p2p.Switch { return sw })
reactor.SetSwitch(sw)
err := reactor.Start()
require.NoError(t, err)
defer reactor.Stop()
_, val := css[0].state.Validators.GetByIndex(0)
// 1) new vote
vote := &types.Vote{
ValidatorIndex: 0,
ValidatorAddress: val.Address,
Height: 2,
Round: 0,
Timestamp: time.Now().UTC(),
Type: types.VoteTypePrevote,
BlockID: types.BlockID{},
}
bz, err := wire.MarshalBinary(struct{ ConsensusMessage }{&VoteMessage{vote}})
require.NoError(t, err)
reactor.Receive(VoteChannel, peer, bz)
assert.Equal(t, 1, ps.VotesSent(), "number of votes sent should have increased by 1")
// 2) vote with the same height, but different round
vote.Round = 1
bz, err = wire.MarshalBinary(struct{ ConsensusMessage }{&VoteMessage{vote}})
require.NoError(t, err)
reactor.Receive(VoteChannel, peer, bz)
assert.Equal(t, 1, ps.VotesSent(), "number of votes sent should stay the same")
// 3) vote from earlier height
vote.Height = 1
vote.Round = 0
bz, err = wire.MarshalBinary(struct{ ConsensusMessage }{&VoteMessage{vote}})
require.NoError(t, err)
reactor.Receive(VoteChannel, peer, bz)
assert.Equal(t, 1, ps.VotesSent(), "number of votes sent should stay the same")
}
//-------------------------------------------------------------
// ensure we can make blocks despite cycling a validator set
func TestVotingPowerChange(t *testing.T) {
func TestReactorVotingPowerChange(t *testing.T) {
nVals := 4
css := randConsensusNet(nVals, "consensus_voting_power_changes_test", newMockTickerFunc(true), newPersistentDummy)
logger := log.TestingLogger()
css := randConsensusNet(nVals, "consensus_voting_power_changes_test", newMockTickerFunc(true), newPersistentKVStore)
reactors, eventChans, eventBuses := startConsensusNet(t, css, nVals)
defer stopConsensusNet(reactors, eventBuses)
defer stopConsensusNet(logger, reactors, eventBuses)
// map of active validators
activeVals := make(map[string]struct{})
@@ -139,20 +247,19 @@ func TestVotingPowerChange(t *testing.T) {
}
// wait till everyone makes block 1
timeoutWaitGroup(t, nVals, func(wg *sync.WaitGroup, j int) {
timeoutWaitGroup(t, nVals, func(j int) {
<-eventChans[j]
wg.Done()
}, css)
//---------------------------------------------------------------------------
t.Log("---------------------------- Testing changing the voting power of one validator a few times")
logger.Debug("---------------------------- Testing changing the voting power of one validator a few times")
val1PubKey := css[0].privValidator.GetPubKey()
updateValidatorTx := dummy.MakeValSetChangeTx(val1PubKey.Bytes(), 25)
updateValidatorTx := kvstore.MakeValSetChangeTx(val1PubKey.Bytes(), 25)
previousTotalVotingPower := css[0].GetRoundState().LastValidators.TotalVotingPower()
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css, updateValidatorTx)
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
waitForAndValidateBlockWithTx(t, nVals, activeVals, eventChans, css, updateValidatorTx)
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
@@ -160,11 +267,11 @@ func TestVotingPowerChange(t *testing.T) {
t.Fatalf("expected voting power to change (before: %d, after: %d)", previousTotalVotingPower, css[0].GetRoundState().LastValidators.TotalVotingPower())
}
updateValidatorTx = dummy.MakeValSetChangeTx(val1PubKey.Bytes(), 2)
updateValidatorTx = kvstore.MakeValSetChangeTx(val1PubKey.Bytes(), 2)
previousTotalVotingPower = css[0].GetRoundState().LastValidators.TotalVotingPower()
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css, updateValidatorTx)
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
waitForAndValidateBlockWithTx(t, nVals, activeVals, eventChans, css, updateValidatorTx)
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
@@ -172,11 +279,11 @@ func TestVotingPowerChange(t *testing.T) {
t.Fatalf("expected voting power to change (before: %d, after: %d)", previousTotalVotingPower, css[0].GetRoundState().LastValidators.TotalVotingPower())
}
updateValidatorTx = dummy.MakeValSetChangeTx(val1PubKey.Bytes(), 100)
updateValidatorTx = kvstore.MakeValSetChangeTx(val1PubKey.Bytes(), 26)
previousTotalVotingPower = css[0].GetRoundState().LastValidators.TotalVotingPower()
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css, updateValidatorTx)
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
waitForAndValidateBlockWithTx(t, nVals, activeVals, eventChans, css, updateValidatorTx)
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
@@ -185,13 +292,15 @@ func TestVotingPowerChange(t *testing.T) {
}
}
func TestValidatorSetChanges(t *testing.T) {
func TestReactorValidatorSetChanges(t *testing.T) {
nPeers := 7
nVals := 4
css := randConsensusNetWithPeers(nVals, nPeers, "consensus_val_set_changes_test", newMockTickerFunc(true), newPersistentDummy)
css := randConsensusNetWithPeers(nVals, nPeers, "consensus_val_set_changes_test", newMockTickerFunc(true), newPersistentKVStore)
logger := log.TestingLogger()
reactors, eventChans, eventBuses := startConsensusNet(t, css, nPeers)
defer stopConsensusNet(reactors, eventBuses)
defer stopConsensusNet(logger, reactors, eventBuses)
// map of active validators
activeVals := make(map[string]struct{})
@@ -200,16 +309,15 @@ func TestValidatorSetChanges(t *testing.T) {
}
// wait till everyone makes block 1
timeoutWaitGroup(t, nPeers, func(wg *sync.WaitGroup, j int) {
timeoutWaitGroup(t, nPeers, func(j int) {
<-eventChans[j]
wg.Done()
}, css)
//---------------------------------------------------------------------------
t.Log("---------------------------- Testing adding one validator")
logger.Info("---------------------------- Testing adding one validator")
newValidatorPubKey1 := css[nVals].privValidator.GetPubKey()
newValidatorTx1 := dummy.MakeValSetChangeTx(newValidatorPubKey1.Bytes(), testMinPower)
newValidatorTx1 := kvstore.MakeValSetChangeTx(newValidatorPubKey1.Bytes(), testMinPower)
// wait till everyone makes block 2
// ensure the commit includes all validators
@@ -218,7 +326,7 @@ func TestValidatorSetChanges(t *testing.T) {
// wait till everyone makes block 3.
// it includes the commit for block 2, which is by the original validator set
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
waitForAndValidateBlockWithTx(t, nPeers, activeVals, eventChans, css, newValidatorTx1)
// wait till everyone makes block 4.
// it includes the commit for block 3, which is by the original validator set
@@ -232,14 +340,14 @@ func TestValidatorSetChanges(t *testing.T) {
waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, eventChans, css)
//---------------------------------------------------------------------------
t.Log("---------------------------- Testing changing the voting power of one validator")
logger.Info("---------------------------- Testing changing the voting power of one validator")
updateValidatorPubKey1 := css[nVals].privValidator.GetPubKey()
updateValidatorTx1 := dummy.MakeValSetChangeTx(updateValidatorPubKey1.Bytes(), 25)
updateValidatorTx1 := kvstore.MakeValSetChangeTx(updateValidatorPubKey1.Bytes(), 25)
previousTotalVotingPower := css[nVals].GetRoundState().LastValidators.TotalVotingPower()
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css, updateValidatorTx1)
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
waitForAndValidateBlockWithTx(t, nPeers, activeVals, eventChans, css, updateValidatorTx1)
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, eventChans, css)
@@ -248,29 +356,29 @@ func TestValidatorSetChanges(t *testing.T) {
}
//---------------------------------------------------------------------------
t.Log("---------------------------- Testing adding two validators at once")
logger.Info("---------------------------- Testing adding two validators at once")
newValidatorPubKey2 := css[nVals+1].privValidator.GetPubKey()
newValidatorTx2 := dummy.MakeValSetChangeTx(newValidatorPubKey2.Bytes(), testMinPower)
newValidatorTx2 := kvstore.MakeValSetChangeTx(newValidatorPubKey2.Bytes(), testMinPower)
newValidatorPubKey3 := css[nVals+2].privValidator.GetPubKey()
newValidatorTx3 := dummy.MakeValSetChangeTx(newValidatorPubKey3.Bytes(), testMinPower)
newValidatorTx3 := kvstore.MakeValSetChangeTx(newValidatorPubKey3.Bytes(), testMinPower)
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css, newValidatorTx2, newValidatorTx3)
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
waitForAndValidateBlockWithTx(t, nPeers, activeVals, eventChans, css, newValidatorTx2, newValidatorTx3)
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
activeVals[string(newValidatorPubKey2.Address())] = struct{}{}
activeVals[string(newValidatorPubKey3.Address())] = struct{}{}
waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, eventChans, css)
//---------------------------------------------------------------------------
t.Log("---------------------------- Testing removing two validators at once")
logger.Info("---------------------------- Testing removing two validators at once")
removeValidatorTx2 := dummy.MakeValSetChangeTx(newValidatorPubKey2.Bytes(), 0)
removeValidatorTx3 := dummy.MakeValSetChangeTx(newValidatorPubKey3.Bytes(), 0)
removeValidatorTx2 := kvstore.MakeValSetChangeTx(newValidatorPubKey2.Bytes(), 0)
removeValidatorTx3 := kvstore.MakeValSetChangeTx(newValidatorPubKey3.Bytes(), 0)
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css, removeValidatorTx2, removeValidatorTx3)
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
waitForAndValidateBlockWithTx(t, nPeers, activeVals, eventChans, css, removeValidatorTx2, removeValidatorTx3)
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
delete(activeVals, string(newValidatorPubKey2.Address()))
delete(activeVals, string(newValidatorPubKey3.Address()))
@@ -287,61 +395,85 @@ func TestReactorWithTimeoutCommit(t *testing.T) {
}
reactors, eventChans, eventBuses := startConsensusNet(t, css, N-1)
defer stopConsensusNet(reactors, eventBuses)
defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)
// wait till everyone makes the first new block
timeoutWaitGroup(t, N-1, func(wg *sync.WaitGroup, j int) {
timeoutWaitGroup(t, N-1, func(j int) {
<-eventChans[j]
wg.Done()
}, css)
}
func waitForAndValidateBlock(t *testing.T, n int, activeVals map[string]struct{}, eventChans []chan interface{}, css []*ConsensusState, txs ...[]byte) {
timeoutWaitGroup(t, n, func(wg *sync.WaitGroup, j int) {
defer wg.Done()
timeoutWaitGroup(t, n, func(j int) {
css[j].Logger.Debug("waitForAndValidateBlock")
newBlockI, ok := <-eventChans[j]
if !ok {
return
}
newBlock := newBlockI.(types.TMEventData).Unwrap().(types.EventDataNewBlock).Block
t.Logf("Got block height=%v validator=%v", newBlock.Height, j)
css[j].Logger.Debug("waitForAndValidateBlock: Got block", "height", newBlock.Height)
err := validateBlock(newBlock, activeVals)
if err != nil {
t.Fatal(err)
}
assert.Nil(t, err)
for _, tx := range txs {
if err = css[j].mempool.CheckTx(tx, nil); err != nil {
t.Fatal(err)
}
css[j].mempool.CheckTx(tx, nil)
assert.Nil(t, err)
}
}, css)
}
func waitForAndValidateBlockWithTx(t *testing.T, n int, activeVals map[string]struct{}, eventChans []chan interface{}, css []*ConsensusState, txs ...[]byte) {
timeoutWaitGroup(t, n, func(j int) {
ntxs := 0
BLOCK_TX_LOOP:
for {
css[j].Logger.Debug("waitForAndValidateBlockWithTx", "ntxs", ntxs)
newBlockI, ok := <-eventChans[j]
if !ok {
return
}
newBlock := newBlockI.(types.TMEventData).Unwrap().(types.EventDataNewBlock).Block
css[j].Logger.Debug("waitForAndValidateBlockWithTx: Got block", "height", newBlock.Height)
err := validateBlock(newBlock, activeVals)
assert.Nil(t, err)
// check that txs match the txs we're waiting for.
// note they could be spread over multiple blocks,
// but they should be in order.
for _, tx := range newBlock.Data.Txs {
assert.EqualValues(t, txs[ntxs], tx)
ntxs += 1
}
if ntxs == len(txs) {
break BLOCK_TX_LOOP
}
}
}, css)
}
func waitForBlockWithUpdatedValsAndValidateIt(t *testing.T, n int, updatedVals map[string]struct{}, eventChans []chan interface{}, css []*ConsensusState) {
timeoutWaitGroup(t, n, func(wg *sync.WaitGroup, j int) {
defer wg.Done()
timeoutWaitGroup(t, n, func(j int) {
var newBlock *types.Block
LOOP:
for {
css[j].Logger.Debug("waitForBlockWithUpdatedValsAndValidateIt")
newBlockI, ok := <-eventChans[j]
if !ok {
return
}
newBlock = newBlockI.(types.TMEventData).Unwrap().(types.EventDataNewBlock).Block
if newBlock.LastCommit.Size() == len(updatedVals) {
t.Logf("Block with new validators height=%v validator=%v", newBlock.Height, j)
css[j].Logger.Debug("waitForBlockWithUpdatedValsAndValidateIt: Got block", "height", newBlock.Height)
break LOOP
} else {
t.Logf("Block with no new validators height=%v validator=%v. Skipping...", newBlock.Height, j)
css[j].Logger.Debug("waitForBlockWithUpdatedValsAndValidateIt: Got block with no new validators. Skipping", "height", newBlock.Height)
}
}
err := validateBlock(newBlock, updatedVals)
if err != nil {
t.Fatal(err)
}
assert.Nil(t, err)
}, css)
}
@@ -359,11 +491,14 @@ func validateBlock(block *types.Block, activeVals map[string]struct{}) error {
return nil
}
func timeoutWaitGroup(t *testing.T, n int, f func(*sync.WaitGroup, int), css []*ConsensusState) {
func timeoutWaitGroup(t *testing.T, n int, f func(int), css []*ConsensusState) {
wg := new(sync.WaitGroup)
wg.Add(n)
for i := 0; i < n; i++ {
go f(wg, i)
go func(j int) {
f(j)
wg.Done()
}(i)
}
done := make(chan struct{})
@@ -374,7 +509,7 @@ func timeoutWaitGroup(t *testing.T, n int, f func(*sync.WaitGroup, int), css []*
// we're running many nodes in-process, possibly in a virtual machine,
// and spewing debug messages - making a block could take a while,
timeout := time.Second * 60
timeout := time.Second * 300
select {
case <-done:
@@ -385,7 +520,15 @@ func timeoutWaitGroup(t *testing.T, n int, f func(*sync.WaitGroup, int), css []*
t.Log(cs.GetRoundState())
t.Log("")
}
os.Stdout.Write([]byte("pprof.Lookup('goroutine'):\n"))
pprof.Lookup("goroutine").WriteTo(os.Stdout, 1)
capture()
panic("Timed out waiting for all validators to commit a block")
}
}
func capture() {
trace := make([]byte, 10240000)
count := runtime.Stack(trace, true)
fmt.Printf("Stack of %d bytes: %s\n", count, trace)
}
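A minimal sketch (illustrative, not the test code itself) of the wait-group pattern adopted in timeoutWaitGroup above: the helper owns the WaitGroup and wraps each callback, so callers no longer pass a *sync.WaitGroup or call Done themselves.
package main
import (
	"fmt"
	"sync"
)
// runAll runs f(0..n-1) concurrently and waits for all of them,
// mirroring the refactored timeoutWaitGroup signature above.
func runAll(n int, f func(int)) {
	wg := new(sync.WaitGroup)
	wg.Add(n)
	for i := 0; i < n; i++ {
		go func(j int) {
			defer wg.Done()
			f(j)
		}(i)
	}
	wg.Wait()
}
func main() {
	runAll(3, func(j int) { fmt.Println("worker", j) })
}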

View File

@@ -2,7 +2,7 @@ package consensus
import (
"bytes"
"errors"
"encoding/json"
"fmt"
"hash/crc32"
"io"
@@ -14,6 +14,7 @@ import (
abci "github.com/tendermint/abci/types"
//auto "github.com/tendermint/tmlibs/autofile"
cmn "github.com/tendermint/tmlibs/common"
dbm "github.com/tendermint/tmlibs/db"
"github.com/tendermint/tmlibs/log"
"github.com/tendermint/tendermint/proxy"
@@ -61,21 +62,21 @@ func (cs *ConsensusState) readReplayMessage(msg *TimedWALMessage, newStepCh chan
}
}
case msgInfo:
peerKey := m.PeerKey
if peerKey == "" {
peerKey = "local"
peerID := m.PeerID
if peerID == "" {
peerID = "local"
}
switch msg := m.Msg.(type) {
case *ProposalMessage:
p := msg.Proposal
cs.Logger.Info("Replay: Proposal", "height", p.Height, "round", p.Round, "header",
p.BlockPartsHeader, "pol", p.POLRound, "peer", peerKey)
p.BlockPartsHeader, "pol", p.POLRound, "peer", peerID)
case *BlockPartMessage:
cs.Logger.Info("Replay: BlockPart", "height", msg.Height, "round", msg.Round, "peer", peerKey)
cs.Logger.Info("Replay: BlockPart", "height", msg.Height, "round", msg.Round, "peer", peerID)
case *VoteMessage:
v := msg.Vote
cs.Logger.Info("Replay: Vote", "height", v.Height, "round", v.Round, "type", v.Type,
"blockID", v.BlockID, "peer", peerKey)
"blockID", v.BlockID, "peer", peerID)
}
cs.handleMsg(m)
@@ -95,10 +96,13 @@ func (cs *ConsensusState) catchupReplay(csHeight int64) error {
cs.replayMode = true
defer func() { cs.replayMode = false }()
// Ensure that ENDHEIGHT for this height doesn't exist
// NOTE: This is just a sanity check. As far as we know things work fine without it,
// and Handshake could reuse ConsensusState if it weren't for this check (since we can crash after writing ENDHEIGHT).
gr, found, err := cs.wal.SearchForEndHeight(csHeight)
// Ensure that ENDHEIGHT for this height doesn't exist.
// NOTE: This is just a sanity check. As far as we know things work fine
// without it, and Handshake could reuse ConsensusState if it weren't for
// this check (since we can crash after writing ENDHEIGHT).
//
// Ignore data corruption errors since this is a sanity check.
gr, found, err := cs.wal.SearchForEndHeight(csHeight, &WALSearchOptions{IgnoreDataCorruptionErrors: true})
if err != nil {
return err
}
@@ -112,14 +116,16 @@ func (cs *ConsensusState) catchupReplay(csHeight int64) error {
}
// Search for last height marker
gr, found, err = cs.wal.SearchForEndHeight(csHeight - 1)
//
// Ignore data corruption errors in previous heights because we only care about the last height
gr, found, err = cs.wal.SearchForEndHeight(csHeight-1, &WALSearchOptions{IgnoreDataCorruptionErrors: true})
if err == io.EOF {
cs.Logger.Error("Replay: wal.group.Search returned EOF", "#ENDHEIGHT", csHeight-1)
} else if err != nil {
return err
}
if !found {
return errors.New(cmn.Fmt("Cannot replay height %d. WAL does not contain #ENDHEIGHT for %d.", csHeight, csHeight-1))
return fmt.Errorf("Cannot replay height %d. WAL does not contain #ENDHEIGHT for %d.", csHeight, csHeight-1)
}
defer gr.Close() // nolint: errcheck
@@ -132,9 +138,13 @@ func (cs *ConsensusState) catchupReplay(csHeight int64) error {
msg, err = dec.Decode()
if err == io.EOF {
break
} else if IsDataCorruptionError(err) {
cs.Logger.Debug("data has been corrupted in last height of consensus WAL", "err", err, "height", csHeight)
panic(fmt.Sprintf("data has been corrupted (%v) in last height %d of consensus WAL", err, csHeight))
} else if err != nil {
return err
}
// NOTE: since the priv key is set when the msgs are received
// it will attempt to eg double sign but we can just ignore it
// since the votes will be replayed and we'll get to the next step
@@ -178,15 +188,26 @@ func makeHeightSearchFunc(height int64) auto.SearchFunc {
// we were last and using the WAL to recover there
type Handshaker struct {
state *sm.State
store types.BlockStore
logger log.Logger
stateDB dbm.DB
initialState sm.State
store types.BlockStore
appState json.RawMessage
logger log.Logger
nBlocks int // number of blocks applied to the state
}
func NewHandshaker(state *sm.State, store types.BlockStore) *Handshaker {
return &Handshaker{state, store, log.NewNopLogger(), 0}
func NewHandshaker(stateDB dbm.DB, state sm.State,
store types.BlockStore, appState json.RawMessage) *Handshaker {
return &Handshaker{
stateDB: stateDB,
initialState: state,
store: store,
appState: appState,
logger: log.NewNopLogger(),
nBlocks: 0,
}
}
func (h *Handshaker) SetLogger(l log.Logger) {
@@ -202,7 +223,7 @@ func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
// handshake is done via info request on the query conn
res, err := proxyApp.Query().InfoSync(abci.RequestInfo{version.Version})
if err != nil {
return errors.New(cmn.Fmt("Error calling Info: %v", err))
return fmt.Errorf("Error calling Info: %v", err)
}
blockHeight := int64(res.LastBlockHeight)
@@ -216,9 +237,9 @@ func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
// TODO: check version
// replay blocks up to the latest in the blockstore
_, err = h.ReplayBlocks(appHash, blockHeight, proxyApp)
_, err = h.ReplayBlocks(h.initialState, appHash, blockHeight, proxyApp)
if err != nil {
return errors.New(cmn.Fmt("Error on replay: %v", err))
return fmt.Errorf("Error on replay: %v", err)
}
h.logger.Info("Completed ABCI Handshake - Tendermint and App are synced", "appHeight", blockHeight, "appHash", fmt.Sprintf("%X", appHash))
@@ -230,23 +251,28 @@ func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
// Replay all blocks since appBlockHeight and ensure the result matches the current state.
// Returns the final AppHash or an error
func (h *Handshaker) ReplayBlocks(appHash []byte, appBlockHeight int64, proxyApp proxy.AppConns) ([]byte, error) {
func (h *Handshaker) ReplayBlocks(state sm.State, appHash []byte, appBlockHeight int64, proxyApp proxy.AppConns) ([]byte, error) {
storeBlockHeight := h.store.Height()
stateBlockHeight := h.state.LastBlockHeight
stateBlockHeight := state.LastBlockHeight
h.logger.Info("ABCI Replay Blocks", "appHeight", appBlockHeight, "storeHeight", storeBlockHeight, "stateHeight", stateBlockHeight)
// If appBlockHeight == 0 it means that we are at genesis and hence should send InitChain
if appBlockHeight == 0 {
validators := types.TM2PB.Validators(h.state.Validators)
if _, err := proxyApp.Consensus().InitChainSync(abci.RequestInitChain{validators}); err != nil {
validators := types.TM2PB.Validators(state.Validators)
req := abci.RequestInitChain{
Validators: validators,
AppStateBytes: h.appState,
}
_, err := proxyApp.Consensus().InitChainSync(req)
if err != nil {
return nil, err
}
}
// First handle edge cases and constraints on the storeBlockHeight
if storeBlockHeight == 0 {
return appHash, h.checkAppHash(appHash)
return appHash, checkAppHash(state, appHash)
} else if storeBlockHeight < appBlockHeight {
// the app should never be ahead of the store (but this is under app's control)
@@ -261,6 +287,7 @@ func (h *Handshaker) ReplayBlocks(appHash []byte, appBlockHeight int64, proxyApp
cmn.PanicSanity(cmn.Fmt("StoreBlockHeight (%d) > StateBlockHeight + 1 (%d)", storeBlockHeight, stateBlockHeight+1))
}
var err error
// Now either store is equal to state, or one ahead.
// For each, consider all cases of where the app could be, given app <= store
if storeBlockHeight == stateBlockHeight {
@@ -268,11 +295,11 @@ func (h *Handshaker) ReplayBlocks(appHash []byte, appBlockHeight int64, proxyApp
// Either the app is asking for replay, or we're all synced up.
if appBlockHeight < storeBlockHeight {
// the app is behind, so replay blocks, but no need to go through WAL (state is already synced to store)
return h.replayBlocks(proxyApp, appBlockHeight, storeBlockHeight, false)
return h.replayBlocks(state, proxyApp, appBlockHeight, storeBlockHeight, false)
} else if appBlockHeight == storeBlockHeight {
// We're good!
return appHash, h.checkAppHash(appHash)
return appHash, checkAppHash(state, appHash)
}
} else if storeBlockHeight == stateBlockHeight+1 {
@@ -281,7 +308,7 @@ func (h *Handshaker) ReplayBlocks(appHash []byte, appBlockHeight int64, proxyApp
if appBlockHeight < stateBlockHeight {
// the app is further behind than it should be, so replay blocks
// but leave the last block to go through the WAL
return h.replayBlocks(proxyApp, appBlockHeight, storeBlockHeight, true)
return h.replayBlocks(state, proxyApp, appBlockHeight, storeBlockHeight, true)
} else if appBlockHeight == stateBlockHeight {
// We haven't run Commit (both the state and app are one block behind),
@@ -289,14 +316,19 @@ func (h *Handshaker) ReplayBlocks(appHash []byte, appBlockHeight int64, proxyApp
// NOTE: We could instead use the cs.WAL on cs.Start,
// but we'd have to allow the WAL to replay a block that wrote its ENDHEIGHT
h.logger.Info("Replay last block using real app")
return h.replayBlock(storeBlockHeight, proxyApp.Consensus())
state, err = h.replayBlock(state, storeBlockHeight, proxyApp.Consensus())
return state.AppHash, err
} else if appBlockHeight == storeBlockHeight {
// We ran Commit, but didn't save the state, so replayBlock with mock app
abciResponses := h.state.LoadABCIResponses()
abciResponses, err := sm.LoadABCIResponses(h.stateDB, storeBlockHeight)
if err != nil {
return nil, err
}
mockApp := newMockProxyApp(appHash, abciResponses)
h.logger.Info("Replay last block using mock app")
return h.replayBlock(storeBlockHeight, mockApp)
state, err = h.replayBlock(state, storeBlockHeight, mockApp)
return state.AppHash, err
}
}
@@ -305,13 +337,14 @@ func (h *Handshaker) ReplayBlocks(appHash []byte, appBlockHeight int64, proxyApp
return nil, nil
}
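To make the case analysis above easier to follow, here is a small, hypothetical sketch (assuming app <= store and store equal to state or one ahead, as the comments state) that maps each height combination to the action ReplayBlocks takes; it is an illustration, not the Handshaker code.
package main
import "fmt"
// replayAction summarizes the branches of ReplayBlocks above for given
// app, store and state heights (illustrative only).
func replayAction(appH, storeH, stateH int64) string {
	switch {
	case storeH == 0:
		return "nothing stored yet: just check the app hash"
	case storeH == stateH && appH < storeH:
		return "replay blocks appH+1..storeH against the app (no WAL needed)"
	case storeH == stateH && appH == storeH:
		return "already in sync: check the app hash"
	case storeH == stateH+1 && appH < stateH:
		return "replay blocks, leaving the last one to go through the WAL"
	case storeH == stateH+1 && appH == stateH:
		return "replay the last block using the real app"
	case storeH == stateH+1 && appH == storeH:
		return "replay the last block using a mock app built from saved ABCI responses"
	default:
		return "unexpected combination"
	}
}
func main() {
	fmt.Println(replayAction(5, 6, 5)) // replay the last block using the real app
}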
func (h *Handshaker) replayBlocks(proxyApp proxy.AppConns, appBlockHeight, storeBlockHeight int64, mutateState bool) ([]byte, error) {
func (h *Handshaker) replayBlocks(state sm.State, proxyApp proxy.AppConns, appBlockHeight, storeBlockHeight int64, mutateState bool) ([]byte, error) {
// App is further behind than it should be, so we need to replay blocks.
// We replay all blocks from appBlockHeight+1.
//
// Note that we don't have an old version of the state,
// so we by-pass state validation/mutation using sm.ExecCommitBlock.
// This also means we won't be saving validator sets if they change during this period.
// TODO: Load the historical information to fix this and just use state.ApplyBlock
//
// If mutateState == true, the final block is replayed with h.replayBlock()
@@ -334,31 +367,37 @@ func (h *Handshaker) replayBlocks(proxyApp proxy.AppConns, appBlockHeight, store
if mutateState {
// sync the final block
return h.replayBlock(storeBlockHeight, proxyApp.Consensus())
state, err = h.replayBlock(state, storeBlockHeight, proxyApp.Consensus())
if err != nil {
return nil, err
}
appHash = state.AppHash
}
return appHash, h.checkAppHash(appHash)
return appHash, checkAppHash(state, appHash)
}
// ApplyBlock on the proxyApp with the last block.
func (h *Handshaker) replayBlock(height int64, proxyApp proxy.AppConnConsensus) ([]byte, error) {
mempool := types.MockMempool{}
func (h *Handshaker) replayBlock(state sm.State, height int64, proxyApp proxy.AppConnConsensus) (sm.State, error) {
block := h.store.LoadBlock(height)
meta := h.store.LoadBlockMeta(height)
if err := h.state.ApplyBlock(types.NopEventBus{}, proxyApp, block, meta.BlockID.PartsHeader, mempool); err != nil {
return nil, err
blockExec := sm.NewBlockExecutor(h.stateDB, h.logger, proxyApp, types.MockMempool{}, types.MockEvidencePool{})
var err error
state, err = blockExec.ApplyBlock(state, meta.BlockID, block)
if err != nil {
return sm.State{}, err
}
h.nBlocks += 1
return h.state.AppHash, nil
return state, nil
}
func (h *Handshaker) checkAppHash(appHash []byte) error {
if !bytes.Equal(h.state.AppHash, appHash) {
panic(errors.New(cmn.Fmt("Tendermint state.AppHash does not match AppHash after replay. Got %X, expected %X", appHash, h.state.AppHash)).Error())
func checkAppHash(state sm.State, appHash []byte) error {
if !bytes.Equal(state.AppHash, appHash) {
panic(fmt.Errorf("Tendermint state.AppHash does not match AppHash after replay. Got %X, expected %X", appHash, state.AppHash).Error())
}
return nil
}
@@ -400,5 +439,5 @@ func (mock *mockProxyApp) EndBlock(req abci.RequestEndBlock) abci.ResponseEndBlo
}
func (mock *mockProxyApp) Commit() abci.ResponseCommit {
return abci.ResponseCommit{Code: abci.CodeTypeOK, Data: mock.appHash}
return abci.ResponseCommit{Data: mock.appHash}
}

View File

@@ -18,6 +18,7 @@ import (
"github.com/tendermint/tendermint/types"
cmn "github.com/tendermint/tmlibs/common"
dbm "github.com/tendermint/tmlibs/db"
"github.com/tendermint/tmlibs/log"
)
const (
@@ -104,11 +105,11 @@ type playback struct {
count int // how many lines/msgs into the file are we
// replays can be reset to beginning
fileName string // so we can close/reopen the file
genesisState *sm.State // so the replay session knows where to restart from
fileName string // so we can close/reopen the file
genesisState sm.State // so the replay session knows where to restart from
}
func newPlayback(fileName string, fp *os.File, cs *ConsensusState, genState *sm.State) *playback {
func newPlayback(fileName string, fp *os.File, cs *ConsensusState, genState sm.State) *playback {
return &playback{
cs: cs,
fp: fp,
@@ -123,7 +124,8 @@ func (pb *playback) replayReset(count int, newStepCh chan interface{}) error {
pb.cs.Stop()
pb.cs.Wait()
newCS := NewConsensusState(pb.cs.config, pb.genesisState.Copy(), pb.cs.proxyAppConn, pb.cs.blockStore, pb.cs.mempool)
newCS := NewConsensusState(pb.cs.config, pb.genesisState.Copy(), pb.cs.blockExec,
pb.cs.blockStore, pb.cs.mempool, pb.cs.evpool)
newCS.SetEventBus(pb.cs.eventBus)
newCS.startForReplay()
@@ -278,20 +280,26 @@ func (pb *playback) replayConsoleLoop() int {
// convenience for replay mode
func newConsensusStateForReplay(config cfg.BaseConfig, csConfig *cfg.ConsensusConfig) *ConsensusState {
dbType := dbm.DBBackendType(config.DBBackend)
// Get BlockStore
blockStoreDB := dbm.NewDB("blockstore", config.DBBackend, config.DBDir())
blockStoreDB := dbm.NewDB("blockstore", dbType, config.DBDir())
blockStore := bc.NewBlockStore(blockStoreDB)
// Get State
stateDB := dbm.NewDB("state", config.DBBackend, config.DBDir())
state, err := sm.MakeGenesisStateFromFile(stateDB, config.GenesisFile())
stateDB := dbm.NewDB("state", dbType, config.DBDir())
gdoc, err := sm.MakeGenesisDocFromFile(config.GenesisFile())
if err != nil {
cmn.Exit(err.Error())
}
state, err := sm.MakeGenesisState(gdoc)
if err != nil {
cmn.Exit(err.Error())
}
// Create proxyAppConn connection (consensus, mempool, query)
clientCreator := proxy.DefaultClientCreator(config.ProxyApp, config.ABCI, config.DBDir())
proxyApp := proxy.NewAppConns(clientCreator, NewHandshaker(state, blockStore))
proxyApp := proxy.NewAppConns(clientCreator,
NewHandshaker(stateDB, state, blockStore, gdoc.AppState()))
err = proxyApp.Start()
if err != nil {
cmn.Exit(cmn.Fmt("Error starting proxy app conns: %v", err))
@@ -302,7 +310,11 @@ func newConsensusStateForReplay(config cfg.BaseConfig, csConfig *cfg.ConsensusCo
cmn.Exit(cmn.Fmt("Failed to start event bus: %v", err))
}
consensusState := NewConsensusState(csConfig, state.Copy(), proxyApp.Consensus(), blockStore, types.MockMempool{})
mempool, evpool := types.MockMempool{}, types.MockEvidencePool{}
blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyApp.Consensus(), mempool, evpool)
consensusState := NewConsensusState(csConfig, state.Copy(), blockExec,
blockStore, mempool, evpool)
consensusState.SetEventBus(eventBus)
return consensusState

View File

@@ -15,7 +15,7 @@ import (
"github.com/stretchr/testify/require"
"github.com/tendermint/abci/example/dummy"
"github.com/tendermint/abci/example/kvstore"
abci "github.com/tendermint/abci/types"
crypto "github.com/tendermint/go-crypto"
wire "github.com/tendermint/go-wire"
@@ -44,13 +44,6 @@ func init() {
// the `Handshake Tests` are for failures in applying the block.
// With the help of the WAL, we can recover from it all!
// NOTE: Files in this dir are generated by running the `build.sh` therein.
// It's a simple way to generate wals for a single block, or multiple blocks, with random transactions,
// and different part sizes. The output is not deterministic.
// It should only have to be re-run if there is some breaking change to the consensus data structures (eg. blocks, votes)
// or to the behaviour of the app (eg. computes app hash differently)
var data_dir = path.Join(cmn.GoPath(), "src/github.com/tendermint/tendermint/consensus", "test_data")
//------------------------------------------------------------------------------------------
// WAL Tests
@@ -60,10 +53,9 @@ var data_dir = path.Join(cmn.GoPath(), "src/github.com/tendermint/tendermint/con
func startNewConsensusStateAndWaitForBlock(t *testing.T, lastBlockHeight int64, blockDB dbm.DB, stateDB dbm.DB) {
logger := log.TestingLogger()
state, _ := sm.GetState(stateDB, consensusReplayConfig.GenesisFile())
state.SetLogger(logger.With("module", "state"))
state, _ := sm.LoadStateFromDBOrGenesisFile(stateDB, consensusReplayConfig.GenesisFile())
privValidator := loadPrivValidator(consensusReplayConfig)
cs := newConsensusStateWithConfigAndBlockStore(consensusReplayConfig, state, privValidator, dummy.NewDummyApplication(), blockDB)
cs := newConsensusStateWithConfigAndBlockStore(consensusReplayConfig, state, privValidator, kvstore.NewKVStoreApplication(), blockDB)
cs.SetLogger(logger)
bytes, _ := ioutil.ReadFile(cs.config.WalFile())
@@ -72,9 +64,7 @@ func startNewConsensusStateAndWaitForBlock(t *testing.T, lastBlockHeight int64,
err := cs.Start()
require.NoError(t, err)
defer func() {
cs.Stop()
}()
defer cs.Stop()
// This is just a signal that we haven't halted; it's not something contained
// in the WAL itself. Assuming the consensus state is running, replay of any
@@ -85,19 +75,19 @@ func startNewConsensusStateAndWaitForBlock(t *testing.T, lastBlockHeight int64,
require.NoError(t, err)
select {
case <-newBlockCh:
case <-time.After(10 * time.Second):
case <-time.After(60 * time.Second):
t.Fatalf("Timed out waiting for new block (see trace above)")
}
}
func sendTxs(cs *ConsensusState, ctx context.Context) {
i := 0
for {
for i := 0; i < 256; i++ {
select {
case <-ctx.Done():
return
default:
cs.mempool.CheckTx([]byte{byte(i)}, nil)
tx := []byte{byte(i)}
cs.mempool.CheckTx(tx, nil)
i++
}
}
@@ -107,23 +97,24 @@ func sendTxs(cs *ConsensusState, ctx context.Context) {
func TestWALCrash(t *testing.T) {
testCases := []struct {
name string
initFn func(*ConsensusState, context.Context)
initFn func(dbm.DB, *ConsensusState, context.Context)
heightToStop int64
}{
{"empty block",
func(cs *ConsensusState, ctx context.Context) {},
func(stateDB dbm.DB, cs *ConsensusState, ctx context.Context) {},
1},
{"block with a smaller part size",
func(cs *ConsensusState, ctx context.Context) {
func(stateDB dbm.DB, cs *ConsensusState, ctx context.Context) {
// XXX: is there a better way to change BlockPartSizeBytes?
params := cs.state.Params
params.BlockPartSizeBytes = 512
cs.state.Params = params
sendTxs(cs, ctx)
cs.state.ConsensusParams.BlockPartSizeBytes = 512
sm.SaveState(stateDB, cs.state)
go sendTxs(cs, ctx)
},
1},
{"many non-empty blocks",
sendTxs,
func(stateDB dbm.DB, cs *ConsensusState, ctx context.Context) {
go sendTxs(cs, ctx)
},
3},
}
@@ -134,7 +125,7 @@ func TestWALCrash(t *testing.T) {
}
}
func crashWALandCheckLiveness(t *testing.T, initFn func(*ConsensusState, context.Context), heightToStop int64) {
func crashWALandCheckLiveness(t *testing.T, initFn func(dbm.DB, *ConsensusState, context.Context), heightToStop int64) {
walPaniced := make(chan error)
crashingWal := &crashingWAL{panicCh: walPaniced, heightToStop: heightToStop}
@@ -147,16 +138,15 @@ LOOP:
// create consensus state from a clean slate
logger := log.NewNopLogger()
stateDB := dbm.NewMemDB()
state, _ := sm.MakeGenesisStateFromFile(stateDB, consensusReplayConfig.GenesisFile())
state.SetLogger(logger.With("module", "state"))
state, _ := sm.MakeGenesisStateFromFile(consensusReplayConfig.GenesisFile())
privValidator := loadPrivValidator(consensusReplayConfig)
blockDB := dbm.NewMemDB()
cs := newConsensusStateWithConfigAndBlockStore(consensusReplayConfig, state, privValidator, dummy.NewDummyApplication(), blockDB)
cs := newConsensusStateWithConfigAndBlockStore(consensusReplayConfig, state, privValidator, kvstore.NewKVStoreApplication(), blockDB)
cs.SetLogger(logger)
// start sending transactions
ctx, cancel := context.WithCancel(context.Background())
go initFn(cs, ctx)
initFn(stateDB, cs, ctx)
// clean up WAL file from the previous iteration
walFile := cs.config.WalFile()
@@ -253,8 +243,8 @@ func (w *crashingWAL) Save(m WALMessage) {
}
func (w *crashingWAL) Group() *auto.Group { return w.next.Group() }
func (w *crashingWAL) SearchForEndHeight(height int64) (gr *auto.GroupReader, found bool, err error) {
return w.next.SearchForEndHeight(height)
func (w *crashingWAL) SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error) {
return w.next.SearchForEndHeight(height, options)
}
func (w *crashingWAL) Start() error { return w.next.Start() }
@@ -270,6 +260,7 @@ const (
var (
mempool = types.MockMempool{}
evpool = types.MockEvidencePool{}
)
//---------------------------------------
@@ -344,35 +335,39 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
if err := wal.Start(); err != nil {
t.Fatal(err)
}
defer wal.Stop()
chain, commits, err := makeBlockchainFromWAL(wal)
if err != nil {
t.Fatalf(err.Error())
}
state, store := stateAndStore(config, privVal.GetPubKey())
stateDB, state, store := stateAndStore(config, privVal.GetPubKey())
store.chain = chain
store.commits = commits
// run the chain through state.ApplyBlock to build up the tendermint state
latestAppHash := buildTMStateFromChain(config, state, chain, mode)
state = buildTMStateFromChain(config, stateDB, state, chain, mode)
latestAppHash := state.AppHash
// make a new client creator
dummyApp := dummy.NewPersistentDummyApplication(path.Join(config.DBDir(), "2"))
clientCreator2 := proxy.NewLocalClientCreator(dummyApp)
kvstoreApp := kvstore.NewPersistentKVStoreApplication(path.Join(config.DBDir(), "2"))
clientCreator2 := proxy.NewLocalClientCreator(kvstoreApp)
if nBlocks > 0 {
// run nBlocks against a new client to build up the app state.
// use a throwaway tendermint state
proxyApp := proxy.NewAppConns(clientCreator2, nil)
state, _ := stateAndStore(config, privVal.GetPubKey())
buildAppStateFromChain(proxyApp, state, chain, nBlocks, mode)
stateDB, state, _ := stateAndStore(config, privVal.GetPubKey())
buildAppStateFromChain(proxyApp, stateDB, state, chain, nBlocks, mode)
}
// now start the app using the handshake - it should sync
handshaker := NewHandshaker(state, store)
handshaker := NewHandshaker(stateDB, state, store, nil)
proxyApp := proxy.NewAppConns(clientCreator2, handshaker)
if err := proxyApp.Start(); err != nil {
t.Fatalf("Error starting proxy app connections: %v", err)
}
defer proxyApp.Stop()
// get the latest app hash from the app
res, err := proxyApp.Query().InfoSync(abci.RequestInfo{""})
@@ -397,87 +392,90 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
}
}
func applyBlock(st *sm.State, blk *types.Block, proxyApp proxy.AppConns) {
testPartSize := st.Params.BlockPartSizeBytes
err := st.ApplyBlock(types.NopEventBus{}, proxyApp.Consensus(), blk, blk.MakePartSet(testPartSize).Header(), mempool)
func applyBlock(stateDB dbm.DB, st sm.State, blk *types.Block, proxyApp proxy.AppConns) sm.State {
testPartSize := st.ConsensusParams.BlockPartSizeBytes
blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyApp.Consensus(), mempool, evpool)
blkID := types.BlockID{blk.Hash(), blk.MakePartSet(testPartSize).Header()}
newState, err := blockExec.ApplyBlock(st, blkID, blk)
if err != nil {
panic(err)
}
return newState
}
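The change that repeats throughout these helpers is the move from the old pointer-based (*sm.State).ApplyBlock to the new sm.BlockExecutor, which owns the state DB, mempool and evidence pool and returns the next state by value. A minimal before/after sketch of that call pattern, using the identifiers from the surrounding test code (illustrative only, not part of the diff):

// old API: ApplyBlock was a method on *sm.State and mutated it in place
//   err := st.ApplyBlock(types.NopEventBus{}, proxyApp.Consensus(), blk, blk.MakePartSet(testPartSize).Header(), mempool)

// new API: a BlockExecutor is constructed once and ApplyBlock returns the updated state
blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyApp.Consensus(), mempool, evpool)
blkID := types.BlockID{blk.Hash(), blk.MakePartSet(testPartSize).Header()}
newState, err := blockExec.ApplyBlock(st, blkID, blk)
if err != nil {
	panic(err)
}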
func buildAppStateFromChain(proxyApp proxy.AppConns,
state *sm.State, chain []*types.Block, nBlocks int, mode uint) {
func buildAppStateFromChain(proxyApp proxy.AppConns, stateDB dbm.DB,
state sm.State, chain []*types.Block, nBlocks int, mode uint) {
// start a new app without handshake, play nBlocks blocks
if err := proxyApp.Start(); err != nil {
panic(err)
}
defer proxyApp.Stop()
// TODO: get the genesis bytes (https://github.com/tendermint/tendermint/issues/1224)
var genesisBytes []byte
validators := types.TM2PB.Validators(state.Validators)
if _, err := proxyApp.Consensus().InitChainSync(abci.RequestInitChain{validators}); err != nil {
if _, err := proxyApp.Consensus().InitChainSync(abci.RequestInitChain{validators, genesisBytes}); err != nil {
panic(err)
}
defer proxyApp.Stop()
switch mode {
case 0:
for i := 0; i < nBlocks; i++ {
block := chain[i]
applyBlock(state, block, proxyApp)
state = applyBlock(stateDB, state, block, proxyApp)
}
case 1, 2:
for i := 0; i < nBlocks-1; i++ {
block := chain[i]
applyBlock(state, block, proxyApp)
state = applyBlock(stateDB, state, block, proxyApp)
}
if mode == 2 {
// update the dummy height and apphash
// update the kvstore height and apphash
// as if we ran commit but not
applyBlock(state, chain[nBlocks-1], proxyApp)
state = applyBlock(stateDB, state, chain[nBlocks-1], proxyApp)
}
}
}
func buildTMStateFromChain(config *cfg.Config, state *sm.State, chain []*types.Block, mode uint) []byte {
func buildTMStateFromChain(config *cfg.Config, stateDB dbm.DB, state sm.State, chain []*types.Block, mode uint) sm.State {
// run the whole chain against this client to build up the tendermint state
clientCreator := proxy.NewLocalClientCreator(dummy.NewPersistentDummyApplication(path.Join(config.DBDir(), "1")))
clientCreator := proxy.NewLocalClientCreator(kvstore.NewPersistentKVStoreApplication(path.Join(config.DBDir(), "1")))
proxyApp := proxy.NewAppConns(clientCreator, nil) // sm.NewHandshaker(config, state, store, ReplayLastBlock))
if err := proxyApp.Start(); err != nil {
panic(err)
}
defer proxyApp.Stop()
// TODO: get the genesis bytes (https://github.com/tendermint/tendermint/issues/1224)
var genesisBytes []byte
validators := types.TM2PB.Validators(state.Validators)
if _, err := proxyApp.Consensus().InitChainSync(abci.RequestInitChain{validators}); err != nil {
if _, err := proxyApp.Consensus().InitChainSync(abci.RequestInitChain{validators, genesisBytes}); err != nil {
panic(err)
}
var latestAppHash []byte
switch mode {
case 0:
// sync right up
for _, block := range chain {
applyBlock(state, block, proxyApp)
state = applyBlock(stateDB, state, block, proxyApp)
}
latestAppHash = state.AppHash
case 1, 2:
// sync up to the penultimate as if we stored the block.
// whether we commit or not depends on the appHash
for _, block := range chain[:len(chain)-1] {
applyBlock(state, block, proxyApp)
state = applyBlock(stateDB, state, block, proxyApp)
}
// apply the final block to a state copy so we can
// get the right next appHash but keep the state back
stateCopy := state.Copy()
applyBlock(stateCopy, chain[len(chain)-1], proxyApp)
latestAppHash = stateCopy.AppHash
applyBlock(stateDB, state, chain[len(chain)-1], proxyApp)
}
return latestAppHash
return state
}
//--------------------------
@@ -485,7 +483,7 @@ func buildTMStateFromChain(config *cfg.Config, state *sm.State, chain []*types.B
func makeBlockchainFromWAL(wal WAL) ([]*types.Block, []*types.Commit, error) {
// Search for height marker
gr, found, err := wal.SearchForEndHeight(0)
gr, found, err := wal.SearchForEndHeight(0, &WALSearchOptions{})
if err != nil {
return nil, nil, err
}
@@ -496,10 +494,13 @@ func makeBlockchainFromWAL(wal WAL) ([]*types.Block, []*types.Commit, error) {
// log.Notice("Build a blockchain by reading from the WAL")
var blockParts *types.PartSet
var blocks []*types.Block
var commits []*types.Commit
var thisBlockParts *types.PartSet
var thisBlockCommit *types.Commit
var height int64
dec := NewWALDecoder(gr)
for {
msg, err := dec.Decode()
@@ -515,42 +516,60 @@ func makeBlockchainFromWAL(wal WAL) ([]*types.Block, []*types.Commit, error) {
}
switch p := piece.(type) {
case *types.PartSetHeader:
case EndHeightMessage:
// if it's not the first one, we have a full block
if blockParts != nil {
if thisBlockParts != nil {
var n int
block := wire.ReadBinary(&types.Block{}, blockParts.GetReader(), 0, &n, &err).(*types.Block)
block := wire.ReadBinary(&types.Block{}, thisBlockParts.GetReader(), 0, &n, &err).(*types.Block)
if err != nil {
panic(err)
}
if block.Height != height+1 {
panic(cmn.Fmt("read bad block from wal. got height %d, expected %d", block.Height, height+1))
}
commitHeight := thisBlockCommit.Precommits[0].Height
if commitHeight != height+1 {
panic(cmn.Fmt("commit doesnt match. got height %d, expected %d", commitHeight, height+1))
}
blocks = append(blocks, block)
commits = append(commits, thisBlockCommit)
height += 1
}
blockParts = types.NewPartSetFromHeader(*p)
case *types.PartSetHeader:
thisBlockParts = types.NewPartSetFromHeader(*p)
case *types.Part:
_, err := blockParts.AddPart(p, false)
_, err := thisBlockParts.AddPart(p, false)
if err != nil {
return nil, nil, err
}
case *types.Vote:
if p.Type == types.VoteTypePrecommit {
commit := &types.Commit{
thisBlockCommit = &types.Commit{
BlockID: p.BlockID,
Precommits: []*types.Vote{p},
}
commits = append(commits, commit)
}
}
}
// grab the last block too
var n int
block := wire.ReadBinary(&types.Block{}, blockParts.GetReader(), 0, &n, &err).(*types.Block)
block := wire.ReadBinary(&types.Block{}, thisBlockParts.GetReader(), 0, &n, &err).(*types.Block)
if err != nil {
panic(err)
}
if block.Height != height+1 {
panic(cmn.Fmt("read bad block from wal. got height %d, expected %d", block.Height, height+1))
}
commitHeight := thisBlockCommit.Precommits[0].Height
if commitHeight != height+1 {
panic(cmn.Fmt("commit doesnt match. got height %d, expected %d", commitHeight, height+1))
}
blocks = append(blocks, block)
commits = append(commits, thisBlockCommit)
return blocks, commits, nil
}
func readPieceFromWAL(msg *TimedWALMessage) interface{} {
// skip meta messages
if _, ok := msg.Msg.(EndHeightMessage); ok {
return nil
}
// for logging
switch m := msg.Msg.(type) {
case msgInfo:
@@ -562,19 +581,19 @@ func readPieceFromWAL(msg *TimedWALMessage) interface{} {
case *VoteMessage:
return msg.Vote
}
case EndHeightMessage:
return m
}
return nil
}
// fresh state and mock store
func stateAndStore(config *cfg.Config, pubKey crypto.PubKey) (*sm.State, *mockBlockStore) {
func stateAndStore(config *cfg.Config, pubKey crypto.PubKey) (dbm.DB, sm.State, *mockBlockStore) {
stateDB := dbm.NewMemDB()
state, _ := sm.MakeGenesisStateFromFile(stateDB, config.GenesisFile())
state.SetLogger(log.TestingLogger().With("module", "state"))
store := NewMockBlockStore(config, state.Params)
return state, store
state, _ := sm.MakeGenesisStateFromFile(config.GenesisFile())
store := NewMockBlockStore(config, state.ConsensusParams)
return stateDB, state, store
}
//----------------------------------

View File

@@ -17,7 +17,7 @@ import (
cfg "github.com/tendermint/tendermint/config"
cstypes "github.com/tendermint/tendermint/consensus/types"
"github.com/tendermint/tendermint/proxy"
"github.com/tendermint/tendermint/p2p"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
)
@@ -47,8 +47,8 @@ var (
// msgs from the reactor which may update the state
type msgInfo struct {
Msg ConsensusMessage `json:"msg"`
PeerKey string `json:"peer_key"`
Msg ConsensusMessage `json:"msg"`
PeerID p2p.ID `json:"peer_key"`
}
// internally generated messages which may update the state
@@ -75,16 +75,18 @@ type ConsensusState struct {
privValidator types.PrivValidator // for signing votes
// services for creating and executing blocks
proxyAppConn proxy.AppConnConsensus
blockStore types.BlockStore
mempool types.Mempool
// TODO: encapsulate all of this in one "BlockManager"
blockExec *sm.BlockExecutor
blockStore types.BlockStore
mempool types.Mempool
evpool types.EvidencePool
// internal state
mtx sync.Mutex
cstypes.RoundState
state *sm.State // State until height-1.
state sm.State // State until height-1.
// state changes may be triggered by msgs from peers,
// state changes may be triggered by: msgs from peers,
// msgs from ourself, or by timeouts
peerMsgQueue chan msgInfo
internalMsgQueue chan msgInfo
@@ -113,10 +115,10 @@ type ConsensusState struct {
}
// NewConsensusState returns a new ConsensusState.
func NewConsensusState(config *cfg.ConsensusConfig, state *sm.State, proxyAppConn proxy.AppConnConsensus, blockStore types.BlockStore, mempool types.Mempool) *ConsensusState {
func NewConsensusState(config *cfg.ConsensusConfig, state sm.State, blockExec *sm.BlockExecutor, blockStore types.BlockStore, mempool types.Mempool, evpool types.EvidencePool) *ConsensusState {
cs := &ConsensusState{
config: config,
proxyAppConn: proxyAppConn,
blockExec: blockExec,
blockStore: blockStore,
mempool: mempool,
peerMsgQueue: make(chan msgInfo, msgQueueSize),
@@ -125,6 +127,7 @@ func NewConsensusState(config *cfg.ConsensusConfig, state *sm.State, proxyAppCon
done: make(chan struct{}),
doWALCatchup: true,
wal: nilWAL{},
evpool: evpool,
}
// set function defaults (may be overwritten before calling Start)
cs.decideProposal = cs.defaultDecideProposal
@@ -151,6 +154,7 @@ func (cs *ConsensusState) SetLogger(l log.Logger) {
// SetEventBus sets event bus.
func (cs *ConsensusState) SetEventBus(b *types.EventBus) {
cs.eventBus = b
cs.blockExec.SetEventBus(b)
}
// String returns a string.
@@ -160,7 +164,7 @@ func (cs *ConsensusState) String() string {
}
// GetState returns a copy of the chain state.
func (cs *ConsensusState) GetState() *sm.State {
func (cs *ConsensusState) GetState() sm.State {
cs.mtx.Lock()
defer cs.mtx.Unlock()
return cs.state.Copy()
@@ -300,17 +304,17 @@ func (cs *ConsensusState) OpenWAL(walFile string) (WAL, error) {
//------------------------------------------------------------
// Public interface for passing messages into the consensus state, possibly causing a state transition.
// If peerKey == "", the msg is considered internal.
// If peerID == "", the msg is considered internal.
// Messages are added to the appropriate queue (peer or internal).
// If the queue is full, the function may block.
// TODO: should these return anything or let callers just use events?
// AddVote inputs a vote.
func (cs *ConsensusState) AddVote(vote *types.Vote, peerKey string) (added bool, err error) {
if peerKey == "" {
func (cs *ConsensusState) AddVote(vote *types.Vote, peerID p2p.ID) (added bool, err error) {
if peerID == "" {
cs.internalMsgQueue <- msgInfo{&VoteMessage{vote}, ""}
} else {
cs.peerMsgQueue <- msgInfo{&VoteMessage{vote}, peerKey}
cs.peerMsgQueue <- msgInfo{&VoteMessage{vote}, peerID}
}
// TODO: wait for event?!
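The comment block above defines the convention for these entry points: an empty peer ID means the message originated locally and goes on the internal queue, while a non-empty p2p.ID attributes it to a peer and routes it through the peer queue, which may block when full. A hedged sketch of a caller (ourVote, theirVote, peerID and logger are placeholders, not names from this diff):

// locally signed vote: internal origin, so the peer ID is left empty
if _, err := cs.AddVote(ourVote, ""); err != nil {
	logger.Error("failed to add our vote", "err", err)
}

// vote received over the wire: attribute it to the sending peer
// (this call may block if the peer message queue is full)
if _, err := cs.AddVote(theirVote, peerID); err != nil {
	logger.Error("failed to add peer vote", "peer", peerID, "err", err)
}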
@@ -318,12 +322,12 @@ func (cs *ConsensusState) AddVote(vote *types.Vote, peerKey string) (added bool,
}
// SetProposal inputs a proposal.
func (cs *ConsensusState) SetProposal(proposal *types.Proposal, peerKey string) error {
func (cs *ConsensusState) SetProposal(proposal *types.Proposal, peerID p2p.ID) error {
if peerKey == "" {
if peerID == "" {
cs.internalMsgQueue <- msgInfo{&ProposalMessage{proposal}, ""}
} else {
cs.peerMsgQueue <- msgInfo{&ProposalMessage{proposal}, peerKey}
cs.peerMsgQueue <- msgInfo{&ProposalMessage{proposal}, peerID}
}
// TODO: wait for event?!
@@ -331,12 +335,12 @@ func (cs *ConsensusState) SetProposal(proposal *types.Proposal, peerKey string)
}
// AddProposalBlockPart inputs a part of the proposal block.
func (cs *ConsensusState) AddProposalBlockPart(height int64, round int, part *types.Part, peerKey string) error {
func (cs *ConsensusState) AddProposalBlockPart(height int64, round int, part *types.Part, peerID p2p.ID) error {
if peerKey == "" {
if peerID == "" {
cs.internalMsgQueue <- msgInfo{&BlockPartMessage{height, round, part}, ""}
} else {
cs.peerMsgQueue <- msgInfo{&BlockPartMessage{height, round, part}, peerKey}
cs.peerMsgQueue <- msgInfo{&BlockPartMessage{height, round, part}, peerID}
}
// TODO: wait for event?!
@@ -344,13 +348,13 @@ func (cs *ConsensusState) AddProposalBlockPart(height int64, round int, part *ty
}
// SetProposalAndBlock inputs the proposal and all block parts.
func (cs *ConsensusState) SetProposalAndBlock(proposal *types.Proposal, block *types.Block, parts *types.PartSet, peerKey string) error {
if err := cs.SetProposal(proposal, peerKey); err != nil {
func (cs *ConsensusState) SetProposalAndBlock(proposal *types.Proposal, block *types.Block, parts *types.PartSet, peerID p2p.ID) error {
if err := cs.SetProposal(proposal, peerID); err != nil {
return err
}
for i := 0; i < parts.Total(); i++ {
part := parts.GetPart(i)
if err := cs.AddProposalBlockPart(proposal.Height, proposal.Round, part, peerKey); err != nil {
if err := cs.AddProposalBlockPart(proposal.Height, proposal.Round, part, peerID); err != nil {
return err
}
}
@@ -397,7 +401,7 @@ func (cs *ConsensusState) sendInternalMessage(mi msgInfo) {
// Reconstruct LastCommit from SeenCommit, which we saved along with the block,
// (which happens even before saving the state)
func (cs *ConsensusState) reconstructLastCommit(state *sm.State) {
func (cs *ConsensusState) reconstructLastCommit(state sm.State) {
if state.LastBlockHeight == 0 {
return
}
@@ -420,12 +424,12 @@ func (cs *ConsensusState) reconstructLastCommit(state *sm.State) {
// Updates ConsensusState and increments height to match that of state.
// The round becomes 0 and cs.Step becomes cstypes.RoundStepNewHeight.
func (cs *ConsensusState) updateToState(state *sm.State) {
func (cs *ConsensusState) updateToState(state sm.State) {
if cs.CommitRound > -1 && 0 < cs.Height && cs.Height != state.LastBlockHeight {
cmn.PanicSanity(cmn.Fmt("updateToState() expected state height of %v but found %v",
cs.Height, state.LastBlockHeight))
}
if cs.state != nil && cs.state.LastBlockHeight+1 != cs.Height {
if !cs.state.IsEmpty() && cs.state.LastBlockHeight+1 != cs.Height {
// This might happen when someone else is mutating cs.state.
// Someone forgot to pass in state.Copy() somewhere?!
cmn.PanicSanity(cmn.Fmt("Inconsistent cs.state.LastBlockHeight+1 %v vs cs.Height %v",
@@ -435,7 +439,7 @@ func (cs *ConsensusState) updateToState(state *sm.State) {
// If state isn't further out than cs.state, just ignore.
// This happens when SwitchToConsensus() is called in the reactor.
// We don't want to reset e.g. the Votes.
if cs.state != nil && (state.LastBlockHeight <= cs.state.LastBlockHeight) {
if !cs.state.IsEmpty() && (state.LastBlockHeight <= cs.state.LastBlockHeight) {
cs.Logger.Info("Ignoring updateToState()", "newHeight", state.LastBlockHeight+1, "oldHeight", cs.state.LastBlockHeight+1)
return
}
@@ -473,6 +477,9 @@ func (cs *ConsensusState) updateToState(state *sm.State) {
cs.LockedRound = 0
cs.LockedBlock = nil
cs.LockedBlockParts = nil
cs.ValidRound = 0
cs.ValidBlock = nil
cs.ValidBlockParts = nil
cs.Votes = cstypes.NewHeightVoteSet(state.ChainID, height, validators)
cs.CommitRound = -1
cs.LastCommit = lastPrecommits
@@ -537,7 +544,7 @@ func (cs *ConsensusState) receiveRoutine(maxSteps int) {
// if the timeout is relevant to the rs
// go to the next step
cs.handleTimeout(ti, rs)
case <-cs.Quit:
case <-cs.Quit():
// NOTE: the internalMsgQueue may have signed messages from our
// priv_val that haven't hit the WAL, but it's ok because
@@ -558,7 +565,7 @@ func (cs *ConsensusState) handleMsg(mi msgInfo) {
defer cs.mtx.Unlock()
var err error
msg, peerKey := mi.Msg, mi.PeerKey
msg, peerID := mi.Msg, mi.PeerID
switch msg := msg.(type) {
case *ProposalMessage:
// will not cause transition.
@@ -566,16 +573,20 @@ func (cs *ConsensusState) handleMsg(mi msgInfo) {
err = cs.setProposal(msg.Proposal)
case *BlockPartMessage:
// if the proposal is complete, we'll enterPrevote or tryFinalizeCommit
_, err = cs.addProposalBlockPart(msg.Height, msg.Part, peerKey != "")
_, err = cs.addProposalBlockPart(msg.Height, msg.Part, peerID != "")
if err != nil && msg.Round != cs.Round {
err = nil
}
case *VoteMessage:
// attempt to add the vote and dupeout the validator if it's a duplicate signature
// if the vote gives us a 2/3-any or 2/3-one, we transition
err := cs.tryAddVote(msg.Vote, peerKey)
err := cs.tryAddVote(msg.Vote, peerID)
if err == ErrAddingVote {
// TODO: punish peer
// We probably don't want to stop the peer here. The vote does not
// necessarily come from a malicious peer but can simply have been broadcast by
// a typical peer.
// https://github.com/tendermint/tendermint/issues/1281
}
// NOTE: the vote is broadcast to peers by the reactor listening
@@ -588,7 +599,7 @@ func (cs *ConsensusState) handleMsg(mi msgInfo) {
cs.Logger.Error("Unknown msg type", reflect.TypeOf(msg))
}
if err != nil {
cs.Logger.Error("Error with msg", "type", reflect.TypeOf(msg), "peer", peerKey, "err", err, "msg", msg)
cs.Logger.Error("Error with msg", "type", reflect.TypeOf(msg), "peer", peerID, "err", err, "msg", msg)
}
}
@@ -767,17 +778,18 @@ func (cs *ConsensusState) enterPropose(height int64, round int) {
return
}
if !cs.isProposer() {
cs.Logger.Info("enterPropose: Not our turn to propose", "proposer", cs.Validators.GetProposer().Address, "privValidator", cs.privValidator)
if cs.Validators.HasAddress(cs.privValidator.GetAddress()) {
cs.Logger.Debug("This node is a validator")
} else {
cs.Logger.Debug("This node is not a validator")
}
} else {
// if not a validator, we're done
if !cs.Validators.HasAddress(cs.privValidator.GetAddress()) {
cs.Logger.Debug("This node is not a validator")
return
}
cs.Logger.Debug("This node is a validator")
if cs.isProposer() {
cs.Logger.Info("enterPropose: Our turn to propose", "proposer", cs.Validators.GetProposer().Address, "privValidator", cs.privValidator)
cs.Logger.Debug("This node is a validator")
cs.decideProposal(height, round)
} else {
cs.Logger.Info("enterPropose: Not our turn to propose", "proposer", cs.Validators.GetProposer().Address, "privValidator", cs.privValidator)
}
}
@@ -793,6 +805,9 @@ func (cs *ConsensusState) defaultDecideProposal(height int64, round int) {
if cs.LockedBlock != nil {
// If we're locked onto a block, just choose that.
block, blockParts = cs.LockedBlock, cs.LockedBlockParts
} else if cs.ValidBlock != nil {
// If there is a valid block, choose that.
block, blockParts = cs.ValidBlock, cs.ValidBlockParts
} else {
// Create a new proposal block from state/txs from the mempool.
block, blockParts = cs.createProposalBlock()
@@ -863,9 +878,10 @@ func (cs *ConsensusState) createProposalBlock() (block *types.Block, blockParts
// Mempool validated transactions
txs := cs.mempool.Reap(cs.config.MaxBlockSizeTxs)
return types.MakeBlock(cs.Height, cs.state.ChainID, txs, commit,
cs.state.LastBlockID, cs.state.Validators.Hash(),
cs.state.AppHash, cs.state.Params.BlockPartSizeBytes)
block, parts := cs.state.MakeBlock(cs.Height, txs, commit)
evidence := cs.evpool.PendingEvidence()
block.AddEvidence(evidence)
return block, parts
}
// Enter: `timeoutPropose` after entering Propose.
@@ -919,7 +935,7 @@ func (cs *ConsensusState) defaultDoPrevote(height int64, round int) {
}
// Validate proposal block
err := cs.state.ValidateBlock(cs.ProposalBlock)
err := cs.blockExec.ValidateBlock(cs.state, cs.ProposalBlock)
if err != nil {
// ProposalBlock is invalid, prevote nil.
logger.Error("enterPrevote: ProposalBlock is invalid", "err", err)
@@ -1027,7 +1043,7 @@ func (cs *ConsensusState) enterPrecommit(height int64, round int) {
if cs.ProposalBlock.HashesTo(blockID.Hash) {
cs.Logger.Info("enterPrecommit: +2/3 prevoted proposal block. Locking", "hash", blockID.Hash)
// Validate the block.
if err := cs.state.ValidateBlock(cs.ProposalBlock); err != nil {
if err := cs.blockExec.ValidateBlock(cs.state, cs.ProposalBlock); err != nil {
cmn.PanicConsensus(cmn.Fmt("enterPrecommit: +2/3 prevoted for an invalid block: %v", err))
}
cs.LockedRound = round
@@ -1162,7 +1178,7 @@ func (cs *ConsensusState) finalizeCommit(height int64) {
if !block.HashesTo(blockID.Hash) {
cmn.PanicSanity(cmn.Fmt("Cannot finalizeCommit, ProposalBlock does not hash to commit hash"))
}
if err := cs.state.ValidateBlock(block); err != nil {
if err := cs.blockExec.ValidateBlock(cs.state, block); err != nil {
cmn.PanicConsensus(cmn.Fmt("+2/3 committed an invalid block: %v", err))
}
@@ -1200,12 +1216,11 @@ func (cs *ConsensusState) finalizeCommit(height int64) {
// Create a copy of the state for staging
// and an event cache for txs
stateCopy := cs.state.Copy()
txEventBuffer := types.NewTxEventBuffer(cs.eventBus, block.NumTxs)
// Execute and commit the block, update and save the state, and update the mempool.
// All calls to the proxyAppConn come here.
// NOTE: the block.AppHash won't reflect these txs until the next block
err := stateCopy.ApplyBlock(txEventBuffer, cs.proxyAppConn, block, blockParts.Header(), cs.mempool)
var err error
stateCopy, err = cs.blockExec.ApplyBlock(stateCopy, types.BlockID{block.Hash(), blockParts.Header()}, block)
if err != nil {
cs.Logger.Error("Error on ApplyBlock. Did the application crash? Please restart tendermint", "err", err)
err := cmn.Kill()
@@ -1217,22 +1232,6 @@ func (cs *ConsensusState) finalizeCommit(height int64) {
fail.Fail() // XXX
// Fire event for new block.
// NOTE: If we fail before firing, these events will never fire
//
// TODO: Either
// * Fire before persisting state, in ApplyBlock
// * Fire on start up if we haven't written any new WAL msgs
// Both options mean we may fire more than once. Is that fine ?
cs.eventBus.PublishEventNewBlock(types.EventDataNewBlock{block})
cs.eventBus.PublishEventNewBlockHeader(types.EventDataNewBlockHeader{block.Header})
err = txEventBuffer.Flush()
if err != nil {
cs.Logger.Error("Failed to flush event buffer", "err", err)
}
fail.Fail() // XXX
// NewHeightStep!
cs.updateToState(stateCopy)
@@ -1274,7 +1273,7 @@ func (cs *ConsensusState) defaultSetProposal(proposal *types.Proposal) error {
}
// Verify signature
if !cs.Validators.GetProposer().PubKey.VerifyBytes(types.SignBytes(cs.state.ChainID, proposal), proposal.Signature) {
if !cs.Validators.GetProposer().PubKey.VerifyBytes(proposal.SignBytes(cs.state.ChainID), proposal.Signature) {
return ErrInvalidProposalSignature
}
@@ -1305,7 +1304,7 @@ func (cs *ConsensusState) addProposalBlockPart(height int64, part *types.Part, v
var n int
var err error
cs.ProposalBlock = wire.ReadBinary(&types.Block{}, cs.ProposalBlockParts.GetReader(),
cs.state.Params.BlockSizeParams.MaxBytes, &n, &err).(*types.Block)
cs.state.ConsensusParams.BlockSize.MaxBytes, &n, &err).(*types.Block)
// NOTE: it's possible to receive complete proposal blocks for future rounds without having the proposal
cs.Logger.Info("Received complete proposal block", "height", cs.ProposalBlock.Height, "hash", cs.ProposalBlock.Hash())
if cs.Step == cstypes.RoundStepPropose && cs.isProposalComplete() {
@@ -1321,26 +1320,24 @@ func (cs *ConsensusState) addProposalBlockPart(height int64, part *types.Part, v
}
// Attempt to add the vote. If it's a duplicate signature, dupeout the validator
func (cs *ConsensusState) tryAddVote(vote *types.Vote, peerKey string) error {
_, err := cs.addVote(vote, peerKey)
func (cs *ConsensusState) tryAddVote(vote *types.Vote, peerID p2p.ID) error {
_, err := cs.addVote(vote, peerID)
if err != nil {
// If the vote height is off, we'll just ignore it,
// But if it's a conflicting sig, broadcast evidence tx for slashing.
// But if it's a conflicting sig, add it to the cs.evpool.
// If it's otherwise invalid, punish peer.
if err == ErrVoteHeightMismatch {
return err
} else if _, ok := err.(*types.ErrVoteConflictingVotes); ok {
} else if voteErr, ok := err.(*types.ErrVoteConflictingVotes); ok {
if bytes.Equal(vote.ValidatorAddress, cs.privValidator.GetAddress()) {
cs.Logger.Error("Found conflicting vote from ourselves. Did you unsafe_reset a validator?", "height", vote.Height, "round", vote.Round, "type", vote.Type)
return err
}
cs.Logger.Error("Found conflicting vote. Publish evidence (TODO)", "height", vote.Height, "round", vote.Round, "type", vote.Type, "valAddr", vote.ValidatorAddress, "valIndex", vote.ValidatorIndex)
// TODO: track evidence for inclusion in a block
cs.evpool.AddEvidence(voteErr.DuplicateVoteEvidence)
return err
} else {
// Probably an invalid signature. Bad peer.
// Probably an invalid signature / Bad peer.
// Seems this can also err sometimes with "Unexpected step" - perhaps not from a bad peer ?
cs.Logger.Error("Error attempting to add vote", "err", err)
return ErrAddingVote
}
@@ -1350,7 +1347,7 @@ func (cs *ConsensusState) tryAddVote(vote *types.Vote, peerKey string) error {
//-----------------------------------------------------------------------------
func (cs *ConsensusState) addVote(vote *types.Vote, peerKey string) (added bool, err error) {
func (cs *ConsensusState) addVote(vote *types.Vote, peerID p2p.ID) (added bool, err error) {
cs.Logger.Debug("addVote", "voteHeight", vote.Height, "voteType", vote.Type, "valIndex", vote.ValidatorIndex, "csHeight", cs.Height)
// A precommit for the previous height?
@@ -1362,99 +1359,115 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerKey string) (added bool,
return added, ErrVoteHeightMismatch
}
added, err = cs.LastCommit.AddVote(vote)
if added {
cs.Logger.Info(cmn.Fmt("Added to lastPrecommits: %v", cs.LastCommit.StringShort()))
cs.eventBus.PublishEventVote(types.EventDataVote{vote})
if !added {
return added, err
}
// if we can skip timeoutCommit and have all the votes now,
if cs.config.SkipTimeoutCommit && cs.LastCommit.HasAll() {
// go straight to new round (skip timeout commit)
// cs.scheduleTimeout(time.Duration(0), cs.Height, 0, cstypes.RoundStepNewHeight)
cs.enterNewRound(cs.Height, 0)
}
cs.Logger.Info(cmn.Fmt("Added to lastPrecommits: %v", cs.LastCommit.StringShort()))
cs.eventBus.PublishEventVote(types.EventDataVote{vote})
// if we can skip timeoutCommit and have all the votes now,
if cs.config.SkipTimeoutCommit && cs.LastCommit.HasAll() {
// go straight to new round (skip timeout commit)
// cs.scheduleTimeout(time.Duration(0), cs.Height, 0, cstypes.RoundStepNewHeight)
cs.enterNewRound(cs.Height, 0)
}
return
}
// A prevote/precommit for this height?
if vote.Height == cs.Height {
height := cs.Height
added, err = cs.Votes.AddVote(vote, peerKey)
if added {
cs.eventBus.PublishEventVote(types.EventDataVote{vote})
// Height mismatch is ignored.
// Not necessarily a bad peer, but not favourable behaviour.
if vote.Height != cs.Height {
err = ErrVoteHeightMismatch
cs.Logger.Info("Vote ignored and not added", "voteHeight", vote.Height, "csHeight", cs.Height, "err", err)
return
}
switch vote.Type {
case types.VoteTypePrevote:
prevotes := cs.Votes.Prevotes(vote.Round)
cs.Logger.Info("Added to prevote", "vote", vote, "prevotes", prevotes.StringShort())
// First, unlock if prevotes is a valid POL.
// >> lockRound < POLRound <= unlockOrChangeLockRound (see spec)
// NOTE: If (lockRound < POLRound) but !(POLRound <= unlockOrChangeLockRound),
// we'll still enterNewRound(H,vote.R) and enterPrecommit(H,vote.R) to process it
// there.
if (cs.LockedBlock != nil) && (cs.LockedRound < vote.Round) && (vote.Round <= cs.Round) {
blockID, ok := prevotes.TwoThirdsMajority()
if ok && !cs.LockedBlock.HashesTo(blockID.Hash) {
cs.Logger.Info("Unlocking because of POL.", "lockedRound", cs.LockedRound, "POLRound", vote.Round)
cs.LockedRound = 0
cs.LockedBlock = nil
cs.LockedBlockParts = nil
cs.eventBus.PublishEventUnlock(cs.RoundStateEvent())
}
}
if cs.Round <= vote.Round && prevotes.HasTwoThirdsAny() {
// Round-skip over to PrevoteWait or goto Precommit.
cs.enterNewRound(height, vote.Round) // if the vote is ahead of us
if prevotes.HasTwoThirdsMajority() {
cs.enterPrecommit(height, vote.Round)
} else {
cs.enterPrevote(height, vote.Round) // if the vote is ahead of us
cs.enterPrevoteWait(height, vote.Round)
}
} else if cs.Proposal != nil && 0 <= cs.Proposal.POLRound && cs.Proposal.POLRound == vote.Round {
// If the proposal is now complete, enter prevote of cs.Round.
if cs.isProposalComplete() {
cs.enterPrevote(height, cs.Round)
}
}
case types.VoteTypePrecommit:
precommits := cs.Votes.Precommits(vote.Round)
cs.Logger.Info("Added to precommit", "vote", vote, "precommits", precommits.StringShort())
blockID, ok := precommits.TwoThirdsMajority()
if ok {
if len(blockID.Hash) == 0 {
cs.enterNewRound(height, vote.Round+1)
} else {
cs.enterNewRound(height, vote.Round)
cs.enterPrecommit(height, vote.Round)
cs.enterCommit(height, vote.Round)
if cs.config.SkipTimeoutCommit && precommits.HasAll() {
// if we have all the votes now,
// go straight to new round (skip timeout commit)
// cs.scheduleTimeout(time.Duration(0), cs.Height, 0, cstypes.RoundStepNewHeight)
cs.enterNewRound(cs.Height, 0)
}
}
} else if cs.Round <= vote.Round && precommits.HasTwoThirdsAny() {
cs.enterNewRound(height, vote.Round)
cs.enterPrecommit(height, vote.Round)
cs.enterPrecommitWait(height, vote.Round)
}
default:
cmn.PanicSanity(cmn.Fmt("Unexpected vote type %X", vote.Type)) // Should not happen.
}
}
height := cs.Height
added, err = cs.Votes.AddVote(vote, peerID)
if !added {
// Either duplicate, or error upon cs.Votes.AddByIndex()
return
} else {
err = ErrVoteHeightMismatch
}
// Height mismatch, bad peer?
cs.Logger.Info("Vote ignored and not added", "voteHeight", vote.Height, "csHeight", cs.Height, "err", err)
cs.eventBus.PublishEventVote(types.EventDataVote{vote})
switch vote.Type {
case types.VoteTypePrevote:
prevotes := cs.Votes.Prevotes(vote.Round)
cs.Logger.Info("Added to prevote", "vote", vote, "prevotes", prevotes.StringShort())
blockID, ok := prevotes.TwoThirdsMajority()
// First, unlock if prevotes is a valid POL.
// >> lockRound < POLRound <= unlockOrChangeLockRound (see spec)
// NOTE: If (lockRound < POLRound) but !(POLRound <= unlockOrChangeLockRound),
// we'll still enterNewRound(H,vote.R) and enterPrecommit(H,vote.R) to process it
// there.
if (cs.LockedBlock != nil) && (cs.LockedRound < vote.Round) && (vote.Round <= cs.Round) {
if ok && !cs.LockedBlock.HashesTo(blockID.Hash) {
cs.Logger.Info("Unlocking because of POL.", "lockedRound", cs.LockedRound, "POLRound", vote.Round)
cs.LockedRound = 0
cs.LockedBlock = nil
cs.LockedBlockParts = nil
cs.eventBus.PublishEventUnlock(cs.RoundStateEvent())
}
}
// Update ValidBlock
if ok && !blockID.IsZero() && !cs.ValidBlock.HashesTo(blockID.Hash) && vote.Round > cs.ValidRound {
// update valid value
if cs.ProposalBlock.HashesTo(blockID.Hash) {
cs.ValidRound = vote.Round
cs.ValidBlock = cs.ProposalBlock
cs.ValidBlockParts = cs.ProposalBlockParts
}
//TODO: We might want to update ValidBlock also in case we don't have that block yet,
// and obtain the required block using gossiping
}
if cs.Round <= vote.Round && prevotes.HasTwoThirdsAny() {
// Round-skip over to PrevoteWait or goto Precommit.
cs.enterNewRound(height, vote.Round) // if the vote is ahead of us
if prevotes.HasTwoThirdsMajority() {
cs.enterPrecommit(height, vote.Round)
} else {
cs.enterPrevote(height, vote.Round) // if the vote is ahead of us
cs.enterPrevoteWait(height, vote.Round)
}
} else if cs.Proposal != nil && 0 <= cs.Proposal.POLRound && cs.Proposal.POLRound == vote.Round {
// If the proposal is now complete, enter prevote of cs.Round.
if cs.isProposalComplete() {
cs.enterPrevote(height, cs.Round)
}
}
case types.VoteTypePrecommit:
precommits := cs.Votes.Precommits(vote.Round)
cs.Logger.Info("Added to precommit", "vote", vote, "precommits", precommits.StringShort())
blockID, ok := precommits.TwoThirdsMajority()
if ok {
if len(blockID.Hash) == 0 {
cs.enterNewRound(height, vote.Round+1)
} else {
cs.enterNewRound(height, vote.Round)
cs.enterPrecommit(height, vote.Round)
cs.enterCommit(height, vote.Round)
if cs.config.SkipTimeoutCommit && precommits.HasAll() {
// if we have all the votes now,
// go straight to new round (skip timeout commit)
// cs.scheduleTimeout(time.Duration(0), cs.Height, 0, cstypes.RoundStepNewHeight)
cs.enterNewRound(cs.Height, 0)
}
}
} else if cs.Round <= vote.Round && precommits.HasTwoThirdsAny() {
cs.enterNewRound(height, vote.Round)
cs.enterPrecommit(height, vote.Round)
cs.enterPrecommitWait(height, vote.Round)
}
default:
panic(cmn.Fmt("Unexpected vote type %X", vote.Type)) // go-wire should prevent this.
}
return
}
@@ -1466,6 +1479,7 @@ func (cs *ConsensusState) signVote(type_ byte, hash []byte, header types.PartSet
ValidatorIndex: valIndex,
Height: cs.Height,
Round: cs.Round,
Timestamp: time.Now().UTC(),
Type: type_,
BlockID: types.BlockID{hash, header},
}

View File

@@ -55,7 +55,7 @@ x * TestHalt1 - if we see +2/3 precommits after timing out into new round, we sh
//----------------------------------------------------------------------------------------------------
// ProposeSuite
func TestProposerSelection0(t *testing.T) {
func TestStateProposerSelection0(t *testing.T) {
cs1, vss := randConsensusState(4)
height, round := cs1.Height, cs1.Round
@@ -89,7 +89,7 @@ func TestProposerSelection0(t *testing.T) {
}
// Now let's do it all again, but starting from round 2 instead of 0
func TestProposerSelection2(t *testing.T) {
func TestStateProposerSelection2(t *testing.T) {
cs1, vss := randConsensusState(4) // test needs more work for more than 3 validators
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
@@ -118,7 +118,7 @@ func TestProposerSelection2(t *testing.T) {
}
// a non-validator should timeout into the prevote round
func TestEnterProposeNoPrivValidator(t *testing.T) {
func TestStateEnterProposeNoPrivValidator(t *testing.T) {
cs, _ := randConsensusState(1)
cs.SetPrivValidator(nil)
height, round := cs.Height, cs.Round
@@ -143,7 +143,7 @@ func TestEnterProposeNoPrivValidator(t *testing.T) {
}
// a validator should not timeout of the prevote round (TODO: unless the block is really big!)
func TestEnterProposeYesPrivValidator(t *testing.T) {
func TestStateEnterProposeYesPrivValidator(t *testing.T) {
cs, _ := randConsensusState(1)
height, round := cs.Height, cs.Round
@@ -179,12 +179,12 @@ func TestEnterProposeYesPrivValidator(t *testing.T) {
}
}
func TestBadProposal(t *testing.T) {
func TestStateBadProposal(t *testing.T) {
cs1, vss := randConsensusState(2)
height, round := cs1.Height, cs1.Round
vs2 := vss[1]
partSize := cs1.state.Params.BlockPartSizeBytes
partSize := cs1.state.ConsensusParams.BlockPartSizeBytes
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
voteCh := subscribe(cs1.eventBus, types.EventQueryVote)
@@ -204,7 +204,7 @@ func TestBadProposal(t *testing.T) {
propBlock.AppHash = stateHash
propBlockParts := propBlock.MakePartSet(partSize)
proposal := types.NewProposal(vs2.Height, round, propBlockParts.Header(), -1, types.BlockID{})
if err := vs2.SignProposal(config.ChainID, proposal); err != nil {
if err := vs2.SignProposal(config.ChainID(), proposal); err != nil {
t.Fatal("failed to sign bad proposal", err)
}
@@ -239,7 +239,7 @@ func TestBadProposal(t *testing.T) {
// FullRoundSuite
// propose, prevote, and precommit a block
func TestFullRound1(t *testing.T) {
func TestStateFullRound1(t *testing.T) {
cs, vss := randConsensusState(1)
height, round := cs.Height, cs.Round
@@ -275,7 +275,7 @@ func TestFullRound1(t *testing.T) {
}
// nil is proposed, so prevote and precommit nil
func TestFullRoundNil(t *testing.T) {
func TestStateFullRoundNil(t *testing.T) {
cs, vss := randConsensusState(1)
height, round := cs.Height, cs.Round
@@ -293,7 +293,7 @@ func TestFullRoundNil(t *testing.T) {
// run through propose, prevote, precommit commit with two validators
// where the first validator has to wait for votes from the second
func TestFullRound2(t *testing.T) {
func TestStateFullRound2(t *testing.T) {
cs1, vss := randConsensusState(2)
vs2 := vss[1]
height, round := cs1.Height, cs1.Round
@@ -334,12 +334,12 @@ func TestFullRound2(t *testing.T) {
// two validators, 4 rounds.
// two vals take turns proposing. val1 locks on first one, precommits nil on everything else
func TestLockNoPOL(t *testing.T) {
func TestStateLockNoPOL(t *testing.T) {
cs1, vss := randConsensusState(2)
vs2 := vss[1]
height := cs1.Height
partSize := cs1.state.Params.BlockPartSizeBytes
partSize := cs1.state.ConsensusParams.BlockPartSizeBytes
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
@@ -503,11 +503,11 @@ func TestLockNoPOL(t *testing.T) {
}
// 4 vals, one precommits, other 3 polka at next round, so we unlock and precommit the polka
func TestLockPOLRelock(t *testing.T) {
func TestStateLockPOLRelock(t *testing.T) {
cs1, vss := randConsensusState(4)
vs2, vs3, vs4 := vss[1], vss[2], vss[3]
partSize := cs1.state.Params.BlockPartSizeBytes
partSize := cs1.state.ConsensusParams.BlockPartSizeBytes
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
@@ -618,11 +618,11 @@ func TestLockPOLRelock(t *testing.T) {
}
// 4 vals, one precommits, other 3 polka at next round, so we unlock and precommit the polka
func TestLockPOLUnlock(t *testing.T) {
func TestStateLockPOLUnlock(t *testing.T) {
cs1, vss := randConsensusState(4)
vs2, vs3, vs4 := vss[1], vss[2], vss[3]
partSize := cs1.state.Params.BlockPartSizeBytes
partSize := cs1.state.ConsensusParams.BlockPartSizeBytes
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
@@ -715,11 +715,11 @@ func TestLockPOLUnlock(t *testing.T) {
// a polka at round 1 but we miss it
// then a polka at round 2 that we lock on
// then we see the polka from round 1 but shouldn't unlock
func TestLockPOLSafety1(t *testing.T) {
func TestStateLockPOLSafety1(t *testing.T) {
cs1, vss := randConsensusState(4)
vs2, vs3, vs4 := vss[1], vss[2], vss[3]
partSize := cs1.state.Params.BlockPartSizeBytes
partSize := cs1.state.ConsensusParams.BlockPartSizeBytes
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
@@ -838,11 +838,11 @@ func TestLockPOLSafety1(t *testing.T) {
// What we want:
// don't see P0, lock on P1 at R1, don't unlock using P0 at R2
func TestLockPOLSafety2(t *testing.T) {
func TestStateLockPOLSafety2(t *testing.T) {
cs1, vss := randConsensusState(4)
vs2, vs3, vs4 := vss[1], vss[2], vss[3]
partSize := cs1.state.Params.BlockPartSizeBytes
partSize := cs1.state.ConsensusParams.BlockPartSizeBytes
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
@@ -900,7 +900,7 @@ func TestLockPOLSafety2(t *testing.T) {
// in round 2 we see the polkad block from round 0
newProp := types.NewProposal(height, 2, propBlockParts0.Header(), 0, propBlockID1)
if err := vs3.SignProposal(config.ChainID, newProp); err != nil {
if err := vs3.SignProposal(config.ChainID(), newProp); err != nil {
t.Fatal(err)
}
if err := cs1.SetProposalAndBlock(newProp, propBlock0, propBlockParts0, "some peer"); err != nil {
@@ -937,7 +937,7 @@ func TestLockPOLSafety2(t *testing.T) {
// TODO: Slashing
/*
func TestSlashingPrevotes(t *testing.T) {
func TestStateSlashingPrevotes(t *testing.T) {
cs1, vss := randConsensusState(2)
vs2 := vss[1]
@@ -972,7 +972,7 @@ func TestSlashingPrevotes(t *testing.T) {
// XXX: Check for existence of Dupeout info
}
func TestSlashingPrecommits(t *testing.T) {
func TestStateSlashingPrecommits(t *testing.T) {
cs1, vss := randConsensusState(2)
vs2 := vss[1]
@@ -1017,11 +1017,11 @@ func TestSlashingPrecommits(t *testing.T) {
// 4 vals.
// we receive a final precommit after going into next round, but others might have gone to commit already!
func TestHalt1(t *testing.T) {
func TestStateHalt1(t *testing.T) {
cs1, vss := randConsensusState(4)
vs2, vs3, vs4 := vss[1], vss[2], vss[3]
partSize := cs1.state.Params.BlockPartSizeBytes
partSize := cs1.state.ConsensusParams.BlockPartSizeBytes
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)

Binary file not shown.

View File

@@ -127,7 +127,7 @@ func (t *timeoutTicker) timeoutRoutine() {
// We can eliminate it by merging the timeoutRoutine into receiveRoutine
// and managing the timeouts ourselves with a millisecond ticker
go func(toi timeoutInfo) { t.tockChan <- toi }(ti)
case <-t.Quit:
case <-t.Quit():
return
}
}

View File

@@ -1,9 +1,12 @@
package types
import (
"errors"
"fmt"
"strings"
"sync"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/types"
cmn "github.com/tendermint/tmlibs/common"
)
@@ -13,6 +16,10 @@ type RoundVoteSet struct {
Precommits *types.VoteSet
}
var (
GotVoteFromUnwantedRoundError = errors.New("Peer has sent a vote that does not match our round for more than one round")
)
/*
Keeps track of all VoteSets from round 0 to round 'round'.
@@ -35,7 +42,7 @@ type HeightVoteSet struct {
mtx sync.Mutex
round int // max tracked round
roundVoteSets map[int]RoundVoteSet // keys: [0...round]
peerCatchupRounds map[string][]int // keys: peer.Key; values: at most 2 rounds
peerCatchupRounds map[p2p.ID][]int // keys: peer.ID; values: at most 2 rounds
}
func NewHeightVoteSet(chainID string, height int64, valSet *types.ValidatorSet) *HeightVoteSet {
@@ -53,7 +60,7 @@ func (hvs *HeightVoteSet) Reset(height int64, valSet *types.ValidatorSet) {
hvs.height = height
hvs.valSet = valSet
hvs.roundVoteSets = make(map[int]RoundVoteSet)
hvs.peerCatchupRounds = make(map[string][]int)
hvs.peerCatchupRounds = make(map[p2p.ID][]int)
hvs.addRound(0)
hvs.round = 0
@@ -101,8 +108,8 @@ func (hvs *HeightVoteSet) addRound(round int) {
}
// Duplicate votes return added=false, err=nil.
// By convention, peerKey is "" if origin is self.
func (hvs *HeightVoteSet) AddVote(vote *types.Vote, peerKey string) (added bool, err error) {
// By convention, peerID is "" if origin is self.
func (hvs *HeightVoteSet) AddVote(vote *types.Vote, peerID p2p.ID) (added bool, err error) {
hvs.mtx.Lock()
defer hvs.mtx.Unlock()
if !types.IsVoteTypeValid(vote.Type) {
@@ -110,15 +117,13 @@ func (hvs *HeightVoteSet) AddVote(vote *types.Vote, peerKey string) (added bool,
}
voteSet := hvs.getVoteSet(vote.Round, vote.Type)
if voteSet == nil {
if rndz := hvs.peerCatchupRounds[peerKey]; len(rndz) < 2 {
if rndz := hvs.peerCatchupRounds[peerID]; len(rndz) < 2 {
hvs.addRound(vote.Round)
voteSet = hvs.getVoteSet(vote.Round, vote.Type)
hvs.peerCatchupRounds[peerKey] = append(rndz, vote.Round)
hvs.peerCatchupRounds[peerID] = append(rndz, vote.Round)
} else {
// Peer has sent a vote that does not match our round,
// for more than one round. Bad peer!
// TODO punish peer.
// log.Warn("Deal with peer giving votes from unwanted rounds")
// punish peer
err = GotVoteFromUnwantedRoundError
return
}
}
@@ -206,15 +211,15 @@ func (hvs *HeightVoteSet) StringIndented(indent string) string {
// NOTE: if there are too many peers, or too much peer churn,
// this can cause memory issues.
// TODO: implement ability to remove peers too
func (hvs *HeightVoteSet) SetPeerMaj23(round int, type_ byte, peerID string, blockID types.BlockID) {
func (hvs *HeightVoteSet) SetPeerMaj23(round int, type_ byte, peerID p2p.ID, blockID types.BlockID) error {
hvs.mtx.Lock()
defer hvs.mtx.Unlock()
if !types.IsVoteTypeValid(type_) {
return
return fmt.Errorf("SetPeerMaj23: Invalid vote type %v", type_)
}
voteSet := hvs.getVoteSet(round, type_)
if voteSet == nil {
return
return nil // something we don't know about yet
}
voteSet.SetPeerMaj23(peerID, blockID)
return voteSet.SetPeerMaj23(types.P2PID(peerID), blockID)
}

View File

@@ -2,6 +2,7 @@ package types
import (
"testing"
"time"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/types"
@@ -17,7 +18,7 @@ func init() {
func TestPeerCatchupRounds(t *testing.T) {
valSet, privVals := types.RandValidatorSet(10, 1)
hvs := NewHeightVoteSet(config.ChainID, 1, valSet)
hvs := NewHeightVoteSet(config.ChainID(), 1, valSet)
vote999_0 := makeVoteHR(t, 1, 999, privVals, 0)
added, err := hvs.AddVote(vote999_0, "peer1")
@@ -33,8 +34,8 @@ func TestPeerCatchupRounds(t *testing.T) {
vote1001_0 := makeVoteHR(t, 1, 1001, privVals, 0)
added, err = hvs.AddVote(vote1001_0, "peer1")
if err != nil {
t.Error("AddVote error", err)
if err != GotVoteFromUnwantedRoundError {
t.Errorf("Expected GotVoteFromUnwantedRoundError, but got %v", err)
}
if added {
t.Error("Expected to *not* add vote from peer, too many catchup rounds.")
@@ -54,10 +55,11 @@ func makeVoteHR(t *testing.T, height int64, round int, privVals []*types.PrivVal
ValidatorIndex: valIndex,
Height: height,
Round: round,
Timestamp: time.Now().UTC(),
Type: types.VoteTypePrecommit,
BlockID: types.BlockID{[]byte("fakehash"), types.PartSetHeader{}},
}
chainID := config.ChainID
chainID := config.ChainID()
err := privVal.SignVote(chainID, vote)
if err != nil {
panic(cmn.Fmt("Error signing vote: %v", err))

View File

@@ -70,6 +70,9 @@ type RoundState struct {
LockedRound int
LockedBlock *types.Block
LockedBlockParts *types.PartSet
ValidRound int
ValidBlock *types.Block
ValidBlockParts *types.PartSet
Votes *HeightVoteSet
CommitRound int //
LastCommit *types.VoteSet // Last precommits at Height-1
@@ -106,9 +109,11 @@ func (rs *RoundState) StringIndented(indent string) string {
%s ProposalBlock: %v %v
%s LockedRound: %v
%s LockedBlock: %v %v
%s ValidRound: %v
%s ValidBlock: %v %v
%s Votes: %v
%s LastCommit: %v
%s LastValidators: %v
%s LastCommit: %v
%s LastValidators:%v
%s}`,
indent, rs.Height, rs.Round, rs.Step,
indent, rs.StartTime,
@@ -118,6 +123,8 @@ func (rs *RoundState) StringIndented(indent string) string {
indent, rs.ProposalBlockParts.StringShort(), rs.ProposalBlock.StringShort(),
indent, rs.LockedRound,
indent, rs.LockedBlockParts.StringShort(), rs.LockedBlock.StringShort(),
indent, rs.ValidRound,
indent, rs.ValidBlockParts.StringShort(), rs.ValidBlock.StringShort(),
indent, rs.Votes.StringIndented(indent+" "),
indent, rs.LastCommit.StringShort(),
indent, rs.LastValidators.StringIndented(indent+" "),

View File

@@ -17,6 +17,11 @@ import (
cmn "github.com/tendermint/tmlibs/common"
)
const (
// must be greater than params.BlockGossip.BlockPartSizeBytes + a few bytes
maxMsgSizeBytes = 1024 * 1024 // 1MB
)
//--------------------------------------------------------
// types and functions for saving consensus messages
@@ -48,7 +53,7 @@ var _ = wire.RegisterInterface(
type WAL interface {
Save(WALMessage)
Group() *auto.Group
SearchForEndHeight(height int64) (gr *auto.GroupReader, found bool, err error)
SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error)
Start() error
Stop() error
@@ -116,7 +121,7 @@ func (wal *baseWAL) Save(msg WALMessage) {
if wal.light {
// in light mode we only write new steps, timeouts, and our own votes (no proposals, block parts)
if mi, ok := msg.(msgInfo); ok {
if mi.PeerKey != "" {
if mi.PeerID != "" {
return
}
}
@@ -133,12 +138,18 @@ func (wal *baseWAL) Save(msg WALMessage) {
}
}
// WALSearchOptions are optional arguments to SearchForEndHeight.
type WALSearchOptions struct {
// IgnoreDataCorruptionErrors set to true will result in skipping data corruption errors.
IgnoreDataCorruptionErrors bool
}
// SearchForEndHeight searches for the EndHeightMessage with the height and
// returns an auto.GroupReader, whether it was found or not, and an error.
// Group reader will be nil if found equals false.
//
// CONTRACT: caller must close group reader.
func (wal *baseWAL) SearchForEndHeight(height int64) (gr *auto.GroupReader, found bool, err error) {
func (wal *baseWAL) SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error) {
var msg *TimedWALMessage
// NOTE: starting from the last file in the group because we're usually
@@ -158,7 +169,9 @@ func (wal *baseWAL) SearchForEndHeight(height int64) (gr *auto.GroupReader, foun
// check next file
break
}
if err != nil {
if options.IgnoreDataCorruptionErrors && IsDataCorruptionError(err) {
// do nothing
} else if err != nil {
gr.Close()
return nil, false, err
}
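A hedged usage sketch for the new signature, showing the options struct and the contract stated above that the caller must close the group reader (wal and h are placeholders for any WAL implementation and target height; error handling is abbreviated):

gr, found, err := wal.SearchForEndHeight(h, &WALSearchOptions{IgnoreDataCorruptionErrors: true})
if err != nil {
	return err
}
if !found {
	return fmt.Errorf("WAL does not contain an end-of-height marker for height %d", h)
}
defer gr.Close() // per the contract above, the caller owns and must close the reader
// gr now reads the WAL entries recorded after height h ended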
@@ -210,6 +223,25 @@ func (enc *WALEncoder) Encode(v *TimedWALMessage) error {
///////////////////////////////////////////////////////////////////////////////
// IsDataCorruptionError returns true if data has been corrupted inside WAL.
func IsDataCorruptionError(err error) bool {
_, ok := err.(DataCorruptionError)
return ok
}
// DataCorruptionError is an error that occurs if data on disk was corrupted.
type DataCorruptionError struct {
cause error
}
func (e DataCorruptionError) Error() string {
return fmt.Sprintf("DataCorruptionError[%v]", e.cause)
}
func (e DataCorruptionError) Cause() error {
return e.cause
}
// A WALDecoder reads and decodes custom-encoded WAL messages from an input
// stream. See WALEncoder for the format used.
//
@@ -228,7 +260,7 @@ func NewWALDecoder(rd io.Reader) *WALDecoder {
func (dec *WALDecoder) Decode() (*TimedWALMessage, error) {
b := make([]byte, 4)
n, err := dec.rd.Read(b)
_, err := dec.rd.Read(b)
if err == io.EOF {
return nil, err
}
@@ -238,7 +270,7 @@ func (dec *WALDecoder) Decode() (*TimedWALMessage, error) {
crc := binary.BigEndian.Uint32(b)
b = make([]byte, 4)
n, err = dec.rd.Read(b)
_, err = dec.rd.Read(b)
if err == io.EOF {
return nil, err
}
@@ -247,26 +279,30 @@ func (dec *WALDecoder) Decode() (*TimedWALMessage, error) {
}
length := binary.BigEndian.Uint32(b)
if length > maxMsgSizeBytes {
return nil, DataCorruptionError{fmt.Errorf("length %d exceeded maximum possible value of %d bytes", length, maxMsgSizeBytes)}
}
data := make([]byte, length)
n, err = dec.rd.Read(data)
_, err = dec.rd.Read(data)
if err == io.EOF {
return nil, err
}
if err != nil {
return nil, fmt.Errorf("not enough bytes for data: %v (want: %d, read: %v)", err, length, n)
return nil, fmt.Errorf("failed to read data: %v", err)
}
// check checksum before decoding data
actualCRC := crc32.Checksum(data, crc32c)
if actualCRC != crc {
return nil, fmt.Errorf("checksums do not match: (read: %v, actual: %v)", crc, actualCRC)
return nil, DataCorruptionError{fmt.Errorf("checksums do not match: (read: %v, actual: %v)", crc, actualCRC)}
}
var nn int
var res *TimedWALMessage // nolint: gosimple
res = wire.ReadBinary(&TimedWALMessage{}, bytes.NewBuffer(data), int(length), &nn, &err).(*TimedWALMessage)
if err != nil {
return nil, fmt.Errorf("failed to decode data: %v", err)
return nil, DataCorruptionError{fmt.Errorf("failed to decode data: %v", err)}
}
return res, err
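Because corrupted records now surface as DataCorruptionError, callers can choose to skip damaged entries instead of aborting, which is the behaviour the IgnoreDataCorruptionErrors option above relies on. A minimal sketch of such a tolerant read loop (decodeTolerant is a hypothetical helper, not part of this change; whether to skip or fail is the caller's policy):

func decodeTolerant(rd io.Reader) ([]*TimedWALMessage, error) {
	var msgs []*TimedWALMessage
	dec := NewWALDecoder(rd)
	for {
		msg, err := dec.Decode()
		if err == io.EOF {
			return msgs, nil // clean end of input
		}
		if IsDataCorruptionError(err) {
			continue // skip the damaged record and keep reading
		}
		if err != nil {
			return nil, err // genuine I/O or decoding failure
		}
		msgs = append(msgs, msg)
	}
}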
@@ -276,7 +312,7 @@ type nilWAL struct{}
func (nilWAL) Save(m WALMessage) {}
func (nilWAL) Group() *auto.Group { return nil }
func (nilWAL) SearchForEndHeight(height int64) (gr *auto.GroupReader, found bool, err error) {
func (nilWAL) SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error) {
return nil, false, nil
}
func (nilWAL) Start() error { return nil }

31
consensus/wal_fuzz.go Normal file
View File

@@ -0,0 +1,31 @@
// +build gofuzz
package consensus
import (
"bytes"
"io"
)
func Fuzz(data []byte) int {
dec := NewWALDecoder(bytes.NewReader(data))
for {
msg, err := dec.Decode()
if err == io.EOF {
break
}
if err != nil {
if msg != nil {
panic("msg != nil on error")
}
return 0
}
var w bytes.Buffer
enc := NewWALEncoder(&w)
err = enc.Encode(msg)
if err != nil {
panic(err)
}
}
return 1
}

View File

@@ -11,28 +11,30 @@ import (
"time"
"github.com/pkg/errors"
"github.com/tendermint/abci/example/dummy"
"github.com/tendermint/abci/example/kvstore"
bc "github.com/tendermint/tendermint/blockchain"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
auto "github.com/tendermint/tmlibs/autofile"
cmn "github.com/tendermint/tmlibs/common"
"github.com/tendermint/tmlibs/db"
"github.com/tendermint/tmlibs/log"
)
// WALWithNBlocks generates a consensus WAL. It does this by spinning up a
// stripped down version of node (proxy app, event bus, consensus state) with a
// persistent dummy application and special consensus wal instance
// persistent kvstore application and special consensus wal instance
// (byteBufferWAL) and waits until numBlocks are created. Then it returns the WAL
// contents.
func WALWithNBlocks(numBlocks int) (data []byte, err error) {
config := getConfig()
app := dummy.NewPersistentDummyApplication(filepath.Join(config.DBDir(), "wal_generator"))
app := kvstore.NewPersistentKVStoreApplication(filepath.Join(config.DBDir(), "wal_generator"))
logger := log.NewNopLogger() // log.TestingLogger().With("wal_generator", "wal_generator")
logger := log.TestingLogger().With("wal_generator", "wal_generator")
logger.Info("generating WAL (last height msg excluded)", "numBlocks", numBlocks)
/////////////////////////////////////////////////////////////////////////////
// COPY PASTE FROM node.go WITH A FEW MODIFICATIONS
@@ -45,13 +47,12 @@ func WALWithNBlocks(numBlocks int) (data []byte, err error) {
}
stateDB := db.NewMemDB()
blockStoreDB := db.NewMemDB()
state, err := sm.MakeGenesisState(stateDB, genDoc)
state.SetLogger(logger.With("module", "state"))
state, err := sm.MakeGenesisState(genDoc)
if err != nil {
return nil, errors.Wrap(err, "failed to make genesis state")
}
blockStore := bc.NewBlockStore(blockStoreDB)
handshaker := NewHandshaker(state, blockStore)
handshaker := NewHandshaker(stateDB, state, blockStore, genDoc.AppState())
proxyApp := proxy.NewAppConns(proxy.NewLocalClientCreator(app), handshaker)
proxyApp.SetLogger(logger.With("module", "proxy"))
if err := proxyApp.Start(); err != nil {
@@ -63,8 +64,11 @@ func WALWithNBlocks(numBlocks int) (data []byte, err error) {
if err := eventBus.Start(); err != nil {
return nil, errors.Wrap(err, "failed to start event bus")
}
defer eventBus.Stop()
mempool := types.MockMempool{}
consensusState := NewConsensusState(config.Consensus, state.Copy(), proxyApp.Consensus(), blockStore, mempool)
evpool := types.MockEvidencePool{}
blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyApp.Consensus(), mempool, evpool)
consensusState := NewConsensusState(config.Consensus, state.Copy(), blockExec, blockStore, mempool, evpool)
consensusState.SetLogger(logger)
consensusState.SetEventBus(eventBus)
if privValidator != nil {
@@ -77,7 +81,7 @@ func WALWithNBlocks(numBlocks int) (data []byte, err error) {
var b bytes.Buffer
wr := bufio.NewWriter(&b)
numBlocksWritten := make(chan struct{})
wal := &byteBufferWAL{enc: NewWALEncoder(wr), heightToStop: int64(numBlocks), signalWhenStopsTo: numBlocksWritten}
wal := newByteBufferWAL(logger, NewWALEncoder(wr), int64(numBlocks), numBlocksWritten)
// see wal.go#103
wal.Save(EndHeightMessage{0})
consensusState.wal = wal
@@ -91,8 +95,9 @@ func WALWithNBlocks(numBlocks int) (data []byte, err error) {
case <-numBlocksWritten:
wr.Flush()
return b.Bytes(), nil
case <-time.After(time.Duration(5*numBlocks) * time.Second):
return b.Bytes(), fmt.Errorf("waited too long for tendermint to produce %d blocks", numBlocks)
case <-time.After(1 * time.Minute):
wr.Flush()
return b.Bytes(), fmt.Errorf("waited too long for tendermint to produce %d blocks (grep logs for `wal_generator`)", numBlocks)
}
}
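A hedged sketch of how a test might consume the generator's output directly, by round-tripping the produced bytes through the decoder; this complements the file-based use in wal_test.go further down (the test name is hypothetical, and the standard bytes/io imports are assumed):

func TestWALGeneratorDecodes(t *testing.T) {
	data, err := WALWithNBlocks(3)
	if err != nil {
		t.Fatal(err)
	}
	dec := NewWALDecoder(bytes.NewReader(data))
	for {
		_, err := dec.Decode()
		if err == io.EOF {
			break
		}
		if err != nil {
			t.Fatalf("generated WAL does not decode cleanly: %v", err)
		}
	}
}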
@@ -124,7 +129,7 @@ func makeAddrs() (string, string, string) {
// getConfig returns a config for test cases
func getConfig() *cfg.Config {
pathname := makePathname()
c := cfg.ResetTestRoot(pathname)
c := cfg.ResetTestRoot(fmt.Sprintf("%s_%d", pathname, cmn.RandInt()))
// and we use random ports to run in parallel
tm, rpc, grpc := makeAddrs()
@@ -141,28 +146,43 @@ type byteBufferWAL struct {
enc *WALEncoder
stopped bool
heightToStop int64
signalWhenStopsTo chan struct{}
signalWhenStopsTo chan<- struct{}
logger log.Logger
}
// needed for determinism
var fixedTime, _ = time.Parse(time.RFC3339, "2017-01-02T15:04:05Z")
func newByteBufferWAL(logger log.Logger, enc *WALEncoder, nBlocks int64, signalStop chan<- struct{}) *byteBufferWAL {
return &byteBufferWAL{
enc: enc,
heightToStop: nBlocks,
signalWhenStopsTo: signalStop,
logger: logger,
}
}
// Save writes message to the internal buffer except when heightToStop is
// reached, in which case it will signal the caller via signalWhenStopsTo and
// skip writing.
func (w *byteBufferWAL) Save(m WALMessage) {
if w.stopped {
w.logger.Debug("WAL already stopped. Not writing message", "msg", m)
return
}
if endMsg, ok := m.(EndHeightMessage); ok {
w.logger.Debug("WAL write end height message", "height", endMsg.Height, "stopHeight", w.heightToStop)
if endMsg.Height == w.heightToStop {
w.logger.Debug("Stopping WAL at height", "height", endMsg.Height)
w.signalWhenStopsTo <- struct{}{}
w.stopped = true
return
}
}
w.logger.Debug("WAL Write Message", "msg", m)
err := w.enc.Encode(&TimedWALMessage{fixedTime, m})
if err != nil {
panic(fmt.Sprintf("failed to encode the msg %v", m))
@@ -172,7 +192,7 @@ func (w *byteBufferWAL) Save(m WALMessage) {
func (w *byteBufferWAL) Group() *auto.Group {
panic("not implemented")
}
func (w *byteBufferWAL) SearchForEndHeight(height int64) (gr *auto.GroupReader, found bool, err error) {
func (w *byteBufferWAL) SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error) {
return nil, false, nil
}

View File

@@ -41,7 +41,7 @@ func TestWALEncoderDecoder(t *testing.T) {
}
}
func TestSearchForEndHeight(t *testing.T) {
func TestWALSearchForEndHeight(t *testing.T) {
walBody, err := WALWithNBlocks(6)
if err != nil {
t.Fatal(err)
@@ -54,7 +54,7 @@ func TestSearchForEndHeight(t *testing.T) {
}
h := int64(3)
gr, found, err := wal.SearchForEndHeight(h)
gr, found, err := wal.SearchForEndHeight(h, &WALSearchOptions{})
assert.NoError(t, err, cmn.Fmt("expected not to err on height %d", h))
assert.True(t, found, cmn.Fmt("expected to find end height for %d", h))
assert.NotNil(t, gr, "expected group not to be nil")

docs/.python-version Normal file
View File

@@ -0,0 +1 @@
2.7.14

View File

@@ -12,6 +12,9 @@ BUILDDIR = _build
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
install:
@pip install -r requirements.txt
.PHONY: help Makefile
# Catch-all target: route all unknown targets to Sphinx using the new

View File

@@ -16,15 +16,15 @@ Next, install the ``abci-cli`` tool and example applications:
go get -u github.com/tendermint/abci/cmd/abci-cli
If this fails, you may need to use ``glide`` to get vendored
If this fails, you may need to use `dep <https://github.com/golang/dep>`__ to get vendored
dependencies:
::
go get github.com/Masterminds/glide
cd $GOPATH/src/github.com/tendermint/abci
glide install
go install ./cmd/abci-cli
make get_tools
make get_vendor_deps
make install
Now run ``abci-cli`` to see the list of commands:
@@ -40,7 +40,7 @@ Now run ``abci-cli`` to see the list of commands:
console Start an interactive abci console for multiple commands
counter ABCI demo example
deliver_tx Deliver a new tx to the application
dummy ABCI demo example
kvstore ABCI demo example
echo Have the application echo a message
help Help about any command
info Get some info about the application
@@ -53,11 +53,11 @@ Now run ``abci-cli`` to see the list of commands:
-h, --help help for abci-cli
-v, --verbose print the command and results as if it were a console session
Use "abci-cli [command] --help" for more information about a command.
Use "abci-cli [command] --help" for more information about a command.
Dummy - First Example
---------------------
KVStore - First Example
-----------------------
The ``abci-cli`` tool lets us send ABCI messages to our application, to
help build and debug them.
@@ -66,14 +66,56 @@ The most important messages are ``deliver_tx``, ``check_tx``, and
``commit``, but there are others for convenience, configuration, and
information purposes.
Let's start a dummy application, which was installed at the same time as
``abci-cli`` above. The dummy just stores transactions in a merkle tree:
We'll start a kvstore application, which was installed at the same time as
``abci-cli`` above. The kvstore just stores transactions in a merkle tree.
Its code can be found `here <https://github.com/tendermint/abci/blob/master/cmd/abci-cli/abci-cli.go>`__ and looks like:
.. container:: toggle
.. container:: header
**Show/Hide KVStore Example**
.. code-block:: go
func cmdKVStore(cmd *cobra.Command, args []string) error {
logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))
// Create the application - in memory or persisted to disk
var app types.Application
if flagPersist == "" {
app = kvstore.NewKVStoreApplication()
} else {
app = kvstore.NewPersistentKVStoreApplication(flagPersist)
app.(*kvstore.PersistentKVStoreApplication).SetLogger(logger.With("module", "kvstore"))
}
// Start the listener
srv, err := server.NewServer(flagAddrD, flagAbci, app)
if err != nil {
return err
}
srv.SetLogger(logger.With("module", "abci-server"))
if err := srv.Start(); err != nil {
return err
}
// Wait forever
cmn.TrapSignal(func() {
// Cleanup
srv.Stop()
})
return nil
}
Start by running:
::
abci-cli dummy
abci-cli kvstore
In another terminal, run
And in another terminal, run
::
@@ -187,6 +229,41 @@ Counter - Another Example
Now that we've got the hang of it, let's try another application, the
"counter" app.
Like the kvstore app, its code can be found `here <https://github.com/tendermint/abci/blob/master/cmd/abci-cli/abci-cli.go>`__ and looks like:
.. container:: toggle
.. container:: header
**Show/Hide Counter Example**
.. code-block:: go
func cmdCounter(cmd *cobra.Command, args []string) error {
app := counter.NewCounterApplication(flagSerial)
logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))
// Start the listener
srv, err := server.NewServer(flagAddrC, flagAbci, app)
if err != nil {
return err
}
srv.SetLogger(logger.With("module", "abci-server"))
if err := srv.Start(); err != nil {
return err
}
// Wait forever
cmn.TrapSignal(func() {
// Cleanup
srv.Stop()
})
return nil
}
The counter app doesn't use a Merkle tree; it just counts how many times
we've sent a transaction, asked for a hash, or committed the state. The
result of ``commit`` is just the number of transactions sent.
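As a rough sketch (not the shipped example; the struct, method set, and the exact encoding of the commit result are illustrative, assuming the usual ``types`` and ``fmt`` imports), the core of such an app looks roughly like:

.. code-block:: go

    // CounterApplication keeps a single in-memory transaction counter.
    type CounterApplication struct {
        types.BaseApplication
        txCount int
    }

    // DeliverTx just bumps the counter for every transaction delivered.
    func (app *CounterApplication) DeliverTx(tx []byte) types.Result {
        app.txCount++
        return types.OK
    }

    // Commit reports the number of transactions seen so far.
    func (app *CounterApplication) Commit() types.Result {
        return types.NewResultOK([]byte(fmt.Sprintf("%d", app.txCount)), "")
    }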
@@ -211,7 +288,7 @@ other peers.
In this instance of the counter app, ``check_tx`` only allows
transactions whose integer is greater than the last committed one.
Let's kill the console and the dummy application, and start the counter
Let's kill the console and the kvstore application, and start the counter
app:
::
@@ -251,7 +328,7 @@ In another window, start the ``abci-cli console``:
-> data.hex: 0x7B22686173686573223A302C22747873223A327D
This is a very simple application, but between ``counter`` and
``dummy``, its easy to see how you can build out arbitrary application
``kvstore``, it's easy to see how you can build out arbitrary application
states on top of the ABCI. `Hyperledger's
Burrow <https://github.com/hyperledger/burrow>`__ also runs atop ABCI,
bringing with it Ethereum-like accounts, the Ethereum virtual-machine,
@@ -261,7 +338,7 @@ But the ultimate flexibility comes from being able to write the
application easily in any language.
We have implemented the counter in a number of languages (see the
example directory).
`example directory <https://github.com/tendermint/abci/tree/master/example>`__).
To run the Node JS version, ``cd`` to ``example/js`` and run
@@ -289,4 +366,4 @@ its own pattern of messages.
For more information, see the `application developers
guide <./app-development.html>`__. For examples of running an ABCI
app with Tendermint, see the `getting started
guide <./getting-started.html>`__.
guide <./getting-started.html>`__. Next is the ABCI specification.

View File

@@ -142,10 +142,10 @@ It is unlikely that you will need to implement a client. For details of
our client, see
`here <https://github.com/tendermint/abci/tree/master/client>`__.
Most of the examples below are from `dummy application
<https://github.com/tendermint/abci/blob/master/example/dummy/dummy.go>`__,
which is a part of the abci repo. `persistent_dummy application
<https://github.com/tendermint/abci/blob/master/example/dummy/persistent_dummy.go>`__
Most of the examples below are from `kvstore application
<https://github.com/tendermint/abci/blob/master/example/kvstore/kvstore.go>`__,
which is a part of the abci repo. `persistent_kvstore application
<https://github.com/tendermint/abci/blob/master/example/kvstore/persistent_kvstore.go>`__
is used to show ``BeginBlock``, ``EndBlock`` and ``InitChain``
example implementations.
@@ -202,7 +202,7 @@ mempool state.
.. code-block:: go
func (app *DummyApplication) CheckTx(tx []byte) types.Result {
func (app *KVStoreApplication) CheckTx(tx []byte) types.Result {
return types.OK
}
@@ -263,7 +263,7 @@ merkle root of the data returned by the DeliverTx requests, or both.
.. code-block:: go
// tx is either "key=value" or just arbitrary bytes
func (app *DummyApplication) DeliverTx(tx []byte) types.Result {
func (app *KVStoreApplication) DeliverTx(tx []byte) types.Result {
parts := strings.Split(string(tx), "=")
if len(parts) == 2 {
app.state.Set([]byte(parts[0]), []byte(parts[1]))
@@ -327,7 +327,7 @@ job of the `Handshake <#handshake>`__.
.. code-block:: go
func (app *DummyApplication) Commit() types.Result {
func (app *KVStoreApplication) Commit() types.Result {
hash := app.state.Hash()
return types.NewResultOK(hash, "")
}
@@ -369,7 +369,7 @@ pick up from when it restarts. See information on the Handshake, below.
.. code-block:: go
// Track the block hash and header information
func (app *PersistentDummyApplication) BeginBlock(params types.RequestBeginBlock) {
func (app *PersistentKVStoreApplication) BeginBlock(params types.RequestBeginBlock) {
// update latest block info
app.blockHeader = params.Header
@@ -403,14 +403,16 @@ pick up from when it restarts. See information on the Handshake, below.
EndBlock
^^^^^^^^
The EndBlock request can be used to run some code at the end of every
block. Additionally, the response may contain a list of validators,
which can be used to update the validator set. To add a new validator or
update an existing one, simply include them in the list returned in the
EndBlock response. To remove one, include it in the list with a
``power`` equal to ``0``. Tendermint core will take care of updating the
validator set. Note validator set changes are only available in v0.8.0
and up.
The EndBlock request can be used to run some code at the end of every block.
Additionally, the response may contain a list of validators, which can be used
to update the validator set. To add a new validator or update an existing one,
simply include them in the list returned in the EndBlock response. To remove
one, include it in the list with a ``power`` equal to ``0``. Tendermint core
will take care of updating the validator set. Note the change in voting power
must be strictly less than 1/3 per block if you want a light client to be able
to prove the transition externally. See the `light client docs
<https://godoc.org/github.com/tendermint/tendermint/lite#hdr-How_We_Track_Validators>`__
for details on how it tracks validators.
.. container:: toggle
@@ -421,8 +423,8 @@ and up.
.. code-block:: go
// Update the validator set
func (app *PersistentDummyApplication) EndBlock(height uint64) (resEndBlock types.ResponseEndBlock) {
return types.ResponseEndBlock{Diffs: app.changes}
func (app *PersistentKVStoreApplication) EndBlock(req types.RequestEndBlock) types.ResponseEndBlock {
return types.ResponseEndBlock{ValidatorUpdates: app.ValUpdates}
}
.. container:: toggle
@@ -475,7 +477,7 @@ Note: these query formats are subject to change!
.. code-block:: go
func (app *DummyApplication) Query(reqQuery types.RequestQuery) (resQuery types.ResponseQuery) {
func (app *KVStoreApplication) Query(reqQuery types.RequestQuery) (resQuery types.ResponseQuery) {
if reqQuery.Prove {
value, proof, exists := app.state.Proof(reqQuery.Data)
resQuery.Index = -1 // TODO make Proof return index
@@ -559,7 +561,7 @@ all blocks.
.. code-block:: go
func (app *DummyApplication) Info(req types.RequestInfo) (resInfo types.ResponseInfo) {
func (app *KVStoreApplication) Info(req types.RequestInfo) (resInfo types.ResponseInfo) {
return types.ResponseInfo{Data: cmn.Fmt("{\"size\":%v}", app.state.Size())}
}
@@ -593,7 +595,7 @@ consensus params.
.. code-block:: go
// Save the validators in the merkle tree
func (app *PersistentDummyApplication) InitChain(params types.RequestInitChain) {
func (app *PersistentKVStoreApplication) InitChain(params types.RequestInitChain) {
for _, v := range params.Validators {
r := app.updateValidator(v)
if r.IsErr() {

View File

@@ -22,30 +22,30 @@ The parameters are used to determine the validity of a block (and tx) via the un
```
type ConsensusParams struct {
BlockSizeParams
TxSizeParams
BlockGossipParams
BlockSize
TxSize
BlockGossip
}
type BlockSizeParams struct {
type BlockSize struct {
MaxBytes int
MaxTxs int
MaxGas int
}
type TxSizeParams struct {
type TxSize struct {
MaxBytes int
MaxGas int
}
type BlockGossipParams struct {
type BlockGossip struct {
BlockPartSizeBytes int
}
```
The `ConsensusParams` can evolve over time by adding new structs that cover different aspects of the consensus rules.
The `BlockPartSizeBytes` and the `BlockSizeParams.MaxBytes` are enforced to be greater than 0.
The `BlockPartSizeBytes` and the `BlockSize.MaxBytes` are enforced to be greater than 0.
The former because we need a part size, the latter so that we always have at least some sanity check over the size of blocks.
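A minimal sketch of that validation, assuming the struct layout above (the function name and error messages are illustrative, not the actual implementation; assumes the standard `errors` import):

```
func (params ConsensusParams) Validate() error {
	// Both invariants described above: a part size is always required, and
	// blocks must have a positive maximum size.
	if params.BlockGossip.BlockPartSizeBytes <= 0 {
		return errors.New("BlockGossip.BlockPartSizeBytes must be greater than 0")
	}
	if params.BlockSize.MaxBytes <= 0 {
		return errors.New("BlockSize.MaxBytes must be greater than 0")
	}
	return nil
}
```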
### ABCI
@@ -58,7 +58,7 @@ like the BlockPartSize, that the app shouldn't really know about.
#### EndBlock
The EndBlock response includes a `ConsensusParams`, which includes BlockSizeParams and TxSizeParams, but not BlockGossipParams.
The EndBlock response includes a `ConsensusParams`, which includes BlockSize and TxSize, but not BlockGossip.
Other param structs can be added to `ConsensusParams` in the future.
The `0` value is used to denote no change.
Any other value will update that parameter in the `State.ConsensusParams`, to be applied for the next block.
@@ -82,4 +82,5 @@ Proposed.
### Neutral
- The TxSizeParams, which checks validity, may be in conflict with the config's `max_block_size_tx`, which determines proposal sizes
- The TxSize, which checks validity, may be in conflict with the config's `max_block_size_tx`, which determines proposal sizes

View File

@@ -0,0 +1,128 @@
# ADR 008: PrivValidator
## Context
The current PrivValidator is monolithic and isn't easily reusable by alternative signers.
For instance, see https://github.com/tendermint/tendermint/issues/673
The goal is to have a clean PrivValidator interface like:
```
type PrivValidator interface {
Address() data.Bytes
PubKey() crypto.PubKey
SignVote(chainID string, vote *types.Vote) error
SignProposal(chainID string, proposal *types.Proposal) error
SignHeartbeat(chainID string, heartbeat *types.Heartbeat) error
}
```
It should also be easy to re-use the LastSignedInfo logic to avoid double signing.
## Decision
Tendermint nodes should support only two in-process PrivValidator implementations:
- PrivValidatorUnencrypted uses an unencrypted private key in a "priv_validator.json" file - no configuration required (just `tendermint init`).
- PrivValidatorSocket uses a socket to send signing requests to another process - user is responsible for starting that process themselves.
The PrivValidatorSocket address can be provided via flags at the command line -
doing so will cause Tendermint to ignore any "priv_validator.json" file and to listen
on the given address for incoming connections from an external priv_validator process.
It will halt any operation until at least one external process successfully
connects.
The external priv_validator process will dial the address to connect to Tendermint,
and then Tendermint will send requests on the ensuing connection to sign votes and proposals.
Thus the external process initiates the connection, but the Tendermint process makes all requests.
In a later stage we're going to support multiple validators for fault
tolerance. To prevent double signing they need to be synced, which is deferred
to an external solution (see #1185).
In addition, Tendermint will provide implementations that can be run in that external process.
These include:
- PrivValidatorEncrypted uses an encrypted private key persisted to disk - user must enter password to decrypt key when process is started.
- PrivValidatorLedger uses a Ledger Nano S to handle all signing.
What follows are descriptions of useful types:
### Signer
```
type Signer interface {
Sign(msg []byte) (crypto.Signature, error)
}
```
Signer signs a message. It can also return an error.
### ValidatorID
ValidatorID is just the Address and PubKey
```
type ValidatorID struct {
Address data.Bytes `json:"address"`
PubKey crypto.PubKey `json:"pub_key"`
}
```
### LastSignedInfo
LastSignedInfo tracks the last thing we signed:
```
type LastSignedInfo struct {
Height int64 `json:"height"`
Round int `json:"round"`
Step int8 `json:"step"`
Signature crypto.Signature `json:"signature,omitempty"` // so we dont lose signatures
SignBytes data.Bytes `json:"signbytes,omitempty"` // so we dont lose signatures
}
```
It exposes methods for signing votes and proposals using a `Signer`.
This allows it to be easily reused by developers implementing their own PrivValidator.
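As a hedged sketch of that reuse (the method name, the omitted step check, and the `SignBytes` helper are assumptions for illustration, not part of this ADR):

```
// SignVote signs the vote via the given Signer, refusing any request below
// the last signed height/round (double-sign protection; a full version would
// also check the step).
func (lsi *LastSignedInfo) SignVote(signer Signer, chainID string, vote *types.Vote) error {
	height, round := vote.Height, vote.Round
	if height < lsi.Height || (height == lsi.Height && round < lsi.Round) {
		return fmt.Errorf("height/round regression: %d/%d < %d/%d",
			height, round, lsi.Height, lsi.Round)
	}
	signBytes := types.SignBytes(chainID, vote) // assumed helper for canonical sign bytes
	sig, err := signer.Sign(signBytes)
	if err != nil {
		return err
	}
	lsi.Height, lsi.Round = height, round
	lsi.Signature, lsi.SignBytes = sig, signBytes
	vote.Signature = sig
	return nil
}
```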
### PrivValidatorUnencrypted
```
type PrivValidatorUnencrypted struct {
ID types.ValidatorID `json:"id"`
PrivKey PrivKey `json:"priv_key"`
LastSignedInfo *LastSignedInfo `json:"last_signed_info"`
}
```
It has the same structure as the current implementation, but is broken up into sub-structs.
Note the LastSignedInfo is mutated in place every time we sign.
### PrivValidatorJSON
The "priv_validator.json" file supports only the PrivValidatorUnencrypted type.
It unmarshals into PrivValidatorJSON, which is used as the default PrivValidator type.
It wraps the PrivValidatorUnencrypted and persists it to disk after every signature.
## Status
Accepted.
## Consequences
### Positive
- Cleaner separation of components enabling re-use.
### Negative
- More files, which led to the creation of a new directory.
### Neutral

View File

@@ -41,15 +41,15 @@ templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
source_suffix = ['.rst', '.md']
# source_suffix = '.rst'
#source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Tendermint'
copyright = u'2017, The Authors'
copyright = u'2018, The Authors'
author = u'Tendermint'
# The version info for the project you're documenting, acts as replacement for
@@ -71,7 +71,7 @@ language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', 'architecture']
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', 'architecture', 'specification/new-spec', 'examples']
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
@@ -171,29 +171,38 @@ texinfo_documents = [
'Database'),
]
repo = "https://raw.githubusercontent.com/tendermint/tools/"
branch = "master"
# ---- customization -------------------------
tools = "./tools"
assets = tools + "/assets"
tools_repo = "https://raw.githubusercontent.com/tendermint/tools/"
tools_branch = "master"
if os.path.isdir(tools) != True:
os.mkdir(tools)
if os.path.isdir(assets) != True:
os.mkdir(assets)
tools_dir = "./tools"
assets_dir = tools_dir + "/assets"
urllib.urlretrieve(repo+branch+'/ansible/README.rst', filename=tools+'/ansible.rst')
urllib.urlretrieve(repo+branch+'/ansible/assets/a_plus_t.png', filename=assets+'/a_plus_t.png')
if os.path.isdir(tools_dir) != True:
os.mkdir(tools_dir)
if os.path.isdir(assets_dir) != True:
os.mkdir(assets_dir)
urllib.urlretrieve(repo+branch+'/docker/README.rst', filename=tools+'/docker.rst')
urllib.urlretrieve(tools_repo+tools_branch+'/ansible/README.rst', filename=tools_dir+'/ansible.rst')
urllib.urlretrieve(tools_repo+tools_branch+'/ansible/assets/a_plus_t.png', filename=assets_dir+'/a_plus_t.png')
urllib.urlretrieve(repo+branch+'/mintnet-kubernetes/README.rst', filename=tools+'/mintnet-kubernetes.rst')
urllib.urlretrieve(repo+branch+'/mintnet-kubernetes/assets/gce1.png', filename=assets+'/gce1.png')
urllib.urlretrieve(repo+branch+'/mintnet-kubernetes/assets/gce2.png', filename=assets+'/gce2.png')
urllib.urlretrieve(repo+branch+'/mintnet-kubernetes/assets/statefulset.png', filename=assets+'/statefulset.png')
urllib.urlretrieve(repo+branch+'/mintnet-kubernetes/assets/t_plus_k.png', filename=assets+'/t_plus_k.png')
urllib.urlretrieve(tools_repo+tools_branch+'/docker/README.rst', filename=tools_dir+'/docker.rst')
urllib.urlretrieve(repo+branch+'/terraform-digitalocean/README.rst', filename=tools+'/terraform-digitalocean.rst')
urllib.urlretrieve(repo+branch+'/tm-bench/README.rst', filename=tools+'/benchmarking-and-monitoring.rst')
urllib.urlretrieve(tools_repo+tools_branch+'/mintnet-kubernetes/README.rst', filename=tools_dir+'/mintnet-kubernetes.rst')
urllib.urlretrieve(tools_repo+tools_branch+'/mintnet-kubernetes/assets/gce1.png', filename=assets_dir+'/gce1.png')
urllib.urlretrieve(tools_repo+tools_branch+'/mintnet-kubernetes/assets/gce2.png', filename=assets_dir+'/gce2.png')
urllib.urlretrieve(tools_repo+tools_branch+'/mintnet-kubernetes/assets/statefulset.png', filename=assets_dir+'/statefulset.png')
urllib.urlretrieve(tools_repo+tools_branch+'/mintnet-kubernetes/assets/t_plus_k.png', filename=assets_dir+'/t_plus_k.png')
urllib.urlretrieve(tools_repo+tools_branch+'/terraform-digitalocean/README.rst', filename=tools_dir+'/terraform-digitalocean.rst')
urllib.urlretrieve(tools_repo+tools_branch+'/tm-bench/README.rst', filename=tools_dir+'/benchmarking-and-monitoring.rst')
# the readme for below is included in tm-bench
# urllib.urlretrieve('https://raw.githubusercontent.com/tendermint/tools/master/tm-monitor/README.rst', filename='tools/tm-monitor.rst')
#### abci spec #################################
abci_repo = "https://raw.githubusercontent.com/tendermint/abci/"
abci_branch = "develop"
urllib.urlretrieve(abci_repo+abci_branch+'/specification.rst', filename='abci-spec.rst')

View File

@@ -13,7 +13,7 @@ It's relatively easy to setup a Tendermint cluster manually. The only
requirements for a particular Tendermint node are a private key for the
validator, stored as ``priv_validator.json``, and a list of the public
keys of all validators, stored as ``genesis.json``. These files should
be stored in ``~/.tendermint``, or wherever the ``$TMHOME`` variable
be stored in ``~/.tendermint/config``, or wherever the ``$TMHOME`` variable
might be set to.
Here are the steps to setting up a testnet manually:
@@ -24,15 +24,15 @@ Here are the steps to setting up a testnet manually:
``tendermint gen_validator``
4) Compile a list of public keys for each validator into a
``genesis.json`` file.
5) Run ``tendermint node --p2p.seeds=< seed addresses >`` on each node,
where ``< seed addresses >`` is a comma separated list of the IP:PORT
5) Run ``tendermint node --p2p.persistent_peers=< peer addresses >`` on each node,
where ``< peer addresses >`` is a comma separated list of the IP:PORT
combination for each node. The default port for Tendermint is
``46656``. Thus, if the IP addresses of your nodes were
``192.168.0.1, 192.168.0.2, 192.168.0.3, 192.168.0.4``, the command
would look like:
``tendermint node --p2p.seeds=192.168.0.1:46656,192.168.0.2:46656,192.168.0.3:46656,192.168.0.4:46656``.
``tendermint node --p2p.persistent_peers=192.168.0.1:46656,192.168.0.2:46656,192.168.0.3:46656,192.168.0.4:46656``.
After a few seconds, all the nodes should connect to eachother and start
After a few seconds, all the nodes should connect to each other and start
making blocks! For more information, see the Tendermint Networks section
of `the guide to using Tendermint <using-tendermint.html>`__.
@@ -48,7 +48,7 @@ Automated Deployment using Kubernetes
The `mintnet-kubernetes tool <https://github.com/tendermint/tools/tree/master/mintnet-kubernetes>`__
allows automating the deployment of a Tendermint network on an already
provisioned kubernetes cluster. For simple provisioning of a kubernetes
provisioned Kubernetes cluster. For simple provisioning of a Kubernetes
cluster, check out the `Google Cloud Platform <https://cloud.google.com/>`__.
Automated Deployment using Terraform and Ansible

docs/determinism.rst Normal file
View File

@@ -0,0 +1,8 @@
On Determinism
==============
Arguably, the most difficult part of blockchain programming is determinism - that is, ensuring that sources of indeterminism do not creep into the design of such systems.
See `this issue <https://github.com/tendermint/abci/issues/56>`__ for more information on the potential sources of indeterminism.
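As a small illustration (this example is not from the issue above), Go's randomized map iteration order is a classic source of indeterminism: hashing application state by ranging over a map directly can yield different results on different nodes, so keys should be sorted first.

.. code-block:: go

    package main

    import (
        "crypto/sha256"
        "fmt"
        "sort"
    )

    func main() {
        balances := map[string]int64{"alice": 10, "bob": 20}
        hasher := sha256.New()

        // Deterministic: sort the keys so every node hashes entries in the
        // same order. Ranging over the map directly would not be, because Go
        // randomizes map iteration order.
        keys := make([]string, 0, len(balances))
        for k := range balances {
            keys = append(keys, k)
        }
        sort.Strings(keys)
        for _, k := range keys {
            fmt.Fprintf(hasher, "%s:%d", k, balances[k])
        }
        fmt.Printf("%X\n", hasher.Sum(nil))
    }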

View File

@@ -1,122 +1,15 @@
Tendermint Ecosystem
====================
Below are the many applications built using various pieces of the Tendermint stack. We thank the community for their contributions thus far and welcome the addition of new projects. Feel free to submit a pull request to add your project!
The growing list of applications built using various pieces of the Tendermint stack can be found at:
ABCI Applications
-----------------
* https://tendermint.com/ecosystem
Burrow
^^^^^^
We thank the community for their contributions thus far and welcome the addition of new projects. A pull request can be submitted to `this file <https://github.com/tendermint/aib-data/blob/master/json/ecosystem.json>`__ to include your project.
Ethereum Virtual Machine augmented with native permissioning scheme and global key-value store, written in Go, authored by Monax Industries, and incubated `by Hyperledger <https://github.com/hyperledger/burrow>`__.
cb-ledger
^^^^^^^^^
Custodian Bank Ledger, integrating central banking with the blockchains of tomorrow, written in C++, and `authored by Block Finance <https://github.com/block-finance/cpp-abci>`__.
Clearchain
^^^^^^^^^^
Application to manage a distributed ledger for money transfers that support multi-currency accounts, written in Go, and `authored by Allession Treglia <https://github.com/tendermint/clearchain>`__.
Comit
^^^^^
Public service reporting and tracking, written in Go, and `authored by Zach Balder <https://github.com/zbo14/comit>`__.
Cosmos SDK
^^^^^^^^^^
A prototypical account based crypto currency state machine supporting plugins, written in Go, and `authored by Cosmos <https://github.com/cosmos/cosmos-sdk>`__.
Ethermint
^^^^^^^^^
The go-ethereum state machine run as a ABCI app, written in Go, `authored by Tendermint <https://github.com/tendermint/ethermint>`__.
IAVL
^^^^
Immutable AVL+ tree with Merkle proofs, Written in Go, `authored by Tendermint <https://github.com/tendermint/iavl>`__.
Lotion
^^^^^^
A Javascript microframework for building blockchain applications with Tendermint, written in Javascript, `authored by Judd Keppel of Tendermint <https://github.com/keppel/lotion>`__. See also `lotion-chat <https://github.com/keppel/lotion-chat>`__ and `lotion-coin <https://github.com/keppel/lotion-coin>`__ apps written using Lotion.
MerkleTree
^^^^^^^^^^
Immutable AVL+ tree with Merkle proofs, Written in Java, `authored by jTendermint <https://github.com/jTendermint/MerkleTree>`__.
Passchain
^^^^^^^^^
Passchain is a tool to securely store and share passwords, tokens and other short secrets, `authored by trusch <https://github.com/trusch/passchain>`__.
Passwerk
^^^^^^^^
Encrypted storage web-utility backed by Tendermint, written in Go, `authored by Rigel Rozanski <https://github.com/rigelrozanski/passwerk>`__.
Py-Tendermint
^^^^^^^^^^^^^
A Python microframework for building blockchain applications with Tendermint, written in Python, `authored by Dave Bryson <https://github.com/davebryson/py-tendermint>`__.
Stratumn
^^^^^^^^
SDK for "Proof-of-Process" networks, written in Go, `authored by the Stratumn team <https://github.com/stratumn/sdk>`__.
TMChat
^^^^^^
P2P chat using Tendermint, written in Java, `authored by wolfposd <https://github.com/wolfposd/TMChat>`__.
ABCI Servers
------------
+------------------------------------------------------------------+--------------------+--------------+
| **Name** | **Author** | **Language** |
| | | |
+------------------------------------------------------------------+--------------------+--------------+
| `abci <https://github.com/tendermint/abci>`__ | Tendermint | Go |
+------------------------------------------------------------------+--------------------+--------------+
| `js abci <https://github.com/tendermint/js-abci>`__ | Tendermint | Javascript |
+------------------------------------------------------------------+--------------------+--------------+
| `cpp-tmsp <https://github.com/block-finance/cpp-abci>`__ | Martin Dyring | C++ |
+------------------------------------------------------------------+--------------------+--------------+
| `c-abci <https://github.com/chainx-org/c-abci>`__ | ChainX | C |
+------------------------------------------------------------------+--------------------+--------------+
| `jabci <https://github.com/jTendermint/jabci>`__ | jTendermint | Java |
+------------------------------------------------------------------+--------------------+--------------+
| `ocaml-tmsp <https://github.com/zbo14/ocaml-tmsp>`__ | Zach Balder | Ocaml |
+------------------------------------------------------------------+--------------------+--------------+
| `abci_server <https://github.com/KrzysiekJ/abci_server>`__ | Krzysztof Jurewicz | Erlang |
+------------------------------------------------------------------+--------------------+--------------+
| `rust-tsp <https://github.com/tendermint/rust-tsp>`__   | Adrian Brink | Rust       |
+------------------------------------------------------------------+--------------------+--------------+
| `hs-abci <https://github.com/albertov/hs-abci>`__ | Alberto Gonzalez | Haskell |
+------------------------------------------------------------------+--------------------+--------------+
| `haskell-abci <https://github.com/cwgoes/haskell-abci>`__ | Christoper Goes | Haskell |
+------------------------------------------------------------------+--------------------+--------------+
| `Spearmint <https://github.com/dennismckinnon/spearmint>`__ | Dennis Mckinnon | Javascript |
+------------------------------------------------------------------+--------------------+--------------+
| `py-abci <https://github.com/davebryson/py-abci>`__ | Dave Bryson | Python |
+------------------------------------------------------------------+--------------------+--------------+
Deployment Tools
----------------
Other Tools
-----------
See `deploy testnets <./deploy-testnets.html>`__ for information about all the tools built by Tendermint. We have Kubernetes, Ansible, and Terraform integrations.
Cloudsoft built `brooklyn-tendermint <https://github.com/cloudsoft/brooklyn-tendermint>`__ for deploying a tendermint testnet in docker containers. It uses Clocker for Apache Brooklyn.
Dev Tools
---------
For upgrading from older to newer versions of tendermint and to migrate your chain data, see `tm-migrator <https://github.com/hxzqlh/tm-tools>`__ written by @hxzqlh.

View File

@@ -0,0 +1,142 @@
# Tendermint
## Overview
This is a quick start guide. If you have a vague idea about how Tendermint works
and want to get started right away, continue. Otherwise, [review the documentation](http://tendermint.readthedocs.io/en/master/)
## Install
### Quick Install
On a fresh Ubuntu 16.04 machine, installation can be done with [this script](https://git.io/vNLfY), like so:
```
curl -L https://git.io/vxWlX | bash
source ~/.profile
```
WARNING: do not run the above on your local machine.
The script is also used to facilitate cluster deployment below.
### Manual Install
Requires:
- `go` minimum version 1.9
- `$GOPATH` environment variable must be set
- `$GOPATH/bin` must be on your `$PATH` (see https://github.com/tendermint/tendermint/wiki/Setting-GOPATH)
To install Tendermint, run:
```
go get github.com/tendermint/tendermint
cd $GOPATH/src/github.com/tendermint/tendermint
make get_tools && make get_vendor_deps
make install
```
Note that `go get` may return an error but it can be ignored.
Confirm installation:
```
$ tendermint version
0.15.0-381fe19
```
## Initialization
Running:
```
tendermint init
```
will create the required files for a single, local node.
These files are found in `$HOME/.tendermint`:
```
$ ls $HOME/.tendermint
config.toml data genesis.json priv_validator.json
```
For a single, local node, no further configuration is required.
Configuring a cluster is covered further below.
## Local Node
Start tendermint with a simple in-process application:
```
tendermint node --proxy_app=kvstore
```
and blocks will start to stream in:
```
I[01-06|01:45:15.592] Executed block module=state height=1 validTxs=0 invalidTxs=0
I[01-06|01:45:15.624] Committed state module=state height=1 txs=0 appHash=
```
Check the status with:
```
curl -s localhost:46657/status
```
### Sending Transactions
With the kvstore app running, we can send transactions:
```
curl -s 'localhost:46657/broadcast_tx_commit?tx="abcd"'
```
and check that it worked with:
```
curl -s 'localhost:46657/abci_query?data="abcd"'
```
We can send transactions with a key and value too:
```
curl -s 'localhost:46657/broadcast_tx_commit?tx="name=satoshi"'
```
and query the key:
```
curl -s 'localhost:46657/abci_query?data="name"'
```
where the value is returned in hex.
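To decode that hex value programmatically, a minimal Go snippet (illustrative only, not part of the repo) would be:

```
package main

import (
	"encoding/hex"
	"fmt"
)

func main() {
	// Example value as it might come back from /abci_query for the "name" key.
	raw := "7361746f736869" // hex encoding of "satoshi"
	decoded, err := hex.DecodeString(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(decoded)) // prints: satoshi
}
```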
## Cluster of Nodes
First create four Ubuntu cloud machines. The following was tested on Digital Ocean Ubuntu 16.04 x64 (3GB/1CPU, 20GB SSD). We'll refer to their respective IP addresses below as IP1, IP2, IP3, IP4.
Then, `ssh` into each machine, and execute [this script](https://git.io/vNLfY):
```
curl -L https://git.io/vNLfY | bash
source ~/.profile
```
This will install `go` and other dependencies, get the Tendermint source code, then compile the `tendermint` binary.
Next, `cd` into `docs/examples`. Each command below should be run from each node, in sequence:
```
tendermint node --home ./node1 --proxy_app=kvstore --p2p.seeds IP1:46656,IP2:46656,IP3:46656,IP4:46656
tendermint node --home ./node2 --proxy_app=kvstore --p2p.seeds IP1:46656,IP2:46656,IP3:46656,IP4:46656
tendermint node --home ./node3 --proxy_app=kvstore --p2p.seeds IP1:46656,IP2:46656,IP3:46656,IP4:46656
tendermint node --home ./node4 --proxy_app=kvstore --p2p.seeds IP1:46656,IP2:46656,IP3:46656,IP4:46656
```
Note that after the third node is started, blocks will start to stream in because >2/3 of validators (defined in the `genesis.json`) have come online. Seeds can also be specified in the `config.toml`. See [this PR](https://github.com/tendermint/tendermint/pull/792) for more information about configuration options.
Transactions can then be sent as covered in the single, local node example above.

View File

@@ -0,0 +1,32 @@
#!/usr/bin/env bash
# XXX: this script is meant to be used only on a fresh Ubuntu 16.04 instance
# and has only been tested on Digital Ocean
# get and unpack golang
curl -O https://storage.googleapis.com/golang/go1.10.linux-amd64.tar.gz
tar -xvf go1.10.linux-amd64.tar.gz
apt install make
## move go and add binary to path
mv go /usr/local
echo "export PATH=\$PATH:/usr/local/go/bin" >> ~/.profile
## create the GOPATH directory, set GOPATH and put on PATH
mkdir goApps
echo "export GOPATH=/root/goApps" >> ~/.profile
echo "export PATH=\$PATH:\$GOPATH/bin" >> ~/.profile
source ~/.profile
## get the code and move into it
REPO=github.com/tendermint/tendermint
go get $REPO
cd $GOPATH/src/$REPO
## build
git checkout v0.17.0
make get_tools
make get_vendor_deps
make install

View File

@@ -0,0 +1,15 @@
# This is a TOML config file.
# For more information, see https://github.com/toml-lang/toml
proxy_app = "tcp://127.0.0.1:46658"
moniker = "penguin"
fast_sync = true
db_backend = "leveldb"
log_level = "state:info,*:error"
[rpc]
laddr = "tcp://0.0.0.0:46657"
[p2p]
laddr = "tcp://0.0.0.0:46656"
seeds = ""

View File

@@ -0,0 +1,42 @@
{
"genesis_time":"0001-01-01T00:00:00Z",
"chain_id":"test-chain-wt7apy",
"validators":[
{
"pub_key":{
"type":"ed25519",
"data":"F08446C80A33E10D620E21450821B58D053778528F2B583D423B3E46EC647D30"
},
"power":10,
"name":"node1"
}
,
{
"pub_key":{
"type":"ed25519",
"data": "A8423F70A9E512643B4B00F7C3701ECAD1F31B0A1FAA45852C41046353B9A07F"
},
"power":10,
"name":"node2"
}
,
{
"pub_key":{
"type":"ed25519",
"data": "E52EFFAEDFE1D618ECDA71DE3B23592B3612CAABA0C10826E4C3120B2198C29A"
},
"power":10,
"name":"node3"
}
,
{
"pub_key":{
"type":"ed25519",
"data": "2B8FC09C07955A02998DFE5AF1AAD1C44115ECA7635FF51A867CF4265D347C07"
},
"power":10,
"name":"node4"
}
],
"app_hash":""
}

View File

@@ -0,0 +1,15 @@
{
"address":"4DC2756029CE0D8F8C6C3E4C3CE6EE8C30AF352F",
"pub_key":{
"type":"ed25519",
"data":"F08446C80A33E10D620E21450821B58D053778528F2B583D423B3E46EC647D30"
},
"last_height":0,
"last_round":0,
"last_step":0,
"last_signature":null,
"priv_key":{
"type":"ed25519",
"data":"4D3648E1D93C8703E436BFF814728B6BD270CFDFD686DF5385E8ACBEB7BE2D7DF08446C80A33E10D620E21450821B58D053778528F2B583D423B3E46EC647D30"
}
}

View File

@@ -0,0 +1,15 @@
# This is a TOML config file.
# For more information, see https://github.com/toml-lang/toml
proxy_app = "tcp://127.0.0.1:46658"
moniker = "penguin"
fast_sync = true
db_backend = "leveldb"
log_level = "state:info,*:error"
[rpc]
laddr = "tcp://0.0.0.0:46657"
[p2p]
laddr = "tcp://0.0.0.0:46656"
seeds = ""

View File

@@ -0,0 +1,42 @@
{
"genesis_time":"0001-01-01T00:00:00Z",
"chain_id":"test-chain-wt7apy",
"validators":[
{
"pub_key":{
"type":"ed25519",
"data":"F08446C80A33E10D620E21450821B58D053778528F2B583D423B3E46EC647D30"
},
"power":10,
"name":"node1"
}
,
{
"pub_key":{
"type":"ed25519",
"data": "A8423F70A9E512643B4B00F7C3701ECAD1F31B0A1FAA45852C41046353B9A07F"
},
"power":10,
"name":"node2"
}
,
{
"pub_key":{
"type":"ed25519",
"data": "E52EFFAEDFE1D618ECDA71DE3B23592B3612CAABA0C10826E4C3120B2198C29A"
},
"power":10,
"name":"node3"
}
,
{
"pub_key":{
"type":"ed25519",
"data": "2B8FC09C07955A02998DFE5AF1AAD1C44115ECA7635FF51A867CF4265D347C07"
},
"power":10,
"name":"node4"
}
],
"app_hash":""
}

View File

@@ -0,0 +1,15 @@
{
"address": "DD6C63A762608A9DDD4A845657743777F63121D6",
"pub_key": {
"type": "ed25519",
"data": "A8423F70A9E512643B4B00F7C3701ECAD1F31B0A1FAA45852C41046353B9A07F"
},
"last_height": 0,
"last_round": 0,
"last_step": 0,
"last_signature": null,
"priv_key": {
"type": "ed25519",
"data": "7B0DE666FF5E9B437D284BCE767F612381890C018B93B0A105D2E829A568DA6FA8423F70A9E512643B4B00F7C3701ECAD1F31B0A1FAA45852C41046353B9A07F"
}
}

View File

@@ -0,0 +1,15 @@
# This is a TOML config file.
# For more information, see https://github.com/toml-lang/toml
proxy_app = "tcp://127.0.0.1:46658"
moniker = "penguin"
fast_sync = true
db_backend = "leveldb"
log_level = "state:info,*:error"
[rpc]
laddr = "tcp://0.0.0.0:46657"
[p2p]
laddr = "tcp://0.0.0.0:46656"
seeds = ""

View File

@@ -0,0 +1,42 @@
{
"genesis_time":"0001-01-01T00:00:00Z",
"chain_id":"test-chain-wt7apy",
"validators":[
{
"pub_key":{
"type":"ed25519",
"data":"F08446C80A33E10D620E21450821B58D053778528F2B583D423B3E46EC647D30"
},
"power":10,
"name":"node1"
}
,
{
"pub_key":{
"type":"ed25519",
"data": "A8423F70A9E512643B4B00F7C3701ECAD1F31B0A1FAA45852C41046353B9A07F"
},
"power":10,
"name":"node2"
}
,
{
"pub_key":{
"type":"ed25519",
"data": "E52EFFAEDFE1D618ECDA71DE3B23592B3612CAABA0C10826E4C3120B2198C29A"
},
"power":10,
"name":"node3"
}
,
{
"pub_key":{
"type":"ed25519",
"data": "2B8FC09C07955A02998DFE5AF1AAD1C44115ECA7635FF51A867CF4265D347C07"
},
"power":10,
"name":"node4"
}
],
"app_hash":""
}

View File

@@ -0,0 +1,15 @@
{
"address": "6D6A1E313B407B5474106CA8759C976B777AB659",
"pub_key": {
"type": "ed25519",
"data": "E52EFFAEDFE1D618ECDA71DE3B23592B3612CAABA0C10826E4C3120B2198C29A"
},
"last_height": 0,
"last_round": 0,
"last_step": 0,
"last_signature": null,
"priv_key": {
"type": "ed25519",
"data": "622432A370111A5C25CFE121E163FE709C9D5C95F551EDBD7A2C69A8545C9B76E52EFFAEDFE1D618ECDA71DE3B23592B3612CAABA0C10826E4C3120B2198C29A"
}
}

View File

@@ -0,0 +1,15 @@
# This is a TOML config file.
# For more information, see https://github.com/toml-lang/toml
proxy_app = "tcp://127.0.0.1:46658"
moniker = "penguin"
fast_sync = true
db_backend = "leveldb"
log_level = "state:info,*:error"
[rpc]
laddr = "tcp://0.0.0.0:46657"
[p2p]
laddr = "tcp://0.0.0.0:46656"
seeds = ""

View File

@@ -0,0 +1,42 @@
{
"genesis_time":"0001-01-01T00:00:00Z",
"chain_id":"test-chain-wt7apy",
"validators":[
{
"pub_key":{
"type":"ed25519",
"data":"F08446C80A33E10D620E21450821B58D053778528F2B583D423B3E46EC647D30"
},
"power":10,
"name":"node1"
}
,
{
"pub_key":{
"type":"ed25519",
"data": "A8423F70A9E512643B4B00F7C3701ECAD1F31B0A1FAA45852C41046353B9A07F"
},
"power":10,
"name":"node2"
}
,
{
"pub_key":{
"type":"ed25519",
"data": "E52EFFAEDFE1D618ECDA71DE3B23592B3612CAABA0C10826E4C3120B2198C29A"
},
"power":10,
"name":"node3"
}
,
{
"pub_key":{
"type":"ed25519",
"data": "2B8FC09C07955A02998DFE5AF1AAD1C44115ECA7635FF51A867CF4265D347C07"
},
"power":10,
"name":"node4"
}
],
"app_hash":""
}

View File

@@ -0,0 +1,15 @@
{
"address": "829A9663611D3DD88A3D84EA0249679D650A0755",
"pub_key": {
"type": "ed25519",
"data": "2B8FC09C07955A02998DFE5AF1AAD1C44115ECA7635FF51A867CF4265D347C07"
},
"last_height": 0,
"last_round": 0,
"last_step": 0,
"last_signature": null,
"priv_key": {
"type": "ed25519",
"data": "0A604D1C9AE94A50150BF39E603239092F9392E4773F4D8F4AC1D86E6438E89E2B8FC09C07955A02998DFE5AF1AAD1C44115ECA7635FF51A867CF4265D347C07"
}
}

View File

@@ -27,38 +27,38 @@ Then run
go get -u github.com/tendermint/abci/cmd/abci-cli
If there is an error, install and run the ``glide`` tool to pin the
If there is an error, install and run the `dep <https://github.com/golang/dep>`__ tool to pin the
dependencies:
::
go get github.com/Masterminds/glide
cd $GOPATH/src/github.com/tendermint/abci
glide install
go install ./cmd/abci-cli
make get_tools
make get_vendor_deps
make install
Now you should have the ``abci-cli`` installed; you'll see
a couple of commands (``counter`` and ``dummy``) that are
a couple of commands (``counter`` and ``kvstore``) that are
example applications written in Go. See below for an application
written in Javascript.
written in JavaScript.
Now, let's run some apps!
Dummy - A First Example
-----------------------
KVStore - A First Example
-------------------------
The dummy app is a `Merkle
The kvstore app is a `Merkle
tree <https://en.wikipedia.org/wiki/Merkle_tree>`__ that just stores all
transactions. If the transaction contains an ``=``, eg. ``key=value``,
transactions. If the transaction contains an ``=``, e.g. ``key=value``,
then the ``value`` is stored under the ``key`` in the Merkle tree.
Otherwise, the full transaction bytes are stored as the key and the
value.
Let's start a dummy application.
Let's start a kvstore application.
::
abci-cli dummy
abci-cli kvstore
In another terminal, we can start Tendermint. If you have never run
Tendermint before, use:
@@ -85,7 +85,7 @@ The ``-s`` just silences ``curl``. For nicer output, pipe the result
into a tool like `jq <https://stedolan.github.io/jq/>`__ or
`jsonpp <https://github.com/jmhodges/jsonpp>`__.
Now let's send some transactions to the dummy.
Now let's send some transactions to the kvstore.
::
@@ -147,7 +147,7 @@ The result should look like:
Note the ``value`` in the result (``61626364``); this is the
hex-encoding of the ASCII of ``abcd``. You can verify this in
a python shell by running ``"61626364".decode('hex')``. Stay
a python 2 shell by running ``"61626364".decode('hex')`` or in a python 3 shell by running ``import codecs; codecs.decode("61626364", 'hex').decode('ascii')``. Stay
tuned for a future release that `makes this output more human-readable <https://github.com/tendermint/abci/issues/32>`__.
Now let's try setting a different key and value:
@@ -192,7 +192,7 @@ In this instance of the counter app, with ``serial=on``, ``CheckTx``
only allows transactions whose integer is greater than the last
committed one.
Let's kill the previous instance of ``tendermint`` and the ``dummy``
Let's kill the previous instance of ``tendermint`` and the ``kvstore``
application, and start the counter app. We can enable ``serial=on`` with
a flag:
@@ -313,7 +313,7 @@ Neat, eh?
Basecoin - A More Interesting Example
-------------------------------------
We saved the best for last; the `Cosmos SDK <https://github.com/cosmos/cosmos-sdk>`__ is a general purpose framework for building cryptocurrencies. Unlike the ``dummy`` and ``counter``, which are strictly for example purposes. The reference implementation of Cosmos SDK is ``basecoin``, which demonstrates how to use the building blocks of the Cosmos SDK.
We saved the best for last; the `Cosmos SDK <https://github.com/cosmos/cosmos-sdk>`__ is a general purpose framework for building cryptocurrencies, unlike the ``kvstore`` and ``counter``, which are strictly for example purposes. The reference implementation of the Cosmos SDK is ``basecoin``, which demonstrates how to use the building blocks of the Cosmos SDK.
The default ``basecoin`` application is a multi-asset cryptocurrency
that supports inter-blockchain communication (IBC). For more details on how

View File

@@ -5,7 +5,7 @@ Walk through example
--------------------
We first create three connections (mempool, consensus and query) to the
application (locally running dummy in this case).
application (running ``kvstore`` locally in this case).
::

View File

@@ -53,6 +53,7 @@ Tendermint 102
:maxdepth: 2
abci-cli.rst
abci-spec.rst
app-architecture.rst
app-development.rst
how-to-read-logs.rst
@@ -64,10 +65,11 @@ Tendermint 201
:maxdepth: 2
specification.rst
determinism.rst
* For a deeper dive, see `this thesis <https://atrium.lib.uoguelph.ca/xmlui/handle/10214/9769>`__.
* There is also the `original whitepaper <https://tendermint.com/static/docs/tendermint.pdf>`__, though it is now quite outdated.
* Readers might also be interested in the `Cosmos Whitepaper <https://cosmos.network/whitepaper>`__ which describes Tendermint, ABCI, and how to build a scalable, heterogeneous, cryptocurrency network.
* For example applications and related software built by the Tendermint team and others, see the `software ecosystem <https://tendermint.com/ecosystem>`__.
Join the `Cosmos and Tendermint Rocket Chat <https://cosmos.rocket.chat>`__ to ask questions and discuss projects.
Join the `community <https://cosmos.network/community>`__ to ask questions and discuss projects.

View File

@@ -9,7 +9,7 @@ To download pre-built binaries, see the `Download page <https://tendermint.com/d
From Source
-----------
You'll need ``go``, maybe ``glide``, and the tendermint source code.
You'll need ``go``, maybe `dep <https://github.com/golang/dep>`__, and the Tendermint source code.
Install Go
^^^^^^^^^^
@@ -31,21 +31,21 @@ installation worked.
If the installation failed, a dependency may have been updated and become
incompatible with the latest Tendermint master branch. We solve this
using the ``glide`` tool for dependency management.
using the ``dep`` tool for dependency management.
First, install ``glide``:
First, install ``dep``:
::
go get github.com/Masterminds/glide
make get_tools
Now we can fetch the correct versions of each dependency by running:
::
cd $GOPATH/src/github.com/tendermint/tendermint
glide install
go install ./cmd/tendermint
make get_vendor_deps
make install
Note that even though ``go get`` originally failed, the repository was
still cloned to the correct location in the ``$GOPATH``.
@@ -60,7 +60,7 @@ If you already have Tendermint installed, and you make updates, simply
::
cd $GOPATH/src/github.com/tendermint/tendermint
go install ./cmd/tendermint
make install
To upgrade, there are a few options:
@@ -72,18 +72,18 @@ To upgrade, there are a few options:
its dependencies
- fetch and checkout the latest master branch in
``$GOPATH/src/github.com/tendermint/tendermint``, and then run
``glide install && go install ./cmd/tendermint`` as above.
``make get_vendor_deps && make install`` as above.
Note the first two options should usually work, but may fail. If they
do, use ``glide``, as above:
do, use ``dep``, as above:
::
cd $GOPATH/src/github.com/tendermint/tendermint
glide install
go install ./cmd/tendermint
make get_vendor_deps
make install
Since the third option just uses ``glide`` right away, it should always
Since the third option just uses ``dep`` right away, it should always
work.
Troubleshooting
@@ -96,8 +96,8 @@ If ``go get`` failing bothers you, fetch the code using ``git``:
mkdir -p $GOPATH/src/github.com/tendermint
git clone https://github.com/tendermint/tendermint $GOPATH/src/github.com/tendermint/tendermint
cd $GOPATH/src/github.com/tendermint/tendermint
glide install
go install ./cmd/tendermint
make get_vendor_deps
make install
Run
^^^
@@ -107,4 +107,4 @@ To start a one-node blockchain with a simple in-process application:
::
tendermint init
tendermint node --proxy_app=dummy
tendermint node --proxy_app=kvstore

View File

@@ -112,7 +112,7 @@ Motivation
Thus far, all blockchains "stacks" (such as `Bitcoin <https://github.com/bitcoin/bitcoin>`__) have had a monolithic design. That is, each blockchain stack is a single program that handles all the concerns of a decentralized ledger; this includes P2P connectivity, the "mempool" broadcasting of transactions, consensus on the most recent block, account balances, Turing-complete contracts, user-level permissions, etc.
Using a monolithic architecture is typically bad practice in computer science.
It makes it difficult to reuse components of the code, and attempts to do so result in complex maintanence procedures for forks of the codebase.
It makes it difficult to reuse components of the code, and attempts to do so result in complex maintenance procedures for forks of the codebase.
This is especially true when the codebase is not modular in design and suffers from "spaghetti code".
Another problem with monolithic design is that it limits you to the language of the blockchain stack (or vice versa). In the case of Ethereum which supports a Turing-complete bytecode virtual-machine, it limits you to languages that compile down to that bytecode; today, those are Serpent and Solidity.

View File

@@ -2,7 +2,7 @@
Specification
#############
Here you'll find details of the Tendermint specification. See `the spec repo <https://github.com/tendermint/spec>`__ for upcoming material. Tendermint's types are produced by `godoc <https://godoc.org/github.com/tendermint/tendermint/types>`__.
Here you'll find details of the Tendermint specification. Tendermint's types are produced by `godoc <https://godoc.org/github.com/tendermint/tendermint/types>`__.
.. toctree::
:maxdepth: 2
@@ -10,6 +10,7 @@ Here you'll find details of the Tendermint specification. See `the spec repo <ht
specification/block-structure.rst
specification/byzantine-consensus-algorithm.rst
specification/configuration.rst
specification/corruption.rst
specification/fast-sync.rst
specification/genesis.rst
specification/light-client-protocol.rst

View File

@@ -329,11 +329,11 @@ collateral on all other forks. Clients should verify the signatures on
the reorg-proposal, verify any evidence, and make a judgement or prompt
the end-user for a decision. For example, a phone wallet app may prompt
the user with a security warning, while a refrigerator may accept any
reorg-proposal signed by +½ of the original validators.
reorg-proposal signed by +1/2 of the original validators.
No non-synchronous Byzantine fault-tolerant algorithm can come to
consensus when ⅓+ of validators are dishonest, yet a fork assumes that
⅓+ of validators have already been dishonest by double-signing or
consensus when 1/3+ of validators are dishonest, yet a fork assumes that
1/3+ of validators have already been dishonest by double-signing or
lock-changing without justification. So, signing the reorg-proposal is a
coordination problem that cannot be solved by any non-synchronous
protocol (i.e. automatically, and without making assumptions about the

View File

@@ -1,58 +1,189 @@
Configuration
=============
TendermintCore can be configured via a TOML file in
``$TMHOME/config.toml``. Some of these parameters can be overridden by
command-line flags.
Tendermint Core can be configured via a TOML file in
``$TMHOME/config/config.toml``. Some of these parameters can be overridden by
command-line flags. For most users, the options in the ``##### main
base configuration options #####`` are intended to be modified, while
config options further below are intended for advanced power users.
Config parameters
~~~~~~~~~~~~~~~~~
Config options
~~~~~~~~~~~~~~
The main config parameters are defined
`here <https://github.com/tendermint/tendermint/blob/master/config/config.go>`__.
The default configuration file created by ``tendermint init`` has all
the parameters set with their default values. It will look something
like the file below; however, double-check by inspecting the
``config.toml`` created with your version of ``tendermint`` installed:
- ``abci``: ABCI transport (socket \| grpc). *Default*: ``socket``
- ``db_backend``: Database backend for the blockchain and
TendermintCore state. ``leveldb`` or ``memdb``. *Default*:
``"leveldb"``
- ``db_dir``: Database dir. *Default*: ``"$TMHOME/data"``
- ``fast_sync``: Whether to sync faster from the block pool. *Default*:
``true``
- ``genesis_file``: The location of the genesis file. *Default*:
``"$TMHOME/genesis.json"``
- ``log_level``: *Default*: ``"state:info,*:error"``
- ``moniker``: Name of this node. *Default*: the host name or ``"anonymous"``
if runtime fails to get the host name
- ``priv_validator_file``: Validator private key file. *Default*:
``"$TMHOME/priv_validator.json"``
- ``prof_laddr``: Profile listen address. *Default*: ``""``
- ``proxy_app``: The ABCI app endpoint. *Default*:
``"tcp://127.0.0.1:46658"``
::
- ``consensus.max_block_size_txs``: Maximum number of block txs.
*Default*: ``10000``
- ``consensus.create_empty_blocks``: Create empty blocks w/o txs.
*Default*: ``true``
- ``consensus.create_empty_blocks_interval``: Block creation interval, even if empty.
- ``consensus.timeout_*``: Various consensus timeout parameters
- ``consensus.wal_file``: Consensus state WAL. *Default*:
``"$TMHOME/data/cs.wal/wal"``
- ``consensus.wal_light``: Whether to use light-mode for Consensus
state WAL. *Default*: ``false``
# This is a TOML config file.
# For more information, see https://github.com/toml-lang/toml
- ``mempool.*``: Various mempool parameters
##### main base config options #####
- ``p2p.addr_book_file``: Peer address book. *Default*:
``"$TMHOME/addrbook.json"``. **NOT USED**
- ``p2p.laddr``: Node listen address. (0.0.0.0:0 means any interface,
any port). *Default*: ``"0.0.0.0:46656"``
- ``p2p.pex``: Enable Peer-Exchange (dev feature). *Default*: ``false``
- ``p2p.seeds``: Comma delimited host:port seed nodes. *Default*:
``""``
- ``p2p.skip_upnp``: Skip UPNP detection. *Default*: ``false``
# TCP or UNIX socket address of the ABCI application,
# or the name of an ABCI application compiled in with the Tendermint binary
proxy_app = "tcp://127.0.0.1:46658"
- ``rpc.grpc_laddr``: GRPC listen address (BroadcastTx only). Port
required. *Default*: ``""``
- ``rpc.laddr``: RPC listen address. Port required. *Default*:
``"0.0.0.0:46657"``
- ``rpc.unsafe``: Enabled unsafe rpc methods. *Default*: ``true``
# A custom human readable name for this node
moniker = "anonymous"
# If this node is many blocks behind the tip of the chain, FastSync
# allows them to catchup quickly by downloading blocks in parallel
# and verifying their commits
fast_sync = true
# Database backend: leveldb | memdb
db_backend = "leveldb"
# Database directory
db_path = "data"
# Output level for logging
log_level = "state:info,*:error"
##### additional base config options #####
# The ID of the chain to join (should be signed with every transaction and vote)
chain_id = ""
# Path to the JSON file containing the initial validator set and other meta data
genesis_file = "genesis.json"
# Path to the JSON file containing the private key to use as a validator in the consensus protocol
priv_validator_file = "priv_validator.json"
# Mechanism to connect to the ABCI application: socket | grpc
abci = "socket"
# TCP or UNIX socket address for the profiling server to listen on
prof_laddr = ""
# If true, query the ABCI app on connecting to a new peer
# so the app can decide if we should keep the connection or not
filter_peers = false
##### advanced configuration options #####
##### rpc server configuration options #####
[rpc]
# TCP or UNIX socket address for the RPC server to listen on
laddr = "tcp://0.0.0.0:46657"
# TCP or UNIX socket address for the gRPC server to listen on
# NOTE: This server only supports /broadcast_tx_commit
grpc_laddr = ""
# Activate unsafe RPC commands like /dial_seeds and /unsafe_flush_mempool
unsafe = false
##### peer to peer configuration options #####
[p2p]
# Address to listen for incoming connections
laddr = "tcp://0.0.0.0:46656"
# Comma separated list of seed nodes to connect to
seeds = ""
# Comma separated list of nodes to keep persistent connections to
# Do not add private peers to this list if you don't want them advertised
persistent_peers = ""
# Path to address book
addr_book_file = "addrbook.json"
# Set true for strict address routability rules
addr_book_strict = true
# Time to wait before flushing messages out on the connection, in ms
flush_throttle_timeout = 100
# Maximum number of peers to connect to
max_num_peers = 50
# Maximum size of a message packet payload, in bytes
max_msg_packet_payload_size = 1024
# Rate at which packets can be sent, in bytes/second
send_rate = 512000
# Rate at which packets can be received, in bytes/second
recv_rate = 512000
# Set true to enable the peer-exchange reactor
pex = true
# Seed mode, in which node constantly crawls the network and looks for
# peers. If another node asks it for addresses, it responds and disconnects.
#
# Does not work if the peer-exchange reactor is disabled.
seed_mode = false
# Authenticated encryption
auth_enc = true
# Comma separated list of peer IDs to keep private (will not be gossiped to other peers)
private_peer_ids = ""
##### mempool configuration options #####
[mempool]
recheck = true
recheck_empty = true
broadcast = true
wal_dir = "data/mempool.wal"
##### consensus configuration options #####
[consensus]
wal_file = "data/cs.wal/wal"
wal_light = false
# All timeouts are in milliseconds
timeout_propose = 3000
timeout_propose_delta = 500
timeout_prevote = 1000
timeout_prevote_delta = 500
timeout_precommit = 1000
timeout_precommit_delta = 500
timeout_commit = 1000
# Make progress as soon as we have all the precommits (as if TimeoutCommit = 0)
skip_timeout_commit = false
# BlockSize
max_block_size_txs = 10000
max_block_size_bytes = 1
# EmptyBlocks mode and possible interval between empty blocks in seconds
create_empty_blocks = true
create_empty_blocks_interval = 0
# Reactor sleep duration parameters are in milliseconds
peer_gossip_sleep_duration = 100
peer_query_maj23_sleep_duration = 2000
##### transactions indexer configuration options #####
[tx_index]
# What indexer to use for transactions
#
# Options:
# 1) "null" (default)
# 2) "kv" - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
indexer = "{{ .TxIndex.Indexer }}"
# Comma-separated list of tags to index (by default the only tag is tx hash)
#
# It's recommended to index only a subset of tags due to possible memory
# bloat. This, of course, depends on the indexer's DB and the volume of
# transactions.
index_tags = "{{ .TxIndex.IndexTags }}"
# When set to true, tells the indexer to index all tags. Note this may not be
# desirable (see the comment above). IndexTags has precedence over
# IndexAllTags (i.e. when both are given, IndexTags will be indexed).
index_all_tags = {{ .TxIndex.IndexAllTags }}


@@ -0,0 +1,69 @@
Corruption
==========
Important step
--------------
Make sure you have a backup of the Tendermint data directory.
Possible causes
---------------
Remember that most corruption is caused by hardware issues:
- RAID controllers with faulty / worn out battery backup, and an unexpected power loss
- Hard disk drives with write-back cache enabled, and an unexpected power loss
- Cheap SSDs with insufficient power-loss protection, and an unexpected power-loss
- Defective RAM
- Defective or overheating CPU(s)
Other causes can be:
- Database systems configured with fsync=off and an OS crash or power loss
- Filesystems configured to use write barriers plus a storage layer that ignores write barriers. LVM is a particular culprit.
- Tendermint bugs
- Operating system bugs
- Admin error
- directly modifying Tendermint data-directory contents
(Source: https://wiki.postgresql.org/wiki/Corruption)
WAL Corruption
--------------
If the consensus WAL is corrupted at the latest height and you are trying to start
Tendermint, replay will fail with a panic.
Recovering from data corruption can be hard and time-consuming. Here are two approaches you can take:
1) Delete the WAL file and restart Tendermint. It will attempt to sync with other peers.
2) Try to repair the WAL file manually:
1. Create a backup of the corrupted WAL file:
.. code:: bash
cp "$TMHOME/data/cs.wal/wal" > /tmp/corrupted_wal_backup
2. Use ./scripts/wal2json to create a human-readable version
.. code:: bash
./scripts/wal2json/wal2json "$TMHOME/data/cs.wal/wal" > /tmp/corrupted_wal
3. Search for a "CORRUPTED MESSAGE" line.
4. By looking at the previous message, the message after the corrupted one,
and the logs, try to rebuild the message. If the subsequent messages are
also marked as corrupted (this may happen if the length header got
corrupted or some writes did not make it to the WAL, i.e. truncation),
then remove all the lines starting from the corrupted one and restart
Tendermint.
.. code:: bash
$EDITOR /tmp/corrupted_wal
5. After editing, convert this file back into binary form by running:
.. code:: bash
./scripts/json2wal/json2wal /tmp/corrupted_wal > "$TMHOME/data/cs.wal/wal"


@@ -1,16 +1,10 @@
Genesis
=======
The genesis.json file in ``$TMHOME`` defines the initial TendermintCore
The genesis.json file in ``$TMHOME/config`` defines the initial TendermintCore
state upon genesis of the blockchain (`see
definition <https://github.com/tendermint/tendermint/blob/master/types/genesis.go>`__).
NOTE: This does not (yet) specify the application state (e.g. initial
distribution of tokens). Currently we leave it up to the application to
load the initial application genesis state. In the future, we may
include genesis SetOption messages that get passed from TendermintCore
to the app upon genesis.
Fields
~~~~~~
@@ -26,6 +20,7 @@ Fields
- ``app_hash``: The expected application hash (as returned by the
``Commit`` ABCI message) upon genesis. If the app's hash does not
match, a warning message is printed.
- ``app_state``: The application state (e.g. initial distribution of tokens).
Sample genesis.json
~~~~~~~~~~~~~~~~~~~
@@ -69,5 +64,8 @@ Sample genesis.json
"name": "mach4"
}
],
"app_hash": "15005165891224E721CB664D15CB972240F5703F"
"app_hash": "15005165891224E721CB664D15CB972240F5703F",
"app_state": {
{"account": "Bob", "coins": 5000}
}
}


@@ -5,8 +5,8 @@ Light clients are an important part of the complete blockchain system
for most applications. Tendermint provides unique speed and security
properties for light client applications.
See our developing `light-client
repository <https://github.com/tendermint/light-client>`__.
See our `lite package
<https://godoc.org/github.com/tendermint/tendermint/lite>`__.
Overview
--------


@@ -0,0 +1,71 @@
# Tendermint Specification
This is a markdown specification of the Tendermint blockchain.
It defines the base data structures, how they are validated,
and how they are communicated over the network.
If you find discrepancies between the spec and the code that
do not have an associated issue or pull request on github,
please submit them to our [bug bounty](https://tendermint.com/security)!
## Contents
### Data Structures
- [Overview](#overview)
- [Encoding and Digests](encoding.md)
- [Blockchain](blockchain.md)
- [State](state.md)
### P2P and Network Protocols
- [The Base P2P Layer](p2p): multiplex the protocols ("reactors") on authenticated and encrypted TCP connections
- [Peer Exchange (PEX)](reactors/pex): gossip known peer addresses so peers can find each other
- [Block Sync](reactors/block_sync): gossip blocks so peers can catch up quickly
- [Consensus](reactors/consensus): gossip votes and block parts so new blocks can be committed
- [Mempool](reactors/mempool): gossip transactions so they get included in blocks
- Evidence: TODO
### More
- Light Client: TODO
- Persistence: TODO
## Overview
Tendermint provides Byzantine Fault Tolerant State Machine Replication using
hash-linked batches of transactions. Such transaction batches are called "blocks".
Hence, Tendermint defines a "blockchain".
Each block in Tendermint has a unique index - its Height.
A block at `Height == H` can only be committed *after* the
block at `Height == H-1`.
Each block is committed by a known set of weighted Validators.
Membership and weighting within this set may change over time.
Tendermint guarantees the safety and liveness of the blockchain
so long as less than 1/3 of the total weight of the Validator set
is malicious or faulty.
A commit in Tendermint is a set of signed messages from more than 2/3 of
the total weight of the current Validator set. Validators take turns proposing
blocks and voting on them. Once enough votes are received, the block is considered
committed. These votes are included in the *next* block as proof that the previous block
was committed - they cannot be included in the current block, as that block has already been
created.
Once a block is committed, it can be executed against an application.
The application returns results for each of the transactions in the block.
The application can also return changes to be made to the validator set,
as well as a cryptographic digest of its latest state.
Tendermint is designed to enable efficient verification and authentication
of the latest state of the blockchain. To achieve this, it embeds
cryptographic commitments to certain information in the block "header".
This information includes the contents of the block (eg. the transactions),
the validator set committing the block, as well as the various results returned by the application.
Note, however, that block execution only occurs *after* a block is committed.
Thus, application results can only be included in the *next* block.
Also note that information like the transaction results and the validator set are never
directly included in the block - only their cryptographic digests (Merkle roots) are.
Hence, verification of a block requires a separate data structure to store this information.
We call this the `State`. Block verification also requires access to the previous block.


@@ -0,0 +1,56 @@
# BFT time in Tendermint
Tendermint provides a deterministic, Byzantine fault-tolerant, source of time.
Time in Tendermint is defined with the Time field of the block header.
It satisfies the following properties:
- Time Monotonicity: Time is monotonically increasing, i.e., given
a header H1 for height h1 and a header H2 for height `h2 = h1 + 1`, `H1.Time < H2.Time`.
- Time Validity: Given a set of Commit votes that forms the `block.LastCommit` field, a range of
valid values for the Time field of the block header is defined only by
Precommit messages (from the LastCommit field) sent by correct processes, i.e.,
a faulty process cannot arbitrarily increase the Time value.
In the context of Tendermint, time is of type int64 and denotes UNIX time in milliseconds, i.e.,
the number of milliseconds since January 1, 1970. Before defining the rules that the
Tendermint consensus protocol needs to enforce so that the properties above hold, we introduce the following definition:
- the median of a set of `Vote` messages is the median of the `Vote.Time` fields of those messages,
where each `Vote.Time` value is counted a number of times equal to the voting power of the process that cast the vote.
Since voting power in Tendermint is not uniform (it is not one process, one vote), a vote message effectively aggregates
as many identical votes as the voting power of the process that cast it.
Let's consider the following example:
- we have four processes p1, p2, p3 and p4, with the following voting power distribution (p1, 23), (p2, 27), (p3, 10)
and (p4, 10). The total voting power is 70 (`N = 3f+1`, where `N` is the total voting power, and `f` is the maximum voting
power of the faulty processes), so we assume that the faulty processes have at most 23 voting power in total.
Furthermore, we have the following vote messages in some LastCommit field (we ignore all fields except Time field):
- (p1, 100), (p2, 98), (p3, 1000), (p4, 500). We assume that p3 and p4 are faulty processes. Let's assume that the
`block.LastCommit` message contains votes of processes p2, p3 and p4. Median is then chosen the following way:
the value 98 is counted 27 times, the value 1000 is counted 10 times and the value 500 is counted also 10 times.
So the median value will be the value 98. No matter what set of messages with at least `2f+1` voting power we
choose, the median value will always be between the values sent by correct processes.
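To make the weighted median concrete, here is a minimal Go sketch; the `weightedTime` type and its field names are ours, not part of the Tendermint code.

```go
import "sort"

// weightedTime pairs a vote's timestamp with the voting power of the
// validator that cast it. Field names are illustrative.
type weightedTime struct {
	Time  int64 // Vote.Time: Unix time in milliseconds
	Power int64 // voting power of the validator that cast the vote
}

// medianTime returns the median of the multiset in which every Time
// appears with multiplicity equal to its Power.
func medianTime(votes []weightedTime) int64 {
	sort.Slice(votes, func(i, j int) bool { return votes[i].Time < votes[j].Time })
	var totalPower int64
	for _, v := range votes {
		totalPower += v.Power
	}
	half := (totalPower + 1) / 2 // position of the median element
	var seen int64
	for _, v := range votes {
		seen += v.Power
		if seen >= half {
			return v.Time
		}
	}
	return 0 // only reached for an empty or zero-power input
}
```

For the LastCommit above - (98, 27), (500, 10), (1000, 10) - this returns 98, the value chosen in the text.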
We ensure Time Monotonicity and Time Validity properties by the following rules:
- let rs denote the `RoundState` (consensus internal state) of some process. Then
`rs.ProposalBlock.Header.Time == median(rs.LastCommit) &&
rs.Proposal.Timestamp == rs.ProposalBlock.Header.Time`.
- Furthermore, when creating the `vote` message, the following rules for determining `vote.Time` field should hold:
- if `rs.Proposal` is defined then
`vote.Time = max(rs.Proposal.Timestamp + 1, time.Now())`, where `time.Now()`
denotes local Unix time in milliseconds.
- if `rs.Proposal` is not defined and `rs.Votes` contains +2/3 of the corresponding vote messages (votes for the
current height and round, and with the corresponding type (`Prevote` or `Precommit`)), then
`vote.Time = max(median(getVotes(rs.Votes, vote.Height, vote.Round, vote.Type)), time.Now())`,
where `getVotes` function returns the votes for particular `Height`, `Round` and `Type`.
The second rule is relevant for the case when a process jumps to a higher round upon receiving +2/3 votes for a higher
round, but the corresponding `Proposal` message for the higher round hasn't been received yet.


@@ -0,0 +1,424 @@
# Tendermint Blockchain
Here we describe the data structures in the Tendermint blockchain and the rules for validating them.
## Data Structures
The Tendermint blockchain consists of a short list of basic data types:
- `Block`
- `Header`
- `Vote`
- `BlockID`
- `Signature`
- `Evidence`
## Block
A block consists of a header, a list of transactions, a list of votes (the commit),
and a list of evidence of malfeasance (ie. signing conflicting votes).
```go
type Block struct {
Header Header
Txs [][]byte
LastCommit []Vote
Evidence []Evidence
}
```
## Header
A block header contains metadata about the block and about the consensus, as well as commitments to
the data in the current block, the previous block, and the results returned by the application:
```go
type Header struct {
// block metadata
Version string // Version string
ChainID string // ID of the chain
Height int64 // Current block height
Time int64 // UNIX time, in milliseconds
// current block
NumTxs int64 // Number of txs in this block
TxHash []byte // SimpleMerkle of the block.Txs
LastCommitHash []byte // SimpleMerkle of the block.LastCommit
// previous block
TotalTxs int64 // prevBlock.TotalTxs + block.NumTxs
LastBlockID BlockID // BlockID of prevBlock
// application
ResultsHash []byte // SimpleMerkle of []abci.Result from prevBlock
AppHash []byte // Arbitrary state digest
ValidatorsHash []byte // SimpleMerkle of the ValidatorSet
ConsensusParamsHash []byte // SimpleMerkle of the ConsensusParams
// consensus
Proposer []byte // Address of the block proposer
EvidenceHash []byte // SimpleMerkle of []Evidence
}
```
Further details on each of these fields are given below.
## BlockID
The `BlockID` contains two distinct Merkle roots of the block.
The first, used as the block's main hash, is the Merkle root
of all the fields in the header. The second, used for secure gossipping of
the block during consensus, is the Merkle root of the complete serialized block
cut into parts. The `BlockID` includes these two hashes, as well as the number of
parts.
```go
type BlockID struct {
Hash []byte
Parts PartsHeader
}
type PartsHeader struct {
Hash []byte
Total int32
}
```
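As a rough illustration of how the two hashes relate to the block, a `BlockID` could be assembled as in the sketch below, using `MakeParts` and `SimpleMerkleRoot` from the [encoding spec](encoding.md) and abusing notation for struct arguments the same way the Validation section does; `makeBlockID` itself is not part of the spec.

```go
// makeBlockID is a sketch: the main hash commits to the header, while
// Parts commits to the serialized block cut into partSize-byte parts.
func makeBlockID(block Block, partSize int) BlockID {
	parts := MakeParts(block, partSize) // serialize the block and cut it into parts
	return BlockID{
		Hash: SimpleMerkleRoot(block.Header), // abusing notation for struct arguments
		Parts: PartsHeader{
			Hash:  SimpleMerkleRoot(parts),
			Total: int32(len(parts)),
		},
	}
}
```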
## Vote
A vote is a signed message from a validator for a particular block.
The vote includes information about the validator signing it.
```go
type Vote struct {
Timestamp int64
Address []byte
Index int
Height int64
Round int
Type int8
BlockID BlockID
Signature Signature
}
```
There are two types of votes:
a *prevote* has `vote.Type == 1` and
a *precommit* has `vote.Type == 2`.
## Signature
Tendermint allows for multiple signature schemes to be used by prepending a single type-byte
to the signature bytes. Different signatures may also come with fixed or variable lengths.
Currently, Tendermint supports Ed25519 and Secp256k1.
### ED25519
An ED25519 signature has `Type == 0x1`. It looks like:
```go
// Implements Signature
type Ed25519Signature struct {
Type int8 = 0x1
Signature [64]byte
}
```
where `Signature` is the 64 byte signature.
### Secp256k1
A `Secp256k1` signature has `Type == 0x2`. It looks like:
```go
// Implements Signature
type Secp256k1Signature struct {
Type int8 = 0x2
Signature []byte
}
```
where `Signature` is the DER encoded signature, ie:
```hex
0x30 <length of whole message> <0x02> <length of R> <R> 0x2 <length of S> <S>.
```
## Evidence
TODO
## Validation
Here we describe the validation rules for every element in a block.
Blocks which do not satisfy these rules are considered invalid.
We abuse notation by using something that looks like Go, supplemented with English.
A statement such as `x == y` is an assertion - if it fails, the item is invalid.
We refer to certain globally available objects:
`block` is the block under consideration,
`prevBlock` is the `block` at the previous height,
and `state` keeps track of the validator set, the consensus parameters
and other results from the application.
Elements of an object are accessed as expected,
ie. `block.Header`. See [here](state.md) for the definition of `state`.
### Header
A Header is valid if its corresponding fields are valid.
### Version
Arbitrary string.
### ChainID
Arbitrary constant string.
### Height
```go
block.Header.Height > 0
block.Header.Height == prevBlock.Header.Height + 1
```
The height is an incrementing integer. The first block has `block.Header.Height == 1`.
### Time
The median of the timestamps of the valid votes in the block.LastCommit.
Corresponds to the number of nanoseconds, with millisecond resolution, since January 1, 1970.
Note: the timestamp of a vote must be greater by at least one millisecond than that of the
block being voted on.
### NumTxs
```go
block.Header.NumTxs == len(block.Txs)
```
Number of transactions included in the block.
### TxHash
```go
block.Header.TxHash == SimpleMerkleRoot(block.Txs)
```
Simple Merkle root of the transactions in the block.
### LastCommitHash
```go
block.Header.LastCommitHash == SimpleMerkleRoot(block.LastCommit)
```
Simple Merkle root of the votes included in the block.
These are the votes that committed the previous block.
The first block has `block.Header.LastCommitHash == []byte{}`
### TotalTxs
```go
block.Header.TotalTxs == prevBlock.Header.TotalTxs + block.Header.NumTxs
```
The cumulative sum of all transactions included in this blockchain.
The first block has `block.Header.TotalTxs == block.Header.NumTxs`.
### LastBlockID
LastBlockID is the previous block's BlockID:
```go
prevBlockParts := MakeParts(prevBlock, state.LastConsensusParams.BlockGossip.BlockPartSize)
block.Header.LastBlockID == BlockID {
Hash: SimpleMerkleRoot(prevBlock.Header),
PartsHeader{
Hash: SimpleMerkleRoot(prevBlockParts),
Total: len(prevBlockParts),
},
}
```
Note: it depends on the ConsensusParams,
which are held in the `state` and may be updated by the application.
The first block has `block.Header.LastBlockID == BlockID{}`.
### ResultsHash
```go
block.ResultsHash == SimpleMerkleRoot(state.LastResults)
```
Simple Merkle root of the results of the transactions in the previous block.
The first block has `block.Header.ResultsHash == []byte{}`.
### AppHash
```go
block.AppHash == state.AppHash
```
Arbitrary byte array returned by the application after executing and committing the previous block.
The first block has `block.Header.AppHash == []byte{}`.
### ValidatorsHash
```go
block.ValidatorsHash == SimpleMerkleRoot(state.Validators)
```
Simple Merkle root of the current validator set that is committing the block.
This can be used to validate the `LastCommit` included in the next block.
May be updated by the application.
### ConsensusParamsHash
```go
block.ConsensusParamsHash == SimpleMerkleRoot(state.ConsensusParams)
```
Simple Merkle root of the consensus parameters.
May be updated by the application.
### Proposer
```go
block.Header.Proposer in state.Validators
```
Original proposer of the block. Must be a current validator.
NOTE: we also need to track the round.
### EvidenceHash
```go
block.EvidenceHash == SimpleMerkleRoot(block.Evidence)
```
Simple Merkle root of the evidence of Byzantine behaviour included in this block.
## Txs
Arbitrary length array of arbitrary length byte-arrays.
## LastCommit
The first height is an exception - it requires the LastCommit to be empty:
```go
if block.Header.Height == 1 {
len(block.LastCommit) == 0
}
```
Otherwise, we require:
```go
len(block.LastCommit) == len(state.LastValidators)
talliedVotingPower := 0
for i, vote := range block.LastCommit{
if vote == nil{
continue
}
vote.Type == 2
vote.Height == block.LastCommit.Height()
vote.Round == block.LastCommit.Round()
vote.BlockID == block.LastBlockID
val := state.LastValidators[i]
vote.Verify(block.ChainID, val.PubKey) == true
talliedVotingPower += val.VotingPower
}
talliedVotingPower > (2/3) * TotalVotingPower(state.LastValidators)
```
Includes one (possibly nil) vote for every current validator.
Non-nil votes must be Precommits.
All votes must be for the same height and round.
All votes must be for the previous block.
All votes must have a valid signature from the corresponding validator.
The sum total of the voting power of the validators that voted
must be greater than 2/3 of the total voting power of the complete validator set.
### Vote
A vote is a signed message broadcast in the consensus for a particular block at a particular height and round.
When stored in the blockchain or propagated over the network, votes are encoded in TMBIN.
For signing, votes are encoded in JSON, and the ChainID is included, in the form of the `CanonicalSignBytes`.
We define a method `Verify` that returns `true` if the signature verifies against the pubkey for the CanonicalSignBytes
using the given ChainID:
```go
func (v Vote) Verify(chainID string, pubKey PubKey) bool {
return pubKey.Verify(v.Signature, CanonicalSignBytes(chainID, v))
}
```
where `pubKey.Verify` performs the appropriate digital signature verification of the `pubKey`
against the given signature and message bytes.
## Evidence
TODO
```
TODO
```
Every piece of evidence contains two conflicting votes from a single validator that
was active at the height indicated in the votes.
The votes must not be too old.
# Execution
Once a block is validated, it can be executed against the state.
The state follows this recursive equation:
```go
state(1) = InitialState
state(h+1) <- Execute(state(h), ABCIApp, block(h))
```
where `InitialState` includes the initial consensus parameters and validator set,
and `ABCIApp` is an ABCI application that can return results and changes to the validator
set (TODO). Execute is defined as:
```go
Execute(state State, app ABCIApp, block Block) State {
    // TODO: just spell out ApplyBlock here and remove the ABCIResponses struct.
    abciResponses := app.ApplyBlock(block)

    return State{
        LastResults:     abciResponses.DeliverTxResults,
        AppHash:         abciResponses.AppHash,
        Validators:      UpdateValidators(state.Validators, abciResponses.ValidatorChanges),
        LastValidators:  state.Validators,
        ConsensusParams: UpdateConsensusParams(state.ConsensusParams, abciResponses.ConsensusParamChanges),
    }
}

type ABCIResponses struct {
    DeliverTxResults      []Result
    ValidatorChanges      []Validator
    ConsensusParamChanges ConsensusParams
    AppHash               []byte
}
```


@@ -0,0 +1,206 @@
# Tendermint Encoding
## Binary Serialization (TMBIN)
Tendermint aims to encode data structures in a manner similar to how the corresponding Go structs
are laid out in memory.
Variable length items are length-prefixed.
While the encoding was inspired by Go, it is easily implemented in other languages as well, given its intuitive design.
XXX: This is changing to use real varints and 4-byte-prefixes.
See https://github.com/tendermint/go-wire/tree/sdk2.
### Fixed Length Integers
Fixed length integers are encoded in Big-Endian using the specified number of bytes.
So `uint8` and `int8` use one byte, `uint16` and `int16` use two bytes,
`uint32` and `int32` use four bytes, and `uint64` and `int64` use eight bytes.
Negative integers are encoded via two's complement.
Examples:
```go
encode(uint8(6)) == [0x06]
encode(uint32(6)) == [0x00, 0x00, 0x00, 0x06]
encode(int8(-6)) == [0xFA]
encode(int32(-6)) == [0xFF, 0xFF, 0xFF, 0xFA]
```
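A minimal Go sketch of this fixed-length encoding using the standard library (the function name is ours); the same pattern applies to the other widths:

```go
import "encoding/binary"

// encodeInt32 writes a 32-bit integer as exactly four big-endian bytes;
// negative values come out in two's complement, e.g. -6 -> FF FF FF FA.
func encodeInt32(x int32) []byte {
	buf := make([]byte, 4)
	binary.BigEndian.PutUint32(buf, uint32(x))
	return buf
}
```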
### Variable Length Integers
Variable length integers are encoded as length-prefixed Big-Endian integers.
The length-prefix consists of a single byte and corresponds to the length of the encoded integer.
Negative integers are encoded by flipping the leading bit of the length-prefix to a `1`.
Zero is encoded as `0x00`. It is not length-prefixed.
Examples:
```go
encode(uint(6)) == [0x01, 0x06]
encode(uint(70000)) == [0x03, 0x01, 0x11, 0x70]
encode(int(-6)) == [0xF1, 0x06]
encode(int(-70000)) == [0xF3, 0x01, 0x11, 0x70]
encode(int(0)) == [0x00]
```
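The following Go sketch reproduces the examples above; it records the sign by setting the high nibble of the length prefix, as the negative examples show, and is illustrative rather than the go-wire implementation:

```go
import "encoding/binary"

// encodeVarint length-prefixes the big-endian bytes of the magnitude.
// Zero is the single byte 0x00; for negative values the high nibble of
// the prefix is set, matching the examples above (e.g. -6 -> F1 06).
func encodeVarint(i int64) []byte {
	if i == 0 {
		return []byte{0x00}
	}
	neg := i < 0
	u := uint64(i)
	if neg {
		u = uint64(-i)
	}
	var buf [8]byte
	binary.BigEndian.PutUint64(buf[:], u)
	start := 0
	for start < 7 && buf[start] == 0 { // strip leading zero bytes
		start++
	}
	body := buf[start:]
	prefix := byte(len(body))
	if neg {
		prefix |= 0xF0
	}
	return append([]byte{prefix}, body...)
}
```

For example, `encodeVarint(70000)` yields `[0x03, 0x01, 0x11, 0x70]` and `encodeVarint(-6)` yields `[0xF1, 0x06]`, matching the tables above.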
### Strings
An encoded string is length-prefixed followed by the underlying bytes of the string.
The length-prefix is itself encoded as an `int`.
The empty string is encoded as `0x00`. It is not length-prefixed.
Examples:
```go
encode("") == [0x00]
encode("a") == [0x01, 0x01, 0x61]
encode("hello") == [0x01, 0x05, 0x68, 0x65, 0x6C, 0x6C, 0x6F]
encode("¥") == [0x01, 0x02, 0xC2, 0xA5]
```
### Arrays (fixed length)
An encoded fixed-length array is the concatenation of the encodings of its elements.
There is no length-prefix.
Examples:
```go
encode([4]int8{1, 2, 3, 4}) == [0x01, 0x02, 0x03, 0x04]
encode([4]int16{1, 2, 3, 4}) == [0x00, 0x01, 0x00, 0x02, 0x00, 0x03, 0x00, 0x04]
encode([4]int{1, 2, 3, 4}) == [0x01, 0x01, 0x01, 0x02, 0x01, 0x03, 0x01, 0x04]
encode([2]string{"abc", "efg"}) == [0x01, 0x03, 0x61, 0x62, 0x63, 0x01, 0x03, 0x65, 0x66, 0x67]
```
### Slices (variable length)
An encoded variable-length array is length-prefixed followed by the concatenation of the encoding of
its elements.
The length-prefix is itself encoded as an `int`.
An empty slice is encoded as `0x00`. It is not length-prefixed.
Examples:
```go
encode([]int8{}) == [0x00]
encode([]int8{1, 2, 3, 4}) == [0x01, 0x04, 0x01, 0x02, 0x03, 0x04]
encode([]int16{1, 2, 3, 4}) == [0x01, 0x04, 0x00, 0x01, 0x00, 0x02, 0x00, 0x03, 0x00, 0x04]
encode([]int{1, 2, 3, 4}) == [0x01, 0x04, 0x01, 0x01, 0x01, 0x02, 0x01, 0x03, 0x01, 0x04]
encode([]string{"abc", "efg"}) == [0x01, 0x02, 0x01, 0x03, 0x61, 0x62, 0x63, 0x01, 0x03, 0x65, 0x66, 0x67]
```
### BitArray
A BitArray is encoded as an `int` for the number of bits, followed by an array of `uint64`
encoding the value of each array element.
```go
type BitArray struct {
Bits int
Elems []uint64
}
```
### Time
Time is encoded as an `int64` of the number of nanoseconds since January 1, 1970,
rounded to the nearest millisecond.
Times before then are invalid.
Examples:
```go
encode(time.Time("Jan 1 00:00:00 UTC 1970")) == [0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00]
encode(time.Time("Jan 1 00:00:01 UTC 1970")) == [0x00, 0x00, 0x00, 0x00, 0x3B, 0x9A, 0xCA, 0x00] // 1,000,000,000 ns
encode(time.Time("Mon Jan 2 15:04:05 -0700 MST 2006")) == [0x0F, 0xC4, 0xBB, 0xC1, 0x53, 0x03, 0x12, 0x00]
```
### Structs
An encoded struct is the concatenation of the encoding of its elements.
There is no length-prefix.
Examples:
```go
type MyStruct struct{
A int
B string
C time.Time
}
encode(MyStruct{4, "hello", time.Time("Mon Jan 2 15:04:05 -0700 MST 2006")}) ==
[0x01, 0x04, 0x01, 0x05, 0x68, 0x65, 0x6C, 0x6C, 0x6F, 0x0F, 0xC4, 0xBB, 0xC1, 0x53, 0x03, 0x12, 0x00]
```
## Merkle Trees
Simple Merkle trees are used in numerous places in Tendermint to compute a cryptographic digest of a data structure.
RIPEMD160 is always used as the hashing function.
The function `SimpleMerkleRoot` is a simple recursive function defined as follows:
```go
func SimpleMerkleRoot(hashes [][]byte) []byte{
switch len(hashes) {
case 0:
return nil
case 1:
return hashes[0]
default:
left := SimpleMerkleRoot(hashes[:(len(hashes)+1)/2])
right := SimpleMerkleRoot(hashes[(len(hashes)+1)/2:])
return RIPEMD160(append(left, right...))
}
}
```
Note: we abuse notation and call `SimpleMerkleRoot` with arguments of type `struct` or type `[]struct`.
For `struct` arguments, we compute a `[][]byte` by sorting elements of the `struct` according to
field name and then hashing them.
For `[]struct` arguments, we compute a `[][]byte` by hashing the individual `struct` elements.
## JSON (TMJSON)
Signed messages (eg. votes, proposals) in the consensus are encoded in TMJSON, rather than TMBIN.
TMJSON is JSON where `[]byte` are encoded as uppercase hex, rather than base64.
When signing, the elements of a message are sorted by key and the sorted message is embedded in an
outer JSON that includes a `chain_id` field.
We call this encoding the CanonicalSignBytes. For instance, CanonicalSignBytes for a vote would look
like:
```json
{"chain_id":"my-chain-id","vote":{"block_id":{"hash":DEADBEEF,"parts":{"hash":BEEFDEAD,"total":3}},"height":3,"round":2,"timestamp":1234567890, "type":2}
```
Note how the fields within each level are sorted.
## Other
### MakeParts
Encode an object using TMBIN and slice it into parts.
```go
MakeParts(object, partSize)
```
### Part
```go
type Part struct {
Index int
Bytes []byte
Proof []byte
}
```
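A sketch of what `MakeParts` does, assuming `encode` is the TMBIN encoder described above; the per-part Merkle `Proof` is left out here:

```go
// MakeParts serializes an object with TMBIN and slices the resulting
// bytes into parts of at most partSize bytes each (Proof omitted).
func MakeParts(object interface{}, partSize int) []Part {
	data := encode(object) // assumed TMBIN encoder returning []byte
	var parts []Part
	for index := 0; len(data) > 0; index++ {
		n := partSize
		if n > len(data) {
			n = len(data)
		}
		parts = append(parts, Part{Index: index, Bytes: data[:n]})
		data = data[n:]
	}
	return parts
}
```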


@@ -0,0 +1,38 @@
# P2P Config
Here we describe configuration options around the Peer Exchange.
These can be set using flags or via the `$TMHOME/config/config.toml` file.
## Seed Mode
`--p2p.seed_mode`
The node operates in seed mode. In seed mode, a node continuously crawls the network for peers,
and upon incoming connection shares some peers and disconnects.
## Seeds
`--p2p.seeds "1.2.3.4:46656,2.3.4.5:4444"`
Dials these seeds when we need more peers. They should return a list of peers and then disconnect.
If we already have enough peers in the address book, we may never need to dial them.
## Persistent Peers
`--p2p.persistent_peers "1.2.3.4:46656,2.3.4.5:46656"`
Dial these peers and auto-redial them if the connection fails.
These are intended to be trusted persistent peers that can help
anchor us in the p2p network. The auto-redial uses exponential
backoff and will give up after a day of trying to connect.
**Note:** If `seeds` and `persistent_peers` intersect,
the user will be warned that seeds may auto-close connections
and that the node may not be able to keep the connection persistent.
## Private Persistent Peers
`--p2p.private_persistent_peers "1.2.3.4:46656,2.3.4.5:46656"`
These are persistent peers that we do not add to the address book or
gossip to other peers. They stay private to us.


@@ -0,0 +1,110 @@
# P2P Multiplex Connection
## MConnection
`MConnection` is a multiplex connection that supports multiple independent streams
with distinct quality of service guarantees atop a single TCP connection.
Each stream is known as a `Channel` and each `Channel` has a globally unique *byte id*.
Each `Channel` also has a relative priority that determines the quality of service
of the `Channel` compared to other `Channel`s.
The *byte id* and the relative priorities of each `Channel` are configured upon
initialization of the connection.
The `MConnection` supports three packet types:
- Ping
- Pong
- Msg
### Ping and Pong
The ping and pong messages consist of writing a single byte to the connection; 0x1 and 0x2, respectively.
When we haven't received any messages on an `MConnection` in time `pingTimeout`, we send a ping message.
When a ping is received on the `MConnection`, a pong is sent in response only if there are no other messages
to send and the peer has not sent us too many pings (TODO).
If a pong or message is not received in sufficient time after a ping, we disconnect from the peer.
### Msg
Messages in channels are chopped into smaller `msgPacket`s for multiplexing.
```
type msgPacket struct {
ChannelID byte
EOF byte // 1 means message ends here.
Bytes []byte
}
```
The `msgPacket` is serialized using [go-wire](https://github.com/tendermint/go-wire) and prefixed with 0x3.
The received `Bytes` of a sequential set of packets are appended together
until a packet with `EOF=1` is received, then the complete serialized message
is returned for processing by the `onReceive` function of the corresponding channel.
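The reassembly just described can be sketched as follows; `channelState` and its fields are placeholders rather than the actual implementation:

```go
// channelState is a placeholder for a channel's receive-side state:
// the bytes accumulated so far and the callback for complete messages.
type channelState struct {
	recving   []byte
	onReceive func(msgBytes []byte)
}

// recvPacket appends the packet's bytes and, on EOF, hands the complete
// serialized message to the channel's onReceive function.
func (ch *channelState) recvPacket(p msgPacket) {
	ch.recving = append(ch.recving, p.Bytes...)
	if p.EOF == 1 {
		msgBytes := ch.recving
		ch.recving = nil
		ch.onReceive(msgBytes)
	}
}
```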
### Multiplexing
Messages are sent from a single `sendRoutine`, which loops over a select statement and results in the sending
of a ping, a pong, or a batch of data messages. The batch of data messages may include messages from multiple channels.
Message bytes are queued for sending in their respective channel, with each channel holding one unsent message at a time.
Messages are chosen for a batch one at a time from the channel with the lowest ratio of recently sent bytes to channel priority.
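That selection rule can be sketched roughly as follows; the `sendChannel` type and its fields are illustrative only:

```go
// sendChannel is a placeholder for a channel's send-side state.
type sendChannel struct {
	priority     int64
	recentlySent int64 // bytes sent recently on this channel
	hasData      bool  // whether a message is queued for sending
}

// pickChannel returns the queued channel with the lowest ratio of
// recently sent bytes to priority, or nil if nothing is queued.
func pickChannel(channels []*sendChannel) *sendChannel {
	var best *sendChannel
	var bestRatio float64
	for _, ch := range channels {
		if !ch.hasData {
			continue
		}
		ratio := float64(ch.recentlySent) / float64(ch.priority)
		if best == nil || ratio < bestRatio {
			best, bestRatio = ch, ratio
		}
	}
	return best
}
```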
## Sending Messages
There are two methods for sending messages:
```go
func (m MConnection) Send(chID byte, msg interface{}) bool {}
func (m MConnection) TrySend(chID byte, msg interface{}) bool {}
```
`Send(chID, msg)` is a blocking call that waits until `msg` is successfully queued
for the channel with the given id byte `chID`. The message `msg` is serialized
using the `tendermint/wire` submodule's `WriteBinary()` reflection routine.
`TrySend(chID, msg)` is a nonblocking call that queues the message msg in the channel
with the given id byte chID if the queue is not full; otherwise it returns false immediately.
`Send()` and `TrySend()` are also exposed for each `Peer`.
## Peer
Each peer has one `MConnection` instance, and includes other information such as whether the connection
was outbound, whether the connection should be recreated if it closes, various identity information about the node,
and other higher level thread-safe data used by the reactors.
## Switch/Reactor
The `Switch` handles peer connections and exposes an API to receive incoming messages
on `Reactors`. Each `Reactor` is responsible for handling incoming messages of one
or more `Channels`. So while sending outgoing messages is typically performed on the peer,
incoming messages are received on the reactor.
```go
// Declare a MyReactor reactor that handles messages on MyChannelID.
type MyReactor struct{}
func (reactor MyReactor) GetChannels() []*ChannelDescriptor {
return []*ChannelDescriptor{&ChannelDescriptor{ID: MyChannelID, Priority: 1}}
}
func (reactor MyReactor) Receive(chID byte, peer *Peer, msgBytes []byte) {
r, n, err := bytes.NewBuffer(msgBytes), new(int64), new(error)
msgString := ReadString(r, n, err)
fmt.Println(msgString)
}
// Other Reactor methods omitted for brevity
...
sw := NewSwitch([]Reactor{MyReactor{}})
...
// Send a random message to all outbound connections
for _, peer := range sw.Peers().List() {
if peer.IsOutbound() {
peer.Send(MyChannelID, "Here's a random message")
}
}
```


@@ -0,0 +1,65 @@
# Tendermint Peer Discovery
A Tendermint P2P network has different kinds of nodes with different requirements for connectivity to one another.
This document describes what kind of nodes Tendermint should enable and how they should work.
## Seeds
Seeds are the first point of contact for a new node.
They return a list of known active peers and then disconnect.
Seeds should operate full nodes with the PEX reactor in a "crawler" mode
that continuously explores to validate the availability of peers.
Seeds should only respond with some top percentile of the best peers they know about.
See [reputation](TODO) for details on peer quality.
## New Full Node
A new node needs a few things to connect to the network:
- a list of seeds, which can be provided to Tendermint via config file or flags,
or hardcoded into the software by in-process apps
- a `ChainID`, also called `Network` at the p2p layer
- a recent block height, `H`, and hash, `HASH`, for the blockchain.
The values `H` and `HASH` must be received and corroborated by means external to Tendermint, and specific to the user - ie. via the user's trusted social consensus.
This requirement to validate `H` and `HASH` out-of-band and via social consensus
is the essential difference in security models between Proof-of-Work and Proof-of-Stake blockchains.
With the above, the node then queries some seeds for peers for its chain,
dials those peers, and runs the Tendermint protocols with those it successfully connects to.
When the peer catches up to height H, it ensures the block hash matches HASH.
If not, Tendermint will exit, and the user must try again - either they are connected
to bad peers or their social consensus is invalid.
## Restarted Full Node
A node checks its address book on startup and attempts to connect to peers from there.
If it can't connect to any peers after some time, it falls back to the seeds to find more.
Restarted full nodes can run the `blockchain` or `consensus` reactor protocols to sync up
to the latest state of the blockchain from wherever they were last.
In a Proof-of-Stake context, if they are sufficiently far behind (greater than the length
of the unbonding period), they will need to validate a recent `H` and `HASH` out-of-band again
so they know they have synced the correct chain.
## Validator Node
A validator node is a node that interfaces with a validator signing key.
These nodes require the highest security, and should not accept incoming connections.
They should maintain outgoing connections to a controlled set of "Sentry Nodes" that serve
as their proxy shield to the rest of the network.
Validators that know and trust each other can accept incoming connections from one another and maintain direct private connectivity via VPN.
## Sentry Node
Sentry nodes are guardians of a validator node and provide it access to the rest of the network.
They should be well connected to other full nodes on the network.
Sentry nodes may be dynamic, but should maintain persistent connections to some evolving random subset of each other.
They should always expect to have direct incoming connections from the validator node and its backup(s).
They do not report the validator node's address in the PEX and
they may be more strict about the quality of peers they keep.
Sentry nodes belonging to validators that trust each other may wish to maintain persistent connections via VPN with one another, but only report each other sparingly in the PEX.


@@ -0,0 +1,114 @@
# Tendermint Peers
This document explains how Tendermint Peers are identified and how they connect to one another.
For details on peer discovery, see the [peer exchange (PEX) reactor doc](pex.md).
## Peer Identity
Tendermint peers are expected to maintain long-term persistent identities in the form of a public key.
Each peer has an ID defined as `peer.ID == peer.PubKey.Address()`, where `Address` uses the scheme defined in go-crypto.
A single peer ID can have multiple IP addresses associated with it.
TODO: define how to deal with this.
When attempting to connect to a peer, we use the PeerURL: `<ID>@<IP>:<PORT>`.
We will attempt to connect to the peer at IP:PORT, and verify,
via authenticated encryption, that it is in possession of the private key
corresponding to `<ID>`. This prevents man-in-the-middle attacks on the peer layer.
Peers can also be connected to without specifying an ID, ie. just `<IP>:<PORT>`.
In this case, the peer must be authenticated out-of-band of Tendermint,
for instance via VPN.
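For illustration, splitting a PeerURL into its ID and dial address might look like the sketch below; `parsePeerURL` is our name and no validation is attempted:

```go
import "strings"

// parsePeerURL splits "<ID>@<IP>:<PORT>" into the ID and the dial
// address; with no "@" the ID is empty and the peer must be
// authenticated out-of-band, as described above.
func parsePeerURL(u string) (id, addr string) {
	if i := strings.Index(u, "@"); i >= 0 {
		return u[:i], u[i+1:]
	}
	return "", u
}
```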
## Connections
All p2p connections use TCP.
Upon establishing a successful TCP connection with a peer,
two handshakes are performed: one for authenticated encryption, and one for Tendermint versioning.
Both handshakes have configurable timeouts (they should complete quickly).
### Authenticated Encryption Handshake
Tendermint implements the Station-to-Station protocol
using ED25519 keys for Diffie-Hellman key-exchange and NACL SecretBox for encryption.
It goes as follows:
- generate an ephemeral ED25519 keypair
- send the ephemeral public key to the peer
- wait to receive the peer's ephemeral public key
- compute the Diffie-Hellman shared secret using the peer's ephemeral public key and our ephemeral private key
- generate two nonces to use for encryption (sending and receiving) as follows (see the sketch after this section):
- sort the ephemeral public keys in ascending order and concatenate them
- RIPEMD160 the result
- append 4 empty bytes (extending the hash to 24-bytes)
- the result is nonce1
- flip the last bit of nonce1 to get nonce2
- if we had the smaller ephemeral pubkey, use nonce1 for receiving, nonce2 for sending;
else the opposite
- all communications from now on are encrypted using the shared secret and the nonces, where each nonce
increments by 2 every time it is used
- we now have an encrypted channel, but still need to authenticate
- generate a common challenge to sign:
- SHA256 of the sorted (lowest first) and concatenated ephemeral pub keys
- sign the common challenge with our persistent private key
- send the go-wire encoded persistent pubkey and signature to the peer
- wait to receive the persistent public key and signature from the peer
- verify the signature on the challenge using the peer's persistent public key
If this is an outgoing connection (we dialed the peer) and we used a peer ID,
then finally verify that the peer's persistent public key corresponds to the peer ID we dialed,
ie. `peer.PubKey.Address() == <ID>`.
The connection has now been authenticated. All traffic is encrypted.
Note: only the dialer can authenticate the identity of the peer,
but this is what we care about since when we join the network we wish to
ensure we have reached the intended peer (and are not being MITMd).
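A sketch of the nonce derivation steps above, assuming 32-byte ephemeral public keys and using `golang.org/x/crypto/ripemd160` for the hash; "flip the last bit" is read here as toggling the lowest bit of the final byte. This is illustrative, not the go-crypto implementation.

```go
import (
	"bytes"

	"golang.org/x/crypto/ripemd160"
)

// deriveNonces computes the two 24-byte nonces from the sorted,
// concatenated ephemeral public keys, as described in the steps above.
func deriveNonces(ourEphPub, theirEphPub []byte) (recvNonce, sendNonce [24]byte) {
	lo, hi := ourEphPub, theirEphPub
	weAreLow := bytes.Compare(ourEphPub, theirEphPub) < 0
	if !weAreLow {
		lo, hi = theirEphPub, ourEphPub
	}

	h := ripemd160.New()
	h.Write(lo)
	h.Write(hi)

	var nonce1, nonce2 [24]byte
	copy(nonce1[:], h.Sum(nil)) // 20 hash bytes + 4 zero bytes = 24 bytes
	nonce2 = nonce1
	nonce2[23] ^= 0x01 // "flip the last bit" of nonce1

	if weAreLow { // smaller pubkey: nonce1 for receiving, nonce2 for sending
		return nonce1, nonce2
	}
	return nonce2, nonce1
}
```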
### Peer Filter
Before continuing, we check if the new peer has the same ID as ourselves or
an existing peer. If so, we disconnect.
We also check the peer's address and public key against
an optional whitelist which can be managed through the ABCI app -
if the whitelist is enabled and the peer does not qualify, the connection is
terminated.
### Tendermint Version Handshake
The Tendermint Version Handshake allows the peers to exchange their NodeInfo:
```golang
type NodeInfo struct {
PubKey crypto.PubKey
Moniker string
Network string
RemoteAddr string
ListenAddr string
Version string
Channels []int8
Other []string
}
```
The connection is disconnected if:
- `peer.NodeInfo.PubKey != peer.PubKey`
- `peer.NodeInfo.Version` is not formatted as `X.X.X` where X are integers known as Major, Minor, and Revision
- `peer.NodeInfo.Version` Major is not the same as ours
- `peer.NodeInfo.Version` Minor is not the same as ours
- `peer.NodeInfo.Network` is not the same as ours
- `peer.Channels` does not intersect with our known Channels.
At this point, if we have not disconnected, the peer is valid.
It is added to the switch and hence all reactors via the `AddPeer` method.
Note that each reactor may handle multiple channels.
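The list above amounts to a compatibility check roughly like the sketch below; `splitVersion` and `compatible` are our helpers, not part of the protocol:

```go
import "strings"

// splitVersion breaks "Major.Minor.Revision" into its three components.
func splitVersion(version string) (major, minor, revision string, ok bool) {
	parts := strings.SplitN(version, ".", 3)
	if len(parts) != 3 {
		return "", "", "", false
	}
	return parts[0], parts[1], parts[2], true
}

// compatible reports whether two NodeInfos agree on Major and Minor
// version and on the Network, per the list above.
func compatible(ours, theirs NodeInfo) bool {
	oMaj, oMin, _, okA := splitVersion(ours.Version)
	tMaj, tMin, _, okB := splitVersion(theirs.Version)
	if !okA || !okB {
		return false
	}
	return oMaj == tMaj && oMin == tMin && ours.Network == theirs.Network
}
```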
## Connection Activity
Once a peer is added, incoming messages for a given reactor are handled through
that reactor's `Receive` method, and output messages are sent directly by the Reactors
on each peer. A typical reactor maintains per-peer go-routine(s) that handle this.
