Compare commits

...

263 Commits

Author SHA1 Message Date
Ethan Buchman
e236302256 Merge pull request #793 from tendermint/release-v0.12.0
Release v0.12.0
2017-10-28 00:06:53 -04:00
Ethan Buchman
dfe28c8855 test: update for abci-cli consolidation. shell formatting 2017-10-27 23:09:50 -04:00
Ethan Buchman
4b616344fa update glide, again 2017-10-27 22:36:03 -04:00
Ethan Buchman
21dcb4f290 update glide 2017-10-27 13:55:56 -04:00
Ethan Buchman
b2b35d7dc1 update changelog 2017-10-27 11:54:20 -04:00
Ethan Buchman
a1501dcde8 version bump 2017-10-27 11:54:20 -04:00
Ethan Buchman
3319ad03b8 Merge pull request #791 from tendermint/740-no-wal-after-fast-sync
dont catchupReplay on wal if we fast synced
2017-10-27 11:49:54 -04:00
Ethan Buchman
591dd9e662 dont catchupReplay on wal if we fast synced 2017-10-27 10:46:19 -04:00
Ethan Buchman
bb6c15b00a CHANGELOG [ci skip] 2017-10-26 09:42:46 -04:00
Ethan Buchman
6af28ead87 Merge pull request #672 from tendermint/573-wal-issues
Add checksum and length to CS WAL record
2017-10-26 00:31:38 -04:00
Ethan Buchman
fcf459158d CHANGELOG [ci skip] 2017-10-26 00:29:58 -04:00
Ethan Buchman
e76ef2a8a1 types: unexpose valset.To/FromBytes 2017-10-26 00:27:02 -04:00
Ethan Buchman
3c92bea519 glide: more external deps locked to versions 2017-10-26 00:07:43 -04:00
Ethan Buchman
12c703c1c3 Merge branch 'develop' into 573-wal-issues 2017-10-25 23:56:08 -04:00
Ethan Buchman
376f47e030 Merge pull request #775 from tendermint/rpc-client-jitter
rpc/lib/client: add jitter for exponential backoff of WSClient
2017-10-25 22:53:50 -04:00
Petabyte Storage
ceedd4d968 remove unnecessary plus [ci skip] 2017-10-25 22:28:20 -04:00
Ethan Buchman
c595636999 glide 2017-10-25 22:23:55 -04:00
Emmanuel Odeke
6e5cd10399 rpc/lib/client: jitter test updates and only to-be run on releases
* Updated code with feedback from @melekes, @ebuchman and @silasdavis.
* Added a `release` Makefile target so the test only runs when the
`release` build tag is set during releases, i.e.
```shell
make release
```
which will run the comprehensive and long integration-ish tests.
2017-10-25 19:20:55 -07:00
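For illustration, a minimal Go sketch of how a test can be hidden behind the `release` build tag so that only `make release` (which, per the Makefile change later in this diff, runs `go test -tags release`) picks it up. File, package, and test names here are hypothetical, not the actual tendermint code.

```go
// +build release

// Hypothetical file: rpc/lib/client/jitter_release_test.go
// Because of the build constraint above, plain `go test ./...` skips this
// file; it only compiles when invoked with `go test -tags release ...`,
// which is what the `release` Makefile target does.
package client

import (
	"testing"
	"time"
)

func TestLongReconnectBackoff(t *testing.T) {
	// Placeholder for the long, integration-style jitter test that would be
	// too slow for the regular CI run.
	time.Sleep(10 * time.Millisecond)
}
```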
Ethan Buchman
57a684d5ac fixes from review 2017-10-25 21:54:56 -04:00
Ethan Buchman
5534eb4707 Merge pull request #776 from tendermint/feature/merge-light-client
Merge light client
2017-10-25 16:12:39 -04:00
Ethan Buchman
b2d5546cf8 Merge pull request #777 from silasdavis/fix-blocking-ws-client
Fix WSClient deadlock in the readRoutine after Stop() is called
2017-10-25 11:00:30 -04:00
Ethan Frey
f653ba63bf Separated out certifiers.Commit from rpc structs 2017-10-25 16:43:18 +02:00
Ethan Frey
0396b6d521 Rename checkpoint.go 2017-10-25 16:13:04 +02:00
Ethan Frey
94b36bb65e Move VerifyCommitAny into the types package 2017-10-25 16:13:04 +02:00
Ethan Frey
b4fd6e876e Copy certifiers from light-client 2017-10-25 16:13:04 +02:00
Ethan Buchman
775e100d2c Merge pull request #783 from tendermint/782-tendermint-invalid-command-panics
fix panic: failed to determine gopath: exec: "go"
2017-10-25 08:50:16 -04:00
Silas Davis
4cb02d0bf2 Exploit the fact the BaseService's closed Quit channel will keep emitting quit signals to close both readRoutine and writeRoutine 2017-10-25 10:19:18 +01:00
Anton Kaliaev
ae538337ba fix panic: failed to determine gopath: exec: "go" (Refs #782)
```
-bash-4.2$ tendermint show_validators
panic: failed to determine gopath: exec: "go": executable file not found in $PATH

goroutine 1 [running]:
github.com/tendermint/tendermint/vendor/github.com/tendermint/tmlibs/common.gopath(0xc4200632c0, 0x18)
	/var/lib/jenkins/workspace/03.Build.Package/go/src/github.com/tendermint/tendermint/vendor/github.com/tendermint/tmlibs/common/os.go:26 +0x1b5
github.com/tendermint/tendermint/vendor/github.com/tendermint/tmlibs/common.init()
	/var/lib/jenkins/workspace/03.Build.Package/go/src/github.com/tendermint/tendermint/vendor/github.com/tendermint/tmlibs/common/os.go:17 +0x13c
github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire.init()
	/var/lib/jenkins/workspace/03.Build.Package/go/src/github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire/wire.go:165 +0x50
github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire/data.init()
	/var/lib/jenkins/workspace/03.Build.Package/go/src/github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire/data/wrapper.go:89 +0x50
github.com/tendermint/tendermint/vendor/github.com/tendermint/tmlibs/cli.init()
	/var/lib/jenkins/workspace/03.Build.Package/go/src/github.com/tendermint/tendermint/vendor/github.com/tendermint/tmlibs/cli/setup.go:190 +0x76
main.init()
	/var/lib/jenkins/workspace/03.Build.Package/go/src/github.com/tendermint/tendermint/cmd/tendermint/main.go:42 +0x49
```

An error message instead would be nice.

Now GoPath() is a function instead of a variable.
2017-10-25 11:19:53 +04:00
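A hedged Go sketch of the kind of change the commit describes (names and fallback behaviour simplified, not the exact tmlibs code): the gopath is resolved lazily inside a function instead of a package-level variable computed at init, so a missing `go` binary no longer panics before the CLI can print anything.

```go
// Sketch only: compute GOPATH on demand rather than in init().
package common

import (
	"os"
	"os/exec"
	"strings"
)

// GoPath returns the GOPATH, preferring the environment variable and only
// shelling out to `go env GOPATH` when it is unset. On error it falls back to
// an empty string instead of panicking (the old init-time behaviour).
func GoPath() string {
	if path := os.Getenv("GOPATH"); path != "" {
		return path
	}
	out, err := exec.Command("go", "env", "GOPATH").Output()
	if err != nil {
		return "" // previously this case panicked during package init
	}
	return strings.TrimSpace(string(out))
}
```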
Ethan Buchman
62a7beec21 Merge pull request #780 from ericdmann/769-error-msg-while-testnet-sync
Change log level to Info when proposal block hashing fails
2017-10-24 23:27:20 -04:00
Eric Mann
45e18a1832 Change log level to Info when proposal block hashing fails due to partially complete block 2017-10-24 14:13:35 -07:00
Silas Davis
f6adddb4a8 Replace ResultsCh with ResponsesCh 2017-10-24 17:45:13 +01:00
Ethan Buchman
38fc351532 Merge pull request #765 from tendermint/762-blockchain-reactor-timeout
blockchain reactor timeout
2017-10-24 09:13:26 -04:00
Silas Davis
01be6fa309 Fix WSClient blocking in the readRoutine after Stop() as it tries to write to ResultsCh 2017-10-24 13:31:24 +01:00
Anton Kaliaev
e06bbaf303 refactor TestNoBlockMessageResponse to eliminate a race condition 2017-10-24 15:32:01 +04:00
Anton Kaliaev
bb7b152af5 write docs for cutWALUntil and wal2json binaries 2017-10-24 13:25:47 +04:00
Emmanuel Odeke
5504920ba3 rpc/lib/client: add jitter for exponential backoff of WSClient
Fixes https://github.com/tendermint/tendermint/issues/751.

Adds jitter to our exponential backoff to mitigate a self-DDoS
vector. The jitter is a randomly picked fraction of a second that keeps
each exponential backoff retry within (1<<attempts) == 2**attempts
seconds, but gives each client a random buffer before it tries to
reconnect, instead of all clients reconnecting at once, which might
recreate the very conditions that caused the dial to the WSServer to
fail in the first place, e.g.
* Network outage
* File descriptor exhaustion
* False positives from firewalls
etc.
2017-10-24 02:00:20 -07:00
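An illustrative Go sketch of exponential backoff with jitter as described above (function names and the exact jitter window are simplifications, not the actual WSClient code): the base delay grows as 2**attempt seconds, and a random sub-second amount is added so clients do not reconnect in lockstep.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// backoffWithJitter returns the wait before reconnect attempt `attempt`:
// an exponential base plus a random fraction of a second.
func backoffWithJitter(attempt uint) time.Duration {
	base := time.Duration(1<<attempt) * time.Second          // 2**attempt seconds
	jitter := time.Duration(rand.Int63n(int64(time.Second))) // random extra < 1s
	return base + jitter
}

func main() {
	for attempt := uint(0); attempt < 5; attempt++ {
		fmt.Printf("attempt %d: wait %v\n", attempt, backoffWithJitter(attempt))
	}
}
```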
Anton Kaliaev
c74a359c46 fixes per Bucky's review 2017-10-24 12:14:21 +04:00
Matt Bell
6a5254c475 Added local blockchain sync benchmark script 2017-10-23 19:46:57 -04:00
Ethan Buchman
2802a06a08 blockchain/store: comment about panics 2017-10-23 19:46:14 -04:00
Ethan Buchman
87cc277b38 Merge pull request #721 from tendermint/564-add-app-options-to-genesis-resp
Add app_options to GenesisDoc
2017-10-23 16:03:45 -04:00
Anton Kaliaev
3115c23762 binary format for WAL 2017-10-23 22:27:24 +04:00
Anton Kaliaev
31030c6514 make crc32c a global var
change echo format in build.sh script
2017-10-23 22:09:42 +04:00
Anton Kaliaev
7b8ffc9981 add checksum and msg size to TimedWALMessage
updated test_data/build.sh script
2017-10-23 22:09:17 +04:00
Ethan Buchman
a75bccfbc4 Merge branch 'develop' into 564-add-app-options-to-genesis-resp 2017-10-23 11:30:00 -04:00
Ethan Buchman
f97229f05a Merge pull request #748 from tendermint/params-test
types: ConsensusParams test + document the ranges/limits
2017-10-23 11:18:00 -04:00
Ethan Buchman
ac2ef9e0ea Merge pull request #750 from tendermint/feature/cleanup
Cleanup of code and code docs
2017-10-23 11:14:15 -04:00
Ethan Buchman
c2803b80e8 update changelog; fixes from rebase 2017-10-23 11:13:12 -04:00
Ethan Buchman
7a6876bc62 Merge pull request #768 from tendermint/feature/merkleeyes-to-iavl
Feature/merkleeyes to iavl
2017-10-23 11:07:21 -04:00
Adrian Brink
819f81f702 Change NOTE to CONTRACT 2017-10-23 11:04:45 -04:00
Adrian Brink
036d3b59a3 Address reviews 2017-10-23 11:04:45 -04:00
Adrian Brink
782a836db0 Cleanup of code and code docs
This cleans up some of the code in the state package
2017-10-23 11:04:45 -04:00
Ethan Buchman
bd46b78785 Merge pull request #755 from tendermint/753-notified-mempool-txs-but-mempool-empty
WIP: only notify when there are some txs (Refs #753)
2017-10-23 10:31:38 -04:00
Anton Kaliaev
f908dd0e55 only notify when there are some txs (Refs #753) 2017-10-23 10:31:00 -04:00
Ethan Buchman
0bbf38141a blockchain/pool: some comments and small changes 2017-10-23 10:13:46 -04:00
Ethan Buchman
f188366e26 update glide 2017-10-23 10:04:00 -04:00
Ethan Buchman
fd60621a8e update cswal test 2017-10-23 10:03:54 -04:00
Ethan Buchman
60b7f2c61b Merge pull request #767 from silasdavis/do-not-swallow
Make RPCError an actual error and don't swallow its companion data
2017-10-23 01:51:40 -04:00
Silas Davis
3e3d53daef Make RPCError an actual error and don't swallow its companion data 2017-10-22 15:14:21 +01:00
Anton Kaliaev
d64a48e0ee set logger on blockchain pool 2017-10-20 23:56:21 +04:00
Anton Kaliaev
0a7b2ab52c fix invalid memory address or nil pointer dereference error (Refs #762)
https://github.com/tendermint/tendermint/issues/762#issuecomment-338276055
```
E[10-19|04:52:38.969] Stopping peer for error                      module=p2p peer="Peer{MConn{178.62.46.14:46656} B14916FAF38A out}" err="Error: runtime error: invalid memory address or nil pointer dereference\nStack: goroutine 529485 [running]:\nruntime/debug.Stack(0xc4355cfb38, 0xb463e0, 0x11b1c30)\n\t/usr/local/go/src/runtime/debug/stack.go:24 +0xa7\ngithub.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/p2p.(*MConnection)._recover(0xc439a28870)\n\t/home/ubuntu/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/p2p/connection.go:206 +0x6e\npanic(0xb463e0, 0x11b1c30)\n\t/usr/local/go/src/runtime/panic.go:491 +0x283\ngithub.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/blockchain.(*bpPeer).decrPending(0x0, 0x381)\n\t/home/ubuntu/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/blockchain/pool.go:376 +0x22\ngithub.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/blockchain.(*BlockPool).AddBlock(0xc4200e4000, 0xc4266d1f00, 0x40, 0xc432ac9640, 0x381)\n\t/home/ubuntu/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/blockchain/pool.go:215 +0x139\ngithub.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/blockchain.(*BlockchainReactor).Receive(0xc42050a780, 0xc420257740, 0x1171be0, 0xc42ff302d0, 0xc4384b2000, 0x381, 0x1000)\n\t/home/ubuntu/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/blockchain/reactor.go:160 +0x712\ngithub.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/p2p.createMConnection.func1(0x11e5040, 0xc4384b2000, 0x381, 0x1000)\n\t/home/ubuntu/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/p2p/peer.go:334 +0xbd\ngithub.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/p2p.(*MConnection).recvRoutine(0xc439a28870)\n\t/home/ubuntu/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/p2p/connection.go:475 +0x4a3\ncreated by github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/p2p.(*MConnection).OnStart\n\t/home/ubuntu/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/p2p/connection.go:170 +0x187\n"
```
2017-10-20 21:56:10 +04:00
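A simplified Go sketch of the class of fix implied by the trace above (types reduced to stubs, not the actual pool.go code): the peer lookup can return nil if the peer was already removed, so AddBlock must guard before calling decrPending on it.

```go
package blockchain

import "sync"

type Block struct{ Height int64 }

type bpPeer struct{ numPending int32 }

func (p *bpPeer) decrPending(size int) { p.numPending-- }

type BlockPool struct {
	mtx   sync.Mutex
	peers map[string]*bpPeer
}

// AddBlock records a block received from peerID (sketch).
func (pool *BlockPool) AddBlock(peerID string, block *Block, blockSize int) {
	pool.mtx.Lock()
	defer pool.mtx.Unlock()

	peer := pool.peers[peerID]
	if peer == nil {
		// Block from an unknown or already-removed peer: ignore it rather
		// than dereference a nil *bpPeer (the panic shown in the trace).
		return
	}
	peer.decrPending(blockSize)
	// ... record the block against its requester, etc.
}
```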
Emmanuel Odeke
f24f03906f types: ConsensusParams: add feedback from @ebuchman and @melekes 2017-10-20 00:11:30 -06:00
Ethan Buchman
fa56e8c0ce Merge pull request #676 from tendermint/state-unexpose-genesisDoc-chainID
all, state: unexpose GenesisDoc, ChainID fields make them accessor methods
2017-10-18 20:10:02 -04:00
Anton Kaliaev
75b78bfb72 panic on marshal/unmarshal failures for genesisDoc 2017-10-17 13:33:57 +04:00
Ethan Buchman
b234f7aba2 Merge pull request #741 from tendermint/client-compile-time-assertions
rpc/client: use compile time assertions instead of methods
2017-10-17 03:41:24 -04:00
Emmanuel Odeke
bff069f83c types: ConsensusParams test + document the ranges/limits
Fixes https://github.com/tendermint/tendermint/issues/747
Updates https://github.com/tendermint/tendermint/issues/693

* Document the unmentioned limits for ConsensusParams.Validate()
* Make the limit for ConsensusParams.BlockSizeParams.MaxBytes
clear at 100MiB
2017-10-16 16:57:44 -06:00
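A hedged Go sketch of the documented limit (struct and field names simplified, not the actual types package): Validate rejects a block-size MaxBytes outside (0, 100 MiB].

```go
package types

import "errors"

const maxBlockSizeBytes = 100 * 1024 * 1024 // 100 MiB upper bound

type BlockSizeParams struct {
	MaxBytes int
}

type ConsensusParams struct {
	BlockSize BlockSizeParams
}

// Validate checks that the block size limit is positive and within range.
func (params ConsensusParams) Validate() error {
	if params.BlockSize.MaxBytes <= 0 {
		return errors.New("BlockSize.MaxBytes must be greater than 0")
	}
	if params.BlockSize.MaxBytes > maxBlockSizeBytes {
		return errors.New("BlockSize.MaxBytes is too big")
	}
	return nil
}
```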
Anton Kaliaev
616b07ff6b make AppOptions an interface{} 2017-10-16 10:58:52 +04:00
Anton Kaliaev
b26f812399 update changelog 2017-10-16 10:58:52 +04:00
Anton Kaliaev
321061125f add app_options to GenesisDoc (Refs #564) 2017-10-16 10:58:52 +04:00
Anton Kaliaev
6469e2ccca save genesis doc in DB to prevent user errors
https://github.com/tendermint/tendermint/pull/676#discussion_r144411458
2017-10-16 10:51:58 +04:00
Anton Kaliaev
c4646bf87f make state#Params not a pointer
also remove the comment
2017-10-16 10:34:02 +04:00
Anton Kaliaev
716364182d [state] expose ChainID and Params
```
jaekwon
Yeah we should definitely expose ChainID.
ConsensusParams is small enough, we can just write it.
```
https://github.com/tendermint/tendermint/pull/676#discussion_r144123203
2017-10-16 10:34:02 +04:00
Anton Kaliaev
1971e149fb ChainID() and Params() do not return errors
- remove state#GenesisDoc() method
2017-10-16 10:34:02 +04:00
Emmanuel Odeke
7939d62ef0 all, state: unexpose GenesisDoc, ChainID fields make them accessor methods
Fixes #671

Unexpose GenesisDoc and ChainID fields to avoid them being
serialized to the DB on every block write/state.Save()

A GenesisDoc can now be alternatively written to the state's
database, by serializing its JSON as a value of key "genesis-doc".

There are now accessors and a setter for these attributes:
- state.GenesisDoc() (*types.GenesisDoc, error)
- state.ChainID() (string, error)
- state.SetGenesisDoc(*types.GenesisDoc)

This is a breaking change since it changes the state's serialization,
and it requires that, if you load the GenesisDoc entirely from the
database, you set its value in the database as the GenesisDoc's
JSON-marshaled bytes.
2017-10-16 10:34:01 +04:00
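A hedged Go sketch of the accessor pattern described above (DB interface and GenesisDoc fields are stubs, not the actual state package): the GenesisDoc is stored once under the "genesis-doc" key instead of being re-serialized with the State on every Save(), and marshal failures panic, in line with the related commit above.

```go
package state

import (
	"encoding/json"
	"errors"
)

var genesisDocKey = []byte("genesis-doc")

type GenesisDoc struct {
	ChainID string `json:"chain_id"`
}

type db interface {
	Get(key []byte) []byte
	Set(key, value []byte)
}

type State struct {
	db db
}

// GenesisDoc loads and unmarshals the genesis document from the DB.
func (s *State) GenesisDoc() (*GenesisDoc, error) {
	bytes := s.db.Get(genesisDocKey)
	if len(bytes) == 0 {
		return nil, errors.New("genesis doc not found in state DB")
	}
	genDoc := new(GenesisDoc)
	if err := json.Unmarshal(bytes, genDoc); err != nil {
		return nil, err
	}
	return genDoc, nil
}

// ChainID is derived from the stored genesis doc.
func (s *State) ChainID() (string, error) {
	genDoc, err := s.GenesisDoc()
	if err != nil {
		return "", err
	}
	return genDoc.ChainID, nil
}

// SetGenesisDoc writes the genesis doc's JSON under the fixed key.
func (s *State) SetGenesisDoc(genDoc *GenesisDoc) {
	bytes, err := json.Marshal(genDoc)
	if err != nil {
		panic(err) // marshal failures for the genesis doc panic
	}
	s.db.Set(genesisDocKey, bytes)
}
```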
Ethan Buchman
a1e0f0ba95 docs/ecosystem: add py-tendermint to abci-servers 2017-10-14 00:59:34 -04:00
Emmanuel Odeke
5f218a43fd rpc/client: use compile time assertions instead of methods 2017-10-13 14:30:54 -06:00
Ethan Buchman
d490c25807 Merge pull request #720 from tendermint/types-heartbeat-test
types/heartbeat: test all Heartbeat functions
2017-10-13 09:22:59 -04:00
Ethan Buchman
340f33b475 Merge pull request #734 from tendermint/482-support-historical-abci-queries
support historical abci queries
2017-10-13 09:14:32 -04:00
Anton Kaliaev
7518c4a9be [rpc] update comment [ci skip] 2017-10-13 15:03:21 +04:00
Anton Kaliaev
db413aadfd fixes from @cloudhead review 2017-10-13 15:03:21 +04:00
Anton Kaliaev
5433e5771e support historical abci queries (Refs #482) 2017-10-13 15:03:20 +04:00
Zach Ramsay
e2e50bc0fc rpc: use /iavl repo in test (#713) 2017-10-11 10:35:22 -04:00
Ethan Buchman
27245ce6f6 Merge branch 'master' into develop 2017-10-10 19:12:40 -04:00
Ethan Buchman
d4634dc683 Merge pull request #729 from tendermint/release/0.11.1
Release/0.11.1
2017-10-10 18:50:15 -04:00
Ethan Buchman
8c08fc671c fix version 2017-10-10 18:49:08 -04:00
Ethan Buchman
a2d40580d7 Merge pull request #732 from sbellem/docs-install-typo-fix
[docs: typo fix] add missing "have"
2017-10-10 11:22:39 -04:00
Ethan Buchman
0011af7adf Merge pull request #727 from sbellem/docs-typo-fix
[docs:typo fix] remove misplaced "the"
2017-10-10 11:20:34 -04:00
Sylvain Bellemare
1764106606 [docs: typo fix] add missing "have" 2017-10-10 17:15:35 +02:00
Ethan Buchman
3356544706 update changelog 2017-10-10 11:10:48 -04:00
Ethan Buchman
335e012b6a Merge branch 'develop' into release/0.11.1 2017-10-10 11:08:14 -04:00
Ethan Buchman
a458da8f92 Merge pull request #724 from tendermint/708-leaving-out-params-crashes-tm-rpc
Leaving out params crashes tm rpc
2017-10-10 10:53:21 -04:00
Ethan Buchman
9fb45c5b5a remove a stale comment 2017-10-10 10:52:26 -04:00
Anton Kaliaev
aae4e94998 make RPCRequest params not a pointer
https://github.com/tendermint/tendermint/pull/724#issuecomment-335362927
2017-10-10 13:50:06 +04:00
Anton Kaliaev
d935a4f0a8 recover from panic in WS JSON RPC readRoutine
https://github.com/tendermint/tendermint/pull/724#issuecomment-335316484
2017-10-10 13:48:56 +04:00
Anton Kaliaev
5c331d8276 log a notification to help debug user issues 2017-10-10 13:01:25 +04:00
Anton Kaliaev
13b9de6778 return missing package declaration 2017-10-10 12:48:36 +04:00
Anton Kaliaev
dc0e8de9b0 extract some of the consensus types into ./types
so they can be used in rpc/core/types/responses.go.

```
So, it seems like we could use the actual structs here, but we don't want to have to import consensus to get them, as then clients are importing too much crap. So probably we should move some types from consensus into consensus/types so we can import.

Will these raw messages be identical to:

type ResultDumpConsensusState struct {
  RoundState cstypes.RoundState
  PeerRoundStates map[string]cstypes.PeerRoundState
}
```
https://github.com/tendermint/tendermint/pull/724#discussion_r143598193
2017-10-10 12:39:21 +04:00
Anton Kaliaev
90a2335267 bump version to 0.11.1 2017-10-10 01:18:33 +04:00
Anton Kaliaev
99c4e48038 return missing package declaration 2017-10-10 01:14:42 +04:00
Anton Kaliaev
4bd4d59af5 update changelog [ci skip] 2017-10-10 01:14:00 +04:00
Sylvain Bellemare
8219abc552 [docs:typo fix] remove misplaced "the" 2017-10-09 20:06:43 +02:00
Anton Kaliaev
d6a87d3c43 [rpc] DumpConsensusState: output state as json rather than string
Before:

```
{
  "jsonrpc": "2.0",
  "id": "",
  "result": {
    "round_state": "RoundState{\n  H:10 R:0 S:RoundStepNewHeight\n  StartTime:     2017-10-09 13:07:24.841134374 +0400 +04\n  CommitTime:    2017-10-09 13:07:23.841134374 +0400 +04\n  Validators:    ValidatorSet{\n      Proposer: Validator{EF243CC0E9B88D0161D24D733BDE9003518CEA27 {PubKeyEd25519{2E0B9301334FCDAB193D514022F81BA09BBEC028685C96602BE9DD0BD4F9E202}} VP:10 A:0}\n      Validators:\n        Validator{EF243CC0E9B88D0161D24D733BDE9003518CEA27 {PubKeyEd25519{2E0B9301334FCDAB193D514022F81BA09BBEC028685C96602BE9DD0BD4F9E202}} VP:10 A:0}\n    }\n  Proposal:      \u003cnil\u003e\n  ProposalBlock: nil-PartSet nil-Block\n  LockedRound:   0\n  LockedBlock:   nil-PartSet nil-Block\n  Votes:         HeightVoteSet{H:10 R:0~0\n      VoteSet{H:10 R:0 T:1 +2/3:\u003cnil\u003e BA{1:_} map[]}\n      VoteSet{H:10 R:0 T:2 +2/3:\u003cnil\u003e BA{1:_} map[]}\n    }\n  LastCommit: VoteSet{H:9 R:0 T:2 +2/3:947F67A7B85439AF2CD5DFED376C51AC7BD67AEE:1:365E9983E466 BA{1:X} map[]}\n  LastValidators:    ValidatorSet{\n      Proposer: Validator{EF243CC0E9B88D0161D24D733BDE9003518CEA27 {PubKeyEd25519{2E0B9301334FCDAB193D514022F81BA09BBEC028685C96602BE9DD0BD4F9E202}} VP:10 A:0}\n      Validators:\n        Validator{EF243CC0E9B88D0161D24D733BDE9003518CEA27 {PubKeyEd25519{2E0B9301334FCDAB193D514022F81BA09BBEC028685C96602BE9DD0BD4F9E202}} VP:10 A:0}\n    }\n}",
    "peer_round_states": []
  }
}
```

After:

```
{
  "jsonrpc": "2.0",
  "id": "",
  "result": {
    "round_state": {
      "Height": 1691,
      "Round": 0,
      "Step": 1,
      "StartTime": "2017-10-09T14:08:09.129491764+04:00",
      "CommitTime": "2017-10-09T14:08:08.129491764+04:00",
      "Validators": {
        "validators": [
          {
            "address": "EF243CC0E9B88D0161D24D733BDE9003518CEA27",
            "pub_key": {
              "type": "ed25519",
              "data": "2E0B9301334FCDAB193D514022F81BA09BBEC028685C96602BE9DD0BD4F9E202"
            },
            "voting_power": 10,
            "accum": 0
          }
        ],
        "proposer": {
          "address": "EF243CC0E9B88D0161D24D733BDE9003518CEA27",
          "pub_key": {
            "type": "ed25519",
            "data": "2E0B9301334FCDAB193D514022F81BA09BBEC028685C96602BE9DD0BD4F9E202"
          },
          "voting_power": 10,
          "accum": 0
        }
      },
      "Proposal": null,
      "ProposalBlock": null,
      "ProposalBlockParts": null,
      "LockedRound": 0,
      "LockedBlock": null,
      "LockedBlockParts": null,
      "Votes": {},
      "CommitRound": -1,
      "LastCommit": {},
      "LastValidators": {
        "validators": [
          {
            "address": "EF243CC0E9B88D0161D24D733BDE9003518CEA27",
            "pub_key": {
              "type": "ed25519",
              "data": "2E0B9301334FCDAB193D514022F81BA09BBEC028685C96602BE9DD0BD4F9E202"
            },
            "voting_power": 10,
            "accum": 0
          }
        ],
        "proposer": {
          "address": "EF243CC0E9B88D0161D24D733BDE9003518CEA27",
          "pub_key": {
            "type": "ed25519",
            "data": "2E0B9301334FCDAB193D514022F81BA09BBEC028685C96602BE9DD0BD4F9E202"
          },
          "voting_power": 10,
          "accum": 0
        }
      }
    },
    "peer_round_states": {
      "75EC8F15D244A421202F9725CD4DE509EE50303670310CF7530EF25E2B7C524B": {
        "Height": 1691,
        "Round": 0,
        "Step": 1,
        "StartTime": "2017-10-09T14:08:08.563251997+04:00",
        "Proposal": false,
        "ProposalBlockPartsHeader": {
          "total": 0,
          "hash": ""
        },
        "ProposalBlockParts": null,
        "ProposalPOLRound": -1,
        "ProposalPOL": null,
        "Prevotes": null,
        "Precommits": null,
        "LastCommitRound": 0,
        "LastCommit": null,
        "CatchupCommitRound": -1,
        "CatchupCommit": null
      }
    }
  }
}
```
2017-10-09 14:09:26 +04:00
Anton Kaliaev
a3adac3787 [rpc] do not try to parse params if they were not provided (Refs #708) 2017-10-09 13:30:52 +04:00
Emmanuel Odeke
6fc82f3824 types/heartbeat: test all Heartbeat functions
Updates https://github.com/tendermint/tendermint/issues/693

* Adjusted Heartbeat.Copy to return nil on
trying to copy a nil value instead of panicking.
* Also documented that WriteSignBytes panics
if the Heartbeat is nil.
2017-10-05 21:24:36 -06:00
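A hedged Go sketch of the nil-safe Copy behaviour described above (Heartbeat fields trimmed down): copying a nil Heartbeat returns nil instead of panicking.

```go
package types

type Heartbeat struct {
	Height int64
	Round  int
}

// Copy returns a shallow copy of the Heartbeat, or nil if the receiver is nil.
func (hb *Heartbeat) Copy() *Heartbeat {
	if hb == nil {
		return nil
	}
	hbCopy := *hb
	return &hbCopy
}
```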
Ethan Buchman
bcca27ee20 Merge pull request #718 from tendermint/restore-rpc-lib-readme
restore rpc/lib readme as doc.go (Refs #710) [ci skip]
2017-10-05 22:08:17 -04:00
Anton Kaliaev
3702cb7e7c restore rpc/lib readme as doc.go (Refs #710) [ci skip]
I don't want to lose any documentation. Correct me if I am wrong, but we
don't have these docs anywhere else.
2017-10-05 11:44:02 +04:00
Ethan Buchman
49653d3e31 CODEOWNERS file 2017-10-04 23:44:55 -04:00
Ethan Buchman
ddb8430341 Merge pull request #716 from tendermint/unstable
Unstable
2017-10-04 23:43:36 -04:00
Ethan Buchman
91a3cb0f21 makefile: remove megacheck 2017-10-04 23:20:22 -04:00
Ethan Buchman
765c325441 Merge pull request #714 from tendermint/feature/no-block-response
Feature/no block response
2017-10-04 22:27:06 -04:00
Ethan Buchman
659768783f blockchain: fixing reactor tests 2017-10-04 22:26:00 -04:00
Ethan Buchman
e756906a0e Merge pull request #711 from tendermint/imports
get rid of anonymous imports
2017-10-04 22:16:07 -04:00
Zach Ramsay
136b6a7673 rpc/lib: remove dead files, closes #710 2017-10-04 17:45:15 -04:00
Emmanuel Odeke
068f01368f blockchain/reactor: respondWithNoResponseMessage for missing height
Fixes #514
Replaces #540

If a peer requests a block with a height that we don't have,
respond with a bcNoBlockResponseMessage.
However, according to the Tendermint spec, if all nodes are honest
this condition shouldn't occur, so this is a possible hint of a
dishonest node.
2017-10-04 17:27:10 -04:00
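A hedged Go sketch of the reactor behaviour described above (message, store, and peer types reduced to stubs): if the requested height is not in the store, send an explicit "no block" response instead of staying silent.

```go
package blockchain

type bcBlockRequestMessage struct{ Height int64 }
type bcNoBlockResponseMessage struct{ Height int64 }

type Block struct{ Height int64 }

type store interface {
	LoadBlock(height int64) *Block
}

type peer interface {
	Send(msg interface{}) bool
}

// respondToBlockRequest answers a block request, or reports the miss.
func respondToBlockRequest(st store, src peer, msg *bcBlockRequestMessage) {
	block := st.LoadBlock(msg.Height)
	if block != nil {
		src.Send(block)
		return
	}
	// We don't have the block: tell the peer explicitly. Honest peers should
	// not request heights we lack, so this can hint at a misbehaving node.
	src.Send(&bcNoBlockResponseMessage{Height: msg.Height})
}
```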
Zach Ramsay
f23d47e5d2 upnp: keep a link 2017-10-04 17:19:49 -04:00
Ethan Buchman
09aed7ee89 Merge pull request #707 from tendermint/619-how-to-read-logs
How To Read Logs guide
2017-10-04 16:55:20 -04:00
Zach Ramsay
d56b44f3a5 all: no more anonymous imports 2017-10-04 16:40:45 -04:00
Anton Kaliaev
9e4c25761c relative links [ci skip] 2017-10-04 23:33:31 +04:00
Anton Kaliaev
54f2cc9709 [docs] add how to read logs guide [ci skip] 2017-10-04 18:35:22 +04:00
Ethan Buchman
31a7e2b3b4 Merge pull request #704 from tendermint/no-empty-docs
document no empty blocks
2017-10-03 23:56:05 -04:00
Zach Ramsay
00ab3daa0c document no empty blocks, closes #605 [ci skip] 2017-10-03 23:55:45 -04:00
Ethan Buchman
bbf0228aa7 Merge pull request #700 from tendermint/695-improve-app-dev-docs
Improve app dev docs
2017-10-03 23:40:21 -04:00
Ethan Buchman
6550199751 [docs] minor fixes from review [ci skip] 2017-10-03 23:39:46 -04:00
Anton Kaliaev
10f361fcd0 [docs] use persistent_dummy only when needed [ci skip] 2017-10-04 00:04:45 +04:00
Anton Kaliaev
4a0ae17401 [docs] include examples from the persistent_dummy app [ci skip] 2017-10-04 00:04:32 +04:00
Anton Kaliaev
8537070575 [docs] restructure sentence [ci skip] 2017-10-04 00:04:22 +04:00
Ethan Buchman
9cbcd4b5e3 Merge pull request #692 from tendermint/unstable
add the unstable changes
2017-10-03 12:47:16 -04:00
Ethan Buchman
4fa4e617b7 docs/ecosystem: add stratumn 2017-10-03 12:47:01 -04:00
Zach Ramsay
2e598a7caf one more fix 2017-10-03 11:26:32 -04:00
Zach Ramsay
031eb23dc8 docs: fix build warnings 2017-10-03 11:23:08 -04:00
Zach
edd718c580 Update ecosystem.rst 2017-10-03 11:14:24 -04:00
Zach
392a041c2b Merge pull request #699 from tendermint/update-getting-started-docs
remove unnecessary args in abci_query call in getting-started [ci skip]
2017-10-03 10:41:36 -04:00
Anton Kaliaev
8727bfc265 remove unnecessary args in abci_query call in getting-started [ci skip]
Since 0.10.0, RPC does not require all args (default values will be used).
2017-10-03 16:21:48 +04:00
Ethan Buchman
84e39203bb readme points to ecosystem doc; add lotion, clean up 2017-10-02 23:46:35 -04:00
Ethan Buchman
97e9802255 fix out of range error in VoteSet.addVote 2017-10-02 23:34:06 -04:00
Ethan Buchman
8c6bd44929 log stack trace on consensus failure 2017-10-02 23:34:06 -04:00
Ethan Buchman
ed5511dc08 glide: update for autofile fix 2017-10-02 23:34:02 -04:00
Ethan Buchman
aa57e89e21 changelog: add genesis amount->power 2017-10-02 14:28:04 -04:00
Zach Ramsay
c2f6ff759b typo 2017-10-02 13:02:45 -04:00
Anton Kaliaev
45ff7cdd0c rewrite ws client to expose a callback instead of a channel
A callback gives more power to the publisher. Plus, it is optional,
compared to a channel, which will block the whole client if you don't
read from it.
2017-10-02 13:00:20 -04:00
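A hedged Go sketch contrasting the two styles mentioned above (the Event type and function names are invented for illustration): a callback lets the publisher decide what to do per event, and a slow subscriber cannot block the client the way an unread channel can.

```go
package main

import "fmt"

type Event struct{ Data string }

// Channel style: the publisher blocks on send if nobody reads from ch.
func publishToChannel(ch chan<- Event, ev Event) {
	ch <- ev
}

// Callback style: the publisher just invokes the (optional) callback.
func publishToCallback(cb func(Event), ev Event) {
	if cb != nil {
		cb(ev)
	}
}

func main() {
	publishToCallback(func(ev Event) { fmt.Println("got:", ev.Data) }, Event{Data: "block"})
}
```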
Alexandre Thibault
ce36a0111a rpc: subscribe on reconnection (#689)
* rpc: subscribe on reconnection

* rpc: fix unit tests
2017-10-02 13:00:20 -04:00
Martin Dyring-Andersen
b61f5482d4 Fix broken reference to ABCI 2017-10-02 13:00:20 -04:00
Anton Kaliaev
40b5defe18 release script [ci skip] 2017-10-02 13:00:20 -04:00
Tino Rusch
e40d1b36f7 docs: added passchain to the ecosystem.rst in the applications section; 2017-10-02 13:00:20 -04:00
Zach Ramsay
60a2867af2 docs/ecosystem: add ABCI implementations 2017-10-02 12:59:32 -04:00
Adrian Brink
bfdad916a2 Update ecosystem.rst 2017-10-02 12:59:32 -04:00
Tino Rusch
498ff803db [README] added passchain to application list;
This adds passchain to the 'applications' part of the toplevel README.md file.
Passchain is a distributed password sharing system built on top of tendermint.
2017-10-02 12:59:32 -04:00
Anton Kaliaev
c8789492dc update docker readme [ci skip] 2017-10-02 12:59:32 -04:00
Anton Kaliaev
1723836014 update Dockerfile [ci skip] 2017-10-02 12:59:32 -04:00
Anton Kaliaev
2d6bc8d7d7 bump up Golang version to 1.9.0 2017-10-02 12:59:32 -04:00
Ethan Buchman
7448c4adde Merge pull request #690 from etherealmachine/patch-1
Fix typo in using-tendermint.rst
2017-10-01 11:03:16 -04:00
James Pettit
a221736eee Fix typo in using-tendermint.rst
configutation -> configuration
2017-09-29 23:18:45 -07:00
Alexandre Thibault
382bead548 rpc: fix client websocket timeout (#687) 2017-09-29 13:32:30 +04:00
Anton Kaliaev
f9479b34cb sleep time should be greater than readTimeout (5 sec)
otherwise, we're not testing ping/pongs.
see https://github.com/tendermint/tendermint/pull/687#issuecomment-332494735
2017-09-27 15:44:19 +04:00
Emmanuel Odeke
925696ca65 Merge branch 'p2p-prune-unused-IPRangeCount-funcs' into develop 2017-09-22 21:09:41 -06:00
Ethan Buchman
7682ad9a60 Merge pull request #675 from tendermint/release-v0.11.0
Release v0.11.0
2017-09-22 13:48:52 -04:00
Ethan Buchman
3e92d295e4 glide for tmlibs 0.3.1 2017-09-22 13:25:10 -04:00
Ethan Buchman
661d336dd5 glide 2017-09-22 12:29:53 -04:00
Ethan Buchman
d1a00c684e types: comments 2017-09-22 12:00:37 -04:00
Ethan Buchman
db034e079a version bump 2017-09-22 11:44:57 -04:00
Ethan Buchman
7d983a548b changelog 2017-09-22 11:44:25 -04:00
Ethan Buchman
8311f5c611 abci.Info takes a struct; less merkleeyes 2017-09-22 11:42:40 -04:00
Ethan Buchman
628791e5a5 Merge pull request #665 from tendermint/no-internet
p2p: allow listener with no external connection
2017-09-22 10:16:50 -04:00
Ethan Buchman
df857266b6 update glide 2017-09-22 10:14:05 -04:00
Ethan Buchman
ddb3d8945d p2p: allow listener with no external connection 2017-09-22 10:13:23 -04:00
Ethan Buchman
7f5908b622 Merge pull request #637 from tendermint/feature/hsm
PrivValidator Interface
2017-09-22 00:28:40 -04:00
Ethan Buchman
24f7b9387a more tests 2017-09-22 00:05:39 -04:00
Ethan Buchman
756818f940 fixes from review 2017-09-21 21:44:36 -04:00
Ethan Buchman
2131f8d330 some fixes from review 2017-09-21 17:21:20 -04:00
Ethan Buchman
8ae2ffda89 put funcs back in order to simplify review 2017-09-21 16:59:25 -04:00
Ethan Buchman
75b97a5a65 PrivValidatorFS is like old PrivValidator, for now 2017-09-21 16:46:31 -04:00
Ethan Buchman
7b99039c34 make signBytesHRS a method on LastSignedInfo 2017-09-21 15:54:33 -04:00
Ethan Buchman
3ca7b10ad4 types: more . -> cmn 2017-09-21 15:54:33 -04:00
Ethan Buchman
779c2a22d0 node: NewNode takes DBProvider and GenDocProvider 2017-09-21 15:54:33 -04:00
Ethan Buchman
147a18b34a fix some comments 2017-09-21 15:52:25 -04:00
Ethan Buchman
4382c8d28b fix tests 2017-09-21 15:52:25 -04:00
Ethan Buchman
944ebccfe9 more PrivValidator interface 2017-09-21 15:51:20 -04:00
Ethan Buchman
fd1b0b997a PrivValidator interface 2017-09-21 15:51:20 -04:00
Ethan Buchman
abe912c610 FuncSignerAndApp allows custom signer and abci app 2017-09-21 15:50:43 -04:00
Ethan Buchman
66fcdf7c7a minor fixes 2017-09-21 15:50:43 -04:00
Adrian Brink
4e13a19339 Add ability to construct new instance of Tendermint core from scratch 2017-09-21 15:50:43 -04:00
Adrian Brink
7dd3c007c7 Refactor priv_validator
Users can now just pass an object that implements the Signer interface.
2017-09-21 15:50:43 -04:00
Duncan Jones
0d392a0442 Allow Signer to be generated with priv key
Prior to this change, a custom Signer would have no knowledge of the private
key stored in the configuration file. This change introduces a generator
function, which creates a Signer based on the private key. This provides an
opportunity for custom Signers to adjust their behaviour based on the key
contents (e.g. imagine the key contents are a key label rather than the key
itself).
2017-09-21 15:50:43 -04:00
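A hedged Go sketch of the generator-function idea described above (interface and key types simplified, not the actual tendermint code): the caller supplies a function that builds a Signer from the configured private key, so a custom Signer can inspect the key material, for example treating it as an HSM key label.

```go
package privval

type PrivKey []byte
type Signature []byte

// Signer is anything that can sign bytes.
type Signer interface {
	Sign(msg []byte) (Signature, error)
}

// SignerGenerator builds a Signer from the configured private key.
type SignerGenerator func(privKey PrivKey) Signer

// defaultSigner signs with the key directly (crypto stubbed out here).
type defaultSigner struct{ privKey PrivKey }

func (s *defaultSigner) Sign(msg []byte) (Signature, error) {
	return Signature(msg), nil // placeholder, not real signing
}

// NewDefaultSignerGenerator is what a privValidator might fall back to when
// no custom generator is supplied.
func NewDefaultSignerGenerator() SignerGenerator {
	return func(privKey PrivKey) Signer { return &defaultSigner{privKey: privKey} }
}
```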
Duncan Jones
7e4a704bd1 Remove reliance on default Signer
This change allows the default privValidator to use a custom Signer
implementation with no reliance on the default Signer implementation.
2017-09-21 15:50:43 -04:00
Adrian Brink
bf5e956087 Change capitalisation 2017-09-21 15:50:43 -04:00
Adrian Brink
2ccc3326ec Remove redundant file 2017-09-21 15:50:43 -04:00
Adrian Brink
83f7d5c95a Setup custom tendermint node
By exporting all of the commands, we allow users to set up their own
tendermint node CLI. This enables users to provide a different
privValidator without the need to fork tendermint.
2017-09-21 15:50:43 -04:00
Adrian Brink
2c129447fd Example that showcases how to build your own tendermint node
This example shows how a user of the tendermint library can build their
own node and supply it with its own commands. It includes two todos in
order to make it easier for library users to use tendermint.
2017-09-21 15:50:43 -04:00
Ethan Buchman
b50339e8e7 p2p: sw.AddPeer -> sw.addPeer 2017-09-21 15:47:41 -04:00
Ethan Buchman
8ff5b365dd Merge pull request #650 from tendermint/consensus_params
Consensus params
2017-09-21 15:46:57 -04:00
Ethan Buchman
1f0985689d ConsensusParams ptr in GenesisDoc for json 2017-09-21 15:22:58 -04:00
Ethan Buchman
3089bbf2b8 Amount -> Power. Closes #166 2017-09-21 14:59:27 -04:00
Ethan Buchman
5feeb65cf0 dont use pointers for ConsensusParams 2017-09-21 14:59:24 -04:00
Ethan Buchman
715e74186c fixes from review 2017-09-21 14:51:29 -04:00
Ethan Buchman
3a03fe5a15 updated to match adr 005 2017-09-21 14:51:29 -04:00
Ethan Buchman
d343560108 adr: add 005 consensus params 2017-09-21 14:51:29 -04:00
Ethan Buchman
2b6db268cf genesis json tests and mv ConsensusParams to types 2017-09-21 14:51:29 -04:00
Ethan Buchman
14abdd57f3 genDoc.ValidateAndComplete 2017-09-21 14:51:29 -04:00
Ethan Buchman
1f3e4d2d9a move PartSetSize out of the config, into ConsensusParams 2017-09-21 14:51:29 -04:00
Ethan Buchman
29bfcb0a31 minor comments/changes 2017-09-21 14:51:29 -04:00
Ethan Buchman
301845943c Merge pull request #663 from tendermint/readme-update
update readme and other files
2017-09-21 12:36:11 -04:00
Ethan Buchman
b0017c5460 update CHANGELOG [ci skip] 2017-09-21 12:34:37 -04:00
Zach
0b61d22652 Merge pull request #664 from tendermint/windows-path
use filepath for windows compatibility
2017-09-21 07:34:20 -04:00
Zach Ramsay
70b95135e6 consensus: use filepath for windows compatibility, closes #595 2017-09-18 17:49:43 -04:00
Zach Ramsay
a3d925ac1d pr fixes 2017-09-18 16:52:00 -04:00
Zach Ramsay
cf9a03f698 docs: organize install a bit better 2017-09-18 16:42:24 -04:00
Ethan Buchman
7f8240dfde Merge pull request #521 from tendermint/json-rpc-patch
updated json response to match spec by @davebryson
2017-09-18 16:37:34 -04:00
Anton Kaliaev
f8b152972f return method not found error
if somebody tries to access WS method in non-ws context
2017-09-18 16:36:03 -04:00
Anton Kaliaev
95875c55fc ID must be present in both request and response
from the spec:
This member is REQUIRED.
It MUST be the same as the value of the id member in the Request Object.
If there was an error in detecting the id in the Request object (e.g. Parse error/Invalid Request), it MUST be Null.
2017-09-18 16:36:03 -04:00
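A hedged Go sketch of the ID rule quoted from the spec (request/response structs simplified): the response echoes the request ID, and the client checks the ID when a response arrives.

```go
package main

import (
	"encoding/json"
	"fmt"
)

type RPCRequest struct {
	JSONRPC string          `json:"jsonrpc"`
	ID      string          `json:"id"`
	Method  string          `json:"method"`
	Params  json.RawMessage `json:"params,omitempty"`
}

type RPCResponse struct {
	JSONRPC string          `json:"jsonrpc"`
	ID      string          `json:"id"` // MUST equal the request ID (or be null on parse errors)
	Result  json.RawMessage `json:"result,omitempty"`
}

func main() {
	req := RPCRequest{JSONRPC: "2.0", ID: "ws-1", Method: "status"}
	resp := RPCResponse{JSONRPC: "2.0", ID: "ws-1"}
	if resp.ID != req.ID {
		fmt.Println("dropping response with mismatched ID")
		return
	}
	fmt.Println("response matches request", req.ID)
}
```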
Anton Kaliaev
7fadde0b37 check for request ID after receiving it 2017-09-18 16:36:03 -04:00
Anton Kaliaev
e36c79f713 capitalize RpcError 2017-09-18 16:36:03 -04:00
Anton Kaliaev
2252071866 update changelog 2017-09-18 16:36:03 -04:00
Anton Kaliaev
b700ed8e31 remove check for non-empty message as it should always be present 2017-09-18 16:36:03 -04:00
Anton Kaliaev
e1fd587ddd we now omit error if empty 2017-09-18 16:36:03 -04:00
Anton Kaliaev
f74de4cb86 include optional data field in error object
```
data
A Primitive or Structured value that contains additional information about the error.
This may be omitted.
The value of this member is defined by the Server (e.g. detailed error information, nested errors etc.).
```
2017-09-18 16:36:02 -04:00
Anton Kaliaev
6c1572c9b8 fix invalid memory address or nil pointer dereference 2017-09-18 16:36:02 -04:00
Dave Bryson
60a1f49a5c updated json response to match spec by @davebryson 2017-09-18 16:35:50 -04:00
Zach Ramsay
740167202f readme & al., update links to docs 2017-09-18 16:23:22 -04:00
Ethan Buchman
e0017c8a97 Merge pull request #654 from tendermint/docs-add-tools
consolidate docs from tools repo
2017-09-18 13:55:37 -04:00
Ethan Buchman
4a78c1bb28 Merge pull request #661 from tendermint/docs-add-ecosystem
docs: add ecosystem from website
2017-09-18 13:55:09 -04:00
Zach Ramsay
1d3f723ccc docs: remove last section from ecosystem 2017-09-18 13:41:12 -04:00
Zach Ramsay
77408b7bde docs: using ABCI-CLI 2017-09-18 13:31:18 -04:00
Ethan Buchman
0cd1bd6d8b Merge pull request #658 from tendermint/organize-docs
docs: organize the directory
2017-09-18 13:08:38 -04:00
Zach Ramsay
881d2ce31e docs: pull from tools' master branch 2017-09-18 12:14:23 -04:00
Zach Ramsay
ad79ead93d docs: add and re-format the ecosystem from website 2017-09-18 11:57:35 -04:00
Zach Ramsay
17a748c796 docs: rename file 2017-09-18 09:39:23 -04:00
Zach Ramsay
583599b19f docs: add software.json from website (ecosystem) 2017-09-18 09:38:22 -04:00
Zach Ramsay
044fe56b43 update changelog 2017-09-16 15:24:30 -04:00
Zach Ramsay
b46da19b74 docs: organize the directory, #656 2017-09-16 15:19:22 -04:00
Zach Ramsay
d635783d07 docs: finish pull from tools 2017-09-16 15:11:10 -04:00
Zach Ramsay
f79e13af38 docs: pull from tools on build 2017-09-16 15:11:10 -04:00
Zach Ramsay
34e6474ad9 docs: organize tm-bench/monitor description 2017-09-16 15:11:10 -04:00
Zach Ramsay
5c96e0c812 docs: add original tm-bench/monitor files 2017-09-16 15:11:10 -04:00
Zach Ramsay
921a2b41f0 docs: add kubes docs to mintnet doc, from tools 2017-09-16 15:11:10 -04:00
Zach Ramsay
a8cec967ac docs: harmonize headers for tools docs 2017-09-16 15:11:10 -04:00
Zach Ramsay
044c500cc0 docs: move images in from tools repo 2017-09-16 15:11:10 -04:00
Zach Ramsay
a917ec9ea2 docs: update index.rst 2017-09-16 15:11:10 -04:00
Zach Ramsay
d4ccc88676 docs: convert from md to rst 2017-09-16 15:11:10 -04:00
Zach Ramsay
12c85c4e60 docs: add original README's from tools repo 2017-09-16 15:11:10 -04:00
Ethan Buchman
90c0267bc1 types: privVal.Sign returns an error 2017-09-16 01:07:04 -04:00
Ethan Buchman
8963bf08aa Merge pull request #657 from tendermint/fix/adr
docs: update and clean up adr
2017-09-15 19:32:58 -04:00
Ethan Buchman
ede4f818fd docs: update and clean up adr 2017-09-15 19:31:37 -04:00
Ethan Buchman
126f63c0a2 Merge pull request #653 from tendermint/improvement/peer_interface
peer interface
2017-09-15 18:43:53 -04:00
Ethan Buchman
aea8629272 peer interface 2017-09-15 18:40:59 -04:00
Emmanuel Odeke
5138bcb1c7 p2p: delete unused and untested *IPRangeCount functions
Fixes #602

Delete unused and untested functions:
- AddToIPRangeCounts
- CheckIPRangeCounts
2017-09-15 14:03:01 -06:00
Ethan Buchman
cc50dc076a Merge pull request #638 from tendermint/feature/state_tests
Upgrade state_test.go
2017-09-12 14:37:59 -04:00
Ethan Buchman
bf576f0097 state: minor comment fixes 2017-09-12 14:37:32 -04:00
Zach
377747b061 Merge pull request #651 from tendermint/doc/testnet
Started expanding "automated deployments" in documentation
2017-09-12 12:41:42 -04:00
Adrian Brink
870a98ccc3 Last fixes 2017-09-12 17:12:19 +02:00
Adrian Brink
8eda3efa28 Cleanup lines to fit within 72 characters 2017-09-12 17:08:30 +02:00
Zach Ramsay
e2c3cc9685 docs: give index a Tools section 2017-09-12 10:57:34 -04:00
Adrian Brink
2a6e71a753 Reformat tests to extract common setup 2017-09-12 16:57:10 +02:00
GaryTomato
340d273f83 Started expanding "automated deployments" 2017-09-11 11:34:36 -04:00
Ethan Buchman
54c63726b0 p2p: minor comment fixes 2017-09-06 16:40:21 -04:00
Ethan Buchman
cb80ab2965 Merge remote-tracking branch 'orijtech/p2p-full-test-PeerSet' into develop 2017-09-06 16:40:07 -04:00
Emmanuel Odeke
c48e772115 p2p: fully test PeerSet, more docs, parallelize PeerSet tests
* Fully test PeerSet and check its concurrency guarantees
* Improve the doc for PeerSet.Has and remove an unnecessary
defer for a path that only sets a variable, making it fast anyway.
* Parallelize PeerSet tests with t.Parallel()
* Document functions in peer_set.go more.
2017-09-06 02:47:31 -06:00
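A hedged Go sketch of the t.Parallel() pattern mentioned above (PeerSet reduced to a stub, not the real p2p code): parallel subtests exercise the set concurrently, so the test runner itself stresses the concurrency guarantees.

```go
package p2p

import (
	"fmt"
	"sync"
	"testing"
)

type PeerSet struct {
	mtx   sync.Mutex
	peers map[string]bool
}

func NewPeerSet() *PeerSet { return &PeerSet{peers: make(map[string]bool)} }

func (ps *PeerSet) Add(key string) {
	ps.mtx.Lock()
	defer ps.mtx.Unlock()
	ps.peers[key] = true
}

func (ps *PeerSet) Has(key string) bool {
	ps.mtx.Lock()
	defer ps.mtx.Unlock()
	return ps.peers[key]
}

func TestPeerSetConcurrent(t *testing.T) {
	t.Parallel() // run alongside other parallel tests in the package

	ps := NewPeerSet()
	for i := 0; i < 10; i++ {
		key := fmt.Sprintf("peer-%d", i)
		t.Run(key, func(t *testing.T) {
			t.Parallel() // subtests also hit the same set concurrently
			ps.Add(key)
			if !ps.Has(key) {
				t.Fatalf("expected %s in set", key)
			}
		})
	}
}
```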
Ethan Buchman
aa78fc14b5 cmd: dont wait for genesis. closes #562 2017-09-06 01:57:21 -04:00
Ethan Buchman
399fb9aa70 consensus: remove support for replay by #HEIGHT. closes #567 2017-09-06 01:53:41 -04:00
Ethan Buchman
ec3c91ac14 Merge pull request #618 from tendermint/historical_validators
Historical validators
2017-09-06 01:46:31 -04:00
Ethan Buchman
fae0603413 more fixes from review 2017-09-06 01:25:57 -04:00
Ethan Buchman
9deb647303 fixes from review 2017-09-04 18:29:51 -04:00
Ethan Buchman
f0f1ebe013 rpc: Block and Commit take pointers; return latest on nil 2017-09-03 16:07:37 -04:00
Ethan Buchman
e2e8746044 rpc: historical validators 2017-09-03 16:07:37 -04:00
Ethan Buchman
78446fd99c state: persist validators 2017-09-03 16:07:37 -04:00
225 changed files with 7329 additions and 2557 deletions

.github/CODEOWNERS

@@ -0,0 +1,5 @@
# CODEOWNERS: https://help.github.com/articles/about-codeowners/
# Everything goes through Bucky. For now.
* @ebuchman

.gitignore

@@ -15,3 +15,8 @@ test/p2p/data/
test/logs
.glide
coverage.txt
docs/_build
docs/tools
scripts/wal2json/wal2json
scripts/cutWALUntil/cutWALUntil

CHANGELOG.md

@@ -3,36 +3,98 @@
## Roadmap
BREAKING CHANGES:
- Upgrade the header to support better proofs on validators, results, evidence, and possibly more
- Better support for injecting randomness
- Pass evidence/voteInfo through ABCI
- Upgrade consensus for more real-time use of evidence
FEATURES:
- Peer reputation management
- Use the chain as its own CA for nodes and validators
- Tooling to run multiple blockchains/apps, possibly in a single process
- State syncing (without transaction replay)
- Improved support for querying history and state
- Add authentication and rate-limiting to the RPC
IMPROVEMENTS:
- Improve subtleties around mempool caching and logic
- Consensus optimizations:
  - cache block parts for faster agreement after round changes
  - propagate block parts rarest first
- Better testing of the consensus state machine (ie. use a DSL)
- Auto compiled serialization/deserialization code instead of go-wire reflection
BUG FIXES:
- Graceful handling/recovery for apps that have non-determinism or fail to halt
- Graceful handling/recovery for violations of safety, or liveness
## 0.12.0 (October 27, 2017)
BREAKING CHANGES:
- rpc/client: websocket ResultsCh and ErrorsCh unified in ResponsesCh.
- rpc/client: ABCIQuery no longer takes `prove`
- state: remove GenesisDoc from state.
- consensus: new binary WAL format provides efficiency and uses checksums to detect corruption
- use scripts/wal2json to convert to json for debugging
FEATURES:
- new `certifiers` pkg contains the tendermint light-client library (name subject to change)!
- rpc: `/genesis` includes the `app_options`.
- rpc: `/abci_query` takes an additional `height` parameter to support historical queries.
- rpc/client: new ABCIQueryWithOptions supports options like `trusted` (set false to get a proof) and `height` to query a historical height.
IMPROVEMENTS:
- docs: Added Slate docs to each rpc function (see rpc/core)
- docs: Ported all website docs to Read The Docs
- rpc: `/genesis` result includes `app_options`
- rpc/lib/client: add jitter to reconnects.
- rpc/lib/types: `RPCError` satisfies the `error` interface.
BUG FIXES:
- rpc/client: fix ws deadlock after stopping
- blockchain: fix panic on AddBlock when peer is nil
- mempool: fix sending on TxsAvailable when a tx has been invalidated
- consensus: dont run WAL catchup if we fast synced
## 0.11.1 (October 10, 2017)
IMPROVEMENTS:
- blockchain/reactor: respondWithNoResponseMessage for missing height
BUG FIXES:
- rpc: fixed client WebSocket timeout
- rpc: client now resubscribes on reconnection
- rpc: fix panics on missing params
- rpc: fix `/dump_consensus_state` to have normal json output (NOTE: technically breaking, but worth a bug fix label)
- types: fixed out of range error in VoteSet.addVote
- consensus: fix wal autofile via https://github.com/tendermint/tmlibs/blob/master/CHANGELOG.md#032-october-2-2017
## 0.11.0 (September 22, 2017)
BREAKING:
- genesis file: validator `amount` is now `power`
- abci: Info, BeginBlock, InitChain all take structs
- rpc: various changes to match JSONRPC spec (http://www.jsonrpc.org/specification), including breaking ones:
- requests that previously returned HTTP code 4XX now return 200 with an error code in the JSONRPC.
- `rpctypes.RPCResponse` uses new `RPCError` type instead of `string`.
- cmd: if there is no genesis, exit immediately instead of waiting around for one to show.
- types: `Signer.Sign` returns an error.
- state: every validator set change is persisted to disk, which required some changes to the `State` structure.
- p2p: new `p2p.Peer` interface used for all reactor methods (instead of `*p2p.Peer` struct).
FEATURES:
- rpc: `/validators?height=X` allows querying of validators at previous heights.
- rpc: Leaving the `height` param empty for `/block`, `/validators`, and `/commit` will return the value for the latest height.
IMPROVEMENTS:
- docs: Moved all docs from the website and tools repo in, converted to `.rst`, and cleaned up for presentation on `tendermint.readthedocs.io`
BUG FIXES:
- fix WAL opening issue on Windows
## 0.10.4 (September 5, 2017)
IMPROVEMENTS:
- docs: Added Slate docs to each rpc function (see rpc/core)
- docs: Ported all website docs to Read The Docs
- config: expose some p2p params to tweak performance: RecvRate, SendRate, and MaxMsgPacketPayloadSize
- rpc: Upgrade the websocket client and server, including improved auto reconnect, and proper ping/pong
@@ -72,7 +134,7 @@ IMPROVEMENTS:
FEATURES:
- Use `--trace` to get stack traces for logged errors
- types: GenesisDoc.ValidatorHash returns the hash of the genesis validator set
- types: GenesisDocFromFile parses a GenesisDoc from a JSON file
IMPROVEMENTS:
@@ -96,7 +158,7 @@ Also includes the Grand Repo-Merge of 2017.
BREAKING CHANGES:
- Config and Flags:
- The `config` map is replaced with a [`Config` struct](https://github.com/tendermint/tendermint/blob/master/config/config.go#L11),
containing substructs: `BaseConfig`, `P2PConfig`, `MempoolConfig`, `ConsensusConfig`, `RPCConfig`
- This affects the following flags:
- `--seeds` is now `--p2p.seeds`
@@ -109,16 +171,16 @@ containing substructs: `BaseConfig`, `P2PConfig`, `MempoolConfig`, `ConsensusCon
```
[p2p]
laddr="tcp://1.2.3.4:46656"
[consensus]
timeout_propose=1000
```
- Use viper and `DefaultConfig() / TestConfig()` functions to handle defaults, and remove `config/tendermint` and `config/tendermint_test`
- Change some [function and method signatures](https://gist.github.com/ebuchman/640d5fc6c2605f73497992fe107ebe0b) to accommodate new config
- Logger
- Replace static `log15` logger with a simple interface, and provide a new implementation using `go-kit`.
See our new [logging library](https://github.com/tendermint/tmlibs/log) and [blog post](https://tendermint.com/blog/abstracting-the-logger-interface-in-go) for more details
- Levels `warn` and `notice` are removed (you may need to change them in your `config.toml`!)
- Change some [function and method signatures](https://gist.github.com/ebuchman/640d5fc6c2605f73497992fe107ebe0b) to accept a logger
@@ -161,7 +223,7 @@ IMPROVEMENTS:
- Limit `/blockchain_info` call to return a maximum of 20 blocks
- Use `.Wrap()` and `.Unwrap()` instead of eg. `PubKeyS` for `go-crypto` types
- RPC JSON responses use pretty printing (via `json.MarshalIndent`)
- Color code different instances of the consensus for tests
- Isolate viper to `cmd/tendermint/commands` and do not read config from file for tests
@@ -189,7 +251,7 @@ IMPROVEMENTS:
- WAL uses #ENDHEIGHT instead of #HEIGHT (#HEIGHT will stop working in 0.10.0)
- Peers included via `--seeds`, under `seeds` in the config, or in `/dial_seeds` are now persistent, and will be reconnected to if the connection breaks
BUG FIXES:
- Fix bug in fast-sync where we stop syncing after a peer is removed, even if they're re-added later
- Fix handshake replay to handle validator set changes and results of DeliverTx when we crash after app.Commit but before state.Save()
@@ -205,7 +267,7 @@ message RequestQuery{
bytes data = 1;
string path = 2;
uint64 height = 3;
bool prove = 4;
}
message ResponseQuery{
@@ -229,7 +291,7 @@ type BlockMeta struct {
}
```
- `ValidatorSet.Proposer` is exposed as a field and persisted with the `State`. Use `GetProposer()` to initialize or update after validator-set changes.
- `tendermint gen_validator` command output is now pure JSON
@@ -272,7 +334,7 @@ type BlockID struct {
}
```
- `Vote` data type now includes validator address and index:
```
type Vote struct {
@@ -292,7 +354,7 @@ type Vote struct {
FEATURES:
- New message type on the ConsensusReactor, `Maj23Msg`, for peers to alert others they've seen a Maj23,
in order to track and handle conflicting votes intelligently to prevent Byzantine faults from causing halts:
```
@@ -315,7 +377,7 @@ IMPROVEMENTS:
- Less verbose logging
- Better test coverage (37% -> 49%)
- Canonical SignBytes for signable types
- Write-Ahead Log for Mempool and Consensus via tmlibs/autofile
- Better in-process testing for the consensus reactor and byzantine faults
- Better crash/restart testing for individual nodes at preset failure points, and of networks at arbitrary points
- Better abstraction over timeout mechanics
@@ -395,7 +457,7 @@ FEATURES:
- TMSP and RPC support TCP and UNIX sockets
- Addition config options including block size and consensus parameters
- New WAL mode `cswal_light`; logs only the validator's own votes
- New RPC endpoints:
- for starting/stopping profilers, and for updating config
- `/broadcast_tx_commit`, returns when tx is included in a block, else an error
- `/unsafe_flush_mempool`, empties the mempool
@@ -416,14 +478,14 @@ BUG FIXES:
Strict versioning only began with the release of v0.7.0, in late summer 2016.
The project itself began in early summer 2014 and was workable decentralized cryptocurrency software by the end of that year.
Through the course of 2015, in collaboration with Eris Industries (now Monax Industries),
many additional features were integrated, including an implementation from scratch of the Ethereum Virtual Machine.
That implementation now forms the heart of [Burrow](https://github.com/hyperledger/burrow).
In the later half of 2015, the consensus algorithm was upgraded with a more asynchronous design and a more deterministic and robust implementation.
By late 2015, frustration with the difficulty of forking a large monolithic stack to create alternative cryptocurrency designs led to the
invention of the Application Blockchain Interface (ABCI), then called the Tendermint Socket Protocol (TMSP).
The Ethereum Virtual Machine and various other transaction features were removed, and Tendermint was whittled down to a core consensus engine
driving an application running in another process.
The ABCI interface and implementation were iterated on and improved over the course of 2016,
until versioned history kicked in with v0.7.0.

CONTRIBUTING.md

@@ -1,6 +1,6 @@
# Contributing
Thank you for considering making contributions to Tendermint and related repositories! Start by taking a look at the [coding repo](https://github.com/tendermint/coding) for overall information on repository workflow and standards.
Please follow standard github best practices: fork the repo, branch from the tip of develop, make some commits, and submit a pull request to develop. See the [open issues](https://github.com/tendermint/tendermint/issues) for things we need help with!

DOCKER/Dockerfile

@@ -1,8 +1,8 @@
FROM alpine:3.6
# This is the release of tendermint to pull in.
ENV TM_VERSION 0.11.0
ENV TM_SHA256SUM 7e443bac4d42f12e7beaf9cee63b4a565dad8c58895291fdedde8057088b70c5
# Tendermint will be looking for genesis file in /tendermint (unless you change
# `genesis_file` in config.toml). You can put your config.toml and private

DOCKER/README.md

@@ -1,6 +1,7 @@
# Supported tags and respective `Dockerfile` links
- `0.11.0`, `latest` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/9177cc1f64ca88a4a0243c5d1773d10fba67e201/DOCKER/Dockerfile)
- `0.10.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/e5342f4054ab784b2cd6150e14f01053d7c8deb2/DOCKER/Dockerfile)
- `0.9.1`, `0.9`, [(Dockerfile)](https://github.com/tendermint/tendermint/blob/809e0e8c5933604ba8b2d096803ada7c5ec4dfd3/DOCKER/Dockerfile)
- `0.9.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/d474baeeea6c22b289e7402449572f7c89ee21da/DOCKER/Dockerfile)
- `0.8.0`, `0.8` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/bf64dd21fdb193e54d8addaaaa2ecf7ac371de8c/DOCKER/Dockerfile)
@@ -8,13 +9,24 @@
`develop` tag points to the [develop](https://github.com/tendermint/tendermint/tree/develop) branch.
# Quick reference
* **Where to get help:**
[Chat on Rocket](https://cosmos.rocket.chat/)
* **Where to file issues:**
https://github.com/tendermint/tendermint/issues
* **Supported Docker versions:**
[the latest release](https://github.com/moby/moby/releases) (down to 1.6 on a best-effort basis)
# Tendermint
Tendermint Core is Byzantine Fault Tolerant (BFT) middleware that takes a state transition machine, written in any programming language, and securely replicates it on many machines.
For more background, see the [introduction](https://tendermint.readthedocs.io/en/master/introduction.html).
To get started developing applications, see the [application developers guide](https://tendermint.readthedocs.io/en/master/getting-started.html).
# How to use this image
@@ -31,26 +43,12 @@ docker run -it --rm -v "/tmp:/tendermint" tendermint/tendermint node --proxy_app
If you want to see many containers talking to each other, consider using [mintnet-kubernetes](https://github.com/tendermint/tools/tree/master/mintnet-kubernetes), which is a tool for running Tendermint-based applications on a Kubernetes cluster.
# Supported Docker versions
This image is officially supported on Docker version 1.13.1.
Support for older versions (down to 1.6) is provided on a best-effort basis.
Please see [the Docker installation documentation](https://docs.docker.com/installation/) for details on how to upgrade your Docker daemon.
# License
View [license information](https://raw.githubusercontent.com/tendermint/tendermint/master/LICENSE) for the software contained in this image.
# User Feedback
## Issues
If you have any problems with or questions about this image, please contact us through a [GitHub](https://github.com/tendermint/tendermint/issues) issue. If the issue is related to a CVE, please check for [a `cve-tracker` issue on the `official-images` repository](https://github.com/docker-library/official-images/issues?q=label%3Acve-tracker) first.
You can also reach the image maintainers via [Slack](http://forum.tendermint.com:3000/).
## Contributing
You are invited to contribute new features, fixes, or updates, large or small; we are always thrilled to receive pull requests, and do our best to process them as fast as we can.

Makefile

@@ -1,7 +1,7 @@
GOTOOLS = \
github.com/mitchellh/gox \
github.com/tcnksm/ghr \
github.com/Masterminds/glide \
honnef.co/go/tools/cmd/megacheck
PACKAGES=$(shell go list ./... | grep -v '/vendor/')
BUILD_TAGS?=tendermint
@@ -35,6 +35,9 @@ test_race:
test_integrations:
@bash ./test/test.sh
release:
@go test -tags release $(PACKAGES)
test100:
@for i in {1..100}; do make test; done


@@ -13,22 +13,17 @@ https://camo.githubusercontent.com/915b7be44ada53c290eb157634330494ebe3e30a/6874
[![](https://tokei.rs/b1/github/tendermint/tendermint?category=lines)](https://github.com/tendermint/tendermint)
Branch | Tests | Coverage | Report Card
----------|-------|----------|-------------
develop | [![CircleCI](https://circleci.com/gh/tendermint/tendermint/tree/develop.svg?style=shield)](https://circleci.com/gh/tendermint/tendermint/tree/develop) | [![codecov](https://codecov.io/gh/tendermint/tendermint/branch/develop/graph/badge.svg)](https://codecov.io/gh/tendermint/tendermint) | [![Go Report Card](https://goreportcard.com/badge/github.com/tendermint/tendermint/tree/develop)](https://goreportcard.com/report/github.com/tendermint/tendermint/tree/develop)
master | [![CircleCI](https://circleci.com/gh/tendermint/tendermint/tree/master.svg?style=shield)](https://circleci.com/gh/tendermint/tendermint/tree/master) | [![codecov](https://codecov.io/gh/tendermint/tendermint/branch/master/graph/badge.svg)](https://codecov.io/gh/tendermint/tendermint) | [![Go Report Card](https://goreportcard.com/badge/github.com/tendermint/tendermint/tree/master)](https://goreportcard.com/report/github.com/tendermint/tendermint/tree/master)
Branch | Tests | Coverage
----------|-------|----------
master | [![CircleCI](https://circleci.com/gh/tendermint/tendermint/tree/master.svg?style=shield)](https://circleci.com/gh/tendermint/tendermint/tree/master) | [![codecov](https://codecov.io/gh/tendermint/tendermint/branch/master/graph/badge.svg)](https://codecov.io/gh/tendermint/tendermint)
develop | [![CircleCI](https://circleci.com/gh/tendermint/tendermint/tree/develop.svg?style=shield)](https://circleci.com/gh/tendermint/tendermint/tree/develop) | [![codecov](https://codecov.io/gh/tendermint/tendermint/branch/develop/graph/badge.svg)](https://codecov.io/gh/tendermint/tendermint)
_NOTE: This is alpha software. Please contact us if you intend to run it in production._
Tendermint Core is Byzantine Fault Tolerant (BFT) middleware that takes a state transition machine, written in any programming language,
Tendermint Core is Byzantine Fault Tolerant (BFT) middleware that takes a state transition machine - written in any programming language -
and securely replicates it on many machines.
For more background, see the [introduction](https://tendermint.com/intro).
To get started developing applications, see the [application developers guide](https://tendermint.com/docs/guides/app-development).
### Code of Conduct
Please read, understand and adhere to our [code of conduct](CODE_OF_CONDUCT.md).
For more information, from introduction to install to application development, see [Read The Docs](http://tendermint.readthedocs.io/projects/tools/en/master).
## Install
@@ -38,39 +33,73 @@ To install from source, you should be able to:
`go get -u github.com/tendermint/tendermint/cmd/tendermint`
For more details (or if it fails), see the [install guide](https://tendermint.com/docs/guides/install-from-source).
## Contributing
Yay open source! Please see our [contributing guidelines](CONTRIBUTING.md).
For more details (or if it fails), [read the docs](http://tendermint.readthedocs.io/projects/tools/en/master/install.html).
## Resources
### Tendermint Core
- [Introduction](https://tendermint.com/intro)
- [Docs](https://tendermint.com/docs)
- [Software using Tendermint](https://tendermint.com/ecosystem)
All resources involving the use of, building applications on, or developing for Tendermint can be found at [Read The Docs](http://tendermint.readthedocs.io/projects/tools/en/master). Additional information about some - and eventually all - of the sub-projects below can be found at Read The Docs.
### Sub-projects
* [ABCI](http://github.com/tendermint/abci), the Application Blockchain Interface
* [Go-Wire](http://github.com/tendermint/go-wire), a deterministic serialization library
* [Go-Crypto](http://github.com/tendermint/go-crypto), an elliptic curve cryptography library
* [TmLibs](http://github.com/tendermint/tmlibs), an assortment of Go libraries
* [Merkleeyes](http://github.com/tendermint/merkleeyes), a balanced, binary Merkle tree for ABCI apps
* [TmLibs](http://github.com/tendermint/tmlibs), an assortment of Go libraries used internally
* [IAVL](http://github.com/tendermint/iavl), Merkleized IAVL+ Tree implementation
### Tools
* [Deployment, Benchmarking, and Monitoring](https://github.com/tendermint/tools)
* [Deployment, Benchmarking, and Monitoring](http://tendermint.readthedocs.io/projects/tools/en/develop/index.html#tendermint-tools)
### Applications
* [Ethermint](http://github.com/tendermint/ethermint): Ethereum on Tendermint
* [Basecoin](http://github.com/tendermint/basecoin), a cryptocurrency application framework
* [Ethermint](http://github.com/tendermint/ethermint), Ethereum on Tendermint
* [Cosmos SDK](http://github.com/cosmos/cosmos-sdk), a cryptocurrency application framework
* [Many more](https://tendermint.readthedocs.io/en/master/ecosystem.html#abci-applications)
### More
* [Master's Thesis on Tendermint](https://atrium.lib.uoguelph.ca/xmlui/handle/10214/9769)
* [Original Whitepaper](https://tendermint.com/static/docs/tendermint.pdf)
* [Tendermint Blog](https://blog.cosmos.network/tendermint/home)
* [Cosmos Blog](https://blog.cosmos.network)
* [Original Whitepaper (out-of-date)](http://www.the-blockchain.com/docs/Tendermint%20Consensus%20without%20Mining.pdf)
* [Master's Thesis on Tendermint](https://atrium.lib.uoguelph.ca/xmlui/handle/10214/9769)
## Contributing
Yay open source! Please see our [contributing guidelines](CONTRIBUTING.md).
## Versioning
### SemVer
Tendermint uses [SemVer](http://semver.org/) to determine when and how the version changes.
According to SemVer, anything in the public API can change at any time before version 1.0.0.
To provide some stability to Tendermint users in these 0.X.X days, the MINOR version is used
to signal breaking changes across a subset of the total public API. This subset includes all
interfaces exposed to other processes (cli, rpc, p2p, etc.), as well as parts of the following packages:
- types
- rpc/client
- config
- node
Exported objects in these packages that are not covered by the versioning scheme
are explicitly marked by `// UNSTABLE` in their go doc comment and may change at any time.
Functions, types, and values in any other package may also change at any time.
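For illustration, a hypothetical exported identifier outside this guarantee might be documented as follows; the function name below is made up, and only the `// UNSTABLE` marker convention comes from the text above:

```go
package types // placement here is hypothetical, for illustration only

// ExampleUnstableHelper is a made-up exported function shown only to
// demonstrate the convention: the UNSTABLE marker below excludes it from the
// MINOR-version compatibility guarantees described above, so it may change
// or disappear in any release before 1.0.0.
//
// UNSTABLE
func ExampleUnstableHelper() {}
```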
### Upgrades
In an effort to avoid accumulating technical debt prior to 1.0.0,
we do not guarantee that breaking changes (i.e. bumps in the MINOR version)
will work with existing tendermint blockchains. In these cases you will
have to start a new blockchain, or write something custom to get the old
data into the new chain.
However, any bump in the PATCH version should be compatible with existing histories
(if not please open an [issue](https://github.com/tendermint/tendermint/issues)).
## Code of Conduct
Please read, understand and adhere to our [code of conduct](CODE_OF_CONDUCT.md).

Vagrantfile

@@ -17,11 +17,11 @@ Vagrant.configure("2") do |config|
usermod -a -G docker vagrant
apt-get autoremove -y
curl -O https://storage.googleapis.com/golang/go1.8.linux-amd64.tar.gz
tar -xvf go1.8.linux-amd64.tar.gz
curl -O https://storage.googleapis.com/golang/go1.9.linux-amd64.tar.gz
tar -xvf go1.9.linux-amd64.tar.gz
rm -rf /usr/local/go
mv go /usr/local
rm -f go1.8.linux-amd64.tar.gz
rm -f go1.9.linux-amd64.tar.gz
mkdir -p /home/vagrant/go/bin
echo 'export PATH=$PATH:/usr/local/go/bin:/home/vagrant/go/bin' >> /home/vagrant/.bash_profile
echo 'export GOPATH=/home/vagrant/go' >> /home/vagrant/.bash_profile

benchmarks/blockchain/.gitignore

@@ -0,0 +1,2 @@
data


@@ -0,0 +1,80 @@
#!/bin/bash
DATA=$GOPATH/src/github.com/tendermint/tendermint/benchmarks/blockchain/data
if [ ! -d $DATA ]; then
echo "no data found, generating a chain... (this only has to happen once)"
tendermint init --home $DATA
cp $DATA/config.toml $DATA/config2.toml
echo "
[consensus]
timeout_commit = 0
" >> $DATA/config.toml
echo "starting node"
tendermint node \
--home $DATA \
--proxy_app dummy \
--p2p.laddr tcp://127.0.0.1:56656 \
--rpc.laddr tcp://127.0.0.1:56657 \
--log_level error &
echo "making blocks for 60s"
sleep 60
mv $DATA/config2.toml $DATA/config.toml
kill %1
echo "done generating chain."
fi
# validator node
HOME1=$TMPDIR$RANDOM$RANDOM
cp -R $DATA $HOME1
echo "starting validator node"
tendermint node \
--home $HOME1 \
--proxy_app dummy \
--p2p.laddr tcp://127.0.0.1:56656 \
--rpc.laddr tcp://127.0.0.1:56657 \
--log_level error &
sleep 1
# downloader node
HOME2=$TMPDIR$RANDOM$RANDOM
tendermint init --home $HOME2
cp $HOME1/genesis.json $HOME2
printf "starting downloader node"
tendermint node \
--home $HOME2 \
--proxy_app dummy \
--p2p.laddr tcp://127.0.0.1:56666 \
--rpc.laddr tcp://127.0.0.1:56667 \
--p2p.seeds 127.0.0.1:56656 \
--log_level error &
# wait for node to start up so we only count time where we are actually syncing
sleep 0.5
while curl localhost:56667/status 2> /dev/null | grep "\"latest_block_height\": 0," > /dev/null
do
printf '.'
sleep 0.2
done
echo
echo "syncing blockchain for 10s"
for i in {1..10}
do
sleep 1
HEIGHT="$(curl localhost:56667/status 2> /dev/null \
| grep 'latest_block_height' \
| grep -o ' [0-9]*' \
| xargs)"
let 'RATE = HEIGHT / i'
echo "height: $HEIGHT, blocks/sec: $RATE"
done
kill %1
kill %2
rm -rf $HOME1 $HOME2


@@ -1,8 +1,9 @@
package benchmarks
import (
. "github.com/tendermint/tmlibs/common"
"testing"
cmn "github.com/tendermint/tmlibs/common"
)
func BenchmarkSomething(b *testing.B) {
@@ -11,11 +12,11 @@ func BenchmarkSomething(b *testing.B) {
numChecks := 100000
keys := make([]string, numItems)
for i := 0; i < numItems; i++ {
keys[i] = RandStr(100)
keys[i] = cmn.RandStr(100)
}
txs := make([]string, numChecks)
for i := 0; i < numChecks; i++ {
txs[i] = RandStr(100)
txs[i] = cmn.RandStr(100)
}
b.StartTimer()


@@ -4,7 +4,7 @@ import (
"os"
"testing"
. "github.com/tendermint/tmlibs/common"
cmn "github.com/tendermint/tmlibs/common"
)
func BenchmarkFileWrite(b *testing.B) {
@@ -14,7 +14,7 @@ func BenchmarkFileWrite(b *testing.B) {
if err != nil {
b.Error(err)
}
testString := RandStr(200) + "\n"
testString := cmn.RandStr(200) + "\n"
b.StartTimer()
for i := 0; i < b.N; i++ {


@@ -3,9 +3,8 @@ package main
import (
"context"
"encoding/binary"
"time"
//"encoding/hex"
"fmt"
"time"
rpcclient "github.com/tendermint/tendermint/rpc/lib/client"
cmn "github.com/tendermint/tmlibs/common"
@@ -22,7 +21,7 @@ func main() {
// Read a bunch of responses
go func() {
for {
_, ok := <-wsc.ResultsCh
_, ok := <-wsc.ResponsesCh
if !ok {
break
}


@@ -6,16 +6,30 @@ import (
"time"
"github.com/tendermint/tendermint/types"
. "github.com/tendermint/tmlibs/common"
cmn "github.com/tendermint/tmlibs/common"
flow "github.com/tendermint/tmlibs/flowrate"
"github.com/tendermint/tmlibs/log"
)
/*
eg, L = latency = 0.1s
P = num peers = 10
FN = num full nodes
BS = 1kB block size
CB = 1 Mbit/s = 128 kB/s
CB/P = 12.8 kB/s
B/S = CB/P/BS = 12.8 blocks/s
12.8 * 0.1 = 1.28 blocks on conn
*/
const (
requestIntervalMS = 250
maxTotalRequesters = 300
requestIntervalMS = 100
maxTotalRequesters = 1000
maxPendingRequests = maxTotalRequesters
maxPendingRequestsPerPeer = 75
maxPendingRequestsPerPeer = 50
minRecvRate = 10240 // 10Kb/s
)
@@ -33,7 +47,7 @@ var peerTimeoutSeconds = time.Duration(15) // not const so we can override with
*/
type BlockPool struct {
BaseService
cmn.BaseService
startTime time.Time
mtx sync.Mutex
@@ -42,7 +56,8 @@ type BlockPool struct {
height int // the lowest key in requesters.
numPending int32 // number of requests pending assignment or block response
// peers
peers map[string]*bpPeer
peers map[string]*bpPeer
maxPeerHeight int
requestsCh chan<- BlockRequest
timeoutsCh chan<- string
@@ -59,7 +74,7 @@ func NewBlockPool(start int, requestsCh chan<- BlockRequest, timeoutsCh chan<- s
requestsCh: requestsCh,
timeoutsCh: timeoutsCh,
}
bp.BaseService = *NewBaseService(nil, "BlockPool", bp)
bp.BaseService = *cmn.NewBaseService(nil, "BlockPool", bp)
return bp
}
@@ -69,16 +84,16 @@ func (pool *BlockPool) OnStart() error {
return nil
}
func (pool *BlockPool) OnStop() {
pool.BaseService.OnStop()
}
func (pool *BlockPool) OnStop() {}
// Run spawns requesters as needed.
func (pool *BlockPool) makeRequestersRoutine() {
for {
if !pool.IsRunning() {
break
}
_, numPending, lenRequesters := pool.GetStatus()
if numPending >= maxPendingRequests {
// sleep for a bit.
@@ -135,16 +150,10 @@ func (pool *BlockPool) IsCaughtUp() bool {
return false
}
maxPeerHeight := 0
for _, peer := range pool.peers {
maxPeerHeight = MaxInt(maxPeerHeight, peer.height)
}
// some conditions to determine if we're caught up
receivedBlockOrTimedOut := (pool.height > 0 || time.Since(pool.startTime) > 5*time.Second)
ourChainIsLongestAmongPeers := maxPeerHeight == 0 || pool.height >= maxPeerHeight
ourChainIsLongestAmongPeers := pool.maxPeerHeight == 0 || pool.height >= pool.maxPeerHeight
isCaughtUp := receivedBlockOrTimedOut && ourChainIsLongestAmongPeers
pool.Logger.Info(Fmt("IsCaughtUp: %v", isCaughtUp), "height", pool.height, "maxPeerHeight", maxPeerHeight)
return isCaughtUp
}
@@ -180,7 +189,7 @@ func (pool *BlockPool) PopRequest() {
delete(pool.requesters, pool.height)
pool.height++
} else {
PanicSanity(Fmt("Expected requester to pop, got nothing at height %v", pool.height))
cmn.PanicSanity(cmn.Fmt("Expected requester to pop, got nothing at height %v", pool.height))
}
}
@@ -188,15 +197,16 @@ func (pool *BlockPool) PopRequest() {
// Remove the peer and redo request from others.
func (pool *BlockPool) RedoRequest(height int) {
pool.mtx.Lock()
defer pool.mtx.Unlock()
request := pool.requesters[height]
pool.mtx.Unlock()
if request.block == nil {
PanicSanity("Expected block to be non-nil")
cmn.PanicSanity("Expected block to be non-nil")
}
// RemovePeer will redo all requesters associated with this peer.
// TODO: record this malfeasance
pool.RemovePeer(request.peerID)
pool.removePeer(request.peerID)
}
// TODO: ensure that blocks come in order for each peer.
@@ -206,18 +216,29 @@ func (pool *BlockPool) AddBlock(peerID string, block *types.Block, blockSize int
requester := pool.requesters[block.Height]
if requester == nil {
// a block we didn't expect.
// TODO: if height is too far ahead, punish peer
return
}
if requester.setBlock(block, peerID) {
pool.numPending--
peer := pool.peers[peerID]
peer.decrPending(blockSize)
if peer != nil {
peer.decrPending(blockSize)
}
} else {
// Bad peer?
}
}
// MaxPeerHeight returns the highest height reported by a peer
func (pool *BlockPool) MaxPeerHeight() int {
pool.mtx.Lock()
defer pool.mtx.Unlock()
return pool.maxPeerHeight
}
// Sets the peer's alleged blockchain height.
func (pool *BlockPool) SetPeerHeight(peerID string, height int) {
pool.mtx.Lock()
@@ -231,6 +252,10 @@ func (pool *BlockPool) SetPeerHeight(peerID string, height int) {
peer.setLogger(pool.Logger.With("peer", peerID))
pool.peers[peerID] = peer
}
if height > pool.maxPeerHeight {
pool.maxPeerHeight = height
}
}
func (pool *BlockPool) RemovePeer(peerID string) {
@@ -281,7 +306,7 @@ func (pool *BlockPool) makeNextRequester() {
nextHeight := pool.height + len(pool.requesters)
request := newBPRequester(pool, nextHeight)
request.SetLogger(pool.Logger.With("height", nextHeight))
// request.SetLogger(pool.Logger.With("height", nextHeight))
pool.requesters[nextHeight] = request
pool.numPending++
@@ -311,10 +336,10 @@ func (pool *BlockPool) debug() string {
str := ""
for h := pool.height; h < pool.height+len(pool.requesters); h++ {
if pool.requesters[h] == nil {
str += Fmt("H(%v):X ", h)
str += cmn.Fmt("H(%v):X ", h)
} else {
str += Fmt("H(%v):", h)
str += Fmt("B?(%v) ", pool.requesters[h].block != nil)
str += cmn.Fmt("H(%v):", h)
str += cmn.Fmt("B?(%v) ", pool.requesters[h].block != nil)
}
}
return str
@@ -352,7 +377,7 @@ func (peer *bpPeer) setLogger(l log.Logger) {
func (peer *bpPeer) resetMonitor() {
peer.recvMonitor = flow.New(time.Second, time.Second*40)
var initialValue = float64(minRecvRate) * math.E
initialValue := float64(minRecvRate) * math.E
peer.recvMonitor.SetREMA(initialValue)
}
@@ -394,7 +419,7 @@ func (peer *bpPeer) onTimeout() {
//-------------------------------------
type bpRequester struct {
BaseService
cmn.BaseService
pool *BlockPool
height int
gotBlockCh chan struct{}
@@ -415,7 +440,7 @@ func newBPRequester(pool *BlockPool, height int) *bpRequester {
peerID: "",
block: nil,
}
bpr.BaseService = *NewBaseService(nil, "bpRequester", bpr)
bpr.BaseService = *cmn.NewBaseService(nil, "bpRequester", bpr)
return bpr
}


@@ -6,7 +6,7 @@ import (
"time"
"github.com/tendermint/tendermint/types"
. "github.com/tendermint/tmlibs/common"
cmn "github.com/tendermint/tmlibs/common"
"github.com/tendermint/tmlibs/log"
)
@@ -22,7 +22,7 @@ type testPeer struct {
func makePeers(numPeers int, minHeight, maxHeight int) map[string]testPeer {
peers := make(map[string]testPeer, numPeers)
for i := 0; i < numPeers; i++ {
peerID := RandStr(12)
peerID := cmn.RandStr(12)
height := minHeight + rand.Intn(maxHeight-minHeight)
peers[peerID] = testPeer{peerID, height}
}


@@ -12,14 +12,15 @@ import (
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
cmn "github.com/tendermint/tmlibs/common"
"github.com/tendermint/tmlibs/log"
)
const (
// BlockchainChannel is a channel for blocks and status updates (`BlockStore` height)
BlockchainChannel = byte(0x40)
defaultChannelCapacity = 100
trySyncIntervalMS = 100
defaultChannelCapacity = 1000
trySyncIntervalMS = 50
// stop syncing when last block's time is
// within this much of the system time.
// stopSyncingDurationMinutes = 10
@@ -28,13 +29,12 @@ const (
statusUpdateIntervalSeconds = 10
// check if we should switch to consensus reactor
switchToConsensusIntervalSeconds = 1
maxBlockchainResponseSize = types.MaxBlockSize + 2
)
type consensusReactor interface {
// for when we switch from blockchain reactor and fast sync to
// the consensus machine
SwitchToConsensus(*sm.State)
SwitchToConsensus(*sm.State, int)
}
// BlockchainReactor handles long-term catchup syncing.
@@ -80,7 +80,13 @@ func NewBlockchainReactor(state *sm.State, proxyAppConn proxy.AppConnConsensus,
return bcR
}
// OnStart implements BaseService
// SetLogger implements cmn.Service by setting the logger on reactor and pool.
func (bcR *BlockchainReactor) SetLogger(l log.Logger) {
bcR.BaseService.Logger = l
bcR.pool.Logger = l
}
// OnStart implements cmn.Service.
func (bcR *BlockchainReactor) OnStart() error {
bcR.BaseReactor.OnStart()
if bcR.fastSync {
@@ -93,7 +99,7 @@ func (bcR *BlockchainReactor) OnStart() error {
return nil
}
// OnStop implements BaseService
// OnStop implements cmn.Service.
func (bcR *BlockchainReactor) OnStop() {
bcR.BaseReactor.OnStop()
bcR.pool.Stop()
@@ -104,27 +110,45 @@ func (bcR *BlockchainReactor) GetChannels() []*p2p.ChannelDescriptor {
return []*p2p.ChannelDescriptor{
&p2p.ChannelDescriptor{
ID: BlockchainChannel,
Priority: 5,
SendQueueCapacity: 100,
Priority: 10,
SendQueueCapacity: 1000,
},
}
}
// AddPeer implements Reactor by sending our state to peer.
func (bcR *BlockchainReactor) AddPeer(peer *p2p.Peer) {
func (bcR *BlockchainReactor) AddPeer(peer p2p.Peer) {
if !peer.Send(BlockchainChannel, struct{ BlockchainMessage }{&bcStatusResponseMessage{bcR.store.Height()}}) {
// doing nothing, will try later in `poolRoutine`
}
}
// RemovePeer implements Reactor by removing peer from the pool.
func (bcR *BlockchainReactor) RemovePeer(peer *p2p.Peer, reason interface{}) {
bcR.pool.RemovePeer(peer.Key)
func (bcR *BlockchainReactor) RemovePeer(peer p2p.Peer, reason interface{}) {
bcR.pool.RemovePeer(peer.Key())
}
// respondToPeer loads a block and sends it to the requesting peer,
// if we have it. Otherwise, we'll respond saying we don't have it.
// According to the Tendermint spec, if all nodes are honest,
// no node should be requesting a block that doesn't exist.
func (bcR *BlockchainReactor) respondToPeer(msg *bcBlockRequestMessage, src p2p.Peer) (queued bool) {
block := bcR.store.LoadBlock(msg.Height)
if block != nil {
msg := &bcBlockResponseMessage{Block: block}
return src.TrySend(BlockchainChannel, struct{ BlockchainMessage }{msg})
}
bcR.Logger.Info("Peer asking for a block we don't have", "src", src, "height", msg.Height)
return src.TrySend(BlockchainChannel, struct{ BlockchainMessage }{
&bcNoBlockResponseMessage{Height: msg.Height},
})
}
// Receive implements Reactor by handling 4 types of messages (look below).
func (bcR *BlockchainReactor) Receive(chID byte, src *p2p.Peer, msgBytes []byte) {
_, msg, err := DecodeMessage(msgBytes)
func (bcR *BlockchainReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) {
_, msg, err := DecodeMessage(msgBytes, bcR.maxMsgSize())
if err != nil {
bcR.Logger.Error("Error decoding message", "err", err)
return
@@ -135,20 +159,12 @@ func (bcR *BlockchainReactor) Receive(chID byte, src *p2p.Peer, msgBytes []byte)
// TODO: improve logic to satisfy megacheck
switch msg := msg.(type) {
case *bcBlockRequestMessage:
// Got a request for a block. Respond with block if we have it.
block := bcR.store.LoadBlock(msg.Height)
if block != nil {
msg := &bcBlockResponseMessage{Block: block}
queued := src.TrySend(BlockchainChannel, struct{ BlockchainMessage }{msg})
if !queued {
// queue is full, just ignore.
}
} else {
// TODO peer is asking for things we don't have.
if queued := bcR.respondToPeer(msg, src); !queued {
// Unfortunately not queued since the queue is full.
}
case *bcBlockResponseMessage:
// Got a block.
bcR.pool.AddBlock(src.Key, msg.Block, len(msgBytes))
bcR.pool.AddBlock(src.Key(), msg.Block, len(msgBytes))
case *bcStatusRequestMessage:
// Send peer our state.
queued := src.TrySend(BlockchainChannel, struct{ BlockchainMessage }{&bcStatusResponseMessage{bcR.store.Height()}})
@@ -157,12 +173,18 @@ func (bcR *BlockchainReactor) Receive(chID byte, src *p2p.Peer, msgBytes []byte)
}
case *bcStatusResponseMessage:
// Got a peer status. Unverified.
bcR.pool.SetPeerHeight(src.Key, msg.Height)
bcR.pool.SetPeerHeight(src.Key(), msg.Height)
default:
bcR.Logger.Error(cmn.Fmt("Unknown message type %v", reflect.TypeOf(msg)))
}
}
// maxMsgSize returns the maximum allowable size of a
// message on the blockchain reactor.
func (bcR *BlockchainReactor) maxMsgSize() int {
return bcR.state.Params.BlockSizeParams.MaxBytes + 2
}
// Handle messages from the poolReactor telling the reactor what to do.
// NOTE: Don't sleep in the FOR_LOOP or otherwise slow it down!
// (Except for the SYNC_LOOP, which is the primary purpose and must be synchronous.)
@@ -172,6 +194,13 @@ func (bcR *BlockchainReactor) poolRoutine() {
statusUpdateTicker := time.NewTicker(statusUpdateIntervalSeconds * time.Second)
switchToConsensusTicker := time.NewTicker(switchToConsensusIntervalSeconds * time.Second)
blocksSynced := 0
chainID := bcR.state.ChainID
lastHundred := time.Now()
lastRate := 0.0
FOR_LOOP:
for {
select {
@@ -197,16 +226,16 @@ FOR_LOOP:
// ask for status updates
go bcR.BroadcastStatusRequest()
case <-switchToConsensusTicker.C:
height, numPending, _ := bcR.pool.GetStatus()
height, numPending, lenRequesters := bcR.pool.GetStatus()
outbound, inbound, _ := bcR.Switch.NumPeers()
bcR.Logger.Info("Consensus ticker", "numPending", numPending, "total", len(bcR.pool.requesters),
bcR.Logger.Debug("Consensus ticker", "numPending", numPending, "total", lenRequesters,
"outbound", outbound, "inbound", inbound)
if bcR.pool.IsCaughtUp() {
bcR.Logger.Info("Time to switch to consensus reactor!", "height", height)
bcR.pool.Stop()
conR := bcR.Switch.Reactor("CONSENSUS").(consensusReactor)
conR.SwitchToConsensus(bcR.state)
conR.SwitchToConsensus(bcR.state, blocksSynced)
break FOR_LOOP
}
@@ -221,16 +250,16 @@ FOR_LOOP:
// We need both to sync the first block.
break SYNC_LOOP
}
firstParts := first.MakePartSet(types.DefaultBlockPartSize)
firstParts := first.MakePartSet(bcR.state.Params.BlockPartSizeBytes)
firstPartsHeader := firstParts.Header()
// Finally, verify the first block using the second's commit
// NOTE: we can probably make this more efficient, but note that calling
// first.Hash() doesn't verify the tx contents, so MakePartSet() is
// currently necessary.
err := bcR.state.Validators.VerifyCommit(
bcR.state.ChainID, types.BlockID{first.Hash(), firstPartsHeader}, first.Height, second.LastCommit)
chainID, types.BlockID{first.Hash(), firstPartsHeader}, first.Height, second.LastCommit)
if err != nil {
bcR.Logger.Info("error in validation", "err", err)
bcR.Logger.Error("Error in validation", "err", err)
bcR.pool.RedoRequest(first.Height)
break SYNC_LOOP
} else {
@@ -247,6 +276,14 @@ FOR_LOOP:
// TODO This is bad, are we zombie?
cmn.PanicQ(cmn.Fmt("Failed to process committed block (%d:%X): %v", first.Height, first.Hash(), err))
}
blocksSynced += 1
if blocksSynced%100 == 0 {
lastRate = 0.9*lastRate + 0.1*(100/time.Since(lastHundred).Seconds())
bcR.Logger.Info("Fast Sync Rate", "height", bcR.pool.height,
"max_peer_height", bcR.pool.MaxPeerHeight(), "blocks/s", lastRate)
lastHundred = time.Now()
}
}
}
continue FOR_LOOP
@@ -271,10 +308,11 @@ func (bcR *BlockchainReactor) SetEventSwitch(evsw types.EventSwitch) {
// Messages
const (
msgTypeBlockRequest = byte(0x10)
msgTypeBlockResponse = byte(0x11)
msgTypeStatusResponse = byte(0x20)
msgTypeStatusRequest = byte(0x21)
msgTypeBlockRequest = byte(0x10)
msgTypeBlockResponse = byte(0x11)
msgTypeNoBlockResponse = byte(0x12)
msgTypeStatusResponse = byte(0x20)
msgTypeStatusRequest = byte(0x21)
)
// BlockchainMessage is a generic message for this reactor.
@@ -284,17 +322,18 @@ var _ = wire.RegisterInterface(
struct{ BlockchainMessage }{},
wire.ConcreteType{&bcBlockRequestMessage{}, msgTypeBlockRequest},
wire.ConcreteType{&bcBlockResponseMessage{}, msgTypeBlockResponse},
wire.ConcreteType{&bcNoBlockResponseMessage{}, msgTypeNoBlockResponse},
wire.ConcreteType{&bcStatusResponseMessage{}, msgTypeStatusResponse},
wire.ConcreteType{&bcStatusRequestMessage{}, msgTypeStatusRequest},
)
// DecodeMessage decodes BlockchainMessage.
// TODO: ensure that bz is completely read.
func DecodeMessage(bz []byte) (msgType byte, msg BlockchainMessage, err error) {
func DecodeMessage(bz []byte, maxSize int) (msgType byte, msg BlockchainMessage, err error) {
msgType = bz[0]
n := int(0)
r := bytes.NewReader(bz)
msg = wire.ReadBinary(struct{ BlockchainMessage }{}, r, maxBlockchainResponseSize, &n, &err).(struct{ BlockchainMessage }).BlockchainMessage
msg = wire.ReadBinary(struct{ BlockchainMessage }{}, r, maxSize, &n, &err).(struct{ BlockchainMessage }).BlockchainMessage
if err != nil && n != len(bz) {
err = errors.New("DecodeMessage() had bytes left over")
}
@@ -311,6 +350,14 @@ func (m *bcBlockRequestMessage) String() string {
return cmn.Fmt("[bcBlockRequestMessage %v]", m.Height)
}
type bcNoBlockResponseMessage struct {
Height int
}
func (brm *bcNoBlockResponseMessage) String() string {
return cmn.Fmt("[bcNoBlockResponseMessage %d]", brm.Height)
}
//-------------------------------------
// NOTE: keep up-to-date with maxBlockchainResponseSize

blockchain/reactor_test.go

@@ -0,0 +1,151 @@
package blockchain
import (
"testing"
wire "github.com/tendermint/go-wire"
cmn "github.com/tendermint/tmlibs/common"
dbm "github.com/tendermint/tmlibs/db"
"github.com/tendermint/tmlibs/log"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/p2p"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
)
func newBlockchainReactor(maxBlockHeight int) *BlockchainReactor {
logger := log.TestingLogger()
config := cfg.ResetTestRoot("blockchain_reactor_test")
blockStore := NewBlockStore(dbm.NewMemDB())
// Get State
state, _ := sm.GetState(dbm.NewMemDB(), config.GenesisFile())
state.SetLogger(logger.With("module", "state"))
state.Save()
// Make the blockchainReactor itself
fastSync := true
bcReactor := NewBlockchainReactor(state.Copy(), nil, blockStore, fastSync)
bcReactor.SetLogger(logger.With("module", "blockchain"))
// Next: we need to set a switch in order for peers to be added in
bcReactor.Switch = p2p.NewSwitch(cfg.DefaultP2PConfig())
// Lastly: let's add some blocks in
for blockHeight := 1; blockHeight <= maxBlockHeight; blockHeight++ {
firstBlock := makeBlock(blockHeight, state)
secondBlock := makeBlock(blockHeight+1, state)
firstParts := firstBlock.MakePartSet(state.Params.BlockGossipParams.BlockPartSizeBytes)
blockStore.SaveBlock(firstBlock, firstParts, secondBlock.LastCommit)
}
return bcReactor
}
func TestNoBlockMessageResponse(t *testing.T) {
maxBlockHeight := 20
bcr := newBlockchainReactor(maxBlockHeight)
bcr.Start()
defer bcr.Stop()
// Add some peers in
peer := newbcrTestPeer(cmn.RandStr(12))
bcr.AddPeer(peer)
chID := byte(0x01)
tests := []struct {
height int
existent bool
}{
{maxBlockHeight + 2, false},
{10, true},
{1, true},
{100, false},
}
for _, tt := range tests {
reqBlockMsg := &bcBlockRequestMessage{tt.height}
reqBlockBytes := wire.BinaryBytes(struct{ BlockchainMessage }{reqBlockMsg})
bcr.Receive(chID, peer, reqBlockBytes)
value := peer.lastValue()
msg := value.(struct{ BlockchainMessage }).BlockchainMessage
if tt.existent {
if blockMsg, ok := msg.(*bcBlockResponseMessage); !ok {
t.Fatalf("Expected to receive a block response for height %d", tt.height)
} else if blockMsg.Block.Height != tt.height {
t.Fatalf("Expected response to be for height %d, got %d", tt.height, blockMsg.Block.Height)
}
} else {
if noBlockMsg, ok := msg.(*bcNoBlockResponseMessage); !ok {
t.Fatalf("Expected to receive a no block response for height %d", tt.height)
} else if noBlockMsg.Height != tt.height {
t.Fatalf("Expected response to be for height %d, got %d", tt.height, noBlockMsg.Height)
}
}
}
}
//----------------------------------------------
// utility funcs
func makeTxs(blockNumber int) (txs []types.Tx) {
for i := 0; i < 10; i++ {
txs = append(txs, types.Tx([]byte{byte(blockNumber), byte(i)}))
}
return txs
}
func makeBlock(blockNumber int, state *sm.State) *types.Block {
prevHash := state.LastBlockID.Hash
prevParts := types.PartSetHeader{}
valHash := state.Validators.Hash()
prevBlockID := types.BlockID{prevHash, prevParts}
block, _ := types.MakeBlock(blockNumber, "test_chain", makeTxs(blockNumber),
new(types.Commit), prevBlockID, valHash, state.AppHash, state.Params.BlockGossipParams.BlockPartSizeBytes)
return block
}
// The Test peer
type bcrTestPeer struct {
cmn.Service
key string
ch chan interface{}
}
var _ p2p.Peer = (*bcrTestPeer)(nil)
func newbcrTestPeer(key string) *bcrTestPeer {
return &bcrTestPeer{
Service: cmn.NewBaseService(nil, "bcrTestPeer", nil),
key: key,
ch: make(chan interface{}, 2),
}
}
func (tp *bcrTestPeer) lastValue() interface{} { return <-tp.ch }
func (tp *bcrTestPeer) TrySend(chID byte, value interface{}) bool {
if _, ok := value.(struct{ BlockchainMessage }).BlockchainMessage.(*bcStatusResponseMessage); ok {
// Discard status response messages since they skew our results
// We only want to deal with:
// + bcBlockResponseMessage
// + bcNoBlockResponseMessage
} else {
tp.ch <- value
}
return true
}
func (tp *bcrTestPeer) Send(chID byte, data interface{}) bool { return tp.TrySend(chID, data) }
func (tp *bcrTestPeer) NodeInfo() *p2p.NodeInfo { return nil }
func (tp *bcrTestPeer) Status() p2p.ConnectionStatus { return p2p.ConnectionStatus{} }
func (tp *bcrTestPeer) Key() string { return tp.key }
func (tp *bcrTestPeer) IsOutbound() bool { return false }
func (tp *bcrTestPeer) IsPersistent() bool { return true }
func (tp *bcrTestPeer) Get(s string) interface{} { return s }
func (tp *bcrTestPeer) Set(string, interface{}) {}


@@ -7,10 +7,10 @@ import (
"io"
"sync"
wire "github.com/tendermint/go-wire"
"github.com/tendermint/tendermint/types"
. "github.com/tendermint/tmlibs/common"
dbm "github.com/tendermint/tmlibs/db"
"github.com/tendermint/go-wire"
"github.com/tendermint/tendermint/types"
)
/*
@@ -25,7 +25,8 @@ Currently the precommit signatures are duplicated in the Block parts as
well as the Commit. In the future this may change, perhaps by moving
the Commit data outside the Block.
Panics indicate probable corruption in the data
// NOTE: BlockStore methods will panic if they encounter errors
// deserializing loaded data, indicating probable corruption on disk.
*/
type BlockStore struct {
db dbm.DB


@@ -0,0 +1,25 @@
package client_test
import (
"os"
"testing"
"github.com/tendermint/abci/example/dummy"
nm "github.com/tendermint/tendermint/node"
rpctest "github.com/tendermint/tendermint/rpc/test"
)
var node *nm.Node
func TestMain(m *testing.M) {
// start a tendermint node (and merkleeyes) in the background to test against
app := dummy.NewDummyApplication()
node = rpctest.StartTendermint(app)
code := m.Run()
// and shut down proper at the end
node.Stop()
node.Wait()
os.Exit(code)
}


@@ -0,0 +1,133 @@
/*
Package client defines a provider that uses an rpcclient
to get information, which is used to get new headers
and validators directly from a node.
*/
package client
import (
"bytes"
rpcclient "github.com/tendermint/tendermint/rpc/client"
ctypes "github.com/tendermint/tendermint/rpc/core/types"
"github.com/tendermint/tendermint/types"
"github.com/tendermint/tendermint/certifiers"
certerr "github.com/tendermint/tendermint/certifiers/errors"
)
type SignStatusClient interface {
rpcclient.SignClient
rpcclient.StatusClient
}
type provider struct {
node SignStatusClient
lastHeight int
}
// NewProvider can wrap any rpcclient to expose it as
// a read-only provider.
func NewProvider(node SignStatusClient) certifiers.Provider {
return &provider{node: node}
}
// NewHTTPProvider connects to a tendermint json-rpc endpoint
// at the given url, and uses that as a read-only provider.
func NewHTTPProvider(remote string) certifiers.Provider {
return &provider{
node: rpcclient.NewHTTP(remote, "/websocket"),
}
}
// StoreCommit is a noop, as clients can only read from the chain...
func (p *provider) StoreCommit(_ certifiers.FullCommit) error { return nil }
// GetByHash gets the most recent validator set and sees if it matches
//
// TODO: improve when the rpc interface supports more functionality
func (p *provider) GetByHash(hash []byte) (certifiers.FullCommit, error) {
var fc certifiers.FullCommit
vals, err := p.node.Validators(nil)
// if we get no validators, or a different height, return an error
if err != nil {
return fc, err
}
p.updateHeight(vals.BlockHeight)
vhash := types.NewValidatorSet(vals.Validators).Hash()
if !bytes.Equal(hash, vhash) {
return fc, certerr.ErrCommitNotFound()
}
return p.seedFromVals(vals)
}
// GetByHeight gets the validator set by height
func (p *provider) GetByHeight(h int) (fc certifiers.FullCommit, err error) {
commit, err := p.node.Commit(&h)
if err != nil {
return fc, err
}
return p.seedFromCommit(commit)
}
func (p *provider) LatestCommit() (fc certifiers.FullCommit, err error) {
commit, err := p.GetLatestCommit()
if err != nil {
return fc, err
}
return p.seedFromCommit(commit)
}
// GetLatestCommit should return the most recent commit there is,
// which handles queries for future heights as per the semantics
// of GetByHeight.
func (p *provider) GetLatestCommit() (*ctypes.ResultCommit, error) {
status, err := p.node.Status()
if err != nil {
return nil, err
}
return p.node.Commit(&status.LatestBlockHeight)
}
func CommitFromResult(result *ctypes.ResultCommit) certifiers.Commit {
return (certifiers.Commit)(result.SignedHeader)
}
func (p *provider) seedFromVals(vals *ctypes.ResultValidators) (certifiers.FullCommit, error) {
// now get the commits and build a full commit
commit, err := p.node.Commit(&vals.BlockHeight)
if err != nil {
return certifiers.FullCommit{}, err
}
fc := certifiers.NewFullCommit(
CommitFromResult(commit),
types.NewValidatorSet(vals.Validators),
)
return fc, nil
}
func (p *provider) seedFromCommit(commit *ctypes.ResultCommit) (fc certifiers.FullCommit, err error) {
fc.Commit = CommitFromResult(commit)
// now get the proper validators
vals, err := p.node.Validators(&commit.Header.Height)
if err != nil {
return fc, err
}
// make sure they match the commit (as we cannot enforce height)
vset := types.NewValidatorSet(vals.Validators)
if !bytes.Equal(vset.Hash(), commit.Header.ValidatorsHash) {
return fc, certerr.ErrValidatorsChanged()
}
p.updateHeight(commit.Header.Height)
fc.Validators = vset
return fc, nil
}
func (p *provider) updateHeight(h int) {
if h > p.lastHeight {
p.lastHeight = h
}
}


@@ -0,0 +1,62 @@
package client_test
import (
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
rpctest "github.com/tendermint/tendermint/rpc/test"
"github.com/tendermint/tendermint/certifiers"
"github.com/tendermint/tendermint/certifiers/client"
certerr "github.com/tendermint/tendermint/certifiers/errors"
)
func TestProvider(t *testing.T) {
assert, require := assert.New(t), require.New(t)
cfg := rpctest.GetConfig()
rpcAddr := cfg.RPC.ListenAddress
chainID := cfg.ChainID
p := client.NewHTTPProvider(rpcAddr)
require.NotNil(t, p)
// let it produce some blocks
time.Sleep(500 * time.Millisecond)
// let's get the highest block
seed, err := p.LatestCommit()
require.Nil(err, "%+v", err)
sh := seed.Height()
vhash := seed.Header.ValidatorsHash
assert.True(sh < 5000)
// let's check this is valid somehow
assert.Nil(seed.ValidateBasic(chainID))
cert := certifiers.NewStatic(chainID, seed.Validators)
// historical queries now work :)
lower := sh - 5
seed, err = p.GetByHeight(lower)
assert.Nil(err, "%+v", err)
assert.Equal(lower, seed.Height())
// also get by hash (given the match)
seed, err = p.GetByHash(vhash)
require.Nil(err, "%+v", err)
require.Equal(vhash, seed.Header.ValidatorsHash)
err = cert.Certify(seed.Commit)
assert.Nil(err, "%+v", err)
// get by hash fails without match
seed, err = p.GetByHash([]byte("foobar"))
assert.NotNil(err)
assert.True(certerr.IsCommitNotFoundErr(err))
// storing the seed is silently ignored
err = p.StoreCommit(seed)
assert.Nil(err, "%+v", err)
}

certifiers/commit.go

@@ -0,0 +1,96 @@
package certifiers
import (
"bytes"
"github.com/pkg/errors"
"github.com/tendermint/tendermint/types"
certerr "github.com/tendermint/tendermint/certifiers/errors"
)
// Certifier checks the votes to make sure the block really is signed properly.
// Certifier must know the current set of validators by some other means.
type Certifier interface {
Certify(check Commit) error
ChainID() string
}
// Commit is basically the rpc /commit response, but extended
//
// This is the basepoint for proving anything on the blockchain. It contains
// a signed header. If the signatures are valid and > 2/3 of the known set,
// we can store this checkpoint and use it to prove any number of aspects of
// the system: such as txs, abci state, validator sets, etc...
type Commit types.SignedHeader
// FullCommit is a commit and the actual validator set,
// the base info you need to update to a given point,
// assuming knowledge of some previous validator set
type FullCommit struct {
Commit `json:"commit"`
Validators *types.ValidatorSet `json:"validator_set"`
}
func NewFullCommit(commit Commit, vals *types.ValidatorSet) FullCommit {
return FullCommit{
Commit: commit,
Validators: vals,
}
}
func (c Commit) Height() int {
if c.Header == nil {
return 0
}
return c.Header.Height
}
func (c Commit) ValidatorsHash() []byte {
if c.Header == nil {
return nil
}
return c.Header.ValidatorsHash
}
// ValidateBasic does basic consistency checks and makes sure the headers
// and commits are all consistent and refer to our chain.
//
// Make sure to use a Verifier to validate the signatures actually provide
// a sufficiently strong proof for this header's validity.
func (c Commit) ValidateBasic(chainID string) error {
// make sure the header is reasonable
if c.Header == nil {
return errors.New("Commit missing header")
}
if c.Header.ChainID != chainID {
return errors.Errorf("Header belongs to another chain '%s' not '%s'",
c.Header.ChainID, chainID)
}
if c.Commit == nil {
return errors.New("Commit missing signatures")
}
// make sure the header and commit match (height and hash)
if c.Commit.Height() != c.Header.Height {
return certerr.ErrHeightMismatch(c.Commit.Height(), c.Header.Height)
}
hhash := c.Header.Hash()
chash := c.Commit.BlockID.Hash
if !bytes.Equal(hhash, chash) {
return errors.Errorf("Commits sign block %X header is block %X",
chash, hhash)
}
// make sure the commit is reasonable
err := c.Commit.ValidateBasic()
if err != nil {
return errors.WithStack(err)
}
// looks good, we just need to make sure the signatures are really from
// empowered validators
return nil
}
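A minimal sketch of how a caller might use the types above, assuming it already holds a trusted `*types.ValidatorSet` and a `Commit` fetched from some provider; the function below is illustrative, not part of the diff:

```go
package example

import (
	"github.com/tendermint/tendermint/certifiers"
	"github.com/tendermint/tendermint/types"
)

// verifyCommit runs the basic consistency checks on a Commit and then
// verifies its signatures against a validator set we already trust,
// using the Static certifier. Inputs are assumed to come from the caller.
func verifyCommit(chainID string, trusted *types.ValidatorSet, commit certifiers.Commit) error {
	if err := commit.ValidateBasic(chainID); err != nil {
		return err
	}
	return certifiers.NewStatic(chainID, trusted).Certify(commit)
}
```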

certifiers/doc.go

@@ -0,0 +1,133 @@
/*
Package certifiers allows you to securely validate headers
without a full node.
This library pulls together all the crypto and algorithms,
so given a relatively recent (< unbonding period) known
validator set, one can get indisputable proof that data is in
the chain (current state) or detect if the node is lying to
the client.
Tendermint RPC exposes a lot of info, but a malicious node
could return any data it wants to queries, or even to block
headers, even making up fake signatures from non-existent
validators to justify it. This is a lot of logic to get
right, so it is contained in a small, easy-to-use library
that does it for you, letting you just build a nice UI.
We design for clients who have no strong trust relationship
with any tendermint node, just the validator set as a whole.
Beyond building nice mobile or desktop applications, the
cosmos hub is another important example of a client
that needs undeniable proof without syncing the full chain,
in order to efficiently implement IBC.
Commits
There are two main data structures that we pass around - Commit
and FullCommit. Both of them mirror what information is
exposed in tendermint rpc.
Commit is a block header along with enough validator signatures
to prove its validity (> 2/3 of the voting power). A FullCommit
is a Commit along with the full validator set. When the
validator set doesn't change, the Commit is enough, but since
the block header only has a hash, we need the FullCommit to
follow any changes to the validator set.
Certifiers
A Certifier validates a new Commit given the currently known
state. There are three different types of Certifiers exposed,
each one building on the last one, with additional complexity.
Static - given the validator set upon initialization. Verifies
all signatures against that set and if the validator set
changes, it will reject all headers.
Dynamic - This wraps Static and has the same Certify
method. However, it adds an Update method, which can be called
with a FullCommit when the validator set changes. If it can
prove this is a valid transition, it will update the validator
set.
Inquiring - this wraps Dynamic and implements an auto-update
strategy on top of the Dynamic update. If a call to
Certify fails as the validator set has changed, then it
attempts to find a FullCommit and Update to that header.
To get these FullCommits, it makes use of a Provider.
Providers
A Provider allows us to store and retrieve the FullCommits,
to provide memory to the Inquiring Certifier.
NewMemStoreProvider - in-memory cache.
files.NewProvider - disk backed storage.
client.NewHTTPProvider - query tendermint rpc.
NewCacheProvider - combine multiple providers.
The suggested use for local light clients is
client.NewHTTPProvider for getting new data (Source),
and NewCacheProvider(NewMemStoreProvider(),
files.NewProvider()) to store confirmed headers (Trusted)
How We Track Validators
Unless you want to blindly trust the node you talk with, you
need to trace every response back to a hash in a block header
and validate the commit signatures of that block header match
the proper validator set. If there is a constant validator
set, you store it locally upon initialization of the client,
and check against that every time.
Once there is a dynamic validator set, the issue of
verifying a block becomes a bit more tricky. There is
background information in a
github issue (https://github.com/tendermint/tendermint/issues/377).
In short, if there is a block at height H with a known
(trusted) validator set V, and another block at height H'
(H' > H) with validator set V' != V, then we want a way to
safely update it.
First, get the new (unconfirmed) validator set V' and
verify H' is internally consistent and properly signed by
this V'. Assuming it is a valid block, we check that at
least 2/3 of the validators in V also signed it, meaning
it would also be valid under our old assumptions.
That should be enough, but we can also check that the
V counts for at least 2/3 of the total votes in H'
for extra safety (we can have a discussion if this is
strictly required). If we can verify all this,
then we can accept H' and V' as valid and use that to
validate all blocks X > H'.
If we cannot update directly from H -> H' because there was
too much change to the validator set, then we can look for
some Hm (H < Hm < H') with a validator set Vm. Then we try
to update H -> Hm and Hm -> H' in two separate steps.
If one of these steps doesn't work, then we continue
bisecting, until we eventually have to externally
validate the validator set changes at every block.
Since we never trust any server in this protocol, only the
signatures themselves, it doesn't matter if the seed comes
from a (possibly malicious) node or a (possibly malicious) user.
We can accept it or reject it based only on our trusted
validator set and cryptographic proofs. This makes it
extremely important to verify that you have the proper
validator set when initializing the client, as that is the
root of all trust.
Of course, this assumes that the known block is within the
unbonding period to avoid the "nothing at stake" problem.
If you haven't seen the state in a few months, you will need
to manually verify the new validator set hash using off-chain
means (the same as getting the initial hash).
*/
package certifiers
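The provider wiring suggested in the package doc above could look roughly like this sketch; the exact argument lists of `NewCacheProvider` and `NewMemStoreProvider` are assumptions taken from the doc text, while `client.NewHTTPProvider` and `files.NewProvider` appear elsewhere in this changeset:

```go
package example

import (
	"github.com/tendermint/tendermint/certifiers"
	"github.com/tendermint/tendermint/certifiers/client"
	"github.com/tendermint/tendermint/certifiers/files"
)

// buildProviders wires up the stack the package doc recommends: an RPC-backed
// source for fetching new FullCommits, and a memory+disk cache for the ones
// we have already verified. Constructor signatures for the cache and mem
// store providers are assumed from the doc text above.
func buildProviders(rpcAddr, dir string) (source, trusted certifiers.Provider) {
	source = client.NewHTTPProvider(rpcAddr)
	trusted = certifiers.NewCacheProvider(
		certifiers.NewMemStoreProvider(),
		files.NewProvider(dir),
	)
	return source, trusted
}
```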

certifiers/dynamic.go

@@ -0,0 +1,89 @@
package certifiers
import (
"github.com/tendermint/tendermint/types"
certerr "github.com/tendermint/tendermint/certifiers/errors"
)
var _ Certifier = &Dynamic{}
// Dynamic uses a Static for Certify, but adds an
// Update method to allow for a change of validators.
//
// You can pass in a FullCommit with another validator set,
// and if this is a provably secure transition (< 1/3 change,
// sufficient signatures), then it will update the
// validator set for the next Certify call.
// For security, it will only follow validator set changes
// going forward.
type Dynamic struct {
cert *Static
lastHeight int
}
func NewDynamic(chainID string, vals *types.ValidatorSet, height int) *Dynamic {
return &Dynamic{
cert: NewStatic(chainID, vals),
lastHeight: height,
}
}
func (c *Dynamic) ChainID() string {
return c.cert.ChainID()
}
func (c *Dynamic) Validators() *types.ValidatorSet {
return c.cert.vSet
}
func (c *Dynamic) Hash() []byte {
return c.cert.Hash()
}
func (c *Dynamic) LastHeight() int {
return c.lastHeight
}
// Certify checks the commit using the underlying Static certifier and records the height if it is valid.
func (c *Dynamic) Certify(check Commit) error {
err := c.cert.Certify(check)
if err == nil {
// update last seen height if input is valid
c.lastHeight = check.Height()
}
return err
}
// Update will verify if this is a valid change and update
// the certifying validator set if safe to do so.
//
// Returns an error if update is impossible (invalid proof or IsTooMuchChangeErr)
func (c *Dynamic) Update(fc FullCommit) error {
// ignore all checkpoints in the past -> only to the future
h := fc.Height()
if h <= c.lastHeight {
return certerr.ErrPastTime()
}
// first, verify if the input is self-consistent....
err := fc.ValidateBasic(c.ChainID())
if err != nil {
return err
}
// now, make sure not too much change... meaning this commit
// would be approved by the currently known validator set
// as well as the new set
commit := fc.Commit.Commit
err = c.Validators().VerifyCommitAny(fc.Validators, c.ChainID(),
commit.BlockID, h, commit)
if err != nil {
return certerr.ErrTooMuchChange()
}
// looks good, we can update
c.cert = NewStatic(c.ChainID(), fc.Validators)
c.lastHeight = h
return nil
}

certifiers/dynamic_test.go

@@ -0,0 +1,130 @@
package certifiers_test
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/types"
"github.com/tendermint/tendermint/certifiers"
"github.com/tendermint/tendermint/certifiers/errors"
)
// TestDynamicCert just makes sure it still works like StaticCert
func TestDynamicCert(t *testing.T) {
// assert, require := assert.New(t), require.New(t)
assert := assert.New(t)
// require := require.New(t)
keys := certifiers.GenValKeys(4)
// 20, 30, 40, 50 - the first 3 don't have 2/3, the last 3 do!
vals := keys.ToValidators(20, 10)
// and a certifier based on our known set
chainID := "test-dyno"
cert := certifiers.NewDynamic(chainID, vals, 0)
cases := []struct {
keys certifiers.ValKeys
vals *types.ValidatorSet
height int
first, last int // who actually signs
proper bool // true -> expect no error
changed bool // true -> expect validator change error
}{
// perfect, signed by everyone
{keys, vals, 1, 0, len(keys), true, false},
// skip little guy is okay
{keys, vals, 2, 1, len(keys), true, false},
// but not the big guy
{keys, vals, 3, 0, len(keys) - 1, false, false},
// even changing the power a little bit breaks the static validator
// the sigs are enough, but the validator hash is unknown
{keys, keys.ToValidators(20, 11), 4, 0, len(keys), false, true},
}
for _, tc := range cases {
check := tc.keys.GenCommit(chainID, tc.height, nil, tc.vals,
[]byte("bar"), tc.first, tc.last)
err := cert.Certify(check)
if tc.proper {
assert.Nil(err, "%+v", err)
assert.Equal(cert.LastHeight(), tc.height)
} else {
assert.NotNil(err)
if tc.changed {
assert.True(errors.IsValidatorsChangedErr(err), "%+v", err)
}
}
}
}
// TestDynamicUpdate makes sure we update safely and sanely
func TestDynamicUpdate(t *testing.T) {
assert, require := assert.New(t), require.New(t)
chainID := "test-dyno-up"
keys := certifiers.GenValKeys(5)
vals := keys.ToValidators(20, 0)
cert := certifiers.NewDynamic(chainID, vals, 40)
// one valid block to give us a sense of time
h := 100
good := keys.GenCommit(chainID, h, nil, vals, []byte("foo"), 0, len(keys))
err := cert.Certify(good)
require.Nil(err, "%+v", err)
// some new sets to try later
keys2 := keys.Extend(2)
keys3 := keys2.Extend(4)
// we try to update with some blocks
cases := []struct {
keys certifiers.ValKeys
vals *types.ValidatorSet
height int
first, last int // who actually signs
proper bool // true -> expect no error
changed bool // true -> expect too much change error
}{
// same validator set, well signed, of course it is okay
{keys, vals, h + 10, 0, len(keys), true, false},
// same validator set, poorly signed, fails
{keys, vals, h + 20, 2, len(keys), false, false},
// shift the power a little, works if properly signed
{keys, keys.ToValidators(10, 0), h + 30, 1, len(keys), true, false},
// but not on a poor signature
{keys, keys.ToValidators(10, 0), h + 40, 2, len(keys), false, false},
// and not if it was in the past
{keys, keys.ToValidators(10, 0), h + 25, 0, len(keys), false, false},
// let's try to adjust to a whole new validator set (we have 5/7 of the votes)
{keys2, keys2.ToValidators(10, 0), h + 33, 0, len(keys2), true, false},
// properly signed but too much change, not allowed (only 7/11 validators known)
{keys3, keys3.ToValidators(10, 0), h + 50, 0, len(keys3), false, true},
}
for _, tc := range cases {
fc := tc.keys.GenFullCommit(chainID, tc.height, nil, tc.vals,
[]byte("bar"), tc.first, tc.last)
err := cert.Update(fc)
if tc.proper {
assert.Nil(err, "%d: %+v", tc.height, err)
// we update last seen height
assert.Equal(cert.LastHeight(), tc.height)
// and we update the proper validators
assert.EqualValues(fc.Header.ValidatorsHash, cert.Hash())
} else {
assert.NotNil(err, "%d", tc.height)
// we don't update the height
assert.NotEqual(cert.LastHeight(), tc.height)
if tc.changed {
assert.True(errors.IsTooMuchChangeErr(err),
"%d: %+v", tc.height, err)
}
}
}
}
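To tie `Dynamic.Update` to the bisection strategy described in the package doc, a rough sketch follows; it assumes `certifiers.Provider` exposes `GetByHeight` (as the file and RPC providers in this changeset do) and illustrates the idea rather than the actual Inquiring certifier:

```go
package example

import (
	"github.com/tendermint/tendermint/certifiers"
	certerr "github.com/tendermint/tendermint/certifiers/errors"
)

// updateToHeight tries to move the Dynamic certifier straight to height h
// using a FullCommit from the provider. If the validator set changed too much
// for a single step, it bisects: update to an intermediate height first, then
// retry the target height.
func updateToHeight(cert *certifiers.Dynamic, p certifiers.Provider, h int) error {
	fc, err := p.GetByHeight(h)
	if err != nil {
		return err
	}
	err = cert.Update(fc)
	if certerr.IsTooMuchChangeErr(err) {
		mid := (cert.LastHeight() + h) / 2
		if mid == cert.LastHeight() || mid == h {
			return err // cannot bisect any further
		}
		if err := updateToHeight(cert, p, mid); err != nil {
			return err
		}
		return updateToHeight(cert, p, h)
	}
	return err
}
```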


@@ -0,0 +1,86 @@
package errors
import (
"fmt"
"github.com/pkg/errors"
)
var (
errValidatorsChanged = fmt.Errorf("Validators differ between header and certifier")
errCommitNotFound = fmt.Errorf("Commit not found by provider")
errTooMuchChange = fmt.Errorf("Validators change too much to safely update")
errPastTime = fmt.Errorf("Update older than certifier height")
errNoPathFound = fmt.Errorf("Cannot find a path of validators")
)
// IsCommitNotFoundErr checks whether an error is due to missing data
func IsCommitNotFoundErr(err error) bool {
return err != nil && (errors.Cause(err) == errCommitNotFound)
}
func ErrCommitNotFound() error {
return errors.WithStack(errCommitNotFound)
}
// IsValidatorsChangedErr checks whether an error is due
// to a differing validator set
func IsValidatorsChangedErr(err error) bool {
return err != nil && (errors.Cause(err) == errValidatorsChanged)
}
func ErrValidatorsChanged() error {
return errors.WithStack(errValidatorsChanged)
}
// IsTooMuchChangeErr checks whether an error is due to too much change
// between these validators sets
func IsTooMuchChangeErr(err error) bool {
return err != nil && (errors.Cause(err) == errTooMuchChange)
}
func ErrTooMuchChange() error {
return errors.WithStack(errTooMuchChange)
}
func IsPastTimeErr(err error) bool {
return err != nil && (errors.Cause(err) == errPastTime)
}
func ErrPastTime() error {
return errors.WithStack(errPastTime)
}
// IsNoPathFoundErr checks whether an error is due to no path of
// validators in provider from where we are to where we want to be
func IsNoPathFoundErr(err error) bool {
return err != nil && (errors.Cause(err) == errNoPathFound)
}
func ErrNoPathFound() error {
return errors.WithStack(errNoPathFound)
}
//--------------------------------------------
type errHeightMismatch struct {
h1, h2 int
}
func (e errHeightMismatch) Error() string {
return fmt.Sprintf("Blocks don't match - %d vs %d", e.h1, e.h2)
}
// IsHeightMismatchErr checks whether an error is due to data from different blocks
func IsHeightMismatchErr(err error) bool {
if err == nil {
return false
}
_, ok := errors.Cause(err).(errHeightMismatch)
return ok
}
// ErrHeightMismatch returns a mismatch error with stack-trace
func ErrHeightMismatch(h1, h2 int) error {
return errors.WithStack(errHeightMismatch{h1, h2})
}


@@ -0,0 +1,18 @@
package errors
import (
"errors"
"testing"
"github.com/stretchr/testify/assert"
)
func TestErrorHeight(t *testing.T) {
e1 := ErrHeightMismatch(2, 3)
e1.Error()
assert.True(t, IsHeightMismatchErr(e1))
e2 := errors.New("foobar")
assert.False(t, IsHeightMismatchErr(e2))
assert.False(t, IsHeightMismatchErr(nil))
}


@@ -0,0 +1,77 @@
package files
import (
"encoding/json"
"os"
"github.com/pkg/errors"
wire "github.com/tendermint/go-wire"
"github.com/tendermint/tendermint/certifiers"
certerr "github.com/tendermint/tendermint/certifiers/errors"
)
const (
// MaxFullCommitSize is the maximum number of bytes we will
// read in for a full commit to avoid excessive allocations
// in the deserializer
MaxFullCommitSize = 1024 * 1024
)
// SaveFullCommit exports the seed in binary / go-wire style
func SaveFullCommit(fc certifiers.FullCommit, path string) error {
f, err := os.Create(path)
if err != nil {
return errors.WithStack(err)
}
defer f.Close()
var n int
wire.WriteBinary(fc, f, &n, &err)
return errors.WithStack(err)
}
// SaveFullCommitJSON exports the seed in a json format
func SaveFullCommitJSON(fc certifiers.FullCommit, path string) error {
f, err := os.Create(path)
if err != nil {
return errors.WithStack(err)
}
defer f.Close()
stream := json.NewEncoder(f)
err = stream.Encode(fc)
return errors.WithStack(err)
}
func LoadFullCommit(path string) (certifiers.FullCommit, error) {
var fc certifiers.FullCommit
f, err := os.Open(path)
if err != nil {
if os.IsNotExist(err) {
return fc, certerr.ErrCommitNotFound()
}
return fc, errors.WithStack(err)
}
defer f.Close()
var n int
wire.ReadBinaryPtr(&fc, f, MaxFullCommitSize, &n, &err)
return fc, errors.WithStack(err)
}
func LoadFullCommitJSON(path string) (certifiers.FullCommit, error) {
var fc certifiers.FullCommit
f, err := os.Open(path)
if err != nil {
if os.IsNotExist(err) {
return fc, certerr.ErrCommitNotFound()
}
return fc, errors.WithStack(err)
}
defer f.Close()
stream := json.NewDecoder(f)
err = stream.Decode(&fc)
return fc, errors.WithStack(err)
}


@@ -0,0 +1,66 @@
package files
import (
"os"
"path/filepath"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
cmn "github.com/tendermint/tmlibs/common"
"github.com/tendermint/tendermint/certifiers"
)
func tmpFile() string {
suffix := cmn.RandStr(16)
return filepath.Join(os.TempDir(), "fc-test-"+suffix)
}
func TestSerializeFullCommits(t *testing.T) {
assert, require := assert.New(t), require.New(t)
// some constants
appHash := []byte("some crazy thing")
chainID := "ser-ial"
h := 25
// build a fc
keys := certifiers.GenValKeys(5)
vals := keys.ToValidators(10, 0)
fc := keys.GenFullCommit(chainID, h, nil, vals, appHash, 0, 5)
require.Equal(h, fc.Height())
require.Equal(vals.Hash(), fc.ValidatorsHash())
// try read/write with json
jfile := tmpFile()
defer os.Remove(jfile)
jseed, err := LoadFullCommitJSON(jfile)
assert.NotNil(err)
err = SaveFullCommitJSON(fc, jfile)
require.Nil(err)
jseed, err = LoadFullCommitJSON(jfile)
assert.Nil(err, "%+v", err)
assert.Equal(h, jseed.Height())
assert.Equal(vals.Hash(), jseed.ValidatorsHash())
// try read/write with binary
bfile := tmpFile()
defer os.Remove(bfile)
bseed, err := LoadFullCommit(bfile)
assert.NotNil(err)
err = SaveFullCommit(fc, bfile)
require.Nil(err)
bseed, err = LoadFullCommit(bfile)
assert.Nil(err, "%+v", err)
assert.Equal(h, bseed.Height())
assert.Equal(vals.Hash(), bseed.ValidatorsHash())
// make sure each loader rejects the other format
_, err = LoadFullCommit(jfile)
assert.NotNil(err)
_, err = LoadFullCommitJSON(bfile)
assert.NotNil(err)
}


@@ -0,0 +1,134 @@
/*
Package files defines a Provider that stores all data in the filesystem.
We assume the same validator hash may be reused by many different
headers/Commits, and thus store it separately. This leaves us
with three issues:
1. Given a validator hash, retrieve the validator set if previously stored
2. Given a block height, find the Commit with the highest height <= h
3. Given a FullCommit, store it quickly to satisfy 1 and 2
Note that we do not worry about caching, as that can be achieved by
pairing this with a MemStoreProvider and CacheProvider from certifiers
*/
package files
import (
"encoding/hex"
"fmt"
"math"
"os"
"path/filepath"
"sort"
"github.com/pkg/errors"
"github.com/tendermint/tendermint/certifiers"
certerr "github.com/tendermint/tendermint/certifiers/errors"
)
const (
Ext = ".tsd"
ValDir = "validators"
CheckDir = "checkpoints"
dirPerm = os.FileMode(0755)
filePerm = os.FileMode(0644)
)
type provider struct {
valDir string
checkDir string
}
// NewProvider creates the parent dir and subdirs
// for validators and checkpoints as needed
func NewProvider(dir string) certifiers.Provider {
valDir := filepath.Join(dir, ValDir)
checkDir := filepath.Join(dir, CheckDir)
for _, d := range []string{valDir, checkDir} {
err := os.MkdirAll(d, dirPerm)
if err != nil {
panic(err)
}
}
return &provider{valDir: valDir, checkDir: checkDir}
}
func (p *provider) encodeHash(hash []byte) string {
return hex.EncodeToString(hash) + Ext
}
func (p *provider) encodeHeight(h int) string {
// pad up to 10^12 for height...
return fmt.Sprintf("%012d%s", h, Ext)
}
func (p *provider) StoreCommit(fc certifiers.FullCommit) error {
// make sure the fc is self-consistent before saving
err := fc.ValidateBasic(fc.Commit.Header.ChainID)
if err != nil {
return err
}
paths := []string{
filepath.Join(p.checkDir, p.encodeHeight(fc.Height())),
filepath.Join(p.valDir, p.encodeHash(fc.Header.ValidatorsHash)),
}
for _, path := range paths {
err := SaveFullCommit(fc, path)
// unknown error in creating or writing immediately breaks
if err != nil {
return err
}
}
return nil
}
func (p *provider) GetByHeight(h int) (certifiers.FullCommit, error) {
// first we look for exact match, then search...
path := filepath.Join(p.checkDir, p.encodeHeight(h))
fc, err := LoadFullCommit(path)
if certerr.IsCommitNotFoundErr(err) {
path, err = p.searchForHeight(h)
if err == nil {
fc, err = LoadFullCommit(path)
}
}
return fc, err
}
func (p *provider) LatestCommit() (fc certifiers.FullCommit, err error) {
// Note to future: please update by 2077 to avoid rollover
return p.GetByHeight(math.MaxInt32 - 1)
}
// searchForHeight looks for the file with the highest height < h
// and returns certerr.ErrCommitNotFound() if there is none
func (p *provider) searchForHeight(h int) (string, error) {
d, err := os.Open(p.checkDir)
if err != nil {
return "", errors.WithStack(err)
}
files, err := d.Readdirnames(0)
d.Close()
if err != nil {
return "", errors.WithStack(err)
}
desired := p.encodeHeight(h)
sort.Strings(files)
i := sort.SearchStrings(files, desired)
if i == 0 {
return "", certerr.ErrCommitNotFound()
}
found := files[i-1]
path := filepath.Join(p.checkDir, found)
return path, errors.WithStack(err)
}
func (p *provider) GetByHash(hash []byte) (certifiers.FullCommit, error) {
path := filepath.Join(p.valDir, p.encodeHash(hash))
return LoadFullCommit(path)
}
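
A minimal usage sketch (illustrative; the directory path is an assumption) of the filesystem provider added above:

package main

import (
	"fmt"

	certerr "github.com/tendermint/tendermint/certifiers/errors"
	"github.com/tendermint/tendermint/certifiers/files"
)

func main() {
	// The provider creates its validators/ and checkpoints/ subdirs on demand.
	p := files.NewProvider("/tmp/tendermint-light")

	// GetByHeight returns the stored commit with the highest height <= h,
	// or a commit-not-found error while the store is still empty.
	if _, err := p.GetByHeight(100); certerr.IsCommitNotFoundErr(err) {
		fmt.Println("no commit stored at or below height 100 yet")
	}
}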


@@ -0,0 +1,96 @@
package files_test
import (
"bytes"
"errors"
"io/ioutil"
"os"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/certifiers"
certerr "github.com/tendermint/tendermint/certifiers/errors"
"github.com/tendermint/tendermint/certifiers/files"
)
func checkEqual(stored, loaded certifiers.FullCommit, chainID string) error {
err := loaded.ValidateBasic(chainID)
if err != nil {
return err
}
if !bytes.Equal(stored.ValidatorsHash(), loaded.ValidatorsHash()) {
return errors.New("Different block hashes")
}
return nil
}
func TestFileProvider(t *testing.T) {
assert, require := assert.New(t), require.New(t)
dir, err := ioutil.TempDir("", "fileprovider-test")
assert.Nil(err)
defer os.RemoveAll(dir)
p := files.NewProvider(dir)
chainID := "test-files"
appHash := []byte("some-data")
keys := certifiers.GenValKeys(5)
count := 10
// make a bunch of seeds...
seeds := make([]certifiers.FullCommit, count)
for i := 0; i < count; i++ {
// two seeds for each validator, to check how we handle dups
// (10, 0), (10, 1), (10, 1), (10, 2), (10, 2), ...
vals := keys.ToValidators(10, int64(count/2))
h := 20 + 10*i
check := keys.GenCommit(chainID, h, nil, vals, appHash, 0, 5)
seeds[i] = certifiers.NewFullCommit(check, vals)
}
// check provider is empty
seed, err := p.GetByHeight(20)
require.NotNil(err)
assert.True(certerr.IsCommitNotFoundErr(err))
seed, err = p.GetByHash(seeds[3].ValidatorsHash())
require.NotNil(err)
assert.True(certerr.IsCommitNotFoundErr(err))
// now add them all to the provider
for _, s := range seeds {
err = p.StoreCommit(s)
require.Nil(err)
// and make sure we can get it back
s2, err := p.GetByHash(s.ValidatorsHash())
assert.Nil(err)
err = checkEqual(s, s2, chainID)
assert.Nil(err)
// by height as well
s2, err = p.GetByHeight(s.Height())
err = checkEqual(s, s2, chainID)
assert.Nil(err)
}
// make sure we get the last hash if we overstep
seed, err = p.GetByHeight(5000)
if assert.Nil(err, "%+v", err) {
assert.Equal(seeds[count-1].Height(), seed.Height())
err = checkEqual(seeds[count-1], seed, chainID)
assert.Nil(err)
}
// and middle ones as well
seed, err = p.GetByHeight(47)
if assert.Nil(err, "%+v", err) {
// we only step by 10, so 40 must be the one below this
assert.Equal(40, seed.Height())
}
// and proper error for too low
_, err = p.GetByHeight(5)
assert.NotNil(err)
assert.True(certerr.IsCommitNotFoundErr(err))
}

certifiers/helper.go (new file, 147 lines)

@@ -0,0 +1,147 @@
package certifiers
import (
"time"
crypto "github.com/tendermint/go-crypto"
"github.com/tendermint/tendermint/types"
)
// ValKeys is a helper for testing.
//
// It lets us simulate signing with many keys, either ed25519 or secp256k1.
// The main use case is to create a set, and call GenCommit
// to get a properly signed header for testing.
//
// You can set different weights of validators each time you call
// ToValidators, and can optionally extend the validator set later
// with Extend or ExtendSecp
type ValKeys []crypto.PrivKey
// GenValKeys produces an array of private keys to generate commits
func GenValKeys(n int) ValKeys {
res := make(ValKeys, n)
for i := range res {
res[i] = crypto.GenPrivKeyEd25519().Wrap()
}
return res
}
// Change replaces the key at index i
func (v ValKeys) Change(i int) ValKeys {
res := make(ValKeys, len(v))
copy(res, v)
res[i] = crypto.GenPrivKeyEd25519().Wrap()
return res
}
// Extend adds n more keys (to remove, just take a slice)
func (v ValKeys) Extend(n int) ValKeys {
extra := GenValKeys(n)
return append(v, extra...)
}
// GenSecpValKeys produces an array of secp256k1 private keys to generate commits
func GenSecpValKeys(n int) ValKeys {
res := make(ValKeys, n)
for i := range res {
res[i] = crypto.GenPrivKeySecp256k1().Wrap()
}
return res
}
// ExtendSecp adds n more secp256k1 keys (to remove, just take a slice)
func (v ValKeys) ExtendSecp(n int) ValKeys {
extra := GenSecpValKeys(n)
return append(v, extra...)
}
// ToValidators produces a list of validators from the set of keys
// The first key has weight `init` and it increases by `inc` every step
// so we can have all the same weight, or a simple linear distribution
// (should be enough for testing)
func (v ValKeys) ToValidators(init, inc int64) *types.ValidatorSet {
res := make([]*types.Validator, len(v))
for i, k := range v {
res[i] = types.NewValidator(k.PubKey(), init+int64(i)*inc)
}
return types.NewValidatorSet(res)
}
// signHeader properly signs the header with all keys from first to last exclusive
func (v ValKeys) signHeader(header *types.Header, first, last int) *types.Commit {
votes := make([]*types.Vote, len(v))
// we need this list to keep the ordering...
vset := v.ToValidators(1, 0)
// fill in the votes we want
for i := first; i < last; i++ {
vote := makeVote(header, vset, v[i])
votes[vote.ValidatorIndex] = vote
}
res := &types.Commit{
BlockID: types.BlockID{Hash: header.Hash()},
Precommits: votes,
}
return res
}
func makeVote(header *types.Header, vals *types.ValidatorSet, key crypto.PrivKey) *types.Vote {
addr := key.PubKey().Address()
idx, _ := vals.GetByAddress(addr)
vote := &types.Vote{
ValidatorAddress: addr,
ValidatorIndex: idx,
Height: header.Height,
Round: 1,
Type: types.VoteTypePrecommit,
BlockID: types.BlockID{Hash: header.Hash()},
}
// Sign it
signBytes := types.SignBytes(header.ChainID, vote)
vote.Signature = key.Sign(signBytes)
return vote
}
func genHeader(chainID string, height int, txs types.Txs,
vals *types.ValidatorSet, appHash []byte) *types.Header {
return &types.Header{
ChainID: chainID,
Height: height,
Time: time.Now(),
NumTxs: len(txs),
// LastBlockID
// LastCommitHash
ValidatorsHash: vals.Hash(),
DataHash: txs.Hash(),
AppHash: appHash,
}
}
// GenCommit calls genHeader and signHeader and combines them into a Commit
func (v ValKeys) GenCommit(chainID string, height int, txs types.Txs,
vals *types.ValidatorSet, appHash []byte, first, last int) Commit {
header := genHeader(chainID, height, txs, vals, appHash)
check := Commit{
Header: header,
Commit: v.signHeader(header, first, last),
}
return check
}
// GenFullCommit calls genHeader and signHeader and combines them into a FullCommit
func (v ValKeys) GenFullCommit(chainID string, height int, txs types.Txs,
vals *types.ValidatorSet, appHash []byte, first, last int) FullCommit {
header := genHeader(chainID, height, txs, vals, appHash)
commit := Commit{
Header: header,
Commit: v.signHeader(header, first, last),
}
return NewFullCommit(commit, vals)
}
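
A minimal sketch of the test flow the ValKeys doc comment describes: generate keys, derive a validator set, and produce a signed commit. The chain ID and height here are illustrative.

package main

import (
	"fmt"

	"github.com/tendermint/tendermint/certifiers"
)

func main() {
	keys := certifiers.GenValKeys(4) // four ed25519 test keys
	vals := keys.ToValidators(10, 0) // equal voting power of 10 each

	// All four validators sign the header at height 1.
	commit := keys.GenCommit("example-chain", 1, nil, vals, []byte("app-hash"), 0, len(keys))
	fmt.Println("signed commit at height", commit.Height())
}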

certifiers/inquirer.go (new file, 142 lines)

@@ -0,0 +1,142 @@
package certifiers
import (
"github.com/tendermint/tendermint/types"
certerr "github.com/tendermint/tendermint/certifiers/errors"
)
type Inquiring struct {
cert *Dynamic
// trusted holds only properly validated data, from the local system
trusted Provider
// This is a source of new info, like a node rpc, or other import method
Source Provider
}
func NewInquiring(chainID string, fc FullCommit, trusted Provider, source Provider) *Inquiring {
// store the data in trusted
trusted.StoreCommit(fc)
return &Inquiring{
cert: NewDynamic(chainID, fc.Validators, fc.Height()),
trusted: trusted,
Source: source,
}
}
func (c *Inquiring) ChainID() string {
return c.cert.ChainID()
}
func (c *Inquiring) Validators() *types.ValidatorSet {
return c.cert.cert.vSet
}
func (c *Inquiring) LastHeight() int {
return c.cert.lastHeight
}
// Certify makes sure this checkpoint is valid.
//
// If the validators have changed since the last known time, it looks
// for a path to prove the new validators.
//
// On success, it will store the checkpoint in the store for later viewing
func (c *Inquiring) Certify(commit Commit) error {
err := c.useClosestTrust(commit.Height())
if err != nil {
return err
}
err = c.cert.Certify(commit)
if !certerr.IsValidatorsChangedErr(err) {
return err
}
err = c.updateToHash(commit.Header.ValidatorsHash)
if err != nil {
return err
}
err = c.cert.Certify(commit)
if err != nil {
return err
}
// store the new checkpoint
c.trusted.StoreCommit(
NewFullCommit(commit, c.Validators()))
return nil
}
func (c *Inquiring) Update(fc FullCommit) error {
err := c.useClosestTrust(fc.Height())
if err != nil {
return err
}
err = c.cert.Update(fc)
if err == nil {
c.trusted.StoreCommit(fc)
}
return err
}
func (c *Inquiring) useClosestTrust(h int) error {
closest, err := c.trusted.GetByHeight(h)
if err != nil {
return err
}
// if the best seed is not the one we currently use,
// let's just reset the dynamic validator
if closest.Height() != c.LastHeight() {
c.cert = NewDynamic(c.ChainID(), closest.Validators, closest.Height())
}
return nil
}
// updateToHash fetches the commit matching the validator hash we want to update to;
// on IsTooMuchChangeErr, we try to find a path by binary search over height
func (c *Inquiring) updateToHash(vhash []byte) error {
// try to get the match, and update
fc, err := c.Source.GetByHash(vhash)
if err != nil {
return err
}
err = c.cert.Update(fc)
// handle IsTooMuchChangeErr by using divide and conquer
if certerr.IsTooMuchChangeErr(err) {
err = c.updateToHeight(fc.Height())
}
return err
}
// updateToHeight will use divide-and-conquer to find a path to h
func (c *Inquiring) updateToHeight(h int) error {
// try to update to this height (with checks)
fc, err := c.Source.GetByHeight(h)
if err != nil {
return err
}
start, end := c.LastHeight(), fc.Height()
if end <= start {
return certerr.ErrNoPathFound()
}
err = c.Update(fc)
// we can handle IsTooMuchChangeErr specially
if !certerr.IsTooMuchChangeErr(err) {
return err
}
// try to update to mid
mid := (start + end) / 2
err = c.updateToHeight(mid)
if err != nil {
return err
}
// if we made it to mid, we recurse
return c.updateToHeight(h)
}
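
A minimal sketch (illustrative, using the in-memory providers and test helpers from this changeset) of wiring up the inquiring certifier with a trusted store and an untrusted source:

package main

import (
	"fmt"

	"github.com/tendermint/tendermint/certifiers"
)

func main() {
	trust := certifiers.NewMemStoreProvider()  // locally verified commits
	source := certifiers.NewMemStoreProvider() // untrusted data, e.g. a node RPC

	keys := certifiers.GenValKeys(4)
	vals := keys.ToValidators(10, 0)
	genesis := keys.GenFullCommit("demo-chain", 1, nil, vals, []byte("app"), 0, len(keys))

	cert := certifiers.NewInquiring("demo-chain", genesis, trust, source)

	// A later commit signed by the same validator set certifies directly;
	// had the validators changed, Certify would search `source` for a
	// path of intermediate full commits.
	later := keys.GenCommit("demo-chain", 10, nil, vals, []byte("app"), 0, len(keys))
	if err := cert.Certify(later); err != nil {
		fmt.Println("certify failed:", err)
		return
	}
	fmt.Println("certified commit at height", later.Height())
}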

certifiers/inquirer_test.go (new file, 165 lines)

@@ -0,0 +1,165 @@
package certifiers_test
import (
"fmt"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/certifiers"
)
func TestInquirerValidPath(t *testing.T) {
assert, require := assert.New(t), require.New(t)
trust := certifiers.NewMemStoreProvider()
source := certifiers.NewMemStoreProvider()
// set up the validators to generate test blocks
var vote int64 = 10
keys := certifiers.GenValKeys(5)
vals := keys.ToValidators(vote, 0)
// construct a bunch of commits, each with one more height than the last
chainID := "inquiry-test"
count := 50
commits := make([]certifiers.FullCommit, count)
for i := 0; i < count; i++ {
// extend the keys by 1 each time
keys = keys.Extend(1)
vals = keys.ToValidators(vote, 0)
h := 20 + 10*i
appHash := []byte(fmt.Sprintf("h=%d", h))
commits[i] = keys.GenFullCommit(chainID, h, nil, vals, appHash, 0, len(keys))
}
// initialize a certifier with the initial state
cert := certifiers.NewInquiring(chainID, commits[0], trust, source)
// this should fail validation....
commit := commits[count-1].Commit
err := cert.Certify(commit)
require.NotNil(err)
// adding a few seeds in the middle should be insufficient
for i := 10; i < 13; i++ {
err := source.StoreCommit(commits[i])
require.Nil(err)
}
err = cert.Certify(commit)
assert.NotNil(err)
// with more info, we succeed
for i := 0; i < count; i++ {
err := source.StoreCommit(commits[i])
require.Nil(err)
}
err = cert.Certify(commit)
assert.Nil(err, "%+v", err)
}
func TestInquirerMinimalPath(t *testing.T) {
assert, require := assert.New(t), require.New(t)
trust := certifiers.NewMemStoreProvider()
source := certifiers.NewMemStoreProvider()
// set up the validators to generate test blocks
var vote int64 = 10
keys := certifiers.GenValKeys(5)
vals := keys.ToValidators(vote, 0)
// construct a bunch of commits, each with one more height than the last
chainID := "minimal-path"
count := 12
commits := make([]certifiers.FullCommit, count)
for i := 0; i < count; i++ {
// extend the validators, so we are just below 2/3
keys = keys.Extend(len(keys)/2 - 1)
vals = keys.ToValidators(vote, 0)
h := 5 + 10*i
appHash := []byte(fmt.Sprintf("h=%d", h))
commits[i] = keys.GenFullCommit(chainID, h, nil, vals, appHash, 0, len(keys))
}
// initialize a certifier with the initial state
cert := certifiers.NewInquiring(chainID, commits[0], trust, source)
// this should fail validation....
commit := commits[count-1].Commit
err := cert.Certify(commit)
require.NotNil(err)
// adding a few seeds in the middle should be insufficient
for i := 5; i < 8; i++ {
err := source.StoreCommit(commits[i])
require.Nil(err)
}
err = cert.Certify(commit)
assert.NotNil(err)
// with more info, we succeed
for i := 0; i < count; i++ {
err := source.StoreCommit(commits[i])
require.Nil(err)
}
err = cert.Certify(commit)
assert.Nil(err, "%+v", err)
}
func TestInquirerVerifyHistorical(t *testing.T) {
assert, require := assert.New(t), require.New(t)
trust := certifiers.NewMemStoreProvider()
source := certifiers.NewMemStoreProvider()
// set up the validators to generate test blocks
var vote int64 = 10
keys := certifiers.GenValKeys(5)
vals := keys.ToValidators(vote, 0)
// construct a bunch of commits, each with one more height than the last
chainID := "inquiry-test"
count := 10
commits := make([]certifiers.FullCommit, count)
for i := 0; i < count; i++ {
// extend the keys by 1 each time
keys = keys.Extend(1)
vals = keys.ToValidators(vote, 0)
h := 20 + 10*i
appHash := []byte(fmt.Sprintf("h=%d", h))
commits[i] = keys.GenFullCommit(chainID, h, nil, vals, appHash, 0, len(keys))
}
// initialize a certifier with the initial state
cert := certifiers.NewInquiring(chainID, commits[0], trust, source)
// store a few commits as trust
for _, i := range []int{2, 5} {
trust.StoreCommit(commits[i])
}
// let's see if we can jump forward using trusted commits
err := source.StoreCommit(commits[7])
require.Nil(err, "%+v", err)
check := commits[7].Commit
err = cert.Certify(check)
require.Nil(err, "%+v", err)
assert.Equal(check.Height(), cert.LastHeight())
// add access to all commits via untrusted source
for i := 0; i < count; i++ {
err := source.StoreCommit(commits[i])
require.Nil(err)
}
// try to check an unknown seed in the past
mid := commits[3].Commit
err = cert.Certify(mid)
require.Nil(err, "%+v", err)
assert.Equal(mid.Height(), cert.LastHeight())
// and jump all the way forward again
end := commits[count-1].Commit
err = cert.Certify(end)
require.Nil(err, "%+v", err)
assert.Equal(end.Height(), cert.LastHeight())
}

certifiers/memprovider.go (new file, 78 lines)

@@ -0,0 +1,78 @@
package certifiers
import (
"encoding/hex"
"sort"
certerr "github.com/tendermint/tendermint/certifiers/errors"
)
type memStoreProvider struct {
// byHeight is always sorted by Height... need to support range search (nil, h]
// btree would be more efficient for larger sets
byHeight fullCommits
byHash map[string]FullCommit
}
// fullCommits just exists to allow easy sorting
type fullCommits []FullCommit
func (s fullCommits) Len() int { return len(s) }
func (s fullCommits) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s fullCommits) Less(i, j int) bool {
return s[i].Height() < s[j].Height()
}
func NewMemStoreProvider() Provider {
return &memStoreProvider{
byHeight: fullCommits{},
byHash: map[string]FullCommit{},
}
}
func (m *memStoreProvider) encodeHash(hash []byte) string {
return hex.EncodeToString(hash)
}
func (m *memStoreProvider) StoreCommit(fc FullCommit) error {
// make sure the fc is self-consistent before saving
err := fc.ValidateBasic(fc.Commit.Header.ChainID)
if err != nil {
return err
}
// store the valid fc
key := m.encodeHash(fc.ValidatorsHash())
m.byHash[key] = fc
m.byHeight = append(m.byHeight, fc)
sort.Sort(m.byHeight)
return nil
}
func (m *memStoreProvider) GetByHeight(h int) (FullCommit, error) {
// search from highest to lowest
for i := len(m.byHeight) - 1; i >= 0; i-- {
fc := m.byHeight[i]
if fc.Height() <= h {
return fc, nil
}
}
return FullCommit{}, certerr.ErrCommitNotFound()
}
func (m *memStoreProvider) GetByHash(hash []byte) (FullCommit, error) {
var err error
fc, ok := m.byHash[m.encodeHash(hash)]
if !ok {
err = certerr.ErrCommitNotFound()
}
return fc, err
}
func (m *memStoreProvider) LatestCommit() (FullCommit, error) {
l := len(m.byHeight)
if l == 0 {
return FullCommit{}, certerr.ErrCommitNotFound()
}
return m.byHeight[l-1], nil
}


@@ -0,0 +1,116 @@
package certifiers_test
import (
"fmt"
"testing"
"github.com/tendermint/tendermint/certifiers"
)
func BenchmarkGenCommit20(b *testing.B) {
keys := certifiers.GenValKeys(20)
benchmarkGenCommit(b, keys)
}
func BenchmarkGenCommit100(b *testing.B) {
keys := certifiers.GenValKeys(100)
benchmarkGenCommit(b, keys)
}
func BenchmarkGenCommitSec20(b *testing.B) {
keys := certifiers.GenSecpValKeys(20)
benchmarkGenCommit(b, keys)
}
func BenchmarkGenCommitSec100(b *testing.B) {
keys := certifiers.GenSecpValKeys(100)
benchmarkGenCommit(b, keys)
}
func benchmarkGenCommit(b *testing.B, keys certifiers.ValKeys) {
chainID := fmt.Sprintf("bench-%d", len(keys))
vals := keys.ToValidators(20, 10)
for i := 0; i < b.N; i++ {
h := 1 + i
appHash := []byte(fmt.Sprintf("h=%d", h))
keys.GenCommit(chainID, h, nil, vals, appHash, 0, len(keys))
}
}
// this benchmarks generating one key
func BenchmarkGenValKeys(b *testing.B) {
keys := certifiers.GenValKeys(20)
for i := 0; i < b.N; i++ {
keys = keys.Extend(1)
}
}
// this benchmarks generating one key
func BenchmarkGenSecpValKeys(b *testing.B) {
keys := certifiers.GenSecpValKeys(20)
for i := 0; i < b.N; i++ {
keys = keys.Extend(1)
}
}
func BenchmarkToValidators20(b *testing.B) {
benchmarkToValidators(b, 20)
}
func BenchmarkToValidators100(b *testing.B) {
benchmarkToValidators(b, 100)
}
// this benchmarks constructing the validator set (.PubKey() * nodes)
func benchmarkToValidators(b *testing.B, nodes int) {
keys := certifiers.GenValKeys(nodes)
for i := 1; i <= b.N; i++ {
keys.ToValidators(int64(2*i), int64(i))
}
}
func BenchmarkToValidatorsSec100(b *testing.B) {
benchmarkToValidatorsSec(b, 100)
}
// this benchmarks constructing the validator set (.PubKey() * nodes)
func benchmarkToValidatorsSec(b *testing.B, nodes int) {
keys := certifiers.GenSecpValKeys(nodes)
for i := 1; i <= b.N; i++ {
keys.ToValidators(int64(2*i), int64(i))
}
}
func BenchmarkCertifyCommit20(b *testing.B) {
keys := certifiers.GenValKeys(20)
benchmarkCertifyCommit(b, keys)
}
func BenchmarkCertifyCommit100(b *testing.B) {
keys := certifiers.GenValKeys(100)
benchmarkCertifyCommit(b, keys)
}
func BenchmarkCertifyCommitSec20(b *testing.B) {
keys := certifiers.GenSecpValKeys(20)
benchmarkCertifyCommit(b, keys)
}
func BenchmarkCertifyCommitSec100(b *testing.B) {
keys := certifiers.GenSecpValKeys(100)
benchmarkCertifyCommit(b, keys)
}
func benchmarkCertifyCommit(b *testing.B, keys certifiers.ValKeys) {
chainID := "bench-certify"
vals := keys.ToValidators(20, 10)
cert := certifiers.NewStatic(chainID, vals)
check := keys.GenCommit(chainID, 123, nil, vals, []byte("foo"), 0, len(keys))
for i := 0; i < b.N; i++ {
err := cert.Certify(check)
if err != nil {
panic(err)
}
}
}

certifiers/provider.go (new file, 125 lines)

@@ -0,0 +1,125 @@
package certifiers
import (
certerr "github.com/tendermint/tendermint/certifiers/errors"
)
// Provider is used to get more validators by other means
//
// Examples: MemProvider, files.Provider, client.Provider....
type Provider interface {
// StoreCommit saves a FullCommit after we have verified it,
// so we can query for it later. Important for updating our
// store of trusted commits
StoreCommit(fc FullCommit) error
// GetByHeight returns the closest commit with height <= h
GetByHeight(h int) (FullCommit, error)
// GetByHash returns a commit exactly matching this validator hash
GetByHash(hash []byte) (FullCommit, error)
// LatestCommit returns the newest commit stored
LatestCommit() (FullCommit, error)
}
// cacheProvider allows you to place one or more caches in front of a source
// Provider. It runs through them in order until a match is found.
// So you can keep a local cache, and check with the network if
// no data is there.
type cacheProvider struct {
Providers []Provider
}
func NewCacheProvider(providers ...Provider) Provider {
return cacheProvider{
Providers: providers,
}
}
// StoreCommit tries to add the seed to all providers.
//
// Aborts on first error it encounters (closest provider)
func (c cacheProvider) StoreCommit(fc FullCommit) (err error) {
for _, p := range c.Providers {
err = p.StoreCommit(fc)
if err != nil {
break
}
}
return err
}
/*
GetByHeight should return the closest possible match from all providers.
The Cache is usually organized in order from cheapest call (memory)
to most expensive calls (disk/network). However, since GetByHeight returns
a FullCommit at h' <= h, if the memory has a seed at h-10, but the network would
give us the exact match, a naive "stop at first non-error" would hide
the actual desired results.
Thus, we query each provider in order until we find an exact match
or we have queried them all. If at least one returned a non-error,
then this returns the best match (minimum h-h').
*/
func (c cacheProvider) GetByHeight(h int) (fc FullCommit, err error) {
for _, p := range c.Providers {
var tfc FullCommit
tfc, err = p.GetByHeight(h)
if err == nil {
if tfc.Height() > fc.Height() {
fc = tfc
}
if tfc.Height() == h {
break
}
}
}
// even if the last one had an error, if any was a match, this is good
if fc.Height() > 0 {
err = nil
}
return fc, err
}
func (c cacheProvider) GetByHash(hash []byte) (fc FullCommit, err error) {
for _, p := range c.Providers {
fc, err = p.GetByHash(hash)
if err == nil {
break
}
}
return fc, err
}
func (c cacheProvider) LatestCommit() (fc FullCommit, err error) {
for _, p := range c.Providers {
var tfc FullCommit
tfc, err = p.LatestCommit()
if err == nil && tfc.Height() > fc.Height() {
fc = tfc
}
}
// even if the last one had an error, if any was a match, this is good
if fc.Height() > 0 {
err = nil
}
return fc, err
}
// missingProvider doesn't store anything, always a miss
// Designed as a mock for testing
type missingProvider struct{}
func NewMissingProvider() Provider {
return missingProvider{}
}
func (missingProvider) StoreCommit(_ FullCommit) error { return nil }
func (missingProvider) GetByHeight(_ int) (FullCommit, error) {
return FullCommit{}, certerr.ErrCommitNotFound()
}
func (missingProvider) GetByHash(_ []byte) (FullCommit, error) {
return FullCommit{}, certerr.ErrCommitNotFound()
}
func (missingProvider) LatestCommit() (FullCommit, error) {
return FullCommit{}, certerr.ErrCommitNotFound()
}
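
A minimal sketch of the best-match behaviour described in the GetByHeight comment above, with two in-memory stores standing in for a cheap cache and a slower, fuller provider (heights and chain ID are illustrative):

package main

import (
	"fmt"

	"github.com/tendermint/tendermint/certifiers"
)

func main() {
	mem := certifiers.NewMemStoreProvider()  // cheap front cache
	disk := certifiers.NewMemStoreProvider() // stands in for a slower, fuller store

	keys := certifiers.GenValKeys(4)
	vals := keys.ToValidators(10, 0)
	// The front cache only knows height 30; the second store knows height 40.
	mem.StoreCommit(keys.GenFullCommit("cache-demo", 30, nil, vals, []byte("app"), 0, len(keys)))
	disk.StoreCommit(keys.GenFullCommit("cache-demo", 40, nil, vals, []byte("app"), 0, len(keys)))

	cp := certifiers.NewCacheProvider(mem, disk)

	// Asking for height 45 returns the best match across all providers
	// (40 from the second store), not the first non-error hit at 30.
	fc, _ := cp.GetByHeight(45)
	fmt.Println("best match height:", fc.Height())
}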

certifiers/provider_test.go (new file, 128 lines)

@@ -0,0 +1,128 @@
package certifiers_test
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/certifiers"
"github.com/tendermint/tendermint/certifiers/errors"
)
func TestMemProvider(t *testing.T) {
p := certifiers.NewMemStoreProvider()
checkProvider(t, p, "test-mem", "empty")
}
func TestCacheProvider(t *testing.T) {
p := certifiers.NewCacheProvider(
certifiers.NewMissingProvider(),
certifiers.NewMemStoreProvider(),
certifiers.NewMissingProvider(),
)
checkProvider(t, p, "test-cache", "kjfhekfhkewhgit")
}
func checkProvider(t *testing.T, p certifiers.Provider, chainID, app string) {
assert, require := assert.New(t), require.New(t)
appHash := []byte(app)
keys := certifiers.GenValKeys(5)
count := 10
// make a bunch of commits...
commits := make([]certifiers.FullCommit, count)
for i := 0; i < count; i++ {
// two commits for each validator, to check how we handle dups
// (10, 0), (10, 1), (10, 1), (10, 2), (10, 2), ...
vals := keys.ToValidators(10, int64(count/2))
h := 20 + 10*i
commits[i] = keys.GenFullCommit(chainID, h, nil, vals, appHash, 0, 5)
}
// check provider is empty
fc, err := p.GetByHeight(20)
require.NotNil(err)
assert.True(errors.IsCommitNotFoundErr(err))
fc, err = p.GetByHash(commits[3].ValidatorsHash())
require.NotNil(err)
assert.True(errors.IsCommitNotFoundErr(err))
// now add them all to the provider
for _, s := range commits {
err = p.StoreCommit(s)
require.Nil(err)
// and make sure we can get it back
s2, err := p.GetByHash(s.ValidatorsHash())
assert.Nil(err)
assert.Equal(s, s2)
// by height as well
s2, err = p.GetByHeight(s.Height())
assert.Nil(err)
assert.Equal(s, s2)
}
// make sure we get the last hash if we overstep
fc, err = p.GetByHeight(5000)
if assert.Nil(err) {
assert.Equal(commits[count-1].Height(), fc.Height())
assert.Equal(commits[count-1], fc)
}
// and middle ones as well
fc, err = p.GetByHeight(47)
if assert.Nil(err) {
// we only step by 10, so 40 must be the one below this
assert.Equal(40, fc.Height())
}
}
// checkGetHeight does a GetByHeight and, if it succeeds, stores the result as well
func checkGetHeight(t *testing.T, p certifiers.Provider, ask, expect int) {
fc, err := p.GetByHeight(ask)
require.Nil(t, err, "%+v", err)
if assert.Equal(t, expect, fc.Height()) {
err = p.StoreCommit(fc)
require.Nil(t, err, "%+v", err)
}
}
func TestCacheGetsBestHeight(t *testing.T) {
// assert, require := assert.New(t), require.New(t)
require := require.New(t)
// we will write data to the second level of the cache (p2),
// and see what ends up cached in the first level (p)
p := certifiers.NewMemStoreProvider()
p2 := certifiers.NewMemStoreProvider()
cp := certifiers.NewCacheProvider(p, p2)
chainID := "cache-best-height"
appHash := []byte("01234567")
keys := certifiers.GenValKeys(5)
count := 10
// set a bunch of commits
for i := 0; i < count; i++ {
vals := keys.ToValidators(10, int64(count/2))
h := 10 * (i + 1)
fc := keys.GenFullCommit(chainID, h, nil, vals, appHash, 0, 5)
err := p2.StoreCommit(fc)
require.NoError(err)
}
// let's get a few heights from the cache and set them proper
checkGetHeight(t, cp, 57, 50)
checkGetHeight(t, cp, 33, 30)
// make sure they are set in p as well (but nothing else)
checkGetHeight(t, p, 44, 30)
checkGetHeight(t, p, 50, 50)
checkGetHeight(t, p, 99, 50)
// now, query the cache for a higher value
checkGetHeight(t, p2, 99, 90)
checkGetHeight(t, cp, 99, 90)
}

certifiers/static.go (new file, 66 lines)

@@ -0,0 +1,66 @@
package certifiers
import (
"bytes"
"github.com/pkg/errors"
"github.com/tendermint/tendermint/types"
certerr "github.com/tendermint/tendermint/certifiers/errors"
)
var _ Certifier = &Static{}
// Static assumes a static set of validators, set on
// initialization, and checks against them.
// The signatures on every header are checked for > 2/3 votes
// against the known validator set upon Certify
//
// Good for testing or really simple chains. Building block
// to support real-world functionality.
type Static struct {
chainID string
vSet *types.ValidatorSet
vhash []byte
}
func NewStatic(chainID string, vals *types.ValidatorSet) *Static {
return &Static{
chainID: chainID,
vSet: vals,
}
}
func (c *Static) ChainID() string {
return c.chainID
}
func (c *Static) Validators() *types.ValidatorSet {
return c.vSet
}
func (c *Static) Hash() []byte {
if len(c.vhash) == 0 {
c.vhash = c.vSet.Hash()
}
return c.vhash
}
func (c *Static) Certify(commit Commit) error {
// do basic sanity checks
err := commit.ValidateBasic(c.chainID)
if err != nil {
return err
}
// make sure it has the same validator set we have (static means static)
if !bytes.Equal(c.Hash(), commit.Header.ValidatorsHash) {
return certerr.ErrValidatorsChanged()
}
// then make sure we have the proper signatures for this
err = c.vSet.VerifyCommit(c.chainID, commit.Commit.BlockID,
commit.Header.Height, commit.Commit)
return errors.WithStack(err)
}
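
A minimal sketch of certifying against a static validator set, mirroring the cases in the test below (voting powers 20/30/40/50, so dropping the heaviest signer falls below 2/3):

package main

import (
	"fmt"

	"github.com/tendermint/tendermint/certifiers"
)

func main() {
	keys := certifiers.GenValKeys(4)
	vals := keys.ToValidators(20, 10) // voting powers 20, 30, 40, 50
	cert := certifiers.NewStatic("static-demo", vals)

	// Signed by everyone: well over 2/3 of the power, certifies cleanly.
	good := keys.GenCommit("static-demo", 5, nil, vals, []byte("app"), 0, len(keys))
	fmt.Println("all signers:", cert.Certify(good))

	// Missing the heaviest validator (50 of 140): below 2/3, so it fails.
	short := keys.GenCommit("static-demo", 6, nil, vals, []byte("app"), 0, len(keys)-1)
	fmt.Println("missing heaviest signer:", cert.Certify(short) != nil)
}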

certifiers/static_test.go (new file, 59 lines)

@@ -0,0 +1,59 @@
package certifiers_test
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/tendermint/tendermint/types"
"github.com/tendermint/tendermint/certifiers"
errors "github.com/tendermint/tendermint/certifiers/errors"
)
func TestStaticCert(t *testing.T) {
// assert, require := assert.New(t), require.New(t)
assert := assert.New(t)
// require := require.New(t)
keys := certifiers.GenValKeys(4)
// 20, 30, 40, 50 - the first 3 don't have 2/3, the last 3 do!
vals := keys.ToValidators(20, 10)
// and a certifier based on our known set
chainID := "test-static"
cert := certifiers.NewStatic(chainID, vals)
cases := []struct {
keys certifiers.ValKeys
vals *types.ValidatorSet
height int
first, last int // who actually signs
proper bool // true -> expect no error
changed bool // true -> expect validator change error
}{
// perfect, signed by everyone
{keys, vals, 1, 0, len(keys), true, false},
// skip little guy is okay
{keys, vals, 2, 1, len(keys), true, false},
// but not the big guy
{keys, vals, 3, 0, len(keys) - 1, false, false},
// even changing the power a little bit breaks the static validator
// the sigs are enough, but the validator hash is unknown
{keys, keys.ToValidators(20, 11), 4, 0, len(keys), false, true},
}
for _, tc := range cases {
check := tc.keys.GenCommit(chainID, tc.height, nil, tc.vals,
[]byte("foo"), tc.first, tc.last)
err := cert.Certify(check)
if tc.proper {
assert.Nil(err, "%+v", err)
} else {
assert.NotNil(err)
if tc.changed {
assert.True(errors.IsValidatorsChangedErr(err), "%+v", err)
}
}
}
}


@@ -9,18 +9,16 @@ import (
"github.com/tendermint/tendermint/types"
)
var genValidatorCmd = &cobra.Command{
// GenValidatorCmd allows the generation of a keypair for a
// validator.
var GenValidatorCmd = &cobra.Command{
Use: "gen_validator",
Short: "Generate new validator keypair",
Run: genValidator,
}
func init() {
RootCmd.AddCommand(genValidatorCmd)
}
func genValidator(cmd *cobra.Command, args []string) {
privValidator := types.GenPrivValidator()
privValidator := types.GenPrivValidatorFS("")
privValidatorJSONBytes, _ := json.MarshalIndent(privValidator, "", "\t")
fmt.Printf(`%v
`, string(privValidatorJSONBytes))


@@ -9,21 +9,17 @@ import (
cmn "github.com/tendermint/tmlibs/common"
)
var initFilesCmd = &cobra.Command{
// InitFilesCmd initialises a fresh Tendermint Core instance.
var InitFilesCmd = &cobra.Command{
Use: "init",
Short: "Initialize Tendermint",
Run: initFiles,
}
func init() {
RootCmd.AddCommand(initFilesCmd)
}
func initFiles(cmd *cobra.Command, args []string) {
privValFile := config.PrivValidatorFile()
if _, err := os.Stat(privValFile); os.IsNotExist(err) {
privValidator := types.GenPrivValidator()
privValidator.SetFile(privValFile)
privValidator := types.GenPrivValidatorFS(privValFile)
privValidator.Save()
genFile := config.GenesisFile()
@@ -33,8 +29,8 @@ func initFiles(cmd *cobra.Command, args []string) {
ChainID: cmn.Fmt("test-chain-%v", cmn.RandStr(6)),
}
genDoc.Validators = []types.GenesisValidator{types.GenesisValidator{
PubKey: privValidator.PubKey,
Amount: 10,
PubKey: privValidator.GetPubKey(),
Power: 10,
}}
genDoc.SaveAs(genFile)


@@ -9,16 +9,13 @@ import (
"github.com/tendermint/tendermint/p2p/upnp"
)
var probeUpnpCmd = &cobra.Command{
// ProbeUpnpCmd tests the UPnP functionality.
var ProbeUpnpCmd = &cobra.Command{
Use: "probe_upnp",
Short: "Test UPnP functionality",
RunE: probeUpnp,
}
func init() {
RootCmd.AddCommand(probeUpnpCmd)
}
func probeUpnp(cmd *cobra.Command, args []string) error {
capabilities, err := upnp.Probe(logger)
if err != nil {


@@ -6,7 +6,8 @@ import (
"github.com/tendermint/tendermint/consensus"
)
var replayCmd = &cobra.Command{
// ReplayCmd allows replaying of messages from the WAL.
var ReplayCmd = &cobra.Command{
Use: "replay",
Short: "Replay messages from WAL",
Run: func(cmd *cobra.Command, args []string) {
@@ -14,15 +15,12 @@ var replayCmd = &cobra.Command{
},
}
var replayConsoleCmd = &cobra.Command{
// ReplayConsoleCmd allows replaying of messages from the WAL in a
// console.
var ReplayConsoleCmd = &cobra.Command{
Use: "replay_console",
Short: "Replay messages from WAL in a console",
Run: func(cmd *cobra.Command, args []string) {
consensus.RunReplayFile(config.BaseConfig, config.Consensus, true)
},
}
func init() {
RootCmd.AddCommand(replayCmd)
RootCmd.AddCommand(replayConsoleCmd)
}


@@ -9,21 +9,27 @@ import (
"github.com/tendermint/tmlibs/log"
)
var resetAllCmd = &cobra.Command{
// ResetAllCmd removes the database of this Tendermint core
// instance.
var ResetAllCmd = &cobra.Command{
Use: "unsafe_reset_all",
Short: "(unsafe) Remove all the data and WAL, reset this node's validator",
Run: resetAll,
}
var resetPrivValidatorCmd = &cobra.Command{
// ResetPrivValidatorCmd resets the private validator files.
var ResetPrivValidatorCmd = &cobra.Command{
Use: "unsafe_reset_priv_validator",
Short: "(unsafe) Reset this node's validator",
Run: resetPrivValidator,
}
func init() {
RootCmd.AddCommand(resetAllCmd)
RootCmd.AddCommand(resetPrivValidatorCmd)
// ResetAll removes the data directory and resets the privValidator file.
// Exported so other CLI tools can use it
func ResetAll(dbDir, privValFile string, logger log.Logger) {
resetPrivValidatorFS(privValFile, logger)
os.RemoveAll(dbDir)
logger.Info("Removed all data", "dir", dbDir)
}
// XXX: this is totally unsafe.
@@ -35,26 +41,17 @@ func resetAll(cmd *cobra.Command, args []string) {
// XXX: this is totally unsafe.
// it's only suitable for testnets.
func resetPrivValidator(cmd *cobra.Command, args []string) {
resetPrivValidatorLocal(config.PrivValidatorFile(), logger)
resetPrivValidatorFS(config.PrivValidatorFile(), logger)
}
// Exported so other CLI tools can use it
func ResetAll(dbDir, privValFile string, logger log.Logger) {
resetPrivValidatorLocal(privValFile, logger)
os.RemoveAll(dbDir)
logger.Info("Removed all data", "dir", dbDir)
}
func resetPrivValidatorLocal(privValFile string, logger log.Logger) {
func resetPrivValidatorFS(privValFile string, logger log.Logger) {
// Get PrivValidator
var privValidator *types.PrivValidator
if _, err := os.Stat(privValFile); err == nil {
privValidator = types.LoadPrivValidator(privValFile)
privValidator := types.LoadPrivValidatorFS(privValFile)
privValidator.Reset()
logger.Info("Reset PrivValidator", "file", privValFile)
} else {
privValidator = types.GenPrivValidator()
privValidator.SetFile(privValFile)
privValidator := types.GenPrivValidatorFS(privValFile)
privValidator.Save()
logger.Info("Generated PrivValidator", "file", privValFile)
}


@@ -34,11 +34,12 @@ func ParseConfig() (*cfg.Config, error) {
return conf, err
}
// RootCmd is the root command for Tendermint core.
var RootCmd = &cobra.Command{
Use: "tendermint",
Short: "Tendermint Core (BFT Consensus) in Go",
PersistentPreRunE: func(cmd *cobra.Command, args []string) (err error) {
if cmd.Name() == versionCmd.Name() {
if cmd.Name() == VersionCmd.Name() {
return nil
}
config, err = ParseConfig()


@@ -3,15 +3,15 @@ package commands
import (
"os"
"strconv"
"testing"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tmlibs/cli"
"testing"
)
var (


@@ -2,26 +2,12 @@ package commands
import (
"fmt"
"time"
"github.com/spf13/cobra"
"github.com/tendermint/tendermint/node"
"github.com/tendermint/tendermint/types"
cmn "github.com/tendermint/tmlibs/common"
nm "github.com/tendermint/tendermint/node"
)
var runNodeCmd = &cobra.Command{
Use: "node",
Short: "Run the tendermint node",
RunE: runNode,
}
func init() {
AddNodeFlags(runNodeCmd)
RootCmd.AddCommand(runNodeCmd)
}
// AddNodeFlags exposes some common configuration options on the command-line
// These are exposed for convenience of commands embedding a tendermint node
func AddNodeFlags(cmd *cobra.Command) {
@@ -50,40 +36,32 @@ func AddNodeFlags(cmd *cobra.Command) {
cmd.Flags().Bool("consensus.create_empty_blocks", config.Consensus.CreateEmptyBlocks, "Set this to false to only produce blocks when there are txs or when the AppHash changes")
}
// Users wishing to:
// * Use an external signer for their validators
// * Supply an in-proc abci app
// should import github.com/tendermint/tendermint/node and implement
// their own run_node to call node.NewNode (instead of node.NewNodeDefault)
// with their custom priv validator and/or custom proxy.ClientCreator
func runNode(cmd *cobra.Command, args []string) error {
// NewRunNodeCmd returns the command that allows the CLI to start a
// node. It can be used with a custom PrivValidator and in-process ABCI application.
func NewRunNodeCmd(nodeProvider nm.NodeProvider) *cobra.Command {
cmd := &cobra.Command{
Use: "node",
Short: "Run the tendermint node",
RunE: func(cmd *cobra.Command, args []string) error {
// Create & start node
n, err := nodeProvider(config, logger)
if err != nil {
return fmt.Errorf("Failed to create node: %v", err)
}
// Wait until the genesis doc becomes available
// This is for Mintnet compatibility.
// TODO: If Mintnet gets deprecated or genesis_file is
// always available, remove.
genDocFile := config.GenesisFile()
for !cmn.FileExists(genDocFile) {
logger.Info(cmn.Fmt("Waiting for genesis file %v...", genDocFile))
time.Sleep(time.Second)
if _, err := n.Start(); err != nil {
return fmt.Errorf("Failed to start node: %v", err)
} else {
logger.Info("Started node", "nodeInfo", n.Switch().NodeInfo())
}
// Trap signal, run forever.
n.RunForever()
return nil
},
}
genDoc, err := types.GenesisDocFromFile(genDocFile)
if err != nil {
return err
}
config.ChainID = genDoc.ChainID
// Create & start node
n := node.NewNodeDefault(config, logger.With("module", "node"))
if _, err := n.Start(); err != nil {
return fmt.Errorf("Failed to start node: %v", err)
} else {
logger.Info("Started node", "nodeInfo", n.Switch().NodeInfo())
}
// Trap signal, run forever.
n.RunForever()
return nil
AddNodeFlags(cmd)
return cmd
}


@@ -9,18 +9,15 @@ import (
"github.com/tendermint/tendermint/types"
)
var showValidatorCmd = &cobra.Command{
// ShowValidatorCmd shows this node's validator info.
var ShowValidatorCmd = &cobra.Command{
Use: "show_validator",
Short: "Show this node's validator info",
Run: showValidator,
}
func init() {
RootCmd.AddCommand(showValidatorCmd)
}
func showValidator(cmd *cobra.Command, args []string) {
privValidator := types.LoadOrGenPrivValidator(config.PrivValidatorFile(), logger)
privValidator := types.LoadOrGenPrivValidatorFS(config.PrivValidatorFile())
pubKeyJSONBytes, _ := data.ToJSON(privValidator.PubKey)
fmt.Println(string(pubKeyJSONBytes))
}


@@ -7,16 +7,10 @@ import (
"github.com/spf13/cobra"
cmn "github.com/tendermint/tmlibs/common"
"github.com/tendermint/tendermint/types"
cmn "github.com/tendermint/tmlibs/common"
)
var testnetFilesCmd = &cobra.Command{
Use: "testnet",
Short: "Initialize files for a Tendermint testnet",
Run: testnetFiles,
}
//flags
var (
nValidators int
@@ -24,12 +18,18 @@ var (
)
func init() {
testnetFilesCmd.Flags().IntVar(&nValidators, "n", 4,
TestnetFilesCmd.Flags().IntVar(&nValidators, "n", 4,
"Number of validators to initialize the testnet with")
testnetFilesCmd.Flags().StringVar(&dataDir, "dir", "mytestnet",
TestnetFilesCmd.Flags().StringVar(&dataDir, "dir", "mytestnet",
"Directory to store initialization data for the testnet")
}
RootCmd.AddCommand(testnetFilesCmd)
// TestnetFilesCmd allows initialisation of files for a
// Tendermint testnet.
var TestnetFilesCmd = &cobra.Command{
Use: "testnet",
Short: "Initialize files for a Tendermint testnet",
Run: testnetFiles,
}
func testnetFiles(cmd *cobra.Command, args []string) {
@@ -45,10 +45,10 @@ func testnetFiles(cmd *cobra.Command, args []string) {
}
// Read priv_validator.json to populate vals
privValFile := path.Join(dataDir, mach, "priv_validator.json")
privVal := types.LoadPrivValidator(privValFile)
privVal := types.LoadPrivValidatorFS(privValFile)
genVals[i] = types.GenesisValidator{
PubKey: privVal.PubKey,
Amount: 1,
PubKey: privVal.GetPubKey(),
Power: 1,
Name: mach,
}
}
@@ -87,7 +87,6 @@ func ensurePrivValidator(file string) {
if cmn.FileExists(file) {
return
}
privValidator := types.GenPrivValidator()
privValidator.SetFile(file)
privValidator := types.GenPrivValidatorFS(file)
privValidator.Save()
}


@@ -8,14 +8,11 @@ import (
"github.com/tendermint/tendermint/version"
)
var versionCmd = &cobra.Command{
// VersionCmd shows the version info.
var VersionCmd = &cobra.Command{
Use: "version",
Short: "Show version info",
Run: func(cmd *cobra.Command, args []string) {
fmt.Println(version.Version)
},
}
func init() {
RootCmd.AddCommand(versionCmd)
}


@@ -3,11 +3,39 @@ package main
import (
"os"
"github.com/tendermint/tendermint/cmd/tendermint/commands"
"github.com/tendermint/tmlibs/cli"
cmd "github.com/tendermint/tendermint/cmd/tendermint/commands"
nm "github.com/tendermint/tendermint/node"
)
func main() {
cmd := cli.PrepareBaseCmd(commands.RootCmd, "TM", os.ExpandEnv("$HOME/.tendermint"))
rootCmd := cmd.RootCmd
rootCmd.AddCommand(
cmd.GenValidatorCmd,
cmd.InitFilesCmd,
cmd.ProbeUpnpCmd,
cmd.ReplayCmd,
cmd.ReplayConsoleCmd,
cmd.ResetAllCmd,
cmd.ResetPrivValidatorCmd,
cmd.ShowValidatorCmd,
cmd.TestnetFilesCmd,
cmd.VersionCmd)
// NOTE:
// Users wishing to:
// * Use an external signer for their validators
// * Supply an in-proc abci app
// * Supply a genesis doc file from another source
// * Provide their own DB implementation
// can copy this file and use something other than the
// DefaultNewNode function
nodeFunc := nm.DefaultNewNode
// Create & start node
rootCmd.AddCommand(cmd.NewRunNodeCmd(nodeFunc))
cmd := cli.PrepareBaseCmd(rootCmd, "TM", os.ExpandEnv("$HOME/.tendermint"))
cmd.Execute()
}


@@ -4,8 +4,6 @@ import (
"fmt"
"path/filepath"
"time"
"github.com/tendermint/tendermint/types" // TODO: remove
)
// Config defines the top level configuration for a Tendermint node
@@ -320,10 +318,6 @@ type ConsensusConfig struct {
CreateEmptyBlocks bool `mapstructure:"create_empty_blocks"`
CreateEmptyBlocksInterval int `mapstructure:"create_empty_blocks_interval"`
// TODO: This probably shouldn't be exposed but it makes it
// easy to write tests for the wal/replay
BlockPartSize int `mapstructure:"block_part_size"`
// Reactor sleep duration parameters are in ms
PeerGossipSleepDuration int `mapstructure:"peer_gossip_sleep_duration"`
PeerQueryMaj23SleepDuration int `mapstructure:"peer_query_maj23_sleep_duration"`
@@ -386,7 +380,6 @@ func DefaultConsensusConfig() *ConsensusConfig {
MaxBlockSizeBytes: 1, // TODO
CreateEmptyBlocks: true,
CreateEmptyBlocksInterval: 0,
BlockPartSize: types.DefaultBlockPartSize, // TODO: we shouldnt be importing types
PeerGossipSleepDuration: 100,
PeerQueryMaj23SleepDuration: 2000,
}


@@ -119,7 +119,7 @@ var testGenesis = `{
"type": "ed25519",
"data":"3B3069C422E19688B45CBFAE7BB009FC0FA1B1EA86593519318B7214853803C8"
},
"amount": 10,
"power": 10,
"name": ""
}
],


@@ -5,9 +5,11 @@ import (
"testing"
"time"
crypto "github.com/tendermint/go-crypto"
data "github.com/tendermint/go-wire/data"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/types"
. "github.com/tendermint/tmlibs/common"
cmn "github.com/tendermint/tmlibs/common"
"github.com/tendermint/tmlibs/events"
)
@@ -53,7 +55,7 @@ func TestByzantine(t *testing.T) {
eventLogger := logger.With("module", "events")
for i := 0; i < N; i++ {
if i == 0 {
css[i].privValidator = NewByzantinePrivValidator(css[i].privValidator.(*types.PrivValidator))
css[i].privValidator = NewByzantinePrivValidator(css[i].privValidator)
// make byzantine
css[i].decideProposal = func(j int) func(int, int) {
return func(height, round int) {
@@ -99,10 +101,10 @@ func TestByzantine(t *testing.T) {
// start the state machines
byzR := reactors[0].(*ByzantineReactor)
s := byzR.reactor.conS.GetState()
byzR.reactor.SwitchToConsensus(s)
byzR.reactor.SwitchToConsensus(s, 0)
for i := 1; i < N; i++ {
cr := reactors[i].(*ConsensusReactor)
cr.SwitchToConsensus(cr.conS.GetState())
cr.SwitchToConsensus(cr.conS.GetState(), 0)
}
// byz proposer sends one block to peers[0]
@@ -147,8 +149,8 @@ func TestByzantine(t *testing.T) {
case <-done:
case <-tick.C:
for i, reactor := range reactors {
t.Log(Fmt("Consensus Reactor %v", i))
t.Log(Fmt("%v", reactor))
t.Log(cmn.Fmt("Consensus Reactor %v", i))
t.Log(cmn.Fmt("%v", reactor))
}
t.Fatalf("Timed out waiting for all validators to commit first block")
}
@@ -188,7 +190,7 @@ func byzantineDecideProposalFunc(t *testing.T, height, round int, cs *ConsensusS
}
}
func sendProposalAndParts(height, round int, cs *ConsensusState, peer *p2p.Peer, proposal *types.Proposal, blockHash []byte, parts *types.PartSet) {
func sendProposalAndParts(height, round int, cs *ConsensusState, peer p2p.Peer, proposal *types.Proposal, blockHash []byte, parts *types.PartSet) {
// proposal
msg := &ProposalMessage{Proposal: proposal}
peer.Send(DataChannel, struct{ ConsensusMessage }{msg})
@@ -218,7 +220,7 @@ func sendProposalAndParts(height, round int, cs *ConsensusState, peer *p2p.Peer,
// byzantine consensus reactor
type ByzantineReactor struct {
Service
cmn.Service
reactor *ConsensusReactor
}
@@ -231,14 +233,14 @@ func NewByzantineReactor(conR *ConsensusReactor) *ByzantineReactor {
func (br *ByzantineReactor) SetSwitch(s *p2p.Switch) { br.reactor.SetSwitch(s) }
func (br *ByzantineReactor) GetChannels() []*p2p.ChannelDescriptor { return br.reactor.GetChannels() }
func (br *ByzantineReactor) AddPeer(peer *p2p.Peer) {
func (br *ByzantineReactor) AddPeer(peer p2p.Peer) {
if !br.reactor.IsRunning() {
return
}
// Create peerState for peer
peerState := NewPeerState(peer)
peer.Data.Set(types.PeerStateKey, peerState)
peerState := NewPeerState(peer).SetLogger(br.reactor.Logger)
peer.Set(types.PeerStateKey, peerState)
// Send our state to peer.
// If we're fast_syncing, broadcast a RoundStepMessage later upon SwitchToConsensus().
@@ -246,10 +248,10 @@ func (br *ByzantineReactor) AddPeer(peer *p2p.Peer) {
br.reactor.sendNewRoundStepMessages(peer)
}
}
func (br *ByzantineReactor) RemovePeer(peer *p2p.Peer, reason interface{}) {
func (br *ByzantineReactor) RemovePeer(peer p2p.Peer, reason interface{}) {
br.reactor.RemovePeer(peer, reason)
}
func (br *ByzantineReactor) Receive(chID byte, peer *p2p.Peer, msgBytes []byte) {
func (br *ByzantineReactor) Receive(chID byte, peer p2p.Peer, msgBytes []byte) {
br.reactor.Receive(chID, peer, msgBytes)
}
@@ -257,51 +259,42 @@ func (br *ByzantineReactor) Receive(chID byte, peer *p2p.Peer, msgBytes []byte)
// byzantine privValidator
type ByzantinePrivValidator struct {
Address []byte `json:"address"`
types.Signer `json:"-"`
types.Signer
mtx sync.Mutex
pv types.PrivValidator
}
// Return a priv validator that will sign anything
func NewByzantinePrivValidator(pv *types.PrivValidator) *ByzantinePrivValidator {
func NewByzantinePrivValidator(pv types.PrivValidator) *ByzantinePrivValidator {
return &ByzantinePrivValidator{
Address: pv.Address,
Signer: pv.Signer,
Signer: pv.(*types.PrivValidatorFS).Signer,
pv: pv,
}
}
func (privVal *ByzantinePrivValidator) GetAddress() []byte {
return privVal.Address
func (privVal *ByzantinePrivValidator) GetAddress() data.Bytes {
return privVal.pv.GetAddress()
}
func (privVal *ByzantinePrivValidator) SignVote(chainID string, vote *types.Vote) error {
privVal.mtx.Lock()
defer privVal.mtx.Unlock()
func (privVal *ByzantinePrivValidator) GetPubKey() crypto.PubKey {
return privVal.pv.GetPubKey()
}
// Sign
vote.Signature = privVal.Sign(types.SignBytes(chainID, vote))
func (privVal *ByzantinePrivValidator) SignVote(chainID string, vote *types.Vote) (err error) {
vote.Signature, err = privVal.Sign(types.SignBytes(chainID, vote))
return err
}
func (privVal *ByzantinePrivValidator) SignProposal(chainID string, proposal *types.Proposal) (err error) {
proposal.Signature, err = privVal.Sign(types.SignBytes(chainID, proposal))
return nil
}
func (privVal *ByzantinePrivValidator) SignProposal(chainID string, proposal *types.Proposal) error {
privVal.mtx.Lock()
defer privVal.mtx.Unlock()
// Sign
proposal.Signature = privVal.Sign(types.SignBytes(chainID, proposal))
return nil
}
func (privVal *ByzantinePrivValidator) SignHeartbeat(chainID string, heartbeat *types.Heartbeat) error {
privVal.mtx.Lock()
defer privVal.mtx.Unlock()
// Sign
heartbeat.Signature = privVal.Sign(types.SignBytes(chainID, heartbeat))
func (privVal *ByzantinePrivValidator) SignHeartbeat(chainID string, heartbeat *types.Heartbeat) (err error) {
heartbeat.Signature, err = privVal.Sign(types.SignBytes(chainID, heartbeat))
return nil
}
func (privVal *ByzantinePrivValidator) String() string {
return Fmt("PrivValidator{%X}", privVal.Address)
return cmn.Fmt("PrivValidator{%X}", privVal.GetAddress())
}


@@ -15,11 +15,12 @@ import (
abci "github.com/tendermint/abci/types"
bc "github.com/tendermint/tendermint/blockchain"
cfg "github.com/tendermint/tendermint/config"
cstypes "github.com/tendermint/tendermint/consensus/types"
mempl "github.com/tendermint/tendermint/mempool"
"github.com/tendermint/tendermint/p2p"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
. "github.com/tendermint/tmlibs/common"
cmn "github.com/tendermint/tmlibs/common"
dbm "github.com/tendermint/tmlibs/db"
"github.com/tendermint/tmlibs/log"
@@ -34,7 +35,7 @@ var config *cfg.Config // NOTE: must be reset for each _test.go file
var ensureTimeout = time.Second * 2
func ensureDir(dir string, mode os.FileMode) {
if err := EnsureDir(dir, mode); err != nil {
if err := cmn.EnsureDir(dir, mode); err != nil {
panic(err)
}
}
@@ -50,12 +51,12 @@ type validatorStub struct {
Index int // Validator index. NOTE: we don't assume validator set changes.
Height int
Round int
*types.PrivValidator
types.PrivValidator
}
var testMinPower = 10
func NewValidatorStub(privValidator *types.PrivValidator, valIndex int) *validatorStub {
func NewValidatorStub(privValidator types.PrivValidator, valIndex int) *validatorStub {
return &validatorStub{
Index: valIndex,
PrivValidator: privValidator,
@@ -65,7 +66,7 @@ func NewValidatorStub(privValidator *types.PrivValidator, valIndex int) *validat
func (vs *validatorStub) signVote(voteType byte, hash []byte, header types.PartSetHeader) (*types.Vote, error) {
vote := &types.Vote{
ValidatorIndex: vs.Index,
ValidatorAddress: vs.PrivValidator.Address,
ValidatorAddress: vs.PrivValidator.GetAddress(),
Height: vs.Height,
Round: vs.Round,
Type: voteType,
@@ -142,7 +143,7 @@ func signAddVotes(to *ConsensusState, voteType byte, hash []byte, header types.P
func validatePrevote(t *testing.T, cs *ConsensusState, round int, privVal *validatorStub, blockHash []byte) {
prevotes := cs.Votes.Prevotes(round)
var vote *types.Vote
if vote = prevotes.GetByAddress(privVal.Address); vote == nil {
if vote = prevotes.GetByAddress(privVal.GetAddress()); vote == nil {
panic("Failed to find prevote from validator")
}
if blockHash == nil {
@@ -159,7 +160,7 @@ func validatePrevote(t *testing.T, cs *ConsensusState, round int, privVal *valid
func validateLastPrecommit(t *testing.T, cs *ConsensusState, privVal *validatorStub, blockHash []byte) {
votes := cs.LastCommit
var vote *types.Vote
if vote = votes.GetByAddress(privVal.Address); vote == nil {
if vote = votes.GetByAddress(privVal.GetAddress()); vote == nil {
panic("Failed to find precommit from validator")
}
if !bytes.Equal(vote.BlockID.Hash, blockHash) {
@@ -170,7 +171,7 @@ func validateLastPrecommit(t *testing.T, cs *ConsensusState, privVal *validatorS
func validatePrecommit(t *testing.T, cs *ConsensusState, thisRound, lockRound int, privVal *validatorStub, votedBlockHash, lockedBlockHash []byte) {
precommits := cs.Votes.Precommits(thisRound)
var vote *types.Vote
if vote = precommits.GetByAddress(privVal.Address); vote == nil {
if vote = precommits.GetByAddress(privVal.GetAddress()); vote == nil {
panic("Failed to find precommit from validator")
}
@@ -225,11 +226,11 @@ func subscribeToVoter(cs *ConsensusState, addr []byte) chan interface{} {
//-------------------------------------------------------------------------------
// consensus states
func newConsensusState(state *sm.State, pv *types.PrivValidator, app abci.Application) *ConsensusState {
func newConsensusState(state *sm.State, pv types.PrivValidator, app abci.Application) *ConsensusState {
return newConsensusStateWithConfig(config, state, pv, app)
}
func newConsensusStateWithConfig(thisConfig *cfg.Config, state *sm.State, pv *types.PrivValidator, app abci.Application) *ConsensusState {
func newConsensusStateWithConfig(thisConfig *cfg.Config, state *sm.State, pv types.PrivValidator, app abci.Application) *ConsensusState {
// Get BlockStore
blockDB := dbm.NewMemDB()
blockStore := bc.NewBlockStore(blockDB)
@@ -258,17 +259,17 @@ func newConsensusStateWithConfig(thisConfig *cfg.Config, state *sm.State, pv *ty
return cs
}
func loadPrivValidator(config *cfg.Config) *types.PrivValidator {
func loadPrivValidator(config *cfg.Config) *types.PrivValidatorFS {
privValidatorFile := config.PrivValidatorFile()
ensureDir(path.Dir(privValidatorFile), 0700)
privValidator := types.LoadOrGenPrivValidator(privValidatorFile, log.TestingLogger())
privValidator := types.LoadOrGenPrivValidatorFS(privValidatorFile)
privValidator.Reset()
return privValidator
}
func fixedConsensusStateDummy() *ConsensusState {
stateDB := dbm.NewMemDB()
state := sm.MakeGenesisStateFromFile(stateDB, config.GenesisFile())
state, _ := sm.MakeGenesisStateFromFile(stateDB, config.GenesisFile())
state.SetLogger(log.TestingLogger().With("module", "state"))
privValidator := loadPrivValidator(config)
cs := newConsensusState(state, privValidator, dummy.NewDummyApplication())
@@ -338,15 +339,19 @@ func randConsensusNet(nValidators int, testName string, tickerFunc func() Timeou
logger := consensusLogger()
for i := 0; i < nValidators; i++ {
db := dbm.NewMemDB() // each state needs its own db
state := sm.MakeGenesisState(db, genDoc)
state, _ := sm.MakeGenesisState(db, genDoc)
state.SetLogger(logger.With("module", "state", "validator", i))
state.Save()
thisConfig := ResetConfig(Fmt("%s_%d", testName, i))
thisConfig := ResetConfig(cmn.Fmt("%s_%d", testName, i))
for _, opt := range configOpts {
opt(thisConfig)
}
ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
css[i] = newConsensusStateWithConfig(thisConfig, state, privVals[i], appFunc())
app := appFunc()
vals := types.TM2PB.Validators(state.Validators)
app.InitChain(abci.RequestInitChain{Validators: vals})
css[i] = newConsensusStateWithConfig(thisConfig, state, privVals[i], app)
css[i].SetLogger(logger.With("validator", i))
css[i].SetTimeoutTicker(tickerFunc())
}
@@ -359,30 +364,33 @@ func randConsensusNetWithPeers(nValidators, nPeers int, testName string, tickerF
css := make([]*ConsensusState, nPeers)
for i := 0; i < nPeers; i++ {
db := dbm.NewMemDB() // each state needs its own db
state := sm.MakeGenesisState(db, genDoc)
state, _ := sm.MakeGenesisState(db, genDoc)
state.SetLogger(log.TestingLogger().With("module", "state"))
state.Save()
thisConfig := ResetConfig(Fmt("%s_%d", testName, i))
thisConfig := ResetConfig(cmn.Fmt("%s_%d", testName, i))
ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
var privVal *types.PrivValidator
var privVal types.PrivValidator
if i < nValidators {
privVal = privVals[i]
} else {
privVal = types.GenPrivValidator()
_, tempFilePath := Tempfile("priv_validator_")
privVal.SetFile(tempFilePath)
_, tempFilePath := cmn.Tempfile("priv_validator_")
privVal = types.GenPrivValidatorFS(tempFilePath)
}
css[i] = newConsensusStateWithConfig(thisConfig, state, privVal, appFunc())
app := appFunc()
vals := types.TM2PB.Validators(state.Validators)
app.InitChain(abci.RequestInitChain{Validators: vals})
css[i] = newConsensusStateWithConfig(thisConfig, state, privVal, app)
css[i].SetLogger(log.TestingLogger())
css[i].SetTimeoutTicker(tickerFunc())
}
return css
}
func getSwitchIndex(switches []*p2p.Switch, peer *p2p.Peer) int {
func getSwitchIndex(switches []*p2p.Switch, peer p2p.Peer) int {
for i, s := range switches {
if bytes.Equal(peer.NodeInfo.PubKey.Address(), s.NodeInfo().PubKey.Address()) {
if bytes.Equal(peer.NodeInfo().PubKey.Address(), s.NodeInfo().PubKey.Address()) {
return i
}
}
@@ -393,14 +401,14 @@ func getSwitchIndex(switches []*p2p.Switch, peer *p2p.Peer) int {
//-------------------------------------------------------------------------------
// genesis
func randGenesisDoc(numValidators int, randPower bool, minPower int64) (*types.GenesisDoc, []*types.PrivValidator) {
func randGenesisDoc(numValidators int, randPower bool, minPower int64) (*types.GenesisDoc, []*types.PrivValidatorFS) {
validators := make([]types.GenesisValidator, numValidators)
privValidators := make([]*types.PrivValidator, numValidators)
privValidators := make([]*types.PrivValidatorFS, numValidators)
for i := 0; i < numValidators; i++ {
val, privVal := types.RandValidator(randPower, minPower)
validators[i] = types.GenesisValidator{
PubKey: val.PubKey,
Amount: val.VotingPower,
Power: val.VotingPower,
}
privValidators[i] = privVal
}
@@ -412,10 +420,10 @@ func randGenesisDoc(numValidators int, randPower bool, minPower int64) (*types.G
}, privValidators
}
func randGenesisState(numValidators int, randPower bool, minPower int64) (*sm.State, []*types.PrivValidator) {
func randGenesisState(numValidators int, randPower bool, minPower int64) (*sm.State, []*types.PrivValidatorFS) {
genDoc, privValidators := randGenesisDoc(numValidators, randPower, minPower)
db := dbm.NewMemDB()
s0 := sm.MakeGenesisState(db, genDoc)
s0, _ := sm.MakeGenesisState(db, genDoc)
s0.SetLogger(log.TestingLogger().With("module", "state"))
s0.Save()
return s0, privValidators
@@ -457,7 +465,7 @@ func (m *mockTicker) ScheduleTimeout(ti timeoutInfo) {
if m.onlyOnce && m.fired {
return
}
if ti.Step == RoundStepNewHeight {
if ti.Step == cstypes.RoundStepNewHeight {
m.c <- ti
m.fired = true
}
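The hunks above swap the concrete *types.PrivValidator for the types.PrivValidator interface, so the test helpers call GetAddress()/GetPubKey() instead of reading exported fields, and loadPrivValidator returns the file-backed PrivValidatorFS. A minimal standalone sketch of the same pattern (the package, type and method names below are illustrative stand-ins, not the real tendermint types):

// sketch: callers depend on a small signer interface instead of a concrete
// struct, so a file-backed implementation and test stubs are interchangeable.
package main

import "fmt"

type privValidator interface {
	GetAddress() []byte
	GetPubKey() string
}

// filePV stands in for a file-backed implementation (e.g. a PrivValidatorFS-like type).
type filePV struct {
	addr   []byte
	pubKey string
}

func (pv *filePV) GetAddress() []byte { return pv.addr }
func (pv *filePV) GetPubKey() string  { return pv.pubKey }

func main() {
	var pv privValidator = &filePV{addr: []byte{0x01}, pubKey: "ed25519:stub"}
	// test helpers now look validators up via the accessors
	fmt.Printf("addr=%X pubkey=%s\n", pv.GetAddress(), pv.GetPubKey())
}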

View File

@@ -8,7 +8,7 @@ import (
abci "github.com/tendermint/abci/types"
"github.com/tendermint/tendermint/types"
. "github.com/tendermint/tmlibs/common"
cmn "github.com/tendermint/tmlibs/common"
)
func init() {
@@ -86,7 +86,7 @@ func deliverTxsRange(cs *ConsensusState, start, end int) {
binary.BigEndian.PutUint64(txBytes, uint64(i))
err := cs.mempool.CheckTx(txBytes, nil)
if err != nil {
panic(Fmt("Error after CheckTx: %v", err))
panic(cmn.Fmt("Error after CheckTx: %v", err))
}
}
}
@@ -183,8 +183,8 @@ func NewCounterApplication() *CounterApplication {
return &CounterApplication{}
}
func (app *CounterApplication) Info() abci.ResponseInfo {
return abci.ResponseInfo{Data: Fmt("txs:%v", app.txCount)}
func (app *CounterApplication) Info(req abci.RequestInfo) abci.ResponseInfo {
return abci.ResponseInfo{Data: cmn.Fmt("txs:%v", app.txCount)}
}
func (app *CounterApplication) DeliverTx(tx []byte) abci.Result {
@@ -201,7 +201,7 @@ func runTx(tx []byte, countPtr *int) abci.Result {
copy(tx8[len(tx8)-len(tx):], tx)
txValue := binary.BigEndian.Uint64(tx8)
if txValue != uint64(count) {
return abci.ErrBadNonce.AppendLog(Fmt("Invalid nonce. Expected %v, got %v", count, txValue))
return abci.ErrBadNonce.AppendLog(cmn.Fmt("Invalid nonce. Expected %v, got %v", count, txValue))
}
*countPtr += 1
return abci.OK
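The counter application's nonce rule is unchanged by the hunk above; only its formatting helper moves to the cmn alias. As a worked example of that rule, here is a self-contained sketch (checkNonce is a hypothetical helper name) of the right-aligned big-endian decode and count comparison:

package main

import (
	"encoding/binary"
	"fmt"
)

// checkNonce mirrors the counter app's rule: the tx bytes are right-aligned
// into an 8-byte buffer, decoded as a big-endian uint64, and must equal the
// current tx count.
func checkNonce(tx []byte, count int) error {
	tx8 := make([]byte, 8)
	copy(tx8[len(tx8)-len(tx):], tx)
	txValue := binary.BigEndian.Uint64(tx8)
	if txValue != uint64(count) {
		return fmt.Errorf("invalid nonce: expected %v, got %v", count, txValue)
	}
	return nil
}

func main() {
	count := 0
	for i := 0; i < 3; i++ {
		tx := make([]byte, 8)
		binary.BigEndian.PutUint64(tx, uint64(i))
		if err := checkNonce(tx, count); err != nil {
			panic(err)
		}
		count++
	}
	fmt.Println("3 txs accepted in order")
}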

View File

@@ -12,6 +12,7 @@ import (
cmn "github.com/tendermint/tmlibs/common"
"github.com/tendermint/tmlibs/log"
cstypes "github.com/tendermint/tendermint/consensus/types"
"github.com/tendermint/tendermint/p2p"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
@@ -75,7 +76,7 @@ func (conR *ConsensusReactor) OnStop() {
// SwitchToConsensus switches from fast_sync mode to consensus mode.
// It resets the state, turns off fast_sync, and starts the consensus state-machine
func (conR *ConsensusReactor) SwitchToConsensus(state *sm.State) {
func (conR *ConsensusReactor) SwitchToConsensus(state *sm.State, blocksSynced int) {
conR.Logger.Info("SwitchToConsensus")
conR.conS.reconstructLastCommit(state)
// NOTE: The line below causes broadcastNewRoundStepRoutine() to
@@ -86,6 +87,10 @@ func (conR *ConsensusReactor) SwitchToConsensus(state *sm.State) {
conR.fastSync = false
conR.mtx.Unlock()
if blocksSynced > 0 {
// don't bother with the WAL if we fast synced
conR.conS.doWALCatchup = false
}
conR.conS.Start()
}
@@ -120,14 +125,14 @@ func (conR *ConsensusReactor) GetChannels() []*p2p.ChannelDescriptor {
}
// AddPeer implements Reactor
func (conR *ConsensusReactor) AddPeer(peer *p2p.Peer) {
func (conR *ConsensusReactor) AddPeer(peer p2p.Peer) {
if !conR.IsRunning() {
return
}
// Create peerState for peer
peerState := NewPeerState(peer)
peer.Data.Set(types.PeerStateKey, peerState)
peerState := NewPeerState(peer).SetLogger(conR.Logger)
peer.Set(types.PeerStateKey, peerState)
// Begin routines for this peer.
go conR.gossipDataRoutine(peer, peerState)
@@ -142,12 +147,12 @@ func (conR *ConsensusReactor) AddPeer(peer *p2p.Peer) {
}
// RemovePeer implements Reactor
func (conR *ConsensusReactor) RemovePeer(peer *p2p.Peer, reason interface{}) {
func (conR *ConsensusReactor) RemovePeer(peer p2p.Peer, reason interface{}) {
if !conR.IsRunning() {
return
}
// TODO
//peer.Data.Get(PeerStateKey).(*PeerState).Disconnect()
//peer.Get(PeerStateKey).(*PeerState).Disconnect()
}
// Receive implements Reactor
@@ -156,7 +161,7 @@ func (conR *ConsensusReactor) RemovePeer(peer *p2p.Peer, reason interface{}) {
// Peer state updates can happen in parallel, but processing of
// proposals, block parts, and votes are ordered by the receiveRoutine
// NOTE: blocks on consensus state for proposals, block parts, and votes
func (conR *ConsensusReactor) Receive(chID byte, src *p2p.Peer, msgBytes []byte) {
func (conR *ConsensusReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) {
if !conR.IsRunning() {
conR.Logger.Debug("Receive", "src", src, "chId", chID, "bytes", msgBytes)
return
@@ -171,7 +176,7 @@ func (conR *ConsensusReactor) Receive(chID byte, src *p2p.Peer, msgBytes []byte)
conR.Logger.Debug("Receive", "src", src, "chId", chID, "msg", msg)
// Get peer states
ps := src.Data.Get(types.PeerStateKey).(*PeerState)
ps := src.Get(types.PeerStateKey).(*PeerState)
switch chID {
case StateChannel:
@@ -191,7 +196,7 @@ func (conR *ConsensusReactor) Receive(chID byte, src *p2p.Peer, msgBytes []byte)
return
}
// Peer claims to have a maj23 for some BlockID at H,R,S,
votes.SetPeerMaj23(msg.Round, msg.Type, ps.Peer.Key, msg.BlockID)
votes.SetPeerMaj23(msg.Round, msg.Type, ps.Peer.Key(), msg.BlockID)
// Respond with a VoteSetBitsMessage showing which votes we have.
// (and consequently shows which we don't have)
var ourVotes *cmn.BitArray
@@ -228,12 +233,12 @@ func (conR *ConsensusReactor) Receive(chID byte, src *p2p.Peer, msgBytes []byte)
switch msg := msg.(type) {
case *ProposalMessage:
ps.SetHasProposal(msg.Proposal)
conR.conS.peerMsgQueue <- msgInfo{msg, src.Key}
conR.conS.peerMsgQueue <- msgInfo{msg, src.Key()}
case *ProposalPOLMessage:
ps.ApplyProposalPOLMessage(msg)
case *BlockPartMessage:
ps.SetHasProposalBlockPart(msg.Height, msg.Round, msg.Part.Index)
conR.conS.peerMsgQueue <- msgInfo{msg, src.Key}
conR.conS.peerMsgQueue <- msgInfo{msg, src.Key()}
default:
conR.Logger.Error(cmn.Fmt("Unknown message type %v", reflect.TypeOf(msg)))
}
@@ -253,7 +258,7 @@ func (conR *ConsensusReactor) Receive(chID byte, src *p2p.Peer, msgBytes []byte)
ps.EnsureVoteBitArrays(height-1, lastCommitSize)
ps.SetHasVote(msg.Vote)
cs.peerMsgQueue <- msgInfo{msg, src.Key}
cs.peerMsgQueue <- msgInfo{msg, src.Key()}
default:
// don't punish (leave room for soft upgrades)
@@ -321,7 +326,7 @@ func (conR *ConsensusReactor) FastSync() bool {
func (conR *ConsensusReactor) registerEventCallbacks() {
types.AddListenerForEvent(conR.evsw, "conR", types.EventStringNewRoundStep(), func(data types.TMEventData) {
rs := data.Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
rs := data.Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
conR.broadcastNewRoundStep(rs)
})
@@ -344,7 +349,7 @@ func (conR *ConsensusReactor) broadcastProposalHeartbeatMessage(heartbeat types.
conR.Switch.Broadcast(StateChannel, struct{ ConsensusMessage }{msg})
}
func (conR *ConsensusReactor) broadcastNewRoundStep(rs *RoundState) {
func (conR *ConsensusReactor) broadcastNewRoundStep(rs *cstypes.RoundState) {
nrsMsg, csMsg := makeRoundStepMessages(rs)
if nrsMsg != nil {
@@ -367,7 +372,7 @@ func (conR *ConsensusReactor) broadcastHasVoteMessage(vote *types.Vote) {
/*
// TODO: Make this broadcast more selective.
for _, peer := range conR.Switch.Peers().List() {
ps := peer.Data.Get(PeerStateKey).(*PeerState)
ps := peer.Get(PeerStateKey).(*PeerState)
prs := ps.GetRoundState()
if prs.Height == vote.Height {
// TODO: Also filter on round?
@@ -381,7 +386,7 @@ func (conR *ConsensusReactor) broadcastHasVoteMessage(vote *types.Vote) {
*/
}
func makeRoundStepMessages(rs *RoundState) (nrsMsg *NewRoundStepMessage, csMsg *CommitStepMessage) {
func makeRoundStepMessages(rs *cstypes.RoundState) (nrsMsg *NewRoundStepMessage, csMsg *CommitStepMessage) {
nrsMsg = &NewRoundStepMessage{
Height: rs.Height,
Round: rs.Round,
@@ -389,7 +394,7 @@ func makeRoundStepMessages(rs *RoundState) (nrsMsg *NewRoundStepMessage, csMsg *
SecondsSinceStartTime: int(time.Since(rs.StartTime).Seconds()),
LastCommitRound: rs.LastCommit.Round(),
}
if rs.Step == RoundStepCommit {
if rs.Step == cstypes.RoundStepCommit {
csMsg = &CommitStepMessage{
Height: rs.Height,
BlockPartsHeader: rs.ProposalBlockParts.Header(),
@@ -399,7 +404,7 @@ func makeRoundStepMessages(rs *RoundState) (nrsMsg *NewRoundStepMessage, csMsg *
return
}
func (conR *ConsensusReactor) sendNewRoundStepMessages(peer *p2p.Peer) {
func (conR *ConsensusReactor) sendNewRoundStepMessages(peer p2p.Peer) {
rs := conR.conS.GetRoundState()
nrsMsg, csMsg := makeRoundStepMessages(rs)
if nrsMsg != nil {
@@ -410,7 +415,7 @@ func (conR *ConsensusReactor) sendNewRoundStepMessages(peer *p2p.Peer) {
}
}
func (conR *ConsensusReactor) gossipDataRoutine(peer *p2p.Peer, ps *PeerState) {
func (conR *ConsensusReactor) gossipDataRoutine(peer p2p.Peer, ps *PeerState) {
logger := conR.Logger.With("peer", peer)
OUTER_LOOP:
@@ -491,8 +496,8 @@ OUTER_LOOP:
}
}
func (conR *ConsensusReactor) gossipDataForCatchup(logger log.Logger, rs *RoundState,
prs *PeerRoundState, ps *PeerState, peer *p2p.Peer) {
func (conR *ConsensusReactor) gossipDataForCatchup(logger log.Logger, rs *cstypes.RoundState,
prs *cstypes.PeerRoundState, ps *PeerState, peer p2p.Peer) {
if index, ok := prs.ProposalBlockParts.Not().PickRandom(); ok {
// Ensure that the peer's PartSetHeader is correct
@@ -534,7 +539,7 @@ func (conR *ConsensusReactor) gossipDataForCatchup(logger log.Logger, rs *RoundS
}
}
func (conR *ConsensusReactor) gossipVotesRoutine(peer *p2p.Peer, ps *PeerState) {
func (conR *ConsensusReactor) gossipVotesRoutine(peer p2p.Peer, ps *PeerState) {
logger := conR.Logger.With("peer", peer)
// Simple hack to throttle logs upon sleep.
@@ -606,24 +611,24 @@ OUTER_LOOP:
}
}
func (conR *ConsensusReactor) gossipVotesForHeight(logger log.Logger, rs *RoundState, prs *PeerRoundState, ps *PeerState) bool {
func (conR *ConsensusReactor) gossipVotesForHeight(logger log.Logger, rs *cstypes.RoundState, prs *cstypes.PeerRoundState, ps *PeerState) bool {
// If there are lastCommits to send...
if prs.Step == RoundStepNewHeight {
if prs.Step == cstypes.RoundStepNewHeight {
if ps.PickSendVote(rs.LastCommit) {
logger.Debug("Picked rs.LastCommit to send")
return true
}
}
// If there are prevotes to send...
if prs.Step <= RoundStepPrevote && prs.Round != -1 && prs.Round <= rs.Round {
if prs.Step <= cstypes.RoundStepPrevote && prs.Round != -1 && prs.Round <= rs.Round {
if ps.PickSendVote(rs.Votes.Prevotes(prs.Round)) {
logger.Debug("Picked rs.Prevotes(prs.Round) to send", "round", prs.Round)
return true
}
}
// If there are precommits to send...
if prs.Step <= RoundStepPrecommit && prs.Round != -1 && prs.Round <= rs.Round {
if prs.Step <= cstypes.RoundStepPrecommit && prs.Round != -1 && prs.Round <= rs.Round {
if ps.PickSendVote(rs.Votes.Precommits(prs.Round)) {
logger.Debug("Picked rs.Precommits(prs.Round) to send", "round", prs.Round)
return true
@@ -644,7 +649,7 @@ func (conR *ConsensusReactor) gossipVotesForHeight(logger log.Logger, rs *RoundS
// NOTE: `queryMaj23Routine` has a simple crude design since it only comes
// into play for liveness when there's a signature DDoS attack happening.
func (conR *ConsensusReactor) queryMaj23Routine(peer *p2p.Peer, ps *PeerState) {
func (conR *ConsensusReactor) queryMaj23Routine(peer p2p.Peer, ps *PeerState) {
logger := conR.Logger.With("peer", peer)
OUTER_LOOP:
@@ -743,7 +748,7 @@ func (conR *ConsensusReactor) StringIndented(indent string) string {
s := "ConsensusReactor{\n"
s += indent + " " + conR.conS.StringIndented(indent+" ") + "\n"
for _, peer := range conR.Switch.Peers().List() {
ps := peer.Data.Get(types.PeerStateKey).(*PeerState)
ps := peer.Get(types.PeerStateKey).(*PeerState)
s += indent + " " + ps.StringIndented(indent+" ") + "\n"
}
s += indent + "}"
@@ -752,54 +757,6 @@ func (conR *ConsensusReactor) StringIndented(indent string) string {
//-----------------------------------------------------------------------------
// PeerRoundState contains the known state of a peer.
// NOTE: Read-only when returned by PeerState.GetRoundState().
type PeerRoundState struct {
Height int // Height peer is at
Round int // Round peer is at, -1 if unknown.
Step RoundStepType // Step peer is at
StartTime time.Time // Estimated start of round 0 at this height
Proposal bool // True if peer has proposal for this round
ProposalBlockPartsHeader types.PartSetHeader //
ProposalBlockParts *cmn.BitArray //
ProposalPOLRound int // Proposal's POL round. -1 if none.
ProposalPOL *cmn.BitArray // nil until ProposalPOLMessage received.
Prevotes *cmn.BitArray // All votes peer has for this round
Precommits *cmn.BitArray // All precommits peer has for this round
LastCommitRound int // Round of commit for last height. -1 if none.
LastCommit *cmn.BitArray // All commit precommits of commit for last height.
CatchupCommitRound int // Round that we have commit for. Not necessarily unique. -1 if none.
CatchupCommit *cmn.BitArray // All commit precommits peer has for this height & CatchupCommitRound
}
// String returns a string representation of the PeerRoundState
func (prs PeerRoundState) String() string {
return prs.StringIndented("")
}
// StringIndented returns a string representation of the PeerRoundState
func (prs PeerRoundState) StringIndented(indent string) string {
return fmt.Sprintf(`PeerRoundState{
%s %v/%v/%v @%v
%s Proposal %v -> %v
%s POL %v (round %v)
%s Prevotes %v
%s Precommits %v
%s LastCommit %v (round %v)
%s Catchup %v (round %v)
%s}`,
indent, prs.Height, prs.Round, prs.Step, prs.StartTime,
indent, prs.ProposalBlockPartsHeader, prs.ProposalBlockParts,
indent, prs.ProposalPOL, prs.ProposalPOLRound,
indent, prs.Prevotes,
indent, prs.Precommits,
indent, prs.LastCommit, prs.LastCommitRound,
indent, prs.CatchupCommit, prs.CatchupCommitRound,
indent)
}
//-----------------------------------------------------------------------------
var (
ErrPeerStateHeightRegression = errors.New("Error peer state height regression")
ErrPeerStateInvalidStartTime = errors.New("Error peer state invalid startTime")
@@ -808,17 +765,19 @@ var (
// PeerState contains the known state of a peer, including its connection
// and threadsafe access to its PeerRoundState.
type PeerState struct {
Peer *p2p.Peer
Peer p2p.Peer
logger log.Logger
mtx sync.Mutex
PeerRoundState
cstypes.PeerRoundState
}
// NewPeerState returns a new PeerState for the given Peer
func NewPeerState(peer *p2p.Peer) *PeerState {
func NewPeerState(peer p2p.Peer) *PeerState {
return &PeerState{
Peer: peer,
PeerRoundState: PeerRoundState{
Peer: peer,
logger: log.NewNopLogger(),
PeerRoundState: cstypes.PeerRoundState{
Round: -1,
ProposalPOLRound: -1,
LastCommitRound: -1,
@@ -827,9 +786,14 @@ func NewPeerState(peer *p2p.Peer) *PeerState {
}
}
func (ps *PeerState) SetLogger(logger log.Logger) *PeerState {
ps.logger = logger
return ps
}
// GetRoundState returns an atomic snapshot of the PeerRoundState.
// There's no point in mutating it since it won't change PeerState.
func (ps *PeerState) GetRoundState() *PeerRoundState {
func (ps *PeerState) GetRoundState() *cstypes.PeerRoundState {
ps.mtx.Lock()
defer ps.mtx.Unlock()
@@ -1025,7 +989,7 @@ func (ps *PeerState) SetHasVote(vote *types.Vote) {
}
func (ps *PeerState) setHasVote(height int, round int, type_ byte, index int) {
logger := ps.Peer.Logger.With("peerRound", ps.Round, "height", height, "round", round)
logger := ps.logger.With("peerRound", ps.Round, "height", height, "round", round)
logger.Debug("setHasVote(LastCommit)", "lastCommit", ps.LastCommit, "index", index)
// NOTE: some may be nil BitArrays -> no side effects.
@@ -1163,7 +1127,7 @@ func (ps *PeerState) StringIndented(indent string) string {
%s Key %v
%s PRS %v
%s}`,
indent, ps.Peer.Key,
indent, ps.Peer.Key(),
indent, ps.PeerRoundState.StringIndented(indent+" "),
indent)
}
@@ -1220,7 +1184,7 @@ func DecodeMessage(bz []byte) (msgType byte, msg ConsensusMessage, err error) {
type NewRoundStepMessage struct {
Height int
Round int
Step RoundStepType
Step cstypes.RoundStepType
SecondsSinceStartTime int
LastCommitRound int
}
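The reactor now programs against the p2p.Peer interface: per-peer state is attached with peer.Set and read back with peer.Get, and the peer's identifier comes from peer.Key() rather than a struct field. A standalone sketch of that pattern (the interface and types here are illustrative, not the real p2p package):

package main

import "fmt"

type peer interface {
	Key() string
	Set(key string, value interface{})
	Get(key string) interface{}
}

// memPeer is a trivial in-memory peer used only for this sketch; tests can
// hand a fake like this to a reactor instead of a live connection.
type memPeer struct {
	key  string
	data map[string]interface{}
}

func newMemPeer(key string) *memPeer {
	return &memPeer{key: key, data: make(map[string]interface{})}
}

func (p *memPeer) Key() string                 { return p.key }
func (p *memPeer) Set(k string, v interface{}) { p.data[k] = v }
func (p *memPeer) Get(k string) interface{}    { return p.data[k] }

type peerState struct{ rounds int }

func main() {
	var p peer = newMemPeer("peer-1")
	// a reactor attaches its own state to the peer on AddPeer...
	p.Set("ConsensusReactor.peerState", &peerState{})
	// ...and reads it back in Receive, keyed the same way.
	ps := p.Get("ConsensusReactor.peerState").(*peerState)
	ps.rounds++
	fmt.Println(p.Key(), ps.rounds)
}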

View File

@@ -54,7 +54,7 @@ func startConsensusNet(t *testing.T, css []*ConsensusState, N int, subscribeEven
// we'd block when the cs fires NewBlockEvent and the peers are trying to start their reactors
for i := 0; i < N; i++ {
s := reactors[i].conS.GetState()
reactors[i].SwitchToConsensus(s)
reactors[i].SwitchToConsensus(s, 0)
}
return reactors, eventChans
}
@@ -132,7 +132,7 @@ func TestVotingPowerChange(t *testing.T) {
//---------------------------------------------------------------------------
t.Log("---------------------------- Testing changing the voting power of one validator a few times")
val1PubKey := css[0].privValidator.(*types.PrivValidator).PubKey
val1PubKey := css[0].privValidator.GetPubKey()
updateValidatorTx := dummy.MakeValSetChangeTx(val1PubKey.Bytes(), 25)
previousTotalVotingPower := css[0].GetRoundState().LastValidators.TotalVotingPower()
@@ -193,7 +193,7 @@ func TestValidatorSetChanges(t *testing.T) {
//---------------------------------------------------------------------------
t.Log("---------------------------- Testing adding one validator")
newValidatorPubKey1 := css[nVals].privValidator.(*types.PrivValidator).PubKey
newValidatorPubKey1 := css[nVals].privValidator.GetPubKey()
newValidatorTx1 := dummy.MakeValSetChangeTx(newValidatorPubKey1.Bytes(), uint64(testMinPower))
// wait till everyone makes block 2
@@ -219,7 +219,7 @@ func TestValidatorSetChanges(t *testing.T) {
//---------------------------------------------------------------------------
t.Log("---------------------------- Testing changing the voting power of one validator")
updateValidatorPubKey1 := css[nVals].privValidator.(*types.PrivValidator).PubKey
updateValidatorPubKey1 := css[nVals].privValidator.GetPubKey()
updateValidatorTx1 := dummy.MakeValSetChangeTx(updateValidatorPubKey1.Bytes(), 25)
previousTotalVotingPower := css[nVals].GetRoundState().LastValidators.TotalVotingPower()
@@ -235,10 +235,10 @@ func TestValidatorSetChanges(t *testing.T) {
//---------------------------------------------------------------------------
t.Log("---------------------------- Testing adding two validators at once")
newValidatorPubKey2 := css[nVals+1].privValidator.(*types.PrivValidator).PubKey
newValidatorPubKey2 := css[nVals+1].privValidator.GetPubKey()
newValidatorTx2 := dummy.MakeValSetChangeTx(newValidatorPubKey2.Bytes(), uint64(testMinPower))
newValidatorPubKey3 := css[nVals+2].privValidator.(*types.PrivValidator).PubKey
newValidatorPubKey3 := css[nVals+2].privValidator.GetPubKey()
newValidatorTx3 := dummy.MakeValSetChangeTx(newValidatorPubKey3.Bytes(), uint64(testMinPower))
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css, newValidatorTx2, newValidatorTx3)

View File

@@ -4,6 +4,7 @@ import (
"bytes"
"errors"
"fmt"
"hash/crc32"
"io"
"reflect"
"strconv"
@@ -11,7 +12,6 @@ import (
"time"
abci "github.com/tendermint/abci/types"
wire "github.com/tendermint/go-wire"
auto "github.com/tendermint/tmlibs/autofile"
cmn "github.com/tendermint/tmlibs/common"
"github.com/tendermint/tmlibs/log"
@@ -19,8 +19,11 @@ import (
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
"github.com/tendermint/tendermint/version"
)
var crc32c = crc32.MakeTable(crc32.Castagnoli)
// Functionality to replay blocks and messages on recovery from a crash.
// There are two general failure scenarios: failure during consensus, and failure while applying the block.
// The former is handled by the WAL, the latter by the proxyApp Handshake on restart,
@@ -34,18 +37,11 @@ import (
// as if it were received in receiveRoutine
// Lines that start with "#" are ignored.
// NOTE: receiveRoutine should not be running
func (cs *ConsensusState) readReplayMessage(msgBytes []byte, newStepCh chan interface{}) error {
// Skip over empty and meta lines
if len(msgBytes) == 0 || msgBytes[0] == '#' {
func (cs *ConsensusState) readReplayMessage(msg *TimedWALMessage, newStepCh chan interface{}) error {
// skip meta messages
if _, ok := msg.Msg.(EndHeightMessage); ok {
return nil
}
var err error
var msg TimedWALMessage
wire.ReadJSON(&msg, msgBytes, &err)
if err != nil {
fmt.Println("MsgBytes:", msgBytes, string(msgBytes))
return fmt.Errorf("Error reading json data: %v", err)
}
// for logging
switch m := msg.Msg.(type) {
@@ -103,7 +99,7 @@ func (cs *ConsensusState) catchupReplay(csHeight int) error {
// Ensure that ENDHEIGHT for this height doesn't exist
// NOTE: This is just a sanity check. As far as we know things work fine without it,
// and Handshake could reuse ConsensusState if it weren't for this check (since we can crash after writing ENDHEIGHT).
gr, found, err := cs.wal.group.Search("#ENDHEIGHT: ", makeHeightSearchFunc(csHeight))
gr, found, err := cs.wal.SearchForEndHeight(uint64(csHeight))
if gr != nil {
gr.Close()
}
@@ -112,55 +108,33 @@ func (cs *ConsensusState) catchupReplay(csHeight int) error {
}
// Search for last height marker
gr, found, err = cs.wal.group.Search("#ENDHEIGHT: ", makeHeightSearchFunc(csHeight-1))
gr, found, err = cs.wal.SearchForEndHeight(uint64(csHeight - 1))
if err == io.EOF {
cs.Logger.Error("Replay: wal.group.Search returned EOF", "#ENDHEIGHT", csHeight-1)
// if we upgraded from 0.9 to 0.9.1, we may have #HEIGHT instead
// TODO (0.10.0): remove this
gr, found, err = cs.wal.group.Search("#HEIGHT: ", makeHeightSearchFunc(csHeight))
if err == io.EOF {
cs.Logger.Error("Replay: wal.group.Search returned EOF", "#HEIGHT", csHeight)
return nil
} else if err != nil {
return err
}
} else if err != nil {
return err
} else {
defer gr.Close()
}
if !found {
// if we upgraded from 0.9 to 0.9.1, we may have #HEIGHT instead
// TODO (0.10.0): remove this
gr, _, err = cs.wal.group.Search("#HEIGHT: ", makeHeightSearchFunc(csHeight))
if err == io.EOF {
cs.Logger.Error("Replay: wal.group.Search returned EOF", "#HEIGHT", csHeight)
return nil
} else if err != nil {
return err
} else {
defer gr.Close()
}
// TODO (0.10.0): uncomment
// return errors.New(cmn.Fmt("Cannot replay height %d. WAL does not contain #ENDHEIGHT for %d.", csHeight, csHeight-1))
return errors.New(cmn.Fmt("Cannot replay height %d. WAL does not contain #ENDHEIGHT for %d.", csHeight, csHeight-1))
}
defer gr.Close()
cs.Logger.Info("Catchup by replaying consensus messages", "height", csHeight)
var msg *TimedWALMessage
dec := WALDecoder{gr}
for {
line, err := gr.ReadLine()
if err != nil {
if err == io.EOF {
break
} else {
return err
}
msg, err = dec.Decode()
if err == io.EOF {
break
} else if err != nil {
return err
}
// NOTE: since the priv key is set when the msgs are received
// it will attempt to e.g. double sign, but we can just ignore it
// since the votes will be replayed and we'll get to the next step
if err := cs.readReplayMessage([]byte(line), nil); err != nil {
if err := cs.readReplayMessage(msg, nil); err != nil {
return err
}
}
@@ -221,7 +195,7 @@ func (h *Handshaker) NBlocks() int {
// TODO: retry the handshake/replay if it fails ?
func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
// handshake is done via info request on the query conn
res, err := proxyApp.Query().InfoSync()
res, err := proxyApp.Query().InfoSync(abci.RequestInfo{version.Version})
if err != nil {
return errors.New(cmn.Fmt("Error calling Info: %v", err))
}
@@ -257,7 +231,7 @@ func (h *Handshaker) ReplayBlocks(appHash []byte, appBlockHeight int, proxyApp p
// If appBlockHeight == 0 it means that we are at genesis and hence should send InitChain
if appBlockHeight == 0 {
validators := types.TM2PB.Validators(h.state.Validators)
proxyApp.Consensus().InitChainSync(validators)
proxyApp.Consensus().InitChainSync(abci.RequestInitChain{validators})
}
// First handle edge cases and constraints on the storeBlockHeight
@@ -324,8 +298,11 @@ func (h *Handshaker) ReplayBlocks(appHash []byte, appBlockHeight int, proxyApp p
func (h *Handshaker) replayBlocks(proxyApp proxy.AppConns, appBlockHeight, storeBlockHeight int, mutateState bool) ([]byte, error) {
// App is further behind than it should be, so we need to replay blocks.
// We replay all blocks from appBlockHeight+1.
//
// Note that we don't have an old version of the state,
// so we by-pass state validation/mutation using sm.ExecCommitBlock.
// This also means we won't be saving validator sets if they change during this period.
//
// If mutateState == true, the final block is replayed with h.replayBlock()
var appHash []byte
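Replay no longer parses JSON lines and "#" markers; it decodes framed records through a WALDecoder and skips EndHeightMessage values, with a crc32c (Castagnoli) table available for integrity checks. A self-contained sketch of length-and-checksum framed records in that spirit (the exact field order and sizes are assumptions for illustration, not the real WAL layout):

package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"hash/crc32"
	"io"
)

var crc32c = crc32.MakeTable(crc32.Castagnoli)

// writeRecord frames payload as: crc32c(4 bytes) | length(4 bytes) | payload.
func writeRecord(w io.Writer, payload []byte) error {
	var hdr [8]byte
	binary.BigEndian.PutUint32(hdr[0:4], crc32.Checksum(payload, crc32c))
	binary.BigEndian.PutUint32(hdr[4:8], uint32(len(payload)))
	if _, err := w.Write(hdr[:]); err != nil {
		return err
	}
	_, err := w.Write(payload)
	return err
}

// readRecord reads one frame and verifies the checksum, so a torn or
// corrupted tail of the log is detected on replay.
func readRecord(r io.Reader) ([]byte, error) {
	var hdr [8]byte
	if _, err := io.ReadFull(r, hdr[:]); err != nil {
		return nil, err // io.EOF ends the replay loop
	}
	sum := binary.BigEndian.Uint32(hdr[0:4])
	length := binary.BigEndian.Uint32(hdr[4:8])
	payload := make([]byte, length)
	if _, err := io.ReadFull(r, payload); err != nil {
		return nil, err
	}
	if crc32.Checksum(payload, crc32c) != sum {
		return nil, fmt.Errorf("checksum mismatch")
	}
	return payload, nil
}

func main() {
	var buf bytes.Buffer
	for _, msg := range []string{"vote", "block-part", "end-height"} {
		if err := writeRecord(&buf, []byte(msg)); err != nil {
			panic(err)
		}
	}
	// replay until EOF, as catchupReplay does with WALDecoder.Decode
	for {
		rec, err := readRecord(&buf)
		if err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Println(string(rec))
	}
}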

View File

@@ -4,6 +4,7 @@ import (
"bufio"
"errors"
"fmt"
"io"
"os"
"strconv"
"strings"
@@ -53,12 +54,20 @@ func (cs *ConsensusState) ReplayFile(file string, console bool) error {
defer pb.fp.Close()
var nextN int // apply N msgs in a row
for pb.scanner.Scan() {
var msg *TimedWALMessage
for {
if nextN == 0 && console {
nextN = pb.replayConsoleLoop()
}
if err := pb.cs.readReplayMessage(pb.scanner.Bytes(), newStepCh); err != nil {
msg, err = pb.dec.Decode()
if err == io.EOF {
return nil
} else if err != nil {
return err
}
if err := pb.cs.readReplayMessage(msg, newStepCh); err != nil {
return err
}
@@ -76,9 +85,9 @@ func (cs *ConsensusState) ReplayFile(file string, console bool) error {
type playback struct {
cs *ConsensusState
fp *os.File
scanner *bufio.Scanner
count int // how many lines/msgs into the file are we
fp *os.File
dec *WALDecoder
count int // how many lines/msgs into the file are we
// replays can be reset to beginning
fileName string // so we can close/reopen the file
@@ -91,7 +100,7 @@ func newPlayback(fileName string, fp *os.File, cs *ConsensusState, genState *sm.
fp: fp,
fileName: fileName,
genesisState: genState,
scanner: bufio.NewScanner(fp),
dec: NewWALDecoder(fp),
}
}
@@ -111,13 +120,20 @@ func (pb *playback) replayReset(count int, newStepCh chan interface{}) error {
return err
}
pb.fp = fp
pb.scanner = bufio.NewScanner(fp)
pb.dec = NewWALDecoder(fp)
count = pb.count - count
fmt.Printf("Reseting from %d to %d\n", pb.count, count)
pb.count = 0
pb.cs = newCS
for i := 0; pb.scanner.Scan() && i < count; i++ {
if err := pb.cs.readReplayMessage(pb.scanner.Bytes(), newStepCh); err != nil {
var msg *TimedWALMessage
for i := 0; i < count; i++ {
msg, err = pb.dec.Decode()
if err == io.EOF {
return nil
} else if err != nil {
return err
}
if err := pb.cs.readReplayMessage(msg, newStepCh); err != nil {
return err
}
pb.count += 1
@@ -241,12 +257,15 @@ func newConsensusStateForReplay(config cfg.BaseConfig, csConfig *cfg.ConsensusCo
// Get State
stateDB := dbm.NewDB("state", config.DBBackend, config.DBDir())
state := sm.MakeGenesisStateFromFile(stateDB, config.GenesisFile())
state, err := sm.MakeGenesisStateFromFile(stateDB, config.GenesisFile())
if err != nil {
cmn.Exit(err.Error())
}
// Create proxyAppConn connection (consensus, mempool, query)
clientCreator := proxy.DefaultClientCreator(config.ProxyApp, config.ABCI, config.DBDir())
proxyApp := proxy.NewAppConns(clientCreator, NewHandshaker(state, blockStore))
_, err := proxyApp.Start()
_, err = proxyApp.Start()
if err != nil {
cmn.Exit(cmn.Fmt("Error starting proxy app conns: %v", err))
}

View File

@@ -8,11 +8,11 @@ import (
"io/ioutil"
"os"
"path"
"strings"
"testing"
"time"
"github.com/tendermint/abci/example/dummy"
abci "github.com/tendermint/abci/types"
crypto "github.com/tendermint/go-crypto"
wire "github.com/tendermint/go-wire"
cmn "github.com/tendermint/tmlibs/common"
@@ -43,7 +43,7 @@ func init() {
// after running it (eg. sometimes small_block2 will have 5 block parts, sometimes 6).
// It should only have to be re-run if there is some breaking change to the consensus data structures (eg. blocks, votes)
// or to the behaviour of the app (eg. computes app hash differently)
var data_dir = path.Join(cmn.GoPath, "src/github.com/tendermint/tendermint/consensus", "test_data")
var data_dir = path.Join(cmn.GoPath(), "src/github.com/tendermint/tendermint/consensus", "test_data")
//------------------------------------------------------------------------------------------
// WAL Tests
@@ -59,12 +59,12 @@ var baseStepChanges = []int{3, 6, 8}
var testCases = []*testCase{
newTestCase("empty_block", baseStepChanges), // empty block (has 1 block part)
newTestCase("small_block1", baseStepChanges), // small block with txs in 1 block part
newTestCase("small_block2", []int{3, 11, 13}), // small block with txs across 6 smaller block parts
newTestCase("small_block2", []int{3, 12, 14}), // small block with txs across 6 smaller block parts
}
type testCase struct {
name string
log string //full cs wal
log []byte //full cs wal
stepMap map[int]int8 // map lines of log to privval step
proposeLine int
@@ -99,29 +99,27 @@ func newMapFromChanges(changes []int) map[int]int8 {
return m
}
func readWAL(p string) string {
func readWAL(p string) []byte {
b, err := ioutil.ReadFile(p)
if err != nil {
panic(err)
}
return string(b)
return b
}
func writeWAL(walMsgs string) string {
tempDir := os.TempDir()
walDir := path.Join(tempDir, "/wal"+cmn.RandStr(12))
walFile := path.Join(walDir, "wal")
// Create WAL directory
err := cmn.EnsureDir(walDir, 0700)
func writeWAL(walMsgs []byte) string {
walFile, err := ioutil.TempFile("", "wal")
if err != nil {
panic(err)
panic(fmt.Errorf("failed to create temp WAL file: %v", err))
}
// Write the needed WAL to file
err = cmn.WriteFile(walFile, []byte(walMsgs), 0600)
_, err = walFile.Write(walMsgs)
if err != nil {
panic(err)
panic(fmt.Errorf("failed to write to temp WAL file: %v", err))
}
return walFile
if err := walFile.Close(); err != nil {
panic(fmt.Errorf("failed to close temp WAL file: %v", err))
}
return walFile.Name()
}
func waitForBlock(newBlockCh chan interface{}, thisCase *testCase, i int) {
@@ -162,11 +160,11 @@ LOOP:
cs.Wait()
}
func toPV(pv PrivValidator) *types.PrivValidator {
return pv.(*types.PrivValidator)
func toPV(pv types.PrivValidator) *types.PrivValidatorFS {
return pv.(*types.PrivValidatorFS)
}
func setupReplayTest(t *testing.T, thisCase *testCase, nLines int, crashAfter bool) (*ConsensusState, chan interface{}, string, string) {
func setupReplayTest(t *testing.T, thisCase *testCase, nLines int, crashAfter bool) (*ConsensusState, chan interface{}, []byte, string) {
t.Log("-------------------------------------")
t.Logf("Starting replay test %v (of %d lines of WAL). Crash after = %v", thisCase.name, nLines, crashAfter)
@@ -175,11 +173,13 @@ func setupReplayTest(t *testing.T, thisCase *testCase, nLines int, crashAfter bo
lineStep -= 1
}
split := strings.Split(thisCase.log, "\n")
split := bytes.Split(thisCase.log, walSeparator)
lastMsg := split[nLines]
// we write those lines up to (not including) one with the signature
walFile := writeWAL(strings.Join(split[:nLines], "\n") + "\n")
b := bytes.Join(split[:nLines], walSeparator)
b = append(b, walSeparator...)
walFile := writeWAL(b)
cs := fixedConsensusStateDummy()
@@ -194,14 +194,19 @@ func setupReplayTest(t *testing.T, thisCase *testCase, nLines int, crashAfter bo
return cs, newBlockCh, lastMsg, walFile
}
func readTimedWALMessage(t *testing.T, walMsg string) TimedWALMessage {
var err error
var msg TimedWALMessage
wire.ReadJSON(&msg, []byte(walMsg), &err)
func readTimedWALMessage(t *testing.T, rawMsg []byte) TimedWALMessage {
b := bytes.NewBuffer(rawMsg)
// because rawMsg does not contain a separator and WALDecoder#Decode expects it
_, err := b.Write(walSeparator)
if err != nil {
t.Fatal(err)
}
dec := NewWALDecoder(b)
msg, err := dec.Decode()
if err != nil {
t.Fatalf("Error reading json data: %v", err)
}
return msg
return *msg
}
//-----------------------------------------------
@@ -210,10 +215,15 @@ func readTimedWALMessage(t *testing.T, walMsg string) TimedWALMessage {
func TestWALCrashAfterWrite(t *testing.T) {
for _, thisCase := range testCases {
split := strings.Split(thisCase.log, "\n")
for i := 0; i < len(split)-1; i++ {
cs, newBlockCh, _, walFile := setupReplayTest(t, thisCase, i+1, true)
runReplayTest(t, cs, walFile, newBlockCh, thisCase, i+1)
splitSize := bytes.Count(thisCase.log, walSeparator)
for i := 0; i < splitSize-1; i++ {
t.Run(fmt.Sprintf("%s:%d", thisCase.name, i), func(t *testing.T) {
cs, newBlockCh, _, walFile := setupReplayTest(t, thisCase, i+1, true)
cs.config.TimeoutPropose = 100
runReplayTest(t, cs, walFile, newBlockCh, thisCase, i+1)
// cleanup
os.Remove(walFile)
})
}
}
}
@@ -225,14 +235,19 @@ func TestWALCrashAfterWrite(t *testing.T) {
func TestWALCrashBeforeWritePropose(t *testing.T) {
for _, thisCase := range testCases {
lineNum := thisCase.proposeLine
// setup replay test where last message is a proposal
cs, newBlockCh, proposalMsg, walFile := setupReplayTest(t, thisCase, lineNum, false)
msg := readTimedWALMessage(t, proposalMsg)
proposal := msg.Msg.(msgInfo).Msg.(*ProposalMessage)
// Set LastSig
toPV(cs.privValidator).LastSignBytes = types.SignBytes(cs.state.ChainID, proposal.Proposal)
toPV(cs.privValidator).LastSignature = proposal.Proposal.Signature
runReplayTest(t, cs, walFile, newBlockCh, thisCase, lineNum)
t.Run(fmt.Sprintf("%s:%d", thisCase.name, lineNum), func(t *testing.T) {
// setup replay test where last message is a proposal
cs, newBlockCh, proposalMsg, walFile := setupReplayTest(t, thisCase, lineNum, false)
cs.config.TimeoutPropose = 100
msg := readTimedWALMessage(t, proposalMsg)
proposal := msg.Msg.(msgInfo).Msg.(*ProposalMessage)
// Set LastSig
toPV(cs.privValidator).LastSignBytes = types.SignBytes(cs.state.ChainID, proposal.Proposal)
toPV(cs.privValidator).LastSignature = proposal.Proposal.Signature
runReplayTest(t, cs, walFile, newBlockCh, thisCase, lineNum)
// cleanup
os.Remove(walFile)
})
}
}
@@ -267,8 +282,6 @@ func testReplayCrashBeforeWriteVote(t *testing.T, thisCase *testCase, lineNum in
var (
NUM_BLOCKS = 6 // number of blocks in the test_data/many_blocks.cswal
mempool = types.MockMempool{}
testPartSize int
)
//---------------------------------------
@@ -316,11 +329,10 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
if err != nil {
t.Fatal(err)
}
walFile := writeWAL(string(walBody))
walFile := writeWAL(walBody)
config.Consensus.SetWalFile(walFile)
privVal := types.LoadPrivValidator(config.PrivValidatorFile())
testPartSize = config.Consensus.BlockPartSize
privVal := types.LoadPrivValidatorFS(config.PrivValidatorFile())
wal, err := NewWAL(walFile, false)
if err != nil {
@@ -335,7 +347,7 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
t.Fatalf(err.Error())
}
state, store := stateAndStore(config, privVal.PubKey)
state, store := stateAndStore(config, privVal.GetPubKey())
store.chain = chain
store.commits = commits
@@ -349,7 +361,7 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
// run nBlocks against a new client to build up the app state.
// use a throwaway tendermint state
proxyApp := proxy.NewAppConns(clientCreator2, nil)
state, _ := stateAndStore(config, privVal.PubKey)
state, _ := stateAndStore(config, privVal.GetPubKey())
buildAppStateFromChain(proxyApp, state, chain, nBlocks, mode)
}
@@ -361,7 +373,7 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
}
// get the latest app hash from the app
res, err := proxyApp.Query().InfoSync()
res, err := proxyApp.Query().InfoSync(abci.RequestInfo{""})
if err != nil {
t.Fatal(err)
}
@@ -384,6 +396,7 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
}
func applyBlock(st *sm.State, blk *types.Block, proxyApp proxy.AppConns) {
testPartSize := st.Params.BlockPartSizeBytes
err := st.ApplyBlock(nil, proxyApp.Consensus(), blk, blk.MakePartSet(testPartSize).Header(), mempool)
if err != nil {
panic(err)
@@ -398,7 +411,7 @@ func buildAppStateFromChain(proxyApp proxy.AppConns,
}
validators := types.TM2PB.Validators(state.Validators)
proxyApp.Consensus().InitChainSync(validators)
proxyApp.Consensus().InitChainSync(abci.RequestInitChain{validators})
defer proxyApp.Stop()
switch mode {
@@ -432,7 +445,7 @@ func buildTMStateFromChain(config *cfg.Config, state *sm.State, chain []*types.B
defer proxyApp.Stop()
validators := types.TM2PB.Validators(state.Validators)
proxyApp.Consensus().InitChainSync(validators)
proxyApp.Consensus().InitChainSync(abci.RequestInitChain{validators})
var latestAppHash []byte
@@ -466,7 +479,7 @@ func buildTMStateFromChain(config *cfg.Config, state *sm.State, chain []*types.B
func makeBlockchainFromWAL(wal *WAL) ([]*types.Block, []*types.Commit, error) {
// Search for height marker
gr, found, err := wal.group.Search("#ENDHEIGHT: ", makeHeightSearchFunc(0))
gr, found, err := wal.SearchForEndHeight(0)
if err != nil {
return nil, nil, err
}
@@ -480,20 +493,17 @@ func makeBlockchainFromWAL(wal *WAL) ([]*types.Block, []*types.Commit, error) {
var blockParts *types.PartSet
var blocks []*types.Block
var commits []*types.Commit
for {
line, err := gr.ReadLine()
if err != nil {
if err == io.EOF {
break
} else {
return nil, nil, err
}
}
piece, err := readPieceFromWAL([]byte(line))
if err != nil {
dec := NewWALDecoder(gr)
for {
msg, err := dec.Decode()
if err == io.EOF {
break
} else if err != nil {
return nil, nil, err
}
piece := readPieceFromWAL(msg)
if piece == nil {
continue
}
@@ -503,7 +513,7 @@ func makeBlockchainFromWAL(wal *WAL) ([]*types.Block, []*types.Commit, error) {
// if it's not the first one, we have a full block
if blockParts != nil {
var n int
block := wire.ReadBinary(&types.Block{}, blockParts.GetReader(), types.MaxBlockSize, &n, &err).(*types.Block)
block := wire.ReadBinary(&types.Block{}, blockParts.GetReader(), 0, &n, &err).(*types.Block)
blocks = append(blocks, block)
}
blockParts = types.NewPartSetFromHeader(*p)
@@ -524,22 +534,15 @@ func makeBlockchainFromWAL(wal *WAL) ([]*types.Block, []*types.Commit, error) {
}
// grab the last block too
var n int
block := wire.ReadBinary(&types.Block{}, blockParts.GetReader(), types.MaxBlockSize, &n, &err).(*types.Block)
block := wire.ReadBinary(&types.Block{}, blockParts.GetReader(), 0, &n, &err).(*types.Block)
blocks = append(blocks, block)
return blocks, commits, nil
}
func readPieceFromWAL(msgBytes []byte) (interface{}, error) {
// Skip over empty and meta lines
if len(msgBytes) == 0 || msgBytes[0] == '#' {
return nil, nil
}
var err error
var msg TimedWALMessage
wire.ReadJSON(&msg, msgBytes, &err)
if err != nil {
fmt.Println("MsgBytes:", msgBytes, string(msgBytes))
return nil, fmt.Errorf("Error reading json data: %v", err)
func readPieceFromWAL(msg *TimedWALMessage) interface{} {
// skip meta messages
if _, ok := msg.Msg.(EndHeightMessage); ok {
return nil
}
// for logging
@@ -547,23 +550,24 @@ func readPieceFromWAL(msgBytes []byte) (interface{}, error) {
case msgInfo:
switch msg := m.Msg.(type) {
case *ProposalMessage:
return &msg.Proposal.BlockPartsHeader, nil
return &msg.Proposal.BlockPartsHeader
case *BlockPartMessage:
return msg.Part, nil
return msg.Part
case *VoteMessage:
return msg.Vote, nil
return msg.Vote
}
}
return nil, nil
return nil
}
// fresh state and mock store
func stateAndStore(config *cfg.Config, pubKey crypto.PubKey) (*sm.State, *mockBlockStore) {
stateDB := dbm.NewMemDB()
state := sm.MakeGenesisStateFromFile(stateDB, config.GenesisFile())
state, _ := sm.MakeGenesisStateFromFile(stateDB, config.GenesisFile())
state.SetLogger(log.TestingLogger().With("module", "state"))
store := NewMockBlockStore(config)
store := NewMockBlockStore(config, state.Params)
return state, store
}
@@ -572,13 +576,14 @@ func stateAndStore(config *cfg.Config, pubKey crypto.PubKey) (*sm.State, *mockBl
type mockBlockStore struct {
config *cfg.Config
params types.ConsensusParams
chain []*types.Block
commits []*types.Commit
}
// TODO: NewBlockStore(db.NewMemDB) ...
func NewMockBlockStore(config *cfg.Config) *mockBlockStore {
return &mockBlockStore{config, nil, nil}
func NewMockBlockStore(config *cfg.Config, params types.ConsensusParams) *mockBlockStore {
return &mockBlockStore{config, params, nil, nil}
}
func (bs *mockBlockStore) Height() int { return len(bs.chain) }
@@ -586,7 +591,7 @@ func (bs *mockBlockStore) LoadBlock(height int) *types.Block { return bs.chain[h
func (bs *mockBlockStore) LoadBlockMeta(height int) *types.BlockMeta {
block := bs.chain[height-1]
return &types.BlockMeta{
BlockID: types.BlockID{block.Hash(), block.MakePartSet(bs.config.Consensus.BlockPartSize).Header()},
BlockID: types.BlockID{block.Hash(), block.MakePartSet(bs.params.BlockPartSizeBytes).Header()},
Header: block.Header,
}
}
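The replay tests now treat the WAL as raw bytes: they split it on walSeparator, rejoin the first n records, and write them to a temp file with ioutil.TempFile. A small standalone sketch of that helper pattern (the separator value below is a placeholder, not the real walSeparator):

package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"os"
)

var walSeparator = []byte{0x55, 0xAA} // placeholder, not the real value

// truncateWAL keeps only the first n records, re-appending a trailing
// separator so the decoder sees complete frames.
func truncateWAL(log []byte, n int) []byte {
	split := bytes.Split(log, walSeparator)
	b := bytes.Join(split[:n], walSeparator)
	return append(b, walSeparator...)
}

func main() {
	log := bytes.Join([][]byte{[]byte("msg1"), []byte("msg2"), []byte("msg3")}, walSeparator)
	short := truncateWAL(log, 2)

	f, err := ioutil.TempFile("", "wal")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.Write(short); err != nil {
		panic(err)
	}
	if err := f.Close(); err != nil {
		panic(err)
	}
	fmt.Println("wrote", len(short), "bytes to", f.Name())
}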

View File

@@ -4,8 +4,9 @@ import (
"bytes"
"errors"
"fmt"
"path"
"path/filepath"
"reflect"
"runtime/debug"
"sync"
"time"
@@ -16,6 +17,7 @@ import (
"github.com/tendermint/tmlibs/log"
cfg "github.com/tendermint/tendermint/config"
cstypes "github.com/tendermint/tendermint/consensus/types"
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
@@ -38,124 +40,6 @@ var (
ErrVoteHeightMismatch = errors.New("Error vote height mismatch")
)
//-----------------------------------------------------------------------------
// RoundStepType enum type
// RoundStepType enumerates the state of the consensus state machine
type RoundStepType uint8 // These must be numeric, ordered.
const (
RoundStepNewHeight = RoundStepType(0x01) // Wait til CommitTime + timeoutCommit
RoundStepNewRound = RoundStepType(0x02) // Setup new round and go to RoundStepPropose
RoundStepPropose = RoundStepType(0x03) // Did propose, gossip proposal
RoundStepPrevote = RoundStepType(0x04) // Did prevote, gossip prevotes
RoundStepPrevoteWait = RoundStepType(0x05) // Did receive any +2/3 prevotes, start timeout
RoundStepPrecommit = RoundStepType(0x06) // Did precommit, gossip precommits
RoundStepPrecommitWait = RoundStepType(0x07) // Did receive any +2/3 precommits, start timeout
RoundStepCommit = RoundStepType(0x08) // Entered commit state machine
// NOTE: RoundStepNewHeight acts as RoundStepCommitWait.
)
// String returns a string
func (rs RoundStepType) String() string {
switch rs {
case RoundStepNewHeight:
return "RoundStepNewHeight"
case RoundStepNewRound:
return "RoundStepNewRound"
case RoundStepPropose:
return "RoundStepPropose"
case RoundStepPrevote:
return "RoundStepPrevote"
case RoundStepPrevoteWait:
return "RoundStepPrevoteWait"
case RoundStepPrecommit:
return "RoundStepPrecommit"
case RoundStepPrecommitWait:
return "RoundStepPrecommitWait"
case RoundStepCommit:
return "RoundStepCommit"
default:
return "RoundStepUnknown" // Cannot panic.
}
}
//-----------------------------------------------------------------------------
// RoundState defines the internal consensus state.
// It is Immutable when returned from ConsensusState.GetRoundState()
// TODO: Actually, only the top pointer is copied,
// so access to field pointers is still racey
type RoundState struct {
Height int // Height we are working on
Round int
Step RoundStepType
StartTime time.Time
CommitTime time.Time // Subjective time when +2/3 precommits for Block at Round were found
Validators *types.ValidatorSet
Proposal *types.Proposal
ProposalBlock *types.Block
ProposalBlockParts *types.PartSet
LockedRound int
LockedBlock *types.Block
LockedBlockParts *types.PartSet
Votes *HeightVoteSet
CommitRound int //
LastCommit *types.VoteSet // Last precommits at Height-1
LastValidators *types.ValidatorSet
}
// RoundStateEvent returns the H/R/S of the RoundState as an event.
func (rs *RoundState) RoundStateEvent() types.EventDataRoundState {
edrs := types.EventDataRoundState{
Height: rs.Height,
Round: rs.Round,
Step: rs.Step.String(),
RoundState: rs,
}
return edrs
}
// String returns a string
func (rs *RoundState) String() string {
return rs.StringIndented("")
}
// StringIndented returns a string
func (rs *RoundState) StringIndented(indent string) string {
return fmt.Sprintf(`RoundState{
%s H:%v R:%v S:%v
%s StartTime: %v
%s CommitTime: %v
%s Validators: %v
%s Proposal: %v
%s ProposalBlock: %v %v
%s LockedRound: %v
%s LockedBlock: %v %v
%s Votes: %v
%s LastCommit: %v
%s LastValidators: %v
%s}`,
indent, rs.Height, rs.Round, rs.Step,
indent, rs.StartTime,
indent, rs.CommitTime,
indent, rs.Validators.StringIndented(indent+" "),
indent, rs.Proposal,
indent, rs.ProposalBlockParts.StringShort(), rs.ProposalBlock.StringShort(),
indent, rs.LockedRound,
indent, rs.LockedBlockParts.StringShort(), rs.LockedBlock.StringShort(),
indent, rs.Votes.StringIndented(indent+" "),
indent, rs.LastCommit.StringShort(),
indent, rs.LastValidators.StringIndented(indent+" "),
indent)
}
// StringShort returns a string
func (rs *RoundState) StringShort() string {
return fmt.Sprintf(`RoundState{H:%v R:%v S:%v ST:%v}`,
rs.Height, rs.Round, rs.Step, rs.StartTime)
}
//-----------------------------------------------------------------------------
var (
@@ -170,24 +54,16 @@ type msgInfo struct {
// internally generated messages which may update the state
type timeoutInfo struct {
Duration time.Duration `json:"duration"`
Height int `json:"height"`
Round int `json:"round"`
Step RoundStepType `json:"step"`
Duration time.Duration `json:"duration"`
Height int `json:"height"`
Round int `json:"round"`
Step cstypes.RoundStepType `json:"step"`
}
func (ti *timeoutInfo) String() string {
return fmt.Sprintf("%v ; %d/%d %v", ti.Duration, ti.Height, ti.Round, ti.Step)
}
// PrivValidator is a validator that can sign votes and proposals.
type PrivValidator interface {
GetAddress() []byte
SignVote(chainID string, vote *types.Vote) error
SignProposal(chainID string, proposal *types.Proposal) error
SignHeartbeat(chainID string, heartbeat *types.Heartbeat) error
}
// ConsensusState handles execution of the consensus algorithm.
// It processes votes and proposals, and upon reaching agreement,
// commits blocks to the chain and executes them against the application.
@@ -197,7 +73,7 @@ type ConsensusState struct {
// config details
config *cfg.ConsensusConfig
privValidator PrivValidator // for signing votes
privValidator types.PrivValidator // for signing votes
// services for creating and executing blocks
proxyAppConn proxy.AppConnConsensus
@@ -206,7 +82,7 @@ type ConsensusState struct {
// internal state
mtx sync.Mutex
RoundState
cstypes.RoundState
state *sm.State // State until height-1.
// state changes may be triggered by msgs from peers,
@@ -221,8 +97,9 @@ type ConsensusState struct {
// a Write-Ahead Log ensures we can recover from any kind of crash
// and helps us avoid signing conflicting votes
wal *WAL
replayMode bool // so we don't log signing errors during replay
wal *WAL
replayMode bool // so we don't log signing errors during replay
doWALCatchup bool // determines if we even try to do the catchup
// for tests where we want to limit the number of transitions the state makes
nSteps int
@@ -247,6 +124,7 @@ func NewConsensusState(config *cfg.ConsensusConfig, state *sm.State, proxyAppCon
internalMsgQueue: make(chan msgInfo, msgQueueSize),
timeoutTicker: NewTimeoutTicker(),
done: make(chan struct{}),
doWALCatchup: true,
}
// set function defaults (may be overwritten before calling Start)
cs.decideProposal = cs.defaultDecideProposal
@@ -289,13 +167,13 @@ func (cs *ConsensusState) GetState() *sm.State {
}
// GetRoundState returns a copy of the internal consensus state.
func (cs *ConsensusState) GetRoundState() *RoundState {
func (cs *ConsensusState) GetRoundState() *cstypes.RoundState {
cs.mtx.Lock()
defer cs.mtx.Unlock()
return cs.getRoundState()
}
func (cs *ConsensusState) getRoundState() *RoundState {
func (cs *ConsensusState) getRoundState() *cstypes.RoundState {
rs := cs.RoundState // copy
return &rs
}
@@ -308,7 +186,7 @@ func (cs *ConsensusState) GetValidators() (int, []*types.Validator) {
}
// SetPrivValidator sets the private validator account for signing votes.
func (cs *ConsensusState) SetPrivValidator(priv PrivValidator) {
func (cs *ConsensusState) SetPrivValidator(priv types.PrivValidator) {
cs.mtx.Lock()
defer cs.mtx.Unlock()
cs.privValidator = priv
@@ -350,10 +228,12 @@ func (cs *ConsensusState) OnStart() error {
// we may have lost some votes if the process crashed
// reload from consensus log to catchup
if err := cs.catchupReplay(cs.Height); err != nil {
cs.Logger.Error("Error on catchup replay. Proceeding to start ConsensusState anyway", "err", err.Error())
// NOTE: if we ever do return an error here,
// make sure to stop the timeoutTicker
if cs.doWALCatchup {
if err := cs.catchupReplay(cs.Height); err != nil {
cs.Logger.Error("Error on catchup replay. Proceeding to start ConsensusState anyway", "err", err.Error())
// NOTE: if we ever do return an error here,
// make sure to stop the timeoutTicker
}
}
// now start the receiveRoutine
@@ -394,7 +274,7 @@ func (cs *ConsensusState) Wait() {
// OpenWAL opens a file to log all consensus messages and timeouts for deterministic accountability
func (cs *ConsensusState) OpenWAL(walFile string) (err error) {
err = cmn.EnsureDir(path.Dir(walFile), 0700)
err = cmn.EnsureDir(filepath.Dir(walFile), 0700)
if err != nil {
cs.Logger.Error("Error ensuring ConsensusState wal dir", "err", err.Error())
return err
@@ -476,20 +356,20 @@ func (cs *ConsensusState) updateHeight(height int) {
cs.Height = height
}
func (cs *ConsensusState) updateRoundStep(round int, step RoundStepType) {
func (cs *ConsensusState) updateRoundStep(round int, step cstypes.RoundStepType) {
cs.Round = round
cs.Step = step
}
// enterNewRound(height, 0) at cs.StartTime.
func (cs *ConsensusState) scheduleRound0(rs *RoundState) {
func (cs *ConsensusState) scheduleRound0(rs *cstypes.RoundState) {
//cs.Logger.Info("scheduleRound0", "now", time.Now(), "startTime", cs.StartTime)
sleepDuration := rs.StartTime.Sub(time.Now())
cs.scheduleTimeout(sleepDuration, rs.Height, 0, RoundStepNewHeight)
cs.scheduleTimeout(sleepDuration, rs.Height, 0, cstypes.RoundStepNewHeight)
}
// Attempt to schedule a timeout (by sending timeoutInfo on the tickChan)
func (cs *ConsensusState) scheduleTimeout(duration time.Duration, height, round int, step RoundStepType) {
func (cs *ConsensusState) scheduleTimeout(duration time.Duration, height, round int, step cstypes.RoundStepType) {
cs.timeoutTicker.ScheduleTimeout(timeoutInfo{duration, height, round, step})
}
@@ -514,7 +394,7 @@ func (cs *ConsensusState) reconstructLastCommit(state *sm.State) {
return
}
seenCommit := cs.blockStore.LoadSeenCommit(state.LastBlockHeight)
lastPrecommits := types.NewVoteSet(cs.state.ChainID, state.LastBlockHeight, seenCommit.Round(), types.VoteTypePrecommit, state.LastValidators)
lastPrecommits := types.NewVoteSet(state.ChainID, state.LastBlockHeight, seenCommit.Round(), types.VoteTypePrecommit, state.LastValidators)
for _, precommit := range seenCommit.Precommits {
if precommit == nil {
continue
@@ -531,7 +411,7 @@ func (cs *ConsensusState) reconstructLastCommit(state *sm.State) {
}
// Updates ConsensusState and increments height to match that of state.
// The round becomes 0 and cs.Step becomes RoundStepNewHeight.
// The round becomes 0 and cs.Step becomes cstypes.RoundStepNewHeight.
func (cs *ConsensusState) updateToState(state *sm.State) {
if cs.CommitRound > -1 && 0 < cs.Height && cs.Height != state.LastBlockHeight {
cmn.PanicSanity(cmn.Fmt("updateToState() expected state height of %v but found %v",
@@ -567,7 +447,7 @@ func (cs *ConsensusState) updateToState(state *sm.State) {
// RoundState fields
cs.updateHeight(height)
cs.updateRoundStep(0, RoundStepNewHeight)
cs.updateRoundStep(0, cstypes.RoundStepNewHeight)
if cs.CommitTime.IsZero() {
// "Now" makes it easier to sync up dev nodes.
// We add timeoutCommit to allow transactions
@@ -585,7 +465,7 @@ func (cs *ConsensusState) updateToState(state *sm.State) {
cs.LockedRound = 0
cs.LockedBlock = nil
cs.LockedBlockParts = nil
cs.Votes = NewHeightVoteSet(state.ChainID, height, validators)
cs.Votes = cstypes.NewHeightVoteSet(state.ChainID, height, validators)
cs.CommitRound = -1
cs.LastCommit = lastPrecommits
cs.LastValidators = state.LastValidators
@@ -617,7 +497,7 @@ func (cs *ConsensusState) newStep() {
func (cs *ConsensusState) receiveRoutine(maxSteps int) {
defer func() {
if r := recover(); r != nil {
cs.Logger.Error("CONSENSUS FAILURE!!!", "err", r)
cs.Logger.Error("CONSENSUS FAILURE!!!", "err", r, "stack", string(debug.Stack()))
}
}()
@@ -706,7 +586,7 @@ func (cs *ConsensusState) handleMsg(mi msgInfo) {
}
}
func (cs *ConsensusState) handleTimeout(ti timeoutInfo, rs RoundState) {
func (cs *ConsensusState) handleTimeout(ti timeoutInfo, rs cstypes.RoundState) {
cs.Logger.Debug("Received tock", "timeout", ti.Duration, "height", ti.Height, "round", ti.Round, "step", ti.Step)
// timeouts must be for current height, round, step
@@ -720,19 +600,19 @@ func (cs *ConsensusState) handleTimeout(ti timeoutInfo, rs RoundState) {
defer cs.mtx.Unlock()
switch ti.Step {
case RoundStepNewHeight:
case cstypes.RoundStepNewHeight:
// NewRound event fired from enterNewRound.
// XXX: should we fire timeout here (for timeout commit)?
cs.enterNewRound(ti.Height, 0)
case RoundStepNewRound:
case cstypes.RoundStepNewRound:
cs.enterPropose(ti.Height, 0)
case RoundStepPropose:
case cstypes.RoundStepPropose:
types.FireEventTimeoutPropose(cs.evsw, cs.RoundStateEvent())
cs.enterPrevote(ti.Height, ti.Round)
case RoundStepPrevoteWait:
case cstypes.RoundStepPrevoteWait:
types.FireEventTimeoutWait(cs.evsw, cs.RoundStateEvent())
cs.enterPrecommit(ti.Height, ti.Round)
case RoundStepPrecommitWait:
case cstypes.RoundStepPrecommitWait:
types.FireEventTimeoutWait(cs.evsw, cs.RoundStateEvent())
cs.enterNewRound(ti.Height, ti.Round+1)
default:
@@ -759,7 +639,7 @@ func (cs *ConsensusState) handleTxsAvailable(height int) {
// Enter: +2/3 prevotes any or +2/3 precommits for block or any from (height, round)
// NOTE: cs.StartTime was already set for height.
func (cs *ConsensusState) enterNewRound(height int, round int) {
if cs.Height != height || round < cs.Round || (cs.Round == round && cs.Step != RoundStepNewHeight) {
if cs.Height != height || round < cs.Round || (cs.Round == round && cs.Step != cstypes.RoundStepNewHeight) {
cs.Logger.Debug(cmn.Fmt("enterNewRound(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
return
}
@@ -780,7 +660,7 @@ func (cs *ConsensusState) enterNewRound(height int, round int) {
// Setup new round
// we don't fire newStep for this step,
// but we fire an event, so update the round step first
cs.updateRoundStep(round, RoundStepNewRound)
cs.updateRoundStep(round, cstypes.RoundStepNewRound)
cs.Validators = validators
if round == 0 {
// We've already reset these upon new height,
@@ -801,7 +681,7 @@ func (cs *ConsensusState) enterNewRound(height int, round int) {
waitForTxs := cs.config.WaitForTxs() && round == 0 && !cs.needProofBlock(height)
if waitForTxs {
if cs.config.CreateEmptyBlocksInterval > 0 {
cs.scheduleTimeout(cs.config.EmptyBlocksInterval(), height, round, RoundStepNewRound)
cs.scheduleTimeout(cs.config.EmptyBlocksInterval(), height, round, cstypes.RoundStepNewRound)
}
go cs.proposalHeartbeat(height, round)
} else {
@@ -831,10 +711,11 @@ func (cs *ConsensusState) proposalHeartbeat(height, round int) {
// not a validator
valIndex = -1
}
chainID := cs.state.ChainID
for {
rs := cs.GetRoundState()
// if we've already moved on, no need to send more heartbeats
if rs.Step > RoundStepNewRound || rs.Round > round || rs.Height > height {
if rs.Step > cstypes.RoundStepNewRound || rs.Round > round || rs.Height > height {
return
}
heartbeat := &types.Heartbeat{
@@ -844,7 +725,7 @@ func (cs *ConsensusState) proposalHeartbeat(height, round int) {
ValidatorAddress: addr,
ValidatorIndex: valIndex,
}
cs.privValidator.SignHeartbeat(cs.state.ChainID, heartbeat)
cs.privValidator.SignHeartbeat(chainID, heartbeat)
heartbeatEvent := types.EventDataProposalHeartbeat{heartbeat}
types.FireEventProposalHeartbeat(cs.evsw, heartbeatEvent)
counter += 1
@@ -856,7 +737,7 @@ func (cs *ConsensusState) proposalHeartbeat(height, round int) {
// Enter (CreateEmptyBlocks, CreateEmptyBlocksInterval > 0 ): after enterNewRound(height,round), after timeout of CreateEmptyBlocksInterval
// Enter (!CreateEmptyBlocks) : after enterNewRound(height,round), once txs are in the mempool
func (cs *ConsensusState) enterPropose(height int, round int) {
if cs.Height != height || round < cs.Round || (cs.Round == round && RoundStepPropose <= cs.Step) {
if cs.Height != height || round < cs.Round || (cs.Round == round && cstypes.RoundStepPropose <= cs.Step) {
cs.Logger.Debug(cmn.Fmt("enterPropose(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
return
}
@@ -864,7 +745,7 @@ func (cs *ConsensusState) enterPropose(height int, round int) {
defer func() {
// Done enterPropose:
cs.updateRoundStep(round, RoundStepPropose)
cs.updateRoundStep(round, cstypes.RoundStepPropose)
cs.newStep()
// If we have the whole proposal + POL, then goto Prevote now.
@@ -876,7 +757,7 @@ func (cs *ConsensusState) enterPropose(height int, round int) {
}()
// If we don't get the proposal and all block parts quick enough, enterPrevote
cs.scheduleTimeout(cs.config.Propose(round), height, round, RoundStepPropose)
cs.scheduleTimeout(cs.config.Propose(round), height, round, cstypes.RoundStepPropose)
// Nothing more to do if we're not a validator
if cs.privValidator == nil {
@@ -921,8 +802,7 @@ func (cs *ConsensusState) defaultDecideProposal(height, round int) {
// Make proposal
polRound, polBlockID := cs.Votes.POLInfo()
proposal := types.NewProposal(height, round, blockParts.Header(), polRound, polBlockID)
err := cs.privValidator.SignProposal(cs.state.ChainID, proposal)
if err == nil {
if err := cs.privValidator.SignProposal(cs.state.ChainID, proposal); err == nil {
// Set fields
/* fields set by setProposal and addBlockPart
cs.Proposal = proposal
@@ -981,9 +861,9 @@ func (cs *ConsensusState) createProposalBlock() (block *types.Block, blockParts
// Mempool validated transactions
txs := cs.mempool.Reap(cs.config.MaxBlockSizeTxs)
return types.MakeBlock(cs.Height, cs.state.ChainID, txs, commit,
cs.state.LastBlockID, cs.state.Validators.Hash(), cs.state.AppHash, cs.config.BlockPartSize)
cs.state.LastBlockID, cs.state.Validators.Hash(),
cs.state.AppHash, cs.state.Params.BlockPartSizeBytes)
}
// Enter: `timeoutPropose` after entering Propose.
@@ -992,14 +872,14 @@ func (cs *ConsensusState) createProposalBlock() (block *types.Block, blockParts
// Prevote for LockedBlock if we're locked, or ProposalBlock if valid.
// Otherwise vote nil.
func (cs *ConsensusState) enterPrevote(height int, round int) {
if cs.Height != height || round < cs.Round || (cs.Round == round && RoundStepPrevote <= cs.Step) {
if cs.Height != height || round < cs.Round || (cs.Round == round && cstypes.RoundStepPrevote <= cs.Step) {
cs.Logger.Debug(cmn.Fmt("enterPrevote(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
return
}
defer func() {
// Done enterPrevote:
cs.updateRoundStep(round, RoundStepPrevote)
cs.updateRoundStep(round, cstypes.RoundStepPrevote)
cs.newStep()
}()
@@ -1054,7 +934,7 @@ func (cs *ConsensusState) defaultDoPrevote(height int, round int) {
// Enter: any +2/3 prevotes at next round.
func (cs *ConsensusState) enterPrevoteWait(height int, round int) {
if cs.Height != height || round < cs.Round || (cs.Round == round && RoundStepPrevoteWait <= cs.Step) {
if cs.Height != height || round < cs.Round || (cs.Round == round && cstypes.RoundStepPrevoteWait <= cs.Step) {
cs.Logger.Debug(cmn.Fmt("enterPrevoteWait(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
return
}
@@ -1065,12 +945,12 @@ func (cs *ConsensusState) enterPrevoteWait(height int, round int) {
defer func() {
// Done enterPrevoteWait:
cs.updateRoundStep(round, RoundStepPrevoteWait)
cs.updateRoundStep(round, cstypes.RoundStepPrevoteWait)
cs.newStep()
}()
// Wait for some more prevotes; enterPrecommit
cs.scheduleTimeout(cs.config.Prevote(round), height, round, RoundStepPrevoteWait)
cs.scheduleTimeout(cs.config.Prevote(round), height, round, cstypes.RoundStepPrevoteWait)
}
// Enter: `timeoutPrevote` after any +2/3 prevotes.
@@ -1080,7 +960,7 @@ func (cs *ConsensusState) enterPrevoteWait(height int, round int) {
// else, unlock an existing lock and precommit nil if +2/3 of prevotes were nil,
// else, precommit nil otherwise.
func (cs *ConsensusState) enterPrecommit(height int, round int) {
if cs.Height != height || round < cs.Round || (cs.Round == round && RoundStepPrecommit <= cs.Step) {
if cs.Height != height || round < cs.Round || (cs.Round == round && cstypes.RoundStepPrecommit <= cs.Step) {
cs.Logger.Debug(cmn.Fmt("enterPrecommit(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
return
}
@@ -1089,7 +969,7 @@ func (cs *ConsensusState) enterPrecommit(height int, round int) {
defer func() {
// Done enterPrecommit:
cs.updateRoundStep(round, RoundStepPrecommit)
cs.updateRoundStep(round, cstypes.RoundStepPrecommit)
cs.newStep()
}()
@@ -1173,7 +1053,7 @@ func (cs *ConsensusState) enterPrecommit(height int, round int) {
// Enter: any +2/3 precommits for next round.
func (cs *ConsensusState) enterPrecommitWait(height int, round int) {
if cs.Height != height || round < cs.Round || (cs.Round == round && RoundStepPrecommitWait <= cs.Step) {
if cs.Height != height || round < cs.Round || (cs.Round == round && cstypes.RoundStepPrecommitWait <= cs.Step) {
cs.Logger.Debug(cmn.Fmt("enterPrecommitWait(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
return
}
@@ -1184,18 +1064,18 @@ func (cs *ConsensusState) enterPrecommitWait(height int, round int) {
defer func() {
// Done enterPrecommitWait:
cs.updateRoundStep(round, RoundStepPrecommitWait)
cs.updateRoundStep(round, cstypes.RoundStepPrecommitWait)
cs.newStep()
}()
// Wait for some more precommits; enterNewRound
cs.scheduleTimeout(cs.config.Precommit(round), height, round, RoundStepPrecommitWait)
cs.scheduleTimeout(cs.config.Precommit(round), height, round, cstypes.RoundStepPrecommitWait)
}
// Enter: +2/3 precommits for block
func (cs *ConsensusState) enterCommit(height int, commitRound int) {
if cs.Height != height || RoundStepCommit <= cs.Step {
if cs.Height != height || cstypes.RoundStepCommit <= cs.Step {
cs.Logger.Debug(cmn.Fmt("enterCommit(%v/%v): Invalid args. Current step: %v/%v/%v", height, commitRound, cs.Height, cs.Round, cs.Step))
return
}
@@ -1204,7 +1084,7 @@ func (cs *ConsensusState) enterCommit(height int, commitRound int) {
defer func() {
// Done enterCommit:
// keep cs.Round the same, commitRound points to the right Precommits set.
cs.updateRoundStep(cs.Round, RoundStepCommit)
cs.updateRoundStep(cs.Round, cstypes.RoundStepCommit)
cs.CommitRound = commitRound
cs.CommitTime = time.Now()
cs.newStep()
@@ -1253,7 +1133,7 @@ func (cs *ConsensusState) tryFinalizeCommit(height int) {
if !cs.ProposalBlock.HashesTo(blockID.Hash) {
// TODO: this happens every time if we're not a validator (ugly logs)
// TODO: ^^ wait, why does it matter that we're a validator?
cs.Logger.Error("Attempt to finalize failed. We don't have the commit block.", "height", height, "proposal-block", cs.ProposalBlock.Hash(), "commit-block", blockID.Hash)
cs.Logger.Info("Attempt to finalize failed. We don't have the commit block.", "height", height, "proposal-block", cs.ProposalBlock.Hash(), "commit-block", blockID.Hash)
return
}
@@ -1261,9 +1141,9 @@ func (cs *ConsensusState) tryFinalizeCommit(height int) {
cs.finalizeCommit(height)
}
// Increment height and goto RoundStepNewHeight
// Increment height and goto cstypes.RoundStepNewHeight
func (cs *ConsensusState) finalizeCommit(height int) {
if cs.Height != height || cs.Step != RoundStepCommit {
if cs.Height != height || cs.Step != cstypes.RoundStepCommit {
cs.Logger.Debug(cmn.Fmt("finalizeCommit(%v): Invalid args. Current step: %v/%v/%v", height, cs.Height, cs.Round, cs.Step))
return
}
@@ -1312,7 +1192,7 @@ func (cs *ConsensusState) finalizeCommit(height int) {
// As is, ConsensusState should not be started again
// until we successfully call ApplyBlock (ie. here or in Handshake after restart)
if cs.wal != nil {
cs.wal.writeEndHeight(height)
cs.wal.Save(EndHeightMessage{uint64(height)})
}
fail.Fail() // XXX
@@ -1357,7 +1237,7 @@ func (cs *ConsensusState) finalizeCommit(height int) {
// By here,
// * cs.Height has been increment to height+1
// * cs.Step is now RoundStepNewHeight
// * cs.Step is now cstypes.RoundStepNewHeight
// * cs.StartTime is set to when we will start round0.
}
@@ -1375,8 +1255,8 @@ func (cs *ConsensusState) defaultSetProposal(proposal *types.Proposal) error {
return nil
}
// We don't care about the proposal if we're already in RoundStepCommit.
if RoundStepCommit <= cs.Step {
// We don't care about the proposal if we're already in cstypes.RoundStepCommit.
if cstypes.RoundStepCommit <= cs.Step {
return nil
}
@@ -1417,13 +1297,14 @@ func (cs *ConsensusState) addProposalBlockPart(height int, part *types.Part, ver
// Added and completed!
var n int
var err error
cs.ProposalBlock = wire.ReadBinary(&types.Block{}, cs.ProposalBlockParts.GetReader(), types.MaxBlockSize, &n, &err).(*types.Block)
cs.ProposalBlock = wire.ReadBinary(&types.Block{}, cs.ProposalBlockParts.GetReader(),
cs.state.Params.BlockSizeParams.MaxBytes, &n, &err).(*types.Block)
// NOTE: it's possible to receive complete proposal blocks for future rounds without having the proposal
cs.Logger.Info("Received complete proposal block", "height", cs.ProposalBlock.Height, "hash", cs.ProposalBlock.Hash())
if cs.Step == RoundStepPropose && cs.isProposalComplete() {
if cs.Step == cstypes.RoundStepPropose && cs.isProposalComplete() {
// Move onto the next step
cs.enterPrevote(height, cs.Round)
} else if cs.Step == RoundStepCommit {
} else if cs.Step == cstypes.RoundStepCommit {
// If we're waiting on the proposal block...
cs.tryFinalizeCommit(height)
}
@@ -1468,7 +1349,7 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerKey string) (added bool,
// A precommit for the previous height?
// These come in while we wait timeoutCommit
if vote.Height+1 == cs.Height {
if !(cs.Step == RoundStepNewHeight && vote.Type == types.VoteTypePrecommit) {
if !(cs.Step == cstypes.RoundStepNewHeight && vote.Type == types.VoteTypePrecommit) {
// TODO: give the reason ..
// fmt.Errorf("tryAddVote: Wrong height, not a LastCommit straggler commit.")
return added, ErrVoteHeightMismatch
@@ -1481,7 +1362,7 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerKey string) (added bool,
// if we can skip timeoutCommit and have all the votes now,
if cs.config.SkipTimeoutCommit && cs.LastCommit.HasAll() {
// go straight to new round (skip timeout commit)
// cs.scheduleTimeout(time.Duration(0), cs.Height, 0, RoundStepNewHeight)
// cs.scheduleTimeout(time.Duration(0), cs.Height, 0, cstypes.RoundStepNewHeight)
cs.enterNewRound(cs.Height, 0)
}
}
@@ -1545,7 +1426,7 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerKey string) (added bool,
if cs.config.SkipTimeoutCommit && precommits.HasAll() {
// if we have all the votes now,
// go straight to new round (skip timeout commit)
// cs.scheduleTimeout(time.Duration(0), cs.Height, 0, RoundStepNewHeight)
// cs.scheduleTimeout(time.Duration(0), cs.Height, 0, cstypes.RoundStepNewHeight)
cs.enterNewRound(cs.Height, 0)
}
@@ -1606,7 +1487,7 @@ func (cs *ConsensusState) signAddVote(type_ byte, hash []byte, header types.Part
//---------------------------------------------------------
func CompareHRS(h1, r1 int, s1 RoundStepType, h2, r2 int, s2 RoundStepType) int {
func CompareHRS(h1, r1 int, s1 cstypes.RoundStepType, h2, r2 int, s2 cstypes.RoundStepType) int {
if h1 < h2 {
return -1
} else if h1 > h2 {


@@ -6,8 +6,9 @@ import (
"testing"
"time"
cstypes "github.com/tendermint/tendermint/consensus/types"
"github.com/tendermint/tendermint/types"
. "github.com/tendermint/tmlibs/common"
cmn "github.com/tendermint/tmlibs/common"
)
func init() {
@@ -79,8 +80,8 @@ func TestProposerSelection0(t *testing.T) {
<-newRoundCh
prop = cs1.GetRoundState().Validators.GetProposer()
if !bytes.Equal(prop.Address, vss[1].Address) {
panic(Fmt("expected proposer to be validator %d. Got %X", 1, prop.Address))
if !bytes.Equal(prop.Address, vss[1].GetAddress()) {
panic(cmn.Fmt("expected proposer to be validator %d. Got %X", 1, prop.Address))
}
}
@@ -100,8 +101,8 @@ func TestProposerSelection2(t *testing.T) {
// everyone just votes nil. we get a new proposer each round
for i := 0; i < len(vss); i++ {
prop := cs1.GetRoundState().Validators.GetProposer()
if !bytes.Equal(prop.Address, vss[(i+2)%len(vss)].Address) {
panic(Fmt("expected proposer to be validator %d. Got %X", (i+2)%len(vss), prop.Address))
if !bytes.Equal(prop.Address, vss[(i+2)%len(vss)].GetAddress()) {
panic(cmn.Fmt("expected proposer to be validator %d. Got %X", (i+2)%len(vss), prop.Address))
}
rs := cs1.GetRoundState()
@@ -180,7 +181,7 @@ func TestBadProposal(t *testing.T) {
height, round := cs1.Height, cs1.Round
vs2 := vss[1]
partSize := config.Consensus.BlockPartSize
partSize := cs1.state.Params.BlockPartSizeBytes
proposalCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringCompleteProposal(), 1)
voteCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringVote(), 1)
@@ -247,7 +248,7 @@ func TestFullRound1(t *testing.T) {
// grab proposal
re := <-propCh
propBlockHash := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState).ProposalBlock.Hash()
propBlockHash := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState).ProposalBlock.Hash()
<-voteCh // wait for prevote
// NOTE: voteChan cap of 0 ensures we can complete this
@@ -327,7 +328,7 @@ func TestLockNoPOL(t *testing.T) {
vs2 := vss[1]
height := cs1.Height
partSize := config.Consensus.BlockPartSize
partSize := cs1.state.Params.BlockPartSizeBytes
timeoutProposeCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutPropose(), 1)
timeoutWaitCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutWait(), 1)
@@ -344,7 +345,7 @@ func TestLockNoPOL(t *testing.T) {
cs1.startRoutines(0)
re := <-proposalCh
rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
theBlockHash := rs.ProposalBlock.Hash()
<-voteCh // prevote
@@ -384,7 +385,7 @@ func TestLockNoPOL(t *testing.T) {
// now we're on a new round and not the proposer, so wait for timeout
re = <-timeoutProposeCh
rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
if rs.ProposalBlock != nil {
panic("Expected proposal block to be nil")
@@ -428,11 +429,11 @@ func TestLockNoPOL(t *testing.T) {
incrementRound(vs2)
re = <-proposalCh
rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
// now we're on a new round and are the proposer
if !bytes.Equal(rs.ProposalBlock.Hash(), rs.LockedBlock.Hash()) {
panic(Fmt("Expected proposal block to be locked block. Got %v, Expected %v", rs.ProposalBlock, rs.LockedBlock))
panic(cmn.Fmt("Expected proposal block to be locked block. Got %v, Expected %v", rs.ProposalBlock, rs.LockedBlock))
}
<-voteCh // prevote
@@ -493,7 +494,7 @@ func TestLockPOLRelock(t *testing.T) {
cs1, vss := randConsensusState(4)
vs2, vs3, vs4 := vss[1], vss[2], vss[3]
partSize := config.Consensus.BlockPartSize
partSize := cs1.state.Params.BlockPartSizeBytes
timeoutProposeCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutPropose(), 1)
timeoutWaitCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutWait(), 1)
@@ -502,8 +503,6 @@ func TestLockPOLRelock(t *testing.T) {
newRoundCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringNewRound(), 1)
newBlockCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringNewBlockHeader(), 1)
t.Logf("vs2 last round %v", vs2.PrivValidator.LastRound)
// everything done from perspective of cs1
/*
@@ -517,7 +516,7 @@ func TestLockPOLRelock(t *testing.T) {
<-newRoundCh
re := <-proposalCh
rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
theBlockHash := rs.ProposalBlock.Hash()
<-voteCh // prevote
@@ -593,7 +592,7 @@ func TestLockPOLRelock(t *testing.T) {
be := <-newBlockCh
b := be.(types.TMEventData).Unwrap().(types.EventDataNewBlockHeader)
re = <-newRoundCh
rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
if rs.Height != 2 {
panic("Expected height to increment")
}
@@ -608,7 +607,7 @@ func TestLockPOLUnlock(t *testing.T) {
cs1, vss := randConsensusState(4)
vs2, vs3, vs4 := vss[1], vss[2], vss[3]
partSize := config.Consensus.BlockPartSize
partSize := cs1.state.Params.BlockPartSizeBytes
proposalCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringCompleteProposal(), 1)
timeoutProposeCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutPropose(), 1)
@@ -629,7 +628,7 @@ func TestLockPOLUnlock(t *testing.T) {
startTestRound(cs1, cs1.Height, 0)
<-newRoundCh
re := <-proposalCh
rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
theBlockHash := rs.ProposalBlock.Hash()
<-voteCh // prevote
@@ -655,7 +654,7 @@ func TestLockPOLUnlock(t *testing.T) {
// timeout to new round
re = <-timeoutWaitCh
rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
lockedBlockHash := rs.LockedBlock.Hash()
//XXX: this isnt guaranteed to get there before the timeoutPropose ...
@@ -703,7 +702,7 @@ func TestLockPOLSafety1(t *testing.T) {
cs1, vss := randConsensusState(4)
vs2, vs3, vs4 := vss[1], vss[2], vss[3]
partSize := config.Consensus.BlockPartSize
partSize := cs1.state.Params.BlockPartSizeBytes
proposalCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringCompleteProposal(), 1)
timeoutProposeCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutPropose(), 1)
@@ -715,7 +714,7 @@ func TestLockPOLSafety1(t *testing.T) {
startTestRound(cs1, cs1.Height, 0)
<-newRoundCh
re := <-proposalCh
rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
propBlock := rs.ProposalBlock
<-voteCh // prevote
@@ -763,7 +762,7 @@ func TestLockPOLSafety1(t *testing.T) {
re = <-proposalCh
}
rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
if rs.LockedBlock != nil {
panic("we should not be locked!")
@@ -824,7 +823,7 @@ func TestLockPOLSafety2(t *testing.T) {
cs1, vss := randConsensusState(4)
vs2, vs3, vs4 := vss[1], vss[2], vss[3]
partSize := config.Consensus.BlockPartSize
partSize := cs1.state.Params.BlockPartSizeBytes
proposalCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringCompleteProposal(), 1)
timeoutProposeCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutPropose(), 1)
@@ -850,7 +849,7 @@ func TestLockPOLSafety2(t *testing.T) {
incrementRound(vs2, vs3, vs4)
cs1.updateRoundStep(0, RoundStepPrecommitWait)
cs1.updateRoundStep(0, cstypes.RoundStepPrecommitWait)
t.Log("### ONTO Round 1")
// jump in at round 1
@@ -931,7 +930,7 @@ func TestSlashingPrevotes(t *testing.T) {
re := <-proposalCh
<-voteCh // prevote
rs := re.(types.EventDataRoundState).RoundState.(*RoundState)
rs := re.(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
// we should now be stuck in limbo forever, waiting for more prevotes
// add one for a different block should cause us to go into prevote wait
@@ -999,7 +998,7 @@ func TestHalt1(t *testing.T) {
cs1, vss := randConsensusState(4)
vs2, vs3, vs4 := vss[1], vss[2], vss[3]
partSize := config.Consensus.BlockPartSize
partSize := cs1.state.Params.BlockPartSizeBytes
proposalCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringCompleteProposal(), 1)
timeoutWaitCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutWait(), 1)
@@ -1011,7 +1010,7 @@ func TestHalt1(t *testing.T) {
startTestRound(cs1, cs1.Height, 0)
<-newRoundCh
re := <-proposalCh
rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
propBlock := rs.ProposalBlock
propBlockParts := propBlock.MakePartSet(partSize)
@@ -1034,7 +1033,7 @@ func TestHalt1(t *testing.T) {
// timeout to new round
<-timeoutWaitCh
re = <-newRoundCh
rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
t.Log("### ONTO ROUND 1")
/*Round2
@@ -1052,7 +1051,7 @@ func TestHalt1(t *testing.T) {
// receiving that precommit should take us straight to commit
<-newBlockCh
re = <-newRoundCh
rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
if rs.Height != 2 {
panic("expected height to increment")


@@ -1,88 +1,125 @@
#!/usr/bin/env bash
# XXX: removes tendermint dir
# Requires: killall command and jq JSON processor.
cd "$GOPATH/src/github.com/tendermint/tendermint" || exit 1
# Get the parent directory of where this script is.
SOURCE="${BASH_SOURCE[0]}"
while [ -h "$SOURCE" ] ; do SOURCE="$(readlink "$SOURCE")"; done
DIR="$( cd -P "$( dirname "$SOURCE" )/../.." && pwd )"
# Change into that dir because we expect that.
cd "$DIR" || exit 1
# Make sure we have a tendermint command.
if ! hash tendermint 2>/dev/null; then
make install
fi
# specify a dir to copy
# Make sure we have a cutWALUntil binary.
cutWALUntil=./scripts/cutWALUntil/cutWALUntil
cutWALUntilDir=$(dirname $cutWALUntil)
if ! hash $cutWALUntil 2>/dev/null; then
cd "$cutWALUntilDir" && go build && cd - || exit 1
fi
TMHOME=$(mktemp -d)
export TMHOME="$TMHOME"
if [[ ! -d "$TMHOME" ]]; then
echo "Could not create temp directory"
exit 1
else
echo "TMHOME: ${TMHOME}"
fi
# TODO: eventually we should replace with `tendermint init --test`
DIR_TO_COPY=$HOME/.tendermint_test/consensus_state_test
if [ ! -d "$DIR_TO_COPY" ]; then
echo "$DIR_TO_COPY does not exist. Please run: go test ./consensus"
exit 1
fi
echo "==> Copying ${DIR_TO_COPY} to ${TMHOME} directory..."
cp -r "$DIR_TO_COPY"/* "$TMHOME"
TMHOME="$HOME/.tendermint"
rm -rf "$TMHOME"
cp -r "$DIR_TO_COPY" "$TMHOME"
cp $TMHOME/config.toml $TMHOME/config.toml.bak
# preserve original genesis file because later it will be modified (see small_block2)
cp "$TMHOME/genesis.json" "$TMHOME/genesis.json.bak"
function reset(){
echo "==> Resetting tendermint..."
tendermint unsafe_reset_all
cp $TMHOME/config.toml.bak $TMHOME/config.toml
cp "$TMHOME/genesis.json.bak" "$TMHOME/genesis.json"
}
reset
# empty block
function empty_block(){
tendermint node --proxy_app=persistent_dummy &> /dev/null &
sleep 5
killall tendermint
echo "==> Starting tendermint..."
tendermint node --proxy_app=persistent_dummy &> /dev/null &
sleep 5
echo "==> Killing tendermint..."
killall tendermint
# /q would print up to and including the match, then quit.
# /Q doesn't include the match.
# http://unix.stackexchange.com/questions/11305/grep-show-all-the-file-up-to-the-match
sed '/ENDHEIGHT: 1/Q' ~/.tendermint/data/cs.wal/wal > consensus/test_data/empty_block.cswal
echo "==> Copying WAL log..."
$cutWALUntil "$TMHOME/data/cs.wal/wal" 1 consensus/test_data/new_empty_block.cswal
mv consensus/test_data/new_empty_block.cswal consensus/test_data/empty_block.cswal
reset
reset
}
# many blocks
function many_blocks(){
bash scripts/txs/random.sh 1000 36657 &> /dev/null &
PID=$!
tendermint node --proxy_app=persistent_dummy &> /dev/null &
sleep 7
killall tendermint
kill -9 $PID
bash scripts/txs/random.sh 1000 36657 &> /dev/null &
PID=$!
echo "==> Starting tendermint..."
tendermint node --proxy_app=persistent_dummy &> /dev/null &
sleep 10
echo "==> Killing tendermint..."
kill -9 $PID
killall tendermint
sed '/ENDHEIGHT: 6/Q' ~/.tendermint/data/cs.wal/wal > consensus/test_data/many_blocks.cswal
echo "==> Copying WAL log..."
$cutWALUntil "$TMHOME/data/cs.wal/wal" 6 consensus/test_data/new_many_blocks.cswal
mv consensus/test_data/new_many_blocks.cswal consensus/test_data/many_blocks.cswal
reset
reset
}
# small block 1
function small_block1(){
bash scripts/txs/random.sh 1000 36657 &> /dev/null &
PID=$!
tendermint node --proxy_app=persistent_dummy &> /dev/null &
sleep 10
killall tendermint
kill -9 $PID
bash scripts/txs/random.sh 1000 36657 &> /dev/null &
PID=$!
echo "==> Starting tendermint..."
tendermint node --proxy_app=persistent_dummy &> /dev/null &
sleep 10
echo "==> Killing tendermint..."
kill -9 $PID
killall tendermint
sed '/ENDHEIGHT: 1/Q' ~/.tendermint/data/cs.wal/wal > consensus/test_data/small_block1.cswal
echo "==> Copying WAL log..."
$cutWALUntil "$TMHOME/data/cs.wal/wal" 1 consensus/test_data/new_small_block1.cswal
mv consensus/test_data/new_small_block1.cswal consensus/test_data/small_block1.cswal
reset
reset
}
# small block 2 (part size = 512)
# block part size = 512
function small_block2(){
echo "" >> ~/.tendermint/config.toml
echo "block_part_size = 512" >> ~/.tendermint/config.toml
bash scripts/txs/random.sh 1000 36657 &> /dev/null &
PID=$!
tendermint node --proxy_app=persistent_dummy &> /dev/null &
sleep 5
killall tendermint
kill -9 $PID
cat "$TMHOME/genesis.json" | jq '. + {consensus_params: {block_size_params: {max_bytes: 22020096}, block_gossip_params: {block_part_size_bytes: 512}}}' > "$TMHOME/new_genesis.json"
mv "$TMHOME/new_genesis.json" "$TMHOME/genesis.json"
bash scripts/txs/random.sh 1000 36657 &> /dev/null &
PID=$!
echo "==> Starting tendermint..."
tendermint node --proxy_app=persistent_dummy &> /dev/null &
sleep 5
echo "==> Killing tendermint..."
kill -9 $PID
killall tendermint
sed '/ENDHEIGHT: 1/Q' ~/.tendermint/data/cs.wal/wal > consensus/test_data/small_block2.cswal
echo "==> Copying WAL log..."
$cutWALUntil "$TMHOME/data/cs.wal/wal" 1 consensus/test_data/new_small_block2.cswal
mv consensus/test_data/new_small_block2.cswal consensus/test_data/small_block2.cswal
reset
reset
}
@@ -107,4 +144,5 @@ case "$1" in
many_blocks
esac
echo "==> Cleaning up..."
rm -rf "$TMHOME"


@@ -1,11 +1,11 @@
package consensus
package types
import (
"strings"
"sync"
"github.com/tendermint/tendermint/types"
. "github.com/tendermint/tmlibs/common"
cmn "github.com/tendermint/tmlibs/common"
)
type RoundVoteSet struct {
@@ -76,7 +76,7 @@ func (hvs *HeightVoteSet) SetRound(round int) {
hvs.mtx.Lock()
defer hvs.mtx.Unlock()
if hvs.round != 0 && (round < hvs.round+1) {
PanicSanity("SetRound() must increment hvs.round")
cmn.PanicSanity("SetRound() must increment hvs.round")
}
for r := hvs.round + 1; r <= round; r++ {
if _, ok := hvs.roundVoteSets[r]; ok {
@@ -89,7 +89,7 @@ func (hvs *HeightVoteSet) SetRound(round int) {
func (hvs *HeightVoteSet) addRound(round int) {
if _, ok := hvs.roundVoteSets[round]; ok {
PanicSanity("addRound() for an existing round")
cmn.PanicSanity("addRound() for an existing round")
}
// log.Debug("addRound(round)", "round", round)
prevotes := types.NewVoteSet(hvs.chainID, hvs.height, round, types.VoteTypePrevote, hvs.valSet)
@@ -164,7 +164,7 @@ func (hvs *HeightVoteSet) getVoteSet(round int, type_ byte) *types.VoteSet {
case types.VoteTypePrecommit:
return rvs.Precommits
default:
PanicSanity(Fmt("Unexpected vote type %X", type_))
cmn.PanicSanity(cmn.Fmt("Unexpected vote type %X", type_))
return nil
}
}
@@ -194,7 +194,7 @@ func (hvs *HeightVoteSet) StringIndented(indent string) string {
voteSetString = roundVoteSet.Precommits.StringShort()
vsStrings = append(vsStrings, voteSetString)
}
return Fmt(`HeightVoteSet{H:%v R:0~%v
return cmn.Fmt(`HeightVoteSet{H:%v R:0~%v
%s %v
%s}`,
hvs.height, hvs.round,


@@ -1,14 +1,17 @@
package consensus
package types
import (
"testing"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/types"
. "github.com/tendermint/tmlibs/common"
cmn "github.com/tendermint/tmlibs/common"
)
var config *cfg.Config // NOTE: must be reset for each _test.go file
func init() {
config = ResetConfig("consensus_height_vote_set_test")
config = cfg.ResetTestRoot("consensus_height_vote_set_test")
}
func TestPeerCatchupRounds(t *testing.T) {
@@ -44,10 +47,10 @@ func TestPeerCatchupRounds(t *testing.T) {
}
func makeVoteHR(t *testing.T, height, round int, privVals []*types.PrivValidator, valIndex int) *types.Vote {
func makeVoteHR(t *testing.T, height, round int, privVals []*types.PrivValidatorFS, valIndex int) *types.Vote {
privVal := privVals[valIndex]
vote := &types.Vote{
ValidatorAddress: privVal.Address,
ValidatorAddress: privVal.GetAddress(),
ValidatorIndex: valIndex,
Height: height,
Round: round,
@@ -57,7 +60,7 @@ func makeVoteHR(t *testing.T, height, round int, privVals []*types.PrivValidator
chainID := config.ChainID
err := privVal.SignVote(chainID, vote)
if err != nil {
panic(Fmt("Error signing vote: %v", err))
panic(cmn.Fmt("Error signing vote: %v", err))
return nil
}
return vote


@@ -0,0 +1,57 @@
package types
import (
"fmt"
"time"
"github.com/tendermint/tendermint/types"
cmn "github.com/tendermint/tmlibs/common"
)
//-----------------------------------------------------------------------------
// PeerRoundState contains the known state of a peer.
// NOTE: Read-only when returned by PeerState.GetRoundState().
type PeerRoundState struct {
Height int // Height peer is at
Round int // Round peer is at, -1 if unknown.
Step RoundStepType // Step peer is at
StartTime time.Time // Estimated start of round 0 at this height
Proposal bool // True if peer has proposal for this round
ProposalBlockPartsHeader types.PartSetHeader //
ProposalBlockParts *cmn.BitArray //
ProposalPOLRound int // Proposal's POL round. -1 if none.
ProposalPOL *cmn.BitArray // nil until ProposalPOLMessage received.
Prevotes *cmn.BitArray // All votes peer has for this round
Precommits *cmn.BitArray // All precommits peer has for this round
LastCommitRound int // Round of commit for last height. -1 if none.
LastCommit *cmn.BitArray // All commit precommits of commit for last height.
CatchupCommitRound int // Round that we have commit for. Not necessarily unique. -1 if none.
CatchupCommit *cmn.BitArray // All commit precommits peer has for this height & CatchupCommitRound
}
// String returns a string representation of the PeerRoundState
func (prs PeerRoundState) String() string {
return prs.StringIndented("")
}
// StringIndented returns a string representation of the PeerRoundState
func (prs PeerRoundState) StringIndented(indent string) string {
return fmt.Sprintf(`PeerRoundState{
%s %v/%v/%v @%v
%s Proposal %v -> %v
%s POL %v (round %v)
%s Prevotes %v
%s Precommits %v
%s LastCommit %v (round %v)
%s Catchup %v (round %v)
%s}`,
indent, prs.Height, prs.Round, prs.Step, prs.StartTime,
indent, prs.ProposalBlockPartsHeader, prs.ProposalBlockParts,
indent, prs.ProposalPOL, prs.ProposalPOLRound,
indent, prs.Prevotes,
indent, prs.Precommits,
indent, prs.LastCommit, prs.LastCommitRound,
indent, prs.CatchupCommit, prs.CatchupCommitRound,
indent)
}

consensus/types/state.go Normal file

@@ -0,0 +1,126 @@
package types
import (
"fmt"
"time"
"github.com/tendermint/tendermint/types"
)
//-----------------------------------------------------------------------------
// RoundStepType enum type
// RoundStepType enumerates the state of the consensus state machine
type RoundStepType uint8 // These must be numeric, ordered.
const (
RoundStepNewHeight = RoundStepType(0x01) // Wait til CommitTime + timeoutCommit
RoundStepNewRound = RoundStepType(0x02) // Setup new round and go to RoundStepPropose
RoundStepPropose = RoundStepType(0x03) // Did propose, gossip proposal
RoundStepPrevote = RoundStepType(0x04) // Did prevote, gossip prevotes
RoundStepPrevoteWait = RoundStepType(0x05) // Did receive any +2/3 prevotes, start timeout
RoundStepPrecommit = RoundStepType(0x06) // Did precommit, gossip precommits
RoundStepPrecommitWait = RoundStepType(0x07) // Did receive any +2/3 precommits, start timeout
RoundStepCommit = RoundStepType(0x08) // Entered commit state machine
// NOTE: RoundStepNewHeight acts as RoundStepCommitWait.
)
// String returns a string
func (rs RoundStepType) String() string {
switch rs {
case RoundStepNewHeight:
return "RoundStepNewHeight"
case RoundStepNewRound:
return "RoundStepNewRound"
case RoundStepPropose:
return "RoundStepPropose"
case RoundStepPrevote:
return "RoundStepPrevote"
case RoundStepPrevoteWait:
return "RoundStepPrevoteWait"
case RoundStepPrecommit:
return "RoundStepPrecommit"
case RoundStepPrecommitWait:
return "RoundStepPrecommitWait"
case RoundStepCommit:
return "RoundStepCommit"
default:
return "RoundStepUnknown" // Cannot panic.
}
}
//-----------------------------------------------------------------------------
// RoundState defines the internal consensus state.
// It is Immutable when returned from ConsensusState.GetRoundState()
// TODO: Actually, only the top pointer is copied,
// so access to field pointers is still racey
type RoundState struct {
Height int // Height we are working on
Round int
Step RoundStepType
StartTime time.Time
CommitTime time.Time // Subjective time when +2/3 precommits for Block at Round were found
Validators *types.ValidatorSet
Proposal *types.Proposal
ProposalBlock *types.Block
ProposalBlockParts *types.PartSet
LockedRound int
LockedBlock *types.Block
LockedBlockParts *types.PartSet
Votes *HeightVoteSet
CommitRound int //
LastCommit *types.VoteSet // Last precommits at Height-1
LastValidators *types.ValidatorSet
}
// RoundStateEvent returns the H/R/S of the RoundState as an event.
func (rs *RoundState) RoundStateEvent() types.EventDataRoundState {
edrs := types.EventDataRoundState{
Height: rs.Height,
Round: rs.Round,
Step: rs.Step.String(),
RoundState: rs,
}
return edrs
}
// String returns a string
func (rs *RoundState) String() string {
return rs.StringIndented("")
}
// StringIndented returns a string
func (rs *RoundState) StringIndented(indent string) string {
return fmt.Sprintf(`RoundState{
%s H:%v R:%v S:%v
%s StartTime: %v
%s CommitTime: %v
%s Validators: %v
%s Proposal: %v
%s ProposalBlock: %v %v
%s LockedRound: %v
%s LockedBlock: %v %v
%s Votes: %v
%s LastCommit: %v
%s LastValidators: %v
%s}`,
indent, rs.Height, rs.Round, rs.Step,
indent, rs.StartTime,
indent, rs.CommitTime,
indent, rs.Validators.StringIndented(indent+" "),
indent, rs.Proposal,
indent, rs.ProposalBlockParts.StringShort(), rs.ProposalBlock.StringShort(),
indent, rs.LockedRound,
indent, rs.LockedBlockParts.StringShort(), rs.LockedBlock.StringShort(),
indent, rs.Votes.StringIndented(indent+" "),
indent, rs.LastCommit.StringShort(),
indent, rs.LastValidators.StringIndented(indent+" "),
indent)
}
// StringShort returns a string
func (rs *RoundState) StringShort() string {
return fmt.Sprintf(`RoundState{H:%v R:%v S:%v ST:%v}`,
rs.Height, rs.Round, rs.Step, rs.StartTime)
}


@@ -1,7 +1,7 @@
package consensus
import (
. "github.com/tendermint/tmlibs/common"
cmn "github.com/tendermint/tmlibs/common"
)
// kind of arbitrary
@@ -10,4 +10,4 @@ var Major = "0" //
var Minor = "2" // replay refactor
var Revision = "2" // validation -> commit
var Version = Fmt("v%s/%s.%s.%s", Spec, Major, Minor, Revision)
var Version = cmn.Fmt("v%s/%s.%s.%s", Spec, Major, Minor, Revision)


@@ -1,22 +1,37 @@
package consensus
import (
"bytes"
"encoding/binary"
"fmt"
"hash/crc32"
"io"
"time"
wire "github.com/tendermint/go-wire"
"github.com/tendermint/tendermint/types"
auto "github.com/tendermint/tmlibs/autofile"
. "github.com/tendermint/tmlibs/common"
cmn "github.com/tendermint/tmlibs/common"
)
//--------------------------------------------------------
// types and functions for saving consensus messages
var (
walSeparator = []byte{55, 127, 6, 130} // 0x377f0682 - magic number
)
type TimedWALMessage struct {
Time time.Time `json:"time"`
Time time.Time `json:"time"` // for debugging purposes
Msg WALMessage `json:"msg"`
}
// EndHeightMessage marks the end of the given height inside WAL.
// @internal used by scripts/cutWALUntil util.
type EndHeightMessage struct {
Height uint64 `json:"height"`
}
type WALMessage interface{}
var _ = wire.RegisterInterface(
@@ -24,6 +39,7 @@ var _ = wire.RegisterInterface(
wire.ConcreteType{types.EventDataRoundState{}, 0x01},
wire.ConcreteType{msgInfo{}, 0x02},
wire.ConcreteType{timeoutInfo{}, 0x03},
wire.ConcreteType{EndHeightMessage{}, 0x04},
)
//--------------------------------------------------------
@@ -34,10 +50,12 @@ var _ = wire.RegisterInterface(
// TODO: currently the wal is overwritten during replay catchup
// give it a mode so it's either reading or appending - must read to end to start appending again
type WAL struct {
BaseService
cmn.BaseService
group *auto.Group
light bool // ignore block parts
enc *WALEncoder
}
func NewWAL(walFile string, light bool) (*WAL, error) {
@@ -48,8 +66,9 @@ func NewWAL(walFile string, light bool) (*WAL, error) {
wal := &WAL{
group: group,
light: light,
enc: NewWALEncoder(group),
}
wal.BaseService = *NewBaseService(nil, "WAL", wal)
wal.BaseService = *cmn.NewBaseService(nil, "WAL", wal)
return wal, nil
}
@@ -58,7 +77,7 @@ func (wal *WAL) OnStart() error {
if err != nil {
return err
} else if size == 0 {
wal.writeEndHeight(0)
wal.Save(EndHeightMessage{0})
}
_, err = wal.group.Start()
return err
@@ -70,35 +89,191 @@ func (wal *WAL) OnStop() {
}
// called in newStep and for each pass in receiveRoutine
func (wal *WAL) Save(wmsg WALMessage) {
func (wal *WAL) Save(msg WALMessage) {
if wal == nil {
return
}
if wal.light {
// in light mode we only write new steps, timeouts, and our own votes (no proposals, block parts)
if mi, ok := wmsg.(msgInfo); ok {
if mi, ok := msg.(msgInfo); ok {
if mi.PeerKey != "" {
return
}
}
}
// Write the wal message
var wmsgBytes = wire.JSONBytes(TimedWALMessage{time.Now(), wmsg})
err := wal.group.WriteLine(string(wmsgBytes))
if err := wal.enc.Encode(&TimedWALMessage{time.Now(), msg}); err != nil {
cmn.PanicQ(cmn.Fmt("Error writing msg to consensus wal: %v \n\nMessage: %v", err, msg))
}
// TODO: only flush when necessary
if err := wal.group.Flush(); err != nil {
cmn.PanicQ(cmn.Fmt("Error flushing consensus wal buf to file. Error: %v \n", err))
}
}
// SearchForEndHeight searches for the EndHeightMessage with the given height and
// returns an auto.GroupReader, whether or not it was found, and an error.
// Group reader will be nil if found equals false.
//
// CONTRACT: caller must close group reader.
func (wal *WAL) SearchForEndHeight(height uint64) (gr *auto.GroupReader, found bool, err error) {
var msg *TimedWALMessage
// NOTE: starting from the last file in the group because we're usually
// searching for the last height. See replay.go
min, max := wal.group.MinIndex(), wal.group.MaxIndex()
wal.Logger.Debug("Searching for height", "height", height, "min", min, "max", max)
for index := max; index >= min; index-- {
gr, err = wal.group.NewReader(index)
if err != nil {
return nil, false, err
}
dec := NewWALDecoder(gr)
for {
msg, err = dec.Decode()
if err == io.EOF {
// check next file
break
}
if err != nil {
gr.Close()
return nil, false, err
}
if m, ok := msg.Msg.(EndHeightMessage); ok {
if m.Height == height { // found
wal.Logger.Debug("Found", "height", height, "index", index)
return gr, true, nil
}
}
}
gr.Close()
}
return nil, false, nil
}
///////////////////////////////////////////////////////////////////////////////
// A WALEncoder writes custom-encoded WAL messages to an output stream.
//
// Format: 4 bytes CRC sum + 4 bytes length + arbitrary-length value (go-wire encoded)
type WALEncoder struct {
wr io.Writer
}
// NewWALEncoder returns a new encoder that writes to wr.
func NewWALEncoder(wr io.Writer) *WALEncoder {
return &WALEncoder{wr}
}
// Encode writes the custom encoding of v to the stream.
func (enc *WALEncoder) Encode(v interface{}) error {
data := wire.BinaryBytes(v)
crc := crc32.Checksum(data, crc32c)
length := uint32(len(data))
totalLength := 8 + int(length)
msg := make([]byte, totalLength)
binary.BigEndian.PutUint32(msg[0:4], crc)
binary.BigEndian.PutUint32(msg[4:8], length)
copy(msg[8:], data)
_, err := enc.wr.Write(msg)
if err == nil {
// TODO [Anton Kaliaev 23 Oct 2017]: remove separator
_, err = enc.wr.Write(walSeparator)
}
return err
}
///////////////////////////////////////////////////////////////////////////////
// A WALDecoder reads and decodes custom-encoded WAL messages from an input
// stream. See WALEncoder for the format used.
//
// It will also compare the checksums and make sure the data size is equal to the
// length from the header. If that is not the case, an error will be returned.
type WALDecoder struct {
rd io.Reader
}
// NewWALDecoder returns a new decoder that reads from rd.
func NewWALDecoder(rd io.Reader) *WALDecoder {
return &WALDecoder{rd}
}
// Decode reads the next custom-encoded value from its reader and returns it.
func (dec *WALDecoder) Decode() (*TimedWALMessage, error) {
b := make([]byte, 4)
n, err := dec.rd.Read(b)
if err == io.EOF {
return nil, err
}
if err != nil {
PanicQ(Fmt("Error writing msg to consensus wal. Error: %v \n\nMessage: %v", err, wmsg))
return nil, fmt.Errorf("failed to read checksum: %v", err)
}
// TODO: only flush when necessary
if err := wal.group.Flush(); err != nil {
PanicQ(Fmt("Error flushing consensus wal buf to file. Error: %v \n", err))
crc := binary.BigEndian.Uint32(b)
b = make([]byte, 4)
n, err = dec.rd.Read(b)
if err == io.EOF {
return nil, err
}
if err != nil {
return nil, fmt.Errorf("failed to read length: %v", err)
}
length := binary.BigEndian.Uint32(b)
data := make([]byte, length)
n, err = dec.rd.Read(data)
if err == io.EOF {
return nil, err
}
if err != nil {
return nil, fmt.Errorf("not enough bytes for data: %v (want: %d, read: %v)", err, length, n)
}
// check checksum before decoding data
actualCRC := crc32.Checksum(data, crc32c)
if actualCRC != crc {
return nil, fmt.Errorf("checksums do not match: (read: %v, actual: %v)", crc, actualCRC)
}
var nn int
var res *TimedWALMessage
res = wire.ReadBinary(&TimedWALMessage{}, bytes.NewBuffer(data), int(length), &nn, &err).(*TimedWALMessage)
if err != nil {
return nil, fmt.Errorf("failed to decode data: %v", err)
}
// TODO [Anton Kaliaev 23 Oct 2017]: remove separator
if err = readSeparator(dec.rd); err != nil {
return nil, err
}
return res, err
}
func (wal *WAL) writeEndHeight(height int) {
wal.group.WriteLine(Fmt("#ENDHEIGHT: %v", height))
// TODO: only flush when necessary
if err := wal.group.Flush(); err != nil {
PanicQ(Fmt("Error flushing consensus wal buf to file. Error: %v \n", err))
// readSeparator reads a separator from r. It returns an error from the underlying
// reader, or an error if the bytes read are not a separator.
func readSeparator(r io.Reader) error {
b := make([]byte, len(walSeparator))
_, err := r.Read(b)
if err != nil {
return fmt.Errorf("failed to read separator: %v", err)
}
if !bytes.Equal(b, walSeparator) {
return fmt.Errorf("not a separator: %v", b)
}
return nil
}

consensus/wal_test.go Normal file

@@ -0,0 +1,62 @@
package consensus
import (
"bytes"
"path"
"testing"
"time"
"github.com/tendermint/tendermint/consensus/types"
tmtypes "github.com/tendermint/tendermint/types"
cmn "github.com/tendermint/tmlibs/common"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestWALEncoderDecoder(t *testing.T) {
now := time.Now()
msgs := []TimedWALMessage{
TimedWALMessage{Time: now, Msg: EndHeightMessage{0}},
TimedWALMessage{Time: now, Msg: timeoutInfo{Duration: time.Second, Height: 1, Round: 1, Step: types.RoundStepPropose}},
}
b := new(bytes.Buffer)
for _, msg := range msgs {
b.Reset()
enc := NewWALEncoder(b)
err := enc.Encode(&msg)
require.NoError(t, err)
dec := NewWALDecoder(b)
decoded, err := dec.Decode()
require.NoError(t, err)
assert.Equal(t, msg.Time.Truncate(time.Millisecond), decoded.Time)
assert.Equal(t, msg.Msg, decoded.Msg)
}
}
func TestSearchForEndHeight(t *testing.T) {
wal, err := NewWAL(path.Join(data_dir, "many_blocks.cswal"), false)
if err != nil {
t.Fatal(err)
}
h := 3
gr, found, err := wal.SearchForEndHeight(uint64(h))
assert.NoError(t, err, cmn.Fmt("expected not to err on height %d", h))
assert.True(t, found, cmn.Fmt("expected to find end height for %d", h))
assert.NotNil(t, gr, "expected group not to be nil")
defer gr.Close()
dec := NewWALDecoder(gr)
msg, err := dec.Decode()
assert.NoError(t, err, "expected to decode a message")
rs, ok := msg.Msg.(tmtypes.EventDataRoundState)
assert.True(t, ok, "expected message of type EventDataRoundState")
assert.Equal(t, rs.Height, h+1, cmn.Fmt("wrong height"))
}


@@ -1,5 +1,5 @@
Using the abci-cli
==================
Using ABCI-CLI
==============
To facilitate testing and debugging of ABCI servers and simple apps, we
built a CLI, the ``abci-cli``, for sending ABCI messages from the


@@ -58,7 +58,7 @@ Tendermint Core RPC
The concept is that the ABCI app is completely hidden from the outside
world and only communicated through a tested and secured `interface
exposed by the tendermint core <./rpc.html>`__. This interface
exposed by the tendermint core <./specification/rpc.html>`__. This interface
exposes a lot of data on the block header and consensus process, which
is quite useful for externally verifying the system. It also includes
3(!) methods to broadcast a transaction (propose it for the blockchain,


@@ -142,6 +142,13 @@ It is unlikely that you will need to implement a client. For details of
our client, see
`here <https://github.com/tendermint/abci/tree/master/client>`__.
Most of the examples below are from `dummy application
<https://github.com/tendermint/abci/blob/master/example/dummy/dummy.go>`__,
which is a part of the abci repo. `persistent_dummy application
<https://github.com/tendermint/abci/blob/master/example/dummy/persistent_dummy.go>`__
is used to show ``BeginBlock``, ``EndBlock`` and ``InitChain``
example implementations.
Blockchain Protocol
-------------------
@@ -187,6 +194,12 @@ through all transactions in the mempool, removing any that were included
in the block, and re-run the rest using CheckTx against the post-Commit
mempool state.
::
func (app *DummyApplication) CheckTx(tx []byte) types.Result {
return types.OK
}
Consensus Connection
~~~~~~~~~~~~~~~~~~~~
@@ -215,6 +228,19 @@ The block header will be updated (TODO) to include some commitment to
the results of DeliverTx, be it a bitarray of non-OK transactions, or a
merkle root of the data returned by the DeliverTx requests, or both.
::
// tx is either "key=value" or just arbitrary bytes
func (app *DummyApplication) DeliverTx(tx []byte) types.Result {
parts := strings.Split(string(tx), "=")
if len(parts) == 2 {
app.state.Set([]byte(parts[0]), []byte(parts[1]))
} else {
app.state.Set(tx, tx)
}
return types.OK
}
Commit
^^^^^^
@@ -228,7 +254,7 @@ Commit, or there will be deadlock. Note also that all remaining
transactions in the mempool are replayed on the mempool connection
(CheckTx) following a commit.
The Commit response includes a byte array, which is the deterministic
The app should respond to the Commit request with a byte array, which is the deterministic
state root of the application. It is included in the header of the next
block. It can be used to provide easily verified Merkle-proofs of the
state of the application.
@@ -237,6 +263,13 @@ It is expected that the app will persist state to disk on Commit. The
option to have all transactions replayed from some previous block is the
job of the `Handshake <#handshake>`__.
::
func (app *DummyApplication) Commit() types.Result {
hash := app.state.Hash()
return types.NewResultOK(hash, "")
}
BeginBlock
^^^^^^^^^^
@@ -248,6 +281,17 @@ The app should remember the latest height and header (ie. from which it
has run a successful Commit) so that it can tell Tendermint where to
pick up from when it restarts. See information on the Handshake, below.
::
// Track the block hash and header information
func (app *PersistentDummyApplication) BeginBlock(params types.RequestBeginBlock) {
// update latest block info
app.blockHeader = params.Header
// reset valset changes
app.changes = make([]*types.Validator, 0)
}
EndBlock
^^^^^^^^
@@ -260,6 +304,13 @@ EndBlock response. To remove one, include it in the list with a
validator set. Note validator set changes are only available in v0.8.0
and up.
::
// Update the validator set
func (app *PersistentDummyApplication) EndBlock(height uint64) (resEndBlock types.ResponseEndBlock) {
return types.ResponseEndBlock{Diffs: app.changes}
}
Query Connection
~~~~~~~~~~~~~~~~
@@ -281,6 +332,34 @@ cause Tendermint to not connect to the corresponding peer:
Note: these query formats are subject to change!
::
func (app *DummyApplication) Query(reqQuery types.RequestQuery) (resQuery types.ResponseQuery) {
if reqQuery.Prove {
value, proof, exists := app.state.Proof(reqQuery.Data)
resQuery.Index = -1 // TODO make Proof return index
resQuery.Key = reqQuery.Data
resQuery.Value = value
resQuery.Proof = proof
if exists {
resQuery.Log = "exists"
} else {
resQuery.Log = "does not exist"
}
return
} else {
index, value, exists := app.state.Get(reqQuery.Data)
resQuery.Index = int64(index)
resQuery.Value = value
if exists {
resQuery.Log = "exists"
} else {
resQuery.Log = "does not exist"
}
return
}
}
Handshake
~~~~~~~~~
@@ -288,7 +367,7 @@ When the app or tendermint restarts, they need to sync to a common
height. When an ABCI connection is first established, Tendermint will
call ``Info`` on the Query connection. The response should contain the
LastBlockHeight and LastBlockAppHash - the former is the last block for
the which the app ran Commit successfully, the latter is the response
which the app ran Commit successfully, the latter is the response
from that Commit.
Using this information, Tendermint will determine what needs to be
@@ -297,3 +376,28 @@ the app are synced to the latest block height.
If the app returns a LastBlockHeight of 0, Tendermint will just replay
all blocks.
::
func (app *DummyApplication) Info(req types.RequestInfo) (resInfo types.ResponseInfo) {
return types.ResponseInfo{Data: cmn.Fmt("{\"size\":%v}", app.state.Size())}
}
Genesis
~~~~~~~
``InitChain`` will be called once upon the genesis. ``params`` includes the
initial validator set. Later on, it may be extended to take parts of the
consensus params.
::
// Save the validators in the merkle tree
func (app *PersistentDummyApplication) InitChain(params types.RequestInitChain) {
for _, v := range params.Validators {
r := app.updateValidator(v)
if r.IsErr() {
app.logger.Error("Error updating validators", "r", r)
}
}
}


@@ -1,16 +0,0 @@
# ABCI
ABCI is an interface between the consensus/blockchain engine known as tendermint, and the application-specific business logic, known as an ABCi app.
The tendermint core should run unchanged for all apps. Each app can customize it, the supported transactions, queries, even the validator sets and how to handle staking / slashing stake. This customization is achieved by implementing the ABCi app to send the proper information to the tendermint engine to perform as directed.
To understand this decision better, think of the design of the tendermint engine.
* A blockchain is simply consensus on a unique global ordering of events.
* This consensus can efficiently be implemented using BFT and PoS
* This code can be generalized to easily support a large number of blockchains
* The block-chain specific code, the interpretation of the individual events, can be implemented by a 3rd party app without touching the consensus engine core
* Use an efficient, language-agnostic layer to implement this (ABCi)
Bucky, please make this doc real.


@@ -2,15 +2,4 @@
This is a location to record all high-level architecture decisions in the tendermint project. Not the implementation details, but the reasoning that happened. This should be referred to for guidance on the "right way" to extend the application. And if we notice that the original decisions were lacking, we should have another open discussion, record the new decisions here, and then modify the code to match.
This is like our guide and mentor when Jae and Bucky are offline.... The concept comes from a [blog post](https://product.reverb.com/documenting-architecture-decisions-the-reverb-way-a3563bb24bd0#.78xhdix6t) that resonated among the team when Anton shared it.
Each section of the code can have its own markdown file in this directory, and please add a link to the readme.
## Sections
* [ABCI](./ABCI.md)
* [go-merkle / merkleeyes](./merkle.md)
* [Frey's thoughts on the data store](./merkle-frey.md)
* basecoin
* tendermint core (multiple sections)
* ???
Read up on the concept in this [blog post](https://product.reverb.com/documenting-architecture-decisions-the-reverb-way-a3563bb24bd0#.78xhdix6t).


@@ -1,4 +1,4 @@
# ADR 2: Indexing
# ADR 2: Event Subscription
## Context


@@ -1,4 +1,4 @@
# ADR 1: Must an ABCI-app have an RPC server?
# ADR 3: Must an ABCI-app have an RPC server?
## Context


@@ -0,0 +1,38 @@
# ADR 004: Historical Validators
## Context
Right now, we can query the present validator set, but there is no history.
If you were offline for a long time, there is no way to reconstruct past validators. This is needed for the light client, and we agreed it needs an enhancement of the API.
## Decision
For every block, store a new structure that contains either the latest validator set,
or the height of the last block for which the validator set changed. Note this is not
the height of the block which returned the validator set change itself, but the next block,
ie. the first block it comes into effect for.
Storing the validators will be handled by the `state` package.
At some point in the future, we may consider more efficient storage in the case where the validators
are updated frequently - for instance by only saving the diffs, rather than the whole set.
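As a rough sketch of this decision (the names `ValidatorsInfo` and `LoadValidators` are hypothetical, not the final `state` package API, and `Validator` stands in for the existing validator type):
```
// ValidatorsInfo is a hypothetical per-height record: either the full
// validator set, or only the height at which the current set took effect.
type ValidatorsInfo struct {
	ValidatorSet      []Validator // nil when the set did not change at this height
	LastHeightChanged int64       // first height at which the current set took effect
}

// LoadValidators resolves the set for a given height, following the pointer
// when only the change-height was stored for that height.
func LoadValidators(store map[int64]ValidatorsInfo, height int64) []Validator {
	info := store[height]
	if info.ValidatorSet == nil {
		info = store[info.LastHeightChanged]
	}
	return info.ValidatorSet
}
```
Storing only the change-height for blocks where nothing changed keeps the extra per-block write small.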
An alternative approach suggested keeping the validator set, or diffs of it, in a merkle IAVL tree.
While it might afford cheaper proofs that a validator set has not changed, it would be more complex,
and likely less efficient.
## Status
Accepted.
## Consequences
### Positive
- Can query old validator sets, with proof.
### Negative
- Writes an extra structure to disk with every block.
### Neutral

View File

@@ -0,0 +1,85 @@
# ADR 005: Consensus Params
## Context
Consensus critical parameters controlling blockchain capacity have until now been hard coded, loaded from a local config, or neglected.
Since they may need to be different in different networks, and potentially to evolve over time within
networks, we seek to initialize them in a genesis file, and expose them through the ABCI.
While we have some specific parameters now, like maximum block and transaction size, we expect to have more in the future,
such as a period over which evidence is valid, or the frequency of checkpoints.
## Decision
### ConsensusParams
No consensus critical parameters should ever be found in the `config.toml`.
A new `ConsensusParams` is optionally included in the `genesis.json` file,
and loaded into the `State`. Any items not included are set to their default value.
A value of 0 is undefined (see ABCI, below). A value of -1 is used to indicate the parameter does not apply.
The parameters are used to determine the validity of a block (and tx) via the union of all relevant parameters.
```
type ConsensusParams struct {
	BlockSizeParams
	TxSizeParams
	BlockGossipParams
}

type BlockSizeParams struct {
	MaxBytes int
	MaxTxs   int
	MaxGas   int
}

type TxSizeParams struct {
	MaxBytes int
	MaxGas   int
}

type BlockGossipParams struct {
	BlockPartSizeBytes int
}
```
The `ConsensusParams` can evolve over time by adding new structs that cover different aspects of the consensus rules.
The `BlockPartSizeBytes` and the `BlockSizeParams.MaxBytes` are enforced to be greater than 0.
The former because we need a part size, the latter so that we always have at least some sanity check over the size of blocks.
### ABCI
#### InitChain
InitChain currently takes the initial validator set. It should be extended to also take parts of the ConsensusParams.
There is some case to be made for it to take the entire Genesis, except there may be things in the genesis,
like the BlockPartSize, that the app shouldn't really know about.
#### EndBlock
The EndBlock response includes a `ConsensusParams`, which includes BlockSizeParams and TxSizeParams, but not BlockGossipParams.
Other param structs can be added to `ConsensusParams` in the future.
The `0` value is used to denote no change.
Any other value will update that parameter in the `State.ConsensusParams`, to be applied for the next block.
Tendermint should have hard-coded upper limits as sanity checks.
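For illustration only, the "zero means no change" rule could be applied roughly as follows, reusing the `BlockSizeParams` struct above (the function name is made up for this sketch):
```
// applyBlockSizeUpdates overwrites only the fields the app explicitly set in
// the EndBlock response; zero values leave the current parameter untouched.
func applyBlockSizeUpdates(cur, upd BlockSizeParams) BlockSizeParams {
	if upd.MaxBytes != 0 {
		cur.MaxBytes = upd.MaxBytes
	}
	if upd.MaxTxs != 0 {
		cur.MaxTxs = upd.MaxTxs
	}
	if upd.MaxGas != 0 {
		cur.MaxGas = upd.MaxGas
	}
	return cur
}
```
The hard-coded upper limits mentioned above would be checked right after such a merge, before the new params are saved into the state.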
## Status
Proposed.
## Consequences
### Positive
- Alternative capacity limits and consensus parameters can be specified without re-compiling the software.
- They can also change over time under the control of the application
### Negative
- More exposed parameters means more complexity
- Different rules at different heights in the blockchain complicate fast sync
### Neutral
- The TxSizeParams, which checks validity, may be in conflict with the config's `max_block_size_tx`, which determines proposal sizes

View File

@@ -0,0 +1,16 @@
# ADR 000: Template for an ADR
## Context
## Decision
## Status
## Consequences
### Positive
### Negative
### Neutral

View File

@@ -1,240 +0,0 @@
# Merkle data stores - Frey's proposal
## TL;DR
To allow the efficient creation of an ABCi app, tendermint wishes to provide a reference implementation of a key-value store that provides merkle proofs of the data. These proofs then quickly allow the ABCi app to provide an app hash to the consensus engine, as well as a full proof to any client.
This is equivalent to building a database, and I would propose designing it from the API first, then looking at how to implement this (or make an adapter from the API to existing implementations). Once we agree on the functionality and the interface, we can implement the API bindings, and then work on building adapters to existing merkle-ized data stores, or modifying the stores to support this interface.
We need to consider the API (both in-process and over the network), language bindings, maintaining handles to old state (and garbage collecting), persistence, security, providing merkle proofs, and general key-value store operations. To stay consistent with the blockchains "single global order of operations", this data store should only allow one connection at a time to have write access.
## Overview
* **State**
* There are two concepts of state, "committed state" and "working state"
* The working state is only accessible from the ABCi app, allows writing, but does not need to support proofs.
* When we commit the "working state", it becomes a new "committed state" and has an immutable root hash, provides proofs, and can be exposed to external clients.
* **Transactions**
* The database always allows creating a read-only transaction at the last "committed state", this transaction can serve read queries and proofs.
* The database maintains all data to serve these read transactions until they are closed by the client (or time out). This allows the client(s) to determine how much old info is needed
* The database can only support *at most* one writable transaction at a time. This makes it easy to enforce serializability, and attempting to start a second writable transaction may trigger a panic.
* **Functionality**
* It must support efficient key-value operations (get/set/delete)
* It must support returning merkle proofs for any "committed state"
* It should support range queries on subsets of the key space if possible (ie. if the db doesn't hash keys)
* It should also support listening to changes to a desired key via pub-sub or similar method, so I can quickly notify you on a change to your balance without constant polling.
* It may support other db-specific query types as an extension to this interface, as long as all specified actions maintain their meaning.
* **Interface**
* This interface should be domain-specific - ie. designed just for this use case
* It should present a simple go interface for embedding the data store in-process
* It should create a gRPC/protobuf API for calling from any client
* It should provide and maintain client adapters from our in-process interface to gRPC client calls for at least golang and Java (maybe more languages?)
* It should provide and maintain server adapters from our gRPC calls to the in-process interface for golang at least (unless there is another server we wish to support)
* **Persistence**
* It must support atomic persistence upon committing a new block. That is, upon crash recovery, the state is guaranteed to represent the state at the end of a complete block (along with a note of which height it was).
* It must delay deletion of old data as long as there are open read-only transactions referring to it, thus we must maintain some sort of WAL to keep track of pending cleanup.
* When a transaction is closed, or when we recover from a crash, it should clean up all no longer needed data to avoid memory/storage leaks.
* **Security and Auth**
* If we allow connections over gRPC, we must consider these issues and allow both encryption (SSL) and some basic auth rules to prevent undesired access to the DB
* This is client-specific and does not need to be supported in the in-process, embedded version.
## Details
Here we go more in-depth in each of the sections, explaining the reasoning and more details on the desired behavior. This document is only the high-level architecture and should support multiple implementations. When building out a specific implementation, a similar document should be provided for that repo, showing how it implements these concepts, and details about memory usage, storage, efficiency, etc.
### State
The current ABCi interface avoids this question a bit and that has brought confusion. If I use `merkleeyes` to store data, which state is returned from `Query`? The current "working" state, which I would like to refer to in my ABCi application? Or the last committed state, which I would like to return to a client's query? Or an old state, which I may select based on height?
Right now, `merkleeyes` implements `Query` like a normal ABCi app and only returns committed state, which has led to problems and confusion. Thus, we need to be explicit about which state we want to view. Each viewer can then specify which state it wants to view. This allows the app to query the working state in DeliverTx, but the committed state in Query.
We can easily provide two global references for "last committed" and "current working" states. However, if we want to also allow querying of older commits... then we need some way to keep track of which ones are still in use, so we can garbage collect the unneeded ones. There is a non-trivial overhead in holding references to all past states, but also a hard-coded solution (hold onto the last 5 commits) may not support all clients. We should let the client define this somehow.
### Transactions
Transactions (in the typical database sense) are a clean and established solution to this issue. We can look at the [isolation levels](https://en.wikipedia.org/wiki/Isolation_(database_systems)#Serializable) which attempt to provide us things like "repeatable reads". That means if we open a transaction, and query some data 100 times while other processes are writing to the db, we get the same result each time. This transaction has a reference to its own local state from the time the transaction started. (We are referring to the highest isolation levels here, which correlate well with the blockchain use case.)
If we implement a read-only transaction as a reference to state at the time of creation of that transaction, we can then hold these references to various snapshots, one per block that we are interested in, and allow the client to multiplex queries and proofs from these various blocks.
If we continue using these concepts (which have informed 30+ years of server side design), we can add a few nice features to our write transactions. The first of which is `Rollback` and `Commit`. That means all the changes we make in this transaction have no effect on the database until they are committed. And until they are committed, we can always abort if we detect an anomaly, returning to the last committed state with a rollback.
There is also a nice extension to this available on some database servers, basically, "nested" transactions or "savepoints". This means that within one transaction, you can open a subtransaction/savepoint and continue work. Later you have the option to commit or rollback all work since the savepoint/subtransaction. And then continue with the main transaction.
If you don't understand why this is useful, look at how basecoin needs to [hold cached state for AppTx](https://github.com/tendermint/basecoin/blob/master/state/execution.go#L126-L149), meaning that it rolls back all modifications if the AppTx returns an error. This was implemented as a wrapper in basecoin, but it is a reasonable thing to support in the DB interface itself (especially since the implementation becomes quite non-trivial as soon as you support range queries).
To give a bit more reference to this concept in practice, read about [Savepoints in Postgresql](https://www.postgresql.org/docs/current/static/tutorial-transactions.html) ([reference](https://www.postgresql.org/docs/current/static/sql-savepoint.html)) or [Nesting transactions in SQL Server](http://dba-presents.com/index.php/databases/sql-server/43-nesting-transactions-and-save-transaction-command) (TL;DR: scroll to the bottom, section "Real nesting transactions with SAVE TRANSACTION")
### Functionality
Merkle trees work with key-value pairs, so we should most importantly focus on the basic Key-Value operations. That is `Get`, `Set`, and `Remove`. We also need to return a merkle proof for any key, along with a root hash of the tree for committing state to the blockchain. This is just the basic merkle-tree stuff.
If it is possible with the implementation, it is nice to provide access to Range Queries. That is, return all values where the key is between X and Y. If you construct your keys wisely, it is possible to store lists (1:N relations) this way. Eg, if we store blog posts under the key blog:`poster_id`:`sequence`, then I could search for all blog posts by a given `poster_id`, or even return just posts 10-19 from the given poster.
The construction of a tree that supports range queries was one of the [design decisions of go-merkle](https://github.com/tendermint/go-merkle/blob/master/README.md). It is also kind of possible with [ethereum's patricia trie](https://github.com/ethereum/wiki/wiki/Patricia-Tree) as long as the key is less than 32 bytes.
In addition to range queries, there is one more nice feature that we could add to our data store - listening to events. Depending on your context, this is "reactive programming", "event emitters", "notifications", etc... But the basic concept is that a client can listen for all changes to a given key (or set of keys), and receive a notification when this happens. This is very important to avoid [repeated polling and wasted queries](http://resthooks.org/) when a client simply wants to [detect changes](https://www.rethinkdb.com/blog/realtime-web/).
If the database provides access to some "listener" functionality, the app can choose to expose this to the external client via websockets, web hooks, http2 push events, android push notifications, etc, etc etc.... But if we want to support modern client functionality, let's add support for this reactive paradigm in our DB interface.
**TODO** support for more advanced backends, eg. Bolt....
### Go Interface
I will start with a simple go interface to illustrate the in-process interface. Once there is agreement on how this looks, we can work out the gRPC bindings to support calling out of process. These interfaces are not finalized code, but I think they demonstrate the concepts better than text and provide a strawman to get feedback.
```
// DB represents the committed state of a merkle-ized key-value store
type DB interface {
	// Snapshot returns a reference to last committed state to use for
	// providing proofs, you must close it at the end to garbage collect
	// the historical state we hold on to to make these proofs
	Snapshot() Prover

	// Start a transaction - only way to change state
	// This will return an error if there is an open Transaction
	Begin() (Transaction, error)

	// These callbacks are triggered when the Transaction is Committed
	// to the DB. They can be used to eg. notify clients via websockets when
	// their account balance changes.
	AddListener(key []byte, listener Listener)
	RemoveListener(listener Listener)
}

// DBReader represents a read-only connection to a snapshot of the db
type DBReader interface {
	// Queries on my local view
	Has(key []byte) (bool, error)
	Get(key []byte) (Model, error)
	GetRange(start, end []byte, ascending bool, limit int) ([]Model, error)
	Closer
}

// Prover is an interface that lets one query for Proofs, holding the
// data at a specific location in memory
type Prover interface {
	DBReader

	// Hash is the AppHash (RootHash) for this block
	Hash() (hash []byte)

	// Prove returns the data along with a merkle Proof
	// Model and Proof are nil if not found
	Prove(key []byte) (Model, Proof, error)
}

// Transaction is a set of state changes to the DB to be applied atomically.
// There can only be one open transaction at a time, which may only have
// maximum one subtransaction at a time.
// In short, at any time, there is exactly one object that can write to the
// DB, and we can use Subtransactions to group operations and roll them back
// together (kind of like `types.KVCache` from basecoin)
type Transaction interface {
	DBReader

	// Change the state - will raise error immediately if this Transaction
	// is not holding the exclusive write lock
	Set(model Model) (err error)
	Remove(key []byte) (removed bool, err error)

	// Subtransaction starts a new subtransaction, rollback will not affect the
	// parent. Only on Commit are the changes applied to this transaction.
	// While the subtransaction exists, no write is allowed on the parent.
	// (You must Commit or Rollback the child to continue)
	Subtransaction() Transaction

	// Commit this transaction (or subtransaction), the parent reference is
	// now updated.
	// This only updates the persistent store if the top-level transaction commits
	// (You may have any number of nested sub transactions)
	Commit() error

	// Rollback ends the transaction and throws away all transaction-local state,
	// allowing the tree to prune those elements.
	// The parent transaction now recovers the write lock.
	Rollback()
}

// Listener registers callbacks on changes to the data store
type Listener interface {
	OnSet(key, value, oldValue []byte)
	OnRemove(key, oldValue []byte)
}

// Proof represents a merkle proof for a key
type Proof interface {
	RootHash() []byte
	Verify(key, value, root []byte) bool
}

type Model interface {
	Key() []byte
	Value() []byte
}

// Closer releases the reference to this state, allowing us to garbage collect
// Make sure to call it before discarding.
type Closer interface {
	Close()
}
```
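To make the intended call pattern concrete, here is a purely illustrative caller of the interface above (`kvModel` and `exampleFlow` are invented for this sketch and are not part of the proposal):
```
// kvModel is a trivial Model implementation for the sketch.
type kvModel struct{ key, value []byte }

func (m kvModel) Key() []byte   { return m.key }
func (m kvModel) Value() []byte { return m.value }

// exampleFlow sketches how an ABCi app might drive the interface: one
// writable transaction per block, a subtransaction per message, and a
// snapshot for serving proofs from the newly committed state.
func exampleFlow(db DB) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	// Work on an individual message inside a subtransaction, so it can be
	// rolled back without losing the rest of the block.
	sub := tx.Subtransaction()
	if err := sub.Set(kvModel{[]byte("alice"), []byte("100")}); err != nil {
		sub.Rollback()
	} else if err := sub.Commit(); err != nil {
		return err
	}
	// Commit the block, then serve a proof from the new committed state.
	if err := tx.Commit(); err != nil {
		return err
	}
	snap := db.Snapshot()
	defer snap.Close()
	_, _, err = snap.Prove([]byte("alice"))
	return err
}
```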
### Remote Interface
The use-case of allowing out-of-process calls is very powerful, and not just to provide a powerful merkle-ready data store to non-go applications.
If we allow the ABCi app to maintain the only writable connections, we can guarantee that all transactions are only processed through the tendermint consensus engine. We could then allow multiple "web server" machines "read-only" access and scale out the database reads, assuming the consensus engine, ABCi logic, and public key cryptography are more of a bottleneck than the database. We could even place the consensus engine, ABCi app, and data store on one machine, connected with unix sockets for security, and expose a tcp/ssl interface for reading the data, to scale out query processing over multiple machines.
But returning our focus directly to the ABCi app (which is the most important use case): an app may well want to maintain 100 or 1000 snapshots of different heights to allow people to easily query many proofs at a given height without race conditions (very important for IBC, ask Jae). Thus, we should not require a separate TCP connection for each height, as this gets quite awkward with so many connections. Also, if we want to use gRPC, we should consider the connections potentially transient (although they are more efficient with keep-alive).
Thus, the wire encoding of a transaction or a snapshot should simply return a unique id. All methods on a `Prover` or `Transaction` over the wire can send this id along with the arguments for the method call. And we just need a hash map on the server to map this id to a state.
The only negative of not requiring a persistent tcp connection for each snapshot is there is no auto-detection if the client crashes without explicitly closing the connections. Thus, I would suggest adding a `Ping` thread in the gRPC interface which keeps the Snapshot alive. If no ping is received within a server-defined time, it may automatically close those transactions. And if we consider a client with 500 snapshots that needs to ping each every 10 seconds, that is a lot of overhead, so we should design the ping to accept a list of IDs for the client and update them all. Or associate all snapshots with a clientID and then just send the clientID in the ping. (Please add other ideas on how to detect client crashes without persistent connections).
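As a sketch of the id-based bookkeeping and batched ping described above (all names here are invented for illustration; a real server would also need locking around these maps):
```
// snapshotRegistry maps a wire-level id to an open Prover, plus the last ping
// time, so a background loop can close snapshots whose clients went away.
type snapshotRegistry struct {
	nextID   uint64
	open     map[uint64]Prover
	lastPing map[uint64]int64 // unix seconds of the most recent ping
}

// register stores a snapshot and returns the id the client uses on the wire.
func (r *snapshotRegistry) register(p Prover, now int64) uint64 {
	r.nextID++
	r.open[r.nextID] = p
	r.lastPing[r.nextID] = now
	return r.nextID
}

// ping refreshes many ids in a single call, as suggested above.
func (r *snapshotRegistry) ping(ids []uint64, now int64) {
	for _, id := range ids {
		if _, ok := r.open[id]; ok {
			r.lastPing[id] = now
		}
	}
}
```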
To encourage adoption, we should provide a nice client that uses this gRPC interface (like we do with ABCi). For go, the client may have the exact same interface as the in-process version, just that the error call may return network errors, not just illegal operations. We should also add a client with a clean API for Java, since that seems to be popular among app developers in the current tendermint community. Other bindings as we see the need in the server space.
### Persistence
Any data store worth its name should not lose all data on a crash. Even [redis provides some persistence](https://redis.io/topics/persistence) these days. Ideally, if the system crashes and restarts, it should have the data at the last block N that was committed. If the system crashes during the commit of block N+1, then the recovered state should either be block N or the completely committed block N+1, but no partial state between the two. Basically, the commit must be an atomic operation (even if updating 100's of records).
To avoid a lot of headaches ourselves, we can use an existing data store, such as leveldb, which provides `WriteBatch` to group all operations.
The other issue is cleaning up old state. We cannot delete any information from our persistent store, as long as any snapshot holds a reference to it (or else we get some panics when the data we query is not there). So, we need to store the outstanding deletions that we can perform when the snapshot is `Close`d. In addition, we must consider the case that the data store crashes with open snapshots. Thus, the info on outstanding deletions must also be persisted somewhere. Something like a "delete-behind log" (the opposite of a "write ahead log").
This is not a concern of the generic interface, but each implementation should take care to handle this well to avoid accumulation of unused references in the data store and eventual data bloat.
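A minimal sketch of such a delete-behind log, assuming per-height reference counts (illustrative only; a real implementation must also persist this log so it survives crashes):
```
// pendingDeletes is an illustrative delete-behind log: keys orphaned at a
// given height are only physically removed once no open snapshot refers to
// that height anymore.
type pendingDeletes struct {
	refs    map[int64]int      // height -> number of open snapshots
	orphans map[int64][][]byte // height -> keys safe to delete once refs hit 0
}

// closeSnapshot decrements the refcount for a height and flushes its orphans
// once the last reference is gone.
func (p *pendingDeletes) closeSnapshot(height int64, deleteKey func(key []byte)) {
	p.refs[height]--
	if p.refs[height] > 0 {
		return
	}
	for _, key := range p.orphans[height] {
		deleteKey(key)
	}
	delete(p.orphans, height)
	delete(p.refs, height)
}
```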
#### Backing stores
It is way outside the scope of this project to build our own database that is capable of efficiently storing the data, providing multiple read-only snapshots at once, and saving it atomically. The best approach seems to be to select an existing database (ideally a simple one) that provides this functionality and build upon it, much like the current `go-merkle` implementation builds upon `leveldb`. After some research, here are the winners and losers:
**Winners**
* Leveldb - [provides consistent snapshots](https://ayende.com/blog/161705/reviewing-leveldb-part-xiii-smile-and-here-is-your-snapshot), and [provides tooling for building ACID compliance](http://codeofrob.com/entries/writing-a-transaction-manager-on-top-of-leveldb.html)
* Note there are at least two solid implementations available in go - [goleveldb](https://github.com/syndtr/goleveldb) - a pure go implementation, and [levigo](https://github.com/jmhodges/levigo) - a go wrapper around leveldb.
* Goleveldb is much easier to compile and cross-compile (not requiring cgo), while levigo (or cleveldb) seems to provide a significant performance boost (but I had trouble even running benchmarks)
* PostgreSQL - fully supports these ACID semantics if you call `SET TRANSACTION ISOLATION LEVEL SERIALIZABLE` at the beginning of a transaction (tested)
* This may be total overkill unless we also want to make use of other features, like storing data in multiple columns with secondary indexes.
* Trillian can show an example of [how to store a merkle tree in sql](https://github.com/google/trillian/blob/master/storage/mysql/tree_storage.go)
**Losers**
* Bolt - open [read-only snapshots can block writing](https://github.com/boltdb/bolt/issues/378)
* Mongo - [barely even supports atomic operations](https://docs.mongodb.com/manual/core/write-operations-atomicity/), much less multiple snapshots
**To investigate**
* [Trillian](https://github.com/google/trillian) - has a [persistent merkle tree interface](https://github.com/google/trillian/blob/master/storage/tree_storage.go) along with [backend storage with mysql](https://github.com/google/trillian/blob/master/storage/mysql/tree_storage.go), good inspiration for our design if not directly using it
* [Moss](https://github.com/couchbase/moss) - another key-value store in go, seems similar to leveldb, maybe compare with performance tests?
### Security
When allowing access out-of-process, we should provide different mechanisms to secure it. The first is the choice of binding to a local unix socket or a tcp port. The second is the optional use of ssl to encrypt the connection (very important over tcp). The third is authentication to control access to the database.
We may also want to consider the case of two server connections with different permissions, eg. a local unix socket that allows write access with no more credentials, and a public TCP connection with ssl and authentication that only provides read-only access.
The use of ssl is quite easy in go; we just need to generate and sign a certificate. It is nice to be able to disable it for dev machines, but it is very important for production.
For authentication, let me sketch out a minimal solution. The server could just have a simple config file with key/bcrypt(password) pairs along with read/write permission level, and read that upon startup. The client must provide a username and password in the HTTP headers when making the original HTTPS gRPC connection.
This is super minimal to provide some protection. Things like LDAP, OAuth and single-sign on seem overkill and even potential security holes. Maybe there is another solution somewhere in the middle.

View File

@@ -1,17 +0,0 @@
# Merkle data stores
To allow the efficient creation of an ABCi app, tendermint wishes to provide a reference implementation of a key-value store that provides merkle proofs of the data. These proofs then quickly allow the ABCi app to provide an app hash to the consensus engine, as well as a full proof to any client.
This engine is currently implemented in `go-merkle` with `merkleeyes` providing a language-agnostic binding via ABCi. It uses `tmlibs/db` bindings internally to persist data to leveldb.
What are some of the requirements of this store:
* It must support efficient key-value operations (get/set/delete)
* It must support persistence.
* We must only persist complete blocks, so when we come up after a crash we are at the state of block N or N+1, but not in-between these two states.
* It must allow us to read/write from one uncommitted state (working state), while serving other queries from the last committed state. We also need a way to determine which one to serve for each client.
* It must allow us to hold references to old state, to allow providing proofs from 20 blocks ago. We can define some limits as to the maximum time to hold this data.
* We provide in process binding in Go
* We provide language-agnostic bindings when running the data store as its own process.

View File

(Three binary image files changed in this diff; sizes unchanged: 43 KiB, 103 KiB, and 2.4 MiB.)

View File

@@ -16,9 +16,10 @@
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
import urllib
import sphinx_rtd_theme
@@ -169,3 +170,30 @@ texinfo_documents = [
author, 'Tendermint', 'Byzantine Fault Tolerant Consensus.',
'Database'),
]
repo = "https://raw.githubusercontent.com/tendermint/tools/"
branch = "master"
tools = "./tools"
assets = tools + "/assets"
if os.path.isdir(tools) != True:
    os.mkdir(tools)
if os.path.isdir(assets) != True:
    os.mkdir(assets)
urllib.urlretrieve(repo+branch+'/ansible/README.rst', filename=tools+'/ansible.rst')
urllib.urlretrieve(repo+branch+'/ansible/assets/a_plus_t.png', filename=assets+'/a_plus_t.png')
urllib.urlretrieve(repo+branch+'/docker/README.rst', filename=tools+'/docker.rst')
urllib.urlretrieve(repo+branch+'/mintnet-kubernetes/README.rst', filename=tools+'/mintnet-kubernetes.rst')
urllib.urlretrieve(repo+branch+'/mintnet-kubernetes/assets/gce1.png', filename=assets+'/gce1.png')
urllib.urlretrieve(repo+branch+'/mintnet-kubernetes/assets/gce2.png', filename=assets+'/gce2.png')
urllib.urlretrieve(repo+branch+'/mintnet-kubernetes/assets/statefulset.png', filename=assets+'/statefulset.png')
urllib.urlretrieve(repo+branch+'/mintnet-kubernetes/assets/t_plus_k.png', filename=assets+'/t_plus_k.png')
urllib.urlretrieve(repo+branch+'/terraform-digitalocean/README.rst', filename=tools+'/terraform-digitalocean.rst')
urllib.urlretrieve(repo+branch+'/tm-bench/README.rst', filename=tools+'/benchmarking-and-monitoring.rst')
# the readme for below is included in tm-bench
# urllib.urlretrieve('https://raw.githubusercontent.com/tendermint/tools/master/tm-monitor/README.rst', filename='tools/tm-monitor.rst')

View File

@@ -40,8 +40,51 @@ Automated Deployments
---------------------
While the manual deployment is easy enough, an automated deployment is
always better. For this, we have the `mintnet-kubernetes
tool <https://github.com/tendermint/tools/tree/master/mintnet-kubernetes>`__,
which allows us to automate the deployment of a Tendermint network on an
already provisioned kubernetes cluster. And for simple provisioning of kubernetes
usually quicker. The below examples show different tools that can be used
for automated deployments.
Automated Deployment using Kubernetes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The `mintnet-kubernetes tool <https://github.com/tendermint/tools/tree/master/mintnet-kubernetes>`__
allows automating the deployment of a Tendermint network on an already
provisioned kubernetes cluster. For simple provisioning of a kubernetes
cluster, check out the `Google Cloud Platform <https://cloud.google.com/>`__.
Automated Deployment using Terraform and Ansible
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The `terraform-digitalocean tool <https://github.com/tendermint/tools/tree/master/terraform-digitalocean>`__
allows creating a set of servers on the DigitalOcean cloud.
The `ansible playbooks <https://github.com/tendermint/tools/tree/master/ansible>`__
allow creating and managing a ``basecoin`` or ``ethermint`` testnet on provisioned servers.
Package Deployment on Linux for developers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``tendermint`` and ``basecoin`` applications can be installed from RPM or DEB packages on
Linux machines for development purposes. The packages are configured to be validators on the
one-node network that the machine represents. The services are not started after installation,
giving you an opportunity to reconfigure the applications before starting them.
The Ansible playbooks in the previous section use this repository to install ``basecoin``.
After installation, additional steps are executed to make sure that the multi-node testnet has
the right configuration before it starts.
Install from the CentOS/RedHat repository:
::
rpm --import https://tendermint-packages.interblock.io/centos/7/os/x86_64/RPM-GPG-KEY-Tendermint
wget -O /etc/yum.repos.d/tendermint.repo https://tendermint-packages.interblock.io/centos/7/os/x86_64/tendermint.repo
yum install basecoin
Install from the Debian/Ubuntu repository:
::
wget -O - https://tendermint-packages.interblock.io/centos/7/os/x86_64/RPM-GPG-KEY-Tendermint | apt-key add -
wget -O /etc/apt/sources.list.d/tendermint.list https://tendermint-packages.interblock.io/debian/tendermint.list
apt-get update && apt-get install basecoin

docs/ecosystem.rst (new file, 117 lines)
View File

@@ -0,0 +1,117 @@
Tendermint Ecosystem
====================
Below are the many applications built using various pieces of the Tendermint stack. We thank the community for their contributions thus far and welcome the addition of new projects. Feel free to submit a pull request to add your project!
ABCI Applications
-----------------
Burrow
^^^^^^
Ethereum Virtual Machine augmented with native permissioning scheme and global key-value store, written in Go, authored by Monax Industries, and incubated `by Hyperledger <https://github.com/hyperledger/burrow>`__.
cb-ledger
^^^^^^^^^
Custodian Bank Ledger, integrating central banking with the blockchains of tomorrow, written in C++, and `authored by Block Finance <https://github.com/block-finance/cpp-abci>`__.
Clearchain
^^^^^^^^^^
Application to manage a distributed ledger for money transfers that supports multi-currency accounts, written in Go, and `authored by Alessio Treglia <https://github.com/tendermint/clearchain>`__.
Comit
^^^^^
Public service reporting and tracking, written in Go, and `authored by Zach Balder <https://github.com/zbo14/comit>`__.
Cosmos SDK
^^^^^^^^^^
A prototypical account based crypto currency state machine supporting plugins, written in Go, and `authored by Cosmos <https://github.com/cosmos/cosmos-sdk>`__.
Ethermint
^^^^^^^^^
The go-ethereum state machine run as an ABCI app, written in Go, `authored by Tendermint <https://github.com/tendermint/ethermint>`__.
IAVL
^^^^
Immutable AVL+ tree with Merkle proofs, written in Go, `authored by Tendermint <https://github.com/tendermint/iavl>`__.
Lotion
^^^^^^
A Javascript microframework for building blockchain applications with Tendermint, written in Javascript, `authored by Judd Keppel of Tendermint <https://github.com/keppel/lotion>`__. See also `lotion-chat <https://github.com/keppel/lotion-chat>`__ and `lotion-coin <https://github.com/keppel/lotion-coin>`__ apps written using Lotion.
MerkleTree
^^^^^^^^^^
Immutable AVL+ tree with Merkle proofs, written in Java, `authored by jTendermint <https://github.com/jTendermint/MerkleTree>`__.
Passchain
^^^^^^^^^
Passchain is a tool to securely store and share passwords, tokens and other short secrets, `authored by trusch <https://github.com/trusch/passchain>`__.
Passwerk
^^^^^^^^
Encrypted storage web-utility backed by Tendermint, written in Go, `authored by Rigel Rozanski <https://github.com/rigelrozanski/passwerk>`__.
Py-Tendermint
^^^^^^^^^^^^^
A Python microframework for building blockchain applications with Tendermint, written in Python, `authored by Dave Bryson <https://github.com/davebryson/py-tendermint>`__.
Stratumn
^^^^^^^^
SDK for "Proof-of-Process" networks, written in Go, `authored by the Stratumn team <https://github.com/stratumn/sdk>`__.
TMChat
^^^^^^
P2P chat using Tendermint, written in Java, `authored by wolfposd <https://github.com/wolfposd/TMChat>`__.
ABCI Servers
------------
+------------------------------------------------------------------+--------------------+--------------+
| **Name** | **Author** | **Language** |
| | | |
+------------------------------------------------------------------+--------------------+--------------+
| `abci <https://github.com/tendermint/abci>`__ | Tendermint | Go |
+------------------------------------------------------------------+--------------------+--------------+
| `js abci <https://github.com/tendermint/js-abci>`__ | Tendermint | Javascript |
+------------------------------------------------------------------+--------------------+--------------+
| `cpp-tmsp <https://github.com/block-finance/cpp-abci>`__ | Martin Dyring | C++ |
+------------------------------------------------------------------+--------------------+--------------+
| `c-abci <https://github.com/chainx-org/c-abci>`__ | ChainX | C |
+------------------------------------------------------------------+--------------------+--------------+
| `jabci <https://github.com/jTendermint/jabci>`__ | jTendermint | Java |
+------------------------------------------------------------------+--------------------+--------------+
| `ocaml-tmsp <https://github.com/zbo14/ocaml-tmsp>`__ | Zach Balder | Ocaml |
+------------------------------------------------------------------+--------------------+--------------+
| `abci_server <https://github.com/KrzysiekJ/abci_server>`__ | Krzysztof Jurewicz | Erlang |
+------------------------------------------------------------------+--------------------+--------------+
| `rust-tsp <https://github.com/tendermint/rust-tsp>`__ | Adrian Brink | Rust |
+------------------------------------------------------------------+--------------------+--------------+
| `hs-abci <https://github.com/albertov/hs-abci>`__ | Alberto Gonzalez | Haskell |
+------------------------------------------------------------------+--------------------+--------------+
| `haskell-abci <https://github.com/cwgoes/haskell-abci>`__ | Christopher Goes | Haskell |
+------------------------------------------------------------------+--------------------+--------------+
| `Spearmint <https://github.com/dennismckinnon/spearmint>`__ | Dennis Mckinnon | Javascript |
+------------------------------------------------------------------+--------------------+--------------+
| `py-tendermint <https://github.com/davebryson/py-tendermint>`__ | Dave Bryson | Python |
+------------------------------------------------------------------+--------------------+--------------+
Deployment Tools
----------------
See `deploy testnets <./deploy-testnets.html>`__ for information about all the tools built by Tendermint. We have Kubernetes, Ansible, and Terraform integrations.
Cloudsoft built `brooklyn-tendermint <https://github.com/cloudsoft/brooklyn-tendermint>`__ for deploying a tendermint testnet in docker containers. It uses Clocker for Apache Brooklyn.

View File

@@ -73,7 +73,7 @@ Tendermint before, use:
::
tendermint init
tendermint init
tendermint node
If you have used Tendermint, you may want to reset the data for a new
@@ -107,7 +107,24 @@ like:
::
{"jsonrpc":"2.0","id":"","result":[98,{"check_tx":{},"deliver_tx":{}}],"error":""}
{
  "jsonrpc": "2.0",
  "id": "",
  "result": {
    "check_tx": {
      "code": 0,
      "data": "",
      "log": ""
    },
    "deliver_tx": {
      "code": 0,
      "data": "",
      "log": ""
    },
    "hash": "2B8EC32BA2579B3B8606E42C06DE2F7AFA2556EF",
    "height": 154
  }
}
The ``98`` is a type-byte, and can be ignored (it's useful for
serializing and deserializing arbitrary json). Otherwise, this result is
@@ -118,14 +135,27 @@ querying the app:
::
curl -s 'localhost:46657/abci_query?data="abcd"&path=""&prove=false'
curl -s 'localhost:46657/abci_query?data="abcd"'
The ``path`` and ``prove`` arguments can be ignored for now, and in a
future release can be left out. The result should look like:
The result should look like:
::
{"jsonrpc":"2.0","id":"","result":[112,{"response":{"value":"61626364","log":"exists"}}],"error":""}
{
  "jsonrpc": "2.0",
  "id": "",
  "result": {
    "response": {
      "code": 0,
      "index": 0,
      "key": "",
      "value": "61626364",
      "proof": "",
      "height": 0,
      "log": "exists"
    }
  }
}
Again, the ``112`` is the type-byte. Note the ``value`` in the result
(``61626364``); this is the hex-encoding of the ASCII of ``abcd``. You
@@ -144,7 +174,7 @@ Now if we query for ``name``, we should get ``satoshi``, or
::
curl -s 'localhost:46657/abci_query?data="name"&path=""&prove=false'
curl -s 'localhost:46657/abci_query?data="name"'
Try some other transactions and queries to make sure everything is
working!
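As a quick sanity check on the hex encoding mentioned earlier (``61626364`` is
the ASCII for ``abcd``), a few lines of Go will decode it; any language with a
hex decoder works just as well:
::
package main

import (
	"encoding/hex"
	"fmt"
)

func main() {
	b, _ := hex.DecodeString("61626364")
	fmt.Println(string(b)) // prints "abcd"
}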
@@ -204,14 +234,48 @@ the number ``1``. If instead, we try to send a ``5``, we get an error:
::
> curl localhost:46657/broadcast_tx_commit?tx=0x05
{"jsonrpc":"2.0","id":"","result":[98,{"check_tx":{},"deliver_tx":{"code":3,"log":"Invalid nonce. Expected 1, got 5"}}],"error":""}
{
  "jsonrpc": "2.0",
  "id": "",
  "result": {
    "check_tx": {
      "code": 0,
      "data": "",
      "log": ""
    },
    "deliver_tx": {
      "code": 3,
      "data": "",
      "log": "Invalid nonce. Expected 1, got 5"
    },
    "hash": "33B93DFF98749B0D6996A70F64071347060DC19C",
    "height": 38
  }
}
But if we send a ``1``, it works again:
::
> curl localhost:46657/broadcast_tx_commit?tx=0x01
{"jsonrpc":"2.0","id":"","result":[98,{"check_tx":{},"deliver_tx":{}}],"error":""}
{
  "jsonrpc": "2.0",
  "id": "",
  "result": {
    "check_tx": {
      "code": 0,
      "data": "",
      "log": ""
    },
    "deliver_tx": {
      "code": 0,
      "data": "",
      "log": ""
    },
    "hash": "F17854A977F6FA7EEA1BD758E296710B86F72F3D",
    "height": 87
  }
}
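The error above comes from the example counter application's nonce rule; a
simplified sketch of that check (not the exact upstream code, and assuming
``fmt`` is imported) looks like:
::
// deliverTx sketches the counter app's serial-nonce rule: a transaction is
// only valid when it carries exactly the next expected value.
func deliverTx(txValue, expected uint64) (code uint32, log string) {
	if txValue != expected {
		return 3, fmt.Sprintf("Invalid nonce. Expected %d, got %d", expected, txValue)
	}
	return 0, ""
}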
For more details on the ``broadcast_tx`` API, see `the guide on using
Tendermint <./using-tendermint.html>`__.

docs/how-to-read-logs.rst (new file, 165 lines)
View File

@@ -0,0 +1,165 @@
How to read logs
================
Walk through example
--------------------
We first create three connections (mempool, consensus and query) to the
application (locally running dummy in this case).
::
I[10-04|13:54:27.364] Starting multiAppConn module=proxy impl=multiAppConn
I[10-04|13:54:27.366] Starting localClient module=abci-client connection=query impl=localClient
I[10-04|13:54:27.366] Starting localClient module=abci-client connection=mempool impl=localClient
I[10-04|13:54:27.367] Starting localClient module=abci-client connection=consensus impl=localClient
Then Tendermint Core and the application perform a handshake.
::
I[10-04|13:54:27.367] ABCI Handshake module=consensus appHeight=90 appHash=E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD
I[10-04|13:54:27.368] ABCI Replay Blocks module=consensus appHeight=90 storeHeight=90 stateHeight=90
I[10-04|13:54:27.368] Completed ABCI Handshake - Tendermint and App are synced module=consensus appHeight=90 appHash=E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD
After that, we start a few more things like the event switch and reactors, and
perform UPnP discovery in order to detect the IP address.
::
I[10-04|13:54:27.374] Starting EventSwitch module=types impl=EventSwitch
I[10-04|13:54:27.375] This node is a validator module=consensus
I[10-04|13:54:27.379] Starting Node module=main impl=Node
I[10-04|13:54:27.381] Local listener module=p2p ip=:: port=46656
I[10-04|13:54:27.382] Getting UPNP external address module=p2p
I[10-04|13:54:30.386] Could not perform UPNP discover module=p2p err="write udp4 0.0.0.0:38238->239.255.255.250:1900: i/o timeout"
I[10-04|13:54:30.386] Starting DefaultListener module=p2p impl=Listener(@10.0.2.15:46656)
I[10-04|13:54:30.387] Starting P2P Switch module=p2p impl="P2P Switch"
I[10-04|13:54:30.387] Starting MempoolReactor module=mempool impl=MempoolReactor
I[10-04|13:54:30.387] Starting BlockchainReactor module=blockchain impl=BlockchainReactor
I[10-04|13:54:30.387] Starting ConsensusReactor module=consensus impl=ConsensusReactor
I[10-04|13:54:30.387] ConsensusReactor module=consensus fastSync=false
I[10-04|13:54:30.387] Starting ConsensusState module=consensus impl=ConsensusState
I[10-04|13:54:30.387] Starting WAL module=consensus wal=/home/vagrant/.tendermint/data/cs.wal/wal impl=WAL
I[10-04|13:54:30.388] Starting TimeoutTicker module=consensus impl=TimeoutTicker
Notice the second row, where Tendermint Core reports that "This node is a
validator". It could also be just an observer (a regular node).
Next we replay all the messages from the WAL.
::
I[10-04|13:54:30.390] Catchup by replaying consensus messages module=consensus height=91
I[10-04|13:54:30.390] Replay: New Step module=consensus height=91 round=0 step=RoundStepNewHeight
I[10-04|13:54:30.390] Replay: Done module=consensus
"Started node" message signals that everything is ready for work.
::
I[10-04|13:54:30.391] Starting RPC HTTP server on tcp socket 0.0.0.0:46657 module=rpc-server
I[10-04|13:54:30.392] Started node module=main nodeInfo="NodeInfo{pk: PubKeyEd25519{DF22D7C92C91082324A1312F092AA1DA197FA598DBBFB6526E177003C4D6FD66}, moniker: anonymous, network: test-chain-3MNw2N [remote , listen 10.0.2.15:46656], version: 0.11.0-10f361fc ([wire_version=0.6.2 p2p_version=0.5.0 consensus_version=v1/0.2.2 rpc_version=0.7.0/3 tx_index=on rpc_addr=tcp://0.0.0.0:46657])}"
Next follows a standard block creation cycle, where we enter a new round,
propose a block, receive more than 2/3 of prevotes, then precommits and finally
have a chance to commit a block. For details, please refer to `Consensus
Overview
<introduction.html#consensus-overview>`__
or `Byzantine Consensus Algorithm
<specification.html>`__.
::
I[10-04|13:54:30.393] enterNewRound(91/0). Current: 91/0/RoundStepNewHeight module=consensus
I[10-04|13:54:30.393] enterPropose(91/0). Current: 91/0/RoundStepNewRound module=consensus
I[10-04|13:54:30.393] enterPropose: Our turn to propose module=consensus proposer=125B0E3C5512F5C2B0E1109E31885C4511570C42 privValidator="PrivValidator{125B0E3C5512F5C2B0E1109E31885C4511570C42 LH:90, LR:0, LS:3}"
I[10-04|13:54:30.394] Signed proposal module=consensus height=91 round=0 proposal="Proposal{91/0 1:21B79872514F (-1,:0:000000000000) {/10EDEDD7C84E.../}}"
I[10-04|13:54:30.397] Received complete proposal block module=consensus height=91 hash=F671D562C7B9242900A286E1882EE64E5556FE9E
I[10-04|13:54:30.397] enterPrevote(91/0). Current: 91/0/RoundStepPropose module=consensus
I[10-04|13:54:30.397] enterPrevote: ProposalBlock is valid module=consensus height=91 round=0
I[10-04|13:54:30.398] Signed and pushed vote module=consensus height=91 round=0 vote="Vote{0:125B0E3C5512 91/00/1(Prevote) F671D562C7B9 {/89047FFC21D8.../}}" err=null
I[10-04|13:54:30.401] Added to prevote module=consensus vote="Vote{0:125B0E3C5512 91/00/1(Prevote) F671D562C7B9 {/89047FFC21D8.../}}" prevotes="VoteSet{H:91 R:0 T:1 +2/3:F671D562C7B9242900A286E1882EE64E5556FE9E:1:21B79872514F BA{1:X} map[]}"
I[10-04|13:54:30.401] enterPrecommit(91/0). Current: 91/0/RoundStepPrevote module=consensus
I[10-04|13:54:30.401] enterPrecommit: +2/3 prevoted proposal block. Locking module=consensus hash=F671D562C7B9242900A286E1882EE64E5556FE9E
I[10-04|13:54:30.402] Signed and pushed vote module=consensus height=91 round=0 vote="Vote{0:125B0E3C5512 91/00/2(Precommit) F671D562C7B9 {/80533478E41A.../}}" err=null
I[10-04|13:54:30.404] Added to precommit module=consensus vote="Vote{0:125B0E3C5512 91/00/2(Precommit) F671D562C7B9 {/80533478E41A.../}}" precommits="VoteSet{H:91 R:0 T:2 +2/3:F671D562C7B9242900A286E1882EE64E5556FE9E:1:21B79872514F BA{1:X} map[]}"
I[10-04|13:54:30.404] enterCommit(91/0). Current: 91/0/RoundStepPrecommit module=consensus
I[10-04|13:54:30.405] Finalizing commit of block with 0 txs module=consensus height=91 hash=F671D562C7B9242900A286E1882EE64E5556FE9E root=E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD
I[10-04|13:54:30.405] Block{
Header{
ChainID: test-chain-3MNw2N
Height: 91
Time: 2017-10-04 13:54:30.393 +0000 UTC
NumTxs: 0
LastBlockID: F15AB8BEF9A6AAB07E457A6E16BC410546AA4DC6:1:D505DA273544
LastCommit: 56FEF2EFDB8B37E9C6E6D635749DF3169D5F005D
Data:
Validators: CE25FBFF2E10C0D51AA1A07C064A96931BC8B297
App: E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD
}#F671D562C7B9242900A286E1882EE64E5556FE9E
Data{
}#
Commit{
BlockID: F15AB8BEF9A6AAB07E457A6E16BC410546AA4DC6:1:D505DA273544
Precommits: Vote{0:125B0E3C5512 90/00/2(Precommit) F15AB8BEF9A6 {/FE98E2B956F0.../}}
}#56FEF2EFDB8B37E9C6E6D635749DF3169D5F005D
}#F671D562C7B9242900A286E1882EE64E5556FE9E module=consensus
I[10-04|13:54:30.408] Executed block module=state height=91 validTxs=0 invalidTxs=0
I[10-04|13:54:30.410] Committed state module=state height=91 txs=0 hash=E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD
I[10-04|13:54:30.410] Recheck txs module=mempool numtxs=0 height=91
List of modules
---------------
Here is the list of modules you may encounter in Tendermint's log and a brief
overview of what they do.
- ``abci-client`` As mentioned in `Application Development Guide
<app-development.html#abci-design>`__,
Tendermint acts as an ABCI client with respect to the application and
maintains 3 connections: mempool, consensus and query. The code used by
Tendermint Core can be found `here
<https://github.com/tendermint/abci/tree/master/client>`__.
- ``blockchain``
Provides storage, pool (a group of peers), and reactor for both storing and
exchanging blocks between peers.
- ``consensus``
The heart of Tendermint core, which is the implementation of the consensus
algorithm. Includes two "submodules": ``wal`` (write-ahead logging) for
ensuring data integrity and ``replay`` to replay blocks and messages on
recovery from a crash.
- ``events``
Simple event notification system. The list of events can be found
`here
<https://github.com/tendermint/tendermint/blob/master/types/events.go>`__.
You can subscribe to them by calling the ``subscribe`` RPC method.
Refer to `RPC docs
<specification/rpc.html>`__
for additional information.
- ``mempool``
Mempool module handles all incoming transactions, whether they come
from peers or the application.
- ``p2p``
Provides an abstraction around peer-to-peer communication. For more details,
please check out the `README
<https://github.com/tendermint/tendermint/blob/56c60fba2381e4ac41d2ae38a1eb6569acfbc095/p2p/README.md>`__.
- ``rpc``
`Tendermint's RPC <specification/rpc.html>`__.
- ``rpc-server``
RPC server. For implementation details, please read the `README <https://github.com/tendermint/tendermint/blob/master/rpc/lib/README.md>`__.
- ``state``
Represents the latest state and execution submodule, which executes
blocks against the application.
- ``types``
A collection of the publicly exposed types and methods to work with them.

View File

@@ -15,17 +15,37 @@ Welcome to Tendermint!
Tendermint 101
--------------
.. maxdepth set to 2 for sexiness
.. but use 4 to upgrade overall documentation
.. toctree::
:maxdepth: 2
introduction.rst
install.rst
getting-started.rst
deploy-testnets.rst
using-tendermint.rst
Tendermint Ecosystem
--------------------
.. toctree::
:maxdepth: 2
ecosystem.rst
Tendermint Tools
----------------
.. the tools/ files are pulled in from the tools repo
.. see the bottom of conf.py
.. toctree::
:maxdepth: 2
deploy-testnets.rst
tools/ansible.rst
tools/docker.rst
tools/mintnet-kubernetes.rst
tools/terraform-digitalocean.rst
tools/benchmarking-and-monitoring.rst
Tendermint 102
--------------
@@ -35,6 +55,7 @@ Tendermint 102
abci-cli.rst
app-architecture.rst
app-development.rst
how-to-read-logs.rst
Tendermint 201
--------------

View File

@@ -1,17 +1,24 @@
Install from Source
===================
Install Tendermint
==================
This page provides instructions on installing Tendermint from source. To
download pre-built binaries, see the `Download page <https://tendermint.com/download>`__.
From Binary
-----------
To download pre-built binaries, see the `Download page <https://tendermint.com/download>`__.
From Source
-----------
You'll need `go`, maybe `glide`, and the tendermint source code.
Install Go
----------
^^^^^^^^^^
Make sure you have `installed Go <https://golang.org/doc/install>`__ and
set the ``GOPATH``.
Install Tendermint
------------------
Get Source Code
^^^^^^^^^^^^^^^
You should be able to install the latest with a simple
@@ -19,13 +26,14 @@ You should be able to install the latest with a simple
go get github.com/tendermint/tendermint/cmd/tendermint
Run ``tendermint --help`` for more.
Run ``tendermint --help`` and ``tendermint version`` to ensure your
installation worked.
If the installation failed, a dependency may been updated and become
If the installation failed, a dependency may have been updated and become
incompatible with the latest Tendermint master branch. We solve this
using the ``glide`` tool for dependency management.
Fist, install ``glide``:
First, install ``glide``:
::
@@ -45,7 +53,7 @@ still cloned to the correct location in the ``$GOPATH``.
The latest Tendermint Core version is now installed.
Reinstall
~~~~~~~~~
---------
If you already have Tendermint installed, and you make updates, simply
@@ -79,7 +87,7 @@ Since the third option just uses ``glide`` right away, it should always
work.
Troubleshooting
~~~~~~~~~~~~~~~
---------------
If ``go get`` failing bothers you, fetch the code using ``git``:
@@ -92,7 +100,7 @@ If ``go get`` failing bothers you, fetch the code using ``git``:
go install ./cmd/tendermint
Run
~~~
^^^
To start a one-node blockchain with a simple in-process application:

Some files were not shown because too many files have changed in this diff.