Compare commits

...

1182 Commits

Author SHA1 Message Date
Ethan Buchman
c8a2bdf78b Merge pull request #1225 from tendermint/release-v0.16.0
Release v0.16.0
2018-02-20 23:23:48 -05:00
Zach Ramsay
3cd604562c RequestInitChain needs genesisBytes 2018-02-21 03:43:47 +00:00
Zach Ramsay
7c6c0dba53 glide update 2018-02-21 03:32:02 +00:00
Ethan Buchman
ec2f3f49ef changelog date and version 2018-02-19 17:35:46 -05:00
Ethan Buchman
8bba7c64bc update version and changelog [ci skip] 2018-02-19 17:33:48 -05:00
Ethan Buchman
0467de890a Merge pull request #1202 from tendermint/restore-mempool-memory-leak-tests
restore mempool memory leak tests
2018-02-19 15:36:19 -05:00
Anton Kaliaev
0ae0155cba restore mempool memory leak tests 2018-02-19 15:34:33 -05:00
Ethan Buchman
f4feb7703b fix appHash log. closes #1207 2018-02-19 15:32:09 -05:00
Ethan Buchman
0e68638af3 update glide abci/tmlibs to develop 2018-02-12 19:13:36 -05:00
Ethan Buchman
d3e276bf80 Merge pull request #1209 from tendermint/1205-fixes-for-p2p-memory-leak-and-pong
Fixes for p2p memory leak and pong
2018-02-12 19:04:31 -05:00
Anton Kaliaev
fc585bcdec do not block when writing to pongTimeoutCh
Refs #1205
2018-02-12 17:04:07 +04:00
Anton Kaliaev
2a24ae90c1 fixes from Jae's review
1. remove pointer
2. add Quit() method to Service interface
2018-02-12 14:32:09 +04:00
Ethan Buchman
f1c8489270 Merge pull request #1201 from tendermint/1022-do-not-enforce-1/3-val-changes
do not enforce 1/3 validator power change
2018-02-09 16:15:38 -05:00
Ethan Buchman
2d10c8f15b Merge pull request #1095 from tendermint/804-p2p-issues
[p2p] Pong Timeout
2018-02-09 14:38:24 -05:00
Anton Kaliaev
106cdb74e5 do not enforce 1/3 validator power change
leave it to the app

Refs #1022
2018-02-09 23:30:04 +04:00
Anton Kaliaev
22b038810a do not block in recvRoutine 2018-02-09 23:03:26 +04:00
Anton Kaliaev
45750e1b29 fix race by sending signal instead of stopping pongTimer 2018-02-09 21:32:29 +04:00
Anton Kaliaev
26419fba28 refactor code plus add one more test
* extract stopPongTimer method
* TestMConnectionMultiplePings
2018-02-09 21:32:29 +04:00
Anton Kaliaev
ac0123d249 drain pongTimeoutCh and pongTimer's channel to prevent leaks 2018-02-09 21:32:29 +04:00
Anton Kaliaev
f4ff66de30 rewrite pong timer to use time.AfterFunc 2018-02-09 21:32:29 +04:00
Anton Kaliaev
747b73cb95 fix merge conflicts 2018-02-09 21:32:29 +04:00
Anton Kaliaev
161e100a24 close return channel when we're done
Benchmark results:

```
BenchmarkSwitchBroadcast-2         30000             71275 ns/op
--- BENCH: BenchmarkSwitchBroadcast-2
        switch_test.go:339: success: 1, failure: 0
        switch_test.go:339: success: 100, failure: 0
        switch_test.go:339: success: 10000, failure: 0
        switch_test.go:339: success: 30000, failure: 0
```
2018-02-09 21:32:29 +04:00
Anton Kaliaev
3ae738f453 increase timeouts 2018-02-09 21:32:29 +04:00
Anton Kaliaev
d14d4a2527 remove TryBroadcast 2018-02-09 21:32:29 +04:00
Anton Kaliaev
860da464df remove weird concurrency testing 2018-02-09 21:32:28 +04:00
Anton Kaliaev
4e2000abfe control order by sending msgs from one goroutine 2018-02-09 21:32:28 +04:00
Anton Kaliaev
5834a59816 read ping 2018-02-09 21:32:28 +04:00
Anton Kaliaev
b28b76ddf7 rename pingTimeout to pingInterval, pongTimer is now time.Timer 2018-02-09 21:32:28 +04:00
zbo14
91e4f4b786 ping/pong timeout in config 2018-02-09 21:32:28 +04:00
zbo14
9b554fb2c4 switch test modification 2018-02-09 21:32:28 +04:00
zbo14
f97ead4f5f prep for merge 2018-02-09 21:32:28 +04:00
zbo14
5af22d6ee6 remove SwitchEventNewPeer, SwitchEventDonePeer 2018-02-09 21:32:28 +04:00
zbo14
1d16df6a92 add test, TrySend in broadcast 2018-02-09 21:32:27 +04:00
Ethan Buchman
e7bc946760 Merge pull request #1200 from tendermint/update-deps
Update tmlibs & protobuf deps
2018-02-09 11:00:43 -05:00
Ethan Buchman
cf1e1f5899 Merge pull request #1194 from tendermint/1177-semantics-of-currate-low-msg
improve "curRate too low" message
2018-02-09 11:00:09 -05:00
Anton Kaliaev
2f8372d629 update protobuf 2018-02-09 13:58:36 +04:00
Anton Kaliaev
d84e4effce update tmlibs 2018-02-09 13:47:51 +04:00
Anton Kaliaev
0c1b91b762 revert back curRate != 0 2018-02-09 13:02:44 +04:00
Anton Kaliaev
c8990d06d9 remove curRate != 0 2018-02-09 12:01:13 +04:00
Anton Kaliaev
b0a55882b2 lower the minRecvRate
See https://github.com/tendermint/tendermint/issues/1177#issuecomment-363720118
2018-02-09 12:01:12 +04:00
Anton Kaliaev
d1fa44e816 improve "curRate too low" message
Refs #1177

Note on labels:
KB - 1024
kB - 1000
https://ux.stackexchange.com/questions/13815/files-size-units-kib-vs-kb-vs-kb
2018-02-09 12:01:12 +04:00
Ethan Buchman
199ea40980 Merge pull request #1196 from tendermint/1149-TestReactorValidatorSetChanges-fails-non-deterministically
WIP: TestReactorValidatorSetChanges fails non deterministically
2018-02-09 01:51:17 -05:00
Ethan Buchman
66fc476e1e Merge pull request #1173 from tendermint/memory-leak-in-reconnect-to-peer-2
fix memory leak in mempool reactor
2018-02-09 01:50:34 -05:00
Ethan Buchman
6b347200d9 Merge pull request #1197 from tendermint/1155-seed-mode-flag
add seed_mode flag (`--p2p.seed_mode`)
2018-02-08 17:48:39 -05:00
Ethan Buchman
8a908a7cf9 Merge pull request #1193 from tendermint/return-back-cmd-logging
With must be called on log.filter, otherwise "main" entries get filtered
2018-02-08 16:30:29 -05:00
Ethan Buchman
0bcc11c9bc Merge pull request #1191 from tendermint/event-buffer-slice-clearing-on-flush
types: TxEventBuffer.Flush now uses capacity preserving slice clearing idiom
2018-02-08 16:28:47 -05:00
Ethan Buchman
0247a21a93 Merge pull request #1086 from tendermint/966-deterministic-tooling-for-releases
Distribution improvements (freeze glide version + get rid of gox)
2018-02-08 15:26:11 -05:00
Anton Kaliaev
cf1f483526 add seed_mode flag (--p2p.seed_mode) 2018-02-08 17:20:55 +04:00
Anton Kaliaev
3f9aa8d8fa document that msgBytes in p2p/connection change 2018-02-08 13:25:26 +04:00
Anton Kaliaev
d6d1f8512d do not reset pingTimer
don't bother with this "only ping when we haven't heard from them". Let's
just always ping every peer from the sendRoutine every 10s no matter
what. If they don't pong within pongTimeout, disconnect :)
2018-02-08 13:08:11 +04:00
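The pong-timeout commits in this range (rewrite the pong timer around time.AfterFunc, rename pingTimeout to pingInterval, never block when writing to pongTimeoutCh, and the "always ping, disconnect on a missed pong" policy above) reduce to one loop shape. Below is a minimal, self-contained sketch of that pattern; the names, durations, and channels are illustrative assumptions, not the actual MConnection code.

```go
package main

import (
	"fmt"
	"time"
)

// sendRoutine pings on a fixed interval and reports a disconnect if no
// pong arrives within pongTimeout. Names and durations are illustrative.
func sendRoutine(pongCh <-chan struct{}, quit <-chan struct{}) {
	const (
		pingInterval = 100 * time.Millisecond // e.g. 10s in a real config
		pongTimeout  = 250 * time.Millisecond // e.g. 45s in a real config
	)
	pingTicker := time.NewTicker(pingInterval)
	defer pingTicker.Stop()

	pongTimeoutCh := make(chan bool, 1)
	var pongTimer *time.Timer

	for {
		select {
		case <-pingTicker.C:
			fmt.Println("-> ping")
			// time.AfterFunc arms the timeout without a goroutine blocked
			// on a timer channel; the callback never blocks either.
			pongTimer = time.AfterFunc(pongTimeout, func() {
				select {
				case pongTimeoutCh <- true:
				default:
				}
			})
		case <-pongCh:
			if pongTimer != nil {
				pongTimer.Stop() // pong arrived in time
			}
		case <-pongTimeoutCh:
			fmt.Println("pong timeout: disconnect peer")
			return
		case <-quit:
			return
		}
	}
}

func main() {
	pongCh := make(chan struct{})
	quit := make(chan struct{})
	go func() { // answer only once, then go silent
		pongCh <- struct{}{}
	}()
	sendRoutine(pongCh, quit)
}
```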
Anton Kaliaev
2b2c233977 write docs for Reactor interface 2018-02-08 13:07:40 +04:00
Ethan Buchman
7640e6a29f add some p2p TODOs 2018-02-08 12:46:04 +04:00
Anton Kaliaev
b0ca8a0872 With must be called on log.filter, otherwise "main" entries get filtered
Also, we should allow the "main" module to log INFO messages like

```
I[02-07|07:57:25.074] Found private validator                      module=main path=/home/vagrant/.tendermint/config/priv_validator.json
I[02-07|07:57:25.076] Found genesis file                           module=main path=/home/vagrant/.tendermint/config/genesis.json
```

Refs https://github.com/cosmos/gaia/issues/118

**BEFORE**:
```
$ tendermint init

```

**AFTER**:
```
$ tendermint init
I[02-07|07:57:25.074] Found private validator                      module=main path=/home/vagrant/.tendermint/config/priv_validator.json
I[02-07|07:57:25.076] Found genesis file                           module=main path=/home/vagrant/.tendermint/config/genesis.json
```
2018-02-07 12:08:13 +04:00
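The mechanism behind this fix can be illustrated with a toy logger/filter pair; the types below are purely hypothetical and are not the tmlibs/log API. The idea: a filter only honors per-module allowances for context it recorded itself via With, so attaching module=main to the inner logger leaves the filter blind to it and the entry gets dropped.

```go
package main

import "fmt"

// Minimal, hypothetical logger interface for illustration only.
type Logger interface {
	Info(msg string, keyvals ...string)
	With(keyvals ...string) Logger
}

type printLogger struct{ ctx []string }

func (l printLogger) Info(msg string, keyvals ...string) {
	fmt.Println("I", msg, append(l.ctx, keyvals...))
}

func (l printLogger) With(keyvals ...string) Logger {
	return printLogger{ctx: append(append([]string{}, l.ctx...), keyvals...)}
}

// filter passes Info lines only when its *own* context (set via With on the
// filter) contains the allowed module; context on the inner logger is invisible.
type filter struct {
	next          Logger
	allowedModule string
	ctx           []string
}

func (f filter) Info(msg string, keyvals ...string) {
	for i := 0; i+1 < len(f.ctx); i += 2 {
		if f.ctx[i] == "module" && f.ctx[i+1] == f.allowedModule {
			f.next.Info(msg, keyvals...)
			return
		}
	}
	// dropped: the filter never learned the module
}

func (f filter) With(keyvals ...string) Logger {
	f.ctx = append(append([]string{}, f.ctx...), keyvals...)
	f.next = f.next.With(keyvals...)
	return f
}

func main() {
	base := printLogger{}

	// Wrong order: With on the inner logger; the filter never sees module=main.
	wrong := filter{next: base.With("module", "main"), allowedModule: "main"}
	wrong.Info("Found genesis file") // silently filtered

	// Right order: With on the filter itself.
	right := filter{next: base, allowedModule: "main"}.With("module", "main")
	right.Info("Found genesis file") // printed
}
```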
Ethan Buchman
9e767771fc Merge pull request #1188 from ltfschoen/patch-1
Update getting-started.rst with Python 3 example
2018-02-06 12:53:47 -05:00
Anton Kaliaev
6c8d7a8c19 deterministic tooling for releases
get rid of gox

build target builds inside docker, dev-build - locally

Revert "build target builds inside docker, dev-build - locally"

This reverts commit 8ba89d5e8c5668e3839ff49952a9166d1158f6e8.

add build tags to make build/build_race/install

use tendermint's fork of glide instead of tar.gz

remove TMHOME unused var + set length for git hash

get rid of GOTOOLS_CHECK

fixes after review

zip

needed for distribution
2018-02-06 12:46:13 +04:00
Emmanuel Odeke
15ef57c6d0 types: TxEventBuffer.Flush now uses capacity preserving slice clearing idiom
Fixes https://github.com/tendermint/tendermint/issues/1189

For every TxEventBuffer.Flush() invocation, we were invoking:
  b.events = make([]EventDataTx, 0, b.capacity)
whose intention is to innocently clear the events slice but
maintain the underlying capacity.

However, this is unfortunately memory- and garbage-collection-intensive,
and linear in the number of events added. If an attacker had access
to our code somehow, invoking .Flush() in tight loops would be a sure
way to cause huge GC pressure, and say they added about 1e9
events maliciously: every Flush() would take at least 3.2 seconds,
which is enough to effectively control our application.

The new use of the capacity-preserving slice-clearing idiom
takes constant time regardless of the number of elements, with zero
allocations, so we are killing many birds with one stone, i.e.
  b.events = b.events[:0]

For benchmarking results, please see
https://gist.github.com/odeke-em/532c14ab67d71c9c0b95518a7a526058
for a reference on how things can get out of hand easily.
2018-02-05 23:34:15 -08:00
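A compact sketch of the idiom this commit describes, focusing only on the clearing step (the real buffer also publishes its events on Flush; the type here is a stand-in): re-slicing to length zero keeps the backing array, so clearing is O(1) and allocation-free.

```go
package main

import "fmt"

// EventDataTx stands in for the real type.
type EventDataTx struct{ Height int64 }

type TxEventBuffer struct {
	events   []EventDataTx
	capacity int
}

func NewTxEventBuffer(capacity int) *TxEventBuffer {
	return &TxEventBuffer{events: make([]EventDataTx, 0, capacity), capacity: capacity}
}

func (b *TxEventBuffer) PublishEventTx(e EventDataTx) {
	b.events = append(b.events, e)
}

// Flush clears the buffer while keeping the underlying capacity,
// instead of allocating a fresh slice with make on every call.
func (b *TxEventBuffer) Flush() {
	b.events = b.events[:0]
}

func main() {
	b := NewTxEventBuffer(1000)
	b.PublishEventTx(EventDataTx{Height: 1})
	before := cap(b.events)
	b.Flush()
	fmt.Println(len(b.events), cap(b.events) == before) // 0 true
}
```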
Luke Schoen
f37c502fd8 Update getting-started.rst with Python 3 example 2018-02-06 16:00:51 +11:00
Anton Kaliaev
945b0e6eca cleanup glide.yaml 2018-02-05 22:53:44 +04:00
Anton Kaliaev
84a0a1987c comment out tests for now
https://github.com/tendermint/tendermint/pull/1173#issuecomment-363173047
2018-02-05 22:26:14 +04:00
Anton Kaliaev
11b68f1934 rewrite broadcastTxRoutine to use channels
https://play.golang.org/p/gN21yO9IRs3

```
func waitWithCancel(f func() *clist.CElement, ctx context.Context) *clist.CElement {
	el := make(chan *clist.CElement, 1)
	select {
	case el <- f():
```
will just run f() blockingly, so this doesn't change much in terms of behavior.
2018-02-05 16:36:26 +04:00
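The observation in this commit (the send expression in a select is evaluated before any case can be chosen) is easy to demonstrate. The sketch below uses hypothetical stand-ins (blockingWait, waitWithCancel2), not the mempool code, and contrasts the select-wrapped call with a version that runs the blocking call in its own goroutine.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// blockingWait stands in for a clist-style blocking call.
func blockingWait() int {
	time.Sleep(200 * time.Millisecond)
	return 42
}

// In `case el <- blockingWait():` the call is evaluated *before* the select
// can consider any other case, so the wrapper adds no cancelability.
func waitWithCancel(ctx context.Context) (int, bool) {
	el := make(chan int, 1)
	select {
	case el <- blockingWait(): // runs to completion first
		return <-el, true
	case <-ctx.Done(): // only reachable after blockingWait() has returned
		return 0, false
	}
}

// A cancelable version runs the blocking call in its own goroutine.
func waitWithCancel2(ctx context.Context) (int, bool) {
	el := make(chan int, 1)
	go func() { el <- blockingWait() }()
	select {
	case v := <-el:
		return v, true
	case <-ctx.Done():
		return 0, false
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()

	start := time.Now()
	waitWithCancel(ctx)
	fmt.Println("select-wrapped:", time.Since(start).Round(time.Millisecond)) // ~200ms despite the 50ms deadline

	start = time.Now()
	waitWithCancel2(ctx)
	fmt.Println("goroutine-based:", time.Since(start).Round(time.Millisecond)) // ~0ms, deadline respected
}
```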
Anton Kaliaev
202d9a2c0c fix memory leak in mempool reactor
Leaking goroutine:
```
114 @ 0x42f2bc 0x42f3ae 0x440794 0x4403b9 0x468002 0x9fe32d 0x9ff78f 0xa025ed 0x45e571
```

Explanation:
It blocks on an empty clist forever, so unless there are txs coming in,
this goroutine will just sit there, holding onto the peer too.
If we're constantly reconnecting to some peer, old instances are not
garbage collected, leading to a memory leak.

Fixes https://github.com/cosmos/gaia/issues/108
Previous attempt https://github.com/tendermint/tendermint/pull/1156
2018-02-05 13:52:18 +04:00
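A generic illustration of the leak pattern described above and one common remedy (a quit channel); this is not the actual reactor code, just the shape of the problem: a per-peer goroutine parked on a blocking wait keeps the peer reachable, so repeated reconnects accumulate old instances.

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

type peer struct{ id int }

// Leaky shape: blocks forever if txs never delivers and is never closed,
// pinning p for the lifetime of the process.
func broadcastTxRoutineLeaky(p *peer, txs <-chan string) {
	for tx := range txs {
		fmt.Println("send", tx, "to peer", p.id)
	}
}

// Fixed shape: also select on a quit channel so the goroutine (and the
// peer it references) can be released when the peer is removed.
func broadcastTxRoutine(p *peer, txs <-chan string, quit <-chan struct{}) {
	for {
		select {
		case tx := <-txs:
			fmt.Println("send", tx, "to peer", p.id)
		case <-quit:
			return
		}
	}
}

func main() {
	txs := make(chan string)  // empty: no txs ever arrive
	for i := 0; i < 3; i++ {  // simulate repeated reconnects
		quit := make(chan struct{})
		go broadcastTxRoutine(&peer{id: i}, txs, quit)
		close(quit) // on disconnect, signal the routine to exit
	}
	time.Sleep(50 * time.Millisecond)
	fmt.Println("goroutines:", runtime.NumGoroutine()) // back to ~1 with the quit channel
}
```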
Ethan Buchman
bf84e82577 Merge pull request #1184 from tendermint/sdk2-tmlibs-abci
Updates for tmlibs and abci (sdk2)
2018-02-03 10:45:54 -05:00
Ethan Buchman
abca9a2d61 woops - bring back glide.lock file 2018-02-03 03:59:16 -05:00
Ethan Buchman
d34286c421 minor fixes - tests pass 2018-02-03 03:54:49 -05:00
Anton Kaliaev
bb2bdbc0e1 add missing element (tag.Value) to keyForTag
encoded as %s. not sure this will work with raw bytes
2018-02-03 03:52:25 -05:00
Ethan Buchman
e7747f7d66 it compiles 2018-02-03 03:52:17 -05:00
Ethan Buchman
7a5060dc52 replace data.Bytes with cmn.HexBytes 2018-02-03 03:47:01 -05:00
Ethan Buchman
426379dc47 remove use of wire/nowriter 2018-02-03 03:39:14 -05:00
Ethan Buchman
cd0fd06b0d update for sdk2 libs. need to fix kv test
NOTE: we are only updating for tmlibs and abci
2018-02-03 03:35:02 -05:00
Ethan Buchman
4e3488c677 update types 2018-02-03 03:23:10 -05:00
Ethan Buchman
061ad355bb update glide 2018-02-03 03:22:56 -05:00
Ethan Buchman
2679b7554b lite: comment out iavl code - TODO #1183 2018-02-03 03:02:49 -05:00
Ethan Buchman
62c9cad484 Merge pull request #1180 from tendermint/1146-add-BFT-time-spec
Add BFT time spec
2018-02-01 16:38:05 -05:00
Zarko Milosevic
4cbdbbaac9 Add BFT time spec 2018-02-01 15:22:08 +01:00
Ethan Buchman
2919bc3f7f Merge pull request #1178 from tendermint/nice-err-msg
improve error message
2018-01-31 21:32:58 -05:00
Zach Ramsay
1c01671ec6 improve vague error msg, closes #1158 2018-01-31 17:44:19 +00:00
Ethan Buchman
fe632ea32a spec: minor fixes 2018-01-26 17:26:33 -05:00
Zach Ramsay
5b368252ac spec: more fixes 2018-01-26 17:26:33 -05:00
Zach Ramsay
8cca953590 spec: remove notes, see #1152 2018-01-26 17:26:33 -05:00
Zach Ramsay
4b4a2029c4 spec: typos & other fixes 2018-01-26 17:26:33 -05:00
Ethan Buchman
6aa85357b6 Merge pull request #1160 from shapeshed/patch-1
Fix documentation typos
2018-01-26 17:17:43 -05:00
Ethan Buchman
eae62ec09b Merge branch 'develop' into patch-1 2018-01-26 17:17:28 -05:00
Ethan Buchman
18d96266bc Merge pull request #1140 from tendermint/feature/vagrant
Fix Vagrantfile
2018-01-26 17:12:06 -05:00
George Ornbo
4529fd6787 Fix documentation typos 2018-01-26 14:44:48 +00:00
Ethan Buchman
4a99a2a07d update contributing.md 2018-01-26 01:18:33 -05:00
Adrian Brink
4b63b3aa0b Switch to correct directory in Vagrant 2018-01-26 01:16:07 -05:00
Adrian Brink
fc860c3a07 Final Vagrantfile 2018-01-26 01:16:07 -05:00
Adrian Brink
2f147ec000 Remove upgrade step 2018-01-26 01:16:07 -05:00
Adrian Brink
0a7a190cd1 Fix vagrantfile
If you get an error, please run `vagrant box update`.
2018-01-26 01:16:07 -05:00
Ethan Buchman
3366dfe32a Merge pull request #1151 from tendermint/fix/p2p-stop-conn
p2p/conn: fix blocking on pong during quit and break out of loops
2018-01-25 02:45:06 -05:00
Ethan Buchman
baff4bd8cc p2p/conn: better handling for some stop conditions 2018-01-25 02:11:16 -05:00
Ethan Buchman
fb109db33d update changelog 2018-01-25 02:10:01 -05:00
Ethan Buchman
2f5971532e Merge pull request #1154 from tendermint/fix/consensus-tests
consensus: fix SetLogger in tests
2018-01-25 02:07:20 -05:00
Ethan Buchman
ab13806276 consensus: print go routines in failed test 2018-01-25 01:22:53 -05:00
Ethan Buchman
3ae26bd6e6 consensus: fix SetLogger in tests 2018-01-24 23:34:57 -05:00
Ethan Buchman
27ef3489a0 Merge pull request #1049 from tendermint/p2p-channels
p2p: add Channels to NodeInfo and don't send for unknown channels
2018-01-24 15:29:38 -05:00
Ethan Buchman
b6eb275b22 p2p: fix break in double loop 2018-01-24 14:27:37 -05:00
Ethan Buchman
57cc8ab977 Merge pull request #1143 from tendermint/1091-race-condition
call FlushSync before calling CommitSync
2018-01-24 14:22:43 -05:00
Ethan Buchman
99034904f8 p2p: fix tests for required channels 2018-01-23 23:45:51 -05:00
Ethan Buchman
a0ffcbcee4 Merge pull request #1137 from tendermint/docs-consolidate
WIP: docs consolidation
2018-01-23 23:45:20 -05:00
Ethan Buchman
260affd037 docs consolidation 2018-01-23 23:46:28 -05:00
Ethan Buchman
d7b1b8d3d5 Merge pull request #1129 from tendermint/addrbook
p2p: bust up into sub dirs
2018-01-23 23:10:50 -05:00
Ethan Buchman
50129ad8ac p2p: add Channels to NodeInfo and don't send for unknown channels 2018-01-23 22:43:56 -05:00
Ethan Buchman
5c9cb5e6a2 Merge pull request #1133 from tendermint/fix/stop-peer-for-error
StopPeerForError in blockchain and consensus
2018-01-23 22:26:52 -05:00
Ethan Buchman
4051391039 blockchain: test wip for hard to test functionality [ci skip] 2018-01-23 22:27:33 -05:00
Ethan Buchman
8f3bd3f209 p2p: addrBook.Save() on DialPeersAsync 2018-01-23 22:25:39 -05:00
Ethan Buchman
85816877c6 config: fix addrbook path to go in config 2018-01-23 22:21:17 -05:00
Ethan Buchman
87087b8acd consensus: minor cosmetic 2018-01-23 21:41:13 -05:00
Ethan Buchman
775bb85efb p2p/pex: wait to connect to all peers in reactor test 2018-01-23 21:30:53 -05:00
Ethan Buchman
21ce5856b3 p2p: notes about ListenAddr 2018-01-23 21:26:19 -05:00
Ethan Buchman
f5226e0008 Merge pull request #1144 from tendermint/create-logs-tarball
mercy for developers with slow Internet
2018-01-23 12:47:45 -05:00
Anton Kaliaev
a745fe2eed mercy for developers with slow Internet 2018-01-23 20:37:38 +04:00
Anton Kaliaev
5f3048bd09 call FlushSync before calling CommitSync
if we call it after, we might receive a "fresh" transaction from
  `broadcast_tx_sync` before old transactions (which were not
  committed).

Refs #1091

```
Commit is called with a lock on the mempool, meaning no calls to CheckTx
can start. However, since CheckTx is called async in the mempool
connection, some CheckTx might have already "sailed", when the lock is
released in the mempool and Commit proceeds.

Then, that spurious CheckTx has not yet "begun" in the ABCI app (stuck
in transport?). Instead, ABCI app manages to start to process the
Commit. Next, the spurious, "sailed" CheckTx happens in the wrong place.
```
2018-01-23 16:56:14 +04:00
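The ordering this commit enforces can be sketched with a hypothetical stand-in for the proxy app connection; this is NOT the real ABCI client API, it only illustrates the idea: flush so that any CheckTx that has already "sailed" is processed before Commit runs, otherwise a fresh tx from `broadcast_tx_sync` can land ahead of older, uncommitted txs.

```go
package main

import "fmt"

// Hypothetical connection interface for illustration only.
type appConn interface {
	CheckTxAsync(tx string)
	FlushSync() // waits until all async requests sent so far are processed
	CommitSync()
}

func commitWithFlush(conn appConn) {
	conn.FlushSync()  // 1. drain in-flight CheckTx calls
	conn.CommitSync() // 2. only then commit
}

type fakeConn struct{ pending int }

func (c *fakeConn) CheckTxAsync(tx string) {
	c.pending++
}

func (c *fakeConn) FlushSync() {
	fmt.Println("flushed", c.pending, "pending CheckTx")
	c.pending = 0
}

func (c *fakeConn) CommitSync() {
	fmt.Println("commit sees", c.pending, "pending CheckTx")
}

func main() {
	c := &fakeConn{}
	c.CheckTxAsync("tx1")
	c.CheckTxAsync("tx2")
	commitWithFlush(c) // flushed 2 pending CheckTx / commit sees 0 pending CheckTx
}
```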
Ethan Buchman
6a5818e107 Merge pull request #1138 from tendermint/small-fix
Small fix in example
2018-01-22 17:27:35 -05:00
Zarko Milosevic
dfdfd6c98e Small fix in example 2018-01-22 13:10:54 +01:00
Ethan Buchman
3090b05eb4 p2p: use conn.Close when peer is nil 2018-01-21 16:26:59 -05:00
Ethan Buchman
ee674f919f StopPeerForError in blockchain and consensus 2018-01-21 13:32:04 -05:00
Ethan Buchman
813bb6af96 Merge pull request #1092 from tendermint/add-consensus-reactor-doc
Add consensus reactor doc
2018-01-21 12:40:28 -05:00
Ethan Buchman
aecbff725f Merge pull request #1082 from tendermint/document-proposer-selection
Document proposer selection procedure
2018-01-21 12:39:43 -05:00
Ethan Buchman
6679fef2be Merge pull request #1056 from tendermint/feature/mempool-spec
WIP: Mempool specification
2018-01-21 12:39:10 -05:00
Ethan Buchman
c070ed056a Merge pull request #1051 from tendermint/feature/blockchain_reactor_docs
docs: Blockchain Reactor Documentation
2018-01-21 12:38:32 -05:00
Ethan Buchman
2c6ed302b7 minor changes [ci skip] 2018-01-21 12:36:46 -05:00
Adrian Brink
0eb85161aa More specification 2018-01-21 12:35:09 -05:00
Adrian Brink
940145b368 Bullet points for reactor and poolRoutine 2018-01-21 12:32:45 -05:00
Adrian Brink
a30315276b Formatting and documentation 2018-01-21 12:32:23 -05:00
Adrian Brink
6366eb9d99 Cleanup build and structure 2018-01-21 12:31:14 -05:00
Ethan Buchman
44e967184a p2p: tmconn->conn and types->p2p 2018-01-21 00:34:41 -05:00
Ethan Buchman
2ec425ae4b Merge pull request #1128 from tendermint/862-seed-crawler-mode
seed crawler mode
2018-01-20 23:35:26 -05:00
Ethan Buchman
0d7d16005a fixes 2018-01-20 21:44:30 -05:00
Ethan Buchman
5b5cbaa66a p2p: use sub dirs 2018-01-20 21:35:37 -05:00
Ethan Buchman
03550c7076 wip addrbook 2018-01-20 21:33:43 -05:00
Ethan Buchman
930fde056a p2p: add back lost func 2018-01-20 21:28:00 -05:00
Ethan Buchman
8d758560d8 p2p/trustmetric: non-deterministic test 2018-01-20 21:24:22 -05:00
Ethan Buchman
7b87cdaed8 p2p: seed disconnects after sending addrs 2018-01-20 21:24:22 -05:00
Ethan Buchman
c2f97e6454 p2p: seed mode fixes from rebase and review 2018-01-20 21:24:22 -05:00
Ethan Buchman
88eb3e7af0 some minor renames 2018-01-20 21:24:20 -05:00
caffix
949211a137 added a test for PEX reactor seed mode 2018-01-20 21:23:48 -05:00
Ethan Buchman
39d8da3536 docs: update getting started [ci skip] 2018-01-20 21:21:50 -05:00
Ethan Buchman
ae27e85bf7 add warnings about new spec 2018-01-20 21:20:15 -05:00
Ethan Buchman
f2d19162d2 fixes from caffix review 2018-01-20 21:20:09 -05:00
Zarko Milosevic
d36e118bf6 Add Consensus reactor spec 2018-01-19 19:57:08 +01:00
Ethan Buchman
02c1aef48b Merge pull request #1121 from tendermint/consensus-tests
Consensus tests
2018-01-19 12:50:32 -05:00
Ethan Buchman
6f3d9b4be3 fix race 2018-01-19 01:36:52 -05:00
Ethan Buchman
f06cc6630b mempool: cfg.CacheSize and expose InitWAL 2018-01-19 01:03:03 -05:00
Ethan Buchman
8171628ee5 make tests run faster 2018-01-19 00:59:09 -05:00
Ethan Buchman
1cb76625d3 consensus: rename test funcs 2018-01-19 00:59:09 -05:00
Ethan Buchman
ebeadfc57e dont run metalinter 2018-01-19 00:58:54 -05:00
Ethan Buchman
cca597a9c0 fix and test config file 2018-01-19 00:08:19 -05:00
Ethan Buchman
940db715f4 Merge pull request #1104 from tendermint/p2p-consolidate
WIP: P2P consolidate
2018-01-18 19:00:35 -05:00
Ethan Buchman
ec2b038493 Merge pull request #1103 from tendermint/1100-document-event-subscriptions
document event subscriptions
2018-01-18 19:00:01 -05:00
Ethan Buchman
ba0cb4f10e Merge pull request #1115 from tendermint/docs-tx-format
document tx formats
2018-01-18 18:57:23 -05:00
Ethan Buchman
e764a180d8 docs: fix tx formats [ci skip] 2018-01-18 18:58:33 -05:00
Ethan Buchman
bc19e7843c Merge branch 'develop' into p2p-consolidate 2018-01-18 18:30:37 -05:00
Ethan Buchman
f57f97c4bd fix bash tests 2018-01-18 17:40:12 -05:00
Ethan Buchman
b32474bb1a Merge pull request #1116 from tendermint/feature/prs
Create PULL_REQUEST_TEMPLATE.md
2018-01-18 13:52:18 -05:00
Adrian Brink
0a20e8f268 Create PULL_REQUEST_TEMPLATE.md 2018-01-18 13:53:18 -05:00
Zach Ramsay
9d4d939b89 docs: tx formats: closes #1083, #536 2018-01-18 15:37:56 +00:00
Ethan Buchman
c1e6e73bb1 Merge pull request #1108 from tendermint/zramsay-patch-1
Update p2p README
2018-01-15 10:31:31 -05:00
Ethan Buchman
620c957a44 fix test 2018-01-14 13:24:43 -05:00
Ethan Buchman
64ce7eef16 Merge pull request #1107 from tendermint/p2p-pex-abuse
better abuse handling in pex
2018-01-14 13:03:02 -05:00
Ethan Buchman
fc7915ab4c fixes from review 2018-01-14 13:03:57 -05:00
Zach Ramsay
26aaa283a9 p2p: remove deprecated Dockerfile 2018-01-14 13:51:28 +00:00
Zach
a29c67563c Update p2p README, closes #1102 2018-01-14 13:50:34 +00:00
Ethan Buchman
17f7a9b510 improve seed dialing logic 2018-01-14 03:56:15 -05:00
Ethan Buchman
3df5fd21cd better abuse handling in pex 2018-01-14 03:22:01 -05:00
Ethan Buchman
99076f1942 Merge pull request #1105 from tendermint/p2p-switch
cleanup switch
2018-01-14 01:20:13 -05:00
Ethan Buchman
3368eeb03e fix tests 2018-01-14 01:19:07 -05:00
Ethan Buchman
68237911ba NetAddress.Same checks ID or DialString 2018-01-14 01:15:37 -05:00
Ethan Buchman
f9e4f6eb6b reorder peer.go methods 2018-01-14 01:15:37 -05:00
Ethan Buchman
8b74a8d6ac NodeInfo not a pointer 2018-01-14 01:15:33 -05:00
Ethan Buchman
08f84cd712 a little more moving around 2018-01-13 23:56:57 -05:00
Ethan Buchman
452d10f368 cleanup switch 2018-01-13 17:37:52 -05:00
Ethan Buchman
8b3fb743cf Merge pull request #1048 from tendermint/p2p-id
P2P ID
2018-01-13 17:35:21 -05:00
Ethan Buchman
7667e11973 remove RemoteAddr from NodeInfo 2018-01-13 17:36:03 -05:00
Ethan Buchman
53a5498fc5 more fixes from review 2018-01-13 17:34:12 -05:00
Ethan Buchman
e4d52401cf some fixes from review 2018-01-13 16:06:51 -05:00
Ethan Buchman
9670519a21 remove PoW from ID 2018-01-13 15:50:59 -05:00
Ethan Buchman
b1485b181a Merge branch 'p2p-consolidate' into p2p-id 2018-01-13 15:20:23 -05:00
Ethan Buchman
47a6928890 Merge pull request #1030 from tendermint/864-distinguish-between-seeds-and-manual-peers
Distinguish between seeds and manual peers
2018-01-13 15:12:47 -05:00
Ethan Buchman
c1e167e330 note in trust metric test 2018-01-13 15:11:13 -05:00
Ethan Buchman
e2b3b5b58c dial_persistent_peers -> dial_peers with persistent option 2018-01-13 14:50:58 -05:00
Ethan Buchman
e6b70baae0 Merge branch 'develop' into 864-distinguish-between-seeds-and-manual-peers 2018-01-13 14:34:32 -05:00
Anton Kaliaev
b40aa91b41 document event subscriptions
Refs #1100
2018-01-12 12:53:34 -06:00
Ethan Buchman
a13b17ec6c Merge pull request #792 from tendermint/config
Config Improvements
2018-01-10 14:41:30 -05:00
Ethan Buchman
b32a507a1b Merge pull request #1088 from tendermint/1087-update-docker
Update docker image
2018-01-10 14:23:06 -05:00
Ethan Buchman
d6e01e8cee Merge pull request #1089 from tendermint/1075-document-underflow-overflow
document the maximum supported voting power due to overflow [ci skip]
2018-01-10 14:20:31 -05:00
Anton Kaliaev
c79ba3c349 document the maximum supported voting power due to overflow [ci skip]
Refs #1075
2018-01-10 14:21:26 -05:00
Anton Kaliaev
12ca972761 Merge branch 'develop' into 1087-update-docker 2018-01-10 11:08:14 -06:00
Zach Ramsay
657ad214cb p2p tests: put priv_val in right place 2018-01-10 16:30:00 +00:00
Zach
39acf1c5e8 Merge branch 'develop' into config 2018-01-10 14:21:24 +00:00
Anton Kaliaev
075ae1e301 minimal test for dialing seeds in pex reactor 2018-01-09 18:29:29 -06:00
Anton Kaliaev
705d51aa42 move dialSeedsIfAddrBookIsEmptyOrPEXFailedToConnect into PEX reactor 2018-01-09 17:54:29 -06:00
Anton Kaliaev
ef0493ddf3 rewrite Peers section of Using Tendermint guide [ci skip] 2018-01-09 17:54:28 -06:00
Anton Kaliaev
1b455883d2 readd /dial_seeds 2018-01-09 17:54:28 -06:00
Anton Kaliaev
e4897b7bdd rename manual peers to persistent peers 2018-01-09 16:18:05 -06:00
Anton Kaliaev
37f86f9518 update changelog [ci skip] 2018-01-09 16:04:07 -06:00
Anton Kaliaev
28fc15028a distinguish between seeds and manual peers in the config/flags
- we only use seeds if we can’t connect to peers in the addrbook.
- we always connect to nodes given in config/flags

Refs #864
2018-01-09 16:03:24 -06:00
Anton Kaliaev
179d6062e4 try to connect through addrbook before requesting peers from seeds
we only use seeds if we can’t connect to peers in the addrbook.

Refs #864
2018-01-09 16:03:24 -06:00
Zach
c521f385a6 add quick start guide (#1069) 2018-01-09 20:35:47 +00:00
Anton Kaliaev
b8214fce66 Merge branch 'develop' into 1087-update-docker 2018-01-09 12:21:23 -06:00
Ethan Buchman
03a14d8342 docs/p2p: updates from review (#1076) 2018-01-09 11:44:49 -06:00
Anton Kaliaev
48638eaa20 update docker readme 2018-01-09 11:05:15 -06:00
Anton Kaliaev
170777300e update docker
Closes #1087
2018-01-09 11:04:37 -06:00
Adrian Brink
32311acd01 Vulnerability in light client proxy (#1081)
* Vulnerability in light client proxy

When calling GetCertifiedCommit the light client proxy would call
Certify and even on error return the Commit as if it had been correctly
certified.

Now it returns the error correctly and returns an empty Commit on error.

* Improve names for clarity

The lite package now contains StaticCertifier, DynamicCertifier and
InqueringCertifier. This also changes the method receivers from one
letter to two letter names, which will make future refactoring easier
and follows the coding standards.

* Fix test failures

* Rename files

* remove dead code
2018-01-09 10:36:11 -06:00
Ethan Buchman
b9cbaf8f10 priv-val: fix timestamp for signing things that only differ by timestamp 2018-01-08 16:36:16 -05:00
Ethan Buchman
8d86b6c2d2 Merge pull request #1084 from tendermint/fix-make-dist-script
fix broken `make dist` target
2018-01-08 16:20:36 -05:00
Anton Kaliaev
555f560ecd fix broken make dist target 2018-01-08 13:13:47 -06:00
Zarko Milosevic
dba4815616 Define requirements of the proposer selection procedure 2018-01-08 14:10:01 +01:00
Ethan Buchman
cf42611187 Merge pull request #997 from tendermint/919-careful-with-validator-voting
check for overflow and underflow while choosing proposer
2018-01-07 16:48:39 -05:00
Ethan Buchman
90c691df2b Merge pull request #1063 from tendermint/feature/Issue1020
NewInquiring returns error instead of swallowing it
2018-01-07 16:30:33 -05:00
Adrian Brink
13fa23c568 Add error checking 2018-01-06 22:24:58 +01:00
Jae Kwon
124c58d48f Merge pull request #1071 from tendermint/revert-1053-feature/buildprocess
Revert "Changes to achieve a standardized build process and deterministic builds"
2018-01-05 22:42:19 -08:00
Jae Kwon
a034600024 Revert "Changes to achieve a standardized build process and deterministic builds" 2018-01-05 22:35:57 -08:00
Zach
e0e600df05 Merge branch 'develop' into config 2018-01-06 05:30:38 +00:00
Ethan Buchman
92f5ae5a84 fix vagrant [ci skip] 2018-01-05 22:26:10 -05:00
Ethan Buchman
d57ddec302 Merge pull request #1053 from tendermint/feature/buildprocess
Changes to achieve a standardized build process and deterministic builds
2018-01-05 21:13:42 -05:00
Greg Szabo
bb3dc10f24 Makefile improvements for deterministic builds based on Bucky's feedback 2018-01-05 14:22:13 -05:00
Greg Szabo
2cc50938a4 Merge branch 'develop' into feature/buildprocess 2018-01-05 13:01:40 -05:00
Adrian Brink
ba475d3128 Fix formatting 2018-01-05 13:25:58 +01:00
Adrian Brink
ed81fb54ec NewInquiring returns error instead of swallowing it 2018-01-05 13:24:16 +01:00
Ethan Buchman
8481da2405 Merge pull request #1046 from tendermint/abci-spec-docs
docs: add abci spec
2018-01-04 13:57:40 -05:00
Ethan Buchman
ecb7303e35 Merge branch 'develop' into abci-spec-docs 2018-01-04 13:57:08 -05:00
Ethan Frey
9cb45eb7df Add skeleton for functionality and concurrency 2018-01-04 19:08:03 +01:00
Ethan Buchman
6855e62f0a Merge pull request #1052 from tendermint/feature/p2p_docs
docs: P2P Documentation
2018-01-04 12:32:28 -05:00
Ethan Frey
17b61db40a Document p2p and rpc messages 2018-01-04 18:12:48 +01:00
Ethan Frey
7b52499463 Start writing mempool specification
Include overview and configuration options.
2018-01-04 17:11:35 +01:00
Greg Szabo
f67f99c227 Extended install document with docker option. Added extra checks to developer's build target. 2018-01-03 17:24:11 -05:00
Greg Szabo
0430ebf95c Makefile changes for cross-building and standardized builds using gox 2018-01-03 14:58:23 -05:00
Adrian Brink
f602de437e Move P2P docs into docs folder 2018-01-03 10:49:47 +01:00
Zach Ramsay
a573b20888 docs: add counter/dummy code snippets
closes https://github.com/tendermint/abci/issues/134
2018-01-03 01:23:38 +00:00
Ethan Buchman
488ae529ad p2p: authenticate peer ID 2018-01-01 23:23:11 -05:00
Ethan Buchman
6e823c6e87 p2p: support addr format ID@IP:PORT 2018-01-01 23:08:20 -05:00
Ethan Buchman
7d35500e6b p2p: add ID to NetAddress and use for AddrBook 2018-01-01 22:39:08 -05:00
Ethan Buchman
a17105fd46 p2p: peer.Key -> peer.ID 2018-01-01 22:39:05 -05:00
Ethan Buchman
b289d2baf4 persistent node key and ID 2018-01-01 21:21:42 -05:00
Ethan Buchman
f2e0abf1dc p2p: reorder some checks in addPeer; add comments to NodeInfo 2018-01-01 21:21:39 -05:00
Ethan Buchman
528154f1a2 p2p: PrivKey need not be Ed25519 2018-01-01 19:44:01 -05:00
Ethan Buchman
bc71840f06 more p2p docs 2018-01-01 16:30:36 -05:00
Zach Ramsay
cd15b677ec docs: add abci spec 2018-01-01 15:35:28 +00:00
Ethan Buchman
1acb12edf5 p2p docs 2017-12-31 17:11:09 -05:00
Ethan Buchman
008de93bbe Merge pull request #1039 from tendermint/add_consensus_spec
Spec of consensus/gossip protocol
2017-12-31 14:59:32 -05:00
Zach
4e834baa9a docs: update ecosystem.rst (#1037)
* docs: update ecosystem.rst

* typo [ci skip]
2017-12-31 13:54:50 +00:00
Zarko Milosevic
96e0e4ab5a Describe messages sent as part of consensus/gossip protocol 2017-12-30 21:18:12 +01:00
Ethan Buchman
381fe19335 changelog date [ci skip] 2017-12-29 17:51:19 -05:00
Ethan Buchman
ff99ca7cdf bump wal test timeout 2017-12-29 17:51:13 -05:00
Ethan Buchman
60f95cd9ea changelog and version 2017-12-29 11:28:44 -05:00
Ethan Buchman
28bbeac763 state: send byzantine validators in BeginBlock 2017-12-29 11:26:55 -05:00
Ethan Buchman
e97e0bacd1 update glide 2017-12-29 11:11:13 -05:00
Ethan Buchman
abfdfe67e8 test/p2p: add some timeouts 2017-12-29 11:02:47 -05:00
Ethan Buchman
992371b4cf Merge pull request #1035 from tendermint/readme-min-requirements
README: document the minimum Go version
2017-12-29 10:24:57 -05:00
Ethan Buchman
07eeddc5e1 Merge pull request #1015 from tendermint/state_funcs
state: move methods to funcs
2017-12-29 10:24:22 -05:00
Emmanuel Odeke
3b70c89e07 README: document the minimum Go version
Solidify in writing the minimum Go version that
we support: Go 1.9.
2017-12-28 23:26:45 -07:00
Ethan Buchman
444db4c242 metalinter 2017-12-28 23:15:54 -05:00
Ethan Buchman
cb845ebff5 fix EvidencePool and VerifyEvidence 2017-12-28 23:15:54 -05:00
Ethan Buchman
6112578d07 ValidateBlock is a method on blockExec 2017-12-28 23:15:54 -05:00
Ethan Buchman
ae68fcb78a move fireEvents to ApplyBlock 2017-12-28 23:15:54 -05:00
Ethan Buchman
8d8d63c94c changelog 2017-12-28 23:15:54 -05:00
Ethan Buchman
1d6f00859d fixes from review 2017-12-28 23:15:54 -05:00
Ethan Buchman
397251b0f4 fix evidence 2017-12-28 23:15:54 -05:00
Ethan Buchman
537b0dfa1a use NopEventBus 2017-12-28 23:15:54 -05:00
Ethan Buchman
0acca7fe69 final updates for state 2017-12-28 23:15:54 -05:00
Ethan Buchman
bac60f2067 blockchain: update for new state 2017-12-28 23:15:54 -05:00
Ethan Buchman
f82b7e2a13 state: re-order funcs. fix tests 2017-12-28 23:15:54 -05:00
Ethan Buchman
9e6d088757 state: BlockExecutor 2017-12-28 23:15:54 -05:00
Ethan Buchman
c915719f85 *State->State; SetBlockAndValidators->NextState 2017-12-28 23:15:54 -05:00
Ethan Buchman
f55135578c state: move methods to funcs 2017-12-28 23:15:54 -05:00
Ethan Buchman
139eca0177 Merge pull request #1027 from tendermint/1026-error-signing-vote-wal-gen
change directory for each call, not only for each test
2017-12-28 16:08:52 -05:00
Ethan Buchman
a8e625e99d config: unexpose chainID 2017-12-28 20:49:02 +00:00
Zach Ramsay
a92a32b862 config: lil fixes 2017-12-28 20:49:02 +00:00
Zach Ramsay
9da5cd0180 rebase fix 2017-12-28 20:49:02 +00:00
Anton Kaliaev
a6f2e502e7 move genesis.json file into config dir 2017-12-28 20:49:02 +00:00
Anton Kaliaev
69d8c2e554 fixes after my own review 2017-12-28 20:49:02 +00:00
Zach Ramsay
70ba608850 config: write all default options to config file
config: test the default file

docs: spiff up config

config: minor fixes & comments

config: simplify test

config: use a separate config directory, #556

config: update docs & parameterize file paths

config: PR comments

config: use the default object

fix a rebase error
2017-12-28 20:49:02 +00:00
Anton Kaliaev
75182f7205 change directory for each call, not only for each test
Fixes #1026
2017-12-28 11:17:15 -06:00
Ethan Buchman
41caa4415c Merge pull request #1024 from tendermint/1023-tendermint-version
remove quotes from `tendermint version`
2017-12-28 10:14:39 -05:00
Anton Kaliaev
c611cc7268 remove quotes from tendermint version
before:
0.14.0-'88f5f21d'

after:
0.14.0-88f5f21d

Fixes #1023
2017-12-28 09:02:25 -06:00
Ethan Buchman
d14ec7d7d2 Merge pull request #1011 from tendermint/1006-panic-on-light-client-startup
protect memStoreProvider#byHash map by mutex
2017-12-27 15:30:59 -05:00
Anton Kaliaev
4ca19e33c2 add mutex to memStoreProvider
Fixes #1006

```
goroutine 230 [runnable]:
github.com/cosmos/gaia/vendor/golang.org/x/crypto/ripemd160.(*digest).Sum(0xc420ac2af0, 0x0, 0x0, 0x0, 0xc420c01360, 0xc420c01378, 0x20)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/golang.org/x/crypto/ripemd160/ripemd160.go:119 +0x2c9
github.com/cosmos/gaia/vendor/github.com/tendermint/tmlibs/merkle.KVPair.Hash(0x49447e8, 0x7, 0x47ff760, 0xc420048d50, 0xc420aba9a0, 0x14, 0x20)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tmlibs/merkle/simple_tree.go:122 +0x200
github.com/cosmos/gaia/vendor/github.com/tendermint/tmlibs/merkle.(*KVPair).Hash(0xc4200f7ae0, 0xc420aba9a0, 0x14, 0x20)
	<autogenerated>:1 +0x57
github.com/cosmos/gaia/vendor/github.com/tendermint/tmlibs/merkle.SimpleHashFromHashables(0xc420111900, 0x9, 0x10, 0x10, 0xc420ad06c0, 0xc420087500)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tmlibs/merkle/simple_tree.go:87 +0xaa
github.com/cosmos/gaia/vendor/github.com/tendermint/tmlibs/merkle.SimpleHashFromMap(0xc420b987e0, 0xc420b987e0, 0x4941892, 0x3)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tmlibs/merkle/simple_tree.go:96 +0x4d
github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/types.(*Header).Hash(0xc420a541a0, 0x4034f, 0xc420a28530, 0xa)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/types/block.go:177 +0x54a
github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/lite.Commit.ValidateBasic(0xc420a541a0, 0xc420a1e300, 0xc420a28530, 0xa, 0xb, 0x27)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/lite/commit.go:83 +0x1c9
github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/lite/files.(*provider).StoreCommit(0xc4202f2780, 0xc420a541a0, 0xc420a1e300, 0xc420a24870, 0x0, 0x0)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/lite/files/provider.go:71 +0x5e
github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/lite.cacheProvider.StoreCommit(0xc4202f27a0, 0x2, 0x2, 0xc420a541a0, 0xc420a1e300, 0xc420a24870, 0xc420a541a0, 0xc420a1e300)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/lite/provider.go:39 +0x8f
github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/lite.(*cacheProvider).StoreCommit(0xc4202f27c0, 0xc420a541a0, 0xc420a1e300, 0xc420a24870, 0x2, 0xc420a541a0)
	<autogenerated>:1 +0x70
github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/lite.NewInquiring(0xc420990610, 0xa, 0xc420a541a0, 0xc420a1e300, 0xc420a24870, 0x4f3eb80, 0xc4202f27c0, 0x4f3e5c0, 0xc4202f2840, 0xa)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/lite/inquirer.go:29 +0x5c
github.com/cosmos/gaia/vendor/github.com/cosmos/cosmos-sdk/client.GetCertifier(0xc420990610, 0xa, 0x4f3eb80, 0xc4202f27c0, 0x4f3e5c0, 0xc4202f2840, 0xc4200f68a0, 0xc420b21888, 0x470eece)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/cosmos/cosmos-sdk/client/common.go:47 +0x141
github.com/cosmos/gaia/vendor/github.com/cosmos/cosmos-sdk/client/commands.GetCertifier(0x4f46aa0, 0xc4200f68a0, 0xc420b218e0)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/cosmos/cosmos-sdk/client/commands/common.go:82 +0x83
github.com/cosmos/gaia/vendor/github.com/cosmos/cosmos-sdk/client/commands/query.GetWithProof(0xc420a260c0, 0x22, 0x30, 0x0, 0x4947dd6, 0xc42016a000, 0xc420b219a0, 0x4389d85, 0x0, 0x0, ...)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/cosmos/cosmos-sdk/client/commands/query/get.go:73 +0x52
github.com/cosmos/gaia/vendor/github.com/cosmos/cosmos-sdk/client/commands/query.Get(0xc420a260c0, 0x22, 0x30, 0x0, 0x1, 0x0, 0xc420b21a08, 0x43c9c91, 0xc4200a20f0, 0x4947dd6, ...)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/cosmos/cosmos-sdk/client/commands/query/get.go:64 +0x178
github.com/cosmos/gaia/vendor/github.com/cosmos/cosmos-sdk/client/commands/query.GetParsed(0xc420a260c0, 0x22, 0x30, 0x47d1440, 0xc420b984e0, 0x0, 0x1, 0x30, 0x1d, 0x40)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/cosmos/cosmos-sdk/client/commands/query/get.go:31 +0x5f
github.com/cosmos/gaia/vendor/github.com/cosmos/cosmos-sdk/modules/coin/rest.doQueryAccount(0x4f3d300, 0xc4201ee0e0, 0xc420111500)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/cosmos/cosmos-sdk/modules/coin/rest/handlers.go:66 +0x3a5
net/http.HandlerFunc.ServeHTTP(0x4a9d808, 0x4f3d300, 0xc4201ee0e0, 0xc420111500)
	/usr/local/go/src/net/http/server.go:1918 +0x44
github.com/cosmos/gaia/vendor/github.com/gorilla/mux.(*Router).ServeHTTP(0xc4201aab40, 0x4f3d300, 0xc4201ee0e0, 0xc420111500)
	/Users/mappum/go/src/github.com/cosmos/gaia/vendor/github.com/gorilla/mux/mux.go:150 +0xed
net/http.serverHandler.ServeHTTP(0xc4209992b0, 0x4f3d300, 0xc4201ee0e0, 0xc420111300)
	/usr/local/go/src/net/http/server.go:2619 +0xb4
net/http.(*conn).serve(0xc4200b5540, 0x4f3df80, 0xc420086900)
	/usr/local/go/src/net/http/server.go:1801 +0x71d
created by net/http.(*Server).Serve
	/usr/local/go/src/net/http/server.go:2720 +0x288
```
2017-12-27 15:29:17 -05:00
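A sketch of the fix above: guard the in-memory provider's byHash map with a mutex so concurrent RPC handlers can't race on it. The types and method names below are illustrative, not the actual lite package code.

```go
package main

import (
	"fmt"
	"sync"
)

type commit struct{ height int }

type memStoreProvider struct {
	mtx    sync.Mutex
	byHash map[string]commit
}

func newMemStoreProvider() *memStoreProvider {
	return &memStoreProvider{byHash: make(map[string]commit)}
}

func (p *memStoreProvider) StoreCommit(hash string, c commit) {
	p.mtx.Lock()
	defer p.mtx.Unlock()
	p.byHash[hash] = c
}

func (p *memStoreProvider) GetByHash(hash string) (commit, bool) {
	p.mtx.Lock()
	defer p.mtx.Unlock()
	c, ok := p.byHash[hash]
	return c, ok
}

func main() {
	p := newMemStoreProvider()
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ { // clean under `go run -race`
		wg.Add(1)
		go func(h int) {
			defer wg.Done()
			hash := fmt.Sprintf("hash-%d", h)
			p.StoreCommit(hash, commit{height: h})
			p.GetByHash(hash)
		}(i)
	}
	wg.Wait()
	fmt.Println("stored", len(p.byHash), "commits")
}
```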
Ethan Buchman
1a0db878bf Merge pull request #1009 from tendermint/spec
Spec
2017-12-27 15:23:13 -05:00
Ethan Buchman
53eb9aca2b Merge pull request #592 from tendermint/evidence
track evidence, include in block
2017-12-27 15:22:24 -05:00
Ethan Buchman
caccabd4e5 spec: fixes from review 2017-12-27 15:14:49 -05:00
Ethan Buchman
d0e0ac5fac types: better error messages for votes 2017-12-27 14:46:24 -05:00
Ethan Buchman
7d81a3f4a5 address some comments from review 2017-12-27 01:27:03 -05:00
Ethan Buchman
f95b7529eb Merge pull request #999 from tendermint/feature/result-hash-header
Add results hash to header
2017-12-26 20:49:20 -05:00
Ethan Buchman
6a4fd46479 fixes from rebase 2017-12-26 20:42:34 -05:00
Ethan Buchman
b01b1e4758 remove unused var 2017-12-26 20:27:40 -05:00
Ethan Buchman
014b0b9944 evidence: reactor test 2017-12-26 20:27:40 -05:00
Ethan Buchman
5904f6df8b minor fixes from review 2017-12-26 20:27:40 -05:00
Ethan Buchman
0f293bfc2b remove some TODOs 2017-12-26 20:27:40 -05:00
Ethan Buchman
cfbedec719 evidence: reactor test 2017-12-26 20:27:40 -05:00
Ethan Buchman
666ae244b3 evidence: pool test 2017-12-26 20:27:40 -05:00
Ethan Buchman
c13e93d63e evidence: store tests and fixes 2017-12-26 20:27:40 -05:00
Ethan Buchman
c2585b5525 evidence_pool.go -> pool.go. remove old test files 2017-12-26 20:27:40 -05:00
Ethan Buchman
1d021c2790 update consensus/test_data/many_blocks.cswal 2017-12-26 20:27:40 -05:00
Ethan Buchman
cc418e5dab state.VerifyEvidence enforces EvidenceParams.MaxAge 2017-12-26 20:27:32 -05:00
Ethan Buchman
c7acdfadf2 evidence: more funcs in store.go 2017-12-26 20:27:32 -05:00
Ethan Buchman
869d873d5c state.ApplyBlock takes evpool and calls MarkEvidenceAsCommitted 2017-12-26 20:27:32 -05:00
Ethan Buchman
3271634e7a types: evidence cleanup 2017-12-26 20:26:21 -05:00
Ethan Buchman
4854c231e1 evidence store comments and cleanup 2017-12-26 20:26:21 -05:00
Ethan Buchman
7a18fa887d evidence linked with consensus/node. compiles 2017-12-26 20:26:21 -05:00
Ethan Buchman
6c4a0f9363 cleanup evidence pkg. state.VerifyEvidence 2017-12-26 20:26:21 -05:00
Ethan Buchman
f7731d38f6 some comments and cleanup 2017-12-26 20:25:14 -05:00
Ethan Buchman
df3f4de7c3 check evidence is from validator; some cleanup 2017-12-26 20:25:14 -05:00
Ethan Buchman
10c43c9edc introduce evidence store 2017-12-26 20:25:14 -05:00
Ethan Buchman
fe4b53a463 EvidencePool 2017-12-26 20:24:54 -05:00
Ethan Buchman
5b1f987ed1 mempool: remove Peer interface. use p2p.Peer 2017-12-26 20:24:12 -05:00
Ethan Buchman
48d778c4b3 types/params: introduce EvidenceParams 2017-12-26 20:24:12 -05:00
Ethan Buchman
7d086e9524 check if we already have evidence 2017-12-26 20:21:17 -05:00
Ethan Buchman
6e9433c7a8 post rebase fix 2017-12-26 20:21:17 -05:00
Ethan Buchman
39299e5cc1 consensus: note about duplicate evidence 2017-12-26 20:21:17 -05:00
Ethan Buchman
eeab0efa56 types: tx.go comments 2017-12-26 20:21:17 -05:00
Ethan Buchman
77e45756f2 types: Evidences for merkle hashing; Evidence.String() 2017-12-26 20:21:17 -05:00
Ethan Buchman
9cdcffbe4b types: comments; compiles; evidence test 2017-12-26 20:21:17 -05:00
Ethan Buchman
50850cf8a2 verify sigs on both votes; note about indices 2017-12-26 20:21:17 -05:00
Ethan Buchman
35587658cd verify evidence in block 2017-12-26 20:21:17 -05:00
Ethan Buchman
4661c98c17 add pubkey to conflicting vote evidence 2017-12-26 20:21:17 -05:00
Ethan Buchman
7928659f70 track evidence, include in block 2017-12-26 20:21:17 -05:00
Ethan Buchman
f80f6445a6 fix test 2017-12-26 20:15:09 -05:00
Ethan Buchman
336c2f4fe1 rpc: fix getHeight 2017-12-26 20:08:25 -05:00
Ethan Buchman
bfcb40bf6b validate block.ValidatorsHash 2017-12-26 20:00:45 -05:00
Ethan Buchman
051c2701ab remove LastConsensusParams 2017-12-26 19:56:39 -05:00
Ethan Buchman
028ee58580 call it LastResultsHash 2017-12-26 19:53:26 -05:00
Ethan Buchman
801e3dfacf rpc: getHeight helper function 2017-12-26 19:37:42 -05:00
Ethan Buchman
4171bd3bae fixes 2017-12-26 19:24:45 -05:00
Ethan Buchman
73fb1c3a17 consolidate saveResults/SaveABCIResponses 2017-12-26 19:24:45 -05:00
Ethan Frey
d65234ed51 Add /block_results?height=H as rpc endpoint
Expose it in rpc client
Move ABCIResults into tendermint/types from tendermint/state
2017-12-26 19:24:25 -05:00
Ethan Frey
58c5df729b Add ResultHash to header 2017-12-26 19:24:25 -05:00
Ethan Frey
632cc918b4 Save/Load Results for every height
Add some tests.
Behaves like saving validator set, except it always saves at each height
instead of a reference to last changed.
2017-12-26 19:24:25 -05:00
Ethan Frey
f870a49f42 Add ABCIResults with Hash and Proof to State
State maintains LastResultsHash
Verify that we can produce unique hashes for each result,
and provide valid proofs from the root hash.
2017-12-26 19:24:25 -05:00
Ethan Buchman
d844799b3b Merge branch '950-enforce-less-13-val-changes-per-block' into develop 2017-12-26 19:22:21 -05:00
Ethan Buchman
3ea1145486 bring back test 2017-12-26 19:22:15 -05:00
Ethan Buchman
d4716fc03c state 2017-12-26 18:43:03 -05:00
Ethan Buchman
16227594ef notes about block 1 2017-12-26 16:33:42 -05:00
Ethan Buchman
65cdb07f0c merkle 2017-12-26 15:48:17 -05:00
Ethan Buchman
eb73e82279 encoding.md 2017-12-26 15:30:56 -05:00
Ethan Buchman
d6fbfddddd spec.md -> blockchain.md. some fixes 2017-12-26 15:30:27 -05:00
Anton Kaliaev
1339a44402 add safe*Clip funcs 2017-12-26 14:13:12 -06:00
Anton Kaliaev
b8215d8ac8 more test cases 2017-12-26 13:30:00 -06:00
Ethan Buchman
289d92c97d consensus: remove log stmt. closes #987 2017-12-26 10:41:31 -05:00
Ethan Buchman
6c39c77fc5 Merge pull request #996 from ricardohsd/types-add-tests-to-vote
Add more tests to types/vote.go
2017-12-26 10:32:07 -05:00
Anton Kaliaev
69c3a7640b add safeAdd & safeSub plus quickcheck tests 2017-12-25 18:39:14 -06:00
Anton Kaliaev
e8b0458f16 check for overflow and underflow while choosing proposer
Refs #919
2017-12-25 18:39:14 -06:00
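In the spirit of the safeAdd/safeSub and safe*Clip commits above, here is a small sketch of overflow-checked int64 addition for accumulating voting power; the exact signatures in tmlibs may differ, this is just the technique.

```go
package main

import (
	"fmt"
	"math"
)

// safeAdd returns a+b and whether the operation overflowed/underflowed int64.
func safeAdd(a, b int64) (int64, bool) {
	if b > 0 && a > math.MaxInt64-b {
		return 0, true // overflow
	}
	if b < 0 && a < math.MinInt64-b {
		return 0, true // underflow
	}
	return a + b, false
}

// safeAddClip clips to the int64 range instead of reporting a flag.
func safeAddClip(a, b int64) int64 {
	c, over := safeAdd(a, b)
	if over {
		if b > 0 {
			return math.MaxInt64
		}
		return math.MinInt64
	}
	return c
}

func main() {
	fmt.Println(safeAdd(math.MaxInt64, 1))     // 0 true
	fmt.Println(safeAddClip(math.MaxInt64, 1)) // clipped to MaxInt64
	fmt.Println(safeAdd(1, 2))                 // 3 false
}
```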
Anton Kaliaev
6b89639f90 update docs 2 [ci skip] 2017-12-25 17:58:15 -06:00
Anton Kaliaev
9b25f7325a update docs [ci skip] 2017-12-25 17:53:54 -06:00
Anton Kaliaev
0093f9877a change voting power change, not number of vals 2017-12-25 17:49:36 -06:00
Anton Kaliaev
cf0b5d3715 enforce <1/3 validator updates
Refs #950
2017-12-25 12:10:53 -06:00
Ethan Buchman
616f7e74db Merge pull request #1001 from tendermint/makefile
Cleaned up makefile
2017-12-25 12:09:10 -05:00
Ethan Buchman
14c812a39c tmlibs timer fix 2017-12-25 11:11:55 -05:00
Ethan Buchman
1e52751344 update tests for makefile 2017-12-25 10:24:41 -05:00
Jae Kwon
d7ac6e516a Cleaned up makefile 2017-12-23 02:23:05 -08:00
Ethan Buchman
c2436c46e6 Merge pull request #972 from tendermint/feature/enhance-endblock
Update EndBlock parameters
2017-12-22 01:30:58 -05:00
Ethan Buchman
e3585a6eb0 wip: tendermint specification 2017-12-21 22:36:37 -05:00
Ethan Buchman
38608b1b0f comment and tmlibs fix 2017-12-21 18:32:40 -05:00
Ethan Buchman
2b634dab32 Merge pull request #994 from tendermint/clean/block-validation
Clean/block validation
2017-12-21 17:53:11 -05:00
Ethan Buchman
91acc51cd1 fix test 2017-12-21 17:52:06 -05:00
Ethan Buchman
dc54ba67e4 state: TestValidateBlock 2017-12-21 17:51:03 -05:00
Ethan Buchman
35521b553a save historical consensus params 2017-12-21 17:46:25 -05:00
Ethan Buchman
70a744558c types: params.Update() 2017-12-21 17:00:52 -05:00
Ethan Buchman
4b789ff7e9 another cmn fix 2017-12-21 16:49:47 -05:00
Ethan Buchman
306657a118 no patience for metalinter right now 2017-12-21 16:49:47 -05:00
Ethan Buchman
be765e4cb9 update glide for cmn fixes 2017-12-21 16:49:47 -05:00
Ethan Buchman
b5857da877 forgot file 2017-12-21 16:49:47 -05:00
Ethan Buchman
3ad055ef3a fix randPort 2017-12-21 16:49:47 -05:00
Ethan Buchman
3d00c477fc separate block vs state based validation 2017-12-21 16:49:47 -05:00
Ethan Buchman
c2912d612a update glide 2017-12-21 16:49:47 -05:00
Ethan Buchman
a3c7525249 Merge pull request #993 from tendermint/984-priv-validator-signing
priv validator returns last sign bytes if h/r/s matches
2017-12-21 16:30:22 -05:00
Ethan Buchman
f81025631e update comment [ci skip] 2017-12-21 16:28:05 -05:00
Ethan Buchman
9c03c58de2 priv validator checks if only difference is timestamp; else error 2017-12-21 15:37:27 -05:00
Anton Kaliaev
0ffd60b8cf ValidatorSetUpdates -> ValidatorUpdates 2017-12-21 11:52:26 -06:00
Ricardo Domingos
d5baa6601c types: Add test for IsVoteTypeValid 2017-12-21 18:13:31 +01:00
Ricardo Domingos
19eeef0aad types: Rename exampleVote to examplePrecommit on vote_test
exampleVote doesn't express the type of the vote.
2017-12-21 18:13:31 +01:00
Ricardo Domingos
e76392e330 types: Update String() test to assert Prevote type 2017-12-20 23:21:30 +01:00
Anton Kaliaev
a1cc9ac642 priv validator returns last sign bytes if h/r/s matches
since we now have time in the msgs and we might crash between writing
the priv val and writing to the WAL.

Refs #984
2017-12-20 14:41:43 -06:00
Emmanuel T Odeke
67c3af3bf8 cmd/tendermint: fix initialization file creation checks (#991)
* cmd/tendermint: fix initialization file creation checks

Fixes #989.

The original initialization sequence started to inexplicably
fail
```shell
tendermint unsafe_reset_all
tendermint init
tendermint node --proxy_app=dummy
```

used to fail with
```shell
ERROR: Failed to create node: Couldn't read GenesisDoc file: open
/Users/emmanuelodeke/.tendermint/genesis.json: no such file or directory
```

because the initialization sequence always assumed that the
genesisDoc would only be set if the privValidator was generated.

However, `tendermint unsafe_reset_all` only created the
`priv_validator.json` file, which meant that running
`tendermint init` would never create the `genesis.json` file;
following the recommended sequence would then fail
since the `genesis.json` was absent.

* cmd/tendermint: Load PrivValidatorFS if existent, lest generate it

Feedback from @melekes

* change logging messages for init cmd

Refs #989
2017-12-20 12:50:27 -06:00
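A sketch of the shape of this fix: check for the priv validator and genesis files independently instead of assuming the genesis doc only needs creating when the priv validator is freshly generated. Paths, helpers, and file contents below are illustrative, not the real init command.

```go
package main

import (
	"fmt"
	"os"
)

func initFilesIfMissing(privValFile, genesisFile string) error {
	if _, err := os.Stat(privValFile); os.IsNotExist(err) {
		if err := os.WriteFile(privValFile, []byte("{}"), 0o600); err != nil {
			return err
		}
		fmt.Println("Generated private validator", privValFile)
	} else {
		fmt.Println("Found private validator", privValFile)
	}

	// Checked on its own, so a reset that leaves only priv_validator.json
	// no longer prevents genesis.json from being created.
	if _, err := os.Stat(genesisFile); os.IsNotExist(err) {
		if err := os.WriteFile(genesisFile, []byte("{}"), 0o600); err != nil {
			return err
		}
		fmt.Println("Generated genesis file", genesisFile)
	} else {
		fmt.Println("Found genesis file", genesisFile)
	}
	return nil
}

func main() {
	dir, _ := os.MkdirTemp("", "init-sketch")
	defer os.RemoveAll(dir)
	_ = initFilesIfMissing(dir+"/priv_validator.json", dir+"/genesis.json")
}
```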
Anton Kaliaev
843e1ed400 Updates -> ValidatorSetUpdates 2017-12-19 13:03:39 -06:00
Ethan Buchman
4bca6bf6f5 fix test 2017-12-19 12:30:34 -05:00
Ethan Frey
960b25408f Store LastConsensusHash in State as well
Update all BlockValidation that it matches the last state
2017-12-19 12:28:08 -05:00
Ethan Frey
45bc106de7 Updated lite tests to set ConsensusHash in header 2017-12-19 12:28:08 -05:00
Ethan Frey
d151e36ea8 Add ConsensusHash to header 2017-12-19 12:28:08 -05:00
Ethan Frey
56cada6a0c Validate ConsensusParams returned from abci app 2017-12-19 12:28:08 -05:00
Ethan Frey
a0b2d77bef Add hash to ConsensusParams 2017-12-19 12:28:08 -05:00
Ethan Frey
030fd00232 Added tests for applying consensus param changes 2017-12-19 12:28:08 -05:00
Ethan Frey
d21f39160f Apply ConsensusParamChanges to state/State 2017-12-19 12:28:08 -05:00
Ethan Frey
4265a94bfe Update EndBlock parameters
* Update abci dependencies
* Modify references from Diffs to Changes
* Fixes issues #924
2017-12-19 12:28:08 -05:00
Ethan Buchman
652d1e3de8 Merge pull request #979 from tendermint/934-node-fails-to-parse-seeds
strip protocol if defined
2017-12-19 12:26:32 -05:00
Ethan Buchman
b33cff4cb7 Merge pull request #981 from tendermint/977-wal-generator
enable logging for wal_generator and set timeout to 1 min
2017-12-19 12:25:58 -05:00
Ethan Buchman
e0fe84a856 Merge branch 'develop' into 977-wal-generator 2017-12-19 11:11:26 -05:00
Ethan Buchman
783ffdb7fd Merge pull request #983 from tendermint/circle-testing
Circle testing
2017-12-19 11:09:22 -05:00
Ethan Buchman
cb3ac6987e remove some debugs 2017-12-19 10:11:37 -05:00
Anton Kaliaev
5a83e58428 stop eventBus 2017-12-17 20:16:02 -06:00
Anton Kaliaev
3f02ab0ead unidirectional channel 2017-12-16 22:20:07 -06:00
Anton Kaliaev
99c58fc561 enable logging for wal_generator and set timeout to 1 min
Refs #977
2017-12-16 21:59:10 -06:00
Ethan Buchman
a86df17ceb crank city 2017-12-16 19:55:04 -05:00
Ethan Buchman
5d04ccbe51 excessive logging. update tmlibs for timer fix 2017-12-16 19:16:08 -05:00
Ethan Buchman
61dc357bb3 test/p2p/kill_all: longer timeout 2017-12-16 13:36:52 -05:00
Ethan Buchman
d7cb2f850d more logs in p2p 2017-12-16 13:36:52 -05:00
Ethan Buchman
bfe0a4a8ac more logging 2017-12-16 13:36:52 -05:00
Ethan Buchman
0ec7909ec3 more logging in p2p and consensus 2017-12-16 13:36:52 -05:00
Ethan Buchman
b5b912e2c4 Merge remote-tracking branch 'origin/977-wal-generator' into develop 2017-12-16 13:36:32 -05:00
Ethan Buchman
9504a593e9 Merge pull request #980 from tendermint/fix-test-in-develop
add missing Timestamp to Votes
2017-12-15 22:11:47 -05:00
Anton Kaliaev
f8f28c8942 enable logging for wal_generator and set timeout to 1 min
Refs #977
2017-12-15 16:15:09 -06:00
Anton Kaliaev
8fc7d63cf8 add missing Timestamp to Votes
Fixes:

```
panic: Panicked on a Sanity Check: can't encode times below 1970 [recovered]
        panic: Panicked on a Sanity Check: can't encode times below 1970

goroutine 2042 [running]:
testing.tRunner.func1(0xc420e8c0f0)
        /usr/local/go/src/testing/testing.go:711 +0x5d9
panic(0xcd9e20, 0xc420c8c270)
        /usr/local/go/src/runtime/panic.go:491 +0x2a2
github.com/tendermint/tendermint/vendor/github.com/tendermint/tmlibs/common.PanicSanity(0xcd9e20, 0xf8ddd0)
        /go/src/github.com/tendermint/tendermint/vendor/github.com/tendermint/tmlibs/common/errors.go:26 +0x120
github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire.WriteTime(0x0, 0x0, 0x0, 0x1306440, 0xc4201607e0, 0xc420e31658, 0xc420e31680)
        /go/src/github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire/time.go:19 +0x11e
github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire.writeReflectBinary(0xdc9e40, 0xc4201fcf30, 0x199, 0x1317b80, 0xdc9e40, 0xc98451, 0x9, 0x0, 0xdc9e40, 0xc420ead9c0, ...)
        /go/src/github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire/reflect.go:525 +0x22f0
github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire.writeReflectBinary(0xd25400, 0xc42000e2e8, 0x196, 0x1317b80, 0xd92d60, 0xc9a195, 0xa, 0x0, 0xcc8ce0, 0xc420164920, ...)
        /go/src/github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire/reflect.go:530 +0x21e7
github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire.writeReflectBinary(0xcc8ce0, 0xc4209f95b8, 0x197, 0x1317b80, 0xcc8ce0, 0xc9a195, 0xa, 0x0, 0xcc8ce0, 0xc420164920, ...)
        /go/src/github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire/reflect.go:518 +0x2509
github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire.writeReflectBinary(0xd873e0, 0xc4209f9580, 0x16, 0x1317b80, 0xd79400, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
        /go/src/github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire/reflect.go:530 +0x21e7
github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire.WriteBinary(0xd873e0, 0xc4209f9580, 0x1306440, 0xc4201607e0, 0xc420e31658, 0xc420e31680)
        /go/src/github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire/wire.go:80 +0x15f
github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire.BinaryBytes(0xd873e0, 0xc4209f9580, 0x3, 0x8, 0xc420160798)
        /go/src/github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire/util.go:15 +0xb8
github.com/tendermint/tendermint/blockchain.(*BlockStore).SaveBlock(0xc4201342a0, 0xc420eac180, 0xc420130640, 0xc4209f9580)
        github.com/tendermint/tendermint/blockchain/_test/_obj_test/store.go:192 +0x439
github.com/tendermint/tendermint/blockchain.TestBlockStoreSaveLoadBlock(0xc420e8c0f0)
        /go/src/github.com/tendermint/tendermint/blockchain/store_test.go:128 +0x609
testing.tRunner(0xc420e8c0f0, 0xf20830)
        /usr/local/go/src/testing/testing.go:746 +0x16d
created by testing.(*T).Run
        /usr/local/go/src/testing/testing.go:789 +0x569
exit status 2
FAIL
```
2017-12-15 13:59:25 -06:00
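The panic above comes from encoding a zero time.Time, whose year is 1 (well below 1970). A minimal sketch of the guard and the fix (always populate the Timestamp before encoding); the Vote type and encodeTime helper here are stand-ins, not the go-wire API.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

type Vote struct {
	Height    int64
	Timestamp time.Time
}

// encodeTime mimics the sanity check that trips on zero timestamps.
func encodeTime(t time.Time) ([]byte, error) {
	if t.Year() < 1970 {
		return nil, errors.New("can't encode times below 1970")
	}
	return []byte(t.UTC().Format(time.RFC3339Nano)), nil
}

func main() {
	bad := Vote{Height: 1} // Timestamp left at its zero value
	if _, err := encodeTime(bad.Timestamp); err != nil {
		fmt.Println("zero timestamp:", err)
	}

	good := Vote{Height: 1, Timestamp: time.Now()}
	b, _ := encodeTime(good.Timestamp)
	fmt.Println("encoded:", string(b))
}
```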
Anton Kaliaev
c513649df4 strip protocol if defined
Fixes #934
2017-12-15 13:36:08 -06:00
Anton Kaliaev
a6911825b0 PanicCrisis is deprecated 2017-12-15 13:35:49 -06:00
Ethan Buchman
eddabab5e4 Merge pull request #965 from tendermint/573-handle-corrupt-wal-file
Handle corrupt WAL file
2017-12-15 14:33:16 -05:00
Ethan Buchman
3eee69de2d Merge pull request #954 from tendermint/668-send-absent-validators
Send absent validators
2017-12-15 13:55:52 -05:00
Ethan Buchman
068d83bce8 Merge pull request #677 from tendermint/blockchain-test-store
blockchain: add tests for BlockStore
2017-12-15 13:33:55 -05:00
Anton Kaliaev
7f649ccf23 fixes from Frey's review 2017-12-15 12:21:15 -06:00
Anton Kaliaev
808b830942 add a unit test
Refs #668
2017-12-15 12:13:02 -06:00
Anton Kaliaev
d669816a1b send absent validators in BeginBlock
Refs #668
2017-12-15 12:13:02 -06:00
Anton Kaliaev
e40689b9cc PanicCrisis is deprecated 2017-12-15 11:59:45 -06:00
Anton Kaliaev
709cf18aef add gofuzz test for consensus wal 2017-12-15 11:56:24 -06:00
Anton Kaliaev
e57cad6c3f correct maxMsgSizeBytes 2017-12-15 11:42:53 -06:00
Anton Kaliaev
4f94caa1b9 explain what to do in case of truncation [ci skip] 2017-12-15 11:11:21 -06:00
Ethan Buchman
78a682e4b6 blockchain: test fixes 2017-12-15 12:07:48 -05:00
Ethan Buchman
21d030dbfb Merge pull request #975 from tendermint/974-fix-test-in-develop
add missing Timestamp to Vote
2017-12-14 09:22:11 -05:00
Anton Kaliaev
72da553ed9 add missing Timestamp to Vote
Fixes #974
2017-12-13 22:24:06 -06:00
Anton Kaliaev
b78606d94f Merge pull request #967 from tendermint/feature/total-tx
Add TotalTx to block header
2017-12-13 17:09:48 -06:00
Ethan Frey
a6f719a402 Add tests for block validation 2017-12-13 19:54:16 +01:00
Anton Kaliaev
e0fbd148ef Merge pull request #958 from tendermint/pex-on-by-default
activate PEX reactor by default
2017-12-13 12:52:17 -06:00
Anton Kaliaev
2f91289880 update changelog [ci skip] 2017-12-13 12:26:12 -06:00
Ethan Buchman
462b755a60 activate PEX reactor by default 2017-12-13 12:25:48 -06:00
Anton Kaliaev
0a2ecaa393 Merge pull request #953 from tendermint/feature/time-fields
Add Timestamp to Proposal/Vote
2017-12-13 12:18:55 -06:00
Ethan Frey
dedf03bb81 Add TotalTx to block header, issue #952
Update state to keep track of this info.
Change function args as needed.
Make NumTx also an int64 for consistency.
2017-12-13 12:20:53 +01:00
Ethan Buchman
64f056b57d Merge branch '916-remove-sleeps-from-tests' into develop 2017-12-12 16:43:36 -05:00
Ethan Buchman
90df9fa1bf p2p/trust: remove extra channels 2017-12-12 16:43:19 -05:00
caffix
eae6e6381e trust metric is now a service and the test ticker has been added 2017-12-12 15:33:42 -05:00
Anton Kaliaev
04a18e0a97 briefly describe the recover process [ci skip] 2017-12-12 13:03:09 -06:00
Anton Kaliaev
06aece31cf lower the max message size 2017-12-12 13:02:40 -06:00
Ethan Buchman
e0296d6c3c consensus: fix makeBlockchainFromWAL 2017-12-12 12:14:15 -05:00
Ethan Frey
5ffb5f01cc Add more tests for Proposal/Vote serialization
Check that String() and the Proposal remain valid after serializing.
This is partly to be safe, but mainly to increase test coverage for the PR.
2017-12-12 12:59:51 +01:00
Ethan Frey
8576ad58bd Cleanup canonical json 2017-12-12 12:59:51 +01:00
Ethan Frey
c4860f6c29 Force CanonicalTime to UTC
fixes issue with vote serialization breaking the signatures
2017-12-12 12:59:51 +01:00
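A minimal Go sketch of why forcing the canonical time to UTC keeps signatures stable; the helper name and millisecond layout are illustrative assumptions, not the actual Tendermint canonical JSON code:

```go
package main

import (
	"fmt"
	"time"
)

// canonicalTime normalizes a timestamp to UTC and truncates it to
// millisecond precision, so the same instant always serializes to the
// same bytes regardless of the local timezone that produced it.
func canonicalTime(t time.Time) string {
	return t.UTC().Truncate(time.Millisecond).Format("2006-01-02T15:04:05.000Z")
}

func main() {
	loc, err := time.LoadLocation("America/New_York")
	if err != nil {
		panic(err)
	}
	local := time.Date(2017, 12, 12, 7, 0, 0, 123456789, loc)
	// Both views of the same instant yield identical canonical bytes,
	// so a signature over them still verifies after a round trip.
	fmt.Println(canonicalTime(local))
	fmt.Println(canonicalTime(local.UTC()))
}
```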
Ethan Frey
850310b034 Add test to isolate precommit failure
types/vote_test.go now checks signature on a serialized and
then deserialized vote. Turns out go-wire time encoding doesn't
respect timezones, and the signatures don't check out.
2017-12-12 12:59:51 +01:00
Ethan Frey
a29c781295 Add default timestamp to all instances of *types.Vote 2017-12-12 12:59:51 +01:00
Ethan Frey
599673690c Add timestamp to vote canonical encoding 2017-12-12 12:59:51 +01:00
Ethan Frey
7deda53b7c Add Timestamp to Proposal for issue #929
Store it as time.Timestamp locally, encode it as RFC3339 with milliseconds
before signing the canonical form.
2017-12-12 12:59:51 +01:00
Ethan Buchman
5ecae52bf1 Merge branch 'master' into develop 2017-12-12 02:31:47 -05:00
Ethan Buchman
ac2d0edb2f Merge pull request #964 from tendermint/fix-gometa-makefile
fix gometalinter.v2 automatically
2017-12-12 02:26:18 -05:00
Petabyte Storage
ae632654d2 add tools check with short circuit 2017-12-11 23:00:18 -08:00
Ethan Buchman
88f5f21dbb Merge pull request #960 from tendermint/release-v0.14.0
Release v0.14.0
2017-12-12 01:28:35 -05:00
Petabyte Storage
49e5510953 remove tools from all 2017-12-11 21:44:53 -08:00
Anton Kaliaev
a6644f7477 remove gopath prefixes
it's safe because I added GOPATH to PATH earlier today
2017-12-11 23:05:22 -06:00
Anton Kaliaev
10265d8667 add tools to make all because it's required for test target 2017-12-11 23:02:42 -06:00
Petabyte Storage
8be708fe5b fix spelling and makefile gometalinter.v2 2017-12-11 20:48:15 -08:00
Ethan Buchman
5facfafbd5 Merge branch 'develop' into release-v0.14.0 2017-12-11 23:09:04 -05:00
Ethan Buchman
11761d1769 initial port of cosmos-sdk basecli proxy 2017-12-11 22:23:13 -05:00
Ethan Buchman
2c14488b93 Merge pull request #963 from tendermint/remove-get-deps-from-makefile
Remove get_deps, update_deps and list_deps from Makefile
2017-12-11 21:56:05 -05:00
Ethan Buchman
c127bce73b Merge pull request #962 from tendermint/wait-a-little-longet-on-ci
wait 5 sec for a block on CircleCI
2017-12-11 21:54:57 -05:00
Anton Kaliaev
af79a2a59e fix error msg 2017-12-11 19:50:05 -06:00
Anton Kaliaev
ee66476d62 set max msg size
otherwise, it is easy to get an OutOfMemory panic (somebody could even exploit
this)
2017-12-11 19:48:57 -06:00
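A hedged sketch of the guard this commit describes: cap the declared message size before allocating, so a corrupt or malicious length prefix cannot trigger an out-of-memory panic. The constant and helper below are assumptions for illustration, not the real WAL code:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"errors"
	"fmt"
	"io"
)

// maxMsgSizeBytes is an assumed cap, used only for this illustration.
const maxMsgSizeBytes = 1024 * 1024 // 1 MB

// readLengthPrefixed reads a 4-byte big-endian length and then the payload,
// rejecting anything larger than maxMsgSizeBytes before allocating it.
func readLengthPrefixed(r io.Reader) ([]byte, error) {
	var lenBuf [4]byte
	if _, err := io.ReadFull(r, lenBuf[:]); err != nil {
		return nil, err
	}
	length := binary.BigEndian.Uint32(lenBuf[:])
	if length > maxMsgSizeBytes {
		return nil, errors.New("msg is too big") // fail fast instead of OOM
	}
	data := make([]byte, length)
	_, err := io.ReadFull(r, data)
	return data, err
}

func main() {
	msg := []byte("hello")
	framed := append([]byte{0, 0, 0, byte(len(msg))}, msg...)
	out, err := readLengthPrefixed(bytes.NewReader(framed))
	fmt.Println(string(out), err)
}
```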
Anton Kaliaev
40f9261d48 handle data corruption errors
Refs #573
2017-12-11 19:48:20 -06:00
Anton Kaliaev
69205594cc add gopath to path on CircleCI 2017-12-11 16:59:08 -06:00
Anton Kaliaev
d943f66abc remove get_deps, update_deps and list_deps
Rationale: they only lead to broken builds and should not be used by
anyone.
2017-12-11 16:55:40 -06:00
Anton Kaliaev
b2385b46cf wait 5 sec for a block on CircleCI
Fixes:
```
--- FAIL: TestHandshakeReplaySome (12.40s)
        replay_test.go:332: waited too long for tendermint to produce 6 blocks
```
2017-12-11 16:22:27 -06:00
Ethan Buchman
2af32d6665 changelog and version bump 2017-12-11 15:05:56 -05:00
Ethan Buchman
5c58db3bb4 changelog [ci skip] 2017-12-11 14:49:34 -05:00
Ethan Buchman
24a9491203 Merge pull request #955 from tendermint/939-p2p-exponential-backoff-on-reconnect
p2p: exponential backoff on reconnect. closes #939
2017-12-11 14:25:39 -05:00
Ethan Buchman
5511bd8e85 p2p: exponential backoff on reconnect. closes #939 2017-12-11 13:41:09 -05:00
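A small sketch of exponential backoff between reconnect attempts; the base delay and cap below are assumptions, not the values the p2p switch actually uses:

```go
package main

import (
	"fmt"
	"time"
)

// reconnectDelay doubles the wait after every failed attempt, up to a cap.
func reconnectDelay(attempt uint) time.Duration {
	const (
		base = 3 * time.Second
		cap  = 5 * time.Minute
	)
	d := base << attempt // 3s, 6s, 12s, 24s, ...
	if d > cap || d <= 0 {
		return cap // also guards against shift overflow
	}
	return d
}

func main() {
	for attempt := uint(0); attempt < 8; attempt++ {
		fmt.Printf("attempt %d -> wait %v\n", attempt, reconnectDelay(attempt))
	}
}
```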
Ethan Buchman
f1ca2b3a3a Merge pull request #698 from tendermint/feat-appveyor
WIP: Run tests on AppVeyor
2017-12-10 20:06:34 -05:00
Ethan Buchman
10fcefe346 appveyor: use make 2017-12-10 20:07:44 -05:00
Ethan Buchman
0bfc11f1ba blockchain: note about store tests needing simplification ... 2017-12-10 20:03:58 -05:00
Emmanuel Odeke
96998a5498 blockchain: Block creator helper for compressing tests as per @ebuchman 2017-12-10 19:58:22 -05:00
Emmanuel Odeke
2da5299924 blockchain: less fragile and involved tests for blockstore
With feedback from @ebuchman, to make the tests nicer
and less fragile.
2017-12-10 19:58:22 -05:00
Emmanuel Odeke
83b40b25d6 blockchain: deduplicate store header value tests 2017-12-10 19:57:06 -05:00
Emmanuel Odeke
05f30b3e28 blockchain: updated store docs/comments from review 2017-12-10 19:57:06 -05:00
Ethan Buchman
116a61beb1 blockchain: update store comments 2017-12-10 19:57:06 -05:00
Emmanuel Odeke
8c86bb8024 blockchain: add tests and more docs for BlockStore
Add tests for the store, to the fullest reasonable extent, covering
paths that can be taken by input arguments altering internal behavior,
as well as by mutating content in the DB.
2017-12-10 19:54:34 -05:00
Ethan Buchman
41bb2c2663 Merge pull request #931 from tendermint/785-wal-improvements
[consensus] remove WAL separator
2017-12-10 19:46:02 -05:00
Ethan Buchman
e7b9cd8ee8 Merge branch 'develop' into 785-wal-improvements 2017-12-10 19:45:52 -05:00
Ethan Buchman
3019b9f320 Merge pull request #948 from tendermint/945-transparent-websocket
bring back transparent websocket (Refs #945)
2017-12-10 19:05:32 -05:00
Ethan Buchman
60872d7d7c Merge pull request #949 from tendermint/docs/trust-metric-usage
Docs/trust metric usage
2017-12-10 19:01:16 -05:00
Ethan Buchman
a37c1143ca adr: update 007 trust metric usage 2017-12-10 19:00:44 -05:00
caffix
4e08ee1833 made clarifications based on odeke-em's PR comments 2017-12-10 15:39:43 -05:00
caffix
7563870d11 added the trust metric usage guide 2017-12-10 15:39:43 -05:00
Zach
12c5a57415 deterministic linter (#902)
* linter: address gosimple lints

* linter: make deterministic & a rebase fix

* lint/rpc: fix a gosimple lint

* run linter in CI

* fix rebase mistake

* fix makefile

* ugh

* revert Makefile

* add metalinter to CI

* try this

* linter: last little fix

* need glide

* better

* okayy circle, have it your way

* lints: gosimple

* pr comments
2017-12-10 17:44:22 +00:00
Anton Kaliaev
101680d603 update changelog [ci skip] 2017-12-10 11:35:22 -06:00
Anton Kaliaev
d819d5d324 update changelog [ci skip] 2017-12-10 11:34:08 -06:00
Anton Kaliaev
90cdffa067 fixes after my own review (Refs #945) 2017-12-10 11:29:36 -06:00
Ethan Buchman
080640a171 Merge pull request #936 from tendermint/update-dockerfile-to-0.12.1
Update dockerfile to 0.12.1
2017-12-10 12:15:09 -05:00
Anton Kaliaev
950a64f756 bring back transparent websocket (Refs #945) 2017-12-10 01:18:10 -06:00
Anton Kaliaev
b50cef742d Merge pull request #943 from tendermint/feature/update-vagrantfile
Update Vagrantfile to xenial (16.04 LTS)
2017-12-09 17:03:44 -06:00
Anton Kaliaev
4f50935aa2 Merge pull request #946 from ricardohsd/fix-typo
consensus: Fix typo on ticker.go documentation
2017-12-09 17:03:12 -06:00
caffix
44f62e5e27 built the WaitForStop functionality into the Stop method 2017-12-09 13:25:28 -05:00
Ricardo Domingos
59e89e7664 consensus: Fix typo on ticker.go documentation 2017-12-09 13:14:53 +01:00
caffix
5d464364a8 fixed the racy test and removed all the calls to Sleep 2017-12-08 15:51:18 -05:00
Ethan Frey
2112299586 Cleanup apt-get ala PR comments 2017-12-08 19:20:53 +01:00
Ethan Frey
c771964a40 Add vagrant_test to Makefile for integration tests 2017-12-08 18:36:58 +01:00
Anton Kaliaev
a199ec2813 update docker readme 2017-12-08 11:35:35 -06:00
Anton Kaliaev
a28b3fff49 update Dockerfile to 0.13.0 2017-12-08 11:34:34 -06:00
Ethan Frey
5bcd95f01f Use apt-get/ppa instead of tarballs for golang/docker
Minor cleanup and comments
2017-12-08 18:10:39 +01:00
Ethan Frey
c84494b36b Update Vagrantfile to xenial (16.04 LTS)
Note default username changed from vagrant to ubuntu in the base image.
2017-12-08 18:10:39 +01:00
Anton Kaliaev
7e3a5b7ce8 Merge pull request #942 from tendermint/bugfix/install-stdlib
Remove CGO_ENABLED=0 from make install
2017-12-08 11:03:05 -06:00
Ethan Buchman
fbfd11de2c add @melekes as codeowner 2017-12-08 11:48:05 -05:00
Ethan Frey
9657d183f8 Remove CGO_ENABLED=0 from make install
It was writing to stdlib packages net, x/crypto, etc. and failing when it didn't have write access to them (which it often shouldn't)

Fixes issue #941

Explained in golang issue: golang/go#18981 (fixed in develop/1.10)
2017-12-08 17:03:09 +01:00
Ethan Buchman
b98098b1f0 Merge pull request #932 from tendermint/920-default-moniker-to-host-name
default moniker to the host name
2017-12-07 18:34:38 -05:00
Ethan Buchman
1ae14e5a3d Merge pull request #933 from tendermint/880-node-fails-if-one-of-the-seeds-cannot-be-resolved
tolerate unresolvable seeds
2017-12-07 18:27:58 -05:00
Anton Kaliaev
7a92a3b729 update docker readme 2017-12-07 13:49:29 -06:00
Anton Kaliaev
457c688346 update Dockerfile to point to 0.12.1 2017-12-07 13:48:31 -06:00
Anton Kaliaev
c609b18698 tolerate unresolvable seeds (Refs #880) 2017-12-07 13:17:09 -06:00
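A sketch of what "tolerate" means here: skip seeds whose host names do not resolve instead of refusing to start. The filter function and the seed addresses are made up for illustration:

```go
package main

import (
	"fmt"
	"net"
)

// filterResolvable keeps the seeds whose hosts resolve and logs the rest,
// instead of aborting node startup on the first bad entry.
func filterResolvable(seeds []string) []string {
	ok := make([]string, 0, len(seeds))
	for _, s := range seeds {
		host, _, err := net.SplitHostPort(s)
		if err != nil {
			fmt.Println("bad seed address:", s, err)
			continue
		}
		if _, err := net.LookupHost(host); err != nil {
			fmt.Println("skipping unresolvable seed:", s, err)
			continue
		}
		ok = append(ok, s)
	}
	return ok
}

func main() {
	seeds := []string{"localhost:46656", "no-such-host.invalid:46656"}
	fmt.Println(filterResolvable(seeds))
}
```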
Anton Kaliaev
5ff0bb2100 default moniker to the host name (Refs #920) 2017-12-07 12:49:29 -06:00
Anton Kaliaev
90944bb1a2 be specific about what type we're encoding
to be consistent with Decode, which returns TimedWALMessage
2017-12-07 11:45:50 -06:00
Anton Kaliaev
07571741c5 [consensus] remove WAL separator (Refs #785)
We don't really need a separator unless we have complex structures
(rows, cells like RDBMS have https://www.sqlite.org/fileformat.html).
2017-12-07 11:36:46 -06:00
Ethan Buchman
14ccc8bc4c Merge pull request #930 from tendermint/468-make-consensus-data-deterministic
make consensus/test_data deterministic
2017-12-06 22:28:03 -05:00
Anton Kaliaev
5cb936fa00 fixes after my own review 2017-12-06 18:28:14 -06:00
Anton Kaliaev
c6f025f40e generate WAL on the fly (Refs #468) 2017-12-06 16:01:08 -06:00
Ethan Buchman
a2b92c0745 Merge pull request #927 from tendermint/release-v0.13.0
Release v0.13.0
2017-12-06 03:57:49 -05:00
Ethan Buchman
6884463ba2 update changelog [ci skip] 2017-12-06 03:38:03 -05:00
Ethan Buchman
167d0e82f9 fixes and version bump 2017-12-06 03:33:03 -05:00
Ethan Buchman
42e77de6a3 changelog; minor stuff; update glide 2017-12-06 02:45:38 -05:00
Ethan Buchman
58b4a8395b Merge pull request #917 from tendermint/trust-metric-cleanup
Trust metric cleanup
2017-12-06 02:11:38 -05:00
Ethan Buchman
d0dc04001e rpc: make time human readable. closes #926 2017-12-06 01:25:11 -05:00
Ethan Buchman
e101aa9fc8 fix for legacy gowire 2017-12-06 01:21:14 -05:00
Ethan Buchman
6a7c399c2d Merge pull request #923 from tendermint/enable-homebrew-installation-from-source
Enable homebrew installation from source
2017-12-05 00:50:52 -05:00
Anton Kaliaev
b3e1341e44 use git rev-parse --short HEAD everywhere instead of GitCommit[:8] 2017-12-04 22:16:19 -06:00
Anton Kaliaev
ebdc7ddf20 add missing get_vendor_deps to "make all" 2017-12-04 19:04:15 -06:00
Anton Kaliaev
f30ce8b210 remove GitDescribe - no such variable defined in version package
- add "-w -s" flags to reduce binary size (they remove debug info)
2017-12-04 18:19:11 -06:00
Anton Kaliaev
76cccfaabd get_vendor_deps does not require all the tools
- remove revision cmd
- rename ensure_tools to tools
2017-12-04 18:19:11 -06:00
Anton Kaliaev
440d76647d rename release to test_release 2017-12-04 17:37:32 -06:00
Anton Kaliaev
2cc2fade06 remove -extldflags "-static"
golang builds static libraries by default.
2017-12-04 17:35:42 -06:00
Anton Kaliaev
e50173c7dc merge 2 rules in .editorconfig 2017-12-04 15:01:28 -06:00
Anton Kaliaev
3318cf9aec do not suppose that GOPATH is added to PATH
```
brew install tendermint --HEAD
==> Installing tendermint from tendermint/tendermint
==> Cloning https://github.com/tendermint/tendermint.git
Cloning into '/Users/antonk/Library/Caches/Homebrew/tendermint--git'...
remote: Counting objects: 437, done.
remote: Compressing objects: 100% (412/412), done.
remote: Total 437 (delta 7), reused 129 (delta 2), pack-reused 0
Receiving objects: 100% (437/437), 3.04 MiB | 1006.00 KiB/s, done.
Resolving deltas: 100% (7/7), done.
==> Checking out branch develop
==> make get_vendor_deps
Last 15 lines from /Users/antonk/Library/Logs/Homebrew/tendermint/01.make:
2017-12-04 14:54:54 -0600

make
get_vendor_deps

go get github.com/mitchellh/gox github.com/tcnksm/ghr github.com/Masterminds/glide github.com/alecthomas/gometalinter
make: gometalinter: No such file or directory
make: *** [ensure_tools] Error 1

If reporting this issue please do so at (not Homebrew/brew or Homebrew/core):
https://github.com/tendermint/homebrew-tendermint/issues
```
2017-12-04 15:00:16 -06:00
Ethan Buchman
b37230f6db Merge pull request #918 from tendermint/abci-update
Abci update
2017-12-03 01:36:59 -05:00
Ethan Buchman
1bf57a90df glide: update grpc version 2017-12-02 23:41:12 -05:00
Ethan Buchman
d2db202a2d mempool: assert -> require in test 2017-12-02 23:41:09 -05:00
Ethan Buchman
08857606dc test: fix test/app/counter_test.sh 2017-12-02 23:39:19 -05:00
Ethan Buchman
72e389041c Merge pull request #914 from tendermint/11-response-info-with-invalid-height-crashes-tm
int64 height (Refs #911)
2017-12-02 23:17:41 -05:00
Anton Kaliaev
d41c0b10c8 change delta's type from int to int64 2017-12-02 11:48:24 -06:00
Ethan Buchman
898ae53672 types: fix for broken customtype int in gogo 2017-12-02 12:00:46 -05:00
Ethan Buchman
9e20cfee0e glide 2017-12-02 01:56:40 -05:00
Ethan Buchman
9af8da7aad update for new abci int types 2017-12-02 01:47:55 -05:00
Ethan Buchman
c9be2b89f9 mempool: return error on cached txs 2017-12-02 01:15:11 -05:00
Ethan Buchman
388f66c9b3 types: drop uint64 from protobuf.go 2017-12-02 01:07:17 -05:00
Anton Kaliaev
cd5a5d332f remove comments for uint64 related to possible underflow [ci skip] 2017-12-01 23:30:08 -06:00
Ethan Buchman
cb9a1dbb4f p2p/trust: lock on Copy() 2017-12-01 23:35:17 -05:00
Ethan Buchman
814541f6d9 p2p/trust: split into multiple files and improve function order 2017-12-01 23:35:12 -05:00
Anton Kaliaev
89cbcceac4 error if app returned negative last block height (Fixes #911) 2017-12-01 21:56:08 -06:00
Anton Kaliaev
10f7858453 use rand.Int63n, remove underflow check, remove unnecessary cast 2017-12-01 19:22:18 -06:00
Anton Kaliaev
922af7c405 int64 height
uint64 is considered dangerous. the details will follow in a blog post.
2017-12-01 19:04:53 -06:00
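A tiny example of the underflow hazard that motivates signed int64 heights; this is plain Go arithmetic, not code from the repository:

```go
package main

import "fmt"

func main() {
	// With unsigned heights, a simple "previous height" computation
	// silently wraps around instead of going negative.
	var h uint64 = 0
	fmt.Println(h - 1) // 18446744073709551615

	// With signed heights the same mistake is at least detectable.
	var hs int64 = 0
	if hs-1 < 0 {
		fmt.Println("invalid height") // can be rejected explicitly
	}
}
```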
Ethan Buchman
e9f8e56895 fixes from rebase 2017-12-01 17:25:44 -05:00
Anton Kaliaev
3eb069a50c no need in this hack since we have replay now 2017-12-01 17:17:22 -05:00
Anton Kaliaev
f1fbf995f7 protect ourselves against underflow (Refs #911) 2017-12-01 17:17:22 -05:00
Anton Kaliaev
86af889dfb remove unnecessary casts (Refs #911) 2017-12-01 17:17:22 -05:00
Anton Kaliaev
b3492356e6 uint64 height (Refs #911) 2017-12-01 17:17:22 -05:00
Ethan Buchman
b2489b4318 Merge pull request #912 from bluecollarcoding/fix/docs
docs:clarify py-abci/py-tendermint
2017-12-01 02:57:32 -05:00
Ethan Buchman
a5a7e7ac56 Merge pull request #913 from bluecollarcoding/zramsay-patch-1
codecov: ignore docs files
2017-12-01 02:56:34 -05:00
Ethan Buchman
74cbfc68a1 Merge pull request #835 from tendermint/feature/indexing
Indexing DeliverTx tags
2017-12-01 02:54:30 -05:00
Anton Kaliaev
6423306980 TestIndexAllTags (unit) 2017-11-30 23:02:40 -06:00
Anton Kaliaev
c5b62ce1ee correct abci version 2017-11-30 20:20:35 -06:00
Anton Kaliaev
e538e0e077 config variable to index all tags 2017-11-30 20:02:39 -06:00
Zach
9314e451c8 Update .codecov.yml 2017-11-30 19:01:50 +00:00
Zach Ramsay
3b61e2854a docs: correction, closes #910 2017-11-30 18:27:46 +00:00
Anton Kaliaev
03222d834b update abci dependency 2017-11-30 11:46:28 -06:00
Anton Kaliaev
cb9743e567 dummy app now returns one DeliverTx tag 2017-11-29 20:30:37 -06:00
Anton Kaliaev
66ad366a4f test searching for tx with multiple same tags 2017-11-29 20:04:26 -06:00
Anton Kaliaev
864ad8546e more test cases 2017-11-29 20:04:00 -06:00
Anton Kaliaev
58789c52cd add example for tx_search endpoint 2017-11-29 15:30:12 -06:00
Anton Kaliaev
a762253e24 do not use AddBatch, prefer copying for now 2017-11-29 15:25:12 -06:00
Anton Kaliaev
10d893ee9b update deps 2017-11-29 15:19:45 -06:00
Anton Kaliaev
acbc0717d4 add client methods 2017-11-29 15:19:44 -06:00
Anton Kaliaev
1e19860585 fixes from my own review 2017-11-29 14:24:18 -06:00
Anton Kaliaev
09941b9aa9 fix metalinter warnings 2017-11-29 14:24:18 -06:00
Anton Kaliaev
2a5e8c4a47 add minimal documentation for tx_search RPC method [ci skip] 2017-11-29 14:24:18 -06:00
Anton Kaliaev
686e0eea9f extract indexing goroutine to a separate indexer service 2017-11-29 14:24:18 -06:00
Anton Kaliaev
91f2184003 fixes after bucky's review 2017-11-29 14:24:18 -06:00
Anton Kaliaev
3e577ccf4f add tx_search RPC endpoint 2017-11-29 14:24:18 -06:00
Anton Kaliaev
ea0b205455 searching transaction results 2017-11-29 14:24:18 -06:00
Anton Kaliaev
16cf7a5e0a use a switch when validating tags 2017-11-29 14:23:44 -06:00
Anton Kaliaev
56abea7427 rename tm.events.type to just tm.event 2017-11-29 14:23:44 -06:00
Anton Kaliaev
461a143a2b remove tx.hash tag from config because it's mandatory 2017-11-29 14:23:44 -06:00
Anton Kaliaev
f65e357d2b adapt Tendermint to new abci.Client interface
which was introduced in https://github.com/tendermint/abci/pull/130
2017-11-29 14:23:44 -06:00
Anton Kaliaev
4a31532897 remove unreachable code 2017-11-29 14:23:44 -06:00
Anton Kaliaev
29cd1a1b8f rewrite indexer to be a listener of eventBus 2017-11-29 14:23:44 -06:00
Anton Kaliaev
cd4be1f308 add tx_index config 2017-11-29 14:23:43 -06:00
Anton Kaliaev
acae38ab9e validate tags 2017-11-29 14:23:43 -06:00
Anton Kaliaev
a52cdbfe43 extract tags from DeliverTx/Result
and send them along with the predefined ones
2017-11-29 14:23:43 -06:00
Ethan Buchman
f233cde9a9 Merge pull request #807 from tendermint/45-change-common-start-signature
service#Start, service#Stop signatures were changed
2017-11-29 20:23:07 +00:00
Anton Kaliaev
ceb8ba2e15 comment out gas linter for now 2017-11-29 13:18:34 -06:00
Ethan Buchman
aab54011b3 docs/install: add note about putting GOPATH/bin on PATH 2017-11-29 18:05:51 +00:00
Anton Kaliaev
691e266bef ignore ErrAlreadyStarted when starting addrbook in PEXReactor 2017-11-29 10:53:30 -06:00
Anton Kaliaev
c6b2334fa3 check for error when stopping WSClient 2017-11-29 10:38:58 -06:00
Anton Kaliaev
69b5da766c service#Start, service#Stop signatures were changed
See https://github.com/tendermint/tmlibs/issues/45
2017-11-29 10:38:58 -06:00
Ethan Buchman
a393cf4109 Merge pull request #904 from tendermint/fix-atomic-broadcast-test
test/p2p/atomic_broadcast: wait for node heights before checking app hash
2017-11-28 08:21:15 +00:00
Ethan Buchman
1b120466ee Merge pull request #892 from tendermint/mempool-Stop-method
mempool: implement Mempool.CloseWAL
2017-11-28 06:58:01 +00:00
Ethan Buchman
c8c533cc23 test/p2p/atomic_broadcast: wait for node heights before checking app hash 2017-11-28 05:51:32 +00:00
Ethan Buchman
1e8deacf2e Merge branch 'master' into develop 2017-11-28 04:39:41 +00:00
Emmanuel Odeke
3595b5931a mempool: implement Mempool.CloseWAL
Fixes https://github.com/tendermint/tendermint/issues/890

Add a CloseWAL method to Mempool to close the underlying WAL file
and then discard it so that further writes to it will have no effect.
2017-11-27 21:37:25 -07:00
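A hedged sketch of the shape such a CloseWAL method can take: close the file under a lock and drop the handle so later writes become no-ops. The struct here is a toy, not the real mempool:

```go
package main

import (
	"fmt"
	"os"
	"sync"
)

// Mempool is a stripped-down stand-in; only the WAL handle matters here.
type Mempool struct {
	mtx sync.Mutex
	wal *os.File // nil once the WAL has been closed and discarded
}

// CloseWAL closes the underlying WAL file and discards the handle so that
// further writes have no effect. It reports whether a WAL was actually open.
func (m *Mempool) CloseWAL() bool {
	m.mtx.Lock()
	defer m.mtx.Unlock()
	if m.wal == nil {
		return false
	}
	if err := m.wal.Close(); err != nil {
		fmt.Println("error closing WAL:", err)
	}
	m.wal = nil
	return true
}

func main() {
	f, _ := os.CreateTemp("", "mempool_wal")
	m := &Mempool{wal: f}
	fmt.Println(m.CloseWAL()) // true: WAL closed and discarded
	fmt.Println(m.CloseWAL()) // false: already gone, write path is a no-op
}
```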
Ethan Buchman
c7f923c5b0 Merge pull request #903 from tendermint/hotfix-v0.12.1
Upgrade tmlibs dependency to build on Windows
2017-11-28 04:36:52 +00:00
Adrian Brink
25f9eb782b Upgrade tmlibs dependency to build on Windows 2017-11-28 04:04:58 +00:00
Ethan Buchman
2009c3d4f1 Merge pull request #884 from tendermint/wal-benchmark-decode
consensus/WAL: benchmark WALDecode across data sizes
2017-11-28 03:39:55 +00:00
Zach
c3632bc54a Merge pull request #703 from tendermint/fix-linting
add metalinter to CI and include fixes
2017-11-28 00:02:52 +00:00
Ethan Buchman
d9c87a21a6 run metalinter in make test and run_test.sh 2017-11-27 22:39:12 +00:00
Ethan Buchman
2e76b23c9a rpc: fix tests 2017-11-27 22:39:12 +00:00
Ethan Buchman
9529f12c28 more linting 2017-11-27 22:39:12 +00:00
Ethan Buchman
55b81cc1a1 address linting FIXMEs 2017-11-27 22:39:12 +00:00
Zach Ramsay
c4caad7720 lint madness 2017-11-27 22:39:12 +00:00
Zach Ramsay
2563b4fc92 lint fixes 2017-11-27 22:39:12 +00:00
Zach Ramsay
6f3c05545d fix new linting errors 2017-11-27 22:39:12 +00:00
Zach Ramsay
414a8cb0ba pass tests! 2017-11-27 22:39:12 +00:00
Zach Ramsay
c84c7250ba linting: few more fixes 2017-11-27 22:39:12 +00:00
Zach Ramsay
c7b6faf96a bad goimports 2017-11-27 22:39:12 +00:00
Zach Ramsay
fe37afc0d7 do i need this? 2017-11-27 22:39:12 +00:00
Zach Ramsay
478a10aa41 Write doesn't need error checked 2017-11-27 22:39:12 +00:00
Zach Ramsay
d033470817 lil fixes 2017-11-27 22:39:12 +00:00
Zach Ramsay
7ad8a8ab55 Tests almost passing 2017-11-27 22:39:12 +00:00
Zach Ramsay
a15c7f221d linting: moar fixes 2017-11-27 22:39:11 +00:00
Zach Ramsay
d7cb291fb2 errcheck; sort some stuff out 2017-11-27 22:39:11 +00:00
Zach Ramsay
563faa98de address comments, pr #643 2017-11-27 22:39:11 +00:00
Zach Ramsay
9c62ed4595 run linting first for tests 2017-11-27 22:39:11 +00:00
Zach Ramsay
fe694e1fe1 ... 2017-11-27 22:39:11 +00:00
Zach Ramsay
bc2aa79f9a linter: sort through each kind and address small fixes 2017-11-27 22:39:11 +00:00
Zach Ramsay
48aca642e3 linter: address deadcode, implement incremental lint testing 2017-11-27 22:39:11 +00:00
Zach Ramsay
15651a931e linting errors: tackle p2p package 2017-11-27 22:39:11 +00:00
Zach Ramsay
68e7983c70 linting errors: afew more 2017-11-27 22:39:11 +00:00
Zach Ramsay
8f0237610e linting errors: clean it all up 2017-11-27 22:39:11 +00:00
Zach Ramsay
b75d4f73e7 errcheck: PR comment fixes 2017-11-27 22:39:11 +00:00
Zach Ramsay
b3c5933a23 state: return to-be-used function 2017-11-27 22:39:11 +00:00
Zach Ramsay
331857c9e6 linting: apply errcheck part2 2017-11-27 22:39:11 +00:00
Zach Ramsay
57ea4987f7 linting: apply errcheck part1 2017-11-27 22:39:11 +00:00
Zach Ramsay
d95ba866b8 lint: apply deadcode/unused 2017-11-27 22:39:11 +00:00
Zach Ramsay
1721543e5c linting: apply misspell 2017-11-27 22:39:11 +00:00
Zach Ramsay
46ccbcbff6 linting: apply 'gofmt -s -w' throughout 2017-11-27 22:39:11 +00:00
Zach Ramsay
fc33576bac linting: replace megacheck with metalinter 2017-11-27 22:39:11 +00:00
Ethan Buchman
366bc00b70 Merge pull request #895 from tendermint/894-vagrant-instructions
add Vagrant instructions to CONTRIBUTING guides (Refs #894) [ci skip]
2017-11-27 20:54:32 +00:00
Ethan Buchman
d5cd3a111f Merge pull request #899 from tendermint/feature/lite-client
Rename certifier to lite (#784) and add godocs
2017-11-27 17:21:15 +00:00
Ethan Buchman
94e400a5d6 Merge pull request #896 from tendermint/normalize-priority-and-id
normalize priority and id and remove pointers in ChannelDescriptor
2017-11-27 17:05:38 +00:00
Ethan Buchman
7dd249f754 Merge pull request #882 from caffix/develop
fixed race condition reported in issue #881
2017-11-27 16:50:57 +00:00
Adrian Brink
12ae1bb5e5 Address comments 2017-11-27 16:23:56 +01:00
Adrian Brink
248f176c1f Rename light to lite 2017-11-27 16:19:00 +01:00
Adrian Brink
1871a7c3d0 Rename certifier to light (#784) and add godocs
The certifier package is renamed to light. This is more descriptive,
especially in the wider blockchain context. Moreover, we are building
light clients using the light package.

This also adds godocs to all exported functions.

Furthermore it introduces some extra error handling. I've added one TODO
where I would like someone else's opinion on how to handle the error.
2017-11-27 16:18:27 +01:00
Petabyte Storage
59b3dcb5cf normalize priority and id and remove pointers in ChannelDescriptor 2017-11-25 22:01:23 -08:00
Anton Kaliaev
fb87590c82 add Vagrant instructions to CONTRIBUTING guides (Refs #894) [ci skip] 2017-11-24 16:42:46 -06:00
Ethan Buchman
38c4de3fc7 Merge pull request #891 from tendermint/fix-vagrantfile
go requires Git (Fixes #879)
2017-11-24 22:07:29 +00:00
Anton Kaliaev
ae67408d13 go requires Git (Fixes #879) 2017-11-23 16:55:57 -06:00
Emmanuel Odeke
42da8cd297 consensus/WAL: benchmark WALDecode across data sizes 2017-11-23 12:43:11 -07:00
caffix
887cb6d0cd added public methods to handle locking within the trust metric 2017-11-22 23:42:38 -05:00
caffix
aeaf2d0b20 Merge branch 'develop' of https://github.com/tendermint/tendermint into develop 2017-11-22 23:11:01 -05:00
Ethan Buchman
932e472986 Merge pull request #885 from AFDudley/patch-1
Update getting-started.rst to fix broken link
2017-11-22 20:29:30 +00:00
Ethan Buchman
e845987503 p2p: disable trustmetric test while being fixed 2017-11-22 20:20:53 +00:00
Ethan Buchman
531b1197a7 Merge pull request #843 from tendermint/refactor-mconnection-with-go-wire-1
WIP: begin parallel refactoring: go-wire Write methods and MConnection
2017-11-22 19:34:22 +00:00
Ethan Buchman
52ad6242f4 Merge pull request #888 from tendermint/rm-dead-file
remove unused file
2017-11-22 18:47:30 +00:00
Zach Ramsay
969b34057b remove unused file 2017-11-22 17:22:53 +00:00
Petabyte Storage
e110f70b5c update glide.yaml versions with go-wire at develop branch 2017-11-22 07:34:10 -08:00
Ethan Buchman
e997db7a23 Merge pull request #859 from tendermint/fix/addrbook
Fix/addrbook
2017-11-21 15:31:48 +00:00
Ethan Buchman
c4b695f78d minor fixes from review 2017-11-21 15:30:19 +00:00
Ethan Buchman
75463b8331 Merge pull request #877 from tendermint/p2p-switch-DialSeeds-undeterministically
p2p: make Switch.DialSeeds use a new PRNG per call
2017-11-21 15:24:29 +00:00
A. F. Dudley
882c25f292 Update getting-started.rst to fix broken link
fixes broken link to introduction.html
2017-11-21 10:11:48 -05:00
caffix
9c8100043e made changes to address suggestions from the PR comments 2017-11-20 19:15:11 -05:00
Emmanuel Odeke
031e10133c p2p: make Switch.DialSeeds use a new PRNG per call
Fixes https://github.com/tendermint/tendermint/issues/875

Ensure that every DialSeeds call uses a new PRNG seeded from
tendermint/tmlibs/common.RandInt which internally uses
crypto/rand to seed its source.
2017-11-20 15:28:42 -07:00
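A minimal sketch of the seeding idea: build a fresh math/rand source from crypto/rand on every call, so each DialSeeds invocation shuffles the seed list independently. The helper name and addresses are assumptions:

```go
package main

import (
	crand "crypto/rand"
	"encoding/binary"
	"fmt"
	mrand "math/rand"
	"time"
)

// newPRNG returns a math/rand generator seeded from crypto/rand, falling
// back to the clock if the system randomness source is unavailable.
func newPRNG() *mrand.Rand {
	var seed int64
	if err := binary.Read(crand.Reader, binary.BigEndian, &seed); err != nil {
		seed = time.Now().UnixNano()
	}
	return mrand.New(mrand.NewSource(seed))
}

func main() {
	seeds := []string{"seed1:46656", "seed2:46656", "seed3:46656", "seed4:46656"}
	r := newPRNG()
	for _, i := range r.Perm(len(seeds)) {
		fmt.Println("dialing", seeds[i]) // a fresh random order per call
	}
}
```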
caffix
4087326f45 fixed race condition reported in issue #881 2017-11-20 16:47:05 -05:00
Ethan Buchman
f9bc22ec6a p2p: fix comment on addPeer (thanks @odeke-em) 2017-11-20 21:36:01 +00:00
Ethan Buchman
26cd99c66e p2p: fix non-routable addr in test 2017-11-20 19:56:44 +00:00
Ethan Buchman
9334aad906 Merge pull request #871 from tendermint/fix/docs-860
docs: fix links
2017-11-20 19:47:18 +00:00
Ethan Buchman
c695c53259 Merge pull request #876 from tendermint/p2p-extract-key-lex-check-to-variable-for-clarity
p2p: use bytes.Equal for key comparison
2017-11-20 19:33:58 +00:00
Martin Dyring-Andersen
5c4397ab30 Run tests on AppVeyor 2017-11-20 13:23:28 +01:00
Emmanuel Odeke
5c34d087d9 p2p: use bytes.Equal for key comparison
Updates https://github.com/tendermint/tendermint/issues/850

My security alarms falsely blared when I skimmed the code and noticed
keys being compared with `==`. Without the proper context
I mistakenly filed an issue, yet the purpose of that
comparison was only to check whether the local ephemeral public key
was the lexicographically smaller of the two.

Anyway, let's use the proper bytes.Equal check, to save future labor.
2017-11-18 23:34:27 -07:00
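For reference, a few lines showing the point: byte slices cannot be compared with ==, and bytes.Equal / bytes.Compare make the equality and ordering checks explicit. The key values are made up:

```go
package main

import (
	"bytes"
	"fmt"
)

func main() {
	locEphPub := []byte{0x01, 0x02}
	remEphPub := []byte{0x01, 0x03}

	// Equality of contents, not of slice headers.
	fmt.Println(bytes.Equal(locEphPub, remEphPub)) // false

	// Lexicographic ordering, e.g. "is the local key the smaller one?"
	fmt.Println(bytes.Compare(locEphPub, remEphPub) < 0) // true
}
```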
Zach Ramsay
559bd169bd docs: fix links, closes #860 2017-11-17 14:03:43 +00:00
Ethan Buchman
f8c969f5a5 Merge pull request #868 from tendermint/small-things
node: clean makeNodeInfo
2017-11-17 01:22:45 +00:00
Ethan Buchman
c5253c7a31 node: clean makeNodeInfo 2017-11-17 01:22:38 +00:00
Ethan Buchman
53f15fde07 update changelog 2017-11-17 00:04:03 +00:00
Ethan Buchman
814f9cb566 Merge pull request #856 from tendermint/small-things
Small things
2017-11-17 00:03:57 +00:00
Ethan Buchman
af0db599b0 minor fixes 2017-11-16 23:57:00 +00:00
Ethan Buchman
104368bd84 Merge pull request #787 from caffix/develop
Initial Trust Metric Implementation
2017-11-16 23:51:53 +00:00
Ethan Buchman
99461a178e Merge pull request #857 from gguoss/patch-1
Failed to compile comment code
2017-11-16 18:35:18 +00:00
Ethan Buchman
feb3230160 some comments 2017-11-16 04:43:07 +00:00
Ethan Buchman
be1a16a601 p2p/pex: simplify ensurePeers 2017-11-16 04:30:38 +00:00
Ethan Buchman
8e044b0e6d p2p/addrbook: some comments 2017-11-16 04:30:23 +00:00
Ethan Buchman
40e93a5f9e p2p/addrbook: fix addToOldBucket 2017-11-16 04:08:46 +00:00
Ethan Buchman
435eb6e2b3 p2p/addrbook: add non-terminating test 2017-11-16 04:04:54 +00:00
Ethan Buchman
8c88cc017a p2p/addrbook: addAddress returns error. more defensive PickAddress 2017-11-16 03:59:54 +00:00
Ethan Buchman
ed95cc160a p2p/addrbook: simplify PickAddress 2017-11-16 02:31:47 +00:00
Ethan Buchman
2f067a3f65 p2p/addrbook: addrNew/Old -> bucketsNew/Old 2017-11-16 02:28:11 +00:00
Ethan Buchman
498a82784d p2p/addrbook: comments 2017-11-16 02:25:00 +00:00
Guanghua Guo
b5708825a7 Failed to compile comment code 2017-11-16 09:45:58 +08:00
caffix
a724ffab25 added changes based on PR comments to the proposal 2017-11-15 17:59:48 -05:00
Ethan Buchman
de34ef91d7 Merge pull request #854 from tendermint/add-go-version-to-readme
add Go version badge to README [ci skip]
2017-11-15 20:49:36 +00:00
Anton Kaliaev
248a9383a0 add Go version badge to README [ci skip] 2017-11-15 10:22:02 -06:00
Ethan Buchman
78b4ad291c Merge pull request #853 from tendermint/p2p-netPipe-own-file
p2p: netPipe for <Go1.10 in own file with own build tag
2017-11-15 16:09:48 +00:00
Emmanuel Odeke
3f9dff9aac p2p: netPipe for <Go1.10 in own file with own build tag
Follow up of 283544c7f3
putting <Go1.10 implementation of netPipe in its own
file and protect it with its separate build tag.
2017-11-14 22:23:48 -07:00
Ethan Buchman
443854222c Merge pull request #852 from tendermint/net-conn-SetDeadline-wraps
p2p: use fake net.Pipe since only >=Go1.10 implements SetDeadline
2017-11-15 05:09:39 +00:00
Emmanuel Odeke
283544c7f3 p2p: use fake net.Pipe since only >=Go1.10 implements SetDeadline
Fixes https://github.com/tendermint/tendermint/issues/851

Go1.9 and below's net.Pipe did not implement the SetDeadline
method so after commit
e2dd8ca946
this problem was exposed since now we check for errors.

To counter this problem, implement a simple composition for
net.Conn that always returns nil on SetDeadline instead of
tripping out.

Added build tags so that anyone using go1.10 when it is released
will be able to automatically use net.Pipe's net.Conns
2017-11-14 22:03:23 -07:00
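A sketch of the composition described above: embed the net.Conn and override the deadline methods with no-ops, so error checks pass on connections that do not support deadlines. The type name is illustrative, not the actual helper in the repository:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// noDeadlineConn wraps a net.Conn whose SetDeadline is unsupported
// (e.g. pre-Go1.10 net.Pipe) and turns those calls into no-ops.
type noDeadlineConn struct{ net.Conn }

func (c noDeadlineConn) SetDeadline(t time.Time) error      { return nil }
func (c noDeadlineConn) SetReadDeadline(t time.Time) error  { return nil }
func (c noDeadlineConn) SetWriteDeadline(t time.Time) error { return nil }

func main() {
	a, b := net.Pipe()
	defer a.Close()
	defer b.Close()

	conn := noDeadlineConn{Conn: a}
	// The error check now always passes instead of tripping out.
	fmt.Println(conn.SetDeadline(time.Now().Add(time.Second)))
}
```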
Ethan Buchman
49faa79bdc Merge pull request #848 from tendermint/p2p-catch-conn.SetDeadline-errors
p2p: peer should respect errors from SetDeadline
2017-11-15 03:34:58 +00:00
Ethan Buchman
7670049a31 Merge pull request #845 from tendermint/fix/826-wait-for-rpc
rpc: wait for rpc servers to be available in tests
2017-11-15 03:29:05 +00:00
Ethan Buchman
0cd642bca7 Merge pull request #849 from tendermint/846-fix-TestFullRound1-race
fix TestFullRound1 race (Refs #846)
2017-11-15 03:27:59 +00:00
Anton Kaliaev
fe3c92ecce unescape $NODE_FLAGS (see comment) 2017-11-14 20:56:39 -06:00
Ethan Buchman
a969e24177 crank context timeouts 2017-11-15 01:42:15 +00:00
Emmanuel Odeke
7b0fa6c889 p2p: peer should respect errors from SetDeadline
Noticed while auditing the code that we aren't respecting
(*net.Conn) SetDeadline errors, which are returned after
a connection has been killed while it is simultaneously
being used.

For example given program, without SetDeadline error checks
```go
package main

import (
  "log"
  "net"
  "time"
)

func main() {
  conn, err := net.Dial("tcp", "tendermint.com:443")
  if err != nil {
    log.Fatal(err)
  }
  go func() {
    <-time.After(400 * time.Millisecond)
    conn.Close()
  }()
  for i := 0; i < 5; i++ {
    if err := conn.SetDeadline(time.Now().Add(time.Duration(10 * time.Second))); err != nil {
      log.Fatalf("set deadline #%d, err: %v", i, err)
    }
    log.Printf("Successfully set deadline #%d", i)
    <-time.After(150 * time.Millisecond)
  }
}
```

erroneously gives
```shell
2017/11/14 17:46:28 Successfully set deadline #0
2017/11/14 17:46:29 Successfully set deadline #1
2017/11/14 17:46:29 Successfully set deadline #2
2017/11/14 17:46:29 Successfully set deadline #3
2017/11/14 17:46:29 Successfully set deadline #4
```

However, if we properly fix it to respect that error with
```diff
--- wild.go 2017-11-14 17:44:38.000000000 -0700
+++ main.go 2017-11-14 17:45:40.000000000 -0700
@@ -16,7 +16,9 @@
    conn.Close()
  }()
  for i := 0; i < 5; i++ {
-   conn.SetDeadline(time.Now().Add(time.Duration(10 * time.Second)))
+   if err := conn.SetDeadline(time.Now().Add(time.Duration(10 *
time.Second))); err != nil {
+     log.Fatalf("set deadline #%d, err: %v", i, err)
+   }
    log.Printf("Successfully set deadline #%d", i)
    <-time.After(150 * time.Millisecond)
  }
```

properly catches any problems and gives
```shell
$ go run main.go
2017/11/14 17:43:44 Successfully set deadline #0
2017/11/14 17:43:45 Successfully set deadline #1
2017/11/14 17:43:45 Successfully set deadline #2
2017/11/14 17:43:45 set deadline #3, err: set tcp 10.182.253.51:57395:
use of closed network connection
exit status 1
```
2017-11-14 18:01:51 -07:00
Anton Kaliaev
fa60d8120e fix TestFullRound1 race (Refs #846)
```
==================
WARNING: DATA RACE
Write at 0x00c42d7605f0 by goroutine 844:
  github.com/tendermint/tendermint/consensus.(*ConsensusState).updateToState()
      /home/vagrant/go/src/github.com/tendermint/tendermint/consensus/state.go:465 +0x59e
I[11-14|22:37:28.781] Added to prevote                             vote="Vote{0:646753DCE124 1/02/1(Prevote) E9B19636DCDB {/CAD5FA805E8C.../}}" prevotes="VoteSet{H:1 R:2 T:1 +2/3:<nil> BA{2:X_} map[]}"
  github.com/tendermint/tendermint/consensus.(*ConsensusState).finalizeCommit()
      /home/vagrant/go/src/github.com/tendermint/tendermint/consensus/state.go:1229 +0x16a9
  github.com/tendermint/tendermint/consensus.(*ConsensusState).tryFinalizeCommit()
      /home/vagrant/go/src/github.com/tendermint/tendermint/consensus/state.go:1135 +0x721
  github.com/tendermint/tendermint/consensus.(*ConsensusState).enterCommit.func1()
      /home/vagrant/go/src/github.com/tendermint/tendermint/consensus/state.go:1087 +0x153
  github.com/tendermint/tendermint/consensus.(*ConsensusState).enterCommit()
      /home/vagrant/go/src/github.com/tendermint/tendermint/consensus/state.go:1114 +0xa34
  github.com/tendermint/tendermint/consensus.(*ConsensusState).addVote()
      /home/vagrant/go/src/github.com/tendermint/tendermint/consensus/state.go:1423 +0xdd6
  github.com/tendermint/tendermint/consensus.(*ConsensusState).tryAddVote()
      /home/vagrant/go/src/github.com/tendermint/tendermint/consensus/state.go:1317 +0x77
  github.com/tendermint/tendermint/consensus.(*ConsensusState).handleMsg()
      /home/vagrant/go/src/github.com/tendermint/tendermint/consensus/state.go:565 +0x7a9
  github.com/tendermint/tendermint/consensus.(*ConsensusState).receiveRoutine()
      /home/vagrant/go/src/github.com/tendermint/tendermint/consensus/state.go:523 +0x6d2

Previous read at 0x00c42d7605f0 by goroutine 654:
  github.com/tendermint/tendermint/consensus.validatePrevote()
      /home/vagrant/go/src/github.com/tendermint/tendermint/consensus/common_test.go:149 +0x57
  github.com/tendermint/tendermint/consensus.TestFullRound1()
      /home/vagrant/go/src/github.com/tendermint/tendermint/consensus/state_test.go:256 +0x3c5
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:746 +0x16c

Goroutine 844 (running) created at:
  github.com/tendermint/tendermint/consensus.(*ConsensusState).startRoutines()
      /home/vagrant/go/src/github.com/tendermint/tendermint/consensus/state.go:258 +0x8c
  github.com/tendermint/tendermint/consensus.startTestRound()
      /home/vagrant/go/src/github.com/tendermint/tendermint/consensus/common_test.go:118 +0x63
  github.com/tendermint/tendermint/consensus.TestFullRound1()
      /home/vagrant/go/src/github.com/tendermint/tendermint/consensus/state_test.go:247 +0x1fb
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:746 +0x16c

Goroutine 654 (running) created at:
  testing.(*T).Run()
      /usr/local/go/src/testing/testing.go:789 +0x568
  testing.runTests.func1()
      /usr/local/go/src/testing/testing.go:1004 +0xa7
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:746 +0x16c
  testing.runTests()
      /usr/local/go/src/testing/testing.go:1002 +0x521
  testing.(*M).Run()
      /usr/local/go/src/testing/testing.go:921 +0x206
  main.main()
      github.com/tendermint/tendermint/consensus/_test/_testmain.go:106 +0x1d3
==================
```
2017-11-14 17:41:30 -06:00
caffix
8b7649b90c enhancements made in response to PR full review comments 2017-11-14 18:26:06 -05:00
caffix
687834c99e added initial trust metric test routines 2017-11-14 18:26:06 -05:00
caffix
54c25ccbf5 integrated trust metric store as per PR comments 2017-11-14 18:26:06 -05:00
caffix
e160a6198c added initial trust metric design doc and code 2017-11-14 18:26:06 -05:00
Ethan Buchman
e69d36d54f some more robust sleeps 2017-11-14 22:31:23 +00:00
Ethan Buchman
844c43e044 use stdlib context 2017-11-14 22:30:00 +00:00
Ethan Buchman
194712fd3b rpc: wait for rpc servers to be available in tests 2017-11-14 21:51:49 +00:00
Ethan Buchman
30f675aafa Merge pull request #839 from tendermint/bugfix/pubsub-failures
Fix nondeterministic tests failures related to pubsub
2017-11-14 18:13:47 +00:00
Ethan Buchman
695266e907 Merge pull request #844 from tendermint/bunch-up-p2p.AddrBook-wg-calls
p2p: comment on the wg.Add before go saveRoutine()
2017-11-14 17:18:27 +00:00
Ethan Buchman
3db44dacae Merge pull request #840 from tendermint/fix/tests
Fix/tests
2017-11-14 15:48:17 +00:00
Emmanuel Odeke
62c1bc0a20 p2p: comment on the wg.Add before go saveRoutine()
Just noticed while auditing the code in p2p/addrbook.go that
there is a wg.Add(1) with no subsequent defer.
@jaekwon and I had a discussion offline and we agreed to
comment on why the code is that way and why
we shouldn't move the wg.Add(1) into .saveRoutine(): if
go a.saveRoutine() isn't started before anyone invokes
a.Wait(), we'd have a race against a.saveRoutine().
2017-11-13 18:14:58 -07:00
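A small illustration of the ordering argument, with toy types rather than the real addrbook: wg.Add(1) must run before the goroutine is launched, otherwise a caller could reach Wait() before Add executed and return too early.

```go
package main

import (
	"fmt"
	"sync"
)

type addrBook struct{ wg sync.WaitGroup }

func (a *addrBook) saveRoutine(done chan struct{}) {
	defer a.wg.Done()
	<-done // pretend to flush the address book to disk
}

func main() {
	a := &addrBook{}
	done := make(chan struct{})

	// Add must happen here, before the goroutine is spawned. If it lived
	// inside saveRoutine, Wait() could observe a zero counter before the
	// goroutine had a chance to run Add, and return prematurely.
	a.wg.Add(1)
	go a.saveRoutine(done)

	close(done)
	a.wg.Wait()
	fmt.Println("save routine finished before shutdown continued")
}
```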
Petabyte Storage
3863885c71 WIP: begin parallel refactoring with go-wire Write methods and MConnection 2017-11-12 22:11:15 -08:00
Ethan Buchman
238e2b72ee Merge pull request #834 from tendermint/829-enable-logs-by-default
Enable logs by default
2017-11-12 07:06:14 +00:00
Ethan Buchman
a65ab3b0e0 Merge pull request #838 from tendermint/docker
docker update || 0.12.0
2017-11-12 06:51:01 +00:00
Ethan Buchman
aba8a8f4fc consensus: crank timeout in timeoutWaitGroup 2017-11-12 06:41:15 +00:00
Ethan Buchman
0448c2b437 consensus: fix LastCommit log 2017-11-12 06:40:27 +00:00
Ethan Buchman
0ada0cf525 certifiers: test uses WaitForHeight 2017-11-12 00:43:16 +00:00
Anton Kaliaev
7fa12662c4 check whatever we can read from the channel
```
panic: interface conversion: interface {} is nil, not types.TMEventData

goroutine 7690 [running]:
github.com/tendermint/tendermint/consensus.waitForAndValidateBlock.func1(0xc427727620, 0x3)
        /go/src/github.com/tendermint/tendermint/consensus/reactor_test.go:292 +0x62b
created by github.com/tendermint/tendermint/consensus.timeoutWaitGroup
        /go/src/github.com/tendermint/tendermint/consensus/reactor_test.go:349 +0xa4
exit status 2
FAIL    github.com/tendermint/tendermint/consensus      38.614s

```
2017-11-10 18:16:31 -05:00
Anton Kaliaev
bc9c4e8dee update readme [ci skip] 2017-11-10 15:38:32 -05:00
Anton Kaliaev
8004af2519 update docker readme 2017-11-10 15:17:13 -05:00
Anton Kaliaev
21e87ebc11 update Go version to 1.9.2 2017-11-10 15:10:52 -05:00
Anton Kaliaev
70d8afa6e9 update Dockerfile 2017-11-10 15:09:38 -05:00
Ethan Buchman
847f865438 Merge pull request #836 from tendermint/fix/tests
consensus: make mempool_test deterministic
2017-11-10 05:08:54 +00:00
Ethan Buchman
2cda777900 consensus: make mempool_test deterministic 2017-11-09 23:54:02 +00:00
Anton Kaliaev
432a7276e2 [test_integrations] enable logs from peers by default (Refs #829) 2017-11-09 15:19:49 -05:00
Anton Kaliaev
533f7c45eb fix bash linter warnings for atomic_broadcast integration test 2017-11-09 14:58:16 -05:00
Anton Kaliaev
a1cdc2b68a set logger for peer's MConnection 2017-11-09 14:57:40 -05:00
Ethan Buchman
9c4d533695 Merge pull request #833 from tendermint/fix/consensus-tests
consensus: fix for initializing block parts during catchup
2017-11-09 19:04:21 +00:00
Anton Kaliaev
ad03491ee6 remove duplicated key 2017-11-09 13:37:29 -05:00
Ethan Buchman
4b9dfc8990 consensus: fix for initializing block parts during catchup 2017-11-09 18:14:41 +00:00
Ethan Buchman
a46f64cd1e Merge pull request #824 from tendermint/bugfix/node_test
rewrite node test to use new pubsub
2017-11-09 01:35:35 +00:00
Anton Kaliaev
b1e7163689 rewrite node test to use new pubsub 2017-11-08 13:12:48 -05:00
Ethan Buchman
c931279960 p2p: some fixes re @odeke-em issues #813,#816,#817 2017-11-08 17:54:29 +00:00
Ethan Buchman
12b25fdf6e blockchain: add comment in AddPeer. closes #666 2017-11-08 02:42:27 +00:00
Ethan Buchman
c0e2649ed6 Merge pull request #788 from tendermint/feature/548-indexing-tags
new pubsub package
2017-11-08 01:02:48 +00:00
Ethan Buchman
593c127257 rpc/lib/types: RPCResponse.Result is not a pointer 2017-11-08 01:00:15 +00:00
Ethan Buchman
9f6a09277e Merge pull request #812 from tendermint/808-make-connected-switches
MakeConnectedSwitches: connect first switch to others
2017-11-08 00:54:23 +00:00
Ethan Buchman
dd47884661 Merge pull request #820 from tendermint/790-use-tickers-instead-of-time-Sleep
prefer tickers to time.Sleep
2017-11-08 00:53:42 +00:00
Anton Kaliaev
a01c226dc4 wsConnection: call onDisconnect 2017-11-07 19:22:01 -05:00
Ethan Buchman
47f5e37205 copy RoundState for event 2017-11-07 23:57:23 +00:00
Petabyte Storage
51c9211cf4 add test for MConnection TrySend and Send 2017-11-07 23:35:25 +00:00
Anton Kaliaev
7869e541f6 change MakeConnectedSwitches to not connect to itself
and a test for it
2017-11-07 18:33:00 -05:00
Anton Kaliaev
e0daca5693 fixes from Bucky's review 2017-11-07 18:20:24 -05:00
Ethan Buchman
e8e512f1fa Merge pull request #815 from tendermint/p2p-readme-tests
P2P: readme and tests
2017-11-07 23:01:15 +00:00
Ethan Buchman
37ce171061 p2p/connetion: remove panics, test error cases 2017-11-07 23:00:52 +00:00
Ethan Buchman
e01986e2b3 p2p: update readme, some minor things 2017-11-07 23:00:49 +00:00
Ethan Buchman
433416fef8 Merge pull request #818 from tendermint/fix/810-stuck-trailing-node
consensus: ensure prs.ProposalBlockParts is initialized. fixes #810
2017-11-07 21:52:08 +00:00
Anton Kaliaev
2d4ad02356 prefer tickers to time.Sleep (Refs #790) 2017-11-07 15:38:25 -05:00
Ethan Buchman
3b81d3fea4 consensus: ensure prs.ProposalBlockParts is initialized. fixes #810 2017-11-07 17:14:40 +00:00
Anton Kaliaev
e785697a64 connect first switch to others (Refs #808) 2017-11-06 23:43:40 -05:00
Anton Kaliaev
4ffe9304ba unsubscribe from all subscriptions on WS disconnect 2017-11-02 14:00:18 -05:00
Anton Kaliaev
b1eec3a5d3 remove test_data/empty_block and test_data/small_blockN 2017-11-02 13:20:14 -05:00
Anton Kaliaev
fcdd30b2d3 fixes from Bucky's review 2 2017-11-02 13:20:05 -05:00
Ethan Buchman
ec87c740a7 Merge pull request #794 from tendermint/243-restart-app-via-os
Kill Tendermint when App dies
2017-10-31 16:27:23 -04:00
Ethan Buchman
6b737d2c1b Merge pull request #745 from tendermint/test-rpc-server-handlers
rpc/lib/server: add handlers tests for params inclusion
2017-10-31 15:42:24 -04:00
Ethan Buchman
f7f4ba5e90 rpc/lib/server: minor changes to test 2017-10-31 15:41:25 -04:00
Ethan Buchman
5466720d75 minor changes from @odeke-em PR #725 2017-10-31 15:32:07 -04:00
Ethan Buchman
7c3cf316f1 rpc/wsevents: small cleanup 2017-10-31 15:16:08 -04:00
Ethan Buchman
d71aed309f some minor changes 2017-10-30 22:52:03 -04:00
Ethan Buchman
d6beb60bf7 Merge pull request #799 from petabytestorage/fix-typo-switch-comment
fix comment typos
2017-10-30 18:01:32 -04:00
Anton Kaliaev
61d76a273f fixes from Bucky's and Emmanuel's reviews 2017-10-30 11:12:01 -05:00
Anton Kaliaev
6d18e2f447 do not send whole round state via eventHub
Fixes

```
WARNING: DATA RACE
Write at 0x00c4200715b8 by goroutine 24:
  github.com/tendermint/tendermint/consensus.(*ConsensusState).enterPrevote.func1()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:359 +0x3f
  github.com/tendermint/tendermint/consensus.(*ConsensusState).enterPrevote()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:897 +0x8de
  github.com/tendermint/tendermint/consensus.(*ConsensusState).addProposalBlockPart()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:1303 +0x701
  github.com/tendermint/tendermint/consensus.(*ConsensusState).handleMsg()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:560 +0x88c
  github.com/tendermint/tendermint/consensus.(*ConsensusState).receiveRoutine()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:525 +0x6d2

Previous read at 0x00c4200715b8 by goroutine 19:
  github.com/tendermint/tendermint/consensus.makeRoundStepMessages()
      /go/src/github.com/tendermint/tendermint/consensus/reactor.go:415 +0x192
  github.com/tendermint/tendermint/consensus.(*ConsensusReactor).broadcastNewRoundStep()
      /go/src/github.com/tendermint/tendermint/consensus/reactor.go:377 +0x3c
  github.com/tendermint/tendermint/consensus.(*ConsensusReactor).broadcastNewRoundStepsAndVotes.func1()
      /go/src/github.com/tendermint/tendermint/consensus/reactor.go:350 +0x275
```
2017-10-30 00:32:23 -05:00
Anton Kaliaev
1c1c68df8d fixes from my own review 2017-10-30 00:32:23 -05:00
Anton Kaliaev
f6539737de new pubsub package
comment out failing consensus tests for now

rewrite rpc httpclient to use new pubsub package

import pubsub as tmpubsub, query as tmquery

make event IDs constants
EventKey -> EventTypeKey

rename EventsPubsub to PubSub

mempool does not use pubsub

rename eventsSub to pubsub

new subscribe API

fix channel size issues and consensus tests bugs

refactor rpc client

add missing discardFromChan method

add mutex

rename pubsub to eventBus

remove IsRunning from WSRPCConnection interface (not needed)

add a comment in broadcastNewRoundStepsAndVotes

rename registerEventCallbacks to broadcastNewRoundStepsAndVotes

See https://dave.cheney.net/2014/03/19/channel-axioms

stop eventBuses after reactor tests

remove unnecessary Unsubscribe

return subscribe helper function

move discardFromChan to where it is used

subscribe now returns an err

this gives us ability to refuse to subscribe if pubsub is at its max
capacity.

use context to control overflow

cache queries

handle err when subscribing in replay_test

rename testClientID to testSubscriber

extract var

set channel buffer capacity to 1 in replay_file

fix byzantine_test

unsubscribe from single event, not all events

refactor httpclient to return events to appropriate channels

return failing testReplayCrashBeforeWriteVote test

fix TestValidatorSetChanges

refactor code a bit

fix testReplayCrashBeforeWriteVote

add comment

fix TestValidatorSetChanges

fixes from Bucky's review

update comment [ci skip]

test TxEventBuffer

update changelog

fix TestValidatorSetChanges (2nd attempt)

only do wg.Done when no errors

benchmark event bus

create pubsub server inside NewEventBus

only expose config params (later if needed)

set buffer capacity to 0 so we are not testing cache

new tx event format: key = "Tx" plus a tag {"tx.hash": XYZ}

This should allow subscribing to all transactions, or to a specific one
using a query: "tm.events.type = Tx and tx.hash = '013ABF99434...'"

use TimeoutCommit instead of afterPublishEventNewBlockTimeout

TimeoutCommit is the time a node waits after committing a block, before
it goes into the next height. So it will finish everything from the last
block, but then wait a bit. The idea is this gives it time to hear more
votes from other validators, to strengthen the commit it includes in the
next block. But it also gives it time to hear about new transactions.

waitForBlockWithUpdatedVals

rewrite WAL crash tests

Task:
test that we can recover from any WAL crash.

Solution:
the old tests were relying on event hub being run in the same thread (we
were injecting the private validator's last signature).

when considering a rewrite, we considered two possible solutions: write
a "fuzzy" testing system where WAL is crashing upon receiving a new
message, or inject failures and trigger them in tests using something
like https://github.com/coreos/gofail.

remove sleep

no cs.Lock around wal.Save

test different cases (empty block, non-empty block, ...)

comments

add comments

test 4 cases: empty block, non-empty block, non-empty block with smaller part size, many blocks

fixes as per Bucky's last review

reset subscriptions on UnsubscribeAll

use a simple counter to track message for which we panicked

also, set a smaller part size for all test cases
2017-10-30 00:32:22 -05:00
Petabyte Storage
fe9ff62297 fix comment typos 2017-10-28 22:01:45 -07:00
Ethan Buchman
7c85f15a6c Merge pull request #798 from petabytestorage/fix-test-uncommon-names
fix test using uncommon names
2017-10-28 23:47:31 -04:00
Petabyte Storage
6b366b2443 fix test using uncommon names 2017-10-28 20:29:11 -07:00
Emmanuel Odeke
a8b77359df rpc/lib/server: separate out Notifications test
Addressing feedback from @ebuchman
2017-10-28 15:24:40 -07:00
Emmanuel Odeke
e7fab7d4bf rpc/lib/server: update with @melekes and @ebuchman feedback 2017-10-28 15:11:21 -07:00
Emmanuel Odeke
59556ab030 rpc/lib/server: add handlers tests
Follow up of PR https://github.com/tendermint/tendermint/pull/724
For https://github.com/tendermint/tendermint/issues/708

Reported initially in #708, this bug was reconfirmed
by the fuzzer.

This fix ensures that:
* if the user doesn't pass in `"id"` that we send them back
a message in an error telling them to send `"id"`. Previously
we let the handler return a 200 with nothing.
* passing in nil `params` doesn't crash
* not passing in `params` doesn't crash
* passing in non-JSON parseable data to `params` doesn't crash
2017-10-28 15:11:21 -07:00
Ethan Buchman
128e2a1d9e Merge branch 'master' into develop 2017-10-28 00:09:03 -04:00
Ethan Buchman
e236302256 Merge pull request #793 from tendermint/release-v0.12.0
Release v0.12.0
2017-10-28 00:06:53 -04:00
Ethan Buchman
dfe28c8855 test: update for abci-cli consolidation. shell formatting 2017-10-27 23:09:50 -04:00
Ethan Buchman
4b616344fa update glide, again 2017-10-27 22:36:03 -04:00
Ethan Buchman
1ecd580061 Merge pull request #773 from tendermint/docs-staging
docs improvements
2017-10-27 16:18:56 -04:00
Ethan Buchman
21dcb4f290 update glide 2017-10-27 13:55:56 -04:00
Ethan Buchman
b2b35d7dc1 update changelog 2017-10-27 11:54:20 -04:00
Ethan Buchman
a1501dcde8 version bump 2017-10-27 11:54:20 -04:00
Ethan Buchman
3319ad03b8 Merge pull request #791 from tendermint/740-no-wal-after-fast-sync
dont catchupReplay on wal if we fast synced
2017-10-27 11:49:54 -04:00
Ethan Buchman
fe1c60b5cf consensus: kill process on app error 2017-10-27 10:55:20 -04:00
Ethan Buchman
591dd9e662 dont catchupReplay on wal if we fast synced 2017-10-27 10:46:19 -04:00
Ethan Buchman
bb6c15b00a CHANGELOG [ci skip] 2017-10-26 09:42:46 -04:00
Ethan Buchman
6af28ead87 Merge pull request #672 from tendermint/573-wal-issues
Add checksum and length to CS WAL record
2017-10-26 00:31:38 -04:00
Ethan Buchman
fcf459158d CHANGELOG [ci skip] 2017-10-26 00:29:58 -04:00
Ethan Buchman
e76ef2a8a1 types: unexpose valset.To/FromBytes 2017-10-26 00:27:02 -04:00
Ethan Buchman
3c92bea519 glide: more external deps locked to versions 2017-10-26 00:07:43 -04:00
Ethan Buchman
12c703c1c3 Merge branch 'develop' into 573-wal-issues 2017-10-25 23:56:08 -04:00
Ethan Buchman
376f47e030 Merge pull request #775 from tendermint/rpc-client-jitter
rpc/lib/client: add jitter for exponential backoff of WSClient
2017-10-25 22:53:50 -04:00
Petabyte Storage
ceedd4d968 remove unnecessary plus [ci skip] 2017-10-25 22:28:20 -04:00
Ethan Buchman
c595636999 glide 2017-10-25 22:23:55 -04:00
Emmanuel Odeke
6e5cd10399 rpc/lib/client: jitter test updates, only to be run on releases
* Updated code with feedback from @melekes, @ebuchman and @silasdavis.
* Added Makefile clause `release` to only run the test on seeing tag
`release` during releases i.e
```shell
make release
```
which will run the comprehensive and long integration-ish tests.
2017-10-25 19:20:55 -07:00
Ethan Buchman
57a684d5ac fixes from review 2017-10-25 21:54:56 -04:00
Ethan Buchman
5534eb4707 Merge pull request #776 from tendermint/feature/merge-light-client
Merge light client
2017-10-25 16:12:39 -04:00
Ethan Buchman
b2d5546cf8 Merge pull request #777 from silasdavis/fix-blocking-ws-client
Fix WSClient deadlock in the readRoutine after Stop() is called
2017-10-25 11:00:30 -04:00
Ethan Frey
f653ba63bf Separated out certifiers.Commit from rpc structs 2017-10-25 16:43:18 +02:00
Ethan Frey
0396b6d521 Rename checkpoint.go 2017-10-25 16:13:04 +02:00
Ethan Frey
94b36bb65e Move VerifyCommitAny into the types package 2017-10-25 16:13:04 +02:00
Ethan Frey
b4fd6e876e Copy certifiers from light-client 2017-10-25 16:13:04 +02:00
Ethan Buchman
775e100d2c Merge pull request #783 from tendermint/782-tendermint-invalid-command-panics
fix panic: failed to determine gopath: exec: "go"
2017-10-25 08:50:16 -04:00
Silas Davis
4cb02d0bf2 Exploit the fact the BaseService's closed Quit channel will keep emitting quit signals to close both readRoutine and writeRoutine 2017-10-25 10:19:18 +01:00
Anton Kaliaev
ae538337ba fix panic: failed to determine gopath: exec: "go" (Refs #782)
```
-bash-4.2$ tendermint show_validators
panic: failed to determine gopath: exec: "go": executable file not found in $PATH

goroutine 1 [running]:
github.com/tendermint/tendermint/vendor/github.com/tendermint/tmlibs/common.gopath(0xc4200632c0, 0x18)
	/var/lib/jenkins/workspace/03.Build.Package/go/src/github.com/tendermint/tendermint/vendor/github.com/tendermint/tmlibs/common/os.go:26 +0x1b5
github.com/tendermint/tendermint/vendor/github.com/tendermint/tmlibs/common.init()
	/var/lib/jenkins/workspace/03.Build.Package/go/src/github.com/tendermint/tendermint/vendor/github.com/tendermint/tmlibs/common/os.go:17 +0x13c
github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire.init()
	/var/lib/jenkins/workspace/03.Build.Package/go/src/github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire/wire.go:165 +0x50
github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire/data.init()
	/var/lib/jenkins/workspace/03.Build.Package/go/src/github.com/tendermint/tendermint/vendor/github.com/tendermint/go-wire/data/wrapper.go:89 +0x50
github.com/tendermint/tendermint/vendor/github.com/tendermint/tmlibs/cli.init()
	/var/lib/jenkins/workspace/03.Build.Package/go/src/github.com/tendermint/tendermint/vendor/github.com/tendermint/tmlibs/cli/setup.go:190 +0x76
main.init()
	/var/lib/jenkins/workspace/03.Build.Package/go/src/github.com/tendermint/tendermint/cmd/tendermint/main.go:42 +0x49
```

An error message instead would be nice.

Now GoPath() is a function instead of a variable.
2017-10-25 11:19:53 +04:00
Ethan Buchman
62a7beec21 Merge pull request #780 from ericdmann/769-error-msg-while-testnet-sync
Change log level to Info when proposal block hashing fails
2017-10-24 23:27:20 -04:00
Eric Mann
45e18a1832 Change log level to Info when proposal block hashing fails due to partially complete block 2017-10-24 14:13:35 -07:00
Silas Davis
f6adddb4a8 Replace ResultsCh with ResponsesCh 2017-10-24 17:45:13 +01:00
Ethan Buchman
38fc351532 Merge pull request #765 from tendermint/762-blockchain-reactor-timeout
blockchain reactor timeout
2017-10-24 09:13:26 -04:00
Silas Davis
01be6fa309 Fix WSClient blocking in the readRoutine after Stop() as it tries to write to ResultsCh 2017-10-24 13:31:24 +01:00
Anton Kaliaev
e06bbaf303 refactor TestNoBlockMessageResponse to eliminate a race condition 2017-10-24 15:32:01 +04:00
Anton Kaliaev
bb7b152af5 write docs for cutWALUntil and wal2json binaries 2017-10-24 13:25:47 +04:00
Emmanuel Odeke
5504920ba3 rpc/lib/client: add jitter for exponential backoff of WSClient
Fixes https://github.com/tendermint/tendermint/issues/751.

Adds jitter to our exponential backoff to mitigate a self-DDOS vector.
The jitter is a randomly picked fraction of a second whose purpose is to
ensure that each exponential backoff retry still occurs within
(1<<attempts) == 2**attempts seconds, but with each client waiting a
random extra buffer before it tries to reconnect. This avoids all clients
reconnecting at once, which could bring back the very conditions that
caused the dial to the WSServer to fail in the first place, e.g.
* Network outage
* File descriptor exhaustion
* False positives from firewalls
etc. (See the sketch after this entry.)
2017-10-24 02:00:20 -07:00
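A minimal Go sketch of the backoff-plus-jitter pattern described in that commit; the function name and the loop are illustrative assumptions, not the actual `rpc/lib/client` code:

```
// Sketch of exponential backoff with sub-second jitter (assumed names).
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// backoffDuration returns the delay before reconnect attempt `attempt`:
// a full (1 << attempt) seconds minus a random fraction of a second, so
// clients hit by the same failure do not all dial again at once.
func backoffDuration(attempt uint) time.Duration {
	jitter := time.Duration(rand.Float64() * float64(time.Second)) // [0s, 1s)
	return time.Duration(1<<attempt)*time.Second - jitter
}

func main() {
	for attempt := uint(0); attempt < 5; attempt++ {
		fmt.Printf("attempt %d: wait %v\n", attempt, backoffDuration(attempt))
	}
}
```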
Anton Kaliaev
c74a359c46 fixes per Bucky's review 2017-10-24 12:14:21 +04:00
Zach Ramsay
ee9dc6ce59 docs: fixup abci guide 2017-10-23 20:56:49 -04:00
Zach Ramsay
62e8ec34d1 fix comment, #723 2017-10-23 19:51:09 -04:00
Matt Bell
6a5254c475 Added local blockchain sync benchmark script 2017-10-23 19:46:57 -04:00
Ethan Buchman
2802a06a08 blockchain/store: comment about panics 2017-10-23 19:46:14 -04:00
Zach
8ac430813d Merge pull request #772 from tendermint/docs/flow
moar docs updates
2017-10-23 19:14:06 -04:00
Zach Ramsay
3e61b8c17a docs: comb through step by step 2017-10-23 19:11:51 -04:00
Zach Ramsay
9e277d1596 docs: smaller logo (200px) 2017-10-23 17:34:27 -04:00
Zach
4c9d5244a5 Merge pull request #759 from tendermint/improve-docs
docs: update abci example details
2017-10-23 17:29:34 -04:00
Ethan Buchman
87cc277b38 Merge pull request #721 from tendermint/564-add-app-options-to-genesis-resp
Add app_options to GenesisDoc
2017-10-23 16:03:45 -04:00
Anton Kaliaev
3115c23762 binary format for WAL 2017-10-23 22:27:24 +04:00
Anton Kaliaev
31030c6514 make crc32c a global var
change echo format in build.sh script
2017-10-23 22:09:42 +04:00
Anton Kaliaev
7b8ffc9981 add checksum and msg size to TimedWALMessage
updated test_data/build.sh script
2017-10-23 22:09:17 +04:00
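The new WAL record comes down to a checksum, a length prefix, and the payload. Below is a hedged Go sketch of that idea; the field order and sizes here are assumptions for illustration, not the precise `consensus/wal` encoding:

```
// Minimal sketch (assumed layout): crc32c(data) | len(data) | data.
// The checksum is enough to detect torn or corrupted writes on replay.
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"hash/crc32"
)

// shared Castagnoli table, analogous to the global crc32c var mentioned above
var crc32c = crc32.MakeTable(crc32.Castagnoli)

// encodeRecord writes checksum, payload size, then the payload itself.
func encodeRecord(data []byte) []byte {
	buf := new(bytes.Buffer)
	binary.Write(buf, binary.BigEndian, crc32.Checksum(data, crc32c))
	binary.Write(buf, binary.BigEndian, uint32(len(data)))
	buf.Write(data)
	return buf.Bytes()
}

// decodeRecord checks the length and checksum before returning the payload.
func decodeRecord(b []byte) ([]byte, error) {
	if len(b) < 8 {
		return nil, fmt.Errorf("record too short")
	}
	crc := binary.BigEndian.Uint32(b[0:4])
	length := binary.BigEndian.Uint32(b[4:8])
	if int(length) != len(b)-8 {
		return nil, fmt.Errorf("bad length")
	}
	data := b[8:]
	if crc32.Checksum(data, crc32c) != crc {
		return nil, fmt.Errorf("checksum mismatch: WAL corruption")
	}
	return data, nil
}

func main() {
	rec := encodeRecord([]byte("TimedWALMessage bytes"))
	data, err := decodeRecord(rec)
	fmt.Println(string(data), err)
}
```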
Ethan Buchman
a75bccfbc4 Merge branch 'develop' into 564-add-app-options-to-genesis-resp 2017-10-23 11:30:00 -04:00
Ethan Buchman
f97229f05a Merge pull request #748 from tendermint/params-test
types: ConsensusParams test + document the ranges/limits
2017-10-23 11:18:00 -04:00
Ethan Buchman
ac2ef9e0ea Merge pull request #750 from tendermint/feature/cleanup
Cleanup of code and code docs
2017-10-23 11:14:15 -04:00
Ethan Buchman
c2803b80e8 update changelog; fixes from rebase 2017-10-23 11:13:12 -04:00
Ethan Buchman
7a6876bc62 Merge pull request #768 from tendermint/feature/merkleeyes-to-iavl
Feature/merkleeyes to iavl
2017-10-23 11:07:21 -04:00
Adrian Brink
819f81f702 Change NOTE to CONTRACT 2017-10-23 11:04:45 -04:00
Adrian Brink
036d3b59a3 Address reviews 2017-10-23 11:04:45 -04:00
Adrian Brink
782a836db0 Cleanup of code and code docs
This cleans up some of the code in the state package
2017-10-23 11:04:45 -04:00
Ethan Buchman
bd46b78785 Merge pull request #755 from tendermint/753-notified-mempool-txs-but-mempool-empty
WIP: only notify when there are some txs (Refs #753)
2017-10-23 10:31:38 -04:00
Anton Kaliaev
f908dd0e55 only notify when there are some txs (Refs #753) 2017-10-23 10:31:00 -04:00
Ethan Buchman
0bbf38141a blockchain/pool: some comments and small changes 2017-10-23 10:13:46 -04:00
Ethan Buchman
f188366e26 update glide 2017-10-23 10:04:00 -04:00
Ethan Buchman
fd60621a8e update cswal test 2017-10-23 10:03:54 -04:00
Ethan Buchman
60b7f2c61b Merge pull request #767 from silasdavis/do-not-swallow
Make RPCError an actual error and don't swallow its companion data
2017-10-23 01:51:40 -04:00
Silas Davis
3e3d53daef Make RPCError an actual error and don't swallow its companion data 2017-10-22 15:14:21 +01:00
Anton Kaliaev
d64a48e0ee set logger on blockchain pool 2017-10-20 23:56:21 +04:00
Anton Kaliaev
0a7b2ab52c fix invalid memory address or nil pointer dereference error (Refs #762)
https://github.com/tendermint/tendermint/issues/762#issuecomment-338276055
```
E[10-19|04:52:38.969] Stopping peer for error                      module=p2p peer="Peer{MConn{178.62.46.14:46656} B14916FAF38A out}" err="Error: runtime error: invalid memory address or nil pointer dereference\nStack: goroutine 529485 [running]:\nruntime/debug.Stack(0xc4355cfb38, 0xb463e0, 0x11b1c30)\n\t/usr/local/go/src/runtime/debug/stack.go:24 +0xa7\ngithub.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/p2p.(*MConnection)._recover(0xc439a28870)\n\t/home/ubuntu/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/p2p/connection.go:206 +0x6e\npanic(0xb463e0, 0x11b1c30)\n\t/usr/local/go/src/runtime/panic.go:491 +0x283\ngithub.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/blockchain.(*bpPeer).decrPending(0x0, 0x381)\n\t/home/ubuntu/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/blockchain/pool.go:376 +0x22\ngithub.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/blockchain.(*BlockPool).AddBlock(0xc4200e4000, 0xc4266d1f00, 0x40, 0xc432ac9640, 0x381)\n\t/home/ubuntu/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/blockchain/pool.go:215 +0x139\ngithub.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/blockchain.(*BlockchainReactor).Receive(0xc42050a780, 0xc420257740, 0x1171be0, 0xc42ff302d0, 0xc4384b2000, 0x381, 0x1000)\n\t/home/ubuntu/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/blockchain/reactor.go:160 +0x712\ngithub.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/p2p.createMConnection.func1(0x11e5040, 0xc4384b2000, 0x381, 0x1000)\n\t/home/ubuntu/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/p2p/peer.go:334 +0xbd\ngithub.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/p2p.(*MConnection).recvRoutine(0xc439a28870)\n\t/home/ubuntu/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/p2p/connection.go:475 +0x4a3\ncreated by github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/p2p.(*MConnection).OnStart\n\t/home/ubuntu/go/src/github.com/cosmos/gaia/vendor/github.com/tendermint/tendermint/p2p/connection.go:170 +0x187\n"
```
2017-10-20 21:56:10 +04:00
Zach Ramsay
8a69f1087b docs: typo 2017-10-20 07:56:26 -04:00
Emmanuel Odeke
f24f03906f types: ConsensusParams: add feedback from @ebuchman and @melekes 2017-10-20 00:11:30 -06:00
Ethan Buchman
fa56e8c0ce Merge pull request #676 from tendermint/state-unexpose-genesisDoc-chainID
all, state: unexpose GenesisDoc, ChainID fields; make them accessor methods
2017-10-18 20:10:02 -04:00
Zach Ramsay
fc406d1657 docs: update abci example details [ci skip] 2017-10-18 16:33:37 -04:00
Zach
9dcefd0e1e Merge pull request #754 from tendermint/improve-docs
add tm-migrator to docs
2017-10-18 13:04:12 -04:00
Zach
a2dc53d43d Merge pull request #757 from tendermint/756-specification-validators
correct an error in validator's specification [ci skip] (Refs #756)
2017-10-18 13:02:13 -04:00
Anton Kaliaev
b9c4fab96e correct an error in validator's specification [ci skip] (Refs #756) 2017-10-18 19:23:30 +04:00
Zach Ramsay
fa07dbd7ec docs: add info about tm-migrate 2017-10-18 08:47:58 -04:00
Zach Ramsay
9b382d7a11 docs: remove mention of type byte 2017-10-18 08:00:01 -04:00
Anton Kaliaev
75b78bfb72 panic on marshal/unmarshal failures for genesisDoc 2017-10-17 13:33:57 +04:00
Ethan Buchman
b234f7aba2 Merge pull request #741 from tendermint/client-compile-time-assertions
rpc/client: use compile time assertions instead of methods
2017-10-17 03:41:24 -04:00
Emmanuel Odeke
bff069f83c types: ConsensusParams test + document the ranges/limits
Fixes https://github.com/tendermint/tendermint/issues/747
Updates https://github.com/tendermint/tendermint/issues/693

* Document the unmentioned limits for ConsensusParams.Validate()
* Make the limit for ConsensusParams.BlockSizeParams.MaxBytes
clear at 100MiB
2017-10-16 16:57:44 -06:00
Anton Kaliaev
616b07ff6b make AppOptions an interface{} 2017-10-16 10:58:52 +04:00
Anton Kaliaev
b26f812399 update changelog 2017-10-16 10:58:52 +04:00
Anton Kaliaev
321061125f add app_options to GenesisDoc (Refs #564) 2017-10-16 10:58:52 +04:00
Anton Kaliaev
6469e2ccca save genesis doc in DB to prevent user errors
https://github.com/tendermint/tendermint/pull/676#discussion_r144411458
2017-10-16 10:51:58 +04:00
Anton Kaliaev
c4646bf87f make state#Params not a pointer
also remove the comment
2017-10-16 10:34:02 +04:00
Anton Kaliaev
716364182d [state] expose ChainID and Params
```
jaekwon
Yeah we should definitely expose ChainID.
ConsensusParams is small enough, we can just write it.
```
https://github.com/tendermint/tendermint/pull/676#discussion_r144123203
2017-10-16 10:34:02 +04:00
Anton Kaliaev
1971e149fb ChainID() and Params() do not return errors
- remove state#GenesisDoc() method
2017-10-16 10:34:02 +04:00
Emmanuel Odeke
7939d62ef0 all, state: unexpose GenesisDoc, ChainID fields; make them accessor methods
Fixes #671

Unexpose GenesisDoc and ChainID fields to avoid them being
serialized to the DB on every block write/state.Save()

A GenesisDoc can now be alternatively written to the state's
database, by serializing its JSON as a value of key "genesis-doc".

There are now accessors and a setter for these attributes:
- state.GenesisDoc() (*types.GenesisDoc, error)
- state.ChainID() (string, error)
- state.SetGenesisDoc(*types.GenesisDoc)

This is a breaking change since it changes the state's
serialization: if you load the GenesisDoc entirely from the
database, you'll need to set its value in the database as the
GenesisDoc's JSON-marshaled bytes.
2017-10-16 10:34:01 +04:00
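A toy sketch of the accessor pattern described above. It is greatly simplified: the real `State` has many more fields, and later commits in this range drop the error returns again.

```
// Illustrative sketch only: unexported fields with accessors mean the
// GenesisDoc is no longer part of what gets serialized on every
// state.Save().
package main

import "fmt"

type GenesisDoc struct {
	ChainID string
}

type State struct {
	genesisDoc *GenesisDoc // unexported: not written on every block commit
	chainID    string
}

func (s *State) SetGenesisDoc(g *GenesisDoc) {
	s.genesisDoc = g
	s.chainID = g.ChainID
}

func (s *State) GenesisDoc() (*GenesisDoc, error) {
	if s.genesisDoc == nil {
		return nil, fmt.Errorf("genesis doc not set (load it from the DB first)")
	}
	return s.genesisDoc, nil
}

func (s *State) ChainID() (string, error) {
	g, err := s.GenesisDoc()
	if err != nil {
		return "", err
	}
	return g.ChainID, nil
}

func main() {
	s := new(State)
	s.SetGenesisDoc(&GenesisDoc{ChainID: "test-chain"})
	id, _ := s.ChainID()
	fmt.Println(id)
}
```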
Zach
4c1f1e4e57 Merge pull request #746 from srmo/701-add-dev-docs-in-java
701 add dev docs in java
2017-10-15 11:02:36 -04:00
Zach
09170f76fe Merge pull request #743 from tendermint/zramsay-patch-1
Update getting-started.rst
2017-10-15 11:01:19 -04:00
srmo
9e1edf8685 [docs] add Java examples for each section 2017-10-15 13:45:43 +02:00
srmo
e7fe299504 [docs] replace all GO snippets with collapsible blocks 2017-10-15 12:21:06 +02:00
srmo
b90edffe28 [docs] add first java block for deliverTx 2017-10-15 12:00:08 +02:00
srmo
f361092ed9 [docs] provide means to have collapsible code blocks without adding a new theme 2017-10-15 11:59:37 +02:00
Ethan Buchman
a1e0f0ba95 docs/ecosystem: add py-tendermint to abci-servers 2017-10-14 00:59:34 -04:00
Emmanuel Odeke
5f218a43fd rpc/client: use compile time assertions instead of methods 2017-10-13 14:30:54 -06:00
Zach
7fe470fc76 Update getting-started.rst 2017-10-13 14:50:54 -04:00
Ethan Buchman
d490c25807 Merge pull request #720 from tendermint/types-heartbeat-test
types/heartbeat: test all Heartbeat functions
2017-10-13 09:22:59 -04:00
Ethan Buchman
340f33b475 Merge pull request #734 from tendermint/482-support-historical-abci-queries
support historical abci queries
2017-10-13 09:14:32 -04:00
Anton Kaliaev
7518c4a9be [rpc] update comment [ci skip] 2017-10-13 15:03:21 +04:00
Anton Kaliaev
db413aadfd fixes from @cloudhead review 2017-10-13 15:03:21 +04:00
Anton Kaliaev
5433e5771e support historical abci queries (Refs #482) 2017-10-13 15:03:20 +04:00
Zach Ramsay
e2e50bc0fc rpc: use /iavl repo in test (#713) 2017-10-11 10:35:22 -04:00
Ethan Buchman
27245ce6f6 Merge branch 'master' into develop 2017-10-10 19:12:40 -04:00
Ethan Buchman
d4634dc683 Merge pull request #729 from tendermint/release/0.11.1
Release/0.11.1
2017-10-10 18:50:15 -04:00
Ethan Buchman
8c08fc671c fix version 2017-10-10 18:49:08 -04:00
Ethan Buchman
a2d40580d7 Merge pull request #732 from sbellem/docs-install-typo-fix
[docs: typo fix] add missing "have"
2017-10-10 11:22:39 -04:00
Ethan Buchman
0011af7adf Merge pull request #727 from sbellem/docs-typo-fix
[docs:typo fix] remove misplaced "the"
2017-10-10 11:20:34 -04:00
Sylvain Bellemare
1764106606 [docs: typo fix] add missing "have" 2017-10-10 17:15:35 +02:00
Ethan Buchman
3356544706 update changelog 2017-10-10 11:10:48 -04:00
Ethan Buchman
335e012b6a Merge branch 'develop' into release/0.11.1 2017-10-10 11:08:14 -04:00
Ethan Buchman
a458da8f92 Merge pull request #724 from tendermint/708-leaving-out-params-crashes-tm-rpc
Leaving out params crashes tm rpc
2017-10-10 10:53:21 -04:00
Ethan Buchman
9fb45c5b5a remove a stale comment 2017-10-10 10:52:26 -04:00
Anton Kaliaev
aae4e94998 make RPCRequest params not a pointer
https://github.com/tendermint/tendermint/pull/724#issuecomment-335362927
2017-10-10 13:50:06 +04:00
Anton Kaliaev
d935a4f0a8 recover from panic in WS JSON RPC readRoutine
https://github.com/tendermint/tendermint/pull/724#issuecomment-335316484
2017-10-10 13:48:56 +04:00
Anton Kaliaev
5c331d8276 log a notification to help debug user issues 2017-10-10 13:01:25 +04:00
Anton Kaliaev
13b9de6778 return missing package declaration 2017-10-10 12:48:36 +04:00
Anton Kaliaev
dc0e8de9b0 extract some of the consensus types into ./types
so they can be used in rpc/core/types/responses.go.

```
So, it seems like we could use the actual structs here, but we don't want to have to import consensus to get them, as then clients are importing too much crap. So probably we should move some types from consensus into consensus/types so we can import.

Will these raw messages be identical to:

type ResultDumpConsensusState struct {
  RoundState cstypes.RoundState
  PeerRoundStates map[string]cstypes.PeerRoundState
}
```
https://github.com/tendermint/tendermint/pull/724#discussion_r143598193
2017-10-10 12:39:21 +04:00
Anton Kaliaev
90a2335267 bump version to 0.11.1 2017-10-10 01:18:33 +04:00
Anton Kaliaev
99c4e48038 return missing package declaration 2017-10-10 01:14:42 +04:00
Anton Kaliaev
4bd4d59af5 update changelog [ci skip] 2017-10-10 01:14:00 +04:00
Sylvain Bellemare
8219abc552 [docs:typo fix] remove misplaced "the" 2017-10-09 20:06:43 +02:00
Anton Kaliaev
d6a87d3c43 [rpc] DumpConsensusState: output state as json rather than string
Before:

```
{
  "jsonrpc": "2.0",
  "id": "",
  "result": {
    "round_state": "RoundState{\n  H:10 R:0 S:RoundStepNewHeight\n  StartTime:     2017-10-09 13:07:24.841134374 +0400 +04\n  CommitTime:    2017-10-09 13:07:23.841134374 +0400 +04\n  Validators:    ValidatorSet{\n      Proposer: Validator{EF243CC0E9B88D0161D24D733BDE9003518CEA27 {PubKeyEd25519{2E0B9301334FCDAB193D514022F81BA09BBEC028685C96602BE9DD0BD4F9E202}} VP:10 A:0}\n      Validators:\n        Validator{EF243CC0E9B88D0161D24D733BDE9003518CEA27 {PubKeyEd25519{2E0B9301334FCDAB193D514022F81BA09BBEC028685C96602BE9DD0BD4F9E202}} VP:10 A:0}\n    }\n  Proposal:      \u003cnil\u003e\n  ProposalBlock: nil-PartSet nil-Block\n  LockedRound:   0\n  LockedBlock:   nil-PartSet nil-Block\n  Votes:         HeightVoteSet{H:10 R:0~0\n      VoteSet{H:10 R:0 T:1 +2/3:\u003cnil\u003e BA{1:_} map[]}\n      VoteSet{H:10 R:0 T:2 +2/3:\u003cnil\u003e BA{1:_} map[]}\n    }\n  LastCommit: VoteSet{H:9 R:0 T:2 +2/3:947F67A7B85439AF2CD5DFED376C51AC7BD67AEE:1:365E9983E466 BA{1:X} map[]}\n  LastValidators:    ValidatorSet{\n      Proposer: Validator{EF243CC0E9B88D0161D24D733BDE9003518CEA27 {PubKeyEd25519{2E0B9301334FCDAB193D514022F81BA09BBEC028685C96602BE9DD0BD4F9E202}} VP:10 A:0}\n      Validators:\n        Validator{EF243CC0E9B88D0161D24D733BDE9003518CEA27 {PubKeyEd25519{2E0B9301334FCDAB193D514022F81BA09BBEC028685C96602BE9DD0BD4F9E202}} VP:10 A:0}\n    }\n}",
    "peer_round_states": []
  }
}
```

After:

```
{
  "jsonrpc": "2.0",
  "id": "",
  "result": {
    "round_state": {
      "Height": 1691,
      "Round": 0,
      "Step": 1,
      "StartTime": "2017-10-09T14:08:09.129491764+04:00",
      "CommitTime": "2017-10-09T14:08:08.129491764+04:00",
      "Validators": {
        "validators": [
          {
            "address": "EF243CC0E9B88D0161D24D733BDE9003518CEA27",
            "pub_key": {
              "type": "ed25519",
              "data": "2E0B9301334FCDAB193D514022F81BA09BBEC028685C96602BE9DD0BD4F9E202"
            },
            "voting_power": 10,
            "accum": 0
          }
        ],
        "proposer": {
          "address": "EF243CC0E9B88D0161D24D733BDE9003518CEA27",
          "pub_key": {
            "type": "ed25519",
            "data": "2E0B9301334FCDAB193D514022F81BA09BBEC028685C96602BE9DD0BD4F9E202"
          },
          "voting_power": 10,
          "accum": 0
        }
      },
      "Proposal": null,
      "ProposalBlock": null,
      "ProposalBlockParts": null,
      "LockedRound": 0,
      "LockedBlock": null,
      "LockedBlockParts": null,
      "Votes": {},
      "CommitRound": -1,
      "LastCommit": {},
      "LastValidators": {
        "validators": [
          {
            "address": "EF243CC0E9B88D0161D24D733BDE9003518CEA27",
            "pub_key": {
              "type": "ed25519",
              "data": "2E0B9301334FCDAB193D514022F81BA09BBEC028685C96602BE9DD0BD4F9E202"
            },
            "voting_power": 10,
            "accum": 0
          }
        ],
        "proposer": {
          "address": "EF243CC0E9B88D0161D24D733BDE9003518CEA27",
          "pub_key": {
            "type": "ed25519",
            "data": "2E0B9301334FCDAB193D514022F81BA09BBEC028685C96602BE9DD0BD4F9E202"
          },
          "voting_power": 10,
          "accum": 0
        }
      }
    },
    "peer_round_states": {
      "75EC8F15D244A421202F9725CD4DE509EE50303670310CF7530EF25E2B7C524B": {
        "Height": 1691,
        "Round": 0,
        "Step": 1,
        "StartTime": "2017-10-09T14:08:08.563251997+04:00",
        "Proposal": false,
        "ProposalBlockPartsHeader": {
          "total": 0,
          "hash": ""
        },
        "ProposalBlockParts": null,
        "ProposalPOLRound": -1,
        "ProposalPOL": null,
        "Prevotes": null,
        "Precommits": null,
        "LastCommitRound": 0,
        "LastCommit": null,
        "CatchupCommitRound": -1,
        "CatchupCommit": null
      }
    }
  }
}
```
2017-10-09 14:09:26 +04:00
Anton Kaliaev
a3adac3787 [rpc] do not try to parse params if they were not provided (Refs #708) 2017-10-09 13:30:52 +04:00
Emmanuel Odeke
6fc82f3824 types/heartbeat: test all Heartbeat functions
Updates https://github.com/tendermint/tendermint/issues/693

* Adjusted Heartbeat.Copy to return nil on
trying to copy a nil value instead of panicking.
* Also documented that WriteSignBytes panics
if the Heartbeat is nil.
2017-10-05 21:24:36 -06:00
Ethan Buchman
bcca27ee20 Merge pull request #718 from tendermint/restore-rpc-lib-readme
restore rpc/lib readme as doc.go (Refs #710) [ci skip]
2017-10-05 22:08:17 -04:00
Anton Kaliaev
3702cb7e7c restore rpc/lib readme as doc.go (Refs #710) [ci skip]
I don't want to lose any documentation. Correct me if I am wrong, but we
don't have these docs anywhere else.
2017-10-05 11:44:02 +04:00
Ethan Buchman
49653d3e31 CODEOWNERS file 2017-10-04 23:44:55 -04:00
Ethan Buchman
ddb8430341 Merge pull request #716 from tendermint/unstable
Unstable
2017-10-04 23:43:36 -04:00
Ethan Buchman
91a3cb0f21 makefile: remove megacheck 2017-10-04 23:20:22 -04:00
Ethan Buchman
765c325441 Merge pull request #714 from tendermint/feature/no-block-response
Feature/no block response
2017-10-04 22:27:06 -04:00
Ethan Buchman
659768783f blockchain: fixing reactor tests 2017-10-04 22:26:00 -04:00
Ethan Buchman
e756906a0e Merge pull request #711 from tendermint/imports
get rid of anonymous imports
2017-10-04 22:16:07 -04:00
Zach Ramsay
136b6a7673 rpc/lib: remove dead files, closes #710 2017-10-04 17:45:15 -04:00
Emmanuel Odeke
068f01368f blockchain/reactor: respondWithNoResponseMessage for missing height
Fixes #514
Replaces #540

If a peer requests a block with a height that we don't have,
respond with a bcNoBlockResponseMessage.
However, according to the Tendermint spec, this condition shouldn't
occur if all nodes are honest, so it is a possible hint of a
dishonest node.
2017-10-04 17:27:10 -04:00
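A simplified sketch of the reactor behaviour described above; the types here are stand-ins for the real `blockchain` package API, not its actual definitions:

```
// Sketch: answer a block request with the block if we have it, otherwise
// with a bcNoBlockResponseMessage so the requester is not left waiting.
package main

import "fmt"

type bcBlockRequestMessage struct{ Height int64 }

type bcNoBlockResponseMessage struct{ Height int64 }

type Block struct{ Height int64 }

type BlockStore interface {
	LoadBlock(height int64) *Block // nil if we don't have it
}

type Peer interface {
	Send(msg interface{}) bool
}

// Honest peers should not ask for heights we lack, so the "no block" reply
// also serves as a hint of possible misbehaviour.
func respondToBlockRequest(store BlockStore, src Peer, msg *bcBlockRequestMessage) {
	if block := store.LoadBlock(msg.Height); block != nil {
		src.Send(block)
		return
	}
	src.Send(&bcNoBlockResponseMessage{Height: msg.Height})
}

// --- tiny fakes so the sketch runs ---

type memStore struct{ max int64 }

func (m memStore) LoadBlock(h int64) *Block {
	if h <= m.max {
		return &Block{Height: h}
	}
	return nil
}

type printPeer struct{}

func (printPeer) Send(msg interface{}) bool { fmt.Printf("-> %#v\n", msg); return true }

func main() {
	respondToBlockRequest(memStore{max: 10}, printPeer{}, &bcBlockRequestMessage{Height: 5})
	respondToBlockRequest(memStore{max: 10}, printPeer{}, &bcBlockRequestMessage{Height: 42})
}
```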
Zach Ramsay
f23d47e5d2 upnp: keep a link 2017-10-04 17:19:49 -04:00
Ethan Buchman
09aed7ee89 Merge pull request #707 from tendermint/619-how-to-read-logs
How To Read Logs guide
2017-10-04 16:55:20 -04:00
Zach Ramsay
d56b44f3a5 all: no more anonymous imports 2017-10-04 16:40:45 -04:00
Anton Kaliaev
9e4c25761c relative links [ci skip] 2017-10-04 23:33:31 +04:00
Anton Kaliaev
54f2cc9709 [docs] add how to read logs guide [ci skip] 2017-10-04 18:35:22 +04:00
Ethan Buchman
31a7e2b3b4 Merge pull request #704 from tendermint/no-empty-docs
document no empty blocks
2017-10-03 23:56:05 -04:00
Zach Ramsay
00ab3daa0c document no empty blocks, closes #605 [ci skip] 2017-10-03 23:55:45 -04:00
Ethan Buchman
bbf0228aa7 Merge pull request #700 from tendermint/695-improve-app-dev-docs
Improve app dev docs
2017-10-03 23:40:21 -04:00
Ethan Buchman
6550199751 [docs] minor fixes from review [ci skip] 2017-10-03 23:39:46 -04:00
Anton Kaliaev
10f361fcd0 [docs] use persistent_dummy only when needed [ci skip] 2017-10-04 00:04:45 +04:00
Anton Kaliaev
4a0ae17401 [docs] include examples from the persistent_dummy app [ci skip] 2017-10-04 00:04:32 +04:00
Anton Kaliaev
8537070575 [docs] restructure sentence [ci skip] 2017-10-04 00:04:22 +04:00
Ethan Buchman
9cbcd4b5e3 Merge pull request #692 from tendermint/unstable
add the unstable changes
2017-10-03 12:47:16 -04:00
Ethan Buchman
4fa4e617b7 docs/ecosystem: add stratumn 2017-10-03 12:47:01 -04:00
Zach Ramsay
2e598a7caf one more fix 2017-10-03 11:26:32 -04:00
Zach Ramsay
031eb23dc8 docs: fix build warnings 2017-10-03 11:23:08 -04:00
Zach
edd718c580 Update ecosystem.rst 2017-10-03 11:14:24 -04:00
Zach
392a041c2b Merge pull request #699 from tendermint/update-getting-started-docs
remove unnecessary args in abci_query call in getting-started [ci skip]
2017-10-03 10:41:36 -04:00
Anton Kaliaev
8727bfc265 remove unnecessary args in abci_query call in getting-started [ci skip]
Since 0.10.0, RPC does not require all args (default values will be used).
2017-10-03 16:21:48 +04:00
Ethan Buchman
84e39203bb readme points to ecosystem doc; add lotion, clean up 2017-10-02 23:46:35 -04:00
Ethan Buchman
97e9802255 fix out of range error in VoteSet.addVote 2017-10-02 23:34:06 -04:00
Ethan Buchman
8c6bd44929 log stack trace on consensus failure 2017-10-02 23:34:06 -04:00
Ethan Buchman
ed5511dc08 glide: update for autofile fix 2017-10-02 23:34:02 -04:00
Ethan Buchman
aa57e89e21 changelog: add genesis amount->power 2017-10-02 14:28:04 -04:00
Zach Ramsay
c2f6ff759b typo 2017-10-02 13:02:45 -04:00
Anton Kaliaev
45ff7cdd0c rewrite ws client to expose a callback instead of a channel
a callback gives more power to the publisher. Plus, it is optional,
unlike a channel, which will block the whole client if you don't
read from it.
2017-10-02 13:00:20 -04:00
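A tiny sketch of that callback-versus-channel trade-off (assumed names, not the real WSClient API): with a callback, the read loop never depends on anyone draining a results channel.

```
// Sketch only: the consumer registers an optional callback; nil simply
// drops responses instead of blocking the reader.
package main

import "fmt"

type WSClient struct {
	onResponse func(resp string)
}

// readRoutine stands in for the websocket read loop.
func (c *WSClient) readRoutine(responses []string) {
	for _, r := range responses {
		if c.onResponse != nil {
			c.onResponse(r) // publisher decides what to do; nothing to forget to drain
		}
	}
}

func main() {
	c := &WSClient{onResponse: func(r string) { fmt.Println("got:", r) }}
	c.readRoutine([]string{"result#1", "result#2"})
}
```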
Alexandre Thibault
ce36a0111a rpc: subscribe on reconnection (#689)
* rpc: subscribe on reconnection

* rpc: fix unit tests
2017-10-02 13:00:20 -04:00
Martin Dyring-Andersen
b61f5482d4 Fix broken reference to ABCI 2017-10-02 13:00:20 -04:00
Anton Kaliaev
40b5defe18 release script [ci skip] 2017-10-02 13:00:20 -04:00
Tino Rusch
e40d1b36f7 docs: added passchain to the ecosystem.rst in the applications section; 2017-10-02 13:00:20 -04:00
Zach Ramsay
60a2867af2 docs/ecosystem: add ABCI implementations 2017-10-02 12:59:32 -04:00
Adrian Brink
bfdad916a2 Update ecosystem.rst 2017-10-02 12:59:32 -04:00
Tino Rusch
498ff803db [README] added passchain to application list;
This adds passchain to the 'applications' part of the top-level README.md file.
Passchain is a distributed password sharing system built on top of tendermint.
2017-10-02 12:59:32 -04:00
Anton Kaliaev
c8789492dc update docker readme [ci skip] 2017-10-02 12:59:32 -04:00
Anton Kaliaev
1723836014 update Dockerfile [ci skip] 2017-10-02 12:59:32 -04:00
Anton Kaliaev
2d6bc8d7d7 bump up Golang version to 1.9.0 2017-10-02 12:59:32 -04:00
Ethan Buchman
7448c4adde Merge pull request #690 from etherealmachine/patch-1
Fix typo in using-tendermint.rst
2017-10-01 11:03:16 -04:00
James Pettit
a221736eee Fix typo in using-tendermint.rst
configutation -> configuration
2017-09-29 23:18:45 -07:00
Alexandre Thibault
382bead548 rpc: fix client websocket timeout (#687) 2017-09-29 13:32:30 +04:00
Anton Kaliaev
f9479b34cb sleep time should be greater than readTimeout (5 sec)
otherwise, we're not testing ping/pongs.
see https://github.com/tendermint/tendermint/pull/687#issuecomment-332494735
2017-09-27 15:44:19 +04:00
Emmanuel Odeke
925696ca65 Merge branch 'p2p-prune-unused-IPRangeCount-funcs' into develop 2017-09-22 21:09:41 -06:00
Ethan Buchman
7682ad9a60 Merge pull request #675 from tendermint/release-v0.11.0
Release v0.11.0
2017-09-22 13:48:52 -04:00
Ethan Buchman
3e92d295e4 glide for tmlibs 0.3.1 2017-09-22 13:25:10 -04:00
Ethan Buchman
661d336dd5 glide 2017-09-22 12:29:53 -04:00
Ethan Buchman
d1a00c684e types: comments 2017-09-22 12:00:37 -04:00
Ethan Buchman
db034e079a version bump 2017-09-22 11:44:57 -04:00
Ethan Buchman
7d983a548b changelog 2017-09-22 11:44:25 -04:00
Ethan Buchman
8311f5c611 abci.Info takes a struct; less merkleeyes 2017-09-22 11:42:40 -04:00
Ethan Buchman
628791e5a5 Merge pull request #665 from tendermint/no-internet
p2p: allow listener with no external connection
2017-09-22 10:16:50 -04:00
Ethan Buchman
df857266b6 update glide 2017-09-22 10:14:05 -04:00
Ethan Buchman
ddb3d8945d p2p: allow listener with no external connection 2017-09-22 10:13:23 -04:00
Ethan Buchman
7f5908b622 Merge pull request #637 from tendermint/feature/hsm
PrivValidator Interface
2017-09-22 00:28:40 -04:00
Ethan Buchman
24f7b9387a more tests 2017-09-22 00:05:39 -04:00
Ethan Buchman
756818f940 fixes from review 2017-09-21 21:44:36 -04:00
Ethan Buchman
2131f8d330 some fixes from review 2017-09-21 17:21:20 -04:00
Ethan Buchman
8ae2ffda89 put funcs back in order to simplify review 2017-09-21 16:59:25 -04:00
Ethan Buchman
75b97a5a65 PrivValidatorFS is like old PrivValidator, for now 2017-09-21 16:46:31 -04:00
Ethan Buchman
7b99039c34 make signBytesHRS a method on LastSignedInfo 2017-09-21 15:54:33 -04:00
Ethan Buchman
3ca7b10ad4 types: more . -> cmn 2017-09-21 15:54:33 -04:00
Ethan Buchman
779c2a22d0 node: NewNode takes DBProvider and GenDocProvider 2017-09-21 15:54:33 -04:00
Ethan Buchman
147a18b34a fix some comments 2017-09-21 15:52:25 -04:00
Ethan Buchman
4382c8d28b fix tests 2017-09-21 15:52:25 -04:00
Ethan Buchman
944ebccfe9 more PrivValidator interface 2017-09-21 15:51:20 -04:00
Ethan Buchman
fd1b0b997a PrivValidator interface 2017-09-21 15:51:20 -04:00
Ethan Buchman
abe912c610 FuncSignerAndApp allows custom signer and abci app 2017-09-21 15:50:43 -04:00
Ethan Buchman
66fcdf7c7a minor fixes 2017-09-21 15:50:43 -04:00
Adrian Brink
4e13a19339 Add ability to construct new instance of Tendermint core from scratch 2017-09-21 15:50:43 -04:00
Adrian Brink
7dd3c007c7 Refactor priv_validator
Users can now just pass an object that implements the Signer interface.
2017-09-21 15:50:43 -04:00
Duncan Jones
0d392a0442 Allow Signer to be generated with priv key
Prior to this change, a custom Signer would have no knowledge of the private
key stored in the configuration file. This change introduces a generator
function, which creates a Signer based on the private key. This gives
custom Signers an opportunity to adjust their behaviour based on the key
contents (e.g. imagine the key contents are a key label, rather than the key
itself).
2017-09-21 15:50:43 -04:00
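An illustrative Go sketch of the generator-function idea; the names and the use of stdlib ed25519 are assumptions, not the tendermint crypto types:

```
// Sketch: the default Signer is built from the configured private key, but
// a custom generator can interpret those key bytes however it likes, e.g.
// as a label pointing at an HSM-held key.
package main

import (
	"crypto/ed25519"
	"fmt"
)

// Signer is the minimal signing interface a PrivValidator needs.
type Signer interface {
	Sign(msg []byte) ([]byte, error)
}

// SignerGenerator builds a Signer from the private key material found in
// the validator config.
type SignerGenerator func(privKey ed25519.PrivateKey) Signer

type defaultSigner struct{ key ed25519.PrivateKey }

func (s defaultSigner) Sign(msg []byte) ([]byte, error) {
	return ed25519.Sign(s.key, msg), nil
}

func NewDefaultSigner(key ed25519.PrivateKey) Signer { return defaultSigner{key: key} }

func main() {
	_, priv, _ := ed25519.GenerateKey(nil)
	var gen SignerGenerator = NewDefaultSigner // swap in a custom generator here
	sig, _ := gen(priv).Sign([]byte("vote"))
	fmt.Printf("signature: %x...\n", sig[:8])
}
```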
Duncan Jones
7e4a704bd1 Remove reliance on default Signer
This change allows the default privValidator to use a custom Signer
implementation with no reliance on the default Signer implementation.
2017-09-21 15:50:43 -04:00
Adrian Brink
bf5e956087 Change capitalisation 2017-09-21 15:50:43 -04:00
Adrian Brink
2ccc3326ec Remove redundant file 2017-09-21 15:50:43 -04:00
Adrian Brink
83f7d5c95a Setup custom tendermint node
By exporting all of the commands, we allow users to set up their own
tendermint node CLI. This enables users to provide a different
privValidator without the need to fork tendermint.
2017-09-21 15:50:43 -04:00
Adrian Brink
2c129447fd Example that showcases how to build your own tendermint node
This example shows how a user of the tendermint library can build their
own node and supply it with their own commands. It includes two TODOs to
make it easier for library users to use tendermint.
2017-09-21 15:50:43 -04:00
Ethan Buchman
b50339e8e7 p2p: sw.AddPeer -> sw.addPeer 2017-09-21 15:47:41 -04:00
Ethan Buchman
8ff5b365dd Merge pull request #650 from tendermint/consensus_params
Consensus params
2017-09-21 15:46:57 -04:00
Ethan Buchman
1f0985689d ConsensusParams ptr in GenesisDoc for json 2017-09-21 15:22:58 -04:00
Ethan Buchman
3089bbf2b8 Amount -> Power. Closes #166 2017-09-21 14:59:27 -04:00
Ethan Buchman
5feeb65cf0 dont use pointers for ConsensusParams 2017-09-21 14:59:24 -04:00
Ethan Buchman
715e74186c fixes from review 2017-09-21 14:51:29 -04:00
Ethan Buchman
3a03fe5a15 updated to match adr 005 2017-09-21 14:51:29 -04:00
Ethan Buchman
d343560108 adr: add 005 consensus params 2017-09-21 14:51:29 -04:00
Ethan Buchman
2b6db268cf genesis json tests and mv ConsensusParams to types 2017-09-21 14:51:29 -04:00
Ethan Buchman
14abdd57f3 genDoc.ValidateAndComplete 2017-09-21 14:51:29 -04:00
Ethan Buchman
1f3e4d2d9a move PartSetSize out of the config, into ConsensusParams 2017-09-21 14:51:29 -04:00
Ethan Buchman
29bfcb0a31 minor comments/changes 2017-09-21 14:51:29 -04:00
Ethan Buchman
301845943c Merge pull request #663 from tendermint/readme-update
update readme and other files
2017-09-21 12:36:11 -04:00
Ethan Buchman
b0017c5460 update CHANGELOG [ci skip] 2017-09-21 12:34:37 -04:00
Zach
0b61d22652 Merge pull request #664 from tendermint/windows-path
use filepath for windows compatibility
2017-09-21 07:34:20 -04:00
Zach Ramsay
70b95135e6 consensus: use filepath for windows compatibility, closes #595 2017-09-18 17:49:43 -04:00
Zach Ramsay
a3d925ac1d pr fixes 2017-09-18 16:52:00 -04:00
Zach Ramsay
cf9a03f698 docs: organize install a bit better 2017-09-18 16:42:24 -04:00
Ethan Buchman
7f8240dfde Merge pull request #521 from tendermint/json-rpc-patch
updated json response to match spec by @davebryson
2017-09-18 16:37:34 -04:00
Anton Kaliaev
f8b152972f return method not found error
if somebody tries to access a WS method in a non-WS context
2017-09-18 16:36:03 -04:00
Anton Kaliaev
95875c55fc ID must be present in both request and response
from the spec:
This member is REQUIRED.
It MUST be the same as the value of the id member in the Request Object.
If there was an error in detecting the id in the Request object (e.g. Parse error/Invalid Request), it MUST be Null.
2017-09-18 16:36:03 -04:00
Anton Kaliaev
7fadde0b37 check for request ID after receiving it 2017-09-18 16:36:03 -04:00
Anton Kaliaev
e36c79f713 capitalize RpcError 2017-09-18 16:36:03 -04:00
Anton Kaliaev
2252071866 update changelog 2017-09-18 16:36:03 -04:00
Anton Kaliaev
b700ed8e31 remove check for non-empty message as it should always be present 2017-09-18 16:36:03 -04:00
Anton Kaliaev
e1fd587ddd we now omit error if empty 2017-09-18 16:36:03 -04:00
Anton Kaliaev
f74de4cb86 include optional data field in error object
```
data
A Primitive or Structured value that contains additional information about the error.
This may be omitted.
The value of this member is defined by the Server (e.g. detailed error information, nested errors etc.).
```
2017-09-18 16:36:02 -04:00
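A sketch of what that optional `data` member looks like on a Go error type; the field names follow the JSON-RPC 2.0 spec, but this is not the exact `rpc/lib/types` definition:

```
// Sketch: an error object with an optional data field, omitted from the
// JSON when empty, and satisfying the error interface.
package main

import (
	"encoding/json"
	"fmt"
)

type RPCError struct {
	Code    int    `json:"code"`
	Message string `json:"message"`
	// Data carries server-defined extra information and may be omitted.
	Data string `json:"data,omitempty"`
}

func (e RPCError) Error() string {
	return fmt.Sprintf("RPC error %d - %s: %s", e.Code, e.Message, e.Data)
}

func main() {
	with, _ := json.Marshal(RPCError{Code: -32603, Message: "Internal error", Data: "tx already exists in cache"})
	without, _ := json.Marshal(RPCError{Code: -32601, Message: "Method not found"})
	fmt.Println(string(with))
	fmt.Println(string(without)) // no "data" member at all
}
```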
Anton Kaliaev
6c1572c9b8 fix invalid memory address or nil pointer dereference 2017-09-18 16:36:02 -04:00
Dave Bryson
60a1f49a5c updated json response to match spec by @davebryson 2017-09-18 16:35:50 -04:00
Zach Ramsay
740167202f readme & al., update links to docs 2017-09-18 16:23:22 -04:00
Ethan Buchman
e0017c8a97 Merge pull request #654 from tendermint/docs-add-tools
consolidate docs from tools repo
2017-09-18 13:55:37 -04:00
Ethan Buchman
4a78c1bb28 Merge pull request #661 from tendermint/docs-add-ecosystem
docs: add ecosystem from website
2017-09-18 13:55:09 -04:00
Zach Ramsay
1d3f723ccc docs: remove last section from ecosystem 2017-09-18 13:41:12 -04:00
Zach Ramsay
77408b7bde docs: using ABCI-CLI 2017-09-18 13:31:18 -04:00
Ethan Buchman
0cd1bd6d8b Merge pull request #658 from tendermint/organize-docs
docs: organize the directory
2017-09-18 13:08:38 -04:00
Zach Ramsay
881d2ce31e docs: pull from tools' master branch 2017-09-18 12:14:23 -04:00
Zach Ramsay
ad79ead93d docs: add and re-format the ecosystem from website 2017-09-18 11:57:35 -04:00
Zach Ramsay
17a748c796 docs: rename file 2017-09-18 09:39:23 -04:00
Zach Ramsay
583599b19f docs: add software.json from website (ecosystem) 2017-09-18 09:38:22 -04:00
Zach Ramsay
044fe56b43 update changelog 2017-09-16 15:24:30 -04:00
Zach Ramsay
b46da19b74 docs: organize the directory, #656 2017-09-16 15:19:22 -04:00
Zach Ramsay
d635783d07 docs: finish pull from tools 2017-09-16 15:11:10 -04:00
Zach Ramsay
f79e13af38 docs: pull from tools on build 2017-09-16 15:11:10 -04:00
Zach Ramsay
34e6474ad9 docs: organize tm-bench/monitor description 2017-09-16 15:11:10 -04:00
Zach Ramsay
5c96e0c812 docs: add original tm-bench/monitor files 2017-09-16 15:11:10 -04:00
Zach Ramsay
921a2b41f0 docs: add kubes docs to mintnet doc, from tools 2017-09-16 15:11:10 -04:00
Zach Ramsay
a8cec967ac docs: harmonize headers for tools docs 2017-09-16 15:11:10 -04:00
Zach Ramsay
044c500cc0 docs: move images in from tools repo 2017-09-16 15:11:10 -04:00
Zach Ramsay
a917ec9ea2 docs: update index.rst 2017-09-16 15:11:10 -04:00
Zach Ramsay
d4ccc88676 docs: convert from md to rst 2017-09-16 15:11:10 -04:00
Zach Ramsay
12c85c4e60 docs: add original README's from tools repo 2017-09-16 15:11:10 -04:00
Ethan Buchman
90c0267bc1 types: privVal.Sign returns an error 2017-09-16 01:07:04 -04:00
Ethan Buchman
8963bf08aa Merge pull request #657 from tendermint/fix/adr
docs: update and clean up adr
2017-09-15 19:32:58 -04:00
Ethan Buchman
ede4f818fd docs: update and clean up adr 2017-09-15 19:31:37 -04:00
Ethan Buchman
126f63c0a2 Merge pull request #653 from tendermint/improvement/peer_interface
peer interface
2017-09-15 18:43:53 -04:00
Ethan Buchman
aea8629272 peer interface 2017-09-15 18:40:59 -04:00
Emmanuel Odeke
5138bcb1c7 p2p: delete unused and untested *IPRangeCount functions
Fixes #602

Delete unused and untested functions:
- AddToIPRangeCounts
- CheckIPRangeCounts
2017-09-15 14:03:01 -06:00
Ethan Buchman
cc50dc076a Merge pull request #638 from tendermint/feature/state_tests
Upgrade state_test.go
2017-09-12 14:37:59 -04:00
Ethan Buchman
bf576f0097 state: minor comment fixes 2017-09-12 14:37:32 -04:00
Zach
377747b061 Merge pull request #651 from tendermint/doc/testnet
Started expanding "automated deployments" in documentation
2017-09-12 12:41:42 -04:00
Adrian Brink
870a98ccc3 Last fixes 2017-09-12 17:12:19 +02:00
Adrian Brink
8eda3efa28 Cleanup lines to fit within 72 characters 2017-09-12 17:08:30 +02:00
Zach Ramsay
e2c3cc9685 docs: give index a Tools section 2017-09-12 10:57:34 -04:00
Adrian Brink
2a6e71a753 Reformat tests to extract common setup 2017-09-12 16:57:10 +02:00
GaryTomato
340d273f83 Started expanding "automated deployments" 2017-09-11 11:34:36 -04:00
Ethan Buchman
54c63726b0 p2p: minor comment fixes 2017-09-06 16:40:21 -04:00
Ethan Buchman
cb80ab2965 Merge remote-tracking branch 'orijtech/p2p-full-test-PeerSet' into develop 2017-09-06 16:40:07 -04:00
Emmanuel Odeke
c48e772115 p2p: fully test PeerSet, more docs, parallelize PeerSet tests
* Fully test PeerSet and check its concurrency guarantees
* Improve the doc for PeerSet.Has and remove an unnecessary
defer for a path that just sets a variable, making it fast anyway.
* Parallelize PeerSet tests with t.Parallel()
* Document functions in peer_set.go more.
2017-09-06 02:47:31 -06:00
Ethan Buchman
aa78fc14b5 cmd: dont wait for genesis. closes #562 2017-09-06 01:57:21 -04:00
Ethan Buchman
399fb9aa70 consensus: remove support for replay by #HEIGHT. closes #567 2017-09-06 01:53:41 -04:00
Ethan Buchman
ec3c91ac14 Merge pull request #618 from tendermint/historical_validators
Historical validators
2017-09-06 01:46:31 -04:00
Ethan Buchman
fae0603413 more fixes from review 2017-09-06 01:25:57 -04:00
Ethan Buchman
9af837c24d Merge pull request #644 from tendermint/release-v0.10.4
Release v0.10.4
2017-09-05 17:44:48 -04:00
Ethan Buchman
cc2b418f7f p2p: test fix 2017-09-05 17:10:11 -04:00
Ethan Buchman
6ddc30ffc7 Merge branch 'master' into release-v0.10.4 2017-09-05 16:49:57 -04:00
Ethan Buchman
aa7d63f2ff bump version to 0.10.4 2017-09-05 16:46:56 -04:00
Ethan Buchman
50f2ef548c Merge pull request #642 from tendermint/zramsay-patch-1
Delete roadmap.md
2017-09-05 16:26:04 -04:00
Ethan Buchman
b9637f7185 Update changelog and add roadmap 2017-09-05 16:25:02 -04:00
Ethan Buchman
88138c38cf mempool: reactor test 2017-09-05 16:25:02 -04:00
Ethan Buchman
5ad0da1750 Merge pull request #640 from tendermint/read-the-docs
Implement read the docs
2017-09-05 15:37:24 -04:00
Ethan Buchman
9deb647303 fixes from review 2017-09-04 18:29:51 -04:00
Ethan Buchman
b856c45b9c Merge pull request #634 from tendermint/config-p2p
p2p: put maxMsgPacketPayloadSize, recvRate, sendRate in config
2017-09-04 14:16:16 -04:00
Ethan Buchman
f0f1ebe013 rpc: Block and Commit take pointers; return latest on nil 2017-09-03 16:07:37 -04:00
Ethan Buchman
34beff117a state: comments; use wire.BinaryBytes 2017-09-03 16:07:37 -04:00
Ethan Buchman
e2e8746044 rpc: historical validators 2017-09-03 16:07:37 -04:00
Ethan Buchman
78446fd99c state: persist validators 2017-09-03 16:07:37 -04:00
Ethan Buchman
daa258ea6d p2p: put maxMsgPacketPayloadSize, recvRate, sendRate in config
Updates #628
2017-09-01 21:44:15 -04:00
Zach Ramsay
54c85ef8b9 try image 2017-09-01 21:14:26 -04:00
Zach Ramsay
de512c5923 fix 2017-09-01 21:05:12 -04:00
Zach Ramsay
8fd133ff19 docs: pretty-fy 2017-09-01 20:59:27 -04:00
Zach Ramsay
f5ba931115 docs: logo, add readme, fixes 2017-09-01 20:08:22 -04:00
Zach
3370e40fe1 Delete roadmap.md
file is out of date & provides no useful information
2017-09-01 19:38:11 -04:00
Zach Ramsay
e509e8f354 docs: clean a bunch of stuff up 2017-08-31 15:30:17 -04:00
Zach Ramsay
aaef2f7c84 docs: link fixes 2017-08-30 22:26:26 -04:00
Zach Ramsay
369aa5854f docs: fix image links 2017-08-30 22:11:21 -04:00
Zach Ramsay
9d12e485e5 docs: rst-ify the intro 2017-08-30 21:43:39 -04:00
Zach Ramsay
dc54fc707e docs: port website's intro for rtd 2017-08-30 21:34:51 -04:00
Zach Ramsay
212a5ebc2e docs: use maxdepth 2 for spec 2017-08-30 21:17:07 -04:00
Zach Ramsay
2d2e769e96 rtd: build fixes 2017-08-30 21:01:05 -04:00
Zach Ramsay
bba35c1486 docs: rpc docs to be slate, see #526, #629 2017-08-30 20:51:48 -04:00
Zach Ramsay
0a902ebf61 docs: organize the specification 2017-08-30 20:47:37 -04:00
Zach Ramsay
70c0f4b936 docs: convert markdown to rst 2017-08-30 17:54:25 -04:00
Zach Ramsay
e19fa59b63 docs: consolidate ADRs 2017-08-30 17:43:39 -04:00
Ethan Buchman
30d351b0c1 Merge pull request #623 from tendermint/fix/no-config-on-version
cmd: don't load config for version command. closes #620
2017-08-30 16:42:58 -04:00
Zach Ramsay
818314c5db docs: add sphinx Makefile & requirements 2017-08-30 16:37:51 -04:00
Zach Ramsay
700e4bb29d docs: test 2017-08-30 15:26:32 -04:00
Zach Ramsay
8ad4a1341f docs: add conf.py 2017-08-30 15:16:39 -04:00
Ethan Buchman
d87acc2d1b cmd: don't load config for version command. closes #620 2017-08-28 15:01:51 -04:00
Ethan Buchman
f2349b1092 Merge pull request #526 from tendermint/feature/rpc-docs
RPC docs
2017-08-25 18:10:21 -04:00
Ethan Buchman
7824b66f5b Merge pull request #548 from tendermint/indexing-proposal
Indexing proposal
2017-08-25 18:02:18 -04:00
Ethan Buchman
83048fb2fe Merge pull request #604 from tendermint/bugfix/ws-io-timeout
Biff up RPC WSClient
2017-08-25 17:59:34 -04:00
Ethan Buchman
9dde1a0bd4 rpc: comments 2017-08-25 17:57:09 -04:00
Ethan Buchman
bc9092d7db Merge pull request #609 from orijtech/parseConfig-doc-update
cmd/tendermint/commands: update ParseConfig doc
2017-08-21 13:55:45 -04:00
Ethan Buchman
56b959b9f9 Merge pull request #611 from tendermint/dont_panic
Dont panic
2017-08-21 13:54:58 -04:00
Zach Ramsay
b3796e0aaa rpc: typo fixes 2017-08-16 15:18:55 -04:00
Anton Kaliaev
d24083b257 generate md for Slate 2017-08-16 15:17:08 -04:00
Anton Kaliaev
83ec9f773a wrote docs for rpc methods [ci skip]
for all of them except unsafe
2017-08-16 15:17:08 -04:00
Ethan Buchman
9ceccbe9a4 consensus: recover panics in receive routine 2017-08-16 01:01:09 -04:00
Ethan Buchman
35a4912449 dont panic on getVoteBitArray 2017-08-16 00:43:55 -04:00
Ethan Buchman
01e008cedd Merge pull request #607 from tendermint/zramsay-patch-1
fix link from slack to rocket
2017-08-15 12:40:07 -04:00
Emmanuel Odeke
d8af26e75a cmd/tendermint/commands: update ParseConfig doc 2017-08-12 16:29:11 -06:00
Zach
03ddb109ec fix link from slack to rocket 2017-08-11 10:30:04 -04:00
Anton Kaliaev
2fd8496bc1 correct handling of pings and pongs
server:
- always has read & write timeouts
- ping handler never blocks the reader (see A)
- sends regular pings to check up on a client

A:
at some point the server's write buffer can become full, so in order not to
block reads from a client (see
https://github.com/gorilla/websocket/issues/97), the server may skip some
pongs. As a result, the client may disconnect. But you either have to do
that or block the reader. There is no third way.

client:
- optional read & write timeouts
- optional ping/pong to measure latency
2017-08-10 17:53:49 -04:00
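A server-side sketch of that ping/pong policy using gorilla/websocket (the library referenced above); the timeout values and handler layout are illustrative, not the ones used by the rpc package:

```
// Sketch: the reader's deadline is extended on every pong, and a separate
// goroutine sends regular pings without ever blocking the read side.
package main

import (
	"log"
	"net/http"
	"time"

	"github.com/gorilla/websocket"
)

const (
	writeWait  = 10 * time.Second
	pongWait   = 30 * time.Second
	pingPeriod = (pongWait * 9) / 10 // must be less than pongWait
)

var upgrader = websocket.Upgrader{}

func serveWS(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		log.Println("upgrade:", err)
		return
	}
	defer conn.Close()

	// Reader: every pong from the client extends the read deadline.
	conn.SetReadDeadline(time.Now().Add(pongWait))
	conn.SetPongHandler(func(string) error {
		return conn.SetReadDeadline(time.Now().Add(pongWait))
	})

	// Writer: send regular pings; a failed write is tolerated rather than
	// blocking reads from the client.
	go func() {
		ticker := time.NewTicker(pingPeriod)
		defer ticker.Stop()
		for range ticker.C {
			if err := conn.WriteControl(websocket.PingMessage, nil, time.Now().Add(writeWait)); err != nil {
				return
			}
		}
	}()

	for {
		if _, _, err := conn.ReadMessage(); err != nil {
			return // deadline exceeded or client gone
		}
	}
}

func main() {
	http.HandleFunc("/websocket", serveWS)
	log.Fatal(http.ListenAndServe("localhost:8080", nil))
}
```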
Ethan Buchman
8d7640894d Merge pull request #606 from tendermint/release-v0.10.3
Release v0.10.3
2017-08-10 10:25:48 -04:00
Ethan Buchman
c5a657f540 consensus: test proposal heartbeat 2017-08-10 01:24:23 -04:00
Ethan Buchman
1ea43e513d version bump 2017-08-10 00:14:49 -04:00
Ethan Buchman
c9e11de2a7 consensus: log ProposalHeartbeat msg 2017-08-10 00:12:16 -04:00
Ethan Buchman
f644fc5725 changelog 2017-08-10 00:04:07 -04:00
Ethan Buchman
49278a7f9c Merge pull request #579 from tendermint/feature/sync_status
Add fast-sync status to Status() call
2017-08-09 23:51:25 -04:00
Ethan Buchman
b0728260e9 comments 2017-08-09 23:51:09 -04:00
Ethan Buchman
92ada55e5a make conR.FastSync() thread safe 2017-08-09 14:55:21 -04:00
Ethan Buchman
8840ae6ae2 Merge pull request #597 from zramsay/tendermint-types-links
fix dead tendermint types links in specs
2017-08-08 22:19:10 -04:00
Ethan Buchman
21dab22f17 Merge pull request #599 from zramsay/docs-getting-started
docs fixes throughout
2017-08-08 22:17:44 -04:00
Ethan Buchman
10a849c27e Merge pull request #596 from zramsay/deduplications
Deduplications
2017-08-08 22:16:25 -04:00
Anton Kaliaev
236489aecf backlog must always have higher priority 2017-08-08 19:03:48 -04:00
Ethan Buchman
7108c66e3b Merge pull request #584 from tendermint/no-empty-blocks
No empty blocks
2017-08-08 18:10:04 -04:00
Ethan Buchman
797acbe911 ws: small comment 2017-08-08 17:33:17 -04:00
Ethan Buchman
0bf66deb3c fixes from review 2017-08-08 17:09:04 -04:00
Ethan Buchman
d2f3e9faf4 test CreateEmptyBlocksInterval 2017-08-08 16:35:25 -04:00
Ethan Buchman
37f1390473 CreateEmptyBlocks and CreateEmptyBlocksInterval 2017-08-08 16:22:37 -04:00
Anton Kaliaev
9b5f21a650 [ws-server] reset readTimeout when we receive something 2017-08-08 16:03:04 -04:00
Anton Kaliaev
8267920749 [ws-client] write normal close message 2017-08-08 16:02:37 -04:00
Anton Kaliaev
6c85e4be4f change server ping period to be less frequent
no need to ping ws every 10 sec
2017-08-08 13:21:59 -04:00
Anton Kaliaev
23a87304cc add a comment for PingPongLatencyTimer [ci skip] 2017-08-08 13:20:58 -04:00
Anton Kaliaev
c14b39da5f make RPC server's ping period and pong wait configurable via options 2017-08-07 18:29:55 -04:00
Anton Kaliaev
57eee2466b make WSClient thread-safe 2017-08-07 17:56:38 -04:00
Anton Kaliaev
5d66d1c28c fixes from review 2017-08-05 13:11:00 -04:00
Ethan Buchman
fb47ca6d35 fixes from review 2017-08-04 21:36:11 -04:00
Anton Kaliaev
0013053fae allow to change pong wait and ping period 2017-08-04 10:42:55 -04:00
Anton Kaliaev
1abbb11b44 do not exit from reconnectRoutine! 2017-08-03 22:44:18 -04:00
Anton Kaliaev
54903adeff add IsReconnecting and IsActive methods 2017-08-03 19:10:15 -04:00
Anton Kaliaev
c08618f7e9 expose latency timer on WSClient 2017-08-03 19:10:14 -04:00
Anton Kaliaev
d578f7f81e biff up WS client
What's new:
- auto reconnect
- ping/pong
- colored tests
2017-08-03 19:10:14 -04:00
Ethan Buchman
043c6018b4 Merge pull request #591 from tendermint/heartbeat
broadcast proposer heartbeat msg
2017-08-03 14:35:25 -04:00
Ethan Buchman
d0965cca05 forgot heartbeat file 2017-08-03 13:58:17 -04:00
Ethan Buchman
b8ac67e240 some fixes 2017-08-03 13:25:26 -04:00
Zach Ramsay
350d584af8 docs: tons of minor improvements
closes: https://github.com/zramsay/tendermint/issues/3
closes: https://github.com/zramsay/tendermint/issues/5
2017-08-01 15:52:16 -04:00
Zach Ramsay
e3e75376ec fix mintnet-kubernetes link 2017-08-01 14:31:38 -04:00
Ethan Buchman
ab753abfa0 Proposer->Proposal; sign heartbeats 2017-07-29 17:04:28 -04:00
Zach Ramsay
bab7877fa1 route links to godoc rather than deprecated internal implementation, closes https://github.com/zramsay/tendermint/issues/1 2017-07-29 14:33:49 -04:00
Ethan Buchman
10f8101314 fix race 2017-07-29 11:45:07 -04:00
Ethan Buchman
530626dab7 broadcast proposer heartbeat msg 2017-07-29 11:45:02 -04:00
Ethan Buchman
b96d28a42b test progress in higher round 2017-07-28 23:43:30 -04:00
Ethan Buchman
3444bee47f fixes from review; use mempool.TxsAvailable() directly 2017-07-28 23:42:43 -04:00
Ethan Buchman
cf3abe5096 consensus: remove rs from handleMsg 2017-07-28 23:42:19 -04:00
Ethan Buchman
ecdda69fab commit empty blocks when needed to prove app hash 2017-07-28 22:12:11 -04:00
Ethan Buchman
fc3fe9292f fixes from review 2017-07-28 22:12:11 -04:00
Ethan Buchman
d396053872 changelog 2017-07-28 22:11:45 -04:00
Ethan Buchman
e9a2389300 cmd: --consensus.no_empty_blocks 2017-07-28 22:11:45 -04:00
Ethan Buchman
678a9a2e42 TxsAvailable tests 2017-07-28 22:11:45 -04:00
Ethan Buchman
124032e3e9 NoEmptyBlocks config option 2017-07-28 22:11:45 -04:00
Ethan Buchman
4beac54bd9 no empty blocks 2017-07-28 22:11:45 -04:00
Ethan Buchman
39493bcd9a consensus: isProposer func 2017-07-28 22:11:10 -04:00
Zach Ramsay
f96771e753 cleanup CONTRIBUTING.md, part of https://github.com/zramsay/tendermint/issues/7 2017-07-28 16:45:03 -04:00
Zach Ramsay
62f94a7948 deduplicate install, closes https://github.com/zramsay/tendermint/issues/2 2017-07-28 16:34:38 -04:00
Ethan Buchman
e9b7221292 consensus: more comments 2017-07-20 00:59:28 -04:00
Anton Kaliaev
5fea1d2675 [.editorconfig] add rule for .proto files [ci skip] 2017-07-19 12:29:10 +03:00
Anton Kaliaev
7a492e3612 Merge pull request #549 from tendermint/rpc-server-proposal
RPC server ADR
2017-07-19 10:44:12 +03:00
Anton Kaliaev
b893df3348 add rpc server proposal [ci skip] 2017-07-19 10:43:30 +03:00
Anton Kaliaev
742b5b705f update link to contributing guidelines in README.md [ci skip] 2017-07-19 10:35:54 +03:00
Peng Zhong
0153d21f3b Update website links on README.md (#589) [ci skip]
* Update website links on README.md

Changed the blog links to Medium.
Updated Tendermint website links.

* correct link for Tendermint blog [ci skip]
2017-07-19 10:31:49 +03:00
Ethan Buchman
695ad5fe2d Merge pull request #588 from tendermint/comments_cleanup
Comments and cleanup
2017-07-17 13:32:23 -04:00
Ethan Buchman
d8ca0580a8 rpc: move grpc_test from test/ to grpc/ 2017-07-17 12:58:15 -04:00
Ethan Buchman
525fc0ae5b types: block comments 2017-07-17 12:58:15 -04:00
Ethan Buchman
311f18bebf mempool: comments 2017-07-17 12:58:10 -04:00
Ethan Buchman
d49a5166ac scripts/txs: add 0x and randomness 2017-07-17 12:57:30 -04:00
Ethan Buchman
e560dd839f Merge pull request #587 from ramilexe/feature/sync_status
fast sync status
2017-07-17 11:51:44 -04:00
ramil
6f8d385dfa fast sync status 2017-07-17 09:44:23 +03:00
caojingqi
086544e367 p2p: sw.peers.List() is empty in sw.OnStart 2017-07-10 20:43:38 -04:00
Ethan Buchman
eed0297ed5 Merge pull request #538 from zramsay/docs-rework
rework docs
2017-07-10 20:32:03 -04:00
Ethan Buchman
b467515719 Merge pull request #576 from tendermint/release-v0.10.2
Release v0.10.2
2017-07-10 16:22:09 -04:00
Ethan Buchman
75df0d91ba comments from review 2017-07-10 13:39:23 -04:00
Adrian Brink
05c0dfac12 First crack at providing a fast-sync endpoint 2017-07-10 19:30:54 +02:00
Ethan Buchman
bcde80fc4f changelog 2017-07-09 18:49:22 -04:00
Ethan Buchman
5f6b996d22 breakup some long lines; add more comments to consensus reactor 2017-07-09 18:38:59 -04:00
Ethan Buchman
74a3a2b56a fix comments 2017-07-09 18:01:25 -04:00
Adrian Brink
b07d01f102 Add more comments on public functions and extra logging during 'enterPrevote'
Signed-off-by: Adrian Brink <adrian@brink-holdings.com>
2017-07-09 20:35:48 +02:00
Ethan Buchman
eed3959749 version bump 2017-07-09 13:17:01 -04:00
Zach Ramsay
278b2344fe move docs/architechture to docs/hidden 2017-07-08 14:25:15 -04:00
Zach Ramsay
05095954c9 use official golang site for installing 2017-07-08 14:09:56 -04:00
Zach Ramsay
6fef4b080e docs: add docs from website 2017-07-08 14:05:24 -04:00
Zach Ramsay
259111d0e8 address adrian's comment 2017-07-08 14:05:24 -04:00
Zach Ramsay
a707748893 add roadmap.md from site 2017-07-08 14:05:24 -04:00
Zach Ramsay
130ae133c1 fix link in changelog 2017-07-08 14:05:24 -04:00
Zach Ramsay
1d387d7452 contributing: use full version from site 2017-07-08 14:05:24 -04:00
Ethan Buchman
612726d9f6 consensus: better logging 2017-07-07 16:58:16 -04:00
Ethan Buchman
5888ddaab1 consensus: improve logging for conflicting votes 2017-07-07 13:41:50 -04:00
Ethan Buchman
e6cecb9595 p2p: fix test 2017-07-07 13:33:15 -04:00
Ethan Buchman
e36e507463 changelog 2017-07-07 13:30:30 -04:00
Ethan Buchman
3c10f7a122 add p2p flush throttle to config 2017-07-07 13:08:52 -04:00
Ethan Buchman
ca8c34f966 add consensus reactor sleep durations to the config 2017-07-07 12:39:40 -04:00
Ethan Buchman
8355fee16a Merge pull request #563 from tendermint/bugfix/ignore-tmhome-in-tests
Do some cleanup in the test, so it doesn't fail in my shell
2017-07-06 21:48:40 -04:00
Ethan Frey
850da13622 Do some cleanup in the test, so it doesn't fail in my shell 2017-06-27 13:31:32 +02:00
Anton Kaliaev
a676b2aa10 add indexing proposal [ci skip] 2017-06-21 15:07:45 +04:00
402 changed files with 32067 additions and 8534 deletions

codecov.yml

@@ -19,3 +19,8 @@ coverage:
comment:
layout: "header, diff"
behavior: default # update if exists else create new
ignore:
- "docs"
- "*.md"
- "*.rst"

.editorconfig

@@ -8,8 +8,9 @@ end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
[Makefile]
[*.{sh,Makefile}]
indent_style = tab
[*.sh]
indent_style = tab
[*.proto]
indent_style = space
indent_size = 2

.github/CODEOWNERS

@@ -0,0 +1,4 @@
# CODEOWNERS: https://help.github.com/articles/about-codeowners/
# Everything goes through Bucky and Anton. For now.
* @ebuchman @melekes

.github/PULL_REQUEST_TEMPLATE.md

@@ -0,0 +1,6 @@
<!-- Thanks for filing a PR! Before hitting the button, please check the following items.-->
* [ ] Updated all relevant documentation in docs
* [ ] Updated all code comments where relevant
* [ ] Wrote tests
* [ ] Updated CHANGELOG.md

.gitignore

@@ -15,3 +15,9 @@ test/p2p/data/
test/logs
.glide
coverage.txt
docs/_build
docs/tools
*.log
scripts/wal2json/wal2json
scripts/cutWALUntil/cutWALUntil

CHANGELOG.md

@@ -1,10 +1,255 @@
# Changelog
## Roadmap
BREAKING CHANGES:
- Better support for injecting randomness
- Upgrade consensus for more real-time use of evidence
FEATURES:
- Peer reputation management
- Use the chain as its own CA for nodes and validators
- Tooling to run multiple blockchains/apps, possibly in a single process
- State syncing (without transaction replay)
- Add authentication and rate-limiting to the RPC
IMPROVEMENTS:
- Improve subtleties around mempool caching and logic
- Consensus optimizations:
- cache block parts for faster agreement after round changes
- propagate block parts rarest first
- Better testing of the consensus state machine (ie. use a DSL)
- Auto compiled serialization/deserialization code instead of go-wire reflection
BUG FIXES:
- Graceful handling/recovery for apps that have non-determinism or fail to halt
- Graceful handling/recovery for violations of safety, or liveness
## 0.16.0 (February 20th, 2018)
BREAKING CHANGES:
- [config] use $TMHOME/config for all config and json files
- [p2p] old `--p2p.seeds` is now `--p2p.persistent_peers` (persistent peers to which TM will always connect)
- [p2p] `--p2p.seeds` is now only used for getting addresses (if addrbook is empty; not persistent)
- [p2p] NodeInfo: remove RemoteAddr and add Channels
- we must have at least one overlapping channel with peer
- we only send msgs for channels the peer advertised
- [p2p/conn] pong timeout
- [lite] comment out IAVL related code
FEATURES:
- [p2p] added new `/dial_peers&persistent=_` **unsafe** endpoint
- [p2p] persistent node key in `$TMHOME/config/node_key.json`
- [p2p] introduce peer ID and authenticate peers by ID using addresses like `ID@IP:PORT`
- [p2p/pex] new seed mode crawls the network and serves as a seed.
- [config] MempoolConfig.CacheSize
- [config] P2P.SeedMode (`--p2p.seed_mode`)
IMPROVEMENTS:
- [p2p/pex] stricter rules in the PEX reactor for better handling of abuse
- [p2p] various improvements to code structure including subpackages for `pex` and `conn`
- [docs] new spec!
- [all] speed up the tests!
BUG FIXES:
- [blockchain] StopPeerForError on timeout
- [consensus] StopPeerForError on a bad Maj23 message
- [state] flush mempool conn before calling commit
- [types] fix priv val signing things that only differ by timestamp
- [mempool] fix memory leak causing zombie peers
- [p2p/conn] fix potential deadlock
## 0.15.0 (December 29, 2017)
BREAKING CHANGES:
- [p2p] enable the Peer Exchange reactor by default
- [types] add Timestamp field to Proposal/Vote
- [types] add new fields to Header: TotalTxs, ConsensusParamsHash, LastResultsHash, EvidenceHash
- [types] add Evidence to Block
- [types] simplify ValidateBasic
- [state] updates to support changes to the header
- [state] Enforce <1/3 of validator set can change at a time
FEATURES:
- [state] Send indices of absent validators and addresses of byzantine validators in BeginBlock
- [state] Historical ConsensusParams and ABCIResponses
- [docs] Specification for the base Tendermint data structures.
- [evidence] New evidence reactor for gossiping and managing evidence
- [rpc] `/block_results?height=X` returns the DeliverTx results for a given height.
IMPROVEMENTS:
- [consensus] Better handling of corrupt WAL file
BUG FIXES:
- [lite] fix race
- [state] validate block.Header.ValidatorsHash
- [p2p] allow seed addresses to be prefixed with eg. `tcp://`
- [p2p] use consistent key to refer to peers so we dont try to connect to existing peers
- [cmd] fix `tendermint init` to ignore files that are there and generate files that aren't.
## 0.14.0 (December 11, 2017)
BREAKING CHANGES:
- consensus/wal: removed separator
- rpc/client: changed Subscribe/Unsubscribe/UnsubscribeAll funcs signatures to be identical to event bus.
FEATURES:
- new `tendermint lite` command (and `lite/proxy` pkg) for running a light-client RPC proxy.
NOTE it is currently insecure and its APIs are not yet covered by semver
IMPROVEMENTS:
- rpc/client: can act as event bus subscriber (See https://github.com/tendermint/tendermint/issues/945).
- p2p: use exponential backoff from seconds to hours when attempting to reconnect to persistent peer
- config: moniker defaults to the machine's hostname instead of "anonymous"
BUG FIXES:
- p2p: no longer exit if one of the seed addresses is incorrect
## 0.13.0 (December 6, 2017)
BREAKING CHANGES:
- abci: update to v0.8 using gogo/protobuf; includes tx tags, vote info in RequestBeginBlock, data.Bytes everywhere, use int64, etc.
- types: block heights are now `int64` everywhere
- types & node: EventSwitch and EventCache have been replaced by EventBus and EventBuffer; event types have been overhauled
- node: EventSwitch methods now refer to EventBus
- rpc/lib/types: RPCResponse is no longer a pointer; WSRPCConnection interface has been modified
- rpc/client: WaitForOneEvent takes an EventsClient instead of types.EventSwitch
- rpc/client: Add/RemoveListenerForEvent are now Subscribe/Unsubscribe
- rpc/core/types: ResultABCIQuery wraps an abci.ResponseQuery
- rpc: `/subscribe` and `/unsubscribe` take `query` arg instead of `event`
- rpc: `/status` returns the LatestBlockTime in human readable form instead of in nanoseconds
- mempool: cached transactions return an error instead of an ABCI response with BadNonce
FEATURES:
- rpc: new `/unsubscribe_all` WebSocket RPC endpoint
- rpc: new `/tx_search` endpoint for filtering transactions by more complex queries
- p2p/trust: new trust metric for tracking peers. See ADR-006
- config: TxIndexConfig allows to set what DeliverTx tags to index
IMPROVEMENTS:
- New asynchronous events system using `tmlibs/pubsub`
- logging: Various small improvements
- consensus: Graceful shutdown when app crashes
- tests: Fix various non-deterministic errors
- p2p: more defensive programming
BUG FIXES:
- consensus: fix panic where prs.ProposalBlockParts is not initialized
- p2p: fix panic on bad channel
## 0.12.1 (November 27, 2017)
BUG FIXES:
- upgrade tmlibs dependency to enable Windows builds for Tendermint
## 0.12.0 (October 27, 2017)
BREAKING CHANGES:
- rpc/client: websocket ResultsCh and ErrorsCh unified in ResponsesCh.
- rpc/client: ABCIQuery no longer takes `prove`
- state: remove GenesisDoc from state.
- consensus: new binary WAL format provides efficiency and uses checksums to detect corruption
- use scripts/wal2json to convert to json for debugging
FEATURES:
- new `certifiers` pkg contains the tendermint light-client library (name subject to change)!
- rpc: `/genesis` includes the `app_options`.
- rpc: `/abci_query` takes an additional `height` parameter to support historical queries.
- rpc/client: new ABCIQueryWithOptions supports options like `trusted` (set false to get a proof) and `height` to query a historical height.
IMPROVEMENTS:
- rpc: `/genesis` result includes `app_options`
- rpc/lib/client: add jitter to reconnects.
- rpc/lib/types: `RPCError` satisfies the `error` interface.
BUG FIXES:
- rpc/client: fix ws deadlock after stopping
- blockchain: fix panic on AddBlock when peer is nil
- mempool: fix sending on TxsAvailable when a tx has been invalidated
- consensus: don't run WAL catchup if we fast synced
## 0.11.1 (October 10, 2017)
IMPROVEMENTS:
- blockchain/reactor: respondWithNoResponseMessage for missing height
BUG FIXES:
- rpc: fixed client WebSocket timeout
- rpc: client now resubscribes on reconnection
- rpc: fix panics on missing params
- rpc: fix `/dump_consensus_state` to have normal json output (NOTE: technically breaking, but worth a bug fix label)
- types: fixed out of range error in VoteSet.addVote
- consensus: fix wal autofile via https://github.com/tendermint/tmlibs/blob/master/CHANGELOG.md#032-october-2-2017
## 0.11.0 (September 22, 2017)
BREAKING:
- genesis file: validator `amount` is now `power`
- abci: Info, BeginBlock, InitChain all take structs
- rpc: various changes to match JSONRPC spec (http://www.jsonrpc.org/specification), including breaking ones:
- requests that previously returned HTTP code 4XX now return 200 with an error code in the JSONRPC.
- `rpctypes.RPCResponse` uses new `RPCError` type instead of `string` (see the sketch after this list).
- cmd: if there is no genesis, exit immediately instead of waiting around for one to show.
- types: `Signer.Sign` returns an error.
- state: every validator set change is persisted to disk, which required some changes to the `State` structure.
- p2p: new `p2p.Peer` interface used for all reactor methods (instead of `*p2p.Peer` struct).
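Since failed requests now come back with HTTP 200 and an error object inside the JSON-RPC envelope, a client has to inspect the `error` field rather than the HTTP status code. The sketch below decodes such a response; the field names follow the JSON-RPC 2.0 spec linked above, and any extra detail carried by Tendermint's own `RPCError` is not modelled here.
```
package main

import (
	"encoding/json"
	"fmt"
)

// rpcError mirrors the error object from the JSON-RPC 2.0 spec (code, message,
// data); Tendermint's RPCError may carry more detail than is modelled here.
type rpcError struct {
	Code    int    `json:"code"`
	Message string `json:"message"`
	Data    string `json:"data,omitempty"`
}

type rpcResponse struct {
	JSONRPC string          `json:"jsonrpc"`
	ID      string          `json:"id"`
	Result  json.RawMessage `json:"result,omitempty"`
	Error   *rpcError       `json:"error,omitempty"`
}

func main() {
	// A failed request now arrives with HTTP 200 and the error in the body.
	body := []byte(`{"jsonrpc":"2.0","id":"","error":{"code":-32603,"message":"Internal error","data":"height must be greater than 0"}}`)

	var resp rpcResponse
	if err := json.Unmarshal(body, &resp); err != nil {
		panic(err)
	}
	if resp.Error != nil {
		fmt.Printf("RPC error %d: %s (%s)\n", resp.Error.Code, resp.Error.Message, resp.Error.Data)
		return
	}
	fmt.Println("result:", string(resp.Result))
}
```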
FEATURES:
- rpc: `/validators?height=X` allows querying of validators at previous heights.
- rpc: Leaving the `height` param empty for `/block`, `/validators`, and `/commit` will return the value for the latest height.
IMPROVEMENTS:
- docs: Moved all docs from the website and tools repo in, converted to `.rst`, and cleaned up for presentation on `tendermint.readthedocs.io`
BUG FIXES:
- fix WAL opening issue on Windows
## 0.10.4 (September 5, 2017)
IMPROVEMENTS:
- docs: Added Slate docs to each rpc function (see rpc/core)
- docs: Ported all website docs to Read The Docs
- config: expose some p2p params to tweak performance: RecvRate, SendRate, and MaxMsgPacketPayloadSize
- rpc: Upgrade the websocket client and server, including improved auto reconnect, and proper ping/pong
BUG FIXES:
- consensus: fix panic on getVoteBitArray
- consensus: hang instead of panicking on byzantine consensus failures
- cmd: don't load config for version command
## 0.10.3 (August 10, 2017)
FEATURES:
- control over empty block production:
- new flag, `--consensus.create_empty_blocks`; when set to false, blocks are only created when there are txs or when the AppHash changes.
- new config option, `consensus.create_empty_blocks_interval`; an empty block is created after this many seconds.
- in normal operation, `create_empty_blocks = true` and `create_empty_blocks_interval = 0`, so blocks are being created all the time (as in all previous versions of tendermint). The number of empty blocks can be reduced by increasing `create_empty_blocks_interval` or by setting `create_empty_blocks = false`.
- new `TxsAvailable()` method added to Mempool that returns a channel which fires when txs are available (see the sketch after this list).
- new heartbeat message added to consensus reactor to notify peers that a node is waiting for txs before entering propose step.
- rpc: Add `syncing` field to response returned by `/status`. Is `true` while in fast-sync mode.
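The `TxsAvailable()` entry above describes a channel that fires when the mempool has transactions. Below is a minimal sketch of the consuming side only; the channel's element type and the way the mempool is obtained are assumptions for illustration, not the actual Mempool API.
```
package main

import (
	"fmt"
	"time"
)

// waitForTxs mirrors the pattern described above: block until the mempool
// signals that transactions are available, then enter the propose step.
func waitForTxs(txsAvailable <-chan struct{}, propose func()) {
	for range txsAvailable {
		propose()
	}
}

func main() {
	// Hypothetical stand-in for the channel returned by Mempool.TxsAvailable();
	// the real channel's element type may differ.
	txsAvailable := make(chan struct{})

	go waitForTxs(txsAvailable, func() {
		fmt.Println("txs available, entering propose step")
	})

	txsAvailable <- struct{}{} // simulate the mempool firing the notification
	time.Sleep(10 * time.Millisecond)
}
```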
IMPROVEMENTS:
- various improvements to documentation and code comments
BUG FIXES:
- mempool: pass height into constructor so it doesn't always start at 0
## 0.10.2 (July 10, 2017)
FEATURES:
- Enable lower latency block commits by adding consensus reactor sleep durations and p2p flush throttle timeout to the config
IMPROVEMENTS:
- More detailed logging in the consensus reactor and state machine
- More in-code documentation for many exposed functions, especially in consensus/reactor.go and p2p/switch.go
- Improved readability for some function definitions and code blocks with long lines
## 0.10.1 (June 28, 2017)
FEATURES:
- Use `--trace` to get stack traces for logged errors
- types: GenesisDoc.ValidatorHash returns the hash of the genesis validator set
- types: GenesisDocFromFile parses a GenesisDoc from a JSON file
IMPROVEMENTS:
@@ -28,7 +273,7 @@ Also includes the Grand Repo-Merge of 2017.
BREAKING CHANGES:
- Config and Flags:
- The `config` map is replaced with a [`Config` struct](https://github.com/tendermint/tendermint/blob/master/config/config.go#L11),
containing substructs: `BaseConfig`, `P2PConfig`, `MempoolConfig`, `ConsensusConfig`, `RPCConfig`
- This affects the following flags:
- `--seeds` is now `--p2p.seeds`
@@ -41,16 +286,16 @@ containing substructs: `BaseConfig`, `P2PConfig`, `MempoolConfig`, `ConsensusCon
```
[p2p]
laddr="tcp://1.2.3.4:46656"
[consensus]
timeout_propose=1000
```
- Use viper and `DefaultConfig() / TestConfig()` functions to handle defaults, and remove `config/tendermint` and `config/tendermint_test`
- Change some function and method signatures to
- Change some [function and method signatures](https://gist.github.com/ebuchman/640d5fc6c2605f73497992fe107ebe0b) to accommodate new config
- Logger
- Replace static `log15` logger with a simple interface, and provide a new implementation using `go-kit`.
See our new [logging library](https://github.com/tendermint/tmlibs/log) and [blog post](https://tendermint.com/blog/abstracting-the-logger-interface-in-go) for more details
- Levels `warn` and `notice` are removed (you may need to change them in your `config.toml`!)
- Change some [function and method signatures](https://gist.github.com/ebuchman/640d5fc6c2605f73497992fe107ebe0b) to accept a logger
@@ -93,7 +338,7 @@ IMPROVEMENTS:
- Limit `/blockchain_info` call to return a maximum of 20 blocks
- Use `.Wrap()` and `.Unwrap()` instead of eg. `PubKeyS` for `go-crypto` types
- RPC JSON responses use pretty printing (via `json.MarshalIndent`)
- Color code different instances of the consensus for tests
- Isolate viper to `cmd/tendermint/commands` and do not read config from file for tests
@@ -121,7 +366,7 @@ IMPROVEMENTS:
- WAL uses #ENDHEIGHT instead of #HEIGHT (#HEIGHT will stop working in 0.10.0)
- Peers included via `--seeds`, under `seeds` in the config, or in `/dial_seeds` are now persistent, and will be reconnected to if the connection breaks
BUG FIXES:
- Fix bug in fast-sync where we stop syncing after a peer is removed, even if they're re-added later
- Fix handshake replay to handle validator set changes and results of DeliverTx when we crash after app.Commit but before state.Save()
@@ -137,7 +382,7 @@ message RequestQuery{
bytes data = 1;
string path = 2;
uint64 height = 3;
bool prove = 4;
}
message ResponseQuery{
@@ -161,7 +406,7 @@ type BlockMeta struct {
}
```
- `ValidatorSet.Proposer` is exposed as a field and persisted with the `State`. Use `GetProposer()` to initialize or update after validator-set changes.
- `tendermint gen_validator` command output is now pure JSON
@@ -204,7 +449,7 @@ type BlockID struct {
}
```
- `Vote` data type now includes validator address and index:
```
type Vote struct {
@@ -224,7 +469,7 @@ type Vote struct {
FEATURES:
- New message type on the ConsensusReactor, `Maj23Msg`, for peers to alert others they've seen a Maj23,
in order to track and handle conflicting votes intelligently to prevent Byzantine faults from causing halts:
```
@@ -247,7 +492,7 @@ IMPROVEMENTS:
- Less verbose logging
- Better test coverage (37% -> 49%)
- Canonical SignBytes for signable types
- Write-Ahead Log for Mempool and Consensus via tmlibs/autofile
- Better in-process testing for the consensus reactor and byzantine faults
- Better crash/restart testing for individual nodes at preset failure points, and of networks at arbitrary points
- Better abstraction over timeout mechanics
@@ -327,7 +572,7 @@ FEATURES:
- TMSP and RPC support TCP and UNIX sockets
- Addition config options including block size and consensus parameters
- New WAL mode `cswal_light`; logs only the validator's own votes
- New RPC endpoints:
- for starting/stopping profilers, and for updating config
- `/broadcast_tx_commit`, returns when tx is included in a block, else an error
- `/unsafe_flush_mempool`, empties the mempool
@@ -348,14 +593,14 @@ BUG FIXES:
Strict versioning only began with the release of v0.7.0, in late summer 2016.
The project itself began in early summer 2014 and was workable decentralized cryptocurrency software by the end of that year.
Through the course of 2015, in collaboration with Eris Industries (now Monax Industries),
many additional features were integrated, including an implementation from scratch of the Ethereum Virtual Machine.
That implementation now forms the heart of [ErisDB](https://github.com/eris-ltd/eris-db).
That implementation now forms the heart of [Burrow](https://github.com/hyperledger/burrow).
In the later half of 2015, the consensus algorithm was upgraded with a more asynchronous design and a more deterministic and robust implementation.
By late 2015, frustration with the difficulty of forking a large monolithic stack to create alternative cryptocurrency designs led to the
invention of the Application Blockchain Interface (ABCI), then called the Tendermint Socket Protocol (TMSP).
The Ethereum Virtual Machine and various other transaction features were removed, and Tendermint was whittled down to a core consensus engine
driving an application running in another process.
The ABCI interface and implementation were iterated on and improved over the course of 2016,
until versioned history kicked in with v0.7.0.

View File

@@ -1,16 +1,103 @@
# Contributing guidelines
# Contributing
**Thanks for considering making contributions to Tendermint!**
Thank you for considering making contributions to Tendermint and related repositories! Start by taking a look at the [coding repo](https://github.com/tendermint/coding) for overall information on repository workflow and standards.
Please follow standard github best practices: fork the repo, **branch from the
tip of develop**, make some commits, test your code changes with `make test`,
and submit a pull request to develop.
Please follow standard github best practices: fork the repo, branch from the tip of develop, make some commits, and submit a pull request to develop. See the [open issues](https://github.com/tendermint/tendermint/issues) for things we need help with!
See the [open issues](https://github.com/tendermint/tendermint/issues) for
things we need help with!
Please make sure to use `gofmt` before every commit - the easiest way to do this is have your editor run it for you upon saving a file.
Please make sure to use `gofmt` before every commit - the easiest way to do
this is have your editor run it for you upon saving a file.
## Forking
You can read the full guide [on our
site](https://tendermint.com/docs/guides/contributing).
Please note that Go requires code to live under absolute paths, which complicates forking.
While my fork lives at `https://github.com/ebuchman/tendermint`,
the code should never exist at `$GOPATH/src/github.com/ebuchman/tendermint`.
Instead, we use `git remote` to add the fork as a new remote for the original repo,
`$GOPATH/src/github.com/tendermint/tendermint`, and do all the work there.
For instance, to create a fork and work on a branch of it, I would:
* Create the fork on github, using the fork button.
* Go to the original repo checked out locally (ie. `$GOPATH/src/github.com/tendermint/tendermint`)
* `git remote rename origin upstream`
* `git remote add origin git@github.com:ebuchman/tendermint.git`
Now `origin` refers to my fork and `upstream` refers to the tendermint version.
So I can `git push -u origin master` to update my fork, and make pull requests to tendermint from there.
Of course, replace `ebuchman` with your git handle.
To pull in updates from the upstream repo, run
* `git fetch upstream`
* `git rebase upstream/master` (or whatever branch you want)
Please don't make Pull Requests to `master`.
## Dependencies
We use [glide](https://github.com/masterminds/glide) to manage dependencies.
That said, the master branch of every Tendermint repository should just build with `go get`, which means they should be kept up-to-date with their dependencies so we can get away with telling people they can just `go get` our software.
Since some dependencies are not under our control, a third party may break our build, in which case we can fall back on `glide install`. Even for dependencies under our control, glide helps us keep multiple repos in sync as they evolve. Anything with an executable, such as apps, tools, and the core, should use glide.
Run `bash scripts/glide/status.sh` to get a list of vendored dependencies that may not be up-to-date.
## Vagrant
If you are a [Vagrant](https://www.vagrantup.com/) user, you can get started hacking Tendermint with the commands below.
NOTE: In case you installed Vagrant in 2017, you might need to run
`vagrant box update` to upgrade to the latest `ubuntu/xenial64`.
```
vagrant up
vagrant ssh
make test
```
## Testing
All repos should be hooked up to circle.
If they have `.go` files in the root directory, they will be automatically tested by circle using `go test -v -race ./...`. If not, they will need a `circle.yml`. Ideally, every repo has a `Makefile` that defines `make test` and includes its continuous integration status using a badge in the `README.md`.
## Branching Model and Release
User-facing repos should adhere to the branching model: http://nvie.com/posts/a-successful-git-branching-model/.
That is, these repos should be well versioned, and any merge to master requires a version bump and tagged release.
Libraries need not follow the model strictly, but would be wise to,
especially `go-p2p` and `go-rpc`, as their versions are referenced in tendermint core.
### Development Procedure:
- the latest state of development is on `develop`
- `develop` must never fail `make test`
- no --force onto `develop` (except when reverting a broken commit, which should seldom happen)
- create a development branch either on github.com/tendermint/tendermint, or your fork (using `git remote add origin`)
- before submitting a pull request, rebase it on top of `develop` with `git rebase`
### Pull Merge Procedure:
- ensure pull branch is rebased on develop
- run `make test` to ensure that all tests pass
- merge pull request
- the `unstable` branch may be used to aggregate pull merges before testing once
- the push master may request that pull requests be rebased on top of `unstable`
### Release Procedure:
- start on `develop`
- run integration tests (see `test_integrations` in Makefile)
- prepare changelog/release issue
- bump versions
- push to release-vX.X.X to run the extended integration tests on the CI
- merge to master
- merge master back to develop
### Hotfix Procedure:
- start on `master`
- checkout a new branch named hotfix-vX.X.X
- make the required changes
- these changes should be small and an absolute necessity
- add a note to CHANGELOG.md
- bump versions
- push to hotfix-vX.X.X to run the extended integration tests on the CI
- merge hotfix-vX.X.X to master
- merge hotfix-vX.X.X to develop
- delete the hotfix-vX.X.X branch

View File

@@ -1,8 +1,8 @@
FROM alpine:3.6
# This is the release of tendermint to pull in.
ENV TM_VERSION 0.10.0
ENV TM_SHA256SUM a29852b8d51c00db93c87c3d148fa419a047abd38f32b2507a905805131acc19
ENV TM_VERSION 0.15.0
ENV TM_SHA256SUM 71cc271c67eca506ca492c8b90b090132f104bf5dbfe0af2702a50886e88de17
# Tendermint will be looking for genesis file in /tendermint (unless you change
# `genesis_file` in config.toml). You can put your config.toml and private

View File

@@ -1,6 +1,11 @@
# Supported tags and respective `Dockerfile` links
- `0.10.0`, `latest` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/e5342f4054ab784b2cd6150e14f01053d7c8deb2/DOCKER/Dockerfile)
- `0.15.0`, `latest` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/170777300ea92dc21a8aec1abc16cb51812513a4/DOCKER/Dockerfile)
- `0.13.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/a28b3fff49dce2fb31f90abb2fc693834e0029c2/DOCKER/Dockerfile)
- `0.12.1` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/457c688346b565e90735431619ca3ca597ef9007/DOCKER/Dockerfile)
- `0.12.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/70d8afa6e952e24c573ece345560a5971bf2cc0e/DOCKER/Dockerfile)
- `0.11.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/9177cc1f64ca88a4a0243c5d1773d10fba67e201/DOCKER/Dockerfile)
- `0.10.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/e5342f4054ab784b2cd6150e14f01053d7c8deb2/DOCKER/Dockerfile)
- `0.9.1`, `0.9`, [(Dockerfile)](https://github.com/tendermint/tendermint/blob/809e0e8c5933604ba8b2d096803ada7c5ec4dfd3/DOCKER/Dockerfile)
- `0.9.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/d474baeeea6c22b289e7402449572f7c89ee21da/DOCKER/Dockerfile)
- `0.8.0`, `0.8` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/bf64dd21fdb193e54d8addaaaa2ecf7ac371de8c/DOCKER/Dockerfile)
@@ -8,13 +13,24 @@
`develop` tag points to the [develop](https://github.com/tendermint/tendermint/tree/develop) branch.
# Quick reference
* **Where to get help:**
https://tendermint.com/community
* **Where to file issues:**
https://github.com/tendermint/tendermint/issues
* **Supported Docker versions:**
[the latest release](https://github.com/moby/moby/releases) (down to 1.6 on a best-effort basis)
# Tendermint
Tendermint Core is Byzantine Fault Tolerant (BFT) middleware that takes a state transition machine, written in any programming language, and securely replicates it on many machines.
For more background, see the [introduction](https://tendermint.com/intro).
For more background, see the [introduction](https://tendermint.readthedocs.io/en/master/introduction.html).
To get started developing applications, see the [application developers guide](https://tendermint.com/docs/guides/app-development).
To get started developing applications, see the [application developers guide](https://tendermint.readthedocs.io/en/master/getting-started.html).
# How to use this image
@@ -31,26 +47,12 @@ docker run -it --rm -v "/tmp:/tendermint" tendermint/tendermint node --proxy_app
If you want to see many containers talking to each other, consider using [mintnet-kubernetes](https://github.com/tendermint/tools/tree/master/mintnet-kubernetes), which is a tool for running Tendermint-based applications on a Kubernetes cluster.
# Supported Docker versions
This image is officially supported on Docker version 1.13.1.
Support for older versions (down to 1.6) is provided on a best-effort basis.
Please see [the Docker installation documentation](https://docs.docker.com/installation/) for details on how to upgrade your Docker daemon.
# License
View [license information](https://raw.githubusercontent.com/tendermint/tendermint/master/LICENSE) for the software contained in this image.
# User Feedback
## Issues
If you have any problems with or questions about this image, please contact us through a [GitHub](https://github.com/tendermint/tendermint/issues) issue. If the issue is related to a CVE, please check for [a `cve-tracker` issue on the `official-images` repository](https://github.com/docker-library/official-images/issues?q=label%3Acve-tracker) first.
You can also reach the image maintainers via [Slack](http://forum.tendermint.com:3000/).
## Contributing
You are invited to contribute new features, fixes, or updates, large or small; we are always thrilled to receive pull requests, and do our best to process them as fast as we can.

View File

@@ -1,57 +0,0 @@
# Install Go
[Install Go, set the `GOPATH`, and put `GOPATH/bin` on your `PATH`](https://github.com/tendermint/tendermint/wiki/Setting-GOPATH).
# Install Tendermint
You should be able to install the latest with a simple `go get -u github.com/tendermint/tendermint/cmd/tendermint`.
The `-u` makes sure all dependencies are updated as well.
Run `tendermint version` and `tendermint --help`.
If the install failed, see [vendored dependencies below](#vendored-dependencies).
To start a one-node blockchain with a simple in-process application:
```
tendermint init
tendermint node --proxy_app=dummy
```
See the [application developers guide](https://github.com/tendermint/tendermint/wiki/Application-Developers) for more details on building and running applications.
## Vendored dependencies
If the `go get` failed, updated dependencies may have broken the build.
Install the correct version of each dependency using `glide`.
First, install `glide`:
```
go get github.com/Masterminds/glide
```
Now, fetch the dependencies and install them with `glide` and `go`:
```
cd $GOPATH/src/github.com/tendermint/tendermint
glide install
go install ./cmd/tendermint
```
Sometimes `glide install` is painfully slow. Hang in there champ.
The latest Tendermint Core version is now installed. Check by running `tendermint version`.
## Troubleshooting
If `go get` failing bothers you, fetch the code using `git`:
```
mkdir -p $GOPATH/src/github.com/tendermint
git clone https://github.com/tendermint/tendermint $GOPATH/src/github.com/tendermint/tendermint
cd $GOPATH/src/github.com/tendermint/tendermint
glide install
go install ./cmd/tendermint
```

145
Makefile
View File

@@ -1,29 +1,65 @@
GOTOOLS = \
github.com/mitchellh/gox \
github.com/Masterminds/glide \
honnef.co/go/tools/cmd/megacheck
github.com/tendermint/glide \
# gopkg.in/alecthomas/gometalinter.v2
PACKAGES=$(shell go list ./... | grep -v '/vendor/')
BUILD_TAGS?=tendermint
TMHOME = $${TMHOME:-$$HOME/.tendermint}
BUILD_FLAGS = -ldflags "-X github.com/tendermint/tendermint/version.GitCommit=`git rev-parse --short=8 HEAD`"
all: install test
all: check build test install
install: get_vendor_deps
@go install --ldflags '-extldflags "-static"' \
--ldflags "-X github.com/tendermint/tendermint/version.GitCommit=`git rev-parse HEAD`" ./cmd/tendermint
check: check_tools get_vendor_deps
########################################
### Build
build:
go build \
--ldflags "-X github.com/tendermint/tendermint/version.GitCommit=`git rev-parse HEAD`" -o build/tendermint ./cmd/tendermint/
go build $(BUILD_FLAGS) -tags '$(BUILD_TAGS)' -o build/tendermint ./cmd/tendermint/
build_race:
go build -race -o build/tendermint ./cmd/tendermint
go build -race $(BUILD_FLAGS) -tags '$(BUILD_TAGS)' -o build/tendermint ./cmd/tendermint
install:
go install $(BUILD_FLAGS) -tags '$(BUILD_TAGS)' ./cmd/tendermint
########################################
### Distribution
# dist builds binaries for all platforms and packages them for distribution
dist:
@BUILD_TAGS='$(BUILD_TAGS)' sh -c "'$(CURDIR)/scripts/dist.sh'"
########################################
### Tools & dependencies
check_tools:
@# https://stackoverflow.com/a/25668869
@echo "Found tools: $(foreach tool,$(notdir $(GOTOOLS)),\
$(if $(shell which $(tool)),$(tool),$(error "No $(tool) in PATH")))"
get_tools:
@echo "--> Installing tools"
go get -u -v $(GOTOOLS)
# @gometalinter.v2 --install
update_tools:
@echo "--> Updating tools"
@go get -u $(GOTOOLS)
get_vendor_deps:
@rm -rf vendor/
@echo "--> Running glide install"
@glide install
draw_deps:
@# requires brew install graphviz or apt-get install graphviz
go get github.com/RobotsAndPencils/goviz
@goviz -i github.com/tendermint/tendermint/cmd/tendermint -d 3 | dot -Tpng -o dependency-graph.png
########################################
### Testing
test:
@echo "--> Running go test"
@go test $(PACKAGES)
@@ -35,49 +71,60 @@ test_race:
test_integrations:
@bash ./test/test.sh
test_release:
@go test -tags release $(PACKAGES)
test100:
@for i in {1..100}; do make test; done
draw_deps:
# requires brew install graphviz or apt-get install graphviz
go get github.com/RobotsAndPencils/goviz
@goviz -i github.com/tendermint/tendermint/cmd/tendermint -d 3 | dot -Tpng -o dependency-graph.png
vagrant_test:
vagrant up
vagrant ssh -c 'make install'
vagrant ssh -c 'make test_race'
vagrant ssh -c 'make test_integrations'
list_deps:
@go list -f '{{join .Deps "\n"}}' ./... | \
grep -v /vendor/ | sort | uniq | \
xargs go list -f '{{if not .Standard}}{{.ImportPath}}{{end}}'
get_deps:
@echo "--> Running go get"
@go get -v -d $(PACKAGES)
@go list -f '{{join .TestImports "\n"}}' ./... | \
grep -v /vendor/ | sort | uniq | \
xargs go get -v -d
get_vendor_deps: ensure_tools
@rm -rf vendor/
@echo "--> Running glide install"
@glide install
update_deps: tools
@echo "--> Updating dependencies"
@go get -d -u ./...
revision:
-echo `git rev-parse --verify HEAD` > $(TMHOME)/revision
-echo `git rev-parse --verify HEAD` >> $(TMHOME)/revision_history
tools:
go get -u -v $(GOTOOLS)
ensure_tools:
go get $(GOTOOLS)
########################################
### Formatting, linting, and vetting
megacheck:
@for pkg in ${PACKAGES}; do megacheck "$$pkg"; done
fmt:
@go fmt ./...
metalinter:
@echo "--> Running linter"
gometalinter.v2 --vendor --deadline=600s --disable-all \
--enable=deadcode \
--enable=gosimple \
--enable=misspell \
--enable=safesql \
./...
#--enable=gas \
#--enable=maligned \
#--enable=dupl \
#--enable=errcheck \
#--enable=goconst \
#--enable=gocyclo \
#--enable=goimports \
#--enable=golint \ <== comments on anything exported
#--enable=gotype \
#--enable=ineffassign \
#--enable=interfacer \
#--enable=megacheck \
#--enable=staticcheck \
#--enable=structcheck \
#--enable=unconvert \
#--enable=unparam \
#--enable=unused \
#--enable=varcheck \
#--enable=vet \
#--enable=vetshadow \
metalinter_all:
@echo "--> Running linter (all)"
gometalinter.v2 --vendor --deadline=600s --enable-all --disable=lll ./...
.PHONY: install build build_race dist test test_race test_integrations test100 draw_deps list_deps get_deps get_vendor_deps update_deps revision tools
# To avoid unintended conflicts with file names, always add to .PHONY
# unless there is a reason not to.
# https://www.gnu.org/software/make/manual/html_node/Phony-Targets.html
.PHONY: check build build_race dist install check_tools get_tools update_tools get_vendor_deps draw_deps test test_race test_integrations test_release test100 vagrant_test fmt metalinter metalinter_all

View File

@@ -1,77 +1,112 @@
# Tendermint
[Byzantine-Fault Tolerant](https://en.wikipedia.org/wiki/Byzantine_fault_tolerance)
[State Machine Replication](https://en.wikipedia.org/wiki/State_machine_replication).
Or [Blockchain](https://en.wikipedia.org/wiki/Blockchain_(database)) for short.
[![version](https://img.shields.io/github/tag/tendermint/tendermint.svg)](https://github.com/tendermint/tendermint/releases/latest)
[![API Reference](
https://camo.githubusercontent.com/915b7be44ada53c290eb157634330494ebe3e30a/68747470733a2f2f676f646f632e6f72672f6769746875622e636f6d2f676f6c616e672f6764646f3f7374617475732e737667
)](https://godoc.org/github.com/tendermint/tendermint)
[![chat](https://img.shields.io/badge/slack-join%20chat-pink.svg)](http://forum.tendermint.com:3000/)
[![Go version](https://img.shields.io/badge/go-1.9.2-blue.svg)](https://github.com/moovweb/gvm)
[![Rocket.Chat](https://demo.rocket.chat/images/join-chat.svg)](https://cosmos.rocket.chat/)
[![license](https://img.shields.io/github/license/tendermint/tendermint.svg)](https://github.com/tendermint/tendermint/blob/master/LICENSE)
[![](https://tokei.rs/b1/github/tendermint/tendermint?category=lines)](https://github.com/tendermint/tendermint)
Branch | Tests | Coverage | Report Card
----------|-------|----------|-------------
develop | [![CircleCI](https://circleci.com/gh/tendermint/tendermint/tree/develop.svg?style=shield)](https://circleci.com/gh/tendermint/tendermint/tree/develop) | [![codecov](https://codecov.io/gh/tendermint/tendermint/branch/develop/graph/badge.svg)](https://codecov.io/gh/tendermint/tendermint) | [![Go Report Card](https://goreportcard.com/badge/github.com/tendermint/tendermint/tree/develop)](https://goreportcard.com/report/github.com/tendermint/tendermint/tree/develop)
master | [![CircleCI](https://circleci.com/gh/tendermint/tendermint/tree/master.svg?style=shield)](https://circleci.com/gh/tendermint/tendermint/tree/master) | [![codecov](https://codecov.io/gh/tendermint/tendermint/branch/master/graph/badge.svg)](https://codecov.io/gh/tendermint/tendermint) | [![Go Report Card](https://goreportcard.com/badge/github.com/tendermint/tendermint/tree/master)](https://goreportcard.com/report/github.com/tendermint/tendermint/tree/master)
Branch | Tests | Coverage
----------|-------|----------
master | [![CircleCI](https://circleci.com/gh/tendermint/tendermint/tree/master.svg?style=shield)](https://circleci.com/gh/tendermint/tendermint/tree/master) | [![codecov](https://codecov.io/gh/tendermint/tendermint/branch/master/graph/badge.svg)](https://codecov.io/gh/tendermint/tendermint)
develop | [![CircleCI](https://circleci.com/gh/tendermint/tendermint/tree/develop.svg?style=shield)](https://circleci.com/gh/tendermint/tendermint/tree/develop) | [![codecov](https://codecov.io/gh/tendermint/tendermint/branch/develop/graph/badge.svg)](https://codecov.io/gh/tendermint/tendermint)
_NOTE: This is alpha software. Please contact us if you intend to run it in production._
Tendermint Core is Byzantine Fault Tolerant (BFT) middleware that takes a state transition machine, written in any programming language,
Tendermint Core is Byzantine Fault Tolerant (BFT) middleware that takes a state transition machine - written in any programming language -
and securely replicates it on many machines.
For more background, see the [introduction](https://tendermint.com/intro).
For more information, from introduction to install to application development, [Read The Docs](https://tendermint.readthedocs.io/en/master/).
To get started developing applications, see the [application developers guide](https://tendermint.com/docs/guides/app-development).
## Minimum requirements
### Code of Conduct
Please read, understand and adhere to our [code of conduct](CODE_OF_CONDUCT.md).
Requirement|Notes
---|---
Go version | Go1.9 or higher
## Install
To download pre-built binaries, see our [downloads page](https://tendermint.com/intro/getting-started/download).
To download pre-built binaries, see our [downloads page](https://tendermint.com/downloads).
To install from source, you should be able to:
`go get -u github.com/tendermint/tendermint/cmd/tendermint`
For more details (or if it fails), see the [install guide](https://tendermint.com/docs/guides/install).
## Contributing
Yay open source! Please see our [contributing guidelines](https://tendermint.com/docs/guides/contributing).
For more details (or if it fails), [read the docs](https://tendermint.readthedocs.io/en/master/install.html).
## Resources
### Tendermint Core
- [Introduction](https://tendermint.com/intro)
- [Docs](https://tendermint.com/docs)
- [Software using Tendermint](https://tendermint.com/ecosystem)
All resources involving the use of, building applications on, or developing for, tendermint, can be found at [Read The Docs](https://tendermint.readthedocs.io/en/master/). Additional information about some - and eventually all - of the sub-projects below, can be found at Read The Docs.
### Sub-projects
* [ABCI](http://github.com/tendermint/abci), the Application Blockchain Interface
* [Go-Wire](http://github.com/tendermint/go-wire), a deterministic serialization library
* [Go-Crypto](http://github.com/tendermint/go-crypto), an elliptic curve cryptography library
* [TmLibs](http://github.com/tendermint/tmlibs), an assortment of Go libraries
* [Merkleeyes](http://github.com/tendermint/merkleeyes), a balanced, binary Merkle tree for ABCI apps
* [Go-Crypto](http://github.com/tendermint/go-crypto), an elliptic curve cryptography library
* [TmLibs](http://github.com/tendermint/tmlibs), an assortment of Go libraries used internally
* [IAVL](http://github.com/tendermint/iavl), Merkleized IAVL+ Tree implementation
### Tools
* [Deployment, Benchmarking, and Monitoring](https://github.com/tendermint/tools)
* [Deployment, Benchmarking, and Monitoring](http://tendermint.readthedocs.io/projects/tools/en/develop/index.html#tendermint-tools)
### Applications
* [Ethermint](http://github.com/tendermint/ethermint): Ethereum on Tendermint
* [Basecoin](http://github.com/tendermint/basecoin), a cryptocurrency application framework
* [Ethermint](http://github.com/tendermint/ethermint); Ethereum on Tendermint
* [Cosmos SDK](http://github.com/cosmos/cosmos-sdk); a cryptocurrency application framework
* [Many more](https://tendermint.readthedocs.io/en/master/ecosystem.html#abci-applications)
### More
* [Tendermint Blog](https://tendermint.com/blog)
* [Cosmos Blog](https://cosmos.network/blog)
* [Original Whitepaper (out-of-date)](http://www.the-blockchain.com/docs/Tendermint%20Consensus%20without%20Mining.pdf)
* [Master's Thesis on Tendermint](https://atrium.lib.uoguelph.ca/xmlui/handle/10214/9769)
* [Original Whitepaper](https://tendermint.com/static/docs/tendermint.pdf)
* [Tendermint Blog](https://blog.cosmos.network/tendermint/home)
* [Cosmos Blog](https://blog.cosmos.network)
## Contributing
Yay open source! Please see our [contributing guidelines](CONTRIBUTING.md).
## Versioning
### SemVer
Tendermint uses [SemVer](http://semver.org/) to determine when and how the version changes.
According to SemVer, anything in the public API can change at any time before version 1.0.0
To provide some stability to Tendermint users in these 0.X.X days, the MINOR version is used
to signal breaking changes across a subset of the total public API. This subset includes all
interfaces exposed to other processes (cli, rpc, p2p, etc.), as well as parts of the following packages:
- types
- rpc/client
- config
- node
Exported objects in these packages that are not covered by the versioning scheme
are explicitly marked by `// UNSTABLE` in their go doc comment and may change at any time.
Functions, types, and values in any other package may also change at any time.
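For illustration only, an exported object excluded from the scheme would carry the marker in its go doc comment roughly like this; the type name below is hypothetical, not a claim about which objects are marked.
```
package types

// SomeExportedType is exported but deliberately left out of the versioning
// scheme described above.
// UNSTABLE
// Its fields and methods may change at any time.
type SomeExportedType struct {
	// fields elided
}
```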
### Upgrades
In an effort to avoid accumulating technical debt prior to 1.0.0,
we do not guarantee that breaking changes (ie. bumps in the MINOR version)
will work with existing tendermint blockchains. In these cases you will
have to start a new blockchain, or write something custom to get the old
data into the new chain.
However, any bump in the PATCH version should be compatible with existing histories
(if not please open an [issue](https://github.com/tendermint/tendermint/issues)).
## Code of Conduct
Please read, understand and adhere to our [code of conduct](CODE_OF_CONDUCT.md).

42
Vagrantfile vendored
View File

@@ -2,7 +2,7 @@
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "ubuntu/trusty64"
config.vm.box = "ubuntu/xenial64"
config.vm.provider "virtualbox" do |v|
v.memory = 4096
@@ -10,29 +10,43 @@ Vagrant.configure("2") do |config|
end
config.vm.provision "shell", inline: <<-SHELL
apt-get update
apt-get install -y --no-install-recommends wget curl jq shellcheck bsdmainutils psmisc
# add docker repo
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable"
wget -qO- https://get.docker.com/ | sh
usermod -a -G docker vagrant
# and golang 1.9 support
# official repo doesn't have race detection runtime...
# add-apt-repository ppa:gophers/archive
add-apt-repository ppa:longsleep/golang-backports
# install base requirements
apt-get update
apt-get install -y --no-install-recommends wget curl jq zip \
make shellcheck bsdmainutils psmisc
apt-get install -y docker-ce golang-1.9-go
apt-get install -y language-pack-en
# cleanup
apt-get autoremove -y
curl -O https://storage.googleapis.com/golang/go1.8.linux-amd64.tar.gz
tar -xvf go1.8.linux-amd64.tar.gz
rm -rf /usr/local/go
mv go /usr/local
rm -f go1.8.linux-amd64.tar.gz
mkdir -p /home/vagrant/go/bin
echo 'export PATH=$PATH:/usr/local/go/bin:/home/vagrant/go/bin' >> /home/vagrant/.bash_profile
# needed for docker
usermod -a -G docker vagrant
# set env variables
echo 'export PATH=$PATH:/usr/lib/go-1.9/bin:/home/vagrant/go/bin' >> /home/vagrant/.bash_profile
echo 'export GOPATH=/home/vagrant/go' >> /home/vagrant/.bash_profile
echo 'export LC_ALL=en_US.UTF-8' >> /home/vagrant/.bash_profile
echo 'cd go/src/github.com/tendermint/tendermint' >> /home/vagrant/.bash_profile
mkdir -p /home/vagrant/go/bin
mkdir -p /home/vagrant/go/src/github.com/tendermint
ln -s /vagrant /home/vagrant/go/src/github.com/tendermint/tendermint
chown -R vagrant:vagrant /home/vagrant/go
chown vagrant:vagrant /home/vagrant/.bash_profile
su - vagrant -c 'cd /home/vagrant/go/src/github.com/tendermint/tendermint && make get_vendor_deps'
# get all deps and tools, ready to install/test
su - vagrant -c 'source /home/vagrant/.bash_profile'
su - vagrant -c 'cd /home/vagrant/go/src/github.com/tendermint/tendermint && make get_tools && make get_vendor_deps'
SHELL
end

13
appveyor.yml Normal file
View File

@@ -0,0 +1,13 @@
version: 1.0.{build}
configuration: Release
platform:
- x64
- x86
clone_folder: c:\go\path\src\github.com\tendermint\tendermint
before_build:
- cmd: set GOPATH=%GOROOT%\path
- cmd: set PATH=%GOPATH%\bin;%PATH%
- cmd: make get_vendor_deps
build_script:
- cmd: make test
test: off

2
benchmarks/blockchain/.gitignore vendored Normal file
View File

@@ -0,0 +1,2 @@
data

View File

@@ -0,0 +1,80 @@
#!/bin/bash
DATA=$GOPATH/src/github.com/tendermint/tendermint/benchmarks/blockchain/data
if [ ! -d $DATA ]; then
echo "no data found, generating a chain... (this only has to happen once)"
tendermint init --home $DATA
cp $DATA/config.toml $DATA/config2.toml
echo "
[consensus]
timeout_commit = 0
" >> $DATA/config.toml
echo "starting node"
tendermint node \
--home $DATA \
--proxy_app dummy \
--p2p.laddr tcp://127.0.0.1:56656 \
--rpc.laddr tcp://127.0.0.1:56657 \
--log_level error &
echo "making blocks for 60s"
sleep 60
mv $DATA/config2.toml $DATA/config.toml
kill %1
echo "done generating chain."
fi
# validator node
HOME1=$TMPDIR$RANDOM$RANDOM
cp -R $DATA $HOME1
echo "starting validator node"
tendermint node \
--home $HOME1 \
--proxy_app dummy \
--p2p.laddr tcp://127.0.0.1:56656 \
--rpc.laddr tcp://127.0.0.1:56657 \
--log_level error &
sleep 1
# downloader node
HOME2=$TMPDIR$RANDOM$RANDOM
tendermint init --home $HOME2
cp $HOME1/genesis.json $HOME2
printf "starting downloader node"
tendermint node \
--home $HOME2 \
--proxy_app dummy \
--p2p.laddr tcp://127.0.0.1:56666 \
--rpc.laddr tcp://127.0.0.1:56667 \
--p2p.persistent_peers 127.0.0.1:56656 \
--log_level error &
# wait for node to start up so we only count time where we are actually syncing
sleep 0.5
while curl localhost:56667/status 2> /dev/null | grep "\"latest_block_height\": 0," > /dev/null
do
printf '.'
sleep 0.2
done
echo
echo "syncing blockchain for 10s"
for i in {1..10}
do
sleep 1
HEIGHT="$(curl localhost:56667/status 2> /dev/null \
| grep 'latest_block_height' \
| grep -o ' [0-9]*' \
| xargs)"
let 'RATE = HEIGHT / i'
echo "height: $HEIGHT, blocks/sec: $RATE"
done
kill %1
kill %2
rm -rf $HOME1 $HOME2

View File

@@ -2,11 +2,13 @@ package benchmarks
import (
"testing"
"time"
"github.com/tendermint/go-crypto"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/go-wire"
proto "github.com/tendermint/tendermint/benchmarks/proto"
"github.com/tendermint/tendermint/p2p"
ctypes "github.com/tendermint/tendermint/rpc/core/types"
)
@@ -14,11 +16,10 @@ func BenchmarkEncodeStatusWire(b *testing.B) {
b.StopTimer()
pubKey := crypto.GenPrivKeyEd25519().PubKey()
status := &ctypes.ResultStatus{
NodeInfo: &p2p.NodeInfo{
PubKey: pubKey.Unwrap().(crypto.PubKeyEd25519),
NodeInfo: p2p.NodeInfo{
PubKey: pubKey,
Moniker: "SOMENAME",
Network: "SOMENAME",
RemoteAddr: "SOMEADDR",
ListenAddr: "SOMEADDR",
Version: "SOMEVER",
Other: []string{"SOMESTRING", "OTHERSTRING"},
@@ -26,7 +27,7 @@ func BenchmarkEncodeStatusWire(b *testing.B) {
PubKey: pubKey,
LatestBlockHash: []byte("SOMEBYTES"),
LatestBlockHeight: 123,
LatestBlockTime: 1234,
LatestBlockTime: time.Unix(0, 1234),
}
b.StartTimer()
@@ -40,12 +41,11 @@ func BenchmarkEncodeStatusWire(b *testing.B) {
func BenchmarkEncodeNodeInfoWire(b *testing.B) {
b.StopTimer()
pubKey := crypto.GenPrivKeyEd25519().PubKey().Unwrap().(crypto.PubKeyEd25519)
nodeInfo := &p2p.NodeInfo{
pubKey := crypto.GenPrivKeyEd25519().PubKey()
nodeInfo := p2p.NodeInfo{
PubKey: pubKey,
Moniker: "SOMENAME",
Network: "SOMENAME",
RemoteAddr: "SOMEADDR",
ListenAddr: "SOMEADDR",
Version: "SOMEVER",
Other: []string{"SOMESTRING", "OTHERSTRING"},
@@ -61,12 +61,11 @@ func BenchmarkEncodeNodeInfoWire(b *testing.B) {
func BenchmarkEncodeNodeInfoBinary(b *testing.B) {
b.StopTimer()
pubKey := crypto.GenPrivKeyEd25519().PubKey().Unwrap().(crypto.PubKeyEd25519)
nodeInfo := &p2p.NodeInfo{
pubKey := crypto.GenPrivKeyEd25519().PubKey()
nodeInfo := p2p.NodeInfo{
PubKey: pubKey,
Moniker: "SOMENAME",
Network: "SOMENAME",
RemoteAddr: "SOMEADDR",
ListenAddr: "SOMEADDR",
Version: "SOMEVER",
Other: []string{"SOMESTRING", "OTHERSTRING"},
@@ -85,11 +84,10 @@ func BenchmarkEncodeNodeInfoProto(b *testing.B) {
b.StopTimer()
pubKey := crypto.GenPrivKeyEd25519().PubKey().Unwrap().(crypto.PubKeyEd25519)
pubKey2 := &proto.PubKey{Ed25519: &proto.PubKeyEd25519{Bytes: pubKey[:]}}
nodeInfo := &proto.NodeInfo{
nodeInfo := proto.NodeInfo{
PubKey: pubKey2,
Moniker: "SOMENAME",
Network: "SOMENAME",
RemoteAddr: "SOMEADDR",
ListenAddr: "SOMEADDR",
Version: "SOMEVER",
Other: []string{"SOMESTRING", "OTHERSTRING"},

View File

@@ -1,8 +1,9 @@
package benchmarks
import (
. "github.com/tendermint/tmlibs/common"
"testing"
cmn "github.com/tendermint/tmlibs/common"
)
func BenchmarkSomething(b *testing.B) {
@@ -11,11 +12,11 @@ func BenchmarkSomething(b *testing.B) {
numChecks := 100000
keys := make([]string, numItems)
for i := 0; i < numItems; i++ {
keys[i] = RandStr(100)
keys[i] = cmn.RandStr(100)
}
txs := make([]string, numChecks)
for i := 0; i < numChecks; i++ {
txs[i] = RandStr(100)
txs[i] = cmn.RandStr(100)
}
b.StartTimer()

View File

@@ -4,7 +4,7 @@ import (
"os"
"testing"
. "github.com/tendermint/tmlibs/common"
cmn "github.com/tendermint/tmlibs/common"
)
func BenchmarkFileWrite(b *testing.B) {
@@ -14,16 +14,20 @@ func BenchmarkFileWrite(b *testing.B) {
if err != nil {
b.Error(err)
}
testString := RandStr(200) + "\n"
testString := cmn.RandStr(200) + "\n"
b.StartTimer()
for i := 0; i < b.N; i++ {
file.Write([]byte(testString))
_, err := file.Write([]byte(testString))
if err != nil {
b.Error(err)
}
}
file.Close()
err = os.Remove("benchmark_file_write.out")
if err != nil {
if err := file.Close(); err != nil {
b.Error(err)
}
if err := os.Remove("benchmark_file_write.out"); err != nil {
b.Error(err)
}
}

View File

@@ -24,9 +24,6 @@ import bytes "bytes"
import strings "strings"
import github_com_gogo_protobuf_proto "github.com/gogo/protobuf/proto"
import sort "sort"
import strconv "strconv"
import reflect "reflect"
import io "io"
@@ -392,31 +389,6 @@ func (this *PubKeyEd25519) GoString() string {
s = append(s, "}")
return strings.Join(s, "")
}
func valueToGoStringTest(v interface{}, typ string) string {
rv := reflect.ValueOf(v)
if rv.IsNil() {
return "nil"
}
pv := reflect.Indirect(rv).Interface()
return fmt.Sprintf("func(v %v) *%v { return &v } ( %#v )", typ, typ, pv)
}
func extensionToGoStringTest(e map[int32]github_com_gogo_protobuf_proto.Extension) string {
if e == nil {
return "nil"
}
s := "map[int32]proto.Extension{"
keys := make([]int, 0, len(e))
for k := range e {
keys = append(keys, int(k))
}
sort.Ints(keys)
ss := []string{}
for _, k := range keys {
ss = append(ss, strconv.Itoa(k)+": "+e[int32(k)].GoString())
}
s += strings.Join(ss, ",") + "}"
return s
}
func (m *ResultStatus) Marshal() (data []byte, err error) {
size := m.Size()
data = make([]byte, size)
@@ -586,24 +558,6 @@ func (m *PubKeyEd25519) MarshalTo(data []byte) (int, error) {
return i, nil
}
func encodeFixed64Test(data []byte, offset int, v uint64) int {
data[offset] = uint8(v)
data[offset+1] = uint8(v >> 8)
data[offset+2] = uint8(v >> 16)
data[offset+3] = uint8(v >> 24)
data[offset+4] = uint8(v >> 32)
data[offset+5] = uint8(v >> 40)
data[offset+6] = uint8(v >> 48)
data[offset+7] = uint8(v >> 56)
return offset + 8
}
func encodeFixed32Test(data []byte, offset int, v uint32) int {
data[offset] = uint8(v)
data[offset+1] = uint8(v >> 8)
data[offset+2] = uint8(v >> 16)
data[offset+3] = uint8(v >> 24)
return offset + 4
}
func encodeVarintTest(data []byte, offset int, v uint64) int {
for v >= 1<<7 {
data[offset] = uint8(v&0x7f | 0x80)
@@ -689,9 +643,6 @@ func sovTest(x uint64) (n int) {
}
return n
}
func sozTest(x uint64) (n int) {
return sovTest(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
func (this *ResultStatus) String() string {
if this == nil {
return "nil"
@@ -742,14 +693,6 @@ func (this *PubKeyEd25519) String() string {
}, "")
return s
}
func valueToStringTest(v interface{}) string {
rv := reflect.ValueOf(v)
if rv.IsNil() {
return "nil"
}
pv := reflect.Indirect(rv).Interface()
return fmt.Sprintf("*%v", pv)
}
func (m *ResultStatus) Unmarshal(data []byte) error {
var hasFields [1]uint64
l := len(data)

View File

@@ -1,30 +1,27 @@
package main
import (
"context"
"encoding/binary"
"time"
//"encoding/hex"
"fmt"
"time"
"github.com/gorilla/websocket"
"github.com/tendermint/go-wire"
_ "github.com/tendermint/tendermint/rpc/core/types" // Register RPCResponse > Result types
"github.com/tendermint/tendermint/rpc/lib/client"
"github.com/tendermint/tendermint/rpc/lib/types"
. "github.com/tendermint/tmlibs/common"
rpcclient "github.com/tendermint/tendermint/rpc/lib/client"
cmn "github.com/tendermint/tmlibs/common"
)
func main() {
ws := rpcclient.NewWSClient("127.0.0.1:46657", "/websocket")
_, err := ws.Start()
wsc := rpcclient.NewWSClient("127.0.0.1:46657", "/websocket")
err := wsc.Start()
if err != nil {
Exit(err.Error())
cmn.Exit(err.Error())
}
defer wsc.Stop()
// Read a bunch of responses
go func() {
for {
_, ok := <-ws.ResultsCh
_, ok := <-wsc.ResponsesCh
if !ok {
break
}
@@ -37,24 +34,14 @@ func main() {
for i := 0; ; i++ {
binary.BigEndian.PutUint64(buf, uint64(i))
//txBytes := hex.EncodeToString(buf[:n])
request, err := rpctypes.MapToRequest("fakeid",
"broadcast_tx",
map[string]interface{}{"tx": buf[:8]})
if err != nil {
Exit(err.Error())
}
reqBytes := wire.JSONBytes(request)
//fmt.Println("!!", string(reqBytes))
fmt.Print(".")
err = ws.WriteMessage(websocket.TextMessage, reqBytes)
err = wsc.Call(context.TODO(), "broadcast_tx", map[string]interface{}{"tx": buf[:8]})
if err != nil {
Exit(err.Error())
cmn.Exit(err.Error())
}
if i%1000 == 0 {
fmt.Println(i)
}
time.Sleep(time.Microsecond * 1000)
}
ws.Stop()
}

View File

@@ -1,22 +1,44 @@
package blockchain
import (
"fmt"
"math"
"sync"
"time"
"github.com/tendermint/tendermint/types"
. "github.com/tendermint/tmlibs/common"
cmn "github.com/tendermint/tmlibs/common"
flow "github.com/tendermint/tmlibs/flowrate"
"github.com/tendermint/tmlibs/log"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/types"
)
/*
eg, L = latency = 0.1s
P = num peers = 10
FN = num full nodes
BS = 1kB block size
CB = 1 Mbit/s = 128 kB/s
CB/P = 12.8 kB
B/S = CB/P/BS = 12.8 blocks/s
12.8 * 0.1 = 1.28 blocks on conn
*/
const (
requestIntervalMS = 250
maxTotalRequesters = 300
requestIntervalMS = 100
maxTotalRequesters = 1000
maxPendingRequests = maxTotalRequesters
maxPendingRequestsPerPeer = 75
minRecvRate = 10240 // 10Kb/s
maxPendingRequestsPerPeer = 50
// Minimum recv rate to ensure we're receiving blocks from a peer fast
// enough. If a peer is not sending us data at at least that rate, we
// consider them to have timedout and we disconnect.
//
// Assuming a DSL connection (not a good choice) 128 Kbps (upload) ~ 15 KB/s,
// sending data across atlantic ~ 7.5 KB/s.
minRecvRate = 7680
)
var peerTimeoutSeconds = time.Duration(15) // not const so we can override with tests
@@ -33,33 +55,34 @@ var peerTimeoutSeconds = time.Duration(15) // not const so we can override with
*/
type BlockPool struct {
BaseService
cmn.BaseService
startTime time.Time
mtx sync.Mutex
// block requests
requesters map[int]*bpRequester
height int // the lowest key in requesters.
requesters map[int64]*bpRequester
height int64 // the lowest key in requesters.
numPending int32 // number of requests pending assignment or block response
// peers
peers map[string]*bpPeer
peers map[p2p.ID]*bpPeer
maxPeerHeight int64
requestsCh chan<- BlockRequest
timeoutsCh chan<- string
timeoutsCh chan<- p2p.ID
}
func NewBlockPool(start int, requestsCh chan<- BlockRequest, timeoutsCh chan<- string) *BlockPool {
func NewBlockPool(start int64, requestsCh chan<- BlockRequest, timeoutsCh chan<- p2p.ID) *BlockPool {
bp := &BlockPool{
peers: make(map[string]*bpPeer),
peers: make(map[p2p.ID]*bpPeer),
requesters: make(map[int]*bpRequester),
requesters: make(map[int64]*bpRequester),
height: start,
numPending: 0,
requestsCh: requestsCh,
timeoutsCh: timeoutsCh,
}
bp.BaseService = *NewBaseService(nil, "BlockPool", bp)
bp.BaseService = *cmn.NewBaseService(nil, "BlockPool", bp)
return bp
}
@@ -69,9 +92,7 @@ func (pool *BlockPool) OnStart() error {
return nil
}
func (pool *BlockPool) OnStop() {
pool.BaseService.OnStop()
}
func (pool *BlockPool) OnStop() {}
// Run spawns requesters as needed.
func (pool *BlockPool) makeRequestersRoutine() {
@@ -79,6 +100,7 @@ func (pool *BlockPool) makeRequestersRoutine() {
if !pool.IsRunning() {
break
}
_, numPending, lenRequesters := pool.GetStatus()
if numPending >= maxPendingRequests {
// sleep for a bit.
@@ -104,10 +126,13 @@ func (pool *BlockPool) removeTimedoutPeers() {
for _, peer := range pool.peers {
if !peer.didTimeout && peer.numPending > 0 {
curRate := peer.recvMonitor.Status().CurRate
// XXX remove curRate != 0
// curRate can be 0 on start
if curRate != 0 && curRate < minRecvRate {
pool.sendTimeout(peer.id)
pool.Logger.Error("SendTimeout", "peer", peer.id, "reason", "curRate too low")
pool.Logger.Error("SendTimeout", "peer", peer.id,
"reason", "peer is not sending us data fast enough",
"curRate", fmt.Sprintf("%d KB/s", curRate/1024),
"minRate", fmt.Sprintf("%d KB/s", minRecvRate/1024))
peer.didTimeout = true
}
}
@@ -117,7 +142,7 @@ func (pool *BlockPool) removeTimedoutPeers() {
}
}
func (pool *BlockPool) GetStatus() (height int, numPending int32, lenRequesters int) {
func (pool *BlockPool) GetStatus() (height int64, numPending int32, lenRequesters int) {
pool.mtx.Lock()
defer pool.mtx.Unlock()
@@ -135,16 +160,10 @@ func (pool *BlockPool) IsCaughtUp() bool {
return false
}
maxPeerHeight := 0
for _, peer := range pool.peers {
maxPeerHeight = MaxInt(maxPeerHeight, peer.height)
}
// some conditions to determine if we're caught up
receivedBlockOrTimedOut := (pool.height > 0 || time.Since(pool.startTime) > 5*time.Second)
ourChainIsLongestAmongPeers := maxPeerHeight == 0 || pool.height >= maxPeerHeight
ourChainIsLongestAmongPeers := pool.maxPeerHeight == 0 || pool.height >= pool.maxPeerHeight
isCaughtUp := receivedBlockOrTimedOut && ourChainIsLongestAmongPeers
pool.Logger.Info(Fmt("IsCaughtUp: %v", isCaughtUp), "height", pool.height, "maxPeerHeight", maxPeerHeight)
return isCaughtUp
}
@@ -180,46 +199,59 @@ func (pool *BlockPool) PopRequest() {
delete(pool.requesters, pool.height)
pool.height++
} else {
PanicSanity(Fmt("Expected requester to pop, got nothing at height %v", pool.height))
cmn.PanicSanity(cmn.Fmt("Expected requester to pop, got nothing at height %v", pool.height))
}
}
// Invalidates the block at pool.height,
// Remove the peer and redo request from others.
func (pool *BlockPool) RedoRequest(height int) {
// Returns the ID of the removed peer.
func (pool *BlockPool) RedoRequest(height int64) p2p.ID {
pool.mtx.Lock()
defer pool.mtx.Unlock()
request := pool.requesters[height]
pool.mtx.Unlock()
if request.block == nil {
PanicSanity("Expected block to be non-nil")
cmn.PanicSanity("Expected block to be non-nil")
}
// RemovePeer will redo all requesters associated with this peer.
// TODO: record this malfeasance
pool.RemovePeer(request.peerID)
pool.removePeer(request.peerID)
return request.peerID
}
// TODO: ensure that blocks come in order for each peer.
func (pool *BlockPool) AddBlock(peerID string, block *types.Block, blockSize int) {
func (pool *BlockPool) AddBlock(peerID p2p.ID, block *types.Block, blockSize int) {
pool.mtx.Lock()
defer pool.mtx.Unlock()
requester := pool.requesters[block.Height]
if requester == nil {
// a block we didn't expect.
// TODO:if height is too far ahead, punish peer
return
}
if requester.setBlock(block, peerID) {
pool.numPending--
peer := pool.peers[peerID]
peer.decrPending(blockSize)
if peer != nil {
peer.decrPending(blockSize)
}
} else {
// Bad peer?
}
}
// MaxPeerHeight returns the highest height reported by a peer.
func (pool *BlockPool) MaxPeerHeight() int64 {
pool.mtx.Lock()
defer pool.mtx.Unlock()
return pool.maxPeerHeight
}
// Sets the peer's alleged blockchain height.
func (pool *BlockPool) SetPeerHeight(peerID string, height int) {
func (pool *BlockPool) SetPeerHeight(peerID p2p.ID, height int64) {
pool.mtx.Lock()
defer pool.mtx.Unlock()
@@ -231,16 +263,20 @@ func (pool *BlockPool) SetPeerHeight(peerID string, height int) {
peer.setLogger(pool.Logger.With("peer", peerID))
pool.peers[peerID] = peer
}
if height > pool.maxPeerHeight {
pool.maxPeerHeight = height
}
}
func (pool *BlockPool) RemovePeer(peerID string) {
func (pool *BlockPool) RemovePeer(peerID p2p.ID) {
pool.mtx.Lock()
defer pool.mtx.Unlock()
pool.removePeer(peerID)
}
func (pool *BlockPool) removePeer(peerID string) {
func (pool *BlockPool) removePeer(peerID p2p.ID) {
for _, requester := range pool.requesters {
if requester.getPeerID() == peerID {
if requester.getBlock() != nil {
@@ -254,7 +290,7 @@ func (pool *BlockPool) removePeer(peerID string) {
// Pick an available peer with at least the given minHeight.
// If no peers are available, returns nil.
func (pool *BlockPool) pickIncrAvailablePeer(minHeight int) *bpPeer {
func (pool *BlockPool) pickIncrAvailablePeer(minHeight int64) *bpPeer {
pool.mtx.Lock()
defer pool.mtx.Unlock()
@@ -279,24 +315,31 @@ func (pool *BlockPool) makeNextRequester() {
pool.mtx.Lock()
defer pool.mtx.Unlock()
nextHeight := pool.height + len(pool.requesters)
nextHeight := pool.height + pool.requestersLen()
request := newBPRequester(pool, nextHeight)
request.SetLogger(pool.Logger.With("height", nextHeight))
// request.SetLogger(pool.Logger.With("height", nextHeight))
pool.requesters[nextHeight] = request
pool.numPending++
request.Start()
err := request.Start()
if err != nil {
request.Logger.Error("Error starting request", "err", err)
}
}
func (pool *BlockPool) sendRequest(height int, peerID string) {
func (pool *BlockPool) requestersLen() int64 {
return int64(len(pool.requesters))
}
func (pool *BlockPool) sendRequest(height int64, peerID p2p.ID) {
if !pool.IsRunning() {
return
}
pool.requestsCh <- BlockRequest{height, peerID}
}
func (pool *BlockPool) sendTimeout(peerID string) {
func (pool *BlockPool) sendTimeout(peerID p2p.ID) {
if !pool.IsRunning() {
return
}
@@ -309,12 +352,13 @@ func (pool *BlockPool) debug() string {
defer pool.mtx.Unlock()
str := ""
for h := pool.height; h < pool.height+len(pool.requesters); h++ {
nextHeight := pool.height + pool.requestersLen()
for h := pool.height; h < nextHeight; h++ {
if pool.requesters[h] == nil {
str += Fmt("H(%v):X ", h)
str += cmn.Fmt("H(%v):X ", h)
} else {
str += Fmt("H(%v):", h)
str += Fmt("B?(%v) ", pool.requesters[h].block != nil)
str += cmn.Fmt("H(%v):", h)
str += cmn.Fmt("B?(%v) ", pool.requesters[h].block != nil)
}
}
return str
@@ -324,10 +368,10 @@ func (pool *BlockPool) debug() string {
type bpPeer struct {
pool *BlockPool
id string
id p2p.ID
recvMonitor *flow.Monitor
height int
height int64
numPending int32
timeout *time.Timer
didTimeout bool
@@ -335,7 +379,7 @@ type bpPeer struct {
logger log.Logger
}
func newBPPeer(pool *BlockPool, peerID string, height int) *bpPeer {
func newBPPeer(pool *BlockPool, peerID p2p.ID, height int64) *bpPeer {
peer := &bpPeer{
pool: pool,
id: peerID,
@@ -352,7 +396,7 @@ func (peer *bpPeer) setLogger(l log.Logger) {
func (peer *bpPeer) resetMonitor() {
peer.recvMonitor = flow.New(time.Second, time.Second*40)
var initialValue = float64(minRecvRate) * math.E
initialValue := float64(minRecvRate) * math.E
peer.recvMonitor.SetREMA(initialValue)
}
@@ -394,18 +438,18 @@ func (peer *bpPeer) onTimeout() {
//-------------------------------------
type bpRequester struct {
BaseService
cmn.BaseService
pool *BlockPool
height int
height int64
gotBlockCh chan struct{}
redoCh chan struct{}
mtx sync.Mutex
peerID string
peerID p2p.ID
block *types.Block
}
func newBPRequester(pool *BlockPool, height int) *bpRequester {
func newBPRequester(pool *BlockPool, height int64) *bpRequester {
bpr := &bpRequester{
pool: pool,
height: height,
@@ -415,7 +459,7 @@ func newBPRequester(pool *BlockPool, height int) *bpRequester {
peerID: "",
block: nil,
}
bpr.BaseService = *NewBaseService(nil, "bpRequester", bpr)
bpr.BaseService = *cmn.NewBaseService(nil, "bpRequester", bpr)
return bpr
}
@@ -425,7 +469,7 @@ func (bpr *bpRequester) OnStart() error {
}
// Returns true if the peer matches
func (bpr *bpRequester) setBlock(block *types.Block, peerID string) bool {
func (bpr *bpRequester) setBlock(block *types.Block, peerID p2p.ID) bool {
bpr.mtx.Lock()
if bpr.block != nil || bpr.peerID != peerID {
bpr.mtx.Unlock()
@@ -444,7 +488,7 @@ func (bpr *bpRequester) getBlock() *types.Block {
return bpr.block
}
func (bpr *bpRequester) getPeerID() string {
func (bpr *bpRequester) getPeerID() p2p.ID {
bpr.mtx.Lock()
defer bpr.mtx.Unlock()
return bpr.peerID
@@ -469,7 +513,7 @@ func (bpr *bpRequester) requestRoutine() {
OUTER_LOOP:
for {
// Pick a peer to send request to.
var peer *bpPeer = nil
var peer *bpPeer
PICK_PEER_LOOP:
for {
if !bpr.IsRunning() || !bpr.pool.IsRunning() {
@@ -490,10 +534,10 @@ OUTER_LOOP:
// Send request and wait.
bpr.pool.sendRequest(bpr.height, peer.id)
select {
case <-bpr.pool.Quit:
case <-bpr.pool.Quit():
bpr.Stop()
return
case <-bpr.Quit:
case <-bpr.Quit():
return
case <-bpr.redoCh:
bpr.reset()
@@ -501,10 +545,10 @@ OUTER_LOOP:
case <-bpr.gotBlockCh:
// We got the block, now see if it's good.
select {
case <-bpr.pool.Quit:
case <-bpr.pool.Quit():
bpr.Stop()
return
case <-bpr.Quit:
case <-bpr.Quit():
return
case <-bpr.redoCh:
bpr.reset()
@@ -517,6 +561,6 @@ OUTER_LOOP:
//-------------------------------------
type BlockRequest struct {
Height int
PeerID string
Height int64
PeerID p2p.ID
}
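BlockRequest values are produced by sendRequest above and consumed on the reactor side. A minimal consumer sketch, assuming only this file's types; the function and callback names are illustrative, and the real loop in poolRoutine also handles timeouts, tickers, and Quit:

// drainBlockRequests forwards each request to the peer the pool picked for that height.
func drainBlockRequests(requestsCh <-chan BlockRequest, request func(peerID p2p.ID, height int64)) {
    for req := range requestsCh {
        request(req.PeerID, req.Height)
    }
}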

View File

@@ -5,9 +5,11 @@ import (
"testing"
"time"
"github.com/tendermint/tendermint/types"
. "github.com/tendermint/tmlibs/common"
cmn "github.com/tendermint/tmlibs/common"
"github.com/tendermint/tmlibs/log"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/types"
)
func init() {
@@ -15,28 +17,33 @@ func init() {
}
type testPeer struct {
id string
height int
id p2p.ID
height int64
}
func makePeers(numPeers int, minHeight, maxHeight int) map[string]testPeer {
peers := make(map[string]testPeer, numPeers)
func makePeers(numPeers int, minHeight, maxHeight int64) map[p2p.ID]testPeer {
peers := make(map[p2p.ID]testPeer, numPeers)
for i := 0; i < numPeers; i++ {
peerID := RandStr(12)
height := minHeight + rand.Intn(maxHeight-minHeight)
peerID := p2p.ID(cmn.RandStr(12))
height := minHeight + rand.Int63n(maxHeight-minHeight)
peers[peerID] = testPeer{peerID, height}
}
return peers
}
func TestBasic(t *testing.T) {
start := 42
start := int64(42)
peers := makePeers(10, start+1, 1000)
timeoutsCh := make(chan string, 100)
timeoutsCh := make(chan p2p.ID, 100)
requestsCh := make(chan BlockRequest, 100)
pool := NewBlockPool(start, requestsCh, timeoutsCh)
pool.SetLogger(log.TestingLogger())
pool.Start()
err := pool.Start()
if err != nil {
t.Error(err)
}
defer pool.Stop()
// Introduce each peer.
@@ -82,13 +89,16 @@ func TestBasic(t *testing.T) {
}
func TestTimeout(t *testing.T) {
start := 42
start := int64(42)
peers := makePeers(10, start+1, 1000)
timeoutsCh := make(chan string, 100)
timeoutsCh := make(chan p2p.ID, 100)
requestsCh := make(chan BlockRequest, 100)
pool := NewBlockPool(start, requestsCh, timeoutsCh)
pool.SetLogger(log.TestingLogger())
pool.Start()
err := pool.Start()
if err != nil {
t.Error(err)
}
defer pool.Stop()
for _, peer := range peers {
@@ -119,7 +129,7 @@ func TestTimeout(t *testing.T) {
// Pull from channels
counter := 0
timedOut := map[string]struct{}{}
timedOut := map[p2p.ID]struct{}{}
for {
select {
case peerID := <-timeoutsCh:

View File

@@ -3,23 +3,27 @@ package blockchain
import (
"bytes"
"errors"
"fmt"
"reflect"
"sync"
"time"
wire "github.com/tendermint/go-wire"
cmn "github.com/tendermint/tmlibs/common"
"github.com/tendermint/tmlibs/log"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
cmn "github.com/tendermint/tmlibs/common"
)
const (
// BlockchainChannel is a channel for blocks and status updates (`BlockStore` height)
BlockchainChannel = byte(0x40)
defaultChannelCapacity = 100
trySyncIntervalMS = 100
defaultChannelCapacity = 1000
trySyncIntervalMS = 50
// stop syncing when last block's time is
// within this much of the system time.
// stopSyncingDurationMinutes = 10
@@ -28,48 +32,53 @@ const (
statusUpdateIntervalSeconds = 10
// check if we should switch to consensus reactor
switchToConsensusIntervalSeconds = 1
maxBlockchainResponseSize = types.MaxBlockSize + 2
)
type consensusReactor interface {
// for when we switch from blockchain reactor and fast sync to
// the consensus machine
SwitchToConsensus(*sm.State)
SwitchToConsensus(sm.State, int)
}
// BlockchainReactor handles long-term catchup syncing.
type BlockchainReactor struct {
p2p.BaseReactor
state *sm.State
proxyAppConn proxy.AppConnConsensus // same as consensus.proxyAppConn
store *BlockStore
pool *BlockPool
fastSync bool
requestsCh chan BlockRequest
timeoutsCh chan string
mtx sync.Mutex
params types.ConsensusParams
evsw types.EventSwitch
// immutable
initialState sm.State
blockExec *sm.BlockExecutor
store *BlockStore
pool *BlockPool
fastSync bool
requestsCh <-chan BlockRequest
timeoutsCh <-chan p2p.ID
}
// NewBlockchainReactor returns new reactor instance.
func NewBlockchainReactor(state *sm.State, proxyAppConn proxy.AppConnConsensus, store *BlockStore, fastSync bool) *BlockchainReactor {
if state.LastBlockHeight == store.Height()-1 {
store.height-- // XXX HACK, make this better
}
func NewBlockchainReactor(state sm.State, blockExec *sm.BlockExecutor, store *BlockStore,
fastSync bool) *BlockchainReactor {
if state.LastBlockHeight != store.Height() {
cmn.PanicSanity(cmn.Fmt("state (%v) and store (%v) height mismatch", state.LastBlockHeight, store.Height()))
cmn.PanicSanity(cmn.Fmt("state (%v) and store (%v) height mismatch", state.LastBlockHeight,
store.Height()))
}
requestsCh := make(chan BlockRequest, defaultChannelCapacity)
timeoutsCh := make(chan string, defaultChannelCapacity)
timeoutsCh := make(chan p2p.ID, defaultChannelCapacity)
pool := NewBlockPool(
store.Height()+1,
requestsCh,
timeoutsCh,
)
bcR := &BlockchainReactor{
state: state,
proxyAppConn: proxyAppConn,
params: state.ConsensusParams,
initialState: state,
blockExec: blockExec,
store: store,
pool: pool,
fastSync: fastSync,
@@ -80,11 +89,19 @@ func NewBlockchainReactor(state *sm.State, proxyAppConn proxy.AppConnConsensus,
return bcR
}
// OnStart implements BaseService
// SetLogger implements cmn.Service by setting the logger on reactor and pool.
func (bcR *BlockchainReactor) SetLogger(l log.Logger) {
bcR.BaseService.Logger = l
bcR.pool.Logger = l
}
// OnStart implements cmn.Service.
func (bcR *BlockchainReactor) OnStart() error {
bcR.BaseReactor.OnStart()
if err := bcR.BaseReactor.OnStart(); err != nil {
return err
}
if bcR.fastSync {
_, err := bcR.pool.Start()
err := bcR.pool.Start()
if err != nil {
return err
}
@@ -93,7 +110,7 @@ func (bcR *BlockchainReactor) OnStart() error {
return nil
}
// OnStop implements BaseService
// OnStop implements cmn.Service.
func (bcR *BlockchainReactor) OnStop() {
bcR.BaseReactor.OnStop()
bcR.pool.Stop()
@@ -102,29 +119,52 @@ func (bcR *BlockchainReactor) OnStop() {
// GetChannels implements Reactor
func (bcR *BlockchainReactor) GetChannels() []*p2p.ChannelDescriptor {
return []*p2p.ChannelDescriptor{
&p2p.ChannelDescriptor{
{
ID: BlockchainChannel,
Priority: 5,
SendQueueCapacity: 100,
Priority: 10,
SendQueueCapacity: 1000,
},
}
}
// AddPeer implements Reactor by sending our state to peer.
func (bcR *BlockchainReactor) AddPeer(peer *p2p.Peer) {
if !peer.Send(BlockchainChannel, struct{ BlockchainMessage }{&bcStatusResponseMessage{bcR.store.Height()}}) {
func (bcR *BlockchainReactor) AddPeer(peer p2p.Peer) {
if !peer.Send(BlockchainChannel,
struct{ BlockchainMessage }{&bcStatusResponseMessage{bcR.store.Height()}}) {
// doing nothing, will try later in `poolRoutine`
}
// peer is added to the pool once we receive the first
// bcStatusResponseMessage from the peer and call pool.SetPeerHeight
}
// RemovePeer implements Reactor by removing peer from the pool.
func (bcR *BlockchainReactor) RemovePeer(peer *p2p.Peer, reason interface{}) {
bcR.pool.RemovePeer(peer.Key)
func (bcR *BlockchainReactor) RemovePeer(peer p2p.Peer, reason interface{}) {
bcR.pool.RemovePeer(peer.ID())
}
// respondToPeer loads a block and sends it to the requesting peer,
// if we have it. Otherwise, we'll respond saying we don't have it.
// According to the Tendermint spec, if all nodes are honest,
// no node should request a block that does not exist.
func (bcR *BlockchainReactor) respondToPeer(msg *bcBlockRequestMessage,
src p2p.Peer) (queued bool) {
block := bcR.store.LoadBlock(msg.Height)
if block != nil {
msg := &bcBlockResponseMessage{Block: block}
return src.TrySend(BlockchainChannel, struct{ BlockchainMessage }{msg})
}
bcR.Logger.Info("Peer asking for a block we don't have", "src", src, "height", msg.Height)
return src.TrySend(BlockchainChannel, struct{ BlockchainMessage }{
&bcNoBlockResponseMessage{Height: msg.Height},
})
}
// Receive implements Reactor by handling 4 types of messages (look below).
func (bcR *BlockchainReactor) Receive(chID byte, src *p2p.Peer, msgBytes []byte) {
_, msg, err := DecodeMessage(msgBytes)
func (bcR *BlockchainReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) {
_, msg, err := DecodeMessage(msgBytes, bcR.maxMsgSize())
if err != nil {
bcR.Logger.Error("Error decoding message", "err", err)
return
@@ -132,37 +172,44 @@ func (bcR *BlockchainReactor) Receive(chID byte, src *p2p.Peer, msgBytes []byte)
bcR.Logger.Debug("Receive", "src", src, "chID", chID, "msg", msg)
// TODO: improve logic to satisfy megacheck
switch msg := msg.(type) {
case *bcBlockRequestMessage:
// Got a request for a block. Respond with block if we have it.
block := bcR.store.LoadBlock(msg.Height)
if block != nil {
msg := &bcBlockResponseMessage{Block: block}
queued := src.TrySend(BlockchainChannel, struct{ BlockchainMessage }{msg})
if !queued {
// queue is full, just ignore.
}
} else {
// TODO peer is asking for things we don't have.
if queued := bcR.respondToPeer(msg, src); !queued {
// Unfortunately not queued since the queue is full.
}
case *bcBlockResponseMessage:
// Got a block.
bcR.pool.AddBlock(src.Key, msg.Block, len(msgBytes))
bcR.pool.AddBlock(src.ID(), msg.Block, len(msgBytes))
case *bcStatusRequestMessage:
// Send peer our state.
queued := src.TrySend(BlockchainChannel, struct{ BlockchainMessage }{&bcStatusResponseMessage{bcR.store.Height()}})
queued := src.TrySend(BlockchainChannel,
struct{ BlockchainMessage }{&bcStatusResponseMessage{bcR.store.Height()}})
if !queued {
// sorry
}
case *bcStatusResponseMessage:
// Got a peer status. Unverified.
bcR.pool.SetPeerHeight(src.Key, msg.Height)
bcR.pool.SetPeerHeight(src.ID(), msg.Height)
default:
bcR.Logger.Error(cmn.Fmt("Unknown message type %v", reflect.TypeOf(msg)))
}
}
// maxMsgSize returns the maximum allowable size of a
// message on the blockchain reactor.
func (bcR *BlockchainReactor) maxMsgSize() int {
bcR.mtx.Lock()
defer bcR.mtx.Unlock()
return bcR.params.BlockSize.MaxBytes + 2
}
// updateConsensusParams updates the internal consensus params
func (bcR *BlockchainReactor) updateConsensusParams(params types.ConsensusParams) {
bcR.mtx.Lock()
defer bcR.mtx.Unlock()
bcR.params = params
}
// Handle messages from the poolReactor telling the reactor what to do.
// NOTE: Don't sleep in the FOR_LOOP or otherwise slow it down!
// (Except for the SYNC_LOOP, which is the primary purpose and must be synchronous.)
@@ -172,6 +219,14 @@ func (bcR *BlockchainReactor) poolRoutine() {
statusUpdateTicker := time.NewTicker(statusUpdateIntervalSeconds * time.Second)
switchToConsensusTicker := time.NewTicker(switchToConsensusIntervalSeconds * time.Second)
blocksSynced := 0
chainID := bcR.initialState.ChainID
state := bcR.initialState
lastHundred := time.Now()
lastRate := 0.0
FOR_LOOP:
for {
select {
@@ -195,18 +250,18 @@ FOR_LOOP:
}
case <-statusUpdateTicker.C:
// ask for status updates
go bcR.BroadcastStatusRequest()
go bcR.BroadcastStatusRequest() // nolint: errcheck
case <-switchToConsensusTicker.C:
height, numPending, _ := bcR.pool.GetStatus()
height, numPending, lenRequesters := bcR.pool.GetStatus()
outbound, inbound, _ := bcR.Switch.NumPeers()
bcR.Logger.Info("Consensus ticker", "numPending", numPending, "total", len(bcR.pool.requesters),
bcR.Logger.Debug("Consensus ticker", "numPending", numPending, "total", lenRequesters,
"outbound", outbound, "inbound", inbound)
if bcR.pool.IsCaughtUp() {
bcR.Logger.Info("Time to switch to consensus reactor!", "height", height)
bcR.pool.Stop()
conR := bcR.Switch.Reactor("CONSENSUS").(consensusReactor)
conR.SwitchToConsensus(bcR.state)
conR.SwitchToConsensus(state, blocksSynced)
break FOR_LOOP
}
@@ -221,36 +276,53 @@ FOR_LOOP:
// We need both to sync the first block.
break SYNC_LOOP
}
firstParts := first.MakePartSet(types.DefaultBlockPartSize)
firstParts := first.MakePartSet(state.ConsensusParams.BlockPartSizeBytes)
firstPartsHeader := firstParts.Header()
firstID := types.BlockID{first.Hash(), firstPartsHeader}
// Finally, verify the first block using the second's commit
// NOTE: we can probably make this more efficient, but note that calling
// first.Hash() doesn't verify the tx contents, so MakePartSet() is
// currently necessary.
err := bcR.state.Validators.VerifyCommit(
bcR.state.ChainID, types.BlockID{first.Hash(), firstPartsHeader}, first.Height, second.LastCommit)
err := state.Validators.VerifyCommit(
chainID, firstID, first.Height, second.LastCommit)
if err != nil {
bcR.Logger.Info("error in validation", "err", err)
bcR.pool.RedoRequest(first.Height)
bcR.Logger.Error("Error in validation", "err", err)
peerID := bcR.pool.RedoRequest(first.Height)
peer := bcR.Switch.Peers().Get(peerID)
if peer != nil {
bcR.Switch.StopPeerForError(peer, fmt.Errorf("BlockchainReactor validation error: %v", err))
}
break SYNC_LOOP
} else {
bcR.pool.PopRequest()
// TODO: batch saves so we dont persist to disk every block
bcR.store.SaveBlock(first, firstParts, second.LastCommit)
// TODO: should we be firing events? need to fire NewBlock events manually ...
// NOTE: we could improve performance if we
// didn't make the app commit to disk every block
// ... but we would need a way to get the hash without it persisting
err := bcR.state.ApplyBlock(bcR.evsw, bcR.proxyAppConn, first, firstPartsHeader, types.MockMempool{})
// TODO: same thing for app - but we would need a way to
// get the hash without persisting the state
var err error
state, err = bcR.blockExec.ApplyBlock(state, firstID, first)
if err != nil {
// TODO This is bad, are we zombie?
cmn.PanicQ(cmn.Fmt("Failed to process committed block (%d:%X): %v", first.Height, first.Hash(), err))
cmn.PanicQ(cmn.Fmt("Failed to process committed block (%d:%X): %v",
first.Height, first.Hash(), err))
}
blocksSynced++
// update the consensus params
bcR.updateConsensusParams(state.ConsensusParams)
if blocksSynced%100 == 0 {
lastRate = 0.9*lastRate + 0.1*(100/time.Since(lastHundred).Seconds())
bcR.Logger.Info("Fast Sync Rate", "height", bcR.pool.height,
"max_peer_height", bcR.pool.MaxPeerHeight(), "blocks/s", lastRate)
lastHundred = time.Now()
}
}
}
continue FOR_LOOP
case <-bcR.Quit:
case <-bcR.Quit():
break FOR_LOOP
}
}
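The blocks/s figure logged every 100 blocks above is an exponentially weighted moving average: 90% of the previous estimate plus 10% of the rate over the last window. A small illustration of the same formula in isolation (the function name is mine):

// updateSyncRate applies the smoothing used in poolRoutine: new = 0.9*old + 0.1*(window/elapsed).
func updateSyncRate(lastRate float64, blocksInWindow int, elapsed time.Duration) float64 {
    return 0.9*lastRate + 0.1*(float64(blocksInWindow)/elapsed.Seconds())
}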
@@ -258,23 +330,20 @@ FOR_LOOP:
// BroadcastStatusRequest broadcasts `BlockStore` height.
func (bcR *BlockchainReactor) BroadcastStatusRequest() error {
bcR.Switch.Broadcast(BlockchainChannel, struct{ BlockchainMessage }{&bcStatusRequestMessage{bcR.store.Height()}})
bcR.Switch.Broadcast(BlockchainChannel,
struct{ BlockchainMessage }{&bcStatusRequestMessage{bcR.store.Height()}})
return nil
}
// SetEventSwitch implements events.Eventable
func (bcR *BlockchainReactor) SetEventSwitch(evsw types.EventSwitch) {
bcR.evsw = evsw
}
//-----------------------------------------------------------------------------
// Messages
const (
msgTypeBlockRequest = byte(0x10)
msgTypeBlockResponse = byte(0x11)
msgTypeStatusResponse = byte(0x20)
msgTypeStatusRequest = byte(0x21)
msgTypeBlockRequest = byte(0x10)
msgTypeBlockResponse = byte(0x11)
msgTypeNoBlockResponse = byte(0x12)
msgTypeStatusResponse = byte(0x20)
msgTypeStatusRequest = byte(0x21)
)
// BlockchainMessage is a generic message for this reactor.
@@ -284,17 +353,18 @@ var _ = wire.RegisterInterface(
struct{ BlockchainMessage }{},
wire.ConcreteType{&bcBlockRequestMessage{}, msgTypeBlockRequest},
wire.ConcreteType{&bcBlockResponseMessage{}, msgTypeBlockResponse},
wire.ConcreteType{&bcNoBlockResponseMessage{}, msgTypeNoBlockResponse},
wire.ConcreteType{&bcStatusResponseMessage{}, msgTypeStatusResponse},
wire.ConcreteType{&bcStatusRequestMessage{}, msgTypeStatusRequest},
)
// DecodeMessage decodes BlockchainMessage.
// TODO: ensure that bz is completely read.
func DecodeMessage(bz []byte) (msgType byte, msg BlockchainMessage, err error) {
func DecodeMessage(bz []byte, maxSize int) (msgType byte, msg BlockchainMessage, err error) {
msgType = bz[0]
n := int(0)
r := bytes.NewReader(bz)
msg = wire.ReadBinary(struct{ BlockchainMessage }{}, r, maxBlockchainResponseSize, &n, &err).(struct{ BlockchainMessage }).BlockchainMessage
msg = wire.ReadBinary(struct{ BlockchainMessage }{}, r, maxSize, &n, &err).(struct{ BlockchainMessage }).BlockchainMessage
if err != nil && n != len(bz) {
err = errors.New("DecodeMessage() had bytes left over")
}
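A hedged round-trip sketch showing how the new maxSize parameter is meant to be used, matching the encoding the reactor tests below rely on (wire.BinaryBytes of a struct-wrapped BlockchainMessage); the helper name is illustrative, and maxSize would normally come from bcR.maxMsgSize():

// roundTrip encodes a block request and decodes it again with the size-bounded DecodeMessage.
func roundTrip(height int64, maxSize int) (BlockchainMessage, error) {
    bz := wire.BinaryBytes(struct{ BlockchainMessage }{&bcBlockRequestMessage{height}})
    _, msg, err := DecodeMessage(bz, maxSize)
    return msg, err
}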
@@ -304,13 +374,21 @@ func DecodeMessage(bz []byte) (msgType byte, msg BlockchainMessage, err error) {
//-------------------------------------
type bcBlockRequestMessage struct {
Height int
Height int64
}
func (m *bcBlockRequestMessage) String() string {
return cmn.Fmt("[bcBlockRequestMessage %v]", m.Height)
}
type bcNoBlockResponseMessage struct {
Height int64
}
func (brm *bcNoBlockResponseMessage) String() string {
return cmn.Fmt("[bcNoBlockResponseMessage %d]", brm.Height)
}
//-------------------------------------
// NOTE: keep up-to-date with maxBlockchainResponseSize
@@ -325,7 +403,7 @@ func (m *bcBlockResponseMessage) String() string {
//-------------------------------------
type bcStatusRequestMessage struct {
Height int
Height int64
}
func (m *bcStatusRequestMessage) String() string {
@@ -335,7 +413,7 @@ func (m *bcStatusRequestMessage) String() string {
//-------------------------------------
type bcStatusResponseMessage struct {
Height int
Height int64
}
func (m *bcStatusResponseMessage) String() string {

blockchain/reactor_test.go (new file, 198 lines)
View File

@@ -0,0 +1,198 @@
package blockchain
import (
"testing"
wire "github.com/tendermint/go-wire"
cmn "github.com/tendermint/tmlibs/common"
dbm "github.com/tendermint/tmlibs/db"
"github.com/tendermint/tmlibs/log"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
)
func makeStateAndBlockStore(logger log.Logger) (sm.State, *BlockStore) {
config := cfg.ResetTestRoot("blockchain_reactor_test")
blockStore := NewBlockStore(dbm.NewMemDB())
state, _ := sm.LoadStateFromDBOrGenesisFile(dbm.NewMemDB(), config.GenesisFile())
return state, blockStore
}
func newBlockchainReactor(logger log.Logger, maxBlockHeight int64) *BlockchainReactor {
state, blockStore := makeStateAndBlockStore(logger)
// Make the blockchainReactor itself
fastSync := true
var nilApp proxy.AppConnConsensus
blockExec := sm.NewBlockExecutor(dbm.NewMemDB(), log.TestingLogger(), nilApp,
types.MockMempool{}, types.MockEvidencePool{})
bcReactor := NewBlockchainReactor(state.Copy(), blockExec, blockStore, fastSync)
bcReactor.SetLogger(logger.With("module", "blockchain"))
// Next: we need to set a switch in order for peers to be added in
bcReactor.Switch = p2p.NewSwitch(cfg.DefaultP2PConfig())
// Lastly: let's add some blocks in
for blockHeight := int64(1); blockHeight <= maxBlockHeight; blockHeight++ {
firstBlock := makeBlock(blockHeight, state)
secondBlock := makeBlock(blockHeight+1, state)
firstParts := firstBlock.MakePartSet(state.ConsensusParams.BlockGossip.BlockPartSizeBytes)
blockStore.SaveBlock(firstBlock, firstParts, secondBlock.LastCommit)
}
return bcReactor
}
func TestNoBlockResponse(t *testing.T) {
maxBlockHeight := int64(20)
bcr := newBlockchainReactor(log.TestingLogger(), maxBlockHeight)
bcr.Start()
defer bcr.Stop()
// Add some peers in
peer := newbcrTestPeer(p2p.ID(cmn.RandStr(12)))
bcr.AddPeer(peer)
chID := byte(0x01)
tests := []struct {
height int64
existent bool
}{
{maxBlockHeight + 2, false},
{10, true},
{1, true},
{100, false},
}
// receive a request message from peer,
// wait for our response to be received on the peer
for _, tt := range tests {
reqBlockMsg := &bcBlockRequestMessage{tt.height}
reqBlockBytes := wire.BinaryBytes(struct{ BlockchainMessage }{reqBlockMsg})
bcr.Receive(chID, peer, reqBlockBytes)
value := peer.lastValue()
msg := value.(struct{ BlockchainMessage }).BlockchainMessage
if tt.existent {
if blockMsg, ok := msg.(*bcBlockResponseMessage); !ok {
t.Fatalf("Expected to receive a block response for height %d", tt.height)
} else if blockMsg.Block.Height != tt.height {
t.Fatalf("Expected response to be for height %d, got %d", tt.height, blockMsg.Block.Height)
}
} else {
if noBlockMsg, ok := msg.(*bcNoBlockResponseMessage); !ok {
t.Fatalf("Expected to receive a no block response for height %d", tt.height)
} else if noBlockMsg.Height != tt.height {
t.Fatalf("Expected response to be for height %d, got %d", tt.height, noBlockMsg.Height)
}
}
}
}
/*
// NOTE: This is too hard to test without
// an easy way to add test peer to switch
// or without significant refactoring of the module.
// Alternatively we could actually dial a TCP conn but
// that seems extreme.
func TestBadBlockStopsPeer(t *testing.T) {
maxBlockHeight := int64(20)
bcr := newBlockchainReactor(log.TestingLogger(), maxBlockHeight)
bcr.Start()
defer bcr.Stop()
// Add some peers in
peer := newbcrTestPeer(p2p.ID(cmn.RandStr(12)))
// XXX: This doesn't add the peer to anything,
// so it's hard to check that it's later removed
bcr.AddPeer(peer)
assert.True(t, bcr.Switch.Peers().Size() > 0)
// send a bad block from the peer
// default blocks already dont have commits, so should fail
block := bcr.store.LoadBlock(3)
msg := &bcBlockResponseMessage{Block: block}
peer.Send(BlockchainChannel, struct{ BlockchainMessage }{msg})
ticker := time.NewTicker(time.Millisecond * 10)
timer := time.NewTimer(time.Second * 2)
LOOP:
for {
select {
case <-ticker.C:
if bcr.Switch.Peers().Size() == 0 {
break LOOP
}
case <-timer.C:
t.Fatal("Timed out waiting to disconnect peer")
}
}
}
*/
//----------------------------------------------
// utility funcs
func makeTxs(height int64) (txs []types.Tx) {
for i := 0; i < 10; i++ {
txs = append(txs, types.Tx([]byte{byte(height), byte(i)}))
}
return txs
}
func makeBlock(height int64, state sm.State) *types.Block {
block, _ := state.MakeBlock(height, makeTxs(height), new(types.Commit))
return block
}
// The Test peer
type bcrTestPeer struct {
cmn.BaseService
id p2p.ID
ch chan interface{}
}
var _ p2p.Peer = (*bcrTestPeer)(nil)
func newbcrTestPeer(id p2p.ID) *bcrTestPeer {
bcr := &bcrTestPeer{
id: id,
ch: make(chan interface{}, 2),
}
bcr.BaseService = *cmn.NewBaseService(nil, "bcrTestPeer", bcr)
return bcr
}
func (tp *bcrTestPeer) lastValue() interface{} { return <-tp.ch }
func (tp *bcrTestPeer) TrySend(chID byte, value interface{}) bool {
if _, ok := value.(struct{ BlockchainMessage }).
BlockchainMessage.(*bcStatusResponseMessage); ok {
// Discard status response messages since they skew our results
// We only want to deal with:
// + bcBlockResponseMessage
// + bcNoBlockResponseMessage
} else {
tp.ch <- value
}
return true
}
func (tp *bcrTestPeer) Send(chID byte, data interface{}) bool { return tp.TrySend(chID, data) }
func (tp *bcrTestPeer) NodeInfo() p2p.NodeInfo { return p2p.NodeInfo{} }
func (tp *bcrTestPeer) Status() p2p.ConnectionStatus { return p2p.ConnectionStatus{} }
func (tp *bcrTestPeer) ID() p2p.ID { return tp.id }
func (tp *bcrTestPeer) IsOutbound() bool { return false }
func (tp *bcrTestPeer) IsPersistent() bool { return true }
func (tp *bcrTestPeer) Get(s string) interface{} { return s }
func (tp *bcrTestPeer) Set(string, interface{}) {}

View File

@@ -7,14 +7,16 @@ import (
"io"
"sync"
. "github.com/tendermint/tmlibs/common"
wire "github.com/tendermint/go-wire"
cmn "github.com/tendermint/tmlibs/common"
dbm "github.com/tendermint/tmlibs/db"
"github.com/tendermint/go-wire"
"github.com/tendermint/tendermint/types"
)
/*
Simple low level store for blocks.
BlockStore is a simple low level store for blocks.
There are three types of information stored:
- BlockMeta: Meta information about each block
@@ -23,17 +25,20 @@ There are three types of information stored:
Currently the precommit signatures are duplicated in the Block parts as
well as the Commit. In the future this may change, perhaps by moving
the Commit data outside the Block.
the Commit data outside the Block. (TODO)
Panics indicate probable corruption in the data
// NOTE: BlockStore methods will panic if they encounter errors
// deserializing loaded data, indicating probable corruption on disk.
*/
type BlockStore struct {
db dbm.DB
mtx sync.RWMutex
height int
height int64
}
// NewBlockStore returns a new BlockStore with the given DB,
// initialized to the last height that was committed to the DB.
func NewBlockStore(db dbm.DB) *BlockStore {
bsjson := LoadBlockStoreStateJSON(db)
return &BlockStore{
@@ -42,13 +47,16 @@ func NewBlockStore(db dbm.DB) *BlockStore {
}
}
// Height() returns the last known contiguous block height.
func (bs *BlockStore) Height() int {
// Height returns the last known contiguous block height.
func (bs *BlockStore) Height() int64 {
bs.mtx.RLock()
defer bs.mtx.RUnlock()
return bs.height
}
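A minimal usage sketch, mirroring the new store tests later in this diff: construct a BlockStore over an in-memory DB and read its height (zero until the first SaveBlock); the helper name is mine:

// emptyStoreHeight shows the zero-value behaviour of a fresh store.
func emptyStoreHeight() int64 {
    bs := NewBlockStore(dbm.NewMemDB())
    return bs.Height() // 0 for an empty store
}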
// GetReader returns the value associated with the given key wrapped in an io.Reader.
// If no value is found, it returns nil.
// It's mainly for use with wire.ReadBinary.
func (bs *BlockStore) GetReader(key []byte) io.Reader {
bytez := bs.db.Get(key)
if bytez == nil {
@@ -57,7 +65,9 @@ func (bs *BlockStore) GetReader(key []byte) io.Reader {
return bytes.NewReader(bytez)
}
func (bs *BlockStore) LoadBlock(height int) *types.Block {
// LoadBlock returns the block with the given height.
// If no block is found for that height, it returns nil.
func (bs *BlockStore) LoadBlock(height int64) *types.Block {
var n int
var err error
r := bs.GetReader(calcBlockMetaKey(height))
@@ -66,7 +76,7 @@ func (bs *BlockStore) LoadBlock(height int) *types.Block {
}
blockMeta := wire.ReadBinary(&types.BlockMeta{}, r, 0, &n, &err).(*types.BlockMeta)
if err != nil {
PanicCrisis(Fmt("Error reading block meta: %v", err))
cmn.PanicCrisis(cmn.Fmt("Error reading block meta: %v", err))
}
bytez := []byte{}
for i := 0; i < blockMeta.BlockID.PartsHeader.Total; i++ {
@@ -75,12 +85,15 @@ func (bs *BlockStore) LoadBlock(height int) *types.Block {
}
block := wire.ReadBinary(&types.Block{}, bytes.NewReader(bytez), 0, &n, &err).(*types.Block)
if err != nil {
PanicCrisis(Fmt("Error reading block: %v", err))
cmn.PanicCrisis(cmn.Fmt("Error reading block: %v", err))
}
return block
}
func (bs *BlockStore) LoadBlockPart(height int, index int) *types.Part {
// LoadBlockPart returns the Part at the given index
// from the block at the given height.
// If no part is found for the given height and index, it returns nil.
func (bs *BlockStore) LoadBlockPart(height int64, index int) *types.Part {
var n int
var err error
r := bs.GetReader(calcBlockPartKey(height, index))
@@ -89,12 +102,14 @@ func (bs *BlockStore) LoadBlockPart(height int, index int) *types.Part {
}
part := wire.ReadBinary(&types.Part{}, r, 0, &n, &err).(*types.Part)
if err != nil {
PanicCrisis(Fmt("Error reading block part: %v", err))
cmn.PanicCrisis(cmn.Fmt("Error reading block part: %v", err))
}
return part
}
func (bs *BlockStore) LoadBlockMeta(height int) *types.BlockMeta {
// LoadBlockMeta returns the BlockMeta for the given height.
// If no block is found for the given height, it returns nil.
func (bs *BlockStore) LoadBlockMeta(height int64) *types.BlockMeta {
var n int
var err error
r := bs.GetReader(calcBlockMetaKey(height))
@@ -103,14 +118,16 @@ func (bs *BlockStore) LoadBlockMeta(height int) *types.BlockMeta {
}
blockMeta := wire.ReadBinary(&types.BlockMeta{}, r, 0, &n, &err).(*types.BlockMeta)
if err != nil {
PanicCrisis(Fmt("Error reading block meta: %v", err))
cmn.PanicCrisis(cmn.Fmt("Error reading block meta: %v", err))
}
return blockMeta
}
// The +2/3 and other Precommit-votes for block at `height`.
// This Commit comes from block.LastCommit for `height+1`.
func (bs *BlockStore) LoadBlockCommit(height int) *types.Commit {
// LoadBlockCommit returns the Commit for the given height.
// This commit consists of the +2/3 and other Precommit-votes for block at `height`,
// and it comes from the block.LastCommit for `height+1`.
// If no commit is found for the given height, it returns nil.
func (bs *BlockStore) LoadBlockCommit(height int64) *types.Commit {
var n int
var err error
r := bs.GetReader(calcBlockCommitKey(height))
@@ -119,13 +136,15 @@ func (bs *BlockStore) LoadBlockCommit(height int) *types.Commit {
}
commit := wire.ReadBinary(&types.Commit{}, r, 0, &n, &err).(*types.Commit)
if err != nil {
PanicCrisis(Fmt("Error reading commit: %v", err))
cmn.PanicCrisis(cmn.Fmt("Error reading commit: %v", err))
}
return commit
}
// NOTE: the Precommit-vote heights are for the block at `height`
func (bs *BlockStore) LoadSeenCommit(height int) *types.Commit {
// LoadSeenCommit returns the locally seen Commit for the given height.
// This is useful when we've seen a commit, but there has not yet been
// a new block at `height + 1` that includes this commit in its block.LastCommit.
func (bs *BlockStore) LoadSeenCommit(height int64) *types.Commit {
var n int
var err error
r := bs.GetReader(calcSeenCommitKey(height))
@@ -134,23 +153,27 @@ func (bs *BlockStore) LoadSeenCommit(height int) *types.Commit {
}
commit := wire.ReadBinary(&types.Commit{}, r, 0, &n, &err).(*types.Commit)
if err != nil {
PanicCrisis(Fmt("Error reading commit: %v", err))
cmn.PanicCrisis(cmn.Fmt("Error reading commit: %v", err))
}
return commit
}
// SaveBlock persists the given block, blockParts, and seenCommit to the underlying db.
// blockParts: Must be parts of the block
// seenCommit: The +2/3 precommits that were seen which committed at height.
// If all the nodes restart after committing a block,
// we need this to reload the precommits to catch-up nodes to the
// most recent height. Otherwise they'd stall at H-1.
func (bs *BlockStore) SaveBlock(block *types.Block, blockParts *types.PartSet, seenCommit *types.Commit) {
if block == nil {
cmn.PanicSanity("BlockStore can only save a non-nil block")
}
height := block.Height
if height != bs.Height()+1 {
PanicSanity(Fmt("BlockStore can only save contiguous blocks. Wanted %v, got %v", bs.Height()+1, height))
if g, w := height, bs.Height()+1; g != w {
cmn.PanicSanity(cmn.Fmt("BlockStore can only save contiguous blocks. Wanted %v, got %v", w, g))
}
if !blockParts.IsComplete() {
PanicSanity(Fmt("BlockStore can only save complete block part sets"))
cmn.PanicSanity(cmn.Fmt("BlockStore can only save complete block part sets"))
}
// Save block meta
@@ -184,9 +207,9 @@ func (bs *BlockStore) SaveBlock(block *types.Block, blockParts *types.PartSet, s
bs.db.SetSync(nil, nil)
}
func (bs *BlockStore) saveBlockPart(height int, index int, part *types.Part) {
func (bs *BlockStore) saveBlockPart(height int64, index int, part *types.Part) {
if height != bs.Height()+1 {
PanicSanity(Fmt("BlockStore can only save contiguous blocks. Wanted %v, got %v", bs.Height()+1, height))
cmn.PanicSanity(cmn.Fmt("BlockStore can only save contiguous blocks. Wanted %v, got %v", bs.Height()+1, height))
}
partBytes := wire.BinaryBytes(part)
bs.db.Set(calcBlockPartKey(height, index), partBytes)
@@ -194,19 +217,19 @@ func (bs *BlockStore) saveBlockPart(height int, index int, part *types.Part) {
//-----------------------------------------------------------------------------
func calcBlockMetaKey(height int) []byte {
func calcBlockMetaKey(height int64) []byte {
return []byte(fmt.Sprintf("H:%v", height))
}
func calcBlockPartKey(height int, partIndex int) []byte {
func calcBlockPartKey(height int64, partIndex int) []byte {
return []byte(fmt.Sprintf("P:%v:%v", height, partIndex))
}
func calcBlockCommitKey(height int) []byte {
func calcBlockCommitKey(height int64) []byte {
return []byte(fmt.Sprintf("C:%v", height))
}
func calcSeenCommitKey(height int) []byte {
func calcSeenCommitKey(height int64) []byte {
return []byte(fmt.Sprintf("SC:%v", height))
}
@@ -215,20 +238,23 @@ func calcSeenCommitKey(height int) []byte {
var blockStoreKey = []byte("blockStore")
type BlockStoreStateJSON struct {
Height int
Height int64
}
// Save persists the blockStore state to the database as JSON.
func (bsj BlockStoreStateJSON) Save(db dbm.DB) {
bytes, err := json.Marshal(bsj)
if err != nil {
PanicSanity(Fmt("Could not marshal state bytes: %v", err))
cmn.PanicSanity(cmn.Fmt("Could not marshal state bytes: %v", err))
}
db.SetSync(blockStoreKey, bytes)
}
// LoadBlockStoreStateJSON returns the BlockStoreStateJSON as loaded from disk.
// If no BlockStoreStateJSON was previously persisted, it returns the zero value.
func LoadBlockStoreStateJSON(db dbm.DB) BlockStoreStateJSON {
bytes := db.Get(blockStoreKey)
if bytes == nil {
if len(bytes) == 0 {
return BlockStoreStateJSON{
Height: 0,
}
@@ -236,7 +262,7 @@ func LoadBlockStoreStateJSON(db dbm.DB) BlockStoreStateJSON {
bsj := BlockStoreStateJSON{}
err := json.Unmarshal(bytes, &bsj)
if err != nil {
PanicCrisis(Fmt("Could not unmarshal bytes: %X", bytes))
cmn.PanicCrisis(cmn.Fmt("Could not unmarshal bytes: %X", bytes))
}
return bsj
}
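A hedged save/load round trip for the persisted height, matching TestLoadBlockStoreStateJSON in the new test file below; the helper name is illustrative and any dbm.DB would do:

// roundTripBlockStoreState persists a height and reads it back from the same DB.
func roundTripBlockStoreState(height int64) BlockStoreStateJSON {
    db := dbm.NewMemDB()
    BlockStoreStateJSON{Height: height}.Save(db)
    return LoadBlockStoreStateJSON(db)
}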

blockchain/store_test.go (new file, 424 lines)
View File

@@ -0,0 +1,424 @@
package blockchain
import (
"bytes"
"fmt"
"io/ioutil"
"runtime/debug"
"strings"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
wire "github.com/tendermint/go-wire"
"github.com/tendermint/tmlibs/db"
"github.com/tendermint/tmlibs/log"
"github.com/tendermint/tendermint/types"
)
func TestLoadBlockStoreStateJSON(t *testing.T) {
db := db.NewMemDB()
bsj := &BlockStoreStateJSON{Height: 1000}
bsj.Save(db)
retrBSJ := LoadBlockStoreStateJSON(db)
assert.Equal(t, *bsj, retrBSJ, "expected the retrieved DBs to match")
}
func TestNewBlockStore(t *testing.T) {
db := db.NewMemDB()
db.Set(blockStoreKey, []byte(`{"height": 10000}`))
bs := NewBlockStore(db)
assert.Equal(t, bs.Height(), int64(10000), "failed to properly parse blockstore")
panicCausers := []struct {
data []byte
wantErr string
}{
{[]byte("artful-doger"), "not unmarshal bytes"},
{[]byte(" "), "unmarshal bytes"},
}
for i, tt := range panicCausers {
// Expecting a panic here on trying to parse an invalid blockStore
_, _, panicErr := doFn(func() (interface{}, error) {
db.Set(blockStoreKey, tt.data)
_ = NewBlockStore(db)
return nil, nil
})
require.NotNil(t, panicErr, "#%d panicCauser: %q expected a panic", i, tt.data)
assert.Contains(t, panicErr.Error(), tt.wantErr, "#%d data: %q", i, tt.data)
}
db.Set(blockStoreKey, nil)
bs = NewBlockStore(db)
assert.Equal(t, bs.Height(), int64(0), "expecting nil bytes to be unmarshaled alright")
}
func TestBlockStoreGetReader(t *testing.T) {
db := db.NewMemDB()
// Initial setup
db.Set([]byte("Foo"), []byte("Bar"))
db.Set([]byte("Foo1"), nil)
bs := NewBlockStore(db)
tests := [...]struct {
key []byte
want []byte
}{
0: {key: []byte("Foo"), want: []byte("Bar")},
1: {key: []byte("KnoxNonExistent"), want: nil},
2: {key: []byte("Foo1"), want: []byte{}},
}
for i, tt := range tests {
r := bs.GetReader(tt.key)
if r == nil {
assert.Nil(t, tt.want, "#%d: expected a non-nil reader", i)
continue
}
slurp, err := ioutil.ReadAll(r)
if err != nil {
t.Errorf("#%d: unexpected Read err: %v", i, err)
} else {
assert.Equal(t, slurp, tt.want, "#%d: mismatch", i)
}
}
}
func freshBlockStore() (*BlockStore, db.DB) {
db := db.NewMemDB()
return NewBlockStore(db), db
}
var (
state, _ = makeStateAndBlockStore(log.NewTMLogger(new(bytes.Buffer)))
block = makeBlock(1, state)
partSet = block.MakePartSet(2)
part1 = partSet.GetPart(0)
part2 = partSet.GetPart(1)
seenCommit1 = &types.Commit{Precommits: []*types.Vote{{Height: 10,
Timestamp: time.Now().UTC()}}}
)
// TODO: This test should be simplified ...
func TestBlockStoreSaveLoadBlock(t *testing.T) {
state, bs := makeStateAndBlockStore(log.NewTMLogger(new(bytes.Buffer)))
require.Equal(t, bs.Height(), int64(0), "initially the height should be zero")
// check there are no blocks at various heights
noBlockHeights := []int64{0, -1, 100, 1000, 2}
for i, height := range noBlockHeights {
if g := bs.LoadBlock(height); g != nil {
t.Errorf("#%d: height(%d) got a block; want nil", i, height)
}
}
// save a block
block := makeBlock(bs.Height()+1, state)
validPartSet := block.MakePartSet(2)
seenCommit := &types.Commit{Precommits: []*types.Vote{{Height: 10,
Timestamp: time.Now().UTC()}}}
bs.SaveBlock(block, partSet, seenCommit)
require.Equal(t, bs.Height(), block.Header.Height, "expecting the new height to be changed")
incompletePartSet := types.NewPartSetFromHeader(types.PartSetHeader{Total: 2})
uncontiguousPartSet := types.NewPartSetFromHeader(types.PartSetHeader{Total: 0})
uncontiguousPartSet.AddPart(part2, false)
header1 := types.Header{
Height: 1,
NumTxs: 100,
ChainID: "block_test",
Time: time.Now(),
}
header2 := header1
header2.Height = 4
// End of setup, test data
commitAtH10 := &types.Commit{Precommits: []*types.Vote{{Height: 10,
Timestamp: time.Now().UTC()}}}
tuples := []struct {
block *types.Block
parts *types.PartSet
seenCommit *types.Commit
wantErr bool
wantPanic string
corruptBlockInDB bool
corruptCommitInDB bool
corruptSeenCommitInDB bool
eraseCommitInDB bool
eraseSeenCommitInDB bool
}{
{
block: newBlock(&header1, commitAtH10),
parts: validPartSet,
seenCommit: seenCommit1,
},
{
block: nil,
wantPanic: "only save a non-nil block",
},
{
block: newBlock(&header2, commitAtH10),
parts: uncontiguousPartSet,
wantPanic: "only save contiguous blocks", // and incomplete and uncontiguous parts
},
{
block: newBlock(&header1, commitAtH10),
parts: incompletePartSet,
wantPanic: "only save complete block", // incomplete parts
},
{
block: newBlock(&header1, commitAtH10),
parts: validPartSet,
seenCommit: seenCommit1,
corruptCommitInDB: true, // Corrupt the DB's commit entry
wantPanic: "rror reading commit",
},
{
block: newBlock(&header1, commitAtH10),
parts: validPartSet,
seenCommit: seenCommit1,
wantPanic: "rror reading block",
corruptBlockInDB: true, // Corrupt the DB's block entry
},
{
block: newBlock(&header1, commitAtH10),
parts: validPartSet,
seenCommit: seenCommit1,
// Expecting no error and we want a nil back
eraseSeenCommitInDB: true,
},
{
block: newBlock(&header1, commitAtH10),
parts: validPartSet,
seenCommit: seenCommit1,
corruptSeenCommitInDB: true,
wantPanic: "rror reading commit",
},
{
block: newBlock(&header1, commitAtH10),
parts: validPartSet,
seenCommit: seenCommit1,
// Expecting no error and we want a nil back
eraseCommitInDB: true,
},
}
type quad struct {
block *types.Block
commit *types.Commit
meta *types.BlockMeta
seenCommit *types.Commit
}
for i, tuple := range tuples {
bs, db := freshBlockStore()
// SaveBlock
res, err, panicErr := doFn(func() (interface{}, error) {
bs.SaveBlock(tuple.block, tuple.parts, tuple.seenCommit)
if tuple.block == nil {
return nil, nil
}
if tuple.corruptBlockInDB {
db.Set(calcBlockMetaKey(tuple.block.Height), []byte("block-bogus"))
}
bBlock := bs.LoadBlock(tuple.block.Height)
bBlockMeta := bs.LoadBlockMeta(tuple.block.Height)
if tuple.eraseSeenCommitInDB {
db.Delete(calcSeenCommitKey(tuple.block.Height))
}
if tuple.corruptSeenCommitInDB {
db.Set(calcSeenCommitKey(tuple.block.Height), []byte("bogus-seen-commit"))
}
bSeenCommit := bs.LoadSeenCommit(tuple.block.Height)
commitHeight := tuple.block.Height - 1
if tuple.eraseCommitInDB {
db.Delete(calcBlockCommitKey(commitHeight))
}
if tuple.corruptCommitInDB {
db.Set(calcBlockCommitKey(commitHeight), []byte("foo-bogus"))
}
bCommit := bs.LoadBlockCommit(commitHeight)
return &quad{block: bBlock, seenCommit: bSeenCommit, commit: bCommit,
meta: bBlockMeta}, nil
})
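The interface now takes the state by value plus the number of blocks fast-synced, as passed from poolRoutine below. A hypothetical no-op implementation, useful only as a wiring sketch:

// noopConsensusReactor satisfies consensusReactor without doing anything.
type noopConsensusReactor struct{}

func (noopConsensusReactor) SwitchToConsensus(state sm.State, blocksSynced int) {
    // A real reactor resumes consensus from state after blocksSynced fast-synced blocks.
}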
if subStr := tuple.wantPanic; subStr != "" {
if panicErr == nil {
t.Errorf("#%d: want a non-nil panic", i)
} else if got := panicErr.Error(); !strings.Contains(got, subStr) {
t.Errorf("#%d:\n\tgotErr: %q\nwant substring: %q", i, got, subStr)
}
continue
}
if tuple.wantErr {
if err == nil {
t.Errorf("#%d: got nil error", i)
}
continue
}
assert.Nil(t, panicErr, "#%d: unexpected panic", i)
assert.Nil(t, err, "#%d: expecting a nil error", i)
qua, ok := res.(*quad)
if !ok || qua == nil {
t.Errorf("#%d: got nil quad back; gotType=%T", i, res)
continue
}
if tuple.eraseSeenCommitInDB {
assert.Nil(t, qua.seenCommit,
"erased the seenCommit in the DB hence we should get back a nil seenCommit")
}
if tuple.eraseCommitInDB {
assert.Nil(t, qua.commit,
"erased the commit in the DB hence we should get back a nil commit")
}
}
}
func binarySerializeIt(v interface{}) []byte {
var n int
var err error
buf := new(bytes.Buffer)
wire.WriteBinary(v, buf, &n, &err)
return buf.Bytes()
}
func TestLoadBlockPart(t *testing.T) {
bs, db := freshBlockStore()
height, index := int64(10), 1
loadPart := func() (interface{}, error) {
part := bs.LoadBlockPart(height, index)
return part, nil
}
// Initially no contents.
// 1. Requesting for a non-existent block shouldn't fail
res, _, panicErr := doFn(loadPart)
require.Nil(t, panicErr, "a non-existent block part shouldn't cause a panic")
require.Nil(t, res, "a non-existent block part should return nil")
// 2. Next save a corrupted block then try to load it
db.Set(calcBlockPartKey(height, index), []byte("Tendermint"))
res, _, panicErr = doFn(loadPart)
require.NotNil(t, panicErr, "expecting a non-nil panic")
require.Contains(t, panicErr.Error(), "Error reading block part")
// 3. A good block serialized and saved to the DB should be retrievable
db.Set(calcBlockPartKey(height, index), binarySerializeIt(part1))
gotPart, _, panicErr := doFn(loadPart)
require.Nil(t, panicErr, "an existent and proper block should not panic")
require.Nil(t, res, "a properly saved block should return a proper block")
require.Equal(t, gotPart.(*types.Part).Hash(), part1.Hash(),
"expecting successful retrieval of previously saved block")
}
func TestLoadBlockMeta(t *testing.T) {
bs, db := freshBlockStore()
height := int64(10)
loadMeta := func() (interface{}, error) {
meta := bs.LoadBlockMeta(height)
return meta, nil
}
// Initially no contents.
// 1. Requesting for a non-existent blockMeta shouldn't fail
res, _, panicErr := doFn(loadMeta)
require.Nil(t, panicErr, "a non-existent blockMeta shouldn't cause a panic")
require.Nil(t, res, "a non-existent blockMeta should return nil")
// 2. Next save a corrupted blockMeta then try to load it
db.Set(calcBlockMetaKey(height), []byte("Tendermint-Meta"))
res, _, panicErr = doFn(loadMeta)
require.NotNil(t, panicErr, "expecting a non-nil panic")
require.Contains(t, panicErr.Error(), "Error reading block meta")
// 3. A good blockMeta serialized and saved to the DB should be retrievable
meta := &types.BlockMeta{}
db.Set(calcBlockMetaKey(height), binarySerializeIt(meta))
gotMeta, _, panicErr := doFn(loadMeta)
require.Nil(t, panicErr, "an existent and proper block should not panic")
require.Nil(t, res, "a properly saved blockMeta should return a proper blockMeta")
require.Equal(t, binarySerializeIt(meta), binarySerializeIt(gotMeta),
"expecting successful retrieval of previously saved blockMeta")
}
func TestBlockFetchAtHeight(t *testing.T) {
state, bs := makeStateAndBlockStore(log.NewTMLogger(new(bytes.Buffer)))
require.Equal(t, bs.Height(), int64(0), "initially the height should be zero")
block := makeBlock(bs.Height()+1, state)
partSet := block.MakePartSet(2)
seenCommit := &types.Commit{Precommits: []*types.Vote{{Height: 10,
Timestamp: time.Now().UTC()}}}
bs.SaveBlock(block, partSet, seenCommit)
require.Equal(t, bs.Height(), block.Header.Height, "expecting the new height to be changed")
blockAtHeight := bs.LoadBlock(bs.Height())
require.Equal(t, block.Hash(), blockAtHeight.Hash(),
"expecting a successful load of the last saved block")
blockAtHeightPlus1 := bs.LoadBlock(bs.Height() + 1)
require.Nil(t, blockAtHeightPlus1, "expecting an unsuccessful load of Height()+1")
blockAtHeightPlus2 := bs.LoadBlock(bs.Height() + 2)
require.Nil(t, blockAtHeightPlus2, "expecting an unsuccessful load of Height()+2")
}
func doFn(fn func() (interface{}, error)) (res interface{}, err error, panicErr error) {
defer func() {
if r := recover(); r != nil {
switch e := r.(type) {
case error:
panicErr = e
case string:
panicErr = fmt.Errorf("%s", e)
default:
if st, ok := r.(fmt.Stringer); ok {
panicErr = fmt.Errorf("%s", st)
} else {
panicErr = fmt.Errorf("%s", debug.Stack())
}
}
}
}()
res, err = fn()
return res, err, panicErr
}
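A short usage sketch of the doFn helper above, assuming nothing beyond this test file: it converts a panic inside fn into an inspectable error (the wrapper name is mine):

// exampleDoFn returns a non-nil error because the closure panics.
func exampleDoFn() error {
    _, _, panicErr := doFn(func() (interface{}, error) {
        panic("boom")
    })
    return panicErr // contains "boom"
}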
func newBlock(hdr *types.Header, lastCommit *types.Commit) *types.Block {
return &types.Block{
Header: hdr,
LastCommit: lastCommit,
}
}

View File

@@ -7,6 +7,7 @@ machine:
GOPATH: "$HOME/.go_project"
PROJECT_PARENT_PATH: "$GOPATH/src/github.com/$CIRCLE_PROJECT_USERNAME"
PROJECT_PATH: "$PROJECT_PARENT_PATH/$CIRCLE_PROJECT_REPONAME"
PATH: "$HOME/.go_project/bin:${PATH}"
hosts:
localhost: 127.0.0.1
@@ -30,4 +31,5 @@ test:
- cd "$PROJECT_PATH" && mv test_integrations.log "${CIRCLE_ARTIFACTS}"
- cd "$PROJECT_PATH" && bash <(curl -s https://codecov.io/bash) -f coverage.txt
- cd "$PROJECT_PATH" && mv coverage.txt "${CIRCLE_ARTIFACTS}"
- cd "$PROJECT_PATH" && cp test/logs/messages "${CIRCLE_ARTIFACTS}/docker_logs.txt"
- cd "$PROJECT_PATH" && cp test/logs/messages "${CIRCLE_ARTIFACTS}/docker.log"
- cd "${CIRCLE_ARTIFACTS}" && tar czf logs.tar.gz *.log

View File

@@ -9,19 +9,20 @@ import (
"github.com/tendermint/tendermint/types"
)
var genValidatorCmd = &cobra.Command{
// GenValidatorCmd allows the generation of a keypair for a
// validator.
var GenValidatorCmd = &cobra.Command{
Use: "gen_validator",
Short: "Generate new validator keypair",
Run: genValidator,
}
func init() {
RootCmd.AddCommand(genValidatorCmd)
}
func genValidator(cmd *cobra.Command, args []string) {
privValidator := types.GenPrivValidator()
privValidatorJSONBytes, _ := json.MarshalIndent(privValidator, "", "\t")
privValidator := types.GenPrivValidatorFS("")
privValidatorJSONBytes, err := json.MarshalIndent(privValidator, "", "\t")
if err != nil {
panic(err)
}
fmt.Printf(`%v
`, string(privValidatorJSONBytes))
}

View File

@@ -1,47 +1,48 @@
package commands
import (
"os"
"github.com/spf13/cobra"
"github.com/tendermint/tendermint/types"
cmn "github.com/tendermint/tmlibs/common"
)
var initFilesCmd = &cobra.Command{
// InitFilesCmd initialises a fresh Tendermint Core instance.
var InitFilesCmd = &cobra.Command{
Use: "init",
Short: "Initialize Tendermint",
Run: initFiles,
}
func init() {
RootCmd.AddCommand(initFilesCmd)
}
func initFiles(cmd *cobra.Command, args []string) {
// private validator
privValFile := config.PrivValidatorFile()
if _, err := os.Stat(privValFile); os.IsNotExist(err) {
privValidator := types.GenPrivValidator()
privValidator.SetFile(privValFile)
privValidator.Save()
genFile := config.GenesisFile()
if _, err := os.Stat(genFile); os.IsNotExist(err) {
genDoc := types.GenesisDoc{
ChainID: cmn.Fmt("test-chain-%v", cmn.RandStr(6)),
}
genDoc.Validators = []types.GenesisValidator{types.GenesisValidator{
PubKey: privValidator.PubKey,
Amount: 10,
}}
genDoc.SaveAs(genFile)
}
logger.Info("Initialized tendermint", "genesis", config.GenesisFile(), "priv_validator", config.PrivValidatorFile())
var privValidator *types.PrivValidatorFS
if cmn.FileExists(privValFile) {
privValidator = types.LoadPrivValidatorFS(privValFile)
logger.Info("Found private validator", "path", privValFile)
} else {
logger.Info("Already initialized", "priv_validator", config.PrivValidatorFile())
privValidator = types.GenPrivValidatorFS(privValFile)
privValidator.Save()
logger.Info("Generated private validator", "path", privValFile)
}
// genesis file
genFile := config.GenesisFile()
if cmn.FileExists(genFile) {
logger.Info("Found genesis file", "path", genFile)
} else {
genDoc := types.GenesisDoc{
ChainID: cmn.Fmt("test-chain-%v", cmn.RandStr(6)),
}
genDoc.Validators = []types.GenesisValidator{{
PubKey: privValidator.GetPubKey(),
Power: 10,
}}
if err := genDoc.SaveAs(genFile); err != nil {
panic(err)
}
logger.Info("Generated genesis file", "path", genFile)
}
}

View File

@@ -0,0 +1,60 @@
package commands
import (
"github.com/spf13/cobra"
cmn "github.com/tendermint/tmlibs/common"
"github.com/tendermint/tendermint/lite/proxy"
rpcclient "github.com/tendermint/tendermint/rpc/client"
)
// LiteCmd represents the base command when called without any subcommands
var LiteCmd = &cobra.Command{
Use: "lite",
Short: "Run lite-client proxy server, verifying tendermint rpc",
Long: `This node will run a secure proxy to a tendermint rpc server.
All calls that can be tracked back to a block header by a proof
will be verified before passing them back to the caller. Other than
that, it will present the same interface as a full tendermint node,
just with added trust and running locally.`,
RunE: runProxy,
SilenceUsage: true,
}
var (
listenAddr string
nodeAddr string
chainID string
home string
)
func init() {
LiteCmd.Flags().StringVar(&listenAddr, "laddr", ":8888", "Serve the proxy on the given port")
LiteCmd.Flags().StringVar(&nodeAddr, "node", "localhost:46657", "Connect to a Tendermint node at this address")
LiteCmd.Flags().StringVar(&chainID, "chain-id", "tendermint", "Specify the Tendermint chain ID")
LiteCmd.Flags().StringVar(&home, "home-dir", ".tendermint-lite", "Specify the home directory")
}
func runProxy(cmd *cobra.Command, args []string) error {
// First, connect a client
node := rpcclient.NewHTTP(nodeAddr, "/websocket")
cert, err := proxy.GetCertifier(chainID, home, nodeAddr)
if err != nil {
return err
}
sc := proxy.SecureClient(node, cert)
err = proxy.StartProxy(sc, listenAddr, logger)
if err != nil {
return err
}
cmn.TrapSignal(func() {
// TODO: close up shop
})
return nil
}

View File

@@ -9,20 +9,17 @@ import (
"github.com/tendermint/tendermint/p2p/upnp"
)
var probeUpnpCmd = &cobra.Command{
// ProbeUpnpCmd adds capabilities to test the UPnP functionality.
var ProbeUpnpCmd = &cobra.Command{
Use: "probe_upnp",
Short: "Test UPnP functionality",
RunE: probeUpnp,
}
func init() {
RootCmd.AddCommand(probeUpnpCmd)
}
func probeUpnp(cmd *cobra.Command, args []string) error {
capabilities, err := upnp.Probe(logger)
if err != nil {
fmt.Println("Probe failed: %v", err)
fmt.Println("Probe failed: ", err)
} else {
fmt.Println("Probe success!")
jsonBytes, err := json.Marshal(capabilities)

View File

@@ -6,7 +6,8 @@ import (
"github.com/tendermint/tendermint/consensus"
)
var replayCmd = &cobra.Command{
// ReplayCmd allows replaying of messages from the WAL.
var ReplayCmd = &cobra.Command{
Use: "replay",
Short: "Replay messages from WAL",
Run: func(cmd *cobra.Command, args []string) {
@@ -14,15 +15,12 @@ var replayCmd = &cobra.Command{
},
}
var replayConsoleCmd = &cobra.Command{
// ReplayConsoleCmd allows replaying of messages from the WAL in a
// console.
var ReplayConsoleCmd = &cobra.Command{
Use: "replay_console",
Short: "Replay messages from WAL in a console",
Run: func(cmd *cobra.Command, args []string) {
consensus.RunReplayFile(config.BaseConfig, config.Consensus, true)
},
}
func init() {
RootCmd.AddCommand(replayCmd)
RootCmd.AddCommand(replayConsoleCmd)
}

View File

@@ -9,21 +9,30 @@ import (
"github.com/tendermint/tmlibs/log"
)
var resetAllCmd = &cobra.Command{
// ResetAllCmd removes the database of this Tendermint core
// instance.
var ResetAllCmd = &cobra.Command{
Use: "unsafe_reset_all",
Short: "(unsafe) Remove all the data and WAL, reset this node's validator",
Run: resetAll,
}
var resetPrivValidatorCmd = &cobra.Command{
// ResetPrivValidatorCmd resets the private validator files.
var ResetPrivValidatorCmd = &cobra.Command{
Use: "unsafe_reset_priv_validator",
Short: "(unsafe) Reset this node's validator",
Run: resetPrivValidator,
}
func init() {
RootCmd.AddCommand(resetAllCmd)
RootCmd.AddCommand(resetPrivValidatorCmd)
// ResetAll removes the privValidator files.
// Exported so other CLI tools can use it.
func ResetAll(dbDir, privValFile string, logger log.Logger) {
resetPrivValidatorFS(privValFile, logger)
if err := os.RemoveAll(dbDir); err != nil {
logger.Error("Error removing directory", "err", err)
return
}
logger.Info("Removed all data", "dir", dbDir)
}
// XXX: this is totally unsafe.
@@ -35,26 +44,17 @@ func resetAll(cmd *cobra.Command, args []string) {
// XXX: this is totally unsafe.
// it's only suitable for testnets.
func resetPrivValidator(cmd *cobra.Command, args []string) {
resetPrivValidatorLocal(config.PrivValidatorFile(), logger)
resetPrivValidatorFS(config.PrivValidatorFile(), logger)
}
// Exported so other CLI tools can use it
func ResetAll(dbDir, privValFile string, logger log.Logger) {
resetPrivValidatorLocal(privValFile, logger)
os.RemoveAll(dbDir)
logger.Info("Removed all data", "dir", dbDir)
}
func resetPrivValidatorLocal(privValFile string, logger log.Logger) {
func resetPrivValidatorFS(privValFile string, logger log.Logger) {
// Get PrivValidator
var privValidator *types.PrivValidator
if _, err := os.Stat(privValFile); err == nil {
privValidator = types.LoadPrivValidator(privValFile)
privValidator := types.LoadPrivValidatorFS(privValFile)
privValidator.Reset()
logger.Info("Reset PrivValidator", "file", privValFile)
} else {
privValidator = types.GenPrivValidator()
privValidator.SetFile(privValFile)
privValidator := types.GenPrivValidatorFS(privValFile)
privValidator.Save()
logger.Info("Generated PrivValidator", "file", privValFile)
}

View File

@@ -14,14 +14,19 @@ import (
var (
config = cfg.DefaultConfig()
logger = log.NewTMLogger(log.NewSyncWriter(os.Stdout)).With("module", "main")
logger = log.NewTMLogger(log.NewSyncWriter(os.Stdout))
)
func init() {
RootCmd.PersistentFlags().String("log_level", config.LogLevel, "Log level")
registerFlagsRootCmd(RootCmd)
}
// ParseConfig will setup the tendermint configuration properly
func registerFlagsRootCmd(cmd *cobra.Command) {
cmd.PersistentFlags().String("log_level", config.LogLevel, "Log level")
}
// ParseConfig retrieves the default environment configuration,
// sets up the Tendermint root and ensures that the root exists
func ParseConfig() (*cfg.Config, error) {
conf := cfg.DefaultConfig()
err := viper.Unmarshal(conf)
@@ -33,10 +38,14 @@ func ParseConfig() (*cfg.Config, error) {
return conf, err
}
// RootCmd is the root command for Tendermint core.
var RootCmd = &cobra.Command{
Use: "tendermint",
Short: "Tendermint Core (BFT Consensus) in Go",
PersistentPreRunE: func(cmd *cobra.Command, args []string) (err error) {
if cmd.Name() == VersionCmd.Name() {
return nil
}
config, err = ParseConfig()
if err != nil {
return err
@@ -48,6 +57,7 @@ var RootCmd = &cobra.Command{
if viper.GetBool(cli.TraceFlag) {
logger = log.NewTracingLogger(logger)
}
logger = logger.With("module", "main")
return nil
},
}

View File

@@ -1,17 +1,21 @@
package commands
import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
"strconv"
"testing"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tmlibs/cli"
"testing"
cmn "github.com/tendermint/tmlibs/common"
)
var (
@@ -22,78 +26,151 @@ const (
rootName = "root"
)
// isolate provides a clean setup and returns a copy of RootCmd you can
// modify in the test cases
func isolate(cmds ...*cobra.Command) cli.Executable {
// clearConfig clears env vars, the given root dir, and resets viper.
func clearConfig(dir string) {
if err := os.Unsetenv("TMHOME"); err != nil {
panic(err)
}
if err := os.Unsetenv("TM_HOME"); err != nil {
panic(err)
}
if err := os.RemoveAll(dir); err != nil {
panic(err)
}
viper.Reset()
config = cfg.DefaultConfig()
r := &cobra.Command{
Use: rootName,
PersistentPreRunE: RootCmd.PersistentPreRunE,
}
r.AddCommand(cmds...)
wr := cli.PrepareBaseCmd(r, "TM", defaultRoot)
return wr
}
func TestRootConfig(t *testing.T) {
assert, require := assert.New(t), require.New(t)
// we pre-create a config file we can refer to in the rest of
// the test cases.
cvals := map[string]string{
"moniker": "monkey",
"fast_sync": "false",
// prepare new rootCmd
func testRootCmd() *cobra.Command {
rootCmd := &cobra.Command{
Use: RootCmd.Use,
PersistentPreRunE: RootCmd.PersistentPreRunE,
Run: func(cmd *cobra.Command, args []string) {},
}
// proper types of the above settings
cfast := false
conf, err := cli.WriteDemoConfig(cvals)
require.Nil(err)
registerFlagsRootCmd(rootCmd)
var l string
rootCmd.PersistentFlags().String("log", l, "Log")
return rootCmd
}
func testSetup(rootDir string, args []string, env map[string]string) error {
clearConfig(defaultRoot)
rootCmd := testRootCmd()
cmd := cli.PrepareBaseCmd(rootCmd, "TM", defaultRoot)
// run with the args and env
args = append([]string{rootCmd.Use}, args...)
return cli.RunWithArgs(cmd, args, env)
}
func TestRootHome(t *testing.T) {
newRoot := filepath.Join(defaultRoot, "something-else")
cases := []struct {
args []string
env map[string]string
root string
}{
{nil, nil, defaultRoot},
{[]string{"--home", newRoot}, nil, newRoot},
{nil, map[string]string{"TMHOME": newRoot}, newRoot},
}
for i, tc := range cases {
idxString := strconv.Itoa(i)
err := testSetup(defaultRoot, tc.args, tc.env)
require.Nil(t, err, idxString)
assert.Equal(t, tc.root, config.RootDir, idxString)
assert.Equal(t, tc.root, config.P2P.RootDir, idxString)
assert.Equal(t, tc.root, config.Consensus.RootDir, idxString)
assert.Equal(t, tc.root, config.Mempool.RootDir, idxString)
}
}
func TestRootFlagsEnv(t *testing.T) {
// defaults
defaults := cfg.DefaultConfig()
dmax := defaults.P2P.MaxNumPeers
defaultLogLvl := defaults.LogLevel
cases := []struct {
args []string
env map[string]string
root string
moniker string
fastSync bool
maxPeer int
logLevel string
}{
{nil, nil, defaultRoot, defaults.Moniker, defaults.FastSync, dmax},
// try multiple ways of setting root (two flags, cli vs. env)
{[]string{"--home", conf}, nil, conf, cvals["moniker"], cfast, dmax},
{nil, map[string]string{"TMROOT": conf}, conf, cvals["moniker"], cfast, dmax},
// check setting p2p subflags two different ways
{[]string{"--p2p.max_num_peers", "420"}, nil, defaultRoot, defaults.Moniker, defaults.FastSync, 420},
{nil, map[string]string{"TM_P2P_MAX_NUM_PEERS": "17"}, defaultRoot, defaults.Moniker, defaults.FastSync, 17},
// try to set env that have no flags attached...
{[]string{"--home", conf}, map[string]string{"TM_MONIKER": "funny"}, conf, "funny", cfast, dmax},
{[]string{"--log", "debug"}, nil, defaultLogLvl}, // wrong flag
{[]string{"--log_level", "debug"}, nil, "debug"}, // right flag
{nil, map[string]string{"TM_LOW": "debug"}, defaultLogLvl}, // wrong env flag
{nil, map[string]string{"MT_LOG_LEVEL": "debug"}, defaultLogLvl}, // wrong env prefix
{nil, map[string]string{"TM_LOG_LEVEL": "debug"}, "debug"}, // right env
}
for idx, tc := range cases {
i := strconv.Itoa(idx)
// test command that does nothing, except trigger unmarshalling in root
noop := &cobra.Command{
Use: "noop",
RunE: func(cmd *cobra.Command, args []string) error {
return nil
},
}
noop.Flags().Int("p2p.max_num_peers", defaults.P2P.MaxNumPeers, "")
cmd := isolate(noop)
for i, tc := range cases {
idxString := strconv.Itoa(i)
args := append([]string{rootName, noop.Use}, tc.args...)
err := cli.RunWithArgs(cmd, args, tc.env)
require.Nil(err, i)
assert.Equal(tc.root, config.RootDir, i)
assert.Equal(tc.root, config.P2P.RootDir, i)
assert.Equal(tc.root, config.Consensus.RootDir, i)
assert.Equal(tc.root, config.Mempool.RootDir, i)
assert.Equal(tc.moniker, config.Moniker, i)
assert.Equal(tc.fastSync, config.FastSync, i)
assert.Equal(tc.maxPeer, config.P2P.MaxNumPeers, i)
err := testSetup(defaultRoot, tc.args, tc.env)
require.Nil(t, err, idxString)
assert.Equal(t, tc.logLevel, config.LogLevel, idxString)
}
}
func TestRootConfig(t *testing.T) {
// write non-default config
nonDefaultLogLvl := "abc:debug"
cvals := map[string]string{
"log_level": nonDefaultLogLvl,
}
cases := []struct {
args []string
env map[string]string
logLvl string
}{
{nil, nil, nonDefaultLogLvl}, // should load config
{[]string{"--log_level=abc:info"}, nil, "abc:info"}, // flag over rides
{nil, map[string]string{"TM_LOG_LEVEL": "abc:info"}, "abc:info"}, // env over rides
}
for i, tc := range cases {
idxString := strconv.Itoa(i)
clearConfig(defaultRoot)
// XXX: path must match cfg.defaultConfigPath
configFilePath := filepath.Join(defaultRoot, "config")
err := cmn.EnsureDir(configFilePath, 0700)
require.Nil(t, err)
// write the non-defaults to a different path
// TODO: support writing sub configs so we can test that too
err = WriteConfigVals(configFilePath, cvals)
require.Nil(t, err)
rootCmd := testRootCmd()
cmd := cli.PrepareBaseCmd(rootCmd, "TM", defaultRoot)
// run with the args and env
tc.args = append([]string{rootCmd.Use}, tc.args...)
err = cli.RunWithArgs(cmd, tc.args, tc.env)
require.Nil(t, err, idxString)
assert.Equal(t, tc.logLvl, config.LogLevel, idxString)
}
}
// WriteConfigVals writes a toml file with the given values.
// It returns an error if writing fails.
func WriteConfigVals(dir string, vals map[string]string) error {
data := ""
for k, v := range vals {
data = data + fmt.Sprintf("%s = \"%s\"\n", k, v)
}
cfile := filepath.Join(dir, "config.toml")
return ioutil.WriteFile(cfile, []byte(data), 0666)
}
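
A hypothetical helper built on WriteConfigVals, mirroring what TestRootConfig does above (the helper name and its placement next to the test utilities are assumptions):

```go
package commands

import (
	"path/filepath"

	cmn "github.com/tendermint/tmlibs/common"
)

// seedConfig is a hypothetical helper: it pre-creates <root>/config/config.toml
// with the given overrides so a later run of RootCmd picks them up via viper.
func seedConfig(root string, vals map[string]string) error {
	configDir := filepath.Join(root, "config") // must match cfg.defaultConfigPath
	if err := cmn.EnsureDir(configDir, 0700); err != nil {
		return err
	}
	return WriteConfigVals(configDir, vals)
}
```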

View File

@@ -2,26 +2,12 @@ package commands
import (
"fmt"
"time"
"github.com/spf13/cobra"
"github.com/tendermint/tendermint/node"
"github.com/tendermint/tendermint/types"
cmn "github.com/tendermint/tmlibs/common"
nm "github.com/tendermint/tendermint/node"
)
var runNodeCmd = &cobra.Command{
Use: "node",
Short: "Run the tendermint node",
RunE: runNode,
}
func init() {
AddNodeFlags(runNodeCmd)
RootCmd.AddCommand(runNodeCmd)
}
// AddNodeFlags exposes some common configuration options on the command-line
// These are exposed for convenience of commands embedding a tendermint node
func AddNodeFlags(cmd *cobra.Command) {
@@ -43,44 +29,41 @@ func AddNodeFlags(cmd *cobra.Command) {
// p2p flags
cmd.Flags().String("p2p.laddr", config.P2P.ListenAddress, "Node listen address. (0.0.0.0:0 means any interface, any port)")
cmd.Flags().String("p2p.seeds", config.P2P.Seeds, "Comma delimited host:port seed nodes")
cmd.Flags().String("p2p.persistent_peers", config.P2P.PersistentPeers, "Comma delimited host:port persistent peers")
cmd.Flags().Bool("p2p.skip_upnp", config.P2P.SkipUPNP, "Skip UPNP configuration")
cmd.Flags().Bool("p2p.pex", config.P2P.PexReactor, "Enable Peer-Exchange (dev feature)")
cmd.Flags().Bool("p2p.pex", config.P2P.PexReactor, "Enable/disable Peer-Exchange")
cmd.Flags().Bool("p2p.seed_mode", config.P2P.SeedMode, "Enable/disable seed mode")
// consensus flags
cmd.Flags().Bool("consensus.create_empty_blocks", config.Consensus.CreateEmptyBlocks, "Set this to false to only produce blocks when there are txs or when the AppHash changes")
}
// Users wishing to:
// * Use an external signer for their validators
// * Supply an in-proc abci app
// should import github.com/tendermint/tendermint/node and implement
// their own run_node to call node.NewNode (instead of node.NewNodeDefault)
// with their custom priv validator and/or custom proxy.ClientCreator
func runNode(cmd *cobra.Command, args []string) error {
// NewRunNodeCmd returns the command that allows the CLI to start a
// node. It can be used with a custom PrivValidator and in-process ABCI application.
func NewRunNodeCmd(nodeProvider nm.NodeProvider) *cobra.Command {
cmd := &cobra.Command{
Use: "node",
Short: "Run the tendermint node",
RunE: func(cmd *cobra.Command, args []string) error {
// Create & start node
n, err := nodeProvider(config, logger)
if err != nil {
return fmt.Errorf("Failed to create node: %v", err)
}
// Wait until the genesis doc becomes available
// This is for Mintnet compatibility.
// TODO: If Mintnet gets deprecated or genesis_file is
// always available, remove.
genDocFile := config.GenesisFile()
for !cmn.FileExists(genDocFile) {
logger.Info(cmn.Fmt("Waiting for genesis file %v...", genDocFile))
time.Sleep(time.Second)
if err := n.Start(); err != nil {
return fmt.Errorf("Failed to start node: %v", err)
} else {
logger.Info("Started node", "nodeInfo", n.Switch().NodeInfo())
}
// Trap signal, run forever.
n.RunForever()
return nil
},
}
genDoc, err := types.GenesisDocFromFile(genDocFile)
if err != nil {
return err
}
config.ChainID = genDoc.ChainID
// Create & start node
n := node.NewNodeDefault(config, logger.With("module", "node"))
if _, err := n.Start(); err != nil {
return fmt.Errorf("Failed to start node: %v", err)
} else {
logger.Info("Started node", "nodeInfo", n.Switch().NodeInfo())
}
// Trap signal, run forever.
n.RunForever()
return nil
AddNodeFlags(cmd)
return cmd
}
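
As the doc comment above notes, downstream binaries can wire this command up with their own NodeProvider. A minimal sketch under that assumption follows; myNodeProvider is hypothetical, and DefaultNewNode is reused here only as a stand-in for custom construction:

```go
package main

import (
	"github.com/tendermint/tendermint/cmd/tendermint/commands"
	cfg "github.com/tendermint/tendermint/config"
	nm "github.com/tendermint/tendermint/node"
	"github.com/tendermint/tmlibs/log"
)

// myNodeProvider is a hypothetical provider; a real one could inject a custom
// PrivValidator or an in-process ABCI app before building the node.
func myNodeProvider(config *cfg.Config, logger log.Logger) (*nm.Node, error) {
	return nm.DefaultNewNode(config, logger)
}

func main() {
	commands.RootCmd.AddCommand(commands.NewRunNodeCmd(myNodeProvider))
	// Wrap with cli.PrepareBaseCmd and call Execute, as cmd/tendermint/main.go does below.
}
```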

View File

@@ -9,18 +9,15 @@ import (
"github.com/tendermint/tendermint/types"
)
var showValidatorCmd = &cobra.Command{
// ShowValidatorCmd shows this node's validator info.
var ShowValidatorCmd = &cobra.Command{
Use: "show_validator",
Short: "Show this node's validator info",
Run: showValidator,
}
func init() {
RootCmd.AddCommand(showValidatorCmd)
}
func showValidator(cmd *cobra.Command, args []string) {
privValidator := types.LoadOrGenPrivValidator(config.PrivValidatorFile(), logger)
privValidator := types.LoadOrGenPrivValidatorFS(config.PrivValidatorFile())
pubKeyJSONBytes, _ := data.ToJSON(privValidator.PubKey)
fmt.Println(string(pubKeyJSONBytes))
}

View File

@@ -2,21 +2,16 @@ package commands
import (
"fmt"
"path"
"path/filepath"
"time"
"github.com/spf13/cobra"
cmn "github.com/tendermint/tmlibs/common"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/types"
cmn "github.com/tendermint/tmlibs/common"
)
var testnetFilesCmd = &cobra.Command{
Use: "testnet",
Short: "Initialize files for a Tendermint testnet",
Run: testnetFiles,
}
//flags
var (
nValidators int
@@ -24,17 +19,24 @@ var (
)
func init() {
testnetFilesCmd.Flags().IntVar(&nValidators, "n", 4,
TestnetFilesCmd.Flags().IntVar(&nValidators, "n", 4,
"Number of validators to initialize the testnet with")
testnetFilesCmd.Flags().StringVar(&dataDir, "dir", "mytestnet",
TestnetFilesCmd.Flags().StringVar(&dataDir, "dir", "mytestnet",
"Directory to store initialization data for the testnet")
}
RootCmd.AddCommand(testnetFilesCmd)
// TestnetFilesCmd initializes files for a Tendermint testnet.
var TestnetFilesCmd = &cobra.Command{
Use: "testnet",
Short: "Initialize files for a Tendermint testnet",
Run: testnetFiles,
}
func testnetFiles(cmd *cobra.Command, args []string) {
genVals := make([]types.GenesisValidator, nValidators)
defaultConfig := cfg.DefaultBaseConfig()
// Initialize core dir and priv_validator.json's
for i := 0; i < nValidators; i++ {
@@ -44,11 +46,11 @@ func testnetFiles(cmd *cobra.Command, args []string) {
cmn.Exit(err.Error())
}
// Read priv_validator.json to populate vals
privValFile := path.Join(dataDir, mach, "priv_validator.json")
privVal := types.LoadPrivValidator(privValFile)
privValFile := filepath.Join(dataDir, mach, defaultConfig.PrivValidator)
privVal := types.LoadPrivValidatorFS(privValFile)
genVals[i] = types.GenesisValidator{
PubKey: privVal.PubKey,
Amount: 1,
PubKey: privVal.GetPubKey(),
Power: 1,
Name: mach,
}
}
@@ -63,7 +65,9 @@ func testnetFiles(cmd *cobra.Command, args []string) {
// Write genesis file.
for i := 0; i < nValidators; i++ {
mach := cmn.Fmt("mach%d", i)
genDoc.SaveAs(path.Join(dataDir, mach, "genesis.json"))
if err := genDoc.SaveAs(filepath.Join(dataDir, mach, defaultConfig.Genesis)); err != nil {
panic(err)
}
}
fmt.Println(cmn.Fmt("Successfully initialized %v node directories", nValidators))
@@ -71,14 +75,15 @@ func testnetFiles(cmd *cobra.Command, args []string) {
// Initialize per-machine core directory
func initMachCoreDirectory(base, mach string) error {
dir := path.Join(base, mach)
dir := filepath.Join(base, mach)
err := cmn.EnsureDir(dir, 0777)
if err != nil {
return err
}
// Create priv_validator.json file if not present
ensurePrivValidator(path.Join(dir, "priv_validator.json"))
defaultConfig := cfg.DefaultBaseConfig()
ensurePrivValidator(filepath.Join(dir, defaultConfig.PrivValidator))
return nil
}
@@ -87,7 +92,6 @@ func ensurePrivValidator(file string) {
if cmn.FileExists(file) {
return
}
privValidator := types.GenPrivValidator()
privValidator.SetFile(file)
privValidator := types.GenPrivValidatorFS(file)
privValidator.Save()
}

View File

@@ -8,14 +8,11 @@ import (
"github.com/tendermint/tendermint/version"
)
var versionCmd = &cobra.Command{
// VersionCmd prints the version of Tendermint.
var VersionCmd = &cobra.Command{
Use: "version",
Short: "Show version info",
Run: func(cmd *cobra.Command, args []string) {
fmt.Println(version.Version)
},
}
func init() {
RootCmd.AddCommand(versionCmd)
}

View File

@@ -2,12 +2,45 @@ package main
import (
"os"
"path/filepath"
"github.com/tendermint/tendermint/cmd/tendermint/commands"
"github.com/tendermint/tmlibs/cli"
cmd "github.com/tendermint/tendermint/cmd/tendermint/commands"
cfg "github.com/tendermint/tendermint/config"
nm "github.com/tendermint/tendermint/node"
)
func main() {
cmd := cli.PrepareBaseCmd(commands.RootCmd, "TM", os.ExpandEnv("$HOME/.tendermint"))
cmd.Execute()
rootCmd := cmd.RootCmd
rootCmd.AddCommand(
cmd.GenValidatorCmd,
cmd.InitFilesCmd,
cmd.ProbeUpnpCmd,
cmd.LiteCmd,
cmd.ReplayCmd,
cmd.ReplayConsoleCmd,
cmd.ResetAllCmd,
cmd.ResetPrivValidatorCmd,
cmd.ShowValidatorCmd,
cmd.TestnetFilesCmd,
cmd.VersionCmd)
// NOTE:
// Users wishing to:
// * Use an external signer for their validators
// * Supply an in-proc abci app
// * Supply a genesis doc file from another source
// * Provide their own DB implementation
// can copy this file and use something other than the
// DefaultNewNode function
nodeFunc := nm.DefaultNewNode
// Create & start node
rootCmd.AddCommand(cmd.NewRunNodeCmd(nodeFunc))
cmd := cli.PrepareBaseCmd(rootCmd, "TM", os.ExpandEnv(filepath.Join("$HOME", cfg.DefaultTendermintDir)))
if err := cmd.Execute(); err != nil {
panic(err)
}
}

View File

@@ -2,12 +2,36 @@ package config
import (
"fmt"
"os"
"path/filepath"
"time"
"github.com/tendermint/tendermint/types"
)
// NOTE: Most of the structs & relevant comments + the
// default configuration options were used to manually
// generate the config.toml. Please reflect any changes
// made here in the defaultConfigTemplate constant in
// config/toml.go
// NOTE: tmlibs/cli must know to look in the config dir!
var (
DefaultTendermintDir = ".tendermint"
defaultConfigDir = "config"
defaultDataDir = "data"
defaultConfigFileName = "config.toml"
defaultGenesisJSONName = "genesis.json"
defaultPrivValName = "priv_validator.json"
defaultNodeKeyName = "node_key.json"
defaultAddrBookName = "addrbook.json"
defaultConfigFilePath = filepath.Join(defaultConfigDir, defaultConfigFileName)
defaultGenesisJSONPath = filepath.Join(defaultConfigDir, defaultGenesisJSONName)
defaultPrivValPath = filepath.Join(defaultConfigDir, defaultPrivValName)
defaultNodeKeyPath = filepath.Join(defaultConfigDir, defaultNodeKeyName)
defaultAddrBookPath = filepath.Join(defaultConfigDir, defaultAddrBookName)
)
// Config defines the top level configuration for a Tendermint node
type Config struct {
// Top level options use an anonymous struct
BaseConfig `mapstructure:",squash"`
@@ -17,8 +41,10 @@ type Config struct {
P2P *P2PConfig `mapstructure:"p2p"`
Mempool *MempoolConfig `mapstructure:"mempool"`
Consensus *ConsensusConfig `mapstructure:"consensus"`
TxIndex *TxIndexConfig `mapstructure:"tx_index"`
}
// DefaultConfig returns a default configuration for a Tendermint node
func DefaultConfig() *Config {
return &Config{
BaseConfig: DefaultBaseConfig(),
@@ -26,20 +52,23 @@ func DefaultConfig() *Config {
P2P: DefaultP2PConfig(),
Mempool: DefaultMempoolConfig(),
Consensus: DefaultConsensusConfig(),
TxIndex: DefaultTxIndexConfig(),
}
}
// TestConfig returns a configuration that can be used for testing
func TestConfig() *Config {
return &Config{
BaseConfig: TestBaseConfig(),
RPC: TestRPCConfig(),
P2P: TestP2PConfig(),
Mempool: DefaultMempoolConfig(),
Mempool: TestMempoolConfig(),
Consensus: TestConsensusConfig(),
TxIndex: TestTxIndexConfig(),
}
}
// Set the RootDir for all Config structs
// SetRoot sets the RootDir for all Config structs
func (cfg *Config) SetRoot(root string) *Config {
cfg.BaseConfig.RootDir = root
cfg.RPC.RootDir = root
@@ -52,21 +81,25 @@ func (cfg *Config) SetRoot(root string) *Config {
//-----------------------------------------------------------------------------
// BaseConfig
// BaseConfig struct for a Tendermint node
// BaseConfig defines the base configuration for a Tendermint node
type BaseConfig struct {
// chainID is unexposed and immutable but here for convenience
chainID string
// The root directory for all data.
// This should be set in viper so it can unmarshal into this struct
RootDir string `mapstructure:"home"`
// The ID of the chain to join (should be signed with every transaction and vote)
ChainID string `mapstructure:"chain_id"`
// A JSON file containing the initial validator set and other meta data
// Path to the JSON file containing the initial validator set and other meta data
Genesis string `mapstructure:"genesis_file"`
// A JSON file containing the private key to use as a validator in the consensus protocol
// Path to the JSON file containing the private key to use as a validator in the consensus protocol
PrivValidator string `mapstructure:"priv_validator_file"`
// Path to the JSON file containing the private key to use for p2p authenticated encryption
NodeKey string `mapstructure:"node_key_file"`
// A custom human readable name for this node
Moniker string `mapstructure:"moniker"`
@@ -92,9 +125,6 @@ type BaseConfig struct {
// so the app can decide if we should keep the connection or not
FilterPeers bool `mapstructure:"filter_peers"` // false
// What indexer to use for transactions
TxIndex string `mapstructure:"tx_index"`
// Database backend: leveldb | memdb
DBBackend string `mapstructure:"db_backend"`
@@ -102,55 +132,73 @@ type BaseConfig struct {
DBPath string `mapstructure:"db_dir"`
}
func (c BaseConfig) ChainID() string {
return c.chainID
}
// DefaultBaseConfig returns a default base configuration for a Tendermint node
func DefaultBaseConfig() BaseConfig {
return BaseConfig{
Genesis: "genesis.json",
PrivValidator: "priv_validator.json",
Moniker: "anonymous",
Genesis: defaultGenesisJSONPath,
PrivValidator: defaultPrivValPath,
NodeKey: defaultNodeKeyPath,
Moniker: defaultMoniker,
ProxyApp: "tcp://127.0.0.1:46658",
ABCI: "socket",
LogLevel: DefaultPackageLogLevels(),
ProfListenAddress: "",
FastSync: true,
FilterPeers: false,
TxIndex: "kv",
DBBackend: "leveldb",
DBPath: "data",
}
}
// TestBaseConfig returns a base configuration for testing a Tendermint node
func TestBaseConfig() BaseConfig {
conf := DefaultBaseConfig()
conf.ChainID = "tendermint_test"
conf.chainID = "tendermint_test"
conf.ProxyApp = "dummy"
conf.FastSync = false
conf.DBBackend = "memdb"
return conf
}
// GenesisFile returns the full path to the genesis.json file
func (b BaseConfig) GenesisFile() string {
return rootify(b.Genesis, b.RootDir)
}
// PrivValidatorFile returns the full path to the priv_validator.json file
func (b BaseConfig) PrivValidatorFile() string {
return rootify(b.PrivValidator, b.RootDir)
}
// NodeKeyFile returns the full path to the node_key.json file
func (b BaseConfig) NodeKeyFile() string {
return rootify(b.NodeKey, b.RootDir)
}
// DBDir returns the full path to the database directory
func (b BaseConfig) DBDir() string {
return rootify(b.DBPath, b.RootDir)
}
// DefaultLogLevel returns a default log level of "error"
func DefaultLogLevel() string {
return "error"
}
// DefaultPackageLogLevels returns a default log level setting so all packages
// log at "error", while the `state` and `main` packages log at "info"
func DefaultPackageLogLevels() string {
return fmt.Sprintf("state:info,*:%s", DefaultLogLevel())
return fmt.Sprintf("main:info,state:info,*:%s", DefaultLogLevel())
}
//-----------------------------------------------------------------------------
// RPCConfig
// RPCConfig defines the configuration options for the Tendermint RPC server
type RPCConfig struct {
RootDir string `mapstructure:"home"`
@@ -161,10 +209,11 @@ type RPCConfig struct {
// NOTE: This server only supports /broadcast_tx_commit
GRPCListenAddress string `mapstructure:"grpc_laddr"`
// Activate unsafe RPC commands like /dial_seeds and /unsafe_flush_mempool
// Activate unsafe RPC commands like /dial_persistent_peers and /unsafe_flush_mempool
Unsafe bool `mapstructure:"unsafe"`
}
// DefaultRPCConfig returns a default configuration for the RPC server
func DefaultRPCConfig() *RPCConfig {
return &RPCConfig{
ListenAddress: "tcp://0.0.0.0:46657",
@@ -173,6 +222,7 @@ func DefaultRPCConfig() *RPCConfig {
}
}
// TestRPCConfig returns a configuration for testing the RPC server
func TestRPCConfig() *RPCConfig {
conf := DefaultRPCConfig()
conf.ListenAddress = "tcp://0.0.0.0:36657"
@@ -184,33 +234,81 @@ func TestRPCConfig() *RPCConfig {
//-----------------------------------------------------------------------------
// P2PConfig
// P2PConfig defines the configuration options for the Tendermint peer-to-peer networking layer
type P2PConfig struct {
RootDir string `mapstructure:"home"`
ListenAddress string `mapstructure:"laddr"`
Seeds string `mapstructure:"seeds"`
SkipUPNP bool `mapstructure:"skip_upnp"`
AddrBook string `mapstructure:"addr_book_file"`
AddrBookStrict bool `mapstructure:"addr_book_strict"`
PexReactor bool `mapstructure:"pex"`
MaxNumPeers int `mapstructure:"max_num_peers"`
RootDir string `mapstructure:"home"`
// Address to listen for incoming connections
ListenAddress string `mapstructure:"laddr"`
// Comma separated list of seed nodes to connect to
// We only use these if we can't connect to peers in the addrbook
Seeds string `mapstructure:"seeds"`
// Comma separated list of persistent peers to connect to
// We always connect to these
PersistentPeers string `mapstructure:"persistent_peers"`
// Skip UPNP port forwarding
SkipUPNP bool `mapstructure:"skip_upnp"`
// Path to address book
AddrBook string `mapstructure:"addr_book_file"`
// Set true for strict address routability rules
AddrBookStrict bool `mapstructure:"addr_book_strict"`
// Maximum number of peers to connect to
MaxNumPeers int `mapstructure:"max_num_peers"`
// Time to wait before flushing messages out on the connection, in ms
FlushThrottleTimeout int `mapstructure:"flush_throttle_timeout"`
// Maximum size of a message packet payload, in bytes
MaxMsgPacketPayloadSize int `mapstructure:"max_msg_packet_payload_size"`
// Rate at which packets can be sent, in bytes/second
SendRate int64 `mapstructure:"send_rate"`
// Rate at which packets can be received, in bytes/second
RecvRate int64 `mapstructure:"recv_rate"`
// Set true to enable the peer-exchange reactor
PexReactor bool `mapstructure:"pex"`
// Seed mode, in which the node constantly crawls the network and looks for
// peers. If another node asks it for addresses, it responds and disconnects.
//
// Does not work if the peer-exchange reactor is disabled.
SeedMode bool `mapstructure:"seed_mode"`
}
// DefaultP2PConfig returns a default configuration for the peer-to-peer layer
func DefaultP2PConfig() *P2PConfig {
return &P2PConfig{
ListenAddress: "tcp://0.0.0.0:46656",
AddrBook: "addrbook.json",
AddrBookStrict: true,
MaxNumPeers: 50,
ListenAddress: "tcp://0.0.0.0:46656",
AddrBook: defaultAddrBookPath,
AddrBookStrict: true,
MaxNumPeers: 50,
FlushThrottleTimeout: 100,
MaxMsgPacketPayloadSize: 1024, // 1 kB
SendRate: 512000, // 500 kB/s
RecvRate: 512000, // 500 kB/s
PexReactor: true,
SeedMode: false,
}
}
// TestP2PConfig returns a configuration for testing the peer-to-peer layer
func TestP2PConfig() *P2PConfig {
conf := DefaultP2PConfig()
conf.ListenAddress = "tcp://0.0.0.0:36656"
conf.SkipUPNP = true
conf.FlushThrottleTimeout = 10
return conf
}
// AddrBookFile returns the full path to the address book
func (p *P2PConfig) AddrBookFile() string {
return rootify(p.AddrBook, p.RootDir)
}
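
A brief sketch of setting the new P2P fields programmatically; the addresses are placeholders, and the comma-delimited host:port format follows the flag descriptions shown earlier:

```go
package main

import (
	"fmt"

	cfg "github.com/tendermint/tendermint/config"
)

func main() {
	p2p := cfg.DefaultP2PConfig()
	p2p.Seeds = "10.0.0.1:46656,10.0.0.2:46656" // used only if we can't reach addr book peers
	p2p.PersistentPeers = "10.0.0.3:46656"      // always connected to
	p2p.SeedMode = true                         // requires the pex reactor to stay enabled
	fmt.Println(p2p.SendRate, p2p.RecvRate)     // 512000 512000 (the 500 kB/s defaults)
}
```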
@@ -218,23 +316,35 @@ func (p *P2PConfig) AddrBookFile() string {
//-----------------------------------------------------------------------------
// MempoolConfig
// MempoolConfig defines the configuration options for the Tendermint mempool
type MempoolConfig struct {
RootDir string `mapstructure:"home"`
Recheck bool `mapstructure:"recheck"`
RecheckEmpty bool `mapstructure:"recheck_empty"`
Broadcast bool `mapstructure:"broadcast"`
WalPath string `mapstructure:"wal_dir"`
CacheSize int `mapstructure:"cache_size"`
}
// DefaultMempoolConfig returns a default configuration for the Tendermint mempool
func DefaultMempoolConfig() *MempoolConfig {
return &MempoolConfig{
Recheck: true,
RecheckEmpty: true,
Broadcast: true,
WalPath: "data/mempool.wal",
WalPath: filepath.Join(defaultDataDir, "mempool.wal"),
CacheSize: 100000,
}
}
// TestMempoolConfig returns a configuration for testing the Tendermint mempool
func TestMempoolConfig() *MempoolConfig {
config := DefaultMempoolConfig()
config.CacheSize = 1000
return config
}
// WalDir returns the full path to the mempool's write-ahead log
func (m *MempoolConfig) WalDir() string {
return rootify(m.WalPath, m.RootDir)
}
@@ -242,15 +352,15 @@ func (m *MempoolConfig) WalDir() string {
//-----------------------------------------------------------------------------
// ConsensusConfig
// ConsensusConfig holds timeouts and details about the WAL, the block structure,
// and timeouts in the consensus protocol.
// ConsensusConfig defines the configuration for the Tendermint consensus service,
// including timeouts and details about the WAL and the block structure.
type ConsensusConfig struct {
RootDir string `mapstructure:"home"`
WalPath string `mapstructure:"wal_file"`
WalLight bool `mapstructure:"wal_light"`
walFile string // overrides WalPath if set
// All timeouts are in ms
// All timeouts are in milliseconds
TimeoutPropose int `mapstructure:"timeout_propose"`
TimeoutProposeDelta int `mapstructure:"timeout_propose_delta"`
TimeoutPrevote int `mapstructure:"timeout_prevote"`
@@ -266,52 +376,81 @@ type ConsensusConfig struct {
MaxBlockSizeTxs int `mapstructure:"max_block_size_txs"`
MaxBlockSizeBytes int `mapstructure:"max_block_size_bytes"`
// TODO: This probably shouldn't be exposed but it makes it
// easy to write tests for the wal/replay
BlockPartSize int `mapstructure:"block_part_size"`
// EmptyBlocks mode and possible interval between empty blocks in seconds
CreateEmptyBlocks bool `mapstructure:"create_empty_blocks"`
CreateEmptyBlocksInterval int `mapstructure:"create_empty_blocks_interval"`
// Reactor sleep duration parameters are in milliseconds
PeerGossipSleepDuration int `mapstructure:"peer_gossip_sleep_duration"`
PeerQueryMaj23SleepDuration int `mapstructure:"peer_query_maj23_sleep_duration"`
}
// Wait this long for a proposal
// WaitForTxs returns true if the consensus should wait for transactions before entering the propose step
func (cfg *ConsensusConfig) WaitForTxs() bool {
return !cfg.CreateEmptyBlocks || cfg.CreateEmptyBlocksInterval > 0
}
// EmptyBlocksInterval returns the amount of time to wait before proposing an empty block or starting the propose timer if there are no txs available
func (cfg *ConsensusConfig) EmptyBlocksInterval() time.Duration {
return time.Duration(cfg.CreateEmptyBlocksInterval) * time.Second
}
// Propose returns the amount of time to wait for a proposal
func (cfg *ConsensusConfig) Propose(round int) time.Duration {
return time.Duration(cfg.TimeoutPropose+cfg.TimeoutProposeDelta*round) * time.Millisecond
}
// After receiving any +2/3 prevote, wait this long for stragglers
// Prevote returns the amount of time to wait for straggler votes after receiving any +2/3 prevotes
func (cfg *ConsensusConfig) Prevote(round int) time.Duration {
return time.Duration(cfg.TimeoutPrevote+cfg.TimeoutPrevoteDelta*round) * time.Millisecond
}
// After receiving any +2/3 precommits, wait this long for stragglers
// Precommit returns the amount of time to wait for straggler votes after receiving any +2/3 precommits
func (cfg *ConsensusConfig) Precommit(round int) time.Duration {
return time.Duration(cfg.TimeoutPrecommit+cfg.TimeoutPrecommitDelta*round) * time.Millisecond
}
// After receiving +2/3 precommits for a single block (a commit), wait this long for stragglers in the next height's RoundStepNewHeight
// Commit returns the amount of time to wait for straggler votes after receiving +2/3 precommits for a single block (ie. a commit).
func (cfg *ConsensusConfig) Commit(t time.Time) time.Time {
return t.Add(time.Duration(cfg.TimeoutCommit) * time.Millisecond)
}
// PeerGossipSleep returns the amount of time to sleep if there is nothing to send from the ConsensusReactor
func (cfg *ConsensusConfig) PeerGossipSleep() time.Duration {
return time.Duration(cfg.PeerGossipSleepDuration) * time.Millisecond
}
// PeerQueryMaj23Sleep returns the amount of time to sleep after each VoteSetMaj23Message is sent in the ConsensusReactor
func (cfg *ConsensusConfig) PeerQueryMaj23Sleep() time.Duration {
return time.Duration(cfg.PeerQueryMaj23SleepDuration) * time.Millisecond
}
// DefaultConsensusConfig returns a default configuration for the consensus service
func DefaultConsensusConfig() *ConsensusConfig {
return &ConsensusConfig{
WalPath: "data/cs.wal/wal",
WalLight: false,
TimeoutPropose: 3000,
TimeoutProposeDelta: 500,
TimeoutPrevote: 1000,
TimeoutPrevoteDelta: 500,
TimeoutPrecommit: 1000,
TimeoutPrecommitDelta: 500,
TimeoutCommit: 1000,
SkipTimeoutCommit: false,
MaxBlockSizeTxs: 10000,
MaxBlockSizeBytes: 1, // TODO
BlockPartSize: types.DefaultBlockPartSize, // TODO: we shouldnt be importing types
WalPath: filepath.Join(defaultDataDir, "cs.wal", "wal"),
WalLight: false,
TimeoutPropose: 3000,
TimeoutProposeDelta: 500,
TimeoutPrevote: 1000,
TimeoutPrevoteDelta: 500,
TimeoutPrecommit: 1000,
TimeoutPrecommitDelta: 500,
TimeoutCommit: 1000,
SkipTimeoutCommit: false,
MaxBlockSizeTxs: 10000,
MaxBlockSizeBytes: 1, // TODO
CreateEmptyBlocks: true,
CreateEmptyBlocksInterval: 0,
PeerGossipSleepDuration: 100,
PeerQueryMaj23SleepDuration: 2000,
}
}
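
For orientation, a minimal sketch (not part of the diff) of how the round-based helpers above scale with the round number, using the default values just shown:

```go
package main

import (
	"fmt"

	cfg "github.com/tendermint/tendermint/config"
)

func main() {
	c := cfg.DefaultConsensusConfig()
	// Propose(round) = (TimeoutPropose + TimeoutProposeDelta*round) milliseconds.
	fmt.Println(c.Propose(0)) // 3s with the defaults above
	fmt.Println(c.Propose(2)) // 3s + 2*500ms = 4s
	fmt.Println(c.Prevote(1)) // 1s + 500ms = 1.5s
	// WaitForTxs is true when empty blocks are disabled or an interval is set.
	fmt.Println(c.WaitForTxs()) // false: CreateEmptyBlocks=true, interval=0
}
```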
// TestConsensusConfig returns a configuration for testing the consensus service
func TestConsensusConfig() *ConsensusConfig {
config := DefaultConsensusConfig()
config.TimeoutPropose = 2000
config.TimeoutPropose = 100
config.TimeoutProposeDelta = 1
config.TimeoutPrevote = 10
config.TimeoutPrevoteDelta = 1
@@ -319,9 +458,12 @@ func TestConsensusConfig() *ConsensusConfig {
config.TimeoutPrecommitDelta = 1
config.TimeoutCommit = 10
config.SkipTimeoutCommit = true
config.PeerGossipSleepDuration = 5
config.PeerQueryMaj23SleepDuration = 250
return config
}
// WalFile returns the full path to the write-ahead log file
func (c *ConsensusConfig) WalFile() string {
if c.walFile != "" {
return c.walFile
@@ -329,10 +471,51 @@ func (c *ConsensusConfig) WalFile() string {
return rootify(c.WalPath, c.RootDir)
}
// SetWalFile sets the path to the write-ahead log file
func (c *ConsensusConfig) SetWalFile(walFile string) {
c.walFile = walFile
}
//-----------------------------------------------------------------------------
// TxIndexConfig
// TxIndexConfig defines the configuration for the transaction
// indexer, including tags to index.
type TxIndexConfig struct {
// What indexer to use for transactions
//
// Options:
// 1) "null" (default)
// 2) "kv" - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
Indexer string `mapstructure:"indexer"`
// Comma-separated list of tags to index (by default the only tag is tx hash)
//
// It's recommended to index only a subset of tags due to possible memory
// bloat. This, of course, depends on the indexer's DB and the volume of
// transactions.
IndexTags string `mapstructure:"index_tags"`
// When set to true, tells indexer to index all tags. Note this may not be
// desirable (see the comment above). IndexTags has precedence over
// IndexAllTags (i.e. when given both, IndexTags will be indexed).
IndexAllTags bool `mapstructure:"index_all_tags"`
}
// DefaultTxIndexConfig returns a default configuration for the transaction indexer.
func DefaultTxIndexConfig() *TxIndexConfig {
return &TxIndexConfig{
Indexer: "kv",
IndexTags: "",
IndexAllTags: false,
}
}
// TestTxIndexConfig returns a default configuration for the transaction indexer.
func TestTxIndexConfig() *TxIndexConfig {
return DefaultTxIndexConfig()
}
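
A small sketch of tuning the new tx_index section in code; the tag name used here is purely hypothetical:

```go
package main

import (
	"fmt"

	cfg "github.com/tendermint/tendermint/config"
)

func main() {
	c := cfg.DefaultConfig()
	c.TxIndex.Indexer = "kv"              // options per the comment above: "null" or "kv"
	c.TxIndex.IndexTags = "account.owner" // hypothetical tag; comma-separated list
	c.TxIndex.IndexAllTags = false        // IndexTags takes precedence when both are set
	fmt.Printf("indexer=%s tags=%q all=%v\n", c.TxIndex.Indexer, c.TxIndex.IndexTags, c.TxIndex.IndexAllTags)
}
```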
//-----------------------------------------------------------------------------
// Utils
@@ -343,3 +526,18 @@ func rootify(path, root string) string {
}
return filepath.Join(root, path)
}
//-----------------------------------------------------------------------------
// Moniker
var defaultMoniker = getDefaultMoniker()
// getDefaultMoniker returns a default moniker, which is the host name. If the runtime
// fails to get the host name, "anonymous" will be returned.
func getDefaultMoniker() string {
moniker, err := os.Hostname()
if err != nil {
moniker = "anonymous"
}
return moniker
}
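
To illustrate how the new default*Path constants, SetRoot, and rootify compose, a minimal sketch follows (the root directory is a placeholder):

```go
package main

import (
	"fmt"

	cfg "github.com/tendermint/tendermint/config"
)

func main() {
	c := cfg.DefaultConfig().SetRoot("/tmp/tmhome") // placeholder root
	fmt.Println(c.GenesisFile())       // /tmp/tmhome/config/genesis.json
	fmt.Println(c.PrivValidatorFile()) // /tmp/tmhome/config/priv_validator.json
	fmt.Println(c.NodeKeyFile())       // /tmp/tmhome/config/node_key.json
	fmt.Println(c.P2P.AddrBookFile())  // /tmp/tmhome/config/addrbook.json
	fmt.Println(c.DBDir())             // /tmp/tmhome/data
}
```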

View File

@@ -1,50 +1,224 @@
package config
import (
"bytes"
"os"
"path"
"path/filepath"
"strings"
"text/template"
cmn "github.com/tendermint/tmlibs/common"
)
/****** these are for production settings ***********/
var configTemplate *template.Template
func EnsureRoot(rootDir string) {
cmn.EnsureDir(rootDir, 0700)
cmn.EnsureDir(rootDir+"/data", 0700)
configFilePath := path.Join(rootDir, "config.toml")
// Write default config file if missing.
if !cmn.FileExists(configFilePath) {
// Ask user for moniker
// moniker := cfg.Prompt("Type hostname: ", "anonymous")
cmn.MustWriteFile(configFilePath, []byte(defaultConfig("anonymous")), 0644)
func init() {
var err error
if configTemplate, err = template.New("configFileTemplate").Parse(defaultConfigTemplate); err != nil {
panic(err)
}
}
var defaultConfigTmpl = `# This is a TOML config file.
/****** these are for production settings ***********/
// EnsureRoot creates the root, config, and data directories if they don't exist,
// and panics if it fails.
func EnsureRoot(rootDir string) {
if err := cmn.EnsureDir(rootDir, 0700); err != nil {
cmn.PanicSanity(err.Error())
}
if err := cmn.EnsureDir(filepath.Join(rootDir, defaultConfigDir), 0700); err != nil {
cmn.PanicSanity(err.Error())
}
if err := cmn.EnsureDir(filepath.Join(rootDir, defaultDataDir), 0700); err != nil {
cmn.PanicSanity(err.Error())
}
configFilePath := filepath.Join(rootDir, defaultConfigFilePath)
// Write default config file if missing.
if !cmn.FileExists(configFilePath) {
writeConfigFile(configFilePath)
}
}
// XXX: this func should probably be called by cmd/tendermint/commands/init.go
// alongside the writing of the genesis.json and priv_validator.json
func writeConfigFile(configFilePath string) {
var buffer bytes.Buffer
if err := configTemplate.Execute(&buffer, DefaultConfig()); err != nil {
panic(err)
}
cmn.MustWriteFile(configFilePath, buffer.Bytes(), 0644)
}
// Note: any changes to the comments/variables/mapstructure
// must be reflected in the appropriate struct in config/config.go
const defaultConfigTemplate = `# This is a TOML config file.
# For more information, see https://github.com/toml-lang/toml
proxy_app = "tcp://127.0.0.1:46658"
moniker = "__MONIKER__"
fast_sync = true
db_backend = "leveldb"
log_level = "state:info,*:error"
##### main base config options #####
# TCP or UNIX socket address of the ABCI application,
# or the name of an ABCI application compiled in with the Tendermint binary
proxy_app = "{{ .BaseConfig.ProxyApp }}"
# A custom human readable name for this node
moniker = "{{ .BaseConfig.Moniker }}"
# If this node is many blocks behind the tip of the chain, FastSync
# allows it to catch up quickly by downloading blocks in parallel
# and verifying their commits
fast_sync = {{ .BaseConfig.FastSync }}
# Database backend: leveldb | memdb
db_backend = "{{ .BaseConfig.DBBackend }}"
# Database directory
db_path = "{{ .BaseConfig.DBPath }}"
# Output level for logging, including package level options
log_level = "{{ .BaseConfig.LogLevel }}"
##### additional base config options #####
# Path to the JSON file containing the initial validator set and other meta data
genesis_file = "{{ .BaseConfig.Genesis }}"
# Path to the JSON file containing the private key to use as a validator in the consensus protocol
priv_validator_file = "{{ .BaseConfig.PrivValidator }}"
# Path to the JSON file containing the private key to use for node authentication in the p2p protocol
node_key_file = "{{ .BaseConfig.NodeKey}}"
# Mechanism to connect to the ABCI application: socket | grpc
abci = "{{ .BaseConfig.ABCI }}"
# TCP or UNIX socket address for the profiling server to listen on
prof_laddr = "{{ .BaseConfig.ProfListenAddress }}"
# If true, query the ABCI app on connecting to a new peer
# so the app can decide if we should keep the connection or not
filter_peers = {{ .BaseConfig.FilterPeers }}
##### advanced configuration options #####
##### rpc server configuration options #####
[rpc]
laddr = "tcp://0.0.0.0:46657"
# TCP or UNIX socket address for the RPC server to listen on
laddr = "{{ .RPC.ListenAddress }}"
# TCP or UNIX socket address for the gRPC server to listen on
# NOTE: This server only supports /broadcast_tx_commit
grpc_laddr = "{{ .RPC.GRPCListenAddress }}"
# Activate unsafe RPC commands like /dial_seeds and /unsafe_flush_mempool
unsafe = {{ .RPC.Unsafe }}
##### peer to peer configuration options #####
[p2p]
laddr = "tcp://0.0.0.0:46656"
seeds = ""
`
func defaultConfig(moniker string) string {
return strings.Replace(defaultConfigTmpl, "__MONIKER__", moniker, -1)
}
# Address to listen for incoming connections
laddr = "{{ .P2P.ListenAddress }}"
# Comma separated list of seed nodes to connect to
seeds = ""
# Comma separated list of nodes to keep persistent connections to
persistent_peers = ""
# Path to address book
addr_book_file = "{{ .P2P.AddrBook }}"
# Set true for strict address routability rules
addr_book_strict = {{ .P2P.AddrBookStrict }}
# Time to wait before flushing messages out on the connection, in ms
flush_throttle_timeout = {{ .P2P.FlushThrottleTimeout }}
# Maximum number of peers to connect to
max_num_peers = {{ .P2P.MaxNumPeers }}
# Maximum size of a message packet payload, in bytes
max_msg_packet_payload_size = {{ .P2P.MaxMsgPacketPayloadSize }}
# Rate at which packets can be sent, in bytes/second
send_rate = {{ .P2P.SendRate }}
# Rate at which packets can be received, in bytes/second
recv_rate = {{ .P2P.RecvRate }}
# Set true to enable the peer-exchange reactor
pex = {{ .P2P.PexReactor }}
# Seed mode, in which the node constantly crawls the network and looks for
# peers. If another node asks it for addresses, it responds and disconnects.
#
# Does not work if the peer-exchange reactor is disabled.
seed_mode = {{ .P2P.SeedMode }}
##### mempool configuration options #####
[mempool]
recheck = {{ .Mempool.Recheck }}
recheck_empty = {{ .Mempool.RecheckEmpty }}
broadcast = {{ .Mempool.Broadcast }}
wal_dir = "{{ .Mempool.WalPath }}"
##### consensus configuration options #####
[consensus]
wal_file = "{{ .Consensus.WalPath }}"
wal_light = {{ .Consensus.WalLight }}
# All timeouts are in milliseconds
timeout_propose = {{ .Consensus.TimeoutPropose }}
timeout_propose_delta = {{ .Consensus.TimeoutProposeDelta }}
timeout_prevote = {{ .Consensus.TimeoutPrevote }}
timeout_prevote_delta = {{ .Consensus.TimeoutPrevoteDelta }}
timeout_precommit = {{ .Consensus.TimeoutPrecommit }}
timeout_precommit_delta = {{ .Consensus.TimeoutPrecommitDelta }}
timeout_commit = {{ .Consensus.TimeoutCommit }}
# Make progress as soon as we have all the precommits (as if TimeoutCommit = 0)
skip_timeout_commit = {{ .Consensus.SkipTimeoutCommit }}
# BlockSize
max_block_size_txs = {{ .Consensus.MaxBlockSizeTxs }}
max_block_size_bytes = {{ .Consensus.MaxBlockSizeBytes }}
# EmptyBlocks mode and possible interval between empty blocks in seconds
create_empty_blocks = {{ .Consensus.CreateEmptyBlocks }}
create_empty_blocks_interval = {{ .Consensus.CreateEmptyBlocksInterval }}
# Reactor sleep duration parameters are in milliseconds
peer_gossip_sleep_duration = {{ .Consensus.PeerGossipSleepDuration }}
peer_query_maj23_sleep_duration = {{ .Consensus.PeerQueryMaj23SleepDuration }}
##### transactions indexer configuration options #####
[tx_index]
# What indexer to use for transactions
#
# Options:
# 1) "null" (default)
# 2) "kv" - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
indexer = "{{ .TxIndex.Indexer }}"
# Comma-separated list of tags to index (by default the only tag is tx hash)
#
# It's recommended to index only a subset of tags due to possible memory
# bloat. This, of course, depends on the indexer's DB and the volume of
# transactions.
index_tags = "{{ .TxIndex.IndexTags }}"
# When set to true, tells indexer to index all tags. Note this may not be
# desirable (see the comment above). IndexTags has precedence over
# IndexAllTags (i.e. when given both, IndexTags will be indexed).
index_all_tags = {{ .TxIndex.IndexAllTags }}
`
/****** these are for test settings ***********/
@@ -53,30 +227,35 @@ func ResetTestRoot(testName string) *Config {
rootDir = filepath.Join(rootDir, testName)
// Remove ~/.tendermint_test_bak
if cmn.FileExists(rootDir + "_bak") {
err := os.RemoveAll(rootDir + "_bak")
if err != nil {
if err := os.RemoveAll(rootDir + "_bak"); err != nil {
cmn.PanicSanity(err.Error())
}
}
// Move ~/.tendermint_test to ~/.tendermint_test_bak
if cmn.FileExists(rootDir) {
err := os.Rename(rootDir, rootDir+"_bak")
if err != nil {
if err := os.Rename(rootDir, rootDir+"_bak"); err != nil {
cmn.PanicSanity(err.Error())
}
}
// Create new dir
cmn.EnsureDir(rootDir, 0700)
cmn.EnsureDir(rootDir+"/data", 0700)
if err := cmn.EnsureDir(rootDir, 0700); err != nil {
cmn.PanicSanity(err.Error())
}
if err := cmn.EnsureDir(filepath.Join(rootDir, defaultConfigDir), 0700); err != nil {
cmn.PanicSanity(err.Error())
}
if err := cmn.EnsureDir(filepath.Join(rootDir, defaultDataDir), 0700); err != nil {
cmn.PanicSanity(err.Error())
}
configFilePath := path.Join(rootDir, "config.toml")
genesisFilePath := path.Join(rootDir, "genesis.json")
privFilePath := path.Join(rootDir, "priv_validator.json")
baseConfig := DefaultBaseConfig()
configFilePath := filepath.Join(rootDir, defaultConfigFilePath)
genesisFilePath := filepath.Join(rootDir, baseConfig.Genesis)
privFilePath := filepath.Join(rootDir, baseConfig.PrivValidator)
// Write default config file if missing.
if !cmn.FileExists(configFilePath) {
// Ask user for moniker
cmn.MustWriteFile(configFilePath, []byte(testConfig("anonymous")), 0644)
writeConfigFile(configFilePath)
}
if !cmn.FileExists(genesisFilePath) {
cmn.MustWriteFile(genesisFilePath, []byte(testGenesis), 0644)
@@ -88,28 +267,6 @@ func ResetTestRoot(testName string) *Config {
return config
}
var testConfigTmpl = `# This is a TOML config file.
# For more information, see https://github.com/toml-lang/toml
proxy_app = "dummy"
moniker = "__MONIKER__"
fast_sync = false
db_backend = "memdb"
log_level = "info"
[rpc]
laddr = "tcp://0.0.0.0:36657"
[p2p]
laddr = "tcp://0.0.0.0:36656"
seeds = ""
`
func testConfig(moniker string) (testConfig string) {
testConfig = strings.Replace(testConfigTmpl, "__MONIKER__", moniker, -1)
return
}
var testGenesis = `{
"genesis_time": "0001-01-01T00:00:00.000Z",
"chain_id": "tendermint_test",
@@ -119,7 +276,7 @@ var testGenesis = `{
"type": "ed25519",
"data":"3B3069C422E19688B45CBFAE7BB009FC0FA1B1EA86593519318B7214853803C8"
},
"amount": 10,
"power": 10,
"name": ""
}
],

View File

@@ -4,6 +4,7 @@ import (
"io/ioutil"
"os"
"path/filepath"
"strings"
"testing"
"github.com/stretchr/testify/assert"
@@ -19,26 +20,29 @@ func ensureFiles(t *testing.T, rootDir string, files ...string) {
}
func TestEnsureRoot(t *testing.T) {
assert, require := assert.New(t), require.New(t)
require := require.New(t)
// setup temp dir for test
tmpDir, err := ioutil.TempDir("", "config-test")
require.Nil(err)
defer os.RemoveAll(tmpDir)
defer os.RemoveAll(tmpDir) // nolint: errcheck
// create root dir
EnsureRoot(tmpDir)
// make sure config is set properly
data, err := ioutil.ReadFile(filepath.Join(tmpDir, "config.toml"))
data, err := ioutil.ReadFile(filepath.Join(tmpDir, defaultConfigFilePath))
require.Nil(err)
assert.Equal([]byte(defaultConfig("anonymous")), data)
if !checkConfig(string(data)) {
t.Fatalf("config file missing some information")
}
ensureFiles(t, tmpDir, "data")
}
func TestEnsureTestRoot(t *testing.T) {
assert, require := assert.New(t), require.New(t)
require := require.New(t)
testName := "ensureTestRoot"
@@ -47,11 +51,44 @@ func TestEnsureTestRoot(t *testing.T) {
rootDir := cfg.RootDir
// make sure config is set properly
data, err := ioutil.ReadFile(filepath.Join(rootDir, "config.toml"))
data, err := ioutil.ReadFile(filepath.Join(rootDir, defaultConfigFilePath))
require.Nil(err)
assert.Equal([]byte(testConfig("anonymous")), data)
if !checkConfig(string(data)) {
t.Fatalf("config file missing some information")
}
// TODO: make sure the cfg returned and testconfig are the same!
ensureFiles(t, rootDir, "data", "genesis.json", "priv_validator.json")
baseConfig := DefaultBaseConfig()
ensureFiles(t, rootDir, defaultDataDir, baseConfig.Genesis, baseConfig.PrivValidator)
}
func checkConfig(configFile string) bool {
// list of words we expect in the config
var elems = []string{
"moniker",
"seeds",
"proxy_app",
"fast_sync",
"create_empty_blocks",
"peer",
"timeout",
"broadcast",
"send",
"addr",
"wal",
"propose",
"max",
"genesis",
}
for _, e := range elems {
if !strings.Contains(configFile, e) {
return false
}
}
return true
}

View File

@@ -1,14 +1,16 @@
package consensus
import (
"context"
"sync"
"testing"
"time"
"github.com/stretchr/testify/require"
crypto "github.com/tendermint/go-crypto"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/types"
. "github.com/tendermint/tmlibs/common"
"github.com/tendermint/tmlibs/events"
cmn "github.com/tendermint/tmlibs/common"
)
func init() {
@@ -30,7 +32,9 @@ func TestByzantine(t *testing.T) {
css := randConsensusNet(N, "consensus_byzantine_test", newMockTickerFunc(false), newCounter)
// give the byzantine validator a normal ticker
css[0].SetTimeoutTicker(NewTimeoutTicker())
ticker := NewTimeoutTicker()
ticker.SetLogger(css[0].Logger)
css[0].SetTimeoutTicker(ticker)
switches := make([]*p2p.Switch, N)
p2pLogger := logger.With("module", "p2p")
@@ -39,7 +43,43 @@ func TestByzantine(t *testing.T) {
switches[i].SetLogger(p2pLogger.With("validator", i))
}
eventChans := make([]chan interface{}, N)
reactors := make([]p2p.Reactor, N)
for i := 0; i < N; i++ {
if i == 0 {
css[i].privValidator = NewByzantinePrivValidator(css[i].privValidator)
// make byzantine
css[i].decideProposal = func(j int) func(int64, int) {
return func(height int64, round int) {
byzantineDecideProposalFunc(t, height, round, css[j], switches[j])
}
}(i)
css[i].doPrevote = func(height int64, round int) {}
}
eventBus := types.NewEventBus()
eventBus.SetLogger(logger.With("module", "events", "validator", i))
err := eventBus.Start()
require.NoError(t, err)
defer eventBus.Stop()
eventChans[i] = make(chan interface{}, 1)
err = eventBus.Subscribe(context.Background(), testSubscriber, types.EventQueryNewBlock, eventChans[i])
require.NoError(t, err)
conR := NewConsensusReactor(css[i], true) // so we dont start the consensus states
conR.SetLogger(logger.With("validator", i))
conR.SetEventBus(eventBus)
var conRI p2p.Reactor // nolint: gotype, gosimple
conRI = conR
if i == 0 {
conRI = NewByzantineReactor(conR)
}
reactors[i] = conRI
}
defer func() {
for _, r := range reactors {
if rr, ok := r.(*ByzantineReactor); ok {
@@ -49,40 +89,6 @@ func TestByzantine(t *testing.T) {
}
}
}()
eventChans := make([]chan interface{}, N)
eventLogger := logger.With("module", "events")
for i := 0; i < N; i++ {
if i == 0 {
css[i].privValidator = NewByzantinePrivValidator(css[i].privValidator.(*types.PrivValidator))
// make byzantine
css[i].decideProposal = func(j int) func(int, int) {
return func(height, round int) {
byzantineDecideProposalFunc(t, height, round, css[j], switches[j])
}
}(i)
css[i].doPrevote = func(height, round int) {}
}
eventSwitch := events.NewEventSwitch()
eventSwitch.SetLogger(eventLogger.With("validator", i))
_, err := eventSwitch.Start()
if err != nil {
t.Fatalf("Failed to start switch: %v", err)
}
eventChans[i] = subscribeToEvent(eventSwitch, "tester", types.EventStringNewBlock(), 1)
conR := NewConsensusReactor(css[i], true) // so we dont start the consensus states
conR.SetLogger(logger.With("validator", i))
conR.SetEventSwitch(eventSwitch)
var conRI p2p.Reactor
conRI = conR
if i == 0 {
conRI = NewByzantineReactor(conR)
}
reactors[i] = conRI
}
p2p.MakeConnectedSwitches(config.P2P, N, func(i int, s *p2p.Switch) *p2p.Switch {
// ignore new switch s, we already made ours
@@ -99,10 +105,10 @@ func TestByzantine(t *testing.T) {
// start the state machines
byzR := reactors[0].(*ByzantineReactor)
s := byzR.reactor.conS.GetState()
byzR.reactor.SwitchToConsensus(s)
byzR.reactor.SwitchToConsensus(s, 0)
for i := 1; i < N; i++ {
cr := reactors[i].(*ConsensusReactor)
cr.SwitchToConsensus(cr.conS.GetState())
cr.SwitchToConsensus(cr.conS.GetState(), 0)
}
// byz proposer sends one block to peers[0]
@@ -147,8 +153,8 @@ func TestByzantine(t *testing.T) {
case <-done:
case <-tick.C:
for i, reactor := range reactors {
t.Log(Fmt("Consensus Reactor %v", i))
t.Log(Fmt("%v", reactor))
t.Log(cmn.Fmt("Consensus Reactor %v", i))
t.Log(cmn.Fmt("%v", reactor))
}
t.Fatalf("Timed out waiting for all validators to commit first block")
}
@@ -157,7 +163,7 @@ func TestByzantine(t *testing.T) {
//-------------------------------
// byzantine consensus functions
func byzantineDecideProposalFunc(t *testing.T, height, round int, cs *ConsensusState, sw *p2p.Switch) {
func byzantineDecideProposalFunc(t *testing.T, height int64, round int, cs *ConsensusState, sw *p2p.Switch) {
// byzantine user should create two proposals and try to split the vote.
// Avoid sending on internalMsgQueue and running consensus state.
@@ -165,13 +171,17 @@ func byzantineDecideProposalFunc(t *testing.T, height, round int, cs *ConsensusS
block1, blockParts1 := cs.createProposalBlock()
polRound, polBlockID := cs.Votes.POLInfo()
proposal1 := types.NewProposal(height, round, blockParts1.Header(), polRound, polBlockID)
cs.privValidator.SignProposal(cs.state.ChainID, proposal1) // byzantine doesnt err
if err := cs.privValidator.SignProposal(cs.state.ChainID, proposal1); err != nil {
t.Error(err)
}
// Create a new proposal block from state/txs from the mempool.
block2, blockParts2 := cs.createProposalBlock()
polRound, polBlockID = cs.Votes.POLInfo()
proposal2 := types.NewProposal(height, round, blockParts2.Header(), polRound, polBlockID)
cs.privValidator.SignProposal(cs.state.ChainID, proposal2) // byzantine doesnt err
if err := cs.privValidator.SignProposal(cs.state.ChainID, proposal2); err != nil {
t.Error(err)
}
block1Hash := block1.Hash()
block2Hash := block2.Hash()
@@ -188,7 +198,7 @@ func byzantineDecideProposalFunc(t *testing.T, height, round int, cs *ConsensusS
}
}
func sendProposalAndParts(height, round int, cs *ConsensusState, peer *p2p.Peer, proposal *types.Proposal, blockHash []byte, parts *types.PartSet) {
func sendProposalAndParts(height int64, round int, cs *ConsensusState, peer p2p.Peer, proposal *types.Proposal, blockHash []byte, parts *types.PartSet) {
// proposal
msg := &ProposalMessage{Proposal: proposal}
peer.Send(DataChannel, struct{ ConsensusMessage }{msg})
@@ -218,7 +228,7 @@ func sendProposalAndParts(height, round int, cs *ConsensusState, peer *p2p.Peer,
// byzantine consensus reactor
type ByzantineReactor struct {
Service
cmn.Service
reactor *ConsensusReactor
}
@@ -231,14 +241,14 @@ func NewByzantineReactor(conR *ConsensusReactor) *ByzantineReactor {
func (br *ByzantineReactor) SetSwitch(s *p2p.Switch) { br.reactor.SetSwitch(s) }
func (br *ByzantineReactor) GetChannels() []*p2p.ChannelDescriptor { return br.reactor.GetChannels() }
func (br *ByzantineReactor) AddPeer(peer *p2p.Peer) {
func (br *ByzantineReactor) AddPeer(peer p2p.Peer) {
if !br.reactor.IsRunning() {
return
}
// Create peerState for peer
peerState := NewPeerState(peer)
peer.Data.Set(types.PeerStateKey, peerState)
peerState := NewPeerState(peer).SetLogger(br.reactor.Logger)
peer.Set(types.PeerStateKey, peerState)
// Send our state to peer.
// If we're fast_syncing, broadcast a RoundStepMessage later upon SwitchToConsensus().
@@ -246,10 +256,10 @@ func (br *ByzantineReactor) AddPeer(peer *p2p.Peer) {
br.reactor.sendNewRoundStepMessages(peer)
}
}
func (br *ByzantineReactor) RemovePeer(peer *p2p.Peer, reason interface{}) {
func (br *ByzantineReactor) RemovePeer(peer p2p.Peer, reason interface{}) {
br.reactor.RemovePeer(peer, reason)
}
func (br *ByzantineReactor) Receive(chID byte, peer *p2p.Peer, msgBytes []byte) {
func (br *ByzantineReactor) Receive(chID byte, peer p2p.Peer, msgBytes []byte) {
br.reactor.Receive(chID, peer, msgBytes)
}
@@ -257,42 +267,42 @@ func (br *ByzantineReactor) Receive(chID byte, peer *p2p.Peer, msgBytes []byte)
// byzantine privValidator
type ByzantinePrivValidator struct {
Address []byte `json:"address"`
types.Signer `json:"-"`
types.Signer
mtx sync.Mutex
pv types.PrivValidator
}
// Return a priv validator that will sign anything
func NewByzantinePrivValidator(pv *types.PrivValidator) *ByzantinePrivValidator {
func NewByzantinePrivValidator(pv types.PrivValidator) *ByzantinePrivValidator {
return &ByzantinePrivValidator{
Address: pv.Address,
Signer: pv.Signer,
Signer: pv.(*types.PrivValidatorFS).Signer,
pv: pv,
}
}
func (privVal *ByzantinePrivValidator) GetAddress() []byte {
return privVal.Address
func (privVal *ByzantinePrivValidator) GetAddress() types.Address {
return privVal.pv.GetAddress()
}
func (privVal *ByzantinePrivValidator) SignVote(chainID string, vote *types.Vote) error {
privVal.mtx.Lock()
defer privVal.mtx.Unlock()
func (privVal *ByzantinePrivValidator) GetPubKey() crypto.PubKey {
return privVal.pv.GetPubKey()
}
// Sign
vote.Signature = privVal.Sign(types.SignBytes(chainID, vote))
func (privVal *ByzantinePrivValidator) SignVote(chainID string, vote *types.Vote) (err error) {
vote.Signature, err = privVal.Sign(types.SignBytes(chainID, vote))
return err
}
func (privVal *ByzantinePrivValidator) SignProposal(chainID string, proposal *types.Proposal) (err error) {
proposal.Signature, _ = privVal.Sign(types.SignBytes(chainID, proposal))
return nil
}
func (privVal *ByzantinePrivValidator) SignProposal(chainID string, proposal *types.Proposal) error {
privVal.mtx.Lock()
defer privVal.mtx.Unlock()
// Sign
proposal.Signature = privVal.Sign(types.SignBytes(chainID, proposal))
func (privVal *ByzantinePrivValidator) SignHeartbeat(chainID string, heartbeat *types.Heartbeat) (err error) {
heartbeat.Signature, _ = privVal.Sign(types.SignBytes(chainID, heartbeat))
return nil
}
func (privVal *ByzantinePrivValidator) String() string {
return Fmt("PrivValidator{%X}", privVal.Address)
return cmn.Fmt("PrivValidator{%X}", privVal.GetAddress())
}

View File

@@ -1,35 +0,0 @@
package consensus
import (
"github.com/tendermint/tendermint/types"
)
// XXX: WARNING: these functions can halt the consensus as firing events is synchronous.
// Make sure to read off the channels, and in the case of subscribeToEventRespond, to write back on it
// NOTE: if chanCap=0, this blocks on the event being consumed
func subscribeToEvent(evsw types.EventSwitch, receiver, eventID string, chanCap int) chan interface{} {
// listen for event
ch := make(chan interface{}, chanCap)
types.AddListenerForEvent(evsw, receiver, eventID, func(data types.TMEventData) {
ch <- data
})
return ch
}
// NOTE: this blocks on receiving a response after the event is consumed
func subscribeToEventRespond(evsw types.EventSwitch, receiver, eventID string) chan interface{} {
// listen for event
ch := make(chan interface{})
types.AddListenerForEvent(evsw, receiver, eventID, func(data types.TMEventData) {
ch <- data
<-ch
})
return ch
}
func discardFromChan(ch chan interface{}, n int) {
for i := 0; i < n; i++ {
<-ch
}
}
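
The helpers removed here fired events synchronously, so a subscription channel nobody read could stall the caller; the chanCap buffer was the only safety valve. A minimal sketch of that hazard, using only the standard library (the callback-based `fire` dispatcher is invented for illustration):

```
package main

import (
	"fmt"
	"time"
)

// fire invokes the handler synchronously, like the old event switch did.
func fire(handler func(string), event string) {
	handler(event)
}

func main() {
	// Buffered subscription: the synchronous handler never blocks as long as
	// the buffer has room, mirroring the chanCap parameter above.
	buffered := make(chan string, 1)
	fire(func(e string) { buffered <- e }, "new-block")
	fmt.Println("buffered receive:", <-buffered)

	// Unbuffered subscription: the send blocks until someone reads, so the
	// sending goroutine below is stuck until the reader shows up.
	unbuffered := make(chan string)
	done := make(chan struct{})
	go func() {
		fire(func(e string) { unbuffered <- e }, "new-block") // blocks here
		close(done)
	}()
	time.Sleep(50 * time.Millisecond) // give the sender time to block
	fmt.Println("unbuffered receive:", <-unbuffered)
	<-done
}
```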


@@ -2,6 +2,7 @@ package consensus
import (
"bytes"
"context"
"fmt"
"io/ioutil"
"os"
@@ -15,11 +16,12 @@ import (
abci "github.com/tendermint/abci/types"
bc "github.com/tendermint/tendermint/blockchain"
cfg "github.com/tendermint/tendermint/config"
cstypes "github.com/tendermint/tendermint/consensus/types"
mempl "github.com/tendermint/tendermint/mempool"
"github.com/tendermint/tendermint/p2p"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
. "github.com/tendermint/tmlibs/common"
cmn "github.com/tendermint/tmlibs/common"
dbm "github.com/tendermint/tmlibs/db"
"github.com/tendermint/tmlibs/log"
@@ -29,12 +31,16 @@ import (
"github.com/go-kit/kit/log/term"
)
const (
testSubscriber = "test-client"
)
// genesis, chain_id, priv_val
var config *cfg.Config // NOTE: must be reset for each _test.go file
var ensureTimeout = time.Duration(2)
var config *cfg.Config // NOTE: must be reset for each _test.go file
var ensureTimeout = time.Second * 1 // must be in seconds because CreateEmptyBlocksInterval is
func ensureDir(dir string, mode os.FileMode) {
if err := EnsureDir(dir, mode); err != nil {
if err := cmn.EnsureDir(dir, mode); err != nil {
panic(err)
}
}
@@ -48,14 +54,14 @@ func ResetConfig(name string) *cfg.Config {
type validatorStub struct {
Index int // Validator index. NOTE: we don't assume validator set changes.
Height int
Height int64
Round int
*types.PrivValidator
types.PrivValidator
}
var testMinPower = 10
var testMinPower int64 = 10
func NewValidatorStub(privValidator *types.PrivValidator, valIndex int) *validatorStub {
func NewValidatorStub(privValidator types.PrivValidator, valIndex int) *validatorStub {
return &validatorStub{
Index: valIndex,
PrivValidator: privValidator,
@@ -65,13 +71,14 @@ func NewValidatorStub(privValidator *types.PrivValidator, valIndex int) *validat
func (vs *validatorStub) signVote(voteType byte, hash []byte, header types.PartSetHeader) (*types.Vote, error) {
vote := &types.Vote{
ValidatorIndex: vs.Index,
ValidatorAddress: vs.PrivValidator.Address,
ValidatorAddress: vs.PrivValidator.GetAddress(),
Height: vs.Height,
Round: vs.Round,
Timestamp: time.Now().UTC(),
Type: voteType,
BlockID: types.BlockID{hash, header},
}
err := vs.PrivValidator.SignVote(config.ChainID, vote)
err := vs.PrivValidator.SignVote(config.ChainID(), vote)
return vote, err
}
@@ -107,13 +114,13 @@ func incrementRound(vss ...*validatorStub) {
//-------------------------------------------------------------------------------
// Functions for transitioning the consensus state
func startTestRound(cs *ConsensusState, height, round int) {
func startTestRound(cs *ConsensusState, height int64, round int) {
cs.enterNewRound(height, round)
cs.startRoutines(0)
}
// Create proposal block from cs1 but sign it with vs
func decideProposal(cs1 *ConsensusState, vs *validatorStub, height, round int) (proposal *types.Proposal, block *types.Block) {
func decideProposal(cs1 *ConsensusState, vs *validatorStub, height int64, round int) (proposal *types.Proposal, block *types.Block) {
block, blockParts := cs1.createProposalBlock()
if block == nil { // on error
panic("error creating proposal block")
@@ -122,7 +129,7 @@ func decideProposal(cs1 *ConsensusState, vs *validatorStub, height, round int) (
// Make proposal
polRound, polBlockID := cs1.Votes.POLInfo()
proposal = types.NewProposal(height, round, blockParts.Header(), polRound, polBlockID)
if err := vs.SignProposal(config.ChainID, proposal); err != nil {
if err := vs.SignProposal(cs1.state.ChainID, proposal); err != nil {
panic(err)
}
return
@@ -142,7 +149,7 @@ func signAddVotes(to *ConsensusState, voteType byte, hash []byte, header types.P
func validatePrevote(t *testing.T, cs *ConsensusState, round int, privVal *validatorStub, blockHash []byte) {
prevotes := cs.Votes.Prevotes(round)
var vote *types.Vote
if vote = prevotes.GetByAddress(privVal.Address); vote == nil {
if vote = prevotes.GetByAddress(privVal.GetAddress()); vote == nil {
panic("Failed to find prevote from validator")
}
if blockHash == nil {
@@ -159,7 +166,7 @@ func validatePrevote(t *testing.T, cs *ConsensusState, round int, privVal *valid
func validateLastPrecommit(t *testing.T, cs *ConsensusState, privVal *validatorStub, blockHash []byte) {
votes := cs.LastCommit
var vote *types.Vote
if vote = votes.GetByAddress(privVal.Address); vote == nil {
if vote = votes.GetByAddress(privVal.GetAddress()); vote == nil {
panic("Failed to find precommit from validator")
}
if !bytes.Equal(vote.BlockID.Hash, blockHash) {
@@ -170,7 +177,7 @@ func validateLastPrecommit(t *testing.T, cs *ConsensusState, privVal *validatorS
func validatePrecommit(t *testing.T, cs *ConsensusState, thisRound, lockRound int, privVal *validatorStub, votedBlockHash, lockedBlockHash []byte) {
precommits := cs.Votes.Precommits(thisRound)
var vote *types.Vote
if vote = precommits.GetByAddress(privVal.Address); vote == nil {
if vote = precommits.GetByAddress(privVal.GetAddress()); vote == nil {
panic("Failed to find precommit from validator")
}
@@ -207,11 +214,14 @@ func validatePrevoteAndPrecommit(t *testing.T, cs *ConsensusState, thisRound, lo
// genesis
func subscribeToVoter(cs *ConsensusState, addr []byte) chan interface{} {
voteCh0 := subscribeToEvent(cs.evsw, "tester", types.EventStringVote(), 1)
voteCh0 := make(chan interface{})
err := cs.eventBus.Subscribe(context.Background(), testSubscriber, types.EventQueryVote, voteCh0)
if err != nil {
panic(fmt.Sprintf("failed to subscribe %s to %v", testSubscriber, types.EventQueryVote))
}
voteCh := make(chan interface{})
go func() {
for {
v := <-voteCh0
for v := range voteCh0 {
vote := v.(types.TMEventData).Unwrap().(types.EventDataVote)
// we only fire for our own votes
if bytes.Equal(addr, vote.Vote.ValidatorAddress) {
@@ -225,13 +235,17 @@ func subscribeToVoter(cs *ConsensusState, addr []byte) chan interface{} {
//-------------------------------------------------------------------------------
// consensus states
func newConsensusState(state *sm.State, pv *types.PrivValidator, app abci.Application) *ConsensusState {
func newConsensusState(state sm.State, pv types.PrivValidator, app abci.Application) *ConsensusState {
return newConsensusStateWithConfig(config, state, pv, app)
}
func newConsensusStateWithConfig(thisConfig *cfg.Config, state *sm.State, pv *types.PrivValidator, app abci.Application) *ConsensusState {
// Get BlockStore
func newConsensusStateWithConfig(thisConfig *cfg.Config, state sm.State, pv types.PrivValidator, app abci.Application) *ConsensusState {
blockDB := dbm.NewMemDB()
return newConsensusStateWithConfigAndBlockStore(thisConfig, state, pv, app, blockDB)
}
func newConsensusStateWithConfigAndBlockStore(thisConfig *cfg.Config, state sm.State, pv types.PrivValidator, app abci.Application, blockDB dbm.DB) *ConsensusState {
// Get BlockStore
blockStore := bc.NewBlockStore(blockDB)
// one for mempool, one for consensus
@@ -240,39 +254,37 @@ func newConsensusStateWithConfig(thisConfig *cfg.Config, state *sm.State, pv *ty
proxyAppConnCon := abcicli.NewLocalClient(mtx, app)
// Make Mempool
mempool := mempl.NewMempool(thisConfig.Mempool, proxyAppConnMem)
mempool := mempl.NewMempool(thisConfig.Mempool, proxyAppConnMem, 0)
mempool.SetLogger(log.TestingLogger().With("module", "mempool"))
if thisConfig.Consensus.WaitForTxs() {
mempool.EnableTxsAvailable()
}
// mock the evidence pool
evpool := types.MockEvidencePool{}
// Make ConsensusReactor
cs := NewConsensusState(thisConfig.Consensus, state, proxyAppConnCon, blockStore, mempool)
cs.SetLogger(log.TestingLogger())
stateDB := dbm.NewMemDB()
blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyAppConnCon, mempool, evpool)
cs := NewConsensusState(thisConfig.Consensus, state, blockExec, blockStore, mempool, evpool)
cs.SetLogger(log.TestingLogger().With("module", "consensus"))
cs.SetPrivValidator(pv)
evsw := types.NewEventSwitch()
evsw.SetLogger(log.TestingLogger().With("module", "events"))
cs.SetEventSwitch(evsw)
evsw.Start()
eventBus := types.NewEventBus()
eventBus.SetLogger(log.TestingLogger().With("module", "events"))
eventBus.Start()
cs.SetEventBus(eventBus)
return cs
}
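
The constructor chain above (newConsensusState → newConsensusStateWithConfig → newConsensusStateWithConfigAndBlockStore) layers defaults so tests override only what they need. A generic sketch of that layering pattern, with invented names:

```
package main

import "fmt"

type Config struct{ Name string }

type Store struct{ Backend string }

type Node struct {
	cfg   *Config
	store *Store
}

// newNode uses a package-level default config.
func newNode() *Node {
	return newNodeWithConfig(&Config{Name: "default"})
}

// newNodeWithConfig supplies a default in-memory store.
func newNodeWithConfig(cfg *Config) *Node {
	return newNodeWithConfigAndStore(cfg, &Store{Backend: "memdb"})
}

// newNodeWithConfigAndStore is the fully explicit constructor the others wrap.
func newNodeWithConfigAndStore(cfg *Config, store *Store) *Node {
	return &Node{cfg: cfg, store: store}
}

func main() {
	n := newNode()
	fmt.Println(n.cfg.Name, n.store.Backend)
}
```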
func loadPrivValidator(config *cfg.Config) *types.PrivValidator {
func loadPrivValidator(config *cfg.Config) *types.PrivValidatorFS {
privValidatorFile := config.PrivValidatorFile()
ensureDir(path.Dir(privValidatorFile), 0700)
privValidator := types.LoadOrGenPrivValidator(privValidatorFile, log.TestingLogger())
privValidator := types.LoadOrGenPrivValidatorFS(privValidatorFile)
privValidator.Reset()
return privValidator
}
func fixedConsensusStateDummy() *ConsensusState {
stateDB := dbm.NewMemDB()
state := sm.MakeGenesisStateFromFile(stateDB, config.GenesisFile())
state.SetLogger(log.TestingLogger().With("module", "state"))
privValidator := loadPrivValidator(config)
cs := newConsensusState(state, privValidator, dummy.NewDummyApplication())
cs.SetLogger(log.TestingLogger())
return cs
}
func randConsensusState(nValidators int) (*ConsensusState, []*validatorStub) {
// Get State
state, privVals := randGenesisState(nValidators, false, 10)
@@ -280,7 +292,6 @@ func randConsensusState(nValidators int) (*ConsensusState, []*validatorStub) {
vss := make([]*validatorStub, nValidators)
cs := newConsensusState(state, privVals[0], counter.NewCounterApplication(true))
cs.SetLogger(log.TestingLogger())
for i := 0; i < nValidators; i++ {
vss[i] = NewValidatorStub(privVals[i], i)
@@ -293,13 +304,23 @@ func randConsensusState(nValidators int) (*ConsensusState, []*validatorStub) {
//-------------------------------------------------------------------------------
func ensureNoNewStep(stepCh chan interface{}) {
timeout := time.NewTicker(ensureTimeout * time.Second)
func ensureNoNewStep(stepCh <-chan interface{}) {
timer := time.NewTimer(ensureTimeout)
select {
case <-timeout.C:
case <-timer.C:
break
case <-stepCh:
panic("We should be stuck waiting for more votes, not moving to the next step")
panic("We should be stuck waiting, not moving to the next step")
}
}
func ensureNewStep(stepCh <-chan interface{}) {
timer := time.NewTimer(ensureTimeout)
select {
case <-timer.C:
panic("We shouldnt be stuck waiting")
case <-stepCh:
break
}
}
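
ensureNoNewStep and ensureNewStep both race a fresh timer against a step channel; only the branch that is allowed to win differs. A standalone sketch of that select pattern (function names are illustrative):

```
package main

import (
	"errors"
	"fmt"
	"time"
)

// expectEvent fails if nothing arrives on ch within timeout.
func expectEvent(ch <-chan struct{}, timeout time.Duration) error {
	timer := time.NewTimer(timeout)
	defer timer.Stop()
	select {
	case <-ch:
		return nil
	case <-timer.C:
		return errors.New("expected an event but timed out")
	}
}

// expectNoEvent fails if anything arrives on ch before timeout.
func expectNoEvent(ch <-chan struct{}, timeout time.Duration) error {
	timer := time.NewTimer(timeout)
	defer timer.Stop()
	select {
	case <-ch:
		return errors.New("got an event but expected none")
	case <-timer.C:
		return nil
	}
}

func main() {
	ch := make(chan struct{}, 1)
	ch <- struct{}{}
	fmt.Println("expectEvent:", expectEvent(ch, 100*time.Millisecond))
	fmt.Println("expectNoEvent:", expectNoEvent(ch, 100*time.Millisecond))
}
```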
@@ -316,57 +337,64 @@ func consensusLogger() log.Logger {
}
}
return term.FgBgColor{}
})
}).With("module", "consensus")
}
func randConsensusNet(nValidators int, testName string, tickerFunc func() TimeoutTicker, appFunc func() abci.Application) []*ConsensusState {
genDoc, privVals := randGenesisDoc(nValidators, false, 10)
func randConsensusNet(nValidators int, testName string, tickerFunc func() TimeoutTicker, appFunc func() abci.Application, configOpts ...func(*cfg.Config)) []*ConsensusState {
genDoc, privVals := randGenesisDoc(nValidators, false, 30)
css := make([]*ConsensusState, nValidators)
logger := consensusLogger()
for i := 0; i < nValidators; i++ {
db := dbm.NewMemDB() // each state needs its own db
state := sm.MakeGenesisState(db, genDoc)
state.SetLogger(logger.With("module", "state", "validator", i))
state.Save()
thisConfig := ResetConfig(Fmt("%s_%d", testName, i))
stateDB := dbm.NewMemDB() // each state needs its own db
state, _ := sm.LoadStateFromDBOrGenesisDoc(stateDB, genDoc)
thisConfig := ResetConfig(cmn.Fmt("%s_%d", testName, i))
for _, opt := range configOpts {
opt(thisConfig)
}
ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
css[i] = newConsensusStateWithConfig(thisConfig, state, privVals[i], appFunc())
css[i].SetLogger(logger.With("validator", i))
app := appFunc()
vals := types.TM2PB.Validators(state.Validators)
app.InitChain(abci.RequestInitChain{Validators: vals})
css[i] = newConsensusStateWithConfig(thisConfig, state, privVals[i], app)
css[i].SetTimeoutTicker(tickerFunc())
css[i].SetLogger(logger.With("validator", i, "module", "consensus"))
}
return css
}
// nPeers = nValidators + nNotValidator
func randConsensusNetWithPeers(nValidators, nPeers int, testName string, tickerFunc func() TimeoutTicker, appFunc func() abci.Application) []*ConsensusState {
genDoc, privVals := randGenesisDoc(nValidators, false, int64(testMinPower))
genDoc, privVals := randGenesisDoc(nValidators, false, testMinPower)
css := make([]*ConsensusState, nPeers)
logger := consensusLogger()
for i := 0; i < nPeers; i++ {
db := dbm.NewMemDB() // each state needs its own db
state := sm.MakeGenesisState(db, genDoc)
state.SetLogger(log.TestingLogger().With("module", "state"))
state.Save()
thisConfig := ResetConfig(Fmt("%s_%d", testName, i))
stateDB := dbm.NewMemDB() // each state needs its own db
state, _ := sm.LoadStateFromDBOrGenesisDoc(stateDB, genDoc)
thisConfig := ResetConfig(cmn.Fmt("%s_%d", testName, i))
ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
var privVal *types.PrivValidator
var privVal types.PrivValidator
if i < nValidators {
privVal = privVals[i]
} else {
privVal = types.GenPrivValidator()
_, tempFilePath := Tempfile("priv_validator_")
privVal.SetFile(tempFilePath)
_, tempFilePath := cmn.Tempfile("priv_validator_")
privVal = types.GenPrivValidatorFS(tempFilePath)
}
css[i] = newConsensusStateWithConfig(thisConfig, state, privVal, appFunc())
css[i].SetLogger(log.TestingLogger())
app := appFunc()
vals := types.TM2PB.Validators(state.Validators)
app.InitChain(abci.RequestInitChain{Validators: vals})
css[i] = newConsensusStateWithConfig(thisConfig, state, privVal, app)
css[i].SetTimeoutTicker(tickerFunc())
css[i].SetLogger(logger.With("validator", i, "module", "consensus"))
}
return css
}
func getSwitchIndex(switches []*p2p.Switch, peer *p2p.Peer) int {
func getSwitchIndex(switches []*p2p.Switch, peer p2p.Peer) int {
for i, s := range switches {
if bytes.Equal(peer.NodeInfo.PubKey.Address(), s.NodeInfo().PubKey.Address()) {
if bytes.Equal(peer.NodeInfo().PubKey.Address(), s.NodeInfo().PubKey.Address()) {
return i
}
}
@@ -377,31 +405,31 @@ func getSwitchIndex(switches []*p2p.Switch, peer *p2p.Peer) int {
//-------------------------------------------------------------------------------
// genesis
func randGenesisDoc(numValidators int, randPower bool, minPower int64) (*types.GenesisDoc, []*types.PrivValidator) {
func randGenesisDoc(numValidators int, randPower bool, minPower int64) (*types.GenesisDoc, []*types.PrivValidatorFS) {
validators := make([]types.GenesisValidator, numValidators)
privValidators := make([]*types.PrivValidator, numValidators)
privValidators := make([]*types.PrivValidatorFS, numValidators)
for i := 0; i < numValidators; i++ {
val, privVal := types.RandValidator(randPower, minPower)
validators[i] = types.GenesisValidator{
PubKey: val.PubKey,
Amount: val.VotingPower,
Power: val.VotingPower,
}
privValidators[i] = privVal
}
sort.Sort(types.PrivValidatorsByAddress(privValidators))
return &types.GenesisDoc{
GenesisTime: time.Now(),
ChainID: config.ChainID,
ChainID: config.ChainID(),
Validators: validators,
}, privValidators
}
func randGenesisState(numValidators int, randPower bool, minPower int64) (*sm.State, []*types.PrivValidator) {
func randGenesisState(numValidators int, randPower bool, minPower int64) (sm.State, []*types.PrivValidatorFS) {
genDoc, privValidators := randGenesisDoc(numValidators, randPower, minPower)
s0, _ := sm.MakeGenesisState(genDoc)
db := dbm.NewMemDB()
s0 := sm.MakeGenesisState(db, genDoc)
s0.SetLogger(log.TestingLogger().With("module", "state"))
s0.Save()
sm.SaveState(db, s0)
return s0, privValidators
}
@@ -427,12 +455,12 @@ type mockTicker struct {
fired bool
}
func (m *mockTicker) Start() (bool, error) {
return true, nil
func (m *mockTicker) Start() error {
return nil
}
func (m *mockTicker) Stop() bool {
return true
func (m *mockTicker) Stop() error {
return nil
}
func (m *mockTicker) ScheduleTimeout(ti timeoutInfo) {
@@ -441,7 +469,7 @@ func (m *mockTicker) ScheduleTimeout(ti timeoutInfo) {
if m.onlyOnce && m.fired {
return
}
if ti.Step == RoundStepNewHeight {
if ti.Step == cstypes.RoundStepNewHeight {
m.c <- ti
m.fired = true
}
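
mockTicker now matches the updated Service-style contract in which Start and Stop return plain errors rather than (bool, error) and bool. A minimal sketch of an interface with that shape and a no-op mock (the names are illustrative, not the Tendermint types):

```
package main

import "fmt"

// Ticker mirrors the shape of the updated contract: Start and Stop simply
// return an error.
type Ticker interface {
	Start() error
	Stop() error
	Schedule(step string)
}

// mock fires a canned value once and ignores further schedules.
type mock struct {
	fired bool
	c     chan string
}

func (m *mock) Start() error { return nil }
func (m *mock) Stop() error  { return nil }

func (m *mock) Schedule(step string) {
	if m.fired {
		return
	}
	m.c <- step
	m.fired = true
}

func main() {
	var t Ticker = &mock{c: make(chan string, 1)}
	if err := t.Start(); err != nil {
		panic(err)
	}
	t.Schedule("new-height")
	t.Schedule("ignored")
	fmt.Println(<-t.(*mock).c)
	_ = t.Stop()
}
```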


@@ -2,55 +2,121 @@ package consensus
import (
"encoding/binary"
"fmt"
"testing"
"time"
abci "github.com/tendermint/abci/types"
"github.com/tendermint/tendermint/types"
"github.com/stretchr/testify/assert"
. "github.com/tendermint/tmlibs/common"
"github.com/tendermint/abci/example/code"
abci "github.com/tendermint/abci/types"
cmn "github.com/tendermint/tmlibs/common"
"github.com/tendermint/tendermint/types"
)
func init() {
config = ResetConfig("consensus_mempool_test")
}
func TestTxConcurrentWithCommit(t *testing.T) {
func TestMempoolNoProgressUntilTxsAvailable(t *testing.T) {
config := ResetConfig("consensus_mempool_txs_available_test")
config.Consensus.CreateEmptyBlocks = false
state, privVals := randGenesisState(1, false, 10)
cs := newConsensusStateWithConfig(config, state, privVals[0], NewCounterApplication())
cs.mempool.EnableTxsAvailable()
height, round := cs.Height, cs.Round
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock)
startTestRound(cs, height, round)
ensureNewStep(newBlockCh) // first block gets committed
ensureNoNewStep(newBlockCh)
deliverTxsRange(cs, 0, 1)
ensureNewStep(newBlockCh) // commit txs
ensureNewStep(newBlockCh) // commit updated app hash
ensureNoNewStep(newBlockCh)
}
func TestMempoolProgressAfterCreateEmptyBlocksInterval(t *testing.T) {
config := ResetConfig("consensus_mempool_txs_available_test")
config.Consensus.CreateEmptyBlocksInterval = int(ensureTimeout.Seconds())
state, privVals := randGenesisState(1, false, 10)
cs := newConsensusStateWithConfig(config, state, privVals[0], NewCounterApplication())
cs.mempool.EnableTxsAvailable()
height, round := cs.Height, cs.Round
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock)
startTestRound(cs, height, round)
ensureNewStep(newBlockCh) // first block gets committed
ensureNoNewStep(newBlockCh) // then we dont make a block ...
ensureNewStep(newBlockCh) // until the CreateEmptyBlocksInterval has passed
}
func TestMempoolProgressInHigherRound(t *testing.T) {
config := ResetConfig("consensus_mempool_txs_available_test")
config.Consensus.CreateEmptyBlocks = false
state, privVals := randGenesisState(1, false, 10)
cs := newConsensusStateWithConfig(config, state, privVals[0], NewCounterApplication())
cs.mempool.EnableTxsAvailable()
height, round := cs.Height, cs.Round
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock)
newRoundCh := subscribe(cs.eventBus, types.EventQueryNewRound)
timeoutCh := subscribe(cs.eventBus, types.EventQueryTimeoutPropose)
cs.setProposal = func(proposal *types.Proposal) error {
if cs.Height == 2 && cs.Round == 0 {
// dont set the proposal in round 0 so we timeout and
// go to next round
cs.Logger.Info("Ignoring set proposal at height 2, round 0")
return nil
}
return cs.defaultSetProposal(proposal)
}
startTestRound(cs, height, round)
ensureNewStep(newRoundCh) // first round at first height
ensureNewStep(newBlockCh) // first block gets committed
ensureNewStep(newRoundCh) // first round at next height
deliverTxsRange(cs, 0, 1) // we deliver txs, but dont set a proposal so we get the next round
<-timeoutCh
ensureNewStep(newRoundCh) // wait for the next round
ensureNewStep(newBlockCh) // now we can commit the block
}
func deliverTxsRange(cs *ConsensusState, start, end int) {
// Deliver some txs.
for i := start; i < end; i++ {
txBytes := make([]byte, 8)
binary.BigEndian.PutUint64(txBytes, uint64(i))
err := cs.mempool.CheckTx(txBytes, nil)
if err != nil {
panic(cmn.Fmt("Error after CheckTx: %v", err))
}
}
}
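
deliverTxsRange encodes each index as an 8-byte big-endian value so the counter application can check nonces in order. A self-contained sketch of just the encode side:

```
package main

import (
	"encoding/binary"
	"fmt"
)

// encodeTxs produces the same 8-byte big-endian payloads the helper above
// submits to the mempool.
func encodeTxs(start, end int) [][]byte {
	txs := make([][]byte, 0, end-start)
	for i := start; i < end; i++ {
		tx := make([]byte, 8)
		binary.BigEndian.PutUint64(tx, uint64(i))
		txs = append(txs, tx)
	}
	return txs
}

func main() {
	for _, tx := range encodeTxs(0, 3) {
		fmt.Printf("%x\n", tx)
	}
}
```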
func TestMempoolTxConcurrentWithCommit(t *testing.T) {
state, privVals := randGenesisState(1, false, 10)
cs := newConsensusState(state, privVals[0], NewCounterApplication())
height, round := cs.Height, cs.Round
newBlockCh := subscribeToEvent(cs.evsw, "tester", types.EventStringNewBlock(), 1)
deliverTxsRange := func(start, end int) {
// Deliver some txs.
for i := start; i < end; i++ {
txBytes := make([]byte, 8)
binary.BigEndian.PutUint64(txBytes, uint64(i))
err := cs.mempool.CheckTx(txBytes, nil)
if err != nil {
panic(Fmt("Error after CheckTx: %v", err))
}
// time.Sleep(time.Microsecond * time.Duration(rand.Int63n(3000)))
}
}
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock)
NTxs := 10000
go deliverTxsRange(0, NTxs)
go deliverTxsRange(cs, 0, NTxs)
startTestRound(cs, height, round)
ticker := time.NewTicker(time.Second * 20)
for nTxs := 0; nTxs < NTxs; {
ticker := time.NewTicker(time.Second * 30)
select {
case b := <-newBlockCh:
nTxs += b.(types.TMEventData).Unwrap().(types.EventDataNewBlock).Block.Header.NumTxs
evt := b.(types.TMEventData).Unwrap().(types.EventDataNewBlock)
nTxs += int(evt.Block.Header.NumTxs)
case <-ticker.C:
panic("Timed out waiting to commit blocks with transactions")
}
}
}
func TestRmBadTx(t *testing.T) {
func TestMempoolRmBadTx(t *testing.T) {
state, privVals := randGenesisState(1, false, 10)
app := NewCounterApplication()
cs := newConsensusState(state, privVals[0], app)
@@ -58,41 +124,43 @@ func TestRmBadTx(t *testing.T) {
// increment the counter by 1
txBytes := make([]byte, 8)
binary.BigEndian.PutUint64(txBytes, uint64(0))
app.DeliverTx(txBytes)
app.Commit()
ch := make(chan struct{})
cbCh := make(chan struct{})
resDeliver := app.DeliverTx(txBytes)
assert.False(t, resDeliver.IsErr(), cmn.Fmt("expected no error. got %v", resDeliver))
resCommit := app.Commit()
assert.True(t, len(resCommit.Data) > 0)
emptyMempoolCh := make(chan struct{})
checkTxRespCh := make(chan struct{})
go func() {
// Try to send the tx through the mempool.
// CheckTx should not err, but the app should return a bad abci code
// and the tx should get removed from the pool
err := cs.mempool.CheckTx(txBytes, func(r *abci.Response) {
if r.GetCheckTx().Code != abci.CodeType_BadNonce {
if r.GetCheckTx().Code != code.CodeTypeBadNonce {
t.Fatalf("expected checktx to return bad nonce, got %v", r)
}
cbCh <- struct{}{}
checkTxRespCh <- struct{}{}
})
if err != nil {
t.Fatal("Error after CheckTx: %v", err)
t.Fatalf("Error after CheckTx: %v", err)
}
// check for the tx
for {
time.Sleep(time.Second)
txs := cs.mempool.Reap(1)
if len(txs) == 0 {
ch <- struct{}{}
return
emptyMempoolCh <- struct{}{}
}
time.Sleep(10 * time.Millisecond)
}
}()
// Wait until the tx returns
ticker := time.After(time.Second * 5)
select {
case <-cbCh:
case <-checkTxRespCh:
// success
case <-ticker:
t.Fatalf("Timed out waiting for tx to return")
@@ -101,7 +169,7 @@ func TestRmBadTx(t *testing.T) {
// Wait until the tx is removed
ticker = time.After(time.Second * 5)
select {
case <-ch:
case <-emptyMempoolCh:
// success
case <-ticker:
t.Fatalf("Timed out waiting for tx to be removed")
@@ -120,37 +188,45 @@ func NewCounterApplication() *CounterApplication {
return &CounterApplication{}
}
func (app *CounterApplication) Info() abci.ResponseInfo {
return abci.ResponseInfo{Data: Fmt("txs:%v", app.txCount)}
func (app *CounterApplication) Info(req abci.RequestInfo) abci.ResponseInfo {
return abci.ResponseInfo{Data: cmn.Fmt("txs:%v", app.txCount)}
}
func (app *CounterApplication) DeliverTx(tx []byte) abci.Result {
return runTx(tx, &app.txCount)
func (app *CounterApplication) DeliverTx(tx []byte) abci.ResponseDeliverTx {
txValue := txAsUint64(tx)
if txValue != uint64(app.txCount) {
return abci.ResponseDeliverTx{
Code: code.CodeTypeBadNonce,
Log: fmt.Sprintf("Invalid nonce. Expected %v, got %v", app.txCount, txValue)}
}
app.txCount += 1
return abci.ResponseDeliverTx{Code: code.CodeTypeOK}
}
func (app *CounterApplication) CheckTx(tx []byte) abci.Result {
return runTx(tx, &app.mempoolTxCount)
func (app *CounterApplication) CheckTx(tx []byte) abci.ResponseCheckTx {
txValue := txAsUint64(tx)
if txValue != uint64(app.mempoolTxCount) {
return abci.ResponseCheckTx{
Code: code.CodeTypeBadNonce,
Log: fmt.Sprintf("Invalid nonce. Expected %v, got %v", app.mempoolTxCount, txValue)}
}
app.mempoolTxCount += 1
return abci.ResponseCheckTx{Code: code.CodeTypeOK}
}
func runTx(tx []byte, countPtr *int) abci.Result {
count := *countPtr
func txAsUint64(tx []byte) uint64 {
tx8 := make([]byte, 8)
copy(tx8[len(tx8)-len(tx):], tx)
txValue := binary.BigEndian.Uint64(tx8)
if txValue != uint64(count) {
return abci.ErrBadNonce.AppendLog(Fmt("Invalid nonce. Expected %v, got %v", count, txValue))
}
*countPtr += 1
return abci.OK
return binary.BigEndian.Uint64(tx8)
}
func (app *CounterApplication) Commit() abci.Result {
func (app *CounterApplication) Commit() abci.ResponseCommit {
app.mempoolTxCount = app.txCount
if app.txCount == 0 {
return abci.OK
return abci.ResponseCommit{}
} else {
hash := make([]byte, 8)
binary.BigEndian.PutUint64(hash, uint64(app.txCount))
return abci.NewResultOK(hash, "")
return abci.ResponseCommit{Data: hash}
}
}
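
txAsUint64 right-aligns a possibly short tx into an 8-byte buffer before decoding, and Commit publishes the running tx count as an 8-byte app hash. A small sketch of both conversions:

```
package main

import (
	"encoding/binary"
	"fmt"
)

// txAsUint64 copies tx into the tail of an 8-byte buffer, so txs shorter than
// 8 bytes still decode as small big-endian integers (Uint64 on a short slice
// would panic).
func txAsUint64(tx []byte) uint64 {
	tx8 := make([]byte, 8)
	copy(tx8[len(tx8)-len(tx):], tx)
	return binary.BigEndian.Uint64(tx8)
}

// appHash encodes the committed tx count the same way Commit above does.
func appHash(txCount int) []byte {
	hash := make([]byte, 8)
	binary.BigEndian.PutUint64(hash, uint64(txCount))
	return hash
}

func main() {
	fmt.Println(txAsUint64([]byte{0x01, 0x02})) // 258
	fmt.Printf("%x\n", appHash(3))              // 0000000000000003
}
```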

File diff suppressed because it is too large

@@ -1,15 +1,24 @@
package consensus
import (
"context"
"fmt"
"os"
"runtime"
"runtime/pprof"
"sync"
"testing"
"time"
"github.com/tendermint/abci/example/dummy"
"github.com/tendermint/tmlibs/log"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/types"
"github.com/tendermint/tmlibs/events"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func init() {
@@ -19,71 +28,108 @@ func init() {
//----------------------------------------------
// in-process testnets
func startConsensusNet(t *testing.T, css []*ConsensusState, N int, subscribeEventRespond bool) ([]*ConsensusReactor, []chan interface{}) {
func startConsensusNet(t *testing.T, css []*ConsensusState, N int) ([]*ConsensusReactor, []chan interface{}, []*types.EventBus) {
reactors := make([]*ConsensusReactor, N)
eventChans := make([]chan interface{}, N)
logger := consensusLogger()
eventBuses := make([]*types.EventBus, N)
for i := 0; i < N; i++ {
/*logger, err := tmflags.ParseLogLevel("consensus:info,*:error", logger, "info")
if err != nil { t.Fatal(err)}*/
reactors[i] = NewConsensusReactor(css[i], true) // so we dont start the consensus states
reactors[i].SetLogger(logger.With("validator", i))
reactors[i].SetLogger(css[i].Logger)
eventSwitch := events.NewEventSwitch()
eventSwitch.SetLogger(logger.With("module", "events", "validator", i))
_, err := eventSwitch.Start()
if err != nil {
t.Fatalf("Failed to start switch: %v", err)
}
// eventBus is already started with the cs
eventBuses[i] = css[i].eventBus
reactors[i].SetEventBus(eventBuses[i])
reactors[i].SetEventSwitch(eventSwitch)
if subscribeEventRespond {
eventChans[i] = subscribeToEventRespond(eventSwitch, "tester", types.EventStringNewBlock())
} else {
eventChans[i] = subscribeToEvent(eventSwitch, "tester", types.EventStringNewBlock(), 1)
}
eventChans[i] = make(chan interface{}, 1)
err := eventBuses[i].Subscribe(context.Background(), testSubscriber, types.EventQueryNewBlock, eventChans[i])
require.NoError(t, err)
}
// make connected switches and start all reactors
p2p.MakeConnectedSwitches(config.P2P, N, func(i int, s *p2p.Switch) *p2p.Switch {
s.AddReactor("CONSENSUS", reactors[i])
s.SetLogger(reactors[i].conS.Logger.With("module", "p2p"))
return s
}, p2p.Connect2Switches)
// now that everyone is connected, start the state machines
// If we started the state machines before everyone was connected,
// we'd block when the cs fires NewBlockEvent and the peers are trying to start their reactors
// TODO: is this still true with new pubsub?
for i := 0; i < N; i++ {
s := reactors[i].conS.GetState()
reactors[i].SwitchToConsensus(s)
reactors[i].SwitchToConsensus(s, 0)
}
return reactors, eventChans
return reactors, eventChans, eventBuses
}
func stopConsensusNet(reactors []*ConsensusReactor) {
for _, r := range reactors {
func stopConsensusNet(logger log.Logger, reactors []*ConsensusReactor, eventBuses []*types.EventBus) {
logger.Info("stopConsensusNet", "n", len(reactors))
for i, r := range reactors {
logger.Info("stopConsensusNet: Stopping ConsensusReactor", "i", i)
r.Switch.Stop()
}
for i, b := range eventBuses {
logger.Info("stopConsensusNet: Stopping eventBus", "i", i)
b.Stop()
}
logger.Info("stopConsensusNet: DONE", "n", len(reactors))
}
// Ensure a testnet makes blocks
func TestReactor(t *testing.T) {
func TestReactorBasic(t *testing.T) {
N := 4
css := randConsensusNet(N, "consensus_reactor_test", newMockTickerFunc(true), newCounter)
reactors, eventChans := startConsensusNet(t, css, N, false)
defer stopConsensusNet(reactors)
reactors, eventChans, eventBuses := startConsensusNet(t, css, N)
defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)
// wait till everyone makes the first new block
timeoutWaitGroup(t, N, func(wg *sync.WaitGroup, j int) {
timeoutWaitGroup(t, N, func(j int) {
<-eventChans[j]
}, css)
}
// Ensure a testnet sends proposal heartbeats and makes blocks when there are txs
func TestReactorProposalHeartbeats(t *testing.T) {
N := 4
css := randConsensusNet(N, "consensus_reactor_test", newMockTickerFunc(true), newCounter,
func(c *cfg.Config) {
c.Consensus.CreateEmptyBlocks = false
})
reactors, eventChans, eventBuses := startConsensusNet(t, css, N)
defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)
heartbeatChans := make([]chan interface{}, N)
var err error
for i := 0; i < N; i++ {
heartbeatChans[i] = make(chan interface{}, 1)
err = eventBuses[i].Subscribe(context.Background(), testSubscriber, types.EventQueryProposalHeartbeat, heartbeatChans[i])
require.NoError(t, err)
}
// wait till everyone sends a proposal heartbeat
timeoutWaitGroup(t, N, func(j int) {
<-heartbeatChans[j]
}, css)
// send a tx
if err := css[3].mempool.CheckTx([]byte{1, 2, 3}, nil); err != nil {
//t.Fatal(err)
}
// wait till everyone makes the first new block
timeoutWaitGroup(t, N, func(j int) {
<-eventChans[j]
wg.Done()
}, css)
}
//-------------------------------------------------------------
// ensure we can make blocks despite cycling a validator set
func TestVotingPowerChange(t *testing.T) {
func TestReactorVotingPowerChange(t *testing.T) {
nVals := 4
logger := log.TestingLogger()
css := randConsensusNet(nVals, "consensus_voting_power_changes_test", newMockTickerFunc(true), newPersistentDummy)
reactors, eventChans := startConsensusNet(t, css, nVals, true)
defer stopConsensusNet(reactors)
reactors, eventChans, eventBuses := startConsensusNet(t, css, nVals)
defer stopConsensusNet(logger, reactors, eventBuses)
// map of active validators
activeVals := make(map[string]struct{})
@@ -92,21 +138,19 @@ func TestVotingPowerChange(t *testing.T) {
}
// wait till everyone makes block 1
timeoutWaitGroup(t, nVals, func(wg *sync.WaitGroup, j int) {
timeoutWaitGroup(t, nVals, func(j int) {
<-eventChans[j]
eventChans[j] <- struct{}{}
wg.Done()
}, css)
//---------------------------------------------------------------------------
t.Log("---------------------------- Testing changing the voting power of one validator a few times")
logger.Debug("---------------------------- Testing changing the voting power of one validator a few times")
val1PubKey := css[0].privValidator.(*types.PrivValidator).PubKey
val1PubKey := css[0].privValidator.GetPubKey()
updateValidatorTx := dummy.MakeValSetChangeTx(val1PubKey.Bytes(), 25)
previousTotalVotingPower := css[0].GetRoundState().LastValidators.TotalVotingPower()
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css, updateValidatorTx)
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
waitForAndValidateBlockWithTx(t, nVals, activeVals, eventChans, css, updateValidatorTx)
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
@@ -118,7 +162,7 @@ func TestVotingPowerChange(t *testing.T) {
previousTotalVotingPower = css[0].GetRoundState().LastValidators.TotalVotingPower()
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css, updateValidatorTx)
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
waitForAndValidateBlockWithTx(t, nVals, activeVals, eventChans, css, updateValidatorTx)
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
@@ -126,11 +170,11 @@ func TestVotingPowerChange(t *testing.T) {
t.Fatalf("expected voting power to change (before: %d, after: %d)", previousTotalVotingPower, css[0].GetRoundState().LastValidators.TotalVotingPower())
}
updateValidatorTx = dummy.MakeValSetChangeTx(val1PubKey.Bytes(), 100)
updateValidatorTx = dummy.MakeValSetChangeTx(val1PubKey.Bytes(), 26)
previousTotalVotingPower = css[0].GetRoundState().LastValidators.TotalVotingPower()
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css, updateValidatorTx)
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
waitForAndValidateBlockWithTx(t, nVals, activeVals, eventChans, css, updateValidatorTx)
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
@@ -139,12 +183,15 @@ func TestVotingPowerChange(t *testing.T) {
}
}
func TestValidatorSetChanges(t *testing.T) {
func TestReactorValidatorSetChanges(t *testing.T) {
nPeers := 7
nVals := 4
css := randConsensusNetWithPeers(nVals, nPeers, "consensus_val_set_changes_test", newMockTickerFunc(true), newPersistentDummy)
reactors, eventChans := startConsensusNet(t, css, nPeers, true)
defer stopConsensusNet(reactors)
logger := log.TestingLogger()
reactors, eventChans, eventBuses := startConsensusNet(t, css, nPeers)
defer stopConsensusNet(logger, reactors, eventBuses)
// map of active validators
activeVals := make(map[string]struct{})
@@ -153,17 +200,15 @@ func TestValidatorSetChanges(t *testing.T) {
}
// wait till everyone makes block 1
timeoutWaitGroup(t, nPeers, func(wg *sync.WaitGroup, j int) {
timeoutWaitGroup(t, nPeers, func(j int) {
<-eventChans[j]
eventChans[j] <- struct{}{}
wg.Done()
}, css)
//---------------------------------------------------------------------------
t.Log("---------------------------- Testing adding one validator")
logger.Info("---------------------------- Testing adding one validator")
newValidatorPubKey1 := css[nVals].privValidator.(*types.PrivValidator).PubKey
newValidatorTx1 := dummy.MakeValSetChangeTx(newValidatorPubKey1.Bytes(), uint64(testMinPower))
newValidatorPubKey1 := css[nVals].privValidator.GetPubKey()
newValidatorTx1 := dummy.MakeValSetChangeTx(newValidatorPubKey1.Bytes(), testMinPower)
// wait till everyone makes block 2
// ensure the commit includes all validators
@@ -172,7 +217,7 @@ func TestValidatorSetChanges(t *testing.T) {
// wait till everyone makes block 3.
// it includes the commit for block 2, which is by the original validator set
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
waitForAndValidateBlockWithTx(t, nPeers, activeVals, eventChans, css, newValidatorTx1)
// wait till everyone makes block 4.
// it includes the commit for block 3, which is by the original validator set
@@ -183,52 +228,52 @@ func TestValidatorSetChanges(t *testing.T) {
// wait till everyone makes block 5
// it includes the commit for block 4, which should have the updated validator set
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, eventChans, css)
//---------------------------------------------------------------------------
t.Log("---------------------------- Testing changing the voting power of one validator")
logger.Info("---------------------------- Testing changing the voting power of one validator")
updateValidatorPubKey1 := css[nVals].privValidator.(*types.PrivValidator).PubKey
updateValidatorPubKey1 := css[nVals].privValidator.GetPubKey()
updateValidatorTx1 := dummy.MakeValSetChangeTx(updateValidatorPubKey1.Bytes(), 25)
previousTotalVotingPower := css[nVals].GetRoundState().LastValidators.TotalVotingPower()
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css, updateValidatorTx1)
waitForAndValidateBlockWithTx(t, nPeers, activeVals, eventChans, css, updateValidatorTx1)
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, eventChans, css)
if css[nVals].GetRoundState().LastValidators.TotalVotingPower() == previousTotalVotingPower {
t.Errorf("expected voting power to change (before: %d, after: %d)", previousTotalVotingPower, css[nVals].GetRoundState().LastValidators.TotalVotingPower())
}
//---------------------------------------------------------------------------
t.Log("---------------------------- Testing adding two validators at once")
logger.Info("---------------------------- Testing adding two validators at once")
newValidatorPubKey2 := css[nVals+1].privValidator.(*types.PrivValidator).PubKey
newValidatorTx2 := dummy.MakeValSetChangeTx(newValidatorPubKey2.Bytes(), uint64(testMinPower))
newValidatorPubKey2 := css[nVals+1].privValidator.GetPubKey()
newValidatorTx2 := dummy.MakeValSetChangeTx(newValidatorPubKey2.Bytes(), testMinPower)
newValidatorPubKey3 := css[nVals+2].privValidator.(*types.PrivValidator).PubKey
newValidatorTx3 := dummy.MakeValSetChangeTx(newValidatorPubKey3.Bytes(), uint64(testMinPower))
newValidatorPubKey3 := css[nVals+2].privValidator.GetPubKey()
newValidatorTx3 := dummy.MakeValSetChangeTx(newValidatorPubKey3.Bytes(), testMinPower)
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css, newValidatorTx2, newValidatorTx3)
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
waitForAndValidateBlockWithTx(t, nPeers, activeVals, eventChans, css, newValidatorTx2, newValidatorTx3)
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
activeVals[string(newValidatorPubKey2.Address())] = struct{}{}
activeVals[string(newValidatorPubKey3.Address())] = struct{}{}
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, eventChans, css)
//---------------------------------------------------------------------------
t.Log("---------------------------- Testing removing two validators at once")
logger.Info("---------------------------- Testing removing two validators at once")
removeValidatorTx2 := dummy.MakeValSetChangeTx(newValidatorPubKey2.Bytes(), 0)
removeValidatorTx3 := dummy.MakeValSetChangeTx(newValidatorPubKey3.Bytes(), 0)
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css, removeValidatorTx2, removeValidatorTx3)
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
waitForAndValidateBlockWithTx(t, nPeers, activeVals, eventChans, css, removeValidatorTx2, removeValidatorTx3)
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
delete(activeVals, string(newValidatorPubKey2.Address()))
delete(activeVals, string(newValidatorPubKey3.Address()))
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, eventChans, css)
}
// Check we can make blocks with skip_timeout_commit=false
@@ -240,31 +285,86 @@ func TestReactorWithTimeoutCommit(t *testing.T) {
css[i].config.SkipTimeoutCommit = false
}
reactors, eventChans := startConsensusNet(t, css, N-1, false)
defer stopConsensusNet(reactors)
reactors, eventChans, eventBuses := startConsensusNet(t, css, N-1)
defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)
// wait till everyone makes the first new block
timeoutWaitGroup(t, N-1, func(wg *sync.WaitGroup, j int) {
timeoutWaitGroup(t, N-1, func(j int) {
<-eventChans[j]
wg.Done()
}, css)
}
func waitForAndValidateBlock(t *testing.T, n int, activeVals map[string]struct{}, eventChans []chan interface{}, css []*ConsensusState, txs ...[]byte) {
timeoutWaitGroup(t, n, func(wg *sync.WaitGroup, j int) {
newBlockI := <-eventChans[j]
newBlock := newBlockI.(types.TMEventData).Unwrap().(types.EventDataNewBlock).Block
t.Logf("[WARN] Got block height=%v validator=%v", newBlock.Height, j)
err := validateBlock(newBlock, activeVals)
if err != nil {
t.Fatal(err)
timeoutWaitGroup(t, n, func(j int) {
css[j].Logger.Debug("waitForAndValidateBlock")
newBlockI, ok := <-eventChans[j]
if !ok {
return
}
newBlock := newBlockI.(types.TMEventData).Unwrap().(types.EventDataNewBlock).Block
css[j].Logger.Debug("waitForAndValidateBlock: Got block", "height", newBlock.Height)
err := validateBlock(newBlock, activeVals)
assert.Nil(t, err)
for _, tx := range txs {
css[j].mempool.CheckTx(tx, nil)
assert.Nil(t, err)
}
}, css)
}
func waitForAndValidateBlockWithTx(t *testing.T, n int, activeVals map[string]struct{}, eventChans []chan interface{}, css []*ConsensusState, txs ...[]byte) {
timeoutWaitGroup(t, n, func(j int) {
ntxs := 0
BLOCK_TX_LOOP:
for {
css[j].Logger.Debug("waitForAndValidateBlockWithTx", "ntxs", ntxs)
newBlockI, ok := <-eventChans[j]
if !ok {
return
}
newBlock := newBlockI.(types.TMEventData).Unwrap().(types.EventDataNewBlock).Block
css[j].Logger.Debug("waitForAndValidateBlockWithTx: Got block", "height", newBlock.Height)
err := validateBlock(newBlock, activeVals)
assert.Nil(t, err)
// check that txs match the txs we're waiting for.
// note they could be spread over multiple blocks,
// but they should be in order.
for _, tx := range newBlock.Data.Txs {
assert.EqualValues(t, txs[ntxs], tx)
ntxs += 1
}
if ntxs == len(txs) {
break BLOCK_TX_LOOP
}
}
eventChans[j] <- struct{}{}
wg.Done()
}, css)
}
func waitForBlockWithUpdatedValsAndValidateIt(t *testing.T, n int, updatedVals map[string]struct{}, eventChans []chan interface{}, css []*ConsensusState) {
timeoutWaitGroup(t, n, func(j int) {
var newBlock *types.Block
LOOP:
for {
css[j].Logger.Debug("waitForBlockWithUpdatedValsAndValidateIt")
newBlockI, ok := <-eventChans[j]
if !ok {
return
}
newBlock = newBlockI.(types.TMEventData).Unwrap().(types.EventDataNewBlock).Block
if newBlock.LastCommit.Size() == len(updatedVals) {
css[j].Logger.Debug("waitForBlockWithUpdatedValsAndValidateIt: Got block", "height", newBlock.Height)
break LOOP
} else {
css[j].Logger.Debug("waitForBlockWithUpdatedValsAndValidateIt: Got block with no new validators. Skipping", "height", newBlock.Height)
}
}
err := validateBlock(newBlock, updatedVals)
assert.Nil(t, err)
}, css)
}
@@ -282,11 +382,14 @@ func validateBlock(block *types.Block, activeVals map[string]struct{}) error {
return nil
}
func timeoutWaitGroup(t *testing.T, n int, f func(*sync.WaitGroup, int), css []*ConsensusState) {
func timeoutWaitGroup(t *testing.T, n int, f func(int), css []*ConsensusState) {
wg := new(sync.WaitGroup)
wg.Add(n)
for i := 0; i < n; i++ {
go f(wg, i)
go func(j int) {
f(j)
wg.Done()
}(i)
}
done := make(chan struct{})
@@ -295,15 +398,28 @@ func timeoutWaitGroup(t *testing.T, n int, f func(*sync.WaitGroup, int), css []*
close(done)
}()
// we're running many nodes in-process, possibly in a virtual machine,
// and spewing debug messages - making a block could take a while,
timeout := time.Second * 300
select {
case <-done:
case <-time.After(time.Second * 10):
case <-time.After(timeout):
for i, cs := range css {
fmt.Println("#################")
fmt.Println("Validator", i)
fmt.Println(cs.GetRoundState())
fmt.Println("")
t.Log("#################")
t.Log("Validator", i)
t.Log(cs.GetRoundState())
t.Log("")
}
os.Stdout.Write([]byte("pprof.Lookup('goroutine'):\n"))
pprof.Lookup("goroutine").WriteTo(os.Stdout, 1)
capture()
panic("Timed out waiting for all validators to commit a block")
}
}
func capture() {
trace := make([]byte, 10240000)
count := runtime.Stack(trace, true)
fmt.Printf("Stack of %d bytes: %s\n", count, trace)
}
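
timeoutWaitGroup converts a WaitGroup into a channel so it can be raced against a deadline, and capture dumps every goroutine's stack when the race is lost. A compact, standalone version of both pieces:

```
package main

import (
	"fmt"
	"os"
	"runtime"
	"runtime/pprof"
	"sync"
	"time"
)

// waitWithTimeout returns true if every worker finished before the deadline.
func waitWithTimeout(n int, work func(int), timeout time.Duration) bool {
	var wg sync.WaitGroup
	wg.Add(n)
	for i := 0; i < n; i++ {
		go func(j int) {
			defer wg.Done()
			work(j)
		}(i)
	}

	done := make(chan struct{})
	go func() {
		wg.Wait()
		close(done)
	}()

	select {
	case <-done:
		return true
	case <-time.After(timeout):
		dumpGoroutines()
		return false
	}
}

// dumpGoroutines prints all goroutine stacks, like capture() above.
func dumpGoroutines() {
	pprof.Lookup("goroutine").WriteTo(os.Stdout, 1)
	trace := make([]byte, 1<<20)
	n := runtime.Stack(trace, true)
	fmt.Printf("Stack of %d bytes:\n%s\n", n, trace[:n])
}

func main() {
	ok := waitWithTimeout(3, func(j int) { time.Sleep(10 * time.Millisecond) }, time.Second)
	fmt.Println("all done:", ok)
}
```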


@@ -2,25 +2,28 @@ package consensus
import (
"bytes"
"errors"
"fmt"
"hash/crc32"
"io"
"reflect"
"strconv"
"strings"
//"strconv"
//"strings"
"time"
abci "github.com/tendermint/abci/types"
wire "github.com/tendermint/go-wire"
auto "github.com/tendermint/tmlibs/autofile"
//auto "github.com/tendermint/tmlibs/autofile"
cmn "github.com/tendermint/tmlibs/common"
dbm "github.com/tendermint/tmlibs/db"
"github.com/tendermint/tmlibs/log"
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
"github.com/tendermint/tendermint/version"
)
var crc32c = crc32.MakeTable(crc32.Castagnoli)
// Functionality to replay blocks and messages on recovery from a crash.
// There are two general failure scenarios: failure during consensus, and failure while applying the block.
// The former is handled by the WAL, the latter by the proxyApp Handshake on restart,
@@ -34,18 +37,11 @@ import (
// as if it were received in receiveRoutine
// Lines that start with "#" are ignored.
// NOTE: receiveRoutine should not be running
func (cs *ConsensusState) readReplayMessage(msgBytes []byte, newStepCh chan interface{}) error {
// Skip over empty and meta lines
if len(msgBytes) == 0 || msgBytes[0] == '#' {
func (cs *ConsensusState) readReplayMessage(msg *TimedWALMessage, newStepCh chan interface{}) error {
// skip meta messages
if _, ok := msg.Msg.(EndHeightMessage); ok {
return nil
}
var err error
var msg TimedWALMessage
wire.ReadJSON(&msg, msgBytes, &err)
if err != nil {
fmt.Println("MsgBytes:", msgBytes, string(msgBytes))
return fmt.Errorf("Error reading json data: %v", err)
}
// for logging
switch m := msg.Msg.(type) {
@@ -65,24 +61,24 @@ func (cs *ConsensusState) readReplayMessage(msgBytes []byte, newStepCh chan inte
}
}
case msgInfo:
peerKey := m.PeerKey
if peerKey == "" {
peerKey = "local"
peerID := m.PeerID
if peerID == "" {
peerID = "local"
}
switch msg := m.Msg.(type) {
case *ProposalMessage:
p := msg.Proposal
cs.Logger.Info("Replay: Proposal", "height", p.Height, "round", p.Round, "header",
p.BlockPartsHeader, "pol", p.POLRound, "peer", peerKey)
p.BlockPartsHeader, "pol", p.POLRound, "peer", peerID)
case *BlockPartMessage:
cs.Logger.Info("Replay: BlockPart", "height", msg.Height, "round", msg.Round, "peer", peerKey)
cs.Logger.Info("Replay: BlockPart", "height", msg.Height, "round", msg.Round, "peer", peerID)
case *VoteMessage:
v := msg.Vote
cs.Logger.Info("Replay: Vote", "height", v.Height, "round", v.Round, "type", v.Type,
"blockID", v.BlockID, "peer", peerKey)
"blockID", v.BlockID, "peer", peerID)
}
cs.handleMsg(m, cs.RoundState)
cs.handleMsg(m)
case timeoutInfo:
cs.Logger.Info("Replay: Timeout", "height", m.Height, "round", m.Round, "step", m.Step, "dur", m.Duration)
cs.handleTimeout(m, cs.RoundState)
@@ -94,73 +90,64 @@ func (cs *ConsensusState) readReplayMessage(msgBytes []byte, newStepCh chan inte
// replay only those messages since the last block.
// timeoutRoutine should run concurrently to read off tickChan
func (cs *ConsensusState) catchupReplay(csHeight int) error {
func (cs *ConsensusState) catchupReplay(csHeight int64) error {
// set replayMode
cs.replayMode = true
defer func() { cs.replayMode = false }()
// Ensure that ENDHEIGHT for this height doesn't exist
// NOTE: This is just a sanity check. As far as we know things work fine without it,
// and Handshake could reuse ConsensusState if it weren't for this check (since we can crash after writing ENDHEIGHT).
gr, found, err := cs.wal.group.Search("#ENDHEIGHT: ", makeHeightSearchFunc(csHeight))
// Ensure that ENDHEIGHT for this height doesn't exist.
// NOTE: This is just a sanity check. As far as we know things work fine
// without it, and Handshake could reuse ConsensusState if it weren't for
// this check (since we can crash after writing ENDHEIGHT).
//
// Ignore data corruption errors since this is a sanity check.
gr, found, err := cs.wal.SearchForEndHeight(csHeight, &WALSearchOptions{IgnoreDataCorruptionErrors: true})
if err != nil {
return err
}
if gr != nil {
gr.Close()
if err := gr.Close(); err != nil {
return err
}
}
if found {
return errors.New(cmn.Fmt("WAL should not contain #ENDHEIGHT %d.", csHeight))
return fmt.Errorf("WAL should not contain #ENDHEIGHT %d.", csHeight)
}
// Search for last height marker
gr, found, err = cs.wal.group.Search("#ENDHEIGHT: ", makeHeightSearchFunc(csHeight-1))
//
// Ignore data corruption errors in previous heights because we only care about last height
gr, found, err = cs.wal.SearchForEndHeight(csHeight-1, &WALSearchOptions{IgnoreDataCorruptionErrors: true})
if err == io.EOF {
cs.Logger.Error("Replay: wal.group.Search returned EOF", "#ENDHEIGHT", csHeight-1)
// if we upgraded from 0.9 to 0.9.1, we may have #HEIGHT instead
// TODO (0.10.0): remove this
gr, found, err = cs.wal.group.Search("#HEIGHT: ", makeHeightSearchFunc(csHeight))
if err == io.EOF {
cs.Logger.Error("Replay: wal.group.Search returned EOF", "#HEIGHT", csHeight)
return nil
} else if err != nil {
return err
}
} else if err != nil {
return err
} else {
defer gr.Close()
}
if !found {
// if we upgraded from 0.9 to 0.9.1, we may have #HEIGHT instead
// TODO (0.10.0): remove this
gr, _, err = cs.wal.group.Search("#HEIGHT: ", makeHeightSearchFunc(csHeight))
if err == io.EOF {
cs.Logger.Error("Replay: wal.group.Search returned EOF", "#HEIGHT", csHeight)
return nil
} else if err != nil {
return err
} else {
defer gr.Close()
}
// TODO (0.10.0): uncomment
// return errors.New(cmn.Fmt("Cannot replay height %d. WAL does not contain #ENDHEIGHT for %d.", csHeight, csHeight-1))
return fmt.Errorf("Cannot replay height %d. WAL does not contain #ENDHEIGHT for %d.", csHeight, csHeight-1)
}
defer gr.Close() // nolint: errcheck
cs.Logger.Info("Catchup by replaying consensus messages", "height", csHeight)
var msg *TimedWALMessage
dec := WALDecoder{gr}
for {
line, err := gr.ReadLine()
if err != nil {
if err == io.EOF {
break
} else {
return err
}
msg, err = dec.Decode()
if err == io.EOF {
break
} else if IsDataCorruptionError(err) {
cs.Logger.Debug("data has been corrupted in last height of consensus WAL", "err", err, "height", csHeight)
panic(fmt.Sprintf("data has been corrupted (%v) in last height %d of consensus WAL", err, csHeight))
} else if err != nil {
return err
}
// NOTE: since the priv key is set when the msgs are received
// it will attempt to eg double sign but we can just ignore it
// since the votes will be replayed and we'll get to the next step
if err := cs.readReplayMessage([]byte(line), nil); err != nil {
if err := cs.readReplayMessage(msg, nil); err != nil {
return err
}
}
@@ -172,7 +159,8 @@ func (cs *ConsensusState) catchupReplay(csHeight int) error {
// Parses marker lines of the form:
// #ENDHEIGHT: 12345
func makeHeightSearchFunc(height int) auto.SearchFunc {
/*
func makeHeightSearchFunc(height int64) auto.SearchFunc {
return func(line string) (int, error) {
line = strings.TrimRight(line, "\n")
parts := strings.Split(line, " ")
@@ -191,7 +179,7 @@ func makeHeightSearchFunc(height int) auto.SearchFunc {
return -1, nil
}
}
}
}*/
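
catchupReplay now pulls typed records out of a WAL decoder until io.EOF and treats corruption as a distinct case, instead of scanning for "#ENDHEIGHT" marker lines itself. The Tendermint WAL encoding is not reproduced here; as a rough, generic illustration of the decode-until-EOF loop, this sketch streams JSON records with the standard library (the record type and replay helper are invented for the example):

```
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"strings"
)

type record struct {
	Height int64  `json:"height"`
	Msg    string `json:"msg"`
}

// replay decodes records until EOF, stopping with an error on anything else,
// which is the shape of the new WAL catch-up loop.
func replay(r io.Reader, apply func(record) error) error {
	dec := json.NewDecoder(r)
	for {
		var rec record
		err := dec.Decode(&rec)
		if err == io.EOF {
			return nil
		} else if err != nil {
			return fmt.Errorf("decoding WAL record: %w", err)
		}
		if err := apply(rec); err != nil {
			return err
		}
	}
}

func main() {
	wal := strings.NewReader(`{"height":5,"msg":"proposal"}
{"height":5,"msg":"vote"}`)
	err := replay(wal, func(r record) error {
		fmt.Printf("replaying height=%d msg=%s\n", r.Height, r.Msg)
		return nil
	})
	fmt.Println("replay err:", err)
}
```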
//----------------------------------------------
// Recover from failure during block processing
@@ -199,15 +187,16 @@ func makeHeightSearchFunc(height int) auto.SearchFunc {
// we were last and using the WAL to recover there
type Handshaker struct {
state *sm.State
store types.BlockStore
logger log.Logger
stateDB dbm.DB
initialState sm.State
store types.BlockStore
logger log.Logger
nBlocks int // number of blocks applied to the state
}
func NewHandshaker(state *sm.State, store types.BlockStore) *Handshaker {
return &Handshaker{state, store, log.NewNopLogger(), 0}
func NewHandshaker(stateDB dbm.DB, state sm.State, store types.BlockStore) *Handshaker {
return &Handshaker{stateDB, state, store, log.NewNopLogger(), 0}
}
func (h *Handshaker) SetLogger(l log.Logger) {
@@ -221,12 +210,15 @@ func (h *Handshaker) NBlocks() int {
// TODO: retry the handshake/replay if it fails ?
func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
// handshake is done via info request on the query conn
res, err := proxyApp.Query().InfoSync()
res, err := proxyApp.Query().InfoSync(abci.RequestInfo{version.Version})
if err != nil {
return errors.New(cmn.Fmt("Error calling Info: %v", err))
return fmt.Errorf("Error calling Info: %v", err)
}
blockHeight := int(res.LastBlockHeight) // XXX: beware overflow
blockHeight := int64(res.LastBlockHeight)
if blockHeight < 0 {
return fmt.Errorf("Got a negative last block height (%d) from the app", blockHeight)
}
appHash := res.LastBlockAppHash
h.logger.Info("ABCI Handshake", "appHeight", blockHeight, "appHash", fmt.Sprintf("%X", appHash))
@@ -234,9 +226,9 @@ func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
// TODO: check version
// replay blocks up to the latest in the blockstore
_, err = h.ReplayBlocks(appHash, blockHeight, proxyApp)
_, err = h.ReplayBlocks(h.initialState, appHash, blockHeight, proxyApp)
if err != nil {
return errors.New(cmn.Fmt("Error on replay: %v", err))
return fmt.Errorf("Error on replay: %v", err)
}
h.logger.Info("Completed ABCI Handshake - Tendermint and App are synced", "appHeight", blockHeight, "appHash", fmt.Sprintf("%X", appHash))
@@ -248,21 +240,25 @@ func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
// Replay all blocks since appBlockHeight and ensure the result matches the current state.
// Returns the final AppHash or an error
func (h *Handshaker) ReplayBlocks(appHash []byte, appBlockHeight int, proxyApp proxy.AppConns) ([]byte, error) {
func (h *Handshaker) ReplayBlocks(state sm.State, appHash []byte, appBlockHeight int64, proxyApp proxy.AppConns) ([]byte, error) {
storeBlockHeight := h.store.Height()
stateBlockHeight := h.state.LastBlockHeight
stateBlockHeight := state.LastBlockHeight
h.logger.Info("ABCI Replay Blocks", "appHeight", appBlockHeight, "storeHeight", storeBlockHeight, "stateHeight", stateBlockHeight)
// If appBlockHeight == 0 it means that we are at genesis and hence should send InitChain
if appBlockHeight == 0 {
validators := types.TM2PB.Validators(h.state.Validators)
proxyApp.Consensus().InitChainSync(validators)
validators := types.TM2PB.Validators(state.Validators)
// TODO: get the genesis bytes (https://github.com/tendermint/tendermint/issues/1224)
var genesisBytes []byte
if _, err := proxyApp.Consensus().InitChainSync(abci.RequestInitChain{validators, genesisBytes}); err != nil {
return nil, err
}
}
// First handle edge cases and constraints on the storeBlockHeight
if storeBlockHeight == 0 {
return appHash, h.checkAppHash(appHash)
return appHash, checkAppHash(state, appHash)
} else if storeBlockHeight < appBlockHeight {
// the app should never be ahead of the store (but this is under app's control)
@@ -277,6 +273,7 @@ func (h *Handshaker) ReplayBlocks(appHash []byte, appBlockHeight int, proxyApp p
cmn.PanicSanity(cmn.Fmt("StoreBlockHeight (%d) > StateBlockHeight + 1 (%d)", storeBlockHeight, stateBlockHeight+1))
}
var err error
// Now either store is equal to state, or one ahead.
// For each, consider all cases of where the app could be, given app <= store
if storeBlockHeight == stateBlockHeight {
@@ -284,11 +281,11 @@ func (h *Handshaker) ReplayBlocks(appHash []byte, appBlockHeight int, proxyApp p
// Either the app is asking for replay, or we're all synced up.
if appBlockHeight < storeBlockHeight {
// the app is behind, so replay blocks, but no need to go through WAL (state is already synced to store)
return h.replayBlocks(proxyApp, appBlockHeight, storeBlockHeight, false)
return h.replayBlocks(state, proxyApp, appBlockHeight, storeBlockHeight, false)
} else if appBlockHeight == storeBlockHeight {
// We're good!
return appHash, h.checkAppHash(appHash)
return appHash, checkAppHash(state, appHash)
}
} else if storeBlockHeight == stateBlockHeight+1 {
@@ -297,7 +294,7 @@ func (h *Handshaker) ReplayBlocks(appHash []byte, appBlockHeight int, proxyApp p
if appBlockHeight < stateBlockHeight {
// the app is further behind than it should be, so replay blocks
// but leave the last block to go through the WAL
return h.replayBlocks(proxyApp, appBlockHeight, storeBlockHeight, true)
return h.replayBlocks(state, proxyApp, appBlockHeight, storeBlockHeight, true)
} else if appBlockHeight == stateBlockHeight {
// We haven't run Commit (both the state and app are one block behind),
@@ -305,14 +302,19 @@ func (h *Handshaker) ReplayBlocks(appHash []byte, appBlockHeight int, proxyApp p
// NOTE: We could instead use the cs.WAL on cs.Start,
// but we'd have to allow the WAL to replay a block that wrote its ENDHEIGHT
h.logger.Info("Replay last block using real app")
return h.replayBlock(storeBlockHeight, proxyApp.Consensus())
state, err = h.replayBlock(state, storeBlockHeight, proxyApp.Consensus())
return state.AppHash, err
} else if appBlockHeight == storeBlockHeight {
// We ran Commit, but didn't save the state, so replayBlock with mock app
abciResponses := h.state.LoadABCIResponses()
abciResponses, err := sm.LoadABCIResponses(h.stateDB, storeBlockHeight)
if err != nil {
return nil, err
}
mockApp := newMockProxyApp(appHash, abciResponses)
h.logger.Info("Replay last block using mock app")
return h.replayBlock(storeBlockHeight, mockApp)
state, err = h.replayBlock(state, storeBlockHeight, mockApp)
return state.AppHash, err
}
}
@@ -321,11 +323,15 @@ func (h *Handshaker) ReplayBlocks(appHash []byte, appBlockHeight int, proxyApp p
return nil, nil
}
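Read sequentially, the height comparisons above reduce to a small decision table over (appBlockHeight, storeBlockHeight, stateBlockHeight). Below is a minimal, self-contained sketch of that table — labels only, not the app-hash plumbing or the genesis InitChain step the real ReplayBlocks also handles:

```go
package main

import "fmt"

// replayAction summarizes which branch the handshaker takes for a given
// combination of heights. Labels only: the real ReplayBlocks also sends
// InitChain at genesis and returns the app hash (or an error).
func replayAction(appH, storeH, stateH int64) string {
	switch {
	case storeH == 0:
		return "nothing stored yet: just check the app hash"
	case storeH < appH:
		return "app ahead of the store: error (this is under the app's control)"
	case storeH == stateH && appH < storeH:
		return "replay blocks with the real app, no WAL needed (state already caught up)"
	case storeH == stateH && appH == storeH:
		return "fully synced: check the app hash"
	case storeH == stateH+1 && appH < stateH:
		return "replay blocks, but leave the last one to the WAL"
	case storeH == stateH+1 && appH == stateH:
		return "Commit never ran: replay the last block with the real app"
	case storeH == stateH+1 && appH == storeH:
		return "Commit ran but state was not saved: replay the last block with the mock app"
	default:
		return "unexpected combination (store may be at most one block ahead of state)"
	}
}

func main() {
	fmt.Println(replayAction(5, 6, 5)) // Commit never ran: replay the last block with the real app
}
```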
func (h *Handshaker) replayBlocks(proxyApp proxy.AppConns, appBlockHeight, storeBlockHeight int, mutateState bool) ([]byte, error) {
func (h *Handshaker) replayBlocks(state sm.State, proxyApp proxy.AppConns, appBlockHeight, storeBlockHeight int64, mutateState bool) ([]byte, error) {
// App is further behind than it should be, so we need to replay blocks.
// We replay all blocks from appBlockHeight+1.
//
// Note that we don't have an old version of the state,
// so we by-pass state validation/mutation using sm.ExecCommitBlock.
// This also means we won't be saving validator sets if they change during this period.
// TODO: Load the historical information to fix this and just use state.ApplyBlock
//
// If mutateState == true, the final block is replayed with h.replayBlock()
var appHash []byte
@@ -347,33 +353,37 @@ func (h *Handshaker) replayBlocks(proxyApp proxy.AppConns, appBlockHeight, store
if mutateState {
// sync the final block
return h.replayBlock(storeBlockHeight, proxyApp.Consensus())
state, err = h.replayBlock(state, storeBlockHeight, proxyApp.Consensus())
if err != nil {
return nil, err
}
appHash = state.AppHash
}
return appHash, h.checkAppHash(appHash)
return appHash, checkAppHash(state, appHash)
}
// ApplyBlock on the proxyApp with the last block.
func (h *Handshaker) replayBlock(height int, proxyApp proxy.AppConnConsensus) ([]byte, error) {
mempool := types.MockMempool{}
var eventCache types.Fireable // nil
func (h *Handshaker) replayBlock(state sm.State, height int64, proxyApp proxy.AppConnConsensus) (sm.State, error) {
block := h.store.LoadBlock(height)
meta := h.store.LoadBlockMeta(height)
if err := h.state.ApplyBlock(eventCache, proxyApp, block, meta.BlockID.PartsHeader, mempool); err != nil {
return nil, err
blockExec := sm.NewBlockExecutor(h.stateDB, h.logger, proxyApp, types.MockMempool{}, types.MockEvidencePool{})
var err error
state, err = blockExec.ApplyBlock(state, meta.BlockID, block)
if err != nil {
return sm.State{}, err
}
h.nBlocks += 1
return h.state.AppHash, nil
return state, nil
}
func (h *Handshaker) checkAppHash(appHash []byte) error {
if !bytes.Equal(h.state.AppHash, appHash) {
panic(errors.New(cmn.Fmt("Tendermint state.AppHash does not match AppHash after replay. Got %X, expected %X", appHash, h.state.AppHash)).Error())
return nil
func checkAppHash(state sm.State, appHash []byte) error {
if !bytes.Equal(state.AppHash, appHash) {
panic(fmt.Errorf("Tendermint state.AppHash does not match AppHash after replay. Got %X, expected %X", appHash, state.AppHash).Error())
}
return nil
}
@@ -388,7 +398,10 @@ func newMockProxyApp(appHash []byte, abciResponses *sm.ABCIResponses) proxy.AppC
abciResponses: abciResponses,
})
cli, _ := clientCreator.NewABCIClient()
cli.Start()
err := cli.Start()
if err != nil {
panic(err)
}
return proxy.NewAppConnConsensus(cli)
}
@@ -400,21 +413,17 @@ type mockProxyApp struct {
abciResponses *sm.ABCIResponses
}
func (mock *mockProxyApp) DeliverTx(tx []byte) abci.Result {
func (mock *mockProxyApp) DeliverTx(tx []byte) abci.ResponseDeliverTx {
r := mock.abciResponses.DeliverTx[mock.txCount]
mock.txCount += 1
return abci.Result{
r.Code,
r.Data,
r.Log,
}
return *r
}
func (mock *mockProxyApp) EndBlock(height uint64) abci.ResponseEndBlock {
func (mock *mockProxyApp) EndBlock(req abci.RequestEndBlock) abci.ResponseEndBlock {
mock.txCount = 0
return mock.abciResponses.EndBlock
return *mock.abciResponses.EndBlock
}
func (mock *mockProxyApp) Commit() abci.Result {
return abci.NewResultOK(mock.appHash, "")
func (mock *mockProxyApp) Commit() abci.ResponseCommit {
return abci.ResponseCommit{Data: mock.appHash}
}
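The mock above exists so the handshaker can replay the final block without a live application: DeliverTx hands back recorded responses in order and Commit returns the expected app hash. A self-contained sketch of the same replay-from-recorded-responses idea, with simplified stand-in types (the real code uses abci.ResponseDeliverTx and sm.ABCIResponses):

```go
package main

import "fmt"

// responseDeliverTx is a simplified stand-in for abci.ResponseDeliverTx.
type responseDeliverTx struct {
	Code uint32
	Data []byte
}

// replayingApp hands back pre-recorded DeliverTx responses in order and a
// fixed app hash on Commit -- enough to re-run the last block without a
// live application.
type replayingApp struct {
	appHash   []byte
	responses []responseDeliverTx
	txCount   int
}

func (a *replayingApp) DeliverTx(tx []byte) responseDeliverTx {
	r := a.responses[a.txCount]
	a.txCount++
	return r
}

func (a *replayingApp) EndBlock() { a.txCount = 0 }

func (a *replayingApp) Commit() []byte { return a.appHash }

func main() {
	app := &replayingApp{
		appHash:   []byte{0xde, 0xad},
		responses: []responseDeliverTx{{Code: 0}, {Code: 0}},
	}
	fmt.Println(app.DeliverTx([]byte("tx1")).Code, app.DeliverTx([]byte("tx2")).Code)
	app.EndBlock()
	fmt.Printf("%X\n", app.Commit())
}
```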


@@ -2,12 +2,15 @@ package consensus
import (
"bufio"
"errors"
"context"
"fmt"
"io"
"os"
"strconv"
"strings"
"github.com/pkg/errors"
bc "github.com/tendermint/tendermint/blockchain"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/proxy"
@@ -15,6 +18,12 @@ import (
"github.com/tendermint/tendermint/types"
cmn "github.com/tendermint/tmlibs/common"
dbm "github.com/tendermint/tmlibs/db"
"github.com/tendermint/tmlibs/log"
)
const (
// event bus subscriber
subscriber = "replay-file"
)
//--------------------------------------------------------
@@ -41,24 +50,39 @@ func (cs *ConsensusState) ReplayFile(file string, console bool) error {
cs.startForReplay()
// ensure all new step events are regenerated as expected
newStepCh := subscribeToEvent(cs.evsw, "replay-test", types.EventStringNewRoundStep(), 1)
newStepCh := make(chan interface{}, 1)
ctx := context.Background()
err := cs.eventBus.Subscribe(ctx, subscriber, types.EventQueryNewRoundStep, newStepCh)
if err != nil {
return errors.Errorf("failed to subscribe %s to %v", subscriber, types.EventQueryNewRoundStep)
}
defer cs.eventBus.Unsubscribe(ctx, subscriber, types.EventQueryNewRoundStep)
// just open the file for reading, no need to use wal
fp, err := os.OpenFile(file, os.O_RDONLY, 0666)
fp, err := os.OpenFile(file, os.O_RDONLY, 0600)
if err != nil {
return err
}
pb := newPlayback(file, fp, cs, cs.state.Copy())
defer pb.fp.Close()
defer pb.fp.Close() // nolint: errcheck
var nextN int // apply N msgs in a row
for pb.scanner.Scan() {
var msg *TimedWALMessage
for {
if nextN == 0 && console {
nextN = pb.replayConsoleLoop()
}
if err := pb.cs.readReplayMessage(pb.scanner.Bytes(), newStepCh); err != nil {
msg, err = pb.dec.Decode()
if err == io.EOF {
return nil
} else if err != nil {
return err
}
if err := pb.cs.readReplayMessage(msg, newStepCh); err != nil {
return err
}
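The new replay loop above is the standard decode-until-EOF pattern around WALDecoder: io.EOF is the normal termination, anything else aborts. A self-contained sketch of the same loop shape, with a toy decoder standing in for the WAL (the message type is irrelevant to the control flow):

```go
package main

import (
	"fmt"
	"io"
)

// decoder mirrors the shape of the WALDecoder used above: Decode returns the
// next message, or io.EOF once the file is exhausted.
type decoder interface {
	Decode() (string, error)
}

type sliceDecoder struct {
	msgs []string
	i    int
}

func (d *sliceDecoder) Decode() (string, error) {
	if d.i >= len(d.msgs) {
		return "", io.EOF
	}
	m := d.msgs[d.i]
	d.i++
	return m, nil
}

// drain shows the decode-until-EOF pattern: EOF is the normal exit, any other
// error aborts the replay.
func drain(d decoder, apply func(string) error) error {
	for {
		msg, err := d.Decode()
		if err == io.EOF {
			return nil
		} else if err != nil {
			return err
		}
		if err := apply(msg); err != nil {
			return err
		}
	}
}

func main() {
	d := &sliceDecoder{msgs: []string{"propose", "prevote", "precommit"}}
	_ = drain(d, func(m string) error { fmt.Println("replaying", m); return nil })
}
```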
@@ -76,48 +100,57 @@ func (cs *ConsensusState) ReplayFile(file string, console bool) error {
type playback struct {
cs *ConsensusState
fp *os.File
scanner *bufio.Scanner
count int // how many lines/msgs into the file are we
fp *os.File
dec *WALDecoder
count int // how many lines/msgs into the file are we
// replays can be reset to beginning
fileName string // so we can close/reopen the file
genesisState *sm.State // so the replay session knows where to restart from
fileName string // so we can close/reopen the file
genesisState sm.State // so the replay session knows where to restart from
}
func newPlayback(fileName string, fp *os.File, cs *ConsensusState, genState *sm.State) *playback {
func newPlayback(fileName string, fp *os.File, cs *ConsensusState, genState sm.State) *playback {
return &playback{
cs: cs,
fp: fp,
fileName: fileName,
genesisState: genState,
scanner: bufio.NewScanner(fp),
dec: NewWALDecoder(fp),
}
}
// go back count steps by resetting the state and running (pb.count - count) steps
func (pb *playback) replayReset(count int, newStepCh chan interface{}) error {
pb.cs.Stop()
pb.cs.Wait()
newCS := NewConsensusState(pb.cs.config, pb.genesisState.Copy(), pb.cs.proxyAppConn, pb.cs.blockStore, pb.cs.mempool)
newCS.SetEventSwitch(pb.cs.evsw)
newCS := NewConsensusState(pb.cs.config, pb.genesisState.Copy(), pb.cs.blockExec,
pb.cs.blockStore, pb.cs.mempool, pb.cs.evpool)
newCS.SetEventBus(pb.cs.eventBus)
newCS.startForReplay()
pb.fp.Close()
fp, err := os.OpenFile(pb.fileName, os.O_RDONLY, 0666)
if err := pb.fp.Close(); err != nil {
return err
}
fp, err := os.OpenFile(pb.fileName, os.O_RDONLY, 0600)
if err != nil {
return err
}
pb.fp = fp
pb.scanner = bufio.NewScanner(fp)
pb.dec = NewWALDecoder(fp)
count = pb.count - count
fmt.Printf("Reseting from %d to %d\n", pb.count, count)
pb.count = 0
pb.cs = newCS
for i := 0; pb.scanner.Scan() && i < count; i++ {
if err := pb.cs.readReplayMessage(pb.scanner.Bytes(), newStepCh); err != nil {
var msg *TimedWALMessage
for i := 0; i < count; i++ {
msg, err = pb.dec.Decode()
if err == io.EOF {
return nil
} else if err != nil {
return err
}
if err := pb.cs.readReplayMessage(msg, newStepCh); err != nil {
return err
}
pb.count += 1
@@ -180,10 +213,20 @@ func (pb *playback) replayConsoleLoop() int {
// NOTE: "back" is not supported in the state machine design,
// so we restart and replay up to
ctx := context.Background()
// ensure all new step events are regenerated as expected
newStepCh := subscribeToEvent(pb.cs.evsw, "replay-test", types.EventStringNewRoundStep(), 1)
newStepCh := make(chan interface{}, 1)
err := pb.cs.eventBus.Subscribe(ctx, subscriber, types.EventQueryNewRoundStep, newStepCh)
if err != nil {
cmn.Exit(fmt.Sprintf("failed to subscribe %s to %v", subscriber, types.EventQueryNewRoundStep))
}
defer pb.cs.eventBus.Unsubscribe(ctx, subscriber, types.EventQueryNewRoundStep)
if len(tokens) == 1 {
pb.replayReset(1, newStepCh)
if err := pb.replayReset(1, newStepCh); err != nil {
pb.cs.Logger.Error("Replay reset error", "err", err)
}
} else {
i, err := strconv.Atoi(tokens[1])
if err != nil {
@@ -191,7 +234,9 @@ func (pb *playback) replayConsoleLoop() int {
} else if i > pb.count {
fmt.Printf("argument to back must not be larger than the current count (%d)\n", pb.count)
} else {
pb.replayReset(i, newStepCh)
if err := pb.replayReset(i, newStepCh); err != nil {
pb.cs.Logger.Error("Replay reset error", "err", err)
}
}
}
@@ -235,30 +280,37 @@ func (pb *playback) replayConsoleLoop() int {
// convenience for replay mode
func newConsensusStateForReplay(config cfg.BaseConfig, csConfig *cfg.ConsensusConfig) *ConsensusState {
dbType := dbm.DBBackendType(config.DBBackend)
// Get BlockStore
blockStoreDB := dbm.NewDB("blockstore", config.DBBackend, config.DBDir())
blockStoreDB := dbm.NewDB("blockstore", dbType, config.DBDir())
blockStore := bc.NewBlockStore(blockStoreDB)
// Get State
stateDB := dbm.NewDB("state", config.DBBackend, config.DBDir())
state := sm.MakeGenesisStateFromFile(stateDB, config.GenesisFile())
stateDB := dbm.NewDB("state", dbType, config.DBDir())
state, err := sm.MakeGenesisStateFromFile(config.GenesisFile())
if err != nil {
cmn.Exit(err.Error())
}
// Create proxyAppConn connection (consensus, mempool, query)
clientCreator := proxy.DefaultClientCreator(config.ProxyApp, config.ABCI, config.DBDir())
proxyApp := proxy.NewAppConns(clientCreator, NewHandshaker(state, blockStore))
_, err := proxyApp.Start()
proxyApp := proxy.NewAppConns(clientCreator, NewHandshaker(stateDB, state, blockStore))
err = proxyApp.Start()
if err != nil {
cmn.Exit(cmn.Fmt("Error starting proxy app conns: %v", err))
}
// Make event switch
eventSwitch := types.NewEventSwitch()
if _, err := eventSwitch.Start(); err != nil {
cmn.Exit(cmn.Fmt("Failed to start event switch: %v", err))
eventBus := types.NewEventBus()
if err := eventBus.Start(); err != nil {
cmn.Exit(cmn.Fmt("Failed to start event bus: %v", err))
}
consensusState := NewConsensusState(csConfig, state.Copy(), proxyApp.Consensus(), blockStore, types.MockMempool{})
mempool, evpool := types.MockMempool{}, types.MockEvidencePool{}
blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyApp.Consensus(), mempool, evpool)
consensusState.SetEventSwitch(eventSwitch)
consensusState := NewConsensusState(csConfig, state.Copy(), blockExec,
blockStore, mempool, evpool)
consensusState.SetEventBus(eventBus)
return consensusState
}


@@ -2,19 +2,24 @@ package consensus
import (
"bytes"
"context"
"errors"
"fmt"
"io"
"io/ioutil"
"os"
"path"
"strings"
"runtime"
"testing"
"time"
"github.com/stretchr/testify/require"
"github.com/tendermint/abci/example/dummy"
abci "github.com/tendermint/abci/types"
crypto "github.com/tendermint/go-crypto"
wire "github.com/tendermint/go-wire"
auto "github.com/tendermint/tmlibs/autofile"
cmn "github.com/tendermint/tmlibs/common"
dbm "github.com/tendermint/tmlibs/db"
@@ -25,8 +30,10 @@ import (
"github.com/tendermint/tmlibs/log"
)
var consensusReplayConfig *cfg.Config
func init() {
config = ResetConfig("consensus_replay_test")
consensusReplayConfig = ResetConfig("consensus_replay_test")
}
// These tests ensure we can always recover from failure at any part of the consensus process.
@@ -37,14 +44,6 @@ func init() {
// the `Handshake Tests` are for failures in applying the block.
// With the help of the WAL, we can recover from it all!
// NOTE: Files in this dir are generated by running the `build.sh` therein.
// It's a simple way to generate wals for a single block, or multiple blocks, with random transactions,
// and different part sizes. The output is not deterministic, and the stepChanges may need to be adjusted
// after running it (eg. sometimes small_block2 will have 5 block parts, sometimes 6).
// It should only have to be re-run if there is some breaking change to the consensus data structures (eg. blocks, votes)
// or to the behaviour of the app (eg. computes app hash differently)
var data_dir = path.Join(cmn.GoPath, "src/github.com/tendermint/tendermint/consensus", "test_data")
//------------------------------------------------------------------------------------------
// WAL Tests
@@ -52,223 +51,216 @@ var data_dir = path.Join(cmn.GoPath, "src/github.com/tendermint/tendermint/conse
// and which ones we need the wal for - then we'd also be able to only flush the
// wal writer when we need to, instead of with every message.
// the priv validator changes step at these lines for a block with 1 val and 1 part
var baseStepChanges = []int{3, 6, 8}
func startNewConsensusStateAndWaitForBlock(t *testing.T, lastBlockHeight int64, blockDB dbm.DB, stateDB dbm.DB) {
logger := log.TestingLogger()
state, _ := sm.LoadStateFromDBOrGenesisFile(stateDB, consensusReplayConfig.GenesisFile())
privValidator := loadPrivValidator(consensusReplayConfig)
cs := newConsensusStateWithConfigAndBlockStore(consensusReplayConfig, state, privValidator, dummy.NewDummyApplication(), blockDB)
cs.SetLogger(logger)
// test recovery from each line in each testCase
var testCases = []*testCase{
newTestCase("empty_block", baseStepChanges), // empty block (has 1 block part)
newTestCase("small_block1", baseStepChanges), // small block with txs in 1 block part
newTestCase("small_block2", []int{3, 11, 13}), // small block with txs across 6 smaller block parts
}
bytes, _ := ioutil.ReadFile(cs.config.WalFile())
// fmt.Printf("====== WAL: \n\r%s\n", bytes)
t.Logf("====== WAL: \n\r%s\n", bytes)
type testCase struct {
name string
log string //full cs wal
stepMap map[int]int8 // map lines of log to privval step
err := cs.Start()
require.NoError(t, err)
defer cs.Stop()
proposeLine int
prevoteLine int
precommitLine int
}
func newTestCase(name string, stepChanges []int) *testCase {
if len(stepChanges) != 3 {
panic(cmn.Fmt("a full wal has 3 step changes! Got array %v", stepChanges))
}
return &testCase{
name: name,
log: readWAL(path.Join(data_dir, name+".cswal")),
stepMap: newMapFromChanges(stepChanges),
proposeLine: stepChanges[0],
prevoteLine: stepChanges[1],
precommitLine: stepChanges[2],
}
}
func newMapFromChanges(changes []int) map[int]int8 {
changes = append(changes, changes[2]+1) // so we add the last step change to the map
m := make(map[int]int8)
var count int
for changeNum, nextChange := range changes {
for ; count < nextChange; count++ {
m[count] = int8(changeNum)
}
}
return m
}
func readWAL(p string) string {
b, err := ioutil.ReadFile(p)
if err != nil {
panic(err)
}
return string(b)
}
func writeWAL(walMsgs string) string {
tempDir := os.TempDir()
walDir := path.Join(tempDir, "/wal"+cmn.RandStr(12))
walFile := path.Join(walDir, "wal")
// Create WAL directory
err := cmn.EnsureDir(walDir, 0700)
if err != nil {
panic(err)
}
// Write the needed WAL to file
err = cmn.WriteFile(walFile, []byte(walMsgs), 0600)
if err != nil {
panic(err)
}
return walFile
}
func waitForBlock(newBlockCh chan interface{}, thisCase *testCase, i int) {
after := time.After(time.Second * 10)
// This is just a signal that we haven't halted; it's not something contained
// in the WAL itself. Assuming the consensus state is running, replay of any
// WAL, including the empty one, should eventually be followed by a new
// block, or else something is wrong.
newBlockCh := make(chan interface{}, 1)
err = cs.eventBus.Subscribe(context.Background(), testSubscriber, types.EventQueryNewBlock, newBlockCh)
require.NoError(t, err)
select {
case <-newBlockCh:
case <-after:
panic(cmn.Fmt("Timed out waiting for new block for case '%s' line %d", thisCase.name, i))
case <-time.After(60 * time.Second):
t.Fatalf("Timed out waiting for new block (see trace above)")
}
}
func runReplayTest(t *testing.T, cs *ConsensusState, walFile string, newBlockCh chan interface{},
thisCase *testCase, i int) {
func sendTxs(cs *ConsensusState, ctx context.Context) {
for i := 0; i < 256; i++ {
select {
case <-ctx.Done():
return
default:
tx := []byte{byte(i)}
cs.mempool.CheckTx(tx, nil)
i++
}
}
}
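sendTxs above is the usual context-cancellable producer: it keeps feeding the mempool until the test cancels the context (or the fixed 256-tx budget runs out). A self-contained sketch of that shape, with checkTx standing in for cs.mempool.CheckTx:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// checkTx stands in for cs.mempool.CheckTx; only the loop shape matters here.
func checkTx(tx []byte) { _ = tx }

// sendUntilCancelled keeps submitting transactions until the context is
// cancelled or the fixed budget runs out, mirroring sendTxs above.
func sendUntilCancelled(ctx context.Context) int {
	sent := 0
	for i := 0; i < 256; i++ {
		select {
		case <-ctx.Done():
			return sent
		default:
			checkTx([]byte{byte(i)})
			sent++
		}
	}
	return sent
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	go func() {
		time.Sleep(time.Millisecond)
		cancel()
	}()
	fmt.Println("sent", sendUntilCancelled(ctx), "txs")
}
```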
cs.config.SetWalFile(walFile)
started, err := cs.Start()
if err != nil {
t.Fatalf("Cannot start consensus: %v", err)
// TestWALCrash uses crashing WAL to test we can recover from any WAL failure.
func TestWALCrash(t *testing.T) {
testCases := []struct {
name string
initFn func(dbm.DB, *ConsensusState, context.Context)
heightToStop int64
}{
{"empty block",
func(stateDB dbm.DB, cs *ConsensusState, ctx context.Context) {},
1},
{"block with a smaller part size",
func(stateDB dbm.DB, cs *ConsensusState, ctx context.Context) {
// XXX: is there a better way to change BlockPartSizeBytes?
cs.state.ConsensusParams.BlockPartSizeBytes = 512
sm.SaveState(stateDB, cs.state)
go sendTxs(cs, ctx)
},
1},
{"many non-empty blocks",
func(stateDB dbm.DB, cs *ConsensusState, ctx context.Context) {
go sendTxs(cs, ctx)
},
3},
}
if !started {
t.Error("Consensus did not start")
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
crashWALandCheckLiveness(t, tc.initFn, tc.heightToStop)
})
}
// Wait to make a new block.
// This is just a signal that we haven't halted; its not something contained in the WAL itself.
// Assuming the consensus state is running, replay of any WAL, including the empty one,
// should eventually be followed by a new block, or else something is wrong
waitForBlock(newBlockCh, thisCase, i)
cs.evsw.Stop()
cs.Stop()
}
func crashWALandCheckLiveness(t *testing.T, initFn func(dbm.DB, *ConsensusState, context.Context), heightToStop int64) {
walPaniced := make(chan error)
crashingWal := &crashingWAL{panicCh: walPaniced, heightToStop: heightToStop}
i := 1
LOOP:
for {
// fmt.Printf("====== LOOP %d\n", i)
t.Logf("====== LOOP %d\n", i)
// create consensus state from a clean slate
logger := log.NewNopLogger()
stateDB := dbm.NewMemDB()
state, _ := sm.MakeGenesisStateFromFile(consensusReplayConfig.GenesisFile())
privValidator := loadPrivValidator(consensusReplayConfig)
blockDB := dbm.NewMemDB()
cs := newConsensusStateWithConfigAndBlockStore(consensusReplayConfig, state, privValidator, dummy.NewDummyApplication(), blockDB)
cs.SetLogger(logger)
// start sending transactions
ctx, cancel := context.WithCancel(context.Background())
initFn(stateDB, cs, ctx)
// clean up WAL file from the previous iteration
walFile := cs.config.WalFile()
os.Remove(walFile)
// set crashing WAL
csWal, err := cs.OpenWAL(walFile)
require.NoError(t, err)
crashingWal.next = csWal
// reset the message counter
crashingWal.msgIndex = 1
cs.wal = crashingWal
// start consensus state
err = cs.Start()
require.NoError(t, err)
i++
select {
case <-newBlockCh:
default:
break LOOP
}
}
cs.Wait()
}
case err := <-walPaniced:
t.Logf("WAL paniced: %v", err)
func toPV(pv PrivValidator) *types.PrivValidator {
return pv.(*types.PrivValidator)
}
// make sure we can make blocks after a crash
startNewConsensusStateAndWaitForBlock(t, cs.Height, blockDB, stateDB)
func setupReplayTest(t *testing.T, thisCase *testCase, nLines int, crashAfter bool) (*ConsensusState, chan interface{}, string, string) {
t.Log("-------------------------------------")
t.Logf("Starting replay test %v (of %d lines of WAL). Crash after = %v", thisCase.name, nLines, crashAfter)
// stop consensus state and transactions sender (initFn)
cs.Stop()
cancel()
lineStep := nLines
if crashAfter {
lineStep -= 1
}
split := strings.Split(thisCase.log, "\n")
lastMsg := split[nLines]
// we write those lines up to (not including) one with the signature
walFile := writeWAL(strings.Join(split[:nLines], "\n") + "\n")
cs := fixedConsensusStateDummy()
// set the last step according to when we crashed vs the wal
toPV(cs.privValidator).LastHeight = 1 // first block
toPV(cs.privValidator).LastStep = thisCase.stepMap[lineStep]
t.Logf("[WARN] setupReplayTest LastStep=%v", toPV(cs.privValidator).LastStep)
newBlockCh := subscribeToEvent(cs.evsw, "tester", types.EventStringNewBlock(), 1)
return cs, newBlockCh, lastMsg, walFile
}
func readTimedWALMessage(t *testing.T, walMsg string) TimedWALMessage {
var err error
var msg TimedWALMessage
wire.ReadJSON(&msg, []byte(walMsg), &err)
if err != nil {
t.Fatalf("Error reading json data: %v", err)
}
return msg
}
//-----------------------------------------------
// Test the log at every iteration, and set the privVal last step
// as if the log was written after signing, before the crash
func TestWALCrashAfterWrite(t *testing.T) {
for _, thisCase := range testCases {
split := strings.Split(thisCase.log, "\n")
for i := 0; i < len(split)-1; i++ {
cs, newBlockCh, _, walFile := setupReplayTest(t, thisCase, i+1, true)
runReplayTest(t, cs, walFile, newBlockCh, thisCase, i+1)
// if we reached the required height, exit
if _, ok := err.(ReachedHeightToStopError); ok {
break LOOP
}
case <-time.After(10 * time.Second):
t.Fatal("WAL did not panic for 10 seconds (check the log)")
}
}
}
//-----------------------------------------------
// Test the log as if we crashed after signing but before writing.
// This relies on privValidator.LastSignature being set
// crashingWAL is a WAL which crashes or rather simulates a crash during Save
// (before and after). It remembers a message for which we last panicked
// (lastPanicedForMsgIndex), so we don't panic for it in subsequent iterations.
type crashingWAL struct {
next WAL
panicCh chan error
heightToStop int64
func TestWALCrashBeforeWritePropose(t *testing.T) {
for _, thisCase := range testCases {
lineNum := thisCase.proposeLine
// setup replay test where last message is a proposal
cs, newBlockCh, proposalMsg, walFile := setupReplayTest(t, thisCase, lineNum, false)
msg := readTimedWALMessage(t, proposalMsg)
proposal := msg.Msg.(msgInfo).Msg.(*ProposalMessage)
// Set LastSig
toPV(cs.privValidator).LastSignBytes = types.SignBytes(cs.state.ChainID, proposal.Proposal)
toPV(cs.privValidator).LastSignature = proposal.Proposal.Signature
runReplayTest(t, cs, walFile, newBlockCh, thisCase, lineNum)
msgIndex int // current message index
lastPanicedForMsgIndex int // last message for which we panicked
}
// WALWriteError indicates a WAL crash.
type WALWriteError struct {
msg string
}
func (e WALWriteError) Error() string {
return e.msg
}
// ReachedHeightToStopError indicates we've reached the required consensus
// height and may exit.
type ReachedHeightToStopError struct {
height int64
}
func (e ReachedHeightToStopError) Error() string {
return fmt.Sprintf("reached height to stop %d", e.height)
}
// Save simulates a WAL crash by sending an error to the panicCh and then
// exiting the cs.receiveRoutine.
func (w *crashingWAL) Save(m WALMessage) {
if endMsg, ok := m.(EndHeightMessage); ok {
if endMsg.Height == w.heightToStop {
w.panicCh <- ReachedHeightToStopError{endMsg.Height}
runtime.Goexit()
} else {
w.next.Save(m)
}
return
}
if w.msgIndex > w.lastPanicedForMsgIndex {
w.lastPanicedForMsgIndex = w.msgIndex
_, file, line, _ := runtime.Caller(1)
w.panicCh <- WALWriteError{fmt.Sprintf("failed to write %T to WAL (fileline: %s:%d)", m, file, line)}
runtime.Goexit()
} else {
w.msgIndex++
w.next.Save(m)
}
}
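crashingWAL is a fault-injection wrapper: it forwards Save calls to the real WAL but, once per message index, reports a simulated crash on panicCh (the real version also calls runtime.Goexit to stop the consensus goroutine, which this sketch omits). A self-contained sketch of the wrap-and-inject idea with stand-in types:

```go
package main

import (
	"errors"
	"fmt"
)

// writer is a stand-in for the WAL interface the test wraps; only Save matters
// for the fault-injection idea.
type writer interface {
	Save(msg string)
}

type memWAL struct{ msgs []string }

func (w *memWAL) Save(msg string) { w.msgs = append(w.msgs, msg) }

// faultyWAL forwards to the real writer but reports a simulated crash for one
// chosen message, the way crashingWAL signals on panicCh instead of actually
// crashing the test process.
type faultyWAL struct {
	next    writer
	crashAt int
	n       int
	crashCh chan error
}

func (w *faultyWAL) Save(msg string) {
	w.n++
	if w.n == w.crashAt {
		w.crashCh <- errors.New("simulated crash while writing " + msg)
		return
	}
	w.next.Save(msg)
}

func main() {
	crashCh := make(chan error, 1)
	wal := &faultyWAL{next: &memWAL{}, crashAt: 2, crashCh: crashCh}
	wal.Save("block part")
	wal.Save("vote") // simulated crash instead of a write
	fmt.Println(<-crashCh)
}
```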
func TestWALCrashBeforeWritePrevote(t *testing.T) {
for _, thisCase := range testCases {
testReplayCrashBeforeWriteVote(t, thisCase, thisCase.prevoteLine, types.EventStringCompleteProposal())
}
func (w *crashingWAL) Group() *auto.Group { return w.next.Group() }
func (w *crashingWAL) SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error) {
return w.next.SearchForEndHeight(height, options)
}
func TestWALCrashBeforeWritePrecommit(t *testing.T) {
for _, thisCase := range testCases {
testReplayCrashBeforeWriteVote(t, thisCase, thisCase.precommitLine, types.EventStringPolka())
}
}
func testReplayCrashBeforeWriteVote(t *testing.T, thisCase *testCase, lineNum int, eventString string) {
// setup replay test where last message is a vote
cs, newBlockCh, voteMsg, walFile := setupReplayTest(t, thisCase, lineNum, false)
types.AddListenerForEvent(cs.evsw, "tester", eventString, func(data types.TMEventData) {
msg := readTimedWALMessage(t, voteMsg)
vote := msg.Msg.(msgInfo).Msg.(*VoteMessage)
// Set LastSig
toPV(cs.privValidator).LastSignBytes = types.SignBytes(cs.state.ChainID, vote.Vote)
toPV(cs.privValidator).LastSignature = vote.Vote.Signature
})
runReplayTest(t, cs, walFile, newBlockCh, thisCase, lineNum)
}
func (w *crashingWAL) Start() error { return w.next.Start() }
func (w *crashingWAL) Stop() error { return w.next.Stop() }
func (w *crashingWAL) Wait() { w.next.Wait() }
//------------------------------------------------------------------------------------------
// Handshake Tests
var (
NUM_BLOCKS = 6 // number of blocks in the test_data/many_blocks.cswal
mempool = types.MockMempool{}
const (
NUM_BLOCKS = 6
)
testPartSize int
var (
mempool = types.MockMempool{}
evpool = types.MockEvidencePool{}
)
//---------------------------------------
@@ -307,40 +299,56 @@ func TestHandshakeReplayNone(t *testing.T) {
}
}
func tempWALWithData(data []byte) string {
walFile, err := ioutil.TempFile("", "wal")
if err != nil {
panic(fmt.Errorf("failed to create temp WAL file: %v", err))
}
_, err = walFile.Write(data)
if err != nil {
panic(fmt.Errorf("failed to write to temp WAL file: %v", err))
}
if err := walFile.Close(); err != nil {
panic(fmt.Errorf("failed to close temp WAL file: %v", err))
}
return walFile.Name()
}
// Make some blocks. Start a fresh app and apply nBlocks blocks. Then restart the app and sync it up with the remaining blocks
func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
config := ResetConfig("proxy_test_")
// copy the many_blocks file
walBody, err := cmn.ReadFile(path.Join(data_dir, "many_blocks.cswal"))
walBody, err := WALWithNBlocks(NUM_BLOCKS)
if err != nil {
t.Fatal(err)
}
walFile := writeWAL(string(walBody))
walFile := tempWALWithData(walBody)
config.Consensus.SetWalFile(walFile)
privVal := types.LoadPrivValidator(config.PrivValidatorFile())
testPartSize = config.Consensus.BlockPartSize
privVal := types.LoadPrivValidatorFS(config.PrivValidatorFile())
wal, err := NewWAL(walFile, false)
if err != nil {
t.Fatal(err)
}
wal.SetLogger(log.TestingLogger())
if _, err := wal.Start(); err != nil {
if err := wal.Start(); err != nil {
t.Fatal(err)
}
defer wal.Stop()
chain, commits, err := makeBlockchainFromWAL(wal)
if err != nil {
t.Fatalf(err.Error())
}
state, store := stateAndStore(config, privVal.PubKey)
stateDB, state, store := stateAndStore(config, privVal.GetPubKey())
store.chain = chain
store.commits = commits
// run the chain through state.ApplyBlock to build up the tendermint state
latestAppHash := buildTMStateFromChain(config, state, chain, mode)
state = buildTMStateFromChain(config, stateDB, state, chain, mode)
latestAppHash := state.AppHash
// make a new client creator
dummyApp := dummy.NewPersistentDummyApplication(path.Join(config.DBDir(), "2"))
@@ -349,19 +357,20 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
// run nBlocks against a new client to build up the app state.
// use a throwaway tendermint state
proxyApp := proxy.NewAppConns(clientCreator2, nil)
state, _ := stateAndStore(config, privVal.PubKey)
buildAppStateFromChain(proxyApp, state, chain, nBlocks, mode)
stateDB, state, _ := stateAndStore(config, privVal.GetPubKey())
buildAppStateFromChain(proxyApp, stateDB, state, chain, nBlocks, mode)
}
// now start the app using the handshake - it should sync
handshaker := NewHandshaker(state, store)
handshaker := NewHandshaker(stateDB, state, store)
proxyApp := proxy.NewAppConns(clientCreator2, handshaker)
if _, err := proxyApp.Start(); err != nil {
if err := proxyApp.Start(); err != nil {
t.Fatalf("Error starting proxy app connections: %v", err)
}
defer proxyApp.Stop()
// get the latest app hash from the app
res, err := proxyApp.Query().InfoSync()
res, err := proxyApp.Query().InfoSync(abci.RequestInfo{""})
if err != nil {
t.Fatal(err)
}
@@ -383,188 +392,208 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
}
}
func applyBlock(st *sm.State, blk *types.Block, proxyApp proxy.AppConns) {
err := st.ApplyBlock(nil, proxyApp.Consensus(), blk, blk.MakePartSet(testPartSize).Header(), mempool)
func applyBlock(stateDB dbm.DB, st sm.State, blk *types.Block, proxyApp proxy.AppConns) sm.State {
testPartSize := st.ConsensusParams.BlockPartSizeBytes
blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyApp.Consensus(), mempool, evpool)
blkID := types.BlockID{blk.Hash(), blk.MakePartSet(testPartSize).Header()}
newState, err := blockExec.ApplyBlock(st, blkID, blk)
if err != nil {
panic(err)
}
return newState
}
func buildAppStateFromChain(proxyApp proxy.AppConns,
state *sm.State, chain []*types.Block, nBlocks int, mode uint) {
func buildAppStateFromChain(proxyApp proxy.AppConns, stateDB dbm.DB,
state sm.State, chain []*types.Block, nBlocks int, mode uint) {
// start a new app without handshake, play nBlocks blocks
if _, err := proxyApp.Start(); err != nil {
if err := proxyApp.Start(); err != nil {
panic(err)
}
defer proxyApp.Stop()
validators := types.TM2PB.Validators(state.Validators)
// TODO: get the genesis bytes (https://github.com/tendermint/tendermint/issues/1224)
var genesisBytes []byte
if _, err := proxyApp.Consensus().InitChainSync(abci.RequestInitChain{validators, genesisBytes}); err != nil {
panic(err)
}
validators := types.TM2PB.Validators(state.Validators)
proxyApp.Consensus().InitChainSync(validators)
defer proxyApp.Stop()
switch mode {
case 0:
for i := 0; i < nBlocks; i++ {
block := chain[i]
applyBlock(state, block, proxyApp)
state = applyBlock(stateDB, state, block, proxyApp)
}
case 1, 2:
for i := 0; i < nBlocks-1; i++ {
block := chain[i]
applyBlock(state, block, proxyApp)
state = applyBlock(stateDB, state, block, proxyApp)
}
if mode == 2 {
// update the dummy height and apphash
// as if we ran commit but not
applyBlock(state, chain[nBlocks-1], proxyApp)
state = applyBlock(stateDB, state, chain[nBlocks-1], proxyApp)
}
}
}
func buildTMStateFromChain(config *cfg.Config, state *sm.State, chain []*types.Block, mode uint) []byte {
func buildTMStateFromChain(config *cfg.Config, stateDB dbm.DB, state sm.State, chain []*types.Block, mode uint) sm.State {
// run the whole chain against this client to build up the tendermint state
clientCreator := proxy.NewLocalClientCreator(dummy.NewPersistentDummyApplication(path.Join(config.DBDir(), "1")))
proxyApp := proxy.NewAppConns(clientCreator, nil) // sm.NewHandshaker(config, state, store, ReplayLastBlock))
if _, err := proxyApp.Start(); err != nil {
if err := proxyApp.Start(); err != nil {
panic(err)
}
defer proxyApp.Stop()
validators := types.TM2PB.Validators(state.Validators)
proxyApp.Consensus().InitChainSync(validators)
var latestAppHash []byte
// TODO: get the genesis bytes (https://github.com/tendermint/tendermint/issues/1224)
var genesisBytes []byte
if _, err := proxyApp.Consensus().InitChainSync(abci.RequestInitChain{validators, genesisBytes}); err != nil {
panic(err)
}
switch mode {
case 0:
// sync right up
for _, block := range chain {
applyBlock(state, block, proxyApp)
state = applyBlock(stateDB, state, block, proxyApp)
}
latestAppHash = state.AppHash
case 1, 2:
// sync up to the penultimate as if we stored the block.
// whether we commit or not depends on the appHash
for _, block := range chain[:len(chain)-1] {
applyBlock(state, block, proxyApp)
state = applyBlock(stateDB, state, block, proxyApp)
}
// apply the final block to a state copy so we can
// get the right next appHash but keep the state back
stateCopy := state.Copy()
applyBlock(stateCopy, chain[len(chain)-1], proxyApp)
latestAppHash = stateCopy.AppHash
applyBlock(stateDB, state, chain[len(chain)-1], proxyApp)
}
return latestAppHash
return state
}
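buildAppStateFromChain and buildTMStateFromChain drive the three handshake scenarios through the mode switch. As a reading aid only (based on the comments above, not an authoritative description of the test matrix), the modes amount to:

```go
package main

import "fmt"

// modeDescription restates, for illustration, how the two build helpers above
// treat each test mode before the handshake runs. This is a reading of the
// comments in the diff, not a spec of the test matrix.
func modeDescription(mode uint) string {
	switch mode {
	case 0:
		return "apply every block: app and state end fully synced"
	case 1:
		return "apply all but the last block: Commit for the final block never ran"
	case 2:
		return "also apply the final block to the app, but keep the saved state one block behind"
	default:
		return "unknown mode"
	}
}

func main() {
	for mode := uint(0); mode <= 2; mode++ {
		fmt.Printf("mode %d: %s\n", mode, modeDescription(mode))
	}
}
```

That is what lets the handshake tests exercise the different ReplayBlocks branches shown earlier.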
//--------------------------
// utils for making blocks
func makeBlockchainFromWAL(wal *WAL) ([]*types.Block, []*types.Commit, error) {
func makeBlockchainFromWAL(wal WAL) ([]*types.Block, []*types.Commit, error) {
// Search for height marker
gr, found, err := wal.group.Search("#ENDHEIGHT: ", makeHeightSearchFunc(0))
gr, found, err := wal.SearchForEndHeight(0, &WALSearchOptions{})
if err != nil {
return nil, nil, err
}
if !found {
return nil, nil, errors.New(cmn.Fmt("WAL does not contain height %d.", 1))
}
defer gr.Close()
defer gr.Close() // nolint: errcheck
// log.Notice("Build a blockchain by reading from the WAL")
var blockParts *types.PartSet
var blocks []*types.Block
var commits []*types.Commit
for {
line, err := gr.ReadLine()
if err != nil {
if err == io.EOF {
break
} else {
return nil, nil, err
}
}
piece, err := readPieceFromWAL([]byte(line))
if err != nil {
var thisBlockParts *types.PartSet
var thisBlockCommit *types.Commit
var height int64
dec := NewWALDecoder(gr)
for {
msg, err := dec.Decode()
if err == io.EOF {
break
} else if err != nil {
return nil, nil, err
}
piece := readPieceFromWAL(msg)
if piece == nil {
continue
}
switch p := piece.(type) {
case *types.PartSetHeader:
case EndHeightMessage:
// if it's not the first one, we have a full block
if blockParts != nil {
if thisBlockParts != nil {
var n int
block := wire.ReadBinary(&types.Block{}, blockParts.GetReader(), types.MaxBlockSize, &n, &err).(*types.Block)
block := wire.ReadBinary(&types.Block{}, thisBlockParts.GetReader(), 0, &n, &err).(*types.Block)
if err != nil {
panic(err)
}
if block.Height != height+1 {
panic(cmn.Fmt("read bad block from wal. got height %d, expected %d", block.Height, height+1))
}
commitHeight := thisBlockCommit.Precommits[0].Height
if commitHeight != height+1 {
panic(cmn.Fmt("commit doesnt match. got height %d, expected %d", commitHeight, height+1))
}
blocks = append(blocks, block)
commits = append(commits, thisBlockCommit)
height += 1
}
blockParts = types.NewPartSetFromHeader(*p)
case *types.PartSetHeader:
thisBlockParts = types.NewPartSetFromHeader(*p)
case *types.Part:
_, err := blockParts.AddPart(p, false)
_, err := thisBlockParts.AddPart(p, false)
if err != nil {
return nil, nil, err
}
case *types.Vote:
if p.Type == types.VoteTypePrecommit {
commit := &types.Commit{
thisBlockCommit = &types.Commit{
BlockID: p.BlockID,
Precommits: []*types.Vote{p},
}
commits = append(commits, commit)
}
}
}
// grab the last block too
var n int
block := wire.ReadBinary(&types.Block{}, blockParts.GetReader(), types.MaxBlockSize, &n, &err).(*types.Block)
block := wire.ReadBinary(&types.Block{}, thisBlockParts.GetReader(), 0, &n, &err).(*types.Block)
if err != nil {
panic(err)
}
if block.Height != height+1 {
panic(cmn.Fmt("read bad block from wal. got height %d, expected %d", block.Height, height+1))
}
commitHeight := thisBlockCommit.Precommits[0].Height
if commitHeight != height+1 {
panic(cmn.Fmt("commit doesnt match. got height %d, expected %d", commitHeight, height+1))
}
blocks = append(blocks, block)
commits = append(commits, thisBlockCommit)
return blocks, commits, nil
}
func readPieceFromWAL(msgBytes []byte) (interface{}, error) {
// Skip over empty and meta lines
if len(msgBytes) == 0 || msgBytes[0] == '#' {
return nil, nil
}
var err error
var msg TimedWALMessage
wire.ReadJSON(&msg, msgBytes, &err)
if err != nil {
fmt.Println("MsgBytes:", msgBytes, string(msgBytes))
return nil, fmt.Errorf("Error reading json data: %v", err)
}
func readPieceFromWAL(msg *TimedWALMessage) interface{} {
// for logging
switch m := msg.Msg.(type) {
case msgInfo:
switch msg := m.Msg.(type) {
case *ProposalMessage:
return &msg.Proposal.BlockPartsHeader, nil
return &msg.Proposal.BlockPartsHeader
case *BlockPartMessage:
return msg.Part, nil
return msg.Part
case *VoteMessage:
return msg.Vote, nil
return msg.Vote
}
case EndHeightMessage:
return m
}
return nil, nil
return nil
}
// fresh state and mock store
func stateAndStore(config *cfg.Config, pubKey crypto.PubKey) (*sm.State, *mockBlockStore) {
func stateAndStore(config *cfg.Config, pubKey crypto.PubKey) (dbm.DB, sm.State, *mockBlockStore) {
stateDB := dbm.NewMemDB()
state := sm.MakeGenesisStateFromFile(stateDB, config.GenesisFile())
state.SetLogger(log.TestingLogger().With("module", "state"))
store := NewMockBlockStore(config)
return state, store
state, _ := sm.MakeGenesisStateFromFile(config.GenesisFile())
store := NewMockBlockStore(config, state.ConsensusParams)
return stateDB, state, store
}
//----------------------------------
@@ -572,30 +601,31 @@ func stateAndStore(config *cfg.Config, pubKey crypto.PubKey) (*sm.State, *mockBl
type mockBlockStore struct {
config *cfg.Config
params types.ConsensusParams
chain []*types.Block
commits []*types.Commit
}
// TODO: NewBlockStore(db.NewMemDB) ...
func NewMockBlockStore(config *cfg.Config) *mockBlockStore {
return &mockBlockStore{config, nil, nil}
func NewMockBlockStore(config *cfg.Config, params types.ConsensusParams) *mockBlockStore {
return &mockBlockStore{config, params, nil, nil}
}
func (bs *mockBlockStore) Height() int { return len(bs.chain) }
func (bs *mockBlockStore) LoadBlock(height int) *types.Block { return bs.chain[height-1] }
func (bs *mockBlockStore) LoadBlockMeta(height int) *types.BlockMeta {
func (bs *mockBlockStore) Height() int64 { return int64(len(bs.chain)) }
func (bs *mockBlockStore) LoadBlock(height int64) *types.Block { return bs.chain[height-1] }
func (bs *mockBlockStore) LoadBlockMeta(height int64) *types.BlockMeta {
block := bs.chain[height-1]
return &types.BlockMeta{
BlockID: types.BlockID{block.Hash(), block.MakePartSet(bs.config.Consensus.BlockPartSize).Header()},
BlockID: types.BlockID{block.Hash(), block.MakePartSet(bs.params.BlockPartSizeBytes).Header()},
Header: block.Header,
}
}
func (bs *mockBlockStore) LoadBlockPart(height int, index int) *types.Part { return nil }
func (bs *mockBlockStore) LoadBlockPart(height int64, index int) *types.Part { return nil }
func (bs *mockBlockStore) SaveBlock(block *types.Block, blockParts *types.PartSet, seenCommit *types.Commit) {
}
func (bs *mockBlockStore) LoadBlockCommit(height int) *types.Commit {
func (bs *mockBlockStore) LoadBlockCommit(height int64) *types.Commit {
return bs.commits[height-1]
}
func (bs *mockBlockStore) LoadSeenCommit(height int) *types.Commit {
func (bs *mockBlockStore) LoadSeenCommit(height int64) *types.Commit {
return bs.commits[height-1]
}
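mockBlockStore fakes only the handful of block-store methods the handshaker touches, now keyed by int64 heights. A self-contained sketch of the same pattern — a trimmed stand-in interface plus a compile-time assertion that keeps the double honest (the real interface in the codebase is larger and lives elsewhere):

```go
package main

import "fmt"

// blockStore is a trimmed stand-in for the store interface the handshaker
// needs; the real interface has more methods (block meta, commits, SaveBlock, ...).
type blockStore interface {
	Height() int64
	LoadBlock(height int64) string
}

// memStore serves a pre-built chain keyed by height, which is exactly the role
// mockBlockStore plays in the tests above.
type memStore struct{ chain []string }

func (s *memStore) Height() int64                 { return int64(len(s.chain)) }
func (s *memStore) LoadBlock(height int64) string { return s.chain[height-1] }

// Compile-time check that the test double still satisfies the interface.
var _ blockStore = (*memStore)(nil)

func main() {
	s := &memStore{chain: []string{"b1", "b2", "b3"}}
	fmt.Println(s.Height(), s.LoadBlock(2))
}
```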

File diff suppressed because it is too large


@@ -2,12 +2,16 @@ package consensus
import (
"bytes"
"context"
"fmt"
"testing"
"time"
cstypes "github.com/tendermint/tendermint/consensus/types"
"github.com/tendermint/tendermint/types"
. "github.com/tendermint/tmlibs/common"
cmn "github.com/tendermint/tmlibs/common"
"github.com/tendermint/tmlibs/log"
tmpubsub "github.com/tendermint/tmlibs/pubsub"
)
func init() {
@@ -51,12 +55,12 @@ x * TestHalt1 - if we see +2/3 precommits after timing out into new round, we sh
//----------------------------------------------------------------------------------------------------
// ProposeSuite
func TestProposerSelection0(t *testing.T) {
func TestStateProposerSelection0(t *testing.T) {
cs1, vss := randConsensusState(4)
height, round := cs1.Height, cs1.Round
newRoundCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringNewRound(), 1)
proposalCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringCompleteProposal(), 1)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
startTestRound(cs1, height, round)
@@ -79,16 +83,16 @@ func TestProposerSelection0(t *testing.T) {
<-newRoundCh
prop = cs1.GetRoundState().Validators.GetProposer()
if !bytes.Equal(prop.Address, vss[1].Address) {
panic(Fmt("expected proposer to be validator %d. Got %X", 1, prop.Address))
if !bytes.Equal(prop.Address, vss[1].GetAddress()) {
panic(cmn.Fmt("expected proposer to be validator %d. Got %X", 1, prop.Address))
}
}
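These tests swap the old subscribeToEvent calls for a subscribe helper over the event bus. The helper itself lives elsewhere in the package and is not part of this diff; judging from the eventBus.Subscribe calls shown earlier, it presumably looks roughly like the sketch below (the buffer size and the exact panic message are assumptions):

```go
// Assumed shape of the helper, inside package consensus with the imports
// already shown above (context, fmt, types, tmpubsub); not part of this diff.
func subscribe(eventBus *types.EventBus, q tmpubsub.Query) <-chan interface{} {
	out := make(chan interface{}, 1)
	err := eventBus.Subscribe(context.Background(), testSubscriber, q, out)
	if err != nil {
		panic(fmt.Sprintf("failed to subscribe %s to %v", testSubscriber, q))
	}
	return out
}
```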
// Now let's do it all again, but starting from round 2 instead of 0
func TestProposerSelection2(t *testing.T) {
func TestStateProposerSelection2(t *testing.T) {
cs1, vss := randConsensusState(4) // test needs more work for more than 3 validators
newRoundCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringNewRound(), 1)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
// this time we jump in at round 2
incrementRound(vss[1:]...)
@@ -100,8 +104,8 @@ func TestProposerSelection2(t *testing.T) {
// everyone just votes nil. we get a new proposer each round
for i := 0; i < len(vss); i++ {
prop := cs1.GetRoundState().Validators.GetProposer()
if !bytes.Equal(prop.Address, vss[(i+2)%len(vss)].Address) {
panic(Fmt("expected proposer to be validator %d. Got %X", (i+2)%len(vss), prop.Address))
if !bytes.Equal(prop.Address, vss[(i+2)%len(vss)].GetAddress()) {
panic(cmn.Fmt("expected proposer to be validator %d. Got %X", (i+2)%len(vss), prop.Address))
}
rs := cs1.GetRoundState()
@@ -114,13 +118,13 @@ func TestProposerSelection2(t *testing.T) {
}
// a non-validator should timeout into the prevote round
func TestEnterProposeNoPrivValidator(t *testing.T) {
func TestStateEnterProposeNoPrivValidator(t *testing.T) {
cs, _ := randConsensusState(1)
cs.SetPrivValidator(nil)
height, round := cs.Height, cs.Round
// Listen for propose timeout event
timeoutCh := subscribeToEvent(cs.evsw, "tester", types.EventStringTimeoutPropose(), 1)
timeoutCh := subscribe(cs.eventBus, types.EventQueryTimeoutPropose)
startTestRound(cs, height, round)
@@ -139,14 +143,14 @@ func TestEnterProposeNoPrivValidator(t *testing.T) {
}
// a validator should not timeout of the prevote round (TODO: unless the block is really big!)
func TestEnterProposeYesPrivValidator(t *testing.T) {
func TestStateEnterProposeYesPrivValidator(t *testing.T) {
cs, _ := randConsensusState(1)
height, round := cs.Height, cs.Round
// Listen for propose timeout event
timeoutCh := subscribeToEvent(cs.evsw, "tester", types.EventStringTimeoutPropose(), 1)
proposalCh := subscribeToEvent(cs.evsw, "tester", types.EventStringCompleteProposal(), 1)
timeoutCh := subscribe(cs.eventBus, types.EventQueryTimeoutPropose)
proposalCh := subscribe(cs.eventBus, types.EventQueryCompleteProposal)
cs.enterNewRound(height, round)
cs.startRoutines(3)
@@ -175,15 +179,15 @@ func TestEnterProposeYesPrivValidator(t *testing.T) {
}
}
func TestBadProposal(t *testing.T) {
func TestStateBadProposal(t *testing.T) {
cs1, vss := randConsensusState(2)
height, round := cs1.Height, cs1.Round
vs2 := vss[1]
partSize := config.Consensus.BlockPartSize
partSize := cs1.state.ConsensusParams.BlockPartSizeBytes
proposalCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringCompleteProposal(), 1)
voteCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringVote(), 1)
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
voteCh := subscribe(cs1.eventBus, types.EventQueryVote)
propBlock, _ := cs1.createProposalBlock() //changeProposer(t, cs1, vs2)
@@ -200,12 +204,14 @@ func TestBadProposal(t *testing.T) {
propBlock.AppHash = stateHash
propBlockParts := propBlock.MakePartSet(partSize)
proposal := types.NewProposal(vs2.Height, round, propBlockParts.Header(), -1, types.BlockID{})
if err := vs2.SignProposal(config.ChainID, proposal); err != nil {
if err := vs2.SignProposal(config.ChainID(), proposal); err != nil {
t.Fatal("failed to sign bad proposal", err)
}
// set the proposal block
cs1.SetProposalAndBlock(proposal, propBlock, propBlockParts, "some peer")
if err := cs1.SetProposalAndBlock(proposal, propBlock, propBlockParts, "some peer"); err != nil {
t.Fatal(err)
}
// start the machine
startTestRound(cs1, height, round)
@@ -233,13 +239,21 @@ func TestBadProposal(t *testing.T) {
// FullRoundSuite
// propose, prevote, and precommit a block
func TestFullRound1(t *testing.T) {
func TestStateFullRound1(t *testing.T) {
cs, vss := randConsensusState(1)
height, round := cs.Height, cs.Round
voteCh := subscribeToEvent(cs.evsw, "tester", types.EventStringVote(), 0)
propCh := subscribeToEvent(cs.evsw, "tester", types.EventStringCompleteProposal(), 1)
newRoundCh := subscribeToEvent(cs.evsw, "tester", types.EventStringNewRound(), 1)
// NOTE: buffer capacity of 0 ensures we can validate prevote and last commit
// before consensus can move to the next height (and cause a race condition)
cs.eventBus.Stop()
eventBus := types.NewEventBusWithBufferCapacity(0)
eventBus.SetLogger(log.TestingLogger().With("module", "events"))
cs.SetEventBus(eventBus)
eventBus.Start()
voteCh := subscribe(cs.eventBus, types.EventQueryVote)
propCh := subscribe(cs.eventBus, types.EventQueryCompleteProposal)
newRoundCh := subscribe(cs.eventBus, types.EventQueryNewRound)
startTestRound(cs, height, round)
@@ -247,11 +261,9 @@ func TestFullRound1(t *testing.T) {
// grab proposal
re := <-propCh
propBlockHash := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState).ProposalBlock.Hash()
propBlockHash := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState).ProposalBlock.Hash()
<-voteCh // wait for prevote
// NOTE: voteChan cap of 0 ensures we can complete this
// before consensus can move to the next height (and cause a race condition)
validatePrevote(t, cs, round, vss[0], propBlockHash)
<-voteCh // wait for precommit
@@ -263,11 +275,11 @@ func TestFullRound1(t *testing.T) {
}
// nil is proposed, so prevote and precommit nil
func TestFullRoundNil(t *testing.T) {
func TestStateFullRoundNil(t *testing.T) {
cs, vss := randConsensusState(1)
height, round := cs.Height, cs.Round
voteCh := subscribeToEvent(cs.evsw, "tester", types.EventStringVote(), 1)
voteCh := subscribe(cs.eventBus, types.EventQueryVote)
cs.enterPrevote(height, round)
cs.startRoutines(4)
@@ -281,13 +293,13 @@ func TestFullRoundNil(t *testing.T) {
// run through propose, prevote, precommit commit with two validators
// where the first validator has to wait for votes from the second
func TestFullRound2(t *testing.T) {
func TestStateFullRound2(t *testing.T) {
cs1, vss := randConsensusState(2)
vs2 := vss[1]
height, round := cs1.Height, cs1.Round
voteCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringVote(), 1)
newBlockCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringNewBlock(), 1)
voteCh := subscribe(cs1.eventBus, types.EventQueryVote)
newBlockCh := subscribe(cs1.eventBus, types.EventQueryNewBlock)
// start round and wait for propose and prevote
startTestRound(cs1, height, round)
@@ -322,18 +334,18 @@ func TestFullRound2(t *testing.T) {
// two validators, 4 rounds.
// two vals take turns proposing. val1 locks on first one, precommits nil on everything else
func TestLockNoPOL(t *testing.T) {
func TestStateLockNoPOL(t *testing.T) {
cs1, vss := randConsensusState(2)
vs2 := vss[1]
height := cs1.Height
partSize := config.Consensus.BlockPartSize
partSize := cs1.state.ConsensusParams.BlockPartSizeBytes
timeoutProposeCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutPropose(), 1)
timeoutWaitCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutWait(), 1)
voteCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringVote(), 1)
proposalCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringCompleteProposal(), 1)
newRoundCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringNewRound(), 1)
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
voteCh := subscribe(cs1.eventBus, types.EventQueryVote)
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
/*
Round1 (cs1, B) // B B // B B2
@@ -344,7 +356,7 @@ func TestLockNoPOL(t *testing.T) {
cs1.startRoutines(0)
re := <-proposalCh
rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
theBlockHash := rs.ProposalBlock.Hash()
<-voteCh // prevote
@@ -384,7 +396,7 @@ func TestLockNoPOL(t *testing.T) {
// now we're on a new round and not the proposer, so wait for timeout
re = <-timeoutProposeCh
rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
if rs.ProposalBlock != nil {
panic("Expected proposal block to be nil")
@@ -428,11 +440,11 @@ func TestLockNoPOL(t *testing.T) {
incrementRound(vs2)
re = <-proposalCh
rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
// now we're on a new round and are the proposer
if !bytes.Equal(rs.ProposalBlock.Hash(), rs.LockedBlock.Hash()) {
panic(Fmt("Expected proposal block to be locked block. Got %v, Expected %v", rs.ProposalBlock, rs.LockedBlock))
panic(cmn.Fmt("Expected proposal block to be locked block. Got %v, Expected %v", rs.ProposalBlock, rs.LockedBlock))
}
<-voteCh // prevote
@@ -468,7 +480,9 @@ func TestLockNoPOL(t *testing.T) {
// now we're on a new round and not the proposer
// so set the proposal block
cs1.SetProposalAndBlock(prop, propBlock, propBlock.MakePartSet(partSize), "")
if err := cs1.SetProposalAndBlock(prop, propBlock, propBlock.MakePartSet(partSize), ""); err != nil {
t.Fatal(err)
}
<-proposalCh
<-voteCh // prevote
@@ -489,20 +503,18 @@ func TestLockNoPOL(t *testing.T) {
}
// 4 vals, one precommits, other 3 polka at next round, so we unlock and precommit the polka
func TestLockPOLRelock(t *testing.T) {
func TestStateLockPOLRelock(t *testing.T) {
cs1, vss := randConsensusState(4)
vs2, vs3, vs4 := vss[1], vss[2], vss[3]
partSize := config.Consensus.BlockPartSize
partSize := cs1.state.ConsensusParams.BlockPartSizeBytes
timeoutProposeCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutPropose(), 1)
timeoutWaitCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutWait(), 1)
proposalCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringCompleteProposal(), 1)
voteCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringVote(), 1)
newRoundCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringNewRound(), 1)
newBlockCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringNewBlockHeader(), 1)
t.Logf("vs2 last round %v", vs2.PrivValidator.LastRound)
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
voteCh := subscribe(cs1.eventBus, types.EventQueryVote)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
newBlockCh := subscribe(cs1.eventBus, types.EventQueryNewBlockHeader)
// everything done from perspective of cs1
@@ -517,7 +529,7 @@ func TestLockPOLRelock(t *testing.T) {
<-newRoundCh
re := <-proposalCh
rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
theBlockHash := rs.ProposalBlock.Hash()
<-voteCh // prevote
@@ -547,7 +559,9 @@ func TestLockPOLRelock(t *testing.T) {
<-timeoutWaitCh
//XXX: this isn't guaranteed to get there before the timeoutPropose ...
cs1.SetProposalAndBlock(prop, propBlock, propBlockParts, "some peer")
if err := cs1.SetProposalAndBlock(prop, propBlock, propBlockParts, "some peer"); err != nil {
t.Fatal(err)
}
<-newRoundCh
t.Log("### ONTO ROUND 1")
@@ -593,7 +607,7 @@ func TestLockPOLRelock(t *testing.T) {
be := <-newBlockCh
b := be.(types.TMEventData).Unwrap().(types.EventDataNewBlockHeader)
re = <-newRoundCh
rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
if rs.Height != 2 {
panic("Expected height to increment")
}
@@ -604,17 +618,17 @@ func TestLockPOLRelock(t *testing.T) {
}
// 4 vals, one precommits, other 3 polka at next round, so we unlock and precommit the polka
func TestLockPOLUnlock(t *testing.T) {
func TestStateLockPOLUnlock(t *testing.T) {
cs1, vss := randConsensusState(4)
vs2, vs3, vs4 := vss[1], vss[2], vss[3]
partSize := config.Consensus.BlockPartSize
partSize := cs1.state.ConsensusParams.BlockPartSizeBytes
proposalCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringCompleteProposal(), 1)
timeoutProposeCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutPropose(), 1)
timeoutWaitCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutWait(), 1)
newRoundCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringNewRound(), 1)
unlockCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringUnlock(), 1)
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
unlockCh := subscribe(cs1.eventBus, types.EventQueryUnlock)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
// everything done from perspective of cs1
@@ -629,7 +643,7 @@ func TestLockPOLUnlock(t *testing.T) {
startTestRound(cs1, cs1.Height, 0)
<-newRoundCh
re := <-proposalCh
rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
theBlockHash := rs.ProposalBlock.Hash()
<-voteCh // prevote
@@ -655,11 +669,13 @@ func TestLockPOLUnlock(t *testing.T) {
// timeout to new round
re = <-timeoutWaitCh
rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
lockedBlockHash := rs.LockedBlock.Hash()
//XXX: this isn't guaranteed to get there before the timeoutPropose ...
cs1.SetProposalAndBlock(prop, propBlock, propBlockParts, "some peer")
if err := cs1.SetProposalAndBlock(prop, propBlock, propBlockParts, "some peer"); err != nil {
t.Fatal(err)
}
<-newRoundCh
t.Log("#### ONTO ROUND 1")
@@ -699,23 +715,23 @@ func TestLockPOLUnlock(t *testing.T) {
// a polka at round 1 but we miss it
// then a polka at round 2 that we lock on
// then we see the polka from round 1 but shouldn't unlock
func TestLockPOLSafety1(t *testing.T) {
func TestStateLockPOLSafety1(t *testing.T) {
cs1, vss := randConsensusState(4)
vs2, vs3, vs4 := vss[1], vss[2], vss[3]
partSize := config.Consensus.BlockPartSize
partSize := cs1.state.ConsensusParams.BlockPartSizeBytes
proposalCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringCompleteProposal(), 1)
timeoutProposeCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutPropose(), 1)
timeoutWaitCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutWait(), 1)
newRoundCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringNewRound(), 1)
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
// start round and wait for propose and prevote
startTestRound(cs1, cs1.Height, 0)
<-newRoundCh
re := <-proposalCh
rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
propBlock := rs.ProposalBlock
<-voteCh // prevote
@@ -746,7 +762,9 @@ func TestLockPOLSafety1(t *testing.T) {
incrementRound(vs2, vs3, vs4)
//XXX: this isn't guaranteed to get there before the timeoutPropose ...
cs1.SetProposalAndBlock(prop, propBlock, propBlockParts, "some peer")
if err := cs1.SetProposalAndBlock(prop, propBlock, propBlockParts, "some peer"); err != nil {
t.Fatal(err)
}
<-newRoundCh
t.Log("### ONTO ROUND 1")
@@ -763,7 +781,7 @@ func TestLockPOLSafety1(t *testing.T) {
re = <-proposalCh
}
rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
if rs.LockedBlock != nil {
panic("we should not be locked!")
@@ -803,7 +821,7 @@ func TestLockPOLSafety1(t *testing.T) {
// we should prevote what we're locked on
validatePrevote(t, cs1, 2, vss[0], propBlockHash)
newStepCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringNewRoundStep(), 1)
newStepCh := subscribe(cs1.eventBus, types.EventQueryNewRoundStep)
// add prevotes from the earlier round
addVotes(cs1, prevotes...)
@@ -820,17 +838,17 @@ func TestLockPOLSafety1(t *testing.T) {
// What we want:
// dont see P0, lock on P1 at R1, dont unlock using P0 at R2
func TestLockPOLSafety2(t *testing.T) {
func TestStateLockPOLSafety2(t *testing.T) {
cs1, vss := randConsensusState(4)
vs2, vs3, vs4 := vss[1], vss[2], vss[3]
partSize := config.Consensus.BlockPartSize
partSize := cs1.state.ConsensusParams.BlockPartSizeBytes
proposalCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringCompleteProposal(), 1)
timeoutProposeCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutPropose(), 1)
timeoutWaitCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutWait(), 1)
newRoundCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringNewRound(), 1)
unlockCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringUnlock(), 1)
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
unlockCh := subscribe(cs1.eventBus, types.EventQueryUnlock)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
// the block for R0: gets polkad but we miss it
@@ -850,7 +868,7 @@ func TestLockPOLSafety2(t *testing.T) {
incrementRound(vs2, vs3, vs4)
cs1.updateRoundStep(0, RoundStepPrecommitWait)
cs1.updateRoundStep(0, cstypes.RoundStepPrecommitWait)
t.Log("### ONTO Round 1")
// jump in at round 1
@@ -858,7 +876,9 @@ func TestLockPOLSafety2(t *testing.T) {
startTestRound(cs1, height, 1)
<-newRoundCh
cs1.SetProposalAndBlock(prop1, propBlock1, propBlockParts1, "some peer")
if err := cs1.SetProposalAndBlock(prop1, propBlock1, propBlockParts1, "some peer"); err != nil {
t.Fatal(err)
}
<-proposalCh
<-voteCh // prevote
@@ -880,10 +900,12 @@ func TestLockPOLSafety2(t *testing.T) {
// in round 2 we see the polkad block from round 0
newProp := types.NewProposal(height, 2, propBlockParts0.Header(), 0, propBlockID1)
if err := vs3.SignProposal(config.ChainID, newProp); err != nil {
if err := vs3.SignProposal(config.ChainID(), newProp); err != nil {
t.Fatal(err)
}
if err := cs1.SetProposalAndBlock(newProp, propBlock0, propBlockParts0, "some peer"); err != nil {
t.Fatal(err)
}
cs1.SetProposalAndBlock(newProp, propBlock0, propBlockParts0, "some peer")
// Add the pol votes
addVotes(cs1, prevotes...)
@@ -915,14 +937,14 @@ func TestLockPOLSafety2(t *testing.T) {
// TODO: Slashing
/*
func TestSlashingPrevotes(t *testing.T) {
func TestStateSlashingPrevotes(t *testing.T) {
cs1, vss := randConsensusState(2)
vs2 := vss[1]
proposalCh := subscribeToEvent(cs1.evsw,"tester",types.EventStringCompleteProposal() , 1)
timeoutWaitCh := subscribeToEvent(cs1.evsw,"tester",types.EventStringTimeoutWait() , 1)
newRoundCh := subscribeToEvent(cs1.evsw,"tester",types.EventStringNewRound() , 1)
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
// start round and wait for propose and prevote
@@ -931,7 +953,7 @@ func TestSlashingPrevotes(t *testing.T) {
re := <-proposalCh
<-voteCh // prevote
rs := re.(types.EventDataRoundState).RoundState.(*RoundState)
rs := re.(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
// we should now be stuck in limbo forever, waiting for more prevotes
// add one for a different block should cause us to go into prevote wait
@@ -950,14 +972,14 @@ func TestSlashingPrevotes(t *testing.T) {
// XXX: Check for existence of Dupeout info
}
func TestSlashingPrecommits(t *testing.T) {
func TestStateSlashingPrecommits(t *testing.T) {
cs1, vss := randConsensusState(2)
vs2 := vss[1]
proposalCh := subscribeToEvent(cs1.evsw,"tester",types.EventStringCompleteProposal() , 1)
timeoutWaitCh := subscribeToEvent(cs1.evsw,"tester",types.EventStringTimeoutWait() , 1)
newRoundCh := subscribeToEvent(cs1.evsw,"tester",types.EventStringNewRound() , 1)
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
// start round and wait for propose and prevote
@@ -995,23 +1017,23 @@ func TestSlashingPrecommits(t *testing.T) {
// 4 vals.
// we receive a final precommit after going into next round, but others might have gone to commit already!
func TestHalt1(t *testing.T) {
func TestStateHalt1(t *testing.T) {
cs1, vss := randConsensusState(4)
vs2, vs3, vs4 := vss[1], vss[2], vss[3]
partSize := config.Consensus.BlockPartSize
partSize := cs1.state.ConsensusParams.BlockPartSizeBytes
proposalCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringCompleteProposal(), 1)
timeoutWaitCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutWait(), 1)
newRoundCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringNewRound(), 1)
newBlockCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringNewBlock(), 1)
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
newBlockCh := subscribe(cs1.eventBus, types.EventQueryNewBlock)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
// start round and wait for propose and prevote
startTestRound(cs1, cs1.Height, 0)
<-newRoundCh
re := <-proposalCh
rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
propBlock := rs.ProposalBlock
propBlockParts := propBlock.MakePartSet(partSize)
@@ -1034,7 +1056,7 @@ func TestHalt1(t *testing.T) {
// timeout to new round
<-timeoutWaitCh
re = <-newRoundCh
rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
t.Log("### ONTO ROUND 1")
/*Round2
@@ -1052,9 +1074,26 @@ func TestHalt1(t *testing.T) {
// receiving that precommit should take us straight to commit
<-newBlockCh
re = <-newRoundCh
rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState)
rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
if rs.Height != 2 {
panic("expected height to increment")
}
}
// subscribe subscribes test client to the given query and returns a channel with cap = 1.
func subscribe(eventBus *types.EventBus, q tmpubsub.Query) <-chan interface{} {
out := make(chan interface{}, 1)
err := eventBus.Subscribe(context.Background(), testSubscriber, q, out)
if err != nil {
panic(fmt.Sprintf("failed to subscribe %s to %v", testSubscriber, q))
}
return out
}
// discardFromChan reads n values from the channel.
func discardFromChan(ch <-chan interface{}, n int) {
for i := 0; i < n; i++ {
<-ch
}
}
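For reference, a hedged sketch of how the tests above use these helpers now that subscriptions go through the event bus instead of the old `cs1.evsw` event switch. It is a fragment meant to live inside a test in this package and leans on the package's own test harness (`randConsensusState`, `startTestRound`, `subscribe`, the `cstypes` alias for `consensus/types`); it is illustrative, not a copy of any one test.

```go
// Inside a test function in the consensus package (sketch only).
cs1, _ := randConsensusState(4)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)

startTestRound(cs1, cs1.Height, 0)
<-newRoundCh // block until the first NewRound event

// Events arrive as TMEventData; unwrap to reach the consensus RoundState.
re := <-proposalCh
rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*cstypes.RoundState)
_ = rs.ProposalBlock.Hash()
```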


@@ -1,36 +0,0 @@
# Generating test data
To generate the data, run `build.sh`. See that script for more details.
Make sure to adjust the stepChanges in the testCases if the number of messages changes.
This sometimes happens for the `small_block2.cswal`, where the number of block parts changes between 4 and 5.
If you need to change the signatures, you can use a script as follows:
The privBytes comes from `config/tendermint_test/...`:
```
package main
import (
"encoding/hex"
"fmt"
"github.com/tendermint/go-crypto"
)
func main() {
signBytes, err := hex.DecodeString("7B22636861696E5F6964223A2274656E6465726D696E745F74657374222C22766F7465223A7B22626C6F636B5F68617368223A2242453544373939433846353044354645383533364334333932464443384537423342313830373638222C22626C6F636B5F70617274735F686561646572223A506172745365747B543A31204236323237323535464632307D2C22686569676874223A312C22726F756E64223A302C2274797065223A327D7D")
if err != nil {
panic(err)
}
privBytes, err := hex.DecodeString("27F82582AEFAE7AB151CFB01C48BB6C1A0DA78F9BDDA979A9F70A84D074EB07D3B3069C422E19688B45CBFAE7BB009FC0FA1B1EA86593519318B7214853803C8")
if err != nil {
panic(err)
}
privKey := crypto.PrivKeyEd25519{}
copy(privKey[:], privBytes)
signature := privKey.Sign(signBytes)
fmt.Printf("Signature Bytes: %X\n", signature.Bytes())
}
```


@@ -1,110 +0,0 @@
#!/usr/bin/env bash
# XXX: removes tendermint dir
cd "$GOPATH/src/github.com/tendermint/tendermint" || exit 1
# Make sure we have a tendermint command.
if ! hash tendermint 2>/dev/null; then
make install
fi
# specify a dir to copy
# TODO: eventually we should replace with `tendermint init --test`
DIR_TO_COPY=$HOME/.tendermint_test/consensus_state_test
TMHOME="$HOME/.tendermint"
rm -rf "$TMHOME"
cp -r "$DIR_TO_COPY" "$TMHOME"
cp $TMHOME/config.toml $TMHOME/config.toml.bak
function reset(){
tendermint unsafe_reset_all
cp $TMHOME/config.toml.bak $TMHOME/config.toml
}
reset
# empty block
function empty_block(){
tendermint node --proxy_app=persistent_dummy &> /dev/null &
sleep 5
killall tendermint
# /q would print up to and including the match, then quit.
# /Q doesn't include the match.
# http://unix.stackexchange.com/questions/11305/grep-show-all-the-file-up-to-the-match
sed '/ENDHEIGHT: 1/Q' ~/.tendermint/data/cs.wal/wal > consensus/test_data/empty_block.cswal
reset
}
# many blocks
function many_blocks(){
bash scripts/txs/random.sh 1000 36657 &> /dev/null &
PID=$!
tendermint node --proxy_app=persistent_dummy &> /dev/null &
sleep 7
killall tendermint
kill -9 $PID
sed '/ENDHEIGHT: 6/Q' ~/.tendermint/data/cs.wal/wal > consensus/test_data/many_blocks.cswal
reset
}
# small block 1
function small_block1(){
bash scripts/txs/random.sh 1000 36657 &> /dev/null &
PID=$!
tendermint node --proxy_app=persistent_dummy &> /dev/null &
sleep 10
killall tendermint
kill -9 $PID
sed '/ENDHEIGHT: 1/Q' ~/.tendermint/data/cs.wal/wal > consensus/test_data/small_block1.cswal
reset
}
# small block 2 (part size = 512)
function small_block2(){
echo "" >> ~/.tendermint/config.toml
echo "block_part_size = 512" >> ~/.tendermint/config.toml
bash scripts/txs/random.sh 1000 36657 &> /dev/null &
PID=$!
tendermint node --proxy_app=persistent_dummy &> /dev/null &
sleep 5
killall tendermint
kill -9 $PID
sed '/ENDHEIGHT: 1/Q' ~/.tendermint/data/cs.wal/wal > consensus/test_data/small_block2.cswal
reset
}
case "$1" in
"small_block1")
small_block1
;;
"small_block2")
small_block2
;;
"empty_block")
empty_block
;;
"many_blocks")
many_blocks
;;
*)
small_block1
small_block2
empty_block
many_blocks
esac


@@ -1,10 +0,0 @@
#ENDHEIGHT: 0
{"time":"2017-04-27T22:24:01.346Z","msg":[3,{"duration":972946821,"height":1,"round":0,"step":1}]}
{"time":"2017-04-27T22:24:01.349Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPropose"}]}
{"time":"2017-04-27T22:24:01.349Z","msg":[2,{"msg":[17,{"Proposal":{"height":1,"round":0,"block_parts_header":{"total":1,"hash":"ACED4A95DDEBD24E66A681F7EAB4CA22C4B8546D"},"pol_round":-1,"pol_block_id":{"hash":"","parts":{"total":0,"hash":""}},"signature":[1,"E785764AED6D92D7CC65C0A3A4ED9C8465198A05142C3E6C7F3EF601FDCD3A604900B77B7B87C046221EF99FD038A960398385BD5BBAA50EE4F86DE757B8F704"]}}],"peer_key":""}]}
{"time":"2017-04-27T22:24:01.350Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":0,"bytes":"0101010F74656E6465726D696E745F74657374010114B96165CF4496C00000000000000114354594CBFC1A7BCA1AD0050ED6AA010023EADA390001000100000000","proof":{"aunts":[]}}}],"peer_key":""}]}
{"time":"2017-04-27T22:24:01.351Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPrevote"}]}
{"time":"2017-04-27T22:24:01.351Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":1,"round":0,"type":1,"block_id":{"hash":"F3BBFBE7E4A5D619E2C498C3D1B912883786DD71","parts":{"total":1,"hash":"ACED4A95DDEBD24E66A681F7EAB4CA22C4B8546D"}},"signature":[1,"35C937C78D061ECDC3770982A1330C9AA7F6FEF00835C43DEB50B8FCF69A3EEF221E675EE5E469114F64E4FBBABA414EB9170E1025FC47D3F0EADE46767D2E00"]}}],"peer_key":""}]}
{"time":"2017-04-27T22:24:01.352Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPrecommit"}]}
{"time":"2017-04-27T22:24:01.352Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":1,"round":0,"type":2,"block_id":{"hash":"F3BBFBE7E4A5D619E2C498C3D1B912883786DD71","parts":{"total":1,"hash":"ACED4A95DDEBD24E66A681F7EAB4CA22C4B8546D"}},"signature":[1,"D1A7D27FCD5D352F3A3EDA8DE368520BC5B796662E32BCD8D91CDB8209A88DAF37CB7C4C93143D3C12B37C1435229268098CFFD0AD1400D88DA7606454692301"]}}],"peer_key":""}]}
{"time":"2017-04-27T22:24:01.352Z","msg":[1,{"height":1,"round":0,"step":"RoundStepCommit"}]}

File diff suppressed because one or more lines are too long


@@ -1,15 +0,0 @@
#ENDHEIGHT: 0
{"time":"2017-04-27T22:23:56.310Z","msg":[3,{"duration":969732098,"height":1,"round":0,"step":1}]}
{"time":"2017-04-27T22:23:56.312Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPropose"}]}
{"time":"2017-04-27T22:23:56.312Z","msg":[2,{"msg":[17,{"Proposal":{"height":1,"round":0,"block_parts_header":{"total":6,"hash":"A3C176F13F5CBC7C48EE27A472800410C9D487DC"},"pol_round":-1,"pol_block_id":{"hash":"","parts":{"total":0,"hash":""}},"signature":[1,"7624F6E943B7A207E16D1FA87EA099BD924E930F98E7DECBC01DB37735C619409588A67C2EABA9845FD6B80FDB65ECFCDA5F0DEFCEF74B8C34DB8E0540480203"]}}],"peer_key":""}]}
{"time":"2017-04-27T22:23:56.312Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":0,"bytes":"0101010F74656E6465726D696E745F74657374010114B96164A30A118001620000000001141F6753D22BACA2180B1EADD722434EB28444D91D0114354594CBFC1A7BCA1AD0050ED6AA010023EADA3900010162011A3631363236333634333133303344363436333632363133313330011A3631363236333634333133313344363436333632363133313331011A3631363236333634333133323344363436333632363133313332011A3631363236333634333133333344363436333632363133313333011A3631363236333634333133343344363436333632363133313334011A3631363236333634333133353344363436333632363133313335011A3631363236333634333133363344363436333632363133313336011A3631363236333634333133373344363436333632363133313337011A3631363236333634333133383344363436333632363133313338011A3631363236333634333133393344363436333632363133313339011A3631363236333634333233303344363436333632363133323330011A3631363236333634333233313344363436333632363133323331011A3631363236333634333233323344363436333632363133323332011A3631363236333634333233333344363436333632363133323333011A3631363236333634333233343344363436333632363133323334011A36313632363336","proof":{"aunts":["49F4B71E3D7C457415069E2EA916DB12F67AA8D0","D35A72BEDAAAAC17045D7BFAAFA94C2EC0B0A4C2","705BC647374F3495EE73C3F44C21E9BDB4731738"]}}}],"peer_key":""}]}
{"time":"2017-04-27T22:23:56.312Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":1,"bytes":"34333233353344363436333632363133323335011A3631363236333634333233363344363436333632363133323336011A3631363236333634333233373344363436333632363133323337011A3631363236333634333233383344363436333632363133323338011A3631363236333634333233393344363436333632363133323339011A3631363236333634333333303344363436333632363133333330011A3631363236333634333333313344363436333632363133333331011A3631363236333634333333323344363436333632363133333332011A3631363236333634333333333344363436333632363133333333011A3631363236333634333333343344363436333632363133333334011A3631363236333634333333353344363436333632363133333335011A3631363236333634333333363344363436333632363133333336011A3631363236333634333333373344363436333632363133333337011A3631363236333634333333383344363436333632363133333338011A3631363236333634333333393344363436333632363133333339011A3631363236333634333433303344363436333632363133343330011A3631363236333634333433313344363436333632363133343331011A3631363236333634333433323344363436333632363133343332011A363136323633363433343333334436","proof":{"aunts":["5AD2A9A1A49A1FD6EF83F05FA4588F800B29DEF1","D35A72BEDAAAAC17045D7BFAAFA94C2EC0B0A4C2","705BC647374F3495EE73C3F44C21E9BDB4731738"]}}}],"peer_key":""}]}
{"time":"2017-04-27T22:23:56.312Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":2,"bytes":"3436333632363133343333011A3631363236333634333433343344363436333632363133343334011A3631363236333634333433353344363436333632363133343335011A3631363236333634333433363344363436333632363133343336011A3631363236333634333433373344363436333632363133343337011A3631363236333634333433383344363436333632363133343338011A3631363236333634333433393344363436333632363133343339011A3631363236333634333533303344363436333632363133353330011A3631363236333634333533313344363436333632363133353331011A3631363236333634333533323344363436333632363133353332011A3631363236333634333533333344363436333632363133353333011A3631363236333634333533343344363436333632363133353334011A3631363236333634333533353344363436333632363133353335011A3631363236333634333533363344363436333632363133353336011A3631363236333634333533373344363436333632363133353337011A3631363236333634333533383344363436333632363133353338011A3631363236333634333533393344363436333632363133353339011A3631363236333634333633303344363436333632363133363330011A3631363236333634333633313344363436333632363133","proof":{"aunts":["8B5786C3D871EE37B0F4B2DECAC39E157340DFBE","705BC647374F3495EE73C3F44C21E9BDB4731738"]}}}],"peer_key":""}]}
{"time":"2017-04-27T22:23:56.312Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":3,"bytes":"363331011A3631363236333634333633323344363436333632363133363332011A3631363236333634333633333344363436333632363133363333011A3631363236333634333633343344363436333632363133363334011A3631363236333634333633353344363436333632363133363335011A3631363236333634333633363344363436333632363133363336011A3631363236333634333633373344363436333632363133363337011A3631363236333634333633383344363436333632363133363338011A3631363236333634333633393344363436333632363133363339011A3631363236333634333733303344363436333632363133373330011A3631363236333634333733313344363436333632363133373331011A3631363236333634333733323344363436333632363133373332011A3631363236333634333733333344363436333632363133373333011A3631363236333634333733343344363436333632363133373334011A3631363236333634333733353344363436333632363133373335011A3631363236333634333733363344363436333632363133373336011A3631363236333634333733373344363436333632363133373337011A3631363236333634333733383344363436333632363133373338011A3631363236333634333733393344363436333632363133373339011A363136","proof":{"aunts":["56097661A1B2707588100586B3B1C2C8A51057D1","6DE889147DF528EEB5F7422E95DC45900CAFB619","247C721D5CEB90BB1FE389BA74C43DF0955E1647"]}}}],"peer_key":""}]}
{"time":"2017-04-27T22:23:56.312Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":4,"bytes":"3236333634333833303344363436333632363133383330011A3631363236333634333833313344363436333632363133383331011A3631363236333634333833323344363436333632363133383332011A3631363236333634333833333344363436333632363133383333011A3631363236333634333833343344363436333632363133383334011A3631363236333634333833353344363436333632363133383335011A3631363236333634333833363344363436333632363133383336011A3631363236333634333833373344363436333632363133383337011A3631363236333634333833383344363436333632363133383338011A3631363236333634333833393344363436333632363133383339011A3631363236333634333933303344363436333632363133393330011A3631363236333634333933313344363436333632363133393331011A3631363236333634333933323344363436333632363133393332011A3631363236333634333933333344363436333632363133393333011A3631363236333634333933343344363436333632363133393334011A3631363236333634333933353344363436333632363133393335011A3631363236333634333933363344363436333632363133393336011A3631363236333634333933373344363436333632363133393337011A3631363236333634333933","proof":{"aunts":["081D3DC5F11850851D5F0D760B98EE87BFA6B8B0","6DE889147DF528EEB5F7422E95DC45900CAFB619","247C721D5CEB90BB1FE389BA74C43DF0955E1647"]}}}],"peer_key":""}]}
{"time":"2017-04-27T22:23:56.313Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":5,"bytes":"383344363436333632363133393338011A3631363236333634333933393344363436333632363133393339011E363136323633363433313330333033443634363336323631333133303330011E363136323633363433313330333133443634363336323631333133303331011E363136323633363433313330333233443634363336323631333133303332011E363136323633363433313330333333443634363336323631333133303333011E363136323633363433313330333433443634363336323631333133303334011E363136323633363433313330333533443634363336323631333133303335011E363136323633363433313330333633443634363336323631333133303336011E3631363236333634333133303337334436343633363236313331333033370100000000","proof":{"aunts":["6AA912328C2B52EFA0ECE71F523E137E400EC484","247C721D5CEB90BB1FE389BA74C43DF0955E1647"]}}}],"peer_key":""}]}
{"time":"2017-04-27T22:23:56.314Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPrevote"}]}
{"time":"2017-04-27T22:23:56.314Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":1,"round":0,"type":1,"block_id":{"hash":"62371CF72F8662378691706DB256C833CF1AF81B","parts":{"total":6,"hash":"A3C176F13F5CBC7C48EE27A472800410C9D487DC"}},"signature":[1,"255906FAAA50C84E85DABF7DE73468E4F95DB4E46F598848145926E2FAD77CA682BF07E09E2F3EC81FFBD9A036B67914A3C02F819B69248D777AEBA792725907"]}}],"peer_key":""}]}
{"time":"2017-04-27T22:23:56.315Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPrecommit"}]}
{"time":"2017-04-27T22:23:56.315Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":1,"round":0,"type":2,"block_id":{"hash":"62371CF72F8662378691706DB256C833CF1AF81B","parts":{"total":6,"hash":"A3C176F13F5CBC7C48EE27A472800410C9D487DC"}},"signature":[1,"056CC15C748434D0A59B64B45CB56EDC1A437A426E68FA63DC7D61A7C17B0F768F207D81340D129A57C5A64195F8AFDD03B6BF28D7B2286290D61BCE88FCA304"]}}],"peer_key":""}]}
{"time":"2017-04-27T22:23:56.316Z","msg":[1,{"height":1,"round":0,"step":"RoundStepCommit"}]}


@@ -3,7 +3,7 @@ package consensus
import (
"time"
. "github.com/tendermint/tmlibs/common"
cmn "github.com/tendermint/tmlibs/common"
"github.com/tendermint/tmlibs/log"
)
@@ -15,8 +15,8 @@ var (
// conditional on the height/round/step in the timeoutInfo.
// The timeoutInfo.Duration may be non-positive.
type TimeoutTicker interface {
Start() (bool, error)
Stop() bool
Start() error
Stop() error
Chan() <-chan timeoutInfo // on which to receive a timeout
ScheduleTimeout(ti timeoutInfo) // reset the timer
@@ -29,24 +29,26 @@ type TimeoutTicker interface {
// Timeouts are scheduled along the tickChan,
// and fired on the tockChan.
type timeoutTicker struct {
BaseService
cmn.BaseService
timer *time.Timer
tickChan chan timeoutInfo
tockChan chan timeoutInfo
tickChan chan timeoutInfo // for scheduling timeouts
tockChan chan timeoutInfo // for notifying about them
}
// NewTimeoutTicker returns a new TimeoutTicker.
func NewTimeoutTicker() TimeoutTicker {
tt := &timeoutTicker{
timer: time.NewTimer(0),
tickChan: make(chan timeoutInfo, tickTockBufferSize),
tockChan: make(chan timeoutInfo, tickTockBufferSize),
}
tt.BaseService = *NewBaseService(nil, "TimeoutTicker", tt)
tt.BaseService = *cmn.NewBaseService(nil, "TimeoutTicker", tt)
tt.stopTimer() // don't want to fire until the first scheduled timeout
return tt
}
// OnStart implements cmn.Service. It starts the timeout routine.
func (t *timeoutTicker) OnStart() error {
go t.timeoutRoutine()
@@ -54,16 +56,19 @@ func (t *timeoutTicker) OnStart() error {
return nil
}
// OnStop implements cmn.Service. It stops the timeout routine.
func (t *timeoutTicker) OnStop() {
t.BaseService.OnStop()
t.stopTimer()
}
// Chan returns a channel on which timeouts are sent.
func (t *timeoutTicker) Chan() <-chan timeoutInfo {
return t.tockChan
}
// The timeoutRoutine is alwaya available to read from tickChan (it won't block).
// ScheduleTimeout schedules a new timeout by sending on the internal tickChan.
// The timeoutRoutine is always available to read from tickChan, so this won't block.
// The scheduling may fail if the timeoutRoutine has already scheduled a timeout for a later height/round/step.
func (t *timeoutTicker) ScheduleTimeout(ti timeoutInfo) {
t.tickChan <- ti
@@ -122,7 +127,7 @@ func (t *timeoutTicker) timeoutRoutine() {
// We can eliminate it by merging the timeoutRoutine into receiveRoutine
// and managing the timeouts ourselves with a millisecond ticker
go func(toi timeoutInfo) { t.tockChan <- toi }(ti)
case <-t.Quit:
case <-t.Quit():
return
}
}
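A hedged sketch of driving the ticker under the new `cmn.Service` signatures (`Start`/`Stop` return errors, `Quit` is now a method). The `timeoutInfo` fields mirror the literal used in `wal_test.go` further below; this fragment assumes it runs inside the consensus package and is illustrative, not the real `receiveRoutine`.

```go
// Sketch only: timeoutInfo is package-internal; cstypes aliases consensus/types.
ticker := NewTimeoutTicker()
if err := ticker.Start(); err != nil {
	panic(err)
}
defer ticker.Stop() // Stop also returns an error now; ignored here for brevity

// Schedule a propose timeout for height 1, round 0.
ticker.ScheduleTimeout(timeoutInfo{
	Duration: 10 * time.Millisecond,
	Height:   1,
	Round:    0,
	Step:     cstypes.RoundStepPropose,
})

// The same height/round/step comes back on the tock channel once the duration elapses.
ti := <-ticker.Chan()
_ = ti
```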


@@ -1,11 +1,13 @@
package consensus
package types
import (
"fmt"
"strings"
"sync"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/types"
. "github.com/tendermint/tmlibs/common"
cmn "github.com/tendermint/tmlibs/common"
)
type RoundVoteSet struct {
@@ -29,16 +31,16 @@ One for their LastCommit round, and another for the official commit round.
*/
type HeightVoteSet struct {
chainID string
height int
height int64
valSet *types.ValidatorSet
mtx sync.Mutex
round int // max tracked round
roundVoteSets map[int]RoundVoteSet // keys: [0...round]
peerCatchupRounds map[string][]int // keys: peer.Key; values: at most 2 rounds
peerCatchupRounds map[p2p.ID][]int // keys: peer.ID; values: at most 2 rounds
}
func NewHeightVoteSet(chainID string, height int, valSet *types.ValidatorSet) *HeightVoteSet {
func NewHeightVoteSet(chainID string, height int64, valSet *types.ValidatorSet) *HeightVoteSet {
hvs := &HeightVoteSet{
chainID: chainID,
}
@@ -46,20 +48,20 @@ func NewHeightVoteSet(chainID string, height int, valSet *types.ValidatorSet) *H
return hvs
}
func (hvs *HeightVoteSet) Reset(height int, valSet *types.ValidatorSet) {
func (hvs *HeightVoteSet) Reset(height int64, valSet *types.ValidatorSet) {
hvs.mtx.Lock()
defer hvs.mtx.Unlock()
hvs.height = height
hvs.valSet = valSet
hvs.roundVoteSets = make(map[int]RoundVoteSet)
hvs.peerCatchupRounds = make(map[string][]int)
hvs.peerCatchupRounds = make(map[p2p.ID][]int)
hvs.addRound(0)
hvs.round = 0
}
func (hvs *HeightVoteSet) Height() int {
func (hvs *HeightVoteSet) Height() int64 {
hvs.mtx.Lock()
defer hvs.mtx.Unlock()
return hvs.height
@@ -76,7 +78,7 @@ func (hvs *HeightVoteSet) SetRound(round int) {
hvs.mtx.Lock()
defer hvs.mtx.Unlock()
if hvs.round != 0 && (round < hvs.round+1) {
PanicSanity("SetRound() must increment hvs.round")
cmn.PanicSanity("SetRound() must increment hvs.round")
}
for r := hvs.round + 1; r <= round; r++ {
if _, ok := hvs.roundVoteSets[r]; ok {
@@ -89,7 +91,7 @@ func (hvs *HeightVoteSet) SetRound(round int) {
func (hvs *HeightVoteSet) addRound(round int) {
if _, ok := hvs.roundVoteSets[round]; ok {
PanicSanity("addRound() for an existing round")
cmn.PanicSanity("addRound() for an existing round")
}
// log.Debug("addRound(round)", "round", round)
prevotes := types.NewVoteSet(hvs.chainID, hvs.height, round, types.VoteTypePrevote, hvs.valSet)
@@ -101,8 +103,8 @@ func (hvs *HeightVoteSet) addRound(round int) {
}
// Duplicate votes return added=false, err=nil.
// By convention, peerKey is "" if origin is self.
func (hvs *HeightVoteSet) AddVote(vote *types.Vote, peerKey string) (added bool, err error) {
// By convention, peerID is "" if origin is self.
func (hvs *HeightVoteSet) AddVote(vote *types.Vote, peerID p2p.ID) (added bool, err error) {
hvs.mtx.Lock()
defer hvs.mtx.Unlock()
if !types.IsVoteTypeValid(vote.Type) {
@@ -110,10 +112,10 @@ func (hvs *HeightVoteSet) AddVote(vote *types.Vote, peerKey string) (added bool,
}
voteSet := hvs.getVoteSet(vote.Round, vote.Type)
if voteSet == nil {
if rndz := hvs.peerCatchupRounds[peerKey]; len(rndz) < 2 {
if rndz := hvs.peerCatchupRounds[peerID]; len(rndz) < 2 {
hvs.addRound(vote.Round)
voteSet = hvs.getVoteSet(vote.Round, vote.Type)
hvs.peerCatchupRounds[peerKey] = append(rndz, vote.Round)
hvs.peerCatchupRounds[peerID] = append(rndz, vote.Round)
} else {
// Peer has sent a vote that does not match our round,
// for more than one round. Bad peer!
@@ -164,7 +166,7 @@ func (hvs *HeightVoteSet) getVoteSet(round int, type_ byte) *types.VoteSet {
case types.VoteTypePrecommit:
return rvs.Precommits
default:
PanicSanity(Fmt("Unexpected vote type %X", type_))
cmn.PanicSanity(cmn.Fmt("Unexpected vote type %X", type_))
return nil
}
}
@@ -194,7 +196,7 @@ func (hvs *HeightVoteSet) StringIndented(indent string) string {
voteSetString = roundVoteSet.Precommits.StringShort()
vsStrings = append(vsStrings, voteSetString)
}
return Fmt(`HeightVoteSet{H:%v R:0~%v
return cmn.Fmt(`HeightVoteSet{H:%v R:0~%v
%s %v
%s}`,
hvs.height, hvs.round,
@@ -206,15 +208,15 @@ func (hvs *HeightVoteSet) StringIndented(indent string) string {
// NOTE: if there are too many peers, or too much peer churn,
// this can cause memory issues.
// TODO: implement ability to remove peers too
func (hvs *HeightVoteSet) SetPeerMaj23(round int, type_ byte, peerID string, blockID types.BlockID) {
func (hvs *HeightVoteSet) SetPeerMaj23(round int, type_ byte, peerID p2p.ID, blockID types.BlockID) error {
hvs.mtx.Lock()
defer hvs.mtx.Unlock()
if !types.IsVoteTypeValid(type_) {
return
return fmt.Errorf("SetPeerMaj23: Invalid vote type %v", type_)
}
voteSet := hvs.getVoteSet(round, type_)
if voteSet == nil {
return
return nil // something we don't know about yet
}
voteSet.SetPeerMaj23(peerID, blockID)
return voteSet.SetPeerMaj23(peerID, blockID)
}
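A hedged sketch of the updated HeightVoteSet API after the move into `consensus/types`: peers are identified by `p2p.ID` instead of raw strings, and `SetPeerMaj23` now returns an error instead of failing silently. The fragment assumes it sits inside the new package (or uses a `cstypes` alias from outside) and that `chainID`, `height`, `valSet`, `vote`, `round`, and `blockID` already exist; it only illustrates the signatures shown in the diff above.

```go
// Sketch only; p2p is github.com/tendermint/tendermint/p2p,
// types is github.com/tendermint/tendermint/types.
hvs := NewHeightVoteSet(chainID, height, valSet)

// peerID is "" by convention when the vote is our own.
added, err := hvs.AddVote(vote, p2p.ID("peer1"))
if !added || err != nil {
	// Duplicate votes come back as added=false, err=nil (see the comment above).
}

// Invalid vote types are now surfaced to the caller rather than ignored.
if err := hvs.SetPeerMaj23(round, types.VoteTypePrevote, p2p.ID("peer1"), blockID); err != nil {
	panic(err)
}
```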


@@ -1,20 +1,24 @@
package consensus
package types
import (
"testing"
"time"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/types"
. "github.com/tendermint/tmlibs/common"
cmn "github.com/tendermint/tmlibs/common"
)
var config *cfg.Config // NOTE: must be reset for each _test.go file
func init() {
config = ResetConfig("consensus_height_vote_set_test")
config = cfg.ResetTestRoot("consensus_height_vote_set_test")
}
func TestPeerCatchupRounds(t *testing.T) {
valSet, privVals := types.RandValidatorSet(10, 1)
hvs := NewHeightVoteSet(config.ChainID, 1, valSet)
hvs := NewHeightVoteSet(config.ChainID(), 1, valSet)
vote999_0 := makeVoteHR(t, 1, 999, privVals, 0)
added, err := hvs.AddVote(vote999_0, "peer1")
@@ -44,20 +48,21 @@ func TestPeerCatchupRounds(t *testing.T) {
}
func makeVoteHR(t *testing.T, height, round int, privVals []*types.PrivValidator, valIndex int) *types.Vote {
func makeVoteHR(t *testing.T, height int64, round int, privVals []*types.PrivValidatorFS, valIndex int) *types.Vote {
privVal := privVals[valIndex]
vote := &types.Vote{
ValidatorAddress: privVal.Address,
ValidatorAddress: privVal.GetAddress(),
ValidatorIndex: valIndex,
Height: height,
Round: round,
Timestamp: time.Now().UTC(),
Type: types.VoteTypePrecommit,
BlockID: types.BlockID{[]byte("fakehash"), types.PartSetHeader{}},
}
chainID := config.ChainID
chainID := config.ChainID()
err := privVal.SignVote(chainID, vote)
if err != nil {
panic(Fmt("Error signing vote: %v", err))
panic(cmn.Fmt("Error signing vote: %v", err))
return nil
}
return vote


@@ -0,0 +1,57 @@
package types
import (
"fmt"
"time"
"github.com/tendermint/tendermint/types"
cmn "github.com/tendermint/tmlibs/common"
)
//-----------------------------------------------------------------------------
// PeerRoundState contains the known state of a peer.
// NOTE: Read-only when returned by PeerState.GetRoundState().
type PeerRoundState struct {
Height int64 // Height peer is at
Round int // Round peer is at, -1 if unknown.
Step RoundStepType // Step peer is at
StartTime time.Time // Estimated start of round 0 at this height
Proposal bool // True if peer has proposal for this round
ProposalBlockPartsHeader types.PartSetHeader //
ProposalBlockParts *cmn.BitArray //
ProposalPOLRound int // Proposal's POL round. -1 if none.
ProposalPOL *cmn.BitArray // nil until ProposalPOLMessage received.
Prevotes *cmn.BitArray // All votes peer has for this round
Precommits *cmn.BitArray // All precommits peer has for this round
LastCommitRound int // Round of commit for last height. -1 if none.
LastCommit *cmn.BitArray // All commit precommits of commit for last height.
CatchupCommitRound int // Round that we have commit for. Not necessarily unique. -1 if none.
CatchupCommit *cmn.BitArray // All commit precommits peer has for this height & CatchupCommitRound
}
// String returns a string representation of the PeerRoundState
func (prs PeerRoundState) String() string {
return prs.StringIndented("")
}
// StringIndented returns a string representation of the PeerRoundState
func (prs PeerRoundState) StringIndented(indent string) string {
return fmt.Sprintf(`PeerRoundState{
%s %v/%v/%v @%v
%s Proposal %v -> %v
%s POL %v (round %v)
%s Prevotes %v
%s Precommits %v
%s LastCommit %v (round %v)
%s Catchup %v (round %v)
%s}`,
indent, prs.Height, prs.Round, prs.Step, prs.StartTime,
indent, prs.ProposalBlockPartsHeader, prs.ProposalBlockParts,
indent, prs.ProposalPOL, prs.ProposalPOLRound,
indent, prs.Prevotes,
indent, prs.Precommits,
indent, prs.LastCommit, prs.LastCommitRound,
indent, prs.CatchupCommit, prs.CatchupCommitRound,
indent)
}

consensus/types/state.go (new file)

@@ -0,0 +1,131 @@
package types
import (
"fmt"
"time"
"github.com/tendermint/tendermint/types"
)
//-----------------------------------------------------------------------------
// RoundStepType enum type
// RoundStepType enumerates the state of the consensus state machine
type RoundStepType uint8 // These must be numeric, ordered.
const (
RoundStepNewHeight = RoundStepType(0x01) // Wait til CommitTime + timeoutCommit
RoundStepNewRound = RoundStepType(0x02) // Setup new round and go to RoundStepPropose
RoundStepPropose = RoundStepType(0x03) // Did propose, gossip proposal
RoundStepPrevote = RoundStepType(0x04) // Did prevote, gossip prevotes
RoundStepPrevoteWait = RoundStepType(0x05) // Did receive any +2/3 prevotes, start timeout
RoundStepPrecommit = RoundStepType(0x06) // Did precommit, gossip precommits
RoundStepPrecommitWait = RoundStepType(0x07) // Did receive any +2/3 precommits, start timeout
RoundStepCommit = RoundStepType(0x08) // Entered commit state machine
// NOTE: RoundStepNewHeight acts as RoundStepCommitWait.
)
// String returns a string
func (rs RoundStepType) String() string {
switch rs {
case RoundStepNewHeight:
return "RoundStepNewHeight"
case RoundStepNewRound:
return "RoundStepNewRound"
case RoundStepPropose:
return "RoundStepPropose"
case RoundStepPrevote:
return "RoundStepPrevote"
case RoundStepPrevoteWait:
return "RoundStepPrevoteWait"
case RoundStepPrecommit:
return "RoundStepPrecommit"
case RoundStepPrecommitWait:
return "RoundStepPrecommitWait"
case RoundStepCommit:
return "RoundStepCommit"
default:
return "RoundStepUnknown" // Cannot panic.
}
}
//-----------------------------------------------------------------------------
// RoundState defines the internal consensus state.
// It is Immutable when returned from ConsensusState.GetRoundState()
// TODO: Actually, only the top pointer is copied,
// so access to field pointers is still racey
// NOTE: Not thread safe. Should only be manipulated by functions downstream
// of the cs.receiveRoutine
type RoundState struct {
Height int64 // Height we are working on
Round int
Step RoundStepType
StartTime time.Time
CommitTime time.Time // Subjective time when +2/3 precommits for Block at Round were found
Validators *types.ValidatorSet
Proposal *types.Proposal
ProposalBlock *types.Block
ProposalBlockParts *types.PartSet
LockedRound int
LockedBlock *types.Block
LockedBlockParts *types.PartSet
Votes *HeightVoteSet
CommitRound int //
LastCommit *types.VoteSet // Last precommits at Height-1
LastValidators *types.ValidatorSet
}
// RoundStateEvent returns the H/R/S of the RoundState as an event.
func (rs *RoundState) RoundStateEvent() types.EventDataRoundState {
// XXX: copy the RoundState
// if we want to avoid this, we may need synchronous events after all
rs_ := *rs
edrs := types.EventDataRoundState{
Height: rs.Height,
Round: rs.Round,
Step: rs.Step.String(),
RoundState: &rs_,
}
return edrs
}
// String returns a string
func (rs *RoundState) String() string {
return rs.StringIndented("")
}
// StringIndented returns a string
func (rs *RoundState) StringIndented(indent string) string {
return fmt.Sprintf(`RoundState{
%s H:%v R:%v S:%v
%s StartTime: %v
%s CommitTime: %v
%s Validators: %v
%s Proposal: %v
%s ProposalBlock: %v %v
%s LockedRound: %v
%s LockedBlock: %v %v
%s Votes: %v
%s LastCommit: %v
%s LastValidators:%v
%s}`,
indent, rs.Height, rs.Round, rs.Step,
indent, rs.StartTime,
indent, rs.CommitTime,
indent, rs.Validators.StringIndented(indent+" "),
indent, rs.Proposal,
indent, rs.ProposalBlockParts.StringShort(), rs.ProposalBlock.StringShort(),
indent, rs.LockedRound,
indent, rs.LockedBlockParts.StringShort(), rs.LockedBlock.StringShort(),
indent, rs.Votes.StringIndented(indent+" "),
indent, rs.LastCommit.StringShort(),
indent, rs.LastValidators.StringIndented(indent+" "),
indent)
}
// StringShort returns a string
func (rs *RoundState) StringShort() string {
return fmt.Sprintf(`RoundState{H:%v R:%v S:%v ST:%v}`,
rs.Height, rs.Round, rs.Step, rs.StartTime)
}
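Because RoundState, RoundStepType, and HeightVoteSet now live in this new `consensus/types` package rather than in `consensus` itself, callers refer to them through an import alias (the test diffs above and below use `cstypes`, or `types` inside `wal_test.go`). A minimal hedged sketch of that usage; it only compiles with the tendermint tree vendored and shows nothing beyond the relocated identifiers.

```go
package main

import (
	"fmt"

	cstypes "github.com/tendermint/tendermint/consensus/types"
)

func main() {
	// RoundStepType values are numeric and ordered, so steps can be compared.
	step := cstypes.RoundStepPrevote
	if step >= cstypes.RoundStepPropose {
		// Println uses the String() method, printing "RoundStepPrevote".
		fmt.Println("past the propose step:", step)
	}
}
```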


@@ -1,7 +1,7 @@
package consensus
import (
. "github.com/tendermint/tmlibs/common"
cmn "github.com/tendermint/tmlibs/common"
)
// kind of arbitrary
@@ -10,4 +10,4 @@ var Major = "0" //
var Minor = "2" // replay refactor
var Revision = "2" // validation -> commit
var Version = Fmt("v%s/%s.%s.%s", Spec, Major, Minor, Revision)
var Version = cmn.Fmt("v%s/%s.%s.%s", Spec, Major, Minor, Revision)


@@ -1,22 +1,41 @@
package consensus
import (
"bytes"
"encoding/binary"
"fmt"
"hash/crc32"
"io"
"path/filepath"
"time"
"github.com/pkg/errors"
wire "github.com/tendermint/go-wire"
"github.com/tendermint/tendermint/types"
auto "github.com/tendermint/tmlibs/autofile"
. "github.com/tendermint/tmlibs/common"
cmn "github.com/tendermint/tmlibs/common"
)
const (
// must be greater than params.BlockGossip.BlockPartSizeBytes + a few bytes
maxMsgSizeBytes = 1024 * 1024 // 1MB
)
//--------------------------------------------------------
// types and functions for saving consensus messages
type TimedWALMessage struct {
Time time.Time `json:"time"`
Time time.Time `json:"time"` // for debugging purposes
Msg WALMessage `json:"msg"`
}
// EndHeightMessage marks the end of the given height inside WAL.
// @internal used by scripts/wal2json util.
type EndHeightMessage struct {
Height int64 `json:"height"`
}
type WALMessage interface{}
var _ = wire.RegisterInterface(
@@ -24,81 +43,278 @@ var _ = wire.RegisterInterface(
wire.ConcreteType{types.EventDataRoundState{}, 0x01},
wire.ConcreteType{msgInfo{}, 0x02},
wire.ConcreteType{timeoutInfo{}, 0x03},
wire.ConcreteType{EndHeightMessage{}, 0x04},
)
//--------------------------------------------------------
// Simple write-ahead logger
// WAL is an interface for any write-ahead logger.
type WAL interface {
Save(WALMessage)
Group() *auto.Group
SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error)
Start() error
Stop() error
Wait()
}
// Write ahead logger writes msgs to disk before they are processed.
// Can be used for crash-recovery and deterministic replay
// TODO: currently the wal is overwritten during replay catchup
// give it a mode so it's either reading or appending - must read to end to start appending again
type WAL struct {
BaseService
type baseWAL struct {
cmn.BaseService
group *auto.Group
light bool // ignore block parts
enc *WALEncoder
}
func NewWAL(walFile string, light bool) (*WAL, error) {
func NewWAL(walFile string, light bool) (*baseWAL, error) {
err := cmn.EnsureDir(filepath.Dir(walFile), 0700)
if err != nil {
return nil, errors.Wrap(err, "failed to ensure WAL directory is in place")
}
group, err := auto.OpenGroup(walFile)
if err != nil {
return nil, err
}
wal := &WAL{
wal := &baseWAL{
group: group,
light: light,
enc: NewWALEncoder(group),
}
wal.BaseService = *NewBaseService(nil, "WAL", wal)
wal.BaseService = *cmn.NewBaseService(nil, "baseWAL", wal)
return wal, nil
}
func (wal *WAL) OnStart() error {
func (wal *baseWAL) Group() *auto.Group {
return wal.group
}
func (wal *baseWAL) OnStart() error {
size, err := wal.group.Head.Size()
if err != nil {
return err
} else if size == 0 {
wal.writeEndHeight(0)
wal.Save(EndHeightMessage{0})
}
_, err = wal.group.Start()
err = wal.group.Start()
return err
}
func (wal *WAL) OnStop() {
func (wal *baseWAL) OnStop() {
wal.BaseService.OnStop()
wal.group.Stop()
}
// called in newStep and for each pass in receiveRoutine
func (wal *WAL) Save(wmsg WALMessage) {
func (wal *baseWAL) Save(msg WALMessage) {
if wal == nil {
return
}
if wal.light {
// in light mode we only write new steps, timeouts, and our own votes (no proposals, block parts)
if mi, ok := wmsg.(msgInfo); ok {
if mi.PeerKey != "" {
if mi, ok := msg.(msgInfo); ok {
if mi.PeerID != "" {
return
}
}
}
// Write the wal message
var wmsgBytes = wire.JSONBytes(TimedWALMessage{time.Now(), wmsg})
err := wal.group.WriteLine(string(wmsgBytes))
if err := wal.enc.Encode(&TimedWALMessage{time.Now(), msg}); err != nil {
cmn.PanicQ(cmn.Fmt("Error writing msg to consensus wal: %v \n\nMessage: %v", err, msg))
}
// TODO: only flush when necessary
if err := wal.group.Flush(); err != nil {
cmn.PanicQ(cmn.Fmt("Error flushing consensus wal buf to file. Error: %v \n", err))
}
}
// WALSearchOptions are optional arguments to SearchForEndHeight.
type WALSearchOptions struct {
// IgnoreDataCorruptionErrors set to true will result in skipping data corruption errors.
IgnoreDataCorruptionErrors bool
}
// SearchForEndHeight searches for the EndHeightMessage with the height and
// returns an auto.GroupReader, whether it was found or not, and an error.
// Group reader will be nil if found equals false.
//
// CONTRACT: caller must close group reader.
func (wal *baseWAL) SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error) {
var msg *TimedWALMessage
// NOTE: starting from the last file in the group because we're usually
// searching for the last height. See replay.go
min, max := wal.group.MinIndex(), wal.group.MaxIndex()
wal.Logger.Debug("Searching for height", "height", height, "min", min, "max", max)
for index := max; index >= min; index-- {
gr, err = wal.group.NewReader(index)
if err != nil {
return nil, false, err
}
dec := NewWALDecoder(gr)
for {
msg, err = dec.Decode()
if err == io.EOF {
// check next file
break
}
if options.IgnoreDataCorruptionErrors && IsDataCorruptionError(err) {
// do nothing
} else if err != nil {
gr.Close()
return nil, false, err
}
if m, ok := msg.Msg.(EndHeightMessage); ok {
if m.Height == height { // found
wal.Logger.Debug("Found", "height", height, "index", index)
return gr, true, nil
}
}
}
gr.Close()
}
return nil, false, nil
}
///////////////////////////////////////////////////////////////////////////////
// A WALEncoder writes custom-encoded WAL messages to an output stream.
//
// Format: 4 bytes CRC sum + 4 bytes length + arbitrary-length value (go-wire encoded)
type WALEncoder struct {
wr io.Writer
}
// NewWALEncoder returns a new encoder that writes to wr.
func NewWALEncoder(wr io.Writer) *WALEncoder {
return &WALEncoder{wr}
}
// Encode writes the custom encoding of v to the stream.
func (enc *WALEncoder) Encode(v *TimedWALMessage) error {
data := wire.BinaryBytes(v)
crc := crc32.Checksum(data, crc32c)
length := uint32(len(data))
totalLength := 8 + int(length)
msg := make([]byte, totalLength)
binary.BigEndian.PutUint32(msg[0:4], crc)
binary.BigEndian.PutUint32(msg[4:8], length)
copy(msg[8:], data)
_, err := enc.wr.Write(msg)
return err
}
///////////////////////////////////////////////////////////////////////////////
// IsDataCorruptionError returns true if data has been corrupted inside WAL.
func IsDataCorruptionError(err error) bool {
_, ok := err.(DataCorruptionError)
return ok
}
// DataCorruptionError is an error that occurs if data on disk was corrupted.
type DataCorruptionError struct {
cause error
}
func (e DataCorruptionError) Error() string {
return fmt.Sprintf("DataCorruptionError[%v]", e.cause)
}
func (e DataCorruptionError) Cause() error {
return e.cause
}
// A WALDecoder reads and decodes custom-encoded WAL messages from an input
// stream. See WALEncoder for the format used.
//
// It will also compare the checksums and make sure data size is equal to the
// length from the header. If that is not the case, error will be returned.
type WALDecoder struct {
rd io.Reader
}
// NewWALDecoder returns a new decoder that reads from rd.
func NewWALDecoder(rd io.Reader) *WALDecoder {
return &WALDecoder{rd}
}
// Decode reads the next custom-encoded value from its reader and returns it.
func (dec *WALDecoder) Decode() (*TimedWALMessage, error) {
b := make([]byte, 4)
_, err := dec.rd.Read(b)
if err == io.EOF {
return nil, err
}
if err != nil {
PanicQ(Fmt("Error writing msg to consensus wal. Error: %v \n\nMessage: %v", err, wmsg))
return nil, fmt.Errorf("failed to read checksum: %v", err)
}
// TODO: only flush when necessary
if err := wal.group.Flush(); err != nil {
PanicQ(Fmt("Error flushing consensus wal buf to file. Error: %v \n", err))
crc := binary.BigEndian.Uint32(b)
b = make([]byte, 4)
_, err = dec.rd.Read(b)
if err == io.EOF {
return nil, err
}
if err != nil {
return nil, fmt.Errorf("failed to read length: %v", err)
}
length := binary.BigEndian.Uint32(b)
if length > maxMsgSizeBytes {
return nil, DataCorruptionError{fmt.Errorf("length %d exceeded maximum possible value of %d bytes", length, maxMsgSizeBytes)}
}
data := make([]byte, length)
_, err = dec.rd.Read(data)
if err == io.EOF {
return nil, err
}
if err != nil {
return nil, fmt.Errorf("failed to read data: %v", err)
}
// check checksum before decoding data
actualCRC := crc32.Checksum(data, crc32c)
if actualCRC != crc {
return nil, DataCorruptionError{fmt.Errorf("checksums do not match: (read: %v, actual: %v)", crc, actualCRC)}
}
var nn int
var res *TimedWALMessage // nolint: gosimple
res = wire.ReadBinary(&TimedWALMessage{}, bytes.NewBuffer(data), int(length), &nn, &err).(*TimedWALMessage)
if err != nil {
return nil, DataCorruptionError{fmt.Errorf("failed to decode data: %v", err)}
}
return res, err
}
func (wal *WAL) writeEndHeight(height int) {
wal.group.WriteLine(Fmt("#ENDHEIGHT: %v", height))
type nilWAL struct{}
// TODO: only flush when necessary
if err := wal.group.Flush(); err != nil {
PanicQ(Fmt("Error flushing consensus wal buf to file. Error: %v \n", err))
}
func (nilWAL) Save(m WALMessage) {}
func (nilWAL) Group() *auto.Group { return nil }
func (nilWAL) SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error) {
return nil, false, nil
}
func (nilWAL) Start() error { return nil }
func (nilWAL) Stop() error { return nil }
func (nilWAL) Wait() {}
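The encoder and decoder above frame every record as 4 bytes of CRC, 4 bytes of big-endian length, then the go-wire-encoded `TimedWALMessage`. Below is a minimal, standalone sketch of just that framing using only the standard library. It assumes the package-level `crc32c` table is the Castagnoli polynomial (the constant itself is defined elsewhere in the package and not shown in this diff), and it frames raw bytes rather than a real `TimedWALMessage`.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"hash/crc32"
)

// Assumed to match the package-level crc32c table used by the WAL.
var crc32c = crc32.MakeTable(crc32.Castagnoli)

// frame prepends the 4-byte CRC and 4-byte length header to payload.
func frame(payload []byte) []byte {
	msg := make([]byte, 8+len(payload))
	binary.BigEndian.PutUint32(msg[0:4], crc32.Checksum(payload, crc32c))
	binary.BigEndian.PutUint32(msg[4:8], uint32(len(payload)))
	copy(msg[8:], payload)
	return msg
}

// unframe verifies the header and returns the payload, mimicking the
// checks WALDecoder.Decode performs (length bound, checksum match).
func unframe(msg []byte) ([]byte, error) {
	if len(msg) < 8 {
		return nil, fmt.Errorf("frame too short: %d bytes", len(msg))
	}
	crc := binary.BigEndian.Uint32(msg[0:4])
	length := binary.BigEndian.Uint32(msg[4:8])
	if int(length) != len(msg)-8 {
		return nil, fmt.Errorf("length mismatch: header says %d, have %d", length, len(msg)-8)
	}
	data := msg[8:]
	if actual := crc32.Checksum(data, crc32c); actual != crc {
		return nil, fmt.Errorf("checksums do not match: read %v, actual %v", crc, actual)
	}
	return data, nil
}

func main() {
	framed := frame([]byte("hello wal"))
	data, err := unframe(framed)
	fmt.Println(string(data), err) // "hello wal <nil>"
}
```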

consensus/wal_fuzz.go (new file)

@@ -0,0 +1,31 @@
// +build gofuzz
package consensus
import (
"bytes"
"io"
)
func Fuzz(data []byte) int {
dec := NewWALDecoder(bytes.NewReader(data))
for {
msg, err := dec.Decode()
if err == io.EOF {
break
}
if err != nil {
if msg != nil {
panic("msg != nil on error")
}
return 0
}
var w bytes.Buffer
enc := NewWALEncoder(&w)
err = enc.Encode(msg)
if err != nil {
panic(err)
}
}
return 1
}

consensus/wal_generator.go (new file)

@@ -0,0 +1,201 @@
package consensus
import (
"bufio"
"bytes"
"fmt"
"math/rand"
"os"
"path/filepath"
"strings"
"time"
"github.com/pkg/errors"
"github.com/tendermint/abci/example/dummy"
bc "github.com/tendermint/tendermint/blockchain"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
auto "github.com/tendermint/tmlibs/autofile"
cmn "github.com/tendermint/tmlibs/common"
"github.com/tendermint/tmlibs/db"
"github.com/tendermint/tmlibs/log"
)
// WALWithNBlocks generates a consensus WAL. It does this by spinning up a
// stripped down version of node (proxy app, event bus, consensus state) with a
// persistent dummy application and special consensus wal instance
// (byteBufferWAL) and waits until numBlocks are created. Then it returns a WAL
// content.
func WALWithNBlocks(numBlocks int) (data []byte, err error) {
config := getConfig()
app := dummy.NewPersistentDummyApplication(filepath.Join(config.DBDir(), "wal_generator"))
logger := log.TestingLogger().With("wal_generator", "wal_generator")
logger.Info("generating WAL (last height msg excluded)", "numBlocks", numBlocks)
/////////////////////////////////////////////////////////////////////////////
// COPY PASTE FROM node.go WITH A FEW MODIFICATIONS
// NOTE: we can't import node package because of circular dependency
privValidatorFile := config.PrivValidatorFile()
privValidator := types.LoadOrGenPrivValidatorFS(privValidatorFile)
genDoc, err := types.GenesisDocFromFile(config.GenesisFile())
if err != nil {
return nil, errors.Wrap(err, "failed to read genesis file")
}
stateDB := db.NewMemDB()
blockStoreDB := db.NewMemDB()
state, err := sm.MakeGenesisState(genDoc)
if err != nil {
return nil, errors.Wrap(err, "failed to make genesis state")
}
blockStore := bc.NewBlockStore(blockStoreDB)
handshaker := NewHandshaker(stateDB, state, blockStore)
proxyApp := proxy.NewAppConns(proxy.NewLocalClientCreator(app), handshaker)
proxyApp.SetLogger(logger.With("module", "proxy"))
if err := proxyApp.Start(); err != nil {
return nil, errors.Wrap(err, "failed to start proxy app connections")
}
defer proxyApp.Stop()
eventBus := types.NewEventBus()
eventBus.SetLogger(logger.With("module", "events"))
if err := eventBus.Start(); err != nil {
return nil, errors.Wrap(err, "failed to start event bus")
}
defer eventBus.Stop()
mempool := types.MockMempool{}
evpool := types.MockEvidencePool{}
blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyApp.Consensus(), mempool, evpool)
consensusState := NewConsensusState(config.Consensus, state.Copy(), blockExec, blockStore, mempool, evpool)
consensusState.SetLogger(logger)
consensusState.SetEventBus(eventBus)
if privValidator != nil {
consensusState.SetPrivValidator(privValidator)
}
// END OF COPY PASTE
/////////////////////////////////////////////////////////////////////////////
// set consensus wal to buffered WAL, which will write all incoming msgs to buffer
var b bytes.Buffer
wr := bufio.NewWriter(&b)
numBlocksWritten := make(chan struct{})
wal := newByteBufferWAL(logger, NewWALEncoder(wr), int64(numBlocks), numBlocksWritten)
// see wal.go#103
wal.Save(EndHeightMessage{0})
consensusState.wal = wal
if err := consensusState.Start(); err != nil {
return nil, errors.Wrap(err, "failed to start consensus state")
}
defer consensusState.Stop()
select {
case <-numBlocksWritten:
wr.Flush()
return b.Bytes(), nil
case <-time.After(1 * time.Minute):
wr.Flush()
return b.Bytes(), fmt.Errorf("waited too long for tendermint to produce %d blocks (grep logs for `wal_generator`)", numBlocks)
}
}
// f**ing long, but unique for each test
func makePathname() string {
// get path
p, err := os.Getwd()
if err != nil {
panic(err)
}
// fmt.Println(p)
sep := string(filepath.Separator)
return strings.Replace(p, sep, "_", -1)
}
func randPort() int {
// returns between base and base + spread
base, spread := 20000, 20000
return base + rand.Intn(spread)
}
func makeAddrs() (string, string, string) {
start := randPort()
return fmt.Sprintf("tcp://0.0.0.0:%d", start),
fmt.Sprintf("tcp://0.0.0.0:%d", start+1),
fmt.Sprintf("tcp://0.0.0.0:%d", start+2)
}
// getConfig returns a config for test cases
func getConfig() *cfg.Config {
pathname := makePathname()
c := cfg.ResetTestRoot(fmt.Sprintf("%s_%d", pathname, cmn.RandInt()))
// and we use random ports to run in parallel
tm, rpc, grpc := makeAddrs()
c.P2P.ListenAddress = tm
c.RPC.ListenAddress = rpc
c.RPC.GRPCListenAddress = grpc
return c
}
// byteBufferWAL is a WAL which writes all msgs to a byte buffer. Writing stops
// when the heightToStop is reached. Client will be notified via
// signalWhenStopsTo channel.
type byteBufferWAL struct {
enc *WALEncoder
stopped bool
heightToStop int64
signalWhenStopsTo chan<- struct{}
logger log.Logger
}
// needed for determinism
var fixedTime, _ = time.Parse(time.RFC3339, "2017-01-02T15:04:05Z")
func newByteBufferWAL(logger log.Logger, enc *WALEncoder, nBlocks int64, signalStop chan<- struct{}) *byteBufferWAL {
return &byteBufferWAL{
enc: enc,
heightToStop: nBlocks,
signalWhenStopsTo: signalStop,
logger: logger,
}
}
// Save writes message to the internal buffer except when heightToStop is
// reached, in which case it will signal the caller via signalWhenStopsTo and
// skip writing.
func (w *byteBufferWAL) Save(m WALMessage) {
if w.stopped {
w.logger.Debug("WAL already stopped. Not writing message", "msg", m)
return
}
if endMsg, ok := m.(EndHeightMessage); ok {
w.logger.Debug("WAL write end height message", "height", endMsg.Height, "stopHeight", w.heightToStop)
if endMsg.Height == w.heightToStop {
w.logger.Debug("Stopping WAL at height", "height", endMsg.Height)
w.signalWhenStopsTo <- struct{}{}
w.stopped = true
return
}
}
w.logger.Debug("WAL Write Message", "msg", m)
err := w.enc.Encode(&TimedWALMessage{fixedTime, m})
if err != nil {
panic(fmt.Sprintf("failed to encode the msg %v", m))
}
}
func (w *byteBufferWAL) Group() *auto.Group {
panic("not implemented")
}
func (w *byteBufferWAL) SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error) {
return nil, false, nil
}
func (w *byteBufferWAL) Start() error { return nil }
func (w *byteBufferWAL) Stop() error { return nil }
func (w *byteBufferWAL) Wait() {}

consensus/wal_test.go (new file)

@@ -0,0 +1,132 @@
package consensus
import (
"bytes"
"crypto/rand"
"sync"
"testing"
"time"
wire "github.com/tendermint/go-wire"
"github.com/tendermint/tendermint/consensus/types"
tmtypes "github.com/tendermint/tendermint/types"
cmn "github.com/tendermint/tmlibs/common"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestWALEncoderDecoder(t *testing.T) {
now := time.Now()
msgs := []TimedWALMessage{
TimedWALMessage{Time: now, Msg: EndHeightMessage{0}},
TimedWALMessage{Time: now, Msg: timeoutInfo{Duration: time.Second, Height: 1, Round: 1, Step: types.RoundStepPropose}},
}
b := new(bytes.Buffer)
for _, msg := range msgs {
b.Reset()
enc := NewWALEncoder(b)
err := enc.Encode(&msg)
require.NoError(t, err)
dec := NewWALDecoder(b)
decoded, err := dec.Decode()
require.NoError(t, err)
assert.Equal(t, msg.Time.Truncate(time.Millisecond), decoded.Time)
assert.Equal(t, msg.Msg, decoded.Msg)
}
}
func TestWALSearchForEndHeight(t *testing.T) {
walBody, err := WALWithNBlocks(6)
if err != nil {
t.Fatal(err)
}
walFile := tempWALWithData(walBody)
wal, err := NewWAL(walFile, false)
if err != nil {
t.Fatal(err)
}
h := int64(3)
gr, found, err := wal.SearchForEndHeight(h, &WALSearchOptions{})
assert.NoError(t, err, cmn.Fmt("expected not to err on height %d", h))
assert.True(t, found, cmn.Fmt("expected to find end height for %d", h))
assert.NotNil(t, gr, "expected group not to be nil")
defer gr.Close()
dec := NewWALDecoder(gr)
msg, err := dec.Decode()
assert.NoError(t, err, "expected to decode a message")
rs, ok := msg.Msg.(tmtypes.EventDataRoundState)
assert.True(t, ok, "expected message of type EventDataRoundState")
assert.Equal(t, rs.Height, h+1, cmn.Fmt("wrong height"))
}
var initOnce sync.Once
func registerInterfacesOnce() {
initOnce.Do(func() {
var _ = wire.RegisterInterface(
struct{ WALMessage }{},
wire.ConcreteType{[]byte{}, 0x10},
)
})
}
func nBytes(n int) []byte {
buf := make([]byte, n)
n, _ = rand.Read(buf)
return buf[:n]
}
func benchmarkWalDecode(b *testing.B, n int) {
registerInterfacesOnce()
buf := new(bytes.Buffer)
enc := NewWALEncoder(buf)
data := nBytes(n)
enc.Encode(&TimedWALMessage{Msg: data, Time: time.Now().Round(time.Second)})
encoded := buf.Bytes()
b.ResetTimer()
for i := 0; i < b.N; i++ {
buf.Reset()
buf.Write(encoded)
dec := NewWALDecoder(buf)
if _, err := dec.Decode(); err != nil {
b.Fatal(err)
}
}
b.ReportAllocs()
}
func BenchmarkWalDecode512B(b *testing.B) {
benchmarkWalDecode(b, 512)
}
func BenchmarkWalDecode10KB(b *testing.B) {
benchmarkWalDecode(b, 10*1024)
}
func BenchmarkWalDecode100KB(b *testing.B) {
benchmarkWalDecode(b, 100*1024)
}
func BenchmarkWalDecode1MB(b *testing.B) {
benchmarkWalDecode(b, 1024*1024)
}
func BenchmarkWalDecode10MB(b *testing.B) {
benchmarkWalDecode(b, 10*1024*1024)
}
func BenchmarkWalDecode100MB(b *testing.B) {
benchmarkWalDecode(b, 100*1024*1024)
}
func BenchmarkWalDecode1GB(b *testing.B) {
benchmarkWalDecode(b, 1024*1024*1024)
}

1
docs/.python-version Normal file

@@ -0,0 +1 @@
2.7.14

23
docs/Makefile Normal file

@@ -0,0 +1,23 @@
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = python -msphinx
SPHINXPROJ = Tendermint
SOURCEDIR = .
BUILDDIR = _build
# Put it first so that "make" without argument is like "make help".
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
install:
@pip install -r requirements.txt
.PHONY: help Makefile
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

14
docs/README.md Normal file

@@ -0,0 +1,14 @@
Here lies our documentation. After making edits, run:
```
pip install -r requirements.txt
make html
```
to build the docs locally, then open the file `_build/html/index.html` in your browser.
**WARNING:** This documentation is intended to be viewed at:
https://tendermint.readthedocs.io
and may contain broken internal links when viewed from GitHub.


@@ -0,0 +1,17 @@
.toggle {
padding-bottom: 1em ;
}
.toggle .header {
display: block;
clear: both;
cursor: pointer;
}
.toggle .header:after {
content: " ▼";
}
.toggle .header.open:after {
content: " ▲";
}

10
docs/_static/custom_collapsible_code.js vendored Normal file

@@ -0,0 +1,10 @@
let makeCodeBlocksCollapsible = function() {
$(".toggle > *").hide();
$(".toggle .header").show();
$(".toggle .header").click(function() {
$(this).parent().children().not(".header").toggle({"duration": 400});
$(this).parent().children(".header").toggleClass("open");
});
};
// we could use the immediately-invoked }(); form if we had access to jQuery in HEAD, i.e. we would need to force the theme
// to load jQuery before our custom scripts

20
docs/_templates/layout.html vendored Normal file

@@ -0,0 +1,20 @@
{% extends "!layout.html" %}
{% set css_files = css_files + ["_static/custom_collapsible_code.css"] %}
{# sadly, I didn't find a css style way to add custom JS to a list that is automagically added to head like CSS (above) #}
{% block extrahead %}
<script type="text/javascript" src="_static/custom_collapsible_code.js"></script>
{% endblock %}
{% block footer %}
<script type="text/javascript">
$(document).ready(function() {
// using this approach as we don't have access to the jQuery selectors
// when executing the function on load in HEAD
makeCodeBlocksCollapsible();
});
</script>
{% endblock %}

369
docs/abci-cli.rst Normal file

@@ -0,0 +1,369 @@
Using ABCI-CLI
==============
To facilitate testing and debugging of ABCI servers and simple apps, we
built a CLI, the ``abci-cli``, for sending ABCI messages from the
command line.
Install
-------
Make sure you `have Go installed <https://golang.org/doc/install>`__.
Next, install the ``abci-cli`` tool and example applications:
::
go get -u github.com/tendermint/abci/cmd/abci-cli
If this fails, you may need to use ``glide`` to get vendored
dependencies:
::
go get github.com/Masterminds/glide
cd $GOPATH/src/github.com/tendermint/abci
glide install
go install ./cmd/abci-cli
Now run ``abci-cli`` to see the list of commands:
::
Usage:
abci-cli [command]
Available Commands:
batch Run a batch of abci commands against an application
check_tx Validate a tx
commit Commit the application state and return the Merkle root hash
console Start an interactive abci console for multiple commands
counter ABCI demo example
deliver_tx Deliver a new tx to the application
dummy ABCI demo example
echo Have the application echo a message
help Help about any command
info Get some info about the application
query Query the application state
set_option Set an options on the application
Flags:
--abci string socket or grpc (default "socket")
--address string address of application socket (default "tcp://127.0.0.1:46658")
-h, --help help for abci-cli
-v, --verbose print the command and results as if it were a console session
Use "abci-cli [command] --help" for more information about a command.
Dummy - First Example
---------------------
The ``abci-cli`` tool lets us send ABCI messages to our application, to
help build and debug them.
The most important messages are ``deliver_tx``, ``check_tx``, and
``commit``, but there are others for convenience, configuration, and
information purposes.
We'll start a dummy application, which was installed at the same time as
``abci-cli`` above. The dummy just stores transactions in a merkle tree.
Its code can be found `here <https://github.com/tendermint/abci/blob/master/cmd/abci-cli/abci-cli.go>`__ and looks like:
.. container:: toggle
.. container:: header
**Show/Hide Dummy Example**
.. code-block:: go
func cmdDummy(cmd *cobra.Command, args []string) error {
logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))
// Create the application - in memory or persisted to disk
var app types.Application
if flagPersist == "" {
app = dummy.NewDummyApplication()
} else {
app = dummy.NewPersistentDummyApplication(flagPersist)
app.(*dummy.PersistentDummyApplication).SetLogger(logger.With("module", "dummy"))
}
// Start the listener
srv, err := server.NewServer(flagAddrD, flagAbci, app)
if err != nil {
return err
}
srv.SetLogger(logger.With("module", "abci-server"))
if err := srv.Start(); err != nil {
return err
}
// Wait forever
cmn.TrapSignal(func() {
// Cleanup
srv.Stop()
})
return nil
}
Start by running:
::
abci-cli dummy
And in another terminal, run
::
abci-cli echo hello
abci-cli info
You'll see something like:
::
-> data: hello
-> data.hex: 68656C6C6F
and:
::
-> data: {"size":0}
-> data.hex: 7B2273697A65223A307D
An ABCI application must provide two things:
- a socket server
- a handler for ABCI messages
When we run the ``abci-cli`` tool we open a new connection to the
application's socket server, send the given ABCI message, and wait for a
response.
The server may be generic for a particular language, and we provide a
`reference implementation in
Golang <https://github.com/tendermint/abci/tree/master/server>`__. See
the `list of other ABCI
implementations <./ecosystem.html>`__ for servers in
other languages.
The handler is specific to the application, and may be arbitrary, so
long as it is deterministic and conforms to the ABCI interface
specification.
So when we run ``abci-cli info``, we open a new connection to the ABCI
server, which calls the ``Info()`` method on the application, which
tells us the number of transactions in our Merkle tree.
Now, since every command opens a new connection, we provide the
``abci-cli console`` and ``abci-cli batch`` commands, to allow multiple
ABCI messages to be sent over a single connection.
Running ``abci-cli console`` should drop you in an interactive console
for speaking ABCI messages to your application.
Try running these commands:
::
> echo hello
-> code: OK
-> data: hello
-> data.hex: 0x68656C6C6F
> info
-> code: OK
-> data: {"size":0}
-> data.hex: 0x7B2273697A65223A307D
> commit
-> code: OK
> deliver_tx "abc"
-> code: OK
> info
-> code: OK
-> data: {"size":1}
-> data.hex: 0x7B2273697A65223A317D
> commit
-> code: OK
-> data.hex: 0x49DFD15CCDACDEAE9728CB01FBB5E8688CA58B91
> query "abc"
-> code: OK
-> log: exists
-> height: 0
-> value: abc
-> value.hex: 616263
> deliver_tx "def=xyz"
-> code: OK
> commit
-> code: OK
-> data.hex: 0x70102DB32280373FBF3F9F89DA2A20CE2CD62B0B
> query "def"
-> code: OK
-> log: exists
-> height: 0
-> value: xyz
-> value.hex: 78797A
Note that if we do ``deliver_tx "abc"`` it will store ``(abc, abc)``,
but if we do ``deliver_tx "abc=efg"`` it will store ``(abc, efg)``.
Similarly, you could put the commands in a file and run
``abci-cli --verbose batch < myfile``.
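For example, a batch file (here a hypothetical ``myfile``, reusing the commands
from the console session above, one command per line) might look like:
::
deliver_tx "abc"
info
commit
query "abc"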
Counter - Another Example
-------------------------
Now that we've got the hang of it, let's try another application, the
"counter" app.
Like the dummy app, its code can be found `here <https://github.com/tendermint/abci/blob/master/cmd/abci-cli/abci-cli.go>`__ and looks like:
.. container:: toggle
.. container:: header
**Show/Hide Counter Example**
.. code-block:: go
func cmdCounter(cmd *cobra.Command, args []string) error {
app := counter.NewCounterApplication(flagSerial)
logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))
// Start the listener
srv, err := server.NewServer(flagAddrC, flagAbci, app)
if err != nil {
return err
}
srv.SetLogger(logger.With("module", "abci-server"))
if err := srv.Start(); err != nil {
return err
}
// Wait forever
cmn.TrapSignal(func() {
// Cleanup
srv.Stop()
})
return nil
}
The counter app doesn't use a Merkle tree; it just counts how many times
we've sent a transaction, asked for a hash, or committed the state. The
result of ``commit`` is just the number of transactions sent.
This application has two modes: ``serial=off`` and ``serial=on``.
When ``serial=on``, transactions must be a big-endian encoded
incrementing integer, starting at 0.
If ``serial=off``, there are no restrictions on transactions.
We can toggle the value of ``serial`` using the ``set_option`` ABCI
message.
When ``serial=on``, some transactions are invalid. In a live blockchain,
transactions collect in memory before they are committed into blocks. To
avoid wasting resources on invalid transactions, ABCI provides the
``check_tx`` message, which application developers can use to accept or
reject transactions, before they are stored in memory or gossipped to
other peers.
In this instance of the counter app, ``check_tx`` only allows
transactions whose integer is greater than the last committed one.
Let's kill the console and the dummy application, and start the counter
app:
::
abci-cli counter
In another window, start the ``abci-cli console``:
::
> set_option serial on
-> code: OK
> check_tx 0x00
-> code: OK
> check_tx 0xff
-> code: OK
> deliver_tx 0x00
-> code: OK
> check_tx 0x00
-> code: BadNonce
-> log: Invalid nonce. Expected >= 1, got 0
> deliver_tx 0x01
-> code: OK
> deliver_tx 0x04
-> code: BadNonce
-> log: Invalid nonce. Expected 2, got 4
> info
-> code: OK
-> data: {"hashes":0,"txs":2}
-> data.hex: 0x7B22686173686573223A302C22747873223A327D
This is a very simple application, but between ``counter`` and
``dummy``, it's easy to see how you can build out arbitrary application
states on top of the ABCI. `Hyperledger's
Burrow <https://github.com/hyperledger/burrow>`__ also runs atop ABCI,
bringing with it Ethereum-like accounts, the Ethereum virtual-machine,
Monax's permissioning scheme, and native contracts extensions.
But the ultimate flexibility comes from being able to write the
application easily in any language.
We have implemented the counter in a number of languages (see the
`example directory <https://github.com/tendermint/abci/tree/master/example>`__).
To run the Node JS version, ``cd`` to ``example/js`` and run
::
node app.js
(you'll have to kill the other counter application process). In another
window, run the console and those previous ABCI commands. You should get
the same results as for the Go version.
Bounties
--------
Want to write the counter app in your favorite language?! We'd be happy
to add you to our `ecosystem <https://tendermint.com/ecosystem>`__!
We're also offering `bounties <https://tendermint.com/bounties>`__ for
implementations in new languages!
The ``abci-cli`` is designed strictly for testing and debugging. In a
real deployment, the role of sending messages is taken by Tendermint,
which connects to the app using three separate connections, each with
its own pattern of messages.
For more information, see the `application developers
guide <./app-development.html>`__. For examples of running an ABCI
app with Tendermint, see the `getting started
guide <./getting-started.html>`__. Next is the ABCI specification.

125
docs/app-architecture.rst Normal file

@@ -0,0 +1,125 @@
Application Architecture Guide
==============================
Overview
--------
A blockchain application is more than the consensus engine and the
transaction logic (eg. smart contracts, business logic) as implemented
in the ABCI app. There are also (mobile, web, desktop) clients that will
need to connect and make use of the app. We will assume for now that you
have a well-designed transaction and database model, but maybe this
will be the topic of another article. This article is more interested in
various ways of setting up the "plumbing" and connecting these pieces,
and demonstrating some evolving best practices.
Security
--------
A very important aspect when constructing a blockchain is security. The
consensus model can be DoSed (no consensus possible) by corrupting 1/3
of the validators and exploited (writing arbitrary blocks) by corrupting
2/3 of the validators. So, while the security is not that of the
"weakest link", you should take care that the "average link" is
sufficiently hardened.
One big attack surface on the validators is the communication between
the ABCI app and the tendermint core. This should be highly protected.
Ideally, the app and the core are running on the same machine, so no
external agent can target the communication channel. You can use unix
sockets (with permissions preventing access from other users), or even
compile the two apps into one binary if the ABCI app is also written in
Go. If you are unable to do that due to language support, then the ABCI
app should bind a TCP connection to localhost (127.0.0.1), which is less
efficient and less secure, but still not reachable from outside. If you must
run the ABCI app and tendermint core on separate machines, make sure you
have a secure communication channel (an ssh tunnel?).
Now, assuming you have linked your app and the core together securely,
you must also make sure no one can get on the machine it is hosted on.
At this point it is basic network security. Run on a secure operating
system (SELinux?). Limit who has access to the machine (user accounts,
but also where the physical machine is hosted). Turn off all services
except for ssh, which should only be accessible by some well-guarded
public/private key pairs (no password). And maybe even firewall off
access to the ports used by the validators, so only known validators can
connect.
There was also a suggestion on slack from @jhon about compiling
everything together with a unikernel for more security, such as
`Mirage <https://mirage.io>`__ or
`UNIK <https://github.com/emc-advanced-dev/unik>`__.
Connecting your client to the blockchain
----------------------------------------
Tendermint Core RPC
~~~~~~~~~~~~~~~~~~~
The concept is that the ABCI app is completely hidden from the outside
world and only communicated with through a tested and secured `interface
exposed by the tendermint core <./specification/rpc.html>`__. This interface
exposes a lot of data on the block header and consensus process, which
is quite useful for externally verifying the system. It also includes
3(!) methods to broadcast a transaction (propose it for the blockchain,
and possibly await a response). And one method to query app-specific
data from the ABCI application.
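For a rough feel of that interface (the endpoint names, the default port 46657,
and the parameter encoding below are assumptions about the Tendermint RPC of
this era and should be checked against your node's configuration), broadcasting
a transaction and querying the app might look like:
::
curl 'http://localhost:46657/broadcast_tx_commit?tx="abc"'
curl 'http://localhost:46657/abci_query?data="abc"'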
Pros:
* Server code already written
* Access to block headers to validate merkle proofs (nice for light clients)
* Basic read/write functionality is supported
Cons:
* Limited interface to app. All queries must be serialized into
[]byte (less expressive than JSON over HTTP) and there is no way to push
data from ABCI app to the client (eg. notify me if account X receives a
transaction)
Custom ABCI server
~~~~~~~~~~~~~~~~~~
This was proposed by @wolfposd on slack and demonstrated by
`TMChat <https://github.com/wolfposd/TMChat>`__, a sample app. The
concept is to write a custom server for your app (with typical REST
API/websockets/etc for easy use by a mobile app). This custom server is
in the same binary as the ABCI app and data store, so can easily react
to complex events there that involve understanding the data format (send
a message if my balance drops below 500). All "writes" sent to this
server are proxied via websocket/JSON-RPC to tendermint core. When they
come back as deliver\_tx over ABCI, they will be written to the data
store. For "reads", we can do any queries we wish that are supported by
our architecture, using any web technology that is useful. The general
architecture is shown in the following diagram:
Pros: \* Separates application logic from blockchain logic \* Allows
much richer, more flexible client-facing API \* Allows pub-sub, watching
certain fields, etc.
Cons: \* Access to ABCI app can be dangerous (be VERY careful not to
write unless it comes from the validator node) \* No direct access to
the blockchain headers to verify tx \* You must write your own API (but
maybe that's a pro...)
Hybrid solutions
~~~~~~~~~~~~~~~~
Likely the least secure but most versatile. The client can access both
the tendermint node for all blockchain info, as well as a custom app
server, for complex queries and pub-sub on the abci app.
Pros: All from both above solutions
Cons: Even more complexity; even more attack vectors (less
security)
Scalability
-----------
Read replica using non-validating nodes? They could forward transactions
to the validators (fewer connections, more security), and locally allow
all queries in any of the above configurations. Thus, while
transaction-processing speed is limited by the speed of the abci app and
the number of validators, one should be able to scale our read
performance to quite an extent (until the replication process drains too
many resources from the validator nodes).

630
docs/app-development.rst Normal file

@@ -0,0 +1,630 @@
Application Development Guide
=============================
ABCI Design
-----------
The purpose of ABCI is to provide a clean interface between state
transition machines on one computer and the mechanics of their
replication across multiple computers. The former we call 'application
logic' and the latter the 'consensus engine'. Application logic
validates transactions and optionally executes transactions against some
persistent state. A consensus engine ensures all transactions are
replicated in the same order on every machine. We call each machine in a
consensus engine a 'validator', and each validator runs the same
transactions through the same application logic. In particular, we are
interested in blockchain-style consensus engines, where transactions are
committed in hash-linked blocks.
The ABCI design has a few distinct components:
- message protocol
- pairs of request and response messages
- consensus makes requests, application responds
- defined using protobuf
- server/client
- consensus engine runs the client
- application runs the server
- two implementations:
- async raw bytes
- grpc
- blockchain protocol
- abci is connection oriented
- Tendermint Core maintains three connections:
- `mempool connection <#mempool-connection>`__: for checking if
transactions should be relayed before they are committed; only
uses ``CheckTx``
- `consensus connection <#consensus-connection>`__: for executing
transactions that have been committed. Message sequence is -
for every block -
``BeginBlock, [DeliverTx, ...], EndBlock, Commit``
- `query connection <#query-connection>`__: for querying the
application state; only uses Query and Info
The mempool and consensus logic act as clients, and each maintains an
open ABCI connection with the application, which hosts an ABCI server.
Shown are the request and response types sent on each connection.
Message Protocol
----------------
The message protocol consists of pairs of requests and responses. Some
messages have no fields, while others may include byte-arrays, strings,
or integers. See the ``message Request`` and ``message Response``
definitions in `the protobuf definition
file <https://github.com/tendermint/abci/blob/master/types/types.proto>`__,
and the `protobuf
documentation <https://developers.google.com/protocol-buffers/docs/overview>`__
for more details.
For each request, a server should respond with the corresponding
response, where order of requests is preserved in the order of
responses.
Server
------
To use ABCI in your programming language of choice, there must be an ABCI
server in that language. Tendermint supports two kinds of implementation
of the server:
- Asynchronous, raw socket server (Tendermint Socket Protocol, also
known as TSP or Teaspoon)
- GRPC
Both can be tested using the ``abci-cli`` by setting the ``--abci`` flag
appropriately (ie. to ``socket`` or ``grpc``).
See examples, in various stages of maintenance, in
`Go <https://github.com/tendermint/abci/tree/master/server>`__,
`JavaScript <https://github.com/tendermint/js-abci>`__,
`Python <https://github.com/tendermint/abci/tree/master/example/python3/abci>`__,
`C++ <https://github.com/mdyring/cpp-tmsp>`__, and
`Java <https://github.com/jTendermint/jabci>`__.
GRPC
~~~~
If GRPC is available in your language, this is the easiest approach,
though it will have significant performance overhead.
To get started with GRPC, copy in the `protobuf
file <https://github.com/tendermint/abci/blob/master/types/types.proto>`__
and compile it using the GRPC plugin for your language. For instance,
for golang, the command is
``protoc --go_out=plugins=grpc:. types.proto``. See the `grpc
documentation for more details <http://www.grpc.io/docs/>`__. ``protoc``
will autogenerate all the necessary code for ABCI client and server in
your language, including whatever interface your application must
satisfy to be used by the ABCI server for handling requests.
TSP
~~~
If GRPC is not available in your language, or you require higher
performance, or otherwise enjoy programming, you may implement your own
ABCI server using the Tendermint Socket Protocol, known affectionately
as Teaspoon. The first step is still to auto-generate the relevant data
types and codec in your language using ``protoc``. Messages coming over
the socket are Protobuf3 encoded, but additionally length-prefixed to
facilitate use as a streaming protocol. Protobuf3 doesn't have an
official length-prefix standard, so we use our own. The first byte in
the prefix represents the length of the Big Endian encoded length. The
remaining bytes in the prefix are the Big Endian encoded length.
For example, if the Protobuf3 encoded ABCI message is 0xDEADBEEF (4
bytes), the length-prefixed message is 0x0104DEADBEEF. If the Protobuf3
encoded ABCI message is 65535 bytes long, the length-prefixed message
would be like 0x02FFFF....
Note this prefixing does not apply for grpc.
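As a minimal sketch of this framing (not taken from the reference
implementation), the prefix can be computed like so:
.. code-block:: go
    // lengthPrefix frames msg as described above: one byte giving the size of
    // the big-endian encoded length, the big-endian encoded length itself,
    // then the message bytes.
    func lengthPrefix(msg []byte) []byte {
        n := len(msg)
        var be []byte // minimal big-endian encoding of n
        for n > 0 {
            be = append([]byte{byte(n & 0xff)}, be...)
            n >>= 8
        }
        if len(be) == 0 {
            be = []byte{0}
        }
        out := append([]byte{byte(len(be))}, be...)
        return append(out, msg...)
    }
For the 4-byte message 0xDEADBEEF this produces 0x0104DEADBEEF, matching the
example above.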
An ABCI server must also be able to support multiple connections, as
Tendermint uses three connections.
Client
------
There are currently two use-cases for an ABCI client. One is a testing
tool, as in the ``abci-cli``, which allows ABCI requests to be sent via
command line. The other is a consensus engine, such as Tendermint Core,
which makes requests to the application every time a new transaction is
received or a block is committed.
It is unlikely that you will need to implement a client. For details of
our client, see
`here <https://github.com/tendermint/abci/tree/master/client>`__.
Most of the examples below are from the `dummy application
<https://github.com/tendermint/abci/blob/master/example/dummy/dummy.go>`__,
which is part of the abci repo. The `persistent_dummy application
<https://github.com/tendermint/abci/blob/master/example/dummy/persistent_dummy.go>`__
is used to show ``BeginBlock``, ``EndBlock`` and ``InitChain``
example implementations.
Blockchain Protocol
-------------------
In ABCI, a transaction is simply an arbitrary length byte-array. It is
the application's responsibility to define the transaction codec as they
please, and to use it for both CheckTx and DeliverTx.
Note that there are two distinct means for running transactions,
corresponding to stages of 'awareness' of the transaction in the
network. The first stage is when a transaction is received by a
validator from a client into the so-called mempool or transaction pool -
this is where we use CheckTx. The second is when the transaction is
successfully committed on more than 2/3 of validators - where we use
DeliverTx. In the former case, it may not be necessary to run all the
state transitions associated with the transaction, as the transaction
may not ultimately be committed until some much later time, when the
result of its execution will be different. For instance, an Ethereum
ABCI app would check signatures and amounts in CheckTx, but would not
actually execute any contract code until the DeliverTx, so as to avoid
executing state transitions that have not been finalized.
To formalize the distinction further, two explicit ABCI connections are
made between Tendermint Core and the application: the mempool connection
and the consensus connection. We also make a third connection, the query
connection, to query the local state of the app.
Mempool Connection
~~~~~~~~~~~~~~~~~~
The mempool connection is used *only* for CheckTx requests. Transactions
are run using CheckTx in the same order they were received by the
validator. If the CheckTx returns ``OK``, the transaction is kept in
memory and relayed to other peers in the same order it was received.
Otherwise, it is discarded.
CheckTx requests run concurrently with block processing, so they should
run against a copy of the main application state which is reset after
every block. This copy is necessary to track transitions made by a
sequence of CheckTx requests before they are included in a block. When a
block is committed, the application must ensure to reset the mempool
state to the latest committed state. Tendermint Core will then filter
through all transactions in the mempool, removing any that were included
in the block, and re-run the rest using CheckTx against the post-Commit
mempool state.
.. container:: toggle
.. container:: header
**Show/Hide Go Example**
.. code-block:: go
func (app *DummyApplication) CheckTx(tx []byte) types.Result {
return types.OK
}
.. container:: toggle
.. container:: header
**Show/Hide Java Example**
.. code-block:: java
ResponseCheckTx requestCheckTx(RequestCheckTx req) {
byte[] transaction = req.getTx().toByteArray();
// validate transaction
if (notValid) {
return ResponseCheckTx.newBuilder().setCode(CodeType.BadNonce).setLog("invalid tx").build();
} else {
return ResponseCheckTx.newBuilder().setCode(CodeType.OK).build();
}
}
Consensus Connection
~~~~~~~~~~~~~~~~~~~~
The consensus connection is used only when a new block is committed, and
communicates all information from the block in a series of requests:
``BeginBlock, [DeliverTx, ...], EndBlock, Commit``. That is, when a
block is committed in the consensus, we send a list of DeliverTx
requests (one for each transaction) sandwiched by BeginBlock and
EndBlock requests, and followed by a Commit.
DeliverTx
^^^^^^^^^
DeliverTx is the workhorse of the blockchain. Tendermint sends the
DeliverTx requests asynchronously but in order, and relies on the
underlying socket protocol (ie. TCP) to ensure they are received by the
app in order. They have already been ordered in the global consensus by
the Tendermint protocol.
DeliverTx returns an abci.Result, which includes a Code, Data, and Log.
The code may be non-zero (non-OK), meaning the corresponding transaction
should have been rejected by the mempool, but may have been included in
a block by a Byzantine proposer.
The block header will be updated (TODO) to include some commitment to
the results of DeliverTx, be it a bitarray of non-OK transactions, or a
merkle root of the data returned by the DeliverTx requests, or both.
.. container:: toggle
.. container:: header
**Show/Hide Go Example**
.. code-block:: go
// tx is either "key=value" or just arbitrary bytes
func (app *DummyApplication) DeliverTx(tx []byte) types.Result {
parts := strings.Split(string(tx), "=")
if len(parts) == 2 {
app.state.Set([]byte(parts[0]), []byte(parts[1]))
} else {
app.state.Set(tx, tx)
}
return types.OK
}
.. container:: toggle
.. container:: header
**Show/Hide Java Example**
.. code-block:: java
/**
* Using Protobuf types from the protoc compiler, we always start with a byte[]
*/
ResponseDeliverTx deliverTx(RequestDeliverTx request) {
byte[] transaction = request.getTx().toByteArray();
// validate your transaction
if (notValid) {
return ResponseDeliverTx.newBuilder().setCode(CodeType.BadNonce).setLog("transaction was invalid").build();
} else {
return ResponseDeliverTx.newBuilder().setCode(CodeType.OK).build();
}
}
Commit
^^^^^^
Once all processing of the block is complete, Tendermint sends the
Commit request and blocks waiting for a response. While the mempool may
run concurrently with block processing (the BeginBlock, DeliverTxs, and
EndBlock), it is locked for the Commit request so that its state can be
safely reset during Commit. This means the app *MUST NOT* do any
blocking communication with the mempool (ie. broadcast\_tx) during
Commit, or there will be deadlock. Note also that all remaining
transactions in the mempool are replayed on the mempool connection
(CheckTx) following a commit.
The app should respond to the Commit request with a byte array, which is the deterministic
state root of the application. It is included in the header of the next
block. It can be used to provide easily verified Merkle-proofs of the
state of the application.
It is expected that the app will persist state to disk on Commit. The
option to have all transactions replayed from some previous block is the
job of the `Handshake <#handshake>`__.
.. container:: toggle
.. container:: header
**Show/Hide Go Example**
.. code-block:: go
func (app *DummyApplication) Commit() types.Result {
hash := app.state.Hash()
return types.NewResultOK(hash, "")
}
.. container:: toggle
.. container:: header
**Show/Hide Java Example**
.. code-block:: java
ResponseCommit requestCommit(RequestCommit requestCommit) {
// update the internal app-state
byte[] newAppState = calculateAppState();
// and return it to the node
return ResponseCommit.newBuilder().setCode(CodeType.OK).setData(ByteString.copyFrom(newAppState)).build();
}
BeginBlock
^^^^^^^^^^
The BeginBlock request can be used to run some code at the beginning of
every block. It also allows Tendermint to send the current block hash
and header to the application, before it sends any of the transactions.
The app should remember the latest height and header (ie. from which it
has run a successful Commit) so that it can tell Tendermint where to
pick up from when it restarts. See information on the Handshake, below.
.. container:: toggle
.. container:: header
**Show/Hide Go Example**
.. code-block:: go
// Track the block hash and header information
func (app *PersistentDummyApplication) BeginBlock(params types.RequestBeginBlock) {
// update latest block info
app.blockHeader = params.Header
// reset valset changes
app.changes = make([]*types.Validator, 0)
}
.. container:: toggle
.. container:: header
**Show/Hide Java Example**
.. code-block:: java
/*
* all types come from protobuf definition
*/
ResponseBeginBlock requestBeginBlock(RequestBeginBlock req) {
Header header = req.getHeader();
byte[] prevAppHash = header.getAppHash().toByteArray();
long prevHeight = header.getHeight();
long numTxs = header.getNumTxs();
// run your pre-block logic. Maybe prepare a state snapshot, message components, etc
return ResponseBeginBlock.newBuilder().build();
}
EndBlock
^^^^^^^^
The EndBlock request can be used to run some code at the end of every block.
Additionally, the response may contain a list of validators, which can be used
to update the validator set. To add a new validator or update an existing one,
simply include them in the list returned in the EndBlock response. To remove
one, include it in the list with a ``power`` equal to ``0``. Tendermint core
will take care of updating the validator set. Note the change in voting power
must be strictly less than 1/3 per block if you want a light client to be able
to prove the transition externally. See the `light client docs
<https://godoc.org/github.com/tendermint/tendermint/lite#hdr-How_We_Track_Validators>`__
for details on how it tracks validators.
.. container:: toggle
.. container:: header
**Show/Hide Go Example**
.. code-block:: go
// Update the validator set
func (app *PersistentDummyApplication) EndBlock(req types.RequestEndBlock) types.ResponseEndBlock {
return types.ResponseEndBlock{ValidatorUpdates: app.ValUpdates}
}
.. container:: toggle
.. container:: header
**Show/Hide Java Example**
.. code-block:: java
/*
* Assume that one validator changes. The new validator has a power of 10
*/
ResponseEndBlock requestEndBlock(RequestEndBlock req) {
final long currentHeight = req.getHeight();
final byte[] validatorPubKey = getValPubKey();
ResponseEndBlock.Builder builder = ResponseEndBlock.newBuilder();
builder.addDiffs(1, Types.Validator.newBuilder().setPower(10L).setPubKey(ByteString.copyFrom(validatorPubKey)).build());
return builder.build();
}
Query Connection
~~~~~~~~~~~~~~~~
This connection is used to query the application without engaging
consensus. It's exposed over the tendermint core rpc, so clients can
query the app without exposing a server on the app itself, but they must
serialize each query as a single byte array. Additionally, certain
"standardized" queries may be used to inform local decisions, for
instance about which peers to connect to.
Tendermint Core currently uses the Query connection to filter peers upon
connecting, according to IP address or public key. For instance,
returning a non-OK ABCI response to either of the following queries will
cause Tendermint to not connect to the corresponding peer:
- ``p2p/filter/addr/<addr>``, where ``<addr>`` is an IP address.
- ``p2p/filter/pubkey/<pubkey>``, where ``<pubkey>`` is the hex-encoded
ED25519 key of the node (not its validator key)
Note: these query formats are subject to change!
.. container:: toggle
.. container:: header
**Show/Hide Go Example**
.. code-block:: go
func (app *DummyApplication) Query(reqQuery types.RequestQuery) (resQuery types.ResponseQuery) {
if reqQuery.Prove {
value, proof, exists := app.state.Proof(reqQuery.Data)
resQuery.Index = -1 // TODO make Proof return index
resQuery.Key = reqQuery.Data
resQuery.Value = value
resQuery.Proof = proof
if exists {
resQuery.Log = "exists"
} else {
resQuery.Log = "does not exist"
}
return
} else {
index, value, exists := app.state.Get(reqQuery.Data)
resQuery.Index = int64(index)
resQuery.Value = value
if exists {
resQuery.Log = "exists"
} else {
resQuery.Log = "does not exist"
}
return
}
}
.. container:: toggle
.. container:: header
**Show/Hide Java Example**
.. code-block:: java
ResponseQuery requestQuery(RequestQuery req) {
final boolean isProveQuery = req.getProve();
final ResponseQuery.Builder responseBuilder = ResponseQuery.newBuilder();
if (isProveQuery) {
com.app.example.ProofResult proofResult = generateProof(req.getData().toByteArray());
final byte[] proofAsByteArray = proofResult.getAsByteArray();
responseBuilder.setProof(ByteString.copyFrom(proofAsByteArray));
responseBuilder.setKey(req.getData());
responseBuilder.setValue(ByteString.copyFrom(proofResult.getData()));
responseBuilder.setLog(proofResult.getLogValue());
} else {
byte[] queryData = req.getData().toByteArray();
final com.app.example.QueryResult result = generateQueryResult(queryData);
responseBuilder.setIndex(result.getIndex());
responseBuilder.setValue(ByteString.copyFrom(result.getValue()));
responseBuilder.setLog(result.getLogValue());
}
return responseBuilder.build();
}
Handshake
~~~~~~~~~
When the app or tendermint restarts, they need to sync to a common
height. When an ABCI connection is first established, Tendermint will
call ``Info`` on the Query connection. The response should contain the
LastBlockHeight and LastBlockAppHash - the former is the last block for
which the app ran Commit successfully, the latter is the response
from that Commit.
Using this information, Tendermint will determine what needs to be
replayed, if anything, against the app, to ensure both Tendermint and
the app are synced to the latest block height.
If the app returns a LastBlockHeight of 0, Tendermint will just replay
all blocks.
.. container:: toggle
.. container:: header
**Show/Hide Go Example**
.. code-block:: go
func (app *DummyApplication) Info(req types.RequestInfo) (resInfo types.ResponseInfo) {
return types.ResponseInfo{Data: cmn.Fmt("{\"size\":%v}", app.state.Size())}
}
.. container:: toggle
.. container:: header
**Show/Hide Java Example**
.. code-block:: java
ResponseInfo requestInfo(RequestInfo req) {
final byte[] lastAppHash = getLastAppHash();
final long lastHeight = getLastHeight();
return ResponseInfo.newBuilder().setLastBlockAppHash(ByteString.copyFrom(lastAppHash)).setLastBlockHeight(lastHeight).build();
}
Genesis
~~~~~~~
``InitChain`` will be called once upon the genesis. ``params`` includes the
initial validator set. Later on, it may be extended to take parts of the
consensus params.
.. container:: toggle
.. container:: header
**Show/Hide Go Example**
.. code-block:: go
// Save the validators in the merkle tree
func (app *PersistentDummyApplication) InitChain(params types.RequestInitChain) {
for _, v := range params.Validators {
r := app.updateValidator(v)
if r.IsErr() {
app.logger.Error("Error updating validators", "r", r)
}
}
}
.. container:: toggle
.. container:: header
**Show/Hide Java Example**
.. code-block:: java
/*
* all types come from protobuf definition
*/
ResponseInitChain requestInitChain(RequestInitChain req) {
final int validatorsCount = req.getValidatorsCount();
final List<Types.Validator> validatorsList = req.getValidatorsList();
validatorsList.forEach((validator) -> {
long power = validator.getPower();
byte[] validatorPubKey = validator.getPubKey().toByteArray();
// do something for validator setup in the app
});
return ResponseInitChain.newBuilder().build();
}


@@ -1,16 +0,0 @@
# ABCI
ABCI is an interface between the consensus/blockchain engine known as tendermint, and the application-specific business logic, known as an ABCi app.
The tendermint core should run unchanged for all apps. Each app can customize it, the supported transactions, queries, even the validator sets and how to handle staking / slashing stake. This customization is achieved by implementing the ABCi app to send the proper information to the tendermint engine to perform as directed.
To understand this decision better, think of the design of the tendermint engine.
* A blockchain is simply consensus on a unique global ordering of events.
* This consensus can efficiently be implemented using BFT and PoS
* This code can be generalized to easily support a large number of blockchains
* The block-chain specific code, the interpretation of the individual events, can be implemented by a 3rd party app without touching the consensus engine core
* Use an efficient, language-agnostic layer to implement this (ABCi)
Bucky, please make this doc real.


@@ -2,15 +2,4 @@
This is a location to record all high-level architecture decisions in the tendermint project. Not the implementation details, but the reasoning that happened. This should be referred to for guidance on the "right way" to extend the application. And if we notice that the original decisions were lacking, we should have another open discussion, record the new decisions here, and then modify the code to match.
This is like our guide and mentor when Jae and Bucky are offline.... The concept comes from a [blog post](https://product.reverb.com/documenting-architecture-decisions-the-reverb-way-a3563bb24bd0#.78xhdix6t) that resonated among the team when Anton shared it.
Each section of the code can have its own markdown file in this directory, and please add a link to the readme.
## Sections
* [ABCI](./ABCI.md)
* [go-merkle / merkleeyes](./merkle.md)
* [Frey's thoughts on the data store](./merkle-frey.md)
* basecoin
* tendermint core (multiple sections)
* ???
Read up on the concept in this [blog post](https://product.reverb.com/documenting-architecture-decisions-the-reverb-way-a3563bb24bd0#.78xhdix6t).


@@ -0,0 +1,90 @@
# ADR 2: Event Subscription
## Context
In the light client (or any other client), the user may want to **subscribe to
a subset of transactions** (rather than all of them) using `/subscribe?event=X`. For
example, I want to subscribe for all transactions associated with a particular
account. Same for fetching. The user may want to **fetch transactions based on
some filter** (rather than fetching all the blocks). For example, I want to get
all transactions for a particular account in the last two weeks (`tx's block
time >= '2017-06-05'`).
Now you can't even subscribe to "all txs" in Tendermint.
The goal is a simple and easy to use API for doing that.
![Tx Send Flow Diagram](img/tags1.png)
## Decision
The ABCI app returns tags with a `DeliverTx` response inside the `data` field (_for
now; later we may create a separate field_). Tags are a list of key-value pairs,
protobuf encoded.
Example data:
```json
{
"abci.account.name": "Igor",
"abci.account.address": "0xdeadbeef",
"tx.gas": 7
}
```
### Subscribing for transactions events
If the user wants to receive only a subset of transactions, the ABCI app must
return a list of tags with a `DeliverTx` response. These tags will be parsed and
matched with the current queries (subscribers). If a query matches the tags, the
subscriber will get the transaction event.
```
/subscribe?query="tm.event = Tx AND tx.hash = AB0023433CF0334223212243BDD AND abci.account.invoice.number = 22"
```
A new package must be developed to replace the current `events` package. It
will allow clients to subscribe to different types of events in the future:
```
/subscribe?query="abci.account.invoice.number = 22"
/subscribe?query="abci.account.invoice.owner CONTAINS Igor"
```
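To make the matching step concrete, here is a rough sketch with hypothetical
types (this is not the API of the proposed `events` replacement package):
```go
// query is a hypothetical parsed form of a subscriber query: a set of
// exact-match conditions over tag keys.
type query map[string]string

// Matches reports whether every condition in the query is satisfied by the
// tags returned with a DeliverTx response.
func (q query) Matches(tags map[string]string) bool {
	for key, want := range q {
		if got, ok := tags[key]; !ok || got != want {
			return false
		}
	}
	return true
}

// e.g. query{"abci.account.name": "Igor"} matches the example tags above.
```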
### Fetching transactions
This is a bit tricky because a) we want to support a number of indexers, all of
which have different APIs, and b) we don't know whether tags will be sufficient
for most apps (I guess we'll see).
```
/txs/search?query="tx.hash = AB0023433CF0334223212243BDD AND abci.account.owner CONTAINS Igor"
/txs/search?query="abci.account.owner = Igor"
```
For historic queries we will need an indexing storage (Postgres, SQLite, ...).
### Issues
- https://github.com/tendermint/basecoin/issues/91
- https://github.com/tendermint/tendermint/issues/376
- https://github.com/tendermint/tendermint/issues/287
- https://github.com/tendermint/tendermint/issues/525 (related)
## Status
proposed
## Consequences
### Positive
- same format for event notifications and search APIs
- powerful enough query
### Negative
- performance of the `match` function (where we have too many queries / subscribers)
- there is an issue where there are too many txs in the DB
### Neutral


@@ -0,0 +1,34 @@
# ADR 3: Must an ABCI-app have an RPC server?
## Context
ABCI-server could expose its own RPC-server and act as a proxy to Tendermint.
The idea was for the Tendermint RPC to just be a transparent proxy to the app.
Clients need to talk to Tendermint for proofs, unless we burden all app devs
with exposing Tendermint proof stuff. Also seems less complex to lock down one
server than two, but granted it makes querying a bit more kludgy since it needs
to be passed as a `Query`. Also, **having a very standard rpc interface means
the light-client can work with all apps and handle proofs**. The only
app-specific logic is decoding the binary data to a more readable form (eg.
json). This is a huge advantage for code-reuse and standardization.
## Decision
We don't expose an RPC server on any of our ABCI-apps.
## Status
accepted
## Consequences
### Positive
- Unified interface for all apps
### Negative
- `Query` interface
### Neutral


@@ -0,0 +1,38 @@
# ADR 004: Historical Validators
## Context
Right now, we can query the present validator set, but there is no history.
If you were offline for a long time, there is no way to reconstruct past validators. This is needed for the light client, and we agreed it needs an enhancement of the API.
## Decision
For every block, store a new structure that contains either the latest validator set,
or the height of the last block for which the validator set changed. Note this is not
the height of the block which returned the validator set change itself, but the next block,
ie. the first block it comes into effect for.
Storing the validators will be handled by the `state` package.
At some point in the future, we may consider more efficient storage in the case where the validators
are updated frequently - for instance by only saving the diffs, rather than the whole set.
An alternative approach suggested keeping the validator set, or diffs of it, in a merkle IAVL tree.
While it might afford cheaper proofs that a validator set has not changed, it would be more complex,
and likely less efficient.
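A sketch of what such a per-height record could look like (illustrative names,
not the final `state` package API):
```go
// ValidatorsInfo is stored for every height. Either ValidatorSet is set
// (the set changed coming into this height), or LastHeightChanged points back
// to the height at which the current set came into effect.
type ValidatorsInfo struct {
	ValidatorSet      *types.ValidatorSet
	LastHeightChanged int64
}
```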
## Status
Accepted.
## Consequences
### Positive
- Can query old validator sets, with proof.
### Negative
- Writes an extra structure to disk with every block.
### Neutral


@@ -0,0 +1,86 @@
# ADR 005: Consensus Params
## Context
Consensus critical parameters controlling blockchain capacity have until now been hard coded, loaded from a local config, or neglected.
Since they may need to be different in different networks, and potentially to evolve over time within
networks, we seek to initialize them in a genesis file, and expose them through the ABCI.
While we have some specific parameters now, like maximum block and transaction size, we expect to have more in the future,
such as a period over which evidence is valid, or the frequency of checkpoints.
## Decision
### ConsensusParams
No consensus critical parameters should ever be found in the `config.toml`.
A new `ConsensusParams` is optionally included in the `genesis.json` file,
and loaded into the `State`. Any items not included are set to their default value.
A value of 0 is undefined (see ABCI, below). A value of -1 is used to indicate the parameter does not apply.
The parameters are used to determine the validity of a block (and tx) via the union of all relevant parameters.
```
type ConsensusParams struct {
BlockSize
TxSize
BlockGossip
}
type BlockSize struct {
MaxBytes int
MaxTxs int
MaxGas int
}
type TxSize struct {
MaxBytes int
MaxGas int
}
type BlockGossip struct {
BlockPartSizeBytes int
}
```
The `ConsensusParams` can evolve over time by adding new structs that cover different aspects of the consensus rules.
The `BlockPartSizeBytes` and the `BlockSize.MaxBytes` are enforced to be greater than 0.
The former because we need a part size, the latter so that we always have at least some sanity check over the size of blocks.
### ABCI
#### InitChain
InitChain currently takes the initial validator set. It should be extended to also take parts of the ConsensusParams.
There is some case to be made for it to take the entire Genesis, except there may be things in the genesis,
like the BlockPartSize, that the app shouldn't really know about.
#### EndBlock
The EndBlock response includes a `ConsensusParams`, which includes BlockSize and TxSize, but not BlockGossip.
Other param structs can be added to `ConsensusParams` in the future.
The `0` value is used to denote no change.
Any other value will update that parameter in the `State.ConsensusParams`, to be applied for the next block.
Tendermint should have hard-coded upper limits as sanity checks.
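A sketch of the "0 means no change" rule, using the struct fields from the
Decision above (not the actual implementation):
```go
// applyUpdates copies only the non-zero fields of the EndBlock response into
// the current consensus params; zero values leave the current value untouched.
// BlockGossip is not part of the EndBlock response, so it is never updated here.
func applyUpdates(cur, upd ConsensusParams) ConsensusParams {
	if upd.BlockSize.MaxBytes != 0 {
		cur.BlockSize.MaxBytes = upd.BlockSize.MaxBytes
	}
	if upd.BlockSize.MaxTxs != 0 {
		cur.BlockSize.MaxTxs = upd.BlockSize.MaxTxs
	}
	if upd.BlockSize.MaxGas != 0 {
		cur.BlockSize.MaxGas = upd.BlockSize.MaxGas
	}
	if upd.TxSize.MaxBytes != 0 {
		cur.TxSize.MaxBytes = upd.TxSize.MaxBytes
	}
	if upd.TxSize.MaxGas != 0 {
		cur.TxSize.MaxGas = upd.TxSize.MaxGas
	}
	return cur
}
```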
## Status
Proposed.
## Consequences
### Positive
- Alternative capacity limits and consensus parameters can be specified without re-compiling the software.
- They can also change over time under the control of the application
### Negative
- More exposed parameters is more complexity
- Different rules at different heights in the blockchain complicates fast sync
### Neutral
- The TxSize, which checks validity, may be in conflict with the config's `max_block_size_tx`, which determines proposal sizes


@@ -0,0 +1,238 @@
# ADR 006: Trust Metric Design
## Context
The proposed trust metric will allow Tendermint to maintain local trust rankings for peers it has directly interacted with, which can then be used to implement soft security controls. The calculations were obtained from the [TrustGuard](https://dl.acm.org/citation.cfm?id=1060808) project.
### Background
The Tendermint Core project developers would like to improve Tendermint security and reliability by keeping track of the level of trustworthiness peers have demonstrated within the peer-to-peer network. This way, undesirable outcomes from peers will not immediately result in them being dropped from the network (potentially causing drastic changes to take place). Instead, peer behavior can be monitored with appropriate metrics and a peer can be removed from the network once Tendermint Core is certain the peer is a threat. For example, when the PEXReactor makes a request for peers' network addresses from an already known peer, and the returned network addresses are unreachable, this untrustworthy behavior should be tracked. Returning a few bad network addresses probably shouldn't cause a peer to be dropped, while excessive amounts of this behavior do qualify the peer for being dropped.
Trust metrics can be circumvented by malicious nodes through the use of strategic oscillation techniques, which adapt the malicious node's behavior pattern in order to maximize its goals. For instance, if the malicious node learns that the time interval of the Tendermint trust metric is *X* hours, then it could wait *X* hours in-between malicious activities. We could try to combat this issue by increasing the interval length, yet this will make the system less adaptive to recent events.
Instead, having shorter intervals, but keeping a history of interval values, will give our metric the flexibility needed in order to keep the network stable, while also making it resilient against a strategic malicious node in the Tendermint peer-to-peer network. Also, the metric can access trust data over a rather long period of time while not greatly increasing its history size by aggregating older history values over a larger number of intervals, and at the same time, maintain great precision for the recent intervals. This approach is referred to as fading memories, and closely resembles the way human beings remember their experiences. The trade-off to using history data is that the interval values should be preserved in-between executions of the node.
### References
S. Mudhakar, L. Xiong, and L. Liu, “TrustGuard: Countering Vulnerabilities in Reputation Management for Decentralized Overlay Networks,” in *Proceedings of the 14th international conference on World Wide Web, pp. 422-431*, May 2005.
## Decision
The proposed trust metric will allow a developer to inform the trust metric store of all good and bad events relevant to a peer's behavior, and at any time, the metric can be queried for a peer's current trust ranking.
The three subsections below will cover the process being considered for calculating the trust ranking, the concept of the trust metric store, and the interface for the trust metric.
### Proposed Process
The proposed trust metric will count good and bad events relevant to the object, and calculate the percent of counters that are good over an interval with a predefined duration. This is the procedure that will continue for the life of the trust metric. When the trust metric is queried for the current **trust value**, a resilient equation will be utilized to perform the calculation.
The equation being proposed resembles a Proportional-Integral-Derivative (PID) controller used in control systems. The proportional component allows us to be sensitive to the value of the most recent interval, while the integral component allows us to incorporate trust values stored in the history data, and the derivative component allows us to give weight to sudden changes in the behavior of a peer. We compute the trust value of a peer in interval i based on its current trust ranking, its trust rating history prior to interval *i* (over the past *maxH* number of intervals) and its trust ranking fluctuation. We will break up the equation into the three components.
```math
(1) Proportional Value = a * R[i]
```
where *R*[*i*] denotes the raw trust value at time interval *i* (with *i* == 0 being the current time) and *a* is the weight applied to the contribution of the current reports. The next component of our equation uses a weighted sum over the last *maxH* intervals to calculate the history value for time *i*:
`H[i] = ` ![formula1](img/formula1.png "Weighted Sum Formula")
The weights can be chosen either optimistically or pessimistically. An optimistic weight creates larger weights for newer history data values, while the pessimistic weight creates larger weights for time intervals with lower scores. The default weights used during the calculation of the history value are optimistic and calculated as *Wk* = 0.8^*k*, for time interval *k*. With the history value available, we can now finish calculating the integral value:
```math
(2) Integral Value = b * H[i]
```
Where *H*[*i*] denotes the history value at time interval *i* and *b* is the weight applied to the contribution of past performance for the object being measured. The derivative component will be calculated as follows:
```math
D[i] = R[i] - H[i]
(3) Derivative Value = c(D[i]) * D[i]
```
Where the value of *c* is selected based on the *D*[*i*] value relative to zero. The default selection process makes *c* equal to 0 unless *D*[*i*] is a negative value, in which case c is equal to 1. The result is that the maximum penalty is applied when current behavior is lower than previously experienced behavior. If the current behavior is better than the previously experienced behavior, then the Derivative Value has no impact on the trust value. With the three components brought together, our trust value equation is calculated as follows:
```math
TrustValue[i] = a * R[i] + b * H[i] + c(D[i]) * D[i]
```
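In code form, the combined equation is roughly the following (a sketch only;
the weights *a* and *b* and the history value come from the configuration and
history described above):
```go
// trustValue combines the proportional, integral and derivative components.
// r is the raw value R[i] for the current interval, h is the history value H[i].
func trustValue(a, b, r, h float64) float64 {
	d := r - h // D[i]
	c := 0.0
	if d < 0 {
		c = 1.0 // maximum penalty when current behavior is worse than history
	}
	return a*r + b*h + c*d
}
```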
As a performance optimization that will keep the amount of raw interval data being saved to a reasonable size of *m*, while allowing us to represent 2^*m* - 1 history intervals, we can employ the fading memories technique that will trade space and time complexity for the precision of the history data values by summarizing larger quantities of less recent values. While our equation above attempts to access up to *maxH* (which can be 2^*m* - 1), we will map those requests down to *m* values using equation 4 below:
```math
(4) j = index, where index > 0
```
Where *j* is one of *(0, 1, 2, … , m - 1)* indices used to access history interval data. Now we can access the raw intervals using the following calculations:
```math
R[0] = raw data for current time interval
```
`R[j] = ` ![formula2](img/formula2.png "Fading Memories Formula")
### Trust Metric Store
Similar to the P2P subsystem AddrBook, the trust metric store will maintain information relevant to Tendermint peers. Additionally, the trust metric store will ensure that trust metrics will only be active for peers that a node is currently and directly engaged with.
Reactors will provide a peer key to the trust metric store in order to retrieve the associated trust metric. The trust metric can then record new positive and negative events experienced by the reactor, as well as provide the current trust score calculated by the metric.
When the node is shutting down, the trust metric store will save history data for trust metrics associated with all known peers. This saved information allows experiences with a peer to be preserved across node executions, which can span a tracking window of days or weeks. The trust history data is loaded automatically during OnStart.
### Interface Detailed Design
Each trust metric allows for the recording of positive/negative events, querying the current trust value/score, and the stopping/pausing of tracking over time intervals. This can be seen below:
```go
// TrustMetric - keeps track of peer reliability
type TrustMetric struct {
// Private elements.
}
// Pause tells the metric to pause recording data over time intervals.
// All method calls that indicate events will unpause the metric
func (tm *TrustMetric) Pause() {}
// Stop tells the metric to stop recording data over time intervals
func (tm *TrustMetric) Stop() {}
// BadEvents indicates that an undesirable event(s) took place
func (tm *TrustMetric) BadEvents(num int) {}
// GoodEvents indicates that a desirable event(s) took place
func (tm *TrustMetric) GoodEvents(num int) {}
// TrustValue gets the dependable trust value; always between 0 and 1
func (tm *TrustMetric) TrustValue() float64 {}
// TrustScore gets a score based on the trust value, always between 0 and 100
func (tm *TrustMetric) TrustScore() int {}
// NewMetric returns a trust metric with the default configuration
func NewMetric() *TrustMetric {}
//------------------------------------------------------------------------------------------------
// For example
tm := NewMetric()
tm.BadEvents(1)
score := tm.TrustScore()
tm.Stop()
```
Some of the trust metric parameters can be configured. The weight values should probably be left alone in most cases, but the time durations for the tracking window and the individual time interval are worth considering.
```go
// TrustMetricConfig - Configures the weight functions and time intervals for the metric
type TrustMetricConfig struct {
// Determines the percentage given to current behavior
ProportionalWeight float64
// Determines the percentage given to prior behavior
IntegralWeight float64
// The window of time that the trust metric will track events across.
// This can be set to cover many days without issue
TrackingWindow time.Duration
// Each interval should be short for adaptability.
// Less than 30 seconds is too sensitive,
// and greater than 5 minutes will make the metric numb
IntervalLength time.Duration
}
// DefaultConfig returns a config with values that have been tested and produce desirable results
func DefaultConfig() TrustMetricConfig {}
// NewMetricWithConfig returns a trust metric with a custom configuration
func NewMetricWithConfig(tmc TrustMetricConfig) *TrustMetric {}
//------------------------------------------------------------------------------------------------
// For example
config := TrustMetricConfig{
TrackingWindow: time.Minute * 60 * 24, // one day
IntervalLength: time.Minute * 2,
}
tm := NewMetricWithConfig(config)
tm.BadEvents(10)
tm.Pause()
tm.GoodEvents(1) // becomes active again
```
A trust metric store should be created with a DB that has persistent storage so it can save history data across node executions. All trust metrics instantiated by the store will be created with the provided TrustMetricConfig configuration.
When you attempt to fetch the trust metric for a peer and an entry does not exist in the trust metric store, a new metric is automatically created and an entry is made within the store.
In addition to the fetching method, GetPeerTrustMetric, the trust metric store provides a method to call when a peer has disconnected from the node. This is so the metric can be paused (history data will not be saved) for periods of time when the node is not having direct experiences with the peer.
```go
// TrustMetricStore - Manages all trust metrics for peers
type TrustMetricStore struct {
cmn.BaseService
// Private elements
}
// OnStart implements Service
func (tms *TrustMetricStore) OnStart() error {}
// OnStop implements Service
func (tms *TrustMetricStore) OnStop() {}
// NewTrustMetricStore returns a store that saves data to the DB
// and uses the config when creating new trust metrics
func NewTrustMetricStore(db dbm.DB, tmc TrustMetricConfig) *TrustMetricStore {}
// Size returns the number of entries in the trust metric store
func (tms *TrustMetricStore) Size() int {}
// GetPeerTrustMetric returns a trust metric by peer key
func (tms *TrustMetricStore) GetPeerTrustMetric(key string) *TrustMetric {}
// PeerDisconnected pauses the trust metric associated with the peer identified by the key
func (tms *TrustMetricStore) PeerDisconnected(key string) {}
//------------------------------------------------------------------------------------------------
// For example
db := dbm.NewDB("trusthistory", "goleveldb", dirPathStr)
tms := NewTrustMetricStore(db, DefaultConfig())
tm := tms.GetPeerTrustMetric(key)
tm.BadEvents(1)
tms.PeerDisconnected(key)
```
## Status
Approved.
## Consequences
### Positive
- The trust metric will allow Tendermint to make non-binary security and reliability decisions
- Will help Tendermint implement deterrents that provide soft security controls, yet avoid disruption on the network
- Will provide useful profiling information when analyzing performance over time related to peer interaction
### Negative
- Requires saving the trust metric history data across node executions
### Neutral
- Keep in mind that good events need to be recorded just as bad events do with this implementation


@@ -0,0 +1,103 @@
# ADR 007: Trust Metric Usage Guide
## Context
Tendermint is required to monitor peer quality in order to inform its peer dialing and peer exchange strategies.
When a node first connects to the network, it is important that it can quickly find good peers.
Thus, while a node has fewer connections, it should prioritize connecting to higher quality peers.
As the node becomes well connected to the rest of the network, it can dial lesser known or lesser
quality peers and help assess their quality. Similarly, when queried for peers, a node should make
sure it doesn't return low quality peers.
Peer quality can be tracked using a trust metric that flags certain behaviours as good or bad. When enough
bad behaviour accumulates, we can mark the peer as bad and disconnect.
For example, when the PEXReactor makes a request for peers' network addresses from an already known peer, and the returned network addresses are unreachable, this undesirable behavior should be tracked. Returning a few bad network addresses probably shouldn't cause a peer to be dropped, while excessive amounts of this behavior does qualify the peer for removal. The originally proposed approach and design document for the trust metric can be found in the [ADR 006](adr-006-trust-metric.md) document.
The trust metric implementation allows a developer to obtain a peer's trust metric from a trust metric store and to track good and bad events relevant to that peer's behavior; at any time, the peer's metric can be queried for a current trust value. The current trust value is calculated with a formula that utilizes current behavior, previous behavior, and the change between the two. Current behavior is calculated as the percentage of good behavior within a time interval. The time interval is short, probably set between 30 seconds and 5 minutes. The historic data, on the other hand, estimates a peer's behavior over days' worth of tracking. At the end of a time interval, the current behavior becomes part of the historic data, and a new time interval begins with the good and bad counters reset to zero.
These are some important things to keep in mind regarding how the trust metrics handle time intervals and scoring:
- Each new time interval begins with a perfect score
- Bad events quickly bring the score down and good events cause the score to slowly rise
- When the time interval is over, the percentage of good events becomes historic data.
Some useful information about the inner workings of the trust metric:
- When a trust metric is first instantiated, a timer (ticker) periodically fires in order to handle transitions between trust metric time intervals
- If a peer is disconnected from a node, the timer should be paused, since the node is no longer connected to that peer
- The ability to pause the metric is supported with the store **PeerDisconnected** method and the metric **Pause** method
- After a pause, if a good or bad event method is called on a metric, it automatically becomes unpaused and begins a new time interval.
## Decision
The trust metric capability is now available, yet it still leaves the question of how it should be applied throughout Tendermint in order to properly track the quality of peers.
### Proposed Process
Peers are managed using an address book and a trust metric:
- The address book keeps a record of peers and provides selection methods
- The trust metric tracks the quality of the peers
#### Presence in Address Book
Outbound peers are added to the address book before they are dialed,
and inbound peers are added once the peer connection is set up.
Peers are also added to the address book when they are received in response to
a pexRequestMessage.
While a node has fewer than `needAddressThreshold` addresses, it will periodically request more,
via pexRequestMessage, from randomly selected peers and from newly dialed outbound peers.
When a new address is added to an address book that has more than `0.5*needAddressThreshold` addresses,
then with some low probability, a randomly chosen low quality peer is removed.
#### Outbound Peers
Peers attempt to maintain a minimum number of outbound connections by
repeatedly querying the address book for peers to connect to.
While a node has few to no outbound connections, the address book is biased to return
higher quality peers. As the node increases the number of outbound connections,
the address book is biased to return less-vetted or lower-quality peers.
#### Inbound Peers
Peers also maintain a maximum number of total connections, MaxNumPeers.
If a peer has MaxNumPeers, new incoming connections will be accepted with low probability.
When such a new connection is accepted, the peer disconnects from a probabilistically chosen low ranking peer
so it does not exceed MaxNumPeers.
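A rough sketch of that inbound policy, with an assumed acceptance probability and using the 0-100 trust score as the ranking source (neither is prescribed by this document):
```go
import "math/rand"

// maybeAcceptInbound decides whether to accept a new inbound connection when
// the node already tracks scores for its current peers. At or above
// MaxNumPeers it accepts only with probability acceptProb and returns the key
// of a low-ranking peer to evict, chosen with probability proportional to
// (100 - score) so lower-quality peers are dropped more often.
func maybeAcceptInbound(scores map[string]int, maxNumPeers int, acceptProb float64) (evict string, accept bool) {
	if len(scores) < maxNumPeers {
		return "", true
	}
	if rand.Float64() >= acceptProb {
		return "", false
	}
	total := 0
	for _, s := range scores {
		total += 100 - s
	}
	if total == 0 {
		return "", false // every peer has a perfect score; reject the newcomer
	}
	r := rand.Intn(total)
	for key, s := range scores {
		r -= 100 - s
		if r < 0 {
			return key, true
		}
	}
	return "", false // not reached
}
```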
#### Peer Exchange
When a peer receives a pexRequestMessage, it returns a random sample of high quality peers from the address book. Peers with no score or a low score should not be included in a response to pexRequestMessage.
#### Peer Quality
Peer quality is tracked in the connection and across the reactors by storing the TrustMetric in the peer's thread-safe Data store; a rough sketch of this is given after the behaviour list below.
Peer behaviour is then defined as one of the following:
- Fatal - something outright malicious that causes us to disconnect the peer and ban it from the address book for some amount of time
- Bad - Any kind of timeout, messages that don't unmarshal, fail other validity checks, or messages we didn't ask for or aren't expecting (usually worth one bad event)
- Neutral - Unknown channels/message types/version upgrades (no good or bad events recorded)
- Correct - Normal correct behavior (worth one good event)
- Good - some random majority of peers per reactor sending us useful messages (worth more than one good event).
Note that Fatal behaviour causes us to remove the peer, and neutral behaviour does not affect the score.
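Below is the sketch of the per-peer tracking just described. The `Peer` interface and the data-store key are assumptions made for illustration; the real p2p interfaces may differ.
```go
const trustMetricKey = "trust_metric"

// Peer is assumed to expose an identifying key plus a thread-safe Get/Set
// data store, mirroring the description above.
type Peer interface {
	Key() string
	Get(key string) interface{}
	Set(key string, data interface{})
}

// recordBadEvent lazily attaches the peer's trust metric from the store and
// records one bad event, e.g. after a message fails a validity check.
func recordBadEvent(p Peer, tms *TrustMetricStore) {
	tm, ok := p.Get(trustMetricKey).(*TrustMetric)
	if !ok {
		tm = tms.GetPeerTrustMetric(p.Key())
		p.Set(trustMetricKey, tm)
	}
	tm.BadEvents(1)
}
```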
## Status
Proposed.
## Consequences
### Positive
- Bringing the address book and trust metric store together will cause the network to be built in a way that encourages greater security and reliability.
### Negative
- TBD
### Neutral
- Keep in mind that good events need to be recorded just as bad events do with this implementation.


@@ -0,0 +1,16 @@
# ADR 000: Template for an ADR
## Context
## Decision
## Status
## Consequences
### Positive
### Negative
### Neutral

Binary image files added (9.6 KiB, 5.8 KiB, 10 KiB); contents not shown.

@@ -1,240 +0,0 @@
# Merkle data stores - Frey's proposal
## TL;DR
To allow the efficient creation of an ABCi app, tendermint wishes to provide a reference implementation of a key-value store that provides merkle proofs of the data. These proofs then quickly allow the ABCi app to provide an app hash to the consensus engine, as well as a full proof to any client.
This is equivalent to building a database, and I would propose designing it from the API first, then looking at how to implement this (or make an adapter from the API to existing implementations). Once we agree on the functionality and the interface, we can implement the API bindings, and then work on building adapters to existing merkle-ized data stores, or modifying the stores to support this interface.
We need to consider the API (both in-process and over the network), language bindings, maintaining handles to old state (and garbage collecting), persistence, security, providing merkle proofs, and general key-value store operations. To stay consistent with the blockchain's "single global order of operations", this data store should only allow one connection at a time to have write access.
## Overview
* **State**
* There are two concepts of state, "committed state" and "working state"
* The working state is only accessible from the ABCi app, allows writing, but does not need to support proofs.
* When we commit the "working state", it becomes a new "committed state" and has an immutable root hash, provides proofs, and can be exposed to external clients.
* **Transactions**
* The database always allows creating a read-only transaction at the last "committed state", this transaction can serve read queries and proofs.
* The database maintains all data to serve these read transactions until they are closed by the client (or time out). This allows the client(s) to determine how much old info is needed
* The database can only support *at most* one writable transaction at a time. This makes it easy to enforce serializability, and attempting to start a second writable transaction may trigger a panic.
* **Functionality**
* It must support efficient key-value operations (get/set/delete)
* It must support returning merkle proofs for any "committed state"
* It should support range queries on subsets of the key space if possible (ie. if the db doesn't hash keys)
* It should also support listening to changes to a desired key via pub-sub or similar method, so I can quickly notify you on a change to your balance without constant polling.
* It may support other db-specific query types as an extension to this interface, as long as all specified actions maintain their meaning.
* **Interface**
* This interface should be domain-specific - ie. designed just for this use case
* It should present a simple go interface for embedding the data store in-process
* It should create a gRPC/protobuf API for calling from any client
* It should provide and maintain client adapters from our in-process interface to gRPC client calls for at least golang and Java (maybe more languages?)
* It should provide and maintain server adapters from our gRPC calls to the in-process interface for golang at least (unless there is another server we wish to support)
* **Persistence**
* It must support atomic persistence upon committing a new block. That is, upon crash recovery, the state is guaranteed to represent the state at the end of a complete block (along with a note of which height it was).
* It must delay deletion of old data as long as there are open read-only transactions referring to it, thus we must maintain some sort of WAL to keep track of pending cleanup.
* When a transaction is closed, or when we recover from a crash, it should clean up all no longer needed data to avoid memory/storage leaks.
* **Security and Auth**
* If we allow connections over gRPC, we must consider these issues and allow both encryption (SSL) and some basic auth rules to prevent undesired access to the DB
* This is client-specific and does not need to be supported in the in-process, embedded version.
## Details
Here we go more in-depth in each of the sections, explaining the reasoning and more details on the desired behavior. This document is only the high-level architecture and should support multiple implementations. When building out a specific implementation, a similar document should be provided for that repo, showing how it implements these concepts, and details about memory usage, storage, efficiency, etc.
### State
The current ABCi interface avoids this question a bit and that has brought confusion. If I use `merkleeyes` to store data, which state is returned from `Query`? The current "working" state, which I would like to refer to in my ABCi application? Or the last committed state, which I would like to return to a client's query? Or an old state, which I may select based on height?
Right now, `merkleeyes` implements `Query` like a normal ABCi app and only returns committed state, which has led to problems and confusion. Thus, we need to be explicit about which state we want to view. Each viewer can then specify which state it wants to view. This allows the app to query the working state in DeliverTx, but the committed state in Query.
We can easily provide two global references for "last committed" and "current working" states. However, if we want to also allow querying of older commits... then we need some way to keep track of which ones are still in use, so we can garbage collect the unneeded ones. There is a non-trivial overhead in holding references to all past states, but also a hard-coded solution (hold onto the last 5 commits) may not support all clients. We should let the client define this somehow.
### Transactions
Transactions (in the typical database sense) are a clean and established solution to this issue. We can look at the [isolation levels](https://en.wikipedia.org/wiki/Isolation_(database_systems)#Serializable) which attempt to provide us things like "repeatable reads". That means if we open a transaction, and query some data 100 times while other processes are writing to the db, we get the same result each time. This transaction has a reference to its own local state from the time the transaction started. (We are referring to the highest isolation levels here, which correlate well with the blockchain use case.)
If we implement a read-only transaction as a reference to state at the time of creation of that transaction, we can then hold these references to various snapshots, one per block that we are interested in, and allow the client to multiplex queries and proofs from these various blocks.
If we continue using these concepts (which have informed 30+ years of server side design), we can add a few nice features to our write transactions. The first of which is `Rollback` and `Commit`. That means all the changes we make in this transaction have no effect on the database until they are committed. And until they are committed, we can always abort if we detect an anomaly, returning to the last committed state with a rollback.
There is also a nice extension to this available on some database servers, basically, "nested" transactions or "savepoints". This means that within one transaction, you can open a subtransaction/savepoint and continue work. Later you have the option to commit or rollback all work since the savepoint/subtransaction. And then continue with the main transaction.
If you don't understand why this is useful, look at how basecoin needs to [hold cached state for AppTx](https://github.com/tendermint/basecoin/blob/master/state/execution.go#L126-L149), meaning that it rolls back all modifications if the AppTx returns an error. This was implemented as a wrapper in basecoin, but it is a reasonable thing to support in the DB interface itself (especially since the implementation becomes quite non-trivial as soon as you support range queries).
To give a bit more reference to this concept in practice, read about [Savepoints in Postgresql](https://www.postgresql.org/docs/current/static/tutorial-transactions.html) ([reference](https://www.postgresql.org/docs/current/static/sql-savepoint.html)) or [Nesting transactions in SQL Server](http://dba-presents.com/index.php/databases/sql-server/43-nesting-transactions-and-save-transaction-command) (TL;DR: scroll to the bottom, section "Real nesting transactions with SAVE TRANSACTION")
### Functionality
Merkle trees work with key-value pairs, so we should most importantly focus on the basic Key-Value operations. That is `Get`, `Set`, and `Remove`. We also need to return a merkle proof for any key, along with a root hash of the tree for committing state to the blockchain. This is just the basic merkle-tree stuff.
If it is possible with the implementation, it is nice to provide access to Range Queries. That is, return all values where the key is between X and Y. If you construct your keys wisely, it is possible to store lists (1:N) relations this way. Eg, storing blog posts and the key is blog:`poster_id`:`sequence`, then I could search for all blog posts by a given `poster_id`, or even return just posts 10-19 from the given poster.
The construction of a tree that supports range queries was one of the [design decisions of go-merkle](https://github.com/tendermint/go-merkle/blob/master/README.md). It is also kind of possible with [ethereum's patricia trie](https://github.com/ethereum/wiki/wiki/Patricia-Tree) as long as the key is less than 32 bytes.
In addition to range queries, there is one more nice feature that we could add to our data store - listening to events. Depending on your context, this is "reactive programming", "event emitters", "notifications", etc... But the basic concept is that a client can listen for all changes to a given key (or set of keys), and receive a notification when this happens. This is very important to avoid [repeated polling and wasted queries](http://resthooks.org/) when a client simply wants to [detect changes](https://www.rethinkdb.com/blog/realtime-web/).
If the database provides access to some "listener" functionality, the app can choose to expose this to the external client via websockets, web hooks, http2 push events, android push notifications, etc, etc etc.... But if we want to support modern client functionality, let's add support for this reactive paradigm in our DB interface.
**TODO** support for more advanced backends, eg. Bolt....
### Go Interface
I will start with a simple go interface to illustrate the in-process interface. Once there is agreement on how this looks, we can work out the gRPC bindings to support calling out of process. These interfaces are not finalized code, but I think they demonstrate the concepts better than text and provide a strawman to get feedback.
```go
// DB represents the committed state of a merkle-ized key-value store
type DB interface {
// Snapshot returns a reference to last committed state to use for
// providing proofs; you must close it at the end so the historical
// state held to make these proofs can be garbage collected
Snapshot() Prover
// Start a transaction - only way to change state
// This will return an error if there is an open Transaction
Begin() (Transaction, error)
// These callbacks are triggered when the Transaction is Committed
// to the DB. They can be used to eg. notify clients via websockets when
// their account balance changes.
AddListener(key []byte, listener Listener)
RemoveListener(listener Listener)
}
// DBReader represents a read-only connection to a snapshot of the db
type DBReader interface {
// Queries on my local view
Has(key []byte) (bool, error)
Get(key []byte) (Model, error)
GetRange(start, end []byte, ascending bool, limit int) ([]Model, error)
Closer
}
// Prover is an interface that lets one query for Proofs, holding the
// data at a specific location in memory
type Prover interface {
DBReader
// Hash is the AppHash (RootHash) for this block
Hash() (hash []byte)
// Prove returns the data along with a merkle Proof
// Model and Proof are nil if not found
Prove(key []byte) (Model, Proof, error)
}
// Transaction is a set of state changes to the DB to be applied atomically.
// There can only be one open transaction at a time, which may only have
// at most one subtransaction at a time.
// In short, at any time, there is exactly one object that can write to the
// DB, and we can use Subtransactions to group operations and roll them back
// together (kind of like `types.KVCache` from basecoin)
type Transaction interface {
DBReader
// Change the state - will raise error immediately if this Transaction
// is not holding the exclusive write lock
Set(model Model) (err error)
Remove(key []byte) (removed bool, err error)
// Subtransaction starts a new subtransaction, rollback will not affect the
// parent. Only on Commit are the changes applied to this transaction.
// While the subtransaction exists, no write allowed on the parent.
// (You must Commit or Rollback the child to continue)
Subtransaction() Transaction
// Commit this transaction (or subtransaction), the parent reference is
// now updated.
// This only updates the persistent store if the top level transaction commits
// (You may have any number of nested sub transactions)
Commit() error
// Rollback ends the transaction and throws away all transaction-local state,
// allowing the tree to prune those elements.
// The parent transaction now recovers the write lock.
Rollback()
}
// Listener registers callbacks on changes to the data store
type Listener interface {
OnSet(key, value, oldValue []byte)
OnRemove(key, oldValue []byte)
}
// Proof represents a merkle proof for a key
type Proof interface {
RootHash() []byte
Verify(key, value, root []byte) bool
}
type Model interface {
Key() []byte
Value() []byte
}
// Closer releases the reference to this state, allowing us to garbage collect
// Make sure to call it before discarding.
type Closer interface {
Close()
}
```
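To make the intended flow concrete, here is a sketch (not final code, and with error handling abbreviated) of how an ABCi app might drive these interfaces while processing a block, using a subtransaction per message in the spirit of the basecoin AppTx example above:
```go
func applyBlock(db DB, msgs []Model) (rootHash []byte, err error) {
	txn, err := db.Begin()
	if err != nil {
		return nil, err
	}
	for _, m := range msgs {
		sub := txn.Subtransaction()
		if err := sub.Set(m); err != nil {
			sub.Rollback() // discard only this message's changes
			continue
		}
		if err := sub.Commit(); err != nil {
			return nil, err
		}
	}
	if err := txn.Commit(); err != nil {
		return nil, err
	}
	// The new committed state can now serve proofs to clients.
	snap := db.Snapshot()
	defer snap.Close()
	return snap.Hash(), nil
}
```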
### Remote Interface
The use-case of allowing out-of-process calls is very powerful, and not just for providing a merkle-ready data store to non-go applications.
If we allow the ABCi app to maintain the only writable connections, we can guarantee that all transactions are only processed through the tendermint consensus engine. We could then allow multiple "web server" machines "read-only" access and scale out the database reads, assuming the consensus engine, ABCi logic, and public key cryptography are more of a bottleneck than the database. We could even place the consensus engine, ABCi app, and data store on one machine, connected with unix sockets for security, and expose a tcp/ssl interface for reading the data, to scale out query processing over multiple machines.
Returning our focus directly to the ABCi app (which is the most important use case): an app may well want to maintain 100 or 1000 snapshots of different heights to allow people to easily query many proofs at a given height without race conditions (very important for IBC, ask Jae). Thus, we should not require a separate TCP connection for each height, as this gets quite awkward with so many connections. Also, if we want to use gRPC, we should consider the connections potentially transient (although they are more efficient with keep-alive).
Thus, the wire encoding of a transaction or a snapshot should simply return a unique id. All methods on a `Prover` or `Transaction` over the wire can send this id along with the arguments for the method call. And we just need a hash map on the server to map this id to a state.
The only negative of not requiring a persistent tcp connection for each snapshot is there is no auto-detection if the client crashes without explicitly closing the connections. Thus, I would suggest adding a `Ping` thread in the gRPC interface which keeps the Snapshot alive. If no ping is received within a server-defined time, it may automatically close those transactions. And if we consider a client with 500 snapshots that needs to ping each every 10 seconds, that is a lot of overhead, so we should design the ping to accept a list of IDs for the client and update them all. Or associate all snapshots with a clientID and then just send the clientID in the ping. (Please add other ideas on how to detect client crashes without persistent connections).
To encourage adoption, we should provide a nice client that uses this gRPC interface (like we do with ABCi). For go, the client may have the exact same interface as the in-process version, just that the error call may return network errors, not just illegal operations. We should also add a client with a clean API for Java, since that seems to be popular among app developers in the current tendermint community. Other bindings as we see the need in the server space.
### Persistence
Any data store worth its name should not lose all data on a crash. Even [redis provides some persistence](https://redis.io/topics/persistence) these days. Ideally, if the system crashes and restarts, it should have the data at the last block N that was committed. If the system crashes during the commit of block N+1, then the recovered state should be either block N or the completely committed block N+1, but no partial state between the two. Basically, the commit must be an atomic operation (even if updating 100's of records).
To avoid a lot of headaches ourselves, we can use an existing data store, such as leveldb, which provides `WriteBatch` to group all operations.
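As a small illustration of what such an atomic commit could look like on top of goleveldb's batch support (the key layout and height marker here are assumptions, not part of this proposal):
```go
import (
	"encoding/binary"

	"github.com/syndtr/goleveldb/leveldb"
)

// commitBlock writes all of a block's key-value changes plus a height marker
// in a single batch, so after a crash we recover either the full block or
// none of it.
func commitBlock(db *leveldb.DB, height uint64, changes map[string][]byte) error {
	batch := new(leveldb.Batch)
	for k, v := range changes {
		batch.Put([]byte(k), v)
	}
	h := make([]byte, 8)
	binary.BigEndian.PutUint64(h, height)
	batch.Put([]byte("last_committed_height"), h)
	return db.Write(batch, nil) // applied atomically by leveldb
}
```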
The other issue is cleaning up old state. We cannot delete any information from our persistent store, as long as any snapshot holds a reference to it (or else we get some panics when the data we query is not there). So, we need to store the outstanding deletions that we can perform when the snapshot is `Close`d. In addition, we must consider the case that the data store crashes with open snapshots. Thus, the info on outstanding deletions must also be persisted somewhere. Something like a "delete-behind log" (the opposite of a "write ahead log").
This is not a concern of the generic interface, but each implementation should take care to handle this well to avoid accumulation of unused references in the data store and eventual data bloat.
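One possible shape for such a delete-behind log is sketched below; the names are illustrative, and a real implementation would persist the pending entries alongside the data so they survive a crash:
```go
// deleteBehindLog records keys that became unreachable at a given version but
// cannot be removed yet because an open snapshot may still reference them.
type deleteBehindLog struct {
	pending map[int64][][]byte // version -> keys awaiting physical deletion
}

func (l *deleteBehindLog) markDeleted(version int64, key []byte) {
	if l.pending == nil {
		l.pending = make(map[int64][][]byte)
	}
	l.pending[version] = append(l.pending[version], key)
}

// release is called when the oldest open snapshot moves past version (or
// during crash recovery); it returns the keys that can now be purged.
func (l *deleteBehindLog) release(version int64) [][]byte {
	keys := l.pending[version]
	delete(l.pending, version)
	return keys
}
```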
#### Backing stores
It is way outside the scope of this project to build our own database that is capable of efficiently storing the data, providing multiple read-only snapshots at once, and saving it atomically. The best approach seems to be to select an existing database (ideally a simple one) that provides this functionality and build upon it, much like the current `go-merkle` implementation builds upon `leveldb`. After some research, here are the winners and losers:
**Winners**
* Leveldb - [provides consistent snapshots](https://ayende.com/blog/161705/reviewing-leveldb-part-xiii-smile-and-here-is-your-snapshot), and [provides tooling for building ACID compliance](http://codeofrob.com/entries/writing-a-transaction-manager-on-top-of-leveldb.html)
* Note there are at least two solid implementations available in go - [goleveldb](https://github.com/syndtr/goleveldb) - a pure go implementation, and [levigo](https://github.com/jmhodges/levigo) - a go wrapper around leveldb.
* Goleveldb is much easier to compile and cross-compile (not requiring cgo), while levigo (or cleveldb) seems to provide a significant performance boost (but I had trouble even running benchmarks)
* PostgreSQL - fully supports these ACID semantics if you call `SET TRANSACTION ISOLATION LEVEL SERIALIZABLE` at the beginning of a transaction (tested)
* This may be total overkill unless we also want to make use of other features, like storing data in multiple columns with secondary indexes.
* Trillian can show an example of [how to store a merkle tree in sql](https://github.com/google/trillian/blob/master/storage/mysql/tree_storage.go)
**Losers**
* Bolt - open [read-only snapshots can block writing](https://github.com/boltdb/bolt/issues/378)
* Mongo - [barely even supports atomic operations](https://docs.mongodb.com/manual/core/write-operations-atomicity/), much less multiple snapshots
**To investigate**
* [Trillian](https://github.com/google/trillian) - has a [persistent merkle tree interface](https://github.com/google/trillian/blob/master/storage/tree_storage.go) along with [backend storage with mysql](https://github.com/google/trillian/blob/master/storage/mysql/tree_storage.go), good inspiration for our design if not directly using it
* [Moss](https://github.com/couchbase/moss) - another key-value store in go, seems similar to leveldb, maybe compare with performance tests?
### Security
When allowing access out-of-process, we should provide different mechanisms to secure it. The first is the choice of binding to a local unix socket or a tcp port. The second is the optional use of ssl to encrypt the connection (very important over tcp). The third is authentication to control access to the database.
We may also want to consider the case of two server connections with different permissions, eg. a local unix socket that allows write access with no more credentials, and a public TCP connection with ssl and authentication that only provides read-only access.
The use of ssl is quite easy in go, we just need to generate and sign a certificate, so it is nice to be able to disable it for dev machines, but it is very important for production.
For authentication, let me sketch out a minimal solution. The server could just have a simple config file with key/bcrypt(password) pairs along with read/write permission level, and read that upon startup. The client must provide a username and password in the HTTP headers when making the original HTTPS gRPC connection.
This is super minimal to provide some protection. Things like LDAP, OAuth and single-sign on seem overkill and even potential security holes. Maybe there is another solution somewhere in the middle.


@@ -1,17 +0,0 @@
# Merkle data stores
To allow the efficient creation of an ABCi app, tendermint wishes to provide a reference implementation of a key-value store that provides merkle proofs of the data. These proofs then quickly allow the ABCi app to provide an app hash to the consensus engine, as well as a full proof to any client.
This engine is currently implemented in `go-merkle` with `merkleeyes` providing a language-agnostic binding via ABCi. It uses `tmlibs/db` bindings internally to persist data to leveldb.
What are some of the requirements of this store:
* It must support efficient key-value operations (get/set/delete)
* It must support persistence.
* We must only persist complete blocks, so when we come up after a crash we are at the state of block N or N+1, but not in-between these two states.
* It must allow us to read/write from one uncommitted state (working state), while serving other queries from the last committed state, and provide a way to determine which one to serve for each client.
* It must allow us to hold references to old state, to allow providing proofs from 20 blocks ago. We can define some limits as to the maximum time to hold this data.
* We provide in process binding in Go
* We provide language-agnostic bindings when running the data store as its own process.

Binary image files added, including docs/assets/abci.png (sizes 43 KiB, 103 KiB, 2.4 MiB, 52 KiB); contents not shown.

docs/conf.py Normal file

@@ -0,0 +1,208 @@
# -*- coding: utf-8 -*-
#
# Tendermint documentation build configuration file, created by
# sphinx-quickstart on Mon Aug 7 04:55:09 2017.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
import urllib
import sphinx_rtd_theme
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = []
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
source_suffix = ['.rst', '.md']
# source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Tendermint'
copyright = u'2017, The Authors'
author = u'Tendermint'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = u''
# The full version, including alpha/beta/rc tags.
release = u''
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', 'architecture']
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'sphinx_rtd_theme'
# html_theme = 'alabaster'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
#
# This is required for the alabaster theme
# refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars
html_sidebars = {
'**': [
'about.html',
'navigation.html',
'relations.html', # needs 'show_related': True theme option to display
'searchbox.html',
'donate.html',
]
}
# -- Options for HTMLHelp output ------------------------------------------
# Output file base name for HTML help builder.
htmlhelp_basename = 'Tendermintdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'Tendermint.tex', u'Tendermint Documentation',
u'The Authors', 'manual'),
]
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'Tendermint', u'Tendermint Documentation',
[author], 1)
]
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'Tendermint', u'Tendermint Documentation',
author, 'Tendermint', 'Byzantine Fault Tolerant Consensus.',
'Database'),
]
# ---- customization -------------------------
tools_repo = "https://raw.githubusercontent.com/tendermint/tools/"
tools_branch = "master"
tools_dir = "./tools"
assets_dir = tools_dir + "/assets"
if not os.path.isdir(tools_dir):
    os.mkdir(tools_dir)
if not os.path.isdir(assets_dir):
    os.mkdir(assets_dir)
urllib.urlretrieve(tools_repo+tools_branch+'/ansible/README.rst', filename=tools_dir+'/ansible.rst')
urllib.urlretrieve(tools_repo+tools_branch+'/ansible/assets/a_plus_t.png', filename=assets_dir+'/a_plus_t.png')
urllib.urlretrieve(tools_repo+tools_branch+'/docker/README.rst', filename=tools_dir+'/docker.rst')
urllib.urlretrieve(tools_repo+tools_branch+'/mintnet-kubernetes/README.rst', filename=tools_dir+'/mintnet-kubernetes.rst')
urllib.urlretrieve(tools_repo+tools_branch+'/mintnet-kubernetes/assets/gce1.png', filename=assets_dir+'/gce1.png')
urllib.urlretrieve(tools_repo+tools_branch+'/mintnet-kubernetes/assets/gce2.png', filename=assets_dir+'/gce2.png')
urllib.urlretrieve(tools_repo+tools_branch+'/mintnet-kubernetes/assets/statefulset.png', filename=assets_dir+'/statefulset.png')
urllib.urlretrieve(tools_repo+tools_branch+'/mintnet-kubernetes/assets/t_plus_k.png', filename=assets_dir+'/t_plus_k.png')
urllib.urlretrieve(tools_repo+tools_branch+'/terraform-digitalocean/README.rst', filename=tools_dir+'/terraform-digitalocean.rst')
urllib.urlretrieve(tools_repo+tools_branch+'/tm-bench/README.rst', filename=tools_dir+'/benchmarking-and-monitoring.rst')
# the readme for below is included in tm-bench
# urllib.urlretrieve('https://raw.githubusercontent.com/tendermint/tools/master/tm-monitor/README.rst', filename='tools/tm-monitor.rst')
#### abci spec #################################
abci_repo = "https://raw.githubusercontent.com/tendermint/abci/"
abci_branch = "spec-docs"
urllib.urlretrieve(abci_repo+abci_branch+'/specification.rst', filename='abci-spec.rst')

Some files were not shown because too many files have changed in this diff.