Compare commits

...

168 Commits

Author SHA1 Message Date
Ethan Buchman
792b12573e Prepare v0.30.0 (#3287)
* changelog, upgrading, version

* update for evidence fixes

* linkify

* fix an entry
2019-02-08 18:50:02 -05:00
Ethan Buchman
4f2ef36701 types.NewCommit (#3275)
* types.NewCommit

* use types.NewCommit everywhere

* fix log in unsafe_reset

* memoize height and round in constructor

* notes about deprecating toVote

* bring back memoizeHeightRound
2019-02-08 18:40:41 -05:00
Ethan Buchman
6b1b595951 Merge pull request #3286 from tendermint/bucky/fix-duplicate-evidence
Fixes for duplicate evidence
2019-02-08 18:33:45 -05:00
Ismail Khoffi
87bdc42bf8 Reject blocks with committed evidence (#37)
* evidence: NewEvidencePool takes evidenceDB

* evidence: failing TestStoreCommitDuplicate

tendermint/security#35

* GetEvidence -> GetEvidenceInfo

* fix TestStoreCommitDuplicate

* comment in VerifyEvidence

* add check if evidence was already seen
 - modify EvidencePool interface (EvidenceStore is not known in ApplyBlock):
   - add IsCommitted method to iface
 - add test

* update changelog

* fix TestStoreMark:
 - priority in evidence info gets reset to zero after evidence gets committed

* review comments: simplify EvidencePool.IsCommitted
 - delete obsolete EvidenceStore.IsCommitted

* add simple test for IsCommitted

* update changelog: this is actually breaking (PR number still missing)

* fix TestStoreMark:
 - priority in evidence info gets reset to zero after evidence gets
 committed

* review suggestion: simplify return
2019-02-08 18:30:45 -05:00
Ethan Buchman
90ba63948a Sec/bucky/35 commit duplicate evidence (#36)
Don't add committed evidence to evpool
2019-02-08 18:25:48 -05:00
Anca Zamfir
cce4d21ccb treat validator updates as set (#3222)
* Initial commit for 3181..still early

* unit test updates

* unit test updates

* fix check of dups across updates and deletes

* simplify the processChange() func

* added overflow check utest

* Added checks for empty valset, new utest

* deepcopy changes in processUpdate()

* moved to new API, fixed tests

* test cleanup

* address review comments

* make sure votePower > 0

* gofmt fixes

* handle duplicates and invalid values

* more work on tests, review comments

* Renamed and explained K

* make TestVal private

* split verifyUpdatesAndComputeNewPriorities.., added check for deletes

* return error if validator set is empty after processing changes

* address review comments

* lint err

* Fixed the total voting power and added comments

* fix lint

* fix lint
2019-02-08 13:05:09 -05:00
Ismail Khoffi
c1f7399a86 review comment: cleaner constant for N/2, delete secp256k1N and use (#3279)
`secp256k1.S256().N` directly instead
2019-02-08 09:48:09 -05:00
Ethan Buchman
44a89a3537 Merge pull request #3282 from tendermint/master
Merge master back to develop
2019-02-08 09:43:36 -05:00
Ethan Buchman
a8dbc64319 Merge pull request #3276 from tendermint/release/v0.29.2
Release/v0.29.2
2019-02-08 09:42:52 -05:00
Ethan Buchman
af6e6cd350 remove MixEntropy (#3278)
* remove MixEntropy

* changelog
2019-02-07 20:12:57 -05:00
Ethan Buchman
ad4bd92fec secp256k1: change build tags (#3277) 2019-02-07 19:57:30 -05:00
Ethan Buchman
f571ee8876 prepare v0.29.2 (#3272)
* update changelog

* linkify

* bump version

* update main changelog

* final fixes

* entry for wal fix

* changelog preamble

* remove a line
2019-02-07 19:34:01 -05:00
Anton Kaliaev
11e36d0bfb addrbook_test: preallocate memory for bookSizes (#3268)
Fixes https://circleci.com/gh/tendermint/tendermint/44901
2019-02-07 17:16:31 +04:00
Ethan Buchman
354a08c25a p2p: fix infinite loop in addrbook (#3232)
* failing test

* fix infinite loop in addrbook

There are cases where we only have a small number of addresses marked
good ("old"), but the selection mechanism keeps trying to select more of these
addresses, and hence ends up in an infinite loop. Here we fix this to
only try and select such "old" addresses if we have enough of them. Note this
means, if we don't have enough of them, we may return more "new"
addresses than otherwise expected by the newSelectionBias.

This whole GetSelectionWithBias method probably needs to be rewritten,
but this is a quick fix for the issue.

* changelog

* fix infinite loop if not enough new addrs

* fix another potential infinite loop

if a.nNew == 0 -> pickFromOldBucket=true, but we don't have enough items
  (a.nOld > len(oldBucketToAddrsMap) false)

* Revert "fix another potential infinite loop"

This reverts commit 146540c1125597162bd89820d611f6531f5e5e4b.

* check num addresses instead of buckets, new test

* fixed the int division

* add slack to bias % in test, lint fixes

* Added checks for selection content in test

* test cleanup

* Apply suggestions from code review

Co-Authored-By: ebuchman <ethan@coinculture.info>

* address review comments

* change after  Anton's review comments

* use the same docker image we use for testing

when building a binary for localnet

* switch back to circleci classic

* more review comments

* more review comments

* refactor addrbook_test

* build linux binary inside docker

in attempt to fix

```
--> Running dep
+ make build-linux
GOOS=linux GOARCH=amd64 make build
make[1]: Entering directory `/home/circleci/.go_workspace/src/github.com/tendermint/tendermint'
CGO_ENABLED=0 go build -ldflags "-X github.com/tendermint/tendermint/version.GitCommit=`git rev-parse --short=8 HEAD`" -tags 'tendermint' -o build/tendermint ./cmd/tendermint/
p2p/pex/addrbook.go:373:13: undefined: math.Round
```

* change dir from /usr to /go

* use concrete Go version for localnet binary

* check for nil addresses just to be sure
2019-02-07 15:58:23 +04:00
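
The guard described in the addrbook entry above can be sketched in Go as follows; the pool names and the selection logic are simplified stand-ins, not the real p2p/pex bucket code:

```
package main

import "fmt"

// selectWithBias is a simplified sketch of the guard in the fix above: aim for
// roughly (100-biasTowardNew)% picks from the "old" pool, but cap each target
// by how many addresses actually exist, so the selection can never spin
// forever waiting for "old" addresses that aren't there.
func selectWithBias(oldAddrs, newAddrs []string, want, biasTowardNew int) []string {
	numOld := want * (100 - biasTowardNew) / 100
	if numOld > len(oldAddrs) {
		numOld = len(oldAddrs) // not enough "old": take more "new" instead
	}
	numNew := want - numOld
	if numNew > len(newAddrs) {
		numNew = len(newAddrs) // not enough "new" either: return fewer addresses
	}
	selected := make([]string, 0, numOld+numNew)
	selected = append(selected, oldAddrs[:numOld]...)
	selected = append(selected, newAddrs[:numNew]...)
	return selected
}

func main() {
	old := []string{"1.1.1.1:26656"} // only one "old" (tried and good) address
	newer := []string{"2.2.2.2:26656", "3.3.3.3:26656", "4.4.4.4:26656"}
	// Asking for 3 addresses with a 30% bias toward new would previously keep
	// trying to draw 2 "old" addresses forever; here it returns 1 old + 2 new.
	fmt.Println(selectWithBias(old, newer, 3, 30))
}
```
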
Thane Thomson
e70f27c8e4 Add remote signer test harness (KMS) (#3149)
* WIP: Starts adding remote signer test harness

This commit adds a new command to Tendermint to allow for us to build a
standalone binary to test remote signers such as KMS
(https://github.com/tendermint/kms).

Right now, all it does is test that the local public key matches the
public key reported by the client, and fails at the point where it
attempts to get the client to sign a proposal.

* Fixes typo

* Fixes proposal validation test

This commit fixes the proposal validation test as per #3149. It also
moves the test harness into its own internal package to isolate its
exports from the `privval` package.

* Adds vote signing validation

* Applying recommendations from #3149

* Adds function descriptions for test harness

* Adds ability to ask remote signer to shut down

Prior to this commit, the remote signer needs to manually be shut down,
which is not ideal for automated testing. This commit allows us to send
a poison pill message to the KMS to let it shut down gracefully once
testing is done (whether the tests pass or fail).

* Adds tests for remote signer test harness

This commit makes some minor modifications to a few files to allow for
testing of the remote signer test harness. Two tests are added here:
checking for a fully successful (the ideal) case, and for the case where
the maximum number of retries has been reached when attempting to accept
incoming connections from the remote signer.

* Condenses serialization of proposals and votes using existing Tendermint functions

* Removes now-unnecessary amino import and codec

* Adds error message for vote signing failure

* Adds key extraction command for integration test

Took the code from here:
https://gist.github.com/Liamsi/a80993f24bff574bbfdbbfa9efa84bc7 to
create a simple utility command to extract a key from a local Tendermint
validator for use in KMS integration testing.

* Makes path expansion success non-compulsory

* Fixes segfault on SIGTERM

We need an additional variable to keep track of whether we're
successfully connected, otherwise hitting Ctrl+Break during execution
causes a segmentation fault. This now allows for a clean shutdown.

* Consolidates shutdown checks

* Adds comments indicating codes for easy lookup

* Adds Docker build for remote signer harness

Updates the `DOCKER/build.sh` and `DOCKER/push.sh` files to allow one to
override the image name and Dockerfile using environment variables.
Updates the primary `Makefile` as well as the `DOCKER/Makefile` to allow
for building the `remote_val_harness` Docker image.

* Adds build_remote_val_harness_docker_image to .PHONY

* Removes remote signer poison pill messaging functionality

* Reduces fluff code in command line parsing

As per
https://github.com/tendermint/tendermint/pull/3149#pullrequestreview-196171788,
this reduces the amount of fluff code in the PR down to the bare
minimum.

* Fixes ordering of error check and info log

* Moves remove_val_harness cmd into tools folder

It seems to make sense to rather keep the remote signer test harness in
its own tool folder (now rather named `tm-signer-harness` to keep with
the tool naming convention). It is actually a separate tool, not meant
to be one of the core binaries, but supplementary and supportive.

* Updates documentation for tm-signer-harness

* Refactors flag parsing to be more compact and less redundant

* Adds version sub-command help

* Removes extraneous flags parsing

* Adds CHANGELOG_PENDING entry for tm-signer-harness

* Improves test coverage

Adds a few extra parameters to the `MockPV` type to fake broken vote and
proposal signing. Also adds some more tests for the test harness so as
to increase coverage for failed cases.

* Fixes formatting for CHANGELOG_PENDING.md

* Fix formatting for documentation config

* Point users towards official Tendermint docs for tools documentation

* Point users towards official Tendermint docs for tm-signer-harness

* Remove extraneous constant

* Rename TestHarness.sc to TestHarness.spv for naming consistency

* Refactor to remove redundant goroutine

* Refactor conditional to cleaner switch statement and better error handling for listener protocol

* Remove extraneous goroutine

* Add note about installing tmkms via Cargo

* Fix typo in naming of output signing key

* Add note about where to find chain ID

* Replace /home/user with ~/ for brevity

* Fixes "signer.key" typo

* Minor edits for clarification of the tm-signer-harness build/setup process
2019-02-07 10:32:32 +04:00
Anton Kaliaev
fcebdf6720 Merge pull request #3261 from tendermint/anton/circle
Revert "quick fix for CircleCI (#2279)"
2019-02-07 10:28:22 +04:00
Ethan Buchman
9e9026452c p2p/conn: don't hold stopMtx while waiting (#3254)
* p2p/conn: fix deadlock in FlushStop/OnStop

* makefile: set_with_deadlock

* close doneSendRoutine at end of sendRoutine

* conn: initialize channs in OnStart
2019-02-06 10:29:51 -05:00
Ethan Buchman
45b70ae031 fix non deterministic test failures and race in privval socket (#3258)
* node: decrease retry conn timeout in test

Should fix #3256

The retry timeout was set to the default, which is the same as the
accept timeout, so it's no wonder this would fail. Here we decrease the
retry timeout so we can try many times before the accept timeout.

* p2p: increase handshake timeout in test

This fails sometimes, presumably because the handshake timeout is so low
(only 50ms). So increase it to 1s. Should fix #3187

* privval: fix race with ping. closes #3237

Pings happen in a go-routine and can happen concurrently with other
messages. Since we use a request/response protocol, we expect to send a
request and get back the corresponding response. But with pings
happening concurrently, this assumption could be violated. We were using
a mutex, but only a RWMutex, where the RLock was being held for sending
messages - this was to allow the underlying connection to be replaced if
it fails. Turns out we actually need to use a full lock (not just a read
lock) to prevent multiple requests from happening concurrently.

* node: fix test name. DelayedStop -> DelayedStart

* autofile: Wait() method

In the TestWALTruncate in consensus/wal_test.go we remove the WAL
directory at the end of the test. However the wal.Stop() does not
properly wait for the autofile group to finish shutting down. Hence it
was possible that the group's go-routine is still running when the
cleanup happens, which causes a panic since the directory disappeared.
Here we add a Wait() method to properly wait until the go-routine exits
so we can safely clean up. This fixes #2852.
2019-02-06 10:24:43 -05:00
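
A toy Go illustration of the privval locking fix above; `signerClient` and its channel "wire" are hypothetical stand-ins, not the real privval API. The point is that the whole send/receive exchange must hold a full lock so a concurrent Ping cannot consume another request's response:

```
package main

import (
	"fmt"
	"sync"
)

// signerClient is a toy stand-in for the privval socket client: it speaks a
// strict request/response protocol over a single connection, so a response
// must always belong to the request that was just sent.
type signerClient struct {
	mtx  sync.Mutex // full lock, not RWMutex: one exchange at a time
	conn chan string
}

// request holds the lock across both the send and the matching receive.
// With only an RLock here, a concurrent Ping could consume this call's
// response (the race described in the fix above).
func (c *signerClient) request(msg string) string {
	c.mtx.Lock()
	defer c.mtx.Unlock()
	c.conn <- msg   // write the request to the "wire"
	return <-c.conn // read the response belonging to this request
}

func main() {
	// A buffered channel stands in for a wire plus an echo server.
	c := &signerClient{conn: make(chan string, 1)}
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			fmt.Println(c.request(fmt.Sprintf("SignVote-%d", i)))
		}(i)
	}
	wg.Wait()
}
```
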
Ethan Buchman
4429826229 cmn: GetFreePort (#3255) 2019-02-06 10:14:03 -05:00
Anton Kaliaev
1386707ceb rename TestGCRandom to _TestGCRandom 2019-02-06 18:36:54 +04:00
Anton Kaliaev
1a35895ac8 remove gotype linter
WARN [runner/nolint] Found unknown linters in //nolint directives: gotype
2019-02-06 18:26:56 +04:00
Anton Kaliaev
6941d1bb35 use nolint label instead of commenting 2019-02-06 18:21:32 +04:00
Anton Kaliaev
23314daee4 comment out clist tests w/ depend on runtime.SetFinalizer 2019-02-06 15:38:02 +04:00
Anton Kaliaev
3c8156a55a preallocating memory when we can 2019-02-06 15:24:54 +04:00
Anton Kaliaev
ffd3bf8448 remove or comment out unused code 2019-02-06 15:16:38 +04:00
Anton Kaliaev
da33dd04cc switch to golangci-lint from gometalinter 🚤 2019-02-06 15:00:55 +04:00
Anton Kaliaev
d8f0bc3e60 Revert "quick fix for CircleCI (#2279)"
This reverts commit 1cf6712a36.
2019-02-06 14:11:35 +04:00
Ethan Buchman
1809efa350 Introduce CommitSig alias for Vote in Commit (#3245)
* types: memoize height/round in commit instead of first vote

* types: commit.ValidateBasic in VerifyCommit

* types: new CommitSig alias for Vote

In preparation for reducing the redundancy in Commits, we introduce the
CommitSig as an alias for Vote. This is non-breaking on the protocol,
and minor breaking on the Go API, as Commit now contains a list of
CommitSig instead of Vote.

* remove dependence on ToVote

* update some comments

* fix tests

* fix tests

* fixes from review
2019-02-04 13:01:59 -05:00
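
The shape of this change can be sketched with trimmed-down types (not the full `types` package):

```
package main

import "fmt"

// Vote is a trimmed-down stand-in for types.Vote.
type Vote struct {
	Height int64
	Round  int
}

// CommitSig is the new name for a vote stored inside a Commit. Declaring it
// over the same fields keeps the wire encoding identical (non-breaking on the
// protocol), while Go callers that expected []*Vote see a minor break.
type CommitSig Vote

// Commit now carries CommitSigs instead of Votes.
type Commit struct {
	Precommits []*CommitSig
}

func main() {
	v := &Vote{Height: 1, Round: 0}
	commit := Commit{Precommits: []*CommitSig{(*CommitSig)(v)}} // explicit conversion
	fmt.Println(commit.Precommits[0].Height)
}
```
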
Ethan Buchman
39eba4e154 WAL: better errors and new fail point (#3246)
* privval: more info in errors

* wal: change Debug logs to Info

* wal: log and return error on corrupted wal instead of panicking

* fail: Exit right away instead of sending interrupt

* consensus: FAIL before handling our own vote

allows replicating #3089:
- run using `FAIL_TEST_INDEX=0`
- delete some bytes from the end of the WAL
- start normally

Results in logs like:

```
I[2019-02-03|18:12:58.225] Searching for height module=consensus wal=/Users/ethanbuchman/.tendermint/data/cs.wal/wal height=1 min=0 max=0
E[2019-02-03|18:12:58.225] Error on catchup replay. Proceeding to start ConsensusState anyway module=consensus err="failed to read data: EOF"
I[2019-02-03|18:12:58.225] Started node module=main nodeInfo="{ProtocolVersion:{P2P:6 Block:9 App:1} ID_:35e87e93f2e31f305b65a5517fd2102331b56002 ListenAddr:tcp://0.0.0.0:26656 Network:test-chain-J8JvJH Version:0.29.1 Channels:4020212223303800 Moniker:Ethans-MacBook-Pro.local Other:{TxIndex:on RPCAddress:tcp://0.0.0.0:26657}}"
E[2019-02-03|18:12:58.226] Couldn't connect to any seeds module=p2p
I[2019-02-03|18:12:59.229] Timed out module=consensus dur=998.568ms height=1 round=0 step=RoundStepNewHeight
I[2019-02-03|18:12:59.230] enterNewRound(1/0). Current: 1/0/RoundStepNewHeight module=consensus height=1 round=0
I[2019-02-03|18:12:59.230] enterPropose(1/0). Current: 1/0/RoundStepNewRound module=consensus height=1 round=0
I[2019-02-03|18:12:59.230] enterPropose: Our turn to propose module=consensus height=1 round=0 proposer=AD278B7767B05D7FBEB76207024C650988FA77D5 privValidator="PrivValidator{AD278B7767B05D7FBEB76207024C650988FA77D5 LH:1, LR:0, LS:2}"
E[2019-02-03|18:12:59.230] enterPropose: Error signing proposal module=consensus height=1 round=0 err="Error signing proposal: Step regression at height 1 round 0. Got 1, last step 2"
I[2019-02-03|18:13:02.233] Timed out module=consensus dur=3s height=1 round=0 step=RoundStepPropose
I[2019-02-03|18:13:02.233] enterPrevote(1/0). Current: 1/0/RoundStepPropose module=consensus
I[2019-02-03|18:13:02.233] enterPrevote: ProposalBlock is nil module=consensus height=1 round=0
E[2019-02-03|18:13:02.234] Error signing vote module=consensus height=1 round=0 vote="Vote{0:AD278B7767B0 1/00/1(Prevote) 000000000000 000000000000 @ 2019-02-04T02:13:02.233897Z}" err="Error signing vote: Conflicting data"
```

Notice the EOF, the step regression, and the conflicting data.

* wal: change errors to be DataCorruptionError

* exit on corrupt WAL

* fix log

* fix new line
2019-02-04 13:00:06 -05:00
Ethan Buchman
eb4e23b91e fix FlushStop (#3247)
* p2p/pex: failing test

* p2p/conn: add stopMtx for FlushStop and OnStop

* changelog
2019-02-04 07:30:24 -08:00
Ismail Khoffi
6485e68beb Use ethereum's secp256k1 lib (#3234)
* switch from fork (tendermint/btcd) to orig package (btcsuite/btcd); also

 - remove obsolete check in test `size != -1` is always true
 - WIP as the serialization still needs to be wrapped

* WIP: wrap signature & privkey, pubkey needs to be wrapped as well

* wrap pubkey too

* use "github.com/ethereum/go-ethereum/crypto/secp256k1" if cgo is
available, else use "github.com/btcsuite/btcd/btcec" and take care of
lower-S when verifying

Annoyingly, had to disable pruning when importing
github.com/ethereum/go-ethereum/ :-/

* update comment

* update comment

* emulate signature_nocgo.go for additional benchmarks:
592bf6a59c/crypto/signature_nocgo.go (L60-L76)

* use our format (r || s) in lower-s form when in the non-cgo case

* remove comment about using the C library directly

* vendor github.com/btcsuite/btcd too

* Add test for the !cgo case

* update changelog pending

Closes #3162 #3163
Refs #1958, #2091, tendermint/btcd#1
2019-02-04 12:24:54 +04:00
Anton Kaliaev
d470945503 update gometalinter to 3.0.0 (#3233)
in the attempt to fix https://circleci.com/gh/tendermint/tendermint/43165

also

    code is simplified by running gofmt -s .
    remove unused vars
    enable linters we're currently passing
    remove deprecated linters
2019-01-30 12:24:26 +04:00
Anton Kaliaev
8985a1fa63 pubsub: fixes after Ethan's review (#3212)
in https://github.com/tendermint/tendermint/pull/3209
2019-01-29 13:16:43 +04:00
Ismail Khoffi
6dd817cbbc secret connection: check for low order points (#3040)
> Implement a check for the blacklisted low order points, ala the X25519 has_small_order() function in libsodium

(#3010 (comment))
resolves first half of #3010
2019-01-29 12:44:59 +04:00
rickyyangz
0b3a87a323 mempool: correct args order in the log msg (#3221)
Before: Unexpected tx response from proxy during recheck\n Expected: {r.CheckTx.Data}, got: {memTx.tx}
After: Unexpected tx response from proxy during recheck\n Expected: {memTx.tx}, got: {tx}

Closes #3214
2019-01-29 10:12:07 +04:00
Zach
e1edd2aa6a hardcode rpc link (#3223) 2019-01-28 23:41:37 +04:00
Thane Thomson
9a0bfafef6 docs: fix links (#3220)
Because there's nothing worse than having to copy/paste a link from a
web page to navigate to it 😁
2019-01-28 17:41:39 +04:00
Thane Thomson
a335caaedb alias amino imports (#3219)
As per conversation here: https://github.com/tendermint/tendermint/pull/3218#discussion_r251364041

This is the result of running the following code on the repo:

```bash
find . -name '*.go' | grep -v 'vendor/' | xargs -n 1 goimports -w
```
2019-01-28 16:13:17 +04:00
Anton Kaliaev
ff3c4bfc76 add go-deadlock tool to help detect deadlocks (#3218)
* add go-deadlock tool to help detect deadlocks

Run it with `make test_with_deadlock`. After it's done, use Git to
cleanup `git checkout .`

Link: https://github.com/sasha-s/go-deadlock/
Replaces https://github.com/tendermint/tendermint/pull/3148

* add a target to cleanup changes
2019-01-28 14:57:47 +04:00
Anton Kaliaev
8d2dd7e554 refactor TestListenerConnectDeadlines to avoid data races (#3201)
Fixes #3179
2019-01-28 12:38:11 +04:00
cong
71e5939441 start eventBus & indexerService before replay and use them while replaying blocks (#3194)
so if we did not index the last block (because of panic or smth else), we index it during replay

Closes #3186
2019-01-28 11:36:35 +04:00
Ethan Buchman
d91ea9b59d adr-033 update 2019-01-26 14:30:29 +04:00
Ethan Buchman
9b6b792ce7 pubsub: comments 2019-01-26 14:30:29 +04:00
Ethan Buchman
57af99d901 types: comments on user vs internal events
Distinguish between user events and internal consensus events
2019-01-26 14:30:29 +04:00
Ethan Buchman
a58d5897e4 note about TmCoreSemVer 2019-01-26 14:30:29 +04:00
Jae Kwon
ddbdffb4e5 add design philosophy doc (#3034) 2019-01-25 23:00:55 +04:00
Ismail Khoffi
d6dd43cdaa adr: style fixes (#3206)
- links to issues
 - fix a few markdown glitches
 - inline code
 - etc
2019-01-25 17:38:26 +04:00
Joon
75cbe4a1c1 R4R: Config TestRoot modification for LCD test (#3177)
* add ResetTestRootWithChainID

* modify chainid
2019-01-25 08:10:36 -05:00
Ethan Buchman
27c1563bf0 Merge pull request #3205 from tendermint/master
Merge master back to develop
2019-01-24 11:35:01 -05:00
Ethan Buchman
4d7b29cd8f Merge pull request #3203 from tendermint/release/v0.29.1
Release/v0.29.1
2019-01-24 11:34:20 -05:00
Ethan Buchman
90970d0ddc fix changelog 2019-01-24 11:19:52 -05:00
Anton Kaliaev
bb0a9b3d6d bump version 2019-01-24 19:18:32 +04:00
Anton Kaliaev
8992596192 update changelog 2019-01-24 19:17:53 +04:00
Zarko Milosevic
c20fbed2f7 [WIP] fix halting issue (#3197)
fix halting issue
2019-01-24 09:33:47 -05:00
Anton Kaliaev
c4157549ab only log "Reached max attempts to dial" once (#3144)
Closes #3037
2019-01-24 13:53:02 +04:00
Infinytum
fbd1e79465 docs: fix lite client formatting (#3198)
Closes #3180
2019-01-24 12:10:34 +04:00
Anton Kaliaev
1efacaa8d3 docs: update pubsub ADR (#3131)
* docs: update pubsub ADR

* third version
2019-01-24 11:33:58 +04:00
Gautham Santhosh
98b42e9eb2 docs: explain how someone can run his/her own ABCI app on localnet (#3195)
Closes #3192.
2019-01-24 10:45:43 +04:00
Anton Kaliaev
2449bf7300 p2p: file descriptor leaks (#3150)
* close peer's connection to avoid fd leak

Fixes #2967

* rename peer#Addr to RemoteAddr

* fix test

* fixes after Ethan's review

* bring back the check

* changelog entry

* write a test for switch#acceptRoutine

* increase timeouts? :(

* remove extra assertNPeersWithTimeout

* simplify test

* assert number of peers (just to be safe)

* Cleanup in OnStop

* run tests with verbose flag on CircleCI

* spawn a reading routine to prevent connection from closing

* get port from the listener

random port is faster, but often results in

```
panic: listen tcp 127.0.0.1:44068: bind: address already in use [recovered]
        panic: listen tcp 127.0.0.1:44068: bind: address already in use

goroutine 79 [running]:
testing.tRunner.func1(0xc0001bd600)
        /usr/local/go/src/testing/testing.go:792 +0x387
panic(0x974d20, 0xc0001b0500)
        /usr/local/go/src/runtime/panic.go:513 +0x1b9
github.com/tendermint/tendermint/p2p.MakeSwitch(0xc0000f42a0, 0x0, 0x9fb9cc, 0x9, 0x9fc346, 0xb, 0xb42128, 0x0, 0x0, 0x0, ...)
        /home/vagrant/go/src/github.com/tendermint/tendermint/p2p/test_util.go:182 +0xa28
github.com/tendermint/tendermint/p2p.MakeConnectedSwitches(0xc0000f42a0, 0x2, 0xb42128, 0xb41eb8, 0x4f1205, 0xc0001bed80, 0x4f16ed)
        /home/vagrant/go/src/github.com/tendermint/tendermint/p2p/test_util.go:75 +0xf9
github.com/tendermint/tendermint/p2p.MakeSwitchPair(0xbb8d20, 0xc0001bd600, 0xb42128, 0x2f7, 0x4f16c0)
        /home/vagrant/go/src/github.com/tendermint/tendermint/p2p/switch_test.go:94 +0x4c
github.com/tendermint/tendermint/p2p.TestSwitches(0xc0001bd600)
        /home/vagrant/go/src/github.com/tendermint/tendermint/p2p/switch_test.go:117 +0x58
testing.tRunner(0xc0001bd600, 0xb42038)
        /usr/local/go/src/testing/testing.go:827 +0xbf
created by testing.(*T).Run
        /usr/local/go/src/testing/testing.go:878 +0x353
exit status 2
FAIL    github.com/tendermint/tendermint/p2p    0.350s
```
2019-01-22 13:23:18 -05:00
Ethan Buchman
3362da0a69 Merge pull request #3191 from tendermint/master
Merge master to develop
2019-01-22 13:19:58 -05:00
Ethan Buchman
4514842a63 Merge pull request #3185 from tendermint/release/v0.29.0
Release/v0.29.0
2019-01-22 13:19:28 -05:00
Ethan Buchman
a97d6995c9 fix changelog indent (#3190) 2019-01-22 12:24:26 -05:00
Ethan Buchman
d9d4f3e629 Prepare v0.29.0 (#3184)
* update changelog and upgrading

* add note about max voting power in abci spec

* update version

* changelog
2019-01-21 19:32:10 -05:00
Ethan Buchman
7a8aeff4b0 update spec for Merkle RFC 6962 (#3175)
* spec: specify when MerkleRoot is on hashes

* remove unnecessary hash methods

* update changelog

* fix test
2019-01-21 10:02:57 -05:00
Ethan Buchman
de5a6010f0 fix DynamicVerifier for large validator set changes (#3171)
* base verifier: bc->bv and check chainid

* improve some comments

* comments in dynamic verifier

* fix comment in doc about BaseVerifier

It requires the validator set to perfectly match.

* failing test for #2862

* move errTooMuchChange to types. fixes #2862

* changelog, comments

* ic -> dv

* update comment, link to issue
2019-01-21 09:21:04 -05:00
Ethan Buchman
da95f4aa6d mempool: enforce maxMsgSize limit in CheckTx (#3168)
- fixes #3008
- reactor requires encoded messages are less than maxMsgSize
- requires size of tx + amino-overhead to not exceed maxMsgSize
2019-01-20 17:27:49 -05:00
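
A rough sketch of that size check; the constants here are illustrative, not the mempool reactor's actual values:

```
package main

import (
	"errors"
	"fmt"
)

const (
	maxMsgSize      = 1024 * 1024 // illustrative limit, not the real constant
	aminoOverheadTx = 8           // illustrative per-tx encoding overhead
)

var errTxTooLarge = errors.New("tx too large")

// checkTxSize sketches the guard described above: reject a tx up front in
// CheckTx if its size plus the encoding overhead would exceed the largest
// message the mempool reactor is allowed to send.
func checkTxSize(tx []byte) error {
	if len(tx)+aminoOverheadTx > maxMsgSize {
		return errTxTooLarge
	}
	return nil
}

func main() {
	fmt.Println(checkTxSize(make([]byte, 100)))        // <nil>
	fmt.Println(checkTxSize(make([]byte, maxMsgSize))) // tx too large
}
```
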
Ethan Buchman
4f8769175e [types] hash of ConsensusParams includes only a subset of fields (#3165)
* types: dont hash entire ConsensusParams

* update encoding spec

* update blockchain spec

* spec: consensus params hash

* changelog
2019-01-19 16:08:57 -05:00
Ismail Khoffi
40c887baf7 Normalize priorities to not exceed total voting power (#3049)
* more proposer priority tests

 - test that we don't reset to zero when updating / adding
 - test that same power validators alternate

* add another test to track / simulate similar behaviour as in #2960

* address some of Chris' review comments

* address some more of Chris' review comments

* temporarily pushing branch with the following changes:
The total power might change if:
   - a validator is added
   - a validator is removed
   - a validator is updated

Decrement the accums (of all validators) directly after any of these events
(by the inverse of the change)

* Fix 2960 by re-normalizing / scaling priorities to be in bounds of total
power, additionally:

 - remove heap where it doesn't make sense
 - avg. only at the end of IncrementProposerPriority instead of each
   iteration
 - update (and slightly improve)
   TestAveragingInIncrementProposerPriorityWithVotingPower to reflect
   above changes

* Fix 2960 by re-normalizing / scaling priorities to be in bounds of total
power, additionally:

 - remove heap where it doesn't make sense
 - avg. only at the end of IncrementProposerPriority instead of each
   iteration
 - update (and slightly improve)
   TestAveragingInIncrementProposerPriorityWithVotingPower to reflect
   above changes

* fix tests

* add comment

* update changelog pending & some minor changes

* comment about division will floor the result & fix typo

* Update TestLargeGenesisValidator:
 - remove TODO and increase large genesis validator's voting power
accordingly

* move changelog entry to P2P Protocol

* Ceil instead of flooring when dividing & update test

* quickly fix failing TestProposerPriorityDoesNotGetResetToZero:

 - divide by Ceil((maxPriority - minPriority) / 2*totalVotingPower)

* fix typo: rename getValWitMostPriority -> getValWithMostPriority

* test proposer frequencies

* return absolute value for diff. keep testing

* use for loop for div

* cleanup, more tests

* spellcheck

* get rid of using floats: manually ceil where necessary

* Remove float, simplify, fix tests to match chris's proof (#3157)
2019-01-19 15:55:08 -05:00
Ethan Buchman
d3e8889411 update btcd fork for v0.1.1 (#3164)
* update btcd fork for v0.1.1

* changelog
2019-01-19 14:08:41 -05:00
Ethan Buchman
d17969e378 Merge pull request #3154 from tendermint/master
Merge master back to develop
2019-01-18 12:39:21 -05:00
Ethan Buchman
07263298bd Merge pull request #3153 from tendermint/release/v0.28.1
Release/v0.28.1
2019-01-18 12:38:48 -05:00
Ethan Buchman
5a2e69df81 changelog and version 2019-01-18 12:11:02 -05:00
bradyjoestar
f5f1416a14 json2wal: increase reader's buffer size (#3147)
```
panic: failed to unmarshal json: unexpected end of JSON input

goroutine 1 [running]:
main.main()
	/root/gelgo/src/github.com/tendermint/tendermint/scripts/json2wal/main.go:66 +0x73f
```

Closes #3146
2019-01-18 12:09:12 +04:00
Henry Harder
4d36647eea Add ParadigmCore to ecosystem.json (#3145)
Adding the OrderStream network client (alpha), ParadigmCore, to the `ecosystem.json` file.
2019-01-18 11:46:34 +04:00
Ethan Buchman
8fd8f800d0 Bucky/fix evidence halt (#34)
* consensus: createProposalBlock function

* blockExecutor.CreateProposalBlock

- factored out of consensus pkg into a method on blockExec
- new private interfaces for mempool ("txNotifier") and evpool with one function each
- consensus tests still require more mempool methods

* failing test for CreateProposalBlock

* Fix bug in including evidence into block

* evidence: change maxBytes to maxSize

* MaxEvidencePerBlock

- changed to return both the max number and the max bytes
- preparation for #2590

* changelog

* fix linter

* Fix from review

Co-Authored-By: ebuchman <ethan@coinculture.info>
2019-01-17 21:46:40 -05:00
Anton Kaliaev
87991059aa docs: fix RPC links (#3141) 2019-01-17 08:42:57 -05:00
Thane Thomson
c69dbb25ce Consolidates deadline tests for privval Unix/TCP (#3143)
* Consolidates deadline tests for privval Unix/TCP

Following on from #3115 and #3132, this converts fundamental timeout
errors from the client's `acceptConnection()` method so that these can
be detected by the test for the TCP connection.

Timeout deadlines are now tested for both TCP and Unix domain socket
connections.

There is also no need for the additional secret connection code: the
connection will time out at the `acceptConnection()` phase for TCP
connections, and will time out when attempting to obtain the
`RemoteSigner`'s public key for Unix domain socket connections.

* Removes extraneous logging

* Adds IsConnTimeout helper function

This commit adds a helper function to detect whether an error is either
a fundamental networking timeout error, or an `ErrConnTimeout` error
specific to the `RemoteSigner` class.

* Adds a test for the IsConnTimeout() helper function

* Separates tests logically for IsConnTimeout
2019-01-17 08:10:56 -05:00
Gautham Santhosh
bc8874020f docs: fix broken link (#3142) 2019-01-17 15:30:51 +04:00
Dev Ojha
55d7238708 Add comment to simple_merkle get_split_point (#3136)
* Add comment to simple_merkle get_split_point

* fix grammar error
2019-01-16 16:03:19 -05:00
Ethan Buchman
4a037f9fe6 Merge pull request #3138 from tendermint/master
Merge master back to develop
2019-01-16 16:02:51 -05:00
Ethan Buchman
aa40cfcbb9 Merge pull request #3135 from tendermint/release/v0.28.0
Release/v0.28.0
2019-01-16 16:01:58 -05:00
Ethan Buchman
6d6d103f15 fixes from review (#3137) 2019-01-16 13:41:37 -05:00
Ethan Buchman
239ebe2076 fix changelog fmt (#3134) 2019-01-16 10:21:15 -05:00
Ethan Buchman
0cba0e11b5 update changelog and upgrading (#3133) 2019-01-16 10:16:23 -05:00
Thane Thomson
d4e6720541 Expanding tests to cover Unix sockets version of client (#3132)
* Adds a random suffix to temporary Unix sockets during testing

* Adds Unix domain socket tests for client (FAILING)

This adds Unix domain socket tests for the privval client. Right now,
one of the tests (TestRemoteSignerRetry) fails, probably because the
Unix domain socket state is known instantaneously on both sides by the
OS. Committing this to collaborate on the error.

* Removes extraneous logging

* Completes testing of Unix sockets client version

This completes the testing of the client connecting via Unix sockets.
There are two specific tests (TestSocketPVDeadline and
TestRemoteSignerRetryTCPOnly) that are only relevant to TCP connections.

* Renames test to show TCP-specificity

* Adds testing into closures for consistency (forgot previously)

* Moves test specific to RemoteSigner into own file

As per discussion on #3132, `TestRemoteSignerRetryTCPOnly` doesn't
really belong with the client tests. This moves it into its own file
related to the `RemoteSigner` class.
2019-01-16 10:05:34 -05:00
Anton Kaliaev
dcb8f88525 add "chain_id" label for all metrics (#3123)
* add "chain_id" label for all metrics

Refs #3082

* fix labels extraction
2019-01-15 12:16:33 -05:00
Thane Thomson
a2a62c9be6 Merge pull request #3121 from thanethomson/release/v0.28.0
Adds tests for Unix sockets
2019-01-15 18:17:50 +02:00
Thane Thomson
3191ee8bad Dropping "construct" prefix as per #3121 2019-01-15 18:06:57 +02:00
Thane Thomson
308b7e3bbe Merge branch 'release/v0.28.0' into release/v0.28.0 2019-01-15 17:58:33 +02:00
Kunal Dhariwal
73ea5effe5 docs: update link for rpc docs (#3129) 2019-01-15 17:12:35 +04:00
Ethan Buchman
d1afa0ed6c privval: fixes from review (#3126)
https://github.com/tendermint/tendermint/pull/2923#pullrequestreview-192065694
2019-01-15 16:55:57 +04:00
Thane Thomson
ca00cd6a78 Make privval listener testing generic
This cuts out two tests by constructing test cases and iterating through
them, rather than having separate sets of tests for TCP and Unix listeners.
This is as per the feedback from #3121.
2019-01-15 10:14:41 +02:00
Anton Kaliaev
4daca1a634 return maxPerPage (not defaultPerPage) if per_page is greater than max (#3124)
it's more user-friendly.

Refs #3065
2019-01-14 17:35:31 -05:00
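
Sketched with made-up constants (not the RPC package's actual values), the new behaviour looks like this:

```
package main

import "fmt"

const (
	defaultPerPage = 30  // illustrative values, not the real RPC constants
	maxPerPage     = 100
)

// validatePerPage sketches the change above: a per_page above the maximum now
// returns maxPerPage instead of silently falling back to the default.
func validatePerPage(perPage int) int {
	if perPage < 1 {
		return defaultPerPage
	}
	if perPage > maxPerPage {
		return maxPerPage
	}
	return perPage
}

func main() {
	fmt.Println(validatePerPage(0))    // 30
	fmt.Println(validatePerPage(5000)) // 100, previously 30
}
```
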
Anton Kaliaev
bc00a032c1 makefile: fix build-docker-localnode target (#3122)
cd does not work because it's executed in a subprocess
so it has to be either chained by && or ;
See https://stackoverflow.com/q/1789594/820520
for more details.

Fixes #3058
2019-01-14 17:33:33 -05:00
Anton Kaliaev
5f4d8e031e [log] fix year format (#3125)
Refs #3060
2019-01-14 14:10:13 -05:00
Ethan Buchman
7b2c4bb493 update ADR-020 (#3116) 2019-01-14 11:53:43 -05:00
Thane Thomson
5f93220c61 Adds tests for Unix sockets
As per #3115, adds simple Unix socket connect/accept deadline tests in
pretty much the same way as the TCP connect/accept deadline tests work.
2019-01-14 11:41:09 +02:00
Dev Ojha
ec53ce359b Simple merkle rfc compatibility (#2713)
* Begin simple merkle compatibility PR

* Fix query_test

* Use trillian test vectors

* Change the split point per RFC 6962

* update spec

* refactor innerhash to match spec

* Update changelog

* Address @liamsi's comments

* Write the comment requested by @liamsi
2019-01-13 18:02:38 -05:00
Ismail Khoffi
1f68318875 fix order of BlockID and Timestamp in Vote and Proposal (#3078)
* Consistent ordering of Timestamp/BlockID fields in CanonicalVote and
CanonicalProposal

* update spec too

* Introduce and use IsZero & IsComplete:
 - update IsZero method according to spec and introduce IsComplete
 - use methods in validate basic to validate: proposals come with a
 "complete" blockId and votes are either complete or empty
 - update spec: BlockID.IsNil() -> BlockID.IsZero() and fix typo

* BlockID comes first

* fix tests
2019-01-13 17:56:36 -05:00
Ismail Khoffi
1ccc0918f5 More ProposerPriority tests (#2966)
* more proposer priority tests

 - test that we don't reset to zero when updating / adding
 - test that same power validators alternate

* add another test to track / simulate similar behaviour as in #2960

* address some of Chris' review comments

* address some more of Chris' review comments
2019-01-13 17:40:50 -05:00
Ethan Buchman
fc031d980b Bucky/v0.28.0 (#3119)
* changelog pending and upgrading

* linkify and version bump

* changelog shuffle
2019-01-13 17:15:34 -05:00
Zarko Milosevic
1895cde590 [WIP] Fill in consensus core details in ADR 030 (#2696)
* Initial work towards making ConsensusCore spec complete

* Initial version of executor and complete consensus
2019-01-13 14:47:00 -05:00
Gian Felipe
be00cd1add Hotfix/validating query result length (#3053)
* Validating that there are txs in the query results before looping through the array

* Created tests to validate the error has been fixed

* Added comments

* Fixing misspelling

* check if the variable "skipCount" is bigger than zero. If it is not, we set it to 0. If it is, we do not do anything.

* using function that validates the skipCount variable

* undo Gopkg.lock changes
2019-01-13 14:34:29 -05:00
Ismail Khoffi
a6011c007d Close and retry a RemoteSigner on err (#2923)
* Close and recreate a RemoteSigner on err

* Update changelog

* Address Anton's comments / suggestions:

 - update changelog
 - restart TCPVal
 - shut down on `ErrUnexpectedResponse`

* re-init remote signer client with fresh connection if Ping fails

- add/update TODOs in secret connection
- rename tcp.go -> tcp_client.go, same with ipc to clarify their purpose

* account for the `conn` returned by waitConnection possibly being `nil`

- also add TODO about RemoteSigner conn field

* Tests for retrying: IPC / TCP

 - shorter info log on success
 - set conn and use it in tests to close conn

* Tests for retrying: IPC / TCP

 - shorter info log on success
 - set conn and use it in tests to close conn
 - add rwmutex for conn field in IPC

* comments and doc.go

* fix ipc tests. fixes #2677

* use constants for tests

* cleanup some error statements

* fixes #2784, race in tests

* remove print statement

* minor fixes from review

* update comment on sts spec

* cosmetics

* p2p/conn: add failing tests

* p2p/conn: make SecretConnection thread safe

* changelog

* IPCVal signer refactor

- use a .reset() method
- don't use embedded RemoteSignerClient
- guard RemoteSignerClient with mutex
- drop the .conn
- expose Close() on RemoteSignerClient

* apply IPCVal refactor to TCPVal

* remove mtx from RemoteSignerClient

* consolidate IPCVal and TCPVal, fixes #3104

- done in tcp_client.go
- now called SocketVal
- takes a listener in the constructor
- make tcpListener and unixListener contain all the differences

* delete ipc files

* introduce unix and tcp dialer for RemoteSigner

* rename files

- drop tcp_ prefix
- rename priv_validator.go to file.go

* bring back listener options

* fix node

* fix priv_val_server

* fix node test

* minor cleanup and comments
2019-01-13 14:31:31 -05:00
Ethan Buchman
ef94a322b8 Make SecretConnection thread safe (#3111)
* p2p/conn: add failing tests

* p2p/conn: make SecretConnection thread safe

* changelog

* fix from review
2019-01-13 13:46:25 -05:00
Mauricio Serna
7f607d0ce2 docs: fix p2p readme links (#3109) 2019-01-11 17:41:02 -05:00
Anton Kaliaev
81c51cd4fc rpc: include peer's remote IP in /net_info (#3052)
Refs #3047
2019-01-11 09:24:45 -05:00
Ethan Buchman
51094f9417 update README (#3097)
* update README

* fix from review
2019-01-11 08:28:29 -05:00
Alessio Treglia
7644d27307 Ensure multisig keys have 20-byte address (#3103)
* Ensure multisig keys have 20-byte address

Use crypto.AddressHash() to avoid returning 32-byte long address.

Closes: #3102

* fix pointer

* fix test
2019-01-10 18:37:34 -05:00
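
Conceptually the fix truncates the key hash to the common 20-byte address length; a hedged sketch, not the crypto package's exact implementation:

```
package main

import (
	"crypto/sha256"
	"fmt"
)

// addressHash mimics what crypto.AddressHash does conceptually: hash the key
// bytes and keep only the first 20 bytes, so a multisig key's address has the
// same 20-byte length as every other key type.
func addressHash(keyBytes []byte) []byte {
	h := sha256.Sum256(keyBytes)
	return h[:20]
}

func main() {
	addr := addressHash([]byte("amino-encoded multisig pubkey bytes"))
	fmt.Printf("%d bytes: %X\n", len(addr), addr)
}
```
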
Alessio Treglia
764cfe33aa Don't use pointer receivers for PubKeyMultisigThreshold (#3100)
* Don't use pointer receivers for PubKeyMultisigThreshold

* test that showcases panic when PubKeyMultisigThreshold is used in the sdk:

 - deserialization will fail in `readInfo` which tries to read a
 `crypto.PubKey` into a `localInfo` (called by
  cosmos-sdk/client/keys.GetKeyInfo)

* Update changelog

* Rename routeTable to nameTable, multisig key is no longer a pointer

* sed -i 's/PubKeyAminoRoute/PubKeyAminoName/g' `grep -lrw PubKeyAminoRoute .`

upon Jae's request

* AminoRoutes -> AminoNames

* sed -e 's/PrivKeyAminoRoute/PrivKeyAminoName/g'

* Update crypto/encoding/amino/amino.go

Co-Authored-By: alessio <quadrispro@ubuntu.com>
2019-01-10 17:47:20 -05:00
srmo
616c3a4bae cs: prettify logging of ignored votes (#3086)
Refs #3038
2019-01-06 12:00:12 +03:00
Piotr Husiatyński
04e97f599a fix build scripts (#3085)
* fix build scripts

Search for the right variable when introspecting Go code. `Version` was
renamed to `TMCoreSemVer`.

This is a regression introduced in
b95ac688af

* fix all `Version` introspections.

Use `TMCoreSemVer` instead of `Version`
2019-01-06 11:40:15 +03:00
Ethan Buchman
56a4fb4d72 add signing spec (#3061)
* add signing spec

* fixes from review

* more fixes

* fixes from review
2019-01-02 17:18:45 -08:00
srmo
49017a5787 3070 [docs] unindent text as it is supposed to behave the same as the parts before (#3075) 2019-01-01 10:42:39 +03:00
Ismail Khoffi
6a80412a01 Remove privval.GetAddress(), memoize pubkey (#2948)
privval: remove GetAddress(), memoize pubkey
2018-12-22 00:36:45 -05:00
needkane
2348f38927 Update node_info.go (#3059) 2018-12-21 17:37:28 -05:00
yutianwu
41e2eeee9c R4R: Split immutable and mutable parts of priv_validator.json (#2870)
* split immutable and mutable parts of priv_validator.json

* fix bugs

* minor changes

* retrig test

* delete scripts/wire2amino.go

* fix test

* fixes from review

* privval: remove mtx

* rearrange priv_validator.go

* upgrade path

* write tests for the upgrade

* fix for unsafe_reset_all

* add test

* add reset test
2018-12-21 16:58:27 -05:00
Ethan Buchman
a88e283a9d Merge pull request #3063 from tendermint/master
Merge master back to develop
2018-12-21 16:39:02 -05:00
Ethan Buchman
1e1ca15bcc Merge pull request #3062 from tendermint/release/v0.27.4
Release/v0.27.4
2018-12-21 16:35:45 -05:00
Ethan Buchman
c6604b5a9b changelog and version 2018-12-21 16:31:28 -05:00
Anton Kaliaev
c510f823e7 mempool: move tx to back, not front (#3036)
because we pop txs from the front if the cache is full

Refs #3035
2018-12-21 16:28:21 -05:00
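
The toy cache below shows why moving an already-seen tx to the back matters when eviction happens at the front; names and structure are hypothetical, not the mempool's actual txCache:

```
package main

import (
	"container/list"
	"fmt"
)

// txCache is a bounded cache that evicts from the front, so a tx that is seen
// again is moved to the back to keep it cached longer.
type txCache struct {
	size int
	l    *list.List
	m    map[string]*list.Element
}

func newTxCache(size int) *txCache {
	return &txCache{size: size, l: list.New(), m: make(map[string]*list.Element)}
}

func (c *txCache) Push(tx string) bool {
	if e, ok := c.m[tx]; ok {
		c.l.MoveToBack(e) // already cached: refresh its position instead of re-adding
		return false
	}
	if c.l.Len() >= c.size {
		front := c.l.Front() // cache full: evict the oldest entry
		c.l.Remove(front)
		delete(c.m, front.Value.(string))
	}
	c.m[tx] = c.l.PushBack(tx)
	return true
}

func main() {
	c := newTxCache(2)
	c.Push("tx1")
	c.Push("tx2")
	c.Push("tx1") // moves tx1 to the back
	c.Push("tx3") // evicts tx2, not tx1
	_, hasTx1 := c.m["tx1"]
	_, hasTx2 := c.m["tx2"]
	fmt.Println(hasTx1, hasTx2) // true false
}
```
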
Zach
daddebac29 circleci: update go version (#3051) 2018-12-19 21:45:12 +04:00
Zach
30f346fe44 docs: add rpc link to docs navbar and re-org sidebar (#3041)
* add rpc to docs navbar and close #3000

* Update config.js
2018-12-17 23:02:26 +04:00
Anton Kaliaev
4d8f29f79c set allow_duplicate_ip to false (#2992)
* config: cors options are arrays of strings, not strings

Fixes #2980

* docs: update tendermint-core/configuration.html page

* set allow_duplicate_ip to false

* in `tendermint testnet`, set allow_duplicate_ip to true

Refs #2712

* fixes after Ismail's review
2018-12-17 11:52:33 -05:00
Zach
2182f6a702 update go version & other cleanup (#3018)
* update go version & other cleanup

* fix lints

* go one.eleven.four

* keep circle on 1.11.3 for now
2018-12-17 11:51:53 -05:00
Anton Kaliaev
a06912b579 mempool: move tx to back, not front (#3036)
because we pop txs from the front if the cache is full

Refs #3035
2018-12-17 11:35:05 -05:00
Zach
0ff715125b fix docs / proxy app (#2988)
* fix docs / proxy app, closes #2986

* counter_serial

* review comments

* list all possible options

* add changelog entries
2018-12-16 23:34:13 -05:00
Ethan Buchman
385977d5e8 Merge pull request #3033 from tendermint/master
Merge pull request #3032 from tendermint/release/v0.27.3
2018-12-16 14:30:46 -05:00
Ethan Buchman
0138530df2 Merge pull request #3032 from tendermint/release/v0.27.3
Release/v0.27.3
2018-12-16 14:30:23 -05:00
Ethan Buchman
0533c73a50 crypto: revert to mainline Go crypto lib (#3027)
* crypto: revert to mainline Go crypto lib

We used to use a fork for a modified bcrypt so we could pass our own
randomness, but this was largely unnecessary, unused, and a burden.
So now we just use the mainline Go crypto lib.

* changelog

* fix tests

* version and changelog
2018-12-16 14:19:38 -05:00
Ethan Buchman
1beb45511c Merge pull request #3031 from tendermint/master
Merge pull request #3030 from tendermint/release/v0.27.2
2018-12-16 14:13:57 -05:00
Ethan Buchman
4a568fcedb Merge pull request #3030 from tendermint/release/v0.27.2
Release/v0.27.2
2018-12-16 14:13:30 -05:00
Ethan Buchman
b3141d7d02 makeNodeInfo returns error (#3029)
* makeNodeInfo returns error

* version and changelog
2018-12-16 14:05:58 -05:00
Jae Kwon
9a6dd96cba Revert to using defers in addrbook. (#3025)
* Revert to using defers in addrbook.  ValidateBasic->Validate since it requires DNS

* Update CHANGELOG_PENDING
2018-12-16 12:27:16 -05:00
Ethan Buchman
9fa959619a Merge pull request #3026 from tendermint/master
Merge pull request #3023 from tendermint/release/v0.27.1
2018-12-16 12:25:19 -05:00
Ethan Buchman
1f09818770 Merge pull request #3023 from tendermint/release/v0.27.1
Release/v0.27.1
2018-12-16 12:24:54 -05:00
Ethan Buchman
e4806f980b Bucky/v0.27.1 (#3022)
* update changelog

* linkify

* changelog and version
2018-12-15 15:58:09 -05:00
Anton Kaliaev
b53a2712df docs: networks/docker-compose: small fixes (#3017) 2018-12-15 15:38:13 -05:00
Hleb Albau
a75dab492c #2980 fix cors doc (#3013) 2018-12-15 15:32:35 -05:00
Ethan Buchman
7c9e767e1f 2980 fix cors (#3021)
* config: cors options are arrays of strings, not strings

Fixes #2980

* docs: update tendermint-core/configuration.html page

* set allow_duplicate_ip to false

* in `tendermint testnet`, set allow_duplicate_ip to true

Refs #2712

* fixes after Ismail's review

* Revert "set allow_duplicate_ip to false"

This reverts commit 24c1094ebcf2bd35f2642a44d7a1e5fb5c178fb1.
2018-12-15 15:26:27 -05:00
JamesRay
f82a8ff73a During replay, when appHeight==0, only saveState if stateHeight is also 0 (#3006)
* optimize addProposalBlockPart

* optimize addProposalBlockPart

* if ProposalBlockParts and LockedBlockParts both exist, let LockedBlockParts overwrite ProposalBlockParts.

* fix tryAddBlock

* broadcast lockedBlockParts in higher priority

* when appHeight==0, it's better to fetch genDoc than state.validators.

* not save state if replay from height 1

* only save state if replay from height 1 when stateHeight is also 1

* only save state if replay from height 1 when stateHeight is also 1

* only save state if replay from height 0 when stateHeight is also 0

* handshake info's response version only update when stateHeight==0

* save the handshake responseInfo appVersion
2018-12-15 14:33:30 -05:00
Anton Kaliaev
ae275d791e mempool: notifyTxsAvailable if there're txs left, but recheck=false (#2991)
(left after committing a block)

Fixes #2961

--------------
ORIGINAL ISSUE

Tendermint version : 0.26.4-b771798d

ABCI app : kv-store

Environment:

OS (e.g. from /etc/os-release): macOS 10.14.1. What happened: Set
mempool.recheck = false and create empty block = false in config.toml.
When transactions get added right between a new empty block being
proposed and committed, the proposer won't propose a new block for those
transactions immediately after. Those transactions are stuck in the
mempool until a new transaction is added and triggers the proposer.

What you expected to happen: If there is a transaction left in the
mempool, a new block should be proposed immediately.

Have you tried the latest version: yes

How to reproduce it (as minimally and precisely as possible): Fire two
transactions using broadcast_tx_sync with a specific delay between them.
(You may need to try multiple times before the right delay is found; on
my machine the delay is 0.98s.)

Logs (paste a small part showing an error (< 10 lines) or link a
pastebin, gist, etc. containing more of the log file):
https://pastebin.com/0Wt6uhPF

Config (you can paste only the changes you've made): [mempool] recheck =
false create_empty_block = false

Anything else we need to know: In mempool.go, we found that the proposer
will immediately propose a new block if

the last committed block had some transaction (causing AppHash to change)
or mem.notifyTxsAvailable() is called. Our scenario is as follows.

A transaction is fired; it creates 1 block with 1 tx (lines 1-4 in
the log) and 1 empty block. After the empty block is proposed but before
it is committed, a second transaction is fired and added to the mempool
(lines 8-16). Now, since the last committed block is empty and
mem.notifyTxsAvailable() is called only if mempool.recheck = true,
the proposer won't immediately propose a new block, causing the second
transaction to be stuck in the mempool until another transaction is added
and triggers mem.notifyTxsAvailable().
2018-12-15 14:27:02 -05:00
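
A self-contained Go sketch of the fix, using a toy mempool whose names mirror the commit message rather than the real API:

```
package main

import "fmt"

// mempool is a toy stand-in used to illustrate the fix above.
type mempool struct {
	recheck      bool
	txs          []string
	txsAvailable chan struct{}
}

func (mem *mempool) size() int { return len(mem.txs) }

func (mem *mempool) notifyTxsAvailable() {
	select {
	case mem.txsAvailable <- struct{}{}:
	default: // already notified
	}
}

// update is called after a block is committed, with the txs it included.
func (mem *mempool) update(committed map[string]bool) {
	remaining := mem.txs[:0]
	for _, tx := range mem.txs {
		if !committed[tx] {
			remaining = append(remaining, tx)
		}
	}
	mem.txs = remaining

	if mem.recheck {
		// rechecking ends with a notification of its own (not shown)
		return
	}
	// The fix: even with recheck disabled, notify consensus when txs are
	// left over, so the next block is proposed immediately.
	if mem.size() > 0 {
		mem.notifyTxsAvailable()
	}
}

func main() {
	mem := &mempool{recheck: false, txs: []string{"tx1", "tx2"}, txsAvailable: make(chan struct{}, 1)}
	mem.update(map[string]bool{"tx1": true}) // tx2 is left over
	select {
	case <-mem.txsAvailable:
		fmt.Println("consensus notified: propose a new block")
	default:
		fmt.Println("stuck: tx2 waits for another tx to arrive")
	}
}
```
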
Zach
f5cca9f121 docs: update DOCS_README (#3019)
Co-Authored-By: zramsay <zach.ramsay@gmail.com>
2018-12-15 23:01:28 +04:00
Zach
3fbe9f235a docs: add edit on Github links (#3014) 2018-12-14 09:32:09 +04:00
mircea-c
f7e463f6d3 circleci: add a job to automatically update docs (#3005) 2018-12-12 13:48:53 +04:00
Dev Ojha
bc2a9b20c0 mempool: add a comment and missing changelog entry (#2996)
Refs #2994
2018-12-12 13:31:35 +04:00
Zach
9e075d8dd5 docs: enable full-text search (#3004) 2018-12-12 13:20:02 +04:00
Zach
8003786c9a docs: fixes from 'first time' review (#2999) 2018-12-11 22:21:54 +04:00
Daniil Lashin
2594cec116 add UnconfirmedTxs/NumUnconfirmedTxs methods to HTTP/Local clients (#2964) 2018-12-11 12:41:02 +04:00
Dev Ojha
df32ea4be5 Make testing logger that doesn't write to stdout (#2997) 2018-12-11 12:17:21 +04:00
Anton Kaliaev
f69e2c6d6c p2p: set MConnection#created during init (#2990)
Fixes #2715

In crawlPeersRoutine, which runs when seedMode is enabled, there is
logic that disconnects peers after a 3-hour duration. The duration is
calculated from the created value of MConnection. When MConnection is
created for the first time, the created value is never initialized, so
peers are disconnected immediately every time instead of after 3 hours.
As a result, normal nodes connect to the seed node and are disconnected
immediately, so address exchange does not work properly.

https://github.com/tendermint/tendermint/blob/master/p2p/pex/pex_reactor.go#L629
This point does not work correctly.
I think the created variable at
https://github.com/tendermint/tendermint/blob/master/p2p/conn/connection.go#L148
is missing the current time setting.
2018-12-10 15:24:58 -05:00
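
A minimal sketch of the point being made, with a toy struct in place of MConnection:

```
package main

import (
	"fmt"
	"time"
)

// mConnection is a toy stand-in; the point is only where "created" is set.
type mConnection struct {
	created time.Time // when the connection was established
}

func newMConnection() *mConnection {
	// The fix described above: initialize created when the connection is
	// built. With a zero time, time.Since(created) is enormous, so the seed
	// node's "older than 3h" check fired immediately on every connection.
	return &mConnection{created: time.Now()}
}

func main() {
	c := newMConnection()
	tooOld := time.Since(c.created) > 3*time.Hour
	fmt.Println("disconnect as stale?", tooOld) // false right after connecting
}
```
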
Dev Ojha
d5d0d2bd77 Make mempool fail txs with negative gas wanted (#2994)
This is only one part of #2989. We also need to fix the application,
and add rules to consensus to ensure this.
2018-12-10 12:56:49 -05:00
Anton Kaliaev
41eaf0e31d turn off strict routability every time (#2983)
Previously, we were turning it off only when the --populate-persistent-peers
flag was used, which is obviously incorrect.

Fixes https://github.com/cosmos/cosmos-sdk/issues/2983
2018-12-09 13:29:51 -05:00
Zach
68b467886a docs: relative links in docs/spec/readme.md, js-amino lib (#2977)
Co-Authored-By: zramsay <zach.ramsay@gmail.com>
2018-12-07 19:41:19 +04:00
Leo Wang
2f64717bb5 return an error if validator set is empty in genesis file and after InitChain (#2971)
Fixes #2951
2018-12-07 12:30:58 +04:00
Anton Kaliaev
c4a1cfc5c2 don't ignore key when executing CONTAINS (#2924)
Fixes #2912
2018-12-07 12:28:02 +04:00
Ethan Buchman
0f96bea41d Merge pull request #2976 from tendermint/master
Merge pull request #2975 from tendermint/release/v0.27.0
2018-12-05 16:51:04 -05:00
Ethan Buchman
9c236ffd6c Merge pull request #2975 from tendermint/release/v0.27.0
Release/v0.27.0
2018-12-05 16:50:36 -05:00
Ethan Buchman
9f8761d105 update changelog and upgrading (#2974) 2018-12-05 15:19:30 -05:00
Anton Kaliaev
5413c11150 kv indexer: add separator to start key when matching ranges (#2925)
* kv indexer: add separator to start key when matching ranges

to avoid including false positives

Refs #2908

* refactor code

* add a test case
2018-12-05 14:21:46 -05:00
Ethan Buchman
a14fd8eba0 p2p: fix peer count mismatch #2332 (#2969)
* p2p: test case for peer count mismatch #2332

* p2p: fix peer count mismatch #2332

* changelog

* use httptest.Server to scrape Prometheus metrics
2018-12-05 07:32:27 -05:00
Ethan Buchman
1bb7e31d63 p2p: panic on transport error (#2968)
* p2p: panic on transport error

Addresses #2823. Currently, the acceptRoutine exits if the transport returns
an error trying to accept a new connection. Once this happens, the node
can't accept any new connections. So here, we panic instead. While we
could potentially be more intelligent by rerunning the acceptRoutine, the
error may indicate something more fundamental (eg. file desriptor limit)
that requires a restart anyways. We can leave it to process managers to
handle that restart, and notify operators about the panic.

* changelog
2018-12-04 19:16:06 -05:00
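
The change can be sketched as below; `acceptRoutine` here is a simplified stand-in for the switch's accept loop, not the actual p2p code:

```
package main

import (
	"fmt"
	"net"
)

// acceptRoutine sketches the change described above: instead of returning
// when Accept fails (after which the node could never accept new peers),
// panic so a process manager can restart the node and operators notice.
func acceptRoutine(ln net.Listener) {
	for {
		conn, err := ln.Accept()
		if err != nil {
			panic(fmt.Sprintf("acceptRoutine: transport error: %v", err))
		}
		go conn.Close() // stand-in for handing the peer to the switch
	}
}

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	ln.Close() // force Accept to fail so the panic path is visible
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered:", r)
		}
	}()
	acceptRoutine(ln)
}
```
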
Ethan Buchman
222b8978c8 Minor log changes (#2959)
* node: allow state and code to have diff block versions

* node: pex is a log module
2018-12-04 08:30:29 -05:00
Anton Kaliaev
d9a1aad5c5 docs: add client#Start/Stop to examples in RPC docs (#2939)
follow-up on https://github.com/tendermint/tendermint/pull/2936
2018-12-03 16:17:06 +04:00
Anton Kaliaev
8ef0c2681d check if deliverTxResCh is still open, return an err otherwise (#2947)
deliverTxResCh, like any other eventBus (pubsub) channel, is closed when
eventBus is stopped. We must check if the channel is still open. The
alternative approach is to not close any channels, which seems a bit
odd.

Fixes #2408
2018-12-03 16:15:36 +04:00
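
A small Go sketch of the comma-ok check described above, with a plain channel standing in for the event bus subscription:

```
package main

import (
	"errors"
	"fmt"
)

// waitForDeliverTx sketches the check described above: the subscription
// channel is closed when the event bus stops, so a bare receive would hand
// back a zero value; the comma-ok form lets us return an error instead.
func waitForDeliverTx(deliverTxResCh <-chan string) (string, error) {
	res, ok := <-deliverTxResCh
	if !ok {
		return "", errors.New("deliverTxResCh closed: event bus was stopped")
	}
	return res, nil
}

func main() {
	ch := make(chan string, 1)
	ch <- "DeliverTx result"
	fmt.Println(waitForDeliverTx(ch)) // DeliverTx result <nil>

	close(ch)
	_, err := waitForDeliverTx(ch)
	fmt.Println(err) // deliverTxResCh closed: event bus was stopped
}
```
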
Ismail Khoffi
c4d93fd27b explicitly type MaxTotalVotingPower to int64 (#2953) 2018-11-30 14:43:16 -05:00
Ethan Buchman
dc2a338d96 Bucky/v0.27.0 (#2950)
* update changelog

* changelog, upgrading, version
2018-11-30 14:05:16 -05:00
Ismail Khoffi
725ed7969a Add some ProposerPriority tests (#2946)
* WIP: tests for #2785

* rebase onto develop

* add Bucky's test without changing ValidatorSet.Update

* make TestValidatorSetBasic fail

* add ProposerPriority preserving fix to ValidatorSet.Update to fix
TestValidatorSetBasic

* fix randValidator_ to stay in bounds of MaxTotalVotingPower

* check for expected proposer and remove some duplicate code

* actually limit the voting power of random validator ...

* fix test
2018-11-29 17:03:41 -05:00
282 changed files with 9790 additions and 11195 deletions

.circleci/config.yml

@@ -3,10 +3,17 @@ version: 2
defaults: &defaults
working_directory: /go/src/github.com/tendermint/tendermint
docker:
- image: circleci/golang:1.10.3
- image: circleci/golang:1.11.4
environment:
GOBIN: /tmp/workspace/bin
docs_update_config: &docs_update_config
working_directory: ~/repo
docker:
- image: tendermint/docs_deployment
environment:
AWS_REGION: us-east-1
jobs:
setup_dependencies:
<<: *defaults
@@ -41,10 +48,10 @@ jobs:
key: v3-pkg-cache
paths:
- /go/pkg
# - save_cache:
# key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
# paths:
# - /go/src/github.com/tendermint/tendermint
- save_cache:
key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
paths:
- /go/src/github.com/tendermint/tendermint
build_slate:
<<: *defaults
@@ -53,23 +60,8 @@ jobs:
at: /tmp/workspace
- restore_cache:
key: v3-pkg-cache
# https://discuss.circleci.com/t/saving-cache-stopped-working-warning-skipping-this-step-disabled-in-configuration/24423/2
# - restore_cache:
# key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- checkout
- run:
name: tools
command: |
export PATH="$GOBIN:$PATH"
make get_tools
- run:
name: dependencies
command: |
export PATH="$GOBIN:$PATH"
make get_vendor_deps
- run: mkdir -p $GOPATH/src/github.com/tendermint
- run: ln -sf /home/circleci/project $GOPATH/src/github.com/tendermint/tendermint
- restore_cache:
key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- run:
name: slate docs
command: |
@@ -84,29 +76,14 @@ jobs:
at: /tmp/workspace
- restore_cache:
key: v3-pkg-cache
# - restore_cache:
# key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- checkout
- run:
name: tools
command: |
export PATH="$GOBIN:$PATH"
make get_tools
make get_dev_tools
- run:
name: dependencies
command: |
export PATH="$GOBIN:$PATH"
make get_vendor_deps
- run: mkdir -p $GOPATH/src/github.com/tendermint
- run: ln -sf /home/circleci/project $GOPATH/src/github.com/tendermint/tendermint
- restore_cache:
key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- run:
name: metalinter
command: |
set -ex
export PATH="$GOBIN:$PATH"
make metalinter
make lint
- run:
name: check_dep
command: |
@@ -121,22 +98,8 @@ jobs:
at: /tmp/workspace
- restore_cache:
key: v3-pkg-cache
# - restore_cache:
# key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- checkout
- run:
name: tools
command: |
export PATH="$GOBIN:$PATH"
make get_tools
- run:
name: dependencies
command: |
export PATH="$GOBIN:$PATH"
make get_vendor_deps
- run: mkdir -p $GOPATH/src/github.com/tendermint
- run: ln -sf /home/circleci/project $GOPATH/src/github.com/tendermint/tendermint
- restore_cache:
key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- run:
name: Run abci apps tests
command: |
@@ -152,22 +115,8 @@ jobs:
at: /tmp/workspace
- restore_cache:
key: v3-pkg-cache
# - restore_cache:
# key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- checkout
- run:
name: tools
command: |
export PATH="$GOBIN:$PATH"
make get_tools
- run:
name: dependencies
command: |
export PATH="$GOBIN:$PATH"
make get_vendor_deps
- run: mkdir -p $GOPATH/src/github.com/tendermint
- run: ln -sf /home/circleci/project $GOPATH/src/github.com/tendermint/tendermint
- restore_cache:
key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- run:
name: Run abci-cli tests
command: |
@@ -181,22 +130,8 @@ jobs:
at: /tmp/workspace
- restore_cache:
key: v3-pkg-cache
# - restore_cache:
# key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- checkout
- run:
name: tools
command: |
export PATH="$GOBIN:$PATH"
make get_tools
- run:
name: dependencies
command: |
export PATH="$GOBIN:$PATH"
make get_vendor_deps
- run: mkdir -p $GOPATH/src/github.com/tendermint
- run: ln -sf /home/circleci/project $GOPATH/src/github.com/tendermint/tendermint
- restore_cache:
key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- run: sudo apt-get update && sudo apt-get install -y --no-install-recommends bsdmainutils
- run:
name: Run tests
@@ -210,22 +145,8 @@ jobs:
at: /tmp/workspace
- restore_cache:
key: v3-pkg-cache
# - restore_cache:
# key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- checkout
- run:
name: tools
command: |
export PATH="$GOBIN:$PATH"
make get_tools
- run:
name: dependencies
command: |
export PATH="$GOBIN:$PATH"
make get_vendor_deps
- run: mkdir -p $GOPATH/src/github.com/tendermint
- run: ln -sf /home/circleci/project $GOPATH/src/github.com/tendermint/tendermint
- restore_cache:
key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- run: mkdir -p /tmp/logs
- run:
name: Run tests
@@ -233,7 +154,7 @@ jobs:
for pkg in $(go list github.com/tendermint/tendermint/... | circleci tests split --split-by=timings); do
id=$(basename "$pkg")
GOCACHE=off go test -timeout 5m -race -coverprofile=/tmp/workspace/profiles/$id.out -covermode=atomic "$pkg" | tee "/tmp/logs/$id-$RANDOM.log"
GOCACHE=off go test -v -timeout 5m -race -coverprofile=/tmp/workspace/profiles/$id.out -covermode=atomic "$pkg" | tee "/tmp/logs/$id-$RANDOM.log"
done
- persist_to_workspace:
root: /tmp/workspace
@@ -249,22 +170,8 @@ jobs:
at: /tmp/workspace
- restore_cache:
key: v3-pkg-cache
# - restore_cache:
# key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- checkout
- run:
name: tools
command: |
export PATH="$GOBIN:$PATH"
make get_tools
- run:
name: dependencies
command: |
export PATH="$GOBIN:$PATH"
make get_vendor_deps
- run: mkdir -p $GOPATH/src/github.com/tendermint
- run: ln -sf /home/circleci/project $GOPATH/src/github.com/tendermint/tendermint
- restore_cache:
key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- run:
name: Run tests
command: bash test/persist/test_failure_indices.sh
@@ -285,9 +192,7 @@ jobs:
name: run localnet and exit on failure
command: |
set -x
make get_tools
make get_vendor_deps
make build-linux
docker run --rm -v "$PWD":/go/src/github.com/tendermint/tendermint -w /go/src/github.com/tendermint/tendermint golang:1.11.4 make build-linux
make localnet-start &
./scripts/localnet-blocks-test.sh 40 5 10 localhost
@@ -310,22 +215,10 @@ jobs:
steps:
- attach_workspace:
at: /tmp/workspace
# - restore_cache:
# key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- checkout
- run:
name: tools
command: |
export PATH="$GOBIN:$PATH"
make get_tools
- run:
name: dependencies
command: |
export PATH="$GOBIN:$PATH"
make get_vendor_deps
- run: mkdir -p $GOPATH/src/github.com/tendermint
- run: ln -sf /home/circleci/project $GOPATH/src/github.com/tendermint/tendermint
- restore_cache:
key: v3-pkg-cache
- restore_cache:
key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- run:
name: gather
command: |
@@ -339,10 +232,25 @@ jobs:
name: upload
command: bash .circleci/codecov.sh -f coverage.txt
deploy_docs:
<<: *docs_update_config
steps:
- checkout
- run:
name: Trigger website build
command: |
chamber exec tendermint -- start_website_build
workflows:
version: 2
test-suite:
jobs:
- deploy_docs:
filters:
branches:
only:
- master
- develop
- setup_dependencies
- lint:
requires:

61
.golangci.yml Normal file
View File

@@ -0,0 +1,61 @@
run:
deadline: 1m
linters:
enable-all: true
disable:
- gocyclo
- golint
- maligned
- errcheck
- staticcheck
- dupl
- ineffassign
- interfacer
- unconvert
- goconst
- unparam
- nakedret
- lll
- gochecknoglobals
- govet
- gocritic
- gosec
- gochecknoinits
- scopelint
- stylecheck
# linters-settings:
# govet:
# check-shadowing: true
# golint:
# min-confidence: 0
# gocyclo:
# min-complexity: 10
# maligned:
# suggest-new: true
# dupl:
# threshold: 100
# goconst:
# min-len: 2
# min-occurrences: 2
# depguard:
# list-type: blacklist
# packages:
# # logging is allowed only by logutils.Log, logrus
# # is allowed to use only in logutils package
# - github.com/sirupsen/logrus
# misspell:
# locale: US
# lll:
# line-length: 140
# goimports:
# local-prefixes: github.com/golangci/golangci-lint
# gocritic:
# enabled-tags:
# - performance
# - style
# - experimental
# disabled-checks:
# - wrapperFunc
# - commentFormatting # https://github.com/go-critic/go-critic/issues/755

View File

@@ -1,13 +1,391 @@
# Changelog
## v0.30.0
*February 8th, 2019*
This release fixes yet another issue with the proposer selection algorithm.
We hope it's the last one, but we won't be surprised if it's not.
We plan to one day expose the selection algorithm more directly to
the application ([\#3285](https://github.com/tendermint/tendermint/issues/3285)), and even to support randomness ([\#763](https://github.com/tendermint/tendermint/issues/763)).
For more, see issues marked
[proposer-selection](https://github.com/tendermint/tendermint/labels/proposer-selection).
This release also includes a fix to prevent Tendermint from including the same
piece of evidence in more than one block. This issue was reported by @chengwenxi in our
[bug bounty program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
* Apps
- [state] [\#3222](https://github.com/tendermint/tendermint/issues/3222)
Duplicate updates for the same validator are forbidden. Apps must ensure
that a given `ResponseEndBlock.ValidatorUpdates` contains only one entry per pubkey.
* Go API
- [types] [\#3222](https://github.com/tendermint/tendermint/issues/3222)
Remove `Add` and `Update` methods from `ValidatorSet` in favor of new
`UpdateWithChangeSet`. This allows updates to be applied as a set, instead of
one at a time.
* Block Protocol
- [state] [\#3286](https://github.com/tendermint/tendermint/issues/3286) Blocks that include already committed evidence are invalid.
* P2P Protocol
- [consensus] [\#3222](https://github.com/tendermint/tendermint/issues/3222)
Validator updates are applied as a set, instead of one at a time, thus
impacting the proposer priority calculation. This ensures that the proposer
selection algorithm does not depend on the order of updates in
`ResponseEndBlock.ValidatorUpdates`.
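For illustration, a minimal Go sketch of the new flow described in these entries (the `UpdateWithChangeSet(changes []*Validator) error` signature and the power-0-removes-a-validator convention are assumptions based on the entries above; `types` and `crypto` imports are omitted):

```go
// sketch: apply EndBlock validator changes as a single set (v0.30.0 API)
func applyChanges(valSet *types.ValidatorSet, pk1, pk2 crypto.PubKey) error {
	changes := []*types.Validator{
		types.NewValidator(pk1, 10), // add pk1, or update its power to 10
		types.NewValidator(pk2, 0),  // assumption: power 0 removes pk2
	}
	// duplicate pubkeys within `changes` are rejected, per the Apps entry above
	return valSet.UpdateWithChangeSet(changes)
}
```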
### IMPROVEMENTS:
- [crypto] [\#3279](https://github.com/tendermint/tendermint/issues/3279) Use `btcec.S256().N` directly instead of hard coding a copy.
### BUG FIXES:
- [state] [\#3222](https://github.com/tendermint/tendermint/issues/3222) Fix validator set updates so they are applied as a set, rather
than one at a time. This makes the proposer selection algorithm independent of
the order of updates in `ResponseEndBlock.ValidatorUpdates`.
- [evidence] [\#3286](https://github.com/tendermint/tendermint/issues/3286) Don't add committed evidence to evidence pool.
## v0.29.2
*February 7th, 2019*
Special thanks to external contributors on this release:
@ackratos, @rickyyangz
**Note**: This release contains security sensitive patches in the `p2p` and
`crypto` packages:
- p2p:
- Partial fix for MITM attacks on the p2p connection. MITM conditions may
still exist. See [\#3010](https://github.com/tendermint/tendermint/issues/3010).
- crypto:
- Eliminate our fork of `btcd` and use the `btcd/btcec` library directly for
native secp256k1 signing. Note we still modify the signature encoding to
prevent malleability.
- Support the libsecp256k1 library via CGo through the `go-ethereum/crypto/secp256k1` package.
- Eliminate MixEntropy functions
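As a rough illustration of the malleability point above (a sketch, not the actual Tendermint code; assumes `math/big` and `github.com/btcsuite/btcd/btcec`):

```go
// canonicalize a secp256k1 signature by forcing s into the lower half of the
// curve order, so that (r, s) and (r, N-s) are not both accepted
func normalizeS(s *big.Int) *big.Int {
	n := btcec.S256().N             // curve order N
	halfN := new(big.Int).Rsh(n, 1) // N / 2
	if s.Cmp(halfN) > 0 {
		return new(big.Int).Sub(n, s) // use N - s instead
	}
	return s
}
```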
### BREAKING CHANGES:
* Go API
- [crypto] [\#3278](https://github.com/tendermint/tendermint/issues/3278) Remove
MixEntropy functions
- [types] [\#3245](https://github.com/tendermint/tendermint/issues/3245) Commit uses `type CommitSig Vote` instead of `Vote` directly.
In preparation for removing redundant fields from the commit [\#1648](https://github.com/tendermint/tendermint/issues/1648)
### IMPROVEMENTS:
- [consensus] [\#3246](https://github.com/tendermint/tendermint/issues/3246) Better logging and notes on recovery for corrupted WAL file
- [crypto] [\#3163](https://github.com/tendermint/tendermint/issues/3163) Use ethereum's libsecp256k1 go-wrapper for signatures when cgo is available
- [crypto] [\#3162](https://github.com/tendermint/tendermint/issues/3162) Wrap btcd instead of forking it to keep up with fixes (used if cgo is not available)
- [makefile] [\#3233](https://github.com/tendermint/tendermint/issues/3233) Use golangci-lint instead of go-metalinter
- [tools] [\#3218](https://github.com/tendermint/tendermint/issues/3218) Add go-deadlock tool to help detect deadlocks
- [tools] [\#3106](https://github.com/tendermint/tendermint/issues/3106) Add tm-signer-harness test harness for remote signers
- [tests] [\#3258](https://github.com/tendermint/tendermint/issues/3258) Fixed a bunch of non-deterministic test failures
### BUG FIXES:
- [node] [\#3186](https://github.com/tendermint/tendermint/issues/3186) EventBus and indexerService should be started before first block (for replay last block on handshake) execution (@ackratos)
- [p2p] [\#3232](https://github.com/tendermint/tendermint/issues/3232) Fix infinite loop leading to addrbook deadlock for seed nodes
- [p2p] [\#3247](https://github.com/tendermint/tendermint/issues/3247) Fix panic in SeedMode when calling FlushStop and OnStop
concurrently
- [p2p] [\#3040](https://github.com/tendermint/tendermint/issues/3040) Fix MITM on secret connection by checking low-order points
- [privval] [\#3258](https://github.com/tendermint/tendermint/issues/3258) Fix race between sign requests and ping requests in socket
## v0.29.1
*January 24, 2019*
Special thanks to external contributors on this release:
@infinytum, @gauthamzz
This release contains two important fixes: one for p2p layer where we sometimes
were not closing connections and one for consensus layer where consensus with
no empty blocks (`create_empty_blocks = false`) could halt.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### IMPROVEMENTS:
- [pex] [\#3037](https://github.com/tendermint/tendermint/issues/3037) Only log "Reached max attempts to dial" once
- [rpc] [\#3159](https://github.com/tendermint/tendermint/issues/3159) Expose
`triggered_timeout_commit` in the `/dump_consensus_state`
### BUG FIXES:
- [consensus] [\#3199](https://github.com/tendermint/tendermint/issues/3199) Fix consensus halt with no empty blocks from not resetting triggeredTimeoutCommit
- [p2p] [\#2967](https://github.com/tendermint/tendermint/issues/2967) Fix file descriptor leak
## v0.29.0
*January 21, 2019*
Special thanks to external contributors on this release:
@bradyjoestar, @kunaldhariwal, @gauthamzz, @hrharder
This release is primarily about making some breaking changes to
the Block protocol version before Cosmos launch, and to fixing more issues
in the proposer selection algorithm discovered on Cosmos testnets.
The Block protocol changes include using a standard Merkle tree format (RFC 6962),
fixing some inconsistencies between field orders in Vote and Proposal structs,
and constraining the hash of the ConsensusParams to include only a few fields.
The proposer selection algorithm saw significant progress,
including a [formal proof by @cwgoes for the base-case in Idris](https://github.com/cwgoes/tm-proposer-idris)
and a [much more detailed specification (still in progress) by
@ancazamfir](https://github.com/tendermint/tendermint/pull/3140).
Fixes to the proposer selection algorithm include normalizing the proposer
priorities to mitigate the effects of large changes to the validator set.
That said, we just discovered [another bug](https://github.com/tendermint/tendermint/issues/3181),
which will be fixed in the next breaking release.
While we are trying to stabilize the Block protocol to preserve compatibility
with old chains, there may be some final changes yet to come before Cosmos
launch as we continue to audit and test the software.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
* CLI/RPC/Config
* Apps
- [state] [\#3049](https://github.com/tendermint/tendermint/issues/3049) Total voting power of the validator set is upper bounded by
`MaxInt64 / 8`. Apps must ensure they do not return changes to the validator
set that cause this maximum to be exceeded.
* Go API
- [node] [\#3082](https://github.com/tendermint/tendermint/issues/3082) MetricsProvider now requires you to pass a chain ID
- [types] [\#2713](https://github.com/tendermint/tendermint/issues/2713) Rename `TxProof.LeafHash` to `TxProof.Leaf`
- [crypto/merkle] [\#2713](https://github.com/tendermint/tendermint/issues/2713) `SimpleProof.Verify` takes a `leaf` instead of a
`leafHash` and performs the hashing itself
* Blockchain Protocol
* [crypto/merkle] [\#2713](https://github.com/tendermint/tendermint/issues/2713) Merkle trees now match the RFC 6962 specification
* [types] [\#3078](https://github.com/tendermint/tendermint/issues/3078) Re-order Timestamp and BlockID in CanonicalVote so it's
consistent with CanonicalProposal (BlockID comes
first)
* [types] [\#3165](https://github.com/tendermint/tendermint/issues/3165) Hash of ConsensusParams only includes BlockSize.MaxBytes and
BlockSize.MaxGas
* P2P Protocol
- [consensus] [\#3049](https://github.com/tendermint/tendermint/issues/3049) Normalize priorities to not exceed `2*TotalVotingPower` to mitigate unfair proposer selection
heavily preferring earlier joined validators in the case of an early bonded large validator unbonding
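A hedged sketch of the kind of guard an app could run before returning validator updates, mirroring the `MaxInt64 / 8` bound from the Apps entry above (the helper name is hypothetical; assumes `math`):

```go
const maxTotalVotingPower = int64(math.MaxInt64) / 8

// withinPowerBound reports whether the summed voting power stays under the cap.
func withinPowerBound(powers []int64) bool {
	var total int64
	for _, p := range powers {
		if p > maxTotalVotingPower-total { // also guards against int64 overflow
			return false
		}
		total += p
	}
	return true
}
```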
### FEATURES:
### IMPROVEMENTS:
- [rpc] [\#3065](https://github.com/tendermint/tendermint/issues/3065) Return maxPerPage (100), not defaultPerPage (30) if `per_page` is greater than the max 100.
- [instrumentation] [\#3082](https://github.com/tendermint/tendermint/issues/3082) Add `chain_id` label for all metrics
### BUG FIXES:
- [crypto] [\#3164](https://github.com/tendermint/tendermint/issues/3164) Update `btcd` fork for rare signRFC6979 bug
- [lite] [\#3171](https://github.com/tendermint/tendermint/issues/3171) Fix verifying large validator set changes
- [log] [\#3125](https://github.com/tendermint/tendermint/issues/3125) Fix year format
- [mempool] [\#3168](https://github.com/tendermint/tendermint/issues/3168) Limit tx size to fit in the max reactor msg size
- [scripts] [\#3147](https://github.com/tendermint/tendermint/issues/3147) Fix json2wal for large block parts (@bradyjoestar)
## v0.28.1
*January 18th, 2019*
Special thanks to external contributors on this release:
@HaoyangLiu
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BUG FIXES:
- [consensus] Fix consensus halt from proposing blocks with too much evidence
## v0.28.0
*January 16th, 2019*
Special thanks to external contributors on this release:
@fmauricios, @gianfelipe93, @husio, @needkane, @srmo, @yutianwu
This release is primarily about upgrades to the `privval` system -
separating the `priv_validator.json` into distinct config and data files, and
refactoring the socket validator to support reconnections.
**Note:** Please backup your existing `priv_validator.json` before using this
version.
See [UPGRADING.md](UPGRADING.md) for more details.
### BREAKING CHANGES:
* CLI/RPC/Config
- [cli] Removed `--proxy_app=dummy` option. Use `kvstore` (`persistent_kvstore`) instead.
- [cli] Renamed `--proxy_app=nilapp` to `--proxy_app=noop`.
- [config] [\#2992](https://github.com/tendermint/tendermint/issues/2992) `allow_duplicate_ip` is now set to false
- [privval] [\#1181](https://github.com/tendermint/tendermint/issues/1181) Split `priv_validator.json` into immutable (`config/priv_validator_key.json`) and mutable (`data/priv_validator_state.json`) parts (@yutianwu)
- [privval] [\#2926](https://github.com/tendermint/tendermint/issues/2926) Split up `PubKeyMsg` into `PubKeyRequest` and `PubKeyResponse` to be consistent with other message types
- [privval] [\#2923](https://github.com/tendermint/tendermint/issues/2923) Listen for unix socket connections instead of dialing them
* Apps
* Go API
- [types] [\#2981](https://github.com/tendermint/tendermint/issues/2981) Remove `PrivValidator.GetAddress()`
* Blockchain Protocol
* P2P Protocol
### FEATURES:
- [rpc] [\#3052](https://github.com/tendermint/tendermint/issues/3052) Include peer's remote IP in `/net_info`
### IMPROVEMENTS:
- [consensus] [\#3086](https://github.com/tendermint/tendermint/issues/3086) Log peerID on ignored votes (@srmo)
- [docs] [\#3061](https://github.com/tendermint/tendermint/issues/3061) Added specification for signing consensus msgs at
./docs/spec/consensus/signing.md
- [privval] [\#2948](https://github.com/tendermint/tendermint/issues/2948) Memoize pubkey so it's only requested once on startup
- [privval] [\#2923](https://github.com/tendermint/tendermint/issues/2923) Retry RemoteSigner connections on error
### BUG FIXES:
- [build] [\#3085](https://github.com/tendermint/tendermint/issues/3085) Fix `Version` field in build scripts (@husio)
- [crypto/multisig] [\#3102](https://github.com/tendermint/tendermint/issues/3102) Fix multisig keys address length
- [crypto/encoding] [\#3101](https://github.com/tendermint/tendermint/issues/3101) Fix `PubKeyMultisigThreshold` unmarshalling into `crypto.PubKey` interface
- [p2p/conn] [\#3111](https://github.com/tendermint/tendermint/issues/3111) Make SecretConnection thread safe
- [rpc] [\#3053](https://github.com/tendermint/tendermint/issues/3053) Fix internal error in `/tx_search` when results are empty
(@gianfelipe93)
- [types] [\#2926](https://github.com/tendermint/tendermint/issues/2926) Do not panic if retrieving the privval's public key fails
## v0.27.4
*December 21st, 2018*
### BUG FIXES:
- [mempool] [\#3036](https://github.com/tendermint/tendermint/issues/3036) Fix
LRU cache by popping the least recently used item when the cache is full,
not the most recently used one!
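For context, a minimal sketch of the intended eviction behaviour (illustrative only, not the mempool's actual cache; assumes `container/list`):

```go
// when the cache is full, evict the *least* recently used entry (the back of
// the list), never the most recently used one
type lruCache struct {
	size  int
	list  *list.List               // front = most recent, back = least recent
	items map[string]*list.Element
}

func (c *lruCache) Push(key string) {
	if e, ok := c.items[key]; ok {
		c.list.MoveToFront(e) // already cached: just mark it as recently used
		return
	}
	if c.list.Len() >= c.size {
		oldest := c.list.Back() // the least recently used entry
		c.list.Remove(oldest)
		delete(c.items, oldest.Value.(string))
	}
	c.items[key] = c.list.PushFront(key)
}
```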
## v0.27.3
*December 16th, 2018*
### BREAKING CHANGES:
* Go API
- [dep] [\#3027](https://github.com/tendermint/tendermint/issues/3027) Revert to mainline Go crypto library, eliminating the modified
`bcrypt.GenerateFromPassword`
## v0.27.2
*December 16th, 2018*
### IMPROVEMENTS:
- [node] [\#3025](https://github.com/tendermint/tendermint/issues/3025) Validate NodeInfo addresses on startup.
### BUG FIXES:
- [p2p] [\#3025](https://github.com/tendermint/tendermint/pull/3025) Revert to using defers in addrbook. Fixes deadlocks in pex and consensus upon invalid ExternalAddr/ListenAddr configuration.
## v0.27.1
*December 15th, 2018*
Special thanks to external contributors on this release:
@danil-lashin, @hleb-albau, @james-ray, @leo-xinwang
### FEATURES:
- [rpc] [\#2964](https://github.com/tendermint/tendermint/issues/2964) Add `UnconfirmedTxs(limit)` and `NumUnconfirmedTxs()` methods to HTTP/Local clients (@danil-lashin)
- [docs] [\#3004](https://github.com/tendermint/tendermint/issues/3004) Enable full-text search on docs pages
### IMPROVEMENTS:
- [consensus] [\#2971](https://github.com/tendermint/tendermint/issues/2971) Return error if ValidatorSet is empty after InitChain
(@leo-xinwang)
- [ci/cd] [\#3005](https://github.com/tendermint/tendermint/issues/3005) Updated CircleCI job to trigger website build when docs are updated
- [docs] Various updates
### BUG FIXES:
- [cmd] [\#2983](https://github.com/tendermint/tendermint/issues/2983) `testnet` command always sets `addr_book_strict = false`
- [config] [\#2980](https://github.com/tendermint/tendermint/issues/2980) Fix CORS options formatting
- [kv indexer] [\#2912](https://github.com/tendermint/tendermint/issues/2912) Don't ignore key when executing CONTAINS
- [mempool] [\#2961](https://github.com/tendermint/tendermint/issues/2961) Call `notifyTxsAvailable` if there're txs left after committing a block, but recheck=false
- [mempool] [\#2994](https://github.com/tendermint/tendermint/issues/2994) Reject txs with negative GasWanted
- [p2p] [\#2990](https://github.com/tendermint/tendermint/issues/2990) Fix a bug where seeds don't disconnect from a peer after 3h
- [consensus] [\#3006](https://github.com/tendermint/tendermint/issues/3006) Save state after InitChain only when stateHeight is also 0 (@james-ray)
## v0.27.0
*December 5th, 2018*
Special thanks to external contributors on this release:
@danil-lashin, @srmo
Special thanks to @dlguddus for discovering a [major
issue](https://github.com/tendermint/tendermint/issues/2718#issuecomment-440888677)
in the proposer selection algorithm.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
This release is primarily about fixes to the proposer selection algorithm
in preparation for the [Cosmos Game of
Stakes](https://blog.cosmos.network/the-game-of-stakes-is-open-for-registration-83a404746ee6).
It also makes use of the `ConsensusParams.Validator.PubKeyTypes` to restrict the
key types that can be used by validators, and removes the `Heartbeat` consensus
message.
### BREAKING CHANGES:
* CLI/RPC/Config
- [rpc] [\#2932](https://github.com/tendermint/tendermint/issues/2932) Rename `accum` to `proposer_priority`
* Go API
- [db] [\#2913](https://github.com/tendermint/tendermint/pull/2913)
ReverseIterator API change: start < end, and end is exclusive.
- [types] [\#2932](https://github.com/tendermint/tendermint/issues/2932) Rename `Validator.Accum` to `Validator.ProposerPriority`
* Blockchain Protocol
- [state] [\#2714](https://github.com/tendermint/tendermint/issues/2714) Validators can now only use pubkeys allowed within
ConsensusParams.Validator.PubKeyTypes
* P2P Protocol
- [consensus] [\#2871](https://github.com/tendermint/tendermint/issues/2871)
Remove *ProposalHeartbeat* message as it serves no real purpose (@srmo)
- [state] Fixes for proposer selection:
- [\#2785](https://github.com/tendermint/tendermint/issues/2785) Accum for new validators is `-1.125*totalVotingPower` instead of 0
- [\#2941](https://github.com/tendermint/tendermint/issues/2941) val.Accum is preserved during ValidatorSet.Update to avoid being
reset to 0
### IMPROVEMENTS:
- [state] [\#2929](https://github.com/tendermint/tendermint/issues/2929) Minor refactor of updateState logic (@danil-lashin)
- [node] [\#2959](https://github.com/tendermint/tendermint/issues/2959) Allow node to start even if software's BlockProtocol is
different from state's BlockProtocol
- [pex] [\#2959](https://github.com/tendermint/tendermint/issues/2959) Pex reactor logger uses `module=pex`
### BUG FIXES:
- [p2p] [\#2968](https://github.com/tendermint/tendermint/issues/2968) Panic on transport error rather than continuing to run but not
accept new connections
- [p2p] [\#2969](https://github.com/tendermint/tendermint/issues/2969) Fix mismatch in peer count between `/net_info` and the prometheus
metrics
- [rpc] [\#2408](https://github.com/tendermint/tendermint/issues/2408) `/broadcast_tx_commit`: Fix "interface conversion: interface {} in nil, not EventDataTx" panic (could happen if somebody sent a tx using `/broadcast_tx_commit` while Tendermint was being stopped)
- [state] [\#2785](https://github.com/tendermint/tendermint/issues/2785) Fix accum for new validators to be `-1.125*totalVotingPower`
instead of 0, forcing them to wait before becoming the proposer. Also:
- do not batch clip
- keep accums averaged near 0
- [txindex/kv] [\#2925](https://github.com/tendermint/tendermint/issues/2925) Don't return false positives when range searching for a prefix of a tag value
- [types] [\#2938](https://github.com/tendermint/tendermint/issues/2938) Fix regression in v0.26.4 where we panic on empty
genDoc.Validators
- [types] [\#2941](https://github.com/tendermint/tendermint/issues/2941) Preserve val.Accum during ValidatorSet.Update to avoid it being
reset to 0 every time a validator is updated
## v0.26.4
*November 27th, 2018*
Special thanks to external contributors on this release:
ackratos, goolAdapter, james-ray, joe-bowman, kostko,
nagarajmanjunath, tomtau
@ackratos, @goolAdapter, @james-ray, @joe-bowman, @kostko,
@nagarajmanjunath, @tomtau
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).

View File

@@ -1,48 +1,23 @@
# Pending
## v0.31.0
## v0.27.0
*TBD*
**
Special thanks to external contributors on this release:
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
* CLI/RPC/Config
- [rpc] \#2932 Rename `accum` to `proposer_priority`
* Apps
* Go API
- [db] [\#2913](https://github.com/tendermint/tendermint/pull/2913)
ReverseIterator API change -- start < end, and end is exclusive.
- [types] \#2932 Rename `Validator.Accum` to `Validator.ProposerPriority`
* Blockchain Protocol
- [state] \#2714 Validators can now only use pubkeys allowed within
ConsensusParams.ValidatorParams
* P2P Protocol
- [consensus] [\#2871](https://github.com/tendermint/tendermint/issues/2871)
Remove *ProposalHeartbeat* message as it serves no real purpose
- [state] Fixes for proposer selection:
- \#2785 Accum for new validators is `-1.125*totalVotingPower` instead of 0
- \#2941 val.Accum is preserved during ValidatorSet.Update to avoid being
reset to 0
### FEATURES:
### IMPROVEMENTS:
### BUG FIXES:
- [types] \#2938 Fix regression in v0.26.4 where we panic on empty
genDoc.Validators
- [state] \#2785 Fix accum for new validators to be `-1.125*totalVotingPower`
instead of 0, forcing them to wait before becoming the proposer. Also:
- do not batch clip
- keep accums averaged near 0
- [types] \#2941 Preserve val.Accum during ValidatorSet.Update to avoid it being
reset to 0 every time a validator is updated

View File

@@ -3,7 +3,7 @@ set -e
# Get the tag from the version, or try to figure it out.
if [ -z "$TAG" ]; then
TAG=$(awk -F\" '/Version =/ { print $2; exit }' < ../version/version.go)
TAG=$(awk -F\" '/TMCoreSemVer =/ { print $2; exit }' < ../version/version.go)
fi
if [ -z "$TAG" ]; then
echo "Please specify a tag."

View File

@@ -3,7 +3,7 @@ set -e
# Get the tag from the version, or try to figure it out.
if [ -z "$TAG" ]; then
TAG=$(awk -F\" '/Version =/ { print $2; exit }' < ../version/version.go)
TAG=$(awk -F\" '/TMCoreSemVer =/ { print $2; exit }' < ../version/version.go)
fi
if [ -z "$TAG" ]; then
echo "Please specify a tag."

28
Gopkg.lock generated
View File

@@ -10,12 +10,11 @@
revision = "3a771d992973f24aa725d07868b467d1ddfceafb"
[[projects]]
branch = "master"
digest = "1:c0decf632843204d2b8781de7b26e7038584e2dcccc7e2f401e88ae85b1df2b7"
digest = "1:093bf93a65962e8191e3e8cd8fc6c363f83d43caca9739c906531ba7210a9904"
name = "github.com/btcsuite/btcd"
packages = ["btcec"]
pruneopts = "UT"
revision = "67e573d211ace594f1366b4ce9d39726c4b19bd0"
revision = "ed77733ec07dfc8a513741138419b8d9d3de9d2d"
[[projects]]
digest = "1:1d8e1cb71c33a9470bbbae09bfec09db43c6bf358dfcae13cd8807c4e2a9a2bf"
@@ -35,6 +34,14 @@
revision = "8991bc29aa16c548c550c7ff78260e27b9ab7c73"
version = "v1.1.1"
[[projects]]
digest = "1:b42be5a3601f833e0b9f2d6625d887ec1309764bfcac3d518f3db425dcd4ec5c"
name = "github.com/ethereum/go-ethereum"
packages = ["crypto/secp256k1"]
pruneopts = "T"
revision = "9dc5d1a915ac0e0bd8429d6ac41df50eec91de5f"
version = "v1.8.21"
[[projects]]
digest = "1:544229a3ca0fb2dd5ebc2896d3d2ff7ce096d9751635301e44e37e761349ee70"
name = "github.com/fortytw2/leaktest"
@@ -360,13 +367,6 @@
pruneopts = "UT"
revision = "6b91fda63f2e36186f1c9d0e48578defb69c5d43"
[[projects]]
digest = "1:605b6546f3f43745695298ec2d342d3e952b6d91cdf9f349bea9315f677d759f"
name = "github.com/tendermint/btcd"
packages = ["btcec"]
pruneopts = "UT"
revision = "e5840949ff4fff0c56f9b6a541e22b63581ea9df"
[[projects]]
digest = "1:ad9c4c1a4e7875330b1f62906f2830f043a23edb5db997e3a5ac5d3e6eadf80a"
name = "github.com/tendermint/go-amino"
@@ -376,7 +376,7 @@
version = "v0.14.1"
[[projects]]
digest = "1:72b71e3a29775e5752ed7a8012052a3dee165e27ec18cedddae5288058f09acf"
digest = "1:00d2b3e64cdc3fa69aa250dfbe4cc38c4837d4f37e62279be2ae52107ffbbb44"
name = "golang.org/x/crypto"
packages = [
"bcrypt",
@@ -397,8 +397,7 @@
"salsa20/salsa",
]
pruneopts = "UT"
revision = "3764759f34a542a3aef74d6b02e35be7ab893bba"
source = "github.com/tendermint/crypto"
revision = "505ab145d0a99da450461ae2c1a9f6cd10d1f447"
[[projects]]
digest = "1:d36f55a999540d29b6ea3c2ea29d71c76b1d9853fdcd3e5c5cb4836f2ba118f1"
@@ -504,8 +503,10 @@
analyzer-name = "dep"
analyzer-version = 1
input-imports = [
"github.com/btcsuite/btcd/btcec",
"github.com/btcsuite/btcutil/base58",
"github.com/btcsuite/btcutil/bech32",
"github.com/ethereum/go-ethereum/crypto/secp256k1",
"github.com/fortytw2/leaktest",
"github.com/go-kit/kit/log",
"github.com/go-kit/kit/log/level",
@@ -535,7 +536,6 @@
"github.com/syndtr/goleveldb/leveldb/errors",
"github.com/syndtr/goleveldb/leveldb/iterator",
"github.com/syndtr/goleveldb/leveldb/opt",
"github.com/tendermint/btcd/btcec",
"github.com/tendermint/go-amino",
"golang.org/x/crypto/bcrypt",
"golang.org/x/crypto/chacha20poly1305",

View File

@@ -75,14 +75,29 @@
name = "github.com/prometheus/client_golang"
version = "^0.9.1"
# we use the secp256k1 implementation:
[[constraint]]
name = "github.com/ethereum/go-ethereum"
version = "^v1.8.21"
# Prevent dep from pruning build scripts and codegen templates
# note: this leaves the whole go-ethereum package in vendor
# can be removed when https://github.com/golang/dep/issues/1847 is resolved
[[prune.project]]
name = "github.com/ethereum/go-ethereum"
unused-packages = false
###################################
## Some repos don't have releases.
## Pin to revision
[[constraint]]
name = "github.com/btcsuite/btcd"
revision = "ed77733ec07dfc8a513741138419b8d9d3de9d2d"
[[constraint]]
name = "golang.org/x/crypto"
source = "github.com/tendermint/crypto"
revision = "3764759f34a542a3aef74d6b02e35be7ab893bba"
revision = "505ab145d0a99da450461ae2c1a9f6cd10d1f447"
[[override]]
name = "github.com/jmhodges/levigo"
@@ -93,9 +108,6 @@
name = "github.com/btcsuite/btcutil"
revision = "d4cc87b860166d00d6b5b9e0d3b3d71d6088d4d4"
[[constraint]]
name = "github.com/tendermint/btcd"
revision = "e5840949ff4fff0c56f9b6a541e22b63581ea9df"
[[constraint]]
name = "github.com/rcrowley/go-metrics"

View File

@@ -1,7 +1,7 @@
GOTOOLS = \
github.com/mitchellh/gox \
github.com/golang/dep/cmd/dep \
github.com/alecthomas/gometalinter \
github.com/golangci/golangci-lint/cmd/golangci-lint \
github.com/gogo/protobuf/protoc-gen-gogo \
github.com/square/certstrap
GOBIN?=${GOPATH}/bin
@@ -11,8 +11,6 @@ INCLUDE = -I=. -I=${GOPATH}/src -I=${GOPATH}/src/github.com/gogo/protobuf/protob
BUILD_TAGS?='tendermint'
BUILD_FLAGS = -ldflags "-X github.com/tendermint/tendermint/version.GitCommit=`git rev-parse --short=8 HEAD`"
LINT_FLAGS = --exclude '.*\.pb\.go' --exclude 'vendor/*' --vendor --deadline=600s
all: check build test install
check: check_tools get_vendor_deps
@@ -82,10 +80,6 @@ get_tools:
@echo "--> Installing tools"
./scripts/get_tools.sh
get_dev_tools:
@echo "--> Downloading linters (this may take awhile)"
$(GOPATH)/src/github.com/alecthomas/gometalinter/scripts/install.sh -b $(GOBIN)
update_tools:
@echo "--> Updating tools"
./scripts/get_tools.sh
@@ -226,6 +220,22 @@ test_race:
@echo "--> Running go test --race"
@GOCACHE=off go test -p 1 -v -race $(PACKAGES)
# uses https://github.com/sasha-s/go-deadlock/ to detect potential deadlocks
test_with_deadlock:
make set_with_deadlock
make test
make cleanup_after_test_with_deadlock
set_with_deadlock:
find . -name "*.go" | grep -v "vendor/" | xargs -n 1 sed -i.bak 's/sync.RWMutex/deadlock.RWMutex/'
find . -name "*.go" | grep -v "vendor/" | xargs -n 1 sed -i.bak 's/sync.Mutex/deadlock.Mutex/'
find . -name "*.go" | grep -v "vendor/" | xargs -n 1 goimports -w
# cleans up after you ran test_with_deadlock
cleanup_after_test_with_deadlock:
find . -name "*.go" | grep -v "vendor/" | xargs -n 1 sed -i.bak 's/deadlock.RWMutex/sync.RWMutex/'
find . -name "*.go" | grep -v "vendor/" | xargs -n 1 sed -i.bak 's/deadlock.Mutex/sync.Mutex/'
find . -name "*.go" | grep -v "vendor/" | xargs -n 1 goimports -w
########################################
### Formatting, linting, and vetting
@@ -233,38 +243,9 @@ test_race:
fmt:
@go fmt ./...
metalinter:
lint:
@echo "--> Running linter"
@gometalinter $(LINT_FLAGS) --disable-all \
--enable=deadcode \
--enable=gosimple \
--enable=misspell \
--enable=safesql \
./...
#--enable=gas \
#--enable=maligned \
#--enable=dupl \
#--enable=errcheck \
#--enable=goconst \
#--enable=gocyclo \
#--enable=goimports \
#--enable=golint \ <== comments on anything exported
#--enable=gotype \
#--enable=ineffassign \
#--enable=interfacer \
#--enable=megacheck \
#--enable=staticcheck \
#--enable=structcheck \
#--enable=unconvert \
#--enable=unparam \
#--enable=unused \
#--enable=varcheck \
#--enable=vet \
#--enable=vetshadow \
metalinter_all:
@echo "--> Running linter (all)"
gometalinter $(LINT_FLAGS) --enable-all --disable=lll ./...
@golangci-lint run
DESTINATION = ./index.html.md
@@ -288,12 +269,11 @@ build-docker:
### Local testnet using docker
# Build linux binary on other platforms
build-linux:
build-linux: get_tools get_vendor_deps
GOOS=linux GOARCH=amd64 $(MAKE) build
build-docker-localnode:
cd networks/local
make
@cd networks/local && make
# Run a 4-node testnet locally
localnet-start: localnet-stop
@@ -331,4 +311,4 @@ build-slate:
# To avoid unintended conflicts with file names, always add to .PHONY
# unless there is a reason not to.
# https://www.gnu.org/software/make/manual/html_node/Phony-Targets.html
.PHONY: check build build_race build_abci dist install install_abci check_dep check_tools get_tools get_dev_tools update_tools get_vendor_deps draw_deps get_protoc protoc_abci protoc_libs gen_certs clean_certs grpc_dbserver test_cover test_apps test_persistence test_p2p test test_race test_integrations test_release test100 vagrant_test fmt rpc-docs build-linux localnet-start localnet-stop build-docker build-docker-localnode sentry-start sentry-config sentry-stop build-slate protoc_grpc protoc_all build_c install_c
.PHONY: check build build_race build_abci dist install install_abci check_dep check_tools get_tools update_tools get_vendor_deps draw_deps get_protoc protoc_abci protoc_libs gen_certs clean_certs grpc_dbserver test_cover test_apps test_persistence test_p2p test test_race test_integrations test_release test100 vagrant_test fmt rpc-docs build-linux localnet-start localnet-stop build-docker build-docker-localnode sentry-start sentry-config sentry-stop build-slate protoc_grpc protoc_all build_c install_c test_with_deadlock cleanup_after_test_with_deadlock lint

158
PHILOSOPHY.md Normal file
View File

@@ -0,0 +1,158 @@
## Design goals
The design goals for Tendermint (and the SDK and related libraries) are:
* Simplicity and Legibility
* Parallel performance, namely ability to utilize multicore architecture
* Ability to evolve the codebase bug-free
* Debuggability
* Complete correctness that considers all edge cases, esp in concurrency
* Future-proof modular architecture, message protocol, APIs, and encapsulation
### Justification
Legibility is key to maintaining bug-free software as it evolves toward more
optimizations, more ease of debugging, and additional features.
It is too easy to introduce bugs over time by replacing lines of code with
ones that may panic, which is why, ideally, locks are released by defer
statements.
For example,
```go
func (obj *MyObj) something() {
mtx.Lock()
obj.something = other
mtx.Unlock()
}
```
It is too easy to refactor the codebase in the future to replace `other` with
`other.String()` for example, and this may introduce a bug that causes a
deadlock. So as much as reasonably possible, we need to be using defer
statements, even though it introduces additional overhead.
If it is necessary to optimize the unlocking of mutex locks, the solution is
more modularity via smaller functions, so that defer'd unlocks are scoped
within a smaller function.
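Concretely, the defer-based variant of the example above looks like:

```go
func (obj *MyObj) something() {
	mtx.Lock()
	defer mtx.Unlock() // runs on every return path, even if the body panics
	obj.something = other
}
```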
Similarly, idiomatic for-loops should always be preferred over those that use
custom counters, because it is too easy to evolve the body of a for-loop to
become more complicated over time, and it becomes more and more difficult to
assess the correctness of such a for-loop by visual inspection.
### On performance
It doesn't matter whether there are alternative implementations that are 2x or
3x more performant if the software doesn't work, deadlocks, or cannot be
debugged. By taking advantage of multicore concurrency, the Tendermint
implementation will at least be within an order of magnitude of what is
theoretically possible. The design philosophy of Tendermint,
and the choice of Go as implementation language, is designed to make Tendermint
implementation the standard specification for concurrent BFT software.
By focusing on the message protocols (e.g. ABCI, p2p messages), and
encapsulation e.g. IAVL module, (relatively) independent reactors, we are both
implementing a standard implementation to be used as the specification for
future implementations in more optimizable languages like Rust, Java, and C++;
as well as creating sufficiently performant software. Tendermint Core will
never be as fast as future implementations of the Tendermint Spec, because Go
isn't designed to be as fast as possible. The advantage of using Go is that we
can develop the whole stack of modular components **faster** than in other
languages.
Furthermore, the real bottleneck is in the application layer, and it isn't
necessary to support more than a sufficiently decentralized set of validators
(e.g. 100 ~ 300 validators is sufficient, with delegated bonded PoS).
Instead of optimizing Tendermint performance down to the metal, let's focus on
other matters, namely the ability to push feature-complete software that works
well enough, can be debugged and maintained, and can serve as a spec for future
implementations.
### On encapsulation
In order to create maintainable, forward-optimizable software, it is critical
to develop well-encapsulated objects that have well understood properties, and
to re-use these easy-to-use-correctly components as building blocks for further
encapsulated meta-objects.
For example, mutexes are cheap enough for Tendermint's design goals when there
isn't goroutine contention, so it is encouraged to create concurrency safe
structures with struct-level mutexes. If they are used in the context of
non-concurrent logic, then the performance is good enough. If they are used in
the context of concurrent logic, then it will still perform correctly.
Examples of this design principle can be seen in the types.ValidatorSet struct
and the cmn.Rand struct. Each is a single struct declaration that can be used
in both concurrent and non-concurrent logic, and because it is well
encapsulated, it's easy to get the usage of the mutex right.
#### example: cmn.Rand:
`The default Source is safe for concurrent use by multiple goroutines, but
Sources created by NewSource are not`. The reason why the default
package-level source is safe for concurrent use is because it is protected (see
`lockedSource` in https://golang.org/src/math/rand/rand.go).
But we shouldn't rely on the global source, we should be creating our own
Rand/Source instances and using them, especially for determinism in testing.
So it is reasonable to have cmn.Rand be protected by a mutex. Whether we want
our own implementation of Rand is another question, but the answer there is
also in the affirmative. Sometimes you want to know where Rand is being used
in your code, so it becomes a simple matter of dropping in a log statement to
inject inspectability into Rand usage. Also, it is nice to be able to extend
the functionality of Rand with custom methods. For these reasons, and for the
reasons outlined in this design philosophy document, we should
continue to use the cmn.Rand object, with mutex protection.
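A minimal sketch of this pattern using only the standard library (not the actual cmn.Rand implementation; assumes `math/rand` and `sync`):

```go
// Rand is safe in both concurrent and non-concurrent contexts thanks to its
// struct-level mutex; callers never have to think about locking.
type Rand struct {
	mtx sync.Mutex
	rnd *rand.Rand
}

func NewRand(seed int64) *Rand {
	return &Rand{rnd: rand.New(rand.NewSource(seed))} // deterministic for tests
}

func (r *Rand) Intn(n int) int {
	r.mtx.Lock()
	defer r.mtx.Unlock()
	return r.rnd.Intn(n)
}
```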
Another key aspect of good encapsulation is the choice of exposed vs unexposed
methods. It should be clear to the reader of the code, which methods are
intended to be used in what context, and what safe usage is. Part of this is
solved by hiding methods via unexported methods. Another part of this is
naming conventions on the methods (e.g. underscores) with good documentation,
and code organization. If there are too many exposed methods and it isn't
clear which methods have what side effects, then something is wrong with the
design of the abstractions, and it should be revisited.
### On concurrency
In order for Tendermint to remain relevant in the years to come, it is vital
for Tendermint to take advantage of multicore architectures. Due to the nature
of the problem, namely consensus across a concurrent p2p gossip network, and to
handle RPC requests for a large number of consuming subscribers, it is
unavoidable for Tendermint development to require expertise in concurrency
design, especially when it comes to the reactor design, and also for RPC
request handling.
## Guidelines
Here are some guidelines for designing for (sufficient) performance and concurrency:
* Mutex locks are cheap enough when there isn't contention.
* Do not optimize code without analytical or observed proof that it is in a hot path.
* Don't over-use channels when mutex locks w/ encapsulation are sufficient.
* The need to drain channels is often a hint of unconsidered edge cases.
* The creation of O(N) one-off goroutines is generally technical debt that
needs to get addressed sooner than later. Avoid creating too many
goroutines as a patch around incomplete concurrency design, or at least be
aware of the debt and do not invest in the debt. On the other hand, Tendermint
is designed to have a limited number of peers (e.g. 10 or 20), so the creation
of O(C) goroutines per O(P) peers is still O(C\*P=constant).
* Use defer statements to unlock as much as possible. If you want to unlock sooner,
try to create more modular functions that do make use of defer statements.
## Mantras
* Premature optimization kills
* Readability is paramount
* Beautiful is better than fast.
* In the face of ambiguity, refuse the temptation to guess.
* In the face of bugs, refuse the temptation to cover the bug.
* There should be one-- and preferably only one --obvious way to do it.

View File

@@ -1,8 +1,8 @@
# Tendermint
[Byzantine-Fault Tolerant](https://en.wikipedia.org/wiki/Byzantine_fault_tolerance)
[State Machine Replication](https://en.wikipedia.org/wiki/State_machine_replication).
Or [Blockchain](https://en.wikipedia.org/wiki/Blockchain_(database)) for short.
[State Machines](https://en.wikipedia.org/wiki/State_machine_replication).
Or [Blockchain](https://en.wikipedia.org/wiki/Blockchain_(database)), for short.
[![version](https://img.shields.io/github/tag/tendermint/tendermint.svg)](https://github.com/tendermint/tendermint/releases/latest)
[![API Reference](
@@ -49,7 +49,7 @@ For examples of the kinds of bugs we're looking for, see [SECURITY.md](SECURITY.
Requirement|Notes
---|---
Go version | Go1.10 or higher
Go version | Go1.11.4 or higher
## Documentation
@@ -66,49 +66,26 @@ See the [install instructions](/docs/introduction/install.md)
- [Remote cluster using terraform and ansible](/docs/networks/terraform-and-ansible.md)
- [Join the Cosmos testnet](https://cosmos.network/testnet)
## Resources
### Tendermint Core
For details about the blockchain data structures and the p2p protocols, see the
the [Tendermint specification](/docs/spec).
For details on using the software, see the [documentation](/docs/) which is also
hosted at: https://tendermint.com/docs/
### Tools
Benchmarking and monitoring is provided by `tm-bench` and `tm-monitor`, respectively.
Their code is found [here](/tools) and these binaries need to be built separately.
Additional documentation is found [here](/docs/tools).
### Sub-projects
* [Amino](http://github.com/tendermint/go-amino), a reflection-based improvement on proto3
* [IAVL](http://github.com/tendermint/iavl), Merkleized IAVL+ Tree implementation
### Applications
* [Cosmos SDK](http://github.com/cosmos/cosmos-sdk); a cryptocurrency application framework
* [Ethermint](http://github.com/cosmos/ethermint); Ethereum on Tendermint
* [Many more](https://tendermint.com/ecosystem)
### Research
* [The latest gossip on BFT consensus](https://arxiv.org/abs/1807.04938)
* [Master's Thesis on Tendermint](https://atrium.lib.uoguelph.ca/xmlui/handle/10214/9769)
* [Original Whitepaper](https://tendermint.com/static/docs/tendermint.pdf)
* [Blog](https://blog.cosmos.network/tendermint/home)
## Contributing
Yay open source! Please see our [contributing guidelines](CONTRIBUTING.md).
Please abide by the [Code of Conduct](CODE_OF_CONDUCT.md) in all interactions,
and the [contributing guidelines](CONTRIBUTING.md) when submitting code.
Join the larger community on the [forum](https://forum.cosmos.network/) and the [chat](https://riot.im/app/#/room/#tendermint:matrix.org).
To learn more about the structure of the software, watch the [Developer
Sessions](https://www.youtube.com/playlist?list=PLdQIb0qr3pnBbG5ZG-0gr3zM86_s8Rpqv)
and read some [Architectural
Decision Records](https://github.com/tendermint/tendermint/tree/master/docs/architecture).
Learn more by reading the code and comparing it to the
[specification](https://github.com/tendermint/tendermint/tree/develop/docs/spec).
## Versioning
### SemVer
### Semantic Versioning
Tendermint uses [SemVer](http://semver.org/) to determine when and how the version changes.
Tendermint uses [Semantic Versioning](http://semver.org/) to determine when and how the version changes.
According to SemVer, anything in the public API can change at any time before version 1.0.0.
To provide some stability to Tendermint users in these 0.X.X days, the MINOR version is used
@@ -119,6 +96,7 @@ include the in-process Go APIs.
That said, breaking changes in the following packages will be documented in the
CHANGELOG even if they don't lead to MINOR version bumps:
- crypto
- types
- rpc/client
- config
@@ -145,8 +123,40 @@ data into the new chain.
However, any bump in the PATCH version should be compatible with existing histories
(if not please open an [issue](https://github.com/tendermint/tendermint/issues)).
For more information on upgrading, see [here](./UPGRADING.md)
For more information on upgrading, see [UPGRADING.md](./UPGRADING.md)
## Code of Conduct
## Resources
### Tendermint Core
For details about the blockchain data structures and the p2p protocols, see the
[Tendermint specification](/docs/spec).
For details on using the software, see the [documentation](/docs/) which is also
hosted at: https://tendermint.com/docs/
### Tools
Benchmarking and monitoring is provided by `tm-bench` and `tm-monitor`, respectively.
Their code is found [here](/tools) and these binaries need to be built separately.
Additional documentation is found [here](/docs/tools).
### Sub-projects
* [Amino](http://github.com/tendermint/go-amino), reflection-based proto3, with
interfaces
* [IAVL](http://github.com/tendermint/iavl), Merkleized IAVL+ Tree implementation
### Applications
* [Cosmos SDK](http://github.com/cosmos/cosmos-sdk); a cryptocurrency application framework
* [Ethermint](http://github.com/cosmos/ethermint); Ethereum on Tendermint
* [Many more](https://tendermint.com/ecosystem)
### Research
* [The latest gossip on BFT consensus](https://arxiv.org/abs/1807.04938)
* [Master's Thesis on Tendermint](https://atrium.lib.uoguelph.ca/xmlui/handle/10214/9769)
* [Original Whitepaper](https://tendermint.com/static/docs/tendermint.pdf)
* [Blog](https://blog.cosmos.network/tendermint/home)
Please read, understand and adhere to our [code of conduct](CODE_OF_CONDUCT.md).

View File

@@ -3,9 +3,144 @@
This guide provides steps to be followed when you upgrade your applications to
a newer version of Tendermint Core.
## v0.30.0
This release contains a breaking change to both the block and p2p protocols,
however it may be compatible with blockchains created with v0.29.0 depending on
the chain history. If your blockchain has not included any pieces of evidence,
or no piece of evidence has been included in more than one block,
and if your application has never returned multiple updates
for the same validator in a single block, then v0.30.0 will work fine with
blockchains created with v0.29.0.
The p2p protocol change is to fix the proposer selection algorithm again.
Note that proposer selection is purely a p2p concern right
now since the algorithm is only relevant during real time consensus.
This change is thus compatible with v0.29.0, but
all nodes must be upgraded to avoid disagreements on the proposer.
### Applications
Applications must ensure they do not return duplicates in
`ResponseEndBlock.ValidatorUpdates`. A pubkey must only appear once per set of
updates. Duplicates will cause irrecoverable failure. If you have a very good
reason why we shouldn't do this, please open an issue.
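A hedged sketch of what such a guard might look like on the application side (the `PubKey.Data` byte comparison is an assumption; adapt it to your app's types):

```go
// collapse EndBlock validator updates so each pubkey appears at most once,
// keeping the last update for a given key
func dedupeUpdates(updates []abci.ValidatorUpdate) []abci.ValidatorUpdate {
	last := make(map[string]int) // pubkey bytes -> index in out
	out := make([]abci.ValidatorUpdate, 0, len(updates))
	for _, u := range updates {
		k := string(u.PubKey.Data)
		if i, ok := last[k]; ok {
			out[i] = u // overwrite the earlier duplicate
			continue
		}
		last[k] = len(out)
		out = append(out, u)
	}
	return out
}
```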
## v0.29.0
This release contains some breaking changes to the block and p2p protocols,
and will not be compatible with any previous versions of the software, primarily
due to changes in how various data structures are hashed.
Any implementations of Tendermint blockchain verification, including lite clients,
will need to be updated. For specific details:
- [Merkle tree](./docs/spec/blockchain/encoding.md#merkle-trees)
- [ConsensusParams](./docs/spec/blockchain/state.md#consensusparams)
There was also a small change to field ordering in the vote struct. Any
implementations of an out-of-process validator (like a Key-Management Server)
will need to be updated. For specific details:
- [Vote](https://github.com/tendermint/tendermint/blob/develop/docs/spec/consensus/signing.md#votes)
Finally, the proposer selection algorithm continues to evolve. See the
[work-in-progress
specification](https://github.com/tendermint/tendermint/pull/3140).
For everything else, please see the [CHANGELOG](./CHANGELOG.md#v0.29.0).
## v0.28.0
This release breaks the format for the `priv_validator.json` file
and the protocol used for the external validator process.
It is compatible with v0.27.0 blockchains (neither the BlockProtocol nor the
P2PProtocol have changed).
Please read carefully for details about upgrading.
**Note:** Backup your `config/priv_validator.json`
before proceeding.
### `priv_validator.json`
The `config/priv_validator.json` is now two files:
`config/priv_validator_key.json` and `data/priv_validator_state.json`.
The former contains the key material, the latter contains the details on the last
message signed.
When running v0.28.0 for the first time, it will back up any pre-existing
`priv_validator.json` file and proceed to split it into the two new files.
Upgrading should happen automatically without problem.
To upgrade manually, use the provided `privValUpgrade.go` script, with exact paths for the old
`priv_validator.json` and the locations for the two new files. It's recommended
to use the default paths of `config/priv_validator_key.json` and
`data/priv_validator_state.json`, respectively:
```
go run scripts/privValUpgrade.go <old-path> <new-key-path> <new-state-path>
```
### External validator signers
The Unix and TCP implementations of the remote signing validator
have been consolidated into a single implementation.
Thus in both cases, the external process is expected to dial
Tendermint. This is different from how Unix sockets used to work, where
Tendermint dialed the external process.
The `PubKeyMsg` was also split into separate `Request` and `Response` types
for consistency with other messages.
Note that the TCP sockets don't yet use a persistent key,
so while they're encrypted, they can't yet be properly authenticated.
See [#3105](https://github.com/tendermint/tendermint/issues/3105).
Note the Unix socket has neither encryption nor authentication, but will
add a shared-secret in [#3099](https://github.com/tendermint/tendermint/issues/3099).
## v0.27.0
This release contains some breaking changes to the block and p2p protocols,
but does not change any core data structures, so it should be compatible with
existing blockchains from the v0.26 series that only used Ed25519 validator keys.
Blockchains using Secp256k1 for validators will not be compatible. This is due
to the fact that we now enforce which key types validators can use as a
consensus param. The default is Ed25519, and Secp256k1 must be activated
explicitly.
It is recommended to upgrade all nodes at once to avoid incompatibilities at the
peer layer - namely, the heartbeat consensus message has been removed (only
relevant if `create_empty_blocks=false` or `create_empty_blocks_interval > 0`),
and the proposer selection algorithm has changed. Since proposer information is
never included in the blockchain, this change only affects the peer layer.
### Go API Changes
#### libs/db
The ReverseIterator API has changed the meaning of `start` and `end`.
Before, iteration was from `start` to `end`, where
`start > end`. Now, iteration is from `end` to `start`, where `start < end`.
The iterator also excludes `end`. This change allows a simplified and more
intuitive logic, aligning the semantic meaning of `start` and `end` in the
`Iterator` and `ReverseIterator`.
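Under the new semantics, a reverse scan over the half-open range `[start, end)` looks roughly like this (a sketch; `process` is a hypothetical handler):

```go
// iterate from just below `end` down to `start` (end is exclusive)
it := db.ReverseIterator([]byte("key1"), []byte("key9"))
defer it.Close()
for ; it.Valid(); it.Next() {
	process(it.Key(), it.Value()) // highest matching key first
}
```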
### Applications
This release enforces a new consensus parameter, the
ValidatorParams.PubKeyTypes. Applications must ensure that they only return
validator updates with the allowed PubKeyTypes. If a validator update includes a
pubkey type that is not included in the ConsensusParams.Validator.PubKeyTypes,
block execution will fail and the consensus will halt.
By default, only Ed25519 pubkeys may be used for validators. Enabling
Secp256k1 requires explicit modification of the ConsensusParams.
Please update your application accordingly (i.e. restrict validators to only be
able to use Ed25519 keys, or explicitly add additional key types to the genesis
file).
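For example, an application could guard its update path with a check along these lines (a sketch; the default `PubKeyTypes` contains only the Ed25519 type):

```go
// allowedKeyType reports whether a validator update's key type is permitted
// by ConsensusParams.Validator.PubKeyTypes.
func allowedKeyType(keyType string, allowedTypes []string) bool {
	for _, t := range allowedTypes {
		if t == keyType {
			return true
		}
	}
	return false
}
```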
## v0.26.0
New 0.26.0 release contains a lot of changes to core data types and protocols. It is not
This release contains a lot of changes to core data types and protocols. It is not
compatible with the old versions and there is no straightforward way to update
old data to be compatible with the new version.
@@ -67,7 +202,7 @@ For more information, see:
### Go API Changes
#### crypto.merkle
#### crypto/merkle
The `merkle.Hasher` interface was removed. Functions which used to take `Hasher`
now simply take `[]byte`. This means that any objects being Merklized should be

2
Vagrantfile vendored
View File

@@ -53,6 +53,6 @@ Vagrant.configure("2") do |config|
# get all deps and tools, ready to install/test
su - vagrant -c 'source /home/vagrant/.bash_profile'
su - vagrant -c 'cd /home/vagrant/go/src/github.com/tendermint/tendermint && make get_tools && make get_dev_tools && make get_vendor_deps'
su - vagrant -c 'cd /home/vagrant/go/src/github.com/tendermint/tendermint && make get_tools && make get_vendor_deps'
SHELL
end

View File

@@ -58,7 +58,7 @@ var RootCmd = &cobra.Command{
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
switch cmd.Use {
case "counter", "kvstore", "dummy": // for the examples apps, don't pre-run
case "counter", "kvstore": // for the examples apps, don't pre-run
return nil
case "version": // skip running for version command
return nil
@@ -127,10 +127,6 @@ func addCounterFlags() {
counterCmd.PersistentFlags().BoolVarP(&flagSerial, "serial", "", false, "enforce incrementing (serial) transactions")
}
func addDummyFlags() {
dummyCmd.PersistentFlags().StringVarP(&flagPersist, "persist", "", "", "directory to use for a database")
}
func addKVStoreFlags() {
kvstoreCmd.PersistentFlags().StringVarP(&flagPersist, "persist", "", "", "directory to use for a database")
}
@@ -152,10 +148,6 @@ func addCommands() {
// examples
addCounterFlags()
RootCmd.AddCommand(counterCmd)
// deprecated, left for backwards compatibility
addDummyFlags()
RootCmd.AddCommand(dummyCmd)
// replaces dummy, see issue #196
addKVStoreFlags()
RootCmd.AddCommand(kvstoreCmd)
}
@@ -291,18 +283,6 @@ var counterCmd = &cobra.Command{
},
}
// deprecated, left for backwards compatibility
var dummyCmd = &cobra.Command{
Use: "dummy",
Deprecated: "use: [abci-cli kvstore] instead",
Short: "ABCI demo example",
Long: "ABCI demo example",
Args: cobra.ExactArgs(0),
RunE: func(cmd *cobra.Command, args []string) error {
return cmdKVStore(cmd, args)
},
}
var kvstoreCmd = &cobra.Command{
Use: "kvstore",
Short: "ABCI demo example",

View File

@@ -83,7 +83,7 @@ func TestWriteReadMessage2(t *testing.T) {
Log: phrase,
GasWanted: 10,
Tags: []cmn.KVPair{
cmn.KVPair{Key: []byte("abc"), Value: []byte("def")},
{Key: []byte("abc"), Value: []byte("def")},
},
},
// TODO: add the rest

View File

@@ -4,7 +4,7 @@ import (
"testing"
"time"
"github.com/tendermint/go-amino"
amino "github.com/tendermint/go-amino"
proto "github.com/tendermint/tendermint/benchmarks/proto"
"github.com/tendermint/tendermint/crypto/ed25519"

View File

@@ -363,7 +363,8 @@ func (pool *BlockPool) sendError(err error, peerID p2p.ID) {
pool.errorsCh <- peerError{err, peerID}
}
// unused by tendermint; left for debugging purposes
// for debugging purposes
//nolint:unused
func (pool *BlockPool) debug() string {
pool.mtx.Lock()
defer pool.mtx.Unlock()

View File

@@ -432,11 +432,7 @@ type bcBlockResponseMessage struct {
// ValidateBasic performs basic validation.
func (m *bcBlockResponseMessage) ValidateBasic() error {
if err := m.Block.ValidateBasic(); err != nil {
return err
}
return nil
return m.Block.ValidateBasic()
}
func (m *bcBlockResponseMessage) String() string {

View File

@@ -42,7 +42,7 @@ func randGenesisDoc(numValidators int, randPower bool, minPower int64) (*types.G
}
func makeVote(header *types.Header, blockID types.BlockID, valset *types.ValidatorSet, privVal types.PrivValidator) *types.Vote {
addr := privVal.GetAddress()
addr := privVal.GetPubKey().Address()
idx, _ := valset.GetByAddress(addr)
vote := &types.Vote{
ValidatorAddress: addr,
@@ -95,13 +95,13 @@ func newBlockchainReactor(logger log.Logger, genDoc *types.GenesisDoc, privVals
// let's add some blocks in
for blockHeight := int64(1); blockHeight <= maxBlockHeight; blockHeight++ {
lastCommit := &types.Commit{}
lastCommit := types.NewCommit(types.BlockID{}, nil)
if blockHeight > 1 {
lastBlockMeta := blockStore.LoadBlockMeta(blockHeight - 1)
lastBlock := blockStore.LoadBlock(blockHeight - 1)
vote := makeVote(&lastBlock.Header, lastBlockMeta.BlockID, state.Validators, privVals[0])
lastCommit = &types.Commit{Precommits: []*types.Vote{vote}, BlockID: lastBlockMeta.BlockID}
vote := makeVote(&lastBlock.Header, lastBlockMeta.BlockID, state.Validators, privVals[0]).CommitSig()
lastCommit = types.NewCommit(lastBlockMeta.BlockID, []*types.CommitSig{vote})
}
thisBlock := makeBlock(blockHeight, state, lastCommit)

View File

@@ -6,6 +6,7 @@ import (
"runtime/debug"
"strings"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -20,6 +21,12 @@ import (
tmtime "github.com/tendermint/tendermint/types/time"
)
// make a Commit with a single vote containing just the height and a timestamp
func makeTestCommit(height int64, timestamp time.Time) *types.Commit {
commitSigs := []*types.CommitSig{{Height: height, Timestamp: timestamp}}
return types.NewCommit(types.BlockID{}, commitSigs)
}
func makeStateAndBlockStore(logger log.Logger) (sm.State, *BlockStore) {
config := cfg.ResetTestRoot("blockchain_reactor_test")
// blockDB := dbm.NewDebugDB("blockDB", dbm.NewMemDB())
@@ -86,8 +93,7 @@ var (
partSet = block.MakePartSet(2)
part1 = partSet.GetPart(0)
part2 = partSet.GetPart(1)
seenCommit1 = &types.Commit{Precommits: []*types.Vote{{Height: 10,
Timestamp: tmtime.Now()}}}
seenCommit1 = makeTestCommit(10, tmtime.Now())
)
// TODO: This test should be simplified ...
@@ -107,8 +113,7 @@ func TestBlockStoreSaveLoadBlock(t *testing.T) {
// save a block
block := makeBlock(bs.Height()+1, state, new(types.Commit))
validPartSet := block.MakePartSet(2)
seenCommit := &types.Commit{Precommits: []*types.Vote{{Height: 10,
Timestamp: tmtime.Now()}}}
seenCommit := makeTestCommit(10, tmtime.Now())
bs.SaveBlock(block, partSet, seenCommit)
require.Equal(t, bs.Height(), block.Header.Height, "expecting the new height to be changed")
@@ -127,8 +132,7 @@ func TestBlockStoreSaveLoadBlock(t *testing.T) {
// End of setup, test data
commitAtH10 := &types.Commit{Precommits: []*types.Vote{{Height: 10,
Timestamp: tmtime.Now()}}}
commitAtH10 := makeTestCommit(10, tmtime.Now())
tuples := []struct {
block *types.Block
parts *types.PartSet
@@ -311,7 +315,7 @@ func TestLoadBlockPart(t *testing.T) {
gotPart, _, panicErr := doFn(loadPart)
require.Nil(t, panicErr, "an existent and proper block should not panic")
require.Nil(t, res, "a properly saved block should return a proper block")
require.Equal(t, gotPart.(*types.Part).Hash(), part1.Hash(),
require.Equal(t, gotPart.(*types.Part), part1,
"expecting successful retrieval of previously saved block")
}
@@ -351,9 +355,7 @@ func TestBlockFetchAtHeight(t *testing.T) {
block := makeBlock(bs.Height()+1, state, new(types.Commit))
partSet := block.MakePartSet(2)
seenCommit := &types.Commit{Precommits: []*types.Vote{{Height: 10,
Timestamp: tmtime.Now()}}}
seenCommit := makeTestCommit(10, tmtime.Now())
bs.SaveBlock(block, partSet, seenCommit)
require.Equal(t, bs.Height(), block.Header.Height, "expecting the new height to be changed")

View File

@@ -1,7 +1,7 @@
package blockchain
import (
"github.com/tendermint/go-amino"
amino "github.com/tendermint/go-amino"
"github.com/tendermint/tendermint/types"
)

View File

@@ -3,6 +3,7 @@ package main
import (
"flag"
"os"
"time"
"github.com/tendermint/tendermint/crypto/ed25519"
cmn "github.com/tendermint/tendermint/libs/common"
@@ -15,7 +16,8 @@ func main() {
var (
addr = flag.String("addr", ":26659", "Address of client to connect to")
chainID = flag.String("chain-id", "mychain", "chain id")
privValPath = flag.String("priv", "", "priv val file path")
privValKeyPath = flag.String("priv-key", "", "priv val key file path")
privValStatePath = flag.String("priv-state", "", "priv val state file path")
logger = log.NewTMLogger(
log.NewSyncWriter(os.Stdout),
@@ -27,18 +29,26 @@ func main() {
"Starting private validator",
"addr", *addr,
"chainID", *chainID,
"privPath", *privValPath,
"privKeyPath", *privValKeyPath,
"privStatePath", *privValStatePath,
)
pv := privval.LoadFilePV(*privValPath)
pv := privval.LoadFilePV(*privValKeyPath, *privValStatePath)
rs := privval.NewRemoteSigner(
logger,
*chainID,
*addr,
pv,
ed25519.GenPrivKey(),
)
var dialer privval.Dialer
protocol, address := cmn.ProtocolAndAddress(*addr)
switch protocol {
case "unix":
dialer = privval.DialUnixFn(address)
case "tcp":
connTimeout := 3 * time.Second // TODO
dialer = privval.DialTCPFn(address, connTimeout, ed25519.GenPrivKey())
default:
logger.Error("Unknown protocol", "protocol", protocol)
os.Exit(1)
}
rs := privval.NewRemoteSigner(logger, *chainID, pv, dialer)
err := rs.Start()
if err != nil {
panic(err)

View File

@@ -17,7 +17,7 @@ var GenValidatorCmd = &cobra.Command{
}
func genValidator(cmd *cobra.Command, args []string) {
pv := privval.GenFilePV("")
pv := privval.GenFilePV("", "")
jsbz, err := cdc.MarshalJSON(pv)
if err != nil {
panic(err)

View File

@@ -4,7 +4,6 @@ import (
"fmt"
"github.com/spf13/cobra"
cfg "github.com/tendermint/tendermint/config"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/p2p"
@@ -26,15 +25,18 @@ func initFiles(cmd *cobra.Command, args []string) error {
func initFilesWithConfig(config *cfg.Config) error {
// private validator
privValFile := config.PrivValidatorFile()
privValKeyFile := config.PrivValidatorKeyFile()
privValStateFile := config.PrivValidatorStateFile()
var pv *privval.FilePV
if cmn.FileExists(privValFile) {
pv = privval.LoadFilePV(privValFile)
logger.Info("Found private validator", "path", privValFile)
if cmn.FileExists(privValKeyFile) {
pv = privval.LoadFilePV(privValKeyFile, privValStateFile)
logger.Info("Found private validator", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
} else {
pv = privval.GenFilePV(privValFile)
pv = privval.GenFilePV(privValKeyFile, privValStateFile)
pv.Save()
logger.Info("Generated private validator", "path", privValFile)
logger.Info("Generated private validator", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
}
nodeKeyFile := config.NodeKeyFile()
@@ -57,9 +59,10 @@ func initFilesWithConfig(config *cfg.Config) error {
GenesisTime: tmtime.Now(),
ConsensusParams: types.DefaultConsensusParams(),
}
key := pv.GetPubKey()
genDoc.Validators = []types.GenesisValidator{{
Address: pv.GetPubKey().Address(),
PubKey: pv.GetPubKey(),
Address: key.Address(),
PubKey: key,
Power: 10,
}}

View File

@@ -5,6 +5,7 @@ import (
"github.com/spf13/cobra"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/privval"
)
@@ -27,36 +28,41 @@ var ResetPrivValidatorCmd = &cobra.Command{
// XXX: this is totally unsafe.
// it's only suitable for testnets.
func resetAll(cmd *cobra.Command, args []string) {
ResetAll(config.DBDir(), config.P2P.AddrBookFile(), config.PrivValidatorFile(), logger)
ResetAll(config.DBDir(), config.P2P.AddrBookFile(), config.PrivValidatorKeyFile(),
config.PrivValidatorStateFile(), logger)
}
// XXX: this is totally unsafe.
// it's only suitable for testnets.
func resetPrivValidator(cmd *cobra.Command, args []string) {
resetFilePV(config.PrivValidatorFile(), logger)
resetFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile(), logger)
}
// ResetAll removes the privValidator and address book files plus all data.
// ResetAll removes address book files plus all data, and resets the privValidator data.
// Exported so other CLI tools can use it.
func ResetAll(dbDir, addrBookFile, privValFile string, logger log.Logger) {
resetFilePV(privValFile, logger)
func ResetAll(dbDir, addrBookFile, privValKeyFile, privValStateFile string, logger log.Logger) {
removeAddrBook(addrBookFile, logger)
if err := os.RemoveAll(dbDir); err == nil {
logger.Info("Removed all blockchain history", "dir", dbDir)
} else {
logger.Error("Error removing all blockchain history", "dir", dbDir, "err", err)
}
// recreate the dbDir since the privVal state needs to live there
cmn.EnsureDir(dbDir, 0700)
resetFilePV(privValKeyFile, privValStateFile, logger)
}
func resetFilePV(privValFile string, logger log.Logger) {
if _, err := os.Stat(privValFile); err == nil {
pv := privval.LoadFilePV(privValFile)
func resetFilePV(privValKeyFile, privValStateFile string, logger log.Logger) {
if _, err := os.Stat(privValKeyFile); err == nil {
pv := privval.LoadFilePVEmptyState(privValKeyFile, privValStateFile)
pv.Reset()
logger.Info("Reset private validator file to genesis state", "file", privValFile)
logger.Info("Reset private validator file to genesis state", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
} else {
pv := privval.GenFilePV(privValFile)
pv := privval.GenFilePV(privValKeyFile, privValStateFile)
pv.Save()
logger.Info("Generated private validator file", "file", privValFile)
logger.Info("Generated private validator file", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
}
}

View File

@@ -22,10 +22,6 @@ var (
defaultRoot = os.ExpandEnv("$HOME/.some/test/dir")
)
const (
rootName = "root"
)
// clearConfig clears env vars, the given root dir, and resets viper.
func clearConfig(dir string) {
if err := os.Unsetenv("TMHOME"); err != nil {

View File

@@ -24,7 +24,7 @@ func AddNodeFlags(cmd *cobra.Command) {
cmd.Flags().Bool("fast_sync", config.FastSync, "Fast blockchain syncing")
// abci flags
cmd.Flags().String("proxy_app", config.ProxyApp, "Proxy app address, or 'nilapp' or 'kvstore' for local testing.")
cmd.Flags().String("proxy_app", config.ProxyApp, "Proxy app address, or one of: 'kvstore', 'persistent_kvstore', 'counter', 'counter_serial' or 'noop' for local testing.")
cmd.Flags().String("abci", config.ABCI, "Specify abci transport (socket | grpc)")
// rpc flags

View File

@@ -16,7 +16,7 @@ var ShowValidatorCmd = &cobra.Command{
}
func showValidator(cmd *cobra.Command, args []string) {
privValidator := privval.LoadOrGenFilePV(config.PrivValidatorFile())
privValidator := privval.LoadOrGenFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile())
pubKeyJSONBytes, _ := cdc.MarshalJSON(privValidator.GetPubKey())
fmt.Println(string(pubKeyJSONBytes))
}

View File

@@ -85,11 +85,18 @@ func testnetFiles(cmd *cobra.Command, args []string) error {
_ = os.RemoveAll(outputDir)
return err
}
err = os.MkdirAll(filepath.Join(nodeDir, "data"), nodeDirPerm)
if err != nil {
_ = os.RemoveAll(outputDir)
return err
}
initFilesWithConfig(config)
pvFile := filepath.Join(nodeDir, config.BaseConfig.PrivValidator)
pv := privval.LoadFilePV(pvFile)
pvKeyFile := filepath.Join(nodeDir, config.BaseConfig.PrivValidatorKey)
pvStateFile := filepath.Join(nodeDir, config.BaseConfig.PrivValidatorState)
pv := privval.LoadFilePV(pvKeyFile, pvStateFile)
genVals[i] = types.GenesisValidator{
Address: pv.GetPubKey().Address(),
PubKey: pv.GetPubKey(),
@@ -127,14 +134,32 @@ func testnetFiles(cmd *cobra.Command, args []string) error {
}
}
// Gather persistent peer addresses.
var (
persistentPeers string
err error
)
if populatePersistentPeers {
err := populatePersistentPeersInConfigAndWriteIt(config)
persistentPeers, err = persistentPeersString(config)
if err != nil {
_ = os.RemoveAll(outputDir)
return err
}
}
// Overwrite default config.
for i := 0; i < nValidators+nNonValidators; i++ {
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
config.SetRoot(nodeDir)
config.P2P.AddrBookStrict = false
config.P2P.AllowDuplicateIP = true
if populatePersistentPeers {
config.P2P.PersistentPeers = persistentPeers
}
cfg.WriteConfigFile(filepath.Join(nodeDir, "config", "config.toml"), config)
}
fmt.Printf("Successfully initialized %v node directories\n", nValidators+nNonValidators)
return nil
}
@@ -157,28 +182,16 @@ func hostnameOrIP(i int) string {
return fmt.Sprintf("%s%d", hostnamePrefix, i)
}
func populatePersistentPeersInConfigAndWriteIt(config *cfg.Config) error {
func persistentPeersString(config *cfg.Config) (string, error) {
persistentPeers := make([]string, nValidators+nNonValidators)
for i := 0; i < nValidators+nNonValidators; i++ {
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
config.SetRoot(nodeDir)
nodeKey, err := p2p.LoadNodeKey(config.NodeKeyFile())
if err != nil {
return err
return "", err
}
persistentPeers[i] = p2p.IDAddressString(nodeKey.ID(), fmt.Sprintf("%s:%d", hostnameOrIP(i), p2pPort))
}
persistentPeersList := strings.Join(persistentPeers, ",")
for i := 0; i < nValidators+nNonValidators; i++ {
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
config.SetRoot(nodeDir)
config.P2P.PersistentPeers = persistentPeersList
config.P2P.AddrBookStrict = false
// overwrite default config
cfg.WriteConfigFile(filepath.Join(nodeDir, "config", "config.toml"), config)
}
return nil
return strings.Join(persistentPeers, ","), nil
}

View File

@@ -1,7 +1,7 @@
package commands
import (
"github.com/tendermint/go-amino"
amino "github.com/tendermint/go-amino"
cryptoAmino "github.com/tendermint/tendermint/crypto/encoding/amino"
)

View File

@@ -35,17 +35,26 @@ var (
defaultConfigFileName = "config.toml"
defaultGenesisJSONName = "genesis.json"
defaultPrivValName = "priv_validator.json"
defaultPrivValKeyName = "priv_validator_key.json"
defaultPrivValStateName = "priv_validator_state.json"
defaultNodeKeyName = "node_key.json"
defaultAddrBookName = "addrbook.json"
defaultConfigFilePath = filepath.Join(defaultConfigDir, defaultConfigFileName)
defaultGenesisJSONPath = filepath.Join(defaultConfigDir, defaultGenesisJSONName)
defaultPrivValPath = filepath.Join(defaultConfigDir, defaultPrivValName)
defaultPrivValKeyPath = filepath.Join(defaultConfigDir, defaultPrivValKeyName)
defaultPrivValStatePath = filepath.Join(defaultDataDir, defaultPrivValStateName)
defaultNodeKeyPath = filepath.Join(defaultConfigDir, defaultNodeKeyName)
defaultAddrBookPath = filepath.Join(defaultConfigDir, defaultAddrBookName)
)
var (
oldPrivVal = "priv_validator.json"
oldPrivValPath = filepath.Join(defaultConfigDir, oldPrivVal)
)
// Config defines the top level configuration for a Tendermint node
type Config struct {
// Top level options use an anonymous struct
@@ -160,7 +169,10 @@ type BaseConfig struct {
Genesis string `mapstructure:"genesis_file"`
// Path to the JSON file containing the private key to use as a validator in the consensus protocol
PrivValidator string `mapstructure:"priv_validator_file"`
PrivValidatorKey string `mapstructure:"priv_validator_key_file"`
// Path to the JSON file containing the last sign state of a validator
PrivValidatorState string `mapstructure:"priv_validator_state_file"`
// TCP or UNIX socket address for Tendermint to listen on for
// connections from an external PrivValidator process
@@ -184,7 +196,8 @@ type BaseConfig struct {
func DefaultBaseConfig() BaseConfig {
return BaseConfig{
Genesis: defaultGenesisJSONPath,
PrivValidator: defaultPrivValPath,
PrivValidatorKey: defaultPrivValKeyPath,
PrivValidatorState: defaultPrivValStatePath,
NodeKey: defaultNodeKeyPath,
Moniker: defaultMoniker,
ProxyApp: "tcp://127.0.0.1:26658",
@@ -218,9 +231,20 @@ func (cfg BaseConfig) GenesisFile() string {
return rootify(cfg.Genesis, cfg.RootDir)
}
// PrivValidatorFile returns the full path to the priv_validator.json file
func (cfg BaseConfig) PrivValidatorFile() string {
return rootify(cfg.PrivValidator, cfg.RootDir)
// PrivValidatorKeyFile returns the full path to the priv_validator_key.json file
func (cfg BaseConfig) PrivValidatorKeyFile() string {
return rootify(cfg.PrivValidatorKey, cfg.RootDir)
}
// PrivValidatorStateFile returns the full path to the priv_validator_state.json file
func (cfg BaseConfig) PrivValidatorStateFile() string {
return rootify(cfg.PrivValidatorState, cfg.RootDir)
}
// OldPrivValidatorFile returns the full path of the priv_validator.json from pre v0.28.0.
// TODO: eventually remove.
func (cfg BaseConfig) OldPrivValidatorFile() string {
return rootify(oldPrivValPath, cfg.RootDir)
}
// NodeKeyFile returns the full path to the node_key.json file
@@ -283,7 +307,7 @@ type RPCConfig struct {
// Maximum number of simultaneous connections.
// Does not include RPC (HTTP&WebSocket) connections. See max_open_connections
// If you want to accept more significant number than the default, make sure
// If you want to accept a larger number than the default, make sure
// you increase your OS limits.
// 0 - unlimited.
GRPCMaxOpenConnections int `mapstructure:"grpc_max_open_connections"`
@@ -293,7 +317,7 @@ type RPCConfig struct {
// Maximum number of simultaneous connections (including WebSocket).
// Does not include gRPC connections. See grpc_max_open_connections
// If you want to accept more significant number than the default, make sure
// If you want to accept a larger number than the default, make sure
// you increase your OS limits.
// 0 - unlimited.
// Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
@@ -434,7 +458,7 @@ func DefaultP2PConfig() *P2PConfig {
RecvRate: 5120000, // 5 mB/s
PexReactor: true,
SeedMode: false,
AllowDuplicateIP: true, // so non-breaking yet
AllowDuplicateIP: false,
HandshakeTimeout: 20 * time.Second,
DialTimeout: 3 * time.Second,
TestDialFail: false,
@@ -774,12 +798,12 @@ type InstrumentationConfig struct {
PrometheusListenAddr string `mapstructure:"prometheus_listen_addr"`
// Maximum number of simultaneous connections.
// If you want to accept more significant number than the default, make sure
// If you want to accept a larger number than the default, make sure
// you increase your OS limits.
// 0 - unlimited.
MaxOpenConnections int `mapstructure:"max_open_connections"`
// Tendermint instrumentation namespace.
// Instrumentation namespace.
Namespace string `mapstructure:"namespace"`
}

View File

@@ -2,6 +2,7 @@ package config
import (
"bytes"
"fmt"
"os"
"path/filepath"
"text/template"
@@ -95,7 +96,10 @@ log_format = "{{ .BaseConfig.LogFormat }}"
genesis_file = "{{ js .BaseConfig.Genesis }}"
# Path to the JSON file containing the private key to use as a validator in the consensus protocol
priv_validator_file = "{{ js .BaseConfig.PrivValidator }}"
priv_validator_key_file = "{{ js .BaseConfig.PrivValidatorKey }}"
# Path to the JSON file containing the last sign state of a validator
priv_validator_state_file = "{{ js .BaseConfig.PrivValidatorState }}"
# TCP or UNIX socket address for Tendermint to listen on for
# connections from an external PrivValidator process
@@ -125,13 +129,13 @@ laddr = "{{ .RPC.ListenAddress }}"
# A list of origins a cross-domain request can be executed from
# Default value '[]' disables cors support
# Use '["*"]' to allow any origin
cors_allowed_origins = "{{ .RPC.CORSAllowedOrigins }}"
cors_allowed_origins = [{{ range .RPC.CORSAllowedOrigins }}{{ printf "%q, " . }}{{end}}]
# A list of methods the client is allowed to use with cross-domain requests
cors_allowed_methods = "{{ .RPC.CORSAllowedMethods }}"
cors_allowed_methods = [{{ range .RPC.CORSAllowedMethods }}{{ printf "%q, " . }}{{end}}]
# A list of non simple headers the client is allowed to use with cross-domain requests
cors_allowed_headers = "{{ .RPC.CORSAllowedHeaders }}"
cors_allowed_headers = [{{ range .RPC.CORSAllowedHeaders }}{{ printf "%q, " . }}{{end}}]
# TCP or UNIX socket address for the gRPC server to listen on
# NOTE: This server only supports /broadcast_tx_commit
@@ -139,7 +143,7 @@ grpc_laddr = "{{ .RPC.GRPCListenAddress }}"
# Maximum number of simultaneous connections.
# Does not include RPC (HTTP&WebSocket) connections. See max_open_connections
# If you want to accept more significant number than the default, make sure
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
# Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
@@ -151,7 +155,7 @@ unsafe = {{ .RPC.Unsafe }}
# Maximum number of simultaneous connections (including WebSocket).
# Does not include gRPC connections. See grpc_max_open_connections
# If you want to accept more significant number than the default, make sure
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
# Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
@@ -269,8 +273,8 @@ blocktime_iota = "{{ .Consensus.BlockTimeIota }}"
# What indexer to use for transactions
#
# Options:
# 1) "null" (default)
# 2) "kv" - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
# 1) "null"
# 2) "kv" (default) - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
indexer = "{{ .TxIndex.Indexer }}"
# Comma-separated list of tags to index (by default the only tag is "tx.hash")
@@ -302,7 +306,7 @@ prometheus = {{ .Instrumentation.Prometheus }}
prometheus_listen_addr = "{{ .Instrumentation.PrometheusListenAddr }}"
# Maximum number of simultaneous connections.
# If you want to accept more significant number than the default, make sure
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
max_open_connections = {{ .Instrumentation.MaxOpenConnections }}
@@ -314,6 +318,10 @@ namespace = "{{ .Instrumentation.Namespace }}"
/****** these are for test settings ***********/
func ResetTestRoot(testName string) *Config {
return ResetTestRootWithChainID(testName, "")
}
func ResetTestRootWithChainID(testName string, chainID string) *Config {
rootDir := os.ExpandEnv("$HOME/.tendermint_test")
rootDir = filepath.Join(rootDir, testName)
// Remove ~/.tendermint_test_bak
@@ -342,25 +350,31 @@ func ResetTestRoot(testName string) *Config {
baseConfig := DefaultBaseConfig()
configFilePath := filepath.Join(rootDir, defaultConfigFilePath)
genesisFilePath := filepath.Join(rootDir, baseConfig.Genesis)
privFilePath := filepath.Join(rootDir, baseConfig.PrivValidator)
privKeyFilePath := filepath.Join(rootDir, baseConfig.PrivValidatorKey)
privStateFilePath := filepath.Join(rootDir, baseConfig.PrivValidatorState)
// Write default config file if missing.
if !cmn.FileExists(configFilePath) {
writeDefaultConfigFile(configFilePath)
}
if !cmn.FileExists(genesisFilePath) {
if chainID == "" {
chainID = "tendermint_test"
}
testGenesis := fmt.Sprintf(testGenesisFmt, chainID)
cmn.MustWriteFile(genesisFilePath, []byte(testGenesis), 0644)
}
// we always overwrite the priv val
cmn.MustWriteFile(privFilePath, []byte(testPrivValidator), 0644)
cmn.MustWriteFile(privKeyFilePath, []byte(testPrivValidatorKey), 0644)
cmn.MustWriteFile(privStateFilePath, []byte(testPrivValidatorState), 0644)
config := TestConfig().SetRoot(rootDir)
return config
}
var testGenesis = `{
var testGenesisFmt = `{
"genesis_time": "2018-10-10T08:20:13.695936996Z",
"chain_id": "tendermint_test",
"chain_id": "%s",
"validators": [
{
"pub_key": {
@@ -374,7 +388,7 @@ var testGenesis = `{
"app_hash": ""
}`
var testPrivValidator = `{
var testPrivValidatorKey = `{
"address": "A3258DCBF45DCA0DF052981870F2D1441A36D145",
"pub_key": {
"type": "tendermint/PubKeyEd25519",
@@ -383,8 +397,11 @@ var testPrivValidator = `{
"priv_key": {
"type": "tendermint/PrivKeyEd25519",
"value": "EVkqJO/jIXp3rkASXfh9YnyToYXRXhBr6g9cQVxPFnQBP/5povV4HTjvsy530kybxKHwEi85iU8YL0qQhSYVoQ=="
},
"last_height": "0",
"last_round": "0",
"last_step": 0
}
}`
var testPrivValidatorState = `{
"height": "0",
"round": "0",
"step": 0
}`

View File

@@ -60,7 +60,7 @@ func TestEnsureTestRoot(t *testing.T) {
// TODO: make sure the cfg returned and testconfig are the same!
baseConfig := DefaultBaseConfig()
ensureFiles(t, rootDir, defaultDataDir, baseConfig.Genesis, baseConfig.PrivValidator)
ensureFiles(t, rootDir, defaultDataDir, baseConfig.Genesis, baseConfig.PrivValidatorKey, baseConfig.PrivValidatorState)
}
func checkConfig(configFile string) bool {

View File

@@ -76,8 +76,7 @@ func TestByzantine(t *testing.T) {
conR.SetLogger(logger.With("validator", i))
conR.SetEventBus(eventBus)
var conRI p2p.Reactor // nolint: gotype, gosimple
conRI = conR
var conRI p2p.Reactor = conR
// make first val byzantine
if i == 0 {

View File

@@ -6,14 +6,18 @@ import (
"fmt"
"io/ioutil"
"os"
"path"
"path/filepath"
"reflect"
"sort"
"sync"
"testing"
"time"
"github.com/go-kit/kit/log/term"
abcicli "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/example/counter"
"github.com/tendermint/tendermint/abci/example/kvstore"
abci "github.com/tendermint/tendermint/abci/types"
bc "github.com/tendermint/tendermint/blockchain"
cfg "github.com/tendermint/tendermint/config"
@@ -27,11 +31,6 @@ import (
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
tmtime "github.com/tendermint/tendermint/types/time"
"github.com/tendermint/tendermint/abci/example/counter"
"github.com/tendermint/tendermint/abci/example/kvstore"
"github.com/go-kit/kit/log/term"
)
const (
@@ -72,9 +71,10 @@ func NewValidatorStub(privValidator types.PrivValidator, valIndex int) *validato
}
func (vs *validatorStub) signVote(voteType types.SignedMsgType, hash []byte, header types.PartSetHeader) (*types.Vote, error) {
addr := vs.PrivValidator.GetPubKey().Address()
vote := &types.Vote{
ValidatorIndex: vs.Index,
ValidatorAddress: vs.PrivValidator.GetAddress(),
ValidatorAddress: addr,
Height: vs.Height,
Round: vs.Round,
Timestamp: tmtime.Now(),
@@ -151,8 +151,9 @@ func signAddVotes(to *ConsensusState, voteType types.SignedMsgType, hash []byte,
func validatePrevote(t *testing.T, cs *ConsensusState, round int, privVal *validatorStub, blockHash []byte) {
prevotes := cs.Votes.Prevotes(round)
address := privVal.GetPubKey().Address()
var vote *types.Vote
if vote = prevotes.GetByAddress(privVal.GetAddress()); vote == nil {
if vote = prevotes.GetByAddress(address); vote == nil {
panic("Failed to find prevote from validator")
}
if blockHash == nil {
@@ -168,8 +169,9 @@ func validatePrevote(t *testing.T, cs *ConsensusState, round int, privVal *valid
func validateLastPrecommit(t *testing.T, cs *ConsensusState, privVal *validatorStub, blockHash []byte) {
votes := cs.LastCommit
address := privVal.GetPubKey().Address()
var vote *types.Vote
if vote = votes.GetByAddress(privVal.GetAddress()); vote == nil {
if vote = votes.GetByAddress(address); vote == nil {
panic("Failed to find precommit from validator")
}
if !bytes.Equal(vote.BlockID.Hash, blockHash) {
@@ -179,8 +181,9 @@ func validateLastPrecommit(t *testing.T, cs *ConsensusState, privVal *validatorS
func validatePrecommit(t *testing.T, cs *ConsensusState, thisRound, lockRound int, privVal *validatorStub, votedBlockHash, lockedBlockHash []byte) {
precommits := cs.Votes.Precommits(thisRound)
address := privVal.GetPubKey().Address()
var vote *types.Vote
if vote = precommits.GetByAddress(privVal.GetAddress()); vote == nil {
if vote = precommits.GetByAddress(address); vote == nil {
panic("Failed to find precommit from validator")
}
@@ -281,9 +284,10 @@ func newConsensusStateWithConfigAndBlockStore(thisConfig *cfg.Config, state sm.S
}
func loadPrivValidator(config *cfg.Config) *privval.FilePV {
privValidatorFile := config.PrivValidatorFile()
ensureDir(path.Dir(privValidatorFile), 0700)
privValidator := privval.LoadOrGenFilePV(privValidatorFile)
privValidatorKeyFile := config.PrivValidatorKeyFile()
ensureDir(filepath.Dir(privValidatorKeyFile), 0700)
privValidatorStateFile := config.PrivValidatorStateFile()
privValidator := privval.LoadOrGenFilePV(privValidatorKeyFile, privValidatorStateFile)
privValidator.Reset()
return privValidator
}
@@ -374,36 +378,6 @@ func ensureNewEvent(
}
}
func ensureNewRoundStep(stepCh <-chan interface{}, height int64, round int) {
ensureNewEvent(
stepCh,
height,
round,
ensureTimeout,
"Timeout expired while waiting for NewStep event")
}
func ensureNewVote(voteCh <-chan interface{}, height int64, round int) {
select {
case <-time.After(ensureTimeout):
break
case v := <-voteCh:
edv, ok := v.(types.EventDataVote)
if !ok {
panic(fmt.Sprintf("expected a *types.Vote, "+
"got %v. wrong subscription channel?",
reflect.TypeOf(v)))
}
vote := edv.Vote
if vote.Height != height {
panic(fmt.Sprintf("expected height %v, got %v", height, vote.Height))
}
if vote.Round != round {
panic(fmt.Sprintf("expected round %v, got %v", round, vote.Round))
}
}
}
func ensureNewRound(roundCh <-chan interface{}, height int64, round int) {
select {
case <-time.After(ensureTimeout):
@@ -591,7 +565,7 @@ func randConsensusNet(nValidators int, testName string, tickerFunc func() Timeou
for _, opt := range configOpts {
opt(thisConfig)
}
ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
ensureDir(filepath.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
app := appFunc()
vals := types.TM2PB.ValidatorUpdates(state.Validators)
app.InitChain(abci.RequestInitChain{Validators: vals})
@@ -612,16 +586,21 @@ func randConsensusNetWithPeers(nValidators, nPeers int, testName string, tickerF
stateDB := dbm.NewMemDB() // each state needs its own db
state, _ := sm.LoadStateFromDBOrGenesisDoc(stateDB, genDoc)
thisConfig := ResetConfig(fmt.Sprintf("%s_%d", testName, i))
ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
ensureDir(filepath.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
var privVal types.PrivValidator
if i < nValidators {
privVal = privVals[i]
} else {
tempFile, err := ioutil.TempFile("", "priv_validator_")
tempKeyFile, err := ioutil.TempFile("", "priv_validator_key_")
if err != nil {
panic(err)
}
privVal = privval.GenFilePV(tempFile.Name())
tempStateFile, err := ioutil.TempFile("", "priv_validator_state_")
if err != nil {
panic(err)
}
privVal = privval.GenFilePV(tempKeyFile.Name(), tempStateFile.Name())
}
app := appFunc()

View File

@@ -10,6 +10,7 @@ import (
"github.com/tendermint/tendermint/abci/example/code"
abci "github.com/tendermint/tendermint/abci/types"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
)
@@ -17,12 +18,17 @@ func init() {
config = ResetConfig("consensus_mempool_test")
}
// for testing
func assertMempool(txn txNotifier) sm.Mempool {
return txn.(sm.Mempool)
}
func TestMempoolNoProgressUntilTxsAvailable(t *testing.T) {
config := ResetConfig("consensus_mempool_txs_available_test")
config.Consensus.CreateEmptyBlocks = false
state, privVals := randGenesisState(1, false, 10)
cs := newConsensusStateWithConfig(config, state, privVals[0], NewCounterApplication())
cs.mempool.EnableTxsAvailable()
assertMempool(cs.txNotifier).EnableTxsAvailable()
height, round := cs.Height, cs.Round
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock)
startTestRound(cs, height, round)
@@ -40,7 +46,7 @@ func TestMempoolProgressAfterCreateEmptyBlocksInterval(t *testing.T) {
config.Consensus.CreateEmptyBlocksInterval = ensureTimeout
state, privVals := randGenesisState(1, false, 10)
cs := newConsensusStateWithConfig(config, state, privVals[0], NewCounterApplication())
cs.mempool.EnableTxsAvailable()
assertMempool(cs.txNotifier).EnableTxsAvailable()
height, round := cs.Height, cs.Round
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock)
startTestRound(cs, height, round)
@@ -55,7 +61,7 @@ func TestMempoolProgressInHigherRound(t *testing.T) {
config.Consensus.CreateEmptyBlocks = false
state, privVals := randGenesisState(1, false, 10)
cs := newConsensusStateWithConfig(config, state, privVals[0], NewCounterApplication())
cs.mempool.EnableTxsAvailable()
assertMempool(cs.txNotifier).EnableTxsAvailable()
height, round := cs.Height, cs.Round
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock)
newRoundCh := subscribe(cs.eventBus, types.EventQueryNewRound)
@@ -91,7 +97,7 @@ func deliverTxsRange(cs *ConsensusState, start, end int) {
for i := start; i < end; i++ {
txBytes := make([]byte, 8)
binary.BigEndian.PutUint64(txBytes, uint64(i))
err := cs.mempool.CheckTx(txBytes, nil)
err := assertMempool(cs.txNotifier).CheckTx(txBytes, nil)
if err != nil {
panic(fmt.Sprintf("Error after CheckTx: %v", err))
}
@@ -141,7 +147,7 @@ func TestMempoolRmBadTx(t *testing.T) {
// Try to send the tx through the mempool.
// CheckTx should not err, but the app should return a bad abci code
// and the tx should get removed from the pool
err := cs.mempool.CheckTx(txBytes, func(r *abci.Response) {
err := assertMempool(cs.txNotifier).CheckTx(txBytes, func(r *abci.Response) {
if r.GetCheckTx().Code != code.CodeTypeBadNonce {
t.Fatalf("expected checktx to return bad nonce, got %v", r)
}
@@ -153,7 +159,7 @@ func TestMempoolRmBadTx(t *testing.T) {
// check for the tx
for {
txs := cs.mempool.ReapMaxBytesMaxGas(int64(len(txBytes)), -1)
txs := assertMempool(cs.txNotifier).ReapMaxBytesMaxGas(int64(len(txBytes)), -1)
if len(txs) == 0 {
emptyMempoolCh <- struct{}{}
return

View File

@@ -8,7 +8,11 @@ import (
stdprometheus "github.com/prometheus/client_golang/prometheus"
)
const MetricsSubsystem = "consensus"
const (
// MetricsSubsystem is a subsystem shared by all metrics exposed by this
// package.
MetricsSubsystem = "consensus"
)
// Metrics contains metrics exposed by this package.
type Metrics struct {
@@ -50,101 +54,107 @@ type Metrics struct {
}
// PrometheusMetrics returns Metrics build using Prometheus client library.
func PrometheusMetrics(namespace string) *Metrics {
// Optionally, labels can be provided along with their values ("foo",
// "fooValue").
func PrometheusMetrics(namespace string, labelsAndValues ...string) *Metrics {
labels := []string{}
for i := 0; i < len(labelsAndValues); i += 2 {
labels = append(labels, labelsAndValues[i])
}
return &Metrics{
Height: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "height",
Help: "Height of the chain.",
}, []string{}),
}, labels).With(labelsAndValues...),
Rounds: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "rounds",
Help: "Number of rounds.",
}, []string{}),
}, labels).With(labelsAndValues...),
Validators: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "validators",
Help: "Number of validators.",
}, []string{}),
}, labels).With(labelsAndValues...),
ValidatorsPower: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "validators_power",
Help: "Total power of all validators.",
}, []string{}),
}, labels).With(labelsAndValues...),
MissingValidators: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "missing_validators",
Help: "Number of validators who did not sign.",
}, []string{}),
}, labels).With(labelsAndValues...),
MissingValidatorsPower: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "missing_validators_power",
Help: "Total power of the missing validators.",
}, []string{}),
}, labels).With(labelsAndValues...),
ByzantineValidators: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "byzantine_validators",
Help: "Number of validators who tried to double sign.",
}, []string{}),
}, labels).With(labelsAndValues...),
ByzantineValidatorsPower: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "byzantine_validators_power",
Help: "Total power of the byzantine validators.",
}, []string{}),
}, labels).With(labelsAndValues...),
BlockIntervalSeconds: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "block_interval_seconds",
Help: "Time between this and the last block.",
}, []string{}),
}, labels).With(labelsAndValues...),
NumTxs: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "num_txs",
Help: "Number of transactions.",
}, []string{}),
}, labels).With(labelsAndValues...),
BlockSizeBytes: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "block_size_bytes",
Help: "Size of the block.",
}, []string{}),
}, labels).With(labelsAndValues...),
TotalTxs: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "total_txs",
Help: "Total number of transactions.",
}, []string{}),
}, labels).With(labelsAndValues...),
CommittedHeight: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "latest_block_height",
Help: "The latest block height.",
}, []string{}),
}, labels).With(labelsAndValues...),
FastSyncing: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "fast_syncing",
Help: "Whether or not a node is fast syncing. 1 if yes, 0 if no.",
}, []string{}),
}, labels).With(labelsAndValues...),
BlockParts: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "block_parts",
Help: "Number of blockparts transmitted by peer.",
}, []string{"peer_id"}),
}, append(labels, "peer_id")).With(labelsAndValues...),
}
}

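A hedged usage sketch of the new variadic signature; the namespace, label name, and label value below are illustrative assumptions:

```go
package main

import (
	"github.com/tendermint/tendermint/consensus"
)

func main() {
	// Labels are passed as alternating name/value pairs after the namespace
	// and are attached as constant labels to every metric in the set.
	metrics := consensus.PrometheusMetrics("tendermint", "chain_id", "test-chain-1")
	metrics.Height.Set(1)
}
```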
View File

@@ -8,7 +8,7 @@ import (
"github.com/pkg/errors"
"github.com/tendermint/go-amino"
amino "github.com/tendermint/go-amino"
cstypes "github.com/tendermint/tendermint/consensus/types"
cmn "github.com/tendermint/tendermint/libs/common"
tmevents "github.com/tendermint/tendermint/libs/events"

View File

@@ -14,7 +14,7 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/abci/client"
abcicli "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/example/kvstore"
abci "github.com/tendermint/tendermint/abci/types"
bc "github.com/tendermint/tendermint/blockchain"
@@ -143,7 +143,8 @@ func TestReactorWithEvidence(t *testing.T) {
// mock the evidence pool
// everyone includes evidence of another double signing
vIdx := (i + 1) % nValidators
evpool := newMockEvidencePool(privVals[vIdx].GetAddress())
addr := privVals[vIdx].GetPubKey().Address()
evpool := newMockEvidencePool(addr)
// Make ConsensusState
blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyAppConnCon, mempool, evpool)
@@ -210,6 +211,7 @@ func (m *mockEvidencePool) Update(block *types.Block, state sm.State) {
}
m.height++
}
func (m *mockEvidencePool) IsCommitted(types.Evidence) bool { return false }
//------------------------------------
@@ -224,7 +226,7 @@ func TestReactorCreatesBlockWhenEmptyBlocksFalse(t *testing.T) {
defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)
// send a tx
if err := css[3].mempool.CheckTx([]byte{1, 2, 3}, nil); err != nil {
if err := assertMempool(css[3].txNotifier).CheckTx([]byte{1, 2, 3}, nil); err != nil {
//t.Fatal(err)
}
@@ -268,7 +270,8 @@ func TestReactorVotingPowerChange(t *testing.T) {
// map of active validators
activeVals := make(map[string]struct{})
for i := 0; i < nVals; i++ {
activeVals[string(css[i].privValidator.GetAddress())] = struct{}{}
addr := css[i].privValidator.GetPubKey().Address()
activeVals[string(addr)] = struct{}{}
}
// wait till everyone makes block 1
@@ -331,7 +334,8 @@ func TestReactorValidatorSetChanges(t *testing.T) {
// map of active validators
activeVals := make(map[string]struct{})
for i := 0; i < nVals; i++ {
activeVals[string(css[i].privValidator.GetAddress())] = struct{}{}
addr := css[i].privValidator.GetPubKey().Address()
activeVals[string(addr)] = struct{}{}
}
// wait till everyone makes block 1
@@ -445,7 +449,7 @@ func waitForAndValidateBlock(t *testing.T, n int, activeVals map[string]struct{}
err := validateBlock(newBlock, activeVals)
assert.Nil(t, err)
for _, tx := range txs {
err := css[j].mempool.CheckTx(tx, nil)
err := assertMempool(css[j].txNotifier).CheckTx(tx, nil)
assert.Nil(t, err)
}
}, css)

View File

@@ -6,12 +6,12 @@ import (
"hash/crc32"
"io"
"reflect"
//"strconv"
//"strings"
"time"
abci "github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/version"
//auto "github.com/tendermint/tendermint/libs/autofile"
cmn "github.com/tendermint/tendermint/libs/common"
dbm "github.com/tendermint/tendermint/libs/db"
@@ -20,6 +20,7 @@ import (
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
"github.com/tendermint/tendermint/version"
)
var crc32c = crc32.MakeTable(crc32.Castagnoli)
@@ -143,8 +144,8 @@ func (cs *ConsensusState) catchupReplay(csHeight int64) error {
if err == io.EOF {
break
} else if IsDataCorruptionError(err) {
cs.Logger.Debug("data has been corrupted in last height of consensus WAL", "err", err, "height", csHeight)
panic(fmt.Sprintf("data has been corrupted (%v) in last height %d of consensus WAL", err, csHeight))
cs.Logger.Error("data has been corrupted in last height of consensus WAL", "err", err, "height", csHeight)
return err
} else if err != nil {
return err
}
@@ -196,6 +197,7 @@ type Handshaker struct {
stateDB dbm.DB
initialState sm.State
store sm.BlockStore
eventBus types.BlockEventPublisher
genDoc *types.GenesisDoc
logger log.Logger
@@ -209,6 +211,7 @@ func NewHandshaker(stateDB dbm.DB, state sm.State,
stateDB: stateDB,
initialState: state,
store: store,
eventBus: types.NopEventBus{},
genDoc: genDoc,
logger: log.NewNopLogger(),
nBlocks: 0,
@@ -219,6 +222,12 @@ func (h *Handshaker) SetLogger(l log.Logger) {
h.logger = l
}
// SetEventBus - sets the event bus for publishing block related events.
// If not called, it defaults to types.NopEventBus.
func (h *Handshaker) SetEventBus(eventBus types.BlockEventPublisher) {
h.eventBus = eventBus
}
func (h *Handshaker) NBlocks() int {
return h.nBlocks
}
@@ -247,6 +256,7 @@ func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
// Set AppVersion on the state.
h.initialState.Version.Consensus.App = version.Protocol(res.AppVersion)
sm.SaveState(h.stateDB, h.initialState)
// Replay blocks up to the latest in the blockstore.
_, err = h.ReplayBlocks(h.initialState, appHash, blockHeight, proxyApp)
@@ -295,6 +305,7 @@ func (h *Handshaker) ReplayBlocks(
return nil, err
}
if stateBlockHeight == 0 { //we only update state when we are in initial state
// If the app returned validators or consensus params, update the state.
if len(res.Validators) > 0 {
vals, err := types.PB2TM.ValidatorUpdates(res.Validators)
@@ -303,12 +314,19 @@ func (h *Handshaker) ReplayBlocks(
}
state.Validators = types.NewValidatorSet(vals)
state.NextValidators = types.NewValidatorSet(vals)
} else {
// If validator set is not set in genesis and still empty after InitChain, exit.
if len(h.genDoc.Validators) == 0 {
return nil, fmt.Errorf("Validator set is nil in genesis and still empty after InitChain")
}
}
if res.ConsensusParams != nil {
state.ConsensusParams = types.PB2TM.ConsensusParams(res.ConsensusParams)
}
sm.SaveState(h.stateDB, state)
}
}
// First handle edge cases and constraints on the storeBlockHeight.
if storeBlockHeight == 0 {
@@ -423,6 +441,7 @@ func (h *Handshaker) replayBlock(state sm.State, height int64, proxyApp proxy.Ap
meta := h.store.LoadBlockMeta(height)
blockExec := sm.NewBlockExecutor(h.stateDB, h.logger, proxyApp, sm.MockMempool{}, sm.MockEvidencePool{})
blockExec.SetEventBus(h.eventBus)
var err error
state, err = blockExec.ApplyBlock(state, meta.BlockID, block)

View File

@@ -137,7 +137,7 @@ func (pb *playback) replayReset(count int, newStepCh chan interface{}) error {
pb.cs.Wait()
newCS := NewConsensusState(pb.cs.config, pb.genesisState.Copy(), pb.cs.blockExec,
pb.cs.blockStore, pb.cs.mempool, pb.cs.evpool)
pb.cs.blockStore, pb.cs.txNotifier, pb.cs.evpool)
newCS.SetEventBus(pb.cs.eventBus)
newCS.startForReplay()
@@ -326,17 +326,18 @@ func newConsensusStateForReplay(config cfg.BaseConfig, csConfig *cfg.ConsensusCo
cmn.Exit(fmt.Sprintf("Error starting proxy app conns: %v", err))
}
handshaker := NewHandshaker(stateDB, state, blockStore, gdoc)
err = handshaker.Handshake(proxyApp)
if err != nil {
cmn.Exit(fmt.Sprintf("Error on handshake: %v", err))
}
eventBus := types.NewEventBus()
if err := eventBus.Start(); err != nil {
cmn.Exit(fmt.Sprintf("Failed to start event bus: %v", err))
}
handshaker := NewHandshaker(stateDB, state, blockStore, gdoc)
handshaker.SetEventBus(eventBus)
err = handshaker.Handshake(proxyApp)
if err != nil {
cmn.Exit(fmt.Sprintf("Error on handshake: %v", err))
}
mempool, evpool := sm.MockMempool{}, sm.MockEvidencePool{}
blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyApp.Consensus(), mempool, evpool)

View File

@@ -87,7 +87,7 @@ func sendTxs(cs *ConsensusState, ctx context.Context) {
return
default:
tx := []byte{byte(i)}
cs.mempool.CheckTx(tx, nil)
assertMempool(cs.txNotifier).CheckTx(tx, nil)
i++
}
}
@@ -319,7 +319,7 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
walFile := tempWALWithData(walBody)
config.Consensus.SetWalFile(walFile)
privVal := privval.LoadFilePV(config.PrivValidatorFile())
privVal := privval.LoadFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile())
wal, err := NewWAL(walFile)
require.NoError(t, err)
@@ -537,10 +537,8 @@ func makeBlockchainFromWAL(wal WAL) ([]*types.Block, []*types.Commit, error) {
}
case *types.Vote:
if p.Type == types.PrecommitType {
thisBlockCommit = &types.Commit{
BlockID: p.BlockID,
Precommits: []*types.Vote{p},
}
commitSigs := []*types.CommitSig{p.CommitSig()}
thisBlockCommit = types.NewCommit(p.BlockID, commitSigs)
}
}
}
@@ -633,7 +631,7 @@ func TestInitChainUpdateValidators(t *testing.T) {
clientCreator := proxy.NewLocalClientCreator(app)
config := ResetConfig("proxy_test_")
privVal := privval.LoadFilePV(config.PrivValidatorFile())
privVal := privval.LoadFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile())
stateDB, state, store := stateAndStore(config, privVal.GetPubKey(), 0x0)
oldValAddr := state.Validators.Validators[0].Address
@@ -659,12 +657,6 @@ func TestInitChainUpdateValidators(t *testing.T) {
assert.Equal(t, newValAddr, expectValAddr)
}
func newInitChainApp(vals []abci.ValidatorUpdate) *initChainApp {
return &initChainApp{
vals: vals,
}
}
// returns the vals on InitChain
type initChainApp struct {
abci.BaseApplication

View File

@@ -2,13 +2,14 @@ package consensus
import (
"bytes"
"errors"
"fmt"
"reflect"
"runtime/debug"
"sync"
"time"
"github.com/pkg/errors"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/libs/fail"
"github.com/tendermint/tendermint/libs/log"
@@ -56,6 +57,16 @@ func (ti *timeoutInfo) String() string {
return fmt.Sprintf("%v ; %d/%d %v", ti.Duration, ti.Height, ti.Round, ti.Step)
}
// interface to the mempool
type txNotifier interface {
TxsAvailable() <-chan struct{}
}
// interface to the evidence pool
type evidencePool interface {
AddEvidence(types.Evidence) error
}
// ConsensusState handles execution of the consensus algorithm.
// It processes votes and proposals, and upon reaching agreement,
// commits blocks to the chain and executes them against the application.
@@ -67,16 +78,22 @@ type ConsensusState struct {
config *cfg.ConsensusConfig
privValidator types.PrivValidator // for signing votes
// services for creating and executing blocks
blockExec *sm.BlockExecutor
// store blocks and commits
blockStore sm.BlockStore
mempool sm.Mempool
evpool sm.EvidencePool
// create and execute blocks
blockExec *sm.BlockExecutor
// notify us if txs are available
txNotifier txNotifier
// add evidence to the pool
// when it's detected
evpool evidencePool
// internal state
mtx sync.RWMutex
cstypes.RoundState
triggeredTimeoutPrecommit bool
state sm.State // State until height-1.
// state changes may be triggered by: msgs from peers,
@@ -127,15 +144,15 @@ func NewConsensusState(
state sm.State,
blockExec *sm.BlockExecutor,
blockStore sm.BlockStore,
mempool sm.Mempool,
evpool sm.EvidencePool,
txNotifier txNotifier,
evpool evidencePool,
options ...StateOption,
) *ConsensusState {
cs := &ConsensusState{
config: config,
blockExec: blockExec,
blockStore: blockStore,
mempool: mempool,
txNotifier: txNotifier,
peerMsgQueue: make(chan msgInfo, msgQueueSize),
internalMsgQueue: make(chan msgInfo, msgQueueSize),
timeoutTicker: NewTimeoutTicker(),
@@ -290,6 +307,23 @@ func (cs *ConsensusState) OnStart() error {
// reload from consensus log to catchup
if cs.doWALCatchup {
if err := cs.catchupReplay(cs.Height); err != nil {
// don't try to recover from data corruption error
if IsDataCorruptionError(err) {
cs.Logger.Error("Encountered corrupt WAL file", "err", err.Error())
cs.Logger.Error("Please repair the WAL file before restarting")
fmt.Println(`You can attempt to repair the WAL as follows:
----
WALFILE=~/.tendermint/data/cs.wal/wal
cp $WALFILE ${WALFILE}.bak # backup the file
go run scripts/wal2json/main.go $WALFILE > wal.json # this will panic, but can be ignored
rm $WALFILE # remove the corrupt file
go run scripts/json2wal/main.go wal.json $WALFILE # rebuild the file without corruption
----`)
return err
}
cs.Logger.Error("Error on catchup replay. Proceeding to start ConsensusState anyway", "err", err.Error())
// NOTE: if we ever do return an error here,
// make sure to stop the timeoutTicker
@@ -420,7 +454,7 @@ func (cs *ConsensusState) updateRoundStep(round int, step cstypes.RoundStepType)
// enterNewRound(height, 0) at cs.StartTime.
func (cs *ConsensusState) scheduleRound0(rs *cstypes.RoundState) {
//cs.Logger.Info("scheduleRound0", "now", tmtime.Now(), "startTime", cs.StartTime)
sleepDuration := rs.StartTime.Sub(tmtime.Now()) // nolint: gotype, gosimple
sleepDuration := rs.StartTime.Sub(tmtime.Now())
cs.scheduleTimeout(sleepDuration, rs.Height, 0, cstypes.RoundStepNewHeight)
}
@@ -455,7 +489,7 @@ func (cs *ConsensusState) reconstructLastCommit(state sm.State) {
if precommit == nil {
continue
}
added, err := lastPrecommits.AddVote(precommit)
added, err := lastPrecommits.AddVote(seenCommit.ToVote(precommit))
if !added || err != nil {
cmn.PanicCrisis(fmt.Sprintf("Failed to reconstruct LastCommit: %v", err))
}
@@ -483,7 +517,7 @@ func (cs *ConsensusState) updateToState(state sm.State) {
// If state isn't further out than cs.state, just ignore.
// This happens when SwitchToConsensus() is called in the reactor.
// We don't want to reset e.g. the Votes, but we still want to
// signal the new round step, because other services (eg. mempool)
// signal the new round step, because other services (eg. txNotifier)
// depend on having an up-to-date peer state!
if !cs.state.IsEmpty() && (state.LastBlockHeight <= cs.state.LastBlockHeight) {
cs.Logger.Info("Ignoring updateToState()", "newHeight", state.LastBlockHeight+1, "oldHeight", cs.state.LastBlockHeight+1)
@@ -598,7 +632,7 @@ func (cs *ConsensusState) receiveRoutine(maxSteps int) {
var mi msgInfo
select {
case <-cs.mempool.TxsAvailable():
case <-cs.txNotifier.TxsAvailable():
cs.handleTxsAvailable()
case mi = <-cs.peerMsgQueue:
cs.wal.Write(mi)
@@ -607,6 +641,15 @@ func (cs *ConsensusState) receiveRoutine(maxSteps int) {
cs.handleMsg(mi)
case mi = <-cs.internalMsgQueue:
cs.wal.WriteSync(mi) // NOTE: fsync
if _, ok := mi.Msg.(*VoteMessage); ok {
// we actually want to simulate failing during
// the previous WriteSync, but this isn't easy to do.
// Equivalent would be to fail here and manually remove
// some bytes from the end of the wal.
fail.Fail() // XXX
}
// handles proposals, block parts, votes
cs.handleMsg(mi)
case ti := <-cs.timeoutTicker.Chan(): // tockChan:
@@ -714,6 +757,7 @@ func (cs *ConsensusState) handleTxsAvailable() {
cs.mtx.Lock()
defer cs.mtx.Unlock()
// we only need to do this for round 0
cs.enterNewRound(cs.Height, 0)
cs.enterPropose(cs.Height, 0)
}
@@ -764,7 +808,7 @@ func (cs *ConsensusState) enterNewRound(height int64, round int) {
cs.ProposalBlockParts = nil
}
cs.Votes.SetRound(round + 1) // also track next round (round+1) to allow round-skipping
cs.triggeredTimeoutPrecommit = false
cs.TriggeredTimeoutPrecommit = false
cs.eventBus.PublishEventNewRound(cs.NewRoundEvent())
cs.metrics.Rounds.Set(float64(round))
@@ -829,13 +873,14 @@ func (cs *ConsensusState) enterPropose(height int64, round int) {
}
// if not a validator, we're done
if !cs.Validators.HasAddress(cs.privValidator.GetAddress()) {
logger.Debug("This node is not a validator", "addr", cs.privValidator.GetAddress(), "vals", cs.Validators)
address := cs.privValidator.GetPubKey().Address()
if !cs.Validators.HasAddress(address) {
logger.Debug("This node is not a validator", "addr", address, "vals", cs.Validators)
return
}
logger.Debug("This node is a validator")
if cs.isProposer() {
if cs.isProposer(address) {
logger.Info("enterPropose: Our turn to propose", "proposer", cs.Validators.GetProposer().Address, "privValidator", cs.privValidator)
cs.decideProposal(height, round)
} else {
@@ -843,8 +888,8 @@ func (cs *ConsensusState) enterPropose(height int64, round int) {
}
}
func (cs *ConsensusState) isProposer() bool {
return bytes.Equal(cs.Validators.GetProposer().Address, cs.privValidator.GetAddress())
func (cs *ConsensusState) isProposer(address []byte) bool {
return bytes.Equal(cs.Validators.GetProposer().Address, address)
}
func (cs *ConsensusState) defaultDecideProposal(height int64, round int) {
@@ -909,7 +954,7 @@ func (cs *ConsensusState) createProposalBlock() (block *types.Block, blockParts
if cs.Height == 1 {
// We're creating a proposal for the first block.
// The commit is empty, but not nil.
commit = &types.Commit{}
commit = types.NewCommit(types.BlockID{}, nil)
} else if cs.LastCommit.HasTwoThirdsMajority() {
// Make the commit from LastCommit
commit = cs.LastCommit.MakeCommit()
@@ -919,20 +964,8 @@ func (cs *ConsensusState) createProposalBlock() (block *types.Block, blockParts
return
}
maxBytes := cs.state.ConsensusParams.BlockSize.MaxBytes
maxGas := cs.state.ConsensusParams.BlockSize.MaxGas
// bound evidence to 1/10th of the block
evidence := cs.evpool.PendingEvidence(types.MaxEvidenceBytesPerBlock(maxBytes))
// Mempool validated transactions
txs := cs.mempool.ReapMaxBytesMaxGas(types.MaxDataBytes(
maxBytes,
cs.state.Validators.Size(),
len(evidence),
), maxGas)
proposerAddr := cs.privValidator.GetAddress()
block, parts := cs.state.MakeBlock(cs.Height, txs, commit, evidence, proposerAddr)
return block, parts
proposerAddr := cs.privValidator.GetPubKey().Address()
return cs.blockExec.CreateProposalBlock(cs.Height, cs.state, commit, proposerAddr)
}
// Enter: `timeoutPropose` after entering Propose.
@@ -1121,12 +1154,12 @@ func (cs *ConsensusState) enterPrecommit(height int64, round int) {
func (cs *ConsensusState) enterPrecommitWait(height int64, round int) {
logger := cs.Logger.With("height", height, "round", round)
if cs.Height != height || round < cs.Round || (cs.Round == round && cs.triggeredTimeoutPrecommit) {
if cs.Height != height || round < cs.Round || (cs.Round == round && cs.TriggeredTimeoutPrecommit) {
logger.Debug(
fmt.Sprintf(
"enterPrecommitWait(%v/%v): Invalid args. "+
"Current state is Height/Round: %v/%v/, triggeredTimeoutPrecommit:%v",
height, round, cs.Height, cs.Round, cs.triggeredTimeoutPrecommit))
"Current state is Height/Round: %v/%v/, TriggeredTimeoutPrecommit:%v",
height, round, cs.Height, cs.Round, cs.TriggeredTimeoutPrecommit))
return
}
if !cs.Votes.Precommits(round).HasTwoThirdsAny() {
@@ -1136,7 +1169,7 @@ func (cs *ConsensusState) enterPrecommitWait(height int64, round int) {
defer func() {
// Done enterPrecommitWait:
cs.triggeredTimeoutPrecommit = true
cs.TriggeredTimeoutPrecommit = true
cs.newStep()
}()
@@ -1323,7 +1356,7 @@ func (cs *ConsensusState) recordMetrics(height int64, block *types.Block) {
missingValidators := 0
missingValidatorsPower := int64(0)
for i, val := range cs.Validators.Validators {
var vote *types.Vote
var vote *types.CommitSig
if i < len(block.LastCommit.Precommits) {
vote = block.LastCommit.Precommits[i]
}
@@ -1474,7 +1507,8 @@ func (cs *ConsensusState) tryAddVote(vote *types.Vote, peerID p2p.ID) (bool, err
if err == ErrVoteHeightMismatch {
return added, err
} else if voteErr, ok := err.(*types.ErrVoteConflictingVotes); ok {
if bytes.Equal(vote.ValidatorAddress, cs.privValidator.GetAddress()) {
addr := cs.privValidator.GetPubKey().Address()
if bytes.Equal(vote.ValidatorAddress, addr) {
cs.Logger.Error("Found conflicting vote from ourselves. Did you unsafe_reset a validator?", "height", vote.Height, "round", vote.Round, "type", vote.Type)
return added, err
}
@@ -1526,7 +1560,7 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerID p2p.ID) (added bool,
// Not necessarily a bad peer, but not favourable behaviour.
if vote.Height != cs.Height {
err = ErrVoteHeightMismatch
cs.Logger.Info("Vote ignored and not added", "voteHeight", vote.Height, "csHeight", cs.Height, "err", err)
cs.Logger.Info("Vote ignored and not added", "voteHeight", vote.Height, "csHeight", cs.Height, "peerID", peerID)
return
}
@@ -1639,7 +1673,7 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerID p2p.ID) (added bool,
}
func (cs *ConsensusState) signVote(type_ types.SignedMsgType, hash []byte, header types.PartSetHeader) (*types.Vote, error) {
addr := cs.privValidator.GetAddress()
addr := cs.privValidator.GetPubKey().Address()
valIndex, _ := cs.Validators.GetByAddress(addr)
vote := &types.Vote{
@@ -1675,7 +1709,7 @@ func (cs *ConsensusState) voteTime() time.Time {
// sign the vote and publish on internalMsgQueue
func (cs *ConsensusState) signAddVote(type_ types.SignedMsgType, hash []byte, header types.PartSetHeader) *types.Vote {
// if we don't have a key or we're not in the validator set, do nothing
if cs.privValidator == nil || !cs.Validators.HasAddress(cs.privValidator.GetAddress()) {
if cs.privValidator == nil || !cs.Validators.HasAddress(cs.privValidator.GetPubKey().Address()) {
return nil
}
vote, err := cs.signVote(type_, hash, header)

View File

@@ -22,10 +22,6 @@ func init() {
config = ResetConfig("consensus_state_test")
}
func ensureProposeTimeout(timeoutPropose time.Duration) time.Duration {
return time.Duration(timeoutPropose.Nanoseconds()*2) * time.Nanosecond
}
/*
ProposeSuite
@@ -73,7 +69,8 @@ func TestStateProposerSelection0(t *testing.T) {
// Commit a block and ensure proposer for the next height is correct.
prop := cs1.GetRoundState().Validators.GetProposer()
if !bytes.Equal(prop.Address, cs1.privValidator.GetAddress()) {
address := cs1.privValidator.GetPubKey().Address()
if !bytes.Equal(prop.Address, address) {
t.Fatalf("expected proposer to be validator %d. Got %X", 0, prop.Address)
}
@@ -87,7 +84,8 @@ func TestStateProposerSelection0(t *testing.T) {
ensureNewRound(newRoundCh, height+1, 0)
prop = cs1.GetRoundState().Validators.GetProposer()
if !bytes.Equal(prop.Address, vss[1].GetAddress()) {
addr := vss[1].GetPubKey().Address()
if !bytes.Equal(prop.Address, addr) {
panic(fmt.Sprintf("expected proposer to be validator %d. Got %X", 1, prop.Address))
}
}
@@ -110,7 +108,8 @@ func TestStateProposerSelection2(t *testing.T) {
// everyone just votes nil. we get a new proposer each round
for i := 0; i < len(vss); i++ {
prop := cs1.GetRoundState().Validators.GetProposer()
correctProposer := vss[(i+round)%len(vss)].GetAddress()
addr := vss[(i+round)%len(vss)].GetPubKey().Address()
correctProposer := addr
if !bytes.Equal(prop.Address, correctProposer) {
panic(fmt.Sprintf("expected RoundState.Validators.GetProposer() to be validator %d. Got %X", (i+2)%len(vss), prop.Address))
}
@@ -505,7 +504,8 @@ func TestStateLockPOLRelock(t *testing.T) {
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
newBlockCh := subscribe(cs1.eventBus, types.EventQueryNewBlockHeader)
@@ -596,7 +596,8 @@ func TestStateLockPOLUnlock(t *testing.T) {
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
unlockCh := subscribe(cs1.eventBus, types.EventQueryUnlock)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
// everything done from perspective of cs1
@@ -689,7 +690,8 @@ func TestStateLockPOLSafety1(t *testing.T) {
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
// start round and wait for propose and prevote
startTestRound(cs1, cs1.Height, round)
@@ -805,7 +807,8 @@ func TestStateLockPOLSafety2(t *testing.T) {
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
unlockCh := subscribe(cs1.eventBus, types.EventQueryUnlock)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
// the block for R0: gets polkad but we miss it
// (even though we signed it, shhh)
@@ -896,7 +899,8 @@ func TestProposeValidBlock(t *testing.T) {
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
unlockCh := subscribe(cs1.eventBus, types.EventQueryUnlock)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
// start round and wait for propose and prevote
startTestRound(cs1, cs1.Height, round)
@@ -982,7 +986,8 @@ func TestSetValidBlockOnDelayedPrevote(t *testing.T) {
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
validBlockCh := subscribe(cs1.eventBus, types.EventQueryValidBlock)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
// start round and wait for propose and prevote
startTestRound(cs1, cs1.Height, round)
@@ -1041,7 +1046,8 @@ func TestSetValidBlockOnDelayedProposal(t *testing.T) {
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
validBlockCh := subscribe(cs1.eventBus, types.EventQueryValidBlock)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
round = round + 1 // move to round in which P0 is not proposer
@@ -1111,7 +1117,8 @@ func TestWaitingTimeoutProposeOnNewRound(t *testing.T) {
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
// start round
startTestRound(cs1, height, round)
@@ -1144,7 +1151,8 @@ func TestRoundSkipOnNilPolkaFromHigherRound(t *testing.T) {
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
// start round
startTestRound(cs1, height, round)
@@ -1177,7 +1185,8 @@ func TestWaitTimeoutProposeOnNilPolkaForTheCurrentRound(t *testing.T) {
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
// start round in which PO is not proposer
startTestRound(cs1, height, round)
@@ -1266,6 +1275,71 @@ func TestCommitFromPreviousRound(t *testing.T) {
ensureNewRound(newRoundCh, height+1, 0)
}
type fakeTxNotifier struct {
ch chan struct{}
}
func (n *fakeTxNotifier) TxsAvailable() <-chan struct{} {
return n.ch
}
func (n *fakeTxNotifier) Notify() {
n.ch <- struct{}{}
}
func TestStartNextHeightCorrectly(t *testing.T) {
cs1, vss := randConsensusState(4)
cs1.config.SkipTimeoutCommit = false
cs1.txNotifier = &fakeTxNotifier{ch: make(chan struct{})}
vs2, vs3, vs4 := vss[1], vss[2], vss[3]
height, round := cs1.Height, cs1.Round
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
newBlockHeader := subscribe(cs1.eventBus, types.EventQueryNewBlockHeader)
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
// start round and wait for propose and prevote
startTestRound(cs1, height, round)
ensureNewRound(newRoundCh, height, round)
ensureNewProposal(proposalCh, height, round)
rs := cs1.GetRoundState()
theBlockHash := rs.ProposalBlock.Hash()
theBlockParts := rs.ProposalBlockParts.Header()
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], theBlockHash)
signAddVotes(cs1, types.PrevoteType, theBlockHash, theBlockParts, vs2, vs3, vs4)
ensurePrecommit(voteCh, height, round)
// the proposed block should now be locked and our precommit added
validatePrecommit(t, cs1, round, round, vss[0], theBlockHash, theBlockHash)
rs = cs1.GetRoundState()
// add precommits
signAddVotes(cs1, types.PrecommitType, nil, types.PartSetHeader{}, vs2)
signAddVotes(cs1, types.PrecommitType, theBlockHash, theBlockParts, vs3)
signAddVotes(cs1, types.PrecommitType, theBlockHash, theBlockParts, vs4)
ensureNewBlockHeader(newBlockHeader, height, theBlockHash)
rs = cs1.GetRoundState()
assert.True(t, rs.TriggeredTimeoutPrecommit)
cs1.txNotifier.(*fakeTxNotifier).Notify()
ensureNewTimeout(timeoutProposeCh, height+1, round, cs1.config.TimeoutPropose.Nanoseconds())
rs = cs1.GetRoundState()
assert.False(t, rs.TriggeredTimeoutPrecommit, "triggeredTimeoutPrecommit should be false at the beginning of each round")
}
//------------------------------------------------------------------------------------------
// SlashingSuite
// TODO: Slashing
@@ -1361,7 +1435,8 @@ func TestStateHalt1(t *testing.T) {
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
newBlockCh := subscribe(cs1.eventBus, types.EventQueryNewBlock)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
// start round and wait for propose and prevote
startTestRound(cs1, height, round)

View File

@@ -50,8 +50,9 @@ func TestPeerCatchupRounds(t *testing.T) {
func makeVoteHR(t *testing.T, height int64, round int, privVals []types.PrivValidator, valIndex int) *types.Vote {
privVal := privVals[valIndex]
addr := privVal.GetPubKey().Address()
vote := &types.Vote{
ValidatorAddress: privVal.GetAddress(),
ValidatorAddress: addr,
ValidatorIndex: valIndex,
Height: height,
Round: round,

View File

@@ -84,6 +84,7 @@ type RoundState struct {
CommitRound int `json:"commit_round"` //
LastCommit *types.VoteSet `json:"last_commit"` // Last precommits at Height-1
LastValidators *types.ValidatorSet `json:"last_validators"`
TriggeredTimeoutPrecommit bool `json:"triggered_timeout_precommit"`
}
// Compressed version of the RoundState for use in RPC

View File

@@ -3,7 +3,7 @@ package types
import (
"testing"
"github.com/tendermint/go-amino"
amino "github.com/tendermint/go-amino"
"github.com/tendermint/tendermint/crypto/ed25519"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/types"
@@ -16,7 +16,7 @@ func BenchmarkRoundStateDeepCopy(b *testing.B) {
// Random validators
nval, ntxs := 100, 100
vset, _ := types.RandValidatorSet(nval, 1)
precommits := make([]*types.Vote, nval)
precommits := make([]*types.CommitSig, nval)
blockID := types.BlockID{
Hash: cmn.RandBytes(20),
PartsHeader: types.PartSetHeader{
@@ -25,12 +25,12 @@ func BenchmarkRoundStateDeepCopy(b *testing.B) {
}
sig := make([]byte, ed25519.SignatureSize)
for i := 0; i < nval; i++ {
precommits[i] = &types.Vote{
precommits[i] = (&types.Vote{
ValidatorAddress: types.Address(cmn.RandBytes(20)),
Timestamp: tmtime.Now(),
BlockID: blockID,
Signature: sig,
}
}).CommitSig()
}
txs := make([]types.Tx, ntxs)
for i := 0; i < ntxs; i++ {
@@ -54,10 +54,7 @@ func BenchmarkRoundStateDeepCopy(b *testing.B) {
Txs: txs,
},
Evidence: types.EvidenceData{},
LastCommit: &types.Commit{
BlockID: blockID,
Precommits: precommits,
},
LastCommit: types.NewCommit(blockID, precommits),
}
parts := block.MakePartSet(4096)
// Random Proposal

View File

@@ -112,11 +112,20 @@ func (wal *baseWAL) OnStart() error {
return err
}
// Stop the underlying autofile group.
// Use Wait() to ensure it's finished shutting down
// before cleaning up files.
func (wal *baseWAL) OnStop() {
wal.group.Stop()
wal.group.Close()
}
// Wait for the underlying autofile group to finish shutting down
// so it's safe to cleanup files.
func (wal *baseWAL) Wait() {
wal.group.Wait()
}
// Write is called in newStep and for each receive on the
// peerMsgQueue and the timeoutTicker.
// NOTE: does not call fsync()
@@ -163,7 +172,7 @@ func (wal *baseWAL) SearchForEndHeight(height int64, options *WALSearchOptions)
// NOTE: starting from the last file in the group because we're usually
// searching for the last height. See replay.go
min, max := wal.group.MinIndex(), wal.group.MaxIndex()
wal.Logger.Debug("Searching for height", "height", height, "min", min, "max", max)
wal.Logger.Info("Searching for height", "height", height, "min", min, "max", max)
for index := max; index >= min; index-- {
gr, err = wal.group.NewReader(index)
if err != nil {
@@ -183,7 +192,7 @@ func (wal *baseWAL) SearchForEndHeight(height int64, options *WALSearchOptions)
break
}
if options.IgnoreDataCorruptionErrors && IsDataCorruptionError(err) {
wal.Logger.Debug("Corrupted entry. Skipping...", "err", err)
wal.Logger.Error("Corrupted entry. Skipping...", "err", err)
// do nothing
continue
} else if err != nil {
@@ -194,7 +203,7 @@ func (wal *baseWAL) SearchForEndHeight(height int64, options *WALSearchOptions)
if m, ok := msg.Msg.(EndHeightMessage); ok {
lastHeightFound = m.Height
if m.Height == height { // found
wal.Logger.Debug("Found", "height", height, "index", index)
wal.Logger.Info("Found", "height", height, "index", index)
return gr, true, nil
}
}
@@ -281,25 +290,25 @@ func (dec *WALDecoder) Decode() (*TimedWALMessage, error) {
return nil, err
}
if err != nil {
return nil, fmt.Errorf("failed to read checksum: %v", err)
return nil, DataCorruptionError{fmt.Errorf("failed to read checksum: %v", err)}
}
crc := binary.BigEndian.Uint32(b)
b = make([]byte, 4)
_, err = dec.rd.Read(b)
if err != nil {
return nil, fmt.Errorf("failed to read length: %v", err)
return nil, DataCorruptionError{fmt.Errorf("failed to read length: %v", err)}
}
length := binary.BigEndian.Uint32(b)
if length > maxMsgSizeBytes {
return nil, fmt.Errorf("length %d exceeded maximum possible value of %d bytes", length, maxMsgSizeBytes)
return nil, DataCorruptionError{fmt.Errorf("length %d exceeded maximum possible value of %d bytes", length, maxMsgSizeBytes)}
}
data := make([]byte, length)
_, err = dec.rd.Read(data)
if err != nil {
return nil, fmt.Errorf("failed to read data: %v", err)
return nil, DataCorruptionError{fmt.Errorf("failed to read data: %v", err)}
}
// check checksum before decoding data

View File

@@ -40,8 +40,9 @@ func WALGenerateNBlocks(wr io.Writer, numBlocks int) (err error) {
// COPY PASTE FROM node.go WITH A FEW MODIFICATIONS
// NOTE: we can't import node package because of circular dependency.
// NOTE: we don't do handshake so need to set state.Version.Consensus.App directly.
privValidatorFile := config.PrivValidatorFile()
privValidator := privval.LoadOrGenFilePV(privValidatorFile)
privValidatorKeyFile := config.PrivValidatorKeyFile()
privValidatorStateFile := config.PrivValidatorStateFile()
privValidator := privval.LoadOrGenFilePV(privValidatorKeyFile, privValidatorStateFile)
genDoc, err := types.GenesisDocFromFile(config.GenesisFile())
if err != nil {
return errors.Wrap(err, "failed to read genesis file")

View File

@@ -7,6 +7,7 @@ import (
"io/ioutil"
"os"
"path/filepath"
// "sync"
"testing"
"time"
@@ -38,7 +39,12 @@ func TestWALTruncate(t *testing.T) {
wal.SetLogger(log.TestingLogger())
err = wal.Start()
require.NoError(t, err)
defer wal.Stop()
defer func() {
wal.Stop()
// wait for the wal to finish shutting down so we
// can safely remove the directory
wal.Wait()
}()
//60 blocks are about 70K in size, which is greater than the group's headBuf size (4096 * 10). When headBuf is full, its content is flushed to the file.
//At that point RotateFile is called, so content to truncate exists in each file.
@@ -67,8 +73,8 @@ func TestWALTruncate(t *testing.T) {
func TestWALEncoderDecoder(t *testing.T) {
now := tmtime.Now()
msgs := []TimedWALMessage{
TimedWALMessage{Time: now, Msg: EndHeightMessage{0}},
TimedWALMessage{Time: now, Msg: timeoutInfo{Duration: time.Second, Height: 1, Round: 1, Step: types.RoundStepPropose}},
{Time: now, Msg: EndHeightMessage{0}},
{Time: now, Msg: timeoutInfo{Duration: time.Second, Height: 1, Round: 1, Step: types.RoundStepPropose}},
}
b := new(bytes.Buffer)

View File

@@ -1,7 +1,7 @@
package consensus
import (
"github.com/tendermint/go-amino"
amino "github.com/tendermint/go-amino"
"github.com/tendermint/tendermint/types"
)

View File

@@ -5,7 +5,7 @@ import (
"fmt"
"io/ioutil"
"golang.org/x/crypto/openpgp/armor" // forked to github.com/tendermint/crypto
"golang.org/x/crypto/openpgp/armor"
)
func EncodeArmor(blockType string, headers map[string]string, data []byte) string {

View File

@@ -7,7 +7,7 @@ import (
"io"
amino "github.com/tendermint/go-amino"
"golang.org/x/crypto/ed25519" // forked to github.com/tendermint/crypto
"golang.org/x/crypto/ed25519"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/tmhash"
@@ -18,8 +18,8 @@ import (
var _ crypto.PrivKey = PrivKeyEd25519{}
const (
PrivKeyAminoRoute = "tendermint/PrivKeyEd25519"
PubKeyAminoRoute = "tendermint/PubKeyEd25519"
PrivKeyAminoName = "tendermint/PrivKeyEd25519"
PubKeyAminoName = "tendermint/PubKeyEd25519"
// Size of an Edwards25519 signature. Namely the size of a compressed
// Edwards25519 point, and a field element. Both of which are 32 bytes.
SignatureSize = 64
@@ -30,11 +30,11 @@ var cdc = amino.NewCodec()
func init() {
cdc.RegisterInterface((*crypto.PubKey)(nil), nil)
cdc.RegisterConcrete(PubKeyEd25519{},
PubKeyAminoRoute, nil)
PubKeyAminoName, nil)
cdc.RegisterInterface((*crypto.PrivKey)(nil), nil)
cdc.RegisterConcrete(PrivKeyEd25519{},
PrivKeyAminoRoute, nil)
PrivKeyAminoName, nil)
}
// PrivKeyEd25519 implements crypto.PrivKey.

View File

@@ -12,11 +12,11 @@ import (
var cdc = amino.NewCodec()
// routeTable is used to map public key concrete types back
// to their amino routes. This should eventually be handled
// nameTable is used to map public key concrete types back
// to their registered amino names. This should eventually be handled
// by amino. Example usage:
// routeTable[reflect.TypeOf(ed25519.PubKeyEd25519{})] = ed25519.PubKeyAminoRoute
var routeTable = make(map[reflect.Type]string, 3)
// nameTable[reflect.TypeOf(ed25519.PubKeyEd25519{})] = ed25519.PubKeyAminoName
var nameTable = make(map[reflect.Type]string, 3)
func init() {
// NOTE: It's important that there be no conflicts here,
@@ -29,16 +29,16 @@ func init() {
// TODO: Have amino provide a way to go from concrete struct to route directly.
// Its currently a private API
routeTable[reflect.TypeOf(ed25519.PubKeyEd25519{})] = ed25519.PubKeyAminoRoute
routeTable[reflect.TypeOf(secp256k1.PubKeySecp256k1{})] = secp256k1.PubKeyAminoRoute
routeTable[reflect.TypeOf(&multisig.PubKeyMultisigThreshold{})] = multisig.PubKeyMultisigThresholdAminoRoute
nameTable[reflect.TypeOf(ed25519.PubKeyEd25519{})] = ed25519.PubKeyAminoName
nameTable[reflect.TypeOf(secp256k1.PubKeySecp256k1{})] = secp256k1.PubKeyAminoName
nameTable[reflect.TypeOf(multisig.PubKeyMultisigThreshold{})] = multisig.PubKeyMultisigThresholdAminoRoute
}
// PubkeyAminoRoute returns the amino route of a pubkey
// PubkeyAminoName returns the amino route of a pubkey
// cdc is currently passed in, as eventually this will not be using
// a package level codec.
func PubkeyAminoRoute(cdc *amino.Codec, key crypto.PubKey) (string, bool) {
route, found := routeTable[reflect.TypeOf(key)]
func PubkeyAminoName(cdc *amino.Codec, key crypto.PubKey) (string, bool) {
route, found := nameTable[reflect.TypeOf(key)]
return route, found
}
@@ -47,17 +47,17 @@ func RegisterAmino(cdc *amino.Codec) {
// These are all written here instead of
cdc.RegisterInterface((*crypto.PubKey)(nil), nil)
cdc.RegisterConcrete(ed25519.PubKeyEd25519{},
ed25519.PubKeyAminoRoute, nil)
ed25519.PubKeyAminoName, nil)
cdc.RegisterConcrete(secp256k1.PubKeySecp256k1{},
secp256k1.PubKeyAminoRoute, nil)
secp256k1.PubKeyAminoName, nil)
cdc.RegisterConcrete(multisig.PubKeyMultisigThreshold{},
multisig.PubKeyMultisigThresholdAminoRoute, nil)
cdc.RegisterInterface((*crypto.PrivKey)(nil), nil)
cdc.RegisterConcrete(ed25519.PrivKeyEd25519{},
ed25519.PrivKeyAminoRoute, nil)
ed25519.PrivKeyAminoName, nil)
cdc.RegisterConcrete(secp256k1.PrivKeySecp256k1{},
secp256k1.PrivKeyAminoRoute, nil)
secp256k1.PrivKeyAminoName, nil)
}
func PrivKeyFromBytes(privKeyBytes []byte) (privKey crypto.PrivKey, err error) {

View File

@@ -25,9 +25,8 @@ func checkAminoBinary(t *testing.T, src, dst interface{}, size int) {
assert.Equal(t, byterSrc.Bytes(), bz, "Amino binary vs Bytes() mismatch")
}
// Make sure we have the expected length.
if size != -1 {
assert.Equal(t, size, len(bz), "Amino binary size mismatch")
}
// Unmarshal.
err = cdc.UnmarshalBinaryBare(bz, dst)
require.Nil(t, err, "%+v", err)
@@ -128,18 +127,18 @@ func TestPubKeyInvalidDataProperReturnsEmpty(t *testing.T) {
require.Nil(t, pk)
}
func TestPubkeyAminoRoute(t *testing.T) {
func TestPubkeyAminoName(t *testing.T) {
tests := []struct {
key crypto.PubKey
want string
found bool
}{
{ed25519.PubKeyEd25519{}, ed25519.PubKeyAminoRoute, true},
{secp256k1.PubKeySecp256k1{}, secp256k1.PubKeyAminoRoute, true},
{&multisig.PubKeyMultisigThreshold{}, multisig.PubKeyMultisigThresholdAminoRoute, true},
{ed25519.PubKeyEd25519{}, ed25519.PubKeyAminoName, true},
{secp256k1.PubKeySecp256k1{}, secp256k1.PubKeyAminoName, true},
{multisig.PubKeyMultisigThreshold{}, multisig.PubKeyMultisigThresholdAminoRoute, true},
}
for i, tc := range tests {
got, found := PubkeyAminoRoute(cdc, tc.key)
got, found := PubkeyAminoName(cdc, tc.key)
require.Equal(t, tc.found, found, "not equal on tc %d", i)
if tc.found {
require.Equal(t, tc.want, got, "not equal on tc %d", i)

View File

@@ -3,7 +3,7 @@ package crypto
import (
"crypto/sha256"
"golang.org/x/crypto/ripemd160" // forked to github.com/tendermint/crypto
"golang.org/x/crypto/ripemd160"
)
func Sha256(bytes []byte) []byte {

crypto/merkle/hash.go (new file, 21 lines)
View File

@@ -0,0 +1,21 @@
package merkle
import (
"github.com/tendermint/tendermint/crypto/tmhash"
)
// TODO: make these have a large predefined capacity
var (
leafPrefix = []byte{0}
innerPrefix = []byte{1}
)
// returns tmhash(0x00 || leaf)
func leafHash(leaf []byte) []byte {
return tmhash.Sum(append(leafPrefix, leaf...))
}
// returns tmhash(0x01 || left || right)
func innerHash(left []byte, right []byte) []byte {
return tmhash.Sum(append(innerPrefix, append(left, right...)...))
}

View File

@@ -98,7 +98,7 @@ func (prt *ProofRuntime) Decode(pop ProofOp) (ProofOperator, error) {
}
func (prt *ProofRuntime) DecodeProof(proof *Proof) (ProofOperators, error) {
var poz ProofOperators
poz := make(ProofOperators, 0, len(proof.Ops))
for _, pop := range proof.Ops {
operator, err := prt.Decode(pop)
if err != nil {

View File

@@ -71,11 +71,11 @@ func (op SimpleValueOp) Run(args [][]byte) ([][]byte, error) {
hasher.Write(value) // does not error
vhash := hasher.Sum(nil)
bz := new(bytes.Buffer)
// Wrap <op.Key, vhash> to hash the KVPair.
hasher = tmhash.New()
encodeByteSlice(hasher, []byte(op.key)) // does not error
encodeByteSlice(hasher, []byte(vhash)) // does not error
kvhash := hasher.Sum(nil)
encodeByteSlice(bz, []byte(op.key)) // does not error
encodeByteSlice(bz, []byte(vhash)) // does not error
kvhash := leafHash(bz.Bytes())
if !bytes.Equal(kvhash, op.Proof.LeafHash) {
return nil, cmn.NewError("leaf hash mismatch: want %X got %X", op.Proof.LeafHash, kvhash)

View File

@@ -4,7 +4,7 @@ import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/tendermint/go-amino"
amino "github.com/tendermint/go-amino"
cmn "github.com/tendermint/tendermint/libs/common"
)
@@ -26,6 +26,7 @@ func NewDominoOp(key, input, output string) DominoOp {
}
}
//nolint:unused
func DominoOpDecoder(pop ProofOp) (ProofOperator, error) {
if pop.Type != ProofOpDomino {
panic("unexpected proof op type")

View File

@@ -0,0 +1,97 @@
package merkle
// Copyright 2016 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// These tests were taken from https://github.com/google/trillian/blob/master/merkle/rfc6962/rfc6962_test.go,
// and consequently fall under the above license.
import (
"bytes"
"encoding/hex"
"testing"
"github.com/tendermint/tendermint/crypto/tmhash"
)
func TestRFC6962Hasher(t *testing.T) {
_, leafHashTrail := trailsFromByteSlices([][]byte{[]byte("L123456")})
leafHash := leafHashTrail.Hash
_, leafHashTrail = trailsFromByteSlices([][]byte{{}})
emptyLeafHash := leafHashTrail.Hash
for _, tc := range []struct {
desc string
got []byte
want string
}{
// Since creating a merkle tree of no leaves is unsupported here, we skip
// the corresponding trillian test vector.
// Check that the empty hash is not the same as the hash of an empty leaf.
// echo -n 00 | xxd -r -p | sha256sum
{
desc: "RFC6962 Empty Leaf",
want: "6e340b9cffb37a989ca544e6bb780a2c78901d3fb33738768511a30617afa01d"[:tmhash.Size*2],
got: emptyLeafHash,
},
// echo -n 004C313233343536 | xxd -r -p | sha256sum
{
desc: "RFC6962 Leaf",
want: "395aa064aa4c29f7010acfe3f25db9485bbd4b91897b6ad7ad547639252b4d56"[:tmhash.Size*2],
got: leafHash,
},
// echo -n 014E3132334E343536 | xxd -r -p | sha256sum
{
desc: "RFC6962 Node",
want: "aa217fe888e47007fa15edab33c2b492a722cb106c64667fc2b044444de66bbb"[:tmhash.Size*2],
got: innerHash([]byte("N123"), []byte("N456")),
},
} {
t.Run(tc.desc, func(t *testing.T) {
wantBytes, err := hex.DecodeString(tc.want)
if err != nil {
t.Fatalf("hex.DecodeString(%x): %v", tc.want, err)
}
if got, want := tc.got, wantBytes; !bytes.Equal(got, want) {
t.Errorf("got %x, want %x", got, want)
}
})
}
}
func TestRFC6962HasherCollisions(t *testing.T) {
// Check that different leaves have different hashes.
leaf1, leaf2 := []byte("Hello"), []byte("World")
_, leafHashTrail := trailsFromByteSlices([][]byte{leaf1})
hash1 := leafHashTrail.Hash
_, leafHashTrail = trailsFromByteSlices([][]byte{leaf2})
hash2 := leafHashTrail.Hash
if bytes.Equal(hash1, hash2) {
t.Errorf("Leaf hashes should differ, but both are %x", hash1)
}
// Compute an intermediate subtree hash.
_, subHash1Trail := trailsFromByteSlices([][]byte{hash1, hash2})
subHash1 := subHash1Trail.Hash
// Check that this is not the same as a leaf hash of their concatenation.
preimage := append(hash1, hash2...)
_, forgedHashTrail := trailsFromByteSlices([][]byte{preimage})
forgedHash := forgedHashTrail.Hash
if bytes.Equal(subHash1, forgedHash) {
t.Errorf("Hasher is not second-preimage resistant")
}
// Swap the order of nodes and check that the hash is different.
_, subHash2Trail := trailsFromByteSlices([][]byte{hash2, hash1})
subHash2 := subHash2Trail.Hash
if bytes.Equal(subHash1, subHash2) {
t.Errorf("Subtree hash does not depend on the order of leaves")
}
}

View File

@@ -13,14 +13,14 @@ func TestSimpleMap(t *testing.T) {
values []string // each string gets converted to []byte in test
want string
}{
{[]string{"key1"}, []string{"value1"}, "321d150de16dceb51c72981b432b115045383259b1a550adf8dc80f927508967"},
{[]string{"key1"}, []string{"value2"}, "2a9e4baf321eac99f6eecc3406603c14bc5e85bb7b80483cbfc75b3382d24a2f"},
{[]string{"key1"}, []string{"value1"}, "a44d3cc7daba1a4600b00a2434b30f8b970652169810d6dfa9fb1793a2189324"},
{[]string{"key1"}, []string{"value2"}, "0638e99b3445caec9d95c05e1a3fc1487b4ddec6a952ff337080360b0dcc078c"},
// swap order with 2 keys
{[]string{"key1", "key2"}, []string{"value1", "value2"}, "c4d8913ab543ba26aa970646d4c99a150fd641298e3367cf68ca45fb45a49881"},
{[]string{"key2", "key1"}, []string{"value2", "value1"}, "c4d8913ab543ba26aa970646d4c99a150fd641298e3367cf68ca45fb45a49881"},
{[]string{"key1", "key2"}, []string{"value1", "value2"}, "8fd19b19e7bb3f2b3ee0574027d8a5a4cec370464ea2db2fbfa5c7d35bb0cff3"},
{[]string{"key2", "key1"}, []string{"value2", "value1"}, "8fd19b19e7bb3f2b3ee0574027d8a5a4cec370464ea2db2fbfa5c7d35bb0cff3"},
// swap order with 3 keys
{[]string{"key1", "key2", "key3"}, []string{"value1", "value2", "value3"}, "b23cef00eda5af4548a213a43793f2752d8d9013b3f2b64bc0523a4791196268"},
{[]string{"key1", "key3", "key2"}, []string{"value1", "value3", "value2"}, "b23cef00eda5af4548a213a43793f2752d8d9013b3f2b64bc0523a4791196268"},
{[]string{"key1", "key2", "key3"}, []string{"value1", "value2", "value3"}, "1dd674ec6782a0d586a903c9c63326a41cbe56b3bba33ed6ff5b527af6efb3dc"},
{[]string{"key1", "key3", "key2"}, []string{"value1", "value3", "value2"}, "1dd674ec6782a0d586a903c9c63326a41cbe56b3bba33ed6ff5b527af6efb3dc"},
}
for i, tc := range tests {
db := newSimpleMap()

View File

@@ -5,7 +5,6 @@ import (
"errors"
"fmt"
"github.com/tendermint/tendermint/crypto/tmhash"
cmn "github.com/tendermint/tendermint/libs/common"
)
@@ -67,7 +66,8 @@ func SimpleProofsFromMap(m map[string][]byte) (rootHash []byte, proofs map[strin
// Verify that the SimpleProof proves the root hash.
// Check sp.Index/sp.Total manually if needed
func (sp *SimpleProof) Verify(rootHash []byte, leafHash []byte) error {
func (sp *SimpleProof) Verify(rootHash []byte, leaf []byte) error {
leafHash := leafHash(leaf)
if sp.Total < 0 {
return errors.New("Proof total must be positive")
}
@@ -128,19 +128,19 @@ func computeHashFromAunts(index int, total int, leafHash []byte, innerHashes [][
if len(innerHashes) == 0 {
return nil
}
numLeft := (total + 1) / 2
numLeft := getSplitPoint(total)
if index < numLeft {
leftHash := computeHashFromAunts(index, numLeft, leafHash, innerHashes[:len(innerHashes)-1])
if leftHash == nil {
return nil
}
return simpleHashFromTwoHashes(leftHash, innerHashes[len(innerHashes)-1])
return innerHash(leftHash, innerHashes[len(innerHashes)-1])
}
rightHash := computeHashFromAunts(index-numLeft, total-numLeft, leafHash, innerHashes[:len(innerHashes)-1])
if rightHash == nil {
return nil
}
return simpleHashFromTwoHashes(innerHashes[len(innerHashes)-1], rightHash)
return innerHash(innerHashes[len(innerHashes)-1], rightHash)
}
}
@@ -182,12 +182,13 @@ func trailsFromByteSlices(items [][]byte) (trails []*SimpleProofNode, root *Simp
case 0:
return nil, nil
case 1:
trail := &SimpleProofNode{tmhash.Sum(items[0]), nil, nil, nil}
trail := &SimpleProofNode{leafHash(items[0]), nil, nil, nil}
return []*SimpleProofNode{trail}, trail
default:
lefts, leftRoot := trailsFromByteSlices(items[:(len(items)+1)/2])
rights, rightRoot := trailsFromByteSlices(items[(len(items)+1)/2:])
rootHash := simpleHashFromTwoHashes(leftRoot.Hash, rightRoot.Hash)
k := getSplitPoint(len(items))
lefts, leftRoot := trailsFromByteSlices(items[:k])
rights, rightRoot := trailsFromByteSlices(items[k:])
rootHash := innerHash(leftRoot.Hash, rightRoot.Hash)
root := &SimpleProofNode{rootHash, nil, nil, nil}
leftRoot.Parent = root
leftRoot.Right = rightRoot

View File

@@ -1,23 +1,9 @@
package merkle
import (
"github.com/tendermint/tendermint/crypto/tmhash"
"math/bits"
)
// simpleHashFromTwoHashes is the basic operation of the Merkle tree: Hash(left | right).
func simpleHashFromTwoHashes(left, right []byte) []byte {
var hasher = tmhash.New()
err := encodeByteSlice(hasher, left)
if err != nil {
panic(err)
}
err = encodeByteSlice(hasher, right)
if err != nil {
panic(err)
}
return hasher.Sum(nil)
}
// SimpleHashFromByteSlices computes a Merkle tree where the leaves are the byte slice,
// in the provided order.
func SimpleHashFromByteSlices(items [][]byte) []byte {
@@ -25,11 +11,12 @@ func SimpleHashFromByteSlices(items [][]byte) []byte {
case 0:
return nil
case 1:
return tmhash.Sum(items[0])
return leafHash(items[0])
default:
left := SimpleHashFromByteSlices(items[:(len(items)+1)/2])
right := SimpleHashFromByteSlices(items[(len(items)+1)/2:])
return simpleHashFromTwoHashes(left, right)
k := getSplitPoint(len(items))
left := SimpleHashFromByteSlices(items[:k])
right := SimpleHashFromByteSlices(items[k:])
return innerHash(left, right)
}
}
@@ -44,3 +31,17 @@ func SimpleHashFromMap(m map[string][]byte) []byte {
}
return sm.Hash()
}
// getSplitPoint returns the largest power of 2 less than length
func getSplitPoint(length int) int {
if length < 1 {
panic("Trying to split a tree with size < 1")
}
uLength := uint(length)
bitlen := bits.Len(uLength)
k := 1 << uint(bitlen-1)
if k == length {
k >>= 1
}
return k
}

View File

@@ -34,7 +34,6 @@ func TestSimpleProof(t *testing.T) {
// For each item, check the trail.
for i, item := range items {
itemHash := tmhash.Sum(item)
proof := proofs[i]
// Check total/index
@@ -43,30 +42,53 @@ func TestSimpleProof(t *testing.T) {
require.Equal(t, proof.Total, total, "Unmatched totals: %d vs %d", proof.Total, total)
// Verify success
err := proof.Verify(rootHash, itemHash)
require.NoError(t, err, "Verificatior failed: %v.", err)
err := proof.Verify(rootHash, item)
require.NoError(t, err, "Verification failed: %v.", err)
// Trail too long should make it fail
origAunts := proof.Aunts
proof.Aunts = append(proof.Aunts, cmn.RandBytes(32))
err = proof.Verify(rootHash, itemHash)
err = proof.Verify(rootHash, item)
require.Error(t, err, "Expected verification to fail for wrong trail length")
proof.Aunts = origAunts
// Trail too short should make it fail
proof.Aunts = proof.Aunts[0 : len(proof.Aunts)-1]
err = proof.Verify(rootHash, itemHash)
err = proof.Verify(rootHash, item)
require.Error(t, err, "Expected verification to fail for wrong trail length")
proof.Aunts = origAunts
// Mutating the itemHash should make it fail.
err = proof.Verify(rootHash, MutateByteSlice(itemHash))
err = proof.Verify(rootHash, MutateByteSlice(item))
require.Error(t, err, "Expected verification to fail for mutated leaf hash")
// Mutating the rootHash should make it fail.
err = proof.Verify(MutateByteSlice(rootHash), itemHash)
err = proof.Verify(MutateByteSlice(rootHash), item)
require.Error(t, err, "Expected verification to fail for mutated root hash")
}
}
func Test_getSplitPoint(t *testing.T) {
tests := []struct {
length int
want int
}{
{1, 0},
{2, 1},
{3, 2},
{4, 2},
{5, 4},
{10, 8},
{20, 16},
{100, 64},
{255, 128},
{256, 128},
{257, 256},
}
for _, tt := range tests {
got := getSplitPoint(tt.length)
require.Equal(t, tt.want, got, "getSplitPoint(%d) = %v, want %v", tt.length, got, tt.want)
}
}

View File

@@ -1,7 +1,7 @@
package merkle
import (
"github.com/tendermint/go-amino"
amino "github.com/tendermint/go-amino"
)
var cdc *amino.Codec

View File

@@ -2,7 +2,6 @@ package multisig
import (
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/tmhash"
)
// PubKeyMultisigThreshold implements a K of N threshold multisig.
@@ -11,7 +10,7 @@ type PubKeyMultisigThreshold struct {
PubKeys []crypto.PubKey `json:"pubkeys"`
}
var _ crypto.PubKey = &PubKeyMultisigThreshold{}
var _ crypto.PubKey = PubKeyMultisigThreshold{}
// NewPubKeyMultisigThreshold returns a new PubKeyMultisigThreshold.
// Panics if len(pubkeys) < k or 0 >= k.
@@ -22,7 +21,7 @@ func NewPubKeyMultisigThreshold(k int, pubkeys []crypto.PubKey) crypto.PubKey {
if len(pubkeys) < k {
panic("threshold k of n multisignature: len(pubkeys) < k")
}
return &PubKeyMultisigThreshold{uint(k), pubkeys}
return PubKeyMultisigThreshold{uint(k), pubkeys}
}
// VerifyBytes expects sig to be an amino encoded version of a MultiSignature.
@@ -31,8 +30,8 @@ func NewPubKeyMultisigThreshold(k int, pubkeys []crypto.PubKey) crypto.PubKey {
// and all signatures are valid. (Not just k of the signatures)
// The multisig uses a bitarray, so multiple signatures for the same key is not
// a concern.
func (pk *PubKeyMultisigThreshold) VerifyBytes(msg []byte, marshalledSig []byte) bool {
var sig *Multisignature
func (pk PubKeyMultisigThreshold) VerifyBytes(msg []byte, marshalledSig []byte) bool {
var sig Multisignature
err := cdc.UnmarshalBinaryBare(marshalledSig, &sig)
if err != nil {
return false
@@ -64,19 +63,19 @@ func (pk *PubKeyMultisigThreshold) VerifyBytes(msg []byte, marshalledSig []byte)
}
// Bytes returns the amino encoded version of the PubKeyMultisigThreshold
func (pk *PubKeyMultisigThreshold) Bytes() []byte {
func (pk PubKeyMultisigThreshold) Bytes() []byte {
return cdc.MustMarshalBinaryBare(pk)
}
// Address returns tmhash(PubKeyMultisigThreshold.Bytes())
func (pk *PubKeyMultisigThreshold) Address() crypto.Address {
return crypto.Address(tmhash.Sum(pk.Bytes()))
func (pk PubKeyMultisigThreshold) Address() crypto.Address {
return crypto.AddressHash(pk.Bytes())
}
// Equals returns true iff pk and other both have the same number of keys, and
// all constituent keys are the same, and in the same order.
func (pk *PubKeyMultisigThreshold) Equals(other crypto.PubKey) bool {
otherKey, sameType := other.(*PubKeyMultisigThreshold)
func (pk PubKeyMultisigThreshold) Equals(other crypto.PubKey) bool {
otherKey, sameType := other.(PubKeyMultisigThreshold)
if !sameType {
return false
}

View File

@@ -82,7 +82,7 @@ func TestMultiSigPubKeyEquality(t *testing.T) {
msg := []byte{1, 2, 3, 4}
pubkeys, _ := generatePubKeysAndSignatures(5, msg)
multisigKey := NewPubKeyMultisigThreshold(2, pubkeys)
var unmarshalledMultisig *PubKeyMultisigThreshold
var unmarshalledMultisig PubKeyMultisigThreshold
cdc.MustUnmarshalBinaryBare(multisigKey.Bytes(), &unmarshalledMultisig)
require.True(t, multisigKey.Equals(unmarshalledMultisig))
@@ -95,6 +95,29 @@ func TestMultiSigPubKeyEquality(t *testing.T) {
require.False(t, multisigKey.Equals(multisigKey2))
}
func TestAddress(t *testing.T) {
msg := []byte{1, 2, 3, 4}
pubkeys, _ := generatePubKeysAndSignatures(5, msg)
multisigKey := NewPubKeyMultisigThreshold(2, pubkeys)
require.Len(t, multisigKey.Address().Bytes(), 20)
}
func TestPubKeyMultisigThresholdAminoToIface(t *testing.T) {
msg := []byte{1, 2, 3, 4}
pubkeys, _ := generatePubKeysAndSignatures(5, msg)
multisigKey := NewPubKeyMultisigThreshold(2, pubkeys)
ab, err := cdc.MarshalBinaryLengthPrefixed(multisigKey)
require.NoError(t, err)
// like other crypto.Pubkey implementations (e.g. ed25519.PubKeyEd25519),
// PubKeyMultisigThreshold should be deserializable into a crypto.PubKey:
var pubKey crypto.PubKey
err = cdc.UnmarshalBinaryLengthPrefixed(ab, &pubKey)
require.NoError(t, err)
require.Equal(t, multisigKey, pubKey)
}
func generatePubKeysAndSignatures(n int, msg []byte) (pubkeys []crypto.PubKey, signatures [][]byte) {
pubkeys = make([]crypto.PubKey, n)
signatures = make([][]byte, n)

View File

@@ -20,7 +20,7 @@ func init() {
cdc.RegisterConcrete(PubKeyMultisigThreshold{},
PubKeyMultisigThresholdAminoRoute, nil)
cdc.RegisterConcrete(ed25519.PubKeyEd25519{},
ed25519.PubKeyAminoRoute, nil)
ed25519.PubKeyAminoName, nil)
cdc.RegisterConcrete(secp256k1.PubKeySecp256k1{},
secp256k1.PubKeyAminoRoute, nil)
secp256k1.PubKeyAminoName, nil)
}

View File

@@ -1,42 +1,11 @@
package crypto
import (
"crypto/cipher"
crand "crypto/rand"
"crypto/sha256"
"encoding/hex"
"io"
"sync"
"golang.org/x/crypto/chacha20poly1305"
)
// NOTE: This is ignored for now until we have time
// to properly review the MixEntropy function - https://github.com/tendermint/tendermint/issues/2099.
//
// The randomness here is derived from xoring a chacha20 keystream with
// output from crypto/rand's OS Entropy Reader. (Due to fears of the OS'
// entropy being backdoored)
//
// For forward secrecy of produced randomness, the internal chacha key is hashed
// and thereby rotated after each call.
var gRandInfo *randInfo
func init() {
gRandInfo = &randInfo{}
// TODO: uncomment after reviewing MixEntropy -
// https://github.com/tendermint/tendermint/issues/2099
// gRandInfo.MixEntropy(randBytes(32)) // Init
}
// WARNING: This function needs review - https://github.com/tendermint/tendermint/issues/2099.
// Mix additional bytes of randomness, e.g. from hardware, user-input, etc.
// It is OK to call it multiple times. It does not diminish security.
func MixEntropy(seedBytes []byte) {
gRandInfo.MixEntropy(seedBytes)
}
// This only uses the OS's randomness
func randBytes(numBytes int) []byte {
b := make([]byte, numBytes)
@@ -52,19 +21,6 @@ func CRandBytes(numBytes int) []byte {
return randBytes(numBytes)
}
/* TODO: uncomment after reviewing MixEntropy - https://github.com/tendermint/tendermint/issues/2099
// This uses the OS and the Seed(s).
func CRandBytes(numBytes int) []byte {
return randBytes(numBytes)
b := make([]byte, numBytes)
_, err := gRandInfo.Read(b)
if err != nil {
panic(err)
}
return b
}
*/
// CRandHex returns a hex encoded string that's floor(numDigits/2) * 2 long.
//
// Note: CRandHex(24) gives 96 bits of randomness that
@@ -77,63 +33,3 @@ func CRandHex(numDigits int) string {
func CReader() io.Reader {
return crand.Reader
}
/* TODO: uncomment after reviewing MixEntropy - https://github.com/tendermint/tendermint/issues/2099
// Returns a crand.Reader mixed with user-supplied entropy
func CReader() io.Reader {
return gRandInfo
}
*/
//--------------------------------------------------------------------------------
type randInfo struct {
mtx sync.Mutex
seedBytes [chacha20poly1305.KeySize]byte
chacha cipher.AEAD
reader io.Reader
}
// You can call this as many times as you'd like.
// XXX/TODO: review - https://github.com/tendermint/tendermint/issues/2099
func (ri *randInfo) MixEntropy(seedBytes []byte) {
ri.mtx.Lock()
defer ri.mtx.Unlock()
// Make new ri.seedBytes using passed seedBytes and current ri.seedBytes:
// ri.seedBytes = sha256( seedBytes || ri.seedBytes )
h := sha256.New()
h.Write(seedBytes)
h.Write(ri.seedBytes[:])
hashBytes := h.Sum(nil)
copy(ri.seedBytes[:], hashBytes)
chacha, err := chacha20poly1305.New(ri.seedBytes[:])
if err != nil {
panic("Initializing chacha20 failed")
}
ri.chacha = chacha
// Create new reader
ri.reader = &cipher.StreamReader{S: ri, R: crand.Reader}
}
func (ri *randInfo) XORKeyStream(dst, src []byte) {
// nonce being 0 is safe due to never re-using a key.
emptyNonce := make([]byte, 12)
tmpDst := ri.chacha.Seal([]byte{}, emptyNonce, src, []byte{0})
// this removes the poly1305 tag as well, since chacha is a stream cipher
// and we truncate at input length.
copy(dst, tmpDst[:len(src)])
// hash seedBytes for forward secrecy, and initialize new chacha instance
newSeed := sha256.Sum256(ri.seedBytes[:])
chacha, err := chacha20poly1305.New(newSeed[:])
if err != nil {
panic("Initializing chacha20 failed")
}
ri.chacha = chacha
}
func (ri *randInfo) Read(b []byte) (n int, err error) {
ri.mtx.Lock()
n, err = ri.reader.Read(b)
ri.mtx.Unlock()
return
}

View File

@@ -7,17 +7,19 @@ import (
"fmt"
"io"
secp256k1 "github.com/tendermint/btcd/btcec"
"golang.org/x/crypto/ripemd160"
secp256k1 "github.com/btcsuite/btcd/btcec"
amino "github.com/tendermint/go-amino"
"golang.org/x/crypto/ripemd160" // forked to github.com/tendermint/crypto
"github.com/tendermint/tendermint/crypto"
)
//-------------------------------------
const (
PrivKeyAminoRoute = "tendermint/PrivKeySecp256k1"
PubKeyAminoRoute = "tendermint/PubKeySecp256k1"
PrivKeyAminoName = "tendermint/PrivKeySecp256k1"
PubKeyAminoName = "tendermint/PubKeySecp256k1"
)
var cdc = amino.NewCodec()
@@ -25,11 +27,11 @@ var cdc = amino.NewCodec()
func init() {
cdc.RegisterInterface((*crypto.PubKey)(nil), nil)
cdc.RegisterConcrete(PubKeySecp256k1{},
PubKeyAminoRoute, nil)
PubKeyAminoName, nil)
cdc.RegisterInterface((*crypto.PrivKey)(nil), nil)
cdc.RegisterConcrete(PrivKeySecp256k1{},
PrivKeyAminoRoute, nil)
PrivKeyAminoName, nil)
}
//-------------------------------------
@@ -44,16 +46,6 @@ func (privKey PrivKeySecp256k1) Bytes() []byte {
return cdc.MustMarshalBinaryBare(privKey)
}
// Sign creates an ECDSA signature on curve Secp256k1, using SHA256 on the msg.
func (privKey PrivKeySecp256k1) Sign(msg []byte) ([]byte, error) {
priv, _ := secp256k1.PrivKeyFromBytes(secp256k1.S256(), privKey[:])
sig, err := priv.Sign(crypto.Sha256(msg))
if err != nil {
return nil, err
}
return sig.Serialize(), nil
}
// PubKey performs the point-scalar multiplication from the privKey on the
// generator point to get the pubkey.
func (privKey PrivKeySecp256k1) PubKey() crypto.PubKey {
@@ -137,20 +129,6 @@ func (pubKey PubKeySecp256k1) Bytes() []byte {
return bz
}
func (pubKey PubKeySecp256k1) VerifyBytes(msg []byte, sig []byte) bool {
pub, err := secp256k1.ParsePubKey(pubKey[:], secp256k1.S256())
if err != nil {
return false
}
parsedSig, err := secp256k1.ParseSignature(sig[:], secp256k1.S256())
if err != nil {
return false
}
// Underlying library ensures that this signature is in canonical form, to
// prevent Secp256k1 malleability from altering the sign of the s term.
return parsedSig.Verify(crypto.Sha256(msg), pub)
}
func (pubKey PubKeySecp256k1) String() string {
return fmt.Sprintf("PubKeySecp256k1{%X}", pubKey[:])
}

View File

@@ -0,0 +1,24 @@
// +build libsecp256k1
package secp256k1
import (
"github.com/ethereum/go-ethereum/crypto/secp256k1"
"github.com/tendermint/tendermint/crypto"
)
// Sign creates an ECDSA signature on curve Secp256k1, using SHA256 on the msg.
func (privKey PrivKeySecp256k1) Sign(msg []byte) ([]byte, error) {
rsv, err := secp256k1.Sign(crypto.Sha256(msg), privKey[:])
if err != nil {
return nil, err
}
// we do not need v in r||s||v:
rs := rsv[:len(rsv)-1]
return rs, nil
}
func (pubKey PubKeySecp256k1) VerifyBytes(msg []byte, sig []byte) bool {
return secp256k1.VerifySignature(pubKey[:], crypto.Sha256(msg), sig)
}

View File

@@ -0,0 +1,70 @@
// +build !libsecp256k1
package secp256k1
import (
"math/big"
secp256k1 "github.com/btcsuite/btcd/btcec"
"github.com/tendermint/tendermint/crypto"
)
// used to reject malleable signatures
// see:
// - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/signature_nocgo.go#L90-L93
// - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/crypto.go#L39
var secp256k1halfN = new(big.Int).Rsh(secp256k1.S256().N, 1)
// Sign creates an ECDSA signature on curve Secp256k1, using SHA256 on the msg.
// The returned signature will be of the form R || S (in lower-S form).
func (privKey PrivKeySecp256k1) Sign(msg []byte) ([]byte, error) {
priv, _ := secp256k1.PrivKeyFromBytes(secp256k1.S256(), privKey[:])
sig, err := priv.Sign(crypto.Sha256(msg))
if err != nil {
return nil, err
}
sigBytes := serializeSig(sig)
return sigBytes, nil
}
// VerifyBytes verifies a signature of the form R || S.
// It rejects signatures which are not in lower-S form.
func (pubKey PubKeySecp256k1) VerifyBytes(msg []byte, sigStr []byte) bool {
if len(sigStr) != 64 {
return false
}
pub, err := secp256k1.ParsePubKey(pubKey[:], secp256k1.S256())
if err != nil {
return false
}
// parse the signature:
signature := signatureFromBytes(sigStr)
// Reject malleable signatures. libsecp256k1 does this check but btcec doesn't.
// see: https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/signature_nocgo.go#L90-L93
if signature.S.Cmp(secp256k1halfN) > 0 {
return false
}
return signature.Verify(crypto.Sha256(msg), pub)
}
// Read Signature struct from R || S. Caller needs to ensure
// that len(sigStr) == 64.
func signatureFromBytes(sigStr []byte) *secp256k1.Signature {
return &secp256k1.Signature{
new(big.Int).SetBytes(sigStr[:32]),
new(big.Int).SetBytes(sigStr[32:64]),
}
}
// Serialize signature to R || S.
// R, S are padded to 32 bytes respectively.
func serializeSig(sig *secp256k1.Signature) []byte {
rBytes := sig.R.Bytes()
sBytes := sig.S.Bytes()
sigBytes := make([]byte, 64)
// 0 pad the byte arrays from the left if they aren't big enough.
copy(sigBytes[32-len(rBytes):32], rBytes)
copy(sigBytes[64-len(sBytes):64], sBytes)
return sigBytes
}

View File

@@ -0,0 +1,39 @@
// +build !libsecp256k1
package secp256k1
import (
"testing"
secp256k1 "github.com/btcsuite/btcd/btcec"
"github.com/stretchr/testify/require"
)
// Ensure that signature verification works, and that
// non-canonical signatures fail.
// Note: run with CGO_ENABLED=0 or go test -tags !cgo.
func TestSignatureVerificationAndRejectUpperS(t *testing.T) {
msg := []byte("We have lingered long enough on the shores of the cosmic ocean.")
for i := 0; i < 500; i++ {
priv := GenPrivKey()
sigStr, err := priv.Sign(msg)
require.NoError(t, err)
sig := signatureFromBytes(sigStr)
require.False(t, sig.S.Cmp(secp256k1halfN) > 0)
pub := priv.PubKey()
require.True(t, pub.VerifyBytes(msg, sigStr))
// malleate:
sig.S.Sub(secp256k1.S256().CurveParams.N, sig.S)
require.True(t, sig.S.Cmp(secp256k1halfN) > 0)
malSigStr := serializeSig(sig)
require.False(t, pub.VerifyBytes(msg, malSigStr),
"VerifyBytes incorrect with malleated & invalid S. sig=%v, key=%v",
sig,
priv,
)
}
}

View File

@@ -11,7 +11,7 @@ import (
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/secp256k1"
underlyingSecp256k1 "github.com/tendermint/btcd/btcec"
underlyingSecp256k1 "github.com/btcsuite/btcd/btcec"
)
type keyData struct {

View File

@@ -8,7 +8,7 @@ import (
"errors"
"fmt"
"golang.org/x/crypto/chacha20poly1305" // forked to github.com/tendermint/crypto
"golang.org/x/crypto/chacha20poly1305"
)
// Implements crypto.AEAD

View File

@@ -4,7 +4,7 @@ import (
"errors"
"fmt"
"golang.org/x/crypto/nacl/secretbox" // forked to github.com/tendermint/crypto
"golang.org/x/crypto/nacl/secretbox"
"github.com/tendermint/tendermint/crypto"
cmn "github.com/tendermint/tendermint/libs/common"
@@ -17,7 +17,6 @@ const secretLen = 32
// secret must be 32 bytes long. Use something like Sha256(Bcrypt(passphrase))
// The ciphertext is (secretbox.Overhead + 24) bytes longer than the plaintext.
// NOTE: call crypto.MixEntropy() first.
func EncryptSymmetric(plaintext []byte, secret []byte) (ciphertext []byte) {
if len(secret) != secretLen {
cmn.PanicSanity(fmt.Sprintf("Secret must be 32 bytes long, got len %v", len(secret)))

View File

@@ -6,15 +6,13 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/crypto/bcrypt" // forked to github.com/tendermint/crypto
"golang.org/x/crypto/bcrypt"
"github.com/tendermint/tendermint/crypto"
)
func TestSimple(t *testing.T) {
crypto.MixEntropy([]byte("someentropy"))
plaintext := []byte("sometext")
secret := []byte("somesecretoflengththirtytwo===32")
ciphertext := EncryptSymmetric(plaintext, secret)
@@ -26,13 +24,9 @@ func TestSimple(t *testing.T) {
func TestSimpleWithKDF(t *testing.T) {
crypto.MixEntropy([]byte("someentropy"))
plaintext := []byte("sometext")
secretPass := []byte("somesecret")
salt := []byte("somesaltsomesalt") // len 16
// NOTE: we use a fork of x/crypto so we can inject our own randomness for salt
secret, err := bcrypt.GenerateFromPassword(salt, secretPass, 12)
secret, err := bcrypt.GenerateFromPassword(secretPass, 12)
if err != nil {
t.Error(err)
}

View File

@@ -8,8 +8,21 @@ module.exports = {
lineNumbers: true
},
themeConfig: {
lastUpdated: "Last Updated",
nav: [{ text: "Back to Tendermint", link: "https://tendermint.com" }],
repo: "tendermint/tendermint",
editLinks: true,
docsDir: "docs",
docsBranch: "develop",
editLinkText: 'Edit this page on Github',
lastUpdated: true,
algolia: {
apiKey: '59f0e2deb984aa9cdf2b3a5fd24ac501',
indexName: 'tendermint',
debug: false
},
nav: [
{ text: "Back to Tendermint", link: "https://tendermint.com" },
{ text: "RPC", link: "https://tendermint.com/rpc/" }
],
sidebar: [
{
title: "Introduction",
@@ -21,6 +34,20 @@ module.exports = {
"/introduction/what-is-tendermint"
]
},
{
title: "Apps",
collapsable: false,
children: [
"/app-dev/getting-started",
"/app-dev/abci-cli",
"/app-dev/app-architecture",
"/app-dev/app-development",
"/app-dev/subscribing-to-events-via-websocket",
"/app-dev/indexing-transactions",
"/app-dev/abci-spec",
"/app-dev/ecosystem"
]
},
{
title: "Tendermint Core",
collapsable: false,
@@ -39,15 +66,6 @@ module.exports = {
"/tendermint-core/validators"
]
},
{
title: "Tools",
collapsable: false,
children: [
"/tools/",
"/tools/benchmarking",
"/tools/monitoring"
]
},
{
title: "Networks",
collapsable: false,
@@ -58,17 +76,13 @@ module.exports = {
]
},
{
title: "Apps",
title: "Tools",
collapsable: false,
children: [
"/app-dev/getting-started",
"/app-dev/abci-cli",
"/app-dev/app-architecture",
"/app-dev/app-development",
"/app-dev/subscribing-to-events-via-websocket",
"/app-dev/indexing-transactions",
"/app-dev/abci-spec",
"/app-dev/ecosystem"
"/tools/",
"/tools/benchmarking",
"/tools/monitoring",
"/tools/remote-signer-validation"
]
},
{

View File

@@ -12,10 +12,10 @@ respectively.
## How It Works
There is a Jenkins job listening for changes in the `/docs` directory, on both
There is a CircleCI job listening for changes in the `/docs` directory, on both
the `master` and `develop` branches. Any updates to files in this directory
on those branches will automatically trigger a website deployment. Under the hood,
a private website repository has make targets consumed by a standard Jenkins task.
the private website repository has a `make build-docs` target consumed by a CircleCI job in that repo.
## README
@@ -93,6 +93,10 @@ python -m SimpleHTTPServer 8080
then navigate to localhost:8080 in your browser.
## Search
We are using [Algolia](https://www.algolia.com) to power full-text search. This uses a public API search-only key in the `config.js` as well as a [tendermint.json](https://github.com/algolia/docsearch-configs/blob/master/configs/tendermint.json) configuration file that we can update with PRs.
## Consistency
Because the build processes are identical (as is the information contained herein), this file should be kept in sync as

View File

@@ -5,7 +5,7 @@ Tendermint blockchain application.
The following diagram provides a superb example:
<https://drive.google.com/open?id=1yR2XpRi9YCY9H9uMfcw8-RMJpvDyvjz9>
![](../imgs/cosmos-tendermint-stack-4k.jpg)
The end-user application here is the Cosmos Voyager, at the bottom left.
Voyager communicates with a REST API exposed by a local Light-Client
@@ -45,6 +45,6 @@ Tendermint.
See the following for more extensive documentation:
- [Interchain Standard for the Light-Client REST API](https://github.com/cosmos/cosmos-sdk/pull/1028)
- [Tendermint RPC Docs](https://tendermint.github.io/slate/)
- [Tendermint RPC Docs](https://tendermint.com/rpc/)
- [Tendermint in Production](../tendermint-core/running-in-production.md)
- [ABCI spec](./abci-spec.md)

View File

@@ -63,6 +63,13 @@
"author": "Zach Balder",
"description": "Public service reporting and tracking"
},
{
"name": "ParadigmCore",
"url": "https://github.com/ParadigmFoundation/ParadigmCore",
"language": "TypeScript",
"author": "Paradigm Labs",
"description": "Reference implementation of the Paradigm Protocol, and OrderStream network client."
},
{
"name": "Passchain",
"url": "https://github.com/trusch/passchain",
@@ -181,5 +188,13 @@
"language": "Javascript",
"author": "Dennis McKinnon"
}
],
"aminoLibraries": [
{
"name": "JS-Amino",
"url": "https://github.com/TanNgocDo/Js-Amino",
"language": "Javascript",
"author": "TanNgocDo"
}
]
}

View File

@@ -1,11 +1,9 @@
# Ecosystem
The growing list of applications built using various pieces of the
Tendermint stack can be found at:
Tendermint stack can be found at the [ecosystem page](https://tendermint.com/ecosystem).
- https://tendermint.com/ecosystem
We thank the community for their contributions thus far and welcome the
We thank the community for their contributions and welcome the
addition of new projects. A pull request can be submitted to [this
file](https://github.com/tendermint/tendermint/blob/master/docs/app-dev/ecosystem.json)
to include your project.

View File

@@ -78,7 +78,7 @@ endpoint:
curl "localhost:26657/tx_search?query=\"account.name='igor'\"&prove=true"
```
Check out [API docs](https://tendermint.github.io/slate/?shell#txsearch)
Check out [API docs](https://tendermint.com/rpc/#txsearch)
for more information on query syntax and other options.
## Subscribing to transactions
@@ -97,5 +97,5 @@ by providing a query to `/subscribe` RPC endpoint.
}
```
Check out [API docs](https://tendermint.github.io/slate/#subscribe) for
Check out [API docs](https://tendermint.com/rpc/#subscribe) for
more information on query syntax and other options.

View File

@@ -7,6 +7,7 @@
28-08-2018: Third version after Ethan's comments
30-08-2018: AminoOverheadForBlock => MaxAminoOverheadForBlock
31-08-2018: Bounding evidence and chain ID
13-01-2019: Add section on MaxBytes vs MaxDataBytes
## Context
@@ -20,6 +21,32 @@ We should just remove MaxTxs all together and stick with MaxBytes, and have a
But we can't just reap BlockSize.MaxBytes, since MaxBytes is for the entire block,
not for the txs inside the block. There's extra amino overhead + the actual
headers on top of the actual transactions + evidence + last commit.
We could also consider using a MaxDataBytes instead of or in addition to MaxBytes.
## MaxBytes vs MaxDataBytes
The [PR #3045](https://github.com/tendermint/tendermint/pull/3045) suggested
additional clarity/justification was necessary here, with respect to the use
of MaxDataBytes in addition to, or instead of, MaxBytes.
MaxBytes provides a clear limit on the total size of a block that requires no
additional calculation if you want to use it to bound resource usage, and there
have been considerable discussions about optimizing tendermint around 1MB blocks.
Regardless, we need some maximum on the size of a block so we can avoid
unmarshalling blocks that are too big during the consensus, and it seems more
straightforward to provide a single fixed number for this rather than a
computation of "MaxDataBytes + everything else you need to make room for
(signatures, evidence, header)". MaxBytes provides a simple bound so we can
always say "blocks are less than X MB".
Having both MaxBytes and MaxDataBytes feels like unnecessary complexity. It's
not particularly surprising for MaxBytes to imply the maximum size of the
entire block (not just txs); one just has to know that a block includes the header,
txs, evidence, and votes. For more fine-grained control over the txs included in the
block, there is MaxGas. In practice, MaxGas may be expected to do most of
the tx throttling, and the MaxBytes to just serve as an upper bound on the total
size. Applications can use MaxGas as a MaxDataBytes by just taking the gas for
every tx to be its size in bytes.
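For illustration, a minimal sketch (not part of this ADR) of how an application could make MaxGas behave like a MaxDataBytes limit, assuming the Go ABCI types of this era (`github.com/tendermint/tendermint/abci/types`); `SizeLimitedApp` is a hypothetical application:

```go
package app

import (
	abci "github.com/tendermint/tendermint/abci/types"
)

// SizeLimitedApp is a hypothetical application that reports each
// transaction's size in bytes as its gas cost, so the consensus-level
// MaxGas parameter effectively bounds the total bytes of txs per block.
type SizeLimitedApp struct {
	abci.BaseApplication
}

func (app *SizeLimitedApp) CheckTx(tx []byte) abci.ResponseCheckTx {
	return abci.ResponseCheckTx{Code: 0, GasWanted: int64(len(tx))}
}

func (app *SizeLimitedApp) DeliverTx(tx []byte) abci.ResponseDeliverTx {
	return abci.ResponseDeliverTx{Code: 0, GasUsed: int64(len(tx))}
}
```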
## Proposed solution
@@ -61,7 +88,7 @@ MaxXXX stayed the same.
## Status
Proposed.
Accepted.
## Consequences

View File

@@ -126,6 +126,312 @@ func TestConsensusXXX(t *testing.T) {
}
```
## Consensus Executor
## Consensus Core
```go
type Event interface{}
type EventNewHeight struct {
Height int64
ValidatorId int
}
type EventNewRound HeightAndRound
type EventProposal struct {
Height int64
Round int
Timestamp Time
BlockID BlockID
POLRound int
Sender int
}
type Majority23PrevotesBlock struct {
Height int64
Round int
BlockID BlockID
}
type Majority23PrecommitBlock struct {
Height int64
Round int
BlockID BlockID
}
type HeightAndRound struct {
Height int64
Round int
}
type Majority23PrevotesAny HeightAndRound
type Majority23PrecommitAny HeightAndRound
type TimeoutPropose HeightAndRound
type TimeoutPrevotes HeightAndRound
type TimeoutPrecommit HeightAndRound
type Message interface{}
type MessageProposal struct {
Height int64
Round int
BlockID BlockID
POLRound int
}
type VoteType int
const (
VoteTypeUnknown VoteType = iota
Prevote
Precommit
)
type MessageVote struct {
Height int64
Round int
BlockID BlockID
Type VoteType
}
type MessageDecision struct {
Height int64
Round int
BlockID BlockID
}
type TriggerTimeout struct {
Height int64
Round int
Duration Duration
}
type RoundStep int
const (
RoundStepUnknown RoundStep = iota
RoundStepPropose
RoundStepPrevote
RoundStepPrecommit
RoundStepCommit
)
type State struct {
Height int64
Round int
Step RoundStep
LockedValue BlockID
LockedRound int
ValidValue BlockID
ValidRound int
ValidatorId int
ValidatorSetSize int
}
func proposer(height int64, round int) int {}
func getValue() BlockID {}
func Consensus(event Event, state State) (State, Message, TriggerTimeout) {
msg = nil
timeout = nil
switch event := event.(type) {
case EventNewHeight:
if event.Height > state.Height {
state.Height = event.Height
state.Round = -1
state.Step = RoundStepPropose
state.LockedValue = nil
state.LockedRound = -1
state.ValidValue = nil
state.ValidRound = -1
state.ValidatorId = event.ValidatorId
}
return state, msg, timeout
case EventNewRound:
if event.Height == state.Height and event.Round > state.Round {
state.Round = event.Round
state.Step = RoundStepPropose
if proposer(state.Height, state.Round) == state.ValidatorId {
proposal = state.ValidValue
if proposal == nil {
proposal = getValue()
}
msg = MessageProposal { state.Height, state.Round, proposal, state.ValidRound }
}
timeout = TriggerTimeout { state.Height, state.Round, timeoutPropose(state.Round) }
}
return state, msg, timeout
case EventProposal:
if event.Height == state.Height and event.Round == state.Round and
event.Sender == proposer(state.Height, state.Round) and state.Step == RoundStepPropose {
if event.POLRound >= state.LockedRound or event.BlockID == state.LockedValue or state.LockedRound == -1 {
msg = MessageVote { state.Height, state.Round, event.BlockID, Prevote }
}
state.Step = RoundStepPrevote
}
return state, msg, timeout
case TimeoutPropose:
if event.Height == state.Height and event.Round == state.Round and state.Step == RoundStepPropose {
msg = MessageVote { state.Height, state.Round, nil, Prevote }
state.Step = RoundStepPrevote
}
return state, msg, timeout
case Majority23PrevotesBlock:
if event.Height == state.Height and event.Round == state.Round and state.Step >= RoundStepPrevote and event.Round > state.ValidRound {
state.ValidRound = event.Round
state.ValidValue = event.BlockID
if state.Step == RoundStepPrevote {
state.LockedRound = event.Round
state.LockedValue = event.BlockID
msg = MessageVote { state.Height, state.Round, event.BlockID, Precommit }
state.Step = RoundStepPrecommit
}
}
return state, msg, timeout
case Majority23PrevotesAny:
if event.Height == state.Height and event.Round == state.Round and state.Step == RoundStepPrevote {
timeout = TriggerTimeout { state.Height, state.Round, timeoutPrevote(state.Round) }
}
return state, msg, timeout
case TimeoutPrevote:
if event.Height == state.Height and event.Round == state.Round and state.Step == RoundStepPrevote {
msg = MessageVote { state.Height, state.Round, nil, Precommit }
state.Step = RoundStepPrecommit
}
return state, msg, timeout
case Majority23PrecommitBlock:
if event.Height == state.Height {
state.Step = RoundStepCommit
state.LockedValue = event.BlockID
}
return state, msg, timeout
case Majority23PrecommitAny:
if event.Height == state.Height and event.Round == state.Round {
timeout = TriggerTimeout { state.Height, state.Round, timeoutPrecommit(state.Round) }
}
return state, msg, timeout
case TimeoutPrecommit:
if event.Height == state.Height and event.Round == state.Round {
state.Round = state.Round + 1
}
return state, msg, timeout
}
}
func ConsensusExecutor() {
proposal = nil
votes = HeightVoteSet { Height: 1 }
state = State {
Height: 1
Round: 0
Step: RoundStepPropose
LockedValue: nil
LockedRound: -1
ValidValue: nil
ValidRound: -1
}
event = EventNewHeight {1, id}
state, msg, timeout = Consensus(event, state)
event = EventNewRound {state.Height, 0}
state, msg, timeout = Consensus(event, state)
if msg != nil {
send msg
}
if timeout != nil {
trigger timeout
}
for {
select {
case message := <- msgCh:
switch msg := message.(type) {
case MessageProposal:
case MessageVote:
if msg.Height == state.Height {
newVote = votes.AddVote(msg)
if newVote {
switch msg.Type {
case Prevote:
prevotes = votes.Prevotes(msg.Round)
if prevotes.WeakCertificate() and msg.Round > state.Round {
event = EventNewRound { msg.Height, msg.Round }
state, msg, timeout = Consensus(event, state)
state = handleStateChange(state, msg, timeout)
}
if blockID, ok = prevotes.TwoThirdsMajority(); ok and blockID != nil {
if msg.Round == state.Round and hasBlock(blockID) {
event = Majority23PrevotesBlock { msg.Height, msg.Round, blockID }
state, msg, timeout = Consensus(event, state)
state = handleStateChange(state, msg, timeout)
}
if proposal != nil and proposal.POLRound == msg.Round and hasBlock(blockID) {
event = EventProposal {
Height: state.Height
Round: state.Round
BlockID: blockID
POLRound: proposal.POLRound
Sender: message.Sender
}
state, msg, timeout = Consensus(event, state)
state = handleStateChange(state, msg, timeout)
}
}
if prevotes.HasTwoThirdsAny() and msg.Round == state.Round {
event = Majority23PrevotesAny { msg.Height, msg.Round, blockID }
state, msg, timeout = Consensus(event, state)
state = handleStateChange(state, msg, timeout)
}
case Precommit:
}
}
}
case timeout := <- timeoutCh:
case block := <- blockCh:
}
}
}
func handleStateChange(state, msg, timeout) State {
if state.Step == Commit {
state = ExecuteBlock(state.LockedValue)
}
if msg != nil {
send msg
}
if timeout != nil {
trigger timeout
}
}
```
### Implementation roadmap
* implement proposed implementation

View File

@@ -6,10 +6,16 @@ Author: Anton Kaliaev (@melekes)
02-10-2018: Initial draft
16-01-2019: Second version based on our conversation with Jae
17-01-2019: Third version explaining how new design solves current issues
25-01-2019: Fourth version to treat buffered and unbuffered channels differently
## Context
Since the initial version of the pubsub, there's been a number of issues
raised: #951, #1879, #1880. Some of them are high-level issues questioning the
raised: [#951], [#1879], [#1880]. Some of them are high-level issues questioning the
core design choices made. Others are minor and mostly about the interface of
`Subscribe()` / `Publish()` functions.
@@ -40,9 +46,19 @@ goroutines can be used to avoid uncontrolled memory growth.
In certain cases, this is what you want. But in our case, because we need
strict ordering of events (if event A was published before B, the guaranteed
delivery order will be A -> B), we can't use goroutines.
delivery order will be A -> B), we can't publish msg in a new goroutine every time.
There is also a question whether we should have a non-blocking send:
We can also have a goroutine per subscriber, although we'd need to be careful
with the number of subscribers. It's also more difficult to implement, and it's
unclear whether we'd benefit from it (because we'd be forced to create N additional
channels to distribute the msg to these goroutines).
### Non-blocking send
There is also a question whether we should have a non-blocking send.
Currently, sends are blocking, so publishing to one client can block on
publishing to another. This means a slow or unresponsive client can halt the
system. Instead, we can use a non-blocking send:
```go
for each subscriber {
@@ -56,15 +72,14 @@ for each subscriber {
```
This fixes the "slow client problem", but there is no way for a slow client to
know if it had missed a message. On the other hand, if we're going to stick
with blocking send, **devs must always ensure subscriber's handling code does not
block**. As you can see, there is an implicit choice between ordering guarantees
and using goroutines.
know if it had missed a message. We could return a second channel and close it
to indicate subscription termination. On the other hand, if we're going to
stick with blocking send, **devs must always ensure subscriber's handling code
does not block**, which is a hard task to put on their shoulders.
An interim option is to run a pool of goroutines for a single message and wait for all
of them to finish. This would solve the "slow client problem", but we'd still
have to wait `max(goroutine_X_time)` before we can publish the next message.
My opinion: not worth doing.
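To make that trade-off concrete, a minimal sketch of the goroutine-pool option (illustrative names only, not the actual pubsub code):

```go
package pubsub

import "sync"

// publishToAll sends msg to every subscriber in its own goroutine and waits
// for all sends to complete before the next message can be published.
// Subscribers no longer block each other, but the total wait is still
// max(time per subscriber), as noted above.
func publishToAll(subscribers []chan<- interface{}, msg interface{}) {
	var wg sync.WaitGroup
	for _, out := range subscribers {
		wg.Add(1)
		go func(out chan<- interface{}) {
			defer wg.Done()
			out <- msg
		}(out)
	}
	wg.Wait()
}
```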
### Channels vs Callbacks
@@ -76,36 +91,137 @@ memory leaks and/or memory usage increase.
Go channels are de-facto standard for carrying data between goroutines.
**Question: Is it worth switching to callback functions?**
### Why `Subscribe()` accepts an `out` channel?
Because in our tests, we create buffered channels (cap: 1). Alternatively, we
can make capacity an argument.
can make capacity an argument and return a channel.
## Decision
Change Subscribe() function to return out channel:
### MsgAndTags
```go
// outCap can be used to set capacity of out channel (unbuffered by default).
Subscribe(ctx context.Context, clientID string, query Query, outCap... int) (out <-chan interface{}, err error) {
```
It's more idiomatic since we're closing it during Unsubscribe/UnsubscribeAll calls.
Also, we should make tags available to subscribers:
Use a `MsgAndTags` struct on the subscription channel to indicate what tags the
msg matched.
```go
type MsgAndTags struct {
Msg interface{}
Tags TagMap
}
// outCap can be used to set capacity of out channel (unbuffered by default).
Subscribe(ctx context.Context, clientID string, query Query, outCap... int) (out <-chan MsgAndTags, err error) {
```
### Subscription Struct
Change `Subscribe()` function to return a `Subscription` struct:
```go
type Subscription struct {
// private fields
}
func (s *Subscription) Out() <-chan MsgAndTags
func (s *Subscription) Cancelled() <-chan struct{}
func (s *Subscription) Err() error
```
`Out()` returns a channel onto which messages and tags are published.
`Unsubscribe`/`UnsubscribeAll` does not close the channel, to prevent clients from
receiving a nil message.
`Cancelled()` returns a channel that's closed when the subscription is terminated;
it is meant to be used in a select statement.
If the channel returned by `Cancelled()` is not closed yet, `Err()` returns nil.
If the channel is closed, `Err()` returns a non-nil error explaining why:
`ErrUnsubscribed` if the subscriber chose to unsubscribe,
`ErrOutOfCapacity` if the subscriber is not pulling messages fast enough and the channel returned by `Out()` became full.
After `Err()` returns a non-nil error, successive calls to `Err()` return the same error.
```go
subscription, err := pubsub.Subscribe(...)
if err != nil {
// ...
}
for {
select {
case msgAndTags := <-subscription.Out():
// ...
case <-subscription.Cancelled():
return subscription.Err()
}
}
```
### Capacity and Subscriptions
Make the `Out()` channel buffered (with capacity 1) by default. In most cases, we want to
terminate the slow subscriber. Only in rare cases do we want to block the pubsub
(e.g. when debugging consensus). This should lower the chances of the pubsub
being frozen.
```go
// outCap can be used to set capacity of Out channel
// (1 by default, must be greater than 0).
Subscribe(ctx context.Context, clientID string, query Query, outCap... int) (Subscription, error) {
```
Use a different function for an unbuffered channel:
```go
// Subscription uses an unbuffered channel. Publishing will block.
SubscribeUnbuffered(ctx context.Context, clientID string, query Query) (Subscription, error) {
```
SubscribeUnbuffered should not be exposed to users.
### Blocking/Nonblocking
The publisher should treat these kinds of channels separately.
It should block on unbuffered channels (for use with internal consensus events
in the consensus tests) and not block on the buffered ones. If a client is too
slow to keep up with its messages, its subscription is terminated:
```go
for each subscription {
    out := subscription.outChan
    if cap(out) == 0 {
        // block on unbuffered channel
        out <- msg
    } else {
        // don't block on buffered channels
        select {
        case out <- msg:
        default:
            // set the error, notify on the cancel chan
            subscription.err = fmt.Errorf("client is too slow for msg")
            close(subscription.cancelChan)
            // ... unsubscribe and close out
        }
    }
}
```
### How does this new design solve the current issues?
[#951] ([#1880]):
Because of the non-blocking send, the situation where we'd deadlock is no longer
possible. If the client stops reading messages, it will be removed.
[#1879]:
MsgAndTags is used now instead of a plain message.
### Future problems and their possible solutions
[#2826]
One question I am still pondering: how to prevent the pubsub from slowing
down consensus. We can increase the pubsub queue size (which is 0 now). Also,
it's probably a good idea to limit the total number of subscribers.
This can be done automatically. Say we set the queue size to 1000 and, when it's >=
80% full, refuse new subscriptions.
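As a rough sketch of that idea (hypothetical names, not the actual pubsub API):

```go
package pubsub

import "errors"

// ErrServerBusy is returned when the publish queue is close to full and new
// subscriptions are refused.
var ErrServerBusy = errors.New("pubsub: queue almost full, refusing new subscriptions")

type Server struct {
	queue chan interface{} // internal publish queue, e.g. cap 1000
}

// Subscribe refuses new subscriptions once the queue is >= 80% full.
func (s *Server) Subscribe(clientID string) error {
	if cap(s.queue) > 0 && len(s.queue)*10 >= cap(s.queue)*8 {
		return ErrServerBusy
	}
	// ... register the subscription as usual
	return nil
}
```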
## Status
In review
@@ -116,7 +232,16 @@ In review
- more idiomatic interface
- subscribers know what tags msg was published with
- subscribers aware of the reason their subscription was cancelled
### Negative
- (since v1) no concurrency when it comes to publishing messages
### Neutral
[#951]: https://github.com/tendermint/tendermint/issues/951
[#1879]: https://github.com/tendermint/tendermint/issues/1879
[#1880]: https://github.com/tendermint/tendermint/issues/1880
[#2826]: https://github.com/tendermint/tendermint/issues/2826

View File

@@ -70,10 +70,6 @@ Tendermint is in essence similar software, but with two key differences:
the application logic that's right for them, from key-value store to
cryptocurrency to e-voting platform and beyond.
The layout of this Tendermint website content is also ripped directly
and without shame from [consul.io](https://www.consul.io/) and the other
[Hashicorp sites](https://www.hashicorp.com/#tools).
### Bitcoin, Ethereum, etc.
Tendermint emerged in the tradition of cryptocurrencies like Bitcoin,

View File

@@ -1,20 +1,17 @@
# Docker Compose
With Docker Compose, we can spin up local testnets in a single command:
```
make localnet-start
```
With Docker Compose, you can spin up local testnets with a single command.
## Requirements
- [Install tendermint](/docs/install.md)
- [Install docker](https://docs.docker.com/engine/installation/)
- [Install docker-compose](https://docs.docker.com/compose/install/)
1. [Install tendermint](/docs/introduction/install.md)
2. [Install docker](https://docs.docker.com/engine/installation/)
3. [Install docker-compose](https://docs.docker.com/compose/install/)
## Build
Build the `tendermint` binary and the `tendermint/localnode` docker image.
Build the `tendermint` binary and, optionally, the `tendermint/localnode`
docker image.
Note the binary will be mounted into the container so it can be updated without
rebuilding the image.
@@ -25,11 +22,10 @@ cd $GOPATH/src/github.com/tendermint/tendermint
# Build the linux binary in ./build
make build-linux
# Build tendermint/localnode image
# (optionally) Build tendermint/localnode image
make build-docker-localnode
```
## Run a testnet
To start a 4 node testnet run:
@@ -38,9 +34,13 @@ To start a 4 node testnet run:
make localnet-start
```
The nodes bind their RPC servers to ports 26657, 26660, 26662, and 26664 on the host.
The nodes bind their RPC servers to ports 26657, 26660, 26662, and 26664 on the
host.
This file creates a 4-node network using the localnode image.
The nodes of the network expose their P2P and RPC endpoints to the host machine on ports 26656-26657, 26659-26660, 26661-26662, and 26663-26664 respectively.
The nodes of the network expose their P2P and RPC endpoints to the host machine
on ports 26656-26657, 26659-26660, 26661-26662, and 26663-26664 respectively.
To update the binary, just rebuild it and restart the nodes:
@@ -52,34 +52,112 @@ make localnet-start
## Configuration
The `make localnet-start` creates files for a 4-node testnet in `./build` by calling the `tendermint testnet` command.
The `make localnet-start` creates files for a 4-node testnet in `./build` by
calling the `tendermint testnet` command.
The `./build` directory is mounted to the `/tendermint` mount point to attach the binary and config files to the container.
The `./build` directory is mounted to the `/tendermint` mount point to attach
the binary and config files to the container.
For instance, to create a single node testnet:
To change the number of validators / non-validators, change the `localnet-start` Makefile target:
```
localnet-start: localnet-stop
@if ! [ -f build/node0/config/genesis.json ]; then docker run --rm -v $(CURDIR)/build:/tendermint:Z tendermint/localnode testnet --v 5 --n 3 --o . --populate-persistent-peers --starting-ip-address 192.167.10.2 ; fi
docker-compose up
```
The command will now generate config files for a network of 5 validators and 3
non-validators.
Before running it, don't forget to clean up the old files:
```
cd $GOPATH/src/github.com/tendermint/tendermint
# Clear the build folder
rm -rf ./build
rm -rf ./build/node*
```
# Build binary
make build-linux
## Configuring abci containers
# Create configuration
docker run -e LOG="stdout" -v `pwd`/build:/tendermint tendermint/localnode testnet --o . --v 1
To use your own abci applications with the 4-node setup, edit the [docker-compose.yaml](https://github.com/tendermint/tendermint/blob/develop/docker-compose.yml) file and add the image of your abci application.
#Run the node
docker run -v `pwd`/build:/tendermint tendermint/localnode
```
abci0:
container_name: abci0
image: "abci-image"
build:
context: .
dockerfile: abci.Dockerfile
command: <insert command to run your abci application>
networks:
localnet:
ipv4_address: 192.167.10.6
abci1:
container_name: abci1
image: "abci-image"
build:
context: .
dockerfile: abci.Dockerfile
command: <insert command to run your abci application>
networks:
localnet:
ipv4_address: 192.167.10.7
abci2:
container_name: abci2
image: "abci-image"
build:
context: .
dockerfile: abci.Dockerfile
command: <insert command to run your abci application>
networks:
localnet:
ipv4_address: 192.167.10.8
abci3:
container_name: abci3
image: "abci-image"
build:
context: .
dockerfile: abci.Dockerfile
command: <insert command to run your abci application>
networks:
localnet:
ipv4_address: 192.167.10.9
```
Override the [command](https://github.com/tendermint/tendermint/blob/master/networks/local/localnode/Dockerfile#L12) in each node to connect to its abci.
```
node0:
container_name: node0
image: "tendermint/localnode"
ports:
- "26656-26657:26656-26657"
environment:
- ID=0
- LOG=$${LOG:-tendermint.log}
volumes:
- ./build:/tendermint:Z
command: node --proxy_app=tcp://abci0:26658
networks:
localnet:
ipv4_address: 192.167.10.2
```
Do the same for node1, node2 and node3, then [run the testnet](https://github.com/tendermint/tendermint/blob/master/docs/networks/docker-compose.md#run-a-testnet).
## Logging
Log is saved under the attached volume, in the `tendermint.log` file. If the `LOG` environment variable is set to `stdout` at start, the log is not saved, but printed on the screen.
Log is saved under the attached volume, in the `tendermint.log` file. If the
`LOG` environment variable is set to `stdout` at start, the log is not saved,
but printed on the screen.
## Special binaries
If you have multiple binaries with different names, you can specify which one to run with the BINARY environment variable. The path of the binary is relative to the attached volume.
If you have multiple binaries with different names, you can specify which one
to run with the `BINARY` environment variable. The path of the binary is relative
to the attached volume.

docs/package-lock.json (generated, 4670 lines): diff suppressed because it is too large.

View File

@@ -1,40 +0,0 @@
{
"dependencies": {
"prettier": "^1.13.7",
"remark-cli": "^5.0.0",
"remark-lint-no-dead-urls": "^0.3.0",
"remark-lint-write-good": "^1.0.3",
"textlint": "^10.2.1",
"textlint-rule-stop-words": "^1.0.3"
},
"name": "tendermint",
"description": "Tendermint Core Documentation",
"version": "0.0.1",
"main": "README.md",
"devDependencies": {},
"scripts": {
"lint:json": "prettier \"**/*.json\" --write",
"lint:md": "prettier \"**/*.md\" --write && remark . && textlint \"md/**\"",
"lint": "yarn lint:json && yarn lint:md"
},
"repository": {
"type": "git",
"url": "git+https://github.com/tendermint/tendermint.git"
},
"keywords": [
"tendermint",
"blockchain"
],
"author": "Tendermint",
"license": "ISC",
"bugs": {
"url": "https://github.com/tendermint/tendermint/issues"
},
"homepage": "https://tendermint.com/docs/",
"remarkConfig": {
"plugins": [
"remark-lint-no-dead-urls",
"remark-lint-write-good"
]
}
}

View File

@@ -14,31 +14,31 @@ please submit them to our [bug bounty](https://tendermint.com/security)!
### Data Structures
- [Encoding and Digests](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/encoding.md)
- [Blockchain](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/blockchain.md)
- [State](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/state.md)
- [Encoding and Digests](./blockchain/encoding.md)
- [Blockchain](./blockchain/blockchain.md)
- [State](./blockchain/state.md)
### Consensus Protocol
- [Consensus Algorithm](/docs/spec/consensus/consensus.md)
- [Creating a proposal](/docs/spec/consensus/creating-proposal.md)
- [Time](/docs/spec/consensus/bft-time.md)
- [Light-Client](/docs/spec/consensus/light-client.md)
- [Consensus Algorithm](./consensus/consensus.md)
- [Creating a proposal](./consensus/creating-proposal.md)
- [Time](./consensus/bft-time.md)
- [Light-Client](./consensus/light-client.md)
### P2P and Network Protocols
- [The Base P2P Layer](https://github.com/tendermint/tendermint/tree/master/docs/spec/p2p): multiplex the protocols ("reactors") on authenticated and encrypted TCP connections
- [Peer Exchange (PEX)](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/pex): gossip known peer addresses so peers can find each other
- [Block Sync](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/block_sync): gossip blocks so peers can catch up quickly
- [Consensus](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/consensus): gossip votes and block parts so new blocks can be committed
- [Mempool](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/mempool): gossip transactions so they get included in blocks
- Evidence: Forthcoming, see [this issue](https://github.com/tendermint/tendermint/issues/2329).
- [The Base P2P Layer](./p2p/): multiplex the protocols ("reactors") on authenticated and encrypted TCP connections
- [Peer Exchange (PEX)](./reactors/pex/): gossip known peer addresses so peers can find each other
- [Block Sync](./reactors/block_sync/): gossip blocks so peers can catch up quickly
- [Consensus](./reactors/consensus/): gossip votes and block parts so new blocks can be committed
- [Mempool](./reactors/mempool/): gossip transactions so they get included in blocks
- [Evidence](./reactors/evidence/): sending invalid evidence will stop the peer
### Software
- [ABCI](/docs/spec/software/abci.md): Details about interactions between the
- [ABCI](./software/abci.md): Details about interactions between the
application and consensus engine over ABCI
- [Write-Ahead Log](/docs/spec/software/wal.md): Details about how the consensus
- [Write-Ahead Log](./software/wal.md): Details about how the consensus
engine preserves data and recovers from crash failures
## Overview

View File

@@ -166,6 +166,11 @@ the tags will be hashed into the next block header.
The application may set the validator set during InitChain, and update it during
EndBlock.
Note that the maximum total power of the validator set is bounded by
`MaxTotalVotingPower = MaxInt64 / 8`. Applications are responsible for ensuring
they do not make changes to the validator set that cause it to exceed this
limit.
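As a rough illustration (a sketch, not part of the spec), an application could sanity-check its own validator updates against this limit before returning them from EndBlock; `checkValidatorUpdates` and the `current` map are hypothetical:

```go
package app

import (
	"fmt"
	"math"

	abci "github.com/tendermint/tendermint/abci/types"
)

const maxTotalVotingPower = math.MaxInt64 / 8

// checkValidatorUpdates applies the proposed updates to the current powers
// (keyed here by hex-encoded pubkey bytes) and errors if the resulting total
// voting power would exceed MaxTotalVotingPower.
func checkValidatorUpdates(current map[string]int64, updates []abci.ValidatorUpdate) error {
	next := make(map[string]int64, len(current))
	for k, v := range current {
		next[k] = v
	}
	for _, u := range updates {
		key := fmt.Sprintf("%X", u.PubKey.Data)
		if u.Power == 0 {
			delete(next, key) // power 0 removes the validator
		} else {
			next[key] = u.Power
		}
	}
	var total int64
	for _, p := range next {
		total += p
	}
	if total > maxTotalVotingPower {
		return fmt.Errorf("total voting power %d exceeds the limit %d", total, maxTotalVotingPower)
	}
	return nil
}
```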
### InitChain
ResponseInitChain can return a list of validators.
@@ -206,6 +211,7 @@ following rules:
- if the validator does not already exist, it will be added to the validator
set with the given power
- if the validator does already exist, its power will be adjusted to the given power
- the total power of the new validator set must not exceed MaxTotalVotingPower
Note the updates returned in block `H` will only take effect at block `H+2`.
@@ -407,7 +413,7 @@ If `storeBlockHeight == stateBlockHeight && appBlockHeight < storeBlockHeight`,
replay all blocks in full from `appBlockHeight` to `storeBlockHeight`.
This happens if we completed processing the block, but the app forgot its height.
If `storeBlockHeight == stateBlockHeight && appBlockHeight == storeBlockHeight`, we're done
If `storeBlockHeight == stateBlockHeight && appBlockHeight == storeBlockHeight`, we're done.
This happens if we crashed at an opportune spot.
If `storeBlockHeight == stateBlockHeight+1`
@@ -422,6 +428,7 @@ This happens if we started processing the block but didn't finish.
replay the last block (storeBlockHeight) in full.
This happens if we crashed before the app finished Commit
If appBlockHeight == storeBlockHeight {
If `appBlockHeight == storeBlockHeight`
update the state using the saved ABCI responses but don't run the block against the real app.
This happens if we crashed after the app finished Commit but before Tendermint saved the state.

View File

@@ -51,7 +51,7 @@ type Header struct {
// hashes of block data
LastCommitHash []byte // commit from validators from the last block
DataHash []byte // Merkle root of transactions
DataHash []byte // MerkleRoot of transaction hashes
// hashes from the app output from the prev block
ValidatorsHash []byte // validators for the current block
@@ -83,25 +83,27 @@ type Version struct {
## BlockID
The `BlockID` contains two distinct Merkle roots of the block.
The first, used as the block's main hash, is the Merkle root
of all the fields in the header. The second, used for secure gossipping of
the block during consensus, is the Merkle root of the complete serialized block
cut into parts. The `BlockID` includes these two hashes, as well as the number of
parts.
The first, used as the block's main hash, is the MerkleRoot
of all the fields in the header (ie. `MerkleRoot(header)`).
The second, used for secure gossipping of the block during consensus,
is the MerkleRoot of the complete serialized block
cut into parts (ie. `MerkleRoot(MakeParts(block))`).
The `BlockID` includes these two hashes, as well as the number of
parts (ie. `len(MakeParts(block))`).
```go
type BlockID struct {
Hash []byte
Parts PartsHeader
PartsHeader PartSetHeader
}
type PartsHeader struct {
Hash []byte
type PartSetHeader struct {
Total int32
Hash []byte
}
```
TODO: link to details of merkle sums.
See [MerkleRoot](/docs/spec/blockchain/encoding.md#MerkleRoot) for details.
## Time
@@ -109,10 +111,6 @@ Tendermint uses the
[Google.Protobuf.WellKnownTypes.Timestamp](https://developers.google.com/protocol-buffers/docs/reference/csharp/class/google/protobuf/well-known-types/timestamp)
format, which uses two integers, one for Seconds and for Nanoseconds.
NOTE: there is currently a small divergence between Tendermint and the
Google.Protobuf.WellKnownTypes.Timestamp that should be resolved. See [this
issue](https://github.com/tendermint/go-amino/issues/223) for details.
## Data
Data is just a wrapper for a list of transactions, where transactions are
@@ -146,12 +144,12 @@ The vote includes information about the validator signing it.
```go
type Vote struct {
Type SignedMsgType // byte
Type byte
Height int64
Round int
Timestamp time.Time
BlockID BlockID
ValidatorAddress Address
Timestamp Time
ValidatorAddress []byte
ValidatorIndex int
Signature []byte
}
@@ -164,8 +162,8 @@ a _precommit_ has `vote.Type == 2`.
## Signature
Signatures in Tendermint are raw bytes representing the underlying signature.
The only signature scheme currently supported for Tendermint validators is
ED25519. The signature is the raw 64-byte ED25519 signature.
See the [signature spec](/docs/spec/blockchain/encoding.md#key-types) for more.
## EvidenceData
@@ -192,6 +190,8 @@ type DuplicateVoteEvidence struct {
}
```
See the [pubkey spec](/docs/spec/blockchain/encoding.md#key-types) for more.
## Validation
Here we describe the validation rules for every element in a block.
@@ -209,7 +209,7 @@ the current version of the `state` corresponds to the state
after executing transactions from the `prevBlock`.
Elements of an object are accessed as expected,
ie. `block.Header`.
See [here](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/state.md) for the definition of `state`.
See the [definition of `State`](/docs/spec/blockchain/state.md).
### Header
@@ -230,7 +230,7 @@ The block version must match the state version.
len(block.ChainID) < 50
```
ChainID must be maximum 50 UTF-8 symbols.
ChainID must be less than 50 bytes.
### Height
@@ -288,28 +288,25 @@ The first block has `block.Header.TotalTxs = block.Header.NumberTxs`.
LastBlockID is the previous block's BlockID:
```go
prevBlockParts := MakeParts(prevBlock, state.LastConsensusParams.BlockGossip.BlockPartSize)
prevBlockParts := MakeParts(prevBlock)
block.Header.LastBlockID == BlockID {
Hash: SimpleMerkleRoot(prevBlock.Header),
Hash: MerkleRoot(prevBlock.Header),
PartsHeader{
Hash: SimpleMerkleRoot(prevBlockParts),
Hash: MerkleRoot(prevBlockParts),
Total: len(prevBlockParts),
},
}
```
Note: it depends on the ConsensusParams,
which are held in the `state` and may be updated by the application.
The first block has `block.Header.LastBlockID == BlockID{}`.
### LastCommitHash
```go
block.Header.LastCommitHash == SimpleMerkleRoot(block.LastCommit)
block.Header.LastCommitHash == MerkleRoot(block.LastCommit.Precommits)
```
Simple Merkle root of the votes included in the block.
MerkleRoot of the votes included in the block.
These are the votes that committed the previous block.
The first block has `block.Header.LastCommitHash == []byte{}`
@@ -317,37 +314,42 @@ The first block has `block.Header.LastCommitHash == []byte{}`
### DataHash
```go
block.Header.DataHash == SimpleMerkleRoot(block.Txs.Txs)
block.Header.DataHash == MerkleRoot(Hashes(block.Txs.Txs))
```
Simple Merkle root of the transactions included in the block.
MerkleRoot of the hashes of transactions included in the block.
Note the transactions are hashed before being included in the Merkle tree,
so the leaves of the Merkle tree are the hashes, not the transactions
themselves. This is because transaction hashes are regularly used as identifiers for
transactions.
### ValidatorsHash
```go
block.ValidatorsHash == SimpleMerkleRoot(state.Validators)
block.ValidatorsHash == MerkleRoot(state.Validators)
```
Simple Merkle root of the current validator set that is committing the block.
MerkleRoot of the current validator set that is committing the block.
This can be used to validate the `LastCommit` included in the next block.
### NextValidatorsHash
```go
block.NextValidatorsHash == SimpleMerkleRoot(state.NextValidators)
block.NextValidatorsHash == MerkleRoot(state.NextValidators)
```
Simple Merkle root of the next validator set that will be the validator set that commits the next block.
MerkleRoot of the next validator set that will be the validator set that commits the next block.
This is included so that the current validator set gets a chance to sign the
next validator set's Merkle root.
### ConsensusParamsHash
### ConsensusHash
```go
block.ConsensusParamsHash == TMHASH(amino(state.ConsensusParams))
block.ConsensusHash == state.ConsensusParams.Hash()
```
Hash of the amino-encoded consensus parameters.
Hash of the amino-encoding of a subset of the consensus parameters.
### AppHash
@@ -362,20 +364,20 @@ The first block has `block.Header.AppHash == []byte{}`.
### LastResultsHash
```go
block.ResultsHash == SimpleMerkleRoot(state.LastResults)
block.ResultsHash == MerkleRoot(state.LastResults)
```
Simple Merkle root of the results of the transactions in the previous block.
MerkleRoot of the results of the transactions in the previous block.
The first block has `block.Header.ResultsHash == []byte{}`.
## EvidenceHash
```go
block.EvidenceHash == SimpleMerkleRoot(block.Evidence)
block.EvidenceHash == MerkleRoot(block.Evidence)
```
Simple Merkle root of the evidence of Byzantine behaviour included in this block.
MerkleRoot of the evidence of Byzantine behaviour included in this block.
### ProposerAddress

View File

@@ -30,6 +30,12 @@ For example, the byte-array `[0xA, 0xB]` would be encoded as `0x020A0B`,
while a byte-array containing 300 entries beginning with `[0xA, 0xB, ...]` would
be encoded as `0xAC020A0B...` where `0xAC02` is the UVarint encoding of 300.
## Hashing
Tendermint uses `SHA256` as its hash function.
Objects are always Amino encoded before being hashed.
So `SHA256(obj)` is short for `SHA256(AminoEncode(obj))`.
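To make the shorthand concrete, a small sketch (assuming the `go-amino` codec and the `crypto/tmhash` package; the `Example` struct is purely illustrative):

```go
package main

import (
	"fmt"

	amino "github.com/tendermint/go-amino"
	"github.com/tendermint/tendermint/crypto/tmhash"
)

type Example struct {
	Name   string
	Height int64
}

// hashObject amino-encodes the object and then hashes the bytes with SHA256
// (tmhash), i.e. SHA256(AminoEncode(obj)).
func hashObject(cdc *amino.Codec, obj interface{}) ([]byte, error) {
	bz, err := cdc.MarshalBinaryBare(obj)
	if err != nil {
		return nil, err
	}
	return tmhash.Sum(bz), nil
}

func main() {
	cdc := amino.NewCodec()
	h, err := hashObject(cdc, Example{Name: "block", Height: 1})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%X\n", h)
}
```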
## Public Key Cryptography
Tendermint uses Amino to distinguish between different types of private keys,
@@ -68,23 +74,27 @@ For example, the 33-byte (or 0x21-byte in hex) Secp256k1 pubkey
would be encoded as
`EB5AE98721020BD40F225A57ED383B440CF073BC5539D0341F5767D2BF2D78406D00475A2EE9`
### Addresses
### Key Types
Addresses for each public key types are computed as follows:
Each type specifies its own pubkey, address, and signature format.
#### Ed25519
First 20-bytes of the SHA256 hash of the raw 32-byte public key:
TODO: pubkey
The address is the first 20-bytes of the SHA256 hash of the raw 32-byte public key:
```
address = SHA256(pubkey)[:20]
```
NOTE: before v0.22.0, this was the RIPEMD160 of the Amino encoded public key.
The signature is the raw 64-byte ED25519 signature.
#### Secp256k1
RIPEMD160 hash of the SHA256 hash of the OpenSSL compressed public key:
TODO: pubkey
The address is the RIPEMD160 hash of the SHA256 hash of the OpenSSL compressed public key:
```
address = RIPEMD160(SHA256(pubkey))
@@ -92,12 +102,21 @@ address = RIPEMD160(SHA256(pubkey))
This is the same as Bitcoin.
The signature is the 64-byte concatenation of ECDSA `r` and `s` (ie. `r || s`),
where `s` is lexicographically less than its inverse, to prevent malleability.
This is like Ethereum, but without the extra byte for pubkey recovery, since
Tendermint assumes the pubkey is always provided anyway.
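A compact sketch of both address rules (assuming the standard library `crypto/sha256` and `golang.org/x/crypto/ripemd160`):

```go
package spec

import (
	"crypto/sha256"

	"golang.org/x/crypto/ripemd160"
)

// Ed25519Address is the first 20 bytes of SHA256(raw 32-byte pubkey).
func Ed25519Address(pubkey [32]byte) []byte {
	h := sha256.Sum256(pubkey[:])
	return h[:20]
}

// Secp256k1Address is RIPEMD160(SHA256(33-byte compressed pubkey)).
func Secp256k1Address(compressed [33]byte) []byte {
	sh := sha256.Sum256(compressed[:])
	r := ripemd160.New()
	r.Write(sh[:])
	return r.Sum(nil)
}
```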
#### Multisig
TODO
## Other Common Types
### BitArray
The BitArray is used in block headers and some consensus messages to signal
whether or not something was done by each validator. BitArray is represented
The BitArray is used in some consensus messages to represent votes received from
validators, or parts received in a block. It is represented
with a struct containing the number of bits (`Bits`) and the bit-array itself
encoded in base64 (`Elems`).
@@ -119,24 +138,27 @@ representing `1` and `0`. Ie. the BitArray `10110` would be JSON encoded as
Part is used to break up blocks into pieces that can be gossiped in parallel
and securely verified using a Merkle tree of the parts.
Part contains the index of the part in the larger set (`Index`), the actual
underlying data of the part (`Bytes`), and a simple Merkle proof that the part is contained in
the larger set (`Proof`).
Part contains the index of the part (`Index`), the actual
underlying data of the part (`Bytes`), and a Merkle proof that the part is contained in
the set (`Proof`).
```go
type Part struct {
Index int
Bytes byte[]
Proof byte[]
Bytes []byte
Proof SimpleProof
}
```
See details of SimpleProof, below.
### MakeParts
Encode an object using Amino and slice it into parts.
Tendermint uses a part size of 65536 bytes.
```go
func MakeParts(obj interface{}, partSize int) []Part
func MakeParts(block Block) []Part
```
## Merkle Trees
@@ -144,12 +166,17 @@ func MakeParts(obj interface{}, partSize int) []Part
For an overview of Merkle trees, see
[wikipedia](https://en.wikipedia.org/wiki/Merkle_tree)
A Simple Tree is a simple compact binary tree for a static list of items. Simple Merkle trees are used in numerous places in Tendermint to compute a cryptographic digest of a data structure. In a Simple Tree, the transactions and validation signatures of a block are hashed using this simple merkle tree logic.
We use the RFC 6962 specification of a merkle tree, with sha256 as the hash function.
Merkle trees are used throughout Tendermint to compute a cryptographic digest of a data structure.
The differences between RFC 6962 and the simplest form of a merkle tree are that:
If the number of items is not a power of two, the tree will not be full
and some leaf nodes will be at different levels. Simple Tree tries to
keep both sides of the tree the same size, but the left side may be one
greater, for example:
1) leaf nodes and inner nodes have different hashes.
This is for "second pre-image resistance", to prevent the proof to an inner node being valid as the proof of a leaf.
The leaf nodes are `SHA256(0x00 || leaf_data)`, and inner nodes are `SHA256(0x01 || left_hash || right_hash)`.
2) When the number of items isn't a power of two, the left half of the tree is as big as it can be
(the largest power of two less than the number of items). This allows new leaves to be added with less
recomputation. For example:
```
Simple Tree with 6 items Simple Tree with 7 items
@@ -163,68 +190,79 @@ greater, for example:
/ \ / \ / \ / \
/ \ / \ / \ / \
/ \ / \ / \ / \
* h2 * h5 * * * h6
* * h4 h5 * * * h6
/ \ / \ / \ / \ / \
h0 h1 h3 h4 h0 h1 h2 h3 h4 h5
h0 h1 h2 h3 h0 h1 h2 h3 h4 h5
```
Tendermint always uses the `TMHASH` hash function, which is equivalent to
SHA256:
### MerkleRoot
```
func TMHASH(bz []byte) []byte {
return SHA256(bz)
}
```
### Simple Merkle Root
The function `SimpleMerkleRoot` is a simple recursive function defined as follows:
The function `MerkleRoot` is a simple recursive function defined as follows:
```go
func SimpleMerkleRoot(hashes [][]byte) []byte{
switch len(hashes) {
// SHA256(0x00 || leaf)
func leafHash(leaf []byte) []byte {
return tmhash.Sum(append(0x00, leaf...))
}
// SHA256(0x01 || left || right)
func innerHash(left []byte, right []byte) []byte {
return tmhash.Sum(append(0x01, append(left, right...)...))
}
// largest power of 2 less than k
func getSplitPoint(k int) { ... }
func MerkleRoot(items [][]byte) []byte{
switch len(items) {
case 0:
return nil
case 1:
return hashes[0]
return leafHash(items[0])
default:
left := SimpleMerkleRoot(hashes[:(len(hashes)+1)/2])
right := SimpleMerkleRoot(hashes[(len(hashes)+1)/2:])
return SimpleConcatHash(left, right)
k := getSplitPoint(len(items))
left := MerkleRoot(items[:k])
right := MerkleRoot(items[k:])
return innerHash(left, right)
}
}
func SimpleConcatHash(left, right []byte) []byte{
left = encodeByteSlice(left)
right = encodeByteSlice(right)
return TMHASH(append(left, right))
}
```
Note that the leaves are Amino encoded as byte-arrays (ie. simple Uvarint length
prefix) before being concatenated together and hashed.
Note: `MerkleRoot` operates on items which are arbitrary byte arrays, not
necessarily hashes. For items which need to be hashed first, we introduce the
`Hashes` function:
Note: we will abuse notation and invoke `SimpleMerkleRoot` with arguments of type `struct` or type `[]struct`.
For `struct` arguments, we compute a `[][]byte` containing the hash of each
```
func Hashes(items [][]byte) [][]byte {
return SHA256 of each item
}
```
Note: we will abuse notation and invoke `MerkleRoot` with arguments of type `struct` or type `[]struct`.
For `struct` arguments, we compute a `[][]byte` containing the amino encoding of each
field in the struct, in the same order the fields appear in the struct.
For `[]struct` arguments, we compute a `[][]byte` by hashing the individual `struct` elements.
For `[]struct` arguments, we compute a `[][]byte` by amino encoding the individual `struct` elements.
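For completeness, one possible implementation of the `getSplitPoint` helper used in `MerkleRoot` above (a sketch):

```go
// getSplitPoint returns the largest power of two strictly less than k,
// i.e. the size of the left subtree when splitting k items.
func getSplitPoint(k int) int {
	if k < 2 {
		panic("getSplitPoint requires k > 1")
	}
	split := 1
	for split*2 < k {
		split *= 2
	}
	return split
}
```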
### Simple Merkle Proof
Proof that a leaf is in a Merkle tree consists of a simple structure:
Proof that a leaf is in a Merkle tree is composed as follows:
```
```golang
type SimpleProof struct {
Total int
Index int
LeafHash []byte
Aunts [][]byte
}
```
Which is verified using the following:
Which is verified as follows:
```
func (proof SimpleProof) Verify(index, total int, leafHash, rootHash []byte) bool {
computedHash := computeHashFromAunts(index, total, leafHash, proof.Aunts)
```golang
func (proof SimpleProof) Verify(rootHash []byte, leaf []byte) bool {
assert(proof.LeafHash == leafHash(leaf))
computedHash := computeHashFromAunts(proof.Index, proof.Total, proof.LeafHash, proof.Aunts)
return computedHash == rootHash
}
@@ -238,26 +276,18 @@ func computeHashFromAunts(index, total int, leafHash []byte, innerHashes [][]byt
assert(len(innerHashes) > 0)
numLeft := (total + 1) / 2
numLeft := getSplitPoint(total) // largest power of 2 less than total
if index < numLeft {
leftHash := computeHashFromAunts(index, numLeft, leafHash, innerHashes[:len(innerHashes)-1])
assert(leftHash != nil)
return SimpleHashFromTwoHashes(leftHash, innerHashes[len(innerHashes)-1])
return innerHash(leftHash, innerHashes[len(innerHashes)-1])
}
rightHash := computeHashFromAunts(index-numLeft, total-numLeft, leafHash, innerHashes[:len(innerHashes)-1])
assert(rightHash != nil)
return SimpleHashFromTwoHashes(innerHashes[len(innerHashes)-1], rightHash)
return innerHash(innerHashes[len(innerHashes)-1], rightHash)
}
```
### Simple Tree with Dictionaries
The Simple Tree is used to merkelize a list of items, so to merkelize a
(short) dictionary of key-value pairs, encode the dictionary as an
ordered list of `KVPair` structs. The block hash is such a hash
derived from all the fields of the block `Header`. The state hash is
similarly derived.
### IAVL+ Tree
Because Tendermint only uses a Simple Merkle Tree, application developers are expected to use their own Merkle tree in their applications. For example, the IAVL+ Tree (an immutable self-balancing binary tree for persisting application state) is used by the [Cosmos SDK](https://github.com/cosmos/cosmos-sdk/blob/develop/docs/sdk/core/multistore.md).
@@ -301,12 +331,14 @@ type CanonicalVote struct {
Type byte
Height int64 `binary:"fixed64"`
Round int64 `binary:"fixed64"`
Timestamp time.Time
BlockID CanonicalBlockID
Timestamp time.Time
ChainID string
}
```
The field ordering and the fixed sized encoding for the first three fields is optimized to ease parsing of SignBytes
in HSMs. It creates fixed offsets for relevant fields that need to be read in this context.
See [#1622](https://github.com/tendermint/tendermint/issues/1622) for more details.
For more details, see the [signing spec](/docs/spec/consensus/signing.md).
Also, see the motivating discussion in
[#1622](https://github.com/tendermint/tendermint/issues/1622).

View File

@@ -60,7 +60,7 @@ When hashing the Validator struct, the address is not included,
because it is redundant with the pubkey.
The `state.Validators`, `state.LastValidators`, and `state.NextValidators` must always be sorted by validator address,
so that there is a canonical order for computing the SimpleMerkleRoot.
so that there is a canonical order for computing the MerkleRoot.
We also define a `TotalVotingPower` function, to return the total voting power:
@@ -78,6 +78,8 @@ func TotalVotingPower(vals []Validators) int64{
ConsensusParams define various limits for blockchain data structures.
Like validator sets, they are set during genesis and can be updated by the application through ABCI.
When hashed, only a subset of the params are included, to allow the params to
evolve without breaking the header.
```go
type ConsensusParams struct {
@@ -86,6 +88,18 @@ type ConsensusParams struct {
Validator
}
type hashedParams struct {
BlockMaxBytes int64
BlockMaxGas int64
}
func (params ConsensusParams) Hash() []byte {
SHA256(hashedParams{
BlockMaxBytes: params.BlockSize.MaxBytes,
BlockMaxGas: params.BlockSize.MaxGas,
})
}
type BlockSize struct {
MaxBytes int64
MaxGas int64

View File

@@ -0,0 +1,205 @@
# Validator Signing
Here we specify the rules for validating a proposal and vote before signing.
First we include some general notes on validating data structures common to both types.
We then provide specific validation rules for each. Finally, we include validation rules to prevent double-signing.
## SignedMsgType
The `SignedMsgType` is a single byte that refers to the type of the message
being signed. It is defined in Go as follows:
```
// SignedMsgType is a type of signed message in the consensus.
type SignedMsgType byte
const (
// Votes
PrevoteType SignedMsgType = 0x01
PrecommitType SignedMsgType = 0x02
// Proposals
ProposalType SignedMsgType = 0x20
)
```
All signed messages must correspond to one of these types.
## Timestamp
Timestamp validation is subtle and there are currently no bounds placed on the
timestamp included in a proposal or vote. It is expected that validators will honestly
report their local clock time. The median of all timestamps
included in a commit is used as the timestamp for the next block height.
Timestamps are expected to be strictly monotonic for a given validator, though
this is not currently enforced.
## ChainID
ChainID is an unstructured string with a max length of 50-bytes.
In the future, the ChainID may become structured, and may take on longer lengths.
For now, it is recommended that signers be configured for a particular ChainID,
and to only sign votes and proposals corresponding to that ChainID.
## BlockID
BlockID is the structure used to represent the block:
```
type BlockID struct {
Hash []byte
PartsHeader PartSetHeader
}
type PartSetHeader struct {
Hash []byte
Total int
}
```
To be included in a valid vote or proposal, BlockID must either represent a `nil` block, or a complete one.
We introduce two methods, `BlockID.IsZero()` and `BlockID.IsComplete()` for these cases, respectively.
`BlockID.IsZero()` returns true for BlockID `b` if each of the following
are true:
```
b.Hash == nil
b.PartsHeader.Total == 0
b.PartsHeader.Hash == nil
```
`BlockID.IsComplete()` returns true for BlockID `b` if each of the following
are true:
```
len(b.Hash) == 32
b.PartsHeader.Total > 0
len(b.PartsHeader.Hash) == 32
```
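A sketch of these two predicates as methods on the BlockID struct above, following the conditions exactly:

```
func (b BlockID) IsZero() bool {
	return b.Hash == nil &&
		b.PartsHeader.Total == 0 &&
		b.PartsHeader.Hash == nil
}

func (b BlockID) IsComplete() bool {
	return len(b.Hash) == 32 &&
		b.PartsHeader.Total > 0 &&
		len(b.PartsHeader.Hash) == 32
}
```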
## Proposals
The structure of a proposal for signing looks like:
```
type CanonicalProposal struct {
Type SignedMsgType // type alias for byte
Height int64 `binary:"fixed64"`
Round int64 `binary:"fixed64"`
POLRound int64 `binary:"fixed64"`
BlockID BlockID
Timestamp time.Time
ChainID string
}
```
A proposal is valid if each of the following lines evaluates to true for proposal `p`:
```
p.Type == 0x20
p.Height > 0
p.Round >= 0
p.POLRound >= -1
p.BlockID.IsComplete()
```
In other words, a proposal is valid for signing if it contains the type of a Proposal
(0x20), has a positive, non-zero height, a
non-negative round, a POLRound not less than -1, and a complete BlockID.
## Votes
The structure of a vote for signing looks like:
```
type CanonicalVote struct {
Type SignedMsgType // type alias for byte
Height int64 `binary:"fixed64"`
Round int64 `binary:"fixed64"`
BlockID BlockID
Timestamp time.Time
ChainID string
}
```
A vote is valid if each of the following lines evaluates to true for vote `v`:
```
v.Type == 0x1 || v.Type == 0x2
v.Height > 0
v.Round >= 0
v.BlockID.IsZero() || v.BlockID.IsComplete()
```
In other words, a vote is valid for signing if it contains the type of a Prevote
or Precommit (0x1 or 0x2, respectively), has a positive, non-zero height, a
non-negative round, and an empty or valid BlockID.
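Putting the vote rules together as a single check (a sketch, using the CanonicalVote struct above):

```
func isValidVoteForSigning(v CanonicalVote) bool {
	return (v.Type == 0x1 || v.Type == 0x2) &&
		v.Height > 0 &&
		v.Round >= 0 &&
		(v.BlockID.IsZero() || v.BlockID.IsComplete())
}
```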
## Invalid Votes and Proposals
Votes and proposals which do not satisfy the above rules are considered invalid.
Peers gossipping invalid votes and proposals may be disconnected from other peers on the network.
Note, however, that there is not currently any explicit mechanism to punish validators signing votes or proposals that fail
these basic validation rules.
## Double Signing
Signers must be careful not to sign conflicting messages, also known as "double signing" or "equivocating".
Tendermint has mechanisms to publish evidence of validators that signed conflicting votes, so they can be punished
by the application. Note Tendermint does not currently handle evidence of conflicting proposals, though it may in the future.
### State
To prevent such double signing, signers must track the height, round, and type of the last message signed.
Assume the signer keeps the following state, `s`:
```
type LastSigned struct {
Height int64
Round int64
Type SignedMsgType // byte
}
```
After signing a vote or proposal `m`, the signer sets:
```
s.Height = m.Height
s.Round = m.Round
s.Type = m.Type
```
### Proposals
A signer should only sign a proposal `p` if any of the following lines are true:
```
p.Height > s.Height
p.Height == s.Height && p.Round > s.Round
```
In other words, a proposal should only be signed if it's at a higher height, or a higher round for the same height.
Once a proposal or vote has been signed for a given height and round, a proposal should never be signed for the same height and round.
### Votes
A signer should only sign a vote `v` if any of the following lines are true:
```
v.Height > s.Height
v.Height == s.Height && v.Round > s.Round
v.Height == s.Height && v.Round == s.Round && v.Step == 0x1 && s.Step == 0x20
v.Height == s.Height && v.Round == s.Round && v.Step == 0x2 && s.Step != 0x2
```
In other words, a vote should only be signed if it's:
- at a higher height
- at a higher round for the same height
- a prevote for the same height and round where we haven't signed a prevote or precommit (but have signed a proposal)
- a precommit for the same height and round where we haven't signed a precommit (but have signed a proposal and/or a prevote)
This means that once a validator signs a prevote for a given height and round, the only other message it can sign for that height and round is a precommit.
And once a validator signs a precommit for a given height and round, it must not sign any other message for that same height and round.
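The proposal and vote rules above can be combined into a small guard (a sketch, not the actual signer implementation), comparing against the `LastSigned` state `s` defined earlier:

```
// Sketch: return true only if signing would not conflict with LastSigned s.
func maySignProposal(s LastSigned, p CanonicalProposal) bool {
	return p.Height > s.Height ||
		(p.Height == s.Height && p.Round > s.Round)
}

func maySignVote(s LastSigned, v CanonicalVote) bool {
	switch {
	case v.Height > s.Height:
		return true
	case v.Height == s.Height && v.Round > s.Round:
		return true
	case v.Height == s.Height && v.Round == s.Round && v.Type == 0x1 && s.Type == 0x20:
		return true
	case v.Height == s.Height && v.Round == s.Round && v.Type == 0x2 && s.Type != 0x2:
		return true
	default:
		return false
	}
}
```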

View File

@@ -36,22 +36,26 @@ db_backend = "leveldb"
# Database directory
db_dir = "data"
# Output level for logging
log_level = "state:info,*:error"
# Output level for logging, including package level options
log_level = "main:info,state:info,*:error"
# Output format: 'plain' (colored text) or 'json'
log_format = "plain"
##### additional base config options #####
# The ID of the chain to join (should be signed with every transaction and vote)
chain_id = ""
# Path to the JSON file containing the initial validator set and other meta data
genesis_file = "genesis.json"
genesis_file = "config/genesis.json"
# Path to the JSON file containing the private key to use as a validator in the consensus protocol
priv_validator_file = "priv_validator.json"
priv_validator_file = "config/priv_validator.json"
# TCP or UNIX socket address for Tendermint to listen on for
# connections from an external PrivValidator process
priv_validator_laddr = ""
# Path to the JSON file containing the private key to use for node authentication in the p2p protocol
node_key_file = "config/node_key.json"
# Mechanism to connect to the ABCI application: socket | grpc
abci = "socket"
@@ -74,13 +78,13 @@ laddr = "tcp://0.0.0.0:26657"
# A list of origins a cross-domain request can be executed from
# Default value '[]' disables cors support
# Use '["*"]' to allow any origin
cors_allowed_origins = "[]"
cors_allowed_origins = []
# A list of methods the client is allowed to use with cross-domain requests
cors_allowed_methods = "[HEAD GET POST]"
cors_allowed_methods = ["HEAD", "GET", "POST"]
# A list of non simple headers the client is allowed to use with cross-domain requests
cors_allowed_headers = "[Origin Accept Content-Type X-Requested-With X-Server-Time]"
cors_allowed_headers = ["Origin", "Accept", "Content-Type", "X-Requested-With", "X-Server-Time"]
# TCP or UNIX socket address for the gRPC server to listen on
# NOTE: This server only supports /broadcast_tx_commit
@@ -88,7 +92,7 @@ grpc_laddr = ""
# Maximum number of simultaneous connections.
# Does not include RPC (HTTP&WebSocket) connections. See max_open_connections
# If you want to accept more significant number than the default, make sure
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
# Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
@@ -100,7 +104,7 @@ unsafe = false
# Maximum number of simultaneous connections (including WebSocket).
# Does not include gRPC connections. See grpc_max_open_connections
# If you want to accept more significant number than the default, make sure
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
# Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
@@ -113,6 +117,12 @@ max_open_connections = 900
# Address to listen for incoming connections
laddr = "tcp://0.0.0.0:26656"
# Address to advertise to peers for them to dial
# If empty, will use the same port as the laddr,
# and will introspect on the listener or use UPnP
# to figure out the address.
external_address = ""
# Comma separated list of seed nodes to connect to
seeds = ""
@@ -123,7 +133,7 @@ persistent_peers = ""
upnp = false
# Path to address book
addr_book_file = "addrbook.json"
addr_book_file = "config/addrbook.json"
# Set true for strict address routability rules
# Set false for private or local networks
@@ -160,7 +170,7 @@ seed_mode = false
private_peer_ids = ""
# Toggle to disable guard against peers connecting from the same ip.
allow_duplicate_ip = true
allow_duplicate_ip = false
# Peer connection configuration.
handshake_timeout = "20s"
@@ -171,26 +181,26 @@ dial_timeout = "3s"
recheck = true
broadcast = true
wal_dir = "data/mempool.wal"
wal_dir = ""
# size of the mempool
size = 100000
size = 5000
# size of the cache (used to filter transactions we saw earlier)
cache_size = 100000
cache_size = 10000
##### consensus configuration options #####
[consensus]
wal_file = "data/cs.wal/wal"
timeout_propose = "3000ms"
timeout_propose = "3s"
timeout_propose_delta = "500ms"
timeout_prevote = "1000ms"
timeout_prevote = "1s"
timeout_prevote_delta = "500ms"
timeout_precommit = "1000ms"
timeout_precommit = "1s"
timeout_precommit_delta = "500ms"
timeout_commit = "1000ms"
timeout_commit = "1s"
# Make progress as soon as we have all the precommits (as if TimeoutCommit = 0)
skip_timeout_commit = false
@@ -201,10 +211,10 @@ create_empty_blocks_interval = "0s"
# Reactor sleep duration parameters
peer_gossip_sleep_duration = "100ms"
peer_query_maj23_sleep_duration = "2000ms"
peer_query_maj23_sleep_duration = "2s"
# Block time parameters. Corresponds to the minimum time increment between consecutive blocks.
blocktime_iota = "1000ms"
blocktime_iota = "1s"
##### transactions indexer configuration options #####
[tx_index]
@@ -245,7 +255,7 @@ prometheus = false
prometheus_listen_addr = ":26660"
# Maximum number of simultaneous connections.
# If you want to accept a more significant number than the default, make sure
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
max_open_connections = 3

View File

@@ -2,6 +2,6 @@
The RPC documentation is hosted here:
- https://tendermint.com/rpc/
- [https://tendermint.com/rpc/](https://tendermint.com/rpc/)
To update the documentation, edit the relevant `godoc` comments in the [rpc/core directory](https://github.com/tendermint/tendermint/tree/develop/rpc/core).

Some files were not shown because too many files have changed in this diff.