Compare commits


464 Commits

Author SHA1 Message Date
Josef Widder
2506acb958 fixes 2019-06-03 15:48:42 +02:00
Josef Widder
4214694d2b pdf version of draft spec 2019-06-03 15:45:10 +02:00
Josef Widder
782c655823 draft spec 2019-06-03 15:35:28 +02:00
Anton Kaliaev
048ac8d94b remove invalid transactions from the mempool (#3701)
Otherwise, we'll be trying to include them in each consecutive block.

The downside is that evil proposers will be able to drop valid
transactions (see #3322).

Reverts https://github.com/tendermint/tendermint/pull/3625

Fixes #3699
2019-06-03 08:46:02 -04:00
Thane Thomson
21bfd7fba9 Add check for version/version.go TMCoreSemVer (#3704) 2019-06-03 08:41:27 -04:00
Thane Thomson
0dd6b92a64 Relocate GenesisDocProvider and DefaultGenesisDocProviderFunc (#3693)
* Move GenesisDocProvider and DefaultGenesisDocProviderFunc

GenesisDocProvider, being a provider of *types.GenesisDoc, makes sense
to be part of the types package.

DefaultGenesisDocProviderFunc, which relies on *config.Config to produce
a types.GenesisDocProvider, makes sense being part of the config
package.

* Add aliases to avoid breaking node package API

* Revert to original structure

After discussion, it appears as though the best place for the relocated
structures is still in the node package. This means that for the v0.31.6
release and into the future, there will be no changes to these two
entities' APIs.
2019-05-30 18:40:17 -04:00
Anton Kaliaev
9dcee69ac2 v0.31.6 changelog (#3688)
* update changelog

* Update changelog

- less detail about internal stuff like the `mempool`
- focus BUG FIX entries more on what was fixed rather than how

* minor cleanup.

- remove entry for GenesisDocProvider, it's being undone

* minor fix
2019-05-30 18:25:21 -04:00
Girish Ramnani
b522ad0052 docs: fix minor typo (#3681) 2019-05-29 17:32:34 +09:00
Sean Braithwaite
b9508ffecb [p2p] Peer behaviour test tweaks (#3662)
* Peer behaviour test tweaks:

    Address the remaining test issues mentioned in #3552, notably:
    + switch to p2p_test package
    + Use `Error` instead of `Errorf` when not using formatting
    + Add expected/got errors to
    `TestMockPeerBehaviourReporterConcurrency` test

* Peer behaviour equal behaviours test

    + slices of PeerBehaviours should be compared as histograms to
    ensure they have the same set of PeerBehaviours at the same
    frequency.

* TestEqualPeerBehaviours:

    + Add tests for the equivalence between sets of PeerBehaviours
2019-05-27 15:44:56 -04:00
Thane Thomson
a6ac611e77 tendermint testnet: Allow for better hostname control (#3661)
* Allow testnet hostnames to be overridden

This allows one to specify the `--hostname` flag multiple times, each
time providing an additional custom hostname for a respective peer
(validator or non-validator). This overrides any of the
`--hostname-prefix` or `--starting-ip-address` flags.

The string array approach is taken instead of the string slice approach
(see the pflag docs:
https://godoc.org/github.com/spf13/pflag#StringArray) because the string
slice approach (a comma-separated string) doesn't allow for cleaner
multi-line Bash scripts, where this feature is intended to be used (a
sketch follows this commit).

* Reorder conditional for clarity with simpler earlier return

* Allow for specifying peer hostname suffix

* Quote values in help strings for greater clarity

* Fix command switch

* Add CHANGELOG_PENDING entry for PR

* Allow for unique monikers

The current approach to generating monikers for testnet nodes assigns
the local hostname of the machine on which the testnet config was
generated to all nodes. This results in the same moniker for each and
every node.

This commit makes use of the supplied `--hostname-prefix` and
`--hostname-suffix`, or `--hostname` parameters to generate unique
monikers for each node. Alternatively, another parameter
(`--random-monikers`) allows one to forcibly override all of the other
options with random hexadecimal strings.

* Update CHANGELOG_PENDING entry for new command line switch
2019-05-27 15:33:41 -04:00
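
A minimal sketch of the pflag distinction the message above relies on; `--hostname` is the flag from this commit, while `--peers` is purely illustrative:

```
package main

import (
	"fmt"

	flag "github.com/spf13/pflag"
)

func main() {
	// StringArray: repeating --hostname appends one element per flag,
	// which keeps multi-line shell scripts readable.
	hostnames := flag.StringArray("hostname", nil, "hostname of a peer (repeatable)")

	// StringSlice, for contrast: one comma-separated value,
	// e.g. --peers=a,b,c, which is awkward to split across script lines.
	peers := flag.StringSlice("peers", nil, "comma-separated peer hostnames")

	flag.Parse()
	fmt.Println(*hostnames, *peers)
}
```
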
Ethan Buchman
e7bf25844f update PULL_REQUEST_TEMPLATE and CONTRIBUTING (#3655)
* update PULL_REQUEST_TEMPLATE and CONTRIBUTING

* Update .github/PULL_REQUEST_TEMPLATE.md

Co-Authored-By: Thane Thomson <connect@thanethomson.com>

* note ADRs
2019-05-27 14:45:27 -04:00
Anton Kaliaev
bcf10d5bae p2p: peer state init too late and pex message too soon (#3634)
* fix peer state init too late

The peer does not have a state yet; we set it in AddPeer.
We need a new interface method before the mconnection is started
(a sketch follows this commit).

* pex message too soon

fix pex requests being sent too fast on reconnection;
the error is caused by lastReceivedRequests not yet being
deleted when a peer reconnects

* add test case for initpeer

* add prove case

* remove potentially infinite loop

* Update consensus/reactor.go

Co-Authored-By: guagualvcha <baifudong@lancai.cn>

* Update consensus/reactor_test.go

Co-Authored-By: guagualvcha <baifudong@lancai.cn>

* document Reactor interface better

* refactor TestReactorReceiveDoesNotPanicIfAddPeerHasntBeenCalledYet

* fix merge conflicts

* blockchain: remove peer's ID from the pool in InitPeer

Refs #3338

* pex: resetPeersRequestsInfo both upon InitPeer and RemovePeer

* ensure RemovePeer is always called before InitPeer

by removing the peer from the switch last (after we've stopped it and
removed from all reactors)

* add some comments for ConsensusReactor#InitPeer

* fix pex reactor

* format code

* fix spelling

* update changelog

* remove unused methods

* do not clear lastReceivedRequests upon error

only in RemovePeer

* call InitPeer before we start the peer!

* add a comment to InitPeer

* write a test

* use waitUntilSwitchHasAtLeastNPeers func

* bring back timeouts

* Test to ensure Receive panics if InitPeer has not been called
2019-05-27 14:39:58 -04:00
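
A sketch of the ordering this PR establishes, with simplified types (the real Reactor interface lives in the p2p package and has more methods):

```
package p2p

// Peer is reduced to the minimum needed for the sketch.
type Peer interface{ ID() string }

// InitPeer runs before the peer (and its mconnection) is started, so
// Receive can never observe a peer without state; AddPeer runs after.
type Reactor interface {
	InitPeer(peer Peer) Peer
	AddPeer(peer Peer)
	RemovePeer(peer Peer, reason interface{})
}

// addPeer shows the ordering: init, start, then add; on a failed start
// the reactor cleans up via RemovePeer.
func addPeer(r Reactor, p Peer, start func(Peer) error) error {
	p = r.InitPeer(p) // peer state exists before any message can arrive
	if err := start(p); err != nil {
		r.RemovePeer(p, err)
		return err
	}
	r.AddPeer(p)
	return nil
}
```
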
Carlos Flores
5997e75c84 fix integration script (#3667) 2019-05-23 12:56:57 -04:00
Ismail Khoffi
97ceeed054 Merge pull request #3670 from yutianwu/fix_proxy
[R4R] Fix missing context parameter in proxy
2019-05-23 10:09:52 +02:00
yutianwu
3ef9e453b7 update change log 2019-05-23 14:26:11 +08:00
Ismail Khoffi
c7b324d3f2 Merge pull request #3675 from alexanderbez/bez/3672-improve-AddSignatureFromPubKey-error
Improve error message returned from AddSignatureFromPubKey
2019-05-23 07:45:05 +02:00
Aleksandr Bezobchuk
cfd42be0fe Improve error and tests 2019-05-22 20:02:00 -04:00
Roman Shtylman
f1f243d749 p2p/pex: consult seeds in crawlPeersRoutine (#3647)
* p2p/pex: consult seeds in crawlPeersRoutine

This changeset alters the startup behavior for crawlPeersRoutine. Previously
the routine would crawl a random selection of peers on startup. For a
new seed node, there are no peers. As a result, new seed nodes are unable
to bootstrap themselves with a list of peers until another node with a list
of peers connects to the seed. If this node relies on the seed node for peers,
then the two will not discover more peers.

This changeset makes the startup behavior for crawlPeersRoutine connect to
any seed nodes. Upon connecting, a request for peers will be sent to the seed node
thus helping bootstrap our seed node.

* p2p/pex: Adjust error message for no peers

Co-Authored-By: Ethan Buchman <ethan@coinculture.info>
2019-05-21 17:05:56 -04:00
yutianwu
fcce9ed4db fix lint 2019-05-21 14:23:50 +08:00
yutianwu
c21b4fcc93 fix bug of proxy 2019-05-21 14:12:40 +08:00
Sean Braithwaite
86cf8ee3f9 p2p: PeerBehaviour implementation (#3539) (#3552)
* p2p: initial implementation of peer behaviour

* [p2p] re-use newMockPeer

* p2p: add inline docs for peer_behaviour interface

* [p2p] Align PeerBehaviour interface (#3558)

* [p2p] make switchedPeerHebaviour private

* [p2p] make storePeerHebaviour private

* [p2p] Add CHANGELOG_PENDING entry for PeerBehaviour

* [p2p] Adjustment naming for PeerBehaviour

* [p2p] Add coarse lock around storedPeerBehaviour

* [p2p]: Fix non-pointer methods in storedPeerBehaviour

    + Structs with embedded locks must specify all methods with pointer
     receivers to avoid creating a copy of the embedded lock (see the
     sketch after this commit).

* [p2p] Thorough refactoring based on comments in #3552

    + Decouple PeerBehaviour interface from Peer by parametrizing
    methods with `p2p.ID` instead of `p2p.Peer`
    + Setter methods wrapped in a write lock
    + Getter methods wrapped in a read lock
    + Getter methods on storedPeerBehaviour now take a `p2p.ID`
    + Getter methods return a copy of underlying stored behaviours
    + Add doc strings to public types and methods

* [p2p] make structs public

* [p2p] Test empty StoredPeerBehaviour

* [p2p] typo fix

* [p2p] add TestStoredPeerBehaviourConcurrency

    + Add a test which uses StoredPeerBehaviour in multiple goroutines
      to ensure thread-safety.

* Update p2p/peer_behaviour.go

Co-Authored-By: brapse <brapse@gmail.com>

* Update p2p/peer_behaviour.go

Co-Authored-By: brapse <brapse@gmail.com>

* Update p2p/peer_behaviour.go

Co-Authored-By: brapse <brapse@gmail.com>

* Update p2p/peer_behaviour.go

Co-Authored-By: brapse <brapse@gmail.com>

* Update p2p/peer_behaviour.go

Co-Authored-By: brapse <brapse@gmail.com>

* Update p2p/peer_behaviour_test.go

Co-Authored-By: brapse <brapse@gmail.com>

* Update p2p/peer_behaviour.go

Co-Authored-By: brapse <brapse@gmail.com>

* Update p2p/peer_behaviour.go

Co-Authored-By: brapse <brapse@gmail.com>

* Update p2p/peer_behaviour.go

Co-Authored-By: brapse <brapse@gmail.com>

* Update p2p/peer_behaviour.go

Co-Authored-By: brapse <brapse@gmail.com>

* [p2p] field ordering convention

* p2p: peer behaviour refactor

    + Change naming of reporting behaviour to `Report`
    + Remove the responsibility of distinguishing between the categories
    of good and bad behaviour and instead focus on reporting behaviour.

* p2p: rename PeerReporter -> PeerBehaviourReporter
2019-05-10 13:45:50 -04:00
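
The embedded-lock note above is the classic copy-a-mutex pitfall in Go; a self-contained sketch with simplified types:

```
package main

import "sync"

// storedPeerBehaviour embeds a mutex. With a value receiver, every call
// would copy the struct, and with it the lock, silently breaking mutual
// exclusion (go vet's copylocks check flags this). Pointer receivers
// avoid the copy.
type storedPeerBehaviour struct {
	sync.Mutex
	behaviours map[string][]string
}

func (s *storedPeerBehaviour) Report(peerID, behaviour string) {
	s.Lock()
	defer s.Unlock()
	s.behaviours[peerID] = append(s.behaviours[peerID], behaviour)
}

func main() {
	s := &storedPeerBehaviour{behaviours: map[string][]string{}}
	s.Report("peer1", "badMessage")
}
```
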
Zarko Milosevic
6f1ccb6c49 [ADR] ADR-36: blockchain reactor refactor spec (#3506)
* Add blockchain reactor refactor ADR

* docs: Correct markdown and go syntax in adr-036

* Update docs/architecture/adr-036-blockchain-reactor-refactor.md

Co-Authored-By: milosevic <zarko@interchain.io>

* Add comments and address reviewer comments

* Improve formating

* Small improvements

* adr-036 -> adr-040
2019-05-10 13:37:21 -04:00
Anton Kaliaev
a076b48202 libs/db: boltdb: use slice instead of sync.Map (#3633)
for storing batch keys and values in boltDBBatch.

NOTE: batch does not have to be safe for concurrent access. Delete may
be slow, but given it should not be used often, it's ok.

Fixes #3631
2019-05-07 14:06:20 +04:00
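
A sketch of the swap with simplified types (not the actual boltDBBatch code; the nil-value deletion marker is an assumption of this sketch):

```
package db

import "bytes"

// operation records one pending write; value == nil marks a deletion.
type operation struct {
	key, value []byte
}

// boltDBBatch collects writes in an ordered slice instead of a sync.Map:
// batches need no concurrent access, so appends are cheap and iteration
// preserves write order.
type boltDBBatch struct {
	ops []operation
}

func (b *boltDBBatch) Set(key, value []byte) {
	b.ops = append(b.ops, operation{key, value})
}

// Delete scans linearly for earlier writes to the same key; this is the
// "may be slow, but rarely used" trade-off mentioned above.
func (b *boltDBBatch) Delete(key []byte) {
	for i := range b.ops {
		if bytes.Equal(b.ops[i].key, key) {
			b.ops[i].value = nil
		}
	}
	b.ops = append(b.ops, operation{key, nil})
}
```
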
Anton Kaliaev
a7358bc69f libs/db: conditional compilation (#3628)
* libs/db: conditional compilation

For cleveldb: go build -tags cleveldb
For boltdb: go build -tags boltdb

Fixes #3611

* document db_backend param better

* remove deprecated LevelDBBackend

* update changelog

* add missing lines

* add new line

* fix TestRemoteDB

* add a line about boltdb tag

* Revert "remove deprecated LevelDBBackend"

This reverts commit 1aa85453f76605e0c4d967601bbe26240e668d51.

* make PR non breaking

* change DEPRECATED label format

https://stackoverflow.com/a/36360323/820520
2019-05-07 12:33:47 +04:00
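
A sketch of the build-tag pattern (tags as in the commit; the helper function is illustrative): the constraint line keeps the file out of the build unless the tag is passed.

```
// +build cleveldb

// This file only compiles under `go build -tags cleveldb`; a sibling
// file constrained with `// +build !cleveldb` would provide the default.
package db

// UsingCLevelDB reports which backend this binary was built with.
func UsingCLevelDB() bool { return true }
```
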
Anton Kaliaev
27909e5d2a mempool: remove only valid (Code==0) txs on Update (#3625)
* mempool: remove only valid (Code==0) txs on Update

so evil proposers can't drop valid txs in Commit stage.

Also remove invalid (Code!=0) txs from the cache so they can be
resubmitted.

Fixes #3322

@rickyyangz:

In the end of commit stage, we will update mempool to remove all the txs
in current block.

// Update mempool.
err = blockExec.mempool.Update(
	block.Height,
	block.Txs,
	TxPreCheck(state),
	TxPostCheck(state),
)

Assume an account has 3 transactions in the mempool, with sequences
100, 101 and 102 respectively. An evil proposer can package only the
101 and 102 transactions into its proposal block, leaving 100 still in
the mempool; then the two txs will be removed from all validators'
mempools on commit. So the account loses the two valid txs.

@ebuchman:

In the longer term we may want to do something like #2639 so we can
validate txs before we commit the block. But even in this case we'd only
want to run the equivalent of CheckTx, which means the DeliverTx could
still fail even if the CheckTx passes depending on how the app handles
the ABCI Code semantics. So more work will be required around the ABCI
code. See also #2185

* add changelog entry and tests

* improve changelog message

* reformat code
2019-05-07 12:25:35 +04:00
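
A sketch (hypothetical types, not the real mempool code) of the Code==0 rule this PR introduced:

```
package main

import "fmt"

// deliverTxResponse is a hypothetical stand-in for the ABCI result type.
type deliverTxResponse struct {
	Tx   string
	Code uint32
}

// txsToRemove applies the rule from this PR: only txs the app committed
// successfully (Code == 0) leave the mempool, so a proposer cannot make
// honest nodes drop valid txs it chose to omit from its block.
func txsToRemove(responses []deliverTxResponse) map[string]struct{} {
	remove := make(map[string]struct{})
	for _, res := range responses {
		if res.Code == 0 {
			remove[res.Tx] = struct{}{}
		}
		// Code != 0: the tx stays in the mempool, and is cleared from
		// the cache so it can be resubmitted.
	}
	return remove
}

func main() {
	fmt.Println(txsToRemove([]deliverTxResponse{{"a", 0}, {"b", 1}}))
}
```
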
Anton Kaliaev
1e073817de rpc: /dial_peers: only mark peers as persistent if flag is on (#3620)
## Description

also

    handle errors from DialPeersAsync
    remove nil addr from log msg
    fix TestPEXReactorDoesNotDisconnectFromPersistentPeerInSeedMode

This is a follow-up from
#3593 (review)

Fixes most of #3617, except #3593 (comment)

## Commits

* rpc: /dial_peers: only mark peers as persistent if flag is on

also

- handle errors from DialPeersAsync
- remove nil addr from log msg
- fix TestPEXReactorDoesNotDisconnectFromPersistentPeerInSeedMode

This is a follow-up from
https://github.com/tendermint/tendermint/pull/3593#pullrequestreview-233556909

* remove a call to AddPersistentPeers

TestDialFail will trigger a reconnect
2019-05-07 11:09:06 +04:00
Ismail Khoffi
2bb1a87d41 Merge pull request #3629 from leoluk/fix-boltdb-batch
libs/db: fix boltdb batching
2019-05-07 09:03:49 +02:00
Leo
d0d9ef16f7 libs/db: fix boltdb batching
`[]byte` is unhashable and needs to be cast to a string.

See https://github.com/tendermint/tendermint/pull/3610#discussion_r280995538

Ref https://github.com/tendermint/tendermint/issues/3626
2019-05-06 20:10:39 +00:00
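
The core of the fix as a standalone sketch: `[]byte` has no equality operator and cannot key a Go map, while the `string` conversion copies the bytes and can.

```
package main

import "fmt"

func main() {
	batch := make(map[string][]byte) // map[[]byte][]byte would not compile
	key, value := []byte("height"), []byte("42")
	batch[string(key)] = value // the conversion copies, so mutating key later is safe
	fmt.Println(string(batch["height"]))
}
```
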
Anton Kaliaev
60b833403c libs/db: close boltDBIterator (#3627)
Refs https://github.com/tendermint/tendermint/pull/3610#discussion_r281201274

If we do not close, other txs will be stuck forever.
2019-05-06 21:37:41 +04:00
Anton Kaliaev
debf8f70c9 libs/db: bbolt (etcd's fork of bolt) (#3610)
Description:

    add a boltdb implementation of the db interface.

    BoltDB is a safe, high-performance embedded KV database based on a B+tree and mmap.
    It is now maintained by the etcd-io org: https://github.com/etcd-io/bbolt
    It performs much better than LSM-tree stores (LevelDB) for random and scan reads
    (a usage sketch follows this commit).

Replaced #3604
Refs #1835

Commits:

* add boltdb for storage

* update boltdb in gopkg.toml

* update bbolt in Gopkg.lock

* dep update  Gopkg.lock

* add a boltdb bucket when creating BoltDB

* some modifications

* address my own comments

* fix most of the Iterator tests for boltdb

* bolt does not support concurrent writes while iterating

* add WARNING word

* note that ReadOnly option is not supported

* fixes after Ismail's review

Co-Authored-By: melekes <anton.kalyaev@gmail.com>

* panic if View returns error

plus specify bucket in the top comment
2019-05-06 13:42:21 +04:00
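
A minimal bbolt usage sketch (plain library usage, not tendermint's wrapper; file and bucket names are illustrative):

```
package main

import (
	"fmt"
	"log"

	bolt "github.com/etcd-io/bbolt"
)

func main() {
	// One writer at a time; reads run in their own transactions.
	db, err := bolt.Open("example.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// All writes happen inside an Update (read-write) transaction.
	err = db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists([]byte("tm"))
		if err != nil {
			return err
		}
		return b.Put([]byte("key"), []byte("value"))
	})
	if err != nil {
		log.Fatal(err)
	}

	// Reads use a View (read-only) transaction.
	_ = db.View(func(tx *bolt.Tx) error {
		fmt.Printf("%s\n", tx.Bucket([]byte("tm")).Get([]byte("key")))
		return nil
	})
}
```
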
Ethan Buchman
4519ef3109 adr renames (#3622) 2019-05-05 12:42:26 -04:00
Mostafa Sedaghat Joo
86f2765b32 Add adr-36 (#3324) 2019-05-05 12:19:41 -04:00
Daniil Lashin
e2f5471545 ADR-037: DeliverBlock (#3420)
* ADR 037 Initial draft

* Add link to initial conversation

* Update initial draft

* Fix indentations

* Change negative consequences

* Fix indentations

* Change ADR id

* Add one more positive consequence

* Rollback ADR number
2019-05-05 12:19:19 -04:00
Ethan Buchman
4905640e9b types: CommitVotes struct as last step towards #1648 (#3298)
* VoteSignBytes builds CanonicalVote

* CommitVotes implements VoteSetReader

- new CommitVotes struct holds both the Commit and the ValidatorSet and
  implements VoteSetReader
- ToVote takes a ValidatorSet

* fix TestCommit

* use CommitSig.BlockID

Commits may include votes for a different BlockID, could be nil,
or different altogether. This means we can't use `commit.BlockID`
for reconstructing the sign bytes, since up to -1/3 of the commits
might be for independent BlockIDs. This means CommitSig will need to
include an indicator for what BlockID it signed - if it's not the
committed one or nil, it will need to include it fully in order to be
verified. This is unfortunate but unavoidable so long as we include
votes for non-committed BlockIDs (which we do to track validator
liveness)

* fixes from review

* remove CommitVotes. CommitSig contains address

* remove commit.canonicalVote method

* toVote -> getVote, takes valIdx

* update adr-025

* commit.ToVoteSet -> CommitToVoteSet

* add test

* fix from review
2019-05-05 18:01:35 +02:00
needkane
4c60ab83c8 Update apps.md (#3621) 2019-05-05 10:23:41 -04:00
Anton Kaliaev
5051a1f7bc mempool: move interface into mempool package (#3524)
## Description

Refs #2659

Breaking changes in the mempool package:

[mempool] #2659 Mempool now an interface
    - old Mempool renamed to CListMempool
    - NewMempool renamed to NewCListMempool
    - Option renamed to CListOption
    - MempoolReactor renamed to Reactor
    - NewMempoolReactor renamed to NewReactor
    - unexpose TxID method
    - TxInfo.PeerID renamed to SenderID
    - unexpose MempoolReactor.Mempool

Breaking changes in the state package:

[state] #2659 Mempool interface moved to mempool package
    - MockMempool moved to top-level mock package and renamed to Mempool

Non Breaking changes in the node package:

[node] #2659 Add Mempool method, which allows you to access mempool
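
A compressed sketch of the resulting interface shape (method set and signatures simplified; the real interface uses concrete types such as types.Tx):

```
package mempool

// Mempool is what consensus/RPC code now programs against; CListMempool
// is the renamed concrete implementation, and a mock in the top-level
// mock package can stand in for tests.
type Mempool interface {
	CheckTx(tx []byte) error                 // validate and admit a tx
	Update(height int64, txs [][]byte) error // drop txs committed in a block
	Size() int                               // txs currently in the pool
}
```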

## Commits

* move Mempool interface into mempool package

Refs #2659

Breaking changes in the mempool package:

- Mempool now an interface
- old Mempool renamed to CListMempool

Breaking changes to state package:

- MockMempool moved to mempool/mock package and renamed to Mempool
- Mempool interface moved to mempool package

* assert CListMempool impl Mempool

* gofmt code

* rename MempoolReactor to Reactor

- combine everything into one interface
- rename TxInfo.PeerID to TxInfo.SenderID
- unexpose MempoolReactor.Mempool

* move mempool mock into top-level mock package

* add a fixme

TxsFront should not be a part of the Mempool interface
because it leaks implementation details. Instead, we need to come up
with general interface for querying the mempool so the MempoolReactor
can fetch and broadcast txs to peers.

* change node#Mempool to return interface

* save commit = new reactor arch

* Revert "save commit = new reactor arch"

This reverts commit 1bfceacd9d65a720574683a7f22771e69af9af4d.

* require CListMempool in mempool.Reactor

* add two changelog entries

* fixes after my own review

* quote interfaces, structs and functions

* fixes after Ismail's review

* make node's mempool an interface

* make InitWAL/CloseWAL methods a part of Mempool interface

* fix merge conflicts

* make node's mempool an interface
2019-05-04 10:41:31 +04:00
Anton Kaliaev
8711af608f p2p: make persistent prop independent of conn direction (#3593)
## Description

Previously only outbound peers could be persistent.

Now, even if the peer is inbound, if it's marked as persistent, when/if conn is lost,
Tendermint will try to reconnect. This part is actually optional and can be reverted.

Plus, seed won't disconnect from inbound peer if it's marked as
persistent. Fixes #3362

## Commits

* make persistent prop independent of conn direction

Previously only outbound peers could be persistent. Now, even if the peer
is inbound, if it's marked as persistent, when/if conn is lost,
Tendermint will try to reconnect.

Plus, seed won't disconnect from inbound peer if it's marked as
persistent. Fixes #3362

* fix TestPEXReactorDialPeer test

* add a changelog entry

* update changelog

* add two tests

* reformat code

* test UnsafeDialPeers and UnsafeDialSeeds

* add TestSwitchDialPeersAsync

* spec: update p2p/config spec

* fixes after Ismail's review

* Apply suggestions from code review

Co-Authored-By: melekes <anton.kalyaev@gmail.com>

* fix merge conflict

* remove sleep from TestPEXReactorDoesNotDisconnectFromPersistentPeerInSeedMode

We don't need it actually.
2019-05-03 17:21:56 +04:00
Anton Kaliaev
9926ae768e abci/types: update comment (#3612)
Fixes #3607
2019-05-02 09:53:33 +02:00
Ivan Kushmantsev
5df6cf563a p2p: session should terminate on nonce wrapping (#3531) (#3609)
Refs #3531
2019-05-02 10:09:56 +04:00
JamesRay
2c26d95ab9 cs/replay: execCommitBlock should not read from state.lastValidators (#3067)
* execCommitBlock should not read from state.lastValidators

* fix height 1

* fix blockchain/reactor_test

* fix consensus/mempool_test

* fix consensus/reactor_test

* fix consensus/replay_test

* add CHANGELOG

* fix consensus/reactor_test

* fix consensus/replay_test

* add a test for replay validators change

* fix mem_pool test

* fix byzantine test

* remove redundant code

* reduce validator change blocks to 6

* fix

* return peer0 config

* separate testName

* separate testName 1

* separate testName 2

* separate app db path

* separate app db path 1

* add a lock before startNet

* move the lock to reactor_test

* simulate just once

* try to find problem

* handshake only saveState when app version changed

* update gometalinter to 3.0.0 (#3233)

in the attempt to fix https://circleci.com/gh/tendermint/tendermint/43165

also

    code is simplified by running gofmt -s .
    remove unused vars
    enable linters we're currently passing
    remove deprecated linters
(cherry picked from commit d470945503)

* gofmt code

* goimport code

* change the bool name to testValidatorsChange

* adjust receive kvstore.ProtocolVersion

* adjust receive kvstore.ProtocolVersion 1

* adjust receive kvstore.ProtocolVersion 3

* fix merge execution.go

* fix merge develop

* fix merge develop 1

* fix run cleanupFunc

* adjust code according to reviewers' opinion

* modify the func name match the convention

* simplify simulate a chain containing some validator change txs 1

* test CI error

* Merge remote-tracking branch 'upstream/develop' into fixReplay 1

* fix pubsub_test

* subscribeUnbuffered vote channel
2019-05-01 17:15:53 -04:00
Ivan Kushmantsev
a2a68df521 cli: add option to not clear address book with unsafe reset (#3606)
Fixes #3585
2019-05-01 10:01:41 +04:00
Anton Kaliaev
43348022d6 pex: various follow-ups (#3605)
* p2p: merge switch cases

also improve the error msg in privval

* pex: refactor code plus update specification

follow-up to https://github.com/tendermint/tendermint/pull/3603

* Update docs/spec/reactors/pex/pex.md

Co-Authored-By: melekes <anton.kalyaev@gmail.com>
2019-05-01 09:38:26 +04:00
Roman Shtylman
40dbad9915 pex: dial seeds when address book needs more addresses (#3603)
If we are low on addresses for peering, we need to discover more peers. The
previous behavior would query existing peers; however, if an existing peer
does not participate in peer exchange, then our node will not discover more peers.

This change consults both existing peers as well as seeds when there is a deficit
in address book addresses. This allows for discovering peers through existing channels
as well as via seeds if existing peers do not share addresses.
2019-04-30 10:17:57 +04:00
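
One plausible shape of the deficit logic described above, as a schematic sketch (names simplified, not the actual pex reactor code):

```
package pex

type addrBook interface{ NeedMoreAddrs() bool }

type reactor struct {
	book addrBook
}

// ensureAddrs asks current peers for addresses when the book runs low;
// if no peer can help (e.g. none participate in PEX), it falls back to
// dialing the configured seeds, so discovery never stalls.
func (r *reactor) ensureAddrs(askPeers func() bool, dialSeeds func()) {
	if !r.book.NeedMoreAddrs() {
		return
	}
	if !askPeers() {
		dialSeeds()
	}
}
```
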
Ethan Buchman
2585187880 Merge pull request #3597 from tendermint/bucky/merge-master
Bucky/merge master
2019-04-26 19:14:54 -04:00
Ethan Buchman
d76952c674 Merge branch 'master' into bucky/merge-master 2019-04-26 17:49:28 -04:00
Thane Thomson
7b162f5c54 node: refactor node.NewNode (#3456)
The node.NewNode method is pretty complex at the moment, and in order to address issues like #3156, we need to simplify the interface for partial node instantiation. In some places, we don't need to build up a full node (like in the node.TestCreateProposalBlock test), but the complexity of such partial instantiation needs to be reduced.

This PR aims to eventually make this easier/simpler.

See also this gist https://gist.github.com/thanethomson/56e1640d057a26186e38ad678a1d114c for some background work done when starting to refactor here.

## Commits:

* [WIP] Refactor node.NewNode to simplify

The `node.NewNode` method is pretty complex at the moment, and in order
to address issues like #3156, we need to simplify the interface for
partial node instantiation. In some places, we don't need to build up a
full node (like in the `node.TestCreateProposalBlock` test), but the
complexity of such partial instantiation needs to be reduced.

This PR aims to eventually make this easier/simpler.

* Refactor state loading and genesis doc provider into state package

* Refactor for clarity of return parameters

* Fix incorrect capitalization of error messages

* Simplify extracted functions' names

* Document optionally-prefixed functions

* Refactor optionallyFastSync for clarity of separation of concerns

* Restructure function for early return

* Restructure function for early return

* Remove dependence on deprecated panic functions

* refactor code a bit more

plus, expose PEXReactor on node

* align logger names

* add a changelog entry

* align logger names 2

* add a note about PEXReactor returning nil
2019-04-26 16:05:39 +04:00
Thane Thomson
70592cc4d8 libs/common: remove deprecated PanicXXX functions (#3595)
* Remove deprecated PanicXXX functions from codebase

As per discussion over
[here](https://github.com/tendermint/tendermint/pull/3456#discussion_r278423492),
we need to remove these `PanicXXX` functions and eliminate our
dependence on them. In this PR, each and every `PanicXXX` function call
is replaced with a simple `panic` call.

* add a changelog entry
2019-04-26 14:23:43 +04:00
Ethan Buchman
90997ab1b5 docs: update contributing.md (#3503)
Minor updates to reflect squash merging and how to prepare releases
2019-04-23 13:34:14 +04:00
Zarko Milosevic
b738add80c cs: fix nondeterministic tests (#3582)
Should fix #3451, #2723 and #3317.

Test TestResetTimeoutPrecommitUponNewHeight is simplified to reduce the risk of timeout failure. Furthermore, the timeout we wait for TimeoutEvents is increased, and the timeout value is more precisely computed. This should hopefully decrease the chance of non-deterministic test failures.

This assertion is problematic to ensure consistently due to its dependency on the scheduler. On the other hand, if I am not wrong, the order in which messages are read from the channel respects the order in which they are written. Therefore, the process will receive 2f+1 precommits that are not all for v (one is for nil), so TriggeredTimeoutPrecommit would be set to true and we don't need to assert it. It would be better to still assert it, but I don't know how to do that without sleep, which is ugly and causes nondeterministic failures.
2019-04-23 13:19:16 +04:00
Greg Hill
968e955c46 testnet cmd: add config option (#3559)
Option to explicitly provide the -config in the testnet command, closing #3160.
2019-04-23 12:52:46 +04:00
Anton Kaliaev
ebf815ee57 cs/replay: check appHash for each block (#3579)
* check every block appHash

Fixes #3460

Not really needed, but it would detect if the app changed in a way it
shouldn't have.

* add a changelog entry

* no need to return an error if we panic

* rename methods

* rename methods once again

* add a test

* correct error msg

plus fix a few go-lint warnings

* better panic msg
2019-04-23 12:22:40 +04:00
Ismail Khoffi
8db7e74b87 privval: increase timeout to mitigate non-deterministic test failure (#3580)
This should fix #3576 (ran it many times locally but only time will tell). The test actually only checked for the opcode of the error. From the name of the test we actually want to test if we see a timeout after a pre-defined time.

## Commits:

* increase readWrite timeout as it is also used in the `Accept` of the tcp
listener:

 - before, this caused the readWriteTimeout to (rarely) kick in during Accept
 - as a side-effect: remove an obsolete time.Sleep: in both listener cases
 the Accept will only return after successfully accepting, and the timeout
 that is supposed to be tested here will be triggered because there is a
 read without a write
 - see if we actually run into a timeout error (the whole purpose of
 this test AFAIU)

Signed-off-by: Ismail Khoffi <Ismail.Khoffi@gmail.com>

* make local test-vars `const`

Signed-off-by: Ismail Khoffi <Ismail.Khoffi@gmail.com>

## Additional comments:

@xla: Confusing how an accept could take longer than that, but assuming a noisy environment full of little docker whales will be slower than what 50 years of silicon are capable of. The only thing I'd be wary of is that we mask structural issues with the code by just bumping the timeout; if we are sensitive towards that, it warrants investigation, but again this might only be true in the environment our CI runs in.
2019-04-22 12:04:04 +04:00
Ethan Buchman
4253e67c07 Merge pull request #3571 from tendermint/v0.31
V0.31.5
2019-04-19 07:46:11 -04:00
Sean Braithwaite
671c5c9b84 crypto: Proof of Concept for iterative version of SimpleHashFromByteSlices (#2611) (#3530)
(#2611) had suggested that an iterative version of
SimpleHashFromByteSlice would be faster, presumably because
 we can envision some overhead accumulating from stack
frames and function calls. Additionally, a recursive algorithm risks
hitting the stack limit and causing a stack overflow should the tree
be too large.

Provided here is an iterative alternative, a simple test to assert
correctness and a benchmark. On the performance side, there appears to
be no overall difference:

```
BenchmarkSimpleHashAlternatives/recursive-4                20000 77677 ns/op
BenchmarkSimpleHashAlternatives/iterative-4                20000 76802 ns/op
```

On the surface it might seem that any additional overhead is due to
the different allocation patterns of the implementations. The recursive
version uses a single `[][]byte` slice which it then re-slices at each level of the tree.
The iterative version copies the `[][]byte` once within the function and
then rewrites sub-slices of that array at each level of the tree.

Experimenting by modifying the code to simply calculate the
hash and not store the result shows little to no difference in performance.

These preliminary results suggest:
1. The performance of the current implementation is pretty good
2. Go has low overhead for recursive functions
3. The performance of the SimpleHashFromByteSlice routine is dominated
by the actual hashing of data

Although this work is in no way exhaustive, point #3 suggests that
optimizations of this routine would need to take an alternative
approach to make significant improvements on the current performance.

Finally, considering that the recursive implementation is easier to
read, it might not be worthwhile to switch to a less intuitive
implementation for so little benefit.

* re-add slice re-writing
* [crypto] Document SimpleHashFromByteSlicesIterative
2019-04-18 17:31:36 +02:00
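
A sketch of the iterative pairwise hashing being compared (leaf/inner-node domain separation and the exact split rule of the real SimpleHashFromByteSlices are omitted):

```
package main

import (
	"crypto/sha256"
	"fmt"
)

// iterativeHash combines adjacent nodes in place until one root remains,
// trading the recursive call tree for a loop; note how each pass rewrites
// sub-slices of the same backing array, as described above.
func iterativeHash(items [][]byte) []byte {
	if len(items) == 0 {
		return nil
	}
	hashes := make([][]byte, len(items))
	for i, item := range items {
		h := sha256.Sum256(item)
		hashes[i] = h[:]
	}
	for len(hashes) > 1 {
		next := hashes[:0]
		for i := 0; i < len(hashes); i += 2 {
			if i+1 == len(hashes) {
				next = append(next, hashes[i]) // odd node carries up
				break
			}
			h := sha256.Sum256(append(append([]byte{}, hashes[i]...), hashes[i+1]...))
			next = append(next, h[:])
		}
		hashes = next
	}
	return hashes[0]
}

func main() {
	fmt.Printf("%x\n", iterativeHash([][]byte{[]byte("a"), []byte("b"), []byte("c")}))
}
```
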
kevlubkcm
f2aa1bf50e bandaid for non-deterministic clist test (#3575)
* add a deterministic timeout

Co-Authored-By: kevlubkcm <36485490+kevlubkcm@users.noreply.github.com>
2019-04-17 18:14:01 +02:00
Thane Thomson
90465f727f rpc: add support for batched requests/responses (#3534)
Continues from #3280 in building support for batched requests/responses in the JSON RPC (as per issue #3213).

* Add JSON RPC batching for client and server

As per #3213, this adds support for [JSON RPC batch requests and
responses](https://www.jsonrpc.org/specification#batch) (a request-shape
sketch follows this commit).

* Add additional checks to ensure client responses are the same as results

* Fix case where a notification is sent and no response is expected

* Add test to check that JSON RPC notifications in a batch are left out in responses

* Update CHANGELOG_PENDING.md

* Update PR number now that PR has been created

* Make errors start with lowercase letter

* Refactor batch functionality to be standalone

This refactors the batching functionality to rather act in a standalone
way. In light of supporting concurrent goroutines making use of the same
client, it would make sense to have batching functionality where one
could create a batch of requests per goroutine and send that batch
without interfering with a batch from another goroutine.

* Add examples for simple and batch HTTP client usage

* Check errors from writer and remove nolinter directives

* Make error strings start with lowercase letter

* Refactor examples to make them testable

* Use safer deferred shutdown for example Tendermint test node

* Recompose rpcClient interface from pre-existing interface components

* Rename WaitGroup for brevity

* Replace empty ID string with request ID

* Remove extraneous test case

* Convert first letter of errors.Wrap() messages to lowercase

* Remove extraneous function parameter

* Make variable declaration terse

* Reorder WaitGroup.Done call to help prevent race conditions in the face of failure

* Swap mutex to value representation and remove initialization

* Restore empty JSONRPC string ID in response to prevent nil

* Make JSONRPCBufferedRequest private

* Revert PR hard link in CHANGELOG_PENDING

* Add client ID for JSONRPCClient

This adds code to automatically generate a randomized client ID for the
JSONRPCClient, and adds a check of the IDs in the responses (if one was
set in the requests).

* Extract response ID validation into separate function

* Remove extraneous comments

* Reorder fields to indicate clearly which are protected by the mutex

* Refactor for loop to remove indexing

* Restructure and combine loop

* Flatten conditional block for better readability

* Make multi-variable declaration slightly more readable

* Change for loop style

* Compress error check statements

* Make function description more generic to show that we support different protocols

* Preallocate memory for request and result objects
2019-04-17 19:10:12 +04:00
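
A minimal sketch of the wire format: a JSON-RPC 2.0 batch is an array of request objects in one HTTP round trip, answered by an array of responses (the endpoint is the default local RPC address; the methods shown are examples):

```
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// Two requests in one POST; a notification (no "id") would get no
	// entry in the response array, per the JSON-RPC 2.0 spec.
	batch := []byte(`[
	  {"jsonrpc":"2.0","id":"1","method":"status","params":{}},
	  {"jsonrpc":"2.0","id":"2","method":"net_info","params":{}}
	]`)

	resp, err := http.Post("http://localhost:26657", "application/json", bytes.NewReader(batch))
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()

	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```
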
kevlubkcm
621c0e629d docs: fix typo in clist readme (#3574) 2019-04-17 19:09:17 +04:00
Anton Kaliaev
c0e8fb5085 p2p: (seed mode) limit the number of attempts to connect to a peer (#3573)
* use dialPeer function in a seed mode

Fixes #3532

by storing the number of connection attempts in memory and
removing the address from the addrbook when the number of attempts > 16
2019-04-17 16:44:26 +04:00
Ethan Buchman
d2eab536ac Merge pull request #3568 from tendermint/anton/release-v0.31.5
v0.31.5 changelog and version updates
2019-04-16 16:51:18 -04:00
Ismail Khoffi
18bd5b627a Apply suggestions from code review
Co-Authored-By: melekes <anton.kalyaev@gmail.com>
2019-04-16 15:13:30 +04:00
Anton Kaliaev
4474a5ec70 bump version 2019-04-16 13:38:54 +04:00
Anton Kaliaev
3cb7013c38 update changelog 2019-04-16 13:38:54 +04:00
hucc
5b8888b01b common: CMap: slight optimization in Keys() and Values(). (#3567) 2019-04-16 12:04:08 +04:00
zjubfd
439312b9c0 blockchain: dismiss request channel delay (#3459)
Fixes #3457

The topic of the issue: writing a BlockRequest into the requestsCh channel also creates a timer that stops the peer 15s later if no block has been received. But popping a BlockRequest from requestsCh and sending it out may be delayed by more than 15s, so the peer will be stopped with the error "send nothing to us". Extracting the requestsCh handling into its own goroutine makes sure that every BlockRequest is handled in a timely manner.

Instead of the requestsCh handling, we should probably pull the didProcessCh handling into a separate goroutine, since this is the one "starving" the other channel handlers. I believe that the way it is right now, we still have issues with high delays in errorsCh handling that might cause sending requests to invalid/disconnected peers.
2019-04-16 11:54:19 +04:00
dongsamb
f1cf10150a gitignore: add .vendor-new (#3566) 2019-04-16 08:49:03 +04:00
Anton Kaliaev
50b87c3445 state: Use last height changed if validator set is empty (#3560)
What happened:

New code was supposed to fall back to last height changed when/if it
failed to find validators at checkpoint height (to make release
non-breaking).

But because we did not check if validator set is empty, the fall back
logic was never executed => resulting in LoadValidators returning an
empty validator set for cases where `lastStoredHeight` is checkpoint
height (i.e. almost all heights if the application does not change
validator set often).

How it was found:

one of our users - @sunboshan reported a bug here
https://github.com/tendermint/tendermint/pull/3537#issuecomment-482711833

* use last height changed if validator set is empty
* add a changelog entry
2019-04-15 16:53:38 +02:00
Sean Braithwaite
f2119c35de adr: PeerBehaviour updates (#3558)
* [adr] Peer behaviour adr updates
* [docs] fix Behaved function signature
* [adr] typo fix in code example
2019-04-15 16:38:45 +02:00
Ethan Buchman
d35c08724c Merge pull request #3563 from tendermint/master
Merge master to develop
2019-04-15 08:16:37 -04:00
Ethan Buchman
1c6d9d20e4 Merge pull request #3553 from tendermint/v0.31
V0.31
2019-04-15 08:16:00 -04:00
Ethan Buchman
4695414393 Merge pull request #3548 from tendermint/release/v0.31.4
Release/v0.31.4
2019-04-12 10:56:03 -04:00
Ismail Khoffi
def5c8cf12 address review comments: (#3550)
- mention ADR in release summary
 - remove [p2p] api changes
 - amend v0.31.3 log to contain note about breaking change
2019-04-12 10:48:34 -04:00
Ismail Khoffi
b6da8880c2 prepare v0.31.4 release:
- prep changelog
 - add missing changelog entries
 - fix minor glitch in existing changelog (v0.31.2)
 - bump versions
2019-04-12 14:24:51 +02:00
Martin Dyring-Andersen
a453628c4e Fix a couple of typos (#3547)
Fix some typos in p2p/transport.go
2019-04-12 13:25:14 +02:00
Sean Braithwaite
4e4224213f adr: Peer Behaviour (#3539)
* [adr] ADR 037: Peer Behaviour initial draft
* Update docs/architecture/adr-037-peer-behaviour.md

Co-Authored-By: brapse <brapse@gmail.com>

* Update docs/architecture/adr-037-peer-behaviour.md
Co-Authored-By: brapse <brapse@gmail.com>

* [docs] adr-037 Better footnote styling
* [ADR] ADR-037 adjust Footnotes for github markdown
* [ADR] ADR-037 fix numbered list
2019-04-12 12:32:00 +02:00
Alexander Simmerl
b5b3b85697 Bring back NodeInfo NetAddress from the dead (#3545)
A prior change to address accidental DNS lookups introduced the
SocketAddr on the peer, which was then used to add it to the addressbook,
which in turn swallowed the self-reported port of the peer, which is
important on a reconnect. This change revives the NetAddress on NodeInfo,
which the Peer carries, but now returns an error to avoid nil
dereferencing, another issue observed in the past. Additionally we could
potentially address #3532, yet the original problem statement of that
issue stands.

As a drive-by optimisation `MarkAsGood` now takes only a `p2p.ID`, which
makes its interface a bit stricter and leaner.
2019-04-12 12:31:02 +02:00
Anton Kaliaev
18d2c45c33 rpc: Fix response time grow over time (#3537)
* rpc: store validator info periodically

* increase ValidatorSetStoreInterval

also

- unexpose it
- add a comment
- refactor code
- add a benchmark, which shows that 100000 results in ~ 100ms to get 100
validators

* make the change non-breaking

* expand comment

* rename valSetStoreInterval to valSetCheckpointInterval

* change the panic msg

* add a test and changelog entry

* update changelog entry

* update changelog entry

* add a link to PR

* fix test

* Update CHANGELOG_PENDING.md

Co-Authored-By: melekes <anton.kalyaev@gmail.com>

* update comment

* use MaxInt64 func
2019-04-12 10:46:07 +02:00
Anton Kaliaev
c3df21fe82 add missing changelog entry (#3544)
* add missing changelog entry
2019-04-11 17:59:14 +02:00
Anton Kaliaev
bcec8be035 p2p: do not log err if peer is private (#3474)
* add actionable advice for ErrAddrBookNonRoutable err

Should replace https://github.com/tendermint/tendermint/pull/3463

* reorder checks in addrbook#addAddress so

ErrAddrBookPrivate is returned first

and do not log error in DialPeersAsync if the address is private
because it's not an error
2019-04-11 15:32:16 +02:00
Anton Kaliaev
9a415b0572 docs: abci#Commit: better explain the possible deadlock (#3536) 2019-04-09 18:21:35 +02:00
Anton Kaliaev
40da355234 docs: fix block.Header.Time description (#3529)
It's not the proposer's local time anymore, but a weighted median

Fixes #3514
2019-04-03 14:56:51 +02:00
Anton Kaliaev
f965a4db15 p2p: seed mode refactoring (#3011)
- ListOfKnownAddresses is removed
- panic if addrbook size is less than zero
- CrawlPeers does not attempt to connect to existing peers or peers we're currently dialing
- various perf. fixes
- improved tests (though not complete)
- move IsDialingOrExistingAddress check into DialPeerWithAddress (Fixes #2716)


* addrbook: preallocate memory when saving addrbook to file

* addrbook: remove oldestFirst struct and check for ID

* oldestFirst replaced with sort.Slice
* ID is now mandatory, so no need to check

* addrbook: remove ListOfKnownAddresses

GetSelection is used instead in seed mode.

* addrbook: panic if size is less than 0

* rewrite addrbook#saveToFile to not use a counter

* test AttemptDisconnects func

* move IsDialingOrExistingAddress check into DialPeerWithAddress

* save and cleanup crawl peer data

* get rid of DefaultSeedDisconnectWaitPeriod

* make linter happy

* fix TestPEXReactorSeedMode

* fix comment

* add a changelog entry

* Apply suggestions from code review

Co-Authored-By: melekes <anton.kalyaev@gmail.com>

* rename ErrDialingOrExistingAddress to ErrCurrentlyDialingOrExistingAddress

* lowercase errors

* do not persist seed data

pros:
- no extra files
- less IO

cons:
- if the node crashes, seed might crawl a peer too soon

* fixes after Ethan's review

* add a changelog entry

* we should only consult Switch about peers

checking addrbook size does not make sense since only PEX reactor uses
it for dialing peers!

https://github.com/tendermint/tendermint/pull/3011#discussion_r270948875
2019-04-03 11:22:52 +02:00
Ethan Buchman
75ffa2bf1c Merge pull request #3528 from tendermint/v0.31
Merge v0.31.3 to master
2019-04-02 19:18:57 -04:00
Ethan Buchman
086d6cbe8c Merge pull request #3527 from tendermint/v0.31
Merge V0.31.3 back to develop
2019-04-02 16:49:44 -04:00
Ethan Buchman
6cc3f4d87c Merge pull request #3525 from tendermint/release/v0.31.3
Release/v0.31.3
2019-04-02 16:45:04 -04:00
Ethan Buchman
3cfd9757a7 changelog and version v0.31.3 2019-04-02 09:14:33 -04:00
Ethan Buchman
882622ec10 Fixes tendermint/tendermint#3522
* OriginalAddr -> SocketAddr

OriginalAddr records the originally dialed address for outbound peers,
rather than the peer's self reported address. For inbound peers, it was
nil. Here, we rename it to SocketAddr and for inbound peers, set it to
the RemoteAddr of the connection.

* use SocketAddr

Numerous places in the code call peer.NodeInfo().NetAddress().
However, this call to NetAddress() may perform a DNS lookup if the
reported NodeInfo.ListenAddr includes a name. Failure of this lookup
returns a nil address, which can lead to panics in the code.

Instead, call peer.SocketAddr() to return the static address of the
connection.

* remove nodeInfo.NetAddress()

Expose `transport.NetAddress()`, a static result determined
when the transport is created. Removing NetAddress() from the nodeInfo
prevents accidental DNS lookups.

* fixes from review

* linter

* fixes from review
2019-04-01 19:59:57 -04:00
Ethan Buchman
1ecf814838 Fixes tendermint/tendermint#3439
* make sure we create valid private keys:

 - genPrivKey samples and rejects invalid fieldelems (like libsecp256k1)
 - GenPrivKeySecp256k1 uses `(sha(secret) mod (n − 1)) + 1`
 - fix typo, rename test file: s/secpk256k1/secp256k1/

* Update crypto/secp256k1/secp256k1.go
2019-04-01 19:45:57 -04:00
Greg Szabo
e4a03f249d Release message changelog link fix (#3519) 2019-04-01 14:18:18 -04:00
Ethan Buchman
56d8aa42b3 Merge pull request #3520 from tendermint/v0.31
Merge v0.31.2 release back to develop
2019-04-01 14:17:58 -04:00
Ismail Khoffi
79e9f20578 Merge pull request #3518 from tendermint/prepare-release-v0.31.2
Release v0.31.2
2019-04-01 17:58:28 +02:00
Ismail Khoffi
ab24925c94 prepare changelog and bump versions to v0.31.2 2019-04-01 17:49:34 +02:00
Greg Szabo
0ae41cc663 Fix for wrong version tag (#3517)
* Fix for wrong version tag (tag on the release branch instead of master)
2019-04-01 17:47:00 +02:00
Ethan Buchman
422d04c8ba Bucky/mempool txsmap (#3512)
* mempool: resCb -> globalCb

* reqResCb takes an externalCb

* failing test for #3509

* txsMap is sync.Map

* update changelog
2019-03-31 13:14:18 +02:00
zjubfd
2233dd45bd libs: remove useless code in group (#3504)
* lib: remove useless code in group
* update change log
* Update CHANGELOG_PENDING.md

Co-Authored-By: guagualvcha <baifudong@lancai.cn>
2019-03-29 18:47:53 +01:00
Greg Szabo
9199f3f613 Release management using CircleCI (#3498)
* Release management using CircleCI

* Changelog updated
2019-03-29 12:57:16 +01:00
Ismail Khoffi
6c1a4b5137 blockchain: comment out logger in test code that causes a race condition (#3500) 2019-03-28 17:39:09 +01:00
Ethan Buchman
c7bb998497 Merge pull request #3502 from tendermint/bucky/merge-master
Bucky/merge master
2019-03-28 08:07:59 -04:00
Ethan Buchman
7b72436c75 Merge branch 'master' into bucky/merge-master 2019-03-28 08:12:05 -04:00
Ethan Buchman
a0234affb6 Merge pull request #3489 from tendermint/release/v0.31.1
Release/v0.31.1
2019-03-28 07:57:42 -04:00
Ethan Buchman
9390a810eb minor changelog updates (#3499) 2019-03-27 21:03:28 -04:00
Ismail Khoffi
a49d80b89c catch up with develop and rebase on current release to include #3482 2019-03-27 23:14:57 +01:00
Sean Braithwaite
ccfe75ec4a docs: Fix broken links (#3482) (#3488)
* docs: fix broken links (#3482)

A bunch of links were broken in the documentation as they included the
`docs` prefix.

* Update CHANGELOG_PENDING

* docs: switch to relative links for github compatibility (#3482)
2019-03-27 23:14:57 +01:00
Sean Braithwaite
d586945d69 docs: Fix broken links (#3482) (#3488)
* docs: fix broken links (#3482)

A bunch of links were broken in the documentation as they included the
`docs` prefix.

* Update CHANGELOG_PENDING

* docs: switch to relative links for github compatibility (#3482)
2019-03-27 18:51:57 +01:00
Ismail Khoffi
ae88965ff6 changelog: add summary & fix link & add external contributor (#3490) 2019-03-27 17:46:29 +01:00
Ismail Khoffi
2338134836 bump versions 2019-03-27 16:52:19 +01:00
Ismail Khoffi
1b33a50e6d Merge remote-tracking branch 'remotes/origin/develop' into release/v0.31.1 2019-03-27 16:50:59 +01:00
Ismail Khoffi
3c7bb6b571 Add some numbers for #2778 2019-03-27 16:50:42 +01:00
Anton Kaliaev
5fa540bdc9 mempool: add a safety check, write tests for mempoolIDs (#3487)
* mempool: add a safety check, write tests for mempoolIDs

and document 65536 limit in the mempool reactor spec

follow-up to https://github.com/tendermint/tendermint/pull/2778

* rename the test

* fixes after Ismail's review
2019-03-27 16:45:34 +01:00
Ismail Khoffi
52727863e1 add external contributors 2019-03-27 16:07:03 +01:00
Ismail Khoffi
e3f840e6a6 reset CHANGELOG_PENDING.md 2019-03-27 16:04:43 +01:00
Ismail Khoffi
ed63e1f378 Add more entries to the Changelog, fix formatting, linkify 2019-03-27 16:03:25 +01:00
Ismail Khoffi
55b7118c98 Prep changelog: copy from pending & update version 2019-03-27 15:35:32 +01:00
zjubfd
5a25b75b1d p2p: refactor GetSelectionWithBias for addressbook (#3475)
Why submit this PR:

    We have suffered from an infinite-loop bug in the addrbook, and it took us a long time to find out why the process became a zombie peer. It has been fixed in #3232, but the ADDRS_LOOP is still there, so the risk of an infinite loop still exists.
    The algorithm that randomly picks a bucket is not stable, which means a peer may unluckily keep choosing the wrong bucket for a long time, wasting time and CPU.

A simple improvement:
shuffle bucketsNew and bucketsOld, and pick the necessary number of addresses from them. A stable
algorithm (see the sketch after this commit).
2019-03-26 17:13:14 +01:00
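
A sketch of the stable-selection idea (simplified types): shuffle the buckets once and walk them, instead of retrying random buckets in an unbounded loop.

```
package main

import (
	"fmt"
	"math/rand"
)

// pick visits the buckets in one random permutation, so the cost is
// bounded even when some buckets are empty.
func pick(buckets [][]string, n int) []string {
	out := make([]string, 0, n)
	for _, i := range rand.Perm(len(buckets)) {
		for _, addr := range buckets[i] {
			if len(out) == n {
				return out
			}
			out = append(out, addr)
		}
	}
	return out
}

func main() {
	buckets := [][]string{{"a:26656"}, {"b:26656", "c:26656"}, {}}
	fmt.Println(pick(buckets, 2))
}
```
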
Anton Kaliaev
a4d9539544 rpc/client: include NetworkClient interface into Client interface (#3473)
I think it's nice when the Client interface has all the methods. If someone does not need a particular method/set of methods, she can use individual interfaces (e.g. NetworkClient, MempoolClient) or write her own interface.

technically breaking

Fixes #3458
2019-03-26 09:44:49 +01:00
HaoyangLiu
1bb8e02a96 mempool: fix broadcastTxRoutine leak (#3478)
Refs #3306, irisnet@fdbb676

I ran an irishub validator. After the validator node had run for several days, I dumped the whole goroutine stack and found that there were hundreds of broadcastTxRoutine goroutines, even though fewer than 30 peers were connected. So I believe there must be a broadcastTxRoutine leak.

According to my analysis, the root cause of this issue lies in the code below:

		select {
		case <-next.NextWaitChan():
			// see the start of the for loop for nil check
			next = next.Next()
		case <-peer.Quit():
			return
		case <-memR.Quit():
			return
		}

As we know, if multiple cases are available at the same time, a random one is selected. Suppose that next.NextWaitChan() and peer.Quit() are both available, and next.NextWaitChan() is chosen.

                // send memTx
		msg := &TxMessage{Tx: memTx.tx}
		success := peer.Send(MempoolChannel, cdc.MustMarshalBinaryBare(msg))
		if !success {
			time.Sleep(peerCatchupSleepIntervalMS * time.Millisecond)
			continue
		}

Then next will be non-nil and the peer send operation won't succeed. As a result, this goroutine gets stuck in an infinite loop and is never released.

My proposal is to check peer.Quit() and memR.Quit() on every loop iteration, no matter whether next is nil.
2019-03-26 09:29:06 +01:00
Dev Ojha
6de7effb05 mempool no gossip back (#2778)
Closes #1798

This is done by making every mempool tx maintain a list of the peers it received the tx from. Instead of using the 20-byte peer ID, it uses a local map from peerID to a uint16 counter, so every peer adds 2 bytes (word-aligned to probably make it 8 bytes).

This also required resetting the callback function on every CheckTx. This likely has performance ramifications for instruction caching. The actual setting operation isn't costly with the removal of defers in this PR.
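
A sketch of the peerID-to-uint16 mapper (simplified; the real mempoolIDs also recycles IDs on peer removal and enforces the 65536 limit documented later in this log):

```
package main

import (
	"fmt"
	"sync"
)

// mempoolIDs maps each 20-byte peer ID to a uint16 once, so per-tx
// sender sets cost 2 bytes per peer instead of 20. ID 0 is reserved
// for "unknown peer".
type mempoolIDs struct {
	mtx    sync.Mutex
	ids    map[string]uint16
	nextID uint16
}

func newMempoolIDs() *mempoolIDs {
	return &mempoolIDs{ids: make(map[string]uint16), nextID: 1}
}

// ReserveForPeer returns the existing ID for a peer or assigns a new one.
func (m *mempoolIDs) ReserveForPeer(peerID string) uint16 {
	m.mtx.Lock()
	defer m.mtx.Unlock()
	if id, ok := m.ids[peerID]; ok {
		return id
	}
	id := m.nextID
	m.nextID++
	m.ids[peerID] = id
	return id
}

func main() {
	ids := newMempoolIDs()
	fmt.Println(ids.ReserveForPeer("peer1"), ids.ReserveForPeer("peer1"))
}
```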

* Make the mempool not gossip txs back to the peers it received them from

* Fix adversarial memleak

* Don't break interface

* Update changelog

* Forgot to add a mtx

* forgot a mutex

* Update mempool/reactor.go

Co-Authored-By: ValarDragon <ValarDragon@users.noreply.github.com>

* Update mempool/mempool.go

Co-Authored-By: ValarDragon <ValarDragon@users.noreply.github.com>

* Use unknown peer ID

Co-Authored-By: ValarDragon <ValarDragon@users.noreply.github.com>

* fix compilation

* use next wait chan logic when skipping

* Minor fixes

* Add TxInfo

* Add reverse map

* Make activeID's auto-reserve 0

* 0 -> UnknownPeerID

Co-Authored-By: ValarDragon <ValarDragon@users.noreply.github.com>

* Switch to making the normal case set a callback on the reqres object

The recheck case is still done via the global callback, and stats
are also set via global callback

* fix merge conflict

* Address comments

* Add cache tests

* add cache tests

* minor fixes

* update metrics in reqResCb and reformat code

* goimport -w mempool/reactor.go

* mempool: update memTx senders

I had to introduce txsMap for quick mempoolTx lookups.

* change senders type from []uint16 to sync.Map

Fixes DATA RACE:

```
Read at 0x00c0013fcd3a by goroutine 183:
  github.com/tendermint/tendermint/mempool.(*MempoolReactor).broadcastTxRoutine()
      /go/src/github.com/tendermint/tendermint/mempool/reactor.go:195 +0x3c7

Previous write at 0x00c0013fcd3a by D[2019-02-27|10:10:49.058] Read PacketMsg                               switch=3 peer=35bc1e3558c182927b31987eeff3feb3d58a0fc5@127.0.0.1
:46552 conn=MConn{pipe} packet="PacketMsg{30:2B06579D0A143EB78F3D3299DE8213A51D4E11FB05ACE4D6A14F T:1}"
goroutine 190:
  github.com/tendermint/tendermint/mempool.(*Mempool).CheckTxWithInfo()
      /go/src/github.com/tendermint/tendermint/mempool/mempool.go:387 +0xdc1
  github.com/tendermint/tendermint/mempool.(*MempoolReactor).Receive()
      /go/src/github.com/tendermint/tendermint/mempool/reactor.go:134 +0xb04
  github.com/tendermint/tendermint/p2p.createMConnection.func1()
      /go/src/github.com/tendermint/tendermint/p2p/peer.go:374 +0x25b
  github.com/tendermint/tendermint/p2p/conn.(*MConnection).recvRoutine()
      /go/src/github.com/tendermint/tendermint/p2p/conn/connection.go:599 +0xcce

Goroutine 183 (running) created at:
D[2019-02-27|10:10:49.058] Send                                         switch=2 peer=1efafad5443abeea4b7a8155218e4369525d987e@127.0.0.1:46193 channel=48 conn=MConn{pipe} m
sgBytes=2B06579D0A146194480ADAE00C2836ED7125FEE65C1D9DD51049
  github.com/tendermint/tendermint/mempool.(*MempoolReactor).AddPeer()
      /go/src/github.com/tendermint/tendermint/mempool/reactor.go:105 +0x1b1
  github.com/tendermint/tendermint/p2p.(*Switch).startInitPeer()
      /go/src/github.com/tendermint/tendermint/p2p/switch.go:683 +0x13b
  github.com/tendermint/tendermint/p2p.(*Switch).addPeer()
      /go/src/github.com/tendermint/tendermint/p2p/switch.go:650 +0x585
  github.com/tendermint/tendermint/p2p.(*Switch).addPeerWithConnection()
      /go/src/github.com/tendermint/tendermint/p2p/test_util.go:145 +0x939
  github.com/tendermint/tendermint/p2p.Connect2Switches.func2()
      /go/src/github.com/tendermint/tendermint/p2p/test_util.go:109 +0x50

I[2019-02-27|10:10:49.058] Added good transaction                       validator=0 tx=43B4D1F0F03460BD262835C4AA560DB860CFBBE85BD02386D83DAC38C67B3AD7 res="&{CheckTx:gas_w
anted:1 }" height=0 total=375
Goroutine 190 (running) created at:
  github.com/tendermint/tendermint/p2p/conn.(*MConnection).OnStart()
      /go/src/github.com/tendermint/tendermint/p2p/conn/connection.go:210 +0x313
  github.com/tendermint/tendermint/libs/common.(*BaseService).Start()
      /go/src/github.com/tendermint/tendermint/libs/common/service.go:139 +0x4df
  github.com/tendermint/tendermint/p2p.(*peer).OnStart()
      /go/src/github.com/tendermint/tendermint/p2p/peer.go:179 +0x56
  github.com/tendermint/tendermint/libs/common.(*BaseService).Start()
      /go/src/github.com/tendermint/tendermint/libs/common/service.go:139 +0x4df
  github.com/tendermint/tendermint/p2p.(*peer).Start()
      <autogenerated>:1 +0x43
  github.com/tendermint/tendermint/p2p.(*Switch).startInitPeer()
```

* explain the choice of a map DS for senders

* extract ids pool/mapper to a separate struct

* fix literal copies lock value from senders: sync.Map contains sync.Mutex

* use sync.Map#LoadOrStore instead of Load

* fixes after Ismail's review

* rename resCbNormal to resCbFirstTime
2019-03-26 09:27:29 +01:00
zjubfd
25a3c8b172 rpc: support tls rpc (#3469)
Refs #3419
2019-03-23 18:08:15 +01:00
Thane Thomson
85be2a554e tools/tm-signer-harness: update height and round for test harness (#3466)
In order to re-enable the test harness for the KMS (see
tendermint/kms#227), we need some marginally more realistic proposals
and votes. This is because the KMS does some additional sanity checks
now to ensure the height and round are increasing over time.
2019-03-22 14:16:38 +01:00
Anton Kaliaev
1d4afb179b replace PB2TM.ConsensusParams with a call to params#Update (#3448)
Fixes #3444
2019-03-21 11:05:39 +01:00
tracebundy
660bd4a53e fix comment (#3454) 2019-03-20 08:30:49 -04:00
Ethan Buchman
81b9bdf400 comments on validator ordering (#3452)
* comments on validator ordering

* NextValidatorsHash
2019-03-20 08:29:40 -04:00
Anton Kaliaev
926127c774 blockchain: update the maxHeight when a peer is removed (#3350)
* blockchain: update the maxHeight when a peer is removed

Refs #2699

* add a changelog entry

* make linter pass
2019-03-19 20:59:33 -04:00
zjubfd
03085c2da2 rpc: client disable compression (#3430) 2019-03-19 20:18:18 -04:00
Anton Kaliaev
7af4b5086a Remove RepeatTimer and refactor Switch#Broadcast (#3429)
* p2p: refactor Switch#Broadcast func

- call wg.Add only once
- do not call peers.List twice!
  * bad for performance
  * peers list can change in between calls!

Refs #3306

* p2p: use time.Ticker instead of RepeatTimer

no need in RepeatTimer since we don't Reset them

Refs #3306

* libs/common: remove RepeatTimer (also TimerMaker and Ticker interface)

"ancient code that’s caused no end of trouble" Ethan

I believe there's a much simpler way to write a ticker that can be reset:
https://medium.com/@arpith/resetting-a-ticker-in-go-63858a2c17ec
2019-03-19 20:10:54 -04:00
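
The resettable-ticker pattern the commit alludes to, as a standalone sketch (Ticker.Reset only arrived in Go 1.15, after this commit, so recreating the ticker was the simple way):

```
package main

import (
	"fmt"
	"time"
)

func main() {
	ticker := time.NewTicker(50 * time.Millisecond)
	defer func() { ticker.Stop() }()

	for i := 0; i < 3; i++ {
		<-ticker.C
		fmt.Println("tick", i)
		// "Reset" by stopping the old ticker and creating a fresh one.
		ticker.Stop()
		ticker = time.NewTicker(50 * time.Millisecond)
	}
}
```
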
needkane
60b2ae5f5a crypto: delete unused code (#3426) 2019-03-19 20:00:53 -04:00
Anca Zamfir
a6349f5063 Formalize proposer election algorithm properties (#3140)
* Update proposer-selection.md

* Fixed typos

* fixed typos

* Attempt to address some comments

* Update proposer-selection.md

* Update proposer-selection.md

* Update proposer-selection.md

Added the normalization step.

* Addressed review comments

* New example for normalization section

Added a new example to better show the need for normalization
Added requirement for changing validator set
Addressed review comments

* Fixed problem with R2

* fixed the math for new validator

* test

* more small updates

* Moved the centering above the round-robin election

- the centering is now done before the actual round-robin block
- updated examples
- cleanup

* change to reflect new implementation for new validator
2019-03-19 19:56:13 -04:00
Ethan Buchman
22bcfca87a Merge pull request #3450 from tendermint/master
Merge master back to develop
2019-03-19 19:54:09 -04:00
Ethan Buchman
0d985ede28 Merge pull request #3417 from tendermint/release/v0.31.0
Release/v0.31.0
2019-03-19 19:53:37 -04:00
Ismail Khoffi
1e3469789d Ensure WriteTimeout > TimeoutBroadcastTxCommit (#3443)
* Make sure config.TimeoutBroadcastTxCommit < rpcserver.WriteTimeout()

* remove redundant comment

* libs/rpc/http_server: move Read/WriteTimeout into Config

* increase defaults for read/write timeouts

Based on this article
https://www.digitalocean.com/community/tutorials/how-to-optimize-nginx-configuration

* WriteTimeout should be larger than TimeoutBroadcastTxCommit

* set a deadline for subscribing to txs

* extract duration into const

* add two changelog entries

* Update CHANGELOG_PENDING.md

Co-Authored-By: melekes <anton.kalyaev@gmail.com>

* Update CHANGELOG_PENDING.md

Co-Authored-By: melekes <anton.kalyaev@gmail.com>

* 12 -> 10

* changelog

* changelog
2019-03-19 19:45:51 -04:00
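The invariant this commit enforces is easy to state in code. A hypothetical check (function name and signature assumed; this is not Tendermint's actual config validation):

```go
package main

import (
	"fmt"
	"time"
)

// validateTimeouts illustrates the invariant: the HTTP server's
// WriteTimeout must exceed TimeoutBroadcastTxCommit, otherwise the
// server can cut the connection before the tx commit completes.
func validateTimeouts(writeTimeout, broadcastTxCommit time.Duration) error {
	if writeTimeout <= broadcastTxCommit {
		return fmt.Errorf(
			"WriteTimeout (%v) must be > TimeoutBroadcastTxCommit (%v)",
			writeTimeout, broadcastTxCommit)
	}
	return nil
}

func main() {
	fmt.Println(validateTimeouts(12*time.Second, 10*time.Second)) // <nil>
}
```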
Ethan Buchman
5f68fbae37 Merge pull request #3449 from tendermint/ismail/merge_develop_into_release/0.31.0
Merge develop into release/0.31.0
2019-03-19 19:25:26 -04:00
Ismail Khoffi
e276f35f86 remove 3421 from changelog 2019-03-19 14:36:42 +01:00
Ismail Khoffi
8e62a3d62a Add #3421 to changelog and reorder alphabetically 2019-03-19 12:19:02 +01:00
Ismail Khoffi
48aaccab8f Merge in develop and update CHANGELOG.md 2019-03-19 12:09:26 +01:00
Anton Kaliaev
4162ebe8b5 types: refactor PB2TM.ConsensusParams to take BlockTimeIota as an arg (#3442)
See https://github.com/tendermint/tendermint/pull/3403/files#r266208947

In #3403 we unexposed BlockTimeIota from the ABCI, but it's still part
of the ConsensusParams struct, so we have to remember to add it back
after calling PB2TM.ConsensusParams. Instead, PB2TM.ConsensusParams
should take it as an argument

Fixes #3432
2019-03-19 11:38:32 +04:00
Ethan Buchman
551b6322f5 Update v0.31.0 release notes (#3434)
* changelog: fix formatting

* update release notes

* update changelog

* linkify

* update UPGRADING
2019-03-16 19:24:12 -04:00
Ismail Khoffi
52c4e15eb2 changelog: more review fixes/release/v0.31.0 (#3427)
* Update release summary

* Add pubsub config changes

* Add link to issue for pubsub changes
2019-03-14 19:07:06 +04:00
Ismail Khoffi
5483ac6b0a minor changes / fixes to release 0.31.0 (#3422)
* bump ABCIVersion due to renaming BlockSizeParams -> BlockParams
(https://github.com/tendermint/tendermint/pull/3417#discussion_r264974791)

* Move changelog on consensus params entry to breaking

* Add @melekes' suggestion for breaking change in pubsub into upgrading.md

* Add changelog entry for #3351

* Add changelog entry for #3358 & #3359

* Add changelog entry for #3397

* remove changelog entry for #3397 (was already released in 0.30.2)

* move 3351 to improvements

* Update changelog comment
2019-03-14 15:17:49 +04:00
Anton Kaliaev
7457133307 grpcdb: close Iterator/ReverseIterator after use (#3424)
Fixes #3402
2019-03-14 15:00:58 +04:00
Anton Kaliaev
a59930a327 localnet: fix $LOG variable (#3423)
Fixes #3421

Before: it was creating a file named ${LOG:-tendermint.log} in .build/nodeX
After: it creates a file named tendermint.log
2019-03-13 16:09:05 +04:00
Ismail Khoffi
85c023db88 Prep release v0.31.0:
- update changelog, reset pending
 - bump versions
 - add external contributors (partly manually)
2019-03-12 20:07:26 +01:00
Anton Kaliaev
4cbd36f341 Merge pull request #3415 from tendermint/master
Merge master back to develop (do not squash)
2019-03-12 21:22:03 +04:00
Anton Kaliaev
e42f833fd4 Merge master back to develop (#3412)
* libs/db: close batch (#3397)

ClevelDB requires closing when WriteBatch is no longer needed, https://godoc.org/github.com/jmhodges/levigo#WriteBatch.Close

Fixes the memory leak in https://github.com/cosmos/cosmos-sdk/issues/3842

* update changelog and bump version to 0.30.2
2019-03-12 16:20:59 +04:00
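A hedged sketch of the leak pattern fixed by the `libs/db: close batch` commit above: levigo's `WriteBatch` holds C-allocated memory that Go's GC never sees, so it must be closed explicitly, typically via defer. This assumes the standard levigo API and is not Tendermint's code.

```go
// Package kv: illustrative only; requires cgo and a LevelDB install.
package kv

import "github.com/jmhodges/levigo"

// writeKV writes a single key/value pair through a WriteBatch.
func writeKV(db *levigo.DB, wo *levigo.WriteOptions, k, v []byte) error {
	batch := levigo.NewWriteBatch()
	defer batch.Close() // frees the C-side allocation; skipping this leaks

	batch.Put(k, v)
	return db.Write(wo, batch)
}
```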
Anton Kaliaev
ad3e990c6a fix GO_VERSION in installation scripts (#3411)
there is no such file https://storage.googleapis.com/golang/go1.12.0.linux-amd64.tar.gz

Fixes #3405
2019-03-11 23:59:00 +04:00
srmo
676212fa8f cmd: make sure to have 'testnet' create the data directory for nonvals (#3409)
Fixes #3408
2019-03-11 23:06:03 +04:00
Anton Kaliaev
3035572034 cs: comment out log.Error to avoid TestReactorValidatorSetChanges timing out (#3401) 2019-03-11 22:52:09 +04:00
Anton Kaliaev
d741c7b478 limit number of /subscribe clients and queries per client (#3269)
* limit number of /subscribe clients and queries per client

Add the following config variables (under [rpc] section):
  * max_subscription_clients
  * max_subscriptions_per_client
  * timeout_broadcast_tx_commit

Fixes #2826

new HTTPClient interface for subscriptions

finalize HTTPClient events interface

remove EventSubscriber

fix data race

```
WARNING: DATA RACE
Read at 0x00c000a36060 by goroutine 129:
  github.com/tendermint/tendermint/rpc/client.(*Local).Subscribe.func1()
      /go/src/github.com/tendermint/tendermint/rpc/client/localclient.go:168 +0x1f0

Previous write at 0x00c000a36060 by goroutine 132:
  github.com/tendermint/tendermint/rpc/client.(*Local).Subscribe()
      /go/src/github.com/tendermint/tendermint/rpc/client/localclient.go:191 +0x4e0
  github.com/tendermint/tendermint/rpc/client.WaitForOneEvent()
      /go/src/github.com/tendermint/tendermint/rpc/client/helpers.go:64 +0x178
  github.com/tendermint/tendermint/rpc/client_test.TestTxEventsSentWithBroadcastTxSync.func1()
      /go/src/github.com/tendermint/tendermint/rpc/client/event_test.go:139 +0x298
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:827 +0x162

Goroutine 129 (running) created at:
  github.com/tendermint/tendermint/rpc/client.(*Local).Subscribe()
      /go/src/github.com/tendermint/tendermint/rpc/client/localclient.go:164 +0x4b7
  github.com/tendermint/tendermint/rpc/client.WaitForOneEvent()
      /go/src/github.com/tendermint/tendermint/rpc/client/helpers.go:64 +0x178
  github.com/tendermint/tendermint/rpc/client_test.TestTxEventsSentWithBroadcastTxSync.func1()
      /go/src/github.com/tendermint/tendermint/rpc/client/event_test.go:139 +0x298
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:827 +0x162

Goroutine 132 (running) created at:
  testing.(*T).Run()
      /usr/local/go/src/testing/testing.go:878 +0x659
  github.com/tendermint/tendermint/rpc/client_test.TestTxEventsSentWithBroadcastTxSync()
      /go/src/github.com/tendermint/tendermint/rpc/client/event_test.go:119 +0x186
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:827 +0x162
==================
```

lite client works (tested manually)

godoc comments

httpclient: do not close the out channel

use TimeoutBroadcastTxCommit

no timeout for unsubscribe

but 1s Local (5s HTTP) timeout for resubscribe

format code

change Subscribe#out cap to 1

and replace config vars with RPCConfig

TimeoutBroadcastTxCommit can't be greater than rpcserver.WriteTimeout

rpc: Context as first parameter to all functions

reformat code

fixes after my own review

fixes after Ethan's review

add test stubs

fix config.toml

* fixes after manual testing

- rpc: do not recommend using BroadcastTxCommit because it's slow and wastes
Tendermint resources (pubsub)
- rpc: better error in Subscribe and BroadcastTxCommit
- HTTPClient: do not resubscribe if err = ErrAlreadySubscribed

* fixes after Ismail's review

* Update rpc/grpc/grpc_test.go

Co-Authored-By: melekes <anton.kalyaev@gmail.com>
2019-03-11 22:45:58 +04:00
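For reference, the three variables above live under the `[rpc]` section of `config.toml`; a sketch with assumed values (check the generated config for the actual defaults):

```toml
[rpc]
# caps assumed for illustration
max_subscription_clients = 100
max_subscriptions_per_client = 5
timeout_broadcast_tx_commit = "10s"
```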
Anton Kaliaev
15f621141d remove TimeIotaMs from ABCI consensus params (#3403)
Also

- init substructures to avoid panic in pb2tm.ConsensusParams
Before: if csp.Block is nil and we later try to access/write to it,
we'll panic.
After: if csp.Block is nil and we later try to access/write to it,
there'll be no panic.
2019-03-11 22:21:17 +04:00
Anca Zamfir
dc359bd3a5 types: remove check for priority order of existing validators (#3407)
When scaling and averaging is invoked, it is possible to have validators
with close priorities ending up with the same priority. With the current code,
this makes it impossible to verify the priority orders before and after updates.

Fixes #3383
2019-03-11 18:17:25 +04:00
Ethan Buchman
976819537d Merge pull request #3399 from tendermint/release/v0.30.2
Release/v0.30.2
2019-03-11 08:17:14 -04:00
Anton Kaliaev
100ff08de9 p2p: do not panic when filter times out (#3384)
Fixes #3369
2019-03-11 15:31:53 +04:00
Anton Kaliaev
f996b10f47 update changelog and bump version to 0.30.2 2019-03-10 13:06:34 +04:00
Yumin Xia
36d7180ca2 libs/db: close batch (#3397)
ClevelDB requires closing when WriteBatch is no longer needed, https://godoc.org/github.com/jmhodges/levigo#WriteBatch.Close

Fixes the memory leak in https://github.com/cosmos/cosmos-sdk/issues/3842
2019-03-10 12:56:04 +04:00
Yumin Xia
b021f1e505 libs/db: close batch (#3397)
ClevelDB requires closing when WriteBatch is no longer needed, https://godoc.org/github.com/jmhodges/levigo#WriteBatch.Close

Fixes the memory leak in https://github.com/cosmos/cosmos-sdk/issues/3842
2019-03-10 12:46:32 +04:00
mircea-c
90794260bc circleci: removed complexity from docs deployment job (#3396) 2019-03-09 19:13:36 +04:00
Anton Kaliaev
b6a510a3e7 make ineffassign linter pass (#3386)
Refs #3262

This fixes two small bugs:

1) lite/dbprovider: return `ok` instead of true in parse* functions. It's weird that we were ignoring the `ok` value before.
2) consensus/state: previously, because of the shadowing, we almost never output "Error with msg". Now we declare both `added` and `err` at the beginning of the function, so there's no shadowing.
2019-03-08 09:46:09 +04:00
Ismail Khoffi
e415c326f9 update golang.org/x/crypto (#3392)
Update Gopkg.lock via dep ensure --update golang.org/x/crypto

see #3391 (comment) (nothing to review here really).
2019-03-08 09:40:59 +04:00
Ismail Khoffi
28e9e9e714 update levigo to 1.0.0 (#3389)
Although the version we were pinning to is from Nov. 2016, there were no substantial changes:

    jmhodges/levigo@2b8c778 added go-modules support (no code changes)
    jmhodges/levigo@853d788 added a badge to the readme

closes #3381
2019-03-07 20:03:57 +04:00
Anton Kaliaev
3ebfa99f2c do not pin repos without releases to exact revisions (#3382)
We're pinning repos without releases because it's very easy to upgrade all the dependencies by executing dep ensure --upgrade. Instead, we should just never run this command directly, only dep ensure --upgrade <some repo>. And we can defend that in PRs.

Refs #3374

The problem with pinning to exact revisions: people who import Tendermint as a library (e.g. abci/types) are stuck with these revisions even though the code they import may not even use them.
2019-03-07 19:35:04 +04:00
YOSHIDA Masanori
91b488f9a5 docs: fix the reverse of meaning in spec (#3387)
https://tools.ietf.org/html/rfc6962#section-2.1
"The largest power of two less than the number of items" is actually correct!

    For n > 1, let k be the largest power of two smaller than n (i.e.,
    k < n <= 2k).
2019-03-07 17:02:13 +04:00
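For readers checking the math: the split point k is the largest power of two strictly smaller than n, so k < n <= 2k. A small illustrative helper (not the spec's code):

```go
package main

import "fmt"

// splitPoint returns the largest power of two strictly smaller than n,
// for n > 1 (the k with k < n <= 2k from RFC 6962, section 2.1).
func splitPoint(n int) int {
	k := 1
	for k*2 < n {
		k *= 2
	}
	return k
}

func main() {
	for _, n := range []int{2, 3, 4, 7, 8, 9} {
		fmt.Println(n, "->", splitPoint(n)) // 2->1, 3->2, 4->2, 7->4, 8->4, 9->8
	}
}
```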
Anton Kaliaev
f25d727035 make dupl linter pass (#3385)
Refs #3262
2019-03-07 09:10:34 +04:00
Anca Zamfir
411bc5e49f types: followup after validator set changes (#3301)
* fix failure in TestProposerFrequency

* Add test to check priority order after updates

* Changed applyRemovals() and removed Remove()

Changed applyRemovals() similar to applyUpdates()
Removed function Remove()
Updated comments

* review comments

* simplify applyRemovals and add more comments

* small correction in comment

* Fix check in test

* Fix priority check for centering, address review comments

* fix assert for priority centering

* review comments

* review comments

* cleanup and review comments

added upper limit check for validator voting power
moved check for empty validator set earlier
moved panic on potential negative set length in verifyRemovals
added more tests

* review comments
2019-03-06 12:54:49 +04:00
Silas Davis
858875fbb8 Copy secp256k1 code from go-ethereum to avoid GPL vendoring issues in (#3371)
downstream

Signed-off-by: Silas Davis <silas@monax.io>
2019-03-06 12:22:35 +04:00
Ismail Khoffi
1eaa42cd25 golang 1.12.0 (#3376)
- update docker image on circleci
 - remove GOCACHE=off from Makefile (see:
 https://tip.golang.org/doc/go1.12#gocache)
 - update badge in readme
 - update in scripts/install
 - update Vagrantfile
 - update in networks/remote/integration.sh
 - tools/build/Makefile
2019-03-05 11:08:52 +04:00
Jack Zampolin
8c9df30e28 libs/db: Add cleveldb.Stats() (#3379)
Fixes: #3378

* Add stats to cleveldb implementation

* update changelog

* remove TODO

also
- sort keys
- preallocate memory

* fix const initializer []string literal is not a constant

* add test
2019-03-05 10:56:46 +04:00
Anton Kaliaev
52771e1287 make BlockTimeIota a consensus parameter, not a locally configurable … (#3048)
* make BlockTimeIota a consensus parameter, not a locally configurable option

Refs #2920

* make TimeIota int64 ms

Refs #2920

* update Gopkg.toml

* fixes after Ethan's review

* fix TestRemoteSignerProposalSigningFailed

* update changelog
2019-03-04 13:24:44 +04:00
Anton Kaliaev
f39138aa2e remove RoundState from EventDataRoundState (#3354)
Before, we were using it to get a round state in tests. Now it can be done
by calling csX.GetRoundState. We will need to rewrite
TestStateSlashingPrevotes and TestStateSlashingPrecommits, which are
commented right now, to not rely on EventDataRoundState#RoundState
field.

Refs #1527
2019-03-04 12:18:32 +04:00
Anton Kaliaev
8a962ffc46 deps: update gogo/protobuf from 1.1.1 to 1.2.1 and golang/protobuf from 1.1.0 to 1.3.0 (#3357)
* deps: update gogo/proto from 1.1.1 to 1.2.1

- verified changes manually
  git diff 636bf030~ ba06b47c --stat -- ':!*.pb.go' ':!test'

* deps: update golang/protobuf from 1.1.0 to 1.3.0

- verified changes manually
  git diff b4deda0~ c823c79 -- ':!*.pb.go' ':!test'
2019-03-04 12:17:38 +04:00
zjubfd
3421e4dcd7 make lightd available (#3364)
1. "abci_query": rpcserver.NewRPCFunc(c.ABCIQuery, "path,data,prove")
   "validators": rpcserver.NewRPCFunc(c.Validators, "height"),
   the parameters and the function did not match, causing an index out of range error.
2. the prove option of the query was forced to be true, while the default is false.
3. fix the wrong merkle key
2019-03-02 16:36:52 -05:00
zjubfd
976b1c2ef7 fix pool timer leak bug, resolves #3353 (#3358)
When removing a peer, the block pool simply removed the bpPeer
but did not stop its timer, which caused a stopError for reconnected
peers. Stop the timer when removing the peer from the pool.
2019-03-02 15:21:21 -05:00
zjubfd
d95894152b fix dirty data in peerset, resolves #3304 (#3359)
* fix dirty data in peerset

startInitPeer was called before the PeerSet added the peer;
once the mconnection started and one Reactor's Receive failed,
it would try to remove the peer from the PeerSet while the
PeerSet did not yet contain it. Fix this by changing the order.

* fix test FilterDuplicate

* fix start/stop race

* fix err
2019-03-02 15:17:37 -05:00
Sven-Hendrik Haase
37a548414b docs: fix typo (#3373) 2019-03-02 10:06:57 +04:00
Juan Leni
853dd34d31 privval: improve Remote Signer implementation (#3351)
This issue is related to #3107
This is a first renaming/refactoring step before reworking and removing heartbeats.
As discussed with @Liamsi , we preferred to go for a couple of independent and separate PRs to simplify review work.

The changes:

    Help to clarify the relation between the validator and remote signer endpoints
    Differentiate between timeouts and deadlines
    Prepare to encapsulate networking related code behind RemoteSigner in the next PR

My intention is to separate and encapsulate the "network related" code from the actual signer.

SignerRemote ---(uses/contains)--> SignerValidatorEndpoint <--(connects to)--> SignerServiceEndpoint ---> SignerService (future.. not here yet but would like to decouple too)

All reconnection/heartbeat/whatever code goes in the endpoints. Signer[Remote/Service] do not need to know about that.

I agree Endpoint may not be the perfect name. I tried to find something "Go-ish" enough. It is a common name in go-kit, kubernetes, etc.

Right now:
SignerValidatorEndpoint:

    handles the listener
    contains SignerRemote
    Implements the PrivValidator interface
    connects and sets a connection object in a contained SignerRemote
    delegates PrivValidator some calls to SignerRemote which in turn uses the conn object that was set externally

SignerRemote:

    Implements the PrivValidator interface
    read/writes from a connection object directly
    handles heartbeats

SignerServiceEndpoint:

    Does most things in a single place
    delegates to a PrivValidator IIRC.

* cleanup

* Refactoring step 1

* Refactoring step 2

* move messages to another file

* mark for future work / next steps

* mark deprecated classes in docs

* Fix linter problems

* additional linter fixes
2019-02-28 11:48:20 +04:00
Anton Kaliaev
d6e2fb453d update docs (#3349)
* docs: explain create_empty_blocks configurations

Closes #3307

* Vagrantfile: install nodejs for docs

* update docs instructions

npm install does not make sense since there's no packages.json file

* explain broadcast_tx_* tx format

Closes #536

* docs: explain how transaction ordering works

Closes #2904

* bring in consensus parameters explained

* example for create_empty_blocks_interval

* bring in explanation from https://github.com/tendermint/tendermint/issues/2487#issuecomment-424899799

* link to formatting instead of duplicating info
2019-02-28 11:31:59 +04:00
Anton Kaliaev
ec9bff5234 rename WAL#Flush to WAL#FlushAndSync (#3345)
* rename WAL#Flush to WAL#FlushAndSync

- rename auto#Flush to auto#FlushAndSync
- cleanup WAL interface to not leak implementation details!
  * remove Group()
  * add WALReader interface and return it in SearchForEndHeight()
- add interface assertions

Refs #3337

* replace WALReader with io.ReadCloser
2019-02-25 09:11:07 +04:00
Ismail Khoffi
6797d85851 p2p: fix comment in secret connection (#3348)
Just a minor followup on the review if #3347: Fixes a comment. [#3347 (comment)](https://github.com/tendermint/tendermint/pull/3347#discussion_r259582330)
2019-02-25 09:06:21 +04:00
Anton Kaliaev
cdf3a74f48 Unclean shutdown on SIGINT / SIGTERM (#3308)
* libs/common: TrapSignal accepts logger as a first parameter

 and does not block anymore
* previously it was dumping "captured ..." msg to os.Stdout
* TrapSignal should not be responsible for blocking thread of execution

Refs #3238

* exit with zero (0) code upon receiving SIGTERM/SIGINT

Refs #3238

* fix formatting in docs/app-dev/abci-cli.md

Co-Authored-By: melekes <anton.kalyaev@gmail.com>

* fix formatting in docs/app-dev/abci-cli.md

Co-Authored-By: melekes <anton.kalyaev@gmail.com>
2019-02-23 10:48:28 -05:00
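A hedged sketch of the non-blocking `TrapSignal` shape described above: logger passed in first, the handler runs in a goroutine, and blocking is now the caller's job. The signature is assumed, not the actual `libs/common` API.

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

// trapSignal registers a handler and returns immediately; the caller
// decides how (and whether) to block. Sketch of the behavior above.
func trapSignal(logf func(string, ...interface{}), cb func()) {
	c := make(chan os.Signal, 1)
	signal.Notify(c, os.Interrupt, syscall.SIGTERM)
	go func() {
		sig := <-c
		logf("captured %v, exiting...", sig)
		if cb != nil {
			cb()
		}
		os.Exit(0) // exit with zero code upon SIGTERM/SIGINT, per the commit
	}()
}

func main() {
	trapSignal(func(f string, a ...interface{}) { fmt.Printf(f+"\n", a...) }, nil)
	select {} // the caller now blocks explicitly
}
```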
Anton Kaliaev
41f91318e9 bound mempool memory usage (#3248)
* bound mempool memory usage

Closes #3079

* rename SizeBytes to TxsTotalBytes

and other small fixes after Zarko's review

* rename MaxBytes to MaxTxsTotalBytes

* make ErrMempoolIsFull more informative

* expose mempool's txs_total_bytes via RPC

* test full response

* fixes after Ethan's review

* config: rename mempool.size to mempool.max_txs

https://github.com/tendermint/tendermint/pull/3248#discussion_r254034004

* test more cases

https://github.com/tendermint/tendermint/pull/3248#discussion_r254036532

* simplify test

* Revert "config: rename mempool.size to mempool.max_txs"

This reverts commit 39bfa3696177aa46195000b90655419a975d6ff7.

* rename count back to n_txs

to make a change non-breaking

* rename max_txs_total_bytes to max_txs_bytes

* format code

* fix TestWALPeriodicSync

The test was sometimes failing due to processFlushTicks being called too
early. The solution is to call wal#Start later in the test.

* Apply suggestions from code review
2019-02-23 10:32:31 -05:00
Ismail Khoffi
e0adc5e807 secret connection check all zeroes (#3347)
* reject the shared secret if it is all zeros, in case the blacklist was not
sufficient

* Add test that verifies lower order pub-keys are rejected at the DH step

* Update changelog

* fix typo in test-comment
2019-02-23 10:25:57 -05:00
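A hedged sketch of the all-zeros rejection, using `crypto/subtle` for a constant-time compare (the actual implementation may differ):

```go
package main

import (
	"crypto/subtle"
	"errors"
	"fmt"
)

// rejectAllZeroSecret returns an error if the 32-byte DH shared secret
// is all zeros, which is what a blacklisted low-order point produces.
func rejectAllZeroSecret(secret [32]byte) error {
	var zero [32]byte
	if subtle.ConstantTimeCompare(secret[:], zero[:]) == 1 {
		return errors.New("shared secret is all zeros")
	}
	return nil
}

func main() {
	var s [32]byte
	fmt.Println(rejectAllZeroSecret(s)) // rejected
	s[0] = 1
	fmt.Println(rejectAllZeroSecret(s)) // <nil>
}
```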
Anton Kaliaev
2137ecc130 pubsub 2.0 (#3227)
* green pubsub tests :OK:

* get rid of clientToQueryMap

* Subscribe and SubscribeUnbuffered

* start adapting other pkgs to new pubsub

* nope

* rename MsgAndTags to Message

* remove TagMap

it does not bring any additional benefits

* bring back EventSubscriber

* fix test

* fix data race in TestStartNextHeightCorrectly

```
Write at 0x00c0001c7418 by goroutine 796:
  github.com/tendermint/tendermint/consensus.TestStartNextHeightCorrectly()
      /go/src/github.com/tendermint/tendermint/consensus/state_test.go:1296 +0xad
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:827 +0x162

Previous read at 0x00c0001c7418 by goroutine 858:
  github.com/tendermint/tendermint/consensus.(*ConsensusState).addVote()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:1631 +0x1366
  github.com/tendermint/tendermint/consensus.(*ConsensusState).tryAddVote()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:1476 +0x8f
  github.com/tendermint/tendermint/consensus.(*ConsensusState).handleMsg()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:667 +0xa1e
  github.com/tendermint/tendermint/consensus.(*ConsensusState).receiveRoutine()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:628 +0x794

Goroutine 796 (running) created at:
  testing.(*T).Run()
      /usr/local/go/src/testing/testing.go:878 +0x659
  testing.runTests.func1()
      /usr/local/go/src/testing/testing.go:1119 +0xa8
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:827 +0x162
  testing.runTests()
      /usr/local/go/src/testing/testing.go:1117 +0x4ee
  testing.(*M).Run()
      /usr/local/go/src/testing/testing.go:1034 +0x2ee
  main.main()
      _testmain.go:214 +0x332

Goroutine 858 (running) created at:
  github.com/tendermint/tendermint/consensus.(*ConsensusState).startRoutines()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:334 +0x221
  github.com/tendermint/tendermint/consensus.startTestRound()
      /go/src/github.com/tendermint/tendermint/consensus/common_test.go:122 +0x63
  github.com/tendermint/tendermint/consensus.TestStateFullRound1()
      /go/src/github.com/tendermint/tendermint/consensus/state_test.go:255 +0x397
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:827 +0x162
```

* fixes after my own review

* fix formatting

* wait 100ms before kicking a subscriber out

+ a test for indexer_service

* fixes after my second review

* no timeout

* add changelog entries

* fix merge conflicts

* fix typos after Thane's review

Co-Authored-By: melekes <anton.kalyaev@gmail.com>

* reformat code

* rewrite indexer service in the attempt to fix failing test

https://github.com/tendermint/tendermint/pull/3227/#issuecomment-462316527

* Revert "rewrite indexer service in the attempt to fix failing test"

This reverts commit 0d9107a098.

* another attempt to fix indexer

* fixes after Ethan's review

* use unbuffered channel when indexing transactions

Refs https://github.com/tendermint/tendermint/pull/3227#discussion_r258786716

* add a comment for EventBus#SubscribeUnbuffered

* format code
2019-02-22 23:11:27 -05:00
Anton Kaliaev
67fd428354 fix TestWALPeriodicSync (#3342)
The test was sometimes failing due to processFlushTicks being called too
early. The solution is to call wal#Start later in the test.
2019-02-22 11:49:08 +04:00
Anton Kaliaev
f22ada442a refactor decideProposal in common_test (#3343) 2019-02-22 11:48:40 +04:00
Anton Kaliaev
ed1de13548 cs: update wal comments (#3334)
* cs: update wal comments

Follow-up to https://github.com/tendermint/tendermint/pull/3300

* Update consensus/wal.go

Co-Authored-By: melekes <anton.kalyaev@gmail.com>
2019-02-21 18:28:02 +04:00
Ethan Buchman
4f83eec782 Merge pull request #3339 from tendermint/master
Merge master back to develop
2019-02-20 10:11:33 -05:00
Ethan Buchman
e0f8936455 Merge pull request #3326 from tendermint/release/v0.30.1
Release/v0.30.1
2019-02-20 10:10:49 -05:00
Ethan Buchman
f2351dc758 update changelog 2019-02-20 10:05:57 -05:00
Daniil Lashin
db5d7602fe docs: fix rpc Tx() method docs (#3331) 2019-02-20 15:34:52 +04:00
Thane Thomson
dff3deb2a9 cs: sync WAL more frequently (#3300)
As per #3043, this adds a ticker to sync the WAL every 2s while the WAL is running (see the sketch after this entry).

* Flush WAL every 2s

This adds a ticker that flushes the WAL every 2s while the WAL is
running. This is related to #3043.

* Fix spelling

* Increase timeout to 2mins for slower build environments

* Make WAL sync interval configurable

* Add TODO to replace testChan with more comprehensive testBus

* Remove extraneous debug statement

* Remove testChan in favour of using system time

As per
https://github.com/tendermint/tendermint/pull/3300#discussion_r255886586,
this removes the `testChan` WAL member and replaces the approach with a
system time-oriented one. In this new approach, we keep track of the
system time at which each flush and periodic flush successfully
occurred.

The naming of the various functions is also updated here to be more
consistent with "flushing" as opposed to "sync'ing".

* Update naming convention and ensure lock for timestamp update

* Add Flush method as part of WAL interface

Adds a `Flush` method as part of the WAL interface to enforce the idea
that we can manually trigger a WAL flush from outside of the WAL. This
is employed in the consensus state management to flush the WAL prior to
signing votes/proposals, as per https://github.com/tendermint/tendermint/issues/3043#issuecomment-453853630

* Update CHANGELOG_PENDING

* Remove mutex approach and replace with DI

The dependency injection approach to dealing with testing concerns could
allow similar effects to some kind of "testing bus"-based approach. This
commit introduces an example of this, where instead of relying on
(potentially fragile) timing of things between the code and the test, we
inject code into the function under test that can signal the test
through a channel.

This allows us to avoid the `time.Sleep()`-based approach previously
employed.

* Update comment on WAL flushing during vote signing

Co-Authored-By: thanethomson <connect@thanethomson.com>

* Simplify flush interval definition

Co-Authored-By: thanethomson <connect@thanethomson.com>

* Expand commentary on WAL disk flushing

Co-Authored-By: thanethomson <connect@thanethomson.com>

* Add broken test to illustrate WAL sync test problem

Removes test-related state (dependency injection code) from the WAL data
structure and adds test code to illustrate the problem with using
`WALGenerateNBlocks` and `wal.SearchForEndHeight` to test periodic
sync'ing.

* Fix test error messages

* Use WAL group buffer size to check for flush

A function is added to `libs/autofile/group.go#Group` in order to return
the size of the buffered data (i.e. data that has not yet been flushed
to disk). The test now checks that, prior to a `time.Sleep`, the group
buffer has data in it. After the `time.Sleep` (during which time the
periodic flush should have been called), the buffer should be empty.

* Remove config root dir removal from #3291

* Add godoc for NewWAL mentioning periodic sync
2019-02-20 09:45:18 +04:00
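A minimal sketch of the periodic-flush loop described in this entry, with the 2s interval from the commit. The type and field names are assumptions, not the WAL's actual API.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// flusher periodically flushes buffered data to disk, the way the WAL's
// periodic flush loop is described above. Illustrative only.
type flusher struct {
	mtx  sync.Mutex
	buf  []byte
	quit chan struct{}
}

func (f *flusher) run(interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			f.mtx.Lock()
			if len(f.buf) > 0 {
				fmt.Printf("flushing %d buffered bytes\n", len(f.buf))
				f.buf = f.buf[:0] // stand-in for fsync + draining the buffer
			}
			f.mtx.Unlock()
		case <-f.quit:
			return
		}
	}
}

func main() {
	f := &flusher{buf: []byte("entry"), quit: make(chan struct{})}
	go f.run(2 * time.Second)
	time.Sleep(2500 * time.Millisecond)
	close(f.quit)
}
```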
Anton Kaliaev
9d4f59b836 update changelog and bump version 2019-02-18 15:27:26 +04:00
Ismail Khoffi
d2c7f8dbcf p2p: check secret conn id matches dialed id (#3321)
ref: [#3010 (comment)](https://github.com/tendermint/tendermint/issues/3010#issuecomment-464287627)

> I tried searching for code where we authenticate a peer against its NetAddress.ID and couldn't find it. I don't see a reason to switch to Noise, but a need to ensure that the node's ID is authenticated e.g. after dialing from the address book.

* p2p: check secret conn id matches dialed id

* Fix all p2p tests & make code compile

* add simple test for dialing with wrong ID

* update changelog

* address review comments

* yet another place where to use IDAddressString and fix
testSetupMultiplexTransport
2019-02-18 14:08:22 +04:00
Anton Kaliaev
8283ca7ddb 3291 follow-up (#3323)
* changelog: use issue number instead of PR number

* follow up to #3291

- rpc/test/helpers.go add StopTendermint(node) func
- remove ensureDir(filepath.Dir(walFile), 0700)
- mempool/mempool_test.go add type cleanupFunc func()

* cmd/show_validator: wrap err to make it more clear
2019-02-18 13:23:40 +04:00
Alessio Treglia
59cc6d36c9 improve ResetTestRootWithChainID() concurrency safety (#3291)
* improve ResetTestRootWithChainID() concurrency safety

Rely on ioutil.TempDir() to create test root directories and ensure
multiple same-chain id test cases can run in parallel.

* Update config/toml.go

Co-Authored-By: alessio <quadrispro@ubuntu.com>

* clean up test directories after completion

Closes: #1034

* Remove redundant EnsureDir call

* s/PanicSafety()/panic()/s

* Put create dir functionality back in ResetTestRootWithChainID

* Place test directories in OS's tempdir

In modern UNIX and UNIX-like systems /tmp is very often
mounted as tmpfs. This might speed test execution a bit.

* Set 0700 to a const

* rootsDirs -> configRootDirs

* Don't double remove directories

* Avoid global variables

* Fix consensus tests

* Reduce defer stack

* Address review comments

* Try to fix tests

* Update CHANGELOG_PENDING.md

Co-Authored-By: alessio <quadrispro@ubuntu.com>

* Update consensus/common_test.go

Co-Authored-By: alessio <quadrispro@ubuntu.com>

* Update consensus/common_test.go

Co-Authored-By: alessio <quadrispro@ubuntu.com>
2019-02-18 11:45:27 +04:00
Zarko Milosevic
af8793c01a cs: reset triggered timeout precommit (#3310)
* Reset TriggeredTimeoutPrecommit as part of updateToState

* Add failing test and fix

* fix DATA RACE in TestResetTimeoutPrecommitUponNewHeight

```
WARNING: DATA RACE
Read at 0x00c001691d28 by goroutine 691:
  github.com/tendermint/tendermint/consensus.decideProposal()
      /go/src/github.com/tendermint/tendermint/consensus/common_test.go:133 +0x121
  github.com/tendermint/tendermint/consensus.TestResetTimeoutPrecommitUponNewHeight()
      /go/src/github.com/tendermint/tendermint/consensus/state_test.go:1389 +0x958
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:827 +0x162

Previous write at 0x00c001691d28 by goroutine 931:
  github.com/tendermint/tendermint/consensus.(*ConsensusState).updateToState()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:562 +0x5b2
  github.com/tendermint/tendermint/consensus.(*ConsensusState).finalizeCommit()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:1340 +0x141e
  github.com/tendermint/tendermint/consensus.(*ConsensusState).tryFinalizeCommit()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:1255 +0x66e
  github.com/tendermint/tendermint/consensus.(*ConsensusState).enterCommit.func1()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:1201 +0x135
  github.com/tendermint/tendermint/consensus.(*ConsensusState).enterCommit()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:1232 +0x94b
  github.com/tendermint/tendermint/consensus.(*ConsensusState).addVote()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:1657 +0x132e
  github.com/tendermint/tendermint/consensus.(*ConsensusState).tryAddVote()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:1503 +0x8f
  github.com/tendermint/tendermint/consensus.(*ConsensusState).handleMsg()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:694 +0xa1e
  github.com/tendermint/tendermint/consensus.(*ConsensusState).receiveRoutine()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:642 +0x948
  github.com/tendermint/tendermint/consensus.(*ConsensusState).receiveRoutine()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:642 +0x948
  github.com/tendermint/tendermint/consensus.(*ConsensusState).receiveRoutine()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:642 +0x948
  github.com/tendermint/tendermint/consensus.(*ConsensusState).receiveRoutine()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:655 +0x7dd
  github.com/tendermint/tendermint/consensus.(*ConsensusState).receiveRoutine()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:642 +0x948
  github.com/tendermint/tendermint/consensus.(*ConsensusState).receiveRoutine()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:655 +0x7dd

Goroutine 691 (running) created at:
  testing.(*T).Run()
      /usr/local/go/src/testing/testing.go:878 +0x659
  testing.runTests.func1()
      /usr/local/go/src/testing/testing.go:1119 +0xa8
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:827 +0x162
  testing.runTests()
  testing.(*M).Run()
      /usr/local/go/src/testing/testing.go:1034 +0x2ee
  main.main()
      _testmain.go:216 +0x332
```

* fix another DATA RACE by locking consensus

```
WARNING: DATA RACE
Read at 0x00c009b835a8 by goroutine 871:
  github.com/tendermint/tendermint/consensus.(*ConsensusState).createProposalBlock()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:955 +0x7c
  github.com/tendermint/tendermint/consensus.decideProposal()
      /go/src/github.com/tendermint/tendermint/consensus/common_test.go:127 +0x53
  github.com/tendermint/tendermint/consensus.TestResetTimeoutPrecommitUponNewHeight()
      /go/src/github.com/tendermint/tendermint/consensus/state_test.go:1389 +0x958
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:827 +0x162

Previous write at 0x00c009b835a8 by goroutine 931:
  github.com/tendermint/tendermint/consensus.(*ConsensusState).updateHeight()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:446 +0xb7
  github.com/tendermint/tendermint/consensus.(*ConsensusState).updateToState()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:542 +0x22f
  github.com/tendermint/tendermint/consensus.(*ConsensusState).finalizeCommit()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:1340 +0x141e
  github.com/tendermint/tendermint/consensus.(*ConsensusState).tryFinalizeCommit()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:1255 +0x66e
  github.com/tendermint/tendermint/consensus.(*ConsensusState).enterCommit.func1()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:1201 +0x135
  github.com/tendermint/tendermint/consensus.(*ConsensusState).enterCommit()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:1232 +0x94b
  github.com/tendermint/tendermint/consensus.(*ConsensusState).addVote()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:1657 +0x132e
  github.com/tendermint/tendermint/consensus.(*ConsensusState).tryAddVote()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:1503 +0x8f
  github.com/tendermint/tendermint/consensus.(*ConsensusState).handleMsg()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:694 +0xa1e
  github.com/tendermint/tendermint/consensus.(*ConsensusState).receiveRoutine()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:642 +0x948
  github.com/tendermint/tendermint/consensus.(*ConsensusState).receiveRoutine()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:642 +0x948
  github.com/tendermint/tendermint/consensus.(*ConsensusState).receiveRoutine()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:642 +0x948
  github.com/tendermint/tendermint/consensus.(*ConsensusState).receiveRoutine()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:655 +0x7dd
  github.com/tendermint/tendermint/consensus.(*ConsensusState).receiveRoutine()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:642 +0x948
  github.com/tendermint/tendermint/consensus.(*ConsensusState).receiveRoutine()
      /go/src/github.com/tendermint/tendermint/consensus/state.go:655 +0x7dd
```

* Fix failing test

* Delete profile.out

* fix data races
2019-02-18 11:29:41 +04:00
Anton Kaliaev
0b0a8b3128 cs/wal: refuse to encode msg that is bigger than maxMsgSizeBytes (#3303)
Earlier this week somebody posted this in GoS Riot chat:

```
E[2019-02-12|10:38:37.596] Corrupted entry. Skipping... module=consensus wal=/home/gaia/.gaiad/data/cs.wal/wal err="DataCorruptionError[length 878916964 exceeded maximum possible value of 1048576 bytes]"
E[2019-02-12|10:38:37.596] Corrupted entry. Skipping... module=consensus wal=/home/gaia/.gaiad/data/cs.wal/wal err="DataCorruptionError[length 825701731 exceeded maximum possible value of 1048576 bytes]"
E[2019-02-12|10:38:37.596] Corrupted entry. Skipping... module=consensus wal=/home/gaia/.gaiad/data/cs.wal/wal err="DataCorruptionError[length 1631073634 exceeded maximum possible value of 1048576 bytes]"
E[2019-02-12|10:38:37.596] Corrupted entry. Skipping... module=consensus wal=/home/gaia/.gaiad/data/cs.wal/wal err="DataCorruptionError[length 912418148 exceeded maximum possible value of 1048576 bytes]"
E[2019-02-12|10:38:37.600] Corrupted entry. Skipping... module=consensus wal=/home/gaia/.gaiad/data/cs.wal/wal err="DataCorruptionError[failed to read data: EOF]"
E[2019-02-12|10:38:37.600] Error on catchup replay. Proceeding to start ConsensusState anyway module=consensus err="Cannot replay height 7242. WAL does not contain #ENDHEIGHT for 7241"
E[2019-02-12|10:38:37.861] Error dialing peer module=p2p err="dial tcp 35.183.126.181:26656: i/o timeout
```

Note the lengths in the error messages. What probably happened is that the length field got corrupted. I've looked at the code and noticed that we don't check the msg size during encoding. This PR fixes that. It also improves a few error messages in WALDecoder.
2019-02-18 11:23:06 +04:00
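A minimal sketch of the guard this commit describes: check the size against the 1MB limit before anything is written, so an oversized length can never be persisted. The names are assumed; this is not the WAL encoder itself.

```go
package main

import "fmt"

const maxMsgSizeBytes = 1048576 // 1MB, matching the limit in the errors above

// encode refuses payloads larger than maxMsgSizeBytes before any bytes
// reach the WAL, mirroring the guard this commit adds. Sketch only.
func encode(payload []byte) error {
	if len(payload) > maxMsgSizeBytes {
		return fmt.Errorf("msg is too big: %d bytes, max: %d bytes",
			len(payload), maxMsgSizeBytes)
	}
	// the real encoder would write the length prefix and payload here
	return nil
}

func main() {
	fmt.Println(encode(make([]byte, maxMsgSizeBytes+1)))
}
```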
Anton Kaliaev
7ced9e416b rpc/net_info: change RemoteIP type from net.IP to String (#3309)
* rpc/net_info: change RemoteIP type from net.IP to String

Before:
"AAAAAAAAAAAAAP//rB8ktw=="
which is amino-encoded net.IP byte slice

After:
"192.0.2.1"

Fixes #3251

* rpc/net_info: non-empty response in docs
2019-02-16 09:46:02 +04:00
Ismail Khoffi
af3ba5145a fix test-vectors to actually match the sign bytes: (#3312)
- sign bytes are length prefixed
 - change test to use SignBytes methods instead of calling amino methods
 directly

Resolves #3311
ref: #2455 #2508
2019-02-16 09:12:00 +04:00
Alexander Bezobchuk
cf737ec85c return an error on show_validator (#3316)
* return a command error prior to init

* add a pending log entry

* Update CHANGELOG_PENDING.md

Co-Authored-By: alexanderbez <alexanderbez@users.noreply.github.com>

closes: #3314
2019-02-16 09:07:18 +04:00
Zach
d32f7d2416 update codeowners (#3305)
limit the scope of @zramsay because he's annoyed by notifications
2019-02-13 08:26:01 +04:00
Ethan Buchman
dc6567c677 consensus: flush wal on stop (#3297)
Should fix #3295
Also partial fix of #3043
2019-02-12 09:32:40 +04:00
Ethan Buchman
08dabab024 types: validator set update tests (#3284)
* types: validator set update tests

* docs: abci val updates must not include duplicates
2019-02-12 09:04:23 +04:00
Anca Zamfir
8a9eecce7f test blockExec does not panic if all vals removed (#3241)
Fix for #3224
Also address #2084
2019-02-12 09:02:44 +04:00
Ismail Khoffi
b089587b42 make gosec linter pass (#3294)
* not related to linter: remove obsolete constants:
 - `Insecure` and `Secure` and type `Security` are not used anywhere

* not related to linter: update example

 - NewInsecure was deleted; change example to NewRemoteDB

* address: Binds to all network interfaces (gosec):

 - bind to localhost instead of 0.0.0.0
 - regenerate test key and cert for this purpose (was valid for ::) and
 otherwise we would see:
 transport: authentication handshake failed: x509: certificate is
 valid for ::, not 127.0.0.1\"

(used https://github.com/google/keytransparency/blob/master/scripts/gen_server_keys.sh
to regenerate certs)

* use sha256 in tests instead of md5; time difference is negligible

* nolint usage of math/rand in test and add comment on its import

 - crypto/rand is slower and we do not need sth more secure in tests

* enable linter in circle-ci

* another nolint math/rand in test

* replace another occurrence of md5

* consistent comment about importing math/rand
2019-02-12 08:54:12 +04:00
Anton Kaliaev
7fd51e6ade make govet linter pass (#3292)
* make govet linter pass

Refs #3262

* close PipeReader and check for err
2019-02-11 16:31:34 +04:00
Anca Zamfir
966b5bdf6e fix failure in TestProposerFrequency (#3293)
```
--- FAIL: TestProposerFrequency (2.50s)

panic: empty validator set [recovered]
        panic: empty validator set

goroutine 91 [running]:
testing.tRunner.func1(0xc000a98c00)
        /usr/local/go/src/testing/testing.go:792 +0x6a7
panic(0xeae7e0, 0x11fbb30)
        /usr/local/go/src/runtime/panic.go:513 +0x1b9
github.com/tendermint/tendermint/types.(*ValidatorSet).RescalePriorities(0xc0000e7380, 0x0)
        /go/src/github.com/tendermint/tendermint/types/validator_set.go:106 +0x1ac
github.com/tendermint/tendermint/state.TestProposerFrequency(0xc000a98c00)
        /go/src/github.com/tendermint/tendermint/state/state_test.go:335 +0xb44
testing.tRunner(0xc000a98c00, 0x111a4d8)
        /usr/local/go/src/testing/testing.go:827 +0x163
created by testing.(*T).Run
        /usr/local/go/src/testing/testing.go:878 +0x65a
FAIL    github.com/tendermint/tendermint/state  3.139s
```
2019-02-11 16:21:51 +04:00
Ethan Buchman
021b5cc7f6 Merge pull request #3290 from tendermint/master
Merge pull request #3288 from tendermint/release/v0.30.0
2019-02-09 10:07:49 -05:00
Ethan Buchman
28d75ec801 Merge pull request #3288 from tendermint/release/v0.30.0
Release/v0.30.0
2019-02-09 10:07:22 -05:00
Ethan Buchman
792b12573e Prepare v0.30.0 (#3287)
* changelog, upgrading, version

* update for evidence fixes

* linkify

* fix an entry
2019-02-08 18:50:02 -05:00
Ethan Buchman
4f2ef36701 types.NewCommit (#3275)
* types.NewCommit

* use types.NewCommit everywhere

* fix log in unsafe_reset

* memoize height and round in constructor

* notes about deprecating toVote

* bring back memoizeHeightRound
2019-02-08 18:40:41 -05:00
Ethan Buchman
6b1b595951 Merge pull request #3286 from tendermint/bucky/fix-duplicate-evidence
Fixes for duplicate evidence
2019-02-08 18:33:45 -05:00
Ismail Khoffi
87bdc42bf8 Reject blocks with committed evidence (#37)
* evidence: NewEvidencePool takes evidenceDB

* evidence: failing TestStoreCommitDuplicate

tendermint/security#35

* GetEvidence -> GetEvidenceInfo

* fix TestStoreCommitDuplicate

* comment in VerifyEvidence

* add check if evidence was already seen
 - modify EventPool interface (EventStore is not known in ApplyBlock):
   - add IsCommitted method to iface
 - add test

* update changelog

* fix TestStoreMark:
 - priority in evidence info gets reset to zero after evidence gets committed

* review comments: simplify EvidencePool.IsCommitted
 - delete obsolete EvidenceStore.IsCommitted

* add simple test for IsCommitted

* update changelog: this is actually breaking (PR number still missing)

* fix TestStoreMark:
 - priority in evidence info gets reset to zero after evidence gets
 committed

* review suggestion: simplify return
2019-02-08 18:30:45 -05:00
Ethan Buchman
90ba63948a Sec/bucky/35 commit duplicate evidence (#36)
Don't add committed evidence to evpool
2019-02-08 18:25:48 -05:00
Anca Zamfir
cce4d21ccb treat validator updates as set (#3222)
* Initial commit for 3181..still early

* unit test updates

* unit test updates

* fix check of dups accross updates and deletes

* simplify the processChange() func

* added overflow check utest

* Added checks for empty valset, new utest

* deepcopy changes in processUpdate()

* moved to new API, fixed tests

* test cleanup

* address review comments

* make sure votePower > 0

* gofmt fixes

* handle duplicates and invalid values

* more work on tests, review comments

* Renamed and explained K

* make TestVal private

* split verifyUpdatesAndComputeNewPriorities.., added check for deletes

* return error if validator set is empty after processing changes

* address review comments

* lint err

* Fixed the total voting power and added comments

* fix lint

* fix lint
2019-02-08 13:05:09 -05:00
Ismail Khoffi
c1f7399a86 review comment: cleaner constant for N/2, delete secp256k1N and use (#3279)
`secp256k1.S256().N` directly instead
2019-02-08 09:48:09 -05:00
Ethan Buchman
44a89a3537 Merge pull request #3282 from tendermint/master
Merge master back to develop
2019-02-08 09:43:36 -05:00
Ethan Buchman
a8dbc64319 Merge pull request #3276 from tendermint/release/v0.29.2
Release/v0.29.2
2019-02-08 09:42:52 -05:00
Ethan Buchman
af6e6cd350 remove MixEntropy (#3278)
* remove MixEntropy

* changelog
2019-02-07 20:12:57 -05:00
Ethan Buchman
ad4bd92fec secp256k1: change build tags (#3277) 2019-02-07 19:57:30 -05:00
Ethan Buchman
f571ee8876 prepare v0.29.2 (#3272)
* update changelog

* linkify

* bump version

* update main changelog

* final fixes

* entry for wal fix

* changelog preamble

* remove a line
2019-02-07 19:34:01 -05:00
Anton Kaliaev
11e36d0bfb addrbook_test: preallocate memory for bookSizes (#3268)
Fixes https://circleci.com/gh/tendermint/tendermint/44901
2019-02-07 17:16:31 +04:00
Ethan Buchman
354a08c25a p2p: fix infinite loop in addrbook (#3232)
* failing test

* fix infinite loop in addrbook

There are cases where we only have a small number of addresses marked
good ("old"), but the selection mechanism keeps trying to select more of these
addresses, and hence ends up in an infinite loop. Here we fix this to
only try and select such "old" addresses if we have enough of them. Note this
means, if we don't have enough of them, we may return more "new"
addresses than otherwise expected by the newSelectionBias.

This whole GetSelectionWithBias method probably needs to be rewritten,
but this is a quick fix for the issue.

* changelog

* fix infinite loop if not enough new addrs

* fix another potential infinite loop

if a.nNew == 0 -> pickFromOldBucket=true, but we don't have enough items
  (a.nOld > len(oldBucketToAddrsMap) false)

* Revert "fix another potential infinite loop"

This reverts commit 146540c1125597162bd89820d611f6531f5e5e4b.

* check num addresses instead of buckets, new test

* fixed the int division

* add slack to bias % in test, lint fixes

* Added checks for selection content in test

* test cleanup

* Apply suggestions from code review

Co-Authored-By: ebuchman <ethan@coinculture.info>

* address review comments

* change after  Anton's review comments

* use the same docker image we use for testing

when building a binary for localnet

* switch back to circleci classic

* more review comments

* more review comments

* refactor addrbook_test

* build linux binary inside docker

in attempt to fix

```
--> Running dep
+ make build-linux
GOOS=linux GOARCH=amd64 make build
make[1]: Entering directory `/home/circleci/.go_workspace/src/github.com/tendermint/tendermint'
CGO_ENABLED=0 go build -ldflags "-X github.com/tendermint/tendermint/version.GitCommit=`git rev-parse --short=8 HEAD`" -tags 'tendermint' -o build/tendermint ./cmd/tendermint/
p2p/pex/addrbook.go:373:13: undefined: math.Round
```

* change dir from /usr to /go

* use concrete Go version for localnet binary

* check for nil addresses just to be sure
2019-02-07 15:58:23 +04:00
Thane Thomson
e70f27c8e4 Add remote signer test harness (KMS) (#3149)
* WIP: Starts adding remote signer test harness

This commit adds a new command to Tendermint to allow for us to build a
standalone binary to test remote signers such as KMS
(https://github.com/tendermint/kms).

Right now, all it does is test that the local public key matches the
public key reported by the client, and fails at the point where it
attempts to get the client to sign a proposal.

* Fixes typo

* Fixes proposal validation test

This commit fixes the proposal validation test as per #3149. It also
moves the test harness into its own internal package to isolate its
exports from the `privval` package.

* Adds vote signing validation

* Applying recommendations from #3149

* Adds function descriptions for test harness

* Adds ability to ask remote signer to shut down

Prior to this commit, the remote signer needs to manually be shut down,
which is not ideal for automated testing. This commit allows us to send
a poison pill message to the KMS to let it shut down gracefully once
testing is done (whether the tests pass or fail).

* Adds tests for remote signer test harness

This commit makes some minor modifications to a few files to allow for
testing of the remote signer test harness. Two tests are added here:
checking for a fully successful (the ideal) case, and for the case where
the maximum number of retries has been reached when attempting to accept
incoming connections from the remote signer.

* Condenses serialization of proposals and votes using existing Tendermint functions

* Removes now-unnecessary amino import and codec

* Adds error message for vote signing failure

* Adds key extraction command for integration test

Took the code from here:
https://gist.github.com/Liamsi/a80993f24bff574bbfdbbfa9efa84bc7 to
create a simple utility command to extract a key from a local Tendermint
validator for use in KMS integration testing.

* Makes path expansion success non-compulsory

* Fixes segfault on SIGTERM

We need an additional variable to keep track of whether we're
successfully connected, otherwise hitting Ctrl+Break during execution
causes a segmentation fault. This now allows for a clean shutdown.

* Consolidates shutdown checks

* Adds comments indicating codes for easy lookup

* Adds Docker build for remote signer harness

Updates the `DOCKER/build.sh` and `DOCKER/push.sh` files to allow one to
override the image name and Dockerfile using environment variables.
Updates the primary `Makefile` as well as the `DOCKER/Makefile` to allow
for building the `remote_val_harness` Docker image.

* Adds build_remote_val_harness_docker_image to .PHONY

* Removes remote signer poison pill messaging functionality

* Reduces fluff code in command line parsing

As per
https://github.com/tendermint/tendermint/pull/3149#pullrequestreview-196171788,
this reduces the amount of fluff code in the PR down to the bare
minimum.

* Fixes ordering of error check and info log

* Moves remove_val_harness cmd into tools folder

It seems to make sense to rather keep the remote signer test harness in
its own tool folder (now rather named `tm-signer-harness` to keep with
the tool naming convention). It is actually a separate tool, not meant
to be one of the core binaries, but supplementary and supportive.

* Updates documentation for tm-signer-harness

* Refactors flag parsing to be more compact and less redundant

* Adds version sub-command help

* Removes extraneous flags parsing

* Adds CHANGELOG_PENDING entry for tm-signer-harness

* Improves test coverage

Adds a few extra parameters to the `MockPV` type to fake broken vote and
proposal signing. Also adds some more tests for the test harness so as
to increase coverage for failed cases.

* Fixes formatting for CHANGELOG_PENDING.md

* Fix formatting for documentation config

* Point users towards official Tendermint docs for tools documentation

* Point users towards official Tendermint docs for tm-signer-harness

* Remove extraneous constant

* Rename TestHarness.sc to TestHarness.spv for naming consistency

* Refactor to remove redundant goroutine

* Refactor conditional to cleaner switch statement and better error handling for listener protocol

* Remove extraneous goroutine

* Add note about installing tmkms via Cargo

* Fix typo in naming of output signing key

* Add note about where to find chain ID

* Replace /home/user with ~/ for brevity

* Fixes "signer.key" typo

* Minor edits for clarification for tm-signer-harness build/setup process
2019-02-07 10:32:32 +04:00
Anton Kaliaev
fcebdf6720 Merge pull request #3261 from tendermint/anton/circle
Revert "quick fix for CircleCI (#2279)"
2019-02-07 10:28:22 +04:00
Ethan Buchman
9e9026452c p2p/conn: don't hold stopMtx while waiting (#3254)
* p2p/conn: fix deadlock in FlushStop/OnStop

* makefile: set_with_deadlock

* close doneSendRoutine at end of sendRoutine

* conn: initialize channs in OnStart
2019-02-06 10:29:51 -05:00
Ethan Buchman
45b70ae031 fix non deterministic test failures and race in privval socket (#3258)
* node: decrease retry conn timeout in test

Should fix #3256

The retry timeout was set to the default, which is the same as the
accept timeout, so it's no wonder this would fail. Here we decrease the
retry timeout so we can try many times before the accept timeout.

* p2p: increase handshake timeout in test

This fails sometimes, presumably because the handshake timeout is so low
(only 50ms). So increase it to 1s. Should fix #3187

* privval: fix race with ping. closes #3237

Pings happen in a go-routine and can happen concurrently with other
messages. Since we use a request/response protocol, we expect to send a
request and get back the corresponding response. But with pings
happening concurrently, this assumption could be violated. We were using
a mutex, but only a RWMutex, where the RLock was being held for sending
messages - this was to allow the underlying connection to be replaced if
it fails. Turns out we actually need to use a full lock (not just a read
lock) to prevent multiple requests from happening concurrently.

* node: fix test name. DelayedStop -> DelayedStart

* autofile: Wait() method

In the TestWALTruncate in consensus/wal_test.go we remove the WAL
directory at the end of the test. However the wal.Stop() does not
properly wait for the autofile group to finish shutting down. Hence it
was possible that the group's go-routine is still running when the
cleanup happens, which causes a panic since the directory disappeared.
Here we add a Wait() method to properly wait until the go-routine exits
so we can safely clean up. This fixes #2852.
2019-02-06 10:24:43 -05:00
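A sketch of why the privval ping fix above needs a full lock: in a request/response protocol, two senders holding only an RLock can interleave and each read the other's response. The names here are illustrative, not the privval code.

```go
package main

import (
	"fmt"
	"sync"
)

// conn models a request/response protocol: every request is answered by
// exactly the next response read off the wire. A shared RLock would let
// a concurrent ping and sign request interleave and swap responses; a
// full Mutex serializes each request/response pair, which is the fix.
type conn struct {
	mtx  sync.Mutex
	req  chan string
	resp chan string
}

func (c *conn) call(request string) string {
	c.mtx.Lock() // full lock, not RLock
	defer c.mtx.Unlock()
	c.req <- request
	return <-c.resp // matched to request only because of the lock
}

func main() {
	c := &conn{req: make(chan string), resp: make(chan string)}
	go func() { // remote signer stub: answers requests in order
		for r := range c.req {
			c.resp <- "ack:" + r
		}
	}()

	var wg sync.WaitGroup
	for _, r := range []string{"ping", "sign-vote"} {
		wg.Add(1)
		go func(r string) {
			defer wg.Done()
			fmt.Println(c.call(r)) // always the matching response
		}(r)
	}
	wg.Wait()
}
```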
Ethan Buchman
4429826229 cmn: GetFreePort (#3255) 2019-02-06 10:14:03 -05:00
Anton Kaliaev
1386707ceb rename TestGCRandom to _TestGCRandom 2019-02-06 18:36:54 +04:00
Anton Kaliaev
1a35895ac8 remove gotype linter
WARN [runner/nolint] Found unknown linters in //nolint directives: gotype
2019-02-06 18:26:56 +04:00
Anton Kaliaev
6941d1bb35 use nolint label instead of commenting 2019-02-06 18:21:32 +04:00
Anton Kaliaev
23314daee4 comment out clist tests w/ depend on runtime.SetFinalizer 2019-02-06 15:38:02 +04:00
Anton Kaliaev
3c8156a55a preallocating memory when we can 2019-02-06 15:24:54 +04:00
Anton Kaliaev
ffd3bf8448 remove or comment out unused code 2019-02-06 15:16:38 +04:00
Anton Kaliaev
da33dd04cc switch to golangci-lint from gometalinter 🚤 2019-02-06 15:00:55 +04:00
Anton Kaliaev
d8f0bc3e60 Revert "quick fix for CircleCI (#2279)"
This reverts commit 1cf6712a36.
2019-02-06 14:11:35 +04:00
Ethan Buchman
1809efa350 Introduce CommitSig alias for Vote in Commit (#3245)
* types: memoize height/round in commit instead of first vote

* types: commit.ValidateBasic in VerifyCommit

* types: new CommitSig alias for Vote

In preparation for reducing the redundancy in Commits, we introduce the
CommitSig as an alias for Vote. This is non-breaking on the protocol,
and minor breaking on the Go API, as Commit now contains a list of
CommitSig instead of Vote.

* remove dependence on ToVote

* update some comments

* fix tests

* fix tests

* fixes from review
2019-02-04 13:01:59 -05:00
Ethan Buchman
39eba4e154 WAL: better errors and new fail point (#3246)
* privval: more info in errors

* wal: change Debug logs to Info

* wal: log and return error on corrupted wal instead of panicing

* fail: Exit right away instead of sending interrupt

* consensus: FAIL before handling our own vote

allows replicating #3089:
- run using `FAIL_TEST_INDEX=0`
- delete some bytes from the end of the WAL
- start normally

Results in logs like:

```
I[2019-02-03|18:12:58.225] Searching for height module=consensus wal=/Users/ethanbuchman/.tendermint/data/cs.wal/wal height=1 min=0 max=0
E[2019-02-03|18:12:58.225] Error on catchup replay. Proceeding to start ConsensusState anyway module=consensus err="failed to read data: EOF"
I[2019-02-03|18:12:58.225] Started node module=main nodeInfo="{ProtocolVersion:{P2P:6 Block:9 App:1} ID_:35e87e93f2e31f305b65a5517fd2102331b56002 ListenAddr:tcp://0.0.0.0:26656 Network:test-chain-J8JvJH Version:0.29.1 Channels:4020212223303800 Moniker:Ethans-MacBook-Pro.local Other:{TxIndex:on RPCAddress:tcp://0.0.0.0:26657}}"
E[2019-02-03|18:12:58.226] Couldn't connect to any seeds module=p2p
I[2019-02-03|18:12:59.229] Timed out module=consensus dur=998.568ms height=1 round=0 step=RoundStepNewHeight
I[2019-02-03|18:12:59.230] enterNewRound(1/0). Current: 1/0/RoundStepNewHeight module=consensus height=1 round=0
I[2019-02-03|18:12:59.230] enterPropose(1/0). Current: 1/0/RoundStepNewRound module=consensus height=1 round=0
I[2019-02-03|18:12:59.230] enterPropose: Our turn to propose module=consensus height=1 round=0 proposer=AD278B7767B05D7FBEB76207024C650988FA77D5 privValidator="PrivValidator{AD278B7767B05D7FBEB76207024C650988FA77D5 LH:1, LR:0, LS:2}"
E[2019-02-03|18:12:59.230] enterPropose: Error signing proposal module=consensus height=1 round=0 err="Error signing proposal: Step regression at height 1 round 0. Got 1, last step 2"
I[2019-02-03|18:13:02.233] Timed out module=consensus dur=3s height=1 round=0 step=RoundStepPropose
I[2019-02-03|18:13:02.233] enterPrevote(1/0). Current: 1/0/RoundStepPropose module=consensus
I[2019-02-03|18:13:02.233] enterPrevote: ProposalBlock is nil module=consensus height=1 round=0
E[2019-02-03|18:13:02.234] Error signing vote module=consensus height=1 round=0 vote="Vote{0:AD278B7767B0 1/00/1(Prevote) 000000000000 000000000000 @ 2019-02-04T02:13:02.233897Z}" err="Error signing vote: Conflicting data"
```

Notice the EOF, the step regression, and the conflicting data.

* wal: change errors to be DataCorruptionError

* exit on corrupt WAL

* fix log

* fix new line
2019-02-04 13:00:06 -05:00
Ethan Buchman
eb4e23b91e fix FlushStop (#3247)
* p2p/pex: failing test

* p2p/conn: add stopMtx for FlushStop and OnStop

* changelog
2019-02-04 07:30:24 -08:00
Ismail Khoffi
6485e68beb Use ethereum's secp256k1 lib (#3234)
* switch from fork (tendermint/btcd) to orig package (btcsuite/btcd); also

 - remove the obsolete check in the test: `size != -1` is always true
 - WIP as the serialization still needs to be wrapped

* WIP: wrap signature & privkey, pubkey needs to be wrapped as well

* wrap pubkey too

* use "github.com/ethereum/go-ethereum/crypto/secp256k1" if cgo is
available, else use "github.com/btcsuite/btcd/btcec" and take care of
lower-S when verifying

Annoyingly, had to disable pruning when importing
github.com/ethereum/go-ethereum/ :-/

* update comment

* update comment

* emulate signature_nocgo.go for additional benchmarks:
592bf6a59c/crypto/signature_nocgo.go (L60-L76)

* use our format (r || s) in lower-s form when in the non-cgo case

* remove comment about using the C library directly

* vendor github.com/btcsuite/btcd too

* Add test for the !cgo case

* update changelog pending

Closes #3162 #3163
Refs #1958, #2091, tendermint/btcd#1
2019-02-04 12:24:54 +04:00
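A minimal sketch of the non-cgo path described above, assuming btcec
v1's API and using the modern //go:build syntax (the real
implementation differs in detail):

```go
//go:build !cgo

package secp256k1

import (
	"math/big"

	"github.com/btcsuite/btcd/btcec"
)

// sign sketches the pure-Go path: produce a 64-byte (r || s) signature
// with s normalized to the lower half of the curve order (low-S), the
// format mentioned above for the non-cgo case.
func sign(privBytes, hash []byte) ([]byte, error) {
	priv, _ := btcec.PrivKeyFromBytes(btcec.S256(), privBytes)
	sig, err := priv.Sign(hash)
	if err != nil {
		return nil, err
	}
	// Enforce low-S: if s > N/2, replace it with N - s.
	halfOrder := new(big.Int).Rsh(btcec.S256().N, 1)
	if sig.S.Cmp(halfOrder) > 0 {
		sig.S.Sub(btcec.S256().N, sig.S)
	}
	out := make([]byte, 64)
	rb, sb := sig.R.Bytes(), sig.S.Bytes()
	copy(out[32-len(rb):32], rb)
	copy(out[64-len(sb):], sb)
	return out, nil
}
```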
Anton Kaliaev
d470945503 update gometalinter to 3.0.0 (#3233)
in an attempt to fix https://circleci.com/gh/tendermint/tendermint/43165

also

    code is simplified by running gofmt -s .
    remove unused vars
    enable linters we're currently passing
    remove deprecated linters
2019-01-30 12:24:26 +04:00
Anton Kaliaev
8985a1fa63 pubsub: fixes after Ethan's review (#3212)
in https://github.com/tendermint/tendermint/pull/3209
2019-01-29 13:16:43 +04:00
Ismail Khoffi
6dd817cbbc secret connection: check for low order points (#3040)
> Implement a check for the blacklisted low order points, ala the X25519 has_small_order() function in libsodium

(#3010 (comment))
resolves first half of #3010
2019-01-29 12:44:59 +04:00
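A minimal sketch of such a blacklist check; the table contents are an
assumption here (the real list mirrors libsodium's known small-order
X25519 points):

```go
package conn

// blacklist would hold the canonical encodings of the low-order
// curve25519 points; only the all-zero point is shown as a placeholder.
var blacklist = [][32]byte{
	{}, // the all-zero point, plus the other known small-order points
}

// hasSmallOrder reports whether a peer's public key is one of the
// blacklisted low-order points and must be rejected.
func hasSmallOrder(pub [32]byte) bool {
	for _, p := range blacklist {
		if pub == p {
			return true
		}
	}
	return false
}
```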
rickyyangz
0b3a87a323 mempool: correct args order in the log msg (#3221)
Before: Unexpected tx response from proxy during recheck\n Expected: {r.CheckTx.Data}, got: {memTx.tx}
After: Unexpected tx response from proxy during recheck\n Expected: {memTx.tx}, got: {tx}

Closes #3214
2019-01-29 10:12:07 +04:00
Zach
e1edd2aa6a hardcode rpc link (#3223) 2019-01-28 23:41:37 +04:00
Thane Thomson
9a0bfafef6 docs: fix links (#3220)
Because there's nothing worse than having to copy/paste a link from a
web page to navigate to it 😁
2019-01-28 17:41:39 +04:00
Thane Thomson
a335caaedb alias amino imports (#3219)
As per conversation here: https://github.com/tendermint/tendermint/pull/3218#discussion_r251364041

This is the result of running the following code on the repo:

```bash
find . -name '*.go' | grep -v 'vendor/' | xargs -n 1 goimports -w
```
2019-01-28 16:13:17 +04:00
Anton Kaliaev
ff3c4bfc76 add go-deadlock tool to help detect deadlocks (#3218)
* add go-deadlock tool to help detect deadlocks

Run it with `make test_with_deadlock`. After it's done, use Git to
clean up: `git checkout .`

Link: https://github.com/sasha-s/go-deadlock/
Replaces https://github.com/tendermint/tendermint/pull/3148

* add a target to cleanup changes
2019-01-28 14:57:47 +04:00
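The tool works by swapping the standard mutexes for instrumented ones;
a minimal usage sketch of sasha-s/go-deadlock (the timeout value is
illustrative):

```go
package main

import (
	"time"

	deadlock "github.com/sasha-s/go-deadlock"
)

// deadlock.Mutex is a drop-in replacement for sync.Mutex that reports
// lock-order inversions and locks held past a configured timeout.
var mu deadlock.Mutex

func main() {
	deadlock.Opts.DeadlockTimeout = 30 * time.Second // hypothetical tuning
	mu.Lock()
	defer mu.Unlock()
	// ... critical section ...
}
```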
Anton Kaliaev
8d2dd7e554 refactor TestListenerConnectDeadlines to avoid data races (#3201)
Fixes #3179
2019-01-28 12:38:11 +04:00
cong
71e5939441 start eventBus & indexerService before replay and use them while replaying blocks (#3194)
so if we did not index the last block (because of a panic or something else), we index it during replay

Closes #3186
2019-01-28 11:36:35 +04:00
Ethan Buchman
d91ea9b59d adr-033 update 2019-01-26 14:30:29 +04:00
Ethan Buchman
9b6b792ce7 pubsub: comments 2019-01-26 14:30:29 +04:00
Ethan Buchman
57af99d901 types: comments on user vs internal events
Distinguish between user events and internal consensus events
2019-01-26 14:30:29 +04:00
Ethan Buchman
a58d5897e4 note about TmCoreSemVer 2019-01-26 14:30:29 +04:00
Jae Kwon
ddbdffb4e5 add design philosophy doc (#3034) 2019-01-25 23:00:55 +04:00
Ismail Khoffi
d6dd43cdaa adr: style fixes (#3206)
- links to issues
 - fix a few markdown glitches
 - inline code
 - etc
2019-01-25 17:38:26 +04:00
Joon
75cbe4a1c1 R4R: Config TestRoot modification for LCD test (#3177)
* add ResetTestRootWithChainID

* modify chainid
2019-01-25 08:10:36 -05:00
Ethan Buchman
27c1563bf0 Merge pull request #3205 from tendermint/master
Merge master back to develop
2019-01-24 11:35:01 -05:00
Ethan Buchman
4d7b29cd8f Merge pull request #3203 from tendermint/release/v0.29.1
Release/v0.29.1
2019-01-24 11:34:20 -05:00
Ethan Buchman
90970d0ddc fix changelog 2019-01-24 11:19:52 -05:00
Anton Kaliaev
bb0a9b3d6d bump version 2019-01-24 19:18:32 +04:00
Anton Kaliaev
8992596192 update changelog 2019-01-24 19:17:53 +04:00
Zarko Milosevic
c20fbed2f7 [WIP] fix halting issue (#3197)
fix halting issue
2019-01-24 09:33:47 -05:00
Anton Kaliaev
c4157549ab only log "Reached max attempts to dial" once (#3144)
Closes #3037
2019-01-24 13:53:02 +04:00
Infinytum
fbd1e79465 docs: fix lite client formatting (#3198)
Closes #3180
2019-01-24 12:10:34 +04:00
Anton Kaliaev
1efacaa8d3 docs: update pubsub ADR (#3131)
* docs: update pubsub ADR

* third version
2019-01-24 11:33:58 +04:00
Gautham Santhosh
98b42e9eb2 docs: explain how someone can run his/her own ABCI app on localnet (#3195)
Closes #3192.
2019-01-24 10:45:43 +04:00
Anton Kaliaev
2449bf7300 p2p: file descriptor leaks (#3150)
* close peer's connection to avoid fd leak

Fixes #2967

* rename peer#Addr to RemoteAddr

* fix test

* fixes after Ethan's review

* bring back the check

* changelog entry

* write a test for switch#acceptRoutine

* increase timeouts? :(

* remove extra assertNPeersWithTimeout

* simplify test

* assert number of peers (just to be safe)

* Cleanup in OnStop

* run tests with verbose flag on CircleCI

* spawn a reading routine to prevent connection from closing

* get port from the listener

random port is faster, but often results in

```
panic: listen tcp 127.0.0.1:44068: bind: address already in use [recovered]
        panic: listen tcp 127.0.0.1:44068: bind: address already in use

goroutine 79 [running]:
testing.tRunner.func1(0xc0001bd600)
        /usr/local/go/src/testing/testing.go:792 +0x387
panic(0x974d20, 0xc0001b0500)
        /usr/local/go/src/runtime/panic.go:513 +0x1b9
github.com/tendermint/tendermint/p2p.MakeSwitch(0xc0000f42a0, 0x0, 0x9fb9cc, 0x9, 0x9fc346, 0xb, 0xb42128, 0x0, 0x0, 0x0, ...)
        /home/vagrant/go/src/github.com/tendermint/tendermint/p2p/test_util.go:182 +0xa28
github.com/tendermint/tendermint/p2p.MakeConnectedSwitches(0xc0000f42a0, 0x2, 0xb42128, 0xb41eb8, 0x4f1205, 0xc0001bed80, 0x4f16ed)
        /home/vagrant/go/src/github.com/tendermint/tendermint/p2p/test_util.go:75 +0xf9
github.com/tendermint/tendermint/p2p.MakeSwitchPair(0xbb8d20, 0xc0001bd600, 0xb42128, 0x2f7, 0x4f16c0)
        /home/vagrant/go/src/github.com/tendermint/tendermint/p2p/switch_test.go:94 +0x4c
github.com/tendermint/tendermint/p2p.TestSwitches(0xc0001bd600)
        /home/vagrant/go/src/github.com/tendermint/tendermint/p2p/switch_test.go:117 +0x58
testing.tRunner(0xc0001bd600, 0xb42038)
        /usr/local/go/src/testing/testing.go:827 +0xbf
created by testing.(*T).Run
        /usr/local/go/src/testing/testing.go:878 +0x353
exit status 2
FAIL    github.com/tendermint/tendermint/p2p    0.350s
```
2019-01-22 13:23:18 -05:00
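The fix is to bind to port 0 and read the assigned port back from the
listener instead of guessing a random free port; a sketch:

```go
package main

import (
	"fmt"
	"net"
)

// getFreePort asks the kernel for an unused port by binding to port 0,
// then reads the port actually assigned to the listener.
func getFreePort() (int, error) {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return 0, err
	}
	defer ln.Close()
	return ln.Addr().(*net.TCPAddr).Port, nil
}

func main() {
	port, err := getFreePort()
	if err != nil {
		panic(err)
	}
	fmt.Println("free port:", port)
}
```

Handing the still-open listener to the component under test, rather
than closing it and re-binding, removes the remaining window where
another process could grab the port.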
Ethan Buchman
3362da0a69 Merge pull request #3191 from tendermint/master
Merge master to develop
2019-01-22 13:19:58 -05:00
Ethan Buchman
4514842a63 Merge pull request #3185 from tendermint/release/v0.29.0
Release/v0.29.0
2019-01-22 13:19:28 -05:00
Ethan Buchman
a97d6995c9 fix changelog indent (#3190) 2019-01-22 12:24:26 -05:00
Ethan Buchman
d9d4f3e629 Prepare v0.29.0 (#3184)
* update changelog and upgrading

* add note about max voting power in abci spec

* update version

* changelog
2019-01-21 19:32:10 -05:00
Ethan Buchman
7a8aeff4b0 update spec for Merkle RFC 6962 (#3175)
* spec: specify when MerkleRoot is on hashes

* remove unnecessary hash methods

* update changelog

* fix test
2019-01-21 10:02:57 -05:00
Ethan Buchman
de5a6010f0 fix DynamicVerifier for large validator set changes (#3171)
* base verifier: bc->bv and check chainid

* improve some comments

* comments in dynamic verifier

* fix comment in doc about BaseVerifier

It requires the validator set to perfectly match.

* failing test for #2862

* move errTooMuchChange to types. fixes #2862

* changelog, comments

* ic -> dv

* update comment, link to issue
2019-01-21 09:21:04 -05:00
Ethan Buchman
da95f4aa6d mempool: enforce maxMsgSize limit in CheckTx (#3168)
- fixes #3008
- the reactor requires that encoded messages be less than maxMsgSize
- this requires the size of tx + amino overhead to not exceed maxMsgSize
2019-01-20 17:27:49 -05:00
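A sketch of the size guard described above; the constant values are
illustrative, not the real limits:

```go
package mempool

import "fmt"

const (
	maxMsgSize    = 1024 * 1024 // illustrative reactor message cap
	aminoOverhead = 8           // illustrative per-tx encoding overhead
)

// checkTxSize rejects a tx whose encoded size (tx bytes plus amino
// framing) would exceed what the mempool reactor can gossip.
func checkTxSize(tx []byte) error {
	if len(tx)+aminoOverhead > maxMsgSize {
		return fmt.Errorf("tx too large: %d bytes, max %d",
			len(tx), maxMsgSize-aminoOverhead)
	}
	return nil
}
```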
Ethan Buchman
4f8769175e [types] hash of ConsensusParams includes only a subset of fields (#3165)
* types: dont hash entire ConsensusParams

* update encoding spec

* update blockchain spec

* spec: consensus params hash

* changelog
2019-01-19 16:08:57 -05:00
Ismail Khoffi
40c887baf7 Normalize priorities to not exceed total voting power (#3049)
* more proposer priority tests

 - test that we don't reset to zero when updating / adding
 - test that same power validators alternate

* add another test to track / simulate similar behaviour as in #2960

* address some of Chris' review comments

* address some more of Chris' review comments

* temporarily pushing branch with the following changes:
The total power might change if:
   - a validator is added
   - a validator is removed
   - a validator is updated

Decrement the accums (of all validators) directly after any of these events
(by the inverse of the change)

* Fix 2960 by re-normalizing / scaling priorities to be in bounds of total
power, additionally:

 - remove heap where it doesn't make sense
 - avg. only at the end of IncrementProposerPriority instead of each
   iteration
 - update (and slightly improve)
   TestAveragingInIncrementProposerPriorityWithVotingPower to reflect
   above changes

* Fix 2960 by re-normalizing / scaling priorities to be in bounds of total
power, additionally:

 - remove heap where it doesn't make sense
 - avg. only at the end of IncrementProposerPriority instead of each
   iteration
 - update (and slightly improve)
   TestAveragingInIncrementProposerPriorityWithVotingPower to reflect
   above changes

* fix tests

* add comment

* update changelog pending & some minor changes

* comment about division will floor the result & fix typo

* Update TestLargeGenesisValidator:
 - remove TODO and increase large genesis validator's voting power
accordingly

* move changelog entry to P2P Protocol

* Ceil instead of flooring when dividing & update test

* quickly fix failing TestProposerPriorityDoesNotGetResetToZero:

 - divide by Ceil((maxPriority - minPriority) / 2*totalVotingPower)

* fix typo: rename getValWitMostPriority -> getValWithMostPriority

* test proposer frequencies

* return absolute value for diff. keep testing

* use for loop for div

* cleanup, more tests

* spellcheck

* get rid of using floats: manually ceil where necessary

* Remove float, simplify, fix tests to match chris's proof (#3157)
2019-01-19 15:55:08 -05:00
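As a rough sketch of the re-normalization described in the bullets
above (integer-only, with the ceiling division; the real ValidatorSet
code differs):

```go
package types

// rescalePriorities caps the spread of proposer priorities at
// 2 * totalVotingPower, dividing every priority by the ceiling of
// spread / (2 * totalVotingPower) when the bound is exceeded.
func rescalePriorities(priorities []int64, totalVotingPower int64) {
	diffMax := 2 * totalVotingPower
	diff := spread(priorities)
	if diff > diffMax {
		ratio := (diff + diffMax - 1) / diffMax // integer ceiling division
		for i := range priorities {
			priorities[i] /= ratio
		}
	}
}

// spread returns max(priorities) - min(priorities).
func spread(xs []int64) int64 {
	max, min := xs[0], xs[0]
	for _, x := range xs {
		if x > max {
			max = x
		}
		if x < min {
			min = x
		}
	}
	return max - min
}
```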
Ethan Buchman
d3e8889411 update btcd fork for v0.1.1 (#3164)
* update btcd fork for v0.1.1

* changelog
2019-01-19 14:08:41 -05:00
Ethan Buchman
d17969e378 Merge pull request #3154 from tendermint/master
Merge master back to develop
2019-01-18 12:39:21 -05:00
Ethan Buchman
07263298bd Merge pull request #3153 from tendermint/release/v0.28.1
Release/v0.28.1
2019-01-18 12:38:48 -05:00
Ethan Buchman
5a2e69df81 changelog and version 2019-01-18 12:11:02 -05:00
bradyjoestar
f5f1416a14 json2wal: increase reader's buffer size (#3147)
```
panic: failed to unmarshal json: unexpected end of JSON input

goroutine 1 [running]:
main.main()
	/root/gelgo/src/github.com/tendermint/tendermint/scripts/json2wal/main.go:66 +0x73f
```

Closes #3146
2019-01-18 12:09:12 +04:00
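The fix is to give the line reader a buffer big enough for long WAL
entries; a sketch (the size is illustrative):

```go
package wal

import (
	"bufio"
	"os"
)

// openWAL returns a reader whose buffer is large enough that a single
// long WAL line no longer overflows it (bufio's default is only 4 KiB).
func openWAL(path string) (*bufio.Reader, *os.File, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, nil, err
	}
	return bufio.NewReaderSize(f, 1<<20), f, nil // 1 MiB, illustrative
}
```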
Henry Harder
4d36647eea Add ParadigmCore to ecosystem.json (#3145)
Adding the OrderStream network client (alpha), ParadigmCore, to the `ecosystem.json` file.
2019-01-18 11:46:34 +04:00
Ethan Buchman
8fd8f800d0 Bucky/fix evidence halt (#34)
* consensus: createProposalBlock function

* blockExecutor.CreateProposalBlock

- factored out of consensus pkg into a method on blockExec
- new private interfaces for mempool ("txNotifier") and evpool with one function each
- consensus tests still require more mempool methods

* failing test for CreateProposalBlock

* Fix bug in including evidence into the block

* evidence: change maxBytes to maxSize

* MaxEvidencePerBlock

- changed to return both the max number and the max bytes
- preparation for #2590

* changelog

* fix linter

* Fix from review

Co-Authored-By: ebuchman <ethan@coinculture.info>
2019-01-17 21:46:40 -05:00
Anton Kaliaev
87991059aa docs: fix RPC links (#3141) 2019-01-17 08:42:57 -05:00
Thane Thomson
c69dbb25ce Consolidates deadline tests for privval Unix/TCP (#3143)
* Consolidates deadline tests for privval Unix/TCP

Following on from #3115 and #3132, this converts fundamental timeout
errors in the client's `acceptConnection()` method so that they can
be detected by the TCP connection test.

Timeout deadlines are now tested for both TCP and Unix domain socket
connections.

There is also no need for the additional secret connection code: the
connection will time out at the `acceptConnection()` phase for TCP
connections, and will time out when attempting to obtain the
`RemoteSigner`'s public key for Unix domain socket connections.

* Removes extraneous logging

* Adds IsConnTimeout helper function

This commit adds a helper function to detect whether an error is either
a fundamental networking timeout error, or an `ErrConnTimeout` error
specific to the `RemoteSigner` class.

* Adds a test for the IsConnTimeout() helper function

* Separates tests logically for IsConnTimeout
2019-01-17 08:10:56 -05:00
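A sketch of such a helper, assuming a package-level ErrConnTimeout
sentinel for the signer-specific case:

```go
package privval

import (
	"errors"
	"net"
)

// ErrConnTimeout stands in for the RemoteSigner-specific timeout error.
var ErrConnTimeout = errors.New("remote signer timed out")

// IsConnTimeout reports whether err is either a fundamental network
// timeout or the signer's own timeout error.
func IsConnTimeout(err error) bool {
	if netErr, ok := err.(net.Error); ok && netErr.Timeout() {
		return true
	}
	return err == ErrConnTimeout
}
```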
Gautham Santhosh
bc8874020f docs: fix broken link (#3142) 2019-01-17 15:30:51 +04:00
Dev Ojha
55d7238708 Add comment to simple_merkle get_split_point (#3136)
* Add comment to simple_merkle get_split_point

* fix grammar error
2019-01-16 16:03:19 -05:00
Ethan Buchman
4a037f9fe6 Merge pull request #3138 from tendermint/master
Merge master back to develop
2019-01-16 16:02:51 -05:00
Ethan Buchman
aa40cfcbb9 Merge pull request #3135 from tendermint/release/v0.28.0
Release/v0.28.0
2019-01-16 16:01:58 -05:00
Ethan Buchman
6d6d103f15 fixes from review (#3137) 2019-01-16 13:41:37 -05:00
Ethan Buchman
239ebe2076 fix changelog fmt (#3134) 2019-01-16 10:21:15 -05:00
Ethan Buchman
0cba0e11b5 update changelog and upgrading (#3133) 2019-01-16 10:16:23 -05:00
Thane Thomson
d4e6720541 Expanding tests to cover Unix sockets version of client (#3132)
* Adds a random suffix to temporary Unix sockets during testing

* Adds Unix domain socket tests for client (FAILING)

This adds Unix domain socket tests for the privval client. Right now,
one of the tests (TestRemoteSignerRetry) fails, probably because the
Unix domain socket state is known instantaneously on both sides by the
OS. Committing this to collaborate on the error.

* Removes extraneous logging

* Completes testing of Unix sockets client version

This completes the testing of the client connecting via Unix sockets.
There are two specific tests (TestSocketPVDeadline and
TestRemoteSignerRetryTCPOnly) that are only relevant to TCP connections.

* Renames test to show TCP-specificity

* Adds testing into closures for consistency (forgot previously)

* Moves test specific to RemoteSigner into own file

As per discussion on #3132, `TestRemoteSignerRetryTCPOnly` doesn't
really belong with the client tests. This moves it into its own file
related to the `RemoteSigner` class.
2019-01-16 10:05:34 -05:00
Anton Kaliaev
dcb8f88525 add "chain_id" label for all metrics (#3123)
* add "chain_id" label for all metrics

Refs #3082

* fix labels extraction
2019-01-15 12:16:33 -05:00
Thane Thomson
a2a62c9be6 Merge pull request #3121 from thanethomson/release/v0.28.0
Adds tests for Unix sockets
2019-01-15 18:17:50 +02:00
Thane Thomson
3191ee8bad Dropping "construct" prefix as per #3121 2019-01-15 18:06:57 +02:00
Thane Thomson
308b7e3bbe Merge branch 'release/v0.28.0' into release/v0.28.0 2019-01-15 17:58:33 +02:00
Kunal Dhariwal
73ea5effe5 docs: update link for rpc docs (#3129) 2019-01-15 17:12:35 +04:00
Ethan Buchman
d1afa0ed6c privval: fixes from review (#3126)
https://github.com/tendermint/tendermint/pull/2923#pullrequestreview-192065694
2019-01-15 16:55:57 +04:00
Thane Thomson
ca00cd6a78 Make privval listener testing generic
This cuts out two tests by constructing test cases and iterating through
them, rather than having separate sets of tests for TCP and Unix listeners.
This is as per the feedback from #3121.
2019-01-15 10:14:41 +02:00
Anton Kaliaev
4daca1a634 return maxPerPage (not defaultPerPage) if per_page is greater than max (#3124)
it's more user-friendly.

Refs #3065
2019-01-14 17:35:31 -05:00
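A sketch of the clamping behaviour (the constant values are
illustrative stand-ins for the existing RPC defaults):

```go
package core

const (
	defaultPerPage = 30  // illustrative
	maxPerPage     = 100 // illustrative
)

// validatePerPage clamps the user-supplied per_page parameter:
// non-positive values fall back to the default, and oversized values
// are now capped at the maximum instead of reset to the default.
func validatePerPage(perPage int) int {
	if perPage < 1 {
		return defaultPerPage
	}
	if perPage > maxPerPage {
		return maxPerPage
	}
	return perPage
}
```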
Anton Kaliaev
bc00a032c1 makefile: fix build-docker-localnode target (#3122)
cd does not work because each line of a Makefile recipe is executed
in its own subprocess, so the commands have to be chained with && or ;
See https://stackoverflow.com/q/1789594/820520
for more details.

Fixes #3058
2019-01-14 17:33:33 -05:00
Anton Kaliaev
5f4d8e031e [log] fix year format (#3125)
Refs #3060
2019-01-14 14:10:13 -05:00
Ethan Buchman
7b2c4bb493 update ADR-020 (#3116) 2019-01-14 11:53:43 -05:00
Thane Thomson
5f93220c61 Adds tests for Unix sockets
As per #3115, adds simple Unix socket connect/accept deadline tests in
pretty much the same way as the TCP connect/accept deadline tests work.
2019-01-14 11:41:09 +02:00
Dev Ojha
ec53ce359b Simple merkle rfc compatibility (#2713)
* Begin simple merkle compatibility PR

* Fix query_test

* Use trillian test vectors

* Change the split point per RFC 6962

* update spec

* refactor innerhash to match spec

* Update changelog

* Address @liamsi's comments

* Write the comment requested by @liamsi
2019-01-13 18:02:38 -05:00
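Per RFC 6962, a tree of n leaves splits at the largest power of two
strictly less than n; a minimal sketch of that split-point function:

```go
package merkle

// getSplitPoint returns the largest power of two strictly less than n
// (n must be > 1); RFC 6962 puts the first k leaves in the left
// subtree and the remaining n-k in the right.
func getSplitPoint(n int) int {
	k := 1
	for k*2 < n {
		k *= 2
	}
	return k
}
```

For 5 leaves this yields a 4/1 split, for 6 a 4/2 split, matching the
RFC's Merkle tree hash definition.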
Ismail Khoffi
1f68318875 fix order of BlockID and Timestamp in Vote and Proposal (#3078)
* Consistent ordering of the Timestamp/BlockID fields in CanonicalVote and
CanonicalProposal

* update spec too

* Introduce and use IsZero & IsComplete:
 - update IsZero method according to spec and introduce IsComplete
 - use methods in validate basic to validate: proposals come with a
 "complete" blockId and votes are either complete or empty
 - update spec: BlockID.IsNil() -> BlockID.IsZero() and fix typo

* BlockID comes first

* fix tests
2019-01-13 17:56:36 -05:00
Ismail Khoffi
1ccc0918f5 More ProposerPriority tests (#2966)
* more proposer priority tests

 - test that we don't reset to zero when updating / adding
 - test that same power validators alternate

* add another test to track / simulate similar behaviour as in #2960

* address some of Chris' review comments

* address some more of Chris' review comments
2019-01-13 17:40:50 -05:00
Ethan Buchman
fc031d980b Bucky/v0.28.0 (#3119)
* changelog pending and upgrading

* linkify and version bump

* changelog shuffle
2019-01-13 17:15:34 -05:00
Zarko Milosevic
1895cde590 [WIP] Fill in consensus core details in ADR 030 (#2696)
* Initial work towards making ConsensusCore spec complete

* Initial version of executor and complete consensus
2019-01-13 14:47:00 -05:00
Gian Felipe
be00cd1add Hotfix/validating query result length (#3053)
* Validate that there are txs in the query results before looping through the array

* Created tests to validate the error has been fixed

* Added comments

* Fix misspelling

* check if the variable "skipCount" is bigger than zero. If it is not, we set it to 0. If it is, we do not do anything.

* using function that validates the skipCount variable

* undo Gopkg.lock changes
2019-01-13 14:34:29 -05:00
Ismail Khoffi
a6011c007d Close and retry a RemoteSigner on err (#2923)
* Close and recreate a RemoteSigner on err

* Update changelog

* Address Anton's comments / suggestions:

 - update changelog
 - restart TCPVal
 - shut down on `ErrUnexpectedResponse`

* re-init remote signer client with fresh connection if Ping fails

- add/update TODOs in secret connection
- rename tcp.go -> tcp_client.go, same with ipc to clarify their purpose

* account for the fact that `conn` returned by `waitConnection` can be `nil`

- also add TODO about RemoteSigner conn field

* Tests for retrying: IPC / TCP

 - shorter info log on success
 - set conn and use it in tests to close conn

* Tests for retrying: IPC / TCP

 - shorter info log on success
 - set conn and use it in tests to close conn
 - add rwmutex for conn field in IPC

* comments and doc.go

* fix ipc tests. fixes #2677

* use constants for tests

* cleanup some error statements

* fixes #2784, race in tests

* remove print statement

* minor fixes from review

* update comment on sts spec

* cosmetics

* p2p/conn: add failing tests

* p2p/conn: make SecretConnection thread safe

* changelog

* IPCVal signer refactor

- use a .reset() method
- don't use embedded RemoteSignerClient
- guard RemoteSignerClient with mutex
- drop the .conn
- expose Close() on RemoteSignerClient

* apply IPCVal refactor to TCPVal

* remove mtx from RemoteSignerClient

* consolidate IPCVal and TCPVal, fixes #3104

- done in tcp_client.go
- now called SocketVal
- takes a listener in the constructor
- make tcpListener and unixListener contain all the differences

* delete ipc files

* introduce unix and tcp dialer for RemoteSigner

* rename files

- drop tcp_ prefix
- rename priv_validator.go to file.go

* bring back listener options

* fix node

* fix priv_val_server

* fix node test

* minor cleanup and comments
2019-01-13 14:31:31 -05:00
Ethan Buchman
ef94a322b8 Make SecretConnection thread safe (#3111)
* p2p/conn: add failing tests

* p2p/conn: make SecretConnection thread safe

* changelog

* fix from review
2019-01-13 13:46:25 -05:00
Mauricio Serna
7f607d0ce2 docs: fix p2p readme links (#3109) 2019-01-11 17:41:02 -05:00
Anton Kaliaev
81c51cd4fc rpc: include peer's remote IP in /net_info (#3052)
Refs #3047
2019-01-11 09:24:45 -05:00
Ethan Buchman
51094f9417 update README (#3097)
* update README

* fix from review
2019-01-11 08:28:29 -05:00
Alessio Treglia
7644d27307 Ensure multisig keys have 20-byte address (#3103)
* Ensure multisig keys have 20-byte address

Use crypto.AddressHash() to avoid returning a 32-byte-long address.

Closes: #3102

* fix pointer

* fix test
2019-01-10 18:37:34 -05:00
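A sketch of what an AddressHash-style helper does, assuming the usual
SHA-256-truncated-to-20-bytes scheme:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// addressHash derives a fixed 20-byte address from arbitrary key bytes,
// so multisig keys get the same address length as ordinary keys.
func addressHash(bz []byte) []byte {
	h := sha256.Sum256(bz)
	return h[:20]
}

func main() {
	fmt.Printf("%X\n", addressHash([]byte("example-pubkey-bytes")))
}
```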
Alessio Treglia
764cfe33aa Don't use pointer receivers for PubKeyMultisigThreshold (#3100)
* Don't use pointer receivers for PubKeyMultisigThreshold

* test that showcases panic when PubKeyMultisigThreshold are used in sdk:

 - deserialization will fail in `readInfo` which tries to read a
 `crypto.PubKey` into a `localInfo` (called by
  cosmos-sdk/client/keys.GetKeyInfo)

* Update changelog

* Rename routeTable to nameTable, multisig key is no longer a pointer

* sed -i 's/PubKeyAminoRoute/PubKeyAminoName/g' `grep -lrw PubKeyAminoRoute .`

upon Jae's request

* AminoRoutes -> AminoNames

* sed -e 's/PrivKeyAminoRoute/PrivKeyAminoName/g'

* Update crypto/encoding/amino/amino.go

Co-Authored-By: alessio <quadrispro@ubuntu.com>
2019-01-10 17:47:20 -05:00
srmo
616c3a4bae cs: prettify logging of ignored votes (#3086)
Refs #3038
2019-01-06 12:00:12 +03:00
Piotr Husiatyński
04e97f599a fix build scripts (#3085)
* fix build scripts

Search for the right variable when introspecting Go code. `Version` was
renamed to `TMCoreSemVer`.

This is a regression introduced in
b95ac688af

* fix all `Version` introspections.

Use `TMCoreSemVer` instead of `Version`
2019-01-06 11:40:15 +03:00
Ethan Buchman
56a4fb4d72 add signing spec (#3061)
* add signing spec

* fixes from review

* more fixes

* fixes from review
2019-01-02 17:18:45 -08:00
srmo
49017a5787 3070 [docs] unindent text as it is supposed to behave the same as the parts before (#3075) 2019-01-01 10:42:39 +03:00
Ismail Khoffi
6a80412a01 Remove privval.GetAddress(), memoize pubkey (#2948)
privval: remove GetAddress(), memoize pubkey
2018-12-22 00:36:45 -05:00
needkane
2348f38927 Update node_info.go (#3059) 2018-12-21 17:37:28 -05:00
yutianwu
41e2eeee9c R4R: Split immutable and mutable parts of priv_validator.json (#2870)
* split immutable and mutable parts of priv_validator.json

* fix bugs

* minor changes

* retrig test

* delete scripts/wire2amino.go

* fix test

* fixes from review

* privval: remove mtx

* rearrange priv_validator.go

* upgrade path

* write tests for the upgrade

* fix for unsafe_reset_all

* add test

* add reset test
2018-12-21 16:58:27 -05:00
Ethan Buchman
a88e283a9d Merge pull request #3063 from tendermint/master
Merge master back to develop
2018-12-21 16:39:02 -05:00
Ethan Buchman
1e1ca15bcc Merge pull request #3062 from tendermint/release/v0.27.4
Release/v0.27.4
2018-12-21 16:35:45 -05:00
Ethan Buchman
c6604b5a9b changelog and version 2018-12-21 16:31:28 -05:00
Anton Kaliaev
c510f823e7 mempool: move tx to back, not front (#3036)
because we pop txs from the front if the cache is full

Refs #3035
2018-12-21 16:28:21 -05:00
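A sketch of the cache fix, assuming the usual container/list plus map
layout (names are illustrative):

```go
package mempool

import "container/list"

// txCache sketches the mempool cache: a FIFO list plus an index map.
// Eviction pops from the front, so a tx we want to keep must be pushed
// (or moved) to the back.
type txCache struct {
	list *list.List               // front = oldest, back = newest
	m    map[string]*list.Element // tx key -> list element
}

func (c *txCache) push(tx string, max int) {
	if e, ok := c.m[tx]; ok {
		c.list.MoveToBack(e) // was MoveToFront, which got evicted too early
		return
	}
	if c.list.Len() >= max {
		front := c.list.Front()
		c.list.Remove(front)
		delete(c.m, front.Value.(string))
	}
	c.m[tx] = c.list.PushBack(tx)
}
```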
Zach
daddebac29 circleci: update go version (#3051) 2018-12-19 21:45:12 +04:00
Zach
30f346fe44 docs: add rpc link to docs navbar and re-org sidebar (#3041)
* add rpc to docs navbar and close #3000

* Update config.js
2018-12-17 23:02:26 +04:00
Anton Kaliaev
4d8f29f79c set allow_duplicate_ip to false (#2992)
* config: cors options are arrays of strings, not strings

Fixes #2980

* docs: update tendermint-core/configuration.html page

* set allow_duplicate_ip to false

* in `tendermint testnet`, set allow_duplicate_ip to true

Refs #2712

* fixes after Ismail's review
2018-12-17 11:52:33 -05:00
Zach
2182f6a702 update go version & other cleanup (#3018)
* update go version & other cleanup

* fix lints

* go one.eleven.four

* keep circle on 1.11.3 for now
2018-12-17 11:51:53 -05:00
Anton Kaliaev
a06912b579 mempool: move tx to back, not front (#3036)
because we pop txs from the front if the cache is full

Refs #3035
2018-12-17 11:35:05 -05:00
Zach
0ff715125b fix docs / proxy app (#2988)
* fix docs / proxy app, closes #2986

* counter_serial

* review comments

* list all possible options

* add changelog entries
2018-12-16 23:34:13 -05:00
Ethan Buchman
385977d5e8 Merge pull request #3033 from tendermint/master
Merge pull request #3032 from tendermint/release/v0.27.3
2018-12-16 14:30:46 -05:00
Ethan Buchman
0138530df2 Merge pull request #3032 from tendermint/release/v0.27.3
Release/v0.27.3
2018-12-16 14:30:23 -05:00
Ethan Buchman
0533c73a50 crypto: revert to mainline Go crypto lib (#3027)
* crypto: revert to mainline Go crypto lib

We used to use a fork for a modified bcrypt so we could pass our own
randomness, but this was largely unnecessary, unused, and a burden.
So now we just use the mainline Go crypto lib.

* changelog

* fix tests

* version and changelog
2018-12-16 14:19:38 -05:00
Ethan Buchman
1beb45511c Merge pull request #3031 from tendermint/master
Merge pull request #3030 from tendermint/release/v0.27.2
2018-12-16 14:13:57 -05:00
Ethan Buchman
4a568fcedb Merge pull request #3030 from tendermint/release/v0.27.2
Release/v0.27.2
2018-12-16 14:13:30 -05:00
Ethan Buchman
b3141d7d02 makeNodeInfo returns error (#3029)
* makeNodeInfo returns error

* version and changelog
2018-12-16 14:05:58 -05:00
Jae Kwon
9a6dd96cba Revert to using defers in addrbook. (#3025)
* Revert to using defers in addrbook.  ValidateBasic->Validate since it requires DNS

* Update CHANGELOG_PENDING
2018-12-16 12:27:16 -05:00
Ethan Buchman
9fa959619a Merge pull request #3026 from tendermint/master
Merge pull request #3023 from tendermint/release/v0.27.1
2018-12-16 12:25:19 -05:00
Ethan Buchman
1f09818770 Merge pull request #3023 from tendermint/release/v0.27.1
Release/v0.27.1
2018-12-16 12:24:54 -05:00
Ethan Buchman
e4806f980b Bucky/v0.27.1 (#3022)
* update changelog

* linkify

* changelog and version
2018-12-15 15:58:09 -05:00
Anton Kaliaev
b53a2712df docs: networks/docker-compose: small fixes (#3017) 2018-12-15 15:38:13 -05:00
Hleb Albau
a75dab492c #2980 fix cors doc (#3013) 2018-12-15 15:32:35 -05:00
Ethan Buchman
7c9e767e1f 2980 fix cors (#3021)
* config: cors options are arrays of strings, not strings

Fixes #2980

* docs: update tendermint-core/configuration.html page

* set allow_duplicate_ip to false

* in `tendermint testnet`, set allow_duplicate_ip to true

Refs #2712

* fixes after Ismail's review

* Revert "set allow_duplicate_ip to false"

This reverts commit 24c1094ebcf2bd35f2642a44d7a1e5fb5c178fb1.
2018-12-15 15:26:27 -05:00
JamesRay
f82a8ff73a During replay, when appHeight==0, only saveState if stateHeight is also 0 (#3006)
* optimize addProposalBlockPart

* optimize addProposalBlockPart

* if ProposalBlockParts and LockedBlockParts both exist, let LockedBlockParts overwrite ProposalBlockParts.

* fix tryAddBlock

* broadcast lockedBlockParts in higher priority

* when appHeight==0, it's better to fetch genDoc than state.validators.

* not save state if replay from height 1

* only save state if replay from height 1 when stateHeight is also 1

* only save state if replay from height 1 when stateHeight is also 1

* only save state if replay from height 0 when stateHeight is also 0

* handshake info's response version only updates when stateHeight==0

* save the handshake responseInfo appVersion
2018-12-15 14:33:30 -05:00
Anton Kaliaev
ae275d791e mempool: notifyTxsAvailable if there're txs left, but recheck=false (#2991)
(left after committing a block)

Fixes #2961

--------------
ORIGINAL ISSUE

Tendermint version : 0.26.4-b771798d

ABCI app : kv-store

Environment:

OS (e.g. from /etc/os-release): macOS 10.14.1

What happened: Set mempool.recheck = false and create empty block =
false in config.toml. When transactions get added right between a new
empty block being proposed and committed, the proposer won't propose a
new block for those transactions immediately after. Those transactions
are stuck in the mempool until a new transaction is added and triggers
the proposer.

What you expected to happen: If there is a transaction left in the
mempool, a new block should be proposed immediately.

Have you tried the latest version: yes

How to reproduce it (as minimally and precisely as possible): Fire two
transactions using broadcast_tx_sync with a specific delay between
them. (You may need to try multiple times before the right delay is
found; on my machine the delay is 0.98s.)

Logs (paste a small part showing an error (< 10 lines) or link a
pastebin, gist, etc. containing more of the log file):
https://pastebin.com/0Wt6uhPF

Config (you can paste only the changes you've made): [mempool] recheck =
false create_empty_block = false

Anything else we need to know: In mempool.go, we found that the
proposer will immediately propose a new block if

the last committed block has some transactions (causing AppHash to
change) or mem.notifyTxsAvailable() is called. Our scenario is as
follows.

A transaction is fired; it creates 1 block with 1 tx (lines 1-4 in
the log) and 1 empty block. After the empty block is proposed but before
it is committed, a second transaction is fired and added to the mempool
(lines 8-16). Now, the last committed block is empty, and
mem.notifyTxsAvailable() will be called only if mempool.recheck = true.
So the proposer won't immediately propose a new block, causing the
second transaction to be stuck in the mempool until another transaction
is added and triggers mem.notifyTxsAvailable().
2018-12-15 14:27:02 -05:00
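A sketch of the fix, under the assumption (consistent with the issue
report) that the notification previously only fired on the recheck
path; all names here are illustrative stand-ins:

```go
package mempool

// Mempool sketches just the fields relevant to the fix.
type Mempool struct {
	txs          [][]byte
	recheck      bool
	txsAvailable chan struct{}
}

func (mem *Mempool) Size() int { return len(mem.txs) }

func (mem *Mempool) notifyTxsAvailable() {
	select {
	case mem.txsAvailable <- struct{}{}:
	default: // non-blocking: one pending notification is enough
	}
}

// Update removes committed txs from the pool; the fix is the else
// branch, which fires the notification even with recheck disabled.
func (mem *Mempool) Update(committed [][]byte) {
	mem.removeTxs(committed)
	if mem.Size() > 0 {
		if mem.recheck {
			mem.recheckTxs() // existing path: notification happens after recheck
		} else {
			mem.notifyTxsAvailable() // new path for recheck = false
		}
	}
}

func (mem *Mempool) removeTxs(committed [][]byte) { /* elided */ }
func (mem *Mempool) recheckTxs()                  { /* elided */ }
```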
Zach
f5cca9f121 docs: update DOCS_README (#3019)
Co-Authored-By: zramsay <zach.ramsay@gmail.com>
2018-12-15 23:01:28 +04:00
Zach
3fbe9f235a docs: add edit on Github links (#3014) 2018-12-14 09:32:09 +04:00
mircea-c
f7e463f6d3 circleci: add a job to automatically update docs (#3005) 2018-12-12 13:48:53 +04:00
Dev Ojha
bc2a9b20c0 mempool: add a comment and missing changelog entry (#2996)
Refs #2994
2018-12-12 13:31:35 +04:00
Zach
9e075d8dd5 docs: enable full-text search (#3004) 2018-12-12 13:20:02 +04:00
Zach
8003786c9a docs: fixes from 'first time' review (#2999) 2018-12-11 22:21:54 +04:00
Daniil Lashin
2594cec116 add UnconfirmedTxs/NumUnconfirmedTxs methods to HTTP/Local clients (#2964) 2018-12-11 12:41:02 +04:00
Dev Ojha
df32ea4be5 Make testing logger that doesn't write to stdout (#2997) 2018-12-11 12:17:21 +04:00
Anton Kaliaev
f69e2c6d6c p2p: set MConnection#created during init (#2990)
Fixes #2715

In crawlPeersRoutine, which runs when seedMode is enabled, there is
logic that disconnects peers at 3-hour intervals based on a duration
value. The duration value is calculated from the created field of
MConnection. When MConnection is created for the first time, the
created value is never initialized, so a peer is not disconnected every
3 hours but every time the routine runs. As a result, normal nodes
connect to the seedNode and are disconnected immediately, so address
exchange does not work properly.

https://github.com/tendermint/tendermint/blob/master/p2p/pex/pex_reactor.go#L629
This point does not work correctly.
I think
https://github.com/tendermint/tendermint/blob/master/p2p/conn/connection.go#L148
the created variable is missing the current-time assignment.
2018-12-10 15:24:58 -05:00
Dev Ojha
d5d0d2bd77 Make mempool fail txs with negative gas wanted (#2994)
This is only one part of #2989. We also need to fix the application,
and add rules to consensus to ensure this.
2018-12-10 12:56:49 -05:00
Anton Kaliaev
41eaf0e31d turn off strict routability every time (#2983)
previously, we were turning it off only when the --populate-persistent-peers
flag was used, which is obviously incorrect.

Fixes https://github.com/cosmos/cosmos-sdk/issues/2983
2018-12-09 13:29:51 -05:00
Zach
68b467886a docs: relative links in docs/spec/readme.md, js-amino lib (#2977)
Co-Authored-By: zramsay <zach.ramsay@gmail.com>
2018-12-07 19:41:19 +04:00
Leo Wang
2f64717bb5 return an error if validator set is empty in genesis file and after InitChain (#2971)
Fixes #2951
2018-12-07 12:30:58 +04:00
Anton Kaliaev
c4a1cfc5c2 don't ignore key when executing CONTAINS (#2924)
Fixes #2912
2018-12-07 12:28:02 +04:00
Ethan Buchman
0f96bea41d Merge pull request #2976 from tendermint/master
Merge pull request #2975 from tendermint/release/v0.27.0
2018-12-05 16:51:04 -05:00
Ethan Buchman
9c236ffd6c Merge pull request #2975 from tendermint/release/v0.27.0
Release/v0.27.0
2018-12-05 16:50:36 -05:00
Ethan Buchman
9f8761d105 update changelog and upgrading (#2974) 2018-12-05 15:19:30 -05:00
Anton Kaliaev
5413c11150 kv indexer: add separator to start key when matching ranges (#2925)
* kv indexer: add separator to start key when matching ranges

to avoid including false positives

Refs #2908

* refactor code

* add a test case
2018-12-05 14:21:46 -05:00
Ethan Buchman
a14fd8eba0 p2p: fix peer count mismatch #2332 (#2969)
* p2p: test case for peer count mismatch #2332

* p2p: fix peer count mismatch #2332

* changelog

* use httptest.Server to scrape Prometheus metrics
2018-12-05 07:32:27 -05:00
Ethan Buchman
1bb7e31d63 p2p: panic on transport error (#2968)
* p2p: panic on transport error

Addresses #2823. Currently, the acceptRoutine exits if the transport returns
an error trying to accept a new connection. Once this happens, the node
can't accept any new connections. So here, we panic instead. While we
could potentially be more intelligent by rerunning the acceptRoutine, the
error may indicate something more fundamental (e.g. a file descriptor limit)
that requires a restart anyways. We can leave it to process managers to
handle that restart, and notify operators about the panic.

* changelog
2018-12-04 19:16:06 -05:00
Ethan Buchman
222b8978c8 Minor log changes (#2959)
* node: allow state and code to have diff block versions

* node: pex is a log module
2018-12-04 08:30:29 -05:00
Anton Kaliaev
d9a1aad5c5 docs: add client#Start/Stop to examples in RPC docs (#2939)
follow-up on https://github.com/tendermint/tendermint/pull/2936
2018-12-03 16:17:06 +04:00
Anton Kaliaev
8ef0c2681d check if deliverTxResCh is still open, return an err otherwise (#2947)
deliverTxResCh, like any other eventBus (pubsub) channel, is closed when
eventBus is stopped. We must check if the channel is still open. The
alternative approach is to not close any channels, which seems a bit
odd.

Fixes #2408
2018-12-03 16:15:36 +04:00
Ismail Khoffi
c4d93fd27b explicitly type MaxTotalVotingPower to int64 (#2953) 2018-11-30 14:43:16 -05:00
Ethan Buchman
dc2a338d96 Bucky/v0.27.0 (#2950)
* update changelog

* changelog, upgrading, version
2018-11-30 14:05:16 -05:00
Ismail Khoffi
725ed7969a Add some ProposerPriority tests (#2946)
* WIP: tests for #2785

* rebase onto develop

* add Bucky's test without changing ValidatorSet.Update

* make TestValidatorSetBasic fail

* add ProposerPriority preserving fix to ValidatorSet.Update to fix
TestValidatorSetBasic

* fix randValidator_ to stay in bounds of MaxTotalVotingPower

* check for expected proposer and remove some duplicate code

* actually limit the voting power of random validator ...

* fix test
2018-11-29 17:03:41 -05:00
Ethan Buchman
44b769b1ac types: ValidatorSet.Update preserves Accum (#2941)
* types: ValidatorSet.Update preserves ProposerPriority

This solves the other issue discovered as part of #2718,
where Accum (now called ProposerPriority) is reset to
0 every time a validator is updated.

* update changelog

* add test

* update comment

* Update types/validator_set_test.go

Co-Authored-By: ebuchman <ethan@coinculture.info>
2018-11-29 08:26:12 -05:00
Anton Kaliaev
380afaa678 docs: update ecosystem.json: add Rust ABCI (#2945) 2018-11-29 15:57:11 +04:00
Ismail Khoffi
b30c34e713 rename Accum -> ProposerPriority: (#2932)
- rename fields, methods, comments, tests
2018-11-28 15:35:09 -05:00
Dev Ojha
4039276085 remove unnecessary "crypto" import alias (#2940) 2018-11-28 23:53:04 +04:00
Ismail Khoffi
3f987adc92 Set accum of freshly added validator -(total voting power) (#2785)
* set the accum of a new validator to (-total voting power):

- disincentivize validators from unbonding and then rebonding to reset
their negative Accum to zero

additional unrelated changes:
- do not capitalize error msgs
- fix typo

* review comments: (re)capitalize errors & delete obsolete comments

* More changes suggested by @melekes

* WIP: do not batch clip (#2809)

* subtract avgAccum on each iteration

- temporarily skip test

* remove unused method safeMulClip / safeMul

* always subtract the avg accum

 - temp. skip another test

* remove overflow / underflow tests & add tests for avgAccum:

- add test for computeAvgAccum
- as we subtract the avgAccum now, we will not trivially over/underflow

* address @cwgoes' comments

* shift by avg at the end of IncrementAccum

* Add comment to MaxTotalVotingPower

* Guard inputs to not exceed MaxTotalVotingPower

* Address review comments:

 - do not fetch current validator from set again
 - update error message

* Address a few review comments:

 - fix typo
 - extract variable

* address more review comments:

 - clarify 1.125*totalVotingPower == totalVotingPower + (totalVotingPower >> 3)

* review comments: panic instead of "clipping":

 - total voting power is guarded to not exceed MaxTotalVotingPower ->
 panic if this invariant is violated

* fix failing test
2018-11-28 13:12:17 -05:00
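A sketch of the rule this commit describes: a freshly added validator
starts at roughly -1.125x the total voting power, computed without
floats as totalPower + totalPower>>3 (field names are illustrative):

```go
package types

// Validator sketches just the fields relevant here.
type Validator struct {
	VotingPower      int64
	ProposerPriority int64
}

// addValidator places a new validator at -(1.125 * totalVotingPower),
// i.e. -(total + total>>3), so leaving and rejoining the set cannot be
// used to reset a negative priority back to zero.
func addValidator(val *Validator, totalVotingPower int64) {
	val.ProposerPriority = -(totalVotingPower + (totalVotingPower >> 3))
}
```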
Ethan Buchman
b11788d36d types: NewValidatorSet doesn't panic on empty valz list (#2938)
* types: NewValidatorSet doesn't panic on empty valz list

* changelog
2018-11-28 13:09:29 -05:00
Daniil Lashin
9adcfe2804 docs: add client.Start() to RPC WS examples (#2936) 2018-11-28 20:55:18 +04:00
Zach
3d15579e0c docs: fix js-abci example (#2935) 2018-11-28 20:29:26 +04:00
Dev Ojha
4571f0fbe8 Enforce validators can only use the correct pubkey type (#2739)
* Enforce validators can only use the correct pubkey type

* adapt to variable renames

* Address comments from #2636

* separate updating and validation logic

* update spec

* Add test case for TestStringSliceEqual, clarify slice copying code

* Address @ebuchman's comments

* Split up testing validator update execution, and its validation
2018-11-28 09:09:27 -05:00
Ethan Buchman
8a73feae14 Merge pull request #2934 from tendermint/master
Merge pull request #2922 from tendermint/release/v0.26.4
2018-11-28 08:56:39 -05:00
srmo
e291fbbebe 2871 remove proposalHeartbeat infrastructure (#2874)
* 2871 remove proposalHeartbeat infrastructure

* 2871 add preliminary changelog entry
2018-11-28 08:52:34 -05:00
Jae Kwon
416d143bf7 R4R: Swap start/end in ReverseIterator (#2913)
* Swap start/end in ReverseIterator

* update CHANGELOG_PENDING

* fixes from review
2018-11-28 08:49:24 -05:00
Daniil Lashin
7213869fc6 Refactor updateState #2865 (#2929)
* Refactor updateState #2865

* Apply suggestions from code review

Co-Authored-By: danil-lashin <danil-lashin@yandex.ru>

* Apply suggestions from code review
2018-11-28 08:32:16 -05:00
Anton Kaliaev
ef9902e602 docs: small improvements (#2933)
* update docs

- make install_c cmd (install)
- explain node IDs (quick-start)
- update UPGRADING section (using-tendermint)

* use git clone with JS example

JS devs may not have Go installed and we should not force them to.

* rewrite sentence
2018-11-28 08:25:23 -05:00
Ethan Buchman
b771798d48 Merge pull request #2922 from tendermint/release/v0.26.4
Release/v0.26.4
2018-11-27 08:51:45 -05:00
Ethan Buchman
1abf34aa91 Prepare v0.26.4 changelog (#2921)
* prepare changelog

* linkify changelog

* changelog and version

* update changelog
2018-11-27 08:43:21 -05:00
Anton Kaliaev
92dc5fc77a don't return false positives when searching for a prefix of a tag value (#2919)
Fixes #2908
2018-11-27 08:12:28 -05:00
nagarajmanjunath
bef39f3346 Updated Marshal and unmarshal methods to make compatible with protobuf (#2918)
* updates in grpc Marshal and unmarshal

* update comments
2018-11-27 08:07:20 -05:00
Anton Kaliaev
94e63be922 [indexer] order results by index if height is the same (#2900)
Fixes #2775
2018-11-27 07:53:06 -05:00
Anton Kaliaev
9570ac4d3e rpc: Fix tx.height range queries (#2899)
Modify lookForHeight to return a height only if there's an equal
operator. Previously, it was returning a height even for range
conditions: "height < 10000".

Fixes #2759
2018-11-27 07:47:50 -05:00
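A sketch of the corrected lookup, with an illustrative condition type
standing in for the real query package:

```go
package core

// condition sketches a parsed query condition like "tx.height = 10".
type condition struct {
	tag     string
	op      string // "=", "<", "<=", ">", ">="
	operand int64
}

// lookForHeight returns a height only for an equality condition;
// returning one for range operators ("height < 10000") short-circuited
// range queries down to a single height.
func lookForHeight(conditions []condition) int64 {
	for _, c := range conditions {
		if c.tag == "tx.height" && c.op == "=" {
			return c.operand
		}
	}
	return 0
}
```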
Jernej Kos
99b9c9bf60 types: Emit tags from BeginBlock/EndBlock (#2747)
This commit makes both EventNewBlock and EventNewBlockHeader emit tags
on the event bus, so subscribers can use them in queries.
2018-11-26 22:21:42 -05:00
Ethan Buchman
47a0669d12 Fix fast sync getting stuck on a wrong block #2457 (#2731)
* fix: fastsync may get stuck on a wrong block

* fixes from updates

* fixes from review

* Align spec with the changes

* fmt
2018-11-26 15:31:11 -05:00
JamesRay
fe3b97fd66 It's better to read from genDoc than from state.validators when appHeight==0 in replay (#2893)
* optimize addProposalBlockPart

* optimize addProposalBlockPart

* if ProposalBlockParts and LockedBlockParts both exist, let LockedBlockParts overwrite ProposalBlockParts.

* fix tryAddBlock

* broadcast lockedBlockParts in higher priority

* when appHeight==0, it's better to fetch genDoc than state.validators.
2018-11-26 08:03:08 -05:00
Ismail Khoffi
56052c0a87 update encoding spec (#2903)
Quick fix for #2902
2018-11-26 12:24:32 +04:00
Anton Kaliaev
98e442a8de return the initially allowed level if we encounter an allowed key (#2889)
Fixes #2868 where module=main setting overrides all others
2018-11-25 23:34:22 -05:00
Tomas Tauber
b12488b5f1 Handling integer IDs in JSON-RPC requests -- fixes #2366 (#2811)
* Fixed accepting integer IDs in requests for Tendermint RPC server (#2366)

* added a wrapper interface `jsonrpcid` that represents both string and int IDs in JSON-RPC requests/responses + custom JSON unmarshallers

* changed client-side code in RPC that uses it

* added extra tests for integer IDs

* updated CHANGELOG_PENDING, as suggested by PR instructions

* addressed PR comments

* added table driven tests for request type marshalling/unmarshalling
* expanded handler test to check IDs
* changed pending changelog note

* changed json rpc request/response unmarshalling to use empty interfaces and type switches on ID

* some cleanup
2018-11-25 23:33:40 -05:00
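A sketch of the wrapper-interface approach described above (the real
implementation also handles the JSON (un)marshalling):

```go
package rpctypes

// jsonrpcid hides both of JSON-RPC's legal ID shapes behind one
// interface, so requests with numeric IDs no longer fail to parse.
type jsonrpcid interface {
	isJSONRPCID()
}

type JSONRPCStringID string
type JSONRPCIntID int

func (JSONRPCStringID) isJSONRPCID() {}
func (JSONRPCIntID) isJSONRPCID()    {}

// idFromInterface is the type switch applied after unmarshalling the
// raw "id" field into an empty interface.
func idFromInterface(v interface{}) (jsonrpcid, bool) {
	switch id := v.(type) {
	case string:
		return JSONRPCStringID(id), true
	case float64: // encoding/json decodes JSON numbers as float64
		return JSONRPCIntID(int(id)), true
	default:
		return nil, false
	}
}
```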
Anton Kaliaev
b487feba42 node: refactor privValidator ext client code & tests (#2895)
* update ConsensusState#OnStop comment

* consensus: set logger for WAL in tests

* refactor privValidator client code and tests

follow-up on https://github.com/tendermint/tendermint/pull/2866
2018-11-21 21:24:13 +04:00
Joe Bowman
72f86b5192 [pv] add ability to use ipc validator (#2866)
Ref #2827

(I have since seen #2847 which is a fix for the same issue; this PR has tests and docs too ;) )
2018-11-21 10:45:20 +04:00
Jae Kwon
42592d9ae0 IncrementAccum upon RPC /validators; Sanity checks and comments (#2808) 2018-11-21 10:43:02 +04:00
Dev Ojha
1610a05cbd Remove counter from every mempoolTx (#2891)
Within every tx in the mempool, we store a 64-bit counter
as an index for when it was inserted into the mempool.
This counter doesn't really serve any purpose.
It was likely added for debugging at one point.
Removing the counter reclaims memory,
which enables greater mempool sizes / reduces resource use at the same size.

Closes #2835
2018-11-21 10:33:41 +04:00
Anton Kaliaev
2d525bf2b8 mempool: add txs from Update to cache
We should add txs that come in from mempool.Update to the mempool's
cache, so that they never hit a potentially expensive check tx.

Originally posted by @ValarDragon in #2846
https://github.com/tendermint/tendermint/issues/2846#issuecomment-439216656

Refs #2855
2018-11-20 10:47:30 +04:00
Anton Kaliaev
e9efbfe267 refactor mempool.Update
- rename filterTxs to removeTxs
- move txsMap into removeTxs func
- rename goodTxs to txsLeft
2018-11-20 10:47:30 +04:00
Anton Kaliaev
7b883a5457 docs/install: prepend sudo to the cp into /usr/local (#2885)
Closes #2884
2018-11-20 10:27:58 +04:00
cong
fd8d1d6b69 add BlockTimeIota to the config.toml (#2878)
Refs #2877
2018-11-19 20:15:23 +04:00
Ethan Buchman
5abdd254e7 Merge pull request #2875 from tendermint/master
Merge pull request #2873 from tendermint/release/v0.26.3
2018-11-17 18:25:46 -05:00
Ethan Buchman
22dcc92232 Merge pull request #2873 from tendermint/release/v0.26.3
Release/v0.26.3
2018-11-17 18:25:12 -05:00
Ethan Buchman
ccf6b2b512 Bucky/v0.26.3 (#2872)
* update CONTRIBUTING with notes on CHANGELOG

* update changelog

* changelog and version
2018-11-17 16:04:05 -05:00
srmo
1466a2cc9f #2815 do not broadcast heartbeat proposal when we are non-validator (#2819)
* #2815 do not broadcast heartbeat proposal when we are non-validator

* #2815 adding preliminary changelog entry

* #2815 cosmetics and added test

* #2815 missed a little detail

- it's enough to call getAddress() once here

* #2815 remove debug logging from tests

* #2815 OK. I seem to be doing something fundamentally wrong here

* #2815 next iteration of proposalHeartbeat tests

- try and use "ensure" pattern in common_test

* 2815 incorporate review comments
2018-11-17 15:23:39 -05:00
Ethan Buchman
6168b404a7 p2p: NewMultiplexTransport takes an MConnConfig (#2869)
* p2p: NewMultiplexTransport takes an MConnConfig

* changelog

* move test func to test file
2018-11-17 03:16:49 -05:00
Zaki Manian
e6fc10faf6 R4R: Add timeouts to http servers (#2780)
* Replace our current HTTP servers, where connections stay open forever, with ones that have timeouts, to prevent file descriptor exhaustion

* Use the correct handler

* Put in go routines

* fix err

* changelog

* rpc: export Read/WriteTimeout

The `broadcast_tx_commit` endpoint has its own timeout.
If this is longer than the http server's WriteTimeout, the
user will receive an error. Here, we export the WriteTimeout
and set the broadcast_tx_commit timeout to be less than it.

In the future, we should use a config struct for the timeouts
to avoid the need to export. The broadcast_tx_commit timeout
may also become configurable, but we must check that it's less
than the server's WriteTimeout.
2018-11-17 03:10:22 -05:00
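A sketch of the change, using the standard library's http.Server
timeout fields (the values are illustrative):

```go
package main

import (
	"net/http"
	"time"
)

// startHTTPServer serves h with read/write deadlines, so idle or
// stalled clients can no longer pin file descriptors forever.
func startHTTPServer(addr string, h http.Handler) error {
	srv := &http.Server{
		Addr:        addr,
		Handler:     h,
		ReadTimeout: 10 * time.Second, // illustrative
		// broadcast_tx_commit keeps its own timeout below this value,
		// as described above.
		WriteTimeout: 10 * time.Second, // illustrative
	}
	return srv.ListenAndServe()
}
```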
Anton Kaliaev
60018d6148 comment out until someone decides to tackle #2285 (#2760)
The current code results in a panic, and we certainly don't want that.
https://github.com/tendermint/tendermint/pull/2286#issuecomment-418281846
2018-11-16 18:17:07 -05:00
Ethan Buchman
0d5e0d2f13 p2p/conn: FlushStop. Use in pex. Closes #2092 (#2802)
* p2p/conn: FlushStop. Use in pex. Closes #2092

In seed mode, we call StopPeer immediately after Send.
Since flushing msgs to the peer happens in the background,
the peer connection is often closed before the messages are
actually sent out. The new FlushStop method allows all msgs
to first be written and flushed out on the conn before it is closed.

* fix dummy peer

* typo

* fixes from review

* more comments

* ensure pex doesn't call FlushStop more than once

FlushStop is not safe to call more than once,
but we call it from Receive in a go-routine so Receive
doesn't block.

To ensure we only call it once, we use the lastReceivedRequests
map - if an entry already exists, then FlushStop should already have
been called and we can return.
2018-11-16 17:44:19 -05:00
Daniil Lashin
2cfdef6111 Make "Update to validators" msg value pretty (#2848)
* Make "Update to validators" msg value pretty #2765

* New format for logging validator updates

* Refactor logging validator updates

* Fix changelog item

* fix merge conflict
2018-11-16 17:38:22 -05:00
krhubert
b90e06a37c More verbose error log (#2864) 2018-11-16 17:36:42 -05:00
Anton Kaliaev
e6a0d098e8 small fixes to spec & http_server & Vagrantfile (#2859)
* Vagrantfile: install dev_tools

Follow-up on https://github.com/tendermint/tendermint/pull/2824

* update consensus params spec

* fix test name

* rpc_test: panic if failed to start listener

also
- remove http_server#MustListen
- align StartHTTPServer and StartHTTPAndTLSServer functions

* dep: allow minor releases for grpc
2018-11-16 12:58:30 -05:00
Ethan Buchman
d8ab8509de p2p: log 'Send failed' on Debug (#2857) 2018-11-16 11:37:58 +04:00
Ethan Buchman
85bba82154 Merge pull request #2858 from tendermint/master
Merge pull request #2851 from tendermint/release/v0.26.2
2018-11-15 18:42:47 -05:00
Ethan Buchman
d5a05eccba Merge pull request #2851 from tendermint/release/v0.26.2
Release/v0.26.2
2018-11-15 18:41:54 -05:00
kevlubkcm
a676c71678 [R4R] Add proposer to NewRound event and proposal info to CompleteProposal event (#2767)
* add proposer info to EventCompleteProposal

* create separate EventData structure for CompleteProposal

* can't use rs.Proposal to get BlockID because it is not guaranteed to be set yet

* copying RoundState isn't helping us here

* add Step back to make compatible with original RoundState event. update changelog

* add NewRound event

* fix test

* remove unneeded RoundState

* put height round step into a struct

* pull out ValidatorInfo struct. add ensureProposal assert

* remove height-round-state sub-struct refactor

* minor fixes from review
2018-11-15 18:40:42 -05:00
Zach
c033975a53 docs ADR (#2828)
* wip

* use same ADR template as SDK

* finish docs adr

* lil fixes
2018-11-15 18:08:24 -05:00
Anton Kaliaev
06225e332e Config option for JSON output formatter (#2843)
* Introduce a structured logging option

* rename StructuredLog to LogFormat

* add changelog entry

* move log_format under log_level
2018-11-15 18:05:06 -05:00
Alessio Treglia
b646437ec7 Decouple StartHTTP{,AndTLS}Server from Listen() (#2791)
* Decouple StartHTTP{,AndTLS}Server from Listen()

This should help solve cosmos/cosmos-sdk#2715

* Fix small mistake

* Update StartGRPCServer

* s/rpc/rpcserver/

* Start grpccore.StartGRPCServer in a goroutine

* Reinstate l.Close()

* Fix rpc/lib/test/main.go

* Update code comment

* update changelog and comments

* fix tm-monitor. more comments
2018-11-15 15:33:04 -05:00
Zach
be8c2d5018 use a github link (#2849) 2018-11-15 14:41:40 -05:00
Anton Kaliaev
e93b492efe do not pin deps to exact versions (#2844)
because
- they are locked in .lock file already
- individual dependencies can be updated with `dep ensure -update XXX`
- review process (and ^^^) should help us prevent accidental updates

Closes #2798
2018-11-15 14:40:18 -05:00
Ethan Buchman
9973decff9 changelog, versionbump (#2850) 2018-11-15 12:16:24 -05:00
Ethan Buchman
a0412357c1 update some top-level markdown files (#2841)
* update some top-level markdown files

* Update README.md

Co-Authored-By: ebuchman <ethan@coinculture.info>
2018-11-15 12:04:47 -05:00
Zeyu Zhu
a70a53254d Optimize: using parameters in func (#2845)
Signed-off-by: Zeyu Zhu <zhuzeyu0409@gmail.com>
2018-11-15 19:57:13 +04:00
zramsay
1ce24a6282 docs: update config: ref #2800 & #2837 2018-11-15 10:23:00 +04:00
zramsay
814a88a69b more maintainable/useful install scripts 2018-11-15 10:23:00 +04:00
Hleb Albau
6353862ac0 2582 Enable CORS on RPC API (#2800) 2018-11-14 16:47:41 +04:00
Zach
3af11c43f2 cleanup ecosystem docs (#2829) 2018-11-14 14:52:01 +04:00
Zach
27fcf96556 update genesis docs, closes #2814 (#2831) 2018-11-14 14:34:10 +04:00
Zach
bb0e17dbf0 arm: add install script, fix Makefile (#2824)
* be like the SDK makefile

* arm: add install script, fix Makefile

* ...
2018-11-14 14:17:07 +04:00
Anton Kaliaev
5a6822c8ac abci: localClient improvements & bugfixes & pubsub Unsubscribe issues (#2748)
* use READ lock/unlock in ConsensusState#GetLastHeight

Refs #2721

* do not use defers when there's no need

* fix peer formatting (output its address instead of the pointer)

```
[54310]: E[11-02|11:59:39.851] Connection failed @ sendRoutine              module=p2p peer=0xb78f00 conn=MConn{74.207.236.148:26656} err="pong timeout"
```

https://github.com/tendermint/tendermint/issues/2721#issuecomment-435326581

* panic if peer has no state

https://github.com/tendermint/tendermint/issues/2721#issuecomment-435347165

It's confusing that sometimes we check if a peer has a state, but most
of the time we expect it to be there

1. add79700b5/mempool/reactor.go (L138)
2. add79700b5/rpc/core/consensus.go (L196) (edited)

I will change everything to always assume peer has a state and panic
otherwise

that should help identify issues earlier

* abci/localclient: extend lock on app callback

App callback should be protected by lock as well (note this was already
done for InitChainAsync, why not for others???). Otherwise, when we
execute the block, tx might come in and call the callback in the same
time we're updating it in execBlockOnProxyApp => DATA RACE

Fixes #2721

Consensus state is locked

```
goroutine 113333 [semacquire, 309 minutes]:
sync.runtime_SemacquireMutex(0xc00180009c, 0xc0000c7e00)
        /usr/local/go/src/runtime/sema.go:71 +0x3d
sync.(*RWMutex).RLock(0xc001800090)
        /usr/local/go/src/sync/rwmutex.go:50 +0x4e
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus.(*ConsensusState).GetRoundState(0xc001800000, 0x0)
        /root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus/state.go:218 +0x46
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus.(*ConsensusReactor).queryMaj23Routine(0xc0017def80, 0x11104a0, 0xc0072488f0, 0xc0072489c0)
        /root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus/reactor.go:735 +0x16d
created by github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus.(*ConsensusReactor).AddPeer
        /root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus/reactor.go:172 +0x236
```

because localClient is locked

```
goroutine 1899 [semacquire, 309 minutes]:
sync.runtime_SemacquireMutex(0xc00003363c, 0xc0000cb500)
        /usr/local/go/src/runtime/sema.go:71 +0x3d
sync.(*Mutex).Lock(0xc000033638)
        /usr/local/go/src/sync/mutex.go:134 +0xff
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/abci/client.(*localClient).SetResponseCallback(0xc0001fb560, 0xc007868540)
        /root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/abci/client/local_client.go:32 +0x33
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/proxy.(*appConnConsensus).SetResponseCallback(0xc00002f750, 0xc007868540)
        /root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/proxy/app_conn.go:57 +0x40
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/state.execBlockOnProxyApp(0x1104e20, 0xc002ca0ba0, 0x11092a0, 0xc00002f750, 0xc0001fe960, 0xc000bfc660, 0x110cfe0, 0xc000090330, 0xc9d12, 0xc000d9d5a0, ...)
        /root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/state/execution.go:230 +0x1fd
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/state.(*BlockExecutor).ApplyBlock(0xc002c2a230, 0x7, 0x0, 0xc000eae880, 0x6, 0xc002e52c60, 0x16, 0x1f927, 0xc9d12, 0xc000d9d5a0, ...)
        /root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/state/execution.go:96 +0x142
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus.(*ConsensusState).finalizeCommit(0xc001800000, 0x1f928)
        /root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus/state.go:1339 +0xa3e
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus.(*ConsensusState).tryFinalizeCommit(0xc001800000, 0x1f928)
        /root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus/state.go:1270 +0x451
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus.(*ConsensusState).enterCommit.func1(0xc001800000, 0x0, 0x1f928)
        /root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus/state.go:1218 +0x90
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus.(*ConsensusState).enterCommit(0xc001800000, 0x1f928, 0x0)
        /root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus/state.go:1247 +0x6b8
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus.(*ConsensusState).addVote(0xc001800000, 0xc003d8dea0, 0xc000cf4cc0, 0x28, 0xf1, 0xc003bc7ad0, 0xc003bc7b10)
        /root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus/state.go:1659 +0xbad
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus.(*ConsensusState).tryAddVote(0xc001800000, 0xc003d8dea0, 0xc000cf4cc0, 0x28, 0xf1, 0xf1, 0xf1)
        /root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus/state.go:1517 +0x59
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus.(*ConsensusState).handleMsg(0xc001800000, 0xd98200, 0xc0070dbed0, 0xc000cf4cc0, 0x28)
        /root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus/state.go:660 +0x64b
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus.(*ConsensusState).receiveRoutine(0xc001800000, 0x0)
        /root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus/state.go:617 +0x670
created by github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus.(*ConsensusState).OnStart
        /root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus/state.go:311 +0x132
```

tx comes in and CheckTx is executed right when we execute the block

```
goroutine 111044 [semacquire, 309 minutes]:
sync.runtime_SemacquireMutex(0xc00003363c, 0x0)
        /usr/local/go/src/runtime/sema.go:71 +0x3d
sync.(*Mutex).Lock(0xc000033638)
        /usr/local/go/src/sync/mutex.go:134 +0xff
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/abci/client.(*localClient).CheckTxAsync(0xc0001fb0e0, 0xc002d94500, 0x13f, 0x280, 0x0)
        /root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/abci/client/local_client.go:85 +0x47
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/proxy.(*appConnMempool).CheckTxAsync(0xc00002f720, 0xc002d94500, 0x13f, 0x280, 0x1)
        /root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/proxy/app_conn.go:114 +0x51
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/mempool.(*Mempool).CheckTx(0xc002d3a320, 0xc002d94500, 0x13f, 0x280, 0xc0072355f0, 0x0, 0x0)
        /root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/mempool/mempool.go:316 +0x17b
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/rpc/core.BroadcastTxSync(0xc002d94500, 0x13f, 0x280, 0x0, 0x0, 0x0)
        /root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/rpc/core/mempool.go:93 +0xb8
reflect.Value.call(0xd85560, 0x10326c0, 0x13, 0xec7b8b, 0x4, 0xc00663f180, 0x1, 0x1, 0xc00663f180, 0xc00663f188, ...)
        /usr/local/go/src/reflect/value.go:447 +0x449
reflect.Value.Call(0xd85560, 0x10326c0, 0x13, 0xc00663f180, 0x1, 0x1, 0x0, 0x0, 0xc005cc9344)
        /usr/local/go/src/reflect/value.go:308 +0xa4
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/rpc/lib/server.makeHTTPHandler.func2(0x1102060, 0xc00663f100, 0xc0082d7900)
        /root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/rpc/lib/server/handlers.go:269 +0x188
net/http.HandlerFunc.ServeHTTP(0xc002c81f20, 0x1102060, 0xc00663f100, 0xc0082d7900)
        /usr/local/go/src/net/http/server.go:1964 +0x44
net/http.(*ServeMux).ServeHTTP(0xc002c81b60, 0x1102060, 0xc00663f100, 0xc0082d7900)
        /usr/local/go/src/net/http/server.go:2361 +0x127
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/rpc/lib/server.maxBytesHandler.ServeHTTP(0x10f8a40, 0xc002c81b60, 0xf4240, 0x1102060, 0xc00663f100, 0xc0082d7900)
        /root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/rpc/lib/server/http_server.go:219 +0xcf
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/rpc/lib/server.RecoverAndLogHandler.func1(0x1103220, 0xc00121e620, 0xc0082d7900)
        /root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/rpc/lib/server/http_server.go:192 +0x394
net/http.HandlerFunc.ServeHTTP(0xc002c06ea0, 0x1103220, 0xc00121e620, 0xc0082d7900)
        /usr/local/go/src/net/http/server.go:1964 +0x44
net/http.serverHandler.ServeHTTP(0xc001a1aa90, 0x1103220, 0xc00121e620, 0xc0082d7900)
        /usr/local/go/src/net/http/server.go:2741 +0xab
net/http.(*conn).serve(0xc00785a3c0, 0x11041a0, 0xc000f844c0)
        /usr/local/go/src/net/http/server.go:1847 +0x646
created by net/http.(*Server).Serve
        /usr/local/go/src/net/http/server.go:2851 +0x2f5
```

* consensus: use read lock in Receive#VoteMessage

* use defer to unlock mutex because application might panic

* use defer in every method of the localClient

* add a changelog entry

* drain channels before Unsubscribe(All)

Read 55362ed766/libs/pubsub/pubsub.go (L13)
for the detailed explanation of the issue.

We'll need to fix it someday. Make sure to keep an eye on
https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-033-pubsub.md

* retry instead of panic when peer has no state in reactors other than consensus

in /dump_consensus_state RPC endpoint, skip a peer with no state

* rpc/core/mempool: simplify error messages

* rpc/core/mempool: use time.After instead of timer

also, do not log DeliverTx result (to be consistent with other methods)

* unlock before calling the callback in reqRes#SetCallback
2018-11-13 11:32:51 -05:00
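A minimal Go sketch of the two locking patterns this commit message describes: defer the unlock so a panicking application can't leave the mutex held, and release the lock before invoking a user callback (the reqRes#SetCallback fix). Names here are simplified illustrations, not the actual localClient code:

```
package example

import "sync"

type localClient struct {
	mtx sync.Mutex
	cb  func(req, res string)
}

// SetResponseCallback uses defer so the mutex is released even if a
// caller further up the stack panics (the "use defer in every method"
// change).
func (c *localClient) SetResponseCallback(cb func(req, res string)) {
	c.mtx.Lock()
	defer c.mtx.Unlock()
	c.cb = cb
}

// deliver holds the lock while touching shared state, but unlocks
// before invoking the callback so user code can't deadlock against
// the client (the "unlock before calling the callback" change).
func (c *localClient) deliver(req string) {
	c.mtx.Lock()
	res := "ok: " + req // stand-in for the app call, done under the lock
	cb := c.cb
	c.mtx.Unlock()
	if cb != nil {
		cb(req, res)
	}
}
```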
Zach
fb10209a96 update to amino 0.14.1 (#2822) 2018-11-13 11:54:43 +04:00
Ethan Buchman
0f793a5a00 Merge pull request #2813 from tendermint/master
Merge pull request #2807 from tendermint/release/v0.26.1
2018-11-12 08:05:12 -05:00
Ethan Buchman
80d0a36250 Merge pull request #2807 from tendermint/release/v0.26.1
Release/v0.26.1
2018-11-12 08:04:27 -05:00
yutianwu
e11699038d [R4R] Add adr-034: PrivValidator file structure (#2751)
* add adr-034

* update changelog

* minor changes

* do some refactor
2018-11-11 14:47:34 -05:00
Ethan Buchman
533b3cfb73 Release/v0.26.1 (#2803)
* changelog and version

* fix changelog
2018-11-11 12:08:28 -05:00
Ismail Khoffi
3ff820bdf4 fix amino overhead computation for Tx (#2792)
* fix amino overhead computation for Tx:

- also count the fieldnum / typ3
- add method to compute overhead per Tx
- slightly clarify comment on MaxAminoOverheadForBlock
- add tests

* fix TestReapMaxBytesMaxGas according to amino overhead

* fix TestMempoolFilters according to amino overhead

* address review comments:

 - add a note about fieldNum = 1
 - add forgotten godoc comment

* fix and use sm.TxPreCheck

* fix test

* remove print statement
2018-11-11 10:09:33 -05:00
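A sketch of the overhead being counted, assuming each tx is encoded as a bytes field with fieldNum <= 15 (one amino field-key byte, i.e. (fieldNum << 3) | typ3) followed by a uvarint length prefix; the function name is illustrative, not the method added in this PR:

```
package example

// aminoOverheadForTx returns the per-tx framing cost: one field-key
// byte plus one uvarint byte per 7 bits of the tx length.
func aminoOverheadForTx(tx []byte) int64 {
	overhead := int64(1) // field-key byte: (fieldNum << 3) | typ3
	n := uint64(len(tx))
	for {
		overhead++ // uvarint length prefix, 7 bits per byte
		n >>= 7
		if n == 0 {
			return overhead
		}
	}
}
```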
Mehmet Gurevin
905abf1388 p2p: re-check after sleeps (#2664)
* p2p: re-check after sleeps

* use NodeInfo as an interface

* Revert "use NodeInfo as an interface"

This reverts commit 5f7d055e6c745ac8c8e5a9a7f0bd5ea5bc3d448c.

* Revert "p2p: re-check after sleeps"

This reverts commit 7f41070da070eadd3312efce1cc821aaf3e23771.

* preserve dial to itself

* ignore ensured connections while re-connecting

* re-check after sleep

* keep protocol definition on net addresses

* decrease log level

* Revert "preserve dial to itself"

This reverts commit 0c6e0fc58da78c378c32bb9ded2dd04ad5e754a9.

* correct func comment according to modification

Co-Authored-By: mgurevin <mehmet@gurevin.net>
2018-11-11 08:14:52 -05:00
Ismail Khoffi
7a4b62d3be check the result of ps.peer.Send before calling ps.setHasVote (#2787)
- actually call `ps.SetHasVote` instead to avoid carrying around
`votes.Height()`, `votes.Round()`, `types.SignedMsgType(votes.Type())`
2018-11-11 07:57:08 -05:00
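A sketch of the guard (interface and names assumed for illustration): the vote is marked as known to the peer only when Send reports success:

```
package example

type peer interface {
	Send(chID byte, msg []byte) bool
}

// trySetHasVote records the vote on the peer state only if the send
// actually went through, instead of unconditionally.
func trySetHasVote(p peer, chID byte, msg []byte, setHasVote func()) {
	if p.Send(chID, msg) {
		setHasVote()
	}
}
```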
Jae Kwon
5b19fcf204 p2p: AddressBook requires addresses to have IDs; Do not close conn immediately after sending pex addrs in seed mode (#2797)
* Require addressbook to only store addresses with valid ID

* Do not shut down peer immediately after sending pex addrs in SeedMode

* p2p: fix #2773

* seed mode: use go-routine to sleep before stopping peer
2018-11-11 06:50:25 -05:00
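A sketch of the seed-mode change (names assumed): rather than stopping the peer immediately after sending the PEX addresses, sleep in a goroutine so the message can flush first:

```
package example

import "time"

type stoppablePeer interface {
	Stop()
}

// stopPeerAfterFlush delays the stop off the hot path, giving the
// just-sent addrs time to reach the wire.
func stopPeerAfterFlush(p stoppablePeer, wait time.Duration) {
	go func() {
		time.Sleep(wait)
		p.Stop()
	}()
}
```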
Anton Kaliaev
1944d8534b test AutoFile#Size (happy path) 2018-11-09 16:29:43 +01:00
Anton Kaliaev
13badc1d29 [autofile/group] do not panic when checking size
It's OK if the head grows a little bigger, but we'll avoid a panic.

Refs #2703
2018-11-09 16:29:43 +01:00
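A sketch of the shape of the fix (assumed names): surface the stat error to the caller instead of panicking inside the size check:

```
package example

import "os"

// fileSize reports the size error instead of panicking; the worst case
// is that the head file grows slightly past its limit.
func fileSize(path string) (int64, error) {
	info, err := os.Stat(path)
	if err != nil {
		return 0, err // previously: panic(err)
	}
	return info.Size(), nil
}
```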
Anton Kaliaev
091d2c3e5e openFile creates a file if not exist => ErrNotExist is not possible 2018-11-09 16:29:43 +01:00
Anton Kaliaev
d178ea9eaf use our logger in autofile/group 2018-11-09 16:29:43 +01:00
Catalin Pirvu
46d32af055 Add tests for ValidateBasic methods (#2754)
Fixes #2740
2018-11-09 15:59:04 +01:00
Zach
8b77328313 [docs] improve organization of ABCI docs & fix links (#2749)
* dedup with spec/abci/client-server

* fixup abci/readme

* link to getting started in abci/README

* https

* spec/abci: some deduplication

* docs: remove extraneous comment
2018-11-09 15:11:06 +01:00
Ethan Buchman
6e9aee5460 p2p: peer-id -> peer_id (#2771)
* p2p: peer-id -> peer_id

* update changelog
2018-11-06 21:12:46 -08:00
Anton Kaliaev
d460df1335 mempool: print postCheck error (#2762)
This is a follow-up from https://github.com/tendermint/tendermint/pull/2724

Closes #2761
2018-11-06 20:23:44 -08:00
Jae Kwon
03e42d2e38 Fix crypto/merkle ProofOperators.Verify to check bounds on keypath pa… (#2756)
* Fix crypto/merkle ProofOperators.Verify to check bounds on keypath parts.

* Update PENDING
2018-11-05 22:53:44 -08:00
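A hypothetical illustration of the class of bug being fixed: consume keypath parts only after a bounds check, never by unconditional indexing:

```
package example

import "errors"

// popKey removes the next key from the keypath, erroring out when an
// operator asks for more parts than remain.
func popKey(keys [][]byte) (rest [][]byte, key []byte, err error) {
	if len(keys) == 0 {
		return nil, nil, errors.New("keypath has no parts left")
	}
	return keys[:len(keys)-1], keys[len(keys)-1], nil
}
```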
Anton Kaliaev
b8a9b0bf78 Mempool WAL is still created by default in home directory, leads to permission errors (#2758)
* only invoke InitWAL/CloseWAL if WalPath is not empty

Closes #2717

* panic if WAL is not initialized when calling CloseWAL

* add a changelog entry
2018-11-05 22:39:05 -08:00
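A sketch of the guarded WAL lifecycle (types and names assumed): the WAL is created only when a path is configured, and closing an uninitialized WAL is treated as a programmer error:

```
package example

type walFile struct{}

type mempool struct {
	wal *walFile
}

// InitWAL is a no-op when no WAL path is configured, so nothing is
// ever written to the default home directory.
func (mem *mempool) InitWAL(walPath string) {
	if walPath == "" {
		return
	}
	mem.wal = &walFile{} // open the file at walPath here
}

// CloseWAL panics if the WAL was never initialized, making misuse loud.
func (mem *mempool) CloseWAL() {
	if mem.wal == nil {
		panic("CloseWAL called before InitWAL")
	}
	mem.wal = nil
}
```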
Ethan Buchman
7246ffc48f mempool: ErrPreCheck and more log info (#2724)
* mempool: ErrPreCheck and more log info

* change Pre/PostCheckFunc to return errors

also, continue execution when checking txs in mempool_test if
err=PreCheckErr
2018-11-05 22:32:52 -08:00
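The assumed shape of the change: Pre/PostCheckFunc return an error (nil means the tx passes) so callers can log why a tx was rejected, e.g.:

```
package example

import "fmt"

type Tx []byte

// PreCheckFunc rejects a tx with a descriptive error instead of a bare
// bool, which is what makes ErrPreCheck and the extra log info possible.
type PreCheckFunc func(Tx) error

// preCheckMaxBytes is an illustrative checker in the new style.
func preCheckMaxBytes(maxBytes int) PreCheckFunc {
	return func(tx Tx) error {
		if len(tx) > maxBytes {
			return fmt.Errorf("tx size %d exceeds max %d", len(tx), maxBytes)
		}
		return nil
	}
}
```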
Ethan Buchman
071ebdd514 Merge pull request #2704 from tendermint/2702-proposal-pol-round-validation
if some process locks a block in round 0, then 0 is valid proposal.PO…
2018-11-05 22:32:15 -08:00
Ethan Buchman
8760c5b4f9 Merge branch 'develop' into 2702-proposal-pol-round-validation 2018-11-05 20:53:05 -08:00
Ethan Buchman
59b75d3f28 Merge pull request #2753 from tendermint/master
Merge pull request #2750 from tendermint/release/v0.26.0
2018-11-03 12:46:55 -07:00
Ethan Buchman
c086d0a341 Merge pull request #2750 from tendermint/release/v0.26.0
Release/v0.26.0
2018-11-03 12:46:14 -07:00
Ethan Buchman
322cee9156 Release/v0.26.0 (#2726)
* changelog_pending -> changelog

* update changelog

* update changelog

* update changelog and upgrading
2018-11-02 13:55:09 -04:00
Anton Kaliaev
80e4fe6c0d [ADR] [DRAFT] pubsub 2.0 (#2532)
* pubsub adr

Refs #951, #1879, #1880

* highlight question

* fix typos after Ismail's review
2018-11-02 10:16:29 +01:00
Anton Kaliaev
a67ae81469 if some process locks a block in round 0, then 0 is valid proposal.POLRound in rounds > 0
This condition is really hard to trigger. Initially, lockedRound and
validRound are set to -1, as we start with round 0.

Refs #2702
2018-10-25 16:40:20 +02:00
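A sketch of the relaxed validation (names assumed): POLRound is -1 when nothing was locked, and otherwise must name an earlier round, so POLRound == 0 is legal in any round > 0:

```
package example

import "fmt"

// validatePOLRound accepts -1 (no lock) or any round strictly before
// the proposal's round; round 0 therefore never carries a POLRound >= 0.
func validatePOLRound(round, polRound int) error {
	if polRound < -1 || polRound >= round {
		return fmt.Errorf("invalid POLRound %d for round %d", polRound, round)
	}
	return nil
}
```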
583 changed files with 48083 additions and 18673 deletions

.circleci/config.yml

@@ -3,10 +3,20 @@ version: 2
defaults: &defaults
working_directory: /go/src/github.com/tendermint/tendermint
docker:
- image: circleci/golang:1.10.3
- image: circleci/golang
environment:
GOBIN: /tmp/workspace/bin
docs_update_config: &docs_update_config
working_directory: ~/repo
docker:
- image: tendermintdev/jq_curl
environment:
AWS_REGION: us-east-1
release_management_docker: &release_management_docker
machine: true
jobs:
setup_dependencies:
<<: *defaults
@@ -41,10 +51,10 @@ jobs:
key: v3-pkg-cache
paths:
- /go/pkg
# - save_cache:
# key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
# paths:
# - /go/src/github.com/tendermint/tendermint
- save_cache:
key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
paths:
- /go/src/github.com/tendermint/tendermint
build_slate:
<<: *defaults
@@ -53,23 +63,8 @@ jobs:
at: /tmp/workspace
- restore_cache:
key: v3-pkg-cache
# https://discuss.circleci.com/t/saving-cache-stopped-working-warning-skipping-this-step-disabled-in-configuration/24423/2
# - restore_cache:
# key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- checkout
- run:
name: tools
command: |
export PATH="$GOBIN:$PATH"
make get_tools
- run:
name: dependencies
command: |
export PATH="$GOBIN:$PATH"
make get_vendor_deps
- run: mkdir -p $GOPATH/src/github.com/tendermint
- run: ln -sf /home/circleci/project $GOPATH/src/github.com/tendermint/tendermint
- restore_cache:
key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- run:
name: slate docs
command: |
@@ -84,28 +79,14 @@ jobs:
at: /tmp/workspace
- restore_cache:
key: v3-pkg-cache
# - restore_cache:
# key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- checkout
- run:
name: tools
command: |
export PATH="$GOBIN:$PATH"
make get_tools
- run:
name: dependencies
command: |
export PATH="$GOBIN:$PATH"
make get_vendor_deps
- run: mkdir -p $GOPATH/src/github.com/tendermint
- run: ln -sf /home/circleci/project $GOPATH/src/github.com/tendermint/tendermint
- restore_cache:
key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- run:
name: metalinter
command: |
set -ex
export PATH="$GOBIN:$PATH"
make metalinter
make lint
- run:
name: check_dep
command: |
@@ -120,22 +101,8 @@ jobs:
at: /tmp/workspace
- restore_cache:
key: v3-pkg-cache
# - restore_cache:
# key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- checkout
- run:
name: tools
command: |
export PATH="$GOBIN:$PATH"
make get_tools
- run:
name: dependencies
command: |
export PATH="$GOBIN:$PATH"
make get_vendor_deps
- run: mkdir -p $GOPATH/src/github.com/tendermint
- run: ln -sf /home/circleci/project $GOPATH/src/github.com/tendermint/tendermint
- restore_cache:
key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- run:
name: Run abci apps tests
command: |
@@ -151,22 +118,8 @@ jobs:
at: /tmp/workspace
- restore_cache:
key: v3-pkg-cache
# - restore_cache:
# key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- checkout
- run:
name: tools
command: |
export PATH="$GOBIN:$PATH"
make get_tools
- run:
name: dependencies
command: |
export PATH="$GOBIN:$PATH"
make get_vendor_deps
- run: mkdir -p $GOPATH/src/github.com/tendermint
- run: ln -sf /home/circleci/project $GOPATH/src/github.com/tendermint/tendermint
- restore_cache:
key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- run:
name: Run abci-cli tests
command: |
@@ -180,22 +133,8 @@ jobs:
at: /tmp/workspace
- restore_cache:
key: v3-pkg-cache
# - restore_cache:
# key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- checkout
- run:
name: tools
command: |
export PATH="$GOBIN:$PATH"
make get_tools
- run:
name: dependencies
command: |
export PATH="$GOBIN:$PATH"
make get_vendor_deps
- run: mkdir -p $GOPATH/src/github.com/tendermint
- run: ln -sf /home/circleci/project $GOPATH/src/github.com/tendermint/tendermint
- restore_cache:
key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- run: sudo apt-get update && sudo apt-get install -y --no-install-recommends bsdmainutils
- run:
name: Run tests
@@ -209,22 +148,8 @@ jobs:
at: /tmp/workspace
- restore_cache:
key: v3-pkg-cache
# - restore_cache:
# key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- checkout
- run:
name: tools
command: |
export PATH="$GOBIN:$PATH"
make get_tools
- run:
name: dependencies
command: |
export PATH="$GOBIN:$PATH"
make get_vendor_deps
- run: mkdir -p $GOPATH/src/github.com/tendermint
- run: ln -sf /home/circleci/project $GOPATH/src/github.com/tendermint/tendermint
- restore_cache:
key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- run: mkdir -p /tmp/logs
- run:
name: Run tests
@@ -232,7 +157,7 @@ jobs:
for pkg in $(go list github.com/tendermint/tendermint/... | circleci tests split --split-by=timings); do
id=$(basename "$pkg")
GOCACHE=off go test -timeout 5m -race -coverprofile=/tmp/workspace/profiles/$id.out -covermode=atomic "$pkg" | tee "/tmp/logs/$id-$RANDOM.log"
go test -v -timeout 5m -race -coverprofile=/tmp/workspace/profiles/$id.out -covermode=atomic "$pkg" | tee "/tmp/logs/$id-$RANDOM.log"
done
- persist_to_workspace:
root: /tmp/workspace
@@ -248,22 +173,8 @@ jobs:
at: /tmp/workspace
- restore_cache:
key: v3-pkg-cache
# - restore_cache:
# key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- checkout
- run:
name: tools
command: |
export PATH="$GOBIN:$PATH"
make get_tools
- run:
name: dependencies
command: |
export PATH="$GOBIN:$PATH"
make get_vendor_deps
- run: mkdir -p $GOPATH/src/github.com/tendermint
- run: ln -sf /home/circleci/project $GOPATH/src/github.com/tendermint/tendermint
- restore_cache:
key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- run:
name: Run tests
command: bash test/persist/test_failure_indices.sh
@@ -284,9 +195,7 @@ jobs:
name: run localnet and exit on failure
command: |
set -x
make get_tools
make get_vendor_deps
make build-linux
docker run --rm -v "$PWD":/go/src/github.com/tendermint/tendermint -w /go/src/github.com/tendermint/tendermint golang make build-linux
make localnet-start &
./scripts/localnet-blocks-test.sh 40 5 10 localhost
@@ -309,22 +218,10 @@ jobs:
steps:
- attach_workspace:
at: /tmp/workspace
# - restore_cache:
# key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- checkout
- run:
name: tools
command: |
export PATH="$GOBIN:$PATH"
make get_tools
- run:
name: dependencies
command: |
export PATH="$GOBIN:$PATH"
make get_vendor_deps
- run: mkdir -p $GOPATH/src/github.com/tendermint
- run: ln -sf /home/circleci/project $GOPATH/src/github.com/tendermint/tendermint
- restore_cache:
key: v3-pkg-cache
- restore_cache:
key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- run:
name: gather
command: |
@@ -338,10 +235,139 @@ jobs:
name: upload
command: bash .circleci/codecov.sh -f coverage.txt
deploy_docs:
<<: *docs_update_config
steps:
- checkout
- run:
name: Trigger website build
command: |
curl --silent \
--show-error \
-X POST \
--header "Content-Type: application/json" \
-d "{\"branch\": \"$CIRCLE_BRANCH\"}" \
"https://circleci.com/api/v1.1/project/github/$CIRCLE_PROJECT_USERNAME/$WEBSITE_REPO_NAME/build?circle-token=$TENDERBOT_API_TOKEN" > response.json
RESULT=`jq -r '.status' response.json`
MESSAGE=`jq -r '.message' response.json`
if [[ ${RESULT} == "null" ]] || [[ ${RESULT} != "200" ]]; then
echo "CircleCI API call failed: $MESSAGE"
exit 1
else
echo "Website build started"
fi
prepare_build:
<<: *defaults
steps:
- checkout
- run:
name: Get next release number
command: |
export LAST_TAG="`git describe --tags --abbrev=0 --match "${CIRCLE_BRANCH}.*"`"
echo "Last tag: ${LAST_TAG}"
if [ -z "${LAST_TAG}" ]; then
export LAST_TAG="${CIRCLE_BRANCH}"
echo "Last tag not found. Possibly fresh branch or feature branch. Setting ${LAST_TAG} as tag."
fi
export NEXT_TAG="`python -u scripts/release_management/bump-semver.py --version "${LAST_TAG}"`"
echo "Next tag: ${NEXT_TAG}"
echo "export CIRCLE_TAG=\"${NEXT_TAG}\"" > release-version.source
- run:
name: Build dependencies
command: |
make get_tools get_vendor_deps
- persist_to_workspace:
root: .
paths:
- "release-version.source"
- save_cache:
key: v1-release-deps-{{ .Branch }}-{{ .Revision }}
paths:
- "vendor"
build_artifacts:
<<: *defaults
parallelism: 4
steps:
- checkout
- restore_cache:
keys:
- v1-release-deps-{{ .Branch }}-{{ .Revision }}
- attach_workspace:
at: /tmp/workspace
- run:
name: Build artifact
command: |
# Setting CIRCLE_TAG because we do not tag the release ourselves.
source /tmp/workspace/release-version.source
if test ${CIRCLE_NODE_INDEX:-0} == 0 ;then export GOOS=linux GOARCH=amd64 && export OUTPUT=build/tendermint_${GOOS}_${GOARCH} && make build && python -u scripts/release_management/zip-file.py ;fi
if test ${CIRCLE_NODE_INDEX:-0} == 1 ;then export GOOS=darwin GOARCH=amd64 && export OUTPUT=build/tendermint_${GOOS}_${GOARCH} && make build && python -u scripts/release_management/zip-file.py ;fi
if test ${CIRCLE_NODE_INDEX:-0} == 2 ;then export GOOS=windows GOARCH=amd64 && export OUTPUT=build/tendermint_${GOOS}_${GOARCH} && make build && python -u scripts/release_management/zip-file.py ;fi
if test ${CIRCLE_NODE_INDEX:-0} == 3 ;then export GOOS=linux GOARCH=arm && export OUTPUT=build/tendermint_${GOOS}_${GOARCH} && make build && python -u scripts/release_management/zip-file.py ;fi
- persist_to_workspace:
root: build
paths:
- "*.zip"
- "tendermint_linux_amd64"
release_artifacts:
<<: *defaults
steps:
- checkout
- attach_workspace:
at: /tmp/workspace
- run:
name: Deploy to GitHub
command: |
# Setting CIRCLE_TAG because we do not tag the release ourselves.
source /tmp/workspace/release-version.source
echo "---"
ls -la /tmp/workspace/*.zip
echo "---"
python -u scripts/release_management/sha-files.py
echo "---"
cat /tmp/workspace/SHA256SUMS
echo "---"
export RELEASE_ID="`python -u scripts/release_management/github-draft.py`"
echo "Release ID: ${RELEASE_ID}"
#Todo: Parallelize uploads
export GOOS=linux GOARCH=amd64 && python -u scripts/release_management/github-upload.py --id "${RELEASE_ID}"
export GOOS=darwin GOARCH=amd64 && python -u scripts/release_management/github-upload.py --id "${RELEASE_ID}"
export GOOS=windows GOARCH=amd64 && python -u scripts/release_management/github-upload.py --id "${RELEASE_ID}"
export GOOS=linux GOARCH=arm && python -u scripts/release_management/github-upload.py --id "${RELEASE_ID}"
python -u scripts/release_management/github-upload.py --file "/tmp/workspace/SHA256SUMS" --id "${RELEASE_ID}"
python -u scripts/release_management/github-publish.py --id "${RELEASE_ID}"
release_docker:
<<: *release_management_docker
steps:
- checkout
- attach_workspace:
at: /tmp/workspace
- run:
name: Deploy to Docker Hub
command: |
# Setting CIRCLE_TAG because we do not tag the release ourselves.
source /tmp/workspace/release-version.source
cp /tmp/workspace/tendermint_linux_amd64 DOCKER/tendermint
docker build --label="tendermint" --tag="tendermint/tendermint:${CIRCLE_TAG}" --tag="tendermint/tendermint:latest" "DOCKER"
docker login -u "${DOCKERHUB_USER}" --password-stdin <<< "${DOCKERHUB_PASS}"
docker push "tendermint/tendermint"
docker logout
workflows:
version: 2
test-suite:
jobs:
- deploy_docs:
filters:
branches:
only:
- master
- develop
- setup_dependencies
- lint:
requires:
@@ -368,3 +394,25 @@ workflows:
- upload_coverage:
requires:
- test_cover
release:
jobs:
- prepare_build
- build_artifacts:
requires:
- prepare_build
- release_artifacts:
requires:
- prepare_build
- build_artifacts
filters:
branches:
only:
- /v[0-9]+\.[0-9]+/
- release_docker:
requires:
- prepare_build
- build_artifacts
filters:
branches:
only:
- /v[0-9]+\.[0-9]+/

.github/CODEOWNERS (vendored)

@@ -4,4 +4,6 @@
* @ebuchman @melekes @xla
# Precious documentation
/docs/ @zramsay
/docs/README.md @zramsay
/docs/DOCS_README.md @zramsay
/docs/.vuepress/ @zramsay

.github/PULL_REQUEST_TEMPLATE.md

@@ -1,5 +1,12 @@
<!-- Thanks for filing a PR! Before hitting the button, please check the following items.-->
<!--
Thanks for filing a PR! Before hitting the button, please check the following items.
Please note that every non-trivial PR must reference an issue that explains the
changes in the PR.
-->
* [ ] Referenced an issue explaining the need for the change
* [ ] Updated all relevant documentation in docs
* [ ] Updated all code comments where relevant
* [ ] Wrote tests

.gitignore (vendored)

@@ -35,6 +35,7 @@ shunit2
addrbook.json
*/vendor
.vendor-new/
*/.glide
.terraform
terraform.tfstate

.golangci.yml (new file)

@@ -0,0 +1,57 @@
run:
deadline: 1m
linters:
enable-all: true
disable:
- gocyclo
- golint
- maligned
- errcheck
- staticcheck
- interfacer
- unconvert
- goconst
- unparam
- nakedret
- lll
- gochecknoglobals
- gocritic
- gochecknoinits
- scopelint
- stylecheck
# linters-settings:
# govet:
# check-shadowing: true
# golint:
# min-confidence: 0
# gocyclo:
# min-complexity: 10
# maligned:
# suggest-new: true
# dupl:
# threshold: 100
# goconst:
# min-len: 2
# min-occurrences: 2
# depguard:
# list-type: blacklist
# packages:
# # logging is allowed only by logutils.Log, logrus
# # is allowed to use only in logutils package
# - github.com/sirupsen/logrus
# misspell:
# locale: US
# lll:
# line-length: 140
# goimports:
# local-prefixes: github.com/golangci/golangci-lint
# gocritic:
# enabled-tags:
# - performance
# - style
# - experimental
# disabled-checks:
# - wrapperFunc
# - commentFormatting # https://github.com/go-critic/go-critic/issues/755

File diff suppressed because it is too large.

CHANGELOG_PENDING.md

@@ -1,124 +1,23 @@
# Pending
## v0.31.7
## v0.26.0
*October 29, 2018*
Special thanks to external contributors on this release:
@bradyjoestar, @connorwstein, @goolAdapter, @HaoyangLiu,
@james-ray, @overbool, @phymbert, @Slamper, @Uzair1995
This release is primarily about adding Version fields to various data structures,
optimizing consensus messages for signing and verification in
restricted environments (like HSMs and the Ethereum Virtual Machine), and
aligning the consensus code with the [specification](https://arxiv.org/abs/1807.04938).
It also includes our first take at a generalized merkle proof system.
See the [UPGRADING.md](UPGRADING.md#v0.26.0) for details on upgrading to the new
version.
Please note that we are still making breaking changes to the protocols.
While the new Version fields should help us to keep the software backwards compatible
even while upgrading the protocols, we cannot guarantee that new releases will
be compatible with old chains just yet. Thanks for bearing with us!
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
* CLI/RPC/Config
* [config] [\#2232](https://github.com/tendermint/tendermint/issues/2232) timeouts as time.Duration, not ints
* [config] [\#2505](https://github.com/tendermint/tendermint/issues/2505) Remove Mempool.RecheckEmpty (it was effectively useless anyways)
* [config] [\#2490](https://github.com/tendermint/tendermint/issues/2490) `mempool.wal` is disabled by default
* [privval] [\#2459](https://github.com/tendermint/tendermint/issues/2459) Split `SocketPVMsg`s implementations into Request and Response, where the Response may contain a error message (returned by the remote signer)
* [state] [\#2644](https://github.com/tendermint/tendermint/issues/2644) Add Version field to State, breaking the format of State as
encoded on disk.
* [rpc] [\#2298](https://github.com/tendermint/tendermint/issues/2298) `/abci_query` takes `prove` argument instead of `trusted` and switches the default
behaviour to `prove=false`
* [rpc] [\#2654](https://github.com/tendermint/tendermint/issues/2654) Remove all `node_info.other.*_version` fields in `/status` and
`/net_info`
* Apps
* [abci] [\#2298](https://github.com/tendermint/tendermint/issues/2298) ResponseQuery.Proof is now a structured merkle.Proof, not just
arbitrary bytes
* [abci] [\#2644](https://github.com/tendermint/tendermint/issues/2644) Add Version to Header and shift all fields by one
* [abci] [\#2662](https://github.com/tendermint/tendermint/issues/2662) Bump the field numbers for some `ResponseInfo` fields to make room for
`AppVersion`
* Go API
* [config] [\#2232](https://github.com/tendermint/tendermint/issues/2232) timeouts as time.Duration, not ints
* [crypto/merkle & lite] [\#2298](https://github.com/tendermint/tendermint/issues/2298) Various changes to accommodate General Merkle trees
* [crypto/merkle] [\#2595](https://github.com/tendermint/tendermint/issues/2595) Remove all Hasher objects in favor of byte slices
* [crypto/merkle] [\#2635](https://github.com/tendermint/tendermint/issues/2635) merkle.SimpleHashFromTwoHashes is no longer exported
* [node] [\#2479](https://github.com/tendermint/tendermint/issues/2479) Remove node.RunForever
* [rpc/client] [\#2298](https://github.com/tendermint/tendermint/issues/2298) `ABCIQueryOptions.Trusted` -> `ABCIQueryOptions.Prove`
* [types] [\#2298](https://github.com/tendermint/tendermint/issues/2298) Remove `Index` and `Total` fields from `TxProof`.
* [types] [\#2598](https://github.com/tendermint/tendermint/issues/2598) `VoteTypeXxx` are now of type `SignedMsgType byte` and named `XxxType`, eg. `PrevoteType`,
`PrecommitType`.
* Blockchain Protocol
* [abci] [\#2636](https://github.com/tendermint/tendermint/issues/2636) Add ValidatorParams field to ConsensusParams.
(Used to control which pubkey types validators can use, by abci type)
* [types] Update SignBytes for `Vote`/`Proposal`/`Heartbeat`:
* [\#2459](https://github.com/tendermint/tendermint/issues/2459) Use amino encoding instead of JSON in `SignBytes`.
* [\#2598](https://github.com/tendermint/tendermint/issues/2598) Reorder fields and use fixed sized encoding.
* [\#2598](https://github.com/tendermint/tendermint/issues/2598) Change `Type` field from `string` to `byte` and use new
`SignedMsgType` to enumerate.
* [types] [\#2512](https://github.com/tendermint/tendermint/issues/2512) Remove the pubkey field from the validator hash
* [types] [\#2644](https://github.com/tendermint/tendermint/issues/2644) Add Version struct to Header
* [types] [\#2609](https://github.com/tendermint/tendermint/issues/2609) ConsensusParams.Hash() is the hash of the amino encoded
struct instead of the Merkle tree of the fields
* [state] [\#2587](https://github.com/tendermint/tendermint/issues/2587) Require block.Time of the first block to be genesis time
* [state] [\#2644](https://github.com/tendermint/tendermint/issues/2644) Require block.Version to match state.Version
* [types] [\#2670](https://github.com/tendermint/tendermint/issues/2670) Header.Hash() builds Merkle tree out of fields in the same
order they appear in the header, instead of sorting by field name
* [types] [\#2682](https://github.com/tendermint/tendermint/issues/2682) Use proto3 `varint` encoding for ints that are usually unsigned (instead of zigzag encoding).
* P2P Protocol
* [p2p] [\#2654](https://github.com/tendermint/tendermint/issues/2654) Add `ProtocolVersion` struct with protocol versions to top of
DefaultNodeInfo and require `ProtocolVersion.Block` to match during peer handshake
### FEATURES:
- [abci] [\#2557](https://github.com/tendermint/tendermint/issues/2557) Add `Codespace` field to `Response{CheckTx, DeliverTx, Query}`
- [abci] [\#2662](https://github.com/tendermint/tendermint/issues/2662) Add `BlockVersion` and `P2PVersion` to `RequestInfo`
- [crypto/merkle] [\#2298](https://github.com/tendermint/tendermint/issues/2298) General Merkle Proof scheme for chaining various types of Merkle trees together
### IMPROVEMENTS:
- Additional Metrics
- [consensus] [\#2169](https://github.com/cosmos/cosmos-sdk/issues/2169)
- [p2p] [\#2169](https://github.com/cosmos/cosmos-sdk/issues/2169)
- [config] [\#2232](https://github.com/tendermint/tendermint/issues/2232) Added ValidateBasic method, which performs basic checks
- [crypto/ed25519] [\#2558](https://github.com/tendermint/tendermint/issues/2558) Switch to use latest `golang.org/x/crypto` through our fork at
github.com/tendermint/crypto
- [tools] [\#2238](https://github.com/tendermint/tendermint/issues/2238) Binary dependencies are now locked to a specific git commit
- [libs/log] [\#2706](https://github.com/tendermint/tendermint/issues/2706) Add year to log format
- [consensus] [\#2683] validate all incoming messages
- [evidence] [\#2683] validate all incoming messages
- [blockchain] [\#2683] validate all incoming messages
- [p2p/pex] [\#2683] validate pexAddrsMessage addresses
### BUG FIXES:
- [autofile] [\#2428](https://github.com/tendermint/tendermint/issues/2428) Group.RotateFile need call Flush() before rename (@goolAdapter)
- [common] [\#2533](https://github.com/tendermint/tendermint/issues/2533) Fixed a bug in the `BitArray.Or` method
- [common] [\#2506](https://github.com/tendermint/tendermint/issues/2506) Fixed a bug in the `BitArray.Sub` method (@james-ray)
- [common] [\#2534](https://github.com/tendermint/tendermint/issues/2534) Fix `BitArray.PickRandom` to choose uniformly from true bits
- [consensus] [\#1690](https://github.com/tendermint/tendermint/issues/1690) Wait for
timeoutPrecommit before starting next round
- [consensus] [\#1745](https://github.com/tendermint/tendermint/issues/1745) Wait for
Proposal or timeoutProposal before entering prevote
- [consensus] [\#2583](https://github.com/tendermint/tendermint/issues/2583) ensure valid
block property with faulty proposer
- [consensus] [\#2642](https://github.com/tendermint/tendermint/issues/2642) Only propose ValidBlock, not LockedBlock
- [consensus] [\#2642](https://github.com/tendermint/tendermint/issues/2642) Initialized ValidRound and LockedRound to -1
- [consensus] [\#1637](https://github.com/tendermint/tendermint/issues/1637) Limit the amount of evidence that can be included in a
block
- [consensus] [\#2646](https://github.com/tendermint/tendermint/issues/2646) Simplify Proposal message (align with spec)
- [crypto] [\#2733](https://github.com/tendermint/tendermint/pull/2733) Fix general merkle keypath to start w/ last op's key
- [evidence] [\#2515](https://github.com/tendermint/tendermint/issues/2515) Fix db iter leak (@goolAdapter)
- [libs/event] [\#2518](https://github.com/tendermint/tendermint/issues/2518) Fix event concurrency flaw (@goolAdapter)
- [node] [\#2434](https://github.com/tendermint/tendermint/issues/2434) Make node respond to signal interrupts while sleeping for genesis time
- [state] [\#2616](https://github.com/tendermint/tendermint/issues/2616) Pass nil to NewValidatorSet() when genesis file's Validators field is nil
- [p2p] [\#2555](https://github.com/tendermint/tendermint/issues/2555) Fix p2p switch FlushThrottle value (@goolAdapter)
- [p2p] [\#2668](https://github.com/tendermint/tendermint/issues/2668) Reconnect to originally dialed address (not self-reported
address) for persistent peers
- [mempool] \#3699 Revert the change where we only remove valid transactions
from the mempool once a block is committed.

CODE_OF_CONDUCT.md

@@ -6,7 +6,7 @@ This code of conduct applies to all projects run by the Tendermint/COSMOS team a
# Conduct
## Contact: adrian@tendermint.com
## Contact: conduct@tendermint.com
* We are committed to providing a friendly, safe and welcoming environment for all, regardless of level of experience, gender, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, nationality, or other similar characteristic.

CONTRIBUTING.md

@@ -4,6 +4,14 @@ Thank you for considering making contributions to Tendermint and related reposit
Please follow standard github best practices: fork the repo, branch from the tip of develop, make some commits, and submit a pull request to develop. See the [open issues](https://github.com/tendermint/tendermint/issues) for things we need help with!
Before making a pull request, please open an issue describing the
change you would like to make. If an issue for your change already exists,
please comment on it that you will submit a pull request. Be sure to reference the issue in the opening
comment of your pull request. If your change is substantial, you will be asked
to write a more detailed design document in the form of an
Architectural Decision Record (ie. see [here](./docs/architecture/)) before submitting code
changes.
Please make sure to use `gofmt` before every commit - the easiest way to do this is have your editor run it for you upon saving a file.
## Forking
@@ -27,8 +35,8 @@ Of course, replace `ebuchman` with your git handle.
To pull in updates from the origin repo, run
* `git fetch upstream`
* `git rebase upstream/master` (or whatever branch you want)
* `git fetch upstream`
* `git rebase upstream/master` (or whatever branch you want)
Please don't make Pull Requests to `master`.
@@ -50,6 +58,11 @@ as apps, tools, and the core, should use dep.
Run `dep status` to get a list of vendor dependencies that may not be
up-to-date.
When updating dependencies, please only update the particular dependencies you
need. Instead of running `dep ensure -update`, which will update anything,
specify exactly the dependency you want to update, eg.
`dep ensure -update github.com/tendermint/go-amino`.
## Vagrant
If you are a [Vagrant](https://www.vagrantup.com/) user, you can get started
@@ -64,6 +77,86 @@ vagrant ssh
make test
```
## Changelog
Every fix, improvement, feature, or breaking change should be made in a
pull-request that includes an update to the `CHANGELOG_PENDING.md` file.
Changelog entries should be formatted as follows:
```
- [module] \#xxx Some description about the change (@contributor)
```
Here, `module` is the part of the code that changed (typically a
top-level Go package), `xxx` is the pull-request number, and `contributor`
is the author/s of the change.
It's also acceptable for `xxx` to refer to the relevant issue number, but pull-request
numbers are preferred.
Note this means pull-requests should be opened first so the changelog can then
be updated with the pull-request's number.
There is no need to include the full link, as this will be added
automatically during release. But please include the backslash and pound, eg. `\#2313`.
Changelog entries should be ordered alphabetically according to the
`module`, and numerically according to the pull-request number.
Changes with multiple classifications should be doubly included (eg. a bug fix
that is also a breaking change should be recorded under both).
Breaking changes are further subdivided according to the APIs/users they impact.
Any change that affects multiple APIs/users should be recorded multiple times - for
instance, a change to the `Blockchain Protocol` that removes a field from the
header should also be recorded under `CLI/RPC/Config` since the field will be
removed from the header in rpc responses as well.
## Branching Model and Release
We follow a variant of [git flow](http://nvie.com/posts/a-successful-git-branching-model/).
This means that all pull-requests should be made against develop. Any merge to
master constitutes a tagged release.
Note all pull requests should be squash merged except for merging to master and
merging master back to develop. This keeps the commit history clean and makes it
easy to reference the pull request where a change was introduced.
### Development Procedure:
- the latest state of development is on `develop`
- `develop` must never fail `make test`
- never --force onto `develop` (except when reverting a broken commit, which should seldom happen)
- create a development branch either on github.com/tendermint/tendermint, or your fork (using `git remote add origin`)
- make changes and update the `CHANGELOG_PENDING.md` to record your change
- before submitting a pull request, run `git rebase` on top of the latest `develop`
### Pull Merge Procedure:
- ensure pull branch is based on a recent develop
- run `make test` to ensure that all tests pass
- squash merge pull request
- the `unstable` branch may be used to aggregate pull merges before fixing tests
### Release Procedure:
- start on `develop`
- run integration tests (see `test_integrations` in Makefile)
- prepare release in a pull request against develop (to be squash merged):
- copy `CHANGELOG_PENDING.md` to top of `CHANGELOG.md`
- run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for
all issues
- run `bash ./scripts/authors.sh` to get a list of authors since the latest
release, and add the github aliases of external contributors to the top of
the changelog. To lookup an alias from an email, try `bash
./scripts/authors.sh <email>`
- reset the `CHANGELOG_PENDING.md`
- bump versions
- push latest develop with prepared release details to release/vX.X.X to run the extended integration tests on the CI
- if necessary, make pull requests against release/vX.X.X and squash merge them
- merge to master (don't squash merge!)
- merge master back to develop (don't squash merge!)
### Hotfix Procedure:
- follow the normal development and release procedure without any differences
## Testing
All repos should be hooked up to [CircleCI](https://circleci.com/).
@@ -72,46 +165,3 @@ If they have `.go` files in the root directory, they will be automatically
tested by circle using `go test -v -race ./...`. If not, they will need a
`circle.yml`. Ideally, every repo has a `Makefile` that defines `make test` and
includes its continuous integration status using a badge in the `README.md`.
## Branching Model and Release
User-facing repos should adhere to the branching model: http://nvie.com/posts/a-successful-git-branching-model/.
That is, these repos should be well versioned, and any merge to master requires a version bump and tagged release.
Libraries need not follow the model strictly, but would be wise to,
especially `go-p2p` and `go-rpc`, as their versions are referenced in tendermint core.
### Development Procedure:
- the latest state of development is on `develop`
- `develop` must never fail `make test`
- no --force onto `develop` (except when reverting a broken commit, which should seldom happen)
- create a development branch either on github.com/tendermint/tendermint, or your fork (using `git remote add origin`)
- before submitting a pull request, begin `git rebase` on top of `develop`
### Pull Merge Procedure:
- ensure pull branch is rebased on develop
- run `make test` to ensure that all tests pass
- merge pull request
- the `unstable` branch may be used to aggregate pull merges before testing once
- push master may request that pull requests be rebased on top of `unstable`
### Release Procedure:
- start on `develop`
- run integration tests (see `test_integrations` in Makefile)
- prepare changelog/release issue
- bump versions
- push to release-vX.X.X to run the extended integration tests on the CI
- merge to master
- merge master back to develop
### Hotfix Procedure:
- start on `master`
- checkout a new branch named hotfix-vX.X.X
- make the required changes
- these changes should be small and an absolute necessity
- add a note to CHANGELOG.md
- bump versions
- push to hotfix-vX.X.X to run the extended integration tests on the CI
- merge hotfix-vX.X.X to master
- merge hotfix-vX.X.X to develop
- delete the hotfix-vX.X.X branch

DOCKER/Dockerfile

@@ -1,5 +1,5 @@
FROM alpine:3.7
MAINTAINER Greg Szabo <greg@tendermint.com>
FROM alpine:3.9
LABEL maintainer="hello@tendermint.com"
# Tendermint will be looking for the genesis file in /tendermint/config/genesis.json
# (unless you change `genesis_file` in config.toml). You can put your config.toml and


@@ -3,7 +3,7 @@ set -e
# Get the tag from the version, or try to figure it out.
if [ -z "$TAG" ]; then
TAG=$(awk -F\" '/Version =/ { print $2; exit }' < ../version/version.go)
TAG=$(awk -F\" '/TMCoreSemVer =/ { print $2; exit }' < ../version/version.go)
fi
if [ -z "$TAG" ]; then
echo "Please specify a tag."


@@ -3,7 +3,7 @@ set -e
# Get the tag from the version, or try to figure it out.
if [ -z "$TAG" ]; then
TAG=$(awk -F\" '/Version =/ { print $2; exit }' < ../version/version.go)
TAG=$(awk -F\" '/TMCoreSemVer =/ { print $2; exit }' < ../version/version.go)
fi
if [ -z "$TAG" ]; then
echo "Please specify a tag."

Gopkg.lock (generated)

@@ -10,12 +10,11 @@
revision = "3a771d992973f24aa725d07868b467d1ddfceafb"
[[projects]]
branch = "master"
digest = "1:c0decf632843204d2b8781de7b26e7038584e2dcccc7e2f401e88ae85b1df2b7"
digest = "1:093bf93a65962e8191e3e8cd8fc6c363f83d43caca9739c906531ba7210a9904"
name = "github.com/btcsuite/btcd"
packages = ["btcec"]
pruneopts = "UT"
revision = "67e573d211ace594f1366b4ce9d39726c4b19bd0"
revision = "ed77733ec07dfc8a513741138419b8d9d3de9d2d"
[[projects]]
digest = "1:1d8e1cb71c33a9470bbbae09bfec09db43c6bf358dfcae13cd8807c4e2a9a2bf"
@@ -35,6 +34,14 @@
revision = "8991bc29aa16c548c550c7ff78260e27b9ab7c73"
version = "v1.1.1"
[[projects]]
digest = "1:5f7414cf41466d4b4dd7ec52b2cd3e481e08cfd11e7e24fef730c0e483e88bb1"
name = "github.com/etcd-io/bbolt"
packages = ["."]
pruneopts = "UT"
revision = "63597a96ec0ad9e6d43c3fc81e809909e0237461"
version = "v1.3.2"
[[projects]]
digest = "1:544229a3ca0fb2dd5ebc2896d3d2ff7ce096d9751635301e44e37e761349ee70"
name = "github.com/fortytw2/leaktest"
@@ -84,7 +91,7 @@
version = "v1.8.0"
[[projects]]
digest = "1:35621fe20f140f05a0c4ef662c26c0ab4ee50bca78aa30fe87d33120bd28165e"
digest = "1:95e1006e41c641abd2f365dfa0f1213c04da294e7cd5f0bf983af234b775db64"
name = "github.com/gogo/protobuf"
packages = [
"gogoproto",
@@ -95,11 +102,11 @@
"types",
]
pruneopts = "UT"
revision = "636bf0302bc95575d69441b25a2603156ffdddf1"
version = "v1.1.1"
revision = "ba06b47c162d49f2af050fb4c75bcbc86a159d5c"
version = "v1.2.1"
[[projects]]
digest = "1:17fe264ee908afc795734e8c4e63db2accabaf57326dbf21763a7d6b86096260"
digest = "1:239c4c7fd2159585454003d9be7207167970194216193a8a210b8d29576f19c9"
name = "github.com/golang/protobuf"
packages = [
"proto",
@@ -109,8 +116,8 @@
"ptypes/timestamp",
]
pruneopts = "UT"
revision = "b4deda0973fb4c70b50d226b1af49f3da59f5265"
version = "v1.1.0"
revision = "c823c79ea1570fb5ff454033735a8e68575d1d0f"
version = "v1.3.0"
[[projects]]
branch = "master"
@@ -155,11 +162,12 @@
version = "v1.0"
[[projects]]
digest = "1:39b27d1381a30421f9813967a5866fba35dc1d4df43a6eefe3b7a5444cb07214"
digest = "1:a74b5a8e34ee5843cd6e65f698f3e75614f812ff170c2243425d75bc091e9af2"
name = "github.com/jmhodges/levigo"
packages = ["."]
pruneopts = "UT"
revision = "c42d9e0ca023e2198120196f842701bb4c55d7b9"
revision = "853d788c5c416eaaee5b044570784a96c7a26975"
version = "v1.0.0"
[[projects]]
branch = "master"
@@ -170,9 +178,12 @@
revision = "b84e30acd515aadc4b783ad4ff83aff3299bdfe0"
[[projects]]
digest = "1:c568d7727aa262c32bdf8a3f7db83614f7af0ed661474b24588de635c20024c7"
digest = "1:53e8c5c79716437e601696140e8b1801aae4204f4ec54a504333702a49572c4f"
name = "github.com/magiconair/properties"
packages = ["."]
packages = [
".",
"assert",
]
pruneopts = "UT"
revision = "c2353362d570a7bfa228149c62842019201cfb71"
version = "v1.8.0"
@@ -218,14 +229,16 @@
version = "v1.0.0"
[[projects]]
digest = "1:c1a04665f9613e082e1209cf288bf64f4068dcd6c87a64bf1c4ff006ad422ba0"
digest = "1:26663fafdea73a38075b07e8e9d82fc0056379d2be8bb4e13899e8fda7c7dd23"
name = "github.com/prometheus/client_golang"
packages = [
"prometheus",
"prometheus/internal",
"prometheus/promhttp",
]
pruneopts = "UT"
revision = "ae27198cdd90bf12cd134ad79d1366a6cf49f632"
revision = "abad2d1bd44235a26707c172eab6bca5bf2dbad3"
version = "v0.9.1"
[[projects]]
branch = "master"
@@ -267,6 +280,14 @@
pruneopts = "UT"
revision = "e2704e165165ec55d062f5919b4b29494e9fa790"
[[projects]]
digest = "1:b0c25f00bad20d783d259af2af8666969e2fc343fa0dc9efe52936bbd67fb758"
name = "github.com/rs/cors"
packages = ["."]
pruneopts = "UT"
revision = "9a47f48565a795472d43519dd49aac781f3034fb"
version = "v1.6.0"
[[projects]]
digest = "1:6a4a11ba764a56d2758899ec6f3848d24698d48442ebce85ee7a3f63284526cd"
name = "github.com/spf13/afero"
@@ -351,22 +372,16 @@
revision = "6b91fda63f2e36186f1c9d0e48578defb69c5d43"
[[projects]]
digest = "1:605b6546f3f43745695298ec2d342d3e952b6d91cdf9f349bea9315f677d759f"
name = "github.com/tendermint/btcd"
packages = ["btcec"]
pruneopts = "UT"
revision = "e5840949ff4fff0c56f9b6a541e22b63581ea9df"
[[projects]]
digest = "1:10b3a599325740c84a7c81f3f3cb2e1fdb70b3ea01b7fa28495567a2519df431"
digest = "1:ad9c4c1a4e7875330b1f62906f2830f043a23edb5db997e3a5ac5d3e6eadf80a"
name = "github.com/tendermint/go-amino"
packages = ["."]
pruneopts = "UT"
revision = "6dcc6ddc143e116455c94b25c1004c99e0d0ca12"
version = "v0.14.0"
revision = "dc14acf9ef15f85828bfbc561ed9dd9d2a284885"
version = "v0.14.1"
[[projects]]
digest = "1:72b71e3a29775e5752ed7a8012052a3dee165e27ec18cedddae5288058f09acf"
branch = "master"
digest = "1:f4edb30d5ff238e2abba10457010f74cd55ae20bbda8c54db1a07155fa020490"
name = "golang.org/x/crypto"
packages = [
"bcrypt",
@@ -387,8 +402,7 @@
"salsa20/salsa",
]
pruneopts = "UT"
revision = "3764759f34a542a3aef74d6b02e35be7ab893bba"
source = "github.com/tendermint/crypto"
revision = "8dd112bcdc25174059e45e07517d9fc663123347"
[[projects]]
digest = "1:d36f55a999540d29b6ea3c2ea29d71c76b1d9853fdcd3e5c5cb4836f2ba118f1"
@@ -494,8 +508,10 @@
analyzer-name = "dep"
analyzer-version = 1
input-imports = [
"github.com/btcsuite/btcd/btcec",
"github.com/btcsuite/btcutil/base58",
"github.com/btcsuite/btcutil/bech32",
"github.com/etcd-io/bbolt",
"github.com/fortytw2/leaktest",
"github.com/go-kit/kit/log",
"github.com/go-kit/kit/log/level",
@@ -512,10 +528,12 @@
"github.com/golang/protobuf/ptypes/timestamp",
"github.com/gorilla/websocket",
"github.com/jmhodges/levigo",
"github.com/magiconair/properties/assert",
"github.com/pkg/errors",
"github.com/prometheus/client_golang/prometheus",
"github.com/prometheus/client_golang/prometheus/promhttp",
"github.com/rcrowley/go-metrics",
"github.com/rs/cors",
"github.com/spf13/cobra",
"github.com/spf13/viper",
"github.com/stretchr/testify/assert",
@@ -524,7 +542,6 @@
"github.com/syndtr/goleveldb/leveldb/errors",
"github.com/syndtr/goleveldb/leveldb/iterator",
"github.com/syndtr/goleveldb/leveldb/opt",
"github.com/tendermint/btcd/btcec",
"github.com/tendermint/go-amino",
"golang.org/x/crypto/bcrypt",
"golang.org/x/crypto/chacha20poly1305",

Gopkg.toml

@@ -20,88 +20,77 @@
# unused-packages = true
#
###########################################################
# NOTE: All packages should be pinned to specific versions.
# Packages without releases must pin to a commit.
# Allow only patch releases for serialization libraries
[[constraint]]
name = "github.com/go-kit/kit"
version = "=0.6.0"
[[constraint]]
name = "github.com/gogo/protobuf"
version = "=1.1.1"
[[constraint]]
name = "github.com/golang/protobuf"
version = "=1.1.0"
[[constraint]]
name = "github.com/gorilla/websocket"
version = "=1.2.0"
[[constraint]]
name = "github.com/pkg/errors"
version = "=0.8.0"
[[constraint]]
name = "github.com/spf13/cobra"
version = "=0.0.1"
[[constraint]]
name = "github.com/spf13/viper"
version = "=1.0.0"
[[constraint]]
name = "github.com/stretchr/testify"
version = "=1.2.1"
name = "github.com/etcd-io/bbolt"
version = "v1.3.2"
[[constraint]]
name = "github.com/tendermint/go-amino"
version = "v0.14.0"
version = "~0.14.1"
[[constraint]]
name = "github.com/gogo/protobuf"
version = "~1.2.1"
[[constraint]]
name = "github.com/golang/protobuf"
version = "~1.3.0"
# Allow only minor releases for other libraries
[[constraint]]
name = "github.com/go-kit/kit"
version = "^0.6.0"
[[constraint]]
name = "github.com/gorilla/websocket"
version = "^1.2.0"
[[constraint]]
name = "github.com/rs/cors"
version = "^1.6.0"
[[constraint]]
name = "github.com/pkg/errors"
version = "^0.8.0"
[[constraint]]
name = "github.com/spf13/cobra"
version = "^0.0.1"
[[constraint]]
name = "github.com/spf13/viper"
version = "^1.0.0"
[[constraint]]
name = "github.com/stretchr/testify"
version = "^1.2.1"
[[constraint]]
name = "google.golang.org/grpc"
version = "=1.13.0"
version = "^1.13.0"
[[constraint]]
name = "github.com/fortytw2/leaktest"
version = "=1.2.0"
version = "^1.2.0"
###################################
## Some repos don't have releases.
## Pin to revision
[[constraint]]
name = "golang.org/x/crypto"
source = "github.com/tendermint/crypto"
revision = "3764759f34a542a3aef74d6b02e35be7ab893bba"
[[override]]
name = "github.com/jmhodges/levigo"
revision = "c42d9e0ca023e2198120196f842701bb4c55d7b9"
# last revision used by go-crypto
[[constraint]]
name = "github.com/btcsuite/btcutil"
revision = "d4cc87b860166d00d6b5b9e0d3b3d71d6088d4d4"
[[constraint]]
name = "github.com/tendermint/btcd"
revision = "e5840949ff4fff0c56f9b6a541e22b63581ea9df"
# Haven't made a release since 2016.
[[constraint]]
name = "github.com/prometheus/client_golang"
revision = "ae27198cdd90bf12cd134ad79d1366a6cf49f632"
version = "^0.9.1"
[[constraint]]
name = "github.com/rcrowley/go-metrics"
revision = "e2704e165165ec55d062f5919b4b29494e9fa790"
name = "github.com/jmhodges/levigo"
version = "^1.0.0"
[[constraint]]
name = "golang.org/x/net"
revision = "292b43bbf7cb8d35ddf40f8d5100ef3837cced3f"
###################################
## Repos which don't have releases.
## - github.com/btcsuite/btcd
## - golang.org/x/crypto
## - github.com/btcsuite/btcutil
## - github.com/rcrowley/go-metrics
## - golang.org/x/net
[prune]
go-tests = true

Makefile

@@ -1,38 +1,39 @@
GOTOOLS = \
github.com/mitchellh/gox \
github.com/golang/dep/cmd/dep \
github.com/alecthomas/gometalinter \
github.com/golangci/golangci-lint/cmd/golangci-lint \
github.com/gogo/protobuf/protoc-gen-gogo \
github.com/square/certstrap
GOBIN?=${GOPATH}/bin
PACKAGES=$(shell go list ./...)
OUTPUT?=build/tendermint
INCLUDE = -I=. -I=${GOPATH}/src -I=${GOPATH}/src/github.com/gogo/protobuf/protobuf
BUILD_TAGS?='tendermint'
BUILD_FLAGS = -ldflags "-X github.com/tendermint/tendermint/version.GitCommit=`git rev-parse --short=8 HEAD`"
LINT_FLAGS = --exclude '.*\.pb\.go' --exclude 'vendor/*' --vendor --deadline=600s
all: check build test install
check: check_tools get_vendor_deps
########################################
### Build Tendermint
build:
CGO_ENABLED=0 go build $(BUILD_FLAGS) -tags $(BUILD_TAGS) -o build/tendermint ./cmd/tendermint/
CGO_ENABLED=0 go build $(BUILD_FLAGS) -tags $(BUILD_TAGS) -o $(OUTPUT) ./cmd/tendermint/
build_c:
CGO_ENABLED=1 go build $(BUILD_FLAGS) -tags "$(BUILD_TAGS) gcc" -o build/tendermint ./cmd/tendermint/
CGO_ENABLED=1 go build $(BUILD_FLAGS) -tags "$(BUILD_TAGS) cleveldb" -o $(OUTPUT) ./cmd/tendermint/
build_race:
CGO_ENABLED=0 go build -race $(BUILD_FLAGS) -tags $(BUILD_TAGS) -o build/tendermint ./cmd/tendermint
CGO_ENABLED=0 go build -race $(BUILD_FLAGS) -tags $(BUILD_TAGS) -o $(OUTPUT) ./cmd/tendermint
install:
CGO_ENABLED=0 go install $(BUILD_FLAGS) -tags $(BUILD_TAGS) ./cmd/tendermint
install_c:
CGO_ENABLED=1 go install $(BUILD_FLAGS) -tags "$(BUILD_TAGS) cleveldb" ./cmd/tendermint
########################################
### Protobuf
@@ -79,8 +80,6 @@ check_tools:
get_tools:
@echo "--> Installing tools"
./scripts/get_tools.sh
@echo "--> Downloading linters (this may take awhile)"
$(GOPATH)/src/github.com/alecthomas/gometalinter/scripts/install.sh -b $(GOBIN)
update_tools:
@echo "--> Updating tools"
@@ -111,7 +110,7 @@ draw_deps:
get_deps_bin_size:
@# Copy of build recipe with additional flags to perform binary size analysis
$(eval $(shell go build -work -a $(BUILD_FLAGS) -tags $(BUILD_TAGS) -o build/tendermint ./cmd/tendermint/ 2>&1))
$(eval $(shell go build -work -a $(BUILD_FLAGS) -tags $(BUILD_TAGS) -o $(OUTPUT) ./cmd/tendermint/ 2>&1))
@find $(WORK) -type f -name "*.a" | xargs -I{} du -hxs "{}" | sort -rh | sed -e s:${WORK}/::g > deps_bin_size.log
@echo "Results can be found here: $(CURDIR)/deps_bin_size.log"
@@ -133,7 +132,7 @@ clean_certs:
rm -f db/remotedb/::.crt db/remotedb/::.key
test_libs: gen_certs
GOCACHE=off go test -tags gcc $(PACKAGES)
go test -tags cleveldb boltdb $(PACKAGES)
make clean_certs
grpc_dbserver:
@@ -216,12 +215,28 @@ vagrant_test:
### go tests
test:
@echo "--> Running go test"
@GOCACHE=off go test -p 1 $(PACKAGES)
@go test -p 1 $(PACKAGES)
test_race:
@echo "--> Running go test --race"
@GOCACHE=off go test -p 1 -v -race $(PACKAGES)
@go test -p 1 -v -race $(PACKAGES)
# uses https://github.com/sasha-s/go-deadlock/ to detect potential deadlocks
test_with_deadlock:
make set_with_deadlock
make test
make cleanup_after_test_with_deadlock
set_with_deadlock:
find . -name "*.go" | grep -v "vendor/" | xargs -n 1 sed -i.bak 's/sync.RWMutex/deadlock.RWMutex/'
find . -name "*.go" | grep -v "vendor/" | xargs -n 1 sed -i.bak 's/sync.Mutex/deadlock.Mutex/'
find . -name "*.go" | grep -v "vendor/" | xargs -n 1 goimports -w
# cleans up after you ran test_with_deadlock
cleanup_after_test_with_deadlock:
find . -name "*.go" | grep -v "vendor/" | xargs -n 1 sed -i.bak 's/deadlock.RWMutex/sync.RWMutex/'
find . -name "*.go" | grep -v "vendor/" | xargs -n 1 sed -i.bak 's/deadlock.Mutex/sync.Mutex/'
find . -name "*.go" | grep -v "vendor/" | xargs -n 1 goimports -w
########################################
### Formatting, linting, and vetting
@@ -229,38 +244,9 @@ test_race:
fmt:
@go fmt ./...
metalinter:
lint:
@echo "--> Running linter"
@gometalinter $(LINT_FLAGS) --disable-all \
--enable=deadcode \
--enable=gosimple \
--enable=misspell \
--enable=safesql \
./...
#--enable=gas \
#--enable=maligned \
#--enable=dupl \
#--enable=errcheck \
#--enable=goconst \
#--enable=gocyclo \
#--enable=goimports \
#--enable=golint \ <== comments on anything exported
#--enable=gotype \
#--enable=ineffassign \
#--enable=interfacer \
#--enable=megacheck \
#--enable=staticcheck \
#--enable=structcheck \
#--enable=unconvert \
#--enable=unparam \
#--enable=unused \
#--enable=varcheck \
#--enable=vet \
#--enable=vetshadow \
metalinter_all:
@echo "--> Running linter (all)"
gometalinter $(LINT_FLAGS) --enable-all --disable=lll ./...
@golangci-lint run
DESTINATION = ./index.html.md
@@ -276,7 +262,7 @@ check_dep:
### Docker image
build-docker:
cp build/tendermint DOCKER/tendermint
cp $(OUTPUT) DOCKER/tendermint
docker build --label=tendermint --tag="tendermint/tendermint" DOCKER
rm -rf DOCKER/tendermint
@@ -284,12 +270,11 @@ build-docker:
### Local testnet using docker
# Build linux binary on other platforms
build-linux:
build-linux: get_tools get_vendor_deps
GOOS=linux GOARCH=amd64 $(MAKE) build
build-docker-localnode:
cd networks/local
make
@cd networks/local && make
# Run a 4-node testnet locally
localnet-start: localnet-stop
@@ -327,4 +312,4 @@ build-slate:
# To avoid unintended conflicts with file names, always add to .PHONY
# unless there is a reason not to.
# https://www.gnu.org/software/make/manual/html_node/Phony-Targets.html
.PHONY: check build build_race build_abci dist install install_abci check_dep check_tools get_tools update_tools get_vendor_deps draw_deps get_protoc protoc_abci protoc_libs gen_certs clean_certs grpc_dbserver test_cover test_apps test_persistence test_p2p test test_race test_integrations test_release test100 vagrant_test fmt rpc-docs build-linux localnet-start localnet-stop build-docker build-docker-localnode sentry-start sentry-config sentry-stop build-slate protoc_grpc protoc_all
.PHONY: check build build_race build_abci dist install install_abci check_dep check_tools get_tools update_tools get_vendor_deps draw_deps get_protoc protoc_abci protoc_libs gen_certs clean_certs grpc_dbserver test_cover test_apps test_persistence test_p2p test test_race test_integrations test_release test100 vagrant_test fmt rpc-docs build-linux localnet-start localnet-stop build-docker build-docker-localnode sentry-start sentry-config sentry-stop build-slate protoc_grpc protoc_all build_c install_c test_with_deadlock cleanup_after_test_with_deadlock lint

PHILOSOPHY.md (new file)

@@ -0,0 +1,158 @@
## Design goals
The design goals for Tendermint (and the SDK and related libraries) are:
* Simplicity and Legibility
* Parallel performance, namely ability to utilize multicore architecture
* Ability to evolve the codebase bug-free
* Debuggability
* Complete correctness that considers all edge cases, esp in concurrency
* Future-proof modular architecture, message protocol, APIs, and encapsulation
### Justification
Legibility is key to maintaining bug-free software as it evolves toward more
optimizations, more ease of debugging, and additional features.
It is too easy to introduce bugs over time by replacing lines of code with
lines that may panic, which is why, ideally, locks should be unlocked by defer
statements.
For example,
```go
func (obj *MyObj) setSomething(other Something) {
	obj.mtx.Lock()
	obj.something = other // if this line ever panics, the mutex stays locked
	obj.mtx.Unlock()
}
```
It is too easy to refactor the codebase in the future to replace `other` with
`other.String()`, for example, and this may introduce a bug that causes a
deadlock. So, as much as reasonably possible, we should use defer
statements, even though defer introduces additional overhead.
If it is necessary to optimize the unlocking of mutex locks, the solution is
more modularity via smaller functions, so that defer'd unlocks are scoped
within a smaller function, as sketched below.
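A minimal sketch of both points, reusing the hypothetical `MyObj` from above
(the `Something` parameter and `expensiveComputation` helper are illustrative
assumptions):
```go
// Preferred: defer guarantees the unlock even if the body panics
// (e.g. if a later refactor makes `other.String()` panic on nil).
func (obj *MyObj) setSomething(other Something) {
	obj.mtx.Lock()
	defer obj.mtx.Unlock()
	obj.something = other
}

// If the unlock must happen sooner, scope the lock inside a smaller
// helper rather than dropping the defer.
func (obj *MyObj) doWork(other Something) {
	obj.setSomething(other)    // mutex is held only inside the helper
	obj.expensiveComputation() // runs without holding the mutex
}
```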
Similarly, idiomatic for-loops should always be preferred over those that use
custom counters, because it is too easy for the body of a for-loop to become
more complicated over time, making it more and more difficult to assess the
correctness of the loop by visual inspection.
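To make the preference concrete, a runnable sketch (the slice and printing are
placeholder assumptions):
```go
package main

import "fmt"

func main() {
	items := []string{"a", "b", "c"}

	// Preferred: the idiomatic range loop; the iteration state cannot
	// drift as the body grows more complicated.
	for i, v := range items {
		fmt.Println(i, v)
	}

	// Discouraged: a custom counter invites subtle bugs as the body
	// evolves (a forgotten or duplicated increment, a changed bound).
	i := 0
	for i < len(items) {
		fmt.Println(i, items[i])
		i++
	}
}
```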
### On performance
It doesn't matter whether there are alternative implementations that are 2x or
3x more performant when the software doesn't work, deadlocks, or has bugs that
cannot be debugged. By taking advantage of multicore concurrency, the
Tendermint implementation will be at least within an order of magnitude of
what is theoretically possible. The design philosophy of Tendermint, and the
choice of Go as the implementation language, are intended to make the
Tendermint implementation the standard specification for concurrent BFT
software.
By focusing on the message protocols (e.g. ABCI, p2p messages) and on
encapsulation (e.g. the IAVL module, (relatively) independent reactors), we are
both creating a standard implementation to serve as the specification for
future implementations in more optimizable languages like Rust, Java, and C++,
and building sufficiently performant software. Tendermint Core will never be
as fast as future implementations of the Tendermint Spec, because Go isn't
designed to be as fast as possible. The advantage of using Go is that we can
develop the whole stack of modular components **faster** than in other
languages.
Furthermore, the real bottleneck is in the application layer, and it isn't
necessary to support more than a sufficiently decentralized set of validators
(e.g. 100 ~ 300 validators is sufficient, with delegated bonded PoS).
Instead of optimizing Tendermint performance down to the metal, let's focus on
other matters, namely the ability to ship feature-complete software that works
well enough, can be debugged and maintained, and can serve as a spec for
future implementations.
### On encapsulation
In order to create maintainable, forward-optimizable software, it is critical
to develop well-encapsulated objects that have well understood properties, and
to re-use these easy-to-use-correctly components as building blocks for further
encapsulated meta-objects.
For example, mutexes are cheap enough for Tendermint's design goals when there
isn't goroutine contention, so it is encouraged to create concurrency-safe
structures with struct-level mutexes. If they are used in the context of
non-concurrent logic, the performance is good enough. If they are used in the
context of concurrent logic, they will still perform correctly.
Examples of this design principle can be seen in the types.ValidatorSet struct
and the cmn.Rand struct. Each is a single struct declaration that can be used
in both concurrent and non-concurrent logic, and because each is well
encapsulated, it is easy to get the usage of the mutex right.
#### example: cmn.Rand:
"The default Source is safe for concurrent use by multiple goroutines, but
Sources created by NewSource are not." The reason the default
package-level source is safe for concurrent use is that it is protected (see
`lockedSource` in https://golang.org/src/math/rand/rand.go).
But we shouldn't rely on the global source; we should create our own
Rand/Source instances and use them, especially for determinism in testing.
So it is reasonable to have cmn.Rand be protected by a mutex. Whether we want
our own implementation of Rand is another question, but the answer there is
also in the affirmative. Sometimes you want to know where Rand is being used
in your code, and then it becomes a simple matter of dropping in a log
statement to inject inspectability into Rand usage. It is also nice to be able
to extend the functionality of Rand with custom methods. For these reasons,
and for the reasons outlined in this design philosophy document, we should
continue to use the cmn.Rand object, with mutex protection.
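A sketch of the pattern (not the actual cmn.Rand implementation; the package
name, seeding, and method set are illustrative):
```go
package random

import (
	"math/rand"
	"sync"
)

// Rand is a single struct that is safe to use from both concurrent and
// non-concurrent logic; callers never touch the mutex directly.
type Rand struct {
	mtx  sync.Mutex
	rand *rand.Rand
}

func NewRand(seed int64) *Rand {
	return &Rand{rand: rand.New(rand.NewSource(seed))}
}

func (r *Rand) Int63() int64 {
	r.mtx.Lock()
	defer r.mtx.Unlock()
	return r.rand.Int63()
}
```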
Another key aspect of good encapsulation is the choice of exposed vs. unexposed
methods. It should be clear to the reader of the code which methods are
intended to be used in what context, and what safe usage looks like. Part of
this is solved by keeping methods unexported; another part is naming
conventions on the methods (e.g. underscores) combined with good documentation
and code organization. If there are too many exposed methods and it isn't
clear which methods have which side effects, then there is something wrong
with the design of the abstractions, and it should be revisited.
### On concurrency
In order for Tendermint to remain relevant in the years to come, it is vital
that it take advantage of multicore architectures. Given the nature of the
problem, namely consensus across a concurrent p2p gossip network while also
handling RPC requests for a large number of consuming subscribers, Tendermint
development unavoidably requires expertise in concurrency design, especially
in the reactor design and in RPC request handling.
## Guidelines
Here are some guidelines for designing for (sufficient) performance and concurrency:
* Mutex locks are cheap enough when there isn't contention.
* Do not optimize code without analytical or observed proof that it is in a hot path.
* Don't over-use channels when mutex locks w/ encapsulation are sufficient.
* The need to drain channels is often a hint of unconsidered edge cases.
* The creation of O(N) one-off goroutines is generally technical debt that
needs to be addressed sooner rather than later. Avoid creating too many
goroutines as a patch around incomplete concurrency design, or at least be
aware of the debt and do not invest in it further. On the other hand, Tendermint
is designed to have a limited number of peers (e.g. 10 or 20), so the creation
of O(C) goroutines for each of O(P) peers is still O(C\*P) = constant.
* Use defer statements to unlock wherever possible. If you want to unlock sooner,
create more modular functions that still make use of defer statements.
## Mantras
* Premature optimization kills
* Readability is paramount
* Beautiful is better than fast.
* In the face of ambiguity, refuse the temptation to guess.
* In the face of bugs, refuse the temptation to cover the bug.
* There should be one-- and preferably only one --obvious way to do it.

README.md

@@ -1,14 +1,14 @@
# Tendermint
[Byzantine-Fault Tolerant](https://en.wikipedia.org/wiki/Byzantine_fault_tolerance)
[State Machine Replication](https://en.wikipedia.org/wiki/State_machine_replication).
Or [Blockchain](https://en.wikipedia.org/wiki/Blockchain_(database)) for short.
[State Machines](https://en.wikipedia.org/wiki/State_machine_replication).
Or [Blockchain](https://en.wikipedia.org/wiki/Blockchain_(database)), for short.
[![version](https://img.shields.io/github/tag/tendermint/tendermint.svg)](https://github.com/tendermint/tendermint/releases/latest)
[![API Reference](
https://camo.githubusercontent.com/915b7be44ada53c290eb157634330494ebe3e30a/68747470733a2f2f676f646f632e6f72672f6769746875622e636f6d2f676f6c616e672f6764646f3f7374617475732e737667
)](https://godoc.org/github.com/tendermint/tendermint)
[![Go version](https://img.shields.io/badge/go-1.10.4-blue.svg)](https://github.com/moovweb/gvm)
[![Go version](https://img.shields.io/badge/go-1.12.0-blue.svg)](https://github.com/moovweb/gvm)
[![riot.im](https://img.shields.io/badge/riot.im-JOIN%20CHAT-green.svg)](https://riot.im/app/#/room/#tendermint:matrix.org)
[![license](https://img.shields.io/github/license/tendermint/tendermint.svg)](https://github.com/tendermint/tendermint/blob/master/LICENSE)
[![](https://tokei.rs/b1/github/tendermint/tendermint?category=lines)](https://github.com/tendermint/tendermint)
@@ -41,7 +41,7 @@ please [contact us](mailto:partners@tendermint.com) and [join the chat](https://
## Security
To report a security vulnerability, see our [bug bounty
program](https://tendermint.com/security).
program](https://hackerone.com/tendermint)
For examples of the kinds of bugs we're looking for, see [SECURITY.md](SECURITY.md)
@@ -49,61 +49,43 @@ For examples of the kinds of bugs we're looking for, see [SECURITY.md](SECURITY.
Requirement|Notes
---|---
Go version | Go1.10 or higher
Go version | Go1.11.4 or higher
## Install
## Documentation
Complete documentation can be found on the [website](https://tendermint.com/docs/).
### Install
See the [install instructions](/docs/introduction/install.md)
## Quick Start
### Quick Start
- [Single node](/docs/tendermint-core/using-tendermint.md)
- [Local cluster using docker-compose](/networks/local)
- [Single node](/docs/introduction/quick-start.md)
- [Local cluster using docker-compose](/docs/networks/docker-compose.md)
- [Remote cluster using terraform and ansible](/docs/networks/terraform-and-ansible.md)
- [Join the Cosmos testnet](https://cosmos.network/testnet)
## Resources
### Tendermint Core
For details about the blockchain data structures and the p2p protocols, see the
the [Tendermint specification](/docs/spec).
For details on using the software, see the [documentation](/docs/) which is also
hosted at: https://tendermint.com/docs/
### Tools
Benchmarking and monitoring is provided by `tm-bench` and `tm-monitor`, respectively.
Their code is found [here](/tools) and these binaries need to be built separately.
Additional documentation is found [here](/docs/tools).
### Sub-projects
* [Amino](http://github.com/tendermint/go-amino), a reflection-based improvement on proto3
* [IAVL](http://github.com/tendermint/iavl), Merkleized IAVL+ Tree implementation
### Applications
* [Cosmos SDK](http://github.com/cosmos/cosmos-sdk); a cryptocurrency application framework
* [Ethermint](http://github.com/cosmos/ethermint); Ethereum on Tendermint
* [Many more](https://tendermint.com/ecosystem)
### Research
* [Master's Thesis on Tendermint](https://atrium.lib.uoguelph.ca/xmlui/handle/10214/9769)
* [Original Whitepaper](https://tendermint.com/static/docs/tendermint.pdf)
* [Blog](https://blog.cosmos.network/tendermint/home)
## Contributing
Yay open source! Please see our [contributing guidelines](CONTRIBUTING.md).
Please abide by the [Code of Conduct](CODE_OF_CONDUCT.md) in all interactions,
and the [contributing guidelines](CONTRIBUTING.md) when submitting code.
Join the larger community on the [forum](https://forum.cosmos.network/) and the [chat](https://riot.im/app/#/room/#tendermint:matrix.org).
To learn more about the structure of the software, watch the [Developer
Sessions](https://www.youtube.com/playlist?list=PLdQIb0qr3pnBbG5ZG-0gr3zM86_s8Rpqv)
and read some [Architectural
Decision Records](https://github.com/tendermint/tendermint/tree/master/docs/architecture).
Learn more by reading the code and comparing it to the
[specification](https://github.com/tendermint/tendermint/tree/develop/docs/spec).
## Versioning
### SemVer
### Semantic Versioning
Tendermint uses [SemVer](http://semver.org/) to determine when and how the version changes.
Tendermint uses [Semantic Versioning](http://semver.org/) to determine when and how the version changes.
According to SemVer, anything in the public API can change at any time before version 1.0.0.
To provide some stability to Tendermint users in these 0.X.X days, the MINOR version is used
@@ -114,6 +96,7 @@ include the in-process Go APIs.
That said, breaking changes in the following packages will be documented in the
CHANGELOG even if they don't lead to MINOR version bumps:
- crypto
- types
- rpc/client
- config
@@ -140,8 +123,40 @@ data into the new chain.
However, any bump in the PATCH version should be compatible with existing histories
(if not please open an [issue](https://github.com/tendermint/tendermint/issues)).
For more information on upgrading, see [here](./UPGRADING.md)
For more information on upgrading, see [UPGRADING.md](./UPGRADING.md)
## Code of Conduct
## Resources
### Tendermint Core
For details about the blockchain data structures and the p2p protocols, see the
[Tendermint specification](/docs/spec).
For details on using the software, see the [documentation](/docs/) which is also
hosted at: https://tendermint.com/docs/
### Tools
Benchmarking and monitoring is provided by `tm-bench` and `tm-monitor`, respectively.
Their code is found [here](/tools) and these binaries need to be built separately.
Additional documentation is found [here](/docs/tools).
### Sub-projects
* [Amino](http://github.com/tendermint/go-amino), reflection-based proto3, with
interfaces
* [IAVL](http://github.com/tendermint/iavl), Merkleized IAVL+ Tree implementation
### Applications
* [Cosmos SDK](http://github.com/cosmos/cosmos-sdk); a cryptocurrency application framework
* [Ethermint](http://github.com/cosmos/ethermint); Ethereum on Tendermint
* [Many more](https://tendermint.com/ecosystem)
### Research
* [The latest gossip on BFT consensus](https://arxiv.org/abs/1807.04938)
* [Master's Thesis on Tendermint](https://atrium.lib.uoguelph.ca/xmlui/handle/10214/9769)
* [Original Whitepaper](https://tendermint.com/static/docs/tendermint.pdf)
* [Blog](https://blog.cosmos.network/tendermint/home)
Please read, understand and adhere to our [code of conduct](CODE_OF_CONDUCT.md).

SECURITY.md

@@ -1,7 +1,8 @@
# Security
As part of our [Coordinated Vulnerability Disclosure
Policy](https://tendermint.com/security), we operate a bug bounty.
Policy](https://tendermint.com/security), we operate a [bug
bounty](https://hackerone.com/tendermint).
See the policy for more details on submissions and rewards.
Here is a list of examples of the kinds of bugs we're most interested in:

UPGRADING.md

@@ -3,9 +3,195 @@
This guide provides steps to be followed when you upgrade your applications to
a newer version of Tendermint Core.
## v0.31.6
There are no breaking changes in this release except for the Go API of the p2p
and mempool packages. However, if you're using cleveldb, you'll need to change
the compilation tag:
Use `cleveldb` tag instead of `gcc` to compile Tendermint with CLevelDB or
use `make build_c` / `make install_c` (full instructions can be found at
https://tendermint.com/docs/introduction/install.html#compile-with-cleveldb-support)
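Concretely (the manual command mirrors the `build_c` recipe from the Makefile,
omitting the version ldflags):
```
make build_c
# or, by hand:
CGO_ENABLED=1 go build -tags "tendermint cleveldb" -o build/tendermint ./cmd/tendermint/
```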
## v0.31.0
This release contains a breaking change to the behaviour of the pubsub system.
It also contains some minor breaking changes in the Go API and ABCI.
There are no changes to the block or p2p protocols, so v0.31.0 should work fine
with blockchains created from the v0.30 series.
### RPC
The pubsub no longer blocks on publishing. This may cause some WebSocket (WS) clients to stop working as expected.
If your WS client is not consuming events fast enough, Tendermint can terminate the subscription.
In this case, the WS client will receive an error with description:
```json
{
"jsonrpc": "2.0",
"id": "{ID}#event",
"error": {
"code": -32000,
"msg": "Server error",
"data": "subscription was cancelled (reason: client is not pulling messages fast enough)" // or "subscription was cancelled (reason: Tendermint exited)"
}
}
```
Additionally, there are now limits on the number of subscribers and
subscriptions that can be active at once. See the new
`rpc.max_subscription_clients` and `rpc.max_subscriptions_per_client` values to
configure this.
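These are set in the `[rpc]` section of the config file; a sketch (the values
shown are illustrative, check the generated config.toml for the actual
defaults):
```toml
[rpc]
max_subscription_clients = 100
max_subscriptions_per_client = 5
```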
### Applications
Simple rename of `ConsensusParams.BlockSize` to `ConsensusParams.Block`.
The `ConsensusParams.Block.TimeIotaMS` field was also removed. It's configured
in the ConsensusParams in the genesis file.
### Go API
See the [CHANGELOG](CHANGELOG.md). These are relatively straightforward.
## v0.30.0
This release contains a breaking change to both the block and p2p protocols,
however it may be compatible with blockchains created with v0.29.0 depending on
the chain history. If your blockchain has not included any pieces of evidence,
or no piece of evidence has been included in more than one block,
and if your application has never returned multiple updates
for the same validator in a single block, then v0.30.0 will work fine with
blockchains created with v0.29.0.
The p2p protocol change is to fix the proposer selection algorithm again.
Note that proposer selection is purely a p2p concern right
now since the algorithm is only relevant during real time consensus.
This change is thus compatible with v0.29.0, but
all nodes must be upgraded to avoid disagreements on the proposer.
### Applications
Applications must ensure they do not return duplicates in
`ResponseEndBlock.ValidatorUpdates`. A pubkey must only appear once per set of
updates. Duplicates will cause irrecoverable failure. If you have a very good
reason why we shouldn't do this, please open an issue.
## v0.29.0
This release contains some breaking changes to the block and p2p protocols,
and will not be compatible with any previous versions of the software, primarily
due to changes in how various data structures are hashed.
Any implementations of Tendermint blockchain verification, including lite clients,
will need to be updated. For specific details:
- [Merkle tree](./docs/spec/blockchain/encoding.md#merkle-trees)
- [ConsensusParams](./docs/spec/blockchain/state.md#consensusparams)
There was also a small change to field ordering in the vote struct. Any
implementations of an out-of-process validator (like a Key-Management Server)
will need to be updated. For specific details:
- [Vote](https://github.com/tendermint/tendermint/blob/develop/docs/spec/consensus/signing.md#votes)
Finally, the proposer selection algorithm continues to evolve. See the
[work-in-progress
specification](https://github.com/tendermint/tendermint/pull/3140).
For everything else, please see the [CHANGELOG](./CHANGELOG.md#v0.29.0).
## v0.28.0
This release breaks the format for the `priv_validator.json` file
and the protocol used for the external validator process.
It is compatible with v0.27.0 blockchains (neither the BlockProtocol nor the
P2PProtocol have changed).
Please read carefully for details about upgrading.
**Note:** Back up your `config/priv_validator.json`
before proceeding.
### `priv_validator.json`
The `config/priv_validator.json` is now two files:
`config/priv_validator_key.json` and `data/priv_validator_state.json`.
The former contains the key material, the latter contains the details on the last
message signed.
When running v0.28.0 for the first time, it will back up any pre-existing
`priv_validator.json` file and proceed to split it into the two new files.
Upgrading should happen automatically without problem.
To upgrade manually, use the provided `privValUpgrade.go` script, with exact paths for the old
`priv_validator.json` and the locations for the two new files. It's recommended
to use the default paths of `config/priv_validator_key.json` and
`data/priv_validator_state.json`, respectively:
```
go run scripts/privValUpgrade.go <old-path> <new-key-path> <new-state-path>
```
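With the default paths above, that is:
```
go run scripts/privValUpgrade.go config/priv_validator.json config/priv_validator_key.json data/priv_validator_state.json
```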
### External validator signers
The Unix and TCP implementations of the remote signing validator
have been consolidated into a single implementation.
Thus in both cases, the external process is expected to dial
Tendermint. This is different from how Unix sockets used to work, where
Tendermint dialed the external process.
The `PubKeyMsg` was also split into separate `Request` and `Response` types
for consistency with other messages.
Note that the TCP sockets don't yet use a persistent key,
so while they're encrypted, they can't yet be properly authenticated.
See [#3105](https://github.com/tendermint/tendermint/issues/3105).
Note the Unix socket has neither encryption nor authentication, but will
add a shared-secret in [#3099](https://github.com/tendermint/tendermint/issues/3099).
## v0.27.0
This release contains some breaking changes to the block and p2p protocols,
but does not change any core data structures, so it should be compatible with
existing blockchains from the v0.26 series that only used Ed25519 validator keys.
Blockchains using Secp256k1 for validators will not be compatible. This is due
to the fact that we now enforce which key types validators can use as a
consensus param. The default is Ed25519, and Secp256k1 must be activated
explicitly.
It is recommended to upgrade all nodes at once to avoid incompatibilities at the
peer layer - namely, the heartbeat consensus message has been removed (only
relevant if `create_empty_blocks=false` or `create_empty_blocks_interval > 0`),
and the proposer selection algorithm has changed. Since proposer information is
never included in the blockchain, this change only affects the peer layer.
### Go API Changes
#### libs/db
The ReverseIterator API has changed the meaning of `start` and `end`.
Before, iteration was from `start` to `end`, where
`start > end`. Now, iteration is from `end` to `start`, where `start < end`.
The iterator also excludes `end`. This change allows a simplified and more
intuitive logic, aligning the semantic meaning of `start` and `end` in the
`Iterator` and `ReverseIterator`.
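A minimal sketch of the new semantics against an in-memory DB (this assumes
the `libs/db` MemDB and its Iterator interface; the keys and values are
illustrative):
```go
package main

import (
	"fmt"

	dbm "github.com/tendermint/tendermint/libs/db"
)

func main() {
	db := dbm.NewMemDB()
	db.Set([]byte("a"), []byte("1"))
	db.Set([]byte("b"), []byte("2"))
	db.Set([]byte("c"), []byte("3"))

	// New semantics: yields keys k with start <= k < end, in descending
	// order; note that end ("c") is excluded.
	it := db.ReverseIterator([]byte("a"), []byte("c"))
	defer it.Close()
	for ; it.Valid(); it.Next() {
		fmt.Printf("%s\n", it.Key()) // prints "b", then "a"
	}
}
```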
### Applications
This release enforces a new consensus parameter, the
ValidatorParams.PubKeyTypes. Applications must ensure that they only return
validator updates with the allowed PubKeyTypes. If a validator update includes a
pubkey type that is not included in the ConsensusParams.Validator.PubKeyTypes,
block execution will fail and the consensus will halt.
By default, only Ed25519 pubkeys may be used for validators. Enabling
Secp256k1 requires explicit modification of the ConsensusParams.
Please update your application accordingly (ie. restrict validators to only be
able to use Ed25519 keys, or explicitly add additional key types to the genesis
file).
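As a sketch, enabling Secp256k1 alongside the default looks like this in the
genesis file (the fragment is illustrative and trimmed to the relevant fields):
```json
"consensus_params": {
  "validator": {
    "pub_key_types": ["ed25519", "secp256k1"]
  }
}
```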
## v0.26.0
New 0.26.0 release contains a lot of changes to core data types. It is not
This release contains a lot of changes to core data types and protocols. It is not
compatible with the old versions, and there is no straightforward way to update
old data to be compatible with the new version.
@@ -33,7 +219,7 @@ to `prove`. To get proofs with your queries, ensure you set `prove=true`.
Various version fields like `amino_version`, `p2p_version`, `consensus_version`,
and `rpc_version` have been removed from the `node_info.other` and are
consolidated under the tendermint semantic version (ie. `node_info.version`) and
the new `block` and `p2p` protocol versions under `node_info.protocol_version`..
the new `block` and `p2p` protocol versions under `node_info.protocol_version`.
### ABCI Changes
@@ -45,7 +231,7 @@ protobuf file for these changes.
The `ResponseQuery.Proof` field is now structured as a `[]ProofOp` to support
generalized Merkle tree constructions where the leaves of one Merkle tree are
the root of another. If you don't need this functionaluty, and you used to
the root of another. If you don't need this functionality, and you used to
return `<proof bytes>` here, you should instead return a single `ProofOp` with
just the `Data` field set:
@@ -67,7 +253,7 @@ For more information, see:
### Go API Changes
#### crypto.merkle
#### crypto/merkle
The `merkle.Hasher` interface was removed. Functions which used to take `Hasher`
now simply take `[]byte`. This means that any objects being Merklized should be
@@ -79,6 +265,10 @@ The `node.RunForever` function was removed. Signal handling and running forever
should instead be explicitly configured by the caller. See how we do it
[here](https://github.com/tendermint/tendermint/blob/30519e8361c19f4bf320ef4d26288ebc621ad725/cmd/tendermint/commands/run_node.go#L60).
### Other
All hashes, except for public key addresses, are now 32 bytes.
## v0.25.0
This release has minimal impact.

Vagrantfile

@@ -29,10 +29,14 @@ Vagrant.configure("2") do |config|
usermod -a -G docker vagrant
# install go
wget -q https://dl.google.com/go/go1.11.linux-amd64.tar.gz
tar -xvf go1.11.linux-amd64.tar.gz
wget -q https://dl.google.com/go/go1.12.linux-amd64.tar.gz
tar -xvf go1.12.linux-amd64.tar.gz
mv go /usr/local
rm -f go1.11.linux-amd64.tar.gz
rm -f go1.12.linux-amd64.tar.gz
# install nodejs (for docs)
curl -sL https://deb.nodesource.com/setup_11.x | bash -
apt-get install -y nodejs
# cleanup
apt-get autoremove -y

abci/README.md

@@ -1,7 +1,5 @@
# Application BlockChain Interface (ABCI)
[![CircleCI](https://circleci.com/gh/tendermint/abci.svg?style=svg)](https://circleci.com/gh/tendermint/abci)
Blockchains are systems for multi-master state machine replication.
**ABCI** is an interface that defines the boundary between the replication engine (the blockchain),
and the state machine (the application).
@@ -12,160 +10,28 @@ Previously, the ABCI was referred to as TMSP.
The community has provided a number of additional implementations; see the [Tendermint Ecosystem](https://tendermint.com/ecosystem).
## Installation & Usage
To get up and running quickly, see the [getting started guide](../docs/app-dev/getting-started.md) along with the [abci-cli documentation](../docs/app-dev/abci-cli.md) which will go through the examples found in the [examples](./example/) directory.
## Specification
A detailed description of the ABCI methods and message types is contained in:
- [A prose specification](specification.md)
- [A protobuf file](https://github.com/tendermint/tendermint/blob/master/abci/types/types.proto)
- [A Go interface](https://github.com/tendermint/tendermint/blob/master/abci/types/application.go).
For more background information on ABCI, motivations, and tendermint, please visit [the documentation](https://tendermint.com/docs/).
The two guides to focus on are the `Application Development Guide` and `Using ABCI-CLI`.
- [The main spec](../docs/spec/abci/abci.md)
- [A protobuf file](./types/types.proto)
- [A Go interface](./types/application.go)
## Protocol Buffers
To compile the protobuf file, run:
To compile the protobuf file, run (from the root of the repo):
```
cd $GOPATH/src/github.com/tendermint/tendermint/; make protoc_abci
make protoc_abci
```
See `protoc --help` and [the Protocol Buffers site](https://developers.google.com/protocol-buffers)
for details on compiling for other languages. Note we also include a [GRPC](http://www.grpc.io/docs)
for details on compiling for other languages. Note we also include a [GRPC](https://www.grpc.io/docs)
service definition.
## Install ABCI-CLI
The `abci-cli` is a simple tool for debugging ABCI servers and running some
example apps. To install it:
```
mkdir -p $GOPATH/src/github.com/tendermint
cd $GOPATH/src/github.com/tendermint
git clone https://github.com/tendermint/tendermint.git
cd tendermint
make get_tools
make get_vendor_deps
make install_abci
```
## Implementation
We provide three implementations of the ABCI in Go:
- Golang in-process
- ABCI-socket
- GRPC
Note the GRPC version is maintained primarily to simplify onboarding and prototyping and is not receiving the same
attention to security and performance as the others.
### In Process
The simplest implementation just uses function calls within Go.
This means ABCI applications written in Golang can be compiled with TendermintCore and run as a single binary.
See the [examples](#examples) below for more information.
### Socket (TSP)
ABCI is best implemented as a streaming protocol.
The socket implementation provides for asynchronous, ordered message passing over unix or tcp.
Messages are serialized using Protobuf3 and length-prefixed with a [signed Varint](https://developers.google.com/protocol-buffers/docs/encoding?csw=1#signed-integers).
For example, if the Protobuf3 encoded ABCI message is `0xDEADBEEF` (4 bytes), the length-prefixed message is `0x08DEADBEEF`, since `0x08` is the signed varint
encoding of `4`. If the Protobuf3 encoded ABCI message is 65535 bytes long, the length-prefixed message would be like `0xFEFF07...`.
Note the benefit of using this `varint` encoding over the old version (where integers were encoded as `<len of len><big endian len>`) is that
it is the standard way to encode integers in Protobuf. It is also generally shorter.
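For illustration, a small sketch of this framing using Go's standard
signed-varint encoder (a sketch of the framing rule, not the actual client
code):
```go
package main

import (
	"encoding/binary"
	"fmt"
)

// lengthPrefix frames msg with its byte length encoded as a signed
// (zigzag) varint, as in the ABCI socket protocol described above.
func lengthPrefix(msg []byte) []byte {
	buf := make([]byte, binary.MaxVarintLen64)
	n := binary.PutVarint(buf, int64(len(msg)))
	return append(buf[:n], msg...)
}

func main() {
	msg := []byte{0xDE, 0xAD, 0xBE, 0xEF}
	fmt.Printf("%X\n", lengthPrefix(msg)) // prints 08DEADBEEF
}
```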
### GRPC
GRPC is an rpc framework native to Protocol Buffers with support in many languages.
Implementing the ABCI using GRPC can allow for faster prototyping, but is expected to be much slower than
the ordered, asynchronous socket protocol. The implementation has also not received as much testing or review.
Note the length-prefixing used in the socket implementation does not apply for GRPC.
## Usage
The `abci-cli` tool wraps an ABCI client and can be used for probing/testing an ABCI server.
For instance, `abci-cli test` will run a test sequence against a listening server running the Counter application (see below).
It can also be used to run some example applications.
See [the documentation](https://tendermint.com/docs/) for more details.
### Examples
Check out the variety of example applications in the [example directory](example/).
It also contains the code referred to by the `counter` and `kvstore` apps; these apps come
built into the `abci-cli` binary.
#### Counter
The `abci-cli counter` application illustrates nonce checking in transactions. Its code looks like:
```golang
func cmdCounter(cmd *cobra.Command, args []string) error {
	app := counter.NewCounterApplication(flagSerial)
	logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))

	// Start the listener
	srv, err := server.NewServer(flagAddrC, flagAbci, app)
	if err != nil {
		return err
	}
	srv.SetLogger(logger.With("module", "abci-server"))
	if err := srv.Start(); err != nil {
		return err
	}

	// Wait forever
	cmn.TrapSignal(func() {
		// Cleanup
		srv.Stop()
	})
	return nil
}
```
and can be found in [this file](cmd/abci-cli/abci-cli.go).
#### kvstore
The `abci-cli kvstore` application illustrates a simple key-value Merkle tree:
```golang
func cmdKVStore(cmd *cobra.Command, args []string) error {
	logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))

	// Create the application - in memory or persisted to disk
	var app types.Application
	if flagPersist == "" {
		app = kvstore.NewKVStoreApplication()
	} else {
		app = kvstore.NewPersistentKVStoreApplication(flagPersist)
		app.(*kvstore.PersistentKVStoreApplication).SetLogger(logger.With("module", "kvstore"))
	}

	// Start the listener
	srv, err := server.NewServer(flagAddrD, flagAbci, app)
	if err != nil {
		return err
	}
	srv.SetLogger(logger.With("module", "abci-server"))
	if err := srv.Start(); err != nil {
		return err
	}

	// Wait forever
	cmn.TrapSignal(func() {
		// Cleanup
		srv.Stop()
	})
	return nil
}
```

abci/client/client.go

@@ -105,8 +105,8 @@ func (reqRes *ReqRes) SetCallback(cb func(res *types.Response)) {
return
}
defer reqRes.mtx.Unlock()
reqRes.cb = cb
reqRes.mtx.Unlock()
}
func (reqRes *ReqRes) GetCallback() func(*types.Response) {

abci/client/grpc_client.go

@@ -54,7 +54,7 @@ RETRY_LOOP:
if cli.mustConnect {
return err
}
cli.Logger.Error(fmt.Sprintf("abci.grpcClient failed to connect to %v. Retrying...\n", cli.addr))
cli.Logger.Error(fmt.Sprintf("abci.grpcClient failed to connect to %v. Retrying...\n", cli.addr), "err", err)
time.Sleep(time.Second * dialRetryIntervalSeconds)
continue RETRY_LOOP
}
@@ -111,8 +111,8 @@ func (cli *grpcClient) Error() error {
// NOTE: callback may get internally generated flush responses.
func (cli *grpcClient) SetResponseCallback(resCb Callback) {
cli.mtx.Lock()
defer cli.mtx.Unlock()
cli.resCb = resCb
cli.mtx.Unlock()
}
//----------------------------------------
@@ -129,7 +129,7 @@ func (cli *grpcClient) EchoAsync(msg string) *ReqRes {
if err != nil {
cli.StopForError(err)
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_Echo{res}})
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_Echo{Echo: res}})
}
func (cli *grpcClient) FlushAsync() *ReqRes {
@@ -138,7 +138,7 @@ func (cli *grpcClient) FlushAsync() *ReqRes {
if err != nil {
cli.StopForError(err)
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_Flush{res}})
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_Flush{Flush: res}})
}
func (cli *grpcClient) InfoAsync(params types.RequestInfo) *ReqRes {
@@ -147,7 +147,7 @@ func (cli *grpcClient) InfoAsync(params types.RequestInfo) *ReqRes {
if err != nil {
cli.StopForError(err)
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_Info{res}})
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_Info{Info: res}})
}
func (cli *grpcClient) SetOptionAsync(params types.RequestSetOption) *ReqRes {
@@ -156,7 +156,7 @@ func (cli *grpcClient) SetOptionAsync(params types.RequestSetOption) *ReqRes {
if err != nil {
cli.StopForError(err)
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_SetOption{res}})
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_SetOption{SetOption: res}})
}
func (cli *grpcClient) DeliverTxAsync(tx []byte) *ReqRes {
@@ -165,7 +165,7 @@ func (cli *grpcClient) DeliverTxAsync(tx []byte) *ReqRes {
if err != nil {
cli.StopForError(err)
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_DeliverTx{res}})
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_DeliverTx{DeliverTx: res}})
}
func (cli *grpcClient) CheckTxAsync(tx []byte) *ReqRes {
@@ -174,7 +174,7 @@ func (cli *grpcClient) CheckTxAsync(tx []byte) *ReqRes {
if err != nil {
cli.StopForError(err)
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_CheckTx{res}})
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_CheckTx{CheckTx: res}})
}
func (cli *grpcClient) QueryAsync(params types.RequestQuery) *ReqRes {
@@ -183,7 +183,7 @@ func (cli *grpcClient) QueryAsync(params types.RequestQuery) *ReqRes {
if err != nil {
cli.StopForError(err)
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_Query{res}})
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_Query{Query: res}})
}
func (cli *grpcClient) CommitAsync() *ReqRes {
@@ -192,7 +192,7 @@ func (cli *grpcClient) CommitAsync() *ReqRes {
if err != nil {
cli.StopForError(err)
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_Commit{res}})
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_Commit{Commit: res}})
}
func (cli *grpcClient) InitChainAsync(params types.RequestInitChain) *ReqRes {
@@ -201,7 +201,7 @@ func (cli *grpcClient) InitChainAsync(params types.RequestInitChain) *ReqRes {
if err != nil {
cli.StopForError(err)
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_InitChain{res}})
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_InitChain{InitChain: res}})
}
func (cli *grpcClient) BeginBlockAsync(params types.RequestBeginBlock) *ReqRes {
@@ -210,7 +210,7 @@ func (cli *grpcClient) BeginBlockAsync(params types.RequestBeginBlock) *ReqRes {
if err != nil {
cli.StopForError(err)
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_BeginBlock{res}})
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_BeginBlock{BeginBlock: res}})
}
func (cli *grpcClient) EndBlockAsync(params types.RequestEndBlock) *ReqRes {
@@ -219,7 +219,7 @@ func (cli *grpcClient) EndBlockAsync(params types.RequestEndBlock) *ReqRes {
if err != nil {
cli.StopForError(err)
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_EndBlock{res}})
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_EndBlock{EndBlock: res}})
}
func (cli *grpcClient) finishAsyncCall(req *types.Request, res *types.Response) *ReqRes {

abci/client/local_client.go

@@ -9,8 +9,13 @@ import (
var _ Client = (*localClient)(nil)
// NOTE: use defer to unlock mutex because Application might panic (e.g., in
// case of malicious tx or query). It only makes sense for publicly exposed
// methods like CheckTx (/broadcast_tx_* RPC endpoint) or Query (/abci_query
// RPC endpoint), but defers are used everywhere for the sake of consistency.
type localClient struct {
cmn.BaseService
mtx *sync.Mutex
types.Application
Callback
@@ -30,8 +35,8 @@ func NewLocalClient(mtx *sync.Mutex, app types.Application) *localClient {
func (app *localClient) SetResponseCallback(cb Callback) {
app.mtx.Lock()
defer app.mtx.Unlock()
app.Callback = cb
app.mtx.Unlock()
}
// TODO: change types.Application to include Error()?
@@ -45,6 +50,9 @@ func (app *localClient) FlushAsync() *ReqRes {
}
func (app *localClient) EchoAsync(msg string) *ReqRes {
app.mtx.Lock()
defer app.mtx.Unlock()
return app.callback(
types.ToRequestEcho(msg),
types.ToResponseEcho(msg),
@@ -53,8 +61,9 @@ func (app *localClient) EchoAsync(msg string) *ReqRes {
func (app *localClient) InfoAsync(req types.RequestInfo) *ReqRes {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.Info(req)
app.mtx.Unlock()
return app.callback(
types.ToRequestInfo(req),
types.ToResponseInfo(res),
@@ -63,8 +72,9 @@ func (app *localClient) InfoAsync(req types.RequestInfo) *ReqRes {
func (app *localClient) SetOptionAsync(req types.RequestSetOption) *ReqRes {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.SetOption(req)
app.mtx.Unlock()
return app.callback(
types.ToRequestSetOption(req),
types.ToResponseSetOption(res),
@@ -73,8 +83,9 @@ func (app *localClient) SetOptionAsync(req types.RequestSetOption) *ReqRes {
func (app *localClient) DeliverTxAsync(tx []byte) *ReqRes {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.DeliverTx(tx)
app.mtx.Unlock()
return app.callback(
types.ToRequestDeliverTx(tx),
types.ToResponseDeliverTx(res),
@@ -83,8 +94,9 @@ func (app *localClient) DeliverTxAsync(tx []byte) *ReqRes {
func (app *localClient) CheckTxAsync(tx []byte) *ReqRes {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.CheckTx(tx)
app.mtx.Unlock()
return app.callback(
types.ToRequestCheckTx(tx),
types.ToResponseCheckTx(res),
@@ -93,8 +105,9 @@ func (app *localClient) CheckTxAsync(tx []byte) *ReqRes {
func (app *localClient) QueryAsync(req types.RequestQuery) *ReqRes {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.Query(req)
app.mtx.Unlock()
return app.callback(
types.ToRequestQuery(req),
types.ToResponseQuery(res),
@@ -103,8 +116,9 @@ func (app *localClient) QueryAsync(req types.RequestQuery) *ReqRes {
func (app *localClient) CommitAsync() *ReqRes {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.Commit()
app.mtx.Unlock()
return app.callback(
types.ToRequestCommit(),
types.ToResponseCommit(res),
@@ -113,19 +127,20 @@ func (app *localClient) CommitAsync() *ReqRes {
func (app *localClient) InitChainAsync(req types.RequestInitChain) *ReqRes {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.InitChain(req)
reqRes := app.callback(
return app.callback(
types.ToRequestInitChain(req),
types.ToResponseInitChain(res),
)
app.mtx.Unlock()
return reqRes
}
func (app *localClient) BeginBlockAsync(req types.RequestBeginBlock) *ReqRes {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.BeginBlock(req)
app.mtx.Unlock()
return app.callback(
types.ToRequestBeginBlock(req),
types.ToResponseBeginBlock(res),
@@ -134,8 +149,9 @@ func (app *localClient) BeginBlockAsync(req types.RequestBeginBlock) *ReqRes {
func (app *localClient) EndBlockAsync(req types.RequestEndBlock) *ReqRes {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.EndBlock(req)
app.mtx.Unlock()
return app.callback(
types.ToRequestEndBlock(req),
types.ToResponseEndBlock(res),
@@ -154,64 +170,73 @@ func (app *localClient) EchoSync(msg string) (*types.ResponseEcho, error) {
func (app *localClient) InfoSync(req types.RequestInfo) (*types.ResponseInfo, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.Info(req)
app.mtx.Unlock()
return &res, nil
}
func (app *localClient) SetOptionSync(req types.RequestSetOption) (*types.ResponseSetOption, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.SetOption(req)
app.mtx.Unlock()
return &res, nil
}
func (app *localClient) DeliverTxSync(tx []byte) (*types.ResponseDeliverTx, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.DeliverTx(tx)
app.mtx.Unlock()
return &res, nil
}
func (app *localClient) CheckTxSync(tx []byte) (*types.ResponseCheckTx, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.CheckTx(tx)
app.mtx.Unlock()
return &res, nil
}
func (app *localClient) QuerySync(req types.RequestQuery) (*types.ResponseQuery, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.Query(req)
app.mtx.Unlock()
return &res, nil
}
func (app *localClient) CommitSync() (*types.ResponseCommit, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.Commit()
app.mtx.Unlock()
return &res, nil
}
func (app *localClient) InitChainSync(req types.RequestInitChain) (*types.ResponseInitChain, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.InitChain(req)
app.mtx.Unlock()
return &res, nil
}
func (app *localClient) BeginBlockSync(req types.RequestBeginBlock) (*types.ResponseBeginBlock, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.BeginBlock(req)
app.mtx.Unlock()
return &res, nil
}
func (app *localClient) EndBlockSync(req types.RequestEndBlock) (*types.ResponseEndBlock, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.EndBlock(req)
app.mtx.Unlock()
return &res, nil
}

abci/client/socket_client.go

@@ -26,16 +26,17 @@ var _ Client = (*socketClient)(nil)
type socketClient struct {
cmn.BaseService
reqQueue chan *ReqRes
flushTimer *cmn.ThrottleTimer
addr string
mustConnect bool
conn net.Conn
reqQueue chan *ReqRes
flushTimer *cmn.ThrottleTimer
mtx sync.Mutex
addr string
conn net.Conn
err error
reqSent *list.List
resCb func(*types.Request, *types.Response) // listens to all callbacks
reqSent *list.List // list of requests sent, waiting for response
resCb func(*types.Request, *types.Response) // called on all requests, if set.
}
@@ -67,7 +68,7 @@ RETRY_LOOP:
if cli.mustConnect {
return err
}
cli.Logger.Error(fmt.Sprintf("abci.socketClient failed to connect to %v. Retrying...", cli.addr))
cli.Logger.Error(fmt.Sprintf("abci.socketClient failed to connect to %v. Retrying...", cli.addr), "err", err)
time.Sleep(time.Second * dialRetryIntervalSeconds)
continue RETRY_LOOP
}
@@ -86,6 +87,7 @@ func (cli *socketClient) OnStop() {
cli.mtx.Lock()
defer cli.mtx.Unlock()
if cli.conn != nil {
// does this really need a mutex?
cli.conn.Close()
}
@@ -118,8 +120,8 @@ func (cli *socketClient) Error() error {
// NOTE: callback may get internally generated flush responses.
func (cli *socketClient) SetResponseCallback(resCb Callback) {
cli.mtx.Lock()
defer cli.mtx.Unlock()
cli.resCb = resCb
cli.mtx.Unlock()
}
//----------------------------------------
@@ -207,12 +209,15 @@ func (cli *socketClient) didRecvResponse(res *types.Response) error {
reqres.Done() // Release waiters
cli.reqSent.Remove(next) // Pop first item from linked list
// Notify reqRes listener if set
// Notify reqRes listener if set (request specific callback).
// NOTE: it is possible this callback isn't set on the reqres object at
// this point, in which case it will be called later, when it is set.
// TODO: should we move this after the resCb call so the order is always consistent?
if cb := reqres.GetCallback(); cb != nil {
cb(res)
}
// Notify client listener if set
// Notify client listener if set (global callback).
if cli.resCb != nil {
cli.resCb(reqres.Request, res)
}

abci/cmd/abci-cli/abci-cli.go

@@ -58,7 +58,7 @@ var RootCmd = &cobra.Command{
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
switch cmd.Use {
case "counter", "kvstore", "dummy": // for the examples apps, don't pre-run
case "counter", "kvstore": // for the examples apps, don't pre-run
return nil
case "version": // skip running for version command
return nil
@@ -127,10 +127,6 @@ func addCounterFlags() {
counterCmd.PersistentFlags().BoolVarP(&flagSerial, "serial", "", false, "enforce incrementing (serial) transactions")
}
func addDummyFlags() {
dummyCmd.PersistentFlags().StringVarP(&flagPersist, "persist", "", "", "directory to use for a database")
}
func addKVStoreFlags() {
kvstoreCmd.PersistentFlags().StringVarP(&flagPersist, "persist", "", "", "directory to use for a database")
}
@@ -152,10 +148,6 @@ func addCommands() {
// examples
addCounterFlags()
RootCmd.AddCommand(counterCmd)
// deprecated, left for backwards compatibility
addDummyFlags()
RootCmd.AddCommand(dummyCmd)
// replaces dummy, see issue #196
addKVStoreFlags()
RootCmd.AddCommand(kvstoreCmd)
}
@@ -291,18 +283,6 @@ var counterCmd = &cobra.Command{
},
}
// deprecated, left for backwards compatibility
var dummyCmd = &cobra.Command{
Use: "dummy",
Deprecated: "use: [abci-cli kvstore] instead",
Short: "ABCI demo example",
Long: "ABCI demo example",
Args: cobra.ExactArgs(0),
RunE: func(cmd *cobra.Command, args []string) error {
return cmdKVStore(cmd, args)
},
}
var kvstoreCmd = &cobra.Command{
Use: "kvstore",
Short: "ABCI demo example",
@@ -414,7 +394,6 @@ func cmdConsole(cmd *cobra.Command, args []string) error {
return err
}
}
return nil
}
func muxOnCommands(cmd *cobra.Command, pArgs []string) error {
@@ -657,9 +636,7 @@ func cmdQuery(cmd *cobra.Command, args []string) error {
}
func cmdCounter(cmd *cobra.Command, args []string) error {
app := counter.NewCounterApplication(flagSerial)
logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))
// Start the listener
@@ -672,12 +649,14 @@ func cmdCounter(cmd *cobra.Command, args []string) error {
return err
}
// Wait forever
cmn.TrapSignal(func() {
// Stop upon receiving SIGTERM or CTRL-C.
cmn.TrapSignal(logger, func() {
// Cleanup
srv.Stop()
})
return nil
// Run forever.
select {}
}
func cmdKVStore(cmd *cobra.Command, args []string) error {
@@ -702,12 +681,14 @@ func cmdKVStore(cmd *cobra.Command, args []string) error {
return err
}
// Wait forever
cmn.TrapSignal(func() {
// Stop upon receiving SIGTERM or CTRL-C.
cmn.TrapSignal(logger, func() {
// Cleanup
srv.Stop()
})
return nil
// Run forever.
select {}
}
//--------------------------------------------------------------------------------


@@ -83,7 +83,7 @@ func TestWriteReadMessage2(t *testing.T) {
Log: phrase,
GasWanted: 10,
Tags: []cmn.KVPair{
cmn.KVPair{Key: []byte("abc"), Value: []byte("def")},
{Key: []byte("abc"), Value: []byte("def")},
},
},
// TODO: add the rest

(large file diff omitted)

abci/types/types.proto

@@ -4,9 +4,9 @@ package types;
// For more information on gogo.proto, see:
// https://github.com/gogo/protobuf/blob/master/extensions.md
import "github.com/gogo/protobuf/gogoproto/gogo.proto";
import "google/protobuf/timestamp.proto";
import "github.com/tendermint/tendermint/libs/common/types.proto";
import "github.com/tendermint/tendermint/crypto/merkle/merkle.proto";
import "github.com/tendermint/tendermint/libs/common/types.proto";
import "google/protobuf/timestamp.proto";
// This file is copied from http://github.com/tendermint/abci
// NOTE: When using custom types, mind the warnings.
@@ -207,13 +207,13 @@ message ResponseCommit {
// ConsensusParams contains all consensus-relevant parameters
// that can be adjusted by the abci app
message ConsensusParams {
BlockSizeParams block_size = 1;
BlockParams block = 1;
EvidenceParams evidence = 2;
ValidatorParams validator = 3;
}
// BlockSize contains limits on the block size.
message BlockSizeParams {
// BlockParams contains limits on the block size.
message BlockParams {
// Note: must be greater than 0
int64 max_bytes = 1;
// Note: must be greater or equal to -1


@@ -1479,15 +1479,15 @@ func TestConsensusParamsMarshalTo(t *testing.T) {
}
}
func TestBlockSizeParamsProto(t *testing.T) {
func TestBlockParamsProto(t *testing.T) {
seed := time.Now().UnixNano()
popr := math_rand.New(math_rand.NewSource(seed))
p := NewPopulatedBlockSizeParams(popr, false)
p := NewPopulatedBlockParams(popr, false)
dAtA, err := github_com_gogo_protobuf_proto.Marshal(p)
if err != nil {
t.Fatalf("seed = %d, err = %v", seed, err)
}
msg := &BlockSizeParams{}
msg := &BlockParams{}
if err := github_com_gogo_protobuf_proto.Unmarshal(dAtA, msg); err != nil {
t.Fatalf("seed = %d, err = %v", seed, err)
}
@@ -1510,10 +1510,10 @@ func TestBlockSizeParamsProto(t *testing.T) {
}
}
func TestBlockSizeParamsMarshalTo(t *testing.T) {
func TestBlockParamsMarshalTo(t *testing.T) {
seed := time.Now().UnixNano()
popr := math_rand.New(math_rand.NewSource(seed))
p := NewPopulatedBlockSizeParams(popr, false)
p := NewPopulatedBlockParams(popr, false)
size := p.Size()
dAtA := make([]byte, size)
for i := range dAtA {
@@ -1523,7 +1523,7 @@ func TestBlockSizeParamsMarshalTo(t *testing.T) {
if err != nil {
t.Fatalf("seed = %d, err = %v", seed, err)
}
msg := &BlockSizeParams{}
msg := &BlockParams{}
if err := github_com_gogo_protobuf_proto.Unmarshal(dAtA, msg); err != nil {
t.Fatalf("seed = %d, err = %v", seed, err)
}
@@ -2675,16 +2675,16 @@ func TestConsensusParamsJSON(t *testing.T) {
t.Fatalf("seed = %d, %#v !Json Equal %#v", seed, msg, p)
}
}
func TestBlockSizeParamsJSON(t *testing.T) {
func TestBlockParamsJSON(t *testing.T) {
seed := time.Now().UnixNano()
popr := math_rand.New(math_rand.NewSource(seed))
p := NewPopulatedBlockSizeParams(popr, true)
p := NewPopulatedBlockParams(popr, true)
marshaler := github_com_gogo_protobuf_jsonpb.Marshaler{}
jsondata, err := marshaler.MarshalToString(p)
if err != nil {
t.Fatalf("seed = %d, err = %v", seed, err)
}
msg := &BlockSizeParams{}
msg := &BlockParams{}
err = github_com_gogo_protobuf_jsonpb.UnmarshalString(jsondata, msg)
if err != nil {
t.Fatalf("seed = %d, err = %v", seed, err)
@@ -3637,12 +3637,12 @@ func TestConsensusParamsProtoCompactText(t *testing.T) {
}
}
func TestBlockSizeParamsProtoText(t *testing.T) {
func TestBlockParamsProtoText(t *testing.T) {
seed := time.Now().UnixNano()
popr := math_rand.New(math_rand.NewSource(seed))
p := NewPopulatedBlockSizeParams(popr, true)
p := NewPopulatedBlockParams(popr, true)
dAtA := github_com_gogo_protobuf_proto.MarshalTextString(p)
msg := &BlockSizeParams{}
msg := &BlockParams{}
if err := github_com_gogo_protobuf_proto.UnmarshalText(dAtA, msg); err != nil {
t.Fatalf("seed = %d, err = %v", seed, err)
}
@@ -3651,12 +3651,12 @@ func TestBlockSizeParamsProtoText(t *testing.T) {
}
}
func TestBlockSizeParamsProtoCompactText(t *testing.T) {
func TestBlockParamsProtoCompactText(t *testing.T) {
seed := time.Now().UnixNano()
popr := math_rand.New(math_rand.NewSource(seed))
p := NewPopulatedBlockSizeParams(popr, true)
p := NewPopulatedBlockParams(popr, true)
dAtA := github_com_gogo_protobuf_proto.CompactTextString(p)
msg := &BlockSizeParams{}
msg := &BlockParams{}
if err := github_com_gogo_protobuf_proto.UnmarshalText(dAtA, msg); err != nil {
t.Fatalf("seed = %d, err = %v", seed, err)
}
@@ -4573,10 +4573,10 @@ func TestConsensusParamsSize(t *testing.T) {
}
}
func TestBlockSizeParamsSize(t *testing.T) {
func TestBlockParamsSize(t *testing.T) {
seed := time.Now().UnixNano()
popr := math_rand.New(math_rand.NewSource(seed))
p := NewPopulatedBlockSizeParams(popr, true)
p := NewPopulatedBlockParams(popr, true)
size2 := github_com_gogo_protobuf_proto.Size(p)
dAtA, err := github_com_gogo_protobuf_proto.Marshal(p)
if err != nil {


@@ -4,7 +4,7 @@ import (
"testing"
"time"
"github.com/tendermint/go-amino"
amino "github.com/tendermint/go-amino"
proto "github.com/tendermint/tendermint/benchmarks/proto"
"github.com/tendermint/tendermint/crypto/ed25519"


@@ -69,7 +69,7 @@ type BlockPool struct {
height int64 // the lowest key in requesters.
// peers
peers map[p2p.ID]*bpPeer
maxPeerHeight int64
maxPeerHeight int64 // the biggest reported height
// atomic
numPending int32 // number of requests pending assignment or block response
@@ -78,6 +78,8 @@ type BlockPool struct {
errorsCh chan<- peerError
}
// NewBlockPool returns a new BlockPool with the height equal to start. Block
// requests and errors will be sent to requestsCh and errorsCh accordingly.
func NewBlockPool(start int64, requestsCh chan<- BlockRequest, errorsCh chan<- peerError) *BlockPool {
bp := &BlockPool{
peers: make(map[p2p.ID]*bpPeer),
@@ -93,15 +95,15 @@ func NewBlockPool(start int64, requestsCh chan<- BlockRequest, errorsCh chan<- p
return bp
}
// OnStart implements cmn.Service by spawning the requesters routine and
// recording the pool's start time.
func (pool *BlockPool) OnStart() error {
go pool.makeRequestersRoutine()
pool.startTime = time.Now()
return nil
}
func (pool *BlockPool) OnStop() {}
// Run spawns requesters as needed.
// spawns requesters as needed
func (pool *BlockPool) makeRequestersRoutine() {
for {
if !pool.IsRunning() {
@@ -150,6 +152,8 @@ func (pool *BlockPool) removeTimedoutPeers() {
}
}
// GetStatus returns the pool's height, the number of pending requests,
// and the number of requesters.
func (pool *BlockPool) GetStatus() (height int64, numPending int32, lenRequesters int) {
pool.mtx.Lock()
defer pool.mtx.Unlock()
@@ -157,6 +161,7 @@ func (pool *BlockPool) GetStatus() (height int64, numPending int32, lenRequester
return pool.height, atomic.LoadInt32(&pool.numPending), len(pool.requesters)
}
// IsCaughtUp returns true if this node is caught up, false otherwise.
// TODO: relax conditions, prevent abuse.
func (pool *BlockPool) IsCaughtUp() bool {
pool.mtx.Lock()
@@ -168,9 +173,13 @@ func (pool *BlockPool) IsCaughtUp() bool {
return false
}
// some conditions to determine if we're caught up
receivedBlockOrTimedOut := (pool.height > 0 || time.Since(pool.startTime) > 5*time.Second)
ourChainIsLongestAmongPeers := pool.maxPeerHeight == 0 || pool.height >= pool.maxPeerHeight
// Some conditions to determine if we're caught up.
// Ensures we've either received a block or waited some amount of time,
// and that we're synced to the highest known height.
// Note we use maxPeerHeight - 1 because to sync block H requires block H+1
// to verify the LastCommit.
receivedBlockOrTimedOut := pool.height > 0 || time.Since(pool.startTime) > 5*time.Second
ourChainIsLongestAmongPeers := pool.maxPeerHeight == 0 || pool.height >= (pool.maxPeerHeight-1)
isCaughtUp := receivedBlockOrTimedOut && ourChainIsLongestAmongPeers
return isCaughtUp
}
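
The caught-up heuristic above can be read in isolation; here is a minimal, self-contained sketch of it (a simplified pool holding only the fields the check needs; type and field names are illustrative, not the reactor's actual API):

package main

import (
	"fmt"
	"time"
)

// simplified stand-in for BlockPool: only the fields isCaughtUp reads
type pool struct {
	height        int64 // lowest height still being requested
	maxPeerHeight int64 // biggest height any peer has reported
	startTime     time.Time
}

func (p *pool) isCaughtUp() bool {
	// either at least one block was received, or we've waited long enough
	receivedBlockOrTimedOut := p.height > 0 || time.Since(p.startTime) > 5*time.Second
	// maxPeerHeight-1 because syncing block H requires block H+1 for its LastCommit
	ourChainIsLongestAmongPeers := p.maxPeerHeight == 0 || p.height >= p.maxPeerHeight-1
	return receivedBlockOrTimedOut && ourChainIsLongestAmongPeers
}

func main() {
	p := &pool{height: 99, maxPeerHeight: 100, startTime: time.Now()}
	fmt.Println(p.isCaughtUp()) // true: height 99 >= maxPeerHeight-1
}
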
@@ -252,18 +261,19 @@ func (pool *BlockPool) AddBlock(peerID p2p.ID, block *types.Block, blockSize int
peer.decrPending(blockSize)
}
} else {
// Bad peer?
pool.Logger.Info("invalid peer", "peer", peerID, "blockHeight", block.Height)
pool.sendError(errors.New("invalid peer"), peerID)
}
}
// MaxPeerHeight returns the highest height reported by a peer.
// MaxPeerHeight returns the highest reported height.
func (pool *BlockPool) MaxPeerHeight() int64 {
pool.mtx.Lock()
defer pool.mtx.Unlock()
return pool.maxPeerHeight
}
// Sets the peer's alleged blockchain height.
// SetPeerHeight sets the peer's alleged blockchain height.
func (pool *BlockPool) SetPeerHeight(peerID p2p.ID, height int64) {
pool.mtx.Lock()
defer pool.mtx.Unlock()
@@ -282,6 +292,8 @@ func (pool *BlockPool) SetPeerHeight(peerID p2p.ID, height int64) {
}
}
// RemovePeer removes the peer with peerID from the pool. If there's no peer
// with peerID, the function is a no-op.
func (pool *BlockPool) RemovePeer(peerID p2p.ID) {
pool.mtx.Lock()
defer pool.mtx.Unlock()
@@ -292,10 +304,35 @@ func (pool *BlockPool) RemovePeer(peerID p2p.ID) {
func (pool *BlockPool) removePeer(peerID p2p.ID) {
for _, requester := range pool.requesters {
if requester.getPeerID() == peerID {
requester.redo()
requester.redo(peerID)
}
}
delete(pool.peers, peerID)
peer, ok := pool.peers[peerID]
if ok {
if peer.timeout != nil {
peer.timeout.Stop()
}
delete(pool.peers, peerID)
// If the removed peer's height was the biggest, find the new biggest
// height among the remaining peers and update maxPeerHeight.
if peer.height == pool.maxPeerHeight {
pool.updateMaxPeerHeight()
}
}
}
// If no peers are left, maxPeerHeight is set to 0.
func (pool *BlockPool) updateMaxPeerHeight() {
var max int64
for _, peer := range pool.peers {
if peer.height > max {
max = peer.height
}
}
pool.maxPeerHeight = max
}
// Pick an available peer with at least the given minHeight.
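
A rough sketch of the bookkeeping this removePeer/updateMaxPeerHeight change introduces (stand-in types; the real pool keys peers by p2p.ID and holds a mutex, omitted here):

package main

import "fmt"

type peer struct{ height int64 }

type pool struct {
	peers         map[string]*peer
	maxPeerHeight int64
}

func (p *pool) removePeer(id string) {
	pr, ok := p.peers[id]
	if !ok {
		return // no-op when the peer is unknown
	}
	delete(p.peers, id)
	// rescan only when the removed peer could have held the max
	if pr.height == p.maxPeerHeight {
		p.updateMaxPeerHeight()
	}
}

// recompute the max; it falls back to 0 when no peers remain
func (p *pool) updateMaxPeerHeight() {
	var max int64
	for _, pr := range p.peers {
		if pr.height > max {
			max = pr.height
		}
	}
	p.maxPeerHeight = max
}

func main() {
	p := &pool{peers: map[string]*peer{"a": {5}, "b": {9}}, maxPeerHeight: 9}
	p.removePeer("b")
	fmt.Println(p.maxPeerHeight) // 5
}
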
@@ -326,8 +363,11 @@ func (pool *BlockPool) makeNextRequester() {
defer pool.mtx.Unlock()
nextHeight := pool.height + pool.requestersLen()
if nextHeight > pool.maxPeerHeight {
return
}
request := newBPRequester(pool, nextHeight)
// request.SetLogger(pool.Logger.With("height", nextHeight))
pool.requesters[nextHeight] = request
atomic.AddInt32(&pool.numPending, 1)
@@ -356,7 +396,8 @@ func (pool *BlockPool) sendError(err error, peerID p2p.ID) {
pool.errorsCh <- peerError{err, peerID}
}
// unused by tendermint; left for debugging purposes
// for debugging purposes
//nolint:unused
func (pool *BlockPool) debug() string {
pool.mtx.Lock()
defer pool.mtx.Unlock()
@@ -453,7 +494,7 @@ type bpRequester struct {
pool *BlockPool
height int64
gotBlockCh chan struct{}
redoCh chan struct{}
redoCh chan p2p.ID // redo may be sent multiple times; carry the peer ID to identify repeats
mtx sync.Mutex
peerID p2p.ID
@@ -465,7 +506,7 @@ func newBPRequester(pool *BlockPool, height int64) *bpRequester {
pool: pool,
height: height,
gotBlockCh: make(chan struct{}, 1),
redoCh: make(chan struct{}, 1),
redoCh: make(chan p2p.ID, 1),
peerID: "",
block: nil,
@@ -524,9 +565,9 @@ func (bpr *bpRequester) reset() {
// Tells bpRequester to pick another peer and try again.
// NOTE: Nonblocking, and does nothing if another redo
// was already requested.
func (bpr *bpRequester) redo() {
func (bpr *bpRequester) redo(peerId p2p.ID) {
select {
case bpr.redoCh <- struct{}{}:
case bpr.redoCh <- peerId:
default:
}
}
@@ -565,9 +606,13 @@ OUTER_LOOP:
return
case <-bpr.Quit():
return
case <-bpr.redoCh:
bpr.reset()
continue OUTER_LOOP
case peerID := <-bpr.redoCh:
if peerID == bpr.peerID {
bpr.reset()
continue OUTER_LOOP
} else {
continue WAIT_LOOP
}
case <-bpr.gotBlockCh:
// We got a block!
// Continue the for-loop and wait til Quit.
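
The redoCh change boils down to tagging each redo signal with a peer ID so a requester can ignore signals that refer to a peer it is no longer assigned to. A stand-alone model of that pattern (not the reactor's real types):

package main

import "fmt"

type requester struct {
	peerID string
	redoCh chan string // carries the peer ID so stale redos can be ignored
}

// non-blocking: if a redo is already queued, drop this one
func (r *requester) redo(peerID string) {
	select {
	case r.redoCh <- peerID:
	default:
	}
}

func main() {
	r := &requester{peerID: "peer-1", redoCh: make(chan string, 1)}
	r.redo("peer-2") // stale: sent for a peer this requester no longer uses
	select {
	case id := <-r.redoCh:
		if id == r.peerID {
			fmt.Println("reset and retry with a new peer")
		} else {
			fmt.Println("ignoring redo from", id) // this branch runs
		}
	default:
	}
}
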


@@ -1,12 +1,15 @@
package blockchain
import (
"fmt"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/types"
)
@@ -16,21 +19,59 @@ func init() {
}
type testPeer struct {
id p2p.ID
height int64
id p2p.ID
height int64
inputChan chan inputData //make sure each peer's data is sequential
}
func makePeers(numPeers int, minHeight, maxHeight int64) map[p2p.ID]testPeer {
peers := make(map[p2p.ID]testPeer, numPeers)
type inputData struct {
t *testing.T
pool *BlockPool
request BlockRequest
}
func (p testPeer) runInputRoutine() {
go func() {
for input := range p.inputChan {
p.simulateInput(input)
}
}()
}
// Request desired, pretend we got the block immediately.
func (p testPeer) simulateInput(input inputData) {
block := &types.Block{Header: types.Header{Height: input.request.Height}}
input.pool.AddBlock(input.request.PeerID, block, 123)
// TODO: uncommenting this creates a race which is detected by: https://github.com/golang/go/blob/2bd767b1022dd3254bcec469f0ee164024726486/src/testing/testing.go#L854-L856
// see: https://github.com/tendermint/tendermint/issues/3390#issue-418379890
// input.t.Logf("Added block from peer %v (height: %v)", input.request.PeerID, input.request.Height)
}
type testPeers map[p2p.ID]testPeer
func (ps testPeers) start() {
for _, v := range ps {
v.runInputRoutine()
}
}
func (ps testPeers) stop() {
for _, v := range ps {
close(v.inputChan)
}
}
func makePeers(numPeers int, minHeight, maxHeight int64) testPeers {
peers := make(testPeers, numPeers)
for i := 0; i < numPeers; i++ {
peerID := p2p.ID(cmn.RandStr(12))
height := minHeight + cmn.RandInt63n(maxHeight-minHeight)
peers[peerID] = testPeer{peerID, height}
peers[peerID] = testPeer{peerID, height, make(chan inputData, 10)}
}
return peers
}
func TestBasic(t *testing.T) {
func TestBlockPoolBasic(t *testing.T) {
start := int64(42)
peers := makePeers(10, start+1, 1000)
errorsCh := make(chan peerError, 1000)
@@ -45,6 +86,9 @@ func TestBasic(t *testing.T) {
defer pool.Stop()
peers.start()
defer peers.stop()
// Introduce each peer.
go func() {
for _, peer := range peers {
@@ -77,17 +121,13 @@ func TestBasic(t *testing.T) {
if request.Height == 300 {
return // Done!
}
// Request desired, pretend like we got the block immediately.
go func() {
block := &types.Block{Header: types.Header{Height: request.Height}}
pool.AddBlock(request.PeerID, block, 123)
t.Logf("Added block from peer %v (height: %v)", request.PeerID, request.Height)
}()
peers[request.PeerID].inputChan <- inputData{t, pool, request}
}
}
}
func TestTimeout(t *testing.T) {
func TestBlockPoolTimeout(t *testing.T) {
start := int64(42)
peers := makePeers(10, start+1, 1000)
errorsCh := make(chan peerError, 1000)
@@ -145,3 +185,40 @@ func TestTimeout(t *testing.T) {
}
}
}
func TestBlockPoolRemovePeer(t *testing.T) {
peers := make(testPeers, 10)
for i := 0; i < 10; i++ {
peerID := p2p.ID(fmt.Sprintf("%d", i+1))
height := int64(i + 1)
peers[peerID] = testPeer{peerID, height, make(chan inputData)}
}
requestsCh := make(chan BlockRequest)
errorsCh := make(chan peerError)
pool := NewBlockPool(1, requestsCh, errorsCh)
pool.SetLogger(log.TestingLogger())
err := pool.Start()
require.NoError(t, err)
defer pool.Stop()
// add peers
for peerID, peer := range peers {
pool.SetPeerHeight(peerID, peer.height)
}
assert.EqualValues(t, 10, pool.MaxPeerHeight())
// remove not-existing peer
assert.NotPanics(t, func() { pool.RemovePeer(p2p.ID("Superman")) })
// remove peer with biggest height
pool.RemovePeer(p2p.ID("10"))
assert.EqualValues(t, 9, pool.MaxPeerHeight())
// remove all peers
for peerID := range peers {
pool.RemovePeer(peerID)
}
assert.EqualValues(t, 0, pool.MaxPeerHeight())
}
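
The inputChan added to testPeer serializes each simulated peer's block deliveries. A condensed sketch of that per-peer ordering pattern (generic types, not the actual test harness):

package main

import (
	"fmt"
	"sync"
)

// each simulated peer drains its own channel, so its inputs stay ordered
type testPeer struct {
	id        string
	inputChan chan int // block heights to "deliver"
}

func (p testPeer) run(wg *sync.WaitGroup, out chan<- string) {
	defer wg.Done()
	for h := range p.inputChan {
		out <- fmt.Sprintf("%s delivered block %d", p.id, h)
	}
}

func main() {
	out := make(chan string, 4)
	var wg sync.WaitGroup
	p := testPeer{id: "peer-1", inputChan: make(chan int, 4)}
	wg.Add(1)
	go p.run(&wg, out)
	for h := 1; h <= 3; h++ {
		p.inputChan <- h // per-peer heights arrive in order
	}
	close(p.inputChan)
	wg.Wait()
	close(out)
	for line := range out {
		fmt.Println(line)
	}
}
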


@@ -8,7 +8,6 @@ import (
amino "github.com/tendermint/go-amino"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/p2p"
sm "github.com/tendermint/tendermint/state"
@@ -229,32 +228,40 @@ func (bcR *BlockchainReactor) poolRoutine() {
didProcessCh := make(chan struct{}, 1)
go func() {
for {
select {
case <-bcR.Quit():
return
case <-bcR.pool.Quit():
return
case request := <-bcR.requestsCh:
peer := bcR.Switch.Peers().Get(request.PeerID)
if peer == nil {
continue
}
msgBytes := cdc.MustMarshalBinaryBare(&bcBlockRequestMessage{request.Height})
queued := peer.TrySend(BlockchainChannel, msgBytes)
if !queued {
bcR.Logger.Debug("Send queue is full, drop block request", "peer", peer.ID(), "height", request.Height)
}
case err := <-bcR.errorsCh:
peer := bcR.Switch.Peers().Get(err.peerID)
if peer != nil {
bcR.Switch.StopPeerForError(peer, err)
}
case <-statusUpdateTicker.C:
// ask for status updates
go bcR.BroadcastStatusRequest() // nolint: errcheck
}
}
}()
FOR_LOOP:
for {
select {
case request := <-bcR.requestsCh:
peer := bcR.Switch.Peers().Get(request.PeerID)
if peer == nil {
continue FOR_LOOP // Peer has since been disconnected.
}
msgBytes := cdc.MustMarshalBinaryBare(&bcBlockRequestMessage{request.Height})
queued := peer.TrySend(BlockchainChannel, msgBytes)
if !queued {
// We couldn't make the request, send-queue full.
// The pool handles timeouts, just let it go.
continue FOR_LOOP
}
case err := <-bcR.errorsCh:
peer := bcR.Switch.Peers().Get(err.peerID)
if peer != nil {
bcR.Switch.StopPeerForError(peer, err)
}
case <-statusUpdateTicker.C:
// ask for status updates
go bcR.BroadcastStatusRequest() // nolint: errcheck
case <-switchToConsensusTicker.C:
height, numPending, lenRequesters := bcR.pool.GetStatus()
outbound, inbound, _ := bcR.Switch.NumPeers()
@@ -263,9 +270,12 @@ FOR_LOOP:
if bcR.pool.IsCaughtUp() {
bcR.Logger.Info("Time to switch to consensus reactor!", "height", height)
bcR.pool.Stop()
conR := bcR.Switch.Reactor("CONSENSUS").(consensusReactor)
conR.SwitchToConsensus(state, blocksSynced)
conR, ok := bcR.Switch.Reactor("CONSENSUS").(consensusReactor)
if ok {
conR.SwitchToConsensus(state, blocksSynced)
} else {
// should only happen during testing
}
break FOR_LOOP
}
@@ -298,7 +308,7 @@ FOR_LOOP:
firstParts := first.MakePartSet(types.BlockPartSizeBytes)
firstPartsHeader := firstParts.Header()
firstID := types.BlockID{first.Hash(), firstPartsHeader}
firstID := types.BlockID{Hash: first.Hash(), PartsHeader: firstPartsHeader}
// Finally, verify the first block using the second's commit
// NOTE: we can probably make this more efficient, but note that calling
// first.Hash() doesn't verify the tx contents, so MakePartSet() is
@@ -314,6 +324,13 @@ FOR_LOOP:
// still need to clean up the rest.
bcR.Switch.StopPeerForError(peer, fmt.Errorf("BlockchainReactor validation error: %v", err))
}
peerID2 := bcR.pool.RedoRequest(second.Height)
peer2 := bcR.Switch.Peers().Get(peerID2)
if peer2 != nil && peer2 != peer {
// NOTE: we've already removed the peer's request, but we
// still need to clean up the rest.
bcR.Switch.StopPeerForError(peer2, fmt.Errorf("BlockchainReactor validation error: %v", err))
}
continue FOR_LOOP
} else {
bcR.pool.PopRequest()
@@ -327,8 +344,7 @@ FOR_LOOP:
state, err = bcR.blockExec.ApplyBlock(state, firstID, first)
if err != nil {
// TODO This is bad, are we zombie?
cmn.PanicQ(fmt.Sprintf("Failed to process committed block (%d:%X): %v",
first.Height, first.Hash(), err))
panic(fmt.Sprintf("Failed to process committed block (%d:%X): %v", first.Height, first.Hash(), err))
}
blocksSynced++
@@ -421,11 +437,7 @@ type bcBlockResponseMessage struct {
// ValidateBasic performs basic validation.
func (m *bcBlockResponseMessage) ValidateBasic() error {
if err := m.Block.ValidateBasic(); err != nil {
return err
}
return nil
return m.Block.ValidateBasic()
}
func (m *bcBlockResponseMessage) String() string {
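
One change above replaces an unchecked type assertion on the CONSENSUS reactor with the comma-ok form, so a test double that does not implement consensusReactor no longer panics. A generic sketch of the pattern:

package main

import "fmt"

type reactor interface{ Name() string }

type consensusReactor struct{}

func (consensusReactor) Name() string       { return "CONSENSUS" }
func (consensusReactor) SwitchToConsensus() {}

func main() {
	var r reactor = consensusReactor{}
	// the comma-ok form avoids a panic when the reactor is a test double
	if conR, ok := r.(interface{ SwitchToConsensus() }); ok {
		conR.SwitchToConsensus()
		fmt.Println("switched to consensus")
	} else {
		fmt.Println("not a consensus reactor; expected only in tests")
	}
}
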


@@ -1,72 +1,156 @@
package blockchain
import (
"net"
"os"
"sort"
"testing"
"time"
"github.com/stretchr/testify/assert"
abci "github.com/tendermint/tendermint/abci/types"
cfg "github.com/tendermint/tendermint/config"
cmn "github.com/tendermint/tendermint/libs/common"
dbm "github.com/tendermint/tendermint/libs/db"
"github.com/tendermint/tendermint/libs/log"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/mock"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
tmtime "github.com/tendermint/tendermint/types/time"
)
func makeStateAndBlockStore(logger log.Logger) (sm.State, *BlockStore) {
config := cfg.ResetTestRoot("blockchain_reactor_test")
// blockDB := dbm.NewDebugDB("blockDB", dbm.NewMemDB())
// stateDB := dbm.NewDebugDB("stateDB", dbm.NewMemDB())
var config *cfg.Config
func randGenesisDoc(numValidators int, randPower bool, minPower int64) (*types.GenesisDoc, []types.PrivValidator) {
validators := make([]types.GenesisValidator, numValidators)
privValidators := make([]types.PrivValidator, numValidators)
for i := 0; i < numValidators; i++ {
val, privVal := types.RandValidator(randPower, minPower)
validators[i] = types.GenesisValidator{
PubKey: val.PubKey,
Power: val.VotingPower,
}
privValidators[i] = privVal
}
sort.Sort(types.PrivValidatorsByAddress(privValidators))
return &types.GenesisDoc{
GenesisTime: tmtime.Now(),
ChainID: config.ChainID(),
Validators: validators,
}, privValidators
}
func makeVote(header *types.Header, blockID types.BlockID, valset *types.ValidatorSet, privVal types.PrivValidator) *types.Vote {
addr := privVal.GetPubKey().Address()
idx, _ := valset.GetByAddress(addr)
vote := &types.Vote{
ValidatorAddress: addr,
ValidatorIndex: idx,
Height: header.Height,
Round: 1,
Timestamp: tmtime.Now(),
Type: types.PrecommitType,
BlockID: blockID,
}
privVal.SignVote(header.ChainID, vote)
return vote
}
type BlockchainReactorPair struct {
reactor *BlockchainReactor
app proxy.AppConns
}
func newBlockchainReactor(logger log.Logger, genDoc *types.GenesisDoc, privVals []types.PrivValidator, maxBlockHeight int64) BlockchainReactorPair {
if len(privVals) != 1 {
panic("only support one validator")
}
app := &testApp{}
cc := proxy.NewLocalClientCreator(app)
proxyApp := proxy.NewAppConns(cc)
err := proxyApp.Start()
if err != nil {
panic(cmn.ErrorWrap(err, "error start app"))
}
blockDB := dbm.NewMemDB()
stateDB := dbm.NewMemDB()
blockStore := NewBlockStore(blockDB)
state, err := sm.LoadStateFromDBOrGenesisFile(stateDB, config.GenesisFile())
state, err := sm.LoadStateFromDBOrGenesisDoc(stateDB, genDoc)
if err != nil {
panic(cmn.ErrorWrap(err, "error constructing state from genesis file"))
}
return state, blockStore
}
func newBlockchainReactor(logger log.Logger, maxBlockHeight int64) *BlockchainReactor {
state, blockStore := makeStateAndBlockStore(logger)
// Make the blockchainReactor itself
// Make the BlockchainReactor itself.
// NOTE we have to create and commit the blocks first because
// pool.height is determined from the store.
fastSync := true
var nilApp proxy.AppConnConsensus
blockExec := sm.NewBlockExecutor(dbm.NewMemDB(), log.TestingLogger(), nilApp,
sm.MockMempool{}, sm.MockEvidencePool{})
db := dbm.NewMemDB()
blockExec := sm.NewBlockExecutor(db, log.TestingLogger(), proxyApp.Consensus(),
mock.Mempool{}, sm.MockEvidencePool{})
sm.SaveState(db, state)
// let's add some blocks in
for blockHeight := int64(1); blockHeight <= maxBlockHeight; blockHeight++ {
lastCommit := types.NewCommit(types.BlockID{}, nil)
if blockHeight > 1 {
lastBlockMeta := blockStore.LoadBlockMeta(blockHeight - 1)
lastBlock := blockStore.LoadBlock(blockHeight - 1)
vote := makeVote(&lastBlock.Header, lastBlockMeta.BlockID, state.Validators, privVals[0]).CommitSig()
lastCommit = types.NewCommit(lastBlockMeta.BlockID, []*types.CommitSig{vote})
}
thisBlock := makeBlock(blockHeight, state, lastCommit)
thisParts := thisBlock.MakePartSet(types.BlockPartSizeBytes)
blockID := types.BlockID{thisBlock.Hash(), thisParts.Header()}
state, err = blockExec.ApplyBlock(state, blockID, thisBlock)
if err != nil {
panic(cmn.ErrorWrap(err, "error apply block"))
}
blockStore.SaveBlock(thisBlock, thisParts, lastCommit)
}
bcReactor := NewBlockchainReactor(state.Copy(), blockExec, blockStore, fastSync)
bcReactor.SetLogger(logger.With("module", "blockchain"))
// Next: we need to set a switch in order for peers to be added in
bcReactor.Switch = p2p.NewSwitch(cfg.DefaultP2PConfig(), nil)
// Lastly: let's add some blocks in
for blockHeight := int64(1); blockHeight <= maxBlockHeight; blockHeight++ {
firstBlock := makeBlock(blockHeight, state)
secondBlock := makeBlock(blockHeight+1, state)
firstParts := firstBlock.MakePartSet(types.BlockPartSizeBytes)
blockStore.SaveBlock(firstBlock, firstParts, secondBlock.LastCommit)
}
return bcReactor
return BlockchainReactorPair{bcReactor, proxyApp}
}
func TestNoBlockResponse(t *testing.T) {
maxBlockHeight := int64(20)
config = cfg.ResetTestRoot("blockchain_reactor_test")
defer os.RemoveAll(config.RootDir)
genDoc, privVals := randGenesisDoc(1, false, 30)
bcr := newBlockchainReactor(log.TestingLogger(), maxBlockHeight)
bcr.Start()
defer bcr.Stop()
maxBlockHeight := int64(65)
// Add some peers in
peer := newbcrTestPeer(p2p.ID(cmn.RandStr(12)))
bcr.AddPeer(peer)
reactorPairs := make([]BlockchainReactorPair, 2)
chID := byte(0x01)
reactorPairs[0] = newBlockchainReactor(log.TestingLogger(), genDoc, privVals, maxBlockHeight)
reactorPairs[1] = newBlockchainReactor(log.TestingLogger(), genDoc, privVals, 0)
p2p.MakeConnectedSwitches(config.P2P, 2, func(i int, s *p2p.Switch) *p2p.Switch {
s.AddReactor("BLOCKCHAIN", reactorPairs[i].reactor)
return s
}, p2p.Connect2Switches)
defer func() {
for _, r := range reactorPairs {
r.reactor.Stop()
r.app.Stop()
}
}()
tests := []struct {
height int64
@@ -78,72 +162,101 @@ func TestNoBlockResponse(t *testing.T) {
{100, false},
}
// receive a request message from peer,
// wait for our response to be received on the peer
for _, tt := range tests {
reqBlockMsg := &bcBlockRequestMessage{tt.height}
reqBlockBytes := cdc.MustMarshalBinaryBare(reqBlockMsg)
bcr.Receive(chID, peer, reqBlockBytes)
msg := peer.lastBlockchainMessage()
for {
if reactorPairs[1].reactor.pool.IsCaughtUp() {
break
}
time.Sleep(10 * time.Millisecond)
}
assert.Equal(t, maxBlockHeight, reactorPairs[0].reactor.store.Height())
for _, tt := range tests {
block := reactorPairs[1].reactor.store.LoadBlock(tt.height)
if tt.existent {
if blockMsg, ok := msg.(*bcBlockResponseMessage); !ok {
t.Fatalf("Expected to receive a block response for height %d", tt.height)
} else if blockMsg.Block.Height != tt.height {
t.Fatalf("Expected response to be for height %d, got %d", tt.height, blockMsg.Block.Height)
}
assert.True(t, block != nil)
} else {
if noBlockMsg, ok := msg.(*bcNoBlockResponseMessage); !ok {
t.Fatalf("Expected to receive a no block response for height %d", tt.height)
} else if noBlockMsg.Height != tt.height {
t.Fatalf("Expected response to be for height %d, got %d", tt.height, noBlockMsg.Height)
}
assert.True(t, block == nil)
}
}
}
/*
// NOTE: This is too hard to test without
// an easy way to add test peer to switch
// or without significant refactoring of the module.
// Alternatively we could actually dial a TCP conn but
// that seems extreme.
func TestBadBlockStopsPeer(t *testing.T) {
maxBlockHeight := int64(20)
config = cfg.ResetTestRoot("blockchain_reactor_test")
defer os.RemoveAll(config.RootDir)
genDoc, privVals := randGenesisDoc(1, false, 30)
bcr := newBlockchainReactor(log.TestingLogger(), maxBlockHeight)
bcr.Start()
defer bcr.Stop()
maxBlockHeight := int64(148)
// Add some peers in
peer := newbcrTestPeer(p2p.ID(cmn.RandStr(12)))
otherChain := newBlockchainReactor(log.TestingLogger(), genDoc, privVals, maxBlockHeight)
defer func() {
otherChain.reactor.Stop()
otherChain.app.Stop()
}()
// XXX: This doesn't add the peer to anything,
// so it's hard to check that it's later removed
bcr.AddPeer(peer)
assert.True(t, bcr.Switch.Peers().Size() > 0)
reactorPairs := make([]BlockchainReactorPair, 4)
// send a bad block from the peer
// default blocks already don't have commits, so they should fail
block := bcr.store.LoadBlock(3)
msg := &bcBlockResponseMessage{Block: block}
peer.Send(BlockchainChannel, struct{ BlockchainMessage }{msg})
reactorPairs[0] = newBlockchainReactor(log.TestingLogger(), genDoc, privVals, maxBlockHeight)
reactorPairs[1] = newBlockchainReactor(log.TestingLogger(), genDoc, privVals, 0)
reactorPairs[2] = newBlockchainReactor(log.TestingLogger(), genDoc, privVals, 0)
reactorPairs[3] = newBlockchainReactor(log.TestingLogger(), genDoc, privVals, 0)
ticker := time.NewTicker(time.Millisecond * 10)
timer := time.NewTimer(time.Second * 2)
LOOP:
for {
select {
case <-ticker.C:
if bcr.Switch.Peers().Size() == 0 {
break LOOP
}
case <-timer.C:
t.Fatal("Timed out waiting to disconnect peer")
switches := p2p.MakeConnectedSwitches(config.P2P, 4, func(i int, s *p2p.Switch) *p2p.Switch {
s.AddReactor("BLOCKCHAIN", reactorPairs[i].reactor)
return s
}, p2p.Connect2Switches)
defer func() {
for _, r := range reactorPairs {
r.reactor.Stop()
r.app.Stop()
}
}()
for {
if reactorPairs[3].reactor.pool.IsCaughtUp() {
break
}
time.Sleep(1 * time.Second)
}
// at this point, reactors[0-3] are all caught up to the newest height
assert.Equal(t, 3, reactorPairs[1].reactor.Switch.Peers().Size())
// mark reactorPairs[3] as an invalid peer
reactorPairs[3].reactor.store = otherChain.reactor.store
lastReactorPair := newBlockchainReactor(log.TestingLogger(), genDoc, privVals, 0)
reactorPairs = append(reactorPairs, lastReactorPair)
switches = append(switches, p2p.MakeConnectedSwitches(config.P2P, 1, func(i int, s *p2p.Switch) *p2p.Switch {
s.AddReactor("BLOCKCHAIN", reactorPairs[len(reactorPairs)-1].reactor)
return s
}, p2p.Connect2Switches)...)
for i := 0; i < len(reactorPairs)-1; i++ {
p2p.Connect2Switches(switches, i, len(reactorPairs)-1)
}
for {
if lastReactorPair.reactor.pool.IsCaughtUp() || lastReactorPair.reactor.Switch.Peers().Size() == 0 {
break
}
time.Sleep(1 * time.Second)
}
assert.True(t, lastReactorPair.reactor.Switch.Peers().Size() < len(reactorPairs)-1)
}
*/
//----------------------------------------------
// utility funcs
@@ -155,55 +268,41 @@ func makeTxs(height int64) (txs []types.Tx) {
return txs
}
func makeBlock(height int64, state sm.State) *types.Block {
block, _ := state.MakeBlock(height, makeTxs(height), new(types.Commit), nil, state.Validators.GetProposer().Address)
func makeBlock(height int64, state sm.State, lastCommit *types.Commit) *types.Block {
block, _ := state.MakeBlock(height, makeTxs(height), lastCommit, nil, state.Validators.GetProposer().Address)
return block
}
// The Test peer
type bcrTestPeer struct {
cmn.BaseService
id p2p.ID
ch chan interface{}
type testApp struct {
abci.BaseApplication
}
var _ p2p.Peer = (*bcrTestPeer)(nil)
var _ abci.Application = (*testApp)(nil)
func newbcrTestPeer(id p2p.ID) *bcrTestPeer {
bcr := &bcrTestPeer{
id: id,
ch: make(chan interface{}, 2),
}
bcr.BaseService = *cmn.NewBaseService(nil, "bcrTestPeer", bcr)
return bcr
func (app *testApp) Info(req abci.RequestInfo) (resInfo abci.ResponseInfo) {
return abci.ResponseInfo{}
}
func (tp *bcrTestPeer) lastBlockchainMessage() interface{} { return <-tp.ch }
func (tp *bcrTestPeer) TrySend(chID byte, msgBytes []byte) bool {
var msg BlockchainMessage
err := cdc.UnmarshalBinaryBare(msgBytes, &msg)
if err != nil {
panic(cmn.ErrorWrap(err, "Error while trying to parse a BlockchainMessage"))
}
if _, ok := msg.(*bcStatusResponseMessage); ok {
// Discard status response messages since they skew our results
// We only want to deal with:
// + bcBlockResponseMessage
// + bcNoBlockResponseMessage
} else {
tp.ch <- msg
}
return true
func (app *testApp) BeginBlock(req abci.RequestBeginBlock) abci.ResponseBeginBlock {
return abci.ResponseBeginBlock{}
}
func (tp *bcrTestPeer) Send(chID byte, msgBytes []byte) bool { return tp.TrySend(chID, msgBytes) }
func (tp *bcrTestPeer) NodeInfo() p2p.NodeInfo { return p2p.DefaultNodeInfo{} }
func (tp *bcrTestPeer) Status() p2p.ConnectionStatus { return p2p.ConnectionStatus{} }
func (tp *bcrTestPeer) ID() p2p.ID { return tp.id }
func (tp *bcrTestPeer) IsOutbound() bool { return false }
func (tp *bcrTestPeer) IsPersistent() bool { return true }
func (tp *bcrTestPeer) Get(s string) interface{} { return s }
func (tp *bcrTestPeer) Set(string, interface{}) {}
func (tp *bcrTestPeer) RemoteIP() net.IP { return []byte{127, 0, 0, 1} }
func (tp *bcrTestPeer) OriginalAddr() *p2p.NetAddress { return nil }
func (app *testApp) EndBlock(req abci.RequestEndBlock) abci.ResponseEndBlock {
return abci.ResponseEndBlock{}
}
func (app *testApp) DeliverTx(tx []byte) abci.ResponseDeliverTx {
return abci.ResponseDeliverTx{Tags: []cmn.KVPair{}}
}
func (app *testApp) CheckTx(tx []byte) abci.ResponseCheckTx {
return abci.ResponseCheckTx{}
}
func (app *testApp) Commit() abci.ResponseCommit {
return abci.ResponseCommit{}
}
func (app *testApp) Query(reqQuery abci.RequestQuery) (resQuery abci.ResponseQuery) {
return
}


@@ -144,14 +144,14 @@ func (bs *BlockStore) LoadSeenCommit(height int64) *types.Commit {
// most recent height. Otherwise they'd stall at H-1.
func (bs *BlockStore) SaveBlock(block *types.Block, blockParts *types.PartSet, seenCommit *types.Commit) {
if block == nil {
cmn.PanicSanity("BlockStore can only save a non-nil block")
panic("BlockStore can only save a non-nil block")
}
height := block.Height
if g, w := height, bs.Height()+1; g != w {
cmn.PanicSanity(fmt.Sprintf("BlockStore can only save contiguous blocks. Wanted %v, got %v", w, g))
panic(fmt.Sprintf("BlockStore can only save contiguous blocks. Wanted %v, got %v", w, g))
}
if !blockParts.IsComplete() {
cmn.PanicSanity(fmt.Sprintf("BlockStore can only save complete block part sets"))
panic(fmt.Sprintf("BlockStore can only save complete block part sets"))
}
// Save block meta
@@ -188,7 +188,7 @@ func (bs *BlockStore) SaveBlock(block *types.Block, blockParts *types.PartSet, s
func (bs *BlockStore) saveBlockPart(height int64, index int, part *types.Part) {
if height != bs.Height()+1 {
cmn.PanicSanity(fmt.Sprintf("BlockStore can only save contiguous blocks. Wanted %v, got %v", bs.Height()+1, height))
panic(fmt.Sprintf("BlockStore can only save contiguous blocks. Wanted %v, got %v", bs.Height()+1, height))
}
partBytes := cdc.MustMarshalBinaryBare(part)
bs.db.Set(calcBlockPartKey(height, index), partBytes)
@@ -224,7 +224,7 @@ type BlockStoreStateJSON struct {
func (bsj BlockStoreStateJSON) Save(db dbm.DB) {
bytes, err := cdc.MarshalJSON(bsj)
if err != nil {
cmn.PanicSanity(fmt.Sprintf("Could not marshal state bytes: %v", err))
panic(fmt.Sprintf("Could not marshal state bytes: %v", err))
}
db.SetSync(blockStoreKey, bytes)
}


@@ -3,19 +3,48 @@ package blockchain
import (
"bytes"
"fmt"
"os"
"runtime/debug"
"strings"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
cfg "github.com/tendermint/tendermint/config"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/libs/db"
dbm "github.com/tendermint/tendermint/libs/db"
"github.com/tendermint/tendermint/libs/log"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
tmtime "github.com/tendermint/tendermint/types/time"
)
// A cleanupFunc cleans up any config / test files created for a particular
// test.
type cleanupFunc func()
// make a Commit with a single vote containing just the height and a timestamp
func makeTestCommit(height int64, timestamp time.Time) *types.Commit {
commitSigs := []*types.CommitSig{{Height: height, Timestamp: timestamp}}
return types.NewCommit(types.BlockID{}, commitSigs)
}
func makeStateAndBlockStore(logger log.Logger) (sm.State, *BlockStore, cleanupFunc) {
config := cfg.ResetTestRoot("blockchain_reactor_test")
// blockDB := dbm.NewDebugDB("blockDB", dbm.NewMemDB())
// stateDB := dbm.NewDebugDB("stateDB", dbm.NewMemDB())
blockDB := dbm.NewMemDB()
stateDB := dbm.NewMemDB()
state, err := sm.LoadStateFromDBOrGenesisFile(stateDB, config.GenesisFile())
if err != nil {
panic(cmn.ErrorWrap(err, "error constructing state from genesis file"))
}
return state, NewBlockStore(blockDB), func() { os.RemoveAll(config.RootDir) }
}
func TestLoadBlockStoreStateJSON(t *testing.T) {
db := db.NewMemDB()
@@ -63,20 +92,32 @@ func freshBlockStore() (*BlockStore, db.DB) {
}
var (
state, _ = makeStateAndBlockStore(log.NewTMLogger(new(bytes.Buffer)))
block = makeBlock(1, state)
partSet = block.MakePartSet(2)
part1 = partSet.GetPart(0)
part2 = partSet.GetPart(1)
seenCommit1 = &types.Commit{Precommits: []*types.Vote{{Height: 10,
Timestamp: tmtime.Now()}}}
state sm.State
block *types.Block
partSet *types.PartSet
part1 *types.Part
part2 *types.Part
seenCommit1 *types.Commit
)
func TestMain(m *testing.M) {
var cleanup cleanupFunc
state, _, cleanup = makeStateAndBlockStore(log.NewTMLogger(new(bytes.Buffer)))
block = makeBlock(1, state, new(types.Commit))
partSet = block.MakePartSet(2)
part1 = partSet.GetPart(0)
part2 = partSet.GetPart(1)
seenCommit1 = makeTestCommit(10, tmtime.Now())
code := m.Run()
cleanup()
os.Exit(code)
}
// TODO: This test should be simplified ...
func TestBlockStoreSaveLoadBlock(t *testing.T) {
state, bs := makeStateAndBlockStore(log.NewTMLogger(new(bytes.Buffer)))
state, bs, cleanup := makeStateAndBlockStore(log.NewTMLogger(new(bytes.Buffer)))
defer cleanup()
require.Equal(t, bs.Height(), int64(0), "initially the height should be zero")
// check there are no blocks at various heights
@@ -88,10 +129,9 @@ func TestBlockStoreSaveLoadBlock(t *testing.T) {
}
// save a block
block := makeBlock(bs.Height()+1, state)
block := makeBlock(bs.Height()+1, state, new(types.Commit))
validPartSet := block.MakePartSet(2)
seenCommit := &types.Commit{Precommits: []*types.Vote{{Height: 10,
Timestamp: tmtime.Now()}}}
seenCommit := makeTestCommit(10, tmtime.Now())
bs.SaveBlock(block, partSet, seenCommit)
require.Equal(t, bs.Height(), block.Header.Height, "expecting the new height to be changed")
@@ -110,8 +150,7 @@ func TestBlockStoreSaveLoadBlock(t *testing.T) {
// End of setup, test data
commitAtH10 := &types.Commit{Precommits: []*types.Vote{{Height: 10,
Timestamp: tmtime.Now()}}}
commitAtH10 := makeTestCommit(10, tmtime.Now())
tuples := []struct {
block *types.Block
parts *types.PartSet
@@ -294,7 +333,7 @@ func TestLoadBlockPart(t *testing.T) {
gotPart, _, panicErr := doFn(loadPart)
require.Nil(t, panicErr, "an existent and proper block should not panic")
require.Nil(t, res, "a properly saved block should return a proper block")
require.Equal(t, gotPart.(*types.Part).Hash(), part1.Hash(),
require.Equal(t, gotPart.(*types.Part), part1,
"expecting successful retrieval of previously saved block")
}
@@ -329,14 +368,13 @@ func TestLoadBlockMeta(t *testing.T) {
}
func TestBlockFetchAtHeight(t *testing.T) {
state, bs := makeStateAndBlockStore(log.NewTMLogger(new(bytes.Buffer)))
state, bs, cleanup := makeStateAndBlockStore(log.NewTMLogger(new(bytes.Buffer)))
defer cleanup()
require.Equal(t, bs.Height(), int64(0), "initially the height should be zero")
block := makeBlock(bs.Height()+1, state)
block := makeBlock(bs.Height()+1, state, new(types.Commit))
partSet := block.MakePartSet(2)
seenCommit := &types.Commit{Precommits: []*types.Vote{{Height: 10,
Timestamp: tmtime.Now()}}}
seenCommit := makeTestCommit(10, tmtime.Now())
bs.SaveBlock(block, partSet, seenCommit)
require.Equal(t, bs.Height(), block.Header.Height, "expecting the new height to be changed")
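
The TestMain/cleanup pattern introduced above, reduced to a self-contained sketch (a temp dir stands in for the real test config root; names are illustrative):

package store

import (
	"os"
	"testing"
)

// a cleanupFunc tears down whatever the package fixtures created
type cleanupFunc func()

var fixtureDir string

func setup() (string, cleanupFunc) {
	dir, err := os.MkdirTemp("", "store_test")
	if err != nil {
		panic(err)
	}
	return dir, func() { os.RemoveAll(dir) }
}

// TestMain runs once for the package: build fixtures, run all tests,
// clean up, then exit with the tests' status code
func TestMain(m *testing.M) {
	var cleanup cleanupFunc
	fixtureDir, cleanup = setup()
	code := m.Run()
	cleanup()
	os.Exit(code)
}

func TestFixtureDirExists(t *testing.T) {
	if _, err := os.Stat(fixtureDir); err != nil {
		t.Fatal(err)
	}
}
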


@@ -1,7 +1,7 @@
package blockchain
import (
"github.com/tendermint/go-amino"
amino "github.com/tendermint/go-amino"
"github.com/tendermint/tendermint/types"
)


@@ -3,6 +3,7 @@ package main
import (
"flag"
"os"
"time"
"github.com/tendermint/tendermint/crypto/ed25519"
cmn "github.com/tendermint/tendermint/libs/common"
@@ -13,9 +14,10 @@ import (
func main() {
var (
addr = flag.String("addr", ":26659", "Address of client to connect to")
chainID = flag.String("chain-id", "mychain", "chain id")
privValPath = flag.String("priv", "", "priv val file path")
addr = flag.String("addr", ":26659", "Address of client to connect to")
chainID = flag.String("chain-id", "mychain", "chain id")
privValKeyPath = flag.String("priv-key", "", "priv val key file path")
privValStatePath = flag.String("priv-state", "", "priv val state file path")
logger = log.NewTMLogger(
log.NewSyncWriter(os.Stdout),
@@ -27,27 +29,39 @@ func main() {
"Starting private validator",
"addr", *addr,
"chainID", *chainID,
"privPath", *privValPath,
"privKeyPath", *privValKeyPath,
"privStatePath", *privValStatePath,
)
pv := privval.LoadFilePV(*privValPath)
pv := privval.LoadFilePV(*privValKeyPath, *privValStatePath)
rs := privval.NewRemoteSigner(
logger,
*chainID,
*addr,
pv,
ed25519.GenPrivKey(),
)
var dialer privval.SocketDialer
protocol, address := cmn.ProtocolAndAddress(*addr)
switch protocol {
case "unix":
dialer = privval.DialUnixFn(address)
case "tcp":
connTimeout := 3 * time.Second // TODO
dialer = privval.DialTCPFn(address, connTimeout, ed25519.GenPrivKey())
default:
logger.Error("Unknown protocol", "protocol", protocol)
os.Exit(1)
}
rs := privval.NewSignerServiceEndpoint(logger, *chainID, pv, dialer)
err := rs.Start()
if err != nil {
panic(err)
}
cmn.TrapSignal(func() {
// Stop upon receiving SIGTERM or CTRL-C.
cmn.TrapSignal(logger, func() {
err := rs.Stop()
if err != nil {
panic(err)
}
})
// Run forever.
select {}
}
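
The dialer selection added above amounts to parsing a "protocol://address" pair and branching on the protocol. A stdlib-only sketch (protocolAndAddress here is a stand-in for cmn.ProtocolAndAddress, and the dialers are simplified):

package main

import (
	"fmt"
	"net"
	"strings"
	"time"
)

type dialer func() (net.Conn, error)

// minimal stand-in for cmn.ProtocolAndAddress: split "proto://addr",
// defaulting to tcp when no scheme is given
func protocolAndAddress(listenAddr string) (string, string) {
	protocol, address := "tcp", listenAddr
	if parts := strings.SplitN(listenAddr, "://", 2); len(parts) == 2 {
		protocol, address = parts[0], parts[1]
	}
	return protocol, address
}

func dialerForAddr(addr string) (dialer, error) {
	protocol, address := protocolAndAddress(addr)
	switch protocol {
	case "unix":
		return func() (net.Conn, error) { return net.Dial("unix", address) }, nil
	case "tcp":
		return func() (net.Conn, error) {
			return net.DialTimeout("tcp", address, 3*time.Second)
		}, nil
	default:
		return nil, fmt.Errorf("unknown protocol %q", protocol)
	}
}

func main() {
	if _, err := dialerForAddr("foo://bar"); err != nil {
		fmt.Println(err) // unknown protocol "foo"
	}
}
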


@@ -17,7 +17,7 @@ var GenValidatorCmd = &cobra.Command{
}
func genValidator(cmd *cobra.Command, args []string) {
pv := privval.GenFilePV("")
pv := privval.GenFilePV("", "")
jsbz, err := cdc.MarshalJSON(pv)
if err != nil {
panic(err)


@@ -4,7 +4,6 @@ import (
"fmt"
"github.com/spf13/cobra"
cfg "github.com/tendermint/tendermint/config"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/p2p"
@@ -26,15 +25,18 @@ func initFiles(cmd *cobra.Command, args []string) error {
func initFilesWithConfig(config *cfg.Config) error {
// private validator
privValFile := config.PrivValidatorFile()
privValKeyFile := config.PrivValidatorKeyFile()
privValStateFile := config.PrivValidatorStateFile()
var pv *privval.FilePV
if cmn.FileExists(privValFile) {
pv = privval.LoadFilePV(privValFile)
logger.Info("Found private validator", "path", privValFile)
if cmn.FileExists(privValKeyFile) {
pv = privval.LoadFilePV(privValKeyFile, privValStateFile)
logger.Info("Found private validator", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
} else {
pv = privval.GenFilePV(privValFile)
pv = privval.GenFilePV(privValKeyFile, privValStateFile)
pv.Save()
logger.Info("Generated private validator", "path", privValFile)
logger.Info("Generated private validator", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
}
nodeKeyFile := config.NodeKeyFile()
@@ -57,9 +59,10 @@ func initFilesWithConfig(config *cfg.Config) error {
GenesisTime: tmtime.Now(),
ConsensusParams: types.DefaultConsensusParams(),
}
key := pv.GetPubKey()
genDoc.Validators = []types.GenesisValidator{{
Address: pv.GetPubKey().Address(),
PubKey: pv.GetPubKey(),
Address: key.Address(),
PubKey: key,
Power: 10,
}}


@@ -43,7 +43,7 @@ func init() {
LiteCmd.Flags().IntVar(&cacheSize, "cache-size", 10, "Specify the memory trust store cache size")
}
func ensureAddrHasSchemeOrDefaultToTCP(addr string) (string, error) {
func EnsureAddrHasSchemeOrDefaultToTCP(addr string) (string, error) {
u, err := url.Parse(addr)
if err != nil {
return "", err
@@ -59,11 +59,16 @@ func ensureAddrHasSchemeOrDefaultToTCP(addr string) (string, error) {
}
func runProxy(cmd *cobra.Command, args []string) error {
nodeAddr, err := ensureAddrHasSchemeOrDefaultToTCP(nodeAddr)
// Stop upon receiving SIGTERM or CTRL-C.
cmn.TrapSignal(logger, func() {
// TODO: close up shop
})
nodeAddr, err := EnsureAddrHasSchemeOrDefaultToTCP(nodeAddr)
if err != nil {
return err
}
listenAddr, err := ensureAddrHasSchemeOrDefaultToTCP(listenAddr)
listenAddr, err := EnsureAddrHasSchemeOrDefaultToTCP(listenAddr)
if err != nil {
return err
}
@@ -86,9 +91,6 @@ func runProxy(cmd *cobra.Command, args []string) error {
return cmn.ErrorWrap(err, "starting proxy")
}
cmn.TrapSignal(func() {
// TODO: close up shop
})
return nil
// Run forever
select {}
}


@@ -5,6 +5,7 @@ import (
"github.com/spf13/cobra"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/privval"
)
@@ -17,6 +18,12 @@ var ResetAllCmd = &cobra.Command{
Run: resetAll,
}
var keepAddrBook bool
func init() {
ResetAllCmd.Flags().BoolVar(&keepAddrBook, "keep-addr-book", false, "Keep the address book intact")
}
// ResetPrivValidatorCmd resets the private validator files.
var ResetPrivValidatorCmd = &cobra.Command{
Use: "unsafe_reset_priv_validator",
@@ -27,36 +34,45 @@ var ResetPrivValidatorCmd = &cobra.Command{
// XXX: this is totally unsafe.
// it's only suitable for testnets.
func resetAll(cmd *cobra.Command, args []string) {
ResetAll(config.DBDir(), config.P2P.AddrBookFile(), config.PrivValidatorFile(), logger)
ResetAll(config.DBDir(), config.P2P.AddrBookFile(), config.PrivValidatorKeyFile(),
config.PrivValidatorStateFile(), logger)
}
// XXX: this is totally unsafe.
// it's only suitable for testnets.
func resetPrivValidator(cmd *cobra.Command, args []string) {
resetFilePV(config.PrivValidatorFile(), logger)
resetFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile(), logger)
}
// ResetAll removes the privValidator and address book files plus all data.
// ResetAll removes the address book files plus all data, and resets the privValidator data.
// Exported so other CLI tools can use it.
func ResetAll(dbDir, addrBookFile, privValFile string, logger log.Logger) {
resetFilePV(privValFile, logger)
removeAddrBook(addrBookFile, logger)
func ResetAll(dbDir, addrBookFile, privValKeyFile, privValStateFile string, logger log.Logger) {
if keepAddrBook {
logger.Info("The address book remains intact")
} else {
removeAddrBook(addrBookFile, logger)
}
if err := os.RemoveAll(dbDir); err == nil {
logger.Info("Removed all blockchain history", "dir", dbDir)
} else {
logger.Error("Error removing all blockchain history", "dir", dbDir, "err", err)
}
// recreate the dbDir since the privVal state needs to live there
cmn.EnsureDir(dbDir, 0700)
resetFilePV(privValKeyFile, privValStateFile, logger)
}
func resetFilePV(privValFile string, logger log.Logger) {
if _, err := os.Stat(privValFile); err == nil {
pv := privval.LoadFilePV(privValFile)
func resetFilePV(privValKeyFile, privValStateFile string, logger log.Logger) {
if _, err := os.Stat(privValKeyFile); err == nil {
pv := privval.LoadFilePVEmptyState(privValKeyFile, privValStateFile)
pv.Reset()
logger.Info("Reset private validator file to genesis state", "file", privValFile)
logger.Info("Reset private validator file to genesis state", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
} else {
pv := privval.GenFilePV(privValFile)
pv := privval.GenFilePV(privValKeyFile, privValStateFile)
pv.Save()
logger.Info("Generated private validator file", "file", privValFile)
logger.Info("Generated private validator file", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
}
}
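
The split of priv_validator.json into a key file (under config/) and a state file (under data/) means wiping the data directory resets signing state without destroying key material. A hypothetical load-or-generate sketch of that layout (helper and paths invented for illustration):

package main

import (
	"fmt"
	"os"
)

// hypothetical two-file validator: immutable key material plus mutable sign state
type filePV struct {
	keyPath   string // e.g. config/priv_validator_key.json, survives a reset
	statePath string // e.g. data/priv_validator_state.json, wiped with the data dir
}

func loadOrGenFilePV(keyPath, statePath string) filePV {
	if _, err := os.Stat(keyPath); err == nil {
		fmt.Println("found private validator", "keyFile", keyPath, "stateFile", statePath)
	} else {
		fmt.Println("generating private validator", "keyFile", keyPath, "stateFile", statePath)
		// a real implementation would write a fresh key file and an empty state file here
	}
	return filePV{keyPath, statePath}
}

func main() {
	loadOrGenFilePV("config/priv_validator_key.json", "data/priv_validator_state.json")
}
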


@@ -54,6 +54,9 @@ var RootCmd = &cobra.Command{
if err != nil {
return err
}
if config.LogFormat == cfg.LogFormatJSON {
logger = log.NewTMJSONLogger(log.NewSyncWriter(os.Stdout))
}
logger, err = tmflags.ParseLogLevel(config.LogLevel, logger, cfg.DefaultLogLevel())
if err != nil {
return err


@@ -22,10 +22,6 @@ var (
defaultRoot = os.ExpandEnv("$HOME/.some/test/dir")
)
const (
rootName = "root"
)
// clearConfig clears env vars, the given root dir, and resets viper.
func clearConfig(dir string) {
if err := os.Unsetenv("TMHOME"); err != nil {


@@ -2,12 +2,10 @@ package commands
import (
"fmt"
"os"
"os/signal"
"syscall"
"github.com/spf13/cobra"
cmn "github.com/tendermint/tendermint/libs/common"
nm "github.com/tendermint/tendermint/node"
)
@@ -24,7 +22,7 @@ func AddNodeFlags(cmd *cobra.Command) {
cmd.Flags().Bool("fast_sync", config.FastSync, "Fast blockchain syncing")
// abci flags
cmd.Flags().String("proxy_app", config.ProxyApp, "Proxy app address, or 'nilapp' or 'kvstore' for local testing.")
cmd.Flags().String("proxy_app", config.ProxyApp, "Proxy app address, or one of: 'kvstore', 'persistent_kvstore', 'counter', 'counter_serial' or 'noop' for local testing.")
cmd.Flags().String("abci", config.ABCI, "Specify abci transport (socket | grpc)")
// rpc flags
@@ -57,28 +55,20 @@ func NewRunNodeCmd(nodeProvider nm.NodeProvider) *cobra.Command {
return fmt.Errorf("Failed to create node: %v", err)
}
// Stop upon receiving SIGTERM or CTRL-C
c := make(chan os.Signal, 1)
signal.Notify(c, os.Interrupt, syscall.SIGTERM)
go func() {
for sig := range c {
logger.Error(fmt.Sprintf("captured %v, exiting...", sig))
if n.IsRunning() {
n.Stop()
}
os.Exit(1)
// Stop upon receiving SIGTERM or CTRL-C.
cmn.TrapSignal(logger, func() {
if n.IsRunning() {
n.Stop()
}
}()
})
if err := n.Start(); err != nil {
return fmt.Errorf("Failed to start node: %v", err)
}
logger.Info("Started node", "nodeInfo", n.Switch().NodeInfo())
// Run forever
// Run forever.
select {}
return nil
},
}
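
The manual signal.Notify block removed above is what cmn.TrapSignal now encapsulates. Roughly, the helper does something like this (a stdlib approximation, not the library's exact code):

package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

// minimal stand-in for cmn.TrapSignal: run cb once on SIGTERM/CTRL-C, then exit
func trapSignal(cb func()) {
	c := make(chan os.Signal, 1)
	signal.Notify(c, os.Interrupt, syscall.SIGTERM)
	go func() {
		sig := <-c
		fmt.Printf("captured %v, exiting...\n", sig)
		if cb != nil {
			cb()
		}
		os.Exit(0)
	}()
}

func main() {
	trapSignal(func() { fmt.Println("stopping node") })
	// Run forever; the trap goroutine handles shutdown.
	select {}
}
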


@@ -16,12 +16,11 @@ var ShowNodeIDCmd = &cobra.Command{
}
func showNodeID(cmd *cobra.Command, args []string) error {
nodeKey, err := p2p.LoadNodeKey(config.NodeKeyFile())
if err != nil {
return err
}
fmt.Println(nodeKey.ID())
fmt.Println(nodeKey.ID())
return nil
}


@@ -3,8 +3,10 @@ package commands
import (
"fmt"
"github.com/pkg/errors"
"github.com/spf13/cobra"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/privval"
)
@@ -12,11 +14,21 @@ import (
var ShowValidatorCmd = &cobra.Command{
Use: "show_validator",
Short: "Show this node's validator info",
Run: showValidator,
RunE: showValidator,
}
func showValidator(cmd *cobra.Command, args []string) {
privValidator := privval.LoadOrGenFilePV(config.PrivValidatorFile())
pubKeyJSONBytes, _ := cdc.MarshalJSON(privValidator.GetPubKey())
fmt.Println(string(pubKeyJSONBytes))
func showValidator(cmd *cobra.Command, args []string) error {
keyFilePath := config.PrivValidatorKeyFile()
if !cmn.FileExists(keyFilePath) {
return fmt.Errorf("private validator file %s does not exist", keyFilePath)
}
pv := privval.LoadFilePV(keyFilePath, config.PrivValidatorStateFile())
bz, err := cdc.MarshalJSON(pv.GetPubKey())
if err != nil {
return errors.Wrap(err, "failed to marshal private validator pubkey")
}
fmt.Println(string(bz))
return nil
}
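
Switching the command from Run to RunE lets errors propagate to cobra instead of being printed ad hoc. A minimal sketch of the shape (the key-file path and body are illustrative):

package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

// RunE (instead of Run) lets the command return errors for cobra to report
var showValidatorCmd = &cobra.Command{
	Use:   "show_validator",
	Short: "Show this node's validator info",
	RunE: func(cmd *cobra.Command, args []string) error {
		keyFile := "config/priv_validator_key.json" // illustrative path
		if _, err := os.Stat(keyFile); err != nil {
			return fmt.Errorf("private validator file %s does not exist", keyFile)
		}
		fmt.Println("would print the validator pubkey here")
		return nil
	},
}

func main() {
	if err := showValidatorCmd.Execute(); err != nil {
		os.Exit(1)
	}
}
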


@@ -8,6 +8,7 @@ import (
"strings"
"github.com/spf13/cobra"
"github.com/spf13/viper"
cfg "github.com/tendermint/tendermint/config"
cmn "github.com/tendermint/tendermint/libs/common"
@@ -20,13 +21,17 @@ import (
var (
nValidators int
nNonValidators int
configFile string
outputDir string
nodeDirPrefix string
populatePersistentPeers bool
hostnamePrefix string
hostnameSuffix string
startingIPAddress string
hostnames []string
p2pPort int
randomMonikers bool
)
const (
@@ -36,6 +41,8 @@ const (
func init() {
TestnetFilesCmd.Flags().IntVar(&nValidators, "v", 4,
"Number of validators to initialize the testnet with")
TestnetFilesCmd.Flags().StringVar(&configFile, "config", "",
"Config file to use (note some options may be overwritten)")
TestnetFilesCmd.Flags().IntVar(&nNonValidators, "n", 0,
"Number of non-validators to initialize the testnet with")
TestnetFilesCmd.Flags().StringVar(&outputDir, "o", "./mytestnet",
@@ -46,11 +53,17 @@ func init() {
TestnetFilesCmd.Flags().BoolVar(&populatePersistentPeers, "populate-persistent-peers", true,
"Update config of each node with the list of persistent peers build using either hostname-prefix or starting-ip-address")
TestnetFilesCmd.Flags().StringVar(&hostnamePrefix, "hostname-prefix", "node",
"Hostname prefix (node results in persistent peers list ID0@node0:26656, ID1@node1:26656, ...)")
"Hostname prefix (\"node\" results in persistent peers list ID0@node0:26656, ID1@node1:26656, ...)")
TestnetFilesCmd.Flags().StringVar(&hostnameSuffix, "hostname-suffix", "",
"Hostname suffix (\".xyz.com\" results in persistent peers list ID0@node0.xyz.com:26656, ID1@node1.xyz.com:26656, ...)")
TestnetFilesCmd.Flags().StringVar(&startingIPAddress, "starting-ip-address", "",
"Starting IP address (192.168.0.1 results in persistent peers list ID0@192.168.0.1:26656, ID1@192.168.0.2:26656, ...)")
"Starting IP address (\"192.168.0.1\" results in persistent peers list ID0@192.168.0.1:26656, ID1@192.168.0.2:26656, ...)")
TestnetFilesCmd.Flags().StringArrayVar(&hostnames, "hostname", []string{},
"Manually override all hostnames of validators and non-validators (use --hostname multiple times for multiple hosts)")
TestnetFilesCmd.Flags().IntVar(&p2pPort, "p2p-port", 26656,
"P2P Port")
TestnetFilesCmd.Flags().BoolVar(&randomMonikers, "random-monikers", false,
"Randomize the moniker for each generated node")
}
// TestnetFilesCmd allows initialisation of files for a Tendermint testnet.
@@ -72,7 +85,29 @@ Example:
}
func testnetFiles(cmd *cobra.Command, args []string) error {
if len(hostnames) > 0 && len(hostnames) != (nValidators+nNonValidators) {
return fmt.Errorf(
"testnet needs precisely %d hostnames (number of validators plus non-validators) if --hostname parameter is used",
nValidators+nNonValidators,
)
}
config := cfg.DefaultConfig()
// overwrite default config if set and valid
if configFile != "" {
viper.SetConfigFile(configFile)
if err := viper.ReadInConfig(); err != nil {
return err
}
if err := viper.Unmarshal(config); err != nil {
return err
}
if err := config.ValidateBasic(); err != nil {
return err
}
}
genVals := make([]types.GenesisValidator, nValidators)
for i := 0; i < nValidators; i++ {
@@ -85,11 +120,18 @@ func testnetFiles(cmd *cobra.Command, args []string) error {
_ = os.RemoveAll(outputDir)
return err
}
err = os.MkdirAll(filepath.Join(nodeDir, "data"), nodeDirPerm)
if err != nil {
_ = os.RemoveAll(outputDir)
return err
}
initFilesWithConfig(config)
pvFile := filepath.Join(nodeDir, config.BaseConfig.PrivValidator)
pv := privval.LoadFilePV(pvFile)
pvKeyFile := filepath.Join(nodeDir, config.BaseConfig.PrivValidatorKey)
pvStateFile := filepath.Join(nodeDir, config.BaseConfig.PrivValidatorState)
pv := privval.LoadFilePV(pvKeyFile, pvStateFile)
genVals[i] = types.GenesisValidator{
Address: pv.GetPubKey().Address(),
PubKey: pv.GetPubKey(),
@@ -108,6 +150,12 @@ func testnetFiles(cmd *cobra.Command, args []string) error {
return err
}
err = os.MkdirAll(filepath.Join(nodeDir, "data"), nodeDirPerm)
if err != nil {
_ = os.RemoveAll(outputDir)
return err
}
initFilesWithConfig(config)
}
@@ -127,58 +175,84 @@ func testnetFiles(cmd *cobra.Command, args []string) error {
}
}
// Gather persistent peer addresses.
var (
persistentPeers string
err error
)
if populatePersistentPeers {
err := populatePersistentPeersInConfigAndWriteIt(config)
persistentPeers, err = persistentPeersString(config)
if err != nil {
_ = os.RemoveAll(outputDir)
return err
}
}
// Overwrite default config.
for i := 0; i < nValidators+nNonValidators; i++ {
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
config.SetRoot(nodeDir)
config.P2P.AddrBookStrict = false
config.P2P.AllowDuplicateIP = true
if populatePersistentPeers {
config.P2P.PersistentPeers = persistentPeers
}
config.Moniker = moniker(i)
cfg.WriteConfigFile(filepath.Join(nodeDir, "config", "config.toml"), config)
}
fmt.Printf("Successfully initialized %v node directories\n", nValidators+nNonValidators)
return nil
}
func hostnameOrIP(i int) string {
if startingIPAddress != "" {
ip := net.ParseIP(startingIPAddress)
ip = ip.To4()
if ip == nil {
fmt.Printf("%v: non ipv4 address\n", startingIPAddress)
os.Exit(1)
}
for j := 0; j < i; j++ {
ip[3]++
}
return ip.String()
if len(hostnames) > 0 && i < len(hostnames) {
return hostnames[i]
}
if startingIPAddress == "" {
return fmt.Sprintf("%s%d%s", hostnamePrefix, i, hostnameSuffix)
}
ip := net.ParseIP(startingIPAddress)
ip = ip.To4()
if ip == nil {
fmt.Printf("%v: non ipv4 address\n", startingIPAddress)
os.Exit(1)
}
return fmt.Sprintf("%s%d", hostnamePrefix, i)
for j := 0; j < i; j++ {
ip[3]++
}
return ip.String()
}
func populatePersistentPeersInConfigAndWriteIt(config *cfg.Config) error {
func persistentPeersString(config *cfg.Config) (string, error) {
persistentPeers := make([]string, nValidators+nNonValidators)
for i := 0; i < nValidators+nNonValidators; i++ {
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
config.SetRoot(nodeDir)
nodeKey, err := p2p.LoadNodeKey(config.NodeKeyFile())
if err != nil {
return err
return "", err
}
persistentPeers[i] = p2p.IDAddressString(nodeKey.ID(), fmt.Sprintf("%s:%d", hostnameOrIP(i), p2pPort))
}
persistentPeersList := strings.Join(persistentPeers, ",")
for i := 0; i < nValidators+nNonValidators; i++ {
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
config.SetRoot(nodeDir)
config.P2P.PersistentPeers = persistentPeersList
config.P2P.AddrBookStrict = false
// overwrite default config
cfg.WriteConfigFile(filepath.Join(nodeDir, "config", "config.toml"), config)
}
return nil
return strings.Join(persistentPeers, ","), nil
}
func moniker(i int) string {
if randomMonikers {
return randomMoniker()
}
if len(hostnames) > 0 && i < len(hostnames) {
return hostnames[i]
}
if startingIPAddress == "" {
return fmt.Sprintf("%s%d%s", hostnamePrefix, i, hostnameSuffix)
}
return randomMoniker()
}
func randomMoniker() string {
return cmn.HexBytes(cmn.RandBytes(8)).String()
}
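
The hostname resolution above applies a precedence: an explicit --hostname list wins, then hostname-prefix/suffix, then a starting IPv4 address incremented per node index. A condensed sketch with the flags passed as parameters (note the last-octet increment does not carry past .255):

package main

import (
	"fmt"
	"net"
)

// precedence mirrors the command: explicit hostnames win, then prefix/suffix,
// then a starting IPv4 incremented per node index
func hostnameOrIP(i int, hostnames []string, prefix, suffix, startingIP string) string {
	if i < len(hostnames) {
		return hostnames[i]
	}
	if startingIP == "" {
		return fmt.Sprintf("%s%d%s", prefix, i, suffix)
	}
	ip := net.ParseIP(startingIP).To4()
	if ip == nil {
		panic(startingIP + ": not an IPv4 address")
	}
	for j := 0; j < i; j++ {
		ip[3]++ // wraps past .255 without carrying into the third octet
	}
	return ip.String()
}

func main() {
	fmt.Println(hostnameOrIP(2, nil, "node", ".xyz.com", "")) // node2.xyz.com
	fmt.Println(hostnameOrIP(2, nil, "", "", "192.168.0.1"))  // 192.168.0.3
}
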


@@ -1,7 +1,7 @@
package commands
import (
"github.com/tendermint/go-amino"
amino "github.com/tendermint/go-amino"
cryptoAmino "github.com/tendermint/tendermint/crypto/encoding/amino"
)


@@ -14,6 +14,11 @@ const (
FuzzModeDrop = iota
// FuzzModeDelay is a mode in which we randomly sleep
FuzzModeDelay
// LogFormatPlain is a format for colored text
LogFormatPlain = "plain"
// LogFormatJSON is a format for json output
LogFormatJSON = "json"
)
// NOTE: Most of the structs & relevant comments + the
@@ -30,15 +35,24 @@ var (
defaultConfigFileName = "config.toml"
defaultGenesisJSONName = "genesis.json"
defaultPrivValName = "priv_validator.json"
defaultPrivValKeyName = "priv_validator_key.json"
defaultPrivValStateName = "priv_validator_state.json"
defaultNodeKeyName = "node_key.json"
defaultAddrBookName = "addrbook.json"
defaultConfigFilePath = filepath.Join(defaultConfigDir, defaultConfigFileName)
defaultGenesisJSONPath = filepath.Join(defaultConfigDir, defaultGenesisJSONName)
defaultPrivValPath = filepath.Join(defaultConfigDir, defaultPrivValName)
defaultNodeKeyPath = filepath.Join(defaultConfigDir, defaultNodeKeyName)
defaultAddrBookPath = filepath.Join(defaultConfigDir, defaultAddrBookName)
defaultConfigFilePath = filepath.Join(defaultConfigDir, defaultConfigFileName)
defaultGenesisJSONPath = filepath.Join(defaultConfigDir, defaultGenesisJSONName)
defaultPrivValKeyPath = filepath.Join(defaultConfigDir, defaultPrivValKeyName)
defaultPrivValStatePath = filepath.Join(defaultDataDir, defaultPrivValStateName)
defaultNodeKeyPath = filepath.Join(defaultConfigDir, defaultNodeKeyName)
defaultAddrBookPath = filepath.Join(defaultConfigDir, defaultAddrBookName)
)
var (
oldPrivVal = "priv_validator.json"
oldPrivValPath = filepath.Join(defaultConfigDir, oldPrivVal)
)
// Config defines the top level configuration for a Tendermint node
@@ -94,6 +108,9 @@ func (cfg *Config) SetRoot(root string) *Config {
// ValidateBasic performs basic validation (checking param bounds, etc.) and
// returns an error if any check fails.
func (cfg *Config) ValidateBasic() error {
if err := cfg.BaseConfig.ValidateBasic(); err != nil {
return err
}
if err := cfg.RPC.ValidateBasic(); err != nil {
return errors.Wrap(err, "Error in [rpc] section")
}
@@ -136,7 +153,18 @@ type BaseConfig struct {
// and verifying their commits
FastSync bool `mapstructure:"fast_sync"`
// Database backend: leveldb | memdb | cleveldb
// Database backend: goleveldb | cleveldb | boltdb
// * goleveldb (github.com/syndtr/goleveldb - most popular implementation)
// - pure go
// - stable
// * cleveldb (uses levigo wrapper)
// - fast
// - requires gcc
// - use cleveldb build tag (go build -tags cleveldb)
// * boltdb (uses etcd's fork of bolt - github.com/etcd-io/bbolt)
// - EXPERIMENTAL
// - may be faster in some use-cases (random reads - indexer)
// - use boltdb build tag (go build -tags boltdb)
DBBackend string `mapstructure:"db_backend"`
// Database directory
@@ -145,11 +173,17 @@ type BaseConfig struct {
// Output level for logging
LogLevel string `mapstructure:"log_level"`
// Output format: 'plain' (colored text) or 'json'
LogFormat string `mapstructure:"log_format"`
// Path to the JSON file containing the initial validator set and other meta data
Genesis string `mapstructure:"genesis_file"`
// Path to the JSON file containing the private key to use as a validator in the consensus protocol
PrivValidator string `mapstructure:"priv_validator_file"`
PrivValidatorKey string `mapstructure:"priv_validator_key_file"`
// Path to the JSON file containing the last sign state of a validator
PrivValidatorState string `mapstructure:"priv_validator_state_file"`
// TCP or UNIX socket address for Tendermint to listen on for
// connections from an external PrivValidator process
@@ -172,18 +206,20 @@ type BaseConfig struct {
// DefaultBaseConfig returns a default base configuration for a Tendermint node
func DefaultBaseConfig() BaseConfig {
return BaseConfig{
Genesis: defaultGenesisJSONPath,
PrivValidator: defaultPrivValPath,
NodeKey: defaultNodeKeyPath,
Moniker: defaultMoniker,
ProxyApp: "tcp://127.0.0.1:26658",
ABCI: "socket",
LogLevel: DefaultPackageLogLevels(),
ProfListenAddress: "",
FastSync: true,
FilterPeers: false,
DBBackend: "leveldb",
DBPath: "data",
Genesis: defaultGenesisJSONPath,
PrivValidatorKey: defaultPrivValKeyPath,
PrivValidatorState: defaultPrivValStatePath,
NodeKey: defaultNodeKeyPath,
Moniker: defaultMoniker,
ProxyApp: "tcp://127.0.0.1:26658",
ABCI: "socket",
LogLevel: DefaultPackageLogLevels(),
LogFormat: LogFormatPlain,
ProfListenAddress: "",
FastSync: true,
FilterPeers: false,
DBBackend: "goleveldb",
DBPath: "data",
}
}
@@ -206,9 +242,20 @@ func (cfg BaseConfig) GenesisFile() string {
return rootify(cfg.Genesis, cfg.RootDir)
}
// PrivValidatorFile returns the full path to the priv_validator.json file
func (cfg BaseConfig) PrivValidatorFile() string {
return rootify(cfg.PrivValidator, cfg.RootDir)
// PrivValidatorKeyFile returns the full path to the priv_validator_key.json file
func (cfg BaseConfig) PrivValidatorKeyFile() string {
return rootify(cfg.PrivValidatorKey, cfg.RootDir)
}
// PrivValidatorStateFile returns the full path to the priv_validator_state.json file
func (cfg BaseConfig) PrivValidatorStateFile() string {
return rootify(cfg.PrivValidatorState, cfg.RootDir)
}
// OldPrivValidatorFile returns the full path of the priv_validator.json from pre v0.28.0.
// TODO: eventually remove.
func (cfg BaseConfig) OldPrivValidatorFile() string {
return rootify(oldPrivValPath, cfg.RootDir)
}
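// Editor's sketch (not part of the diff): detecting a pre-v0.28.0 single-file
// private validator so callers can migrate it to the split key/state files.
// Assumes cmn = github.com/tendermint/tendermint/libs/common is imported; the
// migration step itself is elided.
func hasOldPrivValidator(cfg BaseConfig) bool {
	return cmn.FileExists(cfg.OldPrivValidatorFile())
}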
// NodeKeyFile returns the full path to the node_key.json file
@@ -221,6 +268,17 @@ func (cfg BaseConfig) DBDir() string {
return rootify(cfg.DBPath, cfg.RootDir)
}
// ValidateBasic performs basic validation (checking param bounds, etc.) and
// returns an error if any check fails.
func (cfg BaseConfig) ValidateBasic() error {
switch cfg.LogFormat {
case LogFormatPlain, LogFormatJSON:
default:
return errors.New("unknown log_format (must be 'plain' or 'json')")
}
return nil
}
// DefaultLogLevel returns a default log level of "error"
func DefaultLogLevel() string {
return "error"
@@ -242,13 +300,25 @@ type RPCConfig struct {
// TCP or UNIX socket address for the RPC server to listen on
ListenAddress string `mapstructure:"laddr"`
// A list of origins a cross-domain request can be executed from.
// If the special '*' value is present in the list, all origins will be allowed.
// An origin may contain a wildcard (*) to replace 0 or more characters (i.e.: http://*.domain.com).
// Only one wildcard can be used per origin.
CORSAllowedOrigins []string `mapstructure:"cors_allowed_origins"`
// A list of methods the client is allowed to use with cross-domain requests.
CORSAllowedMethods []string `mapstructure:"cors_allowed_methods"`
// A list of non-simple headers the client is allowed to use with cross-domain requests.
CORSAllowedHeaders []string `mapstructure:"cors_allowed_headers"`
// TCP or UNIX socket address for the gRPC server to listen on
// NOTE: This server only supports /broadcast_tx_commit
GRPCListenAddress string `mapstructure:"grpc_laddr"`
// Maximum number of simultaneous connections.
// Does not include RPC (HTTP&WebSocket) connections. See max_open_connections
// If you want to accept more significant number than the default, make sure
// If you want to accept a larger number than the default, make sure
// you increase your OS limits.
// 0 - unlimited.
GRPCMaxOpenConnections int `mapstructure:"grpc_max_open_connections"`
@@ -258,24 +328,63 @@ type RPCConfig struct {
// Maximum number of simultaneous connections (including WebSocket).
// Does not include gRPC connections. See grpc_max_open_connections
// If you want to accept more significant number than the default, make sure
// If you want to accept a larger number than the default, make sure
// you increase your OS limits.
// 0 - unlimited.
// Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
// 1024 - 40 - 10 - 50 = 924 = ~900
MaxOpenConnections int `mapstructure:"max_open_connections"`
// Maximum number of unique clientIDs that can /subscribe
// If you're using /broadcast_tx_commit, set to the estimated maximum number
// of broadcast_tx_commit calls per block.
MaxSubscriptionClients int `mapstructure:"max_subscription_clients"`
// Maximum number of unique queries a given client can /subscribe to
// If you're using GRPC (or Local RPC client) and /broadcast_tx_commit, set
// to the estimated maximum number of broadcast_tx_commit calls per block.
MaxSubscriptionsPerClient int `mapstructure:"max_subscriptions_per_client"`
// How long to wait for a tx to be committed during /broadcast_tx_commit
// WARNING: Using a value larger than 10s will result in increasing the
// global HTTP write timeout, which applies to all connections and endpoints.
// See https://github.com/tendermint/tendermint/issues/3435
TimeoutBroadcastTxCommit time.Duration `mapstructure:"timeout_broadcast_tx_commit"`
// The name of a file containing the certificate that is used to create the HTTPS server.
//
// If the certificate is signed by a certificate authority,
// the certFile should be the concatenation of the server's certificate, any intermediates,
// and the CA's certificate.
//
// NOTE: both tls_cert_file and tls_key_file must be present for Tendermint to create an HTTPS server. Otherwise, an HTTP server is run.
TLSCertFile string `mapstructure:"tls_cert_file"`
// The name of a file containing the matching private key that is used to create the HTTPS server.
//
// NOTE: both tls_cert_file and tls_key_file must be present for Tendermint to create an HTTPS server. Otherwise, an HTTP server is run.
TLSKeyFile string `mapstructure:"tls_key_file"`
}
// DefaultRPCConfig returns a default configuration for the RPC server
func DefaultRPCConfig() *RPCConfig {
return &RPCConfig{
ListenAddress: "tcp://0.0.0.0:26657",
ListenAddress: "tcp://0.0.0.0:26657",
CORSAllowedOrigins: []string{},
CORSAllowedMethods: []string{"HEAD", "GET", "POST"},
CORSAllowedHeaders: []string{"Origin", "Accept", "Content-Type", "X-Requested-With", "X-Server-Time"},
GRPCListenAddress: "",
GRPCMaxOpenConnections: 900,
Unsafe: false,
MaxOpenConnections: 900,
MaxSubscriptionClients: 100,
MaxSubscriptionsPerClient: 5,
TimeoutBroadcastTxCommit: 10 * time.Second,
TLSCertFile: "",
TLSKeyFile: "",
}
}
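// Editor's sketch (not part of the diff): enabling CORS on top of the
// defaults above. The origin value is illustrative; per the field docs,
// []string{"*"} would allow any origin.
func exampleCORSConfig() *RPCConfig {
	rpc := DefaultRPCConfig()
	rpc.CORSAllowedOrigins = []string{"https://example.com"}
	if !rpc.IsCorsEnabled() {
		panic("expected CORS to be enabled")
	}
	return rpc
}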
@@ -297,9 +406,35 @@ func (cfg *RPCConfig) ValidateBasic() error {
if cfg.MaxOpenConnections < 0 {
return errors.New("max_open_connections can't be negative")
}
if cfg.MaxSubscriptionClients < 0 {
return errors.New("max_subscription_clients can't be negative")
}
if cfg.MaxSubscriptionsPerClient < 0 {
return errors.New("max_subscriptions_per_client can't be negative")
}
if cfg.TimeoutBroadcastTxCommit < 0 {
return errors.New("timeout_broadcast_tx_commit can't be negative")
}
return nil
}
// IsCorsEnabled returns true if cross-origin resource sharing is enabled.
func (cfg *RPCConfig) IsCorsEnabled() bool {
return len(cfg.CORSAllowedOrigins) != 0
}
func (cfg RPCConfig) KeyFile() string {
return rootify(filepath.Join(defaultConfigDir, cfg.TLSKeyFile), cfg.RootDir)
}
func (cfg RPCConfig) CertFile() string {
return rootify(filepath.Join(defaultConfigDir, cfg.TLSCertFile), cfg.RootDir)
}
func (cfg RPCConfig) IsTLSEnabled() bool {
return cfg.TLSCertFile != "" && cfg.TLSKeyFile != ""
}
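// Editor's sketch (not part of the diff): switching the RPC server to HTTPS.
// Both files must be set, per the NOTEs above; the file names are
// illustrative and are resolved under the config directory by
// CertFile()/KeyFile().
func exampleTLSConfig(rootDir string) *RPCConfig {
	rpc := DefaultRPCConfig()
	rpc.RootDir = rootDir
	rpc.TLSCertFile = "server.crt" // -> <rootDir>/config/server.crt
	rpc.TLSKeyFile = "server.key"  // -> <rootDir>/config/server.key
	if !rpc.IsTLSEnabled() {
		panic("expected TLS to be enabled")
	}
	return rpc
}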
//-----------------------------------------------------------------------------
// P2PConfig
@@ -392,7 +527,7 @@ func DefaultP2PConfig() *P2PConfig {
RecvRate: 5120000, // 5 mB/s
PexReactor: true,
SeedMode: false,
AllowDuplicateIP: true, // so non-breaking yet
AllowDuplicateIP: false,
HandshakeTimeout: 20 * time.Second,
DialTimeout: 3 * time.Second,
TestDialFail: false,
@@ -464,12 +599,13 @@ func DefaultFuzzConnConfig() *FuzzConnConfig {
// MempoolConfig defines the configuration options for the Tendermint mempool
type MempoolConfig struct {
RootDir string `mapstructure:"home"`
Recheck bool `mapstructure:"recheck"`
Broadcast bool `mapstructure:"broadcast"`
WalPath string `mapstructure:"wal_dir"`
Size int `mapstructure:"size"`
CacheSize int `mapstructure:"cache_size"`
RootDir string `mapstructure:"home"`
Recheck bool `mapstructure:"recheck"`
Broadcast bool `mapstructure:"broadcast"`
WalPath string `mapstructure:"wal_dir"`
Size int `mapstructure:"size"`
MaxTxsBytes int64 `mapstructure:"max_txs_bytes"`
CacheSize int `mapstructure:"cache_size"`
}
// DefaultMempoolConfig returns a default configuration for the Tendermint mempool
@@ -478,10 +614,11 @@ func DefaultMempoolConfig() *MempoolConfig {
Recheck: true,
Broadcast: true,
WalPath: "",
// Each signature verification takes .5ms, size reduced until we implement
// Each signature verification takes .5ms, Size reduced until we implement
// ABCI Recheck
Size: 5000,
CacheSize: 10000,
Size: 5000,
MaxTxsBytes: 1024 * 1024 * 1024, // 1GB
CacheSize: 10000,
}
}
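// Editor's sketch (not part of the diff): how the two mempool limits
// interact. With the defaults above and an assumed 1MB tx size, at most
// 1024 txs fit by bytes, so max_txs_bytes (not size = 5000) is the binding
// limit in that case.
func exampleMempoolCapacity() (maxByCount int, maxByBytes int64) {
	m := DefaultMempoolConfig()
	txSize := int64(1024 * 1024) // illustrative 1MB transactions
	return m.Size, m.MaxTxsBytes / txSize
}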
@@ -497,12 +634,20 @@ func (cfg *MempoolConfig) WalDir() string {
return rootify(cfg.WalPath, cfg.RootDir)
}
// WalEnabled returns true if the WAL is enabled.
func (cfg *MempoolConfig) WalEnabled() bool {
return cfg.WalPath != ""
}
// ValidateBasic performs basic validation (checking param bounds, etc.) and
// returns an error if any check fails.
func (cfg *MempoolConfig) ValidateBasic() error {
if cfg.Size < 0 {
return errors.New("size can't be negative")
}
if cfg.MaxTxsBytes < 0 {
return errors.New("max_txs_bytes can't be negative")
}
if cfg.CacheSize < 0 {
return errors.New("cache_size can't be negative")
}
@@ -537,9 +682,6 @@ type ConsensusConfig struct {
// Reactor sleep duration parameters
PeerGossipSleepDuration time.Duration `mapstructure:"peer_gossip_sleep_duration"`
PeerQueryMaj23SleepDuration time.Duration `mapstructure:"peer_query_maj23_sleep_duration"`
// Block time parameters. Corresponds to the minimum time increment between consecutive blocks.
BlockTimeIota time.Duration `mapstructure:"blocktime_iota"`
}
// DefaultConsensusConfig returns a default configuration for the consensus service
@@ -558,7 +700,6 @@ func DefaultConsensusConfig() *ConsensusConfig {
CreateEmptyBlocksInterval: 0 * time.Second,
PeerGossipSleepDuration: 100 * time.Millisecond,
PeerQueryMaj23SleepDuration: 2000 * time.Millisecond,
BlockTimeIota: 1000 * time.Millisecond,
}
}
@@ -575,16 +716,9 @@ func TestConsensusConfig() *ConsensusConfig {
cfg.SkipTimeoutCommit = true
cfg.PeerGossipSleepDuration = 5 * time.Millisecond
cfg.PeerQueryMaj23SleepDuration = 250 * time.Millisecond
cfg.BlockTimeIota = 10 * time.Millisecond
return cfg
}
// MinValidVoteTime returns the minimum acceptable block time.
// See the [BFT time spec](https://godoc.org/github.com/tendermint/tendermint/docs/spec/consensus/bft-time.md).
func (cfg *ConsensusConfig) MinValidVoteTime(lastBlockTime time.Time) time.Time {
return lastBlockTime.Add(cfg.BlockTimeIota)
}
// WaitForTxs returns true if the consensus should wait for transactions before entering the propose step
func (cfg *ConsensusConfig) WaitForTxs() bool {
return !cfg.CreateEmptyBlocks || cfg.CreateEmptyBlocksInterval > 0
@@ -662,9 +796,6 @@ func (cfg *ConsensusConfig) ValidateBasic() error {
if cfg.PeerQueryMaj23SleepDuration < 0 {
return errors.New("peer_query_maj23_sleep_duration can't be negative")
}
if cfg.BlockTimeIota < 0 {
return errors.New("blocktime_iota can't be negative")
}
return nil
}
@@ -727,12 +858,12 @@ type InstrumentationConfig struct {
PrometheusListenAddr string `mapstructure:"prometheus_listen_addr"`
// Maximum number of simultaneous connections.
// If you want to accept more significant number than the default, make sure
// If you want to accept a larger number than the default, make sure
// you increase your OS limits.
// 0 - unlimited.
MaxOpenConnections int `mapstructure:"max_open_connections"`
// Tendermint instrumentation namespace.
// Instrumentation namespace.
Namespace string `mapstructure:"namespace"`
}

View File

@@ -2,13 +2,17 @@ package config
import (
"bytes"
"os"
"fmt"
"io/ioutil"
"path/filepath"
"text/template"
cmn "github.com/tendermint/tendermint/libs/common"
)
// DefaultDirPerm is the default permissions used when creating directories.
const DefaultDirPerm = 0700
var configTemplate *template.Template
func init() {
@@ -23,14 +27,14 @@ func init() {
// EnsureRoot creates the root, config, and data directories if they don't exist,
// and panics if it fails.
func EnsureRoot(rootDir string) {
if err := cmn.EnsureDir(rootDir, 0700); err != nil {
cmn.PanicSanity(err.Error())
if err := cmn.EnsureDir(rootDir, DefaultDirPerm); err != nil {
panic(err.Error())
}
if err := cmn.EnsureDir(filepath.Join(rootDir, defaultConfigDir), 0700); err != nil {
cmn.PanicSanity(err.Error())
if err := cmn.EnsureDir(filepath.Join(rootDir, defaultConfigDir), DefaultDirPerm); err != nil {
panic(err.Error())
}
if err := cmn.EnsureDir(filepath.Join(rootDir, defaultDataDir), 0700); err != nil {
cmn.PanicSanity(err.Error())
if err := cmn.EnsureDir(filepath.Join(rootDir, defaultDataDir), DefaultDirPerm); err != nil {
panic(err.Error())
}
configFilePath := filepath.Join(rootDir, defaultConfigFilePath)
@@ -77,7 +81,18 @@ moniker = "{{ .BaseConfig.Moniker }}"
# and verifying their commits
fast_sync = {{ .BaseConfig.FastSync }}
# Database backend: leveldb | memdb | cleveldb
# Database backend: goleveldb | cleveldb | boltdb
# * goleveldb (github.com/syndtr/goleveldb - most popular implementation)
# - pure go
# - stable
# * cleveldb (uses levigo wrapper)
# - fast
# - requires gcc
# - use cleveldb build tag (go build -tags cleveldb)
# * boltdb (uses etcd's fork of bolt - github.com/etcd-io/bbolt)
# - EXPERIMENTAL
# - may be faster in some use-cases (random reads - indexer)
# - use boltdb build tag (go build -tags boltdb)
db_backend = "{{ .BaseConfig.DBBackend }}"
# Database directory
@@ -86,13 +101,19 @@ db_dir = "{{ js .BaseConfig.DBPath }}"
# Output level for logging, including package level options
log_level = "{{ .BaseConfig.LogLevel }}"
# Output format: 'plain' (colored text) or 'json'
log_format = "{{ .BaseConfig.LogFormat }}"
##### additional base config options #####
# Path to the JSON file containing the initial validator set and other meta data
genesis_file = "{{ js .BaseConfig.Genesis }}"
# Path to the JSON file containing the private key to use as a validator in the consensus protocol
priv_validator_file = "{{ js .BaseConfig.PrivValidator }}"
priv_validator_key_file = "{{ js .BaseConfig.PrivValidatorKey }}"
# Path to the JSON file containing the last sign state of a validator
priv_validator_state_file = "{{ js .BaseConfig.PrivValidatorState }}"
# TCP or UNIX socket address for Tendermint to listen on for
# connections from an external PrivValidator process
@@ -119,13 +140,24 @@ filter_peers = {{ .BaseConfig.FilterPeers }}
# TCP or UNIX socket address for the RPC server to listen on
laddr = "{{ .RPC.ListenAddress }}"
# A list of origins a cross-domain request can be executed from
# Default value '[]' disables cors support
# Use '["*"]' to allow any origin
cors_allowed_origins = [{{ range .RPC.CORSAllowedOrigins }}{{ printf "%q, " . }}{{end}}]
# A list of methods the client is allowed to use with cross-domain requests
cors_allowed_methods = [{{ range .RPC.CORSAllowedMethods }}{{ printf "%q, " . }}{{end}}]
# A list of non-simple headers the client is allowed to use with cross-domain requests
cors_allowed_headers = [{{ range .RPC.CORSAllowedHeaders }}{{ printf "%q, " . }}{{end}}]
# TCP or UNIX socket address for the gRPC server to listen on
# NOTE: This server only supports /broadcast_tx_commit
grpc_laddr = "{{ .RPC.GRPCListenAddress }}"
# Maximum number of simultaneous connections.
# Does not include RPC (HTTP&WebSocket) connections. See max_open_connections
# If you want to accept more significant number than the default, make sure
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
# Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
@@ -137,13 +169,40 @@ unsafe = {{ .RPC.Unsafe }}
# Maximum number of simultaneous connections (including WebSocket).
# Does not include gRPC connections. See grpc_max_open_connections
# If you want to accept more significant number than the default, make sure
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
# Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
# 1024 - 40 - 10 - 50 = 924 = ~900
max_open_connections = {{ .RPC.MaxOpenConnections }}
# Maximum number of unique clientIDs that can /subscribe
# If you're using /broadcast_tx_commit, set to the estimated maximum number
# of broadcast_tx_commit calls per block.
max_subscription_clients = {{ .RPC.MaxSubscriptionClients }}
# Maximum number of unique queries a given client can /subscribe to
# If you're using GRPC (or Local RPC client) and /broadcast_tx_commit, set to
# the estimated maximum number of broadcast_tx_commit calls per block.
max_subscriptions_per_client = {{ .RPC.MaxSubscriptionsPerClient }}
# How long to wait for a tx to be committed during /broadcast_tx_commit.
# WARNING: Using a value larger than 10s will result in increasing the
# global HTTP write timeout, which applies to all connections and endpoints.
# See https://github.com/tendermint/tendermint/issues/3435
timeout_broadcast_tx_commit = "{{ .RPC.TimeoutBroadcastTxCommit }}"
# The name of a file containing the certificate that is used to create the HTTPS server.
# If the certificate is signed by a certificate authority,
# the certFile should be the concatenation of the server's certificate, any intermediates,
# and the CA's certificate.
# NOTE: both tls_cert_file and tls_key_file must be present for Tendermint to create an HTTPS server. Otherwise, an HTTP server is run.
tls_cert_file = "{{ .RPC.TLSCertFile }}"
# The name of a file containing the matching private key that is used to create the HTTPS server.
# NOTE: both tls_cert_file and tls_key_file must be present for Tendermint to create an HTTPS server. Otherwise, an HTTP server is run.
tls_key_file = "{{ .RPC.TLSKeyFile }}"
##### peer to peer configuration options #####
[p2p]
@@ -216,10 +275,15 @@ recheck = {{ .Mempool.Recheck }}
broadcast = {{ .Mempool.Broadcast }}
wal_dir = "{{ js .Mempool.WalPath }}"
# size of the mempool
# Maximum number of transactions in the mempool
size = {{ .Mempool.Size }}
# size of the cache (used to filter transactions we saw earlier)
# Limit the total size of all txs in the mempool.
# This only accounts for raw transactions (e.g. given 1MB transactions and
# max_txs_bytes=5MB, mempool will only accept 5 transactions).
max_txs_bytes = {{ .Mempool.MaxTxsBytes }}
# Size of the cache (used to filter transactions we saw earlier) in transactions
cache_size = {{ .Mempool.CacheSize }}
##### consensus configuration options #####
@@ -252,8 +316,8 @@ peer_query_maj23_sleep_duration = "{{ .Consensus.PeerQueryMaj23SleepDuration }}"
# What indexer to use for transactions
#
# Options:
# 1) "null" (default)
# 2) "kv" - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
# 1) "null"
# 2) "kv" (default) - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
indexer = "{{ .TxIndex.Indexer }}"
# Comma-separated list of tags to index (by default the only tag is "tx.hash")
@@ -285,7 +349,7 @@ prometheus = {{ .Instrumentation.Prometheus }}
prometheus_listen_addr = "{{ .Instrumentation.PrometheusListenAddr }}"
# Maximum number of simultaneous connections.
# If you want to accept more significant number than the default, make sure
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
max_open_connections = {{ .Instrumentation.MaxOpenConnections }}
@@ -297,53 +361,51 @@ namespace = "{{ .Instrumentation.Namespace }}"
/****** these are for test settings ***********/
func ResetTestRoot(testName string) *Config {
rootDir := os.ExpandEnv("$HOME/.tendermint_test")
rootDir = filepath.Join(rootDir, testName)
// Remove ~/.tendermint_test_bak
if cmn.FileExists(rootDir + "_bak") {
if err := os.RemoveAll(rootDir + "_bak"); err != nil {
cmn.PanicSanity(err.Error())
}
return ResetTestRootWithChainID(testName, "")
}
func ResetTestRootWithChainID(testName string, chainID string) *Config {
// create a unique, concurrency-safe test directory under os.TempDir()
rootDir, err := ioutil.TempDir("", fmt.Sprintf("%s-%s_", chainID, testName))
if err != nil {
panic(err)
}
// Move ~/.tendermint_test to ~/.tendermint_test_bak
if cmn.FileExists(rootDir) {
if err := os.Rename(rootDir, rootDir+"_bak"); err != nil {
cmn.PanicSanity(err.Error())
}
// ensure config and data subdirs are created
if err := cmn.EnsureDir(filepath.Join(rootDir, defaultConfigDir), DefaultDirPerm); err != nil {
panic(err)
}
// Create new dir
if err := cmn.EnsureDir(rootDir, 0700); err != nil {
cmn.PanicSanity(err.Error())
}
if err := cmn.EnsureDir(filepath.Join(rootDir, defaultConfigDir), 0700); err != nil {
cmn.PanicSanity(err.Error())
}
if err := cmn.EnsureDir(filepath.Join(rootDir, defaultDataDir), 0700); err != nil {
cmn.PanicSanity(err.Error())
if err := cmn.EnsureDir(filepath.Join(rootDir, defaultDataDir), DefaultDirPerm); err != nil {
panic(err)
}
baseConfig := DefaultBaseConfig()
configFilePath := filepath.Join(rootDir, defaultConfigFilePath)
genesisFilePath := filepath.Join(rootDir, baseConfig.Genesis)
privFilePath := filepath.Join(rootDir, baseConfig.PrivValidator)
privKeyFilePath := filepath.Join(rootDir, baseConfig.PrivValidatorKey)
privStateFilePath := filepath.Join(rootDir, baseConfig.PrivValidatorState)
// Write default config file if missing.
if !cmn.FileExists(configFilePath) {
writeDefaultConfigFile(configFilePath)
}
if !cmn.FileExists(genesisFilePath) {
if chainID == "" {
chainID = "tendermint_test"
}
testGenesis := fmt.Sprintf(testGenesisFmt, chainID)
cmn.MustWriteFile(genesisFilePath, []byte(testGenesis), 0644)
}
// we always overwrite the priv val
cmn.MustWriteFile(privFilePath, []byte(testPrivValidator), 0644)
cmn.MustWriteFile(privKeyFilePath, []byte(testPrivValidatorKey), 0644)
cmn.MustWriteFile(privStateFilePath, []byte(testPrivValidatorState), 0644)
config := TestConfig().SetRoot(rootDir)
return config
}
var testGenesis = `{
var testGenesisFmt = `{
"genesis_time": "2018-10-10T08:20:13.695936996Z",
"chain_id": "tendermint_test",
"chain_id": "%s",
"validators": [
{
"pub_key": {
@@ -357,7 +419,7 @@ var testGenesis = `{
"app_hash": ""
}`
var testPrivValidator = `{
var testPrivValidatorKey = `{
"address": "A3258DCBF45DCA0DF052981870F2D1441A36D145",
"pub_key": {
"type": "tendermint/PubKeyEd25519",
@@ -366,8 +428,11 @@ var testPrivValidator = `{
"priv_key": {
"type": "tendermint/PrivKeyEd25519",
"value": "EVkqJO/jIXp3rkASXfh9YnyToYXRXhBr6g9cQVxPFnQBP/5povV4HTjvsy530kybxKHwEi85iU8YL0qQhSYVoQ=="
},
"last_height": "0",
"last_round": "0",
"last_step": 0
}
}`
var testPrivValidatorState = `{
"height": "0",
"round": "0",
"step": 0
}`

View File

@@ -48,6 +48,7 @@ func TestEnsureTestRoot(t *testing.T) {
// create root dir
cfg := ResetTestRoot(testName)
defer os.RemoveAll(cfg.RootDir)
rootDir := cfg.RootDir
// make sure config is set properly
@@ -60,7 +61,7 @@ func TestEnsureTestRoot(t *testing.T) {
// TODO: make sure the cfg returned and testconfig are the same!
baseConfig := DefaultBaseConfig()
ensureFiles(t, rootDir, defaultDataDir, baseConfig.Genesis, baseConfig.PrivValidator)
ensureFiles(t, rootDir, defaultDataDir, baseConfig.Genesis, baseConfig.PrivValidatorKey, baseConfig.PrivValidatorState)
}
func checkConfig(configFile string) bool {

View File

@@ -10,13 +10,10 @@ import (
"github.com/stretchr/testify/require"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/p2p"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
)
func init() {
config = ResetConfig("consensus_byzantine_test")
}
//----------------------------------------------
// byzantine failures
@@ -29,7 +26,8 @@ func init() {
func TestByzantine(t *testing.T) {
N := 4
logger := consensusLogger().With("test", "byzantine")
css := randConsensusNet(N, "consensus_byzantine_test", newMockTickerFunc(false), newCounter)
css, cleanup := randConsensusNet(N, "consensus_byzantine_test", newMockTickerFunc(false), newCounter)
defer cleanup()
// give the byzantine validator a normal ticker
ticker := NewTimeoutTicker()
@@ -49,7 +47,7 @@ func TestByzantine(t *testing.T) {
switches[i].SetLogger(p2pLogger.With("validator", i))
}
eventChans := make([]chan interface{}, N)
blocksSubs := make([]types.Subscription, N)
reactors := make([]p2p.Reactor, N)
for i := 0; i < N; i++ {
// make first val byzantine
@@ -68,16 +66,15 @@ func TestByzantine(t *testing.T) {
eventBus := css[i].eventBus
eventBus.SetLogger(logger.With("module", "events", "validator", i))
eventChans[i] = make(chan interface{}, 1)
err := eventBus.Subscribe(context.Background(), testSubscriber, types.EventQueryNewBlock, eventChans[i])
var err error
blocksSubs[i], err = eventBus.Subscribe(context.Background(), testSubscriber, types.EventQueryNewBlock)
require.NoError(t, err)
conR := NewConsensusReactor(css[i], true) // so we dont start the consensus states
conR := NewConsensusReactor(css[i], true) // so we don't start the consensus states
conR.SetLogger(logger.With("validator", i))
conR.SetEventBus(eventBus)
var conRI p2p.Reactor // nolint: gotype, gosimple
conRI = conR
var conRI p2p.Reactor = conR
// make first val byzantine
if i == 0 {
@@ -85,6 +82,7 @@ func TestByzantine(t *testing.T) {
}
reactors[i] = conRI
sm.SaveState(css[i].blockExec.DB(), css[i].state) // save height 1's validator info
}
defer func() {
@@ -135,7 +133,7 @@ func TestByzantine(t *testing.T) {
p2p.Connect2Switches(switches, ind1, ind2)
// wait for someone in the big partition (B) to make a block
<-eventChans[ind2]
<-blocksSubs[ind2].Out()
t.Log("A block has been committed. Healing partition")
p2p.Connect2Switches(switches, ind0, ind1)
@@ -147,7 +145,7 @@ func TestByzantine(t *testing.T) {
wg.Add(2)
for i := 1; i < N-1; i++ {
go func(j int) {
<-eventChans[j]
<-blocksSubs[j].Out()
wg.Done()
}(i)
}
@@ -272,3 +270,4 @@ func (br *ByzantineReactor) RemovePeer(peer p2p.Peer, reason interface{}) {
func (br *ByzantineReactor) Receive(chID byte, peer p2p.Peer, msgBytes []byte) {
br.reactor.Receive(chID, peer, msgBytes)
}
func (br *ByzantineReactor) InitPeer(peer p2p.Peer) p2p.Peer { return peer }

View File

@@ -6,14 +6,19 @@ import (
"fmt"
"io/ioutil"
"os"
"path"
"reflect"
"path/filepath"
"sort"
"sync"
"testing"
"time"
"github.com/go-kit/kit/log/term"
"path"
abcicli "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/example/counter"
"github.com/tendermint/tendermint/abci/example/kvstore"
abci "github.com/tendermint/tendermint/abci/types"
bc "github.com/tendermint/tendermint/blockchain"
cfg "github.com/tendermint/tendermint/config"
@@ -21,25 +26,26 @@ import (
cmn "github.com/tendermint/tendermint/libs/common"
dbm "github.com/tendermint/tendermint/libs/db"
"github.com/tendermint/tendermint/libs/log"
tmpubsub "github.com/tendermint/tendermint/libs/pubsub"
mempl "github.com/tendermint/tendermint/mempool"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/privval"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
tmtime "github.com/tendermint/tendermint/types/time"
"github.com/tendermint/tendermint/abci/example/counter"
"github.com/tendermint/tendermint/abci/example/kvstore"
"github.com/go-kit/kit/log/term"
)
const (
testSubscriber = "test-client"
)
// A cleanupFunc cleans up any config / test files created for a particular
// test.
type cleanupFunc func()
// genesis, chain_id, priv_val
var config *cfg.Config // NOTE: must be reset for each _test.go file
var consensusReplayConfig *cfg.Config
var ensureTimeout = time.Millisecond * 100
func ensureDir(dir string, mode os.FileMode) {
@@ -72,9 +78,10 @@ func NewValidatorStub(privValidator types.PrivValidator, valIndex int) *validato
}
func (vs *validatorStub) signVote(voteType types.SignedMsgType, hash []byte, header types.PartSetHeader) (*types.Vote, error) {
addr := vs.PrivValidator.GetPubKey().Address()
vote := &types.Vote{
ValidatorIndex: vs.Index,
ValidatorAddress: vs.PrivValidator.GetAddress(),
ValidatorAddress: addr,
Height: vs.Height,
Round: vs.Round,
Timestamp: tmtime.Now(),
@@ -114,6 +121,24 @@ func incrementRound(vss ...*validatorStub) {
}
}
type ValidatorStubsByAddress []*validatorStub
func (vss ValidatorStubsByAddress) Len() int {
return len(vss)
}
func (vss ValidatorStubsByAddress) Less(i, j int) bool {
return bytes.Compare(vss[i].GetPubKey().Address(), vss[j].GetPubKey().Address()) == -1
}
func (vss ValidatorStubsByAddress) Swap(i, j int) {
it := vss[i]
vss[i] = vss[j]
vss[i].Index = i
vss[j] = it
vss[j].Index = j
}
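// Editor's note (not part of the diff): ValidatorStubsByAddress implements
// sort.Interface, so test code can order stubs by validator address while
// Swap keeps each stub's Index consistent with its new position:
func sortVssByAddress(vss []*validatorStub) {
	sort.Sort(ValidatorStubsByAddress(vss))
}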
//-------------------------------------------------------------------------------
// Functions for transitioning the consensus state
@@ -122,17 +147,21 @@ func startTestRound(cs *ConsensusState, height int64, round int) {
cs.startRoutines(0)
}
// Create proposal block from cs1 but sign it with vs
// Create proposal block from cs1 but sign it with vs.
func decideProposal(cs1 *ConsensusState, vs *validatorStub, height int64, round int) (proposal *types.Proposal, block *types.Block) {
cs1.mtx.Lock()
block, blockParts := cs1.createProposalBlock()
if block == nil { // on error
panic("error creating proposal block")
validRound := cs1.ValidRound
chainID := cs1.state.ChainID
cs1.mtx.Unlock()
if block == nil {
panic("Failed to createProposalBlock. Did you forget to add commit for previous block?")
}
// Make proposal
polRound, propBlockID := cs1.ValidRound, types.BlockID{block.Hash(), blockParts.Header()}
polRound, propBlockID := validRound, types.BlockID{block.Hash(), blockParts.Header()}
proposal = types.NewProposal(height, round, polRound, propBlockID)
if err := vs.SignProposal(cs1.state.ChainID, proposal); err != nil {
if err := vs.SignProposal(chainID, proposal); err != nil {
panic(err)
}
return
@@ -151,8 +180,9 @@ func signAddVotes(to *ConsensusState, voteType types.SignedMsgType, hash []byte,
func validatePrevote(t *testing.T, cs *ConsensusState, round int, privVal *validatorStub, blockHash []byte) {
prevotes := cs.Votes.Prevotes(round)
address := privVal.GetPubKey().Address()
var vote *types.Vote
if vote = prevotes.GetByAddress(privVal.GetAddress()); vote == nil {
if vote = prevotes.GetByAddress(address); vote == nil {
panic("Failed to find prevote from validator")
}
if blockHash == nil {
@@ -168,8 +198,9 @@ func validatePrevote(t *testing.T, cs *ConsensusState, round int, privVal *valid
func validateLastPrecommit(t *testing.T, cs *ConsensusState, privVal *validatorStub, blockHash []byte) {
votes := cs.LastCommit
address := privVal.GetPubKey().Address()
var vote *types.Vote
if vote = votes.GetByAddress(privVal.GetAddress()); vote == nil {
if vote = votes.GetByAddress(address); vote == nil {
panic("Failed to find precommit from validator")
}
if !bytes.Equal(vote.BlockID.Hash, blockHash) {
@@ -179,8 +210,9 @@ func validateLastPrecommit(t *testing.T, cs *ConsensusState, privVal *validatorS
func validatePrecommit(t *testing.T, cs *ConsensusState, thisRound, lockRound int, privVal *validatorStub, votedBlockHash, lockedBlockHash []byte) {
precommits := cs.Votes.Precommits(thisRound)
address := privVal.GetPubKey().Address()
var vote *types.Vote
if vote = precommits.GetByAddress(privVal.GetAddress()); vote == nil {
if vote = precommits.GetByAddress(address); vote == nil {
panic("Failed to find precommit from validator")
}
@@ -215,30 +247,29 @@ func validatePrevoteAndPrecommit(t *testing.T, cs *ConsensusState, thisRound, lo
cs.mtx.Unlock()
}
// genesis
func subscribeToVoter(cs *ConsensusState, addr []byte) chan interface{} {
voteCh0 := make(chan interface{})
err := cs.eventBus.Subscribe(context.Background(), testSubscriber, types.EventQueryVote, voteCh0)
func subscribeToVoter(cs *ConsensusState, addr []byte) <-chan tmpubsub.Message {
votesSub, err := cs.eventBus.SubscribeUnbuffered(context.Background(), testSubscriber, types.EventQueryVote)
if err != nil {
panic(fmt.Sprintf("failed to subscribe %s to %v", testSubscriber, types.EventQueryVote))
}
voteCh := make(chan interface{})
ch := make(chan tmpubsub.Message)
go func() {
for v := range voteCh0 {
vote := v.(types.EventDataVote)
for msg := range votesSub.Out() {
vote := msg.Data().(types.EventDataVote)
// we only fire for our own votes
if bytes.Equal(addr, vote.Vote.ValidatorAddress) {
voteCh <- v
ch <- msg
}
}
}()
return voteCh
return ch
}
//-------------------------------------------------------------------------------
// consensus states
func newConsensusState(state sm.State, pv types.PrivValidator, app abci.Application) *ConsensusState {
config := cfg.ResetTestRoot("consensus_state_test")
return newConsensusStateWithConfig(config, state, pv, app)
}
@@ -257,7 +288,7 @@ func newConsensusStateWithConfigAndBlockStore(thisConfig *cfg.Config, state sm.S
proxyAppConnCon := abcicli.NewLocalClient(mtx, app)
// Make Mempool
mempool := mempl.NewMempool(thisConfig.Mempool, proxyAppConnMem, 0)
mempool := mempl.NewCListMempool(thisConfig.Mempool, proxyAppConnMem, 0)
mempool.SetLogger(log.TestingLogger().With("module", "mempool"))
if thisConfig.Consensus.WaitForTxs() {
mempool.EnableTxsAvailable()
@@ -267,7 +298,8 @@ func newConsensusStateWithConfigAndBlockStore(thisConfig *cfg.Config, state sm.S
evpool := sm.MockEvidencePool{}
// Make ConsensusState
stateDB := dbm.NewMemDB()
stateDB := blockDB
sm.SaveState(stateDB, state) // save height 1's validator info
blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyAppConnCon, mempool, evpool)
cs := NewConsensusState(thisConfig.Consensus, state, blockExec, blockStore, mempool, evpool)
cs.SetLogger(log.TestingLogger().With("module", "consensus"))
@@ -281,9 +313,10 @@ func newConsensusStateWithConfigAndBlockStore(thisConfig *cfg.Config, state sm.S
}
func loadPrivValidator(config *cfg.Config) *privval.FilePV {
privValidatorFile := config.PrivValidatorFile()
ensureDir(path.Dir(privValidatorFile), 0700)
privValidator := privval.LoadOrGenFilePV(privValidatorFile)
privValidatorKeyFile := config.PrivValidatorKeyFile()
ensureDir(filepath.Dir(privValidatorKeyFile), 0700)
privValidatorStateFile := config.PrivValidatorStateFile()
privValidator := privval.LoadOrGenFilePV(privValidatorKeyFile, privValidatorStateFile)
privValidator.Reset()
return privValidator
}
@@ -307,7 +340,7 @@ func randConsensusState(nValidators int) (*ConsensusState, []*validatorStub) {
//-------------------------------------------------------------------------------
func ensureNoNewEvent(ch <-chan interface{}, timeout time.Duration,
func ensureNoNewEvent(ch <-chan tmpubsub.Message, timeout time.Duration,
errorMessage string) {
select {
case <-time.After(timeout):
@@ -317,169 +350,186 @@ func ensureNoNewEvent(ch <-chan interface{}, timeout time.Duration,
}
}
func ensureNoNewEventOnChannel(ch <-chan interface{}) {
func ensureNoNewEventOnChannel(ch <-chan tmpubsub.Message) {
ensureNoNewEvent(
ch,
ensureTimeout,
"We should be stuck waiting, not receiving new event on the channel")
}
func ensureNoNewRoundStep(stepCh <-chan interface{}) {
func ensureNoNewRoundStep(stepCh <-chan tmpubsub.Message) {
ensureNoNewEvent(
stepCh,
ensureTimeout,
"We should be stuck waiting, not receiving NewRoundStep event")
}
func ensureNoNewUnlock(unlockCh <-chan interface{}) {
func ensureNoNewUnlock(unlockCh <-chan tmpubsub.Message) {
ensureNoNewEvent(
unlockCh,
ensureTimeout,
"We should be stuck waiting, not receiving Unlock event")
}
func ensureNoNewTimeout(stepCh <-chan interface{}, timeout int64) {
timeoutDuration := time.Duration(timeout*5) * time.Nanosecond
func ensureNoNewTimeout(stepCh <-chan tmpubsub.Message, timeout int64) {
timeoutDuration := time.Duration(timeout*10) * time.Nanosecond
ensureNoNewEvent(
stepCh,
timeoutDuration,
"We should be stuck waiting, not receiving NewTimeout event")
}
func ensureNewEvent(
ch <-chan interface{},
height int64,
round int,
timeout time.Duration,
errorMessage string) {
func ensureNewEvent(ch <-chan tmpubsub.Message, height int64, round int, timeout time.Duration, errorMessage string) {
select {
case <-time.After(timeout):
panic(errorMessage)
case ev := <-ch:
rs, ok := ev.(types.EventDataRoundState)
case msg := <-ch:
roundStateEvent, ok := msg.Data().(types.EventDataRoundState)
if !ok {
panic(
fmt.Sprintf(
"expected a EventDataRoundState, got %v.Wrong subscription channel?",
reflect.TypeOf(rs)))
panic(fmt.Sprintf("expected a EventDataRoundState, got %T. Wrong subscription channel?",
msg.Data()))
}
if rs.Height != height {
panic(fmt.Sprintf("expected height %v, got %v", height, rs.Height))
if roundStateEvent.Height != height {
panic(fmt.Sprintf("expected height %v, got %v", height, roundStateEvent.Height))
}
if rs.Round != round {
panic(fmt.Sprintf("expected round %v, got %v", round, rs.Round))
if roundStateEvent.Round != round {
panic(fmt.Sprintf("expected round %v, got %v", round, roundStateEvent.Round))
}
// TODO: We could check also for a step at this point!
}
}
func ensureNewRoundStep(stepCh <-chan interface{}, height int64, round int) {
ensureNewEvent(
stepCh,
height,
round,
ensureTimeout,
"Timeout expired while waiting for NewStep event")
}
func ensureNewVote(voteCh <-chan interface{}, height int64, round int) {
func ensureNewRound(roundCh <-chan tmpubsub.Message, height int64, round int) {
select {
case <-time.After(ensureTimeout):
break
case v := <-voteCh:
edv, ok := v.(types.EventDataVote)
panic("Timeout expired while waiting for NewRound event")
case msg := <-roundCh:
newRoundEvent, ok := msg.Data().(types.EventDataNewRound)
if !ok {
panic(fmt.Sprintf("expected a *types.Vote, "+
"got %v. wrong subscription channel?",
reflect.TypeOf(v)))
panic(fmt.Sprintf("expected a EventDataNewRound, got %T. Wrong subscription channel?",
msg.Data()))
}
vote := edv.Vote
if vote.Height != height {
panic(fmt.Sprintf("expected height %v, got %v", height, vote.Height))
if newRoundEvent.Height != height {
panic(fmt.Sprintf("expected height %v, got %v", height, newRoundEvent.Height))
}
if vote.Round != round {
panic(fmt.Sprintf("expected round %v, got %v", round, vote.Round))
if newRoundEvent.Round != round {
panic(fmt.Sprintf("expected round %v, got %v", round, newRoundEvent.Round))
}
}
}
func ensureNewRound(roundCh <-chan interface{}, height int64, round int) {
ensureNewEvent(roundCh, height, round, ensureTimeout,
"Timeout expired while waiting for NewRound event")
}
func ensureNewTimeout(timeoutCh <-chan interface{}, height int64, round int, timeout int64) {
timeoutDuration := time.Duration(timeout*3) * time.Nanosecond
func ensureNewTimeout(timeoutCh <-chan tmpubsub.Message, height int64, round int, timeout int64) {
timeoutDuration := time.Duration(timeout*10) * time.Nanosecond
ensureNewEvent(timeoutCh, height, round, timeoutDuration,
"Timeout expired while waiting for NewTimeout event")
}
func ensureNewProposal(proposalCh <-chan interface{}, height int64, round int) {
ensureNewEvent(proposalCh, height, round, ensureTimeout,
"Timeout expired while waiting for NewProposal event")
func ensureNewProposal(proposalCh <-chan tmpubsub.Message, height int64, round int) {
select {
case <-time.After(ensureTimeout):
panic("Timeout expired while waiting for NewProposal event")
case msg := <-proposalCh:
proposalEvent, ok := msg.Data().(types.EventDataCompleteProposal)
if !ok {
panic(fmt.Sprintf("expected a EventDataCompleteProposal, got %T. Wrong subscription channel?",
msg.Data()))
}
if proposalEvent.Height != height {
panic(fmt.Sprintf("expected height %v, got %v", height, proposalEvent.Height))
}
if proposalEvent.Round != round {
panic(fmt.Sprintf("expected round %v, got %v", round, proposalEvent.Round))
}
}
}
func ensureNewValidBlock(validBlockCh <-chan interface{}, height int64, round int) {
func ensureNewValidBlock(validBlockCh <-chan tmpubsub.Message, height int64, round int) {
ensureNewEvent(validBlockCh, height, round, ensureTimeout,
"Timeout expired while waiting for NewValidBlock event")
}
func ensureNewBlock(blockCh <-chan interface{}, height int64) {
func ensureNewBlock(blockCh <-chan tmpubsub.Message, height int64) {
select {
case <-time.After(ensureTimeout):
panic("Timeout expired while waiting for NewBlock event")
case ev := <-blockCh:
block, ok := ev.(types.EventDataNewBlock)
case msg := <-blockCh:
blockEvent, ok := msg.Data().(types.EventDataNewBlock)
if !ok {
panic(fmt.Sprintf("expected a *types.EventDataNewBlock, "+
"got %v. wrong subscription channel?",
reflect.TypeOf(block)))
panic(fmt.Sprintf("expected a EventDataNewBlock, got %T. Wrong subscription channel?",
msg.Data()))
}
if block.Block.Height != height {
panic(fmt.Sprintf("expected height %v, got %v", height, block.Block.Height))
if blockEvent.Block.Height != height {
panic(fmt.Sprintf("expected height %v, got %v", height, blockEvent.Block.Height))
}
}
}
func ensureNewBlockHeader(blockCh <-chan interface{}, height int64, blockHash cmn.HexBytes) {
func ensureNewBlockHeader(blockCh <-chan tmpubsub.Message, height int64, blockHash cmn.HexBytes) {
select {
case <-time.After(ensureTimeout):
panic("Timeout expired while waiting for NewBlockHeader event")
case ev := <-blockCh:
blockHeader, ok := ev.(types.EventDataNewBlockHeader)
case msg := <-blockCh:
blockHeaderEvent, ok := msg.Data().(types.EventDataNewBlockHeader)
if !ok {
panic(fmt.Sprintf("expected a *types.EventDataNewBlockHeader, "+
"got %v. wrong subscription channel?",
reflect.TypeOf(blockHeader)))
panic(fmt.Sprintf("expected a EventDataNewBlockHeader, got %T. Wrong subscription channel?",
msg.Data()))
}
if blockHeader.Header.Height != height {
panic(fmt.Sprintf("expected height %v, got %v", height, blockHeader.Header.Height))
if blockHeaderEvent.Header.Height != height {
panic(fmt.Sprintf("expected height %v, got %v", height, blockHeaderEvent.Header.Height))
}
if !bytes.Equal(blockHeader.Header.Hash(), blockHash) {
panic(fmt.Sprintf("expected header %X, got %X", blockHash, blockHeader.Header.Hash()))
if !bytes.Equal(blockHeaderEvent.Header.Hash(), blockHash) {
panic(fmt.Sprintf("expected header %X, got %X", blockHash, blockHeaderEvent.Header.Hash()))
}
}
}
func ensureNewUnlock(unlockCh <-chan interface{}, height int64, round int) {
func ensureNewUnlock(unlockCh <-chan tmpubsub.Message, height int64, round int) {
ensureNewEvent(unlockCh, height, round, ensureTimeout,
"Timeout expired while waiting for NewUnlock event")
}
func ensureVote(voteCh <-chan interface{}, height int64, round int,
func ensureProposal(proposalCh <-chan tmpubsub.Message, height int64, round int, propID types.BlockID) {
select {
case <-time.After(ensureTimeout):
panic("Timeout expired while waiting for NewProposal event")
case msg := <-proposalCh:
proposalEvent, ok := msg.Data().(types.EventDataCompleteProposal)
if !ok {
panic(fmt.Sprintf("expected a EventDataCompleteProposal, got %T. Wrong subscription channel?",
msg.Data()))
}
if proposalEvent.Height != height {
panic(fmt.Sprintf("expected height %v, got %v", height, proposalEvent.Height))
}
if proposalEvent.Round != round {
panic(fmt.Sprintf("expected round %v, got %v", round, proposalEvent.Round))
}
if !proposalEvent.BlockID.Equals(propID) {
panic("Proposed block does not match expected block")
}
}
}
func ensurePrecommit(voteCh <-chan tmpubsub.Message, height int64, round int) {
ensureVote(voteCh, height, round, types.PrecommitType)
}
func ensurePrevote(voteCh <-chan tmpubsub.Message, height int64, round int) {
ensureVote(voteCh, height, round, types.PrevoteType)
}
func ensureVote(voteCh <-chan tmpubsub.Message, height int64, round int,
voteType types.SignedMsgType) {
select {
case <-time.After(ensureTimeout):
panic("Timeout expired while waiting for NewVote event")
case v := <-voteCh:
edv, ok := v.(types.EventDataVote)
case msg := <-voteCh:
voteEvent, ok := msg.Data().(types.EventDataVote)
if !ok {
panic(fmt.Sprintf("expected a *types.Vote, "+
"got %v. wrong subscription channel?",
reflect.TypeOf(v)))
panic(fmt.Sprintf("expected a EventDataVote, got %T. Wrong subscription channel?",
msg.Data()))
}
vote := edv.Vote
vote := voteEvent.Vote
if vote.Height != height {
panic(fmt.Sprintf("expected height %v, got %v", height, vote.Height))
}
@@ -492,15 +542,7 @@ func ensureVote(voteCh <-chan interface{}, height int64, round int,
}
}
func ensurePrecommit(voteCh <-chan interface{}, height int64, round int) {
ensureVote(voteCh, height, round, types.PrecommitType)
}
func ensurePrevote(voteCh <-chan interface{}, height int64, round int) {
ensureVote(voteCh, height, round, types.PrevoteType)
}
func ensureNewEventOnChannel(ch <-chan interface{}) {
func ensureNewEventOnChannel(ch <-chan tmpubsub.Message) {
select {
case <-time.After(ensureTimeout):
panic("Timeout expired while waiting for new activity on the channel")
@@ -524,59 +566,85 @@ func consensusLogger() log.Logger {
}).With("module", "consensus")
}
func randConsensusNet(nValidators int, testName string, tickerFunc func() TimeoutTicker, appFunc func() abci.Application, configOpts ...func(*cfg.Config)) []*ConsensusState {
func randConsensusNet(nValidators int, testName string, tickerFunc func() TimeoutTicker,
appFunc func() abci.Application, configOpts ...func(*cfg.Config)) ([]*ConsensusState, cleanupFunc) {
genDoc, privVals := randGenesisDoc(nValidators, false, 30)
css := make([]*ConsensusState, nValidators)
logger := consensusLogger()
configRootDirs := make([]string, 0, nValidators)
for i := 0; i < nValidators; i++ {
stateDB := dbm.NewMemDB() // each state needs its own db
state, _ := sm.LoadStateFromDBOrGenesisDoc(stateDB, genDoc)
thisConfig := ResetConfig(fmt.Sprintf("%s_%d", testName, i))
configRootDirs = append(configRootDirs, thisConfig.RootDir)
for _, opt := range configOpts {
opt(thisConfig)
}
ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
ensureDir(filepath.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
app := appFunc()
vals := types.TM2PB.ValidatorUpdates(state.Validators)
app.InitChain(abci.RequestInitChain{Validators: vals})
css[i] = newConsensusStateWithConfig(thisConfig, state, privVals[i], app)
css[i] = newConsensusStateWithConfigAndBlockStore(thisConfig, state, privVals[i], app, stateDB)
css[i].SetTimeoutTicker(tickerFunc())
css[i].SetLogger(logger.With("validator", i, "module", "consensus"))
}
return css
return css, func() {
for _, dir := range configRootDirs {
os.RemoveAll(dir)
}
}
}
// nPeers = nValidators + nNotValidator
func randConsensusNetWithPeers(nValidators, nPeers int, testName string, tickerFunc func() TimeoutTicker, appFunc func() abci.Application) []*ConsensusState {
func randConsensusNetWithPeers(nValidators, nPeers int, testName string, tickerFunc func() TimeoutTicker, appFunc func(string) abci.Application) ([]*ConsensusState, *types.GenesisDoc, *cfg.Config, cleanupFunc) {
genDoc, privVals := randGenesisDoc(nValidators, false, testMinPower)
css := make([]*ConsensusState, nPeers)
logger := consensusLogger()
var peer0Config *cfg.Config
configRootDirs := make([]string, 0, nPeers)
for i := 0; i < nPeers; i++ {
stateDB := dbm.NewMemDB() // each state needs its own db
state, _ := sm.LoadStateFromDBOrGenesisDoc(stateDB, genDoc)
thisConfig := ResetConfig(fmt.Sprintf("%s_%d", testName, i))
ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
configRootDirs = append(configRootDirs, thisConfig.RootDir)
ensureDir(filepath.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
if i == 0 {
peer0Config = thisConfig
}
var privVal types.PrivValidator
if i < nValidators {
privVal = privVals[i]
} else {
tempFile, err := ioutil.TempFile("", "priv_validator_")
tempKeyFile, err := ioutil.TempFile("", "priv_validator_key_")
if err != nil {
panic(err)
}
privVal = privval.GenFilePV(tempFile.Name())
tempStateFile, err := ioutil.TempFile("", "priv_validator_state_")
if err != nil {
panic(err)
}
privVal = privval.GenFilePV(tempKeyFile.Name(), tempStateFile.Name())
}
app := appFunc()
app := appFunc(path.Join(config.DBDir(), fmt.Sprintf("%s_%d", testName, i)))
vals := types.TM2PB.ValidatorUpdates(state.Validators)
if _, ok := app.(*kvstore.PersistentKVStoreApplication); ok {
state.Version.Consensus.App = kvstore.ProtocolVersion // simulate handshake, receive app version. If we don't do this, the replay test will fail
}
app.InitChain(abci.RequestInitChain{Validators: vals})
//sm.SaveState(stateDB, state) // height 1's validator info is already saved by LoadStateFromDBOrGenesisDoc above
css[i] = newConsensusStateWithConfig(thisConfig, state, privVal, app)
css[i].SetTimeoutTicker(tickerFunc())
css[i].SetLogger(logger.With("validator", i, "module", "consensus"))
}
return css
return css, genDoc, peer0Config, func() {
for _, dir := range configRootDirs {
os.RemoveAll(dir)
}
}
}
func getSwitchIndex(switches []*p2p.Switch, peer p2p.Peer) int {
@@ -586,7 +654,6 @@ func getSwitchIndex(switches []*p2p.Switch, peer p2p.Peer) int {
}
}
panic("didnt find peer in switches")
return -1
}
//-------------------------------------------------------------------------------
@@ -664,8 +731,7 @@ func (m *mockTicker) Chan() <-chan timeoutInfo {
return m.c
}
func (mockTicker) SetLogger(log.Logger) {
}
func (*mockTicker) SetLogger(log.Logger) {}
//------------------------------------
@@ -674,6 +740,13 @@ func newCounter() abci.Application {
}
func newPersistentKVStore() abci.Application {
dir, _ := ioutil.TempDir("/tmp", "persistent-kvstore")
dir, err := ioutil.TempDir("", "persistent-kvstore")
if err != nil {
panic(err)
}
return kvstore.NewPersistentKVStoreApplication(dir)
}
func newPersistentKVStoreWithPath(dbDir string) abci.Application {
return kvstore.NewPersistentKVStoreApplication(dbDir)
}

View File

@@ -3,6 +3,7 @@ package consensus
import (
"encoding/binary"
"fmt"
"os"
"testing"
"time"
@@ -10,19 +11,24 @@ import (
"github.com/tendermint/tendermint/abci/example/code"
abci "github.com/tendermint/tendermint/abci/types"
dbm "github.com/tendermint/tendermint/libs/db"
mempl "github.com/tendermint/tendermint/mempool"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
)
func init() {
config = ResetConfig("consensus_mempool_test")
// for testing
func assertMempool(txn txNotifier) mempl.Mempool {
return txn.(mempl.Mempool)
}
func TestMempoolNoProgressUntilTxsAvailable(t *testing.T) {
config := ResetConfig("consensus_mempool_txs_available_test")
defer os.RemoveAll(config.RootDir)
config.Consensus.CreateEmptyBlocks = false
state, privVals := randGenesisState(1, false, 10)
cs := newConsensusStateWithConfig(config, state, privVals[0], NewCounterApplication())
cs.mempool.EnableTxsAvailable()
assertMempool(cs.txNotifier).EnableTxsAvailable()
height, round := cs.Height, cs.Round
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock)
startTestRound(cs, height, round)
@@ -37,10 +43,11 @@ func TestMempoolNoProgressUntilTxsAvailable(t *testing.T) {
func TestMempoolProgressAfterCreateEmptyBlocksInterval(t *testing.T) {
config := ResetConfig("consensus_mempool_txs_available_test")
defer os.RemoveAll(config.RootDir)
config.Consensus.CreateEmptyBlocksInterval = ensureTimeout
state, privVals := randGenesisState(1, false, 10)
cs := newConsensusStateWithConfig(config, state, privVals[0], NewCounterApplication())
cs.mempool.EnableTxsAvailable()
assertMempool(cs.txNotifier).EnableTxsAvailable()
height, round := cs.Height, cs.Round
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock)
startTestRound(cs, height, round)
@@ -52,10 +59,11 @@ func TestMempoolProgressAfterCreateEmptyBlocksInterval(t *testing.T) {
func TestMempoolProgressInHigherRound(t *testing.T) {
config := ResetConfig("consensus_mempool_txs_available_test")
defer os.RemoveAll(config.RootDir)
config.Consensus.CreateEmptyBlocks = false
state, privVals := randGenesisState(1, false, 10)
cs := newConsensusStateWithConfig(config, state, privVals[0], NewCounterApplication())
cs.mempool.EnableTxsAvailable()
assertMempool(cs.txNotifier).EnableTxsAvailable()
height, round := cs.Height, cs.Round
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock)
newRoundCh := subscribe(cs.eventBus, types.EventQueryNewRound)
@@ -71,19 +79,19 @@ func TestMempoolProgressInHigherRound(t *testing.T) {
}
startTestRound(cs, height, round)
ensureNewRoundStep(newRoundCh, height, round) // first round at first height
ensureNewEventOnChannel(newBlockCh) // first block gets committed
ensureNewRound(newRoundCh, height, round) // first round at first height
ensureNewEventOnChannel(newBlockCh) // first block gets committed
height = height + 1 // moving to the next height
round = 0
ensureNewRoundStep(newRoundCh, height, round) // first round at next height
deliverTxsRange(cs, 0, 1) // we deliver txs, but dont set a proposal so we get the next round
ensureNewRound(newRoundCh, height, round) // first round at next height
deliverTxsRange(cs, 0, 1) // we deliver txs, but don't set a proposal so we get the next round
ensureNewTimeout(timeoutCh, height, round, cs.config.TimeoutPropose.Nanoseconds())
round = round + 1 // moving to the next round
ensureNewRoundStep(newRoundCh, height, round) // wait for the next round
ensureNewEventOnChannel(newBlockCh) // now we can commit the block
round = round + 1 // moving to the next round
ensureNewRound(newRoundCh, height, round) // wait for the next round
ensureNewEventOnChannel(newBlockCh) // now we can commit the block
}
func deliverTxsRange(cs *ConsensusState, start, end int) {
@@ -91,7 +99,7 @@ func deliverTxsRange(cs *ConsensusState, start, end int) {
for i := start; i < end; i++ {
txBytes := make([]byte, 8)
binary.BigEndian.PutUint64(txBytes, uint64(i))
err := cs.mempool.CheckTx(txBytes, nil)
err := assertMempool(cs.txNotifier).CheckTx(txBytes, nil)
if err != nil {
panic(fmt.Sprintf("Error after CheckTx: %v", err))
}
@@ -100,7 +108,9 @@ func deliverTxsRange(cs *ConsensusState, start, end int) {
func TestMempoolTxConcurrentWithCommit(t *testing.T) {
state, privVals := randGenesisState(1, false, 10)
cs := newConsensusState(state, privVals[0], NewCounterApplication())
blockDB := dbm.NewMemDB()
cs := newConsensusStateWithConfigAndBlockStore(config, state, privVals[0], NewCounterApplication(), blockDB)
sm.SaveState(blockDB, state)
height, round := cs.Height, cs.Round
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock)
@@ -111,9 +121,9 @@ func TestMempoolTxConcurrentWithCommit(t *testing.T) {
for nTxs := 0; nTxs < NTxs; {
ticker := time.NewTicker(time.Second * 30)
select {
case b := <-newBlockCh:
evt := b.(types.EventDataNewBlock)
nTxs += int(evt.Block.Header.NumTxs)
case msg := <-newBlockCh:
blockEvent := msg.Data().(types.EventDataNewBlock)
nTxs += int(blockEvent.Block.Header.NumTxs)
case <-ticker.C:
panic("Timed out waiting to commit blocks with transactions")
}
@@ -123,7 +133,9 @@ func TestMempoolTxConcurrentWithCommit(t *testing.T) {
func TestMempoolRmBadTx(t *testing.T) {
state, privVals := randGenesisState(1, false, 10)
app := NewCounterApplication()
cs := newConsensusState(state, privVals[0], app)
blockDB := dbm.NewMemDB()
cs := newConsensusStateWithConfigAndBlockStore(config, state, privVals[0], app, blockDB)
sm.SaveState(blockDB, state)
// increment the counter by 1
txBytes := make([]byte, 8)
@@ -141,7 +153,7 @@ func TestMempoolRmBadTx(t *testing.T) {
// Try to send the tx through the mempool.
// CheckTx should not err, but the app should return a bad abci code
// and the tx should get removed from the pool
err := cs.mempool.CheckTx(txBytes, func(r *abci.Response) {
err := assertMempool(cs.txNotifier).CheckTx(txBytes, func(r *abci.Response) {
if r.GetCheckTx().Code != code.CodeTypeBadNonce {
t.Fatalf("expected checktx to return bad nonce, got %v", r)
}
@@ -153,7 +165,7 @@ func TestMempoolRmBadTx(t *testing.T) {
// check for the tx
for {
txs := cs.mempool.ReapMaxBytesMaxGas(int64(len(txBytes)), -1)
txs := assertMempool(cs.txNotifier).ReapMaxBytesMaxGas(int64(len(txBytes)), -1)
if len(txs) == 0 {
emptyMempoolCh <- struct{}{}
return

View File

@@ -8,7 +8,11 @@ import (
stdprometheus "github.com/prometheus/client_golang/prometheus"
)
const MetricsSubsystem = "consensus"
const (
// MetricsSubsystem is a subsystem shared by all metrics exposed by this
// package.
MetricsSubsystem = "consensus"
)
// Metrics contains metrics exposed by this package.
type Metrics struct {
@@ -50,101 +54,107 @@ type Metrics struct {
}
// PrometheusMetrics returns Metrics build using Prometheus client library.
func PrometheusMetrics(namespace string) *Metrics {
// Optionally, labels can be provided along with their values ("foo",
// "fooValue").
func PrometheusMetrics(namespace string, labelsAndValues ...string) *Metrics {
labels := []string{}
for i := 0; i < len(labelsAndValues); i += 2 {
labels = append(labels, labelsAndValues[i])
}
return &Metrics{
Height: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "height",
Help: "Height of the chain.",
}, []string{}),
}, labels).With(labelsAndValues...),
Rounds: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "rounds",
Help: "Number of rounds.",
}, []string{}),
}, labels).With(labelsAndValues...),
Validators: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "validators",
Help: "Number of validators.",
}, []string{}),
}, labels).With(labelsAndValues...),
ValidatorsPower: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "validators_power",
Help: "Total power of all validators.",
}, []string{}),
}, labels).With(labelsAndValues...),
MissingValidators: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "missing_validators",
Help: "Number of validators who did not sign.",
}, []string{}),
}, labels).With(labelsAndValues...),
MissingValidatorsPower: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "missing_validators_power",
Help: "Total power of the missing validators.",
}, []string{}),
}, labels).With(labelsAndValues...),
ByzantineValidators: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "byzantine_validators",
Help: "Number of validators who tried to double sign.",
}, []string{}),
}, labels).With(labelsAndValues...),
ByzantineValidatorsPower: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "byzantine_validators_power",
Help: "Total power of the byzantine validators.",
}, []string{}),
}, labels).With(labelsAndValues...),
BlockIntervalSeconds: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "block_interval_seconds",
Help: "Time between this and the last block.",
}, []string{}),
}, labels).With(labelsAndValues...),
NumTxs: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "num_txs",
Help: "Number of transactions.",
}, []string{}),
}, labels).With(labelsAndValues...),
BlockSizeBytes: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "block_size_bytes",
Help: "Size of the block.",
}, []string{}),
}, labels).With(labelsAndValues...),
TotalTxs: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "total_txs",
Help: "Total number of transactions.",
}, []string{}),
}, labels).With(labelsAndValues...),
CommittedHeight: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "latest_block_height",
Help: "The latest block height.",
}, []string{}),
}, labels).With(labelsAndValues...),
FastSyncing: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "fast_syncing",
Help: "Whether or not a node is fast syncing. 1 if yes, 0 if no.",
}, []string{}),
}, labels).With(labelsAndValues...),
BlockParts: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "block_parts",
Help: "Number of blockparts transmitted by peer.",
}, []string{"peer_id"}),
}, append(labels, "peer_id")).With(labelsAndValues...),
}
}
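
The constructor change above threads optional label/value pairs through every gauge and counter: even-indexed entries of labelsAndValues become label names, and .With(labelsAndValues...) binds their values. A minimal usage sketch (the "chain_id" label and its value are illustrative, not from this diff):

package main

import "github.com/tendermint/tendermint/consensus"

func main() {
	// Every metric from this constructor now carries chain_id="test-chain"
	// in addition to its namespace and subsystem.
	metrics := consensus.PrometheusMetrics("tendermint", "chain_id", "test-chain")
	metrics.Height.Set(1)
}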


@@ -155,16 +155,24 @@ func (conR *ConsensusReactor) GetChannels() []*p2p.ChannelDescriptor {
}
}
// AddPeer implements Reactor
// InitPeer implements Reactor by creating a state for the peer.
func (conR *ConsensusReactor) InitPeer(peer p2p.Peer) p2p.Peer {
peerState := NewPeerState(peer).SetLogger(conR.Logger)
peer.Set(types.PeerStateKey, peerState)
return peer
}
// AddPeer implements Reactor by spawning multiple gossiping goroutines for the
// peer.
func (conR *ConsensusReactor) AddPeer(peer p2p.Peer) {
if !conR.IsRunning() {
return
}
// Create peerState for peer
peerState := NewPeerState(peer).SetLogger(conR.Logger)
peer.Set(types.PeerStateKey, peerState)
peerState, ok := peer.Get(types.PeerStateKey).(*PeerState)
if !ok {
panic(fmt.Sprintf("peer %v has no state", peer))
}
// Begin routines for this peer.
go conR.gossipDataRoutine(peer, peerState)
go conR.gossipVotesRoutine(peer, peerState)
@@ -177,13 +185,17 @@ func (conR *ConsensusReactor) AddPeer(peer p2p.Peer) {
}
}
// RemovePeer implements Reactor
// RemovePeer is a noop.
func (conR *ConsensusReactor) RemovePeer(peer p2p.Peer, reason interface{}) {
if !conR.IsRunning() {
return
}
// TODO
//peer.Get(PeerStateKey).(*PeerState).Disconnect()
// ps, ok := peer.Get(PeerStateKey).(*PeerState)
// if !ok {
// panic(fmt.Sprintf("Peer %v has no state", peer))
// }
// ps.Disconnect()
}
// Receive implements Reactor
@@ -214,7 +226,10 @@ func (conR *ConsensusReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte)
conR.Logger.Debug("Receive", "src", src, "chId", chID, "msg", msg)
// Get peer states
ps := src.Get(types.PeerStateKey).(*PeerState)
ps, ok := src.Get(types.PeerStateKey).(*PeerState)
if !ok {
panic(fmt.Sprintf("Peer %v has no state", src))
}
switch chID {
case StateChannel:
@@ -257,11 +272,6 @@ func (conR *ConsensusReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte)
BlockID: msg.BlockID,
Votes: ourVotes,
}))
case *ProposalHeartbeatMessage:
hb := msg.Heartbeat
conR.Logger.Debug("Received proposal heartbeat message",
"height", hb.Height, "round", hb.Round, "sequence", hb.Sequence,
"valIdx", hb.ValidatorIndex, "valAddr", hb.ValidatorAddress)
default:
conR.Logger.Error(fmt.Sprintf("Unknown message type %v", reflect.TypeOf(msg)))
}
@@ -293,9 +303,9 @@ func (conR *ConsensusReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte)
switch msg := msg.(type) {
case *VoteMessage:
cs := conR.conS
cs.mtx.Lock()
cs.mtx.RLock()
height, valSize, lastCommitSize := cs.Height, cs.Validators.Size(), cs.LastCommit.Size()
cs.mtx.Unlock()
cs.mtx.RUnlock()
ps.EnsureVoteBitArrays(height, valSize)
ps.EnsureVoteBitArrays(height-1, lastCommitSize)
ps.SetHasVote(msg.Vote)
@@ -362,8 +372,8 @@ func (conR *ConsensusReactor) FastSync() bool {
//--------------------------------------
// subscribeToBroadcastEvents subscribes for new round steps, votes and
// proposal heartbeats using internal pubsub defined on state to broadcast
// subscribeToBroadcastEvents subscribes for new round steps and votes
// using internal pubsub defined on state to broadcast
// them to peers upon receiving.
func (conR *ConsensusReactor) subscribeToBroadcastEvents() {
const subscriber = "consensus-reactor"
@@ -382,10 +392,6 @@ func (conR *ConsensusReactor) subscribeToBroadcastEvents() {
conR.broadcastHasVoteMessage(data.(*types.Vote))
})
conR.conS.evsw.AddListenerForEvent(subscriber, types.EventProposalHeartbeat,
func(data tmevents.EventData) {
conR.broadcastProposalHeartbeatMessage(data.(*types.Heartbeat))
})
}
func (conR *ConsensusReactor) unsubscribeFromBroadcastEvents() {
@@ -393,13 +399,6 @@ func (conR *ConsensusReactor) unsubscribeFromBroadcastEvents() {
conR.conS.evsw.RemoveListener(subscriber)
}
func (conR *ConsensusReactor) broadcastProposalHeartbeatMessage(hb *types.Heartbeat) {
conR.Logger.Debug("Broadcasting proposal heartbeat message",
"height", hb.Height, "round", hb.Round, "sequence", hb.Sequence)
msg := &ProposalHeartbeatMessage{hb}
conR.Switch.Broadcast(StateChannel, cdc.MustMarshalBinaryBare(msg))
}
func (conR *ConsensusReactor) broadcastNewRoundStepMessage(rs *cstypes.RoundState) {
nrsMsg := makeRoundStepMessage(rs)
conR.Switch.Broadcast(StateChannel, cdc.MustMarshalBinaryBare(nrsMsg))
@@ -428,7 +427,10 @@ func (conR *ConsensusReactor) broadcastHasVoteMessage(vote *types.Vote) {
/*
// TODO: Make this broadcast more selective.
for _, peer := range conR.Switch.Peers().List() {
ps := peer.Get(PeerStateKey).(*PeerState)
ps, ok := peer.Get(PeerStateKey).(*PeerState)
if !ok {
panic(fmt.Sprintf("Peer %v has no state", peer))
}
prs := ps.GetRoundState()
if prs.Height == vote.Height {
// TODO: Also filter on round?
@@ -444,9 +446,9 @@ func (conR *ConsensusReactor) broadcastHasVoteMessage(vote *types.Vote) {
func makeRoundStepMessage(rs *cstypes.RoundState) (nrsMsg *NewRoundStepMessage) {
nrsMsg = &NewRoundStepMessage{
Height: rs.Height,
Round: rs.Round,
Step: rs.Step,
Height: rs.Height,
Round: rs.Round,
Step: rs.Step,
SecondsSinceStartTime: int(time.Since(rs.StartTime).Seconds()),
LastCommitRound: rs.LastCommit.Round(),
}
@@ -497,7 +499,7 @@ OUTER_LOOP:
if prs.ProposalBlockParts == nil {
blockMeta := conR.conS.blockStore.LoadBlockMeta(prs.Height)
if blockMeta == nil {
cmn.PanicCrisis(fmt.Sprintf("Failed to load block %d when blockStore is at %d",
panic(fmt.Sprintf("Failed to load block %d when blockStore is at %d",
prs.Height, conR.conS.blockStore.Height()))
}
ps.InitProposalBlockParts(blockMeta.BlockID.PartsHeader)
@@ -826,7 +828,10 @@ func (conR *ConsensusReactor) peerStatsRoutine() {
continue
}
// Get peer state
ps := peer.Get(types.PeerStateKey).(*PeerState)
ps, ok := peer.Get(types.PeerStateKey).(*PeerState)
if !ok {
panic(fmt.Sprintf("Peer %v has no state", peer))
}
switch msg.Msg.(type) {
case *VoteMessage:
if numVotes := ps.RecordVote(); numVotes%votesToContributeToBecomeGoodPeer == 0 {
@@ -859,7 +864,10 @@ func (conR *ConsensusReactor) StringIndented(indent string) string {
s := "ConsensusReactor{\n"
s += indent + " " + conR.conS.StringIndented(indent+" ") + "\n"
for _, peer := range conR.Switch.Peers().List() {
ps := peer.Get(types.PeerStateKey).(*PeerState)
ps, ok := peer.Get(types.PeerStateKey).(*PeerState)
if !ok {
panic(fmt.Sprintf("Peer %v has no state", peer))
}
s += indent + " " + ps.StringIndented(indent+" ") + "\n"
}
s += indent + "}"
@@ -896,7 +904,7 @@ type PeerState struct {
peer p2p.Peer
logger log.Logger
mtx sync.Mutex `json:"-"` // NOTE: Modify below using setters, never directly.
mtx sync.Mutex // NOTE: Modify below using setters, never directly.
PRS cstypes.PeerRoundState `json:"round_state"` // Exposed.
Stats *peerStateStats `json:"stats"` // Exposed.
}
@@ -1017,7 +1025,11 @@ func (ps *PeerState) PickSendVote(votes types.VoteSetReader) bool {
if vote, ok := ps.PickVoteToSend(votes); ok {
msg := &VoteMessage{vote}
ps.logger.Debug("Sending vote message", "ps", ps, "vote", vote)
return ps.peer.Send(VoteChannel, cdc.MustMarshalBinaryBare(msg))
if ps.peer.Send(VoteChannel, cdc.MustMarshalBinaryBare(msg)) {
ps.SetHasVote(vote)
return true
}
return false
}
return false
}
@@ -1046,7 +1058,6 @@ func (ps *PeerState) PickVoteToSend(votes types.VoteSetReader) (vote *types.Vote
return nil, false // Not something worth sending
}
if index, ok := votes.BitArray().Sub(psVotes).PickRandom(); ok {
ps.setHasVote(height, round, type_, index)
return votes.GetByIndex(index), true
}
return nil, false
@@ -1107,7 +1118,7 @@ func (ps *PeerState) ensureCatchupCommitRound(height int64, round int, numValida
NOTE: This is wrong, 'round' could change.
e.g. if orig round is not the same as block LastCommit round.
if ps.CatchupCommitRound != -1 && ps.CatchupCommitRound != round {
cmn.PanicSanity(fmt.Sprintf("Conflicting CatchupCommitRound. Height: %v, Orig: %v, New: %v", height, ps.CatchupCommitRound, round))
panic(fmt.Sprintf("Conflicting CatchupCommitRound. Height: %v, Orig: %v, New: %v", height, ps.CatchupCommitRound, round))
}
*/
if ps.PRS.CatchupCommitRound == round {
@@ -1368,7 +1379,6 @@ func RegisterConsensusMessages(cdc *amino.Codec) {
cdc.RegisterConcrete(&HasVoteMessage{}, "tendermint/HasVote", nil)
cdc.RegisterConcrete(&VoteSetMaj23Message{}, "tendermint/VoteSetMaj23", nil)
cdc.RegisterConcrete(&VoteSetBitsMessage{}, "tendermint/VoteSetBits", nil)
cdc.RegisterConcrete(&ProposalHeartbeatMessage{}, "tendermint/ProposalHeartbeat", nil)
}
func decodeMsg(bz []byte) (msg ConsensusMessage, err error) {
@@ -1645,18 +1655,3 @@ func (m *VoteSetBitsMessage) String() string {
}
//-------------------------------------
// ProposalHeartbeatMessage is sent to signal that a node is alive and waiting for transactions for a proposal.
type ProposalHeartbeatMessage struct {
Heartbeat *types.Heartbeat
}
// ValidateBasic performs basic validation.
func (m *ProposalHeartbeatMessage) ValidateBasic() error {
return m.Heartbeat.ValidateBasic()
}
// String returns a string representation.
func (m *ProposalHeartbeatMessage) String() string {
return fmt.Sprintf("[HEARTBEAT %v]", m.Heartbeat)
}


@@ -14,7 +14,7 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/abci/client"
abcicli "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/example/kvstore"
abci "github.com/tendermint/tendermint/abci/types"
bc "github.com/tendermint/tendermint/blockchain"
@@ -23,20 +23,21 @@ import (
"github.com/tendermint/tendermint/libs/log"
mempl "github.com/tendermint/tendermint/mempool"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/p2p/mock"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
)
func init() {
config = ResetConfig("consensus_reactor_test")
}
//----------------------------------------------
// in-process testnets
func startConsensusNet(t *testing.T, css []*ConsensusState, N int) ([]*ConsensusReactor, []chan interface{}, []*types.EventBus) {
func startConsensusNet(t *testing.T, css []*ConsensusState, N int) (
[]*ConsensusReactor,
[]types.Subscription,
[]*types.EventBus,
) {
reactors := make([]*ConsensusReactor, N)
eventChans := make([]chan interface{}, N)
blocksSubs := make([]types.Subscription, 0)
eventBuses := make([]*types.EventBus, N)
for i := 0; i < N; i++ {
/*logger, err := tmflags.ParseLogLevel("consensus:info,*:error", logger, "info")
@@ -48,9 +49,13 @@ func startConsensusNet(t *testing.T, css []*ConsensusState, N int) ([]*Consensus
eventBuses[i] = css[i].eventBus
reactors[i].SetEventBus(eventBuses[i])
eventChans[i] = make(chan interface{}, 1)
err := eventBuses[i].Subscribe(context.Background(), testSubscriber, types.EventQueryNewBlock, eventChans[i])
blocksSub, err := eventBuses[i].Subscribe(context.Background(), testSubscriber, types.EventQueryNewBlock)
require.NoError(t, err)
blocksSubs = append(blocksSubs, blocksSub)
if css[i].state.LastBlockHeight == 0 { //simulate handle initChain in handshake
sm.SaveState(css[i].blockExec.DB(), css[i].state)
}
}
// make connected switches and start all reactors
p2p.MakeConnectedSwitches(config.P2P, N, func(i int, s *p2p.Switch) *p2p.Switch {
@@ -67,7 +72,7 @@ func startConsensusNet(t *testing.T, css []*ConsensusState, N int) ([]*Consensus
s := reactors[i].conS.GetState()
reactors[i].SwitchToConsensus(s, 0)
}
return reactors, eventChans, eventBuses
return reactors, blocksSubs, eventBuses
}
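
startConsensusNet now returns types.Subscription values instead of raw channels: the reworked event-bus API hands back a subscription whose Out() channel carries pubsub messages and whose Cancelled() channel signals termination. A self-contained sketch of the consumption pattern used throughout these tests (the helper name is hypothetical; the bus is assumed to be started):

package consensus

import (
	"context"

	"github.com/tendermint/tendermint/types"
)

// waitForOneBlock is a hypothetical helper (not in this diff) showing the
// new subscription pattern: Subscribe returns a types.Subscription,
// Out() delivers messages, and Cancelled() signals termination.
func waitForOneBlock(eventBus *types.EventBus) *types.Block {
	sub, err := eventBus.Subscribe(context.Background(), "example-subscriber", types.EventQueryNewBlock)
	if err != nil {
		panic(err)
	}
	select {
	case msg := <-sub.Out():
		return msg.Data().(types.EventDataNewBlock).Block
	case <-sub.Cancelled():
		return nil // subscription terminated before a block arrived
	}
}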
func stopConsensusNet(logger log.Logger, reactors []*ConsensusReactor, eventBuses []*types.EventBus) {
@@ -86,12 +91,13 @@ func stopConsensusNet(logger log.Logger, reactors []*ConsensusReactor, eventBuse
// Ensure a testnet makes blocks
func TestReactorBasic(t *testing.T) {
N := 4
css := randConsensusNet(N, "consensus_reactor_test", newMockTickerFunc(true), newCounter)
reactors, eventChans, eventBuses := startConsensusNet(t, css, N)
css, cleanup := randConsensusNet(N, "consensus_reactor_test", newMockTickerFunc(true), newCounter)
defer cleanup()
reactors, blocksSubs, eventBuses := startConsensusNet(t, css, N)
defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)
// wait till everyone makes the first new block
timeoutWaitGroup(t, N, func(j int) {
<-eventChans[j]
<-blocksSubs[j].Out()
}, css)
}
@@ -116,6 +122,7 @@ func TestReactorWithEvidence(t *testing.T) {
stateDB := dbm.NewMemDB() // each state needs its own db
state, _ := sm.LoadStateFromDBOrGenesisDoc(stateDB, genDoc)
thisConfig := ResetConfig(fmt.Sprintf("%s_%d", testName, i))
defer os.RemoveAll(thisConfig.RootDir)
ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
app := appFunc()
vals := types.TM2PB.ValidatorUpdates(state.Validators)
@@ -134,7 +141,7 @@ func TestReactorWithEvidence(t *testing.T) {
proxyAppConnCon := abcicli.NewLocalClient(mtx, app)
// Make Mempool
mempool := mempl.NewMempool(thisConfig.Mempool, proxyAppConnMem, 0)
mempool := mempl.NewCListMempool(thisConfig.Mempool, proxyAppConnMem, 0)
mempool.SetLogger(log.TestingLogger().With("module", "mempool"))
if thisConfig.Consensus.WaitForTxs() {
mempool.EnableTxsAvailable()
@@ -143,7 +150,8 @@ func TestReactorWithEvidence(t *testing.T) {
// mock the evidence pool
// everyone includes evidence of another double signing
vIdx := (i + 1) % nValidators
evpool := newMockEvidencePool(privVals[vIdx].GetAddress())
addr := privVals[vIdx].GetPubKey().Address()
evpool := newMockEvidencePool(addr)
// Make ConsensusState
blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyAppConnCon, mempool, evpool)
@@ -162,20 +170,20 @@ func TestReactorWithEvidence(t *testing.T) {
css[i] = cs
}
reactors, eventChans, eventBuses := startConsensusNet(t, css, nValidators)
reactors, blocksSubs, eventBuses := startConsensusNet(t, css, nValidators)
defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)
// wait till everyone makes the first new block with no evidence
timeoutWaitGroup(t, nValidators, func(j int) {
blockI := <-eventChans[j]
block := blockI.(types.EventDataNewBlock).Block
msg := <-blocksSubs[j].Out()
block := msg.Data().(types.EventDataNewBlock).Block
assert.True(t, len(block.Evidence.Evidence) == 0)
}, css)
// second block should have evidence
timeoutWaitGroup(t, nValidators, func(j int) {
blockI := <-eventChans[j]
block := blockI.(types.EventDataNewBlock).Block
msg := <-blocksSubs[j].Out()
block := msg.Data().(types.EventDataNewBlock).Block
assert.True(t, len(block.Evidence.Evidence) > 0)
}, css)
}
@@ -210,51 +218,86 @@ func (m *mockEvidencePool) Update(block *types.Block, state sm.State) {
}
m.height++
}
func (m *mockEvidencePool) IsCommitted(types.Evidence) bool { return false }
//------------------------------------
// Ensure a testnet sends proposal heartbeats and makes blocks when there are txs
func TestReactorProposalHeartbeats(t *testing.T) {
// Ensure a testnet makes blocks when there are txs
func TestReactorCreatesBlockWhenEmptyBlocksFalse(t *testing.T) {
N := 4
css := randConsensusNet(N, "consensus_reactor_test", newMockTickerFunc(true), newCounter,
css, cleanup := randConsensusNet(N, "consensus_reactor_test", newMockTickerFunc(true), newCounter,
func(c *cfg.Config) {
c.Consensus.CreateEmptyBlocks = false
})
reactors, eventChans, eventBuses := startConsensusNet(t, css, N)
defer cleanup()
reactors, blocksSubs, eventBuses := startConsensusNet(t, css, N)
defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)
heartbeatChans := make([]chan interface{}, N)
var err error
for i := 0; i < N; i++ {
heartbeatChans[i] = make(chan interface{}, 1)
err = eventBuses[i].Subscribe(context.Background(), testSubscriber, types.EventQueryProposalHeartbeat, heartbeatChans[i])
require.NoError(t, err)
}
// wait till everyone sends a proposal heartbeat
timeoutWaitGroup(t, N, func(j int) {
<-heartbeatChans[j]
}, css)
// send a tx
if err := css[3].mempool.CheckTx([]byte{1, 2, 3}, nil); err != nil {
if err := assertMempool(css[3].txNotifier).CheckTx([]byte{1, 2, 3}, nil); err != nil {
//t.Fatal(err)
}
// wait till everyone makes the first new block
timeoutWaitGroup(t, N, func(j int) {
<-eventChans[j]
<-blocksSubs[j].Out()
}, css)
}
func TestReactorReceiveDoesNotPanicIfAddPeerHasntBeenCalledYet(t *testing.T) {
N := 1
css, cleanup := randConsensusNet(N, "consensus_reactor_test", newMockTickerFunc(true), newCounter)
defer cleanup()
reactors, _, eventBuses := startConsensusNet(t, css, N)
defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)
var (
reactor = reactors[0]
peer = mock.NewPeer(nil)
msg = cdc.MustMarshalBinaryBare(&HasVoteMessage{Height: 1, Round: 1, Index: 1, Type: types.PrevoteType})
)
reactor.InitPeer(peer)
// simulate switch calling Receive before AddPeer
assert.NotPanics(t, func() {
reactor.Receive(StateChannel, peer, msg)
reactor.AddPeer(peer)
})
}
func TestReactorReceivePanicsIfInitPeerHasntBeenCalledYet(t *testing.T) {
N := 1
css, cleanup := randConsensusNet(N, "consensus_reactor_test", newMockTickerFunc(true), newCounter)
defer cleanup()
reactors, _, eventBuses := startConsensusNet(t, css, N)
defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)
var (
reactor = reactors[0]
peer = mock.NewPeer(nil)
msg = cdc.MustMarshalBinaryBare(&HasVoteMessage{Height: 1, Round: 1, Index: 1, Type: types.PrevoteType})
)
// we should call InitPeer here
// simulate switch calling Receive before AddPeer
assert.Panics(t, func() {
reactor.Receive(StateChannel, peer, msg)
})
}
// Test we record stats about votes and block parts from other peers.
func TestReactorRecordsVotesAndBlockParts(t *testing.T) {
N := 4
css := randConsensusNet(N, "consensus_reactor_test", newMockTickerFunc(true), newCounter)
reactors, eventChans, eventBuses := startConsensusNet(t, css, N)
css, cleanup := randConsensusNet(N, "consensus_reactor_test", newMockTickerFunc(true), newCounter)
defer cleanup()
reactors, blocksSubs, eventBuses := startConsensusNet(t, css, N)
defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)
// wait till everyone makes the first new block
timeoutWaitGroup(t, N, func(j int) {
<-eventChans[j]
<-blocksSubs[j].Out()
}, css)
// Get peer
@@ -272,19 +315,21 @@ func TestReactorRecordsVotesAndBlockParts(t *testing.T) {
func TestReactorVotingPowerChange(t *testing.T) {
nVals := 4
logger := log.TestingLogger()
css := randConsensusNet(nVals, "consensus_voting_power_changes_test", newMockTickerFunc(true), newPersistentKVStore)
reactors, eventChans, eventBuses := startConsensusNet(t, css, nVals)
css, cleanup := randConsensusNet(nVals, "consensus_voting_power_changes_test", newMockTickerFunc(true), newPersistentKVStore)
defer cleanup()
reactors, blocksSubs, eventBuses := startConsensusNet(t, css, nVals)
defer stopConsensusNet(logger, reactors, eventBuses)
// map of active validators
activeVals := make(map[string]struct{})
for i := 0; i < nVals; i++ {
activeVals[string(css[i].privValidator.GetAddress())] = struct{}{}
addr := css[i].privValidator.GetPubKey().Address()
activeVals[string(addr)] = struct{}{}
}
// wait till everyone makes block 1
timeoutWaitGroup(t, nVals, func(j int) {
<-eventChans[j]
<-blocksSubs[j].Out()
}, css)
//---------------------------------------------------------------------------
@@ -295,10 +340,10 @@ func TestReactorVotingPowerChange(t *testing.T) {
updateValidatorTx := kvstore.MakeValSetChangeTx(val1PubKeyABCI, 25)
previousTotalVotingPower := css[0].GetRoundState().LastValidators.TotalVotingPower()
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css, updateValidatorTx)
waitForAndValidateBlockWithTx(t, nVals, activeVals, eventChans, css, updateValidatorTx)
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
waitForAndValidateBlock(t, nVals, activeVals, blocksSubs, css, updateValidatorTx)
waitForAndValidateBlockWithTx(t, nVals, activeVals, blocksSubs, css, updateValidatorTx)
waitForAndValidateBlock(t, nVals, activeVals, blocksSubs, css)
waitForAndValidateBlock(t, nVals, activeVals, blocksSubs, css)
if css[0].GetRoundState().LastValidators.TotalVotingPower() == previousTotalVotingPower {
t.Fatalf("expected voting power to change (before: %d, after: %d)", previousTotalVotingPower, css[0].GetRoundState().LastValidators.TotalVotingPower())
@@ -307,10 +352,10 @@ func TestReactorVotingPowerChange(t *testing.T) {
updateValidatorTx = kvstore.MakeValSetChangeTx(val1PubKeyABCI, 2)
previousTotalVotingPower = css[0].GetRoundState().LastValidators.TotalVotingPower()
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css, updateValidatorTx)
waitForAndValidateBlockWithTx(t, nVals, activeVals, eventChans, css, updateValidatorTx)
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
waitForAndValidateBlock(t, nVals, activeVals, blocksSubs, css, updateValidatorTx)
waitForAndValidateBlockWithTx(t, nVals, activeVals, blocksSubs, css, updateValidatorTx)
waitForAndValidateBlock(t, nVals, activeVals, blocksSubs, css)
waitForAndValidateBlock(t, nVals, activeVals, blocksSubs, css)
if css[0].GetRoundState().LastValidators.TotalVotingPower() == previousTotalVotingPower {
t.Fatalf("expected voting power to change (before: %d, after: %d)", previousTotalVotingPower, css[0].GetRoundState().LastValidators.TotalVotingPower())
@@ -319,10 +364,10 @@ func TestReactorVotingPowerChange(t *testing.T) {
updateValidatorTx = kvstore.MakeValSetChangeTx(val1PubKeyABCI, 26)
previousTotalVotingPower = css[0].GetRoundState().LastValidators.TotalVotingPower()
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css, updateValidatorTx)
waitForAndValidateBlockWithTx(t, nVals, activeVals, eventChans, css, updateValidatorTx)
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css)
waitForAndValidateBlock(t, nVals, activeVals, blocksSubs, css, updateValidatorTx)
waitForAndValidateBlockWithTx(t, nVals, activeVals, blocksSubs, css, updateValidatorTx)
waitForAndValidateBlock(t, nVals, activeVals, blocksSubs, css)
waitForAndValidateBlock(t, nVals, activeVals, blocksSubs, css)
if css[0].GetRoundState().LastValidators.TotalVotingPower() == previousTotalVotingPower {
t.Fatalf("expected voting power to change (before: %d, after: %d)", previousTotalVotingPower, css[0].GetRoundState().LastValidators.TotalVotingPower())
@@ -332,22 +377,24 @@ func TestReactorVotingPowerChange(t *testing.T) {
func TestReactorValidatorSetChanges(t *testing.T) {
nPeers := 7
nVals := 4
css := randConsensusNetWithPeers(nVals, nPeers, "consensus_val_set_changes_test", newMockTickerFunc(true), newPersistentKVStore)
css, _, _, cleanup := randConsensusNetWithPeers(nVals, nPeers, "consensus_val_set_changes_test", newMockTickerFunc(true), newPersistentKVStoreWithPath)
defer cleanup()
logger := log.TestingLogger()
reactors, eventChans, eventBuses := startConsensusNet(t, css, nPeers)
reactors, blocksSubs, eventBuses := startConsensusNet(t, css, nPeers)
defer stopConsensusNet(logger, reactors, eventBuses)
// map of active validators
activeVals := make(map[string]struct{})
for i := 0; i < nVals; i++ {
activeVals[string(css[i].privValidator.GetAddress())] = struct{}{}
addr := css[i].privValidator.GetPubKey().Address()
activeVals[string(addr)] = struct{}{}
}
// wait till everyone makes block 1
timeoutWaitGroup(t, nPeers, func(j int) {
<-eventChans[j]
<-blocksSubs[j].Out()
}, css)
//---------------------------------------------------------------------------
@@ -360,22 +407,22 @@ func TestReactorValidatorSetChanges(t *testing.T) {
// wait till everyone makes block 2
// ensure the commit includes all validators
// send newValTx to change vals in block 3
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css, newValidatorTx1)
waitForAndValidateBlock(t, nPeers, activeVals, blocksSubs, css, newValidatorTx1)
// wait till everyone makes block 3.
// it includes the commit for block 2, which is by the original validator set
waitForAndValidateBlockWithTx(t, nPeers, activeVals, eventChans, css, newValidatorTx1)
waitForAndValidateBlockWithTx(t, nPeers, activeVals, blocksSubs, css, newValidatorTx1)
// wait till everyone makes block 4.
// it includes the commit for block 3, which is by the original validator set
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
waitForAndValidateBlock(t, nPeers, activeVals, blocksSubs, css)
// the commits for block 4 should be with the updated validator set
activeVals[string(newValidatorPubKey1.Address())] = struct{}{}
// wait till everyone makes block 5
// it includes the commit for block 4, which should have the updated validator set
waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, eventChans, css)
waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, blocksSubs, css)
//---------------------------------------------------------------------------
logger.Info("---------------------------- Testing changing the voting power of one validator")
@@ -385,10 +432,10 @@ func TestReactorValidatorSetChanges(t *testing.T) {
updateValidatorTx1 := kvstore.MakeValSetChangeTx(updatePubKey1ABCI, 25)
previousTotalVotingPower := css[nVals].GetRoundState().LastValidators.TotalVotingPower()
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css, updateValidatorTx1)
waitForAndValidateBlockWithTx(t, nPeers, activeVals, eventChans, css, updateValidatorTx1)
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, eventChans, css)
waitForAndValidateBlock(t, nPeers, activeVals, blocksSubs, css, updateValidatorTx1)
waitForAndValidateBlockWithTx(t, nPeers, activeVals, blocksSubs, css, updateValidatorTx1)
waitForAndValidateBlock(t, nPeers, activeVals, blocksSubs, css)
waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, blocksSubs, css)
if css[nVals].GetRoundState().LastValidators.TotalVotingPower() == previousTotalVotingPower {
t.Errorf("expected voting power to change (before: %d, after: %d)", previousTotalVotingPower, css[nVals].GetRoundState().LastValidators.TotalVotingPower())
@@ -405,12 +452,12 @@ func TestReactorValidatorSetChanges(t *testing.T) {
newVal3ABCI := types.TM2PB.PubKey(newValidatorPubKey3)
newValidatorTx3 := kvstore.MakeValSetChangeTx(newVal3ABCI, testMinPower)
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css, newValidatorTx2, newValidatorTx3)
waitForAndValidateBlockWithTx(t, nPeers, activeVals, eventChans, css, newValidatorTx2, newValidatorTx3)
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
waitForAndValidateBlock(t, nPeers, activeVals, blocksSubs, css, newValidatorTx2, newValidatorTx3)
waitForAndValidateBlockWithTx(t, nPeers, activeVals, blocksSubs, css, newValidatorTx2, newValidatorTx3)
waitForAndValidateBlock(t, nPeers, activeVals, blocksSubs, css)
activeVals[string(newValidatorPubKey2.Address())] = struct{}{}
activeVals[string(newValidatorPubKey3.Address())] = struct{}{}
waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, eventChans, css)
waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, blocksSubs, css)
//---------------------------------------------------------------------------
logger.Info("---------------------------- Testing removing two validators at once")
@@ -418,61 +465,70 @@ func TestReactorValidatorSetChanges(t *testing.T) {
removeValidatorTx2 := kvstore.MakeValSetChangeTx(newVal2ABCI, 0)
removeValidatorTx3 := kvstore.MakeValSetChangeTx(newVal3ABCI, 0)
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css, removeValidatorTx2, removeValidatorTx3)
waitForAndValidateBlockWithTx(t, nPeers, activeVals, eventChans, css, removeValidatorTx2, removeValidatorTx3)
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css)
waitForAndValidateBlock(t, nPeers, activeVals, blocksSubs, css, removeValidatorTx2, removeValidatorTx3)
waitForAndValidateBlockWithTx(t, nPeers, activeVals, blocksSubs, css, removeValidatorTx2, removeValidatorTx3)
waitForAndValidateBlock(t, nPeers, activeVals, blocksSubs, css)
delete(activeVals, string(newValidatorPubKey2.Address()))
delete(activeVals, string(newValidatorPubKey3.Address()))
waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, eventChans, css)
waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, blocksSubs, css)
}
// Check we can make blocks with skip_timeout_commit=false
func TestReactorWithTimeoutCommit(t *testing.T) {
N := 4
css := randConsensusNet(N, "consensus_reactor_with_timeout_commit_test", newMockTickerFunc(false), newCounter)
css, cleanup := randConsensusNet(N, "consensus_reactor_with_timeout_commit_test", newMockTickerFunc(false), newCounter)
defer cleanup()
// override default SkipTimeoutCommit == true for tests
for i := 0; i < N; i++ {
css[i].config.SkipTimeoutCommit = false
}
reactors, eventChans, eventBuses := startConsensusNet(t, css, N-1)
reactors, blocksSubs, eventBuses := startConsensusNet(t, css, N-1)
defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)
// wait till everyone makes the first new block
timeoutWaitGroup(t, N-1, func(j int) {
<-eventChans[j]
<-blocksSubs[j].Out()
}, css)
}
func waitForAndValidateBlock(t *testing.T, n int, activeVals map[string]struct{}, eventChans []chan interface{}, css []*ConsensusState, txs ...[]byte) {
func waitForAndValidateBlock(
t *testing.T,
n int,
activeVals map[string]struct{},
blocksSubs []types.Subscription,
css []*ConsensusState,
txs ...[]byte,
) {
timeoutWaitGroup(t, n, func(j int) {
css[j].Logger.Debug("waitForAndValidateBlock")
newBlockI, ok := <-eventChans[j]
if !ok {
return
}
newBlock := newBlockI.(types.EventDataNewBlock).Block
msg := <-blocksSubs[j].Out()
newBlock := msg.Data().(types.EventDataNewBlock).Block
css[j].Logger.Debug("waitForAndValidateBlock: Got block", "height", newBlock.Height)
err := validateBlock(newBlock, activeVals)
assert.Nil(t, err)
for _, tx := range txs {
err := css[j].mempool.CheckTx(tx, nil)
err := assertMempool(css[j].txNotifier).CheckTx(tx, nil)
assert.Nil(t, err)
}
}, css)
}
func waitForAndValidateBlockWithTx(t *testing.T, n int, activeVals map[string]struct{}, eventChans []chan interface{}, css []*ConsensusState, txs ...[]byte) {
func waitForAndValidateBlockWithTx(
t *testing.T,
n int,
activeVals map[string]struct{},
blocksSubs []types.Subscription,
css []*ConsensusState,
txs ...[]byte,
) {
timeoutWaitGroup(t, n, func(j int) {
ntxs := 0
BLOCK_TX_LOOP:
for {
css[j].Logger.Debug("waitForAndValidateBlockWithTx", "ntxs", ntxs)
newBlockI, ok := <-eventChans[j]
if !ok {
return
}
newBlock := newBlockI.(types.EventDataNewBlock).Block
msg := <-blocksSubs[j].Out()
newBlock := msg.Data().(types.EventDataNewBlock).Block
css[j].Logger.Debug("waitForAndValidateBlockWithTx: Got block", "height", newBlock.Height)
err := validateBlock(newBlock, activeVals)
assert.Nil(t, err)
@@ -493,18 +549,21 @@ func waitForAndValidateBlockWithTx(t *testing.T, n int, activeVals map[string]st
}, css)
}
func waitForBlockWithUpdatedValsAndValidateIt(t *testing.T, n int, updatedVals map[string]struct{}, eventChans []chan interface{}, css []*ConsensusState) {
func waitForBlockWithUpdatedValsAndValidateIt(
t *testing.T,
n int,
updatedVals map[string]struct{},
blocksSubs []types.Subscription,
css []*ConsensusState,
) {
timeoutWaitGroup(t, n, func(j int) {
var newBlock *types.Block
LOOP:
for {
css[j].Logger.Debug("waitForBlockWithUpdatedValsAndValidateIt")
newBlockI, ok := <-eventChans[j]
if !ok {
return
}
newBlock = newBlockI.(types.EventDataNewBlock).Block
msg := <-blocksSubs[j].Out()
newBlock = msg.Data().(types.EventDataNewBlock).Block
if newBlock.LastCommit.Size() == len(updatedVals) {
css[j].Logger.Debug("waitForBlockWithUpdatedValsAndValidateIt: Got block", "height", newBlock.Height)
break LOOP


@@ -6,20 +6,21 @@ import (
"hash/crc32"
"io"
"reflect"
//"strconv"
//"strings"
"time"
abci "github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/version"
//auto "github.com/tendermint/tendermint/libs/autofile"
cmn "github.com/tendermint/tendermint/libs/common"
dbm "github.com/tendermint/tendermint/libs/db"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/mock"
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
"github.com/tendermint/tendermint/version"
)
var crc32c = crc32.MakeTable(crc32.Castagnoli)
@@ -41,7 +42,7 @@ var crc32c = crc32.MakeTable(crc32.Castagnoli)
// Unmarshal and apply a single message to the consensus state as if it were
// received in receiveRoutine. Lines that start with "#" are ignored.
// NOTE: receiveRoutine should not be running.
func (cs *ConsensusState) readReplayMessage(msg *TimedWALMessage, newStepCh chan interface{}) error {
func (cs *ConsensusState) readReplayMessage(msg *TimedWALMessage, newStepSub types.Subscription) error {
// Skip meta messages which exist for demarcating boundaries.
if _, ok := msg.Msg.(EndHeightMessage); ok {
return nil
@@ -53,15 +54,17 @@ func (cs *ConsensusState) readReplayMessage(msg *TimedWALMessage, newStepCh chan
cs.Logger.Info("Replay: New Step", "height", m.Height, "round", m.Round, "step", m.Step)
// these are playback checks
ticker := time.After(time.Second * 2)
if newStepCh != nil {
if newStepSub != nil {
select {
case mi := <-newStepCh:
m2 := mi.(types.EventDataRoundState)
case stepMsg := <-newStepSub.Out():
m2 := stepMsg.Data().(types.EventDataRoundState)
if m.Height != m2.Height || m.Round != m2.Round || m.Step != m2.Step {
return fmt.Errorf("RoundState mismatch. Got %v; Expected %v", m2, m)
}
case <-newStepSub.Cancelled():
return fmt.Errorf("Failed to read off newStepSub.Out(). newStepSub was cancelled")
case <-ticker:
return fmt.Errorf("Failed to read off newStepCh")
return fmt.Errorf("Failed to read off newStepSub.Out()")
}
}
case msgInfo:
@@ -143,8 +146,8 @@ func (cs *ConsensusState) catchupReplay(csHeight int64) error {
if err == io.EOF {
break
} else if IsDataCorruptionError(err) {
cs.Logger.Debug("data has been corrupted in last height of consensus WAL", "err", err, "height", csHeight)
panic(fmt.Sprintf("data has been corrupted (%v) in last height %d of consensus WAL", err, csHeight))
cs.Logger.Error("data has been corrupted in last height of consensus WAL", "err", err, "height", csHeight)
return err
} else if err != nil {
return err
}
@@ -196,6 +199,7 @@ type Handshaker struct {
stateDB dbm.DB
initialState sm.State
store sm.BlockStore
eventBus types.BlockEventPublisher
genDoc *types.GenesisDoc
logger log.Logger
@@ -209,6 +213,7 @@ func NewHandshaker(stateDB dbm.DB, state sm.State,
stateDB: stateDB,
initialState: state,
store: store,
eventBus: types.NopEventBus{},
genDoc: genDoc,
logger: log.NewNopLogger(),
nBlocks: 0,
@@ -219,6 +224,13 @@ func (h *Handshaker) SetLogger(l log.Logger) {
h.logger = l
}
// SetEventBus - sets the event bus for publishing block related events.
// If not called, it defaults to types.NopEventBus.
func (h *Handshaker) SetEventBus(eventBus types.BlockEventPublisher) {
h.eventBus = eventBus
}
// NBlocks returns the number of blocks applied to the state.
func (h *Handshaker) NBlocks() int {
return h.nBlocks
}
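
Giving the Handshaker an event bus means blocks replayed during the handshake publish the same events as normally committed blocks. The intended wiring also appears in newConsensusStateForReplay further down; in sketch form (the helper name is hypothetical):

package consensus

import (
	dbm "github.com/tendermint/tendermint/libs/db"
	"github.com/tendermint/tendermint/proxy"
	sm "github.com/tendermint/tendermint/state"
	"github.com/tendermint/tendermint/types"
)

// handshakeWithEvents is a hypothetical wiring sketch (not in this diff):
// set the event bus before Handshake so blocks replayed during the
// handshake are published to subscribers.
func handshakeWithEvents(stateDB dbm.DB, state sm.State, store sm.BlockStore,
	genDoc *types.GenesisDoc, eventBus *types.EventBus, proxyApp proxy.AppConns) error {
	h := NewHandshaker(stateDB, state, store, genDoc)
	h.SetEventBus(eventBus)
	return h.Handshake(proxyApp)
}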
@@ -246,12 +258,15 @@ func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
)
// Set AppVersion on the state.
h.initialState.Version.Consensus.App = version.Protocol(res.AppVersion)
if h.initialState.Version.Consensus.App != version.Protocol(res.AppVersion) {
h.initialState.Version.Consensus.App = version.Protocol(res.AppVersion)
sm.SaveState(h.stateDB, h.initialState)
}
// Replay blocks up to the latest in the blockstore.
_, err = h.ReplayBlocks(h.initialState, appHash, blockHeight, proxyApp)
if err != nil {
return fmt.Errorf("Error on replay: %v", err)
return fmt.Errorf("error on replay: %v", err)
}
h.logger.Info("Completed ABCI Handshake - Tendermint and App are synced",
@@ -262,7 +277,8 @@ func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
return nil
}
// Replay all blocks since appBlockHeight and ensure the result matches the current state.
// ReplayBlocks replays all blocks since appBlockHeight and ensures the result
// matches the current state.
// Returns the final AppHash or an error.
func (h *Handshaker) ReplayBlocks(
state sm.State,
@@ -276,7 +292,12 @@ func (h *Handshaker) ReplayBlocks(
// If appBlockHeight == 0 it means that we are at genesis and hence should send InitChain.
if appBlockHeight == 0 {
nextVals := types.TM2PB.ValidatorUpdates(state.NextValidators) // state.Validators would work too.
validators := make([]*types.Validator, len(h.genDoc.Validators))
for i, val := range h.genDoc.Validators {
validators[i] = types.NewValidator(val.PubKey, val.Power)
}
validatorSet := types.NewValidatorSet(validators)
nextVals := types.TM2PB.ValidatorUpdates(validatorSet)
csParams := types.TM2PB.ConsensusParams(h.genDoc.ConsensusParams)
req := abci.RequestInitChain{
Time: h.genDoc.GenesisTime,
@@ -290,36 +311,45 @@ func (h *Handshaker) ReplayBlocks(
return nil, err
}
// If the app returned validators or consensus params, update the state.
if len(res.Validators) > 0 {
vals, err := types.PB2TM.ValidatorUpdates(res.Validators)
if err != nil {
return nil, err
if stateBlockHeight == 0 { // we only update the state when we are at the initial state
// If the app returned validators or consensus params, update the state.
if len(res.Validators) > 0 {
vals, err := types.PB2TM.ValidatorUpdates(res.Validators)
if err != nil {
return nil, err
}
state.Validators = types.NewValidatorSet(vals)
state.NextValidators = types.NewValidatorSet(vals)
} else {
// If validator set is not set in genesis and still empty after InitChain, exit.
if len(h.genDoc.Validators) == 0 {
return nil, fmt.Errorf("validator set is nil in genesis and still empty after InitChain")
}
}
state.Validators = types.NewValidatorSet(vals)
state.NextValidators = types.NewValidatorSet(vals)
if res.ConsensusParams != nil {
state.ConsensusParams = state.ConsensusParams.Update(res.ConsensusParams)
}
sm.SaveState(h.stateDB, state)
}
if res.ConsensusParams != nil {
state.ConsensusParams = types.PB2TM.ConsensusParams(res.ConsensusParams)
}
sm.SaveState(h.stateDB, state)
}
// First handle edge cases and constraints on the storeBlockHeight.
if storeBlockHeight == 0 {
return appHash, checkAppHash(state, appHash)
assertAppHashEqualsOneFromState(appHash, state)
return appHash, nil
} else if storeBlockHeight < appBlockHeight {
// the app should never be ahead of the store (but this is under app's control)
return appHash, sm.ErrAppBlockHeightTooHigh{storeBlockHeight, appBlockHeight}
return appHash, sm.ErrAppBlockHeightTooHigh{CoreHeight: storeBlockHeight, AppHeight: appBlockHeight}
} else if storeBlockHeight < stateBlockHeight {
// the state should never be ahead of the store (this is under tendermint's control)
cmn.PanicSanity(fmt.Sprintf("StateBlockHeight (%d) > StoreBlockHeight (%d)", stateBlockHeight, storeBlockHeight))
panic(fmt.Sprintf("StateBlockHeight (%d) > StoreBlockHeight (%d)", stateBlockHeight, storeBlockHeight))
} else if storeBlockHeight > stateBlockHeight+1 {
// store should be at most one ahead of the state (this is under tendermint's control)
cmn.PanicSanity(fmt.Sprintf("StoreBlockHeight (%d) > StateBlockHeight + 1 (%d)", storeBlockHeight, stateBlockHeight+1))
panic(fmt.Sprintf("StoreBlockHeight (%d) > StateBlockHeight + 1 (%d)", storeBlockHeight, stateBlockHeight+1))
}
var err error
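
The panics above encode the height invariants handshake replay depends on. Spelled out as an illustrative check (the function and its error messages are a restatement, not code from this diff):

package consensus

import "fmt"

// checkHeightInvariants restates the relationships ReplayBlocks enforces
// between the app, block store, and state heights.
func checkHeightInvariants(appHeight, storeHeight, stateHeight int64) error {
	switch {
	case storeHeight < appHeight:
		// the app should never be ahead of the store
		return fmt.Errorf("app height %d ahead of store height %d", appHeight, storeHeight)
	case storeHeight < stateHeight:
		// the state should never be ahead of the store
		return fmt.Errorf("state height %d ahead of store height %d", stateHeight, storeHeight)
	case storeHeight > stateHeight+1:
		// the store may be at most one block ahead of the state
		return fmt.Errorf("store height %d more than one block ahead of state height %d", storeHeight, stateHeight)
	}
	return nil
}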
@@ -334,7 +364,8 @@ func (h *Handshaker) ReplayBlocks(
} else if appBlockHeight == storeBlockHeight {
// We're good!
return appHash, checkAppHash(state, appHash)
assertAppHashEqualsOneFromState(appHash, state)
return appHash, nil
}
} else if storeBlockHeight == stateBlockHeight+1 {
@@ -355,7 +386,7 @@ func (h *Handshaker) ReplayBlocks(
return state.AppHash, err
} else if appBlockHeight == storeBlockHeight {
// We ran Commit, but didn't save the state, so replayBlock with mock app
// We ran Commit, but didn't save the state, so replayBlock with mock app.
abciResponses, err := sm.LoadABCIResponses(h.stateDB, storeBlockHeight)
if err != nil {
return nil, err
@@ -368,8 +399,8 @@ func (h *Handshaker) ReplayBlocks(
}
cmn.PanicSanity("Should never happen")
return nil, nil
panic(fmt.Sprintf("uncovered case! appHeight: %d, storeHeight: %d, stateHeight: %d",
appBlockHeight, storeBlockHeight, stateBlockHeight))
}
func (h *Handshaker) replayBlocks(state sm.State, proxyApp proxy.AppConns, appBlockHeight, storeBlockHeight int64, mutateState bool) ([]byte, error) {
@@ -392,7 +423,12 @@ func (h *Handshaker) replayBlocks(state sm.State, proxyApp proxy.AppConns, appBl
for i := appBlockHeight + 1; i <= finalBlock; i++ {
h.logger.Info("Applying block", "height", i)
block := h.store.LoadBlock(i)
appHash, err = sm.ExecCommitBlock(proxyApp.Consensus(), block, h.logger, state.LastValidators, h.stateDB)
// Extra check to ensure the app was not changed in a way it shouldn't have.
if len(appHash) > 0 {
assertAppHashEqualsOneFromBlock(appHash, block)
}
appHash, err = sm.ExecCommitBlock(proxyApp.Consensus(), block, h.logger, h.stateDB)
if err != nil {
return nil, err
}
@@ -409,7 +445,8 @@ func (h *Handshaker) replayBlocks(state sm.State, proxyApp proxy.AppConns, appBl
appHash = state.AppHash
}
return appHash, checkAppHash(state, appHash)
assertAppHashEqualsOneFromState(appHash, state)
return appHash, nil
}
// ApplyBlock on the proxyApp with the last block.
@@ -417,7 +454,8 @@ func (h *Handshaker) replayBlock(state sm.State, height int64, proxyApp proxy.Ap
block := h.store.LoadBlock(height)
meta := h.store.LoadBlockMeta(height)
blockExec := sm.NewBlockExecutor(h.stateDB, h.logger, proxyApp, sm.MockMempool{}, sm.MockEvidencePool{})
blockExec := sm.NewBlockExecutor(h.stateDB, h.logger, proxyApp, mock.Mempool{}, sm.MockEvidencePool{})
blockExec.SetEventBus(h.eventBus)
var err error
state, err = blockExec.ApplyBlock(state, meta.BlockID, block)
@@ -430,11 +468,26 @@ func (h *Handshaker) replayBlock(state sm.State, height int64, proxyApp proxy.Ap
return state, nil
}
func checkAppHash(state sm.State, appHash []byte) error {
if !bytes.Equal(state.AppHash, appHash) {
panic(fmt.Errorf("Tendermint state.AppHash does not match AppHash after replay. Got %X, expected %X", appHash, state.AppHash).Error())
func assertAppHashEqualsOneFromBlock(appHash []byte, block *types.Block) {
if !bytes.Equal(appHash, block.AppHash) {
panic(fmt.Sprintf(`block.AppHash does not match AppHash after replay. Got %X, expected %X.
Block: %v
`,
appHash, block.AppHash, block))
}
}
func assertAppHashEqualsOneFromState(appHash []byte, state sm.State) {
if !bytes.Equal(appHash, state.AppHash) {
panic(fmt.Sprintf(`state.AppHash does not match AppHash after replay. Got
%X, expected %X.
State: %v
Did you reset Tendermint without resetting your application's data?`,
appHash, state.AppHash, state))
}
return nil
}
//--------------------------------------------------------------------------------
@@ -465,6 +518,9 @@ type mockProxyApp struct {
func (mock *mockProxyApp) DeliverTx(tx []byte) abci.ResponseDeliverTx {
r := mock.abciResponses.DeliverTx[mock.txCount]
mock.txCount++
if r == nil { // r can be nil: amino unmarshalling turns an empty ResponseDeliverTx into nil
return abci.ResponseDeliverTx{}
}
return *r
}


@@ -16,6 +16,7 @@ import (
cmn "github.com/tendermint/tendermint/libs/common"
dbm "github.com/tendermint/tendermint/libs/db"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/mock"
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
@@ -51,10 +52,9 @@ func (cs *ConsensusState) ReplayFile(file string, console bool) error {
cs.startForReplay()
// ensure all new step events are regenerated as expected
newStepCh := make(chan interface{}, 1)
ctx := context.Background()
err := cs.eventBus.Subscribe(ctx, subscriber, types.EventQueryNewRoundStep, newStepCh)
newStepSub, err := cs.eventBus.Subscribe(ctx, subscriber, types.EventQueryNewRoundStep)
if err != nil {
return errors.Errorf("failed to subscribe %s to %v", subscriber, types.EventQueryNewRoundStep)
}
@@ -83,7 +83,7 @@ func (cs *ConsensusState) ReplayFile(file string, console bool) error {
return err
}
if err := pb.cs.readReplayMessage(msg, newStepCh); err != nil {
if err := pb.cs.readReplayMessage(msg, newStepSub); err != nil {
return err
}
@@ -92,7 +92,6 @@ func (cs *ConsensusState) ReplayFile(file string, console bool) error {
}
pb.count++
}
return nil
}
//------------------------------------------------
@@ -121,12 +120,12 @@ func newPlayback(fileName string, fp *os.File, cs *ConsensusState, genState sm.S
}
// go back count steps by resetting the state and running (pb.count - count) steps
func (pb *playback) replayReset(count int, newStepCh chan interface{}) error {
func (pb *playback) replayReset(count int, newStepSub types.Subscription) error {
pb.cs.Stop()
pb.cs.Wait()
newCS := NewConsensusState(pb.cs.config, pb.genesisState.Copy(), pb.cs.blockExec,
pb.cs.blockStore, pb.cs.mempool, pb.cs.evpool)
pb.cs.blockStore, pb.cs.txNotifier, pb.cs.evpool)
newCS.SetEventBus(pb.cs.eventBus)
newCS.startForReplay()
@@ -151,7 +150,7 @@ func (pb *playback) replayReset(count int, newStepCh chan interface{}) error {
} else if err != nil {
return err
}
if err := pb.cs.readReplayMessage(msg, newStepCh); err != nil {
if err := pb.cs.readReplayMessage(msg, newStepSub); err != nil {
return err
}
pb.count++
@@ -215,16 +214,15 @@ func (pb *playback) replayConsoleLoop() int {
ctx := context.Background()
// ensure all new step events are regenerated as expected
newStepCh := make(chan interface{}, 1)
err := pb.cs.eventBus.Subscribe(ctx, subscriber, types.EventQueryNewRoundStep, newStepCh)
newStepSub, err := pb.cs.eventBus.Subscribe(ctx, subscriber, types.EventQueryNewRoundStep)
if err != nil {
cmn.Exit(fmt.Sprintf("failed to subscribe %s to %v", subscriber, types.EventQueryNewRoundStep))
}
defer pb.cs.eventBus.Unsubscribe(ctx, subscriber, types.EventQueryNewRoundStep)
if len(tokens) == 1 {
if err := pb.replayReset(1, newStepCh); err != nil {
if err := pb.replayReset(1, newStepSub); err != nil {
pb.cs.Logger.Error("Replay reset error", "err", err)
}
} else {
@@ -234,7 +232,7 @@ func (pb *playback) replayConsoleLoop() int {
} else if i > pb.count {
fmt.Printf("argument to back must not be larger than the current count (%d)\n", pb.count)
} else {
if err := pb.replayReset(i, newStepCh); err != nil {
if err := pb.replayReset(i, newStepSub); err != nil {
pb.cs.Logger.Error("Replay reset error", "err", err)
}
}
@@ -273,7 +271,6 @@ func (pb *playback) replayConsoleLoop() int {
fmt.Println(pb.count)
}
}
return 0
}
//--------------------------------------------------------------------------------
@@ -304,18 +301,19 @@ func newConsensusStateForReplay(config cfg.BaseConfig, csConfig *cfg.ConsensusCo
cmn.Exit(fmt.Sprintf("Error starting proxy app conns: %v", err))
}
handshaker := NewHandshaker(stateDB, state, blockStore, gdoc)
err = handshaker.Handshake(proxyApp)
if err != nil {
cmn.Exit(fmt.Sprintf("Error on handshake: %v", err))
}
eventBus := types.NewEventBus()
if err := eventBus.Start(); err != nil {
cmn.Exit(fmt.Sprintf("Failed to start event bus: %v", err))
}
mempool, evpool := sm.MockMempool{}, sm.MockEvidencePool{}
handshaker := NewHandshaker(stateDB, state, blockStore, gdoc)
handshaker.SetEventBus(eventBus)
err = handshaker.Handshake(proxyApp)
if err != nil {
cmn.Exit(fmt.Sprintf("Error on handshake: %v", err))
}
mempool, evpool := mock.Mempool{}, sm.MockEvidencePool{}
blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyApp.Consensus(), mempool, evpool)
consensusState := NewConsensusState(csConfig, state.Copy(), blockExec,


@@ -7,7 +7,7 @@ import (
"io"
"io/ioutil"
"os"
"path"
"path/filepath"
"runtime"
"testing"
"time"
@@ -15,25 +15,37 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"sort"
"github.com/tendermint/tendermint/abci/example/kvstore"
abci "github.com/tendermint/tendermint/abci/types"
crypto "github.com/tendermint/tendermint/crypto"
auto "github.com/tendermint/tendermint/libs/autofile"
dbm "github.com/tendermint/tendermint/libs/db"
"github.com/tendermint/tendermint/version"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/crypto"
cmn "github.com/tendermint/tendermint/libs/common"
dbm "github.com/tendermint/tendermint/libs/db"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/mock"
"github.com/tendermint/tendermint/privval"
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
tmtime "github.com/tendermint/tendermint/types/time"
"github.com/tendermint/tendermint/version"
)
var consensusReplayConfig *cfg.Config
func init() {
func TestMain(m *testing.M) {
config = ResetConfig("consensus_reactor_test")
consensusReplayConfig = ResetConfig("consensus_replay_test")
configStateTest := ResetConfig("consensus_state_test")
configMempoolTest := ResetConfig("consensus_mempool_test")
configByzantineTest := ResetConfig("consensus_byzantine_test")
code := m.Run()
os.RemoveAll(config.RootDir)
os.RemoveAll(consensusReplayConfig.RootDir)
os.RemoveAll(configStateTest.RootDir)
os.RemoveAll(configMempoolTest.RootDir)
os.RemoveAll(configByzantineTest.RootDir)
os.Exit(code)
}
// These tests ensure we can always recover from failure at any part of the consensus process.
@@ -51,7 +63,8 @@ func init() {
// and which ones we need the wal for - then we'd also be able to only flush the
// wal writer when we need to, instead of with every message.
func startNewConsensusStateAndWaitForBlock(t *testing.T, lastBlockHeight int64, blockDB dbm.DB, stateDB dbm.DB) {
func startNewConsensusStateAndWaitForBlock(t *testing.T, consensusReplayConfig *cfg.Config,
lastBlockHeight int64, blockDB dbm.DB, stateDB dbm.DB) {
logger := log.TestingLogger()
state, _ := sm.LoadStateFromDBOrGenesisFile(stateDB, consensusReplayConfig.GenesisFile())
privValidator := loadPrivValidator(consensusReplayConfig)
@@ -59,7 +72,6 @@ func startNewConsensusStateAndWaitForBlock(t *testing.T, lastBlockHeight int64,
cs.SetLogger(logger)
bytes, _ := ioutil.ReadFile(cs.config.WalFile())
// fmt.Printf("====== WAL: \n\r%s\n", bytes)
t.Logf("====== WAL: \n\r%X\n", bytes)
err := cs.Start()
@@ -70,24 +82,25 @@ func startNewConsensusStateAndWaitForBlock(t *testing.T, lastBlockHeight int64,
// in the WAL itself. Assuming the consensus state is running, replay of any
// WAL, including the empty one, should eventually be followed by a new
// block, or else something is wrong.
newBlockCh := make(chan interface{}, 1)
err = cs.eventBus.Subscribe(context.Background(), testSubscriber, types.EventQueryNewBlock, newBlockCh)
newBlockSub, err := cs.eventBus.Subscribe(context.Background(), testSubscriber, types.EventQueryNewBlock)
require.NoError(t, err)
select {
case <-newBlockCh:
case <-time.After(60 * time.Second):
t.Fatalf("Timed out waiting for new block (see trace above)")
case <-newBlockSub.Out():
case <-newBlockSub.Cancelled():
t.Fatal("newBlockSub was cancelled")
case <-time.After(120 * time.Second):
t.Fatal("Timed out waiting for new block (see trace above)")
}
}
func sendTxs(cs *ConsensusState, ctx context.Context) {
func sendTxs(ctx context.Context, cs *ConsensusState) {
for i := 0; i < 256; i++ {
select {
case <-ctx.Done():
return
default:
tx := []byte{byte(i)}
cs.mempool.CheckTx(tx, nil)
assertMempool(cs.txNotifier).CheckTx(tx, nil)
i++
}
}
@@ -105,34 +118,35 @@ func TestWALCrash(t *testing.T) {
1},
{"many non-empty blocks",
func(stateDB dbm.DB, cs *ConsensusState, ctx context.Context) {
go sendTxs(cs, ctx)
go sendTxs(ctx, cs)
},
3},
}
for _, tc := range testCases {
for i, tc := range testCases {
consensusReplayConfig := ResetConfig(fmt.Sprintf("%s_%d", t.Name(), i))
t.Run(tc.name, func(t *testing.T) {
crashWALandCheckLiveness(t, tc.initFn, tc.heightToStop)
crashWALandCheckLiveness(t, consensusReplayConfig, tc.initFn, tc.heightToStop)
})
}
}
func crashWALandCheckLiveness(t *testing.T, initFn func(dbm.DB, *ConsensusState, context.Context), heightToStop int64) {
walPaniced := make(chan error)
crashingWal := &crashingWAL{panicCh: walPaniced, heightToStop: heightToStop}
func crashWALandCheckLiveness(t *testing.T, consensusReplayConfig *cfg.Config,
initFn func(dbm.DB, *ConsensusState, context.Context), heightToStop int64) {
walPanicked := make(chan error)
crashingWal := &crashingWAL{panicCh: walPanicked, heightToStop: heightToStop}
i := 1
LOOP:
for {
// fmt.Printf("====== LOOP %d\n", i)
t.Logf("====== LOOP %d\n", i)
// create consensus state from a clean slate
logger := log.NewNopLogger()
stateDB := dbm.NewMemDB()
blockDB := dbm.NewMemDB()
stateDB := blockDB
state, _ := sm.MakeGenesisStateFromFile(consensusReplayConfig.GenesisFile())
privValidator := loadPrivValidator(consensusReplayConfig)
blockDB := dbm.NewMemDB()
cs := newConsensusStateWithConfigAndBlockStore(consensusReplayConfig, state, privValidator, kvstore.NewKVStoreApplication(), blockDB)
cs.SetLogger(logger)
@@ -159,11 +173,11 @@ LOOP:
i++
select {
case err := <-walPaniced:
t.Logf("WAL paniced: %v", err)
case err := <-walPanicked:
t.Logf("WAL panicked: %v", err)
// make sure we can make blocks after a crash
startNewConsensusStateAndWaitForBlock(t, cs.Height, blockDB, stateDB)
startNewConsensusStateAndWaitForBlock(t, consensusReplayConfig, cs.Height, blockDB, stateDB)
// stop consensus state and transactions sender (initFn)
cs.Stop()
@@ -181,16 +195,18 @@ LOOP:
// crashingWAL is a WAL which crashes or rather simulates a crash during Save
// (before and after). It remembers a message for which we last panicked
// (lastPanicedForMsgIndex), so we don't panic for it in subsequent iterations.
// (lastPanickedForMsgIndex), so we don't panic for it in subsequent iterations.
type crashingWAL struct {
next WAL
panicCh chan error
heightToStop int64
msgIndex int // current message index
lastPanicedForMsgIndex int // last message for which we panicked
msgIndex int // current message index
lastPanickedForMsgIndex int // last message for which we panicked
}
var _ WAL = &crashingWAL{}
// WALWriteError indicates a WAL crash.
type WALWriteError struct {
msg string
@@ -223,8 +239,8 @@ func (w *crashingWAL) Write(m WALMessage) {
return
}
if w.msgIndex > w.lastPanicedForMsgIndex {
w.lastPanicedForMsgIndex = w.msgIndex
if w.msgIndex > w.lastPanickedForMsgIndex {
w.lastPanickedForMsgIndex = w.msgIndex
_, file, line, _ := runtime.Caller(1)
w.panicCh <- WALWriteError{fmt.Sprintf("failed to write %T to WAL (fileline: %s:%d)", m, file, line)}
runtime.Goexit()
@@ -238,8 +254,9 @@ func (w *crashingWAL) WriteSync(m WALMessage) {
w.Write(m)
}
func (w *crashingWAL) Group() *auto.Group { return w.next.Group() }
func (w *crashingWAL) SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error) {
func (w *crashingWAL) FlushAndSync() error { return w.next.FlushAndSync() }
func (w *crashingWAL) SearchForEndHeight(height int64, options *WALSearchOptions) (rd io.ReadCloser, found bool, err error) {
return w.next.SearchForEndHeight(height, options)
}
@@ -248,15 +265,23 @@ func (w *crashingWAL) Stop() error { return w.next.Stop() }
func (w *crashingWAL) Wait() { w.next.Wait() }
//------------------------------------------------------------------------------------------
// Handshake Tests
type testSim struct {
GenesisState sm.State
Config *cfg.Config
Chain []*types.Block
Commits []*types.Commit
CleanupFunc cleanupFunc
}
const (
NUM_BLOCKS = 6
numBlocks = 6
)
var (
mempool = sm.MockMempool{}
mempool = mock.Mempool{}
evpool = sm.MockEvidencePool{}
sim testSim
)
//---------------------------------------
@@ -267,94 +292,356 @@ var (
// 2 - save block and committed but state is behind
var modes = []uint{0, 1, 2}
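Since the bare values 0/1/2 carry their meaning only through the comments around this hunk, here is an illustrative named-constant restatement (an assumption of this note, not code from the change):

package main

import "fmt"

const (
	modeAllSynced   uint = 0 // block, app, and state all caught up
	modeAppBehind   uint = 1 // block saved, but app and state behind
	modeStateBehind uint = 2 // block saved and committed, state still behind
)

func main() { fmt.Println(modeAllSynced, modeAppBehind, modeStateBehind) }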
// This is actually not a test; it stores validator change tx data for testHandshakeReplay
func TestSimulateValidatorsChange(t *testing.T) {
nPeers := 7
nVals := 4
css, genDoc, config, cleanup := randConsensusNetWithPeers(nVals, nPeers, "replay_test", newMockTickerFunc(true), newPersistentKVStoreWithPath)
sim.Config = config
sim.GenesisState, _ = sm.MakeGenesisState(genDoc)
sim.CleanupFunc = cleanup
partSize := types.BlockPartSizeBytes
newRoundCh := subscribe(css[0].eventBus, types.EventQueryNewRound)
proposalCh := subscribe(css[0].eventBus, types.EventQueryCompleteProposal)
vss := make([]*validatorStub, nPeers)
for i := 0; i < nPeers; i++ {
vss[i] = NewValidatorStub(css[i].privValidator, i)
}
height, round := css[0].Height, css[0].Round
// start the machine
startTestRound(css[0], height, round)
incrementHeight(vss...)
ensureNewRound(newRoundCh, height, 0)
ensureNewProposal(proposalCh, height, round)
rs := css[0].GetRoundState()
signAddVotes(css[0], types.PrecommitType, rs.ProposalBlock.Hash(), rs.ProposalBlockParts.Header(), vss[1:nVals]...)
ensureNewRound(newRoundCh, height+1, 0)
//height 2
height++
incrementHeight(vss...)
newValidatorPubKey1 := css[nVals].privValidator.GetPubKey()
valPubKey1ABCI := types.TM2PB.PubKey(newValidatorPubKey1)
newValidatorTx1 := kvstore.MakeValSetChangeTx(valPubKey1ABCI, testMinPower)
err := assertMempool(css[0].txNotifier).CheckTx(newValidatorTx1, nil)
assert.Nil(t, err)
propBlock, _ := css[0].createProposalBlock() //changeProposer(t, cs1, vs2)
propBlockParts := propBlock.MakePartSet(partSize)
blockID := types.BlockID{Hash: propBlock.Hash(), PartsHeader: propBlockParts.Header()}
proposal := types.NewProposal(vss[1].Height, round, -1, blockID)
if err := vss[1].SignProposal(config.ChainID(), proposal); err != nil {
t.Fatal("failed to sign bad proposal", err)
}
// set the proposal block
if err := css[0].SetProposalAndBlock(proposal, propBlock, propBlockParts, "some peer"); err != nil {
t.Fatal(err)
}
ensureNewProposal(proposalCh, height, round)
rs = css[0].GetRoundState()
signAddVotes(css[0], types.PrecommitType, rs.ProposalBlock.Hash(), rs.ProposalBlockParts.Header(), vss[1:nVals]...)
ensureNewRound(newRoundCh, height+1, 0)
//height 3
height++
incrementHeight(vss...)
updateValidatorPubKey1 := css[nVals].privValidator.GetPubKey()
updatePubKey1ABCI := types.TM2PB.PubKey(updateValidatorPubKey1)
updateValidatorTx1 := kvstore.MakeValSetChangeTx(updatePubKey1ABCI, 25)
err = assertMempool(css[0].txNotifier).CheckTx(updateValidatorTx1, nil)
assert.Nil(t, err)
propBlock, _ = css[0].createProposalBlock() //changeProposer(t, cs1, vs2)
propBlockParts = propBlock.MakePartSet(partSize)
blockID = types.BlockID{Hash: propBlock.Hash(), PartsHeader: propBlockParts.Header()}
proposal = types.NewProposal(vss[2].Height, round, -1, blockID)
if err := vss[2].SignProposal(config.ChainID(), proposal); err != nil {
t.Fatal("failed to sign bad proposal", err)
}
// set the proposal block
if err := css[0].SetProposalAndBlock(proposal, propBlock, propBlockParts, "some peer"); err != nil {
t.Fatal(err)
}
ensureNewProposal(proposalCh, height, round)
rs = css[0].GetRoundState()
signAddVotes(css[0], types.PrecommitType, rs.ProposalBlock.Hash(), rs.ProposalBlockParts.Header(), vss[1:nVals]...)
ensureNewRound(newRoundCh, height+1, 0)
//height 4
height++
incrementHeight(vss...)
newValidatorPubKey2 := css[nVals+1].privValidator.GetPubKey()
newVal2ABCI := types.TM2PB.PubKey(newValidatorPubKey2)
newValidatorTx2 := kvstore.MakeValSetChangeTx(newVal2ABCI, testMinPower)
err = assertMempool(css[0].txNotifier).CheckTx(newValidatorTx2, nil)
assert.Nil(t, err)
newValidatorPubKey3 := css[nVals+2].privValidator.GetPubKey()
newVal3ABCI := types.TM2PB.PubKey(newValidatorPubKey3)
newValidatorTx3 := kvstore.MakeValSetChangeTx(newVal3ABCI, testMinPower)
err = assertMempool(css[0].txNotifier).CheckTx(newValidatorTx3, nil)
assert.Nil(t, err)
propBlock, _ = css[0].createProposalBlock() //changeProposer(t, cs1, vs2)
propBlockParts = propBlock.MakePartSet(partSize)
blockID = types.BlockID{Hash: propBlock.Hash(), PartsHeader: propBlockParts.Header()}
newVss := make([]*validatorStub, nVals+1)
copy(newVss, vss[:nVals+1])
sort.Sort(ValidatorStubsByAddress(newVss))
selfIndex := 0
for i, vs := range newVss {
if vs.GetPubKey().Equals(css[0].privValidator.GetPubKey()) {
selfIndex = i
break
}
}
proposal = types.NewProposal(vss[3].Height, round, -1, blockID)
if err := vss[3].SignProposal(config.ChainID(), proposal); err != nil {
t.Fatal("failed to sign bad proposal", err)
}
// set the proposal block
if err := css[0].SetProposalAndBlock(proposal, propBlock, propBlockParts, "some peer"); err != nil {
t.Fatal(err)
}
ensureNewProposal(proposalCh, height, round)
removeValidatorTx2 := kvstore.MakeValSetChangeTx(newVal2ABCI, 0)
err = assertMempool(css[0].txNotifier).CheckTx(removeValidatorTx2, nil)
assert.Nil(t, err)
rs = css[0].GetRoundState()
for i := 0; i < nVals+1; i++ {
if i == selfIndex {
continue
}
signAddVotes(css[0], types.PrecommitType, rs.ProposalBlock.Hash(), rs.ProposalBlockParts.Header(), newVss[i])
}
ensureNewRound(newRoundCh, height+1, 0)
//height 5
height++
incrementHeight(vss...)
ensureNewProposal(proposalCh, height, round)
rs = css[0].GetRoundState()
for i := 0; i < nVals+1; i++ {
if i == selfIndex {
continue
}
signAddVotes(css[0], types.PrecommitType, rs.ProposalBlock.Hash(), rs.ProposalBlockParts.Header(), newVss[i])
}
ensureNewRound(newRoundCh, height+1, 0)
//height 6
height++
incrementHeight(vss...)
removeValidatorTx3 := kvstore.MakeValSetChangeTx(newVal3ABCI, 0)
err = assertMempool(css[0].txNotifier).CheckTx(removeValidatorTx3, nil)
assert.Nil(t, err)
propBlock, _ = css[0].createProposalBlock() //changeProposer(t, cs1, vs2)
propBlockParts = propBlock.MakePartSet(partSize)
blockID = types.BlockID{Hash: propBlock.Hash(), PartsHeader: propBlockParts.Header()}
newVss = make([]*validatorStub, nVals+3)
copy(newVss, vss[:nVals+3])
sort.Sort(ValidatorStubsByAddress(newVss))
for i, vs := range newVss {
if vs.GetPubKey().Equals(css[0].privValidator.GetPubKey()) {
selfIndex = i
break
}
}
proposal = types.NewProposal(vss[1].Height, round, -1, blockID)
if err := vss[1].SignProposal(config.ChainID(), proposal); err != nil {
t.Fatal("failed to sign bad proposal", err)
}
// set the proposal block
if err := css[0].SetProposalAndBlock(proposal, propBlock, propBlockParts, "some peer"); err != nil {
t.Fatal(err)
}
ensureNewProposal(proposalCh, height, round)
rs = css[0].GetRoundState()
for i := 0; i < nVals+3; i++ {
if i == selfIndex {
continue
}
signAddVotes(css[0], types.PrecommitType, rs.ProposalBlock.Hash(), rs.ProposalBlockParts.Header(), newVss[i])
}
ensureNewRound(newRoundCh, height+1, 0)
sim.Chain = make([]*types.Block, 0)
sim.Commits = make([]*types.Commit, 0)
for i := 1; i <= numBlocks; i++ {
sim.Chain = append(sim.Chain, css[0].blockStore.LoadBlock(int64(i)))
sim.Commits = append(sim.Commits, css[0].blockStore.LoadBlockCommit(int64(i)))
}
}
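The six heights above exercise add, update, and remove semantics for the validator set, with a change tx of power 0 acting as removal. A toy model of those semantics (not the kvstore app's actual tx wire format):

package main

import "fmt"

// applyValChange mirrors what the test relies on: power > 0 adds or
// updates a validator, power == 0 removes it.
func applyValChange(vals map[string]int64, pubKey string, power int64) {
	if power == 0 {
		delete(vals, pubKey)
		return
	}
	vals[pubKey] = power
}

func main() {
	vals := map[string]int64{"val1": 10}
	applyValChange(vals, "val2", 10) // like height 2: add a validator
	applyValChange(vals, "val2", 25) // like height 3: update its power
	applyValChange(vals, "val2", 0)  // like height 4+: remove it again
	fmt.Println(vals)                // map[val1:10]
}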
// Sync from scratch
func TestHandshakeReplayAll(t *testing.T) {
for _, m := range modes {
testHandshakeReplay(t, 0, m)
testHandshakeReplay(t, config, 0, m, false)
}
for _, m := range modes {
testHandshakeReplay(t, config, 0, m, true)
}
}
// Sync many, not from scratch
func TestHandshakeReplaySome(t *testing.T) {
for _, m := range modes {
testHandshakeReplay(t, 1, m)
testHandshakeReplay(t, config, 1, m, false)
}
for _, m := range modes {
testHandshakeReplay(t, config, 1, m, true)
}
}
// Sync from lagging by one
func TestHandshakeReplayOne(t *testing.T) {
for _, m := range modes {
testHandshakeReplay(t, NUM_BLOCKS-1, m)
testHandshakeReplay(t, config, numBlocks-1, m, false)
}
for _, m := range modes {
testHandshakeReplay(t, config, numBlocks-1, m, true)
}
}
// Sync from caught up
func TestHandshakeReplayNone(t *testing.T) {
for _, m := range modes {
testHandshakeReplay(t, NUM_BLOCKS, m)
testHandshakeReplay(t, config, numBlocks, m, false)
}
for _, m := range modes {
testHandshakeReplay(t, config, numBlocks, m, true)
}
}
// Test that mockProxyApp does not panic when the app returns ABCIResponses with some empty ResponseDeliverTx entries
func TestMockProxyApp(t *testing.T) {
sim.CleanupFunc() //clean the test env created in TestSimulateValidatorsChange
logger := log.TestingLogger()
var validTxs, invalidTxs = 0, 0
txIndex := 0
assert.NotPanics(t, func() {
abciResWithEmptyDeliverTx := new(sm.ABCIResponses)
abciResWithEmptyDeliverTx.DeliverTx = make([]*abci.ResponseDeliverTx, 0)
abciResWithEmptyDeliverTx.DeliverTx = append(abciResWithEmptyDeliverTx.DeliverTx, &abci.ResponseDeliverTx{})
// called when saveABCIResponses:
bytes := cdc.MustMarshalBinaryBare(abciResWithEmptyDeliverTx)
loadedAbciRes := new(sm.ABCIResponses)
// this also happens in sm.LoadABCIResponses
err := cdc.UnmarshalBinaryBare(bytes, loadedAbciRes)
require.NoError(t, err)
mock := newMockProxyApp([]byte("mock_hash"), loadedAbciRes)
abciRes := new(sm.ABCIResponses)
abciRes.DeliverTx = make([]*abci.ResponseDeliverTx, len(loadedAbciRes.DeliverTx))
// Execute transactions and get hash.
proxyCb := func(req *abci.Request, res *abci.Response) {
switch r := res.Value.(type) {
case *abci.Response_DeliverTx:
// TODO: make use of res.Log
// TODO: make use of this info
// Blocks may include invalid txs.
txRes := r.DeliverTx
if txRes.Code == abci.CodeTypeOK {
validTxs++
} else {
logger.Debug("Invalid tx", "code", txRes.Code, "log", txRes.Log)
invalidTxs++
}
abciRes.DeliverTx[txIndex] = txRes
txIndex++
}
}
mock.SetResponseCallback(proxyCb)
someTx := []byte("tx")
mock.DeliverTxAsync(someTx)
})
assert.True(t, validTxs == 1)
assert.True(t, invalidTxs == 0)
}
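The test above is essentially a marshal/unmarshal round trip followed by replaying the loaded responses through a callback. The round-trip pattern, sketched with the standard library for self-containment (json standing in for the amino codec):

package main

import (
	"encoding/json"
	"fmt"
)

type responses struct {
	DeliverTx []string `json:"deliver_tx"`
}

func main() {
	orig := responses{DeliverTx: []string{""}} // one empty response, as in the test
	bz, err := json.Marshal(orig)
	if err != nil {
		panic(err)
	}
	var loaded responses
	if err := json.Unmarshal(bz, &loaded); err != nil {
		panic(err)
	}
	fmt.Println(len(loaded.DeliverTx)) // 1 - the empty entry survives the round trip
}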
func tempWALWithData(data []byte) string {
walFile, err := ioutil.TempFile("", "wal")
if err != nil {
panic(fmt.Errorf("failed to create temp WAL file: %v", err))
panic(fmt.Sprintf("failed to create temp WAL file: %v", err))
}
_, err = walFile.Write(data)
if err != nil {
panic(fmt.Errorf("failed to write to temp WAL file: %v", err))
panic(fmt.Sprintf("failed to write to temp WAL file: %v", err))
}
if err := walFile.Close(); err != nil {
panic(fmt.Errorf("failed to close temp WAL file: %v", err))
panic(fmt.Sprintf("failed to close temp WAL file: %v", err))
}
return walFile.Name()
}
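The hunk above switches panic(fmt.Errorf(...)) to panic(fmt.Sprintf(...)); the only observable difference is the dynamic type seen by a downstream recover(). A minimal sketch:

package main

import (
	"errors"
	"fmt"
)

func main() {
	defer func() {
		switch v := recover().(type) {
		case error:
			fmt.Println("recovered an error value:", v)
		case string:
			fmt.Println("recovered a plain string:", v)
		}
	}()
	panic(fmt.Sprintf("failed to create temp WAL file: %v", errors.New("disk full")))
}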
// Make some blocks. Start a fresh app and apply nBlocks blocks. Then restart the app and sync it up with the remaining blocks
func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
config := ResetConfig("proxy_test_")
func testHandshakeReplay(t *testing.T, config *cfg.Config, nBlocks int, mode uint, testValidatorsChange bool) {
var chain []*types.Block
var commits []*types.Commit
var store *mockBlockStore
var stateDB dbm.DB
var genisisState sm.State
if testValidatorsChange {
testConfig := ResetConfig(fmt.Sprintf("%s_%v_m", t.Name(), mode))
defer os.RemoveAll(testConfig.RootDir)
stateDB = dbm.NewMemDB()
genisisState = sim.GenesisState
config = sim.Config
chain = sim.Chain
commits = sim.Commits
store = newMockBlockStore(config, genisisState.ConsensusParams)
} else { //test single node
testConfig := ResetConfig(fmt.Sprintf("%s_%v_s", t.Name(), mode))
defer os.RemoveAll(testConfig.RootDir)
walBody, err := WALWithNBlocks(NUM_BLOCKS)
if err != nil {
t.Fatal(err)
}
walBody, err := WALWithNBlocks(t, numBlocks)
require.NoError(t, err)
walFile := tempWALWithData(walBody)
config.Consensus.SetWalFile(walFile)
privVal := privval.LoadFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile())
wal, err := NewWAL(walFile)
require.NoError(t, err)
wal.SetLogger(log.TestingLogger())
err = wal.Start()
require.NoError(t, err)
defer wal.Stop()
chain, commits, err = makeBlockchainFromWAL(wal)
require.NoError(t, err)
stateDB, genisisState, store = stateAndStore(config, privVal.GetPubKey(), kvstore.ProtocolVersion)
}
walFile := tempWALWithData(walBody)
config.Consensus.SetWalFile(walFile)
privVal := privval.LoadFilePV(config.PrivValidatorFile())
wal, err := NewWAL(walFile)
if err != nil {
t.Fatal(err)
}
wal.SetLogger(log.TestingLogger())
if err := wal.Start(); err != nil {
t.Fatal(err)
}
defer wal.Stop()
chain, commits, err := makeBlockchainFromWAL(wal)
if err != nil {
t.Fatalf(err.Error())
}
stateDB, state, store := stateAndStore(config, privVal.GetPubKey(), kvstore.ProtocolVersion)
store.chain = chain
store.commits = commits
state := genisisState.Copy()
// run the chain through state.ApplyBlock to build up the tendermint state
state = buildTMStateFromChain(config, stateDB, state, chain, mode)
state = buildTMStateFromChain(config, stateDB, state, chain, nBlocks, mode)
latestAppHash := state.AppHash
// make a new client creator
kvstoreApp := kvstore.NewPersistentKVStoreApplication(path.Join(config.DBDir(), "2"))
kvstoreApp := kvstore.NewPersistentKVStoreApplication(filepath.Join(config.DBDir(), fmt.Sprintf("replay_test_%d_%d_a", nBlocks, mode)))
clientCreator2 := proxy.NewLocalClientCreator(kvstoreApp)
if nBlocks > 0 {
// run nBlocks against a new client to build up the app state.
// use a throwaway tendermint state
proxyApp := proxy.NewAppConns(clientCreator2)
stateDB, state, _ := stateAndStore(config, privVal.GetPubKey(), kvstore.ProtocolVersion)
buildAppStateFromChain(proxyApp, stateDB, state, chain, nBlocks, mode)
stateDB1 := dbm.NewMemDB()
sm.SaveState(stateDB1, genisisState)
buildAppStateFromChain(proxyApp, stateDB1, genisisState, chain, nBlocks, mode)
}
// now start the app using the handshake - it should sync
@@ -380,8 +667,8 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
t.Fatalf("Expected app hashes to match after handshake/replay. got %X, expected %X", res.LastBlockAppHash, latestAppHash)
}
expectedBlocksToSync := NUM_BLOCKS - nBlocks
if nBlocks == NUM_BLOCKS && mode > 0 {
expectedBlocksToSync := numBlocks - nBlocks
if nBlocks == numBlocks && mode > 0 {
expectedBlocksToSync++
} else if nBlocks > 0 && mode == 1 {
expectedBlocksToSync++
@@ -396,7 +683,7 @@ func applyBlock(stateDB dbm.DB, st sm.State, blk *types.Block, proxyApp proxy.Ap
testPartSize := types.BlockPartSizeBytes
blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyApp.Consensus(), mempool, evpool)
blkID := types.BlockID{blk.Hash(), blk.MakePartSet(testPartSize).Header()}
blkID := types.BlockID{Hash: blk.Hash(), PartsHeader: blk.MakePartSet(testPartSize).Header()}
newState, err := blockExec.ApplyBlock(st, blkID, blk)
if err != nil {
panic(err)
@@ -412,12 +699,14 @@ func buildAppStateFromChain(proxyApp proxy.AppConns, stateDB dbm.DB,
}
defer proxyApp.Stop()
state.Version.Consensus.App = kvstore.ProtocolVersion //simulate handshake, receive app version
validators := types.TM2PB.ValidatorUpdates(state.Validators)
if _, err := proxyApp.Consensus().InitChainSync(abci.RequestInitChain{
Validators: validators,
}); err != nil {
panic(err)
}
sm.SaveState(stateDB, state) //save height 1's validatorsInfo
switch mode {
case 0:
@@ -440,21 +729,23 @@ func buildAppStateFromChain(proxyApp proxy.AppConns, stateDB dbm.DB,
}
func buildTMStateFromChain(config *cfg.Config, stateDB dbm.DB, state sm.State, chain []*types.Block, mode uint) sm.State {
func buildTMStateFromChain(config *cfg.Config, stateDB dbm.DB, state sm.State, chain []*types.Block, nBlocks int, mode uint) sm.State {
// run the whole chain against this client to build up the tendermint state
clientCreator := proxy.NewLocalClientCreator(kvstore.NewPersistentKVStoreApplication(path.Join(config.DBDir(), "1")))
clientCreator := proxy.NewLocalClientCreator(kvstore.NewPersistentKVStoreApplication(filepath.Join(config.DBDir(), fmt.Sprintf("replay_test_%d_%d_t", nBlocks, mode))))
proxyApp := proxy.NewAppConns(clientCreator)
if err := proxyApp.Start(); err != nil {
panic(err)
}
defer proxyApp.Stop()
state.Version.Consensus.App = kvstore.ProtocolVersion //simulate handshake, receive app version
validators := types.TM2PB.ValidatorUpdates(state.Validators)
if _, err := proxyApp.Consensus().InitChainSync(abci.RequestInitChain{
Validators: validators,
}); err != nil {
panic(err)
}
sm.SaveState(stateDB, state) //save height 1's validatorsInfo
switch mode {
case 0:
@@ -478,28 +769,162 @@ func buildTMStateFromChain(config *cfg.Config, stateDB dbm.DB, state sm.State, c
return state
}
func TestHandshakePanicsIfAppReturnsWrongAppHash(t *testing.T) {
// 1. Initialize tendermint and commit 3 blocks with the following app hashes:
// - 0x01
// - 0x02
// - 0x03
config := ResetConfig("handshake_test_")
defer os.RemoveAll(config.RootDir)
privVal := privval.LoadFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile())
const appVersion = 0x0
stateDB, state, store := stateAndStore(config, privVal.GetPubKey(), appVersion)
genDoc, _ := sm.MakeGenesisDocFromFile(config.GenesisFile())
state.LastValidators = state.Validators.Copy()
// mode = 0 for committing all the blocks
blocks := makeBlocks(3, &state, privVal)
store.chain = blocks
// 2. Tendermint must panic if app returns wrong hash for the first block
// - RANDOM HASH
// - 0x02
// - 0x03
{
app := &badApp{numBlocks: 3, allHashesAreWrong: true}
clientCreator := proxy.NewLocalClientCreator(app)
proxyApp := proxy.NewAppConns(clientCreator)
err := proxyApp.Start()
require.NoError(t, err)
defer proxyApp.Stop()
assert.Panics(t, func() {
h := NewHandshaker(stateDB, state, store, genDoc)
h.Handshake(proxyApp)
})
}
// 3. Tendermint must panic if app returns wrong hash for the last block
// - 0x01
// - 0x02
// - RANDOM HASH
{
app := &badApp{numBlocks: 3, onlyLastHashIsWrong: true}
clientCreator := proxy.NewLocalClientCreator(app)
proxyApp := proxy.NewAppConns(clientCreator)
err := proxyApp.Start()
require.NoError(t, err)
defer proxyApp.Stop()
assert.Panics(t, func() {
h := NewHandshaker(stateDB, state, store, genDoc)
h.Handshake(proxyApp)
})
}
}
func makeBlocks(n int, state *sm.State, privVal types.PrivValidator) []*types.Block {
blocks := make([]*types.Block, 0)
var (
prevBlock *types.Block
prevBlockMeta *types.BlockMeta
)
appHeight := byte(0x01)
for i := 0; i < n; i++ {
height := int64(i + 1)
block, parts := makeBlock(*state, prevBlock, prevBlockMeta, privVal, height)
blocks = append(blocks, block)
prevBlock = block
prevBlockMeta = types.NewBlockMeta(block, parts)
// update state
state.AppHash = []byte{appHeight}
appHeight++
state.LastBlockHeight = height
}
return blocks
}
func makeVote(header *types.Header, blockID types.BlockID, valset *types.ValidatorSet, privVal types.PrivValidator) *types.Vote {
addr := privVal.GetPubKey().Address()
idx, _ := valset.GetByAddress(addr)
vote := &types.Vote{
ValidatorAddress: addr,
ValidatorIndex: idx,
Height: header.Height,
Round: 1,
Timestamp: tmtime.Now(),
Type: types.PrecommitType,
BlockID: blockID,
}
privVal.SignVote(header.ChainID, vote)
return vote
}
func makeBlock(state sm.State, lastBlock *types.Block, lastBlockMeta *types.BlockMeta,
privVal types.PrivValidator, height int64) (*types.Block, *types.PartSet) {
lastCommit := types.NewCommit(types.BlockID{}, nil)
if height > 1 {
vote := makeVote(&lastBlock.Header, lastBlockMeta.BlockID, state.Validators, privVal).CommitSig()
lastCommit = types.NewCommit(lastBlockMeta.BlockID, []*types.CommitSig{vote})
}
return state.MakeBlock(height, []types.Tx{}, lastCommit, nil, state.Validators.GetProposer().Address)
}
type badApp struct {
abci.BaseApplication
numBlocks byte
height byte
allHashesAreWrong bool
onlyLastHashIsWrong bool
}
func (app *badApp) Commit() abci.ResponseCommit {
app.height++
if app.onlyLastHashIsWrong {
if app.height == app.numBlocks {
return abci.ResponseCommit{Data: cmn.RandBytes(8)}
}
return abci.ResponseCommit{Data: []byte{app.height}}
} else if app.allHashesAreWrong {
return abci.ResponseCommit{Data: cmn.RandBytes(8)}
}
panic("either allHashesAreWrong or onlyLastHashIsWrong must be set")
}
//--------------------------
// utils for making blocks
func makeBlockchainFromWAL(wal WAL) ([]*types.Block, []*types.Commit, error) {
var height int64
// Search for height marker
gr, found, err := wal.SearchForEndHeight(0, &WALSearchOptions{})
gr, found, err := wal.SearchForEndHeight(height, &WALSearchOptions{})
if err != nil {
return nil, nil, err
}
if !found {
return nil, nil, fmt.Errorf("WAL does not contain height %d.", 1)
return nil, nil, fmt.Errorf("WAL does not contain height %d", height)
}
defer gr.Close() // nolint: errcheck
// log.Notice("Build a blockchain by reading from the WAL")
var blocks []*types.Block
var commits []*types.Commit
var thisBlockParts *types.PartSet
var thisBlockCommit *types.Commit
var height int64
var (
blocks []*types.Block
commits []*types.Commit
thisBlockParts *types.PartSet
thisBlockCommit *types.Commit
)
dec := NewWALDecoder(gr)
for {
@@ -544,10 +969,8 @@ func makeBlockchainFromWAL(wal WAL) ([]*types.Block, []*types.Commit, error) {
}
case *types.Vote:
if p.Type == types.PrecommitType {
thisBlockCommit = &types.Commit{
BlockID: p.BlockID,
Precommits: []*types.Vote{p},
}
commitSigs := []*types.CommitSig{p.CommitSig()}
thisBlockCommit = types.NewCommit(p.BlockID, commitSigs)
}
}
}
@@ -593,7 +1016,8 @@ func stateAndStore(config *cfg.Config, pubKey crypto.PubKey, appVersion version.
stateDB := dbm.NewMemDB()
state, _ := sm.MakeGenesisStateFromFile(config.GenesisFile())
state.Version.Consensus.App = appVersion
store := NewMockBlockStore(config, state.ConsensusParams)
store := newMockBlockStore(config, state.ConsensusParams)
sm.SaveState(stateDB, state)
return stateDB, state, store
}
@@ -608,7 +1032,7 @@ type mockBlockStore struct {
}
// TODO: NewBlockStore(db.NewMemDB) ...
func NewMockBlockStore(config *cfg.Config, params types.ConsensusParams) *mockBlockStore {
func newMockBlockStore(config *cfg.Config, params types.ConsensusParams) *mockBlockStore {
return &mockBlockStore{config, params, nil, nil}
}
@@ -617,7 +1041,7 @@ func (bs *mockBlockStore) LoadBlock(height int64) *types.Block { return bs.chain
func (bs *mockBlockStore) LoadBlockMeta(height int64) *types.BlockMeta {
block := bs.chain[height-1]
return &types.BlockMeta{
BlockID: types.BlockID{block.Hash(), block.MakePartSet(types.BlockPartSizeBytes).Header()},
BlockID: types.BlockID{Hash: block.Hash(), PartsHeader: block.MakePartSet(types.BlockPartSizeBytes).Header()},
Header: block.Header,
}
}
@@ -631,16 +1055,18 @@ func (bs *mockBlockStore) LoadSeenCommit(height int64) *types.Commit {
return bs.commits[height-1]
}
//----------------------------------------
//---------------------------------------
// Test handshake/init chain
func TestInitChainUpdateValidators(t *testing.T) {
func TestHandshakeUpdatesValidators(t *testing.T) {
val, _ := types.RandValidator(true, 10)
vals := types.NewValidatorSet([]*types.Validator{val})
app := &initChainApp{vals: types.TM2PB.ValidatorUpdates(vals)}
clientCreator := proxy.NewLocalClientCreator(app)
config := ResetConfig("proxy_test_")
privVal := privval.LoadFilePV(config.PrivValidatorFile())
config := ResetConfig("handshake_test_")
defer os.RemoveAll(config.RootDir)
privVal := privval.LoadFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile())
stateDB, state, store := stateAndStore(config, privVal.GetPubKey(), 0x0)
oldValAddr := state.Validators.Validators[0].Address
@@ -666,12 +1092,6 @@ func TestInitChainUpdateValidators(t *testing.T) {
assert.Equal(t, newValAddr, expectValAddr)
}
func newInitChainApp(vals []abci.ValidatorUpdate) *initChainApp {
return &initChainApp{
vals: vals,
}
}
// returns the vals on InitChain
type initChainApp struct {
abci.BaseApplication

@@ -2,13 +2,14 @@ package consensus
import (
"bytes"
"errors"
"fmt"
"reflect"
"runtime/debug"
"sync"
"time"
"github.com/pkg/errors"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/libs/fail"
"github.com/tendermint/tendermint/libs/log"
@@ -22,13 +23,6 @@ import (
"github.com/tendermint/tendermint/types"
)
//-----------------------------------------------------------------------------
// Config
const (
proposalHeartbeatIntervalSeconds = 2
)
//-----------------------------------------------------------------------------
// Errors
@@ -63,6 +57,16 @@ func (ti *timeoutInfo) String() string {
return fmt.Sprintf("%v ; %d/%d %v", ti.Duration, ti.Height, ti.Round, ti.Step)
}
// interface to the mempool
type txNotifier interface {
TxsAvailable() <-chan struct{}
}
// interface to the evidence pool
type evidencePool interface {
AddEvidence(types.Evidence) error
}
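The two small interfaces above narrow what ConsensusState needs from the mempool and evidence pool. A self-contained sketch of how a consumer selects on TxsAvailable (the mock names are illustrative):

package main

import (
	"fmt"
	"time"
)

// txNotifier: anything exposing TxsAvailable() can drive the consensus
// loop; here a mock fires once after 10ms.
type txNotifier interface {
	TxsAvailable() <-chan struct{}
}

type mockNotifier struct{ ch chan struct{} }

func (m *mockNotifier) TxsAvailable() <-chan struct{} { return m.ch }

func main() {
	m := &mockNotifier{ch: make(chan struct{}, 1)}
	go func() {
		time.Sleep(10 * time.Millisecond)
		m.ch <- struct{}{}
	}()
	var n txNotifier = m
	select {
	case <-n.TxsAvailable():
		fmt.Println("txs available - would call handleTxsAvailable()")
	case <-time.After(time.Second):
		fmt.Println("timed out")
	}
}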
// ConsensusState handles execution of the consensus algorithm.
// It processes votes and proposals, and upon reaching agreement,
// commits blocks to the chain and executes them against the application.
@@ -74,17 +78,23 @@ type ConsensusState struct {
config *cfg.ConsensusConfig
privValidator types.PrivValidator // for signing votes
// services for creating and executing blocks
blockExec *sm.BlockExecutor
// store blocks and commits
blockStore sm.BlockStore
mempool sm.Mempool
evpool sm.EvidencePool
// create and execute blocks
blockExec *sm.BlockExecutor
// notify us if txs are available
txNotifier txNotifier
// add evidence to the pool
// when it's detected
evpool evidencePool
// internal state
mtx sync.RWMutex
cstypes.RoundState
triggeredTimeoutPrecommit bool
state sm.State // State until height-1.
state sm.State // State until height-1.
// state changes may be triggered by: msgs from peers,
// msgs from ourself, or by timeouts
@@ -118,7 +128,7 @@ type ConsensusState struct {
done chan struct{}
// synchronous pubsub between consensus state and reactor.
// state only emits EventNewRoundStep, EventVote and EventProposalHeartbeat
// state only emits EventNewRoundStep and EventVote
evsw tmevents.EventSwitch
// for reporting metrics
@@ -134,15 +144,15 @@ func NewConsensusState(
state sm.State,
blockExec *sm.BlockExecutor,
blockStore sm.BlockStore,
mempool sm.Mempool,
evpool sm.EvidencePool,
txNotifier txNotifier,
evpool evidencePool,
options ...StateOption,
) *ConsensusState {
cs := &ConsensusState{
config: config,
blockExec: blockExec,
blockStore: blockStore,
mempool: mempool,
txNotifier: txNotifier,
peerMsgQueue: make(chan msgInfo, msgQueueSize),
internalMsgQueue: make(chan msgInfo, msgQueueSize),
timeoutTicker: NewTimeoutTicker(),
@@ -207,18 +217,16 @@ func (cs *ConsensusState) GetState() sm.State {
// GetLastHeight returns the last height committed.
// If there were no blocks, returns 0.
func (cs *ConsensusState) GetLastHeight() int64 {
cs.mtx.Lock()
defer cs.mtx.Unlock()
cs.mtx.RLock()
defer cs.mtx.RUnlock()
return cs.RoundState.Height - 1
}
// GetRoundState returns a shallow copy of the internal consensus state.
func (cs *ConsensusState) GetRoundState() *cstypes.RoundState {
cs.mtx.RLock()
defer cs.mtx.RUnlock()
rs := cs.RoundState // copy
cs.mtx.RUnlock()
return &rs
}
@@ -226,7 +234,6 @@ func (cs *ConsensusState) GetRoundState() *cstypes.RoundState {
func (cs *ConsensusState) GetRoundStateJSON() ([]byte, error) {
cs.mtx.RLock()
defer cs.mtx.RUnlock()
return cdc.MarshalJSON(cs.RoundState)
}
@@ -234,7 +241,6 @@ func (cs *ConsensusState) GetRoundStateJSON() ([]byte, error) {
func (cs *ConsensusState) GetRoundStateSimpleJSON() ([]byte, error) {
cs.mtx.RLock()
defer cs.mtx.RUnlock()
return cdc.MarshalJSON(cs.RoundState.RoundStateSimple())
}
@@ -248,15 +254,15 @@ func (cs *ConsensusState) GetValidators() (int64, []*types.Validator) {
// SetPrivValidator sets the private validator account for signing votes.
func (cs *ConsensusState) SetPrivValidator(priv types.PrivValidator) {
cs.mtx.Lock()
defer cs.mtx.Unlock()
cs.privValidator = priv
cs.mtx.Unlock()
}
// SetTimeoutTicker sets the local timer. It may be useful to overwrite for testing.
func (cs *ConsensusState) SetTimeoutTicker(timeoutTicker TimeoutTicker) {
cs.mtx.Lock()
defer cs.mtx.Unlock()
cs.timeoutTicker = timeoutTicker
cs.mtx.Unlock()
}
// LoadCommit loads the commit for a given height.
@@ -301,6 +307,23 @@ func (cs *ConsensusState) OnStart() error {
// reload from consensus log to catchup
if cs.doWALCatchup {
if err := cs.catchupReplay(cs.Height); err != nil {
// don't try to recover from data corruption error
if IsDataCorruptionError(err) {
cs.Logger.Error("Encountered corrupt WAL file", "err", err.Error())
cs.Logger.Error("Please repair the WAL file before restarting")
fmt.Println(`You can attempt to repair the WAL as follows:
----
WALFILE=~/.tendermint/data/cs.wal/wal
cp $WALFILE ${WALFILE}.bak # backup the file
go run scripts/wal2json/main.go $WALFILE > wal.json # this will panic, but can be ignored
rm $WALFILE # remove the corrupt file
go run scripts/json2wal/main.go wal.json $WALFILE # rebuild the file without corruption
----`)
return err
}
cs.Logger.Error("Error on catchup replay. Proceeding to start ConsensusState anyway", "err", err.Error())
// NOTE: if we ever do return an error here,
// make sure to stop the timeoutTicker
@@ -328,10 +351,11 @@ func (cs *ConsensusState) startRoutines(maxSteps int) {
go cs.receiveRoutine(maxSteps)
}
// OnStop implements cmn.Service. It stops all routines and waits for the WAL to finish.
// OnStop implements cmn.Service.
func (cs *ConsensusState) OnStop() {
cs.evsw.Stop()
cs.timeoutTicker.Stop()
// WAL is stopped in receiveRoutine.
}
// Wait waits for the main routine to return.
@@ -430,7 +454,7 @@ func (cs *ConsensusState) updateRoundStep(round int, step cstypes.RoundStepType)
// enterNewRound(height, 0) at cs.StartTime.
func (cs *ConsensusState) scheduleRound0(rs *cstypes.RoundState) {
//cs.Logger.Info("scheduleRound0", "now", tmtime.Now(), "startTime", cs.StartTime)
sleepDuration := rs.StartTime.Sub(tmtime.Now()) // nolint: gotype, gosimple
sleepDuration := rs.StartTime.Sub(tmtime.Now())
cs.scheduleTimeout(sleepDuration, rs.Height, 0, cstypes.RoundStepNewHeight)
}
@@ -460,18 +484,9 @@ func (cs *ConsensusState) reconstructLastCommit(state sm.State) {
return
}
seenCommit := cs.blockStore.LoadSeenCommit(state.LastBlockHeight)
lastPrecommits := types.NewVoteSet(state.ChainID, state.LastBlockHeight, seenCommit.Round(), types.PrecommitType, state.LastValidators)
for _, precommit := range seenCommit.Precommits {
if precommit == nil {
continue
}
added, err := lastPrecommits.AddVote(precommit)
if !added || err != nil {
cmn.PanicCrisis(fmt.Sprintf("Failed to reconstruct LastCommit: %v", err))
}
}
lastPrecommits := types.CommitToVoteSet(state.ChainID, seenCommit, state.LastValidators)
if !lastPrecommits.HasTwoThirdsMajority() {
cmn.PanicSanity("Failed to reconstruct LastCommit: Does not have +2/3 maj")
panic("Failed to reconstruct LastCommit: Does not have +2/3 maj")
}
cs.LastCommit = lastPrecommits
}
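The inline vote-reconstruction loop removed above is now encapsulated by types.CommitToVoteSet; the majority check it feeds is the usual strictly-greater-than-two-thirds rule. A worked spot check (a sketch, not the types package code):

package main

import "fmt"

// +2/3 majority as used by HasTwoThirdsMajority: voted power must be
// strictly greater than 2/3 of total power.
func hasTwoThirdsMajority(votedPower, totalPower int64) bool {
	return votedPower*3 > totalPower*2
}

func main() {
	fmt.Println(hasTwoThirdsMajority(2, 4)) // false: 6 > 8 fails
	fmt.Println(hasTwoThirdsMajority(3, 4)) // true:  9 > 8
}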
@@ -480,20 +495,20 @@ func (cs *ConsensusState) reconstructLastCommit(state sm.State) {
// The round becomes 0 and cs.Step becomes cstypes.RoundStepNewHeight.
func (cs *ConsensusState) updateToState(state sm.State) {
if cs.CommitRound > -1 && 0 < cs.Height && cs.Height != state.LastBlockHeight {
cmn.PanicSanity(fmt.Sprintf("updateToState() expected state height of %v but found %v",
panic(fmt.Sprintf("updateToState() expected state height of %v but found %v",
cs.Height, state.LastBlockHeight))
}
if !cs.state.IsEmpty() && cs.state.LastBlockHeight+1 != cs.Height {
// This might happen when someone else is mutating cs.state.
// Someone forgot to pass in state.Copy() somewhere?!
cmn.PanicSanity(fmt.Sprintf("Inconsistent cs.state.LastBlockHeight+1 %v vs cs.Height %v",
panic(fmt.Sprintf("Inconsistent cs.state.LastBlockHeight+1 %v vs cs.Height %v",
cs.state.LastBlockHeight+1, cs.Height))
}
// If state isn't further out than cs.state, just ignore.
// This happens when SwitchToConsensus() is called in the reactor.
// We don't want to reset e.g. the Votes, but we still want to
// signal the new round step, because other services (eg. mempool)
// signal the new round step, because other services (eg. txNotifier)
// depend on having an up-to-date peer state!
if !cs.state.IsEmpty() && (state.LastBlockHeight <= cs.state.LastBlockHeight) {
cs.Logger.Info("Ignoring updateToState()", "newHeight", state.LastBlockHeight+1, "oldHeight", cs.state.LastBlockHeight+1)
@@ -506,7 +521,7 @@ func (cs *ConsensusState) updateToState(state sm.State) {
lastPrecommits := (*types.VoteSet)(nil)
if cs.CommitRound > -1 && cs.Votes != nil {
if !cs.Votes.Precommits(cs.CommitRound).HasTwoThirdsMajority() {
cmn.PanicSanity("updateToState(state) called but last Precommit round didn't have +2/3")
panic("updateToState(state) called but last Precommit round didn't have +2/3")
}
lastPrecommits = cs.Votes.Precommits(cs.CommitRound)
}
@@ -542,6 +557,7 @@ func (cs *ConsensusState) updateToState(state sm.State) {
cs.CommitRound = -1
cs.LastCommit = lastPrecommits
cs.LastValidators = state.LastValidators
cs.TriggeredTimeoutPrecommit = false
cs.state = state
@@ -608,7 +624,7 @@ func (cs *ConsensusState) receiveRoutine(maxSteps int) {
var mi msgInfo
select {
case <-cs.mempool.TxsAvailable():
case <-cs.txNotifier.TxsAvailable():
cs.handleTxsAvailable()
case mi = <-cs.peerMsgQueue:
cs.wal.Write(mi)
@@ -617,6 +633,15 @@ func (cs *ConsensusState) receiveRoutine(maxSteps int) {
cs.handleMsg(mi)
case mi = <-cs.internalMsgQueue:
cs.wal.WriteSync(mi) // NOTE: fsync
if _, ok := mi.Msg.(*VoteMessage); ok {
// we actually want to simulate failing during
// the previous WriteSync, but this isn't easy to do.
// Equivalent would be to fail here and manually remove
// some bytes from the end of the wal.
fail.Fail() // XXX
}
// handles proposals, block parts, votes
cs.handleMsg(mi)
case ti := <-cs.timeoutTicker.Chan(): // tockChan:
@@ -636,7 +661,10 @@ func (cs *ConsensusState) handleMsg(mi msgInfo) {
cs.mtx.Lock()
defer cs.mtx.Unlock()
var err error
var (
added bool
err error
)
msg, peerID := mi.Msg, mi.PeerID
switch msg := msg.(type) {
case *ProposalMessage:
@@ -645,7 +673,7 @@ func (cs *ConsensusState) handleMsg(mi msgInfo) {
err = cs.setProposal(msg.Proposal)
case *BlockPartMessage:
// if the proposal is complete, we'll enterPrevote or tryFinalizeCommit
added, err := cs.addProposalBlockPart(msg, peerID)
added, err = cs.addProposalBlockPart(msg, peerID)
if added {
cs.statsMsgQueue <- mi
}
@@ -657,7 +685,7 @@ func (cs *ConsensusState) handleMsg(mi msgInfo) {
case *VoteMessage:
// attempt to add the vote and dupeout the validator if it's a duplicate signature
// if the vote gives us a 2/3-any or 2/3-one, we transition
added, err := cs.tryAddVote(msg.Vote, peerID)
added, err = cs.tryAddVote(msg.Vote, peerID)
if added {
cs.statsMsgQueue <- mi
}
@@ -677,10 +705,15 @@ func (cs *ConsensusState) handleMsg(mi msgInfo) {
// the peer is sending us CatchupCommit precommits.
// We could make note of this and help filter in broadcastHasVoteMessage().
default:
cs.Logger.Error("Unknown msg type", reflect.TypeOf(msg))
cs.Logger.Error("Unknown msg type", "type", reflect.TypeOf(msg))
return
}
if err != nil {
cs.Logger.Error("Error with msg", "height", cs.Height, "round", cs.Round, "type", reflect.TypeOf(msg), "peer", peerID, "err", err, "msg", msg)
// Causes TestReactorValidatorSetChanges to timeout
// https://github.com/tendermint/tendermint/issues/3406
// cs.Logger.Error("Error with msg", "height", cs.Height, "round", cs.Round,
// "peer", peerID, "err", err, "msg", msg)
}
}
@@ -724,6 +757,7 @@ func (cs *ConsensusState) handleTxsAvailable() {
cs.mtx.Lock()
defer cs.mtx.Unlock()
// we only need to do this for round 0
cs.enterNewRound(cs.Height, 0)
cs.enterPropose(cs.Height, 0)
}
@@ -755,7 +789,7 @@ func (cs *ConsensusState) enterNewRound(height int64, round int) {
validators := cs.Validators
if cs.Round < round {
validators = validators.Copy()
validators.IncrementAccum(round - cs.Round)
validators.IncrementProposerPriority(round - cs.Round)
}
// Setup new round
@@ -774,9 +808,9 @@ func (cs *ConsensusState) enterNewRound(height int64, round int) {
cs.ProposalBlockParts = nil
}
cs.Votes.SetRound(round + 1) // also track next round (round+1) to allow round-skipping
cs.triggeredTimeoutPrecommit = false
cs.TriggeredTimeoutPrecommit = false
cs.eventBus.PublishEventNewRound(cs.RoundStateEvent())
cs.eventBus.PublishEventNewRound(cs.NewRoundEvent())
cs.metrics.Rounds.Set(float64(round))
// Wait for txs to be available in the mempool
@@ -788,7 +822,6 @@ func (cs *ConsensusState) enterNewRound(height int64, round int) {
cs.scheduleTimeout(cs.config.CreateEmptyBlocksInterval, height, round,
cstypes.RoundStepNewRound)
}
go cs.proposalHeartbeat(height, round)
} else {
cs.enterPropose(height, round)
}
@@ -805,32 +838,6 @@ func (cs *ConsensusState) needProofBlock(height int64) bool {
return !bytes.Equal(cs.state.AppHash, lastBlockMeta.Header.AppHash)
}
func (cs *ConsensusState) proposalHeartbeat(height int64, round int) {
counter := 0
addr := cs.privValidator.GetAddress()
valIndex, _ := cs.Validators.GetByAddress(addr)
chainID := cs.state.ChainID
for {
rs := cs.GetRoundState()
// if we've already moved on, no need to send more heartbeats
if rs.Step > cstypes.RoundStepNewRound || rs.Round > round || rs.Height > height {
return
}
heartbeat := &types.Heartbeat{
Height: rs.Height,
Round: rs.Round,
Sequence: counter,
ValidatorAddress: addr,
ValidatorIndex: valIndex,
}
cs.privValidator.SignHeartbeat(chainID, heartbeat)
cs.eventBus.PublishEventProposalHeartbeat(types.EventDataProposalHeartbeat{heartbeat})
cs.evsw.FireEvent(types.EventProposalHeartbeat, heartbeat)
counter++
time.Sleep(proposalHeartbeatIntervalSeconds * time.Second)
}
}
// Enter (CreateEmptyBlocks): from enterNewRound(height,round)
// Enter (CreateEmptyBlocks, CreateEmptyBlocksInterval > 0 ): after enterNewRound(height,round), after timeout of CreateEmptyBlocksInterval
// Enter (!CreateEmptyBlocks) : after enterNewRound(height,round), once txs are in the mempool
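A simplified restatement of the three entry conditions listed above (an illustrative sketch of the config semantics, not the actual method set):

package main

import (
	"fmt"
	"time"
)

func describeNewRound(createEmptyBlocks bool, interval time.Duration) string {
	switch {
	case !createEmptyBlocks:
		return "wait for txs, then enterPropose"
	case interval > 0:
		return "enterPropose after CreateEmptyBlocksInterval, or sooner if txs arrive"
	default:
		return "enterPropose immediately (empty block is fine)"
	}
}

func main() {
	fmt.Println(describeNewRound(false, 0))
	fmt.Println(describeNewRound(true, 30*time.Second))
	fmt.Println(describeNewRound(true, 0))
}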
@@ -866,13 +873,14 @@ func (cs *ConsensusState) enterPropose(height int64, round int) {
}
// if not a validator, we're done
if !cs.Validators.HasAddress(cs.privValidator.GetAddress()) {
logger.Debug("This node is not a validator", "addr", cs.privValidator.GetAddress(), "vals", cs.Validators)
address := cs.privValidator.GetPubKey().Address()
if !cs.Validators.HasAddress(address) {
logger.Debug("This node is not a validator", "addr", address, "vals", cs.Validators)
return
}
logger.Debug("This node is a validator")
if cs.isProposer() {
if cs.isProposer(address) {
logger.Info("enterPropose: Our turn to propose", "proposer", cs.Validators.GetProposer().Address, "privValidator", cs.privValidator)
cs.decideProposal(height, round)
} else {
@@ -880,8 +888,8 @@ func (cs *ConsensusState) enterPropose(height int64, round int) {
}
}
func (cs *ConsensusState) isProposer() bool {
return bytes.Equal(cs.Validators.GetProposer().Address, cs.privValidator.GetAddress())
func (cs *ConsensusState) isProposer(address []byte) bool {
return bytes.Equal(cs.Validators.GetProposer().Address, address)
}
func (cs *ConsensusState) defaultDecideProposal(height int64, round int) {
@@ -900,8 +908,11 @@ func (cs *ConsensusState) defaultDecideProposal(height int64, round int) {
}
}
// Flush the WAL. Otherwise, we may not recompute the same proposal to sign, and the privValidator will refuse to sign anything.
cs.wal.FlushAndSync()
// Make proposal
propBlockId := types.BlockID{block.Hash(), blockParts.Header()}
propBlockId := types.BlockID{Hash: block.Hash(), PartsHeader: blockParts.Header()}
proposal := types.NewProposal(height, round, cs.ValidRound, propBlockId)
if err := cs.privValidator.SignProposal(cs.state.ChainID, proposal); err == nil {
@@ -946,7 +957,7 @@ func (cs *ConsensusState) createProposalBlock() (block *types.Block, blockParts
if cs.Height == 1 {
// We're creating a proposal for the first block.
// The commit is empty, but not nil.
commit = &types.Commit{}
commit = types.NewCommit(types.BlockID{}, nil)
} else if cs.LastCommit.HasTwoThirdsMajority() {
// Make the commit from LastCommit
commit = cs.LastCommit.MakeCommit()
@@ -956,20 +967,8 @@ func (cs *ConsensusState) createProposalBlock() (block *types.Block, blockParts
return
}
maxBytes := cs.state.ConsensusParams.BlockSize.MaxBytes
maxGas := cs.state.ConsensusParams.BlockSize.MaxGas
// bound evidence to 1/10th of the block
evidence := cs.evpool.PendingEvidence(types.MaxEvidenceBytesPerBlock(maxBytes))
// Mempool validated transactions
txs := cs.mempool.ReapMaxBytesMaxGas(types.MaxDataBytes(
maxBytes,
cs.state.Validators.Size(),
len(evidence),
), maxGas)
proposerAddr := cs.privValidator.GetAddress()
block, parts := cs.state.MakeBlock(cs.Height, txs, commit, evidence, proposerAddr)
return block, parts
proposerAddr := cs.privValidator.GetPubKey().Address()
return cs.blockExec.CreateProposalBlock(cs.Height, cs.state, commit, proposerAddr)
}
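Worked numbers for the inline budgeting that moved into blockExec.CreateProposalBlock (the 1/10 evidence bound restates the comment in the removed code; the maxBytes value is hypothetical, and the header/commit overhead subtracted by the real MaxDataBytes is elided):

package main

import "fmt"

func main() {
	maxBytes := int64(22020096) // hypothetical BlockSize.MaxBytes
	evidenceBudget := maxBytes / 10
	txBudget := maxBytes - evidenceBudget // minus header/commit overhead in the real MaxDataBytes
	fmt.Println(evidenceBudget, txBudget) // ~2.2MB reserved for evidence, rest for txs
}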
// Enter: `timeoutPropose` after entering Propose.
@@ -1039,7 +1038,7 @@ func (cs *ConsensusState) enterPrevoteWait(height int64, round int) {
return
}
if !cs.Votes.Prevotes(round).HasTwoThirdsAny() {
cmn.PanicSanity(fmt.Sprintf("enterPrevoteWait(%v/%v), but Prevotes does not have any +2/3 votes", height, round))
panic(fmt.Sprintf("enterPrevoteWait(%v/%v), but Prevotes does not have any +2/3 votes", height, round))
}
logger.Info(fmt.Sprintf("enterPrevoteWait(%v/%v). Current: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
@@ -1095,7 +1094,7 @@ func (cs *ConsensusState) enterPrecommit(height int64, round int) {
// the latest POLRound should be this round.
polRound, _ := cs.Votes.POLInfo()
if polRound < round {
cmn.PanicSanity(fmt.Sprintf("This POLRound should be %v but got %v", round, polRound))
panic(fmt.Sprintf("This POLRound should be %v but got %v", round, polRound))
}
// +2/3 prevoted nil. Unlock and precommit nil.
@@ -1129,7 +1128,7 @@ func (cs *ConsensusState) enterPrecommit(height int64, round int) {
logger.Info("enterPrecommit: +2/3 prevoted proposal block. Locking", "hash", blockID.Hash)
// Validate the block.
if err := cs.blockExec.ValidateBlock(cs.state, cs.ProposalBlock); err != nil {
cmn.PanicConsensus(fmt.Sprintf("enterPrecommit: +2/3 prevoted for an invalid block: %v", err))
panic(fmt.Sprintf("enterPrecommit: +2/3 prevoted for an invalid block: %v", err))
}
cs.LockedRound = round
cs.LockedBlock = cs.ProposalBlock
@@ -1158,22 +1157,22 @@ func (cs *ConsensusState) enterPrecommit(height int64, round int) {
func (cs *ConsensusState) enterPrecommitWait(height int64, round int) {
logger := cs.Logger.With("height", height, "round", round)
if cs.Height != height || round < cs.Round || (cs.Round == round && cs.triggeredTimeoutPrecommit) {
if cs.Height != height || round < cs.Round || (cs.Round == round && cs.TriggeredTimeoutPrecommit) {
logger.Debug(
fmt.Sprintf(
"enterPrecommitWait(%v/%v): Invalid args. "+
"Current state is Height/Round: %v/%v/, triggeredTimeoutPrecommit:%v",
height, round, cs.Height, cs.Round, cs.triggeredTimeoutPrecommit))
"Current state is Height/Round: %v/%v/, TriggeredTimeoutPrecommit:%v",
height, round, cs.Height, cs.Round, cs.TriggeredTimeoutPrecommit))
return
}
if !cs.Votes.Precommits(round).HasTwoThirdsAny() {
cmn.PanicSanity(fmt.Sprintf("enterPrecommitWait(%v/%v), but Precommits does not have any +2/3 votes", height, round))
panic(fmt.Sprintf("enterPrecommitWait(%v/%v), but Precommits does not have any +2/3 votes", height, round))
}
logger.Info(fmt.Sprintf("enterPrecommitWait(%v/%v). Current: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
defer func() {
// Done enterPrecommitWait:
cs.triggeredTimeoutPrecommit = true
cs.TriggeredTimeoutPrecommit = true
cs.newStep()
}()
@@ -1206,7 +1205,7 @@ func (cs *ConsensusState) enterCommit(height int64, commitRound int) {
blockID, ok := cs.Votes.Precommits(commitRound).TwoThirdsMajority()
if !ok {
cmn.PanicSanity("RunActionCommit() expects +2/3 precommits")
panic("RunActionCommit() expects +2/3 precommits")
}
// The Locked* fields no longer matter.
@@ -1239,7 +1238,7 @@ func (cs *ConsensusState) tryFinalizeCommit(height int64) {
logger := cs.Logger.With("height", height)
if cs.Height != height {
cmn.PanicSanity(fmt.Sprintf("tryFinalizeCommit() cs.Height: %v vs height: %v", cs.Height, height))
panic(fmt.Sprintf("tryFinalizeCommit() cs.Height: %v vs height: %v", cs.Height, height))
}
blockID, ok := cs.Votes.Precommits(cs.CommitRound).TwoThirdsMajority()
@@ -1269,16 +1268,16 @@ func (cs *ConsensusState) finalizeCommit(height int64) {
block, blockParts := cs.ProposalBlock, cs.ProposalBlockParts
if !ok {
cmn.PanicSanity(fmt.Sprintf("Cannot finalizeCommit, commit does not have two thirds majority"))
panic(fmt.Sprintf("Cannot finalizeCommit, commit does not have two thirds majority"))
}
if !blockParts.HasHeader(blockID.PartsHeader) {
cmn.PanicSanity(fmt.Sprintf("Expected ProposalBlockParts header to be commit header"))
panic(fmt.Sprintf("Expected ProposalBlockParts header to be commit header"))
}
if !block.HashesTo(blockID.Hash) {
cmn.PanicSanity(fmt.Sprintf("Cannot finalizeCommit, ProposalBlock does not hash to commit hash"))
panic(fmt.Sprintf("Cannot finalizeCommit, ProposalBlock does not hash to commit hash"))
}
if err := cs.blockExec.ValidateBlock(cs.state, block); err != nil {
cmn.PanicConsensus(fmt.Sprintf("+2/3 committed an invalid block: %v", err))
panic(fmt.Sprintf("+2/3 committed an invalid block: %v", err))
}
cs.Logger.Info(fmt.Sprintf("Finalizing commit of block with %d txs", block.NumTxs),
@@ -1324,7 +1323,7 @@ func (cs *ConsensusState) finalizeCommit(height int64) {
// Execute and commit the block, update and save the state, and update the mempool.
// NOTE The block.AppHash wont reflect these txs until the next block.
var err error
stateCopy, err = cs.blockExec.ApplyBlock(stateCopy, types.BlockID{block.Hash(), blockParts.Header()}, block)
stateCopy, err = cs.blockExec.ApplyBlock(stateCopy, types.BlockID{Hash: block.Hash(), PartsHeader: blockParts.Header()}, block)
if err != nil {
cs.Logger.Error("Error on ApplyBlock. Did the application crash? Please restart tendermint", "err", err)
err := cmn.Kill()
@@ -1360,7 +1359,7 @@ func (cs *ConsensusState) recordMetrics(height int64, block *types.Block) {
missingValidators := 0
missingValidatorsPower := int64(0)
for i, val := range cs.Validators.Validators {
var vote *types.Vote
var vote *types.CommitSig
if i < len(block.LastCommit.Precommits) {
vote = block.LastCommit.Precommits[i]
}
@@ -1408,9 +1407,9 @@ func (cs *ConsensusState) defaultSetProposal(proposal *types.Proposal) error {
return nil
}
// Verify POLRound, which must be -1 or between 0 and proposal.Round exclusive.
if proposal.POLRound != -1 &&
(proposal.POLRound < 0 || proposal.Round <= proposal.POLRound) {
// Verify POLRound, which must be -1 or in range [0, proposal.Round).
if proposal.POLRound < -1 ||
(proposal.POLRound >= 0 && proposal.POLRound >= proposal.Round) {
return ErrInvalidProposalPOLRound
}
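The old and new conditions here accept exactly the same set of POLRound values; the rewrite just reads as an explicit range. A spot-check sketch:

package main

import "fmt"

// Equivalent restatement of the condition: valid iff POLRound is -1
// or lies in [0, Round).
func validPOLRound(polRound, round int) bool {
	return polRound == -1 || (polRound >= 0 && polRound < round)
}

func main() {
	fmt.Println(validPOLRound(-1, 5)) // true: no POL
	fmt.Println(validPOLRound(3, 5))  // true: in [0, 5)
	fmt.Println(validPOLRound(5, 5))  // false: must be strictly below the proposal round
	fmt.Println(validPOLRound(-2, 5)) // false
}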
@@ -1459,14 +1458,14 @@ func (cs *ConsensusState) addProposalBlockPart(msg *BlockPartMessage, peerID p2p
_, err = cdc.UnmarshalBinaryLengthPrefixedReader(
cs.ProposalBlockParts.GetReader(),
&cs.ProposalBlock,
int64(cs.state.ConsensusParams.BlockSize.MaxBytes),
int64(cs.state.ConsensusParams.Block.MaxBytes),
)
if err != nil {
return added, err
}
// NOTE: it's possible to receive complete proposal blocks for future rounds without having the proposal
cs.Logger.Info("Received complete proposal block", "height", cs.ProposalBlock.Height, "hash", cs.ProposalBlock.Hash())
cs.eventBus.PublishEventCompleteProposal(cs.RoundStateEvent())
cs.eventBus.PublishEventCompleteProposal(cs.CompleteProposalEvent())
// Update Valid* if we can.
prevotes := cs.Votes.Prevotes(cs.Round)
@@ -1511,7 +1510,8 @@ func (cs *ConsensusState) tryAddVote(vote *types.Vote, peerID p2p.ID) (bool, err
if err == ErrVoteHeightMismatch {
return added, err
} else if voteErr, ok := err.(*types.ErrVoteConflictingVotes); ok {
if bytes.Equal(vote.ValidatorAddress, cs.privValidator.GetAddress()) {
addr := cs.privValidator.GetPubKey().Address()
if bytes.Equal(vote.ValidatorAddress, addr) {
cs.Logger.Error("Found conflicting vote from ourselves. Did you unsafe_reset a validator?", "height", vote.Height, "round", vote.Round, "type", vote.Type)
return added, err
}
@@ -1546,7 +1546,7 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerID p2p.ID) (added bool,
}
cs.Logger.Info(fmt.Sprintf("Added to lastPrecommits: %v", cs.LastCommit.StringShort()))
cs.eventBus.PublishEventVote(types.EventDataVote{vote})
cs.eventBus.PublishEventVote(types.EventDataVote{Vote: vote})
cs.evsw.FireEvent(types.EventVote, vote)
// if we can skip timeoutCommit and have all the votes now,
@@ -1563,7 +1563,7 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerID p2p.ID) (added bool,
// Not necessarily a bad peer, but not favourable behaviour.
if vote.Height != cs.Height {
err = ErrVoteHeightMismatch
cs.Logger.Info("Vote ignored and not added", "voteHeight", vote.Height, "csHeight", cs.Height, "err", err)
cs.Logger.Info("Vote ignored and not added", "voteHeight", vote.Height, "csHeight", cs.Height, "peerID", peerID)
return
}
@@ -1574,7 +1574,7 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerID p2p.ID) (added bool,
return
}
cs.eventBus.PublishEventVote(types.EventDataVote{vote})
cs.eventBus.PublishEventVote(types.EventDataVote{Vote: vote})
cs.evsw.FireEvent(types.EventVote, vote)
switch vote.Type {
@@ -1676,7 +1676,10 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerID p2p.ID) (added bool,
}
func (cs *ConsensusState) signVote(type_ types.SignedMsgType, hash []byte, header types.PartSetHeader) (*types.Vote, error) {
addr := cs.privValidator.GetAddress()
// Flush the WAL. Otherwise, we may not recompute the same vote to sign, and the privValidator will refuse to sign anything.
cs.wal.FlushAndSync()
addr := cs.privValidator.GetPubKey().Address()
valIndex, _ := cs.Validators.GetByAddress(addr)
vote := &types.Vote{
@@ -1686,7 +1689,7 @@ func (cs *ConsensusState) signVote(type_ types.SignedMsgType, hash []byte, heade
Round: cs.Round,
Timestamp: cs.voteTime(),
Type: type_,
BlockID: types.BlockID{hash, header},
BlockID: types.BlockID{Hash: hash, PartsHeader: header},
}
err := cs.privValidator.SignVote(cs.state.ChainID, vote)
return vote, err
@@ -1697,10 +1700,12 @@ func (cs *ConsensusState) voteTime() time.Time {
minVoteTime := now
// TODO: We should remove the next line in case we don't vote for v when cs.ProposalBlock == nil,
// even if cs.LockedBlock != nil. See https://github.com/tendermint/spec.
timeIotaMs := time.Duration(cs.state.ConsensusParams.Block.TimeIotaMs) * time.Millisecond
if cs.LockedBlock != nil {
minVoteTime = cs.config.MinValidVoteTime(cs.LockedBlock.Time)
// See the BFT time spec https://tendermint.com/docs/spec/consensus/bft-time.html
minVoteTime = cs.LockedBlock.Time.Add(timeIotaMs)
} else if cs.ProposalBlock != nil {
minVoteTime = cs.config.MinValidVoteTime(cs.ProposalBlock.Time)
minVoteTime = cs.ProposalBlock.Time.Add(timeIotaMs)
}
if now.After(minVoteTime) {
@@ -1712,7 +1717,7 @@ func (cs *ConsensusState) voteTime() time.Time {
// sign the vote and publish on internalMsgQueue
func (cs *ConsensusState) signAddVote(type_ types.SignedMsgType, hash []byte, header types.PartSetHeader) *types.Vote {
// if we don't have a key or we're not in the validator set, do nothing
if cs.privValidator == nil || !cs.Validators.HasAddress(cs.privValidator.GetAddress()) {
if cs.privValidator == nil || !cs.Validators.HasAddress(cs.privValidator.GetPubKey().Address()) {
return nil
}
vote, err := cs.signVote(type_, hash, header)

@@ -14,18 +14,10 @@ import (
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/libs/log"
tmpubsub "github.com/tendermint/tendermint/libs/pubsub"
p2pdummy "github.com/tendermint/tendermint/p2p/dummy"
p2pmock "github.com/tendermint/tendermint/p2p/mock"
"github.com/tendermint/tendermint/types"
)
func init() {
config = ResetConfig("consensus_state_test")
}
func ensureProposeTimeout(timeoutPropose time.Duration) time.Duration {
return time.Duration(timeoutPropose.Nanoseconds()*2) * time.Nanosecond
}
/*
ProposeSuite
@@ -73,7 +65,8 @@ func TestStateProposerSelection0(t *testing.T) {
// Commit a block and ensure proposer for the next height is correct.
prop := cs1.GetRoundState().Validators.GetProposer()
if !bytes.Equal(prop.Address, cs1.privValidator.GetAddress()) {
address := cs1.privValidator.GetPubKey().Address()
if !bytes.Equal(prop.Address, address) {
t.Fatalf("expected proposer to be validator %d. Got %X", 0, prop.Address)
}
@@ -87,7 +80,8 @@ func TestStateProposerSelection0(t *testing.T) {
ensureNewRound(newRoundCh, height+1, 0)
prop = cs1.GetRoundState().Validators.GetProposer()
if !bytes.Equal(prop.Address, vss[1].GetAddress()) {
addr := vss[1].GetPubKey().Address()
if !bytes.Equal(prop.Address, addr) {
panic(fmt.Sprintf("expected proposer to be validator %d. Got %X", 1, prop.Address))
}
}
@@ -110,7 +104,8 @@ func TestStateProposerSelection2(t *testing.T) {
// everyone just votes nil. we get a new proposer each round
for i := 0; i < len(vss); i++ {
prop := cs1.GetRoundState().Validators.GetProposer()
correctProposer := vss[(i+round)%len(vss)].GetAddress()
addr := vss[(i+round)%len(vss)].GetPubKey().Address()
correctProposer := addr
if !bytes.Equal(prop.Address, correctProposer) {
panic(fmt.Sprintf("expected RoundState.Validators.GetProposer() to be validator %d. Got %X", (i+2)%len(vss), prop.Address))
}
@@ -197,9 +192,8 @@ func TestStateBadProposal(t *testing.T) {
stateHash[0] = byte((stateHash[0] + 1) % 255)
propBlock.AppHash = stateHash
propBlockParts := propBlock.MakePartSet(partSize)
proposal := types.NewProposal(
vs2.Height, round, -1,
types.BlockID{propBlock.Hash(), propBlockParts.Header()})
blockID := types.BlockID{propBlock.Hash(), propBlockParts.Header()}
proposal := types.NewProposal(vs2.Height, round, -1, blockID)
if err := vs2.SignProposal(config.ChainID(), proposal); err != nil {
t.Fatal("failed to sign bad proposal", err)
}
@@ -213,7 +207,7 @@ func TestStateBadProposal(t *testing.T) {
startTestRound(cs1, height, round)
// wait for proposal
ensureNewProposal(proposalCh, height, round)
ensureProposal(proposalCh, height, round, blockID)
// wait for prevote
ensurePrevote(voteCh, height, round)
@@ -245,7 +239,7 @@ func TestStateFullRound1(t *testing.T) {
cs.SetEventBus(eventBus)
eventBus.Start()
voteCh := subscribe(cs.eventBus, types.EventQueryVote)
voteCh := subscribeUnBuffered(cs.eventBus, types.EventQueryVote)
propCh := subscribe(cs.eventBus, types.EventQueryCompleteProposal)
newRoundCh := subscribe(cs.eventBus, types.EventQueryNewRound)
@@ -273,7 +267,7 @@ func TestStateFullRoundNil(t *testing.T) {
cs, vss := randConsensusState(1)
height, round := cs.Height, cs.Round
voteCh := subscribe(cs.eventBus, types.EventQueryVote)
voteCh := subscribeUnBuffered(cs.eventBus, types.EventQueryVote)
cs.enterPrevote(height, round)
cs.startRoutines(4)
@@ -292,7 +286,7 @@ func TestStateFullRound2(t *testing.T) {
vs2 := vss[1]
height, round := cs1.Height, cs1.Round
voteCh := subscribe(cs1.eventBus, types.EventQueryVote)
voteCh := subscribeUnBuffered(cs1.eventBus, types.EventQueryVote)
newBlockCh := subscribe(cs1.eventBus, types.EventQueryNewBlock)
// start round and wait for propose and prevote
@@ -336,7 +330,7 @@ func TestStateLockNoPOL(t *testing.T) {
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
voteCh := subscribe(cs1.eventBus, types.EventQueryVote)
voteCh := subscribeUnBuffered(cs1.eventBus, types.EventQueryVote)
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
@@ -376,7 +370,7 @@ func TestStateLockNoPOL(t *testing.T) {
// (note we're entering precommit for a second time this round)
// but with invalid args. then we enterPrecommitWait, and the timeout to new round
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.TimeoutPrecommit.Nanoseconds())
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.Precommit(round).Nanoseconds())
///
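// Note on the change above: the fixed TimeoutPrecommit/TimeoutPropose/
// TimeoutPrevote durations are replaced throughout these tests by
// cs1.config.Precommit(round), Propose(round) and Prevote(round), which grow
// with the round number. A minimal sketch of the idea, assuming each timeout
// is derived as base + delta*round (illustrative names, not the real config
// API):
func timeoutForRound(base, delta time.Duration, round int) time.Duration {
	return base + delta*time.Duration(round)
}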
@@ -390,7 +384,7 @@ func TestStateLockNoPOL(t *testing.T) {
incrementRound(vs2)
// now we're on a new round and not the proposer, so wait for timeout
ensureNewTimeout(timeoutProposeCh, height, round, cs1.config.TimeoutPropose.Nanoseconds())
ensureNewTimeout(timeoutProposeCh, height, round, cs1.config.Propose(round).Nanoseconds())
rs := cs1.GetRoundState()
@@ -409,7 +403,7 @@ func TestStateLockNoPOL(t *testing.T) {
// now we're going to enter prevote again, but with invalid args
// and then prevote wait, which should timeout. then wait for precommit
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.TimeoutPrevote.Nanoseconds())
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.Prevote(round).Nanoseconds())
ensurePrecommit(voteCh, height, round) // precommit
// the proposed block should still be locked and our precommit added
@@ -422,7 +416,7 @@ func TestStateLockNoPOL(t *testing.T) {
// (note we're entering precommit for a second time this round, but with invalid args
// then we enterPrecommitWait and timeout into NewRound
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.TimeoutPrecommit.Nanoseconds())
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.Precommit(round).Nanoseconds())
round = round + 1 // entering new round
ensureNewRound(newRoundCh, height, round)
@@ -447,7 +441,7 @@ func TestStateLockNoPOL(t *testing.T) {
signAddVotes(cs1, types.PrevoteType, hash, rs.ProposalBlock.MakePartSet(partSize).Header(), vs2)
ensurePrevote(voteCh, height, round)
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.TimeoutPrevote.Nanoseconds())
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.Prevote(round).Nanoseconds())
ensurePrecommit(voteCh, height, round) // precommit
validatePrecommit(t, cs1, round, 0, vss[0], nil, theBlockHash) // precommit nil but stay locked on the proposal
@@ -455,7 +449,7 @@ func TestStateLockNoPOL(t *testing.T) {
signAddVotes(cs1, types.PrecommitType, hash, rs.ProposalBlock.MakePartSet(partSize).Header(), vs2) // NOTE: conflicting precommits at same height
ensurePrecommit(voteCh, height, round)
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.TimeoutPrecommit.Nanoseconds())
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.Precommit(round).Nanoseconds())
cs2, _ := randConsensusState(2) // needed so generated block is different than locked block
// before we time out into new round, set next proposal block
@@ -488,7 +482,7 @@ func TestStateLockNoPOL(t *testing.T) {
signAddVotes(cs1, types.PrevoteType, propBlock.Hash(), propBlock.MakePartSet(partSize).Header(), vs2)
ensurePrevote(voteCh, height, round)
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.TimeoutPrevote.Nanoseconds())
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.Prevote(round).Nanoseconds())
ensurePrecommit(voteCh, height, round)
validatePrecommit(t, cs1, round, 0, vss[0], nil, theBlockHash) // precommit nil but locked on proposal
@@ -506,7 +500,8 @@ func TestStateLockPOLRelock(t *testing.T) {
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
newBlockCh := subscribe(cs1.eventBus, types.EventQueryNewBlockHeader)
@@ -547,7 +542,7 @@ func TestStateLockPOLRelock(t *testing.T) {
incrementRound(vs2, vs3, vs4)
// timeout to new round
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.TimeoutPrecommit.Nanoseconds())
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.Precommit(round).Nanoseconds())
round = round + 1 // moving to the next round
//XXX: this isn't guaranteed to get there before the timeoutPropose ...
@@ -597,7 +592,8 @@ func TestStateLockPOLUnlock(t *testing.T) {
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
unlockCh := subscribe(cs1.eventBus, types.EventQueryUnlock)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
// everything done from perspective of cs1
@@ -636,7 +632,7 @@ func TestStateLockPOLUnlock(t *testing.T) {
propBlockParts := propBlock.MakePartSet(partSize)
// timeout to new round
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.TimeoutPrecommit.Nanoseconds())
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.Precommit(round).Nanoseconds())
rs = cs1.GetRoundState()
lockedBlockHash := rs.LockedBlock.Hash()
@@ -690,7 +686,8 @@ func TestStateLockPOLSafety1(t *testing.T) {
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
// start round and wait for propose and prevote
startTestRound(cs1, cs1.Height, round)
@@ -713,7 +710,7 @@ func TestStateLockPOLSafety1(t *testing.T) {
// cs1 precommit nil
ensurePrecommit(voteCh, height, round)
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.TimeoutPrecommit.Nanoseconds())
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.Precommit(round).Nanoseconds())
t.Log("### ONTO ROUND 1")
@@ -757,7 +754,7 @@ func TestStateLockPOLSafety1(t *testing.T) {
signAddVotes(cs1, types.PrecommitType, nil, types.PartSetHeader{}, vs2, vs3, vs4)
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.TimeoutPrecommit.Nanoseconds())
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.Precommit(round).Nanoseconds())
incrementRound(vs2, vs3, vs4)
round = round + 1 // moving to the next round
@@ -770,7 +767,7 @@ func TestStateLockPOLSafety1(t *testing.T) {
*/
// timeout of propose
ensureNewTimeout(timeoutProposeCh, height, round, cs1.config.TimeoutPropose.Nanoseconds())
ensureNewTimeout(timeoutProposeCh, height, round, cs1.config.Propose(round).Nanoseconds())
// finish prevote
ensurePrevote(voteCh, height, round)
@@ -806,7 +803,8 @@ func TestStateLockPOLSafety2(t *testing.T) {
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
unlockCh := subscribe(cs1.eventBus, types.EventQueryUnlock)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
// the block for R0: gets polkad but we miss it
// (even though we signed it, shhh)
@@ -852,7 +850,7 @@ func TestStateLockPOLSafety2(t *testing.T) {
incrementRound(vs2, vs3, vs4)
// timeout of precommit wait to new round
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.TimeoutPrecommit.Nanoseconds())
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.Precommit(round).Nanoseconds())
round = round + 1 // moving to the next round
// in round 2 we see the polkad block from round 0
@@ -897,7 +895,8 @@ func TestProposeValidBlock(t *testing.T) {
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
unlockCh := subscribe(cs1.eventBus, types.EventQueryUnlock)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
// start round and wait for propose and prevote
startTestRound(cs1, cs1.Height, round)
@@ -920,7 +919,7 @@ func TestProposeValidBlock(t *testing.T) {
signAddVotes(cs1, types.PrecommitType, nil, types.PartSetHeader{}, vs2, vs3, vs4)
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.TimeoutPrecommit.Nanoseconds())
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.Precommit(round).Nanoseconds())
incrementRound(vs2, vs3, vs4)
round = round + 1 // moving to the next round
@@ -930,7 +929,7 @@ func TestProposeValidBlock(t *testing.T) {
t.Log("### ONTO ROUND 2")
// timeout of propose
ensureNewTimeout(timeoutProposeCh, height, round, cs1.config.TimeoutPropose.Nanoseconds())
ensureNewTimeout(timeoutProposeCh, height, round, cs1.config.Propose(round).Nanoseconds())
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], propBlockHash)
@@ -953,7 +952,7 @@ func TestProposeValidBlock(t *testing.T) {
ensureNewRound(newRoundCh, height, round)
t.Log("### ONTO ROUND 3")
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.TimeoutPrecommit.Nanoseconds())
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.Precommit(round).Nanoseconds())
round = round + 1 // moving to the next round
@@ -983,7 +982,8 @@ func TestSetValidBlockOnDelayedPrevote(t *testing.T) {
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
validBlockCh := subscribe(cs1.eventBus, types.EventQueryValidBlock)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
// start round and wait for propose and prevote
startTestRound(cs1, cs1.Height, round)
@@ -1004,7 +1004,7 @@ func TestSetValidBlockOnDelayedPrevote(t *testing.T) {
// vs3 send prevote nil
signAddVotes(cs1, types.PrevoteType, nil, types.PartSetHeader{}, vs3)
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.TimeoutPrevote.Nanoseconds())
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.Prevote(round).Nanoseconds())
ensurePrecommit(voteCh, height, round)
// we should have precommitted
@@ -1042,7 +1042,8 @@ func TestSetValidBlockOnDelayedProposal(t *testing.T) {
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
validBlockCh := subscribe(cs1.eventBus, types.EventQueryValidBlock)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
round = round + 1 // move to round in which P0 is not proposer
@@ -1051,7 +1052,7 @@ func TestSetValidBlockOnDelayedProposal(t *testing.T) {
startTestRound(cs1, cs1.Height, round)
ensureNewRound(newRoundCh, height, round)
ensureNewTimeout(timeoutProposeCh, height, round, cs1.config.TimeoutPropose.Nanoseconds())
ensureNewTimeout(timeoutProposeCh, height, round, cs1.config.Propose(round).Nanoseconds())
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], nil)
@@ -1064,7 +1065,7 @@ func TestSetValidBlockOnDelayedProposal(t *testing.T) {
signAddVotes(cs1, types.PrevoteType, propBlockHash, propBlockParts.Header(), vs2, vs3, vs4)
ensureNewValidBlock(validBlockCh, height, round)
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.TimeoutPrevote.Nanoseconds())
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.Prevote(round).Nanoseconds())
ensurePrecommit(voteCh, height, round)
validatePrecommit(t, cs1, round, -1, vss[0], nil, nil)
@@ -1098,7 +1099,7 @@ func TestWaitingTimeoutOnNilPolka(t *testing.T) {
signAddVotes(cs1, types.PrecommitType, nil, types.PartSetHeader{}, vs2, vs3, vs4)
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.TimeoutPrecommit.Nanoseconds())
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.Precommit(round).Nanoseconds())
ensureNewRound(newRoundCh, height, round+1)
}
@@ -1112,7 +1113,8 @@ func TestWaitingTimeoutProposeOnNewRound(t *testing.T) {
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
// start round
startTestRound(cs1, height, round)
@@ -1129,7 +1131,7 @@ func TestWaitingTimeoutProposeOnNewRound(t *testing.T) {
rs := cs1.GetRoundState()
assert.True(t, rs.Step == cstypes.RoundStepPropose) // P0 does not prevote before timeoutPropose expires
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.TimeoutPropose.Nanoseconds())
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.Propose(round).Nanoseconds())
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], nil)
@@ -1145,7 +1147,8 @@ func TestRoundSkipOnNilPolkaFromHigherRound(t *testing.T) {
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
// start round
startTestRound(cs1, height, round)
@@ -1162,7 +1165,7 @@ func TestRoundSkipOnNilPolkaFromHigherRound(t *testing.T) {
ensurePrecommit(voteCh, height, round)
validatePrecommit(t, cs1, round, -1, vss[0], nil, nil)
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.TimeoutPrecommit.Nanoseconds())
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.Precommit(round).Nanoseconds())
round = round + 1 // moving to the next round
ensureNewRound(newRoundCh, height, round)
@@ -1178,7 +1181,8 @@ func TestWaitTimeoutProposeOnNilPolkaForTheCurrentRound(t *testing.T) {
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
// start round in which P0 is not proposer
startTestRound(cs1, height, round)
@@ -1187,7 +1191,7 @@ func TestWaitTimeoutProposeOnNilPolkaForTheCurrentRound(t *testing.T) {
incrementRound(vss[1:]...)
signAddVotes(cs1, types.PrevoteType, nil, types.PartSetHeader{}, vs2, vs3, vs4)
ensureNewTimeout(timeoutProposeCh, height, round, cs1.config.TimeoutPropose.Nanoseconds())
ensureNewTimeout(timeoutProposeCh, height, round, cs1.config.Propose(round).Nanoseconds())
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], nil)
@@ -1267,6 +1271,126 @@ func TestCommitFromPreviousRound(t *testing.T) {
ensureNewRound(newRoundCh, height+1, 0)
}
type fakeTxNotifier struct {
ch chan struct{}
}
func (n *fakeTxNotifier) TxsAvailable() <-chan struct{} {
return n.ch
}
func (n *fakeTxNotifier) Notify() {
n.ch <- struct{}{}
}
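// fakeTxNotifier stands in for the mempool's tx-availability signal: it lets
// the test below hold back the "transactions are available" event and fire it
// explicitly via Notify(), controlling exactly when the state machine may
// start proposing at the next height.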
func TestStartNextHeightCorrectly(t *testing.T) {
config.Consensus.SkipTimeoutCommit = false
cs1, vss := randConsensusState(4)
cs1.txNotifier = &fakeTxNotifier{ch: make(chan struct{})}
vs2, vs3, vs4 := vss[1], vss[2], vss[3]
height, round := cs1.Height, cs1.Round
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
newBlockHeader := subscribe(cs1.eventBus, types.EventQueryNewBlockHeader)
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
// start round and wait for propose and prevote
startTestRound(cs1, height, round)
ensureNewRound(newRoundCh, height, round)
ensureNewProposal(proposalCh, height, round)
rs := cs1.GetRoundState()
theBlockHash := rs.ProposalBlock.Hash()
theBlockParts := rs.ProposalBlockParts.Header()
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], theBlockHash)
signAddVotes(cs1, types.PrevoteType, theBlockHash, theBlockParts, vs2, vs3, vs4)
ensurePrecommit(voteCh, height, round)
// the proposed block should now be locked and our precommit added
validatePrecommit(t, cs1, round, round, vss[0], theBlockHash, theBlockHash)
rs = cs1.GetRoundState()
// add precommits
signAddVotes(cs1, types.PrecommitType, nil, types.PartSetHeader{}, vs2)
signAddVotes(cs1, types.PrecommitType, theBlockHash, theBlockParts, vs3)
time.Sleep(5 * time.Millisecond)
signAddVotes(cs1, types.PrecommitType, theBlockHash, theBlockParts, vs4)
rs = cs1.GetRoundState()
assert.True(t, rs.TriggeredTimeoutPrecommit)
ensureNewBlockHeader(newBlockHeader, height, theBlockHash)
cs1.txNotifier.(*fakeTxNotifier).Notify()
ensureNewTimeout(timeoutProposeCh, height+1, round, cs1.config.Propose(round).Nanoseconds())
rs = cs1.GetRoundState()
assert.False(t, rs.TriggeredTimeoutPrecommit, "triggeredTimeoutPrecommit should be false at the beginning of each round")
}
func TestResetTimeoutPrecommitUponNewHeight(t *testing.T) {
config.Consensus.SkipTimeoutCommit = false
cs1, vss := randConsensusState(4)
vs2, vs3, vs4 := vss[1], vss[2], vss[3]
height, round := cs1.Height, cs1.Round
partSize := types.BlockPartSizeBytes
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
newBlockHeader := subscribe(cs1.eventBus, types.EventQueryNewBlockHeader)
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
// start round and wait for propose and prevote
startTestRound(cs1, height, round)
ensureNewRound(newRoundCh, height, round)
ensureNewProposal(proposalCh, height, round)
rs := cs1.GetRoundState()
theBlockHash := rs.ProposalBlock.Hash()
theBlockParts := rs.ProposalBlockParts.Header()
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], theBlockHash)
signAddVotes(cs1, types.PrevoteType, theBlockHash, theBlockParts, vs2, vs3, vs4)
ensurePrecommit(voteCh, height, round)
validatePrecommit(t, cs1, round, round, vss[0], theBlockHash, theBlockHash)
rs = cs1.GetRoundState()
// add precommits
signAddVotes(cs1, types.PrecommitType, nil, types.PartSetHeader{}, vs2)
signAddVotes(cs1, types.PrecommitType, theBlockHash, theBlockParts, vs3)
signAddVotes(cs1, types.PrecommitType, theBlockHash, theBlockParts, vs4)
ensureNewBlockHeader(newBlockHeader, height, theBlockHash)
prop, propBlock := decideProposal(cs1, vs2, height+1, 0)
propBlockParts := propBlock.MakePartSet(partSize)
if err := cs1.SetProposalAndBlock(prop, propBlock, propBlockParts, "some peer"); err != nil {
t.Fatal(err)
}
ensureNewProposal(proposalCh, height+1, 0)
rs = cs1.GetRoundState()
assert.False(t, rs.TriggeredTimeoutPrecommit, "triggeredTimeoutPrecommit should be false at the beginning of each height")
}
//------------------------------------------------------------------------------------------
// SlashingSuite
// TODO: Slashing
@@ -1362,7 +1486,8 @@ func TestStateHalt1(t *testing.T) {
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
newBlockCh := subscribe(cs1.eventBus, types.EventQueryNewBlock)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
// start round and wait for propose and prevote
startTestRound(cs1, height, round)
@@ -1390,7 +1515,7 @@ func TestStateHalt1(t *testing.T) {
incrementRound(vs2, vs3, vs4)
// timeout to new round
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.TimeoutPrecommit.Nanoseconds())
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.Precommit(round).Nanoseconds())
round = round + 1 // moving to the next round
@@ -1419,7 +1544,7 @@ func TestStateHalt1(t *testing.T) {
func TestStateOutputsBlockPartsStats(t *testing.T) {
// create dummy peer
cs, _ := randConsensusState(1)
peer := p2pdummy.NewPeer()
peer := p2pmock.NewPeer(nil)
// 1) new block part
parts := types.NewPartSetFromData(cmn.RandBytes(100), 10)
@@ -1462,7 +1587,7 @@ func TestStateOutputsBlockPartsStats(t *testing.T) {
func TestStateOutputVoteStats(t *testing.T) {
cs, vss := randConsensusState(2)
// create dummy peer
peer := p2pdummy.NewPeer()
peer := p2pmock.NewPeer(nil)
vote := signVote(vss[1], types.PrecommitType, []byte("test"), types.PartSetHeader{})
@@ -1491,11 +1616,19 @@ func TestStateOutputVoteStats(t *testing.T) {
}
// subscribe subscribes test client to the given query and returns a channel with cap = 1.
func subscribe(eventBus *types.EventBus, q tmpubsub.Query) <-chan interface{} {
out := make(chan interface{}, 1)
err := eventBus.Subscribe(context.Background(), testSubscriber, q, out)
func subscribe(eventBus *types.EventBus, q tmpubsub.Query) <-chan tmpubsub.Message {
sub, err := eventBus.Subscribe(context.Background(), testSubscriber, q)
if err != nil {
panic(fmt.Sprintf("failed to subscribe %s to %v", testSubscriber, q))
}
return out
return sub.Out()
}
// subscribe subscribes test client to the given query and returns a channel with cap = 0.
func subscribeUnBuffered(eventBus *types.EventBus, q tmpubsub.Query) <-chan tmpubsub.Message {
sub, err := eventBus.SubscribeUnbuffered(context.Background(), testSubscriber, q)
if err != nil {
panic(fmt.Sprintf("failed to subscribe %s to %v", testSubscriber, q))
}
return sub.Out()
}
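// The buffering difference matters when a test must observe every message:
// with cap = 1 the publisher can deposit one message and move on, while with
// cap = 0 each publish blocks until the subscriber receives it. A
// self-contained sketch of the two channel semantics (plain Go, no event bus):
func demoChannelBuffering() {
	buffered := make(chan int, 1) // cap = 1: one send succeeds without a receiver
	buffered <- 1

	unbuffered := make(chan int) // cap = 0: a send blocks until a receive happens
	go func() { unbuffered <- 2 }()
	<-unbuffered
}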


@@ -6,7 +6,6 @@ import (
"strings"
"sync"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/types"
)
@@ -83,7 +82,7 @@ func (hvs *HeightVoteSet) SetRound(round int) {
hvs.mtx.Lock()
defer hvs.mtx.Unlock()
if hvs.round != 0 && (round < hvs.round+1) {
cmn.PanicSanity("SetRound() must increment hvs.round")
panic("SetRound() must increment hvs.round")
}
for r := hvs.round + 1; r <= round; r++ {
if _, ok := hvs.roundVoteSets[r]; ok {
@@ -96,7 +95,7 @@ func (hvs *HeightVoteSet) SetRound(round int) {
func (hvs *HeightVoteSet) addRound(round int) {
if _, ok := hvs.roundVoteSets[round]; ok {
cmn.PanicSanity("addRound() for an existing round")
panic("addRound() for an existing round")
}
// log.Debug("addRound(round)", "round", round)
prevotes := types.NewVoteSet(hvs.chainID, hvs.height, round, types.PrevoteType, hvs.valSet)
@@ -169,8 +168,7 @@ func (hvs *HeightVoteSet) getVoteSet(round int, type_ types.SignedMsgType) *type
case types.PrecommitType:
return rvs.Precommits
default:
cmn.PanicSanity(fmt.Sprintf("Unexpected vote type %X", type_))
return nil
panic(fmt.Sprintf("Unexpected vote type %X", type_))
}
}


@@ -2,6 +2,7 @@ package types
import (
"fmt"
"os"
"testing"
cfg "github.com/tendermint/tendermint/config"
@@ -11,8 +12,11 @@ import (
var config *cfg.Config // NOTE: must be reset for each _test.go file
func init() {
func TestMain(m *testing.M) {
config = cfg.ResetTestRoot("consensus_height_vote_set_test")
code := m.Run()
os.RemoveAll(config.RootDir)
os.Exit(code)
}
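// Switching from init to TestMain allows cleanup after the whole package has
// run: os.RemoveAll can only be called once m.Run() returns, which init-based
// setup has no hook for.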
func TestPeerCatchupRounds(t *testing.T) {
@@ -50,8 +54,9 @@ func TestPeerCatchupRounds(t *testing.T) {
func makeVoteHR(t *testing.T, height int64, round int, privVals []types.PrivValidator, valIndex int) *types.Vote {
privVal := privVals[valIndex]
addr := privVal.GetPubKey().Address()
vote := &types.Vote{
ValidatorAddress: privVal.GetAddress(),
ValidatorAddress: addr,
ValidatorIndex: valIndex,
Height: height,
Round: round,
@@ -63,7 +68,6 @@ func makeVoteHR(t *testing.T, height int64, round int, privVals []types.PrivVali
err := privVal.SignVote(chainID, vote)
if err != nil {
panic(fmt.Sprintf("Error signing vote: %v", err))
return nil
}
return vote
}


@@ -55,3 +55,31 @@ func (prs PeerRoundState) StringIndented(indent string) string {
indent, prs.CatchupCommit, prs.CatchupCommitRound,
indent)
}
//-----------------------------------------------------------
// These methods are for Protobuf Compatibility
// Size returns the size of the amino encoding, in bytes.
func (ps *PeerRoundState) Size() int {
bs, _ := ps.Marshal()
return len(bs)
}
// Marshal returns the amino encoding.
func (ps *PeerRoundState) Marshal() ([]byte, error) {
return cdc.MarshalBinaryBare(ps)
}
// MarshalTo calls Marshal and copies to the given buffer.
func (ps *PeerRoundState) MarshalTo(data []byte) (int, error) {
bs, err := ps.Marshal()
if err != nil {
return -1, err
}
return copy(data, bs), nil
}
// Unmarshal deserializes from amino encoded form.
func (ps *PeerRoundState) Unmarshal(bs []byte) error {
return cdc.UnmarshalBinaryBare(bs, ps)
}
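// A hedged round-trip sketch for this Marshal/Unmarshal pair (the interface
// below is illustrative, not part of the package):
type aminoRoundTripper interface {
	Marshal() ([]byte, error)
	Unmarshal([]byte) error
}

func aminoRoundTrip(src, dst aminoRoundTripper) error {
	bs, err := src.Marshal()
	if err != nil {
		return err
	}
	return dst.Unmarshal(bs)
}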


@@ -65,25 +65,26 @@ func (rs RoundStepType) String() string {
// NOTE: Not thread safe. Should only be manipulated by functions downstream
// of the cs.receiveRoutine
type RoundState struct {
Height int64 `json:"height"` // Height we are working on
Round int `json:"round"`
Step RoundStepType `json:"step"`
StartTime time.Time `json:"start_time"`
CommitTime time.Time `json:"commit_time"` // Subjective time when +2/3 precommits for Block at Round were found
Validators *types.ValidatorSet `json:"validators"`
Proposal *types.Proposal `json:"proposal"`
ProposalBlock *types.Block `json:"proposal_block"`
ProposalBlockParts *types.PartSet `json:"proposal_block_parts"`
LockedRound int `json:"locked_round"`
LockedBlock *types.Block `json:"locked_block"`
LockedBlockParts *types.PartSet `json:"locked_block_parts"`
ValidRound int `json:"valid_round"` // Last known round with POL for non-nil valid block.
ValidBlock *types.Block `json:"valid_block"` // Last known block of POL mentioned above.
ValidBlockParts *types.PartSet `json:"valid_block_parts"` // Last known block parts of POL mentioned above.
Votes *HeightVoteSet `json:"votes"`
CommitRound int `json:"commit_round"` //
LastCommit *types.VoteSet `json:"last_commit"` // Last precommits at Height-1
LastValidators *types.ValidatorSet `json:"last_validators"`
Height int64 `json:"height"` // Height we are working on
Round int `json:"round"`
Step RoundStepType `json:"step"`
StartTime time.Time `json:"start_time"`
CommitTime time.Time `json:"commit_time"` // Subjective time when +2/3 precommits for Block at Round were found
Validators *types.ValidatorSet `json:"validators"`
Proposal *types.Proposal `json:"proposal"`
ProposalBlock *types.Block `json:"proposal_block"`
ProposalBlockParts *types.PartSet `json:"proposal_block_parts"`
LockedRound int `json:"locked_round"`
LockedBlock *types.Block `json:"locked_block"`
LockedBlockParts *types.PartSet `json:"locked_block_parts"`
ValidRound int `json:"valid_round"` // Last known round with POL for non-nil valid block.
ValidBlock *types.Block `json:"valid_block"` // Last known block of POL mentioned above.
ValidBlockParts *types.PartSet `json:"valid_block_parts"` // Last known block parts of POL mentioned above.
Votes *HeightVoteSet `json:"votes"`
CommitRound int `json:"commit_round"` //
LastCommit *types.VoteSet `json:"last_commit"` // Last precommits at Height-1
LastValidators *types.ValidatorSet `json:"last_validators"`
TriggeredTimeoutPrecommit bool `json:"triggered_timeout_precommit"`
}
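// The new TriggeredTimeoutPrecommit field records whether the precommit
// timeout has already been scheduled for this round; the tests above assert
// it is reset to false at the start of each round and each height, so the
// timeout cannot fire twice for the same round.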
// Compressed version of the RoundState for use in RPC
@@ -112,18 +113,46 @@ func (rs *RoundState) RoundStateSimple() RoundStateSimple {
}
}
// NewRoundEvent returns the RoundState with proposer information as an event.
func (rs *RoundState) NewRoundEvent() types.EventDataNewRound {
addr := rs.Validators.GetProposer().Address
idx, _ := rs.Validators.GetByAddress(addr)
return types.EventDataNewRound{
Height: rs.Height,
Round: rs.Round,
Step: rs.Step.String(),
Proposer: types.ValidatorInfo{
Address: addr,
Index: idx,
},
}
}
// CompleteProposalEvent returns information about a proposed block as an event.
func (rs *RoundState) CompleteProposalEvent() types.EventDataCompleteProposal {
// We must construct BlockID from ProposalBlock and ProposalBlockParts
// cs.Proposal is not guaranteed to be set when this function is called
blockId := types.BlockID{
Hash: rs.ProposalBlock.Hash(),
PartsHeader: rs.ProposalBlockParts.Header(),
}
return types.EventDataCompleteProposal{
Height: rs.Height,
Round: rs.Round,
Step: rs.Step.String(),
BlockID: blockId,
}
}
// RoundStateEvent returns the H/R/S of the RoundState as an event.
func (rs *RoundState) RoundStateEvent() types.EventDataRoundState {
// copy the RoundState.
// TODO: if we want to avoid this, we may need synchronous events after all
rsCopy := *rs
edrs := types.EventDataRoundState{
Height: rs.Height,
Round: rs.Round,
Step: rs.Step.String(),
RoundState: &rsCopy,
return types.EventDataRoundState{
Height: rs.Height,
Round: rs.Round,
Step: rs.Step.String(),
}
return edrs
}
// String returns a string
@@ -169,3 +198,31 @@ func (rs *RoundState) StringShort() string {
return fmt.Sprintf(`RoundState{H:%v R:%v S:%v ST:%v}`,
rs.Height, rs.Round, rs.Step, rs.StartTime)
}
//-----------------------------------------------------------
// These methods are for Protobuf Compatibility
// Size returns the size of the amino encoding, in bytes.
func (rs *RoundStateSimple) Size() int {
bs, _ := rs.Marshal()
return len(bs)
}
// Marshal returns the amino encoding.
func (rs *RoundStateSimple) Marshal() ([]byte, error) {
return cdc.MarshalBinaryBare(rs)
}
// MarshalTo calls Marshal and copies to the given buffer.
func (rs *RoundStateSimple) MarshalTo(data []byte) (int, error) {
bs, err := rs.Marshal()
if err != nil {
return -1, err
}
return copy(data, bs), nil
}
// Unmarshal deserializes from amino encoded form.
func (rs *RoundStateSimple) Unmarshal(bs []byte) error {
return cdc.UnmarshalBinaryBare(bs, rs)
}
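// RoundStateSimple gets the same protobuf-compatibility surface as
// PeerRoundState above; the round-trip sketch shown there applies here too.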


@@ -3,7 +3,7 @@ package types
import (
"testing"
"github.com/tendermint/go-amino"
amino "github.com/tendermint/go-amino"
"github.com/tendermint/tendermint/crypto/ed25519"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/types"
@@ -16,7 +16,7 @@ func BenchmarkRoundStateDeepCopy(b *testing.B) {
// Random validators
nval, ntxs := 100, 100
vset, _ := types.RandValidatorSet(nval, 1)
precommits := make([]*types.Vote, nval)
precommits := make([]*types.CommitSig, nval)
blockID := types.BlockID{
Hash: cmn.RandBytes(20),
PartsHeader: types.PartSetHeader{
@@ -25,12 +25,12 @@ func BenchmarkRoundStateDeepCopy(b *testing.B) {
}
sig := make([]byte, ed25519.SignatureSize)
for i := 0; i < nval; i++ {
precommits[i] = &types.Vote{
precommits[i] = (&types.Vote{
ValidatorAddress: types.Address(cmn.RandBytes(20)),
Timestamp: tmtime.Now(),
BlockID: blockID,
Signature: sig,
}
}).CommitSig()
}
txs := make([]types.Tx, ntxs)
for i := 0; i < ntxs; i++ {
@@ -53,11 +53,8 @@ func BenchmarkRoundStateDeepCopy(b *testing.B) {
Data: types.Data{
Txs: txs,
},
Evidence: types.EvidenceData{},
LastCommit: &types.Commit{
BlockID: blockID,
Precommits: precommits,
},
Evidence: types.EvidenceData{},
LastCommit: types.NewCommit(blockID, precommits),
}
parts := block.MakePartSet(4096)
// Random Proposal


@@ -13,6 +13,7 @@ import (
amino "github.com/tendermint/go-amino"
auto "github.com/tendermint/tendermint/libs/autofile"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/types"
tmtime "github.com/tendermint/tendermint/types/time"
)
@@ -20,6 +21,9 @@ import (
const (
// must be greater than types.BlockPartSizeBytes + a few bytes
maxMsgSizeBytes = 1024 * 1024 // 1MB
// how often the WAL should be synced during periodic syncing
walDefaultFlushInterval = 2 * time.Second
)
//--------------------------------------------------------
@@ -53,26 +57,36 @@ func RegisterWALMessages(cdc *amino.Codec) {
type WAL interface {
Write(WALMessage)
WriteSync(WALMessage)
Group() *auto.Group
SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error)
FlushAndSync() error
SearchForEndHeight(height int64, options *WALSearchOptions) (rd io.ReadCloser, found bool, err error)
// service methods
Start() error
Stop() error
Wait()
}
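// Typical lifecycle implied by the interface above (a hedged sketch; error
// handling elided):
//
//	wal, _ := NewWAL(walFile)
//	_ = wal.Start()
//	wal.Write(msg)         // buffered write, no fsync
//	_ = wal.FlushAndSync() // explicit flush + fsync
//	wal.Stop()
//	wal.Wait()             // wait for shutdown before removing files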
// Write ahead logger writes msgs to disk before they are processed.
// Can be used for crash-recovery and deterministic replay
// TODO: currently the wal is overwritten during replay catchup
// give it a mode so it's either reading or appending - must read to end to start appending again
// Can be used for crash-recovery and deterministic replay.
// TODO: currently the wal is overwritten during replay catchup, give it a mode
// so it's either reading or appending - must read to end to start appending
// again.
type baseWAL struct {
cmn.BaseService
group *auto.Group
enc *WALEncoder
flushTicker *time.Ticker
flushInterval time.Duration
}
var _ WAL = &baseWAL{}
// NewWAL returns a new write-ahead logger based on `baseWAL`, which implements
// WAL. It's flushed and synced to disk every 2s and once when stopped.
func NewWAL(walFile string, groupOptions ...func(*auto.Group)) (*baseWAL, error) {
err := cmn.EnsureDir(filepath.Dir(walFile), 0700)
if err != nil {
@@ -84,17 +98,28 @@ func NewWAL(walFile string, groupOptions ...func(*auto.Group)) (*baseWAL, error)
return nil, err
}
wal := &baseWAL{
group: group,
enc: NewWALEncoder(group),
group: group,
enc: NewWALEncoder(group),
flushInterval: walDefaultFlushInterval,
}
wal.BaseService = *cmn.NewBaseService(nil, "baseWAL", wal)
return wal, nil
}
// SetFlushInterval allows us to override the periodic flush interval for the WAL.
func (wal *baseWAL) SetFlushInterval(i time.Duration) {
wal.flushInterval = i
}
func (wal *baseWAL) Group() *auto.Group {
return wal.group
}
func (wal *baseWAL) SetLogger(l log.Logger) {
wal.BaseService.Logger = l
wal.group.SetLogger(l)
}
func (wal *baseWAL) OnStart() error {
size, err := wal.group.Head.Size()
if err != nil {
@@ -103,14 +128,49 @@ func (wal *baseWAL) OnStart() error {
wal.WriteSync(EndHeightMessage{0})
}
err = wal.group.Start()
return err
if err != nil {
return err
}
wal.flushTicker = time.NewTicker(wal.flushInterval)
go wal.processFlushTicks()
return nil
}
func (wal *baseWAL) processFlushTicks() {
for {
select {
case <-wal.flushTicker.C:
if err := wal.FlushAndSync(); err != nil {
wal.Logger.Error("Periodic WAL flush failed", "err", err)
}
case <-wal.Quit():
return
}
}
}
// FlushAndSync flushes and fsync's the underlying group's data to disk.
// See auto#FlushAndSync
func (wal *baseWAL) FlushAndSync() error {
return wal.group.FlushAndSync()
}
// Stop the underlying autofile group.
// Use Wait() to ensure it's finished shutting down
// before cleaning up files.
func (wal *baseWAL) OnStop() {
wal.flushTicker.Stop()
wal.FlushAndSync()
wal.group.Stop()
wal.group.Close()
}
// Wait for the underlying autofile group to finish shutting down
// so it's safe to cleanup files.
func (wal *baseWAL) Wait() {
wal.group.Wait()
}
// Write is called in newStep and for each receive on the
// peerMsgQueue and the timeoutTicker.
// NOTE: does not call fsync()
@@ -134,7 +194,7 @@ func (wal *baseWAL) WriteSync(msg WALMessage) {
}
wal.Write(msg)
if err := wal.group.Flush(); err != nil {
if err := wal.FlushAndSync(); err != nil {
panic(fmt.Sprintf("Error flushing consensus wal buf to file. Error: %v \n", err))
}
}
@@ -150,14 +210,17 @@ type WALSearchOptions struct {
// The returned reader will be nil if found is false.
//
// CONTRACT: caller must close the returned reader.
func (wal *baseWAL) SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error) {
var msg *TimedWALMessage
func (wal *baseWAL) SearchForEndHeight(height int64, options *WALSearchOptions) (rd io.ReadCloser, found bool, err error) {
var (
msg *TimedWALMessage
gr *auto.GroupReader
)
lastHeightFound := int64(-1)
// NOTE: starting from the last file in the group because we're usually
// searching for the last height. See replay.go
min, max := wal.group.MinIndex(), wal.group.MaxIndex()
wal.Logger.Debug("Searching for height", "height", height, "min", min, "max", max)
wal.Logger.Info("Searching for height", "height", height, "min", min, "max", max)
for index := max; index >= min; index-- {
gr, err = wal.group.NewReader(index)
if err != nil {
@@ -177,7 +240,7 @@ func (wal *baseWAL) SearchForEndHeight(height int64, options *WALSearchOptions)
break
}
if options.IgnoreDataCorruptionErrors && IsDataCorruptionError(err) {
wal.Logger.Debug("Corrupted entry. Skipping...", "err", err)
wal.Logger.Error("Corrupted entry. Skipping...", "err", err)
// do nothing
continue
} else if err != nil {
@@ -188,7 +251,7 @@ func (wal *baseWAL) SearchForEndHeight(height int64, options *WALSearchOptions)
if m, ok := msg.Msg.(EndHeightMessage); ok {
lastHeightFound = m.Height
if m.Height == height { // found
wal.Logger.Debug("Found", "height", height, "index", index)
wal.Logger.Info("Found", "height", height, "index", index)
return gr, true, nil
}
}
@@ -213,12 +276,17 @@ func NewWALEncoder(wr io.Writer) *WALEncoder {
return &WALEncoder{wr}
}
// Encode writes the custom encoding of v to the stream.
// Encode writes the custom encoding of v to the stream. It returns an error if
// the amino-encoded size of v is greater than 1MB. Any error encountered
// during the write is also returned.
func (enc *WALEncoder) Encode(v *TimedWALMessage) error {
data := cdc.MustMarshalBinaryBare(v)
crc := crc32.Checksum(data, crc32c)
length := uint32(len(data))
if length > maxMsgSizeBytes {
return fmt.Errorf("Msg is too big: %d bytes, max: %d bytes", length, maxMsgSizeBytes)
}
totalLength := 8 + int(length)
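// frame layout (matching the decoder below):
// crc32c checksum (4 bytes, big-endian) | payload length (4 bytes, big-endian) | payload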
msg := make([]byte, totalLength)
@@ -275,31 +343,31 @@ func (dec *WALDecoder) Decode() (*TimedWALMessage, error) {
return nil, err
}
if err != nil {
return nil, fmt.Errorf("failed to read checksum: %v", err)
return nil, DataCorruptionError{fmt.Errorf("failed to read checksum: %v", err)}
}
crc := binary.BigEndian.Uint32(b)
b = make([]byte, 4)
_, err = dec.rd.Read(b)
if err != nil {
return nil, fmt.Errorf("failed to read length: %v", err)
return nil, DataCorruptionError{fmt.Errorf("failed to read length: %v", err)}
}
length := binary.BigEndian.Uint32(b)
if length > maxMsgSizeBytes {
return nil, fmt.Errorf("length %d exceeded maximum possible value of %d bytes", length, maxMsgSizeBytes)
return nil, DataCorruptionError{fmt.Errorf("length %d exceeded maximum possible value of %d bytes", length, maxMsgSizeBytes)}
}
data := make([]byte, length)
_, err = dec.rd.Read(data)
n, err := dec.rd.Read(data)
if err != nil {
return nil, fmt.Errorf("failed to read data: %v", err)
return nil, DataCorruptionError{fmt.Errorf("failed to read data: %v (read: %d, wanted: %d)", err, n, length)}
}
// check checksum before decoding data
actualCRC := crc32.Checksum(data, crc32c)
if actualCRC != crc {
return nil, DataCorruptionError{fmt.Errorf("checksums do not match: (read: %v, actual: %v)", crc, actualCRC)}
return nil, DataCorruptionError{fmt.Errorf("checksums do not match: read: %v, actual: %v", crc, actualCRC)}
}
var res = new(TimedWALMessage) // nolint: gosimple
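// Wrapping short reads, bad lengths, and checksum mismatches in
// DataCorruptionError (rather than returning plain errors) lets callers tell
// a corrupted entry apart from other I/O failures; SearchForEndHeight relies
// on IsDataCorruptionError plus WALSearchOptions.IgnoreDataCorruptionErrors
// to skip such entries.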
@@ -313,10 +381,12 @@ func (dec *WALDecoder) Decode() (*TimedWALMessage, error) {
type nilWAL struct{}
var _ WAL = nilWAL{}
func (nilWAL) Write(m WALMessage) {}
func (nilWAL) WriteSync(m WALMessage) {}
func (nilWAL) Group() *auto.Group { return nil }
func (nilWAL) SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error) {
func (nilWAL) FlushAndSync() error { return nil }
func (nilWAL) SearchForEndHeight(height int64, options *WALSearchOptions) (rd io.ReadCloser, found bool, err error) {
return nil, false, nil
}
func (nilWAL) Start() error { return nil }


@@ -5,31 +5,31 @@ import (
"bytes"
"fmt"
"io"
"os"
"path/filepath"
"strings"
"testing"
"time"
"github.com/pkg/errors"
"github.com/tendermint/tendermint/abci/example/kvstore"
bc "github.com/tendermint/tendermint/blockchain"
cfg "github.com/tendermint/tendermint/config"
auto "github.com/tendermint/tendermint/libs/autofile"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/libs/db"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/mock"
"github.com/tendermint/tendermint/privval"
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
)
// WALGenerateNBlocks generates a consensus WAL. It does this by spining up a
// WALGenerateNBlocks generates a consensus WAL. It does this by spinning up a
// stripped down version of node (proxy app, event bus, consensus state) with a
// persistent kvstore application and a special consensus WAL instance
// (byteBufferWAL), and waits until numBlocks are created. If the node fails
// to produce the given numBlocks, it returns an error.
func WALGenerateNBlocks(wr io.Writer, numBlocks int) (err error) {
config := getConfig()
func WALGenerateNBlocks(t *testing.T, wr io.Writer, numBlocks int) (err error) {
config := getConfig(t)
app := kvstore.NewPersistentKVStoreApplication(filepath.Join(config.DBDir(), "wal_generator"))
@@ -40,19 +40,21 @@ func WALGenerateNBlocks(wr io.Writer, numBlocks int) (err error) {
// COPY PASTE FROM node.go WITH A FEW MODIFICATIONS
// NOTE: we can't import node package because of circular dependency.
// NOTE: we don't do handshake so need to set state.Version.Consensus.App directly.
privValidatorFile := config.PrivValidatorFile()
privValidator := privval.LoadOrGenFilePV(privValidatorFile)
privValidatorKeyFile := config.PrivValidatorKeyFile()
privValidatorStateFile := config.PrivValidatorStateFile()
privValidator := privval.LoadOrGenFilePV(privValidatorKeyFile, privValidatorStateFile)
genDoc, err := types.GenesisDocFromFile(config.GenesisFile())
if err != nil {
return errors.Wrap(err, "failed to read genesis file")
}
stateDB := db.NewMemDB()
blockStoreDB := db.NewMemDB()
stateDB := blockStoreDB
state, err := sm.MakeGenesisState(genDoc)
if err != nil {
return errors.Wrap(err, "failed to make genesis state")
}
state.Version.Consensus.App = kvstore.ProtocolVersion
sm.SaveState(stateDB, state)
blockStore := bc.NewBlockStore(blockStoreDB)
proxyApp := proxy.NewAppConns(proxy.NewLocalClientCreator(app))
proxyApp.SetLogger(logger.With("module", "proxy"))
@@ -67,7 +69,7 @@ func WALGenerateNBlocks(wr io.Writer, numBlocks int) (err error) {
return errors.Wrap(err, "failed to start event bus")
}
defer eventBus.Stop()
mempool := sm.MockMempool{}
mempool := mock.Mempool{}
evpool := sm.MockEvidencePool{}
blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyApp.Consensus(), mempool, evpool)
consensusState := NewConsensusState(config.Consensus, state.Copy(), blockExec, blockStore, mempool, evpool)
@@ -101,11 +103,11 @@ func WALGenerateNBlocks(wr io.Writer, numBlocks int) (err error) {
}
// WALWithNBlocks returns WAL content with numBlocks blocks.
func WALWithNBlocks(numBlocks int) (data []byte, err error) {
func WALWithNBlocks(t *testing.T, numBlocks int) (data []byte, err error) {
var b bytes.Buffer
wr := bufio.NewWriter(&b)
if err := WALGenerateNBlocks(wr, numBlocks); err != nil {
if err := WALGenerateNBlocks(t, wr, numBlocks); err != nil {
return []byte{}, err
}
@@ -113,18 +115,6 @@ func WALWithNBlocks(numBlocks int) (data []byte, err error) {
return b.Bytes(), nil
}
// f**ing long, but unique for each test
func makePathname() string {
// get path
p, err := os.Getwd()
if err != nil {
panic(err)
}
// fmt.Println(p)
sep := string(filepath.Separator)
return strings.Replace(p, sep, "_", -1)
}
func randPort() int {
// returns between base and base + spread
base, spread := 20000, 20000
@@ -139,9 +129,8 @@ func makeAddrs() (string, string, string) {
}
// getConfig returns a config for test cases
func getConfig() *cfg.Config {
pathname := makePathname()
c := cfg.ResetTestRoot(fmt.Sprintf("%s_%d", pathname, cmn.RandInt()))
func getConfig(t *testing.T) *cfg.Config {
c := cfg.ResetTestRoot(t.Name())
// and we use random ports to run in parallel
tm, rpc, grpc := makeAddrs()
@@ -205,10 +194,9 @@ func (w *byteBufferWAL) WriteSync(m WALMessage) {
w.Write(m)
}
func (w *byteBufferWAL) Group() *auto.Group {
panic("not implemented")
}
func (w *byteBufferWAL) SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error) {
func (w *byteBufferWAL) FlushAndSync() error { return nil }
func (w *byteBufferWAL) SearchForEndHeight(height int64, options *WALSearchOptions) (rd io.ReadCloser, found bool, err error) {
return nil, false, nil
}


@@ -3,7 +3,6 @@ package consensus
import (
"bytes"
"crypto/rand"
"fmt"
"io/ioutil"
"os"
"path/filepath"
@@ -14,6 +13,7 @@ import (
"github.com/tendermint/tendermint/consensus/types"
"github.com/tendermint/tendermint/libs/autofile"
"github.com/tendermint/tendermint/libs/log"
tmtypes "github.com/tendermint/tendermint/types"
tmtime "github.com/tendermint/tendermint/types/time"
@@ -21,41 +21,51 @@ import (
"github.com/stretchr/testify/require"
)
const (
walTestFlushInterval = time.Duration(100) * time.Millisecond
)
func TestWALTruncate(t *testing.T) {
walDir, err := ioutil.TempDir("", "wal")
if err != nil {
panic(fmt.Errorf("failed to create temp WAL file: %v", err))
}
require.NoError(t, err)
defer os.RemoveAll(walDir)
walFile := filepath.Join(walDir, "wal")
//this magic number 4K can truncate the content when RotateFile. defaultHeadSizeLimit(10M) is hard to simulate.
//this magic number 1 * time.Millisecond make RotateFile check frequently. defaultGroupCheckDuration(5s) is hard to simulate.
wal, err := NewWAL(walFile, autofile.GroupHeadSizeLimit(4096), autofile.GroupCheckDuration(1*time.Millisecond))
if err != nil {
t.Fatal(err)
}
// the magic number 4K can truncate the content when RotateFile is called;
// defaultHeadSizeLimit (10M) is hard to simulate.
// the magic number 1 * time.Millisecond makes RotateFile check frequently;
// defaultGroupCheckDuration (5s) is hard to simulate.
wal, err := NewWAL(walFile,
autofile.GroupHeadSizeLimit(4096),
autofile.GroupCheckDuration(1*time.Millisecond),
)
require.NoError(t, err)
wal.SetLogger(log.TestingLogger())
err = wal.Start()
require.NoError(t, err)
defer func() {
wal.Stop()
// wait for the wal to finish shutting down so we
// can safely remove the directory
wal.Wait()
}()
wal.Start()
defer wal.Stop()
//60 block's size nearly 70K, greater than group's headBuf size(4096 * 10), when headBuf is full, truncate content will Flush to the file.
//at this time, RotateFile is called, truncate content exist in each file.
err = WALGenerateNBlocks(wal.Group(), 60)
if err != nil {
t.Fatal(err)
}
// 60 blocks take roughly 70K, more than the group's headBuf size (4096 * 10).
// When headBuf is full, the truncated content is flushed to the file; at that
// point RotateFile is called, so truncated content exists in each file.
err = WALGenerateNBlocks(t, wal.Group(), 60)
require.NoError(t, err)
time.Sleep(1 * time.Millisecond) // wait groupCheckDuration to make sure RotateFile runs
wal.Group().Flush()
wal.FlushAndSync()
h := int64(50)
gr, found, err := wal.SearchForEndHeight(h, &WALSearchOptions{})
assert.NoError(t, err, fmt.Sprintf("expected not to err on height %d", h))
assert.True(t, found, fmt.Sprintf("expected to find end height for %d", h))
assert.NotNil(t, gr, "expected group not to be nil")
assert.NoError(t, err, "expected not to err on height %d", h)
assert.True(t, found, "expected to find end height for %d", h)
assert.NotNil(t, gr)
defer gr.Close()
dec := NewWALDecoder(gr)
@@ -63,14 +73,14 @@ func TestWALTruncate(t *testing.T) {
assert.NoError(t, err, "expected to decode a message")
rs, ok := msg.Msg.(tmtypes.EventDataRoundState)
assert.True(t, ok, "expected message of type EventDataRoundState")
assert.Equal(t, rs.Height, h+1, fmt.Sprintf("wrong height"))
assert.Equal(t, rs.Height, h+1, "wrong height")
}
func TestWALEncoderDecoder(t *testing.T) {
now := tmtime.Now()
msgs := []TimedWALMessage{
TimedWALMessage{Time: now, Msg: EndHeightMessage{0}},
TimedWALMessage{Time: now, Msg: timeoutInfo{Duration: time.Second, Height: 1, Round: 1, Step: types.RoundStepPropose}},
{Time: now, Msg: EndHeightMessage{0}},
{Time: now, Msg: timeoutInfo{Duration: time.Second, Height: 1, Round: 1, Step: types.RoundStepPropose}},
}
b := new(bytes.Buffer)
@@ -91,23 +101,42 @@ func TestWALEncoderDecoder(t *testing.T) {
}
}
func TestWALWritePanicsIfMsgIsTooBig(t *testing.T) {
walDir, err := ioutil.TempDir("", "wal")
require.NoError(t, err)
defer os.RemoveAll(walDir)
walFile := filepath.Join(walDir, "wal")
wal, err := NewWAL(walFile)
require.NoError(t, err)
err = wal.Start()
require.NoError(t, err)
defer func() {
wal.Stop()
// wait for the wal to finish shutting down so we
// can safely remove the directory
wal.Wait()
}()
assert.Panics(t, func() { wal.Write(make([]byte, maxMsgSizeBytes+1)) })
}
func TestWALSearchForEndHeight(t *testing.T) {
walBody, err := WALWithNBlocks(6)
walBody, err := WALWithNBlocks(t, 6)
if err != nil {
t.Fatal(err)
}
walFile := tempWALWithData(walBody)
wal, err := NewWAL(walFile)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)
wal.SetLogger(log.TestingLogger())
h := int64(3)
gr, found, err := wal.SearchForEndHeight(h, &WALSearchOptions{})
assert.NoError(t, err, fmt.Sprintf("expected not to err on height %d", h))
assert.True(t, found, fmt.Sprintf("expected to find end height for %d", h))
assert.NotNil(t, gr, "expected group not to be nil")
assert.NoError(t, err, "expected not to err on height %d", h)
assert.True(t, found, "expected to find end height for %d", h)
assert.NotNil(t, gr)
defer gr.Close()
dec := NewWALDecoder(gr)
@@ -115,7 +144,47 @@ func TestWALSearchForEndHeight(t *testing.T) {
assert.NoError(t, err, "expected to decode a message")
rs, ok := msg.Msg.(tmtypes.EventDataRoundState)
assert.True(t, ok, "expected message of type EventDataRoundState")
assert.Equal(t, rs.Height, h+1, fmt.Sprintf("wrong height"))
assert.Equal(t, rs.Height, h+1, "wrong height")
}
func TestWALPeriodicSync(t *testing.T) {
walDir, err := ioutil.TempDir("", "wal")
require.NoError(t, err)
defer os.RemoveAll(walDir)
walFile := filepath.Join(walDir, "wal")
wal, err := NewWAL(walFile, autofile.GroupCheckDuration(1*time.Millisecond))
require.NoError(t, err)
wal.SetFlushInterval(walTestFlushInterval)
wal.SetLogger(log.TestingLogger())
// Generate some data
err = WALGenerateNBlocks(t, wal.Group(), 5)
require.NoError(t, err)
// We should have data in the buffer now
assert.NotZero(t, wal.Group().Buffered())
require.NoError(t, wal.Start())
defer func() {
wal.Stop()
wal.Wait()
}()
time.Sleep(walTestFlushInterval + (10 * time.Millisecond))
// The data should have been flushed by the periodic sync
assert.Zero(t, wal.Group().Buffered())
h := int64(4)
gr, found, err := wal.SearchForEndHeight(h, &WALSearchOptions{})
assert.NoError(t, err, "expected not to err on height %d", h)
assert.True(t, found, "expected to find end height for %d", h)
assert.NotNil(t, gr)
if gr != nil {
gr.Close()
}
}
/*


@@ -1,7 +1,7 @@
package consensus
import (
"github.com/tendermint/go-amino"
amino "github.com/tendermint/go-amino"
"github.com/tendermint/tendermint/types"
)


@@ -5,7 +5,7 @@ import (
"fmt"
"io/ioutil"
"golang.org/x/crypto/openpgp/armor" // forked to github.com/tendermint/crypto
"golang.org/x/crypto/openpgp/armor"
)
func EncodeArmor(blockType string, headers map[string]string, data []byte) string {


@@ -37,9 +37,6 @@
// sum := crypto.Sha256([]byte("This is Tendermint"))
// fmt.Printf("%x\n", sum)
// Ripemd160
// sum := crypto.Ripemd160([]byte("This is consensus"))
// fmt.Printf("%x\n", sum)
package crypto
// TODO: Add more docs in here


@@ -7,7 +7,7 @@ import (
"io"
amino "github.com/tendermint/go-amino"
"golang.org/x/crypto/ed25519" // forked to github.com/tendermint/crypto
"golang.org/x/crypto/ed25519"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/tmhash"
@@ -18,8 +18,8 @@ import (
var _ crypto.PrivKey = PrivKeyEd25519{}
const (
PrivKeyAminoRoute = "tendermint/PrivKeyEd25519"
PubKeyAminoRoute = "tendermint/PubKeyEd25519"
PrivKeyAminoName = "tendermint/PrivKeyEd25519"
PubKeyAminoName = "tendermint/PubKeyEd25519"
// Size of an Edwards25519 signature. Namely the size of a compressed
// Edwards25519 point, and a field element. Both of which are 32 bytes.
SignatureSize = 64
@@ -30,11 +30,11 @@ var cdc = amino.NewCodec()
func init() {
cdc.RegisterInterface((*crypto.PubKey)(nil), nil)
cdc.RegisterConcrete(PubKeyEd25519{},
PubKeyAminoRoute, nil)
PubKeyAminoName, nil)
cdc.RegisterInterface((*crypto.PrivKey)(nil), nil)
cdc.RegisterConcrete(PrivKeyEd25519{},
PrivKeyAminoRoute, nil)
PrivKeyAminoName, nil)
}
// PrivKeyEd25519 implements crypto.PrivKey.


@@ -12,11 +12,11 @@ import (
var cdc = amino.NewCodec()
// routeTable is used to map public key concrete types back
// to their amino routes. This should eventually be handled
// nameTable is used to map public key concrete types back
// to their registered amino names. This should eventually be handled
// by amino. Example usage:
// routeTable[reflect.TypeOf(ed25519.PubKeyEd25519{})] = ed25519.PubKeyAminoRoute
var routeTable = make(map[reflect.Type]string, 3)
// nameTable[reflect.TypeOf(ed25519.PubKeyEd25519{})] = ed25519.PubKeyAminoName
var nameTable = make(map[reflect.Type]string, 3)
func init() {
// NOTE: It's important that there be no conflicts here,
@@ -29,16 +29,16 @@ func init() {
// TODO: Have amino provide a way to go from concrete struct to route directly.
// It's currently a private API
routeTable[reflect.TypeOf(ed25519.PubKeyEd25519{})] = ed25519.PubKeyAminoRoute
routeTable[reflect.TypeOf(secp256k1.PubKeySecp256k1{})] = secp256k1.PubKeyAminoRoute
routeTable[reflect.TypeOf(&multisig.PubKeyMultisigThreshold{})] = multisig.PubKeyMultisigThresholdAminoRoute
nameTable[reflect.TypeOf(ed25519.PubKeyEd25519{})] = ed25519.PubKeyAminoName
nameTable[reflect.TypeOf(secp256k1.PubKeySecp256k1{})] = secp256k1.PubKeyAminoName
nameTable[reflect.TypeOf(multisig.PubKeyMultisigThreshold{})] = multisig.PubKeyMultisigThresholdAminoRoute
}
// PubkeyAminoRoute returns the amino route of a pubkey
// PubkeyAminoName returns the amino name of a pubkey
// cdc is currently passed in, as eventually this will not be using
// a package level codec.
func PubkeyAminoRoute(cdc *amino.Codec, key crypto.PubKey) (string, bool) {
route, found := routeTable[reflect.TypeOf(key)]
func PubkeyAminoName(cdc *amino.Codec, key crypto.PubKey) (string, bool) {
route, found := nameTable[reflect.TypeOf(key)]
return route, found
}
@@ -47,17 +47,17 @@ func RegisterAmino(cdc *amino.Codec) {
// These are all written here instead of
cdc.RegisterInterface((*crypto.PubKey)(nil), nil)
cdc.RegisterConcrete(ed25519.PubKeyEd25519{},
ed25519.PubKeyAminoRoute, nil)
ed25519.PubKeyAminoName, nil)
cdc.RegisterConcrete(secp256k1.PubKeySecp256k1{},
secp256k1.PubKeyAminoRoute, nil)
secp256k1.PubKeyAminoName, nil)
cdc.RegisterConcrete(multisig.PubKeyMultisigThreshold{},
multisig.PubKeyMultisigThresholdAminoRoute, nil)
cdc.RegisterInterface((*crypto.PrivKey)(nil), nil)
cdc.RegisterConcrete(ed25519.PrivKeyEd25519{},
ed25519.PrivKeyAminoRoute, nil)
ed25519.PrivKeyAminoName, nil)
cdc.RegisterConcrete(secp256k1.PrivKeySecp256k1{},
secp256k1.PrivKeyAminoRoute, nil)
secp256k1.PrivKeyAminoName, nil)
}
func PrivKeyFromBytes(privKeyBytes []byte) (privKey crypto.PrivKey, err error) {


@@ -25,9 +25,8 @@ func checkAminoBinary(t *testing.T, src, dst interface{}, size int) {
assert.Equal(t, byterSrc.Bytes(), bz, "Amino binary vs Bytes() mismatch")
}
// Make sure we have the expected length.
if size != -1 {
assert.Equal(t, size, len(bz), "Amino binary size mismatch")
}
assert.Equal(t, size, len(bz), "Amino binary size mismatch")
// Unmarshal.
err = cdc.UnmarshalBinaryBare(bz, dst)
require.Nil(t, err, "%+v", err)
@@ -48,6 +47,8 @@ func checkAminoJSON(t *testing.T, src interface{}, dst interface{}, isNil bool)
require.Nil(t, err, "%+v", err)
}
// ExamplePrintRegisteredTypes refers to unknown identifier: PrintRegisteredTypes
//nolint:govet
func ExamplePrintRegisteredTypes() {
cdc.PrintTypes(os.Stdout)
// Output: | Type | Name | Prefix | Length | Notes |
@@ -128,18 +129,18 @@ func TestPubKeyInvalidDataProperReturnsEmpty(t *testing.T) {
require.Nil(t, pk)
}
func TestPubkeyAminoRoute(t *testing.T) {
func TestPubkeyAminoName(t *testing.T) {
tests := []struct {
key crypto.PubKey
want string
found bool
}{
{ed25519.PubKeyEd25519{}, ed25519.PubKeyAminoRoute, true},
{secp256k1.PubKeySecp256k1{}, secp256k1.PubKeyAminoRoute, true},
{&multisig.PubKeyMultisigThreshold{}, multisig.PubKeyMultisigThresholdAminoRoute, true},
{ed25519.PubKeyEd25519{}, ed25519.PubKeyAminoName, true},
{secp256k1.PubKeySecp256k1{}, secp256k1.PubKeyAminoName, true},
{multisig.PubKeyMultisigThreshold{}, multisig.PubKeyMultisigThresholdAminoRoute, true},
}
for i, tc := range tests {
got, found := PubkeyAminoRoute(cdc, tc.key)
got, found := PubkeyAminoName(cdc, tc.key)
require.Equal(t, tc.found, found, "not equal on tc %d", i)
if tc.found {
require.Equal(t, tc.want, got, "not equal on tc %d", i)


@@ -26,10 +26,3 @@ func ExampleSha256() {
// Output:
// f91afb642f3d1c87c17eb01aae5cb65c242dfdbe7cf1066cc260f4ce5d33b94e
}
func ExampleRipemd160() {
sum := crypto.Ripemd160([]byte("This is Tendermint"))
fmt.Printf("%x\n", sum)
// Output:
// 051e22663e8f0fd2f2302f1210f954adff009005
}


@@ -2,8 +2,6 @@ package crypto
import (
"crypto/sha256"
"golang.org/x/crypto/ripemd160" // forked to github.com/tendermint/crypto
)
func Sha256(bytes []byte) []byte {
@@ -11,9 +9,3 @@ func Sha256(bytes []byte) []byte {
hasher.Write(bytes)
return hasher.Sum(nil)
}
func Ripemd160(bytes []byte) []byte {
hasher := ripemd160.New()
hasher.Write(bytes)
return hasher.Sum(nil)
}

crypto/merkle/hash.go (new file, 21 lines)

@@ -0,0 +1,21 @@
package merkle
import (
"github.com/tendermint/tendermint/crypto/tmhash"
)
// TODO: make these have a large predefined capacity
var (
leafPrefix = []byte{0}
innerPrefix = []byte{1}
)
// returns tmhash(0x00 || leaf)
func leafHash(leaf []byte) []byte {
return tmhash.Sum(append(leafPrefix, leaf...))
}
// returns tmhash(0x01 || left || right)
func innerHash(left []byte, right []byte) []byte {
return tmhash.Sum(append(innerPrefix, append(left, right...)...))
}
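The 0x00/0x01 prefixes give RFC 6962-style domain separation between leaf and inner hashes. A quick, hedged check against the trillian-derived test vectors quoted later in this diff, using only the standard library (tmhash is SHA-256 at full size here):

package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	// leafHash of an empty leaf = sha256(0x00); matches the
	// "RFC6962 Empty Leaf" vector 6e340b9c...
	empty := sha256.Sum256([]byte{0x00})
	fmt.Printf("%x\n", empty)

	// innerHash("N123", "N456") = sha256(0x01 || "N123" || "N456");
	// matches the "RFC6962 Node" vector aa217fe8...
	inner := sha256.Sum256(append([]byte{0x01}, []byte("N123N456")...))
	fmt.Printf("%x\n", inner)
}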


@@ -39,7 +39,7 @@ func (m *ProofOp) Reset() { *m = ProofOp{} }
func (m *ProofOp) String() string { return proto.CompactTextString(m) }
func (*ProofOp) ProtoMessage() {}
func (*ProofOp) Descriptor() ([]byte, []int) {
return fileDescriptor_merkle_5d3f6051907285da, []int{0}
return fileDescriptor_merkle_24be8bc4e405ac66, []int{0}
}
func (m *ProofOp) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -101,7 +101,7 @@ func (m *Proof) Reset() { *m = Proof{} }
func (m *Proof) String() string { return proto.CompactTextString(m) }
func (*Proof) ProtoMessage() {}
func (*Proof) Descriptor() ([]byte, []int) {
return fileDescriptor_merkle_5d3f6051907285da, []int{1}
return fileDescriptor_merkle_24be8bc4e405ac66, []int{1}
}
func (m *Proof) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -395,6 +395,9 @@ func encodeVarintPopulateMerkle(dAtA []byte, v uint64) []byte {
return dAtA
}
func (m *ProofOp) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = len(m.Type)
@@ -416,6 +419,9 @@ func (m *ProofOp) Size() (n int) {
}
func (m *Proof) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if len(m.Ops) > 0 {
@@ -772,9 +778,9 @@ var (
ErrIntOverflowMerkle = fmt.Errorf("proto: integer overflow")
)
func init() { proto.RegisterFile("crypto/merkle/merkle.proto", fileDescriptor_merkle_5d3f6051907285da) }
func init() { proto.RegisterFile("crypto/merkle/merkle.proto", fileDescriptor_merkle_24be8bc4e405ac66) }
var fileDescriptor_merkle_5d3f6051907285da = []byte{
var fileDescriptor_merkle_24be8bc4e405ac66 = []byte{
// 200 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x92, 0x4a, 0x2e, 0xaa, 0x2c,
0x28, 0xc9, 0xd7, 0xcf, 0x4d, 0x2d, 0xca, 0xce, 0x49, 0x85, 0x52, 0x7a, 0x05, 0x45, 0xf9, 0x25,


@@ -43,6 +43,9 @@ func (poz ProofOperators) Verify(root []byte, keypath string, args [][]byte) (er
for i, op := range poz {
key := op.GetKey()
if len(key) != 0 {
if len(keys) == 0 {
return cmn.NewError("Key path has insufficient # of parts: expected no more keys but got %+v", string(key))
}
lastKey := keys[len(keys)-1]
if !bytes.Equal(lastKey, key) {
return cmn.NewError("Key mismatch on operation #%d: expected %+v but got %+v", i, string(lastKey), string(key))
@@ -95,7 +98,7 @@ func (prt *ProofRuntime) Decode(pop ProofOp) (ProofOperator, error) {
}
func (prt *ProofRuntime) DecodeProof(proof *Proof) (ProofOperators, error) {
var poz ProofOperators
poz := make(ProofOperators, 0, len(proof.Ops))
for _, pop := range proof.Ops {
operator, err := prt.Decode(pop)
if err != nil {


@@ -1,6 +1,8 @@
package merkle
import (
// it is ok to use math/rand here: we do not need a cryptographically secure
// random number generator, and the tests run a bit faster
"math/rand"
"testing"
@@ -24,7 +26,7 @@ func TestKeyPath(t *testing.T) {
keys[i][j] = alphanum[rand.Intn(len(alphanum))]
}
case KeyEncodingHex:
rand.Read(keys[i])
rand.Read(keys[i]) //nolint: gosec
default:
panic("Unexpected encoding")
}


@@ -71,11 +71,11 @@ func (op SimpleValueOp) Run(args [][]byte) ([][]byte, error) {
hasher.Write(value) // does not error
vhash := hasher.Sum(nil)
bz := new(bytes.Buffer)
// Wrap <op.Key, vhash> to hash the KVPair.
hasher = tmhash.New()
encodeByteSlice(hasher, []byte(op.key)) // does not error
encodeByteSlice(hasher, []byte(vhash)) // does not error
kvhash := hasher.Sum(nil)
encodeByteSlice(bz, []byte(op.key)) // does not error
encodeByteSlice(bz, []byte(vhash)) // does not error
kvhash := leafHash(bz.Bytes())
if !bytes.Equal(kvhash, op.Proof.LeafHash) {
return nil, cmn.NewError("leaf hash mismatch: want %X got %X", op.Proof.LeafHash, kvhash)
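The rewritten SimpleValueOp.Run now leaf-hashes the KVPair as tmhash(0x00 || encode(key) || encode(tmhash(value))). A hedged, self-contained restatement, assuming encodeByteSlice is amino's uvarint length-prefixed byte-slice encoding (as elsewhere in this package) and that tmhash is full-size SHA-256:

package main

import (
	"bytes"
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// encodeByteSlice mirrors amino's length-prefixed encoding (assumption).
func encodeByteSlice(w *bytes.Buffer, bz []byte) {
	var lenBuf [binary.MaxVarintLen64]byte
	n := binary.PutUvarint(lenBuf[:], uint64(len(bz)))
	w.Write(lenBuf[:n])
	w.Write(bz)
}

// leafHash as defined in crypto/merkle/hash.go above: sha256(0x00 || data).
func leafHash(data []byte) []byte {
	h := sha256.Sum256(append([]byte{0}, data...))
	return h[:]
}

// kvLeafHash mirrors the new leaf construction in SimpleValueOp.Run.
func kvLeafHash(key, value []byte) []byte {
	vhash := sha256.Sum256(value)
	buf := new(bytes.Buffer)
	encodeByteSlice(buf, key)
	encodeByteSlice(buf, vhash[:])
	return leafHash(buf.Bytes())
}

func main() {
	fmt.Printf("%x\n", kvLeafHash([]byte("key1"), []byte("value1")))
}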


@@ -4,7 +4,7 @@ import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/tendermint/go-amino"
amino "github.com/tendermint/go-amino"
cmn "github.com/tendermint/tendermint/libs/common"
)
@@ -26,6 +26,7 @@ func NewDominoOp(key, input, output string) DominoOp {
}
}
//nolint:unused
func DominoOpDecoder(pop ProofOp) (ProofOperator, error) {
if pop.Type != ProofOpDomino {
panic("unexpected proof op type")
@@ -107,6 +108,10 @@ func TestProofOperators(t *testing.T) {
err = popz.Verify(bz("OUTPUT4"), "//KEY4/KEY2/KEY1", [][]byte{bz("INPUT1")})
assert.NotNil(t, err)
// BAD KEY 5
err = popz.Verify(bz("OUTPUT4"), "/KEY2/KEY1", [][]byte{bz("INPUT1")})
assert.NotNil(t, err)
// BAD OUTPUT 1
err = popz.Verify(bz("OUTPUT4_WRONG"), "/KEY4/KEY2/KEY1", [][]byte{bz("INPUT1")})
assert.NotNil(t, err)


@@ -0,0 +1,97 @@
package merkle
// Copyright 2016 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// These tests were taken from https://github.com/google/trillian/blob/master/merkle/rfc6962/rfc6962_test.go,
// and consequently fall under the above license.
import (
"bytes"
"encoding/hex"
"testing"
"github.com/tendermint/tendermint/crypto/tmhash"
)
func TestRFC6962Hasher(t *testing.T) {
_, leafHashTrail := trailsFromByteSlices([][]byte{[]byte("L123456")})
leafHash := leafHashTrail.Hash
_, leafHashTrail = trailsFromByteSlices([][]byte{{}})
emptyLeafHash := leafHashTrail.Hash
for _, tc := range []struct {
desc string
got []byte
want string
}{
// Since creating a merkle tree of no leaves is unsupported here, we skip
// the corresponding trillian test vector.
// Check that the empty hash is not the same as the hash of an empty leaf.
// echo -n 00 | xxd -r -p | sha256sum
{
desc: "RFC6962 Empty Leaf",
want: "6e340b9cffb37a989ca544e6bb780a2c78901d3fb33738768511a30617afa01d"[:tmhash.Size*2],
got: emptyLeafHash,
},
// echo -n 004C313233343536 | xxd -r -p | sha256sum
{
desc: "RFC6962 Leaf",
want: "395aa064aa4c29f7010acfe3f25db9485bbd4b91897b6ad7ad547639252b4d56"[:tmhash.Size*2],
got: leafHash,
},
// echo -n 014E3132334E343536 | xxd -r -p | sha256sum
{
desc: "RFC6962 Node",
want: "aa217fe888e47007fa15edab33c2b492a722cb106c64667fc2b044444de66bbb"[:tmhash.Size*2],
got: innerHash([]byte("N123"), []byte("N456")),
},
} {
t.Run(tc.desc, func(t *testing.T) {
wantBytes, err := hex.DecodeString(tc.want)
if err != nil {
t.Fatalf("hex.DecodeString(%x): %v", tc.want, err)
}
if got, want := tc.got, wantBytes; !bytes.Equal(got, want) {
t.Errorf("got %x, want %x", got, want)
}
})
}
}
func TestRFC6962HasherCollisions(t *testing.T) {
// Check that different leaves have different hashes.
leaf1, leaf2 := []byte("Hello"), []byte("World")
_, leafHashTrail := trailsFromByteSlices([][]byte{leaf1})
hash1 := leafHashTrail.Hash
_, leafHashTrail = trailsFromByteSlices([][]byte{leaf2})
hash2 := leafHashTrail.Hash
if bytes.Equal(hash1, hash2) {
t.Errorf("Leaf hashes should differ, but both are %x", hash1)
}
// Compute an intermediate subtree hash.
_, subHash1Trail := trailsFromByteSlices([][]byte{hash1, hash2})
subHash1 := subHash1Trail.Hash
// Check that this is not the same as a leaf hash of their concatenation.
preimage := append(hash1, hash2...)
_, forgedHashTrail := trailsFromByteSlices([][]byte{preimage})
forgedHash := forgedHashTrail.Hash
if bytes.Equal(subHash1, forgedHash) {
t.Errorf("Hasher is not second-preimage resistant")
}
// Swap the order of nodes and check that the hash is different.
_, subHash2Trail := trailsFromByteSlices([][]byte{hash2, hash1})
subHash2 := subHash2Trail.Hash
if bytes.Equal(subHash1, subHash2) {
t.Errorf("Subtree hash does not depend on the order of leaves")
}
}


@@ -13,14 +13,14 @@ func TestSimpleMap(t *testing.T) {
values []string // each string gets converted to []byte in test
want string
}{
{[]string{"key1"}, []string{"value1"}, "321d150de16dceb51c72981b432b115045383259b1a550adf8dc80f927508967"},
{[]string{"key1"}, []string{"value2"}, "2a9e4baf321eac99f6eecc3406603c14bc5e85bb7b80483cbfc75b3382d24a2f"},
{[]string{"key1"}, []string{"value1"}, "a44d3cc7daba1a4600b00a2434b30f8b970652169810d6dfa9fb1793a2189324"},
{[]string{"key1"}, []string{"value2"}, "0638e99b3445caec9d95c05e1a3fc1487b4ddec6a952ff337080360b0dcc078c"},
// swap order with 2 keys
{[]string{"key1", "key2"}, []string{"value1", "value2"}, "c4d8913ab543ba26aa970646d4c99a150fd641298e3367cf68ca45fb45a49881"},
{[]string{"key2", "key1"}, []string{"value2", "value1"}, "c4d8913ab543ba26aa970646d4c99a150fd641298e3367cf68ca45fb45a49881"},
{[]string{"key1", "key2"}, []string{"value1", "value2"}, "8fd19b19e7bb3f2b3ee0574027d8a5a4cec370464ea2db2fbfa5c7d35bb0cff3"},
{[]string{"key2", "key1"}, []string{"value2", "value1"}, "8fd19b19e7bb3f2b3ee0574027d8a5a4cec370464ea2db2fbfa5c7d35bb0cff3"},
// swap order with 3 keys
{[]string{"key1", "key2", "key3"}, []string{"value1", "value2", "value3"}, "b23cef00eda5af4548a213a43793f2752d8d9013b3f2b64bc0523a4791196268"},
{[]string{"key1", "key3", "key2"}, []string{"value1", "value3", "value2"}, "b23cef00eda5af4548a213a43793f2752d8d9013b3f2b64bc0523a4791196268"},
{[]string{"key1", "key2", "key3"}, []string{"value1", "value2", "value3"}, "1dd674ec6782a0d586a903c9c63326a41cbe56b3bba33ed6ff5b527af6efb3dc"},
{[]string{"key1", "key3", "key2"}, []string{"value1", "value3", "value2"}, "1dd674ec6782a0d586a903c9c63326a41cbe56b3bba33ed6ff5b527af6efb3dc"},
}
for i, tc := range tests {
db := newSimpleMap()


@@ -5,7 +5,6 @@ import (
"errors"
"fmt"
"github.com/tendermint/tendermint/crypto/tmhash"
cmn "github.com/tendermint/tendermint/libs/common"
)
@@ -67,7 +66,8 @@ func SimpleProofsFromMap(m map[string][]byte) (rootHash []byte, proofs map[strin
// Verify that the SimpleProof proves the root hash.
// Check sp.Index/sp.Total manually if needed
func (sp *SimpleProof) Verify(rootHash []byte, leafHash []byte) error {
func (sp *SimpleProof) Verify(rootHash []byte, leaf []byte) error {
leafHash := leafHash(leaf)
if sp.Total < 0 {
return errors.New("Proof total must be positive")
}
@@ -128,19 +128,19 @@ func computeHashFromAunts(index int, total int, leafHash []byte, innerHashes [][
if len(innerHashes) == 0 {
return nil
}
numLeft := (total + 1) / 2
numLeft := getSplitPoint(total)
if index < numLeft {
leftHash := computeHashFromAunts(index, numLeft, leafHash, innerHashes[:len(innerHashes)-1])
if leftHash == nil {
return nil
}
return simpleHashFromTwoHashes(leftHash, innerHashes[len(innerHashes)-1])
return innerHash(leftHash, innerHashes[len(innerHashes)-1])
}
rightHash := computeHashFromAunts(index-numLeft, total-numLeft, leafHash, innerHashes[:len(innerHashes)-1])
if rightHash == nil {
return nil
}
return simpleHashFromTwoHashes(innerHashes[len(innerHashes)-1], rightHash)
return innerHash(innerHashes[len(innerHashes)-1], rightHash)
}
}
@@ -182,12 +182,13 @@ func trailsFromByteSlices(items [][]byte) (trails []*SimpleProofNode, root *Simp
case 0:
return nil, nil
case 1:
trail := &SimpleProofNode{tmhash.Sum(items[0]), nil, nil, nil}
trail := &SimpleProofNode{leafHash(items[0]), nil, nil, nil}
return []*SimpleProofNode{trail}, trail
default:
lefts, leftRoot := trailsFromByteSlices(items[:(len(items)+1)/2])
rights, rightRoot := trailsFromByteSlices(items[(len(items)+1)/2:])
rootHash := simpleHashFromTwoHashes(leftRoot.Hash, rightRoot.Hash)
k := getSplitPoint(len(items))
lefts, leftRoot := trailsFromByteSlices(items[:k])
rights, rightRoot := trailsFromByteSlices(items[k:])
rootHash := innerHash(leftRoot.Hash, rightRoot.Hash)
root := &SimpleProofNode{rootHash, nil, nil, nil}
leftRoot.Parent = root
leftRoot.Right = rightRoot


@@ -1,23 +1,9 @@
package merkle
import (
"github.com/tendermint/tendermint/crypto/tmhash"
"math/bits"
)
// simpleHashFromTwoHashes is the basic operation of the Merkle tree: Hash(left | right).
func simpleHashFromTwoHashes(left, right []byte) []byte {
var hasher = tmhash.New()
err := encodeByteSlice(hasher, left)
if err != nil {
panic(err)
}
err = encodeByteSlice(hasher, right)
if err != nil {
panic(err)
}
return hasher.Sum(nil)
}
// SimpleHashFromByteSlices computes a Merkle tree where the leaves are the byte slice,
// in the provided order.
func SimpleHashFromByteSlices(items [][]byte) []byte {
@@ -25,11 +11,83 @@ func SimpleHashFromByteSlices(items [][]byte) []byte {
case 0:
return nil
case 1:
return tmhash.Sum(items[0])
return leafHash(items[0])
default:
left := SimpleHashFromByteSlices(items[:(len(items)+1)/2])
right := SimpleHashFromByteSlices(items[(len(items)+1)/2:])
return simpleHashFromTwoHashes(left, right)
k := getSplitPoint(len(items))
left := SimpleHashFromByteSlices(items[:k])
right := SimpleHashFromByteSlices(items[k:])
return innerHash(left, right)
}
}
// SimpleHashFromByteSlicesIterative is an iterative alternative to
// SimpleHashFromByteSlices, motivated by potential performance improvements.
// (#2611) suggested that an iterative version of
// SimpleHashFromByteSlices would be faster, presumably because
// some overhead accumulates from stack
// frames and function calls. Additionally, a recursive algorithm risks
// hitting the stack limit and causing a stack overflow should the tree
// be too large.
//
// Provided here is an iterative alternative, a simple test to assert
// correctness and a benchmark. On the performance side, there appears to
// be no overall difference:
//
// BenchmarkSimpleHashAlternatives/recursive-4 20000 77677 ns/op
// BenchmarkSimpleHashAlternatives/iterative-4 20000 76802 ns/op
//
// On the surface it might seem that any difference would come from
// the different allocation patterns of the implementations. The recursive
// version re-slices a single [][]byte at each level of the tree.
// The iterative version allocates the [][]byte once within the function and
// then rewrites sub-slices of that array at each level of the tree.
//
// Experimenting by modifying the code to simply calculate the
// hash and not store the result shows little to no difference in performance.
//
// These preliminary results suggest:
//
// 1. The performance of SimpleHashFromByteSlices is pretty good
// 2. Go has low overhead for recursive functions
// 3. The performance of the SimpleHashFromByteSlices routine is dominated
// by the actual hashing of data
//
// Although this work is in no way exhaustive, point #3 suggests that
// optimizing this routine would need to take an alternative
// approach to make significant improvements on the current performance.
//
// Finally, considering that the recursive implementation is easier to
// read, it might not be worthwhile to switch to a less intuitive
// implementation for so little benefit.
func SimpleHashFromByteSlicesIterative(input [][]byte) []byte {
items := make([][]byte, len(input))
for i, leaf := range input {
items[i] = leafHash(leaf)
}
size := len(items)
for {
switch size {
case 0:
return nil
case 1:
return items[0]
default:
rp := 0 // read position
wp := 0 // write position
for rp < size {
if rp+1 < size {
items[wp] = innerHash(items[rp], items[rp+1])
rp += 2
} else {
items[wp] = items[rp]
rp += 1
}
wp += 1
}
size = wp
}
}
}
@@ -44,3 +102,17 @@ func SimpleHashFromMap(m map[string][]byte) []byte {
}
return sm.Hash()
}
// getSplitPoint returns the largest power of 2 less than length
func getSplitPoint(length int) int {
if length < 1 {
panic("Trying to split a tree with size < 1")
}
uLength := uint(length)
bitlen := bits.Len(uLength)
k := 1 << uint(bitlen-1)
if k == length {
k >>= 1
}
return k
}
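A quick worked example of the split rule (the largest power of two strictly less than length), consistent with the Test_getSplitPoint table later in this diff; the function is restated locally since it is unexported:

package main

import (
	"fmt"
	"math/bits"
)

// getSplitPoint restated from crypto/merkle/simple_tree.go above.
func getSplitPoint(length int) int {
	if length < 1 {
		panic("Trying to split a tree with size < 1")
	}
	k := 1 << uint(bits.Len(uint(length))-1)
	if k == length {
		k >>= 1
	}
	return k
}

func main() {
	for _, n := range []int{2, 3, 5, 10, 100, 256, 257} {
		fmt.Printf("getSplitPoint(%d) = %d\n", n, getSplitPoint(n))
	}
	// Prints: 1, 2, 4, 8, 64, 128, 256
}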


@@ -34,7 +34,6 @@ func TestSimpleProof(t *testing.T) {
// For each item, check the trail.
for i, item := range items {
itemHash := tmhash.Sum(item)
proof := proofs[i]
// Check total/index
@@ -43,30 +42,89 @@ func TestSimpleProof(t *testing.T) {
require.Equal(t, proof.Total, total, "Unmatched totals: %d vs %d", proof.Total, total)
// Verify success
err := proof.Verify(rootHash, itemHash)
require.NoError(t, err, "Verificatior failed: %v.", err)
err := proof.Verify(rootHash, item)
require.NoError(t, err, "Verification failed: %v.", err)
// Trail too long should make it fail
origAunts := proof.Aunts
proof.Aunts = append(proof.Aunts, cmn.RandBytes(32))
err = proof.Verify(rootHash, itemHash)
err = proof.Verify(rootHash, item)
require.Error(t, err, "Expected verification to fail for wrong trail length")
proof.Aunts = origAunts
// Trail too short should make it fail
proof.Aunts = proof.Aunts[0 : len(proof.Aunts)-1]
err = proof.Verify(rootHash, itemHash)
err = proof.Verify(rootHash, item)
require.Error(t, err, "Expected verification to fail for wrong trail length")
proof.Aunts = origAunts
// Mutating the itemHash should make it fail.
err = proof.Verify(rootHash, MutateByteSlice(itemHash))
err = proof.Verify(rootHash, MutateByteSlice(item))
require.Error(t, err, "Expected verification to fail for mutated leaf hash")
// Mutating the rootHash should make it fail.
err = proof.Verify(MutateByteSlice(rootHash), itemHash)
err = proof.Verify(MutateByteSlice(rootHash), item)
require.Error(t, err, "Expected verification to fail for mutated root hash")
}
}
func TestSimpleHashAlternatives(t *testing.T) {
total := 100
items := make([][]byte, total)
for i := 0; i < total; i++ {
items[i] = testItem(cmn.RandBytes(tmhash.Size))
}
rootHash1 := SimpleHashFromByteSlicesIterative(items)
rootHash2 := SimpleHashFromByteSlices(items)
require.Equal(t, rootHash1, rootHash2, "Unmatched root hashes: %X vs %X", rootHash1, rootHash2)
}
func BenchmarkSimpleHashAlternatives(b *testing.B) {
total := 100
items := make([][]byte, total)
for i := 0; i < total; i++ {
items[i] = testItem(cmn.RandBytes(tmhash.Size))
}
b.ResetTimer()
b.Run("recursive", func(b *testing.B) {
for i := 0; i < b.N; i++ {
_ = SimpleHashFromByteSlices(items)
}
})
b.Run("iterative", func(b *testing.B) {
for i := 0; i < b.N; i++ {
_ = SimpleHashFromByteSlicesIterative(items)
}
})
}
func Test_getSplitPoint(t *testing.T) {
tests := []struct {
length int
want int
}{
{1, 0},
{2, 1},
{3, 2},
{4, 2},
{5, 4},
{10, 8},
{20, 16},
{100, 64},
{255, 128},
{256, 128},
{257, 256},
}
for _, tt := range tests {
got := getSplitPoint(tt.length)
require.Equal(t, tt.want, got, "getSplitPoint(%d) = %v, want %v", tt.length, got, tt.want)
}
}


@@ -1,7 +1,7 @@
package merkle
import (
"github.com/tendermint/go-amino"
amino "github.com/tendermint/go-amino"
)
var cdc *amino.Codec


@@ -1,7 +1,8 @@
package multisig
import (
"errors"
"fmt"
"strings"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/multisig/bitarray"
@@ -53,13 +54,19 @@ func (mSig *Multisignature) AddSignature(sig []byte, index int) {
mSig.Sigs[newSigIndex] = sig
}
// AddSignatureFromPubKey adds a signature to the multisig,
// at the index in keys corresponding to the provided pubkey.
// AddSignatureFromPubKey adds a signature to the multisig, at the index in
// keys corresponding to the provided pubkey.
func (mSig *Multisignature) AddSignatureFromPubKey(sig []byte, pubkey crypto.PubKey, keys []crypto.PubKey) error {
index := getIndex(pubkey, keys)
if index == -1 {
return errors.New("provided key didn't exist in pubkeys")
keysStr := make([]string, len(keys))
for i, k := range keys {
keysStr[i] = fmt.Sprintf("%X", k.Bytes())
}
return fmt.Errorf("provided key %X doesn't exist in pubkeys: \n%s", pubkey.Bytes(), strings.Join(keysStr, "\n"))
}
mSig.AddSignature(sig, index)
return nil
}
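A hedged sketch of the improved error path: adding a signature from a key outside the key set now reports which keys were searched. The setup below is an assumption modeled on this package's tests:

package main

import (
	"fmt"

	"github.com/tendermint/tendermint/crypto"
	"github.com/tendermint/tendermint/crypto/ed25519"
	"github.com/tendermint/tendermint/crypto/multisig"
)

func main() {
	member := ed25519.GenPrivKey()
	outsider := ed25519.GenPrivKey()
	keys := []crypto.PubKey{member.PubKey()}

	sig, _ := outsider.Sign([]byte("msg"))

	msig := multisig.NewMultisig(len(keys))
	err := msig.AddSignatureFromPubKey(sig, outsider.PubKey(), keys)
	// Prints "provided key <hex> doesn't exist in pubkeys:" followed by
	// the hex-encoded keys that were searched.
	fmt.Println(err)
}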


@@ -2,7 +2,6 @@ package multisig
import (
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/tmhash"
)
// PubKeyMultisigThreshold implements a K of N threshold multisig.
@@ -11,7 +10,7 @@ type PubKeyMultisigThreshold struct {
PubKeys []crypto.PubKey `json:"pubkeys"`
}
var _ crypto.PubKey = &PubKeyMultisigThreshold{}
var _ crypto.PubKey = PubKeyMultisigThreshold{}
// NewPubKeyMultisigThreshold returns a new PubKeyMultisigThreshold.
// Panics if len(pubkeys) < k or 0 >= k.
@@ -22,7 +21,7 @@ func NewPubKeyMultisigThreshold(k int, pubkeys []crypto.PubKey) crypto.PubKey {
if len(pubkeys) < k {
panic("threshold k of n multisignature: len(pubkeys) < k")
}
return &PubKeyMultisigThreshold{uint(k), pubkeys}
return PubKeyMultisigThreshold{uint(k), pubkeys}
}
// VerifyBytes expects sig to be an amino encoded version of a MultiSignature.
@@ -31,8 +30,8 @@ func NewPubKeyMultisigThreshold(k int, pubkeys []crypto.PubKey) crypto.PubKey {
// and all signatures are valid. (Not just k of the signatures)
// The multisig uses a bitarray, so multiple signatures for the same key is not
// a concern.
func (pk *PubKeyMultisigThreshold) VerifyBytes(msg []byte, marshalledSig []byte) bool {
var sig *Multisignature
func (pk PubKeyMultisigThreshold) VerifyBytes(msg []byte, marshalledSig []byte) bool {
var sig Multisignature
err := cdc.UnmarshalBinaryBare(marshalledSig, &sig)
if err != nil {
return false
@@ -64,19 +63,19 @@ func (pk *PubKeyMultisigThreshold) VerifyBytes(msg []byte, marshalledSig []byte)
}
// Bytes returns the amino encoded version of the PubKeyMultisigThreshold
func (pk *PubKeyMultisigThreshold) Bytes() []byte {
func (pk PubKeyMultisigThreshold) Bytes() []byte {
return cdc.MustMarshalBinaryBare(pk)
}
// Address returns tmhash(PubKeyMultisigThreshold.Bytes())
func (pk *PubKeyMultisigThreshold) Address() crypto.Address {
return crypto.Address(tmhash.Sum(pk.Bytes()))
func (pk PubKeyMultisigThreshold) Address() crypto.Address {
return crypto.AddressHash(pk.Bytes())
}
// Equals returns true iff pk and other both have the same number of keys, and
// all constituent keys are the same, and in the same order.
func (pk *PubKeyMultisigThreshold) Equals(other crypto.PubKey) bool {
otherKey, sameType := other.(*PubKeyMultisigThreshold)
func (pk PubKeyMultisigThreshold) Equals(other crypto.PubKey) bool {
otherKey, sameType := other.(PubKeyMultisigThreshold)
if !sameType {
return false
}


@@ -36,30 +36,68 @@ func TestThresholdMultisigValidCases(t *testing.T) {
for tcIndex, tc := range cases {
multisigKey := NewPubKeyMultisigThreshold(tc.k, tc.pubkeys)
multisignature := NewMultisig(len(tc.pubkeys))
for i := 0; i < tc.k-1; i++ {
signingIndex := tc.signingIndices[i]
multisignature.AddSignatureFromPubKey(tc.signatures[signingIndex], tc.pubkeys[signingIndex], tc.pubkeys)
require.False(t, multisigKey.VerifyBytes(tc.msg, multisignature.Marshal()),
"multisig passed when i < k, tc %d, i %d", tcIndex, i)
multisignature.AddSignatureFromPubKey(tc.signatures[signingIndex], tc.pubkeys[signingIndex], tc.pubkeys)
require.Equal(t, i+1, len(multisignature.Sigs),
"adding a signature for the same pubkey twice increased signature count by 2, tc %d", tcIndex)
require.NoError(
t,
multisignature.AddSignatureFromPubKey(tc.signatures[signingIndex], tc.pubkeys[signingIndex], tc.pubkeys),
)
require.False(
t,
multisigKey.VerifyBytes(tc.msg, multisignature.Marshal()),
"multisig passed when i < k, tc %d, i %d", tcIndex, i,
)
require.NoError(
t,
multisignature.AddSignatureFromPubKey(tc.signatures[signingIndex], tc.pubkeys[signingIndex], tc.pubkeys),
)
require.Equal(
t,
i+1,
len(multisignature.Sigs),
"adding a signature for the same pubkey twice increased signature count by 2, tc %d", tcIndex,
)
}
require.False(t, multisigKey.VerifyBytes(tc.msg, multisignature.Marshal()),
"multisig passed with k - 1 sigs, tc %d", tcIndex)
multisignature.AddSignatureFromPubKey(tc.signatures[tc.signingIndices[tc.k]], tc.pubkeys[tc.signingIndices[tc.k]], tc.pubkeys)
require.True(t, multisigKey.VerifyBytes(tc.msg, multisignature.Marshal()),
"multisig failed after k good signatures, tc %d", tcIndex)
require.False(
t,
multisigKey.VerifyBytes(tc.msg, multisignature.Marshal()),
"multisig passed with k - 1 sigs, tc %d", tcIndex,
)
require.NoError(
t,
multisignature.AddSignatureFromPubKey(tc.signatures[tc.signingIndices[tc.k]], tc.pubkeys[tc.signingIndices[tc.k]], tc.pubkeys),
)
require.True(
t,
multisigKey.VerifyBytes(tc.msg, multisignature.Marshal()),
"multisig failed after k good signatures, tc %d", tcIndex,
)
for i := tc.k + 1; i < len(tc.signingIndices); i++ {
signingIndex := tc.signingIndices[i]
multisignature.AddSignatureFromPubKey(tc.signatures[signingIndex], tc.pubkeys[signingIndex], tc.pubkeys)
require.Equal(t, tc.passAfterKSignatures[i-tc.k-1],
multisigKey.VerifyBytes(tc.msg, multisignature.Marshal()),
"multisig didn't verify as expected after k sigs, tc %d, i %d", tcIndex, i)
multisignature.AddSignatureFromPubKey(tc.signatures[signingIndex], tc.pubkeys[signingIndex], tc.pubkeys)
require.Equal(t, i+1, len(multisignature.Sigs),
"adding a signature for the same pubkey twice increased signature count by 2, tc %d", tcIndex)
require.NoError(
t,
multisignature.AddSignatureFromPubKey(tc.signatures[signingIndex], tc.pubkeys[signingIndex], tc.pubkeys),
)
require.Equal(
t,
tc.passAfterKSignatures[i-tc.k-1],
multisigKey.VerifyBytes(tc.msg, multisignature.Marshal()),
"multisig didn't verify as expected after k sigs, tc %d, i %d", tcIndex, i,
)
require.NoError(
t,
multisignature.AddSignatureFromPubKey(tc.signatures[signingIndex], tc.pubkeys[signingIndex], tc.pubkeys),
)
require.Equal(
t,
i+1,
len(multisignature.Sigs),
"adding a signature for the same pubkey twice increased signature count by 2, tc %d", tcIndex,
)
}
}
}
@@ -82,7 +120,7 @@ func TestMultiSigPubKeyEquality(t *testing.T) {
msg := []byte{1, 2, 3, 4}
pubkeys, _ := generatePubKeysAndSignatures(5, msg)
multisigKey := NewPubKeyMultisigThreshold(2, pubkeys)
var unmarshalledMultisig *PubKeyMultisigThreshold
var unmarshalledMultisig PubKeyMultisigThreshold
cdc.MustUnmarshalBinaryBare(multisigKey.Bytes(), &unmarshalledMultisig)
require.True(t, multisigKey.Equals(unmarshalledMultisig))
@@ -95,6 +133,29 @@ func TestMultiSigPubKeyEquality(t *testing.T) {
require.False(t, multisigKey.Equals(multisigKey2))
}
func TestAddress(t *testing.T) {
msg := []byte{1, 2, 3, 4}
pubkeys, _ := generatePubKeysAndSignatures(5, msg)
multisigKey := NewPubKeyMultisigThreshold(2, pubkeys)
require.Len(t, multisigKey.Address().Bytes(), 20)
}
func TestPubKeyMultisigThresholdAminoToIface(t *testing.T) {
msg := []byte{1, 2, 3, 4}
pubkeys, _ := generatePubKeysAndSignatures(5, msg)
multisigKey := NewPubKeyMultisigThreshold(2, pubkeys)
ab, err := cdc.MarshalBinaryLengthPrefixed(multisigKey)
require.NoError(t, err)
// Like other crypto.PubKey implementations (e.g. ed25519.PubKeyEd25519),
// PubKeyMultisigThreshold should be deserializable into a crypto.PubKey:
var pubKey crypto.PubKey
err = cdc.UnmarshalBinaryLengthPrefixed(ab, &pubKey)
require.NoError(t, err)
require.Equal(t, multisigKey, pubKey)
}
func generatePubKeysAndSignatures(n int, msg []byte) (pubkeys []crypto.PubKey, signatures [][]byte) {
pubkeys = make([]crypto.PubKey, n)
signatures = make([][]byte, n)


@@ -20,7 +20,7 @@ func init() {
cdc.RegisterConcrete(PubKeyMultisigThreshold{},
PubKeyMultisigThresholdAminoRoute, nil)
cdc.RegisterConcrete(ed25519.PubKeyEd25519{},
ed25519.PubKeyAminoRoute, nil)
ed25519.PubKeyAminoName, nil)
cdc.RegisterConcrete(secp256k1.PubKeySecp256k1{},
secp256k1.PubKeyAminoRoute, nil)
secp256k1.PubKeyAminoName, nil)
}


@@ -1,42 +1,11 @@
package crypto
import (
"crypto/cipher"
crand "crypto/rand"
"crypto/sha256"
"encoding/hex"
"io"
"sync"
"golang.org/x/crypto/chacha20poly1305"
)
// NOTE: This is ignored for now until we have time
// to properly review the MixEntropy function - https://github.com/tendermint/tendermint/issues/2099.
//
// The randomness here is derived from xoring a chacha20 keystream with
// output from crypto/rand's OS Entropy Reader. (Due to fears of the OS'
// entropy being backdoored)
//
// For forward secrecy of produced randomness, the internal chacha key is hashed
// and thereby rotated after each call.
var gRandInfo *randInfo
func init() {
gRandInfo = &randInfo{}
// TODO: uncomment after reviewing MixEntropy -
// https://github.com/tendermint/tendermint/issues/2099
// gRandInfo.MixEntropy(randBytes(32)) // Init
}
// WARNING: This function needs review - https://github.com/tendermint/tendermint/issues/2099.
// Mix additional bytes of randomness, e.g. from hardware, user-input, etc.
// It is OK to call it multiple times. It does not diminish security.
func MixEntropy(seedBytes []byte) {
gRandInfo.MixEntropy(seedBytes)
}
// This only uses the OS's randomness
func randBytes(numBytes int) []byte {
b := make([]byte, numBytes)
@@ -52,19 +21,6 @@ func CRandBytes(numBytes int) []byte {
return randBytes(numBytes)
}
/* TODO: uncomment after reviewing MixEntropy - https://github.com/tendermint/tendermint/issues/2099
// This uses the OS and the Seed(s).
func CRandBytes(numBytes int) []byte {
return randBytes(numBytes)
b := make([]byte, numBytes)
_, err := gRandInfo.Read(b)
if err != nil {
panic(err)
}
return b
}
*/
// CRandHex returns a hex encoded string that's floor(numDigits/2) * 2 long.
//
// Note: CRandHex(24) gives 96 bits of randomness that
@@ -77,63 +33,3 @@ func CRandHex(numDigits int) string {
func CReader() io.Reader {
return crand.Reader
}
/* TODO: uncomment after reviewing MixEntropy - https://github.com/tendermint/tendermint/issues/2099
// Returns a crand.Reader mixed with user-supplied entropy
func CReader() io.Reader {
return gRandInfo
}
*/
//--------------------------------------------------------------------------------
type randInfo struct {
mtx sync.Mutex
seedBytes [chacha20poly1305.KeySize]byte
chacha cipher.AEAD
reader io.Reader
}
// You can call this as many times as you'd like.
// XXX/TODO: review - https://github.com/tendermint/tendermint/issues/2099
func (ri *randInfo) MixEntropy(seedBytes []byte) {
ri.mtx.Lock()
defer ri.mtx.Unlock()
// Make new ri.seedBytes using passed seedBytes and current ri.seedBytes:
// ri.seedBytes = sha256( seedBytes || ri.seedBytes )
h := sha256.New()
h.Write(seedBytes)
h.Write(ri.seedBytes[:])
hashBytes := h.Sum(nil)
copy(ri.seedBytes[:], hashBytes)
chacha, err := chacha20poly1305.New(ri.seedBytes[:])
if err != nil {
panic("Initializing chacha20 failed")
}
ri.chacha = chacha
// Create new reader
ri.reader = &cipher.StreamReader{S: ri, R: crand.Reader}
}
func (ri *randInfo) XORKeyStream(dst, src []byte) {
// nonce being 0 is safe due to never re-using a key.
emptyNonce := make([]byte, 12)
tmpDst := ri.chacha.Seal([]byte{}, emptyNonce, src, []byte{0})
// this removes the poly1305 tag as well, since chacha is a stream cipher
// and we truncate at input length.
copy(dst, tmpDst[:len(src)])
// hash seedBytes for forward secrecy, and initialize new chacha instance
newSeed := sha256.Sum256(ri.seedBytes[:])
chacha, err := chacha20poly1305.New(newSeed[:])
if err != nil {
panic("Initializing chacha20 failed")
}
ri.chacha = chacha
}
func (ri *randInfo) Read(b []byte) (n int, err error) {
ri.mtx.Lock()
n, err = ri.reader.Read(b)
ri.mtx.Unlock()
return
}
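With MixEntropy and the chacha-based randInfo gone, the remaining exported API is a thin wrapper over crypto/rand. A minimal usage sketch:

package main

import (
	"fmt"

	"github.com/tendermint/tendermint/crypto"
)

func main() {
	key := crypto.CRandBytes(32) // 32 bytes straight from the OS CSPRNG
	fmt.Printf("%x\n", key)
	fmt.Println(crypto.CRandHex(24)) // 96 bits of randomness, hex-encoded
}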


@@ -0,0 +1,24 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe
*~


@@ -0,0 +1,31 @@
Copyright (c) 2010 The Go Authors. All rights reserved.
Copyright (c) 2011 ThePiachu. All rights reserved.
Copyright (c) 2015 Jeffrey Wilcke. All rights reserved.
Copyright (c) 2015 Felix Lange. All rights reserved.
Copyright (c) 2015 Gustav Simonsson. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@@ -0,0 +1,3 @@
This package is copied from https://github.com/ethereum/go-ethereum/tree/729bf365b5f17325be9107b63b233da54100eec6/crypto/secp256k1
Unlike the rest of go-ethereum, it is MIT licensed and therefore compatible with our Apache 2.0 license. We opt to copy it in here rather than depend on go-ethereum, so that downstream users do not have to vendor the GPL-licensed parts of that repository.

Some files were not shown because too many files have changed in this diff.