Compare commits

...

90 Commits

Author SHA1 Message Date
Ethan Buchman
4a568fcedb Merge pull request #3030 from tendermint/release/v0.27.2
Release/v0.27.2
2018-12-16 14:13:30 -05:00
Ethan Buchman
b3141d7d02 makeNodeInfo returns error (#3029)
* makeNodeInfo returns error

* version and changelog
2018-12-16 14:05:58 -05:00
Jae Kwon
9a6dd96cba Revert to using defers in addrbook. (#3025)
* Revert to using defers in addrbook.  ValidateBasic->Validate since it requires DNS

* Update CHANGELOG_PENDING
2018-12-16 12:27:16 -05:00
Ethan Buchman
9fa959619a Merge pull request #3026 from tendermint/master
Merge pull request #3023 from tendermint/release/v0.27.1
2018-12-16 12:25:19 -05:00
Ethan Buchman
1f09818770 Merge pull request #3023 from tendermint/release/v0.27.1
Release/v0.27.1
2018-12-16 12:24:54 -05:00
Ethan Buchman
e4806f980b Bucky/v0.27.1 (#3022)
* update changelog

* linkify

* changelog and version
2018-12-15 15:58:09 -05:00
Anton Kaliaev
b53a2712df docs: networks/docker-compose: small fixes (#3017) 2018-12-15 15:38:13 -05:00
Hleb Albau
a75dab492c #2980 fix cors doc (#3013) 2018-12-15 15:32:35 -05:00
Ethan Buchman
7c9e767e1f 2980 fix cors (#3021)
* config: cors options are arrays of strings, not strings

Fixes #2980

* docs: update tendermint-core/configuration.html page

* set allow_duplicate_ip to false

* in `tendermint testnet`, set allow_duplicate_ip to true

Refs #2712

* fixes after Ismail's review

* Revert "set allow_duplicate_ip to false"

This reverts commit 24c1094ebcf2bd35f2642a44d7a1e5fb5c178fb1.
2018-12-15 15:26:27 -05:00
JamesRay
f82a8ff73a During replay, when appHeight==0, only saveState if stateHeight is also 0 (#3006)
* optimize addProposalBlockPart

* optimize addProposalBlockPart

* if ProposalBlockParts and LockedBlockParts both exist, let LockedBlockParts overwrite ProposalBlockParts.

* fix tryAddBlock

* broadcast lockedBlockParts in higher priority

* when appHeight==0, it's better to fetch genDoc than state.validators.

* not save state if replay from height 1

* only save state if replay from height 1 when stateHeight is also 1

* only save state if replay from height 1 when stateHeight is also 1

* only save state if replay from height 0 when stateHeight is also 0

* handshake info's response version only update when stateHeight==0

* save the handshake responseInfo appVersion
2018-12-15 14:33:30 -05:00
Anton Kaliaev
ae275d791e mempool: notifyTxsAvailable if there're txs left, but recheck=false (#2991)
(left after committing a block)

Fixes #2961

--------------
ORIGINAL ISSUE

Tendermint version: 0.26.4-b771798d

ABCI app : kv-store

Environment:

OS (e.g. from /etc/os-release): macOS 10.14.1

What happened: Set mempool.recheck = false and create_empty_blocks = false
in config.toml. When transactions get added right between a new empty block
being proposed and committed, the proposer won't propose a new block for
those transactions immediately afterwards. Those transactions are stuck in
the mempool until a new transaction is added and triggers the proposer.

What you expected to happen: If there is a transaction left in the
mempool, a new block should be proposed immediately.

Have you tried the latest version: yes

How to reproduce it (as minimally and precisely as possible): Fire two
transactions using broadcast_tx_sync with a specific delay between them.
(You may need to try multiple times before the right delay is found; on
my machine the delay is 0.98s.)

Logs (paste a small part showing an error (< 10 lines) or link a
pastebin, gist, etc. containing more of the log file):
https://pastebin.com/0Wt6uhPF

Config (you can paste only the changes you've made): [mempool] recheck =
false, create_empty_blocks = false

Anything else we need to know: In mempool.go, we found that the proposer
will immediately propose a new block if:

the last committed block has some transactions (causing the AppHash to
change), or mem.notifyTxsAvailable() is called. Our scenario is as follows.

A transaction is fired; it creates one block with one tx (lines 1-4 in
the log) and one empty block. After the empty block is proposed but before
it is committed, a second transaction is fired and added to the mempool
(lines 8-16). Now, the last committed block is empty, and
mem.notifyTxsAvailable() will be called only if mempool.recheck = true, so
the proposer won't immediately propose a new block, causing the second
transaction to be stuck in the mempool until another transaction is added
and triggers mem.notifyTxsAvailable().
2018-12-15 14:27:02 -05:00
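A minimal sketch of the fix idea in the commit above, assuming simplified field and method names (`recheck`, `notifyTxsAvailable`) rather than the exact Tendermint mempool API:

```go
package mempool

// mempool is a simplified stand-in for the real type; only the fields
// needed to show the notification logic are kept.
type mempool struct {
	txs               [][]byte      // txs still in the pool after Update removed committed ones
	recheck           bool          // mirrors the mempool.recheck config option
	notifiedAvailable bool          // ensure we only fire the notification once per height
	txsAvailable      chan struct{} // consumed by the consensus state
}

// afterUpdate is called once a block has been committed and its txs removed.
// When recheck is disabled there are no recheck responses to trigger the
// notification, so we must fire it ourselves if txs are left over.
func (m *mempool) afterUpdate() {
	if len(m.txs) == 0 {
		return
	}
	if m.recheck {
		return // recheck responses will call notifyTxsAvailable later
	}
	m.notifyTxsAvailable()
}

func (m *mempool) notifyTxsAvailable() {
	if m.notifiedAvailable {
		return
	}
	m.notifiedAvailable = true
	select {
	case m.txsAvailable <- struct{}{}:
	default: // consensus already has a pending notification
	}
}
```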
Zach
f5cca9f121 docs: update DOCS_README (#3019)
Co-Authored-By: zramsay <zach.ramsay@gmail.com>
2018-12-15 23:01:28 +04:00
Zach
3fbe9f235a docs: add edit on Github links (#3014) 2018-12-14 09:32:09 +04:00
mircea-c
f7e463f6d3 circleci: add a job to automatically update docs (#3005) 2018-12-12 13:48:53 +04:00
Dev Ojha
bc2a9b20c0 mempool: add a comment and missing changelog entry (#2996)
Refs #2994
2018-12-12 13:31:35 +04:00
Zach
9e075d8dd5 docs: enable full-text search (#3004) 2018-12-12 13:20:02 +04:00
Zach
8003786c9a docs: fixes from 'first time' review (#2999) 2018-12-11 22:21:54 +04:00
Daniil Lashin
2594cec116 add UnconfirmedTxs/NumUnconfirmedTxs methods to HTTP/Local clients (#2964) 2018-12-11 12:41:02 +04:00
Dev Ojha
df32ea4be5 Make testing logger that doesn't write to stdout (#2997) 2018-12-11 12:17:21 +04:00
Anton Kaliaev
f69e2c6d6c p2p: set MConnection#created during init (#2990)
Fixes #2715

In crawlPeersRoutine, which runs when seedMode is enabled, there is logic
that disconnects a peer at 3-hour intervals based on a duration value. The
duration is calculated from the created field of MConnection. When
MConnection is created for the first time, created is never initialized, so
peers are not disconnected every 3 hours but are disconnected immediately
every time. As a result, normal nodes connect to the seed node and are
disconnected right away, so address exchange does not work properly.

https://github.com/tendermint/tendermint/blob/master/p2p/pex/pex_reactor.go#L629
This code does not work correctly. I think the created variable at
https://github.com/tendermint/tendermint/blob/master/p2p/conn/connection.go#L148
is missing the current-time assignment.
2018-12-10 15:24:58 -05:00
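A minimal sketch of the missing initialization described above; the struct and constructor are illustrative stand-ins, not the real MConnection code:

```go
package conn

import "time"

// mConnection is a stand-in holding only the field relevant here.
type mConnection struct {
	created time.Time // when the connection was established
}

func newMConnection() *mConnection {
	return &mConnection{
		// Previously this was left at the zero value, so time.Since(created)
		// was always enormous and the seed's 3-hour disconnect check fired
		// immediately for every peer.
		created: time.Now(),
	}
}
```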
Dev Ojha
d5d0d2bd77 Make mempool fail txs with negative gas wanted (#2994)
This is only one part of #2989. We also need to fix the application,
and add rules to consensus to ensure this.
2018-12-10 12:56:49 -05:00
Anton Kaliaev
41eaf0e31d turn off strict routability every time (#2983)
Previously, we turned it off only when the --populate-persistent-peers
flag was used, which is obviously incorrect.

Fixes https://github.com/cosmos/cosmos-sdk/issues/2983
2018-12-09 13:29:51 -05:00
Zach
68b467886a docs: relative links in docs/spec/readme.md, js-amino lib (#2977)
Co-Authored-By: zramsay <zach.ramsay@gmail.com>
2018-12-07 19:41:19 +04:00
Leo Wang
2f64717bb5 return an error if validator set is empty in genesis file and after InitChain (#2971)
Fixes #2951
2018-12-07 12:30:58 +04:00
Anton Kaliaev
c4a1cfc5c2 don't ignore key when executing CONTAINS (#2924)
Fixes #2912
2018-12-07 12:28:02 +04:00
Ethan Buchman
0f96bea41d Merge pull request #2976 from tendermint/master
Merge pull request #2975 from tendermint/release/v0.27.0
2018-12-05 16:51:04 -05:00
Ethan Buchman
9c236ffd6c Merge pull request #2975 from tendermint/release/v0.27.0
Release/v0.27.0
2018-12-05 16:50:36 -05:00
Ethan Buchman
9f8761d105 update changelog and upgrading (#2974) 2018-12-05 15:19:30 -05:00
Anton Kaliaev
5413c11150 kv indexer: add separator to start key when matching ranges (#2925)
* kv indexer: add separator to start key when matching ranges

to avoid including false positives

Refs #2908

* refactor code

* add a test case
2018-12-05 14:21:46 -05:00
Ethan Buchman
a14fd8eba0 p2p: fix peer count mismatch #2332 (#2969)
* p2p: test case for peer count mismatch #2332

* p2p: fix peer count mismatch #2332

* changelog

* use httptest.Server to scrape Prometheus metrics
2018-12-05 07:32:27 -05:00
Ethan Buchman
1bb7e31d63 p2p: panic on transport error (#2968)
* p2p: panic on transport error

Addresses #2823. Currently, the acceptRoutine exits if the transport returns
an error trying to accept a new connection. Once this happens, the node
can't accept any new connections. So here, we panic instead. While we
could potentially be more intelligent by rerunning the acceptRoutine, the
error may indicate something more fundamental (eg. file descriptor limit)
that requires a restart anyway. We can leave it to process managers to
handle that restart, and notify operators about the panic.

* changelog
2018-12-04 19:16:06 -05:00
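A hedged sketch of the accept-loop behaviour described above; the transport interface and function are stand-ins, not the real p2p.Transport API:

```go
package p2p

import "fmt"

// transport is a stand-in for the piece of the real transport we need here.
type transport interface {
	Accept() (peer interface{}, err error)
}

// acceptRoutine accepts inbound peers forever. If the transport fails,
// panic so a process manager can restart the node, instead of returning
// and silently leaving the node unable to accept new connections.
func acceptRoutine(t transport, addPeer func(interface{})) {
	for {
		p, err := t.Accept()
		if err != nil {
			// The error may be fundamental (e.g. file descriptor exhaustion),
			// so fail loudly rather than limping along.
			panic(fmt.Sprintf("accept on transport failed: %v", err))
		}
		addPeer(p)
	}
}
```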
Ethan Buchman
222b8978c8 Minor log changes (#2959)
* node: allow state and code to have diff block versions

* node: pex is a log module
2018-12-04 08:30:29 -05:00
Anton Kaliaev
d9a1aad5c5 docs: add client#Start/Stop to examples in RPC docs (#2939)
follow-up on https://github.com/tendermint/tendermint/pull/2936
2018-12-03 16:17:06 +04:00
Anton Kaliaev
8ef0c2681d check if deliverTxResCh is still open, return an err otherwise (#2947)
deliverTxResCh, like any other eventBus (pubsub) channel, is closed when
eventBus is stopped. We must check if the channel is still open. The
alternative approach is to not close any channels, which seems a bit
odd.

Fixes #2408
2018-12-03 16:15:36 +04:00
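A small sketch of the closed-channel check described above, using Go's two-value receive; the channel element type and function name are assumptions for illustration:

```go
package core

import "errors"

// waitForDeliverTx receives the DeliverTx result for broadcast_tx_commit.
// If the event bus was stopped (node shutting down), the channel is closed
// and the two-value receive reports ok == false, so we return an error
// instead of hitting a nil-interface type assertion panic.
func waitForDeliverTx(deliverTxResCh <-chan interface{}) (interface{}, error) {
	msg, ok := <-deliverTxResCh
	if !ok {
		return nil, errors.New("deliverTxResCh was closed; is Tendermint shutting down?")
	}
	return msg, nil
}
```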
Ismail Khoffi
c4d93fd27b explicitly type MaxTotalVotingPower to int64 (#2953) 2018-11-30 14:43:16 -05:00
Ethan Buchman
dc2a338d96 Bucky/v0.27.0 (#2950)
* update changelog

* changelog, upgrading, version
2018-11-30 14:05:16 -05:00
Ismail Khoffi
725ed7969a Add some ProposerPriority tests (#2946)
* WIP: tests for #2785

* rebase onto develop

* add Bucky's test without changing ValidatorSet.Update

* make TestValidatorSetBasic fail

* add ProposerPriority preserving fix to ValidatorSet.Update to fix
TestValidatorSetBasic

* fix randValidator_ to stay in bounds of MaxTotalVotingPower

* check for expected proposer and remove some duplicate code

* actually limit the voting power of random validator ...

* fix test
2018-11-29 17:03:41 -05:00
Ethan Buchman
44b769b1ac types: ValidatorSet.Update preserves Accum (#2941)
* types: ValidatorSet.Update preserves ProposerPriority

This solves the other issue discovered as part of #2718,
where Accum (now called ProposerPriority) is reset to
0 every time a validator is updated.

* update changelog

* add test

* update comment

* Update types/validator_set_test.go

Co-Authored-By: ebuchman <ethan@coinculture.info>
2018-11-29 08:26:12 -05:00
Anton Kaliaev
380afaa678 docs: update ecosystem.json: add Rust ABCI (#2945) 2018-11-29 15:57:11 +04:00
Ismail Khoffi
b30c34e713 rename Accum -> ProposerPriority: (#2932)
- rename fields, methods, comments, tests
2018-11-28 15:35:09 -05:00
Dev Ojha
4039276085 remove unnecessary "crypto" import alias (#2940) 2018-11-28 23:53:04 +04:00
Ismail Khoffi
3f987adc92 Set accum of freshly added validator -(total voting power) (#2785)
* set the accum of a new validator to (-total voting power):

- disincentivize validators from unbonding and then rebonding to reset their
negative Accum to zero

additional unrelated changes:
- do not capitalize error msgs
- fix typo

* review comments: (re)capitalize errors & delete obsolete comments

* More changes suggested by @melekes

* WIP: do not batch clip (#2809)

* subtract avgAccum on each iteration

- temporarily skip test

* remove unused method safeMulClip / safeMul

* always subtract the avg accum

 - temp. skip another test

* remove overflow / underflow tests & add tests for avgAccum:

- add test for computeAvgAccum
- as we subtract the avgAccum now we will not trivially over/underflow

* address @cwgoes' comments

* shift by avg at the end of IncrementAccum

* Add comment to MaxTotalVotingPower

* Guard inputs to not exceed MaxTotalVotingPower

* Address review comments:

 - do not fetch current validator from set again
 - update error message

* Address a few review comments:

 - fix typo
 - extract variable

* address more review comments:

 - clarify 1.125*totalVotingPower == totalVotingPower + (totalVotingPower >> 3)

* review comments: panic instead of "clipping":

 - total voting power is guarded to not exceed MaxTotalVotingPower ->
 panic if this invariant is violated

* fix failing test
2018-11-28 13:12:17 -05:00
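For reference, the integer form of the 1.125 factor clarified in the bullets above, as a small sketch (the helper name is not from the original code):

```go
package types

// newValidatorProposerPriority shows the arithmetic only:
// 1.125 * total == total + total/8, so the penalty for a freshly added
// validator can be computed without floating point as -(total + total>>3).
func newValidatorProposerPriority(totalVotingPower int64) int64 {
	return -(totalVotingPower + (totalVotingPower >> 3))
}
```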
Ethan Buchman
b11788d36d types: NewValidatorSet doesn't panic on empty valz list (#2938)
* types: NewValidatorSet doesn't panic on empty valz list

* changelog
2018-11-28 13:09:29 -05:00
Daniil Lashin
9adcfe2804 docs: add client.Start() to RPC WS examples (#2936) 2018-11-28 20:55:18 +04:00
Zach
3d15579e0c docs: fix js-abci example (#2935) 2018-11-28 20:29:26 +04:00
Dev Ojha
4571f0fbe8 Enforce validators can only use the correct pubkey type (#2739)
* Enforce validators can only use the correct pubkey type

* adapt to variable renames

* Address comments from #2636

* separate updating and validation logic

* update spec

* Add test case for TestStringSliceEqual, clarify slice copying code

* Address @ebuchman's comments

* Split up testing validator update execution, and its validation
2018-11-28 09:09:27 -05:00
Ethan Buchman
8a73feae14 Merge pull request #2934 from tendermint/master
Merge pull request #2922 from tendermint/release/v0.26.4
2018-11-28 08:56:39 -05:00
srmo
e291fbbebe 2871 remove proposalHeartbeat infrastructure (#2874)
* 2871 remove proposalHeartbeat infrastructure

* 2871 add preliminary changelog entry
2018-11-28 08:52:34 -05:00
Jae Kwon
416d143bf7 R4R: Swap start/end in ReverseIterator (#2913)
* Swap start/end in ReverseIterator

* update CHANGELOG_PENDING

* fixes from review
2018-11-28 08:49:24 -05:00
Daniil Lashin
7213869fc6 Refactor updateState #2865 (#2929)
* Refactor updateState #2865

* Apply suggestions from code review

Co-Authored-By: danil-lashin <danil-lashin@yandex.ru>

* Apply suggestions from code review
2018-11-28 08:32:16 -05:00
Anton Kaliaev
ef9902e602 docs: small improvements (#2933)
* update docs

- make install_c cmd (install)
- explain node IDs (quick-start)
- update UPGRADING section (using-tendermint)

* use git clone with JS example

JS devs may not have Go installed and we should not force them to.

* rewrite sentence
2018-11-28 08:25:23 -05:00
Ethan Buchman
b771798d48 Merge pull request #2922 from tendermint/release/v0.26.4
Release/v0.26.4
2018-11-27 08:51:45 -05:00
Ethan Buchman
1abf34aa91 Prepare v0.26.4 changelog (#2921)
* prepare changelog

* linkify changelog

* changelog and version

* update changelog
2018-11-27 08:43:21 -05:00
Anton Kaliaev
92dc5fc77a don't return false positives when searching for a prefix of a tag value (#2919)
Fixes #2908
2018-11-27 08:12:28 -05:00
nagarajmanjunath
bef39f3346 Updated Marshal and unmarshal methods to make compatible with protobuf (#2918)
* upadtes in grpc Marshal and unmarshal

* update comments
2018-11-27 08:07:20 -05:00
Anton Kaliaev
94e63be922 [indexer] order results by index if height is the same (#2900)
Fixes #2775
2018-11-27 07:53:06 -05:00
Anton Kaliaev
9570ac4d3e rpc: Fix tx.height range queries (#2899)
Modify lookForHeight to return a height only when there's an equal operator.
Previously, it was returning a height even for range conditions: "height
< 10000".

Fixes #2759
2018-11-27 07:47:50 -05:00
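A simplified sketch of the lookForHeight behaviour described above; the condition struct and operator strings are stand-ins for the real query types:

```go
package kv

// condition is a stand-in for a parsed query condition such as "tx.height = 5".
type condition struct {
	tag     string
	op      string // "=", "<", "<=", ">", ">="
	operand int64
}

// lookForHeight returns a concrete height only for an equality condition.
// Range operators must return 0 so the indexer performs a range scan
// instead of treating the bound as an exact height.
func lookForHeight(conditions []condition) int64 {
	for _, c := range conditions {
		if c.tag == "tx.height" && c.op == "=" {
			return c.operand
		}
	}
	return 0
}
```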
Jernej Kos
99b9c9bf60 types: Emit tags from BeginBlock/EndBlock (#2747)
This commit makes both EventNewBlock and EventNewBlockHeader emit tags
on the event bus, so subscribers can use them in queries.
2018-11-26 22:21:42 -05:00
Ethan Buchman
47a0669d12 Fix fast sync stack with wrong block #2457 (#2731)
* fix fastsync getting stuck on a wrong block

* fixes from updates

* fixes from review

* Align spec with the changes

* fmt
2018-11-26 15:31:11 -05:00
JamesRay
fe3b97fd66 It's better read from genDoc than from state.validators when appHeight==0 in replay (#2893)
* optimize addProposalBlockPart

* optimize addProposalBlockPart

* if ProposalBlockParts and LockedBlockParts both exist, let LockedBlockParts overwrite ProposalBlockParts.

* fix tryAddBlock

* broadcast lockedBlockParts in higher priority

* when appHeight==0, it's better to fetch genDoc than state.validators.
2018-11-26 08:03:08 -05:00
Ismail Khoffi
56052c0a87 update encoding spec (#2903)
Quick fix for #2902
2018-11-26 12:24:32 +04:00
Anton Kaliaev
98e442a8de return back initially allowed level if we encounter allowed key (#2889)
Fixes #2868 where module=main setting overrides all others
2018-11-25 23:34:22 -05:00
Tomas Tauber
b12488b5f1 Handling integer IDs in JSON-RPC requests -- fixes #2366 (#2811)
* Fixed accepting integer IDs in requests for Tendermint RPC server (#2366)

* added a wrapper interface `jsonrpcid` that represents both string and int IDs in JSON-RPC requests/responses + custom JSON unmarshallers

* changed client-side code in RPC that uses it

* added extra tests for integer IDs

* updated CHANGELOG_PENDING, as suggested by PR instructions

* addressed PR comments

* added table driven tests for request type marshalling/unmarshalling
* expanded handler test to check IDs
* changed pending changelog note

* changed json rpc request/response unmarshalling to use empty interfaces and type switches on ID

* some cleanup
2018-11-25 23:33:40 -05:00
Anton Kaliaev
b487feba42 node: refactor privValidator ext client code & tests (#2895)
* update ConsensusState#OnStop comment

* consensus: set logger for WAL in tests

* refactor privValidator client code and tests

follow-up on https://github.com/tendermint/tendermint/pull/2866
2018-11-21 21:24:13 +04:00
Joe Bowman
72f86b5192 [pv] add ability to use ipc validator (#2866)
Ref #2827

(I have since seen #2847 which is a fix for the same issue; this PR has tests and docs too ;) )
2018-11-21 10:45:20 +04:00
Jae Kwon
42592d9ae0 IncrementAccum upon RPC /validators; Sanity checks and comments (#2808) 2018-11-21 10:43:02 +04:00
Dev Ojha
1610a05cbd Remove counter from every mempoolTx (#2891)
Within every tx in the mempool, we store a 64-bit counter
as an index for when it was inserted into the mempool.
This counter doesn't really serve any purpose;
it was likely added for debugging at one point.
Removing the counter reclaims memory,
which enables greater mempool sizes / reduces resource usage at the same size.

Closes #2835
2018-11-21 10:33:41 +04:00
Anton Kaliaev
2d525bf2b8 mempool: add txs from Update to cache
We should add txs that come in from mempool.Update to the mempool's
cache, so that they never hit a potentially expensive check tx.

Originally posted by @ValarDragon in #2846
https://github.com/tendermint/tendermint/issues/2846#issuecomment-439216656

Refs #2855
2018-11-20 10:47:30 +04:00
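A minimal sketch of the caching idea described above, with an assumed cache interface rather than the real mempool types:

```go
package mempool

// txCache is a stand-in for the mempool's seen-tx cache.
type txCache interface {
	Push(tx []byte) bool // returns false if the tx was already present
}

// addCommittedTxsToCache records txs that arrived via Update (i.e. were
// just committed in a block) so that a peer re-broadcasting them is
// rejected by the cache instead of triggering a potentially expensive
// CheckTx call against the application.
func addCommittedTxsToCache(cache txCache, committedTxs [][]byte) {
	for _, tx := range committedTxs {
		cache.Push(tx)
	}
}
```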
Anton Kaliaev
e9efbfe267 refactor mempool.Update
- rename filterTxs to removeTxs
- move txsMap into removeTxs func
- rename goodTxs to txsLeft
2018-11-20 10:47:30 +04:00
Anton Kaliaev
7b883a5457 docs/install: prepend cp to /usr/local with sudo (#2885)
Closes #2884
2018-11-20 10:27:58 +04:00
cong
fd8d1d6b69 add BlockTimeIota to the config.toml (#2878)
Refs #2877
2018-11-19 20:15:23 +04:00
Ethan Buchman
5abdd254e7 Merge pull request #2875 from tendermint/master
Merge pull request #2873 from tendermint/release/v0.26.3
2018-11-17 18:25:46 -05:00
Ethan Buchman
22dcc92232 Merge pull request #2873 from tendermint/release/v0.26.3
Release/v0.26.3
2018-11-17 18:25:12 -05:00
Ethan Buchman
ccf6b2b512 Bucky/v0.26.3 (#2872)
* update CONTRIBUTING with notes on CHANGELOG

* update changelog

* changelog and version
2018-11-17 16:04:05 -05:00
srmo
1466a2cc9f #2815 do not broadcast heartbeat proposal when we are non-validator (#2819)
* #2815 do not broadcast heartbeat proposal when we are non-validator

* #2815 adding preliminary changelog entry

* #2815 cosmetics and added test

* #2815 missed a little detail

- it's enough to call getAddress() once here

* #2815 remove debug logging from tests

* #2815 OK. I seem to be doing something fundamentally wrong here

* #2815 next iteration of proposalHeartbeat tests

- try and use "ensure" pattern in common_test

* 2815 incorporate review comments
2018-11-17 15:23:39 -05:00
Ethan Buchman
6168b404a7 p2p: NewMultiplexTransport takes an MConnConfig (#2869)
* p2p: NewMultiplexTransport takes an MConnConfig

* changelog

* move test func to test file
2018-11-17 03:16:49 -05:00
Zaki Manian
e6fc10faf6 R4R: Add timeouts to http servers (#2780)
* Replaces our current HTTP servers, where connections stay open forever, with ones that have timeouts, to prevent file descriptor exhaustion

* Use the correct handler

* Put in go routines

* fix err

* changelog

* rpc: export Read/WriteTimeout

The `broadcast_tx_commit` endpoint has its own timeout.
If this is longer than the http server's WriteTimeout, the
user will receive an error. Here, we export the WriteTimeout
and set the broadcast_tx_commit timeout to be less than it.

In the future, we should use a config struct for the timeouts
to avoid the need to export. The broadcast_tx_commit timeout
may also become configurable, but we must check that it's less
than the server's WriteTimeout.
2018-11-17 03:10:22 -05:00
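A hedged sketch of the timeout approach described above; the constant names and values are assumptions, not the exported Tendermint settings:

```go
package rpcserver

import (
	"net"
	"net/http"
	"time"
)

// Illustrative values only. The key invariant from the commit message is
// that the broadcast_tx_commit handler's own timeout must be shorter than
// the server's WriteTimeout, or the client sees a confusing write error.
const (
	readTimeout  = 3 * time.Second
	writeTimeout = 20 * time.Second
)

// startHTTPServer serves the RPC handler with explicit timeouts so idle
// connections cannot accumulate and exhaust file descriptors.
func startHTTPServer(listener net.Listener, handler http.Handler) error {
	srv := &http.Server{
		Handler:      handler,
		ReadTimeout:  readTimeout,
		WriteTimeout: writeTimeout,
	}
	return srv.Serve(listener)
}
```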
Anton Kaliaev
60018d6148 comment out until someone decides to tackle #2285 (#2760)
The current code results in a panic, and we certainly don't want that.
https://github.com/tendermint/tendermint/pull/2286#issuecomment-418281846
2018-11-16 18:17:07 -05:00
Ethan Buchman
0d5e0d2f13 p2p/conn: FlushStop. Use in pex. Closes #2092 (#2802)
* p2p/conn: FlushStop. Use in pex. Closes #2092

In seed mode, we call StopPeer immediately after Send.
Since flushing msgs to the peer happens in the background,
the peer connection is often closed before the messages are
actually sent out. The new FlushStop method allows all msgs
to first be written and flushed out on the conn before it is closed.

* fix dummy peer

* typo

* fixes from review

* more comments

* ensure pex doesn't call FlushStop more than once

FlushStop is not safe to call more than once,
but we call it from Receive in a go-routine so Receive
doesn't block.

To ensure we only call it once, we use the lastReceivedRequests
map - if an entry already exists, then FlushStop should already have
been called and we can return.
2018-11-16 17:44:19 -05:00
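A small sketch of the call-once guard described above, with simplified types standing in for the pex reactor state:

```go
package pex

import "sync"

// reactor keeps only the state needed to show the guard.
type reactor struct {
	mtx                  sync.Mutex
	lastReceivedRequests map[string]int64 // peer ID -> unix time of last pex request
}

// firstRequestFrom reports whether this is the first pex request seen from
// the peer. If an entry already exists, FlushStop has already been (or will
// be) called for this peer, so the caller must not call it again.
func (r *reactor) firstRequestFrom(peerID string, now int64) bool {
	r.mtx.Lock()
	defer r.mtx.Unlock()
	if _, ok := r.lastReceivedRequests[peerID]; ok {
		return false
	}
	r.lastReceivedRequests[peerID] = now
	return true
}
```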
Daniil Lashin
2cfdef6111 Make "Update to validators" msg value pretty (#2848)
* Make "Update to validators" msg value pretty #2765

* New format for logging validator updates

* Refactor logging validator updates

* Fix changelog item

* fix merge conflict
2018-11-16 17:38:22 -05:00
krhubert
b90e06a37c More verbose error log (#2864) 2018-11-16 17:36:42 -05:00
Anton Kaliaev
e6a0d098e8 small fixes to spec & http_server & Vagrantfile (#2859)
* Vagrantfile: install dev_tools

Follow-up on https://github.com/tendermint/tendermint/pull/2824

* update consensus params spec

* fix test name

* rpc_test: panic if failed to start listener

also
- remove http_server#MustListen
- align StartHTTPServer and StartHTTPAndTLSServer functions

* dep: allow minor releases for grpc
2018-11-16 12:58:30 -05:00
Ethan Buchman
d8ab8509de p2p: log 'Send failed' on Debug (#2857) 2018-11-16 11:37:58 +04:00
Ethan Buchman
85bba82154 Merge pull request #2858 from tendermint/master
Merge pull request #2851 from tendermint/release/v0.26.2
2018-11-15 18:42:47 -05:00
kevlubkcm
a676c71678 [R4R] Add proposer to NewRound event and proposal info to CompleteProposal event (#2767)
* add proposer info to EventCompleteProposal

* create separate EventData structure for CompleteProposal

* can't use rs.Proposal to get BlockID because it is not guaranteed to be set yet

* copying RoundState isn't helping us here

* add Step back to make compatible with original RoundState event. update changelog

* add NewRound event

* fix test

* remove unneeded RoundState

* put height round step into a struct

* pull out ValidatorInfo struct. add ensureProposal assert

* remove height-round-state sub-struct refactor

* minor fixes from review
2018-11-15 18:40:42 -05:00
Zach
c033975a53 docs ADR (#2828)
* wip

* use same ADR template as SDK

* finish docs adr

* lil fixes
2018-11-15 18:08:24 -05:00
Anton Kaliaev
06225e332e Config option for JSON output formatter (#2843)
* Introduce a structured logging option

* rename StructuredLog to LogFormat

* add changelog entry

* move log_format under log_level
2018-11-15 18:05:06 -05:00
Alessio Treglia
b646437ec7 Decouple StartHTTP{,AndTLS}Server from Listen() (#2791)
* Decouple StartHTTP{,AndTLS}Server from Listen()

This should help solve cosmos/cosmos-sdk#2715

* Fix small mistake

* Update StartGRPCServer

* s/rpc/rpcserver/

* Start grpccore.StartGRPCServer in a goroutine

* Reinstate l.Close()

* Fix rpc/lib/test/main.go

* Update code comment

* update changelog and comments

* fix tm-monitor. more comments
2018-11-15 15:33:04 -05:00
Zach
be8c2d5018 use a github link (#2849) 2018-11-15 14:41:40 -05:00
Anton Kaliaev
e93b492efe do not pin deps to exact versions (#2844)
because
- they are locked in .lock file already
- individual dependencies can be updated with `dep ensure -update XXX`
- review process (and ^^^) should help us prevent accidental updates

Closes #2798
2018-11-15 14:40:18 -05:00
165 changed files with 3554 additions and 1864 deletions

View File

@@ -7,6 +7,13 @@ defaults: &defaults
environment:
GOBIN: /tmp/workspace/bin
docs_update_config: &docs_update_config
working_directory: ~/repo
docker:
- image: tendermint/docs_deployment
environment:
AWS_REGION: us-east-1
jobs:
setup_dependencies:
<<: *defaults
@@ -339,10 +346,25 @@ jobs:
name: upload
command: bash .circleci/codecov.sh -f coverage.txt
deploy_docs:
<<: *docs_update_config
steps:
- checkout
- run:
name: Trigger website build
command: |
chamber exec tendermint -- start_website_build
workflows:
version: 2
test-suite:
jobs:
- deploy_docs:
filters:
branches:
only:
- master
- develop
- setup_dependencies
- lint:
requires:

View File

@@ -1,5 +1,199 @@
# Changelog
## v0.27.2
*December 16th, 2018*
### IMPROVEMENTS:
- [node] [\#3025](https://github.com/tendermint/tendermint/issues/3025) Validate NodeInfo addresses on startup.
### BUG FIXES:
- [p2p] [\#3025](https://github.com/tendermint/tendermint/pull/3025) Revert to using defers in addrbook. Fixes deadlocks in pex and consensus upon invalid ExternalAddr/ListenAddr configuration.
## v0.27.1
*December 15th, 2018*
Special thanks to external contributors on this release:
@danil-lashin, @hleb-albau, @james-ray, @leo-xinwang
### FEATURES:
- [rpc] [\#2964](https://github.com/tendermint/tendermint/issues/2964) Add `UnconfirmedTxs(limit)` and `NumUnconfirmedTxs()` methods to HTTP/Local clients (@danil-lashin)
- [docs] [\#3004](https://github.com/tendermint/tendermint/issues/3004) Enable full-text search on docs pages
### IMPROVEMENTS:
- [consensus] [\#2971](https://github.com/tendermint/tendermint/issues/2971) Return error if ValidatorSet is empty after InitChain
(@leo-xinwang)
- [ci/cd] [\#3005](https://github.com/tendermint/tendermint/issues/3005) Updated CircleCI job to trigger website build when docs are updated
- [docs] Various updates
### BUG FIXES:
- [cmd] [\#2983](https://github.com/tendermint/tendermint/issues/2983) `testnet` command always sets `addr_book_strict = false`
- [config] [\#2980](https://github.com/tendermint/tendermint/issues/2980) Fix CORS options formatting
- [kv indexer] [\#2912](https://github.com/tendermint/tendermint/issues/2912) Don't ignore key when executing CONTAINS
- [mempool] [\#2961](https://github.com/tendermint/tendermint/issues/2961) Call `notifyTxsAvailable` if there're txs left after committing a block, but recheck=false
- [mempool] [\#2994](https://github.com/tendermint/tendermint/issues/2994) Reject txs with negative GasWanted
- [p2p] [\#2990](https://github.com/tendermint/tendermint/issues/2990) Fix a bug where seeds don't disconnect from a peer after 3h
- [consensus] [\#3006](https://github.com/tendermint/tendermint/issues/3006) Save state after InitChain only when stateHeight is also 0 (@james-ray)
## v0.27.0
*December 5th, 2018*
Special thanks to external contributors on this release:
@danil-lashin, @srmo
Special thanks to @dlguddus for discovering a [major
issue](https://github.com/tendermint/tendermint/issues/2718#issuecomment-440888677)
in the proposer selection algorithm.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
This release is primarily about fixes to the proposer selection algorithm
in preparation for the [Cosmos Game of
Stakes](https://blog.cosmos.network/the-game-of-stakes-is-open-for-registration-83a404746ee6).
It also makes use of the `ConsensusParams.Validator.PubKeyTypes` to restrict the
key types that can be used by validators, and removes the `Heartbeat` consensus
message.
### BREAKING CHANGES:
* CLI/RPC/Config
- [rpc] [\#2932](https://github.com/tendermint/tendermint/issues/2932) Rename `accum` to `proposer_priority`
* Go API
- [db] [\#2913](https://github.com/tendermint/tendermint/pull/2913)
ReverseIterator API change: start < end, and end is exclusive.
- [types] [\#2932](https://github.com/tendermint/tendermint/issues/2932) Rename `Validator.Accum` to `Validator.ProposerPriority`
* Blockchain Protocol
- [state] [\#2714](https://github.com/tendermint/tendermint/issues/2714) Validators can now only use pubkeys allowed within
ConsensusParams.Validator.PubKeyTypes
* P2P Protocol
- [consensus] [\#2871](https://github.com/tendermint/tendermint/issues/2871)
Remove *ProposalHeartbeat* message as it serves no real purpose (@srmo)
- [state] Fixes for proposer selection:
- [\#2785](https://github.com/tendermint/tendermint/issues/2785) Accum for new validators is `-1.125*totalVotingPower` instead of 0
- [\#2941](https://github.com/tendermint/tendermint/issues/2941) val.Accum is preserved during ValidatorSet.Update to avoid being
reset to 0
### IMPROVEMENTS:
- [state] [\#2929](https://github.com/tendermint/tendermint/issues/2929) Minor refactor of updateState logic (@danil-lashin)
- [node] \#2959 Allow node to start even if software's BlockProtocol is
different from state's BlockProtocol
- [pex] \#2959 Pex reactor logger uses `module=pex`
### BUG FIXES:
- [p2p] \#2968 Panic on transport error rather than continuing to run but not
accept new connections
- [p2p] \#2969 Fix mismatch in peer count between `/net_info` and the prometheus
metrics
- [rpc] \#2408 `/broadcast_tx_commit`: Fix "interface conversion: interface {} in nil, not EventDataTx" panic (could happen if somebody sent a tx using `/broadcast_tx_commit` while Tendermint was being stopped)
- [state] [\#2785](https://github.com/tendermint/tendermint/issues/2785) Fix accum for new validators to be `-1.125*totalVotingPower`
instead of 0, forcing them to wait before becoming the proposer. Also:
- do not batch clip
- keep accums averaged near 0
- [txindex/kv] [\#2925](https://github.com/tendermint/tendermint/issues/2925) Don't return false positives when range searching for a prefix of a tag value
- [types] [\#2938](https://github.com/tendermint/tendermint/issues/2938) Fix regression in v0.26.4 where we panic on empty
genDoc.Validators
- [types] [\#2941](https://github.com/tendermint/tendermint/issues/2941) Preserve val.Accum during ValidatorSet.Update to avoid it being
reset to 0 every time a validator is updated
## v0.26.4
*November 27th, 2018*
Special thanks to external contributors on this release:
@ackratos, @goolAdapter, @james-ray, @joe-bowman, @kostko,
@nagarajmanjunath, @tomtau
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### FEATURES:
- [rpc] [\#2747](https://github.com/tendermint/tendermint/issues/2747) Enable subscription to tags emitted from `BeginBlock`/`EndBlock` (@kostko)
- [types] [\#2747](https://github.com/tendermint/tendermint/issues/2747) Add `ResultBeginBlock` and `ResultEndBlock` fields to `EventDataNewBlock`
and `EventDataNewBlockHeader` to support subscriptions (@kostko)
- [types] [\#2918](https://github.com/tendermint/tendermint/issues/2918) Add Marshal, MarshalTo, Unmarshal methods to various structs
to support Protobuf compatibility (@nagarajmanjunath)
### IMPROVEMENTS:
- [config] [\#2877](https://github.com/tendermint/tendermint/issues/2877) Add `blocktime_iota` to the config.toml (@ackratos)
- NOTE: this should be a ConsensusParam, not part of the config, and will be
removed from the config at a later date
([\#2920](https://github.com/tendermint/tendermint/issues/2920)).
- [mempool] [\#2882](https://github.com/tendermint/tendermint/issues/2882) Add txs from Update to cache
- [mempool] [\#2891](https://github.com/tendermint/tendermint/issues/2891) Remove local int64 counter from being stored in every tx
- [node] [\#2866](https://github.com/tendermint/tendermint/issues/2866) Add ability to instantiate IPCVal (@joe-bowman)
### BUG FIXES:
- [blockchain] [\#2731](https://github.com/tendermint/tendermint/issues/2731) Retry both blocks if either is bad to avoid getting stuck during fast sync (@goolAdapter)
- [consensus] [\#2893](https://github.com/tendermint/tendermint/issues/2893) Use genDoc.Validators instead of state.NextValidators on replay when appHeight==0 (@james-ray)
- [log] [\#2868](https://github.com/tendermint/tendermint/issues/2868) Fix `module=main` setting overriding all others
- NOTE: this changes the default logging behaviour to be much less verbose.
Set `log_level="info"` to restore the previous behaviour.
- [rpc] [\#2808](https://github.com/tendermint/tendermint/issues/2808) Fix `accum` field in `/validators` by calling `IncrementAccum` if necessary
- [rpc] [\#2811](https://github.com/tendermint/tendermint/issues/2811) Allow integer IDs in JSON-RPC requests (@tomtau)
- [txindex/kv] [\#2759](https://github.com/tendermint/tendermint/issues/2759) Fix tx.height range queries
- [txindex/kv] [\#2775](https://github.com/tendermint/tendermint/issues/2775) Order tx results by index if height is the same
- [txindex/kv] [\#2908](https://github.com/tendermint/tendermint/issues/2908) Don't return false positives when searching for a prefix of a tag value
## v0.26.3
*November 17th, 2018*
Special thanks to external contributors on this release:
@danil-lashin, @kevlubkcm, @krhubert, @srmo
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
* Go API
- [rpc] [\#2791](https://github.com/tendermint/tendermint/issues/2791) Functions that start HTTP servers are now blocking:
- Impacts `StartHTTPServer`, `StartHTTPAndTLSServer`, and `StartGRPCServer`
- These functions now take a `net.Listener` instead of an address
- [rpc] [\#2767](https://github.com/tendermint/tendermint/issues/2767) Subscribing to events
`NewRound` and `CompleteProposal` return new types `EventDataNewRound` and
`EventDataCompleteProposal`, respectively, instead of the generic `EventDataRoundState`. (@kevlubkcm)
### FEATURES:
- [log] [\#2843](https://github.com/tendermint/tendermint/issues/2843) New `log_format` config option, which can be set to 'plain' for colored
text or 'json' for JSON output
- [types] [\#2767](https://github.com/tendermint/tendermint/issues/2767) New event types EventDataNewRound (with ProposerInfo) and EventDataCompleteProposal (with BlockID). (@kevlubkcm)
### IMPROVEMENTS:
- [dep] [\#2844](https://github.com/tendermint/tendermint/issues/2844) Dependencies are no longer pinned to an exact version in the
Gopkg.toml:
- Serialization libs are allowed to vary by patch release
- Other libs are allowed to vary by minor release
- [p2p] [\#2857](https://github.com/tendermint/tendermint/issues/2857) "Send failed" is logged at debug level instead of error.
- [rpc] [\#2780](https://github.com/tendermint/tendermint/issues/2780) Add read and write timeouts to HTTP servers
- [state] [\#2848](https://github.com/tendermint/tendermint/issues/2848) Make "Update to validators" msg value pretty (@danil-lashin)
### BUG FIXES:
- [consensus] [\#2819](https://github.com/tendermint/tendermint/issues/2819) Don't send proposalHeartbeat if not a validator
- [docs] [\#2859](https://github.com/tendermint/tendermint/issues/2859) Fix ConsensusParams details in spec
- [libs/autofile] [\#2760](https://github.com/tendermint/tendermint/issues/2760) Comment out autofile permissions check - should fix
running Tendermint on Windows
- [p2p] [\#2869](https://github.com/tendermint/tendermint/issues/2869) Set connection config properly instead of always using default
- [p2p/pex] [\#2802](https://github.com/tendermint/tendermint/issues/2802) Seed mode fixes:
- Only disconnect from inbound peers
- Use FlushStop instead of Sleep to ensure all messages are sent before
disconnecting
## v0.26.2
*November 15th, 2018*

View File

@@ -1,14 +1,9 @@
# Pending
## v0.26.3
## v0.27.3
*TBD*
Special thanks to external contributors on this release:
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
* CLI/RPC/Config
@@ -21,9 +16,9 @@ program](https://hackerone.com/tendermint).
* P2P Protocol
### FEATURES:
### IMPROVEMENTS:
### BUG FIXES:

View File

@@ -69,17 +69,40 @@ vagrant ssh
make test
```
## Testing
All repos should be hooked up to [CircleCI](https://circleci.com/).
If they have `.go` files in the root directory, they will be automatically
tested by circle using `go test -v -race ./...`. If not, they will need a
`circle.yml`. Ideally, every repo has a `Makefile` that defines `make test` and
includes its continuous integration status using a badge in the `README.md`.
## Changelog
Every fix, improvement, feature, or breaking change should be made in a
pull-request that includes an update to the `CHANGELOG_PENDING.md` file.
Changelog entries should be formatted as follows:
```
- [module] \#xxx Some description about the change (@contributor)
```
Here, `module` is the part of the code that changed (typically a
top-level Go package), `xxx` is the pull-request number, and `contributor`
is the author/s of the change.
It's also acceptable for `xxx` to refer to the relevant issue number, but pull-request
numbers are preferred.
Note this means pull-requests should be opened first so the changelog can then
be updated with the pull-request's number.
There is no need to include the full link, as this will be added
automatically during release. But please include the backslash and pound, eg. `\#2313`.
Changelog entries should be ordered alphabetically according to the
`module`, and numerically according to the pull-request number.
Changes with multiple classifications should be doubly included (eg. a bug fix
that is also a breaking change should be recorded under both).
Breaking changes are further subdivided according to the APIs/users they impact.
Any change that affects multiple APIs/users should be recorded multiple times - for
instance, a change to the `Blockchain Protocol` that removes a field from the
header should also be recorded under `CLI/RPC/Config` since the field will be
removed from the header in rpc responses as well.
## Branching Model and Release
All repos should adhere to the branching model: http://nvie.com/posts/a-successful-git-branching-model/.
@@ -104,13 +127,14 @@ master constitutes a tagged release.
- start on `develop`
- run integration tests (see `test_integrations` in Makefile)
- prepare changelog:
- copy `CHANGELOG_PENDING.md` to `CHANGELOG.md`
- copy `CHANGELOG_PENDING.md` to top of `CHANGELOG.md`
- run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for
all issues
- run `bash ./scripts/authors.sh` to get a list of authors since the latest
release, and add the github aliases of external contributors to the top of
the changelog. To lookup an alias from an email, try `bash
./scripts/authors.sh <email>`
- reset the `CHANGELOG_PENDING.md`
- bump versions
- push to release/vX.X.X to run the extended integration tests on the CI
- merge to master
@@ -127,3 +151,13 @@ master constitutes a tagged release.
- merge hotfix-vX.X.X to master
- merge hotfix-vX.X.X to develop
- delete the hotfix-vX.X.X branch
## Testing
All repos should be hooked up to [CircleCI](https://circleci.com/).
If they have `.go` files in the root directory, they will be automatically
tested by circle using `go test -v -race ./...`. If not, they will need a
`circle.yml`. Ideally, every repo has a `Makefile` that defines `make test` and
includes its continuous integration status using a badge in the `README.md`.

23
Gopkg.lock generated
View File

@@ -128,14 +128,6 @@
revision = "ea4d1f681babbce9545c9c5f3d5194a789c89f5b"
version = "v1.2.0"
[[projects]]
digest = "1:b0c25f00bad20d783d259af2af8666969e2fc343fa0dc9efe52936bbd67fb758"
name = "github.com/rs/cors"
packages = ["."]
pruneopts = "UT"
revision = "9a47f48565a795472d43519dd49aac781f3034fb"
version = "v1.6.0"
[[projects]]
digest = "1:ea40c24cdbacd054a6ae9de03e62c5f252479b96c716375aace5c120d68647c8"
name = "github.com/hashicorp/hcl"
@@ -226,14 +218,16 @@
version = "v1.0.0"
[[projects]]
digest = "1:c1a04665f9613e082e1209cf288bf64f4068dcd6c87a64bf1c4ff006ad422ba0"
digest = "1:26663fafdea73a38075b07e8e9d82fc0056379d2be8bb4e13899e8fda7c7dd23"
name = "github.com/prometheus/client_golang"
packages = [
"prometheus",
"prometheus/internal",
"prometheus/promhttp",
]
pruneopts = "UT"
revision = "ae27198cdd90bf12cd134ad79d1366a6cf49f632"
revision = "abad2d1bd44235a26707c172eab6bca5bf2dbad3"
version = "v0.9.1"
[[projects]]
branch = "master"
@@ -275,6 +269,14 @@
pruneopts = "UT"
revision = "e2704e165165ec55d062f5919b4b29494e9fa790"
[[projects]]
digest = "1:b0c25f00bad20d783d259af2af8666969e2fc343fa0dc9efe52936bbd67fb758"
name = "github.com/rs/cors"
packages = ["."]
pruneopts = "UT"
revision = "9a47f48565a795472d43519dd49aac781f3034fb"
version = "v1.6.0"
[[projects]]
digest = "1:6a4a11ba764a56d2758899ec6f3848d24698d48442ebce85ee7a3f63284526cd"
name = "github.com/spf13/afero"
@@ -524,6 +526,7 @@
"github.com/prometheus/client_golang/prometheus",
"github.com/prometheus/client_golang/prometheus/promhttp",
"github.com/rcrowley/go-metrics",
"github.com/rs/cors",
"github.com/spf13/cobra",
"github.com/spf13/viper",
"github.com/stretchr/testify/assert",

View File

@@ -20,57 +20,60 @@
# unused-packages = true
#
###########################################################
# NOTE: All packages should be pinned to specific versions.
# Packages without releases must pin to a commit.
# Allow only patch releases for serialization libraries
[[constraint]]
name = "github.com/go-kit/kit"
version = "=0.6.0"
name = "github.com/tendermint/go-amino"
version = "~0.14.1"
[[constraint]]
name = "github.com/gogo/protobuf"
version = "=1.1.1"
version = "~1.1.1"
[[constraint]]
name = "github.com/golang/protobuf"
version = "=1.1.0"
version = "~1.1.0"
# Allow only minor releases for other libraries
[[constraint]]
name = "github.com/go-kit/kit"
version = "^0.6.0"
[[constraint]]
name = "github.com/gorilla/websocket"
version = "=1.2.0"
version = "^1.2.0"
[[constraint]]
name = "github.com/rs/cors"
version = "1.6.0"
version = "^1.6.0"
[[constraint]]
name = "github.com/pkg/errors"
version = "=0.8.0"
version = "^0.8.0"
[[constraint]]
name = "github.com/spf13/cobra"
version = "=0.0.1"
version = "^0.0.1"
[[constraint]]
name = "github.com/spf13/viper"
version = "=1.0.0"
version = "^1.0.0"
[[constraint]]
name = "github.com/stretchr/testify"
version = "=1.2.1"
[[constraint]]
name = "github.com/tendermint/go-amino"
version = "v0.14.1"
version = "^1.2.1"
[[constraint]]
name = "google.golang.org/grpc"
version = "=1.13.0"
version = "^1.13.0"
[[constraint]]
name = "github.com/fortytw2/leaktest"
version = "=1.2.0"
version = "^1.2.0"
[[constraint]]
name = "github.com/prometheus/client_golang"
version = "^0.9.1"
###################################
## Some repos dont have releases.
@@ -94,11 +97,6 @@
name = "github.com/tendermint/btcd"
revision = "e5840949ff4fff0c56f9b6a541e22b63581ea9df"
# Haven't made a release since 2016.
[[constraint]]
name = "github.com/prometheus/client_golang"
revision = "ae27198cdd90bf12cd134ad79d1366a6cf49f632"
[[constraint]]
name = "github.com/rcrowley/go-metrics"
revision = "e2704e165165ec55d062f5919b4b29494e9fa790"

View File

@@ -32,6 +32,9 @@ build_race:
install:
CGO_ENABLED=0 go install $(BUILD_FLAGS) -tags $(BUILD_TAGS) ./cmd/tendermint
install_c:
CGO_ENABLED=1 go install $(BUILD_FLAGS) -tags "$(BUILD_TAGS) gcc" ./cmd/tendermint
########################################
### Protobuf
@@ -291,6 +294,7 @@ build-linux:
build-docker-localnode:
cd networks/local
make
cd -
# Run a 4-node testnet locally
localnet-start: localnet-stop
@@ -328,4 +332,4 @@ build-slate:
# To avoid unintended conflicts with file names, always add to .PHONY
# unless there is a reason not to.
# https://www.gnu.org/software/make/manual/html_node/Phony-Targets.html
.PHONY: check build build_race build_abci dist install install_abci check_dep check_tools get_tools get_dev_tools update_tools get_vendor_deps draw_deps get_protoc protoc_abci protoc_libs gen_certs clean_certs grpc_dbserver test_cover test_apps test_persistence test_p2p test test_race test_integrations test_release test100 vagrant_test fmt rpc-docs build-linux localnet-start localnet-stop build-docker build-docker-localnode sentry-start sentry-config sentry-stop build-slate protoc_grpc protoc_all
.PHONY: check build build_race build_abci dist install install_abci check_dep check_tools get_tools get_dev_tools update_tools get_vendor_deps draw_deps get_protoc protoc_abci protoc_libs gen_certs clean_certs grpc_dbserver test_cover test_apps test_persistence test_p2p test test_race test_integrations test_release test100 vagrant_test fmt rpc-docs build-linux localnet-start localnet-stop build-docker build-docker-localnode sentry-start sentry-config sentry-stop build-slate protoc_grpc protoc_all build_c install_c

View File

@@ -3,9 +3,50 @@
This guide provides steps to be followed when you upgrade your applications to
a newer version of Tendermint Core.
## v0.27.0
This release contains some breaking changes to the block and p2p protocols,
but does not change any core data structures, so it should be compatible with
existing blockchains from the v0.26 series that only used Ed25519 validator keys.
Blockchains using Secp256k1 for validators will not be compatible. This is due
to the fact that we now enforce which key types validators can use as a
consensus param. The default is Ed25519, and Secp256k1 must be activated
explicitly.
It is recommended to upgrade all nodes at once to avoid incompatibilities at the
peer layer - namely, the heartbeat consensus message has been removed (only
relevant if `create_empty_blocks=false` or `create_empty_blocks_interval > 0`),
and the proposer selection algorithm has changed. Since proposer information is
never included in the blockchain, this change only affects the peer layer.
### Go API Changes
#### libs/db
The ReverseIterator API has changed the meaning of `start` and `end`.
Before, iteration was from `start` to `end`, where
`start > end`. Now, iteration is from `end` to `start`, where `start < end`.
The iterator also excludes `end`. This change allows a simplified and more
intuitive logic, aligning the semantic meaning of `start` and `end` in the
`Iterator` and `ReverseIterator`.
### Applications
This release enforces a new consensus parameter, the
ValidatorParams.PubKeyTypes. Applications must ensure that they only return
validator updates with the allowed PubKeyTypes. If a validator update includes a
pubkey type that is not included in the ConsensusParams.Validator.PubKeyTypes,
block execution will fail and the consensus will halt.
By default, only Ed25519 pubkeys may be used for validators. Enabling
Secp256k1 requires explicit modification of the ConsensusParams.
Please update your application accordingly (ie. restrict validators to only be
able to use Ed25519 keys, or explicitly add additional key types to the genesis
file).
## v0.26.0
New 0.26.0 release contains a lot of changes to core data types and protocols. It is not
This release contains a lot of changes to core data types and protocols. It is not
compatible with the old versions and there is no straightforward way to update
old data to be compatible with the new version.
@@ -67,7 +108,7 @@ For more information, see:
### Go API Changes
#### crypto.merkle
#### crypto/merkle
The `merkle.Hasher` interface was removed. Functions which used to take `Hasher`
now simply take `[]byte`. This means that any objects being Merklized should be

2
Vagrantfile vendored
View File

@@ -53,6 +53,6 @@ Vagrant.configure("2") do |config|
# get all deps and tools, ready to install/test
su - vagrant -c 'source /home/vagrant/.bash_profile'
su - vagrant -c 'cd /home/vagrant/go/src/github.com/tendermint/tendermint && make get_tools && make get_vendor_deps'
su - vagrant -c 'cd /home/vagrant/go/src/github.com/tendermint/tendermint && make get_tools && make get_dev_tools && make get_vendor_deps'
SHELL
end

View File

@@ -54,7 +54,7 @@ RETRY_LOOP:
if cli.mustConnect {
return err
}
cli.Logger.Error(fmt.Sprintf("abci.grpcClient failed to connect to %v. Retrying...\n", cli.addr))
cli.Logger.Error(fmt.Sprintf("abci.grpcClient failed to connect to %v. Retrying...\n", cli.addr), "err", err)
time.Sleep(time.Second * dialRetryIntervalSeconds)
continue RETRY_LOOP
}

View File

@@ -67,7 +67,7 @@ RETRY_LOOP:
if cli.mustConnect {
return err
}
cli.Logger.Error(fmt.Sprintf("abci.socketClient failed to connect to %v. Retrying...", cli.addr))
cli.Logger.Error(fmt.Sprintf("abci.socketClient failed to connect to %v. Retrying...", cli.addr), "err", err)
time.Sleep(time.Second * dialRetryIntervalSeconds)
continue RETRY_LOOP
}

View File

@@ -168,9 +168,12 @@ func (pool *BlockPool) IsCaughtUp() bool {
return false
}
// some conditions to determine if we're caught up
receivedBlockOrTimedOut := (pool.height > 0 || time.Since(pool.startTime) > 5*time.Second)
ourChainIsLongestAmongPeers := pool.maxPeerHeight == 0 || pool.height >= pool.maxPeerHeight
// Some conditions to determine if we're caught up.
// Ensures we've either received a block or waited some amount of time,
// and that we're synced to the highest known height. Note we use maxPeerHeight - 1
// because to sync block H requires block H+1 to verify the LastCommit.
receivedBlockOrTimedOut := pool.height > 0 || time.Since(pool.startTime) > 5*time.Second
ourChainIsLongestAmongPeers := pool.maxPeerHeight == 0 || pool.height >= (pool.maxPeerHeight-1)
isCaughtUp := receivedBlockOrTimedOut && ourChainIsLongestAmongPeers
return isCaughtUp
}
@@ -252,7 +255,8 @@ func (pool *BlockPool) AddBlock(peerID p2p.ID, block *types.Block, blockSize int
peer.decrPending(blockSize)
}
} else {
// Bad peer?
pool.Logger.Info("invalid peer", "peer", peerID, "blockHeight", block.Height)
pool.sendError(errors.New("invalid peer"), peerID)
}
}
@@ -292,7 +296,7 @@ func (pool *BlockPool) RemovePeer(peerID p2p.ID) {
func (pool *BlockPool) removePeer(peerID p2p.ID) {
for _, requester := range pool.requesters {
if requester.getPeerID() == peerID {
requester.redo()
requester.redo(peerID)
}
}
delete(pool.peers, peerID)
@@ -326,8 +330,11 @@ func (pool *BlockPool) makeNextRequester() {
defer pool.mtx.Unlock()
nextHeight := pool.height + pool.requestersLen()
if nextHeight > pool.maxPeerHeight {
return
}
request := newBPRequester(pool, nextHeight)
// request.SetLogger(pool.Logger.With("height", nextHeight))
pool.requesters[nextHeight] = request
atomic.AddInt32(&pool.numPending, 1)
@@ -453,7 +460,7 @@ type bpRequester struct {
pool *BlockPool
height int64
gotBlockCh chan struct{}
redoCh chan struct{}
redoCh chan p2p.ID // redo may be sent multiple times; add peerID to identify repeats
mtx sync.Mutex
peerID p2p.ID
@@ -465,7 +472,7 @@ func newBPRequester(pool *BlockPool, height int64) *bpRequester {
pool: pool,
height: height,
gotBlockCh: make(chan struct{}, 1),
redoCh: make(chan struct{}, 1),
redoCh: make(chan p2p.ID, 1),
peerID: "",
block: nil,
@@ -524,9 +531,9 @@ func (bpr *bpRequester) reset() {
// Tells bpRequester to pick another peer and try again.
// NOTE: Nonblocking, and does nothing if another redo
// was already requested.
func (bpr *bpRequester) redo() {
func (bpr *bpRequester) redo(peerId p2p.ID) {
select {
case bpr.redoCh <- struct{}{}:
case bpr.redoCh <- peerId:
default:
}
}
@@ -565,9 +572,13 @@ OUTER_LOOP:
return
case <-bpr.Quit():
return
case <-bpr.redoCh:
bpr.reset()
continue OUTER_LOOP
case peerID := <-bpr.redoCh:
if peerID == bpr.peerID {
bpr.reset()
continue OUTER_LOOP
} else {
continue WAIT_LOOP
}
case <-bpr.gotBlockCh:
// We got a block!
// Continue the for-loop and wait til Quit.

View File

@@ -16,16 +16,52 @@ func init() {
}
type testPeer struct {
id p2p.ID
height int64
id p2p.ID
height int64
inputChan chan inputData //make sure each peer's data is sequential
}
func makePeers(numPeers int, minHeight, maxHeight int64) map[p2p.ID]testPeer {
peers := make(map[p2p.ID]testPeer, numPeers)
type inputData struct {
t *testing.T
pool *BlockPool
request BlockRequest
}
func (p testPeer) runInputRoutine() {
go func() {
for input := range p.inputChan {
p.simulateInput(input)
}
}()
}
// Request desired, pretend like we got the block immediately.
func (p testPeer) simulateInput(input inputData) {
block := &types.Block{Header: types.Header{Height: input.request.Height}}
input.pool.AddBlock(input.request.PeerID, block, 123)
input.t.Logf("Added block from peer %v (height: %v)", input.request.PeerID, input.request.Height)
}
type testPeers map[p2p.ID]testPeer
func (ps testPeers) start() {
for _, v := range ps {
v.runInputRoutine()
}
}
func (ps testPeers) stop() {
for _, v := range ps {
close(v.inputChan)
}
}
func makePeers(numPeers int, minHeight, maxHeight int64) testPeers {
peers := make(testPeers, numPeers)
for i := 0; i < numPeers; i++ {
peerID := p2p.ID(cmn.RandStr(12))
height := minHeight + cmn.RandInt63n(maxHeight-minHeight)
peers[peerID] = testPeer{peerID, height}
peers[peerID] = testPeer{peerID, height, make(chan inputData, 10)}
}
return peers
}
@@ -45,6 +81,9 @@ func TestBasic(t *testing.T) {
defer pool.Stop()
peers.start()
defer peers.stop()
// Introduce each peer.
go func() {
for _, peer := range peers {
@@ -77,12 +116,8 @@ func TestBasic(t *testing.T) {
if request.Height == 300 {
return // Done!
}
// Request desired, pretend like we got the block immediately.
go func() {
block := &types.Block{Header: types.Header{Height: request.Height}}
pool.AddBlock(request.PeerID, block, 123)
t.Logf("Added block from peer %v (height: %v)", request.PeerID, request.Height)
}()
peers[request.PeerID].inputChan <- inputData{t, pool, request}
}
}
}

View File

@@ -264,8 +264,12 @@ FOR_LOOP:
bcR.Logger.Info("Time to switch to consensus reactor!", "height", height)
bcR.pool.Stop()
conR := bcR.Switch.Reactor("CONSENSUS").(consensusReactor)
conR.SwitchToConsensus(state, blocksSynced)
conR, ok := bcR.Switch.Reactor("CONSENSUS").(consensusReactor)
if ok {
conR.SwitchToConsensus(state, blocksSynced)
} else {
// should only happen during testing
}
break FOR_LOOP
}
@@ -314,6 +318,13 @@ FOR_LOOP:
// still need to clean up the rest.
bcR.Switch.StopPeerForError(peer, fmt.Errorf("BlockchainReactor validation error: %v", err))
}
peerID2 := bcR.pool.RedoRequest(second.Height)
peer2 := bcR.Switch.Peers().Get(peerID2)
if peer2 != nil && peer2 != peer {
// NOTE: we've already removed the peer's request, but we
// still need to clean up the rest.
bcR.Switch.StopPeerForError(peer2, fmt.Errorf("BlockchainReactor validation error: %v", err))
}
continue FOR_LOOP
} else {
bcR.pool.PopRequest()

View File

@@ -1,72 +1,151 @@
package blockchain
import (
"net"
"sort"
"testing"
"time"
"github.com/stretchr/testify/assert"
abci "github.com/tendermint/tendermint/abci/types"
cfg "github.com/tendermint/tendermint/config"
cmn "github.com/tendermint/tendermint/libs/common"
dbm "github.com/tendermint/tendermint/libs/db"
"github.com/tendermint/tendermint/libs/log"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
tmtime "github.com/tendermint/tendermint/types/time"
)
func makeStateAndBlockStore(logger log.Logger) (sm.State, *BlockStore) {
config := cfg.ResetTestRoot("blockchain_reactor_test")
// blockDB := dbm.NewDebugDB("blockDB", dbm.NewMemDB())
// stateDB := dbm.NewDebugDB("stateDB", dbm.NewMemDB())
var config *cfg.Config
func randGenesisDoc(numValidators int, randPower bool, minPower int64) (*types.GenesisDoc, []types.PrivValidator) {
validators := make([]types.GenesisValidator, numValidators)
privValidators := make([]types.PrivValidator, numValidators)
for i := 0; i < numValidators; i++ {
val, privVal := types.RandValidator(randPower, minPower)
validators[i] = types.GenesisValidator{
PubKey: val.PubKey,
Power: val.VotingPower,
}
privValidators[i] = privVal
}
sort.Sort(types.PrivValidatorsByAddress(privValidators))
return &types.GenesisDoc{
GenesisTime: tmtime.Now(),
ChainID: config.ChainID(),
Validators: validators,
}, privValidators
}
func makeVote(header *types.Header, blockID types.BlockID, valset *types.ValidatorSet, privVal types.PrivValidator) *types.Vote {
addr := privVal.GetAddress()
idx, _ := valset.GetByAddress(addr)
vote := &types.Vote{
ValidatorAddress: addr,
ValidatorIndex: idx,
Height: header.Height,
Round: 1,
Timestamp: tmtime.Now(),
Type: types.PrecommitType,
BlockID: blockID,
}
privVal.SignVote(header.ChainID, vote)
return vote
}
type BlockchainReactorPair struct {
reactor *BlockchainReactor
app proxy.AppConns
}
func newBlockchainReactor(logger log.Logger, genDoc *types.GenesisDoc, privVals []types.PrivValidator, maxBlockHeight int64) BlockchainReactorPair {
if len(privVals) != 1 {
panic("only support one validator")
}
app := &testApp{}
cc := proxy.NewLocalClientCreator(app)
proxyApp := proxy.NewAppConns(cc)
err := proxyApp.Start()
if err != nil {
panic(cmn.ErrorWrap(err, "error start app"))
}
blockDB := dbm.NewMemDB()
stateDB := dbm.NewMemDB()
blockStore := NewBlockStore(blockDB)
state, err := sm.LoadStateFromDBOrGenesisFile(stateDB, config.GenesisFile())
state, err := sm.LoadStateFromDBOrGenesisDoc(stateDB, genDoc)
if err != nil {
panic(cmn.ErrorWrap(err, "error constructing state from genesis file"))
}
return state, blockStore
}
func newBlockchainReactor(logger log.Logger, maxBlockHeight int64) *BlockchainReactor {
state, blockStore := makeStateAndBlockStore(logger)
// Make the blockchainReactor itself
// Make the BlockchainReactor itself.
// NOTE we have to create and commit the blocks first because
// pool.height is determined from the store.
fastSync := true
var nilApp proxy.AppConnConsensus
blockExec := sm.NewBlockExecutor(dbm.NewMemDB(), log.TestingLogger(), nilApp,
blockExec := sm.NewBlockExecutor(dbm.NewMemDB(), log.TestingLogger(), proxyApp.Consensus(),
sm.MockMempool{}, sm.MockEvidencePool{})
// let's add some blocks in
for blockHeight := int64(1); blockHeight <= maxBlockHeight; blockHeight++ {
lastCommit := &types.Commit{}
if blockHeight > 1 {
lastBlockMeta := blockStore.LoadBlockMeta(blockHeight - 1)
lastBlock := blockStore.LoadBlock(blockHeight - 1)
vote := makeVote(&lastBlock.Header, lastBlockMeta.BlockID, state.Validators, privVals[0])
lastCommit = &types.Commit{Precommits: []*types.Vote{vote}, BlockID: lastBlockMeta.BlockID}
}
thisBlock := makeBlock(blockHeight, state, lastCommit)
thisParts := thisBlock.MakePartSet(types.BlockPartSizeBytes)
blockID := types.BlockID{thisBlock.Hash(), thisParts.Header()}
state, err = blockExec.ApplyBlock(state, blockID, thisBlock)
if err != nil {
panic(cmn.ErrorWrap(err, "error apply block"))
}
blockStore.SaveBlock(thisBlock, thisParts, lastCommit)
}
bcReactor := NewBlockchainReactor(state.Copy(), blockExec, blockStore, fastSync)
bcReactor.SetLogger(logger.With("module", "blockchain"))
// Next: we need to set a switch in order for peers to be added in
bcReactor.Switch = p2p.NewSwitch(cfg.DefaultP2PConfig(), nil)
// Lastly: let's add some blocks in
for blockHeight := int64(1); blockHeight <= maxBlockHeight; blockHeight++ {
firstBlock := makeBlock(blockHeight, state)
secondBlock := makeBlock(blockHeight+1, state)
firstParts := firstBlock.MakePartSet(types.BlockPartSizeBytes)
blockStore.SaveBlock(firstBlock, firstParts, secondBlock.LastCommit)
}
return bcReactor
return BlockchainReactorPair{bcReactor, proxyApp}
}
func TestNoBlockResponse(t *testing.T) {
maxBlockHeight := int64(20)
config = cfg.ResetTestRoot("blockchain_reactor_test")
genDoc, privVals := randGenesisDoc(1, false, 30)
bcr := newBlockchainReactor(log.TestingLogger(), maxBlockHeight)
bcr.Start()
defer bcr.Stop()
maxBlockHeight := int64(65)
// Add some peers in
peer := newbcrTestPeer(p2p.ID(cmn.RandStr(12)))
bcr.AddPeer(peer)
reactorPairs := make([]BlockchainReactorPair, 2)
chID := byte(0x01)
reactorPairs[0] = newBlockchainReactor(log.TestingLogger(), genDoc, privVals, maxBlockHeight)
reactorPairs[1] = newBlockchainReactor(log.TestingLogger(), genDoc, privVals, 0)
p2p.MakeConnectedSwitches(config.P2P, 2, func(i int, s *p2p.Switch) *p2p.Switch {
s.AddReactor("BLOCKCHAIN", reactorPairs[i].reactor)
return s
}, p2p.Connect2Switches)
defer func() {
for _, r := range reactorPairs {
r.reactor.Stop()
r.app.Stop()
}
}()
tests := []struct {
height int64
@@ -78,72 +157,100 @@ func TestNoBlockResponse(t *testing.T) {
{100, false},
}
// receive a request message from peer,
// wait for our response to be received on the peer
for _, tt := range tests {
reqBlockMsg := &bcBlockRequestMessage{tt.height}
reqBlockBytes := cdc.MustMarshalBinaryBare(reqBlockMsg)
bcr.Receive(chID, peer, reqBlockBytes)
msg := peer.lastBlockchainMessage()
for {
if reactorPairs[1].reactor.pool.IsCaughtUp() {
break
}
time.Sleep(10 * time.Millisecond)
}
assert.Equal(t, maxBlockHeight, reactorPairs[0].reactor.store.Height())
for _, tt := range tests {
block := reactorPairs[1].reactor.store.LoadBlock(tt.height)
if tt.existent {
if blockMsg, ok := msg.(*bcBlockResponseMessage); !ok {
t.Fatalf("Expected to receive a block response for height %d", tt.height)
} else if blockMsg.Block.Height != tt.height {
t.Fatalf("Expected response to be for height %d, got %d", tt.height, blockMsg.Block.Height)
}
assert.True(t, block != nil)
} else {
if noBlockMsg, ok := msg.(*bcNoBlockResponseMessage); !ok {
t.Fatalf("Expected to receive a no block response for height %d", tt.height)
} else if noBlockMsg.Height != tt.height {
t.Fatalf("Expected response to be for height %d, got %d", tt.height, noBlockMsg.Height)
}
assert.True(t, block == nil)
}
}
}
/*
// NOTE: This is too hard to test without
// an easy way to add test peer to switch
// or without significant refactoring of the module.
// Alternatively we could actually dial a TCP conn but
// that seems extreme.
func TestBadBlockStopsPeer(t *testing.T) {
maxBlockHeight := int64(20)
config = cfg.ResetTestRoot("blockchain_reactor_test")
genDoc, privVals := randGenesisDoc(1, false, 30)
bcr := newBlockchainReactor(log.TestingLogger(), maxBlockHeight)
bcr.Start()
defer bcr.Stop()
maxBlockHeight := int64(148)
// Add some peers in
peer := newbcrTestPeer(p2p.ID(cmn.RandStr(12)))
otherChain := newBlockchainReactor(log.TestingLogger(), genDoc, privVals, maxBlockHeight)
defer func() {
otherChain.reactor.Stop()
otherChain.app.Stop()
}()
// XXX: This doesn't add the peer to anything,
// so it's hard to check that it's later removed
bcr.AddPeer(peer)
assert.True(t, bcr.Switch.Peers().Size() > 0)
reactorPairs := make([]BlockchainReactorPair, 4)
// send a bad block from the peer
// default blocks already don't have commits, so this should fail
block := bcr.store.LoadBlock(3)
msg := &bcBlockResponseMessage{Block: block}
peer.Send(BlockchainChannel, struct{ BlockchainMessage }{msg})
reactorPairs[0] = newBlockchainReactor(log.TestingLogger(), genDoc, privVals, maxBlockHeight)
reactorPairs[1] = newBlockchainReactor(log.TestingLogger(), genDoc, privVals, 0)
reactorPairs[2] = newBlockchainReactor(log.TestingLogger(), genDoc, privVals, 0)
reactorPairs[3] = newBlockchainReactor(log.TestingLogger(), genDoc, privVals, 0)
ticker := time.NewTicker(time.Millisecond * 10)
timer := time.NewTimer(time.Second * 2)
LOOP:
for {
select {
case <-ticker.C:
if bcr.Switch.Peers().Size() == 0 {
break LOOP
}
case <-timer.C:
t.Fatal("Timed out waiting to disconnect peer")
switches := p2p.MakeConnectedSwitches(config.P2P, 4, func(i int, s *p2p.Switch) *p2p.Switch {
s.AddReactor("BLOCKCHAIN", reactorPairs[i].reactor)
return s
}, p2p.Connect2Switches)
defer func() {
for _, r := range reactorPairs {
r.reactor.Stop()
r.app.Stop()
}
}()
for {
if reactorPairs[3].reactor.pool.IsCaughtUp() {
break
}
time.Sleep(1 * time.Second)
}
// at this point, reactors[0-3] are fully caught up
assert.Equal(t, 3, reactorPairs[1].reactor.Switch.Peers().Size())
// make reactorPairs[3] an invalid peer by giving it the other chain's store
reactorPairs[3].reactor.store = otherChain.reactor.store
lastReactorPair := newBlockchainReactor(log.TestingLogger(), genDoc, privVals, 0)
reactorPairs = append(reactorPairs, lastReactorPair)
switches = append(switches, p2p.MakeConnectedSwitches(config.P2P, 1, func(i int, s *p2p.Switch) *p2p.Switch {
s.AddReactor("BLOCKCHAIN", reactorPairs[len(reactorPairs)-1].reactor)
return s
}, p2p.Connect2Switches)...)
for i := 0; i < len(reactorPairs)-1; i++ {
p2p.Connect2Switches(switches, i, len(reactorPairs)-1)
}
for {
if lastReactorPair.reactor.pool.IsCaughtUp() || lastReactorPair.reactor.Switch.Peers().Size() == 0 {
break
}
time.Sleep(1 * time.Second)
}
assert.True(t, lastReactorPair.reactor.Switch.Peers().Size() < len(reactorPairs)-1)
}
*/
//----------------------------------------------
// utility funcs
@@ -155,55 +262,41 @@ func makeTxs(height int64) (txs []types.Tx) {
return txs
}
func makeBlock(height int64, state sm.State) *types.Block {
block, _ := state.MakeBlock(height, makeTxs(height), new(types.Commit), nil, state.Validators.GetProposer().Address)
func makeBlock(height int64, state sm.State, lastCommit *types.Commit) *types.Block {
block, _ := state.MakeBlock(height, makeTxs(height), lastCommit, nil, state.Validators.GetProposer().Address)
return block
}
// The Test peer
type bcrTestPeer struct {
cmn.BaseService
id p2p.ID
ch chan interface{}
type testApp struct {
abci.BaseApplication
}
var _ p2p.Peer = (*bcrTestPeer)(nil)
var _ abci.Application = (*testApp)(nil)
func newbcrTestPeer(id p2p.ID) *bcrTestPeer {
bcr := &bcrTestPeer{
id: id,
ch: make(chan interface{}, 2),
}
bcr.BaseService = *cmn.NewBaseService(nil, "bcrTestPeer", bcr)
return bcr
func (app *testApp) Info(req abci.RequestInfo) (resInfo abci.ResponseInfo) {
return abci.ResponseInfo{}
}
func (tp *bcrTestPeer) lastBlockchainMessage() interface{} { return <-tp.ch }
func (tp *bcrTestPeer) TrySend(chID byte, msgBytes []byte) bool {
var msg BlockchainMessage
err := cdc.UnmarshalBinaryBare(msgBytes, &msg)
if err != nil {
panic(cmn.ErrorWrap(err, "Error while trying to parse a BlockchainMessage"))
}
if _, ok := msg.(*bcStatusResponseMessage); ok {
// Discard status response messages since they skew our results
// We only want to deal with:
// + bcBlockResponseMessage
// + bcNoBlockResponseMessage
} else {
tp.ch <- msg
}
return true
func (app *testApp) BeginBlock(req abci.RequestBeginBlock) abci.ResponseBeginBlock {
return abci.ResponseBeginBlock{}
}
func (tp *bcrTestPeer) Send(chID byte, msgBytes []byte) bool { return tp.TrySend(chID, msgBytes) }
func (tp *bcrTestPeer) NodeInfo() p2p.NodeInfo { return p2p.DefaultNodeInfo{} }
func (tp *bcrTestPeer) Status() p2p.ConnectionStatus { return p2p.ConnectionStatus{} }
func (tp *bcrTestPeer) ID() p2p.ID { return tp.id }
func (tp *bcrTestPeer) IsOutbound() bool { return false }
func (tp *bcrTestPeer) IsPersistent() bool { return true }
func (tp *bcrTestPeer) Get(s string) interface{} { return s }
func (tp *bcrTestPeer) Set(string, interface{}) {}
func (tp *bcrTestPeer) RemoteIP() net.IP { return []byte{127, 0, 0, 1} }
func (tp *bcrTestPeer) OriginalAddr() *p2p.NetAddress { return nil }
func (app *testApp) EndBlock(req abci.RequestEndBlock) abci.ResponseEndBlock {
return abci.ResponseEndBlock{}
}
func (app *testApp) DeliverTx(tx []byte) abci.ResponseDeliverTx {
return abci.ResponseDeliverTx{Tags: []cmn.KVPair{}}
}
func (app *testApp) CheckTx(tx []byte) abci.ResponseCheckTx {
return abci.ResponseCheckTx{}
}
func (app *testApp) Commit() abci.ResponseCommit {
return abci.ResponseCommit{}
}
func (app *testApp) Query(reqQuery abci.RequestQuery) (resQuery abci.ResponseQuery) {
return
}

View File

@@ -9,13 +9,30 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
cfg "github.com/tendermint/tendermint/config"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/libs/db"
dbm "github.com/tendermint/tendermint/libs/db"
"github.com/tendermint/tendermint/libs/log"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
tmtime "github.com/tendermint/tendermint/types/time"
)
func makeStateAndBlockStore(logger log.Logger) (sm.State, *BlockStore) {
config := cfg.ResetTestRoot("blockchain_reactor_test")
// blockDB := dbm.NewDebugDB("blockDB", dbm.NewMemDB())
// stateDB := dbm.NewDebugDB("stateDB", dbm.NewMemDB())
blockDB := dbm.NewMemDB()
stateDB := dbm.NewMemDB()
state, err := sm.LoadStateFromDBOrGenesisFile(stateDB, config.GenesisFile())
if err != nil {
panic(cmn.ErrorWrap(err, "error constructing state from genesis file"))
}
return state, NewBlockStore(blockDB)
}
func TestLoadBlockStoreStateJSON(t *testing.T) {
db := db.NewMemDB()
@@ -65,7 +82,7 @@ func freshBlockStore() (*BlockStore, db.DB) {
var (
state, _ = makeStateAndBlockStore(log.NewTMLogger(new(bytes.Buffer)))
block = makeBlock(1, state)
block = makeBlock(1, state, new(types.Commit))
partSet = block.MakePartSet(2)
part1 = partSet.GetPart(0)
part2 = partSet.GetPart(1)
@@ -88,7 +105,7 @@ func TestBlockStoreSaveLoadBlock(t *testing.T) {
}
// save a block
block := makeBlock(bs.Height()+1, state)
block := makeBlock(bs.Height()+1, state, new(types.Commit))
validPartSet := block.MakePartSet(2)
seenCommit := &types.Commit{Precommits: []*types.Vote{{Height: 10,
Timestamp: tmtime.Now()}}}
@@ -331,7 +348,7 @@ func TestLoadBlockMeta(t *testing.T) {
func TestBlockFetchAtHeight(t *testing.T) {
state, bs := makeStateAndBlockStore(log.NewTMLogger(new(bytes.Buffer)))
require.Equal(t, bs.Height(), int64(0), "initially the height should be zero")
block := makeBlock(bs.Height()+1, state)
block := makeBlock(bs.Height()+1, state, new(types.Commit))
partSet := block.MakePartSet(2)
seenCommit := &types.Commit{Precommits: []*types.Vote{{Height: 10,

View File

@@ -54,6 +54,9 @@ var RootCmd = &cobra.Command{
if err != nil {
return err
}
if config.LogFormat == cfg.LogFormatJSON {
logger = log.NewTMJSONLogger(log.NewSyncWriter(os.Stdout))
}
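For reference, a standalone sketch of the JSON logger wired up above; the import path and constructors are the ones used in the hunk, everything else is illustrative:

```
package main

import (
	"os"

	"github.com/tendermint/tendermint/libs/log"
)

func main() {
	// Same constructor pair the root command uses when log_format = "json".
	logger := log.NewTMJSONLogger(log.NewSyncWriter(os.Stdout))
	logger.Info("logging in JSON", "module", "main") // emits a JSON-formatted line
}
```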
logger, err = tmflags.ParseLogLevel(config.LogLevel, logger, cfg.DefaultLogLevel())
if err != nil {
return err

View File

@@ -127,14 +127,31 @@ func testnetFiles(cmd *cobra.Command, args []string) error {
}
}
// Gather persistent peer addresses.
var (
persistentPeers string
err error
)
if populatePersistentPeers {
err := populatePersistentPeersInConfigAndWriteIt(config)
persistentPeers, err = persistentPeersString(config)
if err != nil {
_ = os.RemoveAll(outputDir)
return err
}
}
// Overwrite default config.
for i := 0; i < nValidators+nNonValidators; i++ {
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
config.SetRoot(nodeDir)
config.P2P.AddrBookStrict = false
if populatePersistentPeers {
config.P2P.PersistentPeers = persistentPeers
}
cfg.WriteConfigFile(filepath.Join(nodeDir, "config", "config.toml"), config)
}
fmt.Printf("Successfully initialized %v node directories\n", nValidators+nNonValidators)
return nil
}
@@ -157,28 +174,16 @@ func hostnameOrIP(i int) string {
return fmt.Sprintf("%s%d", hostnamePrefix, i)
}
func populatePersistentPeersInConfigAndWriteIt(config *cfg.Config) error {
func persistentPeersString(config *cfg.Config) (string, error) {
persistentPeers := make([]string, nValidators+nNonValidators)
for i := 0; i < nValidators+nNonValidators; i++ {
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
config.SetRoot(nodeDir)
nodeKey, err := p2p.LoadNodeKey(config.NodeKeyFile())
if err != nil {
return err
return "", err
}
persistentPeers[i] = p2p.IDAddressString(nodeKey.ID(), fmt.Sprintf("%s:%d", hostnameOrIP(i), p2pPort))
}
persistentPeersList := strings.Join(persistentPeers, ",")
for i := 0; i < nValidators+nNonValidators; i++ {
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
config.SetRoot(nodeDir)
config.P2P.PersistentPeers = persistentPeersList
config.P2P.AddrBookStrict = false
// overwrite default config
cfg.WriteConfigFile(filepath.Join(nodeDir, "config", "config.toml"), config)
}
return nil
return strings.Join(persistentPeers, ","), nil
}
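For reference, a hedged sketch of the string this helper produces; the node IDs below are made-up, shortened placeholders (real ones come from `p2p.LoadNodeKey` as above):

```
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Hypothetical ID@host:port entries, shortened for readability.
	peers := []string{
		"1a2b3c4d5e6f7a8b@node0:26656",
		"9f8e7d6c5b4a3f2e@node1:26656",
	}
	// Shape of the value written to p2p.persistent_peers in each node's config.toml.
	fmt.Println(strings.Join(peers, ","))
}
```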

View File

@@ -14,6 +14,11 @@ const (
FuzzModeDrop = iota
// FuzzModeDelay is a mode in which we randomly sleep
FuzzModeDelay
// LogFormatPlain is a format for colored text
LogFormatPlain = "plain"
// LogFormatJSON is a format for json output
LogFormatJSON = "json"
)
// NOTE: Most of the structs & relevant comments + the
@@ -94,6 +99,9 @@ func (cfg *Config) SetRoot(root string) *Config {
// ValidateBasic performs basic validation (checking param bounds, etc.) and
// returns an error if any check fails.
func (cfg *Config) ValidateBasic() error {
if err := cfg.BaseConfig.ValidateBasic(); err != nil {
return err
}
if err := cfg.RPC.ValidateBasic(); err != nil {
return errors.Wrap(err, "Error in [rpc] section")
}
@@ -145,6 +153,9 @@ type BaseConfig struct {
// Output level for logging
LogLevel string `mapstructure:"log_level"`
// Output format: 'plain' (colored text) or 'json'
LogFormat string `mapstructure:"log_format"`
// Path to the JSON file containing the initial validator set and other meta data
Genesis string `mapstructure:"genesis_file"`
@@ -179,6 +190,7 @@ func DefaultBaseConfig() BaseConfig {
ProxyApp: "tcp://127.0.0.1:26658",
ABCI: "socket",
LogLevel: DefaultPackageLogLevels(),
LogFormat: LogFormatPlain,
ProfListenAddress: "",
FastSync: true,
FilterPeers: false,
@@ -221,6 +233,17 @@ func (cfg BaseConfig) DBDir() string {
return rootify(cfg.DBPath, cfg.RootDir)
}
// ValidateBasic performs basic validation (checking param bounds, etc.) and
// returns an error if any check fails.
func (cfg BaseConfig) ValidateBasic() error {
switch cfg.LogFormat {
case LogFormatPlain, LogFormatJSON:
default:
return errors.New("unknown log_format (must be 'plain' or 'json')")
}
return nil
}
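A minimal sketch of how the new check surfaces to callers, assuming `cfg.DefaultConfig` from the same package; the snippet itself is illustrative and not part of the diff:

```
package main

import (
	"fmt"

	cfg "github.com/tendermint/tendermint/config"
)

func main() {
	config := cfg.DefaultConfig()
	config.LogFormat = "xml" // anything other than "plain" or "json"
	if err := config.ValidateBasic(); err != nil {
		fmt.Println(err) // unknown log_format (must be 'plain' or 'json')
	}
}
```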
// DefaultLogLevel returns a default log level of "error"
func DefaultLogLevel() string {
return "error"
@@ -260,7 +283,7 @@ type RPCConfig struct {
// Maximum number of simultaneous connections.
// Does not include RPC (HTTP&WebSocket) connections. See max_open_connections
// If you want to accept more significant number than the default, make sure
// If you want to accept a larger number than the default, make sure
// you increase your OS limits.
// 0 - unlimited.
GRPCMaxOpenConnections int `mapstructure:"grpc_max_open_connections"`
@@ -270,7 +293,7 @@ type RPCConfig struct {
// Maximum number of simultaneous connections (including WebSocket).
// Does not include gRPC connections. See grpc_max_open_connections
// If you want to accept more significant number than the default, make sure
// If you want to accept a larger number than the default, make sure
// you increase your OS limits.
// 0 - unlimited.
// Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
@@ -751,12 +774,12 @@ type InstrumentationConfig struct {
PrometheusListenAddr string `mapstructure:"prometheus_listen_addr"`
// Maximum number of simultaneous connections.
// If you want to accept more significant number than the default, make sure
// If you want to accept a larger number than the default, make sure
// you increase your OS limits.
// 0 - unlimited.
MaxOpenConnections int `mapstructure:"max_open_connections"`
// Tendermint instrumentation namespace.
// Instrumentation namespace.
Namespace string `mapstructure:"namespace"`
}

View File

@@ -86,6 +86,9 @@ db_dir = "{{ js .BaseConfig.DBPath }}"
# Output level for logging, including package level options
log_level = "{{ .BaseConfig.LogLevel }}"
# Output format: 'plain' (colored text) or 'json'
log_format = "{{ .BaseConfig.LogFormat }}"
##### additional base config options #####
# Path to the JSON file containing the initial validator set and other meta data
@@ -122,13 +125,13 @@ laddr = "{{ .RPC.ListenAddress }}"
# A list of origins a cross-domain request can be executed from
# Default value '[]' disables cors support
# Use '["*"]' to allow any origin
cors_allowed_origins = "{{ .RPC.CORSAllowedOrigins }}"
cors_allowed_origins = [{{ range .RPC.CORSAllowedOrigins }}{{ printf "%q, " . }}{{end}}]
# A list of methods the client is allowed to use with cross-domain requests
cors_allowed_methods = "{{ .RPC.CORSAllowedMethods }}"
cors_allowed_methods = [{{ range .RPC.CORSAllowedMethods }}{{ printf "%q, " . }}{{end}}]
# A list of non simple headers the client is allowed to use with cross-domain requests
cors_allowed_headers = "{{ .RPC.CORSAllowedHeaders }}"
cors_allowed_headers = [{{ range .RPC.CORSAllowedHeaders }}{{ printf "%q, " . }}{{end}}]
# TCP or UNIX socket address for the gRPC server to listen on
# NOTE: This server only supports /broadcast_tx_commit
@@ -136,7 +139,7 @@ grpc_laddr = "{{ .RPC.GRPCListenAddress }}"
# Maximum number of simultaneous connections.
# Does not include RPC (HTTP&WebSocket) connections. See max_open_connections
# If you want to accept more significant number than the default, make sure
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
# Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
@@ -148,7 +151,7 @@ unsafe = {{ .RPC.Unsafe }}
# Maximum number of simultaneous connections (including WebSocket).
# Does not include gRPC connections. See grpc_max_open_connections
# If you want to accept more significant number than the default, make sure
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
# Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
@@ -257,14 +260,17 @@ create_empty_blocks_interval = "{{ .Consensus.CreateEmptyBlocksInterval }}"
peer_gossip_sleep_duration = "{{ .Consensus.PeerGossipSleepDuration }}"
peer_query_maj23_sleep_duration = "{{ .Consensus.PeerQueryMaj23SleepDuration }}"
# Block time parameters. Corresponds to the minimum time increment between consecutive blocks.
blocktime_iota = "{{ .Consensus.BlockTimeIota }}"
##### transactions indexer configuration options #####
[tx_index]
# What indexer to use for transactions
#
# Options:
# 1) "null" (default)
# 2) "kv" - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
# 1) "null"
# 2) "kv" (default) - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
indexer = "{{ .TxIndex.Indexer }}"
# Comma-separated list of tags to index (by default the only tag is "tx.hash")
@@ -296,7 +302,7 @@ prometheus = {{ .Instrumentation.Prometheus }}
prometheus_listen_addr = "{{ .Instrumentation.PrometheusListenAddr }}"
# Maximum number of simultaneous connections.
# If you want to accept more significant number than the default, make sure
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
max_open_connections = {{ .Instrumentation.MaxOpenConnections }}

View File

@@ -405,8 +405,24 @@ func ensureNewVote(voteCh <-chan interface{}, height int64, round int) {
}
func ensureNewRound(roundCh <-chan interface{}, height int64, round int) {
ensureNewEvent(roundCh, height, round, ensureTimeout,
"Timeout expired while waiting for NewRound event")
select {
case <-time.After(ensureTimeout):
panic("Timeout expired while waiting for NewRound event")
case ev := <-roundCh:
rs, ok := ev.(types.EventDataNewRound)
if !ok {
panic(
fmt.Sprintf(
"expected a EventDataNewRound, got %v.Wrong subscription channel?",
reflect.TypeOf(rs)))
}
if rs.Height != height {
panic(fmt.Sprintf("expected height %v, got %v", height, rs.Height))
}
if rs.Round != round {
panic(fmt.Sprintf("expected round %v, got %v", round, rs.Round))
}
}
}
func ensureNewTimeout(timeoutCh <-chan interface{}, height int64, round int, timeout int64) {
@@ -416,8 +432,24 @@ func ensureNewTimeout(timeoutCh <-chan interface{}, height int64, round int, tim
}
func ensureNewProposal(proposalCh <-chan interface{}, height int64, round int) {
ensureNewEvent(proposalCh, height, round, ensureTimeout,
"Timeout expired while waiting for NewProposal event")
select {
case <-time.After(ensureTimeout):
panic("Timeout expired while waiting for NewProposal event")
case ev := <-proposalCh:
rs, ok := ev.(types.EventDataCompleteProposal)
if !ok {
panic(
fmt.Sprintf(
"expected a EventDataCompleteProposal, got %v.Wrong subscription channel?",
reflect.TypeOf(rs)))
}
if rs.Height != height {
panic(fmt.Sprintf("expected height %v, got %v", height, rs.Height))
}
if rs.Round != round {
panic(fmt.Sprintf("expected round %v, got %v", round, rs.Round))
}
}
}
func ensureNewValidBlock(validBlockCh <-chan interface{}, height int64, round int) {
@@ -492,6 +524,30 @@ func ensureVote(voteCh <-chan interface{}, height int64, round int,
}
}
func ensureProposal(proposalCh <-chan interface{}, height int64, round int, propId types.BlockID) {
select {
case <-time.After(ensureTimeout):
panic("Timeout expired while waiting for NewProposal event")
case ev := <-proposalCh:
rs, ok := ev.(types.EventDataCompleteProposal)
if !ok {
panic(
fmt.Sprintf(
"expected a EventDataCompleteProposal, got %v.Wrong subscription channel?",
reflect.TypeOf(rs)))
}
if rs.Height != height {
panic(fmt.Sprintf("expected height %v, got %v", height, rs.Height))
}
if rs.Round != round {
panic(fmt.Sprintf("expected round %v, got %v", round, rs.Round))
}
if !rs.BlockID.Equals(propId) {
panic("Proposed block does not match expected block")
}
}
}
func ensurePrecommit(voteCh <-chan interface{}, height int64, round int) {
ensureVote(voteCh, height, round, types.PrecommitType)
}

View File

@@ -71,19 +71,19 @@ func TestMempoolProgressInHigherRound(t *testing.T) {
}
startTestRound(cs, height, round)
ensureNewRoundStep(newRoundCh, height, round) // first round at first height
ensureNewEventOnChannel(newBlockCh) // first block gets committed
ensureNewRound(newRoundCh, height, round) // first round at first height
ensureNewEventOnChannel(newBlockCh) // first block gets committed
height = height + 1 // moving to the next height
round = 0
ensureNewRoundStep(newRoundCh, height, round) // first round at next height
deliverTxsRange(cs, 0, 1) // we deliver txs, but dont set a proposal so we get the next round
ensureNewRound(newRoundCh, height, round) // first round at next height
deliverTxsRange(cs, 0, 1) // we deliver txs, but dont set a proposal so we get the next round
ensureNewTimeout(timeoutCh, height, round, cs.config.TimeoutPropose.Nanoseconds())
round = round + 1 // moving to the next round
ensureNewRoundStep(newRoundCh, height, round) // wait for the next round
ensureNewEventOnChannel(newBlockCh) // now we can commit the block
round = round + 1 // moving to the next round
ensureNewRound(newRoundCh, height, round) // wait for the next round
ensureNewEventOnChannel(newBlockCh) // now we can commit the block
}
func deliverTxsRange(cs *ConsensusState, start, end int) {

View File

@@ -8,7 +8,7 @@ import (
"github.com/pkg/errors"
amino "github.com/tendermint/go-amino"
"github.com/tendermint/go-amino"
cstypes "github.com/tendermint/tendermint/consensus/types"
cmn "github.com/tendermint/tendermint/libs/common"
tmevents "github.com/tendermint/tendermint/libs/events"
@@ -264,11 +264,6 @@ func (conR *ConsensusReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte)
BlockID: msg.BlockID,
Votes: ourVotes,
}))
case *ProposalHeartbeatMessage:
hb := msg.Heartbeat
conR.Logger.Debug("Received proposal heartbeat message",
"height", hb.Height, "round", hb.Round, "sequence", hb.Sequence,
"valIdx", hb.ValidatorIndex, "valAddr", hb.ValidatorAddress)
default:
conR.Logger.Error(fmt.Sprintf("Unknown message type %v", reflect.TypeOf(msg)))
}
@@ -369,8 +364,8 @@ func (conR *ConsensusReactor) FastSync() bool {
//--------------------------------------
// subscribeToBroadcastEvents subscribes for new round steps, votes and
// proposal heartbeats using internal pubsub defined on state to broadcast
// subscribeToBroadcastEvents subscribes for new round steps and votes
// using internal pubsub defined on state to broadcast
// them to peers upon receiving.
func (conR *ConsensusReactor) subscribeToBroadcastEvents() {
const subscriber = "consensus-reactor"
@@ -389,10 +384,6 @@ func (conR *ConsensusReactor) subscribeToBroadcastEvents() {
conR.broadcastHasVoteMessage(data.(*types.Vote))
})
conR.conS.evsw.AddListenerForEvent(subscriber, types.EventProposalHeartbeat,
func(data tmevents.EventData) {
conR.broadcastProposalHeartbeatMessage(data.(*types.Heartbeat))
})
}
func (conR *ConsensusReactor) unsubscribeFromBroadcastEvents() {
@@ -400,13 +391,6 @@ func (conR *ConsensusReactor) unsubscribeFromBroadcastEvents() {
conR.conS.evsw.RemoveListener(subscriber)
}
func (conR *ConsensusReactor) broadcastProposalHeartbeatMessage(hb *types.Heartbeat) {
conR.Logger.Debug("Broadcasting proposal heartbeat message",
"height", hb.Height, "round", hb.Round, "sequence", hb.Sequence)
msg := &ProposalHeartbeatMessage{hb}
conR.Switch.Broadcast(StateChannel, cdc.MustMarshalBinaryBare(msg))
}
func (conR *ConsensusReactor) broadcastNewRoundStepMessage(rs *cstypes.RoundState) {
nrsMsg := makeRoundStepMessage(rs)
conR.Switch.Broadcast(StateChannel, cdc.MustMarshalBinaryBare(nrsMsg))
@@ -1387,7 +1371,6 @@ func RegisterConsensusMessages(cdc *amino.Codec) {
cdc.RegisterConcrete(&HasVoteMessage{}, "tendermint/HasVote", nil)
cdc.RegisterConcrete(&VoteSetMaj23Message{}, "tendermint/VoteSetMaj23", nil)
cdc.RegisterConcrete(&VoteSetBitsMessage{}, "tendermint/VoteSetBits", nil)
cdc.RegisterConcrete(&ProposalHeartbeatMessage{}, "tendermint/ProposalHeartbeat", nil)
}
func decodeMsg(bz []byte) (msg ConsensusMessage, err error) {
@@ -1664,18 +1647,3 @@ func (m *VoteSetBitsMessage) String() string {
}
//-------------------------------------
// ProposalHeartbeatMessage is sent to signal that a node is alive and waiting for transactions for a proposal.
type ProposalHeartbeatMessage struct {
Heartbeat *types.Heartbeat
}
// ValidateBasic performs basic validation.
func (m *ProposalHeartbeatMessage) ValidateBasic() error {
return m.Heartbeat.ValidateBasic()
}
// String returns a string representation.
func (m *ProposalHeartbeatMessage) String() string {
return fmt.Sprintf("[HEARTBEAT %v]", m.Heartbeat)
}

View File

@@ -213,8 +213,8 @@ func (m *mockEvidencePool) Update(block *types.Block, state sm.State) {
//------------------------------------
// Ensure a testnet sends proposal heartbeats and makes blocks when there are txs
func TestReactorProposalHeartbeats(t *testing.T) {
// Ensure a testnet makes blocks when there are txs
func TestReactorCreatesBlockWhenEmptyBlocksFalse(t *testing.T) {
N := 4
css := randConsensusNet(N, "consensus_reactor_test", newMockTickerFunc(true), newCounter,
func(c *cfg.Config) {
@@ -222,17 +222,6 @@ func TestReactorProposalHeartbeats(t *testing.T) {
})
reactors, eventChans, eventBuses := startConsensusNet(t, css, N)
defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)
heartbeatChans := make([]chan interface{}, N)
var err error
for i := 0; i < N; i++ {
heartbeatChans[i] = make(chan interface{}, 1)
err = eventBuses[i].Subscribe(context.Background(), testSubscriber, types.EventQueryProposalHeartbeat, heartbeatChans[i])
require.NoError(t, err)
}
// wait till everyone sends a proposal heartbeat
timeoutWaitGroup(t, N, func(j int) {
<-heartbeatChans[j]
}, css)
// send a tx
if err := css[3].mempool.CheckTx([]byte{1, 2, 3}, nil); err != nil {

View File

@@ -11,7 +11,6 @@ import (
"time"
abci "github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/version"
//auto "github.com/tendermint/tendermint/libs/autofile"
cmn "github.com/tendermint/tendermint/libs/common"
dbm "github.com/tendermint/tendermint/libs/db"
@@ -20,6 +19,7 @@ import (
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
"github.com/tendermint/tendermint/version"
)
var crc32c = crc32.MakeTable(crc32.Castagnoli)
@@ -247,6 +247,7 @@ func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
// Set AppVersion on the state.
h.initialState.Version.Consensus.App = version.Protocol(res.AppVersion)
sm.SaveState(h.stateDB, h.initialState)
// Replay blocks up to the latest in the blockstore.
_, err = h.ReplayBlocks(h.initialState, appHash, blockHeight, proxyApp)
@@ -276,7 +277,12 @@ func (h *Handshaker) ReplayBlocks(
// If appBlockHeight == 0 it means that we are at genesis and hence should send InitChain.
if appBlockHeight == 0 {
nextVals := types.TM2PB.ValidatorUpdates(state.NextValidators) // state.Validators would work too.
validators := make([]*types.Validator, len(h.genDoc.Validators))
for i, val := range h.genDoc.Validators {
validators[i] = types.NewValidator(val.PubKey, val.Power)
}
validatorSet := types.NewValidatorSet(validators)
nextVals := types.TM2PB.ValidatorUpdates(validatorSet)
csParams := types.TM2PB.ConsensusParams(h.genDoc.ConsensusParams)
req := abci.RequestInitChain{
Time: h.genDoc.GenesisTime,
@@ -290,19 +296,27 @@ func (h *Handshaker) ReplayBlocks(
return nil, err
}
// If the app returned validators or consensus params, update the state.
if len(res.Validators) > 0 {
vals, err := types.PB2TM.ValidatorUpdates(res.Validators)
if err != nil {
return nil, err
if stateBlockHeight == 0 { // we only update state when we are in the initial state
// If the app returned validators or consensus params, update the state.
if len(res.Validators) > 0 {
vals, err := types.PB2TM.ValidatorUpdates(res.Validators)
if err != nil {
return nil, err
}
state.Validators = types.NewValidatorSet(vals)
state.NextValidators = types.NewValidatorSet(vals)
} else {
// If validator set is not set in genesis and still empty after InitChain, exit.
if len(h.genDoc.Validators) == 0 {
return nil, fmt.Errorf("Validator set is nil in genesis and still empty after InitChain")
}
}
state.Validators = types.NewValidatorSet(vals)
state.NextValidators = types.NewValidatorSet(vals)
if res.ConsensusParams != nil {
state.ConsensusParams = types.PB2TM.ConsensusParams(res.ConsensusParams)
}
sm.SaveState(h.stateDB, state)
}
if res.ConsensusParams != nil {
state.ConsensusParams = types.PB2TM.ConsensusParams(res.ConsensusParams)
}
sm.SaveState(h.stateDB, state)
}
// First handle edge cases and constraints on the storeBlockHeight.

View File

@@ -17,7 +17,7 @@ import (
"github.com/tendermint/tendermint/abci/example/kvstore"
abci "github.com/tendermint/tendermint/abci/types"
crypto "github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto"
auto "github.com/tendermint/tendermint/libs/autofile"
dbm "github.com/tendermint/tendermint/libs/db"
"github.com/tendermint/tendermint/version"
@@ -315,28 +315,21 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
config := ResetConfig("proxy_test_")
walBody, err := WALWithNBlocks(NUM_BLOCKS)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)
walFile := tempWALWithData(walBody)
config.Consensus.SetWalFile(walFile)
privVal := privval.LoadFilePV(config.PrivValidatorFile())
wal, err := NewWAL(walFile)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)
wal.SetLogger(log.TestingLogger())
if err := wal.Start(); err != nil {
t.Fatal(err)
}
err = wal.Start()
require.NoError(t, err)
defer wal.Stop()
chain, commits, err := makeBlockchainFromWAL(wal)
if err != nil {
t.Fatalf(err.Error())
}
require.NoError(t, err)
stateDB, state, store := stateAndStore(config, privVal.GetPubKey(), kvstore.ProtocolVersion)
store.chain = chain

View File

@@ -22,13 +22,6 @@ import (
"github.com/tendermint/tendermint/types"
)
//-----------------------------------------------------------------------------
// Config
const (
proposalHeartbeatIntervalSeconds = 2
)
//-----------------------------------------------------------------------------
// Errors
@@ -118,7 +111,7 @@ type ConsensusState struct {
done chan struct{}
// synchronous pubsub between consensus state and reactor.
// state only emits EventNewRoundStep, EventVote and EventProposalHeartbeat
// state only emits EventNewRoundStep and EventVote
evsw tmevents.EventSwitch
// for reporting metrics
@@ -324,10 +317,11 @@ func (cs *ConsensusState) startRoutines(maxSteps int) {
go cs.receiveRoutine(maxSteps)
}
// OnStop implements cmn.Service. It stops all routines and waits for the WAL to finish.
// OnStop implements cmn.Service.
func (cs *ConsensusState) OnStop() {
cs.evsw.Stop()
cs.timeoutTicker.Stop()
// WAL is stopped in receiveRoutine.
}
// Wait waits for the main routine to return.
@@ -751,7 +745,7 @@ func (cs *ConsensusState) enterNewRound(height int64, round int) {
validators := cs.Validators
if cs.Round < round {
validators = validators.Copy()
validators.IncrementAccum(round - cs.Round)
validators.IncrementProposerPriority(round - cs.Round)
}
// Setup new round
@@ -772,7 +766,7 @@ func (cs *ConsensusState) enterNewRound(height int64, round int) {
cs.Votes.SetRound(round + 1) // also track next round (round+1) to allow round-skipping
cs.triggeredTimeoutPrecommit = false
cs.eventBus.PublishEventNewRound(cs.RoundStateEvent())
cs.eventBus.PublishEventNewRound(cs.NewRoundEvent())
cs.metrics.Rounds.Set(float64(round))
// Wait for txs to be available in the mempool
@@ -784,7 +778,6 @@ func (cs *ConsensusState) enterNewRound(height int64, round int) {
cs.scheduleTimeout(cs.config.CreateEmptyBlocksInterval, height, round,
cstypes.RoundStepNewRound)
}
go cs.proposalHeartbeat(height, round)
} else {
cs.enterPropose(height, round)
}
@@ -801,32 +794,6 @@ func (cs *ConsensusState) needProofBlock(height int64) bool {
return !bytes.Equal(cs.state.AppHash, lastBlockMeta.Header.AppHash)
}
func (cs *ConsensusState) proposalHeartbeat(height int64, round int) {
counter := 0
addr := cs.privValidator.GetAddress()
valIndex, _ := cs.Validators.GetByAddress(addr)
chainID := cs.state.ChainID
for {
rs := cs.GetRoundState()
// if we've already moved on, no need to send more heartbeats
if rs.Step > cstypes.RoundStepNewRound || rs.Round > round || rs.Height > height {
return
}
heartbeat := &types.Heartbeat{
Height: rs.Height,
Round: rs.Round,
Sequence: counter,
ValidatorAddress: addr,
ValidatorIndex: valIndex,
}
cs.privValidator.SignHeartbeat(chainID, heartbeat)
cs.eventBus.PublishEventProposalHeartbeat(types.EventDataProposalHeartbeat{heartbeat})
cs.evsw.FireEvent(types.EventProposalHeartbeat, heartbeat)
counter++
time.Sleep(proposalHeartbeatIntervalSeconds * time.Second)
}
}
// Enter (CreateEmptyBlocks): from enterNewRound(height,round)
// Enter (CreateEmptyBlocks, CreateEmptyBlocksInterval > 0 ): after enterNewRound(height,round), after timeout of CreateEmptyBlocksInterval
// Enter (!CreateEmptyBlocks) : after enterNewRound(height,round), once txs are in the mempool
@@ -1404,7 +1371,7 @@ func (cs *ConsensusState) defaultSetProposal(proposal *types.Proposal) error {
return nil
}
// Verify POLRound, which must be -1 or in range [0, proposal.Round).
// Verify POLRound, which must be -1 or in range [0, proposal.Round).
if proposal.POLRound < -1 ||
(proposal.POLRound >= 0 && proposal.POLRound >= proposal.Round) {
return ErrInvalidProposalPOLRound
@@ -1462,7 +1429,7 @@ func (cs *ConsensusState) addProposalBlockPart(msg *BlockPartMessage, peerID p2p
}
// NOTE: it's possible to receive complete proposal blocks for future rounds without having the proposal
cs.Logger.Info("Received complete proposal block", "height", cs.ProposalBlock.Height, "hash", cs.ProposalBlock.Hash())
cs.eventBus.PublishEventCompleteProposal(cs.RoundStateEvent())
cs.eventBus.PublishEventCompleteProposal(cs.CompleteProposalEvent())
// Update Valid* if we can.
prevotes := cs.Votes.Prevotes(cs.Round)

View File

@@ -197,9 +197,8 @@ func TestStateBadProposal(t *testing.T) {
stateHash[0] = byte((stateHash[0] + 1) % 255)
propBlock.AppHash = stateHash
propBlockParts := propBlock.MakePartSet(partSize)
proposal := types.NewProposal(
vs2.Height, round, -1,
types.BlockID{propBlock.Hash(), propBlockParts.Header()})
blockID := types.BlockID{propBlock.Hash(), propBlockParts.Header()}
proposal := types.NewProposal(vs2.Height, round, -1, blockID)
if err := vs2.SignProposal(config.ChainID(), proposal); err != nil {
t.Fatal("failed to sign bad proposal", err)
}
@@ -213,7 +212,7 @@ func TestStateBadProposal(t *testing.T) {
startTestRound(cs1, height, round)
// wait for proposal
ensureNewProposal(proposalCh, height, round)
ensureProposal(proposalCh, height, round, blockID)
// wait for prevote
ensurePrevote(voteCh, height, round)

View File

@@ -55,3 +55,31 @@ func (prs PeerRoundState) StringIndented(indent string) string {
indent, prs.CatchupCommit, prs.CatchupCommitRound,
indent)
}
//-----------------------------------------------------------
// These methods are for Protobuf Compatibility
// Size returns the size of the amino encoding, in bytes.
func (ps *PeerRoundState) Size() int {
bs, _ := ps.Marshal()
return len(bs)
}
// Marshal returns the amino encoding.
func (ps *PeerRoundState) Marshal() ([]byte, error) {
return cdc.MarshalBinaryBare(ps)
}
// MarshalTo calls Marshal and copies to the given buffer.
func (ps *PeerRoundState) MarshalTo(data []byte) (int, error) {
bs, err := ps.Marshal()
if err != nil {
return -1, err
}
return copy(data, bs), nil
}
// Unmarshal deserializes from amino encoded form.
func (ps *PeerRoundState) Unmarshal(bs []byte) error {
return cdc.UnmarshalBinaryBare(bs, ps)
}
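A hedged round-trip sketch for the helpers above; it assumes it lives alongside them in the consensus/types package (package name assumed), where the package-level `cdc` codec is in scope:

```
package cstypes // assumed package name for consensus/types

func peerRoundStateRoundTrip() (*PeerRoundState, error) {
	original := &PeerRoundState{Height: 10, Round: 2}
	bz, err := original.Marshal()
	if err != nil {
		return nil, err
	}
	decoded := new(PeerRoundState)
	if err := decoded.Unmarshal(bz); err != nil {
		return nil, err
	}
	return decoded, nil // decoded.Height == 10, decoded.Round == 2
}
```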

View File

@@ -112,18 +112,50 @@ func (rs *RoundState) RoundStateSimple() RoundStateSimple {
}
}
// NewRoundEvent returns the RoundState with proposer information as an event.
func (rs *RoundState) NewRoundEvent() types.EventDataNewRound {
addr := rs.Validators.GetProposer().Address
idx, _ := rs.Validators.GetByAddress(addr)
return types.EventDataNewRound{
Height: rs.Height,
Round: rs.Round,
Step: rs.Step.String(),
Proposer: types.ValidatorInfo{
Address: addr,
Index: idx,
},
}
}
// CompleteProposalEvent returns information about a proposed block as an event.
func (rs *RoundState) CompleteProposalEvent() types.EventDataCompleteProposal {
// We must construct BlockID from ProposalBlock and ProposalBlockParts
// cs.Proposal is not guaranteed to be set when this function is called
blockId := types.BlockID{
Hash: rs.ProposalBlock.Hash(),
PartsHeader: rs.ProposalBlockParts.Header(),
}
return types.EventDataCompleteProposal{
Height: rs.Height,
Round: rs.Round,
Step: rs.Step.String(),
BlockID: blockId,
}
}
// RoundStateEvent returns the H/R/S of the RoundState as an event.
func (rs *RoundState) RoundStateEvent() types.EventDataRoundState {
// copy the RoundState.
// TODO: if we want to avoid this, we may need synchronous events after all
rsCopy := *rs
edrs := types.EventDataRoundState{
return types.EventDataRoundState{
Height: rs.Height,
Round: rs.Round,
Step: rs.Step.String(),
RoundState: &rsCopy,
}
return edrs
}
// String returns a string
@@ -169,3 +201,31 @@ func (rs *RoundState) StringShort() string {
return fmt.Sprintf(`RoundState{H:%v R:%v S:%v ST:%v}`,
rs.Height, rs.Round, rs.Step, rs.StartTime)
}
//-----------------------------------------------------------
// These methods are for Protobuf Compatibility
// Size returns the size of the amino encoding, in bytes.
func (rs *RoundStateSimple) Size() int {
bs, _ := rs.Marshal()
return len(bs)
}
// Marshal returns the amino encoding.
func (rs *RoundStateSimple) Marshal() ([]byte, error) {
return cdc.MarshalBinaryBare(rs)
}
// MarshalTo calls Marshal and copies to the given buffer.
func (rs *RoundStateSimple) MarshalTo(data []byte) (int, error) {
bs, err := rs.Marshal()
if err != nil {
return -1, err
}
return copy(data, bs), nil
}
// Unmarshal deserializes from amino encoded form.
func (rs *RoundStateSimple) Unmarshal(bs []byte) error {
return cdc.UnmarshalBinaryBare(bs, rs)
}

View File

@@ -7,13 +7,13 @@ import (
"io/ioutil"
"os"
"path/filepath"
// "sync"
"testing"
"time"
"github.com/tendermint/tendermint/consensus/types"
"github.com/tendermint/tendermint/libs/autofile"
"github.com/tendermint/tendermint/libs/log"
tmtypes "github.com/tendermint/tendermint/types"
tmtime "github.com/tendermint/tendermint/types/time"
@@ -23,29 +23,27 @@ import (
func TestWALTruncate(t *testing.T) {
walDir, err := ioutil.TempDir("", "wal")
if err != nil {
panic(fmt.Errorf("failed to create temp WAL file: %v", err))
}
require.NoError(t, err)
defer os.RemoveAll(walDir)
walFile := filepath.Join(walDir, "wal")
// The magic number 4K truncates the content when RotateFile runs; defaultHeadSizeLimit (10M) is hard to simulate.
// The magic number 1 * time.Millisecond makes RotateFile check frequently; defaultGroupCheckDuration (5s) is hard to simulate.
wal, err := NewWAL(walFile, autofile.GroupHeadSizeLimit(4096), autofile.GroupCheckDuration(1*time.Millisecond))
if err != nil {
t.Fatal(err)
}
wal.Start()
wal, err := NewWAL(walFile,
autofile.GroupHeadSizeLimit(4096),
autofile.GroupCheckDuration(1*time.Millisecond),
)
require.NoError(t, err)
wal.SetLogger(log.TestingLogger())
err = wal.Start()
require.NoError(t, err)
defer wal.Stop()
// 60 blocks are roughly 70K, greater than the group's headBuf size (4096 * 10); when headBuf is full, the truncated content is flushed to the file.
// At that point RotateFile is called, so truncated content exists in each file.
err = WALGenerateNBlocks(wal.Group(), 60)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)
time.Sleep(1 * time.Millisecond) // wait groupCheckDuration to make sure RotateFile runs
@@ -99,9 +97,8 @@ func TestWALSearchForEndHeight(t *testing.T) {
walFile := tempWALWithData(walBody)
wal, err := NewWAL(walFile)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)
wal.SetLogger(log.TestingLogger())
h := int64(3)
gr, found, err := wal.SearchForEndHeight(h, &WALSearchOptions{})

View File

@@ -8,7 +8,17 @@ module.exports = {
lineNumbers: true
},
themeConfig: {
lastUpdated: "Last Updated",
repo: "tendermint/tendermint",
editLinks: true,
docsDir: "docs",
docsBranch: "develop",
editLinkText: 'Edit this page on Github',
lastUpdated: true,
algolia: {
apiKey: '59f0e2deb984aa9cdf2b3a5fd24ac501',
indexName: 'tendermint',
debug: false
},
nav: [{ text: "Back to Tendermint", link: "https://tendermint.com" }],
sidebar: [
{

View File

@@ -12,10 +12,10 @@ respectively.
## How It Works
There is a Jenkins job listening for changes in the `/docs` directory, on both
There is a CircleCI job listening for changes in the `/docs` directory, on both
the `master` and `develop` branches. Any updates to files in this directory
on those branches will automatically trigger a website deployment. Under the hood,
a private website repository has make targets consumed by a standard Jenkins task.
the private website repository has a `make build-docs` target consumed by a CircleCI job in that repo.
## README
@@ -93,6 +93,10 @@ python -m SimpleHTTPServer 8080
then navigate to localhost:8080 in your browser.
## Search
We are using [Algolia](https://www.algolia.com) to power full-text search. This uses a public API search-only key in the `config.js` as well as a [tendermint.json](https://github.com/algolia/docsearch-configs/blob/master/configs/tendermint.json) configuration file that we can update with PRs.
## Consistency
Because the build processes are identical (as is the information contained herein), this file should be kept in sync as

View File

@@ -21,7 +21,7 @@ For more details on using Tendermint, see the respective documentation for
## Contribute
To contribute to the documentation, see [this file](./DOCS_README.md) for details of the build process and
To contribute to the documentation, see [this file](https://github.com/tendermint/tendermint/blob/master/docs/DOCS_README.md) for details of the build process and
considerations when making changes.
## Version

View File

@@ -11,13 +11,10 @@ Make sure you [have Go installed](https://golang.org/doc/install).
Next, install the `abci-cli` tool and example applications:
```
go get github.com/tendermint/tendermint
```
to get vendored dependencies:
```
cd $GOPATH/src/github.com/tendermint/tendermint
mkdir -p $GOPATH/src/github.com/tendermint
cd $GOPATH/src/github.com/tendermint
git clone https://github.com/tendermint/tendermint.git
cd tendermint
make get_tools
make get_vendor_deps
make install_abci

View File

@@ -5,7 +5,7 @@ Tendermint blockchain application.
The following diagram provides a superb example:
<https://drive.google.com/open?id=1yR2XpRi9YCY9H9uMfcw8-RMJpvDyvjz9>
![](../imgs/cosmos-tendermint-stack-4k.jpg)
The end-user application here is the Cosmos Voyager, at the bottom left.
Voyager communicates with a REST API exposed by a local Light-Client

View File

@@ -122,7 +122,7 @@
],
"abciServers": [
{
"name": "abci",
"name": "go-abci",
"url": "https://github.com/tendermint/tendermint/tree/master/abci",
"language": "Go",
"author": "Tendermint"
@@ -133,6 +133,12 @@
"language": "Javascript",
"author": "Tendermint"
},
{
"name": "rust-tsp",
"url": "https://github.com/tendermint/rust-tsp",
"language": "Rust",
"author": "Tendermint"
},
{
"name": "cpp-tmsp",
"url": "https://github.com/mdyring/cpp-tmsp",
@@ -164,7 +170,7 @@
"author": "Dave Bryson"
},
{
"name": "tm-abci",
"name": "tm-abci (fork of py-abci with async IO)",
"url": "https://github.com/SoftblocksCo/tm-abci",
"language": "Python",
"author": "Softblocks"
@@ -175,5 +181,13 @@
"language": "Javascript",
"author": "Dennis McKinnon"
}
],
"aminoLibraries": [
{
"name": "JS-Amino",
"url": "https://github.com/TanNgocDo/Js-Amino",
"language": "Javascript",
"author": "TanNgocDo"
}
]
}

View File

@@ -1,11 +1,9 @@
# Ecosystem
The growing list of applications built using various pieces of the
Tendermint stack can be found at:
Tendermint stack can be found at the [ecosystem page](https://tendermint.com/ecosystem).
- https://tendermint.com/ecosystem
We thank the community for their contributions thus far and welcome the
We thank the community for their contributions and welcome the
addition of new projects. A pull request can be submitted to [this
file](https://github.com/tendermint/tendermint/blob/master/docs/app-dev/ecosystem.json)
to include your project.

View File

@@ -252,14 +252,12 @@ we'll run a Javascript version of the `counter`. To run it, you'll need
to [install node](https://nodejs.org/en/download/).
You'll also need to fetch the relevant repository, from
[here](https://github.com/tendermint/js-abci) then install it. As go
devs, we keep all our code under the `$GOPATH`, so run:
[here](https://github.com/tendermint/js-abci), then install it:
```
go get github.com/tendermint/js-abci &> /dev/null
cd $GOPATH/src/github.com/tendermint/js-abci/example
npm install
cd ..
git clone https://github.com/tendermint/js-abci.git
cd js-abci
npm install abci
```
Kill the previous `counter` and `tendermint` processes. Now run the app:
@@ -276,13 +274,16 @@ tendermint node
```
Once again, you should see blocks streaming by - but now, our
application is written in javascript! Try sending some transactions, and
application is written in Javascript! Try sending some transactions, and
like before - the results should be the same:
```
curl localhost:26657/broadcast_tx_commit?tx=0x00 # ok
curl localhost:26657/broadcast_tx_commit?tx=0x05 # invalid nonce
curl localhost:26657/broadcast_tx_commit?tx=0x01 # ok
# ok
curl localhost:26657/broadcast_tx_commit?tx=0x00
# invalid nonce
curl localhost:26657/broadcast_tx_commit?tx=0x05
# ok
curl localhost:26657/broadcast_tx_commit?tx=0x01
```
Neat, eh?

View File

@@ -54,7 +54,7 @@ Response:
"value": "ww0z4WaZ0Xg+YI10w43wTWbBmM3dpVza4mmSQYsd0ck="
},
"voting_power": "10",
"accum": "0"
"proposer_priority": "0"
}
]
}

View File

@@ -5,14 +5,17 @@ implementations:
- FilePV uses an unencrypted private key in a "priv_validator.json" file - no
configuration required (just `tendermint init`).
- SocketPV uses a socket to send signing requests to another process - user is
responsible for starting that process themselves.
- TCPVal and IPCVal use TCP and Unix sockets respectively to send signing requests
to another process - the user is responsible for starting that process themselves.
The SocketPV address can be provided via flags at the command line - doing so
will cause Tendermint to ignore any "priv_validator.json" file and to listen on
the given address for incoming connections from an external priv_validator
process. It will halt any operation until at least one external process
successfully connected.
Both TCPVal and IPCVal addresses can be provided via flags at the command line
or in the configuration file; TCPVal addresses must be of the form
`tcp://<ip_address>:<port>` and IPCVal addresses `unix:///path/to/file.sock` -
doing so will cause Tendermint to ignore any private validator files.
TCPVal will listen on the given address for incoming connections from an external
private validator process. It will halt any operation until at least one external
process has successfully connected.
The external priv_validator process will dial the address to connect to
Tendermint, and then Tendermint will send requests on the ensuing connection to
@@ -21,6 +24,9 @@ but the Tendermint process makes all requests. In a later stage we're going to
support multiple validators for fault tolerance. To prevent double signing they
need to be synced, which is deferred to an external solution (see #1185).
Conversely, IPCVal will make an outbound connection to an existing socket opened
by the external validator process.
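As an illustration only (not from this changeset), the listen address can also be set programmatically; the Go field name below is an assumption about the struct backing `priv_validator_laddr`:

```
package main

import (
	cfg "github.com/tendermint/tendermint/config"
)

func main() {
	config := cfg.DefaultConfig()
	// Field name assumed; it maps to priv_validator_laddr in config.toml.
	config.PrivValidatorListenAddr = "tcp://0.0.0.0:26659" // TCPVal: Tendermint listens here
	// config.PrivValidatorListenAddr = "unix:///tmp/pv.sock" // IPCVal: Tendermint dials this socket
	_ = config
}
```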
In addition, Tendermint will provide implementations that can be run in that
external process. These include:

View File

@@ -0,0 +1,40 @@
# ADR 035: Documentation
Author: @zramsay (Zach Ramsay)
## Changelog
### November 2nd 2018
- initial write-up
## Context
The Tendermint documentation underwent several changes before settling on the current model. Originally, the documentation was hosted on the website and had to be updated asynchronously from the code. Along with the other repositories requiring documentation, the whole stack then moved to Read The Docs to automatically generate, publish, and host the documentation. This, however, was insufficient: the RTD site carried advertisements, wasn't easily accessible to devs, didn't collect metrics, and was yet another set of external links.
## Decision
For two reasons, the decision was made to use VuePress:
1) ability to get metrics (implemented on both Tendermint and SDK)
2) host the documentation on the website as a `/docs` endpoint.
This is done while maintaining synchrony between the docs and code, i.e., the website is built whenever the docs are updated.
## Status
The two points above have been implemented; the `config.js` has a Google Analytics identifier and the documentation workflow has been up and running largely without problems for several months. Details about the documentation build & workflow can be found [here](../DOCS_README.md)
## Consequences
Because of the organizational separation between Tendermint & Cosmos, there is a challenge of "what goes where" for certain aspects of documentation.
### Positive
This architecture is largely positive relative to prior docs arrangements.
### Negative
A significant portion of the docs automation / build process is in private repos with limited access/visibility to devs. However, these tasks are handled by the SRE team.
### Neutral

View File

@@ -1,19 +1,36 @@
# ADR 000: Template for an ADR
Author:
# ADR {ADR-NUMBER}: {TITLE}
## Changelog
* {date}: {changelog}
## Context
> This section contains all the context one needs to understand the current state, and why there is a problem. It should be as succinct as possible and introduce the high level idea behind the solution.
## Decision
> This section explains all of the details of the proposed solution, including implementation details.
It should also describe affects / corollary items that may need to be changed as a part of this.
If the proposed change will be large, please also indicate a way to do the change to maximize ease of review.
(e.g. the optimal split of things to do between separate PR's)
## Status
> A decision may be "proposed" if it hasn't been agreed upon yet, or "accepted" once it is agreed upon. If a later ADR changes or reverses a decision, it may be marked as "deprecated" or "superseded" with a reference to its replacement.
{Deprecated|Proposed|Accepted}
## Consequences
> This section describes the consequences, after applying the decision. All consequences should be summarized here, not just the "positive" ones.
### Positive
### Negative
### Neutral
## References
> Are there any relevant PR comments, issues that led up to this, or articles referenced for why we made the given design choice? If so, link them here!
* {reference link}

Binary file not shown (new image added, 625 KiB).

View File

@@ -79,11 +79,9 @@ make install
Install [LevelDB](https://github.com/google/leveldb) (minimum version is 1.7).
Build Tendermint with C libraries: `make build_c`.
### Ubuntu
Install LevelDB with snappy:
Install LevelDB with snappy (optional):
```
sudo apt-get update
@@ -95,9 +93,9 @@ wget https://github.com/google/leveldb/archive/v1.20.tar.gz && \
tar -zxvf v1.20.tar.gz && \
cd leveldb-1.20/ && \
make && \
cp -r out-static/lib* out-shared/lib* /usr/local/lib/ && \
sudo cp -r out-static/lib* out-shared/lib* /usr/local/lib/ && \
cd include/ && \
cp -r leveldb /usr/local/include/ && \
sudo cp -r leveldb /usr/local/include/ && \
sudo ldconfig && \
rm -f v1.20.tar.gz
```
@@ -109,8 +107,16 @@ Set database backend to cleveldb:
db_backend = "cleveldb"
```
To build Tendermint, run
To install Tendermint, run
```
CGO_LDFLAGS="-lsnappy" go build -ldflags "-X github.com/tendermint/tendermint/version.GitCommit=`git rev-parse --short=8 HEAD`" -tags "tendermint gcc" -o build/tendermint ./cmd/tendermint/
CGO_LDFLAGS="-lsnappy" make install_c
```
or run
```
CGO_LDFLAGS="-lsnappy" make build_c
```
to put the binary in `./build`.

View File

@@ -40,7 +40,11 @@ These files are found in `$HOME/.tendermint`:
```
$ ls $HOME/.tendermint
config.toml data genesis.json priv_validator.json
config data
$ ls $HOME/.tendermint/config/
config.toml genesis.json node_key.json priv_validator.json
```
For a single, local node, no further configuration is required.
@@ -110,7 +114,18 @@ source ~/.profile
This will install `go` and other dependencies, get the Tendermint source code, then compile the `tendermint` binary.
Next, use the `tendermint testnet` command to create four directories of config files (found in `./mytestnet`) and copy each directory to the relevant machine in the cloud, so that each machine has `$HOME/mytestnet/node[0-3]` directory. Then from each machine, run:
Next, use the `tendermint testnet` command to create four directories of config files (found in `./mytestnet`) and copy each directory to the relevant machine in the cloud, so that each machine has a `$HOME/mytestnet/node[0-3]` directory.
Before you can start the network, you'll need peer identifiers (IPs are not enough and can change). We'll refer to them as ID1, ID2, ID3, ID4.
```
tendermint show_node_id --home ./mytestnet/node0
tendermint show_node_id --home ./mytestnet/node1
tendermint show_node_id --home ./mytestnet/node2
tendermint show_node_id --home ./mytestnet/node3
```
Finally, from each machine, run:
```
tendermint node --home ./mytestnet/node0 --proxy_app=kvstore --p2p.persistent_peers="ID1@IP1:26656,ID2@IP2:26656,ID3@IP3:26656,ID4@IP4:26656"
@@ -121,6 +136,6 @@ tendermint node --home ./mytestnet/node3 --proxy_app=kvstore --p2p.persistent_pe
Note that after the third node is started, blocks will start to stream in
because >2/3 of validators (defined in the `genesis.json`) have come online.
Seeds can also be specified in the `config.toml`. See [here](../tendermint-core/configuration.md) for more information about configuration options.
Persistent peers can also be specified in the `config.toml`. See [here](../tendermint-core/configuration.md) for more information about configuration options.
Transactions can then be sent as covered in the single, local node example above.

View File

@@ -70,10 +70,6 @@ Tendermint is in essence similar software, but with two key differences:
the application logic that's right for them, from key-value store to
cryptocurrency to e-voting platform and beyond.
The layout of this Tendermint website content is also ripped directly
and without shame from [consul.io](https://www.consul.io/) and the other
[Hashicorp sites](https://www.hashicorp.com/#tools).
### Bitcoin, Ethereum, etc.
Tendermint emerged in the tradition of cryptocurrencies like Bitcoin,

View File

@@ -1,20 +1,17 @@
# Docker Compose
With Docker Compose, we can spin up local testnets in a single command:
```
make localnet-start
```
With Docker Compose, you can spin up local testnets with a single command.
## Requirements
- [Install tendermint](/docs/install.md)
- [Install docker](https://docs.docker.com/engine/installation/)
- [Install docker-compose](https://docs.docker.com/compose/install/)
1. [Install tendermint](/docs/install.md)
2. [Install docker](https://docs.docker.com/engine/installation/)
3. [Install docker-compose](https://docs.docker.com/compose/install/)
## Build
Build the `tendermint` binary and the `tendermint/localnode` docker image.
Build the `tendermint` binary and, optionally, the `tendermint/localnode`
docker image.
Note the binary will be mounted into the container so it can be updated without
rebuilding the image.
@@ -25,11 +22,10 @@ cd $GOPATH/src/github.com/tendermint/tendermint
# Build the linux binary in ./build
make build-linux
# Build tendermint/localnode image
# (optionally) Build tendermint/localnode image
make build-docker-localnode
```
## Run a testnet
To start a 4 node testnet run:
@@ -38,9 +34,13 @@ To start a 4 node testnet run:
make localnet-start
```
The nodes bind their RPC servers to ports 26657, 26660, 26662, and 26664 on the host.
The nodes bind their RPC servers to ports 26657, 26660, 26662, and 26664 on the
host.
This file creates a 4-node network using the localnode image.
The nodes of the network expose their P2P and RPC endpoints to the host machine on ports 26656-26657, 26659-26660, 26661-26662, and 26663-26664 respectively.
The nodes of the network expose their P2P and RPC endpoints to the host machine
on ports 26656-26657, 26659-26660, 26661-26662, and 26663-26664 respectively.
To update the binary, just rebuild it and restart the nodes:
@@ -52,34 +52,40 @@ make localnet-start
## Configuration
The `make localnet-start` creates files for a 4-node testnet in `./build` by calling the `tendermint testnet` command.
The `make localnet-start` creates files for a 4-node testnet in `./build` by
calling the `tendermint testnet` command.
The `./build` directory is mounted to the `/tendermint` mount point to attach the binary and config files to the container.
The `./build` directory is mounted to the `/tendermint` mount point to attach
the binary and config files to the container.
For instance, to create a single node testnet:
To change the number of validators / non-validators change the `localnet-start` Makefile target:
```
localnet-start: localnet-stop
@if ! [ -f build/node0/config/genesis.json ]; then docker run --rm -v $(CURDIR)/build:/tendermint:Z tendermint/localnode testnet --v 5 --n 3 --o . --populate-persistent-peers --starting-ip-address 192.167.10.2 ; fi
docker-compose up
```
The command will now generate config files for a network of 5 validators and 3
non-validators.
Before running it, don't forget to clean up the old files:
```
cd $GOPATH/src/github.com/tendermint/tendermint
# Clear the build folder
rm -rf ./build
# Build binary
make build-linux
# Create configuration
docker run -e LOG="stdout" -v `pwd`/build:/tendermint tendermint/localnode testnet --o . --v 1
#Run the node
docker run -v `pwd`/build:/tendermint tendermint/localnode
rm -rf ./build/node*
```
## Logging
Log is saved under the attached volume, in the `tendermint.log` file. If the `LOG` environment variable is set to `stdout` at start, the log is not saved, but printed on the screen.
Log is saved under the attached volume, in the `tendermint.log` file. If the
`LOG` environment variable is set to `stdout` at start, the log is not saved,
but printed on the screen.
## Special binaries
If you have multiple binaries with different names, you can specify which one to run with the BINARY environment variable. The path of the binary is relative to the attached volume.
If you have multiple binaries with different names, you can specify which one
to run with the `BINARY` environment variable. The path of the binary is relative
to the attached volume.

View File

@@ -14,31 +14,31 @@ please submit them to our [bug bounty](https://tendermint.com/security)!
### Data Structures
- [Encoding and Digests](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/encoding.md)
- [Blockchain](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/blockchain.md)
- [State](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/state.md)
- [Encoding and Digests](./blockchain/encoding.md)
- [Blockchain](./blockchain/blockchain.md)
- [State](./blockchain/state.md)
### Consensus Protocol
- [Consensus Algorithm](/docs/spec/consensus/consensus.md)
- [Creating a proposal](/docs/spec/consensus/creating-proposal.md)
- [Time](/docs/spec/consensus/bft-time.md)
- [Light-Client](/docs/spec/consensus/light-client.md)
- [Consensus Algorithm](./consensus/consensus.md)
- [Creating a proposal](./consensus/creating-proposal.md)
- [Time](./consensus/bft-time.md)
- [Light-Client](./consensus/light-client.md)
### P2P and Network Protocols
- [The Base P2P Layer](https://github.com/tendermint/tendermint/tree/master/docs/spec/p2p): multiplex the protocols ("reactors") on authenticated and encrypted TCP connections
- [Peer Exchange (PEX)](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/pex): gossip known peer addresses so peers can find each other
- [Block Sync](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/block_sync): gossip blocks so peers can catch up quickly
- [Consensus](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/consensus): gossip votes and block parts so new blocks can be committed
- [Mempool](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/mempool): gossip transactions so they get included in blocks
- Evidence: Forthcoming, see [this issue](https://github.com/tendermint/tendermint/issues/2329).
- [The Base P2P Layer](./p2p/): multiplex the protocols ("reactors") on authenticated and encrypted TCP connections
- [Peer Exchange (PEX)](./reactors/pex/): gossip known peer addresses so peers can find each other
- [Block Sync](./reactors/block_sync/): gossip blocks so peers can catch up quickly
- [Consensus](./reactors/consensus/): gossip votes and block parts so new blocks can be committed
- [Mempool](./reactors/mempool/): gossip transactions so they get included in blocks
- [Evidence](./reactors/evidence/): sending invalid evidence will stop the peer
### Software
- [ABCI](/docs/spec/software/abci.md): Details about interactions between the
- [ABCI](./software/abci.md): Details about interactions between the
application and consensus engine over ABCI
- [Write-Ahead Log](/docs/spec/software/wal.md): Details about how the consensus
- [Write-Ahead Log](./software/wal.md): Details about how the consensus
engine preserves data and recovers from crash failures
## Overview

View File

@@ -45,7 +45,9 @@ include a `Tags` field in their `Response*`. Each tag is key-value pair denoting
something about what happened during the method's execution.
Tags can be used to index transactions and blocks according to what happened
during their execution.
during their execution. Note that the sets of tags returned for a block from
`BeginBlock` and `EndBlock` are merged. In case both methods return the same
tag, only the value defined in `EndBlock` is used.
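To make the merge rule concrete, here is a minimal sketch, assuming tags are modeled as plain string key-value pairs rather than the actual `KVPair` slices used by the ABCI types:

```go
// Illustrative only: merge the tags emitted by BeginBlock and EndBlock,
// letting the EndBlock value win when both define the same key.
func mergeBlockTags(beginBlockTags, endBlockTags map[string]string) map[string]string {
	merged := make(map[string]string, len(beginBlockTags)+len(endBlockTags))
	for k, v := range beginBlockTags {
		merged[k] = v
	}
	for k, v := range endBlockTags {
		merged[k] = v // overwrites any BeginBlock value for the same key
	}
	return merged
}
```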
Keys and values in tags must be UTF-8 encoded strings (e.g.
"account.owner": "Bob", "balance": "100.0",

View File

@@ -59,22 +59,14 @@ You can simply use below table and concatenate Prefix || Length (of raw bytes) |
| PubKeySecp256k1 | tendermint/PubKeySecp256k1 | 0xEB5AE987 | 0x21 | |
| PrivKeyEd25519 | tendermint/PrivKeyEd25519 | 0xA3288910 | 0x40 | |
| PrivKeySecp256k1 | tendermint/PrivKeySecp256k1 | 0xE1B0F79B | 0x20 | |
| SignatureEd25519 | tendermint/SignatureEd25519 | 0x2031EA53 | 0x40 | |
| SignatureSecp256k1 | tendermint/SignatureSecp256k1 | 0x7FC4A495 | variable |
| PubKeyMultisigThreshold | tendermint/PubKeyMultisigThreshold | 0x22C1F7E2 | variable | |
|
### Example
### Examples
1. For example, the 33-byte (or 0x21-byte in hex) Secp256k1 pubkey
For example, the 33-byte (or 0x21-byte in hex) Secp256k1 pubkey
`020BD40F225A57ED383B440CF073BC5539D0341F5767D2BF2D78406D00475A2EE9`
would be encoded as
`EB5AE98221020BD40F225A57ED383B440CF073BC5539D0341F5767D2BF2D78406D00475A2EE9`
2. For example, the variable size Secp256k1 signature (in this particular example 70 or 0x46 bytes)
`304402201CD4B8C764D2FD8AF23ECFE6666CA8A53886D47754D951295D2D311E1FEA33BF02201E0F906BB1CF2C30EAACFFB032A7129358AFF96B9F79B06ACFFB18AC90C2ADD7`
would be encoded as
`16E1FEEA46304402201CD4B8C764D2FD8AF23ECFE6666CA8A53886D47754D951295D2D311E1FEA33BF02201E0F906BB1CF2C30EAACFFB032A7129358AFF96B9F79B06ACFFB18AC90C2ADD7`
`EB5AE98721020BD40F225A57ED383B440CF073BC5539D0341F5767D2BF2D78406D00475A2EE9`
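As a rough sketch of the concatenation described above (Prefix || Length of raw bytes || raw bytes), writing the prefix and length as plain bytes rather than going through the amino library itself:

```go
package main

import (
	"encoding/hex"
	"fmt"
)

func main() {
	// tendermint/PubKeySecp256k1 prefix from the table above.
	prefix, _ := hex.DecodeString("EB5AE987")
	raw, _ := hex.DecodeString("020BD40F225A57ED383B440CF073BC5539D0341F5767D2BF2D78406D00475A2EE9")

	// Prefix || Length (0x21 = 33) || raw bytes.
	encoded := append(prefix, byte(len(raw)))
	encoded = append(encoded, raw...)

	fmt.Printf("%X\n", encoded)
	// EB5AE98721020BD40F225A57ED383B440CF073BC5539D0341F5767D2BF2D78406D00475A2EE9
}
```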
### Addresses

View File

@@ -79,31 +79,29 @@ func TotalVotingPower(vals []Validators) int64{
ConsensusParams define various limits for blockchain data structures.
Like validator sets, they are set during genesis and can be updated by the application through ABCI.
```
```go
type ConsensusParams struct {
BlockSize
TxSize
BlockGossip
EvidenceParams
Evidence
Validator
}
type BlockSize struct {
MaxBytes int
MaxBytes int64
MaxGas int64
}
type TxSize struct {
MaxBytes int
MaxGas int64
}
type BlockGossip struct {
BlockPartSizeBytes int
}
type EvidenceParams struct {
type Evidence struct {
MaxAge int64
}
type Validator struct {
PubKeyTypes []string
}
type ValidatorParams struct {
PubKeyTypes []string
}
```
#### BlockSize
@@ -115,20 +113,15 @@ otherwise.
Blocks should additionally be limited by the amount of "gas" consumed by the
transactions in the block, though this is not yet implemented.
#### TxSize
These parameters are not yet enforced and may disappear. See [issue
#2347](https://github.com/tendermint/tendermint/issues/2347).
#### BlockGossip
When gossipping blocks in the consensus, they are first split into parts. The
size of each part is `ConsensusParams.BlockGossip.BlockPartSizeBytes`.
#### EvidenceParams
#### Evidence
For evidence in a block to be valid, it must satisfy:
```
block.Header.Height - evidence.Height < ConsensusParams.EvidenceParams.MaxAge
block.Header.Height - evidence.Height < ConsensusParams.Evidence.MaxAge
```
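A minimal sketch of that rule, with illustrative parameter names rather than the actual Tendermint types:

```go
// evidenceWithinMaxAge reports whether a piece of evidence is still young
// enough to be included in a block at blockHeight.
func evidenceWithinMaxAge(blockHeight, evidenceHeight, maxAge int64) bool {
	return blockHeight-evidenceHeight < maxAge
}
```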
#### Validator
Validators from genesis file and `ResponseEndBlock` must have pubkeys of type ∈
`ConsensusParams.Validator.PubKeyTypes`.

View File

@@ -65,24 +65,24 @@ type Requester {
mtx Mutex
block Block
height int64
peerID p2p.ID
redoChannel chan struct{}
peerID p2p.ID
redoChannel chan p2p.ID // a redo request may be sent multiple times; peerID is used to identify duplicates
}
```
Pool is core data structure that stores last executed block (`height`), assignment of requests to peers (`requesters`), current height for each peer and number of pending requests for each peer (`peers`), maximum peer height, etc.
Pool is a core data structure that stores last executed block (`height`), assignment of requests to peers (`requesters`), current height for each peer and number of pending requests for each peer (`peers`), maximum peer height, etc.
```go
type Pool {
mtx Mutex
requesters map[int64]*Requester
height int64
peers map[p2p.ID]*Peer
maxPeerHeight int64
numPending int32
store BlockStore
requestsChannel chan<- BlockRequest
errorsChannel chan<- peerError
mtx Mutex
requesters map[int64]*Requester
height int64
peers map[p2p.ID]*Peer
maxPeerHeight int64
numPending int32
store BlockStore
requestsChannel chan<- BlockRequest
errorsChannel chan<- peerError
}
```
@@ -90,11 +90,11 @@ Peer data structure stores for each peer current `height` and number of pending
```go
type Peer struct {
id p2p.ID
height int64
numPending int32
timeout *time.Timer
didTimeout bool
id p2p.ID
height int64
numPending int32
timeout *time.Timer
didTimeout bool
}
```
@@ -169,11 +169,11 @@ Requester task is responsible for fetching a single block at position `height`.
```go
fetchBlock(height, pool):
while true do
while true do {
peerID = nil
block = nil
peer = pickAvailablePeer(height)
peerId = peer.id
peerID = peer.id
enqueue BlockRequest(height, peerID) to pool.requestsChannel
redo = false
@@ -181,12 +181,15 @@ fetchBlock(height, pool):
select {
upon receiving Quit message do
return
upon receiving message on redoChannel do
mtx.Lock()
pool.numPending++
redo = true
mtx.UnLock()
upon receiving redo message with id on redoChannel do
if peerID == id {
mtx.Lock()
pool.numPending++
redo = true
mtx.UnLock()
}
}
}
pickAvailablePeer(height):
selectedPeer = nil
@@ -244,7 +247,7 @@ createRequesters(pool):
main(pool):
create trySyncTicker with interval trySyncIntervalMS
create statusUpdateTicker with interval statusUpdateIntervalSeconds
create switchToConsensusTicker with interbal switchToConsensusIntervalSeconds
create switchToConsensusTicker with interval switchToConsensusIntervalSeconds
while true do
select {

View File

@@ -338,12 +338,11 @@ BlockID has seen +2/3 votes. This routine is based on the local RoundState (`rs`
## Broadcast routine
The Broadcast routine subscribes to an internal event bus to receive new round steps, votes messages and proposal
heartbeat messages, and broadcasts messages to peers upon receiving those events.
The Broadcast routine subscribes to an internal event bus to receive new round steps and vote messages, and broadcasts messages to peers upon receiving those
events.
It broadcasts `NewRoundStepMessage` or `CommitStepMessage` upon new round state event. Note that
broadcasting these messages does not depend on the PeerRoundState; it is sent on the StateChannel.
Upon receiving VoteMessage it broadcasts `HasVoteMessage` message to its peers on the StateChannel.
`ProposalHeartbeatMessage` is sent the same way on the StateChannel.
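The following is a rough sketch of that loop under stated assumptions; the event, message, and peer types here are hypothetical stand-ins for illustration, not the actual reactor code:

```go
// Hypothetical stand-ins for the real consensus reactor types.
type newRoundStepEvent struct {
	Height int64
	Round  int
}

type voteEvent struct {
	Height int64
	Round  int
	Index  int
}

type newRoundStepMessage struct {
	Height int64
	Round  int
}

type hasVoteMessage struct {
	Height int64
	Round  int
	Index  int
}

type peer interface {
	Send(chID byte, msg interface{}) bool
}

const stateChannel = byte(0x20)

// broadcastRoutine mirrors the description above: a new round step produces a
// NewRoundStepMessage and a received vote produces a HasVoteMessage, both sent
// on the StateChannel regardless of the PeerRoundState.
func broadcastRoutine(events <-chan interface{}, peers []peer) {
	for ev := range events {
		switch ev := ev.(type) {
		case newRoundStepEvent:
			for _, p := range peers {
				p.Send(stateChannel, newRoundStepMessage{Height: ev.Height, Round: ev.Round})
			}
		case voteEvent:
			for _, p := range peers {
				p.Send(stateChannel, hasVoteMessage{Height: ev.Height, Round: ev.Round, Index: ev.Index})
			}
		}
	}
}
```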
## Channels

View File

@@ -89,33 +89,6 @@ type BlockPartMessage struct {
}
```
## ProposalHeartbeatMessage
ProposalHeartbeatMessage is sent to signal that a node is alive and waiting for transactions
to be able to create a next block proposal.
```go
type ProposalHeartbeatMessage struct {
Heartbeat Heartbeat
}
```
### Heartbeat
Heartbeat contains validator information (address and index),
height, round and sequence number. It is signed by the private key of the validator.
```go
type Heartbeat struct {
ValidatorAddress []byte
ValidatorIndex int
Height int64
Round int
Sequence int
Signature Signature
}
```
## NewRoundStepMessage
NewRoundStepMessage is sent for every step transition during the core consensus algorithm execution.

View File

@@ -36,19 +36,26 @@ db_backend = "leveldb"
# Database directory
db_dir = "data"
# Output level for logging
log_level = "state:info,*:error"
# Output level for logging, including package level options
log_level = "main:info,state:info,*:error"
# Output format: 'plain' (colored text) or 'json'
log_format = "plain"
##### additional base config options #####
# The ID of the chain to join (should be signed with every transaction and vote)
chain_id = ""
# Path to the JSON file containing the initial validator set and other meta data
genesis_file = "genesis.json"
genesis_file = "config/genesis.json"
# Path to the JSON file containing the private key to use as a validator in the consensus protocol
priv_validator_file = "priv_validator.json"
priv_validator_file = "config/priv_validator.json"
# TCP or UNIX socket address for Tendermint to listen on for
# connections from an external PrivValidator process
priv_validator_laddr = ""
# Path to the JSON file containing the private key to use for node authentication in the p2p protocol
node_key_file = "config/node_key.json"
# Mechanism to connect to the ABCI application: socket | grpc
abci = "socket"
@@ -71,13 +78,13 @@ laddr = "tcp://0.0.0.0:26657"
# A list of origins a cross-domain request can be executed from
# Default value '[]' disables cors support
# Use '["*"]' to allow any origin
cors_allowed_origins = "[]"
cors_allowed_origins = []
# A list of methods the client is allowed to use with cross-domain requests
cors_allowed_methods = "[HEAD GET POST]"
cors_allowed_methods = ["HEAD", "GET", "POST"]
# A list of non simple headers the client is allowed to use with cross-domain requests
cors_allowed_headers = "[Origin Accept Content-Type X-Requested-With X-Server-Time]"
cors_allowed_headers = ["Origin", "Accept", "Content-Type", "X-Requested-With", "X-Server-Time"]
# TCP or UNIX socket address for the gRPC server to listen on
# NOTE: This server only supports /broadcast_tx_commit
@@ -85,7 +92,7 @@ grpc_laddr = ""
# Maximum number of simultaneous connections.
# Does not include RPC (HTTP&WebSocket) connections. See max_open_connections
# If you want to accept more significant number than the default, make sure
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
# Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
@@ -97,7 +104,7 @@ unsafe = false
# Maximum number of simultaneous connections (including WebSocket).
# Does not include gRPC connections. See grpc_max_open_connections
# If you want to accept more significant number than the default, make sure
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
# Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
@@ -110,6 +117,12 @@ max_open_connections = 900
# Address to listen for incoming connections
laddr = "tcp://0.0.0.0:26656"
# Address to advertise to peers for them to dial
# If empty, will use the same port as the laddr,
# and will introspect on the listener or use UPnP
# to figure out the address.
external_address = ""
# Comma separated list of seed nodes to connect to
seeds = ""
@@ -120,7 +133,7 @@ persistent_peers = ""
upnp = false
# Path to address book
addr_book_file = "addrbook.json"
addr_book_file = "config/addrbook.json"
# Set true for strict address routability rules
# Set false for private or local networks
@@ -168,26 +181,26 @@ dial_timeout = "3s"
recheck = true
broadcast = true
wal_dir = "data/mempool.wal"
wal_dir = ""
# size of the mempool
size = 100000
size = 5000
# size of the cache (used to filter transactions we saw earlier)
cache_size = 100000
cache_size = 10000
##### consensus configuration options #####
[consensus]
wal_file = "data/cs.wal/wal"
timeout_propose = "3000ms"
timeout_propose = "3s"
timeout_propose_delta = "500ms"
timeout_prevote = "1000ms"
timeout_prevote = "1s"
timeout_prevote_delta = "500ms"
timeout_precommit = "1000ms"
timeout_precommit = "1s"
timeout_precommit_delta = "500ms"
timeout_commit = "1000ms"
timeout_commit = "1s"
# Make progress as soon as we have all the precommits (as if TimeoutCommit = 0)
skip_timeout_commit = false
@@ -198,7 +211,10 @@ create_empty_blocks_interval = "0s"
# Reactor sleep duration parameters
peer_gossip_sleep_duration = "100ms"
peer_query_maj23_sleep_duration = "2000ms"
peer_query_maj23_sleep_duration = "2s"
# Block time parameters. Corresponds to the minimum time increment between consecutive blocks.
blocktime_iota = "1s"
##### transactions indexer configuration options #####
[tx_index]
@@ -239,7 +255,7 @@ prometheus = false
prometheus_listen_addr = ":26660"
# Maximum number of simultaneous connections.
# If you want to accept a more significant number than the default, make sure
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
max_open_connections = 3

View File

@@ -519,18 +519,16 @@ developers guide](../app-dev/app-development.md) for more details.
### Local Network
To run a network locally, say on a single machine, you must change the
`_laddr` fields in the `config.toml` (or using the flags) so that the
listening addresses of the various sockets don't conflict. Additionally,
you must set `addr_book_strict=false` in the `config.toml`, otherwise
Tendermint's p2p library will deny making connections to peers with the
same IP address.
To run a network locally, say on a single machine, you must change the `_laddr`
fields in the `config.toml` (or using the flags) so that the listening
addresses of the various sockets don't conflict. Additionally, you must set
`addr_book_strict=false` in the `config.toml`, otherwise Tendermint's p2p
library will deny making connections to peers with the same IP address.
### Upgrading
The Tendermint development cycle currently includes a lot of breaking changes.
Upgrading from an old version to a new version usually means throwing
away the chain data. Try out the
[tm-migrate](https://github.com/hxzqlh/tm-tools) tool written by
[@hxzqlh](https://github.com/hxzqlh) if you are keen to preserve the
state of your chain when upgrading to newer versions.
See the
[UPGRADING.md](https://github.com/tendermint/tendermint/blob/master/UPGRADING.md)
guide. You may need to reset your chain between major breaking releases.
That said, we expect Tendermint to have fewer breaking releases in the future
(especially after the 1.0 release).

View File

@@ -35,7 +35,7 @@ func initializeValidatorState(valAddr []byte, height int64) dbm.DB {
LastBlockHeight: 0,
LastBlockTime: tmtime.Now(),
Validators: valSet,
NextValidators: valSet.CopyIncrementAccum(1),
NextValidators: valSet.CopyIncrementProposerPriority(1),
LastHeightValidatorsChanged: 1,
ConsensusParams: types.ConsensusParams{
Evidence: types.EvidenceParams{

View File

@@ -163,7 +163,7 @@ func (evR EvidenceReactor) checkSendEvidenceMessage(peer p2p.Peer, ev types.Evid
// make sure the peer is up to date
evHeight := ev.Height()
peerState, ok := peer.Get(types.PeerStateKey).(PeerState)
if !ok {
if !ok {
// Peer does not have a state yet. We set it in the consensus reactor, but
// when we add peer in Switch, the order we call reactors#AddPeer is
// different every time due to us using a map. Sometimes other reactors

View File

@@ -8,7 +8,6 @@ import (
"time"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/libs/errors"
)
/* AutoFile usage
@@ -157,13 +156,13 @@ func (af *AutoFile) openFile() error {
if err != nil {
return err
}
fileInfo, err := file.Stat()
if err != nil {
return err
}
if fileInfo.Mode() != autoFilePerms {
return errors.NewErrPermissionsChanged(file.Name(), fileInfo.Mode(), autoFilePerms)
}
// fileInfo, err := file.Stat()
// if err != nil {
// return err
// }
// if fileInfo.Mode() != autoFilePerms {
// return errors.NewErrPermissionsChanged(file.Name(), fileInfo.Mode(), autoFilePerms)
// }
af.file = file
return nil
}

View File

@@ -10,7 +10,6 @@ import (
"github.com/stretchr/testify/require"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/libs/errors"
)
func TestSIGHUP(t *testing.T) {
@@ -58,32 +57,32 @@ func TestSIGHUP(t *testing.T) {
}
}
// Manually modify file permissions, close, and reopen using autofile:
// We expect the file permissions to be changed back to the intended perms.
func TestOpenAutoFilePerms(t *testing.T) {
file, err := ioutil.TempFile("", "permission_test")
require.NoError(t, err)
err = file.Close()
require.NoError(t, err)
name := file.Name()
// // Manually modify file permissions, close, and reopen using autofile:
// // We expect the file permissions to be changed back to the intended perms.
// func TestOpenAutoFilePerms(t *testing.T) {
// file, err := ioutil.TempFile("", "permission_test")
// require.NoError(t, err)
// err = file.Close()
// require.NoError(t, err)
// name := file.Name()
// open and change permissions
af, err := OpenAutoFile(name)
require.NoError(t, err)
err = af.file.Chmod(0755)
require.NoError(t, err)
err = af.Close()
require.NoError(t, err)
// // open and change permissions
// af, err := OpenAutoFile(name)
// require.NoError(t, err)
// err = af.file.Chmod(0755)
// require.NoError(t, err)
// err = af.Close()
// require.NoError(t, err)
// reopen and expect an ErrPermissionsChanged as Cause
af, err = OpenAutoFile(name)
require.Error(t, err)
if e, ok := err.(*errors.ErrPermissionsChanged); ok {
t.Logf("%v", e)
} else {
t.Errorf("unexpected error %v", e)
}
}
// // reopen and expect an ErrPermissionsChanged as Cause
// af, err = OpenAutoFile(name)
// require.Error(t, err)
// if e, ok := err.(*errors.ErrPermissionsChanged); ok {
// t.Logf("%v", e)
// } else {
// t.Errorf("unexpected error %v", e)
// }
// }
func TestAutoFileSize(t *testing.T) {
// First, create an AutoFile writing to a tempfile dir

View File

@@ -51,7 +51,7 @@ func TestParseLogLevel(t *testing.T) {
buf.Reset()
logger.With("module", "wire").Debug("Kingpin")
logger.With("module", "mempool").With("module", "wire").Debug("Kingpin")
if have := strings.TrimSpace(buf.String()); c.expectedLogLines[0] != have {
t.Errorf("\nwant '%s'\nhave '%s'\nlevel '%s'", c.expectedLogLines[0], have, c.lvl)
}

View File

@@ -61,3 +61,16 @@ func ASCIITrim(s string) string {
}
return string(r)
}
// StringSliceEqual checks if string slices a and b are equal
func StringSliceEqual(a, b []string) bool {
if len(a) != len(b) {
return false
}
for i := 0; i < len(a); i++ {
if a[i] != b[i] {
return false
}
}
return true
}

View File

@@ -3,6 +3,8 @@ package common
import (
"testing"
"github.com/stretchr/testify/require"
"github.com/stretchr/testify/assert"
)
@@ -35,3 +37,22 @@ func TestASCIITrim(t *testing.T) {
assert.Equal(t, ASCIITrim(" a "), "a")
assert.Panics(t, func() { ASCIITrim("\xC2\xA2") })
}
func TestStringSliceEqual(t *testing.T) {
tests := []struct {
a []string
b []string
want bool
}{
{[]string{"hello", "world"}, []string{"hello", "world"}, true},
{[]string{"test"}, []string{"test"}, true},
{[]string{"test1"}, []string{"test2"}, false},
{[]string{"hello", "world."}, []string{"hello", "world!"}, false},
{[]string{"only 1 word"}, []string{"two", "words!"}, false},
{[]string{"two", "words!"}, []string{"only 1 word"}, false},
}
for i, tt := range tests {
require.Equal(t, tt.want, StringSliceEqual(tt.a, tt.b),
"StringSliceEqual failed on test %d", i)
}
}

View File

@@ -180,13 +180,13 @@ func testDBIterator(t *testing.T, backend DBBackendType) {
verifyIterator(t, db.ReverseIterator(nil, nil), []int64{9, 8, 7, 5, 4, 3, 2, 1, 0}, "reverse iterator")
verifyIterator(t, db.Iterator(nil, int642Bytes(0)), []int64(nil), "forward iterator to 0")
verifyIterator(t, db.ReverseIterator(nil, int642Bytes(10)), []int64(nil), "reverse iterator 10")
verifyIterator(t, db.ReverseIterator(int642Bytes(10), nil), []int64(nil), "reverse iterator from 10 (ex)")
verifyIterator(t, db.Iterator(int642Bytes(0), nil), []int64{0, 1, 2, 3, 4, 5, 7, 8, 9}, "forward iterator from 0")
verifyIterator(t, db.Iterator(int642Bytes(1), nil), []int64{1, 2, 3, 4, 5, 7, 8, 9}, "forward iterator from 1")
verifyIterator(t, db.ReverseIterator(int642Bytes(10), nil), []int64{9, 8, 7, 5, 4, 3, 2, 1, 0}, "reverse iterator from 10")
verifyIterator(t, db.ReverseIterator(int642Bytes(9), nil), []int64{9, 8, 7, 5, 4, 3, 2, 1, 0}, "reverse iterator from 9")
verifyIterator(t, db.ReverseIterator(int642Bytes(8), nil), []int64{8, 7, 5, 4, 3, 2, 1, 0}, "reverse iterator from 8")
verifyIterator(t, db.ReverseIterator(nil, int642Bytes(10)), []int64{9, 8, 7, 5, 4, 3, 2, 1, 0}, "reverse iterator from 10 (ex)")
verifyIterator(t, db.ReverseIterator(nil, int642Bytes(9)), []int64{8, 7, 5, 4, 3, 2, 1, 0}, "reverse iterator from 9 (ex)")
verifyIterator(t, db.ReverseIterator(nil, int642Bytes(8)), []int64{7, 5, 4, 3, 2, 1, 0}, "reverse iterator from 8 (ex)")
verifyIterator(t, db.Iterator(int642Bytes(5), int642Bytes(6)), []int64{5}, "forward iterator from 5 to 6")
verifyIterator(t, db.Iterator(int642Bytes(5), int642Bytes(7)), []int64{5}, "forward iterator from 5 to 7")
@@ -195,20 +195,20 @@ func testDBIterator(t *testing.T, backend DBBackendType) {
verifyIterator(t, db.Iterator(int642Bytes(6), int642Bytes(8)), []int64{7}, "forward iterator from 6 to 8")
verifyIterator(t, db.Iterator(int642Bytes(7), int642Bytes(8)), []int64{7}, "forward iterator from 7 to 8")
verifyIterator(t, db.ReverseIterator(int642Bytes(5), int642Bytes(4)), []int64{5}, "reverse iterator from 5 to 4")
verifyIterator(t, db.ReverseIterator(int642Bytes(6), int642Bytes(4)), []int64{5}, "reverse iterator from 6 to 4")
verifyIterator(t, db.ReverseIterator(int642Bytes(7), int642Bytes(4)), []int64{7, 5}, "reverse iterator from 7 to 4")
verifyIterator(t, db.ReverseIterator(int642Bytes(6), int642Bytes(5)), []int64(nil), "reverse iterator from 6 to 5")
verifyIterator(t, db.ReverseIterator(int642Bytes(7), int642Bytes(5)), []int64{7}, "reverse iterator from 7 to 5")
verifyIterator(t, db.ReverseIterator(int642Bytes(7), int642Bytes(6)), []int64{7}, "reverse iterator from 7 to 6")
verifyIterator(t, db.ReverseIterator(int642Bytes(4), int642Bytes(5)), []int64{4}, "reverse iterator from 5 (ex) to 4")
verifyIterator(t, db.ReverseIterator(int642Bytes(4), int642Bytes(6)), []int64{5, 4}, "reverse iterator from 6 (ex) to 4")
verifyIterator(t, db.ReverseIterator(int642Bytes(4), int642Bytes(7)), []int64{5, 4}, "reverse iterator from 7 (ex) to 4")
verifyIterator(t, db.ReverseIterator(int642Bytes(5), int642Bytes(6)), []int64{5}, "reverse iterator from 6 (ex) to 5")
verifyIterator(t, db.ReverseIterator(int642Bytes(5), int642Bytes(7)), []int64{5}, "reverse iterator from 7 (ex) to 5")
verifyIterator(t, db.ReverseIterator(int642Bytes(6), int642Bytes(7)), []int64(nil), "reverse iterator from 7 (ex) to 6")
verifyIterator(t, db.Iterator(int642Bytes(0), int642Bytes(1)), []int64{0}, "forward iterator from 0 to 1")
verifyIterator(t, db.ReverseIterator(int642Bytes(9), int642Bytes(8)), []int64{9}, "reverse iterator from 9 to 8")
verifyIterator(t, db.ReverseIterator(int642Bytes(8), int642Bytes(9)), []int64{8}, "reverse iterator from 9 (ex) to 8")
verifyIterator(t, db.Iterator(int642Bytes(2), int642Bytes(4)), []int64{2, 3}, "forward iterator from 2 to 4")
verifyIterator(t, db.Iterator(int642Bytes(4), int642Bytes(2)), []int64(nil), "forward iterator from 4 to 2")
verifyIterator(t, db.ReverseIterator(int642Bytes(4), int642Bytes(2)), []int64{4, 3}, "reverse iterator from 4 to 2")
verifyIterator(t, db.ReverseIterator(int642Bytes(2), int642Bytes(4)), []int64(nil), "reverse iterator from 2 to 4")
verifyIterator(t, db.ReverseIterator(int642Bytes(2), int642Bytes(4)), []int64{3, 2}, "reverse iterator from 4 (ex) to 2")
verifyIterator(t, db.ReverseIterator(int642Bytes(4), int642Bytes(2)), []int64(nil), "reverse iterator from 2 (ex) to 4")
}

View File

@@ -205,13 +205,13 @@ type cLevelDBIterator struct {
func newCLevelDBIterator(source *levigo.Iterator, start, end []byte, isReverse bool) *cLevelDBIterator {
if isReverse {
if start == nil {
if end == nil {
source.SeekToLast()
} else {
source.Seek(start)
source.Seek(end)
if source.Valid() {
soakey := source.Key() // start or after key
if bytes.Compare(start, soakey) < 0 {
eoakey := source.Key() // end or after key
if bytes.Compare(end, eoakey) <= 0 {
source.Prev()
}
} else {
@@ -255,10 +255,11 @@ func (itr cLevelDBIterator) Valid() bool {
}
// If key is end or past it, invalid.
var start = itr.start
var end = itr.end
var key = itr.source.Key()
if itr.isReverse {
if end != nil && bytes.Compare(key, end) <= 0 {
if start != nil && bytes.Compare(key, start) < 0 {
itr.isInvalid = true
return false
}

View File

@@ -12,7 +12,6 @@ import (
"github.com/pkg/errors"
cmn "github.com/tendermint/tendermint/libs/common"
tmerrors "github.com/tendermint/tendermint/libs/errors"
)
const (
@@ -162,7 +161,7 @@ func (db *FSDB) MakeIterator(start, end []byte, isReversed bool) Iterator {
// We need a copy of all of the keys.
// Not the best, but probably not a bottleneck depending.
keys, err := list(db.dir, start, end, isReversed)
keys, err := list(db.dir, start, end)
if err != nil {
panic(errors.Wrapf(err, "Listing keys in %s", db.dir))
}
@@ -207,13 +206,13 @@ func write(path string, d []byte) error {
return err
}
defer f.Close()
fInfo, err := f.Stat()
if err != nil {
return err
}
if fInfo.Mode() != keyPerm {
return tmerrors.NewErrPermissionsChanged(f.Name(), keyPerm, fInfo.Mode())
}
// fInfo, err := f.Stat()
// if err != nil {
// return err
// }
// if fInfo.Mode() != keyPerm {
// return tmerrors.NewErrPermissionsChanged(f.Name(), keyPerm, fInfo.Mode())
// }
_, err = f.Write(d)
if err != nil {
return err
@@ -230,7 +229,7 @@ func remove(path string) error {
// List keys in a directory, stripping of escape sequences and dir portions.
// CONTRACT: returns os errors directly without wrapping.
func list(dirPath string, start, end []byte, isReversed bool) ([]string, error) {
func list(dirPath string, start, end []byte) ([]string, error) {
dir, err := os.Open(dirPath)
if err != nil {
return nil, err
@@ -248,7 +247,7 @@ func list(dirPath string, start, end []byte, isReversed bool) ([]string, error)
return nil, fmt.Errorf("Failed to unescape %s while listing", name)
}
key := unescapeKey([]byte(n))
if IsKeyInDomain(key, start, end, isReversed) {
if IsKeyInDomain(key, start, end) {
keys = append(keys, string(key))
}
}

View File

@@ -213,13 +213,13 @@ var _ Iterator = (*goLevelDBIterator)(nil)
func newGoLevelDBIterator(source iterator.Iterator, start, end []byte, isReverse bool) *goLevelDBIterator {
if isReverse {
if start == nil {
if end == nil {
source.Last()
} else {
valid := source.Seek(start)
valid := source.Seek(end)
if valid {
soakey := source.Key() // start or after key
if bytes.Compare(start, soakey) < 0 {
eoakey := source.Key() // end or after key
if bytes.Compare(end, eoakey) <= 0 {
source.Prev()
}
} else {
@@ -265,11 +265,12 @@ func (itr *goLevelDBIterator) Valid() bool {
}
// If key is end or past it, invalid.
var start = itr.start
var end = itr.end
var key = itr.source.Key()
if itr.isReverse {
if end != nil && bytes.Compare(key, end) <= 0 {
if start != nil && bytes.Compare(key, start) < 0 {
itr.isInvalid = true
return false
}

View File

@@ -237,7 +237,7 @@ func (itr *memDBIterator) assertIsValid() {
func (db *MemDB) getSortedKeys(start, end []byte, reverse bool) []string {
keys := []string{}
for key := range db.db {
inDomain := IsKeyInDomain([]byte(key), start, end, reverse)
inDomain := IsKeyInDomain([]byte(key), start, end)
if inDomain {
keys = append(keys, key)
}

View File

@@ -131,27 +131,13 @@ func (pdb *prefixDB) ReverseIterator(start, end []byte) Iterator {
defer pdb.mtx.Unlock()
var pstart, pend []byte
if start == nil {
// This may cause the underlying iterator to start with
// an item which doesn't start with prefix. We will skip
// that item later in this function. See 'skipOne'.
pstart = cpIncr(pdb.prefix)
} else {
pstart = append(cp(pdb.prefix), start...)
}
pstart = append(cp(pdb.prefix), start...)
if end == nil {
// This may cause the underlying iterator to end with an
// item which doesn't start with prefix. The
// prefixIterator will terminate iteration
// automatically upon detecting this.
pend = cpDecr(pdb.prefix)
pend = cpIncr(pdb.prefix)
} else {
pend = append(cp(pdb.prefix), end...)
}
ritr := pdb.db.ReverseIterator(pstart, pend)
if start == nil {
skipOne(ritr, cpIncr(pdb.prefix))
}
return newPrefixIterator(
pdb.prefix,
start,
@@ -310,7 +296,6 @@ func (itr *prefixIterator) Next() {
}
itr.source.Next()
if !itr.source.Valid() || !bytes.HasPrefix(itr.source.Key(), itr.prefix) {
itr.source.Close()
itr.valid = false
return
}
@@ -345,13 +330,3 @@ func stripPrefix(key []byte, prefix []byte) (stripped []byte) {
}
return key[len(prefix):]
}
// If the first iterator item is skipKey, then
// skip it.
func skipOne(itr Iterator, skipKey []byte) {
if itr.Valid() {
if bytes.Equal(itr.Key(), skipKey) {
itr.Next()
}
}
}

View File

@@ -113,8 +113,46 @@ func TestPrefixDBReverseIterator2(t *testing.T) {
db := mockDBWithStuff()
pdb := NewPrefixDB(db, bz("key"))
itr := pdb.ReverseIterator(bz(""), nil)
checkDomain(t, itr, bz(""), nil)
checkItem(t, itr, bz("3"), bz("value3"))
checkNext(t, itr, true)
checkItem(t, itr, bz("2"), bz("value2"))
checkNext(t, itr, true)
checkItem(t, itr, bz("1"), bz("value1"))
checkNext(t, itr, true)
checkItem(t, itr, bz(""), bz("value"))
checkNext(t, itr, false)
checkInvalid(t, itr)
itr.Close()
}
func TestPrefixDBReverseIterator3(t *testing.T) {
db := mockDBWithStuff()
pdb := NewPrefixDB(db, bz("key"))
itr := pdb.ReverseIterator(nil, bz(""))
checkDomain(t, itr, nil, bz(""))
checkInvalid(t, itr)
itr.Close()
}
func TestPrefixDBReverseIterator4(t *testing.T) {
db := mockDBWithStuff()
pdb := NewPrefixDB(db, bz("key"))
itr := pdb.ReverseIterator(bz(""), bz(""))
checkDomain(t, itr, bz(""), bz(""))
checkInvalid(t, itr)
itr.Close()
}
func TestPrefixDBReverseIterator5(t *testing.T) {
db := mockDBWithStuff()
pdb := NewPrefixDB(db, bz("key"))
itr := pdb.ReverseIterator(bz("1"), nil)
checkDomain(t, itr, bz("1"), nil)
checkItem(t, itr, bz("3"), bz("value3"))
checkNext(t, itr, true)
checkItem(t, itr, bz("2"), bz("value2"))
@@ -125,23 +163,30 @@ func TestPrefixDBReverseIterator2(t *testing.T) {
itr.Close()
}
func TestPrefixDBReverseIterator3(t *testing.T) {
func TestPrefixDBReverseIterator6(t *testing.T) {
db := mockDBWithStuff()
pdb := NewPrefixDB(db, bz("key"))
itr := pdb.ReverseIterator(bz(""), nil)
checkDomain(t, itr, bz(""), nil)
checkItem(t, itr, bz(""), bz("value"))
itr := pdb.ReverseIterator(bz("2"), nil)
checkDomain(t, itr, bz("2"), nil)
checkItem(t, itr, bz("3"), bz("value3"))
checkNext(t, itr, true)
checkItem(t, itr, bz("2"), bz("value2"))
checkNext(t, itr, false)
checkInvalid(t, itr)
itr.Close()
}
func TestPrefixDBReverseIterator4(t *testing.T) {
func TestPrefixDBReverseIterator7(t *testing.T) {
db := mockDBWithStuff()
pdb := NewPrefixDB(db, bz("key"))
itr := pdb.ReverseIterator(bz(""), bz(""))
itr := pdb.ReverseIterator(nil, bz("2"))
checkDomain(t, itr, nil, bz("2"))
checkItem(t, itr, bz("1"), bz("value1"))
checkNext(t, itr, true)
checkItem(t, itr, bz(""), bz("value"))
checkNext(t, itr, false)
checkInvalid(t, itr)
itr.Close()
}

View File

@@ -34,9 +34,9 @@ type DB interface {
Iterator(start, end []byte) Iterator
// Iterate over a domain of keys in descending order. End is exclusive.
// Start must be greater than end, or the Iterator is invalid.
// If start is nil, iterates from the last/greatest item (inclusive).
// If end is nil, iterates up to the first/least item (inclusive).
// Start must be less than end, or the Iterator is invalid.
// If start is nil, iterates up to the first/least item (inclusive).
// If end is nil, iterates from the last/greatest item (inclusive).
// CONTRACT: No writes may happen within a domain while an iterator exists over it.
// CONTRACT: start, end readonly []byte
ReverseIterator(start, end []byte) Iterator

View File

@@ -33,46 +33,13 @@ func cpIncr(bz []byte) (ret []byte) {
return nil
}
// Returns a slice of the same length (big endian)
// except decremented by one.
// Returns nil on underflow (e.g. if bz bytes are all 0x00)
// CONTRACT: len(bz) > 0
func cpDecr(bz []byte) (ret []byte) {
if len(bz) == 0 {
panic("cpDecr expects non-zero bz length")
}
ret = cp(bz)
for i := len(bz) - 1; i >= 0; i-- {
if ret[i] > byte(0x00) {
ret[i]--
return
}
ret[i] = byte(0xFF)
if i == 0 {
// Underflow
return nil
}
}
return nil
}
// See DB interface documentation for more information.
func IsKeyInDomain(key, start, end []byte, isReverse bool) bool {
if !isReverse {
if bytes.Compare(key, start) < 0 {
return false
}
if end != nil && bytes.Compare(end, key) <= 0 {
return false
}
return true
} else {
if start != nil && bytes.Compare(start, key) < 0 {
return false
}
if end != nil && bytes.Compare(key, end) <= 0 {
return false
}
return true
func IsKeyInDomain(key, start, end []byte) bool {
if bytes.Compare(key, start) < 0 {
return false
}
if end != nil && bytes.Compare(end, key) <= 0 {
return false
}
return true
}

View File

@@ -1,26 +1,21 @@
// Package errors contains errors that are thrown across packages.
package errors
import (
"fmt"
"os"
)
// // ErrPermissionsChanged occurs if the file permission have changed since the file was created.
// type ErrPermissionsChanged struct {
// name string
// got, want os.FileMode
// }
// ErrPermissionsChanged occurs if the file permission have changed since the file was created.
type ErrPermissionsChanged struct {
name string
got, want os.FileMode
}
// func NewErrPermissionsChanged(name string, got, want os.FileMode) *ErrPermissionsChanged {
// return &ErrPermissionsChanged{name: name, got: got, want: want}
// }
func NewErrPermissionsChanged(name string, got, want os.FileMode) *ErrPermissionsChanged {
return &ErrPermissionsChanged{name: name, got: got, want: want}
}
func (e ErrPermissionsChanged) Error() string {
return fmt.Sprintf(
"file: [%v]\nexpected file permissions: %v, got: %v",
e.name,
e.want,
e.got,
)
}
// func (e ErrPermissionsChanged) Error() string {
// return fmt.Sprintf(
// "file: [%v]\nexpected file permissions: %v, got: %v",
// e.name,
// e.want,
// e.got,
// )
// }

View File

@@ -11,9 +11,10 @@ const (
)
type filter struct {
next Logger
allowed level // XOR'd levels for default case
allowedKeyvals map[keyval]level // When key-value match, use this level
next Logger
allowed level // XOR'd levels for default case
initiallyAllowed level // XOR'd levels for initial case
allowedKeyvals map[keyval]level // When key-value match, use this level
}
type keyval struct {
@@ -33,6 +34,7 @@ func NewFilter(next Logger, options ...Option) Logger {
for _, option := range options {
option(l)
}
l.initiallyAllowed = l.allowed
return l
}
@@ -76,14 +78,45 @@ func (l *filter) Error(msg string, keyvals ...interface{}) {
// logger = log.NewFilter(logger, log.AllowError(), log.AllowInfoWith("module", "crypto"), log.AllowNoneWith("user", "Sam"))
// logger.With("user", "Sam").With("module", "crypto").Info("Hello") # produces "I... Hello module=crypto user=Sam"
func (l *filter) With(keyvals ...interface{}) Logger {
keyInAllowedKeyvals := false
for i := len(keyvals) - 2; i >= 0; i -= 2 {
for kv, allowed := range l.allowedKeyvals {
if keyvals[i] == kv.key && keyvals[i+1] == kv.value {
return &filter{next: l.next.With(keyvals...), allowed: allowed, allowedKeyvals: l.allowedKeyvals}
if keyvals[i] == kv.key {
keyInAllowedKeyvals = true
// Example:
// logger = log.NewFilter(logger, log.AllowError(), log.AllowInfoWith("module", "crypto"))
// logger.With("module", "crypto")
if keyvals[i+1] == kv.value {
return &filter{
next: l.next.With(keyvals...),
allowed: allowed, // set the desired level
allowedKeyvals: l.allowedKeyvals,
initiallyAllowed: l.initiallyAllowed,
}
}
}
}
}
return &filter{next: l.next.With(keyvals...), allowed: l.allowed, allowedKeyvals: l.allowedKeyvals}
// Example:
// logger = log.NewFilter(logger, log.AllowError(), log.AllowInfoWith("module", "crypto"))
// logger.With("module", "main")
if keyInAllowedKeyvals {
return &filter{
next: l.next.With(keyvals...),
allowed: l.initiallyAllowed, // return back to initially allowed
allowedKeyvals: l.allowedKeyvals,
initiallyAllowed: l.initiallyAllowed,
}
}
return &filter{
next: l.next.With(keyvals...),
allowed: l.allowed, // simply continue with the current level
allowedKeyvals: l.allowedKeyvals,
initiallyAllowed: l.initiallyAllowed,
}
}
//--------------------------------------------------------------------------------

View File

@@ -1,6 +1,7 @@
package log
import (
"io"
"os"
"testing"
@@ -19,12 +20,22 @@ var (
// inside a test (not in the init func) because
// verbose flag only set at the time of testing.
func TestingLogger() Logger {
return TestingLoggerWithOutput(os.Stdout)
}
// TestingLoggerWithOutput returns a TMLogger which writes to w (io.Writer) if testing is being run
// with the verbose (-v) flag, NopLogger otherwise.
//
// Note that the call to TestingLoggerWithOutput(w io.Writer) must be made
// inside a test (not in the init func) because
// verbose flag only set at the time of testing.
func TestingLoggerWithOutput(w io.Writer) Logger {
if _testingLogger != nil {
return _testingLogger
}
if testing.Verbose() {
_testingLogger = NewTMLogger(NewSyncWriter(os.Stdout))
_testingLogger = NewTMLogger(NewSyncWriter(w))
} else {
_testingLogger = NewNopLogger()
}

View File

@@ -105,8 +105,8 @@ func (dbp *DBProvider) LatestFullCommit(chainID string, minHeight, maxHeight int
}
itr := dbp.db.ReverseIterator(
signedHeaderKey(chainID, maxHeight),
signedHeaderKey(chainID, minHeight-1),
signedHeaderKey(chainID, minHeight),
append(signedHeaderKey(chainID, maxHeight), byte(0x00)),
)
defer itr.Close()
@@ -190,8 +190,8 @@ func (dbp *DBProvider) deleteAfterN(chainID string, after int) error {
dbp.logger.Info("DBProvider.deleteAfterN()...", "chainID", chainID, "after", after)
itr := dbp.db.ReverseIterator(
signedHeaderKey(chainID, 1<<63-1),
signedHeaderKey(chainID, 0),
signedHeaderKey(chainID, 1),
append(signedHeaderKey(chainID, 1<<63-1), byte(0x00)),
)
defer itr.Close()

View File

@@ -121,7 +121,7 @@ If we cannot update directly from H -> H' because there was too much change to
the validator set, then we can look for some Hm (H < Hm < H') with a validator
set Vm. Then we try to update H -> Hm and then Hm -> H' in two steps. If one
of these steps doesn't work, then we continue bisecting, until we eventually
have to externally validate the valdiator set changes at every block.
have to externally validate the validator set changes at every block.
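A rough sketch of the bisection idea, where `updateOnce` is a hypothetical helper that succeeds only when the validator-set change between the two heights can be verified directly:

```go
// bisect tries to verify a header chain from height h to height hPrime,
// splitting the interval in half whenever a direct update fails.
func bisect(h, hPrime int64, updateOnce func(from, to int64) bool) bool {
	if updateOnce(h, hPrime) {
		return true
	}
	if hPrime-h <= 1 {
		// Cannot split further; the validator-set change must be
		// validated externally at this block.
		return false
	}
	hm := h + (hPrime-h)/2
	return bisect(h, hm, updateOnce) && bisect(hm, hPrime, updateOnce)
}
```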
Since we never trust any server in this protocol, only the signatures
themselves, it doesn't matter if the seed comes from a (possibly malicious)

View File

@@ -9,7 +9,7 @@ import (
rpcclient "github.com/tendermint/tendermint/rpc/client"
"github.com/tendermint/tendermint/rpc/core"
ctypes "github.com/tendermint/tendermint/rpc/core/types"
rpc "github.com/tendermint/tendermint/rpc/lib/server"
rpcserver "github.com/tendermint/tendermint/rpc/lib/server"
)
const (
@@ -19,6 +19,7 @@ const (
// StartProxy will start the websocket manager on the client,
// set up the rpc routes to proxy via the given client,
// and start up an http/rpc server on the location given by bind (eg. :1234)
// NOTE: This function blocks - you may want to call it in a go-routine.
func StartProxy(c rpcclient.Client, listenAddr string, logger log.Logger, maxOpenConnections int) error {
err := c.Start()
if err != nil {
@@ -31,47 +32,49 @@ func StartProxy(c rpcclient.Client, listenAddr string, logger log.Logger, maxOpe
// build the handler...
mux := http.NewServeMux()
rpc.RegisterRPCFuncs(mux, r, cdc, logger)
rpcserver.RegisterRPCFuncs(mux, r, cdc, logger)
wm := rpc.NewWebsocketManager(r, cdc, rpc.EventSubscriber(c))
wm := rpcserver.NewWebsocketManager(r, cdc, rpcserver.EventSubscriber(c))
wm.SetLogger(logger)
core.SetLogger(logger)
mux.HandleFunc(wsEndpoint, wm.WebsocketHandler)
_, err = rpc.StartHTTPServer(listenAddr, mux, logger, rpc.Config{MaxOpenConnections: maxOpenConnections})
return err
l, err := rpcserver.Listen(listenAddr, rpcserver.Config{MaxOpenConnections: maxOpenConnections})
if err != nil {
return err
}
return rpcserver.StartHTTPServer(l, mux, logger)
}
// RPCRoutes just routes everything to the given client, as if it were
// a tendermint fullnode.
//
// if we want security, the client must implement it as a secure client
func RPCRoutes(c rpcclient.Client) map[string]*rpc.RPCFunc {
func RPCRoutes(c rpcclient.Client) map[string]*rpcserver.RPCFunc {
return map[string]*rpc.RPCFunc{
return map[string]*rpcserver.RPCFunc{
// Subscribe/unsubscribe are reserved for websocket events.
// We can just use the core tendermint impl, which uses the
// EventSwitch we registered in NewWebsocketManager above
"subscribe": rpc.NewWSRPCFunc(core.Subscribe, "query"),
"unsubscribe": rpc.NewWSRPCFunc(core.Unsubscribe, "query"),
"subscribe": rpcserver.NewWSRPCFunc(core.Subscribe, "query"),
"unsubscribe": rpcserver.NewWSRPCFunc(core.Unsubscribe, "query"),
// info API
"status": rpc.NewRPCFunc(c.Status, ""),
"blockchain": rpc.NewRPCFunc(c.BlockchainInfo, "minHeight,maxHeight"),
"genesis": rpc.NewRPCFunc(c.Genesis, ""),
"block": rpc.NewRPCFunc(c.Block, "height"),
"commit": rpc.NewRPCFunc(c.Commit, "height"),
"tx": rpc.NewRPCFunc(c.Tx, "hash,prove"),
"validators": rpc.NewRPCFunc(c.Validators, ""),
"status": rpcserver.NewRPCFunc(c.Status, ""),
"blockchain": rpcserver.NewRPCFunc(c.BlockchainInfo, "minHeight,maxHeight"),
"genesis": rpcserver.NewRPCFunc(c.Genesis, ""),
"block": rpcserver.NewRPCFunc(c.Block, "height"),
"commit": rpcserver.NewRPCFunc(c.Commit, "height"),
"tx": rpcserver.NewRPCFunc(c.Tx, "hash,prove"),
"validators": rpcserver.NewRPCFunc(c.Validators, ""),
// broadcast API
"broadcast_tx_commit": rpc.NewRPCFunc(c.BroadcastTxCommit, "tx"),
"broadcast_tx_sync": rpc.NewRPCFunc(c.BroadcastTxSync, "tx"),
"broadcast_tx_async": rpc.NewRPCFunc(c.BroadcastTxAsync, "tx"),
"broadcast_tx_commit": rpcserver.NewRPCFunc(c.BroadcastTxCommit, "tx"),
"broadcast_tx_sync": rpcserver.NewRPCFunc(c.BroadcastTxSync, "tx"),
"broadcast_tx_async": rpcserver.NewRPCFunc(c.BroadcastTxAsync, "tx"),
// abci API
"abci_query": rpc.NewRPCFunc(c.ABCIQuery, "path,data,prove"),
"abci_info": rpc.NewRPCFunc(c.ABCIInfo, ""),
"abci_query": rpcserver.NewRPCFunc(c.ABCIQuery, "path,data,prove"),
"abci_info": rpcserver.NewRPCFunc(c.ABCIInfo, ""),
}
}

View File

@@ -108,6 +108,10 @@ func PostCheckMaxGas(maxGas int64) PostCheckFunc {
if maxGas == -1 {
return nil
}
if res.GasWanted < 0 {
return fmt.Errorf("gas wanted %d is negative",
res.GasWanted)
}
if res.GasWanted > maxGas {
return fmt.Errorf("gas wanted %d is greater than max gas %d",
res.GasWanted, maxGas)
@@ -131,7 +135,6 @@ type Mempool struct {
proxyMtx sync.Mutex
proxyAppConn proxy.AppConnMempool
txs *clist.CList // concurrent linked-list of good txs
counter int64 // simple incrementing counter
height int64 // the last block Update()'d to
rechecking int32 // for re-checking filtered txs on Update()
recheckCursor *clist.CElement // next expected response
@@ -167,7 +170,6 @@ func NewMempool(
config: config,
proxyAppConn: proxyAppConn,
txs: clist.New(),
counter: 0,
height: height,
rechecking: 0,
recheckCursor: nil,
@@ -365,9 +367,7 @@ func (mem *Mempool) resCbNormal(req *abci.Request, res *abci.Response) {
postCheckErr = mem.postCheck(tx, r.CheckTx)
}
if (r.CheckTx.Code == abci.CodeTypeOK) && postCheckErr == nil {
mem.counter++
memTx := &mempoolTx{
counter: mem.counter,
height: mem.height,
gasWanted: r.CheckTx.GasWanted,
tx: tx,
@@ -378,7 +378,6 @@ func (mem *Mempool) resCbNormal(req *abci.Request, res *abci.Response) {
"res", r,
"height", memTx.height,
"total", mem.Size(),
"counter", memTx.counter,
)
mem.metrics.TxSizeBytes.Observe(float64(len(tx)))
mem.notifyTxsAvailable()
@@ -491,11 +490,15 @@ func (mem *Mempool) ReapMaxBytesMaxGas(maxBytes, maxGas int64) types.Txs {
return txs
}
totalBytes += int64(len(memTx.tx)) + aminoOverhead
// Check total gas requirement
if maxGas > -1 && totalGas+memTx.gasWanted > maxGas {
// Check total gas requirement.
// If maxGas is negative, skip this check.
// Since newTotalGas < maxGas, which
// must be non-negative, it follows that this won't overflow.
newTotalGas := totalGas + memTx.gasWanted
if maxGas > -1 && newTotalGas > maxGas {
return txs
}
totalGas += memTx.gasWanted
totalGas = newTotalGas
txs = append(txs, memTx.tx)
}
return txs
@@ -534,12 +537,6 @@ func (mem *Mempool) Update(
preCheck PreCheckFunc,
postCheck PostCheckFunc,
) error {
// First, create a lookup map of txns in new txs.
txsMap := make(map[string]struct{}, len(txs))
for _, tx := range txs {
txsMap[string(tx)] = struct{}{}
}
// Set height
mem.height = height
mem.notifiedTxsAvailable = false
@@ -551,15 +548,26 @@ func (mem *Mempool) Update(
mem.postCheck = postCheck
}
// Remove transactions that are already in txs.
goodTxs := mem.filterTxs(txsMap)
// Recheck mempool txs if any txs were committed in the block
if mem.config.Recheck && len(goodTxs) > 0 {
mem.logger.Info("Recheck txs", "numtxs", len(goodTxs), "height", height)
mem.recheckTxs(goodTxs)
// At this point, mem.txs are being rechecked.
// mem.recheckCursor re-scans mem.txs and possibly removes some txs.
// Before mem.Reap(), we should wait for mem.recheckCursor to be nil.
// Add committed transactions to cache (if missing).
for _, tx := range txs {
_ = mem.cache.Push(tx)
}
// Remove committed transactions.
txsLeft := mem.removeTxs(txs)
// Either recheck non-committed txs to see if they became invalid
// or just notify there're some txs left.
if len(txsLeft) > 0 {
if mem.config.Recheck {
mem.logger.Info("Recheck txs", "numtxs", len(txsLeft), "height", height)
mem.recheckTxs(txsLeft)
// At this point, mem.txs are being rechecked.
// mem.recheckCursor re-scans mem.txs and possibly removes some txs.
// Before mem.Reap(), we should wait for mem.recheckCursor to be nil.
} else {
mem.notifyTxsAvailable()
}
}
// Update metrics
@@ -568,12 +576,18 @@ func (mem *Mempool) Update(
return nil
}
func (mem *Mempool) filterTxs(blockTxsMap map[string]struct{}) []types.Tx {
goodTxs := make([]types.Tx, 0, mem.txs.Len())
func (mem *Mempool) removeTxs(txs types.Txs) []types.Tx {
// Build a map for faster lookups.
txsMap := make(map[string]struct{}, len(txs))
for _, tx := range txs {
txsMap[string(tx)] = struct{}{}
}
txsLeft := make([]types.Tx, 0, mem.txs.Len())
for e := mem.txs.Front(); e != nil; e = e.Next() {
memTx := e.Value.(*mempoolTx)
// Remove the tx if it's alredy in a block.
if _, ok := blockTxsMap[string(memTx.tx)]; ok {
// Remove the tx if it's already in a block.
if _, ok := txsMap[string(memTx.tx)]; ok {
// remove from clist
mem.txs.Remove(e)
e.DetachPrev()
@@ -581,15 +595,14 @@ func (mem *Mempool) filterTxs(blockTxsMap map[string]struct{}) []types.Tx {
// NOTE: we don't remove committed txs from the cache.
continue
}
// Good tx!
goodTxs = append(goodTxs, memTx.tx)
txsLeft = append(txsLeft, memTx.tx)
}
return goodTxs
return txsLeft
}
// NOTE: pass in goodTxs because mem.txs can mutate concurrently.
func (mem *Mempool) recheckTxs(goodTxs []types.Tx) {
if len(goodTxs) == 0 {
// NOTE: pass in txs because mem.txs can mutate concurrently.
func (mem *Mempool) recheckTxs(txs []types.Tx) {
if len(txs) == 0 {
return
}
atomic.StoreInt32(&mem.rechecking, 1)
@@ -598,7 +611,7 @@ func (mem *Mempool) recheckTxs(goodTxs []types.Tx) {
// Push txs to proxyAppConn
// NOTE: resCb() may be called concurrently.
for _, tx := range goodTxs {
for _, tx := range txs {
mem.proxyAppConn.CheckTxAsync(tx)
}
mem.proxyAppConn.FlushAsync()
@@ -608,7 +621,6 @@ func (mem *Mempool) recheckTxs(goodTxs []types.Tx) {
// mempoolTx is a transaction that successfully ran
type mempoolTx struct {
counter int64 // a simple incrementing counter
height int64 // height that this tx had been validated in
gasWanted int64 // amount of gas this tx states it will require
tx types.Tx //

View File

@@ -163,6 +163,17 @@ func TestMempoolFilters(t *testing.T) {
}
}
func TestMempoolUpdateAddsTxsToCache(t *testing.T) {
app := kvstore.NewKVStoreApplication()
cc := proxy.NewLocalClientCreator(app)
mempool := newMempoolWithApp(cc)
mempool.Update(1, []types.Tx{[]byte{0x01}}, nil, nil)
err := mempool.CheckTx([]byte{0x01}, nil)
if assert.Error(t, err) {
assert.Equal(t, ErrTxInCache, err)
}
}
func TestTxsAvailable(t *testing.T) {
app := kvstore.NewKVStoreApplication()
cc := proxy.NewLocalClientCreator(app)

View File

@@ -3,7 +3,6 @@ package node
import (
"bytes"
"context"
"errors"
"fmt"
"net"
"net/http"
@@ -11,11 +10,12 @@ import (
"strings"
"time"
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
"github.com/rs/cors"
"github.com/tendermint/go-amino"
amino "github.com/tendermint/go-amino"
abci "github.com/tendermint/tendermint/abci/types"
bc "github.com/tendermint/tendermint/blockchain"
cfg "github.com/tendermint/tendermint/config"
@@ -210,35 +210,29 @@ func NewNode(config *cfg.Config,
// what happened during block replay).
state = sm.LoadState(stateDB)
// Ensure the state's block version matches that of the software.
// Log the version info.
logger.Info("Version info",
"software", version.TMCoreSemVer,
"block", version.BlockProtocol,
"p2p", version.P2PProtocol,
)
// If the state and software differ in block version, at least log it.
if state.Version.Consensus.Block != version.BlockProtocol {
return nil, fmt.Errorf(
"Block version of the software does not match that of the state.\n"+
"Got version.BlockProtocol=%v, state.Version.Consensus.Block=%v",
version.BlockProtocol,
state.Version.Consensus.Block,
logger.Info("Software and state have different block protocols",
"software", version.BlockProtocol,
"state", state.Version.Consensus.Block,
)
}
// If an address is provided, listen on the socket for a
// connection from an external signing process.
if config.PrivValidatorListenAddr != "" {
var (
// TODO: persist this key so external signer
// can actually authenticate us
privKey = ed25519.GenPrivKey()
pvsc = privval.NewTCPVal(
logger.With("module", "privval"),
config.PrivValidatorListenAddr,
privKey,
)
)
if err := pvsc.Start(); err != nil {
return nil, fmt.Errorf("Error starting private validator client: %v", err)
// If an address is provided, listen on the socket for a connection from an
// external signing process.
// FIXME: we should start services inside OnStart
privValidator, err = createAndStartPrivValidatorSocketClient(config.PrivValidatorListenAddr, logger)
if err != nil {
return nil, errors.Wrap(err, "Error with private validator socket client")
}
privValidator = pvsc
}
// Decide whether to fast-sync or not
@@ -354,24 +348,26 @@ func NewNode(config *cfg.Config,
indexerService := txindex.NewIndexerService(txIndexer, eventBus)
indexerService.SetLogger(logger.With("module", "txindex"))
var (
p2pLogger = logger.With("module", "p2p")
nodeInfo = makeNodeInfo(
config,
nodeKey.ID(),
txIndexer,
genDoc.ChainID,
p2p.NewProtocolVersion(
version.P2PProtocol, // global
state.Version.Consensus.Block,
state.Version.Consensus.App,
),
)
p2pLogger := logger.With("module", "p2p")
nodeInfo, err := makeNodeInfo(
config,
nodeKey.ID(),
txIndexer,
genDoc.ChainID,
p2p.NewProtocolVersion(
version.P2PProtocol, // global
state.Version.Consensus.Block,
state.Version.Consensus.App,
),
)
if err != nil {
return nil, err
}
// Setup Transport.
var (
transport = p2p.NewMultiplexTransport(nodeInfo, *nodeKey)
mConnConfig = p2p.MConnConfig(config.P2P)
transport = p2p.NewMultiplexTransport(nodeInfo, *nodeKey, mConnConfig)
connFilters = []p2p.ConnFilterFunc{}
peerFilters = []p2p.PeerFilterFunc{}
)
@@ -464,7 +460,7 @@ func NewNode(config *cfg.Config,
Seeds: splitAndTrimEmpty(config.P2P.Seeds, ",", " "),
SeedMode: config.P2P.SeedMode,
})
pexReactor.SetLogger(p2pLogger)
pexReactor.SetLogger(logger.With("module", "pex"))
sw.AddReactor("PEX", pexReactor)
}
@@ -599,10 +595,8 @@ func (n *Node) OnStop() {
}
}
if pvsc, ok := n.privValidator.(*privval.TCPVal); ok {
if err := pvsc.Stop(); err != nil {
n.Logger.Error("Error stopping priv validator socket client", "err", err)
}
if pvsc, ok := n.privValidator.(cmn.Service); ok {
pvsc.Stop()
}
if n.prometheusSrv != nil {
@@ -653,6 +647,14 @@ func (n *Node) startRPC() ([]net.Listener, error) {
mux.HandleFunc("/websocket", wm.WebsocketHandler)
rpcserver.RegisterRPCFuncs(mux, rpccore.Routes, coreCodec, rpcLogger)
listener, err := rpcserver.Listen(
listenAddr,
rpcserver.Config{MaxOpenConnections: n.config.RPC.MaxOpenConnections},
)
if err != nil {
return nil, err
}
var rootHandler http.Handler = mux
if n.config.RPC.IsCorsEnabled() {
corsMiddleware := cors.New(cors.Options{
@@ -663,30 +665,23 @@ func (n *Node) startRPC() ([]net.Listener, error) {
rootHandler = corsMiddleware.Handler(mux)
}
listener, err := rpcserver.StartHTTPServer(
listenAddr,
go rpcserver.StartHTTPServer(
listener,
rootHandler,
rpcLogger,
rpcserver.Config{MaxOpenConnections: n.config.RPC.MaxOpenConnections},
)
if err != nil {
return nil, err
}
listeners[i] = listener
}
// we expose a simplified api over grpc for convenience to app devs
grpcListenAddr := n.config.RPC.GRPCListenAddress
if grpcListenAddr != "" {
listener, err := grpccore.StartGRPCServer(
grpcListenAddr,
grpccore.Config{
MaxOpenConnections: n.config.RPC.GRPCMaxOpenConnections,
},
)
listener, err := rpcserver.Listen(
grpcListenAddr, rpcserver.Config{MaxOpenConnections: n.config.RPC.GRPCMaxOpenConnections})
if err != nil {
return nil, err
}
go grpccore.StartGRPCServer(listener)
listeners = append(listeners, listener)
}
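The RPC changes above split listening from serving. A condensed sketch of the resulting pattern, taken from the hunks above (n is the node): bind the listener synchronously so configuration errors still surface from startRPC, then serve on it in a goroutine.

listener, err := rpcserver.Listen(
	listenAddr,
	rpcserver.Config{MaxOpenConnections: n.config.RPC.MaxOpenConnections},
)
if err != nil {
	return nil, err
}
go rpcserver.StartHTTPServer(listener, rootHandler, rpcLogger)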
@@ -788,7 +783,7 @@ func makeNodeInfo(
txIndexer txindex.TxIndexer,
chainID string,
protocolVersion p2p.ProtocolVersion,
) p2p.NodeInfo {
) (p2p.NodeInfo, error) {
txIndexerStatus := "on"
if _, ok := txIndexer.(*null.TxIndex); ok {
txIndexerStatus = "off"
@@ -823,7 +818,8 @@ func makeNodeInfo(
nodeInfo.ListenAddr = lAddr
return nodeInfo
err := nodeInfo.Validate()
return nodeInfo, err
}
//------------------------------------------------------------------------------
@@ -855,6 +851,36 @@ func saveGenesisDoc(db dbm.DB, genDoc *types.GenesisDoc) {
db.SetSync(genesisDocKey, bytes)
}
func createAndStartPrivValidatorSocketClient(
listenAddr string,
logger log.Logger,
) (types.PrivValidator, error) {
var pvsc types.PrivValidator
protocol, address := cmn.ProtocolAndAddress(listenAddr)
switch protocol {
case "unix":
pvsc = privval.NewIPCVal(logger.With("module", "privval"), address)
case "tcp":
// TODO: persist this key so external signer
// can actually authenticate us
pvsc = privval.NewTCPVal(logger.With("module", "privval"), listenAddr, ed25519.GenPrivKey())
default:
return nil, fmt.Errorf(
"Wrong listen address: expected either 'tcp' or 'unix' protocols, got %s",
protocol,
)
}
if pvsc, ok := pvsc.(cmn.Service); ok {
if err := pvsc.Start(); err != nil {
return nil, errors.Wrap(err, "failed to start")
}
}
return pvsc, nil
}
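A hedged note on how this helper is driven by configuration (the addresses below are placeholders; the node tests further down assert the resulting types): a unix:// listen address yields a privval.IPCVal, a tcp:// address yields a privval.TCPVal, and any other protocol prefix is rejected with an error.

config.BaseConfig.PrivValidatorListenAddr = "unix:///tmp/kms.sock" // node gets a *privval.IPCVal
// config.BaseConfig.PrivValidatorListenAddr = "tcp://127.0.0.1:26658" // node gets a *privval.TCPVal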
// splitAndTrimEmpty slices s into all subslices separated by sep and returns a
// slice of the string s with all leading and trailing Unicode code points
// contained in cutset removed. If sep is empty, SplitAndTrim splits after each

View File

@@ -3,23 +3,26 @@ package node
import (
"context"
"fmt"
"net"
"os"
"syscall"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/abci/example/kvstore"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/crypto/ed25519"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/privval"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/version"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/types"
tmtime "github.com/tendermint/tendermint/types/time"
"github.com/tendermint/tendermint/version"
)
func TestNodeStartStop(t *testing.T) {
@@ -27,17 +30,16 @@ func TestNodeStartStop(t *testing.T) {
// create & start node
n, err := DefaultNewNode(config, log.TestingLogger())
assert.NoError(t, err, "expected no err on DefaultNewNode")
err1 := n.Start()
if err1 != nil {
t.Error(err1)
}
require.NoError(t, err)
err = n.Start()
require.NoError(t, err)
t.Logf("Started node %v", n.sw.NodeInfo())
// wait for the node to produce a block
blockCh := make(chan interface{})
err = n.EventBus().Subscribe(context.Background(), "node_test", types.EventQueryNewBlock, blockCh)
assert.NoError(t, err)
require.NoError(t, err)
select {
case <-blockCh:
case <-time.After(10 * time.Second):
@@ -89,7 +91,7 @@ func TestNodeDelayedStop(t *testing.T) {
// create & start node
n, err := DefaultNewNode(config, log.TestingLogger())
n.GenesisDoc().GenesisTime = now.Add(5 * time.Second)
assert.NoError(t, err)
require.NoError(t, err)
n.Start()
startTime := tmtime.Now()
@@ -101,7 +103,7 @@ func TestNodeSetAppVersion(t *testing.T) {
// create & start node
n, err := DefaultNewNode(config, log.TestingLogger())
assert.NoError(t, err, "expected no err on DefaultNewNode")
require.NoError(t, err)
// default config uses the kvstore app
var appVersion version.Protocol = kvstore.ProtocolVersion
@@ -113,3 +115,80 @@ func TestNodeSetAppVersion(t *testing.T) {
// check version is set in node info
assert.Equal(t, n.nodeInfo.(p2p.DefaultNodeInfo).ProtocolVersion.App, appVersion)
}
func TestNodeSetPrivValTCP(t *testing.T) {
addr := "tcp://" + testFreeAddr(t)
config := cfg.ResetTestRoot("node_priv_val_tcp_test")
config.BaseConfig.PrivValidatorListenAddr = addr
rs := privval.NewRemoteSigner(
log.TestingLogger(),
config.ChainID(),
addr,
types.NewMockPV(),
ed25519.GenPrivKey(),
)
privval.RemoteSignerConnDeadline(5 * time.Millisecond)(rs)
go func() {
err := rs.Start()
if err != nil {
panic(err)
}
}()
defer rs.Stop()
n, err := DefaultNewNode(config, log.TestingLogger())
require.NoError(t, err)
assert.IsType(t, &privval.TCPVal{}, n.PrivValidator())
}
// address without a protocol must result in error
func TestPrivValidatorListenAddrNoProtocol(t *testing.T) {
addrNoPrefix := testFreeAddr(t)
config := cfg.ResetTestRoot("node_priv_val_tcp_test")
config.BaseConfig.PrivValidatorListenAddr = addrNoPrefix
_, err := DefaultNewNode(config, log.TestingLogger())
assert.Error(t, err)
}
func TestNodeSetPrivValIPC(t *testing.T) {
tmpfile := "/tmp/kms." + cmn.RandStr(6) + ".sock"
defer os.Remove(tmpfile) // clean up
config := cfg.ResetTestRoot("node_priv_val_tcp_test")
config.BaseConfig.PrivValidatorListenAddr = "unix://" + tmpfile
rs := privval.NewIPCRemoteSigner(
log.TestingLogger(),
config.ChainID(),
tmpfile,
types.NewMockPV(),
)
privval.IPCRemoteSignerConnDeadline(3 * time.Second)(rs)
done := make(chan struct{})
go func() {
defer close(done)
n, err := DefaultNewNode(config, log.TestingLogger())
require.NoError(t, err)
assert.IsType(t, &privval.IPCVal{}, n.PrivValidator())
}()
err := rs.Start()
require.NoError(t, err)
defer rs.Stop()
<-done
}
// testFreeAddr claims a free port so we don't block on listener being ready.
func testFreeAddr(t *testing.T) string {
ln, err := net.Listen("tcp", "127.0.0.1:0")
require.NoError(t, err)
defer ln.Close()
return fmt.Sprintf("127.0.0.1:%d", ln.Addr().(*net.TCPAddr).Port)
}

View File

@@ -84,7 +84,11 @@ type MConnection struct {
errored uint32
config MConnConfig
quit chan struct{}
// Closing quitSendRoutine will cause
// doneSendRoutine to close.
quitSendRoutine chan struct{}
doneSendRoutine chan struct{}
flushTimer *cmn.ThrottleTimer // flush writes as necessary but throttled.
pingTimer *cmn.RepeatTimer // send pings periodically
@@ -156,6 +160,7 @@ func NewMConnectionWithConfig(conn net.Conn, chDescs []*ChannelDescriptor, onRec
onReceive: onReceive,
onError: onError,
config: config,
created: time.Now(),
}
// Create channels
@@ -190,7 +195,8 @@ func (c *MConnection) OnStart() error {
if err := c.BaseService.OnStart(); err != nil {
return err
}
c.quit = make(chan struct{})
c.quitSendRoutine = make(chan struct{})
c.doneSendRoutine = make(chan struct{})
c.flushTimer = cmn.NewThrottleTimer("flush", c.config.FlushThrottle)
c.pingTimer = cmn.NewRepeatTimer("ping", c.config.PingInterval)
c.pongTimeoutCh = make(chan bool, 1)
@@ -200,15 +206,59 @@ func (c *MConnection) OnStart() error {
return nil
}
// OnStop implements BaseService
func (c *MConnection) OnStop() {
// FlushStop replicates the logic of OnStop.
// It additionally ensures that all successful
// .Send() calls will get flushed before closing
// the connection.
// NOTE: it is not safe to call this method more than once.
func (c *MConnection) FlushStop() {
c.BaseService.OnStop()
c.flushTimer.Stop()
c.pingTimer.Stop()
c.chStatsTimer.Stop()
if c.quit != nil {
close(c.quit)
if c.quitSendRoutine != nil {
close(c.quitSendRoutine)
// wait until the sendRoutine exits
// so we don't race on calling sendSomePacketMsgs
<-c.doneSendRoutine
}
// Send and flush all pending msgs.
// By now, IsRunning == false,
// so any concurrent attempts to send will fail.
// Since sendRoutine has exited, we can call this
// safely
eof := c.sendSomePacketMsgs()
for !eof {
eof = c.sendSomePacketMsgs()
}
c.flush()
// Now we can close the connection
c.conn.Close() // nolint: errcheck
// We can't close pong safely here because
// recvRoutine may write to it after we've stopped.
// Though it doesn't need to get closed at all,
// we close it @ recvRoutine.
// c.Stop()
}
// OnStop implements BaseService
func (c *MConnection) OnStop() {
select {
case <-c.quitSendRoutine:
// already quit via FlushStop
return
default:
}
c.BaseService.OnStop()
c.flushTimer.Stop()
c.pingTimer.Stop()
c.chStatsTimer.Stop()
close(c.quitSendRoutine)
c.conn.Close() // nolint: errcheck
// We can't close pong safely here because
@@ -269,7 +319,7 @@ func (c *MConnection) Send(chID byte, msgBytes []byte) bool {
default:
}
} else {
c.Logger.Error("Send failed", "channel", chID, "conn", c, "msgBytes", fmt.Sprintf("%X", msgBytes))
c.Logger.Debug("Send failed", "channel", chID, "conn", c, "msgBytes", fmt.Sprintf("%X", msgBytes))
}
return success
}
@@ -365,7 +415,8 @@ FOR_LOOP:
}
c.sendMonitor.Update(int(_n))
c.flush()
case <-c.quit:
case <-c.quitSendRoutine:
close(c.doneSendRoutine)
break FOR_LOOP
case <-c.send:
// Send some PacketMsgs

View File

@@ -36,6 +36,43 @@ func createMConnectionWithCallbacks(conn net.Conn, onReceive func(chID byte, msg
return c
}
func TestMConnectionSendFlushStop(t *testing.T) {
server, client := NetPipe()
defer server.Close() // nolint: errcheck
defer client.Close() // nolint: errcheck
clientConn := createTestMConnection(client)
err := clientConn.Start()
require.Nil(t, err)
defer clientConn.Stop()
msg := []byte("abc")
assert.True(t, clientConn.Send(0x01, msg))
aminoMsgLength := 14
// start the reader in a new routine, so we can flush
errCh := make(chan error)
go func() {
msgB := make([]byte, aminoMsgLength)
_, err := server.Read(msgB)
if err != nil {
t.Fatal(err)
}
errCh <- err
}()
// stop the conn - it should flush all conns
clientConn.FlushStop()
timer := time.NewTimer(3 * time.Second)
select {
case <-errCh:
case <-timer.C:
t.Error("timed out waiting for msgs to be read")
}
}
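A hedged usage sketch of the new API (conn is a started MConnection; the message bytes are placeholders): callers that need their already-queued sends on the wire call FlushStop instead of Stop, and per the NOTE on the method it must not be called more than once.

if conn.Send(0x01, []byte("final message")) {
	conn.FlushStop() // drain pending packets, then close the underlying conn
} else {
	conn.Stop() // the send was not accepted, so a plain stop is enough
}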
func TestMConnectionSend(t *testing.T) {
server, client := NetPipe()
defer server.Close() // nolint: errcheck

View File

@@ -25,6 +25,11 @@ func NewPeer() *peer {
return p
}
// FlushStop just calls Stop.
func (p *peer) FlushStop() {
p.Stop()
}
// ID always returns dummy.
func (p *peer) ID() p2p.ID {
return p2p.ID("dummy")

View File

@@ -6,7 +6,7 @@ import (
"fmt"
"io/ioutil"
crypto "github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/ed25519"
cmn "github.com/tendermint/tendermint/libs/common"
)

View File

@@ -62,7 +62,7 @@ func PrometheusMetrics(namespace string) *Metrics {
// NopMetrics returns no-op Metrics.
func NopMetrics() *Metrics {
return &Metrics{
Peers: discard.NewGauge(),
Peers: discard.NewGauge(),
PeerReceiveBytesTotal: discard.NewCounter(),
PeerSendBytesTotal: discard.NewCounter(),
PeerPendingSendBytes: discard.NewGauge(),

View File

@@ -175,6 +175,9 @@ func (na *NetAddress) Same(other interface{}) bool {
// String representation: <ID>@<IP>:<PORT>
func (na *NetAddress) String() string {
if na == nil {
return "<nil-NetAddress>"
}
if na.str == "" {
addrStr := na.DialString()
if na.ID != "" {
@@ -186,6 +189,9 @@ func (na *NetAddress) String() string {
}
func (na *NetAddress) DialString() string {
if na == nil {
return "<nil-NetAddress>"
}
return net.JoinHostPort(
na.IP.String(),
strconv.FormatUint(uint64(na.Port), 10),
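With the nil guards above, both stringers are now safe to call on a nil *NetAddress, which is mainly a convenience for logging. A minimal sketch, inside the p2p package with fmt imported:

var na *NetAddress
fmt.Println(na.String())     // prints "<nil-NetAddress>"
fmt.Println(na.DialString()) // prints "<nil-NetAddress>"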

View File

@@ -36,7 +36,7 @@ type nodeInfoAddress interface {
// nodeInfoTransport validates a nodeInfo and checks
// our compatibility with it. It's for use in the handshake.
type nodeInfoTransport interface {
ValidateBasic() error
Validate() error
CompatibleWith(other NodeInfo) error
}
@@ -103,7 +103,7 @@ func (info DefaultNodeInfo) ID() ID {
return info.ID_
}
// ValidateBasic checks the self-reported DefaultNodeInfo is safe.
// Validate checks the self-reported DefaultNodeInfo is safe.
// It returns an error if there
// are too many Channels, if there are any duplicate Channels,
// if the ListenAddr is malformed, or if the ListenAddr is a host name
@@ -116,7 +116,7 @@ func (info DefaultNodeInfo) ID() ID {
// International clients could then use punycode (or we could use
// url-encoding), and we just need to be careful with how we handle that in our
// clients. (e.g. off by default).
func (info DefaultNodeInfo) ValidateBasic() error {
func (info DefaultNodeInfo) Validate() error {
// ID is already validated.
@@ -230,3 +230,31 @@ func (info DefaultNodeInfo) NetAddress() *NetAddress {
}
return netAddr
}
//-----------------------------------------------------------
// These methods are for Protobuf Compatibility
// Size returns the size of the amino encoding, in bytes.
func (info *DefaultNodeInfo) Size() int {
bs, _ := info.Marshal()
return len(bs)
}
// Marshal returns the amino encoding.
func (info *DefaultNodeInfo) Marshal() ([]byte, error) {
return cdc.MarshalBinaryBare(info)
}
// MarshalTo calls Marshal and copies to the given buffer.
func (info *DefaultNodeInfo) MarshalTo(data []byte) (int, error) {
bs, err := info.Marshal()
if err != nil {
return -1, err
}
return copy(data, bs), nil
}
// Unmarshal deserializes from amino encoded form.
func (info *DefaultNodeInfo) Unmarshal(bs []byte) error {
return cdc.UnmarshalBinaryBare(bs, info)
}
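The comment above labels these methods protobuf-compatibility shims; functionally they are just the amino round trip. A minimal sketch, assuming info is an addressable DefaultNodeInfo value:

bs, err := info.Marshal() // amino-encode
if err != nil {
	panic(err)
}
var decoded DefaultNodeInfo
if err := decoded.Unmarshal(bs); err != nil { // amino-decode back
	panic(err)
}
// decoded now mirrors info, and info.Size() == len(bs).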

View File

@@ -12,7 +12,7 @@ func TestNodeInfoValidate(t *testing.T) {
// empty fails
ni := DefaultNodeInfo{}
assert.Error(t, ni.ValidateBasic())
assert.Error(t, ni.Validate())
channels := make([]byte, maxNumChannels)
for i := 0; i < maxNumChannels; i++ {
@@ -68,13 +68,13 @@ func TestNodeInfoValidate(t *testing.T) {
// test case passes
ni = testNodeInfo(nodeKey.ID(), name).(DefaultNodeInfo)
ni.Channels = channels
assert.NoError(t, ni.ValidateBasic())
assert.NoError(t, ni.Validate())
for _, tc := range testCases {
ni := testNodeInfo(nodeKey.ID(), name).(DefaultNodeInfo)
ni.Channels = channels
tc.malleateNodeInfo(&ni)
err := ni.ValidateBasic()
err := ni.Validate()
if tc.expectErr {
assert.Error(t, err, tc.testName)
} else {

View File

@@ -8,7 +8,6 @@ import (
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/config"
tmconn "github.com/tendermint/tendermint/p2p/conn"
)
@@ -17,6 +16,7 @@ const metricsTickerDuration = 10 * time.Second
// Peer is an interface representing a peer connected on a reactor.
type Peer interface {
cmn.Service
FlushStop()
ID() ID // peer's cryptographic ID
RemoteIP() net.IP // remote IP of the connection
@@ -41,7 +41,6 @@ type Peer interface {
type peerConn struct {
outbound bool
persistent bool
config *config.P2PConfig
conn net.Conn // source connection
originalAddr *NetAddress // nil for inbound connections
@@ -52,7 +51,6 @@ type peerConn struct {
func newPeerConn(
outbound, persistent bool,
config *config.P2PConfig,
conn net.Conn,
originalAddr *NetAddress,
) peerConn {
@@ -60,7 +58,6 @@ func newPeerConn(
return peerConn{
outbound: outbound,
persistent: persistent,
config: config,
conn: conn,
originalAddr: originalAddr,
}
@@ -184,6 +181,15 @@ func (p *peer) OnStart() error {
return nil
}
// FlushStop mimics OnStop but additionally ensures that all successful
// .Send() calls will get flushed before closing the connection.
// NOTE: it is not safe to call this method more than once.
func (p *peer) FlushStop() {
p.metricsTicker.Stop()
p.BaseService.OnStop()
p.mconn.FlushStop() // stop everything and close the conn
}
// OnStop implements BaseService.
func (p *peer) OnStop() {
p.metricsTicker.Stop()

View File

@@ -98,13 +98,15 @@ func (ps *PeerSet) Get(peerKey ID) Peer {
}
// Remove discards peer by its Key, if the peer was previously memoized.
func (ps *PeerSet) Remove(peer Peer) {
// Returns true if the peer was removed, and false if it was not found in the set.
func (ps *PeerSet) Remove(peer Peer) bool {
ps.mtx.Lock()
defer ps.mtx.Unlock()
item := ps.lookup[peer.ID()]
if item == nil {
return
return false
}
index := item.index
@@ -116,7 +118,7 @@ func (ps *PeerSet) Remove(peer Peer) {
if index == len(ps.list)-1 {
ps.list = newList
delete(ps.lookup, peer.ID())
return
return true
}
// Replace the popped item with the last item in the old list.
@@ -127,6 +129,7 @@ func (ps *PeerSet) Remove(peer Peer) {
lastPeerItem.index = index
ps.list = newList
delete(ps.lookup, peer.ID())
return true
}
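The new boolean return lets callers keep their bookkeeping consistent with what actually happened; the switch changes later in this diff use it to decrement the peers gauge only when a peer was really removed:

if sw.peers.Remove(peer) {
	sw.metrics.Peers.Add(float64(-1))
}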
// Size returns the number of unique items in the peerSet.

View File

@@ -18,6 +18,7 @@ type mockPeer struct {
id ID
}
func (mp *mockPeer) FlushStop() { mp.Stop() }
func (mp *mockPeer) TrySend(chID byte, msgBytes []byte) bool { return true }
func (mp *mockPeer) Send(chID byte, msgBytes []byte) bool { return true }
func (mp *mockPeer) NodeInfo() NodeInfo { return DefaultNodeInfo{} }
@@ -59,13 +60,15 @@ func TestPeerSetAddRemoveOne(t *testing.T) {
n := len(peerList)
// 1. Test removing from the front
for i, peerAtFront := range peerList {
peerSet.Remove(peerAtFront)
removed := peerSet.Remove(peerAtFront)
assert.True(t, removed)
wantSize := n - i - 1
for j := 0; j < 2; j++ {
assert.Equal(t, false, peerSet.Has(peerAtFront.ID()), "#%d Run #%d: failed to remove peer", i, j)
assert.Equal(t, wantSize, peerSet.Size(), "#%d Run #%d: failed to remove peer and decrement size", i, j)
// Test the route of removing the now non-existent element
peerSet.Remove(peerAtFront)
removed := peerSet.Remove(peerAtFront)
assert.False(t, removed)
}
}
@@ -80,7 +83,8 @@ func TestPeerSetAddRemoveOne(t *testing.T) {
// b) In reverse, remove each element
for i := n - 1; i >= 0; i-- {
peerAtEnd := peerList[i]
peerSet.Remove(peerAtEnd)
removed := peerSet.Remove(peerAtEnd)
assert.True(t, removed)
assert.Equal(t, false, peerSet.Has(peerAtEnd.ID()), "#%d: failed to remove item at end", i)
assert.Equal(t, i, peerSet.Size(), "#%d: differing sizes after peerSet.Remove(atEndPeer)", i)
}
@@ -104,7 +108,8 @@ func TestPeerSetAddRemoveMany(t *testing.T) {
}
for i, peer := range peers {
peerSet.Remove(peer)
removed := peerSet.Remove(peer)
assert.True(t, removed)
if peerSet.Has(peer.ID()) {
t.Errorf("Failed to remove peer")
}

View File

@@ -10,7 +10,7 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
crypto "github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/ed25519"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/libs/log"

View File

@@ -13,7 +13,7 @@ import (
"sync"
"time"
crypto "github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/p2p"
)
@@ -162,26 +162,29 @@ func (a *addrBook) FilePath() string {
// AddOurAddress one of our addresses.
func (a *addrBook) AddOurAddress(addr *p2p.NetAddress) {
a.Logger.Info("Add our address to book", "addr", addr)
a.mtx.Lock()
defer a.mtx.Unlock()
a.Logger.Info("Add our address to book", "addr", addr)
a.ourAddrs[addr.String()] = struct{}{}
a.mtx.Unlock()
}
// OurAddress returns true if it is our address.
func (a *addrBook) OurAddress(addr *p2p.NetAddress) bool {
a.mtx.Lock()
defer a.mtx.Unlock()
_, ok := a.ourAddrs[addr.String()]
a.mtx.Unlock()
return ok
}
func (a *addrBook) AddPrivateIDs(IDs []string) {
a.mtx.Lock()
defer a.mtx.Unlock()
for _, id := range IDs {
a.privateIDs[p2p.ID(id)] = struct{}{}
}
a.mtx.Unlock()
}
// AddAddress implements AddrBook
@@ -191,6 +194,7 @@ func (a *addrBook) AddPrivateIDs(IDs []string) {
func (a *addrBook) AddAddress(addr *p2p.NetAddress, src *p2p.NetAddress) error {
a.mtx.Lock()
defer a.mtx.Unlock()
return a.addAddress(addr, src)
}
@@ -198,6 +202,7 @@ func (a *addrBook) AddAddress(addr *p2p.NetAddress, src *p2p.NetAddress) error {
func (a *addrBook) RemoveAddress(addr *p2p.NetAddress) {
a.mtx.Lock()
defer a.mtx.Unlock()
ka := a.addrLookup[addr.ID]
if ka == nil {
return
@@ -211,14 +216,16 @@ func (a *addrBook) RemoveAddress(addr *p2p.NetAddress) {
func (a *addrBook) IsGood(addr *p2p.NetAddress) bool {
a.mtx.Lock()
defer a.mtx.Unlock()
return a.addrLookup[addr.ID].isOld()
}
// HasAddress returns true if the address is in the book.
func (a *addrBook) HasAddress(addr *p2p.NetAddress) bool {
a.mtx.Lock()
defer a.mtx.Unlock()
ka := a.addrLookup[addr.ID]
a.mtx.Unlock()
return ka != nil
}
@@ -292,6 +299,7 @@ func (a *addrBook) PickAddress(biasTowardsNewAddrs int) *p2p.NetAddress {
func (a *addrBook) MarkGood(addr *p2p.NetAddress) {
a.mtx.Lock()
defer a.mtx.Unlock()
ka := a.addrLookup[addr.ID]
if ka == nil {
return
@@ -306,6 +314,7 @@ func (a *addrBook) MarkGood(addr *p2p.NetAddress) {
func (a *addrBook) MarkAttempt(addr *p2p.NetAddress) {
a.mtx.Lock()
defer a.mtx.Unlock()
ka := a.addrLookup[addr.ID]
if ka == nil {
return
@@ -461,12 +470,13 @@ ADDRS_LOOP:
// ListOfKnownAddresses returns the new and old addresses.
func (a *addrBook) ListOfKnownAddresses() []*knownAddress {
addrs := []*knownAddress{}
a.mtx.Lock()
defer a.mtx.Unlock()
addrs := []*knownAddress{}
for _, addr := range a.addrLookup {
addrs = append(addrs, addr.copy())
}
a.mtx.Unlock()
return addrs
}
@@ -476,6 +486,7 @@ func (a *addrBook) ListOfKnownAddresses() []*knownAddress {
func (a *addrBook) Size() int {
a.mtx.Lock()
defer a.mtx.Unlock()
return a.size()
}

View File

@@ -208,25 +208,38 @@ func (r *PEXReactor) Receive(chID byte, src Peer, msgBytes []byte) {
switch msg := msg.(type) {
case *pexRequestMessage:
// Check we're not receiving too many requests
if err := r.receiveRequest(src); err != nil {
r.Switch.StopPeerForError(src, err)
return
}
// Seeds disconnect after sending a batch of addrs
// NOTE: this is a prime candidate for amplification attacks
// NOTE: this is a prime candidate for amplification attacks,
// so it's important we
// 1) restrict how frequently peers can request
// 2) limit the output size
if r.config.SeedMode {
// If we're a seed and this is an inbound peer,
// respond once and disconnect.
if r.config.SeedMode && !src.IsOutbound() {
id := string(src.ID())
v := r.lastReceivedRequests.Get(id)
if v != nil {
// FlushStop/StopPeer are already
// running in a go-routine.
return
}
r.lastReceivedRequests.Set(id, time.Now())
// Send addrs and disconnect
r.SendAddrs(src, r.book.GetSelectionWithBias(biasToSelectNewPeers))
go func() {
// TODO Fix properly #2092
time.Sleep(time.Second * 5)
// In a go-routine so it doesn't block .Receive.
src.FlushStop()
r.Switch.StopPeerGracefully(src)
}()
} else {
// Check we're not receiving requests too frequently.
if err := r.receiveRequest(src); err != nil {
r.Switch.StopPeerForError(src, err)
return
}
r.SendAddrs(src, r.book.GetSelection())
}

View File

@@ -12,7 +12,7 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
crypto "github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/ed25519"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/libs/log"
@@ -387,6 +387,7 @@ func newMockPeer() mockPeer {
return mp
}
func (mp mockPeer) FlushStop() { mp.Stop() }
func (mp mockPeer) ID() p2p.ID { return mp.addr.ID }
func (mp mockPeer) IsOutbound() bool { return mp.outbound }
func (mp mockPeer) IsPersistent() bool { return mp.persistent }

View File

@@ -27,6 +27,17 @@ const (
reconnectBackOffBaseSeconds = 3
)
// MConnConfig returns an MConnConfig with fields updated
// from the P2PConfig.
func MConnConfig(cfg *config.P2PConfig) conn.MConnConfig {
mConfig := conn.DefaultMConnConfig()
mConfig.FlushThrottle = cfg.FlushThrottleTimeout
mConfig.SendRate = cfg.SendRate
mConfig.RecvRate = cfg.RecvRate
mConfig.MaxPacketMsgPayloadSize = cfg.MaxPacketMsgPayloadSize
return mConfig
}
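This helper centralizes what NewSwitch used to compute inline (see the removed block below) and mirrors how NewNode now builds the transport earlier in this diff. A condensed usage sketch, as seen from outside the package:

mConnConfig := p2p.MConnConfig(config.P2P)
transport := p2p.NewMultiplexTransport(nodeInfo, *nodeKey, mConnConfig)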
//-----------------------------------------------------------------------------
// An AddrBook represents an address book from the pex package, which is used
@@ -70,8 +81,6 @@ type Switch struct {
filterTimeout time.Duration
peerFilters []PeerFilterFunc
mConfig conn.MConnConfig
rng *cmn.Rand // seed for randomizing dial times and orders
metrics *Metrics
@@ -102,14 +111,6 @@ func NewSwitch(
// Ensure we have a completely undeterministic PRNG.
sw.rng = cmn.NewRand()
mConfig := conn.DefaultMConnConfig()
mConfig.FlushThrottle = cfg.FlushThrottleTimeout
mConfig.SendRate = cfg.SendRate
mConfig.RecvRate = cfg.RecvRate
mConfig.MaxPacketMsgPayloadSize = cfg.MaxPacketMsgPayloadSize
sw.mConfig = mConfig
sw.BaseService = *cmn.NewBaseService(nil, "P2P Switch", sw)
for _, option := range options {
@@ -210,7 +211,9 @@ func (sw *Switch) OnStop() {
// Stop peers
for _, p := range sw.peers.List() {
p.Stop()
sw.peers.Remove(p)
if sw.peers.Remove(p) {
sw.metrics.Peers.Add(float64(-1))
}
}
// Stop reactors
@@ -298,8 +301,9 @@ func (sw *Switch) StopPeerGracefully(peer Peer) {
}
func (sw *Switch) stopAndRemovePeer(peer Peer, reason interface{}) {
sw.peers.Remove(peer)
sw.metrics.Peers.Add(float64(-1))
if sw.peers.Remove(peer) {
sw.metrics.Peers.Add(float64(-1))
}
peer.Stop()
for _, reactor := range sw.reactors {
reactor.RemovePeer(peer, reason)
@@ -504,6 +508,12 @@ func (sw *Switch) acceptRoutine() {
"err", err,
"numPeers", sw.peers.Size(),
)
// We could instead have a retry loop around the acceptRoutine,
// but that would need to stop and let the node shutdown eventually.
// So might as well panic and let process managers restart the node.
// There's no point in letting the node run without the acceptRoutine,
// since it won't be able to accept new connections.
panic(fmt.Errorf("accept routine exited: %v", err))
}
break

View File

@@ -3,10 +3,17 @@ package p2p
import (
"bytes"
"fmt"
"io/ioutil"
"net/http"
"net/http/httptest"
"regexp"
"strconv"
"sync"
"testing"
"time"
stdprometheus "github.com/prometheus/client_golang/prometheus"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -335,6 +342,54 @@ func TestSwitchStopsNonPersistentPeerOnError(t *testing.T) {
assert.False(p.IsRunning())
}
func TestSwitchStopPeerForError(t *testing.T) {
s := httptest.NewServer(stdprometheus.UninstrumentedHandler())
defer s.Close()
scrapeMetrics := func() string {
resp, _ := http.Get(s.URL)
buf, _ := ioutil.ReadAll(resp.Body)
return string(buf)
}
namespace, subsystem, name := config.TestInstrumentationConfig().Namespace, MetricsSubsystem, "peers"
re := regexp.MustCompile(namespace + `_` + subsystem + `_` + name + ` ([0-9\.]+)`)
peersMetricValue := func() float64 {
matches := re.FindStringSubmatch(scrapeMetrics())
f, _ := strconv.ParseFloat(matches[1], 64)
return f
}
p2pMetrics := PrometheusMetrics(namespace)
// make two connected switches
sw1, sw2 := MakeSwitchPair(t, func(i int, sw *Switch) *Switch {
// set metrics on sw1
if i == 0 {
opt := WithMetrics(p2pMetrics)
opt(sw)
}
return initSwitchFunc(i, sw)
})
assert.Equal(t, len(sw1.Peers().List()), 1)
assert.EqualValues(t, 1, peersMetricValue())
// send messages to the peer from sw1
p := sw1.Peers().List()[0]
p.Send(0x1, []byte("here's a message to send"))
// stop sw2. this should cause the p to fail,
// which results in calling StopPeerForError internally
sw2.Stop()
// now call StopPeerForError explicitly, eg. from a reactor
sw1.StopPeerForError(p, fmt.Errorf("some err"))
assert.Equal(t, len(sw1.Peers().List()), 0)
assert.EqualValues(t, 0, peersMetricValue())
}
func TestSwitchReconnectsToPersistentPeer(t *testing.T) {
assert, require := assert.New(t), require.New(t)

Some files were not shown because too many files have changed in this diff.