mirror of https://github.com/fluencelabs/tendermint (synced 2025-07-31 20:21:56 +00:00)

Compare commits: v0.19.3...v0.19.6-rc (106 commits)
Commit SHA1s:

8e1e2bd10a
9671c0d4fe
a885af0826
3a947b0117
caf5afc084
2aa5285c66
b166831fb5
423fef1416
b4d10b5b91
6f1bfb6280
5e7177053c
a0201e7862
126ddca1a6
186d38dd8a
01fd102dba
e11f3167ff
7d98cfd3d6
4848e88737
60d7486de2
229c18f1bd
91b6d3f18c
20e9dd0737
7b02b5b66b
0cd92a4948
747f28f85f
a9d0adbdef
3485edf4f5
c6f612bfc3
bb9aa85d22
c4fef499b6
b77d5344fc
21f5f3faa7
bf6527fc59
ed8d9951c0
97b39f340e
383c255f35
931fb385d7
018e096748
ee4eb59355
082a02e6d1
2c40966e46
0a9dc9f875
87cefb724d
6701dba876
442bbe592f
301aa92f9c
52f27686ef
6f9867cba6
02615c8695
2df137193c
1ef415728d
773e3917ec
26fdfe10fd
d76e2dc3ff
420f925a4d
d7d12c8030
6c4a26f248
2a26c47da5
aabe96f1af
0e1f730fbb
0b68ec4b8e
ca120798e4
595fc24c56
4611cf44f0
d596ed1bc2
0fb33ca91d
35428ceb53
de8d4325de
5a041baa36
202a43a5af
2987158a65
c9001d5a11
90446261f3
ae572b9038
0908e668bd
e0dbc3673c
545990f845
19ccd1842f
b4d6bf7697
1854ce41fc
547e8223b9
8e46df14e7
8d60a5a7bd
5115618550
a6b74b82d1
e5220360c5
b5c4098c53
bc8768cfea
d832bde280
5e3a23df6d
6f7333fd5f
58e3246ffc
bbe1355957
7c14fa820d
0d93424c6a
efc01cf582
754be1887c
775b015173
b698a9febc
c5f45275ec
77f09f5b5e
1fe41be929
68a0b3f95b
b1f3c11948
e1a3f16fa4
34f5d439ee
@@ -153,7 +153,7 @@ jobs:
      - checkout
      - run: mkdir -p $GOPATH/src/github.com/tendermint
      - run: ln -sf /home/circleci/project $GOPATH/src/github.com/tendermint/tendermint
      - run: bash test/circleci/p2p.sh
      - run: bash test/p2p/circleci.sh

  upload_coverage:
    <<: *defaults
.github/ISSUE_TEMPLATE (vendored, 4 changed lines)

@@ -19,10 +19,6 @@ in a case of bug.

**ABCI app** (name for built-in, URL for self-written if it's publicly available):

**Merkleeyes version** (use `git rev-parse --verify HEAD`, skip if you don't use it):

**Environment**:
- **OS** (e.g. from /etc/os-release):
- **Install tools**:
.gitignore (vendored, 4 changed lines)

@@ -5,7 +5,6 @@
.DS_Store
build/*
rpc/test/.tendermint
.debora
.tendermint
remote_dump
.revision
@@ -13,7 +12,6 @@ vendor
.vagrant
test/p2p/data/
test/logs
.glide
coverage.txt
docs/_build
docs/tools
@@ -25,3 +23,5 @@ scripts/cutWALUntil/cutWALUntil

.idea/
*.iml

libs/pubsub/query/fuzz_test/output
CHANGELOG.md (52 changed lines)

@@ -1,5 +1,57 @@
# Changelog

## 0.19.6

IMPROVEMENTS:

- [consensus] Consensus reactor now receives events from a separate synchronous event bus,
  which is not dependant on external RPC load

BUG FIX:

- [evidence] Dont send peers evidence from heights they haven't synced to yet
- [p2p] Refuse connections to more than one peer with the same IP
- [docs] Various fixes

## 0.19.5

*May 20th, 2018*

BREAKING CHANGES

- [rpc/client] TxSearch and UnconfirmedTxs have new arguments (see below)
- [rpc/client] TxSearch returns ResultTxSearch
- [version] Breaking changes to Go APIs will not be reflected in breaking
  version change, but will be included in changelog.

FEATURES

- [rpc] `/tx_search` takes `page` (starts at 1) and `per_page` (max 100, default 30) args to paginate results
- [rpc] `/unconfirmed_txs` takes `limit` (max 100, default 30) arg to limit the output
- [config] `mempool.size` and `mempool.cache_size` options

IMPROVEMENTS

- [docs] Lots of updates
- [consensus] Only Fsync() the WAL before executing msgs from ourselves

BUG FIXES

- [mempool] Enforce upper bound on number of transactions

## 0.19.4 (May 17th, 2018)

IMPROVEMENTS

- [state] Improve tx indexing by using batches
- [consensus, state] Improve logging (more consensus logs, fewer tx logs)
- [spec] Moved to `docs/spec` (TODO cleanup the rest of the docs ...)

BUG FIXES

- [consensus] Fix issue #1575 where a late proposer can get stuck

## 0.19.3 (May 14th, 2018)

FEATURES
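The 0.19.5 entries above add pagination to `/tx_search` and a `limit` to `/unconfirmed_txs`. Below is a minimal sketch of calling the paginated endpoint over plain HTTP, assuming a local node with transaction indexing enabled and the default RPC port of this release series (46657); the query string itself is purely illustrative.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Page through /tx_search as described in the 0.19.5 changelog. The RPC
	// address and the query expression are assumptions for illustration.
	params := url.Values{}
	params.Set("query", `"tx.height>1"`) // query expressions are quoted strings
	params.Set("page", "1")              // pages start at 1
	params.Set("per_page", "30")         // default 30, max 100

	resp, err := http.Get("http://localhost:46657/tx_search?" + params.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // JSON-RPC envelope around ResultTxSearch
}
```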
@@ -17,7 +17,7 @@
# Quick reference

* **Where to get help:**
  https://tendermint.com/community
  https://cosmos.network/community

* **Where to file issues:**
  https://github.com/tendermint/tendermint/issues
@@ -37,25 +37,29 @@ To get started developing applications, see the [application developers guide](h

## Start one instance of the Tendermint core with the `kvstore` app

A very simple example of a built-in app and Tendermint core in one container.
A quick example of a built-in app and Tendermint core in one container.

```
docker run -it --rm -v "/tmp:/tendermint" tendermint/tendermint init
docker run -it --rm -v "/tmp:/tendermint" tendermint/tendermint node --proxy_app=kvstore
```

## mintnet-kubernetes
# Local cluster

If you want to see many containers talking to each other, consider using [mintnet-kubernetes](https://github.com/tendermint/tools/tree/master/mintnet-kubernetes), which is a tool for running Tendermint-based applications on a Kubernetes cluster.
To run a 4-node network, see the `Makefile` in the root of [the repo](https://github.com/tendermint/tendermint/master/Makefile) and run:

```
make build-linux
make build-docker-localnode
make localnet-start
```

Note that this will build and use a different image than the ones provided here.

# License

View [license information](https://raw.githubusercontent.com/tendermint/tendermint/master/LICENSE) for the software contained in this image.
- Tendermint's license is [Apache 2.0](https://github.com/tendermint/tendermint/master/LICENSE).

# User Feedback
# Contributing

## Contributing

You are invited to contribute new features, fixes, or updates, large or small; we are always thrilled to receive pull requests, and do our best to process them as fast as we can.

Before you start to code, we recommend discussing your plans through a [GitHub](https://github.com/tendermint/tendermint/issues) issue, especially for more ambitious contributions. This gives other contributors a chance to point you in the right direction, give you feedback on your design, and help you find out if someone else is working on the same thing.
Contributions are most welcome! See the [contributing file](https://github.com/tendermint/tendermint/blob/master/CONTRIBUTING.md) for more information.
Gopkg.lock (generated, 4 changed lines)

@@ -281,8 +281,6 @@
    "flowrate",
    "log",
    "merkle",
    "pubsub",
    "pubsub/query",
    "test"
  ]
  revision = "cc5f287c4798ffe88c04d02df219ecb6932080fd"
@@ -384,6 +382,6 @@
[solve-meta]
  analyzer-name = "dep"
  analyzer-version = 1
  inputs-digest = "52a0dcbebdf8714612444914cfce59a3af8c47c4453a2d43c4ccc5ff1a91d8ea"
  inputs-digest = "d85c98dcac32cc1fe05d006aa75e8985f6447a150a041b972a673a65e7681da9"
  solver-name = "gps-cdcl"
  solver-version = 1
Makefile (10 changed lines)

@@ -193,8 +193,12 @@ build-docker:
build-linux:
	GOOS=linux GOARCH=amd64 $(MAKE) build

build-docker-localnode:
	cd networks/local
	make

# Run a 4-node testnet locally
localnet-start:
localnet-start: localnet-stop
	@if ! [ -f build/node0/config/genesis.json ]; then docker run --rm -v $(CURDIR)/build:/tendermint:Z tendermint/localnode testnet --v 4 --o . --populate-persistent-peers --starting-ip-address 192.167.10.2 ; fi
	docker-compose up

@@ -212,7 +216,7 @@ sentry-start:
	cd networks/remote/terraform && terraform init && terraform apply -var DO_API_TOKEN="$(DO_API_TOKEN)" -var SSH_KEY_FILE="$(HOME)/.ssh/id_rsa.pub"
	@if ! [ -f $(CURDIR)/build/node0/config/genesis.json ]; then docker run --rm -v $(CURDIR)/build:/tendermint:Z tendermint/localnode testnet --v 0 --n 4 --o . ; fi
	cd networks/remote/ansible && ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i inventory/digital_ocean.py -l sentrynet install.yml
	@echo "Next step: Add your validator setup in the genesis.json and config.tml files and run \"make server-config\". (Public key of validator, chain ID, peer IP and node ID.)"
	@echo "Next step: Add your validator setup in the genesis.json and config.tml files and run \"make sentry-config\". (Public key of validator, chain ID, peer IP and node ID.)"

# Configuration management
sentry-config:
@@ -225,5 +229,5 @@ sentry-stop:
# To avoid unintended conflicts with file names, always add to .PHONY
# unless there is a reason not to.
# https://www.gnu.org/software/make/manual/html_node/Phony-Targets.html
.PHONY: check build build_race dist install check_tools get_tools update_tools get_vendor_deps draw_deps test_cover test_apps test_persistence test_p2p test test_race test_integrations test_release test100 vagrant_test fmt build-linux localnet-start localnet-stop build-docker sentry-start sentry-config sentry-stop
.PHONY: check build build_race dist install check_tools get_tools update_tools get_vendor_deps draw_deps test_cover test_apps test_persistence test_p2p test test_race test_integrations test_release test100 vagrant_test fmt build-linux localnet-start localnet-stop build-docker build-docker-localnode sentry-start sentry-config sentry-stop
README.md (28 changed lines)

@@ -24,9 +24,14 @@ _NOTE: This is alpha software. Please contact us if you intend to run it in prod
Tendermint Core is Byzantine Fault Tolerant (BFT) middleware that takes a state transition machine - written in any programming language -
and securely replicates it on many machines.

For more information, from introduction to installation and application development, [Read The Docs](https://tendermint.readthedocs.io/en/master/).
For protocol details, see [the specification](/docs/spec).

For protocol details, see [the specification](./docs/specification/new-spec).

## Security

To report a security vulnerability, see our [bug bounty
program](https://tendermint.com/security).

For examples of the kinds of bugs we're looking for, see [SECURITY.md](SECURITY.md)

## Minimum requirements

@@ -36,19 +41,20 @@ Go version | Go1.9 or higher

## Install

To download pre-built binaries, see our [downloads page](https://tendermint.com/downloads).
See the [install instructions](/docs/install.rst)

To install from source, you should be able to:
## Quick Start

`go get -u github.com/tendermint/tendermint/cmd/tendermint`

For more details (or if it fails), [read the docs](https://tendermint.readthedocs.io/en/master/install.html).
- [Single node](/docs/using-tendermint.rst)
- [Local cluster using docker-compose](/networks/local)
- [Remote cluster using terraform and ansible](/docs/terraform-and-ansible.rst)
- [Join the public testnet](https://cosmos.network/testnet)

## Resources

### Tendermint Core

To use Tendermint, build apps on it, or develop it, [Read The Docs](https://tendermint.readthedocs.io/en/master/).
For more, [Read The Docs](https://tendermint.readthedocs.io/en/master/).
Additional information about some - and eventually all - of the sub-projects below, can be found at Read The Docs.

### Sub-projects

@@ -88,7 +94,11 @@ According to SemVer, anything in the public API can change at any time before ve

To provide some stability to Tendermint users in these 0.X.X days, the MINOR version is used
to signal breaking changes across a subset of the total public API. This subset includes all
interfaces exposed to other processes (cli, rpc, p2p, etc.), as well as parts of the following packages:
interfaces exposed to other processes (cli, rpc, p2p, etc.), but does not
include the in-process Go APIs.

That said, breaking changes in the following packages will be documented in the
CHANGELOG even if they don't lead to MINOR version bumps:

- types
- rpc/client
ROADMAP.md (new file, 23 lines)

@@ -0,0 +1,23 @@
# Roadmap

BREAKING CHANGES:
- Better support for injecting randomness
- Upgrade consensus for more real-time use of evidence

FEATURES:
- Use the chain as its own CA for nodes and validators
- Tooling to run multiple blockchains/apps, possibly in a single process
- State syncing (without transaction replay)
- Add authentication and rate-limitting to the RPC

IMPROVEMENTS:
- Improve subtleties around mempool caching and logic
- Consensus optimizations:
  - cache block parts for faster agreement after round changes
  - propagate block parts rarest first
- Better testing of the consensus state machine (ie. use a DSL)
- Auto compiled serialization/deserialization code instead of go-wire reflection

BUG FIXES:
- Graceful handling/recovery for apps that have non-determinism or fail to halt
- Graceful handling/recovery for violations of safety, or liveness
SECURITY.md (new file, 71 lines)

@@ -0,0 +1,71 @@
# Security

As part of our [Coordinated Vulnerability Disclosure
Policy](https://tendermint.com/security), we operate a bug bounty.
See the policy for more details on submissions and rewards.

Here is a list of examples of the kinds of bugs we're most interested in:

## Specification

- Conceptual flaws
- Ambiguities, inconsistencies, or incorrect statements
- Mis-match between specification and implementation of any component

## Consensus

Assuming less than 1/3 of the voting power is Byzantine (malicious):

- Validation of blockchain data structures, including blocks, block parts,
  votes, and so on
- Execution of blocks
- Validator set changes
- Proposer round robin
- Two nodes committing conflicting blocks for the same height (safety failure)
- A correct node signing conflicting votes
- A node halting (liveness failure)
- Syncing new and old nodes

## Networking

- Authenticated encryption (MITM, information leakage)
- Eclipse attacks
- Sybil attacks
- Long-range attacks
- Denial-of-Service

## RPC

- Write-access to anything besides sending transactions
- Denial-of-Service
- Leakage of secrets

## Denial-of-Service

Attacks may come through the P2P network or the RPC:

- Amplification attacks
- Resource abuse
- Deadlocks and race conditions
- Panics and unhandled errors

## Libraries

- Serialization (Amino)
- Reading/Writing files and databases
- Logging and monitoring

## Cryptography

- Elliptic curves for validator signatures
- Hash algorithms and Merkle trees for block validation
- Authenticated encryption for P2P connections

## Light Client

- Validation of blockchain data structures
- Correctly validating an incorrect proof
- Incorrectly validating a correct proof
- Syncing validator set changes
Vagrantfile (vendored, 34 changed lines)

@@ -10,31 +10,37 @@ Vagrant.configure("2") do |config|
  end

  config.vm.provision "shell", inline: <<-SHELL
    # add docker repo
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
    add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable"

    # and golang 1.9 support
    # official repo doesn't have race detection runtime...
    # add-apt-repository ppa:gophers/archive
    add-apt-repository ppa:longsleep/golang-backports
    apt-get update

    # install base requirements
    apt-get update
    apt-get install -y --no-install-recommends wget curl jq zip \
      make shellcheck bsdmainutils psmisc
    apt-get install -y docker-ce golang-1.9-go
    apt-get install -y language-pack-en

    # install docker
    apt-get install -y --no-install-recommends apt-transport-https \
      ca-certificates curl software-properties-common
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
    add-apt-repository \
      "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
      $(lsb_release -cs) \
      stable"
    apt-get install -y docker-ce
    usermod -a -G docker vagrant

    # install go
    wget -q https://dl.google.com/go/go1.10.1.linux-amd64.tar.gz
    tar -xvf go1.10.1.linux-amd64.tar.gz
    mv go /usr/local
    rm -f go1.10.1.linux-amd64.tar.gz

    # cleanup
    apt-get autoremove -y

    # needed for docker
    usermod -a -G docker vagrant

    # set env variables
    echo 'export PATH=$PATH:/usr/lib/go-1.9/bin:/home/vagrant/go/bin' >> /home/vagrant/.bash_profile
    echo 'export GOROOT=/usr/local/go' >> /home/vagrant/.bash_profile
    echo 'export GOPATH=/home/vagrant/go' >> /home/vagrant/.bash_profile
    echo 'export PATH=$PATH:$GOROOT/bin:$GOPATH/bin' >> /home/vagrant/.bash_profile
    echo 'export LC_ALL=en_US.UTF-8' >> /home/vagrant/.bash_profile
    echo 'cd go/src/github.com/tendermint/tendermint' >> /home/vagrant/.bash_profile
@@ -1,6 +1,7 @@
package blockchain

import (
	"net"
	"testing"

	cmn "github.com/tendermint/tmlibs/common"
@@ -204,3 +205,4 @@ func (tp *bcrTestPeer) IsOutbound() bool { return false }
func (tp *bcrTestPeer) IsPersistent() bool { return true }
func (tp *bcrTestPeer) Get(s string) interface{} { return s }
func (tp *bcrTestPeer) Set(string, interface{}) {}
func (tp *bcrTestPeer) RemoteIP() net.IP { return []byte{127, 0, 0, 1} }
@@ -97,7 +97,7 @@ func TestBlockStoreSaveLoadBlock(t *testing.T) {

	incompletePartSet := types.NewPartSetFromHeader(types.PartSetHeader{Total: 2})
	uncontiguousPartSet := types.NewPartSetFromHeader(types.PartSetHeader{Total: 0})
	uncontiguousPartSet.AddPart(part2, false)
	uncontiguousPartSet.AddPart(part2)

	header1 := types.Header{
		Height: 1,
@@ -335,6 +335,7 @@ type MempoolConfig struct {
	RecheckEmpty bool   `mapstructure:"recheck_empty"`
	Broadcast    bool   `mapstructure:"broadcast"`
	WalPath      string `mapstructure:"wal_dir"`
	Size         int    `mapstructure:"size"`
	CacheSize    int    `mapstructure:"cache_size"`
}

@@ -345,6 +346,7 @@ func DefaultMempoolConfig() *MempoolConfig {
		RecheckEmpty: true,
		Broadcast:    true,
		WalPath:      filepath.Join(defaultDataDir, "mempool.wal"),
		Size:         100000,
		CacheSize:    100000,
	}
}
@@ -179,6 +179,12 @@ recheck_empty = {{ .Mempool.RecheckEmpty }}
broadcast = {{ .Mempool.Broadcast }}
wal_dir = "{{ .Mempool.WalPath }}"

# size of the mempool
size = {{ .Mempool.Size }}

# size of the cache (used to filter transactions we saw earlier)
cache_size = {{ .Mempool.CacheSize }}

##### consensus configuration options #####
[consensus]
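The new `size` and `cache_size` options above back the 0.19.5 fix that enforces an upper bound on the number of mempool transactions, with `cache_size` bounding the set of recently-seen transactions used to drop duplicates early. Here is a rough, self-contained sketch of how such bounds behave; the struct and method names are assumptions, not Tendermint's actual mempool implementation.

```go
package main

import (
	"errors"
	"fmt"
)

// boundedMempool illustrates the two limits: `size` caps pending transactions,
// `cache_size` caps the duplicate-detection cache.
type boundedMempool struct {
	txs       []string            // pending transactions
	seen      map[string]struct{} // recently-seen txs (the "cache")
	size      int                 // mempool.size
	cacheSize int                 // mempool.cache_size
}

var errMempoolFull = errors.New("mempool is full")

func (m *boundedMempool) CheckTx(tx string) error {
	if _, dup := m.seen[tx]; dup {
		return nil // already seen; ignore the duplicate
	}
	if len(m.txs) >= m.size {
		return errMempoolFull // upper bound enforced, per the 0.19.5 bug fix
	}
	if len(m.seen) < m.cacheSize {
		m.seen[tx] = struct{}{}
	}
	m.txs = append(m.txs, tx)
	return nil
}

func main() {
	m := &boundedMempool{seen: map[string]struct{}{}, size: 2, cacheSize: 4}
	fmt.Println(m.CheckTx("tx1"), m.CheckTx("tx2"), m.CheckTx("tx3")) // <nil> <nil> mempool is full
}
```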
@@ -1,18 +1 @@
# The core consensus algorithm.

* state.go - The state machine as detailed in the whitepaper
* reactor.go - A reactor that connects the state machine to the gossip network

# Go-routine summary

The reactor runs 2 go-routines for each added peer: gossipDataRoutine and gossipVotesRoutine.

The consensus state runs two persistent go-routines: timeoutRoutine and receiveRoutine.
Go-routines are also started to trigger timeouts and to avoid blocking when the internalMsgQueue is really backed up.

# Replay/WAL

A write-ahead log is used to record all messages processed by the receiveRoutine,
which amounts to all inputs to the consensus state machine:
messages from peers, messages from ourselves, and timeouts.
They can be played back deterministically at startup or using the replay console.
See the [consensus spec](https://github.com/tendermint/tendermint/tree/master/docs/spec/consensus) and the [reactor consensus spec](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/consensus) for more information.
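As a rough illustration of the WAL/replay idea the README above describes, here is a minimal sketch of a write-ahead-log interface and a deterministic replay loop. The names are assumptions for illustration, not the real `consensus/wal.go` API.

```go
package main

import "fmt"

// WALMessage and WAL mirror, in spirit, what the README describes: every input
// to the consensus state machine is appended to a log and can be replayed.
type WALMessage interface{}

type WAL interface {
	Write(WALMessage)     // buffered append
	WriteSync(WALMessage) // append and fsync before returning
}

// replay feeds recorded messages back into a handler in the exact order they
// were written, which is what makes crash recovery deterministic.
func replay(msgs []WALMessage, apply func(WALMessage) error) error {
	for i, m := range msgs {
		if err := apply(m); err != nil {
			return fmt.Errorf("replay failed at message %d: %v", i, err)
		}
	}
	return nil
}

func main() {
	recorded := []WALMessage{"proposal h=1", "vote h=1", "timeout h=1/r=0"}
	_ = replay(recorded, func(m WALMessage) error {
		fmt.Println("applying", m)
		return nil
	})
}
```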
@@ -27,7 +27,7 @@ func init() {
// Heal partition and ensure A sees the commit
func TestByzantine(t *testing.T) {
	N := 4
	logger := consensusLogger()
	logger := consensusLogger().With("test", "byzantine")
	css := randConsensusNet(N, "consensus_byzantine_test", newMockTickerFunc(false), newCounter)

	// give the byzantine validator a normal ticker
@@ -264,7 +264,7 @@ func newConsensusStateWithConfigAndBlockStore(thisConfig *cfg.Config, state sm.S
	// mock the evidence pool
	evpool := types.MockEvidencePool{}

	// Make ConsensusReactor
	// Make ConsensusState
	stateDB := dbm.NewMemDB()
	blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyAppConnCon, mempool, evpool)
	cs := NewConsensusState(thisConfig.Consensus, state, blockExec, blockStore, mempool, evpool)
@@ -1,7 +1,6 @@
package consensus

import (
	"context"
	"fmt"
	"reflect"
	"sync"
@@ -14,6 +13,7 @@ import (
	"github.com/tendermint/tmlibs/log"

	cstypes "github.com/tendermint/tendermint/consensus/types"
	tmevents "github.com/tendermint/tendermint/libs/events"
	"github.com/tendermint/tendermint/p2p"
	sm "github.com/tendermint/tendermint/state"
	"github.com/tendermint/tendermint/types"
@@ -43,7 +43,8 @@ type ConsensusReactor struct {
	eventBus *types.EventBus
}

// NewConsensusReactor returns a new ConsensusReactor with the given consensusState.
// NewConsensusReactor returns a new ConsensusReactor with the given
// consensusState.
func NewConsensusReactor(consensusState *ConsensusState, fastSync bool) *ConsensusReactor {
	conR := &ConsensusReactor{
		conS: consensusState,
@@ -53,17 +54,15 @@ func NewConsensusReactor(consensusState *ConsensusState, fastSync bool) *Consens
	return conR
}

// OnStart implements BaseService.
// OnStart implements BaseService by subscribing to events, which later will be
// broadcasted to other peers and starting state if we're not in fast sync.
func (conR *ConsensusReactor) OnStart() error {
	conR.Logger.Info("ConsensusReactor ", "fastSync", conR.FastSync())
	if err := conR.BaseReactor.OnStart(); err != nil {
		return err
	}

	err := conR.startBroadcastRoutine()
	if err != nil {
		return err
	}
	conR.subscribeToBroadcastEvents()

	if !conR.FastSync() {
		err := conR.conS.Start()
@@ -75,9 +74,11 @@ func (conR *ConsensusReactor) OnStart() error {
	return nil
}

// OnStop implements BaseService
// OnStop implements BaseService by unsubscribing from events and stopping
// state.
func (conR *ConsensusReactor) OnStop() {
	conR.BaseReactor.OnStop()
	conR.unsubscribeFromBroadcastEvents()
	conR.conS.Stop()
}

@@ -101,6 +102,7 @@ func (conR *ConsensusReactor) SwitchToConsensus(state sm.State, blocksSynced int
		err := conR.conS.Start()
		if err != nil {
			conR.Logger.Error("Error starting conS", "err", err)
			return
		}
	}

@@ -345,77 +347,40 @@ func (conR *ConsensusReactor) FastSync() bool {

//--------------------------------------

// startBroadcastRoutine subscribes for new round steps, votes and proposal
// heartbeats using the event bus and starts a go routine to broadcasts events
// to peers upon receiving them.
func (conR *ConsensusReactor) startBroadcastRoutine() error {
// subscribeToBroadcastEvents subscribes for new round steps, votes and
// proposal heartbeats using internal pubsub defined on state to broadcast
// them to peers upon receiving.
func (conR *ConsensusReactor) subscribeToBroadcastEvents() {
	const subscriber = "consensus-reactor"
	ctx := context.Background()
	conR.conS.evsw.AddListenerForEvent(subscriber, types.EventNewRoundStep,
		func(data tmevents.EventData) {
			conR.broadcastNewRoundStepMessages(data.(*cstypes.RoundState))
		})

	// new round steps
	stepsCh := make(chan interface{})
	err := conR.eventBus.Subscribe(ctx, subscriber, types.EventQueryNewRoundStep, stepsCh)
	if err != nil {
		return errors.Wrapf(err, "failed to subscribe %s to %s", subscriber, types.EventQueryNewRoundStep)
	}
	conR.conS.evsw.AddListenerForEvent(subscriber, types.EventVote,
		func(data tmevents.EventData) {
			conR.broadcastHasVoteMessage(data.(*types.Vote))
		})

	// votes
	votesCh := make(chan interface{})
	err = conR.eventBus.Subscribe(ctx, subscriber, types.EventQueryVote, votesCh)
	if err != nil {
		return errors.Wrapf(err, "failed to subscribe %s to %s", subscriber, types.EventQueryVote)
	}

	// proposal heartbeats
	heartbeatsCh := make(chan interface{})
	err = conR.eventBus.Subscribe(ctx, subscriber, types.EventQueryProposalHeartbeat, heartbeatsCh)
	if err != nil {
		return errors.Wrapf(err, "failed to subscribe %s to %s", subscriber, types.EventQueryProposalHeartbeat)
	}

	go func() {
		var data interface{}
		var ok bool
		for {
			select {
			case data, ok = <-stepsCh:
				if ok { // a receive from a closed channel returns the zero value immediately
					edrs := data.(types.EventDataRoundState)
					conR.broadcastNewRoundStep(edrs.RoundState.(*cstypes.RoundState))
				}
			case data, ok = <-votesCh:
				if ok {
					edv := data.(types.EventDataVote)
					conR.broadcastHasVoteMessage(edv.Vote)
				}
			case data, ok = <-heartbeatsCh:
				if ok {
					edph := data.(types.EventDataProposalHeartbeat)
					conR.broadcastProposalHeartbeatMessage(edph)
				}
			case <-conR.Quit():
				conR.eventBus.UnsubscribeAll(ctx, subscriber)
				return
			}
			if !ok {
				conR.eventBus.UnsubscribeAll(ctx, subscriber)
				return
			}
		}
	}()

	return nil
	conR.conS.evsw.AddListenerForEvent(subscriber, types.EventProposalHeartbeat,
		func(data tmevents.EventData) {
			conR.broadcastProposalHeartbeatMessage(data.(*types.Heartbeat))
		})
}

func (conR *ConsensusReactor) broadcastProposalHeartbeatMessage(heartbeat types.EventDataProposalHeartbeat) {
	hb := heartbeat.Heartbeat
func (conR *ConsensusReactor) unsubscribeFromBroadcastEvents() {
	const subscriber = "consensus-reactor"
	conR.conS.evsw.RemoveListener(subscriber)
}

func (conR *ConsensusReactor) broadcastProposalHeartbeatMessage(hb *types.Heartbeat) {
	conR.Logger.Debug("Broadcasting proposal heartbeat message",
		"height", hb.Height, "round", hb.Round, "sequence", hb.Sequence)
	msg := &ProposalHeartbeatMessage{hb}
	conR.Switch.Broadcast(StateChannel, cdc.MustMarshalBinaryBare(msg))
}

func (conR *ConsensusReactor) broadcastNewRoundStep(rs *cstypes.RoundState) {
func (conR *ConsensusReactor) broadcastNewRoundStepMessages(rs *cstypes.RoundState) {
	nrsMsg, csMsg := makeRoundStepMessages(rs)
	if nrsMsg != nil {
		conR.Switch.Broadcast(StateChannel, cdc.MustMarshalBinaryBare(nrsMsg))
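The hunks above replace the event-bus subscriptions and the broadcast goroutine with listeners registered on a synchronous event switch (`evsw`), which is the "separate synchronous event bus" mentioned in the 0.19.6 changelog. Below is a minimal sketch of such an event switch with illustrative names; the real one lives in tmlibs/events and also keys listeners by a subscriber ID.

```go
package main

import (
	"fmt"
	"sync"
)

type EventData interface{}

// EventSwitch invokes listeners inline in FireEvent, so consensus events are
// not delayed by slow external RPC subscribers.
type EventSwitch struct {
	mtx       sync.RWMutex
	listeners map[string][]func(EventData) // event name -> callbacks
}

func NewEventSwitch() *EventSwitch {
	return &EventSwitch{listeners: make(map[string][]func(EventData))}
}

func (evsw *EventSwitch) AddListenerForEvent(event string, cb func(EventData)) {
	evsw.mtx.Lock()
	defer evsw.mtx.Unlock()
	evsw.listeners[event] = append(evsw.listeners[event], cb)
}

// FireEvent runs every listener synchronously, in registration order.
func (evsw *EventSwitch) FireEvent(event string, data EventData) {
	evsw.mtx.RLock()
	cbs := evsw.listeners[event]
	evsw.mtx.RUnlock()
	for _, cb := range cbs {
		cb(data)
	}
}

func main() {
	evsw := NewEventSwitch()
	evsw.AddListenerForEvent("NewRoundStep", func(data EventData) {
		fmt.Println("broadcasting round step:", data)
	})
	evsw.FireEvent("NewRoundStep", "height=1/round=0")
}
```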
@@ -26,20 +26,24 @@ import (
var crc32c = crc32.MakeTable(crc32.Castagnoli)

// Functionality to replay blocks and messages on recovery from a crash.
// There are two general failure scenarios: failure during consensus, and failure while applying the block.
// The former is handled by the WAL, the latter by the proxyApp Handshake on restart,
// which ultimately hands off the work to the WAL.
// There are two general failure scenarios:
//
// 1. failure during consensus
// 2. failure while applying the block
//
// The former is handled by the WAL, the latter by the proxyApp Handshake on
// restart, which ultimately hands off the work to the WAL.

//-----------------------------------------
// recover from failure during consensus
// by replaying messages from the WAL
// 1. Recover from failure during consensus
// (by replaying messages from the WAL)
//-----------------------------------------

// Unmarshal and apply a single message to the consensus state
// as if it were received in receiveRoutine
// Lines that start with "#" are ignored.
// NOTE: receiveRoutine should not be running
// Unmarshal and apply a single message to the consensus state as if it were
// received in receiveRoutine. Lines that start with "#" are ignored.
// NOTE: receiveRoutine should not be running.
func (cs *ConsensusState) readReplayMessage(msg *TimedWALMessage, newStepCh chan interface{}) error {
	// skip meta messages
	// Skip meta messages which exist for demarcating boundaries.
	if _, ok := msg.Msg.(EndHeightMessage); ok {
		return nil
	}
@@ -89,17 +93,18 @@ func (cs *ConsensusState) readReplayMessage(msg *TimedWALMessage, newStepCh chan
	return nil
}

// replay only those messages since the last block.
// timeoutRoutine should run concurrently to read off tickChan
// Replay only those messages since the last block. `timeoutRoutine` should
// run concurrently to read off tickChan.
func (cs *ConsensusState) catchupReplay(csHeight int64) error {
	// set replayMode

	// Set replayMode to true so we don't log signing errors.
	cs.replayMode = true
	defer func() { cs.replayMode = false }()

	// Ensure that ENDHEIGHT for this height doesn't exist.
	// Ensure that #ENDHEIGHT for this height doesn't exist.
	// NOTE: This is just a sanity check. As far as we know things work fine
	// without it, and Handshake could reuse ConsensusState if it weren't for
	// this check (since we can crash after writing ENDHEIGHT).
	// this check (since we can crash after writing #ENDHEIGHT).
	//
	// Ignore data corruption errors since this is a sanity check.
	gr, found, err := cs.wal.SearchForEndHeight(csHeight, &WALSearchOptions{IgnoreDataCorruptionErrors: true})
@@ -115,7 +120,7 @@ func (cs *ConsensusState) catchupReplay(csHeight int64) error {
		return fmt.Errorf("WAL should not contain #ENDHEIGHT %d", csHeight)
	}

	// Search for last height marker
	// Search for last height marker.
	//
	// Ignore data corruption errors in previous heights because we only care about last height
	gr, found, err = cs.wal.SearchForEndHeight(csHeight-1, &WALSearchOptions{IgnoreDataCorruptionErrors: true})
@@ -182,10 +187,11 @@ func makeHeightSearchFunc(height int64) auto.SearchFunc {
	}
}*/

//----------------------------------------------
// Recover from failure during block processing
// by handshaking with the app to figure out where
// we were last and using the WAL to recover there
//---------------------------------------------------
// 2. Recover from failure while applying the block.
// (by handshaking with the app to figure out where
// we were last, and using the WAL to recover there.)
//---------------------------------------------------

type Handshaker struct {
	stateDB dbm.DB
@@ -220,7 +226,8 @@ func (h *Handshaker) NBlocks() int {

// TODO: retry the handshake/replay if it fails ?
func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
	// handshake is done via info request on the query conn

	// Handshake is done via ABCI Info on the query conn.
	res, err := proxyApp.Query().InfoSync(abci.RequestInfo{version.Version})
	if err != nil {
		return fmt.Errorf("Error calling Info: %v", err)
@@ -234,15 +241,16 @@ func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {

	h.logger.Info("ABCI Handshake", "appHeight", blockHeight, "appHash", fmt.Sprintf("%X", appHash))

	// TODO: check version
	// TODO: check app version.

	// replay blocks up to the latest in the blockstore
	// Replay blocks up to the latest in the blockstore.
	_, err = h.ReplayBlocks(h.initialState, appHash, blockHeight, proxyApp)
	if err != nil {
		return fmt.Errorf("Error on replay: %v", err)
	}

	h.logger.Info("Completed ABCI Handshake - Tendermint and App are synced", "appHeight", blockHeight, "appHash", fmt.Sprintf("%X", appHash))
	h.logger.Info("Completed ABCI Handshake - Tendermint and App are synced",
		"appHeight", blockHeight, "appHash", fmt.Sprintf("%X", appHash))

	// TODO: (on restart) replay mempool

@@ -250,7 +258,7 @@ func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
}

// Replay all blocks since appBlockHeight and ensure the result matches the current state.
// Returns the final AppHash or an error
// Returns the final AppHash or an error.
func (h *Handshaker) ReplayBlocks(state sm.State, appHash []byte, appBlockHeight int64, proxyApp proxy.AppConns) ([]byte, error) {

	storeBlockHeight := h.store.Height()
@@ -314,7 +322,7 @@ func (h *Handshaker) ReplayBlocks(state sm.State, appHash []byte, appBlockHeight
	// We haven't run Commit (both the state and app are one block behind),
	// so replayBlock with the real app.
	// NOTE: We could instead use the cs.WAL on cs.Start,
	// but we'd have to allow the WAL to replay a block that wrote it's ENDHEIGHT
	// but we'd have to allow the WAL to replay a block that wrote it's #ENDHEIGHT
	h.logger.Info("Replay last block using real app")
	state, err = h.replayBlock(state, storeBlockHeight, proxyApp.Consensus())
	return state.AppHash, err
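The comments above describe recovery scenario 2: on restart the Handshaker asks the app for its last committed height via ABCI Info and replays any blocks the app missed. Here is a sketch of the height comparison at the core of that decision; the helper names are illustrative, not the actual Handshaker implementation.

```go
package main

import "fmt"

// decideReplay reports how many blocks the node would have to replay against
// the app, given the app's last committed height and the block store height.
func decideReplay(appHeight, storeHeight int64) (toReplay int64, err error) {
	switch {
	case appHeight > storeHeight:
		return 0, fmt.Errorf("app height %d is ahead of store height %d", appHeight, storeHeight)
	case appHeight == storeHeight:
		return 0, nil // nothing to do; consensus-level recovery is the WAL's job
	default:
		// The app missed one or more committed blocks (e.g. we crashed after
		// saving a block but before Commit); replay them from the block store.
		return storeHeight - appHeight, nil
	}
}

func main() {
	n, _ := decideReplay(9, 10)
	fmt.Println("blocks to replay:", n) // 1
}
```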
@@ -218,15 +218,15 @@ func (e ReachedHeightToStopError) Error() string {
	return fmt.Sprintf("reached height to stop %d", e.height)
}

// Save simulate WAL's crashing by sending an error to the panicCh and then
// Write simulate WAL's crashing by sending an error to the panicCh and then
// exiting the cs.receiveRoutine.
func (w *crashingWAL) Save(m WALMessage) {
func (w *crashingWAL) Write(m WALMessage) {
	if endMsg, ok := m.(EndHeightMessage); ok {
		if endMsg.Height == w.heightToStop {
			w.panicCh <- ReachedHeightToStopError{endMsg.Height}
			runtime.Goexit()
		} else {
			w.next.Save(m)
			w.next.Write(m)
		}
		return
	}
@@ -238,10 +238,14 @@ func (w *crashingWAL) Save(m WALMessage) {
		runtime.Goexit()
	} else {
		w.msgIndex++
		w.next.Save(m)
		w.next.Write(m)
	}
}

func (w *crashingWAL) WriteSync(m WALMessage) {
	w.Write(m)
}

func (w *crashingWAL) Group() *auto.Group { return w.next.Group() }
func (w *crashingWAL) SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error) {
	return w.next.SearchForEndHeight(height, options)
@@ -538,7 +542,7 @@ func makeBlockchainFromWAL(wal WAL) ([]*types.Block, []*types.Commit, error) {
		case *types.PartSetHeader:
			thisBlockParts = types.NewPartSetFromHeader(*p)
		case *types.Part:
			_, err := thisBlockParts.AddPart(p, false)
			_, err := thisBlockParts.AddPart(p)
			if err != nil {
				return nil, nil, err
			}
@@ -15,6 +15,7 @@ import (
	cfg "github.com/tendermint/tendermint/config"
	cstypes "github.com/tendermint/tendermint/consensus/types"
	tmevents "github.com/tendermint/tendermint/libs/events"
	"github.com/tendermint/tendermint/p2p"
	sm "github.com/tendermint/tendermint/state"
	"github.com/tendermint/tendermint/types"
@@ -110,6 +111,10 @@ type ConsensusState struct {
	// closed when we finish shutting down
	done chan struct{}

	// synchronous pubsub between consensus state and reactor.
	// state only emits EventNewRoundStep, EventVote and EventProposalHeartbeat
	evsw tmevents.EventSwitch
}

// NewConsensusState returns a new ConsensusState.
@@ -126,6 +131,7 @@ func NewConsensusState(config *cfg.ConsensusConfig, state sm.State, blockExec *s
		doWALCatchup: true,
		wal:          nilWAL{},
		evpool:       evpool,
		evsw:         tmevents.NewEventSwitch(),
	}
	// set function defaults (may be overwritten before calling Start)
	cs.decideProposal = cs.defaultDecideProposal
@@ -227,6 +233,10 @@ func (cs *ConsensusState) LoadCommit(height int64) *types.Commit {
// OnStart implements cmn.Service.
// It loads the latest state via the WAL, and starts the timeout and receive routines.
func (cs *ConsensusState) OnStart() error {
	if err := cs.evsw.Start(); err != nil {
		return err
	}

	// we may set the WAL in testing before calling Start,
	// so only OpenWAL if its still the nilWAL
	if _, ok := cs.wal.(nilWAL); ok {
@@ -244,8 +254,7 @@ func (cs *ConsensusState) OnStart() error {
	// NOTE: we will get a build up of garbage go routines
	// firing on the tockChan until the receiveRoutine is started
	// to deal with them (by that point, at most one will be valid)
	err := cs.timeoutTicker.Start()
	if err != nil {
	if err := cs.timeoutTicker.Start(); err != nil {
		return err
	}

@@ -284,6 +293,8 @@ func (cs *ConsensusState) startRoutines(maxSteps int) {
func (cs *ConsensusState) OnStop() {
	cs.BaseService.OnStop()

	cs.evsw.Stop()

	cs.timeoutTicker.Stop()

	// Make BaseService.Wait() wait until cs.wal.Wait()
@@ -504,11 +515,12 @@ func (cs *ConsensusState) updateToState(state sm.State) {

func (cs *ConsensusState) newStep() {
	rs := cs.RoundStateEvent()
	cs.wal.Save(rs)
	cs.wal.Write(rs)
	cs.nSteps++
	// newStep is called by updateToStep in NewConsensusState before the eventBus is set!
	if cs.eventBus != nil {
		cs.eventBus.PublishEventNewRoundStep(rs)
		cs.evsw.FireEvent(types.EventNewRoundStep, &cs.RoundState)
	}
}
@@ -542,16 +554,16 @@ func (cs *ConsensusState) receiveRoutine(maxSteps int) {
		case height := <-cs.mempool.TxsAvailable():
			cs.handleTxsAvailable(height)
		case mi = <-cs.peerMsgQueue:
			cs.wal.Save(mi)
			cs.wal.Write(mi)
			// handles proposals, block parts, votes
			// may generate internal events (votes, complete proposals, 2/3 majorities)
			cs.handleMsg(mi)
		case mi = <-cs.internalMsgQueue:
			cs.wal.Save(mi)
			cs.wal.WriteSync(mi) // NOTE: fsync
			// handles proposals, block parts, votes
			cs.handleMsg(mi)
		case ti := <-cs.timeoutTicker.Chan(): // tockChan:
			cs.wal.Save(ti)
			cs.wal.Write(ti)
			// if the timeout is relevant to the rs
			// go to the next step
			cs.handleTimeout(ti, rs)
@@ -584,8 +596,9 @@ func (cs *ConsensusState) handleMsg(mi msgInfo) {
		err = cs.setProposal(msg.Proposal)
	case *BlockPartMessage:
		// if the proposal is complete, we'll enterPrevote or tryFinalizeCommit
		_, err = cs.addProposalBlockPart(msg.Height, msg.Part, peerID != "")
		_, err = cs.addProposalBlockPart(msg.Height, msg.Part)
		if err != nil && msg.Round != cs.Round {
			cs.Logger.Debug("Received block part from wrong round", "height", cs.Height, "csRound", cs.Round, "blockRound", msg.Round)
			err = nil
		}
	case *VoteMessage:
@@ -610,7 +623,7 @@ func (cs *ConsensusState) handleMsg(mi msgInfo) {
		cs.Logger.Error("Unknown msg type", reflect.TypeOf(msg))
	}
	if err != nil {
		cs.Logger.Error("Error with msg", "type", reflect.TypeOf(msg), "peer", peerID, "err", err, "msg", msg)
		cs.Logger.Error("Error with msg", "height", cs.Height, "round", cs.Round, "type", reflect.TypeOf(msg), "peer", peerID, "err", err, "msg", msg)
	}
}
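The receiveRoutine hunk above writes every message to the WAL ahead of handling it, but only fsyncs (`WriteSync`) messages from our own internal queue, matching the 0.19.5 note "Only Fsync() the WAL before executing msgs from ourselves". A self-contained sketch of that policy, with assumed names:

```go
package main

import "fmt"

// wal is a minimal stand-in for the consensus WAL; only the fsync policy is
// the point of this sketch.
type wal interface {
	Write(msg interface{})     // buffered: acceptable for peer messages
	WriteSync(msg interface{}) // fsynced: our own proposals/votes must survive a crash
}

type printWAL struct{}

func (printWAL) Write(msg interface{})     { fmt.Println("write      ", msg) }
func (printWAL) WriteSync(msg interface{}) { fmt.Println("write+fsync", msg) }

// handle logs a message ahead of processing it, syncing only when the message
// originated from this node.
func handle(w wal, msg interface{}, internal bool, process func(interface{})) {
	if internal {
		w.WriteSync(msg)
	} else {
		w.Write(msg)
	}
	process(msg)
}

func main() {
	w := printWAL{}
	handle(w, "vote from peer", false, func(interface{}) {})
	handle(w, "our own proposal", true, func(interface{}) {})
}
```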
@@ -667,16 +680,18 @@ func (cs *ConsensusState) handleTxsAvailable(height int64) {
// Enter: +2/3 prevotes any or +2/3 precommits for block or any from (height, round)
// NOTE: cs.StartTime was already set for height.
func (cs *ConsensusState) enterNewRound(height int64, round int) {
	logger := cs.Logger.With("height", height, "round", round)

	if cs.Height != height || round < cs.Round || (cs.Round == round && cs.Step != cstypes.RoundStepNewHeight) {
		cs.Logger.Debug(cmn.Fmt("enterNewRound(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
		logger.Debug(cmn.Fmt("enterNewRound(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
		return
	}

	if now := time.Now(); cs.StartTime.After(now) {
		cs.Logger.Info("Need to set a buffer and log message here for sanity.", "startTime", cs.StartTime, "now", now)
		logger.Info("Need to set a buffer and log message here for sanity.", "startTime", cs.StartTime, "now", now)
	}

	cs.Logger.Info(cmn.Fmt("enterNewRound(%v/%v). Current: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
	logger.Info(cmn.Fmt("enterNewRound(%v/%v). Current: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))

	// Increment validators if necessary
	validators := cs.Validators
@@ -695,6 +710,7 @@ func (cs *ConsensusState) enterNewRound(height int64, round int) {
		// and meanwhile we might have received a proposal
		// for round 0.
	} else {
		logger.Info("Resetting Proposal info")
		cs.Proposal = nil
		cs.ProposalBlock = nil
		cs.ProposalBlockParts = nil
@@ -748,6 +764,7 @@ func (cs *ConsensusState) proposalHeartbeat(height int64, round int) {
		}
		cs.privValidator.SignHeartbeat(chainID, heartbeat)
		cs.eventBus.PublishEventProposalHeartbeat(types.EventDataProposalHeartbeat{heartbeat})
		cs.evsw.FireEvent(types.EventProposalHeartbeat, heartbeat)
		counter++
		time.Sleep(proposalHeartbeatIntervalSeconds * time.Second)
	}
@@ -757,11 +774,13 @@ func (cs *ConsensusState) proposalHeartbeat(height int64, round int) {
// Enter (CreateEmptyBlocks, CreateEmptyBlocksInterval > 0 ): after enterNewRound(height,round), after timeout of CreateEmptyBlocksInterval
// Enter (!CreateEmptyBlocks) : after enterNewRound(height,round), once txs are in the mempool
func (cs *ConsensusState) enterPropose(height int64, round int) {
	logger := cs.Logger.With("height", height, "round", round)

	if cs.Height != height || round < cs.Round || (cs.Round == round && cstypes.RoundStepPropose <= cs.Step) {
		cs.Logger.Debug(cmn.Fmt("enterPropose(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
		logger.Debug(cmn.Fmt("enterPropose(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
		return
	}
	cs.Logger.Info(cmn.Fmt("enterPropose(%v/%v). Current: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
	logger.Info(cmn.Fmt("enterPropose(%v/%v). Current: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))

	defer func() {
		// Done enterPropose:
@@ -781,22 +800,22 @@ func (cs *ConsensusState) enterPropose(height int64, round int) {

	// Nothing more to do if we're not a validator
	if cs.privValidator == nil {
		cs.Logger.Debug("This node is not a validator")
		logger.Debug("This node is not a validator")
		return
	}

	// if not a validator, we're done
	if !cs.Validators.HasAddress(cs.privValidator.GetAddress()) {
		cs.Logger.Debug("This node is not a validator", "addr", cs.privValidator.GetAddress(), "vals", cs.Validators)
		logger.Debug("This node is not a validator", "addr", cs.privValidator.GetAddress(), "vals", cs.Validators)
		return
	}
	cs.Logger.Debug("This node is a validator")
	logger.Debug("This node is a validator")

	if cs.isProposer() {
		cs.Logger.Info("enterPropose: Our turn to propose", "proposer", cs.Validators.GetProposer().Address, "privValidator", cs.privValidator)
		logger.Info("enterPropose: Our turn to propose", "proposer", cs.Validators.GetProposer().Address, "privValidator", cs.privValidator)
		cs.decideProposal(height, round)
	} else {
		cs.Logger.Info("enterPropose: Not our turn to propose", "proposer", cs.Validators.GetProposer().Address, "privValidator", cs.privValidator)
		logger.Info("enterPropose: Not our turn to propose", "proposer", cs.Validators.GetProposer().Address, "privValidator", cs.privValidator)
	}
}
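These hunks repeatedly swap `cs.Logger` for a `logger` that has the height and round pre-bound via `With`, so individual log lines no longer need to repeat those fields. The same pattern with the standard library's log/slog, used here purely for illustration (the diff itself uses the tmlibs logger):

```go
package main

import "log/slog"

func main() {
	// Bind height and round once; every line emitted from this logger
	// carries them automatically.
	logger := slog.Default().With("height", 10, "round", 0)

	logger.Info("enterPropose: Our turn to propose") // height/round appear automatically
	logger.Debug("This node is not a validator")     // no need to repeat the fields
}
```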
@@ -959,14 +978,16 @@ func (cs *ConsensusState) defaultDoPrevote(height int64, round int) {

// Enter: any +2/3 prevotes at next round.
func (cs *ConsensusState) enterPrevoteWait(height int64, round int) {
	logger := cs.Logger.With("height", height, "round", round)

	if cs.Height != height || round < cs.Round || (cs.Round == round && cstypes.RoundStepPrevoteWait <= cs.Step) {
		cs.Logger.Debug(cmn.Fmt("enterPrevoteWait(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
		logger.Debug(cmn.Fmt("enterPrevoteWait(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
		return
	}
	if !cs.Votes.Prevotes(round).HasTwoThirdsAny() {
		cmn.PanicSanity(cmn.Fmt("enterPrevoteWait(%v/%v), but Prevotes does not have any +2/3 votes", height, round))
	}
	cs.Logger.Info(cmn.Fmt("enterPrevoteWait(%v/%v). Current: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
	logger.Info(cmn.Fmt("enterPrevoteWait(%v/%v). Current: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))

	defer func() {
		// Done enterPrevoteWait:
@@ -985,12 +1006,14 @@ func (cs *ConsensusState) enterPrevoteWait(height int64, round int) {
// else, unlock an existing lock and precommit nil if +2/3 of prevotes were nil,
// else, precommit nil otherwise.
func (cs *ConsensusState) enterPrecommit(height int64, round int) {
	logger := cs.Logger.With("height", height, "round", round)

	if cs.Height != height || round < cs.Round || (cs.Round == round && cstypes.RoundStepPrecommit <= cs.Step) {
		cs.Logger.Debug(cmn.Fmt("enterPrecommit(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
		logger.Debug(cmn.Fmt("enterPrecommit(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
		return
	}

	cs.Logger.Info(cmn.Fmt("enterPrecommit(%v/%v). Current: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
	logger.Info(cmn.Fmt("enterPrecommit(%v/%v). Current: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))

	defer func() {
		// Done enterPrecommit:
@@ -1004,9 +1027,9 @@ func (cs *ConsensusState) enterPrecommit(height int64, round int) {
	// If we don't have a polka, we must precommit nil.
	if !ok {
		if cs.LockedBlock != nil {
			cs.Logger.Info("enterPrecommit: No +2/3 prevotes during enterPrecommit while we're locked. Precommitting nil")
			logger.Info("enterPrecommit: No +2/3 prevotes during enterPrecommit while we're locked. Precommitting nil")
		} else {
			cs.Logger.Info("enterPrecommit: No +2/3 prevotes during enterPrecommit. Precommitting nil.")
			logger.Info("enterPrecommit: No +2/3 prevotes during enterPrecommit. Precommitting nil.")
		}
		cs.signAddVote(types.VoteTypePrecommit, nil, types.PartSetHeader{})
		return
@@ -1024,9 +1047,9 @@ func (cs *ConsensusState) enterPrecommit(height int64, round int) {
	// +2/3 prevoted nil. Unlock and precommit nil.
	if len(blockID.Hash) == 0 {
		if cs.LockedBlock == nil {
			cs.Logger.Info("enterPrecommit: +2/3 prevoted for nil.")
			logger.Info("enterPrecommit: +2/3 prevoted for nil.")
		} else {
			cs.Logger.Info("enterPrecommit: +2/3 prevoted for nil. Unlocking")
			logger.Info("enterPrecommit: +2/3 prevoted for nil. Unlocking")
			cs.LockedRound = 0
			cs.LockedBlock = nil
			cs.LockedBlockParts = nil
@@ -1040,7 +1063,7 @@ func (cs *ConsensusState) enterPrecommit(height int64, round int) {

	// If we're already locked on that block, precommit it, and update the LockedRound
	if cs.LockedBlock.HashesTo(blockID.Hash) {
		cs.Logger.Info("enterPrecommit: +2/3 prevoted locked block. Relocking")
		logger.Info("enterPrecommit: +2/3 prevoted locked block. Relocking")
		cs.LockedRound = round
		cs.eventBus.PublishEventRelock(cs.RoundStateEvent())
		cs.signAddVote(types.VoteTypePrecommit, blockID.Hash, blockID.PartsHeader)
@@ -1049,7 +1072,7 @@ func (cs *ConsensusState) enterPrecommit(height int64, round int) {

	// If +2/3 prevoted for proposal block, stage and precommit it
	if cs.ProposalBlock.HashesTo(blockID.Hash) {
		cs.Logger.Info("enterPrecommit: +2/3 prevoted proposal block. Locking", "hash", blockID.Hash)
		logger.Info("enterPrecommit: +2/3 prevoted proposal block. Locking", "hash", blockID.Hash)
		// Validate the block.
		if err := cs.blockExec.ValidateBlock(cs.state, cs.ProposalBlock); err != nil {
			cmn.PanicConsensus(cmn.Fmt("enterPrecommit: +2/3 prevoted for an invalid block: %v", err))
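The enterPrecommit comments above spell out the locking rule: precommit nil without a polka, unlock on a polka for nil, relock on a polka for the already-locked block, lock on a polka for the current proposal, and otherwise unlock and precommit nil. Below is a sketch of that decision as a pure function over simplified inputs; the types and names are illustrative, not Tendermint's.

```go
package main

import "fmt"

type decision string

const (
	precommitNil    decision = "precommit nil"
	precommitLocked decision = "relock and precommit locked block"
	precommitNew    decision = "lock and precommit proposal block"
	unlockAndNil    decision = "unlock and precommit nil"
)

// decidePrecommit: polkaBlock is the block hash that +2/3 prevoted for this
// round ("" means no polka, "nil" means an explicit polka for nil).
func decidePrecommit(polkaBlock, lockedBlock, proposalBlock string) decision {
	switch {
	case polkaBlock == "": // no +2/3 prevotes at all
		return precommitNil
	case polkaBlock == "nil": // +2/3 prevoted nil: release any lock
		return unlockAndNil
	case polkaBlock == lockedBlock: // already locked on it: just bump the lock round
		return precommitLocked
	case polkaBlock == proposalBlock: // valid polka for the current proposal
		return precommitNew
	default: // polka for a block we don't have: unlock and precommit nil
		return unlockAndNil
	}
}

func main() {
	fmt.Println(decidePrecommit("ABCD", "", "ABCD")) // lock and precommit proposal block
}
```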
@@ -1079,14 +1102,16 @@ func (cs *ConsensusState) enterPrecommit(height int64, round int) {

// Enter: any +2/3 precommits for next round.
func (cs *ConsensusState) enterPrecommitWait(height int64, round int) {
	logger := cs.Logger.With("height", height, "round", round)

	if cs.Height != height || round < cs.Round || (cs.Round == round && cstypes.RoundStepPrecommitWait <= cs.Step) {
		cs.Logger.Debug(cmn.Fmt("enterPrecommitWait(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
		logger.Debug(cmn.Fmt("enterPrecommitWait(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
		return
	}
	if !cs.Votes.Precommits(round).HasTwoThirdsAny() {
		cmn.PanicSanity(cmn.Fmt("enterPrecommitWait(%v/%v), but Precommits does not have any +2/3 votes", height, round))
	}
	cs.Logger.Info(cmn.Fmt("enterPrecommitWait(%v/%v). Current: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
	logger.Info(cmn.Fmt("enterPrecommitWait(%v/%v). Current: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))

	defer func() {
		// Done enterPrecommitWait:
@@ -1101,11 +1126,13 @@ func (cs *ConsensusState) enterPrecommitWait(height int64, round int) {

// Enter: +2/3 precommits for block
func (cs *ConsensusState) enterCommit(height int64, commitRound int) {
	logger := cs.Logger.With("height", height, "commitRound", commitRound)

	if cs.Height != height || cstypes.RoundStepCommit <= cs.Step {
		cs.Logger.Debug(cmn.Fmt("enterCommit(%v/%v): Invalid args. Current step: %v/%v/%v", height, commitRound, cs.Height, cs.Round, cs.Step))
		logger.Debug(cmn.Fmt("enterCommit(%v/%v): Invalid args. Current step: %v/%v/%v", height, commitRound, cs.Height, cs.Round, cs.Step))
		return
	}
	cs.Logger.Info(cmn.Fmt("enterCommit(%v/%v). Current: %v/%v/%v", height, commitRound, cs.Height, cs.Round, cs.Step))
	logger.Info(cmn.Fmt("enterCommit(%v/%v). Current: %v/%v/%v", height, commitRound, cs.Height, cs.Round, cs.Step))

	defer func() {
		// Done enterCommit:
@@ -1128,6 +1155,7 @@ func (cs *ConsensusState) enterCommit(height int64, commitRound int) {
	// Move them over to ProposalBlock if they match the commit hash,
	// otherwise they'll be cleared in updateToState.
	if cs.LockedBlock.HashesTo(blockID.Hash) {
		logger.Info("Commit is for locked block. Set ProposalBlock=LockedBlock", "blockHash", blockID.Hash)
		cs.ProposalBlock = cs.LockedBlock
		cs.ProposalBlockParts = cs.LockedBlockParts
	}
@@ -1135,6 +1163,7 @@ func (cs *ConsensusState) enterCommit(height int64, commitRound int) {
	// If we don't have the block being committed, set up to get it.
	if !cs.ProposalBlock.HashesTo(blockID.Hash) {
		if !cs.ProposalBlockParts.HasHeader(blockID.PartsHeader) {
			logger.Info("Commit is for a block we don't know about. Set ProposalBlock=nil", "proposal", cs.ProposalBlock.Hash(), "commit", blockID.Hash)
			// We're getting the wrong block.
			// Set up ProposalBlockParts and keep waiting.
			cs.ProposalBlock = nil
@@ -1147,19 +1176,21 @@ func (cs *ConsensusState) enterCommit(height int64, commitRound int) {

// If we have the block AND +2/3 commits for it, finalize.
func (cs *ConsensusState) tryFinalizeCommit(height int64) {
	logger := cs.Logger.With("height", height)

	if cs.Height != height {
		cmn.PanicSanity(cmn.Fmt("tryFinalizeCommit() cs.Height: %v vs height: %v", cs.Height, height))
	}

	blockID, ok := cs.Votes.Precommits(cs.CommitRound).TwoThirdsMajority()
	if !ok || len(blockID.Hash) == 0 {
		cs.Logger.Error("Attempt to finalize failed. There was no +2/3 majority, or +2/3 was for <nil>.", "height", height)
		logger.Error("Attempt to finalize failed. There was no +2/3 majority, or +2/3 was for <nil>.")
		return
	}
	if !cs.ProposalBlock.HashesTo(blockID.Hash) {
		// TODO: this happens every time if we're not a validator (ugly logs)
		// TODO: ^^ wait, why does it matter that we're a validator?
		cs.Logger.Info("Attempt to finalize failed. We don't have the commit block.", "height", height, "proposal-block", cs.ProposalBlock.Hash(), "commit-block", blockID.Hash)
		logger.Info("Attempt to finalize failed. We don't have the commit block.", "proposal-block", cs.ProposalBlock.Hash(), "commit-block", blockID.Hash)
		return
	}
@@ -1210,23 +1241,28 @@ func (cs *ConsensusState) finalizeCommit(height int64) {
|
||||
|
||||
fail.Fail() // XXX
|
||||
|
||||
// Finish writing to the WAL for this height.
|
||||
// NOTE: If we fail before writing this, we'll never write it,
|
||||
// and just recover by running ApplyBlock in the Handshake.
|
||||
// If we moved it before persisting the block, we'd have to allow
|
||||
// WAL replay for blocks with an #ENDHEIGHT
|
||||
// As is, ConsensusState should not be started again
|
||||
// until we successfully call ApplyBlock (ie. here or in Handshake after restart)
|
||||
cs.wal.Save(EndHeightMessage{height})
|
||||
// Write EndHeightMessage{} for this height, implying that the blockstore
|
||||
// has saved the block.
|
||||
//
|
||||
// If we crash before writing this EndHeightMessage{}, we will recover by
|
||||
// running ApplyBlock during the ABCI handshake when we restart. If we
|
||||
// didn't save the block to the blockstore before writing
|
||||
// EndHeightMessage{}, we'd have to change WAL replay -- currently it
|
||||
// complains about replaying for heights where an #ENDHEIGHT entry already
|
||||
// exists.
|
||||
//
|
||||
// Either way, the ConsensusState should not be resumed until we
|
||||
// successfully call ApplyBlock (ie. later here, or in Handshake after
|
||||
// restart).
|
||||
cs.wal.WriteSync(EndHeightMessage{height}) // NOTE: fsync
|
||||
|
||||
fail.Fail() // XXX
|
||||
|
||||
// Create a copy of the state for staging
|
||||
// and an event cache for txs
|
||||
// Create a copy of the state for staging and an event cache for txs.
|
||||
stateCopy := cs.state.Copy()
|
||||
|
||||
// Execute and commit the block, update and save the state, and update the mempool.
|
||||
// NOTE: the block.AppHash wont reflect these txs until the next block
|
||||
// NOTE The block.AppHash wont reflect these txs until the next block.
|
||||
var err error
|
||||
stateCopy, err = cs.blockExec.ApplyBlock(stateCopy, types.BlockID{block.Hash(), blockParts.Header()}, block)
|
||||
if err != nil {
|
||||
@@ -1293,18 +1329,20 @@ func (cs *ConsensusState) defaultSetProposal(proposal *types.Proposal) error {
|
||||
|
||||
// NOTE: block is not necessarily valid.
|
||||
// Asynchronously triggers either enterPrevote (before we timeout of propose) or tryFinalizeCommit, once we have the full block.
|
||||
func (cs *ConsensusState) addProposalBlockPart(height int64, part *types.Part, verify bool) (added bool, err error) {
|
||||
func (cs *ConsensusState) addProposalBlockPart(height int64, part *types.Part) (added bool, err error) {
|
||||
// Blocks might be reused, so round mismatch is OK
|
||||
if cs.Height != height {
|
||||
cs.Logger.Debug("Received block part from wrong height", "height", height)
|
||||
return false, nil
|
||||
}
|
||||
|
||||
// We're not expecting a block part.
|
||||
if cs.ProposalBlockParts == nil {
|
||||
cs.Logger.Info("Received a block part when we're not expecting any", "height", height)
|
||||
return false, nil // TODO: bad peer? Return error?
|
||||
}
|
||||
|
||||
added, err = cs.ProposalBlockParts.AddPart(part, verify)
|
||||
added, err = cs.ProposalBlockParts.AddPart(part)
|
||||
if err != nil {
|
||||
return added, err
|
||||
}
|
||||
@@ -1322,6 +1360,8 @@ func (cs *ConsensusState) addProposalBlockPart(height int64, part *types.Part, v
|
||||
blockID, hasTwoThirds := prevotes.TwoThirdsMajority()
|
||||
if hasTwoThirds && !blockID.IsZero() && (cs.ValidRound < cs.Round) {
|
||||
if cs.ProposalBlock.HashesTo(blockID.Hash) {
|
||||
cs.Logger.Info("Updating valid block to new proposal block",
|
||||
"valid-round", cs.Round, "valid-block-hash", cs.ProposalBlock.Hash())
|
||||
cs.ValidRound = cs.Round
|
||||
cs.ValidBlock = cs.ProposalBlock
|
||||
cs.ValidBlockParts = cs.ProposalBlockParts
|
||||
@@ -1391,6 +1431,7 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerID p2p.ID) (added bool,
|
||||
|
||||
cs.Logger.Info(cmn.Fmt("Added to lastPrecommits: %v", cs.LastCommit.StringShort()))
|
||||
cs.eventBus.PublishEventVote(types.EventDataVote{vote})
|
||||
cs.evsw.FireEvent(types.EventVote, vote)
|
||||
|
||||
// if we can skip timeoutCommit and have all the votes now,
|
||||
if cs.config.SkipTimeoutCommit && cs.LastCommit.HasAll() {
|
||||
@@ -1418,6 +1459,7 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerID p2p.ID) (added bool,
|
||||
}
|
||||
|
||||
cs.eventBus.PublishEventVote(types.EventDataVote{vote})
|
||||
cs.evsw.FireEvent(types.EventVote, vote)
|
||||
|
||||
switch vote.Type {
|
||||
case types.VoteTypePrevote:
|
||||
@@ -1453,6 +1495,7 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerID p2p.ID) (added bool,
|
||||
(vote.Round <= cs.Round) &&
|
||||
cs.ProposalBlock.HashesTo(blockID.Hash) {
|
||||
|
||||
cs.Logger.Info("Updating ValidBlock because of POL.", "validRound", cs.ValidRound, "POLRound", vote.Round)
|
||||
cs.ValidRound = vote.Round
|
||||
cs.ValidBlock = cs.ProposalBlock
|
||||
cs.ValidBlockParts = cs.ProposalBlockParts
|
||||
|
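The hunks above replace direct calls on `cs.Logger` with a logger scoped via `With("height", ..., "round", ...)`, so each step's log lines carry their context automatically. A minimal sketch of that pattern, assuming the `tmlibs/log` package that appears in the imports of this changeset; the values and messages here are made up:

```go
package main

import (
	"os"

	"github.com/tendermint/tmlibs/log"
)

func main() {
	logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))

	// Derive a child logger once per step; every message it emits
	// carries height/round without repeating them at each call site.
	height, round := int64(100), 2
	stepLogger := logger.With("height", height, "round", round)

	stepLogger.Info("enterPrecommitWait") // height and round are attached automatically
	stepLogger.Debug("invalid args")      // only shown if the debug level is enabled
}
```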
@@ -11,7 +11,7 @@ import (
"github.com/tendermint/tendermint/types"
cmn "github.com/tendermint/tmlibs/common"
"github.com/tendermint/tmlibs/log"
tmpubsub "github.com/tendermint/tmlibs/pubsub"
tmpubsub "github.com/tendermint/tendermint/libs/pubsub"
)

func init() {
@@ -50,7 +50,8 @@ func RegisterWALMessages(cdc *amino.Codec) {

// WAL is an interface for any write-ahead logger.
type WAL interface {
Save(WALMessage)
Write(WALMessage)
WriteSync(WALMessage)
Group() *auto.Group
SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error)
@@ -98,7 +99,7 @@ func (wal *baseWAL) OnStart() error {
if err != nil {
return err
} else if size == 0 {
wal.Save(EndHeightMessage{0})
wal.WriteSync(EndHeightMessage{0})
}
err = wal.group.Start()
return err
@@ -109,20 +110,31 @@ func (wal *baseWAL) OnStop() {
wal.group.Stop()
}

// called in newStep and for each pass in receiveRoutine
func (wal *baseWAL) Save(msg WALMessage) {
// Write is called in newStep and for each receive on the
// peerMsgQueue and the timoutTicker.
// NOTE: does not call fsync()
func (wal *baseWAL) Write(msg WALMessage) {
if wal == nil {
return
}

// Write the wal message
if err := wal.enc.Encode(&TimedWALMessage{time.Now(), msg}); err != nil {
cmn.PanicQ(cmn.Fmt("Error writing msg to consensus wal: %v \n\nMessage: %v", err, msg))
panic(cmn.Fmt("Error writing msg to consensus wal: %v \n\nMessage: %v", err, msg))
}
}

// WriteSync is called when we receive a msg from ourselves
// so that we write to disk before sending signed messages.
// NOTE: calls fsync()
func (wal *baseWAL) WriteSync(msg WALMessage) {
if wal == nil {
return
}

// TODO: only flush when necessary
wal.Write(msg)
if err := wal.group.Flush(); err != nil {
cmn.PanicQ(cmn.Fmt("Error flushing consensus wal buf to file. Error: %v \n", err))
panic(cmn.Fmt("Error flushing consensus wal buf to file. Error: %v \n", err))
}
}

@@ -297,8 +309,9 @@ func (dec *WALDecoder) Decode() (*TimedWALMessage, error) {

type nilWAL struct{}

func (nilWAL) Save(m WALMessage) {}
func (nilWAL) Group() *auto.Group { return nil }
func (nilWAL) Write(m WALMessage) {}
func (nilWAL) WriteSync(m WALMessage) {}
func (nilWAL) Group() *auto.Group { return nil }
func (nilWAL) SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error) {
return nil, false, nil
}
@@ -83,7 +83,7 @@ func WALWithNBlocks(numBlocks int) (data []byte, err error) {
numBlocksWritten := make(chan struct{})
wal := newByteBufferWAL(logger, NewWALEncoder(wr), int64(numBlocks), numBlocksWritten)
// see wal.go#103
wal.Save(EndHeightMessage{0})
wal.Write(EndHeightMessage{0})
consensusState.wal = wal

if err := consensusState.Start(); err != nil {
@@ -166,7 +166,7 @@ func newByteBufferWAL(logger log.Logger, enc *WALEncoder, nBlocks int64, signalS
// Save writes message to the internal buffer except when heightToStop is
// reached, in which case it will signal the caller via signalWhenStopsTo and
// skip writing.
func (w *byteBufferWAL) Save(m WALMessage) {
func (w *byteBufferWAL) Write(m WALMessage) {
if w.stopped {
w.logger.Debug("WAL already stopped. Not writing message", "msg", m)
return
@@ -189,6 +189,10 @@ func (w *byteBufferWAL) Save(m WALMessage) {
}
}

func (w *byteBufferWAL) WriteSync(m WALMessage) {
w.Write(m)
}

func (w *byteBufferWAL) Group() *auto.Group {
panic("not implemented")
}
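The WAL changes above split the old `Save` into `Write` (buffered append, no fsync) and `WriteSync` (append plus flush), and `baseWAL`, `nilWAL` and `byteBufferWAL` all grow the new pair. A schematic, self-contained sketch of how a caller is expected to choose between the two; the `printWAL` type below is a stand-in for illustration, not Tendermint code:

```go
package main

import "fmt"

// Stand-ins mirroring the WAL interface shown in the diff above.
type WALMessage interface{}

type WAL interface {
	Write(WALMessage)     // buffered append, no fsync
	WriteSync(WALMessage) // append and flush before returning
}

type printWAL struct{}

func (printWAL) Write(m WALMessage)     { fmt.Println("write:", m) }
func (printWAL) WriteSync(m WALMessage) { fmt.Println("write+fsync:", m) }

func main() {
	var wal WAL = printWAL{}

	// Messages received from peers or timeouts only need the buffered path.
	wal.Write("peer vote")

	// Before acting on something we produced ourselves (our own signed
	// messages, or marking a height as finished), the entry must be durable.
	wal.WriteSync("EndHeightMessage{height: 42}")
}
```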
||||
|
@@ -8,9 +8,9 @@ services:
    - "46656-46657:46656-46657"
    environment:
      - ID=0
      - LOG=${LOG:-tendermint.log}
      - LOG=$${LOG:-tendermint.log}
    volumes:
      - ${FOLDER:-./build}:/tendermint:Z
      - ./build:/tendermint:Z
    networks:
      localnet:
        ipv4_address: 192.167.10.2
@@ -22,9 +22,9 @@ services:
    - "46659-46660:46656-46657"
    environment:
      - ID=1
      - LOG=${LOG:-tendermint.log}
      - LOG=$${LOG:-tendermint.log}
    volumes:
      - ${FOLDER:-./build}:/tendermint:Z
      - ./build:/tendermint:Z
    networks:
      localnet:
        ipv4_address: 192.167.10.3
@@ -34,11 +34,11 @@ services:
    image: "tendermint/localnode"
    environment:
      - ID=2
      - LOG=${LOG:-tendermint.log}
      - LOG=$${LOG:-tendermint.log}
    ports:
      - "46661-46662:46656-46657"
    volumes:
      - ${FOLDER:-./build}:/tendermint:Z
      - ./build:/tendermint:Z
    networks:
      localnet:
        ipv4_address: 192.167.10.4
@@ -48,11 +48,11 @@ services:
    image: "tendermint/localnode"
    environment:
      - ID=3
      - LOG=${LOG:-tendermint.log}
      - LOG=$${LOG:-tendermint.log}
    ports:
      - "46663-46664:46656-46657"
    volumes:
      - ${FOLDER:-./build}:/tendermint:Z
      - ./build:/tendermint:Z
    networks:
      localnet:
        ipv4_address: 192.167.10.5
||||
|
@@ -66,15 +66,14 @@ and possibly await a response). And one method to query app-specific
data from the ABCI application.

Pros:
* Server code already written
* Access to block headers to validate merkle proofs (nice for light clients)
* Basic read/write functionality is supported

- Server code already written
- Access to block headers to validate merkle proofs (nice for light clients)
- Basic read/write functionality is supported

Cons:
* Limited interface to app. All queries must be serialized into
[]byte (less expressive than JSON over HTTP) and there is no way to push
data from ABCI app to the client (eg. notify me if account X receives a
transaction)

- Limited interface to app. All queries must be serialized into []byte (less expressive than JSON over HTTP) and there is no way to push data from ABCI app to the client (eg. notify me if account X receives a transaction)
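For reads, the ABCI Query path is exposed through the node's RPC as `/abci_query`. A hedged sketch of hitting that endpoint with plain `net/http`; the RPC address, query path and data values are placeholders, and the exact parameter encoding may differ between releases:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"net/url"
)

// Query app-specific state through the Tendermint RPC /abci_query endpoint,
// which proxies the request to the ABCI application's Query method.
func main() {
	params := url.Values{}
	params.Set("path", `"/key"`) // app-defined query path (hypothetical)
	params.Set("data", `"abcd"`) // app-defined query data (hypothetical)

	resp, err := http.Get("http://localhost:46657/abci_query?" + params.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // JSON-RPC envelope containing the ABCI response
}
```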
||||
|
||||
Custom ABCI server
~~~~~~~~~~~~~~~~~~
@@ -92,14 +91,19 @@ store. For "reads", we can do any queries we wish that are supported by
our architecture, using any web technology that is useful. The general
architecture is shown in the following diagram:

Pros: \* Separates application logic from blockchain logic \* Allows
much richer, more flexible client-facing API \* Allows pub-sub, watching
certain fields, etc.
.. figure:: assets/tm-application-example.png

Cons: \* Access to ABCI app can be dangerous (be VERY careful not to
write unless it comes from the validator node) \* No direct access to
the blockchain headers to verify tx \* You must write your own API (but
maybe that's a pro...)
Pros:

- Separates application logic from blockchain logic
- Allows much richer, more flexible client-facing API
- Allows pub-sub, watching certain fields, etc.

Cons:

- Access to ABCI app can be dangerous (be VERY careful not to write unless it comes from the validator node)
- No direct access to the blockchain headers to verify tx
- You must write your own API (but maybe that's a pro...)

Hybrid solutions
~~~~~~~~~~~~~~~~
@@ -108,9 +112,13 @@ Likely the least secure but most versatile. The client can access both
the tendermint node for all blockchain info, as well as a custom app
server, for complex queries and pub-sub on the abci app.

Pros: All from both above solutions
Pros:

Cons: Even more complexity; even more attack vectors (less
- All from both above solutions

Cons:

- Even more complexity; even more attack vectors (less
security)
|
||||
Scalability
||||
|
@@ -178,21 +178,22 @@ connection, to query the local state of the app.
Mempool Connection
~~~~~~~~~~~~~~~~~~

The mempool connection is used *only* for CheckTx requests. Transactions
are run using CheckTx in the same order they were received by the
validator. If the CheckTx returns ``OK``, the transaction is kept in
memory and relayed to other peers in the same order it was received.
Otherwise, it is discarded.
The mempool connection is used *only* for CheckTx requests.
Transactions are run using CheckTx in the same order they were
received by the validator. If the CheckTx returns ``OK``, the
transaction is kept in memory and relayed to other peers in the same
order it was received. Otherwise, it is discarded.

CheckTx requests run concurrently with block processing; so they should
run against a copy of the main application state which is reset after
every block. This copy is necessary to track transitions made by a
sequence of CheckTx requests before they are included in a block. When a
block is committed, the application must ensure to reset the mempool
state to the latest committed state. Tendermint Core will then filter
through all transactions in the mempool, removing any that were included
in the block, and re-run the rest using CheckTx against the post-Commit
mempool state.
CheckTx requests run concurrently with block processing; so they
should run against a copy of the main application state which is reset
after every block. This copy is necessary to track transitions made by
a sequence of CheckTx requests before they are included in a block.
When a block is committed, the application must ensure to reset the
mempool state to the latest committed state. Tendermint Core will then
filter through all transactions in the mempool, removing any that were
included in the block, and re-run the rest using CheckTx against the
post-Commit mempool state (this behaviour can be turned off with
``[mempool] recheck = false``).

.. container:: toggle

@@ -226,6 +227,23 @@ mempool state.
}
}

Replay Protection
^^^^^^^^^^^^^^^^^
To prevent old transactions from being replayed, CheckTx must
implement replay protection.

Tendermint provides the first defence layer by keeping a lightweight
in-memory cache of 100k (``[mempool] cache_size``) last transactions in
the mempool. If Tendermint is just started or the clients sent more
than 100k transactions, old transactions may be sent to the
application. So it is important CheckTx implements some logic to
handle them.

There are cases where a transaction will (or may) become valid in some
future state, in which case you probably want to disable Tendermint's
cache. You can do that by setting ``[mempool] cache_size = 0`` in the
config.
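A minimal sketch of the two ideas above: CheckTx validating against a scratch copy of the committed state (reset on every Commit) plus an application-level seen-transaction cache for replay protection. The types here are hypothetical stand-ins rather than the ABCI interfaces themselves, and DeliverTx is omitted:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sync"
)

// Hypothetical application state: applied tx hashes plus some app data.
type appState struct {
	seen     map[[32]byte]bool // app-level replay cache, independent of Tendermint's mempool cache
	balances map[string]int64
}

func (s appState) copy() appState {
	cp := appState{seen: map[[32]byte]bool{}, balances: map[string]int64{}}
	for k, v := range s.seen {
		cp.seen[k] = v
	}
	for k, v := range s.balances {
		cp.balances[k] = v
	}
	return cp
}

// App keeps the committed state (consensus connection) and a scratch copy
// used only by the mempool connection.
type App struct {
	mtx          sync.Mutex
	committed    appState
	mempoolState appState // reset to committed on every Commit
}

// CheckTx validates a tx against the scratch mempool state only.
func (a *App) CheckTx(tx []byte) bool {
	a.mtx.Lock()
	defer a.mtx.Unlock()

	h := sha256.Sum256(tx)
	if a.mempoolState.seen[h] {
		return false // replay: already seen, reject
	}
	a.mempoolState.seen[h] = true
	// ... further app-specific checks against a.mempoolState would go here ...
	return true
}

// Commit would run at the end of a block: persist the committed state and
// reset the mempool copy, so rechecks run against post-commit state.
func (a *App) Commit() {
	a.mtx.Lock()
	defer a.mtx.Unlock()
	a.mempoolState = a.committed.copy()
}

func main() {
	a := &App{committed: appState{seen: map[[32]byte]bool{}, balances: map[string]int64{}}}
	a.mempoolState = a.committed.copy()

	fmt.Println(a.CheckTx([]byte("tx1"))) // true
	fmt.Println(a.CheckTx([]byte("tx1"))) // false: rejected as a replay
}
```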
||||
|
||||
Consensus Connection
~~~~~~~~~~~~~~~~~~~~
||||
|
||||
|
docs/assets/tm-application-example.png (new binary image, 26 KiB)
docs/conf.py
@@ -71,7 +71,7 @@ language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', 'architecture', 'specification/new-spec', 'examples']
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', 'architecture', 'spec', 'examples']

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
@@ -184,20 +184,10 @@ if os.path.isdir(tools_dir) != True:
if os.path.isdir(assets_dir) != True:
    os.mkdir(assets_dir)

urllib.urlretrieve(tools_repo+tools_branch+'/ansible/README.rst', filename=tools_dir+'/ansible.rst')
urllib.urlretrieve(tools_repo+tools_branch+'/ansible/assets/a_plus_t.png', filename=assets_dir+'/a_plus_t.png')

urllib.urlretrieve(tools_repo+tools_branch+'/docker/README.rst', filename=tools_dir+'/docker.rst')

urllib.urlretrieve(tools_repo+tools_branch+'/mintnet-kubernetes/README.rst', filename=tools_dir+'/mintnet-kubernetes.rst')
urllib.urlretrieve(tools_repo+tools_branch+'/mintnet-kubernetes/assets/gce1.png', filename=assets_dir+'/gce1.png')
urllib.urlretrieve(tools_repo+tools_branch+'/mintnet-kubernetes/assets/gce2.png', filename=assets_dir+'/gce2.png')
urllib.urlretrieve(tools_repo+tools_branch+'/mintnet-kubernetes/assets/statefulset.png', filename=assets_dir+'/statefulset.png')
urllib.urlretrieve(tools_repo+tools_branch+'/mintnet-kubernetes/assets/t_plus_k.png', filename=assets_dir+'/t_plus_k.png')

urllib.urlretrieve(tools_repo+tools_branch+'/terraform-digitalocean/README.rst', filename=tools_dir+'/terraform-digitalocean.rst')
urllib.urlretrieve(tools_repo+tools_branch+'/tm-bench/README.rst', filename=tools_dir+'/benchmarking.rst')
urllib.urlretrieve('https://raw.githubusercontent.com/tendermint/tools/master/tm-monitor/README.rst', filename='tools/monitoring.rst')
urllib.urlretrieve(tools_repo+tools_branch+'/tm-monitor/README.rst', filename='tools/monitoring.rst')

#### abci spec #################################
||||
|
||||
|
@@ -3,8 +3,7 @@ Deploy a Testnet

Now that we've seen how ABCI works, and even played with a few
applications on a single validator node, it's time to deploy a test
network to four validator nodes. For this deployment, we'll use the
``basecoin`` application.
network to four validator nodes.

Manual Deployments
------------------
@@ -24,67 +23,42 @@ Here are the steps to setting up a testnet manually:
``tendermint init``
4) Compile a list of public keys for each validator into a
``genesis.json`` file and replace the existing file with it.
5) Run ``tendermint node --p2p.persistent_peers=< peer addresses >`` on each node,
5) Run ``tendermint node --proxy_app=kvstore --p2p.persistent_peers=< peer addresses >`` on each node,
where ``< peer addresses >`` is a comma separated list of the IP:PORT
combination for each node. The default port for Tendermint is
``46656``. Thus, if the IP addresses of your nodes were
``192.168.0.1, 192.168.0.2, 192.168.0.3, 192.168.0.4``, the command
would look like:
``tendermint node --p2p.persistent_peers=96663a3dd0d7b9d17d4c8211b191af259621c693@192.168.0.1:46656, 429fcf25974313b95673f58d77eacdd434402665@192.168.0.2:46656, 0491d373a8e0fcf1023aaf18c51d6a1d0d4f31bd@192.168.0.3:46656, f9baeaa15fedf5e1ef7448dd60f46c01f1a9e9c4@192.168.0.4:46656``.

::

   tendermint node --proxy_app=kvstore --p2p.persistent_peers=96663a3dd0d7b9d17d4c8211b191af259621c693@192.168.0.1:46656, 429fcf25974313b95673f58d77eacdd434402665@192.168.0.2:46656, 0491d373a8e0fcf1023aaf18c51d6a1d0d4f31bd@192.168.0.3:46656, f9baeaa15fedf5e1ef7448dd60f46c01f1a9e9c4@192.168.0.4:46656

After a few seconds, all the nodes should connect to each other and start
making blocks! For more information, see the Tendermint Networks section
of `the guide to using Tendermint <using-tendermint.html>`__.

But wait! Steps 3 and 4 are quite manual. Instead, use `this script <https://github.com/tendermint/tendermint/blob/develop/docs/examples/init_testnet.sh>`__, which does the heavy lifting for you. And it gets better.

Instead of the previously linked script to initialize the files required for a testnet, we have the ``tendermint testnet`` command. By default, running ``tendermint testnet`` will create all the required files, just like the script. Of course, you'll still need to manually edit some fields in the ``config.toml``. Alternatively, see the available flags to auto-populate the ``config.toml`` with the fields that would otherwise be passed in via flags when running ``tendermint node``. As you might imagine, this command is useful for manual or automated deployments.

Automated Deployments
---------------------

While the manual deployment is easy enough, an automated deployment is
usually quicker. The below examples show different tools that can be used
for automated deployments.
The easiest and fastest way to get a testnet up in less than 5 minutes.

Automated Deployment using Kubernetes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Local
^^^^^

The `mintnet-kubernetes tool <https://github.com/tendermint/tools/tree/master/mintnet-kubernetes>`__
allows automating the deployment of a Tendermint network on an already
provisioned Kubernetes cluster. For simple provisioning of a Kubernetes
cluster, check out the `Google Cloud Platform <https://cloud.google.com/>`__.

Automated Deployment using Terraform and Ansible
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The `terraform-digitalocean tool <https://github.com/tendermint/tools/tree/master/terraform-digitalocean>`__
allows creating a set of servers on the DigitalOcean cloud.

The `ansible playbooks <https://github.com/tendermint/tools/tree/master/ansible>`__
allow creating and managing a ``basecoin`` or ``ethermint`` testnet on provisioned servers.

Package Deployment on Linux for developers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The ``tendermint`` and ``basecoin`` applications can be installed from RPM or DEB packages on
Linux machines for development purposes. The packages are configured to be validators on the
one-node network that the machine represents. The services are not started after installation,
this way giving an opportunity to reconfigure the applications before starting.

The Ansible playbooks in the previous section use this repository to install ``basecoin``.
After installation, additional steps are executed to make sure that the multi-node testnet has
the right configuration before start.

Install from the CentOS/RedHat repository:
With ``docker`` and ``docker-compose`` installed, run the command:

::

   rpm --import https://tendermint-packages.interblock.io/centos/7/os/x86_64/RPM-GPG-KEY-Tendermint
   wget -O /etc/yum.repos.d/tendermint.repo https://tendermint-packages.interblock.io/centos/7/os/x86_64/tendermint.repo
   yum install basecoin
   make localnet-start

Install from the Debian/Ubuntu repository:
from the root of the tendermint repository. This will spin up a 4-node local testnet.

::

   wget -O - https://tendermint-packages.interblock.io/centos/7/os/x86_64/RPM-GPG-KEY-Tendermint | apt-key add -
   wget -O /etc/apt/sources.list.d/tendermint.list https://tendermint-packages.interblock.io/debian/tendermint.list
   apt-get update && apt-get install basecoin
Cloud
^^^^^

See the `next section <./terraform-and-ansible.html>`__ for details.
||||
|
@@ -10,10 +10,10 @@ documentation](http://tendermint.readthedocs.io/en/master/).

### Quick Install

On a fresh Ubuntu 16.04 machine can be done with [this script](https://git.io/vNLfY), like so:
On a fresh Ubuntu 16.04 machine can be done with [this script](https://git.io/vpgEI), like so:

```
curl -L https://git.io/vxWlX | bash
curl -L https://git.io/vpgEI | bash
source ~/.profile
```

@@ -24,7 +24,7 @@ The script is also used to facilitate cluster deployment below.
### Manual Install

Requires:
- `go` minimum version 1.9
- `go` minimum version 1.10
- `$GOPATH` environment variable must be set
- `$GOPATH/bin` must be on your `$PATH` (see https://github.com/tendermint/tendermint/wiki/Setting-GOPATH)

@@ -125,7 +125,7 @@ addresses below as IP1, IP2, IP3, IP4.
Then, `ssh` into each machine, and execute [this script](https://git.io/vNLfY):

```
curl -L https://git.io/vNLfY | bash
curl -L https://git.io/vpgEI | bash
source ~/.profile
```

@@ -134,10 +134,10 @@ This will install `go` and other dependencies, get the Tendermint source code, t
Next, `cd` into `docs/examples`. Each command below should be run from each node, in sequence:

```
tendermint node --home ./node1 --proxy_app=kvstore --p2p.persistent_peers="3a558bd6f8c97453aa6c2372bb800e8b6ed8e6db@IP1:46656,ccf30d873fddda10a495f42687c8f33472a6569f@IP2:46656,9a4c3de5d6788a76c6ee3cd9ff41e3b45b4cfd14@IP3:46656,58e6f2ab297b3ceae107ba4c8c2898da5c009ff4@IP4:46656"
tendermint node --home ./node2 --proxy_app=kvstore --p2p.persistent_peers="3a558bd6f8c97453aa6c2372bb800e8b6ed8e6db@IP1:46656,ccf30d873fddda10a495f42687c8f33472a6569f@IP2:46656,9a4c3de5d6788a76c6ee3cd9ff41e3b45b4cfd14@IP3:46656,58e6f2ab297b3ceae107ba4c8c2898da5c009ff4@IP4:46656"
tendermint node --home ./node3 --proxy_app=kvstore --p2p.persistent_peers="3a558bd6f8c97453aa6c2372bb800e8b6ed8e6db@IP1:46656,ccf30d873fddda10a495f42687c8f33472a6569f@IP2:46656,9a4c3de5d6788a76c6ee3cd9ff41e3b45b4cfd14@IP3:46656,58e6f2ab297b3ceae107ba4c8c2898da5c009ff4@IP4:46656"
tendermint node --home ./node4 --proxy_app=kvstore --p2p.persistent_peers="3a558bd6f8c97453aa6c2372bb800e8b6ed8e6db@IP1:46656,ccf30d873fddda10a495f42687c8f33472a6569f@IP2:46656,9a4c3de5d6788a76c6ee3cd9ff41e3b45b4cfd14@IP3:46656,58e6f2ab297b3ceae107ba4c8c2898da5c009ff4@IP4:46656"
tendermint node --home ./node0 --proxy_app=kvstore --p2p.persistent_peers="167b80242c300bf0ccfb3ced3dec60dc2a81776e@IP1:46656,3c7a5920811550c04bf7a0b2f1e02ab52317b5e6@IP2:46656,303a1a4312c30525c99ba66522dd81cca56a361a@IP3:46656,b686c2a7f4b1b46dca96af3a0f31a6a7beae0be4@IP4:46656"
tendermint node --home ./node1 --proxy_app=kvstore --p2p.persistent_peers="167b80242c300bf0ccfb3ced3dec60dc2a81776e@IP1:46656,3c7a5920811550c04bf7a0b2f1e02ab52317b5e6@IP2:46656,303a1a4312c30525c99ba66522dd81cca56a361a@IP3:46656,b686c2a7f4b1b46dca96af3a0f31a6a7beae0be4@IP4:46656"
tendermint node --home ./node2 --proxy_app=kvstore --p2p.persistent_peers="167b80242c300bf0ccfb3ced3dec60dc2a81776e@IP1:46656,3c7a5920811550c04bf7a0b2f1e02ab52317b5e6@IP2:46656,303a1a4312c30525c99ba66522dd81cca56a361a@IP3:46656,b686c2a7f4b1b46dca96af3a0f31a6a7beae0be4@IP4:46656"
tendermint node --home ./node3 --proxy_app=kvstore --p2p.persistent_peers="167b80242c300bf0ccfb3ced3dec60dc2a81776e@IP1:46656,3c7a5920811550c04bf7a0b2f1e02ab52317b5e6@IP2:46656,303a1a4312c30525c99ba66522dd81cca56a361a@IP3:46656,b686c2a7f4b1b46dca96af3a0f31a6a7beae0be4@IP4:46656"
```

Note that after the third node is started, blocks will start to stream in
||||
|
docs/examples/init_testnet.sh (new file)
@@ -0,0 +1,69 @@
#!/usr/bin/env bash

# make all the files
tendermint init --home ./tester/node0
tendermint init --home ./tester/node1
tendermint init --home ./tester/node2
tendermint init --home ./tester/node3

file0=./tester/node0/config/genesis.json
file1=./tester/node1/config/genesis.json
file2=./tester/node2/config/genesis.json
file3=./tester/node3/config/genesis.json

genesis_time=`cat $file0 | jq '.genesis_time'`
chain_id=`cat $file0 | jq '.chain_id'`

value0=`cat $file0 | jq '.validators[0].pub_key.value'`
value1=`cat $file1 | jq '.validators[0].pub_key.value'`
value2=`cat $file2 | jq '.validators[0].pub_key.value'`
value3=`cat $file3 | jq '.validators[0].pub_key.value'`

rm $file0
rm $file1
rm $file2
rm $file3

echo "{
  \"genesis_time\": $genesis_time,
  \"chain_id\": $chain_id,
  \"validators\": [
    {
      \"pub_key\": {
        \"type\": \"AC26791624DE60\",
        \"value\": $value0
      },
      \"power:\": 10,
      \"name\":, \"\"
    },
    {
      \"pub_key\": {
        \"type\": \"AC26791624DE60\",
        \"value\": $value1
      },
      \"power:\": 10,
      \"name\":, \"\"
    },
    {
      \"pub_key\": {
        \"type\": \"AC26791624DE60\",
        \"value\": $value2
      },
      \"power:\": 10,
      \"name\":, \"\"
    },
    {
      \"pub_key\": {
        \"type\": \"AC26791624DE60\",
        \"value\": $value3
      },
      \"power:\": 10,
      \"name\":, \"\"
    }
  ],
  \"app_hash\": \"\"
}" >> $file0

cp $file0 $file1
cp $file0 $file2
cp $file2 $file3
@@ -26,7 +26,7 @@ go get $REPO
cd $GOPATH/src/$REPO

## build
git checkout v0.18.0
git checkout master
make get_tools
make get_vendor_deps
make install
||||
|
docs/examples/node0/config/config.toml (new file)
@@ -0,0 +1,169 @@
# This is a TOML config file.
# For more information, see https://github.com/toml-lang/toml

##### main base config options #####

# TCP or UNIX socket address of the ABCI application,
# or the name of an ABCI application compiled in with the Tendermint binary
proxy_app = "tcp://127.0.0.1:46658"

# A custom human readable name for this node
moniker = "alpha"

# If this node is many blocks behind the tip of the chain, FastSync
# allows them to catchup quickly by downloading blocks in parallel
# and verifying their commits
fast_sync = true

# Database backend: leveldb | memdb
db_backend = "leveldb"

# Database directory
db_path = "data"

# Output level for logging, including package level options
log_level = "main:info,state:info,*:error"

##### additional base config options #####

# Path to the JSON file containing the initial validator set and other meta data
genesis_file = "config/genesis.json"

# Path to the JSON file containing the private key to use as a validator in the consensus protocol
priv_validator_file = "config/priv_validator.json"

# Path to the JSON file containing the private key to use for node authentication in the p2p protocol
node_key_file = "config/node_key.json"

# Mechanism to connect to the ABCI application: socket | grpc
abci = "socket"

# TCP or UNIX socket address for the profiling server to listen on
prof_laddr = ""

# If true, query the ABCI app on connecting to a new peer
# so the app can decide if we should keep the connection or not
filter_peers = false

##### advanced configuration options #####

##### rpc server configuration options #####
[rpc]

# TCP or UNIX socket address for the RPC server to listen on
laddr = "tcp://0.0.0.0:46657"

# TCP or UNIX socket address for the gRPC server to listen on
# NOTE: This server only supports /broadcast_tx_commit
grpc_laddr = ""

# Activate unsafe RPC commands like /dial_seeds and /unsafe_flush_mempool
unsafe = false

##### peer to peer configuration options #####
[p2p]

# Address to listen for incoming connections
laddr = "tcp://0.0.0.0:46656"

# Comma separated list of seed nodes to connect to
seeds = ""

# Comma separated list of nodes to keep persistent connections to
# Do not add private peers to this list if you don't want them advertised
persistent_peers = ""

# Path to address book
addr_book_file = "config/addrbook.json"

# Set true for strict address routability rules
addr_book_strict = true

# Time to wait before flushing messages out on the connection, in ms
flush_throttle_timeout = 100

# Maximum number of peers to connect to
max_num_peers = 50

# Maximum size of a message packet payload, in bytes
max_packet_msg_payload_size = 1024

# Rate at which packets can be sent, in bytes/second
send_rate = 512000

# Rate at which packets can be received, in bytes/second
recv_rate = 512000

# Set true to enable the peer-exchange reactor
pex = true

# Seed mode, in which node constantly crawls the network and looks for
# peers. If another node asks it for addresses, it responds and disconnects.
#
# Does not work if the peer-exchange reactor is disabled.
seed_mode = false

# Authenticated encryption
auth_enc = true

# Comma separated list of peer IDs to keep private (will not be gossiped to other peers)
private_peer_ids = ""

##### mempool configuration options #####
[mempool]

recheck = true
recheck_empty = true
broadcast = true
wal_dir = "data/mempool.wal"

##### consensus configuration options #####
[consensus]

wal_file = "data/cs.wal/wal"

# All timeouts are in milliseconds
timeout_propose = 3000
timeout_propose_delta = 500
timeout_prevote = 1000
timeout_prevote_delta = 500
timeout_precommit = 1000
timeout_precommit_delta = 500
timeout_commit = 1000

# Make progress as soon as we have all the precommits (as if TimeoutCommit = 0)
skip_timeout_commit = false

# BlockSize
max_block_size_txs = 10000
max_block_size_bytes = 1

# EmptyBlocks mode and possible interval between empty blocks in seconds
create_empty_blocks = true
create_empty_blocks_interval = 0

# Reactor sleep duration parameters are in milliseconds
peer_gossip_sleep_duration = 100
peer_query_maj23_sleep_duration = 2000

##### transactions indexer configuration options #####
[tx_index]

# What indexer to use for transactions
#
# Options:
# 1) "null" (default)
# 2) "kv" - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
indexer = "kv"

# Comma-separated list of tags to index (by default the only tag is tx hash)
#
# It's recommended to index only a subset of tags due to possible memory
# bloat. This is, of course, depends on the indexer's DB and the volume of
# transactions.
index_tags = ""

# When set to true, tells indexer to index all tags. Note this may be not
# desirable (see the comment above). IndexTags has a precedence over
# IndexAllTags (i.e. when given both, IndexTags will be indexed).
index_all_tags = false
docs/examples/node0/config/genesis.json (new file)
@@ -0,0 +1,39 @@
{
  "genesis_time": "0001-01-01T00:00:00Z",
  "chain_id": "test-chain-A2i3OZ",
  "validators": [
    {
      "pub_key": {
        "type": "AC26791624DE60",
        "value": "D+k4AdjnYPWbB9wmad137Bdpo/kAulOoTRQrLy/Qc4k="
      },
      "power": 10,
      "name": ""
    },
    {
      "pub_key": {
        "type": "AC26791624DE60",
        "value": "b56N5GCR1adcVRuENjfKw/mrm2dkhT7wNZXV/SDsKsU="
      },
      "power": 10,
      "name": ""
    },
    {
      "pub_key": {
        "type": "AC26791624DE60",
        "value": "IgZDpJvGA0TAamicA8ircy+RX/BkUlj6DXwM791ywIU="
      },
      "power": 10,
      "name": ""
    },
    {
      "pub_key": {
        "type": "AC26791624DE60",
        "value": "KGAZfxZvIZ7abbeIQ85U1ECG6+I62KSdaH8ulc0+OiU="
      },
      "power": 10,
      "name": ""
    }
  ],
  "app_hash": ""
}
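Every node in this example testnet ships the same genesis.json. A small sketch of reading one of these files with `encoding/json`; the struct below mirrors only the fields shown above and is not Tendermint's own GenesisDoc type:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
)

// Minimal mirror of the fields visible in the genesis file above.
type genesis struct {
	GenesisTime string `json:"genesis_time"`
	ChainID     string `json:"chain_id"`
	Validators  []struct {
		PubKey struct {
			Type  string `json:"type"`
			Value string `json:"value"`
		} `json:"pub_key"`
		Power int64  `json:"power"`
		Name  string `json:"name"`
	} `json:"validators"`
	AppHash string `json:"app_hash"`
}

func main() {
	bz, err := ioutil.ReadFile("docs/examples/node0/config/genesis.json")
	if err != nil {
		panic(err)
	}
	var gen genesis
	if err := json.Unmarshal(bz, &gen); err != nil {
		panic(err)
	}
	// Every node in the testnet must share the same chain_id and validator set.
	fmt.Println(gen.ChainID, len(gen.Validators))
}
```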
|
docs/examples/node0/config/node_key.json (new file)
@@ -0,0 +1 @@
{"priv_key":{"type":"954568A3288910","value":"7lY+k6EDllG8Q9gVbF5313t/ag2YGkBVKdVa0YHJ9xO5k0w3Q/hke0Z7UFT1KgVDGRUEKzwAwwjwFQUvgF0ZWg=="}}

docs/examples/node0/config/priv_validator.json (new file)
@@ -0,0 +1,14 @@
{
  "address": "122A9414774A2FCAD026201DA477EF3F41970EF0",
  "pub_key": {
    "type": "AC26791624DE60",
    "value": "D+k4AdjnYPWbB9wmad137Bdpo/kAulOoTRQrLy/Qc4k="
  },
  "last_height": 0,
  "last_round": 0,
  "last_step": 0,
  "priv_key": {
    "type": "954568A3288910",
    "value": "YLxp3ho+kySgAnzjBptbxDzSGw2ntGZLsIHQsaVxY/cP6TgB2Odg9ZsH3CZp3XfsF2mj+QC6U6hNFCsvL9BziQ=="
  }
}

@@ -1,15 +0,0 @@
# This is a TOML config file.
# For more information, see https://github.com/toml-lang/toml

proxy_app = "tcp://127.0.0.1:46658"
moniker = "penguin"
fast_sync = true
db_backend = "leveldb"
log_level = "state:info,*:error"

[rpc]
laddr = "tcp://0.0.0.0:46657"

[p2p]
laddr = "tcp://0.0.0.0:46656"
seeds = ""
|
169
docs/examples/node1/config/config.toml
Normal file
169
docs/examples/node1/config/config.toml
Normal file
@@ -0,0 +1,169 @@
|
||||
# This is a TOML config file.
|
||||
# For more information, see https://github.com/toml-lang/toml
|
||||
|
||||
##### main base config options #####
|
||||
|
||||
# TCP or UNIX socket address of the ABCI application,
|
||||
# or the name of an ABCI application compiled in with the Tendermint binary
|
||||
proxy_app = "tcp://127.0.0.1:46658"
|
||||
|
||||
# A custom human readable name for this node
|
||||
moniker = "bravo"
|
||||
|
||||
# If this node is many blocks behind the tip of the chain, FastSync
|
||||
# allows them to catchup quickly by downloading blocks in parallel
|
||||
# and verifying their commits
|
||||
fast_sync = true
|
||||
|
||||
# Database backend: leveldb | memdb
|
||||
db_backend = "leveldb"
|
||||
|
||||
# Database directory
|
||||
db_path = "data"
|
||||
|
||||
# Output level for logging, including package level options
|
||||
log_level = "main:info,state:info,*:error"
|
||||
|
||||
##### additional base config options #####
|
||||
|
||||
# Path to the JSON file containing the initial validator set and other meta data
|
||||
genesis_file = "config/genesis.json"
|
||||
|
||||
# Path to the JSON file containing the private key to use as a validator in the consensus protocol
|
||||
priv_validator_file = "config/priv_validator.json"
|
||||
|
||||
# Path to the JSON file containing the private key to use for node authentication in the p2p protocol
|
||||
node_key_file = "config/node_key.json"
|
||||
|
||||
# Mechanism to connect to the ABCI application: socket | grpc
|
||||
abci = "socket"
|
||||
|
||||
# TCP or UNIX socket address for the profiling server to listen on
|
||||
prof_laddr = ""
|
||||
|
||||
# If true, query the ABCI app on connecting to a new peer
|
||||
# so the app can decide if we should keep the connection or not
|
||||
filter_peers = false
|
||||
|
||||
##### advanced configuration options #####
|
||||
|
||||
##### rpc server configuration options #####
|
||||
[rpc]
|
||||
|
||||
# TCP or UNIX socket address for the RPC server to listen on
|
||||
laddr = "tcp://0.0.0.0:46657"
|
||||
|
||||
# TCP or UNIX socket address for the gRPC server to listen on
|
||||
# NOTE: This server only supports /broadcast_tx_commit
|
||||
grpc_laddr = ""
|
||||
|
||||
# Activate unsafe RPC commands like /dial_seeds and /unsafe_flush_mempool
|
||||
unsafe = false
|
||||
|
||||
##### peer to peer configuration options #####
|
||||
[p2p]
|
||||
|
||||
# Address to listen for incoming connections
|
||||
laddr = "tcp://0.0.0.0:46656"
|
||||
|
||||
# Comma separated list of seed nodes to connect to
|
||||
seeds = ""
|
||||
|
||||
# Comma separated list of nodes to keep persistent connections to
|
||||
# Do not add private peers to this list if you don't want them advertised
|
||||
persistent_peers = ""
|
||||
|
||||
# Path to address book
|
||||
addr_book_file = "config/addrbook.json"
|
||||
|
||||
# Set true for strict address routability rules
|
||||
addr_book_strict = true
|
||||
|
||||
# Time to wait before flushing messages out on the connection, in ms
|
||||
flush_throttle_timeout = 100
|
||||
|
||||
# Maximum number of peers to connect to
|
||||
max_num_peers = 50
|
||||
|
||||
# Maximum size of a message packet payload, in bytes
|
||||
max_packet_msg_payload_size = 1024
|
||||
|
||||
# Rate at which packets can be sent, in bytes/second
|
||||
send_rate = 512000
|
||||
|
||||
# Rate at which packets can be received, in bytes/second
|
||||
recv_rate = 512000
|
||||
|
||||
# Set true to enable the peer-exchange reactor
|
||||
pex = true
|
||||
|
||||
# Seed mode, in which node constantly crawls the network and looks for
|
||||
# peers. If another node asks it for addresses, it responds and disconnects.
|
||||
#
|
||||
# Does not work if the peer-exchange reactor is disabled.
|
||||
seed_mode = false
|
||||
|
||||
# Authenticated encryption
|
||||
auth_enc = true
|
||||
|
||||
# Comma separated list of peer IDs to keep private (will not be gossiped to other peers)
|
||||
private_peer_ids = ""
|
||||
|
||||
##### mempool configuration options #####
|
||||
[mempool]
|
||||
|
||||
recheck = true
|
||||
recheck_empty = true
|
||||
broadcast = true
|
||||
wal_dir = "data/mempool.wal"
|
||||
|
||||
##### consensus configuration options #####
|
||||
[consensus]
|
||||
|
||||
wal_file = "data/cs.wal/wal"
|
||||
|
||||
# All timeouts are in milliseconds
|
||||
timeout_propose = 3000
|
||||
timeout_propose_delta = 500
|
||||
timeout_prevote = 1000
|
||||
timeout_prevote_delta = 500
|
||||
timeout_precommit = 1000
|
||||
timeout_precommit_delta = 500
|
||||
timeout_commit = 1000
|
||||
|
||||
# Make progress as soon as we have all the precommits (as if TimeoutCommit = 0)
|
||||
skip_timeout_commit = false
|
||||
|
||||
# BlockSize
|
||||
max_block_size_txs = 10000
|
||||
max_block_size_bytes = 1
|
||||
|
||||
# EmptyBlocks mode and possible interval between empty blocks in seconds
|
||||
create_empty_blocks = true
|
||||
create_empty_blocks_interval = 0
|
||||
|
||||
# Reactor sleep duration parameters are in milliseconds
|
||||
peer_gossip_sleep_duration = 100
|
||||
peer_query_maj23_sleep_duration = 2000
|
||||
|
||||
##### transactions indexer configuration options #####
|
||||
[tx_index]
|
||||
|
||||
# What indexer to use for transactions
|
||||
#
|
||||
# Options:
|
||||
# 1) "null" (default)
|
||||
# 2) "kv" - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
|
||||
indexer = "kv"
|
||||
|
||||
# Comma-separated list of tags to index (by default the only tag is tx hash)
|
||||
#
|
||||
# It's recommended to index only a subset of tags due to possible memory
|
||||
# bloat. This is, of course, depends on the indexer's DB and the volume of
|
||||
# transactions.
|
||||
index_tags = ""
|
||||
|
||||
# When set to true, tells indexer to index all tags. Note this may be not
|
||||
# desirable (see the comment above). IndexTags has a precedence over
|
||||
# IndexAllTags (i.e. when given both, IndexTags will be indexed).
|
||||
index_all_tags = false
|
docs/examples/node1/config/genesis.json (new file)
@@ -0,0 +1,39 @@
{
  "genesis_time": "0001-01-01T00:00:00Z",
  "chain_id": "test-chain-A2i3OZ",
  "validators": [
    {
      "pub_key": {
        "type": "AC26791624DE60",
        "value": "D+k4AdjnYPWbB9wmad137Bdpo/kAulOoTRQrLy/Qc4k="
      },
      "power": 10,
      "name": ""
    },
    {
      "pub_key": {
        "type": "AC26791624DE60",
        "value": "b56N5GCR1adcVRuENjfKw/mrm2dkhT7wNZXV/SDsKsU="
      },
      "power": 10,
      "name": ""
    },
    {
      "pub_key": {
        "type": "AC26791624DE60",
        "value": "IgZDpJvGA0TAamicA8ircy+RX/BkUlj6DXwM791ywIU="
      },
      "power": 10,
      "name": ""
    },
    {
      "pub_key": {
        "type": "AC26791624DE60",
        "value": "KGAZfxZvIZ7abbeIQ85U1ECG6+I62KSdaH8ulc0+OiU="
      },
      "power": 10,
      "name": ""
    }
  ],
  "app_hash": ""
}

docs/examples/node1/config/node_key.json (new file)
@@ -0,0 +1 @@
{"priv_key":{"type":"954568A3288910","value":"H71dc/TIG7nTselfa9nG0WRArXLKYnm7P5eFCk2lk8ASKQ3sIHpbdxCSHQD/RcdHe7TiabJeuOssNPvPWiyQEQ=="}}

docs/examples/node1/config/priv_validator.json (new file)
@@ -0,0 +1,14 @@
{
  "address": "BEA1B57F5806CF9AC4D54C8CF806DED5C0F102E1",
  "pub_key": {
    "type": "AC26791624DE60",
    "value": "b56N5GCR1adcVRuENjfKw/mrm2dkhT7wNZXV/SDsKsU="
  },
  "last_height": 0,
  "last_round": 0,
  "last_step": 0,
  "priv_key": {
    "type": "954568A3288910",
    "value": "o0IqrHSPtd5YqGefodWxpJuRzvuVBjgbH785vbMgk7Vvno3kYJHVp1xVG4Q2N8rD+aubZ2SFPvA1ldX9IOwqxQ=="
  }
}
|
@@ -1,42 +0,0 @@
{
  "genesis_time":"0001-01-01T00:00:00Z",
  "chain_id":"test-chain-wt7apy",
  "validators":[
    {
      "pub_key":{
        "type":"ed25519",
        "data":"F08446C80A33E10D620E21450821B58D053778528F2B583D423B3E46EC647D30"
      },
      "power":10,
      "name":"node1"
    }
    ,
    {
      "pub_key":{
        "type":"ed25519",
        "data": "A8423F70A9E512643B4B00F7C3701ECAD1F31B0A1FAA45852C41046353B9A07F"
      },
      "power":10,
      "name":"node2"
    }
    ,
    {
      "pub_key":{
        "type":"ed25519",
        "data": "E52EFFAEDFE1D618ECDA71DE3B23592B3612CAABA0C10826E4C3120B2198C29A"
      },
      "power":10,
      "name":"node3"
    }
    ,
    {
      "pub_key":{
        "type":"ed25519",
        "data": "2B8FC09C07955A02998DFE5AF1AAD1C44115ECA7635FF51A867CF4265D347C07"
      },
      "power":10,
      "name":"node4"
    }
  ],
  "app_hash":""
}
@@ -1,6 +0,0 @@
{
  "priv_key" : {
    "data" : "DA9BAABEA7211A6D93D9A1986B4279EAB3021FAA1653D459D53E6AB4D1CFB4C69BF7D52E48CF00AC5779AA0A6D3C368955D5636A677F72370B8ED19989714CFC",
    "type" : "ed25519"
  }
}
@@ -1,15 +0,0 @@
{
  "address":"4DC2756029CE0D8F8C6C3E4C3CE6EE8C30AF352F",
  "pub_key":{
    "type":"ed25519",
    "data":"F08446C80A33E10D620E21450821B58D053778528F2B583D423B3E46EC647D30"
  },
  "last_height":0,
  "last_round":0,
  "last_step":0,
  "last_signature":null,
  "priv_key":{
    "type":"ed25519",
    "data":"4D3648E1D93C8703E436BFF814728B6BD270CFDFD686DF5385E8ACBEB7BE2D7DF08446C80A33E10D620E21450821B58D053778528F2B583D423B3E46EC647D30"
  }
}
@@ -1,15 +0,0 @@
# This is a TOML config file.
# For more information, see https://github.com/toml-lang/toml

proxy_app = "tcp://127.0.0.1:46658"
moniker = "penguin"
fast_sync = true
db_backend = "leveldb"
log_level = "state:info,*:error"

[rpc]
laddr = "tcp://0.0.0.0:46657"

[p2p]
laddr = "tcp://0.0.0.0:46656"
seeds = ""
|
169
docs/examples/node2/config/config.toml
Normal file
169
docs/examples/node2/config/config.toml
Normal file
@@ -0,0 +1,169 @@
|
||||
# This is a TOML config file.
|
||||
# For more information, see https://github.com/toml-lang/toml
|
||||
|
||||
##### main base config options #####
|
||||
|
||||
# TCP or UNIX socket address of the ABCI application,
|
||||
# or the name of an ABCI application compiled in with the Tendermint binary
|
||||
proxy_app = "tcp://127.0.0.1:46658"
|
||||
|
||||
# A custom human readable name for this node
|
||||
moniker = "charlie"
|
||||
|
||||
# If this node is many blocks behind the tip of the chain, FastSync
|
||||
# allows them to catchup quickly by downloading blocks in parallel
|
||||
# and verifying their commits
|
||||
fast_sync = true
|
||||
|
||||
# Database backend: leveldb | memdb
|
||||
db_backend = "leveldb"
|
||||
|
||||
# Database directory
|
||||
db_path = "data"
|
||||
|
||||
# Output level for logging, including package level options
|
||||
log_level = "main:info,state:info,*:error"
|
||||
|
||||
##### additional base config options #####
|
||||
|
||||
# Path to the JSON file containing the initial validator set and other meta data
|
||||
genesis_file = "config/genesis.json"
|
||||
|
||||
# Path to the JSON file containing the private key to use as a validator in the consensus protocol
|
||||
priv_validator_file = "config/priv_validator.json"
|
||||
|
||||
# Path to the JSON file containing the private key to use for node authentication in the p2p protocol
|
||||
node_key_file = "config/node_key.json"
|
||||
|
||||
# Mechanism to connect to the ABCI application: socket | grpc
|
||||
abci = "socket"
|
||||
|
||||
# TCP or UNIX socket address for the profiling server to listen on
|
||||
prof_laddr = ""
|
||||
|
||||
# If true, query the ABCI app on connecting to a new peer
|
||||
# so the app can decide if we should keep the connection or not
|
||||
filter_peers = false
|
||||
|
||||
##### advanced configuration options #####
|
||||
|
||||
##### rpc server configuration options #####
|
||||
[rpc]
|
||||
|
||||
# TCP or UNIX socket address for the RPC server to listen on
|
||||
laddr = "tcp://0.0.0.0:46657"
|
||||
|
||||
# TCP or UNIX socket address for the gRPC server to listen on
|
||||
# NOTE: This server only supports /broadcast_tx_commit
|
||||
grpc_laddr = ""
|
||||
|
||||
# Activate unsafe RPC commands like /dial_seeds and /unsafe_flush_mempool
|
||||
unsafe = false
|
||||
|
||||
##### peer to peer configuration options #####
|
||||
[p2p]
|
||||
|
||||
# Address to listen for incoming connections
|
||||
laddr = "tcp://0.0.0.0:46656"
|
||||
|
||||
# Comma separated list of seed nodes to connect to
|
||||
seeds = ""
|
||||
|
||||
# Comma separated list of nodes to keep persistent connections to
|
||||
# Do not add private peers to this list if you don't want them advertised
|
||||
persistent_peers = ""
|
||||
|
||||
# Path to address book
|
||||
addr_book_file = "config/addrbook.json"
|
||||
|
||||
# Set true for strict address routability rules
|
||||
addr_book_strict = true
|
||||
|
||||
# Time to wait before flushing messages out on the connection, in ms
|
||||
flush_throttle_timeout = 100
|
||||
|
||||
# Maximum number of peers to connect to
|
||||
max_num_peers = 50
|
||||
|
||||
# Maximum size of a message packet payload, in bytes
|
||||
max_packet_msg_payload_size = 1024
|
||||
|
||||
# Rate at which packets can be sent, in bytes/second
|
||||
send_rate = 512000
|
||||
|
||||
# Rate at which packets can be received, in bytes/second
|
||||
recv_rate = 512000
|
||||
|
||||
# Set true to enable the peer-exchange reactor
|
||||
pex = true
|
||||
|
||||
# Seed mode, in which node constantly crawls the network and looks for
|
||||
# peers. If another node asks it for addresses, it responds and disconnects.
|
||||
#
|
||||
# Does not work if the peer-exchange reactor is disabled.
|
||||
seed_mode = false
|
||||
|
||||
# Authenticated encryption
|
||||
auth_enc = true
|
||||
|
||||
# Comma separated list of peer IDs to keep private (will not be gossiped to other peers)
|
||||
private_peer_ids = ""
|
||||
|
||||
##### mempool configuration options #####
|
||||
[mempool]
|
||||
|
||||
recheck = true
|
||||
recheck_empty = true
|
||||
broadcast = true
|
||||
wal_dir = "data/mempool.wal"
|
||||
|
||||
##### consensus configuration options #####
|
||||
[consensus]
|
||||
|
||||
wal_file = "data/cs.wal/wal"
|
||||
|
||||
# All timeouts are in milliseconds
|
||||
timeout_propose = 3000
|
||||
timeout_propose_delta = 500
|
||||
timeout_prevote = 1000
|
||||
timeout_prevote_delta = 500
|
||||
timeout_precommit = 1000
|
||||
timeout_precommit_delta = 500
|
||||
timeout_commit = 1000
|
||||
|
||||
# Make progress as soon as we have all the precommits (as if TimeoutCommit = 0)
|
||||
skip_timeout_commit = false
|
||||
|
||||
# BlockSize
|
||||
max_block_size_txs = 10000
|
||||
max_block_size_bytes = 1
|
||||
|
||||
# EmptyBlocks mode and possible interval between empty blocks in seconds
|
||||
create_empty_blocks = true
|
||||
create_empty_blocks_interval = 0
|
||||
|
||||
# Reactor sleep duration parameters are in milliseconds
|
||||
peer_gossip_sleep_duration = 100
|
||||
peer_query_maj23_sleep_duration = 2000
|
||||
|
||||
##### transactions indexer configuration options #####
|
||||
[tx_index]
|
||||
|
||||
# What indexer to use for transactions
|
||||
#
|
||||
# Options:
|
||||
# 1) "null" (default)
|
||||
# 2) "kv" - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
|
||||
indexer = "kv"
|
||||
|
||||
# Comma-separated list of tags to index (by default the only tag is tx hash)
|
||||
#
|
||||
# It's recommended to index only a subset of tags due to possible memory
|
||||
# bloat. This is, of course, depends on the indexer's DB and the volume of
|
||||
# transactions.
|
||||
index_tags = ""
|
||||
|
||||
# When set to true, tells indexer to index all tags. Note this may be not
|
||||
# desirable (see the comment above). IndexTags has a precedence over
|
||||
# IndexAllTags (i.e. when given both, IndexTags will be indexed).
|
||||
index_all_tags = false
|
docs/examples/node2/config/genesis.json (new file)
@@ -0,0 +1,39 @@
{
  "genesis_time": "0001-01-01T00:00:00Z",
  "chain_id": "test-chain-A2i3OZ",
  "validators": [
    {
      "pub_key": {
        "type": "AC26791624DE60",
        "value": "D+k4AdjnYPWbB9wmad137Bdpo/kAulOoTRQrLy/Qc4k="
      },
      "power": 10,
      "name": ""
    },
    {
      "pub_key": {
        "type": "AC26791624DE60",
        "value": "b56N5GCR1adcVRuENjfKw/mrm2dkhT7wNZXV/SDsKsU="
      },
      "power": 10,
      "name": ""
    },
    {
      "pub_key": {
        "type": "AC26791624DE60",
        "value": "IgZDpJvGA0TAamicA8ircy+RX/BkUlj6DXwM791ywIU="
      },
      "power": 10,
      "name": ""
    },
    {
      "pub_key": {
        "type": "AC26791624DE60",
        "value": "KGAZfxZvIZ7abbeIQ85U1ECG6+I62KSdaH8ulc0+OiU="
      },
      "power": 10,
      "name": ""
    }
  ],
  "app_hash": ""
}

docs/examples/node2/config/node_key.json (new file)
@@ -0,0 +1 @@
{"priv_key":{"type":"954568A3288910","value":"COHZ/Y2cWGWxJNkRwtpQBt5sYvOnb6Gpz0lO46XERRJFBIdSWD5x1UMGRSTmnvW1ec5G4bMdg6zUZKOZD+vVPg=="}}

docs/examples/node2/config/priv_validator.json (new file)
@@ -0,0 +1,14 @@
{
  "address": "F0AA266949FB29ADA0B679C27889ED930BD1BDA1",
  "pub_key": {
    "type": "AC26791624DE60",
    "value": "IgZDpJvGA0TAamicA8ircy+RX/BkUlj6DXwM791ywIU="
  },
  "last_height": 0,
  "last_round": 0,
  "last_step": 0,
  "priv_key": {
    "type": "954568A3288910",
    "value": "khADeZ5K/8u/L99DFaZNRq8V5g+EHWbwfqFjhCrppaAiBkOkm8YDRMBqaJwDyKtzL5Ff8GRSWPoNfAzv3XLAhQ=="
  }
}
|
@@ -1,42 +0,0 @@
{
  "genesis_time": "0001-01-01T00:00:00Z",
  "chain_id": "test-chain-wt7apy",
  "validators": [
    {
      "pub_key": {
        "type": "ed25519",
        "data": "F08446C80A33E10D620E21450821B58D053778528F2B583D423B3E46EC647D30"
      },
      "power": 10,
      "name": "node1"
    },
    {
      "pub_key": {
        "type": "ed25519",
        "data": "A8423F70A9E512643B4B00F7C3701ECAD1F31B0A1FAA45852C41046353B9A07F"
      },
      "power": 10,
      "name": "node2"
    },
    {
      "pub_key": {
        "type": "ed25519",
        "data": "E52EFFAEDFE1D618ECDA71DE3B23592B3612CAABA0C10826E4C3120B2198C29A"
      },
      "power": 10,
      "name": "node3"
    },
    {
      "pub_key": {
        "type": "ed25519",
        "data": "2B8FC09C07955A02998DFE5AF1AAD1C44115ECA7635FF51A867CF4265D347C07"
      },
      "power": 10,
      "name": "node4"
    }
  ],
  "app_hash": ""
}

@@ -1,6 +0,0 @@
{
  "priv_key": {
    "data": "F7BCABA165DFC0DDD50AE563EFB285BAA236EA805D35612504238A36EFA105958756442B1D9F942D7ABD259F2D59671657B6378E9C7194342A7AAA47A66D1E95",
    "type": "ed25519"
  }
}

@@ -1,15 +0,0 @@
{
  "address": "DD6C63A762608A9DDD4A845657743777F63121D6",
  "pub_key": {
    "type": "ed25519",
    "data": "A8423F70A9E512643B4B00F7C3701ECAD1F31B0A1FAA45852C41046353B9A07F"
  },
  "last_height": 0,
  "last_round": 0,
  "last_step": 0,
  "last_signature": null,
  "priv_key": {
    "type": "ed25519",
    "data": "7B0DE666FF5E9B437D284BCE767F612381890C018B93B0A105D2E829A568DA6FA8423F70A9E512643B4B00F7C3701ECAD1F31B0A1FAA45852C41046353B9A07F"
  }
}

@@ -1,15 +0,0 @@
# This is a TOML config file.
# For more information, see https://github.com/toml-lang/toml

proxy_app = "tcp://127.0.0.1:46658"
moniker = "penguin"
fast_sync = true
db_backend = "leveldb"
log_level = "state:info,*:error"

[rpc]
laddr = "tcp://0.0.0.0:46657"

[p2p]
laddr = "tcp://0.0.0.0:46656"
seeds = ""
169 docs/examples/node3/config/config.toml Normal file
@@ -0,0 +1,169 @@
# This is a TOML config file.
# For more information, see https://github.com/toml-lang/toml

##### main base config options #####

# TCP or UNIX socket address of the ABCI application,
# or the name of an ABCI application compiled in with the Tendermint binary
proxy_app = "tcp://127.0.0.1:46658"

# A custom human readable name for this node
moniker = "delta"

# If this node is many blocks behind the tip of the chain, FastSync
# allows it to catch up quickly by downloading blocks in parallel
# and verifying their commits
fast_sync = true

# Database backend: leveldb | memdb
db_backend = "leveldb"

# Database directory
db_path = "data"

# Output level for logging, including package level options
log_level = "main:info,state:info,*:error"

##### additional base config options #####

# Path to the JSON file containing the initial validator set and other meta data
genesis_file = "config/genesis.json"

# Path to the JSON file containing the private key to use as a validator in the consensus protocol
priv_validator_file = "config/priv_validator.json"

# Path to the JSON file containing the private key to use for node authentication in the p2p protocol
node_key_file = "config/node_key.json"

# Mechanism to connect to the ABCI application: socket | grpc
abci = "socket"

# TCP or UNIX socket address for the profiling server to listen on
prof_laddr = ""

# If true, query the ABCI app on connecting to a new peer
# so the app can decide if we should keep the connection or not
filter_peers = false

##### advanced configuration options #####

##### rpc server configuration options #####
[rpc]

# TCP or UNIX socket address for the RPC server to listen on
laddr = "tcp://0.0.0.0:46657"

# TCP or UNIX socket address for the gRPC server to listen on
# NOTE: This server only supports /broadcast_tx_commit
grpc_laddr = ""

# Activate unsafe RPC commands like /dial_seeds and /unsafe_flush_mempool
unsafe = false

##### peer to peer configuration options #####
[p2p]

# Address to listen for incoming connections
laddr = "tcp://0.0.0.0:46656"

# Comma separated list of seed nodes to connect to
seeds = ""

# Comma separated list of nodes to keep persistent connections to
# Do not add private peers to this list if you don't want them advertised
persistent_peers = ""

# Path to address book
addr_book_file = "config/addrbook.json"

# Set true for strict address routability rules
addr_book_strict = true

# Time to wait before flushing messages out on the connection, in ms
flush_throttle_timeout = 100

# Maximum number of peers to connect to
max_num_peers = 50

# Maximum size of a message packet payload, in bytes
max_packet_msg_payload_size = 1024

# Rate at which packets can be sent, in bytes/second
send_rate = 512000

# Rate at which packets can be received, in bytes/second
recv_rate = 512000

# Set true to enable the peer-exchange reactor
pex = true

# Seed mode, in which the node constantly crawls the network and looks for
# peers. If another node asks it for addresses, it responds and disconnects.
#
# Does not work if the peer-exchange reactor is disabled.
seed_mode = false

# Authenticated encryption
auth_enc = true

# Comma separated list of peer IDs to keep private (will not be gossiped to other peers)
private_peer_ids = ""

##### mempool configuration options #####
[mempool]

recheck = true
recheck_empty = true
broadcast = true
wal_dir = "data/mempool.wal"

##### consensus configuration options #####
[consensus]

wal_file = "data/cs.wal/wal"

# All timeouts are in milliseconds
timeout_propose = 3000
timeout_propose_delta = 500
timeout_prevote = 1000
timeout_prevote_delta = 500
timeout_precommit = 1000
timeout_precommit_delta = 500
timeout_commit = 1000

# Make progress as soon as we have all the precommits (as if TimeoutCommit = 0)
skip_timeout_commit = false

# BlockSize
max_block_size_txs = 10000
max_block_size_bytes = 1

# EmptyBlocks mode and possible interval between empty blocks in seconds
create_empty_blocks = true
create_empty_blocks_interval = 0

# Reactor sleep duration parameters are in milliseconds
peer_gossip_sleep_duration = 100
peer_query_maj23_sleep_duration = 2000

##### transactions indexer configuration options #####
[tx_index]

# What indexer to use for transactions
#
# Options:
#   1) "null" (default)
#   2) "kv" - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
indexer = "kv"

# Comma-separated list of tags to index (by default the only tag is tx hash)
#
# It's recommended to index only a subset of tags due to possible memory
# bloat. This, of course, depends on the indexer's DB and the volume of
# transactions.
index_tags = ""

# When set to true, tells the indexer to index all tags. Note this may not be
# desirable (see the comment above). IndexTags has precedence over
# IndexAllTags (i.e. when given both, IndexTags will be indexed).
index_all_tags = false
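As a hedged illustration of how the `timeout_*` values in the `[consensus]` section above are commonly combined: assuming each `*_delta` is added once per additional round (which is how Tendermint's consensus timeouts are usually described; verify against the version you run), the effective propose timeout for a round could be computed like this sketch, which is not code from this repository.

```go
// Illustrative sketch only. Assumes the *_delta values above are added once
// per extra round; this is an assumption, not a quote of the consensus code.
package main

import (
	"fmt"
	"time"
)

// proposeTimeout returns the propose timeout for a given round, in the spirit
// of `timeout_propose + round*timeout_propose_delta`.
func proposeTimeout(round int, timeoutPropose, timeoutProposeDelta time.Duration) time.Duration {
	return timeoutPropose + time.Duration(round)*timeoutProposeDelta
}

func main() {
	// values from the config above: 3000ms base, 500ms delta
	for round := 0; round < 3; round++ {
		fmt.Printf("round %d: %v\n", round, proposeTimeout(round, 3000*time.Millisecond, 500*time.Millisecond))
	}
}
```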
39 docs/examples/node3/config/genesis.json Normal file
@@ -0,0 +1,39 @@
{
  "genesis_time": "0001-01-01T00:00:00Z",
  "chain_id": "test-chain-A2i3OZ",
  "validators": [
    {
      "pub_key": {
        "type": "AC26791624DE60",
        "value": "D+k4AdjnYPWbB9wmad137Bdpo/kAulOoTRQrLy/Qc4k="
      },
      "power": 10,
      "name": ""
    },
    {
      "pub_key": {
        "type": "AC26791624DE60",
        "value": "b56N5GCR1adcVRuENjfKw/mrm2dkhT7wNZXV/SDsKsU="
      },
      "power": 10,
      "name": ""
    },
    {
      "pub_key": {
        "type": "AC26791624DE60",
        "value": "IgZDpJvGA0TAamicA8ircy+RX/BkUlj6DXwM791ywIU="
      },
      "power": 10,
      "name": ""
    },
    {
      "pub_key": {
        "type": "AC26791624DE60",
        "value": "KGAZfxZvIZ7abbeIQ85U1ECG6+I62KSdaH8ulc0+OiU="
      },
      "power": 10,
      "name": ""
    }
  ],
  "app_hash": ""
}

1 docs/examples/node3/config/node_key.json Normal file
@@ -0,0 +1 @@
{"priv_key":{"type":"954568A3288910","value":"9Y9xp/tUJJ6pHTF5SUV0bGKYSdVbFtMHu+Lr8S0JBSZAwneaejnfOEU1LMKOnQ07skrDUaJcj5di3jAyjxJzqg=="}}

14 docs/examples/node3/config/priv_validator.json Normal file
@@ -0,0 +1,14 @@
{
  "address": "9A1A6914EB5F4FF0269C7EEEE627C27310CC64F9",
  "pub_key": {
    "type": "AC26791624DE60",
    "value": "KGAZfxZvIZ7abbeIQ85U1ECG6+I62KSdaH8ulc0+OiU="
  },
  "last_height": 0,
  "last_round": 0,
  "last_step": 0,
  "priv_key": {
    "type": "954568A3288910",
    "value": "jb52LZ5gp+eQ8nJlFK1z06nBMp1gD8ICmyzdM1icGOgoYBl/Fm8hntptt4hDzlTUQIbr4jrYpJ1ofy6VzT46JQ=="
  }
}
@@ -1,42 +0,0 @@
{
  "genesis_time": "0001-01-01T00:00:00Z",
  "chain_id": "test-chain-wt7apy",
  "validators": [
    {
      "pub_key": {
        "type": "ed25519",
        "data": "F08446C80A33E10D620E21450821B58D053778528F2B583D423B3E46EC647D30"
      },
      "power": 10,
      "name": "node1"
    },
    {
      "pub_key": {
        "type": "ed25519",
        "data": "A8423F70A9E512643B4B00F7C3701ECAD1F31B0A1FAA45852C41046353B9A07F"
      },
      "power": 10,
      "name": "node2"
    },
    {
      "pub_key": {
        "type": "ed25519",
        "data": "E52EFFAEDFE1D618ECDA71DE3B23592B3612CAABA0C10826E4C3120B2198C29A"
      },
      "power": 10,
      "name": "node3"
    },
    {
      "pub_key": {
        "type": "ed25519",
        "data": "2B8FC09C07955A02998DFE5AF1AAD1C44115ECA7635FF51A867CF4265D347C07"
      },
      "power": 10,
      "name": "node4"
    }
  ],
  "app_hash": ""
}

@@ -1,6 +0,0 @@
{
  "priv_key": {
    "data": "95136FCC97E4446B3141EDF9841078107ECE755E99925D79CCBF91085492680B3CA1034D9917DF1DED4E4AB2D9BC225919F6CB2176F210D2368697CC339DF4E7",
    "type": "ed25519"
  }
}

@@ -1,15 +0,0 @@
{
  "address": "6D6A1E313B407B5474106CA8759C976B777AB659",
  "pub_key": {
    "type": "ed25519",
    "data": "E52EFFAEDFE1D618ECDA71DE3B23592B3612CAABA0C10826E4C3120B2198C29A"
  },
  "last_height": 0,
  "last_round": 0,
  "last_step": 0,
  "last_signature": null,
  "priv_key": {
    "type": "ed25519",
    "data": "622432A370111A5C25CFE121E163FE709C9D5C95F551EDBD7A2C69A8545C9B76E52EFFAEDFE1D618ECDA71DE3B23592B3612CAABA0C10826E4C3120B2198C29A"
  }
}

@@ -1,15 +0,0 @@
# This is a TOML config file.
# For more information, see https://github.com/toml-lang/toml

proxy_app = "tcp://127.0.0.1:46658"
moniker = "penguin"
fast_sync = true
db_backend = "leveldb"
log_level = "state:info,*:error"

[rpc]
laddr = "tcp://0.0.0.0:46657"

[p2p]
laddr = "tcp://0.0.0.0:46656"
seeds = ""

@@ -1,42 +0,0 @@
{
  "genesis_time": "0001-01-01T00:00:00Z",
  "chain_id": "test-chain-wt7apy",
  "validators": [
    {
      "pub_key": {
        "type": "ed25519",
        "data": "F08446C80A33E10D620E21450821B58D053778528F2B583D423B3E46EC647D30"
      },
      "power": 10,
      "name": "node1"
    },
    {
      "pub_key": {
        "type": "ed25519",
        "data": "A8423F70A9E512643B4B00F7C3701ECAD1F31B0A1FAA45852C41046353B9A07F"
      },
      "power": 10,
      "name": "node2"
    },
    {
      "pub_key": {
        "type": "ed25519",
        "data": "E52EFFAEDFE1D618ECDA71DE3B23592B3612CAABA0C10826E4C3120B2198C29A"
      },
      "power": 10,
      "name": "node3"
    },
    {
      "pub_key": {
        "type": "ed25519",
        "data": "2B8FC09C07955A02998DFE5AF1AAD1C44115ECA7635FF51A867CF4265D347C07"
      },
      "power": 10,
      "name": "node4"
    }
  ],
  "app_hash": ""
}

@@ -1,6 +0,0 @@
{
  "priv_key": {
    "data": "8895D6C9A1B46AB83A8E2BAE2121B8C3E245B9E9126EBD797FEAC5058285F2F64FDE2E8182C88AD5185A49D837C581465D57BD478C41865A66D7D9742D8AEF57",
    "type": "ed25519"
  }
}

@@ -1,15 +0,0 @@
{
  "address": "829A9663611D3DD88A3D84EA0249679D650A0755",
  "pub_key": {
    "type": "ed25519",
    "data": "2B8FC09C07955A02998DFE5AF1AAD1C44115ECA7635FF51A867CF4265D347C07"
  },
  "last_height": 0,
  "last_round": 0,
  "last_step": 0,
  "last_signature": null,
  "priv_key": {
    "type": "ed25519",
    "data": "0A604D1C9AE94A50150BF39E603239092F9392E4773F4D8F4AC1D86E6438E89E2B8FC09C07955A02998DFE5AF1AAD1C44115ECA7635FF51A867CF4265D347C07"
  }
}
@@ -40,10 +40,8 @@ Tendermint Tools
   :maxdepth: 2

   deploy-testnets.rst
   tools/ansible.rst
   terraform-and-ansible.rst
   tools/docker.rst
   tools/mintnet-kubernetes.rst
   tools/terraform-digitalocean.rst
   tools/benchmarking.rst
   tools/monitoring.rst

@@ -67,6 +65,7 @@ Tendermint 201

   specification.rst
   determinism.rst
   transactional-semantics.rst

* For a deeper dive, see `this thesis <https://atrium.lib.uoguelph.ca/xmlui/handle/10214/9769>`__.
* There is also the `original whitepaper <https://tendermint.com/static/docs/tendermint.pdf>`__, though it is now quite outdated.
@@ -4,53 +4,48 @@ Install Tendermint

From Binary
-----------

To download pre-built binaries, see the `Download page <https://tendermint.com/downloads>`__.
To download pre-built binaries, see the `releases page <https://github.com/tendermint/tendermint/releases>`__.

From Source
-----------

You'll need ``go``, maybe `dep <https://github.com/golang/dep>`__, and the Tendermint source code.

Install Go
^^^^^^^^^^

Make sure you have `installed Go <https://golang.org/doc/install>`__ and
set the ``GOPATH``. You should also put ``GOPATH/bin`` on your ``PATH``.
You'll need ``go`` `installed <https://golang.org/doc/install>`__ and the required
`environment variables set <https://github.com/tendermint/tendermint/wiki/Setting-GOPATH>`__

Get Source Code
^^^^^^^^^^^^^^^

You should be able to install the latest with a simple
::

    mkdir -p $GOPATH/src/github.com/tendermint
    cd $GOPATH/src/github.com/tendermint
    git clone https://github.com/tendermint/tendermint.git
    cd tendermint

Get Tools & Dependencies
^^^^^^^^^^^^^^^^^^^^^^^^

::

    go get github.com/tendermint/tendermint/cmd/tendermint

Run ``tendermint --help`` and ``tendermint version`` to ensure your
installation worked.

If the installation failed, a dependency may have been updated and become
incompatible with the latest Tendermint master branch. We solve this
using the ``dep`` tool for dependency management.

First, install ``dep``:

::

    cd $GOPATH/src/github.com/tendermint/tendermint
    make get_tools
    make get_vendor_deps

Now we can fetch the correct versions of each dependency by running:
Compile
^^^^^^^

::

    make get_vendor_deps
    make install

Note that even though ``go get`` originally failed, the repository was
still cloned to the correct location in the ``$GOPATH``.
to put the binary in ``$GOPATH/bin`` or use:

The latest Tendermint Core version is now installed.
::

    make build

to put the binary in ``./build``.

The latest ``tendermint version`` is now installed.

Reinstall
---------
@@ -86,20 +81,6 @@ do, use ``dep``, as above:
Since the third option just uses ``dep`` right away, it should always
work.

Troubleshooting
---------------

If ``go get`` failing bothers you, fetch the code using ``git``:

::

    mkdir -p $GOPATH/src/github.com/tendermint
    git clone https://github.com/tendermint/tendermint $GOPATH/src/github.com/tendermint/tendermint
    cd $GOPATH/src/github.com/tendermint/tendermint
    make get_tools
    make get_vendor_deps
    make install

Run
^^^
76 docs/spec/README.md Normal file
@@ -0,0 +1,76 @@
# Tendermint Specification

This is a markdown specification of the Tendermint blockchain.
It defines the base data structures, how they are validated,
and how they are communicated over the network.

If you find discrepancies between the spec and the code that
do not have an associated issue or pull request on github,
please submit them to our [bug bounty](https://tendermint.com/security)!

## Contents

- [Overview](#overview)

### Data Structures

- [Encoding and Digests](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/encoding.md)
- [Blockchain](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/blockchain.md)
- [State](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/state.md)

### Consensus Protocol

- TODO

### P2P and Network Protocols

- [The Base P2P Layer](https://github.com/tendermint/tendermint/tree/master/docs/spec/p2p): multiplex the protocols ("reactors") on authenticated and encrypted TCP connections
- [Peer Exchange (PEX)](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/pex): gossip known peer addresses so peers can find each other
- [Block Sync](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/block_sync): gossip blocks so peers can catch up quickly
- [Consensus](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/consensus): gossip votes and block parts so new blocks can be committed
- [Mempool](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/mempool): gossip transactions so they get included in blocks
- Evidence: TODO

### More

- Light Client: TODO
- Persistence: TODO

## Overview

Tendermint provides Byzantine Fault Tolerant State Machine Replication using
hash-linked batches of transactions. Such transaction batches are called "blocks".
Hence, Tendermint defines a "blockchain".

Each block in Tendermint has a unique index - its Height.
A block at `Height == H` can only be committed *after* the
block at `Height == H-1`.
Each block is committed by a known set of weighted Validators.
Membership and weighting within this set may change over time.
Tendermint guarantees the safety and liveness of the blockchain
so long as less than 1/3 of the total weight of the Validator set
is malicious or faulty.

A commit in Tendermint is a set of signed messages from more than 2/3 of
the total weight of the current Validator set. Validators take turns proposing
blocks and voting on them. Once enough votes are received, the block is considered
committed. These votes are included in the *next* block as proof that the previous block
was committed - they cannot be included in the current block, as that block has already been
created.

Once a block is committed, it can be executed against an application.
The application returns results for each of the transactions in the block.
The application can also return changes to be made to the validator set,
as well as a cryptographic digest of its latest state.

Tendermint is designed to enable efficient verification and authentication
of the latest state of the blockchain. To achieve this, it embeds
cryptographic commitments to certain information in the block "header".
This information includes the contents of the block (eg. the transactions),
the validator set committing the block, as well as the various results returned by the application.
Note, however, that block execution only occurs *after* a block is committed.
Thus, application results can only be included in the *next* block.

Also note that information like the transaction results and the validator set are never
directly included in the block - only their cryptographic digests (Merkle roots) are.
Hence, verification of a block requires a separate data structure to store this information.
We call this the `State`. Block verification also requires access to the previous block.
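To make the "more than 2/3 of the total weight" rule described above concrete, here is a minimal sketch (not code from this repository; the Validator type and signed-vote map are hypothetical stand-ins for the real structures):

```go
// Illustrative sketch of the ">2/3 of total voting power" check described above.
package main

import "fmt"

type Validator struct {
	Address string
	Power   int64
}

// HasTwoThirdsMajority reports whether the voting power of the validators
// that signed (signed[addr] == true) strictly exceeds 2/3 of the total power.
func HasTwoThirdsMajority(vals []Validator, signed map[string]bool) bool {
	var total, committed int64
	for _, v := range vals {
		total += v.Power
		if signed[v.Address] {
			committed += v.Power
		}
	}
	// strict inequality: 3*committed > 2*total  <=>  committed > (2/3)*total
	return 3*committed > 2*total
}

func main() {
	vals := []Validator{{"a", 10}, {"b", 10}, {"c", 10}, {"d", 10}}
	signed := map[string]bool{"a": true, "b": true, "c": true}
	fmt.Println(HasTwoThirdsMajority(vals, signed)) // true: 30 of 40 > 2/3
}
```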
@@ -162,7 +162,7 @@ We refer to certain globally available objects:
and `state` keeps track of the validator set, the consensus parameters
and other results from the application.
Elements of an object are accessed as expected,
ie. `block.Header`. See [here](state.md) for the definition of `state`.
ie. `block.Header`. See [here](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/state.md) for the definition of `state`.

### Header
@@ -2,7 +2,7 @@

## Amino

Tendermint uses the Protobuf3 derrivative [Amino]() for all data structures.
Tendermint uses the Protobuf3 derivative [Amino](https://github.com/tendermint/go-amino) for all data structures.
Think of Amino as an object-oriented Protobuf3 with native JSON support.
The goal of the Amino encoding protocol is to bring parity between application
logic objects and persistence objects.
@@ -51,8 +51,8 @@ Notice that when encoding byte-arrays, the length of the byte-array is appended
to the PrefixBytes. Thus the encoding of a byte array becomes `<PrefixBytes>
<Length> <ByteArray>`

(NOTE: the remainder of this section on Public Key Cryptography can be generated
from [this script](./scripts/crypto.go))
NOTE: the remainder of this section on Public Key Cryptography can be generated
from [this script](https://github.com/tendermint/tendermint/blob/master/docs/spec/scripts/crypto.go)

### PubKeyEd25519

@@ -290,6 +290,7 @@ Amino also supports JSON encoding - registered types are simply encoded as:
  "type": "<DisfixBytes>",
  "value": <JSON>
}
```

For instance, an ED25519 PubKey would look like:
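As a rough illustration of the `<PrefixBytes> <Length> <ByteArray>` layout described in the hunk above (this is not go-amino itself; the 4-byte prefix and uvarint length below are simplified stand-ins), the sketch concatenates a type prefix, a length, and the raw bytes. With the prefix bytes `16 24 DE 62` and a 32-byte key, the output happens to start with `1624DE6220`, matching the Ed25519 prefix quoted later in this diff.

```go
// Illustrative sketch only - NOT the go-amino implementation.
package main

import (
	"encoding/binary"
	"fmt"
)

// encodeByteArray prepends a (hypothetical) 4-byte type prefix and a uvarint
// length to the raw bytes, mirroring the layout sketched in the spec text.
func encodeByteArray(prefix [4]byte, bz []byte) []byte {
	out := append([]byte{}, prefix[:]...)
	lenBuf := make([]byte, binary.MaxVarintLen64)
	n := binary.PutUvarint(lenBuf, uint64(len(bz)))
	out = append(out, lenBuf[:n]...)
	return append(out, bz...)
}

func main() {
	pub := make([]byte, 32) // e.g. a 32-byte Ed25519 public key
	fmt.Printf("%X\n", encodeByteArray([4]byte{0x16, 0x24, 0xDE, 0x62}, pub))
}
```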
@@ -77,5 +77,4 @@ func TotalVotingPower(vals []Validators) int64{

### ConsensusParams

TODO:

TODO
@@ -58,7 +58,7 @@ message Validator {
```

The `pub_key` is the Amino encoded public key for the validator. For details on
Amino encoded public keys, see the [section of the encoding spec](./encoding.md#public-key-cryptography).
Amino encoded public keys, see the [section of the encoding spec](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/encoding.md#public-key-cryptography).

For Ed25519 pubkeys, the Amino prefix is always "1624DE6220". For example, the 32-byte Ed25519 pubkey
`76852933A4686A721442E931A8415F62F5F1AEDF4910F1F252FB393F74C40C85` would be
@@ -121,7 +121,6 @@ stateBlockHeight = height of the last block for which Tendermint completed all
    block processing and saved all ABCI results to disk
appBlockHeight = height of the last block for which the ABCI app successfully
    completed Commit

```

Note we always have `storeBlockHeight >= stateBlockHeight` and `storeBlockHeight >= appBlockHeight`
@@ -165,4 +164,3 @@ If `storeBlockHeight == stateBlockHeight+1`
    If appBlockHeight == storeBlockHeight {
        update the state using the saved ABCI responses but dont run the block against the real app.
        This happens if we crashed after the app finished Commit but before Tendermint saved the state.
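A schematic Go sketch of the recovery branch quoted above; the types and the helper below are hypothetical stand-ins for the real handshake/replay code, included only to make the height comparison explicit.

```go
// Illustrative sketch only; the helper is a stub standing in for the real
// "update the state using the saved ABCI responses" step.
package main

import "fmt"

func updateStateFromSavedABCIResponses(height int64) {
	fmt.Printf("replaying saved ABCI responses for height %d (no app re-execution)\n", height)
}

// decideReplay mirrors the branch quoted above: when the block store is one
// block ahead of the state and the app already committed that block, we only
// need to update state from the saved responses.
func decideReplay(storeBlockHeight, stateBlockHeight, appBlockHeight int64) {
	if storeBlockHeight == stateBlockHeight+1 && appBlockHeight == storeBlockHeight {
		updateStateFromSavedABCIResponses(storeBlockHeight)
	}
}

func main() {
	decideReplay(11, 10, 11)
}
```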
@@ -12,7 +12,7 @@ Seeds should operate full nodes with the PEX reactor in a "crawler" mode
that continuously explores to validate the availability of peers.

Seeds should only respond with some top percentile of the best peers it knows about.
See [the peer-exchange docs](/docs/specification/new-spec/reactors/pex/pex.md) for details on peer quality.
See [the peer-exchange docs](https://github.com/tendermint/tendermint/blob/master/docs/spec/reactors/pex/pex.md) for details on peer quality.

## New Full Node
@@ -2,7 +2,7 @@

This document explains how Tendermint Peers are identified and how they connect to one another.

For details on peer discovery, see the [peer exchange (PEX) reactor doc](/docs/specification/new-spec/reactors/pex/pex.md).
For details on peer discovery, see the [peer exchange (PEX) reactor doc](https://github.com/tendermint/tendermint/blob/master/docs/spec/reactors/pex/pex.md).

## Peer Identity
@@ -16,7 +16,7 @@ explained in a forthcoming document.
For efficiency reasons, validators in the Tendermint consensus protocol do not agree directly on the
block, as the block size is big, i.e., they don't embed the block inside `Proposal` and
`VoteMessage`. Instead, they reach agreement on the `BlockID` (see the `BlockID` definition in the
[Blockchain](blockchain.md) section) that uniquely identifies each block. The block itself is
[Blockchain](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/blockchain.md#blockid) section) that uniquely identifies each block. The block itself is
disseminated to validator processes using a peer-to-peer gossiping protocol. It starts by having a
proposer first split a block into a number of block parts, which are then gossiped between
processes using `BlockPartMessage`.
@@ -69,7 +69,7 @@ BlockID contains PartSetHeader.
## VoteMessage

VoteMessage is sent to vote for some block (or to inform others that a process does not vote in the
current round). Vote is defined in the [Blockchain](blockchain.md) section and contains the validator's
current round). Vote is defined in the [Blockchain](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/blockchain.md#blockid) section and contains the validator's
information (validator address and index), the height and round for which the vote is sent, the vote type,
the blockID if the process votes for some block (`nil` otherwise), and a timestamp when the vote is sent. The
message is signed by the validator private key.
@@ -44,4 +44,3 @@ p0, p0, p0, p0, p0, p0, p0, p0, p0, p0, p0, p0, p0, p1, p0, p0, p0, p0, p0, etc

This basically means that almost all rounds have the same proposer. But in this case, the process p0 has enough
voting power to decide whatever it wants anyway, so the fact that it coordinates almost all rounds seems correct.
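For context on the sequence above, here is a hedged sketch of a weighted round-robin proposer rotation of the kind being discussed: each round every validator's accumulator grows by its power, the largest accumulator proposes, and that validator is then charged the total power. This mirrors the commonly described scheme, not the exact code in this repository, and the powers below are made-up; with a lopsided distribution the output is mostly p0 with an occasional p1, the same qualitative pattern as the sequence quoted above.

```go
// Illustrative sketch of weighted round-robin proposer selection.
package main

import "fmt"

type val struct {
	name  string
	power int64
	accum int64
}

// nextProposer advances every validator's accumulator by its power, picks the
// largest accumulator as proposer, then charges it the total power.
func nextProposer(vals []*val) *val {
	var total int64
	for _, v := range vals {
		v.accum += v.power
		total += v.power
	}
	proposer := vals[0]
	for _, v := range vals[1:] {
		if v.accum > proposer.accum {
			proposer = v
		}
	}
	proposer.accum -= total
	return proposer
}

func main() {
	vals := []*val{{name: "p0", power: 10}, {name: "p1", power: 1}}
	for i := 0; i < 13; i++ {
		fmt.Print(nextProposer(vals).name, " ")
	}
	fmt.Println()
}
```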
@@ -32,6 +32,7 @@ wait before returning (sync makes sure CheckTx passes, commit
makes sure it was included in a signed block).

Request (`POST http://gaia.zone:46657/`):

```json
{
  "id": "",
@@ -43,8 +44,8 @@ Request (`POST http://gaia.zone:46657/`):
}
```

Response:

```json
{
  "error": "",
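A hedged sketch of issuing such a request from Go with the standard library. The URL mirrors the snippet above; the exact params shape beyond `id` and `method` is an assumption here (the hunk is truncated), so check your node's RPC documentation for the precise fields.

```go
// Illustrative client sketch only. The request body fields beyond "id",
// "jsonrpc", "method" and "params" are assumptions, not a definitive client.
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	body := []byte(`{"jsonrpc":"2.0","id":"","method":"broadcast_tx_commit","params":{"tx":"AQID"}}`)
	resp, err := http.Post("http://gaia.zone:46657/", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(out)) // expect an object with "error" and "result" fields, as shown above
}
```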
@@ -117,7 +117,7 @@ current, past, and rate-of-change data to inform peer quality.
While a PID trust metric has been implemented, it remains for future work
to use it in the PEX.

See the [trustmetric](../../../architecture/adr-006-trust-metric.md)
and [trustmetric usage](../../../architecture/adr-007-trust-metric-usage.md)
See the [trustmetric](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-006-trust-metric.md)
and [trustmetric usage](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-007-trust-metric-usage.md)
architecture docs for more details.
@@ -136,6 +136,12 @@ like the file below, however, double check by inspecting the
broadcast = true
wal_dir = "data/mempool.wal"

# size of the mempool
size = 100000

# size of the cache (used to filter transactions we saw earlier)
cache_size = 100000

##### consensus configuration options #####
[consensus]
@@ -38,6 +38,7 @@ Recovering from data corruption can be hard and time-consuming. Here are two app

1) Delete the WAL file and restart Tendermint. It will attempt to sync with other peers.
2) Try to repair the WAL file manually:

   1. Create a backup of the corrupted WAL file:

   .. code:: bash
@@ -1,71 +1 @@
# Tendermint Specification

This is a markdown specification of the Tendermint blockchain.
It defines the base data structures, how they are validated,
and how they are communicated over the network.

If you find discrepancies between the spec and the code that
do not have an associated issue or pull request on github,
please submit them to our [bug bounty](https://tendermint.com/security)!

## Contents

### Data Structures

- [Overview](#overview)
- [Encoding and Digests](encoding.md)
- [Blockchain](blockchain.md)
- [State](state.md)

### P2P and Network Protocols

- [The Base P2P Layer](p2p): multiplex the protocols ("reactors") on authenticated and encrypted TCP connections
- [Peer Exchange (PEX)](reactors/pex): gossip known peer addresses so peers can find each other
- [Block Sync](reactors/block_sync): gossip blocks so peers can catch up quickly
- [Consensus](reactors/consensus): gossip votes and block parts so new blocks can be committed
- [Mempool](reactors/mempool): gossip transactions so they get included in blocks
- Evidence: TODO

### More

- Light Client: TODO
- Persistence: TODO

## Overview

Tendermint provides Byzantine Fault Tolerant State Machine Replication using
hash-linked batches of transactions. Such transaction batches are called "blocks".
Hence, Tendermint defines a "blockchain".

Each block in Tendermint has a unique index - its Height.
A block at `Height == H` can only be committed *after* the
block at `Height == H-1`.
Each block is committed by a known set of weighted Validators.
Membership and weighting within this set may change over time.
Tendermint guarantees the safety and liveness of the blockchain
so long as less than 1/3 of the total weight of the Validator set
is malicious or faulty.

A commit in Tendermint is a set of signed messages from more than 2/3 of
the total weight of the current Validator set. Validators take turns proposing
blocks and voting on them. Once enough votes are received, the block is considered
committed. These votes are included in the *next* block as proof that the previous block
was committed - they cannot be included in the current block, as that block has already been
created.

Once a block is committed, it can be executed against an application.
The application returns results for each of the transactions in the block.
The application can also return changes to be made to the validator set,
as well as a cryptographic digest of its latest state.

Tendermint is designed to enable efficient verification and authentication
of the latest state of the blockchain. To achieve this, it embeds
cryptographic commitments to certain information in the block "header".
This information includes the contents of the block (eg. the transactions),
the validator set committing the block, as well as the various results returned by the application.
Note, however, that block execution only occurs *after* a block is committed.
Thus, application results can only be included in the *next* block.

Also note that information like the transaction results and the validator set are never
directly included in the block - only their cryptographic digests (Merkle roots) are.
Hence, verification of a block requires a separate data structure to store this information.
We call this the `State`. Block verification also requires access to the previous block.
Spec moved to [docs/spec](https://github.com/tendermint/tendermint/tree/master/docs/spec).
@@ -1,11 +0,0 @@
# Mempool Specification

This package contains documents specifying the functionality
of the mempool module.

Components:

* [Config](./config.md) - how to configure it
* [External Messages](./messages.md) - The messages we accept over p2p and rpc interfaces
* [Functionality](./functionality.md) - high-level description of the functionality it provides
* [Concurrency Model](./concurrency.md) - What guarantees we provide, what locks we require.
138 docs/terraform-and-ansible.rst Normal file
@@ -0,0 +1,138 @@
Terraform & Ansible
===================

Automated deployments are done using `Terraform <https://www.terraform.io/>`__ to create servers on Digital Ocean, then
`Ansible <http://www.ansible.com/>`__ to create and manage testnets on those servers.

Install
-------

NOTE: see the `integration bash script <https://github.com/tendermint/tendermint/blob/develop/networks/remote/integration.sh>`__ that can be run on a fresh DO droplet and will automatically spin up a 4 node testnet. The script more or less does everything described below.

- Install `Terraform <https://www.terraform.io/downloads.html>`__ and `Ansible <http://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html>`__ on a Linux machine.
- Create a `DigitalOcean API token <https://cloud.digitalocean.com/settings/api/tokens>`__ with read and write capability.
- Install the python dopy package (``pip install dopy``)
- Create SSH keys (``ssh-keygen``)
- Set environment variables:

::

    export DO_API_TOKEN="abcdef01234567890abcdef01234567890"
    export SSH_KEY_FILE="$HOME/.ssh/id_rsa.pub"

These will be used by both ``terraform`` and ``ansible``.

Terraform
---------

This step will create four Digital Ocean droplets. First, go to the correct directory:

::

    cd $GOPATH/src/github.com/tendermint/tendermint/networks/remote/terraform

then:

::

    terraform init
    terraform apply -var DO_API_TOKEN="$DO_API_TOKEN" -var SSH_KEY_FILE="$SSH_KEY_FILE"

and you will get a list of IP addresses that belong to your droplets.

With the droplets created and running, let's set up Ansible.

Using Ansible
-------------

The playbooks in `the ansible directory <https://github.com/tendermint/tendermint/tree/master/networks/remote/ansible>`__
run ansible roles to configure the sentry node architecture. You must switch to this directory to run ansible (``cd $GOPATH/src/github.com/tendermint/tendermint/networks/remote/ansible``).

There are several roles that are self-explanatory:

First, we configure our droplets by specifying the paths for tendermint (``BINARY``) and the node files (``CONFIGDIR``). The latter expects any number of directories named ``node0, node1, ...`` and so on (equal to the number of droplets created). For this example, we use pre-created files from `this directory <https://github.com/tendermint/tendermint/tree/master/docs/examples>`__. To create your own files, use either the ``tendermint testnet`` command or review `manual deployments <./deploy-testnets.html>`__.

Here's the command to run:

::

    ansible-playbook -i inventory/digital_ocean.py -l sentrynet config.yml -e BINARY=$GOPATH/src/github.com/tendermint/tendermint/build/tendermint -e CONFIGDIR=$GOPATH/src/github.com/tendermint/tendermint/docs/examples

Voila! All your droplets now have the ``tendermint`` binary and required configuration files to run a testnet.

Next, we run the install role:

::

    ansible-playbook -i inventory/digital_ocean.py -l sentrynet install.yml

which, as you'll see below, executes ``tendermint node --proxy_app=kvstore`` on all droplets. Although we'll soon be modifying this role and running it again, this first execution allows us to get each ``node_info.id`` that corresponds to each ``node_info.listen_addr``. (This part will be automated in the future). In your browser (or using ``curl``), for every droplet, go to IP:46657/status and note the two just mentioned ``node_info`` fields. Notice that blocks aren't being created (``latest_block_height`` should be zero and not increasing).

Next, open ``roles/install/templates/systemd.service.j2`` and look for the line ``ExecStart`` which should look something like:

::

    ExecStart=/usr/bin/tendermint node --proxy_app=kvstore

and add the ``--p2p.persistent_peers`` flag with the relevant information for each node. The resulting file should look something like:

::

    [Unit]
    Description={{service}}
    Requires=network-online.target
    After=network-online.target

    [Service]
    Restart=on-failure
    User={{service}}
    Group={{service}}
    PermissionsStartOnly=true
    ExecStart=/usr/bin/tendermint node --proxy_app=kvstore --p2p.persistent_peers=167b80242c300bf0ccfb3ced3dec60dc2a81776e@165.227.41.206:46656,3c7a5920811550c04bf7a0b2f1e02ab52317b5e6@165.227.43.146:46656,303a1a4312c30525c99ba66522dd81cca56a361a@159.89.115.32:46656,b686c2a7f4b1b46dca96af3a0f31a6a7beae0be4@159.89.119.125:46656
    ExecReload=/bin/kill -HUP $MAINPID
    KillSignal=SIGTERM

    [Install]
    WantedBy=multi-user.target

Then, stop the nodes:

::

    ansible-playbook -i inventory/digital_ocean.py -l sentrynet stop.yml

Finally, we run the install role again:

::

    ansible-playbook -i inventory/digital_ocean.py -l sentrynet install.yml

to re-run ``tendermint node`` with the new flag, on all droplets. The ``latest_block_hash`` should now be changing and ``latest_block_height`` increasing. Your testnet is now up and running :)

Peek at the logs with the status role:

::

    ansible-playbook -i inventory/digital_ocean.py -l sentrynet status.yml

Logging
-------

The crudest way is the status role described above. You can also ship logs to Logz.io, an Elastic stack (Elasticsearch, Logstash and Kibana) service provider. You can set up your nodes to log there automatically. Create an account and get your API key from the notes on `this page <https://app.logz.io/#/dashboard/data-sources/Filebeat>`__, then:

::

    yum install systemd-devel || echo "This will only work on RHEL-based systems."
    apt-get install libsystemd-dev || echo "This will only work on Debian-based systems."

    go get github.com/mheese/journalbeat
    ansible-playbook -i inventory/digital_ocean.py -l sentrynet logzio.yml -e LOGZIO_TOKEN=ABCDEFGHIJKLMNOPQRSTUVWXYZ012345

Cleanup
-------

To remove your droplets, run:

::

    terraform destroy -var DO_API_TOKEN="$DO_API_TOKEN" -var SSH_KEY_FILE="$SSH_KEY_FILE"
27 docs/transactional-semantics.rst Normal file
@@ -0,0 +1,27 @@
Transactional Semantics
=======================

In `Using
Tendermint <./using-tendermint.html#broadcast-api>`__ we
discussed different API endpoints for sending transactions and
differences between them.

What we have not yet covered is transactional semantics.

When you send a transaction using one of the available methods, it
first goes to the mempool. Currently, it does not provide strong
guarantees like "if the transaction were accepted, it would be
eventually included in a block (given CheckTx passes)."

For instance, a tx could enter the mempool, but before it can be sent
to peers, the node crashes.

We are planning to provide such guarantees by using a WAL and
replaying transactions (see
`GH#248 <https://github.com/tendermint/tendermint/issues/248>`__), but
it's non-trivial to do this all efficiently.

The temporary solution is for clients to monitor the node and resubmit
transaction(s) and/or send them to more nodes at once, so the
probability of all of them crashing at the same time and losing the
msg decreases substantially.
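A minimal sketch of the client-side mitigation described above: submit the same transaction to several nodes and retry on failure. The node URLs and the ``sendTx`` helper here are hypothetical stand-ins, not a real Tendermint client.

```go
// Illustrative sketch only: resubmit/fan-out a tx to several nodes so a single
// crashing node is less likely to lose it. sendTx is a hypothetical stand-in
// for an RPC call such as broadcast_tx_sync.
package main

import (
	"errors"
	"fmt"
	"time"
)

func sendTx(node string, tx []byte) error {
	if node == "" {
		return errors.New("no node address")
	}
	fmt.Printf("sent %d-byte tx to %s\n", len(tx), node)
	return nil
}

// submitWithRetries sends tx to every node, retrying a few times per node.
func submitWithRetries(nodes []string, tx []byte, retries int) {
	for _, n := range nodes {
		for i := 0; i <= retries; i++ {
			if err := sendTx(n, tx); err == nil {
				break
			}
			time.Sleep(time.Second)
		}
	}
}

func main() {
	submitWithRetries([]string{"http://node0:46657", "http://node1:46657"}, []byte("tx-bytes"), 2)
}
```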
@@ -28,8 +28,11 @@ genesis file (``genesis.json``) containing the associated public key,
in ``$TMHOME/config``.
This is all that's necessary to run a local testnet with one validator.

For more elaborate initialization, see our `testnet deployment
tool <https://github.com/tendermint/tools/tree/master/mintnet-kubernetes>`__.
For more elaborate initialization, see the ``testnet`` command:

::

    tendermint testnet --help

Run
---
@@ -214,7 +217,7 @@ Broadcast API
Earlier, we used the ``broadcast_tx_commit`` endpoint to send a
transaction. When a transaction is sent to a Tendermint node, it will
run via ``CheckTx`` against the application. If it passes ``CheckTx``,
it will be included in the mempool, broadcast to other peers, and
it will be included in the mempool, broadcasted to other peers, and
eventually included in a block.

Since there are multiple phases to processing a transaction, we offer
@@ -8,6 +8,7 @@ import (
    "github.com/tendermint/go-amino"
    "github.com/tendermint/tmlibs/log"

    cstypes "github.com/tendermint/tendermint/consensus/types"
    "github.com/tendermint/tendermint/p2p"
    "github.com/tendermint/tendermint/types"
)
@@ -118,21 +119,48 @@ func (evR *EvidenceReactor) broadcastRoutine() {
        case evidence := <-evR.evpool.EvidenceChan():
            // broadcast some new evidence
            msg := &EvidenceListMessage{[]types.Evidence{evidence}}
            evR.Switch.Broadcast(EvidenceChannel, cdc.MustMarshalBinaryBare(msg))
            evR.broadcastEvidenceListMsg(msg)

            // TODO: Broadcast runs asynchronously, so this should wait on the successChan
            // in another routine before marking to be proper.
            // TODO: the broadcast here is just doing TrySend.
            // We should make sure the send succeeds before marking broadcasted.
            evR.evpool.evidenceStore.MarkEvidenceAsBroadcasted(evidence)
        case <-ticker.C:
            // broadcast all pending evidence
            msg := &EvidenceListMessage{evR.evpool.PendingEvidence()}
            evR.Switch.Broadcast(EvidenceChannel, cdc.MustMarshalBinaryBare(msg))
            evR.broadcastEvidenceListMsg(msg)
        case <-evR.Quit():
            return
        }
    }
}

func (evR *EvidenceReactor) broadcastEvidenceListMsg(msg *EvidenceListMessage) {
    // NOTE: we dont send evidence to peers higher than their height,
    // because they can't validate it (don't have validators from the height).
    // So, for now, only send the `msg` to peers synced to the highest height in the list.
    // TODO: send each peer all the evidence below its current height -
    // might require a routine per peer, like the mempool.

    var maxHeight int64
    for _, ev := range msg.Evidence {
        if ev.Height() > maxHeight {
            maxHeight = ev.Height()
        }
    }

    for _, peer := range evR.Switch.Peers().List() {
        ps := peer.Get(types.PeerStateKey).(PeerState)
        rs := ps.GetRoundState()
        if rs.Height >= maxHeight {
            peer.TrySend(EvidenceChannel, cdc.MustMarshalBinaryBare(msg))
        }
    }
}

type PeerState interface {
    GetRoundState() *cstypes.PeerRoundState
}

//-----------------------------------------------------------------------------
// Messages
Some files were not shown because too many files have changed in this diff.