Revert "delete everything" (includes everything non-go-crypto)

This reverts commit 96a3502
Liamsi
2018-06-20 17:35:30 -07:00
parent 587505d4d2
commit d2c05bc5b9
533 changed files with 69873 additions and 162 deletions

1
docs/.python-version Normal file

@ -0,0 +1 @@
2.7.14

23
docs/Makefile Normal file

@ -0,0 +1,23 @@
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = python -msphinx
SPHINXPROJ = Tendermint
SOURCEDIR = .
BUILDDIR = _build
# Put it first so that "make" without argument is like "make help".
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
install:
@pip install -r requirements.txt
.PHONY: help Makefile
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

14
docs/README.md Normal file

@ -0,0 +1,14 @@
Here lies our documentation. After making edits, run:
```
pip install -r requirements.txt
make html
```
to build the docs locally then open the file `_build/html/index.html` in your browser.
**WARNING:** This documentation is intended to be viewed at:
https://tendermint.readthedocs.io
and may contain broken internal links when viewed from GitHub.


@ -0,0 +1,17 @@
.toggle {
padding-bottom: 1em ;
}
.toggle .header {
display: block;
clear: both;
cursor: pointer;
}
.toggle .header:after {
content: " ▼";
}
.toggle .header.open:after {
content: " ▲";
}

10
docs/_static/custom_collapsible_code.js vendored Normal file

@ -0,0 +1,10 @@
let makeCodeBlocksCollapsible = function() {
$(".toggle > *").hide();
$(".toggle .header").show();
$(".toggle .header").click(function() {
$(this).parent().children().not(".header").toggle({"duration": 400});
$(this).parent().children(".header").toggleClass("open");
});
};
// we could use the }(); form if we had access to jQuery in HEAD, but that would require forcing the theme
// to load jQuery before our custom scripts

20
docs/_templates/layout.html vendored Normal file

@ -0,0 +1,20 @@
{% extends "!layout.html" %}
{% set css_files = css_files + ["_static/custom_collapsible_code.css"] %}
{# sadly, I didn't find a css style way to add custom JS to a list that is automagically added to head like CSS (above) #}
{% block extrahead %}
<script type="text/javascript" src="_static/custom_collapsible_code.js"></script>
{% endblock %}
{% block footer %}
<script type="text/javascript">
$(document).ready(function() {
// using this approach as we don't have access to the jQuery selectors
// when executing the function on load in HEAD
makeCodeBlocksCollapsible();
});
</script>
{% endblock %}

329
docs/abci-cli.md Normal file

@ -0,0 +1,329 @@
# Using ABCI-CLI
To facilitate testing and debugging of ABCI servers and simple apps, we
built a CLI, the `abci-cli`, for sending ABCI messages from the command
line.
## Install
Make sure you [have Go installed](https://golang.org/doc/install).
Next, install the `abci-cli` tool and example applications:
go get -u github.com/tendermint/abci/cmd/abci-cli
If this fails, you may need to use [dep](https://github.com/golang/dep)
to get vendored dependencies:
cd $GOPATH/src/github.com/tendermint/abci
make get_tools
make get_vendor_deps
make install
Now run `abci-cli` to see the list of commands:
Usage:
abci-cli [command]
Available Commands:
batch Run a batch of abci commands against an application
check_tx Validate a tx
commit Commit the application state and return the Merkle root hash
console Start an interactive abci console for multiple commands
counter ABCI demo example
deliver_tx Deliver a new tx to the application
kvstore ABCI demo example
echo Have the application echo a message
help Help about any command
info Get some info about the application
query Query the application state
set_option Set an option on the application
Flags:
--abci string socket or grpc (default "socket")
--address string address of application socket (default "tcp://127.0.0.1:26658")
-h, --help help for abci-cli
-v, --verbose print the command and results as if it were a console session
Use "abci-cli [command] --help" for more information about a command.
## KVStore - First Example
The `abci-cli` tool lets us send ABCI messages to our application, to
help build and debug them.
The most important messages are `deliver_tx`, `check_tx`, and `commit`,
but there are others for convenience, configuration, and information
purposes.
We'll start a kvstore application, which was installed at the same time
as `abci-cli` above. The kvstore just stores transactions in a merkle
tree.
Its code can be found
[here](https://github.com/tendermint/abci/blob/master/cmd/abci-cli/abci-cli.go)
and looks like:
func cmdKVStore(cmd *cobra.Command, args []string) error {
logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))
// Create the application - in memory or persisted to disk
var app types.Application
if flagPersist == "" {
app = kvstore.NewKVStoreApplication()
} else {
app = kvstore.NewPersistentKVStoreApplication(flagPersist)
app.(*kvstore.PersistentKVStoreApplication).SetLogger(logger.With("module", "kvstore"))
}
// Start the listener
srv, err := server.NewServer(flagAddrD, flagAbci, app)
if err != nil {
return err
}
srv.SetLogger(logger.With("module", "abci-server"))
if err := srv.Start(); err != nil {
return err
}
// Wait forever
cmn.TrapSignal(func() {
// Cleanup
srv.Stop()
})
return nil
}
Start by running:
abci-cli kvstore
And in another terminal, run
abci-cli echo hello
abci-cli info
You'll see something like:
-> data: hello
-> data.hex: 68656C6C6F
and:
-> data: {"size":0}
-> data.hex: 7B2273697A65223A307D
An ABCI application must provide two things:
- a socket server
- a handler for ABCI messages
When we run the `abci-cli` tool we open a new connection to the
application's socket server, send the given ABCI message, and wait for a
response.
The server may be generic for a particular language, and we provide a
[reference implementation in
Golang](https://github.com/tendermint/abci/tree/master/server). See the
[list of other ABCI implementations](./ecosystem.html) for servers in
other languages.
The handler is specific to the application, and may be arbitrary, so
long as it is deterministic and conforms to the ABCI interface
specification.
So when we run `abci-cli info`, we open a new connection to the ABCI
server, which calls the `Info()` method on the application, which tells
us the number of transactions in our Merkle tree.
Now, since every command opens a new connection, we provide the
`abci-cli console` and `abci-cli batch` commands, to allow multiple ABCI
messages to be sent over a single connection.
Running `abci-cli console` should drop you in an interactive console for
speaking ABCI messages to your application.
Try running these commands:
> echo hello
-> code: OK
-> data: hello
-> data.hex: 0x68656C6C6F
> info
-> code: OK
-> data: {"size":0}
-> data.hex: 0x7B2273697A65223A307D
> commit
-> code: OK
-> data.hex: 0x0000000000000000
> deliver_tx "abc"
-> code: OK
> info
-> code: OK
-> data: {"size":1}
-> data.hex: 0x7B2273697A65223A317D
> commit
-> code: OK
-> data.hex: 0x0200000000000000
> query "abc"
-> code: OK
-> log: exists
-> height: 0
-> value: abc
-> value.hex: 616263
> deliver_tx "def=xyz"
-> code: OK
> commit
-> code: OK
-> data.hex: 0x0400000000000000
> query "def"
-> code: OK
-> log: exists
-> height: 0
-> value: xyz
-> value.hex: 78797A
Note that if we do `deliver_tx "abc"` it will store `(abc, abc)`, but if
we do `deliver_tx "abc=efg"` it will store `(abc, efg)`.
Similarly, you could put the commands in a file and run
`abci-cli --verbose batch < myfile`.
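For example, a `myfile` holding the same commands as the console session above might contain (illustrative only):

```
echo hello
info
commit
deliver_tx "abc"
info
commit
query "abc"
```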
## Counter - Another Example
Now that we've got the hang of it, let's try another application, the
"counter" app.
Like the kvstore app, its code can be found
[here](https://github.com/tendermint/abci/blob/master/cmd/abci-cli/abci-cli.go)
and looks like:
func cmdCounter(cmd *cobra.Command, args []string) error {
app := counter.NewCounterApplication(flagSerial)
logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))
// Start the listener
srv, err := server.NewServer(flagAddrC, flagAbci, app)
if err != nil {
return err
}
srv.SetLogger(logger.With("module", "abci-server"))
if err := srv.Start(); err != nil {
return err
}
// Wait forever
cmn.TrapSignal(func() {
// Cleanup
srv.Stop()
})
return nil
}
The counter app doesn't use a Merkle tree; it just counts how many times
we've sent a transaction, asked for a hash, or committed the state. The
result of `commit` is just the number of transactions sent.
This application has two modes: `serial=off` and `serial=on`.
When `serial=on`, transactions must be a big-endian encoded incrementing
integer, starting at 0.
If `serial=off`, there are no restrictions on transactions.
We can toggle the value of `serial` using the `set_option` ABCI message.
When `serial=on`, some transactions are invalid. In a live blockchain,
transactions collect in memory before they are committed into blocks. To
avoid wasting resources on invalid transactions, ABCI provides the
`check_tx` message, which application developers can use to accept or
reject transactions before they are stored in memory or gossiped to
other peers.
In this instance of the counter app, `check_tx` only allows transactions
whose integer is greater than the last committed one.
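A rough sketch of that check in Go, written in the same `types.Result` style as the kvstore snippets elsewhere in these docs (the `serial` and `txCount` fields and the `CodeType_BadNonce` constant are assumptions here, mirroring the Java `CodeType.BadNonce` used later; the actual counter implementation lives in the abci repository's example directory):

```go
func (app *CounterApplication) CheckTx(tx []byte) types.Result {
	if app.serial {
		// interpret the tx as a big-endian integer, left-padded to 8 bytes
		// (a real implementation would also reject txs longer than 8 bytes)
		tx8 := make([]byte, 8)
		copy(tx8[len(tx8)-len(tx):], tx)
		txValue := binary.BigEndian.Uint64(tx8)
		if txValue < uint64(app.txCount) {
			return types.Result{
				Code: types.CodeType_BadNonce,
				Log:  fmt.Sprintf("Invalid nonce. Expected >= %v, got %v", app.txCount, txValue),
			}
		}
	}
	return types.OK
}
```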
Let's kill the console and the kvstore application, and start the
counter app:
abci-cli counter
In another window, start the `abci-cli console`:
> set_option serial on
-> code: OK
-> log: OK (SetOption doesn't return anything.)
> check_tx 0x00
-> code: OK
> check_tx 0xff
-> code: OK
> deliver_tx 0x00
-> code: OK
> check_tx 0x00
-> code: BadNonce
-> log: Invalid nonce. Expected >= 1, got 0
> deliver_tx 0x01
-> code: OK
> deliver_tx 0x04
-> code: BadNonce
-> log: Invalid nonce. Expected 2, got 4
> info
-> code: OK
-> data: {"hashes":0,"txs":2}
-> data.hex: 0x7B22686173686573223A302C22747873223A327D
This is a very simple application, but between `counter` and `kvstore`,
it's easy to see how you can build out arbitrary application states on
top of the ABCI. [Hyperledger's
Burrow](https://github.com/hyperledger/burrow) also runs atop ABCI,
bringing with it Ethereum-like accounts, the Ethereum virtual machine,
Monax's permissioning scheme, and native contract extensions.
But the ultimate flexibility comes from being able to write the
application easily in any language.
We have implemented the counter in a number of languages [see the
example directory](https://github.com/tendermint/abci/tree/master/example).
To run the Node JS version, `cd` to `example/js` and run
node app.js
(you'll have to kill the other counter application process). In another
window, run the console and those previous ABCI commands. You should get
the same results as for the Go version.
## Bounties
Want to write the counter app in your favorite language?! We'd be happy
to add you to our [ecosystem](https://tendermint.com/ecosystem)! We're
also offering [bounties](https://hackerone.com/tendermint/) for
implementations in new languages!
The `abci-cli` is designed strictly for testing and debugging. In a real
deployment, the role of sending messages is taken by Tendermint, which
connects to the app using three separate connections, each with its own
pattern of messages.
For more information, see the [application developers
guide](./app-development.html). For examples of running an ABCI app with
Tendermint, see the [getting started guide](./getting-started.html).
Next is the ABCI specification.

50
docs/app-architecture.md Normal file

@ -0,0 +1,50 @@
# Application Architecture Guide
Here we provide a brief guide on the recommended architecture of a
Tendermint blockchain application.
The following diagram provides a superb example:
<https://drive.google.com/open?id=1yR2XpRi9YCY9H9uMfcw8-RMJpvDyvjz9>
The end-user application here is the Cosmos Voyager, at the bottom left.
Voyager communicates with a REST API exposed by a local Light-Client
Daemon. The Light-Client Daemon is an application specific program that
communicates with Tendermint nodes and verifies Tendermint light-client
proofs through the Tendermint Core RPC. The Tendermint Core process
communicates with a local ABCI application, where the user query or
transaction is actually processed.
The ABCI application must be a deterministic result of the Tendermint
consensus - any external influence on the application state that didn't
come through Tendermint could cause a consensus failure. Thus *nothing*
should communicate with the application except Tendermint via ABCI.
If the application is written in Go, it can be compiled into the
Tendermint binary. Otherwise, it should use a unix socket to communicate
with Tendermint. If it's necessary to use TCP, extra care must be taken
to encrypt and authenticate the connection.
All reads from the app happen through the Tendermint `/abci_query`
endpoint. All writes to the app happen through the Tendermint
`/broadcast_tx_*` endpoints.
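For illustration (assuming Tendermint's RPC is listening on its usual local port, 26657), a read and a write could look like:

```
curl -s 'localhost:26657/abci_query?data="abc"'
curl -s 'localhost:26657/broadcast_tx_commit?tx="abc=def"'
```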
The Light-Client Daemon is what provides light clients (end users) with
nearly all the security of a full node. It formats and broadcasts
transactions, and verifies proofs of queries and transaction results.
Note that it need not be a daemon - the Light-Client logic could instead
be implemented in the same process as the end-user application.
Note for those ABCI applications with weaker security requirements, the
functionality of the Light-Client Daemon can be moved into the ABCI
application process itself. That said, exposing the application process
to anything besides Tendermint over ABCI requires extreme caution, as
all transactions, and possibly all queries, should still pass through
Tendermint.
See the following for more extensive documentation:
- [Interchain Standard for the Light-Client REST API](https://github.com/cosmos/cosmos-sdk/pull/1028)
- [Tendermint RPC Docs](https://tendermint.github.io/slate/)
- [Tendermint in Production](https://github.com/tendermint/tendermint/pull/1618)
- [Tendermint Basics](https://tendermint.readthedocs.io/en/master/using-tendermint.html)
- [ABCI spec](https://github.com/tendermint/abci/blob/develop/specification.md)

527
docs/app-development.md Normal file

@ -0,0 +1,527 @@
# Application Development Guide
## ABCI Design
The purpose of ABCI is to provide a clean interface between state
transition machines on one computer and the mechanics of their
replication across multiple computers. The former we call 'application
logic' and the latter the 'consensus engine'. Application logic
validates transactions and optionally executes transactions against some
persistent state. A consensus engine ensures all transactions are
replicated in the same order on every machine. We call each machine in a
consensus engine a 'validator', and each validator runs the same
transactions through the same application logic. In particular, we are
interested in blockchain-style consensus engines, where transactions are
committed in hash-linked blocks.
The ABCI design has a few distinct components:
- message protocol
- pairs of request and response messages
- consensus makes requests, application responds
- defined using protobuf
- server/client
- consensus engine runs the client
- application runs the server
- two implementations:
- async raw bytes
- grpc
- blockchain protocol
- abci is connection oriented
- Tendermint Core maintains three connections:
- [mempool connection](#mempool-connection): for checking if
transactions should be relayed before they are committed;
only uses `CheckTx`
- [consensus connection](#consensus-connection): for executing
transactions that have been committed. Message sequence is, for every
block: `BeginBlock, [DeliverTx, ...], EndBlock, Commit`
- [query connection](#query-connection): for querying the
application state; only uses Query and Info
The mempool and consensus logic act as clients, and each maintains an
open ABCI connection with the application, which hosts an ABCI server.
Shown are the request and response types sent on each connection.
## Message Protocol
The message protocol consists of pairs of requests and responses. Some
messages have no fields, while others may include byte-arrays, strings,
or integers. See the `message Request` and `message Response`
definitions in [the protobuf definition
file](https://github.com/tendermint/abci/blob/master/types/types.proto),
and the [protobuf
documentation](https://developers.google.com/protocol-buffers/docs/overview)
for more details.
For each request, a server should respond with the corresponding
response, where order of requests is preserved in the order of
responses.
## Server
To use ABCI in your programming language of choice, there must be an ABCI
server in that language. Tendermint supports two kinds of implementation
of the server:
- Asynchronous, raw socket server (Tendermint Socket Protocol, also
known as TSP or Teaspoon)
- GRPC
Both can be tested using the `abci-cli` by setting the `--abci` flag
appropriately (ie. to `socket` or `grpc`).
See examples, in various stages of maintenance, in
[Go](https://github.com/tendermint/abci/tree/master/server),
[JavaScript](https://github.com/tendermint/js-abci),
[Python](https://github.com/tendermint/abci/tree/master/example/python3/abci),
[C++](https://github.com/mdyring/cpp-tmsp), and
[Java](https://github.com/jTendermint/jabci).
### GRPC
If GRPC is available in your language, this is the easiest approach,
though it will have significant performance overhead.
To get started with GRPC, copy in the [protobuf
file](https://github.com/tendermint/abci/blob/master/types/types.proto)
and compile it using the GRPC plugin for your language. For instance,
for golang, the command is `protoc --go_out=plugins=grpc:. types.proto`.
See the [grpc documentation for more details](http://www.grpc.io/docs/).
`protoc` will autogenerate all the necessary code for ABCI client and
server in your language, including whatever interface your application
must satisfy to be used by the ABCI server for handling requests.
### TSP
If GRPC is not available in your language, or you require higher
performance, or otherwise enjoy programming, you may implement your own
ABCI server using the Tendermint Socket Protocol, known affectionately
as Teaspoon. The first step is still to auto-generate the relevant data
types and codec in your language using `protoc`. Messages coming over
the socket are Protobuf3 encoded, but additionally length-prefixed to
facilitate use as a streaming protocol. Protobuf3 doesn't have an
official length-prefix standard, so we use our own. The first byte in
the prefix represents the length of the Big Endian encoded length. The
remaining bytes in the prefix are the Big Endian encoded length.
For example, if the Protobuf3 encoded ABCI message is 0xDEADBEEF (4
bytes), the length-prefixed message is 0x0104DEADBEEF. If the Protobuf3
encoded ABCI message is 65535 bytes long, the length-prefixed message
would be like 0x02FFFF....
Note this prefixing does not apply for grpc.
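A minimal sketch of that prefixing (a standalone helper, not part of any abci package):

```go
package tsp

import "encoding/binary"

// lengthPrefix prepends the prefix described above: one byte giving the
// size of the big-endian encoded length, then the big-endian length
// itself, then the Protobuf3-encoded message bytes.
func lengthPrefix(msg []byte) []byte {
	be := make([]byte, 8)
	binary.BigEndian.PutUint64(be, uint64(len(msg)))
	i := 0
	for i < len(be)-1 && be[i] == 0 {
		i++ // drop leading zero bytes
	}
	be = be[i:]
	// e.g. a 4-byte message 0xDEADBEEF becomes 0x0104DEADBEEF
	prefix := append([]byte{byte(len(be))}, be...)
	return append(prefix, msg...)
}
```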
An ABCI server must also be able to support multiple connections, as
Tendermint uses three connections.
## Client
There are currently two use-cases for an ABCI client. One is a testing
tool, as in the `abci-cli`, which allows ABCI requests to be sent via
command line. The other is a consensus engine, such as Tendermint Core,
which makes requests to the application every time a new transaction is
received or a block is committed.
It is unlikely that you will need to implement a client. For details of
our client, see
[here](https://github.com/tendermint/abci/tree/master/client).
Most of the examples below are from [kvstore
application](https://github.com/tendermint/abci/blob/master/example/kvstore/kvstore.go),
which is a part of the abci repo. [persistent_kvstore
application](https://github.com/tendermint/abci/blob/master/example/kvstore/persistent_kvstore.go)
is used to show `BeginBlock`, `EndBlock` and `InitChain` example
implementations.
## Blockchain Protocol
In ABCI, a transaction is simply an arbitrary length byte-array. It is
the application's responsibility to define the transaction codec as they
please, and to use it for both CheckTx and DeliverTx.
Note that there are two distinct means for running transactions,
corresponding to stages of 'awareness' of the transaction in the
network. The first stage is when a transaction is received by a
validator from a client into the so-called mempool or transaction pool
- this is where we use CheckTx. The second is when the transaction is
successfully committed on more than 2/3 of validators - where we use
DeliverTx. In the former case, it may not be necessary to run all the
state transitions associated with the transaction, as the transaction
may not ultimately be committed until some much later time, when the
result of its execution will be different. For instance, an Ethereum
ABCI app would check signatures and amounts in CheckTx, but would not
actually execute any contract code until the DeliverTx, so as to avoid
executing state transitions that have not been finalized.
To formalize the distinction further, two explicit ABCI connections are
made between Tendermint Core and the application: the mempool connection
and the consensus connection. We also make a third connection, the query
connection, to query the local state of the app.
### Mempool Connection
The mempool connection is used *only* for CheckTx requests. Transactions
are run using CheckTx in the same order they were received by the
validator. If the CheckTx returns `OK`, the transaction is kept in
memory and relayed to other peers in the same order it was received.
Otherwise, it is discarded.
CheckTx requests run concurrently with block processing, so they should
run against a copy of the main application state which is reset after
every block. This copy is necessary to track transitions made by a
sequence of CheckTx requests before they are included in a block. When a
block is committed, the application must reset the mempool
state to the latest committed state. Tendermint Core will then filter
through all transactions in the mempool, removing any that were included
in the block, and re-run the rest using CheckTx against the post-Commit
mempool state (this behaviour can be turned off with
`[mempool] recheck = false`).
In go:
func (app *KVStoreApplication) CheckTx(tx []byte) types.Result {
return types.OK
}
In Java:
ResponseCheckTx requestCheckTx(RequestCheckTx req) {
byte[] transaction = req.getTx().toByteArray();
// validate transaction
if (notValid) {
return ResponseCheckTx.newBuilder().setCode(CodeType.BadNonce).setLog("invalid tx").build();
} else {
return ResponseCheckTx.newBuilder().setCode(CodeType.OK).build();
}
}
### Replay Protection
To prevent old transactions from being replayed, CheckTx must implement
replay protection.
Tendermint provides the first layer of defence by keeping a lightweight
in-memory cache of the last 100k transactions (`[mempool] cache_size`) in
the mempool. If Tendermint has just started, or the clients sent more than
100k transactions, old transactions may be sent to the application. So
it is important that CheckTx implements some logic to handle them.
There are cases where a transaction will (or may) become valid in some
future state, in which case you probably want to disable Tendermint's
cache. You can do that by setting `[mempool] cache_size = 0` in the
config.
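Both settings mentioned above live under the `[mempool]` section of the node's config file; roughly (values here are the defaults described in this section):

```toml
[mempool]
recheck = true       # re-run CheckTx on remaining txs after every block
cache_size = 100000  # set to 0 to disable the replay-protection cache
```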
### Consensus Connection
The consensus connection is used only when a new block is committed, and
communicates all information from the block in a series of requests:
`BeginBlock, [DeliverTx, ...], EndBlock, Commit`. That is, when a block
is committed in the consensus, we send a list of DeliverTx requests (one
for each transaction) sandwiched by BeginBlock and EndBlock requests,
and followed by a Commit.
### DeliverTx
DeliverTx is the workhorse of the blockchain. Tendermint sends the
DeliverTx requests asynchronously but in order, and relies on the
underlying socket protocol (ie. TCP) to ensure they are received by the
app in order. They have already been ordered in the global consensus by
the Tendermint protocol.
DeliverTx returns an abci.Result, which includes a Code, Data, and Log.
The code may be non-zero (non-OK), meaning the corresponding transaction
should have been rejected by the mempool, but may have been included in
a block by a Byzantine proposer.
The block header will be updated (TODO) to include some commitment to
the results of DeliverTx, be it a bitarray of non-OK transactions, or a
merkle root of the data returned by the DeliverTx requests, or both.
In go:
// tx is either "key=value" or just arbitrary bytes
func (app *KVStoreApplication) DeliverTx(tx []byte) types.Result {
parts := strings.Split(string(tx), "=")
if len(parts) == 2 {
app.state.Set([]byte(parts[0]), []byte(parts[1]))
} else {
app.state.Set(tx, tx)
}
return types.OK
}
In Java:
/**
* Using Protobuf types from the protoc compiler, we always start with a byte[]
*/
ResponseDeliverTx deliverTx(RequestDeliverTx request) {
byte[] transaction = request.getTx().toByteArray();
// validate your transaction
if (notValid) {
return ResponseDeliverTx.newBuilder().setCode(CodeType.BadNonce).setLog("transaction was invalid").build();
} else {
return ResponseDeliverTx.newBuilder().setCode(CodeType.OK).build();
}
}
### Commit
Once all processing of the block is complete, Tendermint sends the
Commit request and blocks waiting for a response. While the mempool may
run concurrently with block processing (the BeginBlock, DeliverTxs, and
EndBlock), it is locked for the Commit request so that its state can be
safely reset during Commit. This means the app *MUST NOT* do any
blocking communication with the mempool (ie. broadcast\_tx) during
Commit, or there will be deadlock. Note also that all remaining
transactions in the mempool are replayed on the mempool connection
(CheckTx) following a commit.
The app should respond to the Commit request with a byte array, which is
the deterministic state root of the application. It is included in the
header of the next block. It can be used to provide easily verified
Merkle-proofs of the state of the application.
It is expected that the app will persist state to disk on Commit. The
option to have all transactions replayed from some previous block is the
job of the [Handshake](#handshake).
In go:
func (app *KVStoreApplication) Commit() types.Result {
hash := app.state.Hash()
return types.NewResultOK(hash, "")
}
In Java:
ResponseCommit requestCommit(RequestCommit requestCommit) {
// update the internal app-state
byte[] newAppState = calculateAppState();
// and return it to the node
return ResponseCommit.newBuilder().setCode(CodeType.OK).setData(ByteString.copyFrom(newAppState)).build();
}
### BeginBlock
The BeginBlock request can be used to run some code at the beginning of
every block. It also allows Tendermint to send the current block hash
and header to the application, before it sends any of the transactions.
The app should remember the latest height and header (ie. from which it
has run a successful Commit) so that it can tell Tendermint where to
pick up from when it restarts. See information on the Handshake, below.
In go:
// Track the block hash and header information
func (app *PersistentKVStoreApplication) BeginBlock(params types.RequestBeginBlock) {
// update latest block info
app.blockHeader = params.Header
// reset valset changes
app.changes = make([]*types.Validator, 0)
}
In Java:
/*
* all types come from protobuf definition
*/
ResponseBeginBlock requestBeginBlock(RequestBeginBlock req) {
Header header = req.getHeader();
byte[] prevAppHash = header.getAppHash().toByteArray();
long prevHeight = header.getHeight();
long numTxs = header.getNumTxs();
// run your pre-block logic. Maybe prepare a state snapshot, message components, etc
return ResponseBeginBlock.newBuilder().build();
}
### EndBlock
The EndBlock request can be used to run some code at the end of every
block. Additionally, the response may contain a list of validators,
which can be used to update the validator set. To add a new validator or
update an existing one, simply include them in the list returned in the
EndBlock response. To remove one, include it in the list with a `power`
equal to `0`. Tendermint core will take care of updating the validator
set. Note the change in voting power must be strictly less than 1/3 per
block if you want a light client to be able to prove the transition
externally. See the [light client
docs](https://godoc.org/github.com/tendermint/tendermint/lite#hdr-How_We_Track_Validators)
for details on how it tracks validators.
In go:
// Update the validator set
func (app *PersistentKVStoreApplication) EndBlock(req types.RequestEndBlock) types.ResponseEndBlock {
return types.ResponseEndBlock{ValidatorUpdates: app.ValUpdates}
}
In Java:
/*
* Assume that one validator changes. The new validator has a power of 10
*/
ResponseEndBlock requestEndBlock(RequestEndBlock req) {
final long currentHeight = req.getHeight();
final byte[] validatorPubKey = getValPubKey();
ResponseEndBlock.Builder builder = ResponseEndBlock.newBuilder();
builder.addDiffs(1, Types.Validator.newBuilder().setPower(10L).setPubKey(ByteString.copyFrom(validatorPubKey)).build());
return builder.build();
}
### Query Connection
This connection is used to query the application without engaging
consensus. It's exposed over the tendermint core rpc, so clients can
query the app without exposing a server on the app itself, but they must
serialize each query as a single byte array. Additionally, certain
"standardized" queries may be used to inform local decisions, for
instance about which peers to connect to.
Tendermint Core currently uses the Query connection to filter peers upon
connecting, according to IP address or public key. For instance,
returning non-OK ABCI response to either of the following queries will
cause Tendermint to not connect to the corresponding peer:
- `p2p/filter/addr/<addr>`, where `<addr>` is an IP address.
- `p2p/filter/pubkey/<pubkey>`, where `<pubkey>` is the hex-encoded
ED25519 key of the node (not its validator key)
Note: these query formats are subject to change!
In go:
func (app *KVStoreApplication) Query(reqQuery types.RequestQuery) (resQuery types.ResponseQuery) {
if reqQuery.Prove {
value, proof, exists := app.state.Proof(reqQuery.Data)
resQuery.Index = -1 // TODO make Proof return index
resQuery.Key = reqQuery.Data
resQuery.Value = value
resQuery.Proof = proof
if exists {
resQuery.Log = "exists"
} else {
resQuery.Log = "does not exist"
}
return
} else {
index, value, exists := app.state.Get(reqQuery.Data)
resQuery.Index = int64(index)
resQuery.Value = value
if exists {
resQuery.Log = "exists"
} else {
resQuery.Log = "does not exist"
}
return
}
}
In Java:
ResponseQuery requestQuery(RequestQuery req) {
final boolean isProveQuery = req.getProve();
final ResponseQuery.Builder responseBuilder = ResponseQuery.newBuilder();
if (isProveQuery) {
com.app.example.ProofResult proofResult = generateProof(req.getData().toByteArray());
final byte[] proofAsByteArray = proofResult.getAsByteArray();
responseBuilder.setProof(ByteString.copyFrom(proofAsByteArray));
responseBuilder.setKey(req.getData());
responseBuilder.setValue(ByteString.copyFrom(proofResult.getData()));
responseBuilder.setLog(proofResult.getLogValue());
} else {
byte[] queryData = req.getData().toByteArray();
final com.app.example.QueryResult result = generateQueryResult(queryData);
responseBuilder.setIndex(result.getIndex());
responseBuilder.setValue(ByteString.copyFrom(result.getValue()));
responseBuilder.setLog(result.getLogValue());
}
return responseBuilder.build();
}
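Relatedly, for the peer-filter queries mentioned earlier, a hedged sketch of rejecting a peer by address might look like this (`MyApp` and `isBanned` are hypothetical; `strings` is from the standard library):

```go
func (app *MyApp) Query(req types.RequestQuery) (res types.ResponseQuery) {
	if strings.HasPrefix(req.Path, "p2p/filter/addr/") {
		addr := strings.TrimPrefix(req.Path, "p2p/filter/addr/")
		if isBanned(addr) { // hypothetical helper
			res.Code = 1 // any non-OK code tells Tendermint not to connect
			res.Log = "rejected peer"
		}
	}
	return
}
```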
### Handshake
When the app or tendermint restarts, they need to sync to a common
height. When an ABCI connection is first established, Tendermint will
call `Info` on the Query connection. The response should contain the
LastBlockHeight and LastBlockAppHash - the former is the last block for
which the app ran Commit successfully, the latter is the response from
that Commit.
Using this information, Tendermint will determine what needs to be
replayed, if anything, against the app, to ensure both Tendermint and
the app are synced to the latest block height.
If the app returns a LastBlockHeight of 0, Tendermint will just replay
all blocks.
In go:
func (app *KVStoreApplication) Info(req types.RequestInfo) (resInfo types.ResponseInfo) {
return types.ResponseInfo{Data: cmn.Fmt("{\"size\":%v}", app.state.Size())}
}
In Java:
ResponseInfo requestInfo(RequestInfo req) {
final byte[] lastAppHash = getLastAppHash();
final long lastHeight = getLastHeight();
return ResponseInfo.newBuilder().setLastBlockAppHash(ByteString.copyFrom(lastAppHash)).setLastBlockHeight(lastHeight).build();
}
### Genesis
`InitChain` will be called once upon the genesis. `params` includes the
initial validator set. Later on, it may be extended to take parts of the
consensus params.
In go:
// Save the validators in the merkle tree
func (app *PersistentKVStoreApplication) InitChain(params types.RequestInitChain) {
for _, v := range params.Validators {
r := app.updateValidator(v)
if r.IsErr() {
app.logger.Error("Error updating validators", "r", r)
}
}
}
In Java:
/*
* all types come from protobuf definition
*/
ResponseInitChain requestInitChain(RequestInitChain req) {
final int validatorsCount = req.getValidatorsCount();
final List<Types.Validator> validatorsList = req.getValidatorsList();
validatorsList.forEach((validator) -> {
long power = validator.getPower();
byte[] validatorPubKey = validator.getPubKey().toByteArray();
// do something for validator setup in app
});
return ResponseInitChain.newBuilder().build();
}


@ -0,0 +1,5 @@
# Architecture Decision Records
This is a location to record all high-level architecture decisions in the tendermint project. Not the implementation details, but the reasoning that happened. This should be referred to for guidance on the "right way" to extend the application. And if we notice that the original decisions were lacking, we should have another open discussion, record the new decisions here, and then modify the code to match.
Read up on the concept in this [blog post](https://product.reverb.com/documenting-architecture-decisions-the-reverb-way-a3563bb24bd0#.78xhdix6t).


@ -0,0 +1,216 @@
# ADR 1: Logging
## Context
The current logging system in Tendermint is very static and not flexible enough.
Issues: [358](https://github.com/tendermint/tendermint/issues/358), [375](https://github.com/tendermint/tendermint/issues/375).
What we want from the new system:
- per package dynamic log levels
- dynamic logger setting (logger tied to the processing struct)
- conventions
- be more visually appealing
"dynamic" here means the ability to set something at runtime.
## Decision
### 1) An interface
First, we will need an interface for all of our libraries (`tmlibs`, Tendermint, etc.). My personal preference is the go-kit `Logger` interface (see Appendix A), but that would be too big a change. Plus, we will still need levels.
```go
# log.go
type Logger interface {
Debug(msg string, keyvals ...interface{}) error
Info(msg string, keyvals ...interface{}) error
Error(msg string, keyvals ...interface{}) error
With(keyvals ...interface{}) Logger
}
```
On a side note: the difference between `Info` and `Notice` is subtle. We probably
could do without `Notice`. I don't think we need `Panic` or `Fatal` as part of
the interface. These funcs could be implemented as helpers. In fact, we already
have some in `tmlibs/common`.
- `Debug` - extended output for devs
- `Info` - all that is useful for a user
- `Error` - errors
`Notice` should become `Info`, `Warn` either `Error` or `Debug` depending on the message, `Crit` -> `Error`.
This interface should go into `tmlibs/log`. All libraries which are part of the core (tendermint/tendermint) should obey it.
### 2) Logger with our current formatting
On top of this interface, we will need to implement a stdout logger, which will be used when Tendermint is configured to output logs to STDOUT.
Many people say that they like the current output, so let's stick with it.
```
NOTE[04-25|14:45:08] ABCI Replay Blocks module=consensus appHeight=0 storeHeight=0 stateHeight=0
```
Couple of minor changes:
```
I[04-25|14:45:08.322] ABCI Replay Blocks module=consensus appHeight=0 storeHeight=0 stateHeight=0
```
Notice the level is encoded using only one char plus milliseconds.
Note: there are many other formats out there like [logfmt](https://brandur.org/logfmt).
This logger could be implemented using any logger - [logrus](https://github.com/sirupsen/logrus), [go-kit/log](https://github.com/go-kit/kit/tree/master/log), [zap](https://github.com/uber-go/zap), log15 - as long as it
a) supports coloring output<br>
b) is moderately fast (buffering) <br>
c) conforms to the new interface or adapter could be written for it <br>
d) is somewhat configurable<br>
go-kit is my favorite so far. Check out how easy it is to color errors in red https://github.com/go-kit/kit/blob/master/log/term/example_test.go#L12. Although, coloring could only be applied to the whole string :(
```
go-kit +: flexible, modular
go-kit “-”: logfmt format https://brandur.org/logfmt
logrus +: popular, feature rich (hooks), API and output is more like what we want
logrus -: not so flexible
```
```go
# tm_logger.go
// NewTmLogger returns a logger that encodes keyvals to the Writer in
// tm format.
func NewTmLogger(w io.Writer) Logger {
return &tmLogger{kitlog.NewLogfmtLogger(w)}
}
func (l tmLogger) SetLevel(level string) {
switch level {
case "debug":
l.sourceLogger = level.NewFilter(l.sourceLogger, level.AllowDebug())
}
}
func (l tmLogger) Info(msg string, keyvals ...interface{}) error {
l.sourceLogger.Log("msg", msg, keyvals...)
}
# log.go
func With(logger Logger, keyvals ...interface{}) Logger {
kitlog.With(logger.sourceLogger, keyvals...)
}
```
Usage:
```go
logger := log.NewTmLogger(os.Stdout)
logger.SetLevel(config.GetString("log_level"))
node.SetLogger(log.With(logger, "node", Name))
```
**Other log formatters**
In the future, we may want other formatters like JSONFormatter.
```
{ "level": "notice", "time": "2017-04-25 14:45:08.562471297 -0400 EDT", "module": "consensus", "msg": "ABCI Replay Blocks", "appHeight": 0, "storeHeight": 0, "stateHeight": 0 }
```
### 3) Dynamic logger setting
https://dave.cheney.net/2017/01/23/the-package-level-logger-anti-pattern
This is the hardest part and where the most work will be done. The logger should be tied to the processing struct, or the context if it adds some fields to the logger.
```go
type BaseService struct {
log log15.Logger
name string
started uint32 // atomic
stopped uint32 // atomic
...
}
```
BaseService already contains a `log` field, so most of the structs embedding it should be fine. We should rename it to `logger`.
The only thing missing is the ability to set the logger:
```
func (bs *BaseService) SetLogger(l log.Logger) {
bs.logger = l
}
```
### 4) Conventions
Important keyvals should go first. Example:
```
correct
I[04-25|14:45:08.322] ABCI Replay Blocks module=consensus instance=1 appHeight=0 storeHeight=0 stateHeight=0
```
not
```
wrong
I[04-25|14:45:08.322] ABCI Replay Blocks module=consensus appHeight=0 storeHeight=0 stateHeight=0 instance=1
```
For that, in most cases, you'll need to add the `instance` field to a logger upon creation, not when you log a particular message:
```go
colorFn := func(keyvals ...interface{}) term.FgBgColor {
for i := 1; i < len(keyvals); i += 2 {
if keyvals[i] == "instance" && keyvals[i+1] == "1" {
return term.FgBgColor{Fg: term.Blue}
} else if keyvals[i] == "instance" && keyvals[i+1] == "2" {
return term.FgBgColor{Fg: term.Red}
}
}
return term.FgBgColor{}
}
logger := term.NewLogger(os.Stdout, log.NewTmLogger, colorFn)
c1 := NewConsensusReactor(...)
c1.SetLogger(log.With(logger, "instance", 1))
c2 := NewConsensusReactor(...)
c2.SetLogger(log.With(logger, "instance", 2))
```
## Status
proposed
## Consequences
### Positive
Dynamic logger, which could be turned off for some modules at runtime. Public interface for other projects using Tendermint libraries.
### Negative
We may lose the ability to color keys in key-value pairs. go-kit allows you to easily change the foreground / background colors of the whole string, but not its parts.
### Neutral
## Appendix A.
I really like the minimalistic approach go-kit took with its logger https://github.com/go-kit/kit/tree/master/log:
```
type Logger interface {
Log(keyvals ...interface{}) error
}
```
See [The Hunt for a Logger Interface](https://go-talks.appspot.com/github.com/ChrisHines/talks/structured-logging/structured-logging.slide). The advantage is greater composability (check out how go-kit defines colored logging or log-leveled logging on top of this interface https://github.com/go-kit/kit/tree/master/log).


@ -0,0 +1,90 @@
# ADR 2: Event Subscription
## Context
In the light client (or any other client), the user may want to **subscribe to
a subset of transactions** (rather than all of them) using `/subscribe?event=X`. For
example, I want to subscribe for all transactions associated with a particular
account. Same for fetching. The user may want to **fetch transactions based on
some filter** (rather than fetching all the blocks). For example, I want to get
all transactions for a particular account in the last two weeks (`tx's block
time >= '2017-06-05'`).
Currently, you can't even subscribe to "all txs" in Tendermint.
The goal is a simple and easy-to-use API for doing that.
![Tx Send Flow Diagram](img/tags1.png)
## Decision
The ABCI app returns tags with a `DeliverTx` response inside the `data` field (_for
now; later we may create a separate field_). Tags are a list of key-value pairs,
protobuf encoded.
Example data:
```json
{
"abci.account.name": "Igor",
"abci.account.address": "0xdeadbeef",
"tx.gas": 7
}
```
### Subscribing for transactions events
If the user wants to receive only a subset of transactions, the ABCI app must
return a list of tags with a `DeliverTx` response. These tags will be parsed and
matched with the current queries (subscribers). If the query matches the tags, the
subscriber will get the transaction event.
```
/subscribe?query="tm.event = Tx AND tx.hash = AB0023433CF0334223212243BDD AND abci.account.invoice.number = 22"
```
A new package must be developed to replace the current `events` package. It
will allow clients to subscribe to different types of events in the future:
```
/subscribe?query="abci.account.invoice.number = 22"
/subscribe?query="abci.account.invoice.owner CONTAINS Igor"
```
### Fetching transactions
This is a bit tricky because a) we want to support a number of indexers, all of
which have different APIs, and b) we don't know whether tags will be sufficient
for most apps (I guess we'll see).
```
/txs/search?query="tx.hash = AB0023433CF0334223212243BDD AND abci.account.owner CONTAINS Igor"
/txs/search?query="abci.account.owner = Igor"
```
For historic queries we will need an indexing storage (Postgres, SQLite, ...).
### Issues
- https://github.com/tendermint/basecoin/issues/91
- https://github.com/tendermint/tendermint/issues/376
- https://github.com/tendermint/tendermint/issues/287
- https://github.com/tendermint/tendermint/issues/525 (related)
## Status
proposed
## Consequences
### Positive
- same format for event notifications and search APIs
- powerful enough query
### Negative
- performance of the `match` function (where we have too many queries / subscribers)
- there is an issue where there are too many txs in the DB
### Neutral


@ -0,0 +1,34 @@
# ADR 3: Must an ABCI-app have an RPC server?
## Context
ABCI-server could expose its own RPC-server and act as a proxy to Tendermint.
The idea was for the Tendermint RPC to just be a transparent proxy to the app.
Clients need to talk to Tendermint for proofs, unless we burden all app devs
with exposing Tendermint proof stuff. Also seems less complex to lock down one
server than two, but granted it makes querying a bit more kludgy since it needs
to be passed as a `Query`. Also, **having a very standard rpc interface means
the light-client can work with all apps and handle proofs**. The only
app-specific logic is decoding the binary data to a more readable form (eg.
json). This is a huge advantage for code-reuse and standardization.
## Decision
We don't expose an RPC server on any of our ABCI apps.
## Status
accepted
## Consequences
### Positive
- Unified interface for all apps
### Negative
- `Query` interface
### Neutral


@ -0,0 +1,38 @@
# ADR 004: Historical Validators
## Context
Right now, we can query the present validator set, but there is no history.
If you were offline for a long time, there is no way to reconstruct past validators. This is needed for the light client, and we agreed it needs an enhancement of the API.
## Decision
For every block, store a new structure that contains either the latest validator set,
or the height of the last block for which the validator set changed. Note this is not
the height of the block which returned the validator set change itself, but the next block,
ie. the first block it comes into effect for.
Storing the validators will be handled by the `state` package.
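A hedged sketch of what such a per-height record could look like (names are illustrative; this ADR doesn't fix them):

```go
// Written once per height: either the new validator set itself, or a
// pointer back to the first height at which the current set took effect.
type ValidatorsInfo struct {
	ValidatorSet      *types.ValidatorSet // non-nil only when the set changed at this height
	LastHeightChanged int64
}
```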
At some point in the future, we may consider more efficient storage in the case where the validators
are updated frequently - for instance by only saving the diffs, rather than the whole set.
An alternative approach suggested keeping the validator set, or diffs of it, in a merkle IAVL tree.
While it might afford cheaper proofs that a validator set has not changed, it would be more complex,
and likely less efficient.
## Status
Accepted.
## Consequences
### Positive
- Can query old validator sets, with proof.
### Negative
- Writes an extra structure to disk with every block.
### Neutral


@ -0,0 +1,86 @@
# ADR 005: Consensus Params
## Context
Consensus critical parameters controlling blockchain capacity have until now been hard coded, loaded from a local config, or neglected.
Since they may need to be different in different networks, and potentially evolve over time within
networks, we seek to initialize them in a genesis file, and expose them through the ABCI.
While we have some specific parameters now, like maximum block and transaction size, we expect to have more in the future,
such as a period over which evidence is valid, or the frequency of checkpoints.
## Decision
### ConsensusParams
No consensus critical parameters should ever be found in the `config.toml`.
A new `ConsensusParams` is optionally included in the `genesis.json` file,
and loaded into the `State`. Any items not included are set to their default value.
A value of 0 is undefined (see ABCI, below). A value of -1 is used to indicate the parameter does not apply.
The parameters are used to determine the validity of a block (and tx) via the union of all relevant parameters.
```
type ConsensusParams struct {
BlockSize
TxSize
BlockGossip
}
type BlockSize struct {
MaxBytes int
MaxTxs int
MaxGas int
}
type TxSize struct {
MaxBytes int
MaxGas int
}
type BlockGossip struct {
BlockPartSizeBytes int
}
```
The `ConsensusParams` can evolve over time by adding new structs that cover different aspects of the consensus rules.
The `BlockPartSizeBytes` and the `BlockSize.MaxBytes` are enforced to be greater than 0.
The former because we need a part size, the latter so that we always have at least some sanity check over the size of blocks.
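Filling in the structs above with illustrative values (the numbers are placeholders; only the `-1` "does not apply" convention and the greater-than-zero checks come from this ADR):

```go
params := ConsensusParams{
	BlockSize:   BlockSize{MaxBytes: 1024 * 1024, MaxTxs: 10000, MaxGas: -1},
	TxSize:      TxSize{MaxBytes: 10240, MaxGas: -1},
	BlockGossip: BlockGossip{BlockPartSizeBytes: 65536},
}
```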
### ABCI
#### InitChain
InitChain currently takes the initial validator set. It should be extended to also take parts of the ConsensusParams.
There is some case to be made for it to take the entire Genesis, except there may be things in the genesis,
like the BlockPartSize, that the app shouldn't really know about.
#### EndBlock
The EndBlock response includes a `ConsensusParams`, which includes BlockSize and TxSize, but not BlockGossip.
Other param structs can be added to `ConsensusParams` in the future.
The `0` value is used to denote no change.
Any other value will update that parameter in the `State.ConsensusParams`, to be applied for the next block.
Tendermint should have hard-coded upper limits as sanity checks.
## Status
Proposed.
## Consequences
### Positive
- Alternative capacity limits and consensus parameters can be specified without re-compiling the software.
- They can also change over time under the control of the application
### Negative
- More exposed parameters is more complexity
- Different rules at different heights in the blockchain complicates fast sync
### Neutral
- The TxSize, which checks validity, may be in conflict with the config's `max_block_size_tx`, which determines proposal sizes


@ -0,0 +1,238 @@
# ADR 006: Trust Metric Design
## Context
The proposed trust metric will allow Tendermint to maintain local trust rankings for peers it has directly interacted with, which can then be used to implement soft security controls. The calculations were obtained from the [TrustGuard](https://dl.acm.org/citation.cfm?id=1060808) project.
### Background
The Tendermint Core project developers would like to improve Tendermint security and reliability by keeping track of the level of trustworthiness peers have demonstrated within the peer-to-peer network. This way, undesirable outcomes from peers will not immediately result in them being dropped from the network (potentially causing drastic changes to take place). Instead, peer behavior can be monitored with appropriate metrics, and peers can be removed from the network once Tendermint Core is certain a peer is a threat. For example, when the PEXReactor makes a request for peers' network addresses from an already known peer, and the returned network addresses are unreachable, this untrustworthy behavior should be tracked. Returning a few bad network addresses probably shouldn't cause a peer to be dropped, while excessive amounts of this behavior do qualify the peer for being dropped.
Trust metrics can be circumvented by malicious nodes through the use of strategic oscillation techniques, which adapt the malicious node's behavior pattern in order to maximize its goals. For instance, if the malicious node learns that the time interval of the Tendermint trust metric is *X* hours, then it could wait *X* hours in-between malicious activities. We could try to combat this issue by increasing the interval length, yet this will make the system less adaptive to recent events.
Instead, having shorter intervals, but keeping a history of interval values, will give our metric the flexibility needed in order to keep the network stable, while also making it resilient against a strategic malicious node in the Tendermint peer-to-peer network. Also, the metric can access trust data over a rather long period of time while not greatly increasing its history size by aggregating older history values over a larger number of intervals, and at the same time, maintain great precision for the recent intervals. This approach is referred to as fading memories, and closely resembles the way human beings remember their experiences. The trade-off to using history data is that the interval values should be preserved in-between executions of the node.
### References
S. Mudhakar, L. Xiong, and L. Liu, “TrustGuard: Countering Vulnerabilities in Reputation Management for Decentralized Overlay Networks,” in *Proceedings of the 14th international conference on World Wide Web, pp. 422-431*, May 2005.
## Decision
The proposed trust metric will allow a developer to inform the trust metric store of all good and bad events relevant to a peer's behavior, and at any time, the metric can be queried for a peer's current trust ranking.
The three subsections below will cover the process being considered for calculating the trust ranking, the concept of the trust metric store, and the interface for the trust metric.
### Proposed Process
The proposed trust metric will count good and bad events relevant to the object, and calculate the percent of counters that are good over an interval with a predefined duration. This is the procedure that will continue for the life of the trust metric. When the trust metric is queried for the current **trust value**, a resilient equation will be utilized to perform the calculation.
The equation being proposed resembles a Proportional-Integral-Derivative (PID) controller used in control systems. The proportional component allows us to be sensitive to the value of the most recent interval, while the integral component allows us to incorporate trust values stored in the history data, and the derivative component allows us to give weight to sudden changes in the behavior of a peer. We compute the trust value of a peer in interval i based on its current trust ranking, its trust rating history prior to interval *i* (over the past *maxH* number of intervals) and its trust ranking fluctuation. We will break up the equation into the three components.
```math
(1) Proportional Value = a * R[i]
```
where *R*[*i*] denotes the raw trust value at time interval *i* (where *i* == 0 being current time) and *a* is the weight applied to the contribution of the current reports. The next component of our equation uses a weighted sum over the last *maxH* intervals to calculate the history value for time *i*:
`H[i] = ` ![formula1](img/formula1.png "Weighted Sum Formula")
The weights can be chosen either optimistically or pessimistically. An optimistic weight creates larger weights for newer history data values, while the pessimistic weight creates larger weights for time intervals with lower scores. The default weights used during the calculation of the history value are optimistic and calculated as *Wk* = 0.8^*k*, for time interval *k*. With the history value available, we can now finish calculating the integral value:
```math
(2) Integral Value = b * H[i]
```
Where *H*[*i*] denotes the history value at time interval *i* and *b* is the weight applied to the contribution of past performance for the object being measured. The derivative component will be calculated as follows:
```math
D[i] = R[i] - H[i]
(3) Derivative Value = c(D[i]) * D[i]
```
Where the value of *c* is selected based on the *D*[*i*] value relative to zero. The default selection process makes *c* equal to 0 unless *D*[*i*] is a negative value, in which case c is equal to 1. The result is that the maximum penalty is applied when current behavior is lower than previously experienced behavior. If the current behavior is better than the previously experienced behavior, then the Derivative Value has no impact on the trust value. With the three components brought together, our trust value equation is calculated as follows:
```math
TrustValue[i] = a * R[i] + b * H[i] + c(D[i]) * D[i]
```
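A minimal sketch of that combination (illustrative names; the real metric also has to maintain R and H across intervals):

```go
// trustValue combines the proportional, integral, and derivative parts;
// c switches on only when current behavior is worse than the history.
func trustValue(a, b, r, h float64) float64 {
	d := r - h // derivative component
	c := 0.0
	if d < 0 {
		c = 1.0 // maximum penalty for a drop in behavior
	}
	return a*r + b*h + c*d
}
```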
As a performance optimization that will keep the amount of raw interval data being saved to a reasonable size of *m*, while allowing us to represent 2^*m* - 1 history intervals, we can employ the fading memories technique that will trade space and time complexity for the precision of the history data values by summarizing larger quantities of less recent values. While our equation above attempts to access up to *maxH* (which can be 2^*m* - 1), we will map those requests down to *m* values using equation 4 below:
```math
(4) j = index, where index > 0
```
Where *j* is one of *(0, 1, 2, … , m - 1)* indices used to access history interval data. Now we can access the raw intervals using the following calculations:
```math
R[0] = raw data for current time interval
```
`R[j] = ` ![formula2](img/formula2.png "Fading Memories Formula")
### Trust Metric Store
Similar to the P2P subsystem AddrBook, the trust metric store will maintain information relevant to Tendermint peers. Additionally, the trust metric store will ensure that trust metrics will only be active for peers that a node is currently and directly engaged with.
Reactors will provide a peer key to the trust metric store in order to retrieve the associated trust metric. The trust metric can then record new positive and negative events experienced by the reactor, as well as provide the current trust score calculated by the metric.
When the node is shutting down, the trust metric store will save history data for trust metrics associated with all known peers. This saved information allows experiences with a peer to be preserved across node executions, which can span tracking windows of days or weeks. The trust history data is loaded automatically during OnStart.
### Interface Detailed Design
Each trust metric allows for the recording of positive/negative events, querying the current trust value/score, and the stopping/pausing of tracking over time intervals. This can be seen below:
```go
// TrustMetric - keeps track of peer reliability
type TrustMetric struct {
// Private elements.
}
// Pause tells the metric to pause recording data over time intervals.
// All method calls that indicate events will unpause the metric
func (tm *TrustMetric) Pause() {}
// Stop tells the metric to stop recording data over time intervals
func (tm *TrustMetric) Stop() {}
// BadEvents indicates that one or more undesirable events took place
func (tm *TrustMetric) BadEvents(num int) {}
// GoodEvents indicates that one or more desirable events took place
func (tm *TrustMetric) GoodEvents(num int) {}
// TrustValue gets the dependable trust value; always between 0 and 1
func (tm *TrustMetric) TrustValue() float64 {}
// TrustScore gets a score based on the trust value always between 0 and 100
func (tm *TrustMetric) TrustScore() int {}
// NewMetric returns a trust metric with the default configuration
func NewMetric() *TrustMetric {}
//------------------------------------------------------------------------------------------------
// For example
tm := NewMetric()
tm.BadEvents(1)
score := tm.TrustScore()
tm.Stop()
```
Some of the trust metric parameters can be configured. The weight values should probably be left alone in most cases, while the time durations for the tracking window and individual time interval should be considered for tuning.
```go
// TrustMetricConfig - Configures the weight functions and time intervals for the metric
type TrustMetricConfig struct {
// Determines the percentage given to current behavior
ProportionalWeight float64
// Determines the percentage given to prior behavior
IntegralWeight float64
// The window of time that the trust metric will track events across.
// This can be set to cover many days without issue
TrackingWindow time.Duration
// Each interval should be short for adaptability.
// Less than 30 seconds is too sensitive,
// and greater than 5 minutes will make the metric numb
IntervalLength time.Duration
}
// DefaultConfig returns a config with values that have been tested and produce desirable results
func DefaultConfig() TrustMetricConfig {}
// NewMetricWithConfig returns a trust metric with a custom configuration
func NewMetricWithConfig(tmc TrustMetricConfig) *TrustMetric {}
//------------------------------------------------------------------------------------------------
// For example
config := TrustMetricConfig{
TrackingWindow: time.Minute * 60 * 24, // one day
IntervalLength: time.Minute * 2,
}
tm := NewMetricWithConfig(config)
tm.BadEvents(10)
tm.Pause()
tm.GoodEvents(1) // becomes active again
```
A trust metric store should be created with a DB that has persistent storage so it can save history data across node executions. All trust metrics instantiated by the store will be created with the provided TrustMetricConfig configuration.
When you attempt to fetch the trust metric for a peer and an entry does not exist in the trust metric store, a new metric is automatically created and an entry is made within the store.
In addition to the fetching method, GetPeerTrustMetric, the trust metric store provides a method to call when a peer has disconnected from the node. This is so the metric can be paused (history data will not be saved) for periods of time when the node is not having direct experiences with the peer.
```go
// TrustMetricStore - Manages all trust metrics for peers
type TrustMetricStore struct {
cmn.BaseService
// Private elements
}
// OnStart implements Service
func (tms *TrustMetricStore) OnStart() error {}
// OnStop implements Service
func (tms *TrustMetricStore) OnStop() {}
// NewTrustMetricStore returns a store that saves data to the DB
// and uses the config when creating new trust metrics
func NewTrustMetricStore(db dbm.DB, tmc TrustMetricConfig) *TrustMetricStore {}
// Size returns the number of entries in the trust metric store
func (tms *TrustMetricStore) Size() int {}
// GetPeerTrustMetric returns a trust metric by peer key
func (tms *TrustMetricStore) GetPeerTrustMetric(key string) *TrustMetric {}
// PeerDisconnected pauses the trust metric associated with the peer identified by the key
func (tms *TrustMetricStore) PeerDisconnected(key string) {}
//------------------------------------------------------------------------------------------------
// For example
db := dbm.NewDB("trusthistory", "goleveldb", dirPathStr)
tms := NewTrustMetricStore(db, DefaultConfig())
tm := tms.GetPeerTrustMetric(key)
tm.BadEvents(1)
tms.PeerDisconnected(key)
```
## Status
Approved.
## Consequences
### Positive
- The trust metric will allow Tendermint to make non-binary security and reliability decisions
- Will help Tendermint implement deterrents that provide soft security controls, yet avoid disruption on the network
- Will provide useful profiling information when analyzing performance over time related to peer interaction
### Negative
- Requires saving the trust metric history data across node executions
### Neutral
- Keep in mind that good events need to be recorded just as bad events do with this implementation

@ -0,0 +1,103 @@
# ADR 007: Trust Metric Usage Guide
## Context
Tendermint is required to monitor peer quality in order to inform its peer dialing and peer exchange strategies.
When a node first connects to the network, it is important that it can quickly find good peers.
Thus, while a node has fewer connections, it should prioritize connecting to higher quality peers.
As the node becomes well connected to the rest of the network, it can dial lesser known or lesser
quality peers and help assess their quality. Similarly, when queried for peers, a node should make
sure they don't return low-quality peers.
Peer quality can be tracked using a trust metric that flags certain behaviours as good or bad. When enough
bad behaviour accumulates, we can mark the peer as bad and disconnect.
For example, when the PEXReactor makes a request for peers' network addresses from an already known peer, and the returned network addresses are unreachable, this undesirable behavior should be tracked. Returning a few bad network addresses probably shouldn't cause a peer to be dropped, while excessive amounts of this behavior do qualify the peer for removal. The originally proposed approach and design document for the trust metric can be found in the [ADR 006](adr-006-trust-metric.md) document.
The trust metric implementation allows a developer to obtain a peer's trust metric from a trust metric store and to track good and bad events relevant to that peer's behavior; at any time, the peer's metric can be queried for a current trust value. The current trust value is calculated with a formula that utilizes current behavior, previous behavior, and the change between the two. Current behavior is calculated as the percentage of good behavior within a time interval. The time interval is short, probably set between 30 seconds and 5 minutes. On the other hand, the historic data can estimate a peer's behavior over days' worth of tracking. At the end of a time interval, the current behavior becomes part of the historic data, and a new time interval begins with the good and bad counters reset to zero.
These are some important things to keep in mind regarding how the trust metrics handle time intervals and scoring:
- Each new time interval begins with a perfect score
- Bad events quickly bring the score down and good events cause the score to slowly rise
- When the time interval is over, the percentage of good events becomes historic data.
Some useful information about the inner workings of the trust metric:
- When a trust metric is first instantiated, a timer (ticker) periodically fires in order to handle transitions between trust metric time intervals
- If a peer is disconnected from a node, the timer should be paused, since the node is no longer connected to that peer
- The ability to pause the metric is supported with the store **PeerDisconnected** method and the metric **Pause** method
- After a pause, if a good or bad event method is called on a metric, it automatically becomes unpaused and begins a new time interval, as the sketch below illustrates.
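For example, using the store and metric interfaces from ADR 006 (`tms` and `peerKey` stand in for an existing trust metric store and peer key):

```go
tm := tms.GetPeerTrustMetric(peerKey)
tm.GoodEvents(1) // the interval ticker is running; intervals transition normally

tms.PeerDisconnected(peerKey) // pauses the metric; no history data is saved

// ...the peer reconnects some time later...
tm = tms.GetPeerTrustMetric(peerKey)
tm.BadEvents(1) // the metric unpauses and a new time interval begins
```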
## Decision
The trust metric capability is now available, yet it still leaves the question of how it should be applied throughout Tendermint in order to properly track the quality of peers.
### Proposed Process
Peers are managed using an address book and a trust metric:
- The address book keeps a record of peers and provides selection methods
- The trust metric tracks the quality of the peers
#### Presence in Address Book
Outbound peers are added to the address book before they are dialed,
and inbound peers are added once the peer connection is set up.
Peers are also added to the address book when they are received in response to
a pexRequestMessage.
While a node has less than `needAddressThreshold`, it will periodically request more,
via pexRequestMessage, from randomly selected peers and from newly dialed outbound peers.
When a new address is added to an address book that has more than `0.5*needAddressThreshold` addresses,
then with some low probability, a randomly chosen low quality peer is removed.
#### Outbound Peers
Peers attempt to maintain a minimum number of outbound connections by
repeatedly querying the address book for peers to connect to.
While a node has few to no outbound connections, the address book is biased to return
higher quality peers. As the node increases the number of outbound connections,
the address book is biased to return less-vetted or lower-quality peers.
#### Inbound Peers
Peers also maintain a maximum number of total connections, MaxNumPeers.
If a peer has MaxNumPeers, new incoming connections will be accepted with low probability.
When such a new connection is accepted, the peer disconnects from a probabilistically chosen low ranking peer
so it does not exceed MaxNumPeers.
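The following is a rough sketch of that policy; the constants, the `peerInfo` type, and the eviction weighting are illustrative assumptions, not actual PEX reactor code:

```go
import "math/rand"

const (
	maxNumPeers       = 50
	acceptProbability = 0.05 // a hypothetical "low probability"
)

type peerInfo struct {
	ID    string
	Score int // trust score, 0..100
}

// maybeAcceptInbound accepts an incoming peer outright while below the cap.
// At the cap it accepts only with low probability and, when it does, evicts a
// probabilistically chosen low-ranking peer so the total stays at maxNumPeers.
func maybeAcceptInbound(peers []peerInfo, incoming peerInfo) ([]peerInfo, bool) {
	if len(peers) < maxNumPeers {
		return append(peers, incoming), true
	}
	if rand.Float64() > acceptProbability {
		return peers, false
	}
	// Weight eviction toward low scores: the lower a peer's score, the more
	// likely it is to be replaced by the incoming peer.
	total := 0
	for _, p := range peers {
		total += 101 - p.Score
	}
	r := rand.Intn(total)
	for i, p := range peers {
		r -= 101 - p.Score
		if r < 0 {
			peers[i] = incoming
			break
		}
	}
	return peers, true
}
```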
#### Peer Exchange
When a peer receives a pexRequestMessage, it returns a random sample of high quality peers from the address book. Peers with no score or low score should not be included in a response to pexRequestMessage.
#### Peer Quality
Peer quality is tracked in the connection and across the reactors by storing the TrustMetric in the peer's
thread-safe Data store.
Peer behaviour is then defined as one of the following:
- Fatal - something outright malicious that causes us to disconnect the peer and ban it from the address book for some amount of time
- Bad - Any kind of timeout, messages that don't unmarshal, fail other validity checks, or messages we didn't ask for or aren't expecting (usually worth one bad event)
- Neutral - Unknown channels/message types/version upgrades (no good or bad events recorded)
- Correct - Normal correct behavior (worth one good event)
- Good - some random majority of peers per reactor sending us useful messages (worth more than one good event).
Note that Fatal behaviour causes us to remove the peer, and neutral behaviour does not affect the score; a sketch of how a reactor might map these categories to trust metric events follows.
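For example, a reactor could translate these categories into trust metric events roughly as follows; the `PeerBehaviour` type and constants are assumptions for this sketch, not existing Tendermint APIs:

```go
type PeerBehaviour int

const (
	Fatal PeerBehaviour = iota
	Bad
	Neutral
	Correct
	Good
)

// recordBehaviour maps an observed behaviour onto the TrustMetric API from
// ADR 006. Disconnecting and banning on Fatal is handled elsewhere (e.g. by
// the switch and address book), so it is only noted here.
func recordBehaviour(tm *TrustMetric, b PeerBehaviour) {
	switch b {
	case Fatal:
		// disconnect the peer and ban it in the address book for some time
	case Bad:
		tm.BadEvents(1)
	case Neutral:
		// no good or bad events recorded
	case Correct:
		tm.GoodEvents(1)
	case Good:
		tm.GoodEvents(2) // worth more than one good event
	}
}
```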
## Status
Proposed.
## Consequences
### Positive
- Bringing the address book and trust metric store together will cause the network to be built in a way that encourages greater security and reliability.
### Negative
- TBD
### Neutral
- Keep in mind that, good events need to be recorded just as bad events do using this implementation.

@ -0,0 +1,29 @@
# ADR 008: SocketPV
Tendermint nodes should support only two in-process PrivValidator
implementations:
- FilePV uses an unencrypted private key in a "priv_validator.json" file - no
configuration required (just `tendermint init`).
- SocketPV uses a socket to send signing requests to another process - the user is
responsible for starting that process themselves.
The SocketPV address can be provided via flags at the command line - doing so
will cause Tendermint to ignore any "priv_validator.json" file and to listen on
the given address for incoming connections from an external priv_validator
process. It will halt any operation until at least one external process
has successfully connected.
The external priv_validator process will dial the address to connect to
Tendermint, and then Tendermint will send requests on the ensuing connection to
sign votes and proposals. Thus the external process initiates the connection,
but the Tendermint process makes all requests. In a later stage we're going to
support multiple validators for fault tolerance. To prevent double signing they
need to be synced, which is deferred to an external solution (see #1185).
In addition, Tendermint will provide implementations that can be run in that
external process. These include:
- FilePV will encrypt the private key, and the user must enter a password to
decrypt the key when the process is started.
- LedgerPV uses a Ledger Nano S to handle all signing.

@ -0,0 +1,116 @@
# ADR 011: Monitoring
## Changelog
08-06-2018: Initial draft
11-06-2018: Reorg after @xla comments
13-06-2018: Clarification about usage of labels
## Context
In order to bring more visibility into Tendermint, we would like it to report
metrics and, maybe later, traces of transactions and RPC queries. See
https://github.com/tendermint/tendermint/issues/986.
A few solutions were considered:
1. [Prometheus](https://prometheus.io)
a) Prometheus API
b) [go-kit metrics package](https://github.com/go-kit/kit/tree/master/metrics) as an interface plus Prometheus
c) [telegraf](https://github.com/influxdata/telegraf)
d) new service, which will listen to events emitted by pubsub and report metrics
2. [OpenCensus](https://opencensus.io/go/index.html)
### 1. Prometheus
Prometheus seems to be the most popular product out there for monitoring. It has
a Go client library, powerful queries, and alerts.
**a) Prometheus API**
We can commit to using Prometheus in Tendermint, but I think Tendermint users
should be free to choose whatever monitoring tool they feel will better suit
their needs (if they don't have an existing one already). So we should try to
abstract the interface enough that people can switch between Prometheus and other
similar tools.
**b) go-kit metrics package as an interface**
metrics package provides a set of uniform interfaces for service
instrumentation and offers adapters to popular metrics packages:
https://godoc.org/github.com/go-kit/kit/metrics#pkg-subdirectories
Compared to the Prometheus API, we lose some customisability and control, but we gain
freedom in choosing any instrument from the above list, given that we will extract
metrics creation into a separate function (see "providers" in node/node.go).
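As a sketch of what option (b) could look like, the provider below wires a couple of the metrics from the table further down to the go-kit Prometheus and discard adapters; the `Metrics` struct and provider functions are illustrative, not existing Tendermint code:

```go
import (
	"github.com/go-kit/kit/metrics"
	"github.com/go-kit/kit/metrics/discard"
	kitprometheus "github.com/go-kit/kit/metrics/prometheus"
	stdprometheus "github.com/prometheus/client_golang/prometheus"
)

// Metrics groups the instruments used by the consensus reactor.
type Metrics struct {
	Height metrics.Gauge
	Rounds metrics.Gauge
}

// PrometheusMetrics would be the provider used when Prometheus is enabled.
func PrometheusMetrics() *Metrics {
	return &Metrics{
		Height: kitprometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
			Subsystem: "consensus",
			Name:      "height",
			Help:      "Height of the blockchain.",
		}, []string{}),
		Rounds: kitprometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
			Subsystem: "consensus",
			Name:      "rounds",
			Help:      "Number of rounds.",
		}, []string{}),
	}
}

// NopMetrics would be the provider used when metrics reporting is disabled.
func NopMetrics() *Metrics {
	return &Metrics{
		Height: discard.NewGauge(),
		Rounds: discard.NewGauge(),
	}
}
```

Switching backends then only requires another provider function; the reactor itself simply calls, e.g., `metrics.Height.Set(float64(height))`.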
**c) telegraf**
Unlike the already discussed options, telegraf does not require modifying Tendermint
source code. You create something called an input plugin, which polls
Tendermint RPC every second and calculates the metrics itself.
While this may sound good, some metrics we want to report are not exposed via
RPC or pubsub and therefore can't be accessed externally.
**d) service, listening to pubsub**
Same issue as the above.
### 2. OpenCensus
OpenCensus provides both metrics and tracing, which may be important in the
future. Its API looks different from go-kit and Prometheus, but it appears to
cover everything we need.
Unfortunately, the OpenCensus Go client does not define any
interfaces, so if we want to abstract away metrics we
will need to write interfaces ourselves.
### List of metrics
| | Name | Type | Description |
| - | --------------------------------------- | ------- | ----------------------------------------------------------------------------- |
| A | consensus_height | Gauge | |
| A | consensus_validators | Gauge | Number of validators who signed |
| A | consensus_validators_power | Gauge | Total voting power of all validators |
| A | consensus_missing_validators | Gauge | Number of validators who did not sign |
| A | consensus_missing_validators_power | Gauge | Total voting power of the missing validators |
| A | consensus_byzantine_validators | Gauge | Number of validators who tried to double sign |
| A | consensus_byzantine_validators_power | Gauge | Total voting power of the byzantine validators |
| A | consensus_block_interval | Timing | Time between this and last block (Block.Header.Time) |
| | consensus_block_time | Timing | Time to create a block (from creating a proposal to commit) |
| | consensus_time_between_blocks | Timing | Time between committing last block and (receiving proposal creating proposal) |
| A | consensus_rounds | Gauge | Number of rounds |
| | consensus_prevotes | Gauge | |
| | consensus_precommits | Gauge | |
| | consensus_prevotes_total_power | Gauge | |
| | consensus_precommits_total_power | Gauge | |
| A | consensus_num_txs | Gauge | |
| A | mempool_size | Gauge | |
| A | consensus_total_txs | Gauge | |
| A | consensus_block_size | Gauge | In bytes |
| A | p2p_peers | Gauge | Number of peers node's connected to |
`A` - will be implemented in the first place.
**Proposed solution**
## Status
Proposed.
## Consequences
### Positive
Better visibility and support for a variety of monitoring backends
### Negative
One more library to audit, and metrics-reporting code gets mixed in with the business domain code.
### Neutral
-

@ -0,0 +1,16 @@
# ADR 000: Template for an ADR
## Context
## Decision
## Status
## Consequences
### Positive
### Negative
### Neutral

BIN
docs/assets/a_plus_t.png Normal file
BIN
docs/assets/abci.png Normal file

206
docs/conf.py Normal file
@ -0,0 +1,206 @@
# -*- coding: utf-8 -*-
#
# Tendermint documentation build configuration file, created by
# sphinx-quickstart on Mon Aug 7 04:55:09 2017.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
import urllib
import sphinx_rtd_theme
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = []
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
from recommonmark.parser import CommonMarkParser
source_parsers = {
'.md': CommonMarkParser,
}
source_suffix = ['.rst', '.md']
#source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Tendermint'
copyright = u'2018, The Authors'
author = u'Tendermint'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = u''
# The full version, including alpha/beta/rc tags.
release = u''
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', 'architecture', 'spec', 'examples']
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'sphinx_rtd_theme'
# html_theme = 'alabaster'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
#
# This is required for the alabaster theme
# refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars
html_sidebars = {
'**': [
'about.html',
'navigation.html',
'relations.html', # needs 'show_related': True theme option to display
'searchbox.html',
'donate.html',
]
}
# -- Options for HTMLHelp output ------------------------------------------
# Output file base name for HTML help builder.
htmlhelp_basename = 'Tendermintdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'Tendermint.tex', u'Tendermint Documentation',
u'The Authors', 'manual'),
]
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'Tendermint', u'Tendermint Documentation',
[author], 1)
]
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'Tendermint', u'Tendermint Documentation',
author, 'Tendermint', 'Byzantine Fault Tolerant Consensus.',
'Database'),
]
# ---------------- customizations ----------------------
# for Docker README, below
from shutil import copyfile
# tm-bench and tm-monitor
tools_repo = "https://raw.githubusercontent.com/tendermint/tools/"
tools_branch = "master"
tools_dir = "./tools"
if not os.path.isdir(tools_dir):
os.mkdir(tools_dir)
copyfile('../DOCKER/README.md', tools_dir+'/docker.md')
urllib.urlretrieve(tools_repo+tools_branch+'/tm-bench/README.md', filename=tools_dir+'/benchmarking.md')
urllib.urlretrieve(tools_repo+tools_branch+'/tm-monitor/README.md', filename=tools_dir+'/monitoring.md')
#### abci spec #################################
abci_repo = "https://raw.githubusercontent.com/tendermint/abci/"
abci_branch = "develop"
urllib.urlretrieve(abci_repo+abci_branch+'/specification.md', filename='abci-spec.md')

68
docs/deploy-testnets.md Normal file
@ -0,0 +1,68 @@
# Deploy a Testnet
Now that we've seen how ABCI works, and even played with a few
applications on a single validator node, it's time to deploy a test
network to four validator nodes.
## Manual Deployments
It's relatively easy to set up a Tendermint cluster manually. The only
requirements for a particular Tendermint node are a private key for the
validator, stored as `priv_validator.json`, a node key, stored as
`node_key.json` and a list of the public keys of all validators, stored
as `genesis.json`. These files should be stored in
`~/.tendermint/config`, or wherever the `$TMHOME` variable might be set
to.
Here are the steps to setting up a testnet manually:
1) Provision nodes on your cloud provider of choice
2) Install Tendermint and the application of interest on all nodes
3) Generate a private key and a node key for each validator using
`tendermint init`
4) Compile a list of public keys for each validator into a
`genesis.json` file and replace the existing file with it.
5) Run
`tendermint node --proxy_app=kvstore --p2p.persistent_peers=< peer addresses >`
on each node, where `< peer addresses >` is a comma separated list
of the IP:PORT combination for each node. The default port for
Tendermint is `26656`. Thus, if the IP addresses of your nodes were
`192.168.0.1, 192.168.0.2, 192.168.0.3, 192.168.0.4`, the command
would look like:
tendermint node --proxy_app=kvstore --p2p.persistent_peers=96663a3dd0d7b9d17d4c8211b191af259621c693@192.168.0.1:26656,429fcf25974313b95673f58d77eacdd434402665@192.168.0.2:26656,0491d373a8e0fcf1023aaf18c51d6a1d0d4f31bd@192.168.0.3:26656,f9baeaa15fedf5e1ef7448dd60f46c01f1a9e9c4@192.168.0.4:26656
After a few seconds, all the nodes should connect to each other and
start making blocks! For more information, see the Tendermint Networks
section of [the guide to using Tendermint](using-tendermint.html).
But wait! Steps 3 and 4 are quite manual. Instead, use [this
script](https://github.com/tendermint/tendermint/blob/develop/docs/examples/init_testnet.sh),
which does the heavy lifting for you. And it gets better.
Instead of the previously linked script to initialize the files required
for a testnet, we have the `tendermint testnet` command. By default,
running `tendermint testnet` will create all the required files, just
like the script. Of course, you'll still need to manually edit some
fields in the `config.toml`. Alternatively, see the available flags to
auto-populate the `config.toml` with the fields that would otherwise be
passed in via flags when running `tendermint node`. As you might
imagine, this command is useful for manual or automated deployments.
## Automated Deployments
This is the easiest and fastest way to get a testnet up in less than 5 minutes.
### Local
With `docker` and `docker-compose` installed, run the command:
make localnet-start
from the root of the tendermint repository. This will spin up a 4-node
local testnet. Review the target in the Makefile to debug any problems.
### Cloud
See the [next section](./terraform-and-ansible.html) for details.

5
docs/determinism.md Normal file
@ -0,0 +1,5 @@
# On Determinism
Arguably, the most difficult part of blockchain programming is determinism - that is, ensuring that sources of indeterminism do not creep into the design of such systems.
See [this issue](https://github.com/tendermint/abci/issues/56) for more information on the potential sources of indeterminism.

21
docs/ecosystem.md Normal file
@ -0,0 +1,21 @@
# Ecosystem
The growing list of applications built using various pieces of the
Tendermint stack can be found at:
- https://tendermint.com/ecosystem
We thank the community for their contributions thus far and welcome the
addition of new projects. A pull request can be submitted to [this
file](https://github.com/tendermint/aib-data/blob/master/json/ecosystem.json)
to include your project.
## Other Tools
See [deploy testnets](./deploy-testnets) for information about all
the tools built by Tendermint. We have Kubernetes, Ansible, and
Terraform integrations.
For upgrading from older to newer versions of tendermint and to migrate
your chain data, see [tm-migrator](https://github.com/hxzqlh/tm-tools)
written by @hxzqlh.

@ -0,0 +1,149 @@
# Tendermint
## Overview
This is a quick start guide. If you have a vague idea about how Tendermint
works and want to get started right away, continue. Otherwise, [review the
documentation](http://tendermint.readthedocs.io/en/master/).
## Install
### Quick Install
Installation on a fresh Ubuntu 16.04 machine can be done with [this script](https://git.io/vpgEI), like so:
```
curl -L https://git.io/vpgEI | bash
source ~/.profile
```
WARNING: do not run the above on your local machine.
The script is also used to facilitate cluster deployment below.
### Manual Install
Requires:
- `go` minimum version 1.10
- `$GOPATH` environment variable must be set
- `$GOPATH/bin` must be on your `$PATH` (see https://github.com/tendermint/tendermint/wiki/Setting-GOPATH)
To install Tendermint, run:
```
go get github.com/tendermint/tendermint
cd $GOPATH/src/github.com/tendermint/tendermint
make get_tools && make get_vendor_deps
make install
```
Note that `go get` may return an error but it can be ignored.
Confirm installation:
```
$ tendermint version
0.18.0-XXXXXXX
```
## Initialization
Running:
```
tendermint init
```
will create the required files for a single, local node.
These files are found in `$HOME/.tendermint`:
```
$ ls $HOME/.tendermint
config.toml data genesis.json priv_validator.json
```
For a single, local node, no further configuration is required.
Configuring a cluster is covered further below.
## Local Node
Start tendermint with a simple in-process application:
```
tendermint node --proxy_app=kvstore
```
and blocks will start to stream in:
```
I[01-06|01:45:15.592] Executed block module=state height=1 validTxs=0 invalidTxs=0
I[01-06|01:45:15.624] Committed state module=state height=1 txs=0 appHash=
```
Check the status with:
```
curl -s localhost:26657/status
```
### Sending Transactions
With the kvstore app running, we can send transactions:
```
curl -s 'localhost:26657/broadcast_tx_commit?tx="abcd"'
```
and check that it worked with:
```
curl -s 'localhost:26657/abci_query?data="abcd"'
```
We can send transactions with a key and value too:
```
curl -s 'localhost:26657/broadcast_tx_commit?tx="name=satoshi"'
```
and query the key:
```
curl -s 'localhost:26657/abci_query?data="name"'
```
where the value is returned in hex.
## Cluster of Nodes
First create four Ubuntu cloud machines. The following was tested on Digital
Ocean Ubuntu 16.04 x64 (3GB/1CPU, 20GB SSD). We'll refer to their respective IP
addresses below as IP1, IP2, IP3, IP4.
Then, `ssh` into each machine, and execute [this script](https://git.io/vNLfY):
```
curl -L https://git.io/vpgEI | bash
source ~/.profile
```
This will install `go` and other dependencies, get the Tendermint source code, then compile the `tendermint` binary.
Next, `cd` into `docs/examples`. Each command below should be run from each node, in sequence:
```
tendermint node --home ./node0 --proxy_app=kvstore --p2p.persistent_peers="167b80242c300bf0ccfb3ced3dec60dc2a81776e@IP1:26656,3c7a5920811550c04bf7a0b2f1e02ab52317b5e6@IP2:26656,303a1a4312c30525c99ba66522dd81cca56a361a@IP3:26656,b686c2a7f4b1b46dca96af3a0f31a6a7beae0be4@IP4:26656"
tendermint node --home ./node1 --proxy_app=kvstore --p2p.persistent_peers="167b80242c300bf0ccfb3ced3dec60dc2a81776e@IP1:26656,3c7a5920811550c04bf7a0b2f1e02ab52317b5e6@IP2:26656,303a1a4312c30525c99ba66522dd81cca56a361a@IP3:26656,b686c2a7f4b1b46dca96af3a0f31a6a7beae0be4@IP4:26656"
tendermint node --home ./node2 --proxy_app=kvstore --p2p.persistent_peers="167b80242c300bf0ccfb3ced3dec60dc2a81776e@IP1:26656,3c7a5920811550c04bf7a0b2f1e02ab52317b5e6@IP2:26656,303a1a4312c30525c99ba66522dd81cca56a361a@IP3:26656,b686c2a7f4b1b46dca96af3a0f31a6a7beae0be4@IP4:26656"
tendermint node --home ./node3 --proxy_app=kvstore --p2p.persistent_peers="167b80242c300bf0ccfb3ced3dec60dc2a81776e@IP1:26656,3c7a5920811550c04bf7a0b2f1e02ab52317b5e6@IP2:26656,303a1a4312c30525c99ba66522dd81cca56a361a@IP3:26656,b686c2a7f4b1b46dca96af3a0f31a6a7beae0be4@IP4:26656"
```
Note that after the third node is started, blocks will start to stream in
because >2/3 of validators (defined in the `genesis.json`) have come online.
Seeds can also be specified in the `config.toml`. See [this
PR](https://github.com/tendermint/tendermint/pull/792) for more information
about configuration options.
Transactions can then be sent as covered in the single, local node example above.

@ -0,0 +1,69 @@
#!/usr/bin/env bash
# make all the files
tendermint init --home ./tester/node0
tendermint init --home ./tester/node1
tendermint init --home ./tester/node2
tendermint init --home ./tester/node3
file0=./tester/node0/config/genesis.json
file1=./tester/node1/config/genesis.json
file2=./tester/node2/config/genesis.json
file3=./tester/node3/config/genesis.json
genesis_time=`cat $file0 | jq '.genesis_time'`
chain_id=`cat $file0 | jq '.chain_id'`
value0=`cat $file0 | jq '.validators[0].pub_key.value'`
value1=`cat $file1 | jq '.validators[0].pub_key.value'`
value2=`cat $file2 | jq '.validators[0].pub_key.value'`
value3=`cat $file3 | jq '.validators[0].pub_key.value'`
rm $file0
rm $file1
rm $file2
rm $file3
echo "{
\"genesis_time\": $genesis_time,
\"chain_id\": $chain_id,
\"validators\": [
{
\"pub_key\": {
\"type\": \"tendermint/PubKeyEd25519\",
\"value\": $value0
},
\"power:\": 10,
\"name\":, \"\"
},
{
\"pub_key\": {
\"type\": \"tendermint/PubKeyEd25519\",
\"value\": $value1
},
\"power:\": 10,
\"name\":, \"\"
},
{
\"pub_key\": {
\"type\": \"tendermint/PubKeyEd25519\",
\"value\": $value2
},
\"power:\": 10,
\"name\":, \"\"
},
{
\"pub_key\": {
\"type\": \"tendermint/PubKeyEd25519\",
\"value\": $value3
},
\"power:\": 10,
\"name\":, \"\"
}
],
\"app_hash\": \"\"
}" >> $file0
cp $file0 $file1
cp $file0 $file2
cp $file2 $file3

@ -0,0 +1,32 @@
#!/usr/bin/env bash
# XXX: this script is meant to be used only on a fresh Ubuntu 16.04 instance
# and has only been tested on Digital Ocean
# get and unpack golang
curl -O https://storage.googleapis.com/golang/go1.10.linux-amd64.tar.gz
tar -xvf go1.10.linux-amd64.tar.gz
apt install make
## move go and add binary to path
mv go /usr/local
echo "export PATH=\$PATH:/usr/local/go/bin" >> ~/.profile
## create the GOPATH directory, set GOPATH and put on PATH
mkdir goApps
echo "export GOPATH=/root/goApps" >> ~/.profile
echo "export PATH=\$PATH:\$GOPATH/bin" >> ~/.profile
source ~/.profile
## get the code and move into it
REPO=github.com/tendermint/tendermint
go get $REPO
cd $GOPATH/src/$REPO
## build
git checkout master
make get_tools
make get_vendor_deps
make install

@ -0,0 +1,166 @@
# This is a TOML config file.
# For more information, see https://github.com/toml-lang/toml
##### main base config options #####
# TCP or UNIX socket address of the ABCI application,
# or the name of an ABCI application compiled in with the Tendermint binary
proxy_app = "tcp://127.0.0.1:26658"
# A custom human readable name for this node
moniker = "alpha"
# If this node is many blocks behind the tip of the chain, FastSync
# allows them to catchup quickly by downloading blocks in parallel
# and verifying their commits
fast_sync = true
# Database backend: leveldb | memdb
db_backend = "leveldb"
# Database directory
db_path = "data"
# Output level for logging, including package level options
log_level = "main:info,state:info,*:error"
##### additional base config options #####
# Path to the JSON file containing the initial validator set and other meta data
genesis_file = "config/genesis.json"
# Path to the JSON file containing the private key to use as a validator in the consensus protocol
priv_validator_file = "config/priv_validator.json"
# Path to the JSON file containing the private key to use for node authentication in the p2p protocol
node_key_file = "config/node_key.json"
# Mechanism to connect to the ABCI application: socket | grpc
abci = "socket"
# TCP or UNIX socket address for the profiling server to listen on
prof_laddr = ""
# If true, query the ABCI app on connecting to a new peer
# so the app can decide if we should keep the connection or not
filter_peers = false
##### advanced configuration options #####
##### rpc server configuration options #####
[rpc]
# TCP or UNIX socket address for the RPC server to listen on
laddr = "tcp://0.0.0.0:26657"
# TCP or UNIX socket address for the gRPC server to listen on
# NOTE: This server only supports /broadcast_tx_commit
grpc_laddr = ""
# Activate unsafe RPC commands like /dial_seeds and /unsafe_flush_mempool
unsafe = false
##### peer to peer configuration options #####
[p2p]
# Address to listen for incoming connections
laddr = "tcp://0.0.0.0:26656"
# Comma separated list of seed nodes to connect to
seeds = ""
# Comma separated list of nodes to keep persistent connections to
# Do not add private peers to this list if you don't want them advertised
persistent_peers = ""
# Path to address book
addr_book_file = "config/addrbook.json"
# Set true for strict address routability rules
addr_book_strict = true
# Time to wait before flushing messages out on the connection, in ms
flush_throttle_timeout = 100
# Maximum number of peers to connect to
max_num_peers = 50
# Maximum size of a message packet payload, in bytes
max_packet_msg_payload_size = 1024
# Rate at which packets can be sent, in bytes/second
send_rate = 512000
# Rate at which packets can be received, in bytes/second
recv_rate = 512000
# Set true to enable the peer-exchange reactor
pex = true
# Seed mode, in which node constantly crawls the network and looks for
# peers. If another node asks it for addresses, it responds and disconnects.
#
# Does not work if the peer-exchange reactor is disabled.
seed_mode = false
# Comma separated list of peer IDs to keep private (will not be gossiped to other peers)
private_peer_ids = ""
##### mempool configuration options #####
[mempool]
recheck = true
recheck_empty = true
broadcast = true
wal_dir = "data/mempool.wal"
##### consensus configuration options #####
[consensus]
wal_file = "data/cs.wal/wal"
# All timeouts are in milliseconds
timeout_propose = 3000
timeout_propose_delta = 500
timeout_prevote = 1000
timeout_prevote_delta = 500
timeout_precommit = 1000
timeout_precommit_delta = 500
timeout_commit = 1000
# Make progress as soon as we have all the precommits (as if TimeoutCommit = 0)
skip_timeout_commit = false
# BlockSize
max_block_size_txs = 10000
max_block_size_bytes = 1
# EmptyBlocks mode and possible interval between empty blocks in seconds
create_empty_blocks = true
create_empty_blocks_interval = 0
# Reactor sleep duration parameters are in milliseconds
peer_gossip_sleep_duration = 100
peer_query_maj23_sleep_duration = 2000
##### transactions indexer configuration options #####
[tx_index]
# What indexer to use for transactions
#
# Options:
# 1) "null" (default)
# 2) "kv" - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
indexer = "kv"
# Comma-separated list of tags to index (by default the only tag is tx hash)
#
# It's recommended to index only a subset of tags due to possible memory
# bloat. This is, of course, depends on the indexer's DB and the volume of
# transactions.
index_tags = ""
# When set to true, tells indexer to index all tags. Note this may be not
# desirable (see the comment above). IndexTags has a precedence over
# IndexAllTags (i.e. when given both, IndexTags will be indexed).
index_all_tags = false

@ -0,0 +1,39 @@
{
"genesis_time": "0001-01-01T00:00:00Z",
"chain_id": "test-chain-A2i3OZ",
"validators": [
{
"pub_key": {
"type": "tendermint/PubKeyEd25519",
"value": "D+k4AdjnYPWbB9wmad137Bdpo/kAulOoTRQrLy/Qc4k="
},
"power": "10",
"name": ""
},
{
"pub_key": {
"type": "tendermint/PubKeyEd25519",
"value": "b56N5GCR1adcVRuENjfKw/mrm2dkhT7wNZXV/SDsKsU="
},
"power": "10",
"name": ""
},
{
"pub_key": {
"type": "tendermint/PubKeyEd25519",
"value": "IgZDpJvGA0TAamicA8ircy+RX/BkUlj6DXwM791ywIU="
},
"power": "10",
"name": ""
},
{
"pub_key": {
"type": "tendermint/PubKeyEd25519",
"value": "KGAZfxZvIZ7abbeIQ85U1ECG6+I62KSdaH8ulc0+OiU="
},
"power": "10",
"name": ""
}
],
"app_hash": ""
}

@ -0,0 +1 @@
{"priv_key":{"type":"tendermint/PrivKeyEd25519","value":"7lY+k6EDllG8Q9gVbF5313t/ag2YGkBVKdVa0YHJ9xO5k0w3Q/hke0Z7UFT1KgVDGRUEKzwAwwjwFQUvgF0ZWg=="}}

@ -0,0 +1,14 @@
{
"address": "122A9414774A2FCAD026201DA477EF3F41970EF0",
"pub_key": {
"type": "tendermint/PubKeyEd25519",
"value": "D+k4AdjnYPWbB9wmad137Bdpo/kAulOoTRQrLy/Qc4k="
},
"last_height": "0",
"last_round": "0",
"last_step": 0,
"priv_key": {
"type": "tendermint/PrivKeyEd25519",
"value": "YLxp3ho+kySgAnzjBptbxDzSGw2ntGZLsIHQsaVxY/cP6TgB2Odg9ZsH3CZp3XfsF2mj+QC6U6hNFCsvL9BziQ=="
}
}

@ -0,0 +1,166 @@
# This is a TOML config file.
# For more information, see https://github.com/toml-lang/toml
##### main base config options #####
# TCP or UNIX socket address of the ABCI application,
# or the name of an ABCI application compiled in with the Tendermint binary
proxy_app = "tcp://127.0.0.1:26658"
# A custom human readable name for this node
moniker = "bravo"
# If this node is many blocks behind the tip of the chain, FastSync
# allows them to catchup quickly by downloading blocks in parallel
# and verifying their commits
fast_sync = true
# Database backend: leveldb | memdb
db_backend = "leveldb"
# Database directory
db_path = "data"
# Output level for logging, including package level options
log_level = "main:info,state:info,*:error"
##### additional base config options #####
# Path to the JSON file containing the initial validator set and other meta data
genesis_file = "config/genesis.json"
# Path to the JSON file containing the private key to use as a validator in the consensus protocol
priv_validator_file = "config/priv_validator.json"
# Path to the JSON file containing the private key to use for node authentication in the p2p protocol
node_key_file = "config/node_key.json"
# Mechanism to connect to the ABCI application: socket | grpc
abci = "socket"
# TCP or UNIX socket address for the profiling server to listen on
prof_laddr = ""
# If true, query the ABCI app on connecting to a new peer
# so the app can decide if we should keep the connection or not
filter_peers = false
##### advanced configuration options #####
##### rpc server configuration options #####
[rpc]
# TCP or UNIX socket address for the RPC server to listen on
laddr = "tcp://0.0.0.0:26657"
# TCP or UNIX socket address for the gRPC server to listen on
# NOTE: This server only supports /broadcast_tx_commit
grpc_laddr = ""
# Activate unsafe RPC commands like /dial_seeds and /unsafe_flush_mempool
unsafe = false
##### peer to peer configuration options #####
[p2p]
# Address to listen for incoming connections
laddr = "tcp://0.0.0.0:26656"
# Comma separated list of seed nodes to connect to
seeds = ""
# Comma separated list of nodes to keep persistent connections to
# Do not add private peers to this list if you don't want them advertised
persistent_peers = ""
# Path to address book
addr_book_file = "config/addrbook.json"
# Set true for strict address routability rules
addr_book_strict = true
# Time to wait before flushing messages out on the connection, in ms
flush_throttle_timeout = 100
# Maximum number of peers to connect to
max_num_peers = 50
# Maximum size of a message packet payload, in bytes
max_packet_msg_payload_size = 1024
# Rate at which packets can be sent, in bytes/second
send_rate = 512000
# Rate at which packets can be received, in bytes/second
recv_rate = 512000
# Set true to enable the peer-exchange reactor
pex = true
# Seed mode, in which node constantly crawls the network and looks for
# peers. If another node asks it for addresses, it responds and disconnects.
#
# Does not work if the peer-exchange reactor is disabled.
seed_mode = false
# Comma separated list of peer IDs to keep private (will not be gossiped to other peers)
private_peer_ids = ""
##### mempool configuration options #####
[mempool]
recheck = true
recheck_empty = true
broadcast = true
wal_dir = "data/mempool.wal"
##### consensus configuration options #####
[consensus]
wal_file = "data/cs.wal/wal"
# All timeouts are in milliseconds
timeout_propose = 3000
timeout_propose_delta = 500
timeout_prevote = 1000
timeout_prevote_delta = 500
timeout_precommit = 1000
timeout_precommit_delta = 500
timeout_commit = 1000
# Make progress as soon as we have all the precommits (as if TimeoutCommit = 0)
skip_timeout_commit = false
# BlockSize
max_block_size_txs = 10000
max_block_size_bytes = 1
# EmptyBlocks mode and possible interval between empty blocks in seconds
create_empty_blocks = true
create_empty_blocks_interval = 0
# Reactor sleep duration parameters are in milliseconds
peer_gossip_sleep_duration = 100
peer_query_maj23_sleep_duration = 2000
##### transactions indexer configuration options #####
[tx_index]
# What indexer to use for transactions
#
# Options:
# 1) "null" (default)
# 2) "kv" - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
indexer = "kv"
# Comma-separated list of tags to index (by default the only tag is tx hash)
#
# It's recommended to index only a subset of tags due to possible memory
# bloat. This is, of course, depends on the indexer's DB and the volume of
# transactions.
index_tags = ""
# When set to true, tells indexer to index all tags. Note this may be not
# desirable (see the comment above). IndexTags has a precedence over
# IndexAllTags (i.e. when given both, IndexTags will be indexed).
index_all_tags = false

@ -0,0 +1,39 @@
{
"genesis_time": "0001-01-01T00:00:00Z",
"chain_id": "test-chain-A2i3OZ",
"validators": [
{
"pub_key": {
"type": "tendermint/PubKeyEd25519",
"value": "D+k4AdjnYPWbB9wmad137Bdpo/kAulOoTRQrLy/Qc4k="
},
"power": "10",
"name": ""
},
{
"pub_key": {
"type": "tendermint/PubKeyEd25519",
"value": "b56N5GCR1adcVRuENjfKw/mrm2dkhT7wNZXV/SDsKsU="
},
"power": "10",
"name": ""
},
{
"pub_key": {
"type": "tendermint/PubKeyEd25519",
"value": "IgZDpJvGA0TAamicA8ircy+RX/BkUlj6DXwM791ywIU="
},
"power": "10",
"name": ""
},
{
"pub_key": {
"type": "tendermint/PubKeyEd25519",
"value": "KGAZfxZvIZ7abbeIQ85U1ECG6+I62KSdaH8ulc0+OiU="
},
"power": "10",
"name": ""
}
],
"app_hash": ""
}

@ -0,0 +1 @@
{"priv_key":{"type":"tendermint/PrivKeyEd25519","value":"H71dc/TIG7nTselfa9nG0WRArXLKYnm7P5eFCk2lk8ASKQ3sIHpbdxCSHQD/RcdHe7TiabJeuOssNPvPWiyQEQ=="}}

@ -0,0 +1,14 @@
{
"address": "BEA1B57F5806CF9AC4D54C8CF806DED5C0F102E1",
"pub_key": {
"type": "tendermint/PubKeyEd25519",
"value": "b56N5GCR1adcVRuENjfKw/mrm2dkhT7wNZXV/SDsKsU="
},
"last_height": "0",
"last_round": "0",
"last_step": 0,
"priv_key": {
"type": "tendermint/PrivKeyEd25519",
"value": "o0IqrHSPtd5YqGefodWxpJuRzvuVBjgbH785vbMgk7Vvno3kYJHVp1xVG4Q2N8rD+aubZ2SFPvA1ldX9IOwqxQ=="
}
}

@ -0,0 +1,166 @@
# This is a TOML config file.
# For more information, see https://github.com/toml-lang/toml
##### main base config options #####
# TCP or UNIX socket address of the ABCI application,
# or the name of an ABCI application compiled in with the Tendermint binary
proxy_app = "tcp://127.0.0.1:26658"
# A custom human readable name for this node
moniker = "charlie"
# If this node is many blocks behind the tip of the chain, FastSync
# allows them to catchup quickly by downloading blocks in parallel
# and verifying their commits
fast_sync = true
# Database backend: leveldb | memdb
db_backend = "leveldb"
# Database directory
db_path = "data"
# Output level for logging, including package level options
log_level = "main:info,state:info,*:error"
##### additional base config options #####
# Path to the JSON file containing the initial validator set and other meta data
genesis_file = "config/genesis.json"
# Path to the JSON file containing the private key to use as a validator in the consensus protocol
priv_validator_file = "config/priv_validator.json"
# Path to the JSON file containing the private key to use for node authentication in the p2p protocol
node_key_file = "config/node_key.json"
# Mechanism to connect to the ABCI application: socket | grpc
abci = "socket"
# TCP or UNIX socket address for the profiling server to listen on
prof_laddr = ""
# If true, query the ABCI app on connecting to a new peer
# so the app can decide if we should keep the connection or not
filter_peers = false
##### advanced configuration options #####
##### rpc server configuration options #####
[rpc]
# TCP or UNIX socket address for the RPC server to listen on
laddr = "tcp://0.0.0.0:26657"
# TCP or UNIX socket address for the gRPC server to listen on
# NOTE: This server only supports /broadcast_tx_commit
grpc_laddr = ""
# Activate unsafe RPC commands like /dial_seeds and /unsafe_flush_mempool
unsafe = false
##### peer to peer configuration options #####
[p2p]
# Address to listen for incoming connections
laddr = "tcp://0.0.0.0:26656"
# Comma separated list of seed nodes to connect to
seeds = ""
# Comma separated list of nodes to keep persistent connections to
# Do not add private peers to this list if you don't want them advertised
persistent_peers = ""
# Path to address book
addr_book_file = "config/addrbook.json"
# Set true for strict address routability rules
addr_book_strict = true
# Time to wait before flushing messages out on the connection, in ms
flush_throttle_timeout = 100
# Maximum number of peers to connect to
max_num_peers = 50
# Maximum size of a message packet payload, in bytes
max_packet_msg_payload_size = 1024
# Rate at which packets can be sent, in bytes/second
send_rate = 512000
# Rate at which packets can be received, in bytes/second
recv_rate = 512000
# Set true to enable the peer-exchange reactor
pex = true
# Seed mode, in which node constantly crawls the network and looks for
# peers. If another node asks it for addresses, it responds and disconnects.
#
# Does not work if the peer-exchange reactor is disabled.
seed_mode = false
# Comma separated list of peer IDs to keep private (will not be gossiped to other peers)
private_peer_ids = ""
##### mempool configuration options #####
[mempool]
recheck = true
recheck_empty = true
broadcast = true
wal_dir = "data/mempool.wal"
##### consensus configuration options #####
[consensus]
wal_file = "data/cs.wal/wal"
# All timeouts are in milliseconds
timeout_propose = 3000
timeout_propose_delta = 500
timeout_prevote = 1000
timeout_prevote_delta = 500
timeout_precommit = 1000
timeout_precommit_delta = 500
timeout_commit = 1000
# Make progress as soon as we have all the precommits (as if TimeoutCommit = 0)
skip_timeout_commit = false
# BlockSize
max_block_size_txs = 10000
max_block_size_bytes = 1
# EmptyBlocks mode and possible interval between empty blocks in seconds
create_empty_blocks = true
create_empty_blocks_interval = 0
# Reactor sleep duration parameters are in milliseconds
peer_gossip_sleep_duration = 100
peer_query_maj23_sleep_duration = 2000
##### transactions indexer configuration options #####
[tx_index]
# What indexer to use for transactions
#
# Options:
# 1) "null" (default)
# 2) "kv" - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
indexer = "kv"
# Comma-separated list of tags to index (by default the only tag is tx hash)
#
# It's recommended to index only a subset of tags due to possible memory
# bloat. This is, of course, depends on the indexer's DB and the volume of
# transactions.
index_tags = ""
# When set to true, tells indexer to index all tags. Note this may be not
# desirable (see the comment above). IndexTags has a precedence over
# IndexAllTags (i.e. when given both, IndexTags will be indexed).
index_all_tags = false

@ -0,0 +1,39 @@
{
"genesis_time": "0001-01-01T00:00:00Z",
"chain_id": "test-chain-A2i3OZ",
"validators": [
{
"pub_key": {
"type": "AC26791624DE60",
"value": "D+k4AdjnYPWbB9wmad137Bdpo/kAulOoTRQrLy/Qc4k="
},
"power": 10,
"name": ""
},
{
"pub_key": {
"type": "AC26791624DE60",
"value": "b56N5GCR1adcVRuENjfKw/mrm2dkhT7wNZXV/SDsKsU="
},
"power": 10,
"name": ""
},
{
"pub_key": {
"type": "AC26791624DE60",
"value": "IgZDpJvGA0TAamicA8ircy+RX/BkUlj6DXwM791ywIU="
},
"power": 10,
"name": ""
},
{
"pub_key": {
"type": "AC26791624DE60",
"value": "KGAZfxZvIZ7abbeIQ85U1ECG6+I62KSdaH8ulc0+OiU="
},
"power": 10,
"name": ""
}
],
"app_hash": ""
}

@ -0,0 +1 @@
{"priv_key":{"type":"954568A3288910","value":"COHZ/Y2cWGWxJNkRwtpQBt5sYvOnb6Gpz0lO46XERRJFBIdSWD5x1UMGRSTmnvW1ec5G4bMdg6zUZKOZD+vVPg=="}}

@ -0,0 +1,14 @@
{
"address": "F0AA266949FB29ADA0B679C27889ED930BD1BDA1",
"pub_key": {
"type": "AC26791624DE60",
"value": "IgZDpJvGA0TAamicA8ircy+RX/BkUlj6DXwM791ywIU="
},
"last_height": 0,
"last_round": 0,
"last_step": 0,
"priv_key": {
"type": "954568A3288910",
"value": "khADeZ5K/8u/L99DFaZNRq8V5g+EHWbwfqFjhCrppaAiBkOkm8YDRMBqaJwDyKtzL5Ff8GRSWPoNfAzv3XLAhQ=="
}
}

@ -0,0 +1,166 @@
# This is a TOML config file.
# For more information, see https://github.com/toml-lang/toml
##### main base config options #####
# TCP or UNIX socket address of the ABCI application,
# or the name of an ABCI application compiled in with the Tendermint binary
proxy_app = "tcp://127.0.0.1:26658"
# A custom human readable name for this node
moniker = "delta"
# If this node is many blocks behind the tip of the chain, FastSync
# allows them to catchup quickly by downloading blocks in parallel
# and verifying their commits
fast_sync = true
# Database backend: leveldb | memdb
db_backend = "leveldb"
# Database directory
db_path = "data"
# Output level for logging, including package level options
log_level = "main:info,state:info,*:error"
##### additional base config options #####
# Path to the JSON file containing the initial validator set and other meta data
genesis_file = "config/genesis.json"
# Path to the JSON file containing the private key to use as a validator in the consensus protocol
priv_validator_file = "config/priv_validator.json"
# Path to the JSON file containing the private key to use for node authentication in the p2p protocol
node_key_file = "config/node_key.json"
# Mechanism to connect to the ABCI application: socket | grpc
abci = "socket"
# TCP or UNIX socket address for the profiling server to listen on
prof_laddr = ""
# If true, query the ABCI app on connecting to a new peer
# so the app can decide if we should keep the connection or not
filter_peers = false
##### advanced configuration options #####
##### rpc server configuration options #####
[rpc]
# TCP or UNIX socket address for the RPC server to listen on
laddr = "tcp://0.0.0.0:26657"
# TCP or UNIX socket address for the gRPC server to listen on
# NOTE: This server only supports /broadcast_tx_commit
grpc_laddr = ""
# Activate unsafe RPC commands like /dial_seeds and /unsafe_flush_mempool
unsafe = false
##### peer to peer configuration options #####
[p2p]
# Address to listen for incoming connections
laddr = "tcp://0.0.0.0:26656"
# Comma separated list of seed nodes to connect to
seeds = ""
# Comma separated list of nodes to keep persistent connections to
# Do not add private peers to this list if you don't want them advertised
persistent_peers = ""
# Path to address book
addr_book_file = "config/addrbook.json"
# Set true for strict address routability rules
addr_book_strict = true
# Time to wait before flushing messages out on the connection, in ms
flush_throttle_timeout = 100
# Maximum number of peers to connect to
max_num_peers = 50
# Maximum size of a message packet payload, in bytes
max_packet_msg_payload_size = 1024
# Rate at which packets can be sent, in bytes/second
send_rate = 512000
# Rate at which packets can be received, in bytes/second
recv_rate = 512000
# Set true to enable the peer-exchange reactor
pex = true
# Seed mode, in which node constantly crawls the network and looks for
# peers. If another node asks it for addresses, it responds and disconnects.
#
# Does not work if the peer-exchange reactor is disabled.
seed_mode = false
# Comma separated list of peer IDs to keep private (will not be gossiped to other peers)
private_peer_ids = ""
##### mempool configuration options #####
[mempool]
recheck = true
recheck_empty = true
broadcast = true
wal_dir = "data/mempool.wal"
##### consensus configuration options #####
[consensus]
wal_file = "data/cs.wal/wal"
# All timeouts are in milliseconds
timeout_propose = 3000
timeout_propose_delta = 500
timeout_prevote = 1000
timeout_prevote_delta = 500
timeout_precommit = 1000
timeout_precommit_delta = 500
timeout_commit = 1000
# Make progress as soon as we have all the precommits (as if TimeoutCommit = 0)
skip_timeout_commit = false
# BlockSize
max_block_size_txs = 10000
max_block_size_bytes = 1
# EmptyBlocks mode and possible interval between empty blocks in seconds
create_empty_blocks = true
create_empty_blocks_interval = 0
# Reactor sleep duration parameters are in milliseconds
peer_gossip_sleep_duration = 100
peer_query_maj23_sleep_duration = 2000
##### transactions indexer configuration options #####
[tx_index]
# What indexer to use for transactions
#
# Options:
# 1) "null" (default)
# 2) "kv" - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
indexer = "kv"
# Comma-separated list of tags to index (by default the only tag is tx hash)
#
# It's recommended to index only a subset of tags due to possible memory
# bloat. This, of course, depends on the indexer's DB and the volume of
# transactions.
index_tags = ""
# When set to true, tells indexer to index all tags. Note this may not be
# desirable (see the comment above). IndexTags takes precedence over
# IndexAllTags (i.e. when given both, IndexTags will be indexed).
index_all_tags = false

View File

@ -0,0 +1,39 @@
{
"genesis_time": "0001-01-01T00:00:00Z",
"chain_id": "test-chain-A2i3OZ",
"validators": [
{
"pub_key": {
"type": "AC26791624DE60",
"value": "D+k4AdjnYPWbB9wmad137Bdpo/kAulOoTRQrLy/Qc4k="
},
"power": 10,
"name": ""
},
{
"pub_key": {
"type": "AC26791624DE60",
"value": "b56N5GCR1adcVRuENjfKw/mrm2dkhT7wNZXV/SDsKsU="
},
"power": 10,
"name": ""
},
{
"pub_key": {
"type": "AC26791624DE60",
"value": "IgZDpJvGA0TAamicA8ircy+RX/BkUlj6DXwM791ywIU="
},
"power": 10,
"name": ""
},
{
"pub_key": {
"type": "AC26791624DE60",
"value": "KGAZfxZvIZ7abbeIQ85U1ECG6+I62KSdaH8ulc0+OiU="
},
"power": 10,
"name": ""
}
],
"app_hash": ""
}

View File

@ -0,0 +1 @@
{"priv_key":{"type":"954568A3288910","value":"9Y9xp/tUJJ6pHTF5SUV0bGKYSdVbFtMHu+Lr8S0JBSZAwneaejnfOEU1LMKOnQ07skrDUaJcj5di3jAyjxJzqg=="}}

View File

@ -0,0 +1,14 @@
{
"address": "9A1A6914EB5F4FF0269C7EEEE627C27310CC64F9",
"pub_key": {
"type": "AC26791624DE60",
"value": "KGAZfxZvIZ7abbeIQ85U1ECG6+I62KSdaH8ulc0+OiU="
},
"last_height": 0,
"last_round": 0,
"last_step": 0,
"priv_key": {
"type": "954568A3288910",
"value": "jb52LZ5gp+eQ8nJlFK1z06nBMp1gD8ICmyzdM1icGOgoYBl/Fm8hntptt4hDzlTUQIbr4jrYpJ1ofy6VzT46JQ=="
}
}

265
docs/getting-started.md Normal file
View File

@ -0,0 +1,265 @@
# Getting Started
## First Tendermint App
As a general purpose blockchain engine, Tendermint is agnostic to the
application you want to run. So, to run a complete blockchain that does
something useful, you must start two programs: one is Tendermint Core,
the other is your application, which can be written in any programming
language. Recall from [the intro to
ABCI](introduction.html#ABCI-Overview) that Tendermint Core handles all
the p2p and consensus stuff, and just forwards transactions to the
application when they need to be validated, or when they're ready to be
committed to a block.
In this guide, we show you some examples of how to run an application
using Tendermint.
### Install
The first apps we will work with are written in Go. To install them, you
need to [install Go](https://golang.org/doc/install) and put
`$GOPATH/bin` in your `$PATH`; see
[here](https://github.com/tendermint/tendermint/wiki/Setting-GOPATH) for
more info.
Then run
go get -u github.com/tendermint/abci/cmd/abci-cli
If there is an error, install and run the
[dep](https://github.com/golang/dep) tool to pin the dependencies:
cd $GOPATH/src/github.com/tendermint/abci
make get_tools
make get_vendor_deps
make install
Now you should have the `abci-cli` installed; you'll see a couple of
commands (`counter` and `kvstore`) that are example applications written
in Go. See below for an application written in JavaScript.
Now, let's run some apps!
## KVStore - A First Example
The kvstore app is a [Merkle
tree](https://en.wikipedia.org/wiki/Merkle_tree) that just stores all
transactions. If the transaction contains an `=`, e.g. `key=value`, then
the `value` is stored under the `key` in the Merkle tree. Otherwise, the
full transaction bytes are stored as the key and the value.
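To make that behaviour concrete, here is a minimal Go sketch of what such a `DeliverTx` handler could look like. It is not the actual kvstore source: a plain map stands in for the Merkle tree, and a local `ResponseDeliverTx` type stands in for the ABCI response type.

```go
package main

import (
	"fmt"
	"strings"
)

// ResponseDeliverTx is a local stand-in for the ABCI response type;
// only the result code matters here (0 means OK).
type ResponseDeliverTx struct{ Code uint32 }

// KVStoreSketch mimics the behaviour described above, with a plain map
// in place of the Merkle tree.
type KVStoreSketch struct{ store map[string]string }

func (app *KVStoreSketch) DeliverTx(tx []byte) ResponseDeliverTx {
	key, value := string(tx), string(tx) // default: tx bytes are both key and value
	if parts := strings.SplitN(string(tx), "=", 2); len(parts) == 2 {
		key, value = parts[0], parts[1] // "key=value" transactions
	}
	app.store[key] = value
	return ResponseDeliverTx{Code: 0}
}

func main() {
	app := &KVStoreSketch{store: map[string]string{}}
	app.DeliverTx([]byte("name=satoshi"))
	app.DeliverTx([]byte("abcd"))
	fmt.Println(app.store) // map[abcd:abcd name:satoshi]
}
```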
Let's start a kvstore application.
abci-cli kvstore
In another terminal, we can start Tendermint. If you have never run
Tendermint before, use:
tendermint init
tendermint node
If you have used Tendermint, you may want to reset the data for a new
blockchain by running `tendermint unsafe_reset_all`. Then you can run
`tendermint node` to start Tendermint, and connect to the app. For more
details, see [the guide on using Tendermint](./using-tendermint.html).
You should see Tendermint making blocks! We can get the status of our
Tendermint node as follows:
curl -s localhost:26657/status
The `-s` just silences `curl`. For nicer output, pipe the result into a
tool like [jq](https://stedolan.github.io/jq/) or `json_pp`.
Now let's send some transactions to the kvstore.
curl -s 'localhost:26657/broadcast_tx_commit?tx="abcd"'
Note the single quote (`'`) around the url, which ensures that the
double quotes (`"`) are not escaped by bash. This command sent a
transaction with bytes `abcd`, so `abcd` will be stored as both the key
and the value in the Merkle tree. The response should look something
like:
{
"jsonrpc": "2.0",
"id": "",
"result": {
"check_tx": {
"fee": {}
},
"deliver_tx": {
"tags": [
{
"key": "YXBwLmNyZWF0b3I=",
"value": "amFl"
},
{
"key": "YXBwLmtleQ==",
"value": "YWJjZA=="
}
],
"fee": {}
},
"hash": "9DF66553F98DE3C26E3C3317A3E4CED54F714E39",
"height": 14
}
}
We can confirm that our transaction worked and the value got stored by
querying the app:
curl -s 'localhost:26657/abci_query?data="abcd"'
The result should look like:
{
"jsonrpc": "2.0",
"id": "",
"result": {
"response": {
"log": "exists",
"index": "-1",
"key": "YWJjZA==",
"value": "YWJjZA=="
}
}
}
Note the `value` in the result (`YWJjZA==`); this is the base64-encoding
of the ASCII of `abcd`. You can verify this in a python 2 shell by
running `"61626364".decode('base64')` or in python 3 shell by running
`import codecs; codecs.decode("61626364", 'base64').decode('ascii')`.
Stay tuned for a future release that [makes this output more
human-readable](https://github.com/tendermint/abci/issues/32).
Now let's try setting a different key and value:
curl -s 'localhost:26657/broadcast_tx_commit?tx="name=satoshi"'
Now if we query for `name`, we should get `satoshi`, or `c2F0b3NoaQ==`
in base64:
curl -s 'localhost:26657/abci_query?data="name"'
Try some other transactions and queries to make sure everything is
working!
## Counter - Another Example
Now that we've got the hang of it, let's try another application, the
`counter` app.
The counter app doesn't use a Merkle tree, it just counts how many times
we've sent a transaction, or committed the state.
This application has two modes: `serial=off` and `serial=on`.
When `serial=on`, transactions must be a big-endian encoded incrementing
integer, starting at 0.
If `serial=off`, there are no restrictions on transactions.
In a live blockchain, transactions collect in memory before they are
committed into blocks. To avoid wasting resources on invalid
transactions, ABCI provides the `CheckTx` message, which application
developers can use to accept or reject transactions, before they are
stored in memory or gossipped to other peers.
In this instance of the counter app, with `serial=on`, `CheckTx` only
allows transactions whose integer is greater than the last committed
one.
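As a rough sketch of the idea (again with a local stand-in for the ABCI response type, not the real counter source), the nonce check in `CheckTx` could look like this:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// ResponseCheckTx is a local stand-in for the ABCI response type; a
// non-zero code rejects the transaction before it enters the mempool.
type ResponseCheckTx struct {
	Code uint32
	Log  string
}

// CounterSketch mirrors the serial=on behaviour described above; txCount
// would be incremented by DeliverTx once a transaction is committed.
type CounterSketch struct{ txCount uint64 }

func (app *CounterSketch) CheckTx(tx []byte) ResponseCheckTx {
	if len(tx) > 8 {
		return ResponseCheckTx{Code: 1, Log: "tx too long for this sketch"}
	}
	// Interpret the tx as a big-endian integer; shorter inputs are left-padded.
	padded := make([]byte, 8)
	copy(padded[8-len(tx):], tx)
	value := binary.BigEndian.Uint64(padded)

	if value != app.txCount {
		return ResponseCheckTx{
			Code: 2,
			Log:  fmt.Sprintf("Invalid nonce. Expected %d, got %d", app.txCount, value),
		}
	}
	return ResponseCheckTx{Code: 0}
}

func main() {
	app := &CounterSketch{}
	fmt.Println(app.CheckTx([]byte{0x00})) // accepted
	fmt.Println(app.CheckTx([]byte{0x05})) // rejected: expected 0, got 5
}
```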
Let's kill the previous instance of `tendermint` and the `kvstore`
application, and start the counter app. We can enable `serial=on` with a
flag:
abci-cli counter --serial
In another window, reset then start Tendermint:
tendermint unsafe_reset_all
tendermint node
Once again, you can see the blocks streaming by. Let's send some
transactions. Since we have set `serial=on`, the first transaction must
be the number `0`:
curl localhost:26657/broadcast_tx_commit?tx=0x00
Note the empty (hence successful) response. The next transaction must be
the number `1`. If instead, we try to send a `5`, we get an error:
> curl localhost:26657/broadcast_tx_commit?tx=0x05
{
"jsonrpc": "2.0",
"id": "",
"result": {
"check_tx": {
"fee": {}
},
"deliver_tx": {
"code": 2,
"log": "Invalid nonce. Expected 1, got 5",
"fee": {}
},
"hash": "33B93DFF98749B0D6996A70F64071347060DC19C",
"height": 34
}
}
But if we send a `1`, it works again:
> curl localhost:26657/broadcast_tx_commit?tx=0x01
{
"jsonrpc": "2.0",
"id": "",
"result": {
"check_tx": {
"fee": {}
},
"deliver_tx": {
"fee": {}
},
"hash": "F17854A977F6FA7EEA1BD758E296710B86F72F3D",
"height": 60
}
}
For more details on the `broadcast_tx` API, see [the guide on using
Tendermint](./using-tendermint.html).
## CounterJS - Example in Another Language
We also want to run applications in another language - in this case,
we'll run a JavaScript version of the `counter`. To run it, you'll need
to [install node](https://nodejs.org/en/download/).
You'll also need to fetch the relevant repository, from
[here](https://github.com/tendermint/js-abci), then install it. As Go
devs, we keep all our code under the `$GOPATH`, so run:
go get github.com/tendermint/js-abci &> /dev/null
cd $GOPATH/src/github.com/tendermint/js-abci/example
npm install
cd ..
Kill the previous `counter` and `tendermint` processes. Now run the app:
node example/app.js
In another window, reset and start `tendermint`:
tendermint unsafe_reset_all
tendermint node
Once again, you should see blocks streaming by - but now, our
application is written in JavaScript! Try sending some transactions, and
like before - the results should be the same:
curl localhost:26657/broadcast_tx_commit?tx=0x00 # ok
curl localhost:26657/broadcast_tx_commit?tx=0x05 # invalid nonce
curl localhost:26657/broadcast_tx_commit?tx=0x01 # ok
Neat, eh?

130
docs/how-to-read-logs.md Normal file
View File

@ -0,0 +1,130 @@
# How to read logs
## Walkabout example
We first create three connections (mempool, consensus and query) to the
application (running `kvstore` locally in this case).
I[10-04|13:54:27.364] Starting multiAppConn module=proxy impl=multiAppConn
I[10-04|13:54:27.366] Starting localClient module=abci-client connection=query impl=localClient
I[10-04|13:54:27.366] Starting localClient module=abci-client connection=mempool impl=localClient
I[10-04|13:54:27.367] Starting localClient module=abci-client connection=consensus impl=localClient
Then Tendermint Core and the application perform a handshake.
I[10-04|13:54:27.367] ABCI Handshake module=consensus appHeight=90 appHash=E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD
I[10-04|13:54:27.368] ABCI Replay Blocks module=consensus appHeight=90 storeHeight=90 stateHeight=90
I[10-04|13:54:27.368] Completed ABCI Handshake - Tendermint and App are synced module=consensus appHeight=90 appHash=E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD
After that, we start a few more things like the event switch and reactors,
and perform UPnP discovery in order to detect the IP address.
I[10-04|13:54:27.374] Starting EventSwitch module=types impl=EventSwitch
I[10-04|13:54:27.375] This node is a validator module=consensus
I[10-04|13:54:27.379] Starting Node module=main impl=Node
I[10-04|13:54:27.381] Local listener module=p2p ip=:: port=26656
I[10-04|13:54:27.382] Getting UPNP external address module=p2p
I[10-04|13:54:30.386] Could not perform UPNP discover module=p2p err="write udp4 0.0.0.0:38238->239.255.255.250:1900: i/o timeout"
I[10-04|13:54:30.386] Starting DefaultListener module=p2p impl=Listener(@10.0.2.15:26656)
I[10-04|13:54:30.387] Starting P2P Switch module=p2p impl="P2P Switch"
I[10-04|13:54:30.387] Starting MempoolReactor module=mempool impl=MempoolReactor
I[10-04|13:54:30.387] Starting BlockchainReactor module=blockchain impl=BlockchainReactor
I[10-04|13:54:30.387] Starting ConsensusReactor module=consensus impl=ConsensusReactor
I[10-04|13:54:30.387] ConsensusReactor module=consensus fastSync=false
I[10-04|13:54:30.387] Starting ConsensusState module=consensus impl=ConsensusState
I[10-04|13:54:30.387] Starting WAL module=consensus wal=/home/vagrant/.tendermint/data/cs.wal/wal impl=WAL
I[10-04|13:54:30.388] Starting TimeoutTicker module=consensus impl=TimeoutTicker
Notice the second row where Tendermint Core reports that "This node is a
validator". It could also be just an observer (a regular node).
Next we replay all the messages from the WAL.
I[10-04|13:54:30.390] Catchup by replaying consensus messages module=consensus height=91
I[10-04|13:54:30.390] Replay: New Step module=consensus height=91 round=0 step=RoundStepNewHeight
I[10-04|13:54:30.390] Replay: Done module=consensus
"Started node" message signals that everything is ready for work.
I[10-04|13:54:30.391] Starting RPC HTTP server on tcp socket 0.0.0.0:26657 module=rpc-server
I[10-04|13:54:30.392] Started node module=main nodeInfo="NodeInfo{id: DF22D7C92C91082324A1312F092AA1DA197FA598DBBFB6526E, moniker: anonymous, network: test-chain-3MNw2N [remote , listen 10.0.2.15:26656], version: 0.11.0-10f361fc ([wire_version=0.6.2 p2p_version=0.5.0 consensus_version=v1/0.2.2 rpc_version=0.7.0/3 tx_index=on rpc_addr=tcp://0.0.0.0:26657])}"
Next follows a standard block creation cycle, where we enter a new
round, propose a block, receive more than 2/3 of prevotes, then
precommits and finally have a chance to commit a block. For details,
please refer to [Consensus
Overview](introduction.html#consensus-overview) or [Byzantine Consensus
Algorithm](specification.html).
I[10-04|13:54:30.393] enterNewRound(91/0). Current: 91/0/RoundStepNewHeight module=consensus
I[10-04|13:54:30.393] enterPropose(91/0). Current: 91/0/RoundStepNewRound module=consensus
I[10-04|13:54:30.393] enterPropose: Our turn to propose module=consensus proposer=125B0E3C5512F5C2B0E1109E31885C4511570C42 privValidator="PrivValidator{125B0E3C5512F5C2B0E1109E31885C4511570C42 LH:90, LR:0, LS:3}"
I[10-04|13:54:30.394] Signed proposal module=consensus height=91 round=0 proposal="Proposal{91/0 1:21B79872514F (-1,:0:000000000000) {/10EDEDD7C84E.../}}"
I[10-04|13:54:30.397] Received complete proposal block module=consensus height=91 hash=F671D562C7B9242900A286E1882EE64E5556FE9E
I[10-04|13:54:30.397] enterPrevote(91/0). Current: 91/0/RoundStepPropose module=consensus
I[10-04|13:54:30.397] enterPrevote: ProposalBlock is valid module=consensus height=91 round=0
I[10-04|13:54:30.398] Signed and pushed vote module=consensus height=91 round=0 vote="Vote{0:125B0E3C5512 91/00/1(Prevote) F671D562C7B9 {/89047FFC21D8.../}}" err=null
I[10-04|13:54:30.401] Added to prevote module=consensus vote="Vote{0:125B0E3C5512 91/00/1(Prevote) F671D562C7B9 {/89047FFC21D8.../}}" prevotes="VoteSet{H:91 R:0 T:1 +2/3:F671D562C7B9242900A286E1882EE64E5556FE9E:1:21B79872514F BA{1:X} map[]}"
I[10-04|13:54:30.401] enterPrecommit(91/0). Current: 91/0/RoundStepPrevote module=consensus
I[10-04|13:54:30.401] enterPrecommit: +2/3 prevoted proposal block. Locking module=consensus hash=F671D562C7B9242900A286E1882EE64E5556FE9E
I[10-04|13:54:30.402] Signed and pushed vote module=consensus height=91 round=0 vote="Vote{0:125B0E3C5512 91/00/2(Precommit) F671D562C7B9 {/80533478E41A.../}}" err=null
I[10-04|13:54:30.404] Added to precommit module=consensus vote="Vote{0:125B0E3C5512 91/00/2(Precommit) F671D562C7B9 {/80533478E41A.../}}" precommits="VoteSet{H:91 R:0 T:2 +2/3:F671D562C7B9242900A286E1882EE64E5556FE9E:1:21B79872514F BA{1:X} map[]}"
I[10-04|13:54:30.404] enterCommit(91/0). Current: 91/0/RoundStepPrecommit module=consensus
I[10-04|13:54:30.405] Finalizing commit of block with 0 txs module=consensus height=91 hash=F671D562C7B9242900A286E1882EE64E5556FE9E root=E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD
I[10-04|13:54:30.405] Block{
Header{
ChainID: test-chain-3MNw2N
Height: 91
Time: 2017-10-04 13:54:30.393 +0000 UTC
NumTxs: 0
LastBlockID: F15AB8BEF9A6AAB07E457A6E16BC410546AA4DC6:1:D505DA273544
LastCommit: 56FEF2EFDB8B37E9C6E6D635749DF3169D5F005D
Data:
Validators: CE25FBFF2E10C0D51AA1A07C064A96931BC8B297
App: E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD
}#F671D562C7B9242900A286E1882EE64E5556FE9E
Data{
}#
Commit{
BlockID: F15AB8BEF9A6AAB07E457A6E16BC410546AA4DC6:1:D505DA273544
Precommits: Vote{0:125B0E3C5512 90/00/2(Precommit) F15AB8BEF9A6 {/FE98E2B956F0.../}}
}#56FEF2EFDB8B37E9C6E6D635749DF3169D5F005D
}#F671D562C7B9242900A286E1882EE64E5556FE9E module=consensus
I[10-04|13:54:30.408] Executed block module=state height=91 validTxs=0 invalidTxs=0
I[10-04|13:54:30.410] Committed state module=state height=91 txs=0 hash=E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD
I[10-04|13:54:30.410] Recheck txs module=mempool numtxs=0 height=91
## List of modules
Here is the list of modules you may encounter in Tendermint's log and a
little overview of what they do.
- `abci-client` As mentioned in [Application Development Guide](app-development.md#abci-design), Tendermint acts as an ABCI
client with respect to the application and maintains 3 connections:
mempool, consensus and query. The code used by Tendermint Core can
be found [here](https://github.com/tendermint/abci/tree/master/client).
- `blockchain` Provides storage, pool (a group of peers), and reactor
for both storing and exchanging blocks between peers.
- `consensus` The heart of Tendermint core, which is the
implementation of the consensus algorithm. Includes two
"submodules": `wal` (write-ahead logging) for ensuring data
integrity and `replay` to replay blocks and messages on recovery
from a crash.
- `events` Simple event notification system. The list of events can be
found
[here](https://github.com/tendermint/tendermint/blob/master/types/events.go).
You can subscribe to them by calling `subscribe` RPC method. Refer
to [RPC docs](specification/rpc.html) for additional information.
- `mempool` The mempool module handles all incoming transactions, whether
they come from peers or the application.
- `p2p` Provides an abstraction around peer-to-peer communication. For
more details, please check out the
[README](https://github.com/tendermint/tendermint/blob/master/p2p/README.md).
- `rpc` [Tendermint's RPC](specification/rpc.html).
- `rpc-server` RPC server. For implementation details, please read the
[README](https://github.com/tendermint/tendermint/blob/master/rpc/lib/README.md).
- `state` Represents the latest state and execution submodule, which
executes blocks against the application.
- `types` A collection of the publicly exposed types and methods to
work with them.

Binary file not shown.


73
docs/index.rst Normal file
View File

@ -0,0 +1,73 @@
.. Tendermint documentation master file, created by
sphinx-quickstart on Mon Aug 7 04:55:09 2017.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to Tendermint!
======================
.. image:: assets/tmint-logo-blue.png
:height: 200px
:width: 200px
:align: center
Introduction
------------
.. toctree::
:maxdepth: 1
introduction.md
install.md
getting-started.md
using-tendermint.md
deploy-testnets.md
ecosystem.md
Tendermint Tools
----------------
.. the tools/ files are pulled in from the tools repo
.. see the bottom of conf.py
.. toctree::
:maxdepth: 1
tools/docker.md
terraform-and-ansible.md
tools/benchmarking.md
tools/monitoring.md
ABCI, Apps, Logging, Etc
------------------------
.. toctree::
:maxdepth: 1
abci-cli.md
abci-spec.md
app-architecture.md
app-development.md
subscribing-to-events-via-websocket.md
indexing-transactions.md
how-to-read-logs.md
running-in-production.md
metrics.md
Research & Specification
------------------------
.. toctree::
:maxdepth: 1
determinism.md
transactional-semantics.md
.. specification.md ## keep this file for legacy purpose. needs to be fixed though
* For a deeper dive, see `this thesis <https://atrium.lib.uoguelph.ca/xmlui/handle/10214/9769>`__.
* There is also the `original whitepaper <https://tendermint.com/static/docs/tendermint.pdf>`__, though it is now quite outdated.
* Readers might also be interested in the `Cosmos Whitepaper <https://cosmos.network/whitepaper>`__ which describes Tendermint, ABCI, and how to build a scalable, heterogeneous, cryptocurrency network.
* For example applications and related software built by the Tendermint team and other, see the `software ecosystem <https://tendermint.com/ecosystem>`__.
Join the `community <https://cosmos.network/community>`__ to ask questions and discuss projects.

View File

@ -0,0 +1,89 @@
# Indexing Transactions
Tendermint allows you to index transactions and later query or subscribe
to their results.
Let's take a look at the `[tx_index]` config section:
##### transactions indexer configuration options #####
[tx_index]
# What indexer to use for transactions
#
# Options:
# 1) "null" (default)
# 2) "kv" - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
indexer = "kv"
# Comma-separated list of tags to index (by default the only tag is tx hash)
#
# It's recommended to index only a subset of tags due to possible memory
# bloat. This, of course, depends on the indexer's DB and the volume of
# transactions.
index_tags = ""
# When set to true, tells indexer to index all tags. Note this may not be
# desirable (see the comment above). IndexTags takes precedence over
# IndexAllTags (i.e. when given both, IndexTags will be indexed).
index_all_tags = false
By default, Tendermint will index all transactions by their respective
hashes using an embedded simple indexer. Note, we are planning to add
more options in the future (e.g., Postgresql indexer).
## Adding tags
In your application's `DeliverTx` method, add the `Tags` field with the
pairs of UTF-8 encoded strings (e.g. "account.owner": "Bob", "balance":
"100.0", "date": "2018-01-02").
Example:
func (app *KVStoreApplication) DeliverTx(tx []byte) types.Result {
...
tags := []cmn.KVPair{
{[]byte("account.name"), []byte("igor")},
{[]byte("account.address"), []byte("0xdeadbeef")},
{[]byte("tx.amount"), []byte("7")},
}
return types.ResponseDeliverTx{Code: code.CodeTypeOK, Tags: tags}
}
If you want Tendermint to only index transactions by the "account.name" tag,
set `tx_index.index_tags="account.name"` in the config. If you want to index
all tags, set `index_all_tags=true`.
Note, there are a few predefined tags:
- `tm.event` (event type)
- `tx.hash` (transaction's hash)
- `tx.height` (height of the block transaction was committed in)
Tendermint will throw a warning if you try to use any of the above keys.
## Querying transactions
You can query the transaction results by calling `/tx_search` RPC
endpoint:
curl "localhost:26657/tx_search?query=\"account.name='igor'\"&prove=true"
Check out [API docs](https://tendermint.github.io/slate/?shell#txsearch)
for more information on query syntax and other options.
## Subscribing to transactions
Clients can subscribe to transactions with the given tags via Websocket
by providing a query to `/subscribe` RPC endpoint.
{
"jsonrpc": "2.0",
"method": "subscribe",
"id": "0",
"params": {
"query": "account.name='igor'"
}
}
Check out [API docs](https://tendermint.github.io/slate/#subscribe) for
more information on query syntax and other options.

76
docs/install.md Normal file
View File

@ -0,0 +1,76 @@
# Install Tendermint
The fastest and easiest way to install the `tendermint` binary
is to run [this script](https://github.com/tendermint/tendermint/blob/develop/scripts/install/install_tendermint_ubuntu.sh) on
a fresh Ubuntu instance,
or [this script](https://github.com/tendermint/tendermint/blob/develop/scripts/install/install_tendermint_bsd.sh)
on a fresh FreeBSD instance. Read the comments / instructions carefully (i.e., reset your terminal after running the script,
and make sure you're okay with the network connections being made).
## From Binary
To download pre-built binaries, see the [releases page](https://github.com/tendermint/tendermint/releases).
## From Source
You'll need `go` [installed](https://golang.org/doc/install) and the required
[environment variables set](https://github.com/tendermint/tendermint/wiki/Setting-GOPATH).
### Get Source Code
```
mkdir -p $GOPATH/src/github.com/tendermint
cd $GOPATH/src/github.com/tendermint
git clone https://github.com/tendermint/tendermint.git
cd tendermint
```
### Get Tools & Dependencies
```
make get_tools
make get_vendor_deps
```
### Compile
```
make install
```
to put the binary in `$GOPATH/bin` or use:
```
make build
```
to put the binary in `./build`.
The latest `tendermint version` is now installed.
## Reinstall
If you already have Tendermint installed, and you make updates, simply
```
cd $GOPATH/src/github.com/tendermint/tendermint
make install
```
To upgrade, run
```
cd $GOPATH/src/github.com/tendermint/tendermint
git pull origin master
make get_vendor_deps
make install
```
## Run
To start a one-node blockchain with a simple in-process application:
```
tendermint init
tendermint node --proxy_app=kvstore
```

331
docs/introduction.md Normal file
View File

@ -0,0 +1,331 @@
# What is Tendermint?
Tendermint is software for securely and consistently replicating an
application on many machines. By securely, we mean that Tendermint works
even if up to 1/3 of machines fail in arbitrary ways. By consistently,
we mean that every non-faulty machine sees the same transaction log and
computes the same state. Secure and consistent replication is a
fundamental problem in distributed systems; it plays a critical role in
the fault tolerance of a broad range of applications, from currencies,
to elections, to infrastructure orchestration, and beyond.
The ability to tolerate machines failing in arbitrary ways, including
becoming malicious, is known as Byzantine fault tolerance (BFT). The
theory of BFT is decades old, but software implementations have only
become popular recently, due largely to the success of "blockchain
technology" like Bitcoin and Ethereum. Blockchain technology is just a
reformalization of BFT in a more modern setting, with emphasis on
peer-to-peer networking and cryptographic authentication. The name
derives from the way transactions are batched in blocks, where each
block contains a cryptographic hash of the previous one, forming a
chain. In practice, the blockchain data structure actually optimizes BFT
design.
Tendermint consists of two chief technical components: a blockchain
consensus engine and a generic application interface. The consensus
engine, called Tendermint Core, ensures that the same transactions are
recorded on every machine in the same order. The application interface,
called the Application BlockChain Interface (ABCI), enables the
transactions to be processed in any programming language. Unlike other
blockchain and consensus solutions, which come pre-packaged with built-in
state machines (like a fancy key-value store, or a quirky scripting
language), developers can use Tendermint for BFT state machine
replication of applications written in whatever programming language and
development environment is right for them.
Tendermint is designed to be easy-to-use, simple-to-understand, highly
performant, and useful for a wide variety of distributed applications.
## Tendermint vs. X
Tendermint is broadly similar to two classes of software. The first
class consists of distributed key-value stores, like Zookeeper, etcd,
and consul, which use non-BFT consensus. The second class is known as
"blockchain technology", and consists of both cryptocurrencies like
Bitcoin and Ethereum, and alternative distributed ledger designs like
Hyperledger's Burrow.
### Zookeeper, etcd, consul
Zookeeper, etcd, and consul are all implementations of a key-value store
atop a classical, non-BFT consensus algorithm. Zookeeper uses a version
of Paxos called Zookeeper Atomic Broadcast, while etcd and consul use
the Raft consensus algorithm, which is much younger and simpler. A
typical cluster contains 3-5 machines, and can tolerate crash failures
in up to 1/2 of the machines, but even a single Byzantine fault can
destroy the system.
Each offering provides a slightly different implementation of a
featureful key-value store, but all are generally focused around
providing basic services to distributed systems, such as dynamic
configuration, service discovery, locking, leader-election, and so on.
Tendermint is in essence similar software, but with two key differences:
- It is Byzantine Fault Tolerant, meaning it can only tolerate up to
1/3 of failures, but those failures can include arbitrary behaviour,
including hacking and malicious attacks.
- It does not specify a particular application, like a fancy key-value
store. Instead, it focuses on arbitrary state machine replication, so
developers can build the application logic that's right for them, from
key-value store to cryptocurrency to e-voting platform and beyond.
The layout of this Tendermint website content is also ripped directly
and without shame from [consul.io](https://www.consul.io/) and the other
[Hashicorp sites](https://www.hashicorp.com/#tools).
### Bitcoin, Ethereum, etc.
Tendermint emerged in the tradition of cryptocurrencies like Bitcoin,
Ethereum, etc. with the goal of providing a more efficient and secure
consensus algorithm than Bitcoin's Proof of Work. In the early days,
Tendermint had a simple currency built in, and to participate in
consensus, users had to "bond" units of the currency into a security
deposit which could be revoked if they misbehaved - this is what made
Tendermint a Proof-of-Stake algorithm.
Since then, Tendermint has evolved to be a general purpose blockchain
consensus engine that can host arbitrary application states. That means
it can be used as a plug-and-play replacement for the consensus engines
of other blockchain software. So one can take the current Ethereum code
base, whether in Rust, or Go, or Haskell, and run it as an ABCI
application using Tendermint consensus. Indeed, [we did that with
Ethereum](https://github.com/tendermint/ethermint). And we plan to do
the same for Bitcoin, ZCash, and various other deterministic
applications as well.
Another example of a cryptocurrency application built on Tendermint is
[the Cosmos network](http://cosmos.network).
### Other Blockchain Projects
[Fabric](https://github.com/hyperledger/fabric) takes a similar approach
to Tendermint, but is more opinionated about how the state is managed,
and requires that all application behaviour runs in potentially many
docker containers, modules it calls "chaincode". It uses an
implementation of [PBFT](http://pmg.csail.mit.edu/papers/osdi99.pdf)
from a team at IBM that is [augmented to handle potentially
non-deterministic
chaincode](https://www.zurich.ibm.com/~cca/papers/sieve.pdf). It is
possible to implement this docker-based behaviour as an ABCI app in
Tendermint, though extending Tendermint to handle non-determinism
remains for future work.
[Burrow](https://github.com/hyperledger/burrow) is an implementation of
the Ethereum Virtual Machine and Ethereum transaction mechanics, with
additional features for a name-registry, permissions, and native
contracts, and an alternative blockchain API. It uses Tendermint as its
consensus engine, and provides a particular application state.
## ABCI Overview
The [Application BlockChain Interface
(ABCI)](https://github.com/tendermint/abci) allows for Byzantine Fault
Tolerant replication of applications written in any programming
language.
### Motivation
Thus far, all blockchain "stacks" (such as
[Bitcoin](https://github.com/bitcoin/bitcoin)) have had a monolithic
design. That is, each blockchain stack is a single program that handles
all the concerns of a decentralized ledger; this includes P2P
connectivity, the "mempool" broadcasting of transactions, consensus on
the most recent block, account balances, Turing-complete contracts,
user-level permissions, etc.
Using a monolithic architecture is typically bad practice in computer
science. It makes it difficult to reuse components of the code, and
attempts to do so result in complex maintenance procedures for forks of
the codebase. This is especially true when the codebase is not modular
in design and suffers from "spaghetti code".
Another problem with monolithic design is that it limits you to the
language of the blockchain stack (or vice versa). In the case of
Ethereum which supports a Turing-complete bytecode virtual-machine, it
limits you to languages that compile down to that bytecode; today, those
are Serpent and Solidity.
In contrast, our approach is to decouple the consensus engine and P2P
layers from the details of the application state of the particular
blockchain application. We do this by abstracting away the details of
the application to an interface, which is implemented as a socket
protocol.
Thus we have an interface, the Application BlockChain Interface (ABCI),
and its primary implementation, the Tendermint Socket Protocol (TSP, or
Teaspoon).
### Intro to ABCI
[Tendermint Core](https://github.com/tendermint/tendermint) (the
"consensus engine") communicates with the application via a socket
protocol that satisfies the [ABCI](https://github.com/tendermint/abci).
To draw an analogy, let's talk about a well-known cryptocurrency,
Bitcoin. Bitcoin is a cryptocurrency blockchain where each node
maintains a fully audited Unspent Transaction Output (UTXO) database. If
one wanted to create a Bitcoin-like system on top of ABCI, Tendermint
Core would be responsible for
- Sharing blocks and transactions between nodes
- Establishing a canonical/immutable order of transactions
(the blockchain)
The application will be responsible for
- Maintaining the UTXO database
- Validating cryptographic signatures of transactions
- Preventing transactions from spending non-existent transactions
- Allowing clients to query the UTXO database.
Tendermint is able to decompose the blockchain design by offering a very
simple API (ie. the ABCI) between the application process and consensus
process.
The ABCI consists of 3 primary message types that get delivered from the
core to the application. The application replies with corresponding
response messages.
The messages are specified here: [ABCI Message
Types](https://github.com/tendermint/abci#message-types).
The **DeliverTx** message is the work horse of the application. Each
transaction in the blockchain is delivered with this message. The
application needs to validate each transaction received with the
**DeliverTx** message against the current state, application protocol,
and the cryptographic credentials of the transaction. A validated
transaction then needs to update the application state, by binding a
value into a key-value store, or by updating the UTXO database, for
instance.
The **CheckTx** message is similar to **DeliverTx**, but it's only for
validating transactions. Tendermint Core's mempool first checks the
validity of a transaction with **CheckTx**, and only relays valid
transactions to its peers. For instance, an application may check an
incrementing sequence number in the transaction and return an error upon
**CheckTx** if the sequence number is old. Alternatively, they might use
a capabilities based system that requires capabilities to be renewed
with every transaction.
The **Commit** message is used to compute a cryptographic commitment to
the current application state, to be placed into the next block header.
This has some handy properties. Inconsistencies in updating that state
will now appear as blockchain forks which catches a whole class of
programming errors. This also simplifies the development of secure
lightweight clients, as Merkle-hash proofs can be verified by checking
against the block hash, and that the block hash is signed by a quorum.
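As a rough illustration of the idea (not the API of any particular application), a commit step might serialize the application state and return a hash of it, which Tendermint then places in the next block header:

```go
package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
)

// toyState stands in for whatever state an application keeps.
type toyState struct {
	Balances map[string]int64 `json:"balances"`
}

// commit returns a deterministic digest of the current state. Tendermint
// places the returned hash in the next block header, so any divergence in
// state between replicas shows up as a fork.
func commit(s toyState) []byte {
	bz, err := json.Marshal(s) // map keys are marshaled in sorted order, so this is deterministic
	if err != nil {
		panic(err)
	}
	h := sha256.Sum256(bz)
	return h[:]
}

func main() {
	s := toyState{Balances: map[string]int64{"alice": 10, "bob": 5}}
	fmt.Printf("%X\n", commit(s))
}
```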
There can be multiple ABCI socket connections to an application.
Tendermint Core creates three ABCI connections to the application; one
for the validation of transactions when broadcasting in the mempool, one
for the consensus engine to run block proposals, and one more for
querying the application state.
It's probably evident that application designers need to very carefully
design their message handlers to create a blockchain that does anything
useful, but this architecture provides a place to start. The diagram
below illustrates the flow of messages via ABCI.
![](assets/abci.png)
## A Note on Determinism
The logic for blockchain transaction processing must be deterministic.
If the application logic weren't deterministic, consensus would not be
reached among the Tendermint Core replica nodes.
Solidity on Ethereum is a great language of choice for blockchain
applications because, among other reasons, it is a completely
deterministic programming language. However, it's also possible to
create deterministic applications using existing popular languages like
Java, C++, Python, or Go. Game programmers and blockchain developers are
already familiar with creating deterministic programs by avoiding
sources of non-determinism such as:
- random number generators (without deterministic seeding)
- race conditions on threads (or avoiding threads altogether)
- the system clock
- uninitialized memory (in unsafe programming languages like C
or C++)
- [floating point
arithmetic](http://gafferongames.com/networking-for-game-programmers/floating-point-determinism/)
- language features that are random (e.g. map iteration in Go)
While programmers can avoid non-determinism by being careful, it is also
possible to create a special linter or static analyzer for each language
to check for determinism. In the future we may work with partners to
create such tools.
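For example, map iteration order in Go is randomized, so a handler that walks a map directly is non-deterministic; sorting the keys first restores a reproducible order. A small illustrative sketch:

```go
package main

import (
	"fmt"
	"sort"
)

// processBalances walks a map in a fixed order. Iterating the map directly
// would visit keys in a randomized order on each run, so any state update
// that depends on that order would differ between replicas.
func processBalances(balances map[string]int64) []string {
	keys := make([]string, 0, len(balances))
	for k := range balances {
		keys = append(keys, k)
	}
	sort.Strings(keys) // fixed order on every node

	out := make([]string, 0, len(keys))
	for _, k := range keys {
		out = append(out, fmt.Sprintf("%s=%d", k, balances[k]))
	}
	return out
}

func main() {
	fmt.Println(processBalances(map[string]int64{"bob": 100, "alice": 42}))
}
```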
## Consensus Overview
Tendermint is an easy-to-understand, mostly asynchronous, BFT consensus
protocol. The protocol follows a simple state machine that looks like
this:
![](assets/consensus_logic.png)
Participants in the protocol are called **validators**; they take turns
proposing blocks of transactions and voting on them. Blocks are
committed in a chain, with one block at each **height**. A block may
fail to be committed, in which case the protocol moves to the next
**round**, and a new validator gets to propose a block for that height.
Two stages of voting are required to successfully commit a block; we
call them **pre-vote** and **pre-commit**. A block is committed when
more than 2/3 of validators pre-commit for the same block in the same
round.
There is a picture of a couple doing the polka because validators are
doing something like a polka dance. When more than two-thirds of the
validators pre-vote for the same block, we call that a **polka**. Every
pre-commit must be justified by a polka in the same round.
Validators may fail to commit a block for a number of reasons; the
current proposer may be offline, or the network may be slow. Tendermint
allows them to establish that a validator should be skipped. Validators
wait a small amount of time to receive a complete proposal block from
the proposer before voting to move to the next round. This reliance on a
timeout is what makes Tendermint a weakly synchronous protocol, rather
than an asynchronous one. However, the rest of the protocol is
asynchronous, and validators only make progress after hearing from more
than two-thirds of the validator set. A simplifying element of
Tendermint is that it uses the same mechanism to commit a block as it
does to skip to the next round.
Assuming less than one-third of the validators are Byzantine, Tendermint
guarantees that safety will never be violated - that is, validators will
never commit conflicting blocks at the same height. To do this it
introduces a few **locking** rules which modulate which paths can be
followed in the flow diagram. Once a validator precommits a block, it is
locked on that block. Then,
1) it must prevote for the block it is locked on
2) it can only unlock, and precommit for a new block, if there is a
polka for that block in a later round
## Stake
In many systems, not all validators will have the same "weight" in the
consensus protocol. Thus, we are not so much interested in one-third or
two-thirds of the validators, but in those proportions of the total
voting power, which may not be uniformly distributed across individual
validators.
Since Tendermint can replicate arbitrary applications, it is possible to
define a currency, and denominate the voting power in that currency.
When voting power is denominated in a native currency, the system is
often referred to as Proof-of-Stake. Validators can be forced, by logic
in the application, to "bond" their currency holdings in a security
deposit that can be destroyed if they're found to misbehave in the
consensus protocol. This adds an economic element to the security of the
protocol, allowing one to quantify the cost of violating the assumption
that less than one-third of voting power is Byzantine.
The [Cosmos Network](http://cosmos.network) is designed to use this
Proof-of-Stake mechanism across an array of cryptocurrencies implemented
as ABCI applications.
The following diagram is Tendermint in a (technical) nutshell. [See here
for high resolution
version](https://github.com/mobfoundry/hackatom/blob/master/tminfo.pdf).
![](assets/tm-transaction-flow.png)

40
docs/metrics.md Normal file
View File

@ -0,0 +1,40 @@
# Metrics
Tendermint can report and serve Prometheus metrics, which in turn can
be consumed by Prometheus collector(s).
This functionality is disabled by default.
To enable the Prometheus metrics, set `instrumentation.prometheus=true` in your
config file. Metrics will be served under `/metrics` on port 26660 by default.
The listen address can be changed in the config file (see
`prometheus_listen_addr`).
## List of available metrics
The following metrics are available:
| Name | Type | Since | Description |
| --------------------------------------- | ------- | --------- | ----------------------------------------------------------------------------- |
| consensus_height | Gauge | 0.20.1 | Height of the chain |
| consensus_validators | Gauge | 0.20.1 | Number of validators |
| consensus_validators_power | Gauge | 0.20.1 | Total voting power of all validators |
| consensus_missing_validators | Gauge | 0.20.1 | Number of validators who did not sign |
| consensus_missing_validators_power | Gauge | 0.20.1 | Total voting power of the missing validators |
| consensus_byzantine_validators | Gauge | 0.20.1 | Number of validators who tried to double sign |
| consensus_byzantine_validators_power | Gauge | 0.20.1 | Total voting power of the byzantine validators |
| consensus_block_interval_seconds | Histogram | 0.20.1 | Time between this and last block (Block.Header.Time) in seconds |
| consensus_rounds | Gauge | 0.20.1 | Number of rounds |
| consensus_num_txs | Gauge | 0.20.1 | Number of transactions |
| mempool_size | Gauge | 0.20.1 | Number of uncommitted transactions |
| consensus_total_txs | Gauge | 0.20.1 | Total number of transactions committed |
| consensus_block_size_bytes | Gauge | 0.20.1 | Block size in bytes |
| p2p_peers | Gauge | 0.20.1 | Number of peers node's connected to |
## Useful queries
Percentage of missing + byzantine validators:
```
((consensus_byzantine_validators_power + consensus_missing_validators_power) / consensus_validators_power) * 100
```
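Assuming the standard `_sum`/`_count` series that Prometheus exposes for histograms, the average block interval over the last five minutes could be computed as:

```
rate(consensus_block_interval_seconds_sum[5m]) / rate(consensus_block_interval_seconds_count[5m])
```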

4
docs/requirements.txt Normal file
View File

@ -0,0 +1,4 @@
sphinx
sphinx-autobuild
recommonmark
sphinx_rtd_theme

View File

@ -0,0 +1,206 @@
# Running in production
## Logging
The default logging level (`main:info,state:info,*:error`) should suffice for
normal operation mode. Read [this
post](https://blog.cosmos.network/one-of-the-exciting-new-features-in-0-10-0-release-is-smart-log-level-flag-e2506b4ab756)
for details on how to configure the `log_level` config variable. Some of the
modules can be found [here](./how-to-read-logs.md#list-of-modules). If
you're trying to debug Tendermint or are asked to provide logs with debug
logging level, you can do so by running tendermint with
`--log_level="*:debug"`.
## DOS Exposure and Mitigation
Validators are supposed to set up [Sentry Node
Architecture](https://blog.cosmos.network/tendermint-explained-bringing-bft-based-pos-to-the-public-blockchain-domain-f22e274a0fdb)
to prevent Denial-of-service attacks. You can read more about it
[here](https://github.com/tendermint/aib-data/blob/develop/medium/TendermintBFT.md).
### P2P
The core of the Tendermint peer-to-peer system is `MConnection`. Each
connection has `MaxPacketMsgPayloadSize`, which is the maximum packet
size, as well as bounded send & receive queues. One can impose restrictions on
send & receive rate per connection (`SendRate`, `RecvRate`).
### RPC
Endpoints returning multiple entries are limited by default to return 30
elements (100 max).
Rate-limiting and authentication are other key aspects that help protect
against DOS attacks. While in the future we may implement these
features, for now, validators are supposed to use external tools like
[NGINX](https://www.nginx.com/blog/rate-limiting-nginx/) or
[traefik](https://docs.traefik.io/configuration/commons/#rate-limiting)
to achieve the same things.
## Debugging Tendermint
If you ever have to debug Tendermint, the first thing you should
probably do is to check out the logs. See ["How to read
logs"](./how-to-read-logs.md), where we explain what certain log
statements mean.
If, after skimming through the logs, things are still not clear, the
next thing to do is to query the `/status` RPC endpoint. It provides the
necessary info: whether the node is syncing or not, what height it is
on, etc.
$ curl http(s)://{ip}:{rpcPort}/status
`dump_consensus_state` will give you a detailed overview of the
consensus state (proposer, latest validators, peer states). From it,
you should be able to figure out why, for example, the network had
halted.
$ curl http(s)://{ip}:{rpcPort}/dump_consensus_state
There is a reduced version of this endpoint - `consensus_state`, which
returns just the votes seen at the current height.
- [Github Issues](https://github.com/tendermint/tendermint/issues)
- [StackOverflow
questions](https://stackoverflow.com/questions/tagged/tendermint)
## Monitoring Tendermint
Each Tendermint instance has a standard `/health` RPC endpoint, which
responds with 200 (OK) if everything is fine and 500 (or no response)
if something is wrong.
Other useful endpoints include the previously mentioned `/status`, as well as
`/net_info` and `/validators`.
We have a small tool, called `tm-monitor`, which outputs information from
the endpoints above plus some statistics. The tool can be found
[here](https://github.com/tendermint/tools/tree/master/tm-monitor).
## What happens when my app dies?
You are supposed to run Tendermint under a [process
supervisor](https://en.wikipedia.org/wiki/Process_supervision) (like
systemd or runit). It will ensure Tendermint is always running (despite
possible errors).
Getting back to the original question, if your application dies,
Tendermint will panic. After a process supervisor restarts your
application, Tendermint should be able to reconnect successfully. The
order of restart does not matter.
## Signal handling
We catch SIGINT and SIGTERM and try to clean up nicely. For other
signals we use the default behaviour in Go: [Default behavior of signals
in Go
programs](https://golang.org/pkg/os/signal/#hdr-Default_behavior_of_signals_in_Go_programs).
## Hardware
### Processor and Memory
While actual specs vary depending on the load and validator count,
minimal requirements are:
- 1GB RAM
- 25GB of disk space
- 1.4 GHz CPU
SSD disks are preferable for applications with high transaction
throughput.
Recommended:
- 2GB RAM
- 100GB SSD
- x64 2.0 GHz 2v CPU
For now, Tendermint stores all the history, which may require
significant disk space over time. We are planning to implement state
syncing (see
[this issue](https://github.com/tendermint/tendermint/issues/828)), after which
storing all the past blocks will not be necessary.
### Operating Systems
Tendermint can be compiled for a wide range of operating systems thanks
to the Go language (the list of \$OS/\$ARCH pairs can be found
[here](https://golang.org/doc/install/source#environment)).
While we do not favor any operating system, more secure and stable Linux
server distributions (like CentOS) should be preferred over desktop
operating systems (like Mac OS).
### Miscellaneous
NOTE: if you are going to use Tendermint in a public domain, make sure
you read [hardware recommendations (see "4.
Hardware")](https://cosmos.network/validators) for a validator in the
Cosmos network.
## Configuration parameters
- `p2p.flush_throttle_timeout` `p2p.max_packet_msg_payload_size`
`p2p.send_rate` `p2p.recv_rate`
If you are going to use Tendermint in a private domain and you have a
private high-speed network among your peers, it makes sense to lower the
flush throttle timeout and increase the other params; a consolidated
example is shown at the end of this section.
[p2p]
send_rate=20000000 # 2MB/s
recv_rate=20000000 # 2MB/s
flush_throttle_timeout=10
max_packet_msg_payload_size=10240 # 10KB
- `mempool.recheck`
After every block, Tendermint rechecks every transaction left in the
mempool to see if transactions committed in that block affected the
application state, so some of the transactions left may become invalid.
If that does not apply to your application, you can disable it by
setting `mempool.recheck=false`.
- `mempool.broadcast`
Setting this to false will stop the mempool from relaying transactions
to other peers until they are included in a block. It means only the
peer you send the tx to will see it until it is included in a block.
- `consensus.skip_timeout_commit`
We want `skip_timeout_commit=false` when there is economics on the line
because proposers should wait for more votes. But if you don't
care about that and want the fastest consensus, you can skip it. It will
be kept false by default for public deployments (e.g. [Cosmos
Hub](https://cosmos.network/intro/hub)) while for enterprise
applications, setting it to true is not a problem.
- `consensus.peer_gossip_sleep_duration`
You can try to reduce the time your node sleeps before checking if
there's something to send its peers.
- `consensus.timeout_commit`
You can also try lowering `timeout_commit` (time we sleep before
proposing the next block).
- `consensus.max_block_size_txs`
By default, the maximum number of transactions per block is 10,000.
Feel free to change it to suit your needs.
- `p2p.addr_book_strict`
By default, Tendermint checks whether a peer's address is routable before
saving it to the address book. The address is considered routable if the IP
is [valid and within allowed
ranges](https://github.com/tendermint/tendermint/blob/27bd1deabe4ba6a2d9b463b8f3e3f1e31b993e61/p2p/netaddress.go#L209).
This may not be the case for private networks, where your IP range is usually
strictly limited and private. If that is the case, you need to set `addr_book_strict`
to `false` (turn it off).
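Pulling the parameters above together, a consolidated sketch for a private,
high-speed network could look like this (all values are illustrative and
should be tuned to your own environment):

[p2p]
addr_book_strict=false
flush_throttle_timeout=10
max_packet_msg_payload_size=10240 # 10KB
send_rate=20000000 # 2MB/s
recv_rate=20000000 # 2MB/s
[mempool]
recheck=false
[consensus]
skip_timeout_commit=true
timeout_commit=500
peer_gossip_sleep_duration=50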

80
docs/spec/README.md Normal file
View File

@ -0,0 +1,80 @@
# Tendermint Specification
This is a markdown specification of the Tendermint blockchain.
It defines the base data structures, how they are validated,
and how they are communicated over the network.
If you find discrepancies between the spec and the code that
do not have an associated issue or pull request on github,
please submit them to our [bug bounty](https://tendermint.com/security)!
## Contents
- [Overview](#overview)
### Data Structures
- [Encoding and Digests](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/encoding.md)
- [Blockchain](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/blockchain.md)
- [State](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/state.md)
### Consensus Protocol
- [Consensus Algorithm](/docs/spec/consensus/consensus.md)
- [Time](/docs/spec/consensus/bft-time.md)
- [Light-Client](/docs/spec/consensus/light-client.md)
### P2P and Network Protocols
- [The Base P2P Layer](https://github.com/tendermint/tendermint/tree/master/docs/spec/p2p): multiplex the protocols ("reactors") on authenticated and encrypted TCP connections
- [Peer Exchange (PEX)](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/pex): gossip known peer addresses so peers can find each other
- [Block Sync](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/block_sync): gossip blocks so peers can catch up quickly
- [Consensus](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/consensus): gossip votes and block parts so new blocks can be committed
- [Mempool](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/mempool): gossip transactions so they get included in blocks
- Evidence: TODO
### Software
- [ABCI](/docs/spec/software/abci.md): Details about interactions between the
application and consensus engine over ABCI
- [Write-Ahead Log](/docs/spec/software/wal.md): Details about how the consensus
engine preserves data and recovers from crash failures
## Overview
Tendermint provides Byzantine Fault Tolerant State Machine Replication using
hash-linked batches of transactions. Such transaction batches are called "blocks".
Hence, Tendermint defines a "blockchain".
Each block in Tendermint has a unique index - its Height.
Heights in the blockchain are monotonic.
Each block is committed by a known set of weighted Validators.
Membership and weighting within this validator set may change over time.
Tendermint guarantees the safety and liveness of the blockchain
so long as less than 1/3 of the total weight of the Validator set
is malicious or faulty.
A commit in Tendermint is a set of signed messages from more than 2/3 of
the total weight of the current Validator set. Validators take turns proposing
blocks and voting on them. Once enough votes are received, the block is considered
committed. These votes are included in the *next* block as proof that the previous block
was committed - they cannot be included in the current block, as that block has already been
created.
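To make the 2/3 rule concrete, here is an illustrative Go sketch (using simplified stand-in types rather than the spec's actual definitions) that checks whether the validators who signed a commit carry a quorum of voting power:

```go
package main

import "fmt"

// Validator is a simplified stand-in; only voting power matters for this check.
type Validator struct {
	Address string
	Power   int64
}

// hasQuorum reports whether the validators who signed represent more than
// 2/3 of the total voting power of the current validator set.
func hasQuorum(valSet []Validator, signed map[string]bool) bool {
	var total, signedPower int64
	for _, v := range valSet {
		total += v.Power
		if signed[v.Address] {
			signedPower += v.Power
		}
	}
	// strictly greater than 2/3; written as 3*signedPower > 2*total to avoid division
	return 3*signedPower > 2*total
}

func main() {
	vals := []Validator{{"A", 10}, {"B", 10}, {"C", 10}, {"D", 10}}
	fmt.Println(hasQuorum(vals, map[string]bool{"A": true, "B": true, "C": true})) // true (30 of 40)
	fmt.Println(hasQuorum(vals, map[string]bool{"A": true, "B": true}))            // false (20 of 40)
}
```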
Once a block is committed, it can be executed against an application.
The application returns results for each of the transactions in the block.
The application can also return changes to be made to the validator set,
as well as a cryptographic digest of its latest state.
Tendermint is designed to enable efficient verification and authentication
of the latest state of the blockchain. To achieve this, it embeds
cryptographic commitments to certain information in the block "header".
This information includes the contents of the block (eg. the transactions),
the validator set committing the block, as well as the various results returned by the application.
Note, however, that block execution only occurs *after* a block is committed.
Thus, application results can only be included in the *next* block.
Also note that information like the transaction results and the validator set are never
directly included in the block - only their cryptographic digests (Merkle roots) are.
Hence, verification of a block requires a separate data structure to store this information.
We call this the `State`. Block verification also requires access to the previous block.

View File

@ -0,0 +1,431 @@
# Tendermint Blockchain
Here we describe the data structures in the Tendermint blockchain and the rules for validating them.
## Data Structures
The Tendermint blockchain consists of a short list of basic data types:
- `Block`
- `Header`
- `Vote`
- `BlockID`
- `Signature`
- `Evidence`
## Block
A block consists of a header, a list of transactions, a list of votes (the commit),
and a list of evidence of malfeasance (ie. signing conflicting votes).
```go
type Block struct {
Header Header
Txs [][]byte
LastCommit []Vote
Evidence []Evidence
}
```
## Header
A block header contains metadata about the block and about the consensus, as well as commitments to
the data in the current block, the previous block, and the results returned by the application:
```go
type Header struct {
// block metadata
Version string // Version string
ChainID string // ID of the chain
Height int64 // Current block height
Time int64 // UNIX time, in milliseconds
// current block
NumTxs int64 // Number of txs in this block
TxHash []byte // SimpleMerkle of the block.Txs
LastCommitHash []byte // SimpleMerkle of the block.LastCommit
// previous block
TotalTxs int64 // prevBlock.TotalTxs + block.NumTxs
LastBlockID BlockID // BlockID of prevBlock
// application
ResultsHash []byte // SimpleMerkle of []abci.Result from prevBlock
AppHash []byte // Arbitrary state digest
ValidatorsHash []byte // SimpleMerkle of the ValidatorSet
ConsensusParamsHash []byte // SimpleMerkle of the ConsensusParams
// consensus
Proposer []byte // Address of the block proposer
EvidenceHash []byte // SimpleMerkle of []Evidence
}
```
Further details on each of these fields is described below.
## BlockID
The `BlockID` contains two distinct Merkle roots of the block.
The first, used as the block's main hash, is the Merkle root
of all the fields in the header. The second, used for secure gossipping of
the block during consensus, is the Merkle root of the complete serialized block
cut into parts. The `BlockID` includes these two hashes, as well as the number of
parts.
```go
type BlockID struct {
Hash []byte
Parts PartsHeader
}
type PartsHeader struct {
Hash []byte
Total int32
}
```
## Vote
A vote is a signed message from a validator for a particular block.
The vote includes information about the validator signing it.
```go
type Vote struct {
Timestamp int64
Address []byte
Index int
Height int64
Round int
Type int8
BlockID BlockID
Signature Signature
}
```
There are two types of votes:
a *prevote* has `vote.Type == 1` and
a *precommit* has `vote.Type == 2`.
## Signature
Tendermint allows for multiple signature schemes to be used by prepending a single type-byte
to the signature bytes. Different signatures may also come with fixed or variable lengths.
Currently, Tendermint supports Ed25519 and Secp256k1.
### ED25519
An ED25519 signature has `Type == 0x1`. It looks like:
```go
// Implements Signature
type Ed25519Signature struct {
Type int8 = 0x1
Signature [64]byte
}
```
where `Signature` is the 64 byte signature.
### Secp256k1
A `Secp256k1` signature has `Type == 0x2`. It looks like:
```go
// Implements Signature
type Secp256k1Signature struct {
Type int8 = 0x2
Signature []byte
}
```
where `Signature` is the DER encoded signature, ie:
```hex
0x30 <length of whole message> 0x02 <length of R> <R> 0x02 <length of S> <S>
```
## Evidence
TODO
## Validation
Here we describe the validation rules for every element in a block.
Blocks which do not satisfy these rules are considered invalid.
We abuse notation by using something that looks like Go, supplemented with English.
A statement such as `x == y` is an assertion - if it fails, the item is invalid.
We refer to certain globally available objects:
`block` is the block under consideration,
`prevBlock` is the `block` at the previous height,
and `state` keeps track of the validator set, the consensus parameters
and other results from the application.
Elements of an object are accessed as expected,
ie. `block.Header`. See [here](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/state.md) for the definition of `state`.
### Header
A Header is valid if its corresponding fields are valid.
### Version
Arbitrary string.
### ChainID
Arbitrary constant string.
### Height
```go
block.Header.Height > 0
block.Header.Height == prevBlock.Header.Height + 1
```
The height is an incrementing integer. The first block has `block.Header.Height == 1`.
### Time
The median of the timestamps of the valid votes in the block.LastCommit.
Corresponds to the number of milliseconds since January 1, 1970 (ie. UNIX time with millisecond resolution).
Note: the timestamp of a vote must be greater by at least one millisecond than that of the
block being voted on.
### NumTxs
```go
block.Header.NumTxs == len(block.Txs)
```
Number of transactions included in the block.
### TxHash
```go
block.Header.TxHash == SimpleMerkleRoot(block.Txs)
```
Simple Merkle root of the transactions in the block.
### LastCommitHash
```go
block.Header.LastCommitHash == SimpleMerkleRoot(block.LastCommit)
```
Simple Merkle root of the votes included in the block.
These are the votes that committed the previous block.
The first block has `block.Header.LastCommitHash == []byte{}`
### TotalTxs
```go
block.Header.TotalTxs == prevBlock.Header.TotalTxs + block.Header.NumTxs
```
The cumulative sum of all transactions included in this blockchain.
The first block has `block.Header.TotalTxs == block.Header.NumTxs`.
### LastBlockID
LastBlockID is the previous block's BlockID:
```go
prevBlockParts := MakeParts(prevBlock, state.LastConsensusParams.BlockGossip.BlockPartSize)
block.Header.LastBlockID == BlockID {
Hash: SimpleMerkleRoot(prevBlock.Header),
Parts: PartsHeader{
Hash: SimpleMerkleRoot(prevBlockParts),
Total: len(prevBlockParts),
},
}
```
Note: it depends on the ConsensusParams,
which are held in the `state` and may be updated by the application.
The first block has `block.Header.LastBlockID == BlockID{}`.
### ResultsHash
```go
block.Header.ResultsHash == SimpleMerkleRoot(state.LastResults)
```
Simple Merkle root of the results of the transactions in the previous block.
The first block has `block.Header.ResultsHash == []byte{}`.
### AppHash
```go
block.Header.AppHash == state.AppHash
```
Arbitrary byte array returned by the application after executing and committing the previous block.
The first block has `block.Header.AppHash == []byte{}`.
### ValidatorsHash
```go
block.Header.ValidatorsHash == SimpleMerkleRoot(state.Validators)
```
Simple Merkle root of the current validator set that is committing the block.
This can be used to validate the `LastCommit` included in the next block.
May be updated by the application.
### ConsensusParamsHash
```go
block.Header.ConsensusParamsHash == SimpleMerkleRoot(state.ConsensusParams)
```
Simple Merkle root of the consensus parameters.
May be updated by the application.
### Proposer
```go
block.Header.Proposer in state.Validators
```
Original proposer of the block. Must be a current validator.
NOTE: we also need to track the round.
### EvidenceHash
```go
block.Header.EvidenceHash == SimpleMerkleRoot(block.Evidence)
```
Simple Merkle root of the evidence of Byzantine behaviour included in this block.
## Txs
Arbitrary length array of arbitrary length byte-arrays.
## LastCommit
The first height is an exception - it requires the LastCommit to be empty:
```go
if block.Header.Height == 1 {
len(block.LastCommit) == 0
}
```
Otherwise, we require:
```go
len(block.LastCommit) == len(state.LastValidators)
talliedVotingPower := 0
for i, vote := range block.LastCommit{
if vote == nil{
continue
}
vote.Type == 2
vote.Height == block.LastCommit.Height()
vote.Round == block.LastCommit.Round()
vote.BlockID == block.LastBlockID
val := state.LastValidators[i]
vote.Verify(block.ChainID, val.PubKey) == true
talliedVotingPower += val.VotingPower
}
talliedVotingPower > (2/3) * TotalVotingPower(state.LastValidators)
```
Includes one (possibly nil) vote for every current validator.
Non-nil votes must be Precommits.
All votes must be for the same height and round.
All votes must be for the previous block.
All votes must have a valid signature from the corresponding validator.
The sum total of the voting power of the validators that voted
must be greater than 2/3 of the total voting power of the complete validator set.
### Vote
A vote is a signed message broadcast in the consensus for a particular block at a particular height and round.
When stored in the blockchain or propagated over the network, votes are encoded in TMBIN.
For signing, votes are encoded in JSON, and the ChainID is included, in the form of the `CanonicalSignBytes`.
We define a method `Verify` that returns `true` if the signature verifies against the pubkey for the CanonicalSignBytes
using the given ChainID:
```go
func (v Vote) Verify(chainID string, pubKey PubKey) bool {
return pubKey.Verify(v.Signature, CanonicalSignBytes(chainID, v))
}
```
where `pubKey.Verify` performs the appropriate digital signature verification of the `pubKey`
against the given signature and message bytes.
## Evidence
There is currently only one kind of evidence:
```
// amino: "tendermint/DuplicateVoteEvidence"
type DuplicateVoteEvidence struct {
PubKey crypto.PubKey
VoteA *Vote
VoteB *Vote
}
```
DuplicateVoteEvidence `ev` is valid if
- `ev.VoteA` and `ev.VoteB` can be verified with `ev.PubKey`
- `ev.VoteA` and `ev.VoteB` have the same `Height, Round, Address, Index, Type`
- `ev.VoteA.BlockID != ev.VoteB.BlockID`
- `(block.Height - ev.VoteA.Height) < MAX_EVIDENCE_AGE`
# Execution
Once a block is validated, it can be executed against the state.
The state follows this recursive equation:
```go
state(1) = InitialState
state(h+1) <- Execute(state(h), ABCIApp, block(h))
```
where `InitialState` includes the initial consensus parameters and validator set,
and `ABCIApp` is an ABCI application that can return results and changes to the validator
set (TODO). Execute is defined as:
```go
Execute(state State, app ABCIApp, block Block) State {
TODO: just spell out ApplyBlock here
and remove ABCIResponses struct.
abciResponses := app.ApplyBlock(block)
return State{
LastResults: abciResponses.DeliverTxResults,
AppHash: abciResponses.AppHash,
Validators: UpdateValidators(state.Validators, abciResponses.ValidatorChanges),
LastValidators: state.Validators,
ConsensusParams: UpdateConsensusParams(state.ConsensusParams, abciResponses.ConsensusParamChanges),
}
}
type ABCIResponses struct {
DeliverTxResults []Result
ValidatorChanges []Validator
ConsensusParamChanges ConsensusParams
AppHash []byte
}
```

View File

@ -0,0 +1,274 @@
# Tendermint Encoding
## Amino
Tendermint uses the Protobuf3 derivative [Amino](https://github.com/tendermint/go-amino) for all data structures.
Think of Amino as an object-oriented Protobuf3 with native JSON support.
The goal of the Amino encoding protocol is to bring parity between application
logic objects and persistence objects.
Please see the [Amino
specification](https://github.com/tendermint/go-amino#amino-encoding-for-go) for
more details.
Notably, every object that satisfies an interface (eg. a particular kind of p2p message,
or a particular kind of pubkey) is registered with a global name, the hash of
which is included in the object's encoding as the so-called "prefix bytes".
We define the `func AminoEncode(obj interface{}) []byte` function to take an
arbitrary object and return the Amino encoded bytes.
## Byte Arrays
The encoding of a byte array is simply the raw-bytes prefixed with the length of
the array as a `UVarint` (what Protobuf calls a `Varint`).
For details on varints, see the [protobuf
spec](https://developers.google.com/protocol-buffers/docs/encoding#varints).
For example, the byte-array `[0xA, 0xB]` would be encoded as `0x020A0B`,
while a byte-array containing 300 entries beginning with `[0xA, 0xB, ...]` would
be encoded as `0xAC020A0B...` where `0xAC02` is the UVarint encoding of 300.
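For concreteness, here is a minimal Go sketch of this length-prefixed encoding (it is not the go-amino library itself, and the helper name `encodeByteSlice` is illustrative):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encodeByteSlice prefixes the raw bytes with their length as a UVarint.
func encodeByteSlice(bz []byte) []byte {
	lenBuf := make([]byte, binary.MaxVarintLen64)
	n := binary.PutUvarint(lenBuf, uint64(len(bz)))
	return append(lenBuf[:n], bz...)
}

func main() {
	fmt.Printf("%X\n", encodeByteSlice([]byte{0x0A, 0x0B})) // prints 020A0B
}
```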
## Public Key Cryptography
Tendermint uses Amino to distinguish between different types of private keys,
public keys, and signatures. Additionally, for each public key, Tendermint
defines an Address function that can be used as a more compact identifier in
place of the public key. Here we list the concrete types, their names,
and prefix bytes for public keys and signatures, as well as the address schemes
for each PubKey. Note for brevity we don't
include details of the private keys beyond their type and name, as they can be
derived the same way as the others using Amino.
All registered objects are encoded by Amino using a 4-byte PrefixBytes that
uniquely identifies the object and includes information about its underlying
type. For details on how PrefixBytes are computed, see the [Amino
spec](https://github.com/tendermint/go-amino#computing-the-prefix-and-disambiguation-bytes).
In what follows, we provide the type names and prefix bytes directly.
Notice that when encoding byte-arrays, the length of the byte-array is appended
to the PrefixBytes. Thus the encoding of a byte array becomes `<PrefixBytes>
<Length> <ByteArray>`. In other words, to encode any of the types listed below you do not need to be
familiar with Amino encoding: simply use the table below and concatenate
`Prefix || Length (of raw bytes) || raw bytes`, where `||` denotes byte concatenation
(see the sketch after the examples below).
| Type | Name | Prefix | Length |
| ---- | ---- | ------ | ----- |
| PubKeyEd25519 | tendermint/PubKeyEd25519 | 0x1624DE62 | 0x20 |
| PubKeyLedgerEd25519 | tendermint/PubKeyLedgerEd25519 | 0x5C3453B2 | 0x20 |
| PubKeySecp256k1 | tendermint/PubKeySecp256k1 | 0xEB5AE982 | 0x21 |
| PrivKeyEd25519 | tendermint/PrivKeyEd25519 | 0xA3288912 | 0x40 |
| PrivKeySecp256k1 | tendermint/PrivKeySecp256k1 | 0xE1B0F79A | 0x20 |
| PrivKeyLedgerSecp256k1 | tendermint/PrivKeyLedgerSecp256k1 | 0x10CAB393 | variable |
| PrivKeyLedgerEd25519 | tendermint/PrivKeyLedgerEd25519 | 0x0CFEEF9B | variable |
| SignatureEd25519 | tendermint/SignatureKeyEd25519 | 0x3DA1DB2A | 0x40 |
| SignatureSecp256k1 | tendermint/SignatureKeySecp256k1 | 0x16E1FEEA | variable |
### Examples
1. For example, the 33-byte (or 0x21-byte in hex) Secp256k1 pubkey
`020BD40F225A57ED383B440CF073BC5539D0341F5767D2BF2D78406D00475A2EE9`
would be encoded as
`EB5AE98221020BD40F225A57ED383B440CF073BC5539D0341F5767D2BF2D78406D00475A2EE9`
2. For example, the variable size Secp256k1 signature (in this particular example 70 or 0x46 bytes)
`304402201CD4B8C764D2FD8AF23ECFE6666CA8A53886D47754D951295D2D311E1FEA33BF02201E0F906BB1CF2C30EAACFFB032A7129358AFF96B9F79B06ACFFB18AC90C2ADD7`
would be encoded as
`16E1FEEA46304402201CD4B8C764D2FD8AF23ECFE6666CA8A53886D47754D951295D2D311E1FEA33BF02201E0F906BB1CF2C30EAACFFB032A7129358AFF96B9F79B06ACFFB18AC90C2ADD7`
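As a small illustration of the rule above (`Prefix || Length || raw bytes`), the following Go sketch builds the encoding for a pubkey using the PubKeyEd25519 prefix from the table. The helper name and the placeholder key are illustrative, not the go-amino API:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encodeWithPrefix concatenates PrefixBytes, the UVarint length, and the raw bytes.
func encodeWithPrefix(prefix, raw []byte) []byte {
	lenBuf := make([]byte, binary.MaxVarintLen64)
	n := binary.PutUvarint(lenBuf, uint64(len(raw)))
	out := append([]byte{}, prefix...)
	out = append(out, lenBuf[:n]...)
	return append(out, raw...)
}

func main() {
	pubkey := make([]byte, 32) // placeholder 32-byte Ed25519 pubkey
	enc := encodeWithPrefix([]byte{0x16, 0x24, 0xDE, 0x62}, pubkey)
	fmt.Printf("%X\n", enc) // 1624DE6220 followed by the raw key bytes
}
```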
### Addresses
Addresses for each public key types are computed as follows:
#### Ed25519
RIPEMD160 hash of the Amino encoded public key:
```
address = RIPEMD160(AMINO(pubkey))
```
NOTE: this will soon change to the truncated 20-bytes of the SHA256 of the raw
public key
#### Secp256k1
RIPEMD160 hash of the SHA256 hash of the OpenSSL compressed public key:
```
address = RIPEMD160(SHA256(pubkey))
```
This is the same as Bitcoin.
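A minimal sketch of this address scheme, assuming the `golang.org/x/crypto/ripemd160` package and a placeholder compressed pubkey:

```go
package main

import (
	"crypto/sha256"
	"fmt"

	"golang.org/x/crypto/ripemd160"
)

// secp256k1Address computes RIPEMD160(SHA256(pubkey)) as described above.
func secp256k1Address(pubkey []byte) []byte {
	sha := sha256.Sum256(pubkey)
	hasher := ripemd160.New()
	hasher.Write(sha[:])
	return hasher.Sum(nil)
}

func main() {
	pubkey := make([]byte, 33) // placeholder 33-byte compressed pubkey
	fmt.Printf("%X\n", secp256k1Address(pubkey))
}
```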
## Other Common Types
### BitArray
The BitArray is used in block headers and some consensus messages to signal
whether or not something was done by each validator. BitArray is represented
with a struct containing the number of bits (`Bits`) and the bit-array itself
encoded in base64 (`Elems`).
```go
type BitArray struct {
Bits int
Elems []uint64
}
```
This type is easily encoded directly by Amino.
Note that BitArray receives a special JSON encoding, using `x` and `_` to
represent `1` and `0`. Eg. the BitArray `10110` would be JSON encoded as
`"x_xx_"`.
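The JSON form can be sketched as follows, assuming bit `i` is stored at bit position `i % 64` of `Elems[i/64]` (an assumption about the internal layout, not stated in this document):

```go
package main

import "fmt"

// bitArrayString renders a BitArray as the "x"/"_" string described above.
func bitArrayString(bits int, elems []uint64) string {
	s := make([]byte, bits)
	for i := 0; i < bits; i++ {
		if elems[i/64]&(uint64(1)<<uint(i%64)) != 0 {
			s[i] = 'x'
		} else {
			s[i] = '_'
		}
	}
	return string(s)
}

func main() {
	// The 5-bit array 10110 renders as "x_xx_".
	fmt.Println(bitArrayString(5, []uint64{0x0D})) // 0x0D == 0b01101
}
```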
### Part
Part is used to break up blocks into pieces that can be gossiped in parallel
and securely verified using a Merkle tree of the parts.
Part contains the index of the part in the larger set (`Index`), the actual
underlying data of the part (`Bytes`), and a simple Merkle proof that the part is contained in
the larger set (`Proof`).
```go
type Part struct {
Index int
Bytes byte[]
Proof byte[]
}
```
### MakeParts
Encode an object using Amino and slice it into parts.
```go
func MakeParts(obj interface{}, partSize int) []Part
```
## Merkle Trees
Simple Merkle trees are used in numerous places in Tendermint to compute a cryptographic digest of a data structure.
RIPEMD160 is always used as the hashing function.
### Simple Merkle Root
The function `SimpleMerkleRoot` is a simple recursive function defined as follows:
```go
func SimpleMerkleRoot(hashes [][]byte) []byte{
switch len(hashes) {
case 0:
return nil
case 1:
return hashes[0]
default:
left := SimpleMerkleRoot(hashes[:(len(hashes)+1)/2])
right := SimpleMerkleRoot(hashes[(len(hashes)+1)/2:])
return SimpleConcatHash(left, right)
}
}
func SimpleConcatHash(left, right []byte) []byte{
left = encodeByteSlice(left)
right = encodeByteSlice(right)
return RIPEMD160(append(left, right))
}
```
Note that the leaves are Amino encoded as byte-arrays (ie. simple Uvarint length
prefix) before being concatenated together and hashed.
Note: we will abuse notation and invoke `SimpleMerkleRoot` with arguments of type `struct` or type `[]struct`.
For `struct` arguments, we compute a `[][]byte` by sorting elements of the `struct` according to
field name and then hashing them.
For `[]struct` arguments, we compute a `[][]byte` by hashing the individual `struct` elements.
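For reference, a self-contained sketch of `SimpleMerkleRoot` with the leaf length-prefixing from the note above might look like this, assuming the `golang.org/x/crypto/ripemd160` package:

```go
package main

import (
	"encoding/binary"
	"fmt"

	"golang.org/x/crypto/ripemd160"
)

// encodeByteSlice length-prefixes a leaf with a UVarint, per the note above.
func encodeByteSlice(bz []byte) []byte {
	buf := make([]byte, binary.MaxVarintLen64)
	n := binary.PutUvarint(buf, uint64(len(bz)))
	return append(buf[:n], bz...)
}

// simpleConcatHash hashes the two length-prefixed children with RIPEMD160.
func simpleConcatHash(left, right []byte) []byte {
	h := ripemd160.New()
	h.Write(encodeByteSlice(left))
	h.Write(encodeByteSlice(right))
	return h.Sum(nil)
}

// simpleMerkleRoot mirrors the recursive definition above.
func simpleMerkleRoot(hashes [][]byte) []byte {
	switch len(hashes) {
	case 0:
		return nil
	case 1:
		return hashes[0]
	default:
		mid := (len(hashes) + 1) / 2
		return simpleConcatHash(simpleMerkleRoot(hashes[:mid]), simpleMerkleRoot(hashes[mid:]))
	}
}

func main() {
	leaves := [][]byte{[]byte("tx1"), []byte("tx2"), []byte("tx3")}
	fmt.Printf("%X\n", simpleMerkleRoot(leaves))
}
```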
### Simple Merkle Proof
Proof that a leaf is in a Merkle tree consists of a simple structure:
```
type SimpleProof struct {
Aunts [][]byte
}
```
Which is verified using the following:
```
func (proof SimpleProof) Verify(index, total int, leafHash, rootHash []byte) bool {
computedHash := computeHashFromAunts(index, total, leafHash, proof.Aunts)
return computedHash == rootHash
}
func computeHashFromAunts(index, total int, leafHash []byte, innerHashes [][]byte) []byte{
assert(index < total && index >= 0 && total > 0)
if total == 1{
assert(len(innerHashes) == 0)
return leafHash
}
assert(len(innerHashes) > 0)
numLeft := (total + 1) / 2
if index < numLeft {
leftHash := computeHashFromAunts(index, numLeft, leafHash, innerHashes[:len(innerHashes)-1])
assert(leftHash != nil)
return SimpleHashFromTwoHashes(leftHash, innerHashes[len(innerHashes)-1])
}
rightHash := computeHashFromAunts(index-numLeft, total-numLeft, leafHash, innerHashes[:len(innerHashes)-1])
assert(rightHash != nil)
return SimpleHashFromTwoHashes(innerHashes[len(innerHashes)-1], rightHash)
}
```
## JSON
### Amino
TODO: improve this
Amino also supports JSON encoding - registered types are simply encoded as:
```
{
"type": "<DisfixBytes>",
"value": <JSON>
}
```
For instance, an ED25519 PubKey would look like:
```
{
"type": "AC26791624DE60",
"value": "uZ4h63OFWuQ36ZZ4Bd6NF+/w9fWUwrOncrQsackrsTk="
}
```
Where the `"value"` is the base64 encoding of the raw pubkey bytes, and the
`"type"` is the full disfix bytes for Ed25519 pubkeys.
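As a small illustration (not the go-amino implementation), the envelope above can be produced with the standard library; the struct name below is hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// aminoEnvelope mirrors the {"type": ..., "value": ...} wrapper described above.
type aminoEnvelope struct {
	Type  string `json:"type"`
	Value string `json:"value"`
}

func main() {
	bz, err := json.Marshal(aminoEnvelope{
		Type:  "AC26791624DE60", // disfix bytes for Ed25519 pubkeys, from the example above
		Value: "uZ4h63OFWuQ36ZZ4Bd6NF+/w9fWUwrOncrQsackrsTk=",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(bz))
}
```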
### Signed Messages
Signed messages (eg. votes, proposals) in the consensus are encoded using Amino-JSON, rather than in the standard binary format.
When signing, the elements of a message are sorted by key and the sorted message is embedded in an
outer JSON that includes a `chain_id` field.
We call this encoding the CanonicalSignBytes. For instance, CanonicalSignBytes for a vote would look
like:
```json
{"chain_id":"my-chain-id","vote":{"block_id":{"hash":"DEADBEEF","parts":{"hash":"BEEFDEAD","total":3}},"height":3,"round":2,"timestamp":1234567890,"type":2}}
```
Note how the fields within each level are sorted.

View File

@ -0,0 +1,80 @@
# Tendermint State
## State
The state contains information whose cryptographic digest is included in block headers, and thus is
necessary for validating new blocks. For instance, the set of validators and the results of
transactions are never included in blocks, but their Merkle roots are - the state keeps track of them.
Note that the `State` object itself is an implementation detail, since it is never
included in a block or gossipped over the network, and we never compute
its hash. However, the types it contains are part of the specification, since
their Merkle roots are included in blocks.
For details on an implementation of `State` with persistence, see TODO
```go
type State struct {
LastResults []Result
AppHash []byte
Validators []Validator
LastValidators []Validator
ConsensusParams ConsensusParams
}
```
### Result
```go
type Result struct {
Code uint32
Data []byte
Tags []KVPair
}
type KVPair struct {
Key []byte
Value []byte
}
```
`Result` is the result of executing a transaction against the application.
It returns a result code, an arbitrary byte array (ie. a return value),
and a list of key-value pairs ordered by key. The key-value pairs, or tags,
can be used to index transactions according to their "effects", which are
represented in the tags.
### Validator
A validator is an active participant in the consensus with a public key and a voting power.
Validators also contain an address, which is derived from the PubKey:
```go
type Validator struct {
Address []byte
PubKey PubKey
VotingPower int64
}
```
The `state.Validators` and `state.LastValidators` must always be sorted by validator address,
so that there is a canonical order for computing the SimpleMerkleRoot.
We also define a `TotalVotingPower` function, to return the total voting power:
```go
func TotalVotingPower(vals []Validator) int64{
sum := int64(0)
for _, v := range vals{
sum += v.VotingPower
}
return sum
}
```
### ConsensusParams
TODO

View File

@ -0,0 +1 @@
[Moved](/docs/spec/software/abci.md)

View File

@ -0,0 +1,56 @@
# BFT time in Tendermint
Tendermint provides a deterministic, Byzantine fault-tolerant, source of time.
Time in Tendermint is defined with the Time field of the block header.
It satisfies the following properties:
- Time Monotonicity: Time is monotonically increasing, i.e., given
a header H1 for height h1 and a header H2 for height `h2 = h1 + 1`, `H1.Time < H2.Time`.
- Time Validity: Given a set of Commit votes that forms the `block.LastCommit` field, a range of
valid values for the Time field of the block header is defined only by
Precommit messages (from the LastCommit field) sent by correct processes, i.e.,
a faulty process cannot arbitrarily increase the Time value.
In the context of Tendermint, time is of type int64 and denotes UNIX time in milliseconds, i.e.,
corresponds to the number of milliseconds since January 1, 1970. Before defining rules that need to be enforced by the
Tendermint consensus protocol so that the properties above hold, we introduce the following definition:
- the *median* of a set of `Vote` messages is the median of the corresponding `Vote.Time` fields, where each
`Vote.Time` value is counted a number of times equal to the voting power of the process that cast the vote. Since
voting power in Tendermint is not uniform (it is not one process, one vote), a vote message effectively aggregates
as many identical votes as the voting power of its sender (a sketch of this weighted median follows the example below).
Let's consider the following example:
- we have four processes p1, p2, p3 and p4, with the following voting power distribution (p1, 23), (p2, 27), (p3, 10)
and (p4, 10). The total voting power is 70 (`N = 3f+1`, where `N` is the total voting power, and `f` is the maximum voting
power of the faulty processes), so we assume that the faulty processes have at most 23 voting power.
Furthermore, we have the following vote messages in some LastCommit field (we ignore all fields except Time field):
- (p1, 100), (p2, 98), (p3, 1000), (p4, 500). We assume that p3 and p4 are faulty processes. Let's assume that the
`block.LastCommit` message contains votes of processes p2, p3 and p4. Median is then chosen the following way:
the value 98 is counted 27 times, the value 1000 is counted 10 times and the value 500 is counted also 10 times.
So the median value will be the value 98. No matter what set of messages with at least `2f+1` voting power we
choose, the median value will always be between the values sent by correct processes.
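A minimal sketch of this weighted median, where each timestamp is counted with multiplicity equal to the validator's voting power (the type and function names are illustrative):

```go
package main

import (
	"fmt"
	"sort"
)

type weightedTime struct {
	Time  int64 // Unix time in milliseconds, from Vote.Time
	Power int64 // voting power of the validator that cast the vote
}

// medianTime returns the weighted median of the vote timestamps.
func medianTime(votes []weightedTime) int64 {
	sort.Slice(votes, func(i, j int) bool { return votes[i].Time < votes[j].Time })
	var total int64
	for _, v := range votes {
		total += v.Power
	}
	half := (total + 1) / 2 // middle of the cumulative voting power
	var acc int64
	for _, v := range votes {
		acc += v.Power
		if acc >= half {
			return v.Time
		}
	}
	return 0 // only reached for empty input
}

func main() {
	// The example above: votes from p2 (98, power 27), p3 (1000, power 10), p4 (500, power 10).
	fmt.Println(medianTime([]weightedTime{{98, 27}, {1000, 10}, {500, 10}})) // prints 98
}
```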
We ensure Time Monotonicity and Time Validity properties by the following rules:
- let rs denote `RoundState` (consensus internal state) of some process. Then
`rs.ProposalBlock.Header.Time == median(rs.LastCommit) &&
rs.Proposal.Timestamp == rs.ProposalBlock.Header.Time`.
- Furthermore, when creating the `vote` message, the following rules for determining `vote.Time` field should hold:
- if `rs.Proposal` is defined then
`vote.Time = max(rs.Proposal.Timestamp + 1, time.Now())`, where `time.Now()`
denotes local Unix time in milliseconds.
- if `rs.Proposal` is not defined and `rs.Votes` contains +2/3 of the corresponding vote messages (votes for the
current height and round, and with the corresponding type (`Prevote` or `Precommit`)), then
`vote.Time = max(median(getVotes(rs.Votes, vote.Height, vote.Round, vote.Type)), time.Now())`,
where `getVotes` function returns the votes for particular `Height`, `Round` and `Type`.
The second rule is relevant for the case when a process jumps to a higher round upon receiving +2/3 votes for a higher
round, but the corresponding `Proposal` message for the higher round hasn't been received yet.

View File

@ -0,0 +1,9 @@
We are working to finalize an updated Tendermint specification with formal
proofs of safety and liveness.
In the meantime, see the [description in the
docs](http://tendermint.readthedocs.io/en/master/specification/byzantine-consensus-algorithm.html).
There are also relevant but somewhat outdated descriptions in Jae Kwon's [original
whitepaper](https://tendermint.com/static/docs/tendermint.pdf) and Ethan Buchman's [master's
thesis](https://atrium.lib.uoguelph.ca/xmlui/handle/10214/9769).

View File

@ -0,0 +1,114 @@
# Light client
A light client is a process that connects to the Tendermint Full Node(s) and then tries to verify the Merkle proofs
about the blockchain application. In this document we describe mechanisms that ensure that the Tendermint light client
has the same level of security as Full Node processes (without being itself a Full Node).
To be able to validate a Merkle proof, a light client needs to validate the blockchain header that contains the root app hash.
Validating a blockchain header in Tendermint consists of verifying that the header is committed (signed) by >2/3 of the
voting power of the corresponding validator set. As the validator set is a dynamic set (it is changing), one of the
core functionalities of the light client is updating the current validator set, that is then used to verify the
blockchain header, and further the corresponding Merkle proofs.
For the purpose of this light client specification, we assume that the Tendermint Full Node exposes the following functions over
Tendermint RPC:
```golang
Header(height int64) (SignedHeader, error) // returns signed header for the given height
Validators(height int64) (ResultValidators, error) // returns validator set for the given height
LastHeader(valSetNumber int64) (SignedHeader, error) // returns last header signed by the validator set with the given validator set number
type SignedHeader struct {
Header Header
Commit Commit
ValSetNumber int64
}
type ResultValidators struct {
BlockHeight int64
Validators []Validator
// time the current validator set is initialised, i.e., time of the last validator change before header BlockHeight
ValSetTime int64
}
```
We assume that Tendermint keeps track of the validator set changes and that each time a validator set is changed it is
being assigned the next sequence number. We can call this number the validator set sequence number. Tendermint also remembers
the Time from the header when the next validator set is initialised (starts to be in power), and we refer to this time
as validator set init time.
Furthermore, we assume that each validator set change is signed (committed) by the current validator set. More precisely,
given a block `H` that contains transactions that are modifying the current validator set, the Merkle root hash of the next
validator set (modified based on transactions from block H) will be in block `H+1` (and signed by the current validator
set), and then starting from the block `H+2`, it will be signed by the next validator set.
Note that the real Tendermint RPC API is slightly different (for example, response messages contain more data and function
names are slightly different); we shortened (and modified) it for the purpose of this document to make the spec more
clear and simple. Furthermore, note that in case of the third function, the returned header has `ValSetNumber` equal to
`valSetNumber+1`.
Locally, the light client manages the following state:
```golang
valSet []Validator // current validator set (last known and verified validator set)
valSetNumber int64 // sequence number of the current validator set
valSetHash []byte // hash of the current validator set
valSetTime int64 // time when the current validator set is initialised
```
The light client is initialised with the trusted validator set, for example based on the known validator set hash,
validator set sequence number and the validator set init time.
The core of the light client logic is captured by the VerifyAndUpdate function that is used to 1) verify if the given header is valid,
and 2) update the validator set (when the given header is valid and it is more recent than the seen headers).
```golang
VerifyAndUpdate(signedHeader SignedHeader):
assertThat signedHeader.ValSetNumber >= valSetNumber
if isValid(signedHeader) and signedHeader.Header.Time <= valSetTime + UNBONDING_PERIOD then
setValidatorSet(signedHeader)
return true
else
updateValidatorSet(signedHeader.ValSetNumber)
return VerifyAndUpdate(signedHeader)
isValid(signedHeader SignedHeader):
valSetOfTheHeader = Validators(signedHeader.Header.Height)
assertThat Hash(valSetOfTheHeader) == signedHeader.Header.ValidatorsHash
assertThat signedHeader is passing basic validation
if votingPower(signedHeader.Commit) > 2/3 * votingPower(valSetOfTheHeader) then return true
else
return false
setValidatorSet(signedHeader SignedHeader):
nextValSet = Validators(signedHeader.Header.Height)
assertThat Hash(nextValSet) == signedHeader.Header.ValidatorsHash
valSet = nextValSet.Validators
valSetHash = signedHeader.Header.ValidatorsHash
valSetNumber = signedHeader.ValSetNumber
valSetTime = nextValSet.ValSetTime
votingPower(commit Commit):
votingPower = 0
for each precommit in commit.Precommits do:
if precommit.ValidatorAddress is in valSet and signature of the precommit verifies then
votingPower += valSet[precommit.ValidatorAddress].VotingPower
return votingPower
votingPower(validatorSet []Validator):
for each validator in validatorSet do:
votingPower += validator.VotingPower
return votingPower
updateValidatorSet(valSetNumberOfTheHeader):
while valSetNumber != valSetNumberOfTheHeader do
signedHeader = LastHeader(valSetNumber)
if isValid(signedHeader) then
setValidatorSet(signedHeader)
else return error
return
```
Note that in the logic above we assume that the light client will always go upward with respect to header verifications,
i.e., that it will always be used to verify more recent headers. In case a light client needs to be used to verify older
headers (go backward) the same mechanisms and similar logic can be used. In case a call to the FullNode or subsequent
checks fail, a light client needs to implement some recovery strategy, for example connecting to another full node.

View File

@ -0,0 +1 @@
[Moved](/docs/spec/software/wal.md)

38
docs/spec/p2p/config.md Normal file
View File

@ -0,0 +1,38 @@
# P2P Config
Here we describe configuration options around the Peer Exchange.
These can be set using flags or via the `$TMHOME/config/config.toml` file.
## Seed Mode
`--p2p.seed_mode`
The node operates in seed mode. In seed mode, a node continuously crawls the network for peers,
and upon incoming connection shares some peers and disconnects.
## Seeds
`--p2p.seeds "1.2.3.4:26656,2.3.4.5:4444"`
Dials these seeds when we need more peers. They should return a list of peers and then disconnect.
If we already have enough peers in the address book, we may never need to dial them.
## Persistent Peers
`--p2p.persistent_peers "1.2.3.4:26656,2.3.4.5:26656"`
Dial these peers and auto-redial them if the connection fails.
These are intended to be trusted persistent peers that can help
anchor us in the p2p network. The auto-redial uses exponential
backoff and will give up after a day of trying to connect.
**Note:** If `seeds` and `persistent_peers` intersect,
the user will be warned that seeds may auto-close connections
and that the node may not be able to keep the connection persistent.
## Private Persistent Peers
`--p2p.private_persistent_peers "1.2.3.4:26656,2.3.4.5:26656"`
These are persistent peers that we do not add to the address book or
gossip to other peers. They stay private to us.

110
docs/spec/p2p/connection.md Normal file
View File

@ -0,0 +1,110 @@
# P2P Multiplex Connection
## MConnection
`MConnection` is a multiplex connection that supports multiple independent streams
with distinct quality of service guarantees atop a single TCP connection.
Each stream is known as a `Channel` and each `Channel` has a globally unique *byte id*.
Each `Channel` also has a relative priority that determines the quality of service
of the `Channel` compared to other `Channel`s.
The *byte id* and the relative priorities of each `Channel` are configured upon
initialization of the connection.
The `MConnection` supports three packet types:
- Ping
- Pong
- Msg
### Ping and Pong
The ping and pong messages consist of writing a single byte to the connection: 0x1 and 0x2, respectively.
When we haven't received any messages on an `MConnection` in time `pingTimeout`, we send a ping message.
When a ping is received on the `MConnection`, a pong is sent in response only if there are no other messages
to send and the peer has not sent us too many pings (TODO).
If a pong or message is not received in sufficient time after a ping, we disconnect from the peer.
### Msg
Messages in channels are chopped into smaller `msgPacket`s for multiplexing.
```
type msgPacket struct {
ChannelID byte
EOF byte // 1 means message ends here.
Bytes []byte
}
```
The `msgPacket` is serialized using [go-wire](https://github.com/tendermint/go-wire) and prefixed with 0x3.
The received `Bytes` of a sequential set of packets are appended together
until a packet with `EOF=1` is received, then the complete serialized message
is returned for processing by the `onReceive` function of the corresponding channel.
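A hedged sketch of this chopping and reassembly follows; the packet size limit and channel id used here are illustrative, not taken from the implementation:

```go
package main

import "fmt"

type msgPacket struct {
	ChannelID byte
	EOF       byte // 1 means message ends here
	Bytes     []byte
}

// chop splits a message into packets of at most maxPacketSize bytes; only the
// last packet carries EOF=1.
func chop(chID byte, msg []byte, maxPacketSize int) []msgPacket {
	var packets []msgPacket
	for len(msg) > maxPacketSize {
		packets = append(packets, msgPacket{ChannelID: chID, Bytes: msg[:maxPacketSize]})
		msg = msg[maxPacketSize:]
	}
	return append(packets, msgPacket{ChannelID: chID, EOF: 1, Bytes: msg})
}

// reassemble appends packet bytes until a packet with EOF=1 is seen.
func reassemble(packets []msgPacket) []byte {
	var out []byte
	for _, p := range packets {
		out = append(out, p.Bytes...)
		if p.EOF == 1 {
			break
		}
	}
	return out
}

func main() {
	packets := chop(0x40, []byte("hello, tendermint"), 5)
	fmt.Printf("%d packets -> %q\n", len(packets), reassemble(packets))
}
```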
### Multiplexing
Messages are sent from a single `sendRoutine`, which loops over a select statement and results in the sending
of a ping, a pong, or a batch of data messages. The batch of data messages may include messages from multiple channels.
Message bytes are queued for sending in their respective channel, with each channel holding one unsent message at a time.
Messages are chosen for a batch one at a time from the channel with the lowest ratio of recently sent bytes to channel priority.
## Sending Messages
There are two methods for sending messages:
```go
func (m MConnection) Send(chID byte, msg interface{}) bool {}
func (m MConnection) TrySend(chID byte, msg interface{}) bool {}
```
`Send(chID, msg)` is a blocking call that waits until `msg` is successfully queued
for the channel with the given id byte `chID`. The message `msg` is serialized
using the `tendermint/wire` submodule's `WriteBinary()` reflection routine.
`TrySend(chID, msg)` is a nonblocking call that queues the message msg in the channel
with the given id byte chID if the queue is not full; otherwise it returns false immediately.
`Send()` and `TrySend()` are also exposed for each `Peer`.
## Peer
Each peer has one `MConnection` instance, and includes other information such as whether the connection
was outbound, whether the connection should be recreated if it closes, various identity information about the node,
and other higher level thread-safe data used by the reactors.
## Switch/Reactor
The `Switch` handles peer connections and exposes an API to receive incoming messages
on `Reactors`. Each `Reactor` is responsible for handling incoming messages of one
or more `Channels`. So while sending outgoing messages is typically performed on the peer,
incoming messages are received on the reactor.
```go
// Declare a MyReactor reactor that handles messages on MyChannelID.
type MyReactor struct{}
func (reactor MyReactor) GetChannels() []*ChannelDescriptor {
return []*ChannelDescriptor{ChannelDescriptor{ID:MyChannelID, Priority: 1}}
}
func (reactor MyReactor) Receive(chID byte, peer *Peer, msgBytes []byte) {
r, n, err := bytes.NewBuffer(msgBytes), new(int64), new(error)
msgString := ReadString(r, n, err)
fmt.Println(msgString)
}
// Other Reactor methods omitted for brevity
...
sw := NewSwitch([]Reactor{MyReactor{}})
...
// Send a random message to all outbound connections
for _, peer := range sw.Peers().List() {
if peer.IsOutbound() {
peer.Send(MyChannelID, "Here's a random message")
}
}
```

65
docs/spec/p2p/node.md Normal file
View File

@ -0,0 +1,65 @@
# Tendermint Peer Discovery
A Tendermint P2P network has different kinds of nodes with different requirements for connectivity to one another.
This document describes what kind of nodes Tendermint should enable and how they should work.
## Seeds
Seeds are the first point of contact for a new node.
They return a list of known active peers and then disconnect.
Seeds should operate full nodes with the PEX reactor in a "crawler" mode
that continuously explores to validate the availability of peers.
Seeds should only respond with some top percentile of the best peers they know about.
See [the peer-exchange docs](https://github.com/tendermint/tendermint/blob/master/docs/spec/reactors/pex/pex.md) for details on peer quality.
## New Full Node
A new node needs a few things to connect to the network:
- a list of seeds, which can be provided to Tendermint via config file or flags,
or hardcoded into the software by in-process apps
- a `ChainID`, also called `Network` at the p2p layer
- a recent block height, H, and hash, HASH for the blockchain.
The values `H` and `HASH` must be received and corroborated by means external to Tendermint, and specific to the user - ie. via the user's trusted social consensus.
This requirement to validate `H` and `HASH` out-of-band and via social consensus
is the essential difference in security models between Proof-of-Work and Proof-of-Stake blockchains.
With the above, the node then queries some seeds for peers for its chain,
dials those peers, and runs the Tendermint protocols with those it successfully connects to.
When the peer catches up to height H, it ensures the block hash matches HASH.
If not, Tendermint will exit, and the user must try again - either they are connected
to bad peers or their social consensus is invalid.
## Restarted Full Node
A node checks its address book on startup and attempts to connect to peers from there.
If it can't connect to any peers after some time, it falls back to the seeds to find more.
Restarted full nodes can run the `blockchain` or `consensus` reactor protocols to sync up
to the latest state of the blockchain from wherever they were last.
In a Proof-of-Stake context, if they are sufficiently far behind (greater than the length
of the unbonding period), they will need to validate a recent `H` and `HASH` out-of-band again
so they know they have synced the correct chain.
## Validator Node
A validator node is a node that interfaces with a validator signing key.
These nodes require the highest security, and should not accept incoming connections.
They should maintain outgoing connections to a controlled set of "Sentry Nodes" that serve
as their proxy shield to the rest of the network.
Validators that know and trust each other can accept incoming connections from one another and maintain direct private connectivity via VPN.
## Sentry Node
Sentry nodes are guardians of a validator node and provide it access to the rest of the network.
They should be well connected to other full nodes on the network.
Sentry nodes may be dynamic, but should maintain persistent connections to some evolving random subset of each other.
They should always expect to have direct incoming connections from the validator node and its backup(s).
They do not report the validator node's address in the PEX and
they may be more strict about the quality of peers they keep.
Sentry nodes belonging to validators that trust each other may wish to maintain persistent connections via VPN with one another, but only report each other sparingly in the PEX.

112
docs/spec/p2p/peer.md Normal file
View File

@ -0,0 +1,112 @@
# Tendermint Peers
This document explains how Tendermint Peers are identified and how they connect to one another.
For details on peer discovery, see the [peer exchange (PEX) reactor doc](https://github.com/tendermint/tendermint/blob/master/docs/spec/reactors/pex/pex.md).
## Peer Identity
Tendermint peers are expected to maintain long-term persistent identities in the form of a public key.
Each peer has an ID defined as `peer.ID == peer.PubKey.Address()`, where `Address` uses the scheme defined in go-crypto.
A single peer ID can have multiple IP addresses associated with it, but a node
will only ever connect to one at a time.
When attempting to connect to a peer, we use the PeerURL: `<ID>@<IP>:<PORT>`.
We will attempt to connect to the peer at IP:PORT, and verify,
via authenticated encryption, that it is in possession of the private key
corresponding to `<ID>`. This prevents man-in-the-middle attacks on the peer layer.
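A small sketch of splitting such a PeerURL; the ID value in the example is a made-up placeholder:

```go
package main

import (
	"fmt"
	"strings"
)

// splitPeerURL separates "<ID>@<IP>:<PORT>" into the ID and the dial address.
func splitPeerURL(u string) (id, addr string, err error) {
	parts := strings.SplitN(u, "@", 2)
	if len(parts) != 2 {
		return "", "", fmt.Errorf("peer URL %q is missing '@'", u)
	}
	return parts[0], parts[1], nil
}

func main() {
	id, addr, err := splitPeerURL("deadbeefdeadbeefdeadbeefdeadbeefdeadbeef@1.2.3.4:26656")
	if err != nil {
		panic(err)
	}
	fmt.Println(id, addr)
}
```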
## Connections
All p2p connections use TCP.
Upon establishing a successful TCP connection with a peer,
two handshakes are performed: one for authenticated encryption, and one for Tendermint versioning.
Both handshakes have configurable timeouts (they should complete quickly).
### Authenticated Encryption Handshake
Tendermint implements the Station-to-Station protocol
using ED25519 keys for Diffie-Hellman key-exchange and NACL SecretBox for encryption.
It goes as follows:
- generate an ephemeral ED25519 keypair
- send the ephemeral public key to the peer
- wait to receive the peer's ephemeral public key
- compute the Diffie-Hellman shared secret using the peer's ephemeral public key and our ephemeral private key
- generate two nonces to use for encryption (sending and receiving) as follows (see the sketch after this list):
- sort the ephemeral public keys in ascending order and concatenate them
- RIPEMD160 the result
- append 4 empty bytes (extending the hash to 24-bytes)
- the result is nonce1
- flip the last bit of nonce1 to get nonce2
- if we had the smaller ephemeral pubkey, use nonce1 for receiving, nonce2 for sending;
else the opposite
- all communications from now on are encrypted using the shared secret and the nonces, where each nonce
increments by 2 every time it is used
- we now have an encrypted channel, but still need to authenticate
- generate a common challenge to sign:
- SHA256 of the sorted (lowest first) and concatenated ephemeral pub keys
- sign the common challenge with our persistent private key
- send the go-wire encoded persistent pubkey and signature to the peer
- wait to receive the persistent public key and signature from the peer
- verify the signature on the challenge using the peer's persistent public key
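A sketch of the nonce derivation steps above, assuming 32-byte ephemeral public keys and the `golang.org/x/crypto/ripemd160` package:

```go
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ripemd160"
)

// deriveNonces returns (recvNonce, sendNonce) for the local peer following the
// sort / hash / extend / flip-last-bit procedure described in the list above.
func deriveNonces(ourEphPub, theirEphPub []byte) (recv, send [24]byte) {
	lo, hi := ourEphPub, theirEphPub
	if bytes.Compare(lo, hi) > 0 {
		lo, hi = hi, lo
	}
	h := ripemd160.New()
	h.Write(lo)
	h.Write(hi)

	var nonce1, nonce2 [24]byte
	copy(nonce1[:], h.Sum(nil)) // 20-byte hash followed by 4 zero bytes
	nonce2 = nonce1
	nonce2[len(nonce2)-1] ^= 0x01 // flip the last bit

	// The peer holding the smaller ephemeral pubkey receives on nonce1 and sends on nonce2.
	if bytes.Equal(lo, ourEphPub) {
		return nonce1, nonce2
	}
	return nonce2, nonce1
}

func main() {
	a := bytes.Repeat([]byte{0x01}, 32)
	b := bytes.Repeat([]byte{0x02}, 32)
	recv, send := deriveNonces(a, b)
	fmt.Printf("recv: %X\nsend: %X\n", recv, send)
}
```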
If this is an outgoing connection (we dialed the peer) and we used a peer ID,
then finally verify that the peer's persistent public key corresponds to the peer ID we dialed,
ie. `peer.PubKey.Address() == <ID>`.
The connection has now been authenticated. All traffic is encrypted.
Note: only the dialer can authenticate the identity of the peer,
but this is what we care about since when we join the network we wish to
ensure we have reached the intended peer (and are not being MITMd).
### Peer Filter
Before continuing, we check if the new peer has the same ID as ourselves or
an existing peer. If so, we disconnect.
We also check the peer's address and public key against
an optional whitelist which can be managed through the ABCI app -
if the whitelist is enabled and the peer does not qualify, the connection is
terminated.
### Tendermint Version Handshake
The Tendermint Version Handshake allows the peers to exchange their NodeInfo:
```golang
type NodeInfo struct {
ID p2p.ID
ListenAddr string
Network string
Version string
Channels []int8
Moniker string
Other []string
}
```
The connection is disconnected if:
- `peer.NodeInfo.ID` is not equal to `peerConn.ID`
- `peer.NodeInfo.Version` is not formatted as `X.X.X` where X are integers known as Major, Minor, and Revision
- `peer.NodeInfo.Version` Major is not the same as ours
- `peer.NodeInfo.Network` is not the same as ours
- `peer.Channels` does not intersect with our known Channels.
- `peer.NodeInfo.ListenAddr` is malformed or is a DNS host that cannot be
resolved
At this point, if we have not disconnected, the peer is valid.
It is added to the switch and hence all reactors via the `AddPeer` method.
Note that each reactor may handle multiple channels.
## Connection Activity
Once a peer is added, incoming messages for a given reactor are handled through
that reactor's `Receive` method, and output messages are sent directly by the Reactors
on each peer. A typical reactor maintains per-peer go-routine(s) that handle this.

Binary file not shown.

After

Width:  |  Height:  |  Size: 43 KiB

View File

@ -0,0 +1,46 @@
## Blockchain Reactor
* coordinates the pool for syncing
* coordinates the store for persistence
* coordinates the playing of blocks towards the app using a sm.BlockExecutor
* handles switching between fastsync and consensus
* it is a p2p.BaseReactor
* starts the pool.Start() and its poolRoutine()
* registers all the concrete types and interfaces for serialisation
### poolRoutine
* listens to these channels:
* pool requests blocks from a specific peer by posting to requestsCh, block reactor then sends
a &bcBlockRequestMessage for a specific height
* pool signals timeout of a specific peer by posting to timeoutsCh
* switchToConsensusTicker to periodically try and switch to consensus
* trySyncTicker to periodically check if we have fallen behind and then catch-up sync
* if there aren't any new blocks available on the pool it skips syncing
* tries to sync the app by taking downloaded blocks from the pool, gives them to the app and stores
them on disk
* implements Receive which is called by the switch/peer
* calls AddBlock on the pool when it receives a new block from a peer
## Block Pool
* responsible for downloading blocks from peers
* makeRequestersRoutine()
* removes timeout peers
* starts new requesters by calling makeNextRequester()
* requestRoutine():
* picks a peer and sends the request, then blocks until:
* pool is stopped by listening to pool.Quit
* requester is stopped by listening to Quit
* request is redone
* we receive a block
* gotBlockCh is strange
## Block Store
* persists blocks to disk
# TODO
* How does the switch from bcR to conR happen? Does conR persist blocks to disk too?
* What is the interaction between the consensus and blockchain reactors?

View File

@ -0,0 +1,307 @@
# Blockchain Reactor
The Blockchain Reactor's high level responsibility is to enable peers who are
far behind the current state of the consensus to quickly catch up by downloading
many blocks in parallel, verifying their commits, and executing them against the
ABCI application.
Tendermint full nodes run the Blockchain Reactor as a service to provide blocks
to new nodes. New nodes run the Blockchain Reactor in "fast_sync" mode,
where they actively make requests for more blocks until they sync up.
Once caught up, "fast_sync" mode is disabled and the node switches to
using (and turns on) the Consensus Reactor.
## Message Types
```go
const (
msgTypeBlockRequest = byte(0x10)
msgTypeBlockResponse = byte(0x11)
msgTypeNoBlockResponse = byte(0x12)
msgTypeStatusResponse = byte(0x20)
msgTypeStatusRequest = byte(0x21)
)
```
```go
type bcBlockRequestMessage struct {
Height int64
}
type bcNoBlockResponseMessage struct {
Height int64
}
type bcBlockResponseMessage struct {
Block Block
}
type bcStatusRequestMessage struct {
Height int64
}
type bcStatusResponseMessage struct {
Height int64
}
```
## Architecture and algorithm
The Blockchain reactor is organised as a set of concurrent tasks:
- Receive routine of Blockchain Reactor
- Task for creating Requesters
- Set of Requesters tasks and
- Controller task.
![Blockchain Reactor Architecture Diagram](img/bc-reactor.png)
### Data structures
These are the core data structures necessary to provide the Blockchain Reactor logic.
The Requester data structure is used to track the assignment of a request for the `block` at position `height` to a
peer with id equal to `peerID`.
```go
type Requester {
mtx Mutex
block Block
height int64
peerID p2p.ID
redoChannel chan struct{}
}
```
Pool is the core data structure. It stores the last executed block (`height`), the assignment of requests to peers (`requesters`),
the current height and the number of pending requests for each peer (`peers`), the maximum peer height, etc.
```go
type Pool {
mtx Mutex
requesters map[int64]*Requester
height int64
peers map[p2p.ID]*Peer
maxPeerHeight int64
numPending int32
store BlockStore
requestsChannel chan<- BlockRequest
errorsChannel chan<- peerError
}
```
The Peer data structure stores, for each peer, the current `height` and the number of pending requests sent to
the peer (`numPending`), etc.
```go
type Peer struct {
id p2p.ID
height int64
numPending int32
timeout *time.Timer
didTimeout bool
}
```
BlockRequest is an internal data structure used to denote the current mapping of a request for a block at some `height` to
a peer (`PeerID`).
```go
type BlockRequest {
Height int64
PeerID p2p.ID
}
```
### Receive routine of Blockchain Reactor
It is executed upon message reception on the BlockchainChannel inside the p2p receive routine. There is a separate p2p
receive routine (and therefore a separate receive routine of the Blockchain Reactor) executed for each peer. Note that
"try to send" does not block (it returns immediately) if the outgoing buffer is full.
```go
handleMsg(pool, m):
upon receiving bcBlockRequestMessage m from peer p:
block = load block for height m.Height from pool.store
if block != nil then
try to send BlockResponseMessage(block) to p
else
try to send bcNoBlockResponseMessage(m.Height) to p
upon receiving bcBlockResponseMessage m from peer p:
pool.mtx.Lock()
requester = pool.requesters[m.Height]
if requester == nil then
error("peer sent us a block we didn't expect")
continue
if requester.block == nil and requester.peerID == p then
requester.block = m
pool.numPending -= 1 // atomic decrement
peer = pool.peers[p]
if peer != nil then
peer.numPending--
if peer.numPending == 0 then
peer.timeout.Stop()
// NOTE: we don't send Quit signal to the corresponding requester task!
else
trigger peer timeout to expire after peerTimeout
pool.mtx.Unlock()
upon receiving bcStatusRequestMessage m from peer p:
try to send bcStatusResponseMessage(pool.store.Height)
upon receiving bcStatusResponseMessage m from peer p:
pool.mtx.Lock()
peer = pool.peers[p]
if peer != nil then
peer.height = m.height
else
peer = create new Peer data structure with id = p and height = m.Height
pool.peers[p] = peer
if m.Height > pool.maxPeerHeight then
pool.maxPeerHeight = m.Height
pool.mtx.Unlock()
onTimeout(p):
send error message to pool error channel
peer = pool.peers[p]
peer.didTimeout = true
```
### Requester tasks
Requester task is responsible for fetching a single block at position `height`.
```go
fetchBlock(height, pool):
while true do
peerID = nil
block = nil
peer = pickAvailablePeer(height)
peerID = peer.id
enqueue BlockRequest(height, peerID) to pool.requestsChannel
redo = false
while !redo do
select {
upon receiving Quit message do
return
upon receiving message on redoChannel do
mtx.Lock()
pool.numPending++
redo = true
mtx.UnLock()
}
pickAvailablePeer(height):
selectedPeer = nil
while selectedPeer == nil do
pool.mtx.Lock()
for each peer in pool.peers do
if !peer.didTimeout and peer.numPending < maxPendingRequestsPerPeer and peer.height >= height then
peer.numPending++
selectedPeer = peer
break
pool.mtx.Unlock()
if selectedPeer == nil then
sleep requestIntervalMS
return selectedPeer
```
### Task for creating Requesters
This task is responsible for continuously creating and starting Requester tasks.
```go
createRequesters(pool):
while true do
if !pool.isRunning then break
if pool.numPending < maxPendingRequests or size(pool.requesters) < maxTotalRequesters then
pool.mtx.Lock()
nextHeight = pool.height + size(pool.requesters)
requester = create new requester for height nextHeight
pool.requesters[nextHeight] = requester
pool.numPending += 1 // atomic increment
start requester task
pool.mtx.Unlock()
else
sleep requestIntervalMS
pool.mtx.Lock()
for each peer in pool.peers do
if !peer.didTimeout && peer.numPending > 0 && peer.curRate < minRecvRate then
send error on pool error channel
peer.didTimeout = true
if peer.didTimeout then
for each requester in pool.requesters do
if requester.getPeerID() == peer then
enqueue msg on requester's redoChannel
delete(pool.peers, peerID)
pool.mtx.Unlock()
```
### Main blockchain reactor controller task
```go
main(pool):
create trySyncTicker with interval trySyncIntervalMS
create statusUpdateTicker with interval statusUpdateIntervalSeconds
create switchToConsensusTicker with interval switchToConsensusIntervalSeconds
while true do
select {
upon receiving BlockRequest(Height, Peer) on pool.requestsChannel:
try to send bcBlockRequestMessage(Height) to Peer
upon receiving error(peer) on errorsChannel:
stop peer for error
upon receiving message on statusUpdateTickerChannel:
broadcast bcStatusRequestMessage(bcR.store.Height) // message sent in a separate routine
upon receiving message on switchToConsensusTickerChannel:
pool.mtx.Lock()
receivedBlockOrTimedOut = pool.height > 0 || (time.Now() - pool.startTime) > 5 Seconds
ourChainIsLongestAmongPeers = pool.maxPeerHeight == 0 || pool.height >= pool.maxPeerHeight
haveSomePeers = size of pool.peers > 0
pool.mtx.Unlock()
if haveSomePeers && receivedBlockOrTimedOut && ourChainIsLongestAmongPeers then
switch to consensus mode
upon receiving message on trySyncTickerChannel:
for i = 0; i < 10; i++ do
pool.mtx.Lock()
firstBlock = pool.requesters[pool.height].block
secondBlock = pool.requesters[pool.height+1].block
if firstBlock == nil or secondBlock == nil then continue
pool.mtx.Unlock()
verify firstBlock using LastCommit from secondBlock
if verification failed
pool.mtx.Lock()
peerID = pool.requesters[pool.height].peerID
redoRequestsForPeer(peerID)
delete(pool.peers, peerID)
stop peer peerID for error
pool.mtx.Unlock()
else
delete(pool.requesters, pool.height)
save firstBlock to store
pool.height++
execute firstBlock
}
redoRequestsForPeer(pool, peerID):
for each requester in pool.requesters do
if requester.getPeerID() == peerID
enqueue msg on redoChannel for requester
```
## Channels
Defines `maxMsgSize` for the maximum size of incoming messages,
`SendQueueCapacity` and `RecvBufferCapacity` for maximum sending and
receiving buffers respectively. These are supposed to prevent amplification
attacks by setting an upper limit on how much data we can receive from and send to
a peer.
Sending incorrectly encoded data will result in stopping the peer.

View File

@ -0,0 +1,352 @@
# Consensus Reactor
Consensus Reactor defines a reactor for the consensus service. It contains the ConsensusState service that
manages the state of the Tendermint consensus internal state machine.
When Consensus Reactor is started, it starts Broadcast Routine which starts ConsensusState service.
Furthermore, for each peer that is added to the Consensus Reactor, it creates (and manages) the known peer state
(that is used extensively in gossip routines) and starts the following three routines for the peer p:
Gossip Data Routine, Gossip Votes Routine and QueryMaj23Routine. Finally, Consensus Reactor is responsible
for decoding messages received from a peer and for adequate processing of the message depending on its type and content.
The processing normally consists of updating the known peer state and for some messages
(`ProposalMessage`, `BlockPartMessage` and `VoteMessage`) also forwarding the message to the ConsensusState module
for further processing. In the following text we specify the core functionality of those separate units of execution
that are part of the Consensus Reactor.
## ConsensusState service
Consensus State handles execution of the Tendermint BFT consensus algorithm. It processes votes and proposals,
and upon reaching agreement, commits blocks to the chain and executes them against the application.
The internal state machine receives input from peers, the internal validator and from a timer.
Inside Consensus State we have the following units of execution: Timeout Ticker and Receive Routine.
Timeout Ticker is a timer that schedules timeouts conditional on the height/round/step that are processed
by the Receive Routine.
### Receive Routine of the ConsensusState service
Receive Routine of the ConsensusState handles messages which may cause internal consensus state transitions.
It is the only routine that updates RoundState that contains internal consensus state.
Updates (state transitions) happen on timeouts, complete proposals, and 2/3 majorities.
It receives messages from peers, internal validators and from Timeout Ticker
and invokes the corresponding handlers, potentially updating the RoundState.
The details of the protocol (together with formal proofs of correctness) implemented by the Receive Routine are
discussed in a separate document. For understanding this document
it is sufficient to know that the Receive Routine manages and updates the RoundState data structure, which is
then used extensively by the gossip routines to determine what information should be sent to peer processes.
## Round State
RoundState defines the internal consensus state. It contains the height, round, round step, the current validator set,
the proposal and proposal block for the current round, the locked round and block (if a block is locked), the set of
received votes, the last commit, and the last validator set.
```golang
type RoundState struct {
Height int64
Round int
Step RoundStepType
Validators ValidatorSet
Proposal Proposal
ProposalBlock Block
ProposalBlockParts PartSet
LockedRound int
LockedBlock Block
LockedBlockParts PartSet
Votes HeightVoteSet
LastCommit VoteSet
LastValidators ValidatorSet
}
```
Internally, consensus will run as a state machine with the following states:
- RoundStepNewHeight
- RoundStepNewRound
- RoundStepPropose
- RoundStepProposeWait
- RoundStepPrevote
- RoundStepPrevoteWait
- RoundStepPrecommit
- RoundStepPrecommitWait
- RoundStepCommit
## Peer Round State
Peer round state contains the known state of a peer. It is updated by the Receive routine of the
Consensus Reactor and by the gossip routines upon sending a message to the peer.
```golang
type PeerRoundState struct {
Height int64 // Height peer is at
Round int // Round peer is at, -1 if unknown.
Step RoundStepType // Step peer is at
Proposal bool // True if peer has proposal for this round
ProposalBlockPartsHeader PartSetHeader
ProposalBlockParts BitArray
ProposalPOLRound int // Proposal's POL round. -1 if none.
ProposalPOL BitArray // nil until ProposalPOLMessage received.
Prevotes BitArray // All votes peer has for this round
Precommits BitArray // All precommits peer has for this round
LastCommitRound int // Round of commit for last height. -1 if none.
LastCommit BitArray // All commit precommits of commit for last height.
CatchupCommitRound int // Round that we have commit for. Not necessarily unique. -1 if none.
CatchupCommit BitArray // All commit precommits peer has for this height & CatchupCommitRound
}
```
## Receive method of Consensus reactor
The entry point of the Consensus reactor is a receive method. When a message is received from a peer p,
normally the peer round state is updated correspondingly, and some messages
are passed for further processing, for example to the ConsensusState service. We now specify the processing of messages
in the receive method of the Consensus reactor for each message type. In the message handlers below, `rs` and `prs` denote
`RoundState` and `PeerRoundState`, respectively.
### NewRoundStepMessage handler
```
handleMessage(msg):
if msg is from smaller height/round/step then return
// Just remember these values.
prsHeight = prs.Height
prsRound = prs.Round
prsCatchupCommitRound = prs.CatchupCommitRound
prsCatchupCommit = prs.CatchupCommit
Update prs with values from msg
if prs.Height or prs.Round has been updated then
reset Proposal related fields of the peer state
if prs.Round has been updated and msg.Round == prsCatchupCommitRound then
        prs.Precommits = prsCatchupCommit
if prs.Height has been updated then
if prsHeight+1 == msg.Height && prsRound == msg.LastCommitRound then
prs.LastCommitRound = msg.LastCommitRound
prs.LastCommit = prs.Precommits
        else
            prs.LastCommitRound = msg.LastCommitRound
            prs.LastCommit = nil
Reset prs.CatchupCommitRound and prs.CatchupCommit
```
### CommitStepMessage handler
```
handleMessage(msg):
if prs.Height == msg.Height then
prs.ProposalBlockPartsHeader = msg.BlockPartsHeader
prs.ProposalBlockParts = msg.BlockParts
```
### HasVoteMessage handler
```
handleMessage(msg):
if prs.Height == msg.Height then
prs.setHasVote(msg.Height, msg.Round, msg.Type, msg.Index)
```
### VoteSetMaj23Message handler
```
handleMessage(msg):
if prs.Height == msg.Height then
        Record in rs that the peer claims to have a +2/3 majority for msg.BlockID
        Send VoteSetBitsMessage showing the votes this node has for that BlockID
```
### ProposalMessage handler
```
handleMessage(msg):
if prs.Height != msg.Height || prs.Round != msg.Round || prs.Proposal then return
prs.Proposal = true
prs.ProposalBlockPartsHeader = msg.BlockPartsHeader
prs.ProposalBlockParts = empty set
prs.ProposalPOLRound = msg.POLRound
prs.ProposalPOL = nil
Send msg through internal peerMsgQueue to ConsensusState service
```
### ProposalPOLMessage handler
```
handleMessage(msg):
if prs.Height != msg.Height or prs.ProposalPOLRound != msg.ProposalPOLRound then return
prs.ProposalPOL = msg.ProposalPOL
```
### BlockPartMessage handler
```
handleMessage(msg):
if prs.Height != msg.Height || prs.Round != msg.Round then return
Record in prs that peer has block part msg.Part.Index
    Send msg through internal peerMsgQueue to ConsensusState service
```
### VoteMessage handler
```
handleMessage(msg):
    Record in prs that the peer knows the vote with index msg.vote.ValidatorIndex for the particular height and round
    Send msg through internal peerMsgQueue to ConsensusState service
```
### VoteSetBitsMessage handler
```
handleMessage(msg):
Update prs for the bit-array of votes peer claims to have for the msg.BlockID
```
## Gossip Data Routine
It is used to send the following messages to the peer: `BlockPartMessage`, `ProposalMessage` and
`ProposalPOLMessage` on the DataChannel. The gossip data routine is based on the local RoundState (`rs`)
and the known PeerRoundState (`prs`). The routine repeats forever the logic shown below:
```
1a) if rs.ProposalBlockPartsHeader == prs.ProposalBlockPartsHeader and the peer does not have all the proposal parts then
Part = pick a random proposal block part the peer does not have
Send BlockPartMessage(rs.Height, rs.Round, Part) to the peer on the DataChannel
if send returns true, record that the peer knows the corresponding block Part
Continue
1b) if (0 < prs.Height) and (prs.Height < rs.Height) then
help peer catch up using gossipDataForCatchup function
Continue
1c) if (rs.Height != prs.Height) or (rs.Round != prs.Round) then
Sleep PeerGossipSleepDuration
Continue
// at this point rs.Height == prs.Height and rs.Round == prs.Round
1d) if (rs.Proposal != nil and !prs.Proposal) then
Send ProposalMessage(rs.Proposal) to the peer
if send returns true, record that the peer knows Proposal
if 0 <= rs.Proposal.POLRound then
polRound = rs.Proposal.POLRound
prevotesBitArray = rs.Votes.Prevotes(polRound).BitArray()
Send ProposalPOLMessage(rs.Height, polRound, prevotesBitArray)
Continue
2) Sleep PeerGossipSleepDuration
```
### Gossip Data For Catchup
This function is responsible for helping the peer catch up if it is at a smaller height (prs.Height < rs.Height).
The function executes the following logic:
if peer does not have all block parts for prs.ProposalBlockPart then
blockMeta = Load Block Metadata for height prs.Height from blockStore
        if blockMeta.BlockID.PartsHeader != prs.ProposalBlockPartsHeader then
Sleep PeerGossipSleepDuration
return
Part = pick a random proposal block part the peer does not have
Send BlockPartMessage(prs.Height, prs.Round, Part) to the peer on the DataChannel
if send returns true, record that the peer knows the corresponding block Part
return
else Sleep PeerGossipSleepDuration
## Gossip Votes Routine
It is used to send the following message: `VoteMessage` on the VoteChannel.
The gossip votes routine is based on the local RoundState (`rs`)
and the known PeerRoundState (`prs`). The routine repeats forever the logic shown below:
```
1a) if rs.Height == prs.Height then
if prs.Step == RoundStepNewHeight then
vote = random vote from rs.LastCommit the peer does not have
Send VoteMessage(vote) to the peer
if send returns true, continue
if prs.Step <= RoundStepPrevote and prs.Round != -1 and prs.Round <= rs.Round then
Prevotes = rs.Votes.Prevotes(prs.Round)
vote = random vote from Prevotes the peer does not have
Send VoteMessage(vote) to the peer
if send returns true, continue
if prs.Step <= RoundStepPrecommit and prs.Round != -1 and prs.Round <= rs.Round then
Precommits = rs.Votes.Precommits(prs.Round)
vote = random vote from Precommits the peer does not have
Send VoteMessage(vote) to the peer
if send returns true, continue
if prs.ProposalPOLRound != -1 then
PolPrevotes = rs.Votes.Prevotes(prs.ProposalPOLRound)
vote = random vote from PolPrevotes the peer does not have
Send VoteMessage(vote) to the peer
if send returns true, continue
1b) if prs.Height != 0 and rs.Height == prs.Height+1 then
vote = random vote from rs.LastCommit peer does not have
Send VoteMessage(vote) to the peer
if send returns true, continue
1c) if prs.Height != 0 and rs.Height >= prs.Height+2 then
Commit = get commit from BlockStore for prs.Height
vote = random vote from Commit the peer does not have
Send VoteMessage(vote) to the peer
if send returns true, continue
2) Sleep PeerGossipSleepDuration
```
## QueryMaj23Routine
It is used to send the following message: `VoteSetMaj23Message`. `VoteSetMaj23Message` is sent to indicate that a given
BlockID has seen +2/3 votes. This routine is based on the local RoundState (`rs`) and the known PeerRoundState
(`prs`). The routine repeats forever the logic shown below.
```
1a) if rs.Height == prs.Height then
Prevotes = rs.Votes.Prevotes(prs.Round)
if there is a ⅔ majority for some blockId in Prevotes then
m = VoteSetMaj23Message(prs.Height, prs.Round, Prevote, blockId)
Send m to peer
Sleep PeerQueryMaj23SleepDuration
1b) if rs.Height == prs.Height then
Precommits = rs.Votes.Precommits(prs.Round)
if there is a ⅔ majority for some blockId in Precommits then
m = VoteSetMaj23Message(prs.Height,prs.Round,Precommit,blockId)
Send m to peer
Sleep PeerQueryMaj23SleepDuration
1c) if rs.Height == prs.Height and prs.ProposalPOLRound >= 0 then
Prevotes = rs.Votes.Prevotes(prs.ProposalPOLRound)
if there is a ⅔ majority for some blockId in Prevotes then
m = VoteSetMaj23Message(prs.Height,prs.ProposalPOLRound,Prevotes,blockId)
Send m to peer
Sleep PeerQueryMaj23SleepDuration
1d) if prs.CatchupCommitRound != -1 and 0 < prs.Height and
prs.Height <= blockStore.Height() then
Commit = LoadCommit(prs.Height)
m = VoteSetMaj23Message(prs.Height,Commit.Round,Precommit,Commit.blockId)
Send m to peer
Sleep PeerQueryMaj23SleepDuration
2) Sleep PeerQueryMaj23SleepDuration
```
## Broadcast routine
The Broadcast routine subscribes to an internal event bus to receive new round steps, vote messages and proposal
heartbeat messages, and broadcasts messages to its peers upon receiving those events.
It broadcasts a `NewRoundStepMessage` or `CommitStepMessage` upon a new round state event. Note that
broadcasting these messages does not depend on the PeerRoundState; they are sent on the StateChannel.
Upon receiving a VoteMessage it broadcasts a `HasVoteMessage` to its peers on the StateChannel.
A `ProposalHeartbeatMessage` is sent the same way on the StateChannel.
## Channels
Defines 4 channels: state, data, vote and vote_set_bits. Each channel
has `SendQueueCapacity` and `RecvBufferCapacity` and
`RecvMessageCapacity` set to `maxMsgSize`.
Sending incorrectly encoded data will result in stopping the peer.

View File

@ -0,0 +1,212 @@
# Tendermint Consensus Reactor
Tendermint Consensus is a distributed protocol executed by validator processes to agree on
the next block to be added to the Tendermint blockchain. The protocol proceeds in rounds, where
each round is an attempt to reach agreement on the next block. A round starts with a dedicated
process (called the proposer) suggesting to the other processes, via a `ProposalMessage`, what the
next block should be.
The processes respond by voting for a block with a `VoteMessage` (there are two kinds of vote
messages, prevotes and precommits). Note that a proposal message is just a suggestion of what the
next block should be; a validator might vote with a `VoteMessage` for a different block. If, in some
round, enough processes vote for the same block, then this block is committed and later
added to the blockchain. `ProposalMessage` and `VoteMessage` are signed by the private key of the
validator. The internals of the protocol and how it ensures safety and liveness properties are
explained in a forthcoming document.
For efficiency reasons, validators in the Tendermint consensus protocol do not agree directly on the
block, as blocks can be large, i.e., they don't embed the block inside `Proposal` and
`VoteMessage`. Instead, they reach agreement on the `BlockID` (see the `BlockID` definition in the
[Blockchain](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/blockchain.md#blockid) section) that uniquely identifies each block. The block itself is
disseminated to validator processes using a peer-to-peer gossiping protocol. It starts with the
proposer splitting a block into a number of block parts that are then gossiped between
processes using `BlockPartMessage`.
Validators in Tendermint communicate by a peer-to-peer gossiping protocol. Each validator is connected
only to a subset of processes called peers. Through the gossiping protocol, a validator sends its peers
all the information (`ProposalMessage`, `VoteMessage` and `BlockPartMessage`) they need to
reach agreement on some block and to obtain the content of the chosen block (block parts). As
part of the gossiping protocol, processes also send auxiliary messages that inform peers about the
executed steps of the core consensus algorithm (`NewRoundStepMessage` and `CommitStepMessage`), and
also messages that inform peers what votes the process has seen (`HasVoteMessage`,
`VoteSetMaj23Message` and `VoteSetBitsMessage`). These messages are then used in the gossiping
protocol to determine what messages a process should send to its peers.
We now describe the content of each message exchanged during Tendermint consensus protocol.
## ProposalMessage
ProposalMessage is sent when a new block is proposed. It is a suggestion of what the
next block in the blockchain should be.
```go
type ProposalMessage struct {
Proposal Proposal
}
```
### Proposal
Proposal contains the height and round for which this proposal is made, the BlockID as a unique identifier
of the proposed block, a timestamp, and two fields (POLRound and POLBlockID) that are needed for
termination of the consensus. The message is signed by the validator private key.
```go
type Proposal struct {
Height int64
Round int
Timestamp Time
BlockID BlockID
POLRound int
POLBlockID BlockID
Signature Signature
}
```
NOTE: In the current version of Tendermint, the consensus value in the proposal is represented with a
PartSetHeader, and with a BlockID in the vote message. These should be aligned as suggested in this spec, since
BlockID contains the PartSetHeader.
## VoteMessage
VoteMessage is sent to vote for some block (or to inform others that a process does not vote in the
current round). Vote is defined in the [Blockchain](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/blockchain.md#blockid) section and contains the validator's
information (validator address and index), the height and round for which the vote is sent, the vote type,
the BlockID if the process votes for some block (`nil` otherwise) and a timestamp when the vote is sent. The
message is signed by the validator private key.
```go
type VoteMessage struct {
Vote Vote
}
```
## BlockPartMessage
BlockPartMessage is sent when gossiping a piece of the proposed block. It contains height, round
and the block part.
```go
type BlockPartMessage struct {
Height int64
Round int
Part Part
}
```
## ProposalHeartbeatMessage
ProposalHeartbeatMessage is sent to signal that a node is alive and waiting for transactions
so that it can create the next block proposal.
```go
type ProposalHeartbeatMessage struct {
Heartbeat Heartbeat
}
```
### Heartbeat
Heartbeat contains validator information (address and index),
height, round and sequence number. It is signed by the private key of the validator.
```go
type Heartbeat struct {
ValidatorAddress []byte
ValidatorIndex int
Height int64
Round int
Sequence int
Signature Signature
}
```
## NewRoundStepMessage
NewRoundStepMessage is sent for every step transition during the core consensus algorithm execution.
It is used in the gossip part of the Tendermint protocol to inform peers about the current
height/round/step a process is in.
```go
type NewRoundStepMessage struct {
Height int64
Round int
Step RoundStepType
SecondsSinceStartTime int
LastCommitRound int
}
```
## CommitStepMessage
CommitStepMessage is sent when an agreement on some block is reached. It contains the height for which
agreement is reached, the block parts header that describes the decided block and is used to obtain all
block parts, and a bit array of the block parts the process currently has, so its peers know which
parts it is missing and can send them.
```go
type CommitStepMessage struct {
Height int64
BlockID BlockID
BlockParts BitArray
}
```
TODO: We use BlockID instead of BlockPartsHeader (in current implementation) for symmetry.
## ProposalPOLMessage
ProposalPOLMessage is sent when a previous block is re-proposed.
It is used to inform peers in what round the process learned of this block (ProposalPOLRound),
and what prevotes for the re-proposed block the process has.
```go
type ProposalPOLMessage struct {
Height int64
ProposalPOLRound int
ProposalPOL BitArray
}
```
## HasVoteMessage
HasVoteMessage is sent to indicate that a particular vote has been received. It contains height,
round, vote type and the index of the validator that is the originator of the corresponding vote.
```go
type HasVoteMessage struct {
Height int64
Round int
Type byte
Index int
}
```
## VoteSetMaj23Message
VoteSetMaj23Message is sent to indicate that a process has seen +2/3 votes for some BlockID.
It contains height, round, vote type and the BlockID.
```go
type VoteSetMaj23Message struct {
Height int64
Round int
Type byte
BlockID BlockID
}
```
## VoteSetBitsMessage
VoteSetBitsMessage is sent to communicate the bit-array of votes a process has seen for a given
BlockID. It contains height, round, vote type, BlockID and a bit array of
the votes a process has.
```go
type VoteSetBitsMessage struct {
Height int64
Round int
Type byte
BlockID BlockID
Votes BitArray
}
```

View File

@ -0,0 +1,46 @@
# Proposer selection procedure in Tendermint
This document specifies the Proposer Selection Procedure that is used in Tendermint to choose a round proposer.
As Tendermint is a “leader-based” protocol, the proposer selection is critical for its correct functioning.
Let `proposer_p(h,r)` denote the process returned by the Proposer Selection Procedure at process p, at height h
and round r. Then the Proposer Selection Procedure should fulfill the following properties:
`Agreement`: Given a validator set V, and two honest validators,
p and q, for each height h, and each round r,
proposer_p(h,r) = proposer_q(h,r)
`Liveness`: In every consecutive sequence of rounds of size K (K is a system parameter), at least a
single round has an honest proposer.
`Fairness`: The proposer selection is proportional to the validator voting power, i.e., a validator with more
voting power is selected more frequently, proportional to its power. More precisely, given a set of processes
with the total voting power N, during a sequence of rounds of size N, every process is proposer in a number of rounds
equal to its voting power.
We now look at a few particular cases to understand better how fairness should be implemented.
If we have 4 processes with the following voting power distribution (p0,4), (p1, 2), (p2, 2), (p3, 2) at some round r,
we have the following sequence of proposer selections in the following rounds:
`p0, p1, p2, p3, p0, p0, p1, p2, p3, p0, p0, p1, p2, p3, p0, p0, p1, p2, p3, p0, etc`
Let us now consider a scenario where the total voting power of the faulty processes is aggregated in a single process
p0: (p0,3), (p1, 1), (p2, 1), (p3, 1), (p4, 1), (p5, 1), (p6, 1), (p7, 1).
In this case the sequence of proposer selections looks like this:
`p0, p1, p2, p3, p0, p4, p5, p6, p7, p0, p0, p1, p2, p3, p0, p4, p5, p6, p7, p0, etc`
In this case, we see that the number of rounds coordinated by a faulty process is proportional to its voting power.
We also consider the case where voting power is uniformly distributed among processes, i.e., we have 10 processes
each with a voting power of 1, and there are 3 faulty processes with consecutive addresses,
for example the first 3 processes. Then the sequence looks like this:
`p0, p1, p2, p3, p4, p5, p6, p7, p8, p9, p0, p1, p2, p3, p4, p5, p6, p7, p8, p9, etc`
In this case, we have 3 consecutive rounds with a faulty proposer.
One special case we consider is where a single honest process p0 has most of the voting power, for example:
(p0,100), (p1, 2), (p2, 3), (p3, 4). Then the sequence of proposer selections looks like this:
`p0, p0, p0, p0, p0, p0, p0, p0, p0, p0, p0, p0, p0, p1, p0, p0, p0, p0, p0, etc`
This basically means that almost all rounds have the same proposer. But in this case, the process p0 has enough
voting power to decide whatever it wants anyway, so the fact that it coordinates almost all rounds seems correct.
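A common way to obtain such a weighted round-robin schedule (a sketch, not necessarily the exact Tendermint implementation) is to give every validator an accumulator that is incremented by its voting power each round, to pick the validator with the highest accumulated value as the proposer, and to then decrease that validator's accumulator by the total voting power:

```go
type validator struct {
	name  string
	power int64 // voting power
	accum int64 // accumulated priority
}

// nextProposer advances every accumulator by one round and returns the proposer.
func nextProposer(vals []*validator) *validator {
	var total int64
	var proposer *validator
	for _, v := range vals {
		v.accum += v.power
		total += v.power
		if proposer == nil || v.accum > proposer.accum {
			proposer = v
		}
	}
	// The chosen proposer "pays" the total voting power, so over a window of
	// rounds every validator is selected proportionally to its power.
	proposer.accum -= total
	return proposer
}
```

Running this procedure on the first example above, (p0,4), (p1, 2), (p2, 2), (p3, 2), reproduces the sequence `p0, p1, p2, p3, p0, p0, p1, p2, p3, p0, ...`.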

View File

@ -0,0 +1,10 @@
# Evidence Reactor
## Channels
[#1503](https://github.com/tendermint/tendermint/issues/1503)
Sending invalid evidence will result in stopping the peer.
Sending incorrectly encoded data or data exceeding `maxMsgSize` will result
in stopping the peer.

View File

@ -0,0 +1,8 @@
# Mempool Concurrency
Look at the concurrency model this uses...
* Receiving CheckTx
* Broadcasting new tx
* Interfaces with consensus engine, reap/update while checking
* Calling the ABCI app (ordering. callbacks. how proxy works alongside the blockchain proxy which actually writes blocks)

View File

@ -0,0 +1,59 @@
# Mempool Configuration
Here we describe configuration options around the mempool.
For the purposes of this document, they are described
as command-line flags, but they can also be passed in as
environment variables or in the config.toml file. The
following are all equivalent:
Flag: `--mempool.recheck_empty=false`
Environment: `TM_MEMPOOL_RECHECK_EMPTY=false`
Config:
```
[mempool]
recheck_empty = false
```
## Recheck
`--mempool.recheck=false` (default: true)
`--mempool.recheck_empty=false` (default: true)
Recheck determines if the mempool rechecks all pending
transactions after a block is committed. Once a block
is committed, the mempool removes all valid transactions
that were successfully included in the block.
If `recheck` is true, then it will rerun CheckTx on
all remaining transactions with the new block state.
If the block contained no transactions, it will skip the
recheck unless `recheck_empty` is true.
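As an illustration only (the type, field and method names below are assumptions, not the real mempool API), the post-commit update could look roughly like this:

```go
type Tx []byte

type MempoolConfig struct {
	Recheck      bool // --mempool.recheck
	RecheckEmpty bool // --mempool.recheck_empty
}

type Mempool struct {
	config  MempoolConfig
	txs     []Tx
	checkTx func(Tx) error // runs CheckTx against the ABCI app
}

// Update is called after a block is committed: it drops the included txs and,
// depending on the recheck flags, re-runs CheckTx on everything that remains.
func (mem *Mempool) Update(committed map[string]bool) {
	remaining := mem.txs[:0]
	for _, tx := range mem.txs {
		if !committed[string(tx)] {
			remaining = append(remaining, tx)
		}
	}
	mem.txs = remaining

	if mem.config.Recheck && (len(committed) > 0 || mem.config.RecheckEmpty) {
		for _, tx := range mem.txs {
			mem.checkTx(tx) // re-validate against the new block state
		}
	}
}
```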
## Broadcast
`--mempool.broadcast=false` (default: true)
Determines whether this node gossips any valid transactions
that arrive in the mempool. The default is to gossip anything that
passes CheckTx. If this is disabled, transactions are not
gossiped, but are instead stored locally and added to the next
block for which this node is the proposer.
## WalDir
`--mempool.wal_dir=/tmp/gaia/mempool.wal` (default: $TM_HOME/data/mempool.wal)
This defines the directory where the mempool writes its write-ahead
logs. These files can be used to reload unbroadcast
transactions if the node crashes.
If the directory passed in is an absolute path, the wal file is
created there. If the directory is a relative path, the path is
appended to the home directory of the tendermint process to
generate an absolute path to the wal directory
(default `$HOME/.tendermint`, or set via `TM_HOME` or `--home`).

View File

@ -0,0 +1,37 @@
# Mempool Functionality
The mempool maintains a list of potentially valid transactions,
both to broadcast to other nodes and to provide to the
consensus reactor when it is selected as the block proposer.
There are two sides to the mempool state:
* External: get, check, and broadcast new transactions
* Internal: return valid transactions, update the list after a block commit
## External functionality
External functionality is exposed via network interfaces
to potentially untrusted actors.
* CheckTx - triggered via RPC or P2P
* Broadcast - gossip messages after a successful check
## Internal functionality
Internal functionality is exposed via method calls to other
code compiled into the tendermint binary.
* Reap - get txs to propose in the next block
* Update - remove txs that were included in the last block
* ABCI.CheckTx - call the ABCI app to validate the tx
What does it provide the consensus reactor?
What guarantees does it need from the ABCI app?
(talk about interleaving processes in concurrency)
## Optimizations
Talk about the LRU cache to make sure we don't process any
tx that we have seen before
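A minimal sketch of such a cache, keyed by the raw tx bytes (illustrative only; the real mempool cache may differ):

```go
import "container/list"

type txCache struct {
	size int
	list *list.List               // oldest entry at the front
	seen map[string]*list.Element // tx bytes -> list element
}

func newTxCache(size int) *txCache {
	return &txCache{size: size, list: list.New(), seen: make(map[string]*list.Element)}
}

// Push returns false if the tx has been seen before; otherwise it records the
// tx, evicting the oldest entry when the cache is full, and returns true.
func (c *txCache) Push(tx []byte) bool {
	key := string(tx)
	if _, ok := c.seen[key]; ok {
		return false
	}
	if c.list.Len() >= c.size {
		oldest := c.list.Front()
		c.list.Remove(oldest)
		delete(c.seen, oldest.Value.(string))
	}
	c.seen[key] = c.list.PushBack(key)
	return true
}
```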

View File

@ -0,0 +1,61 @@
# Mempool Messages
## P2P Messages
There is currently only one message that Mempool broadcasts
and receives over the p2p gossip network (via the reactor):
`TxMessage`
```go
// TxMessage is a MempoolMessage containing a transaction.
type TxMessage struct {
Tx types.Tx
}
```
TxMessage is go-wire encoded and prepended with `0x1` as a
"type byte". This is followed by a go-wire encoded byte-slice.
The prefix for a 40-byte (0x28) tx is `0x010128...`, followed by
the actual 40-byte tx. The prefix for a 350-byte (0x015e) tx is
`0x0102015e...`, followed by the actual 350-byte tx.
(Please see the [go-wire repo](https://github.com/tendermint/go-wire#an-interface-example) for more information)
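As a sanity check of the description above, the small helper below (illustrative only) computes that prefix for a tx of a given length: the `0x1` type byte, the number of length bytes, and the big-endian length itself.

```go
// txMessagePrefix returns the bytes that precede the raw tx of length txLen
// in an encoded TxMessage, following the description above.
func txMessagePrefix(txLen int) []byte {
	var lenBytes []byte // minimal big-endian encoding of the length
	for n := txLen; n > 0; n >>= 8 {
		lenBytes = append([]byte{byte(n)}, lenBytes...)
	}
	prefix := []byte{0x01, byte(len(lenBytes))} // type byte, then number of length bytes
	return append(prefix, lenBytes...)
}

// txMessagePrefix(40)  -> 0x01 0x01 0x28
// txMessagePrefix(350) -> 0x01 0x02 0x01 0x5E
```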
## RPC Messages
Mempool exposes `CheckTx([]byte)` over the RPC interface.
It can be posted via `broadcast_commit`, `broadcast_sync` or
`broadcast_async`. They all parse a message with one argument,
`"tx": "HEX_ENCODED_BINARY"` and differ only in how long they
wait before returning (sync makes sure CheckTx passes, commit
makes sure it was included in a signed block).
Request (`POST http://gaia.zone:26657/`):
```json
{
"id": "",
"jsonrpc": "2.0",
"method": "broadcast_sync",
"params": {
"tx": "F012A4BC68..."
}
}
```
Response:
```json
{
"error": "",
"result": {
"hash": "E39AAB7A537ABAA237831742DCE1117F187C3C52",
"log": "",
"data": "",
"code": 0
},
"id": "",
"jsonrpc": "2.0"
}
```

View File

@ -0,0 +1,14 @@
# Mempool Reactor
## Channels
[#1503](https://github.com/tendermint/tendermint/issues/1503)
Mempool maintains a cache of the last 10000 transactions to prevent
replaying old transactions (plus transactions coming from other
validators, who are continually exchanging transactions). Read [Replay
Protection](https://tendermint.readthedocs.io/projects/tools/en/master/app-development.html?#replay-protection)
for details.
Sending incorrectly encoded data or data exceeding `maxMsgSize` will result
in stopping the peer.

View File

@ -0,0 +1,123 @@
# Peer Strategy and Exchange
Here we outline the design of the AddressBook
and how it is used by the Peer Exchange Reactor (PEX) to ensure we are connected
to good peers and to gossip peers to others.
## Peer Types
Certain peers are special in that they are specified by the user as `persistent`,
which means we auto-redial them if the connection fails, or if we fail to dial
them.
Some peers can be marked as `private`, which means
we will not put them in the address book or gossip them to others.
All peers except private peers are tracked using the address book.
## Discovery
Peer discovery begins with a list of seeds.
When we have no peers, or have been unable to find enough peers from existing ones,
we dial a randomly selected seed to get a list of peers to dial.
On startup, we will also immediately dial the given list of `persistent_peers`,
and will attempt to maintain persistent connections with them. If the connections die, or we fail to dial,
we will redial every 5s for a few minutes, then switch to an exponential backoff schedule,
and after about a day of trying, stop dialing the peer.
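The exact schedule is an implementation detail; the simplified sketch below (with made-up constants and without the initial fixed 5s phase) illustrates redialing with exponential backoff and jitter:

```go
import (
	"math/rand"
	"time"
)

// dialWithBackoff keeps trying to dial a persistent peer until it succeeds
// or roughly a day has passed. All constants are illustrative.
func dialWithBackoff(dial func() error) {
	backoff := 5 * time.Second
	const maxBackoff = 3 * time.Hour
	deadline := time.Now().Add(24 * time.Hour)

	for time.Now().Before(deadline) {
		if err := dial(); err == nil {
			return // connected
		}
		jitter := time.Duration(rand.Int63n(int64(backoff) / 2))
		time.Sleep(backoff + jitter)
		if backoff < maxBackoff {
			backoff *= 2
		}
	}
	// Give up for now; the peer stays in the address book and may be retried later.
}
```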
So long as we have fewer than `MinNumOutboundPeers`, we periodically request additional peers
from each of our existing peers. If sufficient time goes by and we still can't find enough peers,
we try the seeds again.
## Listening
Peers listen on a configurable ListenAddr that they self-report in their
NodeInfo during handshakes with other peers. Peers accept up to (MaxNumPeers -
MinNumOutboundPeers) incoming peers.
## Address Book
Peers are tracked via their ID (their PubKey.Address()).
Peers are added to the address book from the PEX when they first connect to us or
when we hear about them from other peers.
The address book is arranged in sets of buckets, and distinguishes between
vetted (old) and unvetted (new) peers. It keeps different sets of buckets for vetted and
unvetted peers. Buckets provide randomization over peer selection. Peers are put
in buckets according to their IP groups.
A vetted peer can only be in one bucket. An unvetted peer can be in multiple buckets, and
each instance of the peer can have a different IP:PORT.
If we're trying to add a new peer but there's no space in its bucket, we'll
remove the worst peer from that bucket to make room.
## Vetting
When a peer is first added, it is unvetted.
Marking a peer as vetted is outside the scope of the `p2p` package.
For Tendermint, a Peer becomes vetted once it has contributed sufficiently
at the consensus layer; ie. once it has sent us valid and not-yet-known
votes and/or block parts for `NumBlocksForVetted` blocks.
Other users of the p2p package can determine their own conditions for when a peer is marked vetted.
If a peer becomes vetted but there are already too many vetted peers,
a randomly selected one of the vetted peers becomes unvetted.
If a peer becomes unvetted (either a new peer, or one that was previously vetted),
a randomly selected one of the unvetted peers is removed from the address book.
More fine-grained tracking of peer behaviour can be done using
a trust metric (see below), but it's best to start with something simple.
## Select Peers to Dial
When we need more peers, we pick them randomly from the addrbook with some
configurable bias for unvetted peers. The bias should be lower when we have fewer peers
and can increase as we obtain more, ensuring that our first peers are more trustworthy,
but always giving us the chance to discover new good peers.
We track the last time we dialed a peer and the number of unsuccessful attempts
we've made. If too many attempts are made, we mark the peer as bad.
Connection attempts are made with exponential backoff (plus jitter). Because
the selection process happens every `ensurePeersPeriod`, we might not end up
dialing a peer for much longer than the backoff duration.
If we fail to connect to the peer after 16 tries (with exponential backoff), we remove it from the address book completely.
## Select Peers to Exchange
When we're asked for peers, we select them as follows:
- select at most `maxGetSelection` peers
- try to select at least `minGetSelection` peers - if we have less than that, select them all.
- select a random, unbiased `getSelectionPercent` of the peers
Send the selected peers. Note we select peers for sending without bias for vetted/unvetted.
## Preventing Spam
There are various cases where we decide a peer has misbehaved and we disconnect from them.
When this happens, the peer is removed from the address book and blacklisted for
some amount of time. We call this "Disconnect and Mark".
Note that the bad behaviour may be detected outside the PEX reactor itself
(for instance, in the mconnection, or another reactor), but it must be communicated to the PEX reactor
so it can remove and mark the peer.
In the PEX, if a peer sends us an unsolicited list of peers,
or if the peer sends a request too soon after another one,
we Disconnect and MarkBad.
## Trust Metric
The quality of peers can be tracked in more fine-grained detail using a
Proportional-Integral-Derivative (PID) controller that incorporates
current, past, and rate-of-change data to inform peer quality.
While a PID trust metric has been implemented, it remains for future work
to use it in the PEX.
See the [trustmetric](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-006-trust-metric.md)
and [trustmetric usage](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-007-trust-metric-usage.md)
architecture docs for more details.

View File

@ -0,0 +1,12 @@
# PEX Reactor
## Channels
Defines only `SendQueueCapacity`. [#1503](https://github.com/tendermint/tendermint/issues/1503)
Implements rate-limiting by enforcing minimal time between two consecutive
`pexRequestMessage` requests. If the peer sends us addresses we did not ask,
it is stopped.
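Schematically (a sketch; the struct, field and method names are assumptions):

```go
import "time"

type pexReactor struct {
	minRequestInterval   time.Duration
	lastReceivedRequests map[string]time.Time
}

// receiveRequest returns false when a pexRequestMessage arrives too soon after
// the previous one from the same peer; the caller then disconnects from and
// marks the peer.
func (r *pexReactor) receiveRequest(peerID string) bool {
	now := time.Now()
	if last, ok := r.lastReceivedRequests[peerID]; ok && now.Sub(last) < r.minRequestInterval {
		return false
	}
	r.lastReceivedRequests[peerID] = now
	return true
}
```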
Sending incorrectly encoded data or data exceeding `maxMsgSize` will result
in stopping the peer.

133
docs/spec/scripts/crypto.go Normal file
View File

@ -0,0 +1,133 @@
package main
import (
"fmt"
"github.com/tendermint/tendermint/crypto"
)
// SECRET
var SECRET = []byte("some secret")
func printEd() {
priv := crypto.GenPrivKeyEd25519FromSecret(SECRET)
pub := priv.PubKey().(crypto.PubKeyEd25519)
sigV, err := priv.Sign([]byte("hello"))
if err != nil {
fmt.Println("Unexpected error:", err)
}
sig := sigV.(crypto.SignatureEd25519)
name := "tendermint/PubKeyEd25519"
length := len(pub[:])
fmt.Println("### PubKeyEd25519")
fmt.Println("")
fmt.Println("```")
fmt.Printf("// Name: %s\n", name)
fmt.Printf("// PrefixBytes: 0x%X \n", pub.Bytes()[:4])
fmt.Printf("// Length: 0x%X \n", length)
fmt.Println("// Notes: raw 32-byte Ed25519 pubkey")
fmt.Println("type PubKeyEd25519 [32]byte")
fmt.Println("")
fmt.Println(`func (pubkey PubKeyEd25519) Address() []byte {
// NOTE: hash of the Amino encoded bytes!
return RIPEMD160(AminoEncode(pubkey))
}`)
fmt.Println("```")
fmt.Println("")
fmt.Printf("For example, the 32-byte Ed25519 pubkey `%X` would be encoded as `%X`.\n\n", pub[:], pub.Bytes())
fmt.Printf("The address would then be `RIPEMD160(0x%X)` or `%X`\n", pub.Bytes(), pub.Address())
fmt.Println("")
name = "tendermint/SignatureKeyEd25519"
length = len(sig[:])
fmt.Println("### SignatureEd25519")
fmt.Println("")
fmt.Println("```")
fmt.Printf("// Name: %s\n", name)
fmt.Printf("// PrefixBytes: 0x%X \n", sig.Bytes()[:4])
fmt.Printf("// Length: 0x%X \n", length)
fmt.Println("// Notes: raw 64-byte Ed25519 signature")
fmt.Println("type SignatureEd25519 [64]byte")
fmt.Println("```")
fmt.Println("")
fmt.Printf("For example, the 64-byte Ed25519 signature `%X` would be encoded as `%X`\n", sig[:], sig.Bytes())
fmt.Println("")
name = "tendermint/PrivKeyEd25519"
fmt.Println("### PrivKeyEd25519")
fmt.Println("")
fmt.Println("```")
fmt.Println("// Name:", name)
fmt.Println("// Notes: raw 32-byte priv key concatenated to raw 32-byte pub key")
fmt.Println("type PrivKeyEd25519 [64]byte")
fmt.Println("```")
}
func printSecp() {
priv := crypto.GenPrivKeySecp256k1FromSecret(SECRET)
pub := priv.PubKey().(crypto.PubKeySecp256k1)
sigV, err := priv.Sign([]byte("hello"))
if err != nil {
fmt.Println("Unexpected error:", err)
}
sig := sigV.(crypto.SignatureSecp256k1)
name := "tendermint/PubKeySecp256k1"
length := len(pub[:])
fmt.Println("### PubKeySecp256k1")
fmt.Println("")
fmt.Println("```")
fmt.Printf("// Name: %s\n", name)
fmt.Printf("// PrefixBytes: 0x%X \n", pub.Bytes()[:4])
fmt.Printf("// Length: 0x%X \n", length)
fmt.Println("// Notes: OpenSSL compressed pubkey prefixed with 0x02 or 0x03")
fmt.Println("type PubKeySecp256k1 [33]byte")
fmt.Println("")
fmt.Println(`func (pubkey PubKeySecp256k1) Address() []byte {
// NOTE: hash of the raw pubkey bytes (not Amino encoded!).
// Compatible with Bitcoin addresses.
return RIPEMD160(SHA256(pubkey[:]))
}`)
fmt.Println("```")
fmt.Println("")
fmt.Printf("For example, the 33-byte Secp256k1 pubkey `%X` would be encoded as `%X`\n\n", pub[:], pub.Bytes())
fmt.Printf("The address would then be `RIPEMD160(SHA256(0x%X))` or `%X`\n", pub[:], pub.Address())
fmt.Println("")
name = "tendermint/SignatureKeySecp256k1"
fmt.Println("### SignatureSecp256k1")
fmt.Println("")
fmt.Println("```")
fmt.Printf("// Name: %s\n", name)
fmt.Printf("// PrefixBytes: 0x%X \n", sig.Bytes()[:4])
fmt.Printf("// Length: Variable\n")
fmt.Printf("// Encoding prefix: Variable\n")
fmt.Println("// Notes: raw bytes of the Secp256k1 signature")
fmt.Println("type SignatureSecp256k1 []byte")
fmt.Println("```")
fmt.Println("")
fmt.Printf("For example, the Secp256k1 signature `%X` would be encoded as `%X`\n", []byte(sig[:]), sig.Bytes())
fmt.Println("")
name = "tendermint/PrivKeySecp256k1"
fmt.Println("### PrivKeySecp256k1")
fmt.Println("")
fmt.Println("```")
fmt.Println("// Name:", name)
fmt.Println("// Notes: raw 32-byte priv key")
fmt.Println("type PrivKeySecp256k1 [32]byte")
fmt.Println("```")
}
func main() {
printEd()
fmt.Println("")
printSecp()
}

192
docs/spec/software/abci.md Normal file
View File

@ -0,0 +1,192 @@
# Application Blockchain Interface (ABCI)
ABCI is the interface between Tendermint (a state-machine replication engine)
and an application (the actual state machine).
The ABCI message types are defined in a [protobuf
file](https://github.com/tendermint/abci/blob/master/types/types.proto).
For full details on the ABCI message types and protocol, see the [ABCI
specification](https://github.com/tendermint/abci/blob/master/specification.rst).
Be sure to read the specification if you're trying to build an ABCI app!
For additional details on server implementation, see the [ABCI
readme](https://github.com/tendermint/abci#implementation).
Here we provide some more details around the use of ABCI by Tendermint and
clarify common "gotchas".
## ABCI connections
Tendermint opens 3 ABCI connections to the app: one for Consensus, one for
Mempool, one for Queries.
## Async vs Sync
The main ABCI server (ie. non-GRPC) provides ordered asynchronous messages.
This is useful for DeliverTx and CheckTx, since it allows Tendermint to forward
transactions to the app before it's finished processing previous ones.
Thus, DeliverTx and CheckTx messages are sent asynchronously, while all other
messages are sent synchronously.
## CheckTx and Commit
It is typical to hold three distinct states in an ABCI app: CheckTxState, DeliverTxState,
QueryState. The QueryState contains the latest committed state for a block.
The CheckTxState and DeliverTxState may be updated concurrently with one another.
Before Commit is called, Tendermint locks and flushes the mempool so that no new changes will happen
to CheckTxState. When Commit completes, it unlocks the mempool.
Thus, during Commit, it is safe to reset the QueryState and the CheckTxState to the latest DeliverTxState
(ie. the new state from executing all the txs in the block).
Note, however, that it is not possible to send transactions to Tendermint during Commit - if your app
tries to send a `/broadcast_tx` to Tendermint during Commit, it will deadlock.
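For illustration, an app following this pattern might reset its states in Commit roughly as below; the `State` type and its methods are assumptions of this sketch, not part of ABCI.

```go
import "github.com/tendermint/abci/types"

// State is an application-defined store; Hash, Save and Copy are assumed helpers.
type State struct{ /* application data */ }

func (s *State) Hash() []byte { return nil /* e.g. merkle root of the app state */ }
func (s *State) Save()        { /* persist to disk */ }
func (s *State) Copy() *State { c := *s; return &c }

type MyApp struct {
	types.BaseApplication
	deliverTxState *State // mutated by DeliverTx while executing a block
	checkTxState   *State // mutated by CheckTx for mempool admission
	queryState     *State // last committed state, served by Query
}

// Commit is called once per block, after all DeliverTx messages. Tendermint has
// locked and flushed the mempool, so resetting the other states is safe here.
func (app *MyApp) Commit() types.ResponseCommit {
	appHash := app.deliverTxState.Hash()
	app.deliverTxState.Save()
	app.queryState = app.deliverTxState.Copy()
	app.checkTxState = app.deliverTxState.Copy()
	return types.ResponseCommit{Data: appHash}
}
```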
## EndBlock Validator Updates
Updates to the Tendermint validator set can be made by returning `Validator`
objects in the `ResponseEndBlock`:
```
message Validator {
bytes address = 1;
PubKey pub_key = 2;
int64 power = 3;
}
message PubKey {
string type = 1;
bytes data = 2;
}
```
The `pub_key` currently supports two types:
- `type = "ed25519"` and `data = <raw 32-byte public key>`
- `type = "secp256k1"` and `data = <33-byte OpenSSL compressed public key>`
If the address is provided, it must match the address of the pubkey, as
specified [here](/docs/spec/blockchain/encoding.md#Addresses)
(Note: In the v0.19 series, the `pub_key` is the [Amino encoded public
key](/docs/spec/blockchain/encoding.md#public-key-cryptography).
For Ed25519 pubkeys, the Amino prefix is always "1624DE6220". For example, the 32-byte Ed25519 pubkey
`76852933A4686A721442E931A8415F62F5F1AEDF4910F1F252FB393F74C40C85` would be
Amino encoded as
`1624DE622076852933A4686A721442E931A8415F62F5F1AEDF4910F1F252FB393F74C40C85`)
(Note: In old versions of Tendermint (pre-v0.19.0), the pubkey is just prefixed with a
single type byte, so for ED25519 we'd have `pub_key = 0x1 | pub`)
The `power` is the new voting power for the validator, with the
following rules:
- power must be non-negative
- if power is 0, the validator must already exist, and will be removed from the
validator set
- if power is non-0:
- if the validator does not already exist, it will be added to the validator
set with the given power
- if the validator does already exist, its power will be adjusted to the given power
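For example (a sketch for the hypothetical `MyApp` application above; the Go field names follow the message definitions above and may differ slightly between ABCI versions), removing one validator and adding another could look like:

```go
import "github.com/tendermint/abci/types"

// Raw 32-byte ed25519 public keys; placeholders for this sketch.
var oldValPubKey, newValPubKey []byte

func (app *MyApp) EndBlock(req types.RequestEndBlock) types.ResponseEndBlock {
	return types.ResponseEndBlock{
		ValidatorUpdates: []types.Validator{
			{PubKey: types.PubKey{Type: "ed25519", Data: oldValPubKey}, Power: 0},  // power 0 removes the validator
			{PubKey: types.PubKey{Type: "ed25519", Data: newValPubKey}, Power: 10}, // non-zero power adds or updates
		},
	}
}
```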
## InitChain Validator Updates
ResponseInitChain has the option to return a list of validators.
If the list is not empty, Tendermint will adopt it for the validator set.
This way the application can determine the initial validator set for the
blockchain.
Note that if addresses are included in the returned validators, they must match
the address of the public key.
ResponseInitChain also includes ConsensusParams, but these are presently
ignored.
## Query
Query is a generic message type with lots of flexibility to enable diverse sets
of queries from applications. Tendermint has no requirements from the Query
message for normal operation - that is, the ABCI app developer need not implement Query functionality if they do not wish to.
That said, Tendermint makes a number of queries to support some optional
features. These are:
### Peer Filtering
When Tendermint connects to a peer, it sends two queries to the ABCI application
using the following paths, with no additional data:
- `/p2p/filter/addr/<IP:PORT>`, where `<IP:PORT>` denote the IP address and
the port of the connection
- `/p2p/filter/id/<ID>`, where `<ID>` is the peer node ID (ie. the
pubkey.Address() for the peer's PubKey)
If either of these queries returns a non-zero ABCI code, Tendermint will refuse
to connect to the peer.
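A sketch of an application-side filter (the ban lists and struct are assumptions; only the `Query` paths come from the text above):

```go
import (
	"strings"

	"github.com/tendermint/abci/types"
)

type PeerFilterApp struct {
	types.BaseApplication
	bannedAddrs map[string]bool // "IP:PORT" -> banned
	bannedIDs   map[string]bool // node ID -> banned
}

func (app *PeerFilterApp) Query(req types.RequestQuery) types.ResponseQuery {
	switch {
	case strings.HasPrefix(req.Path, "/p2p/filter/addr/"):
		if app.bannedAddrs[strings.TrimPrefix(req.Path, "/p2p/filter/addr/")] {
			return types.ResponseQuery{Code: 1, Log: "rejected by addr filter"}
		}
	case strings.HasPrefix(req.Path, "/p2p/filter/id/"):
		if app.bannedIDs[strings.TrimPrefix(req.Path, "/p2p/filter/id/")] {
			return types.ResponseQuery{Code: 1, Log: "rejected by id filter"}
		}
	}
	return types.ResponseQuery{Code: 0} // code 0: the peer is allowed
}
```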
## Info and the Handshake/Replay
On startup, Tendermint calls Info on the Query connection to get the latest
committed state of the app. The app MUST return information consistent with the
last block it successfully completed Commit for.
If the app successfully committed block H but not H+1, then `last_block_height =
H` and `last_block_app_hash = <hash returned by Commit for block H>`. If the app
failed during the Commit of block H, then `last_block_height = H-1` and
`last_block_app_hash = <hash returned by Commit for block H-1, which is the hash
in the header of block H>`.
We now distinguish three heights, and describe how Tendermint syncs itself with
the app.
```
storeBlockHeight = height of the last block Tendermint saw a commit for
stateBlockHeight = height of the last block for which Tendermint completed all
block processing and saved all ABCI results to disk
appBlockHeight = height of the last block for which the ABCI app successfully
completed Commit
```
Note we always have `storeBlockHeight >= stateBlockHeight` and `storeBlockHeight >= appBlockHeight`
Note also we never call Commit on an ABCI app twice for the same height.
The procedure is as follows.
First, some simple start conditions:
If `appBlockHeight == 0`, then call InitChain.
If `storeBlockHeight == 0`, we're done.
Now, some sanity checks:
If `storeBlockHeight < appBlockHeight`, error
If `storeBlockHeight < stateBlockHeight`, panic
If `storeBlockHeight > stateBlockHeight+1`, panic
Now, the meat:
If `storeBlockHeight == stateBlockHeight && appBlockHeight < storeBlockHeight`,
replay all blocks in full from `appBlockHeight` to `storeBlockHeight`.
This happens if we completed processing the block, but the app forgot its height.
If `storeBlockHeight == stateBlockHeight && appBlockHeight == storeBlockHeight`, we're done
This happens if we crashed at an opportune spot.
If `storeBlockHeight == stateBlockHeight+1`
This happens if we started processing the block but didn't finish.
If `appBlockHeight < stateBlockHeight`
replay all blocks in full from `appBlockHeight` to `storeBlockHeight-1`,
and replay the block at `storeBlockHeight` using the WAL.
This happens if the app forgot the last block it committed.
If `appBlockHeight == stateBlockHeight`,
replay the last block (storeBlockHeight) in full.
This happens if we crashed before the app finished Commit
If `appBlockHeight == storeBlockHeight`,
update the state using the saved ABCI responses but don't run the block against the real app.
This happens if we crashed after the app finished Commit but before Tendermint saved the state.

33
docs/spec/software/wal.md Normal file
View File

@ -0,0 +1,33 @@
# WAL
The consensus module writes every message to the WAL (write-ahead log).
It also issues an fsync syscall through
[File#Sync](https://golang.org/pkg/os/#File.Sync) for messages signed by this
node (to prevent double signing).
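Schematically (a simplified sketch using a plain file; the real WAL goes through the autofile group described below):

```go
import "os"

type WAL struct {
	file *os.File
}

// Write appends an encoded consensus message to the log.
func (w *WAL) Write(msg []byte) error {
	_, err := w.file.Write(append(msg, '\n'))
	return err
}

// WriteSync is used for messages signed by this node: the entry is fsynced
// before the signature can leave the process, so a crash and replay cannot
// lead to double signing.
func (w *WAL) WriteSync(msg []byte) error {
	if err := w.Write(msg); err != nil {
		return err
	}
	return w.file.Sync()
}
```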
Under the hood, it uses
[autofile.Group](https://godoc.org/github.com/tendermint/tmlibs/autofile#Group),
which rotates files when those get too big (> 10MB).
The total maximum size is 1GB. We only need the latest block and the block before it,
but if the former is dragging on across many rounds, we want all those rounds.
## Replay
Consensus module will replay all the messages of the last height written to WAL
before a crash (if such occurs).
The private validator may try to sign messages during replay because it runs
somewhat autonomously and does not know about the replay process.
For example, if we got all the way to precommit in the WAL and then crash,
after we replay the proposal message, the private validator will try to sign a
prevote. But it will fail. That's ok because we'll see the prevote later in the
WAL. Then it will go to precommit, and that time it will work because the
private validator contains the `LastSignBytes` and then we'll replay the
precommit from the WAL.
Make sure to read about [WAL
corruption](https://tendermint.readthedocs.io/projects/tools/en/master/specification/corruption.html#wal-corruption)
and recovery strategies.

21
docs/specification.rst Normal file
View File

@ -0,0 +1,21 @@
#############
Specification
#############
Here you'll find details of the Tendermint specification. Tendermint's types are produced by `godoc <https://godoc.org/github.com/tendermint/tendermint/types>`__.
.. toctree::
:maxdepth: 2
specification/block-structure.rst
specification/byzantine-consensus-algorithm.rst
specification/configuration.rst
specification/corruption.rst
specification/fast-sync.rst
specification/genesis.rst
specification/light-client-protocol.rst
specification/merkle.rst
specification/rpc.rst
specification/secure-p2p.rst
specification/validators.rst
specification/wire-protocol.rst

View File

@ -0,0 +1,220 @@
Block Structure
===============
The tendermint consensus engine records all agreements by a
supermajority of nodes into a blockchain, which is replicated among all
nodes. This blockchain is accessible via various rpc endpoints, mainly
``/block?height=`` to get the full block, as well as
``/blockchain?minHeight=_&maxHeight=_`` to get a list of headers. But
what exactly is stored in these blocks?
Block
~~~~~
A
`Block <https://godoc.org/github.com/tendermint/tendermint/types#Block>`__
contains:
- a `Header <#header>`__ contains merkle hashes for various chain
states
- the
`Data <https://godoc.org/github.com/tendermint/tendermint/types#Data>`__
is all transactions which are to be processed
- the `LastCommit <#commit>`__ > 2/3 signatures for the last block
The signatures returned along with block ``H`` are those validating
block ``H-1``. This can be a little confusing, but we must also consider
that the ``Header`` also contains the ``LastCommitHash``. It would be
impossible for a Header to include the commits that sign it, as it would
cause an infinite loop here. But when we get block ``H``, we find
``Header.LastCommitHash``, which must match the hash of ``LastCommit``.
Header
~~~~~~
The
`Header <https://godoc.org/github.com/tendermint/tendermint/types#Header>`__
contains lots of information (follow link for up-to-date info). Notably,
it maintains the ``Height``, the ``LastBlockID`` (to make it a chain),
and hashes of the data, the app state, and the validator set. This is
important as the only item that is signed by the validators is the
``Header``, and all other data must be validated against one of the
merkle hashes in the ``Header``.
The ``DataHash`` can provide a nice check on the
`Data <https://godoc.org/github.com/tendermint/tendermint/types#Data>`__
returned in this same block. If you are subscribed to new blocks, via
tendermint RPC, in order to display or process the new transactions you
should at least validate that the ``DataHash`` is valid. If it is
important to verify authenticity, you must wait for the ``LastCommit``
from the next block to make sure the block header (including
``DataHash``) was properly signed.
The ``ValidatorHash`` contains a hash of the current
`Validators <https://godoc.org/github.com/tendermint/tendermint/types#Validator>`__.
Tracking all changes in the validator set is complex, but a client can
quickly compare this hash with the `hash of the currently known
validators <https://godoc.org/github.com/tendermint/tendermint/types#ValidatorSet.Hash>`__
to see if there have been changes.
The ``AppHash`` serves as the basis for validating any merkle proofs
that come from the `ABCI
application <https://github.com/tendermint/abci>`__. It represents the
state of the actual application, rather than the state of the blockchain
itself. This means it's necessary in order to perform any business
logic, such as verifying an account balance.
**Note** After the transactions are committed to a block, they still
need to be processed in a separate step, which happens between the
blocks. If you find a given transaction in the block at height ``H``,
the effects of running that transaction will be first visible in the
``AppHash`` from the block header at height ``H+1``.
Like the ``LastCommit`` issue, this is a requirement of the immutability
of the block chain, as the application only applies transactions *after*
they are committed to the chain.
Commit
~~~~~~
The
`Commit <https://godoc.org/github.com/tendermint/tendermint/types#Commit>`__
contains a set of
`Votes <https://godoc.org/github.com/tendermint/tendermint/types#Vote>`__
that were made by the validator set to reach consensus on this block.
This is the key to the security in any PoS system, and actually no data
that cannot be traced back to a block header with a valid set of Votes
can be trusted. Thus, getting the Commit data and verifying the votes is
extremely important.
As mentioned above, in order to find the ``precommit votes`` for block
header ``H``, we need to query block ``H+1``. Then we need to check the
votes, make sure they really are for that block, and properly formatted.
Much of this code is implemented in Go in the
`light-client <https://github.com/tendermint/light-client>`__ package.
If you look at the code, you will notice that we need to provide the
``chainID`` of the blockchain in order to properly calculate the votes.
This is to protect anyone from swapping votes between chains to fake (or
frame) a validator. Also note that this ``chainID`` is in the
``genesis.json`` from *Tendermint*, not the ``genesis.json`` from the
basecoin app (`that is a different
chainID... <https://github.com/cosmos/cosmos-sdk/issues/32>`__).
Once we have those votes, and we calculated the proper `sign
bytes <https://godoc.org/github.com/tendermint/tendermint/types#Vote.WriteSignBytes>`__
using the chainID and a `nice helper
function <https://godoc.org/github.com/tendermint/tendermint/types#SignBytes>`__,
we can verify them. The light client is responsible for maintaining a
set of validators that we trust. Each vote only stores the validator's
``Address``, as well as the ``Signature``. Assuming we have a local copy
of the trusted validator set, we can look up the ``Public Key`` of the
validator given its ``Address``, then verify that the ``Signature``
matches the ``SignBytes`` and ``Public Key``. Then we sum up the total
voting power of all validators whose votes fulfilled all these
stringent requirements. If the total voting power for a single
block is greater than 2/3 of all voting power, then we can finally trust
the block header, the AppHash, and the proof we got from the ABCI
application.
Vote Sign Bytes
^^^^^^^^^^^^^^^
The ``sign-bytes`` of a vote is produced by taking a
`stable-json <https://github.com/substack/json-stable-stringify>`__-like
deterministic JSON `wire <./wire-protocol.html>`__ encoding of
the vote (excluding the ``Signature`` field), and wrapping it with
``{"chain_id":"my_chain","vote":...}``.
For example, a precommit vote might have the following ``sign-bytes``:
.. code:: json
{"chain_id":"my_chain","vote":{"block_hash":"611801F57B4CE378DF1A3FFF1216656E89209A99","block_parts_header":{"hash":"B46697379DBE0774CC2C3B656083F07CA7E0F9CE","total":123},"height":1234,"round":1,"type":2}}
Block Hash
~~~~~~~~~~
The `block
hash <https://godoc.org/github.com/tendermint/tendermint/types#Block.Hash>`__
is the `Simple Tree hash <./merkle.html#simple-tree-with-dictionaries>`__
of the fields of the block ``Header`` encoded as a list of
``KVPair``\ s.
Transaction
~~~~~~~~~~~
A transaction is any sequence of bytes. It is up to your
`ABCI <https://github.com/tendermint/abci>`__ application to accept or
reject transactions.
BlockID
~~~~~~~
Many of these data structures refer to the
`BlockID <https://godoc.org/github.com/tendermint/tendermint/types#BlockID>`__,
which is the ``BlockHash`` (hash of the block header, also referred to
by the next block) along with the ``PartSetHeader``. The
``PartSetHeader`` is explained below and is used internally to
orchestrate the p2p propagation. For clients, it is basically opaque
bytes, but they must match for all votes.
PartSetHeader
~~~~~~~~~~~~~
The
`PartSetHeader <https://godoc.org/github.com/tendermint/tendermint/types#PartSetHeader>`__
contains the total number of pieces in a
`PartSet <https://godoc.org/github.com/tendermint/tendermint/types#PartSet>`__,
and the Merkle root hash of those pieces.
PartSet
~~~~~~~
PartSet is used to split a byteslice of data into parts (pieces) for
transmission. By splitting data into smaller parts and computing a
Merkle root hash on the list, you can verify that a part is legitimately
part of the complete data, and the part can be forwarded to other peers
before all the parts are known. In short, it's a fast way to securely
propagate a large chunk of data (like a block) over a gossip network.
PartSet was inspired by the LibSwift project.
Usage:
.. code:: go
data := RandBytes(2 << 20) // Something large
partSet := NewPartSetFromData(data)
partSet.Total() // Total number of 4KB parts
partSet.Count() // Equal to the Total, since we already have all the parts
partSet.Hash() // The Merkle root hash
partSet.BitArray() // A BitArray of partSet.Total() 1's
header := partSet.Header() // Send this to the peer
header.Total // Total number of parts
header.Hash // The merkle root hash
// Now we'll reconstruct the data from the parts
partSet2 := NewPartSetFromHeader(header)
partSet2.Total() // Same total as partSet.Total()
partSet2.Count() // Zero, since this PartSet doesn't have any parts yet.
partSet2.Hash() // Same hash as in partSet.Hash()
partSet2.BitArray() // A BitArray of partSet.Total() 0's
// In a gossip network the parts would arrive in arbitrary order, perhaps
// in response to explicit requests for parts, or optimistically in response
// to the receiving peer's partSet.BitArray().
for !partSet2.IsComplete() {
part := receivePartFromGossipNetwork()
added, err := partSet2.AddPart(part)
if err != nil {
// A wrong part,
// the merkle trail does not hash to partSet2.Hash()
} else if !added {
// A duplicate part already received
}
}
data2, _ := ioutil.ReadAll(partSet2.GetReader())
bytes.Equal(data, data2) // true

View File

@ -0,0 +1,349 @@
Byzantine Consensus Algorithm
=============================
Terms
-----
- The network is composed of optionally connected *nodes*. Nodes
directly connected to a particular node are called *peers*.
- The consensus process in deciding the next block (at some *height*
``H``) is composed of one or many *rounds*.
- ``NewHeight``, ``Propose``, ``Prevote``, ``Precommit``, and
``Commit`` represent state machine states of a round. (aka
``RoundStep`` or just "step").
- A node is said to be *at* a given height, round, and step, or at
``(H,R,S)``, or at ``(H,R)`` in short to omit the step.
- To *prevote* or *precommit* something means to broadcast a `prevote
vote <https://godoc.org/github.com/tendermint/tendermint/types#Vote>`__
or `first precommit
vote <https://godoc.org/github.com/tendermint/tendermint/types#FirstPrecommit>`__
for something.
- A vote *at* ``(H,R)`` is a vote signed with the bytes for ``H`` and
``R`` included in its
`sign-bytes <block-structure.html#vote-sign-bytes>`__.
- *+2/3* is short for "more than 2/3"
- *1/3+* is short for "1/3 or more"
- A set of +2/3 of prevotes for a particular block or ``<nil>`` at
``(H,R)`` is called a *proof-of-lock-change* or *PoLC* for short.
State Machine Overview
----------------------
At each height of the blockchain a round-based protocol is run to
determine the next block. Each round is composed of three *steps*
(``Propose``, ``Prevote``, and ``Precommit``), along with two special
steps ``Commit`` and ``NewHeight``.
In the optimal scenario, the order of steps is:
::
NewHeight -> (Propose -> Prevote -> Precommit)+ -> Commit -> NewHeight ->...
The sequence ``(Propose -> Prevote -> Precommit)`` is called a *round*.
There may be more than one round required to commit a block at a given
height. Examples for why more rounds may be required include:
- The designated proposer was not online.
- The block proposed by the designated proposer was not valid.
- The block proposed by the designated proposer did not propagate in
time.
- The block proposed was valid, but +2/3 of prevotes for the proposed
block were not received in time for enough validator nodes by the
time they reached the ``Precommit`` step. Even though +2/3 of
prevotes are necessary to progress to the next step, at least one
validator may have voted ``<nil>`` or maliciously voted for something
else.
- The block proposed was valid, and +2/3 of prevotes were received for
enough nodes, but +2/3 of precommits for the proposed block were not
received for enough validator nodes.
Some of these problems are resolved by moving onto the next round &
proposer. Others are resolved by increasing certain round timeout
parameters over each successive round.
State Machine Diagram
---------------------
::
+-------------------------------------+
v |(Wait til `CommmitTime+timeoutCommit`)
+-----------+ +-----+-----+
+----------> | Propose +--------------+ | NewHeight |
| +-----------+ | +-----------+
| | ^
|(Else, after timeoutPrecommit) v |
+-----+-----+ +-----------+ |
| Precommit | <------------------------+ Prevote | |
+-----+-----+ +-----------+ |
|(When +2/3 Precommits for block found) |
v |
+--------------------------------------------------------------------+
| Commit |
| |
| * Set CommitTime = now; |
| * Wait for block, then stage/save/commit block; |
+--------------------------------------------------------------------+
Background Gossip
-----------------
A node may not have a corresponding validator private key, but it
nevertheless plays an active role in the consensus process by relaying
relevant meta-data, proposals, blocks, and votes to its peers. A node
that has the private keys of an active validator and is engaged in
signing votes is called a *validator-node*. All nodes (not just
validator-nodes) have an associated state (the current height, round,
and step) and work to make progress.
Between two nodes there exists a ``Connection``, and multiplexed on top
of this connection are fairly throttled ``Channel``\ s of information.
An epidemic gossip protocol is implemented among some of these channels
to bring peers up to speed on the most recent state of consensus. For
example,
- Nodes gossip ``PartSet`` parts of the current round's proposer's
proposed block. A LibSwift inspired algorithm is used to quickly
broadcast blocks across the gossip network.
- Nodes gossip prevote/precommit votes. A node NODE\_A that is ahead of
NODE\_B can send NODE\_B prevotes or precommits for NODE\_B's current
(or future) round to enable it to progress forward.
- Nodes gossip prevotes for the proposed PoLC (proof-of-lock-change)
round if one is proposed.
- Nodes gossip to nodes lagging in blockchain height with block
`commits <https://godoc.org/github.com/tendermint/tendermint/types#Commit>`__
for older blocks.
- Nodes opportunistically gossip ``HasVote`` messages to hint to peers which votes they already have.
- Nodes broadcast their current state to all neighboring peers (but this state is not gossiped further).
There's more, but let's not get ahead of ourselves here.
Proposals
---------
A proposal is signed and published by the designated proposer at each
round. The proposer is chosen by a deterministic and non-choking round
robin selection algorithm that selects proposers in proportion to their
voting power. (see
`implementation <https://github.com/tendermint/tendermint/blob/develop/types/validator_set.go>`__)
A proposal at ``(H,R)`` is composed of a block and an optional latest
``PoLC-Round < R``, which is included iff the proposer knows of one. This
hints to the network, allowing nodes to unlock (when safe) to ensure the
liveness property.
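As a rough picture, a proposal can be modeled with the following simplified
structure (hypothetical field names; the real type also carries block-part
metadata and the proposer's signature):

.. code:: go

    package main

    import "fmt"

    // Proposal is a simplified sketch of a proposal at (Height, Round).
    // POLRound carries the latest PoLC-Round (< Round) known to the proposer,
    // or -1 if none.
    type Proposal struct {
        Height    int64
        Round     int
        BlockHash string
        POLRound  int // -1 means "no PoLC known"
    }

    func main() {
        p := Proposal{Height: 100, Round: 2, BlockHash: "ABCD", POLRound: 1}
        if p.POLRound >= 0 {
            // The hint lets peers unlock (when safe), preserving liveness.
            fmt.Printf("proposal references a PoLC from round %d\n", p.POLRound)
        }
    }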
State Machine Spec
------------------
Propose Step (height:H,round:R)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Upon entering ``Propose``:

- The designated proposer proposes a block at ``(H,R)``.

The ``Propose`` step ends:

- After ``timeoutProposeR`` after entering ``Propose``. --> goto ``Prevote(H,R)``
- After receiving proposal block and all prevotes at ``PoLC-Round``. --> goto ``Prevote(H,R)``
- After `common exit conditions <#common-exit-conditions>`__
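A common way to realize the per-round ``timeoutProposeR`` is to grow a base
timeout linearly with the round number. The sketch below assumes this linear
rule and reuses the default ``timeout_propose`` and ``timeout_propose_delta``
values from the configuration file:

.. code:: go

    package main

    import (
        "fmt"
        "time"
    )

    // proposeTimeout returns timeoutProposeR for a given round, assuming the
    // linear rule timeout_propose + round*timeout_propose_delta (defaults are
    // expressed in milliseconds in the config file).
    func proposeTimeout(round int) time.Duration {
        const (
            timeoutPropose      = 3000 * time.Millisecond
            timeoutProposeDelta = 500 * time.Millisecond
        )
        return timeoutPropose + time.Duration(round)*timeoutProposeDelta
    }

    func main() {
        for r := 0; r < 3; r++ {
            fmt.Printf("round %d: wait %v for the proposal\n", r, proposeTimeout(r))
        }
    }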
Prevote Step (height:H,round:R)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Upon entering ``Prevote``, each validator broadcasts its prevote vote.
- First, if the validator is locked on a block since ``LastLockRound``
but now has a PoLC for something else at round ``PoLC-Round`` where
``LastLockRound < PoLC-Round < R``, then it unlocks.
- If the validator is still locked on a block, it prevotes that.
- Else, if the proposed block from ``Propose(H,R)`` is good, it
prevotes that.
- Else, if the proposal is invalid or wasn't received on time, it
prevotes ``<nil>``.
The ``Prevote`` step ends:

- After +2/3 prevotes for a particular block or ``<nil>``. --> goto ``Precommit(H,R)``
- After ``timeoutPrevote`` after receiving any +2/3 prevotes. --> goto ``Precommit(H,R)``
- After `common exit conditions <#common-exit-conditions>`__
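The prevote decision above can be summarized in a small, hypothetical helper;
an empty string stands for ``<nil>``, and the unlock-on-newer-PoLC check is
assumed to have already been applied:

.. code:: go

    package main

    import "fmt"

    // decidePrevote sketches the Prevote rules above. This is not the actual
    // implementation, just an illustration of the rules.
    func decidePrevote(lockedBlock, proposedBlock string, proposalValid bool) string {
        if lockedBlock != "" {
            return lockedBlock // still locked: prevote the locked block
        }
        if proposalValid && proposedBlock != "" {
            return proposedBlock // good proposal from Propose(H,R): prevote it
        }
        return "" // invalid or missing proposal: prevote <nil>
    }

    func main() {
        fmt.Printf("%q\n", decidePrevote("", "ABCD", true)) // "ABCD"
        fmt.Printf("%q\n", decidePrevote("", "", false))    // "" i.e. <nil>
    }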
Precommit Step (height:H,round:R)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Upon entering ``Precommit``, each validator broadcasts its precommit vote.

- If the validator has a PoLC at ``(H,R)`` for a particular block ``B``, it (re)locks (or changes its lock to) ``B``, precommits ``B``, and sets ``LastLockRound = R``.
- Else, if the validator has a PoLC at ``(H,R)`` for ``<nil>``, it unlocks and precommits ``<nil>``.
- Else, it keeps the lock unchanged and precommits ``<nil>``.
A precommit for ``<nil>`` means "I didn't see a PoLC for this round, but
I did get +2/3 prevotes and waited a bit".
The ``Precommit`` step ends:

- After +2/3 precommits for ``<nil>``. --> goto ``Propose(H,R+1)``
- After ``timeoutPrecommit`` after receiving any +2/3 precommits. --> goto ``Propose(H,R+1)``
- After `common exit conditions <#common-exit-conditions>`__
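Similarly, the precommit rules can be sketched as follows (illustrative helper
only; an empty string again stands for ``<nil>``):

.. code:: go

    package main

    import "fmt"

    // decidePrecommit sketches the Precommit rules above. polcBlock is the
    // block for which a PoLC at (H,R) is known ("" meaning a PoLC for <nil>),
    // and hasPoLC says whether any PoLC exists at (H,R). It returns the value
    // to precommit and the resulting locked block.
    func decidePrecommit(hasPoLC bool, polcBlock, lockedBlock string) (precommit, newLocked string) {
        switch {
        case hasPoLC && polcBlock != "":
            return polcBlock, polcBlock // (re)lock on B and precommit B; LastLockRound = R
        case hasPoLC && polcBlock == "":
            return "", "" // PoLC for <nil>: unlock and precommit <nil>
        default:
            return "", lockedBlock // no PoLC this round: keep lock, precommit <nil>
        }
    }

    func main() {
        pc, locked := decidePrecommit(true, "B2", "B1")
        fmt.Println(pc, locked) // B2 B2
    }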
common exit conditions
^^^^^^^^^^^^^^^^^^^^^^
- After +2/3 precommits for a particular block. --> goto ``Commit(H)``
- After any +2/3 prevotes received at ``(H,R+x)``. --> goto
``Prevote(H,R+x)``
- After any +2/3 precommits received at ``(H,R+x)``. --> goto
``Precommit(H,R+x)``
Commit Step (height:H)
~~~~~~~~~~~~~~~~~~~~~~
- Set ``CommitTime = now()``
- Wait until block is received. --> goto ``NewHeight(H+1)``
NewHeight Step (height:H)
~~~~~~~~~~~~~~~~~~~~~~~~~
- Move ``Precommits`` to ``LastCommit`` and increment height.
- Set ``StartTime = CommitTime+timeoutCommit``
- Wait until ``StartTime`` to receive straggler commits. --> goto
``Propose(H,0)``
Proofs
------
Proof of Safety
~~~~~~~~~~~~~~~
Assume that at most -1/3 of the voting power of validators is byzantine.
If a validator commits block ``B`` at round ``R``, it's because it saw
+2/3 of precommits at round ``R``. This implies that 1/3+ of honest
nodes are still locked at round ``R' > R``. These locked validators will
remain locked until they see a PoLC at ``R' > R``, but this won't happen
because 1/3+ are locked and honest, so at most -2/3 are available to
vote for anything other than ``B``.
Proof of Liveness
~~~~~~~~~~~~~~~~~
If 1/3+ honest validators are locked on two different blocks from
different rounds, a proposer's ``PoLC-Round`` will eventually cause
nodes locked from the earlier round to unlock. Eventually, the
designated proposer will be one that is aware of a PoLC at the later
round. Also, ``timeoutProposeR`` increments with round ``R``, while the
size of a proposal is capped, so eventually the network is able to
"fully gossip" the whole proposal (e.g. the block & PoLC).
Proof of Fork Accountability
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Define the JSet (justification-vote-set) at height ``H`` of a validator
``V1`` to be all the votes signed by the validator at ``H`` along with
justification PoLC prevotes for each lock change. For example, if ``V1``
signed the following precommits: ``Precommit(B1 @ round 0)``,
``Precommit(<nil> @ round 1)``, ``Precommit(B2 @ round 4)`` (note that
no precommits were signed for rounds 2 and 3, and that's ok), then
``Precommit(B1 @ round 0)`` must be justified by a PoLC at round 0, and
``Precommit(B2 @ round 4)`` must be justified by a PoLC at round 4; but
the precommit for ``<nil>`` at round 1 is not a lock-change by
definition so the JSet for ``V1`` need not include any prevotes at round
1, 2, or 3 (unless ``V1`` happened to have prevoted for those rounds).
Further, define the JSet at height ``H`` of a set of validators ``VSet``
to be the union of the JSets for each validator in ``VSet``. For a given
commit by honest validators at round ``R`` for block ``B`` we can
construct a JSet to justify the commit for ``B`` at ``R``. We say that a
JSet *justifies* a commit at ``(H,R)`` if all the committers (validators
in the commit-set) are each justified in the JSet with no duplicitous
vote signatures (by the committers).
- **Lemma**: When a fork is detected by the existence of two
conflicting `commits <./validators.html#committing-a-block>`__,
the union of the JSets for both commits (if they can be compiled)
must include double-signing by at least 1/3+ of the validator set.
**Proof**: The two commits cannot be at the same round, because that would
immediately imply double-signing by 1/3+. Take the union of the JSets
of both commits. If there is no double-signing by at least 1/3+ of
the validator set in the union, then no honest validator could have
precommitted any different block after the first commit. Yet, +2/3
did. Reductio ad absurdum.
As a corollary, when there is a fork, an external process can determine
the blame by requiring each validator to justify all of its round votes.
Either we will find 1/3+ who cannot justify at least one of their votes,
and/or, we will find 1/3+ who had double-signed.
Alternative algorithm
~~~~~~~~~~~~~~~~~~~~~
Alternatively, we can take the JSet of a commit to be the "full commit".
That is, if light clients and validators do not consider a block to be
committed unless the JSet of the commit is also known, then we get the
desirable property that if there ever is a fork (e.g. there are two
conflicting "full commits"), then 1/3+ of the validators are immediately
punishable for double-signing.
There are many ways to ensure that the gossip network efficiently shares
the JSet of a commit. One solution is to add a new message type that
tells peers that this node has (or does not have) a +2/3 majority for ``B``
(or ``<nil>``) at ``(H,R)``, and a bitarray of which votes contributed towards that
majority. Peers can react by responding with appropriate votes.
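Such a message could look roughly like the following sketch. The type and
field names are hypothetical; the actual reactor messages may differ:

.. code:: go

    package main

    import "fmt"

    // VoteSetMaj23 is a hypothetical sketch of the message described above: it
    // announces that the sender has (or has not) seen a +2/3 majority for a
    // block hash ("" meaning <nil>) at (Height, Round), together with a bit
    // array marking which validators' votes contributed to that majority.
    type VoteSetMaj23 struct {
        Height    int64
        Round     int
        Type      string // "prevote" or "precommit"
        BlockHash string
        HasMaj23  bool
        Votes     []bool // one bit per validator index
    }

    func main() {
        msg := VoteSetMaj23{
            Height: 42, Round: 1, Type: "precommit",
            BlockHash: "ABCD", HasMaj23: true,
            Votes: []bool{true, true, false, true},
        }
        // A peer that is missing validator 2's precommit can respond with it.
        fmt.Printf("+2/3 for %q at (%d,%d): %v\n", msg.BlockHash, msg.Height, msg.Round, msg.Votes)
    }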
We will implement such an algorithm for the next iteration of the
Tendermint consensus protocol.
Other potential improvements include adding more data in votes such as
the last known PoLC round that caused a lock change, and the last voted
round/step (or, we may require that validators not skip any votes). This
may make JSet verification/gossip logic easier to implement.
Censorship Attacks
~~~~~~~~~~~~~~~~~~
Due to the definition of a block
`commit <validators.html#committing-a-block>`__, any 1/3+
coalition of validators can halt the blockchain by not broadcasting
their votes. Such a coalition can also censor particular transactions by
rejecting blocks that include these transactions, though this would
result in a significant proportion of block proposals being rejected,
which would slow down the rate of block commits of the blockchain,
reducing its utility and value. The malicious coalition might also
broadcast votes in a trickle so as to grind blockchain block commits to
a near halt, or engage in any combination of these attacks.
If a global active adversary were also involved, it could partition the
network in such a way that it may appear that the wrong subset of
validators were responsible for the slowdown. This is not just a
limitation of Tendermint, but rather a limitation of all consensus
protocols whose network is potentially controlled by an active
adversary.
Overcoming Forks and Censorship Attacks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For these types of attacks, a subset of the validators through external
means should coordinate to sign a reorg-proposal that chooses a fork
(and any evidence thereof) and the initial subset of validators with
their signatures. Validators who sign such a reorg-proposal forego its
collateral on all other forks. Clients should verify the signatures on
the reorg-proposal, verify any evidence, and make a judgement or prompt
the end-user for a decision. For example, a phone wallet app may prompt
the user with a security warning, while a refrigerator may accept any
reorg-proposal signed by +1/2 of the original validators.
No non-synchronous Byzantine fault-tolerant algorithm can come to
consensus when 1/3+ of validators are dishonest, yet a fork assumes that
1/3+ of validators have already been dishonest by double-signing or
lock-changing without justification. So, signing the reorg-proposal is a
coordination problem that cannot be solved by any non-synchronous
protocol (i.e. automatically, and without making assumptions about the
reliability of the underlying network). It must be provided by means
external to the weakly-synchronous Tendermint consensus algorithm. For
now, we leave the problem of reorg-proposal coordination to human
coordination via internet media. Validators must take care to ensure
that there are no significant network partitions, to avoid situations
where two conflicting reorg-proposals are signed.
Assuming that the external coordination medium and protocol is robust,
it follows that forks are less of a concern than `censorship
attacks <#censorship-attacks>`__.

View File

@ -0,0 +1,200 @@
# Configuration
Tendermint Core can be configured via a TOML file in
`$TMHOME/config/config.toml`. Some of these parameters can be overridden by
command-line flags. For most users, the options under `##### main base
config options #####` are the ones intended to be modified, while the
options further below are intended for advanced power users.
## Options
The default configuration file created by `tendermint init` has all
the parameters set to their default values. It will look something
like the file below; however, double-check by inspecting the
`config.toml` created with your version of `tendermint` installed:
```
# This is a TOML config file.
# For more information, see https://github.com/toml-lang/toml
##### main base config options #####
# TCP or UNIX socket address of the ABCI application,
# or the name of an ABCI application compiled in with the Tendermint binary
proxy_app = "tcp://127.0.0.1:26658"
# A custom human readable name for this node
moniker = "anonymous"
# If this node is many blocks behind the tip of the chain, FastSync
# allows them to catchup quickly by downloading blocks in parallel
# and verifying their commits
fast_sync = true
# Database backend: leveldb | memdb
db_backend = "leveldb"
# Database directory
db_path = "data"
# Output level for logging
log_level = "state:info,*:error"
##### additional base config options #####
# The ID of the chain to join (should be signed with every transaction and vote)
chain_id = ""
# Path to the JSON file containing the initial validator set and other meta data
genesis_file = "genesis.json"
# Path to the JSON file containing the private key to use as a validator in the consensus protocol
priv_validator_file = "priv_validator.json"
# Mechanism to connect to the ABCI application: socket | grpc
abci = "socket"
# TCP or UNIX socket address for the profiling server to listen on
prof_laddr = ""
# If true, query the ABCI app on connecting to a new peer
# so the app can decide if we should keep the connection or not
filter_peers = false
##### advanced configuration options #####
##### rpc server configuration options #####
[rpc]
# TCP or UNIX socket address for the RPC server to listen on
laddr = "tcp://0.0.0.0:26657"
# TCP or UNIX socket address for the gRPC server to listen on
# NOTE: This server only supports /broadcast_tx_commit
grpc_laddr = ""
# Activate unsafe RPC commands like /dial_seeds and /unsafe_flush_mempool
unsafe = false
##### peer to peer configuration options #####
[p2p]
# Address to listen for incoming connections
laddr = "tcp://0.0.0.0:26656"
# Comma separated list of seed nodes to connect to
seeds = ""
# Comma separated list of nodes to keep persistent connections to
# Do not add private peers to this list if you don't want them advertised
persistent_peers = ""
# Path to address book
addr_book_file = "addrbook.json"
# Set true for strict address routability rules
addr_book_strict = true
# Time to wait before flushing messages out on the connection, in ms
flush_throttle_timeout = 100
# Maximum number of peers to connect to
max_num_peers = 50
# Maximum size of a message packet payload, in bytes
max_msg_packet_payload_size = 1024
# Rate at which packets can be sent, in bytes/second
send_rate = 512000
# Rate at which packets can be received, in bytes/second
recv_rate = 512000
# Set true to enable the peer-exchange reactor
pex = true
# Seed mode, in which node constantly crawls the network and looks for
# peers. If another node asks it for addresses, it responds and disconnects.
#
# Does not work if the peer-exchange reactor is disabled.
seed_mode = false
# Comma separated list of peer IDs to keep private (will not be gossiped to other peers)
private_peer_ids = ""
##### mempool configuration options #####
[mempool]
recheck = true
recheck_empty = true
broadcast = true
wal_dir = "data/mempool.wal"
# size of the mempool
size = 100000
# size of the cache (used to filter transactions we saw earlier)
cache_size = 100000
##### consensus configuration options #####
[consensus]
wal_file = "data/cs.wal/wal"
# All timeouts are in milliseconds
timeout_propose = 3000
timeout_propose_delta = 500
timeout_prevote = 1000
timeout_prevote_delta = 500
timeout_precommit = 1000
timeout_precommit_delta = 500
timeout_commit = 1000
# Make progress as soon as we have all the precommits (as if TimeoutCommit = 0)
skip_timeout_commit = false
# BlockSize
max_block_size_txs = 10000
max_block_size_bytes = 1
# EmptyBlocks mode and possible interval between empty blocks in seconds
create_empty_blocks = true
create_empty_blocks_interval = 0
# Reactor sleep duration parameters are in milliseconds
peer_gossip_sleep_duration = 100
peer_query_maj23_sleep_duration = 2000
##### transactions indexer configuration options #####
[tx_index]
# What indexer to use for transactions
#
# Options:
# 1) "null"
# 2) "kv" (default) - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
indexer = "kv"
# Comma-separated list of tags to index (by default the only tag is tx hash)
#
# It's recommended to index only a subset of tags due to possible memory
# bloat. This is, of course, depends on the indexer's DB and the volume of
# transactions.
index_tags = ""
# When set to true, tells indexer to index all tags. Note this may be not
# desirable (see the comment above). IndexTags has a precedence over
# IndexAllTags (i.e. when given both, IndexTags will be indexed).
index_all_tags = false
##### instrumentation configuration options #####
[instrumentation]
# When true, Prometheus metrics are served under /metrics on
# PrometheusListenAddr.
# Check out the documentation for the list of available metrics.
prometheus = false
# Address to listen for Prometheus collector(s) connections
prometheus_listen_addr = ":26660"
```
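As a rough illustration of consuming these values programmatically, the sketch below reads a few options with [viper](https://github.com/spf13/viper), which Tendermint itself uses for configuration; the paths and keys here are assumptions for the example:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"

	"github.com/spf13/viper"
)

func main() {
	// Assumes the default layout: $TMHOME/config/config.toml.
	home := os.Getenv("TMHOME")
	viper.SetConfigFile(filepath.Join(home, "config", "config.toml"))

	if err := viper.ReadInConfig(); err != nil {
		log.Fatalf("reading config: %v", err)
	}

	// Top-level keys map directly; nested sections use a dotted path.
	fmt.Println("moniker:  ", viper.GetString("moniker"))
	fmt.Println("fast_sync:", viper.GetBool("fast_sync"))
	fmt.Println("rpc laddr:", viper.GetString("rpc.laddr"))
	fmt.Println("p2p seeds:", viper.GetString("p2p.seeds"))
}
```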

View File

@ -0,0 +1,70 @@
Corruption
==========
Important step
--------------
Make sure you have a backup of the Tendermint data directory.
Possible causes
---------------
Remember that most corruption is caused by hardware issues:
- RAID controllers with faulty / worn out battery backup, and an unexpected power loss
- Hard disk drives with write-back cache enabled, and an unexpected power loss
- Cheap SSDs with insufficient power-loss protection, and an unexpected power-loss
- Defective RAM
- Defective or overheating CPU(s)
Other causes can be:
- Database systems configured with fsync=off and an OS crash or power loss
- Filesystems configured to use write barriers plus a storage layer that ignores write barriers. LVM is a particular culprit.
- Tendermint bugs
- Operating system bugs
- Admin error
- directly modifying Tendermint data-directory contents
(Source: https://wiki.postgresql.org/wiki/Corruption)
WAL Corruption
--------------
If the consensus WAL is corrupted at the latest height and you are trying to start
Tendermint, replay will fail with a panic.
Recovering from data corruption can be hard and time-consuming. Here are two approaches you can take:
1) Delete the WAL file and restart Tendermint. It will attempt to sync with other peers.
2) Try to repair the WAL file manually:
1. Create a backup of the corrupted WAL file:
.. code:: bash
cp "$TMHOME/data/cs.wal/wal" > /tmp/corrupted_wal_backup
2. Use ./scripts/wal2json to create a human-readable version
.. code:: bash
./scripts/wal2json/wal2json "$TMHOME/data/cs.wal/wal" > /tmp/corrupted_wal
3. Search for a "CORRUPTED MESSAGE" line.
4. By looking at the previous message, the message after the corrupted one,
and the logs, try to rebuild the message. If the subsequent messages are
marked as corrupted too (this may happen if the length header was
corrupted or some writes did not make it to the WAL, i.e. it was truncated),
then remove all the lines starting from the corrupted one and restart
Tendermint.
.. code:: bash
$EDITOR /tmp/corrupted_wal
5. After editing, convert this file back into binary form by running:
.. code:: bash
./scripts/json2wal/json2wal /tmp/corrupted_wal > "$TMHOME/data/cs.wal/wal"

View File

@ -0,0 +1,34 @@
Fast Sync
=========
Background
----------
In a proof of work blockchain, syncing with the chain is the same
process as staying up-to-date with the consensus: download blocks, and
look for the one with the most total work. In proof-of-stake, the
consensus process is more complex, as it involves rounds of
communication between the nodes to determine what block should be
committed next. Using this process to sync up with the blockchain from
scratch can take a very long time. It's much faster to just download
blocks and check the merkle tree of validators than to run the real-time
consensus gossip protocol.
Fast Sync
---------
To support faster syncing, tendermint offers a ``fast-sync`` mode, which
is enabled by default, and can be toggled in the ``config.toml`` or via
``--fast_sync=false``.
In this mode, the tendermint daemon will sync hundreds of times faster
than if it used the real-time consensus process. Once caught up, the
daemon will switch out of fast sync and into the normal consensus mode.
After running for some time, the node is considered ``caught up`` if it
has at least one peer and its height is at least as high as the max
reported peer height. See `the IsCaughtUp
method <https://github.com/tendermint/tendermint/blob/b467515719e686e4678e6da4e102f32a491b85a0/blockchain/pool.go#L128>`__.
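In code, the caught-up rule amounts to roughly the following simplified
sketch (not the actual ``IsCaughtUp`` implementation):

.. code:: go

    package main

    import "fmt"

    // isCaughtUp sketches the rule above: a node is caught up once it has at
    // least one peer and its own height is at least the maximum height
    // reported by its peers. See blockchain/pool.go for the real check.
    func isCaughtUp(ourHeight int64, peerHeights []int64) bool {
        if len(peerHeights) == 0 {
            return false // no peers yet: keep fast syncing
        }
        var maxPeer int64
        for _, h := range peerHeights {
            if h > maxPeer {
                maxPeer = h
            }
        }
        return ourHeight >= maxPeer
    }

    func main() {
        fmt.Println(isCaughtUp(100, []int64{98, 100})) // true
        fmt.Println(isCaughtUp(95, []int64{98, 100}))  // false
    }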
If we're lagging sufficiently, we should go back to fast syncing, but
this is an open issue:
https://github.com/tendermint/tendermint/issues/129

View File

@ -0,0 +1,71 @@
Genesis
=======
The genesis.json file in ``$TMHOME/config`` defines the initial Tendermint Core
state upon genesis of the blockchain (`see
definition <https://github.com/tendermint/tendermint/blob/master/types/genesis.go>`__).
Fields
~~~~~~
- ``genesis_time``: Official time of blockchain start.
- ``chain_id``: ID of the blockchain. This must be unique for every
blockchain. If your testnet blockchains do not have unique chain IDs,
you will have a bad time.
- ``validators``:
- ``pub_key``: The first element specifies the pub\_key type. 1 ==
Ed25519. The second element contains the pubkey bytes.
- ``power``: The validator's voting power.
- ``name``: Name of the validator (optional).
- ``app_hash``: The expected application hash (as returned by the
``ResponseInfo`` ABCI message) upon genesis. If the app's hash does not
match, Tendermint will panic.
- ``app_state``: The application state (e.g. initial distribution of tokens).
Sample genesis.json
~~~~~~~~~~~~~~~~~~~
.. code:: json
{
"genesis_time": "2016-02-05T06:02:31.526Z",
"chain_id": "chain-tTH4mi",
"validators": [
{
"pub_key": [
1,
"9BC5112CB9614D91CE423FA8744885126CD9D08D9FC9D1F42E552D662BAA411E"
],
"power": 1,
"name": "mach1"
},
{
"pub_key": [
1,
"F46A5543D51F31660D9F59653B4F96061A740FF7433E0DC1ECBC30BE8494DE06"
],
"power": 1,
"name": "mach2"
},
{
"pub_key": [
1,
"0E7B423C1635FD07C0FC3603B736D5D27953C1C6CA865BB9392CD79DE1A682BB"
],
"power": 1,
"name": "mach3"
},
{
"pub_key": [
1,
"4F49237B9A32EB50682EDD83C48CE9CDB1D02A7CFDADCFF6EC8C1FAADB358879"
],
"power": 1,
"name": "mach4"
}
],
"app_hash": "15005165891224E721CB664D15CB972240F5703F",
"app_state": {
{"account": "Bob", "coins": 5000}
}
}
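A genesis file can also be loaded and sanity-checked programmatically. The
sketch below assumes the ``GenesisDocFromFile`` helper and field names from
the linked ``types/genesis.go``, so double-check them against your Tendermint
version:

.. code:: go

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"

        "github.com/tendermint/tendermint/types"
    )

    func main() {
        // Assumes the default layout: $TMHOME/config/genesis.json.
        path := filepath.Join(os.Getenv("TMHOME"), "config", "genesis.json")

        genDoc, err := types.GenesisDocFromFile(path) // parses and validates the file
        if err != nil {
            log.Fatalf("invalid genesis file: %v", err)
        }

        fmt.Println("chain_id:  ", genDoc.ChainID)
        fmt.Println("validators:", len(genDoc.Validators))
    }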

View File

@ -0,0 +1,33 @@
Light Client Protocol
=====================
Light clients are an important part of the complete blockchain system
for most applications. Tendermint provides unique speed and security
properties for light client applications.
See our `lite package
<https://godoc.org/github.com/tendermint/tendermint/lite>`__.
Overview
--------
The objective of the light client protocol is to get a
`commit <./validators.html#committing-a-block>`__ for a recent
`block hash <./block-structure.html#block-hash>`__ where the commit
includes a majority of signatures from the last known validator set.
From there, all the application state is verifiable with `merkle
proofs <./merkle.html#iavl-tree>`__.
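At a very high level, the check can be sketched as follows; all types and
helpers here are hypothetical placeholders (not the ``lite`` package API),
and the sketch assumes the usual Tendermint +2/3 threshold:

.. code:: go

    package main

    import (
        "errors"
        "fmt"
    )

    // Commit and ValidatorSet are hypothetical placeholders that only
    // illustrate the check described above.
    type Commit struct {
        BlockHash   string
        SignedPower int64 // voting power whose signatures appear in the commit
    }

    type ValidatorSet struct {
        TotalPower int64
    }

    // verifyCommit sketches the light-client check: accept the commit only if
    // it carries more than 2/3 of the last known validator set's voting power.
    func verifyCommit(c Commit, vals ValidatorSet) error {
        if 3*c.SignedPower <= 2*vals.TotalPower {
            return errors.New("commit lacks +2/3 of the known validator set's power")
        }
        return nil
    }

    func main() {
        err := verifyCommit(Commit{BlockHash: "ABCD", SignedPower: 70}, ValidatorSet{TotalPower: 100})
        fmt.Println(err) // <nil>: 70 is more than 2/3 of 100
    }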
Properties
----------
- You get the full collateralized security benefits of Tendermint; no
need to wait for confirmations.
- You get the full speed benefits of Tendermint; transactions commit
instantly.
- You can get the most recent version of the application state
non-interactively (without committing anything to the blockchain).
For example, this means that you can get the most recent value of a
name from the name-registry without worrying about fork censorship
attacks, without posting a commit and waiting for confirmations. It's
fast, secure, and free!

Some files were not shown because too many files have changed in this diff.