mirror of
https://github.com/fluencelabs/tendermint
synced 2025-04-24 22:32:15 +00:00
Revert "detele everything"
This reverts commit d02c5d1e30c569ff5f785f2b7d80b875cf2e9977.
This commit is contained in:
parent
2f4ab0c068
commit
44dad6d70b
962
CHANGELOG.md
Normal file
@ -0,0 +1,962 @@
# Changelog

## 0.22.2

*July 10th, 2018*

IMPROVEMENTS
- More cleanup post repo merge!
- [docs] Include `ecosystem.json` and `tendermint-bft.md` from deprecated `aib-data` repository.
- [config] Add `instrumentation.max_open_connections`, which limits the number
  of requests in flight to the Prometheus server (if enabled). Default: 3.
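For illustration, this is how the new limit might look in `config.toml` (the `[instrumentation]` table is the config section referenced under 0.22.0 below; the comment is ours, and only the key name and the default of 3 come from the entry above):
```
[instrumentation]
# cap on concurrent HTTP connections to the /metrics endpoint
max_open_connections = 3
```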

BUG FIXES
- [rpc] Allow unquoted integers in requests
  - NOTE: this is only for URI requests. JSONRPC requests and all responses
    will use quoted integers (the proto3 JSON standard).
- [consensus] Fix halt on shutdown
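To illustrate the distinction, a hedged sketch (the endpoint, height, and RPC address are placeholders; 26657 is the default RPC port after the 0.21.0 port change):
```
# URI request: an unquoted integer is now accepted
curl 'localhost:26657/block?height=10'

# JSONRPC requests and all responses keep quoted integers, e.g. "height": "10"
```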

## 0.22.1

*July 5th, 2018*

IMPROVEMENTS

* Cleanup post repo-merge.
* [docs] Various improvements.

BUG FIXES

* [state] Return error when EndBlock returns a 0-power validator that isn't
  already in the validator set.
* [consensus] Shut down WAL properly.

## 0.22.0

*July 2nd, 2018*

BREAKING CHANGES:
- [config]
  * Remove `max_block_size_txs` and `max_block_size_bytes` in favor of
    consensus params from the genesis file.
  * Rename `skip_upnp` to `upnp`, and turn it off by default.
  * Change `max_packet_msg_size` back to `max_packet_msg_payload_size`
- [rpc]
  * All integers are encoded as strings (part of the update for Amino v0.10.1)
  * `syncing` is now called `catching_up`
- [types] Update Amino to v0.10.1
  * Amino is now fully proto3 compatible for the basic types
  * JSON-encoded types now use the type name instead of the prefix bytes
  * Integers are encoded as strings
- [crypto] Update go-crypto to v0.10.0 and merge into `crypto`
  * privKey.Sign returns error.
  * ed25519 address changed to the first 20-bytes of the SHA256 of the raw pubkey bytes
  * `tmlibs/merkle` -> `crypto/merkle`. Uses SHA256 instead of RIPEMD160
- [tmlibs] Update to v0.9.0 and merge into `libs`
  * remove `merkle` package (moved to `crypto/merkle`)

FEATURES
- [cmd] Added metrics (served under `/metrics` using a Prometheus client;
  disabled by default). See the new `instrumentation` section in the config and
  [metrics](https://tendermint.readthedocs.io/projects/tools/en/develop/metrics.html)
  guide.
- [p2p] Add IPv6 support to peering.
- [p2p] Add `external_address` to config to allow specifying the address for
  peers to dial
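A hedged sketch of the new option in the `[p2p]` section of `config.toml` (the address shown and its exact format are illustrative assumptions; only the key name comes from the entry above):
```
[p2p]
# the address, including port, that peers should use to dial this node
external_address = "203.0.113.10:26656"
```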

IMPROVEMENTS
- [rpc/client] Supports https and wss now.
- [crypto] Make public key sizes public constants
- [mempool] Log tx hash, not entire tx
- [abci] Merged in github.com/tendermint/abci
- [crypto] Merged in github.com/tendermint/go-crypto
- [libs] Merged in github.com/tendermint/tmlibs
- [docs] Move from .rst to .md

BUG FIXES:
- [rpc] Limit maximum number of HTTP/WebSocket connections
  (`rpc.max_open_connections`) and gRPC connections
  (`rpc.grpc_max_open_connections`). Check out the "Running In Production" guide if
  you want to increase them.
- [rpc] Limit maximum request body size to 1MB (header is limited to 1MB).
- [consensus] Fix a halting bug when `create_empty_blocks=false`
- [p2p] Fix panic in seed mode
## 0.21.0
|
||||
|
||||
*June 21st, 2018*
|
||||
|
||||
BREAKING CHANGES
|
||||
|
||||
- [config] Change default ports from 4665X to 2665X. Ports over 32768 are
|
||||
ephemeral and reserved for use by the kernel.
|
||||
- [cmd] `unsafe_reset_all` removes the addrbook.json
|
||||
|
||||
IMPROVEMENTS
|
||||
|
||||
- [pubsub] Set default capacity to 0
|
||||
- [docs] Various improvements
|
||||
|
||||
BUG FIXES
|
||||
|
||||
- [consensus] Fix an issue where we don't make blocks after `fast_sync` when `create_empty_blocks=false`
|
||||
- [mempool] Fix #1761 where we don't process txs if `cache_size=0`
|
||||
- [rpc] Fix memory leak in Websocket (when using `/subscribe` method)
|
||||
- [config] Escape paths in config - fixes config paths on Windows
|
||||
|
||||
## 0.20.0
|
||||
|
||||
*June 6th, 2018*
|
||||
|
||||
This is the first in a series of breaking releases coming to Tendermint after
|
||||
soliciting developer feedback and conducting security audits.
|
||||
|
||||
This release does not break any blockchain data structures or
|
||||
protocols other than the ABCI messages between Tendermint and the application.
|
||||
|
||||
Applications that upgrade for ABCI v0.11.0 should be able to continue running Tendermint
|
||||
v0.20.0 on blockchains created with v0.19.X.
|
||||
|
||||
BREAKING CHANGES
|
||||
|
||||
- [abci] Upgrade to
|
||||
[v0.11.0](https://github.com/tendermint/abci/blob/master/CHANGELOG.md#0110)
|
||||
- [abci] Change Query path for filtering peers by node ID from
|
||||
`p2p/filter/pubkey/<id>` to `p2p/filter/id/<id>`
|
||||
|
||||
## 0.19.9
|
||||
|
||||
*June 5th, 2018*
|
||||
|
||||
BREAKING CHANGES
|
||||
|
||||
- [types/priv_validator] Moved to top level `privval` package
|
||||
|
||||
FEATURES
|
||||
|
||||
- [config] Collapse PeerConfig into P2PConfig
|
||||
- [docs] Add quick-install script
|
||||
- [docs/spec] Add table of Amino prefixes
|
||||
|
||||
BUG FIXES
|
||||
|
||||
- [rpc] Return 404 for unknown endpoints
|
||||
- [consensus] Flush WAL on stop
|
||||
- [evidence] Don't send evidence to peers that are behind
|
||||
- [p2p] Fix memory leak on peer disconnects
|
||||
- [rpc] Fix panic when `per_page=0`
|
||||
|
||||
## 0.19.8
|
||||
|
||||
*June 4th, 2018*
|
||||
|
||||
BREAKING:
|
||||
|
||||
- [p2p] Remove `auth_enc` config option, peer connections are always auth
|
||||
encrypted. Technically a breaking change but seems no one was using it and
|
||||
arguably a bug fix :)
|
||||
|
||||
BUG FIXES
|
||||
|
||||
- [mempool] Fix deadlock under high load when `skip_timeout_commit=true` and
|
||||
`create_empty_blocks=false`
|
||||
|
||||
## 0.19.7
|
||||
|
||||
*May 31st, 2018*
|
||||
|
||||
BREAKING:
|
||||
|
||||
- [libs/pubsub] TagMap#Get returns a string value
|
||||
- [libs/pubsub] NewTagMap accepts a map of strings
|
||||
|
||||
FEATURES
|
||||
|
||||
- [rpc] the RPC documentation is now published to https://tendermint.github.io/slate
|
||||
- [p2p] AllowDuplicateIP config option to refuse connections from same IP.
|
||||
- true by default for now, false by default in next breaking release
|
||||
- [docs] Add docs for query, tx indexing, events, pubsub
|
||||
- [docs] Add some notes about running Tendermint in production
|
||||
|
||||
IMPROVEMENTS:
|
||||
|
||||
- [consensus] Consensus reactor now receives events from a separate synchronous event bus,
|
||||
which is not dependent on external RPC load
|
||||
- [consensus/wal] do not look for height in older files if we've seen height - 1
|
||||
- [docs] Various cleanup and link fixes
|
||||
|
||||
## 0.19.6
|
||||
|
||||
*May 29th, 2018*
|
||||
|
||||
BUG FIXES
|
||||
|
||||
- [blockchain] Fix fast-sync deadlock during high peer turnover
|
||||
|
||||
BUG FIX:
|
||||
|
||||
- [evidence] Don't send peers evidence from heights they haven't synced to yet
|
||||
- [p2p] Refuse connections to more than one peer with the same IP
|
||||
- [docs] Various fixes
|
||||
|
||||
## 0.19.5
|
||||
|
||||
*May 20th, 2018*
|
||||
|
||||
BREAKING CHANGES
|
||||
|
||||
- [rpc/client] TxSearch and UnconfirmedTxs have new arguments (see below)
|
||||
- [rpc/client] TxSearch returns ResultTxSearch
|
||||
- [version] Breaking changes to Go APIs will not be reflected in a breaking
  version change, but will be included in the changelog.
|
||||
|
||||
FEATURES
|
||||
|
||||
- [rpc] `/tx_search` takes `page` (starts at 1) and `per_page` (max 100, default 30) args to paginate results
|
||||
- [rpc] `/unconfirmed_txs` takes `limit` (max 100, default 30) arg to limit the output
|
||||
- [config] `mempool.size` and `mempool.cache_size` options
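A hedged example of the new pagination and limit arguments (the RPC address and the query are placeholders; the parameter names and defaults come from the entries above):
```
# page 2 of results matching a query, 30 results per page
curl "localhost:46657/tx_search?query=\"account.name='igor'\"&page=2&per_page=30"

# return at most 50 unconfirmed txs
curl "localhost:46657/unconfirmed_txs?limit=50"
```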
|
||||
|
||||
IMPROVEMENTS
|
||||
|
||||
- [docs] Lots of updates
|
||||
- [consensus] Only Fsync() the WAL before executing msgs from ourselves
|
||||
|
||||
BUG FIXES
|
||||
|
||||
- [mempool] Enforce upper bound on number of transactions
|
||||
|
||||
## 0.19.4 (May 17th, 2018)
|
||||
|
||||
IMPROVEMENTS
|
||||
|
||||
- [state] Improve tx indexing by using batches
|
||||
- [consensus, state] Improve logging (more consensus logs, fewer tx logs)
|
||||
- [spec] Moved to `docs/spec` (TODO cleanup the rest of the docs ...)
|
||||
|
||||
BUG FIXES
|
||||
|
||||
- [consensus] Fix issue #1575 where a late proposer can get stuck
|
||||
|
||||
## 0.19.3 (May 14th, 2018)
|
||||
|
||||
FEATURES
|
||||
|
||||
- [rpc] New `/consensus_state` returns just the votes seen at the current height
|
||||
|
||||
IMPROVEMENTS
|
||||
|
||||
- [rpc] Add stringified votes and fraction of power voted to `/dump_consensus_state`
|
||||
- [rpc] Add PeerStateStats to `/dump_consensus_state`
|
||||
|
||||
BUG FIXES
|
||||
|
||||
- [cmd] Set GenesisTime during `tendermint init`
|
||||
- [consensus] fix ValidBlock rules
|
||||
|
||||
## 0.19.2 (April 30th, 2018)
|
||||
|
||||
FEATURES:
|
||||
|
||||
- [p2p] Allow peers with different Minor versions to connect
|
||||
- [rpc] `/net_info` includes `n_peers`
|
||||
|
||||
IMPROVEMENTS:
|
||||
|
||||
- [p2p] Various code comments, cleanup, error types
|
||||
- [p2p] Change some Error logs to Debug
|
||||
|
||||
BUG FIXES:
|
||||
|
||||
- [p2p] Fix reconnect to persistent peer when first dial fails
|
||||
- [p2p] Validate NodeInfo.ListenAddr
|
||||
- [p2p] Only allow (MaxNumPeers - MaxNumOutboundPeers) inbound peers
|
||||
- [p2p/pex] Limit max msg size to 64kB
|
||||
- [p2p] Fix panic when pex=false
|
||||
- [p2p] Allow multiple IPs per ID in AddrBook
|
||||
- [p2p] Fix before/after bugs in addrbook isBad()
|
||||
|
||||
## 0.19.1 (April 27th, 2018)
|
||||
|
||||
Note this release includes some small breaking changes in the RPC and one in the
|
||||
config that are really bug fixes. v0.19.1 will work with existing chains, and make Tendermint
|
||||
easier to use and debug. With <3
|
||||
|
||||
BREAKING (MINOR)
|
||||
|
||||
- [config] Removed `wal_light` setting. If you really needed this, let us know
|
||||
|
||||
FEATURES:
|
||||
|
||||
- [networks] moved in tooling from devops repo: terraform and ansible scripts for deploying testnets !
|
||||
- [cmd] Added `gen_node_key` command
|
||||
|
||||
BUG FIXES
|
||||
|
||||
Some of these are breaking in the RPC response, but they're really bugs!
|
||||
|
||||
- [spec] Document address format and pubkey encoding pre and post Amino
|
||||
- [rpc] Lower case JSON field names
|
||||
- [rpc] Fix missing entries, improve, and lower case the fields in `/dump_consensus_state`
|
||||
- [rpc] Fix NodeInfo.Channels format to hex
|
||||
- [rpc] Add Validator address to `/status`
|
||||
- [rpc] Fix `prove` in ABCIQuery
|
||||
- [cmd] MarshalJSONIndent on init
|
||||
|
||||
## 0.19.0 (April 13th, 2018)
|
||||
|
||||
BREAKING:
|
||||
- [cmd] improved `testnet` command; now it can fill in `persistent_peers` for you in the config file and much more (see `tendermint testnet --help` for details)
|
||||
- [cmd] `show_node_id` now returns an error if there is no node key
|
||||
- [rpc]: changed the output format for the `/status` endpoint (see https://godoc.org/github.com/tendermint/tendermint/rpc/core#Status)
|
||||
|
||||
Upgrade from go-wire to go-amino. This is a sweeping change that breaks everything that is
|
||||
serialized to disk or over the network.
|
||||
|
||||
See github.com/tendermint/go-amino for details on the new format.
|
||||
|
||||
See `scripts/wire2amino.go` for a tool to upgrade
|
||||
genesis/priv_validator/node_key JSON files.
|
||||
|
||||
FEATURES
|
||||
|
||||
- [test] docker-compose for local testnet setup (thanks Greg!)
|
||||
|
||||
## 0.18.0 (April 6th, 2018)
|
||||
|
||||
BREAKING:
|
||||
|
||||
- [types] Merkle tree uses different encoding for varints (see tmlibs v0.8.0)
|
||||
- [types] ValidatorSet.GetByAddress returns -1 if no validator found
|
||||
- [p2p] require all addresses come with an ID no matter what
|
||||
- [rpc] Listening address must contain tcp:// or unix:// prefix
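As a hedged illustration of the required prefix in `config.toml` (the interface and port are placeholders, using the pre-0.21.0 default RPC port):
```
[rpc]
laddr = "tcp://0.0.0.0:46657"
```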
|
||||
|
||||
FEATURES:
|
||||
|
||||
- [rpc] StartHTTPAndTLSServer (not used yet)
|
||||
- [rpc] Include validator's voting power in `/status`
|
||||
- [rpc] `/tx` and `/tx_search` responses now include the transaction hash
|
||||
- [rpc] Include peer NodeIDs in `/net_info`
|
||||
|
||||
IMPROVEMENTS:
|
||||
- [config] trim whitespace from elements of lists (like `persistent_peers`)
|
||||
- [rpc] `/tx_search` results are sorted by height
|
||||
- [p2p] do not try to connect to ourselves (ok, maybe only once)
|
||||
- [p2p] seeds respond with a bias towards good peers
|
||||
|
||||
BUG FIXES:
|
||||
- [rpc] fix subscribing using an abci.ResponseDeliverTx tag
|
||||
- [rpc] fix tx_indexers matchRange
|
||||
- [rpc] fix unsubscribing (see tmlibs v0.8.0)
|
||||
|
||||
## 0.17.1 (March 27th, 2018)
|
||||
|
||||
BUG FIXES:
|
||||
- [types] Actually support `app_state` in genesis as `AppStateJSON`
|
||||
|
||||
## 0.17.0 (March 27th, 2018)
|
||||
|
||||
BREAKING:
|
||||
- [types] WriteSignBytes -> SignBytes
|
||||
|
||||
IMPROVEMENTS:
|
||||
- [all] renamed `dummy` (`persistent_dummy`) to `kvstore` (`persistent_kvstore`) (name "dummy" is deprecated and will not work in the next breaking release)
|
||||
- [docs] note on determinism (docs/determinism.rst)
|
||||
- [genesis] `app_options` field is deprecated. please rename it to `app_state` in your genesis file(s). `app_options` will not work in the next breaking release
|
||||
- [p2p] dial seeds directly without potential peers
|
||||
- [p2p] exponential backoff for addrs in the address book
|
||||
- [p2p] mark peer as good if it contributed enough votes or block parts
|
||||
- [p2p] stop peer if it sends incorrect data, msg to unknown channel, msg we did not expect
|
||||
- [p2p] when `auth_enc` is true, all dialed peers must have a node ID in their address
|
||||
- [spec] various improvements
|
||||
- switched from glide to dep internally for package management
|
||||
- [wire] prep work for upgrading to new go-wire (which is now called go-amino)
|
||||
|
||||
FEATURES:
|
||||
- [config] exposed `auth_enc` flag to enable/disable encryption
|
||||
- [config] added the `--p2p.private_peer_ids` flag and `PrivatePeerIDs` config variable (see config for description)
|
||||
- [rpc] added `/health` endpoint, which returns empty result for now
|
||||
- [types/priv_validator] new format and socket client, allowing for remote signing
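A hedged illustration of the new privacy flag (the IDs are placeholders for peer node IDs; the comma-separated value format is an assumption based on the config description):
```
tendermint node --p2p.private_peer_ids "nodeid1,nodeid2"
```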
|
||||
|
||||
BUG FIXES:
|
||||
- [consensus] fix liveness bug by introducing ValidBlock mechanism
|
||||
|
||||
## 0.16.0 (February 20th, 2018)
|
||||
|
||||
BREAKING CHANGES:
|
||||
- [config] use $TMHOME/config for all config and json files
|
||||
- [p2p] old `--p2p.seeds` is now `--p2p.persistent_peers` (persistent peers to which TM will always connect)
|
||||
- [p2p] now `--p2p.seeds` only used for getting addresses (if addrbook is empty; not persistent)
|
||||
- [p2p] NodeInfo: remove RemoteAddr and add Channels
|
||||
- we must have at least one overlapping channel with peer
|
||||
- we only send msgs for channels the peer advertised
|
||||
- [p2p/conn] pong timeout
|
||||
- [lite] comment out IAVL related code
|
||||
|
||||
FEATURES:
|
||||
- [p2p] added new `/dial_peers&persistent=_` **unsafe** endpoint
|
||||
- [p2p] persistent node key in `$TMHOME/config/node_key.json`
|
||||
- [p2p] introduce peer ID and authenticate peers by ID using addresses like `ID@IP:PORT` (see the example after this list)
|
||||
- [p2p/pex] new seed mode crawls the network and serves as a seed.
|
||||
- [config] MempoolConfig.CacheSize
|
||||
- [config] P2P.SeedMode (`--p2p.seed_mode`)
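As an example of the `ID@IP:PORT` address format introduced above (the node ID, IP, and port are made-up placeholders; real IDs are the hex strings printed by `tendermint show_node_id`):
```
[p2p]
persistent_peers = "f9baeaa15fedf5e1ef7448dd60f46c01f1a9e9c4@203.0.113.10:46656"
```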
|
||||
|
||||
IMPROVEMENTS:
|
||||
- [p2p/pex] stricter rules in the PEX reactor for better handling of abuse
|
||||
- [p2p] various improvements to code structure including subpackages for `pex` and `conn`
|
||||
- [docs] new spec!
|
||||
- [all] speed up the tests!
|
||||
|
||||
BUG FIXES:
|
||||
- [blockchain] StopPeerForError on timeout
|
||||
- [consensus] StopPeerForError on a bad Maj23 message
|
||||
- [state] flush mempool conn before calling commit
|
||||
- [types] fix priv val signing things that only differ by timestamp
|
||||
- [mempool] fix memory leak causing zombie peers
|
||||
- [p2p/conn] fix potential deadlock
|
||||
|
||||
## 0.15.0 (December 29, 2017)
|
||||
|
||||
BREAKING CHANGES:
|
||||
- [p2p] enable the Peer Exchange reactor by default
|
||||
- [types] add Timestamp field to Proposal/Vote
|
||||
- [types] add new fields to Header: TotalTxs, ConsensusParamsHash, LastResultsHash, EvidenceHash
|
||||
- [types] add Evidence to Block
|
||||
- [types] simplify ValidateBasic
|
||||
- [state] updates to support changes to the header
|
||||
- [state] Enforce <1/3 of validator set can change at a time
|
||||
|
||||
FEATURES:
|
||||
- [state] Send indices of absent validators and addresses of byzantine validators in BeginBlock
|
||||
- [state] Historical ConsensusParams and ABCIResponses
|
||||
- [docs] Specification for the base Tendermint data structures.
|
||||
- [evidence] New evidence reactor for gossiping and managing evidence
|
||||
- [rpc] `/block_results?height=X` returns the DeliverTx results for a given height.
|
||||
|
||||
IMPROVEMENTS:
|
||||
- [consensus] Better handling of corrupt WAL file
|
||||
|
||||
BUG FIXES:
|
||||
- [lite] fix race
|
||||
- [state] validate block.Header.ValidatorsHash
|
||||
- [p2p] allow seed addresses to be prefixed with eg. `tcp://`
|
||||
- [p2p] use consistent key to refer to peers so we don't try to connect to existing peers
|
||||
- [cmd] fix `tendermint init` to ignore files that are there and generate files that aren't.
|
||||
|
||||
## 0.14.0 (December 11, 2017)
|
||||
|
||||
BREAKING CHANGES:
|
||||
- consensus/wal: removed separator
|
||||
- rpc/client: changed Subscribe/Unsubscribe/UnsubscribeAll funcs signatures to be identical to event bus.
|
||||
|
||||
FEATURES:
|
||||
- new `tendermint lite` command (and `lite/proxy` pkg) for running a light-client RPC proxy.
|
||||
NOTE it is currently insecure and its APIs are not yet covered by semver
|
||||
|
||||
IMPROVEMENTS:
|
||||
- rpc/client: can act as event bus subscriber (See https://github.com/tendermint/tendermint/issues/945).
|
||||
- p2p: use exponential backoff from seconds to hours when attempting to reconnect to persistent peer
|
||||
- config: moniker defaults to the machine's hostname instead of "anonymous"
|
||||
|
||||
BUG FIXES:
|
||||
- p2p: no longer exit if one of the seed addresses is incorrect
|
||||
|
||||
## 0.13.0 (December 6, 2017)
|
||||
|
||||
BREAKING CHANGES:
|
||||
- abci: update to v0.8 using gogo/protobuf; includes tx tags, vote info in RequestBeginBlock, data.Bytes everywhere, use int64, etc.
|
||||
- types: block heights are now `int64` everywhere
|
||||
- types & node: EventSwitch and EventCache have been replaced by EventBus and EventBuffer; event types have been overhauled
|
||||
- node: EventSwitch methods now refer to EventBus
|
||||
- rpc/lib/types: RPCResponse is no longer a pointer; WSRPCConnection interface has been modified
|
||||
- rpc/client: WaitForOneEvent takes an EventsClient instead of types.EventSwitch
|
||||
- rpc/client: Add/RemoveListenerForEvent are now Subscribe/Unsubscribe
|
||||
- rpc/core/types: ResultABCIQuery wraps an abci.ResponseQuery
|
||||
- rpc: `/subscribe` and `/unsubscribe` take `query` arg instead of `event`
|
||||
- rpc: `/status` returns the LatestBlockTime in human readable form instead of in nanoseconds
|
||||
- mempool: cached transactions return an error instead of an ABCI response with BadNonce
|
||||
|
||||
FEATURES:
|
||||
- rpc: new `/unsubscribe_all` WebSocket RPC endpoint
|
||||
- rpc: new `/tx_search` endpoint for filtering transactions by more complex queries
|
||||
- p2p/trust: new trust metric for tracking peers. See ADR-006
|
||||
- config: TxIndexConfig allows setting which DeliverTx tags to index
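A hedged example of the query syntax accepted by `/subscribe` and `/tx_search` (the tag names and values are placeholders for whatever DeliverTx tags the application emits and indexes):
```
account.name = 'igor' AND account.amount > 100
```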
|
||||
|
||||
IMPROVEMENTS:
|
||||
- New asynchronous events system using `tmlibs/pubsub`
|
||||
- logging: Various small improvements
|
||||
- consensus: Graceful shutdown when app crashes
|
||||
- tests: Fix various non-deterministic errors
|
||||
- p2p: more defensive programming
|
||||
|
||||
BUG FIXES:
|
||||
- consensus: fix panic where prs.ProposalBlockParts is not initialized
|
||||
- p2p: fix panic on bad channel
|
||||
|
||||
## 0.12.1 (November 27, 2017)
|
||||
|
||||
BUG FIXES:
|
||||
- upgrade tmlibs dependency to enable Windows builds for Tendermint
|
||||
|
||||
## 0.12.0 (October 27, 2017)
|
||||
|
||||
BREAKING CHANGES:
|
||||
- rpc/client: websocket ResultsCh and ErrorsCh unified in ResponsesCh.
|
||||
- rpc/client: ABCIQuery no longer takes `prove`
|
||||
- state: remove GenesisDoc from state.
|
||||
- consensus: new binary WAL format provides efficiency and uses checksums to detect corruption
|
||||
- use scripts/wal2json to convert to json for debugging
|
||||
|
||||
FEATURES:
|
||||
- new `certifiers` pkg contains the tendermint light-client library (name subject to change)!
|
||||
- rpc: `/genesis` includes the `app_options` .
|
||||
- rpc: `/abci_query` takes an additional `height` parameter to support historical queries.
|
||||
- rpc/client: new ABCIQueryWithOptions supports options like `trusted` (set false to get a proof) and `height` to query a historical height.
|
||||
|
||||
IMPROVEMENTS:
|
||||
- rpc: `/genesis` result includes `app_options`
|
||||
- rpc/lib/client: add jitter to reconnects.
|
||||
- rpc/lib/types: `RPCError` satisfies the `error` interface.
|
||||
|
||||
BUG FIXES:
|
||||
- rpc/client: fix ws deadlock after stopping
|
||||
- blockchain: fix panic on AddBlock when peer is nil
|
||||
- mempool: fix sending on TxsAvailable when a tx has been invalidated
|
||||
- consensus: don't run WAL catchup if we fast synced
|
||||
|
||||
## 0.11.1 (October 10, 2017)
|
||||
|
||||
IMPROVEMENTS:
|
||||
- blockchain/reactor: respondWithNoResponseMessage for missing height
|
||||
|
||||
BUG FIXES:
|
||||
- rpc: fixed client WebSocket timeout
|
||||
- rpc: client now resubscribes on reconnection
|
||||
- rpc: fix panics on missing params
|
||||
- rpc: fix `/dump_consensus_state` to have normal json output (NOTE: technically breaking, but worth a bug fix label)
|
||||
- types: fixed out of range error in VoteSet.addVote
|
||||
- consensus: fix wal autofile via https://github.com/tendermint/tmlibs/blob/master/CHANGELOG.md#032-october-2-2017
|
||||
|
||||
## 0.11.0 (September 22, 2017)
|
||||
|
||||
BREAKING:
|
||||
- genesis file: validator `amount` is now `power`
|
||||
- abci: Info, BeginBlock, InitChain all take structs
|
||||
- rpc: various changes to match JSONRPC spec (http://www.jsonrpc.org/specification), including breaking ones:
|
||||
- requests that previously returned HTTP code 4XX now return 200 with an error code in the JSONRPC.
|
||||
- `rpctypes.RPCResponse` uses new `RPCError` type instead of `string`.
|
||||
|
||||
- cmd: if there is no genesis, exit immediately instead of waiting around for one to show.
|
||||
- types: `Signer.Sign` returns an error.
|
||||
- state: every validator set change is persisted to disk, which required some changes to the `State` structure.
|
||||
- p2p: new `p2p.Peer` interface used for all reactor methods (instead of `*p2p.Peer` struct).
|
||||
|
||||
FEATURES:
|
||||
- rpc: `/validators?height=X` allows querying of validators at previous heights.
|
||||
- rpc: Leaving the `height` param empty for `/block`, `/validators`, and `/commit` will return the value for the latest height.
|
||||
|
||||
IMPROVEMENTS:
|
||||
- docs: Moved all docs from the website and tools repo in, converted to `.rst`, and cleaned up for presentation on `tendermint.readthedocs.io`
|
||||
|
||||
BUG FIXES:
|
||||
- fix WAL opening issue on Windows
|
||||
|
||||
## 0.10.4 (September 5, 2017)
|
||||
|
||||
IMPROVEMENTS:
|
||||
- docs: Added Slate docs to each rpc function (see rpc/core)
|
||||
- docs: Ported all website docs to Read The Docs
|
||||
- config: expose some p2p params to tweak performance: RecvRate, SendRate, and MaxMsgPacketPayloadSize
|
||||
- rpc: Upgrade the websocket client and server, including improved auto reconnect, and proper ping/pong
|
||||
|
||||
BUG FIXES:
|
||||
- consensus: fix panic on getVoteBitArray
|
||||
- consensus: hang instead of panicking on byzantine consensus failures
|
||||
- cmd: don't load config for version command
|
||||
|
||||
## 0.10.3 (August 10, 2017)
|
||||
|
||||
FEATURES:
|
||||
- control over empty block production (see the config sketch after this list):
|
||||
- new flag, `--consensus.create_empty_blocks`; when set to false, blocks are only created when there are txs or when the AppHash changes.
|
||||
- new config option, `consensus.create_empty_blocks_interval`; an empty block is created after this many seconds.
|
||||
- in normal operation, `create_empty_blocks = true` and `create_empty_blocks_interval = 0`, so blocks are being created all the time (as in all previous versions of tendermint). The number of empty blocks can be reduced by increasing `create_empty_blocks_interval` or by setting `create_empty_blocks = false`.
|
||||
- new `TxsAvailable()` method added to Mempool that returns a channel which fires when txs are available.
|
||||
- new heartbeat message added to consensus reactor to notify peers that a node is waiting for txs before entering propose step.
|
||||
- rpc: Add `syncing` field to response returned by `/status`. Is `true` while in fast-sync mode.
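A minimal `config.toml` sketch of the empty-block controls described in the first feature above (the values are illustrative; the flag form `--consensus.create_empty_blocks` mentioned above works the same way):
```
[consensus]
# default behaviour, spelled out: keep creating blocks even without txs
create_empty_blocks = true
# reduce empty blocks to at most one every 60 seconds (0 = no limit, the default)
create_empty_blocks_interval = 60
```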
|
||||
|
||||
IMPROVEMENTS:
|
||||
- various improvements to documentation and code comments
|
||||
|
||||
BUG FIXES:
|
||||
- mempool: pass height into constructor so it doesn't always start at 0
|
||||
|
||||
## 0.10.2 (July 10, 2017)
|
||||
|
||||
FEATURES:
|
||||
- Enable lower latency block commits by adding consensus reactor sleep durations and p2p flush throttle timeout to the config
|
||||
|
||||
IMPROVEMENTS:
|
||||
- More detailed logging in the consensus reactor and state machine
|
||||
- More in-code documentation for many exposed functions, especially in consensus/reactor.go and p2p/switch.go
|
||||
- Improved readability for some function definitions and code blocks with long lines
|
||||
|
||||
## 0.10.1 (June 28, 2017)
|
||||
|
||||
FEATURES:
|
||||
- Use `--trace` to get stack traces for logged errors
|
||||
- types: GenesisDoc.ValidatorHash returns the hash of the genesis validator set
|
||||
- types: GenesisDocFromFile parses a GenesisDoc from a JSON file
|
||||
|
||||
IMPROVEMENTS:
|
||||
- Add a Code of Conduct
|
||||
- Variety of improvements as suggested by `megacheck` tool
|
||||
- rpc: deduplicate tests between rpc/client and rpc/tests
|
||||
- rpc: addresses without a protocol prefix default to `tcp://`. `http://` is also accepted as an alias for `tcp://`
|
||||
- cmd: commands are more easily reusable from other tools
|
||||
- DOCKER: automate build/push
|
||||
|
||||
BUG FIXES:
|
||||
- Fix log statements using keys with spaces (logger does not currently support spaces)
|
||||
- rpc: set logger on websocket connection
|
||||
- rpc: fix ws connection stability by setting write deadline on pings
|
||||
|
||||
## 0.10.0 (June 2, 2017)
|
||||
|
||||
Includes major updates to configuration, logging, and json serialization.
|
||||
Also includes the Grand Repo-Merge of 2017.
|
||||
|
||||
BREAKING CHANGES:
|
||||
|
||||
- Config and Flags:
|
||||
- The `config` map is replaced with a [`Config` struct](https://github.com/tendermint/tendermint/blob/master/config/config.go#L11),
|
||||
containing substructs: `BaseConfig`, `P2PConfig`, `MempoolConfig`, `ConsensusConfig`, `RPCConfig`
|
||||
- This affects the following flags:
|
||||
- `--seeds` is now `--p2p.seeds`
|
||||
- `--node_laddr` is now `--p2p.laddr`
|
||||
- `--pex` is now `--p2p.pex`
|
||||
- `--skip_upnp` is now `--p2p.skip_upnp`
|
||||
- `--rpc_laddr` is now `--rpc.laddr`
|
||||
- `--grpc_laddr` is now `--rpc.grpc_laddr`
|
||||
- Any configuration option now within a substruct must come under that heading in the `config.toml`, for instance:
|
||||
```
|
||||
[p2p]
|
||||
laddr="tcp://1.2.3.4:46656"
|
||||
|
||||
[consensus]
|
||||
timeout_propose=1000
|
||||
```
|
||||
- Use viper and `DefaultConfig() / TestConfig()` functions to handle defaults, and remove `config/tendermint` and `config/tendermint_test`
|
||||
- Change some [function and method signatures](https://gist.github.com/ebuchman/640d5fc6c2605f73497992fe107ebe0b) to accommodate the new config
|
||||
|
||||
- Logger
|
||||
- Replace static `log15` logger with a simple interface, and provide a new implementation using `go-kit`.
|
||||
See our new [logging library](https://github.com/tendermint/tmlibs/log) and [blog post](https://tendermint.com/blog/abstracting-the-logger-interface-in-go) for more details
|
||||
- Levels `warn` and `notice` are removed (you may need to change them in your `config.toml`!)
|
||||
- Change some [function and method signatures](https://gist.github.com/ebuchman/640d5fc6c2605f73497992fe107ebe0b) to accept a logger
|
||||
|
||||
- JSON serialization:
|
||||
- Replace `[TypeByte, Xxx]` with `{"type": "some-type", "data": Xxx}` in RPC and all `.json` files by using `go-wire/data`. For instance, a public key is now:
|
||||
```
|
||||
"pub_key": {
|
||||
"type": "ed25519",
|
||||
"data": "83DDF8775937A4A12A2704269E2729FCFCD491B933C4B0A7FFE37FE41D7760D0"
|
||||
}
|
||||
```
|
||||
- Remove type information about RPC responses, so `[TypeByte, {"jsonrpc": "2.0", ... }]` is now just `{"jsonrpc": "2.0", ... }`
|
||||
- Change `[]byte` to `data.Bytes` in all serialized types (for hex encoding)
|
||||
- Lowercase the JSON tags in `ValidatorSet` fields
|
||||
- Introduce `EventDataInner` for serializing events
|
||||
|
||||
- Other:
|
||||
- Send InitChain message in handshake if `appBlockHeight == 0`
|
||||
- Do not include the `Accum` field when computing the validator hash. This makes the ValidatorSetHash unique for a given validator set, rather than changing with every block (as the Accum changes)
|
||||
- Unsafe RPC calls are not enabled by default. This includes `/dial_seeds`, and all calls prefixed with `unsafe`. Use the `--rpc.unsafe` flag to enable.
|
||||
|
||||
|
||||
FEATURES:
|
||||
|
||||
- Per-module log levels. For instance, the new default is `state:info,*:error`, which means the `state` package logs at `info` level, and everything else logs at `error` level
|
||||
- Log if a node is validator or not in every consensus round
|
||||
- Use ldflags to set git hash as part of the version
|
||||
- Ignore `address` and `pub_key` fields in `priv_validator.json` and overwrite them with the values derived from the `priv_key`
|
||||
|
||||
IMPROVEMENTS:
|
||||
|
||||
- Merge `tendermint/go-p2p -> tendermint/tendermint/p2p` and `tendermint/go-rpc -> tendermint/tendermint/rpc/lib`
|
||||
- Update paths for grand repo merge:
|
||||
- `go-common -> tmlibs/common`
|
||||
- `go-data -> go-wire/data`
|
||||
- All other `go-` libs, except `go-crypto` and `go-wire`, are merged under `tmlibs`
|
||||
- No global loggers (loggers are passed into constructors, or preferably set with a SetLogger method)
|
||||
- Return HTTP status codes with errors for RPC responses
|
||||
- Limit `/blockchain_info` call to return a maximum of 20 blocks
|
||||
- Use `.Wrap()` and `.Unwrap()` instead of eg. `PubKeyS` for `go-crypto` types
|
||||
- RPC JSON responses use pretty printing (via `json.MarshalIndent`)
|
||||
- Color code different instances of the consensus for tests
|
||||
- Isolate viper to `cmd/tendermint/commands` and do not read config from file for tests
|
||||
|
||||
|
||||
## 0.9.2 (April 26, 2017)
|
||||
|
||||
BUG FIXES:
|
||||
|
||||
- Fix bug in `ResetPrivValidator` where we were using the global config and log (causing external consumers, eg. basecoin, to fail).
|
||||
|
||||
## 0.9.1 (April 21, 2017)
|
||||
|
||||
FEATURES:
|
||||
|
||||
- Transaction indexing - txs are indexed by their hash using a simple key-value store; easily extended to more advanced indexers
|
||||
- New `/tx?hash=X` endpoint to query for transactions and their DeliverTx result by hash. Optionally returns a proof of the tx's inclusion in the block
|
||||
- `tendermint testnet` command initializes files for a testnet
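A hedged example of the new endpoint (the RPC address and hash are placeholders; the name of the proof flag is an assumption):
```
curl "localhost:46657/tx?hash=0xF87700A86B5BB0B1B4B1B9110A4B7E1F6E9B7ED1&prove=true"
```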
|
||||
|
||||
IMPROVEMENTS:
|
||||
|
||||
- CLI now uses Cobra framework
|
||||
- TMROOT is now TMHOME (TMROOT will stop working in 0.10.0)
|
||||
- `/broadcast_tx_XXX` also returns the Hash (can be used to query for the tx)
|
||||
- `/broadcast_tx_commit` also returns the height the block was committed in
|
||||
- ABCIResponses struct persisted to disk before calling Commit; makes handshake replay much cleaner
|
||||
- WAL uses #ENDHEIGHT instead of #HEIGHT (#HEIGHT will stop working in 0.10.0)
|
||||
- Peers included via `--seeds`, under `seeds` in the config, or in `/dial_seeds` are now persistent, and will be reconnected to if the connection breaks
|
||||
|
||||
BUG FIXES:
|
||||
|
||||
- Fix bug in fast-sync where we stop syncing after a peer is removed, even if they're re-added later
|
||||
- Fix handshake replay to handle validator set changes and results of DeliverTx when we crash after app.Commit but before state.Save()
|
||||
|
||||
## 0.9.0 (March 6, 2017)
|
||||
|
||||
BREAKING CHANGES:
|
||||
|
||||
- Update ABCI to v0.4.0, where Query is now `Query(RequestQuery) ResponseQuery`, enabling precise proofs at particular heights:
|
||||
|
||||
```
|
||||
message RequestQuery{
|
||||
bytes data = 1;
|
||||
string path = 2;
|
||||
uint64 height = 3;
|
||||
bool prove = 4;
|
||||
}
|
||||
|
||||
message ResponseQuery{
|
||||
CodeType code = 1;
|
||||
int64 index = 2;
|
||||
bytes key = 3;
|
||||
bytes value = 4;
|
||||
bytes proof = 5;
|
||||
uint64 height = 6;
|
||||
string log = 7;
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
- `BlockMeta` data type unifies its Hash and PartSetHash under a `BlockID`:
|
||||
|
||||
```
|
||||
type BlockMeta struct {
|
||||
BlockID BlockID `json:"block_id"` // the block hash and partsethash
|
||||
Header *Header `json:"header"` // The block's Header
|
||||
}
|
||||
```
|
||||
|
||||
- `ValidatorSet.Proposer` is exposed as a field and persisted with the `State`. Use `GetProposer()` to initialize or update after validator-set changes.
|
||||
|
||||
- `tendermint gen_validator` command output is now pure JSON
|
||||
|
||||
FEATURES:
|
||||
|
||||
- New RPC endpoint `/commit?height=X` returns header and commit for block at height `X`
|
||||
- Client API for each endpoint, including mocks for testing
|
||||
|
||||
IMPROVEMENTS:
|
||||
|
||||
- `Node` is now a `BaseService`
|
||||
- Simplified starting Tendermint in-process from another application
|
||||
- Better organized Makefile
|
||||
- Scripts for auto-building binaries across platforms
|
||||
- Docker image improved, slimmed down (using Alpine), and changed from tendermint/tmbase to tendermint/tendermint
|
||||
- New repo files: `CONTRIBUTING.md`, Github `ISSUE_TEMPLATE`, `CHANGELOG.md`
|
||||
- Improvements on CircleCI for managing build/test artifacts
|
||||
- Handshake replay is done through the consensus package, possibly using a mockApp
|
||||
- Graceful shutdown of RPC listeners
|
||||
- Tests for the PEX reactor and DialSeeds
|
||||
|
||||
BUG FIXES:
|
||||
|
||||
- Check peer.Send for failure before updating PeerState in consensus
|
||||
- Fix panic in `/dial_seeds` with invalid addresses
|
||||
- Fix proposer selection logic in ValidatorSet by taking the address into account in the `accumComparable`
|
||||
- Fix inconsistencies with `ValidatorSet.Proposer` across restarts by persisting it in the `State`
|
||||
|
||||
|
||||
## 0.8.0 (January 13, 2017)
|
||||
|
||||
BREAKING CHANGES:
|
||||
|
||||
- New data type `BlockID` to represent blocks:
|
||||
|
||||
```
|
||||
type BlockID struct {
|
||||
Hash []byte `json:"hash"`
|
||||
PartsHeader PartSetHeader `json:"parts"`
|
||||
}
|
||||
```
|
||||
|
||||
- `Vote` data type now includes validator address and index:
|
||||
|
||||
```
|
||||
type Vote struct {
|
||||
ValidatorAddress []byte `json:"validator_address"`
|
||||
ValidatorIndex int `json:"validator_index"`
|
||||
Height int `json:"height"`
|
||||
Round int `json:"round"`
|
||||
Type byte `json:"type"`
|
||||
BlockID BlockID `json:"block_id"` // zero if vote is nil.
|
||||
Signature crypto.Signature `json:"signature"`
|
||||
}
|
||||
```
|
||||
|
||||
- Update TMSP to v0.3.0, where it is now called ABCI and AppendTx is DeliverTx
|
||||
- Hex strings in the RPC are now "0x" prefixed
|
||||
|
||||
|
||||
FEATURES:
|
||||
|
||||
- New message type on the ConsensusReactor, `Maj23Msg`, for peers to alert others they've seen a Maj23,
|
||||
in order to track and handle conflicting votes intelligently to prevent Byzantine faults from causing halts:
|
||||
|
||||
```
|
||||
type VoteSetMaj23Message struct {
|
||||
Height int
|
||||
Round int
|
||||
Type byte
|
||||
BlockID types.BlockID
|
||||
}
|
||||
```
|
||||
|
||||
- Configurable block part set size
|
||||
- Validator set changes
|
||||
- Optionally skip TimeoutCommit if we have all the votes
|
||||
- Handshake between Tendermint and App on startup to sync latest state and ensure consistent recovery from crashes
|
||||
- GRPC server for BroadcastTx endpoint
|
||||
|
||||
IMPROVEMENTS:
|
||||
|
||||
- Less verbose logging
|
||||
- Better test coverage (37% -> 49%)
|
||||
- Canonical SignBytes for signable types
|
||||
- Write-Ahead Log for Mempool and Consensus via tmlibs/autofile
|
||||
- Better in-process testing for the consensus reactor and byzantine faults
|
||||
- Better crash/restart testing for individual nodes at preset failure points, and of networks at arbitrary points
|
||||
- Better abstraction over timeout mechanics
|
||||
|
||||
BUG FIXES:
|
||||
|
||||
- Fix memory leak in mempool peer
|
||||
- Fix panic on POLRound=-1
|
||||
- Actually set the CommitTime
|
||||
- Actually send BeginBlock message
|
||||
- Fix a liveness issue caused by Byzantine proposals/votes. Uses the new `Maj23Msg`.
|
||||
|
||||
|
||||
## 0.7.4 (December 14, 2016)
|
||||
|
||||
FEATURES:
|
||||
|
||||
- Enable the Peer Exchange reactor with the `--pex` flag for more resilient gossip network (feature still in development, beware dragons)
|
||||
|
||||
IMPROVEMENTS:
|
||||
|
||||
- Remove restrictions on RPC endpoint `/dial_seeds` to enable manual network configuration
|
||||
|
||||
## 0.7.3 (October 20, 2016)
|
||||
|
||||
IMPROVEMENTS:
|
||||
|
||||
- Type safe FireEvent
|
||||
- More WAL/replay tests
|
||||
- Cleanup some docs
|
||||
|
||||
BUG FIXES:
|
||||
|
||||
- Fix deadlock in mempool for synchronous apps
|
||||
- Replay handles non-empty blocks
|
||||
- Fix race condition in HeightVoteSet
|
||||
|
||||
## 0.7.2 (September 11, 2016)
|
||||
|
||||
BUG FIXES:
|
||||
|
||||
- Set mustConnect=false so tendermint will retry connecting to the app
|
||||
|
||||
## 0.7.1 (September 10, 2016)
|
||||
|
||||
FEATURES:
|
||||
|
||||
- New TMSP connection for Query/Info
|
||||
- New RPC endpoints:
|
||||
- `tmsp_query`
|
||||
- `tmsp_info`
|
||||
- Allow application to filter peers through Query (off by default)
|
||||
|
||||
IMPROVEMENTS:
|
||||
|
||||
- TMSP connection type enforced at compile time
|
||||
- All listen/client urls use a "tcp://" or "unix://" prefix
|
||||
|
||||
BUG FIXES:
|
||||
|
||||
- Save LastSignature/LastSignBytes to `priv_validator.json` for recovery
|
||||
- Fix event unsubscribe
|
||||
- Fix fastsync/blockchain reactor
|
||||
|
||||
## 0.7.0 (August 7, 2016)
|
||||
|
||||
BREAKING CHANGES:
|
||||
|
||||
- Strict SemVer starting now!
|
||||
- Update to ABCI v0.2.0
|
||||
- Validation types now called Commit
|
||||
- NewBlock event only returns the block header
|
||||
|
||||
|
||||
FEATURES:
|
||||
|
||||
- TMSP and RPC support TCP and UNIX sockets
|
||||
- Additional config options including block size and consensus parameters
|
||||
- New WAL mode `cswal_light`; logs only the validator's own votes
|
||||
- New RPC endpoints:
|
||||
- for starting/stopping profilers, and for updating config
|
||||
- `/broadcast_tx_commit`, returns when tx is included in a block, else an error
|
||||
- `/unsafe_flush_mempool`, empties the mempool
|
||||
|
||||
|
||||
IMPROVEMENTS:
|
||||
|
||||
- Various optimizations
|
||||
- Remove bad or invalidated transactions from the mempool cache (allows later duplicates)
|
||||
- More elaborate testing using CircleCI including benchmarking throughput on 4 digitalocean droplets
|
||||
|
||||
BUG FIXES:
|
||||
|
||||
- Various fixes to WAL and replay logic
|
||||
- Various race conditions
|
||||
|
||||
## PreHistory
|
||||
|
||||
Strict versioning only began with the release of v0.7.0, in late summer 2016.
|
||||
The project itself began in early summer 2014 and was workable decentralized cryptocurrency software by the end of that year.
|
||||
Through the course of 2015, in collaboration with Eris Industries (now Monax Industries),
|
||||
many additional features were integrated, including an implementation from scratch of the Ethereum Virtual Machine.
|
||||
That implementation now forms the heart of [Burrow](https://github.com/hyperledger/burrow).
|
||||
In the later half of 2015, the consensus algorithm was upgraded with a more asynchronous design and a more deterministic and robust implementation.
|
||||
|
||||
By late 2015, frustration with the difficulty of forking a large monolithic stack to create alternative cryptocurrency designs led to the
|
||||
invention of the Application Blockchain Interface (ABCI), then called the Tendermint Socket Protocol (TMSP).
|
||||
The Ethereum Virtual Machine and various other transaction features were removed, and Tendermint was whittled down to a core consensus engine
|
||||
driving an application running in another process.
|
||||
The ABCI interface and implementation were iterated on and improved over the course of 2016,
|
||||
until versioned history kicked in with v0.7.0.
|
56
CODE_OF_CONDUCT.md
Normal file
@ -0,0 +1,56 @@
|
||||
# The Tendermint Code of Conduct
|
||||
This code of conduct applies to all projects run by the Tendermint/COSMOS team and hence to tendermint.
|
||||
|
||||
|
||||
----
|
||||
|
||||
|
||||
# Conduct
|
||||
## Contact: adrian@tendermint.com
|
||||
|
||||
* We are committed to providing a friendly, safe and welcoming environment for all, regardless of level of experience, gender, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, nationality, or other similar characteristic.
|
||||
|
||||
* On Slack, please avoid using overtly sexual nicknames or other nicknames that might detract from a friendly, safe and welcoming environment for all.
|
||||
|
||||
* Please be kind and courteous. There’s no need to be mean or rude.
|
||||
|
||||
* Respect that people have differences of opinion and that every design or implementation choice carries a trade-off and numerous costs. There is seldom a right answer.
|
||||
|
||||
* Please keep unstructured critique to a minimum. If you have solid ideas you want to experiment with, make a fork and see how it works.
|
||||
|
||||
* We will exclude you from interaction if you insult, demean or harass anyone. That is not welcome behaviour. We interpret the term “harassment” as including the definition in the [Citizen Code of Conduct](http://citizencodeofconduct.org/); if you have any lack of clarity about what might be included in that concept, please read their definition. In particular, we don’t tolerate behavior that excludes people in socially marginalized groups.
|
||||
|
||||
* Private harassment is also unacceptable. No matter who you are, if you feel you have been or are being harassed or made uncomfortable by a community member, please contact one of the channel admins or the person mentioned above immediately. Whether you’re a regular contributor or a newcomer, we care about making this community a safe place for you and we’ve got your back.
|
||||
|
||||
* Likewise any spamming, trolling, flaming, baiting or other attention-stealing behaviour is not welcome.
|
||||
|
||||
|
||||
----
|
||||
|
||||
|
||||
# Moderation
|
||||
These are the policies for upholding our community’s standards of conduct. If you feel that a thread needs moderation, please contact the above mentioned person.
|
||||
|
||||
1. Remarks that violate the Tendermint/COSMOS standards of conduct, including hateful, hurtful, oppressive, or exclusionary remarks, are not allowed. (Cursing is allowed, but never targeting another user, and never in a hateful manner.)
|
||||
|
||||
2. Remarks that moderators find inappropriate, whether listed in the code of conduct or not, are also not allowed.
|
||||
|
||||
3. Moderators will first respond to such remarks with a warning.
|
||||
|
||||
4. If the warning is unheeded, the user will be “kicked,” i.e., kicked out of the communication channel to cool off.
|
||||
|
||||
5. If the user comes back and continues to make trouble, they will be banned, i.e., indefinitely excluded.
|
||||
|
||||
6. Moderators may choose at their discretion to un-ban the user if it was a first offense and they offer the offended party a genuine apology.
|
||||
|
||||
7. If a moderator bans someone and you think it was unjustified, please take it up with that moderator, or with a different moderator, in private. Complaints about bans in-channel are not allowed.
|
||||
|
||||
8. Moderators are held to a higher standard than other community members. If a moderator creates an inappropriate situation, they should expect less leeway than others.
|
||||
|
||||
In the Tendermint/COSMOS community we strive to go the extra step to look out for each other. Don’t just aim to be technically unimpeachable, try to be your best self. In particular, avoid flirting with offensive or sensitive issues, particularly if they’re off-topic; this all too often leads to unnecessary fights, hurt feelings, and damaged trust; worse, it can drive people away from the community entirely.
|
||||
|
||||
And if someone takes issue with something you said or did, resist the urge to be defensive. Just stop doing what it was they complained about and apologize. Even if you feel you were misinterpreted or unfairly accused, chances are good there was something you could’ve communicated better — remember that it’s your responsibility to make your fellow Cosmonauts comfortable. Everyone wants to get along and we are all here first and foremost because we want to talk about cool technology. You will find that people will be eager to assume good intent and forgive as long as you earn their trust.
|
||||
|
||||
The enforcement policies listed above apply to all official Tendermint/COSMOS venues. For other projects adopting the Tendermint/COSMOS Code of Conduct, please contact the maintainers of those projects for enforcement. If you wish to use this code of conduct for your own project, consider explicitly mentioning your moderation policy or making a copy with your own moderation policy so as to avoid confusion.
|
||||
|
||||
*Adapted from the [Node.js Policy on Trolling](http://blog.izs.me/post/30036893703/policy-on-trolling), the [Contributor Covenant v1.3.0](http://contributor-covenant.org/version/1/3/0/) and the [Rust Code of Conduct](https://www.rust-lang.org/en-US/conduct.html).*
|
117
CONTRIBUTING.md
Normal file
@ -0,0 +1,117 @@
|
||||
# Contributing
|
||||
|
||||
Thank you for considering making contributions to Tendermint and related repositories! Start by taking a look at the [coding repo](https://github.com/tendermint/coding) for overall information on repository workflow and standards.
|
||||
|
||||
Please follow standard github best practices: fork the repo, branch from the tip of develop, make some commits, and submit a pull request to develop. See the [open issues](https://github.com/tendermint/tendermint/issues) for things we need help with!
|
||||
|
||||
Please make sure to use `gofmt` before every commit - the easiest way to do this is have your editor run it for you upon saving a file.
|
||||
|
||||
## Forking
|
||||
|
||||
Please note that Go requires code to live under absolute paths, which complicates forking.
|
||||
While my fork lives at `https://github.com/ebuchman/tendermint`,
|
||||
the code should never exist at `$GOPATH/src/github.com/ebuchman/tendermint`.
|
||||
Instead, we use `git remote` to add the fork as a new remote for the original repo,
|
||||
`$GOPATH/src/github.com/tendermint/tendermint`, and do all the work there.
|
||||
|
||||
For instance, to create a fork and work on a branch of it, I would:
|
||||
|
||||
* Create the fork on github, using the fork button.
|
||||
* Go to the original repo checked out locally (i.e. `$GOPATH/src/github.com/tendermint/tendermint`)
|
||||
* `git remote rename origin upstream`
|
||||
* `git remote add origin git@github.com:ebuchman/tendermint.git`
|
||||
|
||||
Now `origin` refers to my fork and `upstream` refers to the tendermint version.
|
||||
So I can `git push -u origin master` to update my fork, and make pull requests to tendermint from there.
|
||||
Of course, replace `ebuchman` with your git handle.
|
||||
|
||||
To pull in updates from the upstream repo, run
|
||||
|
||||
* `git fetch upstream`
|
||||
* `git rebase upstream/master` (or whatever branch you want)
|
||||
|
||||
Please don't make Pull Requests to `master`.
|
||||
|
||||
## Dependencies
|
||||
|
||||
We use [dep](https://github.com/golang/dep) to manage dependencies.
|
||||
|
||||
That said, the master branch of every Tendermint repository should just build
|
||||
with `go get`, which means they should be kept up-to-date with their
|
||||
dependencies so we can get away with telling people they can just `go get` our
|
||||
software.
|
||||
|
||||
Since some dependencies are not under our control, a third party may break our
|
||||
build, in which case we can fall back on `dep ensure` (or `make
|
||||
get_vendor_deps`). Even for dependencies under our control, dep helps us to
|
||||
keep multiple repos in sync as they evolve. Anything with an executable, such
|
||||
as apps, tools, and the core, should use dep.
|
||||
|
||||
Run `dep status` to get a list of vendor dependencies that may not be
|
||||
up-to-date.
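A typical flow using the commands mentioned above (the comments are ours):
```
dep status             # list vendored dependencies that are out of date
dep ensure             # sync vendor/ with the manifest and lock file
make get_vendor_deps   # the equivalent fallback mentioned above
```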
|
||||
|
||||
## Vagrant
|
||||
|
||||
If you are a [Vagrant](https://www.vagrantup.com/) user, you can get started
|
||||
hacking Tendermint with the commands below.
|
||||
|
||||
NOTE: In case you installed Vagrant in 2017, you might need to run
|
||||
`vagrant box update` to upgrade to the latest `ubuntu/xenial64`.
|
||||
|
||||
```
|
||||
vagrant up
|
||||
vagrant ssh
|
||||
make test
|
||||
```
|
||||
|
||||
## Testing
|
||||
|
||||
All repos should be hooked up to [CircleCI](https://circleci.com/).
|
||||
|
||||
If they have `.go` files in the root directory, they will be automatically
|
||||
tested by circle using `go test -v -race ./...`. If not, they will need a
|
||||
`circle.yml`. Ideally, every repo has a `Makefile` that defines `make test` and
|
||||
includes its continuous integration status using a badge in the `README.md`.
|
||||
|
||||
## Branching Model and Release
|
||||
|
||||
User-facing repos should adhere to the branching model: http://nvie.com/posts/a-successful-git-branching-model/.
|
||||
That is, these repos should be well versioned, and any merge to master requires a version bump and tagged release.
|
||||
|
||||
Libraries need not follow the model strictly, but would be wise to,
|
||||
especially `go-p2p` and `go-rpc`, as their versions are referenced in tendermint core.
|
||||
|
||||
### Development Procedure:
|
||||
- the latest state of development is on `develop`
|
||||
- `develop` must never fail `make test`
|
||||
- no --force onto `develop` (except when reverting a broken commit, which should seldom happen)
|
||||
- create a development branch either on github.com/tendermint/tendermint, or your fork (using `git remote add origin`)
|
||||
- before submitting a pull request, begin `git rebase` on top of `develop`
|
||||
|
||||
### Pull Merge Procedure:
|
||||
- ensure pull branch is rebased on develop
|
||||
- run `make test` to ensure that all tests pass
|
||||
- merge pull request
|
||||
- the `unstable` branch may be used to aggregate pull merges before testing them all at once
|
||||
- the push master may request that pull requests be rebased on top of `unstable`
|
||||
|
||||
### Release Procedure:
|
||||
- start on `develop`
|
||||
- run integration tests (see `test_integrations` in Makefile)
|
||||
- prepare changelog/release issue
|
||||
- bump versions
|
||||
- push to release-vX.X.X to run the extended integration tests on the CI
|
||||
- merge to master
|
||||
- merge master back to develop
|
||||
|
||||
### Hotfix Procedure:
|
||||
- start on `master`
|
||||
- checkout a new branch named hotfix-vX.X.X
|
||||
- make the required changes
|
||||
- these changes should be small and an absolute necessity
|
||||
- add a note to CHANGELOG.md
|
||||
- bump versions
|
||||
- push to hotfix-vX.X.X to run the extended integration tests on the CI
|
||||
- merge hotfix-vX.X.X to master
|
||||
- merge hotfix-vX.X.X to develop
|
||||
- delete the hotfix-vX.X.X branch
|
1
DOCKER/.gitignore
vendored
Normal file
@ -0,0 +1 @@
|
||||
tendermint
|
39
DOCKER/Dockerfile
Normal file
@ -0,0 +1,39 @@
|
||||
FROM alpine:3.7
|
||||
MAINTAINER Greg Szabo <greg@tendermint.com>
|
||||
|
||||
# Tendermint will be looking for the genesis file in /tendermint/config/genesis.json
|
||||
# (unless you change `genesis_file` in config.toml). You can put your config.toml and
|
||||
# private validator file into /tendermint/config.
|
||||
#
|
||||
# The /tendermint/data dir is used by tendermint to store state.
|
||||
ENV TMHOME /tendermint
|
||||
|
||||
# OS environment setup
|
||||
# Set user right away for determinism, create directory for persistence and give our user ownership
|
||||
# jq and curl used for extracting `pub_key` from private validator while
|
||||
# deploying tendermint with Kubernetes. It is nice to have bash so the users
|
||||
# could execute bash commands.
|
||||
RUN apk update && \
|
||||
apk upgrade && \
|
||||
apk --no-cache add curl jq bash && \
|
||||
addgroup tmuser && \
|
||||
adduser -S -G tmuser tmuser -h "$TMHOME"
|
||||
|
||||
# Run the container with tmuser by default. (UID=100, GID=1000)
|
||||
USER tmuser
|
||||
|
||||
# Expose the data directory as a volume since there's mutable state in there
|
||||
VOLUME [ $TMHOME ]
|
||||
|
||||
WORKDIR $TMHOME
|
||||
|
||||
# p2p and rpc port
|
||||
EXPOSE 26656 26657
|
||||
|
||||
ENTRYPOINT ["/usr/bin/tendermint"]
|
||||
CMD ["node", "--moniker=`hostname`"]
|
||||
STOPSIGNAL SIGTERM
|
||||
|
||||
ARG BINARY=tendermint
|
||||
COPY $BINARY /usr/bin/tendermint
|
||||
|
23
DOCKER/Dockerfile.abci
Normal file
@ -0,0 +1,23 @@
|
||||
FROM golang:latest
|
||||
|
||||
RUN mkdir -p /go/src/github.com/tendermint/abci
|
||||
WORKDIR /go/src/github.com/tendermint/abci
|
||||
|
||||
COPY Makefile /go/src/github.com/tendermint/abci/
|
||||
|
||||
# see make protoc for details on ldconfig
|
||||
RUN make get_protoc && ldconfig
|
||||
|
||||
# killall is used in tests
|
||||
RUN apt-get update && apt-get install -y \
|
||||
psmisc \
|
||||
&& rm -rf /var/lib/apt/lists/*
|
||||
|
||||
COPY Gopkg.toml /go/src/github.com/tendermint/abci/
|
||||
COPY Gopkg.lock /go/src/github.com/tendermint/abci/
|
||||
RUN make get_tools
|
||||
|
||||
# see https://github.com/golang/dep/issues/1312
|
||||
RUN dep ensure -vendor-only
|
||||
|
||||
COPY . /go/src/github.com/tendermint/abci
|
35
DOCKER/Dockerfile.develop
Normal file
@ -0,0 +1,35 @@
|
||||
FROM alpine:3.7
|
||||
|
||||
ENV DATA_ROOT /tendermint
|
||||
ENV TMHOME $DATA_ROOT
|
||||
|
||||
RUN addgroup tmuser && \
|
||||
adduser -S -G tmuser tmuser
|
||||
|
||||
RUN mkdir -p $DATA_ROOT && \
|
||||
chown -R tmuser:tmuser $DATA_ROOT
|
||||
|
||||
RUN apk add --no-cache bash curl jq
|
||||
|
||||
ENV GOPATH /go
|
||||
ENV PATH "$PATH:/go/bin"
|
||||
RUN mkdir -p /go/src/github.com/tendermint/tendermint && \
|
||||
apk add --no-cache go build-base git && \
|
||||
cd /go/src/github.com/tendermint/tendermint && \
|
||||
git clone https://github.com/tendermint/tendermint . && \
|
||||
git checkout develop && \
|
||||
make get_tools && \
|
||||
make get_vendor_deps && \
|
||||
make install && \
|
||||
cd - && \
|
||||
rm -rf /go/src/github.com/tendermint/tendermint && \
|
||||
apk del go build-base git
|
||||
|
||||
VOLUME $DATA_ROOT
|
||||
|
||||
EXPOSE 26656
|
||||
EXPOSE 26657
|
||||
|
||||
ENTRYPOINT ["tendermint"]
|
||||
|
||||
CMD ["node", "--moniker=`hostname`", "--proxy_app=kvstore"]
|
18
DOCKER/Dockerfile.testing
Normal file
@ -0,0 +1,18 @@
FROM golang:1.10.1


# Grab deps (jq, hexdump, xxd, killall)
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    jq bsdmainutils vim-common psmisc netcat

# Add testing deps for curl
RUN echo 'deb http://httpredir.debian.org/debian testing main non-free contrib' >> /etc/apt/sources.list && \
    apt-get update && \
    apt-get install -y --no-install-recommends curl

VOLUME /go

EXPOSE 26656
EXPOSE 26657
16
DOCKER/Makefile
Normal file
@ -0,0 +1,16 @@
build:
	@sh -c "'$(CURDIR)/build.sh'"

push:
	@sh -c "'$(CURDIR)/push.sh'"

build_develop:
	docker build -t "tendermint/tendermint:develop" -f Dockerfile.develop .

build_testing:
	docker build --tag tendermint/testing -f ./Dockerfile.testing .

push_develop:
	docker push "tendermint/tendermint:develop"

.PHONY: build build_develop build_testing push push_develop
67
DOCKER/README.md
Normal file
@ -0,0 +1,67 @@
# Docker

## Supported tags and respective `Dockerfile` links

- `0.17.1`, `latest` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/208ac32fa266657bd6c304e84ec828aa252bb0b8/DOCKER/Dockerfile)
- `0.15.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/170777300ea92dc21a8aec1abc16cb51812513a4/DOCKER/Dockerfile)
- `0.13.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/a28b3fff49dce2fb31f90abb2fc693834e0029c2/DOCKER/Dockerfile)
- `0.12.1` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/457c688346b565e90735431619ca3ca597ef9007/DOCKER/Dockerfile)
- `0.12.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/70d8afa6e952e24c573ece345560a5971bf2cc0e/DOCKER/Dockerfile)
- `0.11.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/9177cc1f64ca88a4a0243c5d1773d10fba67e201/DOCKER/Dockerfile)
- `0.10.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/e5342f4054ab784b2cd6150e14f01053d7c8deb2/DOCKER/Dockerfile)
- `0.9.1`, `0.9` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/809e0e8c5933604ba8b2d096803ada7c5ec4dfd3/DOCKER/Dockerfile)
- `0.9.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/d474baeeea6c22b289e7402449572f7c89ee21da/DOCKER/Dockerfile)
- `0.8.0`, `0.8` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/bf64dd21fdb193e54d8addaaaa2ecf7ac371de8c/DOCKER/Dockerfile)
- `develop` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/master/DOCKER/Dockerfile.develop)

The `develop` tag points to the [develop](https://github.com/tendermint/tendermint/tree/develop) branch.

## Quick reference

* **Where to get help:**
  https://cosmos.network/community

* **Where to file issues:**
  https://github.com/tendermint/tendermint/issues

* **Supported Docker versions:**
  [the latest release](https://github.com/moby/moby/releases) (down to 1.6 on a best-effort basis)

## Tendermint

Tendermint Core is Byzantine Fault Tolerant (BFT) middleware that takes a state transition machine, written in any programming language, and securely replicates it on many machines.

For more background, see the [introduction](https://tendermint.readthedocs.io/en/master/introduction.html).

To get started developing applications, see the [application developers guide](https://tendermint.readthedocs.io/en/master/getting-started.html).

## How to use this image

### Start one instance of the Tendermint core with the `kvstore` app

A quick example of a built-in app and Tendermint core in one container.

```
docker run -it --rm -v "/tmp:/tendermint" tendermint/tendermint init
docker run -it --rm -v "/tmp:/tendermint" tendermint/tendermint node --proxy_app=kvstore
```

## Local cluster

To run a 4-node network, see the `Makefile` in the root of [the repo](https://github.com/tendermint/tendermint/blob/master/Makefile) and run:

```
make build-linux
make build-docker-localnode
make localnet-start
```

Note that this will build and use a different image than the ones provided here.

## License

- Tendermint's license is [Apache 2.0](https://github.com/tendermint/tendermint/blob/master/LICENSE).

## Contributing

Contributions are most welcome! See the [contributing file](https://github.com/tendermint/tendermint/blob/master/CONTRIBUTING.md) for more information.
20
DOCKER/build.sh
Executable file
20
DOCKER/build.sh
Executable file
@ -0,0 +1,20 @@
|
||||
#!/usr/bin/env bash
|
||||
set -e
|
||||
|
||||
# Get the tag from the version, or try to figure it out.
|
||||
if [ -z "$TAG" ]; then
|
||||
TAG=$(awk -F\" '/Version =/ { print $2; exit }' < ../version/version.go)
|
||||
fi
|
||||
if [ -z "$TAG" ]; then
|
||||
echo "Please specify a tag."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
TAG_NO_PATCH=${TAG%.*}
|
||||
|
||||
read -p "==> Build 3 docker images with the following tags (latest, $TAG, $TAG_NO_PATCH)? y/n" -n 1 -r
|
||||
echo
|
||||
if [[ $REPLY =~ ^[Yy]$ ]]
|
||||
then
|
||||
docker build -t "tendermint/tendermint" -t "tendermint/tendermint:$TAG" -t "tendermint/tendermint:$TAG_NO_PATCH" .
|
||||
fi
|
22
DOCKER/push.sh
Executable file
22
DOCKER/push.sh
Executable file
@ -0,0 +1,22 @@
|
||||
#!/usr/bin/env bash
|
||||
set -e
|
||||
|
||||
# Get the tag from the version, or try to figure it out.
|
||||
if [ -z "$TAG" ]; then
|
||||
TAG=$(awk -F\" '/Version =/ { print $2; exit }' < ../version/version.go)
|
||||
fi
|
||||
if [ -z "$TAG" ]; then
|
||||
echo "Please specify a tag."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
TAG_NO_PATCH=${TAG%.*}
|
||||
|
||||
read -p "==> Push 3 docker images with the following tags (latest, $TAG, $TAG_NO_PATCH)? y/n" -n 1 -r
|
||||
echo
|
||||
if [[ $REPLY =~ ^[Yy]$ ]]
|
||||
then
|
||||
docker push "tendermint/tendermint:latest"
|
||||
docker push "tendermint/tendermint:$TAG"
|
||||
docker push "tendermint/tendermint:$TAG_NO_PATCH"
|
||||
fi
|
419
Gopkg.lock
generated
Normal file
419
Gopkg.lock
generated
Normal file
@ -0,0 +1,419 @@
|
||||
# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'.
|
||||
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "github.com/beorn7/perks"
|
||||
packages = ["quantile"]
|
||||
revision = "3a771d992973f24aa725d07868b467d1ddfceafb"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "github.com/btcsuite/btcd"
|
||||
packages = ["btcec"]
|
||||
revision = "86fed781132ac890ee03e906e4ecd5d6fa180c64"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "github.com/btcsuite/btcutil"
|
||||
packages = [
|
||||
"base58",
|
||||
"bech32"
|
||||
]
|
||||
revision = "d4cc87b860166d00d6b5b9e0d3b3d71d6088d4d4"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/davecgh/go-spew"
|
||||
packages = ["spew"]
|
||||
revision = "346938d642f2ec3594ed81d874461961cd0faa76"
|
||||
version = "v1.1.0"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "github.com/ebuchman/fail-test"
|
||||
packages = ["."]
|
||||
revision = "95f809107225be108efcf10a3509e4ea6ceef3c4"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "github.com/fortytw2/leaktest"
|
||||
packages = ["."]
|
||||
revision = "b008db64ef8daabb22ff6daa557f33b41d8f6ccd"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/fsnotify/fsnotify"
|
||||
packages = ["."]
|
||||
revision = "c2828203cd70a50dcccfb2761f8b1f8ceef9a8e9"
|
||||
version = "v1.4.7"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/go-kit/kit"
|
||||
packages = [
|
||||
"log",
|
||||
"log/level",
|
||||
"log/term",
|
||||
"metrics",
|
||||
"metrics/discard",
|
||||
"metrics/internal/lv",
|
||||
"metrics/prometheus"
|
||||
]
|
||||
revision = "4dc7be5d2d12881735283bcab7352178e190fc71"
|
||||
version = "v0.6.0"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/go-logfmt/logfmt"
|
||||
packages = ["."]
|
||||
revision = "390ab7935ee28ec6b286364bba9b4dd6410cb3d5"
|
||||
version = "v0.3.0"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/go-stack/stack"
|
||||
packages = ["."]
|
||||
revision = "259ab82a6cad3992b4e21ff5cac294ccb06474bc"
|
||||
version = "v1.7.0"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/gogo/protobuf"
|
||||
packages = [
|
||||
"gogoproto",
|
||||
"jsonpb",
|
||||
"proto",
|
||||
"protoc-gen-gogo/descriptor",
|
||||
"sortkeys",
|
||||
"types"
|
||||
]
|
||||
revision = "1adfc126b41513cc696b209667c8656ea7aac67c"
|
||||
version = "v1.0.0"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/golang/protobuf"
|
||||
packages = [
|
||||
"proto",
|
||||
"ptypes",
|
||||
"ptypes/any",
|
||||
"ptypes/duration",
|
||||
"ptypes/timestamp"
|
||||
]
|
||||
revision = "925541529c1fa6821df4e44ce2723319eb2be768"
|
||||
version = "v1.0.0"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "github.com/golang/snappy"
|
||||
packages = ["."]
|
||||
revision = "2e65f85255dbc3072edf28d6b5b8efc472979f5a"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/gorilla/websocket"
|
||||
packages = ["."]
|
||||
revision = "ea4d1f681babbce9545c9c5f3d5194a789c89f5b"
|
||||
version = "v1.2.0"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "github.com/hashicorp/hcl"
|
||||
packages = [
|
||||
".",
|
||||
"hcl/ast",
|
||||
"hcl/parser",
|
||||
"hcl/scanner",
|
||||
"hcl/strconv",
|
||||
"hcl/token",
|
||||
"json/parser",
|
||||
"json/scanner",
|
||||
"json/token"
|
||||
]
|
||||
revision = "ef8a98b0bbce4a65b5aa4c368430a80ddc533168"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/inconshreveable/mousetrap"
|
||||
packages = ["."]
|
||||
revision = "76626ae9c91c4f2a10f34cad8ce83ea42c93bb75"
|
||||
version = "v1.0"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "github.com/jmhodges/levigo"
|
||||
packages = ["."]
|
||||
revision = "c42d9e0ca023e2198120196f842701bb4c55d7b9"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "github.com/kr/logfmt"
|
||||
packages = ["."]
|
||||
revision = "b84e30acd515aadc4b783ad4ff83aff3299bdfe0"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/magiconair/properties"
|
||||
packages = ["."]
|
||||
revision = "c2353362d570a7bfa228149c62842019201cfb71"
|
||||
version = "v1.8.0"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/matttproud/golang_protobuf_extensions"
|
||||
packages = ["pbutil"]
|
||||
revision = "c12348ce28de40eed0136aa2b644d0ee0650e56c"
|
||||
version = "v1.0.1"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "github.com/mitchellh/mapstructure"
|
||||
packages = ["."]
|
||||
revision = "bb74f1db0675b241733089d5a1faa5dd8b0ef57b"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/pelletier/go-toml"
|
||||
packages = ["."]
|
||||
revision = "c01d1270ff3e442a8a57cddc1c92dc1138598194"
|
||||
version = "v1.2.0"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/pkg/errors"
|
||||
packages = ["."]
|
||||
revision = "645ef00459ed84a119197bfb8d8205042c6df63d"
|
||||
version = "v0.8.0"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/pmezard/go-difflib"
|
||||
packages = ["difflib"]
|
||||
revision = "792786c7400a136282c1664665ae0a8db921c6c2"
|
||||
version = "v1.0.0"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "github.com/prometheus/client_golang"
|
||||
packages = [
|
||||
"prometheus",
|
||||
"prometheus/promhttp"
|
||||
]
|
||||
revision = "d6a9817c4afc94d51115e4a30d449056a3fbf547"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "github.com/prometheus/client_model"
|
||||
packages = ["go"]
|
||||
revision = "99fa1f4be8e564e8a6b613da7fa6f46c9edafc6c"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "github.com/prometheus/common"
|
||||
packages = [
|
||||
"expfmt",
|
||||
"internal/bitbucket.org/ww/goautoneg",
|
||||
"model"
|
||||
]
|
||||
revision = "7600349dcfe1abd18d72d3a1770870d9800a7801"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "github.com/prometheus/procfs"
|
||||
packages = [
|
||||
".",
|
||||
"internal/util",
|
||||
"nfs",
|
||||
"xfs"
|
||||
]
|
||||
revision = "40f013a808ec4fa79def444a1a56de4d1727efcb"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "github.com/rcrowley/go-metrics"
|
||||
packages = ["."]
|
||||
revision = "e2704e165165ec55d062f5919b4b29494e9fa790"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/spf13/afero"
|
||||
packages = [
|
||||
".",
|
||||
"mem"
|
||||
]
|
||||
revision = "787d034dfe70e44075ccc060d346146ef53270ad"
|
||||
version = "v1.1.1"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/spf13/cast"
|
||||
packages = ["."]
|
||||
revision = "8965335b8c7107321228e3e3702cab9832751bac"
|
||||
version = "v1.2.0"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/spf13/cobra"
|
||||
packages = ["."]
|
||||
revision = "7b2c5ac9fc04fc5efafb60700713d4fa609b777b"
|
||||
version = "v0.0.1"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "github.com/spf13/jwalterweatherman"
|
||||
packages = ["."]
|
||||
revision = "7c0cea34c8ece3fbeb2b27ab9b59511d360fb394"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/spf13/pflag"
|
||||
packages = ["."]
|
||||
revision = "583c0c0531f06d5278b7d917446061adc344b5cd"
|
||||
version = "v1.0.1"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/spf13/viper"
|
||||
packages = ["."]
|
||||
revision = "25b30aa063fc18e48662b86996252eabdcf2f0c7"
|
||||
version = "v1.0.0"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/stretchr/testify"
|
||||
packages = [
|
||||
"assert",
|
||||
"require"
|
||||
]
|
||||
revision = "f35b8ab0b5a2cef36673838d662e249dd9c94686"
|
||||
version = "v1.2.2"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "github.com/syndtr/goleveldb"
|
||||
packages = [
|
||||
"leveldb",
|
||||
"leveldb/cache",
|
||||
"leveldb/comparer",
|
||||
"leveldb/errors",
|
||||
"leveldb/filter",
|
||||
"leveldb/iterator",
|
||||
"leveldb/journal",
|
||||
"leveldb/memdb",
|
||||
"leveldb/opt",
|
||||
"leveldb/storage",
|
||||
"leveldb/table",
|
||||
"leveldb/util"
|
||||
]
|
||||
revision = "e2150783cd35f5b607daca48afd8c57ec54cc995"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "github.com/tendermint/ed25519"
|
||||
packages = [
|
||||
".",
|
||||
"edwards25519",
|
||||
"extra25519"
|
||||
]
|
||||
revision = "d8387025d2b9d158cf4efb07e7ebf814bcce2057"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/tendermint/go-amino"
|
||||
packages = ["."]
|
||||
revision = "2106ca61d91029c931fd54968c2bb02dc96b1412"
|
||||
version = "0.10.1"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "golang.org/x/crypto"
|
||||
packages = [
|
||||
"bcrypt",
|
||||
"blowfish",
|
||||
"chacha20poly1305",
|
||||
"curve25519",
|
||||
"hkdf",
|
||||
"internal/chacha20",
|
||||
"internal/subtle",
|
||||
"nacl/box",
|
||||
"nacl/secretbox",
|
||||
"openpgp/armor",
|
||||
"openpgp/errors",
|
||||
"poly1305",
|
||||
"ripemd160",
|
||||
"salsa20/salsa"
|
||||
]
|
||||
revision = "a49355c7e3f8fe157a85be2f77e6e269a0f89602"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "golang.org/x/net"
|
||||
packages = [
|
||||
"context",
|
||||
"http/httpguts",
|
||||
"http2",
|
||||
"http2/hpack",
|
||||
"idna",
|
||||
"internal/timeseries",
|
||||
"netutil",
|
||||
"trace"
|
||||
]
|
||||
revision = "4cb1c02c05b0e749b0365f61ae859a8e0cfceed9"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "golang.org/x/sys"
|
||||
packages = [
|
||||
"cpu",
|
||||
"unix"
|
||||
]
|
||||
revision = "7138fd3d9dc8335c567ca206f4333fb75eb05d56"
|
||||
|
||||
[[projects]]
|
||||
name = "golang.org/x/text"
|
||||
packages = [
|
||||
"collate",
|
||||
"collate/build",
|
||||
"internal/colltab",
|
||||
"internal/gen",
|
||||
"internal/tag",
|
||||
"internal/triegen",
|
||||
"internal/ucd",
|
||||
"language",
|
||||
"secure/bidirule",
|
||||
"transform",
|
||||
"unicode/bidi",
|
||||
"unicode/cldr",
|
||||
"unicode/norm",
|
||||
"unicode/rangetable"
|
||||
]
|
||||
revision = "f21a4dfb5e38f5895301dc265a8def02365cc3d0"
|
||||
version = "v0.3.0"
|
||||
|
||||
[[projects]]
|
||||
name = "google.golang.org/genproto"
|
||||
packages = ["googleapis/rpc/status"]
|
||||
revision = "7fd901a49ba6a7f87732eb344f6e3c5b19d1b200"
|
||||
|
||||
[[projects]]
|
||||
name = "google.golang.org/grpc"
|
||||
packages = [
|
||||
".",
|
||||
"balancer",
|
||||
"balancer/base",
|
||||
"balancer/roundrobin",
|
||||
"codes",
|
||||
"connectivity",
|
||||
"credentials",
|
||||
"encoding",
|
||||
"encoding/proto",
|
||||
"grpclb/grpc_lb_v1/messages",
|
||||
"grpclog",
|
||||
"internal",
|
||||
"keepalive",
|
||||
"metadata",
|
||||
"naming",
|
||||
"peer",
|
||||
"resolver",
|
||||
"resolver/dns",
|
||||
"resolver/passthrough",
|
||||
"stats",
|
||||
"status",
|
||||
"tap",
|
||||
"transport"
|
||||
]
|
||||
revision = "d11072e7ca9811b1100b80ca0269ac831f06d024"
|
||||
version = "v1.11.3"
|
||||
|
||||
[[projects]]
|
||||
name = "gopkg.in/yaml.v2"
|
||||
packages = ["."]
|
||||
revision = "5420a8b6744d3b0345ab293f6fcba19c978f1183"
|
||||
version = "v2.2.1"
|
||||
|
||||
[solve-meta]
|
||||
analyzer-name = "dep"
|
||||
analyzer-version = 1
|
||||
inputs-digest = "6e854634d6c203278ce83bef7725cecbcf90023b0d0e440fb3374acedacbd5ad"
|
||||
solver-name = "gps-cdcl"
|
||||
solver-version = 1
|
95
Gopkg.toml
Normal file
95
Gopkg.toml
Normal file
@ -0,0 +1,95 @@
|
||||
# Gopkg.toml example
|
||||
#
|
||||
# Refer to https://github.com/golang/dep/blob/master/docs/Gopkg.toml.md
|
||||
# for detailed Gopkg.toml documentation.
|
||||
#
|
||||
# required = ["github.com/user/thing/cmd/thing"]
|
||||
# ignored = ["github.com/user/project/pkgX", "bitbucket.org/user/project/pkgA/pkgY"]
|
||||
#
|
||||
# [[constraint]]
|
||||
# name = "github.com/user/project"
|
||||
# version = "1.0.0"
|
||||
#
|
||||
# [[constraint]]
|
||||
# name = "github.com/user/project2"
|
||||
# branch = "dev"
|
||||
# source = "github.com/myfork/project2"
|
||||
#
|
||||
# [[override]]
|
||||
# name = "github.com/x/y"
|
||||
# version = "2.4.0"
|
||||
#
|
||||
# [prune]
|
||||
# non-go = false
|
||||
# go-tests = true
|
||||
# unused-packages = true
|
||||
|
||||
|
||||
[[constraint]]
|
||||
name = "github.com/ebuchman/fail-test"
|
||||
branch = "master"
|
||||
|
||||
[[constraint]]
|
||||
name = "github.com/fortytw2/leaktest"
|
||||
branch = "master"
|
||||
|
||||
[[constraint]]
|
||||
name = "github.com/go-kit/kit"
|
||||
version = "=0.6.0"
|
||||
|
||||
[[constraint]]
|
||||
name = "github.com/gogo/protobuf"
|
||||
version = "=1.0.0"
|
||||
|
||||
[[constraint]]
|
||||
name = "github.com/golang/protobuf"
|
||||
version = "=1.0.0"
|
||||
|
||||
[[constraint]]
|
||||
name = "github.com/gorilla/websocket"
|
||||
version = "~1.2.0"
|
||||
|
||||
[[constraint]]
|
||||
name = "github.com/pkg/errors"
|
||||
version = "=0.8.0"
|
||||
|
||||
[[constraint]]
|
||||
name = "github.com/rcrowley/go-metrics"
|
||||
branch = "master"
|
||||
|
||||
[[constraint]]
|
||||
name = "github.com/spf13/cobra"
|
||||
version = "=0.0.1"
|
||||
|
||||
[[constraint]]
|
||||
name = "github.com/spf13/viper"
|
||||
version = "=1.0.0"
|
||||
|
||||
[[constraint]]
|
||||
name = "github.com/stretchr/testify"
|
||||
version = "~1.2.1"
|
||||
|
||||
[[constraint]]
|
||||
name = "github.com/tendermint/go-amino"
|
||||
version = "~0.10.1"
|
||||
|
||||
[[constraint]]
|
||||
name = "google.golang.org/grpc"
|
||||
version = "~1.11.3"
|
||||
|
||||
# this got updated and broke, so locked to an old working commit ...
|
||||
[[override]]
|
||||
name = "google.golang.org/genproto"
|
||||
revision = "7fd901a49ba6a7f87732eb344f6e3c5b19d1b200"
|
||||
|
||||
[prune]
|
||||
go-tests = true
|
||||
unused-packages = true
|
||||
|
||||
[[constraint]]
|
||||
name = "github.com/prometheus/client_golang"
|
||||
branch = "master"
|
||||
|
||||
[[constraint]]
|
||||
branch = "master"
|
||||
name = "golang.org/x/net"
|
204
LICENSE
Normal file
204
LICENSE
Normal file
@ -0,0 +1,204 @@
|
||||
Tendermint Core
|
||||
License: Apache2.0
|
||||
|
||||
Apache License
|
||||
Version 2.0, January 2004
|
||||
http://www.apache.org/licenses/
|
||||
|
||||
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
|
||||
|
||||
1. Definitions.
|
||||
|
||||
"License" shall mean the terms and conditions for use, reproduction,
|
||||
and distribution as defined by Sections 1 through 9 of this document.
|
||||
|
||||
"Licensor" shall mean the copyright owner or entity authorized by
|
||||
the copyright owner that is granting the License.
|
||||
|
||||
"Legal Entity" shall mean the union of the acting entity and all
|
||||
other entities that control, are controlled by, or are under common
|
||||
control with that entity. For the purposes of this definition,
|
||||
"control" means (i) the power, direct or indirect, to cause the
|
||||
direction or management of such entity, whether by contract or
|
||||
otherwise, or (ii) ownership of fifty percent (50%) or more of the
|
||||
outstanding shares, or (iii) beneficial ownership of such entity.
|
||||
|
||||
"You" (or "Your") shall mean an individual or Legal Entity
|
||||
exercising permissions granted by this License.
|
||||
|
||||
"Source" form shall mean the preferred form for making modifications,
|
||||
including but not limited to software source code, documentation
|
||||
source, and configuration files.
|
||||
|
||||
"Object" form shall mean any form resulting from mechanical
|
||||
transformation or translation of a Source form, including but
|
||||
not limited to compiled object code, generated documentation,
|
||||
and conversions to other media types.
|
||||
|
||||
"Work" shall mean the work of authorship, whether in Source or
|
||||
Object form, made available under the License, as indicated by a
|
||||
copyright notice that is included in or attached to the work
|
||||
(an example is provided in the Appendix below).
|
||||
|
||||
"Derivative Works" shall mean any work, whether in Source or Object
|
||||
form, that is based on (or derived from) the Work and for which the
|
||||
editorial revisions, annotations, elaborations, or other modifications
|
||||
represent, as a whole, an original work of authorship. For the purposes
|
||||
of this License, Derivative Works shall not include works that remain
|
||||
separable from, or merely link (or bind by name) to the interfaces of,
|
||||
the Work and Derivative Works thereof.
|
||||
|
||||
"Contribution" shall mean any work of authorship, including
|
||||
the original version of the Work and any modifications or additions
|
||||
to that Work or Derivative Works thereof, that is intentionally
|
||||
submitted to Licensor for inclusion in the Work by the copyright owner
|
||||
or by an individual or Legal Entity authorized to submit on behalf of
|
||||
the copyright owner. For the purposes of this definition, "submitted"
|
||||
means any form of electronic, verbal, or written communication sent
|
||||
to the Licensor or its representatives, including but not limited to
|
||||
communication on electronic mailing lists, source code control systems,
|
||||
and issue tracking systems that are managed by, or on behalf of, the
|
||||
Licensor for the purpose of discussing and improving the Work, but
|
||||
excluding communication that is conspicuously marked or otherwise
|
||||
designated in writing by the copyright owner as "Not a Contribution."
|
||||
|
||||
"Contributor" shall mean Licensor and any individual or Legal Entity
|
||||
on behalf of whom a Contribution has been received by Licensor and
|
||||
subsequently incorporated within the Work.
|
||||
|
||||
2. Grant of Copyright License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
copyright license to reproduce, prepare Derivative Works of,
|
||||
publicly display, publicly perform, sublicense, and distribute the
|
||||
Work and such Derivative Works in Source or Object form.
|
||||
|
||||
3. Grant of Patent License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
(except as stated in this section) patent license to make, have made,
|
||||
use, offer to sell, sell, import, and otherwise transfer the Work,
|
||||
where such license applies only to those patent claims licensable
|
||||
by such Contributor that are necessarily infringed by their
|
||||
Contribution(s) alone or by combination of their Contribution(s)
|
||||
with the Work to which such Contribution(s) was submitted. If You
|
||||
institute patent litigation against any entity (including a
|
||||
cross-claim or counterclaim in a lawsuit) alleging that the Work
|
||||
or a Contribution incorporated within the Work constitutes direct
|
||||
or contributory patent infringement, then any patent licenses
|
||||
granted to You under this License for that Work shall terminate
|
||||
as of the date such litigation is filed.
|
||||
|
||||
4. Redistribution. You may reproduce and distribute copies of the
|
||||
Work or Derivative Works thereof in any medium, with or without
|
||||
modifications, and in Source or Object form, provided that You
|
||||
meet the following conditions:
|
||||
|
||||
(a) You must give any other recipients of the Work or
|
||||
Derivative Works a copy of this License; and
|
||||
|
||||
(b) You must cause any modified files to carry prominent notices
|
||||
stating that You changed the files; and
|
||||
|
||||
(c) You must retain, in the Source form of any Derivative Works
|
||||
that You distribute, all copyright, patent, trademark, and
|
||||
attribution notices from the Source form of the Work,
|
||||
excluding those notices that do not pertain to any part of
|
||||
the Derivative Works; and
|
||||
|
||||
(d) If the Work includes a "NOTICE" text file as part of its
|
||||
distribution, then any Derivative Works that You distribute must
|
||||
include a readable copy of the attribution notices contained
|
||||
within such NOTICE file, excluding those notices that do not
|
||||
pertain to any part of the Derivative Works, in at least one
|
||||
of the following places: within a NOTICE text file distributed
|
||||
as part of the Derivative Works; within the Source form or
|
||||
documentation, if provided along with the Derivative Works; or,
|
||||
within a display generated by the Derivative Works, if and
|
||||
wherever such third-party notices normally appear. The contents
|
||||
of the NOTICE file are for informational purposes only and
|
||||
do not modify the License. You may add Your own attribution
|
||||
notices within Derivative Works that You distribute, alongside
|
||||
or as an addendum to the NOTICE text from the Work, provided
|
||||
that such additional attribution notices cannot be construed
|
||||
as modifying the License.
|
||||
|
||||
You may add Your own copyright statement to Your modifications and
|
||||
may provide additional or different license terms and conditions
|
||||
for use, reproduction, or distribution of Your modifications, or
|
||||
for any such Derivative Works as a whole, provided Your use,
|
||||
reproduction, and distribution of the Work otherwise complies with
|
||||
the conditions stated in this License.
|
||||
|
||||
5. Submission of Contributions. Unless You explicitly state otherwise,
|
||||
any Contribution intentionally submitted for inclusion in the Work
|
||||
by You to the Licensor shall be under the terms and conditions of
|
||||
this License, without any additional terms or conditions.
|
||||
Notwithstanding the above, nothing herein shall supersede or modify
|
||||
the terms of any separate license agreement you may have executed
|
||||
with Licensor regarding such Contributions.
|
||||
|
||||
6. Trademarks. This License does not grant permission to use the trade
|
||||
names, trademarks, service marks, or product names of the Licensor,
|
||||
except as required for reasonable and customary use in describing the
|
||||
origin of the Work and reproducing the content of the NOTICE file.
|
||||
|
||||
7. Disclaimer of Warranty. Unless required by applicable law or
|
||||
agreed to in writing, Licensor provides the Work (and each
|
||||
Contributor provides its Contributions) on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
implied, including, without limitation, any warranties or conditions
|
||||
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
|
||||
PARTICULAR PURPOSE. You are solely responsible for determining the
|
||||
appropriateness of using or redistributing the Work and assume any
|
||||
risks associated with Your exercise of permissions under this License.
|
||||
|
||||
8. Limitation of Liability. In no event and under no legal theory,
|
||||
whether in tort (including negligence), contract, or otherwise,
|
||||
unless required by applicable law (such as deliberate and grossly
|
||||
negligent acts) or agreed to in writing, shall any Contributor be
|
||||
liable to You for damages, including any direct, indirect, special,
|
||||
incidental, or consequential damages of any character arising as a
|
||||
result of this License or out of the use or inability to use the
|
||||
Work (including but not limited to damages for loss of goodwill,
|
||||
work stoppage, computer failure or malfunction, or any and all
|
||||
other commercial damages or losses), even if such Contributor
|
||||
has been advised of the possibility of such damages.
|
||||
|
||||
9. Accepting Warranty or Additional Liability. While redistributing
|
||||
the Work or Derivative Works thereof, You may choose to offer,
|
||||
and charge a fee for, acceptance of support, warranty, indemnity,
|
||||
or other liability obligations and/or rights consistent with this
|
||||
License. However, in accepting such obligations, You may act only
|
||||
on Your own behalf and on Your sole responsibility, not on behalf
|
||||
of any other Contributor, and only if You agree to indemnify,
|
||||
defend, and hold each Contributor harmless for any liability
|
||||
incurred by, or claims asserted against, such Contributor by reason
|
||||
of your accepting any such warranty or additional liability.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
APPENDIX: How to apply the Apache License to your work.
|
||||
|
||||
To apply the Apache License to your work, attach the following
|
||||
boilerplate notice, with the fields enclosed by brackets "{}"
|
||||
replaced with your own identifying information. (Don't include
|
||||
the brackets!) The text should be enclosed in the appropriate
|
||||
comment syntax for the file format. We also recommend that a
|
||||
file or class name and description of purpose be included on the
|
||||
same "printed page" as the copyright notice for easier
|
||||
identification within third-party archives.
|
||||
|
||||
Copyright 2016 All in Bits, Inc
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
310
Makefile
Normal file
310
Makefile
Normal file
@ -0,0 +1,310 @@
|
||||
GOTOOLS = \
|
||||
github.com/mitchellh/gox \
|
||||
github.com/golang/dep/cmd/dep \
|
||||
gopkg.in/alecthomas/gometalinter.v2 \
|
||||
github.com/gogo/protobuf/protoc-gen-gogo \
|
||||
github.com/gogo/protobuf/gogoproto \
|
||||
github.com/square/certstrap
|
||||
PACKAGES=$(shell go list ./... | grep -v '/vendor/')
|
||||
INCLUDE = -I=. -I=${GOPATH}/src -I=${GOPATH}/src/github.com/gogo/protobuf/protobuf
|
||||
BUILD_TAGS?=tendermint
|
||||
BUILD_FLAGS = -ldflags "-X github.com/tendermint/tendermint/version.GitCommit=`git rev-parse --short=8 HEAD`"
|
||||
|
||||
all: check build test install
|
||||
|
||||
check: check_tools ensure_deps
|
||||
|
||||
|
||||
########################################
|
||||
### Build Tendermint
|
||||
|
||||
build:
|
||||
CGO_ENABLED=0 go build $(BUILD_FLAGS) -tags '$(BUILD_TAGS)' -o build/tendermint ./cmd/tendermint/
|
||||
|
||||
build_race:
|
||||
CGO_ENABLED=0 go build -race $(BUILD_FLAGS) -tags '$(BUILD_TAGS)' -o build/tendermint ./cmd/tendermint
|
||||
|
||||
install:
|
||||
CGO_ENABLED=0 go install $(BUILD_FLAGS) -tags '$(BUILD_TAGS)' ./cmd/tendermint
|
||||
|
||||
########################################
|
||||
### Build ABCI
|
||||
|
||||
protoc_abci:
|
||||
## If you get the following error,
|
||||
## "error while loading shared libraries: libprotobuf.so.14: cannot open shared object file: No such file or directory"
|
||||
## See https://stackoverflow.com/a/25518702
|
||||
protoc $(INCLUDE) --gogo_out=plugins=grpc:. abci/types/*.proto
|
||||
@echo "--> adding nolint declarations to protobuf generated files"
|
||||
@awk '/package types/ { print "//nolint: gas"; print; next }1' abci/types/types.pb.go > abci/types/types.pb.go.new
|
||||
@mv abci/types/types.pb.go.new abci/types/types.pb.go
|
||||
|
||||
build_abci:
|
||||
@go build -i ./abci/cmd/...
|
||||
|
||||
install_abci:
|
||||
@go install ./abci/cmd/...
|
||||
|
||||
########################################
|
||||
### Distribution
|
||||
|
||||
# dist builds binaries for all platforms and packages them for distribution
|
||||
# TODO add abci to these scripts
|
||||
dist:
|
||||
@BUILD_TAGS='$(BUILD_TAGS)' sh -c "'$(CURDIR)/scripts/dist.sh'"
|
||||
|
||||
########################################
|
||||
### Tools & dependencies
|
||||
|
||||
check_tools:
|
||||
@# https://stackoverflow.com/a/25668869
|
||||
@echo "Found tools: $(foreach tool,$(notdir $(GOTOOLS)),\
|
||||
$(if $(shell which $(tool)),$(tool),$(error "No $(tool) in PATH")))"
|
||||
|
||||
get_tools:
|
||||
@echo "--> Installing tools"
|
||||
go get -u -v $(GOTOOLS)
|
||||
@gometalinter.v2 --install
|
||||
|
||||
update_tools:
|
||||
@echo "--> Updating tools"
|
||||
@go get -u $(GOTOOLS)
|
||||
|
||||
#Run this from CI
|
||||
get_vendor_deps:
|
||||
@rm -rf vendor/
|
||||
@echo "--> Running dep"
|
||||
@dep ensure -vendor-only
|
||||
|
||||
|
||||
#Run this locally.
|
||||
ensure_deps:
|
||||
@rm -rf vendor/
|
||||
@echo "--> Running dep"
|
||||
@dep ensure
|
||||
|
||||
#For ABCI and libs
|
||||
get_protoc:
|
||||
@# https://github.com/google/protobuf/releases
|
||||
curl -L https://github.com/google/protobuf/releases/download/v3.4.1/protobuf-cpp-3.4.1.tar.gz | tar xvz && \
|
||||
cd protobuf-3.4.1 && \
|
||||
DIST_LANG=cpp ./configure && \
|
||||
make && \
|
||||
make install && \
|
||||
cd .. && \
|
||||
rm -rf protobuf-3.4.1
|
||||
|
||||
draw_deps:
|
||||
@# requires brew install graphviz or apt-get install graphviz
|
||||
go get github.com/RobotsAndPencils/goviz
|
||||
@goviz -i github.com/tendermint/tendermint/cmd/tendermint -d 3 | dot -Tpng -o dependency-graph.png
|
||||
|
||||
get_deps_bin_size:
|
||||
@# Copy of build recipe with additional flags to perform binary size analysis
|
||||
$(eval $(shell go build -work -a $(BUILD_FLAGS) -tags '$(BUILD_TAGS)' -o build/tendermint ./cmd/tendermint/ 2>&1))
|
||||
@find $(WORK) -type f -name "*.a" | xargs -I{} du -hxs "{}" | sort -rh | sed -e s:${WORK}/::g > deps_bin_size.log
|
||||
@echo "Results can be found here: $(CURDIR)/deps_bin_size.log"
|
||||
|
||||
########################################
|
||||
### Libs
|
||||
|
||||
protoc_libs:
|
||||
## If you get the following error,
|
||||
## "error while loading shared libraries: libprotobuf.so.14: cannot open shared object file: No such file or directory"
|
||||
## See https://stackoverflow.com/a/25518702
|
||||
protoc $(INCLUDE) --go_out=plugins=grpc:. libs/common/*.proto
|
||||
@echo "--> adding nolint declarations to protobuf generated files"
|
||||
@awk '/package common/ { print "//nolint: gas"; print; next }1' libs/common/types.pb.go > libs/common/types.pb.go.new
|
||||
@mv libs/common/types.pb.go.new libs/common/types.pb.go
|
||||
|
||||
gen_certs: clean_certs
|
||||
## Generating certificates for TLS testing...
|
||||
certstrap init --common-name "tendermint.com" --passphrase ""
|
||||
certstrap request-cert -ip "::" --passphrase ""
|
||||
certstrap sign "::" --CA "tendermint.com" --passphrase ""
|
||||
mv out/::.crt out/::.key db/remotedb
|
||||
|
||||
clean_certs:
|
||||
## Cleaning TLS testing certificates...
|
||||
rm -rf out
|
||||
rm -f db/remotedb/::.crt db/remotedb/::.key
|
||||
|
||||
test_libs: gen_certs
|
||||
GOCACHE=off go test -tags gcc $(shell go list ./... | grep -v vendor)
|
||||
make clean_certs
|
||||
|
||||
grpc_dbserver:
|
||||
protoc -I db/remotedb/proto/ db/remotedb/proto/defs.proto --go_out=plugins=grpc:db/remotedb/proto
|
||||
|
||||
########################################
|
||||
### Testing
|
||||
|
||||
## required to be run first by most tests
|
||||
build_docker_test_image:
|
||||
docker build -t tester -f ./test/docker/Dockerfile .
|
||||
|
||||
### coverage, app, persistence, and libs tests
|
||||
test_cover:
|
||||
# run the go unit tests with coverage
|
||||
bash test/test_cover.sh
|
||||
|
||||
test_apps:
|
||||
# run the app tests using bash
|
||||
# requires `abci-cli` and `tendermint` binaries installed
|
||||
bash test/app/test.sh
|
||||
|
||||
test_abci_apps:
|
||||
bash abci/tests/test_app/test.sh
|
||||
|
||||
test_abci_cli:
|
||||
# test the cli against the examples in the tutorial at:
|
||||
# ./docs/abci-cli.md
|
||||
# if test fails, update the docs ^
|
||||
@ bash abci/tests/test_cli/test.sh
|
||||
|
||||
test_persistence:
|
||||
# run the persistence tests using bash
|
||||
# requires `abci-cli` installed
|
||||
docker run --name run_persistence -t tester bash test/persist/test_failure_indices.sh
|
||||
|
||||
# TODO undockerize
|
||||
# bash test/persist/test_failure_indices.sh
|
||||
|
||||
test_p2p:
|
||||
docker rm -f rsyslog || true
|
||||
rm -rf test/logs || true
|
||||
mkdir test/logs
|
||||
cd test/
|
||||
docker run -d -v "logs:/var/log/" -p 127.0.0.1:5514:514/udp --name rsyslog voxxit/rsyslog
|
||||
cd ..
|
||||
# requires 'tester' the image from above
|
||||
bash test/p2p/test.sh tester
|
||||
|
||||
test_integrations:
|
||||
make build_docker_test_image
|
||||
make get_tools
|
||||
make get_vendor_deps
|
||||
make install
|
||||
make test_cover
|
||||
make test_apps
|
||||
make test_abci_apps
|
||||
make test_abci_cli
|
||||
make test_libs
|
||||
make test_persistence
|
||||
make test_p2p
|
||||
|
||||
test_release:
|
||||
@go test -tags release $(PACKAGES)
|
||||
|
||||
test100:
|
||||
@for i in {1..100}; do make test; done
|
||||
|
||||
vagrant_test:
|
||||
vagrant up
|
||||
vagrant ssh -c 'make test_integrations'
|
||||
|
||||
### go tests
|
||||
test:
|
||||
@echo "--> Running go test"
|
||||
@go test $(PACKAGES)
|
||||
|
||||
test_race:
|
||||
@echo "--> Running go test --race"
|
||||
@go test -v -race $(PACKAGES)
|
||||
|
||||
|
||||
########################################
|
||||
### Formatting, linting, and vetting
|
||||
|
||||
fmt:
|
||||
@go fmt ./...
|
||||
|
||||
metalinter:
|
||||
@echo "--> Running linter"
|
||||
@gometalinter.v2 --vendor --deadline=600s --disable-all \
|
||||
--enable=deadcode \
|
||||
--enable=gosimple \
|
||||
--enable=misspell \
|
||||
--enable=safesql \
|
||||
./...
|
||||
#--enable=gas \
|
||||
#--enable=maligned \
|
||||
#--enable=dupl \
|
||||
#--enable=errcheck \
|
||||
#--enable=goconst \
|
||||
#--enable=gocyclo \
|
||||
#--enable=goimports \
|
||||
#--enable=golint \ <== comments on anything exported
|
||||
#--enable=gotype \
|
||||
#--enable=ineffassign \
|
||||
#--enable=interfacer \
|
||||
#--enable=megacheck \
|
||||
#--enable=staticcheck \
|
||||
#--enable=structcheck \
|
||||
#--enable=unconvert \
|
||||
#--enable=unparam \
|
||||
#--enable=unused \
|
||||
#--enable=varcheck \
|
||||
#--enable=vet \
|
||||
#--enable=vetshadow \
|
||||
|
||||
metalinter_all:
|
||||
@echo "--> Running linter (all)"
|
||||
gometalinter.v2 --vendor --deadline=600s --enable-all --disable=lll ./...
|
||||
|
||||
###########################################################
|
||||
### Docker image
|
||||
|
||||
build-docker:
|
||||
cp build/tendermint DOCKER/tendermint
|
||||
docker build --label=tendermint --tag="tendermint/tendermint" DOCKER
|
||||
rm -rf DOCKER/tendermint
|
||||
|
||||
###########################################################
|
||||
### Local testnet using docker
|
||||
|
||||
# Build linux binary on other platforms
|
||||
build-linux:
|
||||
GOOS=linux GOARCH=amd64 $(MAKE) build
|
||||
|
||||
build-docker-localnode:
|
||||
cd networks/local
|
||||
make
|
||||
|
||||
# Run a 4-node testnet locally
|
||||
localnet-start: localnet-stop
|
||||
@if ! [ -f build/node0/config/genesis.json ]; then docker run --rm -v $(CURDIR)/build:/tendermint:Z tendermint/localnode testnet --v 4 --o . --populate-persistent-peers --starting-ip-address 192.167.10.2 ; fi
|
||||
docker-compose up
|
||||
|
||||
# Stop testnet
|
||||
localnet-stop:
|
||||
docker-compose down
|
||||
|
||||
###########################################################
|
||||
### Remote full-nodes (sentry) using terraform and ansible
|
||||
|
||||
# Server management
|
||||
sentry-start:
|
||||
@if [ -z "$(DO_API_TOKEN)" ]; then echo "DO_API_TOKEN environment variable not set." ; false ; fi
|
||||
@if ! [ -f $(HOME)/.ssh/id_rsa.pub ]; then ssh-keygen ; fi
|
||||
cd networks/remote/terraform && terraform init && terraform apply -var DO_API_TOKEN="$(DO_API_TOKEN)" -var SSH_KEY_FILE="$(HOME)/.ssh/id_rsa.pub"
|
||||
@if ! [ -f $(CURDIR)/build/node0/config/genesis.json ]; then docker run --rm -v $(CURDIR)/build:/tendermint:Z tendermint/localnode testnet --v 0 --n 4 --o . ; fi
|
||||
cd networks/remote/ansible && ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i inventory/digital_ocean.py -l sentrynet install.yml
|
||||
@echo "Next step: Add your validator setup in the genesis.json and config.tml files and run \"make sentry-config\". (Public key of validator, chain ID, peer IP and node ID.)"
|
||||
|
||||
# Configuration management
|
||||
sentry-config:
|
||||
cd networks/remote/ansible && ansible-playbook -i inventory/digital_ocean.py -l sentrynet config.yml -e BINARY=$(CURDIR)/build/tendermint -e CONFIGDIR=$(CURDIR)/build
|
||||
|
||||
sentry-stop:
|
||||
@if [ -z "$(DO_API_TOKEN)" ]; then echo "DO_API_TOKEN environment variable not set." ; false ; fi
|
||||
cd networks/remote/terraform && terraform destroy -var DO_API_TOKEN="$(DO_API_TOKEN)" -var SSH_KEY_FILE="$(HOME)/.ssh/id_rsa.pub"
|
||||
|
||||
# meant for the CI, inspect script & adapt accordingly
|
||||
build-slate:
|
||||
bash scripts/slate.sh
|
||||
|
||||
# To avoid unintended conflicts with file names, always add to .PHONY
|
||||
# unless there is a reason not to.
|
||||
# https://www.gnu.org/software/make/manual/html_node/Phony-Targets.html
|
||||
.PHONY: check build build_race build_abci dist install install_abci check_tools get_tools update_tools get_vendor_deps draw_deps get_protoc protoc_abci protoc_libs gen_certs clean_certs grpc_dbserver test_cover test_apps test_persistence test_p2p test test_race test_integrations test_release test100 vagrant_test fmt build-linux localnet-start localnet-stop build-docker build-docker-localnode sentry-start sentry-config sentry-stop build-slate
|
135
README.md
Normal file
135
README.md
Normal file
@ -0,0 +1,135 @@
|
||||
# Tendermint
|
||||
|
||||
[Byzantine-Fault Tolerant](https://en.wikipedia.org/wiki/Byzantine_fault_tolerance)
|
||||
[State Machine Replication](https://en.wikipedia.org/wiki/State_machine_replication).
|
||||
Or [Blockchain](https://en.wikipedia.org/wiki/Blockchain_(database)) for short.
|
||||
|
||||
[](https://github.com/tendermint/tendermint/releases/latest)
|
||||
[](https://godoc.org/github.com/tendermint/tendermint)
|
||||
[](https://github.com/moovweb/gvm)
|
||||
[](https://riot.im/app/#/room/#tendermint:matrix.org)
|
||||
[](https://github.com/tendermint/tendermint/blob/master/LICENSE)
|
||||
[](https://github.com/tendermint/tendermint)
|
||||
|
||||
|
||||
Branch | Tests | Coverage
|
||||
----------|-------|----------
|
||||
master | [](https://circleci.com/gh/tendermint/tendermint/tree/master) | [](https://codecov.io/gh/tendermint/tendermint)
|
||||
develop | [](https://circleci.com/gh/tendermint/tendermint/tree/develop) | [](https://codecov.io/gh/tendermint/tendermint)
|
||||
|
||||
Tendermint Core is Byzantine Fault Tolerant (BFT) middleware that takes a state transition machine - written in any programming language -
|
||||
and securely replicates it on many machines.
|
||||
|
||||
For protocol details, see [the specification](/docs/spec).
|
||||
|
||||
## A Note on Production Readiness
|
||||
|
||||
While Tendermint is being used in production in private, permissioned
|
||||
environments, we are still working actively to harden and audit it in preparation
|
||||
for use in public blockchains, such as the [Cosmos Network](https://cosmos.network/).
|
||||
We are also still making breaking changes to the protocol and the APIs.
|
||||
Thus we tag the releases as *alpha software*.
|
||||
|
||||
In any case, if you intend to run Tendermint in production,
|
||||
please [contact us](https://riot.im/app/#/room/#tendermint:matrix.org) :)
|
||||
|
||||
## Security
|
||||
|
||||
To report a security vulnerability, see our [bug bounty
|
||||
program](https://tendermint.com/security).
|
||||
|
||||
For examples of the kinds of bugs we're looking for, see [SECURITY.md](SECURITY.md)
|
||||
|
||||
## Minimum requirements
|
||||
|
||||
Requirement|Notes
|
||||
---|---
|
||||
Go version | Go1.9 or higher
|
||||
|
||||
## Install
|
||||
|
||||
See the [install instructions](/docs/install.md)
|
||||
|
||||
## Quick Start
|
||||
|
||||
- [Single node](/docs/using-tendermint.md)
|
||||
- [Local cluster using docker-compose](/networks/local)
|
||||
- [Remote cluster using terraform and ansible](/docs/terraform-and-ansible.md)
|
||||
- [Join the public testnet](https://cosmos.network/testnet)
|
||||
|
||||
## Resources
|
||||
|
||||
### Tendermint Core
|
||||
|
||||
For details about the blockchain data structures and the p2p protocols, see the
[Tendermint specification](/docs/spec).
|
||||
|
||||
For details on using the software, [Read The Docs](https://tendermint.readthedocs.io/en/master/).
|
||||
Additional information about some (and eventually all) of the sub-projects below can be found at Read The Docs.
|
||||
|
||||
|
||||
### Sub-projects
|
||||
|
||||
* [Amino](http://github.com/tendermint/go-amino), a reflection-based improvement on proto3
|
||||
* [IAVL](http://github.com/tendermint/iavl), Merkleized IAVL+ Tree implementation
|
||||
|
||||
### Tools
|
||||
* [Deployment, Benchmarking, and Monitoring](http://tendermint.readthedocs.io/projects/tools/en/develop/index.html#tendermint-tools)
|
||||
|
||||
### Applications
|
||||
|
||||
* [Cosmos SDK](http://github.com/cosmos/cosmos-sdk); a cryptocurrency application framework
|
||||
* [Ethermint](http://github.com/tendermint/ethermint); Ethereum on Tendermint
|
||||
* [Many more](https://tendermint.readthedocs.io/en/master/ecosystem.html#abci-applications)
|
||||
|
||||
### More
|
||||
|
||||
* [Master's Thesis on Tendermint](https://atrium.lib.uoguelph.ca/xmlui/handle/10214/9769)
|
||||
* [Original Whitepaper](https://tendermint.com/static/docs/tendermint.pdf)
|
||||
* [Tendermint Blog](https://blog.cosmos.network/tendermint/home)
|
||||
* [Cosmos Blog](https://blog.cosmos.network)
|
||||
|
||||
## Contributing
|
||||
|
||||
Yay open source! Please see our [contributing guidelines](CONTRIBUTING.md).
|
||||
|
||||
## Versioning
|
||||
|
||||
### SemVer
|
||||
|
||||
Tendermint uses [SemVer](http://semver.org/) to determine when and how the version changes.
|
||||
According to SemVer, anything in the public API can change at any time before version 1.0.0.
|
||||
|
||||
To provide some stability to Tendermint users in these 0.X.X days, the MINOR version is used
|
||||
to signal breaking changes across a subset of the total public API. This subset includes all
|
||||
interfaces exposed to other processes (cli, rpc, p2p, etc.), but does not
|
||||
include the in-process Go APIs.
|
||||
|
||||
That said, breaking changes in the following packages will be documented in the
|
||||
CHANGELOG even if they don't lead to MINOR version bumps:
|
||||
|
||||
- types
|
||||
- rpc/client
|
||||
- config
|
||||
- node
|
||||
|
||||
Exported objects in these packages that are not covered by the versioning scheme
|
||||
are explicitly marked by `// UNSTABLE` in their go doc comment and may change at any
|
||||
time without notice. Functions, types, and values in any other package may also change at any time.
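
For illustration only (the identifiers below are invented and do not exist in the repository), the marker sits in the go doc comment of the exported object:

```
package example

// BlockInterval is covered by the versioning guarantees described above.
const BlockInterval = 1

// PeerScore is exempt from those guarantees and may change or disappear
// in any release.
// UNSTABLE
type PeerScore struct {
	ID    string
	Score float64
}
```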
|
||||
|
||||
### Upgrades
|
||||
|
||||
In an effort to avoid accumulating technical debt prior to 1.0.0,
|
||||
we do not guarantee that breaking changes (ie. bumps in the MINOR version)
|
||||
will work with existing tendermint blockchains. In these cases you will
|
||||
have to start a new blockchain, or write something custom to get the old
|
||||
data into the new chain.
|
||||
|
||||
However, any bump in the PATCH version should be compatible with existing histories
|
||||
(if not please open an [issue](https://github.com/tendermint/tendermint/issues)).
|
||||
|
||||
## Code of Conduct
|
||||
|
||||
Please read, understand and adhere to our [code of conduct](CODE_OF_CONDUCT.md).
|
23
ROADMAP.md
Normal file
@ -0,0 +1,23 @@
# Roadmap

BREAKING CHANGES:
- Better support for injecting randomness
- Upgrade consensus for more real-time use of evidence

FEATURES:
- Use the chain as its own CA for nodes and validators
- Tooling to run multiple blockchains/apps, possibly in a single process
- State syncing (without transaction replay)
- Add authentication and rate-limiting to the RPC

IMPROVEMENTS:
- Improve subtleties around mempool caching and logic
- Consensus optimizations:
  - cache block parts for faster agreement after round changes
  - propagate block parts rarest first
- Better testing of the consensus state machine (ie. use a DSL)
- Auto compiled serialization/deserialization code instead of go-wire reflection

BUG FIXES:
- Graceful handling/recovery for apps that have non-determinism or fail to halt
- Graceful handling/recovery for violations of safety or liveness
71
SECURITY.md
Normal file
@ -0,0 +1,71 @@
# Security

As part of our [Coordinated Vulnerability Disclosure
Policy](https://tendermint.com/security), we operate a bug bounty.
See the policy for more details on submissions and rewards.

Here is a list of examples of the kinds of bugs we're most interested in:

## Specification

- Conceptual flaws
- Ambiguities, inconsistencies, or incorrect statements
- Mismatch between specification and implementation of any component

## Consensus

Assuming less than 1/3 of the voting power is Byzantine (malicious):

- Validation of blockchain data structures, including blocks, block parts,
  votes, and so on
- Execution of blocks
- Validator set changes
- Proposer round robin
- Two nodes committing conflicting blocks for the same height (safety failure)
- A correct node signing conflicting votes
- A node halting (liveness failure)
- Syncing new and old nodes

## Networking

- Authenticated encryption (MITM, information leakage)
- Eclipse attacks
- Sybil attacks
- Long-range attacks
- Denial-of-Service

## RPC

- Write-access to anything besides sending transactions
- Denial-of-Service
- Leakage of secrets

## Denial-of-Service

Attacks may come through the P2P network or the RPC:

- Amplification attacks
- Resource abuse
- Deadlocks and race conditions
- Panics and unhandled errors

## Libraries

- Serialization (Amino)
- Reading/Writing files and databases
- Logging and monitoring

## Cryptography

- Elliptic curves for validator signatures
- Hash algorithms and Merkle trees for block validation
- Authenticated encryption for P2P connections

## Light Client

- Validation of blockchain data structures
- Correctly validating an incorrect proof
- Incorrectly validating a correct proof
- Syncing validator set changes
58
Vagrantfile
vendored
Normal file
58
Vagrantfile
vendored
Normal file
@ -0,0 +1,58 @@
|
||||
# -*- mode: ruby -*-
|
||||
# vi: set ft=ruby :
|
||||
|
||||
Vagrant.configure("2") do |config|
|
||||
config.vm.box = "ubuntu/xenial64"
|
||||
|
||||
config.vm.provider "virtualbox" do |v|
|
||||
v.memory = 4096
|
||||
v.cpus = 2
|
||||
end
|
||||
|
||||
config.vm.provision "shell", inline: <<-SHELL
|
||||
apt-get update
|
||||
|
||||
# install base requirements
|
||||
apt-get install -y --no-install-recommends wget curl jq zip \
|
||||
make shellcheck bsdmainutils psmisc
|
||||
apt-get install -y language-pack-en
|
||||
|
||||
# install docker
|
||||
apt-get install -y --no-install-recommends apt-transport-https \
|
||||
ca-certificates curl software-properties-common
|
||||
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
|
||||
add-apt-repository \
|
||||
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
|
||||
$(lsb_release -cs) \
|
||||
stable"
|
||||
apt-get install -y docker-ce
|
||||
usermod -a -G docker vagrant
|
||||
|
||||
# install go
|
||||
wget -q https://dl.google.com/go/go1.10.1.linux-amd64.tar.gz
|
||||
tar -xvf go1.10.1.linux-amd64.tar.gz
|
||||
mv go /usr/local
|
||||
rm -f go1.10.1.linux-amd64.tar.gz
|
||||
|
||||
# cleanup
|
||||
apt-get autoremove -y
|
||||
|
||||
# set env variables
|
||||
echo 'export GOROOT=/usr/local/go' >> /home/vagrant/.bash_profile
|
||||
echo 'export GOPATH=/home/vagrant/go' >> /home/vagrant/.bash_profile
|
||||
echo 'export PATH=$PATH:$GOROOT/bin:$GOPATH/bin' >> /home/vagrant/.bash_profile
|
||||
echo 'export LC_ALL=en_US.UTF-8' >> /home/vagrant/.bash_profile
|
||||
echo 'cd go/src/github.com/tendermint/tendermint' >> /home/vagrant/.bash_profile
|
||||
|
||||
mkdir -p /home/vagrant/go/bin
|
||||
mkdir -p /home/vagrant/go/src/github.com/tendermint
|
||||
ln -s /vagrant /home/vagrant/go/src/github.com/tendermint/tendermint
|
||||
|
||||
chown -R vagrant:vagrant /home/vagrant/go
|
||||
chown vagrant:vagrant /home/vagrant/.bash_profile
|
||||
|
||||
# get all deps and tools, ready to install/test
|
||||
su - vagrant -c 'source /home/vagrant/.bash_profile'
|
||||
su - vagrant -c 'cd /home/vagrant/go/src/github.com/tendermint/tendermint && make get_tools && make get_vendor_deps'
|
||||
SHELL
|
||||
end
|
168
abci/README.md
Normal file
168
abci/README.md
Normal file
@ -0,0 +1,168 @@
|
||||
# Application BlockChain Interface (ABCI)
|
||||
|
||||
[](https://circleci.com/gh/tendermint/abci)
|
||||
|
||||
Blockchains are systems for multi-master state machine replication.
|
||||
**ABCI** is an interface that defines the boundary between the replication engine (the blockchain),
|
||||
and the state machine (the application).
|
||||
Using a socket protocol, a consensus engine running in one process
|
||||
can manage an application state running in another.
|
||||
|
||||
Previously, the ABCI was referred to as TMSP.
|
||||
|
||||
The community has provided a number of additional implementations; see the [Tendermint Ecosystem](https://tendermint.com/ecosystem).
|
||||
|
||||
## Specification
|
||||
|
||||
A detailed description of the ABCI methods and message types is contained in:
|
||||
|
||||
- [A prose specification](specification.md)
|
||||
- [A protobuf file](https://github.com/tendermint/abci/blob/master/types/types.proto)
|
||||
- [A Go interface](https://github.com/tendermint/abci/blob/master/types/application.go).
|
||||
|
||||
For more background information on ABCI, motivations, and Tendermint, please visit [the documentation](http://tendermint.readthedocs.io/en/master/).
|
||||
The two guides to focus on are the `Application Development Guide` and `Using ABCI-CLI`.
|
||||
|
||||
|
||||
## Protocol Buffers
|
||||
|
||||
To compile the protobuf file, run:
|
||||
|
||||
```
|
||||
make protoc
|
||||
```
|
||||
|
||||
See `protoc --help` and [the Protocol Buffers site](https://developers.google.com/protocol-buffers)
|
||||
for details on compiling for other languages. Note we also include a [GRPC](http://www.grpc.io/docs)
|
||||
service definition.
|
||||
|
||||
## Install ABCI-CLI
|
||||
|
||||
The `abci-cli` is a simple tool for debugging ABCI servers and running some
|
||||
example apps. To install it:
|
||||
|
||||
```
|
||||
go get github.com/tendermint/abci
|
||||
cd $GOPATH/src/github.com/tendermint/abci
|
||||
make get_vendor_deps
|
||||
make install
|
||||
```
|
||||
|
||||
## Implementation
|
||||
|
||||
We provide three implementations of the ABCI in Go:
|
||||
|
||||
- Golang in-process
|
||||
- ABCI-socket
|
||||
- GRPC
|
||||
|
||||
Note the GRPC version is maintained primarily to simplify onboarding and prototyping and is not receiving the same
|
||||
attention to security and performance as the others.
|
||||
|
||||
### In Process
|
||||
|
||||
The simplest implementation just uses function calls within Go.
|
||||
This means ABCI applications written in Golang can be compiled with Tendermint Core and run as a single binary.
|
||||
|
||||
See the [examples](#examples) below for more information.
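For illustration, here is a minimal sketch of this flavor, assuming the example `kvstore` application and the `NewLocalClient` constructor that appear later in this changeset (the surrounding program is a placeholder):

```golang
package main

import (
	"fmt"
	"sync"

	abcicli "github.com/tendermint/tendermint/abci/client"
	"github.com/tendermint/tendermint/abci/example/kvstore"
)

func main() {
	// Wrap the example application in the in-process (local) client;
	// calls go straight into the app, with no socket or serialization.
	app := kvstore.NewKVStoreApplication()
	client := abcicli.NewLocalClient(new(sync.Mutex), app)
	if err := client.Start(); err != nil {
		panic(err)
	}
	defer client.Stop()

	res, err := client.EchoSync("hello")
	if err != nil {
		panic(err)
	}
	fmt.Println(res.Message) // "hello"
}
```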
|
||||
|
||||
### Socket (TSP)
|
||||
|
||||
ABCI is best implemented as a streaming protocol.
|
||||
The socket implementation provides for asynchronous, ordered message passing over Unix or TCP sockets.
|
||||
Messages are serialized using Protobuf3 and length-prefixed with a [signed Varint](https://developers.google.com/protocol-buffers/docs/encoding?csw=1#signed-integers).
|
||||
|
||||
For example, if the Protobuf3 encoded ABCI message is `0xDEADBEEF` (4 bytes), the length-prefixed message is `0x08DEADBEEF`, since `0x08` is the signed varint
|
||||
encoding of `4`. If the Protobuf3 encoded ABCI message is 65535 bytes long, the length-prefixed message would begin with `0xFEFF07...`, since `0xFEFF07` is the signed varint encoding of `65535`.
|
||||
|
||||
Note the benefit of using this `varint` encoding over the old version (where integers were encoded as `<len of len><big endian len>`) is that
|
||||
it is the standard way to encode integers in Protobuf. It is also generally shorter.
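To make the framing concrete, here is a small self-contained sketch; it leans on Go's `encoding/binary`, which implements the same zigzag signed varint, and the helper name is purely illustrative:

```golang
package main

import (
	"encoding/binary"
	"fmt"
)

// lengthPrefix prepends the signed-varint length of msg, mirroring the
// framing applied to each Protobuf3-encoded ABCI message on the socket.
func lengthPrefix(msg []byte) []byte {
	buf := make([]byte, binary.MaxVarintLen64)
	n := binary.PutVarint(buf, int64(len(msg))) // zigzag signed varint
	return append(buf[:n], msg...)
}

func main() {
	msg := []byte{0xDE, 0xAD, 0xBE, 0xEF}
	fmt.Printf("%X\n", lengthPrefix(msg)) // prints 08DEADBEEF
}
```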
|
||||
|
||||
### GRPC
|
||||
|
||||
GRPC is an RPC framework native to Protocol Buffers with support in many languages.
|
||||
Implementing the ABCI using GRPC can allow for faster prototyping, but is expected to be much slower than
|
||||
the ordered, asynchronous socket protocol. The implementation has also not received as much testing or review.
|
||||
|
||||
Note the length-prefixing used in the socket implementation does not apply for GRPC.
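As a hedged sketch, switching between the two remote transports is just a constructor argument to `NewClient` from `abci/client` (included below in this changeset); the address here is a placeholder:

```golang
package main

import (
	abcicli "github.com/tendermint/tendermint/abci/client"
)

func main() {
	// transport is "socket" or "grpc"; mustConnect=false keeps retrying
	// the dial instead of failing fast.
	client, err := abcicli.NewClient("tcp://127.0.0.1:26658", "grpc", false)
	if err != nil {
		panic(err)
	}
	if err := client.Start(); err != nil {
		panic(err)
	}
	defer client.Stop()
}
```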
|
||||
|
||||
## Usage
|
||||
|
||||
The `abci-cli` tool wraps an ABCI client and can be used for probing/testing an ABCI server.
|
||||
For instance, `abci-cli test` will run a test sequence against a listening server running the Counter application (see below).
|
||||
It can also be used to run some example applications.
|
||||
See [the documentation](http://tendermint.readthedocs.io/en/master/) for more details.
|
||||
|
||||
### Examples
|
||||
|
||||
Check out the variety of example applications in the [example directory](example/).
|
||||
It also contains the code referred to by the `counter` and `kvstore` apps; these apps come
|
||||
built into the `abci-cli` binary.
|
||||
|
||||
#### Counter
|
||||
|
||||
The `abci-cli counter` application illustrates nonce checking in transactions. Its code looks like:
|
||||
|
||||
```golang
|
||||
func cmdCounter(cmd *cobra.Command, args []string) error {
|
||||
|
||||
app := counter.NewCounterApplication(flagSerial)
|
||||
|
||||
logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))
|
||||
|
||||
// Start the listener
|
||||
srv, err := server.NewServer(flagAddrC, flagAbci, app)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
srv.SetLogger(logger.With("module", "abci-server"))
|
||||
if err := srv.Start(); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Wait forever
|
||||
cmn.TrapSignal(func() {
|
||||
// Cleanup
|
||||
srv.Stop()
|
||||
})
|
||||
return nil
|
||||
}
|
||||
```
|
||||
|
||||
and can be found in [this file](cmd/abci-cli/abci-cli.go).
|
||||
|
||||
#### kvstore
|
||||
|
||||
The `abci-cli kvstore` application illustrates a simple key-value Merkle tree:
|
||||
|
||||
```golang
|
||||
func cmdKVStore(cmd *cobra.Command, args []string) error {
|
||||
logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))
|
||||
|
||||
// Create the application - in memory or persisted to disk
|
||||
var app types.Application
|
||||
if flagPersist == "" {
|
||||
app = kvstore.NewKVStoreApplication()
|
||||
} else {
|
||||
app = kvstore.NewPersistentKVStoreApplication(flagPersist)
|
||||
app.(*kvstore.PersistentKVStoreApplication).SetLogger(logger.With("module", "kvstore"))
|
||||
}
|
||||
|
||||
// Start the listener
|
||||
srv, err := server.NewServer(flagAddrD, flagAbci, app)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
srv.SetLogger(logger.With("module", "abci-server"))
|
||||
if err := srv.Start(); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Wait forever
|
||||
cmn.TrapSignal(func() {
|
||||
// Cleanup
|
||||
srv.Stop()
|
||||
})
|
||||
return nil
|
||||
}
|
||||
```
|
129
abci/client/client.go
Normal file
@ -0,0 +1,129 @@
|
||||
package abcicli
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"sync"
|
||||
|
||||
"github.com/tendermint/tendermint/abci/types"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
)
|
||||
|
||||
const (
|
||||
dialRetryIntervalSeconds = 3
|
||||
echoRetryIntervalSeconds = 1
|
||||
)
|
||||
|
||||
// Client defines an interface for an ABCI client.
|
||||
// All `Async` methods return a `ReqRes` object.
|
||||
// All `Sync` methods return the appropriate protobuf ResponseXxx struct and an error.
|
||||
// Note these are client errors, eg. ABCI socket connectivity issues.
|
||||
// Application-related errors are reflected in response via ABCI error codes and logs.
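// For example, with the socket client a CheckTxSync call returns a non-nil
// error only if the request could not be delivered (e.g. the connection died);
// an application-level rejection still returns err == nil together with a
// non-zero ResponseCheckTx.Code.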
|
||||
type Client interface {
|
||||
cmn.Service
|
||||
|
||||
SetResponseCallback(Callback)
|
||||
Error() error
|
||||
|
||||
FlushAsync() *ReqRes
|
||||
EchoAsync(msg string) *ReqRes
|
||||
InfoAsync(types.RequestInfo) *ReqRes
|
||||
SetOptionAsync(types.RequestSetOption) *ReqRes
|
||||
DeliverTxAsync(tx []byte) *ReqRes
|
||||
CheckTxAsync(tx []byte) *ReqRes
|
||||
QueryAsync(types.RequestQuery) *ReqRes
|
||||
CommitAsync() *ReqRes
|
||||
InitChainAsync(types.RequestInitChain) *ReqRes
|
||||
BeginBlockAsync(types.RequestBeginBlock) *ReqRes
|
||||
EndBlockAsync(types.RequestEndBlock) *ReqRes
|
||||
|
||||
FlushSync() error
|
||||
EchoSync(msg string) (*types.ResponseEcho, error)
|
||||
InfoSync(types.RequestInfo) (*types.ResponseInfo, error)
|
||||
SetOptionSync(types.RequestSetOption) (*types.ResponseSetOption, error)
|
||||
DeliverTxSync(tx []byte) (*types.ResponseDeliverTx, error)
|
||||
CheckTxSync(tx []byte) (*types.ResponseCheckTx, error)
|
||||
QuerySync(types.RequestQuery) (*types.ResponseQuery, error)
|
||||
CommitSync() (*types.ResponseCommit, error)
|
||||
InitChainSync(types.RequestInitChain) (*types.ResponseInitChain, error)
|
||||
BeginBlockSync(types.RequestBeginBlock) (*types.ResponseBeginBlock, error)
|
||||
EndBlockSync(types.RequestEndBlock) (*types.ResponseEndBlock, error)
|
||||
}
|
||||
|
||||
//----------------------------------------
|
||||
|
||||
// NewClient returns a new ABCI client of the specified transport type.
|
||||
// It returns an error if the transport is not "socket" or "grpc"
|
||||
func NewClient(addr, transport string, mustConnect bool) (client Client, err error) {
|
||||
switch transport {
|
||||
case "socket":
|
||||
client = NewSocketClient(addr, mustConnect)
|
||||
case "grpc":
|
||||
client = NewGRPCClient(addr, mustConnect)
|
||||
default:
|
||||
err = fmt.Errorf("Unknown abci transport %s", transport)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
//----------------------------------------
|
||||
|
||||
type Callback func(*types.Request, *types.Response)
|
||||
|
||||
//----------------------------------------
|
||||
|
||||
type ReqRes struct {
|
||||
*types.Request
|
||||
*sync.WaitGroup
|
||||
*types.Response // Not set atomically, so be sure to use WaitGroup.
|
||||
|
||||
mtx sync.Mutex
|
||||
done bool // Gets set to true once *after* WaitGroup.Done().
|
||||
cb func(*types.Response) // A single callback that may be set.
|
||||
}
|
||||
|
||||
func NewReqRes(req *types.Request) *ReqRes {
|
||||
return &ReqRes{
|
||||
Request: req,
|
||||
WaitGroup: waitGroup1(),
|
||||
Response: nil,
|
||||
|
||||
done: false,
|
||||
cb: nil,
|
||||
}
|
||||
}
|
||||
|
||||
// Sets the callback for this ReqRes atomically.
|
||||
// If reqRes is already done, calls cb immediately.
|
||||
// NOTE: reqRes.cb should not change if reqRes.done.
|
||||
// NOTE: only one callback is supported.
|
||||
func (reqRes *ReqRes) SetCallback(cb func(res *types.Response)) {
|
||||
reqRes.mtx.Lock()
|
||||
|
||||
if reqRes.done {
|
||||
reqRes.mtx.Unlock()
|
||||
cb(reqRes.Response)
|
||||
return
|
||||
}
|
||||
|
||||
defer reqRes.mtx.Unlock()
|
||||
reqRes.cb = cb
|
||||
}
|
||||
|
||||
func (reqRes *ReqRes) GetCallback() func(*types.Response) {
|
||||
reqRes.mtx.Lock()
|
||||
defer reqRes.mtx.Unlock()
|
||||
return reqRes.cb
|
||||
}
|
||||
|
||||
// NOTE: it should be safe to read reqRes.cb without locks after this.
|
||||
func (reqRes *ReqRes) SetDone() {
|
||||
reqRes.mtx.Lock()
|
||||
reqRes.done = true
|
||||
reqRes.mtx.Unlock()
|
||||
}
|
||||
|
||||
func waitGroup1() (wg *sync.WaitGroup) {
|
||||
wg = &sync.WaitGroup{}
|
||||
wg.Add(1)
|
||||
return
|
||||
}
|
301
abci/client/grpc_client.go
Normal file
@ -0,0 +1,301 @@
|
||||
package abcicli
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"net"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
context "golang.org/x/net/context"
|
||||
grpc "google.golang.org/grpc"
|
||||
|
||||
"github.com/tendermint/tendermint/abci/types"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
)
|
||||
|
||||
var _ Client = (*grpcClient)(nil)
|
||||
|
||||
// A stripped copy of the remoteClient that makes
|
||||
// synchronous calls using grpc
|
||||
type grpcClient struct {
|
||||
cmn.BaseService
|
||||
mustConnect bool
|
||||
|
||||
client types.ABCIApplicationClient
|
||||
|
||||
mtx sync.Mutex
|
||||
addr string
|
||||
err error
|
||||
resCb func(*types.Request, *types.Response) // listens to all callbacks
|
||||
}
|
||||
|
||||
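// NewGRPCClient returns a Client that makes synchronous gRPC calls to the
// server at addr. If mustConnect is false, it keeps retrying the dial on start.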
func NewGRPCClient(addr string, mustConnect bool) *grpcClient {
|
||||
cli := &grpcClient{
|
||||
addr: addr,
|
||||
mustConnect: mustConnect,
|
||||
}
|
||||
cli.BaseService = *cmn.NewBaseService(nil, "grpcClient", cli)
|
||||
return cli
|
||||
}
|
||||
|
||||
func dialerFunc(addr string, timeout time.Duration) (net.Conn, error) {
|
||||
return cmn.Connect(addr)
|
||||
}
|
||||
|
||||
func (cli *grpcClient) OnStart() error {
|
||||
if err := cli.BaseService.OnStart(); err != nil {
|
||||
return err
|
||||
}
|
||||
RETRY_LOOP:
|
||||
for {
|
||||
conn, err := grpc.Dial(cli.addr, grpc.WithInsecure(), grpc.WithDialer(dialerFunc))
|
||||
if err != nil {
|
||||
if cli.mustConnect {
|
||||
return err
|
||||
}
|
||||
cli.Logger.Error(fmt.Sprintf("abci.grpcClient failed to connect to %v. Retrying...\n", cli.addr))
|
||||
time.Sleep(time.Second * dialRetryIntervalSeconds)
|
||||
continue RETRY_LOOP
|
||||
}
|
||||
|
||||
cli.Logger.Info("Dialed server. Waiting for echo.", "addr", cli.addr)
|
||||
client := types.NewABCIApplicationClient(conn)
|
||||
|
||||
ENSURE_CONNECTED:
|
||||
for {
|
||||
_, err := client.Echo(context.Background(), &types.RequestEcho{"hello"}, grpc.FailFast(true))
|
||||
if err == nil {
|
||||
break ENSURE_CONNECTED
|
||||
}
|
||||
cli.Logger.Error("Echo failed", "err", err)
|
||||
time.Sleep(time.Second * echoRetryIntervalSeconds)
|
||||
}
|
||||
|
||||
cli.client = client
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
func (cli *grpcClient) OnStop() {
|
||||
cli.BaseService.OnStop()
|
||||
cli.mtx.Lock()
|
||||
defer cli.mtx.Unlock()
|
||||
// TODO: how to close conn? its not a net.Conn and grpc doesn't expose a Close()
|
||||
/*if cli.client.conn != nil {
|
||||
cli.client.conn.Close()
|
||||
}*/
|
||||
}
|
||||
|
||||
func (cli *grpcClient) StopForError(err error) {
|
||||
cli.mtx.Lock()
|
||||
if !cli.IsRunning() {
|
||||
cli.mtx.Unlock()
return
|
||||
}
|
||||
|
||||
if cli.err == nil {
|
||||
cli.err = err
|
||||
}
|
||||
cli.mtx.Unlock()
|
||||
|
||||
cli.Logger.Error(fmt.Sprintf("Stopping abci.grpcClient for error: %v", err.Error()))
|
||||
cli.Stop()
|
||||
}
|
||||
|
||||
func (cli *grpcClient) Error() error {
|
||||
cli.mtx.Lock()
|
||||
defer cli.mtx.Unlock()
|
||||
return cli.err
|
||||
}
|
||||
|
||||
// Set listener for all responses
|
||||
// NOTE: callback may get internally generated flush responses.
|
||||
func (cli *grpcClient) SetResponseCallback(resCb Callback) {
|
||||
cli.mtx.Lock()
|
||||
defer cli.mtx.Unlock()
|
||||
cli.resCb = resCb
|
||||
}
|
||||
|
||||
//----------------------------------------
|
||||
// GRPC calls are synchronous, but some callbacks expect to be called asynchronously
|
||||
// (eg. the mempool expects to be able to lock to remove bad txs from cache).
|
||||
// To accommodate, we finish each call in its own go-routine,
|
||||
// which is expensive, but easy - if you want something better, use the socket protocol!
|
||||
// maybe one day, if people really want it, we use grpc streams,
|
||||
// but hopefully not :D
|
||||
|
||||
func (cli *grpcClient) EchoAsync(msg string) *ReqRes {
|
||||
req := types.ToRequestEcho(msg)
|
||||
res, err := cli.client.Echo(context.Background(), req.GetEcho(), grpc.FailFast(true))
|
||||
if err != nil {
|
||||
cli.StopForError(err)
|
||||
}
|
||||
return cli.finishAsyncCall(req, &types.Response{&types.Response_Echo{res}})
|
||||
}
|
||||
|
||||
func (cli *grpcClient) FlushAsync() *ReqRes {
|
||||
req := types.ToRequestFlush()
|
||||
res, err := cli.client.Flush(context.Background(), req.GetFlush(), grpc.FailFast(true))
|
||||
if err != nil {
|
||||
cli.StopForError(err)
|
||||
}
|
||||
return cli.finishAsyncCall(req, &types.Response{&types.Response_Flush{res}})
|
||||
}
|
||||
|
||||
func (cli *grpcClient) InfoAsync(params types.RequestInfo) *ReqRes {
|
||||
req := types.ToRequestInfo(params)
|
||||
res, err := cli.client.Info(context.Background(), req.GetInfo(), grpc.FailFast(true))
|
||||
if err != nil {
|
||||
cli.StopForError(err)
|
||||
}
|
||||
return cli.finishAsyncCall(req, &types.Response{&types.Response_Info{res}})
|
||||
}
|
||||
|
||||
func (cli *grpcClient) SetOptionAsync(params types.RequestSetOption) *ReqRes {
|
||||
req := types.ToRequestSetOption(params)
|
||||
res, err := cli.client.SetOption(context.Background(), req.GetSetOption(), grpc.FailFast(true))
|
||||
if err != nil {
|
||||
cli.StopForError(err)
|
||||
}
|
||||
return cli.finishAsyncCall(req, &types.Response{&types.Response_SetOption{res}})
|
||||
}
|
||||
|
||||
func (cli *grpcClient) DeliverTxAsync(tx []byte) *ReqRes {
|
||||
req := types.ToRequestDeliverTx(tx)
|
||||
res, err := cli.client.DeliverTx(context.Background(), req.GetDeliverTx(), grpc.FailFast(true))
|
||||
if err != nil {
|
||||
cli.StopForError(err)
|
||||
}
|
||||
return cli.finishAsyncCall(req, &types.Response{&types.Response_DeliverTx{res}})
|
||||
}
|
||||
|
||||
func (cli *grpcClient) CheckTxAsync(tx []byte) *ReqRes {
|
||||
req := types.ToRequestCheckTx(tx)
|
||||
res, err := cli.client.CheckTx(context.Background(), req.GetCheckTx(), grpc.FailFast(true))
|
||||
if err != nil {
|
||||
cli.StopForError(err)
|
||||
}
|
||||
return cli.finishAsyncCall(req, &types.Response{&types.Response_CheckTx{res}})
|
||||
}
|
||||
|
||||
func (cli *grpcClient) QueryAsync(params types.RequestQuery) *ReqRes {
|
||||
req := types.ToRequestQuery(params)
|
||||
res, err := cli.client.Query(context.Background(), req.GetQuery(), grpc.FailFast(true))
|
||||
if err != nil {
|
||||
cli.StopForError(err)
|
||||
}
|
||||
return cli.finishAsyncCall(req, &types.Response{&types.Response_Query{res}})
|
||||
}
|
||||
|
||||
func (cli *grpcClient) CommitAsync() *ReqRes {
|
||||
req := types.ToRequestCommit()
|
||||
res, err := cli.client.Commit(context.Background(), req.GetCommit(), grpc.FailFast(true))
|
||||
if err != nil {
|
||||
cli.StopForError(err)
|
||||
}
|
||||
return cli.finishAsyncCall(req, &types.Response{&types.Response_Commit{res}})
|
||||
}
|
||||
|
||||
func (cli *grpcClient) InitChainAsync(params types.RequestInitChain) *ReqRes {
|
||||
req := types.ToRequestInitChain(params)
|
||||
res, err := cli.client.InitChain(context.Background(), req.GetInitChain(), grpc.FailFast(true))
|
||||
if err != nil {
|
||||
cli.StopForError(err)
|
||||
}
|
||||
return cli.finishAsyncCall(req, &types.Response{&types.Response_InitChain{res}})
|
||||
}
|
||||
|
||||
func (cli *grpcClient) BeginBlockAsync(params types.RequestBeginBlock) *ReqRes {
|
||||
req := types.ToRequestBeginBlock(params)
|
||||
res, err := cli.client.BeginBlock(context.Background(), req.GetBeginBlock(), grpc.FailFast(true))
|
||||
if err != nil {
|
||||
cli.StopForError(err)
|
||||
}
|
||||
return cli.finishAsyncCall(req, &types.Response{&types.Response_BeginBlock{res}})
|
||||
}
|
||||
|
||||
func (cli *grpcClient) EndBlockAsync(params types.RequestEndBlock) *ReqRes {
|
||||
req := types.ToRequestEndBlock(params)
|
||||
res, err := cli.client.EndBlock(context.Background(), req.GetEndBlock(), grpc.FailFast(true))
|
||||
if err != nil {
|
||||
cli.StopForError(err)
|
||||
}
|
||||
return cli.finishAsyncCall(req, &types.Response{&types.Response_EndBlock{res}})
|
||||
}
|
||||
|
||||
func (cli *grpcClient) finishAsyncCall(req *types.Request, res *types.Response) *ReqRes {
|
||||
reqres := NewReqRes(req)
|
||||
reqres.Response = res // Set response
|
||||
reqres.Done() // Release waiters
|
||||
reqres.SetDone() // so reqRes.SetCallback will run the callback
|
||||
|
||||
// go routine for callbacks
|
||||
go func() {
|
||||
// Notify reqRes listener if set
|
||||
if cb := reqres.GetCallback(); cb != nil {
|
||||
cb(res)
|
||||
}
|
||||
|
||||
// Notify client listener if set
|
||||
if cli.resCb != nil {
|
||||
cli.resCb(reqres.Request, res)
|
||||
}
|
||||
}()
|
||||
return reqres
|
||||
}
|
||||
|
||||
//----------------------------------------
|
||||
|
||||
func (cli *grpcClient) FlushSync() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (cli *grpcClient) EchoSync(msg string) (*types.ResponseEcho, error) {
|
||||
reqres := cli.EchoAsync(msg)
|
||||
// StopForError should already have been called if error is set
|
||||
return reqres.Response.GetEcho(), cli.Error()
|
||||
}
|
||||
|
||||
func (cli *grpcClient) InfoSync(req types.RequestInfo) (*types.ResponseInfo, error) {
|
||||
reqres := cli.InfoAsync(req)
|
||||
return reqres.Response.GetInfo(), cli.Error()
|
||||
}
|
||||
|
||||
func (cli *grpcClient) SetOptionSync(req types.RequestSetOption) (*types.ResponseSetOption, error) {
|
||||
reqres := cli.SetOptionAsync(req)
|
||||
return reqres.Response.GetSetOption(), cli.Error()
|
||||
}
|
||||
|
||||
func (cli *grpcClient) DeliverTxSync(tx []byte) (*types.ResponseDeliverTx, error) {
|
||||
reqres := cli.DeliverTxAsync(tx)
|
||||
return reqres.Response.GetDeliverTx(), cli.Error()
|
||||
}
|
||||
|
||||
func (cli *grpcClient) CheckTxSync(tx []byte) (*types.ResponseCheckTx, error) {
|
||||
reqres := cli.CheckTxAsync(tx)
|
||||
return reqres.Response.GetCheckTx(), cli.Error()
|
||||
}
|
||||
|
||||
func (cli *grpcClient) QuerySync(req types.RequestQuery) (*types.ResponseQuery, error) {
|
||||
reqres := cli.QueryAsync(req)
|
||||
return reqres.Response.GetQuery(), cli.Error()
|
||||
}
|
||||
|
||||
func (cli *grpcClient) CommitSync() (*types.ResponseCommit, error) {
|
||||
reqres := cli.CommitAsync()
|
||||
return reqres.Response.GetCommit(), cli.Error()
|
||||
}
|
||||
|
||||
func (cli *grpcClient) InitChainSync(params types.RequestInitChain) (*types.ResponseInitChain, error) {
|
||||
reqres := cli.InitChainAsync(params)
|
||||
return reqres.Response.GetInitChain(), cli.Error()
|
||||
}
|
||||
|
||||
func (cli *grpcClient) BeginBlockSync(params types.RequestBeginBlock) (*types.ResponseBeginBlock, error) {
|
||||
reqres := cli.BeginBlockAsync(params)
|
||||
return reqres.Response.GetBeginBlock(), cli.Error()
|
||||
}
|
||||
|
||||
func (cli *grpcClient) EndBlockSync(params types.RequestEndBlock) (*types.ResponseEndBlock, error) {
|
||||
reqres := cli.EndBlockAsync(params)
|
||||
return reqres.Response.GetEndBlock(), cli.Error()
|
||||
}
|
230
abci/client/local_client.go
Normal file
@ -0,0 +1,230 @@
|
||||
package abcicli
|
||||
|
||||
import (
|
||||
"sync"
|
||||
|
||||
types "github.com/tendermint/tendermint/abci/types"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
)
|
||||
|
||||
var _ Client = (*localClient)(nil)
|
||||
|
||||
type localClient struct {
|
||||
cmn.BaseService
|
||||
mtx *sync.Mutex
|
||||
types.Application
|
||||
Callback
|
||||
}
|
||||
|
||||
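// NewLocalClient returns a Client that invokes methods on app directly,
// in-process, guarding calls with mtx (a new mutex is allocated if mtx is nil).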
func NewLocalClient(mtx *sync.Mutex, app types.Application) *localClient {
|
||||
if mtx == nil {
|
||||
mtx = new(sync.Mutex)
|
||||
}
|
||||
cli := &localClient{
|
||||
mtx: mtx,
|
||||
Application: app,
|
||||
}
|
||||
cli.BaseService = *cmn.NewBaseService(nil, "localClient", cli)
|
||||
return cli
|
||||
}
|
||||
|
||||
func (app *localClient) SetResponseCallback(cb Callback) {
|
||||
app.mtx.Lock()
|
||||
defer app.mtx.Unlock()
|
||||
app.Callback = cb
|
||||
}
|
||||
|
||||
// TODO: change types.Application to include Error()?
|
||||
func (app *localClient) Error() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (app *localClient) FlushAsync() *ReqRes {
|
||||
// Do nothing
|
||||
return newLocalReqRes(types.ToRequestFlush(), nil)
|
||||
}
|
||||
|
||||
func (app *localClient) EchoAsync(msg string) *ReqRes {
|
||||
return app.callback(
|
||||
types.ToRequestEcho(msg),
|
||||
types.ToResponseEcho(msg),
|
||||
)
|
||||
}
|
||||
|
||||
func (app *localClient) InfoAsync(req types.RequestInfo) *ReqRes {
|
||||
app.mtx.Lock()
|
||||
res := app.Application.Info(req)
|
||||
app.mtx.Unlock()
|
||||
return app.callback(
|
||||
types.ToRequestInfo(req),
|
||||
types.ToResponseInfo(res),
|
||||
)
|
||||
}
|
||||
|
||||
func (app *localClient) SetOptionAsync(req types.RequestSetOption) *ReqRes {
|
||||
app.mtx.Lock()
|
||||
res := app.Application.SetOption(req)
|
||||
app.mtx.Unlock()
|
||||
return app.callback(
|
||||
types.ToRequestSetOption(req),
|
||||
types.ToResponseSetOption(res),
|
||||
)
|
||||
}
|
||||
|
||||
func (app *localClient) DeliverTxAsync(tx []byte) *ReqRes {
|
||||
app.mtx.Lock()
|
||||
res := app.Application.DeliverTx(tx)
|
||||
app.mtx.Unlock()
|
||||
return app.callback(
|
||||
types.ToRequestDeliverTx(tx),
|
||||
types.ToResponseDeliverTx(res),
|
||||
)
|
||||
}
|
||||
|
||||
func (app *localClient) CheckTxAsync(tx []byte) *ReqRes {
|
||||
app.mtx.Lock()
|
||||
res := app.Application.CheckTx(tx)
|
||||
app.mtx.Unlock()
|
||||
return app.callback(
|
||||
types.ToRequestCheckTx(tx),
|
||||
types.ToResponseCheckTx(res),
|
||||
)
|
||||
}
|
||||
|
||||
func (app *localClient) QueryAsync(req types.RequestQuery) *ReqRes {
|
||||
app.mtx.Lock()
|
||||
res := app.Application.Query(req)
|
||||
app.mtx.Unlock()
|
||||
return app.callback(
|
||||
types.ToRequestQuery(req),
|
||||
types.ToResponseQuery(res),
|
||||
)
|
||||
}
|
||||
|
||||
func (app *localClient) CommitAsync() *ReqRes {
|
||||
app.mtx.Lock()
|
||||
res := app.Application.Commit()
|
||||
app.mtx.Unlock()
|
||||
return app.callback(
|
||||
types.ToRequestCommit(),
|
||||
types.ToResponseCommit(res),
|
||||
)
|
||||
}
|
||||
|
||||
func (app *localClient) InitChainAsync(req types.RequestInitChain) *ReqRes {
|
||||
app.mtx.Lock()
|
||||
res := app.Application.InitChain(req)
|
||||
reqRes := app.callback(
|
||||
types.ToRequestInitChain(req),
|
||||
types.ToResponseInitChain(res),
|
||||
)
|
||||
app.mtx.Unlock()
|
||||
return reqRes
|
||||
}
|
||||
|
||||
func (app *localClient) BeginBlockAsync(req types.RequestBeginBlock) *ReqRes {
|
||||
app.mtx.Lock()
|
||||
res := app.Application.BeginBlock(req)
|
||||
app.mtx.Unlock()
|
||||
return app.callback(
|
||||
types.ToRequestBeginBlock(req),
|
||||
types.ToResponseBeginBlock(res),
|
||||
)
|
||||
}
|
||||
|
||||
func (app *localClient) EndBlockAsync(req types.RequestEndBlock) *ReqRes {
|
||||
app.mtx.Lock()
|
||||
res := app.Application.EndBlock(req)
|
||||
app.mtx.Unlock()
|
||||
return app.callback(
|
||||
types.ToRequestEndBlock(req),
|
||||
types.ToResponseEndBlock(res),
|
||||
)
|
||||
}
|
||||
|
||||
//-------------------------------------------------------
|
||||
|
||||
func (app *localClient) FlushSync() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (app *localClient) EchoSync(msg string) (*types.ResponseEcho, error) {
|
||||
return &types.ResponseEcho{msg}, nil
|
||||
}
|
||||
|
||||
func (app *localClient) InfoSync(req types.RequestInfo) (*types.ResponseInfo, error) {
|
||||
app.mtx.Lock()
|
||||
res := app.Application.Info(req)
|
||||
app.mtx.Unlock()
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *localClient) SetOptionSync(req types.RequestSetOption) (*types.ResponseSetOption, error) {
|
||||
app.mtx.Lock()
|
||||
res := app.Application.SetOption(req)
|
||||
app.mtx.Unlock()
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *localClient) DeliverTxSync(tx []byte) (*types.ResponseDeliverTx, error) {
|
||||
app.mtx.Lock()
|
||||
res := app.Application.DeliverTx(tx)
|
||||
app.mtx.Unlock()
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *localClient) CheckTxSync(tx []byte) (*types.ResponseCheckTx, error) {
|
||||
app.mtx.Lock()
|
||||
res := app.Application.CheckTx(tx)
|
||||
app.mtx.Unlock()
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *localClient) QuerySync(req types.RequestQuery) (*types.ResponseQuery, error) {
|
||||
app.mtx.Lock()
|
||||
res := app.Application.Query(req)
|
||||
app.mtx.Unlock()
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *localClient) CommitSync() (*types.ResponseCommit, error) {
|
||||
app.mtx.Lock()
|
||||
res := app.Application.Commit()
|
||||
app.mtx.Unlock()
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *localClient) InitChainSync(req types.RequestInitChain) (*types.ResponseInitChain, error) {
|
||||
app.mtx.Lock()
|
||||
res := app.Application.InitChain(req)
|
||||
app.mtx.Unlock()
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *localClient) BeginBlockSync(req types.RequestBeginBlock) (*types.ResponseBeginBlock, error) {
|
||||
app.mtx.Lock()
|
||||
res := app.Application.BeginBlock(req)
|
||||
app.mtx.Unlock()
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *localClient) EndBlockSync(req types.RequestEndBlock) (*types.ResponseEndBlock, error) {
|
||||
app.mtx.Lock()
|
||||
res := app.Application.EndBlock(req)
|
||||
app.mtx.Unlock()
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
//-------------------------------------------------------
|
||||
|
||||
func (app *localClient) callback(req *types.Request, res *types.Response) *ReqRes {
|
||||
app.Callback(req, res)
|
||||
return newLocalReqRes(req, res)
|
||||
}
|
||||
|
||||
func newLocalReqRes(req *types.Request, res *types.Response) *ReqRes {
|
||||
reqRes := NewReqRes(req)
|
||||
reqRes.Response = res
|
||||
reqRes.SetDone()
|
||||
return reqRes
|
||||
}
|
399
abci/client/socket_client.go
Normal file
@ -0,0 +1,399 @@
|
||||
package abcicli
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"container/list"
|
||||
"errors"
|
||||
"fmt"
|
||||
"net"
|
||||
"reflect"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/tendermint/tendermint/abci/types"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
)
|
||||
|
||||
const reqQueueSize = 256 // TODO make configurable
|
||||
// const maxResponseSize = 1048576 // 1MB TODO make configurable
|
||||
const flushThrottleMS = 20 // Don't wait longer than...
|
||||
|
||||
var _ Client = (*socketClient)(nil)
|
||||
|
||||
// This is goroutine-safe, but users should beware that
|
||||
// the application in general is not meant to be interfaced
|
||||
// with concurrent callers.
|
||||
type socketClient struct {
|
||||
cmn.BaseService
|
||||
|
||||
reqQueue chan *ReqRes
|
||||
flushTimer *cmn.ThrottleTimer
|
||||
mustConnect bool
|
||||
|
||||
mtx sync.Mutex
|
||||
addr string
|
||||
conn net.Conn
|
||||
err error
|
||||
reqSent *list.List
|
||||
resCb func(*types.Request, *types.Response) // listens to all callbacks
|
||||
|
||||
}
|
||||
|
||||
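// NewSocketClient returns a Client that talks to an ABCI server over a unix or
// tcp socket at addr. If mustConnect is false, it keeps retrying the dial on start.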
func NewSocketClient(addr string, mustConnect bool) *socketClient {
|
||||
cli := &socketClient{
|
||||
reqQueue: make(chan *ReqRes, reqQueueSize),
|
||||
flushTimer: cmn.NewThrottleTimer("socketClient", flushThrottleMS),
|
||||
mustConnect: mustConnect,
|
||||
|
||||
addr: addr,
|
||||
reqSent: list.New(),
|
||||
resCb: nil,
|
||||
}
|
||||
cli.BaseService = *cmn.NewBaseService(nil, "socketClient", cli)
|
||||
return cli
|
||||
}
|
||||
|
||||
func (cli *socketClient) OnStart() error {
|
||||
if err := cli.BaseService.OnStart(); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
var err error
|
||||
var conn net.Conn
|
||||
RETRY_LOOP:
|
||||
for {
|
||||
conn, err = cmn.Connect(cli.addr)
|
||||
if err != nil {
|
||||
if cli.mustConnect {
|
||||
return err
|
||||
}
|
||||
cli.Logger.Error(fmt.Sprintf("abci.socketClient failed to connect to %v. Retrying...", cli.addr))
|
||||
time.Sleep(time.Second * dialRetryIntervalSeconds)
|
||||
continue RETRY_LOOP
|
||||
}
|
||||
cli.conn = conn
|
||||
|
||||
go cli.sendRequestsRoutine(conn)
|
||||
go cli.recvResponseRoutine(conn)
|
||||
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
func (cli *socketClient) OnStop() {
|
||||
cli.BaseService.OnStop()
|
||||
|
||||
cli.mtx.Lock()
|
||||
defer cli.mtx.Unlock()
|
||||
if cli.conn != nil {
|
||||
cli.conn.Close()
|
||||
}
|
||||
|
||||
cli.flushQueue()
|
||||
}
|
||||
|
||||
// Stop the client and set the error
|
||||
func (cli *socketClient) StopForError(err error) {
|
||||
if !cli.IsRunning() {
|
||||
return
|
||||
}
|
||||
|
||||
cli.mtx.Lock()
|
||||
if cli.err == nil {
|
||||
cli.err = err
|
||||
}
|
||||
cli.mtx.Unlock()
|
||||
|
||||
cli.Logger.Error(fmt.Sprintf("Stopping abci.socketClient for error: %v", err.Error()))
|
||||
cli.Stop()
|
||||
}
|
||||
|
||||
func (cli *socketClient) Error() error {
|
||||
cli.mtx.Lock()
|
||||
defer cli.mtx.Unlock()
|
||||
return cli.err
|
||||
}
|
||||
|
||||
// Set listener for all responses
|
||||
// NOTE: callback may get internally generated flush responses.
|
||||
func (cli *socketClient) SetResponseCallback(resCb Callback) {
|
||||
cli.mtx.Lock()
|
||||
defer cli.mtx.Unlock()
|
||||
cli.resCb = resCb
|
||||
}
|
||||
|
||||
//----------------------------------------
|
||||
|
||||
func (cli *socketClient) sendRequestsRoutine(conn net.Conn) {
|
||||
|
||||
w := bufio.NewWriter(conn)
|
||||
for {
|
||||
select {
|
||||
case <-cli.flushTimer.Ch:
|
||||
select {
|
||||
case cli.reqQueue <- NewReqRes(types.ToRequestFlush()):
|
||||
default:
|
||||
// Probably will fill the buffer, or retry later.
|
||||
}
|
||||
case <-cli.Quit():
|
||||
return
|
||||
case reqres := <-cli.reqQueue:
|
||||
cli.willSendReq(reqres)
|
||||
err := types.WriteMessage(reqres.Request, w)
|
||||
if err != nil {
|
||||
cli.StopForError(fmt.Errorf("Error writing msg: %v", err))
|
||||
return
|
||||
}
|
||||
// cli.Logger.Debug("Sent request", "requestType", reflect.TypeOf(reqres.Request), "request", reqres.Request)
|
||||
if _, ok := reqres.Request.Value.(*types.Request_Flush); ok {
|
||||
err = w.Flush()
|
||||
if err != nil {
|
||||
cli.StopForError(fmt.Errorf("Error flushing writer: %v", err))
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (cli *socketClient) recvResponseRoutine(conn net.Conn) {
|
||||
|
||||
r := bufio.NewReader(conn) // Buffer reads
|
||||
for {
|
||||
var res = &types.Response{}
|
||||
err := types.ReadMessage(r, res)
|
||||
if err != nil {
|
||||
cli.StopForError(err)
|
||||
return
|
||||
}
|
||||
switch r := res.Value.(type) {
|
||||
case *types.Response_Exception:
|
||||
// XXX After setting cli.err, release waiters (e.g. reqres.Done())
|
||||
cli.StopForError(errors.New(r.Exception.Error))
|
||||
return
|
||||
default:
|
||||
// cli.Logger.Debug("Received response", "responseType", reflect.TypeOf(res), "response", res)
|
||||
err := cli.didRecvResponse(res)
|
||||
if err != nil {
|
||||
cli.StopForError(err)
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (cli *socketClient) willSendReq(reqres *ReqRes) {
|
||||
cli.mtx.Lock()
|
||||
defer cli.mtx.Unlock()
|
||||
cli.reqSent.PushBack(reqres)
|
||||
}
|
||||
|
||||
func (cli *socketClient) didRecvResponse(res *types.Response) error {
|
||||
cli.mtx.Lock()
|
||||
defer cli.mtx.Unlock()
|
||||
|
||||
// Get the first ReqRes
|
||||
next := cli.reqSent.Front()
|
||||
if next == nil {
|
||||
return fmt.Errorf("Unexpected result type %v when nothing expected", reflect.TypeOf(res.Value))
|
||||
}
|
||||
reqres := next.Value.(*ReqRes)
|
||||
if !resMatchesReq(reqres.Request, res) {
|
||||
return fmt.Errorf("Unexpected result type %v when response to %v expected",
|
||||
reflect.TypeOf(res.Value), reflect.TypeOf(reqres.Request.Value))
|
||||
}
|
||||
|
||||
reqres.Response = res // Set response
|
||||
reqres.Done() // Release waiters
|
||||
cli.reqSent.Remove(next) // Pop first item from linked list
|
||||
|
||||
// Notify reqRes listener if set
|
||||
if cb := reqres.GetCallback(); cb != nil {
|
||||
cb(res)
|
||||
}
|
||||
|
||||
// Notify client listener if set
|
||||
if cli.resCb != nil {
|
||||
cli.resCb(reqres.Request, res)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
//----------------------------------------
|
||||
|
||||
func (cli *socketClient) EchoAsync(msg string) *ReqRes {
|
||||
return cli.queueRequest(types.ToRequestEcho(msg))
|
||||
}
|
||||
|
||||
func (cli *socketClient) FlushAsync() *ReqRes {
|
||||
return cli.queueRequest(types.ToRequestFlush())
|
||||
}
|
||||
|
||||
func (cli *socketClient) InfoAsync(req types.RequestInfo) *ReqRes {
|
||||
return cli.queueRequest(types.ToRequestInfo(req))
|
||||
}
|
||||
|
||||
func (cli *socketClient) SetOptionAsync(req types.RequestSetOption) *ReqRes {
|
||||
return cli.queueRequest(types.ToRequestSetOption(req))
|
||||
}
|
||||
|
||||
func (cli *socketClient) DeliverTxAsync(tx []byte) *ReqRes {
|
||||
return cli.queueRequest(types.ToRequestDeliverTx(tx))
|
||||
}
|
||||
|
||||
func (cli *socketClient) CheckTxAsync(tx []byte) *ReqRes {
|
||||
return cli.queueRequest(types.ToRequestCheckTx(tx))
|
||||
}
|
||||
|
||||
func (cli *socketClient) QueryAsync(req types.RequestQuery) *ReqRes {
|
||||
return cli.queueRequest(types.ToRequestQuery(req))
|
||||
}
|
||||
|
||||
func (cli *socketClient) CommitAsync() *ReqRes {
|
||||
return cli.queueRequest(types.ToRequestCommit())
|
||||
}
|
||||
|
||||
func (cli *socketClient) InitChainAsync(req types.RequestInitChain) *ReqRes {
|
||||
return cli.queueRequest(types.ToRequestInitChain(req))
|
||||
}
|
||||
|
||||
func (cli *socketClient) BeginBlockAsync(req types.RequestBeginBlock) *ReqRes {
|
||||
return cli.queueRequest(types.ToRequestBeginBlock(req))
|
||||
}
|
||||
|
||||
func (cli *socketClient) EndBlockAsync(req types.RequestEndBlock) *ReqRes {
|
||||
return cli.queueRequest(types.ToRequestEndBlock(req))
|
||||
}
|
||||
|
||||
//----------------------------------------
|
||||
|
||||
func (cli *socketClient) FlushSync() error {
|
||||
reqRes := cli.queueRequest(types.ToRequestFlush())
|
||||
if err := cli.Error(); err != nil {
|
||||
return err
|
||||
}
|
||||
reqRes.Wait() // NOTE: if we don't flush the queue, it's possible to get stuck here
|
||||
return cli.Error()
|
||||
}
|
||||
|
||||
func (cli *socketClient) EchoSync(msg string) (*types.ResponseEcho, error) {
|
||||
reqres := cli.queueRequest(types.ToRequestEcho(msg))
|
||||
cli.FlushSync()
|
||||
return reqres.Response.GetEcho(), cli.Error()
|
||||
}
|
||||
|
||||
func (cli *socketClient) InfoSync(req types.RequestInfo) (*types.ResponseInfo, error) {
|
||||
reqres := cli.queueRequest(types.ToRequestInfo(req))
|
||||
cli.FlushSync()
|
||||
return reqres.Response.GetInfo(), cli.Error()
|
||||
}
|
||||
|
||||
func (cli *socketClient) SetOptionSync(req types.RequestSetOption) (*types.ResponseSetOption, error) {
|
||||
reqres := cli.queueRequest(types.ToRequestSetOption(req))
|
||||
cli.FlushSync()
|
||||
return reqres.Response.GetSetOption(), cli.Error()
|
||||
}
|
||||
|
||||
func (cli *socketClient) DeliverTxSync(tx []byte) (*types.ResponseDeliverTx, error) {
|
||||
reqres := cli.queueRequest(types.ToRequestDeliverTx(tx))
|
||||
cli.FlushSync()
|
||||
return reqres.Response.GetDeliverTx(), cli.Error()
|
||||
}
|
||||
|
||||
func (cli *socketClient) CheckTxSync(tx []byte) (*types.ResponseCheckTx, error) {
|
||||
reqres := cli.queueRequest(types.ToRequestCheckTx(tx))
|
||||
cli.FlushSync()
|
||||
return reqres.Response.GetCheckTx(), cli.Error()
|
||||
}
|
||||
|
||||
func (cli *socketClient) QuerySync(req types.RequestQuery) (*types.ResponseQuery, error) {
|
||||
reqres := cli.queueRequest(types.ToRequestQuery(req))
|
||||
cli.FlushSync()
|
||||
return reqres.Response.GetQuery(), cli.Error()
|
||||
}
|
||||
|
||||
func (cli *socketClient) CommitSync() (*types.ResponseCommit, error) {
|
||||
reqres := cli.queueRequest(types.ToRequestCommit())
|
||||
cli.FlushSync()
|
||||
return reqres.Response.GetCommit(), cli.Error()
|
||||
}
|
||||
|
||||
func (cli *socketClient) InitChainSync(req types.RequestInitChain) (*types.ResponseInitChain, error) {
|
||||
reqres := cli.queueRequest(types.ToRequestInitChain(req))
|
||||
cli.FlushSync()
|
||||
return reqres.Response.GetInitChain(), cli.Error()
|
||||
}
|
||||
|
||||
func (cli *socketClient) BeginBlockSync(req types.RequestBeginBlock) (*types.ResponseBeginBlock, error) {
|
||||
reqres := cli.queueRequest(types.ToRequestBeginBlock(req))
|
||||
cli.FlushSync()
|
||||
return reqres.Response.GetBeginBlock(), cli.Error()
|
||||
}
|
||||
|
||||
func (cli *socketClient) EndBlockSync(req types.RequestEndBlock) (*types.ResponseEndBlock, error) {
|
||||
reqres := cli.queueRequest(types.ToRequestEndBlock(req))
|
||||
cli.FlushSync()
|
||||
return reqres.Response.GetEndBlock(), cli.Error()
|
||||
}
|
||||
|
||||
//----------------------------------------
|
||||
|
||||
func (cli *socketClient) queueRequest(req *types.Request) *ReqRes {
|
||||
reqres := NewReqRes(req)
|
||||
|
||||
// TODO: set cli.err if reqQueue times out
|
||||
cli.reqQueue <- reqres
|
||||
|
||||
// Maybe auto-flush, or unset auto-flush
|
||||
switch req.Value.(type) {
|
||||
case *types.Request_Flush:
|
||||
cli.flushTimer.Unset()
|
||||
default:
|
||||
cli.flushTimer.Set()
|
||||
}
|
||||
|
||||
return reqres
|
||||
}
|
||||
|
||||
func (cli *socketClient) flushQueue() {
|
||||
LOOP:
|
||||
for {
|
||||
select {
|
||||
case reqres := <-cli.reqQueue:
|
||||
reqres.Done()
|
||||
default:
|
||||
break LOOP
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
//----------------------------------------
|
||||
|
||||
func resMatchesReq(req *types.Request, res *types.Response) (ok bool) {
|
||||
switch req.Value.(type) {
|
||||
case *types.Request_Echo:
|
||||
_, ok = res.Value.(*types.Response_Echo)
|
||||
case *types.Request_Flush:
|
||||
_, ok = res.Value.(*types.Response_Flush)
|
||||
case *types.Request_Info:
|
||||
_, ok = res.Value.(*types.Response_Info)
|
||||
case *types.Request_SetOption:
|
||||
_, ok = res.Value.(*types.Response_SetOption)
|
||||
case *types.Request_DeliverTx:
|
||||
_, ok = res.Value.(*types.Response_DeliverTx)
|
||||
case *types.Request_CheckTx:
|
||||
_, ok = res.Value.(*types.Response_CheckTx)
|
||||
case *types.Request_Commit:
|
||||
_, ok = res.Value.(*types.Response_Commit)
|
||||
case *types.Request_Query:
|
||||
_, ok = res.Value.(*types.Response_Query)
|
||||
case *types.Request_InitChain:
|
||||
_, ok = res.Value.(*types.Response_InitChain)
|
||||
case *types.Request_BeginBlock:
|
||||
_, ok = res.Value.(*types.Response_BeginBlock)
|
||||
case *types.Request_EndBlock:
|
||||
_, ok = res.Value.(*types.Response_EndBlock)
|
||||
}
|
||||
return ok
|
||||
}
|
28
abci/client/socket_client_test.go
Normal file
@ -0,0 +1,28 @@
|
||||
package abcicli_test
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/tendermint/tendermint/abci/client"
|
||||
)
|
||||
|
||||
func TestSocketClientStopForErrorDeadlock(t *testing.T) {
|
||||
c := abcicli.NewSocketClient(":80", false)
|
||||
err := errors.New("foo-tendermint")
|
||||
|
||||
// See Issue https://github.com/tendermint/abci/issues/114
|
||||
doneChan := make(chan bool)
|
||||
go func() {
|
||||
defer close(doneChan)
|
||||
c.StopForError(err)
|
||||
c.StopForError(err)
|
||||
}()
|
||||
|
||||
select {
|
||||
case <-doneChan:
|
||||
case <-time.After(time.Second * 4):
|
||||
t.Fatalf("Test took too long, potential deadlock still exists")
|
||||
}
|
||||
}
|
765
abci/cmd/abci-cli/abci-cli.go
Normal file
@ -0,0 +1,765 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"encoding/hex"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"strings"
|
||||
|
||||
"github.com/spf13/cobra"
|
||||
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
|
||||
abcicli "github.com/tendermint/tendermint/abci/client"
|
||||
"github.com/tendermint/tendermint/abci/example/code"
|
||||
"github.com/tendermint/tendermint/abci/example/counter"
|
||||
"github.com/tendermint/tendermint/abci/example/kvstore"
|
||||
"github.com/tendermint/tendermint/abci/server"
|
||||
servertest "github.com/tendermint/tendermint/abci/tests/server"
|
||||
"github.com/tendermint/tendermint/abci/types"
|
||||
"github.com/tendermint/tendermint/abci/version"
|
||||
)
|
||||
|
||||
// client is a global variable so it can be reused by the console
|
||||
var (
|
||||
client abcicli.Client
|
||||
logger log.Logger
|
||||
)
|
||||
|
||||
// flags
|
||||
var (
|
||||
// global
|
||||
flagAddress string
|
||||
flagAbci string
|
||||
flagVerbose bool // for the println output
|
||||
flagLogLevel string // for the logger
|
||||
|
||||
// query
|
||||
flagPath string
|
||||
flagHeight int
|
||||
flagProve bool
|
||||
|
||||
// counter
|
||||
flagSerial bool
|
||||
|
||||
// kvstore
|
||||
flagPersist string
|
||||
)
|
||||
|
||||
var RootCmd = &cobra.Command{
|
||||
Use: "abci-cli",
|
||||
Short: "the ABCI CLI tool wraps an ABCI client",
|
||||
Long: "the ABCI CLI tool wraps an ABCI client and is used for testing ABCI servers",
|
||||
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
|
||||
|
||||
switch cmd.Use {
|
||||
case "counter", "kvstore", "dummy": // for the examples apps, don't pre-run
|
||||
return nil
|
||||
case "version": // skip running for version command
|
||||
return nil
|
||||
}
|
||||
|
||||
if logger == nil {
|
||||
allowLevel, err := log.AllowLevel(flagLogLevel)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
logger = log.NewFilter(log.NewTMLogger(log.NewSyncWriter(os.Stdout)), allowLevel)
|
||||
}
|
||||
if client == nil {
|
||||
var err error
|
||||
client, err = abcicli.NewClient(flagAddress, flagAbci, false)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
client.SetLogger(logger.With("module", "abci-client"))
|
||||
if err := client.Start(); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
},
|
||||
}
|
||||
|
||||
// Structure for data passed to print response.
|
||||
type response struct {
|
||||
// generic abci response
|
||||
Data []byte
|
||||
Code uint32
|
||||
Info string
|
||||
Log string
|
||||
|
||||
Query *queryResponse
|
||||
}
|
||||
|
||||
type queryResponse struct {
|
||||
Key []byte
|
||||
Value []byte
|
||||
Height int64
|
||||
Proof []byte
|
||||
}
|
||||
|
||||
func Execute() error {
|
||||
addGlobalFlags()
|
||||
addCommands()
|
||||
return RootCmd.Execute()
|
||||
}
|
||||
|
||||
func addGlobalFlags() {
|
||||
RootCmd.PersistentFlags().StringVarP(&flagAddress, "address", "", "tcp://0.0.0.0:26658", "address of application socket")
|
||||
RootCmd.PersistentFlags().StringVarP(&flagAbci, "abci", "", "socket", "either socket or grpc")
|
||||
RootCmd.PersistentFlags().BoolVarP(&flagVerbose, "verbose", "v", false, "print the command and results as if it were a console session")
|
||||
RootCmd.PersistentFlags().StringVarP(&flagLogLevel, "log_level", "", "debug", "set the logger level")
|
||||
}
|
||||
|
||||
func addQueryFlags() {
|
||||
queryCmd.PersistentFlags().StringVarP(&flagPath, "path", "", "/store", "path to prefix query with")
|
||||
queryCmd.PersistentFlags().IntVarP(&flagHeight, "height", "", 0, "height to query the blockchain at")
|
||||
queryCmd.PersistentFlags().BoolVarP(&flagProve, "prove", "", false, "whether or not to return a merkle proof of the query result")
|
||||
}
|
||||
|
||||
func addCounterFlags() {
|
||||
counterCmd.PersistentFlags().BoolVarP(&flagSerial, "serial", "", false, "enforce incrementing (serial) transactions")
|
||||
}
|
||||
|
||||
func addDummyFlags() {
|
||||
dummyCmd.PersistentFlags().StringVarP(&flagPersist, "persist", "", "", "directory to use for a database")
|
||||
}
|
||||
|
||||
func addKVStoreFlags() {
|
||||
kvstoreCmd.PersistentFlags().StringVarP(&flagPersist, "persist", "", "", "directory to use for a database")
|
||||
}
|
||||
|
||||
func addCommands() {
|
||||
RootCmd.AddCommand(batchCmd)
|
||||
RootCmd.AddCommand(consoleCmd)
|
||||
RootCmd.AddCommand(echoCmd)
|
||||
RootCmd.AddCommand(infoCmd)
|
||||
RootCmd.AddCommand(setOptionCmd)
|
||||
RootCmd.AddCommand(deliverTxCmd)
|
||||
RootCmd.AddCommand(checkTxCmd)
|
||||
RootCmd.AddCommand(commitCmd)
|
||||
RootCmd.AddCommand(versionCmd)
|
||||
RootCmd.AddCommand(testCmd)
|
||||
addQueryFlags()
|
||||
RootCmd.AddCommand(queryCmd)
|
||||
|
||||
// examples
|
||||
addCounterFlags()
|
||||
RootCmd.AddCommand(counterCmd)
|
||||
// deprecated, left for backwards compatibility
|
||||
addDummyFlags()
|
||||
RootCmd.AddCommand(dummyCmd)
|
||||
// replaces dummy, see issue #196
|
||||
addKVStoreFlags()
|
||||
RootCmd.AddCommand(kvstoreCmd)
|
||||
}
|
||||
|
||||
var batchCmd = &cobra.Command{
|
||||
Use: "batch",
|
||||
Short: "run a batch of abci commands against an application",
|
||||
Long: `run a batch of abci commands against an application
|
||||
|
||||
This command is run by piping in a file containing a series of commands
|
||||
you'd like to run:
|
||||
|
||||
abci-cli batch < example.file
|
||||
|
||||
where example.file looks something like:
|
||||
|
||||
set_option serial on
|
||||
check_tx 0x00
|
||||
check_tx 0xff
|
||||
deliver_tx 0x00
|
||||
check_tx 0x00
|
||||
deliver_tx 0x01
|
||||
deliver_tx 0x04
|
||||
info
|
||||
`,
|
||||
Args: cobra.ExactArgs(0),
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
return cmdBatch(cmd, args)
|
||||
},
|
||||
}
|
||||
|
||||
var consoleCmd = &cobra.Command{
|
||||
Use: "console",
|
||||
Short: "start an interactive ABCI console for multiple commands",
|
||||
Long: `start an interactive ABCI console for multiple commands
|
||||
|
||||
This command opens an interactive console for running any of the other commands
|
||||
without opening a new connection each time
|
||||
`,
|
||||
Args: cobra.ExactArgs(0),
|
||||
ValidArgs: []string{"echo", "info", "set_option", "deliver_tx", "check_tx", "commit", "query"},
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
return cmdConsole(cmd, args)
|
||||
},
|
||||
}
|
||||
|
||||
var echoCmd = &cobra.Command{
|
||||
Use: "echo",
|
||||
Short: "have the application echo a message",
|
||||
Long: "have the application echo a message",
|
||||
Args: cobra.ExactArgs(1),
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
return cmdEcho(cmd, args)
|
||||
},
|
||||
}
|
||||
var infoCmd = &cobra.Command{
|
||||
Use: "info",
|
||||
Short: "get some info about the application",
|
||||
Long: "get some info about the application",
|
||||
Args: cobra.ExactArgs(0),
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
return cmdInfo(cmd, args)
|
||||
},
|
||||
}
|
||||
var setOptionCmd = &cobra.Command{
|
||||
Use: "set_option",
|
||||
Short: "set an option on the application",
|
||||
Long: "set an option on the application",
|
||||
Args: cobra.ExactArgs(2),
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
return cmdSetOption(cmd, args)
|
||||
},
|
||||
}
|
||||
|
||||
var deliverTxCmd = &cobra.Command{
|
||||
Use: "deliver_tx",
|
||||
Short: "deliver a new transaction to the application",
|
||||
Long: "deliver a new transaction to the application",
|
||||
Args: cobra.ExactArgs(1),
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
return cmdDeliverTx(cmd, args)
|
||||
},
|
||||
}
|
||||
|
||||
var checkTxCmd = &cobra.Command{
|
||||
Use: "check_tx",
|
||||
Short: "validate a transaction",
|
||||
Long: "validate a transaction",
|
||||
Args: cobra.ExactArgs(1),
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
return cmdCheckTx(cmd, args)
|
||||
},
|
||||
}
|
||||
|
||||
var commitCmd = &cobra.Command{
|
||||
Use: "commit",
|
||||
Short: "commit the application state and return the Merkle root hash",
|
||||
Long: "commit the application state and return the Merkle root hash",
|
||||
Args: cobra.ExactArgs(0),
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
return cmdCommit(cmd, args)
|
||||
},
|
||||
}
|
||||
|
||||
var versionCmd = &cobra.Command{
|
||||
Use: "version",
|
||||
Short: "print ABCI console version",
|
||||
Long: "print ABCI console version",
|
||||
Args: cobra.ExactArgs(0),
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
fmt.Println(version.Version)
|
||||
return nil
|
||||
},
|
||||
}
|
||||
|
||||
var queryCmd = &cobra.Command{
|
||||
Use: "query",
|
||||
Short: "query the application state",
|
||||
Long: "query the application state",
|
||||
Args: cobra.ExactArgs(1),
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
return cmdQuery(cmd, args)
|
||||
},
|
||||
}
|
||||
|
||||
var counterCmd = &cobra.Command{
|
||||
Use: "counter",
|
||||
Short: "ABCI demo example",
|
||||
Long: "ABCI demo example",
|
||||
Args: cobra.ExactArgs(0),
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
return cmdCounter(cmd, args)
|
||||
},
|
||||
}
|
||||
|
||||
// deprecated, left for backwards compatibility
|
||||
var dummyCmd = &cobra.Command{
|
||||
Use: "dummy",
|
||||
Deprecated: "use: [abci-cli kvstore] instead",
|
||||
Short: "ABCI demo example",
|
||||
Long: "ABCI demo example",
|
||||
Args: cobra.ExactArgs(0),
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
return cmdKVStore(cmd, args)
|
||||
},
|
||||
}
|
||||
|
||||
var kvstoreCmd = &cobra.Command{
|
||||
Use: "kvstore",
|
||||
Short: "ABCI demo example",
|
||||
Long: "ABCI demo example",
|
||||
Args: cobra.ExactArgs(0),
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
return cmdKVStore(cmd, args)
|
||||
},
|
||||
}
|
||||
|
||||
var testCmd = &cobra.Command{
|
||||
Use: "test",
|
||||
Short: "run integration tests",
|
||||
Long: "run integration tests",
|
||||
Args: cobra.ExactArgs(0),
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
return cmdTest(cmd, args)
|
||||
},
|
||||
}
|
||||
|
||||
// Generates new Args array based off of previous call args to maintain flag persistence
|
||||
func persistentArgs(line []byte) []string {
|
||||
|
||||
// generate the arguments to run from original os.Args
|
||||
// to maintain flag arguments
|
||||
args := os.Args
|
||||
args = args[:len(args)-1] // remove the previous command argument
|
||||
|
||||
if len(line) > 0 { // prevents introduction of extra space leading to argument parse errors
|
||||
args = append(args, strings.Split(string(line), " ")...)
|
||||
}
|
||||
return args
|
||||
}
|
||||
|
||||
//--------------------------------------------------------------------------------
|
||||
|
||||
func compose(fs []func() error) error {
|
||||
if len(fs) == 0 {
|
||||
return nil
|
||||
} else {
|
||||
err := fs[0]()
|
||||
if err == nil {
|
||||
return compose(fs[1:])
|
||||
} else {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func cmdTest(cmd *cobra.Command, args []string) error {
|
||||
return compose(
|
||||
[]func() error{
|
||||
func() error { return servertest.InitChain(client) },
|
||||
func() error { return servertest.SetOption(client, "serial", "on") },
|
||||
func() error { return servertest.Commit(client, nil) },
|
||||
func() error { return servertest.DeliverTx(client, []byte("abc"), code.CodeTypeBadNonce, nil) },
|
||||
func() error { return servertest.Commit(client, nil) },
|
||||
func() error { return servertest.DeliverTx(client, []byte{0x00}, code.CodeTypeOK, nil) },
|
||||
func() error { return servertest.Commit(client, []byte{0, 0, 0, 0, 0, 0, 0, 1}) },
|
||||
func() error { return servertest.DeliverTx(client, []byte{0x00}, code.CodeTypeBadNonce, nil) },
|
||||
func() error { return servertest.DeliverTx(client, []byte{0x01}, code.CodeTypeOK, nil) },
|
||||
func() error { return servertest.DeliverTx(client, []byte{0x00, 0x02}, code.CodeTypeOK, nil) },
|
||||
func() error { return servertest.DeliverTx(client, []byte{0x00, 0x03}, code.CodeTypeOK, nil) },
|
||||
func() error { return servertest.DeliverTx(client, []byte{0x00, 0x00, 0x04}, code.CodeTypeOK, nil) },
|
||||
func() error {
|
||||
return servertest.DeliverTx(client, []byte{0x00, 0x00, 0x06}, code.CodeTypeBadNonce, nil)
|
||||
},
|
||||
func() error { return servertest.Commit(client, []byte{0, 0, 0, 0, 0, 0, 0, 5}) },
|
||||
})
|
||||
}
|
||||
|
||||
func cmdBatch(cmd *cobra.Command, args []string) error {
|
||||
bufReader := bufio.NewReader(os.Stdin)
|
||||
for {
|
||||
|
||||
line, more, err := bufReader.ReadLine()
|
||||
if more {
|
||||
return errors.New("Input line is too long")
|
||||
} else if err == io.EOF {
|
||||
break
|
||||
} else if len(line) == 0 {
|
||||
continue
|
||||
} else if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
cmdArgs := persistentArgs(line)
|
||||
if err := muxOnCommands(cmd, cmdArgs); err != nil {
|
||||
return err
|
||||
}
|
||||
fmt.Println()
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func cmdConsole(cmd *cobra.Command, args []string) error {
|
||||
for {
|
||||
fmt.Printf("> ")
|
||||
bufReader := bufio.NewReader(os.Stdin)
|
||||
line, more, err := bufReader.ReadLine()
|
||||
if more {
|
||||
return errors.New("Input is too long")
|
||||
} else if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
pArgs := persistentArgs(line)
|
||||
if err := muxOnCommands(cmd, pArgs); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func muxOnCommands(cmd *cobra.Command, pArgs []string) error {
	if len(pArgs) < 2 {
		return errors.New("expecting persistent args of the form: abci-cli [command] <...>")
	}

	// TODO: this parsing is fragile
	args := []string{}
	for i := 0; i < len(pArgs); i++ {
		arg := pArgs[i]

		// check for flags
		if strings.HasPrefix(arg, "-") {
			// if it has an equal, we can just skip
			if strings.Contains(arg, "=") {
				continue
			}
			// if it's a boolean, we can just skip
			_, err := cmd.Flags().GetBool(strings.TrimLeft(arg, "-"))
			if err == nil {
				continue
			}

			// otherwise, we need to skip the next one too
			i += 1
			continue
		}

		// append the actual arg
		args = append(args, arg)
	}
	var subCommand string
	var actualArgs []string
	if len(args) > 1 {
		subCommand = args[1]
	}
	if len(args) > 2 {
		actualArgs = args[2:]
	}
	cmd.Use = subCommand // for later print statements ...

	switch strings.ToLower(subCommand) {
	case "check_tx":
		return cmdCheckTx(cmd, actualArgs)
	case "commit":
		return cmdCommit(cmd, actualArgs)
	case "deliver_tx":
		return cmdDeliverTx(cmd, actualArgs)
	case "echo":
		return cmdEcho(cmd, actualArgs)
	case "info":
		return cmdInfo(cmd, actualArgs)
	case "query":
		return cmdQuery(cmd, actualArgs)
	case "set_option":
		return cmdSetOption(cmd, actualArgs)
	default:
		return cmdUnimplemented(cmd, pArgs)
	}
}

func cmdUnimplemented(cmd *cobra.Command, args []string) error {
	// TODO: Print out all the sub-commands available
	msg := "unimplemented command"
	if err := cmd.Help(); err != nil {
		msg = err.Error()
	}
	if len(args) > 0 {
		msg += fmt.Sprintf(" args: [%s]", strings.Join(args, " "))
	}
	printResponse(cmd, args, response{
		Code: codeBad,
		Log:  msg,
	})
	return nil
}

// Have the application echo a message
func cmdEcho(cmd *cobra.Command, args []string) error {
	msg := ""
	if len(args) > 0 {
		msg = args[0]
	}
	res, err := client.EchoSync(msg)
	if err != nil {
		return err
	}
	printResponse(cmd, args, response{
		Data: []byte(res.Message),
	})
	return nil
}

// Get some info from the application
func cmdInfo(cmd *cobra.Command, args []string) error {
	var version string
	if len(args) == 1 {
		version = args[0]
	}
	res, err := client.InfoSync(types.RequestInfo{version})
	if err != nil {
		return err
	}
	printResponse(cmd, args, response{
		Data: []byte(res.Data),
	})
	return nil
}

const codeBad uint32 = 10

// Set an option on the application
func cmdSetOption(cmd *cobra.Command, args []string) error {
	if len(args) < 2 {
		printResponse(cmd, args, response{
			Code: codeBad,
			Log:  "want at least arguments of the form: <key> <value>",
		})
		return nil
	}

	key, val := args[0], args[1]
	_, err := client.SetOptionSync(types.RequestSetOption{key, val})
	if err != nil {
		return err
	}
	printResponse(cmd, args, response{Log: "OK (SetOption doesn't return anything.)"}) // NOTE: Nothing to show...
	return nil
}

// Append a new tx to application
func cmdDeliverTx(cmd *cobra.Command, args []string) error {
	if len(args) == 0 {
		printResponse(cmd, args, response{
			Code: codeBad,
			Log:  "want the tx",
		})
		return nil
	}
	txBytes, err := stringOrHexToBytes(args[0])
	if err != nil {
		return err
	}
	res, err := client.DeliverTxSync(txBytes)
	if err != nil {
		return err
	}
	printResponse(cmd, args, response{
		Code: res.Code,
		Data: res.Data,
		Info: res.Info,
		Log:  res.Log,
	})
	return nil
}

// Validate a tx
func cmdCheckTx(cmd *cobra.Command, args []string) error {
	if len(args) == 0 {
		printResponse(cmd, args, response{
			Code: codeBad,
			Info: "want the tx",
		})
		return nil
	}
	txBytes, err := stringOrHexToBytes(args[0])
	if err != nil {
		return err
	}
	res, err := client.CheckTxSync(txBytes)
	if err != nil {
		return err
	}
	printResponse(cmd, args, response{
		Code: res.Code,
		Data: res.Data,
		Info: res.Info,
		Log:  res.Log,
	})
	return nil
}

// Get application Merkle root hash
func cmdCommit(cmd *cobra.Command, args []string) error {
	res, err := client.CommitSync()
	if err != nil {
		return err
	}
	printResponse(cmd, args, response{
		Data: res.Data,
	})
	return nil
}

// Query application state
func cmdQuery(cmd *cobra.Command, args []string) error {
	if len(args) == 0 {
		printResponse(cmd, args, response{
			Code: codeBad,
			Info: "want the query",
			Log:  "",
		})
		return nil
	}
	queryBytes, err := stringOrHexToBytes(args[0])
	if err != nil {
		return err
	}

	resQuery, err := client.QuerySync(types.RequestQuery{
		Data:   queryBytes,
		Path:   flagPath,
		Height: int64(flagHeight),
		Prove:  flagProve,
	})
	if err != nil {
		return err
	}
	printResponse(cmd, args, response{
		Code: resQuery.Code,
		Info: resQuery.Info,
		Log:  resQuery.Log,
		Query: &queryResponse{
			Key:    resQuery.Key,
			Value:  resQuery.Value,
			Height: resQuery.Height,
			Proof:  resQuery.Proof,
		},
	})
	return nil
}

func cmdCounter(cmd *cobra.Command, args []string) error {

	app := counter.NewCounterApplication(flagSerial)

	logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))

	// Start the listener
	srv, err := server.NewServer(flagAddress, flagAbci, app)
	if err != nil {
		return err
	}
	srv.SetLogger(logger.With("module", "abci-server"))
	if err := srv.Start(); err != nil {
		return err
	}

	// Wait forever
	cmn.TrapSignal(func() {
		// Cleanup
		srv.Stop()
	})
	return nil
}

func cmdKVStore(cmd *cobra.Command, args []string) error {
	logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))

	// Create the application - in memory or persisted to disk
	var app types.Application
	if flagPersist == "" {
		app = kvstore.NewKVStoreApplication()
	} else {
		app = kvstore.NewPersistentKVStoreApplication(flagPersist)
		app.(*kvstore.PersistentKVStoreApplication).SetLogger(logger.With("module", "kvstore"))
	}

	// Start the listener
	srv, err := server.NewServer(flagAddress, flagAbci, app)
	if err != nil {
		return err
	}
	srv.SetLogger(logger.With("module", "abci-server"))
	if err := srv.Start(); err != nil {
		return err
	}

	// Wait forever
	cmn.TrapSignal(func() {
		// Cleanup
		srv.Stop()
	})
	return nil
}

//--------------------------------------------------------------------------------

func printResponse(cmd *cobra.Command, args []string, rsp response) {

	if flagVerbose {
		fmt.Println(">", cmd.Use, strings.Join(args, " "))
	}

	// Always print the status code.
	if rsp.Code == types.CodeTypeOK {
		fmt.Printf("-> code: OK\n")
	} else {
		fmt.Printf("-> code: %d\n", rsp.Code)
	}

	if len(rsp.Data) != 0 {
		// Do not print this line when using the commit command
		// because the string comes out as gibberish
		if cmd.Use != "commit" {
			fmt.Printf("-> data: %s\n", rsp.Data)
		}
		fmt.Printf("-> data.hex: 0x%X\n", rsp.Data)
	}
	if rsp.Log != "" {
		fmt.Printf("-> log: %s\n", rsp.Log)
	}

	if rsp.Query != nil {
		fmt.Printf("-> height: %d\n", rsp.Query.Height)
		if rsp.Query.Key != nil {
			fmt.Printf("-> key: %s\n", rsp.Query.Key)
			fmt.Printf("-> key.hex: %X\n", rsp.Query.Key)
		}
		if rsp.Query.Value != nil {
			fmt.Printf("-> value: %s\n", rsp.Query.Value)
			fmt.Printf("-> value.hex: %X\n", rsp.Query.Value)
		}
		if rsp.Query.Proof != nil {
			fmt.Printf("-> proof: %X\n", rsp.Query.Proof)
		}
	}
}

// NOTE: s is interpreted as a string unless prefixed with 0x
func stringOrHexToBytes(s string) ([]byte, error) {
	if len(s) > 2 && strings.ToLower(s[:2]) == "0x" {
		b, err := hex.DecodeString(s[2:])
		if err != nil {
			err = fmt.Errorf("Error decoding hex argument: %s", err.Error())
			return nil, err
		}
		return b, nil
	}

	if !strings.HasPrefix(s, "\"") || !strings.HasSuffix(s, "\"") {
		err := fmt.Errorf("Invalid string arg: \"%s\". Must be quoted or a \"0x\"-prefixed hex string", s)
		return nil, err
	}

	return []byte(s[1 : len(s)-1]), nil
}
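
The argument convention above (a double-quoted string, or a `0x`-prefixed hex string) can be illustrated with a small standalone sketch. `decodeArg` is a hypothetical helper that merely restates the rule of `stringOrHexToBytes`; it is not part of abci-cli:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeArg restates the rule above: "0x..." is hex-decoded,
// anything else must be wrapped in double quotes.
func decodeArg(s string) ([]byte, error) {
	if len(s) > 2 && strings.ToLower(s[:2]) == "0x" {
		return hex.DecodeString(s[2:])
	}
	if !strings.HasPrefix(s, "\"") || !strings.HasSuffix(s, "\"") {
		return nil, fmt.Errorf("argument must be quoted or 0x-prefixed: %s", s)
	}
	return []byte(s[1 : len(s)-1]), nil
}

func main() {
	for _, arg := range []string{`"abc"`, "0x616263", "abc"} {
		b, err := decodeArg(arg)
		fmt.Printf("%-10s -> %v (err: %v)\n", arg, b, err)
	}
}
```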
14	abci/cmd/abci-cli/main.go	Normal file
@@ -0,0 +1,14 @@
package main

import (
	"fmt"
	"os"
)

func main() {
	err := Execute()
	if err != nil {
		fmt.Print(err)
		os.Exit(1)
	}
}
9	abci/example/code/code.go	Normal file
@@ -0,0 +1,9 @@
package code

// Return codes for the examples
const (
	CodeTypeOK            uint32 = 0
	CodeTypeEncodingError uint32 = 1
	CodeTypeBadNonce      uint32 = 2
	CodeTypeUnauthorized  uint32 = 3
)
104	abci/example/counter/counter.go	Normal file
@@ -0,0 +1,104 @@
package counter

import (
	"encoding/binary"
	"fmt"

	"github.com/tendermint/tendermint/abci/example/code"
	"github.com/tendermint/tendermint/abci/types"
	cmn "github.com/tendermint/tendermint/libs/common"
)

type CounterApplication struct {
	types.BaseApplication

	hashCount int
	txCount   int
	serial    bool
}

func NewCounterApplication(serial bool) *CounterApplication {
	return &CounterApplication{serial: serial}
}

func (app *CounterApplication) Info(req types.RequestInfo) types.ResponseInfo {
	return types.ResponseInfo{Data: cmn.Fmt("{\"hashes\":%v,\"txs\":%v}", app.hashCount, app.txCount)}
}

func (app *CounterApplication) SetOption(req types.RequestSetOption) types.ResponseSetOption {
	key, value := req.Key, req.Value
	if key == "serial" && value == "on" {
		app.serial = true
	} else {
		/*
			TODO Panic and have the ABCI server pass an exception.
			The client can call SetOptionSync() and get an `error`.
			return types.ResponseSetOption{
				Error: cmn.Fmt("Unknown key (%s) or value (%s)", key, value),
			}
		*/
		return types.ResponseSetOption{}
	}

	return types.ResponseSetOption{}
}

func (app *CounterApplication) DeliverTx(tx []byte) types.ResponseDeliverTx {
	if app.serial {
		if len(tx) > 8 {
			return types.ResponseDeliverTx{
				Code: code.CodeTypeEncodingError,
				Log:  fmt.Sprintf("Max tx size is 8 bytes, got %d", len(tx))}
		}
		tx8 := make([]byte, 8)
		copy(tx8[len(tx8)-len(tx):], tx)
		txValue := binary.BigEndian.Uint64(tx8)
		if txValue != uint64(app.txCount) {
			return types.ResponseDeliverTx{
				Code: code.CodeTypeBadNonce,
				Log:  fmt.Sprintf("Invalid nonce. Expected %v, got %v", app.txCount, txValue)}
		}
	}
	app.txCount++
	return types.ResponseDeliverTx{Code: code.CodeTypeOK}
}

func (app *CounterApplication) CheckTx(tx []byte) types.ResponseCheckTx {
	if app.serial {
		if len(tx) > 8 {
			return types.ResponseCheckTx{
				Code: code.CodeTypeEncodingError,
				Log:  fmt.Sprintf("Max tx size is 8 bytes, got %d", len(tx))}
		}
		tx8 := make([]byte, 8)
		copy(tx8[len(tx8)-len(tx):], tx)
		txValue := binary.BigEndian.Uint64(tx8)
		if txValue < uint64(app.txCount) {
			return types.ResponseCheckTx{
				Code: code.CodeTypeBadNonce,
				Log:  fmt.Sprintf("Invalid nonce. Expected >= %v, got %v", app.txCount, txValue)}
		}
	}
	return types.ResponseCheckTx{Code: code.CodeTypeOK}
}

func (app *CounterApplication) Commit() (resp types.ResponseCommit) {
	app.hashCount++
	if app.txCount == 0 {
		return types.ResponseCommit{}
	}
	hash := make([]byte, 8)
	binary.BigEndian.PutUint64(hash, uint64(app.txCount))
	return types.ResponseCommit{Data: hash}
}

func (app *CounterApplication) Query(reqQuery types.RequestQuery) types.ResponseQuery {
	switch reqQuery.Path {
	case "hash":
		return types.ResponseQuery{Value: []byte(cmn.Fmt("%v", app.hashCount))}
	case "tx":
		return types.ResponseQuery{Value: []byte(cmn.Fmt("%v", app.txCount))}
	default:
		return types.ResponseQuery{Log: cmn.Fmt("Invalid query path. Expected hash or tx, got %v", reqQuery.Path)}
	}
}
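
As a quick illustration of the nonce encoding that the serial counter above expects (this is only a standalone restatement of the `DeliverTx` logic, not additional application code):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	// The tx bytes are right-aligned into an 8-byte buffer and read as a
	// big-endian uint64, exactly as in CounterApplication.DeliverTx above,
	// so []byte{0x00, 0x02} encodes nonce 2 and []byte{0x01} encodes nonce 1.
	tx := []byte{0x00, 0x02}
	tx8 := make([]byte, 8)
	copy(tx8[len(tx8)-len(tx):], tx)
	fmt.Println(binary.BigEndian.Uint64(tx8)) // prints 2
}
```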
3	abci/example/example.go	Normal file
@@ -0,0 +1,3 @@
package example

// so the go tool doesn't return errors about no buildable go files ...
154	abci/example/example_test.go	Normal file
@@ -0,0 +1,154 @@
|
||||
package example
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"net"
|
||||
"reflect"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"google.golang.org/grpc"
|
||||
|
||||
"golang.org/x/net/context"
|
||||
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
|
||||
abcicli "github.com/tendermint/tendermint/abci/client"
|
||||
"github.com/tendermint/tendermint/abci/example/code"
|
||||
"github.com/tendermint/tendermint/abci/example/kvstore"
|
||||
abciserver "github.com/tendermint/tendermint/abci/server"
|
||||
"github.com/tendermint/tendermint/abci/types"
|
||||
)
|
||||
|
||||
func TestKVStore(t *testing.T) {
|
||||
fmt.Println("### Testing KVStore")
|
||||
testStream(t, kvstore.NewKVStoreApplication())
|
||||
}
|
||||
|
||||
func TestBaseApp(t *testing.T) {
|
||||
fmt.Println("### Testing BaseApp")
|
||||
testStream(t, types.NewBaseApplication())
|
||||
}
|
||||
|
||||
func TestGRPC(t *testing.T) {
|
||||
fmt.Println("### Testing GRPC")
|
||||
testGRPCSync(t, types.NewGRPCApplication(types.NewBaseApplication()))
|
||||
}
|
||||
|
||||
func testStream(t *testing.T, app types.Application) {
|
||||
numDeliverTxs := 200000
|
||||
|
||||
// Start the listener
|
||||
server := abciserver.NewSocketServer("unix://test.sock", app)
|
||||
server.SetLogger(log.TestingLogger().With("module", "abci-server"))
|
||||
if err := server.Start(); err != nil {
|
||||
t.Fatalf("Error starting socket server: %v", err.Error())
|
||||
}
|
||||
defer server.Stop()
|
||||
|
||||
// Connect to the socket
|
||||
client := abcicli.NewSocketClient("unix://test.sock", false)
|
||||
client.SetLogger(log.TestingLogger().With("module", "abci-client"))
|
||||
if err := client.Start(); err != nil {
|
||||
t.Fatalf("Error starting socket client: %v", err.Error())
|
||||
}
|
||||
defer client.Stop()
|
||||
|
||||
done := make(chan struct{})
|
||||
counter := 0
|
||||
client.SetResponseCallback(func(req *types.Request, res *types.Response) {
|
||||
// Process response
|
||||
switch r := res.Value.(type) {
|
||||
case *types.Response_DeliverTx:
|
||||
counter++
|
||||
if r.DeliverTx.Code != code.CodeTypeOK {
|
||||
t.Error("DeliverTx failed with ret_code", r.DeliverTx.Code)
|
||||
}
|
||||
if counter > numDeliverTxs {
|
||||
t.Fatalf("Too many DeliverTx responses. Got %d, expected %d", counter, numDeliverTxs)
|
||||
}
|
||||
if counter == numDeliverTxs {
|
||||
go func() {
|
||||
time.Sleep(time.Second * 2) // Wait for a bit to allow counter overflow
|
||||
close(done)
|
||||
}()
|
||||
return
|
||||
}
|
||||
case *types.Response_Flush:
|
||||
// ignore
|
||||
default:
|
||||
t.Error("Unexpected response type", reflect.TypeOf(res.Value))
|
||||
}
|
||||
})
|
||||
|
||||
// Write requests
|
||||
for counter := 0; counter < numDeliverTxs; counter++ {
|
||||
// Send request
|
||||
reqRes := client.DeliverTxAsync([]byte("test"))
|
||||
_ = reqRes
|
||||
// check err ?
|
||||
|
||||
// Sometimes send flush messages
|
||||
if counter%123 == 0 {
|
||||
client.FlushAsync()
|
||||
// check err ?
|
||||
}
|
||||
}
|
||||
|
||||
// Send final flush message
|
||||
client.FlushAsync()
|
||||
|
||||
<-done
|
||||
}
|
||||
|
||||
//-------------------------
|
||||
// test grpc
|
||||
|
||||
func dialerFunc(addr string, timeout time.Duration) (net.Conn, error) {
|
||||
return cmn.Connect(addr)
|
||||
}
|
||||
|
||||
func testGRPCSync(t *testing.T, app *types.GRPCApplication) {
|
||||
numDeliverTxs := 2000
|
||||
|
||||
// Start the listener
|
||||
server := abciserver.NewGRPCServer("unix://test.sock", app)
|
||||
server.SetLogger(log.TestingLogger().With("module", "abci-server"))
|
||||
if err := server.Start(); err != nil {
|
||||
t.Fatalf("Error starting GRPC server: %v", err.Error())
|
||||
}
|
||||
defer server.Stop()
|
||||
|
||||
// Connect to the socket
|
||||
conn, err := grpc.Dial("unix://test.sock", grpc.WithInsecure(), grpc.WithDialer(dialerFunc))
|
||||
if err != nil {
|
||||
t.Fatalf("Error dialing GRPC server: %v", err.Error())
|
||||
}
|
||||
defer conn.Close()
|
||||
|
||||
client := types.NewABCIApplicationClient(conn)
|
||||
|
||||
// Write requests
|
||||
for counter := 0; counter < numDeliverTxs; counter++ {
|
||||
// Send request
|
||||
response, err := client.DeliverTx(context.Background(), &types.RequestDeliverTx{[]byte("test")})
|
||||
if err != nil {
|
||||
t.Fatalf("Error in GRPC DeliverTx: %v", err.Error())
|
||||
}
|
||||
counter++
|
||||
if response.Code != code.CodeTypeOK {
|
||||
t.Error("DeliverTx failed with ret_code", response.Code)
|
||||
}
|
||||
if counter > numDeliverTxs {
|
||||
t.Fatal("Too many DeliverTx responses")
|
||||
}
|
||||
t.Log("response", counter)
|
||||
if counter == numDeliverTxs {
|
||||
go func() {
|
||||
time.Sleep(time.Second * 2) // Wait for a bit to allow counter overflow
|
||||
}()
|
||||
}
|
||||
|
||||
}
|
||||
}
|
1	abci/example/js/.gitignore	vendored	Normal file
@@ -0,0 +1 @@
node_modules
1	abci/example/js/README.md	Normal file
@@ -0,0 +1 @@
This example has been moved here: https://github.com/tendermint/js-abci/tree/master/example
31	abci/example/kvstore/README.md	Normal file
@@ -0,0 +1,31 @@
# KVStore

There are two apps here: the KVStoreApplication and the PersistentKVStoreApplication.

## KVStoreApplication

The KVStoreApplication is a simple merkle key-value store.
Transactions of the form `key=value` are stored as key-value pairs in the tree.
Transactions without an `=` sign set the value to the key.
The app has no replay protection (other than what the mempool provides).

## PersistentKVStoreApplication

The PersistentKVStoreApplication wraps the KVStoreApplication
and provides two additional features:

1) persistence of state across app restarts (using Tendermint's ABCI-Handshake mechanism)
2) validator set changes

The state is persisted in leveldb along with the last block committed,
and the Handshake allows any necessary blocks to be replayed.
Validator set changes are effected using the following transaction format:

```
val:pubkey1/power1,addr2/power2,addr3/power3
```

where `power1` is the new voting power for the validator with `pubkey1` (possibly a new one).
There is no sybil protection against new validators joining.
Validators can be removed by setting their power to `0`.
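
A minimal sketch of how the `key=value` transaction format described above is exercised against the in-memory KVStoreApplication from this diff (illustrative only; error handling and the validator-change flow are omitted):

```go
package main

import (
	"fmt"

	"github.com/tendermint/tendermint/abci/example/kvstore"
	"github.com/tendermint/tendermint/abci/types"
)

func main() {
	app := kvstore.NewKVStoreApplication()

	// "key=value" stores a key-value pair in the tree ...
	app.DeliverTx([]byte("name=satoshi"))
	// ... and a tx without "=" stores the value under itself.
	app.DeliverTx([]byte("abc"))
	app.Commit()

	res := app.Query(types.RequestQuery{Data: []byte("name")})
	fmt.Printf("name -> %s\n", res.Value) // name -> satoshi
}
```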
38	abci/example/kvstore/helpers.go	Normal file
@@ -0,0 +1,38 @@
package kvstore

import (
	"github.com/tendermint/tendermint/abci/types"
	cmn "github.com/tendermint/tendermint/libs/common"
)

// RandVal creates one random validator (the index argument is
// currently unused: the address, key, and power are all random)
func RandVal(i int) types.Validator {
	addr := cmn.RandBytes(20)
	pubkey := cmn.RandBytes(32)
	power := cmn.RandUint16() + 1
	v := types.Ed25519Validator(pubkey, int64(power))
	v.Address = addr
	return v
}

// RandVals returns a list of cnt validators for initializing
// the application. Note that both the keys and the power are
// random (change this if not desired)
func RandVals(cnt int) []types.Validator {
	res := make([]types.Validator, cnt)
	for i := 0; i < cnt; i++ {
		res[i] = RandVal(i)
	}
	return res
}

// InitKVStore initializes the kvstore app with some data,
// which allows tests to pass and is fine as long as you
// don't make any tx that modifies the validator state
func InitKVStore(app *PersistentKVStoreApplication) {
	app.InitChain(types.RequestInitChain{
		Validators: RandVals(1),
	})
}
126	abci/example/kvstore/kvstore.go	Normal file
@@ -0,0 +1,126 @@
package kvstore

import (
	"bytes"
	"encoding/binary"
	"encoding/json"
	"fmt"

	"github.com/tendermint/tendermint/abci/example/code"
	"github.com/tendermint/tendermint/abci/types"
	cmn "github.com/tendermint/tendermint/libs/common"
	dbm "github.com/tendermint/tendermint/libs/db"
)

var (
	stateKey        = []byte("stateKey")
	kvPairPrefixKey = []byte("kvPairKey:")
)

type State struct {
	db      dbm.DB
	Size    int64  `json:"size"`
	Height  int64  `json:"height"`
	AppHash []byte `json:"app_hash"`
}

func loadState(db dbm.DB) State {
	stateBytes := db.Get(stateKey)
	var state State
	if len(stateBytes) != 0 {
		err := json.Unmarshal(stateBytes, &state)
		if err != nil {
			panic(err)
		}
	}
	state.db = db
	return state
}

func saveState(state State) {
	stateBytes, err := json.Marshal(state)
	if err != nil {
		panic(err)
	}
	state.db.Set(stateKey, stateBytes)
}

func prefixKey(key []byte) []byte {
	return append(kvPairPrefixKey, key...)
}

//---------------------------------------------------

var _ types.Application = (*KVStoreApplication)(nil)

type KVStoreApplication struct {
	types.BaseApplication

	state State
}

func NewKVStoreApplication() *KVStoreApplication {
	state := loadState(dbm.NewMemDB())
	return &KVStoreApplication{state: state}
}

func (app *KVStoreApplication) Info(req types.RequestInfo) (resInfo types.ResponseInfo) {
	return types.ResponseInfo{Data: fmt.Sprintf("{\"size\":%v}", app.state.Size)}
}

// tx is either "key=value" or just arbitrary bytes
func (app *KVStoreApplication) DeliverTx(tx []byte) types.ResponseDeliverTx {
	var key, value []byte
	parts := bytes.Split(tx, []byte("="))
	if len(parts) == 2 {
		key, value = parts[0], parts[1]
	} else {
		key, value = tx, tx
	}
	app.state.db.Set(prefixKey(key), value)
	app.state.Size += 1

	tags := []cmn.KVPair{
		{[]byte("app.creator"), []byte("jae")},
		{[]byte("app.key"), key},
	}
	return types.ResponseDeliverTx{Code: code.CodeTypeOK, Tags: tags}
}

func (app *KVStoreApplication) CheckTx(tx []byte) types.ResponseCheckTx {
	return types.ResponseCheckTx{Code: code.CodeTypeOK}
}

func (app *KVStoreApplication) Commit() types.ResponseCommit {
	// Using a memdb - just return the big endian size of the db
	appHash := make([]byte, 8)
	binary.PutVarint(appHash, app.state.Size)
	app.state.AppHash = appHash
	app.state.Height += 1
	saveState(app.state)
	return types.ResponseCommit{Data: appHash}
}

func (app *KVStoreApplication) Query(reqQuery types.RequestQuery) (resQuery types.ResponseQuery) {
	if reqQuery.Prove {
		value := app.state.db.Get(prefixKey(reqQuery.Data))
		resQuery.Index = -1 // TODO make Proof return index
		resQuery.Key = reqQuery.Data
		resQuery.Value = value
		if value != nil {
			resQuery.Log = "exists"
		} else {
			resQuery.Log = "does not exist"
		}
		return
	} else {
		value := app.state.db.Get(prefixKey(reqQuery.Data))
		resQuery.Value = value
		if value != nil {
			resQuery.Log = "exists"
		} else {
			resQuery.Log = "does not exist"
		}
		return
	}
}
310	abci/example/kvstore/kvstore_test.go	Normal file
@@ -0,0 +1,310 @@
|
||||
package kvstore
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"io/ioutil"
|
||||
"sort"
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
|
||||
abcicli "github.com/tendermint/tendermint/abci/client"
|
||||
"github.com/tendermint/tendermint/abci/example/code"
|
||||
abciserver "github.com/tendermint/tendermint/abci/server"
|
||||
"github.com/tendermint/tendermint/abci/types"
|
||||
)
|
||||
|
||||
func testKVStore(t *testing.T, app types.Application, tx []byte, key, value string) {
|
||||
ar := app.DeliverTx(tx)
|
||||
require.False(t, ar.IsErr(), ar)
|
||||
// repeating tx doesn't raise error
|
||||
ar = app.DeliverTx(tx)
|
||||
require.False(t, ar.IsErr(), ar)
|
||||
|
||||
// make sure query is fine
|
||||
resQuery := app.Query(types.RequestQuery{
|
||||
Path: "/store",
|
||||
Data: []byte(key),
|
||||
})
|
||||
require.Equal(t, code.CodeTypeOK, resQuery.Code)
|
||||
require.Equal(t, value, string(resQuery.Value))
|
||||
|
||||
// make sure proof is fine
|
||||
resQuery = app.Query(types.RequestQuery{
|
||||
Path: "/store",
|
||||
Data: []byte(key),
|
||||
Prove: true,
|
||||
})
|
||||
require.EqualValues(t, code.CodeTypeOK, resQuery.Code)
|
||||
require.Equal(t, value, string(resQuery.Value))
|
||||
}
|
||||
|
||||
func TestKVStoreKV(t *testing.T) {
|
||||
kvstore := NewKVStoreApplication()
|
||||
key := "abc"
|
||||
value := key
|
||||
tx := []byte(key)
|
||||
testKVStore(t, kvstore, tx, key, value)
|
||||
|
||||
value = "def"
|
||||
tx = []byte(key + "=" + value)
|
||||
testKVStore(t, kvstore, tx, key, value)
|
||||
}
|
||||
|
||||
func TestPersistentKVStoreKV(t *testing.T) {
|
||||
dir, err := ioutil.TempDir("/tmp", "abci-kvstore-test") // TODO
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
kvstore := NewPersistentKVStoreApplication(dir)
|
||||
key := "abc"
|
||||
value := key
|
||||
tx := []byte(key)
|
||||
testKVStore(t, kvstore, tx, key, value)
|
||||
|
||||
value = "def"
|
||||
tx = []byte(key + "=" + value)
|
||||
testKVStore(t, kvstore, tx, key, value)
|
||||
}
|
||||
|
||||
func TestPersistentKVStoreInfo(t *testing.T) {
|
||||
dir, err := ioutil.TempDir("/tmp", "abci-kvstore-test") // TODO
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
kvstore := NewPersistentKVStoreApplication(dir)
|
||||
InitKVStore(kvstore)
|
||||
height := int64(0)
|
||||
|
||||
resInfo := kvstore.Info(types.RequestInfo{})
|
||||
if resInfo.LastBlockHeight != height {
|
||||
t.Fatalf("expected height of %d, got %d", height, resInfo.LastBlockHeight)
|
||||
}
|
||||
|
||||
// make and apply block
|
||||
height = int64(1)
|
||||
hash := []byte("foo")
|
||||
header := types.Header{
|
||||
Height: int64(height),
|
||||
}
|
||||
kvstore.BeginBlock(types.RequestBeginBlock{hash, header, nil, nil})
|
||||
kvstore.EndBlock(types.RequestEndBlock{header.Height})
|
||||
kvstore.Commit()
|
||||
|
||||
resInfo = kvstore.Info(types.RequestInfo{})
|
||||
if resInfo.LastBlockHeight != height {
|
||||
t.Fatalf("expected height of %d, got %d", height, resInfo.LastBlockHeight)
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
// add a validator, remove a validator, update a validator
|
||||
func TestValUpdates(t *testing.T) {
|
||||
dir, err := ioutil.TempDir("/tmp", "abci-kvstore-test") // TODO
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
kvstore := NewPersistentKVStoreApplication(dir)
|
||||
|
||||
// init with some validators
|
||||
total := 10
|
||||
nInit := 5
|
||||
vals := RandVals(total)
|
||||
// iniitalize with the first nInit
|
||||
kvstore.InitChain(types.RequestInitChain{
|
||||
Validators: vals[:nInit],
|
||||
})
|
||||
|
||||
vals1, vals2 := vals[:nInit], kvstore.Validators()
|
||||
valsEqual(t, vals1, vals2)
|
||||
|
||||
var v1, v2, v3 types.Validator
|
||||
|
||||
// add some validators
|
||||
v1, v2 = vals[nInit], vals[nInit+1]
|
||||
diff := []types.Validator{v1, v2}
|
||||
tx1 := MakeValSetChangeTx(v1.PubKey, v1.Power)
|
||||
tx2 := MakeValSetChangeTx(v2.PubKey, v2.Power)
|
||||
|
||||
makeApplyBlock(t, kvstore, 1, diff, tx1, tx2)
|
||||
|
||||
vals1, vals2 = vals[:nInit+2], kvstore.Validators()
|
||||
valsEqual(t, vals1, vals2)
|
||||
|
||||
// remove some validators
|
||||
v1, v2, v3 = vals[nInit-2], vals[nInit-1], vals[nInit]
|
||||
v1.Power = 0
|
||||
v2.Power = 0
|
||||
v3.Power = 0
|
||||
diff = []types.Validator{v1, v2, v3}
|
||||
tx1 = MakeValSetChangeTx(v1.PubKey, v1.Power)
|
||||
tx2 = MakeValSetChangeTx(v2.PubKey, v2.Power)
|
||||
tx3 := MakeValSetChangeTx(v3.PubKey, v3.Power)
|
||||
|
||||
makeApplyBlock(t, kvstore, 2, diff, tx1, tx2, tx3)
|
||||
|
||||
vals1 = append(vals[:nInit-2], vals[nInit+1])
|
||||
vals2 = kvstore.Validators()
|
||||
valsEqual(t, vals1, vals2)
|
||||
|
||||
// update some validators
|
||||
v1 = vals[0]
|
||||
if v1.Power == 5 {
|
||||
v1.Power = 6
|
||||
} else {
|
||||
v1.Power = 5
|
||||
}
|
||||
diff = []types.Validator{v1}
|
||||
tx1 = MakeValSetChangeTx(v1.PubKey, v1.Power)
|
||||
|
||||
makeApplyBlock(t, kvstore, 3, diff, tx1)
|
||||
|
||||
vals1 = append([]types.Validator{v1}, vals1[1:]...)
|
||||
vals2 = kvstore.Validators()
|
||||
valsEqual(t, vals1, vals2)
|
||||
|
||||
}
|
||||
|
||||
func makeApplyBlock(t *testing.T, kvstore types.Application, heightInt int, diff []types.Validator, txs ...[]byte) {
|
||||
// make and apply block
|
||||
height := int64(heightInt)
|
||||
hash := []byte("foo")
|
||||
header := types.Header{
|
||||
Height: height,
|
||||
}
|
||||
|
||||
kvstore.BeginBlock(types.RequestBeginBlock{hash, header, nil, nil})
|
||||
for _, tx := range txs {
|
||||
if r := kvstore.DeliverTx(tx); r.IsErr() {
|
||||
t.Fatal(r)
|
||||
}
|
||||
}
|
||||
resEndBlock := kvstore.EndBlock(types.RequestEndBlock{header.Height})
|
||||
kvstore.Commit()
|
||||
|
||||
valsEqual(t, diff, resEndBlock.ValidatorUpdates)
|
||||
|
||||
}
|
||||
|
||||
// order doesn't matter
|
||||
func valsEqual(t *testing.T, vals1, vals2 []types.Validator) {
|
||||
if len(vals1) != len(vals2) {
|
||||
t.Fatalf("vals dont match in len. got %d, expected %d", len(vals2), len(vals1))
|
||||
}
|
||||
sort.Sort(types.Validators(vals1))
|
||||
sort.Sort(types.Validators(vals2))
|
||||
for i, v1 := range vals1 {
|
||||
v2 := vals2[i]
|
||||
if !bytes.Equal(v1.PubKey.Data, v2.PubKey.Data) ||
|
||||
v1.Power != v2.Power {
|
||||
t.Fatalf("vals dont match at index %d. got %X/%d , expected %X/%d", i, v2.PubKey, v2.Power, v1.PubKey, v1.Power)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func makeSocketClientServer(app types.Application, name string) (abcicli.Client, cmn.Service, error) {
|
||||
// Start the listener
|
||||
socket := cmn.Fmt("unix://%s.sock", name)
|
||||
logger := log.TestingLogger()
|
||||
|
||||
server := abciserver.NewSocketServer(socket, app)
|
||||
server.SetLogger(logger.With("module", "abci-server"))
|
||||
if err := server.Start(); err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
|
||||
// Connect to the socket
|
||||
client := abcicli.NewSocketClient(socket, false)
|
||||
client.SetLogger(logger.With("module", "abci-client"))
|
||||
if err := client.Start(); err != nil {
|
||||
server.Stop()
|
||||
return nil, nil, err
|
||||
}
|
||||
|
||||
return client, server, nil
|
||||
}
|
||||
|
||||
func makeGRPCClientServer(app types.Application, name string) (abcicli.Client, cmn.Service, error) {
|
||||
// Start the listener
|
||||
socket := cmn.Fmt("unix://%s.sock", name)
|
||||
logger := log.TestingLogger()
|
||||
|
||||
gapp := types.NewGRPCApplication(app)
|
||||
server := abciserver.NewGRPCServer(socket, gapp)
|
||||
server.SetLogger(logger.With("module", "abci-server"))
|
||||
if err := server.Start(); err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
|
||||
client := abcicli.NewGRPCClient(socket, true)
|
||||
client.SetLogger(logger.With("module", "abci-client"))
|
||||
if err := client.Start(); err != nil {
|
||||
server.Stop()
|
||||
return nil, nil, err
|
||||
}
|
||||
return client, server, nil
|
||||
}
|
||||
|
||||
func TestClientServer(t *testing.T) {
|
||||
// set up socket app
|
||||
kvstore := NewKVStoreApplication()
|
||||
client, server, err := makeSocketClientServer(kvstore, "kvstore-socket")
|
||||
require.Nil(t, err)
|
||||
defer server.Stop()
|
||||
defer client.Stop()
|
||||
|
||||
runClientTests(t, client)
|
||||
|
||||
// set up grpc app
|
||||
kvstore = NewKVStoreApplication()
|
||||
gclient, gserver, err := makeGRPCClientServer(kvstore, "kvstore-grpc")
|
||||
require.Nil(t, err)
|
||||
defer gserver.Stop()
|
||||
defer gclient.Stop()
|
||||
|
||||
runClientTests(t, gclient)
|
||||
}
|
||||
|
||||
func runClientTests(t *testing.T, client abcicli.Client) {
|
||||
// run some tests....
|
||||
key := "abc"
|
||||
value := key
|
||||
tx := []byte(key)
|
||||
testClient(t, client, tx, key, value)
|
||||
|
||||
value = "def"
|
||||
tx = []byte(key + "=" + value)
|
||||
testClient(t, client, tx, key, value)
|
||||
}
|
||||
|
||||
func testClient(t *testing.T, app abcicli.Client, tx []byte, key, value string) {
|
||||
ar, err := app.DeliverTxSync(tx)
|
||||
require.NoError(t, err)
|
||||
require.False(t, ar.IsErr(), ar)
|
||||
// repeating tx doesn't raise error
|
||||
ar, err = app.DeliverTxSync(tx)
|
||||
require.NoError(t, err)
|
||||
require.False(t, ar.IsErr(), ar)
|
||||
|
||||
// make sure query is fine
|
||||
resQuery, err := app.QuerySync(types.RequestQuery{
|
||||
Path: "/store",
|
||||
Data: []byte(key),
|
||||
})
|
||||
require.Nil(t, err)
|
||||
require.Equal(t, code.CodeTypeOK, resQuery.Code)
|
||||
require.Equal(t, value, string(resQuery.Value))
|
||||
|
||||
// make sure proof is fine
|
||||
resQuery, err = app.QuerySync(types.RequestQuery{
|
||||
Path: "/store",
|
||||
Data: []byte(key),
|
||||
Prove: true,
|
||||
})
|
||||
require.Nil(t, err)
|
||||
require.Equal(t, code.CodeTypeOK, resQuery.Code)
|
||||
require.Equal(t, value, string(resQuery.Value))
|
||||
}
|
200	abci/example/kvstore/persistent_kvstore.go	Normal file
@@ -0,0 +1,200 @@
|
||||
package kvstore
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/hex"
|
||||
"fmt"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/tendermint/tendermint/abci/example/code"
|
||||
"github.com/tendermint/tendermint/abci/types"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
dbm "github.com/tendermint/tendermint/libs/db"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
)
|
||||
|
||||
const (
|
||||
ValidatorSetChangePrefix string = "val:"
|
||||
)
|
||||
|
||||
//-----------------------------------------
|
||||
|
||||
var _ types.Application = (*PersistentKVStoreApplication)(nil)
|
||||
|
||||
type PersistentKVStoreApplication struct {
|
||||
app *KVStoreApplication
|
||||
|
||||
// validator set
|
||||
ValUpdates []types.Validator
|
||||
|
||||
logger log.Logger
|
||||
}
|
||||
|
||||
func NewPersistentKVStoreApplication(dbDir string) *PersistentKVStoreApplication {
|
||||
name := "kvstore"
|
||||
db, err := dbm.NewGoLevelDB(name, dbDir)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
state := loadState(db)
|
||||
|
||||
return &PersistentKVStoreApplication{
|
||||
app: &KVStoreApplication{state: state},
|
||||
logger: log.NewNopLogger(),
|
||||
}
|
||||
}
|
||||
|
||||
func (app *PersistentKVStoreApplication) SetLogger(l log.Logger) {
|
||||
app.logger = l
|
||||
}
|
||||
|
||||
func (app *PersistentKVStoreApplication) Info(req types.RequestInfo) types.ResponseInfo {
|
||||
res := app.app.Info(req)
|
||||
res.LastBlockHeight = app.app.state.Height
|
||||
res.LastBlockAppHash = app.app.state.AppHash
|
||||
return res
|
||||
}
|
||||
|
||||
func (app *PersistentKVStoreApplication) SetOption(req types.RequestSetOption) types.ResponseSetOption {
|
||||
return app.app.SetOption(req)
|
||||
}
|
||||
|
||||
// tx is either "val:pubkey/power" or "key=value" or just arbitrary bytes
|
||||
func (app *PersistentKVStoreApplication) DeliverTx(tx []byte) types.ResponseDeliverTx {
|
||||
// if it starts with "val:", update the validator set
|
||||
// format is "val:pubkey/power"
|
||||
if isValidatorTx(tx) {
|
||||
// update validators in the merkle tree
|
||||
// and in app.ValUpdates
|
||||
return app.execValidatorTx(tx)
|
||||
}
|
||||
|
||||
// otherwise, update the key-value store
|
||||
return app.app.DeliverTx(tx)
|
||||
}
|
||||
|
||||
func (app *PersistentKVStoreApplication) CheckTx(tx []byte) types.ResponseCheckTx {
|
||||
return app.app.CheckTx(tx)
|
||||
}
|
||||
|
||||
// Commit will panic if InitChain was not called
|
||||
func (app *PersistentKVStoreApplication) Commit() types.ResponseCommit {
|
||||
return app.app.Commit()
|
||||
}
|
||||
|
||||
func (app *PersistentKVStoreApplication) Query(reqQuery types.RequestQuery) types.ResponseQuery {
|
||||
return app.app.Query(reqQuery)
|
||||
}
|
||||
|
||||
// Save the validators in the merkle tree
|
||||
func (app *PersistentKVStoreApplication) InitChain(req types.RequestInitChain) types.ResponseInitChain {
|
||||
for _, v := range req.Validators {
|
||||
r := app.updateValidator(v)
|
||||
if r.IsErr() {
|
||||
app.logger.Error("Error updating validators", "r", r)
|
||||
}
|
||||
}
|
||||
return types.ResponseInitChain{}
|
||||
}
|
||||
|
||||
// Track the block hash and header information
|
||||
func (app *PersistentKVStoreApplication) BeginBlock(req types.RequestBeginBlock) types.ResponseBeginBlock {
|
||||
// reset valset changes
|
||||
app.ValUpdates = make([]types.Validator, 0)
|
||||
return types.ResponseBeginBlock{}
|
||||
}
|
||||
|
||||
// Update the validator set
|
||||
func (app *PersistentKVStoreApplication) EndBlock(req types.RequestEndBlock) types.ResponseEndBlock {
|
||||
return types.ResponseEndBlock{ValidatorUpdates: app.ValUpdates}
|
||||
}
|
||||
|
||||
//---------------------------------------------
|
||||
// update validators
|
||||
|
||||
func (app *PersistentKVStoreApplication) Validators() (validators []types.Validator) {
|
||||
itr := app.app.state.db.Iterator(nil, nil)
|
||||
for ; itr.Valid(); itr.Next() {
|
||||
if isValidatorTx(itr.Key()) {
|
||||
validator := new(types.Validator)
|
||||
err := types.ReadMessage(bytes.NewBuffer(itr.Value()), validator)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
validators = append(validators, *validator)
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func MakeValSetChangeTx(pubkey types.PubKey, power int64) []byte {
|
||||
return []byte(cmn.Fmt("val:%X/%d", pubkey.Data, power))
|
||||
}
|
||||
|
||||
func isValidatorTx(tx []byte) bool {
|
||||
return strings.HasPrefix(string(tx), ValidatorSetChangePrefix)
|
||||
}
|
||||
|
||||
// format is "val:pubkey/power"
|
||||
// pubkey is raw 32-byte ed25519 key
|
||||
func (app *PersistentKVStoreApplication) execValidatorTx(tx []byte) types.ResponseDeliverTx {
|
||||
tx = tx[len(ValidatorSetChangePrefix):]
|
||||
|
||||
//get the pubkey and power
|
||||
pubKeyAndPower := strings.Split(string(tx), "/")
|
||||
if len(pubKeyAndPower) != 2 {
|
||||
return types.ResponseDeliverTx{
|
||||
Code: code.CodeTypeEncodingError,
|
||||
Log: fmt.Sprintf("Expected 'pubkey/power'. Got %v", pubKeyAndPower)}
|
||||
}
|
||||
pubkeyS, powerS := pubKeyAndPower[0], pubKeyAndPower[1]
|
||||
|
||||
// decode the pubkey
|
||||
pubkey, err := hex.DecodeString(pubkeyS)
|
||||
if err != nil {
|
||||
return types.ResponseDeliverTx{
|
||||
Code: code.CodeTypeEncodingError,
|
||||
Log: fmt.Sprintf("Pubkey (%s) is invalid hex", pubkeyS)}
|
||||
}
|
||||
|
||||
// decode the power
|
||||
power, err := strconv.ParseInt(powerS, 10, 64)
|
||||
if err != nil {
|
||||
return types.ResponseDeliverTx{
|
||||
Code: code.CodeTypeEncodingError,
|
||||
Log: fmt.Sprintf("Power (%s) is not an int", powerS)}
|
||||
}
|
||||
|
||||
// update
|
||||
return app.updateValidator(types.Ed25519Validator(pubkey, int64(power)))
|
||||
}
|
||||
|
||||
// add, update, or remove a validator
|
||||
func (app *PersistentKVStoreApplication) updateValidator(v types.Validator) types.ResponseDeliverTx {
|
||||
key := []byte("val:" + string(v.PubKey.Data))
|
||||
if v.Power == 0 {
|
||||
// remove validator
|
||||
if !app.app.state.db.Has(key) {
|
||||
return types.ResponseDeliverTx{
|
||||
Code: code.CodeTypeUnauthorized,
|
||||
Log: fmt.Sprintf("Cannot remove non-existent validator %X", key)}
|
||||
}
|
||||
app.app.state.db.Delete(key)
|
||||
} else {
|
||||
// add or update validator
|
||||
value := bytes.NewBuffer(make([]byte, 0))
|
||||
if err := types.WriteMessage(&v, value); err != nil {
|
||||
return types.ResponseDeliverTx{
|
||||
Code: code.CodeTypeEncodingError,
|
||||
Log: fmt.Sprintf("Error encoding validator: %v", err)}
|
||||
}
|
||||
app.app.state.db.Set(key, value.Bytes())
|
||||
}
|
||||
|
||||
// we only update the changes array if we successfully updated the tree
|
||||
app.ValUpdates = append(app.ValUpdates, v)
|
||||
|
||||
return types.ResponseDeliverTx{Code: code.CodeTypeOK}
|
||||
}
|
0	abci/example/python/abci/__init__.py	Normal file
50	abci/example/python/abci/msg.py	Normal file
@@ -0,0 +1,50 @@
|
||||
from wire import decode_string
|
||||
|
||||
# map type_byte to message name
|
||||
message_types = {
|
||||
0x01: "echo",
|
||||
0x02: "flush",
|
||||
0x03: "info",
|
||||
0x04: "set_option",
|
||||
0x21: "deliver_tx",
|
||||
0x22: "check_tx",
|
||||
0x23: "commit",
|
||||
0x24: "add_listener",
|
||||
0x25: "rm_listener",
|
||||
}
|
||||
|
||||
# return the decoded arguments of abci messages
|
||||
|
||||
class RequestDecoder():
|
||||
|
||||
def __init__(self, reader):
|
||||
self.reader = reader
|
||||
|
||||
def echo(self):
|
||||
return decode_string(self.reader)
|
||||
|
||||
def flush(self):
|
||||
return
|
||||
|
||||
def info(self):
|
||||
return
|
||||
|
||||
def set_option(self):
|
||||
return decode_string(self.reader), decode_string(self.reader)
|
||||
|
||||
def deliver_tx(self):
|
||||
return decode_string(self.reader)
|
||||
|
||||
def check_tx(self):
|
||||
return decode_string(self.reader)
|
||||
|
||||
def commit(self):
|
||||
return
|
||||
|
||||
def add_listener(self):
|
||||
# TODO
|
||||
return
|
||||
|
||||
def rm_listener(self):
|
||||
# TODO
|
||||
return
|
56	abci/example/python/abci/reader.py	Normal file
@@ -0,0 +1,56 @@
|
||||
|
||||
# Simple read() method around a bytearray
|
||||
|
||||
|
||||
class BytesBuffer():
|
||||
|
||||
def __init__(self, b):
|
||||
self.buf = b
|
||||
self.readCount = 0
|
||||
|
||||
def count(self):
|
||||
return self.readCount
|
||||
|
||||
def reset_count(self):
|
||||
self.readCount = 0
|
||||
|
||||
def size(self):
|
||||
return len(self.buf)
|
||||
|
||||
def peek(self):
|
||||
return self.buf[0]
|
||||
|
||||
def write(self, b):
|
||||
# b should be castable to byte array
|
||||
self.buf += bytearray(b)
|
||||
|
||||
def read(self, n):
|
||||
if len(self.buf) < n:
|
||||
print "reader err: buf less than n"
|
||||
# TODO: exception
|
||||
return
|
||||
self.readCount += n
|
||||
r = self.buf[:n]
|
||||
self.buf = self.buf[n:]
|
||||
return r
|
||||
|
||||
# Buffer bytes off a tcp connection and read them off in chunks
|
||||
|
||||
|
||||
class ConnReader():
|
||||
|
||||
def __init__(self, conn):
|
||||
self.conn = conn
|
||||
self.buf = bytearray()
|
||||
|
||||
# blocking
|
||||
def read(self, n):
|
||||
while n > len(self.buf):
|
||||
moreBuf = self.conn.recv(1024)
|
||||
if not moreBuf:
|
||||
raise IOError("dead connection")
|
||||
self.buf = self.buf + bytearray(moreBuf)
|
||||
|
||||
r = self.buf[:n]
|
||||
self.buf = self.buf[n:]
|
||||
return r
|
202	abci/example/python/abci/server.py	Normal file
@@ -0,0 +1,202 @@
|
||||
import socket
|
||||
import select
|
||||
import sys
|
||||
|
||||
from wire import decode_varint, encode
|
||||
from reader import BytesBuffer
|
||||
from msg import RequestDecoder, message_types
|
||||
|
||||
# hold the asyncronous state of a connection
|
||||
# ie. we may not get enough bytes on one read to decode the message
|
||||
|
||||
class Connection():
|
||||
|
||||
def __init__(self, fd, app):
|
||||
self.fd = fd
|
||||
self.app = app
|
||||
self.recBuf = BytesBuffer(bytearray())
|
||||
self.resBuf = BytesBuffer(bytearray())
|
||||
self.msgLength = 0
|
||||
self.decoder = RequestDecoder(self.recBuf)
|
||||
self.inProgress = False # are we in the middle of a message
|
||||
|
||||
def recv(this):
|
||||
data = this.fd.recv(1024)
|
||||
if not data: # what about len(data) == 0
|
||||
raise IOError("dead connection")
|
||||
this.recBuf.write(data)
|
||||
|
||||
# ABCI server responds to messges by calling methods on the app
|
||||
|
||||
class ABCIServer():
|
||||
|
||||
def __init__(self, app, port=5410):
|
||||
self.app = app
|
||||
# map conn file descriptors to (app, reqBuf, resBuf, msgDecoder)
|
||||
self.appMap = {}
|
||||
|
||||
self.port = port
|
||||
self.listen_backlog = 10
|
||||
|
||||
self.listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
|
||||
self.listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
|
||||
self.listener.setblocking(0)
|
||||
self.listener.bind(('', port))
|
||||
|
||||
self.listener.listen(self.listen_backlog)
|
||||
|
||||
self.shutdown = False
|
||||
|
||||
self.read_list = [self.listener]
|
||||
self.write_list = []
|
||||
|
||||
def handle_new_connection(self, r):
|
||||
new_fd, new_addr = r.accept()
|
||||
new_fd.setblocking(0) # non-blocking
|
||||
self.read_list.append(new_fd)
|
||||
self.write_list.append(new_fd)
|
||||
print 'new connection to', new_addr
|
||||
|
||||
self.appMap[new_fd] = Connection(new_fd, self.app)
|
||||
|
||||
def handle_conn_closed(self, r):
|
||||
self.read_list.remove(r)
|
||||
self.write_list.remove(r)
|
||||
r.close()
|
||||
print "connection closed"
|
||||
|
||||
def handle_recv(self, r):
|
||||
# app, recBuf, resBuf, conn
|
||||
conn = self.appMap[r]
|
||||
while True:
|
||||
try:
|
||||
print "recv loop"
|
||||
# check if we need more data first
|
||||
if conn.inProgress:
|
||||
if (conn.msgLength == 0 or conn.recBuf.size() < conn.msgLength):
|
||||
conn.recv()
|
||||
else:
|
||||
if conn.recBuf.size() == 0:
|
||||
conn.recv()
|
||||
|
||||
conn.inProgress = True
|
||||
|
||||
# see if we have enough to get the message length
|
||||
if conn.msgLength == 0:
|
||||
ll = conn.recBuf.peek()
|
||||
if conn.recBuf.size() < 1 + ll:
|
||||
# we don't have enough bytes to read the length yet
|
||||
return
|
||||
print "decoding msg length"
|
||||
conn.msgLength = decode_varint(conn.recBuf)
|
||||
|
||||
# see if we have enough to decode the message
|
||||
if conn.recBuf.size() < conn.msgLength:
|
||||
return
|
||||
|
||||
# now we can decode the message
|
||||
|
||||
# first read the request type and get the particular msg
|
||||
# decoder
|
||||
typeByte = conn.recBuf.read(1)
|
||||
typeByte = int(typeByte[0])
|
||||
resTypeByte = typeByte + 0x10
|
||||
req_type = message_types[typeByte]
|
||||
|
||||
if req_type == "flush":
|
||||
# messages are length prefixed
|
||||
conn.resBuf.write(encode(1))
|
||||
conn.resBuf.write([resTypeByte])
|
||||
conn.fd.send(str(conn.resBuf.buf))
|
||||
conn.msgLength = 0
|
||||
conn.inProgress = False
|
||||
conn.resBuf = BytesBuffer(bytearray())
|
||||
return
|
||||
|
||||
decoder = getattr(conn.decoder, req_type)
|
||||
|
||||
print "decoding args"
|
||||
req_args = decoder()
|
||||
print "got args", req_args
|
||||
|
||||
# done decoding message
|
||||
conn.msgLength = 0
|
||||
conn.inProgress = False
|
||||
|
||||
req_f = getattr(conn.app, req_type)
|
||||
if req_args is None:
|
||||
res = req_f()
|
||||
elif isinstance(req_args, tuple):
|
||||
res = req_f(*req_args)
|
||||
else:
|
||||
res = req_f(req_args)
|
||||
|
||||
if isinstance(res, tuple):
|
||||
res, ret_code = res
|
||||
else:
|
||||
ret_code = res
|
||||
res = None
|
||||
|
||||
print "called", req_type, "ret code:", ret_code
|
||||
if ret_code != 0:
|
||||
print "non-zero retcode:", ret_code
|
||||
|
||||
if req_type in ("echo", "info"): # these dont return a ret code
|
||||
enc = encode(res)
|
||||
# messages are length prefixed
|
||||
conn.resBuf.write(encode(len(enc) + 1))
|
||||
conn.resBuf.write([resTypeByte])
|
||||
conn.resBuf.write(enc)
|
||||
else:
|
||||
enc, encRet = encode(res), encode(ret_code)
|
||||
# messages are length prefixed
|
||||
conn.resBuf.write(encode(len(enc) + len(encRet) + 1))
|
||||
conn.resBuf.write([resTypeByte])
|
||||
conn.resBuf.write(encRet)
|
||||
conn.resBuf.write(enc)
|
||||
except TypeError as e:
|
||||
print "TypeError on reading from connection:", e
|
||||
self.handle_conn_closed(r)
|
||||
return
|
||||
except ValueError as e:
|
||||
print "ValueError on reading from connection:", e
|
||||
self.handle_conn_closed(r)
|
||||
return
|
||||
except IOError as e:
|
||||
print "IOError on reading from connection:", e
|
||||
self.handle_conn_closed(r)
|
||||
return
|
||||
except Exception as e:
|
||||
# sys.exc_info()[0] # TODO better
|
||||
print "error reading from connection", str(e)
|
||||
self.handle_conn_closed(r)
|
||||
return
|
||||
|
||||
def main_loop(self):
|
||||
while not self.shutdown:
|
||||
r_list, w_list, _ = select.select(
|
||||
self.read_list, self.write_list, [], 2.5)
|
||||
|
||||
for r in r_list:
|
||||
if (r == self.listener):
|
||||
try:
|
||||
self.handle_new_connection(r)
|
||||
# undo adding to read list ...
|
||||
except NameError as e:
|
||||
print "Could not connect due to NameError:", e
|
||||
except TypeError as e:
|
||||
print "Could not connect due to TypeError:", e
|
||||
except:
|
||||
print "Could not connect due to unexpected error:", sys.exc_info()[0]
|
||||
else:
|
||||
self.handle_recv(r)
|
||||
|
||||
def handle_shutdown(self):
|
||||
for r in self.read_list:
|
||||
r.close()
|
||||
for w in self.write_list:
|
||||
try:
|
||||
w.close()
|
||||
except Exception as e:
|
||||
print(e) # TODO: add logging
|
||||
self.shutdown = True
|
115	abci/example/python/abci/wire.py	Normal file
@@ -0,0 +1,115 @@
|
||||
|
||||
# the decoder works off a reader
|
||||
# the encoder returns bytearray
|
||||
|
||||
|
||||
def hex2bytes(h):
|
||||
return bytearray(h.decode('hex'))
|
||||
|
||||
|
||||
def bytes2hex(b):
|
||||
if type(b) in (str, unicode):
|
||||
return "".join([hex(ord(c))[2:].zfill(2) for c in b])
|
||||
else:
|
||||
return bytes2hex(b.decode())
|
||||
|
||||
|
||||
# expects uvarint64 (no crazy big nums!)
|
||||
def uvarint_size(i):
|
||||
if i == 0:
|
||||
return 0
|
||||
for j in xrange(1, 8):
|
||||
if i < 1 << j * 8:
|
||||
return j
|
||||
return 8
|
||||
|
||||
# expects i < 2**size
|
||||
|
||||
|
||||
def encode_big_endian(i, size):
|
||||
if size == 0:
|
||||
return bytearray()
|
||||
return encode_big_endian(i / 256, size - 1) + bytearray([i % 256])
|
||||
|
||||
|
||||
def decode_big_endian(reader, size):
|
||||
if size == 0:
|
||||
return 0
|
||||
firstByte = reader.read(1)[0]
|
||||
return firstByte * (256 ** (size - 1)) + decode_big_endian(reader, size - 1)
|
||||
|
||||
# ints are max 16 bytes long
|
||||
|
||||
|
||||
def encode_varint(i):
|
||||
negate = False
|
||||
if i < 0:
|
||||
negate = True
|
||||
i = -i
|
||||
size = uvarint_size(i)
|
||||
if size == 0:
|
||||
return bytearray([0])
|
||||
big_end = encode_big_endian(i, size)
|
||||
if negate:
|
||||
size += 0xF0
|
||||
return bytearray([size]) + big_end
|
||||
|
||||
# returns the int and whats left of the byte array
|
||||
|
||||
|
||||
def decode_varint(reader):
|
||||
size = reader.read(1)[0]
|
||||
if size == 0:
|
||||
return 0
|
||||
|
||||
negate = True if size > int(0xF0) else False
|
||||
if negate:
|
||||
size = size - 0xF0
|
||||
i = decode_big_endian(reader, size)
|
||||
if negate:
|
||||
i = i * (-1)
|
||||
return i
|
||||
|
||||
|
||||
def encode_string(s):
|
||||
size = encode_varint(len(s))
|
||||
return size + bytearray(s)
|
||||
|
||||
|
||||
def decode_string(reader):
|
||||
length = decode_varint(reader)
|
||||
return str(reader.read(length))
|
||||
|
||||
|
||||
def encode_list(s):
|
||||
b = bytearray()
|
||||
map(b.extend, map(encode, s))
|
||||
return encode_varint(len(s)) + b
|
||||
|
||||
|
||||
def encode(s):
|
||||
if s is None:
|
||||
return bytearray()
|
||||
if isinstance(s, int):
|
||||
return encode_varint(s)
|
||||
elif isinstance(s, str):
|
||||
return encode_string(s)
|
||||
elif isinstance(s, list):
|
||||
return encode_list(s)
|
||||
else:
|
||||
print "UNSUPPORTED TYPE!", type(s), s
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
ns = [100, 100, 1000, 256]
|
||||
ss = [2, 5, 5, 2]
|
||||
bs = map(encode_big_endian, ns, ss)
|
||||
ds = map(decode_big_endian, bs, ss)
|
||||
print ns
|
||||
print [i[0] for i in ds]
|
||||
|
||||
ss = ["abc", "hi there jim", "ok now what"]
|
||||
e = map(encode_string, ss)
|
||||
d = map(decode_string, e)
|
||||
print ss
|
||||
print [i[0] for i in d]
|
82	abci/example/python/app.py	Normal file
@@ -0,0 +1,82 @@
|
||||
import sys
|
||||
|
||||
from abci.wire import hex2bytes, decode_big_endian, encode_big_endian
|
||||
from abci.server import ABCIServer
|
||||
from abci.reader import BytesBuffer
|
||||
|
||||
|
||||
class CounterApplication():
|
||||
|
||||
def __init__(self):
|
||||
sys.exit("The python example is out of date. Upgrading the Python examples is currently left as an exercise to you.")
|
||||
self.hashCount = 0
|
||||
self.txCount = 0
|
||||
self.serial = False
|
||||
|
||||
def echo(self, msg):
|
||||
return msg, 0
|
||||
|
||||
def info(self):
|
||||
return ["hashes:%d, txs:%d" % (self.hashCount, self.txCount)], 0
|
||||
|
||||
def set_option(self, key, value):
|
||||
if key == "serial" and value == "on":
|
||||
self.serial = True
|
||||
return 0
|
||||
|
||||
def deliver_tx(self, txBytes):
|
||||
if self.serial:
|
||||
txByteArray = bytearray(txBytes)
|
||||
if len(txBytes) >= 2 and txBytes[:2] == "0x":
|
||||
txByteArray = hex2bytes(txBytes[2:])
|
||||
txValue = decode_big_endian(
|
||||
BytesBuffer(txByteArray), len(txBytes))
|
||||
if txValue != self.txCount:
|
||||
return None, 6
|
||||
self.txCount += 1
|
||||
return None, 0
|
||||
|
||||
def check_tx(self, txBytes):
|
||||
if self.serial:
|
||||
txByteArray = bytearray(txBytes)
|
||||
if len(txBytes) >= 2 and txBytes[:2] == "0x":
|
||||
txByteArray = hex2bytes(txBytes[2:])
|
||||
txValue = decode_big_endian(
|
||||
BytesBuffer(txByteArray), len(txBytes))
|
||||
if txValue < self.txCount:
|
||||
return 6
|
||||
return 0
|
||||
|
||||
def commit(self):
|
||||
self.hashCount += 1
|
||||
if self.txCount == 0:
|
||||
return "", 0
|
||||
h = encode_big_endian(self.txCount, 8)
|
||||
h.reverse()
|
||||
return str(h), 0
|
||||
|
||||
def add_listener(self):
|
||||
return 0
|
||||
|
||||
def rm_listener(self):
|
||||
return 0
|
||||
|
||||
def event(self):
|
||||
return
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
l = len(sys.argv)
|
||||
if l == 1:
|
||||
port = 26658
|
||||
elif l == 2:
|
||||
port = int(sys.argv[1])
|
||||
else:
|
||||
print "too many arguments"
|
||||
quit()
|
||||
|
||||
print 'ABCI Demo APP (Python)'
|
||||
|
||||
app = CounterApplication()
|
||||
server = ABCIServer(app, port)
|
||||
server.main_loop()
|
0
abci/example/python3/abci/__init__.py
Normal file
50
abci/example/python3/abci/msg.py
Normal file
@ -0,0 +1,50 @@
|
||||
from .wire import decode_string
|
||||
|
||||
# map type_byte to message name
|
||||
message_types = {
|
||||
0x01: "echo",
|
||||
0x02: "flush",
|
||||
0x03: "info",
|
||||
0x04: "set_option",
|
||||
0x21: "deliver_tx",
|
||||
0x22: "check_tx",
|
||||
0x23: "commit",
|
||||
0x24: "add_listener",
|
||||
0x25: "rm_listener",
|
||||
}
|
||||
|
||||
# return the decoded arguments of abci messages
|
||||
|
||||
class RequestDecoder():
|
||||
|
||||
def __init__(self, reader):
|
||||
self.reader = reader
|
||||
|
||||
def echo(self):
|
||||
return decode_string(self.reader)
|
||||
|
||||
def flush(self):
|
||||
return
|
||||
|
||||
def info(self):
|
||||
return
|
||||
|
||||
def set_option(self):
|
||||
return decode_string(self.reader), decode_string(self.reader)
|
||||
|
||||
def deliver_tx(self):
|
||||
return decode_string(self.reader)
|
||||
|
||||
def check_tx(self):
|
||||
return decode_string(self.reader)
|
||||
|
||||
def commit(self):
|
||||
return
|
||||
|
||||
def add_listener(self):
|
||||
# TODO
|
||||
return
|
||||
|
||||
def rm_listener(self):
|
||||
# TODO
|
||||
return
|
56
abci/example/python3/abci/reader.py
Normal file
@ -0,0 +1,56 @@
|
||||
|
||||
# Simple read() method around a bytearray
|
||||
|
||||
|
||||
class BytesBuffer():
|
||||
|
||||
def __init__(self, b):
|
||||
self.buf = b
|
||||
self.readCount = 0
|
||||
|
||||
def count(self):
|
||||
return self.readCount
|
||||
|
||||
def reset_count(self):
|
||||
self.readCount = 0
|
||||
|
||||
def size(self):
|
||||
return len(self.buf)
|
||||
|
||||
def peek(self):
|
||||
return self.buf[0]
|
||||
|
||||
def write(self, b):
|
||||
# b should be castable to byte array
|
||||
self.buf += bytearray(b)
|
||||
|
||||
def read(self, n):
|
||||
if len(self.buf) < n:
|
||||
print("reader err: buf less than n")
|
||||
# TODO: exception
|
||||
return
|
||||
self.readCount += n
|
||||
r = self.buf[:n]
|
||||
self.buf = self.buf[n:]
|
||||
return r
|
||||
|
||||
# Buffer bytes off a tcp connection and read them off in chunks
|
||||
|
||||
|
||||
class ConnReader():
|
||||
|
||||
def __init__(self, conn):
|
||||
self.conn = conn
|
||||
self.buf = bytearray()
|
||||
|
||||
# blocking
|
||||
def read(self, n):
|
||||
while n > len(self.buf):
|
||||
moreBuf = self.conn.recv(1024)
|
||||
if not moreBuf:
|
||||
raise IOError("dead connection")
|
||||
self.buf = self.buf + bytearray(moreBuf)
|
||||
|
||||
r = self.buf[:n]
|
||||
self.buf = self.buf[n:]
|
||||
return r
|
196
abci/example/python3/abci/server.py
Normal file
@ -0,0 +1,196 @@
|
||||
import socket
|
||||
import select
|
||||
import sys
|
||||
import logging
|
||||
|
||||
from .wire import decode_varint, encode
|
||||
from .reader import BytesBuffer
|
||||
from .msg import RequestDecoder, message_types
|
||||
|
||||
# hold the asynchronous state of a connection
|
||||
# ie. we may not get enough bytes on one read to decode the message
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class Connection():
|
||||
|
||||
def __init__(self, fd, app):
|
||||
self.fd = fd
|
||||
self.app = app
|
||||
self.recBuf = BytesBuffer(bytearray())
|
||||
self.resBuf = BytesBuffer(bytearray())
|
||||
self.msgLength = 0
|
||||
self.decoder = RequestDecoder(self.recBuf)
|
||||
self.inProgress = False # are we in the middle of a message
|
||||
|
||||
def recv(this):
|
||||
data = this.fd.recv(1024)
|
||||
if not data: # what about len(data) == 0
|
||||
raise IOError("dead connection")
|
||||
this.recBuf.write(data)
|
||||
|
||||
# ABCI server responds to messages by calling methods on the app
|
||||
|
||||
class ABCIServer():
|
||||
|
||||
def __init__(self, app, port=5410):
|
||||
self.app = app
|
||||
# map conn file descriptors to (app, reqBuf, resBuf, msgDecoder)
|
||||
self.appMap = {}
|
||||
|
||||
self.port = port
|
||||
self.listen_backlog = 10
|
||||
|
||||
self.listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
|
||||
self.listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
|
||||
self.listener.setblocking(0)
|
||||
self.listener.bind(('', port))
|
||||
|
||||
self.listener.listen(self.listen_backlog)
|
||||
|
||||
self.shutdown = False
|
||||
|
||||
self.read_list = [self.listener]
|
||||
self.write_list = []
|
||||
|
||||
def handle_new_connection(self, r):
|
||||
new_fd, new_addr = r.accept()
|
||||
new_fd.setblocking(0) # non-blocking
|
||||
self.read_list.append(new_fd)
|
||||
self.write_list.append(new_fd)
|
||||
print('new connection to', new_addr)
|
||||
|
||||
self.appMap[new_fd] = Connection(new_fd, self.app)
|
||||
|
||||
def handle_conn_closed(self, r):
|
||||
self.read_list.remove(r)
|
||||
self.write_list.remove(r)
|
||||
r.close()
|
||||
print("connection closed")
|
||||
|
||||
def handle_recv(self, r):
|
||||
# app, recBuf, resBuf, conn
|
||||
conn = self.appMap[r]
|
||||
while True:
|
||||
try:
|
||||
print("recv loop")
|
||||
# check if we need more data first
|
||||
if conn.inProgress:
|
||||
if (conn.msgLength == 0 or conn.recBuf.size() < conn.msgLength):
|
||||
conn.recv()
|
||||
else:
|
||||
if conn.recBuf.size() == 0:
|
||||
conn.recv()
|
||||
|
||||
conn.inProgress = True
|
||||
|
||||
# see if we have enough to get the message length
|
||||
if conn.msgLength == 0:
|
||||
ll = conn.recBuf.peek()
|
||||
if conn.recBuf.size() < 1 + ll:
|
||||
# we don't have enough bytes to read the length yet
|
||||
return
|
||||
print("decoding msg length")
|
||||
conn.msgLength = decode_varint(conn.recBuf)
|
||||
|
||||
# see if we have enough to decode the message
|
||||
if conn.recBuf.size() < conn.msgLength:
|
||||
return
|
||||
|
||||
# now we can decode the message
|
||||
|
||||
# first read the request type and get the particular msg
|
||||
# decoder
|
||||
typeByte = conn.recBuf.read(1)
|
||||
typeByte = int(typeByte[0])
|
||||
resTypeByte = typeByte + 0x10
|
||||
req_type = message_types[typeByte]
|
||||
|
||||
if req_type == "flush":
|
||||
# messages are length prefixed
|
||||
conn.resBuf.write(encode(1))
|
||||
conn.resBuf.write([resTypeByte])
|
||||
conn.fd.send(conn.resBuf.buf)
|
||||
conn.msgLength = 0
|
||||
conn.inProgress = False
|
||||
conn.resBuf = BytesBuffer(bytearray())
|
||||
return
|
||||
|
||||
decoder = getattr(conn.decoder, req_type)
|
||||
|
||||
print("decoding args")
|
||||
req_args = decoder()
|
||||
print("got args", req_args)
|
||||
|
||||
# done decoding message
|
||||
conn.msgLength = 0
|
||||
conn.inProgress = False
|
||||
|
||||
req_f = getattr(conn.app, req_type)
|
||||
if req_args is None:
|
||||
res = req_f()
|
||||
elif isinstance(req_args, tuple):
|
||||
res = req_f(*req_args)
|
||||
else:
|
||||
res = req_f(req_args)
|
||||
|
||||
if isinstance(res, tuple):
|
||||
res, ret_code = res
|
||||
else:
|
||||
ret_code = res
|
||||
res = None
|
||||
|
||||
print("called", req_type, "ret code:", ret_code, 'res:', res)
|
||||
if ret_code != 0:
|
||||
print("non-zero retcode:", ret_code)
|
||||
|
||||
if req_type in ("echo", "info"): # these dont return a ret code
|
||||
enc = encode(res)
|
||||
# messages are length prefixed
|
||||
conn.resBuf.write(encode(len(enc) + 1))
|
||||
conn.resBuf.write([resTypeByte])
|
||||
conn.resBuf.write(enc)
|
||||
else:
|
||||
enc, encRet = encode(res), encode(ret_code)
|
||||
# messages are length prefixed
|
||||
conn.resBuf.write(encode(len(enc) + len(encRet) + 1))
|
||||
conn.resBuf.write([resTypeByte])
|
||||
conn.resBuf.write(encRet)
|
||||
conn.resBuf.write(enc)
|
||||
except IOError as e:
|
||||
print("IOError on reading from connection:", e)
|
||||
self.handle_conn_closed(r)
|
||||
return
|
||||
except Exception as e:
|
||||
logger.exception("error reading from connection")
|
||||
self.handle_conn_closed(r)
|
||||
return
|
||||
|
||||
def main_loop(self):
|
||||
while not self.shutdown:
|
||||
r_list, w_list, _ = select.select(
|
||||
self.read_list, self.write_list, [], 2.5)
|
||||
|
||||
for r in r_list:
|
||||
if (r == self.listener):
|
||||
try:
|
||||
self.handle_new_connection(r)
|
||||
# undo adding to read list ...
|
||||
except NameError as e:
|
||||
print("Could not connect due to NameError:", e)
|
||||
except TypeError as e:
|
||||
print("Could not connect due to TypeError:", e)
|
||||
except:
|
||||
print("Could not connect due to unexpected error:", sys.exc_info()[0])
|
||||
else:
|
||||
self.handle_recv(r)
|
||||
|
||||
def handle_shutdown(self):
|
||||
for r in self.read_list:
|
||||
r.close()
|
||||
for w in self.write_list:
|
||||
try:
|
||||
w.close()
|
||||
except Exception as e:
|
||||
print(e) # TODO: add logging
|
||||
self.shutdown = True
|
119
abci/example/python3/abci/wire.py
Normal file
@ -0,0 +1,119 @@
|
||||
|
||||
# the decoder works off a reader
|
||||
# the encoder returns bytearray
|
||||
|
||||
|
||||
def hex2bytes(h):
|
||||
return bytearray.fromhex(h)
|
||||
|
||||
|
||||
def bytes2hex(b):
|
||||
if isinstance(b, str):
|
||||
return "".join([hex(ord(c))[2:].zfill(2) for c in b])
|
||||
else:
|
||||
return bytes2hex(b.decode())
|
||||
|
||||
|
||||
# expects uvarint64 (no crazy big nums!)
|
||||
def uvarint_size(i):
|
||||
if i == 0:
|
||||
return 0
|
||||
for j in range(1, 8):
|
||||
if i < 1 << j * 8:
|
||||
return j
|
||||
return 8
|
||||
|
||||
# expects i < 2**size
|
||||
|
||||
|
||||
def encode_big_endian(i, size):
|
||||
if size == 0:
|
||||
return bytearray()
|
||||
return encode_big_endian(i // 256, size - 1) + bytearray([i % 256])
|
||||
|
||||
|
||||
def decode_big_endian(reader, size):
|
||||
if size == 0:
|
||||
return 0
|
||||
firstByte = reader.read(1)[0]
|
||||
return firstByte * (256 ** (size - 1)) + decode_big_endian(reader, size - 1)
|
||||
|
||||
# ints are max 16 bytes long
|
||||
|
||||
|
||||
def encode_varint(i):
|
||||
negate = False
|
||||
if i < 0:
|
||||
negate = True
|
||||
i = -i
|
||||
size = uvarint_size(i)
|
||||
if size == 0:
|
||||
return bytearray([0])
|
||||
big_end = encode_big_endian(i, size)
|
||||
if negate:
|
||||
size += 0xF0
|
||||
return bytearray([size]) + big_end
|
||||
|
||||
# returns the int and whats left of the byte array
|
||||
|
||||
|
||||
def decode_varint(reader):
|
||||
size = reader.read(1)[0]
|
||||
if size == 0:
|
||||
return 0
|
||||
|
||||
negate = True if size > int(0xF0) else False
|
||||
if negate:
|
||||
size = size - 0xF0
|
||||
i = decode_big_endian(reader, size)
|
||||
if negate:
|
||||
i = i * (-1)
|
||||
return i
|
||||
|
||||
|
||||
def encode_string(s):
|
||||
size = encode_varint(len(s))
|
||||
return size + bytearray(s, 'utf8')
|
||||
|
||||
|
||||
def decode_string(reader):
|
||||
length = decode_varint(reader)
|
||||
raw_data = reader.read(length)
|
||||
return raw_data.decode()
|
||||
|
||||
|
||||
def encode_list(s):
|
||||
b = bytearray()
|
||||
list(map(b.extend, list(map(encode, s))))
|
||||
return encode_varint(len(s)) + b
|
||||
|
||||
|
||||
def encode(s):
|
||||
print('encoding', repr(s))
|
||||
if s is None:
|
||||
return bytearray()
|
||||
if isinstance(s, int):
|
||||
return encode_varint(s)
|
||||
elif isinstance(s, str):
|
||||
return encode_string(s)
|
||||
elif isinstance(s, list):
|
||||
return encode_list(s)
|
||||
elif isinstance(s, bytearray):
|
||||
return encode_string(s)
|
||||
else:
|
||||
print("UNSUPPORTED TYPE!", type(s), s)
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
ns = [100, 100, 1000, 256]
|
||||
ss = [2, 5, 5, 2]
|
||||
bs = list(map(encode_big_endian, ns, ss))
|
||||
ds = list(map(decode_big_endian, bs, ss))
|
||||
print(ns)
|
||||
print([i[0] for i in ds])
|
||||
|
||||
ss = ["abc", "hi there jim", "ok now what"]
|
||||
e = list(map(encode_string, ss))
|
||||
d = list(map(decode_string, e))
|
||||
print(ss)
|
||||
print([i[0] for i in d])
|
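The wire format in the Python example above is a simple size-prefixed big-endian encoding: one size byte (with 0xF0 added when the value is negative) followed by the magnitude in big-endian bytes. The following is a minimal Go sketch of the same scheme, written only to mirror the Python code; the function names are illustrative and not part of the repository:

```go
package main

import (
	"bytes"
	"fmt"
)

// encodeVarint mirrors encode_varint in the Python example:
// one size byte (plus 0xF0 if the value is negative), then the
// big-endian magnitude.
func encodeVarint(i int64) []byte {
	negate := false
	if i < 0 {
		negate = true
		i = -i
	}
	// count how many bytes the magnitude needs
	size := 0
	for v := i; v > 0; v >>= 8 {
		size++
	}
	if size == 0 {
		return []byte{0}
	}
	buf := make([]byte, size)
	for j := size - 1; j >= 0; j-- {
		buf[j] = byte(i & 0xFF)
		i >>= 8
	}
	prefix := byte(size)
	if negate {
		prefix += 0xF0
	}
	return append([]byte{prefix}, buf...)
}

// decodeVarint mirrors decode_varint in the Python example.
func decodeVarint(r *bytes.Reader) (int64, error) {
	sizeByte, err := r.ReadByte()
	if err != nil {
		return 0, err
	}
	if sizeByte == 0 {
		return 0, nil
	}
	negate := sizeByte > 0xF0
	size := int(sizeByte)
	if negate {
		size -= 0xF0
	}
	var i int64
	for j := 0; j < size; j++ {
		b, err := r.ReadByte()
		if err != nil {
			return 0, err
		}
		i = i<<8 | int64(b)
	}
	if negate {
		i = -i
	}
	return i, nil
}

func main() {
	enc := encodeVarint(-1000)
	dec, _ := decodeVarint(bytes.NewReader(enc))
	fmt.Printf("% X -> %d\n", enc, dec) // F2 03 E8 -> -1000
}
```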
82
abci/example/python3/app.py
Normal file
@ -0,0 +1,82 @@
|
||||
import sys
|
||||
|
||||
from abci.wire import hex2bytes, decode_big_endian, encode_big_endian
|
||||
from abci.server import ABCIServer
|
||||
from abci.reader import BytesBuffer
|
||||
|
||||
|
||||
class CounterApplication():
|
||||
|
||||
def __init__(self):
|
||||
sys.exit("The python example is out of date. Upgrading the Python examples is currently left as an exercise to you.")
|
||||
self.hashCount = 0
|
||||
self.txCount = 0
|
||||
self.serial = False
|
||||
|
||||
def echo(self, msg):
|
||||
return msg, 0
|
||||
|
||||
def info(self):
|
||||
return ["hashes:%d, txs:%d" % (self.hashCount, self.txCount)], 0
|
||||
|
||||
def set_option(self, key, value):
|
||||
if key == "serial" and value == "on":
|
||||
self.serial = True
|
||||
return 0
|
||||
|
||||
def deliver_tx(self, txBytes):
|
||||
if self.serial:
|
||||
txByteArray = bytearray(txBytes)
|
||||
if len(txBytes) >= 2 and txBytes[:2] == "0x":
|
||||
txByteArray = hex2bytes(txBytes[2:])
|
||||
txValue = decode_big_endian(
|
||||
BytesBuffer(txByteArray), len(txBytes))
|
||||
if txValue != self.txCount:
|
||||
return None, 6
|
||||
self.txCount += 1
|
||||
return None, 0
|
||||
|
||||
def check_tx(self, txBytes):
|
||||
if self.serial:
|
||||
txByteArray = bytearray(txBytes)
|
||||
if len(txBytes) >= 2 and txBytes[:2] == "0x":
|
||||
txByteArray = hex2bytes(txBytes[2:])
|
||||
txValue = decode_big_endian(
|
||||
BytesBuffer(txByteArray), len(txBytes))
|
||||
if txValue < self.txCount:
|
||||
return 6
|
||||
return 0
|
||||
|
||||
def commit(self):
|
||||
self.hashCount += 1
|
||||
if self.txCount == 0:
|
||||
return "", 0
|
||||
h = encode_big_endian(self.txCount, 8)
|
||||
h.reverse()
|
||||
return h.decode(), 0
|
||||
|
||||
def add_listener(self):
|
||||
return 0
|
||||
|
||||
def rm_listener(self):
|
||||
return 0
|
||||
|
||||
def event(self):
|
||||
return
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
l = len(sys.argv)
|
||||
if l == 1:
|
||||
port = 26658
|
||||
elif l == 2:
|
||||
port = int(sys.argv[1])
|
||||
else:
|
||||
print("too many arguments")
|
||||
quit()
|
||||
|
||||
print('ABCI Demo APP (Python)')
|
||||
|
||||
app = CounterApplication()
|
||||
server = ABCIServer(app, port)
|
||||
server.main_loop()
|
57
abci/server/grpc_server.go
Normal file
@ -0,0 +1,57 @@
|
||||
package server
|
||||
|
||||
import (
|
||||
"net"
|
||||
|
||||
"google.golang.org/grpc"
|
||||
|
||||
"github.com/tendermint/tendermint/abci/types"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
)
|
||||
|
||||
type GRPCServer struct {
|
||||
cmn.BaseService
|
||||
|
||||
proto string
|
||||
addr string
|
||||
listener net.Listener
|
||||
server *grpc.Server
|
||||
|
||||
app types.ABCIApplicationServer
|
||||
}
|
||||
|
||||
// NewGRPCServer returns a new gRPC ABCI server
|
||||
func NewGRPCServer(protoAddr string, app types.ABCIApplicationServer) cmn.Service {
|
||||
proto, addr := cmn.ProtocolAndAddress(protoAddr)
|
||||
s := &GRPCServer{
|
||||
proto: proto,
|
||||
addr: addr,
|
||||
listener: nil,
|
||||
app: app,
|
||||
}
|
||||
s.BaseService = *cmn.NewBaseService(nil, "ABCIServer", s)
|
||||
return s
|
||||
}
|
||||
|
||||
// OnStart starts the gRPC service
|
||||
func (s *GRPCServer) OnStart() error {
|
||||
if err := s.BaseService.OnStart(); err != nil {
|
||||
return err
|
||||
}
|
||||
ln, err := net.Listen(s.proto, s.addr)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
s.Logger.Info("Listening", "proto", s.proto, "addr", s.addr)
|
||||
s.listener = ln
|
||||
s.server = grpc.NewServer()
|
||||
types.RegisterABCIApplicationServer(s.server, s.app)
|
||||
go s.server.Serve(s.listener)
|
||||
return nil
|
||||
}
|
||||
|
||||
// OnStop stops the gRPC server
|
||||
func (s *GRPCServer) OnStop() {
|
||||
s.BaseService.OnStop()
|
||||
s.server.Stop()
|
||||
}
|
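Because the server registers the generated ABCIApplication gRPC service, a client can drive the app with the matching generated client. This is only a sketch under the assumption that the protoc-generated constructor types.NewABCIApplicationClient exists (it is not shown in this diff); the address and the insecure dial option are illustrative:

```go
package main

import (
	"fmt"
	"log"

	"golang.org/x/net/context"
	"google.golang.org/grpc"

	"github.com/tendermint/tendermint/abci/types"
)

func main() {
	// Dial an ABCI gRPC server started elsewhere (address is an example).
	conn, err := grpc.Dial("127.0.0.1:26658", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// NewABCIApplicationClient is assumed to be the protoc-generated
	// client constructor for the ABCIApplication service.
	client := types.NewABCIApplicationClient(conn)

	res, err := client.Echo(context.Background(), &types.RequestEcho{Message: "hello"})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(res.Message)
}
```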
31
abci/server/server.go
Normal file
@ -0,0 +1,31 @@
|
||||
/*
|
||||
Package server is used to start a new ABCI server.
|
||||
|
||||
It contains two server implementations:
|
||||
* gRPC server
|
||||
* socket server
|
||||
|
||||
*/
|
||||
|
||||
package server
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
"github.com/tendermint/tendermint/abci/types"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
)
|
||||
|
||||
func NewServer(protoAddr, transport string, app types.Application) (cmn.Service, error) {
|
||||
var s cmn.Service
|
||||
var err error
|
||||
switch transport {
|
||||
case "socket":
|
||||
s = NewSocketServer(protoAddr, app)
|
||||
case "grpc":
|
||||
s = NewGRPCServer(protoAddr, types.NewGRPCApplication(app))
|
||||
default:
|
||||
err = fmt.Errorf("Unknown server type %s", transport)
|
||||
}
|
||||
return s, err
|
||||
}
|
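A minimal sketch of wiring NewServer together with the no-op BaseApplication from abci/types (shown later in this diff); the address and transport are example values:

```go
package main

import (
	"log"

	abciserver "github.com/tendermint/tendermint/abci/server"
	"github.com/tendermint/tendermint/abci/types"
)

func main() {
	// BaseApplication is the no-op Application; a real app implements
	// the types.Application interface itself.
	app := types.NewBaseApplication()

	// "socket" selects NewSocketServer; "grpc" would select NewGRPCServer.
	srv, err := abciserver.NewServer("tcp://127.0.0.1:26658", "socket", app)
	if err != nil {
		log.Fatal(err)
	}
	if err := srv.Start(); err != nil {
		log.Fatal(err)
	}
	defer srv.Stop()

	// Block forever; a real binary would wait for a shutdown signal instead.
	select {}
}
```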
226
abci/server/socket_server.go
Normal file
@ -0,0 +1,226 @@
|
||||
package server
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"fmt"
|
||||
"io"
|
||||
"net"
|
||||
"sync"
|
||||
|
||||
"github.com/tendermint/tendermint/abci/types"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
)
|
||||
|
||||
// var maxNumberConnections = 2
|
||||
|
||||
type SocketServer struct {
|
||||
cmn.BaseService
|
||||
|
||||
proto string
|
||||
addr string
|
||||
listener net.Listener
|
||||
|
||||
connsMtx sync.Mutex
|
||||
conns map[int]net.Conn
|
||||
nextConnID int
|
||||
|
||||
appMtx sync.Mutex
|
||||
app types.Application
|
||||
}
|
||||
|
||||
func NewSocketServer(protoAddr string, app types.Application) cmn.Service {
|
||||
proto, addr := cmn.ProtocolAndAddress(protoAddr)
|
||||
s := &SocketServer{
|
||||
proto: proto,
|
||||
addr: addr,
|
||||
listener: nil,
|
||||
app: app,
|
||||
conns: make(map[int]net.Conn),
|
||||
}
|
||||
s.BaseService = *cmn.NewBaseService(nil, "ABCIServer", s)
|
||||
return s
|
||||
}
|
||||
|
||||
func (s *SocketServer) OnStart() error {
|
||||
if err := s.BaseService.OnStart(); err != nil {
|
||||
return err
|
||||
}
|
||||
ln, err := net.Listen(s.proto, s.addr)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
s.listener = ln
|
||||
go s.acceptConnectionsRoutine()
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *SocketServer) OnStop() {
|
||||
s.BaseService.OnStop()
|
||||
if err := s.listener.Close(); err != nil {
|
||||
s.Logger.Error("Error closing listener", "err", err)
|
||||
}
|
||||
|
||||
s.connsMtx.Lock()
|
||||
defer s.connsMtx.Unlock()
|
||||
for id, conn := range s.conns {
|
||||
delete(s.conns, id)
|
||||
if err := conn.Close(); err != nil {
|
||||
s.Logger.Error("Error closing connection", "id", id, "conn", conn, "err", err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (s *SocketServer) addConn(conn net.Conn) int {
|
||||
s.connsMtx.Lock()
|
||||
defer s.connsMtx.Unlock()
|
||||
|
||||
connID := s.nextConnID
|
||||
s.nextConnID++
|
||||
s.conns[connID] = conn
|
||||
|
||||
return connID
|
||||
}
|
||||
|
||||
// deletes conn even if close errs
|
||||
func (s *SocketServer) rmConn(connID int) error {
|
||||
s.connsMtx.Lock()
|
||||
defer s.connsMtx.Unlock()
|
||||
|
||||
conn, ok := s.conns[connID]
|
||||
if !ok {
|
||||
return fmt.Errorf("Connection %d does not exist", connID)
|
||||
}
|
||||
|
||||
delete(s.conns, connID)
|
||||
return conn.Close()
|
||||
}
|
||||
|
||||
func (s *SocketServer) acceptConnectionsRoutine() {
|
||||
for {
|
||||
// Accept a connection
|
||||
s.Logger.Info("Waiting for new connection...")
|
||||
conn, err := s.listener.Accept()
|
||||
if err != nil {
|
||||
if !s.IsRunning() {
|
||||
return // Ignore error from listener closing.
|
||||
}
|
||||
s.Logger.Error("Failed to accept connection: " + err.Error())
|
||||
continue
|
||||
}
|
||||
|
||||
s.Logger.Info("Accepted a new connection")
|
||||
|
||||
connID := s.addConn(conn)
|
||||
|
||||
closeConn := make(chan error, 2) // Push to signal connection closed
|
||||
responses := make(chan *types.Response, 1000) // A channel to buffer responses
|
||||
|
||||
// Read requests from conn and deal with them
|
||||
go s.handleRequests(closeConn, conn, responses)
|
||||
// Pull responses from 'responses' and write them to conn.
|
||||
go s.handleResponses(closeConn, conn, responses)
|
||||
|
||||
// Wait until signal to close connection
|
||||
go s.waitForClose(closeConn, connID)
|
||||
}
|
||||
}
|
||||
|
||||
func (s *SocketServer) waitForClose(closeConn chan error, connID int) {
|
||||
err := <-closeConn
|
||||
if err == io.EOF {
|
||||
s.Logger.Error("Connection was closed by client")
|
||||
} else if err != nil {
|
||||
s.Logger.Error("Connection error", "error", err)
|
||||
} else {
|
||||
// never happens
|
||||
s.Logger.Error("Connection was closed.")
|
||||
}
|
||||
|
||||
// Close the connection
|
||||
if err := s.rmConn(connID); err != nil {
|
||||
s.Logger.Error("Error in closing connection", "error", err)
|
||||
}
|
||||
}
|
||||
|
||||
// Read requests from conn and deal with them
|
||||
func (s *SocketServer) handleRequests(closeConn chan error, conn net.Conn, responses chan<- *types.Response) {
|
||||
var count int
|
||||
var bufReader = bufio.NewReader(conn)
|
||||
for {
|
||||
|
||||
var req = &types.Request{}
|
||||
err := types.ReadMessage(bufReader, req)
|
||||
if err != nil {
|
||||
if err == io.EOF {
|
||||
closeConn <- err
|
||||
} else {
|
||||
closeConn <- fmt.Errorf("Error reading message: %v", err.Error())
|
||||
}
|
||||
return
|
||||
}
|
||||
s.appMtx.Lock()
|
||||
count++
|
||||
s.handleRequest(req, responses)
|
||||
s.appMtx.Unlock()
|
||||
}
|
||||
}
|
||||
|
||||
func (s *SocketServer) handleRequest(req *types.Request, responses chan<- *types.Response) {
|
||||
switch r := req.Value.(type) {
|
||||
case *types.Request_Echo:
|
||||
responses <- types.ToResponseEcho(r.Echo.Message)
|
||||
case *types.Request_Flush:
|
||||
responses <- types.ToResponseFlush()
|
||||
case *types.Request_Info:
|
||||
res := s.app.Info(*r.Info)
|
||||
responses <- types.ToResponseInfo(res)
|
||||
case *types.Request_SetOption:
|
||||
res := s.app.SetOption(*r.SetOption)
|
||||
responses <- types.ToResponseSetOption(res)
|
||||
case *types.Request_DeliverTx:
|
||||
res := s.app.DeliverTx(r.DeliverTx.Tx)
|
||||
responses <- types.ToResponseDeliverTx(res)
|
||||
case *types.Request_CheckTx:
|
||||
res := s.app.CheckTx(r.CheckTx.Tx)
|
||||
responses <- types.ToResponseCheckTx(res)
|
||||
case *types.Request_Commit:
|
||||
res := s.app.Commit()
|
||||
responses <- types.ToResponseCommit(res)
|
||||
case *types.Request_Query:
|
||||
res := s.app.Query(*r.Query)
|
||||
responses <- types.ToResponseQuery(res)
|
||||
case *types.Request_InitChain:
|
||||
res := s.app.InitChain(*r.InitChain)
|
||||
responses <- types.ToResponseInitChain(res)
|
||||
case *types.Request_BeginBlock:
|
||||
res := s.app.BeginBlock(*r.BeginBlock)
|
||||
responses <- types.ToResponseBeginBlock(res)
|
||||
case *types.Request_EndBlock:
|
||||
res := s.app.EndBlock(*r.EndBlock)
|
||||
responses <- types.ToResponseEndBlock(res)
|
||||
default:
|
||||
responses <- types.ToResponseException("Unknown request")
|
||||
}
|
||||
}
|
||||
|
||||
// Pull responses from 'responses' and write them to conn.
|
||||
func (s *SocketServer) handleResponses(closeConn chan error, conn net.Conn, responses <-chan *types.Response) {
|
||||
var count int
|
||||
var bufWriter = bufio.NewWriter(conn)
|
||||
for {
|
||||
var res = <-responses
|
||||
err := types.WriteMessage(res, bufWriter)
|
||||
if err != nil {
|
||||
closeConn <- fmt.Errorf("Error writing message: %v", err.Error())
|
||||
return
|
||||
}
|
||||
if _, ok := res.Value.(*types.Response_Flush); ok {
|
||||
err = bufWriter.Flush()
|
||||
if err != nil {
|
||||
closeConn <- fmt.Errorf("Error flushing write buffer: %v", err.Error())
|
||||
return
|
||||
}
|
||||
}
|
||||
count++
|
||||
}
|
||||
}
|
1
abci/tests/benchmarks/blank.go
Normal file
@ -0,0 +1 @@
|
||||
package benchmarks
|
55
abci/tests/benchmarks/parallel/parallel.go
Normal file
@ -0,0 +1,55 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"fmt"
|
||||
"log"
|
||||
|
||||
"github.com/tendermint/tendermint/abci/types"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
)
|
||||
|
||||
func main() {
|
||||
|
||||
conn, err := cmn.Connect("unix://test.sock")
|
||||
if err != nil {
|
||||
log.Fatal(err.Error())
|
||||
}
|
||||
|
||||
// Read a bunch of responses
|
||||
go func() {
|
||||
counter := 0
|
||||
for {
|
||||
var res = &types.Response{}
|
||||
err := types.ReadMessage(conn, res)
|
||||
if err != nil {
|
||||
log.Fatal(err.Error())
|
||||
}
|
||||
counter++
|
||||
if counter%1000 == 0 {
|
||||
fmt.Println("Read", counter)
|
||||
}
|
||||
}
|
||||
}()
|
||||
|
||||
// Write a bunch of requests
|
||||
counter := 0
|
||||
for i := 0; ; i++ {
|
||||
var bufWriter = bufio.NewWriter(conn)
|
||||
var req = types.ToRequestEcho("foobar")
|
||||
|
||||
err := types.WriteMessage(req, bufWriter)
|
||||
if err != nil {
|
||||
log.Fatal(err.Error())
|
||||
}
|
||||
err = bufWriter.Flush()
|
||||
if err != nil {
|
||||
log.Fatal(err.Error())
|
||||
}
|
||||
|
||||
counter++
|
||||
if counter%1000 == 0 {
|
||||
fmt.Println("Write", counter)
|
||||
}
|
||||
}
|
||||
}
|
69
abci/tests/benchmarks/simple/simple.go
Normal file
@ -0,0 +1,69 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"fmt"
|
||||
"log"
|
||||
"net"
|
||||
"reflect"
|
||||
|
||||
"github.com/tendermint/tendermint/abci/types"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
)
|
||||
|
||||
func main() {
|
||||
|
||||
conn, err := cmn.Connect("unix://test.sock")
|
||||
if err != nil {
|
||||
log.Fatal(err.Error())
|
||||
}
|
||||
|
||||
// Make a bunch of requests
|
||||
counter := 0
|
||||
for i := 0; ; i++ {
|
||||
req := types.ToRequestEcho("foobar")
|
||||
_, err := makeRequest(conn, req)
|
||||
if err != nil {
|
||||
log.Fatal(err.Error())
|
||||
}
|
||||
counter++
|
||||
if counter%1000 == 0 {
|
||||
fmt.Println(counter)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func makeRequest(conn net.Conn, req *types.Request) (*types.Response, error) {
|
||||
var bufWriter = bufio.NewWriter(conn)
|
||||
|
||||
// Write desired request
|
||||
err := types.WriteMessage(req, bufWriter)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
err = types.WriteMessage(types.ToRequestFlush(), bufWriter)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
err = bufWriter.Flush()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Read desired response
|
||||
var res = &types.Response{}
|
||||
err = types.ReadMessage(conn, res)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
var resFlush = &types.Response{}
|
||||
err = types.ReadMessage(conn, resFlush)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if _, ok := resFlush.Value.(*types.Response_Flush); !ok {
|
||||
return nil, fmt.Errorf("Expected flush response but got something else: %v", reflect.TypeOf(resFlush))
|
||||
}
|
||||
|
||||
return res, nil
|
||||
}
|
27
abci/tests/client_server_test.go
Normal file
@ -0,0 +1,27 @@
|
||||
package tests
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
|
||||
abciclient "github.com/tendermint/tendermint/abci/client"
|
||||
"github.com/tendermint/tendermint/abci/example/kvstore"
|
||||
abciserver "github.com/tendermint/tendermint/abci/server"
|
||||
)
|
||||
|
||||
func TestClientServerNoAddrPrefix(t *testing.T) {
|
||||
addr := "localhost:26658"
|
||||
transport := "socket"
|
||||
app := kvstore.NewKVStoreApplication()
|
||||
|
||||
server, err := abciserver.NewServer(addr, transport, app)
|
||||
assert.NoError(t, err, "expected no error on NewServer")
|
||||
err = server.Start()
|
||||
assert.NoError(t, err, "expected no error on server.Start")
|
||||
|
||||
client, err := abciclient.NewClient(addr, transport, true)
|
||||
assert.NoError(t, err, "expected no error on NewClient")
|
||||
err = client.Start()
|
||||
assert.NoError(t, err, "expected no error on client.Start")
|
||||
}
|
96
abci/tests/server/client.go
Normal file
@ -0,0 +1,96 @@
|
||||
package testsuite
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"errors"
|
||||
"fmt"
|
||||
|
||||
abcicli "github.com/tendermint/tendermint/abci/client"
|
||||
"github.com/tendermint/tendermint/abci/types"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
)
|
||||
|
||||
func InitChain(client abcicli.Client) error {
|
||||
total := 10
|
||||
vals := make([]types.Validator, total)
|
||||
for i := 0; i < total; i++ {
|
||||
pubkey := cmn.RandBytes(33)
|
||||
power := cmn.RandInt()
|
||||
vals[i] = types.Ed25519Validator(pubkey, int64(power))
|
||||
}
|
||||
_, err := client.InitChainSync(types.RequestInitChain{
|
||||
Validators: vals,
|
||||
})
|
||||
if err != nil {
|
||||
fmt.Printf("Failed test: InitChain - %v\n", err)
|
||||
return err
|
||||
}
|
||||
fmt.Println("Passed test: InitChain")
|
||||
return nil
|
||||
}
|
||||
|
||||
func SetOption(client abcicli.Client, key, value string) error {
|
||||
_, err := client.SetOptionSync(types.RequestSetOption{Key: key, Value: value})
|
||||
if err != nil {
|
||||
fmt.Println("Failed test: SetOption")
|
||||
fmt.Printf("error while setting %v=%v: \nerror: %v\n", key, value, err)
|
||||
return err
|
||||
}
|
||||
fmt.Println("Passed test: SetOption")
|
||||
return nil
|
||||
}
|
||||
|
||||
func Commit(client abcicli.Client, hashExp []byte) error {
|
||||
res, err := client.CommitSync()
|
||||
data := res.Data
|
||||
if err != nil {
|
||||
fmt.Println("Failed test: Commit")
|
||||
fmt.Printf("error while committing: %v\n", err)
|
||||
return err
|
||||
}
|
||||
if !bytes.Equal(data, hashExp) {
|
||||
fmt.Println("Failed test: Commit")
|
||||
fmt.Printf("Commit hash was unexpected. Got %X expected %X\n", data, hashExp)
|
||||
return errors.New("CommitTx failed")
|
||||
}
|
||||
fmt.Println("Passed test: Commit")
|
||||
return nil
|
||||
}
|
||||
|
||||
func DeliverTx(client abcicli.Client, txBytes []byte, codeExp uint32, dataExp []byte) error {
|
||||
res, _ := client.DeliverTxSync(txBytes)
|
||||
code, data, log := res.Code, res.Data, res.Log
|
||||
if code != codeExp {
|
||||
fmt.Println("Failed test: DeliverTx")
|
||||
fmt.Printf("DeliverTx response code was unexpected. Got %v expected %v. Log: %v\n",
|
||||
code, codeExp, log)
|
||||
return errors.New("DeliverTx error")
|
||||
}
|
||||
if !bytes.Equal(data, dataExp) {
|
||||
fmt.Println("Failed test: DeliverTx")
|
||||
fmt.Printf("DeliverTx response data was unexpected. Got %X expected %X\n",
|
||||
data, dataExp)
|
||||
return errors.New("DeliverTx error")
|
||||
}
|
||||
fmt.Println("Passed test: DeliverTx")
|
||||
return nil
|
||||
}
|
||||
|
||||
func CheckTx(client abcicli.Client, txBytes []byte, codeExp uint32, dataExp []byte) error {
|
||||
res, _ := client.CheckTxSync(txBytes)
|
||||
code, data, log := res.Code, res.Data, res.Log
|
||||
if code != codeExp {
|
||||
fmt.Println("Failed test: CheckTx")
|
||||
fmt.Printf("CheckTx response code was unexpected. Got %v expected %v. Log: %v\n",
|
||||
code, codeExp, log)
|
||||
return errors.New("CheckTx")
|
||||
}
|
||||
if !bytes.Equal(data, dataExp) {
|
||||
fmt.Println("Failed test: CheckTx")
|
||||
fmt.Printf("CheckTx response data was unexpected. Got %X expected %X\n",
|
||||
data, dataExp)
|
||||
return errors.New("CheckTx")
|
||||
}
|
||||
fmt.Println("Passed test: CheckTx")
|
||||
return nil
|
||||
}
|
78
abci/tests/test_app/app.go
Normal file
@ -0,0 +1,78 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"os"
|
||||
|
||||
abcicli "github.com/tendermint/tendermint/abci/client"
|
||||
"github.com/tendermint/tendermint/abci/types"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
)
|
||||
|
||||
func startClient(abciType string) abcicli.Client {
|
||||
// Start client
|
||||
client, err := abcicli.NewClient("tcp://127.0.0.1:26658", abciType, true)
|
||||
if err != nil {
|
||||
panic(err.Error())
|
||||
}
|
||||
logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))
|
||||
client.SetLogger(logger.With("module", "abcicli"))
|
||||
if err := client.Start(); err != nil {
|
||||
panicf("connecting to abci_app: %v", err.Error())
|
||||
}
|
||||
|
||||
return client
|
||||
}
|
||||
|
||||
func setOption(client abcicli.Client, key, value string) {
|
||||
_, err := client.SetOptionSync(types.RequestSetOption{key, value})
|
||||
if err != nil {
|
||||
panicf("setting %v=%v: \nerr: %v", key, value, err)
|
||||
}
|
||||
}
|
||||
|
||||
func commit(client abcicli.Client, hashExp []byte) {
|
||||
res, err := client.CommitSync()
|
||||
if err != nil {
|
||||
panicf("client error: %v", err)
|
||||
}
|
||||
if !bytes.Equal(res.Data, hashExp) {
|
||||
panicf("Commit hash was unexpected. Got %X expected %X", res.Data, hashExp)
|
||||
}
|
||||
}
|
||||
|
||||
func deliverTx(client abcicli.Client, txBytes []byte, codeExp uint32, dataExp []byte) {
|
||||
res, err := client.DeliverTxSync(txBytes)
|
||||
if err != nil {
|
||||
panicf("client error: %v", err)
|
||||
}
|
||||
if res.Code != codeExp {
|
||||
panicf("DeliverTx response code was unexpected. Got %v expected %v. Log: %v", res.Code, codeExp, res.Log)
|
||||
}
|
||||
if !bytes.Equal(res.Data, dataExp) {
|
||||
panicf("DeliverTx response data was unexpected. Got %X expected %X", res.Data, dataExp)
|
||||
}
|
||||
}
|
||||
|
||||
/*func checkTx(client abcicli.Client, txBytes []byte, codeExp uint32, dataExp []byte) {
|
||||
res, err := client.CheckTxSync(txBytes)
|
||||
if err != nil {
|
||||
panicf("client error: %v", err)
|
||||
}
|
||||
if res.IsErr() {
|
||||
panicf("checking tx %X: %v\nlog: %v", txBytes, res.Log)
|
||||
}
|
||||
if res.Code != codeExp {
|
||||
panicf("CheckTx response code was unexpected. Got %v expected %v. Log: %v",
|
||||
res.Code, codeExp, res.Log)
|
||||
}
|
||||
if !bytes.Equal(res.Data, dataExp) {
|
||||
panicf("CheckTx response data was unexpected. Got %X expected %X",
|
||||
res.Data, dataExp)
|
||||
}
|
||||
}*/
|
||||
|
||||
func panicf(format string, a ...interface{}) {
|
||||
panic(fmt.Sprintf(format, a...))
|
||||
}
|
84
abci/tests/test_app/main.go
Normal file
@ -0,0 +1,84 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"log"
|
||||
"os"
|
||||
"os/exec"
|
||||
"time"
|
||||
|
||||
"github.com/tendermint/tendermint/abci/example/code"
|
||||
"github.com/tendermint/tendermint/abci/types"
|
||||
)
|
||||
|
||||
var abciType string
|
||||
|
||||
func init() {
|
||||
abciType = os.Getenv("ABCI")
|
||||
if abciType == "" {
|
||||
abciType = "socket"
|
||||
}
|
||||
}
|
||||
|
||||
func main() {
|
||||
testCounter()
|
||||
}
|
||||
|
||||
const (
|
||||
maxABCIConnectTries = 10
|
||||
)
|
||||
|
||||
func ensureABCIIsUp(typ string, n int) error {
|
||||
var err error
|
||||
cmdString := "abci-cli echo hello"
|
||||
if typ == "grpc" {
|
||||
cmdString = "abci-cli --abci grpc echo hello"
|
||||
}
|
||||
|
||||
for i := 0; i < n; i++ {
|
||||
cmd := exec.Command("bash", "-c", cmdString) // nolint: gas
|
||||
_, err = cmd.CombinedOutput()
|
||||
if err == nil {
|
||||
break
|
||||
}
|
||||
<-time.After(500 * time.Millisecond)
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
func testCounter() {
|
||||
abciApp := os.Getenv("ABCI_APP")
|
||||
if abciApp == "" {
|
||||
panic("No ABCI_APP specified")
|
||||
}
|
||||
|
||||
fmt.Printf("Running %s test with abci=%s\n", abciApp, abciType)
|
||||
cmd := exec.Command("bash", "-c", fmt.Sprintf("abci-cli %s", abciApp)) // nolint: gas
|
||||
cmd.Stdout = os.Stdout
|
||||
if err := cmd.Start(); err != nil {
|
||||
log.Fatalf("starting %q err: %v", abciApp, err)
|
||||
}
|
||||
defer cmd.Wait()
|
||||
defer cmd.Process.Kill()
|
||||
|
||||
if err := ensureABCIIsUp(abciType, maxABCIConnectTries); err != nil {
|
||||
log.Fatalf("echo failed: %v", err)
|
||||
}
|
||||
|
||||
client := startClient(abciType)
|
||||
defer client.Stop()
|
||||
|
||||
setOption(client, "serial", "on")
|
||||
commit(client, nil)
|
||||
deliverTx(client, []byte("abc"), code.CodeTypeBadNonce, nil)
|
||||
commit(client, nil)
|
||||
deliverTx(client, []byte{0x00}, types.CodeTypeOK, nil)
|
||||
commit(client, []byte{0, 0, 0, 0, 0, 0, 0, 1})
|
||||
deliverTx(client, []byte{0x00}, code.CodeTypeBadNonce, nil)
|
||||
deliverTx(client, []byte{0x01}, types.CodeTypeOK, nil)
|
||||
deliverTx(client, []byte{0x00, 0x02}, types.CodeTypeOK, nil)
|
||||
deliverTx(client, []byte{0x00, 0x03}, types.CodeTypeOK, nil)
|
||||
deliverTx(client, []byte{0x00, 0x00, 0x04}, types.CodeTypeOK, nil)
|
||||
deliverTx(client, []byte{0x00, 0x00, 0x06}, code.CodeTypeBadNonce, nil)
|
||||
commit(client, []byte{0, 0, 0, 0, 0, 0, 0, 5})
|
||||
}
|
27
abci/tests/test_app/test.sh
Executable file
@ -0,0 +1,27 @@
|
||||
#! /bin/bash
|
||||
set -e
|
||||
|
||||
# These tests spawn the counter app and server by execing the ABCI_APP command and run some simple client tests against it
|
||||
|
||||
# Get the directory of where this script is.
|
||||
SOURCE="${BASH_SOURCE[0]}"
|
||||
while [ -h "$SOURCE" ] ; do SOURCE="$(readlink "$SOURCE")"; done
|
||||
DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
|
||||
|
||||
# Change into that dir because we expect that.
|
||||
cd "$DIR"
|
||||
|
||||
echo "RUN COUNTER OVER SOCKET"
|
||||
# test golang counter
|
||||
ABCI_APP="counter" go run ./*.go
|
||||
echo "----------------------"
|
||||
|
||||
|
||||
echo "RUN COUNTER OVER GRPC"
|
||||
# test golang counter via grpc
|
||||
ABCI_APP="counter --abci=grpc" ABCI="grpc" go run ./*.go
|
||||
echo "----------------------"
|
||||
|
||||
# test nodejs counter
|
||||
# TODO: fix node app
|
||||
#ABCI_APP="node $GOPATH/src/github.com/tendermint/js-abci/example/app.js" go test -test.run TestCounter
|
10
abci/tests/test_cli/ex1.abci
Normal file
@ -0,0 +1,10 @@
|
||||
echo hello
|
||||
info
|
||||
commit
|
||||
deliver_tx "abc"
|
||||
info
|
||||
commit
|
||||
query "abc"
|
||||
deliver_tx "def=xyz"
|
||||
commit
|
||||
query "def"
|
47
abci/tests/test_cli/ex1.abci.out
Normal file
@ -0,0 +1,47 @@
|
||||
> echo hello
|
||||
-> code: OK
|
||||
-> data: hello
|
||||
-> data.hex: 0x68656C6C6F
|
||||
|
||||
> info
|
||||
-> code: OK
|
||||
-> data: {"size":0}
|
||||
-> data.hex: 0x7B2273697A65223A307D
|
||||
|
||||
> commit
|
||||
-> code: OK
|
||||
-> data.hex: 0x0000000000000000
|
||||
|
||||
> deliver_tx "abc"
|
||||
-> code: OK
|
||||
|
||||
> info
|
||||
-> code: OK
|
||||
-> data: {"size":1}
|
||||
-> data.hex: 0x7B2273697A65223A317D
|
||||
|
||||
> commit
|
||||
-> code: OK
|
||||
-> data.hex: 0x0200000000000000
|
||||
|
||||
> query "abc"
|
||||
-> code: OK
|
||||
-> log: exists
|
||||
-> height: 0
|
||||
-> value: abc
|
||||
-> value.hex: 616263
|
||||
|
||||
> deliver_tx "def=xyz"
|
||||
-> code: OK
|
||||
|
||||
> commit
|
||||
-> code: OK
|
||||
-> data.hex: 0x0400000000000000
|
||||
|
||||
> query "def"
|
||||
-> code: OK
|
||||
-> log: exists
|
||||
-> height: 0
|
||||
-> value: xyz
|
||||
-> value.hex: 78797A
|
||||
|
8
abci/tests/test_cli/ex2.abci
Normal file
@ -0,0 +1,8 @@
|
||||
set_option serial on
|
||||
check_tx 0x00
|
||||
check_tx 0xff
|
||||
deliver_tx 0x00
|
||||
check_tx 0x00
|
||||
deliver_tx 0x01
|
||||
deliver_tx 0x04
|
||||
info
|
29
abci/tests/test_cli/ex2.abci.out
Normal file
@ -0,0 +1,29 @@
|
||||
> set_option serial on
|
||||
-> code: OK
|
||||
-> log: OK (SetOption doesn't return anything.)
|
||||
|
||||
> check_tx 0x00
|
||||
-> code: OK
|
||||
|
||||
> check_tx 0xff
|
||||
-> code: OK
|
||||
|
||||
> deliver_tx 0x00
|
||||
-> code: OK
|
||||
|
||||
> check_tx 0x00
|
||||
-> code: 2
|
||||
-> log: Invalid nonce. Expected >= 1, got 0
|
||||
|
||||
> deliver_tx 0x01
|
||||
-> code: OK
|
||||
|
||||
> deliver_tx 0x04
|
||||
-> code: 2
|
||||
-> log: Invalid nonce. Expected 2, got 4
|
||||
|
||||
> info
|
||||
-> code: OK
|
||||
-> data: {"hashes":0,"txs":2}
|
||||
-> data.hex: 0x7B22686173686573223A302C22747873223A327D
|
||||
|
42
abci/tests/test_cli/test.sh
Executable file
@ -0,0 +1,42 @@
|
||||
#! /bin/bash
|
||||
set -e
|
||||
|
||||
# Get the root directory.
|
||||
SOURCE="${BASH_SOURCE[0]}"
|
||||
while [ -h "$SOURCE" ] ; do SOURCE="$(readlink "$SOURCE")"; done
|
||||
DIR="$( cd -P "$( dirname "$SOURCE" )/../.." && pwd )"
|
||||
|
||||
# Change into that dir because we expect that.
|
||||
cd "$DIR" || exit
|
||||
|
||||
function testExample() {
|
||||
N=$1
|
||||
INPUT=$2
|
||||
APP="$3 $4"
|
||||
|
||||
echo "Example $N: $APP"
|
||||
$APP &> /dev/null &
|
||||
sleep 2
|
||||
abci-cli --log_level=error --verbose batch < "$INPUT" > "${INPUT}.out.new"
|
||||
killall "$3"
|
||||
|
||||
pre=$(shasum < "${INPUT}.out")
|
||||
post=$(shasum < "${INPUT}.out.new")
|
||||
|
||||
if [[ "$pre" != "$post" ]]; then
|
||||
echo "You broke the tutorial"
|
||||
echo "Got:"
|
||||
cat "${INPUT}.out.new"
|
||||
echo "Expected:"
|
||||
cat "${INPUT}.out"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
rm "${INPUT}".out.new
|
||||
}
|
||||
|
||||
testExample 1 tests/test_cli/ex1.abci abci-cli kvstore
|
||||
testExample 2 tests/test_cli/ex2.abci abci-cli counter
|
||||
|
||||
echo ""
|
||||
echo "PASS"
|
1
abci/tests/tests.go
Normal file
@ -0,0 +1 @@
|
||||
package tests
|
138
abci/types/application.go
Normal file
@ -0,0 +1,138 @@
|
||||
package types // nolint: goimports
|
||||
|
||||
import (
|
||||
context "golang.org/x/net/context"
|
||||
)
|
||||
|
||||
// Application is an interface that enables any finite, deterministic state machine
|
||||
// to be driven by a blockchain-based replication engine via the ABCI.
|
||||
// All methods take a RequestXxx argument and return a ResponseXxx argument,
|
||||
// except CheckTx/DeliverTx, which take `tx []byte`, and `Commit`, which takes nothing.
|
||||
type Application interface {
|
||||
// Info/Query Connection
|
||||
Info(RequestInfo) ResponseInfo // Return application info
|
||||
SetOption(RequestSetOption) ResponseSetOption // Set application option
|
||||
Query(RequestQuery) ResponseQuery // Query for state
|
||||
|
||||
// Mempool Connection
|
||||
CheckTx(tx []byte) ResponseCheckTx // Validate a tx for the mempool
|
||||
|
||||
// Consensus Connection
|
||||
InitChain(RequestInitChain) ResponseInitChain // Initialize blockchain with validators and other info from TendermintCore
|
||||
BeginBlock(RequestBeginBlock) ResponseBeginBlock // Signals the beginning of a block
|
||||
DeliverTx(tx []byte) ResponseDeliverTx // Deliver a tx for full processing
|
||||
EndBlock(RequestEndBlock) ResponseEndBlock // Signals the end of a block, returns changes to the validator set
|
||||
Commit() ResponseCommit // Commit the state and return the application Merkle root hash
|
||||
}
|
||||
|
||||
//-------------------------------------------------------
|
||||
// BaseApplication is a base form of Application
|
||||
|
||||
var _ Application = (*BaseApplication)(nil)
|
||||
|
||||
type BaseApplication struct {
|
||||
}
|
||||
|
||||
func NewBaseApplication() *BaseApplication {
|
||||
return &BaseApplication{}
|
||||
}
|
||||
|
||||
func (BaseApplication) Info(req RequestInfo) ResponseInfo {
|
||||
return ResponseInfo{}
|
||||
}
|
||||
|
||||
func (BaseApplication) SetOption(req RequestSetOption) ResponseSetOption {
|
||||
return ResponseSetOption{}
|
||||
}
|
||||
|
||||
func (BaseApplication) DeliverTx(tx []byte) ResponseDeliverTx {
|
||||
return ResponseDeliverTx{Code: CodeTypeOK}
|
||||
}
|
||||
|
||||
func (BaseApplication) CheckTx(tx []byte) ResponseCheckTx {
|
||||
return ResponseCheckTx{Code: CodeTypeOK}
|
||||
}
|
||||
|
||||
func (BaseApplication) Commit() ResponseCommit {
|
||||
return ResponseCommit{}
|
||||
}
|
||||
|
||||
func (BaseApplication) Query(req RequestQuery) ResponseQuery {
|
||||
return ResponseQuery{Code: CodeTypeOK}
|
||||
}
|
||||
|
||||
func (BaseApplication) InitChain(req RequestInitChain) ResponseInitChain {
|
||||
return ResponseInitChain{}
|
||||
}
|
||||
|
||||
func (BaseApplication) BeginBlock(req RequestBeginBlock) ResponseBeginBlock {
|
||||
return ResponseBeginBlock{}
|
||||
}
|
||||
|
||||
func (BaseApplication) EndBlock(req RequestEndBlock) ResponseEndBlock {
|
||||
return ResponseEndBlock{}
|
||||
}
|
||||
|
||||
//-------------------------------------------------------
|
||||
|
||||
// GRPCApplication is a GRPC wrapper for Application
|
||||
type GRPCApplication struct {
|
||||
app Application
|
||||
}
|
||||
|
||||
func NewGRPCApplication(app Application) *GRPCApplication {
|
||||
return &GRPCApplication{app}
|
||||
}
|
||||
|
||||
func (app *GRPCApplication) Echo(ctx context.Context, req *RequestEcho) (*ResponseEcho, error) {
|
||||
return &ResponseEcho{req.Message}, nil
|
||||
}
|
||||
|
||||
func (app *GRPCApplication) Flush(ctx context.Context, req *RequestFlush) (*ResponseFlush, error) {
|
||||
return &ResponseFlush{}, nil
|
||||
}
|
||||
|
||||
func (app *GRPCApplication) Info(ctx context.Context, req *RequestInfo) (*ResponseInfo, error) {
|
||||
res := app.app.Info(*req)
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *GRPCApplication) SetOption(ctx context.Context, req *RequestSetOption) (*ResponseSetOption, error) {
|
||||
res := app.app.SetOption(*req)
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *GRPCApplication) DeliverTx(ctx context.Context, req *RequestDeliverTx) (*ResponseDeliverTx, error) {
|
||||
res := app.app.DeliverTx(req.Tx)
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *GRPCApplication) CheckTx(ctx context.Context, req *RequestCheckTx) (*ResponseCheckTx, error) {
|
||||
res := app.app.CheckTx(req.Tx)
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *GRPCApplication) Query(ctx context.Context, req *RequestQuery) (*ResponseQuery, error) {
|
||||
res := app.app.Query(*req)
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *GRPCApplication) Commit(ctx context.Context, req *RequestCommit) (*ResponseCommit, error) {
|
||||
res := app.app.Commit()
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *GRPCApplication) InitChain(ctx context.Context, req *RequestInitChain) (*ResponseInitChain, error) {
|
||||
res := app.app.InitChain(*req)
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *GRPCApplication) BeginBlock(ctx context.Context, req *RequestBeginBlock) (*ResponseBeginBlock, error) {
|
||||
res := app.app.BeginBlock(*req)
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *GRPCApplication) EndBlock(ctx context.Context, req *RequestEndBlock) (*ResponseEndBlock, error) {
|
||||
res := app.app.EndBlock(*req)
|
||||
return &res, nil
|
||||
}
|
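To illustrate the Application interface above: an app only needs to embed BaseApplication and override the handlers it cares about, since the embedded defaults satisfy the rest of the interface. A rough sketch follows (the counting logic is an arbitrary example, not the repository's counter app):

```go
package example

import (
	"encoding/binary"

	"github.com/tendermint/tendermint/abci/types"
)

// CountingApp counts delivered txs and returns the count as its "app hash".
// Embedding BaseApplication gives every other ABCI method a no-op default.
type CountingApp struct {
	types.BaseApplication

	txCount int64
}

func (app *CountingApp) DeliverTx(tx []byte) types.ResponseDeliverTx {
	app.txCount++
	return types.ResponseDeliverTx{Code: types.CodeTypeOK}
}

func (app *CountingApp) Commit() types.ResponseCommit {
	hash := make([]byte, 8)
	binary.BigEndian.PutUint64(hash, uint64(app.txCount))
	return types.ResponseCommit{Data: hash}
}
```

An app like this can be passed to abciserver.NewServer in place of BaseApplication.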
210
abci/types/messages.go
Normal file
@ -0,0 +1,210 @@
|
||||
package types
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"encoding/binary"
|
||||
"io"
|
||||
|
||||
"github.com/gogo/protobuf/proto"
|
||||
)
|
||||
|
||||
const (
|
||||
maxMsgSize = 104857600 // 100MB
|
||||
)
|
||||
|
||||
// WriteMessage writes a varint length-delimited protobuf message.
|
||||
func WriteMessage(msg proto.Message, w io.Writer) error {
|
||||
bz, err := proto.Marshal(msg)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return encodeByteSlice(w, bz)
|
||||
}
|
||||
|
||||
// ReadMessage reads a varint length-delimited protobuf message.
|
||||
func ReadMessage(r io.Reader, msg proto.Message) error {
|
||||
return readProtoMsg(r, msg, maxMsgSize)
|
||||
}
|
||||
|
||||
func readProtoMsg(r io.Reader, msg proto.Message, maxSize int) error {
|
||||
// binary.ReadVarint takes an io.ByteReader, eg. a bufio.Reader
|
||||
reader, ok := r.(*bufio.Reader)
|
||||
if !ok {
|
||||
reader = bufio.NewReader(r)
|
||||
}
|
||||
length64, err := binary.ReadVarint(reader)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
length := int(length64)
|
||||
if length < 0 || length > maxSize {
|
||||
return io.ErrShortBuffer
|
||||
}
|
||||
buf := make([]byte, length)
|
||||
if _, err := io.ReadFull(reader, buf); err != nil {
|
||||
return err
|
||||
}
|
||||
return proto.Unmarshal(buf, msg)
|
||||
}
|
||||
|
||||
//-----------------------------------------------------------------------
|
||||
// NOTE: we copied wire.EncodeByteSlice from go-wire rather than keep
|
||||
// go-wire as a dep
|
||||
|
||||
func encodeByteSlice(w io.Writer, bz []byte) (err error) {
|
||||
err = encodeVarint(w, int64(len(bz)))
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
_, err = w.Write(bz)
|
||||
return
|
||||
}
|
||||
|
||||
func encodeVarint(w io.Writer, i int64) (err error) {
|
||||
var buf [10]byte
|
||||
n := binary.PutVarint(buf[:], i)
|
||||
_, err = w.Write(buf[0:n])
|
||||
return
|
||||
}
|
||||
|
||||
//----------------------------------------
|
||||
|
||||
func ToRequestEcho(message string) *Request {
|
||||
return &Request{
|
||||
Value: &Request_Echo{&RequestEcho{message}},
|
||||
}
|
||||
}
|
||||
|
||||
func ToRequestFlush() *Request {
|
||||
return &Request{
|
||||
Value: &Request_Flush{&RequestFlush{}},
|
||||
}
|
||||
}
|
||||
|
||||
func ToRequestInfo(req RequestInfo) *Request {
|
||||
return &Request{
|
||||
Value: &Request_Info{&req},
|
||||
}
|
||||
}
|
||||
|
||||
func ToRequestSetOption(req RequestSetOption) *Request {
|
||||
return &Request{
|
||||
Value: &Request_SetOption{&req},
|
||||
}
|
||||
}
|
||||
|
||||
func ToRequestDeliverTx(tx []byte) *Request {
|
||||
return &Request{
|
||||
Value: &Request_DeliverTx{&RequestDeliverTx{tx}},
|
||||
}
|
||||
}
|
||||
|
||||
func ToRequestCheckTx(tx []byte) *Request {
|
||||
return &Request{
|
||||
Value: &Request_CheckTx{&RequestCheckTx{tx}},
|
||||
}
|
||||
}
|
||||
|
||||
func ToRequestCommit() *Request {
|
||||
return &Request{
|
||||
Value: &Request_Commit{&RequestCommit{}},
|
||||
}
|
||||
}
|
||||
|
||||
func ToRequestQuery(req RequestQuery) *Request {
|
||||
return &Request{
|
||||
Value: &Request_Query{&req},
|
||||
}
|
||||
}
|
||||
|
||||
func ToRequestInitChain(req RequestInitChain) *Request {
|
||||
return &Request{
|
||||
Value: &Request_InitChain{&req},
|
||||
}
|
||||
}
|
||||
|
||||
func ToRequestBeginBlock(req RequestBeginBlock) *Request {
|
||||
return &Request{
|
||||
Value: &Request_BeginBlock{&req},
|
||||
}
|
||||
}
|
||||
|
||||
func ToRequestEndBlock(req RequestEndBlock) *Request {
|
||||
return &Request{
|
||||
Value: &Request_EndBlock{&req},
|
||||
}
|
||||
}
|
||||
|
||||
//----------------------------------------
|
||||
|
||||
func ToResponseException(errStr string) *Response {
|
||||
return &Response{
|
||||
Value: &Response_Exception{&ResponseException{errStr}},
|
||||
}
|
||||
}
|
||||
|
||||
func ToResponseEcho(message string) *Response {
|
||||
return &Response{
|
||||
Value: &Response_Echo{&ResponseEcho{message}},
|
||||
}
|
||||
}
|
||||
|
||||
func ToResponseFlush() *Response {
|
||||
return &Response{
|
||||
Value: &Response_Flush{&ResponseFlush{}},
|
||||
}
|
||||
}
|
||||
|
||||
func ToResponseInfo(res ResponseInfo) *Response {
|
||||
return &Response{
|
||||
Value: &Response_Info{&res},
|
||||
}
|
||||
}
|
||||
|
||||
func ToResponseSetOption(res ResponseSetOption) *Response {
|
||||
return &Response{
|
||||
Value: &Response_SetOption{&res},
|
||||
}
|
||||
}
|
||||
|
||||
func ToResponseDeliverTx(res ResponseDeliverTx) *Response {
|
||||
return &Response{
|
||||
Value: &Response_DeliverTx{&res},
|
||||
}
|
||||
}
|
||||
|
||||
func ToResponseCheckTx(res ResponseCheckTx) *Response {
|
||||
return &Response{
|
||||
Value: &Response_CheckTx{&res},
|
||||
}
|
||||
}
|
||||
|
||||
func ToResponseCommit(res ResponseCommit) *Response {
|
||||
return &Response{
|
||||
Value: &Response_Commit{&res},
|
||||
}
|
||||
}
|
||||
|
||||
func ToResponseQuery(res ResponseQuery) *Response {
|
||||
return &Response{
|
||||
Value: &Response_Query{&res},
|
||||
}
|
||||
}
|
||||
|
||||
func ToResponseInitChain(res ResponseInitChain) *Response {
|
||||
return &Response{
|
||||
Value: &Response_InitChain{&res},
|
||||
}
|
||||
}
|
||||
|
||||
func ToResponseBeginBlock(res ResponseBeginBlock) *Response {
|
||||
return &Response{
|
||||
Value: &Response_BeginBlock{&res},
|
||||
}
|
||||
}
|
||||
|
||||
func ToResponseEndBlock(res ResponseEndBlock) *Response {
|
||||
return &Response{
|
||||
Value: &Response_EndBlock{&res},
|
||||
}
|
||||
}
|
104
abci/types/messages_test.go
Normal file
@ -0,0 +1,104 @@
|
||||
package types
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/json"
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
"github.com/gogo/protobuf/proto"
|
||||
"github.com/stretchr/testify/assert"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
)
|
||||
|
||||
func TestMarshalJSON(t *testing.T) {
|
||||
b, err := json.Marshal(&ResponseDeliverTx{})
|
||||
assert.Nil(t, err)
|
||||
// Do not include empty fields.
|
||||
assert.False(t, strings.Contains(string(b), "code"))
|
||||
|
||||
r1 := ResponseCheckTx{
|
||||
Code: 1,
|
||||
Data: []byte("hello"),
|
||||
GasWanted: 43,
|
||||
Tags: []cmn.KVPair{
|
||||
{[]byte("pho"), []byte("bo")},
|
||||
},
|
||||
}
|
||||
b, err = json.Marshal(&r1)
|
||||
assert.Nil(t, err)
|
||||
|
||||
var r2 ResponseCheckTx
|
||||
err = json.Unmarshal(b, &r2)
|
||||
assert.Nil(t, err)
|
||||
assert.Equal(t, r1, r2)
|
||||
}
|
||||
|
||||
func TestWriteReadMessageSimple(t *testing.T) {
|
||||
cases := []proto.Message{
|
||||
&RequestEcho{
|
||||
Message: "Hello",
|
||||
},
|
||||
}
|
||||
|
||||
for _, c := range cases {
|
||||
buf := new(bytes.Buffer)
|
||||
err := WriteMessage(c, buf)
|
||||
assert.Nil(t, err)
|
||||
|
||||
msg := new(RequestEcho)
|
||||
err = ReadMessage(buf, msg)
|
||||
assert.Nil(t, err)
|
||||
|
||||
assert.Equal(t, c, msg)
|
||||
}
|
||||
}
|
||||
|
||||
func TestWriteReadMessage(t *testing.T) {
|
||||
cases := []proto.Message{
|
||||
&Header{
|
||||
NumTxs: 4,
|
||||
},
|
||||
// TODO: add the rest
|
||||
}
|
||||
|
||||
for _, c := range cases {
|
||||
buf := new(bytes.Buffer)
|
||||
err := WriteMessage(c, buf)
|
||||
assert.Nil(t, err)
|
||||
|
||||
msg := new(Header)
|
||||
err = ReadMessage(buf, msg)
|
||||
assert.Nil(t, err)
|
||||
|
||||
assert.Equal(t, c, msg)
|
||||
}
|
||||
}
|
||||
|
||||
func TestWriteReadMessage2(t *testing.T) {
|
||||
phrase := "hello-world"
|
||||
cases := []proto.Message{
|
||||
&ResponseCheckTx{
|
||||
Data: []byte(phrase),
|
||||
Log: phrase,
|
||||
GasWanted: 10,
|
||||
Tags: []cmn.KVPair{
|
||||
cmn.KVPair{[]byte("abc"), []byte("def")},
|
||||
},
|
||||
// Fee: cmn.KI64Pair{
|
||||
},
|
||||
// TODO: add the rest
|
||||
}
|
||||
|
||||
for _, c := range cases {
|
||||
buf := new(bytes.Buffer)
|
||||
err := WriteMessage(c, buf)
|
||||
assert.Nil(t, err)
|
||||
|
||||
msg := new(ResponseCheckTx)
|
||||
err = ReadMessage(buf, msg)
|
||||
assert.Nil(t, err)
|
||||
|
||||
assert.Equal(t, c, msg)
|
||||
}
|
||||
}
|
55
abci/types/protoreplace/protoreplace.go
Normal file
@ -0,0 +1,55 @@
// +build ignore

package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"os"
	"os/exec"
	"regexp"
	"strings"
)

// This script replaces most `[]byte` with `data.Bytes` in a `.pb.go` file.
// It was written before we realized we could use `gogo/protobuf` to achieve
// this more natively. So it's here for safe keeping in case we ever need to
// abandon `gogo/protobuf`.

func main() {
	bytePattern := regexp.MustCompile("[[][]]byte")
	const oldPath = "types/types.pb.go"
	const tmpPath = "types/types.pb.new"
	content, err := ioutil.ReadFile(oldPath)
	if err != nil {
		panic("cannot read " + oldPath)
		os.Exit(1)
	}
	lines := bytes.Split(content, []byte("\n"))
	outFile, _ := os.Create(tmpPath)
	wroteImport := false
	for _, line_bytes := range lines {
		line := string(line_bytes)
		gotPackageLine := strings.HasPrefix(line, "package ")
		writeImportTime := strings.HasPrefix(line, "import ")
		containsDescriptor := strings.Contains(line, "Descriptor")
		containsByteArray := strings.Contains(line, "[]byte")
		if containsByteArray && !containsDescriptor {
			line = string(bytePattern.ReplaceAll([]byte(line), []byte("data.Bytes")))
		}
		if writeImportTime && !wroteImport {
			wroteImport = true
			fmt.Fprintf(outFile, "import \"github.com/tendermint/go-wire/data\"\n")

		}
		if gotPackageLine {
			fmt.Fprintf(outFile, "%s\n", "//nolint: gas")
		}
		fmt.Fprintf(outFile, "%s\n", line)
	}
	outFile.Close()
	os.Remove(oldPath)
	os.Rename(tmpPath, oldPath)
	exec.Command("goimports", "-w", oldPath)
}
16
abci/types/pubkey.go
Normal file
@ -0,0 +1,16 @@
package types

const (
	PubKeyEd25519 = "ed25519"
)

func Ed25519Validator(pubkey []byte, power int64) Validator {
	return Validator{
		// Address:
		PubKey: PubKey{
			Type: PubKeyEd25519,
			Data: pubkey,
		},
		Power: power,
	}
}
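As a rough usage sketch (not part of the diff above): an ABCI application would typically pass the value returned by Ed25519Validator back to Tendermint in ResponseEndBlock.ValidatorUpdates. The import path assumes the post-merge layout shown in this commit, and the key bytes here are placeholder data.

package main

import (
	"fmt"

	"github.com/tendermint/tendermint/abci/types"
)

func main() {
	// rawPubKey stands in for a 32-byte ed25519 public key (hypothetical data).
	rawPubKey := make([]byte, 32)

	// Ed25519Validator fills in the PubKey{Type: "ed25519", Data: ...} wrapper for us.
	update := types.Ed25519Validator(rawPubKey, 10)

	res := types.ResponseEndBlock{
		ValidatorUpdates: []types.Validator{update},
	}
	fmt.Println(len(res.ValidatorUpdates), res.ValidatorUpdates[0].Power)
}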
121
abci/types/result.go
Normal file
@ -0,0 +1,121 @@
package types

import (
	"bytes"
	"encoding/json"

	"github.com/gogo/protobuf/jsonpb"
)

const (
	CodeTypeOK uint32 = 0
)

// IsOK returns true if Code is OK.
func (r ResponseCheckTx) IsOK() bool {
	return r.Code == CodeTypeOK
}

// IsErr returns true if Code is something other than OK.
func (r ResponseCheckTx) IsErr() bool {
	return r.Code != CodeTypeOK
}

// IsOK returns true if Code is OK.
func (r ResponseDeliverTx) IsOK() bool {
	return r.Code == CodeTypeOK
}

// IsErr returns true if Code is something other than OK.
func (r ResponseDeliverTx) IsErr() bool {
	return r.Code != CodeTypeOK
}

// IsOK returns true if Code is OK.
func (r ResponseQuery) IsOK() bool {
	return r.Code == CodeTypeOK
}

// IsErr returns true if Code is something other than OK.
func (r ResponseQuery) IsErr() bool {
	return r.Code != CodeTypeOK
}

//---------------------------------------------------------------------------
// override JSON marshalling so we dont emit defaults (ie. disable omitempty)
// note we need Unmarshal functions too because protobuf had the bright idea
// to marshal int64->string. cool. cool, cool, cool: https://developers.google.com/protocol-buffers/docs/proto3#json

var (
	jsonpbMarshaller = jsonpb.Marshaler{
		EnumsAsInts:  true,
		EmitDefaults: false,
	}
	jsonpbUnmarshaller = jsonpb.Unmarshaler{}
)

func (r *ResponseSetOption) MarshalJSON() ([]byte, error) {
	s, err := jsonpbMarshaller.MarshalToString(r)
	return []byte(s), err
}

func (r *ResponseSetOption) UnmarshalJSON(b []byte) error {
	reader := bytes.NewBuffer(b)
	return jsonpbUnmarshaller.Unmarshal(reader, r)
}

func (r *ResponseCheckTx) MarshalJSON() ([]byte, error) {
	s, err := jsonpbMarshaller.MarshalToString(r)
	return []byte(s), err
}

func (r *ResponseCheckTx) UnmarshalJSON(b []byte) error {
	reader := bytes.NewBuffer(b)
	return jsonpbUnmarshaller.Unmarshal(reader, r)
}

func (r *ResponseDeliverTx) MarshalJSON() ([]byte, error) {
	s, err := jsonpbMarshaller.MarshalToString(r)
	return []byte(s), err
}

func (r *ResponseDeliverTx) UnmarshalJSON(b []byte) error {
	reader := bytes.NewBuffer(b)
	return jsonpbUnmarshaller.Unmarshal(reader, r)
}

func (r *ResponseQuery) MarshalJSON() ([]byte, error) {
	s, err := jsonpbMarshaller.MarshalToString(r)
	return []byte(s), err
}

func (r *ResponseQuery) UnmarshalJSON(b []byte) error {
	reader := bytes.NewBuffer(b)
	return jsonpbUnmarshaller.Unmarshal(reader, r)
}

func (r *ResponseCommit) MarshalJSON() ([]byte, error) {
	s, err := jsonpbMarshaller.MarshalToString(r)
	return []byte(s), err
}

func (r *ResponseCommit) UnmarshalJSON(b []byte) error {
	reader := bytes.NewBuffer(b)
	return jsonpbUnmarshaller.Unmarshal(reader, r)
}

// Some compile time assertions to ensure we don't
// have accidental runtime surprises later on.

// jsonEncodingRoundTripper ensures that asserted
// interfaces implement both MarshalJSON and UnmarshalJSON
type jsonRoundTripper interface {
	json.Marshaler
	json.Unmarshaler
}

var _ jsonRoundTripper = (*ResponseCommit)(nil)
var _ jsonRoundTripper = (*ResponseQuery)(nil)
var _ jsonRoundTripper = (*ResponseDeliverTx)(nil)
var _ jsonRoundTripper = (*ResponseCheckTx)(nil)
var _ jsonRoundTripper = (*ResponseSetOption)(nil)
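As a hedged sketch of what the custom marshallers above buy you (not part of the diff): zero-valued fields are dropped and 64-bit integers are rendered as quoted strings, per the proto3 JSON rules referenced in the comment. The import path assumes the post-merge abci/types package shown in this commit; the exact field-name casing in the output depends on the jsonpb settings.

package main

import (
	"encoding/json"
	"fmt"

	"github.com/tendermint/tendermint/abci/types"
)

func main() {
	res := types.ResponseCheckTx{Code: 1, GasWanted: 43}

	// json.Marshal picks up ResponseCheckTx.MarshalJSON, which delegates to jsonpb,
	// so only non-zero fields appear and the int64 gas value is a quoted string.
	b, err := json.Marshal(&res)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))

	// UnmarshalJSON accepts the quoted-integer form and round-trips the struct.
	var back types.ResponseCheckTx
	if err := json.Unmarshal(b, &back); err != nil {
		panic(err)
	}
	fmt.Println(back.Code == res.Code && back.GasWanted == res.GasWanted) // true
}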
2455
abci/types/types.pb.go
Normal file
File diff suppressed because it is too large
282
abci/types/types.proto
Normal file
@ -0,0 +1,282 @@
syntax = "proto3";
package types;

// For more information on gogo.proto, see:
// https://github.com/gogo/protobuf/blob/master/extensions.md
import "github.com/gogo/protobuf/gogoproto/gogo.proto";
import "github.com/tendermint/tmlibs/common/types.proto";

// This file is copied from http://github.com/tendermint/abci
// NOTE: When using custom types, mind the warnings.
// https://github.com/gogo/protobuf/blob/master/custom_types.md#warnings-and-issues

//----------------------------------------
// Request types

message Request {
  oneof value {
    RequestEcho echo = 2;
    RequestFlush flush = 3;
    RequestInfo info = 4;
    RequestSetOption set_option = 5;
    RequestInitChain init_chain = 6;
    RequestQuery query = 7;
    RequestBeginBlock begin_block = 8;
    RequestCheckTx check_tx = 9;
    RequestDeliverTx deliver_tx = 19;
    RequestEndBlock end_block = 11;
    RequestCommit commit = 12;
  }
}

message RequestEcho {
  string message = 1;
}

message RequestFlush {
}

message RequestInfo {
  string version = 1;
}

// nondeterministic
message RequestSetOption {
  string key = 1;
  string value = 2;
}

message RequestInitChain {
  int64 time = 1;
  string chain_id = 2;
  ConsensusParams consensus_params = 3;
  repeated Validator validators = 4 [(gogoproto.nullable)=false];
  bytes app_state_bytes = 5;
}

message RequestQuery {
  bytes data = 1;
  string path = 2;
  int64 height = 3;
  bool prove = 4;
}

message RequestBeginBlock {
  bytes hash = 1;
  Header header = 2 [(gogoproto.nullable)=false];
  repeated SigningValidator validators = 3 [(gogoproto.nullable)=false];
  repeated Evidence byzantine_validators = 4 [(gogoproto.nullable)=false];
}

message RequestCheckTx {
  bytes tx = 1;
}

message RequestDeliverTx {
  bytes tx = 1;
}

message RequestEndBlock {
  int64 height = 1;
}

message RequestCommit {
}

//----------------------------------------
// Response types

message Response {
  oneof value {
    ResponseException exception = 1;
    ResponseEcho echo = 2;
    ResponseFlush flush = 3;
    ResponseInfo info = 4;
    ResponseSetOption set_option = 5;
    ResponseInitChain init_chain = 6;
    ResponseQuery query = 7;
    ResponseBeginBlock begin_block = 8;
    ResponseCheckTx check_tx = 9;
    ResponseDeliverTx deliver_tx = 10;
    ResponseEndBlock end_block = 11;
    ResponseCommit commit = 12;
  }
}

// nondeterministic
message ResponseException {
  string error = 1;
}

message ResponseEcho {
  string message = 1;
}

message ResponseFlush {
}

message ResponseInfo {
  string data = 1;
  string version = 2;
  int64 last_block_height = 3;
  bytes last_block_app_hash = 4;
}

// nondeterministic
message ResponseSetOption {
  uint32 code = 1;
  // bytes data = 2;
  string log = 3;
  string info = 4;
}

message ResponseInitChain {
  ConsensusParams consensus_params = 1;
  repeated Validator validators = 2 [(gogoproto.nullable)=false];
}

message ResponseQuery {
  uint32 code = 1;
  // bytes data = 2; // use "value" instead.
  string log = 3; // nondeterministic
  string info = 4; // nondeterministic
  int64 index = 5;
  bytes key = 6;
  bytes value = 7;
  bytes proof = 8;
  int64 height = 9;
}

message ResponseBeginBlock {
  repeated common.KVPair tags = 1 [(gogoproto.nullable)=false, (gogoproto.jsontag)="tags,omitempty"];
}

message ResponseCheckTx {
  uint32 code = 1;
  bytes data = 2;
  string log = 3; // nondeterministic
  string info = 4; // nondeterministic
  int64 gas_wanted = 5;
  int64 gas_used = 6;
  repeated common.KVPair tags = 7 [(gogoproto.nullable)=false, (gogoproto.jsontag)="tags,omitempty"];
  common.KI64Pair fee = 8 [(gogoproto.nullable)=false];
}

message ResponseDeliverTx {
  uint32 code = 1;
  bytes data = 2;
  string log = 3; // nondeterministic
  string info = 4; // nondeterministic
  int64 gas_wanted = 5;
  int64 gas_used = 6;
  repeated common.KVPair tags = 7 [(gogoproto.nullable)=false, (gogoproto.jsontag)="tags,omitempty"];
  common.KI64Pair fee = 8 [(gogoproto.nullable)=false];
}

message ResponseEndBlock {
  repeated Validator validator_updates = 1 [(gogoproto.nullable)=false];
  ConsensusParams consensus_param_updates = 2;
  repeated common.KVPair tags = 3 [(gogoproto.nullable)=false, (gogoproto.jsontag)="tags,omitempty"];
}

message ResponseCommit {
  // reserve 1
  bytes data = 2;
}

//----------------------------------------
// Misc.

// ConsensusParams contains all consensus-relevant parameters
// that can be adjusted by the abci app
message ConsensusParams {
  BlockSize block_size = 1;
  TxSize tx_size = 2;
  BlockGossip block_gossip = 3;
}

// BlockSize contain limits on the block size.
message BlockSize {
  int32 max_bytes = 1;
  int32 max_txs = 2;
  int64 max_gas = 3;
}

// TxSize contain limits on the tx size.
message TxSize {
  int32 max_bytes = 1;
  int64 max_gas = 2;
}

// BlockGossip determine consensus critical
// elements of how blocks are gossiped
message BlockGossip {
  // Note: must not be 0
  int32 block_part_size_bytes = 1;
}

//----------------------------------------
// Blockchain Types

// just the minimum the app might need
message Header {
  // basics
  string chain_id = 1 [(gogoproto.customname)="ChainID"];
  int64 height = 2;
  int64 time = 3;

  // txs
  int32 num_txs = 4;
  int64 total_txs = 5;

  // hashes
  bytes last_block_hash = 6;
  bytes validators_hash = 7;
  bytes app_hash = 8;

  // consensus
  Validator proposer = 9 [(gogoproto.nullable)=false];
}

// Validator
message Validator {
  bytes address = 1;
  PubKey pub_key = 2 [(gogoproto.nullable)=false];
  int64 power = 3;
}

// Validator with an extra bool
message SigningValidator {
  Validator validator = 1 [(gogoproto.nullable)=false];
  bool signed_last_block = 2;
}

message PubKey {
  string type = 1;
  bytes data = 2;
}

message Evidence {
  string type = 1;
  Validator validator = 2 [(gogoproto.nullable)=false];
  int64 height = 3;
  int64 time = 4;
  int64 total_voting_power = 5;
}

//----------------------------------------
// Service Definition

service ABCIApplication {
  rpc Echo(RequestEcho) returns (ResponseEcho);
  rpc Flush(RequestFlush) returns (ResponseFlush);
  rpc Info(RequestInfo) returns (ResponseInfo);
  rpc SetOption(RequestSetOption) returns (ResponseSetOption);
  rpc DeliverTx(RequestDeliverTx) returns (ResponseDeliverTx);
  rpc CheckTx(RequestCheckTx) returns (ResponseCheckTx);
  rpc Query(RequestQuery) returns (ResponseQuery);
  rpc Commit(RequestCommit) returns (ResponseCommit);
  rpc InitChain(RequestInitChain) returns (ResponseInitChain);
  rpc BeginBlock(RequestBeginBlock) returns (ResponseBeginBlock);
  rpc EndBlock(RequestEndBlock) returns (ResponseEndBlock);
}
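For orientation only (not part of the diff): with gogo/protobuf, each arm of the `oneof value` above is wrapped in a generated struct, so building a Request in Go looks roughly like the sketch below. The wrapper and getter names (Request_Echo, GetEcho) follow the usual gogo naming convention and are assumed here rather than taken from the suppressed types.pb.go.

package main

import (
	"fmt"

	"github.com/tendermint/tendermint/abci/types"
)

func main() {
	// Wrap a RequestEcho in the Request oneof (field 2 in the proto above).
	req := types.Request{
		Value: &types.Request_Echo{Echo: &types.RequestEcho{Message: "hello"}},
	}

	// The generated getters are nil-safe, so dispatching on the oneof is straightforward.
	if echo := req.GetEcho(); echo != nil {
		fmt.Println("echo message:", echo.Message)
	}
}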
31
abci/types/types_test.go
Normal file
@ -0,0 +1,31 @@
package types

import (
	"testing"

	asrt "github.com/stretchr/testify/assert"
)

func TestConsensusParams(t *testing.T) {
	assert := asrt.New(t)

	params := &ConsensusParams{
		BlockSize:   &BlockSize{MaxGas: 12345},
		BlockGossip: &BlockGossip{BlockPartSizeBytes: 54321},
	}
	var noParams *ConsensusParams // nil

	// no error with nil fields
	assert.Nil(noParams.GetBlockSize())
	assert.EqualValues(noParams.GetBlockSize().GetMaxGas(), 0)

	// get values with real fields
	assert.NotNil(params.GetBlockSize())
	assert.EqualValues(params.GetBlockSize().GetMaxTxs(), 0)
	assert.EqualValues(params.GetBlockSize().GetMaxGas(), 12345)
	assert.NotNil(params.GetBlockGossip())
	assert.EqualValues(params.GetBlockGossip().GetBlockPartSizeBytes(), 54321)
	assert.Nil(params.GetTxSize())
	assert.EqualValues(params.GetTxSize().GetMaxBytes(), 0)

}
59
abci/types/util.go
Normal file
@ -0,0 +1,59 @@
package types

import (
	"bytes"
	"encoding/json"
	"sort"

	cmn "github.com/tendermint/tendermint/libs/common"
)

//------------------------------------------------------------------------------

// Validators is a list of validators that implements the Sort interface
type Validators []Validator

var _ sort.Interface = (Validators)(nil)

// All these methods for Validators:
// Len, Less and Swap
// are for Validators to implement sort.Interface
// which will be used by the sort package.
// See Issue https://github.com/tendermint/abci/issues/212

func (v Validators) Len() int {
	return len(v)
}

// XXX: doesn't distinguish same validator with different power
func (v Validators) Less(i, j int) bool {
	return bytes.Compare(v[i].PubKey.Data, v[j].PubKey.Data) <= 0
}

func (v Validators) Swap(i, j int) {
	v1 := v[i]
	v[i] = v[j]
	v[j] = v1
}

func ValidatorsString(vs Validators) string {
	s := make([]validatorPretty, len(vs))
	for i, v := range vs {
		s[i] = validatorPretty{
			Address: v.Address,
			PubKey:  v.PubKey.Data,
			Power:   v.Power,
		}
	}
	b, err := json.Marshal(s)
	if err != nil {
		panic(err.Error())
	}
	return string(b)
}

type validatorPretty struct {
	Address cmn.HexBytes `json:"address"`
	PubKey  []byte       `json:"pub_key"`
	Power   int64        `json:"power"`
}
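A small usage sketch (not part of the diff): because Validators implements sort.Interface above, ordering a validator set by public key is just a sort.Sort call. The validator data below is made up, and the import path assumes the post-merge layout in this commit.

package main

import (
	"fmt"
	"sort"

	"github.com/tendermint/tendermint/abci/types"
)

func main() {
	vals := types.Validators{
		types.Ed25519Validator([]byte{0x02}, 10),
		types.Ed25519Validator([]byte{0x01}, 5),
	}

	// Less() compares PubKey.Data, so this orders validators by their raw pubkey bytes.
	sort.Sort(vals)

	// ValidatorsString renders the sorted set as JSON for logging/debugging.
	fmt.Println(types.ValidatorsString(vals))
}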
9
abci/version/version.go
Normal file
@ -0,0 +1,9 @@
package version

// NOTE: we should probably be versioning the ABCI and the abci-cli separately

const Maj = "0"
const Min = "12"
const Fix = "0"

const Version = "0.12.0"
13
appveyor.yml
Normal file
@ -0,0 +1,13 @@
version: 1.0.{build}
configuration: Release
platform:
- x64
- x86
clone_folder: c:\go\path\src\github.com\tendermint\tendermint
before_build:
- cmd: set GOPATH=%GOROOT%\path
- cmd: set PATH=%GOPATH%\bin;%PATH%
- cmd: make get_vendor_deps
build_script:
- cmd: make test
test: off
29
benchmarks/atomic_test.go
Normal file
@ -0,0 +1,29 @@
|
||||
package benchmarks
|
||||
|
||||
import (
|
||||
"sync/atomic"
|
||||
"testing"
|
||||
"unsafe"
|
||||
)
|
||||
|
||||
func BenchmarkAtomicUintPtr(b *testing.B) {
|
||||
b.StopTimer()
|
||||
pointers := make([]uintptr, 1000)
|
||||
b.Log(unsafe.Sizeof(pointers[0]))
|
||||
b.StartTimer()
|
||||
|
||||
for j := 0; j < b.N; j++ {
|
||||
atomic.StoreUintptr(&pointers[j%1000], uintptr(j))
|
||||
}
|
||||
}
|
||||
|
||||
func BenchmarkAtomicPointer(b *testing.B) {
|
||||
b.StopTimer()
|
||||
pointers := make([]unsafe.Pointer, 1000)
|
||||
b.Log(unsafe.Sizeof(pointers[0]))
|
||||
b.StartTimer()
|
||||
|
||||
for j := 0; j < b.N; j++ {
|
||||
atomic.StorePointer(&pointers[j%1000], unsafe.Pointer(uintptr(j)))
|
||||
}
|
||||
}
|
2
benchmarks/blockchain/.gitignore
vendored
Normal file
@ -0,0 +1,2 @@
|
||||
data
|
||||
|
80
benchmarks/blockchain/localsync.sh
Executable file
@ -0,0 +1,80 @@
|
||||
#!/bin/bash
|
||||
|
||||
DATA=$GOPATH/src/github.com/tendermint/tendermint/benchmarks/blockchain/data
|
||||
if [ ! -d $DATA ]; then
|
||||
echo "no data found, generating a chain... (this only has to happen once)"
|
||||
|
||||
tendermint init --home $DATA
|
||||
cp $DATA/config.toml $DATA/config2.toml
|
||||
echo "
|
||||
[consensus]
|
||||
timeout_commit = 0
|
||||
" >> $DATA/config.toml
|
||||
|
||||
echo "starting node"
|
||||
tendermint node \
|
||||
--home $DATA \
|
||||
--proxy_app kvstore \
|
||||
--p2p.laddr tcp://127.0.0.1:56656 \
|
||||
--rpc.laddr tcp://127.0.0.1:56657 \
|
||||
--log_level error &
|
||||
|
||||
echo "making blocks for 60s"
|
||||
sleep 60
|
||||
|
||||
mv $DATA/config2.toml $DATA/config.toml
|
||||
|
||||
kill %1
|
||||
|
||||
echo "done generating chain."
|
||||
fi
|
||||
|
||||
# validator node
|
||||
HOME1=$TMPDIR$RANDOM$RANDOM
|
||||
cp -R $DATA $HOME1
|
||||
echo "starting validator node"
|
||||
tendermint node \
|
||||
--home $HOME1 \
|
||||
--proxy_app kvstore \
|
||||
--p2p.laddr tcp://127.0.0.1:56656 \
|
||||
--rpc.laddr tcp://127.0.0.1:56657 \
|
||||
--log_level error &
|
||||
sleep 1
|
||||
|
||||
# downloader node
|
||||
HOME2=$TMPDIR$RANDOM$RANDOM
|
||||
tendermint init --home $HOME2
|
||||
cp $HOME1/genesis.json $HOME2
|
||||
printf "starting downloader node"
|
||||
tendermint node \
|
||||
--home $HOME2 \
|
||||
--proxy_app kvstore \
|
||||
--p2p.laddr tcp://127.0.0.1:56666 \
|
||||
--rpc.laddr tcp://127.0.0.1:56667 \
|
||||
--p2p.persistent_peers 127.0.0.1:56656 \
|
||||
--log_level error &
|
||||
|
||||
# wait for node to start up so we only count time where we are actually syncing
|
||||
sleep 0.5
|
||||
while curl localhost:56667/status 2> /dev/null | grep "\"latest_block_height\": 0," > /dev/null
|
||||
do
|
||||
printf '.'
|
||||
sleep 0.2
|
||||
done
|
||||
echo
|
||||
|
||||
echo "syncing blockchain for 10s"
|
||||
for i in {1..10}
|
||||
do
|
||||
sleep 1
|
||||
HEIGHT="$(curl localhost:56667/status 2> /dev/null \
|
||||
| grep 'latest_block_height' \
|
||||
| grep -o ' [0-9]*' \
|
||||
| xargs)"
|
||||
let 'RATE = HEIGHT / i'
|
||||
echo "height: $HEIGHT, blocks/sec: $RATE"
|
||||
done
|
||||
|
||||
kill %1
|
||||
kill %2
|
||||
rm -rf $HOME1 $HOME2
|
19
benchmarks/chan_test.go
Normal file
@ -0,0 +1,19 @@
|
||||
package benchmarks
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func BenchmarkChanMakeClose(b *testing.B) {
|
||||
b.StopTimer()
|
||||
b.StartTimer()
|
||||
|
||||
for j := 0; j < b.N; j++ {
|
||||
foo := make(chan struct{})
|
||||
close(foo)
|
||||
something, ok := <-foo
|
||||
if ok {
|
||||
b.Error(something, ok)
|
||||
}
|
||||
}
|
||||
}
|
129
benchmarks/codec_test.go
Normal file
@ -0,0 +1,129 @@
|
||||
package benchmarks
|
||||
|
||||
import (
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/tendermint/go-amino"
|
||||
|
||||
proto "github.com/tendermint/tendermint/benchmarks/proto"
|
||||
"github.com/tendermint/tendermint/crypto"
|
||||
"github.com/tendermint/tendermint/p2p"
|
||||
ctypes "github.com/tendermint/tendermint/rpc/core/types"
|
||||
)
|
||||
|
||||
func BenchmarkEncodeStatusWire(b *testing.B) {
|
||||
b.StopTimer()
|
||||
cdc := amino.NewCodec()
|
||||
ctypes.RegisterAmino(cdc)
|
||||
nodeKey := p2p.NodeKey{PrivKey: crypto.GenPrivKeyEd25519()}
|
||||
status := &ctypes.ResultStatus{
|
||||
NodeInfo: p2p.NodeInfo{
|
||||
ID: nodeKey.ID(),
|
||||
Moniker: "SOMENAME",
|
||||
Network: "SOMENAME",
|
||||
ListenAddr: "SOMEADDR",
|
||||
Version: "SOMEVER",
|
||||
Other: []string{"SOMESTRING", "OTHERSTRING"},
|
||||
},
|
||||
SyncInfo: ctypes.SyncInfo{
|
||||
LatestBlockHash: []byte("SOMEBYTES"),
|
||||
LatestBlockHeight: 123,
|
||||
LatestBlockTime: time.Unix(0, 1234),
|
||||
},
|
||||
ValidatorInfo: ctypes.ValidatorInfo{
|
||||
PubKey: nodeKey.PubKey(),
|
||||
},
|
||||
}
|
||||
b.StartTimer()
|
||||
|
||||
counter := 0
|
||||
for i := 0; i < b.N; i++ {
|
||||
jsonBytes, err := cdc.MarshalJSON(status)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
counter += len(jsonBytes)
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
func BenchmarkEncodeNodeInfoWire(b *testing.B) {
|
||||
b.StopTimer()
|
||||
cdc := amino.NewCodec()
|
||||
ctypes.RegisterAmino(cdc)
|
||||
nodeKey := p2p.NodeKey{PrivKey: crypto.GenPrivKeyEd25519()}
|
||||
nodeInfo := p2p.NodeInfo{
|
||||
ID: nodeKey.ID(),
|
||||
Moniker: "SOMENAME",
|
||||
Network: "SOMENAME",
|
||||
ListenAddr: "SOMEADDR",
|
||||
Version: "SOMEVER",
|
||||
Other: []string{"SOMESTRING", "OTHERSTRING"},
|
||||
}
|
||||
b.StartTimer()
|
||||
|
||||
counter := 0
|
||||
for i := 0; i < b.N; i++ {
|
||||
jsonBytes, err := cdc.MarshalJSON(nodeInfo)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
counter += len(jsonBytes)
|
||||
}
|
||||
}
|
||||
|
||||
func BenchmarkEncodeNodeInfoBinary(b *testing.B) {
|
||||
b.StopTimer()
|
||||
cdc := amino.NewCodec()
|
||||
ctypes.RegisterAmino(cdc)
|
||||
nodeKey := p2p.NodeKey{PrivKey: crypto.GenPrivKeyEd25519()}
|
||||
nodeInfo := p2p.NodeInfo{
|
||||
ID: nodeKey.ID(),
|
||||
Moniker: "SOMENAME",
|
||||
Network: "SOMENAME",
|
||||
ListenAddr: "SOMEADDR",
|
||||
Version: "SOMEVER",
|
||||
Other: []string{"SOMESTRING", "OTHERSTRING"},
|
||||
}
|
||||
b.StartTimer()
|
||||
|
||||
counter := 0
|
||||
for i := 0; i < b.N; i++ {
|
||||
jsonBytes := cdc.MustMarshalBinaryBare(nodeInfo)
|
||||
counter += len(jsonBytes)
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
func BenchmarkEncodeNodeInfoProto(b *testing.B) {
|
||||
b.StopTimer()
|
||||
nodeKey := p2p.NodeKey{PrivKey: crypto.GenPrivKeyEd25519()}
|
||||
nodeID := string(nodeKey.ID())
|
||||
someName := "SOMENAME"
|
||||
someAddr := "SOMEADDR"
|
||||
someVer := "SOMEVER"
|
||||
someString := "SOMESTRING"
|
||||
otherString := "OTHERSTRING"
|
||||
nodeInfo := proto.NodeInfo{
|
||||
Id: &proto.ID{Id: &nodeID},
|
||||
Moniker: &someName,
|
||||
Network: &someName,
|
||||
ListenAddr: &someAddr,
|
||||
Version: &someVer,
|
||||
Other: []string{someString, otherString},
|
||||
}
|
||||
b.StartTimer()
|
||||
|
||||
counter := 0
|
||||
for i := 0; i < b.N; i++ {
|
||||
bytes, err := nodeInfo.Marshal()
|
||||
if err != nil {
|
||||
b.Fatal(err)
|
||||
return
|
||||
}
|
||||
//jsonBytes := wire.JSONBytes(nodeInfo)
|
||||
counter += len(bytes)
|
||||
}
|
||||
|
||||
}
|
1
benchmarks/empty.go
Normal file
@ -0,0 +1 @@
|
||||
package benchmarks
|
35
benchmarks/map_test.go
Normal file
@ -0,0 +1,35 @@
|
||||
package benchmarks
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
)
|
||||
|
||||
func BenchmarkSomething(b *testing.B) {
|
||||
b.StopTimer()
|
||||
numItems := 100000
|
||||
numChecks := 100000
|
||||
keys := make([]string, numItems)
|
||||
for i := 0; i < numItems; i++ {
|
||||
keys[i] = cmn.RandStr(100)
|
||||
}
|
||||
txs := make([]string, numChecks)
|
||||
for i := 0; i < numChecks; i++ {
|
||||
txs[i] = cmn.RandStr(100)
|
||||
}
|
||||
b.StartTimer()
|
||||
|
||||
counter := 0
|
||||
for j := 0; j < b.N; j++ {
|
||||
foo := make(map[string]string)
|
||||
for _, key := range keys {
|
||||
foo[key] = key
|
||||
}
|
||||
for _, tx := range txs {
|
||||
if _, ok := foo[tx]; ok {
|
||||
counter++
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
33
benchmarks/os_test.go
Normal file
@ -0,0 +1,33 @@
|
||||
package benchmarks
|
||||
|
||||
import (
|
||||
"os"
|
||||
"testing"
|
||||
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
)
|
||||
|
||||
func BenchmarkFileWrite(b *testing.B) {
|
||||
b.StopTimer()
|
||||
file, err := os.OpenFile("benchmark_file_write.out",
|
||||
os.O_RDWR|os.O_CREATE|os.O_APPEND, 0600)
|
||||
if err != nil {
|
||||
b.Error(err)
|
||||
}
|
||||
testString := cmn.RandStr(200) + "\n"
|
||||
b.StartTimer()
|
||||
|
||||
for i := 0; i < b.N; i++ {
|
||||
_, err := file.Write([]byte(testString))
|
||||
if err != nil {
|
||||
b.Error(err)
|
||||
}
|
||||
}
|
||||
|
||||
if err := file.Close(); err != nil {
|
||||
b.Error(err)
|
||||
}
|
||||
if err := os.Remove("benchmark_file_write.out"); err != nil {
|
||||
b.Error(err)
|
||||
}
|
||||
}
|
2
benchmarks/proto/README
Normal file
@ -0,0 +1,2 @@
|
||||
Doing some protobuf tests here.
|
||||
Using gogoprotobuf.
|
1456
benchmarks/proto/test.pb.go
Normal file
File diff suppressed because it is too large
29
benchmarks/proto/test.proto
Normal file
@ -0,0 +1,29 @@
|
||||
message ResultStatus {
|
||||
optional NodeInfo nodeInfo = 1;
|
||||
required PubKey pubKey = 2;
|
||||
required bytes latestBlockHash = 3;
|
||||
required int64 latestBlockHeight = 4;
|
||||
required int64 latestBlocktime = 5;
|
||||
}
|
||||
|
||||
message NodeInfo {
|
||||
required ID id = 1;
|
||||
required string moniker = 2;
|
||||
required string network = 3;
|
||||
required string remoteAddr = 4;
|
||||
required string listenAddr = 5;
|
||||
required string version = 6;
|
||||
repeated string other = 7;
|
||||
}
|
||||
|
||||
message ID {
|
||||
required string id = 1;
|
||||
}
|
||||
|
||||
message PubKey {
|
||||
optional PubKeyEd25519 ed25519 = 1;
|
||||
}
|
||||
|
||||
message PubKeyEd25519 {
|
||||
required bytes bytes = 1;
|
||||
}
|
47
benchmarks/simu/counter.go
Normal file
@ -0,0 +1,47 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/binary"
|
||||
"fmt"
|
||||
"time"
|
||||
|
||||
rpcclient "github.com/tendermint/tendermint/rpc/lib/client"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
)
|
||||
|
||||
func main() {
|
||||
wsc := rpcclient.NewWSClient("127.0.0.1:26657", "/websocket")
|
||||
err := wsc.Start()
|
||||
if err != nil {
|
||||
cmn.Exit(err.Error())
|
||||
}
|
||||
defer wsc.Stop()
|
||||
|
||||
// Read a bunch of responses
|
||||
go func() {
|
||||
for {
|
||||
_, ok := <-wsc.ResponsesCh
|
||||
if !ok {
|
||||
break
|
||||
}
|
||||
//fmt.Println("Received response", string(wire.JSONBytes(res)))
|
||||
}
|
||||
}()
|
||||
|
||||
// Make a bunch of requests
|
||||
buf := make([]byte, 32)
|
||||
for i := 0; ; i++ {
|
||||
binary.BigEndian.PutUint64(buf, uint64(i))
|
||||
//txBytes := hex.EncodeToString(buf[:n])
|
||||
fmt.Print(".")
|
||||
err = wsc.Call(context.TODO(), "broadcast_tx", map[string]interface{}{"tx": buf[:8]})
|
||||
if err != nil {
|
||||
cmn.Exit(err.Error())
|
||||
}
|
||||
if i%1000 == 0 {
|
||||
fmt.Println(i)
|
||||
}
|
||||
time.Sleep(time.Microsecond * 1000)
|
||||
}
|
||||
}
|
587
blockchain/pool.go
Normal file
@ -0,0 +1,587 @@
|
||||
package blockchain
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"math"
|
||||
"sync"
|
||||
"sync/atomic"
|
||||
"time"
|
||||
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
flow "github.com/tendermint/tendermint/libs/flowrate"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
|
||||
"github.com/tendermint/tendermint/p2p"
|
||||
"github.com/tendermint/tendermint/types"
|
||||
)
|
||||
|
||||
/*
|
||||
eg, L = latency = 0.1s
|
||||
P = num peers = 10
|
||||
FN = num full nodes
|
||||
BS = 1kB block size
|
||||
CB = 1 Mbit/s = 128 kB/s
|
||||
CB/P = 12.8 kB
|
||||
B/S = CB/P/BS = 12.8 blocks/s
|
||||
|
||||
12.8 * 0.1 = 1.28 blocks on conn
|
||||
*/
|
||||
|
||||
const (
|
||||
requestIntervalMS = 100
|
||||
maxTotalRequesters = 1000
|
||||
maxPendingRequests = maxTotalRequesters
|
||||
maxPendingRequestsPerPeer = 50
|
||||
|
||||
// Minimum recv rate to ensure we're receiving blocks from a peer fast
|
||||
// enough. If a peer is not sending us data at at least that rate, we
|
||||
// consider them to have timedout and we disconnect.
|
||||
//
|
||||
// Assuming a DSL connection (not a good choice) 128 Kbps (upload) ~ 15 KB/s,
|
||||
// sending data across atlantic ~ 7.5 KB/s.
|
||||
minRecvRate = 7680
|
||||
|
||||
// Maximum difference between current and new block's height.
|
||||
maxDiffBetweenCurrentAndReceivedBlockHeight = 100
|
||||
)
|
||||
|
||||
var peerTimeout = 15 * time.Second // not const so we can override with tests
|
||||
|
||||
/*
|
||||
Peers self report their heights when we join the block pool.
|
||||
Starting from our latest pool.height, we request blocks
|
||||
in sequence from peers that reported higher heights than ours.
|
||||
Every so often we ask peers what height they're on so we can keep going.
|
||||
|
||||
Requests are continuously made for blocks of higher heights until
|
||||
the limit is reached. If most of the requests have no available peers, and we
|
||||
are not at peer limits, we can probably switch to consensus reactor
|
||||
*/
|
||||
|
||||
type BlockPool struct {
|
||||
cmn.BaseService
|
||||
startTime time.Time
|
||||
|
||||
mtx sync.Mutex
|
||||
// block requests
|
||||
requesters map[int64]*bpRequester
|
||||
height int64 // the lowest key in requesters.
|
||||
// peers
|
||||
peers map[p2p.ID]*bpPeer
|
||||
maxPeerHeight int64
|
||||
|
||||
// atomic
|
||||
numPending int32 // number of requests pending assignment or block response
|
||||
|
||||
requestsCh chan<- BlockRequest
|
||||
errorsCh chan<- peerError
|
||||
}
|
||||
|
||||
func NewBlockPool(start int64, requestsCh chan<- BlockRequest, errorsCh chan<- peerError) *BlockPool {
|
||||
bp := &BlockPool{
|
||||
peers: make(map[p2p.ID]*bpPeer),
|
||||
|
||||
requesters: make(map[int64]*bpRequester),
|
||||
height: start,
|
||||
numPending: 0,
|
||||
|
||||
requestsCh: requestsCh,
|
||||
errorsCh: errorsCh,
|
||||
}
|
||||
bp.BaseService = *cmn.NewBaseService(nil, "BlockPool", bp)
|
||||
return bp
|
||||
}
|
||||
|
||||
func (pool *BlockPool) OnStart() error {
|
||||
go pool.makeRequestersRoutine()
|
||||
pool.startTime = time.Now()
|
||||
return nil
|
||||
}
|
||||
|
||||
func (pool *BlockPool) OnStop() {}
|
||||
|
||||
// Run spawns requesters as needed.
|
||||
func (pool *BlockPool) makeRequestersRoutine() {
|
||||
for {
|
||||
if !pool.IsRunning() {
|
||||
break
|
||||
}
|
||||
|
||||
_, numPending, lenRequesters := pool.GetStatus()
|
||||
if numPending >= maxPendingRequests {
|
||||
// sleep for a bit.
|
||||
time.Sleep(requestIntervalMS * time.Millisecond)
|
||||
// check for timed out peers
|
||||
pool.removeTimedoutPeers()
|
||||
} else if lenRequesters >= maxTotalRequesters {
|
||||
// sleep for a bit.
|
||||
time.Sleep(requestIntervalMS * time.Millisecond)
|
||||
// check for timed out peers
|
||||
pool.removeTimedoutPeers()
|
||||
} else {
|
||||
// request for more blocks.
|
||||
pool.makeNextRequester()
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (pool *BlockPool) removeTimedoutPeers() {
|
||||
pool.mtx.Lock()
|
||||
defer pool.mtx.Unlock()
|
||||
|
||||
for _, peer := range pool.peers {
|
||||
if !peer.didTimeout && peer.numPending > 0 {
|
||||
curRate := peer.recvMonitor.Status().CurRate
|
||||
// curRate can be 0 on start
|
||||
if curRate != 0 && curRate < minRecvRate {
|
||||
err := errors.New("peer is not sending us data fast enough")
|
||||
pool.sendError(err, peer.id)
|
||||
pool.Logger.Error("SendTimeout", "peer", peer.id,
|
||||
"reason", err,
|
||||
"curRate", fmt.Sprintf("%d KB/s", curRate/1024),
|
||||
"minRate", fmt.Sprintf("%d KB/s", minRecvRate/1024))
|
||||
peer.didTimeout = true
|
||||
}
|
||||
}
|
||||
if peer.didTimeout {
|
||||
pool.removePeer(peer.id)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (pool *BlockPool) GetStatus() (height int64, numPending int32, lenRequesters int) {
|
||||
pool.mtx.Lock()
|
||||
defer pool.mtx.Unlock()
|
||||
|
||||
return pool.height, atomic.LoadInt32(&pool.numPending), len(pool.requesters)
|
||||
}
|
||||
|
||||
// TODO: relax conditions, prevent abuse.
|
||||
func (pool *BlockPool) IsCaughtUp() bool {
|
||||
pool.mtx.Lock()
|
||||
defer pool.mtx.Unlock()
|
||||
|
||||
// Need at least 1 peer to be considered caught up.
|
||||
if len(pool.peers) == 0 {
|
||||
pool.Logger.Debug("Blockpool has no peers")
|
||||
return false
|
||||
}
|
||||
|
||||
// some conditions to determine if we're caught up
|
||||
receivedBlockOrTimedOut := (pool.height > 0 || time.Since(pool.startTime) > 5*time.Second)
|
||||
ourChainIsLongestAmongPeers := pool.maxPeerHeight == 0 || pool.height >= pool.maxPeerHeight
|
||||
isCaughtUp := receivedBlockOrTimedOut && ourChainIsLongestAmongPeers
|
||||
return isCaughtUp
|
||||
}
|
||||
|
||||
// We need to see the second block's Commit to validate the first block.
|
||||
// So we peek two blocks at a time.
|
||||
// The caller will verify the commit.
|
||||
func (pool *BlockPool) PeekTwoBlocks() (first *types.Block, second *types.Block) {
|
||||
pool.mtx.Lock()
|
||||
defer pool.mtx.Unlock()
|
||||
|
||||
if r := pool.requesters[pool.height]; r != nil {
|
||||
first = r.getBlock()
|
||||
}
|
||||
if r := pool.requesters[pool.height+1]; r != nil {
|
||||
second = r.getBlock()
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// Pop the first block at pool.height
|
||||
// It must have been validated by 'second'.Commit from PeekTwoBlocks().
|
||||
func (pool *BlockPool) PopRequest() {
|
||||
pool.mtx.Lock()
|
||||
defer pool.mtx.Unlock()
|
||||
|
||||
if r := pool.requesters[pool.height]; r != nil {
|
||||
/* The block can disappear at any time, due to removePeer().
|
||||
if r := pool.requesters[pool.height]; r == nil || r.block == nil {
|
||||
PanicSanity("PopRequest() requires a valid block")
|
||||
}
|
||||
*/
|
||||
r.Stop()
|
||||
delete(pool.requesters, pool.height)
|
||||
pool.height++
|
||||
} else {
|
||||
panic(fmt.Sprintf("Expected requester to pop, got nothing at height %v", pool.height))
|
||||
}
|
||||
}
|
||||
|
||||
// Invalidates the block at pool.height,
|
||||
// Remove the peer and redo request from others.
|
||||
// Returns the ID of the removed peer.
|
||||
func (pool *BlockPool) RedoRequest(height int64) p2p.ID {
|
||||
pool.mtx.Lock()
|
||||
defer pool.mtx.Unlock()
|
||||
|
||||
request := pool.requesters[height]
|
||||
|
||||
if request.block == nil {
|
||||
panic("Expected block to be non-nil")
|
||||
}
|
||||
|
||||
// RemovePeer will redo all requesters associated with this peer.
|
||||
pool.removePeer(request.peerID)
|
||||
return request.peerID
|
||||
}
|
||||
|
||||
// TODO: ensure that blocks come in order for each peer.
|
||||
func (pool *BlockPool) AddBlock(peerID p2p.ID, block *types.Block, blockSize int) {
|
||||
pool.mtx.Lock()
|
||||
defer pool.mtx.Unlock()
|
||||
|
||||
requester := pool.requesters[block.Height]
|
||||
if requester == nil {
|
||||
pool.Logger.Info("peer sent us a block we didn't expect", "peer", peerID, "curHeight", pool.height, "blockHeight", block.Height)
|
||||
diff := pool.height - block.Height
|
||||
if diff < 0 {
|
||||
diff *= -1
|
||||
}
|
||||
if diff > maxDiffBetweenCurrentAndReceivedBlockHeight {
|
||||
pool.sendError(errors.New("peer sent us a block we didn't expect with a height too far ahead/behind"), peerID)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
if requester.setBlock(block, peerID) {
|
||||
atomic.AddInt32(&pool.numPending, -1)
|
||||
peer := pool.peers[peerID]
|
||||
if peer != nil {
|
||||
peer.decrPending(blockSize)
|
||||
}
|
||||
} else {
|
||||
// Bad peer?
|
||||
}
|
||||
}
|
||||
|
||||
// MaxPeerHeight returns the highest height reported by a peer.
|
||||
func (pool *BlockPool) MaxPeerHeight() int64 {
|
||||
pool.mtx.Lock()
|
||||
defer pool.mtx.Unlock()
|
||||
return pool.maxPeerHeight
|
||||
}
|
||||
|
||||
// Sets the peer's alleged blockchain height.
|
||||
func (pool *BlockPool) SetPeerHeight(peerID p2p.ID, height int64) {
|
||||
pool.mtx.Lock()
|
||||
defer pool.mtx.Unlock()
|
||||
|
||||
peer := pool.peers[peerID]
|
||||
if peer != nil {
|
||||
peer.height = height
|
||||
} else {
|
||||
peer = newBPPeer(pool, peerID, height)
|
||||
peer.setLogger(pool.Logger.With("peer", peerID))
|
||||
pool.peers[peerID] = peer
|
||||
}
|
||||
|
||||
if height > pool.maxPeerHeight {
|
||||
pool.maxPeerHeight = height
|
||||
}
|
||||
}
|
||||
|
||||
func (pool *BlockPool) RemovePeer(peerID p2p.ID) {
|
||||
pool.mtx.Lock()
|
||||
defer pool.mtx.Unlock()
|
||||
|
||||
pool.removePeer(peerID)
|
||||
}
|
||||
|
||||
func (pool *BlockPool) removePeer(peerID p2p.ID) {
|
||||
for _, requester := range pool.requesters {
|
||||
if requester.getPeerID() == peerID {
|
||||
requester.redo()
|
||||
}
|
||||
}
|
||||
delete(pool.peers, peerID)
|
||||
}
|
||||
|
||||
// Pick an available peer with at least the given minHeight.
|
||||
// If no peers are available, returns nil.
|
||||
func (pool *BlockPool) pickIncrAvailablePeer(minHeight int64) *bpPeer {
|
||||
pool.mtx.Lock()
|
||||
defer pool.mtx.Unlock()
|
||||
|
||||
for _, peer := range pool.peers {
|
||||
if peer.didTimeout {
|
||||
pool.removePeer(peer.id)
|
||||
continue
|
||||
}
|
||||
if peer.numPending >= maxPendingRequestsPerPeer {
|
||||
continue
|
||||
}
|
||||
if peer.height < minHeight {
|
||||
continue
|
||||
}
|
||||
peer.incrPending()
|
||||
return peer
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (pool *BlockPool) makeNextRequester() {
|
||||
pool.mtx.Lock()
|
||||
defer pool.mtx.Unlock()
|
||||
|
||||
nextHeight := pool.height + pool.requestersLen()
|
||||
request := newBPRequester(pool, nextHeight)
|
||||
// request.SetLogger(pool.Logger.With("height", nextHeight))
|
||||
|
||||
pool.requesters[nextHeight] = request
|
||||
atomic.AddInt32(&pool.numPending, 1)
|
||||
|
||||
err := request.Start()
|
||||
if err != nil {
|
||||
request.Logger.Error("Error starting request", "err", err)
|
||||
}
|
||||
}
|
||||
|
||||
func (pool *BlockPool) requestersLen() int64 {
|
||||
return int64(len(pool.requesters))
|
||||
}
|
||||
|
||||
func (pool *BlockPool) sendRequest(height int64, peerID p2p.ID) {
|
||||
if !pool.IsRunning() {
|
||||
return
|
||||
}
|
||||
pool.requestsCh <- BlockRequest{height, peerID}
|
||||
}
|
||||
|
||||
func (pool *BlockPool) sendError(err error, peerID p2p.ID) {
|
||||
if !pool.IsRunning() {
|
||||
return
|
||||
}
|
||||
pool.errorsCh <- peerError{err, peerID}
|
||||
}
|
||||
|
||||
// unused by tendermint; left for debugging purposes
|
||||
func (pool *BlockPool) debug() string {
|
||||
pool.mtx.Lock()
|
||||
defer pool.mtx.Unlock()
|
||||
|
||||
str := ""
|
||||
nextHeight := pool.height + pool.requestersLen()
|
||||
for h := pool.height; h < nextHeight; h++ {
|
||||
if pool.requesters[h] == nil {
|
||||
str += cmn.Fmt("H(%v):X ", h)
|
||||
} else {
|
||||
str += cmn.Fmt("H(%v):", h)
|
||||
str += cmn.Fmt("B?(%v) ", pool.requesters[h].block != nil)
|
||||
}
|
||||
}
|
||||
return str
|
||||
}
|
||||
|
||||
//-------------------------------------
|
||||
|
||||
type bpPeer struct {
|
||||
pool *BlockPool
|
||||
id p2p.ID
|
||||
recvMonitor *flow.Monitor
|
||||
|
||||
height int64
|
||||
numPending int32
|
||||
timeout *time.Timer
|
||||
didTimeout bool
|
||||
|
||||
logger log.Logger
|
||||
}
|
||||
|
||||
func newBPPeer(pool *BlockPool, peerID p2p.ID, height int64) *bpPeer {
|
||||
peer := &bpPeer{
|
||||
pool: pool,
|
||||
id: peerID,
|
||||
height: height,
|
||||
numPending: 0,
|
||||
logger: log.NewNopLogger(),
|
||||
}
|
||||
return peer
|
||||
}
|
||||
|
||||
func (peer *bpPeer) setLogger(l log.Logger) {
|
||||
peer.logger = l
|
||||
}
|
||||
|
||||
func (peer *bpPeer) resetMonitor() {
|
||||
peer.recvMonitor = flow.New(time.Second, time.Second*40)
|
||||
initialValue := float64(minRecvRate) * math.E
|
||||
peer.recvMonitor.SetREMA(initialValue)
|
||||
}
|
||||
|
||||
func (peer *bpPeer) resetTimeout() {
|
||||
if peer.timeout == nil {
|
||||
peer.timeout = time.AfterFunc(peerTimeout, peer.onTimeout)
|
||||
} else {
|
||||
peer.timeout.Reset(peerTimeout)
|
||||
}
|
||||
}
|
||||
|
||||
func (peer *bpPeer) incrPending() {
|
||||
if peer.numPending == 0 {
|
||||
peer.resetMonitor()
|
||||
peer.resetTimeout()
|
||||
}
|
||||
peer.numPending++
|
||||
}
|
||||
|
||||
func (peer *bpPeer) decrPending(recvSize int) {
|
||||
peer.numPending--
|
||||
if peer.numPending == 0 {
|
||||
peer.timeout.Stop()
|
||||
} else {
|
||||
peer.recvMonitor.Update(recvSize)
|
||||
peer.resetTimeout()
|
||||
}
|
||||
}
|
||||
|
||||
func (peer *bpPeer) onTimeout() {
|
||||
peer.pool.mtx.Lock()
|
||||
defer peer.pool.mtx.Unlock()
|
||||
|
||||
err := errors.New("peer did not send us anything")
|
||||
peer.pool.sendError(err, peer.id)
|
||||
peer.logger.Error("SendTimeout", "reason", err, "timeout", peerTimeout)
|
||||
peer.didTimeout = true
|
||||
}
|
||||
|
||||
//-------------------------------------
|
||||
|
||||
type bpRequester struct {
|
||||
cmn.BaseService
|
||||
pool *BlockPool
|
||||
height int64
|
||||
gotBlockCh chan struct{}
|
||||
redoCh chan struct{}
|
||||
|
||||
mtx sync.Mutex
|
||||
peerID p2p.ID
|
||||
block *types.Block
|
||||
}
|
||||
|
||||
func newBPRequester(pool *BlockPool, height int64) *bpRequester {
|
||||
bpr := &bpRequester{
|
||||
pool: pool,
|
||||
height: height,
|
||||
gotBlockCh: make(chan struct{}, 1),
|
||||
redoCh: make(chan struct{}, 1),
|
||||
|
||||
peerID: "",
|
||||
block: nil,
|
||||
}
|
||||
bpr.BaseService = *cmn.NewBaseService(nil, "bpRequester", bpr)
|
||||
return bpr
|
||||
}
|
||||
|
||||
func (bpr *bpRequester) OnStart() error {
|
||||
go bpr.requestRoutine()
|
||||
return nil
|
||||
}
|
||||
|
||||
// Returns true if the peer matches and block doesn't already exist.
|
||||
func (bpr *bpRequester) setBlock(block *types.Block, peerID p2p.ID) bool {
|
||||
bpr.mtx.Lock()
|
||||
if bpr.block != nil || bpr.peerID != peerID {
|
||||
bpr.mtx.Unlock()
|
||||
return false
|
||||
}
|
||||
bpr.block = block
|
||||
bpr.mtx.Unlock()
|
||||
|
||||
select {
|
||||
case bpr.gotBlockCh <- struct{}{}:
|
||||
default:
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
func (bpr *bpRequester) getBlock() *types.Block {
|
||||
bpr.mtx.Lock()
|
||||
defer bpr.mtx.Unlock()
|
||||
return bpr.block
|
||||
}
|
||||
|
||||
func (bpr *bpRequester) getPeerID() p2p.ID {
|
||||
bpr.mtx.Lock()
|
||||
defer bpr.mtx.Unlock()
|
||||
return bpr.peerID
|
||||
}
|
||||
|
||||
// This is called from the requestRoutine, upon redo().
|
||||
func (bpr *bpRequester) reset() {
|
||||
bpr.mtx.Lock()
|
||||
defer bpr.mtx.Unlock()
|
||||
|
||||
if bpr.block != nil {
|
||||
atomic.AddInt32(&bpr.pool.numPending, 1)
|
||||
}
|
||||
|
||||
bpr.peerID = ""
|
||||
bpr.block = nil
|
||||
}
|
||||
|
||||
// Tells bpRequester to pick another peer and try again.
|
||||
// NOTE: Nonblocking, and does nothing if another redo
|
||||
// was already requested.
|
||||
func (bpr *bpRequester) redo() {
|
||||
select {
|
||||
case bpr.redoCh <- struct{}{}:
|
||||
default:
|
||||
}
|
||||
}
|
||||
|
||||
// Responsible for making more requests as necessary
|
||||
// Returns only when a block is found (e.g. AddBlock() is called)
|
||||
func (bpr *bpRequester) requestRoutine() {
|
||||
OUTER_LOOP:
|
||||
for {
|
||||
// Pick a peer to send request to.
|
||||
var peer *bpPeer
|
||||
PICK_PEER_LOOP:
|
||||
for {
|
||||
if !bpr.IsRunning() || !bpr.pool.IsRunning() {
|
||||
return
|
||||
}
|
||||
peer = bpr.pool.pickIncrAvailablePeer(bpr.height)
|
||||
if peer == nil {
|
||||
//log.Info("No peers available", "height", height)
|
||||
time.Sleep(requestIntervalMS * time.Millisecond)
|
||||
continue PICK_PEER_LOOP
|
||||
}
|
||||
break PICK_PEER_LOOP
|
||||
}
|
||||
bpr.mtx.Lock()
|
||||
bpr.peerID = peer.id
|
||||
bpr.mtx.Unlock()
|
||||
|
||||
// Send request and wait.
|
||||
bpr.pool.sendRequest(bpr.height, peer.id)
|
||||
WAIT_LOOP:
|
||||
for {
|
||||
select {
|
||||
case <-bpr.pool.Quit():
|
||||
bpr.Stop()
|
||||
return
|
||||
case <-bpr.Quit():
|
||||
return
|
||||
case <-bpr.redoCh:
|
||||
bpr.reset()
|
||||
continue OUTER_LOOP
|
||||
case <-bpr.gotBlockCh:
|
||||
// We got a block!
|
||||
// Continue the for-loop and wait til Quit.
|
||||
continue WAIT_LOOP
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
//-------------------------------------
|
||||
|
||||
type BlockRequest struct {
|
||||
Height int64
|
||||
PeerID p2p.ID
|
||||
}
|
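The back-of-the-envelope comment at the top of pool.go above (latency L = 0.1 s, P = 10 peers, 1 kB blocks, 1 Mbit/s ~ 128 kB/s of bandwidth) can be restated as a tiny calculation. This sketch only reproduces that arithmetic and is not part of the diff; the constant names are illustrative.

package main

import "fmt"

func main() {
	const (
		latency   = 0.1          // seconds (L in the comment)
		peers     = 10.0         // P, number of peers
		blockSize = 1024.0       // BS, bytes per block
		bandwidth = 128 * 1024.0 // CB, 1 Mbit/s ~= 128 kB/s, in bytes/s
	)

	perPeer := bandwidth / peers        // ~12.8 kB/s of bandwidth per peer
	blocksPerSec := perPeer / blockSize // ~12.8 blocks/s downloadable per peer
	inFlight := blocksPerSec * latency  // ~1.28 blocks outstanding on one connection

	fmt.Printf("%.1f blocks/s per peer, %.2f blocks in flight\n", blocksPerSec, inFlight)
}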
148
blockchain/pool_test.go
Normal file
@ -0,0 +1,148 @@
|
||||
package blockchain
|
||||
|
||||
import (
|
||||
"math/rand"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
|
||||
"github.com/tendermint/tendermint/p2p"
|
||||
"github.com/tendermint/tendermint/types"
|
||||
)
|
||||
|
||||
func init() {
|
||||
peerTimeout = 2 * time.Second
|
||||
}
|
||||
|
||||
type testPeer struct {
|
||||
id p2p.ID
|
||||
height int64
|
||||
}
|
||||
|
||||
func makePeers(numPeers int, minHeight, maxHeight int64) map[p2p.ID]testPeer {
|
||||
peers := make(map[p2p.ID]testPeer, numPeers)
|
||||
for i := 0; i < numPeers; i++ {
|
||||
peerID := p2p.ID(cmn.RandStr(12))
|
||||
height := minHeight + rand.Int63n(maxHeight-minHeight)
|
||||
peers[peerID] = testPeer{peerID, height}
|
||||
}
|
||||
return peers
|
||||
}
|
||||
|
||||
func TestBasic(t *testing.T) {
|
||||
start := int64(42)
|
||||
peers := makePeers(10, start+1, 1000)
|
||||
errorsCh := make(chan peerError, 1000)
|
||||
requestsCh := make(chan BlockRequest, 1000)
|
||||
pool := NewBlockPool(start, requestsCh, errorsCh)
|
||||
pool.SetLogger(log.TestingLogger())
|
||||
|
||||
err := pool.Start()
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
defer pool.Stop()
|
||||
|
||||
// Introduce each peer.
|
||||
go func() {
|
||||
for _, peer := range peers {
|
||||
pool.SetPeerHeight(peer.id, peer.height)
|
||||
}
|
||||
}()
|
||||
|
||||
// Start a goroutine to pull blocks
|
||||
go func() {
|
||||
for {
|
||||
if !pool.IsRunning() {
|
||||
return
|
||||
}
|
||||
first, second := pool.PeekTwoBlocks()
|
||||
if first != nil && second != nil {
|
||||
pool.PopRequest()
|
||||
} else {
|
||||
time.Sleep(1 * time.Second)
|
||||
}
|
||||
}
|
||||
}()
|
||||
|
||||
// Pull from channels
|
||||
for {
|
||||
select {
|
||||
case err := <-errorsCh:
|
||||
t.Error(err)
|
||||
case request := <-requestsCh:
|
||||
t.Logf("Pulled new BlockRequest %v", request)
|
||||
if request.Height == 300 {
|
||||
return // Done!
|
||||
}
|
||||
// Request desired, pretend like we got the block immediately.
|
||||
go func() {
|
||||
block := &types.Block{Header: &types.Header{Height: request.Height}}
|
||||
pool.AddBlock(request.PeerID, block, 123)
|
||||
t.Logf("Added block from peer %v (height: %v)", request.PeerID, request.Height)
|
||||
}()
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestTimeout(t *testing.T) {
|
||||
start := int64(42)
|
||||
peers := makePeers(10, start+1, 1000)
|
||||
errorsCh := make(chan peerError, 1000)
|
||||
requestsCh := make(chan BlockRequest, 1000)
|
||||
pool := NewBlockPool(start, requestsCh, errorsCh)
|
||||
pool.SetLogger(log.TestingLogger())
|
||||
err := pool.Start()
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
defer pool.Stop()
|
||||
|
||||
for _, peer := range peers {
|
||||
t.Logf("Peer %v", peer.id)
|
||||
}
|
||||
|
||||
// Introduce each peer.
|
||||
go func() {
|
||||
for _, peer := range peers {
|
||||
pool.SetPeerHeight(peer.id, peer.height)
|
||||
}
|
||||
}()
|
||||
|
||||
// Start a goroutine to pull blocks
|
||||
go func() {
|
||||
for {
|
||||
if !pool.IsRunning() {
|
||||
return
|
||||
}
|
||||
first, second := pool.PeekTwoBlocks()
|
||||
if first != nil && second != nil {
|
||||
pool.PopRequest()
|
||||
} else {
|
||||
time.Sleep(1 * time.Second)
|
||||
}
|
||||
}
|
||||
}()
|
||||
|
||||
// Pull from channels
|
||||
counter := 0
|
||||
timedOut := map[p2p.ID]struct{}{}
|
||||
for {
|
||||
select {
|
||||
case err := <-errorsCh:
|
||||
t.Log(err)
|
||||
// consider error to be always timeout here
|
||||
if _, ok := timedOut[err.peerID]; !ok {
|
||||
counter++
|
||||
if counter == len(peers) {
|
||||
return // Done!
|
||||
}
|
||||
}
|
||||
case request := <-requestsCh:
|
||||
t.Logf("Pulled new BlockRequest %+v", request)
|
||||
}
|
||||
}
|
||||
}
|
400
blockchain/reactor.go
Normal file
@ -0,0 +1,400 @@
|
||||
package blockchain
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"reflect"
|
||||
"time"
|
||||
|
||||
amino "github.com/tendermint/go-amino"
|
||||
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
"github.com/tendermint/tendermint/p2p"
|
||||
sm "github.com/tendermint/tendermint/state"
|
||||
"github.com/tendermint/tendermint/types"
|
||||
)
|
||||
|
||||
const (
|
||||
// BlockchainChannel is a channel for blocks and status updates (`BlockStore` height)
|
||||
BlockchainChannel = byte(0x40)
|
||||
|
||||
trySyncIntervalMS = 50
|
||||
// stop syncing when last block's time is
|
||||
// within this much of the system time.
|
||||
// stopSyncingDurationMinutes = 10
|
||||
|
||||
// ask for best height every 10s
|
||||
statusUpdateIntervalSeconds = 10
|
||||
// check if we should switch to consensus reactor
|
||||
switchToConsensusIntervalSeconds = 1
|
||||
|
||||
// NOTE: keep up to date with bcBlockResponseMessage
|
||||
bcBlockResponseMessagePrefixSize = 4
|
||||
bcBlockResponseMessageFieldKeySize = 1
|
||||
maxMsgSize = types.MaxBlockSizeBytes +
|
||||
bcBlockResponseMessagePrefixSize +
|
||||
bcBlockResponseMessageFieldKeySize
|
||||
)
|
||||
|
||||
type consensusReactor interface {
|
||||
// for when we switch from blockchain reactor and fast sync to
|
||||
// the consensus machine
|
||||
SwitchToConsensus(sm.State, int)
|
||||
}
|
||||
|
||||
type peerError struct {
|
||||
err error
|
||||
peerID p2p.ID
|
||||
}
|
||||
|
||||
func (e peerError) Error() string {
|
||||
return fmt.Sprintf("error with peer %v: %s", e.peerID, e.err.Error())
|
||||
}
|
||||
|
||||
// BlockchainReactor handles long-term catchup syncing.
|
||||
type BlockchainReactor struct {
|
||||
p2p.BaseReactor
|
||||
|
||||
// immutable
|
||||
initialState sm.State
|
||||
|
||||
blockExec *sm.BlockExecutor
|
||||
store *BlockStore
|
||||
pool *BlockPool
|
||||
fastSync bool
|
||||
|
||||
requestsCh <-chan BlockRequest
|
||||
errorsCh <-chan peerError
|
||||
}
|
||||
|
||||
// NewBlockchainReactor returns new reactor instance.
|
||||
func NewBlockchainReactor(state sm.State, blockExec *sm.BlockExecutor, store *BlockStore,
|
||||
fastSync bool) *BlockchainReactor {
|
||||
|
||||
if state.LastBlockHeight != store.Height() {
|
||||
panic(fmt.Sprintf("state (%v) and store (%v) height mismatch", state.LastBlockHeight,
|
||||
store.Height()))
|
||||
}
|
||||
|
||||
const capacity = 1000 // must be bigger than peers count
|
||||
requestsCh := make(chan BlockRequest, capacity)
|
||||
errorsCh := make(chan peerError, capacity) // so we don't block in #Receive#pool.AddBlock
|
||||
	pool := NewBlockPool(
		store.Height()+1,
		requestsCh,
		errorsCh,
	)

	bcR := &BlockchainReactor{
		initialState: state,
		blockExec:    blockExec,
		store:        store,
		pool:         pool,
		fastSync:     fastSync,
		requestsCh:   requestsCh,
		errorsCh:     errorsCh,
	}
	bcR.BaseReactor = *p2p.NewBaseReactor("BlockchainReactor", bcR)
	return bcR
}

// SetLogger implements cmn.Service by setting the logger on reactor and pool.
func (bcR *BlockchainReactor) SetLogger(l log.Logger) {
	bcR.BaseService.Logger = l
	bcR.pool.Logger = l
}

// OnStart implements cmn.Service.
func (bcR *BlockchainReactor) OnStart() error {
	if err := bcR.BaseReactor.OnStart(); err != nil {
		return err
	}
	if bcR.fastSync {
		err := bcR.pool.Start()
		if err != nil {
			return err
		}
		go bcR.poolRoutine()
	}
	return nil
}

// OnStop implements cmn.Service.
func (bcR *BlockchainReactor) OnStop() {
	bcR.BaseReactor.OnStop()
	bcR.pool.Stop()
}

// GetChannels implements Reactor
func (bcR *BlockchainReactor) GetChannels() []*p2p.ChannelDescriptor {
	return []*p2p.ChannelDescriptor{
		{
			ID:                  BlockchainChannel,
			Priority:            10,
			SendQueueCapacity:   1000,
			RecvBufferCapacity:  50 * 4096,
			RecvMessageCapacity: maxMsgSize,
		},
	}
}

// AddPeer implements Reactor by sending our state to peer.
func (bcR *BlockchainReactor) AddPeer(peer p2p.Peer) {
	msgBytes := cdc.MustMarshalBinaryBare(&bcStatusResponseMessage{bcR.store.Height()})
	if !peer.Send(BlockchainChannel, msgBytes) {
		// doing nothing, will try later in `poolRoutine`
	}
	// peer is added to the pool once we receive the first
	// bcStatusResponseMessage from the peer and call pool.SetPeerHeight
}

// RemovePeer implements Reactor by removing peer from the pool.
func (bcR *BlockchainReactor) RemovePeer(peer p2p.Peer, reason interface{}) {
	bcR.pool.RemovePeer(peer.ID())
}

// respondToPeer loads a block and sends it to the requesting peer,
// if we have it. Otherwise, we'll respond saying we don't have it.
// According to the Tendermint spec, if all nodes are honest,
// no node should be requesting for a block that's non-existent.
func (bcR *BlockchainReactor) respondToPeer(msg *bcBlockRequestMessage,
	src p2p.Peer) (queued bool) {

	block := bcR.store.LoadBlock(msg.Height)
	if block != nil {
		msgBytes := cdc.MustMarshalBinaryBare(&bcBlockResponseMessage{Block: block})
		return src.TrySend(BlockchainChannel, msgBytes)
	}

	bcR.Logger.Info("Peer asking for a block we don't have", "src", src, "height", msg.Height)

	msgBytes := cdc.MustMarshalBinaryBare(&bcNoBlockResponseMessage{Height: msg.Height})
	return src.TrySend(BlockchainChannel, msgBytes)
}

// Receive implements Reactor by handling 4 types of messages (look below).
func (bcR *BlockchainReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) {
	msg, err := decodeMsg(msgBytes)
	if err != nil {
		bcR.Logger.Error("Error decoding message", "src", src, "chId", chID, "msg", msg, "err", err, "bytes", msgBytes)
		bcR.Switch.StopPeerForError(src, err)
		return
	}

	bcR.Logger.Debug("Receive", "src", src, "chID", chID, "msg", msg)

	switch msg := msg.(type) {
	case *bcBlockRequestMessage:
		if queued := bcR.respondToPeer(msg, src); !queued {
			// Unfortunately not queued since the queue is full.
		}
	case *bcBlockResponseMessage:
		// Got a block.
		bcR.pool.AddBlock(src.ID(), msg.Block, len(msgBytes))
	case *bcStatusRequestMessage:
		// Send peer our state.
		msgBytes := cdc.MustMarshalBinaryBare(&bcStatusResponseMessage{bcR.store.Height()})
		queued := src.TrySend(BlockchainChannel, msgBytes)
		if !queued {
			// sorry
		}
	case *bcStatusResponseMessage:
		// Got a peer status. Unverified.
		bcR.pool.SetPeerHeight(src.ID(), msg.Height)
	default:
		bcR.Logger.Error(cmn.Fmt("Unknown message type %v", reflect.TypeOf(msg)))
	}
}

// Handle messages from the poolReactor telling the reactor what to do.
// NOTE: Don't sleep in the FOR_LOOP or otherwise slow it down!
// (Except for the SYNC_LOOP, which is the primary purpose and must be synchronous.)
func (bcR *BlockchainReactor) poolRoutine() {

	trySyncTicker := time.NewTicker(trySyncIntervalMS * time.Millisecond)
	statusUpdateTicker := time.NewTicker(statusUpdateIntervalSeconds * time.Second)
	switchToConsensusTicker := time.NewTicker(switchToConsensusIntervalSeconds * time.Second)

	blocksSynced := 0

	chainID := bcR.initialState.ChainID
	state := bcR.initialState

	lastHundred := time.Now()
	lastRate := 0.0

FOR_LOOP:
	for {
		select {
		case request := <-bcR.requestsCh:
			peer := bcR.Switch.Peers().Get(request.PeerID)
			if peer == nil {
				continue FOR_LOOP // Peer has since been disconnected.
			}
			msgBytes := cdc.MustMarshalBinaryBare(&bcBlockRequestMessage{request.Height})
			queued := peer.TrySend(BlockchainChannel, msgBytes)
			if !queued {
				// We couldn't make the request, send-queue full.
				// The pool handles timeouts, just let it go.
				continue FOR_LOOP
			}
		case err := <-bcR.errorsCh:
			peer := bcR.Switch.Peers().Get(err.peerID)
			if peer != nil {
				bcR.Switch.StopPeerForError(peer, err)
			}
		case <-statusUpdateTicker.C:
			// ask for status updates
			go bcR.BroadcastStatusRequest() // nolint: errcheck
		case <-switchToConsensusTicker.C:
			height, numPending, lenRequesters := bcR.pool.GetStatus()
			outbound, inbound, _ := bcR.Switch.NumPeers()
			bcR.Logger.Debug("Consensus ticker", "numPending", numPending, "total", lenRequesters,
				"outbound", outbound, "inbound", inbound)
			if bcR.pool.IsCaughtUp() {
				bcR.Logger.Info("Time to switch to consensus reactor!", "height", height)
				bcR.pool.Stop()

				conR := bcR.Switch.Reactor("CONSENSUS").(consensusReactor)
				conR.SwitchToConsensus(state, blocksSynced)

				break FOR_LOOP
			}
		case <-trySyncTicker.C: // chan time
			// This loop can be slow as long as it's doing syncing work.
		SYNC_LOOP:
			for i := 0; i < 10; i++ {
				// See if there are any blocks to sync.
				first, second := bcR.pool.PeekTwoBlocks()
				//bcR.Logger.Info("TrySync peeked", "first", first, "second", second)
				if first == nil || second == nil {
					// We need both to sync the first block.
					break SYNC_LOOP
				}
				firstParts := first.MakePartSet(state.ConsensusParams.BlockPartSizeBytes)
				firstPartsHeader := firstParts.Header()
				firstID := types.BlockID{first.Hash(), firstPartsHeader}
				// Finally, verify the first block using the second's commit
				// NOTE: we can probably make this more efficient, but note that calling
				// first.Hash() doesn't verify the tx contents, so MakePartSet() is
				// currently necessary.
				err := state.Validators.VerifyCommit(
					chainID, firstID, first.Height, second.LastCommit)
				if err != nil {
					bcR.Logger.Error("Error in validation", "err", err)
					peerID := bcR.pool.RedoRequest(first.Height)
					peer := bcR.Switch.Peers().Get(peerID)
					if peer != nil {
						bcR.Switch.StopPeerForError(peer, fmt.Errorf("BlockchainReactor validation error: %v", err))
					}
					break SYNC_LOOP
				} else {
					bcR.pool.PopRequest()

					// TODO: batch saves so we dont persist to disk every block
					bcR.store.SaveBlock(first, firstParts, second.LastCommit)

					// TODO: same thing for app - but we would need a way to
					// get the hash without persisting the state
					var err error
					state, err = bcR.blockExec.ApplyBlock(state, firstID, first)
					if err != nil {
						// TODO This is bad, are we zombie?
						cmn.PanicQ(cmn.Fmt("Failed to process committed block (%d:%X): %v",
							first.Height, first.Hash(), err))
					}
					blocksSynced++

					if blocksSynced%100 == 0 {
						lastRate = 0.9*lastRate + 0.1*(100/time.Since(lastHundred).Seconds())
						bcR.Logger.Info("Fast Sync Rate", "height", bcR.pool.height,
							"max_peer_height", bcR.pool.MaxPeerHeight(), "blocks/s", lastRate)
						lastHundred = time.Now()
					}
				}
			}
			continue FOR_LOOP
		case <-bcR.Quit():
			break FOR_LOOP
		}
	}
}

// BroadcastStatusRequest broadcasts `BlockStore` height.
func (bcR *BlockchainReactor) BroadcastStatusRequest() error {
	msgBytes := cdc.MustMarshalBinaryBare(&bcStatusRequestMessage{bcR.store.Height()})
	bcR.Switch.Broadcast(BlockchainChannel, msgBytes)
	return nil
}

//-----------------------------------------------------------------------------
// Messages

// BlockchainMessage is a generic message for this reactor.
type BlockchainMessage interface{}

func RegisterBlockchainMessages(cdc *amino.Codec) {
	cdc.RegisterInterface((*BlockchainMessage)(nil), nil)
	cdc.RegisterConcrete(&bcBlockRequestMessage{}, "tendermint/mempool/BlockRequest", nil)
	cdc.RegisterConcrete(&bcBlockResponseMessage{}, "tendermint/mempool/BlockResponse", nil)
	cdc.RegisterConcrete(&bcNoBlockResponseMessage{}, "tendermint/mempool/NoBlockResponse", nil)
	cdc.RegisterConcrete(&bcStatusResponseMessage{}, "tendermint/mempool/StatusResponse", nil)
	cdc.RegisterConcrete(&bcStatusRequestMessage{}, "tendermint/mempool/StatusRequest", nil)
}

func decodeMsg(bz []byte) (msg BlockchainMessage, err error) {
	if len(bz) > maxMsgSize {
		return msg, fmt.Errorf("Msg exceeds max size (%d > %d)", len(bz), maxMsgSize)
	}
	err = cdc.UnmarshalBinaryBare(bz, &msg)
	return
}

//-------------------------------------

type bcBlockRequestMessage struct {
	Height int64
}

func (m *bcBlockRequestMessage) String() string {
	return cmn.Fmt("[bcBlockRequestMessage %v]", m.Height)
}

type bcNoBlockResponseMessage struct {
	Height int64
}

func (brm *bcNoBlockResponseMessage) String() string {
	return cmn.Fmt("[bcNoBlockResponseMessage %d]", brm.Height)
}

//-------------------------------------

type bcBlockResponseMessage struct {
	Block *types.Block
}

func (m *bcBlockResponseMessage) String() string {
	return cmn.Fmt("[bcBlockResponseMessage %v]", m.Block.Height)
}

//-------------------------------------

type bcStatusRequestMessage struct {
	Height int64
}

func (m *bcStatusRequestMessage) String() string {
	return cmn.Fmt("[bcStatusRequestMessage %v]", m.Height)
}

//-------------------------------------

type bcStatusResponseMessage struct {
	Height int64
}

func (m *bcStatusResponseMessage) String() string {
	return cmn.Fmt("[bcStatusResponseMessage %v]", m.Height)
}
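A minimal, self-contained sketch of the register/size-check/decode pattern that RegisterBlockchainMessages and decodeMsg above rely on. The mirror types, the registration name "example/BlockRequest", and the local maxMsgSize constant are assumptions made only for this sketch (the real constant lives elsewhere in the package); the amino calls themselves are exactly the ones used in this file.

package main

import (
	"fmt"

	"github.com/tendermint/go-amino"
)

// Assumed size cap for the sketch; the package defines its own maxMsgSize.
const maxMsgSize = 1048576

// Mirror of the reactor's message interface and one concrete message type.
type BlockchainMessage interface{}

type bcBlockRequestMessage struct {
	Height int64
}

var cdc = amino.NewCodec()

func init() {
	// Interface plus concrete types must be registered before decoding.
	cdc.RegisterInterface((*BlockchainMessage)(nil), nil)
	cdc.RegisterConcrete(&bcBlockRequestMessage{}, "example/BlockRequest", nil)
}

// decodeMsg mirrors the reactor's helper: reject oversized payloads, then
// let amino resolve the concrete type behind the interface.
func decodeMsg(bz []byte) (msg BlockchainMessage, err error) {
	if len(bz) > maxMsgSize {
		return msg, fmt.Errorf("Msg exceeds max size (%d > %d)", len(bz), maxMsgSize)
	}
	err = cdc.UnmarshalBinaryBare(bz, &msg)
	return
}

func main() {
	bz := cdc.MustMarshalBinaryBare(&bcBlockRequestMessage{Height: 7})
	msg, err := decodeMsg(bz)
	fmt.Println(msg, err) // &{7} <nil>
}

Because amino resolves interface values by registered name, both peers must register the same concrete types under the same names; that is what the package-level codec in wire.go (further down in this diff) sets up by calling RegisterBlockchainMessages and crypto.RegisterAmino.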
208 blockchain/reactor_test.go Normal file
@@ -0,0 +1,208 @@
package blockchain

import (
	"net"
	"testing"

	cmn "github.com/tendermint/tendermint/libs/common"
	dbm "github.com/tendermint/tendermint/libs/db"
	"github.com/tendermint/tendermint/libs/log"

	cfg "github.com/tendermint/tendermint/config"
	"github.com/tendermint/tendermint/p2p"
	"github.com/tendermint/tendermint/proxy"
	sm "github.com/tendermint/tendermint/state"
	"github.com/tendermint/tendermint/types"
)

func makeStateAndBlockStore(logger log.Logger) (sm.State, *BlockStore) {
	config := cfg.ResetTestRoot("blockchain_reactor_test")
	// blockDB := dbm.NewDebugDB("blockDB", dbm.NewMemDB())
	// stateDB := dbm.NewDebugDB("stateDB", dbm.NewMemDB())
	blockDB := dbm.NewMemDB()
	stateDB := dbm.NewMemDB()
	blockStore := NewBlockStore(blockDB)
	state, err := sm.LoadStateFromDBOrGenesisFile(stateDB, config.GenesisFile())
	if err != nil {
		panic(cmn.ErrorWrap(err, "error constructing state from genesis file"))
	}
	return state, blockStore
}

func newBlockchainReactor(logger log.Logger, maxBlockHeight int64) *BlockchainReactor {
	state, blockStore := makeStateAndBlockStore(logger)

	// Make the blockchainReactor itself
	fastSync := true
	var nilApp proxy.AppConnConsensus
	blockExec := sm.NewBlockExecutor(dbm.NewMemDB(), log.TestingLogger(), nilApp,
		sm.MockMempool{}, sm.MockEvidencePool{})

	bcReactor := NewBlockchainReactor(state.Copy(), blockExec, blockStore, fastSync)
	bcReactor.SetLogger(logger.With("module", "blockchain"))

	// Next: we need to set a switch in order for peers to be added in
	bcReactor.Switch = p2p.NewSwitch(cfg.DefaultP2PConfig())

	// Lastly: let's add some blocks in
	for blockHeight := int64(1); blockHeight <= maxBlockHeight; blockHeight++ {
		firstBlock := makeBlock(blockHeight, state)
		secondBlock := makeBlock(blockHeight+1, state)
		firstParts := firstBlock.MakePartSet(state.ConsensusParams.BlockGossip.BlockPartSizeBytes)
		blockStore.SaveBlock(firstBlock, firstParts, secondBlock.LastCommit)
	}

	return bcReactor
}

func TestNoBlockResponse(t *testing.T) {
	maxBlockHeight := int64(20)

	bcr := newBlockchainReactor(log.TestingLogger(), maxBlockHeight)
	bcr.Start()
	defer bcr.Stop()

	// Add some peers in
	peer := newbcrTestPeer(p2p.ID(cmn.RandStr(12)))
	bcr.AddPeer(peer)

	chID := byte(0x01)

	tests := []struct {
		height   int64
		existent bool
	}{
		{maxBlockHeight + 2, false},
		{10, true},
		{1, true},
		{100, false},
	}

	// receive a request message from peer,
	// wait for our response to be received on the peer
	for _, tt := range tests {
		reqBlockMsg := &bcBlockRequestMessage{tt.height}
		reqBlockBytes := cdc.MustMarshalBinaryBare(reqBlockMsg)
		bcr.Receive(chID, peer, reqBlockBytes)
		msg := peer.lastBlockchainMessage()

		if tt.existent {
			if blockMsg, ok := msg.(*bcBlockResponseMessage); !ok {
				t.Fatalf("Expected to receive a block response for height %d", tt.height)
			} else if blockMsg.Block.Height != tt.height {
				t.Fatalf("Expected response to be for height %d, got %d", tt.height, blockMsg.Block.Height)
			}
		} else {
			if noBlockMsg, ok := msg.(*bcNoBlockResponseMessage); !ok {
				t.Fatalf("Expected to receive a no block response for height %d", tt.height)
			} else if noBlockMsg.Height != tt.height {
				t.Fatalf("Expected response to be for height %d, got %d", tt.height, noBlockMsg.Height)
			}
		}
	}
}

/*
// NOTE: This is too hard to test without
// an easy way to add test peer to switch
// or without significant refactoring of the module.
// Alternatively we could actually dial a TCP conn but
// that seems extreme.
func TestBadBlockStopsPeer(t *testing.T) {
	maxBlockHeight := int64(20)

	bcr := newBlockchainReactor(log.TestingLogger(), maxBlockHeight)
	bcr.Start()
	defer bcr.Stop()

	// Add some peers in
	peer := newbcrTestPeer(p2p.ID(cmn.RandStr(12)))

	// XXX: This doesn't add the peer to anything,
	// so it's hard to check that it's later removed
	bcr.AddPeer(peer)
	assert.True(t, bcr.Switch.Peers().Size() > 0)

	// send a bad block from the peer
	// default blocks already dont have commits, so should fail
	block := bcr.store.LoadBlock(3)
	msg := &bcBlockResponseMessage{Block: block}
	peer.Send(BlockchainChannel, struct{ BlockchainMessage }{msg})

	ticker := time.NewTicker(time.Millisecond * 10)
	timer := time.NewTimer(time.Second * 2)
LOOP:
	for {
		select {
		case <-ticker.C:
			if bcr.Switch.Peers().Size() == 0 {
				break LOOP
			}
		case <-timer.C:
			t.Fatal("Timed out waiting to disconnect peer")
		}
	}
}
*/

//----------------------------------------------
// utility funcs

func makeTxs(height int64) (txs []types.Tx) {
	for i := 0; i < 10; i++ {
		txs = append(txs, types.Tx([]byte{byte(height), byte(i)}))
	}
	return txs
}

func makeBlock(height int64, state sm.State) *types.Block {
	block, _ := state.MakeBlock(height, makeTxs(height), new(types.Commit))
	return block
}

// The Test peer
type bcrTestPeer struct {
	cmn.BaseService
	id p2p.ID
	ch chan interface{}
}

var _ p2p.Peer = (*bcrTestPeer)(nil)

func newbcrTestPeer(id p2p.ID) *bcrTestPeer {
	bcr := &bcrTestPeer{
		id: id,
		ch: make(chan interface{}, 2),
	}
	bcr.BaseService = *cmn.NewBaseService(nil, "bcrTestPeer", bcr)
	return bcr
}

func (tp *bcrTestPeer) lastBlockchainMessage() interface{} { return <-tp.ch }

func (tp *bcrTestPeer) TrySend(chID byte, msgBytes []byte) bool {
	var msg BlockchainMessage
	err := cdc.UnmarshalBinaryBare(msgBytes, &msg)
	if err != nil {
		panic(cmn.ErrorWrap(err, "Error while trying to parse a BlockchainMessage"))
	}
	if _, ok := msg.(*bcStatusResponseMessage); ok {
		// Discard status response messages since they skew our results
		// We only want to deal with:
		// + bcBlockResponseMessage
		// + bcNoBlockResponseMessage
	} else {
		tp.ch <- msg
	}
	return true
}

func (tp *bcrTestPeer) Send(chID byte, msgBytes []byte) bool { return tp.TrySend(chID, msgBytes) }
func (tp *bcrTestPeer) NodeInfo() p2p.NodeInfo               { return p2p.NodeInfo{} }
func (tp *bcrTestPeer) Status() p2p.ConnectionStatus         { return p2p.ConnectionStatus{} }
func (tp *bcrTestPeer) ID() p2p.ID                           { return tp.id }
func (tp *bcrTestPeer) IsOutbound() bool                     { return false }
func (tp *bcrTestPeer) IsPersistent() bool                   { return true }
func (tp *bcrTestPeer) Get(s string) interface{}             { return s }
func (tp *bcrTestPeer) Set(string, interface{})              {}
func (tp *bcrTestPeer) RemoteIP() net.IP                     { return []byte{127, 0, 0, 1} }
247 blockchain/store.go Normal file
@@ -0,0 +1,247 @@
package blockchain

import (
	"fmt"
	"sync"

	cmn "github.com/tendermint/tendermint/libs/common"
	dbm "github.com/tendermint/tendermint/libs/db"

	"github.com/tendermint/tendermint/types"
)

/*
BlockStore is a simple low level store for blocks.

There are three types of information stored:
 - BlockMeta:  Meta information about each block
 - Block part: Parts of each block, aggregated w/ PartSet
 - Commit:     The commit part of each block, for gossiping precommit votes

Currently the precommit signatures are duplicated in the Block parts as
well as the Commit. In the future this may change, perhaps by moving
the Commit data outside the Block. (TODO)

// NOTE: BlockStore methods will panic if they encounter errors
// deserializing loaded data, indicating probable corruption on disk.
*/
type BlockStore struct {
	db dbm.DB

	mtx    sync.RWMutex
	height int64
}

// NewBlockStore returns a new BlockStore with the given DB,
// initialized to the last height that was committed to the DB.
func NewBlockStore(db dbm.DB) *BlockStore {
	bsjson := LoadBlockStoreStateJSON(db)
	return &BlockStore{
		height: bsjson.Height,
		db:     db,
	}
}

// Height returns the last known contiguous block height.
func (bs *BlockStore) Height() int64 {
	bs.mtx.RLock()
	defer bs.mtx.RUnlock()
	return bs.height
}

// LoadBlock returns the block with the given height.
// If no block is found for that height, it returns nil.
func (bs *BlockStore) LoadBlock(height int64) *types.Block {
	var blockMeta = bs.LoadBlockMeta(height)
	if blockMeta == nil {
		return nil
	}

	var block = new(types.Block)
	buf := []byte{}
	for i := 0; i < blockMeta.BlockID.PartsHeader.Total; i++ {
		part := bs.LoadBlockPart(height, i)
		buf = append(buf, part.Bytes...)
	}
	err := cdc.UnmarshalBinary(buf, block)
	if err != nil {
		// NOTE: The existence of meta should imply the existence of the
		// block. So, make sure meta is only saved after blocks are saved.
		panic(cmn.ErrorWrap(err, "Error reading block"))
	}
	return block
}

// LoadBlockPart returns the Part at the given index
// from the block at the given height.
// If no part is found for the given height and index, it returns nil.
func (bs *BlockStore) LoadBlockPart(height int64, index int) *types.Part {
	var part = new(types.Part)
	bz := bs.db.Get(calcBlockPartKey(height, index))
	if len(bz) == 0 {
		return nil
	}
	err := cdc.UnmarshalBinaryBare(bz, part)
	if err != nil {
		panic(cmn.ErrorWrap(err, "Error reading block part"))
	}
	return part
}

// LoadBlockMeta returns the BlockMeta for the given height.
// If no block is found for the given height, it returns nil.
func (bs *BlockStore) LoadBlockMeta(height int64) *types.BlockMeta {
	var blockMeta = new(types.BlockMeta)
	bz := bs.db.Get(calcBlockMetaKey(height))
	if len(bz) == 0 {
		return nil
	}
	err := cdc.UnmarshalBinaryBare(bz, blockMeta)
	if err != nil {
		panic(cmn.ErrorWrap(err, "Error reading block meta"))
	}
	return blockMeta
}

// LoadBlockCommit returns the Commit for the given height.
// This commit consists of the +2/3 and other Precommit-votes for block at `height`,
// and it comes from the block.LastCommit for `height+1`.
// If no commit is found for the given height, it returns nil.
func (bs *BlockStore) LoadBlockCommit(height int64) *types.Commit {
	var commit = new(types.Commit)
	bz := bs.db.Get(calcBlockCommitKey(height))
	if len(bz) == 0 {
		return nil
	}
	err := cdc.UnmarshalBinaryBare(bz, commit)
	if err != nil {
		panic(cmn.ErrorWrap(err, "Error reading block commit"))
	}
	return commit
}

// LoadSeenCommit returns the locally seen Commit for the given height.
// This is useful when we've seen a commit, but there has not yet been
// a new block at `height + 1` that includes this commit in its block.LastCommit.
func (bs *BlockStore) LoadSeenCommit(height int64) *types.Commit {
	var commit = new(types.Commit)
	bz := bs.db.Get(calcSeenCommitKey(height))
	if len(bz) == 0 {
		return nil
	}
	err := cdc.UnmarshalBinaryBare(bz, commit)
	if err != nil {
		panic(cmn.ErrorWrap(err, "Error reading block seen commit"))
	}
	return commit
}

// SaveBlock persists the given block, blockParts, and seenCommit to the underlying db.
// blockParts: Must be parts of the block
// seenCommit: The +2/3 precommits that were seen which committed at height.
// If all the nodes restart after committing a block,
// we need this to reload the precommits to catch-up nodes to the
// most recent height. Otherwise they'd stall at H-1.
func (bs *BlockStore) SaveBlock(block *types.Block, blockParts *types.PartSet, seenCommit *types.Commit) {
	if block == nil {
		cmn.PanicSanity("BlockStore can only save a non-nil block")
	}
	height := block.Height
	if g, w := height, bs.Height()+1; g != w {
		cmn.PanicSanity(cmn.Fmt("BlockStore can only save contiguous blocks. Wanted %v, got %v", w, g))
	}
	if !blockParts.IsComplete() {
		cmn.PanicSanity(cmn.Fmt("BlockStore can only save complete block part sets"))
	}

	// Save block meta
	blockMeta := types.NewBlockMeta(block, blockParts)
	metaBytes := cdc.MustMarshalBinaryBare(blockMeta)
	bs.db.Set(calcBlockMetaKey(height), metaBytes)

	// Save block parts
	for i := 0; i < blockParts.Total(); i++ {
		part := blockParts.GetPart(i)
		bs.saveBlockPart(height, i, part)
	}

	// Save block commit (duplicate and separate from the Block)
	blockCommitBytes := cdc.MustMarshalBinaryBare(block.LastCommit)
	bs.db.Set(calcBlockCommitKey(height-1), blockCommitBytes)

	// Save seen commit (seen +2/3 precommits for block)
	// NOTE: we can delete this at a later height
	seenCommitBytes := cdc.MustMarshalBinaryBare(seenCommit)
	bs.db.Set(calcSeenCommitKey(height), seenCommitBytes)

	// Save new BlockStoreStateJSON descriptor
	BlockStoreStateJSON{Height: height}.Save(bs.db)

	// Done!
	bs.mtx.Lock()
	bs.height = height
	bs.mtx.Unlock()

	// Flush
	bs.db.SetSync(nil, nil)
}

func (bs *BlockStore) saveBlockPart(height int64, index int, part *types.Part) {
	if height != bs.Height()+1 {
		cmn.PanicSanity(cmn.Fmt("BlockStore can only save contiguous blocks. Wanted %v, got %v", bs.Height()+1, height))
	}
	partBytes := cdc.MustMarshalBinaryBare(part)
	bs.db.Set(calcBlockPartKey(height, index), partBytes)
}

//-----------------------------------------------------------------------------

func calcBlockMetaKey(height int64) []byte {
	return []byte(fmt.Sprintf("H:%v", height))
}

func calcBlockPartKey(height int64, partIndex int) []byte {
	return []byte(fmt.Sprintf("P:%v:%v", height, partIndex))
}

func calcBlockCommitKey(height int64) []byte {
	return []byte(fmt.Sprintf("C:%v", height))
}

func calcSeenCommitKey(height int64) []byte {
	return []byte(fmt.Sprintf("SC:%v", height))
}

//-----------------------------------------------------------------------------

var blockStoreKey = []byte("blockStore")

type BlockStoreStateJSON struct {
	Height int64 `json:"height"`
}

// Save persists the blockStore state to the database as JSON.
func (bsj BlockStoreStateJSON) Save(db dbm.DB) {
	bytes, err := cdc.MarshalJSON(bsj)
	if err != nil {
		cmn.PanicSanity(cmn.Fmt("Could not marshal state bytes: %v", err))
	}
	db.SetSync(blockStoreKey, bytes)
}

// LoadBlockStoreStateJSON returns the BlockStoreStateJSON as loaded from disk.
// If no BlockStoreStateJSON was previously persisted, it returns the zero value.
func LoadBlockStoreStateJSON(db dbm.DB) BlockStoreStateJSON {
	bytes := db.Get(blockStoreKey)
	if len(bytes) == 0 {
		return BlockStoreStateJSON{
			Height: 0,
		}
	}
	bsj := BlockStoreStateJSON{}
	err := cdc.UnmarshalJSON(bytes, &bsj)
	if err != nil {
		panic(fmt.Sprintf("Could not unmarshal bytes: %X", bytes))
	}
	return bsj
}
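A short usage sketch for the store above, assuming the package is imported as github.com/tendermint/tendermint/blockchain (the directory shown in this diff). It only exercises identifiers that appear in the file: NewBlockStore, Height, BlockStoreStateJSON.Save, LoadBlockStoreStateJSON, and the in-memory DB the tests use.

package main

import (
	"fmt"

	"github.com/tendermint/tendermint/blockchain"
	dbm "github.com/tendermint/tendermint/libs/db"
)

func main() {
	db := dbm.NewMemDB()

	// A fresh DB has no "blockStore" key, so the height starts at 0.
	bs := blockchain.NewBlockStore(db)
	fmt.Println(bs.Height()) // 0

	// The JSON descriptor persisted under "blockStore" is what
	// NewBlockStore reads back on startup.
	blockchain.BlockStoreStateJSON{Height: 42}.Save(db)
	fmt.Println(blockchain.LoadBlockStoreStateJSON(db).Height) // 42
	fmt.Println(blockchain.NewBlockStore(db).Height())         // 42
}

In real usage the descriptor is written by SaveBlock, which also enforces the contiguity and completeness checks shown above; writing it directly here only illustrates what NewBlockStore loads at construction time.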
383 blockchain/store_test.go Normal file
@@ -0,0 +1,383 @@
package blockchain

import (
	"bytes"
	"fmt"
	"runtime/debug"
	"strings"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
	"github.com/tendermint/tendermint/libs/db"
	"github.com/tendermint/tendermint/libs/log"

	"github.com/tendermint/tendermint/types"
)

func TestLoadBlockStoreStateJSON(t *testing.T) {
	db := db.NewMemDB()

	bsj := &BlockStoreStateJSON{Height: 1000}
	bsj.Save(db)

	retrBSJ := LoadBlockStoreStateJSON(db)

	assert.Equal(t, *bsj, retrBSJ, "expected the retrieved DBs to match")
}

func TestNewBlockStore(t *testing.T) {
	db := db.NewMemDB()
	db.Set(blockStoreKey, []byte(`{"height": "10000"}`))
	bs := NewBlockStore(db)
	require.Equal(t, int64(10000), bs.Height(), "failed to properly parse blockstore")

	panicCausers := []struct {
		data    []byte
		wantErr string
	}{
		{[]byte("artful-doger"), "not unmarshal bytes"},
		{[]byte(" "), "unmarshal bytes"},
	}

	for i, tt := range panicCausers {
		// Expecting a panic here on trying to parse an invalid blockStore
		_, _, panicErr := doFn(func() (interface{}, error) {
			db.Set(blockStoreKey, tt.data)
			_ = NewBlockStore(db)
			return nil, nil
		})
		require.NotNil(t, panicErr, "#%d panicCauser: %q expected a panic", i, tt.data)
		assert.Contains(t, panicErr.Error(), tt.wantErr, "#%d data: %q", i, tt.data)
	}

	db.Set(blockStoreKey, nil)
	bs = NewBlockStore(db)
	assert.Equal(t, bs.Height(), int64(0), "expecting nil bytes to be unmarshaled alright")
}

func freshBlockStore() (*BlockStore, db.DB) {
	db := db.NewMemDB()
	return NewBlockStore(db), db
}

var (
	state, _ = makeStateAndBlockStore(log.NewTMLogger(new(bytes.Buffer)))

	block       = makeBlock(1, state)
	partSet     = block.MakePartSet(2)
	part1       = partSet.GetPart(0)
	part2       = partSet.GetPart(1)
	seenCommit1 = &types.Commit{Precommits: []*types.Vote{{Height: 10,
		Timestamp: time.Now().UTC()}}}
)

// TODO: This test should be simplified ...

func TestBlockStoreSaveLoadBlock(t *testing.T) {
	state, bs := makeStateAndBlockStore(log.NewTMLogger(new(bytes.Buffer)))
	require.Equal(t, bs.Height(), int64(0), "initially the height should be zero")

	// check there are no blocks at various heights
	noBlockHeights := []int64{0, -1, 100, 1000, 2}
	for i, height := range noBlockHeights {
		if g := bs.LoadBlock(height); g != nil {
			t.Errorf("#%d: height(%d) got a block; want nil", i, height)
		}
	}

	// save a block
	block := makeBlock(bs.Height()+1, state)
	validPartSet := block.MakePartSet(2)
	seenCommit := &types.Commit{Precommits: []*types.Vote{{Height: 10,
		Timestamp: time.Now().UTC()}}}
	bs.SaveBlock(block, partSet, seenCommit)
	require.Equal(t, bs.Height(), block.Header.Height, "expecting the new height to be changed")

	incompletePartSet := types.NewPartSetFromHeader(types.PartSetHeader{Total: 2})
	uncontiguousPartSet := types.NewPartSetFromHeader(types.PartSetHeader{Total: 0})
	uncontiguousPartSet.AddPart(part2)

	header1 := types.Header{
		Height:  1,
		NumTxs:  100,
		ChainID: "block_test",
		Time:    time.Now(),
	}
	header2 := header1
	header2.Height = 4

	// End of setup, test data

	commitAtH10 := &types.Commit{Precommits: []*types.Vote{{Height: 10,
		Timestamp: time.Now().UTC()}}}
	tuples := []struct {
		block      *types.Block
		parts      *types.PartSet
		seenCommit *types.Commit
		wantErr    bool
		wantPanic  string

		corruptBlockInDB      bool
		corruptCommitInDB     bool
		corruptSeenCommitInDB bool
		eraseCommitInDB       bool
		eraseSeenCommitInDB   bool
	}{
		{
			block:      newBlock(&header1, commitAtH10),
			parts:      validPartSet,
			seenCommit: seenCommit1,
		},

		{
			block:     nil,
			wantPanic: "only save a non-nil block",
		},

		{
			block:     newBlock(&header2, commitAtH10),
			parts:     uncontiguousPartSet,
			wantPanic: "only save contiguous blocks", // and incomplete and uncontiguous parts
		},

		{
			block:     newBlock(&header1, commitAtH10),
			parts:     incompletePartSet,
			wantPanic: "only save complete block", // incomplete parts
		},

		{
			block:             newBlock(&header1, commitAtH10),
			parts:             validPartSet,
			seenCommit:        seenCommit1,
			corruptCommitInDB: true, // Corrupt the DB's commit entry
			wantPanic:         "unmarshal to types.Commit failed",
		},

		{
			block:            newBlock(&header1, commitAtH10),
			parts:            validPartSet,
			seenCommit:       seenCommit1,
			wantPanic:        "unmarshal to types.BlockMeta failed",
			corruptBlockInDB: true, // Corrupt the DB's block entry
		},

		{
			block:      newBlock(&header1, commitAtH10),
			parts:      validPartSet,
			seenCommit: seenCommit1,

			// Expecting no error and we want a nil back
			eraseSeenCommitInDB: true,
		},

		{
			block:      newBlock(&header1, commitAtH10),
			parts:      validPartSet,
			seenCommit: seenCommit1,

			corruptSeenCommitInDB: true,
			wantPanic:             "unmarshal to types.Commit failed",
		},

		{
			block:      newBlock(&header1, commitAtH10),
			parts:      validPartSet,
			seenCommit: seenCommit1,

			// Expecting no error and we want a nil back
			eraseCommitInDB: true,
		},
	}

	type quad struct {
		block  *types.Block
		commit *types.Commit
		meta   *types.BlockMeta

		seenCommit *types.Commit
	}

	for i, tuple := range tuples {
		bs, db := freshBlockStore()
		// SaveBlock
		res, err, panicErr := doFn(func() (interface{}, error) {
			bs.SaveBlock(tuple.block, tuple.parts, tuple.seenCommit)
			if tuple.block == nil {
				return nil, nil
			}

			if tuple.corruptBlockInDB {
				db.Set(calcBlockMetaKey(tuple.block.Height), []byte("block-bogus"))
			}
			bBlock := bs.LoadBlock(tuple.block.Height)
			bBlockMeta := bs.LoadBlockMeta(tuple.block.Height)

			if tuple.eraseSeenCommitInDB {
				db.Delete(calcSeenCommitKey(tuple.block.Height))
			}
			if tuple.corruptSeenCommitInDB {
				db.Set(calcSeenCommitKey(tuple.block.Height), []byte("bogus-seen-commit"))
			}
			bSeenCommit := bs.LoadSeenCommit(tuple.block.Height)

			commitHeight := tuple.block.Height - 1
			if tuple.eraseCommitInDB {
				db.Delete(calcBlockCommitKey(commitHeight))
			}
			if tuple.corruptCommitInDB {
				db.Set(calcBlockCommitKey(commitHeight), []byte("foo-bogus"))
			}
			bCommit := bs.LoadBlockCommit(commitHeight)
			return &quad{block: bBlock, seenCommit: bSeenCommit, commit: bCommit,
				meta: bBlockMeta}, nil
		})

		if subStr := tuple.wantPanic; subStr != "" {
			if panicErr == nil {
				t.Errorf("#%d: want a non-nil panic", i)
			} else if got := panicErr.Error(); !strings.Contains(got, subStr) {
				t.Errorf("#%d:\n\tgotErr: %q\nwant substring: %q", i, got, subStr)
			}
			continue
		}

		if tuple.wantErr {
			if err == nil {
				t.Errorf("#%d: got nil error", i)
			}
			continue
		}

		assert.Nil(t, panicErr, "#%d: unexpected panic", i)
		assert.Nil(t, err, "#%d: expecting a non-nil error", i)
		qua, ok := res.(*quad)
		if !ok || qua == nil {
			t.Errorf("#%d: got nil quad back; gotType=%T", i, res)
			continue
		}
		if tuple.eraseSeenCommitInDB {
			assert.Nil(t, qua.seenCommit,
				"erased the seenCommit in the DB hence we should get back a nil seenCommit")
		}
		if tuple.eraseCommitInDB {
			assert.Nil(t, qua.commit,
				"erased the commit in the DB hence we should get back a nil commit")
		}
	}
}

func TestLoadBlockPart(t *testing.T) {
	bs, db := freshBlockStore()
	height, index := int64(10), 1
	loadPart := func() (interface{}, error) {
		part := bs.LoadBlockPart(height, index)
		return part, nil
	}

	// Initially no contents.
	// 1. Requesting for a non-existent block shouldn't fail
	res, _, panicErr := doFn(loadPart)
	require.Nil(t, panicErr, "a non-existent block part shouldn't cause a panic")
	require.Nil(t, res, "a non-existent block part should return nil")

	// 2. Next save a corrupted block then try to load it
	db.Set(calcBlockPartKey(height, index), []byte("Tendermint"))
	res, _, panicErr = doFn(loadPart)
	require.NotNil(t, panicErr, "expecting a non-nil panic")
	require.Contains(t, panicErr.Error(), "unmarshal to types.Part failed")

	// 3. A good block serialized and saved to the DB should be retrievable
	db.Set(calcBlockPartKey(height, index), cdc.MustMarshalBinaryBare(part1))
	gotPart, _, panicErr := doFn(loadPart)
	require.Nil(t, panicErr, "an existent and proper block should not panic")
	require.Nil(t, res, "a properly saved block should return a proper block")
	require.Equal(t, gotPart.(*types.Part).Hash(), part1.Hash(),
		"expecting successful retrieval of previously saved block")
}

func TestLoadBlockMeta(t *testing.T) {
	bs, db := freshBlockStore()
	height := int64(10)
	loadMeta := func() (interface{}, error) {
		meta := bs.LoadBlockMeta(height)
		return meta, nil
	}

	// Initially no contents.
	// 1. Requesting for a non-existent blockMeta shouldn't fail
	res, _, panicErr := doFn(loadMeta)
	require.Nil(t, panicErr, "a non-existent blockMeta shouldn't cause a panic")
	require.Nil(t, res, "a non-existent blockMeta should return nil")

	// 2. Next save a corrupted blockMeta then try to load it
	db.Set(calcBlockMetaKey(height), []byte("Tendermint-Meta"))
	res, _, panicErr = doFn(loadMeta)
	require.NotNil(t, panicErr, "expecting a non-nil panic")
	require.Contains(t, panicErr.Error(), "unmarshal to types.BlockMeta")

	// 3. A good blockMeta serialized and saved to the DB should be retrievable
	meta := &types.BlockMeta{}
	db.Set(calcBlockMetaKey(height), cdc.MustMarshalBinaryBare(meta))
	gotMeta, _, panicErr := doFn(loadMeta)
	require.Nil(t, panicErr, "an existent and proper block should not panic")
	require.Nil(t, res, "a properly saved blockMeta should return a proper blocMeta ")
	require.Equal(t, cdc.MustMarshalBinaryBare(meta), cdc.MustMarshalBinaryBare(gotMeta),
		"expecting successful retrieval of previously saved blockMeta")
}

func TestBlockFetchAtHeight(t *testing.T) {
	state, bs := makeStateAndBlockStore(log.NewTMLogger(new(bytes.Buffer)))
	require.Equal(t, bs.Height(), int64(0), "initially the height should be zero")
	block := makeBlock(bs.Height()+1, state)

	partSet := block.MakePartSet(2)
	seenCommit := &types.Commit{Precommits: []*types.Vote{{Height: 10,
		Timestamp: time.Now().UTC()}}}

	bs.SaveBlock(block, partSet, seenCommit)
	require.Equal(t, bs.Height(), block.Header.Height, "expecting the new height to be changed")

	blockAtHeight := bs.LoadBlock(bs.Height())
	bz1 := cdc.MustMarshalBinaryBare(block)
	bz2 := cdc.MustMarshalBinaryBare(blockAtHeight)
	require.Equal(t, bz1, bz2)
	require.Equal(t, block.Hash(), blockAtHeight.Hash(),
		"expecting a successful load of the last saved block")

	blockAtHeightPlus1 := bs.LoadBlock(bs.Height() + 1)
	require.Nil(t, blockAtHeightPlus1, "expecting an unsuccessful load of Height()+1")
	blockAtHeightPlus2 := bs.LoadBlock(bs.Height() + 2)
	require.Nil(t, blockAtHeightPlus2, "expecting an unsuccessful load of Height()+2")
}

func doFn(fn func() (interface{}, error)) (res interface{}, err error, panicErr error) {
	defer func() {
		if r := recover(); r != nil {
			switch e := r.(type) {
			case error:
				panicErr = e
			case string:
				panicErr = fmt.Errorf("%s", e)
			default:
				if st, ok := r.(fmt.Stringer); ok {
					panicErr = fmt.Errorf("%s", st)
				} else {
					panicErr = fmt.Errorf("%s", debug.Stack())
				}
			}
		}
	}()

	res, err = fn()
	return res, err, panicErr
}

func newBlock(hdr *types.Header, lastCommit *types.Commit) *types.Block {
	return &types.Block{
		Header:     hdr,
		LastCommit: lastCommit,
	}
}
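The doFn helper above is what lets these table-driven tests assert on the panics the store raises for corrupted or non-contiguous data. A stripped-down, self-contained illustration of the same recover-into-error technique follows; the catchPanic name is illustrative only and not part of the package.

package main

import "fmt"

// catchPanic runs fn and converts any panic into an error the caller
// can inspect, instead of letting it abort the test binary.
func catchPanic(fn func()) (panicErr error) {
	defer func() {
		if r := recover(); r != nil {
			panicErr = fmt.Errorf("%v", r)
		}
	}()
	fn()
	return nil
}

func main() {
	err := catchPanic(func() { panic("BlockStore can only save a non-nil block") })
	fmt.Println(err) // BlockStore can only save a non-nil block
}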
13 blockchain/wire.go Normal file
@@ -0,0 +1,13 @@
package blockchain

import (
	"github.com/tendermint/go-amino"
	"github.com/tendermint/tendermint/crypto"
)

var cdc = amino.NewCodec()

func init() {
	RegisterBlockchainMessages(cdc)
	crypto.RegisterAmino(cdc)
}
53 cmd/priv_val_server/main.go Normal file
@@ -0,0 +1,53 @@
package main

import (
	"flag"
	"os"

	crypto "github.com/tendermint/tendermint/crypto"
	cmn "github.com/tendermint/tendermint/libs/common"
	"github.com/tendermint/tendermint/libs/log"

	"github.com/tendermint/tendermint/privval"
)

func main() {
	var (
		addr        = flag.String("addr", ":26659", "Address of client to connect to")
		chainID     = flag.String("chain-id", "mychain", "chain id")
		privValPath = flag.String("priv", "", "priv val file path")

		logger = log.NewTMLogger(
			log.NewSyncWriter(os.Stdout),
		).With("module", "priv_val")
	)
	flag.Parse()

	logger.Info(
		"Starting private validator",
		"addr", *addr,
		"chainID", *chainID,
		"privPath", *privValPath,
	)

	pv := privval.LoadFilePV(*privValPath)

	rs := privval.NewRemoteSigner(
		logger,
		*chainID,
		*addr,
		pv,
		crypto.GenPrivKeyEd25519(),
	)
	err := rs.Start()
	if err != nil {
		panic(err)
	}

	cmn.TrapSignal(func() {
		err := rs.Stop()
		if err != nil {
			panic(err)
		}
	})
}
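Assuming a binary built from this file, the flags defined above give an invocation along the lines of: priv_val_server -addr 127.0.0.1:26659 -chain-id mychain -priv /path/to/priv_validator.json. The address and file path here are placeholders; the compiled-in defaults are ":26659", "mychain", and an empty priv val path.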
Some files were not shown because too many files have changed in this diff.