Merge pull request #2807 from tendermint/release/v0.26.1
Release/v0.26.1
CHANGELOG.md (34 changed lines)
@@ -1,5 +1,31 @@
 # Changelog

+## v0.26.1
+
+*November 11, 2018*
+
+Special thanks to external contributors on this release: @katakonst
+
+Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
+
+### IMPROVEMENTS:
+
+- [consensus] [\#2704](https://github.com/tendermint/tendermint/issues/2704) Simplify valid POL round logic
+- [docs] [\#2749](https://github.com/tendermint/tendermint/issues/2749) Deduplicate some ABCI docs
+- [mempool] More detailed log messages
+  - [\#2724](https://github.com/tendermint/tendermint/issues/2724)
+  - [\#2762](https://github.com/tendermint/tendermint/issues/2762)
+
+### BUG FIXES:
+
+- [autofile] [\#2703](https://github.com/tendermint/tendermint/issues/2703) Do not panic when checking Head size
+- [crypto/merkle] [\#2756](https://github.com/tendermint/tendermint/issues/2756) Fix crypto/merkle ProofOperators.Verify to check bounds on keypath parts.
+- [mempool] fix a bug where we create a WAL despite `wal_dir` being empty
+- [p2p] [\#2771](https://github.com/tendermint/tendermint/issues/2771) Fix `peer-id` label name to `peer_id` in prometheus metrics
+- [p2p] [\#2797](https://github.com/tendermint/tendermint/pull/2797) Fix IDs in peer NodeInfo and require them for addresses
+  in AddressBook
+- [p2p] [\#2797](https://github.com/tendermint/tendermint/pull/2797) Do not close conn immediately after sending pex addrs in seed mode. Partial fix for [\#2092](https://github.com/tendermint/tendermint/issues/2092).
+
 ## v0.26.0

 *November 2, 2018*

@@ -139,9 +165,7 @@ increasing attention to backwards compatibility. Thanks for bearing with us!
 - [node] [\#2434](https://github.com/tendermint/tendermint/issues/2434) Make node respond to signal interrupts while sleeping for genesis time
 - [state] [\#2616](https://github.com/tendermint/tendermint/issues/2616) Pass nil to NewValidatorSet() when genesis file's Validators field is nil
 - [p2p] [\#2555](https://github.com/tendermint/tendermint/issues/2555) Fix p2p switch FlushThrottle value (@goolAdapter)
-- [p2p] [\#2668](https://github.com/tendermint/tendermint/issues/2668) Reconnect to originally dialed address (not self-reported
-address) for persistent peers
-
+- [p2p] [\#2668](https://github.com/tendermint/tendermint/issues/2668) Reconnect to originally dialed address (not self-reported address) for persistent peers

 ## v0.25.0


@@ -307,8 +331,8 @@ BUG FIXES:
 *August 22nd, 2018*

 BUG FIXES:
-- [libs/autofile] \#2261 Fix log rotation so it actually happens.
-- Fixes issues with consensus WAL growing unbounded ala \#2259
+- [libs/autofile] [\#2261](https://github.com/tendermint/tendermint/issues/2261) Fix log rotation so it actually happens.
+- Fixes issues with consensus WAL growing unbounded ala [\#2259](https://github.com/tendermint/tendermint/issues/2259)

 ## 0.23.0


@@ -1,6 +1,6 @@
 # Pending

-## v0.26.1
+## v0.26.2

 *TBA*

@@ -25,3 +25,4 @@ Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
 ### IMPROVEMENTS:

 ### BUG FIXES:
+
abci/README.md (156 changed lines)
@@ -1,7 +1,5 @@
 # Application BlockChain Interface (ABCI)

-[](https://circleci.com/gh/tendermint/abci)
-
 Blockchains are systems for multi-master state machine replication.
 **ABCI** is an interface that defines the boundary between the replication engine (the blockchain),
 and the state machine (the application).

@@ -12,160 +10,28 @@ Previously, the ABCI was referred to as TMSP.

 The community has provided a number of addtional implementations, see the [Tendermint Ecosystem](https://tendermint.com/ecosystem)


+## Installation & Usage
+
+To get up and running quickly, see the [getting started guide](../docs/app-dev/getting-started.md) along with the [abci-cli documentation](../docs/app-dev/abci-cli.md) which will go through the examples found in the [examples](./example/) directory.
+
 ## Specification

 A detailed description of the ABCI methods and message types is contained in:

-- [A prose specification](specification.md)
-- [A protobuf file](https://github.com/tendermint/tendermint/blob/master/abci/types/types.proto)
-- [A Go interface](https://github.com/tendermint/tendermint/blob/master/abci/types/application.go).
+- [The main spec](../docs/spec/abci/abci.md)
+- [A protobuf file](./types/types.proto)
+- [A Go interface](./types/application.go)

-For more background information on ABCI, motivations, and tendermint, please visit [the documentation](https://tendermint.com/docs/).
-The two guides to focus on are the `Application Development Guide` and `Using ABCI-CLI`.


 ## Protocol Buffers

-To compile the protobuf file, run:
+To compile the protobuf file, run (from the root of the repo):

 ```
-cd $GOPATH/src/github.com/tendermint/tendermint/; make protoc_abci
+make protoc_abci
 ```

 See `protoc --help` and [the Protocol Buffers site](https://developers.google.com/protocol-buffers)
-for details on compiling for other languages. Note we also include a [GRPC](http://www.grpc.io/docs)
+for details on compiling for other languages. Note we also include a [GRPC](https://www.grpc.io/docs)
 service definition.

-## Install ABCI-CLI
-
-The `abci-cli` is a simple tool for debugging ABCI servers and running some
-example apps. To install it:
-
-```
-mkdir -p $GOPATH/src/github.com/tendermint
-cd $GOPATH/src/github.com/tendermint
-git clone https://github.com/tendermint/tendermint.git
-cd tendermint
-make get_tools
-make get_vendor_deps
-make install_abci
-```
-
-## Implementation
-
-We provide three implementations of the ABCI in Go:
-
-- Golang in-process
-- ABCI-socket
-- GRPC
-
-Note the GRPC version is maintained primarily to simplify onboarding and prototyping and is not receiving the same
-attention to security and performance as the others
-
-### In Process
-
-The simplest implementation just uses function calls within Go.
-This means ABCI applications written in Golang can be compiled with TendermintCore and run as a single binary.
-
-See the [examples](#examples) below for more information.
-
-### Socket (TSP)
-
-ABCI is best implemented as a streaming protocol.
-The socket implementation provides for asynchronous, ordered message passing over unix or tcp.
-Messages are serialized using Protobuf3 and length-prefixed with a [signed Varint](https://developers.google.com/protocol-buffers/docs/encoding?csw=1#signed-integers)
-
-For example, if the Protobuf3 encoded ABCI message is `0xDEADBEEF` (4 bytes), the length-prefixed message is `0x08DEADBEEF`, since `0x08` is the signed varint
-encoding of `4`. If the Protobuf3 encoded ABCI message is 65535 bytes long, the length-prefixed message would be like `0xFEFF07...`.
-
-Note the benefit of using this `varint` encoding over the old version (where integers were encoded as `<len of len><big endian len>` is that
-it is the standard way to encode integers in Protobuf. It is also generally shorter.
-
-### GRPC
-
-GRPC is an rpc framework native to Protocol Buffers with support in many languages.
-Implementing the ABCI using GRPC can allow for faster prototyping, but is expected to be much slower than
-the ordered, asynchronous socket protocol. The implementation has also not received as much testing or review.
-
-Note the length-prefixing used in the socket implementation does not apply for GRPC.
-
-## Usage
-
-The `abci-cli` tool wraps an ABCI client and can be used for probing/testing an ABCI server.
-For instance, `abci-cli test` will run a test sequence against a listening server running the Counter application (see below).
-It can also be used to run some example applications.
-See [the documentation](https://tendermint.com/docs/) for more details.
-
-### Examples
-
-Check out the variety of example applications in the [example directory](example/).
-It also contains the code refered to by the `counter` and `kvstore` apps; these apps come
-built into the `abci-cli` binary.
-
-#### Counter
-
-The `abci-cli counter` application illustrates nonce checking in transactions. It's code looks like:
-
-```golang
-func cmdCounter(cmd *cobra.Command, args []string) error {
-
-    app := counter.NewCounterApplication(flagSerial)
-
-    logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))
-
-    // Start the listener
-    srv, err := server.NewServer(flagAddrC, flagAbci, app)
-    if err != nil {
-        return err
-    }
-    srv.SetLogger(logger.With("module", "abci-server"))
-    if err := srv.Start(); err != nil {
-        return err
-    }
-
-    // Wait forever
-    cmn.TrapSignal(func() {
-        // Cleanup
-        srv.Stop()
-    })
-    return nil
-}
-```
-
-and can be found in [this file](cmd/abci-cli/abci-cli.go).
-
-#### kvstore
-
-The `abci-cli kvstore` application, which illustrates a simple key-value Merkle tree
-
-```golang
-func cmdKVStore(cmd *cobra.Command, args []string) error {
-    logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))
-
-    // Create the application - in memory or persisted to disk
-    var app types.Application
-    if flagPersist == "" {
-        app = kvstore.NewKVStoreApplication()
-    } else {
-        app = kvstore.NewPersistentKVStoreApplication(flagPersist)
-        app.(*kvstore.PersistentKVStoreApplication).SetLogger(logger.With("module", "kvstore"))
-    }
-
-    // Start the listener
-    srv, err := server.NewServer(flagAddrD, flagAbci, app)
-    if err != nil {
-        return err
-    }
-    srv.SetLogger(logger.With("module", "abci-server"))
-    if err := srv.Start(); err != nil {
-        return err
-    }
-
-    // Wait forever
-    cmn.TrapSignal(func() {
-        // Cleanup
-        srv.Stop()
-    })
-    return nil
-}
-```
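
The varint framing described in the removed section above is small enough to show end to end. Below is a minimal Go sketch of the signed-varint length prefix, using only the standard library; it illustrates the wire format only and is not the repository's own encoder.

```go
package main

import (
    "encoding/binary"
    "fmt"
)

// frame length-prefixes a proto3-encoded ABCI message for the socket
// protocol: the payload length is written as a signed (zigzag) varint,
// followed by the payload bytes.
func frame(msg []byte) []byte {
    prefix := make([]byte, binary.MaxVarintLen64)
    n := binary.PutVarint(prefix, int64(len(msg)))
    return append(prefix[:n], msg...)
}

func main() {
    msg := []byte{0xDE, 0xAD, 0xBE, 0xEF} // 4-byte payload
    fmt.Printf("0x%X\n", frame(msg))      // 0x08DEADBEEF: varint(4) encodes to 0x08
}
```

Decoding is symmetric: read one varint with `binary.ReadVarint`, then read exactly that many payload bytes.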

@@ -497,6 +497,11 @@ func (cfg *MempoolConfig) WalDir() string {
     return rootify(cfg.WalPath, cfg.RootDir)
 }

+// WalEnabled returns true if the WAL is enabled.
+func (cfg *MempoolConfig) WalEnabled() bool {
+    return cfg.WalPath != ""
+}
+
 // ValidateBasic performs basic validation (checking param bounds, etc.) and
 // returns an error if any check fails.
 func (cfg *MempoolConfig) ValidateBasic() error {
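
Since `WalEnabled` only tests for a non-empty wal path, turning the mempool WAL off is purely a configuration matter. A small usage sketch follows; `DefaultMempoolConfig` is assumed to come from the same config package, while `WalPath` and `WalEnabled` are exactly the names in the hunk above.

```go
package main

import (
    "fmt"

    cfg "github.com/tendermint/tendermint/config"
)

func main() {
    mcfg := cfg.DefaultMempoolConfig()
    fmt.Println(mcfg.WalEnabled()) // true while the default wal path is set

    // Clearing the path (wal_dir = "" in config.toml) disables the WAL,
    // which is what the node now checks before calling InitWAL().
    mcfg.WalPath = ""
    fmt.Println(mcfg.WalEnabled()) // false
}
```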

@@ -1017,7 +1017,11 @@ func (ps *PeerState) PickSendVote(votes types.VoteSetReader) bool {
     if vote, ok := ps.PickVoteToSend(votes); ok {
         msg := &VoteMessage{vote}
         ps.logger.Debug("Sending vote message", "ps", ps, "vote", vote)
-        return ps.peer.Send(VoteChannel, cdc.MustMarshalBinaryBare(msg))
+        if ps.peer.Send(VoteChannel, cdc.MustMarshalBinaryBare(msg)) {
+            ps.SetHasVote(vote)
+            return true
+        }
+        return false
     }
     return false
 }

@@ -1046,7 +1050,6 @@ func (ps *PeerState) PickVoteToSend(votes types.VoteSetReader) (vote *types.Vote
         return nil, false // Not something worth sending
     }
     if index, ok := votes.BitArray().Sub(psVotes).PickRandom(); ok {
-        ps.setHasVote(height, round, type_, index)
         return votes.GetByIndex(index), true
     }
     return nil, false

@@ -1408,9 +1408,9 @@ func (cs *ConsensusState) defaultSetProposal(proposal *types.Proposal) error {
         return nil
     }

-    // Verify POLRound, which must be -1 or between 0 and proposal.Round exclusive.
-    if proposal.POLRound != -1 &&
-        (proposal.POLRound < 0 || proposal.Round <= proposal.POLRound) {
+    // Verify POLRound, which must be -1 or in range [0, proposal.Round).
+    if proposal.POLRound < -1 ||
+        (proposal.POLRound >= 0 && proposal.POLRound >= proposal.Round) {
         return ErrInvalidProposalPOLRound
     }

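
The rewritten condition accepts exactly -1 or a value in the half-open range [0, proposal.Round). A self-contained sketch of that predicate with a few sample inputs (the function name is hypothetical, not part of the consensus package):

```go
package main

import "fmt"

// validPOLRound mirrors the simplified check: the proof-of-lock round must be
// -1 (no POL) or lie in [0, round).
func validPOLRound(polRound, round int) bool {
    if polRound < -1 {
        return false
    }
    if polRound >= 0 && polRound >= round {
        return false
    }
    return true
}

func main() {
    fmt.Println(validPOLRound(-1, 3)) // true: no proof-of-lock round
    fmt.Println(validPOLRound(2, 3))  // true: 0 <= 2 < 3
    fmt.Println(validPOLRound(3, 3))  // false: must be strictly below the proposal round
    fmt.Println(validPOLRound(-2, 3)) // false: anything below -1 is rejected
}
```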

@@ -13,6 +13,7 @@ import (
     amino "github.com/tendermint/go-amino"
     auto "github.com/tendermint/tendermint/libs/autofile"
     cmn "github.com/tendermint/tendermint/libs/common"
+    "github.com/tendermint/tendermint/libs/log"
     "github.com/tendermint/tendermint/types"
     tmtime "github.com/tendermint/tendermint/types/time"
 )

@@ -95,6 +96,11 @@ func (wal *baseWAL) Group() *auto.Group {
     return wal.group
 }

+func (wal *baseWAL) SetLogger(l log.Logger) {
+    wal.BaseService.Logger = l
+    wal.group.SetLogger(l)
+}
+
 func (wal *baseWAL) OnStart() error {
     size, err := wal.group.Head.Size()
     if err != nil {

@@ -43,6 +43,9 @@ func (poz ProofOperators) Verify(root []byte, keypath string, args [][]byte) (er
     for i, op := range poz {
         key := op.GetKey()
         if len(key) != 0 {
+            if len(keys) == 0 {
+                return cmn.NewError("Key path has insufficient # of parts: expected no more keys but got %+v", string(key))
+            }
             lastKey := keys[len(keys)-1]
             if !bytes.Equal(lastKey, key) {
                 return cmn.NewError("Key mismatch on operation #%d: expected %+v but got %+v", i, string(lastKey), string(key))

@@ -107,6 +107,10 @@ func TestProofOperators(t *testing.T) {
     err = popz.Verify(bz("OUTPUT4"), "//KEY4/KEY2/KEY1", [][]byte{bz("INPUT1")})
     assert.NotNil(t, err)

+    // BAD KEY 5
+    err = popz.Verify(bz("OUTPUT4"), "/KEY2/KEY1", [][]byte{bz("INPUT1")})
+    assert.NotNil(t, err)
+
     // BAD OUTPUT 1
     err = popz.Verify(bz("OUTPUT4_WRONG"), "/KEY4/KEY2/KEY1", [][]byte{bz("INPUT1")})
     assert.NotNil(t, err)
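
The new guard matters because Verify consumes one keypath part for every operator that carries a key, so a path with too few parts used to index past the end of the slice. A toy sketch of that consumption over '/'-separated parts (the real parser also URL-unescapes each part; the helper here is hypothetical, not the crypto/merkle API):

```go
package main

import (
    "fmt"
    "strings"
)

// consume walks a keypath the way ProofOperators.Verify does: each operator
// with a non-empty key must match, and then remove, the last remaining part.
func consume(keypath string, opKeys []string) error {
    keys := strings.Split(strings.Trim(keypath, "/"), "/")
    for i, k := range opKeys {
        if len(keys) == 0 {
            return fmt.Errorf("key path has insufficient # of parts: op #%d wants %q", i, k)
        }
        if last := keys[len(keys)-1]; last != k {
            return fmt.Errorf("key mismatch on op #%d: expected %q but got %q", i, last, k)
        }
        keys = keys[:len(keys)-1]
    }
    return nil
}

func main() {
    fmt.Println(consume("/KEY4/KEY2/KEY1", []string{"KEY1", "KEY2", "KEY4"})) // <nil>
    fmt.Println(consume("/KEY2/KEY1", []string{"KEY1", "KEY2", "KEY4"}))      // error, like the BAD KEY 5 case above
}
```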

@@ -47,90 +47,6 @@ The mempool and consensus logic act as clients, and each maintains an
 open ABCI connection with the application, which hosts an ABCI server.
 Shown are the request and response types sent on each connection.

-## Message Protocol
-
-The message protocol consists of pairs of requests and responses. Some
-messages have no fields, while others may include byte-arrays, strings,
-or integers. See the `message Request` and `message Response`
-definitions in [the protobuf definition
-file](https://github.com/tendermint/tendermint/blob/develop/abci/types/types.proto),
-and the [protobuf
-documentation](https://developers.google.com/protocol-buffers/docs/overview)
-for more details.
-
-For each request, a server should respond with the corresponding
-response, where order of requests is preserved in the order of
-responses.
-
-## Server
-
-To use ABCI in your programming language of choice, there must be a ABCI
-server in that language. Tendermint supports two kinds of implementation
-of the server:
-
-- Asynchronous, raw socket server (Tendermint Socket Protocol, also
-known as TSP or Teaspoon)
-- GRPC
-
-Both can be tested using the `abci-cli` by setting the `--abci` flag
-appropriately (ie. to `socket` or `grpc`).
-
-See examples, in various stages of maintenance, in
-[Go](https://github.com/tendermint/tendermint/tree/develop/abci/server),
-[JavaScript](https://github.com/tendermint/js-abci),
-[Python](https://github.com/tendermint/tendermint/tree/develop/abci/example/python3/abci),
-[C++](https://github.com/mdyring/cpp-tmsp), and
-[Java](https://github.com/jTendermint/jabci).
-
-### GRPC
-
-If GRPC is available in your language, this is the easiest approach,
-though it will have significant performance overhead.
-
-To get started with GRPC, copy in the [protobuf
-file](https://github.com/tendermint/tendermint/blob/develop/abci/types/types.proto)
-and compile it using the GRPC plugin for your language. For instance,
-for golang, the command is `protoc --go_out=plugins=grpc:. types.proto`.
-See the [grpc documentation for more details](http://www.grpc.io/docs/).
-`protoc` will autogenerate all the necessary code for ABCI client and
-server in your language, including whatever interface your application
-must satisfy to be used by the ABCI server for handling requests.
-
-### TSP
-
-If GRPC is not available in your language, or you require higher
-performance, or otherwise enjoy programming, you may implement your own
-ABCI server using the Tendermint Socket Protocol, known affectionately
-as Teaspoon. The first step is still to auto-generate the relevant data
-types and codec in your language using `protoc`. Messages coming over
-the socket are proto3 encoded, but additionally length-prefixed to
-facilitate use as a streaming protocol. proto3 doesn't have an
-official length-prefix standard, so we use our own. The first byte in
-the prefix represents the length of the Big Endian encoded length. The
-remaining bytes in the prefix are the Big Endian encoded length.
-
-For example, if the proto3 encoded ABCI message is 0xDEADBEEF (4
-bytes), the length-prefixed message is 0x0104DEADBEEF. If the proto3
-encoded ABCI message is 65535 bytes long, the length-prefixed message
-would be like 0x02FFFF....
-
-Note this prefixing does not apply for grpc.
-
-An ABCI server must also be able to support multiple connections, as
-Tendermint uses three connections.
-
-## Client
-
-There are currently two use-cases for an ABCI client. One is a testing
-tool, as in the `abci-cli`, which allows ABCI requests to be sent via
-command line. The other is a consensus engine, such as Tendermint Core,
-which makes requests to the application every time a new transaction is
-received or a block is committed.
-
-It is unlikely that you will need to implement a client. For details of
-our client, see
-[here](https://github.com/tendermint/tendermint/tree/develop/abci/client).
-
 Most of the examples below are from [kvstore
 application](https://github.com/tendermint/tendermint/blob/develop/abci/example/kvstore/kvstore.go),
 which is a part of the abci repo. [persistent_kvstore

@@ -3,12 +3,8 @@
 This section is for those looking to implement their own ABCI Server, perhaps in
 a new programming language.

-You are expected to have read [ABCI Methods and Types](abci.md) and [ABCI
-Applications](apps.md).
+You are expected to have read [ABCI Methods and Types](./abci.md) and [ABCI
+Applications](./apps.md).

-See additional details in the [ABCI
-readme](https://github.com/tendermint/tendermint/blob/develop/abci/README.md)(TODO: deduplicate
-those details).
-
 ## Message Protocol


@@ -24,17 +20,16 @@ For each request, a server should respond with the corresponding
 response, where the order of requests is preserved in the order of
 responses.

-## Server
+## Server Implementations

 To use ABCI in your programming language of choice, there must be a ABCI
-server in that language. Tendermint supports two kinds of implementation
-of the server:
+server in that language. Tendermint supports three implementations of the ABCI, written in Go:

-- Asynchronous, raw socket server (Tendermint Socket Protocol, also
-known as TSP or Teaspoon)
+- In-process (Golang only)
+- ABCI-socket
 - GRPC

-Both can be tested using the `abci-cli` by setting the `--abci` flag
+The latter two can be tested using the `abci-cli` by setting the `--abci` flag
 appropriately (ie. to `socket` or `grpc`).

 See examples, in various stages of maintenance, in

@@ -44,6 +39,12 @@ See examples, in various stages of maintenance, in
 [C++](https://github.com/mdyring/cpp-tmsp), and
 [Java](https://github.com/jTendermint/jabci).

+### In Process
+
+The simplest implementation uses function calls within Golang.
+This means ABCI applications written in Golang can be compiled with TendermintCore and run as a single binary.
+
+
 ### GRPC

 If GRPC is available in your language, this is the easiest approach,

@@ -58,15 +59,18 @@ See the [grpc documentation for more details](http://www.grpc.io/docs/).
 server in your language, including whatever interface your application
 must satisfy to be used by the ABCI server for handling requests.

+Note the length-prefixing used in the socket implementation (TSP) does not apply for GRPC.
+
 ### TSP

+Tendermint Socket Protocol is an asynchronous, raw socket server which provides ordered message passing over unix or tcp.
+Messages are serialized using Protobuf3 and length-prefixed with a [signed Varint](https://developers.google.com/protocol-buffers/docs/encoding?csw=1#signed-integers)
+
 If GRPC is not available in your language, or you require higher
 performance, or otherwise enjoy programming, you may implement your own
-ABCI server using the Tendermint Socket Protocol, known affectionately
-as Teaspoon. The first step is still to auto-generate the relevant data
-types and codec in your language using `protoc`. Messages coming over
-the socket are proto3 encoded, but additionally length-prefixed to
-facilitate use as a streaming protocol. proto3 doesn't have an
+ABCI server using the Tendermint Socket Protocol. The first step is still to auto-generate the relevant data
+types and codec in your language using `protoc`. In addition to being proto3 encoded, messages coming over
+the socket are length-prefixed to facilitate use as a streaming protocol. proto3 doesn't have an
 official length-prefix standard, so we use our own. The first byte in
 the prefix represents the length of the Big Endian encoded length. The
 remaining bytes in the prefix are the Big Endian encoded length.

@@ -76,12 +80,14 @@ bytes), the length-prefixed message is 0x0104DEADBEEF. If the proto3
 encoded ABCI message is 65535 bytes long, the length-prefixed message
 would be like 0x02FFFF....

-Note this prefixing does not apply for grpc.
+The benefit of using this `varint` encoding over the old version (where integers were encoded as `<len of len><big endian len>` is that
+it is the standard way to encode integers in Protobuf. It is also generally shorter.
+
+As noted above, this prefixing does not apply for GRPC.

 An ABCI server must also be able to support multiple connections, as
 Tendermint uses three connections.


 ### Async vs Sync

 The main ABCI server (ie. non-GRPC) provides ordered asynchronous messages.

@@ -10,6 +10,7 @@ import (
     "github.com/stretchr/testify/assert"

     cfg "github.com/tendermint/tendermint/config"
+    "github.com/tendermint/tendermint/crypto/secp256k1"
     dbm "github.com/tendermint/tendermint/libs/db"
     "github.com/tendermint/tendermint/libs/log"
     "github.com/tendermint/tendermint/p2p"

@@ -178,3 +179,30 @@ func TestReactorSelectiveBroadcast(t *testing.T) {
     peers := reactors[1].Switch.Peers().List()
     assert.Equal(t, 1, len(peers))
 }
+
+func TestEvidenceListMessageValidationBasic(t *testing.T) {
+    testCases := []struct {
+        testName          string
+        malleateEvListMsg func(*EvidenceListMessage)
+        expectErr         bool
+    }{
+        {"Good EvidenceListMessage", func(evList *EvidenceListMessage) {}, false},
+        {"Invalid EvidenceListMessage", func(evList *EvidenceListMessage) {
+            evList.Evidence = append(evList.Evidence,
+                &types.DuplicateVoteEvidence{PubKey: secp256k1.GenPrivKey().PubKey()})
+        }, true},
+    }
+    for _, tc := range testCases {
+        t.Run(tc.testName, func(t *testing.T) {
+            evListMsg := &EvidenceListMessage{}
+            n := 3
+            valAddr := []byte("myval")
+            evListMsg.Evidence = make([]types.Evidence, n)
+            for i := 0; i < n; i++ {
+                evListMsg.Evidence[i] = types.NewMockGoodEvidence(int64(i+1), 0, valAddr)
+            }
+            tc.malleateEvListMsg(evListMsg)
+            assert.Equal(t, tc.expectErr, evListMsg.ValidateBasic() != nil, "Validate Basic had an unexpected result")
+        })
+    }
+}

@@ -83,6 +83,8 @@ func OpenAutoFile(path string) (*AutoFile, error) {
     return af, nil
 }

+// Close shuts down the closing goroutine, SIGHUP handler and closes the
+// AutoFile.
 func (af *AutoFile) Close() error {
     af.closeTicker.Stop()
     close(af.closeTickerStopc)

@@ -116,6 +118,10 @@ func (af *AutoFile) closeFile() (err error) {
     return file.Close()
 }

+// Write writes len(b) bytes to the AutoFile. It returns the number of bytes
+// written and an error, if any. Write returns a non-nil error when n !=
+// len(b).
+// Opens AutoFile if needed.
 func (af *AutoFile) Write(b []byte) (n int, err error) {
     af.mtx.Lock()
     defer af.mtx.Unlock()

@@ -130,6 +136,10 @@ func (af *AutoFile) Write(b []byte) (n int, err error) {
     return
 }

+// Sync commits the current contents of the file to stable storage. Typically,
+// this means flushing the file system's in-memory copy of recently written
+// data to disk.
+// Opens AutoFile if needed.
 func (af *AutoFile) Sync() error {
     af.mtx.Lock()
     defer af.mtx.Unlock()

@@ -158,23 +168,22 @@ func (af *AutoFile) openFile() error {
     return nil
 }

+// Size returns the size of the AutoFile. It returns -1 and an error if fails
+// get stats or open file.
+// Opens AutoFile if needed.
 func (af *AutoFile) Size() (int64, error) {
     af.mtx.Lock()
     defer af.mtx.Unlock()

     if af.file == nil {
-        err := af.openFile()
-        if err != nil {
-            if err == os.ErrNotExist {
-                return 0, nil
-            }
+        if err := af.openFile(); err != nil {
             return -1, err
         }
     }

     stat, err := af.file.Stat()
     if err != nil {
         return -1, err
     }
     return stat.Size(), nil
 }

@@ -84,3 +84,40 @@ func TestOpenAutoFilePerms(t *testing.T) {
         t.Errorf("unexpected error %v", e)
     }
 }
+
+func TestAutoFileSize(t *testing.T) {
+    // First, create an AutoFile writing to a tempfile dir
+    f, err := ioutil.TempFile("", "sighup_test")
+    require.NoError(t, err)
+    err = f.Close()
+    require.NoError(t, err)
+
+    // Here is the actual AutoFile.
+    af, err := OpenAutoFile(f.Name())
+    require.NoError(t, err)
+
+    // 1. Empty file
+    size, err := af.Size()
+    require.Zero(t, size)
+    require.NoError(t, err)
+
+    // 2. Not empty file
+    data := []byte("Maniac\n")
+    _, err = af.Write(data)
+    require.NoError(t, err)
+    size, err = af.Size()
+    require.EqualValues(t, len(data), size)
+    require.NoError(t, err)
+
+    // 3. Not existing file
+    err = af.Close()
+    require.NoError(t, err)
+    err = os.Remove(f.Name())
+    require.NoError(t, err)
+    size, err = af.Size()
+    require.EqualValues(t, 0, size, "Expected a new file to be empty")
+    require.NoError(t, err)
+
+    // Cleanup
+    _ = os.Remove(f.Name())
+}

@@ -5,7 +5,6 @@ import (
     "errors"
     "fmt"
     "io"
-    "log"
     "os"
     "path"
     "path/filepath"

@@ -231,7 +230,8 @@ func (g *Group) checkHeadSizeLimit() {
     }
     size, err := g.Head.Size()
     if err != nil {
-        panic(err)
+        g.Logger.Error("Group's head may grow without bound", "head", g.Head.Path, "err", err)
+        return
     }
     if size >= limit {
         g.RotateFile()

@@ -253,21 +253,21 @@ func (g *Group) checkTotalSizeLimit() {
         }
         if index == gInfo.MaxIndex {
             // Special degenerate case, just do nothing.
-            log.Println("WARNING: Group's head " + g.Head.Path + "may grow without bound")
+            g.Logger.Error("Group's head may grow without bound", "head", g.Head.Path)
             return
         }
         pathToRemove := filePathForIndex(g.Head.Path, index, gInfo.MaxIndex)
-        fileInfo, err := os.Stat(pathToRemove)
+        fInfo, err := os.Stat(pathToRemove)
         if err != nil {
-            log.Println("WARNING: Failed to fetch info for file @" + pathToRemove)
+            g.Logger.Error("Failed to fetch info for file", "file", pathToRemove)
             continue
         }
         err = os.Remove(pathToRemove)
         if err != nil {
-            log.Println(err)
+            g.Logger.Error("Failed to remove path", "path", pathToRemove)
             return
         }
-        totalSize -= fileInfo.Size()
+        totalSize -= fInfo.Size()
     }
 }


@@ -11,7 +11,6 @@ import (

     "github.com/pkg/errors"

-    amino "github.com/tendermint/go-amino"
     abci "github.com/tendermint/tendermint/abci/types"
     cfg "github.com/tendermint/tendermint/config"
     auto "github.com/tendermint/tendermint/libs/autofile"

@@ -25,12 +24,12 @@ import (
 // PreCheckFunc is an optional filter executed before CheckTx and rejects
 // transaction if false is returned. An example would be to ensure that a
 // transaction doesn't exceeded the block size.
-type PreCheckFunc func(types.Tx) bool
+type PreCheckFunc func(types.Tx) error

 // PostCheckFunc is an optional filter executed after CheckTx and rejects
 // transaction if false is returned. An example would be to ensure a
 // transaction doesn't require more gas than available for the block.
-type PostCheckFunc func(types.Tx, *abci.ResponseCheckTx) bool
+type PostCheckFunc func(types.Tx, *abci.ResponseCheckTx) error

 /*


@@ -68,24 +67,52 @@ var (
     ErrMempoolIsFull = errors.New("Mempool is full")
 )

+// ErrPreCheck is returned when tx is too big
+type ErrPreCheck struct {
+    Reason error
+}
+
+func (e ErrPreCheck) Error() string {
+    return e.Reason.Error()
+}
+
+// IsPreCheckError returns true if err is due to pre check failure.
+func IsPreCheckError(err error) bool {
+    _, ok := err.(ErrPreCheck)
+    return ok
+}
+
 // PreCheckAminoMaxBytes checks that the size of the transaction plus the amino
 // overhead is smaller or equal to the expected maxBytes.
 func PreCheckAminoMaxBytes(maxBytes int64) PreCheckFunc {
-    return func(tx types.Tx) bool {
+    return func(tx types.Tx) error {
         // We have to account for the amino overhead in the tx size as well
-        aminoOverhead := amino.UvarintSize(uint64(len(tx)))
-        return int64(len(tx)+aminoOverhead) <= maxBytes
+        // NOTE: fieldNum = 1 as types.Block.Data contains Txs []Tx as first field.
+        // If this field order ever changes this needs to updated here accordingly.
+        // NOTE: if some []Tx are encoded without a parenting struct, the
+        // fieldNum is also equal to 1.
+        aminoOverhead := types.ComputeAminoOverhead(tx, 1)
+        txSize := int64(len(tx)) + aminoOverhead
+        if txSize > maxBytes {
+            return fmt.Errorf("Tx size (including amino overhead) is too big: %d, max: %d",
+                txSize, maxBytes)
+        }
+        return nil
     }
 }

 // PostCheckMaxGas checks that the wanted gas is smaller or equal to the passed
-// maxGas. Returns true if maxGas is -1.
+// maxGas. Returns nil if maxGas is -1.
 func PostCheckMaxGas(maxGas int64) PostCheckFunc {
-    return func(tx types.Tx, res *abci.ResponseCheckTx) bool {
+    return func(tx types.Tx, res *abci.ResponseCheckTx) error {
         if maxGas == -1 {
-            return true
+            return nil
         }
-        return res.GasWanted <= maxGas
+        if res.GasWanted > maxGas {
+            return fmt.Errorf("gas wanted %d is greater than max gas %d",
+                res.GasWanted, maxGas)
+        }
+        return nil
     }
 }

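
With these signatures a filter reports why a transaction is rejected instead of returning a bare bool, and CheckTx wraps any pre-check failure in ErrPreCheck. A hedged sketch of a caller-supplied filter with the new PreCheckFunc shape (the byte limit is arbitrary, for illustration only):

```go
package main

import (
    "fmt"

    "github.com/tendermint/tendermint/types"
)

// maxRawBytesPreCheck has the shape of a PreCheckFunc after this change:
// nil accepts the tx, a descriptive error rejects it.
func maxRawBytesPreCheck(max int64) func(types.Tx) error {
    return func(tx types.Tx) error {
        if int64(len(tx)) > max {
            return fmt.Errorf("tx size %d exceeds limit %d", len(tx), max)
        }
        return nil
    }
}

func main() {
    check := maxRawBytesPreCheck(1024)
    if err := check(types.Tx(make([]byte, 2048))); err != nil {
        fmt.Println("rejected:", err) // the mempool would surface this as ErrPreCheck
    }
}
```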

@@ -189,39 +216,33 @@ func WithMetrics(metrics *Metrics) MempoolOption {
     return func(mem *Mempool) { mem.metrics = metrics }
 }

+// InitWAL creates a directory for the WAL file and opens a file itself.
+//
+// *panics* if can't create directory or open file.
+// *not thread safe*
+func (mem *Mempool) InitWAL() {
+    walDir := mem.config.WalDir()
+    err := cmn.EnsureDir(walDir, 0700)
+    if err != nil {
+        panic(errors.Wrap(err, "Error ensuring Mempool WAL dir"))
+    }
+    af, err := auto.OpenAutoFile(walDir + "/wal")
+    if err != nil {
+        panic(errors.Wrap(err, "Error opening Mempool WAL file"))
+    }
+    mem.wal = af
+}
+
 // CloseWAL closes and discards the underlying WAL file.
 // Any further writes will not be relayed to disk.
-func (mem *Mempool) CloseWAL() bool {
-    if mem == nil {
-        return false
-    }
-
+func (mem *Mempool) CloseWAL() {
     mem.proxyMtx.Lock()
     defer mem.proxyMtx.Unlock()

-    if mem.wal == nil {
-        return false
-    }
-    if err := mem.wal.Close(); err != nil && mem.logger != nil {
-        mem.logger.Error("Mempool.CloseWAL", "err", err)
+    if err := mem.wal.Close(); err != nil {
+        mem.logger.Error("Error closing WAL", "err", err)
     }
     mem.wal = nil
-    return true
-}
-
-func (mem *Mempool) InitWAL() {
-    walDir := mem.config.WalDir()
-    if walDir != "" {
-        err := cmn.EnsureDir(walDir, 0700)
-        if err != nil {
-            cmn.PanicSanity(errors.Wrap(err, "Error ensuring Mempool wal dir"))
-        }
-        af, err := auto.OpenAutoFile(walDir + "/wal")
-        if err != nil {
-            cmn.PanicSanity(errors.Wrap(err, "Error opening Mempool wal file"))
-        }
-        mem.wal = af
-    }
 }

 // Lock locks the mempool. The consensus must be able to hold lock to safely update.

@@ -285,8 +306,10 @@ func (mem *Mempool) CheckTx(tx types.Tx, cb func(*abci.Response)) (err error) {
         return ErrMempoolIsFull
     }

-    if mem.preCheck != nil && !mem.preCheck(tx) {
-        return
+    if mem.preCheck != nil {
+        if err := mem.preCheck(tx); err != nil {
+            return ErrPreCheck{err}
+        }
     }

     // CACHE

@@ -336,8 +359,11 @@ func (mem *Mempool) resCbNormal(req *abci.Request, res *abci.Response) {
     switch r := res.Value.(type) {
     case *abci.Response_CheckTx:
         tx := req.GetCheckTx().Tx
-        if (r.CheckTx.Code == abci.CodeTypeOK) &&
-            mem.isPostCheckPass(tx, r.CheckTx) {
+        var postCheckErr error
+        if mem.postCheck != nil {
+            postCheckErr = mem.postCheck(tx, r.CheckTx)
+        }
+        if (r.CheckTx.Code == abci.CodeTypeOK) && postCheckErr == nil {
             mem.counter++
             memTx := &mempoolTx{
                 counter: mem.counter,

@@ -346,12 +372,18 @@ func (mem *Mempool) resCbNormal(req *abci.Request, res *abci.Response) {
                 tx: tx,
             }
             mem.txs.PushBack(memTx)
-            mem.logger.Info("Added good transaction", "tx", TxID(tx), "res", r, "total", mem.Size())
+            mem.logger.Info("Added good transaction",
+                "tx", TxID(tx),
+                "res", r,
+                "height", memTx.height,
+                "total", mem.Size(),
+                "counter", memTx.counter,
+            )
             mem.metrics.TxSizeBytes.Observe(float64(len(tx)))
             mem.notifyTxsAvailable()
         } else {
             // ignore bad transaction
-            mem.logger.Info("Rejected bad transaction", "tx", TxID(tx), "res", r)
+            mem.logger.Info("Rejected bad transaction", "tx", TxID(tx), "res", r, "err", postCheckErr)
             mem.metrics.FailedTxs.Add(1)
             // remove from cache (it might be good later)
             mem.cache.Remove(tx)

@@ -364,6 +396,7 @@ func (mem *Mempool) resCbNormal(req *abci.Request, res *abci.Response) {
 func (mem *Mempool) resCbRecheck(req *abci.Request, res *abci.Response) {
     switch r := res.Value.(type) {
     case *abci.Response_CheckTx:
+        tx := req.GetCheckTx().Tx
         memTx := mem.recheckCursor.Value.(*mempoolTx)
         if !bytes.Equal(req.GetCheckTx().Tx, memTx.tx) {
             cmn.PanicSanity(

@@ -374,15 +407,20 @@ func (mem *Mempool) resCbRecheck(req *abci.Request, res *abci.Response) {
                 ),
             )
         }
-        if (r.CheckTx.Code == abci.CodeTypeOK) && mem.isPostCheckPass(memTx.tx, r.CheckTx) {
+        var postCheckErr error
+        if mem.postCheck != nil {
+            postCheckErr = mem.postCheck(tx, r.CheckTx)
+        }
+        if (r.CheckTx.Code == abci.CodeTypeOK) && postCheckErr == nil {
             // Good, nothing to do.
         } else {
             // Tx became invalidated due to newly committed block.
+            mem.logger.Info("Tx is no longer valid", "tx", TxID(tx), "res", r, "err", postCheckErr)
             mem.txs.Remove(mem.recheckCursor)
             mem.recheckCursor.DetachPrev()

             // remove from cache (it might be good later)
-            mem.cache.Remove(req.GetCheckTx().Tx)
+            mem.cache.Remove(tx)
         }
         if mem.recheckCursor == mem.recheckEnd {
             mem.recheckCursor = nil

@@ -447,7 +485,7 @@ func (mem *Mempool) ReapMaxBytesMaxGas(maxBytes, maxGas int64) types.Txs {
     for e := mem.txs.Front(); e != nil; e = e.Next() {
         memTx := e.Value.(*mempoolTx)
         // Check total size requirement
-        aminoOverhead := int64(amino.UvarintSize(uint64(len(memTx.tx))))
+        aminoOverhead := types.ComputeAminoOverhead(memTx.tx, 1)
         if maxBytes > -1 && totalBytes+int64(len(memTx.tx))+aminoOverhead > maxBytes {
             return txs
         }

@@ -565,10 +603,6 @@ func (mem *Mempool) recheckTxs(goodTxs []types.Tx) {
     mem.proxyAppConn.FlushAsync()
 }

-func (mem *Mempool) isPostCheckPass(tx types.Tx, r *abci.ResponseCheckTx) bool {
-    return mem.postCheck == nil || mem.postCheck(tx, r)
-}
-
 //--------------------------------------------------------------------------------

 // mempoolTx is a transaction that successfully ran
@ -14,7 +14,6 @@ import (
|
|||||||
"github.com/stretchr/testify/assert"
|
"github.com/stretchr/testify/assert"
|
||||||
|
|
||||||
"github.com/stretchr/testify/require"
|
"github.com/stretchr/testify/require"
|
||||||
amino "github.com/tendermint/go-amino"
|
|
||||||
"github.com/tendermint/tendermint/abci/example/counter"
|
"github.com/tendermint/tendermint/abci/example/counter"
|
||||||
"github.com/tendermint/tendermint/abci/example/kvstore"
|
"github.com/tendermint/tendermint/abci/example/kvstore"
|
||||||
abci "github.com/tendermint/tendermint/abci/types"
|
abci "github.com/tendermint/tendermint/abci/types"
|
||||||
@ -66,7 +65,13 @@ func checkTxs(t *testing.T, mempool *Mempool, count int) types.Txs {
|
|||||||
t.Error(err)
|
t.Error(err)
|
||||||
}
|
}
|
||||||
if err := mempool.CheckTx(txBytes, nil); err != nil {
|
if err := mempool.CheckTx(txBytes, nil); err != nil {
|
||||||
t.Fatalf("Error after CheckTx: %v", err)
|
// Skip invalid txs.
|
||||||
|
// TestMempoolFilters will fail otherwise. It asserts a number of txs
|
||||||
|
// returned.
|
||||||
|
if IsPreCheckError(err) {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
t.Fatalf("CheckTx failed: %v while checking #%d tx", err, i)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
return txs
|
return txs
|
||||||
@ -102,11 +107,11 @@ func TestReapMaxBytesMaxGas(t *testing.T) {
|
|||||||
{20, 0, -1, 0},
|
{20, 0, -1, 0},
|
||||||
{20, 0, 10, 0},
|
{20, 0, 10, 0},
|
||||||
{20, 10, 10, 0},
|
{20, 10, 10, 0},
|
||||||
{20, 21, 10, 1},
|
{20, 22, 10, 1},
|
||||||
{20, 210, -1, 10},
|
{20, 220, -1, 10},
|
||||||
{20, 210, 5, 5},
|
{20, 220, 5, 5},
|
||||||
{20, 210, 10, 10},
|
{20, 220, 10, 10},
|
||||||
{20, 210, 15, 10},
|
{20, 220, 15, 10},
|
||||||
{20, 20000, -1, 20},
|
{20, 20000, -1, 20},
|
||||||
{20, 20000, 5, 5},
|
{20, 20000, 5, 5},
|
||||||
{20, 20000, 30, 20},
|
{20, 20000, 30, 20},
|
||||||
@ -126,47 +131,29 @@ func TestMempoolFilters(t *testing.T) {
|
|||||||
mempool := newMempoolWithApp(cc)
|
mempool := newMempoolWithApp(cc)
|
||||||
emptyTxArr := []types.Tx{[]byte{}}
|
emptyTxArr := []types.Tx{[]byte{}}
|
||||||
|
|
||||||
nopPreFilter := func(tx types.Tx) bool { return true }
|
nopPreFilter := func(tx types.Tx) error { return nil }
|
||||||
nopPostFilter := func(tx types.Tx, res *abci.ResponseCheckTx) bool { return true }
|
nopPostFilter := func(tx types.Tx, res *abci.ResponseCheckTx) error { return nil }
|
||||||
|
|
||||||
// This is the same filter we expect to be used within node/node.go and state/execution.go
|
|
||||||
nBytePreFilter := func(n int) func(tx types.Tx) bool {
|
|
||||||
return func(tx types.Tx) bool {
|
|
||||||
// We have to account for the amino overhead in the tx size as well
|
|
||||||
aminoOverhead := amino.UvarintSize(uint64(len(tx)))
|
|
||||||
return (len(tx) + aminoOverhead) <= n
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
nGasPostFilter := func(n int64) func(tx types.Tx, res *abci.ResponseCheckTx) bool {
|
|
||||||
return func(tx types.Tx, res *abci.ResponseCheckTx) bool {
|
|
||||||
if n == -1 {
|
|
||||||
return true
|
|
||||||
}
|
|
||||||
return res.GasWanted <= n
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// each table driven test creates numTxsToCreate txs with checkTx, and at the end clears all remaining txs.
|
// each table driven test creates numTxsToCreate txs with checkTx, and at the end clears all remaining txs.
|
||||||
// each tx has 20 bytes + amino overhead = 21 bytes, 1 gas
|
// each tx has 20 bytes + amino overhead = 21 bytes, 1 gas
|
||||||
tests := []struct {
|
tests := []struct {
|
||||||
numTxsToCreate int
|
numTxsToCreate int
|
||||||
preFilter func(tx types.Tx) bool
|
preFilter PreCheckFunc
|
||||||
postFilter func(tx types.Tx, res *abci.ResponseCheckTx) bool
|
postFilter PostCheckFunc
|
||||||
expectedNumTxs int
|
expectedNumTxs int
|
||||||
}{
|
}{
|
||||||
{10, nopPreFilter, nopPostFilter, 10},
|
{10, nopPreFilter, nopPostFilter, 10},
|
||||||
{10, nBytePreFilter(10), nopPostFilter, 0},
|
{10, PreCheckAminoMaxBytes(10), nopPostFilter, 0},
|
||||||
{10, nBytePreFilter(20), nopPostFilter, 0},
|
{10, PreCheckAminoMaxBytes(20), nopPostFilter, 0},
|
||||||
{10, nBytePreFilter(21), nopPostFilter, 10},
|
{10, PreCheckAminoMaxBytes(22), nopPostFilter, 10},
|
||||||
{10, nopPreFilter, nGasPostFilter(-1), 10},
|
{10, nopPreFilter, PostCheckMaxGas(-1), 10},
|
||||||
{10, nopPreFilter, nGasPostFilter(0), 0},
|
{10, nopPreFilter, PostCheckMaxGas(0), 0},
|
||||||
{10, nopPreFilter, nGasPostFilter(1), 10},
|
{10, nopPreFilter, PostCheckMaxGas(1), 10},
|
||||||
{10, nopPreFilter, nGasPostFilter(3000), 10},
|
{10, nopPreFilter, PostCheckMaxGas(3000), 10},
|
||||||
{10, nBytePreFilter(10), nGasPostFilter(20), 0},
|
{10, PreCheckAminoMaxBytes(10), PostCheckMaxGas(20), 0},
|
||||||
{10, nBytePreFilter(30), nGasPostFilter(20), 10},
|
{10, PreCheckAminoMaxBytes(30), PostCheckMaxGas(20), 10},
|
||||||
{10, nBytePreFilter(21), nGasPostFilter(1), 10},
|
{10, PreCheckAminoMaxBytes(22), PostCheckMaxGas(1), 10},
|
||||||
{10, nBytePreFilter(21), nGasPostFilter(0), 0},
|
{10, PreCheckAminoMaxBytes(22), PostCheckMaxGas(0), 0},
|
||||||
}
|
}
|
||||||
for tcIndex, tt := range tests {
|
for tcIndex, tt := range tests {
|
||||||
mempool.Update(1, emptyTxArr, tt.preFilter, tt.postFilter)
|
mempool.Update(1, emptyTxArr, tt.preFilter, tt.postFilter)
|
||||||
@@ -385,15 +372,12 @@ func TestMempoolCloseWAL(t *testing.T) {

 	// 7. Invoke CloseWAL() and ensure it discards the
 	// WAL thus any other write won't go through.
-	require.True(t, mempool.CloseWAL(), "CloseWAL should CloseWAL")
+	mempool.CloseWAL()
 	mempool.CheckTx(types.Tx([]byte("bar")), nil)
 	sum2 := checksumFile(walFilepath, t)
 	require.Equal(t, sum1, sum2, "expected no change to the WAL after invoking CloseWAL() since it was discarded")

-	// 8. Second CloseWAL should do nothing
-	require.False(t, mempool.CloseWAL(), "CloseWAL should CloseWAL")
-
-	// 9. Sanity check to ensure that the WAL file still exists
+	// 8. Sanity check to ensure that the WAL file still exists
 	m3, err := filepath.Glob(filepath.Join(rootDir, "*"))
 	require.Nil(t, err, "successful globbing expected")
 	require.Equal(t, 1, len(m3), "expecting the wal match in")
node/node.go (24 changed lines)
@@ -13,8 +13,8 @@ import (

 	"github.com/prometheus/client_golang/prometheus"
 	"github.com/prometheus/client_golang/prometheus/promhttp"
-	"github.com/tendermint/go-amino"

+	amino "github.com/tendermint/go-amino"
 	abci "github.com/tendermint/tendermint/abci/types"
 	bc "github.com/tendermint/tendermint/blockchain"
 	cfg "github.com/tendermint/tendermint/config"
@@ -265,21 +265,14 @@ func NewNode(config *cfg.Config,
 		proxyApp.Mempool(),
 		state.LastBlockHeight,
 		mempl.WithMetrics(memplMetrics),
-		mempl.WithPreCheck(
-			mempl.PreCheckAminoMaxBytes(
-				types.MaxDataBytesUnknownEvidence(
-					state.ConsensusParams.BlockSize.MaxBytes,
-					state.Validators.Size(),
-				),
-			),
-		),
-		mempl.WithPostCheck(
-			mempl.PostCheckMaxGas(state.ConsensusParams.BlockSize.MaxGas),
-		),
+		mempl.WithPreCheck(sm.TxPreCheck(state)),
+		mempl.WithPostCheck(sm.TxPostCheck(state)),
 	)
 	mempoolLogger := logger.With("module", "mempool")
 	mempool.SetLogger(mempoolLogger)
-	mempool.InitWAL() // no need to have the mempool wal during tests
+	if config.Mempool.WalEnabled() {
+		mempool.InitWAL() // no need to have the mempool wal during tests
+	}
 	mempoolReactor := mempl.NewMempoolReactor(config.Mempool, mempool)
 	mempoolReactor.SetLogger(mempoolLogger)

@@ -586,6 +579,11 @@ func (n *Node) OnStop() {
 	// TODO: gracefully disconnect from peers.
 	n.sw.Stop()

+	// stop mempool WAL
+	if n.config.Mempool.WalEnabled() {
+		n.mempoolReactor.Mempool.CloseWAL()
+	}
+
 	if err := n.transport.Close(); err != nil {
 		n.Logger.Error("Error closing transport", "err", err)
 	}
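NewNode now creates the mempool WAL only when the mempool configuration asks for one, and OnStop closes it under the same condition. A rough sketch of the gating this implies is shown below; the field and accessor names are stand-ins for illustration, not the real config API.

    package main

    import "fmt"

    // mempoolConfig is a pared-down stand-in for the real mempool config; only
    // the WAL path matters here.
    type mempoolConfig struct {
        walPath string
    }

    // walEnabled mirrors the config.Mempool.WalEnabled() call used above: an
    // empty WAL path means "do not create a mempool WAL at all".
    func (cfg mempoolConfig) walEnabled() bool {
        return cfg.walPath != ""
    }

    func main() {
        withWAL := mempoolConfig{walPath: "data/mempool.wal"}
        withoutWAL := mempoolConfig{}
        fmt.Println(withWAL.walEnabled())    // true: InitWAL at start, CloseWAL on stop
        fmt.Println(withoutWAL.walEnabled()) // false: the WAL is skipped entirely
    }

With an empty WAL path the node neither calls InitWAL at startup nor CloseWAL on shutdown, so no stray WAL file is created.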
@@ -32,8 +32,10 @@ type NetAddress struct {
 	str string
 }

-// IDAddressString returns id@hostPort.
-func IDAddressString(id ID, hostPort string) string {
+// IDAddressString returns id@hostPort. It strips the leading
+// protocol from protocolHostPort if it exists.
+func IDAddressString(id ID, protocolHostPort string) string {
+	hostPort := removeProtocolIfDefined(protocolHostPort)
 	return fmt.Sprintf("%s@%s", id, hostPort)
 }

@@ -218,10 +220,22 @@ func (na *NetAddress) Routable() bool {
 // For IPv4 these are either a 0 or all bits set address. For IPv6 a zero
 // address or one that matches the RFC3849 documentation address format.
 func (na *NetAddress) Valid() bool {
+	if string(na.ID) != "" {
+		data, err := hex.DecodeString(string(na.ID))
+		if err != nil || len(data) != IDByteLength {
+			return false
+		}
+	}
 	return na.IP != nil && !(na.IP.IsUnspecified() || na.RFC3849() ||
 		na.IP.Equal(net.IPv4bcast))
 }

+// HasID returns true if the address has an ID.
+// NOTE: It does not check whether the ID is valid or not.
+func (na *NetAddress) HasID() bool {
+	return string(na.ID) != ""
+}
+
 // Local returns true if it is a local address.
 func (na *NetAddress) Local() bool {
 	return na.IP.IsLoopback() || zero4.Contains(na.IP)
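IDAddressString now tolerates a protocol prefix on the host/port portion by delegating to removeProtocolIfDefined, whose body is not part of this diff. A plausible sketch of that helper follows; it is an assumption for illustration, not the library's actual implementation.

    package main

    import (
        "fmt"
        "strings"
    )

    // removeProtocolIfDefined sketches the helper referenced above: if the input
    // looks like "tcp://id@host:port", strip everything up to and including "://".
    func removeProtocolIfDefined(addr string) string {
        if i := strings.Index(addr, "://"); i != -1 {
            return addr[i+len("://"):]
        }
        return addr
    }

    func main() {
        fmt.Println(removeProtocolIfDefined("tcp://deadbeef@10.0.0.1:26656")) // deadbeef@10.0.0.1:26656
        fmt.Println(removeProtocolIfDefined("deadbeef@10.0.0.1:26656"))       // unchanged
    }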
@@ -216,7 +216,8 @@ OUTER_LOOP:
 // ListenAddr. Note that the ListenAddr is not authenticated and
 // may not match that address actually dialed if its an outbound peer.
 func (info DefaultNodeInfo) NetAddress() *NetAddress {
-	netAddr, err := NewNetAddressString(IDAddressString(info.ID(), info.ListenAddr))
+	idAddr := IDAddressString(info.ID(), info.ListenAddr)
+	netAddr, err := NewNetAddressString(idAddr)
 	if err != nil {
 		switch err.(type) {
 		case ErrNetAddressLookup:
@@ -240,7 +240,7 @@ func (p *peer) Send(chID byte, msgBytes []byte) bool {
 	}
 	res := p.mconn.Send(chID, msgBytes)
 	if res {
-		p.metrics.PeerSendBytesTotal.With("peer-id", string(p.ID())).Add(float64(len(msgBytes)))
+		p.metrics.PeerSendBytesTotal.With("peer_id", string(p.ID())).Add(float64(len(msgBytes)))
 	}
 	return res
 }
@@ -255,7 +255,7 @@ func (p *peer) TrySend(chID byte, msgBytes []byte) bool {
 	}
 	res := p.mconn.TrySend(chID, msgBytes)
 	if res {
-		p.metrics.PeerSendBytesTotal.With("peer-id", string(p.ID())).Add(float64(len(msgBytes)))
+		p.metrics.PeerSendBytesTotal.With("peer_id", string(p.ID())).Add(float64(len(msgBytes)))
 	}
 	return res
 }
@@ -330,7 +330,7 @@ func (p *peer) metricsReporter() {
 				sendQueueSize += float64(chStatus.SendQueueSize)
 			}

-			p.metrics.PeerPendingSendBytes.With("peer-id", string(p.ID())).Set(sendQueueSize)
+			p.metrics.PeerPendingSendBytes.With("peer_id", string(p.ID())).Set(sendQueueSize)
 		case <-p.Quit():
 			return
 		}
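Prometheus label names must match [a-zA-Z_][a-zA-Z0-9_]*, so the hyphenated peer-id label was not a legal label name; renaming it to peer_id makes the metric queryable. A small standalone check of that naming rule, using the prometheus common model package purely to illustrate the constraint:

    package main

    import (
        "fmt"

        "github.com/prometheus/common/model"
    )

    func main() {
        // '-' is not allowed in a Prometheus label name, '_' is.
        fmt.Println(model.LabelName("peer-id").IsValid()) // false
        fmt.Println(model.LabelName("peer_id").IsValid()) // true
    }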
@@ -652,6 +652,10 @@ func (a *addrBook) addAddress(addr, src *p2p.NetAddress) error {
 		return ErrAddrBookInvalidAddr{addr}
 	}

+	if !addr.HasID() {
+		return ErrAddrBookInvalidAddrNoID{addr}
+	}
+
 	// TODO: we should track ourAddrs by ID and by IP:PORT and refuse both.
 	if _, ok := a.ourAddrs[addr.String()]; ok {
 		return ErrAddrBookSelf{addr}
@@ -54,3 +54,11 @@ type ErrAddrBookInvalidAddr struct {
 func (err ErrAddrBookInvalidAddr) Error() string {
 	return fmt.Sprintf("Cannot add invalid address %v", err.Addr)
 }
+
+type ErrAddrBookInvalidAddrNoID struct {
+	Addr *p2p.NetAddress
+}
+
+func (err ErrAddrBookInvalidAddrNoID) Error() string {
+	return fmt.Sprintf("Cannot add address with no ID %v", err.Addr)
+}
@@ -221,7 +221,11 @@ func (r *PEXReactor) Receive(chID byte, src Peer, msgBytes []byte) {
 		// 2) limit the output size
 		if r.config.SeedMode {
 			r.SendAddrs(src, r.book.GetSelectionWithBias(biasToSelectNewPeers))
-			r.Switch.StopPeerGracefully(src)
+			go func() {
+				// TODO Fix properly #2092
+				time.Sleep(time.Second * 5)
+				r.Switch.StopPeerGracefully(src)
+			}()
 		} else {
 			r.SendAddrs(src, r.book.GetSelection())
 		}
@@ -328,6 +328,11 @@ func (sw *Switch) reconnectToPeer(addr *NetAddress) {
 			return
 		}

+		if sw.IsDialingOrExistingAddress(addr) {
+			sw.Logger.Debug("Peer connection has been established or dialed while we waiting next try", "addr", addr)
+			return
+		}
+
 		err := sw.DialPeerWithAddress(addr, true)
 		if err == nil {
 			return // success
@@ -415,12 +420,15 @@ func (sw *Switch) DialPeersAsync(addrBook AddrBook, peers []string, persistent b
 			if addr.Same(ourAddr) {
 				sw.Logger.Debug("Ignore attempt to connect to ourselves", "addr", addr, "ourAddr", ourAddr)
 				return
-			} else if sw.IsDialingOrExistingAddress(addr) {
+			}
+
+			sw.randomSleep(0)
+
+			if sw.IsDialingOrExistingAddress(addr) {
 				sw.Logger.Debug("Ignore attempt to connect to an existing peer", "addr", addr)
 				return
 			}

-			sw.randomSleep(0)
 			err := sw.DialPeerWithAddress(addr, persistent)
 			if err != nil {
 				switch err.(type) {
@@ -616,7 +624,7 @@ func (sw *Switch) addPeer(p Peer) error {
 		return err
 	}

-	p.SetLogger(sw.Logger.With("peer", p.NodeInfo().NetAddress().String))
+	p.SetLogger(sw.Logger.With("peer", p.NodeInfo().NetAddress()))

 	// All good. Start peer
 	if sw.IsRunning() {
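The old logger call passed p.NodeInfo().NetAddress().String, a method value rather than a call, so the "peer" field would be logged as a function value instead of the address text; passing the NetAddress itself lets the logger format it through its Stringer. A tiny illustration of the difference:

    package main

    import (
        "fmt"
        "net"
    )

    type addr struct {
        ip   net.IP
        port uint16
    }

    func (a addr) String() string { return fmt.Sprintf("%s:%d", a.ip, a.port) }

    func main() {
        a := addr{ip: net.ParseIP("10.0.0.1"), port: 26656}
        fmt.Println(a)        // 10.0.0.1:26656 - the Stringer is invoked
        fmt.Println(a.String) // prints a function value (e.g. 0x...), not the address
    }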
@@ -8,7 +8,6 @@ import (
 	dbm "github.com/tendermint/tendermint/libs/db"
 	"github.com/tendermint/tendermint/libs/fail"
 	"github.com/tendermint/tendermint/libs/log"
-	"github.com/tendermint/tendermint/mempool"
 	"github.com/tendermint/tendermint/proxy"
 	"github.com/tendermint/tendermint/types"
 )
@@ -180,13 +179,8 @@ func (blockExec *BlockExecutor) Commit(
 	err = blockExec.mempool.Update(
 		block.Height,
 		block.Txs,
-		mempool.PreCheckAminoMaxBytes(
-			types.MaxDataBytesUnknownEvidence(
-				state.ConsensusParams.BlockSize.MaxBytes,
-				state.Validators.Size(),
-			),
-		),
-		mempool.PostCheckMaxGas(state.ConsensusParams.BlockSize.MaxGas),
+		TxPreCheck(state),
+		TxPostCheck(state),
 	)

 	return res.Data, err
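mempool.Update now receives the two state-derived callbacks directly. The callback types themselves are not shown in this diff; judging from the tests here (the pre-check returns nil for an acceptable transaction), they are presumably along the lines of the sketch below, with the exact signatures treated as an assumption:

    package mempool

    import (
        abci "github.com/tendermint/tendermint/abci/types"
        "github.com/tendermint/tendermint/types"
    )

    // PreCheckFunc is run before a transaction enters the mempool;
    // a non-nil error rejects the transaction outright.
    type PreCheckFunc func(types.Tx) error

    // PostCheckFunc is run against the ABCI CheckTx response;
    // a non-nil error rejects the transaction after the application has seen it.
    type PostCheckFunc func(types.Tx, *abci.ResponseCheckTx) error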
@@ -1,15 +1,22 @@
 package state

 import (
+	mempl "github.com/tendermint/tendermint/mempool"
 	"github.com/tendermint/tendermint/types"
 )

-// TxFilter returns a function to filter transactions. The function limits the
-// size of a transaction to the maximum block's data size.
-func TxFilter(state State) func(tx types.Tx) bool {
+// TxPreCheck returns a function to filter transactions before processing.
+// The function limits the size of a transaction to the block's maximum data size.
+func TxPreCheck(state State) mempl.PreCheckFunc {
 	maxDataBytes := types.MaxDataBytesUnknownEvidence(
 		state.ConsensusParams.BlockSize.MaxBytes,
 		state.Validators.Size(),
 	)
-	return func(tx types.Tx) bool { return int64(len(tx)) <= maxDataBytes }
+	return mempl.PreCheckAminoMaxBytes(maxDataBytes)
+}
+
+// TxPostCheck returns a function to filter transactions after processing.
+// The function limits the gas wanted by a transaction to the block's maximum total gas.
+func TxPostCheck(state State) mempl.PostCheckFunc {
+	return mempl.PostCheckMaxGas(state.ConsensusParams.BlockSize.MaxGas)
 }
@@ -18,12 +18,18 @@ func TestTxFilter(t *testing.T) {
 	genDoc := randomGenesisDoc()
 	genDoc.ConsensusParams.BlockSize.MaxBytes = 3000

+	// Max size of Txs is much smaller than size of block,
+	// since we need to account for commits and evidence.
 	testCases := []struct {
 		tx        types.Tx
-		isTxValid bool
+		isErr     bool
 	}{
-		{types.Tx(cmn.RandBytes(250)), true},
-		{types.Tx(cmn.RandBytes(3001)), false},
+		{types.Tx(cmn.RandBytes(250)), false},
+		{types.Tx(cmn.RandBytes(1809)), false},
+		{types.Tx(cmn.RandBytes(1810)), false},
+		{types.Tx(cmn.RandBytes(1811)), true},
+		{types.Tx(cmn.RandBytes(1812)), true},
+		{types.Tx(cmn.RandBytes(3000)), true},
 	}

 	for i, tc := range testCases {
@@ -31,8 +37,12 @@ func TestTxFilter(t *testing.T) {
 		state, err := LoadStateFromDBOrGenesisDoc(stateDB, genDoc)
 		require.NoError(t, err)

-		f := TxFilter(state)
-		assert.Equal(t, tc.isTxValid, f(tc.tx), "#%v", i)
+		f := TxPreCheck(state)
+		if tc.isErr {
+			assert.NotNil(t, f(tc.tx), "#%v", i)
+		} else {
+			assert.Nil(t, f(tc.tx), "#%v", i)
+		}
 	}
 }

@@ -21,6 +21,8 @@ const (

 	// MaxAminoOverheadForBlock - maximum amino overhead to encode a block (up to
 	// MaxBlockSizeBytes in size) not including it's parts except Data.
+	// This means it also excludes the overhead for individual transactions.
+	// To compute individual transactions' overhead use types.ComputeAminoOverhead(tx types.Tx, fieldNum int).
 	//
 	// Uvarint length of MaxBlockSizeBytes: 4 bytes
 	// 2 fields (2 embedded):               2 bytes

@@ -250,7 +250,7 @@ func TestMaxHeaderBytes(t *testing.T) {
 	timestamp := time.Date(math.MaxInt64, 0, 0, 0, 0, 0, math.MaxInt64, time.UTC)

 	h := Header{
-		Version: version.Consensus{math.MaxInt64, math.MaxInt64},
+		Version: version.Consensus{Block: math.MaxInt64, App: math.MaxInt64},
 		ChainID: maxChainID,
 		Height:  math.MaxInt64,
 		Time:    timestamp,

@@ -61,7 +61,7 @@ func TestEvidence(t *testing.T) {
 		{vote1, makeVote(val, chainID, 0, 10, 3, 1, blockID2), false}, // wrong round
 		{vote1, makeVote(val, chainID, 0, 10, 2, 2, blockID2), false}, // wrong step
 		{vote1, makeVote(val2, chainID, 0, 10, 2, 1, blockID), false}, // wrong validator
 		{vote1, badVote, false}, // signed by wrong key
 	}

 	pubKey := val.GetPubKey()
@@ -121,3 +121,38 @@ func randomDuplicatedVoteEvidence() *DuplicateVoteEvidence {
 		VoteB: makeVote(val, chainID, 0, 10, 2, 1, blockID2),
 	}
 }
+
+func TestDuplicateVoteEvidenceValidation(t *testing.T) {
+	val := NewMockPV()
+	blockID := makeBlockID(tmhash.Sum([]byte("blockhash")), math.MaxInt64, tmhash.Sum([]byte("partshash")))
+	blockID2 := makeBlockID(tmhash.Sum([]byte("blockhash2")), math.MaxInt64, tmhash.Sum([]byte("partshash")))
+	const chainID = "mychain"
+
+	testCases := []struct {
+		testName         string
+		malleateEvidence func(*DuplicateVoteEvidence)
+		expectErr        bool
+	}{
+		{"Good DuplicateVoteEvidence", func(ev *DuplicateVoteEvidence) {}, false},
+		{"Nil vote A", func(ev *DuplicateVoteEvidence) { ev.VoteA = nil }, true},
+		{"Nil vote B", func(ev *DuplicateVoteEvidence) { ev.VoteB = nil }, true},
+		{"Nil votes", func(ev *DuplicateVoteEvidence) {
+			ev.VoteA = nil
+			ev.VoteB = nil
+		}, true},
+		{"Invalid vote type", func(ev *DuplicateVoteEvidence) {
+			ev.VoteA = makeVote(val, chainID, math.MaxInt64, math.MaxInt64, math.MaxInt64, 0, blockID2)
+		}, true},
+	}
+	for _, tc := range testCases {
+		t.Run(tc.testName, func(t *testing.T) {
+			ev := &DuplicateVoteEvidence{
+				PubKey: secp256k1.GenPrivKey().PubKey(),
+				VoteA:  makeVote(val, chainID, math.MaxInt64, math.MaxInt64, math.MaxInt64, 0x02, blockID),
+				VoteB:  makeVote(val, chainID, math.MaxInt64, math.MaxInt64, math.MaxInt64, 0x02, blockID2),
+			}
+			tc.malleateEvidence(ev)
+			assert.Equal(t, tc.expectErr, ev.ValidateBasic() != nil, "Validate Basic had an unexpected result")
+		})
+	}
+}

@@ -3,8 +3,10 @@ package types
 import (
 	"testing"

+	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 	"github.com/tendermint/tendermint/crypto/ed25519"
+	"github.com/tendermint/tendermint/crypto/secp256k1"
 )

 func TestHeartbeatCopy(t *testing.T) {
@@ -58,3 +60,45 @@ func TestHeartbeatWriteSignBytes(t *testing.T) {
 		require.Equal(t, string(signBytes), "null")
 	})
 }
+
+func TestHeartbeatValidateBasic(t *testing.T) {
+	testCases := []struct {
+		testName          string
+		malleateHeartBeat func(*Heartbeat)
+		expectErr         bool
+	}{
+		{"Good HeartBeat", func(hb *Heartbeat) {}, false},
+		{"Invalid address size", func(hb *Heartbeat) {
+			hb.ValidatorAddress = nil
+		}, true},
+		{"Negative validator index", func(hb *Heartbeat) {
+			hb.ValidatorIndex = -1
+		}, true},
+		{"Negative height", func(hb *Heartbeat) {
+			hb.Height = -1
+		}, true},
+		{"Negative round", func(hb *Heartbeat) {
+			hb.Round = -1
+		}, true},
+		{"Negative sequence", func(hb *Heartbeat) {
+			hb.Sequence = -1
+		}, true},
+		{"Missing signature", func(hb *Heartbeat) {
+			hb.Signature = nil
+		}, true},
+		{"Signature too big", func(hb *Heartbeat) {
+			hb.Signature = make([]byte, MaxSignatureSize+1)
+		}, true},
+	}
+	for _, tc := range testCases {
+		t.Run(tc.testName, func(t *testing.T) {
+			hb := &Heartbeat{
+				ValidatorAddress: secp256k1.GenPrivKey().PubKey().Address(),
+				Signature:        make([]byte, 4),
+				ValidatorIndex:   1, Height: 10, Round: 1}

+			tc.malleateHeartBeat(hb)
+			assert.Equal(t, tc.expectErr, hb.ValidateBasic() != nil, "Validate Basic had an unexpected result")
+		})
+	}
+}

@@ -83,3 +83,48 @@ func TestWrongProof(t *testing.T) {
 		t.Errorf("Expected to fail adding a part with bad bytes.")
 	}
 }
+
+func TestPartSetHeaderSetValidateBasic(t *testing.T) {
+
+	testCases := []struct {
+		testName              string
+		malleatePartSetHeader func(*PartSetHeader)
+		expectErr             bool
+	}{
+		{"Good PartSet", func(psHeader *PartSetHeader) {}, false},
+		{"Negative Total", func(psHeader *PartSetHeader) { psHeader.Total = -2 }, true},
+		{"Invalid Hash", func(psHeader *PartSetHeader) { psHeader.Hash = make([]byte, 1) }, true},
+	}
+	for _, tc := range testCases {
+		t.Run(tc.testName, func(t *testing.T) {
+			data := cmn.RandBytes(testPartSize * 100)
+			ps := NewPartSetFromData(data, testPartSize)
+			psHeader := ps.Header()
+			tc.malleatePartSetHeader(&psHeader)
+			assert.Equal(t, tc.expectErr, psHeader.ValidateBasic() != nil, "Validate Basic had an unexpected result")
+		})
+	}
+}
+
+func TestPartValidateBasic(t *testing.T) {
+
+	testCases := []struct {
+		testName     string
+		malleatePart func(*Part)
+		expectErr    bool
+	}{
+		{"Good Part", func(pt *Part) {}, false},
+		{"Negative index", func(pt *Part) { pt.Index = -1 }, true},
+		{"Too big part", func(pt *Part) { pt.Bytes = make([]byte, BlockPartSizeBytes+1) }, true},
+	}

+	for _, tc := range testCases {
+		t.Run(tc.testName, func(t *testing.T) {
+			data := cmn.RandBytes(testPartSize * 100)
+			ps := NewPartSetFromData(data, testPartSize)
+			part := ps.GetPart(0)
+			tc.malleatePart(part)
+			assert.Equal(t, tc.expectErr, part.ValidateBasic() != nil, "Validate Basic had an unexpected result")
+		})
+	}
+}

@@ -1,10 +1,13 @@
 package types

 import (
+	"math"
 	"testing"
 	"time"

+	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
+	"github.com/tendermint/tendermint/crypto/tmhash"
 )

 var testProposal *Proposal
@@ -97,3 +100,41 @@ func BenchmarkProposalVerifySignature(b *testing.B) {
 		pubKey.VerifyBytes(testProposal.SignBytes("test_chain_id"), testProposal.Signature)
 	}
 }
+
+func TestProposalValidateBasic(t *testing.T) {
+
+	privVal := NewMockPV()
+	testCases := []struct {
+		testName         string
+		malleateProposal func(*Proposal)
+		expectErr        bool
+	}{
+		{"Good Proposal", func(p *Proposal) {}, false},
+		{"Invalid Type", func(p *Proposal) { p.Type = PrecommitType }, true},
+		{"Invalid Height", func(p *Proposal) { p.Height = -1 }, true},
+		{"Invalid Round", func(p *Proposal) { p.Round = -1 }, true},
+		{"Invalid POLRound", func(p *Proposal) { p.POLRound = -2 }, true},
+		{"Invalid BlockId", func(p *Proposal) {
+			p.BlockID = BlockID{[]byte{1, 2, 3}, PartSetHeader{111, []byte("blockparts")}}
+		}, true},
+		{"Invalid Signature", func(p *Proposal) {
+			p.Signature = make([]byte, 0)
+		}, true},
+		{"Too big Signature", func(p *Proposal) {
+			p.Signature = make([]byte, MaxSignatureSize+1)
+		}, true},
+	}
+	blockID := makeBlockID(tmhash.Sum([]byte("blockhash")), math.MaxInt64, tmhash.Sum([]byte("partshash")))

+	for _, tc := range testCases {
+		t.Run(tc.testName, func(t *testing.T) {
+			prop := NewProposal(
+				4, 2, 2,
+				blockID)
+			err := privVal.SignProposal("test_chain_id", prop)
+			require.NoError(t, err)
+			tc.malleateProposal(prop)
+			assert.Equal(t, tc.expectErr, prop.ValidateBasic() != nil, "Validate Basic had an unexpected result")
+		})
+	}
+}
types/tx.go (17 changed lines)

@@ -5,6 +5,8 @@ import (
 	"errors"
 	"fmt"

+	"github.com/tendermint/go-amino"
+
 	abci "github.com/tendermint/tendermint/abci/types"
 	"github.com/tendermint/tendermint/crypto/merkle"
 	"github.com/tendermint/tendermint/crypto/tmhash"
@@ -118,3 +120,18 @@ type TxResult struct {
 	Tx     Tx                     `json:"tx"`
 	Result abci.ResponseDeliverTx `json:"result"`
 }
+
+// ComputeAminoOverhead calculates the overhead for amino encoding a transaction.
+// The overhead consists of varint encoding the field number and the wire type
+// (= length-delimited = 2), and another varint encoding the length of the
+// transaction.
+// The field number can be the field number of the particular transaction, or
+// the field number of the parenting struct that contains the transactions []Tx
+// as a field (this field number is repeated for each contained Tx).
+// If some []Tx are encoded directly (without a parenting struct), the default
+// fieldNum is also 1 (see BinFieldNum in amino.MarshalBinaryBare).
+func ComputeAminoOverhead(tx Tx, fieldNum int) int64 {
+	fnum := uint64(fieldNum)
+	typ3AndFieldNum := (uint64(fnum) << 3) | uint64(amino.Typ3_ByteLength)
+	return int64(amino.UvarintSize(typ3AndFieldNum)) + int64(amino.UvarintSize(uint64(len(tx))))
+}
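To make the overhead arithmetic concrete: for fieldNum 1 the field-and-wire-type varint is uvarint((1<<3)|2) = uvarint(10), one byte, and a 3-byte transaction needs one more byte for its length varint, giving the overhead of 2 exercised by the tests below; transactions of 128 bytes and up need a two-byte length varint. A standalone recomputation with a local uvarint helper (not the amino package itself):

    package main

    import "fmt"

    func uvarintSize(u uint64) int {
        n := 1
        for u >= 0x80 {
            u >>= 7
            n++
        }
        return n
    }

    // aminoOverhead recomputes ComputeAminoOverhead without importing go-amino:
    // one varint for (fieldNum<<3 | wireType=2) plus one varint for the length.
    func aminoOverhead(txLen, fieldNum int) int {
        typ3AndFieldNum := uint64(fieldNum)<<3 | 2 // 2 = length-delimited
        return uvarintSize(typ3AndFieldNum) + uvarintSize(uint64(txLen))
    }

    func main() {
        fmt.Println(aminoOverhead(3, 1))    // 2: matches {[]byte{6, 6, 6}, 1, 2} below
        fmt.Println(aminoOverhead(3, 16))   // 3: field number 16 needs a two-byte varint
        fmt.Println(aminoOverhead(128, 1))  // 3: length 128 needs a two-byte varint
        fmt.Println(aminoOverhead(128, 16)) // 4
    }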
@@ -96,6 +96,63 @@ func TestTxProofUnchangable(t *testing.T) {
 	}
 }

+func TestComputeTxsOverhead(t *testing.T) {
+	cases := []struct {
+		txs          Txs
+		wantOverhead int
+	}{
+		{Txs{[]byte{6, 6, 6, 6, 6, 6}}, 2},
+		// one 21 Mb transaction:
+		{Txs{make([]byte, 22020096, 22020096)}, 5},
+		// two 21Mb/2 sized transactions:
+		{Txs{make([]byte, 11010048, 11010048), make([]byte, 11010048, 11010048)}, 10},
+		{Txs{[]byte{1, 2, 3}, []byte{1, 2, 3}, []byte{4, 5, 6}}, 6},
+		{Txs{[]byte{100, 5, 64}, []byte{42, 116, 118}, []byte{6, 6, 6}, []byte{6, 6, 6}}, 8},
+	}
+
+	for _, tc := range cases {
+		totalBytes := int64(0)
+		totalOverhead := int64(0)
+		for _, tx := range tc.txs {
+			aminoOverhead := ComputeAminoOverhead(tx, 1)
+			totalOverhead += aminoOverhead
+			totalBytes += aminoOverhead + int64(len(tx))
+		}
+		bz, err := cdc.MarshalBinaryBare(tc.txs)
+		assert.EqualValues(t, tc.wantOverhead, totalOverhead)
+		assert.NoError(t, err)
+		assert.EqualValues(t, len(bz), totalBytes)
+	}
+}
+
+func TestComputeAminoOverhead(t *testing.T) {
+	cases := []struct {
+		tx       Tx
+		fieldNum int
+		want     int
+	}{
+		{[]byte{6, 6, 6}, 1, 2},
+		{[]byte{6, 6, 6}, 16, 3},
+		{[]byte{6, 6, 6}, 32, 3},
+		{[]byte{6, 6, 6}, 64, 3},
+		{[]byte{6, 6, 6}, 512, 3},
+		{[]byte{6, 6, 6}, 1024, 3},
+		{[]byte{6, 6, 6}, 2048, 4},
+		{make([]byte, 64), 1, 2},
+		{make([]byte, 65), 1, 2},
+		{make([]byte, 127), 1, 2},
+		{make([]byte, 128), 1, 3},
+		{make([]byte, 256), 1, 3},
+		{make([]byte, 512), 1, 3},
+		{make([]byte, 1024), 1, 3},
+		{make([]byte, 128), 16, 4},
+	}
+	for _, tc := range cases {
+		got := ComputeAminoOverhead(tc.tx, tc.fieldNum)
+		assert.EqualValues(t, tc.want, got)
+	}
+}
+
 func testTxProofUnchangable(t *testing.T) {
 	// make some proof
 	txs := makeTxs(randInt(2, 100), randInt(16, 128))
@@ -158,7 +158,7 @@ func (voteSet *VoteSet) addVote(vote *Vote) (added bool, err error) {
 	if (vote.Height != voteSet.height) ||
 		(vote.Round != voteSet.round) ||
 		(vote.Type != voteSet.type_) {
-		return false, errors.Wrapf(ErrVoteUnexpectedStep, "Got %d/%d/%d, expected %d/%d/%d",
+		return false, errors.Wrapf(ErrVoteUnexpectedStep, "Expected %d/%d/%d, but got %d/%d/%d",
 			voteSet.height, voteSet.round, voteSet.type_,
 			vote.Height, vote.Round, vote.Type)
 	}
@@ -250,3 +250,31 @@ func TestVoteString(t *testing.T) {
 		t.Errorf("Got unexpected string for Vote. Expected:\n%v\nGot:\n%v", expected, str2)
 	}
 }
+
+func TestVoteValidateBasic(t *testing.T) {
+	privVal := NewMockPV()
+
+	testCases := []struct {
+		testName     string
+		malleateVote func(*Vote)
+		expectErr    bool
+	}{
+		{"Good Vote", func(v *Vote) {}, false},
+		{"Negative Height", func(v *Vote) { v.Height = -1 }, true},
+		{"Negative Round", func(v *Vote) { v.Round = -1 }, true},
+		{"Invalid BlockID", func(v *Vote) { v.BlockID = BlockID{[]byte{1, 2, 3}, PartSetHeader{111, []byte("blockparts")}} }, true},
+		{"Invalid Address", func(v *Vote) { v.ValidatorAddress = make([]byte, 1) }, true},
+		{"Invalid ValidatorIndex", func(v *Vote) { v.ValidatorIndex = -1 }, true},
+		{"Invalid Signature", func(v *Vote) { v.Signature = nil }, true},
+		{"Too big Signature", func(v *Vote) { v.Signature = make([]byte, MaxSignatureSize+1) }, true},
+	}
+	for _, tc := range testCases {
+		t.Run(tc.testName, func(t *testing.T) {
+			vote := examplePrecommit()
+			err := privVal.SignVote("test_chain_id", vote)
+			require.NoError(t, err)
+			tc.malleateVote(vote)
+			assert.Equal(t, tc.expectErr, vote.ValidateBasic() != nil, "Validate Basic had an unexpected result")
+		})
+	}
+}
@@ -18,7 +18,7 @@ const (
 	// TMCoreSemVer is the current version of Tendermint Core.
 	// It's the Semantic Version of the software.
 	// Must be a string because scripts like dist.sh read this file.
-	TMCoreSemVer = "0.26.0"
+	TMCoreSemVer = "0.26.1"

 	// ABCISemVer is the semantic version of the ABCI library
 	ABCISemVer = "0.15.0"