Mirror of https://github.com/fluencelabs/tendermint (synced 2025-07-21 15:21:57 +00:00)

Compare commits: v0.26.3 ... v0.27.0-de (35 commits)

Commit SHAs, newest first:

44b769b1ac, 380afaa678, b30c34e713, 4039276085, 3f987adc92,
b11788d36d, 9adcfe2804, 3d15579e0c, 4571f0fbe8, 8a73feae14,
e291fbbebe, 416d143bf7, 7213869fc6, ef9902e602, b771798d48,
1abf34aa91, 92dc5fc77a, bef39f3346, 94e63be922, 9570ac4d3e,
99b9c9bf60, 47a0669d12, fe3b97fd66, 56052c0a87, 98e442a8de,
b12488b5f1, b487feba42, 72f86b5192, 42592d9ae0, 1610a05cbd,
2d525bf2b8, e9efbfe267, 7b883a5457, fd8d1d6b69, 5abdd254e7
CHANGELOG.md (43 changed lines)

@@ -1,5 +1,48 @@

# Changelog

## v0.26.4

*November 27th, 2018*

Special thanks to external contributors on this release:
ackratos, goolAdapter, james-ray, joe-bowman, kostko,
nagarajmanjunath, tomtau

Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).

### FEATURES:

- [rpc] [\#2747](https://github.com/tendermint/tendermint/issues/2747) Enable subscription to tags emitted from `BeginBlock`/`EndBlock` (@kostko)
- [types] [\#2747](https://github.com/tendermint/tendermint/issues/2747) Add `ResultBeginBlock` and `ResultEndBlock` fields to `EventDataNewBlock`
  and `EventDataNewBlockHeader` to support subscriptions (@kostko)
- [types] [\#2918](https://github.com/tendermint/tendermint/issues/2918) Add Marshal, MarshalTo, Unmarshal methods to various structs
  to support Protobuf compatibility (@nagarajmanjunath)
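The [\#2747](https://github.com/tendermint/tendermint/issues/2747) entries above imply an event payload shaped roughly like the sketch below. Only the field names come from the changelog; the block, header, and ABCI response types here are minimal placeholders for illustration, not the real definitions.

```
package main

import "fmt"

// Minimal placeholders standing in for the real block and ABCI response types.
type Block struct{ Height int64 }
type KVPair struct{ Key, Value []byte }
type ResponseBeginBlock struct{ Tags []KVPair }
type ResponseEndBlock struct{ Tags []KVPair }

// Sketch of the payload shape described in #2747: the new-block event now
// carries the BeginBlock/EndBlock results, so subscribers can read the tags
// the application emitted there.
type EventDataNewBlock struct {
	Block            *Block
	ResultBeginBlock ResponseBeginBlock
	ResultEndBlock   ResponseEndBlock
}

func main() {
	ev := EventDataNewBlock{
		Block:          &Block{Height: 10},
		ResultEndBlock: ResponseEndBlock{Tags: []KVPair{{Key: []byte("app.valid"), Value: []byte("true")}}},
	}
	fmt.Println(ev.Block.Height, string(ev.ResultEndBlock.Tags[0].Key))
}
```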
### IMPROVEMENTS:

- [config] [\#2877](https://github.com/tendermint/tendermint/issues/2877) Add `blocktime_iota` to the config.toml (@ackratos)
  - NOTE: this should be a ConsensusParam, not part of the config, and will be
    removed from the config at a later date
    ([\#2920](https://github.com/tendermint/tendermint/issues/2920)).
- [mempool] [\#2882](https://github.com/tendermint/tendermint/issues/2882) Add txs from Update to cache
- [mempool] [\#2891](https://github.com/tendermint/tendermint/issues/2891) Remove local int64 counter from being stored in every tx
- [node] [\#2866](https://github.com/tendermint/tendermint/issues/2866) Add ability to instantiate IPCVal (@joe-bowman)

### BUG FIXES:

- [blockchain] [\#2731](https://github.com/tendermint/tendermint/issues/2731) Retry both blocks if either is bad to avoid getting stuck during fast sync (@goolAdapter)
- [consensus] [\#2893](https://github.com/tendermint/tendermint/issues/2893) Use genDoc.Validators instead of state.NextValidators on replay when appHeight==0 (@james-ray)
- [log] [\#2868](https://github.com/tendermint/tendermint/issues/2868) Fix `module=main` setting overriding all others
  - NOTE: this changes the default logging behaviour to be much less verbose.
    Set `log_level="info"` to restore the previous behaviour.
- [rpc] [\#2808](https://github.com/tendermint/tendermint/issues/2808) Fix `accum` field in `/validators` by calling `IncrementAccum` if necessary
- [rpc] [\#2811](https://github.com/tendermint/tendermint/issues/2811) Allow integer IDs in JSON-RPC requests (@tomtau)
- [txindex/kv] [\#2759](https://github.com/tendermint/tendermint/issues/2759) Fix tx.height range queries
- [txindex/kv] [\#2775](https://github.com/tendermint/tendermint/issues/2775) Order tx results by index if height is the same
- [txindex/kv] [\#2908](https://github.com/tendermint/tendermint/issues/2908) Don't return false positives when searching for a prefix of a tag value

## v0.26.3

*November 17th, 2018*
@@ -1,6 +1,6 @@

# Pending

## v0.26.4
## v0.27.0

*TBD*

@@ -12,17 +12,37 @@ program](https://hackerone.com/tendermint).

### BREAKING CHANGES:

* CLI/RPC/Config
  - [rpc] \#2932 Rename `accum` to `proposer_priority`

* Apps

* Go API
  - [db] [\#2913](https://github.com/tendermint/tendermint/pull/2913)
    ReverseIterator API change -- start < end, and end is exclusive.
  - [types] \#2932 Rename `Validator.Accum` to `Validator.ProposerPriority`
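The `[db]` entry above changes `ReverseIterator` so the caller passes `start < end`, with `end` exclusive. The sketch below illustrates that interval convention on a plain sorted key list rather than the real `libs/db` interface, whose exact signatures are not reproduced here.

```
package main

import (
	"fmt"
	"sort"
)

// reverseRange mimics the post-#2913 convention: keys are visited in
// descending order, restricted to the half-open interval [start, end).
func reverseRange(keys []string, start, end string) []string {
	sort.Strings(keys)
	var out []string
	for i := len(keys) - 1; i >= 0; i-- {
		k := keys[i]
		if k >= start && k < end { // end is exclusive
			out = append(out, k)
		}
	}
	return out
}

func main() {
	keys := []string{"a", "b", "c", "d"}
	// Iterate keys in ["b", "d") from high to low: c, then b.
	fmt.Println(reverseRange(keys, "b", "d")) // [c b]
}
```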
* Blockchain Protocol
  - [state] \#2714 Validators can now only use pubkeys allowed within
    ConsensusParams.ValidatorParams

* P2P Protocol
  - [consensus] [\#2871](https://github.com/tendermint/tendermint/issues/2871)
    Remove *ProposalHeartbeat* message as it serves no real purpose
  - [state] Fixes for proposer selection:
    - \#2785 Accum for new validators is `-1.125*totalVotingPower` instead of 0
    - \#2941 val.Accum is preserved during ValidatorSet.Update to avoid being
      reset to 0

### FEATURES:

### IMPROVEMENTS:

### BUG FIXES:

- [types] \#2938 Fix regression in v0.26.4 where we panic on empty
  genDoc.Validators
- [state] \#2785 Fix accum for new validators to be `-1.125*totalVotingPower`
  instead of 0, forcing them to wait before becoming the proposer. Also:
  - do not batch clip
  - keep accums averaged near 0
- [types] \#2941 Preserve val.Accum during ValidatorSet.Update to avoid it being
  reset to 0 every time a validator is updated
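The `-1.125*totalVotingPower` rule from \#2785 and the "preserve on update" rule from \#2941 are easy to see with concrete numbers. The sketch below only reproduces the arithmetic stated in the changelog; it is not the actual `ValidatorSet` code, and the validator type is a stand-in.

```
package main

import "fmt"

type validator struct {
	Name             string
	VotingPower      int64
	ProposerPriority int64 // called Accum before the #2932 rename
}

func main() {
	// Existing set with 1000 total voting power.
	total := int64(1000)

	// A validator added to the set starts at -1.125 * totalVotingPower
	// instead of 0, so it has to "wait in line" before proposing.
	newVal := validator{Name: "new", VotingPower: 100}
	newVal.ProposerPriority = -(total + total/8) // -1.125 * 1000 = -1125
	fmt.Println(newVal.ProposerPriority)         // -1125

	// Per #2941, updating an existing validator (e.g. a power change)
	// keeps its current priority rather than resetting it to 0.
	existing := validator{Name: "old", VotingPower: 200, ProposerPriority: 350}
	existing.VotingPower = 250                 // priority intentionally untouched
	fmt.Println(existing.ProposerPriority)     // still 350
}
```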
Makefile (5 changed lines)

@@ -32,6 +32,9 @@ build_race:

install:
	CGO_ENABLED=0 go install $(BUILD_FLAGS) -tags $(BUILD_TAGS) ./cmd/tendermint

install_c:
	CGO_ENABLED=1 go install $(BUILD_FLAGS) -tags "$(BUILD_TAGS) gcc" ./cmd/tendermint

########################################
### Protobuf

@@ -328,4 +331,4 @@ build-slate:

# To avoid unintended conflicts with file names, always add to .PHONY
# unless there is a reason not to.
# https://www.gnu.org/software/make/manual/html_node/Phony-Targets.html
.PHONY: check build build_race build_abci dist install install_abci check_dep check_tools get_tools get_dev_tools update_tools get_vendor_deps draw_deps get_protoc protoc_abci protoc_libs gen_certs clean_certs grpc_dbserver test_cover test_apps test_persistence test_p2p test test_race test_integrations test_release test100 vagrant_test fmt rpc-docs build-linux localnet-start localnet-stop build-docker build-docker-localnode sentry-start sentry-config sentry-stop build-slate protoc_grpc protoc_all
.PHONY: check build build_race build_abci dist install install_abci check_dep check_tools get_tools get_dev_tools update_tools get_vendor_deps draw_deps get_protoc protoc_abci protoc_libs gen_certs clean_certs grpc_dbserver test_cover test_apps test_persistence test_p2p test test_race test_integrations test_release test100 vagrant_test fmt rpc-docs build-linux localnet-start localnet-stop build-docker build-docker-localnode sentry-start sentry-config sentry-stop build-slate protoc_grpc protoc_all build_c install_c
@@ -168,9 +168,12 @@ func (pool *BlockPool) IsCaughtUp() bool {
		return false
	}

	// some conditions to determine if we're caught up
	receivedBlockOrTimedOut := (pool.height > 0 || time.Since(pool.startTime) > 5*time.Second)
	ourChainIsLongestAmongPeers := pool.maxPeerHeight == 0 || pool.height >= pool.maxPeerHeight
	// Some conditions to determine if we're caught up.
	// Ensures we've either received a block or waited some amount of time,
	// and that we're synced to the highest known height. Note we use maxPeerHeight - 1
	// because to sync block H requires block H+1 to verify the LastCommit.
	receivedBlockOrTimedOut := pool.height > 0 || time.Since(pool.startTime) > 5*time.Second
	ourChainIsLongestAmongPeers := pool.maxPeerHeight == 0 || pool.height >= (pool.maxPeerHeight-1)
	isCaughtUp := receivedBlockOrTimedOut && ourChainIsLongestAmongPeers
	return isCaughtUp
}
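The hunk above loosens the catch-up check to `maxPeerHeight - 1`, because block H can only be fully verified once block H+1 (carrying its LastCommit) arrives. A self-contained restatement of that predicate, with the pool fields reduced to plain parameters (names here are illustrative, not the real API):

```
package main

import (
	"fmt"
	"time"
)

// caughtUp mirrors the logic of BlockPool.IsCaughtUp after this change:
// we must have received at least one block (or waited long enough), and
// be within one block of the tallest known peer.
func caughtUp(height, maxPeerHeight int64, startTime time.Time) bool {
	receivedBlockOrTimedOut := height > 0 || time.Since(startTime) > 5*time.Second
	ourChainIsLongestAmongPeers := maxPeerHeight == 0 || height >= maxPeerHeight-1
	return receivedBlockOrTimedOut && ourChainIsLongestAmongPeers
}

func main() {
	start := time.Now()
	fmt.Println(caughtUp(99, 100, start)) // true: one behind the best peer is enough
	fmt.Println(caughtUp(98, 100, start)) // false: still two blocks behind
	fmt.Println(caughtUp(0, 0, start))    // false: nothing received and the 5s timeout has not elapsed
}
```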
@@ -252,7 +255,8 @@ func (pool *BlockPool) AddBlock(peerID p2p.ID, block *types.Block, blockSize int
			peer.decrPending(blockSize)
		}
	} else {
		// Bad peer?
		pool.Logger.Info("invalid peer", "peer", peerID, "blockHeight", block.Height)
		pool.sendError(errors.New("invalid peer"), peerID)
	}
}

@@ -292,7 +296,7 @@ func (pool *BlockPool) RemovePeer(peerID p2p.ID) {
func (pool *BlockPool) removePeer(peerID p2p.ID) {
	for _, requester := range pool.requesters {
		if requester.getPeerID() == peerID {
			requester.redo()
			requester.redo(peerID)
		}
	}
	delete(pool.peers, peerID)

@@ -326,8 +330,11 @@ func (pool *BlockPool) makeNextRequester() {
	defer pool.mtx.Unlock()

	nextHeight := pool.height + pool.requestersLen()
	if nextHeight > pool.maxPeerHeight {
		return
	}

	request := newBPRequester(pool, nextHeight)
	// request.SetLogger(pool.Logger.With("height", nextHeight))

	pool.requesters[nextHeight] = request
	atomic.AddInt32(&pool.numPending, 1)

@@ -453,7 +460,7 @@ type bpRequester struct {
	pool       *BlockPool
	height     int64
	gotBlockCh chan struct{}
	redoCh     chan struct{}
	redoCh     chan p2p.ID // redo may be sent multiple times; carry the peerID to identify duplicates

	mtx    sync.Mutex
	peerID p2p.ID

@@ -465,7 +472,7 @@ func newBPRequester(pool *BlockPool, height int64) *bpRequester {
		pool:       pool,
		height:     height,
		gotBlockCh: make(chan struct{}, 1),
		redoCh:     make(chan struct{}, 1),
		redoCh:     make(chan p2p.ID, 1),

		peerID: "",
		block:  nil,

@@ -524,9 +531,9 @@ func (bpr *bpRequester) reset() {
// Tells bpRequester to pick another peer and try again.
// NOTE: Nonblocking, and does nothing if another redo
// was already requested.
func (bpr *bpRequester) redo() {
func (bpr *bpRequester) redo(peerId p2p.ID) {
	select {
	case bpr.redoCh <- struct{}{}:
	case bpr.redoCh <- peerId:
	default:
	}
}

@@ -565,9 +572,13 @@ OUTER_LOOP:
			return
		case <-bpr.Quit():
			return
		case <-bpr.redoCh:
			bpr.reset()
			continue OUTER_LOOP
		case peerID := <-bpr.redoCh:
			if peerID == bpr.peerID {
				bpr.reset()
				continue OUTER_LOOP
			} else {
				continue WAIT_LOOP
			}
		case <-bpr.gotBlockCh:
			// We got a block!
			// Continue the for-loop and wait til Quit.
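The requester changes above turn `redoCh` into a buffered `chan p2p.ID` so that a redo can be attributed to the peer that triggered it: the send stays non-blocking, and the receive loop ignores redos that refer to a peer the requester is no longer using. A stripped-down sketch of the same pattern, with `string` standing in for `p2p.ID`:

```
package main

import "fmt"

type requester struct {
	peerID string
	redoCh chan string // buffered with capacity 1: at most one pending redo
}

// redo is non-blocking: if a redo is already queued, the new one is dropped.
func (r *requester) redo(peerID string) {
	select {
	case r.redoCh <- peerID:
	default:
	}
}

// handleRedos drains the channel and only "resets" when the redo actually
// targets the peer this requester is currently assigned to.
func (r *requester) handleRedos() {
	for peerID := range r.redoCh {
		if peerID == r.peerID {
			fmt.Println("reset and re-request from a new peer")
		} else {
			fmt.Println("stale redo for", peerID, "- ignore")
		}
	}
}

func main() {
	r := &requester{peerID: "peerA", redoCh: make(chan string, 1)}
	r.redo("peerB") // queued, but refers to a peer this requester is not using
	r.redo("peerA") // dropped: a redo is already queued
	close(r.redoCh)
	r.handleRedos()
}
```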
@@ -16,16 +16,52 @@ func init() {
|
||||
}
|
||||
|
||||
type testPeer struct {
|
||||
id p2p.ID
|
||||
height int64
|
||||
id p2p.ID
|
||||
height int64
|
||||
inputChan chan inputData //make sure each peer's data is sequential
|
||||
}
|
||||
|
||||
func makePeers(numPeers int, minHeight, maxHeight int64) map[p2p.ID]testPeer {
|
||||
peers := make(map[p2p.ID]testPeer, numPeers)
|
||||
type inputData struct {
|
||||
t *testing.T
|
||||
pool *BlockPool
|
||||
request BlockRequest
|
||||
}
|
||||
|
||||
func (p testPeer) runInputRoutine() {
|
||||
go func() {
|
||||
for input := range p.inputChan {
|
||||
p.simulateInput(input)
|
||||
}
|
||||
}()
|
||||
}
|
||||
|
||||
// Request desired, pretend like we got the block immediately.
|
||||
func (p testPeer) simulateInput(input inputData) {
|
||||
block := &types.Block{Header: types.Header{Height: input.request.Height}}
|
||||
input.pool.AddBlock(input.request.PeerID, block, 123)
|
||||
input.t.Logf("Added block from peer %v (height: %v)", input.request.PeerID, input.request.Height)
|
||||
}
|
||||
|
||||
type testPeers map[p2p.ID]testPeer
|
||||
|
||||
func (ps testPeers) start() {
|
||||
for _, v := range ps {
|
||||
v.runInputRoutine()
|
||||
}
|
||||
}
|
||||
|
||||
func (ps testPeers) stop() {
|
||||
for _, v := range ps {
|
||||
close(v.inputChan)
|
||||
}
|
||||
}
|
||||
|
||||
func makePeers(numPeers int, minHeight, maxHeight int64) testPeers {
|
||||
peers := make(testPeers, numPeers)
|
||||
for i := 0; i < numPeers; i++ {
|
||||
peerID := p2p.ID(cmn.RandStr(12))
|
||||
height := minHeight + cmn.RandInt63n(maxHeight-minHeight)
|
||||
peers[peerID] = testPeer{peerID, height}
|
||||
peers[peerID] = testPeer{peerID, height, make(chan inputData, 10)}
|
||||
}
|
||||
return peers
|
||||
}
|
||||
@@ -45,6 +81,9 @@ func TestBasic(t *testing.T) {
|
||||
|
||||
defer pool.Stop()
|
||||
|
||||
peers.start()
|
||||
defer peers.stop()
|
||||
|
||||
// Introduce each peer.
|
||||
go func() {
|
||||
for _, peer := range peers {
|
||||
@@ -77,12 +116,8 @@ func TestBasic(t *testing.T) {
|
||||
if request.Height == 300 {
|
||||
return // Done!
|
||||
}
|
||||
// Request desired, pretend like we got the block immediately.
|
||||
go func() {
|
||||
block := &types.Block{Header: types.Header{Height: request.Height}}
|
||||
pool.AddBlock(request.PeerID, block, 123)
|
||||
t.Logf("Added block from peer %v (height: %v)", request.PeerID, request.Height)
|
||||
}()
|
||||
|
||||
peers[request.PeerID].inputChan <- inputData{t, pool, request}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@@ -264,8 +264,12 @@ FOR_LOOP:
|
||||
bcR.Logger.Info("Time to switch to consensus reactor!", "height", height)
|
||||
bcR.pool.Stop()
|
||||
|
||||
conR := bcR.Switch.Reactor("CONSENSUS").(consensusReactor)
|
||||
conR.SwitchToConsensus(state, blocksSynced)
|
||||
conR, ok := bcR.Switch.Reactor("CONSENSUS").(consensusReactor)
|
||||
if ok {
|
||||
conR.SwitchToConsensus(state, blocksSynced)
|
||||
} else {
|
||||
// should only happen during testing
|
||||
}
|
||||
|
||||
break FOR_LOOP
|
||||
}
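The hunk above replaces an unchecked type assertion on `bcR.Switch.Reactor("CONSENSUS")` with the two-value form, so a missing or differently-typed consensus reactor (as can happen in tests) no longer panics. The idiom in isolation, with hypothetical stand-in types:

```
package main

import "fmt"

type consensusReactor interface{ SwitchToConsensus() }

type realReactor struct{}

func (realReactor) SwitchToConsensus() { fmt.Println("switched to consensus") }

func main() {
	reactors := map[string]interface{}{
		"CONSENSUS": realReactor{},
	}

	// reactors["BLOCKCHAIN"].(consensusReactor) without the ok would panic
	// on the missing entry; the comma-ok form lets the caller skip instead.
	if conR, ok := reactors["BLOCKCHAIN"].(consensusReactor); ok {
		conR.SwitchToConsensus()
	} else {
		fmt.Println("no consensus reactor registered - should only happen in tests")
	}
}
```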
|
||||
@@ -314,6 +318,13 @@ FOR_LOOP:
|
||||
// still need to clean up the rest.
|
||||
bcR.Switch.StopPeerForError(peer, fmt.Errorf("BlockchainReactor validation error: %v", err))
|
||||
}
|
||||
peerID2 := bcR.pool.RedoRequest(second.Height)
|
||||
peer2 := bcR.Switch.Peers().Get(peerID2)
|
||||
if peer2 != nil && peer2 != peer {
|
||||
// NOTE: we've already removed the peer's request, but we
|
||||
// still need to clean up the rest.
|
||||
bcR.Switch.StopPeerForError(peer2, fmt.Errorf("BlockchainReactor validation error: %v", err))
|
||||
}
|
||||
continue FOR_LOOP
|
||||
} else {
|
||||
bcR.pool.PopRequest()
|
||||
|
@@ -1,72 +1,151 @@
|
||||
package blockchain
|
||||
|
||||
import (
|
||||
"net"
|
||||
"sort"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
|
||||
abci "github.com/tendermint/tendermint/abci/types"
|
||||
cfg "github.com/tendermint/tendermint/config"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
dbm "github.com/tendermint/tendermint/libs/db"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
|
||||
cfg "github.com/tendermint/tendermint/config"
|
||||
"github.com/tendermint/tendermint/p2p"
|
||||
"github.com/tendermint/tendermint/proxy"
|
||||
sm "github.com/tendermint/tendermint/state"
|
||||
"github.com/tendermint/tendermint/types"
|
||||
tmtime "github.com/tendermint/tendermint/types/time"
|
||||
)
|
||||
|
||||
func makeStateAndBlockStore(logger log.Logger) (sm.State, *BlockStore) {
|
||||
config := cfg.ResetTestRoot("blockchain_reactor_test")
|
||||
// blockDB := dbm.NewDebugDB("blockDB", dbm.NewMemDB())
|
||||
// stateDB := dbm.NewDebugDB("stateDB", dbm.NewMemDB())
|
||||
var config *cfg.Config
|
||||
|
||||
func randGenesisDoc(numValidators int, randPower bool, minPower int64) (*types.GenesisDoc, []types.PrivValidator) {
|
||||
validators := make([]types.GenesisValidator, numValidators)
|
||||
privValidators := make([]types.PrivValidator, numValidators)
|
||||
for i := 0; i < numValidators; i++ {
|
||||
val, privVal := types.RandValidator(randPower, minPower)
|
||||
validators[i] = types.GenesisValidator{
|
||||
PubKey: val.PubKey,
|
||||
Power: val.VotingPower,
|
||||
}
|
||||
privValidators[i] = privVal
|
||||
}
|
||||
sort.Sort(types.PrivValidatorsByAddress(privValidators))
|
||||
|
||||
return &types.GenesisDoc{
|
||||
GenesisTime: tmtime.Now(),
|
||||
ChainID: config.ChainID(),
|
||||
Validators: validators,
|
||||
}, privValidators
|
||||
}
|
||||
|
||||
func makeVote(header *types.Header, blockID types.BlockID, valset *types.ValidatorSet, privVal types.PrivValidator) *types.Vote {
|
||||
addr := privVal.GetAddress()
|
||||
idx, _ := valset.GetByAddress(addr)
|
||||
vote := &types.Vote{
|
||||
ValidatorAddress: addr,
|
||||
ValidatorIndex: idx,
|
||||
Height: header.Height,
|
||||
Round: 1,
|
||||
Timestamp: tmtime.Now(),
|
||||
Type: types.PrecommitType,
|
||||
BlockID: blockID,
|
||||
}
|
||||
|
||||
privVal.SignVote(header.ChainID, vote)
|
||||
|
||||
return vote
|
||||
}
|
||||
|
||||
type BlockchainReactorPair struct {
|
||||
reactor *BlockchainReactor
|
||||
app proxy.AppConns
|
||||
}
|
||||
|
||||
func newBlockchainReactor(logger log.Logger, genDoc *types.GenesisDoc, privVals []types.PrivValidator, maxBlockHeight int64) BlockchainReactorPair {
|
||||
if len(privVals) != 1 {
|
||||
panic("only support one validator")
|
||||
}
|
||||
|
||||
app := &testApp{}
|
||||
cc := proxy.NewLocalClientCreator(app)
|
||||
proxyApp := proxy.NewAppConns(cc)
|
||||
err := proxyApp.Start()
|
||||
if err != nil {
|
||||
panic(cmn.ErrorWrap(err, "error start app"))
|
||||
}
|
||||
|
||||
blockDB := dbm.NewMemDB()
|
||||
stateDB := dbm.NewMemDB()
|
||||
blockStore := NewBlockStore(blockDB)
|
||||
state, err := sm.LoadStateFromDBOrGenesisFile(stateDB, config.GenesisFile())
|
||||
|
||||
state, err := sm.LoadStateFromDBOrGenesisDoc(stateDB, genDoc)
|
||||
if err != nil {
|
||||
panic(cmn.ErrorWrap(err, "error constructing state from genesis file"))
|
||||
}
|
||||
return state, blockStore
|
||||
}
|
||||
|
||||
func newBlockchainReactor(logger log.Logger, maxBlockHeight int64) *BlockchainReactor {
|
||||
state, blockStore := makeStateAndBlockStore(logger)
|
||||
|
||||
// Make the blockchainReactor itself
|
||||
// Make the BlockchainReactor itself.
|
||||
// NOTE we have to create and commit the blocks first because
|
||||
// pool.height is determined from the store.
|
||||
fastSync := true
|
||||
var nilApp proxy.AppConnConsensus
|
||||
blockExec := sm.NewBlockExecutor(dbm.NewMemDB(), log.TestingLogger(), nilApp,
|
||||
blockExec := sm.NewBlockExecutor(dbm.NewMemDB(), log.TestingLogger(), proxyApp.Consensus(),
|
||||
sm.MockMempool{}, sm.MockEvidencePool{})
|
||||
|
||||
// let's add some blocks in
|
||||
for blockHeight := int64(1); blockHeight <= maxBlockHeight; blockHeight++ {
|
||||
lastCommit := &types.Commit{}
|
||||
if blockHeight > 1 {
|
||||
lastBlockMeta := blockStore.LoadBlockMeta(blockHeight - 1)
|
||||
lastBlock := blockStore.LoadBlock(blockHeight - 1)
|
||||
|
||||
vote := makeVote(&lastBlock.Header, lastBlockMeta.BlockID, state.Validators, privVals[0])
|
||||
lastCommit = &types.Commit{Precommits: []*types.Vote{vote}, BlockID: lastBlockMeta.BlockID}
|
||||
}
|
||||
|
||||
thisBlock := makeBlock(blockHeight, state, lastCommit)
|
||||
|
||||
thisParts := thisBlock.MakePartSet(types.BlockPartSizeBytes)
|
||||
blockID := types.BlockID{thisBlock.Hash(), thisParts.Header()}
|
||||
|
||||
state, err = blockExec.ApplyBlock(state, blockID, thisBlock)
|
||||
if err != nil {
|
||||
panic(cmn.ErrorWrap(err, "error apply block"))
|
||||
}
|
||||
|
||||
blockStore.SaveBlock(thisBlock, thisParts, lastCommit)
|
||||
}
|
||||
|
||||
bcReactor := NewBlockchainReactor(state.Copy(), blockExec, blockStore, fastSync)
|
||||
bcReactor.SetLogger(logger.With("module", "blockchain"))
|
||||
|
||||
// Next: we need to set a switch in order for peers to be added in
|
||||
bcReactor.Switch = p2p.NewSwitch(cfg.DefaultP2PConfig(), nil)
|
||||
|
||||
// Lastly: let's add some blocks in
|
||||
for blockHeight := int64(1); blockHeight <= maxBlockHeight; blockHeight++ {
|
||||
firstBlock := makeBlock(blockHeight, state)
|
||||
secondBlock := makeBlock(blockHeight+1, state)
|
||||
firstParts := firstBlock.MakePartSet(types.BlockPartSizeBytes)
|
||||
blockStore.SaveBlock(firstBlock, firstParts, secondBlock.LastCommit)
|
||||
}
|
||||
|
||||
return bcReactor
|
||||
return BlockchainReactorPair{bcReactor, proxyApp}
|
||||
}
|
||||
|
||||
func TestNoBlockResponse(t *testing.T) {
|
||||
maxBlockHeight := int64(20)
|
||||
config = cfg.ResetTestRoot("blockchain_reactor_test")
|
||||
genDoc, privVals := randGenesisDoc(1, false, 30)
|
||||
|
||||
bcr := newBlockchainReactor(log.TestingLogger(), maxBlockHeight)
|
||||
bcr.Start()
|
||||
defer bcr.Stop()
|
||||
maxBlockHeight := int64(65)
|
||||
|
||||
// Add some peers in
|
||||
peer := newbcrTestPeer(p2p.ID(cmn.RandStr(12)))
|
||||
bcr.AddPeer(peer)
|
||||
reactorPairs := make([]BlockchainReactorPair, 2)
|
||||
|
||||
chID := byte(0x01)
|
||||
reactorPairs[0] = newBlockchainReactor(log.TestingLogger(), genDoc, privVals, maxBlockHeight)
|
||||
reactorPairs[1] = newBlockchainReactor(log.TestingLogger(), genDoc, privVals, 0)
|
||||
|
||||
p2p.MakeConnectedSwitches(config.P2P, 2, func(i int, s *p2p.Switch) *p2p.Switch {
|
||||
s.AddReactor("BLOCKCHAIN", reactorPairs[i].reactor)
|
||||
return s
|
||||
|
||||
}, p2p.Connect2Switches)
|
||||
|
||||
defer func() {
|
||||
for _, r := range reactorPairs {
|
||||
r.reactor.Stop()
|
||||
r.app.Stop()
|
||||
}
|
||||
}()
|
||||
|
||||
tests := []struct {
|
||||
height int64
|
||||
@@ -78,72 +157,100 @@ func TestNoBlockResponse(t *testing.T) {
|
||||
{100, false},
|
||||
}
|
||||
|
||||
// receive a request message from peer,
|
||||
// wait for our response to be received on the peer
|
||||
for _, tt := range tests {
|
||||
reqBlockMsg := &bcBlockRequestMessage{tt.height}
|
||||
reqBlockBytes := cdc.MustMarshalBinaryBare(reqBlockMsg)
|
||||
bcr.Receive(chID, peer, reqBlockBytes)
|
||||
msg := peer.lastBlockchainMessage()
|
||||
for {
|
||||
if reactorPairs[1].reactor.pool.IsCaughtUp() {
|
||||
break
|
||||
}
|
||||
|
||||
time.Sleep(10 * time.Millisecond)
|
||||
}
|
||||
|
||||
assert.Equal(t, maxBlockHeight, reactorPairs[0].reactor.store.Height())
|
||||
|
||||
for _, tt := range tests {
|
||||
block := reactorPairs[1].reactor.store.LoadBlock(tt.height)
|
||||
if tt.existent {
|
||||
if blockMsg, ok := msg.(*bcBlockResponseMessage); !ok {
|
||||
t.Fatalf("Expected to receive a block response for height %d", tt.height)
|
||||
} else if blockMsg.Block.Height != tt.height {
|
||||
t.Fatalf("Expected response to be for height %d, got %d", tt.height, blockMsg.Block.Height)
|
||||
}
|
||||
assert.True(t, block != nil)
|
||||
} else {
|
||||
if noBlockMsg, ok := msg.(*bcNoBlockResponseMessage); !ok {
|
||||
t.Fatalf("Expected to receive a no block response for height %d", tt.height)
|
||||
} else if noBlockMsg.Height != tt.height {
|
||||
t.Fatalf("Expected response to be for height %d, got %d", tt.height, noBlockMsg.Height)
|
||||
}
|
||||
assert.True(t, block == nil)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
// NOTE: This is too hard to test without
|
||||
// an easy way to add test peer to switch
|
||||
// or without significant refactoring of the module.
|
||||
// Alternatively we could actually dial a TCP conn but
|
||||
// that seems extreme.
|
||||
func TestBadBlockStopsPeer(t *testing.T) {
|
||||
maxBlockHeight := int64(20)
|
||||
config = cfg.ResetTestRoot("blockchain_reactor_test")
|
||||
genDoc, privVals := randGenesisDoc(1, false, 30)
|
||||
|
||||
bcr := newBlockchainReactor(log.TestingLogger(), maxBlockHeight)
|
||||
bcr.Start()
|
||||
defer bcr.Stop()
|
||||
maxBlockHeight := int64(148)
|
||||
|
||||
// Add some peers in
|
||||
peer := newbcrTestPeer(p2p.ID(cmn.RandStr(12)))
|
||||
otherChain := newBlockchainReactor(log.TestingLogger(), genDoc, privVals, maxBlockHeight)
|
||||
defer func() {
|
||||
otherChain.reactor.Stop()
|
||||
otherChain.app.Stop()
|
||||
}()
|
||||
|
||||
// XXX: This doesn't add the peer to anything,
|
||||
// so it's hard to check that it's later removed
|
||||
bcr.AddPeer(peer)
|
||||
assert.True(t, bcr.Switch.Peers().Size() > 0)
|
||||
reactorPairs := make([]BlockchainReactorPair, 4)
|
||||
|
||||
// send a bad block from the peer
|
||||
// default blocks already dont have commits, so should fail
|
||||
block := bcr.store.LoadBlock(3)
|
||||
msg := &bcBlockResponseMessage{Block: block}
|
||||
peer.Send(BlockchainChannel, struct{ BlockchainMessage }{msg})
|
||||
reactorPairs[0] = newBlockchainReactor(log.TestingLogger(), genDoc, privVals, maxBlockHeight)
|
||||
reactorPairs[1] = newBlockchainReactor(log.TestingLogger(), genDoc, privVals, 0)
|
||||
reactorPairs[2] = newBlockchainReactor(log.TestingLogger(), genDoc, privVals, 0)
|
||||
reactorPairs[3] = newBlockchainReactor(log.TestingLogger(), genDoc, privVals, 0)
|
||||
|
||||
ticker := time.NewTicker(time.Millisecond * 10)
|
||||
timer := time.NewTimer(time.Second * 2)
|
||||
LOOP:
|
||||
for {
|
||||
select {
|
||||
case <-ticker.C:
|
||||
if bcr.Switch.Peers().Size() == 0 {
|
||||
break LOOP
|
||||
}
|
||||
case <-timer.C:
|
||||
t.Fatal("Timed out waiting to disconnect peer")
|
||||
switches := p2p.MakeConnectedSwitches(config.P2P, 4, func(i int, s *p2p.Switch) *p2p.Switch {
|
||||
s.AddReactor("BLOCKCHAIN", reactorPairs[i].reactor)
|
||||
return s
|
||||
|
||||
}, p2p.Connect2Switches)
|
||||
|
||||
defer func() {
|
||||
for _, r := range reactorPairs {
|
||||
r.reactor.Stop()
|
||||
r.app.Stop()
|
||||
}
|
||||
}()
|
||||
|
||||
for {
|
||||
if reactorPairs[3].reactor.pool.IsCaughtUp() {
|
||||
break
|
||||
}
|
||||
|
||||
time.Sleep(1 * time.Second)
|
||||
}
|
||||
|
||||
//at this time, reactors[0-3] is the newest
|
||||
assert.Equal(t, 3, reactorPairs[1].reactor.Switch.Peers().Size())
|
||||
|
||||
//mark reactorPairs[3] is an invalid peer
|
||||
reactorPairs[3].reactor.store = otherChain.reactor.store
|
||||
|
||||
lastReactorPair := newBlockchainReactor(log.TestingLogger(), genDoc, privVals, 0)
|
||||
reactorPairs = append(reactorPairs, lastReactorPair)
|
||||
|
||||
switches = append(switches, p2p.MakeConnectedSwitches(config.P2P, 1, func(i int, s *p2p.Switch) *p2p.Switch {
|
||||
s.AddReactor("BLOCKCHAIN", reactorPairs[len(reactorPairs)-1].reactor)
|
||||
return s
|
||||
|
||||
}, p2p.Connect2Switches)...)
|
||||
|
||||
for i := 0; i < len(reactorPairs)-1; i++ {
|
||||
p2p.Connect2Switches(switches, i, len(reactorPairs)-1)
|
||||
}
|
||||
|
||||
for {
|
||||
if lastReactorPair.reactor.pool.IsCaughtUp() || lastReactorPair.reactor.Switch.Peers().Size() == 0 {
|
||||
break
|
||||
}
|
||||
|
||||
time.Sleep(1 * time.Second)
|
||||
}
|
||||
|
||||
assert.True(t, lastReactorPair.reactor.Switch.Peers().Size() < len(reactorPairs)-1)
|
||||
}
|
||||
*/
|
||||
|
||||
//----------------------------------------------
|
||||
// utility funcs
|
||||
@@ -155,56 +262,41 @@ func makeTxs(height int64) (txs []types.Tx) {
|
||||
return txs
|
||||
}
|
||||
|
||||
func makeBlock(height int64, state sm.State) *types.Block {
|
||||
block, _ := state.MakeBlock(height, makeTxs(height), new(types.Commit), nil, state.Validators.GetProposer().Address)
|
||||
func makeBlock(height int64, state sm.State, lastCommit *types.Commit) *types.Block {
|
||||
block, _ := state.MakeBlock(height, makeTxs(height), lastCommit, nil, state.Validators.GetProposer().Address)
|
||||
return block
|
||||
}
|
||||
|
||||
// The Test peer
|
||||
type bcrTestPeer struct {
|
||||
cmn.BaseService
|
||||
id p2p.ID
|
||||
ch chan interface{}
|
||||
type testApp struct {
|
||||
abci.BaseApplication
|
||||
}
|
||||
|
||||
var _ p2p.Peer = (*bcrTestPeer)(nil)
|
||||
var _ abci.Application = (*testApp)(nil)
|
||||
|
||||
func newbcrTestPeer(id p2p.ID) *bcrTestPeer {
|
||||
bcr := &bcrTestPeer{
|
||||
id: id,
|
||||
ch: make(chan interface{}, 2),
|
||||
}
|
||||
bcr.BaseService = *cmn.NewBaseService(nil, "bcrTestPeer", bcr)
|
||||
return bcr
|
||||
func (app *testApp) Info(req abci.RequestInfo) (resInfo abci.ResponseInfo) {
|
||||
return abci.ResponseInfo{}
|
||||
}
|
||||
|
||||
func (tp *bcrTestPeer) lastBlockchainMessage() interface{} { return <-tp.ch }
|
||||
|
||||
func (tp *bcrTestPeer) TrySend(chID byte, msgBytes []byte) bool {
|
||||
var msg BlockchainMessage
|
||||
err := cdc.UnmarshalBinaryBare(msgBytes, &msg)
|
||||
if err != nil {
|
||||
panic(cmn.ErrorWrap(err, "Error while trying to parse a BlockchainMessage"))
|
||||
}
|
||||
if _, ok := msg.(*bcStatusResponseMessage); ok {
|
||||
// Discard status response messages since they skew our results
|
||||
// We only want to deal with:
|
||||
// + bcBlockResponseMessage
|
||||
// + bcNoBlockResponseMessage
|
||||
} else {
|
||||
tp.ch <- msg
|
||||
}
|
||||
return true
|
||||
func (app *testApp) BeginBlock(req abci.RequestBeginBlock) abci.ResponseBeginBlock {
|
||||
return abci.ResponseBeginBlock{}
|
||||
}
|
||||
|
||||
func (tp *bcrTestPeer) FlushStop() {}
|
||||
func (tp *bcrTestPeer) Send(chID byte, msgBytes []byte) bool { return tp.TrySend(chID, msgBytes) }
|
||||
func (tp *bcrTestPeer) NodeInfo() p2p.NodeInfo { return p2p.DefaultNodeInfo{} }
|
||||
func (tp *bcrTestPeer) Status() p2p.ConnectionStatus { return p2p.ConnectionStatus{} }
|
||||
func (tp *bcrTestPeer) ID() p2p.ID { return tp.id }
|
||||
func (tp *bcrTestPeer) IsOutbound() bool { return false }
|
||||
func (tp *bcrTestPeer) IsPersistent() bool { return true }
|
||||
func (tp *bcrTestPeer) Get(s string) interface{} { return s }
|
||||
func (tp *bcrTestPeer) Set(string, interface{}) {}
|
||||
func (tp *bcrTestPeer) RemoteIP() net.IP { return []byte{127, 0, 0, 1} }
|
||||
func (tp *bcrTestPeer) OriginalAddr() *p2p.NetAddress { return nil }
|
||||
func (app *testApp) EndBlock(req abci.RequestEndBlock) abci.ResponseEndBlock {
|
||||
return abci.ResponseEndBlock{}
|
||||
}
|
||||
|
||||
func (app *testApp) DeliverTx(tx []byte) abci.ResponseDeliverTx {
|
||||
return abci.ResponseDeliverTx{Tags: []cmn.KVPair{}}
|
||||
}
|
||||
|
||||
func (app *testApp) CheckTx(tx []byte) abci.ResponseCheckTx {
|
||||
return abci.ResponseCheckTx{}
|
||||
}
|
||||
|
||||
func (app *testApp) Commit() abci.ResponseCommit {
|
||||
return abci.ResponseCommit{}
|
||||
}
|
||||
|
||||
func (app *testApp) Query(reqQuery abci.RequestQuery) (resQuery abci.ResponseQuery) {
|
||||
return
|
||||
}
|
||||
|
@@ -9,13 +9,30 @@ import (
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
cfg "github.com/tendermint/tendermint/config"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
"github.com/tendermint/tendermint/libs/db"
|
||||
dbm "github.com/tendermint/tendermint/libs/db"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
sm "github.com/tendermint/tendermint/state"
|
||||
|
||||
"github.com/tendermint/tendermint/types"
|
||||
tmtime "github.com/tendermint/tendermint/types/time"
|
||||
)
|
||||
|
||||
func makeStateAndBlockStore(logger log.Logger) (sm.State, *BlockStore) {
|
||||
config := cfg.ResetTestRoot("blockchain_reactor_test")
|
||||
// blockDB := dbm.NewDebugDB("blockDB", dbm.NewMemDB())
|
||||
// stateDB := dbm.NewDebugDB("stateDB", dbm.NewMemDB())
|
||||
blockDB := dbm.NewMemDB()
|
||||
stateDB := dbm.NewMemDB()
|
||||
state, err := sm.LoadStateFromDBOrGenesisFile(stateDB, config.GenesisFile())
|
||||
if err != nil {
|
||||
panic(cmn.ErrorWrap(err, "error constructing state from genesis file"))
|
||||
}
|
||||
return state, NewBlockStore(blockDB)
|
||||
}
|
||||
|
||||
func TestLoadBlockStoreStateJSON(t *testing.T) {
|
||||
db := db.NewMemDB()
|
||||
|
||||
@@ -65,7 +82,7 @@ func freshBlockStore() (*BlockStore, db.DB) {
|
||||
var (
|
||||
state, _ = makeStateAndBlockStore(log.NewTMLogger(new(bytes.Buffer)))
|
||||
|
||||
block = makeBlock(1, state)
|
||||
block = makeBlock(1, state, new(types.Commit))
|
||||
partSet = block.MakePartSet(2)
|
||||
part1 = partSet.GetPart(0)
|
||||
part2 = partSet.GetPart(1)
|
||||
@@ -88,7 +105,7 @@ func TestBlockStoreSaveLoadBlock(t *testing.T) {
|
||||
}
|
||||
|
||||
// save a block
|
||||
block := makeBlock(bs.Height()+1, state)
|
||||
block := makeBlock(bs.Height()+1, state, new(types.Commit))
|
||||
validPartSet := block.MakePartSet(2)
|
||||
seenCommit := &types.Commit{Precommits: []*types.Vote{{Height: 10,
|
||||
Timestamp: tmtime.Now()}}}
|
||||
@@ -331,7 +348,7 @@ func TestLoadBlockMeta(t *testing.T) {
|
||||
func TestBlockFetchAtHeight(t *testing.T) {
|
||||
state, bs := makeStateAndBlockStore(log.NewTMLogger(new(bytes.Buffer)))
|
||||
require.Equal(t, bs.Height(), int64(0), "initially the height should be zero")
|
||||
block := makeBlock(bs.Height()+1, state)
|
||||
block := makeBlock(bs.Height()+1, state, new(types.Commit))
|
||||
|
||||
partSet := block.MakePartSet(2)
|
||||
seenCommit := &types.Commit{Precommits: []*types.Vote{{Height: 10,
|
||||
|
@@ -260,6 +260,9 @@ create_empty_blocks_interval = "{{ .Consensus.CreateEmptyBlocksInterval }}"
peer_gossip_sleep_duration = "{{ .Consensus.PeerGossipSleepDuration }}"
peer_query_maj23_sleep_duration = "{{ .Consensus.PeerQueryMaj23SleepDuration }}"

# Block time parameters. Corresponds to the minimum time increment between consecutive blocks.
blocktime_iota = "{{ .Consensus.BlockTimeIota }}"

##### transactions indexer configuration options #####
[tx_index]
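The neighbouring `_duration` settings in this template render as Go duration strings, and `blocktime_iota` presumably follows the same convention (an assumption here, not something this diff confirms). Reading such a value back is then a one-liner with `time.ParseDuration`:

```
package main

import (
	"fmt"
	"time"
)

func main() {
	// Hypothetical rendered value for blocktime_iota, e.g. one second.
	raw := "1s"

	blockTimeIota, err := time.ParseDuration(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println("minimum increment between consecutive block times:", blockTimeIota)
}
```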
@@ -425,20 +425,6 @@ func ensureNewRound(roundCh <-chan interface{}, height int64, round int) {
|
||||
}
|
||||
}
|
||||
|
||||
func ensureProposalHeartbeat(heartbeatCh <-chan interface{}) {
|
||||
select {
|
||||
case <-time.After(ensureTimeout):
|
||||
panic("Timeout expired while waiting for ProposalHeartbeat event")
|
||||
case ev := <-heartbeatCh:
|
||||
heartbeat, ok := ev.(types.EventDataProposalHeartbeat)
|
||||
if !ok {
|
||||
panic(fmt.Sprintf("expected a *types.EventDataProposalHeartbeat, "+
|
||||
"got %v. wrong subscription channel?",
|
||||
reflect.TypeOf(heartbeat)))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func ensureNewTimeout(timeoutCh <-chan interface{}, height int64, round int, timeout int64) {
|
||||
timeoutDuration := time.Duration(timeout*3) * time.Nanosecond
|
||||
ensureNewEvent(timeoutCh, height, round, timeoutDuration,
|
||||
|
@@ -72,18 +72,18 @@ func TestMempoolProgressInHigherRound(t *testing.T) {
|
||||
startTestRound(cs, height, round)
|
||||
|
||||
ensureNewRound(newRoundCh, height, round) // first round at first height
|
||||
ensureNewEventOnChannel(newBlockCh) // first block gets committed
|
||||
ensureNewEventOnChannel(newBlockCh) // first block gets committed
|
||||
|
||||
height = height + 1 // moving to the next height
|
||||
round = 0
|
||||
|
||||
ensureNewRound(newRoundCh, height, round) // first round at next height
|
||||
deliverTxsRange(cs, 0, 1) // we deliver txs, but dont set a proposal so we get the next round
|
||||
deliverTxsRange(cs, 0, 1) // we deliver txs, but dont set a proposal so we get the next round
|
||||
ensureNewTimeout(timeoutCh, height, round, cs.config.TimeoutPropose.Nanoseconds())
|
||||
|
||||
round = round + 1 // moving to the next round
|
||||
round = round + 1 // moving to the next round
|
||||
ensureNewRound(newRoundCh, height, round) // wait for the next round
|
||||
ensureNewEventOnChannel(newBlockCh) // now we can commit the block
|
||||
ensureNewEventOnChannel(newBlockCh) // now we can commit the block
|
||||
}
|
||||
|
||||
func deliverTxsRange(cs *ConsensusState, start, end int) {
|
||||
|
@@ -8,7 +8,7 @@ import (
|
||||
|
||||
"github.com/pkg/errors"
|
||||
|
||||
amino "github.com/tendermint/go-amino"
|
||||
"github.com/tendermint/go-amino"
|
||||
cstypes "github.com/tendermint/tendermint/consensus/types"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
tmevents "github.com/tendermint/tendermint/libs/events"
|
||||
@@ -264,11 +264,6 @@ func (conR *ConsensusReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte)
|
||||
BlockID: msg.BlockID,
|
||||
Votes: ourVotes,
|
||||
}))
|
||||
case *ProposalHeartbeatMessage:
|
||||
hb := msg.Heartbeat
|
||||
conR.Logger.Debug("Received proposal heartbeat message",
|
||||
"height", hb.Height, "round", hb.Round, "sequence", hb.Sequence,
|
||||
"valIdx", hb.ValidatorIndex, "valAddr", hb.ValidatorAddress)
|
||||
default:
|
||||
conR.Logger.Error(fmt.Sprintf("Unknown message type %v", reflect.TypeOf(msg)))
|
||||
}
|
||||
@@ -369,8 +364,8 @@ func (conR *ConsensusReactor) FastSync() bool {
|
||||
|
||||
//--------------------------------------
|
||||
|
||||
// subscribeToBroadcastEvents subscribes for new round steps, votes and
|
||||
// proposal heartbeats using internal pubsub defined on state to broadcast
|
||||
// subscribeToBroadcastEvents subscribes for new round steps and votes
|
||||
// using internal pubsub defined on state to broadcast
|
||||
// them to peers upon receiving.
|
||||
func (conR *ConsensusReactor) subscribeToBroadcastEvents() {
|
||||
const subscriber = "consensus-reactor"
|
||||
@@ -389,10 +384,6 @@ func (conR *ConsensusReactor) subscribeToBroadcastEvents() {
|
||||
conR.broadcastHasVoteMessage(data.(*types.Vote))
|
||||
})
|
||||
|
||||
conR.conS.evsw.AddListenerForEvent(subscriber, types.EventProposalHeartbeat,
|
||||
func(data tmevents.EventData) {
|
||||
conR.broadcastProposalHeartbeatMessage(data.(*types.Heartbeat))
|
||||
})
|
||||
}
|
||||
|
||||
func (conR *ConsensusReactor) unsubscribeFromBroadcastEvents() {
|
||||
@@ -400,13 +391,6 @@ func (conR *ConsensusReactor) unsubscribeFromBroadcastEvents() {
|
||||
conR.conS.evsw.RemoveListener(subscriber)
|
||||
}
|
||||
|
||||
func (conR *ConsensusReactor) broadcastProposalHeartbeatMessage(hb *types.Heartbeat) {
|
||||
conR.Logger.Debug("Broadcasting proposal heartbeat message",
|
||||
"height", hb.Height, "round", hb.Round, "sequence", hb.Sequence, "address", hb.ValidatorAddress)
|
||||
msg := &ProposalHeartbeatMessage{hb}
|
||||
conR.Switch.Broadcast(StateChannel, cdc.MustMarshalBinaryBare(msg))
|
||||
}
|
||||
|
||||
func (conR *ConsensusReactor) broadcastNewRoundStepMessage(rs *cstypes.RoundState) {
|
||||
nrsMsg := makeRoundStepMessage(rs)
|
||||
conR.Switch.Broadcast(StateChannel, cdc.MustMarshalBinaryBare(nrsMsg))
|
||||
@@ -1387,7 +1371,6 @@ func RegisterConsensusMessages(cdc *amino.Codec) {
|
||||
cdc.RegisterConcrete(&HasVoteMessage{}, "tendermint/HasVote", nil)
|
||||
cdc.RegisterConcrete(&VoteSetMaj23Message{}, "tendermint/VoteSetMaj23", nil)
|
||||
cdc.RegisterConcrete(&VoteSetBitsMessage{}, "tendermint/VoteSetBits", nil)
|
||||
cdc.RegisterConcrete(&ProposalHeartbeatMessage{}, "tendermint/ProposalHeartbeat", nil)
|
||||
}
|
||||
|
||||
func decodeMsg(bz []byte) (msg ConsensusMessage, err error) {
|
||||
@@ -1664,18 +1647,3 @@ func (m *VoteSetBitsMessage) String() string {
|
||||
}
|
||||
|
||||
//-------------------------------------
|
||||
|
||||
// ProposalHeartbeatMessage is sent to signal that a node is alive and waiting for transactions for a proposal.
|
||||
type ProposalHeartbeatMessage struct {
|
||||
Heartbeat *types.Heartbeat
|
||||
}
|
||||
|
||||
// ValidateBasic performs basic validation.
|
||||
func (m *ProposalHeartbeatMessage) ValidateBasic() error {
|
||||
return m.Heartbeat.ValidateBasic()
|
||||
}
|
||||
|
||||
// String returns a string representation.
|
||||
func (m *ProposalHeartbeatMessage) String() string {
|
||||
return fmt.Sprintf("[HEARTBEAT %v]", m.Heartbeat)
|
||||
}
|
||||
|
@@ -213,8 +213,8 @@ func (m *mockEvidencePool) Update(block *types.Block, state sm.State) {
|
||||
|
||||
//------------------------------------
|
||||
|
||||
// Ensure a testnet sends proposal heartbeats and makes blocks when there are txs
|
||||
func TestReactorProposalHeartbeats(t *testing.T) {
|
||||
// Ensure a testnet makes blocks when there are txs
|
||||
func TestReactorCreatesBlockWhenEmptyBlocksFalse(t *testing.T) {
|
||||
N := 4
|
||||
css := randConsensusNet(N, "consensus_reactor_test", newMockTickerFunc(true), newCounter,
|
||||
func(c *cfg.Config) {
|
||||
@@ -222,17 +222,6 @@ func TestReactorProposalHeartbeats(t *testing.T) {
|
||||
})
|
||||
reactors, eventChans, eventBuses := startConsensusNet(t, css, N)
|
||||
defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)
|
||||
heartbeatChans := make([]chan interface{}, N)
|
||||
var err error
|
||||
for i := 0; i < N; i++ {
|
||||
heartbeatChans[i] = make(chan interface{}, 1)
|
||||
err = eventBuses[i].Subscribe(context.Background(), testSubscriber, types.EventQueryProposalHeartbeat, heartbeatChans[i])
|
||||
require.NoError(t, err)
|
||||
}
|
||||
// wait till everyone sends a proposal heartbeat
|
||||
timeoutWaitGroup(t, N, func(j int) {
|
||||
<-heartbeatChans[j]
|
||||
}, css)
|
||||
|
||||
// send a tx
|
||||
if err := css[3].mempool.CheckTx([]byte{1, 2, 3}, nil); err != nil {
|
||||
|
@@ -276,7 +276,12 @@ func (h *Handshaker) ReplayBlocks(
|
||||
|
||||
// If appBlockHeight == 0 it means that we are at genesis and hence should send InitChain.
|
||||
if appBlockHeight == 0 {
|
||||
nextVals := types.TM2PB.ValidatorUpdates(state.NextValidators) // state.Validators would work too.
|
||||
validators := make([]*types.Validator, len(h.genDoc.Validators))
|
||||
for i, val := range h.genDoc.Validators {
|
||||
validators[i] = types.NewValidator(val.PubKey, val.Power)
|
||||
}
|
||||
validatorSet := types.NewValidatorSet(validators)
|
||||
nextVals := types.TM2PB.ValidatorUpdates(validatorSet)
|
||||
csParams := types.TM2PB.ConsensusParams(h.genDoc.ConsensusParams)
|
||||
req := abci.RequestInitChain{
|
||||
Time: h.genDoc.GenesisTime,
|
||||
|
@@ -17,7 +17,7 @@ import (
|
||||
|
||||
"github.com/tendermint/tendermint/abci/example/kvstore"
|
||||
abci "github.com/tendermint/tendermint/abci/types"
|
||||
crypto "github.com/tendermint/tendermint/crypto"
|
||||
"github.com/tendermint/tendermint/crypto"
|
||||
auto "github.com/tendermint/tendermint/libs/autofile"
|
||||
dbm "github.com/tendermint/tendermint/libs/db"
|
||||
"github.com/tendermint/tendermint/version"
|
||||
@@ -315,28 +315,21 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
|
||||
config := ResetConfig("proxy_test_")
|
||||
|
||||
walBody, err := WALWithNBlocks(NUM_BLOCKS)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
require.NoError(t, err)
|
||||
walFile := tempWALWithData(walBody)
|
||||
config.Consensus.SetWalFile(walFile)
|
||||
|
||||
privVal := privval.LoadFilePV(config.PrivValidatorFile())
|
||||
|
||||
wal, err := NewWAL(walFile)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
require.NoError(t, err)
|
||||
wal.SetLogger(log.TestingLogger())
|
||||
if err := wal.Start(); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
err = wal.Start()
|
||||
require.NoError(t, err)
|
||||
defer wal.Stop()
|
||||
|
||||
chain, commits, err := makeBlockchainFromWAL(wal)
|
||||
if err != nil {
|
||||
t.Fatalf(err.Error())
|
||||
}
|
||||
require.NoError(t, err)
|
||||
|
||||
stateDB, state, store := stateAndStore(config, privVal.GetPubKey(), kvstore.ProtocolVersion)
|
||||
store.chain = chain
|
||||
|
@@ -22,13 +22,6 @@ import (
|
||||
"github.com/tendermint/tendermint/types"
|
||||
)
|
||||
|
||||
//-----------------------------------------------------------------------------
|
||||
// Config
|
||||
|
||||
const (
|
||||
proposalHeartbeatIntervalSeconds = 2
|
||||
)
|
||||
|
||||
//-----------------------------------------------------------------------------
|
||||
// Errors
|
||||
|
||||
@@ -118,7 +111,7 @@ type ConsensusState struct {
|
||||
done chan struct{}
|
||||
|
||||
// synchronous pubsub between consensus state and reactor.
|
||||
// state only emits EventNewRoundStep, EventVote and EventProposalHeartbeat
|
||||
// state only emits EventNewRoundStep and EventVote
|
||||
evsw tmevents.EventSwitch
|
||||
|
||||
// for reporting metrics
|
||||
@@ -324,10 +317,11 @@ func (cs *ConsensusState) startRoutines(maxSteps int) {
|
||||
go cs.receiveRoutine(maxSteps)
|
||||
}
|
||||
|
||||
// OnStop implements cmn.Service. It stops all routines and waits for the WAL to finish.
|
||||
// OnStop implements cmn.Service.
|
||||
func (cs *ConsensusState) OnStop() {
|
||||
cs.evsw.Stop()
|
||||
cs.timeoutTicker.Stop()
|
||||
// WAL is stopped in receiveRoutine.
|
||||
}
|
||||
|
||||
// Wait waits for the the main routine to return.
|
||||
@@ -751,7 +745,7 @@ func (cs *ConsensusState) enterNewRound(height int64, round int) {
|
||||
validators := cs.Validators
|
||||
if cs.Round < round {
|
||||
validators = validators.Copy()
|
||||
validators.IncrementAccum(round - cs.Round)
|
||||
validators.IncrementProposerPriority(round - cs.Round)
|
||||
}
|
||||
|
||||
// Setup new round
|
||||
@@ -784,7 +778,6 @@ func (cs *ConsensusState) enterNewRound(height int64, round int) {
|
||||
cs.scheduleTimeout(cs.config.CreateEmptyBlocksInterval, height, round,
|
||||
cstypes.RoundStepNewRound)
|
||||
}
|
||||
go cs.proposalHeartbeat(height, round)
|
||||
} else {
|
||||
cs.enterPropose(height, round)
|
||||
}
|
||||
@@ -801,38 +794,6 @@ func (cs *ConsensusState) needProofBlock(height int64) bool {
|
||||
return !bytes.Equal(cs.state.AppHash, lastBlockMeta.Header.AppHash)
|
||||
}
|
||||
|
||||
func (cs *ConsensusState) proposalHeartbeat(height int64, round int) {
|
||||
logger := cs.Logger.With("height", height, "round", round)
|
||||
addr := cs.privValidator.GetAddress()
|
||||
|
||||
if !cs.Validators.HasAddress(addr) {
|
||||
logger.Debug("Not sending proposalHearbeat. This node is not a validator", "addr", addr, "vals", cs.Validators)
|
||||
return
|
||||
}
|
||||
counter := 0
|
||||
valIndex, _ := cs.Validators.GetByAddress(addr)
|
||||
chainID := cs.state.ChainID
|
||||
for {
|
||||
rs := cs.GetRoundState()
|
||||
// if we've already moved on, no need to send more heartbeats
|
||||
if rs.Step > cstypes.RoundStepNewRound || rs.Round > round || rs.Height > height {
|
||||
return
|
||||
}
|
||||
heartbeat := &types.Heartbeat{
|
||||
Height: rs.Height,
|
||||
Round: rs.Round,
|
||||
Sequence: counter,
|
||||
ValidatorAddress: addr,
|
||||
ValidatorIndex: valIndex,
|
||||
}
|
||||
cs.privValidator.SignHeartbeat(chainID, heartbeat)
|
||||
cs.eventBus.PublishEventProposalHeartbeat(types.EventDataProposalHeartbeat{heartbeat})
|
||||
cs.evsw.FireEvent(types.EventProposalHeartbeat, heartbeat)
|
||||
counter++
|
||||
time.Sleep(proposalHeartbeatIntervalSeconds * time.Second)
|
||||
}
|
||||
}
|
||||
|
||||
// Enter (CreateEmptyBlocks): from enterNewRound(height,round)
|
||||
// Enter (CreateEmptyBlocks, CreateEmptyBlocksInterval > 0 ): after enterNewRound(height,round), after timeout of CreateEmptyBlocksInterval
|
||||
// Enter (!CreateEmptyBlocks) : after enterNewRound(height,round), once txs are in the mempool
|
||||
|
@@ -11,8 +11,6 @@ import (
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
cstypes "github.com/tendermint/tendermint/consensus/types"
|
||||
tmevents "github.com/tendermint/tendermint/libs/events"
|
||||
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
tmpubsub "github.com/tendermint/tendermint/libs/pubsub"
|
||||
@@ -1029,33 +1027,6 @@ func TestSetValidBlockOnDelayedPrevote(t *testing.T) {
|
||||
assert.True(t, rs.ValidRound == round)
|
||||
}
|
||||
|
||||
// regression for #2518
|
||||
func TestNoHearbeatWhenNotValidator(t *testing.T) {
|
||||
cs, _ := randConsensusState(4)
|
||||
cs.Validators = types.NewValidatorSet(nil) // make sure we are not in the validator set
|
||||
|
||||
cs.evsw.AddListenerForEvent("testing", types.EventProposalHeartbeat,
|
||||
func(data tmevents.EventData) {
|
||||
t.Errorf("Should not have broadcasted heartbeat")
|
||||
})
|
||||
go cs.proposalHeartbeat(10, 1)
|
||||
|
||||
cs.Stop()
|
||||
|
||||
// if a faulty implementation sends an event, we should wait here a little bit to make sure we don't miss it by prematurely leaving the test method
|
||||
time.Sleep((proposalHeartbeatIntervalSeconds + 1) * time.Second)
|
||||
}
|
||||
|
||||
// regression for #2518
|
||||
func TestHearbeatWhenWeAreValidator(t *testing.T) {
|
||||
cs, _ := randConsensusState(4)
|
||||
heartbeatCh := subscribe(cs.eventBus, types.EventQueryProposalHeartbeat)
|
||||
|
||||
go cs.proposalHeartbeat(10, 1)
|
||||
ensureProposalHeartbeat(heartbeatCh)
|
||||
|
||||
}
|
||||
|
||||
// What we want:
|
||||
// P0 miss to lock B as Proposal Block is missing, but set valid block to B after
|
||||
// receiving delayed Block Proposal.
|
||||
|
@@ -55,3 +55,31 @@ func (prs PeerRoundState) StringIndented(indent string) string {
|
||||
indent, prs.CatchupCommit, prs.CatchupCommitRound,
|
||||
indent)
|
||||
}
|
||||
|
||||
//-----------------------------------------------------------
|
||||
// These methods are for Protobuf Compatibility
|
||||
|
||||
// Size returns the size of the amino encoding, in bytes.
|
||||
func (ps *PeerRoundState) Size() int {
|
||||
bs, _ := ps.Marshal()
|
||||
return len(bs)
|
||||
}
|
||||
|
||||
// Marshal returns the amino encoding.
|
||||
func (ps *PeerRoundState) Marshal() ([]byte, error) {
|
||||
return cdc.MarshalBinaryBare(ps)
|
||||
}
|
||||
|
||||
// MarshalTo calls Marshal and copies to the given buffer.
|
||||
func (ps *PeerRoundState) MarshalTo(data []byte) (int, error) {
|
||||
bs, err := ps.Marshal()
|
||||
if err != nil {
|
||||
return -1, err
|
||||
}
|
||||
return copy(data, bs), nil
|
||||
}
|
||||
|
||||
// Unmarshal deserializes from amino encoded form.
|
||||
func (ps *PeerRoundState) Unmarshal(bs []byte) error {
|
||||
return cdc.UnmarshalBinaryBare(bs, ps)
|
||||
}
|
||||
|
@@ -201,3 +201,31 @@ func (rs *RoundState) StringShort() string {
|
||||
return fmt.Sprintf(`RoundState{H:%v R:%v S:%v ST:%v}`,
|
||||
rs.Height, rs.Round, rs.Step, rs.StartTime)
|
||||
}
|
||||
|
||||
//-----------------------------------------------------------
|
||||
// These methods are for Protobuf Compatibility
|
||||
|
||||
// Size returns the size of the amino encoding, in bytes.
|
||||
func (rs *RoundStateSimple) Size() int {
|
||||
bs, _ := rs.Marshal()
|
||||
return len(bs)
|
||||
}
|
||||
|
||||
// Marshal returns the amino encoding.
|
||||
func (rs *RoundStateSimple) Marshal() ([]byte, error) {
|
||||
return cdc.MarshalBinaryBare(rs)
|
||||
}
|
||||
|
||||
// MarshalTo calls Marshal and copies to the given buffer.
|
||||
func (rs *RoundStateSimple) MarshalTo(data []byte) (int, error) {
|
||||
bs, err := rs.Marshal()
|
||||
if err != nil {
|
||||
return -1, err
|
||||
}
|
||||
return copy(data, bs), nil
|
||||
}
|
||||
|
||||
// Unmarshal deserializes from amino encoded form.
|
||||
func (rs *RoundStateSimple) Unmarshal(bs []byte) error {
|
||||
return cdc.UnmarshalBinaryBare(bs, rs)
|
||||
}
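The `Size`/`Marshal`/`MarshalTo`/`Unmarshal` quartet added above (and to `PeerRoundState`) is the method set Protobuf-style callers expect. The sketch below shows how such a method set is typically consumed, with a toy type and `encoding/json` standing in for the amino codec; it illustrates the contract, not the tendermint types themselves.

```
package main

import (
	"encoding/json"
	"fmt"
)

// toyState stands in for RoundStateSimple / PeerRoundState.
type toyState struct {
	Height int64 `json:"height"`
	Round  int   `json:"round"`
}

// Size reports the encoded length, so callers can pre-allocate a buffer.
func (s *toyState) Size() int {
	bs, _ := s.Marshal()
	return len(bs)
}

// Marshal returns the encoded bytes (JSON here, amino in the real code).
func (s *toyState) Marshal() ([]byte, error) {
	return json.Marshal(s)
}

// MarshalTo writes the encoding into a caller-provided buffer.
func (s *toyState) MarshalTo(data []byte) (int, error) {
	bs, err := s.Marshal()
	if err != nil {
		return -1, err
	}
	return copy(data, bs), nil
}

// Unmarshal decodes in place.
func (s *toyState) Unmarshal(bs []byte) error {
	return json.Unmarshal(bs, s)
}

func main() {
	src := &toyState{Height: 10, Round: 2}

	buf := make([]byte, src.Size()) // typical caller pattern: Size, then MarshalTo
	n, _ := src.MarshalTo(buf)

	var dst toyState
	if err := dst.Unmarshal(buf[:n]); err != nil {
		panic(err)
	}
	fmt.Println(dst.Height, dst.Round) // 10 2
}
```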
|
||||
|
@@ -7,13 +7,13 @@ import (
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"path/filepath"
|
||||
|
||||
// "sync"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/tendermint/tendermint/consensus/types"
|
||||
"github.com/tendermint/tendermint/libs/autofile"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
tmtypes "github.com/tendermint/tendermint/types"
|
||||
tmtime "github.com/tendermint/tendermint/types/time"
|
||||
|
||||
@@ -23,29 +23,27 @@ import (
|
||||
|
||||
func TestWALTruncate(t *testing.T) {
|
||||
walDir, err := ioutil.TempDir("", "wal")
|
||||
if err != nil {
|
||||
panic(fmt.Errorf("failed to create temp WAL file: %v", err))
|
||||
}
|
||||
require.NoError(t, err)
|
||||
defer os.RemoveAll(walDir)
|
||||
|
||||
walFile := filepath.Join(walDir, "wal")
|
||||
|
||||
//this magic number 4K can truncate the content when RotateFile. defaultHeadSizeLimit(10M) is hard to simulate.
|
||||
//this magic number 1 * time.Millisecond make RotateFile check frequently. defaultGroupCheckDuration(5s) is hard to simulate.
|
||||
wal, err := NewWAL(walFile, autofile.GroupHeadSizeLimit(4096), autofile.GroupCheckDuration(1*time.Millisecond))
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
wal.Start()
|
||||
wal, err := NewWAL(walFile,
|
||||
autofile.GroupHeadSizeLimit(4096),
|
||||
autofile.GroupCheckDuration(1*time.Millisecond),
|
||||
)
|
||||
require.NoError(t, err)
|
||||
wal.SetLogger(log.TestingLogger())
|
||||
err = wal.Start()
|
||||
require.NoError(t, err)
|
||||
defer wal.Stop()
|
||||
|
||||
//60 block's size nearly 70K, greater than group's headBuf size(4096 * 10), when headBuf is full, truncate content will Flush to the file.
|
||||
//at this time, RotateFile is called, truncate content exist in each file.
|
||||
err = WALGenerateNBlocks(wal.Group(), 60)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
require.NoError(t, err)
|
||||
|
||||
time.Sleep(1 * time.Millisecond) //wait groupCheckDuration, make sure RotateFile run
|
||||
|
||||
@@ -99,9 +97,8 @@ func TestWALSearchForEndHeight(t *testing.T) {
|
||||
walFile := tempWALWithData(walBody)
|
||||
|
||||
wal, err := NewWAL(walFile)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
require.NoError(t, err)
|
||||
wal.SetLogger(log.TestingLogger())
|
||||
|
||||
h := int64(3)
|
||||
gr, found, err := wal.SearchForEndHeight(h, &WALSearchOptions{})
|
||||
|
@@ -11,13 +11,10 @@ Make sure you [have Go installed](https://golang.org/doc/install).
|
||||
Next, install the `abci-cli` tool and example applications:
|
||||
|
||||
```
|
||||
go get github.com/tendermint/tendermint
|
||||
```
|
||||
|
||||
to get vendored dependencies:
|
||||
|
||||
```
|
||||
cd $GOPATH/src/github.com/tendermint/tendermint
|
||||
mkdir -p $GOPATH/src/github.com/tendermint
|
||||
cd $GOPATH/src/github.com/tendermint
|
||||
git clone https://github.com/tendermint/tendermint.git
|
||||
cd tendermint
|
||||
make get_tools
|
||||
make get_vendor_deps
|
||||
make install_abci
|
||||
|
@@ -122,7 +122,7 @@
|
||||
],
|
||||
"abciServers": [
|
||||
{
|
||||
"name": "abci",
|
||||
"name": "go-abci",
|
||||
"url": "https://github.com/tendermint/tendermint/tree/master/abci",
|
||||
"language": "Go",
|
||||
"author": "Tendermint"
|
||||
@@ -133,6 +133,12 @@
|
||||
"language": "Javascript",
|
||||
"author": "Tendermint"
|
||||
},
|
||||
{
|
||||
"name": "rust-tsp",
|
||||
"url": "https://github.com/tendermint/rust-tsp",
|
||||
"language": "Rust",
|
||||
"author": "Tendermint"
|
||||
},
|
||||
{
|
||||
"name": "cpp-tmsp",
|
||||
"url": "https://github.com/mdyring/cpp-tmsp",
|
||||
@@ -164,7 +170,7 @@
|
||||
"author": "Dave Bryson"
|
||||
},
|
||||
{
|
||||
"name": "tm-abci",
|
||||
"name": "tm-abci (fork of py-abci with async IO)",
|
||||
"url": "https://github.com/SoftblocksCo/tm-abci",
|
||||
"language": "Python",
|
||||
"author": "Softblocks"
|
||||
|
@@ -252,14 +252,12 @@ we'll run a Javascript version of the `counter`. To run it, you'll need
|
||||
to [install node](https://nodejs.org/en/download/).
|
||||
|
||||
You'll also need to fetch the relevant repository, from
|
||||
[here](https://github.com/tendermint/js-abci) then install it. As go
|
||||
devs, we keep all our code under the `$GOPATH`, so run:
|
||||
[here](https://github.com/tendermint/js-abci), then install it:
|
||||
|
||||
```
|
||||
go get github.com/tendermint/js-abci &> /dev/null
|
||||
cd $GOPATH/src/github.com/tendermint/js-abci/example
|
||||
npm install
|
||||
cd ..
|
||||
git clone https://github.com/tendermint/js-abci.git
|
||||
cd js-abci
|
||||
npm install abci
|
||||
```
|
||||
|
||||
Kill the previous `counter` and `tendermint` processes. Now run the app:
|
||||
@@ -276,13 +274,16 @@ tendermint node
|
||||
```
|
||||
|
||||
Once again, you should see blocks streaming by - but now, our
|
||||
application is written in javascript! Try sending some transactions, and
|
||||
application is written in Javascript! Try sending some transactions, and
|
||||
like before - the results should be the same:
|
||||
|
||||
```
|
||||
curl localhost:26657/broadcast_tx_commit?tx=0x00 # ok
|
||||
curl localhost:26657/broadcast_tx_commit?tx=0x05 # invalid nonce
|
||||
curl localhost:26657/broadcast_tx_commit?tx=0x01 # ok
|
||||
# ok
|
||||
curl localhost:26657/broadcast_tx_commit?tx=0x00
|
||||
# invalid nonce
|
||||
curl localhost:26657/broadcast_tx_commit?tx=0x05
|
||||
# ok
|
||||
curl localhost:26657/broadcast_tx_commit?tx=0x01
|
||||
```
|
||||
|
||||
Neat, eh?
|
||||
|
@@ -54,7 +54,7 @@ Response:
        "value": "ww0z4WaZ0Xg+YI10w43wTWbBmM3dpVza4mmSQYsd0ck="
      },
      "voting_power": "10",
      "accum": "0"
      "proposer_priority": "0"
    }
  ]
}
@@ -5,14 +5,17 @@ implementations:
|
||||
|
||||
- FilePV uses an unencrypted private key in a "priv_validator.json" file - no
|
||||
configuration required (just `tendermint init`).
|
||||
- SocketPV uses a socket to send signing requests to another process - user is
|
||||
responsible for starting that process themselves.
|
||||
- TCPVal and IPCVal use TCP and Unix sockets respectively to send signing requests
|
||||
to another process - the user is responsible for starting that process themselves.
|
||||
|
||||
The SocketPV address can be provided via flags at the command line - doing so
|
||||
will cause Tendermint to ignore any "priv_validator.json" file and to listen on
|
||||
the given address for incoming connections from an external priv_validator
|
||||
process. It will halt any operation until at least one external process
|
||||
succesfully connected.
|
||||
Both TCPVal and IPCVal addresses can be provided via flags at the command line
|
||||
or in the configuration file; TCPVal addresses must be of the form
|
||||
`tcp://<ip_address>:<port>` and IPCVal addresses `unix:///path/to/file.sock` -
|
||||
doing so will cause Tendermint to ignore any private validator files.
|
||||
|
||||
TCPVal will listen on the given address for incoming connections from an external
|
||||
private validator process. It will halt any operation until at least one external
|
||||
process has successfully connected.
|
||||
|
||||
The external priv_validator process will dial the address to connect to
|
||||
Tendermint, and then Tendermint will send requests on the ensuing connection to
|
||||
@@ -21,6 +24,9 @@ but the Tendermint process makes all requests. In a later stage we're going to
|
||||
support multiple validators for fault tolerance. To prevent double signing they
|
||||
need to be synced, which is deferred to an external solution (see #1185).
|
||||
|
||||
Conversely, IPCVal will make an outbound connection to an existing socket opened
|
||||
by the external validator process.
|
||||
|
||||
In addition, Tendermint will provide implementations that can be run in that
|
||||
external process. These include:
|
||||
|
||||
|
@@ -79,11 +79,9 @@ make install
|
||||
|
||||
Install [LevelDB](https://github.com/google/leveldb) (minimum version is 1.7).
|
||||
|
||||
Build Tendermint with C libraries: `make build_c`.
|
||||
|
||||
### Ubuntu
|
||||
|
||||
Install LevelDB with snappy:
|
||||
Install LevelDB with snappy (optional):
|
||||
|
||||
```
|
||||
sudo apt-get update
|
||||
@@ -95,9 +93,9 @@ wget https://github.com/google/leveldb/archive/v1.20.tar.gz && \
|
||||
tar -zxvf v1.20.tar.gz && \
|
||||
cd leveldb-1.20/ && \
|
||||
make && \
|
||||
cp -r out-static/lib* out-shared/lib* /usr/local/lib/ && \
|
||||
sudo cp -r out-static/lib* out-shared/lib* /usr/local/lib/ && \
|
||||
cd include/ && \
|
||||
cp -r leveldb /usr/local/include/ && \
|
||||
sudo cp -r leveldb /usr/local/include/ && \
|
||||
sudo ldconfig && \
|
||||
rm -f v1.20.tar.gz
|
||||
```
|
||||
@@ -109,8 +107,16 @@ Set database backend to cleveldb:
|
||||
db_backend = "cleveldb"
|
||||
```
|
||||
|
||||
To build Tendermint, run
|
||||
To install Tendermint, run
|
||||
|
||||
```
|
||||
CGO_LDFLAGS="-lsnappy" go build -ldflags "-X github.com/tendermint/tendermint/version.GitCommit=`git rev-parse --short=8 HEAD`" -tags "tendermint gcc" -o build/tendermint ./cmd/tendermint/
|
||||
CGO_LDFLAGS="-lsnappy" make install_c
|
||||
```
|
||||
|
||||
or run
|
||||
|
||||
```
|
||||
CGO_LDFLAGS="-lsnappy" make build_c
|
||||
```
|
||||
|
||||
to put the binary in `./build`.
|
||||
|
@@ -40,7 +40,11 @@ These files are found in `$HOME/.tendermint`:
|
||||
```
|
||||
$ ls $HOME/.tendermint
|
||||
|
||||
config.toml data genesis.json priv_validator.json
|
||||
config data
|
||||
|
||||
$ ls $HOME/.tendermint/config/
|
||||
|
||||
config.toml genesis.json node_key.json priv_validator.json
|
||||
```
|
||||
|
||||
For a single, local node, no further configuration is required.
|
||||
@@ -110,7 +114,18 @@ source ~/.profile
|
||||
|
||||
This will install `go` and other dependencies, get the Tendermint source code, then compile the `tendermint` binary.
|
||||
|
||||
Next, use the `tendermint testnet` command to create four directories of config files (found in `./mytestnet`) and copy each directory to the relevant machine in the cloud, so that each machine has `$HOME/mytestnet/node[0-3]` directory. Then from each machine, run:
|
||||
Next, use the `tendermint testnet` command to create four directories of config files (found in `./mytestnet`) and copy each directory to the relevant machine in the cloud, so that each machine has a `$HOME/mytestnet/node[0-3]` directory.
|
||||
|
||||
Before you can start the network, you'll need the peers' node IDs (IP addresses alone are not enough, since they can change). We'll refer to them as ID1, ID2, ID3, ID4.
|
||||
|
||||
```
|
||||
tendermint show_node_id --home ./mytestnet/node0
|
||||
tendermint show_node_id --home ./mytestnet/node1
|
||||
tendermint show_node_id --home ./mytestnet/node2
|
||||
tendermint show_node_id --home ./mytestnet/node3
|
||||
```
|
||||
|
||||
Finally, from each machine, run:
|
||||
|
||||
```
|
||||
tendermint node --home ./mytestnet/node0 --proxy_app=kvstore --p2p.persistent_peers="ID1@IP1:26656,ID2@IP2:26656,ID3@IP3:26656,ID4@IP4:26656"
|
||||
@@ -121,6 +136,6 @@ tendermint node --home ./mytestnet/node3 --proxy_app=kvstore --p2p.persistent_pe
|
||||
|
||||
Note that after the third node is started, blocks will start to stream in
|
||||
because >2/3 of validators (defined in the `genesis.json`) have come online.
|
||||
Seeds can also be specified in the `config.toml`. See [here](../tendermint-core/configuration.md) for more information about configuration options.
|
||||
Persistent peers can also be specified in the `config.toml`. See [here](../tendermint-core/configuration.md) for more information about configuration options.
|
||||
|
||||
Transactions can then be sent as covered in the single, local node example above.
|
||||
|
@@ -45,7 +45,9 @@ include a `Tags` field in their `Response*`. Each tag is key-value pair denoting
|
||||
something about what happened during the method's execution.
|
||||
|
||||
Tags can be used to index transactions and blocks according to what happened
|
||||
during their execution.
|
||||
during their execution. Note that the set of tags returned for a block from
|
||||
`BeginBlock` and `EndBlock` are merged. In case both methods return the same
|
||||
tag, only the value defined in `EndBlock` is used.
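For illustration, here is a minimal Go sketch of the merge rule just described; the `Tag` type and `mergeTags` helper are invented for this example and are not the actual indexer code. `BeginBlock` tags are applied first and `EndBlock` tags last, so an `EndBlock` value overwrites a `BeginBlock` value for the same key.

```go
package main

import "fmt"

// Tag is a simplified stand-in for the key-value pairs returned in
// ResponseBeginBlock/ResponseEndBlock.
type Tag struct {
	Key, Value string
}

// mergeTags combines BeginBlock and EndBlock tags; for duplicate keys the
// EndBlock value wins because it is applied last.
func mergeTags(begin, end []Tag) map[string]string {
	merged := make(map[string]string, len(begin)+len(end))
	for _, t := range begin {
		merged[t.Key] = t.Value
	}
	for _, t := range end {
		merged[t.Key] = t.Value
	}
	return merged
}

func main() {
	begin := []Tag{{"validator.bonded", "true"}, {"block.fee", "10"}}
	end := []Tag{{"block.fee", "12"}}
	fmt.Println(mergeTags(begin, end)) // map[block.fee:12 validator.bonded:true]
}
```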
|
||||
|
||||
Keys and values in tags must be UTF-8 encoded strings (e.g.
|
||||
"account.owner": "Bob", "balance": "100.0",
|
||||
|
@@ -59,22 +59,14 @@ You can simply use below table and concatenate Prefix || Length (of raw bytes) |
|
||||
| PubKeySecp256k1 | tendermint/PubKeySecp256k1 | 0xEB5AE987 | 0x21 | |
|
||||
| PrivKeyEd25519 | tendermint/PrivKeyEd25519 | 0xA3288910 | 0x40 | |
|
||||
| PrivKeySecp256k1 | tendermint/PrivKeySecp256k1 | 0xE1B0F79B | 0x20 | |
|
||||
| SignatureEd25519 | tendermint/SignatureEd25519 | 0x2031EA53 | 0x40 | |
|
||||
| SignatureSecp256k1 | tendermint/SignatureSecp256k1 | 0x7FC4A495 | variable |
|
||||
| PubKeyMultisigThreshold | tendermint/PubKeyMultisigThreshold | 0x22C1F7E2 | variable | |
|
||||
|
||||
|
|
||||
### Example
|
||||
|
||||
### Examples
|
||||
|
||||
1. For example, the 33-byte (or 0x21-byte in hex) Secp256k1 pubkey
|
||||
For example, the 33-byte (or 0x21-byte in hex) Secp256k1 pubkey
|
||||
`020BD40F225A57ED383B440CF073BC5539D0341F5767D2BF2D78406D00475A2EE9`
|
||||
would be encoded as
|
||||
`EB5AE98221020BD40F225A57ED383B440CF073BC5539D0341F5767D2BF2D78406D00475A2EE9`
|
||||
|
||||
2. For example, the variable size Secp256k1 signature (in this particular example 70 or 0x46 bytes)
|
||||
`304402201CD4B8C764D2FD8AF23ECFE6666CA8A53886D47754D951295D2D311E1FEA33BF02201E0F906BB1CF2C30EAACFFB032A7129358AFF96B9F79B06ACFFB18AC90C2ADD7`
|
||||
would be encoded as
|
||||
`16E1FEEA46304402201CD4B8C764D2FD8AF23ECFE6666CA8A53886D47754D951295D2D311E1FEA33BF02201E0F906BB1CF2C30EAACFFB032A7129358AFF96B9F79B06ACFFB18AC90C2ADD7`
|
||||
`EB5AE98721020BD40F225A57ED383B440CF073BC5539D0341F5767D2BF2D78406D00475A2EE9`
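As a quick sketch of the `Prefix || Length (of raw bytes) || raw bytes` rule using the table values above (done by hand purely for illustration; real code should use the amino library rather than this manual concatenation):

```go
package main

import (
	"encoding/hex"
	"fmt"
)

func main() {
	// tendermint/PubKeySecp256k1: prefix 0xEB5AE987, fixed length 0x21 (33 bytes).
	prefix := []byte{0xEB, 0x5A, 0xE9, 0x87}
	raw, err := hex.DecodeString(
		"020BD40F225A57ED383B440CF073BC5539D0341F5767D2BF2D78406D00475A2EE9")
	if err != nil {
		panic(err)
	}

	// prefix || length (a single byte here, since 0x21 fits in one byte) || raw bytes
	encoded := append(append(prefix, byte(len(raw))), raw...)
	fmt.Printf("%X\n", encoded) // matches the worked example above
}
```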
|
||||
|
||||
### Addresses
|
||||
|
||||
|
@@ -98,6 +98,10 @@ type Evidence struct {
|
||||
type Validator struct {
|
||||
PubKeyTypes []string
|
||||
}
|
||||
|
||||
type ValidatorParams struct {
|
||||
PubKeyTypes []string
|
||||
}
|
||||
```
|
||||
|
||||
#### BlockSize
|
||||
|
@@ -65,24 +65,24 @@ type Requester {
|
||||
mtx Mutex
|
||||
block Block
|
||||
height int64
|
||||
peerID p2p.ID
|
||||
redoChannel chan struct{}
|
||||
peerID p2p.ID
|
||||
redoChannel chan p2p.ID // a redo may be sent multiple times; the peer ID identifies which request the redo refers to
|
||||
}
|
||||
```
|
||||
|
||||
Pool is core data structure that stores last executed block (`height`), assignment of requests to peers (`requesters`), current height for each peer and number of pending requests for each peer (`peers`), maximum peer height, etc.
|
||||
Pool is a core data structure that stores last executed block (`height`), assignment of requests to peers (`requesters`), current height for each peer and number of pending requests for each peer (`peers`), maximum peer height, etc.
|
||||
|
||||
```go
|
||||
type Pool {
|
||||
mtx Mutex
|
||||
requesters map[int64]*Requester
|
||||
height int64
|
||||
peers map[p2p.ID]*Peer
|
||||
maxPeerHeight int64
|
||||
numPending int32
|
||||
store BlockStore
|
||||
requestsChannel chan<- BlockRequest
|
||||
errorsChannel chan<- peerError
|
||||
mtx Mutex
|
||||
requesters map[int64]*Requester
|
||||
height int64
|
||||
peers map[p2p.ID]*Peer
|
||||
maxPeerHeight int64
|
||||
numPending int32
|
||||
store BlockStore
|
||||
requestsChannel chan<- BlockRequest
|
||||
errorsChannel chan<- peerError
|
||||
}
|
||||
```
|
||||
|
||||
@@ -90,11 +90,11 @@ Peer data structure stores for each peer current `height` and number of pending
|
||||
|
||||
```go
|
||||
type Peer struct {
|
||||
id p2p.ID
|
||||
height int64
|
||||
numPending int32
|
||||
timeout *time.Timer
|
||||
didTimeout bool
|
||||
id p2p.ID
|
||||
height int64
|
||||
numPending int32
|
||||
timeout *time.Timer
|
||||
didTimeout bool
|
||||
}
|
||||
```
|
||||
|
||||
@@ -169,11 +169,11 @@ Requester task is responsible for fetching a single block at position `height`.
|
||||
|
||||
```go
|
||||
fetchBlock(height, pool):
|
||||
while true do
|
||||
while true do {
|
||||
peerID = nil
|
||||
block = nil
|
||||
peer = pickAvailablePeer(height)
|
||||
peerId = peer.id
|
||||
peerID = peer.id
|
||||
|
||||
enqueue BlockRequest(height, peerID) to pool.requestsChannel
|
||||
redo = false
|
||||
@@ -181,12 +181,15 @@ fetchBlock(height, pool):
|
||||
select {
|
||||
upon receiving Quit message do
|
||||
return
|
||||
upon receiving message on redoChannel do
|
||||
mtx.Lock()
|
||||
pool.numPending++
|
||||
redo = true
|
||||
mtx.UnLock()
|
||||
upon receiving redo message with id on redoChannel do
|
||||
if peerID == id {
|
||||
mtx.Lock()
|
||||
pool.numPending++
|
||||
redo = true
|
||||
mtx.UnLock()
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pickAvailablePeer(height):
|
||||
selectedPeer = nil
|
||||
@@ -244,7 +247,7 @@ createRequesters(pool):
|
||||
main(pool):
|
||||
create trySyncTicker with interval trySyncIntervalMS
|
||||
create statusUpdateTicker with interval statusUpdateIntervalSeconds
|
||||
create switchToConsensusTicker with interbal switchToConsensusIntervalSeconds
|
||||
create switchToConsensusTicker with interval switchToConsensusIntervalSeconds
|
||||
|
||||
while true do
|
||||
select {
|
||||
|
@@ -338,12 +338,11 @@ BlockID has seen +2/3 votes. This routine is based on the local RoundState (`rs`
|
||||
|
||||
## Broadcast routine
|
||||
|
||||
The Broadcast routine subscribes to an internal event bus to receive new round steps, votes messages and proposal
|
||||
heartbeat messages, and broadcasts messages to peers upon receiving those events.
|
||||
The Broadcast routine subscribes to an internal event bus to receive new round steps and votes messages, and broadcasts messages to peers upon receiving those
|
||||
events.
|
||||
It broadcasts `NewRoundStepMessage` or `CommitStepMessage` upon new round state event. Note that
|
||||
broadcasting these messages does not depend on the PeerRoundState; it is sent on the StateChannel.
|
||||
Upon receiving VoteMessage it broadcasts `HasVoteMessage` message to its peers on the StateChannel.
|
||||
`ProposalHeartbeatMessage` is sent the same way on the StateChannel.
|
||||
|
||||
## Channels
|
||||
|
||||
|
@@ -89,33 +89,6 @@ type BlockPartMessage struct {
|
||||
}
|
||||
```
|
||||
|
||||
## ProposalHeartbeatMessage
|
||||
|
||||
ProposalHeartbeatMessage is sent to signal that a node is alive and waiting for transactions
|
||||
to be able to create a next block proposal.
|
||||
|
||||
```go
|
||||
type ProposalHeartbeatMessage struct {
|
||||
Heartbeat Heartbeat
|
||||
}
|
||||
```
|
||||
|
||||
### Heartbeat
|
||||
|
||||
Heartbeat contains validator information (address and index),
|
||||
height, round and sequence number. It is signed by the private key of the validator.
|
||||
|
||||
```go
|
||||
type Heartbeat struct {
|
||||
ValidatorAddress []byte
|
||||
ValidatorIndex int
|
||||
Height int64
|
||||
Round int
|
||||
Sequence int
|
||||
Signature Signature
|
||||
}
|
||||
```
|
||||
|
||||
## NewRoundStepMessage
|
||||
|
||||
NewRoundStepMessage is sent for every step transition during the core consensus algorithm execution.
|
||||
|
@@ -203,6 +203,9 @@ create_empty_blocks_interval = "0s"
|
||||
peer_gossip_sleep_duration = "100ms"
|
||||
peer_query_maj23_sleep_duration = "2000ms"
|
||||
|
||||
# Block time parameters. Corresponds to the minimum time increment between consecutive blocks.
|
||||
blocktime_iota = "1000ms"
|
||||
|
||||
##### transactions indexer configuration options #####
|
||||
[tx_index]
|
||||
|
||||
|
@@ -519,18 +519,16 @@ developers guide](../app-dev/app-development.md) for more details.
|
||||
|
||||
### Local Network
|
||||
|
||||
To run a network locally, say on a single machine, you must change the
|
||||
`_laddr` fields in the `config.toml` (or using the flags) so that the
|
||||
listening addresses of the various sockets don't conflict. Additionally,
|
||||
you must set `addr_book_strict=false` in the `config.toml`, otherwise
|
||||
Tendermint's p2p library will deny making connections to peers with the
|
||||
same IP address.
|
||||
To run a network locally, say on a single machine, you must change the `_laddr`
|
||||
fields in the `config.toml` (or using the flags) so that the listening
|
||||
addresses of the various sockets don't conflict. Additionally, you must set
|
||||
`addr_book_strict=false` in the `config.toml`, otherwise Tendermint's p2p
|
||||
library will deny making connections to peers with the same IP address.
|
||||
|
||||
### Upgrading
|
||||
|
||||
The Tendermint development cycle currently includes a lot of breaking changes.
|
||||
Upgrading from an old version to a new version usually means throwing
|
||||
away the chain data. Try out the
|
||||
[tm-migrate](https://github.com/hxzqlh/tm-tools) tool written by
|
||||
[@hxzqlh](https://github.com/hxzqlh) if you are keen to preserve the
|
||||
state of your chain when upgrading to newer versions.
|
||||
See the
|
||||
[UPGRADING.md](https://github.com/tendermint/tendermint/blob/master/UPGRADING.md)
|
||||
guide. You may need to reset your chain between major breaking releases.
|
||||
Although, we expect Tendermint to have fewer breaking releases in the future
|
||||
(especially after 1.0 release).
|
||||
|
@@ -35,7 +35,7 @@ func initializeValidatorState(valAddr []byte, height int64) dbm.DB {
|
||||
LastBlockHeight: 0,
|
||||
LastBlockTime: tmtime.Now(),
|
||||
Validators: valSet,
|
||||
NextValidators: valSet.CopyIncrementAccum(1),
|
||||
NextValidators: valSet.CopyIncrementProposerPriority(1),
|
||||
LastHeightValidatorsChanged: 1,
|
||||
ConsensusParams: types.ConsensusParams{
|
||||
Evidence: types.EvidenceParams{
|
||||
|
@@ -163,7 +163,7 @@ func (evR EvidenceReactor) checkSendEvidenceMessage(peer p2p.Peer, ev types.Evid
|
||||
// make sure the peer is up to date
|
||||
evHeight := ev.Height()
|
||||
peerState, ok := peer.Get(types.PeerStateKey).(PeerState)
|
||||
if !ok {
|
||||
if !ok {
|
||||
// Peer does not have a state yet. We set it in the consensus reactor, but
|
||||
// when we add peer in Switch, the order we call reactors#AddPeer is
|
||||
// different every time due to us using a map. Sometimes other reactors
|
||||
|
@@ -119,4 +119,4 @@ func TestAutoFileSize(t *testing.T) {
|
||||
|
||||
// Cleanup
|
||||
_ = os.Remove(f.Name())
|
||||
}
|
||||
}
|
||||
|
@@ -51,7 +51,7 @@ func TestParseLogLevel(t *testing.T) {
|
||||
|
||||
buf.Reset()
|
||||
|
||||
logger.With("module", "wire").Debug("Kingpin")
|
||||
logger.With("module", "mempool").With("module", "wire").Debug("Kingpin")
|
||||
if have := strings.TrimSpace(buf.String()); c.expectedLogLines[0] != have {
|
||||
t.Errorf("\nwant '%s'\nhave '%s'\nlevel '%s'", c.expectedLogLines[0], have, c.lvl)
|
||||
}
|
||||
|
@@ -61,3 +61,16 @@ func ASCIITrim(s string) string {
|
||||
}
|
||||
return string(r)
|
||||
}
|
||||
|
||||
// StringSliceEqual checks if string slices a and b are equal
|
||||
func StringSliceEqual(a, b []string) bool {
|
||||
if len(a) != len(b) {
|
||||
return false
|
||||
}
|
||||
for i := 0; i < len(a); i++ {
|
||||
if a[i] != b[i] {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
@@ -3,6 +3,8 @@ package common
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
)
|
||||
|
||||
@@ -35,3 +37,22 @@ func TestASCIITrim(t *testing.T) {
|
||||
assert.Equal(t, ASCIITrim(" a "), "a")
|
||||
assert.Panics(t, func() { ASCIITrim("\xC2\xA2") })
|
||||
}
|
||||
|
||||
func TestStringSliceEqual(t *testing.T) {
|
||||
tests := []struct {
|
||||
a []string
|
||||
b []string
|
||||
want bool
|
||||
}{
|
||||
{[]string{"hello", "world"}, []string{"hello", "world"}, true},
|
||||
{[]string{"test"}, []string{"test"}, true},
|
||||
{[]string{"test1"}, []string{"test2"}, false},
|
||||
{[]string{"hello", "world."}, []string{"hello", "world!"}, false},
|
||||
{[]string{"only 1 word"}, []string{"two", "words!"}, false},
|
||||
{[]string{"two", "words!"}, []string{"only 1 word"}, false},
|
||||
}
|
||||
for i, tt := range tests {
|
||||
require.Equal(t, tt.want, StringSliceEqual(tt.a, tt.b),
|
||||
"StringSliceEqual failed on test %d", i)
|
||||
}
|
||||
}
|
||||
|
@@ -180,13 +180,13 @@ func testDBIterator(t *testing.T, backend DBBackendType) {
|
||||
verifyIterator(t, db.ReverseIterator(nil, nil), []int64{9, 8, 7, 5, 4, 3, 2, 1, 0}, "reverse iterator")
|
||||
|
||||
verifyIterator(t, db.Iterator(nil, int642Bytes(0)), []int64(nil), "forward iterator to 0")
|
||||
verifyIterator(t, db.ReverseIterator(nil, int642Bytes(10)), []int64(nil), "reverse iterator 10")
|
||||
verifyIterator(t, db.ReverseIterator(int642Bytes(10), nil), []int64(nil), "reverse iterator from 10 (ex)")
|
||||
|
||||
verifyIterator(t, db.Iterator(int642Bytes(0), nil), []int64{0, 1, 2, 3, 4, 5, 7, 8, 9}, "forward iterator from 0")
|
||||
verifyIterator(t, db.Iterator(int642Bytes(1), nil), []int64{1, 2, 3, 4, 5, 7, 8, 9}, "forward iterator from 1")
|
||||
verifyIterator(t, db.ReverseIterator(int642Bytes(10), nil), []int64{9, 8, 7, 5, 4, 3, 2, 1, 0}, "reverse iterator from 10")
|
||||
verifyIterator(t, db.ReverseIterator(int642Bytes(9), nil), []int64{9, 8, 7, 5, 4, 3, 2, 1, 0}, "reverse iterator from 9")
|
||||
verifyIterator(t, db.ReverseIterator(int642Bytes(8), nil), []int64{8, 7, 5, 4, 3, 2, 1, 0}, "reverse iterator from 8")
|
||||
verifyIterator(t, db.ReverseIterator(nil, int642Bytes(10)), []int64{9, 8, 7, 5, 4, 3, 2, 1, 0}, "reverse iterator from 10 (ex)")
|
||||
verifyIterator(t, db.ReverseIterator(nil, int642Bytes(9)), []int64{8, 7, 5, 4, 3, 2, 1, 0}, "reverse iterator from 9 (ex)")
|
||||
verifyIterator(t, db.ReverseIterator(nil, int642Bytes(8)), []int64{7, 5, 4, 3, 2, 1, 0}, "reverse iterator from 8 (ex)")
|
||||
|
||||
verifyIterator(t, db.Iterator(int642Bytes(5), int642Bytes(6)), []int64{5}, "forward iterator from 5 to 6")
|
||||
verifyIterator(t, db.Iterator(int642Bytes(5), int642Bytes(7)), []int64{5}, "forward iterator from 5 to 7")
|
||||
@@ -195,20 +195,20 @@ func testDBIterator(t *testing.T, backend DBBackendType) {
|
||||
verifyIterator(t, db.Iterator(int642Bytes(6), int642Bytes(8)), []int64{7}, "forward iterator from 6 to 8")
|
||||
verifyIterator(t, db.Iterator(int642Bytes(7), int642Bytes(8)), []int64{7}, "forward iterator from 7 to 8")
|
||||
|
||||
verifyIterator(t, db.ReverseIterator(int642Bytes(5), int642Bytes(4)), []int64{5}, "reverse iterator from 5 to 4")
|
||||
verifyIterator(t, db.ReverseIterator(int642Bytes(6), int642Bytes(4)), []int64{5}, "reverse iterator from 6 to 4")
|
||||
verifyIterator(t, db.ReverseIterator(int642Bytes(7), int642Bytes(4)), []int64{7, 5}, "reverse iterator from 7 to 4")
|
||||
verifyIterator(t, db.ReverseIterator(int642Bytes(6), int642Bytes(5)), []int64(nil), "reverse iterator from 6 to 5")
|
||||
verifyIterator(t, db.ReverseIterator(int642Bytes(7), int642Bytes(5)), []int64{7}, "reverse iterator from 7 to 5")
|
||||
verifyIterator(t, db.ReverseIterator(int642Bytes(7), int642Bytes(6)), []int64{7}, "reverse iterator from 7 to 6")
|
||||
verifyIterator(t, db.ReverseIterator(int642Bytes(4), int642Bytes(5)), []int64{4}, "reverse iterator from 5 (ex) to 4")
|
||||
verifyIterator(t, db.ReverseIterator(int642Bytes(4), int642Bytes(6)), []int64{5, 4}, "reverse iterator from 6 (ex) to 4")
|
||||
verifyIterator(t, db.ReverseIterator(int642Bytes(4), int642Bytes(7)), []int64{5, 4}, "reverse iterator from 7 (ex) to 4")
|
||||
verifyIterator(t, db.ReverseIterator(int642Bytes(5), int642Bytes(6)), []int64{5}, "reverse iterator from 6 (ex) to 5")
|
||||
verifyIterator(t, db.ReverseIterator(int642Bytes(5), int642Bytes(7)), []int64{5}, "reverse iterator from 7 (ex) to 5")
|
||||
verifyIterator(t, db.ReverseIterator(int642Bytes(6), int642Bytes(7)), []int64(nil), "reverse iterator from 7 (ex) to 6")
|
||||
|
||||
verifyIterator(t, db.Iterator(int642Bytes(0), int642Bytes(1)), []int64{0}, "forward iterator from 0 to 1")
|
||||
verifyIterator(t, db.ReverseIterator(int642Bytes(9), int642Bytes(8)), []int64{9}, "reverse iterator from 9 to 8")
|
||||
verifyIterator(t, db.ReverseIterator(int642Bytes(8), int642Bytes(9)), []int64{8}, "reverse iterator from 9 (ex) to 8")
|
||||
|
||||
verifyIterator(t, db.Iterator(int642Bytes(2), int642Bytes(4)), []int64{2, 3}, "forward iterator from 2 to 4")
|
||||
verifyIterator(t, db.Iterator(int642Bytes(4), int642Bytes(2)), []int64(nil), "forward iterator from 4 to 2")
|
||||
verifyIterator(t, db.ReverseIterator(int642Bytes(4), int642Bytes(2)), []int64{4, 3}, "reverse iterator from 4 to 2")
|
||||
verifyIterator(t, db.ReverseIterator(int642Bytes(2), int642Bytes(4)), []int64(nil), "reverse iterator from 2 to 4")
|
||||
verifyIterator(t, db.ReverseIterator(int642Bytes(2), int642Bytes(4)), []int64{3, 2}, "reverse iterator from 4 (ex) to 2")
|
||||
verifyIterator(t, db.ReverseIterator(int642Bytes(4), int642Bytes(2)), []int64(nil), "reverse iterator from 2 (ex) to 4")
|
||||
|
||||
}
|
||||
|
||||
|
@@ -205,13 +205,13 @@ type cLevelDBIterator struct {
|
||||
|
||||
func newCLevelDBIterator(source *levigo.Iterator, start, end []byte, isReverse bool) *cLevelDBIterator {
|
||||
if isReverse {
|
||||
if start == nil {
|
||||
if end == nil {
|
||||
source.SeekToLast()
|
||||
} else {
|
||||
source.Seek(start)
|
||||
source.Seek(end)
|
||||
if source.Valid() {
|
||||
soakey := source.Key() // start or after key
|
||||
if bytes.Compare(start, soakey) < 0 {
|
||||
eoakey := source.Key() // end or after key
|
||||
if bytes.Compare(end, eoakey) <= 0 {
|
||||
source.Prev()
|
||||
}
|
||||
} else {
|
||||
@@ -255,10 +255,11 @@ func (itr cLevelDBIterator) Valid() bool {
|
||||
}
|
||||
|
||||
// If key is end or past it, invalid.
|
||||
var start = itr.start
|
||||
var end = itr.end
|
||||
var key = itr.source.Key()
|
||||
if itr.isReverse {
|
||||
if end != nil && bytes.Compare(key, end) <= 0 {
|
||||
if start != nil && bytes.Compare(key, start) < 0 {
|
||||
itr.isInvalid = true
|
||||
return false
|
||||
}
|
||||
|
@@ -161,7 +161,7 @@ func (db *FSDB) MakeIterator(start, end []byte, isReversed bool) Iterator {
|
||||
|
||||
// We need a copy of all of the keys.
|
||||
// Not the best, but probably not a bottleneck depending.
|
||||
keys, err := list(db.dir, start, end, isReversed)
|
||||
keys, err := list(db.dir, start, end)
|
||||
if err != nil {
|
||||
panic(errors.Wrapf(err, "Listing keys in %s", db.dir))
|
||||
}
|
||||
@@ -229,7 +229,7 @@ func remove(path string) error {
|
||||
|
||||
// List keys in a directory, stripping of escape sequences and dir portions.
|
||||
// CONTRACT: returns os errors directly without wrapping.
|
||||
func list(dirPath string, start, end []byte, isReversed bool) ([]string, error) {
|
||||
func list(dirPath string, start, end []byte) ([]string, error) {
|
||||
dir, err := os.Open(dirPath)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
@@ -247,7 +247,7 @@ func list(dirPath string, start, end []byte, isReversed bool) ([]string, error)
|
||||
return nil, fmt.Errorf("Failed to unescape %s while listing", name)
|
||||
}
|
||||
key := unescapeKey([]byte(n))
|
||||
if IsKeyInDomain(key, start, end, isReversed) {
|
||||
if IsKeyInDomain(key, start, end) {
|
||||
keys = append(keys, string(key))
|
||||
}
|
||||
}
|
||||
|
@@ -213,13 +213,13 @@ var _ Iterator = (*goLevelDBIterator)(nil)
|
||||
|
||||
func newGoLevelDBIterator(source iterator.Iterator, start, end []byte, isReverse bool) *goLevelDBIterator {
|
||||
if isReverse {
|
||||
if start == nil {
|
||||
if end == nil {
|
||||
source.Last()
|
||||
} else {
|
||||
valid := source.Seek(start)
|
||||
valid := source.Seek(end)
|
||||
if valid {
|
||||
soakey := source.Key() // start or after key
|
||||
if bytes.Compare(start, soakey) < 0 {
|
||||
eoakey := source.Key() // end or after key
|
||||
if bytes.Compare(end, eoakey) <= 0 {
|
||||
source.Prev()
|
||||
}
|
||||
} else {
|
||||
@@ -265,11 +265,12 @@ func (itr *goLevelDBIterator) Valid() bool {
|
||||
}
|
||||
|
||||
// If key is end or past it, invalid.
|
||||
var start = itr.start
|
||||
var end = itr.end
|
||||
var key = itr.source.Key()
|
||||
|
||||
if itr.isReverse {
|
||||
if end != nil && bytes.Compare(key, end) <= 0 {
|
||||
if start != nil && bytes.Compare(key, start) < 0 {
|
||||
itr.isInvalid = true
|
||||
return false
|
||||
}
|
||||
|
@@ -237,7 +237,7 @@ func (itr *memDBIterator) assertIsValid() {
|
||||
func (db *MemDB) getSortedKeys(start, end []byte, reverse bool) []string {
|
||||
keys := []string{}
|
||||
for key := range db.db {
|
||||
inDomain := IsKeyInDomain([]byte(key), start, end, reverse)
|
||||
inDomain := IsKeyInDomain([]byte(key), start, end)
|
||||
if inDomain {
|
||||
keys = append(keys, key)
|
||||
}
|
||||
|
@@ -131,27 +131,13 @@ func (pdb *prefixDB) ReverseIterator(start, end []byte) Iterator {
|
||||
defer pdb.mtx.Unlock()
|
||||
|
||||
var pstart, pend []byte
|
||||
if start == nil {
|
||||
// This may cause the underlying iterator to start with
|
||||
// an item which doesn't start with prefix. We will skip
|
||||
// that item later in this function. See 'skipOne'.
|
||||
pstart = cpIncr(pdb.prefix)
|
||||
} else {
|
||||
pstart = append(cp(pdb.prefix), start...)
|
||||
}
|
||||
pstart = append(cp(pdb.prefix), start...)
|
||||
if end == nil {
|
||||
// This may cause the underlying iterator to end with an
|
||||
// item which doesn't start with prefix. The
|
||||
// prefixIterator will terminate iteration
|
||||
// automatically upon detecting this.
|
||||
pend = cpDecr(pdb.prefix)
|
||||
pend = cpIncr(pdb.prefix)
|
||||
} else {
|
||||
pend = append(cp(pdb.prefix), end...)
|
||||
}
|
||||
ritr := pdb.db.ReverseIterator(pstart, pend)
|
||||
if start == nil {
|
||||
skipOne(ritr, cpIncr(pdb.prefix))
|
||||
}
|
||||
return newPrefixIterator(
|
||||
pdb.prefix,
|
||||
start,
|
||||
@@ -310,7 +296,6 @@ func (itr *prefixIterator) Next() {
|
||||
}
|
||||
itr.source.Next()
|
||||
if !itr.source.Valid() || !bytes.HasPrefix(itr.source.Key(), itr.prefix) {
|
||||
itr.source.Close()
|
||||
itr.valid = false
|
||||
return
|
||||
}
|
||||
@@ -345,13 +330,3 @@ func stripPrefix(key []byte, prefix []byte) (stripped []byte) {
|
||||
}
|
||||
return key[len(prefix):]
|
||||
}
|
||||
|
||||
// If the first iterator item is skipKey, then
|
||||
// skip it.
|
||||
func skipOne(itr Iterator, skipKey []byte) {
|
||||
if itr.Valid() {
|
||||
if bytes.Equal(itr.Key(), skipKey) {
|
||||
itr.Next()
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@@ -113,8 +113,46 @@ func TestPrefixDBReverseIterator2(t *testing.T) {
|
||||
db := mockDBWithStuff()
|
||||
pdb := NewPrefixDB(db, bz("key"))
|
||||
|
||||
itr := pdb.ReverseIterator(bz(""), nil)
|
||||
checkDomain(t, itr, bz(""), nil)
|
||||
checkItem(t, itr, bz("3"), bz("value3"))
|
||||
checkNext(t, itr, true)
|
||||
checkItem(t, itr, bz("2"), bz("value2"))
|
||||
checkNext(t, itr, true)
|
||||
checkItem(t, itr, bz("1"), bz("value1"))
|
||||
checkNext(t, itr, true)
|
||||
checkItem(t, itr, bz(""), bz("value"))
|
||||
checkNext(t, itr, false)
|
||||
checkInvalid(t, itr)
|
||||
itr.Close()
|
||||
}
|
||||
|
||||
func TestPrefixDBReverseIterator3(t *testing.T) {
|
||||
db := mockDBWithStuff()
|
||||
pdb := NewPrefixDB(db, bz("key"))
|
||||
|
||||
itr := pdb.ReverseIterator(nil, bz(""))
|
||||
checkDomain(t, itr, nil, bz(""))
|
||||
checkInvalid(t, itr)
|
||||
itr.Close()
|
||||
}
|
||||
|
||||
func TestPrefixDBReverseIterator4(t *testing.T) {
|
||||
db := mockDBWithStuff()
|
||||
pdb := NewPrefixDB(db, bz("key"))
|
||||
|
||||
itr := pdb.ReverseIterator(bz(""), bz(""))
|
||||
checkDomain(t, itr, bz(""), bz(""))
|
||||
checkInvalid(t, itr)
|
||||
itr.Close()
|
||||
}
|
||||
|
||||
func TestPrefixDBReverseIterator5(t *testing.T) {
|
||||
db := mockDBWithStuff()
|
||||
pdb := NewPrefixDB(db, bz("key"))
|
||||
|
||||
itr := pdb.ReverseIterator(bz("1"), nil)
|
||||
checkDomain(t, itr, bz("1"), nil)
|
||||
checkItem(t, itr, bz("3"), bz("value3"))
|
||||
checkNext(t, itr, true)
|
||||
checkItem(t, itr, bz("2"), bz("value2"))
|
||||
@@ -125,23 +163,30 @@ func TestPrefixDBReverseIterator2(t *testing.T) {
|
||||
itr.Close()
|
||||
}
|
||||
|
||||
func TestPrefixDBReverseIterator3(t *testing.T) {
|
||||
func TestPrefixDBReverseIterator6(t *testing.T) {
|
||||
db := mockDBWithStuff()
|
||||
pdb := NewPrefixDB(db, bz("key"))
|
||||
|
||||
itr := pdb.ReverseIterator(bz(""), nil)
|
||||
checkDomain(t, itr, bz(""), nil)
|
||||
checkItem(t, itr, bz(""), bz("value"))
|
||||
itr := pdb.ReverseIterator(bz("2"), nil)
|
||||
checkDomain(t, itr, bz("2"), nil)
|
||||
checkItem(t, itr, bz("3"), bz("value3"))
|
||||
checkNext(t, itr, true)
|
||||
checkItem(t, itr, bz("2"), bz("value2"))
|
||||
checkNext(t, itr, false)
|
||||
checkInvalid(t, itr)
|
||||
itr.Close()
|
||||
}
|
||||
|
||||
func TestPrefixDBReverseIterator4(t *testing.T) {
|
||||
func TestPrefixDBReverseIterator7(t *testing.T) {
|
||||
db := mockDBWithStuff()
|
||||
pdb := NewPrefixDB(db, bz("key"))
|
||||
|
||||
itr := pdb.ReverseIterator(bz(""), bz(""))
|
||||
itr := pdb.ReverseIterator(nil, bz("2"))
|
||||
checkDomain(t, itr, nil, bz("2"))
|
||||
checkItem(t, itr, bz("1"), bz("value1"))
|
||||
checkNext(t, itr, true)
|
||||
checkItem(t, itr, bz(""), bz("value"))
|
||||
checkNext(t, itr, false)
|
||||
checkInvalid(t, itr)
|
||||
itr.Close()
|
||||
}
|
||||
|
@@ -34,9 +34,9 @@ type DB interface {
|
||||
Iterator(start, end []byte) Iterator
|
||||
|
||||
// Iterate over a domain of keys in descending order. End is exclusive.
|
||||
// Start must be greater than end, or the Iterator is invalid.
|
||||
// If start is nil, iterates from the last/greatest item (inclusive).
|
||||
// If end is nil, iterates up to the first/least item (inclusive).
|
||||
// Start must be less than end, or the Iterator is invalid.
|
||||
// If start is nil, iterates up to the first/least item (inclusive).
|
||||
// If end is nil, iterates from the last/greatest item (inclusive).
|
||||
// CONTRACT: No writes may happen within a domain while an iterator exists over it.
|
||||
// CONTRACT: start, end readonly []byte
|
||||
ReverseIterator(start, end []byte) Iterator
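A minimal usage sketch of the new reverse-iteration semantics (descending order over the half-open range `[start, end)`). It is not part of this change set: the in-memory backend and the `github.com/tendermint/tendermint/libs/db` import path are assumed here, and the keys are arbitrary.

```go
package main

import (
	"fmt"

	dbm "github.com/tendermint/tendermint/libs/db"
)

func main() {
	db := dbm.NewMemDB()
	for _, k := range []string{"a", "b", "c", "d"} {
		db.Set([]byte(k), []byte("val-"+k))
	}

	// start="b" (inclusive), end="d" (exclusive), descending: prints "c", then "b".
	itr := db.ReverseIterator([]byte("b"), []byte("d"))
	defer itr.Close()
	for ; itr.Valid(); itr.Next() {
		fmt.Printf("%s -> %s\n", itr.Key(), itr.Value())
	}
}
```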
|
||||
|
@@ -33,46 +33,13 @@ func cpIncr(bz []byte) (ret []byte) {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Returns a slice of the same length (big endian)
|
||||
// except decremented by one.
|
||||
// Returns nil on underflow (e.g. if bz bytes are all 0x00)
|
||||
// CONTRACT: len(bz) > 0
|
||||
func cpDecr(bz []byte) (ret []byte) {
|
||||
if len(bz) == 0 {
|
||||
panic("cpDecr expects non-zero bz length")
|
||||
}
|
||||
ret = cp(bz)
|
||||
for i := len(bz) - 1; i >= 0; i-- {
|
||||
if ret[i] > byte(0x00) {
|
||||
ret[i]--
|
||||
return
|
||||
}
|
||||
ret[i] = byte(0xFF)
|
||||
if i == 0 {
|
||||
// Underflow
|
||||
return nil
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// See DB interface documentation for more information.
|
||||
func IsKeyInDomain(key, start, end []byte, isReverse bool) bool {
|
||||
if !isReverse {
|
||||
if bytes.Compare(key, start) < 0 {
|
||||
return false
|
||||
}
|
||||
if end != nil && bytes.Compare(end, key) <= 0 {
|
||||
return false
|
||||
}
|
||||
return true
|
||||
} else {
|
||||
if start != nil && bytes.Compare(start, key) < 0 {
|
||||
return false
|
||||
}
|
||||
if end != nil && bytes.Compare(key, end) <= 0 {
|
||||
return false
|
||||
}
|
||||
return true
|
||||
func IsKeyInDomain(key, start, end []byte) bool {
|
||||
if bytes.Compare(key, start) < 0 {
|
||||
return false
|
||||
}
|
||||
if end != nil && bytes.Compare(end, key) <= 0 {
|
||||
return false
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
@@ -11,9 +11,10 @@ const (
|
||||
)
|
||||
|
||||
type filter struct {
|
||||
next Logger
|
||||
allowed level // XOR'd levels for default case
|
||||
allowedKeyvals map[keyval]level // When key-value match, use this level
|
||||
next Logger
|
||||
allowed level // XOR'd levels for default case
|
||||
initiallyAllowed level // XOR'd levels for initial case
|
||||
allowedKeyvals map[keyval]level // When key-value match, use this level
|
||||
}
|
||||
|
||||
type keyval struct {
|
||||
@@ -33,6 +34,7 @@ func NewFilter(next Logger, options ...Option) Logger {
|
||||
for _, option := range options {
|
||||
option(l)
|
||||
}
|
||||
l.initiallyAllowed = l.allowed
|
||||
return l
|
||||
}
|
||||
|
||||
@@ -76,14 +78,45 @@ func (l *filter) Error(msg string, keyvals ...interface{}) {
|
||||
// logger = log.NewFilter(logger, log.AllowError(), log.AllowInfoWith("module", "crypto"), log.AllowNoneWith("user", "Sam"))
|
||||
// logger.With("user", "Sam").With("module", "crypto").Info("Hello") # produces "I... Hello module=crypto user=Sam"
|
||||
func (l *filter) With(keyvals ...interface{}) Logger {
|
||||
keyInAllowedKeyvals := false
|
||||
|
||||
for i := len(keyvals) - 2; i >= 0; i -= 2 {
|
||||
for kv, allowed := range l.allowedKeyvals {
|
||||
if keyvals[i] == kv.key && keyvals[i+1] == kv.value {
|
||||
return &filter{next: l.next.With(keyvals...), allowed: allowed, allowedKeyvals: l.allowedKeyvals}
|
||||
if keyvals[i] == kv.key {
|
||||
keyInAllowedKeyvals = true
|
||||
// Example:
|
||||
// logger = log.NewFilter(logger, log.AllowError(), log.AllowInfoWith("module", "crypto"))
|
||||
// logger.With("module", "crypto")
|
||||
if keyvals[i+1] == kv.value {
|
||||
return &filter{
|
||||
next: l.next.With(keyvals...),
|
||||
allowed: allowed, // set the desired level
|
||||
allowedKeyvals: l.allowedKeyvals,
|
||||
initiallyAllowed: l.initiallyAllowed,
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
return &filter{next: l.next.With(keyvals...), allowed: l.allowed, allowedKeyvals: l.allowedKeyvals}
|
||||
|
||||
// Example:
|
||||
// logger = log.NewFilter(logger, log.AllowError(), log.AllowInfoWith("module", "crypto"))
|
||||
// logger.With("module", "main")
|
||||
if keyInAllowedKeyvals {
|
||||
return &filter{
|
||||
next: l.next.With(keyvals...),
|
||||
allowed: l.initiallyAllowed, // return back to initially allowed
|
||||
allowedKeyvals: l.allowedKeyvals,
|
||||
initiallyAllowed: l.initiallyAllowed,
|
||||
}
|
||||
}
|
||||
|
||||
return &filter{
|
||||
next: l.next.With(keyvals...),
|
||||
allowed: l.allowed, // simply continue with the current level
|
||||
allowedKeyvals: l.allowedKeyvals,
|
||||
initiallyAllowed: l.initiallyAllowed,
|
||||
}
|
||||
}
|
||||
|
||||
//--------------------------------------------------------------------------------
|
||||
|
@@ -105,8 +105,8 @@ func (dbp *DBProvider) LatestFullCommit(chainID string, minHeight, maxHeight int
|
||||
}
|
||||
|
||||
itr := dbp.db.ReverseIterator(
|
||||
signedHeaderKey(chainID, maxHeight),
|
||||
signedHeaderKey(chainID, minHeight-1),
|
||||
signedHeaderKey(chainID, minHeight),
|
||||
append(signedHeaderKey(chainID, maxHeight), byte(0x00)),
|
||||
)
|
||||
defer itr.Close()
|
||||
|
||||
@@ -190,8 +190,8 @@ func (dbp *DBProvider) deleteAfterN(chainID string, after int) error {
|
||||
dbp.logger.Info("DBProvider.deleteAfterN()...", "chainID", chainID, "after", after)
|
||||
|
||||
itr := dbp.db.ReverseIterator(
|
||||
signedHeaderKey(chainID, 1<<63-1),
|
||||
signedHeaderKey(chainID, 0),
|
||||
signedHeaderKey(chainID, 1),
|
||||
append(signedHeaderKey(chainID, 1<<63-1), byte(0x00)),
|
||||
)
|
||||
defer itr.Close()
|
||||
|
||||
|
@@ -121,7 +121,7 @@ If we cannot update directly from H -> H' because there was too much change to
|
||||
the validator set, then we can look for some Hm (H < Hm < H') with a validator
|
||||
set Vm. Then we try to update H -> Hm and then Hm -> H' in two steps. If one
|
||||
of these steps doesn't work, then we continue bisecting, until we eventually
|
||||
have to externally validate the valdiator set changes at every block.
|
||||
have to externally validate the validator set changes at every block.
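A rough, self-contained sketch of that bisection follows; the `tryUpdate` and `externallyValidate` callbacks are hypothetical stand-ins for the real lite-client verification calls, and the gap-of-two rule in `main` is only a toy failure condition.

```go
package main

import "fmt"

// updateBisect verifies height hPrime starting from trusted height h. If a
// direct update fails because the validator set changed too much, it recurses
// on the two halves H -> Hm and Hm -> H'.
func updateBisect(h, hPrime int64,
	tryUpdate func(from, to int64) error,
	externallyValidate func(to int64) error) error {

	if err := tryUpdate(h, hPrime); err == nil {
		return nil // direct update H -> H' worked
	}
	if hPrime == h+1 {
		// Nothing left to bisect: validate this validator-set change externally.
		return externallyValidate(hPrime)
	}
	hm := h + (hPrime-h)/2 // some intermediate height Hm with validator set Vm
	if err := updateBisect(h, hm, tryUpdate, externallyValidate); err != nil {
		return err
	}
	return updateBisect(hm, hPrime, tryUpdate, externallyValidate)
}

func main() {
	// Toy rule: direct updates only succeed across a gap of at most two heights.
	try := func(from, to int64) error {
		if to-from <= 2 {
			return nil
		}
		return fmt.Errorf("validator set changed too much between %d and %d", from, to)
	}
	validate := func(to int64) error { return nil }
	fmt.Println(updateBisect(1, 10, try, validate)) // <nil>
}
```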
|
||||
|
||||
Since we never trust any server in this protocol, only the signatures
|
||||
themselves, it doesn't matter if the seed comes from a (possibly malicious)
|
||||
|
@@ -131,7 +131,6 @@ type Mempool struct {
|
||||
proxyMtx sync.Mutex
|
||||
proxyAppConn proxy.AppConnMempool
|
||||
txs *clist.CList // concurrent linked-list of good txs
|
||||
counter int64 // simple incrementing counter
|
||||
height int64 // the last block Update()'d to
|
||||
rechecking int32 // for re-checking filtered txs on Update()
|
||||
recheckCursor *clist.CElement // next expected response
|
||||
@@ -167,7 +166,6 @@ func NewMempool(
|
||||
config: config,
|
||||
proxyAppConn: proxyAppConn,
|
||||
txs: clist.New(),
|
||||
counter: 0,
|
||||
height: height,
|
||||
rechecking: 0,
|
||||
recheckCursor: nil,
|
||||
@@ -365,9 +363,7 @@ func (mem *Mempool) resCbNormal(req *abci.Request, res *abci.Response) {
|
||||
postCheckErr = mem.postCheck(tx, r.CheckTx)
|
||||
}
|
||||
if (r.CheckTx.Code == abci.CodeTypeOK) && postCheckErr == nil {
|
||||
mem.counter++
|
||||
memTx := &mempoolTx{
|
||||
counter: mem.counter,
|
||||
height: mem.height,
|
||||
gasWanted: r.CheckTx.GasWanted,
|
||||
tx: tx,
|
||||
@@ -378,7 +374,6 @@ func (mem *Mempool) resCbNormal(req *abci.Request, res *abci.Response) {
|
||||
"res", r,
|
||||
"height", memTx.height,
|
||||
"total", mem.Size(),
|
||||
"counter", memTx.counter,
|
||||
)
|
||||
mem.metrics.TxSizeBytes.Observe(float64(len(tx)))
|
||||
mem.notifyTxsAvailable()
|
||||
@@ -534,12 +529,6 @@ func (mem *Mempool) Update(
|
||||
preCheck PreCheckFunc,
|
||||
postCheck PostCheckFunc,
|
||||
) error {
|
||||
// First, create a lookup map of txns in new txs.
|
||||
txsMap := make(map[string]struct{}, len(txs))
|
||||
for _, tx := range txs {
|
||||
txsMap[string(tx)] = struct{}{}
|
||||
}
|
||||
|
||||
// Set height
|
||||
mem.height = height
|
||||
mem.notifiedTxsAvailable = false
|
||||
@@ -551,12 +540,18 @@ func (mem *Mempool) Update(
|
||||
mem.postCheck = postCheck
|
||||
}
|
||||
|
||||
// Remove transactions that are already in txs.
|
||||
goodTxs := mem.filterTxs(txsMap)
|
||||
// Add committed transactions to cache (if missing).
|
||||
for _, tx := range txs {
|
||||
_ = mem.cache.Push(tx)
|
||||
}
|
||||
|
||||
// Remove committed transactions.
|
||||
txsLeft := mem.removeTxs(txs)
|
||||
|
||||
// Recheck mempool txs if any txs were committed in the block
|
||||
if mem.config.Recheck && len(goodTxs) > 0 {
|
||||
mem.logger.Info("Recheck txs", "numtxs", len(goodTxs), "height", height)
|
||||
mem.recheckTxs(goodTxs)
|
||||
if mem.config.Recheck && len(txsLeft) > 0 {
|
||||
mem.logger.Info("Recheck txs", "numtxs", len(txsLeft), "height", height)
|
||||
mem.recheckTxs(txsLeft)
|
||||
// At this point, mem.txs are being rechecked.
|
||||
// mem.recheckCursor re-scans mem.txs and possibly removes some txs.
|
||||
// Before mem.Reap(), we should wait for mem.recheckCursor to be nil.
|
||||
@@ -568,12 +563,18 @@ func (mem *Mempool) Update(
|
||||
return nil
|
||||
}
|
||||
|
||||
func (mem *Mempool) filterTxs(blockTxsMap map[string]struct{}) []types.Tx {
|
||||
goodTxs := make([]types.Tx, 0, mem.txs.Len())
|
||||
func (mem *Mempool) removeTxs(txs types.Txs) []types.Tx {
|
||||
// Build a map for faster lookups.
|
||||
txsMap := make(map[string]struct{}, len(txs))
|
||||
for _, tx := range txs {
|
||||
txsMap[string(tx)] = struct{}{}
|
||||
}
|
||||
|
||||
txsLeft := make([]types.Tx, 0, mem.txs.Len())
|
||||
for e := mem.txs.Front(); e != nil; e = e.Next() {
|
||||
memTx := e.Value.(*mempoolTx)
|
||||
// Remove the tx if it's alredy in a block.
|
||||
if _, ok := blockTxsMap[string(memTx.tx)]; ok {
|
||||
// Remove the tx if it's already in a block.
|
||||
if _, ok := txsMap[string(memTx.tx)]; ok {
|
||||
// remove from clist
|
||||
mem.txs.Remove(e)
|
||||
e.DetachPrev()
|
||||
@@ -581,15 +582,14 @@ func (mem *Mempool) filterTxs(blockTxsMap map[string]struct{}) []types.Tx {
|
||||
// NOTE: we don't remove committed txs from the cache.
|
||||
continue
|
||||
}
|
||||
// Good tx!
|
||||
goodTxs = append(goodTxs, memTx.tx)
|
||||
txsLeft = append(txsLeft, memTx.tx)
|
||||
}
|
||||
return goodTxs
|
||||
return txsLeft
|
||||
}
|
||||
|
||||
// NOTE: pass in goodTxs because mem.txs can mutate concurrently.
|
||||
func (mem *Mempool) recheckTxs(goodTxs []types.Tx) {
|
||||
if len(goodTxs) == 0 {
|
||||
// NOTE: pass in txs because mem.txs can mutate concurrently.
|
||||
func (mem *Mempool) recheckTxs(txs []types.Tx) {
|
||||
if len(txs) == 0 {
|
||||
return
|
||||
}
|
||||
atomic.StoreInt32(&mem.rechecking, 1)
|
||||
@@ -598,7 +598,7 @@ func (mem *Mempool) recheckTxs(goodTxs []types.Tx) {
|
||||
|
||||
// Push txs to proxyAppConn
|
||||
// NOTE: resCb() may be called concurrently.
|
||||
for _, tx := range goodTxs {
|
||||
for _, tx := range txs {
|
||||
mem.proxyAppConn.CheckTxAsync(tx)
|
||||
}
|
||||
mem.proxyAppConn.FlushAsync()
|
||||
@@ -608,7 +608,6 @@ func (mem *Mempool) recheckTxs(goodTxs []types.Tx) {
|
||||
|
||||
// mempoolTx is a transaction that successfully ran
|
||||
type mempoolTx struct {
|
||||
counter int64 // a simple incrementing counter
|
||||
height int64 // height that this tx had been validated in
|
||||
gasWanted int64 // amount of gas this tx states it will require
|
||||
tx types.Tx //
|
||||
|
@@ -163,6 +163,17 @@ func TestMempoolFilters(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func TestMempoolUpdateAddsTxsToCache(t *testing.T) {
|
||||
app := kvstore.NewKVStoreApplication()
|
||||
cc := proxy.NewLocalClientCreator(app)
|
||||
mempool := newMempoolWithApp(cc)
|
||||
mempool.Update(1, []types.Tx{[]byte{0x01}}, nil, nil)
|
||||
err := mempool.CheckTx([]byte{0x01}, nil)
|
||||
if assert.Error(t, err) {
|
||||
assert.Equal(t, ErrTxInCache, err)
|
||||
}
|
||||
}
|
||||
|
||||
func TestTxsAvailable(t *testing.T) {
|
||||
app := kvstore.NewKVStoreApplication()
|
||||
cc := proxy.NewLocalClientCreator(app)
|
||||
|
63
node/node.go
63
node/node.go
@@ -3,7 +3,6 @@ package node
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"errors"
|
||||
"fmt"
|
||||
"net"
|
||||
"net/http"
|
||||
@@ -11,11 +10,12 @@ import (
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/client_golang/prometheus/promhttp"
|
||||
"github.com/rs/cors"
|
||||
|
||||
"github.com/tendermint/go-amino"
|
||||
amino "github.com/tendermint/go-amino"
|
||||
abci "github.com/tendermint/tendermint/abci/types"
|
||||
bc "github.com/tendermint/tendermint/blockchain"
|
||||
cfg "github.com/tendermint/tendermint/config"
|
||||
@@ -220,25 +220,14 @@ func NewNode(config *cfg.Config,
|
||||
)
|
||||
}
|
||||
|
||||
// If an address is provided, listen on the socket for a
|
||||
// connection from an external signing process.
|
||||
if config.PrivValidatorListenAddr != "" {
|
||||
var (
|
||||
// TODO: persist this key so external signer
|
||||
// can actually authenticate us
|
||||
privKey = ed25519.GenPrivKey()
|
||||
pvsc = privval.NewTCPVal(
|
||||
logger.With("module", "privval"),
|
||||
config.PrivValidatorListenAddr,
|
||||
privKey,
|
||||
)
|
||||
)
|
||||
|
||||
if err := pvsc.Start(); err != nil {
|
||||
return nil, fmt.Errorf("Error starting private validator client: %v", err)
|
||||
// If an address is provided, listen on the socket for a connection from an
|
||||
// external signing process.
|
||||
// FIXME: we should start services inside OnStart
|
||||
privValidator, err = createAndStartPrivValidatorSocketClient(config.PrivValidatorListenAddr, logger)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "Error with private validator socket client")
|
||||
}
|
||||
|
||||
privValidator = pvsc
|
||||
}
|
||||
|
||||
// Decide whether to fast-sync or not
|
||||
@@ -600,10 +589,8 @@ func (n *Node) OnStop() {
|
||||
}
|
||||
}
|
||||
|
||||
if pvsc, ok := n.privValidator.(*privval.TCPVal); ok {
|
||||
if err := pvsc.Stop(); err != nil {
|
||||
n.Logger.Error("Error stopping priv validator socket client", "err", err)
|
||||
}
|
||||
if pvsc, ok := n.privValidator.(cmn.Service); ok {
|
||||
pvsc.Stop()
|
||||
}
|
||||
|
||||
if n.prometheusSrv != nil {
|
||||
@@ -857,6 +844,36 @@ func saveGenesisDoc(db dbm.DB, genDoc *types.GenesisDoc) {
|
||||
db.SetSync(genesisDocKey, bytes)
|
||||
}
|
||||
|
||||
func createAndStartPrivValidatorSocketClient(
|
||||
listenAddr string,
|
||||
logger log.Logger,
|
||||
) (types.PrivValidator, error) {
|
||||
var pvsc types.PrivValidator
|
||||
|
||||
protocol, address := cmn.ProtocolAndAddress(listenAddr)
|
||||
switch protocol {
|
||||
case "unix":
|
||||
pvsc = privval.NewIPCVal(logger.With("module", "privval"), address)
|
||||
case "tcp":
|
||||
// TODO: persist this key so external signer
|
||||
// can actually authenticate us
|
||||
pvsc = privval.NewTCPVal(logger.With("module", "privval"), listenAddr, ed25519.GenPrivKey())
|
||||
default:
|
||||
return nil, fmt.Errorf(
|
||||
"Wrong listen address: expected either 'tcp' or 'unix' protocols, got %s",
|
||||
protocol,
|
||||
)
|
||||
}
|
||||
|
||||
if pvsc, ok := pvsc.(cmn.Service); ok {
|
||||
if err := pvsc.Start(); err != nil {
|
||||
return nil, errors.Wrap(err, "failed to start")
|
||||
}
|
||||
}
|
||||
|
||||
return pvsc, nil
|
||||
}
|
||||
|
||||
// splitAndTrimEmpty slices s into all subslices separated by sep and returns a
|
||||
// slice of the string s with all leading and trailing Unicode code points
|
||||
// contained in cutset removed. If sep is empty, SplitAndTrim splits after each
|
||||
|
@@ -3,23 +3,26 @@ package node
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"net"
|
||||
"os"
|
||||
"syscall"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
"github.com/tendermint/tendermint/abci/example/kvstore"
|
||||
cfg "github.com/tendermint/tendermint/config"
|
||||
"github.com/tendermint/tendermint/crypto/ed25519"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
"github.com/tendermint/tendermint/p2p"
|
||||
"github.com/tendermint/tendermint/privval"
|
||||
sm "github.com/tendermint/tendermint/state"
|
||||
"github.com/tendermint/tendermint/version"
|
||||
|
||||
cfg "github.com/tendermint/tendermint/config"
|
||||
"github.com/tendermint/tendermint/types"
|
||||
|
||||
tmtime "github.com/tendermint/tendermint/types/time"
|
||||
"github.com/tendermint/tendermint/version"
|
||||
)
|
||||
|
||||
func TestNodeStartStop(t *testing.T) {
|
||||
@@ -27,17 +30,16 @@ func TestNodeStartStop(t *testing.T) {
|
||||
|
||||
// create & start node
|
||||
n, err := DefaultNewNode(config, log.TestingLogger())
|
||||
assert.NoError(t, err, "expected no err on DefaultNewNode")
|
||||
err1 := n.Start()
|
||||
if err1 != nil {
|
||||
t.Error(err1)
|
||||
}
|
||||
require.NoError(t, err)
|
||||
err = n.Start()
|
||||
require.NoError(t, err)
|
||||
|
||||
t.Logf("Started node %v", n.sw.NodeInfo())
|
||||
|
||||
// wait for the node to produce a block
|
||||
blockCh := make(chan interface{})
|
||||
err = n.EventBus().Subscribe(context.Background(), "node_test", types.EventQueryNewBlock, blockCh)
|
||||
assert.NoError(t, err)
|
||||
require.NoError(t, err)
|
||||
select {
|
||||
case <-blockCh:
|
||||
case <-time.After(10 * time.Second):
|
||||
@@ -89,7 +91,7 @@ func TestNodeDelayedStop(t *testing.T) {
|
||||
// create & start node
|
||||
n, err := DefaultNewNode(config, log.TestingLogger())
|
||||
n.GenesisDoc().GenesisTime = now.Add(5 * time.Second)
|
||||
assert.NoError(t, err)
|
||||
require.NoError(t, err)
|
||||
|
||||
n.Start()
|
||||
startTime := tmtime.Now()
|
||||
@@ -101,7 +103,7 @@ func TestNodeSetAppVersion(t *testing.T) {
|
||||
|
||||
// create & start node
|
||||
n, err := DefaultNewNode(config, log.TestingLogger())
|
||||
assert.NoError(t, err, "expected no err on DefaultNewNode")
|
||||
require.NoError(t, err)
|
||||
|
||||
// default config uses the kvstore app
|
||||
var appVersion version.Protocol = kvstore.ProtocolVersion
|
||||
@@ -113,3 +115,80 @@ func TestNodeSetAppVersion(t *testing.T) {
|
||||
// check version is set in node info
|
||||
assert.Equal(t, n.nodeInfo.(p2p.DefaultNodeInfo).ProtocolVersion.App, appVersion)
|
||||
}
|
||||
|
||||
func TestNodeSetPrivValTCP(t *testing.T) {
|
||||
addr := "tcp://" + testFreeAddr(t)
|
||||
|
||||
config := cfg.ResetTestRoot("node_priv_val_tcp_test")
|
||||
config.BaseConfig.PrivValidatorListenAddr = addr
|
||||
|
||||
rs := privval.NewRemoteSigner(
|
||||
log.TestingLogger(),
|
||||
config.ChainID(),
|
||||
addr,
|
||||
types.NewMockPV(),
|
||||
ed25519.GenPrivKey(),
|
||||
)
|
||||
privval.RemoteSignerConnDeadline(5 * time.Millisecond)(rs)
|
||||
go func() {
|
||||
err := rs.Start()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
}()
|
||||
defer rs.Stop()
|
||||
|
||||
n, err := DefaultNewNode(config, log.TestingLogger())
|
||||
require.NoError(t, err)
|
||||
assert.IsType(t, &privval.TCPVal{}, n.PrivValidator())
|
||||
}
|
||||
|
||||
// an address without a protocol must result in an error
|
||||
func TestPrivValidatorListenAddrNoProtocol(t *testing.T) {
|
||||
addrNoPrefix := testFreeAddr(t)
|
||||
|
||||
config := cfg.ResetTestRoot("node_priv_val_tcp_test")
|
||||
config.BaseConfig.PrivValidatorListenAddr = addrNoPrefix
|
||||
|
||||
_, err := DefaultNewNode(config, log.TestingLogger())
|
||||
assert.Error(t, err)
|
||||
}
|
||||
|
||||
func TestNodeSetPrivValIPC(t *testing.T) {
|
||||
tmpfile := "/tmp/kms." + cmn.RandStr(6) + ".sock"
|
||||
defer os.Remove(tmpfile) // clean up
|
||||
|
||||
config := cfg.ResetTestRoot("node_priv_val_tcp_test")
|
||||
config.BaseConfig.PrivValidatorListenAddr = "unix://" + tmpfile
|
||||
|
||||
rs := privval.NewIPCRemoteSigner(
|
||||
log.TestingLogger(),
|
||||
config.ChainID(),
|
||||
tmpfile,
|
||||
types.NewMockPV(),
|
||||
)
|
||||
privval.IPCRemoteSignerConnDeadline(3 * time.Second)(rs)
|
||||
|
||||
done := make(chan struct{})
|
||||
go func() {
|
||||
defer close(done)
|
||||
n, err := DefaultNewNode(config, log.TestingLogger())
|
||||
require.NoError(t, err)
|
||||
assert.IsType(t, &privval.IPCVal{}, n.PrivValidator())
|
||||
}()
|
||||
|
||||
err := rs.Start()
|
||||
require.NoError(t, err)
|
||||
defer rs.Stop()
|
||||
|
||||
<-done
|
||||
}
|
||||
|
||||
// testFreeAddr claims a free port so we don't block on listener being ready.
|
||||
func testFreeAddr(t *testing.T) string {
|
||||
ln, err := net.Listen("tcp", "127.0.0.1:0")
|
||||
require.NoError(t, err)
|
||||
defer ln.Close()
|
||||
|
||||
return fmt.Sprintf("127.0.0.1:%d", ln.Addr().(*net.TCPAddr).Port)
|
||||
}
|
||||
|
@@ -6,7 +6,7 @@ import (
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
|
||||
crypto "github.com/tendermint/tendermint/crypto"
|
||||
"github.com/tendermint/tendermint/crypto"
|
||||
"github.com/tendermint/tendermint/crypto/ed25519"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
)
|
||||
|
@@ -62,7 +62,7 @@ func PrometheusMetrics(namespace string) *Metrics {
|
||||
// NopMetrics returns no-op Metrics.
|
||||
func NopMetrics() *Metrics {
|
||||
return &Metrics{
|
||||
Peers: discard.NewGauge(),
|
||||
Peers: discard.NewGauge(),
|
||||
PeerReceiveBytesTotal: discard.NewCounter(),
|
||||
PeerSendBytesTotal: discard.NewCounter(),
|
||||
PeerPendingSendBytes: discard.NewGauge(),
|
||||
|
@@ -230,3 +230,31 @@ func (info DefaultNodeInfo) NetAddress() *NetAddress {
|
||||
}
|
||||
return netAddr
|
||||
}
|
||||
|
||||
//-----------------------------------------------------------
|
||||
// These methods are for Protobuf Compatibility
|
||||
|
||||
// Size returns the size of the amino encoding, in bytes.
|
||||
func (info *DefaultNodeInfo) Size() int {
|
||||
bs, _ := info.Marshal()
|
||||
return len(bs)
|
||||
}
|
||||
|
||||
// Marshal returns the amino encoding.
|
||||
func (info *DefaultNodeInfo) Marshal() ([]byte, error) {
|
||||
return cdc.MarshalBinaryBare(info)
|
||||
}
|
||||
|
||||
// MarshalTo calls Marshal and copies to the given buffer.
|
||||
func (info *DefaultNodeInfo) MarshalTo(data []byte) (int, error) {
|
||||
bs, err := info.Marshal()
|
||||
if err != nil {
|
||||
return -1, err
|
||||
}
|
||||
return copy(data, bs), nil
|
||||
}
|
||||
|
||||
// Unmarshal deserializes from amino encoded form.
|
||||
func (info *DefaultNodeInfo) Unmarshal(bs []byte) error {
|
||||
return cdc.UnmarshalBinaryBare(bs, info)
|
||||
}
|
||||
|
@@ -10,7 +10,7 @@ import (
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
crypto "github.com/tendermint/tendermint/crypto"
|
||||
"github.com/tendermint/tendermint/crypto"
|
||||
"github.com/tendermint/tendermint/crypto/ed25519"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
|
@@ -13,7 +13,7 @@ import (
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
crypto "github.com/tendermint/tendermint/crypto"
|
||||
"github.com/tendermint/tendermint/crypto"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
"github.com/tendermint/tendermint/p2p"
|
||||
)
|
||||
|
@@ -12,7 +12,7 @@ import (
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
crypto "github.com/tendermint/tendermint/crypto"
|
||||
"github.com/tendermint/tendermint/crypto"
|
||||
"github.com/tendermint/tendermint/crypto/ed25519"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
|
@@ -5,7 +5,7 @@ import (
|
||||
"net"
|
||||
"time"
|
||||
|
||||
crypto "github.com/tendermint/tendermint/crypto"
|
||||
"github.com/tendermint/tendermint/crypto"
|
||||
"github.com/tendermint/tendermint/crypto/ed25519"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
|
@@ -69,6 +69,7 @@ func (rs *IPCRemoteSigner) OnStart() error {
|
||||
for {
|
||||
conn, err := rs.listener.AcceptUnix()
|
||||
if err != nil {
|
||||
rs.Logger.Error("AcceptUnix", "err", err)
|
||||
return
|
||||
}
|
||||
go rs.handleConnection(conn)
|
||||
|
@@ -290,19 +290,6 @@ func (pv *FilePV) saveSigned(height int64, round int, step int8,
|
||||
pv.save()
|
||||
}
|
||||
|
||||
// SignHeartbeat signs a canonical representation of the heartbeat, along with the chainID.
|
||||
// Implements PrivValidator.
|
||||
func (pv *FilePV) SignHeartbeat(chainID string, heartbeat *types.Heartbeat) error {
|
||||
pv.mtx.Lock()
|
||||
defer pv.mtx.Unlock()
|
||||
sig, err := pv.PrivKey.Sign(heartbeat.SignBytes(chainID))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
heartbeat.Signature = sig
|
||||
return nil
|
||||
}
|
||||
|
||||
// String returns a string representation of the FilePV.
|
||||
func (pv *FilePV) String() string {
|
||||
return fmt.Sprintf("PrivValidator{%v LH:%v, LR:%v, LS:%v}", pv.GetAddress(), pv.LastHeight, pv.LastRound, pv.LastStep)
|
||||
|
@@ -125,35 +125,6 @@ func (sc *RemoteSignerClient) SignProposal(
    return nil
}

// SignHeartbeat implements PrivValidator.
func (sc *RemoteSignerClient) SignHeartbeat(
    chainID string,
    heartbeat *types.Heartbeat,
) error {
    sc.lock.Lock()
    defer sc.lock.Unlock()

    err := writeMsg(sc.conn, &SignHeartbeatRequest{Heartbeat: heartbeat})
    if err != nil {
        return err
    }

    res, err := readMsg(sc.conn)
    if err != nil {
        return err
    }
    resp, ok := res.(*SignedHeartbeatResponse)
    if !ok {
        return ErrUnexpectedResponse
    }
    if resp.Error != nil {
        return resp.Error
    }
    *heartbeat = *resp.Heartbeat

    return nil
}

// Ping is used to check connection health.
func (sc *RemoteSignerClient) Ping() error {
    sc.lock.Lock()
@@ -186,8 +157,6 @@ func RegisterRemoteSignerMsg(cdc *amino.Codec) {
    cdc.RegisterConcrete(&SignedVoteResponse{}, "tendermint/remotesigner/SignedVoteResponse", nil)
    cdc.RegisterConcrete(&SignProposalRequest{}, "tendermint/remotesigner/SignProposalRequest", nil)
    cdc.RegisterConcrete(&SignedProposalResponse{}, "tendermint/remotesigner/SignedProposalResponse", nil)
    cdc.RegisterConcrete(&SignHeartbeatRequest{}, "tendermint/remotesigner/SignHeartbeatRequest", nil)
    cdc.RegisterConcrete(&SignedHeartbeatResponse{}, "tendermint/remotesigner/SignedHeartbeatResponse", nil)
    cdc.RegisterConcrete(&PingRequest{}, "tendermint/remotesigner/PingRequest", nil)
    cdc.RegisterConcrete(&PingResponse{}, "tendermint/remotesigner/PingResponse", nil)
}

@@ -218,16 +187,6 @@ type SignedProposalResponse struct {
    Error *RemoteSignerError
}

// SignHeartbeatRequest is a PrivValidatorSocket message containing a Heartbeat.
type SignHeartbeatRequest struct {
    Heartbeat *types.Heartbeat
}

type SignedHeartbeatResponse struct {
    Heartbeat *types.Heartbeat
    Error *RemoteSignerError
}

// PingRequest is a PrivValidatorSocket message to keep the connection alive.
type PingRequest struct {
}

@@ -286,13 +245,6 @@ func handleRequest(req RemoteSignerMsg, chainID string, privVal types.PrivValida
        } else {
            res = &SignedProposalResponse{r.Proposal, nil}
        }
    case *SignHeartbeatRequest:
        err = privVal.SignHeartbeat(chainID, r.Heartbeat)
        if err != nil {
            res = &SignedHeartbeatResponse{nil, &RemoteSignerError{0, err.Error()}}
        } else {
            res = &SignedHeartbeatResponse{r.Heartbeat, nil}
        }
    case *PingRequest:
        res = &PingResponse{}
    default:
@@ -137,22 +137,6 @@ func TestSocketPVVoteKeepalive(t *testing.T) {
|
||||
assert.Equal(t, want.Signature, have.Signature)
|
||||
}
|
||||
|
||||
func TestSocketPVHeartbeat(t *testing.T) {
|
||||
var (
|
||||
chainID = cmn.RandStr(12)
|
||||
sc, rs = testSetupSocketPair(t, chainID, types.NewMockPV())
|
||||
|
||||
want = &types.Heartbeat{}
|
||||
have = &types.Heartbeat{}
|
||||
)
|
||||
defer sc.Stop()
|
||||
defer rs.Stop()
|
||||
|
||||
require.NoError(t, rs.privVal.SignHeartbeat(chainID, want))
|
||||
require.NoError(t, sc.SignHeartbeat(chainID, have))
|
||||
assert.Equal(t, want.Signature, have.Signature)
|
||||
}
|
||||
|
||||
func TestSocketPVDeadline(t *testing.T) {
|
||||
var (
|
||||
addr = testFreeAddr(t)
|
||||
@@ -301,32 +285,6 @@ func TestRemoteSignProposalErrors(t *testing.T) {
|
||||
require.Error(t, err)
|
||||
}
|
||||
|
||||
func TestRemoteSignHeartbeatErrors(t *testing.T) {
|
||||
var (
|
||||
chainID = cmn.RandStr(12)
|
||||
sc, rs = testSetupSocketPair(t, chainID, types.NewErroringMockPV())
|
||||
hb = &types.Heartbeat{}
|
||||
)
|
||||
defer sc.Stop()
|
||||
defer rs.Stop()
|
||||
|
||||
err := writeMsg(sc.conn, &SignHeartbeatRequest{Heartbeat: hb})
|
||||
require.NoError(t, err)
|
||||
|
||||
res, err := readMsg(sc.conn)
|
||||
require.NoError(t, err)
|
||||
|
||||
resp := *res.(*SignedHeartbeatResponse)
|
||||
require.NotNil(t, resp.Error)
|
||||
require.Equal(t, resp.Error.Description, types.ErroringMockPVErr.Error())
|
||||
|
||||
err = rs.privVal.SignHeartbeat(chainID, hb)
|
||||
require.Error(t, err)
|
||||
|
||||
err = sc.SignHeartbeat(chainID, hb)
|
||||
require.Error(t, err)
|
||||
}
|
||||
|
||||
func TestErrUnexpectedResponse(t *testing.T) {
|
||||
var (
|
||||
addr = testFreeAddr(t)
|
||||
@@ -362,22 +320,12 @@ func TestErrUnexpectedResponse(t *testing.T) {
|
||||
require.NotNil(t, rsConn)
|
||||
<-readyc
|
||||
|
||||
// Heartbeat:
|
||||
go func(errc chan error) {
|
||||
errc <- sc.SignHeartbeat(chainID, &types.Heartbeat{})
|
||||
}(errc)
|
||||
// read request and write wrong response:
|
||||
go testReadWriteResponse(t, &SignedVoteResponse{}, rsConn)
|
||||
err = <-errc
|
||||
require.Error(t, err)
|
||||
require.Equal(t, err, ErrUnexpectedResponse)
|
||||
|
||||
// Proposal:
|
||||
go func(errc chan error) {
|
||||
errc <- sc.SignProposal(chainID, &types.Proposal{})
|
||||
}(errc)
|
||||
// read request and write wrong response:
|
||||
go testReadWriteResponse(t, &SignedHeartbeatResponse{}, rsConn)
|
||||
go testReadWriteResponse(t, &SignedVoteResponse{}, rsConn)
|
||||
err = <-errc
|
||||
require.Error(t, err)
|
||||
require.Equal(t, err, ErrUnexpectedResponse)
|
||||
@@ -387,7 +335,7 @@ func TestErrUnexpectedResponse(t *testing.T) {
|
||||
errc <- sc.SignVote(chainID, &types.Vote{})
|
||||
}(errc)
|
||||
// read request and write wrong response:
|
||||
go testReadWriteResponse(t, &SignedHeartbeatResponse{}, rsConn)
|
||||
go testReadWriteResponse(t, &SignedProposalResponse{}, rsConn)
|
||||
err = <-errc
|
||||
require.Error(t, err)
|
||||
require.Equal(t, err, ErrUnexpectedResponse)
|
||||
|
@@ -370,20 +370,27 @@ func TestTxSearch(t *testing.T) {
|
||||
}
|
||||
|
||||
// query by height
|
||||
result, err = c.TxSearch(fmt.Sprintf("tx.height >= %d", txHeight), true, 1, 30)
|
||||
result, err = c.TxSearch(fmt.Sprintf("tx.height=%d", txHeight), true, 1, 30)
|
||||
require.Nil(t, err, "%+v", err)
|
||||
require.Len(t, result.Txs, 1)
|
||||
|
||||
// we query for non existing tx
|
||||
// query for non existing tx
|
||||
result, err = c.TxSearch(fmt.Sprintf("tx.hash='%X'", anotherTxHash), false, 1, 30)
|
||||
require.Nil(t, err, "%+v", err)
|
||||
require.Len(t, result.Txs, 0)
|
||||
|
||||
// we query using a tag (see kvstore application)
|
||||
// query using a tag (see kvstore application)
|
||||
result, err = c.TxSearch("app.creator='Cosmoshi Netowoko'", false, 1, 30)
|
||||
require.Nil(t, err, "%+v", err)
|
||||
if len(result.Txs) == 0 {
|
||||
t.Fatal("expected a lot of transactions")
|
||||
}
|
||||
|
||||
// query using a tag (see kvstore application) and height
|
||||
result, err = c.TxSearch("app.creator='Cosmoshi Netowoko' AND tx.height<10000", true, 1, 30)
|
||||
require.Nil(t, err, "%+v", err)
|
||||
if len(result.Txs) == 0 {
|
||||
t.Fatal("expected a lot of transactions")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@@ -27,7 +27,7 @@ import (
|
||||
// "result": {
|
||||
// "validators": [
|
||||
// {
|
||||
// "accum": "0",
|
||||
// "proposer_priority": "0",
|
||||
// "voting_power": "10",
|
||||
// "pub_key": {
|
||||
// "data": "68DFDA7E50F82946E7E8546BED37944A422CD1B831E70DF66BA3B8430593944D",
|
||||
@@ -92,7 +92,7 @@ func Validators(heightPtr *int64) (*ctypes.ResultValidators, error) {
|
||||
// "value": "SBctdhRBcXtBgdI/8a/alTsUhGXqGs9k5ylV1u5iKHg="
|
||||
// },
|
||||
// "voting_power": "10",
|
||||
// "accum": "0"
|
||||
// "proposer_priority": "0"
|
||||
// }
|
||||
// ],
|
||||
// "proposer": {
|
||||
@@ -102,7 +102,7 @@ func Validators(heightPtr *int64) (*ctypes.ResultValidators, error) {
|
||||
// "value": "SBctdhRBcXtBgdI/8a/alTsUhGXqGs9k5ylV1u5iKHg="
|
||||
// },
|
||||
// "voting_power": "10",
|
||||
// "accum": "0"
|
||||
// "proposer_priority": "0"
|
||||
// }
|
||||
// },
|
||||
// "proposal": null,
|
||||
@@ -138,7 +138,7 @@ func Validators(heightPtr *int64) (*ctypes.ResultValidators, error) {
|
||||
// "value": "SBctdhRBcXtBgdI/8a/alTsUhGXqGs9k5ylV1u5iKHg="
|
||||
// },
|
||||
// "voting_power": "10",
|
||||
// "accum": "0"
|
||||
// "proposer_priority": "0"
|
||||
// }
|
||||
// ],
|
||||
// "proposer": {
|
||||
@@ -148,7 +148,7 @@ func Validators(heightPtr *int64) (*ctypes.ResultValidators, error) {
|
||||
// "value": "SBctdhRBcXtBgdI/8a/alTsUhGXqGs9k5ylV1u5iKHg="
|
||||
// },
|
||||
// "voting_power": "10",
|
||||
// "accum": "0"
|
||||
// "proposer_priority": "0"
|
||||
// }
|
||||
// }
|
||||
// },
|
||||
|
@@ -2,6 +2,7 @@ package core
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
|
||||
@@ -53,11 +54,12 @@ import (
|
||||
// import "github.com/tendermint/tendermint/types"
|
||||
//
|
||||
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
|
||||
// err := client.Start()
|
||||
// ctx, cancel := context.WithTimeout(context.Background(), timeout)
|
||||
// defer cancel()
|
||||
// query := query.MustParse("tm.event = 'Tx' AND tx.height = 3")
|
||||
// txs := make(chan interface{})
|
||||
// err := client.Subscribe(ctx, "test-client", query, txs)
|
||||
// err = client.Subscribe(ctx, "test-client", query, txs)
|
||||
//
|
||||
// go func() {
|
||||
// for e := range txs {
|
||||
@@ -104,7 +106,7 @@ func Subscribe(wsCtx rpctypes.WSRPCContext, query string) (*ctypes.ResultSubscri
|
||||
go func() {
|
||||
for event := range ch {
|
||||
tmResult := &ctypes.ResultEvent{query, event.(tmtypes.TMEventData)}
|
||||
wsCtx.TryWriteRPCResponse(rpctypes.NewRPCSuccessResponse(wsCtx.Codec(), wsCtx.Request.ID+"#event", tmResult))
|
||||
wsCtx.TryWriteRPCResponse(rpctypes.NewRPCSuccessResponse(wsCtx.Codec(), rpctypes.JSONRPCStringID(fmt.Sprintf("%v#event", wsCtx.Request.ID)), tmResult))
|
||||
}
|
||||
}()
|
||||
|
||||
@@ -115,7 +117,8 @@ func Subscribe(wsCtx rpctypes.WSRPCContext, query string) (*ctypes.ResultSubscri
|
||||
//
|
||||
// ```go
|
||||
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
|
||||
// err := client.Unsubscribe("test-client", query)
|
||||
// err := client.Start()
|
||||
// err = client.Unsubscribe("test-client", query)
|
||||
// ```
|
||||
//
|
||||
// > The above command returns JSON structured like this:
|
||||
@@ -154,7 +157,8 @@ func Unsubscribe(wsCtx rpctypes.WSRPCContext, query string) (*ctypes.ResultUnsub
|
||||
//
|
||||
// ```go
|
||||
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
|
||||
// err := client.UnsubscribeAll("test-client")
|
||||
// err := client.Start()
|
||||
// err = client.UnsubscribeAll("test-client")
|
||||
// ```
|
||||
//
|
||||
// > The above command returns JSON structured like this:
|
||||
|
@@ -2,7 +2,7 @@ package core
|
||||
|
||||
import (
|
||||
"github.com/tendermint/tendermint/consensus"
|
||||
crypto "github.com/tendermint/tendermint/crypto"
|
||||
"github.com/tendermint/tendermint/crypto"
|
||||
dbm "github.com/tendermint/tendermint/libs/db"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
mempl "github.com/tendermint/tendermint/mempool"
|
||||
|
@@ -5,7 +5,7 @@ import (
|
||||
"time"
|
||||
|
||||
abci "github.com/tendermint/tendermint/abci/types"
|
||||
crypto "github.com/tendermint/tendermint/crypto"
|
||||
"github.com/tendermint/tendermint/crypto"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
|
||||
"github.com/tendermint/tendermint/p2p"
|
||||
|
@@ -99,7 +99,7 @@ func NewJSONRPCClient(remote string) *JSONRPCClient {
}

func (c *JSONRPCClient) Call(method string, params map[string]interface{}, result interface{}) (interface{}, error) {
    request, err := types.MapToRequest(c.cdc, "jsonrpc-client", method, params)
    request, err := types.MapToRequest(c.cdc, types.JSONRPCStringID("jsonrpc-client"), method, params)
    if err != nil {
        return nil, err
    }
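A hedged usage sketch of the client whose Call signature is shown above. The endpoint address and the `health` method are assumptions for illustration; result types that contain interfaces (e.g. pubkeys) would additionally need their amino registrations on the client codec.

```go
package main

import (
	"fmt"

	ctypes "github.com/tendermint/tendermint/rpc/core/types"
	rpcclient "github.com/tendermint/tendermint/rpc/lib/client"
)

func main() {
	c := rpcclient.NewJSONRPCClient("tcp://localhost:26657")
	res := new(ctypes.ResultHealth)
	// Call builds an RPCRequest tagged with the string ID "jsonrpc-client",
	// POSTs it, and unmarshals the JSON-RPC result into res.
	if _, err := c.Call("health", map[string]interface{}{}, res); err != nil {
		fmt.Println("rpc error:", err)
		return
	}
	fmt.Println("node is reachable")
}
```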
@@ -214,7 +214,7 @@ func (c *WSClient) Send(ctx context.Context, request types.RPCRequest) error {

// Call the given method. See Send description.
func (c *WSClient) Call(ctx context.Context, method string, params map[string]interface{}) error {
    request, err := types.MapToRequest(c.cdc, "ws-client", method, params)
    request, err := types.MapToRequest(c.cdc, types.JSONRPCStringID("ws-client"), method, params)
    if err != nil {
        return err
    }
@@ -224,7 +224,7 @@ func (c *WSClient) Call(ctx context.Context, method string, params map[string]in
// CallWithArrayParams the given method with params in a form of array. See
// Send description.
func (c *WSClient) CallWithArrayParams(ctx context.Context, method string, params []interface{}) error {
    request, err := types.ArrayToRequest(c.cdc, "ws-client", method, params)
    request, err := types.ArrayToRequest(c.cdc, types.JSONRPCStringID("ws-client"), method, params)
    if err != nil {
        return err
    }
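Because request IDs are now the `jsonrpcid` sum type rather than bare strings, callers wrap them explicitly. A minimal sketch; the method name "c" and the params are placeholders:

```go
package main

import (
	"fmt"

	amino "github.com/tendermint/go-amino"
	types "github.com/tendermint/tendermint/rpc/lib/types"
)

func main() {
	cdc := amino.NewCodec()
	// String and integer IDs are both accepted now.
	reqStr, _ := types.MapToRequest(cdc, types.JSONRPCStringID("ws-client"), "c", map[string]interface{}{"s": "a"})
	reqInt, _ := types.ArrayToRequest(cdc, types.JSONRPCIntID(7), "c", []interface{}{"a", 10})
	fmt.Println(reqStr.ID, reqInt.ID) // serialized as "ws-client" (JSON string) and 7 (JSON number)
}
```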
@@ -103,7 +103,7 @@ func makeJSONRPCHandler(funcMap map[string]*RPCFunc, cdc *amino.Codec, logger lo
    return func(w http.ResponseWriter, r *http.Request) {
        b, err := ioutil.ReadAll(r.Body)
        if err != nil {
            WriteRPCResponseHTTP(w, types.RPCInvalidRequestError("", errors.Wrap(err, "Error reading request body")))
            WriteRPCResponseHTTP(w, types.RPCInvalidRequestError(types.JSONRPCStringID(""), errors.Wrap(err, "Error reading request body")))
            return
        }
        // if its an empty request (like from a browser),
@@ -116,12 +116,12 @@ func makeJSONRPCHandler(funcMap map[string]*RPCFunc, cdc *amino.Codec, logger lo
        var request types.RPCRequest
        err = json.Unmarshal(b, &request)
        if err != nil {
            WriteRPCResponseHTTP(w, types.RPCParseError("", errors.Wrap(err, "Error unmarshalling request")))
            WriteRPCResponseHTTP(w, types.RPCParseError(types.JSONRPCStringID(""), errors.Wrap(err, "Error unmarshalling request")))
            return
        }
        // A Notification is a Request object without an "id" member.
        // The Server MUST NOT reply to a Notification, including those that are within a batch request.
        if request.ID == "" {
        if request.ID == types.JSONRPCStringID("") {
            logger.Debug("HTTPJSONRPC received a notification, skipping... (please send a non-empty ID if you want to call a method)")
            return
        }
@@ -255,7 +255,7 @@ func makeHTTPHandler(rpcFunc *RPCFunc, cdc *amino.Codec, logger log.Logger) func
    // Exception for websocket endpoints
    if rpcFunc.ws {
        return func(w http.ResponseWriter, r *http.Request) {
            WriteRPCResponseHTTP(w, types.RPCMethodNotFoundError(""))
            WriteRPCResponseHTTP(w, types.RPCMethodNotFoundError(types.JSONRPCStringID("")))
        }
    }
    // All other endpoints
@@ -263,17 +263,17 @@ func makeHTTPHandler(rpcFunc *RPCFunc, cdc *amino.Codec, logger log.Logger) func
        logger.Debug("HTTP HANDLER", "req", r)
        args, err := httpParamsToArgs(rpcFunc, cdc, r)
        if err != nil {
            WriteRPCResponseHTTP(w, types.RPCInvalidParamsError("", errors.Wrap(err, "Error converting http params to arguments")))
            WriteRPCResponseHTTP(w, types.RPCInvalidParamsError(types.JSONRPCStringID(""), errors.Wrap(err, "Error converting http params to arguments")))
            return
        }
        returns := rpcFunc.f.Call(args)
        logger.Info("HTTPRestRPC", "method", r.URL.Path, "args", args, "returns", returns)
        result, err := unreflectResult(returns)
        if err != nil {
            WriteRPCResponseHTTP(w, types.RPCInternalError("", err))
            WriteRPCResponseHTTP(w, types.RPCInternalError(types.JSONRPCStringID(""), err))
            return
        }
        WriteRPCResponseHTTP(w, types.NewRPCSuccessResponse(cdc, "", result))
        WriteRPCResponseHTTP(w, types.NewRPCSuccessResponse(cdc, types.JSONRPCStringID(""), result))
    }
}
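With the handler accepting the new ID type, a request whose `id` is a plain JSON number now round-trips (the change behind \#2811 in the changelog). A small end-to-end sketch over HTTP; the node address and the `health` method are assumptions:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"strings"
)

func main() {
	body := `{"jsonrpc":"2.0","id":1,"method":"health","params":{}}` // integer id
	resp, err := http.Post("http://localhost:26657", "application/json", strings.NewReader(body))
	if err != nil {
		fmt.Println("post failed:", err)
		return
	}
	defer resp.Body.Close()
	b, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(b)) // the reply echoes the numeric id, e.g. {"jsonrpc":"2.0","id":1,"result":{}}
}
```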
@@ -580,7 +580,7 @@ func (wsc *wsConnection) readRoutine() {
|
||||
err = fmt.Errorf("WSJSONRPC: %v", r)
|
||||
}
|
||||
wsc.Logger.Error("Panic in WSJSONRPC handler", "err", err, "stack", string(debug.Stack()))
|
||||
wsc.WriteRPCResponse(types.RPCInternalError("unknown", err))
|
||||
wsc.WriteRPCResponse(types.RPCInternalError(types.JSONRPCStringID("unknown"), err))
|
||||
go wsc.readRoutine()
|
||||
} else {
|
||||
wsc.baseConn.Close() // nolint: errcheck
|
||||
@@ -615,13 +615,13 @@ func (wsc *wsConnection) readRoutine() {
|
||||
var request types.RPCRequest
|
||||
err = json.Unmarshal(in, &request)
|
||||
if err != nil {
|
||||
wsc.WriteRPCResponse(types.RPCParseError("", errors.Wrap(err, "Error unmarshaling request")))
|
||||
wsc.WriteRPCResponse(types.RPCParseError(types.JSONRPCStringID(""), errors.Wrap(err, "Error unmarshaling request")))
|
||||
continue
|
||||
}
|
||||
|
||||
// A Notification is a Request object without an "id" member.
|
||||
// The Server MUST NOT reply to a Notification, including those that are within a batch request.
|
||||
if request.ID == "" {
|
||||
if request.ID == types.JSONRPCStringID("") {
|
||||
wsc.Logger.Debug("WSJSONRPC received a notification, skipping... (please send a non-empty ID if you want to call a method)")
|
||||
continue
|
||||
}
|
||||
|
@@ -47,21 +47,22 @@ func statusOK(code int) bool { return code >= 200 && code <= 299 }
|
||||
func TestRPCParams(t *testing.T) {
|
||||
mux := testMux()
|
||||
tests := []struct {
|
||||
payload string
|
||||
wantErr string
|
||||
payload string
|
||||
wantErr string
|
||||
expectedId interface{}
|
||||
}{
|
||||
// bad
|
||||
{`{"jsonrpc": "2.0", "id": "0"}`, "Method not found"},
|
||||
{`{"jsonrpc": "2.0", "method": "y", "id": "0"}`, "Method not found"},
|
||||
{`{"method": "c", "id": "0", "params": a}`, "invalid character"},
|
||||
{`{"method": "c", "id": "0", "params": ["a"]}`, "got 1"},
|
||||
{`{"method": "c", "id": "0", "params": ["a", "b"]}`, "invalid character"},
|
||||
{`{"method": "c", "id": "0", "params": [1, 1]}`, "of type string"},
|
||||
{`{"jsonrpc": "2.0", "id": "0"}`, "Method not found", types.JSONRPCStringID("0")},
|
||||
{`{"jsonrpc": "2.0", "method": "y", "id": "0"}`, "Method not found", types.JSONRPCStringID("0")},
|
||||
{`{"method": "c", "id": "0", "params": a}`, "invalid character", types.JSONRPCStringID("")}, // id not captured in JSON parsing failures
|
||||
{`{"method": "c", "id": "0", "params": ["a"]}`, "got 1", types.JSONRPCStringID("0")},
|
||||
{`{"method": "c", "id": "0", "params": ["a", "b"]}`, "invalid character", types.JSONRPCStringID("0")},
|
||||
{`{"method": "c", "id": "0", "params": [1, 1]}`, "of type string", types.JSONRPCStringID("0")},
|
||||
|
||||
// good
|
||||
{`{"jsonrpc": "2.0", "method": "c", "id": "0", "params": null}`, ""},
|
||||
{`{"method": "c", "id": "0", "params": {}}`, ""},
|
||||
{`{"method": "c", "id": "0", "params": ["a", "10"]}`, ""},
|
||||
{`{"jsonrpc": "2.0", "method": "c", "id": "0", "params": null}`, "", types.JSONRPCStringID("0")},
|
||||
{`{"method": "c", "id": "0", "params": {}}`, "", types.JSONRPCStringID("0")},
|
||||
{`{"method": "c", "id": "0", "params": ["a", "10"]}`, "", types.JSONRPCStringID("0")},
|
||||
}
|
||||
|
||||
for i, tt := range tests {
|
||||
@@ -80,7 +81,7 @@ func TestRPCParams(t *testing.T) {
|
||||
recv := new(types.RPCResponse)
|
||||
assert.Nil(t, json.Unmarshal(blob, recv), "#%d: expecting successful parsing of an RPCResponse:\nblob: %s", i, blob)
|
||||
assert.NotEqual(t, recv, new(types.RPCResponse), "#%d: not expecting a blank RPCResponse", i)
|
||||
|
||||
assert.Equal(t, tt.expectedId, recv.ID, "#%d: expected ID not matched in RPCResponse", i)
|
||||
if tt.wantErr == "" {
|
||||
assert.Nil(t, recv.Error, "#%d: not expecting an error", i)
|
||||
} else {
|
||||
@@ -91,9 +92,56 @@ func TestRPCParams(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func TestJSONRPCID(t *testing.T) {
|
||||
mux := testMux()
|
||||
tests := []struct {
|
||||
payload string
|
||||
wantErr bool
|
||||
expectedId interface{}
|
||||
}{
|
||||
// good id
|
||||
{`{"jsonrpc": "2.0", "method": "c", "id": "0", "params": ["a", "10"]}`, false, types.JSONRPCStringID("0")},
|
||||
{`{"jsonrpc": "2.0", "method": "c", "id": "abc", "params": ["a", "10"]}`, false, types.JSONRPCStringID("abc")},
|
||||
{`{"jsonrpc": "2.0", "method": "c", "id": 0, "params": ["a", "10"]}`, false, types.JSONRPCIntID(0)},
|
||||
{`{"jsonrpc": "2.0", "method": "c", "id": 1, "params": ["a", "10"]}`, false, types.JSONRPCIntID(1)},
|
||||
{`{"jsonrpc": "2.0", "method": "c", "id": 1.3, "params": ["a", "10"]}`, false, types.JSONRPCIntID(1)},
|
||||
{`{"jsonrpc": "2.0", "method": "c", "id": -1, "params": ["a", "10"]}`, false, types.JSONRPCIntID(-1)},
|
||||
{`{"jsonrpc": "2.0", "method": "c", "id": null, "params": ["a", "10"]}`, false, nil},
|
||||
|
||||
// bad id
|
||||
{`{"jsonrpc": "2.0", "method": "c", "id": {}, "params": ["a", "10"]}`, true, nil},
|
||||
{`{"jsonrpc": "2.0", "method": "c", "id": [], "params": ["a", "10"]}`, true, nil},
|
||||
}
|
||||
|
||||
for i, tt := range tests {
|
||||
req, _ := http.NewRequest("POST", "http://localhost/", strings.NewReader(tt.payload))
|
||||
rec := httptest.NewRecorder()
|
||||
mux.ServeHTTP(rec, req)
|
||||
res := rec.Result()
|
||||
// Always expecting back a JSONRPCResponse
|
||||
assert.True(t, statusOK(res.StatusCode), "#%d: should always return 2XX", i)
|
||||
blob, err := ioutil.ReadAll(res.Body)
|
||||
if err != nil {
|
||||
t.Errorf("#%d: err reading body: %v", i, err)
|
||||
continue
|
||||
}
|
||||
|
||||
recv := new(types.RPCResponse)
|
||||
err = json.Unmarshal(blob, recv)
|
||||
assert.Nil(t, err, "#%d: expecting successful parsing of an RPCResponse:\nblob: %s", i, blob)
|
||||
if !tt.wantErr {
|
||||
assert.NotEqual(t, recv, new(types.RPCResponse), "#%d: not expecting a blank RPCResponse", i)
|
||||
assert.Equal(t, tt.expectedId, recv.ID, "#%d: expected ID not matched in RPCResponse", i)
|
||||
assert.Nil(t, recv.Error, "#%d: not expecting an error", i)
|
||||
} else {
|
||||
assert.True(t, recv.Error.Code < 0, "#%d: not expecting a positive JSONRPC code", i)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestRPCNotification(t *testing.T) {
|
||||
mux := testMux()
|
||||
body := strings.NewReader(`{"jsonrpc": "2.0"}`)
|
||||
body := strings.NewReader(`{"jsonrpc": "2.0", "id": ""}`)
|
||||
req, _ := http.NewRequest("POST", "http://localhost/", body)
|
||||
rec := httptest.NewRecorder()
|
||||
mux.ServeHTTP(rec, req)
|
||||
@@ -134,7 +182,7 @@ func TestWebsocketManagerHandler(t *testing.T) {
|
||||
}
|
||||
|
||||
// check basic functionality works
|
||||
req, err := types.MapToRequest(amino.NewCodec(), "TestWebsocketManager", "c", map[string]interface{}{"s": "a", "i": 10})
|
||||
req, err := types.MapToRequest(amino.NewCodec(), types.JSONRPCStringID("TestWebsocketManager"), "c", map[string]interface{}{"s": "a", "i": 10})
|
||||
require.NoError(t, err)
|
||||
err = c.WriteJSON(req)
|
||||
require.NoError(t, err)
|
||||
|
@@ -132,7 +132,7 @@ func RecoverAndLogHandler(handler http.Handler, logger log.Logger) http.Handler
|
||||
"Panic in RPC HTTP handler", "err", e, "stack",
|
||||
string(debug.Stack()),
|
||||
)
|
||||
WriteRPCResponseHTTPError(rww, http.StatusInternalServerError, types.RPCInternalError("", e.(error)))
|
||||
WriteRPCResponseHTTPError(rww, http.StatusInternalServerError, types.RPCInternalError(types.JSONRPCStringID(""), e.(error)))
|
||||
}
|
||||
}
|
||||
|
||||
|
@@ -4,6 +4,7 @@ import (
    "context"
    "encoding/json"
    "fmt"
    "reflect"
    "strings"

    "github.com/pkg/errors"

@@ -13,17 +14,75 @@ import (
    tmpubsub "github.com/tendermint/tendermint/libs/pubsub"
)

// a wrapper to emulate a sum type: jsonrpcid = string | int
// TODO: refactor when Go 2.0 arrives https://github.com/golang/go/issues/19412
type jsonrpcid interface {
    isJSONRPCID()
}

// JSONRPCStringID a wrapper for JSON-RPC string IDs
type JSONRPCStringID string

func (JSONRPCStringID) isJSONRPCID() {}

// JSONRPCIntID a wrapper for JSON-RPC integer IDs
type JSONRPCIntID int

func (JSONRPCIntID) isJSONRPCID() {}

func idFromInterface(idInterface interface{}) (jsonrpcid, error) {
    switch id := idInterface.(type) {
    case string:
        return JSONRPCStringID(id), nil
    case float64:
        // json.Unmarshal uses float64 for all numbers
        // (https://golang.org/pkg/encoding/json/#Unmarshal),
        // but the JSONRPC2.0 spec says the id SHOULD NOT contain
        // decimals - so we truncate the decimals here.
        return JSONRPCIntID(int(id)), nil
    default:
        typ := reflect.TypeOf(id)
        return nil, fmt.Errorf("JSON-RPC ID (%v) is of unknown type (%v)", id, typ)
    }
}

//----------------------------------------
// REQUEST

type RPCRequest struct {
    JSONRPC string `json:"jsonrpc"`
    ID string `json:"id"`
    ID jsonrpcid `json:"id"`
    Method string `json:"method"`
    Params json.RawMessage `json:"params"` // must be map[string]interface{} or []interface{}
}

func NewRPCRequest(id string, method string, params json.RawMessage) RPCRequest {
// UnmarshalJSON custom JSON unmarshalling due to jsonrpcid being string or int
func (request *RPCRequest) UnmarshalJSON(data []byte) error {
    unsafeReq := &struct {
        JSONRPC string `json:"jsonrpc"`
        ID interface{} `json:"id"`
        Method string `json:"method"`
        Params json.RawMessage `json:"params"` // must be map[string]interface{} or []interface{}
    }{}
    err := json.Unmarshal(data, &unsafeReq)
    if err != nil {
        return err
    }
    request.JSONRPC = unsafeReq.JSONRPC
    request.Method = unsafeReq.Method
    request.Params = unsafeReq.Params
    if unsafeReq.ID == nil {
        return nil
    }
    id, err := idFromInterface(unsafeReq.ID)
    if err != nil {
        return err
    }
    request.ID = id
    return nil
}

func NewRPCRequest(id jsonrpcid, method string, params json.RawMessage) RPCRequest {
    return RPCRequest{
        JSONRPC: "2.0",
        ID: id,
@@ -36,7 +95,7 @@ func (req RPCRequest) String() string {
    return fmt.Sprintf("[%s %s]", req.ID, req.Method)
}

func MapToRequest(cdc *amino.Codec, id string, method string, params map[string]interface{}) (RPCRequest, error) {
func MapToRequest(cdc *amino.Codec, id jsonrpcid, method string, params map[string]interface{}) (RPCRequest, error) {
    var params_ = make(map[string]json.RawMessage, len(params))
    for name, value := range params {
        valueJSON, err := cdc.MarshalJSON(value)
@@ -53,7 +112,7 @@ func MapToRequest(cdc *amino.Codec, id string, method string, params map[string]
    return request, nil
}

func ArrayToRequest(cdc *amino.Codec, id string, method string, params []interface{}) (RPCRequest, error) {
func ArrayToRequest(cdc *amino.Codec, id jsonrpcid, method string, params []interface{}) (RPCRequest, error) {
    var params_ = make([]json.RawMessage, len(params))
    for i, value := range params {
        valueJSON, err := cdc.MarshalJSON(value)
@@ -89,12 +148,38 @@ func (err RPCError) Error() string {

type RPCResponse struct {
    JSONRPC string `json:"jsonrpc"`
    ID string `json:"id"`
    ID jsonrpcid `json:"id"`
    Result json.RawMessage `json:"result,omitempty"`
    Error *RPCError `json:"error,omitempty"`
}

func NewRPCSuccessResponse(cdc *amino.Codec, id string, res interface{}) RPCResponse {
// UnmarshalJSON custom JSON unmarshalling due to jsonrpcid being string or int
func (response *RPCResponse) UnmarshalJSON(data []byte) error {
    unsafeResp := &struct {
        JSONRPC string `json:"jsonrpc"`
        ID interface{} `json:"id"`
        Result json.RawMessage `json:"result,omitempty"`
        Error *RPCError `json:"error,omitempty"`
    }{}
    err := json.Unmarshal(data, &unsafeResp)
    if err != nil {
        return err
    }
    response.JSONRPC = unsafeResp.JSONRPC
    response.Error = unsafeResp.Error
    response.Result = unsafeResp.Result
    if unsafeResp.ID == nil {
        return nil
    }
    id, err := idFromInterface(unsafeResp.ID)
    if err != nil {
        return err
    }
    response.ID = id
    return nil
}

func NewRPCSuccessResponse(cdc *amino.Codec, id jsonrpcid, res interface{}) RPCResponse {
    var rawMsg json.RawMessage

    if res != nil {
@@ -109,7 +194,7 @@ func NewRPCSuccessResponse(cdc *amino.Codec, id string, res interface{}) RPCResp
    return RPCResponse{JSONRPC: "2.0", ID: id, Result: rawMsg}
}

func NewRPCErrorResponse(id string, code int, msg string, data string) RPCResponse {
func NewRPCErrorResponse(id jsonrpcid, code int, msg string, data string) RPCResponse {
    return RPCResponse{
        JSONRPC: "2.0",
        ID: id,
@@ -124,27 +209,27 @@ func (resp RPCResponse) String() string {
    return fmt.Sprintf("[%s %s]", resp.ID, resp.Error)
}

func RPCParseError(id string, err error) RPCResponse {
func RPCParseError(id jsonrpcid, err error) RPCResponse {
    return NewRPCErrorResponse(id, -32700, "Parse error. Invalid JSON", err.Error())
}

func RPCInvalidRequestError(id string, err error) RPCResponse {
func RPCInvalidRequestError(id jsonrpcid, err error) RPCResponse {
    return NewRPCErrorResponse(id, -32600, "Invalid Request", err.Error())
}

func RPCMethodNotFoundError(id string) RPCResponse {
func RPCMethodNotFoundError(id jsonrpcid) RPCResponse {
    return NewRPCErrorResponse(id, -32601, "Method not found", "")
}

func RPCInvalidParamsError(id string, err error) RPCResponse {
func RPCInvalidParamsError(id jsonrpcid, err error) RPCResponse {
    return NewRPCErrorResponse(id, -32602, "Invalid params", err.Error())
}

func RPCInternalError(id string, err error) RPCResponse {
func RPCInternalError(id jsonrpcid, err error) RPCResponse {
    return NewRPCErrorResponse(id, -32603, "Internal error", err.Error())
}

func RPCServerError(id string, err error) RPCResponse {
func RPCServerError(id jsonrpcid, err error) RPCResponse {
    return NewRPCErrorResponse(id, -32000, "Server error", err.Error())
}
|
||||
|
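A short sketch of what the custom `UnmarshalJSON` above buys: the same `RPCResponse` type now accepts string, integer, or null IDs and rejects anything else. This mirrors the behaviour in the hunk; the package placement and printed output are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"

	types "github.com/tendermint/tendermint/rpc/lib/types"
)

func main() {
	var r1, r2, bad types.RPCResponse

	_ = json.Unmarshal([]byte(`{"jsonrpc":"2.0","id":1,"result":{}}`), &r1)
	fmt.Println(r1.ID == types.JSONRPCIntID(1)) // true: numeric ids decode to JSONRPCIntID

	_ = json.Unmarshal([]byte(`{"jsonrpc":"2.0","id":"abc","result":{}}`), &r2)
	fmt.Println(r2.ID == types.JSONRPCStringID("abc")) // true: string ids decode to JSONRPCStringID

	err := json.Unmarshal([]byte(`{"jsonrpc":"2.0","id":true,"result":{}}`), &bad)
	fmt.Println(err != nil) // true: booleans, objects, and arrays are rejected by idFromInterface
}
```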
||||
|
@@ -15,24 +15,57 @@ type SampleResult struct {
|
||||
Value string
|
||||
}
|
||||
|
||||
type responseTest struct {
|
||||
id jsonrpcid
|
||||
expected string
|
||||
}
|
||||
|
||||
var responseTests = []responseTest{
|
||||
{JSONRPCStringID("1"), `"1"`},
|
||||
{JSONRPCStringID("alphabet"), `"alphabet"`},
|
||||
{JSONRPCStringID(""), `""`},
|
||||
{JSONRPCStringID("àáâ"), `"àáâ"`},
|
||||
{JSONRPCIntID(-1), "-1"},
|
||||
{JSONRPCIntID(0), "0"},
|
||||
{JSONRPCIntID(1), "1"},
|
||||
{JSONRPCIntID(100), "100"},
|
||||
}
|
||||
|
||||
func TestResponses(t *testing.T) {
|
||||
assert := assert.New(t)
|
||||
cdc := amino.NewCodec()
|
||||
for _, tt := range responseTests {
|
||||
jsonid := tt.id
|
||||
a := NewRPCSuccessResponse(cdc, jsonid, &SampleResult{"hello"})
|
||||
b, _ := json.Marshal(a)
|
||||
s := fmt.Sprintf(`{"jsonrpc":"2.0","id":%v,"result":{"Value":"hello"}}`, tt.expected)
|
||||
assert.Equal(string(s), string(b))
|
||||
|
||||
a := NewRPCSuccessResponse(cdc, "1", &SampleResult{"hello"})
|
||||
b, _ := json.Marshal(a)
|
||||
s := `{"jsonrpc":"2.0","id":"1","result":{"Value":"hello"}}`
|
||||
assert.Equal(string(s), string(b))
|
||||
d := RPCParseError(jsonid, errors.New("Hello world"))
|
||||
e, _ := json.Marshal(d)
|
||||
f := fmt.Sprintf(`{"jsonrpc":"2.0","id":%v,"error":{"code":-32700,"message":"Parse error. Invalid JSON","data":"Hello world"}}`, tt.expected)
|
||||
assert.Equal(string(f), string(e))
|
||||
|
||||
d := RPCParseError("1", errors.New("Hello world"))
|
||||
e, _ := json.Marshal(d)
|
||||
f := `{"jsonrpc":"2.0","id":"1","error":{"code":-32700,"message":"Parse error. Invalid JSON","data":"Hello world"}}`
|
||||
assert.Equal(string(f), string(e))
|
||||
g := RPCMethodNotFoundError(jsonid)
|
||||
h, _ := json.Marshal(g)
|
||||
i := fmt.Sprintf(`{"jsonrpc":"2.0","id":%v,"error":{"code":-32601,"message":"Method not found"}}`, tt.expected)
|
||||
assert.Equal(string(h), string(i))
|
||||
}
|
||||
}
|
||||
|
||||
g := RPCMethodNotFoundError("2")
|
||||
h, _ := json.Marshal(g)
|
||||
i := `{"jsonrpc":"2.0","id":"2","error":{"code":-32601,"message":"Method not found"}}`
|
||||
assert.Equal(string(h), string(i))
|
||||
func TestUnmarshallResponses(t *testing.T) {
|
||||
assert := assert.New(t)
|
||||
cdc := amino.NewCodec()
|
||||
for _, tt := range responseTests {
|
||||
response := &RPCResponse{}
|
||||
err := json.Unmarshal([]byte(fmt.Sprintf(`{"jsonrpc":"2.0","id":%v,"result":{"Value":"hello"}}`, tt.expected)), response)
|
||||
assert.Nil(err)
|
||||
a := NewRPCSuccessResponse(cdc, tt.id, &SampleResult{"hello"})
|
||||
assert.Equal(*response, a)
|
||||
}
|
||||
response := &RPCResponse{}
|
||||
err := json.Unmarshal([]byte(`{"jsonrpc":"2.0","id":true,"result":{"Value":"hello"}}`), response)
|
||||
assert.NotNil(err)
|
||||
}
|
||||
|
||||
func TestRPCError(t *testing.T) {
|
||||
|
@@ -107,8 +107,22 @@ func (blockExec *BlockExecutor) ApplyBlock(state State, blockID types.BlockID, b
|
||||
|
||||
fail.Fail() // XXX
|
||||
|
||||
// validate the validator updates and convert to tendermint types
|
||||
abciValUpdates := abciResponses.EndBlock.ValidatorUpdates
|
||||
err = validateValidatorUpdates(abciValUpdates, state.ConsensusParams.Validator)
|
||||
if err != nil {
|
||||
return state, fmt.Errorf("Error in validator updates: %v", err)
|
||||
}
|
||||
validatorUpdates, err := types.PB2TM.ValidatorUpdates(abciValUpdates)
|
||||
if err != nil {
|
||||
return state, err
|
||||
}
|
||||
if len(validatorUpdates) > 0 {
|
||||
blockExec.logger.Info("Updates to validators", "updates", makeValidatorUpdatesLogString(validatorUpdates))
|
||||
}
|
||||
|
||||
// Update the state with the block and responses.
|
||||
state, err = updateState(blockExec.logger, state, blockID, &block.Header, abciResponses)
|
||||
state, err = updateState(state, blockID, &block.Header, abciResponses, validatorUpdates)
|
||||
if err != nil {
|
||||
return state, fmt.Errorf("Commit failed for application: %v", err)
|
||||
}
|
||||
@@ -132,7 +146,7 @@ func (blockExec *BlockExecutor) ApplyBlock(state State, blockID types.BlockID, b
|
||||
|
||||
// Events are fired after everything else.
|
||||
// NOTE: if we crash between Commit and Save, events wont be fired during replay
|
||||
fireEvents(blockExec.logger, blockExec.eventBus, block, abciResponses)
|
||||
fireEvents(blockExec.logger, blockExec.eventBus, block, abciResponses, validatorUpdates)
|
||||
|
||||
return state, nil
|
||||
}
|
||||
@@ -226,8 +240,9 @@ func execBlockOnProxyApp(
|
||||
|
||||
commitInfo, byzVals := getBeginBlockValidatorInfo(block, lastValSet, stateDB)
|
||||
|
||||
// Begin block.
|
||||
_, err := proxyAppConn.BeginBlockSync(abci.RequestBeginBlock{
|
||||
// Begin block
|
||||
var err error
|
||||
abciResponses.BeginBlock, err = proxyAppConn.BeginBlockSync(abci.RequestBeginBlock{
|
||||
Hash: block.Hash(),
|
||||
Header: types.TM2PB.Header(&block.Header),
|
||||
LastCommitInfo: commitInfo,
|
||||
@@ -307,53 +322,98 @@ func getBeginBlockValidatorInfo(block *types.Block, lastValSet *types.ValidatorS

}

func validateValidatorUpdates(abciUpdates []abci.ValidatorUpdate,
    params types.ValidatorParams) error {
    for _, valUpdate := range abciUpdates {
        if valUpdate.GetPower() < 0 {
            return fmt.Errorf("Voting power can't be negative %v", valUpdate)
        } else if valUpdate.GetPower() == 0 {
            // continue, since this is deleting the validator, and thus there is no
            // pubkey to check
            continue
        }

        // Check if validator's pubkey matches an ABCI type in the consensus params
        thisKeyType := valUpdate.PubKey.Type
        if !params.IsValidPubkeyType(thisKeyType) {
            return fmt.Errorf("Validator %v is using pubkey %s, which is unsupported for consensus",
                valUpdate, thisKeyType)
        }
    }
    return nil
}

// If more or equal than 1/3 of total voting power changed in one block, then
// a light client could never prove the transition externally. See
// ./lite/doc.go for details on how a light client tracks validators.
func updateValidators(currentSet *types.ValidatorSet, abciUpdates []abci.ValidatorUpdate) ([]*types.Validator, error) {
    updates, err := types.PB2TM.ValidatorUpdates(abciUpdates)
    if err != nil {
        return nil, err
    }

// these are tendermint types now
func updateValidators(currentSet *types.ValidatorSet, updates []*types.Validator) error {
    for _, valUpdate := range updates {
        // should already have been checked
        if valUpdate.VotingPower < 0 {
            return nil, fmt.Errorf("Voting power can't be negative %v", valUpdate)
            return fmt.Errorf("Voting power can't be negative %v", valUpdate)
        }

        address := valUpdate.Address
        _, val := currentSet.GetByAddress(address)
        if valUpdate.VotingPower == 0 {
            // remove val
        // valUpdate.VotingPower is ensured to be non-negative in validation method
        if valUpdate.VotingPower == 0 { // remove val
            _, removed := currentSet.Remove(address)
            if !removed {
                return nil, fmt.Errorf("Failed to remove validator %X", address)
                return fmt.Errorf("Failed to remove validator %X", address)
            }
        } else if val == nil {
            // add val
        } else if val == nil { // add val
            // make sure we do not exceed MaxTotalVotingPower by adding this validator:
            totalVotingPower := currentSet.TotalVotingPower()
            updatedVotingPower := valUpdate.VotingPower + totalVotingPower
            overflow := updatedVotingPower > types.MaxTotalVotingPower || updatedVotingPower < 0
            if overflow {
                return fmt.Errorf(
                    "Failed to add new validator %v. Adding it would exceed max allowed total voting power %v",
                    valUpdate,
                    types.MaxTotalVotingPower)
            }
            // TODO: issue #1558 update spec according to the following:
            // Set ProposerPriority to -C*totalVotingPower (with C ~= 1.125) to make sure validators can't
            // unbond/rebond to reset their (potentially previously negative) ProposerPriority to zero.
            //
            // Contract: totalVotingPower < MaxTotalVotingPower to ensure ProposerPriority does
            // not exceed the bounds of int64.
            //
            // Compute ProposerPriority = -1.125*totalVotingPower == -(totalVotingPower + (totalVotingPower >> 3)).
            valUpdate.ProposerPriority = -(totalVotingPower + (totalVotingPower >> 3))
            added := currentSet.Add(valUpdate)
            if !added {
                return nil, fmt.Errorf("Failed to add new validator %v", valUpdate)
                return fmt.Errorf("Failed to add new validator %v", valUpdate)
            }
        } else {
            // update val
        } else { // update val
            // make sure we do not exceed MaxTotalVotingPower by updating this validator:
            totalVotingPower := currentSet.TotalVotingPower()
            curVotingPower := val.VotingPower
            updatedVotingPower := totalVotingPower - curVotingPower + valUpdate.VotingPower
            overflow := updatedVotingPower > types.MaxTotalVotingPower || updatedVotingPower < 0
            if overflow {
                return fmt.Errorf(
                    "Failed to update existing validator %v. Updating it would exceed max allowed total voting power %v",
                    valUpdate,
                    types.MaxTotalVotingPower)
            }

            updated := currentSet.Update(valUpdate)
            if !updated {
                return nil, fmt.Errorf("Failed to update validator %X to %v", address, valUpdate)
                return fmt.Errorf("Failed to update validator %X to %v", address, valUpdate)
            }
        }
    }
    return updates, nil
    return nil
}
|
||||
|
||||
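The comment block above pins down the ProposerPriority assigned to a newly added validator: -1.125 times the current total voting power, computed as -(total + total>>3) to stay in integer arithmetic. A tiny standalone check of that identity with illustrative numbers:

```go
package main

import "fmt"

func main() {
	total := int64(800) // current total voting power (illustrative)
	// New validators start at -1.125*total, computed in integer arithmetic as
	// -(total + total>>3); for totals not divisible by 8 the shift truncates.
	priority := -(total + (total >> 3))
	fmt.Println(priority)                                    // -900
	fmt.Println(float64(priority) == -1.125*float64(total))  // true for this choice of total
}
```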
// updateState returns a new State updated according to the header and responses.
func updateState(
    logger log.Logger,
    state State,
    blockID types.BlockID,
    header *types.Header,
    abciResponses *ABCIResponses,
    validatorUpdates []*types.Validator,
) (State, error) {

    // Copy the valset so we can apply changes from EndBlock
@@ -362,19 +422,17 @@ func updateState(

    // Update the validator set with the latest abciResponses.
    lastHeightValsChanged := state.LastHeightValidatorsChanged
    if len(abciResponses.EndBlock.ValidatorUpdates) > 0 {
        validatorUpdates, err := updateValidators(nValSet, abciResponses.EndBlock.ValidatorUpdates)
    if len(validatorUpdates) > 0 {
        err := updateValidators(nValSet, validatorUpdates)
        if err != nil {
            return state, fmt.Errorf("Error changing validator set: %v", err)
        }
        // Change results from this height but only applies to the next next height.
        lastHeightValsChanged = header.Height + 1 + 1

        logger.Info("Updates to validators", "updates", makeValidatorUpdatesLogString(validatorUpdates))
    }

    // Update validator accums and set state variables.
    nValSet.IncrementAccum(1)
    // Update validator proposer priority and set state variables.
    nValSet.IncrementProposerPriority(1)

    // Update the params with the latest abciResponses.
    nextParams := state.ConsensusParams
@@ -416,9 +474,17 @@ func updateState(
// Fire NewBlock, NewBlockHeader.
// Fire TxEvent for every tx.
// NOTE: if Tendermint crashes before commit, some or all of these events may be published again.
func fireEvents(logger log.Logger, eventBus types.BlockEventPublisher, block *types.Block, abciResponses *ABCIResponses) {
    eventBus.PublishEventNewBlock(types.EventDataNewBlock{block})
    eventBus.PublishEventNewBlockHeader(types.EventDataNewBlockHeader{block.Header})
func fireEvents(logger log.Logger, eventBus types.BlockEventPublisher, block *types.Block, abciResponses *ABCIResponses, validatorUpdates []*types.Validator) {
    eventBus.PublishEventNewBlock(types.EventDataNewBlock{
        Block: block,
        ResultBeginBlock: *abciResponses.BeginBlock,
        ResultEndBlock: *abciResponses.EndBlock,
    })
    eventBus.PublishEventNewBlockHeader(types.EventDataNewBlockHeader{
        Header: block.Header,
        ResultBeginBlock: *abciResponses.BeginBlock,
        ResultEndBlock: *abciResponses.EndBlock,
    })

    for i, tx := range block.Data.Txs {
        eventBus.PublishEventTx(types.EventDataTx{types.TxResult{
@@ -429,12 +495,9 @@ func fireEvents(logger log.Logger, eventBus types.BlockEventPublisher, block *ty
        }})
    }

    abciValUpdates := abciResponses.EndBlock.ValidatorUpdates
    if len(abciValUpdates) > 0 {
        // if there were an error, we would've stopped in updateValidators
        updates, _ := types.PB2TM.ValidatorUpdates(abciValUpdates)
    if len(validatorUpdates) > 0 {
        eventBus.PublishEventValidatorSetUpdates(
            types.EventDataValidatorSetUpdates{ValidatorUpdates: updates})
            types.EventDataValidatorSetUpdates{ValidatorUpdates: validatorUpdates})
    }
}
|
||||
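fireEvents now attaches the BeginBlock/EndBlock responses to the new-block events, which is what makes their tags subscribable over RPC (\#2747 in the changelog). A hedged sketch of consuming them through the websocket client; the address, subscriber name, and field access assume the v0.26.4-era `rpc/client` API:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/tendermint/tendermint/rpc/client"
	"github.com/tendermint/tendermint/types"
)

func main() {
	c := client.NewHTTP("tcp://localhost:26657", "/websocket")
	if err := c.Start(); err != nil {
		panic(err)
	}
	defer c.Stop()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	out := make(chan interface{})
	if err := c.Subscribe(ctx, "demo", types.EventQueryNewBlock, out); err != nil {
		panic(err)
	}
	ev := (<-out).(types.EventDataNewBlock)
	// The EndBlock tags carried here are what tag-based subscriptions match on.
	fmt.Println(ev.Block.Height, len(ev.ResultEndBlock.Tags))
}
```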
|
||||
|
@@ -12,6 +12,7 @@ import (
|
||||
"github.com/tendermint/tendermint/abci/example/kvstore"
|
||||
abci "github.com/tendermint/tendermint/abci/types"
|
||||
"github.com/tendermint/tendermint/crypto/ed25519"
|
||||
"github.com/tendermint/tendermint/crypto/secp256k1"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
dbm "github.com/tendermint/tendermint/libs/db"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
@@ -152,6 +153,76 @@ func TestBeginBlockByzantineValidators(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func TestValidateValidatorUpdates(t *testing.T) {
|
||||
pubkey1 := ed25519.GenPrivKey().PubKey()
|
||||
pubkey2 := ed25519.GenPrivKey().PubKey()
|
||||
|
||||
secpKey := secp256k1.GenPrivKey().PubKey()
|
||||
|
||||
defaultValidatorParams := types.ValidatorParams{[]string{types.ABCIPubKeyTypeEd25519}}
|
||||
|
||||
testCases := []struct {
|
||||
name string
|
||||
|
||||
abciUpdates []abci.ValidatorUpdate
|
||||
validatorParams types.ValidatorParams
|
||||
|
||||
shouldErr bool
|
||||
}{
|
||||
{
|
||||
"adding a validator is OK",
|
||||
|
||||
[]abci.ValidatorUpdate{{PubKey: types.TM2PB.PubKey(pubkey2), Power: 20}},
|
||||
defaultValidatorParams,
|
||||
|
||||
false,
|
||||
},
|
||||
{
|
||||
"updating a validator is OK",
|
||||
|
||||
[]abci.ValidatorUpdate{{PubKey: types.TM2PB.PubKey(pubkey1), Power: 20}},
|
||||
defaultValidatorParams,
|
||||
|
||||
false,
|
||||
},
|
||||
{
|
||||
"removing a validator is OK",
|
||||
|
||||
[]abci.ValidatorUpdate{{PubKey: types.TM2PB.PubKey(pubkey2), Power: 0}},
|
||||
defaultValidatorParams,
|
||||
|
||||
false,
|
||||
},
|
||||
{
|
||||
"adding a validator with negative power results in error",
|
||||
|
||||
[]abci.ValidatorUpdate{{PubKey: types.TM2PB.PubKey(pubkey2), Power: -100}},
|
||||
defaultValidatorParams,
|
||||
|
||||
true,
|
||||
},
|
||||
{
|
||||
"adding a validator with pubkey thats not in validator params results in error",
|
||||
|
||||
[]abci.ValidatorUpdate{{PubKey: types.TM2PB.PubKey(secpKey), Power: -100}},
|
||||
defaultValidatorParams,
|
||||
|
||||
true,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tc := range testCases {
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
err := validateValidatorUpdates(tc.abciUpdates, tc.validatorParams)
|
||||
if tc.shouldErr {
|
||||
assert.Error(t, err)
|
||||
} else {
|
||||
assert.NoError(t, err)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestUpdateValidators(t *testing.T) {
|
||||
pubkey1 := ed25519.GenPrivKey().PubKey()
|
||||
val1 := types.NewValidator(pubkey1, 10)
|
||||
@@ -194,7 +265,6 @@ func TestUpdateValidators(t *testing.T) {
|
||||
types.NewValidatorSet([]*types.Validator{val1}),
|
||||
false,
|
||||
},
|
||||
|
||||
{
|
||||
"removing a non-existing validator results in error",
|
||||
|
||||
@@ -204,24 +274,17 @@ func TestUpdateValidators(t *testing.T) {
|
||||
types.NewValidatorSet([]*types.Validator{val1}),
|
||||
true,
|
||||
},
|
||||
|
||||
{
|
||||
"adding a validator with negative power results in error",
|
||||
|
||||
types.NewValidatorSet([]*types.Validator{val1}),
|
||||
[]abci.ValidatorUpdate{{PubKey: types.TM2PB.PubKey(pubkey2), Power: -100}},
|
||||
|
||||
types.NewValidatorSet([]*types.Validator{val1}),
|
||||
true,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tc := range testCases {
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
_, err := updateValidators(tc.currentSet, tc.abciUpdates)
|
||||
updates, err := types.PB2TM.ValidatorUpdates(tc.abciUpdates)
|
||||
assert.NoError(t, err)
|
||||
err = updateValidators(tc.currentSet, updates)
|
||||
if tc.shouldErr {
|
||||
assert.Error(t, err)
|
||||
} else {
|
||||
assert.NoError(t, err)
|
||||
require.Equal(t, tc.resultingSet.Size(), tc.currentSet.Size())
|
||||
|
||||
assert.Equal(t, tc.resultingSet.TotalVotingPower(), tc.currentSet.TotalVotingPower())
|
||||
|
@@ -64,7 +64,8 @@ type State struct {
|
||||
// Validators are persisted to the database separately every time they change,
|
||||
// so we can query for historical validator sets.
|
||||
// Note that if s.LastBlockHeight causes a valset change,
|
||||
// we set s.LastHeightValidatorsChanged = s.LastBlockHeight + 1
|
||||
// we set s.LastHeightValidatorsChanged = s.LastBlockHeight + 1 + 1
|
||||
// Extra +1 due to nextValSet delay.
|
||||
NextValidators *types.ValidatorSet
|
||||
Validators *types.ValidatorSet
|
||||
LastValidators *types.ValidatorSet
|
||||
@@ -225,7 +226,7 @@ func MakeGenesisState(genDoc *types.GenesisDoc) (State, error) {
|
||||
validators[i] = types.NewValidator(val.PubKey, val.Power)
|
||||
}
|
||||
validatorSet = types.NewValidatorSet(validators)
|
||||
nextValidatorSet = types.NewValidatorSet(validators).CopyIncrementAccum(1)
|
||||
nextValidatorSet = types.NewValidatorSet(validators).CopyIncrementProposerPriority(1)
|
||||
}
|
||||
|
||||
return State{
|
||||
|
@@ -3,13 +3,13 @@ package state
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
abci "github.com/tendermint/tendermint/abci/types"
|
||||
crypto "github.com/tendermint/tendermint/crypto"
|
||||
"github.com/tendermint/tendermint/crypto"
|
||||
"github.com/tendermint/tendermint/crypto/ed25519"
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
dbm "github.com/tendermint/tendermint/libs/db"
|
||||
@@ -133,8 +133,8 @@ func TestABCIResponsesSaveLoad2(t *testing.T) {
|
||||
{Code: 383},
|
||||
{Data: []byte("Gotcha!"),
|
||||
Tags: []cmn.KVPair{
|
||||
cmn.KVPair{Key: []byte("a"), Value: []byte("1")},
|
||||
cmn.KVPair{Key: []byte("build"), Value: []byte("stuff")},
|
||||
{Key: []byte("a"), Value: []byte("1")},
|
||||
{Key: []byte("build"), Value: []byte("stuff")},
|
||||
}},
|
||||
},
|
||||
types.ABCIResults{
|
||||
@@ -222,6 +222,7 @@ func TestOneValidatorChangesSaveLoad(t *testing.T) {
|
||||
_, val := state.Validators.GetByIndex(0)
|
||||
power := val.VotingPower
|
||||
var err error
|
||||
var validatorUpdates []*types.Validator
|
||||
for i := int64(1); i < highestHeight; i++ {
|
||||
// When we get to a change height, use the next pubkey.
|
||||
if changeIndex < len(changeHeights) && i == changeHeights[changeIndex] {
|
||||
@@ -229,8 +230,10 @@ func TestOneValidatorChangesSaveLoad(t *testing.T) {
|
||||
power++
|
||||
}
|
||||
header, blockID, responses := makeHeaderPartsResponsesValPowerChange(state, i, power)
|
||||
state, err = updateState(log.TestingLogger(), state, blockID, &header, responses)
|
||||
assert.Nil(t, err)
|
||||
validatorUpdates, err = types.PB2TM.ValidatorUpdates(responses.EndBlock.ValidatorUpdates)
|
||||
require.NoError(t, err)
|
||||
state, err = updateState(state, blockID, &header, responses, validatorUpdates)
|
||||
require.NoError(t, err)
|
||||
nextHeight := state.LastBlockHeight + 1
|
||||
saveValidatorsInfo(stateDB, nextHeight+1, state.LastHeightValidatorsChanged, state.NextValidators)
|
||||
}
|
||||
@@ -260,6 +263,27 @@ func TestOneValidatorChangesSaveLoad(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func TestStoreLoadValidatorsIncrementsProposerPriority(t *testing.T) {
|
||||
const valSetSize = 2
|
||||
tearDown, stateDB, state := setupTestCase(t)
|
||||
state.Validators = genValSet(valSetSize)
|
||||
state.NextValidators = state.Validators.CopyIncrementProposerPriority(1)
|
||||
SaveState(stateDB, state)
|
||||
defer tearDown(t)
|
||||
|
||||
nextHeight := state.LastBlockHeight + 1
|
||||
|
||||
v0, err := LoadValidators(stateDB, nextHeight)
|
||||
assert.Nil(t, err)
|
||||
acc0 := v0.Validators[0].ProposerPriority
|
||||
|
||||
v1, err := LoadValidators(stateDB, nextHeight+1)
|
||||
assert.Nil(t, err)
|
||||
acc1 := v1.Validators[0].ProposerPriority
|
||||
|
||||
assert.NotEqual(t, acc1, acc0, "expected ProposerPriority value to change between heights")
|
||||
}
|
||||
|
||||
// TestValidatorChangesSaveLoad tests saving and loading a validator set with
|
||||
// changes.
|
||||
func TestManyValidatorChangesSaveLoad(t *testing.T) {
|
||||
@@ -267,7 +291,7 @@ func TestManyValidatorChangesSaveLoad(t *testing.T) {
|
||||
tearDown, stateDB, state := setupTestCase(t)
|
||||
require.Equal(t, int64(0), state.LastBlockHeight)
|
||||
state.Validators = genValSet(valSetSize)
|
||||
state.NextValidators = state.Validators.CopyIncrementAccum(1)
|
||||
state.NextValidators = state.Validators.CopyIncrementProposerPriority(1)
|
||||
SaveState(stateDB, state)
|
||||
defer tearDown(t)
|
||||
|
||||
@@ -281,7 +305,10 @@ func TestManyValidatorChangesSaveLoad(t *testing.T) {
|
||||
|
||||
// Save state etc.
|
||||
var err error
|
||||
state, err = updateState(log.TestingLogger(), state, blockID, &header, responses)
|
||||
var validatorUpdates []*types.Validator
|
||||
validatorUpdates, err = types.PB2TM.ValidatorUpdates(responses.EndBlock.ValidatorUpdates)
|
||||
require.NoError(t, err)
|
||||
state, err = updateState(state, blockID, &header, responses, validatorUpdates)
|
||||
require.Nil(t, err)
|
||||
nextHeight := state.LastBlockHeight + 1
|
||||
saveValidatorsInfo(stateDB, nextHeight+1, state.LastHeightValidatorsChanged, state.NextValidators)
|
||||
@@ -353,6 +380,7 @@ func TestConsensusParamsChangesSaveLoad(t *testing.T) {
|
||||
changeIndex := 0
|
||||
cp := params[changeIndex]
|
||||
var err error
|
||||
var validatorUpdates []*types.Validator
|
||||
for i := int64(1); i < highestHeight; i++ {
|
||||
// When we get to a change height, use the next params.
|
||||
if changeIndex < len(changeHeights) && i == changeHeights[changeIndex] {
|
||||
@@ -360,7 +388,9 @@ func TestConsensusParamsChangesSaveLoad(t *testing.T) {
|
||||
cp = params[changeIndex]
|
||||
}
|
||||
header, blockID, responses := makeHeaderPartsResponsesParams(state, i, cp)
|
||||
state, err = updateState(log.TestingLogger(), state, blockID, &header, responses)
|
||||
validatorUpdates, err = types.PB2TM.ValidatorUpdates(responses.EndBlock.ValidatorUpdates)
|
||||
require.NoError(t, err)
|
||||
state, err = updateState(state, blockID, &header, responses, validatorUpdates)
|
||||
|
||||
require.Nil(t, err)
|
||||
nextHeight := state.LastBlockHeight + 1
|
||||
|
@@ -89,7 +89,9 @@ func saveState(db dbm.DB, state State, key []byte) {
|
||||
nextHeight := state.LastBlockHeight + 1
|
||||
// If first block, save validators for block 1.
|
||||
if nextHeight == 1 {
|
||||
lastHeightVoteChanged := int64(1) // Due to Tendermint validator set changes being delayed 1 block.
|
||||
// This extra logic due to Tendermint validator set changes being delayed 1 block.
|
||||
// It may get overwritten due to InitChain validator updates.
|
||||
lastHeightVoteChanged := int64(1)
|
||||
saveValidatorsInfo(db, nextHeight, lastHeightVoteChanged, state.Validators)
|
||||
}
|
||||
// Save next validators.
|
||||
@@ -105,8 +107,9 @@ func saveState(db dbm.DB, state State, key []byte) {
|
||||
// of the various ABCI calls during block processing.
|
||||
// It is persisted to disk for each height before calling Commit.
|
||||
type ABCIResponses struct {
|
||||
DeliverTx []*abci.ResponseDeliverTx
|
||||
EndBlock *abci.ResponseEndBlock
|
||||
DeliverTx []*abci.ResponseDeliverTx
|
||||
EndBlock *abci.ResponseEndBlock
|
||||
BeginBlock *abci.ResponseBeginBlock
|
||||
}
|
||||
|
||||
// NewABCIResponses returns a new ABCIResponses
|
||||
@@ -191,12 +194,14 @@ func LoadValidators(db dbm.DB, height int64) (*types.ValidatorSet, error) {
|
||||
),
|
||||
)
|
||||
}
|
||||
valInfo2.ValidatorSet.IncrementProposerPriority(int(height - valInfo.LastHeightChanged)) // mutate
|
||||
valInfo = valInfo2
|
||||
}
|
||||
|
||||
return valInfo.ValidatorSet, nil
|
||||
}
|
||||
|
||||
// CONTRACT: Returned ValidatorsInfo can be mutated.
|
||||
func loadValidatorsInfo(db dbm.DB, height int64) *ValidatorsInfo {
|
||||
buf := db.Get(calcValidatorsKey(height))
|
||||
if len(buf) == 0 {
|
||||
@@ -215,18 +220,22 @@ func loadValidatorsInfo(db dbm.DB, height int64) *ValidatorsInfo {
 	return v
 }

-// saveValidatorsInfo persists the validator set for the next block to disk.
+// saveValidatorsInfo persists the validator set.
+// `height` is the effective height for which the validator is responsible for signing.
 // It should be called from s.Save(), right before the state itself is persisted.
 // If the validator set did not change after processing the latest block,
 // only the last height for which the validators changed is persisted.
-func saveValidatorsInfo(db dbm.DB, nextHeight, changeHeight int64, valSet *types.ValidatorSet) {
-	valInfo := &ValidatorsInfo{
-		LastHeightChanged: changeHeight,
+func saveValidatorsInfo(db dbm.DB, height, lastHeightChanged int64, valSet *types.ValidatorSet) {
+	if lastHeightChanged > height {
+		panic("LastHeightChanged cannot be greater than ValidatorsInfo height")
 	}
-	if changeHeight == nextHeight {
+	valInfo := &ValidatorsInfo{
+		LastHeightChanged: lastHeightChanged,
+	}
+	if lastHeightChanged == height {
 		valInfo.ValidatorSet = valSet
 	}
-	db.Set(calcValidatorsKey(nextHeight), valInfo.Bytes())
+	db.Set(calcValidatorsKey(height), valInfo.Bytes())
 }

 //-----------------------------------------------------------------------------
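The contract of the new `saveValidatorsInfo`/`LoadValidators` pair is easier to see in isolation: the full validator set is written only at the height where it last changed, and every later height stores just a `LastHeightChanged` pointer back to it. Below is a minimal, self-contained sketch of that scheme (a plain map and a string slice stand in for `dbm.DB` and `*types.ValidatorSet`; this is an illustration, not the repo's code):

```go
package main

import "fmt"

// valInfo models ValidatorsInfo: either the full set is stored at a height,
// or only a pointer to the height at which the set last changed.
type valInfo struct {
	set               []string // stand-in for *types.ValidatorSet
	lastHeightChanged int64
}

var store = map[int64]valInfo{} // stand-in for dbm.DB

func save(height, lastHeightChanged int64, set []string) {
	if lastHeightChanged > height {
		panic("LastHeightChanged cannot be greater than ValidatorsInfo height")
	}
	vi := valInfo{lastHeightChanged: lastHeightChanged}
	if lastHeightChanged == height {
		vi.set = set // the full set is persisted only at the height it changed
	}
	store[height] = vi
}

func load(height int64) []string {
	vi := store[height]
	if vi.set == nil {
		vi = store[vi.lastHeightChanged] // follow the pointer back
	}
	return vi.set
}

func main() {
	vals := []string{"val1", "val2"}
	save(1, 1, vals) // set changed at height 1: stored in full
	save(2, 1, vals) // unchanged: only LastHeightChanged=1 is stored
	save(3, 1, vals)
	fmt.Println(load(3)) // [val1 val2]
}
```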
@@ -207,8 +207,11 @@ func (txi *TxIndex) Search(q *query.Query) ([]*types.TxResult, error) {
 		i++
 	}

-	// sort by height by default
+	// sort by height & index by default
 	sort.Slice(results, func(i, j int) bool {
+		if results[i].Height == results[j].Height {
+			return results[i].Index < results[j].Index
+		}
 		return results[i].Height < results[j].Height
 	})

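The two-level ordering above is the usual `sort.Slice` pattern of falling back to a secondary key only on ties; a self-contained illustration (a hypothetical `result` struct stands in for `*types.TxResult`):

```go
package main

import (
	"fmt"
	"sort"
)

type result struct{ Height, Index int64 }

func main() {
	results := []result{{Height: 2, Index: 0}, {Height: 1, Index: 2}, {Height: 1, Index: 1}}

	sort.Slice(results, func(i, j int) bool {
		if results[i].Height == results[j].Height {
			return results[i].Index < results[j].Index // tie-break on index
		}
		return results[i].Height < results[j].Height // primary key: height
	})

	fmt.Println(results) // [{1 1} {1 2} {2 0}]
}
```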
@@ -225,9 +228,10 @@ func lookForHash(conditions []query.Condition) (hash []byte, err error, ok bool)
 	return
 }

+// lookForHeight returns a height if there is an "height=X" condition.
 func lookForHeight(conditions []query.Condition) (height int64) {
 	for _, c := range conditions {
-		if c.Tag == types.TxHeightKey {
+		if c.Tag == types.TxHeightKey && c.Op == query.OpEqual {
 			return c.Operand.(int64)
 		}
 	}
@@ -408,9 +412,9 @@ LOOP:
 func startKey(c query.Condition, height int64) []byte {
 	var key string
 	if height > 0 {
-		key = fmt.Sprintf("%s/%v/%d", c.Tag, c.Operand, height)
+		key = fmt.Sprintf("%s/%v/%d/", c.Tag, c.Operand, height)
 	} else {
-		key = fmt.Sprintf("%s/%v", c.Tag, c.Operand)
+		key = fmt.Sprintf("%s/%v/", c.Tag, c.Operand)
 	}
 	return []byte(key)
 }
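The trailing `/` matters because the kv indexer scans stored keys by prefix. Without the separator, a query such as `account.owner = 'Iv'` (exercised in the test below) would also match keys written for the value `Ivan`. A small self-contained sketch of the difference, using `strings.HasPrefix` in place of the DB iterator and a simplified `tag/value/height/index` key layout:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	storedKey := "account.owner/Ivan/1/0" // key written when the tx was indexed

	oldPrefix := "account.owner/Iv"  // no trailing separator
	newPrefix := "account.owner/Iv/" // value terminated by "/"

	fmt.Println(strings.HasPrefix(storedKey, oldPrefix)) // true: false positive
	fmt.Println(strings.HasPrefix(storedKey, newPrefix)) // false: exact value only
}
```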
@@ -73,6 +73,8 @@ func TestTxSearch(t *testing.T) {
 		{"account.number = 1 AND account.owner = 'Ivan'", 1},
 		// search by exact match (two tags)
 		{"account.number = 1 AND account.owner = 'Vlad'", 0},
+		// search using a prefix of the stored value
+		{"account.owner = 'Iv'", 0},
 		// search by range
 		{"account.number >= 1 AND account.number <= 5", 1},
 		// search by range (lower bound)
@@ -133,6 +135,7 @@ func TestTxSearchMultipleTxs(t *testing.T) {
 	})
 	txResult.Tx = types.Tx("Bob's account")
 	txResult.Height = 2
+	txResult.Index = 1
 	err := indexer.Index(txResult)
 	require.NoError(t, err)

@@ -142,14 +145,26 @@ func TestTxSearchMultipleTxs(t *testing.T) {
 	})
 	txResult2.Tx = types.Tx("Alice's account")
 	txResult2.Height = 1
+	txResult2.Index = 2
+
 	err = indexer.Index(txResult2)
 	require.NoError(t, err)

+	// indexed third (to test the order of transactions)
+	txResult3 := txResultWithTags([]cmn.KVPair{
+		{Key: []byte("account.number"), Value: []byte("3")},
+	})
+	txResult3.Tx = types.Tx("Jack's account")
+	txResult3.Height = 1
+	txResult3.Index = 1
+	err = indexer.Index(txResult3)
+	require.NoError(t, err)
+
 	results, err := indexer.Search(query.MustParse("account.number >= 1"))
 	assert.NoError(t, err)

-	require.Len(t, results, 2)
-	assert.Equal(t, []*types.TxResult{txResult2, txResult}, results)
+	require.Len(t, results, 3)
+	assert.Equal(t, []*types.TxResult{txResult3, txResult2, txResult}, results)
 }

 func TestIndexAllTags(t *testing.T) {
@@ -191,7 +191,7 @@ func (t *transacter) sendLoop(connIndex int) {
 		c.SetWriteDeadline(now.Add(sendTimeout))
 		err = c.WriteJSON(rpctypes.RPCRequest{
 			JSONRPC: "2.0",
-			ID:      "tm-bench",
+			ID:      rpctypes.JSONRPCStringID("tm-bench"),
 			Method:  t.BroadcastTxMethod,
 			Params:  rawParamsJSON,
 		})
@@ -7,7 +7,7 @@ import (

 	"github.com/pkg/errors"

-	crypto "github.com/tendermint/tendermint/crypto"
+	"github.com/tendermint/tendermint/crypto"
 	"github.com/tendermint/tendermint/libs/events"
 	"github.com/tendermint/tendermint/libs/log"
 	ctypes "github.com/tendermint/tendermint/rpc/core/types"
@@ -34,7 +34,7 @@ func TestNodeNewBlockReceived(t *testing.T) {
 	n.SendBlocksTo(blockCh)

 	blockHeader := tmtypes.Header{Height: 5}
-	emMock.Call("eventCallback", &em.EventMetric{}, tmtypes.EventDataNewBlockHeader{blockHeader})
+	emMock.Call("eventCallback", &em.EventMetric{}, tmtypes.EventDataNewBlockHeader{Header: blockHeader})

 	assert.Equal(t, int64(5), n.Height)
 	assert.Equal(t, blockHeader, <-blockCh)
@@ -275,6 +275,28 @@ func (b *Block) StringShort() string {
 	return fmt.Sprintf("Block#%v", b.Hash())
 }

+//-----------------------------------------------------------
+// These methods are for Protobuf Compatibility
+
+// Marshal returns the amino encoding.
+func (b *Block) Marshal() ([]byte, error) {
+	return cdc.MarshalBinaryBare(b)
+}
+
+// MarshalTo calls Marshal and copies to the given buffer.
+func (b *Block) MarshalTo(data []byte) (int, error) {
+	bs, err := b.Marshal()
+	if err != nil {
+		return -1, err
+	}
+	return copy(data, bs), nil
+}
+
+// Unmarshal deserializes from amino encoded form.
+func (b *Block) Unmarshal(bs []byte) error {
+	return cdc.UnmarshalBinaryBare(bs, b)
+}
+
 //-----------------------------------------------------------------------------

 // MaxDataBytes returns the maximum size of block's data.
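Assuming the usual amino round-trip behaviour, the new methods let a `Block` be marshalled and unmarshalled like any protobuf message. A hedged usage sketch (the `MakeBlock` arguments follow the call shape used in the tests elsewhere in this diff; this is not code from the repo):

```go
package main

import (
	"fmt"

	"github.com/tendermint/tendermint/types"
)

func main() {
	block := types.MakeBlock(1, []types.Tx{types.Tx("tx1")}, nil, nil)

	bz, err := block.Marshal() // amino encoding
	if err != nil {
		panic(err)
	}

	var decoded types.Block
	if err := decoded.Unmarshal(bz); err != nil { // amino decoding
		panic(err)
	}

	fmt.Println(decoded.Height) // 1
}
```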
@@ -13,3 +13,31 @@ func NewBlockMeta(block *Block, blockParts *PartSet) *BlockMeta {
 		Header:  block.Header,
 	}
 }
+
+//-----------------------------------------------------------
+// These methods are for Protobuf Compatibility
+
+// Size returns the size of the amino encoding, in bytes.
+func (bm *BlockMeta) Size() int {
+	bs, _ := bm.Marshal()
+	return len(bs)
+}
+
+// Marshal returns the amino encoding.
+func (bm *BlockMeta) Marshal() ([]byte, error) {
+	return cdc.MarshalBinaryBare(bm)
+}
+
+// MarshalTo calls Marshal and copies to the given buffer.
+func (bm *BlockMeta) MarshalTo(data []byte) (int, error) {
+	bs, err := bm.Marshal()
+	if err != nil {
+		return -1, err
+	}
+	return copy(data, bs), nil
+}
+
+// Unmarshal deserializes from amino encoded form.
+func (bm *BlockMeta) Unmarshal(bs []byte) error {
+	return cdc.UnmarshalBinaryBare(bs, bm)
+}
@@ -41,16 +41,6 @@ type CanonicalVote struct {
 	ChainID   string
 }

-type CanonicalHeartbeat struct {
-	Type             byte
-	Height           int64 `binary:"fixed64"`
-	Round            int   `binary:"fixed64"`
-	Sequence         int   `binary:"fixed64"`
-	ValidatorAddress Address
-	ValidatorIndex   int
-	ChainID          string
-}
-
 //-----------------------------------
 // Canonicalize the structs

||||
@@ -91,18 +81,6 @@ func CanonicalizeVote(chainID string, vote *Vote) CanonicalVote {
|
||||
}
|
||||
}
|
||||
|
||||
func CanonicalizeHeartbeat(chainID string, heartbeat *Heartbeat) CanonicalHeartbeat {
|
||||
return CanonicalHeartbeat{
|
||||
Type: byte(HeartbeatType),
|
||||
Height: heartbeat.Height,
|
||||
Round: heartbeat.Round,
|
||||
Sequence: heartbeat.Sequence,
|
||||
ValidatorAddress: heartbeat.ValidatorAddress,
|
||||
ValidatorIndex: heartbeat.ValidatorIndex,
|
||||
ChainID: chainID,
|
||||
}
|
||||
}
|
||||
|
||||
// CanonicalTime can be used to stringify time in a canonical way.
|
||||
func CanonicalTime(t time.Time) string {
|
||||
// Note that sending time over amino resets it to
|
||||
|
@@ -71,12 +71,48 @@ func (b *EventBus) Publish(eventType string, eventData TMEventData) error {
 	return nil
 }

+func (b *EventBus) validateAndStringifyTags(tags []cmn.KVPair, logger log.Logger) map[string]string {
+	result := make(map[string]string)
+	for _, tag := range tags {
+		// basic validation
+		if len(tag.Key) == 0 {
+			logger.Debug("Got tag with an empty key (skipping)", "tag", tag)
+			continue
+		}
+		result[string(tag.Key)] = string(tag.Value)
+	}
+	return result
+}
+
 func (b *EventBus) PublishEventNewBlock(data EventDataNewBlock) error {
-	return b.Publish(EventNewBlock, data)
+	// no explicit deadline for publishing events
+	ctx := context.Background()
+
+	resultTags := append(data.ResultBeginBlock.Tags, data.ResultEndBlock.Tags...)
+	tags := b.validateAndStringifyTags(resultTags, b.Logger.With("block", data.Block.StringShort()))
+
+	// add predefined tags
+	logIfTagExists(EventTypeKey, tags, b.Logger)
+	tags[EventTypeKey] = EventNewBlock
+
+	b.pubsub.PublishWithTags(ctx, data, tmpubsub.NewTagMap(tags))
+	return nil
 }

 func (b *EventBus) PublishEventNewBlockHeader(data EventDataNewBlockHeader) error {
-	return b.Publish(EventNewBlockHeader, data)
+	// no explicit deadline for publishing events
+	ctx := context.Background()
+
+	resultTags := append(data.ResultBeginBlock.Tags, data.ResultEndBlock.Tags...)
+	// TODO: Create StringShort method for Header and use it in logger.
+	tags := b.validateAndStringifyTags(resultTags, b.Logger.With("header", data.Header))
+
+	// add predefined tags
+	logIfTagExists(EventTypeKey, tags, b.Logger)
+	tags[EventTypeKey] = EventNewBlockHeader
+
+	b.pubsub.PublishWithTags(ctx, data, tmpubsub.NewTagMap(tags))
+	return nil
 }

 func (b *EventBus) PublishEventVote(data EventDataVote) error {
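Because the `BeginBlock`/`EndBlock` tags now travel with the published event, a subscriber can filter new-block events on application-defined tags. A minimal hedged sketch of the consumer side (`my.tag`/`foo` are hypothetical tag values; the added tests further down exercise the same path):

```go
package main

import (
	"context"
	"fmt"

	tmquery "github.com/tendermint/tendermint/libs/pubsub/query"
	"github.com/tendermint/tendermint/types"
)

func main() {
	eventBus := types.NewEventBus()
	if err := eventBus.Start(); err != nil {
		panic(err)
	}
	defer eventBus.Stop()

	// Only blocks whose BeginBlock/EndBlock responses carried my.tag='foo'
	// will be delivered on this channel.
	blocksCh := make(chan interface{})
	q := tmquery.MustParse("tm.event='NewBlock' AND my.tag='foo'")
	if err := eventBus.Subscribe(context.Background(), "example", q, blocksCh); err != nil {
		panic(err)
	}

	fmt.Println("subscribed:", q.String())
}
```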
@@ -94,17 +130,7 @@ func (b *EventBus) PublishEventTx(data EventDataTx) error {
 	// no explicit deadline for publishing events
 	ctx := context.Background()

-	tags := make(map[string]string)
-
-	// validate and fill tags from tx result
-	for _, tag := range data.Result.Tags {
-		// basic validation
-		if len(tag.Key) == 0 {
-			b.Logger.Info("Got tag with an empty key (skipping)", "tag", tag, "tx", data.Tx)
-			continue
-		}
-		tags[string(tag.Key)] = string(tag.Value)
-	}
+	tags := b.validateAndStringifyTags(data.Result.Tags, b.Logger.With("tx", data.Tx))

 	// add predefined tags
 	logIfTagExists(EventTypeKey, tags, b.Logger)
@@ -120,10 +146,6 @@ func (b *EventBus) PublishEventTx(data EventDataTx) error {
 	return nil
 }

-func (b *EventBus) PublishEventProposalHeartbeat(data EventDataProposalHeartbeat) error {
-	return b.Publish(EventProposalHeartbeat, data)
-}
-
 func (b *EventBus) PublishEventNewRoundStep(data EventDataRoundState) error {
 	return b.Publish(EventNewRoundStep, data)
 }
@@ -58,6 +58,90 @@ func TestEventBusPublishEventTx(t *testing.T) {
 	}
 }

+func TestEventBusPublishEventNewBlock(t *testing.T) {
+	eventBus := NewEventBus()
+	err := eventBus.Start()
+	require.NoError(t, err)
+	defer eventBus.Stop()
+
+	block := MakeBlock(0, []Tx{}, nil, []Evidence{})
+	resultBeginBlock := abci.ResponseBeginBlock{Tags: []cmn.KVPair{{Key: []byte("baz"), Value: []byte("1")}}}
+	resultEndBlock := abci.ResponseEndBlock{Tags: []cmn.KVPair{{Key: []byte("foz"), Value: []byte("2")}}}
+
+	txEventsCh := make(chan interface{})
+
+	// PublishEventNewBlock adds the tm.event tag, so the query below should work
+	query := "tm.event='NewBlock' AND baz=1 AND foz=2"
+	err = eventBus.Subscribe(context.Background(), "test", tmquery.MustParse(query), txEventsCh)
+	require.NoError(t, err)
+
+	done := make(chan struct{})
+	go func() {
+		for e := range txEventsCh {
+			edt := e.(EventDataNewBlock)
+			assert.Equal(t, block, edt.Block)
+			assert.Equal(t, resultBeginBlock, edt.ResultBeginBlock)
+			assert.Equal(t, resultEndBlock, edt.ResultEndBlock)
+			close(done)
+		}
+	}()
+
+	err = eventBus.PublishEventNewBlock(EventDataNewBlock{
+		Block:            block,
+		ResultBeginBlock: resultBeginBlock,
+		ResultEndBlock:   resultEndBlock,
+	})
+	assert.NoError(t, err)
+
+	select {
+	case <-done:
+	case <-time.After(1 * time.Second):
+		t.Fatal("did not receive a block after 1 sec.")
+	}
+}
+
+func TestEventBusPublishEventNewBlockHeader(t *testing.T) {
+	eventBus := NewEventBus()
+	err := eventBus.Start()
+	require.NoError(t, err)
+	defer eventBus.Stop()
+
+	block := MakeBlock(0, []Tx{}, nil, []Evidence{})
+	resultBeginBlock := abci.ResponseBeginBlock{Tags: []cmn.KVPair{{Key: []byte("baz"), Value: []byte("1")}}}
+	resultEndBlock := abci.ResponseEndBlock{Tags: []cmn.KVPair{{Key: []byte("foz"), Value: []byte("2")}}}
+
+	txEventsCh := make(chan interface{})
+
+	// PublishEventNewBlockHeader adds the tm.event tag, so the query below should work
+	query := "tm.event='NewBlockHeader' AND baz=1 AND foz=2"
+	err = eventBus.Subscribe(context.Background(), "test", tmquery.MustParse(query), txEventsCh)
+	require.NoError(t, err)
+
+	done := make(chan struct{})
+	go func() {
+		for e := range txEventsCh {
+			edt := e.(EventDataNewBlockHeader)
+			assert.Equal(t, block.Header, edt.Header)
+			assert.Equal(t, resultBeginBlock, edt.ResultBeginBlock)
+			assert.Equal(t, resultEndBlock, edt.ResultEndBlock)
+			close(done)
+		}
+	}()
+
+	err = eventBus.PublishEventNewBlockHeader(EventDataNewBlockHeader{
+		Header:           block.Header,
+		ResultBeginBlock: resultBeginBlock,
+		ResultEndBlock:   resultEndBlock,
+	})
+	assert.NoError(t, err)
+
+	select {
+	case <-done:
+	case <-time.After(1 * time.Second):
+		t.Fatal("did not receive a block header after 1 sec.")
+	}
+}
+
 func TestEventBusPublish(t *testing.T) {
 	eventBus := NewEventBus()
 	err := eventBus.Start()
@@ -68,7 +152,7 @@ func TestEventBusPublish(t *testing.T) {
 	err = eventBus.Subscribe(context.Background(), "test", tmquery.Empty{}, eventsCh)
 	require.NoError(t, err)

-	const numEventsExpected = 15
+	const numEventsExpected = 14
 	done := make(chan struct{})
 	go func() {
 		numEvents := 0
@@ -88,8 +172,6 @@ func TestEventBusPublish(t *testing.T) {
 	require.NoError(t, err)
 	err = eventBus.PublishEventVote(EventDataVote{})
 	require.NoError(t, err)
-	err = eventBus.PublishEventProposalHeartbeat(EventDataProposalHeartbeat{})
-	require.NoError(t, err)
 	err = eventBus.PublishEventNewRoundStep(EventDataRoundState{})
 	require.NoError(t, err)
 	err = eventBus.PublishEventTimeoutPropose(EventDataRoundState{})
@@ -4,6 +4,7 @@ import (
 	"fmt"

 	amino "github.com/tendermint/go-amino"
+	abci "github.com/tendermint/tendermint/abci/types"
 	tmpubsub "github.com/tendermint/tendermint/libs/pubsub"
 	tmquery "github.com/tendermint/tendermint/libs/pubsub/query"
 )
@@ -17,7 +18,6 @@ const (
 	EventNewRound          = "NewRound"
 	EventNewRoundStep      = "NewRoundStep"
 	EventPolka             = "Polka"
-	EventProposalHeartbeat = "ProposalHeartbeat"
 	EventRelock            = "Relock"
 	EventTimeoutPropose    = "TimeoutPropose"
 	EventTimeoutWait       = "TimeoutWait"
@@ -46,7 +46,6 @@ func RegisterEventDatas(cdc *amino.Codec) {
 	cdc.RegisterConcrete(EventDataNewRound{}, "tendermint/event/NewRound", nil)
 	cdc.RegisterConcrete(EventDataCompleteProposal{}, "tendermint/event/CompleteProposal", nil)
 	cdc.RegisterConcrete(EventDataVote{}, "tendermint/event/Vote", nil)
-	cdc.RegisterConcrete(EventDataProposalHeartbeat{}, "tendermint/event/ProposalHeartbeat", nil)
 	cdc.RegisterConcrete(EventDataValidatorSetUpdates{}, "tendermint/event/ValidatorSetUpdates", nil)
 	cdc.RegisterConcrete(EventDataString(""), "tendermint/event/ProposalString", nil)
 }
@@ -56,11 +55,17 @@ func RegisterEventDatas(cdc *amino.Codec) {

 type EventDataNewBlock struct {
 	Block *Block `json:"block"`
+
+	ResultBeginBlock abci.ResponseBeginBlock `json:"result_begin_block"`
+	ResultEndBlock   abci.ResponseEndBlock   `json:"result_end_block"`
 }

 // light weight event for benchmarking
 type EventDataNewBlockHeader struct {
 	Header Header `json:"header"`
+
+	ResultBeginBlock abci.ResponseBeginBlock `json:"result_begin_block"`
+	ResultEndBlock   abci.ResponseEndBlock   `json:"result_end_block"`
 }

 // All txs fire EventDataTx
@@ -68,10 +73,6 @@ type EventDataTx struct {
 	TxResult
 }

-type EventDataProposalHeartbeat struct {
-	Heartbeat *Heartbeat
-}
-
 // NOTE: This goes into the replay WAL
 type EventDataRoundState struct {
 	Height int64 `json:"height"`
@@ -100,7 +101,7 @@ type EventDataCompleteProposal struct {
 	Round  int    `json:"round"`
 	Step   string `json:"step"`

-	BlockID BlockID `json:"block_id"`
+	BlockID BlockID `json:"block_id"`
 }

 type EventDataVote struct {
@@ -136,7 +137,6 @@ var (
 	EventQueryNewRound           = QueryForEvent(EventNewRound)
 	EventQueryNewRoundStep       = QueryForEvent(EventNewRoundStep)
 	EventQueryPolka              = QueryForEvent(EventPolka)
-	EventQueryProposalHeartbeat  = QueryForEvent(EventProposalHeartbeat)
 	EventQueryRelock             = QueryForEvent(EventRelock)
 	EventQueryTimeoutPropose     = QueryForEvent(EventTimeoutPropose)
 	EventQueryTimeoutWait        = QueryForEvent(EventTimeoutWait)
@@ -1,83 +0,0 @@
-package types
-
-import (
-	"fmt"
-
-	"github.com/pkg/errors"
-	"github.com/tendermint/tendermint/crypto"
-	cmn "github.com/tendermint/tendermint/libs/common"
-)
-
-// Heartbeat is a simple vote-like structure so validators can
-// alert others that they are alive and waiting for transactions.
-// Note: We aren't adding ",omitempty" to Heartbeat's
-// json field tags because we always want the JSON
-// representation to be in its canonical form.
-type Heartbeat struct {
-	ValidatorAddress Address `json:"validator_address"`
-	ValidatorIndex   int     `json:"validator_index"`
-	Height           int64   `json:"height"`
-	Round            int     `json:"round"`
-	Sequence         int     `json:"sequence"`
-	Signature        []byte  `json:"signature"`
-}
-
-// SignBytes returns the Heartbeat bytes for signing.
-// It panics if the Heartbeat is nil.
-func (heartbeat *Heartbeat) SignBytes(chainID string) []byte {
-	bz, err := cdc.MarshalBinaryLengthPrefixed(CanonicalizeHeartbeat(chainID, heartbeat))
-	if err != nil {
-		panic(err)
-	}
-	return bz
-}
-
-// Copy makes a copy of the Heartbeat.
-func (heartbeat *Heartbeat) Copy() *Heartbeat {
-	if heartbeat == nil {
-		return nil
-	}
-	heartbeatCopy := *heartbeat
-	return &heartbeatCopy
-}
-
-// String returns a string representation of the Heartbeat.
-func (heartbeat *Heartbeat) String() string {
-	if heartbeat == nil {
-		return "nil-heartbeat"
-	}
-
-	return fmt.Sprintf("Heartbeat{%v:%X %v/%02d (%v) %v}",
-		heartbeat.ValidatorIndex, cmn.Fingerprint(heartbeat.ValidatorAddress),
-		heartbeat.Height, heartbeat.Round, heartbeat.Sequence,
-		fmt.Sprintf("/%X.../", cmn.Fingerprint(heartbeat.Signature[:])))
-}
-
-// ValidateBasic performs basic validation.
-func (heartbeat *Heartbeat) ValidateBasic() error {
-	if len(heartbeat.ValidatorAddress) != crypto.AddressSize {
-		return fmt.Errorf("Expected ValidatorAddress size to be %d bytes, got %d bytes",
-			crypto.AddressSize,
-			len(heartbeat.ValidatorAddress),
-		)
-	}
-	if heartbeat.ValidatorIndex < 0 {
-		return errors.New("Negative ValidatorIndex")
-	}
-	if heartbeat.Height < 0 {
-		return errors.New("Negative Height")
-	}
-	if heartbeat.Round < 0 {
-		return errors.New("Negative Round")
-	}
-	if heartbeat.Sequence < 0 {
-		return errors.New("Negative Sequence")
-	}
-	if len(heartbeat.Signature) == 0 {
-		return errors.New("Signature is missing")
-	}
-	if len(heartbeat.Signature) > MaxSignatureSize {
-		return fmt.Errorf("Signature is too big (max: %d)", MaxSignatureSize)
-	}
-	return nil
-}
@@ -1,104 +0,0 @@
-package types
-
-import (
-	"testing"
-
-	"github.com/stretchr/testify/assert"
-	"github.com/stretchr/testify/require"
-	"github.com/tendermint/tendermint/crypto/ed25519"
-	"github.com/tendermint/tendermint/crypto/secp256k1"
-)
-
-func TestHeartbeatCopy(t *testing.T) {
-	hb := &Heartbeat{ValidatorIndex: 1, Height: 10, Round: 1}
-	hbCopy := hb.Copy()
-	require.Equal(t, hbCopy, hb, "heartbeat copy should be the same")
-	hbCopy.Round = hb.Round + 10
-	require.NotEqual(t, hbCopy, hb, "heartbeat copy mutation should not change original")
-
-	var nilHb *Heartbeat
-	nilHbCopy := nilHb.Copy()
-	require.Nil(t, nilHbCopy, "copy of nil should also return nil")
-}
-
-func TestHeartbeatString(t *testing.T) {
-	var nilHb *Heartbeat
-	require.Contains(t, nilHb.String(), "nil", "expecting a string and no panic")
-
-	hb := &Heartbeat{ValidatorIndex: 1, Height: 11, Round: 2}
-	require.Equal(t, "Heartbeat{1:000000000000 11/02 (0) /000000000000.../}", hb.String())
-
-	var key ed25519.PrivKeyEd25519
-	sig, err := key.Sign([]byte("Tendermint"))
-	require.NoError(t, err)
-	hb.Signature = sig
-	require.Equal(t, "Heartbeat{1:000000000000 11/02 (0) /FF41E371B9BF.../}", hb.String())
-}
-
-func TestHeartbeatWriteSignBytes(t *testing.T) {
-	chainID := "test_chain_id"
-
-	{
-		testHeartbeat := &Heartbeat{ValidatorIndex: 1, Height: 10, Round: 1}
-		signBytes := testHeartbeat.SignBytes(chainID)
-		expected, err := cdc.MarshalBinaryLengthPrefixed(CanonicalizeHeartbeat(chainID, testHeartbeat))
-		require.NoError(t, err)
-		require.Equal(t, expected, signBytes, "Got unexpected sign bytes for Heartbeat")
-	}
-
-	{
-		testHeartbeat := &Heartbeat{}
-		signBytes := testHeartbeat.SignBytes(chainID)
-		expected, err := cdc.MarshalBinaryLengthPrefixed(CanonicalizeHeartbeat(chainID, testHeartbeat))
-		require.NoError(t, err)
-		require.Equal(t, expected, signBytes, "Got unexpected sign bytes for Heartbeat")
-	}
-
-	require.Panics(t, func() {
-		var nilHb *Heartbeat
-		signBytes := nilHb.SignBytes(chainID)
-		require.Equal(t, string(signBytes), "null")
-	})
-}
-
-func TestHeartbeatValidateBasic(t *testing.T) {
-	testCases := []struct {
-		testName          string
-		malleateHeartBeat func(*Heartbeat)
-		expectErr         bool
-	}{
-		{"Good HeartBeat", func(hb *Heartbeat) {}, false},
-		{"Invalid address size", func(hb *Heartbeat) {
-			hb.ValidatorAddress = nil
-		}, true},
-		{"Negative validator index", func(hb *Heartbeat) {
-			hb.ValidatorIndex = -1
-		}, true},
-		{"Negative height", func(hb *Heartbeat) {
-			hb.Height = -1
-		}, true},
-		{"Negative round", func(hb *Heartbeat) {
-			hb.Round = -1
-		}, true},
-		{"Negative sequence", func(hb *Heartbeat) {
-			hb.Sequence = -1
-		}, true},
-		{"Missing signature", func(hb *Heartbeat) {
-			hb.Signature = nil
-		}, true},
-		{"Signature too big", func(hb *Heartbeat) {
-			hb.Signature = make([]byte, MaxSignatureSize+1)
-		}, true},
-	}
-	for _, tc := range testCases {
-		t.Run(tc.testName, func(t *testing.T) {
-			hb := &Heartbeat{
-				ValidatorAddress: secp256k1.GenPrivKey().PubKey().Address(),
-				Signature:        make([]byte, 4),
-				ValidatorIndex:   1, Height: 10, Round: 1}
-
-			tc.malleateHeartBeat(hb)
-			assert.Equal(t, tc.expectErr, hb.ValidateBasic() != nil, "Validate Basic had an unexpected result")
-		})
-	}
-}
@@ -69,6 +69,15 @@ func DefaultValidatorParams() ValidatorParams {
 	return ValidatorParams{[]string{ABCIPubKeyTypeEd25519}}
 }

+func (params *ValidatorParams) IsValidPubkeyType(pubkeyType string) bool {
+	for i := 0; i < len(params.PubKeyTypes); i++ {
+		if params.PubKeyTypes[i] == pubkeyType {
+			return true
+		}
+	}
+	return false
+}
+
 // Validate validates the ConsensusParams to ensure all values are within their
 // allowed limits, and returns an error if they are not.
 func (params *ConsensusParams) Validate() error {
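A quick hedged usage sketch of the new helper against the default params (illustrative only, not code from the repo):

```go
package main

import (
	"fmt"

	"github.com/tendermint/tendermint/types"
)

func main() {
	params := types.DefaultValidatorParams() // allows ed25519 only

	fmt.Println(params.IsValidPubkeyType(types.ABCIPubKeyTypeEd25519)) // true
	fmt.Println(params.IsValidPubkeyType("unknown"))                   // false: not in the list
}
```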
@@ -124,19 +133,7 @@ func (params *ConsensusParams) Hash() []byte {
 func (params *ConsensusParams) Equals(params2 *ConsensusParams) bool {
 	return params.BlockSize == params2.BlockSize &&
 		params.Evidence == params2.Evidence &&
-		stringSliceEqual(params.Validator.PubKeyTypes, params2.Validator.PubKeyTypes)
-}
-
-func stringSliceEqual(a, b []string) bool {
-	if len(a) != len(b) {
-		return false
-	}
-	for i := 0; i < len(a); i++ {
-		if a[i] != b[i] {
-			return false
-		}
-	}
-	return true
+		cmn.StringSliceEqual(params.Validator.PubKeyTypes, params2.Validator.PubKeyTypes)
 }

 // Update returns a copy of the params with updates from the non-zero fields of p2.
@@ -157,7 +154,9 @@ func (params ConsensusParams) Update(params2 *abci.ConsensusParams) ConsensusPar
 		res.Evidence.MaxAge = params2.Evidence.MaxAge
 	}
 	if params2.Validator != nil {
-		res.Validator.PubKeyTypes = params2.Validator.PubKeyTypes
+		// Copy params2.Validator.PubkeyTypes, and set result's value to the copy.
+		// This avoids having to initialize the slice to 0 values, and then write to it again.
+		res.Validator.PubKeyTypes = append([]string{}, params2.Validator.PubKeyTypes...)
 	}
 	return res
 }
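The `append([]string{}, src...)` form is the standard Go idiom for taking an independent copy of a slice, so the updated params cannot alias the slice held by the caller; a self-contained illustration:

```go
package main

import "fmt"

func main() {
	src := []string{"ed25519"}

	alias := src                         // shares the backing array
	copied := append([]string{}, src...) // independent backing array

	src[0] = "changed"

	fmt.Println(alias[0])  // "changed": the alias sees the mutation
	fmt.Println(copied[0]) // "ed25519": the copy is unaffected
}
```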
@@ -200,6 +200,9 @@ func (ps *PartSet) Total() int {
 }

 func (ps *PartSet) AddPart(part *Part) (bool, error) {
+	if ps == nil {
+		return false, nil
+	}
 	ps.mtx.Lock()
 	defer ps.mtx.Unlock()

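The early `ps == nil` return relies on Go allowing pointer-receiver methods to be called on a nil receiver, as long as the method checks before dereferencing; a self-contained illustration of the pattern (the `partSet` type here is hypothetical):

```go
package main

import "fmt"

type partSet struct{ count int }

// addPart mirrors the guard added to AddPart: a nil *partSet is handled
// gracefully instead of panicking on the mutex/field access that follows.
func (ps *partSet) addPart() (bool, error) {
	if ps == nil {
		return false, nil
	}
	ps.count++
	return true, nil
}

func main() {
	var ps *partSet // nil receiver
	ok, err := ps.addPart()
	fmt.Println(ok, err) // false <nil>
}
```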
Some files were not shown because too many files have changed in this diff.