Merge branch 'develop' into 2926_dont_panic
commit c8ab062ce9

72 CHANGELOG.md
@@ -1,13 +1,79 @@
# Changelog

## v0.27.0

*December 5th, 2018*

Special thanks to external contributors on this release:
@danil-lashin, @srmo

Special thanks to @dlguddus for discovering a [major
issue](https://github.com/tendermint/tendermint/issues/2718#issuecomment-440888677)
in the proposer selection algorithm.

Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).

This release is primarily about fixes to the proposer selection algorithm
in preparation for the [Cosmos Game of
Stakes](https://blog.cosmos.network/the-game-of-stakes-is-open-for-registration-83a404746ee6).
It also makes use of the `ConsensusParams.Validator.PubKeyTypes` to restrict the
key types that can be used by validators, and removes the `Heartbeat` consensus
message.

### BREAKING CHANGES:

* CLI/RPC/Config
  - [rpc] [\#2932](https://github.com/tendermint/tendermint/issues/2932) Rename `accum` to `proposer_priority`

* Go API
  - [db] [\#2913](https://github.com/tendermint/tendermint/pull/2913)
    ReverseIterator API change: start < end, and end is exclusive.
  - [types] [\#2932](https://github.com/tendermint/tendermint/issues/2932) Rename `Validator.Accum` to `Validator.ProposerPriority`

* Blockchain Protocol
  - [state] [\#2714](https://github.com/tendermint/tendermint/issues/2714) Validators can now only use pubkeys allowed within
    ConsensusParams.Validator.PubKeyTypes

* P2P Protocol
  - [consensus] [\#2871](https://github.com/tendermint/tendermint/issues/2871)
    Remove *ProposalHeartbeat* message as it serves no real purpose (@srmo)
  - [state] Fixes for proposer selection:
    - [\#2785](https://github.com/tendermint/tendermint/issues/2785) Accum for new validators is `-1.125*totalVotingPower` instead of 0
    - [\#2941](https://github.com/tendermint/tendermint/issues/2941) val.Accum is preserved during ValidatorSet.Update to avoid being
      reset to 0

### IMPROVEMENTS:

- [state] [\#2929](https://github.com/tendermint/tendermint/issues/2929) Minor refactor of updateState logic (@danil-lashin)
- [node] \#2959 Allow node to start even if software's BlockProtocol is
  different from state's BlockProtocol
- [pex] \#2959 Pex reactor logger uses `module=pex`

### BUG FIXES:

- [p2p] \#2968 Panic on transport error rather than continuing to run but not
  accept new connections
- [p2p] \#2969 Fix mismatch in peer count between `/net_info` and the prometheus
  metrics
- [rpc] \#2408 `/broadcast_tx_commit`: Fix "interface conversion: interface {} in nil, not EventDataTx" panic (could happen if somebody sent a tx using `/broadcast_tx_commit` while Tendermint was being stopped)
- [state] [\#2785](https://github.com/tendermint/tendermint/issues/2785) Fix accum for new validators to be `-1.125*totalVotingPower`
  instead of 0, forcing them to wait before becoming the proposer (see the arithmetic sketch after this list). Also:
  - do not batch clip
  - keep accums averaged near 0
- [txindex/kv] [\#2925](https://github.com/tendermint/tendermint/issues/2925) Don't return false positives when range searching for a prefix of a tag value
- [types] [\#2938](https://github.com/tendermint/tendermint/issues/2938) Fix regression in v0.26.4 where we panic on empty
  genDoc.Validators
- [types] [\#2941](https://github.com/tendermint/tendermint/issues/2941) Preserve val.Accum during ValidatorSet.Update to avoid it being
  reset to 0 every time a validator is updated
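
For a new validator joining a set whose existing total voting power is `totalVotingPower`, the changelog's `-1.125*totalVotingPower` corresponds to `-(total + total/8)`, the same `x + (x >> 3) = x * 1.125` identity spelled out in the `MaxTotalVotingPower` comment further down this diff. Whether the implementation uses a shift or floating point is not shown here, so the following is only a standalone sketch of the arithmetic (plain Go, illustrative numbers, not Tendermint code):

```go
package main

import "fmt"

func main() {
	// Illustrative figure only; in Tendermint this would be the
	// validator set's current total voting power.
	totalVotingPower := int64(1000)

	// -1.125*x == -(x + x/8); x + (x >> 3) is the integer form of the
	// same identity used in the MaxTotalVotingPower comment.
	startingPriority := -(totalVotingPower + (totalVotingPower >> 3))

	fmt.Println(startingPriority) // -1125
}
```
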
## v0.26.4

*November 27th, 2018*

Special thanks to external contributors on this release:
ackratos, goolAdapter, james-ray, joe-bowman, kostko,
nagarajmanjunath, tomtau
@ackratos, @goolAdapter, @james-ray, @joe-bowman, @kostko,
@nagarajmanjunath, @tomtau

Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
45 UPGRADING.md
@@ -3,9 +3,50 @@
This guide provides steps to be followed when you upgrade your applications to
a newer version of Tendermint Core.

## v0.27.0

This release contains some breaking changes to the block and p2p protocols,
but does not change any core data structures, so it should be compatible with
existing blockchains from the v0.26 series that only used Ed25519 validator keys.
Blockchains using Secp256k1 for validators will not be compatible. This is due
to the fact that we now enforce which key types validators can use as a
consensus param. The default is Ed25519, and Secp256k1 must be activated
explicitly.

It is recommended to upgrade all nodes at once to avoid incompatibilities at the
peer layer - namely, the heartbeat consensus message has been removed (only
relevant if `create_empty_blocks=false` or `create_empty_blocks_interval > 0`),
and the proposer selection algorithm has changed. Since proposer information is
never included in the blockchain, this change only affects the peer layer.

### Go API Changes

#### libs/db

The ReverseIterator API has changed the meaning of `start` and `end`.
Before, iteration was from `start` to `end`, where
`start > end`. Now, iteration is from `end` to `start`, where `start < end`.
The iterator also excludes `end`. This change allows a simplified and more
intuitive logic, aligning the semantic meaning of `start` and `end` in the
`Iterator` and `ReverseIterator`.
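
A minimal sketch of the new semantics, assuming the in-memory backend and iterator interface from `libs/db` (keys here are arbitrary; this is an illustration, not code from the diff):

```go
package main

import (
	"fmt"

	dbm "github.com/tendermint/tendermint/libs/db"
)

func main() {
	db := dbm.NewMemDB()
	for _, k := range []string{"a", "b", "c", "d"} {
		db.Set([]byte(k), []byte("value-"+k))
	}

	// New semantics: start < end, end is exclusive, iteration runs
	// backwards, so this visits "c", "b", "a" ("d" is excluded).
	it := db.ReverseIterator([]byte("a"), []byte("d"))
	defer it.Close()
	for ; it.Valid(); it.Next() {
		fmt.Printf("%s\n", it.Key())
	}
}
```
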
### Applications

This release enforces a new consensus parameter, the
ValidatorParams.PubKeyTypes. Applications must ensure that they only return
validator updates with the allowed PubKeyTypes. If a validator update includes a
pubkey type that is not included in the ConsensusParams.Validator.PubKeyTypes,
block execution will fail and the consensus will halt.

By default, only Ed25519 pubkeys may be used for validators. Enabling
Secp256k1 requires explicit modification of the ConsensusParams.
Please update your application accordingly (ie. restrict validators to only be
able to use Ed25519 keys, or explicitly add additional key types to the genesis
file).
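
As an illustration, an application can screen its validator updates before returning them from EndBlock so that a disallowed key type never reaches block execution. This is only a sketch: the `abci` import path, the `PubKey.Type` field, and the `"ed25519"`/`"secp256k1"` type strings are assumptions about the ABCI types of this era, not something this diff defines.

```go
// Package example sketches how an ABCI application might restrict validator
// updates to the pubkey types allowed by ConsensusParams.Validator.PubKeyTypes.
package example

import (
	abci "github.com/tendermint/tendermint/abci/types"
)

// FilterValidatorUpdates drops any update whose pubkey type is not allowed.
// With the v0.27 defaults only "ed25519" would be permitted; "secp256k1"
// has to be enabled explicitly in the genesis consensus params first.
func FilterValidatorUpdates(updates []abci.ValidatorUpdate, allowed map[string]bool) []abci.ValidatorUpdate {
	out := make([]abci.ValidatorUpdate, 0, len(updates))
	for _, u := range updates {
		if allowed[u.PubKey.Type] {
			out = append(out, u)
		}
	}
	return out
}
```
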
## v0.26.0

New 0.26.0 release contains a lot of changes to core data types and protocols. It is not
This release contains a lot of changes to core data types and protocols. It is not
compatible to the old versions and there is no straight forward way to update
old data to be compatible with the new version.

@@ -67,7 +108,7 @@ For more information, see:

### Go API Changes

#### crypto.merkle
#### crypto/merkle

The `merkle.Hasher` interface was removed. Functions which used to take `Hasher`
now simply take `[]byte`. This means that any objects being Merklized should be

@@ -303,7 +303,13 @@ func (h *Handshaker) ReplayBlocks(
}
state.Validators = types.NewValidatorSet(vals)
state.NextValidators = types.NewValidatorSet(vals)
} else {
// If validator set is not set in genesis and still empty after InitChain, exit.
if len(h.genDoc.Validators) == 0 {
return nil, fmt.Errorf("Validator set is nil in genesis and still empty after InitChain")
}
}

if res.ConsensusParams != nil {
state.ConsensusParams = types.PB2TM.ConsensusParams(res.ConsensusParams)
}

19 node/node.go
@@ -210,13 +210,18 @@ func NewNode(config *cfg.Config,
// what happened during block replay).
state = sm.LoadState(stateDB)

// Ensure the state's block version matches that of the software.
// Log the version info.
logger.Info("Version info",
"software", version.TMCoreSemVer,
"block", version.BlockProtocol,
"p2p", version.P2PProtocol,
)

// If the state and software differ in block version, at least log it.
if state.Version.Consensus.Block != version.BlockProtocol {
return nil, fmt.Errorf(
"Block version of the software does not match that of the state.\n"+
"Got version.BlockProtocol=%v, state.Version.Consensus.Block=%v",
version.BlockProtocol,
state.Version.Consensus.Block,
logger.Info("Software and state have different block protocols",
"software", version.BlockProtocol,
"state", state.Version.Consensus.Block,
)
}

@@ -463,7 +468,7 @@ func NewNode(config *cfg.Config,
Seeds: splitAndTrimEmpty(config.P2P.Seeds, ",", " "),
SeedMode: config.P2P.SeedMode,
})
pexReactor.SetLogger(p2pLogger)
pexReactor.SetLogger(logger.With("module", "pex"))
sw.AddReactor("PEX", pexReactor)
}

@@ -98,13 +98,15 @@ func (ps *PeerSet) Get(peerKey ID) Peer {
}

// Remove discards peer by its Key, if the peer was previously memoized.
func (ps *PeerSet) Remove(peer Peer) {
// Returns true if the peer was removed, and false if it was not found.
// in the set.
func (ps *PeerSet) Remove(peer Peer) bool {
ps.mtx.Lock()
defer ps.mtx.Unlock()

item := ps.lookup[peer.ID()]
if item == nil {
return
return false
}

index := item.index

@@ -116,7 +118,7 @@ func (ps *PeerSet) Remove(peer Peer) {
if index == len(ps.list)-1 {
ps.list = newList
delete(ps.lookup, peer.ID())
return
return true
}

// Replace the popped item with the last item in the old list.

@@ -127,6 +129,7 @@ func (ps *PeerSet) Remove(peer Peer) {
lastPeerItem.index = index
ps.list = newList
delete(ps.lookup, peer.ID())
return true
}

// Size returns the number of unique items in the peerSet.

@@ -60,13 +60,15 @@ func TestPeerSetAddRemoveOne(t *testing.T) {
n := len(peerList)
// 1. Test removing from the front
for i, peerAtFront := range peerList {
peerSet.Remove(peerAtFront)
removed := peerSet.Remove(peerAtFront)
assert.True(t, removed)
wantSize := n - i - 1
for j := 0; j < 2; j++ {
assert.Equal(t, false, peerSet.Has(peerAtFront.ID()), "#%d Run #%d: failed to remove peer", i, j)
assert.Equal(t, wantSize, peerSet.Size(), "#%d Run #%d: failed to remove peer and decrement size", i, j)
// Test the route of removing the now non-existent element
peerSet.Remove(peerAtFront)
removed := peerSet.Remove(peerAtFront)
assert.False(t, removed)
}
}

@@ -81,7 +83,8 @@ func TestPeerSetAddRemoveOne(t *testing.T) {
// b) In reverse, remove each element
for i := n - 1; i >= 0; i-- {
peerAtEnd := peerList[i]
peerSet.Remove(peerAtEnd)
removed := peerSet.Remove(peerAtEnd)
assert.True(t, removed)
assert.Equal(t, false, peerSet.Has(peerAtEnd.ID()), "#%d: failed to remove item at end", i)
assert.Equal(t, i, peerSet.Size(), "#%d: differing sizes after peerSet.Remove(atEndPeer)", i)
}

@@ -105,7 +108,8 @@ func TestPeerSetAddRemoveMany(t *testing.T) {
}

for i, peer := range peers {
peerSet.Remove(peer)
removed := peerSet.Remove(peer)
assert.True(t, removed)
if peerSet.Has(peer.ID()) {
t.Errorf("Failed to remove peer")
}

@@ -211,7 +211,9 @@ func (sw *Switch) OnStop() {
// Stop peers
for _, p := range sw.peers.List() {
p.Stop()
sw.peers.Remove(p)
if sw.peers.Remove(p) {
sw.metrics.Peers.Add(float64(-1))
}
}

// Stop reactors

@@ -299,8 +301,9 @@ func (sw *Switch) StopPeerGracefully(peer Peer) {
}

func (sw *Switch) stopAndRemovePeer(peer Peer, reason interface{}) {
sw.peers.Remove(peer)
if sw.peers.Remove(peer) {
sw.metrics.Peers.Add(float64(-1))
}
peer.Stop()
for _, reactor := range sw.reactors {
reactor.RemovePeer(peer, reason)

@@ -505,6 +508,12 @@ func (sw *Switch) acceptRoutine() {
"err", err,
"numPeers", sw.peers.Size(),
)
// We could instead have a retry loop around the acceptRoutine,
// but that would need to stop and let the node shutdown eventually.
// So might as well panic and let process managers restart the node.
// There's no point in letting the node run without the acceptRoutine,
// since it won't be able to accept new connections.
panic(fmt.Errorf("accept routine exited: %v", err))
}

break

@@ -3,10 +3,17 @@ package p2p
import (
"bytes"
"fmt"
"io/ioutil"
"net/http"
"net/http/httptest"
"regexp"
"strconv"
"sync"
"testing"
"time"

stdprometheus "github.com/prometheus/client_golang/prometheus"

"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"

@@ -335,6 +342,54 @@ func TestSwitchStopsNonPersistentPeerOnError(t *testing.T) {
assert.False(p.IsRunning())
}

func TestSwitchStopPeerForError(t *testing.T) {
s := httptest.NewServer(stdprometheus.UninstrumentedHandler())
defer s.Close()

scrapeMetrics := func() string {
resp, _ := http.Get(s.URL)
buf, _ := ioutil.ReadAll(resp.Body)
return string(buf)
}

namespace, subsystem, name := config.TestInstrumentationConfig().Namespace, MetricsSubsystem, "peers"
re := regexp.MustCompile(namespace + `_` + subsystem + `_` + name + ` ([0-9\.]+)`)
peersMetricValue := func() float64 {
matches := re.FindStringSubmatch(scrapeMetrics())
f, _ := strconv.ParseFloat(matches[1], 64)
return f
}

p2pMetrics := PrometheusMetrics(namespace)

// make two connected switches
sw1, sw2 := MakeSwitchPair(t, func(i int, sw *Switch) *Switch {
// set metrics on sw1
if i == 0 {
opt := WithMetrics(p2pMetrics)
opt(sw)
}
return initSwitchFunc(i, sw)
})

assert.Equal(t, len(sw1.Peers().List()), 1)
assert.EqualValues(t, 1, peersMetricValue())

// send messages to the peer from sw1
p := sw1.Peers().List()[0]
p.Send(0x1, []byte("here's a message to send"))

// stop sw2. this should cause the p to fail,
// which results in calling StopPeerForError internally
sw2.Stop()

// now call StopPeerForError explicitly, eg. from a reactor
sw1.StopPeerForError(p, fmt.Errorf("some err"))

assert.Equal(t, len(sw1.Peers().List()), 0)
assert.EqualValues(t, 0, peersMetricValue())
}

func TestSwitchReconnectsToPersistentPeer(t *testing.T) {
assert, require := assert.New(t), require.New(t)

@@ -184,7 +184,7 @@ func MakeSwitch(

// TODO: let the config be passed in?
sw := initSwitch(i, NewSwitch(cfg, t, opts...))
sw.SetLogger(log.TestingLogger())
sw.SetLogger(log.TestingLogger().With("switch", i))
sw.SetNodeKey(&nodeKey)

ni := nodeInfo.(DefaultNodeInfo)

@@ -15,6 +15,11 @@ import (
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
//   // handle error
// }
// defer client.Stop()
// result, err := client.ABCIQuery("", "abcd", true)
// ```
//

@@ -69,6 +74,11 @@ func ABCIQuery(path string, data cmn.HexBytes, height int64, prove bool) (*ctype
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
//   // handle error
// }
// defer client.Stop()
// info, err := client.ABCIInfo()
// ```
//

@@ -18,6 +18,11 @@ import (
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
//   // handle error
// }
// defer client.Stop()
// info, err := client.BlockchainInfo(10, 10)
// ```
//

@@ -123,6 +128,11 @@ func filterMinMax(height, min, max, limit int64) (int64, int64, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
//   // handle error
// }
// defer client.Stop()
// info, err := client.Block(10)
// ```
//

@@ -235,6 +245,11 @@ func Block(heightPtr *int64) (*ctypes.ResultBlock, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
//   // handle error
// }
// defer client.Stop()
// info, err := client.Commit(11)
// ```
//

@@ -329,6 +344,11 @@ func Commit(heightPtr *int64) (*ctypes.ResultCommit, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
//   // handle error
// }
// defer client.Stop()
// info, err := client.BlockResults(10)
// ```
//

@@ -16,6 +16,11 @@ import (
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
//   // handle error
// }
// defer client.Stop()
// state, err := client.Validators()
// ```
//

@@ -67,6 +72,11 @@ func Validators(heightPtr *int64) (*ctypes.ResultValidators, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
//   // handle error
// }
// defer client.Stop()
// state, err := client.DumpConsensusState()
// ```
//

@@ -225,6 +235,11 @@ func DumpConsensusState() (*ctypes.ResultDumpConsensusState, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
//   // handle error
// }
// defer client.Stop()
// state, err := client.ConsensusState()
// ```
//

@@ -273,6 +288,11 @@ func ConsensusState() (*ctypes.ResultConsensusState, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
//   // handle error
// }
// defer client.Stop()
// state, err := client.ConsensusParams()
// ```
//

@@ -55,6 +55,10 @@ import (
//
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
//   // handle error
// }
// defer client.Stop()
// ctx, cancel := context.WithTimeout(context.Background(), timeout)
// defer cancel()
// query := query.MustParse("tm.event = 'Tx' AND tx.height = 3")

@@ -118,6 +122,10 @@ func Subscribe(wsCtx rpctypes.WSRPCContext, query string) (*ctypes.ResultSubscri
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
//   // handle error
// }
// defer client.Stop()
// err = client.Unsubscribe("test-client", query)
// ```
//

@@ -158,6 +166,10 @@ func Unsubscribe(wsCtx rpctypes.WSRPCContext, query string) (*ctypes.ResultUnsub
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
//   // handle error
// }
// defer client.Stop()
// err = client.UnsubscribeAll("test-client")
// ```
//

@@ -13,6 +13,11 @@ import (
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
//   // handle error
// }
// defer client.Stop()
// result, err := client.Health()
// ```
//

@@ -24,6 +24,11 @@ import (
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
//   // handle error
// }
// defer client.Stop()
// result, err := client.BroadcastTxAsync("123")
// ```
//

@@ -64,6 +69,11 @@ func BroadcastTxAsync(tx types.Tx) (*ctypes.ResultBroadcastTx, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
//   // handle error
// }
// defer client.Stop()
// result, err := client.BroadcastTxSync("456")
// ```
//

@@ -118,6 +128,11 @@ func BroadcastTxSync(tx types.Tx) (*ctypes.ResultBroadcastTx, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
//   // handle error
// }
// defer client.Stop()
// result, err := client.BroadcastTxCommit("789")
// ```
//

@@ -198,7 +213,10 @@ func BroadcastTxCommit(tx types.Tx) (*ctypes.ResultBroadcastTxCommit, error) {
// TODO: configurable?
var deliverTxTimeout = rpcserver.WriteTimeout / 2
select {
case deliverTxResMsg := <-deliverTxResCh: // The tx was included in a block.
case deliverTxResMsg, ok := <-deliverTxResCh: // The tx was included in a block.
if !ok {
return nil, errors.New("Error on broadcastTxCommit: expected DeliverTxResult, got nil. Did the Tendermint stop?")
}
deliverTxRes := deliverTxResMsg.(types.EventDataTx)
return &ctypes.ResultBroadcastTxCommit{
CheckTx: *checkTxRes,
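
The comma-ok receive in the hunk above is what the \#2408 fix hinges on: when Tendermint is being stopped, the result channel can be closed before a DeliverTx event arrives, and a plain receive would hand a nil interface to the `types.EventDataTx` type assertion. A generic, standalone illustration of that channel behavior (plain Go, hypothetical names, not Tendermint code):

```go
package main

import "fmt"

func main() {
	results := make(chan interface{}, 1)

	// Simulate shutdown: the channel is closed before any result is sent.
	close(results)

	res, ok := <-results
	if !ok {
		// Zero value received with ok == false: return an error instead
		// of type-asserting a nil interface and panicking.
		fmt.Println("no result: channel closed")
		return
	}
	fmt.Println(res)
}
```
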
@@ -225,6 +243,11 @@ func BroadcastTxCommit(tx types.Tx) (*ctypes.ResultBroadcastTxCommit, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
//   // handle error
// }
// defer client.Stop()
// result, err := client.UnconfirmedTxs()
// ```
//

@@ -263,6 +286,11 @@ func UnconfirmedTxs(limit int) (*ctypes.ResultUnconfirmedTxs, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
//   // handle error
// }
// defer client.Stop()
// result, err := client.UnconfirmedTxs()
// ```
//

@@ -17,6 +17,11 @@ import (
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
//   // handle error
// }
// defer client.Stop()
// info, err := client.NetInfo()
// ```
//

@@ -95,6 +100,11 @@ func UnsafeDialPeers(peers []string, persistent bool) (*ctypes.ResultDialPeers,
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
//   // handle error
// }
// defer client.Stop()
// genesis, err := client.Genesis()
// ```
//

@@ -20,6 +20,11 @@ import (
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
//   // handle error
// }
// defer client.Stop()
// result, err := client.Status()
// ```
//

@@ -21,6 +21,11 @@ import (
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
//   // handle error
// }
// defer client.Stop()
// tx, err := client.Tx([]byte("2B8EC32BA2579B3B8606E42C06DE2F7AFA2556EF"), true)
// ```
//

@@ -115,6 +120,11 @@ func Tx(hash []byte, prove bool) (*ctypes.ResultTx, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
//   // handle error
// }
// defer client.Stop()
// q, err := tmquery.New("account.owner='Ivan'")
// tx, err := client.TxSearch(q, true)
// ```

@@ -172,10 +172,10 @@ func (txi *TxIndex) Search(q *query.Query) ([]*types.TxResult, error) {

for _, r := range ranges {
if !hashesInitialized {
hashes = txi.matchRange(r, []byte(r.key))
hashes = txi.matchRange(r, startKey(r.key))
hashesInitialized = true
} else {
hashes = intersect(hashes, txi.matchRange(r, []byte(r.key)))
hashes = intersect(hashes, txi.matchRange(r, startKey(r.key)))
}
}
}

@@ -190,10 +190,10 @@ func (txi *TxIndex) Search(q *query.Query) ([]*types.TxResult, error) {
}

if !hashesInitialized {
hashes = txi.match(c, startKey(c, height))
hashes = txi.match(c, startKeyForCondition(c, height))
hashesInitialized = true
} else {
hashes = intersect(hashes, txi.match(c, startKey(c, height)))
hashes = intersect(hashes, txi.match(c, startKeyForCondition(c, height)))
}
}

@@ -332,18 +332,18 @@ func isRangeOperation(op query.Operator) bool {
}
}

func (txi *TxIndex) match(c query.Condition, startKey []byte) (hashes [][]byte) {
func (txi *TxIndex) match(c query.Condition, startKeyBz []byte) (hashes [][]byte) {
if c.Op == query.OpEqual {
it := dbm.IteratePrefix(txi.store, startKey)
it := dbm.IteratePrefix(txi.store, startKeyBz)
defer it.Close()
for ; it.Valid(); it.Next() {
hashes = append(hashes, it.Value())
}
} else if c.Op == query.OpContains {
// XXX: doing full scan because startKey does not apply here
// For example, if startKey = "account.owner=an" and search query = "accoutn.owner CONSISTS an"
// we can't iterate with prefix "account.owner=an" because we might miss keys like "account.owner=Ulan"
it := txi.store.Iterator(nil, nil)
// XXX: startKey does not apply here.
// For example, if startKey = "account.owner/an/" and search query = "accoutn.owner CONTAINS an"
// we can't iterate with prefix "account.owner/an/" because we might miss keys like "account.owner/Ulan/"
it := dbm.IteratePrefix(txi.store, startKey(c.Tag))
defer it.Close()
for ; it.Valid(); it.Next() {
if !isTagKey(it.Key()) {

@@ -359,14 +359,14 @@ func (txi *TxIndex) match(c query.Condition, startKey []byte) (hashes [][]byte)
return
}

func (txi *TxIndex) matchRange(r queryRange, prefix []byte) (hashes [][]byte) {
func (txi *TxIndex) matchRange(r queryRange, startKey []byte) (hashes [][]byte) {
// create a map to prevent duplicates
hashesMap := make(map[string][]byte)

lowerBound := r.lowerBoundValue()
upperBound := r.upperBoundValue()

it := dbm.IteratePrefix(txi.store, prefix)
it := dbm.IteratePrefix(txi.store, startKey)
defer it.Close()
LOOP:
for ; it.Valid(); it.Next() {

@@ -409,16 +409,6 @@ LOOP:
///////////////////////////////////////////////////////////////////////////////
// Keys

func startKey(c query.Condition, height int64) []byte {
var key string
if height > 0 {
key = fmt.Sprintf("%s/%v/%d/", c.Tag, c.Operand, height)
} else {
key = fmt.Sprintf("%s/%v/", c.Tag, c.Operand)
}
return []byte(key)
}

func isTagKey(key []byte) bool {
return strings.Count(string(key), tagKeySeparator) == 3
}

@@ -429,11 +419,36 @@ func extractValueFromKey(key []byte) string {
}

func keyForTag(tag cmn.KVPair, result *types.TxResult) []byte {
return []byte(fmt.Sprintf("%s/%s/%d/%d", tag.Key, tag.Value, result.Height, result.Index))
return []byte(fmt.Sprintf("%s/%s/%d/%d",
tag.Key,
tag.Value,
result.Height,
result.Index,
))
}

func keyForHeight(result *types.TxResult) []byte {
return []byte(fmt.Sprintf("%s/%d/%d/%d", types.TxHeightKey, result.Height, result.Height, result.Index))
return []byte(fmt.Sprintf("%s/%d/%d/%d",
types.TxHeightKey,
result.Height,
result.Height,
result.Index,
))
}

func startKeyForCondition(c query.Condition, height int64) []byte {
if height > 0 {
return startKey(c.Tag, c.Operand, height)
}
return startKey(c.Tag, c.Operand)
}

func startKey(fields ...interface{}) []byte {
var b bytes.Buffer
for _, f := range fields {
b.Write([]byte(fmt.Sprintf("%v", f) + tagKeySeparator))
}
return b.Bytes()
}

///////////////////////////////////////////////////////////////////////////////

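The variadic `startKey` added above terminates every field with `tagKeySeparator`, which is what keeps a range search over `account.number` from also matching keys indexed under `account.number.id` (the false positives fixed by \#2925). A standalone copy of the helper to show the keys it produces (the `/` separator is taken from the key examples in the comments above):

```go
package main

import (
	"bytes"
	"fmt"
)

const tagKeySeparator = "/" // per the "account.owner/an/" examples in this diff

// startKey mirrors the helper above: every field is followed by the
// separator, so the prefix for tag "account.number" value 1 ends in "/"
// and cannot match "account.number.id/...".
func startKey(fields ...interface{}) []byte {
	var b bytes.Buffer
	for _, f := range fields {
		b.Write([]byte(fmt.Sprintf("%v", f) + tagKeySeparator))
	}
	return b.Bytes()
}

func main() {
	fmt.Println(string(startKey("account.number", 1)))         // account.number/1/
	fmt.Println(string(startKey("account.owner", "Ivan", 12))) // account.owner/Ivan/12/
}
```
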
@@ -89,8 +89,10 @@ func TestTxSearch(t *testing.T) {
{"account.date >= TIME 2013-05-03T14:45:00Z", 0},
// search using CONTAINS
{"account.owner CONTAINS 'an'", 1},
// search using CONTAINS
// search for non existing value using CONTAINS
{"account.owner CONTAINS 'Vlad'", 0},
// search using the wrong tag (of numeric type) using CONTAINS
{"account.number CONTAINS 'Iv'", 0},
}

for _, tc := range testCases {

@@ -126,7 +128,7 @@ func TestTxSearchOneTxWithMultipleSameTagsButDifferentValues(t *testing.T) {
}

func TestTxSearchMultipleTxs(t *testing.T) {
allowedTags := []string{"account.number"}
allowedTags := []string{"account.number", "account.number.id"}
indexer := NewTxIndex(db.NewMemDB(), IndexTags(allowedTags))

// indexed first, but bigger height (to test the order of transactions)

@@ -160,6 +162,17 @@ func TestTxSearchMultipleTxs(t *testing.T) {
err = indexer.Index(txResult3)
require.NoError(t, err)

// indexed fourth (to test we don't include txs with similar tags)
// https://github.com/tendermint/tendermint/issues/2908
txResult4 := txResultWithTags([]cmn.KVPair{
{Key: []byte("account.number.id"), Value: []byte("1")},
})
txResult4.Tx = types.Tx("Mike's account")
txResult4.Height = 2
txResult4.Index = 2
err = indexer.Index(txResult4)
require.NoError(t, err)

results, err := indexer.Search(query.MustParse("account.number >= 1"))
assert.NoError(t, err)

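The next hunk turns `MaxTotalVotingPower` into an explicitly typed `int64` constant. As its comment explains, it is the largest `x` for which `x + (x >> 3)` still fits in an int64; a quick standalone check (plain Go, not part of the diff) confirms the sum lands exactly on `math.MaxInt64`:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	const maxTotalVotingPower = int64(8198552921648689607)

	// x + (x >> 3) is the integer form of x * 1.125 used in the comment
	// below; for this x the sum is exactly math.MaxInt64.
	sum := maxTotalVotingPower + (maxTotalVotingPower >> 3)
	fmt.Println(sum == math.MaxInt64) // true
}
```
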
@@ -19,7 +19,7 @@ import (
// x + (x >> 3) = x + x/8 = x * (1 + 0.125).
// MaxTotalVotingPower is the largest int64 `x` with the property that `x + (x >> 3)` is
// still in the bounds of int64.
const MaxTotalVotingPower = 8198552921648689607
const MaxTotalVotingPower = int64(8198552921648689607)

// ValidatorSet represent a set of *Validator at a given height.
// The validators can be fetched by address or index.

@@ -45,7 +45,8 @@ func TestValidatorSetBasic(t *testing.T) {
assert.Nil(t, vset.Hash())

// add
val = randValidator_()
val = randValidator_(vset.TotalVotingPower())
assert.True(t, vset.Add(val))
assert.True(t, vset.HasAddress(val.Address))
idx, val2 := vset.GetByAddress(val.Address)

@@ -61,7 +62,7 @@ func TestValidatorSetBasic(t *testing.T) {
assert.NotPanics(t, func() { vset.IncrementProposerPriority(1) })

// update
assert.False(t, vset.Update(randValidator_()))
assert.False(t, vset.Update(randValidator_(vset.TotalVotingPower())))
_, val = vset.GetByAddress(val.Address)
val.VotingPower += 100
proposerPriority := val.ProposerPriority

@@ -73,7 +74,7 @@ func TestValidatorSetBasic(t *testing.T) {
assert.Equal(t, proposerPriority, val.ProposerPriority)

// remove
val2, removed := vset.Remove(randValidator_().Address)
val2, removed := vset.Remove(randValidator_(vset.TotalVotingPower()).Address)
assert.Nil(t, val2)
assert.False(t, removed)
val2, removed = vset.Remove(val.Address)

@@ -280,16 +281,20 @@ func randPubKey() crypto.PubKey {
return ed25519.PubKeyEd25519(pubKey)
}

func randValidator_() *Validator {
val := NewValidator(randPubKey(), cmn.RandInt64())
val.ProposerPriority = cmn.RandInt64() % MaxTotalVotingPower
func randValidator_(totalVotingPower int64) *Validator {
// this modulo limits the ProposerPriority/VotingPower to stay in the
// bounds of MaxTotalVotingPower minus the already existing voting power:
val := NewValidator(randPubKey(), cmn.RandInt64()%(MaxTotalVotingPower-totalVotingPower))
val.ProposerPriority = cmn.RandInt64() % (MaxTotalVotingPower - totalVotingPower)
return val
}

func randValidatorSet(numValidators int) *ValidatorSet {
validators := make([]*Validator, numValidators)
totalVotingPower := int64(0)
for i := 0; i < numValidators; i++ {
validators[i] = randValidator_()
validators[i] = randValidator_(totalVotingPower)
totalVotingPower += validators[i].VotingPower
}
return NewValidatorSet(validators)
}

@@ -342,7 +347,174 @@ func TestAvgProposerPriority(t *testing.T) {
got := tc.vs.computeAvgProposerPriority()
assert.Equal(t, tc.want, got, "test case: %v", i)
}
}

func TestAveragingInIncrementProposerPriority(t *testing.T) {
// Test that the averaging works as expected inside of IncrementProposerPriority.
// Each validator comes with zero voting power which simplifies reasoning about
// the expected ProposerPriority.
tcs := []struct {
vs ValidatorSet
times int
avg int64
}{
0: {ValidatorSet{
Validators: []*Validator{
{Address: []byte("a"), ProposerPriority: 1},
{Address: []byte("b"), ProposerPriority: 2},
{Address: []byte("c"), ProposerPriority: 3}}},
1, 2},
1: {ValidatorSet{
Validators: []*Validator{
{Address: []byte("a"), ProposerPriority: 10},
{Address: []byte("b"), ProposerPriority: -10},
{Address: []byte("c"), ProposerPriority: 1}}},
// this should average twice but the average should be 0 after the first iteration
// (voting power is 0 -> no changes)
11, 1 / 3},
2: {ValidatorSet{
Validators: []*Validator{
{Address: []byte("a"), ProposerPriority: 100},
{Address: []byte("b"), ProposerPriority: -10},
{Address: []byte("c"), ProposerPriority: 1}}},
1, 91 / 3},
}
for i, tc := range tcs {
// work on copy to have the old ProposerPriorities:
newVset := tc.vs.CopyIncrementProposerPriority(tc.times)
for _, val := range tc.vs.Validators {
_, updatedVal := newVset.GetByAddress(val.Address)
assert.Equal(t, updatedVal.ProposerPriority, val.ProposerPriority-tc.avg, "test case: %v", i)
}
}
}

func TestAveragingInIncrementProposerPriorityWithVotingPower(t *testing.T) {
// Other than TestAveragingInIncrementProposerPriority this is a more complete test showing
// how each ProposerPriority changes in relation to the validator's voting power respectively.
vals := ValidatorSet{Validators: []*Validator{
{Address: []byte{0}, ProposerPriority: 0, VotingPower: 10},
{Address: []byte{1}, ProposerPriority: 0, VotingPower: 1},
{Address: []byte{2}, ProposerPriority: 0, VotingPower: 1}}}
tcs := []struct {
vals *ValidatorSet
wantProposerPrioritys []int64
times int
wantProposer *Validator
}{

0: {
vals.Copy(),
[]int64{
// Acumm+VotingPower-Avg:
0 + 10 - 12 - 4, // mostest will be subtracted by total voting power (12)
0 + 1 - 4,
0 + 1 - 4},
1,
vals.Validators[0]},
1: {
vals.Copy(),
[]int64{
(0 + 10 - 12 - 4) + 10 - 12 + 4, // this will be mostest on 2nd iter, too
(0 + 1 - 4) + 1 + 4,
(0 + 1 - 4) + 1 + 4},
2,
vals.Validators[0]}, // increment twice -> expect average to be subtracted twice
2: {
vals.Copy(),
[]int64{
((0 + 10 - 12 - 4) + 10 - 12) + 10 - 12 + 4, // still mostest
((0 + 1 - 4) + 1) + 1 + 4,
((0 + 1 - 4) + 1) + 1 + 4},
3,
vals.Validators[0]},
3: {
vals.Copy(),
[]int64{
0 + 4*(10-12) + 4 - 4, // still mostest
0 + 4*1 + 4 - 4,
0 + 4*1 + 4 - 4},
4,
vals.Validators[0]},
4: {
vals.Copy(),
[]int64{
0 + 4*(10-12) + 10 + 4 - 4, // 4 iters was mostest
0 + 5*1 - 12 + 4 - 4, // now this val is mostest for the 1st time (hence -12==totalVotingPower)
0 + 5*1 + 4 - 4},
5,
vals.Validators[1]},
5: {
vals.Copy(),
[]int64{
0 + 6*10 - 5*12 + 4 - 4, // mostest again
0 + 6*1 - 12 + 4 - 4, // mostest once up to here
0 + 6*1 + 4 - 4},
6,
vals.Validators[0]},
6: {
vals.Copy(),
[]int64{
0 + 7*10 - 6*12 + 4 - 4, // in 7 iters this val is mostest 6 times
0 + 7*1 - 12 + 4 - 4, // in 7 iters this val is mostest 1 time
0 + 7*1 + 4 - 4},
7,
vals.Validators[0]},
7: {
vals.Copy(),
[]int64{
0 + 8*10 - 7*12 + 4 - 4, // mostest
0 + 8*1 - 12 + 4 - 4,
0 + 8*1 + 4 - 4},
8,
vals.Validators[0]},
8: {
vals.Copy(),
[]int64{
0 + 9*10 - 7*12 + 4 - 4,
0 + 9*1 - 12 + 4 - 4,
0 + 9*1 - 12 + 4 - 4}, // mostest
9,
vals.Validators[2]},
9: {
vals.Copy(),
[]int64{
0 + 10*10 - 8*12 + 4 - 4, // after 10 iters this is mostest again
0 + 10*1 - 12 + 4 - 4, // after 6 iters this val is "mostest" once and not in between
0 + 10*1 - 12 + 4 - 4}, // in between 10 iters this val is "mostest" once
10,
vals.Validators[0]},
10: {
vals.Copy(),
[]int64{
// shift twice inside incrementProposerPriority (shift every 10th iter);
// don't shift at the end of IncremenctProposerPriority
// last avg should be zero because
// ProposerPriority of validator 0: (0 + 11*10 - 8*12 - 4) == 10
// ProposerPriority of validator 1 and 2: (0 + 11*1 - 12 - 4) == -5
// and (10 + 5 - 5) / 3 == 0
0 + 11*10 - 8*12 - 4 - 12 - 0,
0 + 11*1 - 12 - 4 - 0, // after 6 iters this val is "mostest" once and not in between
0 + 11*1 - 12 - 4 - 0}, // after 10 iters this val is "mostest" once
11,
vals.Validators[0]},
}
for i, tc := range tcs {
tc.vals.IncrementProposerPriority(tc.times)

assert.Equal(t, tc.wantProposer.Address, tc.vals.GetProposer().Address,
"test case: %v",
i)

for valIdx, val := range tc.vals.Validators {
assert.Equal(t,
tc.wantProposerPrioritys[valIdx],
val.ProposerPriority,
"test case: %v, validator: %v",
i,
valIdx)
}
}
}

func TestSafeAdd(t *testing.T) {

@@ -18,7 +18,7 @@ const (
// TMCoreSemVer is the current version of Tendermint Core.
// It's the Semantic Version of the software.
// Must be a string because scripts like dist.sh read this file.
TMCoreSemVer = "0.26.4"
TMCoreSemVer = "0.27.0"

// ABCISemVer is the semantic version of the ABCI library
ABCISemVer = "0.15.0"

@@ -36,10 +36,10 @@ func (p Protocol) Uint64() uint64 {

var (
// P2PProtocol versions all p2p behaviour and msgs.
P2PProtocol Protocol = 4
P2PProtocol Protocol = 5

// BlockProtocol versions all block data structures and processing.
BlockProtocol Protocol = 7
BlockProtocol Protocol = 8
)

//------------------------------------------------------------------------