Compare commits


11 Commits

Author          SHA1        Message                                                    Date
Ismail Khoffi   1b555b6329  write tests for the upgrade                                2018-12-21 15:55:15 +01:00
Ethan Buchman   928d9dad99  upgrade path                                               2018-12-17 23:57:40 -05:00
Ethan Buchman   d382f064ed  rearrange priv_validator.go                                2018-12-17 23:08:03 -05:00
Ethan Buchman   b5544a4560  privval: remove mtx                                        2018-12-17 22:46:14 -05:00
Ethan Buchman   897b9f56a6  fixes from review                                          2018-12-17 22:40:50 -05:00
yutianwu        bc940757ec  fix test                                                   2018-11-30 13:01:36 +08:00
yutianwu        6132a3ec52  delete scripts/wire2amino.go                               2018-11-30 13:01:36 +08:00
yutianwu        b6e44a2b3d  retrig test                                                2018-11-30 13:01:36 +08:00
yutianwu        98128e72fa  minor changes                                              2018-11-30 13:01:36 +08:00
yutianwu        e255b30c63  fix bugs                                                   2018-11-30 13:01:36 +08:00
yutianwu        e8700152be  split immutable and mutable parts of priv_validator.json  2018-11-30 13:01:36 +08:00
61 changed files with 884 additions and 1149 deletions

View File

@@ -7,13 +7,6 @@ defaults: &defaults
environment:
GOBIN: /tmp/workspace/bin
docs_update_config: &docs_update_config
working_directory: ~/repo
docker:
- image: tendermint/docs_deployment
environment:
AWS_REGION: us-east-1
jobs:
setup_dependencies:
<<: *defaults
@@ -346,25 +339,10 @@ jobs:
name: upload
command: bash .circleci/codecov.sh -f coverage.txt
deploy_docs:
<<: *docs_update_config
steps:
- checkout
- run:
name: Trigger website build
command: |
chamber exec tendermint -- start_website_build
workflows:
version: 2
test-suite:
jobs:
- deploy_docs:
filters:
branches:
only:
- master
- develop
- setup_dependencies
- lint:
requires:

View File

@@ -1,105 +1,13 @@
# Changelog
## v0.27.1
*December 15th, 2018*
Special thanks to external contributors on this release:
@danil-lashin, @hleb-albau, @james-ray, @leo-xinwang
### FEATURES:
- [rpc] [\#2964](https://github.com/tendermint/tendermint/issues/2964) Add `UnconfirmedTxs(limit)` and `NumUnconfirmedTxs()` methods to HTTP/Local clients (@danil-lashin)
- [docs] [\#3004](https://github.com/tendermint/tendermint/issues/3004) Enable full-text search on docs pages
### IMPROVEMENTS:
- [consensus] [\#2971](https://github.com/tendermint/tendermint/issues/2971) Return error if ValidatorSet is empty after InitChain
(@leo-xinwang)
- [ci/cd] [\#3005](https://github.com/tendermint/tendermint/issues/3005) Updated CircleCI job to trigger website build when docs are updated
- [docs] Various updates
### BUG FIXES:
- [cmd] [\#2983](https://github.com/tendermint/tendermint/issues/2983) `testnet` command always sets `addr_book_strict = false`
- [config] [\#2980](https://github.com/tendermint/tendermint/issues/2980) Fix CORS options formatting
- [kv indexer] [\#2912](https://github.com/tendermint/tendermint/issues/2912) Don't ignore key when executing CONTAINS
- [mempool] [\#2961](https://github.com/tendermint/tendermint/issues/2961) Call `notifyTxsAvailable` if there're txs left after committing a block, but recheck=false
- [mempool] [\#2994](https://github.com/tendermint/tendermint/issues/2994) Reject txs with negative GasWanted
- [p2p] [\#2990](https://github.com/tendermint/tendermint/issues/2990) Fix a bug where seeds don't disconnect from a peer after 3h
- [consensus] [\#3006](https://github.com/tendermint/tendermint/issues/3006) Save state after InitChain only when stateHeight is also 0 (@james-ray)
## v0.27.0
*December 5th, 2018*
Special thanks to external contributors on this release:
@danil-lashin, @srmo
Special thanks to @dlguddus for discovering a [major
issue](https://github.com/tendermint/tendermint/issues/2718#issuecomment-440888677)
in the proposer selection algorithm.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
This release is primarily about fixes to the proposer selection algorithm
in preparation for the [Cosmos Game of
Stakes](https://blog.cosmos.network/the-game-of-stakes-is-open-for-registration-83a404746ee6).
It also makes use of the `ConsensusParams.Validator.PubKeyTypes` to restrict the
key types that can be used by validators, and removes the `Heartbeat` consensus
message.
### BREAKING CHANGES:
* CLI/RPC/Config
- [rpc] [\#2932](https://github.com/tendermint/tendermint/issues/2932) Rename `accum` to `proposer_priority`
* Go API
- [db] [\#2913](https://github.com/tendermint/tendermint/pull/2913)
ReverseIterator API change: start < end, and end is exclusive.
- [types] [\#2932](https://github.com/tendermint/tendermint/issues/2932) Rename `Validator.Accum` to `Validator.ProposerPriority`
* Blockchain Protocol
- [state] [\#2714](https://github.com/tendermint/tendermint/issues/2714) Validators can now only use pubkeys allowed within
ConsensusParams.Validator.PubKeyTypes
* P2P Protocol
- [consensus] [\#2871](https://github.com/tendermint/tendermint/issues/2871)
Remove *ProposalHeartbeat* message as it serves no real purpose (@srmo)
- [state] Fixes for proposer selection:
- [\#2785](https://github.com/tendermint/tendermint/issues/2785) Accum for new validators is `-1.125*totalVotingPower` instead of 0
- [\#2941](https://github.com/tendermint/tendermint/issues/2941) val.Accum is preserved during ValidatorSet.Update to avoid being
reset to 0
### IMPROVEMENTS:
- [state] [\#2929](https://github.com/tendermint/tendermint/issues/2929) Minor refactor of updateState logic (@danil-lashin)
- [node] \#2959 Allow node to start even if software's BlockProtocol is
different from state's BlockProtocol
- [pex] \#2959 Pex reactor logger uses `module=pex`
### BUG FIXES:
- [p2p] \#2968 Panic on transport error rather than continuing to run but not
accept new connections
- [p2p] \#2969 Fix mismatch in peer count between `/net_info` and the prometheus
metrics
- [rpc] \#2408 `/broadcast_tx_commit`: Fix "interface conversion: interface {} in nil, not EventDataTx" panic (could happen if somebody sent a tx using `/broadcast_tx_commit` while Tendermint was being stopped)
- [state] [\#2785](https://github.com/tendermint/tendermint/issues/2785) Fix accum for new validators to be `-1.125*totalVotingPower`
instead of 0, forcing them to wait before becoming the proposer. Also:
- do not batch clip
- keep accums averaged near 0
- [txindex/kv] [\#2925](https://github.com/tendermint/tendermint/issues/2925) Don't return false positives when range searching for a prefix of a tag value
- [types] [\#2938](https://github.com/tendermint/tendermint/issues/2938) Fix regression in v0.26.4 where we panic on empty
genDoc.Validators
- [types] [\#2941](https://github.com/tendermint/tendermint/issues/2941) Preserve val.Accum during ValidatorSet.Update to avoid it being
reset to 0 every time a validator is updated
## v0.26.4
*November 27th, 2018*
Special thanks to external contributors on this release:
@ackratos, @goolAdapter, @james-ray, @joe-bowman, @kostko,
@nagarajmanjunath, @tomtau
ackratos, goolAdapter, james-ray, joe-bowman, kostko,
nagarajmanjunath, tomtau
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).

View File

@@ -1,23 +1,49 @@
## v0.27.2
# Pending
## v0.27.0
*TBD*
Special thanks to external contributors on this release:
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
* CLI/RPC/Config
- [rpc] \#2932 Rename `accum` to `proposer_priority`
* Apps
* Go API
- [db] [\#2913](https://github.com/tendermint/tendermint/pull/2913)
ReverseIterator API change -- start < end, and end is exclusive.
- [types] \#2932 Rename `Validator.Accum` to `Validator.ProposerPriority`
* Blockchain Protocol
- [state] \#2714 Validators can now only use pubkeys allowed within
ConsensusParams.ValidatorParams
* P2P Protocol
- [consensus] [\#2871](https://github.com/tendermint/tendermint/issues/2871)
Remove *ProposalHeartbeat* message as it serves no real purpose
- [state] Fixes for proposer selection:
- \#2785 Accum for new validators is `-1.125*totalVotingPower` instead of 0
- \#2941 val.Accum is preserved during ValidatorSet.Update to avoid being
reset to 0
### FEATURES:
- [privval] \#1181 Split immutable and mutable parts of priv_validator.json
### IMPROVEMENTS:
### BUG FIXES:
- [types] \#2938 Fix regression in v0.26.4 where we panic on empty
genDoc.Validators
- [state] \#2785 Fix accum for new validators to be `-1.125*totalVotingPower`
instead of 0, forcing them to wait before becoming the proposer. Also:
- do not batch clip
- keep accums averaged near 0
- [types] \#2941 Preserve val.Accum during ValidatorSet.Update to avoid it being
reset to 0 every time a validator is updated

View File

@@ -294,7 +294,6 @@ build-linux:
build-docker-localnode:
cd networks/local
make
cd -
# Run a 4-node testnet locally
localnet-start: localnet-stop

View File

@@ -3,50 +3,9 @@
This guide provides steps to be followed when you upgrade your applications to
a newer version of Tendermint Core.
## v0.27.0
This release contains some breaking changes to the block and p2p protocols,
but does not change any core data structures, so it should be compatible with
existing blockchains from the v0.26 series that only used Ed25519 validator keys.
Blockchains using Secp256k1 for validators will not be compatible. This is due
to the fact that we now enforce which key types validators can use as a
consensus param. The default is Ed25519, and Secp256k1 must be activated
explicitly.
It is recommended to upgrade all nodes at once to avoid incompatibilities at the
peer layer - namely, the heartbeat consensus message has been removed (only
relevant if `create_empty_blocks=false` or `create_empty_blocks_interval > 0`),
and the proposer selection algorithm has changed. Since proposer information is
never included in the blockchain, this change only affects the peer layer.
### Go API Changes
#### libs/db
The ReverseIterator API has changed the meaning of `start` and `end`.
Before, iteration was from `start` to `end`, where
`start > end`. Now, iteration is from `end` to `start`, where `start < end`.
The iterator also excludes `end`. This change allows a simplified and more
intuitive logic, aligning the semantic meaning of `start` and `end` in the
`Iterator` and `ReverseIterator`.
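For readers updating call sites, here is a minimal sketch of the new semantics, assuming the `libs/db` `MemDB` and `Iterator` methods of this era (`Valid`, `Next`, `Key`, `Close`); treat it as illustrative rather than canonical:

```go
package main

import (
	"fmt"

	dbm "github.com/tendermint/tendermint/libs/db"
)

func main() {
	db := dbm.NewMemDB()
	db.Set([]byte("a"), []byte("1"))
	db.Set([]byte("b"), []byte("2"))
	db.Set([]byte("c"), []byte("3"))

	// New contract: start < end, end is exclusive, iteration runs high -> low.
	// For the range ["a", "c") we expect "b", then "a" ("c" is excluded).
	it := db.ReverseIterator([]byte("a"), []byte("c")) // old API expected start > end
	defer it.Close()
	for ; it.Valid(); it.Next() {
		fmt.Printf("%s\n", it.Key())
	}
}
```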
### Applications
This release enforces a new consensus parameter, the
ValidatorParams.PubKeyTypes. Applications must ensure that they only return
validator updates with the allowed PubKeyTypes. If a validator update includes a
pubkey type that is not included in the ConsensusParams.Validator.PubKeyTypes,
block execution will fail and the consensus will halt.
By default, only Ed25519 pubkeys may be used for validators. Enabling
Secp256k1 requires explicit modification of the ConsensusParams.
Please update your application accordingly (ie. restrict validators to only be
able to use Ed25519 keys, or explicitly add additional key types to the genesis
file).
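As a rough illustration of widening the allowed key types, the sketch below uses the `types` package; the names (`DefaultConsensusParams`, `Validator.PubKeyTypes`, `ABCIPubKeyTypeEd25519`, `ABCIPubKeyTypeSecp256k1`) are assumed from this era of the codebase, so verify them against the version you run:

```go
package main

import (
	"fmt"

	"github.com/tendermint/tendermint/types"
)

func main() {
	// Defaults allow only Ed25519 validator keys; Secp256k1 has to be
	// enabled explicitly, e.g. before writing consensus params to genesis.
	params := types.DefaultConsensusParams()
	params.Validator.PubKeyTypes = []string{
		types.ABCIPubKeyTypeEd25519,   // allowed by default
		types.ABCIPubKeyTypeSecp256k1, // must be opted into
	}
	fmt.Println(params.Validator.PubKeyTypes)
}
```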
## v0.26.0
This release contains a lot of changes to core data types and protocols. It is not
New 0.26.0 release contains a lot of changes to core data types and protocols. It is not
compatible to the old versions and there is no straight forward way to update
old data to be compatible with the new version.
@@ -108,7 +67,7 @@ For more information, see:
### Go API Changes
#### crypto/merkle
#### crypto.merkle
The `merkle.Hasher` interface was removed. Functions which used to take `Hasher`
now simply take `[]byte`. This means that any objects being Merklized should be

View File

@@ -13,9 +13,10 @@ import (
func main() {
var (
addr = flag.String("addr", ":26659", "Address of client to connect to")
chainID = flag.String("chain-id", "mychain", "chain id")
privValPath = flag.String("priv", "", "priv val file path")
addr = flag.String("addr", ":26659", "Address of client to connect to")
chainID = flag.String("chain-id", "mychain", "chain id")
privValKeyPath = flag.String("priv-key", "", "priv val key file path")
privValStatePath = flag.String("priv-state", "", "priv val state file path")
logger = log.NewTMLogger(
log.NewSyncWriter(os.Stdout),
@@ -27,10 +28,11 @@ func main() {
"Starting private validator",
"addr", *addr,
"chainID", *chainID,
"privPath", *privValPath,
"privKeyPath", *privValKeyPath,
"privStatePath", *privValStatePath,
)
pv := privval.LoadFilePV(*privValPath)
pv := privval.LoadFilePV(*privValKeyPath, *privValStatePath)
rs := privval.NewRemoteSigner(
logger,

View File

@@ -17,7 +17,7 @@ var GenValidatorCmd = &cobra.Command{
}
func genValidator(cmd *cobra.Command, args []string) {
pv := privval.GenFilePV("")
pv := privval.GenFilePV("", "")
jsbz, err := cdc.MarshalJSON(pv)
if err != nil {
panic(err)

View File

@@ -4,7 +4,6 @@ import (
"fmt"
"github.com/spf13/cobra"
cfg "github.com/tendermint/tendermint/config"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/p2p"
@@ -26,15 +25,18 @@ func initFiles(cmd *cobra.Command, args []string) error {
func initFilesWithConfig(config *cfg.Config) error {
// private validator
privValFile := config.PrivValidatorFile()
privValKeyFile := config.PrivValidatorKeyFile()
privValStateFile := config.PrivValidatorStateFile()
var pv *privval.FilePV
if cmn.FileExists(privValFile) {
pv = privval.LoadFilePV(privValFile)
logger.Info("Found private validator", "path", privValFile)
if cmn.FileExists(privValKeyFile) {
pv = privval.LoadFilePV(privValKeyFile, privValStateFile)
logger.Info("Found private validator", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
} else {
pv = privval.GenFilePV(privValFile)
pv = privval.GenFilePV(privValKeyFile, privValStateFile)
pv.Save()
logger.Info("Generated private validator", "path", privValFile)
logger.Info("Generated private validator", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
}
nodeKeyFile := config.NodeKeyFile()

View File

@@ -27,19 +27,20 @@ var ResetPrivValidatorCmd = &cobra.Command{
// XXX: this is totally unsafe.
// it's only suitable for testnets.
func resetAll(cmd *cobra.Command, args []string) {
ResetAll(config.DBDir(), config.P2P.AddrBookFile(), config.PrivValidatorFile(), logger)
ResetAll(config.DBDir(), config.P2P.AddrBookFile(), config.PrivValidatorKeyFile(),
config.PrivValidatorStateFile(), logger)
}
// XXX: this is totally unsafe.
// it's only suitable for testnets.
func resetPrivValidator(cmd *cobra.Command, args []string) {
resetFilePV(config.PrivValidatorFile(), logger)
resetFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile(), logger)
}
// ResetAll removes the privValidator and address book files plus all data.
// Exported so other CLI tools can use it.
func ResetAll(dbDir, addrBookFile, privValFile string, logger log.Logger) {
resetFilePV(privValFile, logger)
func ResetAll(dbDir, addrBookFile, privValKeyFile, privValStateFile string, logger log.Logger) {
resetFilePV(privValKeyFile, privValStateFile, logger)
removeAddrBook(addrBookFile, logger)
if err := os.RemoveAll(dbDir); err == nil {
logger.Info("Removed all blockchain history", "dir", dbDir)
@@ -48,15 +49,17 @@ func ResetAll(dbDir, addrBookFile, privValFile string, logger log.Logger) {
}
}
func resetFilePV(privValFile string, logger log.Logger) {
if _, err := os.Stat(privValFile); err == nil {
pv := privval.LoadFilePV(privValFile)
func resetFilePV(privValKeyFile, privValStateFile string, logger log.Logger) {
if _, err := os.Stat(privValKeyFile); err == nil {
pv := privval.LoadFilePV(privValKeyFile, privValStateFile)
pv.Reset()
logger.Info("Reset private validator file to genesis state", "file", privValFile)
logger.Info("Reset private validator file to genesis state", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
} else {
pv := privval.GenFilePV(privValFile)
pv := privval.GenFilePV(privValKeyFile, privValStateFile)
pv.Save()
logger.Info("Generated private validator file", "file", privValFile)
logger.Info("Generated private validator file", "file", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
}
}

View File

@@ -16,7 +16,7 @@ var ShowValidatorCmd = &cobra.Command{
}
func showValidator(cmd *cobra.Command, args []string) {
privValidator := privval.LoadOrGenFilePV(config.PrivValidatorFile())
privValidator := privval.LoadOrGenFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile())
pubKeyJSONBytes, _ := cdc.MarshalJSON(privValidator.GetPubKey())
fmt.Println(string(pubKeyJSONBytes))
}

View File

@@ -85,11 +85,18 @@ func testnetFiles(cmd *cobra.Command, args []string) error {
_ = os.RemoveAll(outputDir)
return err
}
err = os.MkdirAll(filepath.Join(nodeDir, "data"), nodeDirPerm)
if err != nil {
_ = os.RemoveAll(outputDir)
return err
}
initFilesWithConfig(config)
pvFile := filepath.Join(nodeDir, config.BaseConfig.PrivValidator)
pv := privval.LoadFilePV(pvFile)
pvKeyFile := filepath.Join(nodeDir, config.BaseConfig.PrivValidatorKey)
pvStateFile := filepath.Join(nodeDir, config.BaseConfig.PrivValidatorState)
pv := privval.LoadFilePV(pvKeyFile, pvStateFile)
genVals[i] = types.GenesisValidator{
Address: pv.GetPubKey().Address(),
PubKey: pv.GetPubKey(),
@@ -127,31 +134,14 @@ func testnetFiles(cmd *cobra.Command, args []string) error {
}
}
// Gather persistent peer addresses.
var (
persistentPeers string
err error
)
if populatePersistentPeers {
persistentPeers, err = persistentPeersString(config)
err := populatePersistentPeersInConfigAndWriteIt(config)
if err != nil {
_ = os.RemoveAll(outputDir)
return err
}
}
// Overwrite default config.
for i := 0; i < nValidators+nNonValidators; i++ {
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
config.SetRoot(nodeDir)
config.P2P.AddrBookStrict = false
if populatePersistentPeers {
config.P2P.PersistentPeers = persistentPeers
}
cfg.WriteConfigFile(filepath.Join(nodeDir, "config", "config.toml"), config)
}
fmt.Printf("Successfully initialized %v node directories\n", nValidators+nNonValidators)
return nil
}
@@ -174,16 +164,28 @@ func hostnameOrIP(i int) string {
return fmt.Sprintf("%s%d", hostnamePrefix, i)
}
func persistentPeersString(config *cfg.Config) (string, error) {
func populatePersistentPeersInConfigAndWriteIt(config *cfg.Config) error {
persistentPeers := make([]string, nValidators+nNonValidators)
for i := 0; i < nValidators+nNonValidators; i++ {
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
config.SetRoot(nodeDir)
nodeKey, err := p2p.LoadNodeKey(config.NodeKeyFile())
if err != nil {
return "", err
return err
}
persistentPeers[i] = p2p.IDAddressString(nodeKey.ID(), fmt.Sprintf("%s:%d", hostnameOrIP(i), p2pPort))
}
return strings.Join(persistentPeers, ","), nil
persistentPeersList := strings.Join(persistentPeers, ",")
for i := 0; i < nValidators+nNonValidators; i++ {
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
config.SetRoot(nodeDir)
config.P2P.PersistentPeers = persistentPeersList
config.P2P.AddrBookStrict = false
// overwrite default config
cfg.WriteConfigFile(filepath.Join(nodeDir, "config", "config.toml"), config)
}
return nil
}
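For reference, each entry produced by `p2p.IDAddressString` has the form `<node-id>@<host>:<port>`; a small, self-contained sketch with made-up node IDs and hostnames:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/tendermint/tendermint/p2p"
)

func main() {
	// Hypothetical node IDs and hostnames, purely to illustrate the
	// comma-separated persistent_peers value written into config.toml.
	ids := []p2p.ID{
		"c1f1e2f3a4b5c6d7e8f90a1b2c3d4e5f60718293",
		"aa11bb22cc33dd44ee55ff66aa77bb88cc99dd00",
	}
	var entries []string
	for i, id := range ids {
		addr := fmt.Sprintf("node%d:26656", i)
		entries = append(entries, p2p.IDAddressString(id, addr))
	}
	fmt.Println(strings.Join(entries, ","))
	// e.g. c1f1...@node0:26656,aa11...@node1:26656
}
```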

View File

@@ -35,15 +35,24 @@ var (
defaultConfigFileName = "config.toml"
defaultGenesisJSONName = "genesis.json"
defaultPrivValName = "priv_validator.json"
defaultPrivValKeyName = "priv_validator_key.json"
defaultPrivValStateName = "priv_validator_state.json"
defaultNodeKeyName = "node_key.json"
defaultAddrBookName = "addrbook.json"
defaultConfigFilePath = filepath.Join(defaultConfigDir, defaultConfigFileName)
defaultGenesisJSONPath = filepath.Join(defaultConfigDir, defaultGenesisJSONName)
defaultPrivValPath = filepath.Join(defaultConfigDir, defaultPrivValName)
defaultNodeKeyPath = filepath.Join(defaultConfigDir, defaultNodeKeyName)
defaultAddrBookPath = filepath.Join(defaultConfigDir, defaultAddrBookName)
defaultConfigFilePath = filepath.Join(defaultConfigDir, defaultConfigFileName)
defaultGenesisJSONPath = filepath.Join(defaultConfigDir, defaultGenesisJSONName)
defaultPrivValKeyPath = filepath.Join(defaultConfigDir, defaultPrivValKeyName)
defaultPrivValStatePath = filepath.Join(defaultDataDir, defaultPrivValStateName)
defaultNodeKeyPath = filepath.Join(defaultConfigDir, defaultNodeKeyName)
defaultAddrBookPath = filepath.Join(defaultConfigDir, defaultAddrBookName)
)
var (
oldPrivVal = "priv_validator.json"
oldPrivValPath = filepath.Join(defaultConfigDir, oldPrivVal)
)
// Config defines the top level configuration for a Tendermint node
@@ -160,7 +169,10 @@ type BaseConfig struct {
Genesis string `mapstructure:"genesis_file"`
// Path to the JSON file containing the private key to use as a validator in the consensus protocol
PrivValidator string `mapstructure:"priv_validator_file"`
PrivValidatorKey string `mapstructure:"priv_validator_key_file"`
// Path to the JSON file containing the last sign state of a validator
PrivValidatorState string `mapstructure:"priv_validator_state_file"`
// TCP or UNIX socket address for Tendermint to listen on for
// connections from an external PrivValidator process
@@ -183,19 +195,20 @@ type BaseConfig struct {
// DefaultBaseConfig returns a default base configuration for a Tendermint node
func DefaultBaseConfig() BaseConfig {
return BaseConfig{
Genesis: defaultGenesisJSONPath,
PrivValidator: defaultPrivValPath,
NodeKey: defaultNodeKeyPath,
Moniker: defaultMoniker,
ProxyApp: "tcp://127.0.0.1:26658",
ABCI: "socket",
LogLevel: DefaultPackageLogLevels(),
LogFormat: LogFormatPlain,
ProfListenAddress: "",
FastSync: true,
FilterPeers: false,
DBBackend: "leveldb",
DBPath: "data",
Genesis: defaultGenesisJSONPath,
PrivValidatorKey: defaultPrivValKeyPath,
PrivValidatorState: defaultPrivValStatePath,
NodeKey: defaultNodeKeyPath,
Moniker: defaultMoniker,
ProxyApp: "tcp://127.0.0.1:26658",
ABCI: "socket",
LogLevel: DefaultPackageLogLevels(),
LogFormat: LogFormatPlain,
ProfListenAddress: "",
FastSync: true,
FilterPeers: false,
DBBackend: "leveldb",
DBPath: "data",
}
}
@@ -218,9 +231,20 @@ func (cfg BaseConfig) GenesisFile() string {
return rootify(cfg.Genesis, cfg.RootDir)
}
// PrivValidatorFile returns the full path to the priv_validator.json file
func (cfg BaseConfig) PrivValidatorFile() string {
return rootify(cfg.PrivValidator, cfg.RootDir)
// PrivValidatorKeyFile returns the full path to the priv_validator_key.json file
func (cfg BaseConfig) PrivValidatorKeyFile() string {
return rootify(cfg.PrivValidatorKey, cfg.RootDir)
}
// PrivValidatorFile returns the full path to the priv_validator_state.json file
func (cfg BaseConfig) PrivValidatorStateFile() string {
return rootify(cfg.PrivValidatorState, cfg.RootDir)
}
// OldPrivValidatorFile returns the full path of the priv_validator.json from pre v0.28.0.
// TODO: eventually remove.
func (cfg BaseConfig) OldPrivValidatorFile() string {
return rootify(oldPrivValPath, cfg.RootDir)
}
// NodeKeyFile returns the full path to the node_key.json file
@@ -283,7 +307,7 @@ type RPCConfig struct {
// Maximum number of simultaneous connections.
// Does not include RPC (HTTP&WebSocket) connections. See max_open_connections
// If you want to accept a larger number than the default, make sure
// If you want to accept more significant number than the default, make sure
// you increase your OS limits.
// 0 - unlimited.
GRPCMaxOpenConnections int `mapstructure:"grpc_max_open_connections"`
@@ -293,7 +317,7 @@ type RPCConfig struct {
// Maximum number of simultaneous connections (including WebSocket).
// Does not include gRPC connections. See grpc_max_open_connections
// If you want to accept a larger number than the default, make sure
// If you want to accept more significant number than the default, make sure
// you increase your OS limits.
// 0 - unlimited.
// Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
@@ -774,12 +798,12 @@ type InstrumentationConfig struct {
PrometheusListenAddr string `mapstructure:"prometheus_listen_addr"`
// Maximum number of simultaneous connections.
// If you want to accept a larger number than the default, make sure
// If you want to accept more significant number than the default, make sure
// you increase your OS limits.
// 0 - unlimited.
MaxOpenConnections int `mapstructure:"max_open_connections"`
// Instrumentation namespace.
// Tendermint instrumentation namespace.
Namespace string `mapstructure:"namespace"`
}
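The `PrivValidatorKeyFile`, `PrivValidatorStateFile`, and `OldPrivValidatorFile` accessors above are the hooks for migrating an existing node. A sketch of that migration, mirroring the `DefaultNewNode` change later in this diff (the helper name and the wrapper `main` are illustrative):

```go
package main

import (
	"fmt"
	"os"

	cfg "github.com/tendermint/tendermint/config"
	"github.com/tendermint/tendermint/privval"
)

// upgradePrivValIfNeeded splits a pre-split priv_validator.json, if one is
// still on disk, into the new key/state files.
func upgradePrivValIfNeeded(config *cfg.Config) error {
	oldPath := config.OldPrivValidatorFile()
	if _, err := os.Stat(oldPath); os.IsNotExist(err) {
		return nil // nothing to migrate
	}
	oldPV, err := privval.LoadOldFilePV(oldPath)
	if err != nil {
		return fmt.Errorf("error reading old priv validator from %v: %v", oldPath, err)
	}
	oldPV.Upgrade(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile())
	return nil
}

func main() {
	config := cfg.DefaultConfig()
	config.SetRoot(os.ExpandEnv("$HOME/.tendermint")) // example root
	if err := upgradePrivValIfNeeded(config); err != nil {
		fmt.Println("upgrade failed:", err)
	}
}
```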

View File

@@ -95,7 +95,10 @@ log_format = "{{ .BaseConfig.LogFormat }}"
genesis_file = "{{ js .BaseConfig.Genesis }}"
# Path to the JSON file containing the private key to use as a validator in the consensus protocol
priv_validator_file = "{{ js .BaseConfig.PrivValidator }}"
priv_validator_key_file = "{{ js .BaseConfig.PrivValidatorKey }}"
# Path to the JSON file containing the last sign state of a validator
priv_validator_state_file = "{{ js .BaseConfig.PrivValidatorState }}"
# TCP or UNIX socket address for Tendermint to listen on for
# connections from an external PrivValidator process
@@ -125,13 +128,13 @@ laddr = "{{ .RPC.ListenAddress }}"
# A list of origins a cross-domain request can be executed from
# Default value '[]' disables cors support
# Use '["*"]' to allow any origin
cors_allowed_origins = [{{ range .RPC.CORSAllowedOrigins }}{{ printf "%q, " . }}{{end}}]
cors_allowed_origins = "{{ .RPC.CORSAllowedOrigins }}"
# A list of methods the client is allowed to use with cross-domain requests
cors_allowed_methods = [{{ range .RPC.CORSAllowedMethods }}{{ printf "%q, " . }}{{end}}]
cors_allowed_methods = "{{ .RPC.CORSAllowedMethods }}"
# A list of non simple headers the client is allowed to use with cross-domain requests
cors_allowed_headers = [{{ range .RPC.CORSAllowedHeaders }}{{ printf "%q, " . }}{{end}}]
cors_allowed_headers = "{{ .RPC.CORSAllowedHeaders }}"
# TCP or UNIX socket address for the gRPC server to listen on
# NOTE: This server only supports /broadcast_tx_commit
@@ -139,7 +142,7 @@ grpc_laddr = "{{ .RPC.GRPCListenAddress }}"
# Maximum number of simultaneous connections.
# Does not include RPC (HTTP&WebSocket) connections. See max_open_connections
# If you want to accept a larger number than the default, make sure
# If you want to accept more significant number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
# Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
@@ -151,7 +154,7 @@ unsafe = {{ .RPC.Unsafe }}
# Maximum number of simultaneous connections (including WebSocket).
# Does not include gRPC connections. See grpc_max_open_connections
# If you want to accept a larger number than the default, make sure
# If you want to accept more significant number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
# Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
@@ -269,8 +272,8 @@ blocktime_iota = "{{ .Consensus.BlockTimeIota }}"
# What indexer to use for transactions
#
# Options:
# 1) "null"
# 2) "kv" (default) - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
# 1) "null" (default)
# 2) "kv" - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
indexer = "{{ .TxIndex.Indexer }}"
# Comma-separated list of tags to index (by default the only tag is "tx.hash")
@@ -302,7 +305,7 @@ prometheus = {{ .Instrumentation.Prometheus }}
prometheus_listen_addr = "{{ .Instrumentation.PrometheusListenAddr }}"
# Maximum number of simultaneous connections.
# If you want to accept a larger number than the default, make sure
# If you want to accept more significant number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
max_open_connections = {{ .Instrumentation.MaxOpenConnections }}
@@ -342,7 +345,8 @@ func ResetTestRoot(testName string) *Config {
baseConfig := DefaultBaseConfig()
configFilePath := filepath.Join(rootDir, defaultConfigFilePath)
genesisFilePath := filepath.Join(rootDir, baseConfig.Genesis)
privFilePath := filepath.Join(rootDir, baseConfig.PrivValidator)
privKeyFilePath := filepath.Join(rootDir, baseConfig.PrivValidatorKey)
privStateFilePath := filepath.Join(rootDir, baseConfig.PrivValidatorState)
// Write default config file if missing.
if !cmn.FileExists(configFilePath) {
@@ -352,7 +356,8 @@ func ResetTestRoot(testName string) *Config {
cmn.MustWriteFile(genesisFilePath, []byte(testGenesis), 0644)
}
// we always overwrite the priv val
cmn.MustWriteFile(privFilePath, []byte(testPrivValidator), 0644)
cmn.MustWriteFile(privKeyFilePath, []byte(testPrivValidatorKey), 0644)
cmn.MustWriteFile(privStateFilePath, []byte(testPrivValidatorState), 0644)
config := TestConfig().SetRoot(rootDir)
return config
@@ -374,7 +379,7 @@ var testGenesis = `{
"app_hash": ""
}`
var testPrivValidator = `{
var testPrivValidatorKey = `{
"address": "A3258DCBF45DCA0DF052981870F2D1441A36D145",
"pub_key": {
"type": "tendermint/PubKeyEd25519",
@@ -383,8 +388,11 @@ var testPrivValidator = `{
"priv_key": {
"type": "tendermint/PrivKeyEd25519",
"value": "EVkqJO/jIXp3rkASXfh9YnyToYXRXhBr6g9cQVxPFnQBP/5povV4HTjvsy530kybxKHwEi85iU8YL0qQhSYVoQ=="
},
"last_height": "0",
"last_round": "0",
"last_step": 0
}
}`
var testPrivValidatorState = `{
"height": "0",
"round": "0",
"step": 0
}`
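To make the split concrete, a minimal sketch of generating and reloading a validator with the two-file layout; the paths are examples following the new defaults (key under `config/`, state under `data/`), and those directories must already exist for `Save` to succeed:

```go
package main

import (
	"fmt"

	"github.com/tendermint/tendermint/privval"
)

func main() {
	keyFile := "config/priv_validator_key.json"   // immutable key material
	stateFile := "data/priv_validator_state.json" // mutable last-sign state

	pv := privval.GenFilePV(keyFile, stateFile)
	pv.Save() // persists both the key file and the state file

	// Reloading takes the same pair of paths.
	pv = privval.LoadFilePV(keyFile, stateFile)
	fmt.Println(pv.GetPubKey().Address())
}
```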

View File

@@ -60,7 +60,7 @@ func TestEnsureTestRoot(t *testing.T) {
// TODO: make sure the cfg returned and testconfig are the same!
baseConfig := DefaultBaseConfig()
ensureFiles(t, rootDir, defaultDataDir, baseConfig.Genesis, baseConfig.PrivValidator)
ensureFiles(t, rootDir, defaultDataDir, baseConfig.Genesis, baseConfig.PrivValidatorKey, baseConfig.PrivValidatorState)
}
func checkConfig(configFile string) bool {

View File

@@ -6,14 +6,18 @@ import (
"fmt"
"io/ioutil"
"os"
"path"
"path/filepath"
"reflect"
"sort"
"sync"
"testing"
"time"
"github.com/go-kit/kit/log/term"
abcicli "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/example/counter"
"github.com/tendermint/tendermint/abci/example/kvstore"
abci "github.com/tendermint/tendermint/abci/types"
bc "github.com/tendermint/tendermint/blockchain"
cfg "github.com/tendermint/tendermint/config"
@@ -27,11 +31,6 @@ import (
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
tmtime "github.com/tendermint/tendermint/types/time"
"github.com/tendermint/tendermint/abci/example/counter"
"github.com/tendermint/tendermint/abci/example/kvstore"
"github.com/go-kit/kit/log/term"
)
const (
@@ -281,9 +280,10 @@ func newConsensusStateWithConfigAndBlockStore(thisConfig *cfg.Config, state sm.S
}
func loadPrivValidator(config *cfg.Config) *privval.FilePV {
privValidatorFile := config.PrivValidatorFile()
ensureDir(path.Dir(privValidatorFile), 0700)
privValidator := privval.LoadOrGenFilePV(privValidatorFile)
privValidatorKeyFile := config.PrivValidatorKeyFile()
ensureDir(filepath.Dir(privValidatorKeyFile), 0700)
privValidatorStateFile := config.PrivValidatorStateFile()
privValidator := privval.LoadOrGenFilePV(privValidatorKeyFile, privValidatorStateFile)
privValidator.Reset()
return privValidator
}
@@ -591,7 +591,7 @@ func randConsensusNet(nValidators int, testName string, tickerFunc func() Timeou
for _, opt := range configOpts {
opt(thisConfig)
}
ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
ensureDir(filepath.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
app := appFunc()
vals := types.TM2PB.ValidatorUpdates(state.Validators)
app.InitChain(abci.RequestInitChain{Validators: vals})
@@ -612,16 +612,21 @@ func randConsensusNetWithPeers(nValidators, nPeers int, testName string, tickerF
stateDB := dbm.NewMemDB() // each state needs its own db
state, _ := sm.LoadStateFromDBOrGenesisDoc(stateDB, genDoc)
thisConfig := ResetConfig(fmt.Sprintf("%s_%d", testName, i))
ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
ensureDir(filepath.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
var privVal types.PrivValidator
if i < nValidators {
privVal = privVals[i]
} else {
tempFile, err := ioutil.TempFile("", "priv_validator_")
tempKeyFile, err := ioutil.TempFile("", "priv_validator_key_")
if err != nil {
panic(err)
}
privVal = privval.GenFilePV(tempFile.Name())
tempStateFile, err := ioutil.TempFile("", "priv_validator_state_")
if err != nil {
panic(err)
}
privVal = privval.GenFilePV(tempKeyFile.Name(), tempStateFile.Name())
}
app := appFunc()

View File

@@ -11,6 +11,7 @@ import (
"time"
abci "github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/version"
//auto "github.com/tendermint/tendermint/libs/autofile"
cmn "github.com/tendermint/tendermint/libs/common"
dbm "github.com/tendermint/tendermint/libs/db"
@@ -19,7 +20,6 @@ import (
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
"github.com/tendermint/tendermint/version"
)
var crc32c = crc32.MakeTable(crc32.Castagnoli)
@@ -247,7 +247,6 @@ func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
// Set AppVersion on the state.
h.initialState.Version.Consensus.App = version.Protocol(res.AppVersion)
sm.SaveState(h.stateDB, h.initialState)
// Replay blocks up to the latest in the blockstore.
_, err = h.ReplayBlocks(h.initialState, appHash, blockHeight, proxyApp)
@@ -296,27 +295,19 @@ func (h *Handshaker) ReplayBlocks(
return nil, err
}
if stateBlockHeight == 0 { //we only update state when we are in initial state
// If the app returned validators or consensus params, update the state.
if len(res.Validators) > 0 {
vals, err := types.PB2TM.ValidatorUpdates(res.Validators)
if err != nil {
return nil, err
}
state.Validators = types.NewValidatorSet(vals)
state.NextValidators = types.NewValidatorSet(vals)
} else {
// If validator set is not set in genesis and still empty after InitChain, exit.
if len(h.genDoc.Validators) == 0 {
return nil, fmt.Errorf("Validator set is nil in genesis and still empty after InitChain")
}
// If the app returned validators or consensus params, update the state.
if len(res.Validators) > 0 {
vals, err := types.PB2TM.ValidatorUpdates(res.Validators)
if err != nil {
return nil, err
}
if res.ConsensusParams != nil {
state.ConsensusParams = types.PB2TM.ConsensusParams(res.ConsensusParams)
}
sm.SaveState(h.stateDB, state)
state.Validators = types.NewValidatorSet(vals)
state.NextValidators = types.NewValidatorSet(vals)
}
if res.ConsensusParams != nil {
state.ConsensusParams = types.PB2TM.ConsensusParams(res.ConsensusParams)
}
sm.SaveState(h.stateDB, state)
}
// First handle edge cases and constraints on the storeBlockHeight.

View File

@@ -319,7 +319,7 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
walFile := tempWALWithData(walBody)
config.Consensus.SetWalFile(walFile)
privVal := privval.LoadFilePV(config.PrivValidatorFile())
privVal := privval.LoadFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile())
wal, err := NewWAL(walFile)
require.NoError(t, err)
@@ -633,7 +633,7 @@ func TestInitChainUpdateValidators(t *testing.T) {
clientCreator := proxy.NewLocalClientCreator(app)
config := ResetConfig("proxy_test_")
privVal := privval.LoadFilePV(config.PrivValidatorFile())
privVal := privval.LoadFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile())
stateDB, state, store := stateAndStore(config, privVal.GetPubKey(), 0x0)
oldValAddr := state.Validators.Validators[0].Address

View File

@@ -40,8 +40,9 @@ func WALGenerateNBlocks(wr io.Writer, numBlocks int) (err error) {
// COPY PASTE FROM node.go WITH A FEW MODIFICATIONS
// NOTE: we can't import node package because of circular dependency.
// NOTE: we don't do handshake so need to set state.Version.Consensus.App directly.
privValidatorFile := config.PrivValidatorFile()
privValidator := privval.LoadOrGenFilePV(privValidatorFile)
privValidatorKeyFile := config.PrivValidatorKeyFile()
privValidatorStateFile := config.PrivValidatorStateFile()
privValidator := privval.LoadOrGenFilePV(privValidatorKeyFile, privValidatorStateFile)
genDoc, err := types.GenesisDocFromFile(config.GenesisFile())
if err != nil {
return errors.Wrap(err, "failed to read genesis file")

View File

@@ -8,17 +8,7 @@ module.exports = {
lineNumbers: true
},
themeConfig: {
repo: "tendermint/tendermint",
editLinks: true,
docsDir: "docs",
docsBranch: "develop",
editLinkText: 'Edit this page on Github',
lastUpdated: true,
algolia: {
apiKey: '59f0e2deb984aa9cdf2b3a5fd24ac501',
indexName: 'tendermint',
debug: false
},
lastUpdated: "Last Updated",
nav: [{ text: "Back to Tendermint", link: "https://tendermint.com" }],
sidebar: [
{

View File

@@ -12,10 +12,10 @@ respectively.
## How It Works
There is a CircleCI job listening for changes in the `/docs` directory, on both
There is a Jenkins job listening for changes in the `/docs` directory, on both
the `master` and `develop` branches. Any updates to files in this directory
on those branches will automatically trigger a website deployment. Under the hood,
the private website repository has a `make build-docs` target consumed by a CircleCI job in that repo.
a private website repository has make targets consumed by a standard Jenkins task.
## README
@@ -93,10 +93,6 @@ python -m SimpleHTTPServer 8080
then navigate to localhost:8080 in your browser.
## Search
We are using [Algolia](https://www.algolia.com) to power full-text search. This uses a public API search-only key in the `config.js` as well as a [tendermint.json](https://github.com/algolia/docsearch-configs/blob/master/configs/tendermint.json) configuration file that we can update with PRs.
## Consistency
Because the build processes are identical (as is the information contained herein), this file should be kept in sync as

View File

@@ -5,7 +5,7 @@ Tendermint blockchain application.
The following diagram provides a superb example:
![](../imgs/cosmos-tendermint-stack-4k.jpg)
<https://drive.google.com/open?id=1yR2XpRi9YCY9H9uMfcw8-RMJpvDyvjz9>
The end-user application here is the Cosmos Voyager, at the bottom left.
Voyager communicates with a REST API exposed by a local Light-Client

View File

@@ -181,13 +181,5 @@
"language": "Javascript",
"author": "Dennis McKinnon"
}
],
"aminoLibraries": [
{
"name": "JS-Amino",
"url": "https://github.com/TanNgocDo/Js-Amino",
"language": "Javascript",
"author": "TanNgocDo"
}
]
}

View File

@@ -1,9 +1,11 @@
# Ecosystem
The growing list of applications built using various pieces of the
Tendermint stack can be found at the [ecosystem page](https://tendermint.com/ecosystem).
Tendermint stack can be found at:
We thank the community for their contributions and welcome the
- https://tendermint.com/ecosystem
We thank the community for their contributions thus far and welcome the
addition of new projects. A pull request can be submitted to [this
file](https://github.com/tendermint/tendermint/blob/master/docs/app-dev/ecosystem.json)
to include your project.

Binary file not shown (image diff; before: 625 KiB)

View File

@@ -70,6 +70,10 @@ Tendermint is in essence similar software, but with two key differences:
the application logic that's right for them, from key-value store to
cryptocurrency to e-voting platform and beyond.
The layout of this Tendermint website content is also ripped directly
and without shame from [consul.io](https://www.consul.io/) and the other
[Hashicorp sites](https://www.hashicorp.com/#tools).
### Bitcoin, Ethereum, etc.
Tendermint emerged in the tradition of cryptocurrencies like Bitcoin,

View File

@@ -1,17 +1,20 @@
# Docker Compose
With Docker Compose, you can spin up local testnets with a single command.
With Docker Compose, we can spin up local testnets in a single command:
```
make localnet-start
```
## Requirements
1. [Install tendermint](/docs/install.md)
2. [Install docker](https://docs.docker.com/engine/installation/)
3. [Install docker-compose](https://docs.docker.com/compose/install/)
- [Install tendermint](/docs/install.md)
- [Install docker](https://docs.docker.com/engine/installation/)
- [Install docker-compose](https://docs.docker.com/compose/install/)
## Build
Build the `tendermint` binary and, optionally, the `tendermint/localnode`
docker image.
Build the `tendermint` binary and the `tendermint/localnode` docker image.
Note the binary will be mounted into the container so it can be updated without
rebuilding the image.
@@ -22,10 +25,11 @@ cd $GOPATH/src/github.com/tendermint/tendermint
# Build the linux binary in ./build
make build-linux
# (optionally) Build tendermint/localnode image
# Build tendermint/localnode image
make build-docker-localnode
```
## Run a testnet
To start a 4 node testnet run:
@@ -34,13 +38,9 @@ To start a 4 node testnet run:
make localnet-start
```
The nodes bind their RPC servers to ports 26657, 26660, 26662, and 26664 on the
host.
The nodes bind their RPC servers to ports 26657, 26660, 26662, and 26664 on the host.
This file creates a 4-node network using the localnode image.
The nodes of the network expose their P2P and RPC endpoints to the host machine
on ports 26656-26657, 26659-26660, 26661-26662, and 26663-26664 respectively.
The nodes of the network expose their P2P and RPC endpoints to the host machine on ports 26656-26657, 26659-26660, 26661-26662, and 26663-26664 respectively.
To update the binary, just rebuild it and restart the nodes:
@@ -52,40 +52,34 @@ make localnet-start
## Configuration
The `make localnet-start` creates files for a 4-node testnet in `./build` by
calling the `tendermint testnet` command.
The `make localnet-start` creates files for a 4-node testnet in `./build` by calling the `tendermint testnet` command.
The `./build` directory is mounted to the `/tendermint` mount point to attach
the binary and config files to the container.
The `./build` directory is mounted to the `/tendermint` mount point to attach the binary and config files to the container.
To change the number of validators / non-validators change the `localnet-start` Makefile target:
```
localnet-start: localnet-stop
@if ! [ -f build/node0/config/genesis.json ]; then docker run --rm -v $(CURDIR)/build:/tendermint:Z tendermint/localnode testnet --v 5 --n 3 --o . --populate-persistent-peers --starting-ip-address 192.167.10.2 ; fi
docker-compose up
```
The command now will generate config files for 5 validators and 3
non-validators network.
Before running it, don't forget to cleanup the old files:
For instance, to create a single node testnet:
```
cd $GOPATH/src/github.com/tendermint/tendermint
# Clear the build folder
rm -rf ./build/node*
rm -rf ./build
# Build binary
make build-linux
# Create configuration
docker run -e LOG="stdout" -v `pwd`/build:/tendermint tendermint/localnode testnet --o . --v 1
#Run the node
docker run -v `pwd`/build:/tendermint tendermint/localnode
```
## Logging
Log is saved under the attached volume, in the `tendermint.log` file. If the
`LOG` environment variable is set to `stdout` at start, the log is not saved,
but printed on the screen.
Log is saved under the attached volume, in the `tendermint.log` file. If the `LOG` environment variable is set to `stdout` at start, the log is not saved, but printed on the screen.
## Special binaries
If you have multiple binaries with different names, you can specify which one
to run with the `BINARY` environment variable. The path of the binary is relative
to the attached volume.
If you have multiple binaries with different names, you can specify which one to run with the BINARY environment variable. The path of the binary is relative to the attached volume.

View File

@@ -14,31 +14,31 @@ please submit them to our [bug bounty](https://tendermint.com/security)!
### Data Structures
- [Encoding and Digests](./blockchain/encoding.md)
- [Blockchain](./blockchain/blockchain.md)
- [State](./blockchain/state.md)
- [Encoding and Digests](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/encoding.md)
- [Blockchain](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/blockchain.md)
- [State](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/state.md)
### Consensus Protocol
- [Consensus Algorithm](./consensus/consensus.md)
- [Creating a proposal](./consensus/creating-proposal.md)
- [Time](./consensus/bft-time.md)
- [Light-Client](./consensus/light-client.md)
- [Consensus Algorithm](/docs/spec/consensus/consensus.md)
- [Creating a proposal](/docs/spec/consensus/creating-proposal.md)
- [Time](/docs/spec/consensus/bft-time.md)
- [Light-Client](/docs/spec/consensus/light-client.md)
### P2P and Network Protocols
- [The Base P2P Layer](./p2p/): multiplex the protocols ("reactors") on authenticated and encrypted TCP connections
- [Peer Exchange (PEX)](./reactors/pex/): gossip known peer addresses so peers can find each other
- [Block Sync](./reactors/block_sync/): gossip blocks so peers can catch up quickly
- [Consensus](./reactors/consensus/): gossip votes and block parts so new blocks can be committed
- [Mempool](./reactors/mempool/): gossip transactions so they get included in blocks
- [Evidence](./reactors/evidence/): sending invalid evidence will stop the peer
- [The Base P2P Layer](https://github.com/tendermint/tendermint/tree/master/docs/spec/p2p): multiplex the protocols ("reactors") on authenticated and encrypted TCP connections
- [Peer Exchange (PEX)](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/pex): gossip known peer addresses so peers can find each other
- [Block Sync](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/block_sync): gossip blocks so peers can catch up quickly
- [Consensus](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/consensus): gossip votes and block parts so new blocks can be committed
- [Mempool](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/mempool): gossip transactions so they get included in blocks
- Evidence: Forthcoming, see [this issue](https://github.com/tendermint/tendermint/issues/2329).
### Software
- [ABCI](./software/abci.md): Details about interactions between the
- [ABCI](/docs/spec/software/abci.md): Details about interactions between the
application and consensus engine over ABCI
- [Write-Ahead Log](./software/wal.md): Details about how the consensus
- [Write-Ahead Log](/docs/spec/software/wal.md): Details about how the consensus
engine preserves data and recovers from crash failures
## Overview

View File

@@ -36,26 +36,22 @@ db_backend = "leveldb"
# Database directory
db_dir = "data"
# Output level for logging, including package level options
log_level = "main:info,state:info,*:error"
# Output level for logging
log_level = "state:info,*:error"
# Output format: 'plain' (colored text) or 'json'
log_format = "plain"
##### additional base config options #####
# The ID of the chain to join (should be signed with every transaction and vote)
chain_id = ""
# Path to the JSON file containing the initial validator set and other meta data
genesis_file = "config/genesis.json"
genesis_file = "genesis.json"
# Path to the JSON file containing the private key to use as a validator in the consensus protocol
priv_validator_file = "config/priv_validator.json"
# TCP or UNIX socket address for Tendermint to listen on for
# connections from an external PrivValidator process
priv_validator_laddr = ""
# Path to the JSON file containing the private key to use for node authentication in the p2p protocol
node_key_file = "config/node_key.json"
priv_validator_file = "priv_validator.json"
# Mechanism to connect to the ABCI application: socket | grpc
abci = "socket"
@@ -78,13 +74,13 @@ laddr = "tcp://0.0.0.0:26657"
# A list of origins a cross-domain request can be executed from
# Default value '[]' disables cors support
# Use '["*"]' to allow any origin
cors_allowed_origins = []
cors_allowed_origins = "[]"
# A list of methods the client is allowed to use with cross-domain requests
cors_allowed_methods = ["HEAD", "GET", "POST"]
cors_allowed_methods = "[HEAD GET POST]"
# A list of non simple headers the client is allowed to use with cross-domain requests
cors_allowed_headers = ["Origin", "Accept", "Content-Type", "X-Requested-With", "X-Server-Time"]
cors_allowed_headers = "[Origin Accept Content-Type X-Requested-With X-Server-Time]"
# TCP or UNIX socket address for the gRPC server to listen on
# NOTE: This server only supports /broadcast_tx_commit
@@ -92,7 +88,7 @@ grpc_laddr = ""
# Maximum number of simultaneous connections.
# Does not include RPC (HTTP&WebSocket) connections. See max_open_connections
# If you want to accept a larger number than the default, make sure
# If you want to accept more significant number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
# Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
@@ -104,7 +100,7 @@ unsafe = false
# Maximum number of simultaneous connections (including WebSocket).
# Does not include gRPC connections. See grpc_max_open_connections
# If you want to accept a larger number than the default, make sure
# If you want to accept more significant number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
# Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
@@ -117,12 +113,6 @@ max_open_connections = 900
# Address to listen for incoming connections
laddr = "tcp://0.0.0.0:26656"
# Address to advertise to peers for them to dial
# If empty, will use the same port as the laddr,
# and will introspect on the listener or use UPnP
# to figure out the address.
external_address = ""
# Comma separated list of seed nodes to connect to
seeds = ""
@@ -133,7 +123,7 @@ persistent_peers = ""
upnp = false
# Path to address book
addr_book_file = "config/addrbook.json"
addr_book_file = "addrbook.json"
# Set true for strict address routability rules
# Set false for private or local networks
@@ -181,26 +171,26 @@ dial_timeout = "3s"
recheck = true
broadcast = true
wal_dir = ""
wal_dir = "data/mempool.wal"
# size of the mempool
size = 5000
size = 100000
# size of the cache (used to filter transactions we saw earlier)
cache_size = 10000
cache_size = 100000
##### consensus configuration options #####
[consensus]
wal_file = "data/cs.wal/wal"
timeout_propose = "3s"
timeout_propose = "3000ms"
timeout_propose_delta = "500ms"
timeout_prevote = "1s"
timeout_prevote = "1000ms"
timeout_prevote_delta = "500ms"
timeout_precommit = "1s"
timeout_precommit = "1000ms"
timeout_precommit_delta = "500ms"
timeout_commit = "1s"
timeout_commit = "1000ms"
# Make progress as soon as we have all the precommits (as if TimeoutCommit = 0)
skip_timeout_commit = false
@@ -211,10 +201,10 @@ create_empty_blocks_interval = "0s"
# Reactor sleep duration parameters
peer_gossip_sleep_duration = "100ms"
peer_query_maj23_sleep_duration = "2s"
peer_query_maj23_sleep_duration = "2000ms"
# Block time parameters. Corresponds to the minimum time increment between consecutive blocks.
blocktime_iota = "1s"
blocktime_iota = "1000ms"
##### transactions indexer configuration options #####
[tx_index]
@@ -255,7 +245,7 @@ prometheus = false
prometheus_listen_addr = ":26660"
# Maximum number of simultaneous connections.
# If you want to accept a larger number than the default, make sure
# If you want to accept a more significant number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
max_open_connections = 3

View File

@@ -1,7 +1,6 @@
package log
import (
"io"
"os"
"testing"
@@ -20,22 +19,12 @@ var (
// inside a test (not in the init func) because
// verbose flag only set at the time of testing.
func TestingLogger() Logger {
return TestingLoggerWithOutput(os.Stdout)
}
// TestingLoggerWOutput returns a TMLogger which writes to (w io.Writer) if testing being run
// with the verbose (-v) flag, NopLogger otherwise.
//
// Note that the call to TestingLoggerWithOutput(w io.Writer) must be made
// inside a test (not in the init func) because
// verbose flag only set at the time of testing.
func TestingLoggerWithOutput(w io.Writer) Logger {
if _testingLogger != nil {
return _testingLogger
}
if testing.Verbose() {
_testingLogger = NewTMLogger(NewSyncWriter(w))
_testingLogger = NewTMLogger(NewSyncWriter(os.Stdout))
} else {
_testingLogger = NewNopLogger()
}

View File

@@ -108,10 +108,6 @@ func PostCheckMaxGas(maxGas int64) PostCheckFunc {
if maxGas == -1 {
return nil
}
if res.GasWanted < 0 {
return fmt.Errorf("gas wanted %d is negative",
res.GasWanted)
}
if res.GasWanted > maxGas {
return fmt.Errorf("gas wanted %d is greater than max gas %d",
res.GasWanted, maxGas)
@@ -490,15 +486,11 @@ func (mem *Mempool) ReapMaxBytesMaxGas(maxBytes, maxGas int64) types.Txs {
return txs
}
totalBytes += int64(len(memTx.tx)) + aminoOverhead
// Check total gas requirement.
// If maxGas is negative, skip this check.
// Since newTotalGas < masGas, which
// must be non-negative, it follows that this won't overflow.
newTotalGas := totalGas + memTx.gasWanted
if maxGas > -1 && newTotalGas > maxGas {
// Check total gas requirement
if maxGas > -1 && totalGas+memTx.gasWanted > maxGas {
return txs
}
totalGas = newTotalGas
totalGas += memTx.gasWanted
txs = append(txs, memTx.tx)
}
return txs
@@ -556,18 +548,13 @@ func (mem *Mempool) Update(
// Remove committed transactions.
txsLeft := mem.removeTxs(txs)
// Either recheck non-committed txs to see if they became invalid
// or just notify there're some txs left.
if len(txsLeft) > 0 {
if mem.config.Recheck {
mem.logger.Info("Recheck txs", "numtxs", len(txsLeft), "height", height)
mem.recheckTxs(txsLeft)
// At this point, mem.txs are being rechecked.
// mem.recheckCursor re-scans mem.txs and possibly removes some txs.
// Before mem.Reap(), we should wait for mem.recheckCursor to be nil.
} else {
mem.notifyTxsAvailable()
}
// Recheck mempool txs if any txs were committed in the block
if mem.config.Recheck && len(txsLeft) > 0 {
mem.logger.Info("Recheck txs", "numtxs", len(txsLeft), "height", height)
mem.recheckTxs(txsLeft)
// At this point, mem.txs are being rechecked.
// mem.recheckCursor re-scans mem.txs and possibly removes some txs.
// Before mem.Reap(), we should wait for mem.recheckCursor to be nil.
}
// Update metrics

View File

@@ -7,6 +7,7 @@ import (
"net"
"net/http"
_ "net/http/pprof"
"os"
"strings"
"time"
@@ -86,8 +87,26 @@ func DefaultNewNode(config *cfg.Config, logger log.Logger) (*Node, error) {
if err != nil {
return nil, err
}
// Convert old PrivValidator if it exists.
oldPrivVal := config.OldPrivValidatorFile()
newPrivValKey := config.PrivValidatorKeyFile()
newPrivValState := config.PrivValidatorStateFile()
if _, err := os.Stat(oldPrivVal); !os.IsNotExist(err) {
oldPV, err := privval.LoadOldFilePV(oldPrivVal)
if err != nil {
return nil, fmt.Errorf("Error reading OldPrivValidator from %v: %v\n", oldPrivVal, err)
}
logger.Info("Upgrading PrivValidator file",
"old", oldPrivVal,
"newKey", newPrivValKey,
"newState", newPrivValState,
)
oldPV.Upgrade(newPrivValKey, newPrivValState)
}
return NewNode(config,
privval.LoadOrGenFilePV(config.PrivValidatorFile()),
privval.LoadOrGenFilePV(newPrivValKey, newPrivValState),
nodeKey,
proxy.DefaultClientCreator(config.ProxyApp, config.ABCI, config.DBDir()),
DefaultGenesisDocProviderFunc(config),
@@ -210,18 +229,13 @@ func NewNode(config *cfg.Config,
// what happened during block replay).
state = sm.LoadState(stateDB)
// Log the version info.
logger.Info("Version info",
"software", version.TMCoreSemVer,
"block", version.BlockProtocol,
"p2p", version.P2PProtocol,
)
// If the state and software differ in block version, at least log it.
// Ensure the state's block version matches that of the software.
if state.Version.Consensus.Block != version.BlockProtocol {
logger.Info("Software and state have different block protocols",
"software", version.BlockProtocol,
"state", state.Version.Consensus.Block,
return nil, fmt.Errorf(
"Block version of the software does not match that of the state.\n"+
"Got version.BlockProtocol=%v, state.Version.Consensus.Block=%v",
version.BlockProtocol,
state.Version.Consensus.Block,
)
}
@@ -459,7 +473,7 @@ func NewNode(config *cfg.Config,
Seeds: splitAndTrimEmpty(config.P2P.Seeds, ",", " "),
SeedMode: config.P2P.SeedMode,
})
pexReactor.SetLogger(logger.With("module", "pex"))
pexReactor.SetLogger(p2pLogger)
sw.AddReactor("PEX", pexReactor)
}

View File

@@ -160,7 +160,6 @@ func NewMConnectionWithConfig(conn net.Conn, chDescs []*ChannelDescriptor, onRec
onReceive: onReceive,
onError: onError,
config: config,
created: time.Now(),
}
// Create channels

View File

@@ -98,15 +98,13 @@ func (ps *PeerSet) Get(peerKey ID) Peer {
}
// Remove discards peer by its Key, if the peer was previously memoized.
// Returns true if the peer was removed, and false if it was not found.
// in the set.
func (ps *PeerSet) Remove(peer Peer) bool {
func (ps *PeerSet) Remove(peer Peer) {
ps.mtx.Lock()
defer ps.mtx.Unlock()
item := ps.lookup[peer.ID()]
if item == nil {
return false
return
}
index := item.index
@@ -118,7 +116,7 @@ func (ps *PeerSet) Remove(peer Peer) bool {
if index == len(ps.list)-1 {
ps.list = newList
delete(ps.lookup, peer.ID())
return true
return
}
// Replace the popped item with the last item in the old list.
@@ -129,7 +127,6 @@ func (ps *PeerSet) Remove(peer Peer) bool {
lastPeerItem.index = index
ps.list = newList
delete(ps.lookup, peer.ID())
return true
}
// Size returns the number of unique items in the peerSet.

View File

@@ -60,15 +60,13 @@ func TestPeerSetAddRemoveOne(t *testing.T) {
n := len(peerList)
// 1. Test removing from the front
for i, peerAtFront := range peerList {
removed := peerSet.Remove(peerAtFront)
assert.True(t, removed)
peerSet.Remove(peerAtFront)
wantSize := n - i - 1
for j := 0; j < 2; j++ {
assert.Equal(t, false, peerSet.Has(peerAtFront.ID()), "#%d Run #%d: failed to remove peer", i, j)
assert.Equal(t, wantSize, peerSet.Size(), "#%d Run #%d: failed to remove peer and decrement size", i, j)
// Test the route of removing the now non-existent element
removed := peerSet.Remove(peerAtFront)
assert.False(t, removed)
peerSet.Remove(peerAtFront)
}
}
@@ -83,8 +81,7 @@ func TestPeerSetAddRemoveOne(t *testing.T) {
// b) In reverse, remove each element
for i := n - 1; i >= 0; i-- {
peerAtEnd := peerList[i]
removed := peerSet.Remove(peerAtEnd)
assert.True(t, removed)
peerSet.Remove(peerAtEnd)
assert.Equal(t, false, peerSet.Has(peerAtEnd.ID()), "#%d: failed to remove item at end", i)
assert.Equal(t, i, peerSet.Size(), "#%d: differing sizes after peerSet.Remove(atEndPeer)", i)
}
@@ -108,8 +105,7 @@ func TestPeerSetAddRemoveMany(t *testing.T) {
}
for i, peer := range peers {
removed := peerSet.Remove(peer)
assert.True(t, removed)
peerSet.Remove(peer)
if peerSet.Has(peer.ID()) {
t.Errorf("Failed to remove peer")
}

View File

@@ -211,9 +211,7 @@ func (sw *Switch) OnStop() {
// Stop peers
for _, p := range sw.peers.List() {
p.Stop()
if sw.peers.Remove(p) {
sw.metrics.Peers.Add(float64(-1))
}
sw.peers.Remove(p)
}
// Stop reactors
@@ -301,9 +299,8 @@ func (sw *Switch) StopPeerGracefully(peer Peer) {
}
func (sw *Switch) stopAndRemovePeer(peer Peer, reason interface{}) {
if sw.peers.Remove(peer) {
sw.metrics.Peers.Add(float64(-1))
}
sw.peers.Remove(peer)
sw.metrics.Peers.Add(float64(-1))
peer.Stop()
for _, reactor := range sw.reactors {
reactor.RemovePeer(peer, reason)
@@ -508,12 +505,6 @@ func (sw *Switch) acceptRoutine() {
"err", err,
"numPeers", sw.peers.Size(),
)
// We could instead have a retry loop around the acceptRoutine,
// but that would need to stop and let the node shutdown eventually.
// So might as well panic and let process managers restart the node.
// There's no point in letting the node run without the acceptRoutine,
// since it won't be able to accept new connections.
panic(fmt.Errorf("accept routine exited: %v", err))
}
break

View File

@@ -3,17 +3,10 @@ package p2p
import (
"bytes"
"fmt"
"io/ioutil"
"net/http"
"net/http/httptest"
"regexp"
"strconv"
"sync"
"testing"
"time"
stdprometheus "github.com/prometheus/client_golang/prometheus"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -342,54 +335,6 @@ func TestSwitchStopsNonPersistentPeerOnError(t *testing.T) {
assert.False(p.IsRunning())
}
func TestSwitchStopPeerForError(t *testing.T) {
s := httptest.NewServer(stdprometheus.UninstrumentedHandler())
defer s.Close()
scrapeMetrics := func() string {
resp, _ := http.Get(s.URL)
buf, _ := ioutil.ReadAll(resp.Body)
return string(buf)
}
namespace, subsystem, name := config.TestInstrumentationConfig().Namespace, MetricsSubsystem, "peers"
re := regexp.MustCompile(namespace + `_` + subsystem + `_` + name + ` ([0-9\.]+)`)
peersMetricValue := func() float64 {
matches := re.FindStringSubmatch(scrapeMetrics())
f, _ := strconv.ParseFloat(matches[1], 64)
return f
}
p2pMetrics := PrometheusMetrics(namespace)
// make two connected switches
sw1, sw2 := MakeSwitchPair(t, func(i int, sw *Switch) *Switch {
// set metrics on sw1
if i == 0 {
opt := WithMetrics(p2pMetrics)
opt(sw)
}
return initSwitchFunc(i, sw)
})
assert.Equal(t, len(sw1.Peers().List()), 1)
assert.EqualValues(t, 1, peersMetricValue())
// send messages to the peer from sw1
p := sw1.Peers().List()[0]
p.Send(0x1, []byte("here's a message to send"))
// stop sw2. this should cause the p to fail,
// which results in calling StopPeerForError internally
sw2.Stop()
// now call StopPeerForError explicitly, eg. from a reactor
sw1.StopPeerForError(p, fmt.Errorf("some err"))
assert.Equal(t, len(sw1.Peers().List()), 0)
assert.EqualValues(t, 0, peersMetricValue())
}
func TestSwitchReconnectsToPersistentPeer(t *testing.T) {
assert, require := assert.New(t), require.New(t)

View File

@@ -184,7 +184,7 @@ func MakeSwitch(
// TODO: let the config be passed in?
sw := initSwitch(i, NewSwitch(cfg, t, opts...))
sw.SetLogger(log.TestingLogger().With("switch", i))
sw.SetLogger(log.TestingLogger())
sw.SetNodeKey(&nodeKey)
ni := nodeInfo.(DefaultNodeInfo)

View File

@@ -0,0 +1,80 @@
package privval
import (
"io/ioutil"
"os"
"github.com/tendermint/tendermint/crypto"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/types"
)
// OldFilePV is the old version of the FilePV, pre v0.28.0.
type OldFilePV struct {
Address types.Address `json:"address"`
PubKey crypto.PubKey `json:"pub_key"`
LastHeight int64 `json:"last_height"`
LastRound int `json:"last_round"`
LastStep int8 `json:"last_step"`
LastSignature []byte `json:"last_signature,omitempty"`
LastSignBytes cmn.HexBytes `json:"last_signbytes,omitempty"`
PrivKey crypto.PrivKey `json:"priv_key"`
filePath string
}
// LoadOldFilePV loads an OldFilePV from the filePath.
func LoadOldFilePV(filePath string) (*OldFilePV, error) {
pvJSONBytes, err := ioutil.ReadFile(filePath)
if err != nil {
return nil, err
}
pv := &OldFilePV{}
err = cdc.UnmarshalJSON(pvJSONBytes, &pv)
if err != nil {
return nil, err
}
// overwrite pubkey and address for convenience
pv.PubKey = pv.PrivKey.PubKey()
pv.Address = pv.PubKey.Address()
pv.filePath = filePath
return pv, nil
}
// Upgrade converts the OldFilePV to the new FilePV, separating the immutable and mutable components,
// and persisting them to the keyFilePath and stateFilePath, respectively.
// It renames the original file by adding ".bak".
func (oldFilePV *OldFilePV) Upgrade(keyFilePath, stateFilePath string) *FilePV {
privKey := oldFilePV.PrivKey
pvKey := FilePVKey{
PrivKey: privKey,
PubKey: privKey.PubKey(),
Address: privKey.PubKey().Address(),
filePath: keyFilePath,
}
pvState := FilePVLastSignState{
Height: oldFilePV.LastHeight,
Round: oldFilePV.LastRound,
Step: oldFilePV.LastStep,
Signature: oldFilePV.LastSignature,
SignBytes: oldFilePV.LastSignBytes,
filePath: stateFilePath,
}
// Save the new PV files
pv := &FilePV{
Key: pvKey,
LastSignState: pvState,
}
pv.Save()
// Rename the old PV file
err := os.Rename(oldFilePV.filePath, oldFilePV.filePath+".bak")
if err != nil {
panic(err)
}
return pv
}
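Taken together, LoadOldFilePV and Upgrade give a one-shot migration path: read the legacy priv_validator.json, write the two new files, and rename the original with a ".bak" suffix. A minimal sketch of driving them by hand, assuming the default file locations used elsewhere in this changeset (DefaultNewNode and the scripts/privValUpgrade.go tool below do essentially the same thing):

```go
package main

import (
	"fmt"
	"os"

	"github.com/tendermint/tendermint/privval"
)

func main() {
	// Illustrative paths only; they mirror the defaults used elsewhere in this changeset.
	oldPath := os.ExpandEnv("$HOME/.tendermint/config/priv_validator.json")
	keyPath := os.ExpandEnv("$HOME/.tendermint/config/priv_validator_key.json")
	statePath := os.ExpandEnv("$HOME/.tendermint/data/priv_validator_state.json")

	oldPV, err := privval.LoadOldFilePV(oldPath)
	if err != nil {
		fmt.Println("nothing to upgrade:", err)
		os.Exit(1)
	}

	// Upgrade writes the two new files and renames the old one to priv_validator.json.bak.
	pv := oldPV.Upgrade(keyPath, statePath)
	fmt.Println("upgraded validator", pv.GetAddress())
}
```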

View File

@@ -5,7 +5,6 @@ import (
"errors"
"fmt"
"io/ioutil"
"sync"
"time"
"github.com/tendermint/tendermint/crypto"
@@ -35,100 +34,90 @@ func voteToStep(vote *types.Vote) int8 {
}
}
// FilePV implements PrivValidator using data persisted to disk
// to prevent double signing.
// NOTE: the directory containing the pv.filePath must already exist.
// It includes the LastSignature and LastSignBytes so we don't lose the signature
// if the process crashes after signing but before the resulting consensus message is processed.
type FilePV struct {
Address types.Address `json:"address"`
PubKey crypto.PubKey `json:"pub_key"`
LastHeight int64 `json:"last_height"`
LastRound int `json:"last_round"`
LastStep int8 `json:"last_step"`
LastSignature []byte `json:"last_signature,omitempty"`
LastSignBytes cmn.HexBytes `json:"last_signbytes,omitempty"`
PrivKey crypto.PrivKey `json:"priv_key"`
//-------------------------------------------------------------------------------
// FilePVKey stores the immutable part of PrivValidator.
type FilePVKey struct {
Address types.Address `json:"address"`
PubKey crypto.PubKey `json:"pub_key"`
PrivKey crypto.PrivKey `json:"priv_key"`
// For persistence.
// Overloaded for testing.
filePath string
mtx sync.Mutex
}
// GetAddress returns the address of the validator.
// Implements PrivValidator.
func (pv *FilePV) GetAddress() types.Address {
return pv.Address
}
// GetPubKey returns the public key of the validator.
// Implements PrivValidator.
func (pv *FilePV) GetPubKey() crypto.PubKey {
return pv.PubKey
}
// GenFilePV generates a new validator with randomly generated private key
// and sets the filePath, but does not call Save().
func GenFilePV(filePath string) *FilePV {
privKey := ed25519.GenPrivKey()
return &FilePV{
Address: privKey.PubKey().Address(),
PubKey: privKey.PubKey(),
PrivKey: privKey,
LastStep: stepNone,
filePath: filePath,
}
}
// LoadFilePV loads a FilePV from the filePath. The FilePV handles double
// signing prevention by persisting data to the filePath. If the filePath does
// not exist, the FilePV must be created manually and saved.
func LoadFilePV(filePath string) *FilePV {
pvJSONBytes, err := ioutil.ReadFile(filePath)
if err != nil {
cmn.Exit(err.Error())
}
pv := &FilePV{}
err = cdc.UnmarshalJSON(pvJSONBytes, &pv)
if err != nil {
cmn.Exit(fmt.Sprintf("Error reading PrivValidator from %v: %v\n", filePath, err))
}
// overwrite pubkey and address for convenience
pv.PubKey = pv.PrivKey.PubKey()
pv.Address = pv.PubKey.Address()
pv.filePath = filePath
return pv
}
// LoadOrGenFilePV loads a FilePV from the given filePath
// or else generates a new one and saves it to the filePath.
func LoadOrGenFilePV(filePath string) *FilePV {
var pv *FilePV
if cmn.FileExists(filePath) {
pv = LoadFilePV(filePath)
} else {
pv = GenFilePV(filePath)
pv.Save()
}
return pv
}
// Save persists the FilePV to disk.
func (pv *FilePV) Save() {
pv.mtx.Lock()
defer pv.mtx.Unlock()
pv.save()
}
func (pv *FilePV) save() {
outFile := pv.filePath
// Save persists the FilePVKey to its filePath.
func (pvKey FilePVKey) Save() {
outFile := pvKey.filePath
if outFile == "" {
panic("Cannot save PrivValidator: filePath not set")
panic("Cannot save PrivValidator key: filePath not set")
}
jsonBytes, err := cdc.MarshalJSONIndent(pv, "", " ")
jsonBytes, err := cdc.MarshalJSONIndent(pvKey, "", " ")
if err != nil {
panic(err)
}
err = cmn.WriteFileAtomic(outFile, jsonBytes, 0600)
if err != nil {
panic(err)
}
}
//-------------------------------------------------------------------------------
// FilePVLastSignState stores the mutable part of PrivValidator.
type FilePVLastSignState struct {
Height int64 `json:"height"`
Round int `json:"round"`
Step int8 `json:"step"`
Signature []byte `json:"signature,omitempty"`
SignBytes cmn.HexBytes `json:"signbytes,omitempty"`
filePath string
}
// CheckHRS checks the given height, round, step (HRS) against that of the
// FilePVLastSignState. It returns an error if the arguments constitute a regression,
// or if they match but the SignBytes are empty.
// The returned boolean indicates whether the last Signature should be reused -
// it returns true if the HRS matches the arguments and the SignBytes are not empty (indicating
// we have already signed for this HRS, and can reuse the existing signature).
// It panics if the HRS matches the arguments, there's a SignBytes, but no Signature.
func (lss *FilePVLastSignState) CheckHRS(height int64, round int, step int8) (bool, error) {
if lss.Height > height {
return false, errors.New("Height regression")
}
if lss.Height == height {
if lss.Round > round {
return false, errors.New("Round regression")
}
if lss.Round == round {
if lss.Step > step {
return false, errors.New("Step regression")
} else if lss.Step == step {
if lss.SignBytes != nil {
if lss.Signature == nil {
panic("pv: Signature is nil but SignBytes is not!")
}
return true, nil
}
return false, errors.New("No SignBytes found")
}
}
}
return false, nil
}
// Save persists the FilePVLastSignState to its filePath.
func (lss *FilePVLastSignState) Save() {
outFile := lss.filePath
if outFile == "" {
panic("Cannot save FilePVLastSignState: filePath not set")
}
jsonBytes, err := cdc.MarshalJSONIndent(lss, "", " ")
if err != nil {
panic(err)
}
@@ -138,23 +127,102 @@ func (pv *FilePV) save() {
}
}
// Reset resets all fields in the FilePV.
// NOTE: Unsafe!
func (pv *FilePV) Reset() {
var sig []byte
pv.LastHeight = 0
pv.LastRound = 0
pv.LastStep = 0
pv.LastSignature = sig
pv.LastSignBytes = nil
pv.Save()
//-------------------------------------------------------------------------------
// FilePV implements PrivValidator using data persisted to disk
// to prevent double signing.
// NOTE: the directories containing pv.Key.filePath and pv.LastSignState.filePath must already exist.
// It includes the LastSignature and LastSignBytes so we don't lose the signature
// if the process crashes after signing but before the resulting consensus message is processed.
type FilePV struct {
Key FilePVKey
LastSignState FilePVLastSignState
}
// GenFilePV generates a new validator with randomly generated private key
// and sets the filePaths, but does not call Save().
func GenFilePV(keyFilePath, stateFilePath string) *FilePV {
privKey := ed25519.GenPrivKey()
return &FilePV{
Key: FilePVKey{
Address: privKey.PubKey().Address(),
PubKey: privKey.PubKey(),
PrivKey: privKey,
filePath: keyFilePath,
},
LastSignState: FilePVLastSignState{
Step: stepNone,
filePath: stateFilePath,
},
}
}
// LoadFilePV loads a FilePV from the filePaths. The FilePV handles double
// signing prevention by persisting data to the stateFilePath. If the filePaths
// do not exist, the FilePV must be created manually and saved.
func LoadFilePV(keyFilePath, stateFilePath string) *FilePV {
keyJSONBytes, err := ioutil.ReadFile(keyFilePath)
if err != nil {
cmn.Exit(err.Error())
}
pvKey := FilePVKey{}
err = cdc.UnmarshalJSON(keyJSONBytes, &pvKey)
if err != nil {
cmn.Exit(fmt.Sprintf("Error reading PrivValidator key from %v: %v\n", keyFilePath, err))
}
// overwrite pubkey and address for convenience
pvKey.PubKey = pvKey.PrivKey.PubKey()
pvKey.Address = pvKey.PubKey.Address()
pvKey.filePath = keyFilePath
stateJSONBytes, err := ioutil.ReadFile(stateFilePath)
if err != nil {
cmn.Exit(err.Error())
}
pvState := FilePVLastSignState{}
err = cdc.UnmarshalJSON(stateJSONBytes, &pvState)
if err != nil {
cmn.Exit(fmt.Sprintf("Error reading PrivValidator state from %v: %v\n", stateFilePath, err))
}
pvState.filePath = stateFilePath
return &FilePV{
Key: pvKey,
LastSignState: pvState,
}
}
// LoadOrGenFilePV loads a FilePV from the given filePaths
// or else generates a new one and saves it to the filePaths.
func LoadOrGenFilePV(keyFilePath, stateFilePath string) *FilePV {
var pv *FilePV
if cmn.FileExists(keyFilePath) {
pv = LoadFilePV(keyFilePath, stateFilePath)
} else {
pv = GenFilePV(keyFilePath, stateFilePath)
pv.Save()
}
return pv
}
// GetAddress returns the address of the validator.
// Implements PrivValidator.
func (pv *FilePV) GetAddress() types.Address {
return pv.Key.Address
}
// GetPubKey returns the public key of the validator.
// Implements PrivValidator.
func (pv *FilePV) GetPubKey() crypto.PubKey {
return pv.Key.PubKey
}
// SignVote signs a canonical representation of the vote, along with the
// chainID. Implements PrivValidator.
func (pv *FilePV) SignVote(chainID string, vote *types.Vote) error {
pv.mtx.Lock()
defer pv.mtx.Unlock()
if err := pv.signVote(chainID, vote); err != nil {
return fmt.Errorf("Error signing vote: %v", err)
}
@@ -164,65 +232,63 @@ func (pv *FilePV) SignVote(chainID string, vote *types.Vote) error {
// SignProposal signs a canonical representation of the proposal, along with
// the chainID. Implements PrivValidator.
func (pv *FilePV) SignProposal(chainID string, proposal *types.Proposal) error {
pv.mtx.Lock()
defer pv.mtx.Unlock()
if err := pv.signProposal(chainID, proposal); err != nil {
return fmt.Errorf("Error signing proposal: %v", err)
}
return nil
}
// returns error if HRS regression or no LastSignBytes. returns true if HRS is unchanged
func (pv *FilePV) checkHRS(height int64, round int, step int8) (bool, error) {
if pv.LastHeight > height {
return false, errors.New("Height regression")
}
if pv.LastHeight == height {
if pv.LastRound > round {
return false, errors.New("Round regression")
}
if pv.LastRound == round {
if pv.LastStep > step {
return false, errors.New("Step regression")
} else if pv.LastStep == step {
if pv.LastSignBytes != nil {
if pv.LastSignature == nil {
panic("pv: LastSignature is nil but LastSignBytes is not!")
}
return true, nil
}
return false, errors.New("No LastSignature found")
}
}
}
return false, nil
// Save persists the FilePV to disk.
func (pv *FilePV) Save() {
pv.Key.Save()
pv.LastSignState.Save()
}
// Reset resets all fields in the FilePV.
// NOTE: Unsafe!
func (pv *FilePV) Reset() {
var sig []byte
pv.LastSignState.Height = 0
pv.LastSignState.Round = 0
pv.LastSignState.Step = 0
pv.LastSignState.Signature = sig
pv.LastSignState.SignBytes = nil
pv.Save()
}
// String returns a string representation of the FilePV.
func (pv *FilePV) String() string {
return fmt.Sprintf("PrivValidator{%v LH:%v, LR:%v, LS:%v}", pv.GetAddress(), pv.LastSignState.Height, pv.LastSignState.Round, pv.LastSignState.Step)
}
//------------------------------------------------------------------------------------
// signVote checks if the vote is good to sign and sets the vote signature.
// It may need to set the timestamp as well if the vote is otherwise the same as
// a previously signed vote (ie. we crashed after signing but before the vote hit the WAL).
func (pv *FilePV) signVote(chainID string, vote *types.Vote) error {
height, round, step := vote.Height, vote.Round, voteToStep(vote)
signBytes := vote.SignBytes(chainID)
sameHRS, err := pv.checkHRS(height, round, step)
lss := pv.LastSignState
sameHRS, err := lss.CheckHRS(height, round, step)
if err != nil {
return err
}
signBytes := vote.SignBytes(chainID)
// We might crash before writing to the wal,
// causing us to try to re-sign for the same HRS.
// If signbytes are the same, use the last signature.
// If they only differ by timestamp, use last timestamp and signature
// Otherwise, return error
if sameHRS {
if bytes.Equal(signBytes, pv.LastSignBytes) {
vote.Signature = pv.LastSignature
} else if timestamp, ok := checkVotesOnlyDifferByTimestamp(pv.LastSignBytes, signBytes); ok {
if bytes.Equal(signBytes, lss.SignBytes) {
vote.Signature = lss.Signature
} else if timestamp, ok := checkVotesOnlyDifferByTimestamp(lss.SignBytes, signBytes); ok {
vote.Timestamp = timestamp
vote.Signature = pv.LastSignature
vote.Signature = lss.Signature
} else {
err = fmt.Errorf("Conflicting data")
}
@@ -230,7 +296,7 @@ func (pv *FilePV) signVote(chainID string, vote *types.Vote) error {
}
// It passed the checks. Sign the vote
sig, err := pv.PrivKey.Sign(signBytes)
sig, err := pv.Key.PrivKey.Sign(signBytes)
if err != nil {
return err
}
@@ -244,24 +310,27 @@ func (pv *FilePV) signVote(chainID string, vote *types.Vote) error {
// a previously signed proposal ie. we crashed after signing but before the proposal hit the WAL).
func (pv *FilePV) signProposal(chainID string, proposal *types.Proposal) error {
height, round, step := proposal.Height, proposal.Round, stepPropose
signBytes := proposal.SignBytes(chainID)
sameHRS, err := pv.checkHRS(height, round, step)
lss := pv.LastSignState
sameHRS, err := lss.CheckHRS(height, round, step)
if err != nil {
return err
}
signBytes := proposal.SignBytes(chainID)
// We might crash before writing to the wal,
// causing us to try to re-sign for the same HRS.
// If signbytes are the same, use the last signature.
// If they only differ by timestamp, use last timestamp and signature
// Otherwise, return error
if sameHRS {
if bytes.Equal(signBytes, pv.LastSignBytes) {
proposal.Signature = pv.LastSignature
} else if timestamp, ok := checkProposalsOnlyDifferByTimestamp(pv.LastSignBytes, signBytes); ok {
if bytes.Equal(signBytes, lss.SignBytes) {
proposal.Signature = lss.Signature
} else if timestamp, ok := checkProposalsOnlyDifferByTimestamp(lss.SignBytes, signBytes); ok {
proposal.Timestamp = timestamp
proposal.Signature = pv.LastSignature
proposal.Signature = lss.Signature
} else {
err = fmt.Errorf("Conflicting data")
}
@@ -269,7 +338,7 @@ func (pv *FilePV) signProposal(chainID string, proposal *types.Proposal) error {
}
// It passed the checks. Sign the proposal
sig, err := pv.PrivKey.Sign(signBytes)
sig, err := pv.Key.PrivKey.Sign(signBytes)
if err != nil {
return err
}
@@ -282,20 +351,15 @@ func (pv *FilePV) signProposal(chainID string, proposal *types.Proposal) error {
func (pv *FilePV) saveSigned(height int64, round int, step int8,
signBytes []byte, sig []byte) {
pv.LastHeight = height
pv.LastRound = round
pv.LastStep = step
pv.LastSignature = sig
pv.LastSignBytes = signBytes
pv.save()
pv.LastSignState.Height = height
pv.LastSignState.Round = round
pv.LastSignState.Step = step
pv.LastSignState.Signature = sig
pv.LastSignState.SignBytes = signBytes
pv.LastSignState.Save()
}
// String returns a string representation of the FilePV.
func (pv *FilePV) String() string {
return fmt.Sprintf("PrivValidator{%v LH:%v, LR:%v, LS:%v}", pv.GetAddress(), pv.LastHeight, pv.LastRound, pv.LastStep)
}
//-------------------------------------
//-----------------------------------------------------------------------------------------
// returns the timestamp from the lastSignBytes.
// returns true if the only difference in the votes is their timestamp.
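Downstream of this split, every caller now passes two paths where it previously passed one, and only the state file is rewritten on each signature. A small usage sketch of the new API, assuming the default layout from this changeset (key material under config/, last-sign state under data/; the home directory here is hypothetical):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"github.com/tendermint/tendermint/privval"
)

func main() {
	home := "/tmp/tmhome" // hypothetical home directory for the example
	keyFile := filepath.Join(home, "config", "priv_validator_key.json")
	stateFile := filepath.Join(home, "data", "priv_validator_state.json")

	// The directories must already exist; LoadOrGenFilePV does not create them.
	for _, dir := range []string{filepath.Dir(keyFile), filepath.Dir(stateFile)} {
		if err := os.MkdirAll(dir, 0700); err != nil {
			panic(err)
		}
	}

	// Generates a fresh key and state if the key file is missing, otherwise loads both.
	pv := privval.LoadOrGenFilePV(keyFile, stateFile)

	fmt.Println("address:", pv.GetAddress())
	fmt.Println("pubkey: ", pv.GetPubKey())

	// Signing writes only to the state file (see saveSigned above), so the key file
	// stays untouched after it is first created.
	fmt.Println("last signed height:", pv.LastSignState.Height)
}
```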

View File

@@ -18,36 +18,72 @@ import (
func TestGenLoadValidator(t *testing.T) {
assert := assert.New(t)
tempFile, err := ioutil.TempFile("", "priv_validator_")
tempKeyFile, err := ioutil.TempFile("", "priv_validator_key_")
require.Nil(t, err)
privVal := GenFilePV(tempFile.Name())
tempStateFile, err := ioutil.TempFile("", "priv_validator_state_")
require.Nil(t, err)
privVal := GenFilePV(tempKeyFile.Name(), tempStateFile.Name())
height := int64(100)
privVal.LastHeight = height
privVal.LastSignState.Height = height
privVal.Save()
addr := privVal.GetAddress()
privVal = LoadFilePV(tempFile.Name())
privVal = LoadFilePV(tempKeyFile.Name(), tempStateFile.Name())
assert.Equal(addr, privVal.GetAddress(), "expected privval addr to be the same")
assert.Equal(height, privVal.LastHeight, "expected privval.LastHeight to have been saved")
assert.Equal(height, privVal.LastSignState.Height, "expected privval.LastHeight to have been saved")
}
func TestLoadOrGenValidator(t *testing.T) {
assert := assert.New(t)
tempFile, err := ioutil.TempFile("", "priv_validator_")
tempKeyFile, err := ioutil.TempFile("", "priv_validator_key_")
require.Nil(t, err)
tempFilePath := tempFile.Name()
if err := os.Remove(tempFilePath); err != nil {
tempStateFile, err := ioutil.TempFile("", "priv_validator_state_")
require.Nil(t, err)
tempKeyFilePath := tempKeyFile.Name()
if err := os.Remove(tempKeyFilePath); err != nil {
t.Error(err)
}
privVal := LoadOrGenFilePV(tempFilePath)
tempStateFilePath := tempStateFile.Name()
if err := os.Remove(tempStateFilePath); err != nil {
t.Error(err)
}
privVal := LoadOrGenFilePV(tempKeyFilePath, tempStateFilePath)
addr := privVal.GetAddress()
privVal = LoadOrGenFilePV(tempFilePath)
privVal = LoadOrGenFilePV(tempKeyFilePath, tempStateFilePath)
assert.Equal(addr, privVal.GetAddress(), "expected privval addr to be the same")
}
func TestUnmarshalValidator(t *testing.T) {
func TestUnmarshalValidatorState(t *testing.T) {
assert, require := assert.New(t), require.New(t)
// create some fixed values
serialized := `{
"height": "1",
"round": "1",
"step": 1
}`
val := FilePVLastSignState{}
err := cdc.UnmarshalJSON([]byte(serialized), &val)
require.Nil(err, "%+v", err)
// make sure the values match
assert.EqualValues(val.Height, 1)
assert.EqualValues(val.Round, 1)
assert.EqualValues(val.Step, 1)
// export it and make sure it is the same
out, err := cdc.MarshalJSON(val)
require.Nil(err, "%+v", err)
assert.JSONEq(serialized, string(out))
}
func TestUnmarshalValidatorKey(t *testing.T) {
assert, require := assert.New(t), require.New(t)
// create some fixed values
@@ -67,22 +103,19 @@ func TestUnmarshalValidator(t *testing.T) {
"type": "tendermint/PubKeyEd25519",
"value": "%s"
},
"last_height": "0",
"last_round": "0",
"last_step": 0,
"priv_key": {
"type": "tendermint/PrivKeyEd25519",
"value": "%s"
}
}`, addr, pubB64, privB64)
val := FilePV{}
val := FilePVKey{}
err := cdc.UnmarshalJSON([]byte(serialized), &val)
require.Nil(err, "%+v", err)
// make sure the values match
assert.EqualValues(addr, val.GetAddress())
assert.EqualValues(pubKey, val.GetPubKey())
assert.EqualValues(addr, val.Address)
assert.EqualValues(pubKey, val.PubKey)
assert.EqualValues(privKey, val.PrivKey)
// export it and make sure it is the same
@@ -94,9 +127,12 @@ func TestUnmarshalValidator(t *testing.T) {
func TestSignVote(t *testing.T) {
assert := assert.New(t)
tempFile, err := ioutil.TempFile("", "priv_validator_")
tempKeyFile, err := ioutil.TempFile("", "priv_validator_key_")
require.Nil(t, err)
privVal := GenFilePV(tempFile.Name())
tempStateFile, err := ioutil.TempFile("", "priv_validator_state_")
require.Nil(t, err)
privVal := GenFilePV(tempKeyFile.Name(), tempStateFile.Name())
block1 := types.BlockID{[]byte{1, 2, 3}, types.PartSetHeader{}}
block2 := types.BlockID{[]byte{3, 2, 1}, types.PartSetHeader{}}
@@ -104,7 +140,7 @@ func TestSignVote(t *testing.T) {
voteType := byte(types.PrevoteType)
// sign a vote for first time
vote := newVote(privVal.Address, 0, height, round, voteType, block1)
vote := newVote(privVal.Key.Address, 0, height, round, voteType, block1)
err = privVal.SignVote("mychainid", vote)
assert.NoError(err, "expected no error signing vote")
@@ -114,10 +150,10 @@ func TestSignVote(t *testing.T) {
// now try some bad votes
cases := []*types.Vote{
newVote(privVal.Address, 0, height, round-1, voteType, block1), // round regression
newVote(privVal.Address, 0, height-1, round, voteType, block1), // height regression
newVote(privVal.Address, 0, height-2, round+4, voteType, block1), // height regression and different round
newVote(privVal.Address, 0, height, round, voteType, block2), // different block
newVote(privVal.Key.Address, 0, height, round-1, voteType, block1), // round regression
newVote(privVal.Key.Address, 0, height-1, round, voteType, block1), // height regression
newVote(privVal.Key.Address, 0, height-2, round+4, voteType, block1), // height regression and different round
newVote(privVal.Key.Address, 0, height, round, voteType, block2), // different block
}
for _, c := range cases {
@@ -136,9 +172,12 @@ func TestSignVote(t *testing.T) {
func TestSignProposal(t *testing.T) {
assert := assert.New(t)
tempFile, err := ioutil.TempFile("", "priv_validator_")
tempKeyFile, err := ioutil.TempFile("", "priv_validator_key_")
require.Nil(t, err)
privVal := GenFilePV(tempFile.Name())
tempStateFile, err := ioutil.TempFile("", "priv_validator_state_")
require.Nil(t, err)
privVal := GenFilePV(tempKeyFile.Name(), tempStateFile.Name())
block1 := types.BlockID{[]byte{1, 2, 3}, types.PartSetHeader{5, []byte{1, 2, 3}}}
block2 := types.BlockID{[]byte{3, 2, 1}, types.PartSetHeader{10, []byte{3, 2, 1}}}
@@ -175,9 +214,12 @@ func TestSignProposal(t *testing.T) {
}
func TestDifferByTimestamp(t *testing.T) {
tempFile, err := ioutil.TempFile("", "priv_validator_")
tempKeyFile, err := ioutil.TempFile("", "priv_validator_key_")
require.Nil(t, err)
privVal := GenFilePV(tempFile.Name())
tempStateFile, err := ioutil.TempFile("", "priv_validator_state_")
require.Nil(t, err)
privVal := GenFilePV(tempKeyFile.Name(), tempStateFile.Name())
block1 := types.BlockID{[]byte{1, 2, 3}, types.PartSetHeader{5, []byte{1, 2, 3}}}
height, round := int64(10), 1
@@ -208,7 +250,7 @@ func TestDifferByTimestamp(t *testing.T) {
{
voteType := byte(types.PrevoteType)
blockID := types.BlockID{[]byte{1, 2, 3}, types.PartSetHeader{}}
vote := newVote(privVal.Address, 0, height, round, voteType, blockID)
vote := newVote(privVal.Key.Address, 0, height, round, voteType, blockID)
err := privVal.SignVote("mychainid", vote)
assert.NoError(t, err, "expected no error signing vote")

View File

@@ -109,24 +109,6 @@ func (c *HTTP) broadcastTX(route string, tx types.Tx) (*ctypes.ResultBroadcastTx
return result, nil
}
func (c *HTTP) UnconfirmedTxs(limit int) (*ctypes.ResultUnconfirmedTxs, error) {
result := new(ctypes.ResultUnconfirmedTxs)
_, err := c.rpc.Call("unconfirmed_txs", map[string]interface{}{"limit": limit}, result)
if err != nil {
return nil, errors.Wrap(err, "unconfirmed_txs")
}
return result, nil
}
func (c *HTTP) NumUnconfirmedTxs() (*ctypes.ResultUnconfirmedTxs, error) {
result := new(ctypes.ResultUnconfirmedTxs)
_, err := c.rpc.Call("num_unconfirmed_txs", map[string]interface{}{}, result)
if err != nil {
return nil, errors.Wrap(err, "num_unconfirmed_txs")
}
return result, nil
}
func (c *HTTP) NetInfo() (*ctypes.ResultNetInfo, error) {
result := new(ctypes.ResultNetInfo)
_, err := c.rpc.Call("net_info", map[string]interface{}{}, result)

View File

@@ -93,9 +93,3 @@ type NetworkClient interface {
type EventsClient interface {
types.EventBusSubscriber
}
// MempoolClient shows us data about current mempool state.
type MempoolClient interface {
UnconfirmedTxs(limit int) (*ctypes.ResultUnconfirmedTxs, error)
NumUnconfirmedTxs() (*ctypes.ResultUnconfirmedTxs, error)
}

View File

@@ -76,14 +76,6 @@ func (Local) BroadcastTxSync(tx types.Tx) (*ctypes.ResultBroadcastTx, error) {
return core.BroadcastTxSync(tx)
}
func (Local) UnconfirmedTxs(limit int) (*ctypes.ResultUnconfirmedTxs, error) {
return core.UnconfirmedTxs(limit)
}
func (Local) NumUnconfirmedTxs() (*ctypes.ResultUnconfirmedTxs, error) {
return core.NumUnconfirmedTxs()
}
func (Local) NetInfo() (*ctypes.ResultNetInfo, error) {
return core.NetInfo()
}

View File

@@ -281,42 +281,6 @@ func TestBroadcastTxCommit(t *testing.T) {
}
}
func TestUnconfirmedTxs(t *testing.T) {
_, _, tx := MakeTxKV()
mempool := node.MempoolReactor().Mempool
_ = mempool.CheckTx(tx, nil)
for i, c := range GetClients() {
mc, ok := c.(client.MempoolClient)
require.True(t, ok, "%d", i)
txs, err := mc.UnconfirmedTxs(1)
require.Nil(t, err, "%d: %+v", i, err)
assert.Exactly(t, types.Txs{tx}, types.Txs(txs.Txs))
}
mempool.Flush()
}
func TestNumUnconfirmedTxs(t *testing.T) {
_, _, tx := MakeTxKV()
mempool := node.MempoolReactor().Mempool
_ = mempool.CheckTx(tx, nil)
mempoolSize := mempool.Size()
for i, c := range GetClients() {
mc, ok := c.(client.MempoolClient)
require.True(t, ok, "%d", i)
res, err := mc.NumUnconfirmedTxs()
require.Nil(t, err, "%d: %+v", i, err)
assert.Equal(t, mempoolSize, res.N)
}
mempool.Flush()
}
func TestTx(t *testing.T) {
// first we broadcast a tx
c := getHTTPClient()

View File

@@ -15,11 +15,6 @@ import (
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// result, err := client.ABCIQuery("", "abcd", true)
// ```
//
@@ -74,11 +69,6 @@ func ABCIQuery(path string, data cmn.HexBytes, height int64, prove bool) (*ctype
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// info, err := client.ABCIInfo()
// ```
//

View File

@@ -18,11 +18,6 @@ import (
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// info, err := client.BlockchainInfo(10, 10)
// ```
//
@@ -128,11 +123,6 @@ func filterMinMax(height, min, max, limit int64) (int64, int64, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// info, err := client.Block(10)
// ```
//
@@ -245,11 +235,6 @@ func Block(heightPtr *int64) (*ctypes.ResultBlock, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// info, err := client.Commit(11)
// ```
//
@@ -344,11 +329,6 @@ func Commit(heightPtr *int64) (*ctypes.ResultCommit, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// info, err := client.BlockResults(10)
// ```
//

View File

@@ -16,11 +16,6 @@ import (
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// state, err := client.Validators()
// ```
//
@@ -72,11 +67,6 @@ func Validators(heightPtr *int64) (*ctypes.ResultValidators, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// state, err := client.DumpConsensusState()
// ```
//
@@ -235,11 +225,6 @@ func DumpConsensusState() (*ctypes.ResultDumpConsensusState, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// state, err := client.ConsensusState()
// ```
//
@@ -288,11 +273,6 @@ func ConsensusState() (*ctypes.ResultConsensusState, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// state, err := client.ConsensusParams()
// ```
//

View File

@@ -55,10 +55,6 @@ import (
//
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// ctx, cancel := context.WithTimeout(context.Background(), timeout)
// defer cancel()
// query := query.MustParse("tm.event = 'Tx' AND tx.height = 3")
@@ -122,10 +118,6 @@ func Subscribe(wsCtx rpctypes.WSRPCContext, query string) (*ctypes.ResultSubscri
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// err = client.Unsubscribe("test-client", query)
// ```
//
@@ -166,10 +158,6 @@ func Unsubscribe(wsCtx rpctypes.WSRPCContext, query string) (*ctypes.ResultUnsub
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// err = client.UnsubscribeAll("test-client")
// ```
//

View File

@@ -13,11 +13,6 @@ import (
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// result, err := client.Health()
// ```
//

View File

@@ -24,11 +24,6 @@ import (
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// result, err := client.BroadcastTxAsync("123")
// ```
//
@@ -69,11 +64,6 @@ func BroadcastTxAsync(tx types.Tx) (*ctypes.ResultBroadcastTx, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// result, err := client.BroadcastTxSync("456")
// ```
//
@@ -128,11 +118,6 @@ func BroadcastTxSync(tx types.Tx) (*ctypes.ResultBroadcastTx, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// result, err := client.BroadcastTxCommit("789")
// ```
//
@@ -213,10 +198,7 @@ func BroadcastTxCommit(tx types.Tx) (*ctypes.ResultBroadcastTxCommit, error) {
// TODO: configurable?
var deliverTxTimeout = rpcserver.WriteTimeout / 2
select {
case deliverTxResMsg, ok := <-deliverTxResCh: // The tx was included in a block.
if !ok {
return nil, errors.New("Error on broadcastTxCommit: expected DeliverTxResult, got nil. Did the Tendermint stop?")
}
case deliverTxResMsg := <-deliverTxResCh: // The tx was included in a block.
deliverTxRes := deliverTxResMsg.(types.EventDataTx)
return &ctypes.ResultBroadcastTxCommit{
CheckTx: *checkTxRes,
@@ -243,11 +225,6 @@ func BroadcastTxCommit(tx types.Tx) (*ctypes.ResultBroadcastTxCommit, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// result, err := client.UnconfirmedTxs()
// ```
//
@@ -286,11 +263,6 @@ func UnconfirmedTxs(limit int) (*ctypes.ResultUnconfirmedTxs, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// result, err := client.UnconfirmedTxs()
// ```
//

View File

@@ -17,11 +17,6 @@ import (
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// info, err := client.NetInfo()
// ```
//
@@ -100,11 +95,6 @@ func UnsafeDialPeers(peers []string, persistent bool) (*ctypes.ResultDialPeers,
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// genesis, err := client.Genesis()
// ```
//

View File

@@ -20,11 +20,6 @@ import (
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// result, err := client.Status()
// ```
//

View File

@@ -21,11 +21,6 @@ import (
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// tx, err := client.Tx([]byte("2B8EC32BA2579B3B8606E42C06DE2F7AFA2556EF"), true)
// ```
//
@@ -120,11 +115,6 @@ func Tx(hash []byte, prove bool) (*ctypes.ResultTx, error) {
//
// ```go
// client := client.NewHTTP("tcp://0.0.0.0:26657", "/websocket")
// err := client.Start()
// if err != nil {
// // handle error
// }
// defer client.Stop()
// q, err := tmquery.New("account.owner='Ivan'")
// tx, err := client.TxSearch(q, true)
// ```

View File

@@ -119,8 +119,9 @@ func NewTendermint(app abci.Application) *nm.Node {
config := GetConfig()
logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))
logger = log.NewFilter(logger, log.AllowError())
pvFile := config.PrivValidatorFile()
pv := privval.LoadOrGenFilePV(pvFile)
pvKeyFile := config.PrivValidatorKeyFile()
pvKeyStateFile := config.PrivValidatorStateFile()
pv := privval.LoadOrGenFilePV(pvKeyFile, pvKeyStateFile)
papp := proxy.NewLocalClientCreator(app)
nodeKey, err := p2p.LoadOrGenNodeKey(config.NodeKeyFile())
if err != nil {

scripts/privValUpgrade.go Normal file
View File

@@ -0,0 +1,41 @@
package main
import (
"fmt"
"os"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/privval"
)
var (
logger = log.NewTMLogger(log.NewSyncWriter(os.Stdout))
)
func main() {
args := os.Args[1:]
if len(args) != 3 {
fmt.Println("Expected three args: <old path> <new key path> <new state path>")
fmt.Println("Eg. ~/.tendermint/config/priv_validator.json ~/.tendermint/config/priv_validator_key.json ~/.tendermint/data/priv_validator_state.json")
os.Exit(1)
}
err := loadAndUpgrade(args[0], args[1], args[2])
if err != nil {
fmt.Println(err)
os.Exit(1)
}
}
func loadAndUpgrade(oldPVPath, newPVKeyPath, newPVStatePath string) error {
oldPV, err := privval.LoadOldFilePV(oldPVPath)
if err != nil {
return fmt.Errorf("Error reading OldPrivValidator from %v: %v\n", oldPVPath, err)
}
logger.Info("Upgrading PrivValidator file",
"old", oldPVPath,
"newKey", newPVKeyPath,
"newState", newPVStatePath,
)
oldPV.Upgrade(newPVKeyPath, newPVStatePath)
return nil
}

View File

@@ -0,0 +1,111 @@
package main
import (
"io/ioutil"
"os"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/privval"
)
const oldPrivvalContent = `{
"address": "1D8089FAFDFAE4A637F3D616E17B92905FA2D91D",
"pub_key": {
"type": "tendermint/PubKeyEd25519",
"value": "r3Yg2AhDZ745CNTpavsGU+mRZ8WpRXqoJuyqjN8mJq0="
},
"last_height": "5",
"last_round": "0",
"last_step": 3,
"last_signature": "CTr7b9ZQlrJJf+12rPl5t/YSCUc/KqV7jQogCfFJA24e7hof69X6OMT7eFLVQHyodPjD/QTA298XHV5ejxInDQ==",
"last_signbytes": "750802110500000000000000220B08B398F3E00510F48DA6402A480A20FC258973076512999C3E6839A22E9FBDB1B77CF993E8A9955412A41A59D4CAD312240A20C971B286ACB8AAA6FCA0365EB0A660B189EDC08B46B5AF2995DEFA51A28D215B10013211746573742D636861696E2D533245415533",
"priv_key": {
"type": "tendermint/PrivKeyEd25519",
"value": "7MwvTGEWWjsYwjn2IpRb+GYsWi9nnFsw8jPLLY1UtP6vdiDYCENnvjkI1Olq+wZT6ZFnxalFeqgm7KqM3yYmrQ=="
}
}`
func TestLoadAndUpgrade(t *testing.T) {
oldFilePath := initTmpOldFile(t)
defer os.Remove(oldFilePath)
newStateFile, err := ioutil.TempFile("", "priv_validator_state*.json")
defer os.Remove(newStateFile.Name())
require.NoError(t, err)
newKeyFile, err := ioutil.TempFile("", "priv_validator_key*.json")
defer os.Remove(newKeyFile.Name())
require.NoError(t, err)
emptyOldFile, err := ioutil.TempFile("", "priv_validator_empty*.json")
require.NoError(t, err)
defer os.Remove(emptyOldFile.Name())
type args struct {
oldPVPath string
newPVKeyPath string
newPVStatePath string
}
tests := []struct {
name string
args args
wantErr bool
}{
{"successful upgrade",
args{oldPVPath: oldFilePath, newPVKeyPath: newKeyFile.Name(), newPVStatePath: newStateFile.Name()},
false,
},
{"unsuccessful upgrade: empty old privval file",
args{oldPVPath: emptyOldFile.Name(), newPVKeyPath: newKeyFile.Name(), newPVStatePath: newStateFile.Name()},
true,
},
{"unsuccessful upgrade: invalid new paths (1/3)",
args{oldPVPath: emptyOldFile.Name(), newPVKeyPath: "", newPVStatePath: newStateFile.Name()},
true,
},
{"unsuccessful upgrade: invalid new paths (2/3)",
args{oldPVPath: emptyOldFile.Name(), newPVKeyPath: newKeyFile.Name(), newPVStatePath: ""},
true,
},
{"unsuccessful upgrade: invalid new paths (3/3)",
args{oldPVPath: emptyOldFile.Name(), newPVKeyPath: "", newPVStatePath: ""},
true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := loadAndUpgrade(tt.args.oldPVPath, tt.args.newPVKeyPath, tt.args.newPVStatePath)
if tt.wantErr {
assert.Error(t, err)
} else {
assert.NoError(t, err)
upgradedPV := privval.LoadFilePV(tt.args.newPVKeyPath, tt.args.newPVStatePath)
oldPV, err := privval.LoadOldFilePV(tt.args.oldPVPath + ".bak")
require.NoError(t, err)
assert.Equal(t, oldPV.Address, upgradedPV.Key.Address)
assert.Equal(t, oldPV.Address, upgradedPV.GetAddress())
assert.Equal(t, oldPV.PubKey, upgradedPV.Key.PubKey)
assert.Equal(t, oldPV.PubKey, upgradedPV.GetPubKey())
assert.Equal(t, oldPV.PrivKey, upgradedPV.Key.PrivKey)
assert.Equal(t, oldPV.LastHeight, upgradedPV.LastSignState.Height)
assert.Equal(t, oldPV.LastRound, upgradedPV.LastSignState.Round)
assert.Equal(t, oldPV.LastSignature, upgradedPV.LastSignState.Signature)
assert.Equal(t, oldPV.LastSignBytes, upgradedPV.LastSignState.SignBytes)
assert.Equal(t, oldPV.LastStep, upgradedPV.LastSignState.Step)
}
})
}
}
func initTmpOldFile(t *testing.T) string {
tmpfile, err := ioutil.TempFile("", "priv_validator_*.json")
require.NoError(t, err)
t.Logf("created test file %s", tmpfile.Name())
_, err = tmpfile.WriteString(oldPrivvalContent)
require.NoError(t, err)
return tmpfile.Name()
}

View File

@@ -1,182 +0,0 @@
package main
import (
"encoding/json"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"time"
"github.com/tendermint/go-amino"
"github.com/tendermint/tendermint/crypto/ed25519"
cryptoAmino "github.com/tendermint/tendermint/crypto/encoding/amino"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/privval"
"github.com/tendermint/tendermint/types"
)
type GenesisValidator struct {
PubKey Data `json:"pub_key"`
Power int64 `json:"power"`
Name string `json:"name"`
}
type Genesis struct {
GenesisTime time.Time `json:"genesis_time"`
ChainID string `json:"chain_id"`
ConsensusParams *types.ConsensusParams `json:"consensus_params,omitempty"`
Validators []GenesisValidator `json:"validators"`
AppHash cmn.HexBytes `json:"app_hash"`
AppState json.RawMessage `json:"app_state,omitempty"`
AppOptions json.RawMessage `json:"app_options,omitempty"` // DEPRECATED
}
type NodeKey struct {
PrivKey Data `json:"priv_key"`
}
type PrivVal struct {
Address cmn.HexBytes `json:"address"`
LastHeight int64 `json:"last_height"`
LastRound int `json:"last_round"`
LastStep int8 `json:"last_step"`
PubKey Data `json:"pub_key"`
PrivKey Data `json:"priv_key"`
}
type Data struct {
Type string `json:"type"`
Data cmn.HexBytes `json:"data"`
}
func convertNodeKey(cdc *amino.Codec, jsonBytes []byte) ([]byte, error) {
var nodeKey NodeKey
err := json.Unmarshal(jsonBytes, &nodeKey)
if err != nil {
return nil, err
}
var privKey ed25519.PrivKeyEd25519
copy(privKey[:], nodeKey.PrivKey.Data)
nodeKeyNew := p2p.NodeKey{privKey}
bz, err := cdc.MarshalJSON(nodeKeyNew)
if err != nil {
return nil, err
}
return bz, nil
}
func convertPrivVal(cdc *amino.Codec, jsonBytes []byte) ([]byte, error) {
var privVal PrivVal
err := json.Unmarshal(jsonBytes, &privVal)
if err != nil {
return nil, err
}
var privKey ed25519.PrivKeyEd25519
copy(privKey[:], privVal.PrivKey.Data)
var pubKey ed25519.PubKeyEd25519
copy(pubKey[:], privVal.PubKey.Data)
privValNew := privval.FilePV{
Address: pubKey.Address(),
PubKey: pubKey,
LastHeight: privVal.LastHeight,
LastRound: privVal.LastRound,
LastStep: privVal.LastStep,
PrivKey: privKey,
}
bz, err := cdc.MarshalJSON(privValNew)
if err != nil {
return nil, err
}
return bz, nil
}
func convertGenesis(cdc *amino.Codec, jsonBytes []byte) ([]byte, error) {
var genesis Genesis
err := json.Unmarshal(jsonBytes, &genesis)
if err != nil {
return nil, err
}
genesisNew := types.GenesisDoc{
GenesisTime: genesis.GenesisTime,
ChainID: genesis.ChainID,
ConsensusParams: genesis.ConsensusParams,
// Validators
AppHash: genesis.AppHash,
AppState: genesis.AppState,
}
if genesis.AppOptions != nil {
genesisNew.AppState = genesis.AppOptions
}
for _, v := range genesis.Validators {
var pubKey ed25519.PubKeyEd25519
copy(pubKey[:], v.PubKey.Data)
genesisNew.Validators = append(
genesisNew.Validators,
types.GenesisValidator{
PubKey: pubKey,
Power: v.Power,
Name: v.Name,
},
)
}
bz, err := cdc.MarshalJSON(genesisNew)
if err != nil {
return nil, err
}
return bz, nil
}
func main() {
cdc := amino.NewCodec()
cryptoAmino.RegisterAmino(cdc)
args := os.Args[1:]
if len(args) != 1 {
fmt.Println("Please specify a file to convert")
os.Exit(1)
}
filePath := args[0]
fileName := filepath.Base(filePath)
fileBytes, err := ioutil.ReadFile(filePath)
if err != nil {
panic(err)
}
var bz []byte
switch fileName {
case "node_key.json":
bz, err = convertNodeKey(cdc, fileBytes)
case "priv_validator.json":
bz, err = convertPrivVal(cdc, fileBytes)
case "genesis.json":
bz, err = convertGenesis(cdc, fileBytes)
default:
fmt.Println("Expected file name to be in (node_key.json, priv_validator.json, genesis.json)")
os.Exit(1)
}
if err != nil {
panic(err)
}
fmt.Println(string(bz))
}

View File

@@ -172,10 +172,10 @@ func (txi *TxIndex) Search(q *query.Query) ([]*types.TxResult, error) {
for _, r := range ranges {
if !hashesInitialized {
hashes = txi.matchRange(r, startKey(r.key))
hashes = txi.matchRange(r, []byte(r.key))
hashesInitialized = true
} else {
hashes = intersect(hashes, txi.matchRange(r, startKey(r.key)))
hashes = intersect(hashes, txi.matchRange(r, []byte(r.key)))
}
}
}
@@ -190,10 +190,10 @@ func (txi *TxIndex) Search(q *query.Query) ([]*types.TxResult, error) {
}
if !hashesInitialized {
hashes = txi.match(c, startKeyForCondition(c, height))
hashes = txi.match(c, startKey(c, height))
hashesInitialized = true
} else {
hashes = intersect(hashes, txi.match(c, startKeyForCondition(c, height)))
hashes = intersect(hashes, txi.match(c, startKey(c, height)))
}
}
@@ -332,18 +332,18 @@ func isRangeOperation(op query.Operator) bool {
}
}
func (txi *TxIndex) match(c query.Condition, startKeyBz []byte) (hashes [][]byte) {
func (txi *TxIndex) match(c query.Condition, startKey []byte) (hashes [][]byte) {
if c.Op == query.OpEqual {
it := dbm.IteratePrefix(txi.store, startKeyBz)
it := dbm.IteratePrefix(txi.store, startKey)
defer it.Close()
for ; it.Valid(); it.Next() {
hashes = append(hashes, it.Value())
}
} else if c.Op == query.OpContains {
// XXX: startKey does not apply here.
// For example, if startKey = "account.owner/an/" and search query = "accoutn.owner CONTAINS an"
// we can't iterate with prefix "account.owner/an/" because we might miss keys like "account.owner/Ulan/"
it := dbm.IteratePrefix(txi.store, startKey(c.Tag))
// XXX: doing full scan because startKey does not apply here
// For example, if startKey = "account.owner=an" and search query = "accoutn.owner CONSISTS an"
// we can't iterate with prefix "account.owner=an" because we might miss keys like "account.owner=Ulan"
it := txi.store.Iterator(nil, nil)
defer it.Close()
for ; it.Valid(); it.Next() {
if !isTagKey(it.Key()) {
@@ -359,14 +359,14 @@ func (txi *TxIndex) match(c query.Condition, startKeyBz []byte) (hashes [][]byte
return
}
func (txi *TxIndex) matchRange(r queryRange, startKey []byte) (hashes [][]byte) {
func (txi *TxIndex) matchRange(r queryRange, prefix []byte) (hashes [][]byte) {
// create a map to prevent duplicates
hashesMap := make(map[string][]byte)
lowerBound := r.lowerBoundValue()
upperBound := r.upperBoundValue()
it := dbm.IteratePrefix(txi.store, startKey)
it := dbm.IteratePrefix(txi.store, prefix)
defer it.Close()
LOOP:
for ; it.Valid(); it.Next() {
@@ -409,6 +409,16 @@ LOOP:
///////////////////////////////////////////////////////////////////////////////
// Keys
func startKey(c query.Condition, height int64) []byte {
var key string
if height > 0 {
key = fmt.Sprintf("%s/%v/%d/", c.Tag, c.Operand, height)
} else {
key = fmt.Sprintf("%s/%v/", c.Tag, c.Operand)
}
return []byte(key)
}
func isTagKey(key []byte) bool {
return strings.Count(string(key), tagKeySeparator) == 3
}
@@ -419,36 +429,11 @@ func extractValueFromKey(key []byte) string {
}
func keyForTag(tag cmn.KVPair, result *types.TxResult) []byte {
return []byte(fmt.Sprintf("%s/%s/%d/%d",
tag.Key,
tag.Value,
result.Height,
result.Index,
))
return []byte(fmt.Sprintf("%s/%s/%d/%d", tag.Key, tag.Value, result.Height, result.Index))
}
func keyForHeight(result *types.TxResult) []byte {
return []byte(fmt.Sprintf("%s/%d/%d/%d",
types.TxHeightKey,
result.Height,
result.Height,
result.Index,
))
}
func startKeyForCondition(c query.Condition, height int64) []byte {
if height > 0 {
return startKey(c.Tag, c.Operand, height)
}
return startKey(c.Tag, c.Operand)
}
func startKey(fields ...interface{}) []byte {
var b bytes.Buffer
for _, f := range fields {
b.Write([]byte(fmt.Sprintf("%v", f) + tagKeySeparator))
}
return b.Bytes()
return []byte(fmt.Sprintf("%s/%d/%d/%d", types.TxHeightKey, result.Height, result.Height, result.Index))
}
///////////////////////////////////////////////////////////////////////////////
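For orientation, the index keys have the shape tag/value/height/index, which is why an equality condition can be answered with a prefix iterator while CONTAINS cannot. The following is a toy sketch of that layout (a simplified re-implementation for illustration, not the indexer's own code):

```go
package main

import "fmt"

// Toy reproduction of the key layout used by the kv indexer above
// (tag/value/height/index, separated by "/"). It only shows why prefix
// iteration works for OpEqual but not for OpContains.
func keyForTag(key, value string, height, index int64) string {
	return fmt.Sprintf("%s/%s/%d/%d", key, value, height, index)
}

func startKey(tag, operand string, height int64) string {
	if height > 0 {
		return fmt.Sprintf("%s/%v/%d/", tag, operand, height)
	}
	return fmt.Sprintf("%s/%v/", tag, operand)
}

func main() {
	// "account.number = 1" at height 5 can be served by a prefix scan:
	fmt.Println(keyForTag("account.number", "1", 5, 0)) // account.number/1/5/0
	fmt.Println(startKey("account.number", "1", 5))     // account.number/1/5/

	// "account.owner CONTAINS 'an'" cannot: the prefix "account.owner/an/"
	// would miss keys such as account.owner/Ulan/1/0, so the index falls
	// back to a full scan and substring-matches the extracted value.
	fmt.Println(keyForTag("account.owner", "Ulan", 1, 0)) // account.owner/Ulan/1/0
}
```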

View File

@@ -89,10 +89,8 @@ func TestTxSearch(t *testing.T) {
{"account.date >= TIME 2013-05-03T14:45:00Z", 0},
// search using CONTAINS
{"account.owner CONTAINS 'an'", 1},
// search for non existing value using CONTAINS
// search using CONTAINS
{"account.owner CONTAINS 'Vlad'", 0},
// search using the wrong tag (of numeric type) using CONTAINS
{"account.number CONTAINS 'Iv'", 0},
}
for _, tc := range testCases {
@@ -128,7 +126,7 @@ func TestTxSearchOneTxWithMultipleSameTagsButDifferentValues(t *testing.T) {
}
func TestTxSearchMultipleTxs(t *testing.T) {
allowedTags := []string{"account.number", "account.number.id"}
allowedTags := []string{"account.number"}
indexer := NewTxIndex(db.NewMemDB(), IndexTags(allowedTags))
// indexed first, but bigger height (to test the order of transactions)
@@ -162,17 +160,6 @@ func TestTxSearchMultipleTxs(t *testing.T) {
err = indexer.Index(txResult3)
require.NoError(t, err)
// indexed fourth (to test we don't include txs with similar tags)
// https://github.com/tendermint/tendermint/issues/2908
txResult4 := txResultWithTags([]cmn.KVPair{
{Key: []byte("account.number.id"), Value: []byte("1")},
})
txResult4.Tx = types.Tx("Mike's account")
txResult4.Height = 2
txResult4.Index = 2
err = indexer.Index(txResult4)
require.NoError(t, err)
results, err := indexer.Search(query.MustParse("account.number >= 1"))
assert.NoError(t, err)

View File

@@ -19,7 +19,7 @@ import (
// x + (x >> 3) = x + x/8 = x * (1 + 0.125).
// MaxTotalVotingPower is the largest int64 `x` with the property that `x + (x >> 3)` is
// still in the bounds of int64.
const MaxTotalVotingPower = int64(8198552921648689607)
const MaxTotalVotingPower = 8198552921648689607
// ValidatorSet represent a set of *Validator at a given height.
// The validators can be fetched by address or index.
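As a sanity check on the bound stated in the comment: with x = 8198552921648689607 we get x >> 3 = 1024819115206086200, and x + (x >> 3) = 9223372036854775807, which is exactly math.MaxInt64, so any larger total would overflow once the 1.125 scaling is applied. A minimal verification sketch:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// MaxTotalVotingPower from the comment above.
	x := int64(8198552921648689607)

	// Scaling by 1.125 (i.e. x + x/8) lands exactly on MaxInt64 ...
	fmt.Println(x+(x>>3) == math.MaxInt64) // true

	// ... so one more unit of total voting power would overflow.
	y := x + 1
	fmt.Println(y+(y>>3) < 0) // true: wraps around in 64-bit two's complement
}
```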

View File

@@ -18,7 +18,7 @@ const (
// TMCoreSemVer is the current version of Tendermint Core.
// It's the Semantic Version of the software.
// Must be a string because scripts like dist.sh read this file.
TMCoreSemVer = "0.27.1"
TMCoreSemVer = "0.26.4"
// ABCISemVer is the semantic version of the ABCI library
ABCISemVer = "0.15.0"
@@ -36,10 +36,10 @@ func (p Protocol) Uint64() uint64 {
var (
// P2PProtocol versions all p2p behaviour and msgs.
P2PProtocol Protocol = 5
P2PProtocol Protocol = 4
// BlockProtocol versions all block data structures and processing.
BlockProtocol Protocol = 8
BlockProtocol Protocol = 7
)
//------------------------------------------------------------------------