Mirror of https://github.com/fluencelabs/tendermint (synced 2025-04-25 14:52:17 +00:00)

Merge remote-tracking branch 'origin/develop' into jae/literefactor4

Commit 2d1c5a1ce6
@@ -77,6 +77,12 @@ jobs:
          set -ex
          export PATH="$GOBIN:$PATH"
          make metalinter
+     - run:
+         name: check_dep
+         command: |
+           set -ex
+           export PATH="$GOBIN:$PATH"
+           make check_dep

  test_abci_apps:
    <<: *defaults
@@ -141,7 +147,7 @@ jobs:
            for pkg in $(go list github.com/tendermint/tendermint/... | circleci tests split --split-by=timings); do
              id=$(basename "$pkg")

-             GOCACHE=off go test -v -timeout 5m -race -coverprofile=/tmp/workspace/profiles/$id.out -covermode=atomic "$pkg" | tee "/tmp/logs/$id-$RANDOM.log"
+             GOCACHE=off go test -timeout 5m -race -coverprofile=/tmp/workspace/profiles/$id.out -covermode=atomic "$pkg" | tee "/tmp/logs/$id-$RANDOM.log"
            done
      - persist_to_workspace:
          root: /tmp/workspace
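The CI loop above names each per-package log after the last component of the package path (`id=$(basename "$pkg")`). A minimal sketch of that naming scheme (the package paths are just examples):

```go
package main

import (
	"fmt"
	"path"
)

// logName mirrors the CI loop's `id=$(basename "$pkg")` naming:
// one log file per package, keyed by the last path component.
func logName(pkg string) string {
	return "/tmp/logs/" + path.Base(pkg) + ".log"
}

func main() {
	for _, pkg := range []string{
		"github.com/tendermint/tendermint/p2p",
		"github.com/tendermint/tendermint/rpc/core",
	} {
		fmt.Println(logName(pkg))
	}
}
```

Note that base names can collide (two different packages can both end in `core`), which is presumably why the actual CI command also appends `$RANDOM` to the log name.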
.github/ISSUE_TEMPLATE (vendored, 42 lines changed)
@@ -1,42 +0,0 @@
<!-- Thanks for filing an issue! Before hitting the button, please answer these questions.-->

**Is this a BUG REPORT or FEATURE REQUEST?** (choose one):

<!--
If this is a BUG REPORT, please:
  - Fill in as much of the template below as you can.

If this is a FEATURE REQUEST, please:
  - Describe *in detail* the feature/behavior/change you'd like to see.

In both cases, be ready for followup questions, and please respond in a timely
manner. We might ask you to provide additional logs and data (tendermint & app)
in a case of bug.
-->

**Tendermint version** (use `tendermint version` or `git rev-parse --verify HEAD` if installed from source):


**ABCI app** (name for built-in, URL for self-written if it's publicly available):

**Environment**:
- **OS** (e.g. from /etc/os-release):
- **Install tools**:
- **Others**:


**What happened**:


**What you expected to happen**:


**How to reproduce it** (as minimally and precisely as possible):

**Logs (you can paste a small part showing an error or link a pastebin, gist, etc. containing more of the log file)**:

**Config (you can paste only the changes you've made)**:

**`/dump_consensus_state` output for consensus bugs**

**Anything else do we need to know**:
.github/ISSUE_TEMPLATE/bug-report.md (vendored, new file, 38 lines)
@@ -0,0 +1,38 @@
---
name: Bug Report
about: Create a report to help us squash bugs!

---
<!--
Please fill in as much of the template below as you can.

Be ready for followup questions, and please respond in a timely
manner. We might ask you to provide additional logs and data (tendermint & app).
-->

**Tendermint version** (use `tendermint version` or `git rev-parse --verify HEAD` if installed from source):


**ABCI app** (name for built-in, URL for self-written if it's publicly available):

**Environment**:
- **OS** (e.g. from /etc/os-release):
- **Install tools**:
- **Others**:


**What happened**:


**What you expected to happen**:


**How to reproduce it** (as minimally and precisely as possible):

**Logs (paste a small part showing an error (< 10 lines) or link a pastebin, gist, etc. containing more of the log file)**:

**Config (you can paste only the changes you've made)**:

**`/dump_consensus_state` output for consensus bugs**

**Anything else we need to know**:
.github/ISSUE_TEMPLATE/feature-request.md (vendored, new file, 13 lines)
@@ -0,0 +1,13 @@
---
name: Feature Request
about: Create a proposal to request a feature

---
<!--
Please describe *in detail* the feature/behavior/change you'd like to see.

Be ready for followup questions, and please respond in a timely
manner.

Word of caution: poorly thought out proposals may be rejected without deliberation
-->
.github/PULL_REQUEST_TEMPLATE.md (vendored, 2 lines changed)
@@ -3,4 +3,4 @@
* [ ] Updated all relevant documentation in docs
* [ ] Updated all code comments where relevant
* [ ] Wrote tests
-* [ ] Updated CHANGELOG.md
+* [ ] Updated CHANGELOG_PENDING.md
.gitignore (vendored, 1 line changed)
@@ -17,6 +17,7 @@ docs/_build
*.log
abci-cli
docs/node_modules/
+index.html.md

scripts/wal2json/wal2json
scripts/cutWALUntil/cutWALUntil
CHANGELOG.md (36 lines changed)
@@ -1,6 +1,37 @@
# Changelog

-## TBA
+## 0.22.8

*July 26th, 2018*

BUG FIXES

- [consensus, blockchain] Fix 0.22.7 below.

## 0.22.7

*July 26th, 2018*

BUG FIXES

- [consensus, blockchain] Register the Evidence interface so it can be marshalled/unmarshalled by the blockchain and consensus reactors

## 0.22.6

*July 24th, 2018*

BUG FIXES

- [rpc] Fix `/blockchain` endpoint
  - (#2049) Fix OOM attack by returning error on negative input
  - Fix result length to have max 20 (instead of 21) block metas
- [rpc] Validate height is non-negative in `/abci_query`
- [consensus] (#2050) Include evidence in proposal block parts (previously evidence was not being included in blocks!)
- [p2p] (#2046) Close rejected inbound connections so file descriptor doesn't leak
- [Gopkg] (#2053) Fix versions in the toml

## 0.22.5

@@ -13,9 +44,10 @@ BREAKING CHANGES:
IMPROVEMENTS:
- [abci, libs/common] Generated gogoproto static marshaller methods
- [config] Increase default send/recv rates to 5 mB/s
- [p2p] reject addresses coming from private peers
- [p2p] allow persistent peers to be private

-BUG FIXES
+BUG FIXES:
- [mempool] fixed a race condition when `create_empty_blocks=false` where a transaction is published at an old height.
- [p2p] dial external IP setup by `persistent_peers`, not internal NAT IP
@@ -9,8 +9,26 @@ BREAKING CHANGES:
- [lite] Complete refactor of the package
- [rpc] `/commit` returns a `signed_header` field instead of everything being top-level
- [abci] Removed Fee from ResponseDeliverTx and ResponseCheckTx
- [tools] Removed `make ensure_deps` in favor of `make get_vendor_deps`
- [p2p] Remove salsa and ripemd primitives, in favor of using chacha as a stream cipher, and hkdf
- [abci] Changed time format from int64 to google.protobuf.Timestamp
- [abci] Changed Validators to LastCommitInfo in RequestBeginBlock

FEATURES:
- [tools] Added `make check_dep`
  - ensures gopkg.lock is synced with gopkg.toml
  - ensures no branches are used in the gopkg.toml

IMPROVEMENTS:
- [blockchain] Improve fast-sync logic
  - tweak params
  - only process one block at a time to avoid starving
- [crypto] Switch hkdfchachapoly1305 to xchachapoly1305
- [common] bit array functions which take in another parameter are now thread safe
- [p2p] begin connecting to peers as soon a seed node provides them to you ([#2093](https://github.com/tendermint/tendermint/issues/2093))

BUG FIXES:
- [common] Safely handle cases where atomic write files already exist [#2109](https://github.com/tendermint/tendermint/issues/2109)
- [privval] fix a deadline for accepting new connections in socket private validator.
Gopkg.lock (generated, 226 lines changed)
@@ -3,48 +3,67 @@

[[projects]]
  branch = "master"
  digest = "1:d6afaeed1502aa28e80a4ed0981d570ad91b2579193404256ce672ed0a609e0d"
  name = "github.com/beorn7/perks"
  packages = ["quantile"]
  pruneopts = "UT"
  revision = "3a771d992973f24aa725d07868b467d1ddfceafb"

[[projects]]
  branch = "master"
  digest = "1:2c00f064ba355903866cbfbf3f7f4c0fe64af6638cc7d1b8bdcf3181bc67f1d8"
  name = "github.com/btcsuite/btcd"
  packages = ["btcec"]
<<<<<<< HEAD
  revision = "f5e261fc9ec3437697fb31d8b38453c293204b29"
=======
  pruneopts = "UT"
  revision = "9a2f9524024889e129a5422aca2cff73cb3eabf6"
>>>>>>> origin/develop

[[projects]]
  digest = "1:1d8e1cb71c33a9470bbbae09bfec09db43c6bf358dfcae13cd8807c4e2a9a2bf"
  name = "github.com/btcsuite/btcutil"
  packages = [
    "base58",
-    "bech32"
+    "bech32",
  ]
  pruneopts = "UT"
  revision = "d4cc87b860166d00d6b5b9e0d3b3d71d6088d4d4"

[[projects]]
  digest = "1:a2c1d0e43bd3baaa071d1b9ed72c27d78169b2b269f71c105ac4ba34b1be4a39"
  name = "github.com/davecgh/go-spew"
  packages = ["spew"]
  pruneopts = "UT"
  revision = "346938d642f2ec3594ed81d874461961cd0faa76"
  version = "v1.1.0"

[[projects]]
  digest = "1:c7644c73a3d23741fdba8a99b1464e021a224b7e205be497271a8003a15ca41b"
  name = "github.com/ebuchman/fail-test"
  packages = ["."]
  pruneopts = "UT"
  revision = "95f809107225be108efcf10a3509e4ea6ceef3c4"

[[projects]]
  digest = "1:544229a3ca0fb2dd5ebc2896d3d2ff7ce096d9751635301e44e37e761349ee70"
  name = "github.com/fortytw2/leaktest"
  packages = ["."]
  pruneopts = "UT"
  revision = "a5ef70473c97b71626b9abeda80ee92ba2a7de9e"
  version = "v1.2.0"

[[projects]]
  digest = "1:abeb38ade3f32a92943e5be54f55ed6d6e3b6602761d74b4aab4c9dd45c18abd"
  name = "github.com/fsnotify/fsnotify"
  packages = ["."]
  pruneopts = "UT"
  revision = "c2828203cd70a50dcccfb2761f8b1f8ceef9a8e9"
  version = "v1.4.7"

[[projects]]
  digest = "1:fdf5169073fb0ad6dc12a70c249145e30f4058647bea25f0abd48b6d9f228a11"
  name = "github.com/go-kit/kit"
  packages = [
    "log",
@@ -53,24 +72,35 @@
    "metrics",
    "metrics/discard",
    "metrics/internal/lv",
-    "metrics/prometheus"
+    "metrics/prometheus",
  ]
<<<<<<< HEAD
  revision = "ca4112baa34cb55091301bdc13b1420a122b1b9e"
  version = "v0.7.0"
=======
  pruneopts = "UT"
  revision = "4dc7be5d2d12881735283bcab7352178e190fc71"
  version = "v0.6.0"
>>>>>>> origin/develop

[[projects]]
  digest = "1:31a18dae27a29aa074515e43a443abfd2ba6deb6d69309d8d7ce789c45f34659"
  name = "github.com/go-logfmt/logfmt"
  packages = ["."]
  pruneopts = "UT"
  revision = "390ab7935ee28ec6b286364bba9b4dd6410cb3d5"
  version = "v0.3.0"

[[projects]]
  digest = "1:c4a2528ccbcabf90f9f3c464a5fc9e302d592861bbfd0b7135a7de8a943d0406"
  name = "github.com/go-stack/stack"
  packages = ["."]
  pruneopts = "UT"
  revision = "259ab82a6cad3992b4e21ff5cac294ccb06474bc"
  version = "v1.7.0"

[[projects]]
  digest = "1:35621fe20f140f05a0c4ef662c26c0ab4ee50bca78aa30fe87d33120bd28165e"
  name = "github.com/gogo/protobuf"
  packages = [
    "gogoproto",
@@ -78,37 +108,48 @@
    "proto",
    "protoc-gen-gogo/descriptor",
    "sortkeys",
-    "types"
+    "types",
  ]
<<<<<<< HEAD
=======
  pruneopts = "UT"
>>>>>>> origin/develop
  revision = "636bf0302bc95575d69441b25a2603156ffdddf1"
  version = "v1.1.1"

[[projects]]
  digest = "1:17fe264ee908afc795734e8c4e63db2accabaf57326dbf21763a7d6b86096260"
  name = "github.com/golang/protobuf"
  packages = [
    "proto",
    "ptypes",
    "ptypes/any",
    "ptypes/duration",
-    "ptypes/timestamp"
+    "ptypes/timestamp",
  ]
  pruneopts = "UT"
  revision = "b4deda0973fb4c70b50d226b1af49f3da59f5265"
  version = "v1.1.0"

[[projects]]
  branch = "master"
  digest = "1:4a0c6bb4805508a6287675fac876be2ac1182539ca8a32468d8128882e9d5009"
  name = "github.com/golang/snappy"
  packages = ["."]
  pruneopts = "UT"
  revision = "2e65f85255dbc3072edf28d6b5b8efc472979f5a"

[[projects]]
  digest = "1:43dd08a10854b2056e615d1b1d22ac94559d822e1f8b6fcc92c1a1057e85188e"
  name = "github.com/gorilla/websocket"
  packages = ["."]
  pruneopts = "UT"
  revision = "ea4d1f681babbce9545c9c5f3d5194a789c89f5b"
  version = "v1.2.0"

[[projects]]
  branch = "master"
  digest = "1:12247a2e99a060cc692f6680e5272c8adf0b8f572e6bce0d7095e624c958a240"
  name = "github.com/hashicorp/hcl"
  packages = [
    ".",
@@ -120,36 +161,47 @@
    "hcl/token",
    "json/parser",
    "json/scanner",
-    "json/token"
+    "json/token",
  ]
  pruneopts = "UT"
  revision = "ef8a98b0bbce4a65b5aa4c368430a80ddc533168"

[[projects]]
  digest = "1:870d441fe217b8e689d7949fef6e43efbc787e50f200cb1e70dbca9204a1d6be"
  name = "github.com/inconshreveable/mousetrap"
  packages = ["."]
  pruneopts = "UT"
  revision = "76626ae9c91c4f2a10f34cad8ce83ea42c93bb75"
  version = "v1.0"

[[projects]]
  digest = "1:39b27d1381a30421f9813967a5866fba35dc1d4df43a6eefe3b7a5444cb07214"
  name = "github.com/jmhodges/levigo"
  packages = ["."]
  pruneopts = "UT"
  revision = "c42d9e0ca023e2198120196f842701bb4c55d7b9"

[[projects]]
  branch = "master"
  digest = "1:a64e323dc06b73892e5bb5d040ced475c4645d456038333883f58934abbf6f72"
  name = "github.com/kr/logfmt"
  packages = ["."]
  pruneopts = "UT"
  revision = "b84e30acd515aadc4b783ad4ff83aff3299bdfe0"

[[projects]]
  digest = "1:c568d7727aa262c32bdf8a3f7db83614f7af0ed661474b24588de635c20024c7"
  name = "github.com/magiconair/properties"
  packages = ["."]
  pruneopts = "UT"
  revision = "c2353362d570a7bfa228149c62842019201cfb71"
  version = "v1.8.0"

[[projects]]
  digest = "1:ff5ebae34cfbf047d505ee150de27e60570e8c394b3b8fdbb720ff6ac71985fc"
  name = "github.com/matttproud/golang_protobuf_extensions"
  packages = ["pbutil"]
  pruneopts = "UT"
  revision = "c12348ce28de40eed0136aa2b644d0ee0650e56c"
  version = "v1.0.1"

@@ -160,113 +212,168 @@
  revision = "f15292f7a699fcc1a38a80977f80a046874ba8ac"

[[projects]]
  digest = "1:95741de3af260a92cc5c7f3f3061e85273f5a81b5db20d4bd68da74bd521675e"
  name = "github.com/pelletier/go-toml"
  packages = ["."]
  pruneopts = "UT"
  revision = "c01d1270ff3e442a8a57cddc1c92dc1138598194"
  version = "v1.2.0"

[[projects]]
  digest = "1:40e195917a951a8bf867cd05de2a46aaf1806c50cf92eebf4c16f78cd196f747"
  name = "github.com/pkg/errors"
  packages = ["."]
  pruneopts = "UT"
  revision = "645ef00459ed84a119197bfb8d8205042c6df63d"
  version = "v0.8.0"

[[projects]]
  digest = "1:0028cb19b2e4c3112225cd871870f2d9cf49b9b4276531f03438a88e94be86fe"
  name = "github.com/pmezard/go-difflib"
  packages = ["difflib"]
  pruneopts = "UT"
  revision = "792786c7400a136282c1664665ae0a8db921c6c2"
  version = "v1.0.0"

[[projects]]
  digest = "1:c1a04665f9613e082e1209cf288bf64f4068dcd6c87a64bf1c4ff006ad422ba0"
  name = "github.com/prometheus/client_golang"
  packages = [
    "prometheus",
-    "prometheus/promhttp"
+    "prometheus/promhttp",
  ]
  pruneopts = "UT"
  revision = "ae27198cdd90bf12cd134ad79d1366a6cf49f632"

[[projects]]
  branch = "master"
<<<<<<< HEAD
=======
  digest = "1:2d5cd61daa5565187e1d96bae64dbbc6080dacf741448e9629c64fd93203b0d4"
>>>>>>> origin/develop
  name = "github.com/prometheus/client_model"
  packages = ["go"]
  revision = "5c3871d89910bfb32f5fcab2aa4b9ec68e65a99f"

[[projects]]
  branch = "master"
  digest = "1:e469cd65badf7694aeb44874518606d93c1d59e7735d3754ad442782437d3cc3"
  name = "github.com/prometheus/common"
  packages = [
    "expfmt",
    "internal/bitbucket.org/ww/goautoneg",
-    "model"
+    "model",
  ]
<<<<<<< HEAD
  revision = "c7de2306084e37d54b8be01f3541a8464345e9a5"
=======
  pruneopts = "UT"
  revision = "7600349dcfe1abd18d72d3a1770870d9800a7801"
>>>>>>> origin/develop

[[projects]]
  branch = "master"
  digest = "1:8c49953a1414305f2ff5465147ee576dd705487c35b15918fcd4efdc0cb7a290"
  name = "github.com/prometheus/procfs"
  packages = [
    ".",
    "internal/util",
    "nfs",
-    "xfs"
+    "xfs",
  ]
<<<<<<< HEAD
=======
  pruneopts = "UT"
>>>>>>> origin/develop
  revision = "05ee40e3a273f7245e8777337fc7b46e533a9a92"

[[projects]]
  digest = "1:c4556a44e350b50a490544d9b06e9fba9c286c21d6c0e47f54f3a9214597298c"
  name = "github.com/rcrowley/go-metrics"
  packages = ["."]
  pruneopts = "UT"
  revision = "e2704e165165ec55d062f5919b4b29494e9fa790"

[[projects]]
  digest = "1:bd1ae00087d17c5a748660b8e89e1043e1e5479d0fea743352cda2f8dd8c4f84"
  name = "github.com/spf13/afero"
  packages = [
    ".",
-    "mem"
+    "mem",
  ]
  pruneopts = "UT"
  revision = "787d034dfe70e44075ccc060d346146ef53270ad"
  version = "v1.1.1"

[[projects]]
  digest = "1:516e71bed754268937f57d4ecb190e01958452336fa73dbac880894164e91c1f"
  name = "github.com/spf13/cast"
  packages = ["."]
  pruneopts = "UT"
  revision = "8965335b8c7107321228e3e3702cab9832751bac"
  version = "v1.2.0"

[[projects]]
  digest = "1:7ffc0983035bc7e297da3688d9fe19d60a420e9c38bef23f845c53788ed6a05e"
  name = "github.com/spf13/cobra"
  packages = ["."]
<<<<<<< HEAD
  revision = "ef82de70bb3f60c65fb8eebacbb2d122ef517385"
  version = "v0.0.3"
=======
  pruneopts = "UT"
  revision = "7b2c5ac9fc04fc5efafb60700713d4fa609b777b"
  version = "v0.0.1"
>>>>>>> origin/develop

[[projects]]
  branch = "master"
  digest = "1:080e5f630945ad754f4b920e60b4d3095ba0237ebf88dc462eb28002932e3805"
  name = "github.com/spf13/jwalterweatherman"
  packages = ["."]
  pruneopts = "UT"
  revision = "7c0cea34c8ece3fbeb2b27ab9b59511d360fb394"

[[projects]]
  digest = "1:9424f440bba8f7508b69414634aef3b2b3a877e522d8a4624692412805407bb7"
  name = "github.com/spf13/pflag"
  packages = ["."]
  pruneopts = "UT"
  revision = "583c0c0531f06d5278b7d917446061adc344b5cd"
  version = "v1.0.1"

[[projects]]
  digest = "1:f8e1a678a2571e265f4bf91a3e5e32aa6b1474a55cb0ea849750cc177b664d96"
  name = "github.com/spf13/viper"
  packages = ["."]
<<<<<<< HEAD
  revision = "b5e8006cbee93ec955a89ab31e0e3ce3204f3736"
  version = "v1.0.2"
=======
  pruneopts = "UT"
  revision = "25b30aa063fc18e48662b86996252eabdcf2f0c7"
  version = "v1.0.0"
>>>>>>> origin/develop

[[projects]]
  digest = "1:7e8d267900c7fa7f35129a2a37596e38ed0f11ca746d6d9ba727980ee138f9f6"
  name = "github.com/stretchr/testify"
  packages = [
    "assert",
-    "require"
+    "require",
  ]
<<<<<<< HEAD
  revision = "f35b8ab0b5a2cef36673838d662e249dd9c94686"
  version = "v1.2.2"
=======
  pruneopts = "UT"
  revision = "12b6f73e6084dad08a7c6e575284b177ecafbc71"
  version = "v1.2.1"
>>>>>>> origin/develop

[[projects]]
  branch = "master"
  digest = "1:b3cfb8d82b1601a846417c3f31c03a7961862cb2c98dcf0959c473843e6d9a2b"
  name = "github.com/syndtr/goleveldb"
  packages = [
    "leveldb",
@@ -280,28 +387,41 @@
    "leveldb/opt",
    "leveldb/storage",
    "leveldb/table",
-    "leveldb/util"
+    "leveldb/util",
  ]
  pruneopts = "UT"
  revision = "c4c61651e9e37fa117f53c5a906d3b63090d8445"

[[projects]]
  branch = "master"
  digest = "1:087aaa7920e5d0bf79586feb57ce01c35c830396ab4392798112e8aae8c47722"
  name = "github.com/tendermint/ed25519"
  packages = [
    ".",
    "edwards25519",
-    "extra25519"
+    "extra25519",
  ]
  pruneopts = "UT"
  revision = "d8387025d2b9d158cf4efb07e7ebf814bcce2057"

[[projects]]
<<<<<<< HEAD
  branch = "jae/writeemptyptr"
  name = "github.com/tendermint/go-amino"
  packages = ["."]
  revision = "8202139066d340b77084a583e176e29fb28b42e9"
=======
  digest = "1:e9113641c839c21d8eaeb2c907c7276af1eddeed988df8322168c56b7e06e0e1"
  name = "github.com/tendermint/go-amino"
  packages = ["."]
  pruneopts = "UT"
  revision = "2106ca61d91029c931fd54968c2bb02dc96b1412"
  version = "0.10.1"
>>>>>>> origin/develop

[[projects]]
  branch = "master"
  digest = "1:c31a37cafc12315b8bd745c8ad6a006ac25350472488162a821e557b3e739d67"
  name = "golang.org/x/crypto"
  packages = [
    "bcrypt",
@@ -317,11 +437,16 @@
    "openpgp/errors",
    "poly1305",
    "ripemd160",
-    "salsa20/salsa"
+    "salsa20/salsa",
  ]
<<<<<<< HEAD
=======
  pruneopts = "UT"
>>>>>>> origin/develop
  revision = "c126467f60eb25f8f27e5a981f32a87e3965053f"

[[projects]]
  digest = "1:d36f55a999540d29b6ea3c2ea29d71c76b1d9853fdcd3e5c5cb4836f2ba118f1"
  name = "golang.org/x/net"
  packages = [
    "context",
@@ -331,20 +456,28 @@
    "idna",
    "internal/timeseries",
    "netutil",
-    "trace"
+    "trace",
  ]
  pruneopts = "UT"
  revision = "292b43bbf7cb8d35ddf40f8d5100ef3837cced3f"

[[projects]]
  branch = "master"
  digest = "1:12ff7b51d336ea7e8b182aa3313679a37d53de64f84d2c3cbfd6a0237877e20a"
  name = "golang.org/x/sys"
  packages = [
    "cpu",
-    "unix"
+    "unix",
  ]
<<<<<<< HEAD
  revision = "3dc4335d56c789b04b0ba99b7a37249d9b614314"
=======
  pruneopts = "UT"
  revision = "e072cadbbdc8dd3d3ffa82b8b4b9304c261d9311"
>>>>>>> origin/develop

[[projects]]
  digest = "1:a2ab62866c75542dd18d2b069fec854577a20211d7c0ea6ae746072a1dccdd18"
  name = "golang.org/x/text"
  packages = [
    "collate",
@@ -360,17 +493,21 @@
    "unicode/bidi",
    "unicode/cldr",
    "unicode/norm",
-    "unicode/rangetable"
+    "unicode/rangetable",
  ]
  pruneopts = "UT"
  revision = "f21a4dfb5e38f5895301dc265a8def02365cc3d0"
  version = "v0.3.0"

[[projects]]
  digest = "1:cd018653a358d4b743a9d3bee89e825521f2ab2f2ec0770164bf7632d8d73ab7"
  name = "google.golang.org/genproto"
  packages = ["googleapis/rpc/status"]
  pruneopts = "UT"
  revision = "7fd901a49ba6a7f87732eb344f6e3c5b19d1b200"

[[projects]]
  digest = "1:2dab32a43451e320e49608ff4542fdfc653c95dcc35d0065ec9c6c3dd540ed74"
  name = "google.golang.org/grpc"
  packages = [
    ".",
@@ -397,20 +534,75 @@
    "stats",
    "status",
    "tap",
-    "transport"
+    "transport",
  ]
  pruneopts = "UT"
  revision = "168a6198bcb0ef175f7dacec0b8691fc141dc9b8"
  version = "v1.13.0"

[[projects]]
  digest = "1:342378ac4dcb378a5448dd723f0784ae519383532f5e70ade24132c4c8693202"
  name = "gopkg.in/yaml.v2"
  packages = ["."]
  pruneopts = "UT"
  revision = "5420a8b6744d3b0345ab293f6fcba19c978f1183"
  version = "v2.2.1"

[solve-meta]
  analyzer-name = "dep"
  analyzer-version = 1
<<<<<<< HEAD
  inputs-digest = "cb44aec2727610e0547ee75e2b4602266d85026bb47747f4fb8bdcef4709bdd1"
=======
  input-imports = [
    "github.com/btcsuite/btcd/btcec",
    "github.com/btcsuite/btcutil/base58",
    "github.com/btcsuite/btcutil/bech32",
    "github.com/ebuchman/fail-test",
    "github.com/fortytw2/leaktest",
    "github.com/go-kit/kit/log",
    "github.com/go-kit/kit/log/level",
    "github.com/go-kit/kit/log/term",
    "github.com/go-kit/kit/metrics",
    "github.com/go-kit/kit/metrics/discard",
    "github.com/go-kit/kit/metrics/prometheus",
    "github.com/go-logfmt/logfmt",
    "github.com/gogo/protobuf/gogoproto",
    "github.com/gogo/protobuf/jsonpb",
    "github.com/gogo/protobuf/proto",
    "github.com/gogo/protobuf/types",
    "github.com/golang/protobuf/proto",
    "github.com/golang/protobuf/ptypes/timestamp",
    "github.com/gorilla/websocket",
    "github.com/jmhodges/levigo",
    "github.com/pkg/errors",
    "github.com/prometheus/client_golang/prometheus",
    "github.com/prometheus/client_golang/prometheus/promhttp",
    "github.com/rcrowley/go-metrics",
    "github.com/spf13/cobra",
    "github.com/spf13/viper",
    "github.com/stretchr/testify/assert",
    "github.com/stretchr/testify/require",
    "github.com/syndtr/goleveldb/leveldb",
    "github.com/syndtr/goleveldb/leveldb/errors",
    "github.com/syndtr/goleveldb/leveldb/iterator",
    "github.com/syndtr/goleveldb/leveldb/opt",
    "github.com/tendermint/ed25519",
    "github.com/tendermint/ed25519/extra25519",
    "github.com/tendermint/go-amino",
    "golang.org/x/crypto/bcrypt",
    "golang.org/x/crypto/chacha20poly1305",
    "golang.org/x/crypto/curve25519",
    "golang.org/x/crypto/hkdf",
    "golang.org/x/crypto/nacl/box",
    "golang.org/x/crypto/nacl/secretbox",
    "golang.org/x/crypto/openpgp/armor",
    "golang.org/x/crypto/ripemd160",
    "golang.org/x/net/context",
    "golang.org/x/net/netutil",
    "google.golang.org/grpc",
    "google.golang.org/grpc/credentials",
  ]
>>>>>>> origin/develop
  solver-name = "gps-cdcl"
  solver-version = 1
Gopkg.toml (13 lines changed)
@@ -10,11 +10,6 @@
# name = "github.com/user/project"
# version = "1.0.0"
#
# [[constraint]]
# name = "github.com/user/project2"
# branch = "dev"
# source = "github.com/myfork/project2"
#
# [[override]]
# name = "github.com/x/y"
# version = "2.4.0"
@@ -31,7 +26,7 @@

[[constraint]]
  name = "github.com/go-kit/kit"
-  version = "=0.7.0"
+  version = "=0.6.0"

[[constraint]]
  name = "github.com/gogo/protobuf"
@@ -51,15 +46,15 @@

[[constraint]]
  name = "github.com/spf13/cobra"
-  version = "=0.0.3"
+  version = "=0.0.1"

[[constraint]]
  name = "github.com/spf13/viper"
-  version = "=1.0.2"
+  version = "=1.0.0"

[[constraint]]
  name = "github.com/stretchr/testify"
-  version = "=1.2.2"
+  version = "=1.2.1"

[[constraint]]
  name = "github.com/tendermint/go-amino"
Makefile (26 lines changed)
@@ -5,7 +5,7 @@ GOTOOLS = \
	github.com/gogo/protobuf/protoc-gen-gogo \
	github.com/gogo/protobuf/gogoproto \
	github.com/square/certstrap
-PACKAGES=$(shell go list ./... | grep -v '/vendor/')
+PACKAGES=$(shell go list ./...)

INCLUDE = -I=. -I=${GOPATH}/src -I=${GOPATH}/src/github.com/gogo/protobuf/protobuf
BUILD_TAGS?='tendermint'
@@ -37,7 +37,7 @@ protoc_all: protoc_libs protoc_abci protoc_grpc
## If you get the following error,
## "error while loading shared libraries: libprotobuf.so.14: cannot open shared object file: No such file or directory"
## See https://stackoverflow.com/a/25518702
-	protoc $(INCLUDE) $< --gogo_out=plugins=grpc:.
+	protoc $(INCLUDE) $< --gogo_out=Mgoogle/protobuf/timestamp.proto=github.com/golang/protobuf/ptypes/timestamp,plugins=grpc:.
	@echo "--> adding nolint declarations to protobuf generated files"
	@awk -i inplace '/^\s*package \w+/ { print "//nolint" }1' $@

@@ -77,16 +77,8 @@ update_tools:
	@echo "--> Updating tools"
	@go get -u $(GOTOOLS)

#Run this from CI
#Update dependencies
get_vendor_deps:
	@rm -rf vendor/
	@echo "--> Running dep"
	@dep ensure -vendor-only


#Run this locally.
ensure_deps:
	@rm -rf vendor/
	@echo "--> Running dep"
	@dep ensure

@@ -254,6 +246,16 @@ metalinter_all:
	@echo "--> Running linter (all)"
	gometalinter.v2 --vendor --deadline=600s --enable-all --disable=lll ./...

DESTINATION = ./index.html.md

rpc-docs:
	cat rpc/core/slate_header.txt > $(DESTINATION)
	godoc2md -template rpc/core/doc_template.txt github.com/tendermint/tendermint/rpc/core | grep -v -e "pipe.go" -e "routes.go" -e "dev.go" | sed 's,/src/target,https://github.com/tendermint/tendermint/tree/master/rpc/core,' >> $(DESTINATION)

check_dep:
	dep status >> /dev/null
	!(grep -n branch Gopkg.toml)

###########################################################
### Docker image

@@ -309,4 +311,4 @@ build-slate:
# To avoid unintended conflicts with file names, always add to .PHONY
# unless there is a reason not to.
# https://www.gnu.org/software/make/manual/html_node/Phony-Targets.html
-.PHONY: check build build_race build_abci dist install install_abci check_tools get_tools update_tools get_vendor_deps draw_deps get_protoc protoc_abci protoc_libs gen_certs clean_certs grpc_dbserver test_cover test_apps test_persistence test_p2p test test_race test_integrations test_release test100 vagrant_test fmt build-linux localnet-start localnet-stop build-docker build-docker-localnode sentry-start sentry-config sentry-stop build-slate protoc_grpc protoc_all
+.PHONY: check build build_race build_abci dist install install_abci check_dep check_tools get_tools update_tools get_vendor_deps draw_deps get_protoc protoc_abci protoc_libs gen_certs clean_certs grpc_dbserver test_cover test_apps test_persistence test_p2p test test_race test_integrations test_release test100 vagrant_test fmt rpc-docs build-linux localnet-start localnet-stop build-docker build-docker-localnode sentry-start sentry-config sentry-stop build-slate protoc_grpc protoc_all
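The new `check_dep` target fails if `dep status` errors (lock out of sync) or if any `branch` key appears in `Gopkg.toml`. A minimal sketch of the second check in Go (the manifest snippets below are illustrative, not from the real Gopkg.toml):

```go
package main

import (
	"fmt"
	"strings"
)

// hasBranchConstraint mirrors the `!(grep -n branch Gopkg.toml)` half of
// check_dep: any line mentioning `branch` fails the check.
func hasBranchConstraint(manifest string) bool {
	for _, line := range strings.Split(manifest, "\n") {
		if strings.Contains(line, "branch") {
			return true
		}
	}
	return false
}

func main() {
	pinned := "[[constraint]]\n  name = \"github.com/user/project\"\n  version = \"1.0.0\"\n"
	floating := "[[constraint]]\n  name = \"github.com/user/project2\"\n  branch = \"dev\"\n"
	fmt.Println(hasBranchConstraint(pinned))   // false: only pinned versions
	fmt.Println(hasBranchConstraint(floating)) // true: a branch constraint would fail CI
}
```

Like the grep it imitates, this is a plain substring match, which is why the check belongs in CI rather than as a strict parser of the TOML.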
@@ -22,7 +22,7 @@ develop | [
middleware that takes a state transition machine - written in any programming language -
and securely replicates it on many machines.

-For protocol details, see [the specification](/docs/spec).
+For protocol details, see [the specification](/docs/spec). For a consensus proof and detailed protocol analysis checkout our recent paper, "[The latest gossip on BFT consensus](https://arxiv.org/abs/1807.04938)".

## A Note on Production Readiness
@@ -29,7 +29,7 @@ The two guides to focus on are the `Application Development Guide` and `Using AB
 To compile the protobuf file, run:
 
 ```
-make protoc
+cd $GOPATH/src/github.com/tendermint/tendermint/; make protoc_abci
 ```
 
 See `protoc --help` and [the Protocol Buffers site](https://developers.google.com/protocol-buffers)
@@ -63,7 +63,7 @@ RETRY_LOOP:
 
 ENSURE_CONNECTED:
 	for {
-		_, err := client.Echo(context.Background(), &types.RequestEcho{"hello"}, grpc.FailFast(true))
+		_, err := client.Echo(context.Background(), &types.RequestEcho{Message: "hello"}, grpc.FailFast(true))
 		if err == nil {
 			break ENSURE_CONNECTED
 		}
@@ -129,7 +129,7 @@ func (cli *grpcClient) EchoAsync(msg string) *ReqRes {
 	if err != nil {
 		cli.StopForError(err)
 	}
-	return cli.finishAsyncCall(req, &types.Response{&types.Response_Echo{res}})
+	return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_Echo{res}})
 }
 
 func (cli *grpcClient) FlushAsync() *ReqRes {
@@ -138,7 +138,7 @@ func (cli *grpcClient) FlushAsync() *ReqRes {
 	if err != nil {
 		cli.StopForError(err)
 	}
-	return cli.finishAsyncCall(req, &types.Response{&types.Response_Flush{res}})
+	return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_Flush{res}})
 }
 
 func (cli *grpcClient) InfoAsync(params types.RequestInfo) *ReqRes {
@@ -147,7 +147,7 @@ func (cli *grpcClient) InfoAsync(params types.RequestInfo) *ReqRes {
 	if err != nil {
 		cli.StopForError(err)
 	}
-	return cli.finishAsyncCall(req, &types.Response{&types.Response_Info{res}})
+	return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_Info{res}})
 }
 
 func (cli *grpcClient) SetOptionAsync(params types.RequestSetOption) *ReqRes {
@@ -156,7 +156,7 @@ func (cli *grpcClient) SetOptionAsync(params types.RequestSetOption) *ReqRes {
 	if err != nil {
 		cli.StopForError(err)
 	}
-	return cli.finishAsyncCall(req, &types.Response{&types.Response_SetOption{res}})
+	return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_SetOption{res}})
 }
 
 func (cli *grpcClient) DeliverTxAsync(tx []byte) *ReqRes {
@@ -165,7 +165,7 @@ func (cli *grpcClient) DeliverTxAsync(tx []byte) *ReqRes {
 	if err != nil {
 		cli.StopForError(err)
 	}
-	return cli.finishAsyncCall(req, &types.Response{&types.Response_DeliverTx{res}})
+	return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_DeliverTx{res}})
 }
 
 func (cli *grpcClient) CheckTxAsync(tx []byte) *ReqRes {
@@ -174,7 +174,7 @@ func (cli *grpcClient) CheckTxAsync(tx []byte) *ReqRes {
 	if err != nil {
 		cli.StopForError(err)
 	}
-	return cli.finishAsyncCall(req, &types.Response{&types.Response_CheckTx{res}})
+	return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_CheckTx{res}})
 }
 
 func (cli *grpcClient) QueryAsync(params types.RequestQuery) *ReqRes {
@@ -183,7 +183,7 @@ func (cli *grpcClient) QueryAsync(params types.RequestQuery) *ReqRes {
 	if err != nil {
 		cli.StopForError(err)
 	}
-	return cli.finishAsyncCall(req, &types.Response{&types.Response_Query{res}})
+	return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_Query{res}})
 }
 
 func (cli *grpcClient) CommitAsync() *ReqRes {
@@ -192,7 +192,7 @@ func (cli *grpcClient) CommitAsync() *ReqRes {
 	if err != nil {
 		cli.StopForError(err)
 	}
-	return cli.finishAsyncCall(req, &types.Response{&types.Response_Commit{res}})
+	return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_Commit{res}})
 }
 
 func (cli *grpcClient) InitChainAsync(params types.RequestInitChain) *ReqRes {
@@ -201,7 +201,7 @@ func (cli *grpcClient) InitChainAsync(params types.RequestInitChain) *ReqRes {
 	if err != nil {
 		cli.StopForError(err)
 	}
-	return cli.finishAsyncCall(req, &types.Response{&types.Response_InitChain{res}})
+	return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_InitChain{res}})
 }
 
 func (cli *grpcClient) BeginBlockAsync(params types.RequestBeginBlock) *ReqRes {
@@ -210,7 +210,7 @@ func (cli *grpcClient) BeginBlockAsync(params types.RequestBeginBlock) *ReqRes {
 	if err != nil {
 		cli.StopForError(err)
 	}
-	return cli.finishAsyncCall(req, &types.Response{&types.Response_BeginBlock{res}})
+	return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_BeginBlock{res}})
 }
 
 func (cli *grpcClient) EndBlockAsync(params types.RequestEndBlock) *ReqRes {
@@ -219,7 +219,7 @@ func (cli *grpcClient) EndBlockAsync(params types.RequestEndBlock) *ReqRes {
 	if err != nil {
 		cli.StopForError(err)
 	}
-	return cli.finishAsyncCall(req, &types.Response{&types.Response_EndBlock{res}})
+	return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_EndBlock{res}})
 }
 
 func (cli *grpcClient) finishAsyncCall(req *types.Request, res *types.Response) *ReqRes {
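The mechanical change in every hunk above, replacing positional composite literals with named fields, is what `go vet`'s composites check is about: positional literals silently break whenever a protobuf-generated struct gains or reorders fields. A minimal runnable sketch (the local `RequestEcho` here is a hypothetical stand-in for the generated type):

```go
package main

import "fmt"

// RequestEcho is a hypothetical stand-in for the generated protobuf struct.
type RequestEcho struct {
	Message string
}

func main() {
	// Old style: positional fields stop compiling (or misassign) as soon as
	// the generated struct changes shape, and `go vet` flags them.
	positional := RequestEcho{"hello"}

	// New style: named fields stay correct across regenerated code.
	named := RequestEcho{Message: "hello"}

	fmt.Println(positional.Message == named.Message)
}
```

Named fields also let later fields be omitted entirely, which is why `RequestBeginBlock{hash, header, nil, nil}` can shrink to `RequestBeginBlock{Hash: hash, Header: header}` elsewhere in this commit.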
@@ -149,7 +149,7 @@ func (app *localClient) FlushSync() error {
 }
 
 func (app *localClient) EchoSync(msg string) (*types.ResponseEcho, error) {
-	return &types.ResponseEcho{msg}, nil
+	return &types.ResponseEcho{Message: msg}, nil
 }
 
 func (app *localClient) InfoSync(req types.RequestInfo) (*types.ResponseInfo, error) {
@@ -514,7 +514,7 @@ func cmdInfo(cmd *cobra.Command, args []string) error {
 	if len(args) == 1 {
 		version = args[0]
 	}
-	res, err := client.InfoSync(types.RequestInfo{version})
+	res, err := client.InfoSync(types.RequestInfo{Version: version})
 	if err != nil {
 		return err
 	}
@@ -537,7 +537,7 @@ func cmdSetOption(cmd *cobra.Command, args []string) error {
 	}
 
 	key, val := args[0], args[1]
-	_, err := client.SetOptionSync(types.RequestSetOption{key, val})
+	_, err := client.SetOptionSync(types.RequestSetOption{Key: key, Value: val})
 	if err != nil {
 		return err
 	}
@@ -132,7 +132,7 @@ func testGRPCSync(t *testing.T, app *types.GRPCApplication) {
 	// Write requests
 	for counter := 0; counter < numDeliverTxs; counter++ {
 		// Send request
-		response, err := client.DeliverTx(context.Background(), &types.RequestDeliverTx{[]byte("test")})
+		response, err := client.DeliverTx(context.Background(), &types.RequestDeliverTx{Tx: []byte("test")})
 		if err != nil {
 			t.Fatalf("Error in GRPC DeliverTx: %v", err.Error())
 		}
@@ -90,8 +90,8 @@ func TestPersistentKVStoreInfo(t *testing.T) {
 	header := types.Header{
 		Height: int64(height),
 	}
-	kvstore.BeginBlock(types.RequestBeginBlock{hash, header, nil, nil})
-	kvstore.EndBlock(types.RequestEndBlock{header.Height})
+	kvstore.BeginBlock(types.RequestBeginBlock{Hash: hash, Header: header})
+	kvstore.EndBlock(types.RequestEndBlock{Height: header.Height})
 	kvstore.Commit()
 
 	resInfo = kvstore.Info(types.RequestInfo{})
@@ -176,13 +176,13 @@ func makeApplyBlock(t *testing.T, kvstore types.Application, heightInt int, diff
 		Height: height,
 	}
 
-	kvstore.BeginBlock(types.RequestBeginBlock{hash, header, nil, nil})
+	kvstore.BeginBlock(types.RequestBeginBlock{Hash: hash, Header: header})
 	for _, tx := range txs {
 		if r := kvstore.DeliverTx(tx); r.IsErr() {
 			t.Fatal(r)
 		}
 	}
-	resEndBlock := kvstore.EndBlock(types.RequestEndBlock{header.Height})
+	resEndBlock := kvstore.EndBlock(types.RequestEndBlock{Height: header.Height})
 	kvstore.Commit()
 
 	valsEqual(t, diff, resEndBlock.ValidatorUpdates)
@@ -26,7 +26,7 @@ func startClient(abciType string) abcicli.Client {
 }
 
 func setOption(client abcicli.Client, key, value string) {
-	_, err := client.SetOptionSync(types.RequestSetOption{key, value})
+	_, err := client.SetOptionSync(types.RequestSetOption{Key: key, Value: value})
 	if err != nil {
 		panicf("setting %v=%v: \nerr: %v", key, value, err)
 	}
@@ -85,7 +85,7 @@ func NewGRPCApplication(app Application) *GRPCApplication {
 }
 
 func (app *GRPCApplication) Echo(ctx context.Context, req *RequestEcho) (*ResponseEcho, error) {
-	return &ResponseEcho{req.Message}, nil
+	return &ResponseEcho{Message: req.Message}, nil
 }
 
 func (app *GRPCApplication) Flush(ctx context.Context, req *RequestFlush) (*ResponseFlush, error) {
@@ -71,7 +71,7 @@ func encodeVarint(w io.Writer, i int64) (err error) {
 
 func ToRequestEcho(message string) *Request {
 	return &Request{
-		Value: &Request_Echo{&RequestEcho{message}},
+		Value: &Request_Echo{&RequestEcho{Message: message}},
 	}
 }
 
@@ -95,13 +95,13 @@ func ToRequestSetOption(req RequestSetOption) *Request {
 
 func ToRequestDeliverTx(tx []byte) *Request {
 	return &Request{
-		Value: &Request_DeliverTx{&RequestDeliverTx{tx}},
+		Value: &Request_DeliverTx{&RequestDeliverTx{Tx: tx}},
 	}
 }
 
 func ToRequestCheckTx(tx []byte) *Request {
 	return &Request{
-		Value: &Request_CheckTx{&RequestCheckTx{tx}},
+		Value: &Request_CheckTx{&RequestCheckTx{Tx: tx}},
 	}
 }
 
@@ -139,13 +139,13 @@ func ToRequestEndBlock(req RequestEndBlock) *Request {
 
 func ToResponseException(errStr string) *Response {
 	return &Response{
-		Value: &Response_Exception{&ResponseException{errStr}},
+		Value: &Response_Exception{&ResponseException{Error: errStr}},
 	}
 }
 
 func ToResponseEcho(message string) *Response {
 	return &Response{
-		Value: &Response_Echo{&ResponseEcho{message}},
+		Value: &Response_Echo{&ResponseEcho{Message: message}},
 	}
 }
@@ -85,7 +85,6 @@ func TestWriteReadMessage2(t *testing.T) {
 		Tags: []cmn.KVPair{
 			cmn.KVPair{[]byte("abc"), []byte("def")},
 		},
 		// Fee: cmn.KI64Pair{
 		},
 		// TODO: add the rest
 	}
File diff suppressed because it is too large
@@ -4,6 +4,7 @@ package types;
 // For more information on gogo.proto, see:
 // https://github.com/gogo/protobuf/blob/master/extensions.md
 import "github.com/gogo/protobuf/gogoproto/gogo.proto";
+import "google/protobuf/timestamp.proto";
 import "github.com/tendermint/tendermint/libs/common/types.proto";
 
 // This file is copied from http://github.com/tendermint/abci
@@ -56,7 +57,7 @@ message RequestSetOption {
 }
 
 message RequestInitChain {
-  int64 time = 1;
+  google.protobuf.Timestamp time = 1 [(gogoproto.nullable)=false, (gogoproto.stdtime)=true];
   string chain_id = 2;
   ConsensusParams consensus_params = 3;
   repeated Validator validators = 4 [(gogoproto.nullable)=false];
@@ -70,10 +71,11 @@ message RequestQuery {
   bool prove = 4;
 }
 
+// NOTE: validators here have empty pubkeys.
 message RequestBeginBlock {
   bytes hash = 1;
   Header header = 2 [(gogoproto.nullable)=false];
-  repeated SigningValidator validators = 3 [(gogoproto.nullable)=false];
+  LastCommitInfo last_commit_info = 3 [(gogoproto.nullable)=false];
   repeated Evidence byzantine_validators = 4 [(gogoproto.nullable)=false];
 }
 
@@ -168,7 +170,6 @@ message ResponseCheckTx {
   int64 gas_wanted = 5;
   int64 gas_used = 6;
   repeated common.KVPair tags = 7 [(gogoproto.nullable)=false, (gogoproto.jsontag)="tags,omitempty"];
-  common.KI64Pair fee = 8 [(gogoproto.nullable)=false];
 }
 
 message ResponseDeliverTx {
@@ -179,7 +180,6 @@ message ResponseDeliverTx {
   int64 gas_wanted = 5;
   int64 gas_used = 6;
   repeated common.KVPair tags = 7 [(gogoproto.nullable)=false, (gogoproto.jsontag)="tags,omitempty"];
-  common.KI64Pair fee = 8 [(gogoproto.nullable)=false];
 }
 
 message ResponseEndBlock {
@@ -204,14 +204,14 @@ message ConsensusParams {
   BlockGossip block_gossip = 3;
 }
 
-// BlockSize contain limits on the block size.
+// BlockSize contains limits on the block size.
 message BlockSize {
   int32 max_bytes = 1;
   int32 max_txs = 2;
   int64 max_gas = 3;
 }
 
-// TxSize contain limits on the tx size.
+// TxSize contains limits on the tx size.
 message TxSize {
   int32 max_bytes = 1;
   int64 max_gas = 2;
@@ -224,6 +224,11 @@ message BlockGossip {
   int32 block_part_size_bytes = 1;
 }
 
+message LastCommitInfo {
+  int32 commit_round = 1;
+  repeated SigningValidator validators = 2 [(gogoproto.nullable)=false];
+}
+
 //----------------------------------------
 // Blockchain Types
 
@@ -232,7 +237,7 @@ message Header {
   // basics
   string chain_id = 1 [(gogoproto.customname)="ChainID"];
   int64 height = 2;
-  int64 time = 3;
+  google.protobuf.Timestamp time = 3 [(gogoproto.nullable)=false, (gogoproto.stdtime)=true];
 
   // txs
   int32 num_txs = 4;
@@ -269,7 +274,7 @@ message Evidence {
   string type = 1;
   Validator validator = 2 [(gogoproto.nullable)=false];
   int64 height = 3;
-  int64 time = 4;
+  google.protobuf.Timestamp time = 4 [(gogoproto.nullable)=false, (gogoproto.stdtime)=true];
   int64 total_voting_power = 5;
 }
|
File diff suppressed because it is too large
@@ -109,9 +109,6 @@ func (bcR *BlockchainReactor) SetLogger(l log.Logger) {
 
 // OnStart implements cmn.Service.
 func (bcR *BlockchainReactor) OnStart() error {
-	if err := bcR.BaseReactor.OnStart(); err != nil {
-		return err
-	}
 	if bcR.fastSync {
 		err := bcR.pool.Start()
 		if err != nil {
@@ -124,7 +121,6 @@ func (bcR *BlockchainReactor) OnStart() error {
 
 // OnStop implements cmn.Service.
 func (bcR *BlockchainReactor) OnStop() {
-	bcR.BaseReactor.OnStop()
 	bcR.pool.Stop()
 }
@@ -156,7 +156,7 @@ func makeTxs(height int64) (txs []types.Tx) {
 }
 
 func makeBlock(height int64, state sm.State) *types.Block {
-	block, _ := state.MakeBlock(height, makeTxs(height), new(types.Commit))
+	block, _ := state.MakeBlock(height, makeTxs(height), new(types.Commit), nil)
 	return block
 }
@@ -2,12 +2,12 @@ package blockchain
 
 import (
 	"github.com/tendermint/go-amino"
 	cryptoAmino "github.com/tendermint/tendermint/crypto/encoding/amino"
 	"github.com/tendermint/tendermint/types"
 )
 
 var cdc = amino.NewCodec()
 
 func init() {
 	RegisterBlockchainMessages(cdc)
 	cryptoAmino.RegisterAmino(cdc)
 	types.RegisterBlockAmino(cdc)
 }
@@ -378,8 +378,11 @@ func randConsensusNetWithPeers(nValidators, nPeers int, testName string, tickerF
 		if i < nValidators {
 			privVal = privVals[i]
 		} else {
-			_, tempFilePath := cmn.Tempfile("priv_validator_")
-			privVal = privval.GenFilePV(tempFilePath)
+			tempFile, err := ioutil.TempFile("", "priv_validator_")
+			if err != nil {
+				panic(err)
+			}
+			privVal = privval.GenFilePV(tempFile.Name())
 		}
 
 		app := appFunc()
@@ -429,7 +432,7 @@ func randGenesisDoc(numValidators int, randPower bool, minPower int64) (*types.G
 func randGenesisState(numValidators int, randPower bool, minPower int64) (sm.State, []types.PrivValidator) {
 	genDoc, privValidators := randGenesisDoc(numValidators, randPower, minPower)
 	s0, _ := sm.MakeGenesisState(genDoc)
-	db := dbm.NewMemDB()
+	db := dbm.NewMemDB() // remove this ?
 	sm.SaveState(db, s0)
 	return s0, privValidators
 }
@@ -58,9 +58,6 @@ func NewConsensusReactor(consensusState *ConsensusState, fastSync bool) *Consens
 // broadcasted to other peers and starting state if we're not in fast sync.
 func (conR *ConsensusReactor) OnStart() error {
 	conR.Logger.Info("ConsensusReactor ", "fastSync", conR.FastSync())
-	if err := conR.BaseReactor.OnStart(); err != nil {
-		return err
-	}
 
 	conR.subscribeToBroadcastEvents()
 
@@ -77,7 +74,6 @@ func (conR *ConsensusReactor) OnStart() error {
 // OnStop implements BaseService by unsubscribing from events and stopping
 // state.
 func (conR *ConsensusReactor) OnStop() {
-	conR.BaseReactor.OnStop()
 	conR.unsubscribeFromBroadcastEvents()
 	conR.conS.Stop()
 	if !conR.FastSync() {
@@ -4,15 +4,22 @@ import (
 	"context"
 	"fmt"
 	"os"
 	"path"
 	"runtime"
 	"runtime/pprof"
 	"sync"
 	"testing"
 	"time"
 
 	abcicli "github.com/tendermint/tendermint/abci/client"
 	"github.com/tendermint/tendermint/abci/example/kvstore"
 	abci "github.com/tendermint/tendermint/abci/types"
 	bc "github.com/tendermint/tendermint/blockchain"
 	cmn "github.com/tendermint/tendermint/libs/common"
 	dbm "github.com/tendermint/tendermint/libs/db"
 	"github.com/tendermint/tendermint/libs/log"
 	mempl "github.com/tendermint/tendermint/mempool"
 	sm "github.com/tendermint/tendermint/state"
 
 	cfg "github.com/tendermint/tendermint/config"
 	"github.com/tendermint/tendermint/p2p"
@@ -91,6 +98,120 @@ func TestReactorBasic(t *testing.T) {
 	}, css)
 }
 
+// Ensure we can process blocks with evidence
+func TestReactorWithEvidence(t *testing.T) {
+	nValidators := 4
+	testName := "consensus_reactor_test"
+	tickerFunc := newMockTickerFunc(true)
+	appFunc := newCounter
+
+	// heed the advice from https://www.sandimetz.com/blog/2016/1/20/the-wrong-abstraction
+	// to unroll unwieldy abstractions. Here we duplicate the code from:
+	// css := randConsensusNet(N, "consensus_reactor_test", newMockTickerFunc(true), newCounter)
+
+	genDoc, privVals := randGenesisDoc(nValidators, false, 30)
+	css := make([]*ConsensusState, nValidators)
+	logger := consensusLogger()
+	for i := 0; i < nValidators; i++ {
+		stateDB := dbm.NewMemDB() // each state needs its own db
+		state, _ := sm.LoadStateFromDBOrGenesisDoc(stateDB, genDoc)
+		thisConfig := ResetConfig(cmn.Fmt("%s_%d", testName, i))
+		ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
+		app := appFunc()
+		vals := types.TM2PB.Validators(state.Validators)
+		app.InitChain(abci.RequestInitChain{Validators: vals})
+
+		pv := privVals[i]
+		// duplicate code from:
+		// css[i] = newConsensusStateWithConfig(thisConfig, state, privVals[i], app)
+
+		blockDB := dbm.NewMemDB()
+		blockStore := bc.NewBlockStore(blockDB)
+
+		// one for mempool, one for consensus
+		mtx := new(sync.Mutex)
+		proxyAppConnMem := abcicli.NewLocalClient(mtx, app)
+		proxyAppConnCon := abcicli.NewLocalClient(mtx, app)
+
+		// Make Mempool
+		mempool := mempl.NewMempool(thisConfig.Mempool, proxyAppConnMem, 0)
+		mempool.SetLogger(log.TestingLogger().With("module", "mempool"))
+		if thisConfig.Consensus.WaitForTxs() {
+			mempool.EnableTxsAvailable()
+		}
+
+		// mock the evidence pool
+		// everyone includes evidence of another double signing
+		vIdx := (i + 1) % nValidators
+		evpool := newMockEvidencePool(privVals[vIdx].GetAddress())
+
+		// Make ConsensusState
+		blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyAppConnCon, mempool, evpool)
+		cs := NewConsensusState(thisConfig.Consensus, state, blockExec, blockStore, mempool, evpool)
+		cs.SetLogger(log.TestingLogger().With("module", "consensus"))
+		cs.SetPrivValidator(pv)
+
+		eventBus := types.NewEventBus()
+		eventBus.SetLogger(log.TestingLogger().With("module", "events"))
+		eventBus.Start()
+		cs.SetEventBus(eventBus)
+
+		cs.SetTimeoutTicker(tickerFunc())
+		cs.SetLogger(logger.With("validator", i, "module", "consensus"))
+
+		css[i] = cs
+	}
+
+	reactors, eventChans, eventBuses := startConsensusNet(t, css, nValidators)
+	defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)
+
+	// wait till everyone makes the first new block with no evidence
+	timeoutWaitGroup(t, nValidators, func(j int) {
+		blockI := <-eventChans[j]
+		block := blockI.(types.EventDataNewBlock).Block
+		assert.True(t, len(block.Evidence.Evidence) == 0)
+	}, css)
+
+	// second block should have evidence
+	timeoutWaitGroup(t, nValidators, func(j int) {
+		blockI := <-eventChans[j]
+		block := blockI.(types.EventDataNewBlock).Block
+		assert.True(t, len(block.Evidence.Evidence) > 0)
+	}, css)
+}
+
+// mock evidence pool returns no evidence for block 1,
+// and returns one piece for all higher blocks. The one piece
+// is for a given validator at block 1.
+type mockEvidencePool struct {
+	height int
+	ev     []types.Evidence
+}
+
+func newMockEvidencePool(val []byte) *mockEvidencePool {
+	return &mockEvidencePool{
+		ev: []types.Evidence{types.NewMockGoodEvidence(1, 1, val)},
+	}
+}
+
+func (m *mockEvidencePool) PendingEvidence() []types.Evidence {
+	if m.height > 0 {
+		return m.ev
+	}
+	return nil
+}
+func (m *mockEvidencePool) AddEvidence(types.Evidence) error { return nil }
+func (m *mockEvidencePool) Update(block *types.Block, state sm.State) {
+	if m.height > 0 {
+		if len(block.Evidence.Evidence) == 0 {
+			panic("block has no evidence")
+		}
+	}
+	m.height += 1
+}
+
+//------------------------------------
 
 // Ensure a testnet sends proposal heartbeats and makes blocks when there are txs
 func TestReactorProposalHeartbeats(t *testing.T) {
 	N := 4
@@ -227,7 +227,7 @@ func (h *Handshaker) NBlocks() int {
 func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
 
 	// Handshake is done via ABCI Info on the query conn.
-	res, err := proxyApp.Query().InfoSync(abci.RequestInfo{version.Version})
+	res, err := proxyApp.Query().InfoSync(abci.RequestInfo{Version: version.Version})
 	if err != nil {
 		return fmt.Errorf("Error calling Info: %v", err)
 	}
@@ -269,7 +269,7 @@ func (h *Handshaker) ReplayBlocks(state sm.State, appHash []byte, appBlockHeight
 	nextVals := types.TM2PB.Validators(state.NextValidators) // state.Validators would work too.
 	csParams := types.TM2PB.ConsensusParams(h.genDoc.ConsensusParams)
 	req := abci.RequestInitChain{
-		Time:            h.genDoc.GenesisTime.Unix(), // TODO
+		Time:            h.genDoc.GenesisTime,
 		ChainId:         h.genDoc.ChainID,
 		ConsensusParams: csParams,
 		Validators:      nextVals,
@@ -376,7 +376,7 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
 	defer proxyApp.Stop()
 
 	// get the latest app hash from the app
-	res, err := proxyApp.Query().InfoSync(abci.RequestInfo{""})
+	res, err := proxyApp.Query().InfoSync(abci.RequestInfo{Version: ""})
 	if err != nil {
 		t.Fatal(err)
 	}
@@ -915,6 +915,8 @@ func (cs *ConsensusState) isProposalComplete() bool {
 }
 
 // Create the next block to propose and return it.
+// We really only need to return the parts, but the block
+// is returned for convenience so we can log the proposal block.
 // Returns nil block upon error.
 // NOTE: keep it side-effect free for clarity.
 func (cs *ConsensusState) createProposalBlock() (block *types.Block, blockParts *types.PartSet) {
@@ -934,9 +936,8 @@ func (cs *ConsensusState) createProposalBlock() (block *types.Block, blockParts
 
 	// Mempool validated transactions
 	txs := cs.mempool.Reap(cs.state.ConsensusParams.BlockSize.MaxTxs)
-	block, parts := cs.state.MakeBlock(cs.Height, txs, commit)
 	evidence := cs.evpool.PendingEvidence()
-	block.AddEvidence(evidence)
+	block, parts := cs.state.MakeBlock(cs.Height, txs, commit, evidence)
 	return block, parts
 }
@@ -2,11 +2,11 @@ package types
 
 import (
 	"github.com/tendermint/go-amino"
 	cryptoAmino "github.com/tendermint/tendermint/crypto/encoding/amino"
 	"github.com/tendermint/tendermint/types"
 )
 
 var cdc = amino.NewCodec()
 
 func init() {
 	cryptoAmino.RegisterAmino(cdc)
 	types.RegisterBlockAmino(cdc)
 }
@@ -2,7 +2,7 @@ package consensus
 
 import (
 	"github.com/tendermint/go-amino"
 	cryptoAmino "github.com/tendermint/tendermint/crypto/encoding/amino"
 	"github.com/tendermint/tendermint/types"
 )
 
 var cdc = amino.NewCodec()
@@ -10,5 +10,5 @@ var cdc = amino.NewCodec()
 func init() {
 	RegisterConsensusMessages(cdc)
 	RegisterWALMessages(cdc)
 	cryptoAmino.RegisterAmino(cdc)
 	types.RegisterBlockAmino(cdc)
 }
crypto/ed25519/bench_test.go · 26 lines · new file
@@ -0,0 +1,26 @@
+package ed25519
+
+import (
+	"io"
+	"testing"
+
+	"github.com/tendermint/tendermint/crypto"
+	"github.com/tendermint/tendermint/crypto/internal/benchmarking"
+)
+
+func BenchmarkKeyGeneration(b *testing.B) {
+	benchmarkKeygenWrapper := func(reader io.Reader) crypto.PrivKey {
+		return genPrivKey(reader)
+	}
+	benchmarking.BenchmarkKeyGeneration(b, benchmarkKeygenWrapper)
+}
+
+func BenchmarkSigning(b *testing.B) {
+	priv := GenPrivKey()
+	benchmarking.BenchmarkSigning(b, priv)
+}
+
+func BenchmarkVerification(b *testing.B) {
+	priv := GenPrivKey()
+	benchmarking.BenchmarkVerification(b, priv)
+}
@@ -4,6 +4,7 @@ import (
 	"bytes"
 	"crypto/subtle"
 	"fmt"
+	"io"
 
 	"github.com/tendermint/ed25519"
 	"github.com/tendermint/ed25519/extra25519"
@@ -102,8 +103,16 @@ func (privKey PrivKeyEd25519) ToCurve25519() *[PubKeyEd25519Size]byte {
 // It uses OS randomness in conjunction with the current global random seed
 // in tendermint/libs/common to generate the private key.
 func GenPrivKey() PrivKeyEd25519 {
+	return genPrivKey(crypto.CReader())
+}
+
+// genPrivKey generates a new ed25519 private key using the provided reader.
+func genPrivKey(rand io.Reader) PrivKeyEd25519 {
 	privKey := new([64]byte)
-	copy(privKey[:32], crypto.CRandBytes(32))
+	_, err := io.ReadFull(rand, privKey[:32])
+	if err != nil {
+		panic(err)
+	}
 	// ed25519.MakePublicKey(privKey) alters the last 32 bytes of privKey.
 	// It places the pubkey in the last 32 bytes of privKey, and returns the
 	// public key.
@@ -1,106 +0,0 @@
-// Package hkdfchacha20poly1305 creates an AEAD using hkdf, chacha20, and poly1305
-// When sealing and opening, the hkdf is used to obtain the nonce and subkey for
-// chacha20. Other than the change for the how the subkey and nonce for chacha
-// are obtained, this is the same as chacha20poly1305
-package hkdfchacha20poly1305
-
-import (
-	"crypto/cipher"
-	"crypto/sha256"
-	"errors"
-	"io"
-
-	"golang.org/x/crypto/chacha20poly1305"
-	"golang.org/x/crypto/hkdf"
-)
-
-// Implements crypto.AEAD
-type hkdfchacha20poly1305 struct {
-	key [KeySize]byte
-}
-
-const (
-	// KeySize is the size of the key used by this AEAD, in bytes.
-	KeySize = 32
-	// NonceSize is the size of the nonce used with this AEAD, in bytes.
-	NonceSize = 24
-	// TagSize is the size added from poly1305
-	TagSize = 16
-	// MaxPlaintextSize is the max size that can be passed into a single call of Seal
-	MaxPlaintextSize = (1 << 38) - 64
-	// MaxCiphertextSize is the max size that can be passed into a single call of Open,
-	// this differs from plaintext size due to the tag
-	MaxCiphertextSize = (1 << 38) - 48
-	// HkdfInfo is the parameter used internally for Hkdf's info parameter.
-	HkdfInfo = "TENDERMINT_SECRET_CONNECTION_FRAME_KEY_DERIVE"
-)
-
-//New xChaChapoly1305 AEAD with 24 byte nonces
-func New(key []byte) (cipher.AEAD, error) {
-	if len(key) != KeySize {
-		return nil, errors.New("chacha20poly1305: bad key length")
-	}
-	ret := new(hkdfchacha20poly1305)
-	copy(ret.key[:], key)
-	return ret, nil
-
-}
-func (c *hkdfchacha20poly1305) NonceSize() int {
-	return NonceSize
-}
-
-func (c *hkdfchacha20poly1305) Overhead() int {
-	return TagSize
-}
-
-func (c *hkdfchacha20poly1305) Seal(dst, nonce, plaintext, additionalData []byte) []byte {
-	if len(nonce) != NonceSize {
-		panic("hkdfchacha20poly1305: bad nonce length passed to Seal")
-	}
-
-	if uint64(len(plaintext)) > MaxPlaintextSize {
-		panic("hkdfchacha20poly1305: plaintext too large")
-	}
-
-	subKey, chachaNonce := getSubkeyAndChachaNonceFromHkdf(&c.key, &nonce)
-
-	aead, err := chacha20poly1305.New(subKey[:])
-	if err != nil {
-		panic("hkdfchacha20poly1305: failed to initialize chacha20poly1305")
-	}
-
-	return aead.Seal(dst, chachaNonce[:], plaintext, additionalData)
-}
-
-func (c *hkdfchacha20poly1305) Open(dst, nonce, ciphertext, additionalData []byte) ([]byte, error) {
-	if len(nonce) != NonceSize {
-		return nil, errors.New("hkdfchacha20poly1305: bad nonce length passed to Open")
-	}
-	if uint64(len(ciphertext)) > MaxCiphertextSize {
-		return nil, errors.New("hkdfchacha20poly1305: ciphertext too large")
-	}
-
-	subKey, chachaNonce := getSubkeyAndChachaNonceFromHkdf(&c.key, &nonce)
-
-	aead, err := chacha20poly1305.New(subKey[:])
-	if err != nil {
-		panic("hkdfchacha20poly1305: failed to initialize chacha20poly1305")
-	}
-
-	return aead.Open(dst, chachaNonce[:], ciphertext, additionalData)
-}
-
-func getSubkeyAndChachaNonceFromHkdf(cKey *[32]byte, nonce *[]byte) (
-	subKey [KeySize]byte, chachaNonce [chacha20poly1305.NonceSize]byte) {
-	hash := sha256.New
-	hkdf := hkdf.New(hash, (*cKey)[:], *nonce, []byte(HkdfInfo))
-	_, err := io.ReadFull(hkdf, subKey[:])
-	if err != nil {
-		panic("hkdfchacha20poly1305: failed to read subkey from hkdf")
-	}
-	_, err = io.ReadFull(hkdf, chachaNonce[:])
-	if err != nil {
-		panic("hkdfchacha20poly1305: failed to read chachaNonce from hkdf")
-	}
-	return
-}
|
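The removed package derived a fresh chacha20 subkey and nonce from HKDF on every call. A minimal RFC 5869 HKDF-SHA256 sketch of that derivation, built only on the standard library (`hkdfExtract`/`hkdfExpand` are hypothetical local helpers standing in for `golang.org/x/crypto/hkdf`):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"fmt"
)

// hkdfExtract computes PRK = HMAC-SHA256(salt, ikm), per RFC 5869.
func hkdfExtract(salt, ikm []byte) []byte {
	m := hmac.New(sha256.New, salt)
	m.Write(ikm)
	return m.Sum(nil)
}

// hkdfExpand computes T(1) || T(2) || ... truncated to length, per RFC 5869:
// T(n) = HMAC-SHA256(prk, T(n-1) || info || n).
func hkdfExpand(prk, info []byte, length int) []byte {
	var out, t []byte
	for counter := byte(1); len(out) < length; counter++ {
		m := hmac.New(sha256.New, prk)
		m.Write(t)
		m.Write(info)
		m.Write([]byte{counter})
		t = m.Sum(nil)
		out = append(out, t...)
	}
	return out[:length]
}

func main() {
	key := make([]byte, 32)   // connection key (all-zero for the demo)
	nonce := make([]byte, 24) // the 24-byte outer nonce, used as HKDF salt
	info := []byte("TENDERMINT_SECRET_CONNECTION_FRAME_KEY_DERIVE")

	// As in the removed getSubkeyAndChachaNonceFromHkdf: one HKDF stream
	// yields a 32-byte chacha20 subkey followed by a 12-byte chacha nonce.
	prk := hkdfExtract(nonce, key)
	okm := hkdfExpand(prk, info, 32+12)
	subKey, chachaNonce := okm[:32], okm[32:]
	fmt.Println(len(subKey), len(chachaNonce))
}
```

Deriving a per-message subkey this way is what stretched the effective nonce from chacha20poly1305's 12 bytes to the 24-byte `NonceSize` the package advertised.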
crypto/internal/benchmarking/bench.go · 88 lines · new file
@@ -0,0 +1,88 @@
+package benchmarking
+
+import (
+	"io"
+	"testing"
+
+	"github.com/tendermint/tendermint/crypto"
+)
+
+// The code in this file is adapted from agl/ed25519.
+// As such it is under the following license.
+// Copyright 2012 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found at the bottom of this file.
+
+type zeroReader struct{}
+
+func (zeroReader) Read(buf []byte) (int, error) {
+	for i := range buf {
+		buf[i] = 0
+	}
+	return len(buf), nil
+}
+
+// BenchmarkKeyGeneration benchmarks the given key generation algorithm using
+// a dummy reader.
+func BenchmarkKeyGeneration(b *testing.B, GenerateKey func(reader io.Reader) crypto.PrivKey) {
+	var zero zeroReader
+	for i := 0; i < b.N; i++ {
+		GenerateKey(zero)
+	}
+}
+
+// BenchmarkSigning benchmarks the given signing algorithm using
+// the provided privkey.
+func BenchmarkSigning(b *testing.B, priv crypto.PrivKey) {
+	message := []byte("Hello, world!")
+	b.ResetTimer()
+	for i := 0; i < b.N; i++ {
+		priv.Sign(message)
+	}
+}
+
+// BenchmarkVerification benchmarks the given verification algorithm using
+// the provided privkey on a constant message.
+func BenchmarkVerification(b *testing.B, priv crypto.PrivKey) {
+	pub := priv.PubKey()
+	// use a short message, so this time doesn't get dominated by hashing.
+	message := []byte("Hello, world!")
+	signature, err := priv.Sign(message)
+	if err != nil {
+		b.Fatal(err)
+	}
+	b.ResetTimer()
+	for i := 0; i < b.N; i++ {
+		pub.VerifyBytes(message, signature)
+	}
+}
+
+// Below is the aforementioned license.
+
+// Copyright (c) 2012 The Go Authors. All rights reserved.
|
||||
// Redistribution and use in source and binary forms, with or without
|
||||
// modification, are permitted provided that the following conditions are
|
||||
// met:
|
||||
|
||||
// * Redistributions of source code must retain the above copyright
|
||||
// notice, this list of conditions and the following disclaimer.
|
||||
// * Redistributions in binary form must reproduce the above
|
||||
// copyright notice, this list of conditions and the following disclaimer
|
||||
// in the documentation and/or other materials provided with the
|
||||
// distribution.
|
||||
// * Neither the name of Google Inc. nor the names of its
|
||||
// contributors may be used to endorse or promote products derived from
|
||||
// this software without specific prior written permission.
|
||||
|
||||
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
26
crypto/secp256k1/bench_test.go
Normal file
@ -0,0 +1,26 @@
package secp256k1

import (
	"io"
	"testing"

	"github.com/tendermint/tendermint/crypto"
	"github.com/tendermint/tendermint/crypto/internal/benchmarking"
)

func BenchmarkKeyGeneration(b *testing.B) {
	benchmarkKeygenWrapper := func(reader io.Reader) crypto.PrivKey {
		return genPrivKey(reader)
	}
	benchmarking.BenchmarkKeyGeneration(b, benchmarkKeygenWrapper)
}

func BenchmarkSigning(b *testing.B) {
	priv := GenPrivKey()
	benchmarking.BenchmarkSigning(b, priv)
}

func BenchmarkVerification(b *testing.B) {
	priv := GenPrivKey()
	benchmarking.BenchmarkVerification(b, priv)
}
@ -5,6 +5,7 @@ import (
	"crypto/sha256"
	"crypto/subtle"
	"fmt"
	"io"

	secp256k1 "github.com/btcsuite/btcd/btcec"
	amino "github.com/tendermint/go-amino"
@ -80,8 +81,16 @@ func (privKey PrivKeySecp256k1) Equals(other crypto.PrivKey) bool {
// It uses OS randomness in conjunction with the current global random seed
// in tendermint/libs/common to generate the private key.
func GenPrivKey() PrivKeySecp256k1 {
	return genPrivKey(crypto.CReader())
}

// genPrivKey generates a new secp256k1 private key using the provided reader.
func genPrivKey(rand io.Reader) PrivKeySecp256k1 {
	privKeyBytes := [32]byte{}
	copy(privKeyBytes[:], crypto.CRandBytes(32))
	_, err := io.ReadFull(rand, privKeyBytes[:])
	if err != nil {
		panic(err)
	}
	// crypto.CRandBytes is guaranteed to be 32 bytes long, so it can be
	// cast to PrivKeySecp256k1.
	return PrivKeySecp256k1(privKeyBytes)
103
crypto/xchacha20poly1305/vector_test.go
Normal file
@ -0,0 +1,103 @@
package xchacha20poly1305

import (
	"bytes"
	"encoding/hex"
	"testing"
)

func toHex(bits []byte) string {
	return hex.EncodeToString(bits)
}

func fromHex(bits string) []byte {
	b, err := hex.DecodeString(bits)
	if err != nil {
		panic(err)
	}
	return b
}

func TestHChaCha20(t *testing.T) {
	for i, v := range hChaCha20Vectors {
		var key [32]byte
		var nonce [16]byte
		copy(key[:], v.key)
		copy(nonce[:], v.nonce)

		HChaCha20(&key, &nonce, &key)
		if !bytes.Equal(key[:], v.keystream) {
			t.Errorf("Test %d: keystream mismatch:\n \t got: %s\n \t want: %s", i, toHex(key[:]), toHex(v.keystream))
		}
	}
}

var hChaCha20Vectors = []struct {
	key, nonce, keystream []byte
}{
	{
		fromHex("0000000000000000000000000000000000000000000000000000000000000000"),
		fromHex("000000000000000000000000000000000000000000000000"),
		fromHex("1140704c328d1d5d0e30086cdf209dbd6a43b8f41518a11cc387b669b2ee6586"),
	},
	{
		fromHex("8000000000000000000000000000000000000000000000000000000000000000"),
		fromHex("000000000000000000000000000000000000000000000000"),
		fromHex("7d266a7fd808cae4c02a0a70dcbfbcc250dae65ce3eae7fc210f54cc8f77df86"),
	},
	{
		fromHex("0000000000000000000000000000000000000000000000000000000000000001"),
		fromHex("000000000000000000000000000000000000000000000002"),
		fromHex("e0c77ff931bb9163a5460c02ac281c2b53d792b1c43fea817e9ad275ae546963"),
	},
	{
		fromHex("000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f"),
		fromHex("000102030405060708090a0b0c0d0e0f1011121314151617"),
		fromHex("51e3ff45a895675c4b33b46c64f4a9ace110d34df6a2ceab486372bacbd3eff6"),
	},
	{
		fromHex("24f11cce8a1b3d61e441561a696c1c1b7e173d084fd4812425435a8896a013dc"),
		fromHex("d9660c5900ae19ddad28d6e06e45fe5e"),
		fromHex("5966b3eec3bff1189f831f06afe4d4e3be97fa9235ec8c20d08acfbbb4e851e3"),
	},
}

func TestVectors(t *testing.T) {
	for i, v := range vectors {
		if len(v.plaintext) == 0 {
			v.plaintext = make([]byte, len(v.ciphertext))
		}

		var nonce [24]byte
		copy(nonce[:], v.nonce)

		aead, err := New(v.key)
		if err != nil {
			t.Error(err)
		}

		dst := aead.Seal(nil, nonce[:], v.plaintext, v.ad)
		if !bytes.Equal(dst, v.ciphertext) {
			t.Errorf("Test %d: ciphertext mismatch:\n \t got: %s\n \t want: %s", i, toHex(dst), toHex(v.ciphertext))
		}
		open, err := aead.Open(nil, nonce[:], dst, v.ad)
		if err != nil {
			t.Error(err)
		}
		if !bytes.Equal(open, v.plaintext) {
			t.Errorf("Test %d: plaintext mismatch:\n \t got: %s\n \t want: %s", i, string(open), string(v.plaintext))
		}
	}
}

var vectors = []struct {
	key, nonce, ad, plaintext, ciphertext []byte
}{
	{
		[]byte{0x80, 0x81, 0x82, 0x83, 0x84, 0x85, 0x86, 0x87, 0x88, 0x89, 0x8a, 0x8b, 0x8c, 0x8d, 0x8e, 0x8f, 0x90, 0x91, 0x92, 0x93, 0x94, 0x95, 0x96, 0x97, 0x98, 0x99, 0x9a, 0x9b, 0x9c, 0x9d, 0x9e, 0x9f},
		[]byte{0x07, 0x00, 0x00, 0x00, 0x40, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47, 0x48, 0x49, 0x4a, 0x4b},
		[]byte{0x50, 0x51, 0x52, 0x53, 0xc0, 0xc1, 0xc2, 0xc3, 0xc4, 0xc5, 0xc6, 0xc7},
		[]byte("Ladies and Gentlemen of the class of '99: If I could offer you only one tip for the future, sunscreen would be it."),
		[]byte{0x45, 0x3c, 0x06, 0x93, 0xa7, 0x40, 0x7f, 0x04, 0xff, 0x4c, 0x56, 0xae, 0xdb, 0x17, 0xa3, 0xc0, 0xa1, 0xaf, 0xff, 0x01, 0x17, 0x49, 0x30, 0xfc, 0x22, 0x28, 0x7c, 0x33, 0xdb, 0xcf, 0x0a, 0xc8, 0xb8, 0x9a, 0xd9, 0x29, 0x53, 0x0a, 0x1b, 0xb3, 0xab, 0x5e, 0x69, 0xf2, 0x4c, 0x7f, 0x60, 0x70, 0xc8, 0xf8, 0x40, 0xc9, 0xab, 0xb4, 0xf6, 0x9f, 0xbf, 0xc8, 0xa7, 0xff, 0x51, 0x26, 0xfa, 0xee, 0xbb, 0xb5, 0x58, 0x05, 0xee, 0x9c, 0x1c, 0xf2, 0xce, 0x5a, 0x57, 0x26, 0x32, 0x87, 0xae, 0xc5, 0x78, 0x0f, 0x04, 0xec, 0x32, 0x4c, 0x35, 0x14, 0x12, 0x2c, 0xfc, 0x32, 0x31, 0xfc, 0x1a, 0x8b, 0x71, 0x8a, 0x62, 0x86, 0x37, 0x30, 0xa2, 0x70, 0x2b, 0xb7, 0x63, 0x66, 0x11, 0x6b, 0xed, 0x09, 0xe0, 0xfd, 0x5c, 0x6d, 0x84, 0xb6, 0xb0, 0xc1, 0xab, 0xaf, 0x24, 0x9d, 0x5d, 0xd0, 0xf7, 0xf5, 0xa7, 0xea},
	},
}
261
crypto/xchacha20poly1305/xchachapoly.go
Normal file
@ -0,0 +1,261 @@
// Package xchacha20poly1305 creates an AEAD using hchacha, chacha, and poly1305
// This allows for randomized nonces to be used in conjunction with chacha.
package xchacha20poly1305

import (
	"crypto/cipher"
	"encoding/binary"
	"errors"
	"fmt"

	"golang.org/x/crypto/chacha20poly1305"
)

// Implements crypto.AEAD
type xchacha20poly1305 struct {
	key [KeySize]byte
}

const (
	// KeySize is the size of the key used by this AEAD, in bytes.
	KeySize = 32
	// NonceSize is the size of the nonce used with this AEAD, in bytes.
	NonceSize = 24
	// TagSize is the size added from poly1305
	TagSize = 16
	// MaxPlaintextSize is the max size that can be passed into a single call of Seal
	MaxPlaintextSize = (1 << 38) - 64
	// MaxCiphertextSize is the max size that can be passed into a single call of Open,
	// this differs from plaintext size due to the tag
	MaxCiphertextSize = (1 << 38) - 48

	// sigma are constants used in xchacha.
	// Unrolled from a slice so that they can be inlined, as slices can't be constants.
	sigma0 = uint32(0x61707865)
	sigma1 = uint32(0x3320646e)
	sigma2 = uint32(0x79622d32)
	sigma3 = uint32(0x6b206574)
)

// New returns a new xchachapoly1305 AEAD
func New(key []byte) (cipher.AEAD, error) {
	if len(key) != KeySize {
		return nil, errors.New("xchacha20poly1305: bad key length")
	}
	ret := new(xchacha20poly1305)
	copy(ret.key[:], key)
	return ret, nil
}

// nolint
func (c *xchacha20poly1305) NonceSize() int {
	return NonceSize
}

// nolint
func (c *xchacha20poly1305) Overhead() int {
	return TagSize
}

func (c *xchacha20poly1305) Seal(dst, nonce, plaintext, additionalData []byte) []byte {
	if len(nonce) != NonceSize {
		panic("xchacha20poly1305: bad nonce length passed to Seal")
	}

	if uint64(len(plaintext)) > MaxPlaintextSize {
		panic("xchacha20poly1305: plaintext too large")
	}

	var subKey [KeySize]byte
	var hNonce [16]byte
	var subNonce [chacha20poly1305.NonceSize]byte
	copy(hNonce[:], nonce[:16])

	HChaCha20(&subKey, &hNonce, &c.key)

	// This can't error because we always provide a correctly sized key
	chacha20poly1305, _ := chacha20poly1305.New(subKey[:])

	copy(subNonce[4:], nonce[16:])

	return chacha20poly1305.Seal(dst, subNonce[:], plaintext, additionalData)
}

func (c *xchacha20poly1305) Open(dst, nonce, ciphertext, additionalData []byte) ([]byte, error) {
	if len(nonce) != NonceSize {
		return nil, fmt.Errorf("xchacha20poly1305: bad nonce length passed to Open")
	}
	if uint64(len(ciphertext)) > MaxCiphertextSize {
		return nil, fmt.Errorf("xchacha20poly1305: ciphertext too large")
	}
	var subKey [KeySize]byte
	var hNonce [16]byte
	var subNonce [chacha20poly1305.NonceSize]byte
	copy(hNonce[:], nonce[:16])

	HChaCha20(&subKey, &hNonce, &c.key)

	// This can't error because we always provide a correctly sized key
	chacha20poly1305, _ := chacha20poly1305.New(subKey[:])

	copy(subNonce[4:], nonce[16:])

	return chacha20poly1305.Open(dst, subNonce[:], ciphertext, additionalData)
}

// HChaCha exported from
// https://github.com/aead/chacha20/blob/8b13a72661dae6e9e5dea04f344f0dc95ea29547/chacha/chacha_generic.go#L194
// TODO: Add support for the different assembly instructions used there.

// The MIT License (MIT)

// Copyright (c) 2016 Andreas Auernhammer

// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:

// The above copyright notice and this permission notice shall be included in all
// copies or substantial portions of the Software.

// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
// SOFTWARE.

// HChaCha20 generates 32 pseudo-random bytes from a 128 bit nonce and a 256 bit secret key.
// It can be used as a key-derivation-function (KDF).
func HChaCha20(out *[32]byte, nonce *[16]byte, key *[32]byte) { hChaCha20Generic(out, nonce, key) }

func hChaCha20Generic(out *[32]byte, nonce *[16]byte, key *[32]byte) {
	v00 := sigma0
	v01 := sigma1
	v02 := sigma2
	v03 := sigma3
	v04 := binary.LittleEndian.Uint32(key[0:])
	v05 := binary.LittleEndian.Uint32(key[4:])
	v06 := binary.LittleEndian.Uint32(key[8:])
	v07 := binary.LittleEndian.Uint32(key[12:])
	v08 := binary.LittleEndian.Uint32(key[16:])
	v09 := binary.LittleEndian.Uint32(key[20:])
	v10 := binary.LittleEndian.Uint32(key[24:])
	v11 := binary.LittleEndian.Uint32(key[28:])
	v12 := binary.LittleEndian.Uint32(nonce[0:])
	v13 := binary.LittleEndian.Uint32(nonce[4:])
	v14 := binary.LittleEndian.Uint32(nonce[8:])
	v15 := binary.LittleEndian.Uint32(nonce[12:])

	for i := 0; i < 20; i += 2 {
		v00 += v04
		v12 ^= v00
		v12 = (v12 << 16) | (v12 >> 16)
		v08 += v12
		v04 ^= v08
		v04 = (v04 << 12) | (v04 >> 20)
		v00 += v04
		v12 ^= v00
		v12 = (v12 << 8) | (v12 >> 24)
		v08 += v12
		v04 ^= v08
		v04 = (v04 << 7) | (v04 >> 25)
		v01 += v05
		v13 ^= v01
		v13 = (v13 << 16) | (v13 >> 16)
		v09 += v13
		v05 ^= v09
		v05 = (v05 << 12) | (v05 >> 20)
		v01 += v05
		v13 ^= v01
		v13 = (v13 << 8) | (v13 >> 24)
		v09 += v13
		v05 ^= v09
		v05 = (v05 << 7) | (v05 >> 25)
		v02 += v06
		v14 ^= v02
		v14 = (v14 << 16) | (v14 >> 16)
		v10 += v14
		v06 ^= v10
		v06 = (v06 << 12) | (v06 >> 20)
		v02 += v06
		v14 ^= v02
		v14 = (v14 << 8) | (v14 >> 24)
		v10 += v14
		v06 ^= v10
		v06 = (v06 << 7) | (v06 >> 25)
		v03 += v07
		v15 ^= v03
		v15 = (v15 << 16) | (v15 >> 16)
		v11 += v15
		v07 ^= v11
		v07 = (v07 << 12) | (v07 >> 20)
		v03 += v07
		v15 ^= v03
		v15 = (v15 << 8) | (v15 >> 24)
		v11 += v15
		v07 ^= v11
		v07 = (v07 << 7) | (v07 >> 25)
		v00 += v05
		v15 ^= v00
		v15 = (v15 << 16) | (v15 >> 16)
		v10 += v15
		v05 ^= v10
		v05 = (v05 << 12) | (v05 >> 20)
		v00 += v05
		v15 ^= v00
		v15 = (v15 << 8) | (v15 >> 24)
		v10 += v15
		v05 ^= v10
		v05 = (v05 << 7) | (v05 >> 25)
		v01 += v06
		v12 ^= v01
		v12 = (v12 << 16) | (v12 >> 16)
		v11 += v12
		v06 ^= v11
		v06 = (v06 << 12) | (v06 >> 20)
		v01 += v06
		v12 ^= v01
		v12 = (v12 << 8) | (v12 >> 24)
		v11 += v12
		v06 ^= v11
		v06 = (v06 << 7) | (v06 >> 25)
		v02 += v07
		v13 ^= v02
		v13 = (v13 << 16) | (v13 >> 16)
		v08 += v13
		v07 ^= v08
		v07 = (v07 << 12) | (v07 >> 20)
		v02 += v07
		v13 ^= v02
		v13 = (v13 << 8) | (v13 >> 24)
		v08 += v13
		v07 ^= v08
		v07 = (v07 << 7) | (v07 >> 25)
		v03 += v04
		v14 ^= v03
		v14 = (v14 << 16) | (v14 >> 16)
		v09 += v14
		v04 ^= v09
		v04 = (v04 << 12) | (v04 >> 20)
		v03 += v04
		v14 ^= v03
		v14 = (v14 << 8) | (v14 >> 24)
		v09 += v14
		v04 ^= v09
		v04 = (v04 << 7) | (v04 >> 25)
	}

	binary.LittleEndian.PutUint32(out[0:], v00)
	binary.LittleEndian.PutUint32(out[4:], v01)
	binary.LittleEndian.PutUint32(out[8:], v02)
	binary.LittleEndian.PutUint32(out[12:], v03)
	binary.LittleEndian.PutUint32(out[16:], v12)
	binary.LittleEndian.PutUint32(out[20:], v13)
	binary.LittleEndian.PutUint32(out[24:], v14)
	binary.LittleEndian.PutUint32(out[28:], v15)
}
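The Seal and Open paths above both split the 24-byte XChaCha20 nonce the same way: the first 16 bytes feed HChaCha20, and the remaining 8 land in bytes 4..11 of the derived 12-byte ChaCha20-Poly1305 nonce (bytes 0..3 stay zero). A minimal stand-alone sketch of that split (`splitXNonce` is illustrative, not part of the package):

```go
package main

import "fmt"

// splitXNonce shows how the 24-byte XChaCha20 nonce is consumed:
// nonce[:16] becomes the HChaCha20 input, nonce[16:] fills
// positions 4..11 of the 12-byte inner nonce.
func splitXNonce(nonce [24]byte) (hNonce [16]byte, subNonce [12]byte) {
	copy(hNonce[:], nonce[:16])
	copy(subNonce[4:], nonce[16:])
	return
}

func main() {
	var n [24]byte
	for i := range n {
		n[i] = byte(i)
	}
	h, s := splitXNonce(n)
	fmt.Println(h[15], s[0], s[4], s[11]) // prints "15 0 16 23"
}
```

Leaving the first four inner-nonce bytes zero matches the XChaCha20 construction: all of the per-message uniqueness is already absorbed by HChaCha20 and the trailing 8 bytes.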
@ -1,54 +1,12 @@
package hkdfchacha20poly1305
package xchacha20poly1305

import (
	"bytes"
	cr "crypto/rand"
	"encoding/hex"
	mr "math/rand"
	"testing"

	"github.com/stretchr/testify/assert"
)

// Test that a test vector we generated is valid. (Ensures backwards
// compatibility)
func TestVector(t *testing.T) {
	key, _ := hex.DecodeString("56f8de45d3c294c7675bcaf457bdd4b71c380b9b2408ce9412b348d0f08b69ee")
	aead, err := New(key[:])
	if err != nil {
		t.Fatal(err)
	}
	cts := []string{"e20a8bf42c535ac30125cfc52031577f0b",
		"657695b37ba30f67b25860d90a6f1d00d8",
		"e9aa6f3b7f625d957fd50f05bcdf20d014",
		"8a00b3b5a6014e0d2033bebc5935086245",
		"aadd74867b923879e6866ea9e03c009039",
		"fc59773c2c864ee3b4cc971876b3c7bed4",
		"caec14e3a9a52ce1a2682c6737defa4752",
		"0b89511ffe490d2049d6950494ee51f919",
		"7de854ea71f43ca35167a07566c769083d",
		"cd477327f4ea4765c71e311c5fec1edbfb"}

	for i := 0; i < 10; i++ {
		ct, _ := hex.DecodeString(cts[i])

		byteArr := []byte{byte(i)}
		nonce := make([]byte, 24)
		nonce[0] = byteArr[0]

		// Test that we get the expected plaintext on open
		plaintext, err := aead.Open(nil, nonce, ct, byteArr)
		if err != nil {
			t.Errorf("%dth Open failed", i)
			continue
		}
		assert.Equal(t, byteArr, plaintext)
		// Test that sealing yields the expected ciphertext
		ciphertext := aead.Seal(nil, nonce, plaintext, byteArr)
		assert.Equal(t, ct, ciphertext)
	}
}

// The following test is taken from
// https://github.com/golang/crypto/blob/master/chacha20poly1305/chacha20poly1305_test.go#L69
// It requires the below copyright notice, where "this source code" refers to the following function.
@ -108,8 +108,11 @@ See below for more details on the message types and how they are used.
### InitChain

- **Request**:
  - `Validators ([]Validator)`: Initial genesis validators
  - `AppStateBytes ([]byte)`: Serialized initial application state
  - `Time (google.protobuf.Timestamp)`: Genesis time.
  - `ChainID (string)`: ID of the blockchain.
  - `ConsensusParams (ConsensusParams)`: Initial consensus-critical parameters.
  - `Validators ([]Validator)`: Initial genesis validators.
  - `AppStateBytes ([]byte)`: Serialized initial application state. Amino-encoded JSON bytes.
- **Response**:
  - `ConsensusParams (ConsensusParams)`: Initial
    consensus-critical parameters.
@ -157,9 +160,8 @@ See below for more details on the message types and how they are used.
- **Request**:
  - `Hash ([]byte)`: The block's hash. This can be derived from the
    block header.
  - `Header (struct{})`: The block header
  - `Validators ([]SigningValidator)`: List of validators in the current validator
    set and whether or not they signed a vote in the LastCommit
  - `Header (struct{})`: The block header.
  - `LastCommitInfo (LastCommitInfo)`: Info about the last commit.
  - `ByzantineValidators ([]Evidence)`: List of evidence of
    validators that acted maliciously
- **Response**:
@ -168,8 +170,9 @@ See below for more details on the message types and how they are used.
  - Signals the beginning of a new block. Called prior to
    any DeliverTxs.
  - The header is expected to at least contain the Height.
  - The `Validators` and `ByzantineValidators` can be used to
    determine rewards and punishments for the validators.
  - The `LastCommitInfo` and `ByzantineValidators` can be used to determine
    rewards and punishments for the validators. NOTE validators here do not
    include pubkeys.

### CheckTx

@ -186,7 +189,6 @@ See below for more details on the message types and how they are used.
  - `GasUsed (int64)`: Amount of gas consumed by transaction.
  - `Tags ([]cmn.KVPair)`: Key-Value tags for filtering and indexing
    transactions (eg. by account).
  - `Fee (cmn.KI64Pair)`: Fee paid for the transaction.
- **Usage**: Validate a mempool transaction, prior to broadcasting
  or proposing. CheckTx should perform stateful but light-weight
  checks of the validity of the transaction (like checking signatures
@ -223,7 +225,6 @@ See below for more details on the message types and how they are used.
  - `GasUsed (int64)`: Amount of gas consumed by transaction.
  - `Tags ([]cmn.KVPair)`: Key-Value tags for filtering and indexing
    transactions (eg. by account).
  - `Fee (cmn.KI64Pair)`: Fee paid for the transaction.
- **Usage**:
  - Deliver a transaction to be executed in full by the application.
    If the transaction is valid, returns CodeType.OK.
@ -265,7 +266,8 @@ See below for more details on the message types and how they are used.
- **Fields**:
  - `ChainID (string)`: ID of the blockchain
  - `Height (int64)`: Height of the block in the chain
  - `Time (int64)`: Unix time of the block
  - `Time (google.protobuf.Timestamp)`: Time of the block. It is the proposer's
    local time when block was created.
  - `NumTxs (int32)`: Number of transactions in the block
  - `TotalTxs (int64)`: Total number of transactions in the blockchain until
    now
@ -320,6 +322,14 @@ See below for more details on the message types and how they are used.
    "duplicate/vote".
  - `Validator (Validator)`: The offending validator
  - `Height (int64)`: Height when the offense was committed
  - `Time (int64)`: Unix time of the block at height `Height`
  - `Time (google.protobuf.Timestamp)`: Time of the block at height `Height`.
    It is the proposer's local time when block was created.
  - `TotalVotingPower (int64)`: Total voting power of the validator set at
    height `Height`

### LastCommitInfo

- **Fields**:
  - `CommitRound (int32)`: Commit round.
  - `Validators ([]SigningValidator)`: List of validators in the current
    validator set and whether or not they signed a vote.
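The `LastCommitInfo` fields above could map to Go structs roughly as follows; these are illustrative shapes only, and the canonical definitions are the generated protobuf types in the `abci/types` package:

```go
package main

import "fmt"

// Validator is a simplified stand-in for the ABCI validator message.
type Validator struct {
	Address []byte
	Power   int64
}

// SigningValidator pairs a validator with whether it signed the last block.
type SigningValidator struct {
	Validator       Validator
	SignedLastBlock bool
}

// LastCommitInfo mirrors the field list documented above.
type LastCommitInfo struct {
	CommitRound int32
	Validators  []SigningValidator
}

func main() {
	info := LastCommitInfo{
		CommitRound: 0,
		Validators: []SigningValidator{
			{Validator: Validator{Address: []byte{0x01}, Power: 10}, SignedLastBlock: true},
		},
	}
	fmt.Println(info.CommitRound, len(info.Validators)) // prints "0 1"
}
```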
@ -365,14 +365,14 @@ ResponseBeginBlock requestBeginBlock(RequestBeginBlock req) {

### EndBlock

The EndBlock request can be used to run some code at the end of every
block. Additionally, the response may contain a list of validators,
which can be used to update the validator set. To add a new validator or
update an existing one, simply include them in the list returned in the
EndBlock response. To remove one, include it in the list with a `power`
equal to `0`. Tendermint core will take care of updating the validator
set. Note the change in voting power must be strictly less than 1/3 per
block if you want a light client to be able to prove the transition
The EndBlock request can be used to run some code at the end of every block.
Additionally, the response may contain a list of validators, which can be used
to update the validator set. To add a new validator or update an existing one,
simply include them in the list returned in the EndBlock response. To remove
one, include it in the list with a `power` equal to `0`. Validator's `address`
field can be left empty. Tendermint core will take care of updating the
validator set. Note the change in voting power must be strictly less than 1/3
per block if you want a light client to be able to prove the transition
externally. See the [light client
docs](https://godoc.org/github.com/tendermint/tendermint/lite#hdr-How_We_Track_Validators)
for details on how it tracks validators.
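A sketch of the update list an application might return from EndBlock, using a hypothetical `ValidatorUpdate` shape (the real ABCI type differs); note how a power of `0` marks a validator for removal:

```go
package main

import "fmt"

// ValidatorUpdate is a stand-in for the ABCI validator update type;
// the real field names live in the abci/types package.
type ValidatorUpdate struct {
	PubKey []byte
	Power  int64
}

// endBlockUpdates builds the list an app could place in its EndBlock
// response: a positive power adds or updates, a zero power removes.
func endBlockUpdates() []ValidatorUpdate {
	return []ValidatorUpdate{
		{PubKey: []byte("val-a"), Power: 10}, // add or update val-a
		{PubKey: []byte("val-b"), Power: 0},  // remove val-b
	}
}

func main() {
	for _, u := range endBlockUpdates() {
		fmt.Println(string(u.PubKey), u.Power)
	}
	// prints "val-a 10" then "val-b 0"
}
```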
@ -93,9 +93,7 @@ like:
    "jsonrpc": "2.0",
    "id": "",
    "result": {
      "check_tx": {
        "fee": {}
      },
      "check_tx": {},
      "deliver_tx": {
        "tags": [
          {
@ -106,8 +104,7 @@ like:
            "key": "YXBwLmtleQ==",
            "value": "YWJjZA=="
          }
        ],
        "fee": {}
        ]
      },
      "hash": "9DF66553F98DE3C26E3C3317A3E4CED54F714E39",
      "height": 14
@ -219,13 +216,10 @@ the number `1`. If instead, we try to send a `5`, we get an error:
    "jsonrpc": "2.0",
    "id": "",
    "result": {
      "check_tx": {
        "fee": {}
      },
      "check_tx": {},
      "deliver_tx": {
        "code": 2,
        "log": "Invalid nonce. Expected 1, got 5",
        "fee": {}
        "log": "Invalid nonce. Expected 1, got 5"
      },
      "hash": "33B93DFF98749B0D6996A70F64071347060DC19C",
      "height": 34
@ -241,12 +235,8 @@ But if we send a `1`, it works again:
    "jsonrpc": "2.0",
    "id": "",
    "result": {
      "check_tx": {
        "fee": {}
      },
      "deliver_tx": {
        "fee": {}
      },
      "check_tx": {},
      "deliver_tx": {},
      "hash": "F17854A977F6FA7EEA1BD758E296710B86F72F3D",
      "height": 60
    }
113
docs/architecture/adr-012-peer-transport.md
Normal file
113
docs/architecture/adr-012-peer-transport.md
Normal file
@ -0,0 +1,113 @@
|
||||
# ADR 012: PeerTransport
|
||||
|
||||
## Context
|
||||
|
||||
One of the more apparent problems with the current architecture in the p2p
|
||||
package is that there is no clear separation of concerns between different
|
||||
components. Most notably the `Switch` is currently doing physical connection
|
||||
handling. An artifact is the dependency of the Switch on
|
||||
`[config.P2PConfig`](https://github.com/tendermint/tendermint/blob/05a76fb517f50da27b4bfcdc7b4cf185fc61eff6/config/config.go#L272-L339).
|
||||
|
||||
Addresses:
|
||||
* [#2046](https://github.com/tendermint/tendermint/issues/2046)
|
||||
* [#2047](https://github.com/tendermint/tendermint/issues/2047)
|
||||
|
||||
First iteraton in [#2067](https://github.com/tendermint/tendermint/issues/2067)
|
||||
|
||||
## Decision
|
||||
|
||||
Transport concerns will be handled by a new component (`PeerTransport`) which
|
||||
will provide Peers at its boundary to the caller. In turn `Switch` will use
|
||||
this new component accept new `Peer`s and dial them based on `NetAddress`.
|
||||
|
||||
### PeerTransport
|
||||
|
||||
Responsible for emitting and connecting to Peers. The implementation of `Peer`
|
||||
is left to the transport, which implies that the chosen transport dictates the
|
||||
characteristics of the implementation handed back to the `Switch`. Each
|
||||
transport implementation is responsible to filter establishing peers specific
|
||||
to its domain, for the default multiplexed implementation the following will
|
||||
apply:
|
||||
|
||||
* connections from our own node
|
||||
* handshake fails
|
||||
* upgrade to secret connection fails
|
||||
* prevent duplicate ip
|
||||
* prevent duplicate id
|
||||
* nodeinfo incompatibility
|
||||
|
||||
|
||||
``` go
|
||||
// PeerTransport proxies incoming and outgoing peer connections.
|
||||
type PeerTransport interface {
|
||||
// Accept returns a newly connected Peer.
|
||||
Accept() (Peer, error)
|
||||
|
||||
// Dial connects to a Peer.
|
||||
Dial(NetAddress) (Peer, error)
|
||||
}
|
||||
|
||||
// EXAMPLE OF DEFAULT IMPLEMENTATION
|
||||
|
||||
// multiplexTransport accepts tcp connections and upgrades to multiplexted
|
||||
// peers.
|
||||
type multiplexTransport struct {
|
||||
listener net.Listener
|
||||
|
||||
acceptc chan accept
|
||||
closec <-chan struct{}
|
||||
listenc <-chan struct{}
|
||||
|
||||
dialTimeout time.Duration
|
||||
handshakeTimeout time.Duration
|
||||
nodeAddr NetAddress
|
||||
nodeInfo NodeInfo
|
||||
nodeKey NodeKey
|
||||
|
||||
// TODO(xla): Remove when MConnection is refactored into mPeer.
|
||||
mConfig conn.MConnConfig
|
||||
}
|
||||
|
||||
var _ PeerTransport = (*multiplexTransport)(nil)
|
||||
|
||||
// NewMTransport returns network connected multiplexed peers.
|
||||
func NewMTransport(
|
||||
nodeAddr NetAddress,
|
||||
nodeInfo NodeInfo,
|
||||
nodeKey NodeKey,
|
||||
) *multiplexTransport
|
||||
```
|
||||
|
||||
### Switch

From now on the Switch will depend on a fully set up `PeerTransport` to
retrieve/reach out to its peers. As the more low-level concerns are pushed to
the transport, we can omit passing the `config.P2PConfig` to the Switch.

``` go
func NewSwitch(transport PeerTransport, opts ...SwitchOption) *Switch
```

## Status

In Review.

## Consequences

### Positive

* free Switch from transport concerns - simpler implementation
* pluggable transport implementation - simpler test setup
* remove Switch dependency on P2PConfig - easier to test

### Negative

* more setup for tests which depend on Switches

### Neutral

* multiplexed will be the default implementation

[0] These guards could potentially be extended to be pluggable much like
middlewares to express different concerns required by differently configured
environments.

80
docs/architecture/adr-015-crypto-encoding.md
Normal file
@ -0,0 +1,80 @@
# ADR 015: Crypto encoding

## Context

We must standardize our method for encoding public keys and signatures on chain.
Currently we amino encode the public keys and signatures.
The reason we are using amino here is primarily due to ease of support in
parsing for other languages.
We don't need its upgradability properties in cryptosystems, as a change in
the crypto that requires adapting the encoding likely warrants being deemed
a new cryptosystem.
(I.e. using new public parameters.)

## Decision

### Public keys

For public keys, we will continue to use amino encoding on the canonical
representation of the pubkey.
(Canonical as defined by the cryptosystem itself.)
This has two significant drawbacks.
Amino encoding is less space-efficient, due to requiring support for upgradability.
Amino encoding support requires forking protobuf and adding this new interface support
option in the language of choice.

The reason for continuing to use amino, however, is that people can create code
more easily in languages that already have an up-to-date amino library.
It is possible that this will change in the future, if it is deemed that
requiring amino for interacting with tendermint cryptography is unnecessary.

The arguments for space efficiency here are refuted on the basis that there are
far more egregious wastages of space in the SDK.
The space requirement of the public keys doesn't cause many problems beyond
increasing the space attached to each validator / account.

The alternative to using amino here would be for us to create an enum type.
Switching to just an enum type is worthy of investigation post-launch.
For reference, part of amino encoding interfaces is basically a 4-byte enum
type definition.
Enum types would just change those 4 bytes to be a varuint, and would remove
the protobuf overhead, but it would be hard to integrate into the existing API.

### Signatures

Signatures should be switched to be `[]byte`.
Spatial efficiency in the signatures is quite important,
as it directly affects the gas cost of every transaction,
and the throughput of the chain.
Signatures don't need to encode what type they are for (unlike public keys),
since the public key must already be known.
Therefore we can validate the signature without needing to encode its type.

When placed in state, signatures will still be amino encoded, but it will be the
primitive type `[]byte` getting encoded.

#### Ed25519
Use the canonical representation for signatures.

#### Secp256k1
There isn't a clear canonical representation here.
Signatures have two elements `r,s`.
We should encode these bytes as `r || s`, where `r` and `s` are both exactly
32 bytes long.
This is basically Ethereum's encoding, but without the leading recovery bit.

## Status

Proposed. The signature section seems to be agreed upon for the most part.
Needs a decision on enum types.

## Consequences

### Positive
* More space-efficient signatures

### Negative
* We have an amino dependency for cryptography.

### Neutral
* No change to public keys

@ -52,7 +52,8 @@ type Header struct {
	// application
	ResultsHash         []byte // SimpleMerkle of []abci.Result from prevBlock
	AppHash             []byte // Arbitrary state digest
	ValidatorsHash      []byte // SimpleMerkle of the ValidatorSet
	ValidatorsHash      []byte // SimpleMerkle of the current ValidatorSet
	NextValidatorsHash  []byte // SimpleMerkle of the next ValidatorSet
	ConsensusParamsHash []byte // SimpleMerkle of the ConsensusParams

	// consensus
@ -160,9 +161,12 @@ We refer to certain globally available objects:
`block` is the block under consideration,
`prevBlock` is the `block` at the previous height,
and `state` keeps track of the validator set, the consensus parameters
and other results from the application.
and other results from the application. At the point when `block` is the block under consideration,
the current version of the `state` corresponds to the state
after executing transactions from the `prevBlock`.
Elements of an object are accessed as expected,
ie. `block.Header`. See [here](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/state.md) for the definition of `state`.
ie. `block.Header`.
See [here](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/state.md) for the definition of `state`.
### Header

@ -278,7 +282,14 @@ block.ValidatorsHash == SimpleMerkleRoot(state.Validators)

Simple Merkle root of the current validator set that is committing the block.
This can be used to validate the `LastCommit` included in the next block.
May be updated by the application.

### NextValidatorsHash

```go
block.NextValidatorsHash == SimpleMerkleRoot(state.NextValidators)
```
Simple Merkle root of the next validator set that will be the validator set that commits the next block.
Modifications to the validator set are defined by the application.

### ConsensusParamsHash

@ -407,25 +418,20 @@ set (TODO). Execute is defined as:

```go
Execute(s State, app ABCIApp, block Block) State {
	TODO: just spell out ApplyBlock here
	and remove ABCIResponses struct.
	abciResponses := app.ApplyBlock(block)
	// Function ApplyBlock executes the block of transactions against the app and returns the new root hash of the app state,
	// modifications to the validator set and the changes of the consensus parameters.
	AppHash, ValidatorChanges, ConsensusParamChanges := app.ApplyBlock(block)

	return State{
		LastResults:     abciResponses.DeliverTxResults,
		AppHash:         abciResponses.AppHash,
		Validators:      UpdateValidators(state.Validators, abciResponses.ValidatorChanges),
		AppHash:         AppHash,
		LastValidators:  state.Validators,
		ConsensusParams: UpdateConsensusParams(state.ConsensusParams, abci.Responses.ConsensusParamChanges),
		Validators:      state.NextValidators,
		NextValidators:  UpdateValidators(state.NextValidators, ValidatorChanges),
		ConsensusParams: UpdateConsensusParams(state.ConsensusParams, ConsensusParamChanges),
	}
}

type ABCIResponses struct {
	DeliverTxResults      []Result
	ValidatorChanges      []Validator
	ConsensusParamChanges ConsensusParams
	AppHash               []byte
}
```

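The validator rotation in `Execute` can be sketched standalone. This is a toy model (types reduced to strings, the update rule a placeholder), showing the consequence of the `NextValidators` field: a change produced by the application at height H only becomes the voting set at height H+2.

```go
package main

import "fmt"

// State is a reduced sketch of the spec's State: validator sets are
// represented as plain name slices (illustrative only).
type State struct {
	LastValidators []string
	Validators     []string
	NextValidators []string
}

// applyChanges is a placeholder for UpdateValidators: it just appends.
func applyChanges(vals, changes []string) []string {
	return append(append([]string{}, vals...), changes...)
}

// Execute rotates the sets: "next" becomes current, and application
// changes land in the new "next" set.
func Execute(s State, changes []string) State {
	return State{
		LastValidators: s.Validators,
		Validators:     s.NextValidators,
		NextValidators: applyChanges(s.NextValidators, changes),
	}
}

func main() {
	s := State{Validators: []string{"v1"}, NextValidators: []string{"v1"}}
	s = Execute(s, []string{"v2"}) // app adds v2 at height H
	s = Execute(s, nil)            // v2 votes starting at height H+2
	fmt.Println(s.Validators)      // [v1 v2]
}
```

This two-step delay is what lets `NextValidatorsHash` in a block header commit to the set that will sign the following block.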
@ -3,7 +3,7 @@

## State

The state contains information whose cryptographic digest is included in block headers, and thus is
necessary for validating new blocks. For instance, the set of validators and the results of
necessary for validating new blocks. For instance, the validator set and the results of
transactions are never included in blocks, but their Merkle roots are - the state keeps track of them.

Note that the `State` object itself is an implementation detail, since it is never

@ -18,8 +18,9 @@ type State struct {
	LastResults []Result
	AppHash     []byte

	Validators     []Validator
	LastValidators []Validator
	Validators     []Validator
	NextValidators []Validator

	ConsensusParams ConsensusParams
}

@ -27,27 +27,24 @@ Both handshakes have configurable timeouts (they should complete quickly).

### Authenticated Encryption Handshake

Tendermint implements the Station-to-Station protocol
using ED25519 keys for Diffie-Helman key-exchange and NACL SecretBox for encryption.
using X25519 keys for Diffie-Hellman key-exchange and chacha20poly1305 for encryption.
It goes as follows:
- generate an emphemeral ED25519 keypair
- generate an ephemeral X25519 keypair
- send the ephemeral public key to the peer
- wait to receive the peer's ephemeral public key
- compute the Diffie-Hellman shared secret using the peer's ephemeral public key and our ephemeral private key
- generate two nonces to use for encryption (sending and receiving) as follows:
  - sort the ephemeral public keys in ascending order and concatenate them
  - RIPEMD160 the result
  - append 4 empty bytes (extending the hash to 24-bytes)
  - the result is nonce1
  - flip the last bit of nonce1 to get nonce2
  - if we had the smaller ephemeral pubkey, use nonce1 for receiving, nonce2 for sending;
    else the opposite
- all communications from now on are encrypted using the shared secret and the nonces, where each nonce
  increments by 2 every time it is used
- generate two keys to use for encryption (sending and receiving) and a challenge for authentication as follows:
  - create a hkdf-sha256 instance with the key being the diffie hellman shared secret, and info parameter as
    `TENDERMINT_SECRET_CONNECTION_KEY_AND_CHALLENGE_GEN`
  - get 96 bytes of output from hkdf-sha256
  - if we had the smaller ephemeral pubkey, use the first 32 bytes for the key for receiving, the second 32 bytes for sending; else the opposite
  - use the last 32 bytes of output for the challenge
- use a separate nonce for receiving and sending. Both nonces start at 0, and should support the full 96-bit nonce range
- all communications from now on are encrypted in 1024-byte frames,
  using the respective secret and nonce. Each nonce is incremented by one after each use.
- we now have an encrypted channel, but still need to authenticate
- generate a common challenge to sign:
  - SHA256 of the sorted (lowest first) and concatenated ephemeral pub keys
- sign the common challenge with our persistent private key
- send the go-wire encoded persistent pubkey and signature to the peer
- sign the common challenge obtained from the hkdf with our persistent private key
- send the amino encoded persistent pubkey and signature to the peer
- wait to receive the persistent public key and signature from the peer
- verify the signature on the challenge using the peer's persistent public key

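The key-and-challenge derivation step above can be sketched with a minimal HKDF-SHA256 (RFC 5869) built from the standard library. This is a sketch under the assumption of an empty (zero-filled) salt, which is what `x/crypto/hkdf` uses when no salt is given; it is not the actual tendermint code.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"fmt"
)

// hkdfSHA256 derives n bytes from secret and info per RFC 5869:
// extract with a zero salt, then expand T(i) = HMAC(PRK, T(i-1)||info||i).
func hkdfSHA256(secret, info []byte, n int) []byte {
	// Extract: PRK = HMAC-SHA256(salt=zeros, IKM=secret)
	ext := hmac.New(sha256.New, make([]byte, sha256.Size))
	ext.Write(secret)
	prk := ext.Sum(nil)

	// Expand until we have n bytes of output keying material.
	var out, t []byte
	for i := byte(1); len(out) < n; i++ {
		h := hmac.New(sha256.New, prk)
		h.Write(t)
		h.Write(info)
		h.Write([]byte{i})
		t = h.Sum(nil)
		out = append(out, t...)
	}
	return out[:n]
}

func main() {
	secret := []byte("example diffie-hellman shared secret") // placeholder input
	out := hkdfSHA256(secret,
		[]byte("TENDERMINT_SECRET_CONNECTION_KEY_AND_CHALLENGE_GEN"), 96)
	recvKey, sendKey, challenge := out[:32], out[32:64], out[64:]
	fmt.Println(len(recvKey), len(sendKey), len(challenge)) // 32 32 32
}
```

Splitting the 96-byte output into two 32-byte keys plus a 32-byte challenge mirrors the handshake steps above; which half is the receive key depends on which side had the smaller ephemeral pubkey.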
@ -12,7 +12,8 @@ them.
Some peers can be marked as `private`, which means
we will not put them in the address book or gossip them to others.

All peers except private peers are tracked using the address book.
All peers except private peers and peers coming from them are tracked using the
address book.

## Discovery

@ -44,11 +44,6 @@ func (evR *EvidenceReactor) SetLogger(l log.Logger) {
	evR.evpool.SetLogger(l)
}

// OnStart implements cmn.Service
func (evR *EvidenceReactor) OnStart() error {
	return evR.BaseReactor.OnStart()
}

// GetChannels implements Reactor.
// It returns the list of channels for this reactor.
func (evR *EvidenceReactor) GetChannels() []*p2p.ChannelDescriptor {
@ -12,14 +12,4 @@ func init() {
	RegisterEvidenceMessages(cdc)
	cryptoAmino.RegisterAmino(cdc)
	types.RegisterEvidences(cdc)
	RegisterMockEvidences(cdc) // For testing
}

//-------------------------------------------

func RegisterMockEvidences(cdc *amino.Codec) {
	cdc.RegisterConcrete(types.MockGoodEvidence{},
		"tendermint/MockGoodEvidence", nil)
	cdc.RegisterConcrete(types.MockBadEvidence{},
		"tendermint/MockBadEvidence", nil)
}

@ -1,6 +1,7 @@
package autofile

import (
	"io/ioutil"
	"os"
	"sync/atomic"
	"syscall"
@ -13,10 +14,14 @@ import (

func TestSIGHUP(t *testing.T) {

	// First, create an AutoFile writing to a tempfile dir
	file, name := cmn.Tempfile("sighup_test")
	if err := file.Close(); err != nil {
	file, err := ioutil.TempFile("", "sighup_test")
	if err != nil {
		t.Fatalf("Error creating tempfile: %v", err)
	}
	if err := file.Close(); err != nil {
		t.Fatalf("Error closing tempfile: %v", err)
	}
	name := file.Name()
	// Here is the actual AutoFile
	af, err := OpenAutoFile(name)
	if err != nil {
@ -147,14 +147,13 @@ func TestSearch(t *testing.T) {

	// Now search for each number
	for i := 0; i < 100; i++ {
		t.Log("Testing for i", i)
		gr, match, err := g.Search("INFO", makeSearchFunc(i))
		require.NoError(t, err, "Failed to search for line")
		assert.True(t, match, "Expected Search to return exact match")
		require.NoError(t, err, "Failed to search for line, tc #%d", i)
		assert.True(t, match, "Expected Search to return exact match, tc #%d", i)
		line, err := gr.ReadLine()
		require.NoError(t, err, "Failed to read line after search")
		require.NoError(t, err, "Failed to read line after search, tc #%d", i)
		if !strings.HasPrefix(line, fmt.Sprintf("INFO %v ", i)) {
			t.Fatal("Failed to get correct line")
			t.Fatalf("Failed to get correct line, tc #%d", i)
		}
		// Make sure we can continue to read from there.
		cur := i + 1
@ -165,16 +164,16 @@ func TestSearch(t *testing.T) {
				// OK!
				break
			} else {
				t.Fatal("Got EOF after the wrong INFO #")
				t.Fatalf("Got EOF after the wrong INFO #, tc #%d", i)
			}
		} else if err != nil {
			t.Fatal("Error reading line", err)
			t.Fatalf("Error reading line, tc #%d, err:\n%s", i, err)
		}
		if !strings.HasPrefix(line, "INFO ") {
			continue
		}
		if !strings.HasPrefix(line, fmt.Sprintf("INFO %v ", cur)) {
			t.Fatalf("Unexpected INFO #. Expected %v got:\n%v", cur, line)
			t.Fatalf("Unexpected INFO #. Expected %v got:\n%v, tc #%d", cur, line, i)
		}
		cur++
	}

@ -8,13 +8,15 @@ import (
	"sync"
)

// BitArray is a thread-safe implementation of a bit array.
type BitArray struct {
	mtx   sync.Mutex
	Bits  int      `json:"bits"`  // NOTE: persisted via reflect, must be exported
	Elems []uint64 `json:"elems"` // NOTE: persisted via reflect, must be exported
}

// There is no BitArray whose Size is 0. Use nil instead.
// NewBitArray returns a new bit array.
// It returns nil if the number of bits is zero.
func NewBitArray(bits int) *BitArray {
	if bits <= 0 {
		return nil
@ -25,6 +27,7 @@ func NewBitArray(bits int) *BitArray {
	}
}

// Size returns the number of bits in the bitarray
func (bA *BitArray) Size() int {
	if bA == nil {
		return 0
@ -32,7 +35,8 @@ func (bA *BitArray) Size() int {
	return bA.Bits
}

// NOTE: behavior is undefined if i >= bA.Bits
// GetIndex returns the bit at index i within the bit array.
// The behavior is undefined if i >= bA.Bits
func (bA *BitArray) GetIndex(i int) bool {
	if bA == nil {
		return false
@ -49,7 +53,8 @@ func (bA *BitArray) getIndex(i int) bool {
	return bA.Elems[i/64]&(uint64(1)<<uint(i%64)) > 0
}

// NOTE: behavior is undefined if i >= bA.Bits
// SetIndex sets the bit at index i within the bit array.
// The behavior is undefined if i >= bA.Bits
func (bA *BitArray) SetIndex(i int, v bool) bool {
	if bA == nil {
		return false
@ -71,6 +76,7 @@ func (bA *BitArray) setIndex(i int, v bool) bool {
	return true
}

// Copy returns a copy of the provided bit array.
func (bA *BitArray) Copy() *BitArray {
	if bA == nil {
		return nil
@ -98,7 +104,9 @@ func (bA *BitArray) copyBits(bits int) *BitArray {
	}
}

// Returns a BitArray of larger bits size.
// Or returns a bit array resulting from a bitwise OR of the two bit arrays.
// If the two bit-arrays have different lengths, Or right-pads the smaller of the two bit-arrays with zeroes.
// Thus the size of the return value is the maximum of the two provided bit arrays.
func (bA *BitArray) Or(o *BitArray) *BitArray {
	if bA == nil && o == nil {
		return nil
@ -110,7 +118,11 @@ func (bA *BitArray) Or(o *BitArray) *BitArray {
		return bA.Copy()
	}
	bA.mtx.Lock()
	defer bA.mtx.Unlock()
	o.mtx.Lock()
	defer func() {
		bA.mtx.Unlock()
		o.mtx.Unlock()
	}()
	c := bA.copyBits(MaxInt(bA.Bits, o.Bits))
	for i := 0; i < len(c.Elems); i++ {
		c.Elems[i] |= o.Elems[i]
@ -118,13 +130,19 @@ func (bA *BitArray) Or(o *BitArray) *BitArray {
	return c
}

// Returns a BitArray of smaller bit size.
// And returns a bit array resulting from a bitwise AND of the two bit arrays.
// If the two bit-arrays have different lengths, this truncates the larger of the two bit-arrays from the right.
// Thus the size of the return value is the minimum of the two provided bit arrays.
func (bA *BitArray) And(o *BitArray) *BitArray {
	if bA == nil || o == nil {
		return nil
	}
	bA.mtx.Lock()
	defer bA.mtx.Unlock()
	o.mtx.Lock()
	defer func() {
		bA.mtx.Unlock()
		o.mtx.Unlock()
	}()
	return bA.and(o)
}

@ -136,12 +154,17 @@ func (bA *BitArray) and(o *BitArray) *BitArray {
	return c
}

// Not returns a bit array resulting from a bitwise Not of the provided bit array.
func (bA *BitArray) Not() *BitArray {
	if bA == nil {
		return nil // Degenerate
	}
	bA.mtx.Lock()
	defer bA.mtx.Unlock()
	return bA.not()
}

func (bA *BitArray) not() *BitArray {
	c := bA.copy()
	for i := 0; i < len(c.Elems); i++ {
		c.Elems[i] = ^c.Elems[i]
@ -149,13 +172,20 @@ func (bA *BitArray) Not() *BitArray {
	return c
}

// Sub subtracts the two bit-arrays bitwise, without carrying the bits.
// This is essentially bA.And(o.Not()).
// If bA is longer than o, o is right padded with zeroes.
func (bA *BitArray) Sub(o *BitArray) *BitArray {
	if bA == nil || o == nil {
		// TODO: Decide if we should do 1's complement here?
		return nil
	}
	bA.mtx.Lock()
	defer bA.mtx.Unlock()
	o.mtx.Lock()
	defer func() {
		bA.mtx.Unlock()
		o.mtx.Unlock()
	}()
	if bA.Bits > o.Bits {
		c := bA.copy()
		for i := 0; i < len(o.Elems)-1; i++ {
@ -164,15 +194,15 @@ func (bA *BitArray) Sub(o *BitArray) *BitArray {
		i := len(o.Elems) - 1
		if i >= 0 {
			for idx := i * 64; idx < o.Bits; idx++ {
				// NOTE: each individual GetIndex() call to o is safe.
				c.setIndex(idx, c.getIndex(idx) && !o.GetIndex(idx))
				c.setIndex(idx, c.getIndex(idx) && !o.getIndex(idx))
			}
		}
		return c
	}
	return bA.and(o.Not()) // Note degenerate case where o == nil
	return bA.and(o.not()) // Note degenerate case where o == nil
}

// IsEmpty returns true iff all bits in the bit array are 0
func (bA *BitArray) IsEmpty() bool {
	if bA == nil {
		return true // should this be opposite?
@ -187,6 +217,7 @@ func (bA *BitArray) IsEmpty() bool {
	return true
}

// IsFull returns true iff all bits in the bit array are 1.
func (bA *BitArray) IsFull() bool {
	if bA == nil {
		return true
@ -207,6 +238,8 @@ func (bA *BitArray) IsFull() bool {
	return (lastElem+1)&((uint64(1)<<uint(lastElemBits))-1) == 0
}

// PickRandom returns a random index in the bit array, and its value.
// It uses the global randomness in `random.go` to get this index.
func (bA *BitArray) PickRandom() (int, bool) {
	if bA == nil {
		return 0, false
@ -260,6 +293,8 @@ func (bA *BitArray) String() string {
	return bA.StringIndented("")
}

// StringIndented returns the same thing as String(), but applies the indent
// at every 10th bit, and twice at every 50th bit.
func (bA *BitArray) StringIndented(indent string) string {
	if bA == nil {
		return "nil-BitArray"
@ -295,6 +330,7 @@ func (bA *BitArray) stringIndented(indent string) string {
	return fmt.Sprintf("BA{%v:%v}", bA.Bits, strings.Join(lines, indent))
}

// Bytes returns the byte representation of the bits within the bitarray.
func (bA *BitArray) Bytes() []byte {
	bA.mtx.Lock()
	defer bA.mtx.Unlock()
@ -309,15 +345,18 @@ func (bA *BitArray) Bytes() []byte {
	return bytes
}

// NOTE: other bitarray o is not locked when reading,
// so if necessary, caller must copy or lock o prior to calling Update.
// If bA is nil, does nothing.
// Update sets the bA's bits to be that of the other bit array.
// The copying begins from the beginning of both bit arrays.
func (bA *BitArray) Update(o *BitArray) {
	if bA == nil || o == nil {
		return
	}
	bA.mtx.Lock()
	defer bA.mtx.Unlock()
	o.mtx.Lock()
	defer func() {
		bA.mtx.Unlock()
		o.mtx.Unlock()
	}()

	copy(bA.Elems, o.Elems)
}

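On single 64-bit words, the Or/And/Sub conventions documented above reduce to plain bitwise operators; in particular, for equal-length arrays Sub is AND-NOT (no borrow or carry), matching `bA.And(o.Not())`. A standalone sketch:

```go
package main

import "fmt"

// Demonstrates the word-level semantics behind BitArray's Or, And and Sub
// on a single uint64 element (illustrative, not the library code).
func main() {
	a := uint64(0b1011)
	b := uint64(0b0110)
	or := a | b
	and := a & b
	sub := a &^ b // Go's AND NOT operator, i.e. a & (^b)
	fmt.Printf("%04b %04b %04b\n", or, and, sub) // 1111 0010 1001
}
```

The length rules in the comments (Or pads to the longer array, And truncates to the shorter) only matter when the two arrays differ in size; element-wise, each word is combined exactly as shown here.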
@ -8,7 +8,6 @@ import (
	"os"
	"os/exec"
	"os/signal"
	"path/filepath"
	"strings"
	"syscall"
)
@ -124,60 +123,6 @@ func MustWriteFile(filePath string, contents []byte, mode os.FileMode) {
	}
}

// WriteFileAtomic creates a temporary file with data and the perm given and
// swaps it atomically with filename if successful.
func WriteFileAtomic(filename string, data []byte, perm os.FileMode) error {
	var (
		dir      = filepath.Dir(filename)
		tempFile = filepath.Join(dir, "write-file-atomic-"+RandStr(32))
		// Override in case it does exist, create in case it doesn't and force kernel
		// flush, which still leaves the potential of lingering disk cache.
		flag = os.O_WRONLY | os.O_CREATE | os.O_SYNC | os.O_TRUNC
	)

	f, err := os.OpenFile(tempFile, flag, perm)
	if err != nil {
		return err
	}
	// Clean up in any case. Defer stacking order is last-in-first-out.
	defer os.Remove(f.Name())
	defer f.Close()

	if n, err := f.Write(data); err != nil {
		return err
	} else if n < len(data) {
		return io.ErrShortWrite
	}
	// Close the file before renaming it, otherwise it will cause "The process
	// cannot access the file because it is being used by another process." on windows.
	f.Close()

	return os.Rename(f.Name(), filename)
}

//--------------------------------------------------------------------------------

func Tempfile(prefix string) (*os.File, string) {
	file, err := ioutil.TempFile("", prefix)
	if err != nil {
		PanicCrisis(err)
	}
	return file, file.Name()
}

func Tempdir(prefix string) (*os.File, string) {
	tempDir := os.TempDir() + "/" + prefix + RandStr(12)
	err := EnsureDir(tempDir, 0700)
	if err != nil {
		panic(Fmt("Error creating temp dir: %v", err))
	}
	dir, err := os.Open(tempDir)
	if err != nil {
		panic(Fmt("Error opening temp dir: %v", err))
	}
	return dir, tempDir
}

//--------------------------------------------------------------------------------

func Prompt(prompt string, defaultValue string) (string, error) {

@ -1,52 +1,10 @@
package common

import (
	"bytes"
	"io/ioutil"
	"os"
	"testing"
)

func TestWriteFileAtomic(t *testing.T) {
	var (
		data             = []byte(RandStr(RandIntn(2048)))
		old              = RandBytes(RandIntn(2048))
		perm os.FileMode = 0600
	)

	f, err := ioutil.TempFile("/tmp", "write-atomic-test-")
	if err != nil {
		t.Fatal(err)
	}
	defer os.Remove(f.Name())

	if err = ioutil.WriteFile(f.Name(), old, 0664); err != nil {
		t.Fatal(err)
	}

	if err = WriteFileAtomic(f.Name(), data, perm); err != nil {
		t.Fatal(err)
	}

	rData, err := ioutil.ReadFile(f.Name())
	if err != nil {
		t.Fatal(err)
	}

	if !bytes.Equal(data, rData) {
		t.Fatalf("data mismatch: %v != %v", data, rData)
	}

	stat, err := os.Stat(f.Name())
	if err != nil {
		t.Fatal(err)
	}

	if have, want := stat.Mode().Perm(), perm; have != want {
		t.Errorf("have %v, want %v", have, want)
	}
}

func TestGoPath(t *testing.T) {
	// restore original gopath upon exit
	path := os.Getenv("GOPATH")

@ -109,6 +109,10 @@ func RandInt63n(n int64) int64 {
	return grand.Int63n(n)
}

func RandBool() bool {
	return grand.Bool()
}

func RandFloat32() float32 {
	return grand.Float32()
}
@ -274,6 +278,13 @@ func (r *Rand) Intn(n int) int {
	return i
}

// Bool returns a uniformly random boolean
func (r *Rand) Bool() bool {
	// See https://github.com/golang/go/issues/23804#issuecomment-365370418
	// for reasoning behind computing like this
	return r.Int63()%2 == 0
}

// Perm returns a pseudo-random permutation of n integers in [0, n).
func (r *Rand) Perm(n int) []int {
	r.Lock()

128
libs/common/tempfile.go
Normal file
@ -0,0 +1,128 @@
package common

import (
	fmt "fmt"
	"io"
	"os"
	"path/filepath"
	"strconv"
	"strings"
	"sync"
	"time"
)

const (
	atomicWriteFilePrefix = "write-file-atomic-"
	// Maximum number of atomic write file conflicts before we start reseeding
	// (reduced from golang's default 10 due to using an increased randomness space)
	atomicWriteFileMaxNumConflicts = 5
	// Maximum number of attempts to make at writing the write file before giving up
	// (reduced from golang's default 10000 due to using an increased randomness space)
	atomicWriteFileMaxNumWriteAttempts = 1000
	// LCG constants from Donald Knuth MMIX
	// This LCG has a period equal to 2**64
	lcgA = 6364136223846793005
	lcgC = 1442695040888963407
	// Create in case it doesn't exist and force kernel
	// flush, which still leaves the potential of lingering disk cache.
	// Never overwrites files
	atomicWriteFileFlag = os.O_WRONLY | os.O_CREATE | os.O_SYNC | os.O_TRUNC | os.O_EXCL
)

var (
	atomicWriteFileRand   uint64
	atomicWriteFileRandMu sync.Mutex
)

func writeFileRandReseed() uint64 {
	// Scale the PID, to minimize the chance that two processes seeded at similar times
	// get the same seed. Note that PID typically ranges in [0, 2**15), but can be
	// up to 2**22 under certain configurations. We left bit-shift the PID by 20, so that
	// a PID difference of one corresponds to a time difference of 2048 seconds.
	// The important thing here is that now for a seed conflict, they would both have to be on
	// the correct nanosecond offset, and second-based offset, which is much less likely than
	// just a conflict with the correct nanosecond offset.
	return uint64(time.Now().UnixNano() + int64(os.Getpid()<<20))
}

// Use a fast thread safe LCG for atomic write file names.
// Returns a string corresponding to a 64 bit int.
// If it was a negative int, the leading number is a 0.
func randWriteFileSuffix() string {
	atomicWriteFileRandMu.Lock()
	r := atomicWriteFileRand
	if r == 0 {
		r = writeFileRandReseed()
	}

	// Update randomness according to lcg
	r = r*lcgA + lcgC

	atomicWriteFileRand = r
	atomicWriteFileRandMu.Unlock()
	// Can have a negative name, replace this in the following
	suffix := strconv.Itoa(int(r))
	if string(suffix[0]) == "-" {
		// Replace first "-" with "0". This is purely for UI clarity,
		// as otherwise there would be two `-` in a row.
		suffix = strings.Replace(suffix, "-", "0", 1)
	}
	return suffix
}

// WriteFileAtomic creates a temporary file with data and provided perm and
// swaps it atomically with filename if successful.
func WriteFileAtomic(filename string, data []byte, perm os.FileMode) (err error) {
	// This implementation is inspired by the golang stdlib's method of creating
	// tempfiles. Notable differences are that we use different flags, a 64 bit LCG
	// and handle negatives differently.
	// The core reason we can't use golang's TempFile is that we must write
	// to the file synchronously, as we need this to persist to disk.
	// We also open it in write-only mode, to avoid concerns that arise with read.
	var (
		dir = filepath.Dir(filename)
		f   *os.File
	)

	nconflict := 0
	// Limit the number of attempts to create a file. Something is seriously
	// wrong if it didn't get created after 1000 attempts, and we don't want
	// an infinite loop
	i := 0
	for ; i < atomicWriteFileMaxNumWriteAttempts; i++ {
		name := filepath.Join(dir, atomicWriteFilePrefix+randWriteFileSuffix())
		f, err = os.OpenFile(name, atomicWriteFileFlag, perm)
		// If the file already exists, try a new file
		if os.IsExist(err) {
			// If the file exists too many times, start reseeding, as we've
			// likely hit another instance's seed.
			if nconflict++; nconflict > atomicWriteFileMaxNumConflicts {
				atomicWriteFileRandMu.Lock()
				atomicWriteFileRand = writeFileRandReseed()
				atomicWriteFileRandMu.Unlock()
			}
			continue
		} else if err != nil {
			return err
		}
		break
	}
	if i == atomicWriteFileMaxNumWriteAttempts {
		return fmt.Errorf("Could not create atomic write file after %d attempts", i)
	}

	// Clean up in any case. Defer stacking order is last-in-first-out.
	defer os.Remove(f.Name())
	defer f.Close()

	if n, err := f.Write(data); err != nil {
		return err
	} else if n < len(data) {
		return io.ErrShortWrite
	}
	// Close the file before renaming it, otherwise it will cause "The process
	// cannot access the file because it is being used by another process." on windows.
	f.Close()

	return os.Rename(f.Name(), filename)
}
138
libs/common/tempfile_test.go
Normal file
@ -0,0 +1,138 @@
package common

// Need access to internal variables, so can't use _test package

import (
	"bytes"
	fmt "fmt"
	"io/ioutil"
	"os"
	testing "testing"

	"github.com/stretchr/testify/require"
)

func TestWriteFileAtomic(t *testing.T) {
	var (
		data             = []byte(RandStr(RandIntn(2048)))
		old              = RandBytes(RandIntn(2048))
		perm os.FileMode = 0600
	)

	f, err := ioutil.TempFile("/tmp", "write-atomic-test-")
	if err != nil {
		t.Fatal(err)
	}
	defer os.Remove(f.Name())

	if err = ioutil.WriteFile(f.Name(), old, 0664); err != nil {
		t.Fatal(err)
	}

	if err = WriteFileAtomic(f.Name(), data, perm); err != nil {
		t.Fatal(err)
	}

	rData, err := ioutil.ReadFile(f.Name())
	if err != nil {
		t.Fatal(err)
	}

	if !bytes.Equal(data, rData) {
		t.Fatalf("data mismatch: %v != %v", data, rData)
	}

	stat, err := os.Stat(f.Name())
	if err != nil {
		t.Fatal(err)
	}

	if have, want := stat.Mode().Perm(), perm; have != want {
		t.Errorf("have %v, want %v", have, want)
	}
}

// This tests atomic write file when there is a single duplicate file.
// Expected behavior is for a new file to be created, and the original write file to be unaltered.
func TestWriteFileAtomicDuplicateFile(t *testing.T) {
	var (
		defaultSeed    uint64 = 1
		testString            = "This is a glorious test string"
		expectedString        = "Did the test file's string appear here?"

		fileToWrite = "/tmp/TestWriteFileAtomicDuplicateFile-test.txt"
	)
	// Create a file at the seed, and reset the seed.
	atomicWriteFileRand = defaultSeed
	firstFileRand := randWriteFileSuffix()
	atomicWriteFileRand = defaultSeed
	fname := "/tmp/" + atomicWriteFilePrefix + firstFileRand
	f, err := os.OpenFile(fname, atomicWriteFileFlag, 0777)
	defer os.Remove(fname)
	// Defer here, in case there is a panic in WriteFileAtomic.
	defer os.Remove(fileToWrite)

	require.Nil(t, err)
	f.WriteString(testString)
	WriteFileAtomic(fileToWrite, []byte(expectedString), 0777)
	// Check that the first atomic file was untouched
	firstAtomicFileBytes, err := ioutil.ReadFile(fname)
	require.Nil(t, err, "Error reading first atomic file")
	require.Equal(t, []byte(testString), firstAtomicFileBytes, "First atomic file was overwritten")
	// Check that the resultant file is correct
	resultantFileBytes, err := ioutil.ReadFile(fileToWrite)
	require.Nil(t, err, "Error reading resultant file")
	require.Equal(t, []byte(expectedString), resultantFileBytes, "Written file had incorrect bytes")

	// Check that the intermediate write file was deleted
	// Get the second write file's randomness
	atomicWriteFileRand = defaultSeed
	_ = randWriteFileSuffix()
	secondFileRand := randWriteFileSuffix()
	_, err = os.Stat("/tmp/" + atomicWriteFilePrefix + secondFileRand)
	require.True(t, os.IsNotExist(err), "Intermediate atomic write file not deleted")
}

// This tests atomic write file when there are many duplicate files.
// Expected behavior is for a new file to be created under a completely new seed,
// and the original write files to be unaltered.
func TestWriteFileAtomicManyDuplicates(t *testing.T) {
	var (
		defaultSeed    uint64 = 2
		testString            = "This is a glorious test string, from file %d"
		expectedString        = "Did any of the test file's string appear here?"

		fileToWrite = "/tmp/TestWriteFileAtomicDuplicateFile-test.txt"
	)
	// Initialize all of the atomic write files
	atomicWriteFileRand = defaultSeed
	for i := 0; i < atomicWriteFileMaxNumConflicts+2; i++ {
		fileRand := randWriteFileSuffix()
		fname := "/tmp/" + atomicWriteFilePrefix + fileRand
		f, err := os.OpenFile(fname, atomicWriteFileFlag, 0777)
		require.Nil(t, err)
		f.WriteString(fmt.Sprintf(testString, i))
		defer os.Remove(fname)
	}

	atomicWriteFileRand = defaultSeed
	// Defer here, in case there is a panic in WriteFileAtomic.
	defer os.Remove(fileToWrite)

	WriteFileAtomic(fileToWrite, []byte(expectedString), 0777)
	// Check that all intermediate atomic files were untouched
	atomicWriteFileRand = defaultSeed
	for i := 0; i < atomicWriteFileMaxNumConflicts+2; i++ {
		fileRand := randWriteFileSuffix()
		fname := "/tmp/" + atomicWriteFilePrefix + fileRand
		firstAtomicFileBytes, err := ioutil.ReadFile(fname)
		require.Nil(t, err, "Error reading first atomic file")
		require.Equal(t, []byte(fmt.Sprintf(testString, i)), firstAtomicFileBytes,
			"atomic write file %d was overwritten", i)
	}

	// Check that the resultant file is correct
	resultantFileBytes, err := ioutil.ReadFile(fileToWrite)
	require.Nil(t, err, "Error reading resultant file")
	require.Equal(t, []byte(expectedString), resultantFileBytes, "Written file had incorrect bytes")
}
@ -2,6 +2,7 @@ package db

import (
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"
	"testing"
@ -17,8 +18,8 @@ func cleanupDBDir(dir, name string) {

func testBackendGetSetDelete(t *testing.T, backend DBBackendType) {
	// Default
	dir, dirname := cmn.Tempdir(fmt.Sprintf("test_backend_%s_", backend))
	defer dir.Close()
	dirname, err := ioutil.TempDir("", fmt.Sprintf("test_backend_%s_", backend))
	require.Nil(t, err)
	db := NewDB("testdb", backend, dirname)

	// A nonexistent key should return nil, even if the key is empty
@ -2,12 +2,12 @@ package db

import (
	"fmt"
	"io/ioutil"
	"sync"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
	cmn "github.com/tendermint/tendermint/libs/common"
)

//----------------------------------------
@ -61,9 +61,9 @@ func checkValuePanics(t *testing.T, itr Iterator) {
}

func newTempDB(t *testing.T, backend DBBackendType) (db DB) {
	dir, dirname := cmn.Tempdir("db_common_test")
	dirname, err := ioutil.TempDir("", "db_common_test")
	require.Nil(t, err)
	db = NewDB("testdb", backend, dirname)
	dir.Close()
	return db
}
@ -117,7 +117,7 @@ func TestTxsAvailable(t *testing.T) {

func TestSerialReap(t *testing.T) {
	app := counter.NewCounterApplication(true)
	app.SetOption(abci.RequestSetOption{"serial", "on"})
	app.SetOption(abci.RequestSetOption{Key: "serial", Value: "on"})
	cc := proxy.NewLocalClientCreator(app)

	mempool := newMempoolWithApp(cc)
29
node/node.go
@ -85,7 +85,7 @@ func DefaultNewNode(config *cfg.Config, logger log.Logger) (*Node, error) {
		proxy.DefaultClientCreator(config.ProxyApp, config.ABCI, config.DBDir()),
		DefaultGenesisDocProviderFunc(config),
		DefaultDBProvider,
		DefaultMetricsProvider,
		DefaultMetricsProvider(config.Instrumentation),
		logger,
	)
}
@ -93,16 +93,16 @@ func DefaultNewNode(config *cfg.Config, logger log.Logger) (*Node, error) {
// MetricsProvider returns a consensus, p2p and mempool Metrics.
type MetricsProvider func() (*cs.Metrics, *p2p.Metrics, *mempl.Metrics)

// DefaultMetricsProvider returns consensus, p2p and mempool Metrics built
// using the Prometheus client library.
func DefaultMetricsProvider() (*cs.Metrics, *p2p.Metrics, *mempl.Metrics) {
// DefaultMetricsProvider returns Metrics built using the Prometheus client library
// if Prometheus is enabled. Otherwise, it returns no-op Metrics.
func DefaultMetricsProvider(config *cfg.InstrumentationConfig) MetricsProvider {
	return func() (*cs.Metrics, *p2p.Metrics, *mempl.Metrics) {
		if config.Prometheus {
			return cs.PrometheusMetrics(), p2p.PrometheusMetrics(), mempl.PrometheusMetrics()
		}

// NopMetricsProvider returns consensus, p2p and mempool Metrics as no-op.
func NopMetricsProvider() (*cs.Metrics, *p2p.Metrics, *mempl.Metrics) {
		return cs.NopMetrics(), p2p.NopMetrics(), mempl.NopMetrics()
	}
}

//------------------------------------------------------------------------------

@ -229,17 +229,7 @@ func NewNode(config *cfg.Config,
		consensusLogger.Info("This node is not a validator", "addr", privValidator.GetAddress(), "pubKey", privValidator.GetPubKey())
	}

	// metrics
	var (
		csMetrics    *cs.Metrics
		p2pMetrics   *p2p.Metrics
		memplMetrics *mempl.Metrics
	)
	if config.Instrumentation.Prometheus {
		csMetrics, p2pMetrics, memplMetrics = metricsProvider()
	} else {
		csMetrics, p2pMetrics, memplMetrics = NopMetricsProvider()
	}
	csMetrics, p2pMetrics, memplMetrics := metricsProvider()

	// Make MempoolReactor
	mempoolLogger := logger.With("module", "mempool")
@ -462,7 +452,8 @@ func (n *Node) OnStart() error {
		n.rpcListeners = listeners
	}

	if n.config.Instrumentation.Prometheus {
	if n.config.Instrumentation.Prometheus &&
		n.config.Instrumentation.PrometheusListenAddr != "" {
		n.prometheusSrv = n.startPrometheusServer(n.config.Instrumentation.PrometheusListenAddr)
	}
@ -1,9 +1,3 @@
// Uses nacl's secret_box to encrypt a net.Conn.
// It is (meant to be) an implementation of the STS protocol.
// Note we do not (yet) assume that a remote peer's pubkey
// is known ahead of time, and thus we are technically
// still vulnerable to MITM. (TODO!)
// See docs/sts-final.pdf for more info
package conn

import (
@ -16,36 +10,45 @@ import (
	"net"
	"time"

	"golang.org/x/crypto/chacha20poly1305"
	"golang.org/x/crypto/curve25519"
	"golang.org/x/crypto/nacl/box"
	"golang.org/x/crypto/nacl/secretbox"
	"golang.org/x/crypto/ripemd160"

	"github.com/tendermint/tendermint/crypto"
	cmn "github.com/tendermint/tendermint/libs/common"
	"golang.org/x/crypto/hkdf"
)

// 4 + 1024 == 1028 total frame size
const dataLenSize = 4
const dataMaxSize = 1024
const totalFrameSize = dataMaxSize + dataLenSize
const sealedFrameSize = totalFrameSize + secretbox.Overhead
const aeadSizeOverhead = 16 // overhead of poly1305 authentication tag
const aeadKeySize = chacha20poly1305.KeySize
const aeadNonceSize = chacha20poly1305.NonceSize

// Implements net.Conn
// SecretConnection implements net.Conn.
// It is an implementation of the STS protocol.
// Note we do not (yet) assume that a remote peer's pubkey
// is known ahead of time, and thus we are technically
// still vulnerable to MITM. (TODO!)
// See docs/sts-final.pdf for more info
type SecretConnection struct {
	conn       io.ReadWriteCloser
	recvBuffer []byte
	recvNonce  *[24]byte
	sendNonce  *[24]byte
	recvNonce  *[aeadNonceSize]byte
	sendNonce  *[aeadNonceSize]byte
	recvSecret *[aeadKeySize]byte
	sendSecret *[aeadKeySize]byte
	remPubKey  crypto.PubKey
	shrSecret  *[32]byte // shared secret
}

// Performs handshake and returns a new authenticated SecretConnection.
// Returns nil if error in handshake.
// MakeSecretConnection performs handshake and returns a new authenticated
// SecretConnection.
// Returns nil if there is an error in handshake.
// Caller should call conn.Close()
// See docs/sts-final.pdf for more information.
func MakeSecretConnection(conn io.ReadWriteCloser, locPrivKey crypto.PrivKey) (*SecretConnection, error) {

	locPubKey := locPrivKey.PubKey()

	// Generate ephemeral keys for perfect forward secrecy.
@ -59,29 +62,27 @@ func MakeSecretConnection(conn io.ReadWriteCloser, locPrivKey crypto.PrivKey) (*
		return nil, err
	}

	// Compute common shared secret.
	shrSecret := computeSharedSecret(remEphPub, locEphPriv)

	// Sort by lexical order.
	loEphPub, hiEphPub := sort32(locEphPub, remEphPub)
	loEphPub, _ := sort32(locEphPub, remEphPub)

	// Check if the local ephemeral public key
	// was the least, lexicographically sorted.
	locIsLeast := bytes.Equal(locEphPub[:], loEphPub[:])

	// Generate nonces to use for secretbox.
	recvNonce, sendNonce := genNonces(loEphPub, hiEphPub, locIsLeast)
	// Compute common diffie hellman secret using X25519.
	dhSecret := computeDHSecret(remEphPub, locEphPriv)

	// Generate common challenge to sign.
	challenge := genChallenge(loEphPub, hiEphPub)
	// Generate the secrets used for receiving, sending, and the challenge via HKDF-SHA2 on dhSecret.
	recvSecret, sendSecret, challenge := deriveSecretAndChallenge(dhSecret, locIsLeast)

	// Construct SecretConnection.
	sc := &SecretConnection{
		conn:       conn,
		recvBuffer: nil,
		recvNonce:  recvNonce,
		sendNonce:  sendNonce,
		shrSecret:  shrSecret,
		recvNonce:  new([aeadNonceSize]byte),
		sendNonce:  new([aeadNonceSize]byte),
		recvSecret: recvSecret,
		sendSecret: sendSecret,
	}

	// Sign the challenge bytes for authentication.
@ -92,6 +93,7 @@ func MakeSecretConnection(conn io.ReadWriteCloser, locPrivKey crypto.PrivKey) (*
	if err != nil {
		return nil, err
	}

	remPubKey, remSignature := authSigMsg.Key, authSigMsg.Sig
	if !remPubKey.VerifyBytes(challenge[:], remSignature) {
		return nil, errors.New("Challenge verification failed")
@ -102,7 +104,7 @@ func MakeSecretConnection(conn io.ReadWriteCloser, locPrivKey crypto.PrivKey) (*
	return sc, nil
}

// Returns authenticated remote pubkey
// RemotePubKey returns the authenticated remote pubkey.
func (sc *SecretConnection) RemotePubKey() crypto.PubKey {
	return sc.remPubKey
}
@ -124,14 +126,17 @@ func (sc *SecretConnection) Write(data []byte) (n int, err error) {
		binary.BigEndian.PutUint32(frame, uint32(chunkLength))
		copy(frame[dataLenSize:], chunk)

		aead, err := chacha20poly1305.New(sc.sendSecret[:])
		if err != nil {
			return n, errors.New("Invalid SecretConnection Key")
		}
		// encrypt the frame
		var sealedFrame = make([]byte, sealedFrameSize)
		secretbox.Seal(sealedFrame[:0], frame, sc.sendNonce, sc.shrSecret)
		// fmt.Printf("secretbox.Seal(sealed:%X,sendNonce:%X,shrSecret:%X\n", sealedFrame, sc.sendNonce, sc.shrSecret)
		incr2Nonce(sc.sendNonce)
		var sealedFrame = make([]byte, aeadSizeOverhead+totalFrameSize)
		aead.Seal(sealedFrame[:0], sc.sendNonce[:], frame, nil)
		incrNonce(sc.sendNonce)
		// end encryption

		_, err := sc.conn.Write(sealedFrame)
		_, err = sc.conn.Write(sealedFrame)
		if err != nil {
			return n, err
		}
@ -148,7 +153,11 @@ func (sc *SecretConnection) Read(data []byte) (n int, err error) {
		return
	}

	sealedFrame := make([]byte, sealedFrameSize)
	aead, err := chacha20poly1305.New(sc.recvSecret[:])
	if err != nil {
		return n, errors.New("Invalid SecretConnection Key")
	}
	sealedFrame := make([]byte, totalFrameSize+aeadSizeOverhead)
	_, err = io.ReadFull(sc.conn, sealedFrame)
	if err != nil {
		return
@ -156,12 +165,11 @@ func (sc *SecretConnection) Read(data []byte) (n int, err error) {

	// decrypt the frame
	var frame = make([]byte, totalFrameSize)
	// fmt.Printf("secretbox.Open(sealed:%X,recvNonce:%X,shrSecret:%X\n", sealedFrame, sc.recvNonce, sc.shrSecret)
	_, ok := secretbox.Open(frame[:0], sealedFrame, sc.recvNonce, sc.shrSecret)
	if !ok {
	_, err = aead.Open(frame[:0], sc.recvNonce[:], sealedFrame, nil)
	if err != nil {
		return n, errors.New("Failed to decrypt SecretConnection")
	}
	incr2Nonce(sc.recvNonce)
	incrNonce(sc.recvNonce)
	// end decryption

	var chunkLength = binary.BigEndian.Uint32(frame) // read the first four bytes
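The seal/open calls above follow Go's standard `cipher.AEAD` contract. A self-contained sketch of the same length-prefixed frame scheme, using stdlib AES-GCM in place of chacha20poly1305 and a zero nonce for brevity (the real code increments a per-direction nonce every frame; the key and helper names here are illustrative):

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"encoding/binary"
	"fmt"
)

const frameLenSize = 4
const frameMaxData = 1024
const frameTotal = frameLenSize + frameMaxData // 1028, as in the constants above

// demoAEAD builds an AEAD with an all-zero demo key and a zero nonce.
func demoAEAD() (cipher.AEAD, []byte) {
	block, _ := aes.NewCipher(make([]byte, 32))
	aead, _ := cipher.NewGCM(block)
	return aead, make([]byte, aead.NonceSize())
}

// sealFrame length-prefixes chunk into a fixed-size frame and encrypts it.
// Fixed-size frames mean ciphertext length leaks nothing about chunk size.
func sealFrame(aead cipher.AEAD, nonce, chunk []byte) []byte {
	frame := make([]byte, frameTotal)
	binary.BigEndian.PutUint32(frame, uint32(len(chunk)))
	copy(frame[frameLenSize:], chunk)
	return aead.Seal(nil, nonce, frame, nil)
}

// openFrame decrypts and strips the length prefix, recovering the chunk.
func openFrame(aead cipher.AEAD, nonce, sealed []byte) ([]byte, error) {
	frame, err := aead.Open(nil, nonce, sealed, nil)
	if err != nil {
		return nil, err
	}
	n := binary.BigEndian.Uint32(frame)
	return frame[frameLenSize : frameLenSize+n], nil
}

func main() {
	aead, nonce := demoAEAD()
	sealed := sealFrame(aead, nonce, []byte("hello"))
	fmt.Println(len(sealed)) // frameTotal plus the 16-byte auth tag
	chunk, err := openFrame(aead, nonce, sealed)
	fmt.Println(string(chunk), err)
}
```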
@ -176,6 +184,7 @@ func (sc *SecretConnection) Read(data []byte) (n int, err error) {
}

// Implements net.Conn
// nolint
func (sc *SecretConnection) Close() error         { return sc.conn.Close() }
func (sc *SecretConnection) LocalAddr() net.Addr  { return sc.conn.(net.Conn).LocalAddr() }
func (sc *SecretConnection) RemoteAddr() net.Addr { return sc.conn.(net.Conn).RemoteAddr() }
@ -204,18 +213,16 @@ func shareEphPubKey(conn io.ReadWriteCloser, locEphPub *[32]byte) (remEphPub *[3
		func(_ int) (val interface{}, err error, abort bool) {
			var _, err1 = cdc.MarshalBinaryWriter(conn, locEphPub)
			if err1 != nil {
				return nil, err1, true // abort
			} else {
				return nil, nil, false
			}
			return nil, nil, false
		},
		func(_ int) (val interface{}, err error, abort bool) {
			var _remEphPub [32]byte
			var _, err2 = cdc.UnmarshalBinaryReader(conn, &_remEphPub, 1024*1024) // TODO
			if err2 != nil {
				return nil, err2, true // abort
			} else {
				return _remEphPub, nil, false
			}
			return _remEphPub, nil, false
		},
	)

@ -230,9 +237,40 @@ func shareEphPubKey(conn io.ReadWriteCloser, locEphPub *[32]byte) (remEphPub *[3
	return &_remEphPub, nil
}

func computeSharedSecret(remPubKey, locPrivKey *[32]byte) (shrSecret *[32]byte) {
	shrSecret = new([32]byte)
	box.Precompute(shrSecret, remPubKey, locPrivKey)
func deriveSecretAndChallenge(dhSecret *[32]byte, locIsLeast bool) (recvSecret, sendSecret *[aeadKeySize]byte, challenge *[32]byte) {
	hash := sha256.New
	hkdf := hkdf.New(hash, dhSecret[:], nil, []byte("TENDERMINT_SECRET_CONNECTION_KEY_AND_CHALLENGE_GEN"))
	// Get enough data for 2 aead keys, and a 32 byte challenge.
	res := new([2*aeadKeySize + 32]byte)
	_, err := io.ReadFull(hkdf, res[:])
	if err != nil {
		panic(err)
	}

	challenge = new([32]byte)
	recvSecret = new([aeadKeySize]byte)
	sendSecret = new([aeadKeySize]byte)
	// Use the last 32 bytes as the challenge.
	copy(challenge[:], res[2*aeadKeySize:2*aeadKeySize+32])

	// Bytes 0 through aeadKeySize - 1 are one aead key.
	// Bytes aeadKeySize through 2*aeadKeySize - 1 are another aead key.
	// Which key corresponds to the sending and receiving key depends on whether
	// the local key is less than the remote key.
	if locIsLeast {
		copy(recvSecret[:], res[0:aeadKeySize])
		copy(sendSecret[:], res[aeadKeySize:aeadKeySize*2])
	} else {
		copy(sendSecret[:], res[0:aeadKeySize])
		copy(recvSecret[:], res[aeadKeySize:aeadKeySize*2])
	}

	return
}

func computeDHSecret(remPubKey, locPrivKey *[32]byte) (shrKey *[32]byte) {
	shrKey = new([32]byte)
	curve25519.ScalarMult(shrKey, locPrivKey, remPubKey)
	return
}

@ -247,25 +285,6 @@ func sort32(foo, bar *[32]byte) (lo, hi *[32]byte) {
	return
}

func genNonces(loPubKey, hiPubKey *[32]byte, locIsLo bool) (recvNonce, sendNonce *[24]byte) {
	nonce1 := hash24(append(loPubKey[:], hiPubKey[:]...))
	nonce2 := new([24]byte)
	copy(nonce2[:], nonce1[:])
	nonce2[len(nonce2)-1] ^= 0x01
	if locIsLo {
		recvNonce = nonce1
		sendNonce = nonce2
	} else {
		recvNonce = nonce2
		sendNonce = nonce1
	}
	return
}

func genChallenge(loPubKey, hiPubKey *[32]byte) (challenge *[32]byte) {
	return hash32(append(loPubKey[:], hiPubKey[:]...))
}

func signChallenge(challenge *[32]byte, locPrivKey crypto.PrivKey) (signature crypto.Signature) {
	signature, err := locPrivKey.Sign(challenge[:])
	// TODO(ismail): let signChallenge return an error instead
@ -288,18 +307,16 @@ func shareAuthSignature(sc *SecretConnection, pubKey crypto.PubKey, signature cr
		func(_ int) (val interface{}, err error, abort bool) {
			var _, err1 = cdc.MarshalBinaryWriter(sc, authSigMessage{pubKey, signature})
			if err1 != nil {
				return nil, err1, true // abort
			} else {
				return nil, nil, false
			}
			return nil, nil, false
		},
		func(_ int) (val interface{}, err error, abort bool) {
			var _recvMsg authSigMessage
			var _, err2 = cdc.UnmarshalBinaryReader(sc, &_recvMsg, 1024*1024) // TODO
			if err2 != nil {
				return nil, err2, true // abort
			} else {
				return _recvMsg, nil, false
			}
			return _recvMsg, nil, false
		},
	)
@ -315,36 +332,11 @@ func shareAuthSignature(sc *SecretConnection, pubKey crypto.PubKey, signature cr

//--------------------------------------------------------------------------------

// sha256
func hash32(input []byte) (res *[32]byte) {
	hasher := sha256.New()
	hasher.Write(input) // nolint: errcheck, gas
	resSlice := hasher.Sum(nil)
	res = new([32]byte)
	copy(res[:], resSlice)
	return
}

// We only fill in the first 20 bytes with ripemd160
func hash24(input []byte) (res *[24]byte) {
	hasher := ripemd160.New()
	hasher.Write(input) // nolint: errcheck, gas
	resSlice := hasher.Sum(nil)
	res = new([24]byte)
	copy(res[:], resSlice)
	return
}

// increment nonce big-endian by 2 with wraparound.
func incr2Nonce(nonce *[24]byte) {
	incrNonce(nonce)
	incrNonce(nonce)
}

// increment nonce big-endian by 1 with wraparound.
func incrNonce(nonce *[24]byte) {
	for i := 23; 0 <= i; i-- {
func incrNonce(nonce *[aeadNonceSize]byte) {
	for i := aeadNonceSize - 1; 0 <= i; i-- {
		nonce[i]++
		// if this byte wrapped around to zero, we need to increment the next byte
		if nonce[i] != 0 {
			return
		}
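The big-endian increment above is easy to check in isolation. A minimal sketch using a 12-byte nonce (the chacha20poly1305 nonce size):

```go
package main

import "fmt"

const nonceSize = 12

// incrNonce increments the nonce big-endian by one, with wraparound:
// the last byte is bumped first, and a carry propagates toward byte 0.
func incrNonce(nonce *[nonceSize]byte) {
	for i := nonceSize - 1; 0 <= i; i-- {
		nonce[i]++
		if nonce[i] != 0 { // no carry; done
			return
		}
	}
}

func main() {
	var nonce [nonceSize]byte
	nonce[nonceSize-1] = 0xff
	incrNonce(&nonce) // carry: ...00 ff -> ...01 00
	fmt.Printf("%x\n", nonce[nonceSize-2:]) // 0100
}
```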
@ -1,8 +1,16 @@
package conn

import (
	"bufio"
	"encoding/hex"
	"flag"
	"fmt"
	"io"
	"log"
	"os"
	"path/filepath"
	"strconv"
	"strings"
	"testing"

	"github.com/stretchr/testify/assert"
@ -208,6 +216,66 @@ func TestSecretConnectionReadWrite(t *testing.T) {

}

// Run go test -update from within this module
// to update the golden test vector file
var update = flag.Bool("update", false, "update .golden files")

func TestDeriveSecretsAndChallengeGolden(t *testing.T) {
	goldenFilepath := filepath.Join("testdata", t.Name()+".golden")
	if *update {
		t.Logf("Updating golden test vector file %s", goldenFilepath)
		data := createGoldenTestVectors(t)
		cmn.WriteFile(goldenFilepath, []byte(data), 0644)
	}
	f, err := os.Open(goldenFilepath)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := scanner.Text()
		params := strings.Split(line, ",")
		randSecretVector, err := hex.DecodeString(params[0])
		require.Nil(t, err)
		randSecret := new([32]byte)
		copy((*randSecret)[:], randSecretVector)
		locIsLeast, err := strconv.ParseBool(params[1])
		require.Nil(t, err)
		expectedRecvSecret, err := hex.DecodeString(params[2])
		require.Nil(t, err)
		expectedSendSecret, err := hex.DecodeString(params[3])
		require.Nil(t, err)
		expectedChallenge, err := hex.DecodeString(params[4])
		require.Nil(t, err)

		recvSecret, sendSecret, challenge := deriveSecretAndChallenge(randSecret, locIsLeast)
		require.Equal(t, expectedRecvSecret, (*recvSecret)[:], "Recv Secrets aren't equal")
		require.Equal(t, expectedSendSecret, (*sendSecret)[:], "Send Secrets aren't equal")
		require.Equal(t, expectedChallenge, (*challenge)[:], "challenges aren't equal")
	}
}

// Creates the data for a test vector file.
// The file format is:
// Hex(diffie_hellman_secret), loc_is_least, Hex(recvSecret), Hex(sendSecret), Hex(challenge)
func createGoldenTestVectors(t *testing.T) string {
	data := ""
	for i := 0; i < 32; i++ {
		randSecretVector := cmn.RandBytes(32)
		randSecret := new([32]byte)
		copy((*randSecret)[:], randSecretVector)
		data += hex.EncodeToString((*randSecret)[:]) + ","
		locIsLeast := cmn.RandBool()
		data += strconv.FormatBool(locIsLeast) + ","
		recvSecret, sendSecret, challenge := deriveSecretAndChallenge(randSecret, locIsLeast)
		data += hex.EncodeToString((*recvSecret)[:]) + ","
		data += hex.EncodeToString((*sendSecret)[:]) + ","
		data += hex.EncodeToString((*challenge)[:]) + "\n"
	}
	return data
}

func BenchmarkSecretConnection(b *testing.B) {
	b.StopTimer()
	fooSecConn, barSecConn := makeSecretConnPair(b)
32
p2p/conn/testdata/TestDeriveSecretsAndChallengeGolden.golden
vendored
Normal file
@ -0,0 +1,32 @@
9fe4a5a73df12dbd8659b1d9280873fe993caefec6b0ebc2686dd65027148e03,true,80a83ad6afcb6f8175192e41973aed31dd75e3c106f813d986d9567a4865eb2f,96362a04f628a0666d9866147326898bb0847b8db8680263ad19e6336d4eed9e,2632c3fd20f456c5383ed16aa1d56dc7875a2b0fc0d5ff053c3ada8934098c69
0716764b370d543fee692af03832c16410f0a56e4ddb79604ea093b10bb6f654,false,84f2b1e8658456529a2c324f46c3406c3c6fecd5fbbf9169f60bed8956a8b03d,cba357ae33d7234520d5742102a2a6cdb39b7db59c14a58fa8aadd310127630f,576643a8fcc1a4cf866db900f4a150dbe35d44a1b3ff36e4911565c3fa22fc32
358dd73aae2c5b7b94b57f950408a3c681e748777ecab2063c8ca51a63588fa8,false,c2e2f664c8ee561af8e1e30553373be4ae23edecc8c6bd762d44b2afb7f2a037,d1563f428ac1c023c15d8082b2503157fe9ecbde4fb3493edd69ebc299b4970c,89fb6c6439b12fe11a4c604b8ad883f7dc76be33df590818fe5eb15ddb01face
0958308bdb583e639dd399a98cd21077d834b4b5e30771275a5a73a62efcc7e0,false,523c0ae97039173566f7ab4b8f271d8d78feef5a432d618e58ced4f80f7c1696,c1b743401c6e4508e62b8245ea7c3252bbad082e10af10e80608084d63877977,d7c52adf12ebc69677aec4bd387b0c5a35570fe61cb7b8ae55f3ab14b1b79be0
d93d134e72f58f177642ac30f36b2d3cd4720aa7e60feb1296411a9009cf4524,false,47a427bcc1ef6f0ce31dbf343bc8bbf49554b4dd1e2330fd97d0df23ecdbba10,73e23adb7801179349ecf9c8cdf64d71d64a9f1145ba6730e5d029f99eaf8840,a8fdcb77f591bfba7b8483aa15ae7b42054ba68625d51dec005896dfe910281f
6104474c791cda24d952b356fb41a5d273c0ce6cc87d270b1701d0523cd5aa13,true,1cb4397b9e478430321af4647da2ccbef62ff8888542d31cca3f626766c8080f,673b23318826bd31ad1a4995c6e5095c4b092f5598aa0a96381a3e977bc0eaf9,4a25a25c5f75d6cc512f2ba8c1546e6263e9ef8269f0c046c37838cc66aa83e6
8a6002503c15cab763e27c53fc449f6854a210c95cdd67e4466b0f2cb46b629c,false,f01ff06aef356c87f8d2646ff9ed8b855497c2ca00ea330661d84ef421a67e63,4f59bb23090010614877265a1597f1a142fa97b7208e1d554435763505f36f6a,1aadcb1c8b5993da102cebcb60c545b03197c98137064530840f45d917ad300e
31a57c6b1fe33beb1f7ebbbfc06d58c4f307cd355b6f9753e58f3edec16c7559,false,13e126c4cb240349dccf0dc843977671d34a1daffd0517d06ed66b703344db22,d491431906a306af45ecf9f1977e32d7f65a79f5139f931760416de27554b687,5ea7e8e3d5a30503423341609d360d246b61a9159fc07f253a46e357977cd745
71a3c79718b824627faeefdce887d9465b353bd962cc5e97c5b5dfedab457ef9,true,e2e8eea547dcee7eafa89ae41f48ab049beac24935fad75258924fd5273d23cb,45d2e839bf36a3616cbe8a9bdbd4e7b288bf5bf1e6e79c07995eb2b18eb2eaff,7ee50e0810bc9f98e56bc46de5da22d84b3efa52fe5d85db4b2344530ef17ed8
2e9dba2eb4f9019c2628ff5899744469c26caf793636f30ddb76601751aee968,false,8bfc3b314e4468d4e19c9d28b7bfd5b5532263105273b0fe80801f6146313993,b77d2b223e27038f978ab87a725859f6995f903056bdbd594ab04f0b2cbad517,9032be49a9cbcd1de6fee332f8f24ebf545c05e0175b98c564e7d1e69630ae20
81322b22c835efb26d78051f3a3840a9d01aa558c019ecfa26483b5c5535728c,true,61eacb7e9665e362ef492ef950cea58f8bc67434ab7ee5545139147adf395da4,0f600ef0c358cae938969f434c2ec0ce3be632fdf5246b7bb8ee3ff294036ecd,a7026b4c21fe225ecd775ae81249405c6f492882eb85f3f8e2232f11e515561e
826b86c5e8cb4173ff2d05c48e3537140c5e0f26f7866bbcd4e57616806e1be2,true,ae44dabd077d227c8d898930a7705a2b785c8849121282106c045bb58b66eb36,24b2c1b1e2a9ebe387df6dfb9fbde6c681e4eeb0a33bb1c3df3789087f56ffe3,b37a64ea97431b25cb271c4c8435f6dd97118b35da57168f3c3c269920f7bbc1
18b5a7b973d4b263072e69515c5b6ed22191c3d6e851aaba872904672f8344ec,true,ce402af2fb93b6ef18cd406f7c437d3cbfb09141b7a02116b1cfbabbf75ad84a,c86bdb1709ef0f4a31a818843660f83338b9db77e262bb7c6546138e51c6046b,11fcd8e59c4e7f6050d3cd332337db794ae31260c159e409af3ed8f4d6523bf4
26d10c56872b72bb76ae7c7b3f074afb3d4a364e5e3f8c661be9b4f5a522ea75,true,1c9782a8485c4ecb13904ec551a7f9300ecd687abfbe63c91c7fd583f84a7a4d,ae3f4ccd0dfee8b514f67db2e923714d324935b9ae9e488d088ebb79569d8cc4,8139a3ab728b0e765e4d90549ab8eed7e1048a83267eafa7442208a7f627558a
558838dfcfe94105c46a4ade4548e6c96271d33e6c752661356cc66024615bae,true,d5a38625be74177318072cf877f2427ce2327e9b58d2eb134d0ac52c9126572f,dead938f77007e3164b6eee4cd153433d03ca5d9ec64f41aa6b2d6a069edeeda,4a081a356361da429c564cf7ac8e217121bbe8c5ee5c9632bae0b7ddbe94f9d4
f4a3f6a93a4827a59682fd8bf1a8e4fd9aaff01a337a86e1966c8fff0e746014,true,39a0aea2a8ac7f0524d63e395a25b98fc3844ed039f20b11058019dca2b3840f,6ff53243426ded506d22501ae0f989d9946b86a8bb2550d7ed6e90fdf41d0e7c,8784e728bf12f465ed20dc6f0e1d949a68e5795d4799536427a6f859547b7fd6
1717020e1c4fca1b4926dba16671c0c04e4f19c621c646cb4525fa533b1c205c,false,b9a909767f3044608b4e314b149a729bef199f8311310e1ecd2072e5659b7194,7baf0ff4b980919cf545312f45234976f0b6c574aac5b772024f73248aad7538,99a18e1e4b039ef3777a8fdd0d9ffaccaf3b4523b6d26adacfe91cc5fcd9977e
de769062be27b2a4248dd5be315960c8d231738417ece670c2d6a1c52877b59e,true,cc6c2086718b21813513894546e85766d34c754e81fd6a19c12fc322ffb9b1c3,5a7da7500191c65a5f1fbb2a6122717edc70ca0469baf2bbbd6ca8255b93c077,8c0d32091dc687f1399c754a617d224742726bece848b50c35b4db5f0469ace7
7c5549f36767e02ebf49a4616467199459aa6932dcc091f182f822185659559a,true,d8335e606128b0c621ff6cda99dc62babf4a4436c574c5c478c20122712727d0,0a7c673cccd6f7fd4ed1673f7d0f2cb08961faced123ca901b74581d5bdc8b25,16ac1eb2a39384716c7d490272d87e76c10665fdb331e1883435de175ce4460e
ecf8261ebda248dc7796f98987efe1b7be363a59037c9e61044490d08a077610,true,53def80fcdba01367c0ea36459b57409f59a771f57a8259b54f24785e5656b7d,90140870b3b1e84c9dcf7836eac0581b16fe0a40307619d267c6f871e1efce6a,c6d1836b66c1a722a377c7eb058995a0ef8711839c6d6a0cdd6ad1ff70f935a5
21c0ef76ce0eae9391ceabfb08a861899db55ac4ccf010ed672599669c6938f2,false,8af5482cc015093f261d5b7ce87035dda41d8318b9960b52cca3e5f0d3f61808,f4d5338bcb57262e1034f01ed3858ca1e5d66a73f18588e72f3dc8c6a730be0c,7ba82c2820c95e3354d9a6ab4920ebcd7938ce19e25930fee58439246b0321b1
05f3b66d6b0fe906137e60b4719083a2465106badedcdae3a4c91c46c5367340,false,e5c9e074e95c2896fa4093830e96e9cf159b8dcba2ead21f37237cf6e9a9aaa2,b3a0a50309b4ca23cd34363fd8df30e73ec4a275973986c2e11a53752eff0a3b,358a62056ff05f27185b9952d291c6346171937f6811cafbacddd82e17010f39
fef0251cff7c5d1ba0514f1820a8265453365fd9f5bb8a92f955dc007a40e730,true,e35a0aff6e9060a39c15d276a1337f1948d0be0aef81fcd563a6783115b5283d,20a8efe83474253d70e5fd847df0cd26222cd39e9210687b68c0a23b73429108,2989fab4278b32f4f40dc02227ab30e10f62e15ab7aa7382da769b1d084e33df
1b7bb172baa2753ec9c3e81a7a9b4c6ef10f9ed7afcafa975395f095eca63a54,false,a98257203987d0c4d260d8feef841466977276612e268b69b5ce4191af161b29,ea177a20d6c1f73f9667090568f9197943037d6586f7e2d6b7b81756fc71df5f,844eff318ef4c6ee45f158c1946ff999e40ffac70883ab6d6b90995f246e69a2
5ee9b60a25753066d0ecc1155ca6afcc6b853ba558c9533c134a93b82e756856,true,9889460b95ca9545864a4a5194891b7d475362428d6d797532da10bf1fc92076,a7a96739abd8eceb6751afc98df68e29f7af16fbfda3d4710df9c35b6dcdb4d5,998326285c90a2ea2e1f6c6dac79530742645e3dd1b2b42a0733388a99cab81b
a102613781872f88a949d82cb5efcc2e0f437010a950d71b87929ecb480af3b3,false,e099080a55b9b29ccecbbb0d91dbe49defcc217efd1de0588e0836ce5970d327,319293b8660a3cea9879487645ddadda72a5c60079c9154bb0dbb8a0c9cda79e,4d567f1b1a1b304347cf7b129e4c7a05aa57e2bbb8ea335db9e33d05fab12e4d
1d4538180d06f37c43e8caa2d0d80aa7c5d701c8c3e31508704131427837f5cc,true,73afeeb46efc03d2b9f20fc271752528e52b8931287296a7e4367c96bccb32bd,59dc4b69d9ccf6f77715e47fb9bf454f1b90bbd05f1d2bbd07c7d6666f31c91f,ac59d735dfcdc3a0a4ce5a10f09dea8c6afd47de9c0308dc817e3789c8aee963
|
||||
e4c480af1b0e3487a331761f64eb3f020a2b8ffa25ad17e00f57aa7ec2c5e84d,true,1145e9f001c70d364e97fcdbc88a2a3d6aecdd975212923820f90a0b215f11f6,b802ac7ef21c8abaeae024c76e3fa70a2a82f73e0bb7c7fe76752ad1742af2e6,0a95876e30617e32ae25acd3af97c37dc075825f800def3f2bf3f68a268744e9
|
||||
3a7a83dd657dd6277bcfa957534f40d9b559039aad752066a8d7ed9a6d9c0ab5,false,f90a251ad2338b19cfee6a7965f6f5098136974abb99b3d24553fa6117384978,e422ed7567e5602731b3d980106d0546ef4a4da5eb7175d66a452df12d37bad2,b086bed71dfb6662cb10e2b4fb16a7c22394f488e822fc19697db6077f6caf6f
|
||||
273e8560c2b1734e863a6542bded7a6fcbfb49a12770bd8866d4863dceea3ae9,false,3b7849a362e7b7ba8c8b8a0cd00df5180604987dbda6c03f37d9a09fdb27fb28,e6cdf4d767df0f411e970da8dda6acd3c2c34ce63908d8a6dbf3715daa0318e4,359a4a39fbdffc808161a48a3ffbe77fc6a03ff52324c22510a42e46c08a6f22
|
||||
9b4f8702991be9569b6c0b07a2173104d41325017b27d68fa5af91cdab164c4d,true,598323677db11ece050289f31881ee8caacb59376c7182f9055708b2a4673f84,7675adc1264b6758beb097a991f766f62796f78c1cfa58a4de3d81c36434d3ae,d5d8d610ffd85b04cbe1c73ff5becd5917c513d9625b001f51d486d0dadcefe3
|
||||
e1a686ba0169eb97379ebf9d22e073819450ee5ad5f049c8e93016e8d2ec1430,false,ffe461e6075865cde2704aa148fd29bcf0af245803f446cb6153244f25617993,46df6c25fa0344e662490c4da0bddca626644e67e66705840ef08aae35c343fa,e9a56d75acad4272ab0c49ee5919a4e86e6c5695ef065704c1e592d4e7b41a10
|
@ -151,7 +151,7 @@ func (l *DefaultListener) OnStop() {
|
||||
l.listener.Close() // nolint: errcheck
|
||||
}
|
||||
|
||||
// Accept connections and pass on the channel
|
||||
// Accept connections and pass on the channel.
|
||||
func (l *DefaultListener) listenRoutine() {
|
||||
for {
|
||||
conn, err := l.listener.Accept()
|
||||
@ -178,6 +178,8 @@ func (l *DefaultListener) listenRoutine() {
|
||||
|
||||
// Connections returns a channel of inbound connections.
|
||||
// It gets closed when the listener closes.
|
||||
// It is the callers responsibility to close any connections received
|
||||
// over this channel.
|
||||
func (l *DefaultListener) Connections() <-chan net.Conn {
|
||||
return l.connections
|
||||
}
|
||||
|
@ -638,6 +638,7 @@ func (a *addrBook) addAddress(addr, src *p2p.NetAddress) error {
|
||||
if a.routabilityStrict && !addr.Routable() {
|
||||
return ErrAddrBookNonRoutable{addr}
|
||||
}
|
||||
|
||||
// TODO: we should track ourAddrs by ID and by IP:PORT and refuse both.
|
||||
if _, ok := a.ourAddrs[addr.String()]; ok {
|
||||
return ErrAddrBookSelf{addr}
|
||||
@ -647,6 +648,10 @@ func (a *addrBook) addAddress(addr, src *p2p.NetAddress) error {
|
||||
return ErrAddrBookPrivate{addr}
|
||||
}
|
||||
|
||||
if _, ok := a.privateIDs[src.ID]; ok {
|
||||
return ErrAddrBookPrivateSrc{src}
|
||||
}
|
||||
|
||||
ka := a.addrLookup[addr.ID]
|
||||
if ka != nil {
|
||||
// If its already old and the addr is the same, ignore it.
|
||||
|
@ -8,7 +8,6 @@ import (
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
cmn "github.com/tendermint/tendermint/libs/common"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
@ -374,10 +373,19 @@ func TestPrivatePeers(t *testing.T) {
|
||||
}
|
||||
book.AddPrivateIDs(private)
|
||||
|
||||
// private addrs must not be added
|
||||
for _, addr := range addrs {
|
||||
err := book.AddAddress(addr, addr)
|
||||
require.Error(t, err, "AddAddress should have failed with private peer %s", addr)
|
||||
if assert.Error(t, err) {
|
||||
_, ok := err.(ErrAddrBookPrivate)
|
||||
require.True(t, ok, "Wrong error type, wanted ErrAddrBookPrivate, got error: %s", err)
|
||||
assert.True(t, ok)
|
||||
}
|
||||
}
|
||||
|
||||
// addrs coming from private peers must not be added
|
||||
err := book.AddAddress(randIPv4Address(t), addrs[0])
|
||||
if assert.Error(t, err) {
|
||||
_, ok := err.(ErrAddrBookPrivateSrc)
|
||||
assert.True(t, ok)
|
||||
}
|
||||
}
|
||||
|
@ -30,6 +30,14 @@ func (err ErrAddrBookPrivate) Error() string {
|
||||
return fmt.Sprintf("Cannot add private peer with address %v", err.Addr)
|
||||
}
|
||||
|
||||
type ErrAddrBookPrivateSrc struct {
|
||||
Src *p2p.NetAddress
|
||||
}
|
||||
|
||||
func (err ErrAddrBookPrivateSrc) Error() string {
|
||||
return fmt.Sprintf("Cannot add peer coming from private peer with address %v", err.Src)
|
||||
}
|
||||
|
||||
type ErrAddrBookNilAddr struct {
|
||||
Addr *p2p.NetAddress
|
||||
Src *p2p.NetAddress
|
||||
|
@ -74,6 +74,8 @@ type PEXReactor struct {
|
||||
requestsSent *cmn.CMap // ID->struct{}: unanswered send requests
|
||||
lastReceivedRequests *cmn.CMap // ID->time.Time: last time peer requested from us
|
||||
|
||||
seedAddrs []*p2p.NetAddress
|
||||
|
||||
attemptsToDial sync.Map // address (string) -> {number of attempts (int), last time dialed (time.Time)}
|
||||
}
|
||||
|
||||
@ -113,9 +115,6 @@ func NewPEXReactor(b AddrBook, config *PEXReactorConfig) *PEXReactor {
|
||||
|
||||
// OnStart implements BaseService
|
||||
func (r *PEXReactor) OnStart() error {
|
||||
if err := r.BaseReactor.OnStart(); err != nil {
|
||||
return err
|
||||
}
|
||||
err := r.book.Start()
|
||||
if err != nil && err != cmn.ErrAlreadyStarted {
|
||||
return err
|
||||
@ -123,10 +122,13 @@ func (r *PEXReactor) OnStart() error {
|
||||
|
||||
// return err if user provided a bad seed address
|
||||
// or a host name that we cant resolve
|
||||
if err := r.checkSeeds(); err != nil {
|
||||
seedAddrs, err := r.checkSeeds()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
r.seedAddrs = seedAddrs
|
||||
|
||||
// Check if this node should run
|
||||
// in seed/crawler mode
|
||||
if r.config.SeedMode {
|
||||
@ -139,7 +141,6 @@ func (r *PEXReactor) OnStart() error {
|
||||
|
||||
// OnStop implements BaseService
|
||||
func (r *PEXReactor) OnStop() {
|
||||
r.BaseReactor.OnStop()
|
||||
r.book.Stop()
|
||||
}
|
||||
|
||||
@ -285,7 +286,6 @@ func (r *PEXReactor) RequestAddrs(p Peer) {
|
||||
// request for this peer and deletes the open request.
|
||||
// If there's no open request for the src peer, it returns an error.
|
||||
func (r *PEXReactor) ReceiveAddrs(addrs []*p2p.NetAddress, src Peer) error {
|
||||
|
||||
id := string(src.ID())
|
||||
if !r.requestsSent.Has(id) {
|
||||
return cmn.NewError("Received unsolicited pexAddrsMessage")
|
||||
@ -301,6 +301,13 @@ func (r *PEXReactor) ReceiveAddrs(addrs []*p2p.NetAddress, src Peer) error {
|
||||
|
||||
err := r.book.AddAddress(netAddr, srcAddr)
|
||||
r.logErrAddrBook(err)
|
||||
|
||||
// If this address came from a seed node, try to connect to it without waiting.
|
||||
for _, seedAddr := range r.seedAddrs {
|
||||
if seedAddr.Equals(srcAddr) {
|
||||
r.ensurePeers()
|
||||
}
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
@ -472,18 +479,18 @@ func (r *PEXReactor) dialPeer(addr *p2p.NetAddress) {
|
||||
}
|
||||
|
||||
// check seed addresses are well formed
|
||||
func (r *PEXReactor) checkSeeds() error {
|
||||
func (r *PEXReactor) checkSeeds() ([]*p2p.NetAddress, error) {
|
||||
lSeeds := len(r.config.Seeds)
|
||||
if lSeeds == 0 {
|
||||
return nil
|
||||
return nil, nil
|
||||
}
|
||||
_, errs := p2p.NewNetAddressStrings(r.config.Seeds)
|
||||
netAddrs, errs := p2p.NewNetAddressStrings(r.config.Seeds)
|
||||
for _, err := range errs {
|
||||
if err != nil {
|
||||
return err
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
return netAddrs, nil
|
||||
}
|
||||
|
||||
// randomly dial seeds until we connect to one or exhaust them
|
||||
@ -492,13 +499,12 @@ func (r *PEXReactor) dialSeeds() {
|
||||
if lSeeds == 0 {
|
||||
return
|
||||
}
|
||||
seedAddrs, _ := p2p.NewNetAddressStrings(r.config.Seeds)
|
||||
|
||||
perm := cmn.RandPerm(lSeeds)
|
||||
// perm := r.Switch.rng.Perm(lSeeds)
|
||||
for _, i := range perm {
|
||||
// dial a random seed
|
||||
seedAddr := seedAddrs[i]
|
||||
seedAddr := r.seedAddrs[i]
|
||||
err := r.Switch.DialPeerWithAddress(seedAddr, false)
|
||||
if err == nil {
|
||||
return
|
||||
|
@@ -211,31 +211,26 @@ func TestPEXReactorUsesSeedsIfNeeded(t *testing.T) {
defer os.RemoveAll(dir) // nolint: errcheck

// 1. create seed
seed := p2p.MakeSwitch(
cfg,
0,
"127.0.0.1",
"123.123.123",
func(i int, sw *p2p.Switch) *p2p.Switch {
book := NewAddrBook(filepath.Join(dir, "addrbook0.json"), false)
book.SetLogger(log.TestingLogger())
sw.SetAddrBook(book)

sw.SetLogger(log.TestingLogger())

r := NewPEXReactor(book, &PEXReactorConfig{})
r.SetLogger(log.TestingLogger())
sw.AddReactor("pex", r)
return sw
},
)
seed.AddListener(
p2p.NewDefaultListener("tcp://"+seed.NodeInfo().ListenAddr, "", false, log.TestingLogger()),
)
seed := testCreateSeed(dir, 0, []*p2p.NetAddress{}, []*p2p.NetAddress{})
require.Nil(t, seed.Start())
defer seed.Stop()

// 2. create usual peer with only seed configured.
peer := testCreatePeerWithSeed(dir, 1, seed)
require.Nil(t, peer.Start())
defer peer.Stop()

// 3. check that the peer connects to seed immediately
assertPeersWithTimeout(t, []*p2p.Switch{peer}, 10*time.Millisecond, 3*time.Second, 1)
}

func TestConnectionSpeedForPeerReceivedFromSeed(t *testing.T) {
// directory to store address books
dir, err := ioutil.TempDir("", "pex_reactor")
require.Nil(t, err)
defer os.RemoveAll(dir) // nolint: errcheck

// 1. create peer
peer := p2p.MakeSwitch(
cfg,
1,
@@ -250,20 +245,34 @@ func TestPEXReactorUsesSeedsIfNeeded(t *testing.T) {

r := NewPEXReactor(
book,
&PEXReactorConfig{
Seeds: []string{seed.NodeInfo().NetAddress().String()},
},
&PEXReactorConfig{},
)
r.SetLogger(log.TestingLogger())
sw.AddReactor("pex", r)
return sw
},
)
peer.AddListener(
p2p.NewDefaultListener("tcp://"+peer.NodeInfo().ListenAddr, "", false, log.TestingLogger()),
)
require.Nil(t, peer.Start())
defer peer.Stop()

// 3. check that the peer connects to seed immediately
assertPeersWithTimeout(t, []*p2p.Switch{peer}, 10*time.Millisecond, 3*time.Second, 1)
// 2. Create seed which knows about the peer
seed := testCreateSeed(dir, 2, []*p2p.NetAddress{peer.NodeInfo().NetAddress()}, []*p2p.NetAddress{peer.NodeInfo().NetAddress()})
require.Nil(t, seed.Start())
defer seed.Stop()

// 3. create another peer with only seed configured.
secondPeer := testCreatePeerWithSeed(dir, 3, seed)
require.Nil(t, secondPeer.Start())
defer secondPeer.Stop()

// 4. check that the second peer connects to seed immediately
assertPeersWithTimeout(t, []*p2p.Switch{secondPeer}, 10*time.Millisecond, 3*time.Second, 1)

// 5. check that the second peer connects to the first peer immediately
assertPeersWithTimeout(t, []*p2p.Switch{secondPeer}, 10*time.Millisecond, 1*time.Second, 2)
}

func TestPEXReactorCrawlStatus(t *testing.T) {
@@ -401,6 +410,7 @@ func assertPeersWithTimeout(
outbound, inbound, _ := s.NumPeers()
if outbound+inbound < nPeers {
allGood = false
break
}
}
remaining -= checkPeriod
@@ -417,14 +427,77 @@ func assertPeersWithTimeout(
numPeersStr += fmt.Sprintf("%d => {outbound: %d, inbound: %d}, ", i, outbound, inbound)
}
t.Errorf(
"expected all switches to be connected to at least one peer (switches: %s)",
numPeersStr,
"expected all switches to be connected to at least %d peer(s) (switches: %s)",
nPeers, numPeersStr,
)
return
}
}
}

// Creates a seed which knows about the provided addresses / source address pairs.
// Starting and stopping the seed is left to the caller
func testCreateSeed(dir string, id int, knownAddrs, srcAddrs []*p2p.NetAddress) *p2p.Switch {
seed := p2p.MakeSwitch(
cfg,
id,
"127.0.0.1",
"123.123.123",
func(i int, sw *p2p.Switch) *p2p.Switch {
book := NewAddrBook(filepath.Join(dir, "addrbookSeed.json"), false)
book.SetLogger(log.TestingLogger())
for j := 0; j < len(knownAddrs); j++ {
book.AddAddress(knownAddrs[j], srcAddrs[j])
book.MarkGood(knownAddrs[j])
}
sw.SetAddrBook(book)

sw.SetLogger(log.TestingLogger())

r := NewPEXReactor(book, &PEXReactorConfig{})
r.SetLogger(log.TestingLogger())
sw.AddReactor("pex", r)
return sw
},
)
seed.AddListener(
p2p.NewDefaultListener("tcp://"+seed.NodeInfo().ListenAddr, "", false, log.TestingLogger()),
)
return seed
}

// Creates a peer which knows about the provided seed.
// Starting and stopping the peer is left to the caller
func testCreatePeerWithSeed(dir string, id int, seed *p2p.Switch) *p2p.Switch {
peer := p2p.MakeSwitch(
cfg,
id,
"127.0.0.1",
"123.123.123",
func(i int, sw *p2p.Switch) *p2p.Switch {
book := NewAddrBook(filepath.Join(dir, fmt.Sprintf("addrbook%d.json", id)), false)
book.SetLogger(log.TestingLogger())
sw.SetAddrBook(book)

sw.SetLogger(log.TestingLogger())

r := NewPEXReactor(
book,
&PEXReactorConfig{
Seeds: []string{seed.NodeInfo().NetAddress().String()},
},
)
r.SetLogger(log.TestingLogger())
sw.AddReactor("pex", r)
return sw
},
)
peer.AddListener(
p2p.NewDefaultListener("tcp://"+peer.NodeInfo().ListenAddr, "", false, log.TestingLogger()),
)
return peer
}

func createReactor(conf *PEXReactorConfig) (r *PEXReactor, book *addrBook) {
// directory to store address book
dir, err := ioutil.TempDir("", "pex_reactor")

@@ -496,6 +496,7 @@ func (sw *Switch) listenerRoutine(l Listener) {
maxPeers := sw.config.MaxNumPeers - DefaultMinNumOutboundPeers
if maxPeers <= sw.peers.Size() {
sw.Logger.Info("Ignoring inbound connection: already have enough peers", "address", inConn.RemoteAddr().String(), "numPeers", sw.peers.Size(), "max", maxPeers)
inConn.Close()
continue
}

@@ -510,6 +511,7 @@ func (sw *Switch) listenerRoutine(l Listener) {
// cleanup
}

// closes conn if err is returned
func (sw *Switch) addInboundPeerWithConfig(
conn net.Conn,
config *config.P2PConfig,

@@ -3,6 +3,7 @@ package privval
import (
"encoding/base64"
"fmt"
"io/ioutil"
"os"
"testing"
"time"
@@ -11,22 +12,22 @@ import (
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/ed25519"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/types"
)

func TestGenLoadValidator(t *testing.T) {
assert := assert.New(t)

_, tempFilePath := cmn.Tempfile("priv_validator_")
privVal := GenFilePV(tempFilePath)
tempFile, err := ioutil.TempFile("", "priv_validator_")
require.Nil(t, err)
privVal := GenFilePV(tempFile.Name())

height := int64(100)
privVal.LastHeight = height
privVal.Save()
addr := privVal.GetAddress()

privVal = LoadFilePV(tempFilePath)
privVal = LoadFilePV(tempFile.Name())
assert.Equal(addr, privVal.GetAddress(), "expected privval addr to be the same")
assert.Equal(height, privVal.LastHeight, "expected privval.LastHeight to have been saved")
}
@@ -34,7 +35,9 @@ func TestGenLoadValidator(t *testing.T) {
func TestLoadOrGenValidator(t *testing.T) {
assert := assert.New(t)

_, tempFilePath := cmn.Tempfile("priv_validator_")
tempFile, err := ioutil.TempFile("", "priv_validator_")
require.Nil(t, err)
tempFilePath := tempFile.Name()
if err := os.Remove(tempFilePath); err != nil {
t.Error(err)
}
@@ -91,8 +94,9 @@ func TestUnmarshalValidator(t *testing.T) {
func TestSignVote(t *testing.T) {
assert := assert.New(t)

_, tempFilePath := cmn.Tempfile("priv_validator_")
privVal := GenFilePV(tempFilePath)
tempFile, err := ioutil.TempFile("", "priv_validator_")
require.Nil(t, err)
privVal := GenFilePV(tempFile.Name())

block1 := types.BlockID{[]byte{1, 2, 3}, types.PartSetHeader{}}
block2 := types.BlockID{[]byte{3, 2, 1}, types.PartSetHeader{}}
@@ -101,7 +105,7 @@ func TestSignVote(t *testing.T) {

// sign a vote for first time
vote := newVote(privVal.Address, 0, height, round, voteType, block1)
err := privVal.SignVote("mychainid", vote)
err = privVal.SignVote("mychainid", vote)
assert.NoError(err, "expected no error signing vote")

// try to sign the same vote again; should be fine
@@ -132,8 +136,9 @@ func TestSignVote(t *testing.T) {
func TestSignProposal(t *testing.T) {
assert := assert.New(t)

_, tempFilePath := cmn.Tempfile("priv_validator_")
privVal := GenFilePV(tempFilePath)
tempFile, err := ioutil.TempFile("", "priv_validator_")
require.Nil(t, err)
privVal := GenFilePV(tempFile.Name())

block1 := types.PartSetHeader{5, []byte{1, 2, 3}}
block2 := types.PartSetHeader{10, []byte{3, 2, 1}}
@@ -141,7 +146,7 @@ func TestSignProposal(t *testing.T) {

// sign a proposal for first time
proposal := newProposal(height, round, block1)
err := privVal.SignProposal("mychainid", proposal)
err = privVal.SignProposal("mychainid", proposal)
assert.NoError(err, "expected no error signing proposal")

// try to sign the same proposal again; should be fine
@@ -170,8 +175,9 @@ func TestSignProposal(t *testing.T) {
}

func TestDifferByTimestamp(t *testing.T) {
_, tempFilePath := cmn.Tempfile("priv_validator_")
privVal := GenFilePV(tempFilePath)
tempFile, err := ioutil.TempFile("", "priv_validator_")
require.Nil(t, err)
privVal := GenFilePV(tempFile.Name())

block1 := types.PartSetHeader{5, []byte{1, 2, 3}}
height, round := int64(10), 1

@@ -7,12 +7,12 @@ import (
"net"
"time"

"github.com/tendermint/go-amino"
amino "github.com/tendermint/go-amino"

"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/ed25519"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/libs/log"

p2pconn "github.com/tendermint/tendermint/p2p/conn"
"github.com/tendermint/tendermint/types"
)
@@ -33,7 +33,7 @@ var (
)

var (
acceptDeadline = time.Second + defaultAcceptDeadlineSeconds
acceptDeadline = time.Second * defaultAcceptDeadlineSeconds
connDeadline = time.Second * defaultConnDeadlineSeconds
connHeartbeat = time.Second * defaultConnHeartBeatSeconds
)
@@ -142,7 +142,7 @@ func TestInfo(t *testing.T) {
proxy := NewAppConnTest(cli)
t.Log("Connected")

resInfo, err := proxy.InfoSync(types.RequestInfo{""})
resInfo, err := proxy.InfoSync(types.RequestInfo{Version: ""})
if err != nil {
t.Errorf("Unexpected error: %v", err)
}

@@ -23,7 +23,7 @@ var (
)

func (a ABCIApp) ABCIInfo() (*ctypes.ResultABCIInfo, error) {
return &ctypes.ResultABCIInfo{a.App.Info(abci.RequestInfo{version.Version})}, nil
return &ctypes.ResultABCIInfo{a.App.Info(abci.RequestInfo{Version: version.Version})}, nil
}

func (a ABCIApp) ABCIQuery(path string, data cmn.HexBytes) (*ctypes.ResultABCIQuery, error) {
@@ -31,7 +31,7 @@ func (a ABCIApp) ABCIQuery(path string, data cmn.HexBytes) (*ctypes.ResultABCIQu
}

func (a ABCIApp) ABCIQueryWithOptions(path string, data cmn.HexBytes, opts client.ABCIQueryOptions) (*ctypes.ResultABCIQuery, error) {
q := a.App.Query(abci.RequestQuery{data, path, opts.Height, opts.Trusted})
q := a.App.Query(abci.RequestQuery{Data: data, Path: path, Height: opts.Height, Prove: opts.Trusted})
return &ctypes.ResultABCIQuery{q}, nil
}

@@ -1,18 +1,15 @@
# Tendermint RPC

## Generate markdown for [Slate](https://github.com/tendermint/slate)

We are using [Slate](https://github.com/tendermint/slate) to power our RPC
We are using [Slate](https://github.com/lord/slate) to power our RPC
documentation. For generating markdown use:

```shell
go get github.com/davecheney/godoc2md

godoc2md -template rpc/core/doc_template.txt github.com/tendermint/tendermint/rpc/core | grep -v -e "pipe.go" -e "routes.go" -e "dev.go" | sed 's$/src/target$https://github.com/tendermint/tendermint/tree/master/rpc/core$'
# from root of this repo
make rpc-docs
```

For more information see the [CI script for building the Slate docs](/scripts/slate.sh)

## Pagination

Requests that return multiple items will be paginated to 30 items by default.

@@ -1,10 +1,12 @@
package core

import (
"fmt"

abci "github.com/tendermint/tendermint/abci/types"
cmn "github.com/tendermint/tendermint/libs/common"
ctypes "github.com/tendermint/tendermint/rpc/core/types"
"github.com/tendermint/tendermint/version"
cmn "github.com/tendermint/tendermint/libs/common"
)

// Query the application for some information.
@@ -48,6 +50,10 @@ import (
// | height | int64 | 0 | false | Height (0 means latest) |
// | trusted | bool | false | false | Does not include a proof of the data inclusion |
func ABCIQuery(path string, data cmn.HexBytes, height int64, trusted bool) (*ctypes.ResultABCIQuery, error) {
if height < 0 {
return nil, fmt.Errorf("height must be non-negative")
}

resQuery, err := proxyAppQuery.QuerySync(abci.RequestQuery{
Path: path,
Data: data,
@@ -87,7 +93,7 @@ func ABCIQuery(path string, data cmn.HexBytes, height int64, trusted bool) (*cty
// }
// ```
func ABCIInfo() (*ctypes.ResultABCIInfo, error) {
resInfo, err := proxyAppQuery.InfoSync(abci.RequestInfo{version.Version})
resInfo, err := proxyAppQuery.InfoSync(abci.RequestInfo{Version: version.Version})
if err != nil {
return nil, err
}

@@ -64,25 +64,15 @@ import (
//
// <aside class="notice">Returns at most 20 items.</aside>
func BlockchainInfo(minHeight, maxHeight int64) (*ctypes.ResultBlockchainInfo, error) {
if minHeight == 0 {
minHeight = 1
}

if maxHeight == 0 {
maxHeight = blockStore.Height()
} else {
maxHeight = cmn.MinInt64(blockStore.Height(), maxHeight)
}

// maximum 20 block metas
const limit int64 = 20
minHeight = cmn.MaxInt64(minHeight, maxHeight-limit)

logger.Debug("BlockchainInfoHandler", "maxHeight", maxHeight, "minHeight", minHeight)

if minHeight > maxHeight {
return nil, fmt.Errorf("min height %d can't be greater than max height %d", minHeight, maxHeight)
var err error
minHeight, maxHeight, err = filterMinMax(blockStore.Height(), minHeight, maxHeight, limit)
if err != nil {
return nil, err
}
logger.Debug("BlockchainInfoHandler", "maxHeight", maxHeight, "minHeight", minHeight)

blockMetas := []*types.BlockMeta{}
for height := maxHeight; height >= minHeight; height-- {
@@ -93,6 +83,37 @@ func BlockchainInfo(minHeight, maxHeight int64) (*ctypes.ResultBlockchainInfo, e
return &ctypes.ResultBlockchainInfo{blockStore.Height(), blockMetas}, nil
}

// error if either min or max are negative or min > max
// if 0, use 1 for min, latest block height for max
// enforce limit.
// error if min > max
func filterMinMax(height, min, max, limit int64) (int64, int64, error) {
// filter negatives
if min < 0 || max < 0 {
return min, max, fmt.Errorf("heights must be non-negative")
}

// adjust for default values
if min == 0 {
min = 1
}
if max == 0 {
max = height
}

// limit max to the height
max = cmn.MinInt64(height, max)

// limit min to within `limit` of max
// so the total number of blocks returned will be `limit`
min = cmn.MaxInt64(min, max-limit+1)

if min > max {
return min, max, fmt.Errorf("min height %d can't be greater than max height %d", min, max)
}
return min, max, nil
}

// Get block at a given height.
// If no height is provided, it will fetch the latest block.
//
58
rpc/core/blocks_test.go
Normal file
58
rpc/core/blocks_test.go
Normal file
@ -0,0 +1,58 @@
|
||||
package core
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func TestBlockchainInfo(t *testing.T) {
|
||||
|
||||
cases := []struct {
|
||||
min, max int64
|
||||
height int64
|
||||
limit int64
|
||||
resultLength int64
|
||||
wantErr bool
|
||||
}{
|
||||
|
||||
// min > max
|
||||
{0, 0, 0, 10, 0, true}, // min set to 1
|
||||
{0, 1, 0, 10, 0, true}, // max set to height (0)
|
||||
{0, 0, 1, 10, 1, false}, // max set to height (1)
|
||||
{2, 0, 1, 10, 0, true}, // max set to height (1)
|
||||
{2, 1, 5, 10, 0, true},
|
||||
|
||||
// negative
|
||||
{1, 10, 14, 10, 10, false}, // control
|
||||
{-1, 10, 14, 10, 0, true},
|
||||
{1, -10, 14, 10, 0, true},
|
||||
{-9223372036854775808, -9223372036854775788, 100, 20, 0, true},
|
||||
|
||||
// check limit and height
|
||||
{1, 1, 1, 10, 1, false},
|
||||
{1, 1, 5, 10, 1, false},
|
||||
{2, 2, 5, 10, 1, false},
|
||||
{1, 2, 5, 10, 2, false},
|
||||
{1, 5, 1, 10, 1, false},
|
||||
{1, 5, 10, 10, 5, false},
|
||||
{1, 15, 10, 10, 10, false},
|
||||
{1, 15, 15, 10, 10, false},
|
||||
{1, 15, 15, 20, 15, false},
|
||||
{1, 20, 15, 20, 15, false},
|
||||
{1, 20, 20, 20, 20, false},
|
||||
}
|
||||
|
||||
for i, c := range cases {
|
||||
caseString := fmt.Sprintf("test %d failed", i)
|
||||
min, max, err := filterMinMax(c.height, c.min, c.max, c.limit)
|
||||
if c.wantErr {
|
||||
require.Error(t, err, caseString)
|
||||
} else {
|
||||
require.NoError(t, err, caseString)
|
||||
require.Equal(t, 1+max-min, c.resultLength, caseString)
|
||||
}
|
||||
}
|
||||
|
||||
}
|
13
rpc/core/slate_header.txt
Normal file
13
rpc/core/slate_header.txt
Normal file
@ -0,0 +1,13 @@
|
||||
---
|
||||
title: RPC Reference
|
||||
|
||||
language_tabs: # must be one of https://git.io/vQNgJ
|
||||
- shell
|
||||
- go
|
||||
|
||||
toc_footers:
|
||||
- <a href='https://tendermint.com/'>Tendermint</a>
|
||||
- <a href='https://github.com/lord/slate'>Documentation Powered by Slate</a>
|
||||
|
||||
search: true
|
||||
---
|
@ -109,21 +109,11 @@ func Status() (*ctypes.ResultStatus, error) {
|
||||
return result, nil
|
||||
}
|
||||
|
||||
const consensusTimeout = time.Second
|
||||
|
||||
func validatorAtHeight(h int64) *types.Validator {
|
||||
lastBlockHeight, vals := getValidatorsWithTimeout(
|
||||
consensusState,
|
||||
consensusTimeout,
|
||||
)
|
||||
|
||||
if lastBlockHeight == -1 {
|
||||
return nil
|
||||
}
|
||||
|
||||
privValAddress := pubKey.Address()
|
||||
|
||||
// If we're still at height h, search in the current validator set.
|
||||
lastBlockHeight, vals := consensusState.GetValidators()
|
||||
if lastBlockHeight == h {
|
||||
for _, val := range vals {
|
||||
if bytes.Equal(val.Address, privValAddress) {
|
||||
@ -144,32 +134,3 @@ func validatorAtHeight(h int64) *types.Validator {
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
type validatorRetriever interface {
|
||||
GetValidators() (int64, []*types.Validator)
|
||||
}
|
||||
|
||||
// NOTE: Consensus might halt, but we still need to process RPC requests (at
|
||||
// least for endpoints whole output does not depend on consensus state).
|
||||
func getValidatorsWithTimeout(
|
||||
vr validatorRetriever,
|
||||
t time.Duration,
|
||||
) (int64, []*types.Validator) {
|
||||
resultCh := make(chan struct {
|
||||
lastBlockHeight int64
|
||||
vals []*types.Validator
|
||||
})
|
||||
go func() {
|
||||
h, v := vr.GetValidators()
|
||||
resultCh <- struct {
|
||||
lastBlockHeight int64
|
||||
vals []*types.Validator
|
||||
}{h, v}
|
||||
}()
|
||||
select {
|
||||
case res := <-resultCh:
|
||||
return res.lastBlockHeight, res.vals
|
||||
case <-time.After(t):
|
||||
return -1, []*types.Validator{}
|
||||
}
|
||||
}
|
||||
|
@@ -1,39 +0,0 @@
-package core
-
-import (
-	"testing"
-	"time"
-
-	"github.com/tendermint/tendermint/types"
-)
-
-func TestGetValidatorsWithTimeout(t *testing.T) {
-	height, vs := getValidatorsWithTimeout(
-		testValidatorReceiver{},
-		time.Millisecond,
-	)
-
-	if height != -1 {
-		t.Errorf("expected negative height")
-	}
-
-	if len(vs) != 0 {
-		t.Errorf("expected no validators")
-	}
-}
-
-type testValidatorReceiver struct{}
-
-func (tr testValidatorReceiver) GetValidators() (int64, []*types.Validator) {
-	vs := []*types.Validator{}
-
-	for i := 0; i < 3; i++ {
-		v, _ := types.RandValidator(true, 10)
-
-		vs = append(vs, v)
-	}
-
-	time.Sleep(time.Millisecond)
-
-	return 10, vs
-}
@@ -2,12 +2,10 @@ package core_types
 
 import (
 	"github.com/tendermint/go-amino"
-	cryptoAmino "github.com/tendermint/tendermint/crypto/encoding/amino"
 	"github.com/tendermint/tendermint/types"
 )
 
 func RegisterAmino(cdc *amino.Codec) {
 	types.RegisterEventDatas(cdc)
 	types.RegisterEvidences(cdc)
-	cryptoAmino.RegisterAmino(cdc)
 	types.RegisterBlockAmino(cdc)
 }
@@ -123,7 +123,7 @@ func NewTendermint(app abci.Application) *nm.Node {
 	node, err := nm.NewNode(config, pv, papp,
 		nm.DefaultGenesisDocProviderFunc(config),
 		nm.DefaultDBProvider,
-		nm.DefaultMetricsProvider,
+		nm.DefaultMetricsProvider(config.Instrumentation),
 		logger)
 	if err != nil {
 		panic(err)
@@ -190,7 +190,10 @@ func execBlockOnProxyApp(logger log.Logger, proxyAppConn proxy.AppConnConsensus,
 	_, err := proxyAppConn.BeginBlockSync(abci.RequestBeginBlock{
 		Hash:   block.Hash(),
 		Header: types.TM2PB.Header(&block.Header),
+		LastCommitInfo: abci.LastCommitInfo{
+			CommitRound: int32(block.LastCommit.Round()),
 			Validators:  signVals,
+		},
 		ByzantineValidators: byzVals,
 	})
 	if err != nil {
@@ -207,7 +210,7 @@ func execBlockOnProxyApp(logger log.Logger, proxyAppConn proxy.AppConnConsensus,
 	}
 
 	// End block.
-	abciResponses.EndBlock, err = proxyAppConn.EndBlockSync(abci.RequestEndBlock{block.Height})
+	abciResponses.EndBlock, err = proxyAppConn.EndBlockSync(abci.RequestEndBlock{Height: block.Height})
 	if err != nil {
 		logger.Error("Error in proxyAppConn.EndBlock", "err", err)
 		return nil, err
@@ -245,7 +248,7 @@ func getBeginBlockValidatorInfo(block *types.Block, lastValSet *types.ValidatorSet,
 		vote = block.LastCommit.Precommits[i]
 	}
 	val := abci.SigningValidator{
-		Validator:       types.TM2PB.Validator(val),
+		Validator:       types.TM2PB.ValidatorWithoutPubKey(val),
 		SignedLastBlock: vote != nil,
 	}
 	signVals[i] = val
@@ -79,7 +79,7 @@ func TestBeginBlockValidators(t *testing.T) {
 	lastCommit := &types.Commit{BlockID: prevBlockID, Precommits: tc.lastCommitPrecommits}
 
 	// block for height 2
-	block, _ := state.MakeBlock(2, makeTxs(2), lastCommit)
+	block, _ := state.MakeBlock(2, makeTxs(2), lastCommit, nil)
 	_, err = ExecCommitBlock(proxyApp.Consensus(), block, log.TestingLogger(), state.Validators, stateDB)
 	require.Nil(t, err, tc.desc)
@@ -138,7 +138,7 @@ func TestBeginBlockByzantineValidators(t *testing.T) {
 	lastCommit := &types.Commit{BlockID: prevBlockID, Precommits: votes}
 	for _, tc := range testCases {
 
-		block, _ := state.MakeBlock(10, makeTxs(2), lastCommit)
+		block, _ := state.MakeBlock(10, makeTxs(2), lastCommit, nil)
 		block.Time = now
 		block.Evidence.Evidence = tc.evidence
 		_, err = ExecCommitBlock(proxyApp.Consensus(), block, log.TestingLogger(), state.Validators, stateDB)
@@ -168,7 +168,7 @@ func TestUpdateValidators(t *testing.T) {
 			"adding a validator is OK",
 
 			types.NewValidatorSet([]*types.Validator{val1}),
-			[]abci.Validator{{[]byte{}, types.TM2PB.PubKey(pubkey2), 20}},
+			[]abci.Validator{{Address: []byte{}, PubKey: types.TM2PB.PubKey(pubkey2), Power: 20}},
 
 			types.NewValidatorSet([]*types.Validator{val1, val2}),
 			false,
@@ -177,7 +177,7 @@ func TestUpdateValidators(t *testing.T) {
 			"updating a validator is OK",
 
 			types.NewValidatorSet([]*types.Validator{val1}),
-			[]abci.Validator{{[]byte{}, types.TM2PB.PubKey(pubkey1), 20}},
+			[]abci.Validator{{Address: []byte{}, PubKey: types.TM2PB.PubKey(pubkey1), Power: 20}},
 
 			types.NewValidatorSet([]*types.Validator{types.NewValidator(pubkey1, 20)}),
 			false,
@@ -186,7 +186,7 @@ func TestUpdateValidators(t *testing.T) {
 			"removing a validator is OK",
 
 			types.NewValidatorSet([]*types.Validator{val1, val2}),
-			[]abci.Validator{{[]byte{}, types.TM2PB.PubKey(pubkey2), 0}},
+			[]abci.Validator{{Address: []byte{}, PubKey: types.TM2PB.PubKey(pubkey2), Power: 0}},
 
 			types.NewValidatorSet([]*types.Validator{val1}),
 			false,
@@ -196,7 +196,7 @@ func TestUpdateValidators(t *testing.T) {
 			"removing a non-existing validator results in error",
 
 			types.NewValidatorSet([]*types.Validator{val1}),
-			[]abci.Validator{{[]byte{}, types.TM2PB.PubKey(pubkey2), 0}},
+			[]abci.Validator{{Address: []byte{}, PubKey: types.TM2PB.PubKey(pubkey2), Power: 0}},
 
 			types.NewValidatorSet([]*types.Validator{val1}),
 			true,
@@ -206,7 +206,7 @@ func TestUpdateValidators(t *testing.T) {
 			"adding a validator with negative power results in error",
 
 			types.NewValidatorSet([]*types.Validator{val1}),
-			[]abci.Validator{{[]byte{}, types.TM2PB.PubKey(pubkey2), -100}},
+			[]abci.Validator{{Address: []byte{}, PubKey: types.TM2PB.PubKey(pubkey2), Power: -100}},
 
 			types.NewValidatorSet([]*types.Validator{val1}),
 			true,
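The test-table hunks above repeatedly convert positional composite literals to named-field form. A small self-contained sketch of why that rewrite matters (the `Validator` struct here is a toy stand-in for the real `abci.Validator`, whose fields may differ):

```go
package main

import "fmt"

// Validator is a toy stand-in for abci.Validator, used only to
// illustrate positional vs named composite literals.
type Validator struct {
	Address []byte
	PubKey  []byte
	Power   int64
}

func main() {
	// Positional literal: compiles today, but silently mis-assigns values
	// if the struct ever gains a field or reorders existing ones.
	positional := Validator{[]byte{}, []byte{0x01}, 20}

	// Named-field literal: order-independent and self-documenting, which
	// is the form the diff above migrates the test tables to.
	named := Validator{Address: []byte{}, PubKey: []byte{0x01}, Power: 20}

	fmt.Println(positional.Power, named.Power)
}
```

Named fields also let future struct changes (like the `LastCommitInfo` restructuring in this same commit) land without touching every call site.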
@@ -262,14 +262,14 @@ func state(nVals, height int) (State, dbm.DB) {
 	SaveState(stateDB, s)
 
 	for i := 1; i < height; i++ {
-		s.LastBlockHeight += 1
+		s.LastBlockHeight++
 		SaveState(stateDB, s)
 	}
 	return s, stateDB
 }
 
 func makeBlock(state State, height int64) *types.Block {
-	block, _ := state.MakeBlock(height, makeTxs(state.LastBlockHeight), new(types.Commit))
+	block, _ := state.MakeBlock(height, makeTxs(state.LastBlockHeight), new(types.Commit), nil)
 	return block
 }
@@ -293,7 +293,7 @@ func (app *testApp) Info(req abci.RequestInfo) (resInfo abci.ResponseInfo) {
 }
 
 func (app *testApp) BeginBlock(req abci.RequestBeginBlock) abci.ResponseBeginBlock {
-	app.Validators = req.Validators
+	app.Validators = req.LastCommitInfo.Validators
 	app.ByzantineValidators = req.ByzantineValidators
 	return abci.ResponseBeginBlock{}
 }
@@ -98,19 +98,27 @@ func (state State) IsEmpty() bool {
 //------------------------------------------------------------------------
 // Create a block from the latest state
 
-// MakeBlock builds a block with the given txs and commit from the current state.
-func (state State) MakeBlock(height int64, txs []types.Tx, commit *types.Commit) (*types.Block, *types.PartSet) {
-	// Build base block.
-	block := types.MakeBlock(height, txs, commit)
+// MakeBlock builds a block from the current state with the given txs, commit, and evidence.
+func (state State) MakeBlock(
+	height int64,
+	txs []types.Tx,
+	commit *types.Commit,
+	evidence []types.Evidence,
+) (*types.Block, *types.PartSet) {
 
-	// Fill header with state data.
+	// Build base block with block data.
+	block := types.MakeBlock(height, txs, commit, evidence)
+
+	// Fill rest of header with state data.
 	block.ChainID = state.ChainID
-	block.TotalTxs = state.LastBlockTotalTx + block.NumTxs
 	block.LastBlockID = state.LastBlockID
+	block.TotalTxs = state.LastBlockTotalTx + block.NumTxs
 	block.ValidatorsHash = state.Validators.Hash()
 	block.NextValidatorsHash = state.NextValidators.Hash()
-	block.AppHash = state.AppHash
 	block.ConsensusHash = state.ConsensusParams.Hash()
+	block.AppHash = state.AppHash
 	block.LastResultsHash = state.LastResultsHash
 
 	return block, block.MakePartSet(state.ConsensusParams.BlockGossip.BlockPartSizeBytes)
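The reworked `MakeBlock` follows a two-phase construction: build the base block from block-level inputs (txs, commit, evidence), then fill the remaining header fields from the current state. A hedged, self-contained sketch of that shape with toy types (`Block` and `State` here are illustrative stand-ins with far fewer fields than the real tendermint types):

```go
package main

import "fmt"

// Toy stand-ins for the real types; fields are illustrative only.
type Block struct {
	Height  int64
	Txs     []string
	ChainID string
	AppHash []byte
}

type State struct {
	ChainID string
	AppHash []byte
}

// MakeBlock mirrors the two-phase shape of the refactored function:
// phase 1 builds the base block from block-level data, phase 2 copies
// the remaining header fields out of the current state.
func (s State) MakeBlock(height int64, txs []string) *Block {
	block := &Block{Height: height, Txs: txs} // phase 1: base block
	block.ChainID = s.ChainID                 // phase 2: fill header from state
	block.AppHash = s.AppHash
	return block
}

func main() {
	s := State{ChainID: "test-chain", AppHash: []byte{0xde, 0xad}}
	b := s.MakeBlock(2, []string{"tx1", "tx2"})
	fmt.Println(b.ChainID, b.Height, len(b.Txs))
}
```

Splitting the constructor this way keeps block-level inputs (which callers supply) cleanly separated from state-derived fields (which only `State` knows), which is why the evidence list becomes an explicit parameter rather than a post-hoc assignment.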
@@ -185,7 +185,6 @@ func txResultWithTags(tags []cmn.KVPair) *types.TxResult {
 			Code: abci.CodeTypeOK,
 			Log:  "",
 			Tags: tags,
-			Fee:  cmn.KI64Pair{Key: nil, Value: 0},
 		},
 	}
 }
@@ -201,7 +200,6 @@ func benchmarkTxIndex(txsCount int64, b *testing.B) {
 			Code: abci.CodeTypeOK,
 			Log:  "",
 			Tags: []cmn.KVPair{},
-			Fee:  cmn.KI64Pair{Key: []uint8{}, Value: 0},
 		},
 	}
@@ -13,11 +13,11 @@ ENV REPO $GOPATH/src/github.com/tendermint/tendermint
 ENV GOBIN $GOPATH/bin
 WORKDIR $REPO
 
-# Install the vendored dependencies before copying code
+# Copy in the code
+COPY . $REPO
+
+# Install the vendored dependencies
 # docker caching prevents reinstall on code change!
 ADD Gopkg.toml Gopkg.toml
 ADD Gopkg.lock Gopkg.lock
 ADD Makefile Makefile
 RUN make get_tools
 RUN make get_vendor_deps
@@ -1,204 +0,0 @@
-Tendermint Core
-License: Apache2.0
-
-                                 Apache License
-                           Version 2.0, January 2004
-                        http://www.apache.org/licenses/
-
-   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-   1. Definitions.
-
-      "License" shall mean the terms and conditions for use, reproduction,
-      and distribution as defined by Sections 1 through 9 of this document.
-
-      "Licensor" shall mean the copyright owner or entity authorized by
-      the copyright owner that is granting the License.
-
-      "Legal Entity" shall mean the union of the acting entity and all
-      other entities that control, are controlled by, or are under common
-      control with that entity. For the purposes of this definition,
-      "control" means (i) the power, direct or indirect, to cause the
-      direction or management of such entity, whether by contract or
-      otherwise, or (ii) ownership of fifty percent (50%) or more of the
-      outstanding shares, or (iii) beneficial ownership of such entity.
-
-      "You" (or "Your") shall mean an individual or Legal Entity
-      exercising permissions granted by this License.
-
-      "Source" form shall mean the preferred form for making modifications,
-      including but not limited to software source code, documentation
-      source, and configuration files.
-
-      "Object" form shall mean any form resulting from mechanical
-      transformation or translation of a Source form, including but
-      not limited to compiled object code, generated documentation,
-      and conversions to other media types.
-
-      "Work" shall mean the work of authorship, whether in Source or
-      Object form, made available under the License, as indicated by a
-      copyright notice that is included in or attached to the work
-      (an example is provided in the Appendix below).
-
-      "Derivative Works" shall mean any work, whether in Source or Object
-      form, that is based on (or derived from) the Work and for which the
-      editorial revisions, annotations, elaborations, or other modifications
-      represent, as a whole, an original work of authorship. For the purposes
-      of this License, Derivative Works shall not include works that remain
-      separable from, or merely link (or bind by name) to the interfaces of,
-      the Work and Derivative Works thereof.
-
-      "Contribution" shall mean any work of authorship, including
-      the original version of the Work and any modifications or additions
-      to that Work or Derivative Works thereof, that is intentionally
-      submitted to Licensor for inclusion in the Work by the copyright owner
-      or by an individual or Legal Entity authorized to submit on behalf of
-      the copyright owner. For the purposes of this definition, "submitted"
-      means any form of electronic, verbal, or written communication sent
-      to the Licensor or its representatives, including but not limited to
-      communication on electronic mailing lists, source code control systems,
-      and issue tracking systems that are managed by, or on behalf of, the
-      Licensor for the purpose of discussing and improving the Work, but
-      excluding communication that is conspicuously marked or otherwise
-      designated in writing by the copyright owner as "Not a Contribution."
-
-      "Contributor" shall mean Licensor and any individual or Legal Entity
-      on behalf of whom a Contribution has been received by Licensor and
-      subsequently incorporated within the Work.
-
-   2. Grant of Copyright License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      copyright license to reproduce, prepare Derivative Works of,
-      publicly display, publicly perform, sublicense, and distribute the
-      Work and such Derivative Works in Source or Object form.
-
-   3. Grant of Patent License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      (except as stated in this section) patent license to make, have made,
-      use, offer to sell, sell, import, and otherwise transfer the Work,
-      where such license applies only to those patent claims licensable
-      by such Contributor that are necessarily infringed by their
-      Contribution(s) alone or by combination of their Contribution(s)
-      with the Work to which such Contribution(s) was submitted. If You
-      institute patent litigation against any entity (including a
-      cross-claim or counterclaim in a lawsuit) alleging that the Work
-      or a Contribution incorporated within the Work constitutes direct
-      or contributory patent infringement, then any patent licenses
-      granted to You under this License for that Work shall terminate
-      as of the date such litigation is filed.
-
-   4. Redistribution. You may reproduce and distribute copies of the
-      Work or Derivative Works thereof in any medium, with or without
-      modifications, and in Source or Object form, provided that You
-      meet the following conditions:
-
-      (a) You must give any other recipients of the Work or
-          Derivative Works a copy of this License; and
-
-      (b) You must cause any modified files to carry prominent notices
-          stating that You changed the files; and
-
-      (c) You must retain, in the Source form of any Derivative Works
-          that You distribute, all copyright, patent, trademark, and
-          attribution notices from the Source form of the Work,
-          excluding those notices that do not pertain to any part of
-          the Derivative Works; and
-
-      (d) If the Work includes a "NOTICE" text file as part of its
-          distribution, then any Derivative Works that You distribute must
-          include a readable copy of the attribution notices contained
-          within such NOTICE file, excluding those notices that do not
-          pertain to any part of the Derivative Works, in at least one
-          of the following places: within a NOTICE text file distributed
-          as part of the Derivative Works; within the Source form or
-          documentation, if provided along with the Derivative Works; or,
-          within a display generated by the Derivative Works, if and
-          wherever such third-party notices normally appear. The contents
-          of the NOTICE file are for informational purposes only and
-          do not modify the License. You may add Your own attribution
-          notices within Derivative Works that You distribute, alongside
-          or as an addendum to the NOTICE text from the Work, provided
-          that such additional attribution notices cannot be construed
-          as modifying the License.
-
-      You may add Your own copyright statement to Your modifications and
-      may provide additional or different license terms and conditions
-      for use, reproduction, or distribution of Your modifications, or
-      for any such Derivative Works as a whole, provided Your use,
-      reproduction, and distribution of the Work otherwise complies with
-      the conditions stated in this License.
-
-   5. Submission of Contributions. Unless You explicitly state otherwise,
-      any Contribution intentionally submitted for inclusion in the Work
-      by You to the Licensor shall be under the terms and conditions of
-      this License, without any additional terms or conditions.
-      Notwithstanding the above, nothing herein shall supersede or modify
-      the terms of any separate license agreement you may have executed
-      with Licensor regarding such Contributions.
-
-   6. Trademarks. This License does not grant permission to use the trade
-      names, trademarks, service marks, or product names of the Licensor,
-      except as required for reasonable and customary use in describing the
-      origin of the Work and reproducing the content of the NOTICE file.
-
-   7. Disclaimer of Warranty. Unless required by applicable law or
-      agreed to in writing, Licensor provides the Work (and each
-      Contributor provides its Contributions) on an "AS IS" BASIS,
-      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-      implied, including, without limitation, any warranties or conditions
-      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-      PARTICULAR PURPOSE. You are solely responsible for determining the
-      appropriateness of using or redistributing the Work and assume any
-      risks associated with Your exercise of permissions under this License.
-
-   8. Limitation of Liability. In no event and under no legal theory,
-      whether in tort (including negligence), contract, or otherwise,
-      unless required by applicable law (such as deliberate and grossly
-      negligent acts) or agreed to in writing, shall any Contributor be
-      liable to You for damages, including any direct, indirect, special,
-      incidental, or consequential damages of any character arising as a
-      result of this License or out of the use or inability to use the
-      Work (including but not limited to damages for loss of goodwill,
-      work stoppage, computer failure or malfunction, or any and all
-      other commercial damages or losses), even if such Contributor
-      has been advised of the possibility of such damages.
-
-   9. Accepting Warranty or Additional Liability. While redistributing
-      the Work or Derivative Works thereof, You may choose to offer,
-      and charge a fee for, acceptance of support, warranty, indemnity,
-      or other liability obligations and/or rights consistent with this
-      License. However, in accepting such obligations, You may act only
-      on Your own behalf and on Your sole responsibility, not on behalf
-      of any other Contributor, and only if You agree to indemnify,
-      defend, and hold each Contributor harmless for any liability
-      incurred by, or claims asserted against, such Contributor by reason
-      of your accepting any such warranty or additional liability.
-
-   END OF TERMS AND CONDITIONS
-
-   APPENDIX: How to apply the Apache License to your work.
-
-      To apply the Apache License to your work, attach the following
-      boilerplate notice, with the fields enclosed by brackets "{}"
-      replaced with your own identifying information. (Don't include
-      the brackets!) The text should be enclosed in the appropriate
-      comment syntax for the file format. We also recommend that a
-      file or class name and description of purpose be included on the
-      same "printed page" as the copyright notice for easier
-      identification within third-party archives.
-
-   Copyright 2016 All in Bits, Inc
-
-   Licensed under the Apache License, Version 2.0 (the "License");
-   you may not use this file except in compliance with the License.
-   You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
@@ -1,192 +0,0 @@
-Copyright (C) 2017 Tendermint
-
-                                 Apache License
-                           Version 2.0, January 2004
-                        https://www.apache.org/licenses/
-
-   [standard Apache License 2.0 text, identical to the copy above]
@@ -52,7 +52,7 @@ then query the Tendermint app logs from the first pod:

 ``kubectl logs -c tm -f tm-0``

-finally, use our `Rest API <../specification/rpc.html>`__ to fetch the status of the second pod's Tendermint app.
+finally, use our `Rest API <https://tendermint.com/docs/tendermint-core/rpc.html>`__ to fetch the status of the second pod's Tendermint app.

 Note we are using ``kubectl exec`` because pods are not exposed (and should not be) to the
 outer network:
@@ -1,204 +0,0 @@
Tendermint Bench
Copyright 2017 Tendermint

Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.

"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Some files were not shown because too many files have changed in this diff.