Compare commits


151 Commits

Author SHA1 Message Date
Ethan Buchman
013b9cef64 Merge pull request #2152 from tendermint/release/v0.23.0
Release/v0.23.0
2018-08-05 17:11:22 -04:00
Ethan Buchman
309a6772d7 types: fix formatting when printing signatures
- use cmn.Fingerprint and %X
2018-08-05 16:35:43 -04:00
Ethan Buchman
8bd514d9fb update changelog 2018-08-05 15:55:48 -04:00
ValarDragon
f903947ff3 crypto: Remove interface from crypto.Signature
Signatures are now []byte, which saves on the number of bytes after
amino encoding

(squash this) address Ismail's comment
2018-08-05 15:46:57 -04:00
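
A minimal sketch of the shape of this change (hypothetical type name, not the actual Tendermint definition): a signature that is a plain byte slice amino-encodes as a simple length-prefixed byte string, with no interface prefix bytes.

```go
package main

import "fmt"

// Before (roughly): crypto.Signature was an interface, so amino-encoded
// values carried extra prefix bytes identifying the concrete type.
//
// After: a signature is just bytes.
type Signature []byte

func main() {
	sig := Signature{0xDE, 0xAD, 0xBE, 0xEF}
	// Print with %X, matching the formatting fix in the commit above.
	fmt.Printf("signature: %X\n", sig)
}
```
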
Dev Ojha
d6a666b445 p2p/pex: Fix mismatch between dialseeds and checkseeds. (#2151) 2018-08-05 14:47:01 -04:00
Ethan Buchman
c5c2b9601f update changelog and version 2018-08-05 14:14:56 -04:00
Dev Ojha
6e3c5e8033 p2p/pex: Allow configured seed nodes to not be resolvable over DNS (#2129)
* p2p/pex: Allow configured seed nodes to be offline

Previously you couldn't start up tendermint if a seed node was offline.
This now allows you to start up tendermint, as long as all seed node addresses
are formatted correctly. In the event that all seed nodes are down
and the address book is empty, it crashes with an informative error message.
(This case doesn't occur if no seeds were specified)

Closes #1716

* (Squash this) Address melekes' comments

* (squash this) fix package imports

* (squash this) fix pex_reactor comment

* (squash this) add a test case
2018-08-04 20:35:08 -04:00
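
A rough sketch of the startup behaviour described above, with hypothetical function and field names (the real check lives in the pex reactor, and seed addresses also carry a node ID): malformed addresses stay fatal, unresolvable ones are tolerated, and startup fails only when nothing is dialable and the address book is empty.

```go
package main

import (
	"errors"
	"fmt"
	"net"
)

func checkSeeds(seeds []string, addrBookEmpty bool) error {
	unresolvable := 0
	for _, s := range seeds {
		host, _, err := net.SplitHostPort(s)
		if err != nil {
			return fmt.Errorf("malformed seed address %q: %v", s, err) // always fatal
		}
		if _, err := net.LookupHost(host); err != nil {
			unresolvable++ // offline / DNS-less seeds are tolerated
		}
	}
	if len(seeds) > 0 && unresolvable == len(seeds) && addrBookEmpty {
		return errors.New("all seeds are unresolvable and the address book is empty")
	}
	return nil
}

func main() {
	fmt.Println(checkSeeds([]string{"seed.example.com:26656"}, true))
}
```
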
Ethan Buchman
8073e51b04 Merge pull request #2096 from tendermint/dev/adr_symmetric
[ADR] Proposal for encoding symmetric cryptography
2018-08-04 20:27:24 -04:00
Ethan Buchman
3161ebbc2f Merge pull request #2091 from tendermint/dev/adr_secp_signatures
[ADR] Fix malleability problems in Secp256k1 signatures
2018-08-04 20:22:56 -04:00
Ethan Buchman
4cbeb30da2 Merge pull request #2136 from tendermint/1944-update-grpc
update genproto
2018-08-03 23:40:21 -04:00
Ethan Buchman
d5b5e5a2e4 Merge pull request #2135 from tendermint/2072-unresponsive-tm-after-cs-failure
consensus: non-responsive to CTRL-C if consensus state panics
2018-08-03 23:39:25 -04:00
ValarDragon
6691492540 (squash this) indicate what Ethereum does 2018-08-03 17:49:46 -07:00
Anton Kaliaev
2878c7523f update github bug report template (#2131) 2018-08-03 11:39:57 +04:00
Anton Kaliaev
b1cff0f9bf [libs/autofile] create a Group ticker on Start
1) no need to stop the ticker in createTestGroup() method
2) now there is a symmetry - we start the ticker in OnStart(), we stop it
in OnStop()

Refs #2072
2018-08-03 11:34:58 +04:00
Anton Kaliaev
d09a3a6d3a stop gracefully instead of trying to resume ops
Refs #2072

We most probably shouldn't be running any further when there is some
unexpected panic. Some unknown error happened, and so we don't know if
that will result in the validator signing an invalid thing. It might be
worthwhile to explore a mechanism for manual resuming via some console
or secure RPC system, but for now, halting the chain upon unexpected
consensus bugs sounds like the better option.
2018-08-03 11:24:55 +04:00
ValarDragon
87f09adeec (Squash this) Be more explicit about the exact encoding of the secp signature 2018-08-02 23:27:16 -07:00
ValarDragon
b3a3c8a192 Merge remote-tracking branch 'origin/develop' into dev/adr_secp_signatures 2018-08-02 23:25:14 -07:00
Ethan Buchman
2487210414 Merge pull request #2097 from tendermint/1772-revert
revert "make `/status` RPC endpoint resistant to consensus halt"
2018-08-02 17:31:07 -04:00
ValarDragon
a040c36dfb (squash this) change adr number, remove redundancy in function names 2018-08-02 10:43:47 -07:00
Anton Kaliaev
d579f4c610 update genproto
Closes #1944
2018-08-02 17:54:55 +04:00
Anton Kaliaev
b82138b002 update changelog 2018-08-02 16:48:12 +04:00
Anton Kaliaev
8ed99c2c13 exit from initSighupWatcher child goroutine
also, remove excessive log message

Refs #2072
2018-08-02 16:42:25 +04:00
Anton Kaliaev
4c5a143a70 respawn receiveRoutine so we can properly exit
Closes #2072
2018-08-02 16:36:28 +04:00
Anton Kaliaev
b33f73eaf1 stop autofile and autogroup properly
NOTE: from the ticker#Stop documentation:

```
Stop does not close the channel, to prevent a read from the channel
succeeding incorrectly.
https://golang.org/src/time/tick.go?s=1318:1341#L35
```

Refs #2072
2018-08-02 16:33:34 +04:00
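
A short illustration of why the quoted note matters (generic Go, not the actual autofile code): since `Stop` never closes `ticker.C`, a goroutine ranging over that channel would block forever, so a separate quit channel is needed to exit the loop.

```go
package main

import (
	"fmt"
	"time"
)

func run(ticker *time.Ticker, quit chan struct{}) {
	for {
		select {
		case t := <-ticker.C:
			fmt.Println("tick at", t)
		case <-quit:
			return // ticker.Stop() alone would never get us here
		}
	}
}

func main() {
	ticker := time.NewTicker(10 * time.Millisecond)
	quit := make(chan struct{})
	go run(ticker, quit)

	time.Sleep(35 * time.Millisecond)
	ticker.Stop() // stops ticks, but does not close ticker.C
	close(quit)   // this is what actually lets run() return
	time.Sleep(10 * time.Millisecond)
}
```
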
Dev Ojha
eaa137512c adr: Encoding for cryptography at launch (#2121) 2018-08-01 18:19:21 -04:00
Dev Ojha
023bb99eb0 p2p: Add test vectors for deriving secrets (#2120)
These test vectors are needed for comparison with the Rust implementation.
To implement this effectively, a "RandBool" method was added to cmn.Rand.
2018-08-01 15:06:29 -04:00
Dev Ojha
dde96b75ce abci: Update readme for building protoc (#2124) 2018-08-01 13:57:31 +04:00
Dev Ojha
6fb2f44cc3 p2p: Connect to peers from a seed node immediately (#2115)
This is to reduce wait times when initially connecting. This still runs checks
such as whether you still want additional peers.

A test case has been created, which fails if this change is not included.
2018-07-31 22:09:01 +02:00
Zarko Milosevic
08ad162daa docs: Modify blockchain spec to reflect validator set changes (#2107) 2018-07-31 20:19:57 +02:00
ValarDragon
a83eed104c libs/cmn: Remove Tempfile, Tempdir, switch to ioutil variants (#2114)
Our Tempfile was just a wrapper around ioutil's that panicked instead of returning an error.

Our Tempdir was a less safe variant of ioutil's Tempdir.
2018-07-31 19:43:36 +02:00
ValarDragon
be642754f5 libs/cmn/writefileatomic: Handle file already exists gracefully (#2113)
This uses the stdlib's method of creating a tempfile in our write
file atomic method, with a few modifications. We use a 64 bit number
rather than 32 bit, and therefore a corresponding LCG. This is to
reduce collision probability. (Note that we previously used 32 bytes,
so this is likely a concern)

We handle reseeding the LCG in such a way that multiple threads are
even less likely to reuse the same seed.
2018-07-31 19:43:36 +02:00
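
A simplified sketch of the collision-handling idea (hypothetical helper, not the real `cmn.WriteFileAtomic`): pick a random suffix, open with `O_EXCL`, and retry with a new suffix if that exact temp file already exists instead of failing.

```go
package main

import (
	"fmt"
	"math/rand"
	"os"
	"path/filepath"
	"time"
)

func createTempFile(prefix string) (*os.File, string, error) {
	// Stand-in for the 64-bit LCG mentioned above.
	r := rand.New(rand.NewSource(time.Now().UnixNano()))
	for i := 0; i < 100; i++ {
		name := fmt.Sprintf("%s-%016x", prefix, r.Uint64())
		f, err := os.OpenFile(name, os.O_RDWR|os.O_CREATE|os.O_EXCL, 0600)
		if os.IsExist(err) {
			continue // collision: try another suffix instead of erroring out
		}
		return f, name, err
	}
	return nil, "", fmt.Errorf("could not create temp file with prefix %q", prefix)
}

func main() {
	f, name, err := createTempFile(filepath.Join(os.TempDir(), "write-file-atomic"))
	if err != nil {
		panic(err)
	}
	f.Close()
	os.Remove(name)
	fmt.Println("created and removed", name)
}
```
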
ValarDragon
2608249e5b libs/common: Refactor tempfile code into its own file 2018-07-31 19:43:36 +02:00
Anton Kaliaev
62b8ee270d [docs] Validator's address can be skipped (#2117)
Refs #1712
2018-07-31 18:28:19 +02:00
Anton Kaliaev
0c7338c5f0 abci: Change validators to last_commit_info in RequestBeginBlock (#2074)
* change validators to last_commit_info in RequestBeginBlock
* do not send pubkeys with RequestBeginBlock

Refs #1856
2018-07-30 17:29:40 +02:00
Dev Ojha
231cdc1320 ci: Reduce log output in test_cover (#2105)
This commit makes it such that CircleCI only logs the module whose
tests it is currently running, unless a test fails. If a test fails, it
displays the names of all failing tests, along with their log output.
This is done to make our log output far more scrollable. We lose no
information in debugging.
2018-07-30 01:05:27 +02:00
Zaki Manian
fef4fe1c66 Link to "The latest gossip on BFT consensus" (#2102)
The paper is such a useful resource to anyone building on Tendermint. It should be linked in the ReadMe.
2018-07-29 09:49:42 +04:00
Dev Ojha
f00b52b710 libs/autofile/group_test: Remove unnecessary logging (#2100)
Previously we logged `Testing for i <i>` for all i in [0,100).
This was unnecessary. This changes it to just log the value of i in
error messages, to reduce the unnecessary verbosity in log files.
2018-07-29 09:48:37 +04:00
VenkatDatta
188e459273 Removed unnecessary onStart call (#2098)
* Removed unnecessary onStart & onStop calls in reactor

* Refactor OnStart & OnStop in reactor

* Removed redundant OnStart func in reactor
2018-07-29 09:46:53 +04:00
ValarDragon
3d5d254932 (squash this) Mixed up field element and curve element. Idea still stands. 2018-07-28 20:41:19 -07:00
ValarDragon
ce9ddc7cd7 (squash this) Note not to overwrite aead's. 2018-07-28 06:33:15 -07:00
ValarDragon
c03ad56d55 (squash this) Note that this breaks existing keys. 2018-07-28 04:23:22 -07:00
ValarDragon
caef5dcd69 (Squash this) forgot to say that algo_name should be length prefixed 2018-07-28 04:14:07 -07:00
Anton Kaliaev
7634073718 revert "make /status RPC endpoint resistant to consensus halt"
Refs #1772

Reasons:
- this was a bad patch for something not well understood

Lessons learned:
- nobody should be modifying code without understanding the problem
first. it will only result in more technical debt and code rot.
- we never hide information when we suspect a bug or we're not sure what's
going on.
2018-07-28 09:17:42 +04:00
ValarDragon
af2894c0f8 (squash this) improve grammar. 2018-07-27 19:27:25 -07:00
ValarDragon
a2debe57c7 [ADR] Proposal for encoding symmetric cryptography 2018-07-27 18:10:41 -07:00
ValarDragon
5955eddc7d ADR: Fix malleability problems in Secp256k1 signatures
Previously you could not assume that your transaction hash would
appear on chain.
2018-07-27 13:18:21 -07:00
Ethan Buchman
4bab34ae45 Merge pull request #2088 from tendermint/bucky/fix-evidence-test
consensus: Fix test for blocks with evidence
2018-07-27 14:06:12 -04:00
Ethan Buchman
fe5b0d7074 consensus: fix test for blocks with evidence 2018-07-27 14:00:05 -04:00
Anton Kaliaev
96ae535fb8 proto3 timestamp (#2064)
This PR changes ABCI time format from int64 (Unix seconds) to WKT (WellKnownType) google.protobuf.Timestamp.

Refs #1857

Reasons:

* better precision
* standard DT for proto

* update Gopkg.lock
* [makefile] remove extra grep
    - go list excludes vendor by default now
* proto3 timestamp
* [docs/abci-spec] note about serialisation format
* make time non-nullable
2018-07-27 04:23:19 +02:00
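
A small sketch of the new time representation (using the standard `ptypes` helper from `github.com/golang/protobuf`; not Tendermint code): ABCI time moves from an int64 of Unix seconds to a `google.protobuf.Timestamp` carrying seconds plus nanoseconds.

```go
package main

import (
	"fmt"
	"time"

	"github.com/golang/protobuf/ptypes"
)

func main() {
	t := time.Now()

	// Old style: whole seconds only.
	unixSeconds := t.Unix()

	// New style: a google.protobuf.Timestamp with seconds and nanos.
	ts, err := ptypes.TimestampProto(t)
	if err != nil {
		panic(err)
	}
	fmt.Println(unixSeconds, "->", ts.Seconds, ts.Nanos)
}
```
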
Alexander Simmerl
4be6395ee0 Merge pull request #2085 from tendermint/master
Merge 0.23.8 back into develop
2018-07-27 04:21:34 +02:00
Dev Ojha
bc526f18a4 libs: Make bitarray functions lock parameters that aren't the caller (#2081)
This now makes bit array functions which take in a second bit array thread
safe. Previously there was a warning on bitarray.Update to lock the
second parameter externally if thread safety was required.
This was not done within the codebase, so it was fine to change here.

Closes #2080
2018-07-27 04:21:08 +02:00
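
A sketch of the locking pattern described above (hypothetical BitArray, not the real `cmn.BitArray`): the second bit array snapshots its bits under its own mutex, so callers no longer need to lock it externally.

```go
package main

import (
	"fmt"
	"sync"
)

type BitArray struct {
	mtx   sync.Mutex
	elems []uint64
}

// copyBits snapshots the bits under the array's own lock.
func (b *BitArray) copyBits() []uint64 {
	b.mtx.Lock()
	defer b.mtx.Unlock()
	out := make([]uint64, len(b.elems))
	copy(out, b.elems)
	return out
}

// Or sets b to the union of b and o; o locks itself, so the caller never has
// to hold o's lock (and there is no deadlock with b's lock below).
func (b *BitArray) Or(o *BitArray) {
	oElems := o.copyBits()
	b.mtx.Lock()
	defer b.mtx.Unlock()
	for i := 0; i < len(b.elems) && i < len(oElems); i++ {
		b.elems[i] |= oElems[i]
	}
}

func main() {
	a := &BitArray{elems: make([]uint64, 2)}
	c := &BitArray{elems: []uint64{0x0A, 0x01}}
	a.Or(c)
	fmt.Printf("%x %x\n", a.elems[0], a.elems[1])
}
```
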
Jae Kwon
40d6dc2ee5 Merge pull request #2082 from tendermint/release/0.22.8
HotFix for 0.22.7, bump to 0.22.8
2018-07-26 19:13:27 -07:00
Jae Kwon
d542d2c394 Fix 0.22.7, bump to 0.22.8 2018-07-26 18:08:09 -07:00
Alexander Simmerl
fd29fd6465 adr: PeerTransport (#2069)
* p2p: Propose PeerTransport ADR
* adr: Set status to in review
* adr: Add high-level decision
* adr: Extend on the idea of guards
* adr: Rework guards into transport specific filters
* adr: Rename to nodeAddr
* adr: Incorporate review
2018-07-27 02:27:19 +02:00
Dev Ojha
6241e6b927 libs: update BitArray go docs (#2079) 2018-07-27 02:10:58 +02:00
Ethan Buchman
18acd77e40 Merge pull request #2076 from tendermint/hotfix/0.22.7
Hotfix/0.22.7
2018-07-26 19:00:50 -04:00
Hendrik Hofstadt
49b52ee3c7 Add test for MakePartSet with evidence 2018-07-26 19:00:07 -04:00
Ethan Buchman
e4dfab6349 changelog and version 2018-07-26 18:54:15 -04:00
Ethan Buchman
0e127562bf register evidence interface wherever its used 2018-07-26 18:53:19 -04:00
Dev Ojha
9f19229791 .github: Split the issue template into two separate templates (#2073)
* .github: Split the issue template into two separate templates

Now we have different bug report and feature request templates.

* Forgot to add the name, and about fields
2018-07-26 12:18:55 +04:00
Alexander Simmerl
bdab37a626 Merge pull request #2054 from tendermint/dev/secret_connection
secret connection update
2018-07-26 00:29:36 +02:00
ValarDragon
7bf28af590 p2p/secret_connection: Switch salsa usage to hkdf + chacha
This now uses one hkdf call on the X25519 shared secret to create
keys for the sender and receiver.
The hkdf is applied only to the computed shared
secret, since the shared secret is a function of the pubkeys.

The nonces now start at 0, as we are using chacha as a stream
cipher, and the sender and receiver now have different keys.
2018-07-26 00:12:32 +02:00
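
A rough sketch of the derivation scheme described above (simplified; the info string and role handling here are placeholders, not the real secret-connection values): one HKDF call over the shared secret yields separate send and receive keys, and each direction runs ChaCha20-Poly1305 with a counter nonce starting at zero.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"

	"golang.org/x/crypto/chacha20poly1305"
	"golang.org/x/crypto/hkdf"
)

// deriveKeys expands the shared secret into two independent 32-byte keys.
func deriveKeys(sharedSecret []byte) (sendKey, recvKey [32]byte, err error) {
	r := hkdf.New(sha256.New, sharedSecret, nil, []byte("EXAMPLE_KEY_DERIVATION"))
	if _, err = io.ReadFull(r, sendKey[:]); err != nil {
		return
	}
	_, err = io.ReadFull(r, recvKey[:])
	return
}

func main() {
	shared := make([]byte, 32) // stand-in for the X25519 shared secret
	sendKey, _, err := deriveKeys(shared)
	if err != nil {
		panic(err)
	}
	aead, err := chacha20poly1305.New(sendKey[:])
	if err != nil {
		panic(err)
	}
	nonce := make([]byte, aead.NonceSize()) // counter nonce, starting at 0
	sealed := aead.Seal(nil, nonce, []byte("hello"), nil)
	fmt.Printf("%x\n", sealed)
}
```
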
Zaki Manian
1b04e4e5f1 p2p: Remove RipeMd160.
Generate keys with HKDF instead of hash functions, which provides better security properties.

Add xchacha20poly1305 to secret connection. (Due to rebasing, this code has been removed)
2018-07-26 00:09:37 +02:00
Zach
66fe5b7bae rpc: Improve slate for Jenkins (#2070) 2018-07-25 23:37:08 +02:00
Dev Ojha
24b94d7aa4 crypto: Switch hkdfchacha back to xchacha (#2058)
hkdfchachapoly was a construction we came up with. There is no longer any
reason to use it. We should instead just use xchacha for the remaining use
cases we have. (Symmetrically encrypting the keys in the sdk keys package)
2018-07-25 23:12:39 +02:00
Dev Ojha
0bd4fb96f0 crypto: Add benchmarking code for signature schemes (#2061)
* crypto: Add benchmarking code for signature schemes

This does a slight refactor for the key generation code. It now calls a
separate unexported method to allow generation from a reader. I think this
will actually reduce time in generation, due to no longer initializing an
extra slice. This was needed in order to enable benchmarking.

This uses an internal package for the benchmarking code, so that this can
be standardized without being exported in the public API. The benchmarking
code is derived from agl/ed25519's benchmarking code, and has copied the
license over.

Closes #1984
2018-07-25 23:07:47 +02:00
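
A minimal example of the kind of signing benchmark this adds (hypothetical file; it uses `golang.org/x/crypto/ed25519` directly rather than the internal benchmarking package mentioned above).

```go
package ed25519_test

import (
	"testing"

	"golang.org/x/crypto/ed25519"
)

// BenchmarkSigning measures raw ed25519 signing throughput.
func BenchmarkSigning(b *testing.B) {
	_, priv, err := ed25519.GenerateKey(nil) // nil reader falls back to crypto/rand
	if err != nil {
		b.Fatal(err)
	}
	msg := []byte("benchmark message")
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		ed25519.Sign(priv, msg)
	}
}
```
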
Dev Ojha
9cfc47a93b makefile: Add make check_dep and remove make ensure_deps (#2055)
This adds a new makefile command, which is used in CI linting, `make check_dep`.
This ensures the toml is in sync with the lock, and that we're not pinning to a
branch in any repository.

This also adapts `make get_vendor_deps` to check the lock, in addition to
populating the vendor directory. This removes the need for `make ensure_deps`.

This makes `make get_vendor_deps` consistent between tendermint and the sdk.
2018-07-25 18:09:52 +02:00
Alexander Simmerl
d212292f86 Merge pull request #2066 from tendermint/bucky/merge-master
Bucky/merge master
2018-07-25 17:36:39 +02:00
Ethan Buchman
7ad92c44cb Merge branch 'master' into bucky/merge-master 2018-07-25 11:34:32 -04:00
Ethan Buchman
5fdbcd70df Merge pull request #2056 from tendermint/hotfix/v0.22.6
Hotfix/v0.22.6
2018-07-25 11:18:00 -04:00
Dev Ojha
d149d8f96d github: update PR template to indicate changing pending changelog. (#2059) 2018-07-25 11:34:51 +04:00
Ethan Buchman
74b6cc9057 rpc: log error when we timeout getting validators from consensus (#2045) 2018-07-25 10:56:00 +04:00
Ethan Buchman
6046b99197 consensus: include evidence in proposed block parts. fixes #2050 2018-07-24 21:58:39 -04:00
Ethan Buchman
359898dcac p2p: fix conn leak. part of #2046 2018-07-24 21:53:37 -04:00
Ethan Buchman
5768b67162 changelog and version 2018-07-24 21:26:41 -04:00
Ethan Buchman
082557b7d4 rpc: validate height in abci_query 2018-07-24 21:25:47 -04:00
Ethan Buchman
8dc655dad2 rpc: fix /blockchain OOM #2049 2018-07-24 21:18:20 -04:00
Ethan Buchman
70d3783747 dep: revert updates
- leave protobuf on 1.1.1 for grpc import fixes
- amino v0.11 breaks tendermint - revert to v0.10.1
2018-07-24 21:01:14 -04:00
Anton Kaliaev
60378fd7f9 abci: remove fee (#2043)
Refs #1861

We don't use the fee field and it's likely just confusing.

We can add backwards compatible priority (instead of fee) later.

Note priority is better than fee because it lets the app do the math on how to rank order transactions, rather than forcing that into tendermint (i.e. if we return fee, priority would be fee/gas)
2018-07-24 17:28:26 +02:00
Anton Kaliaev
c248ce5ef6 p2p: Reject addrs coming from private peers (#2032)
Refs #1706
2018-07-24 17:10:58 +02:00
Anton Kaliaev
059a03a66a tools: clean up Makefile and remove LICENSE file (#2042)
* clean up Makefile and remove LICENSE file
* remove k8s and build LICENSE files
* remove non-existing target
2018-07-24 16:02:32 +02:00
Alexander Simmerl
abe5b63059 Merge pull request #2040 from tendermint/2027-socket-pv
fix acceptDeadline
2018-07-24 15:52:45 +02:00
Alexander Simmerl
7e7473ad41 Merge pull request #2044 from tendermint/2019-prometheus
do not overwrite metrics provider in node#NewNode
2018-07-24 15:41:44 +02:00
Anton Kaliaev
75a26ebd6d do not overwrite metrics provider in node#NewNode
also, make running Prometheus server optional.

Closes #2019
2018-07-24 16:01:11 +04:00
Anton Kaliaev
ad580e2734 fix acceptDeadline
before: 1.000000003s
after: 3.000000000s

Refs #2027
2018-07-24 10:51:12 +04:00
Ethan Buchman
b92860b6c4 Merge pull request #1805 from tendermint/jae/optimize_blockchain
Optimizing blockchain reactor.
2018-07-23 22:46:51 -04:00
Ethan Buchman
54d753e64e fix Gopkg, add changelog 2018-07-23 22:48:38 -04:00
Ethan Buchman
e1b48b16c4 Merge branch 'develop' into jae/optimize_blockchain 2018-07-23 22:16:34 -04:00
Ethan Buchman
7c07235649 Merge pull request #2036 from tendermint/master
Merge master to develop
2018-07-23 22:05:44 -04:00
Ethan Buchman
05a76fb517 Merge pull request #2029 from tendermint/release/0.22.5
Release/0.22.5
2018-07-23 21:57:43 -04:00
Ethan Buchman
15b112e669 mempool: chan bool -> chan struct{} 2018-07-23 21:06:47 -04:00
Ethan Buchman
2aef80bcff Merge pull request #2034 from tendermint/dev/fix_pkg_names
crypto: Fix package imports from the refactor
2018-07-23 20:34:34 -04:00
ValarDragon
f3d519c966 crypto: Fix package imports from the refactor 2018-07-23 16:14:21 -07:00
Anton Kaliaev
9962e598a0 reconnect to self-reported address if persistent peer is inbound (#2031)
* reconnect to self-reported address if persistent peer is inbound

* add a fixme
2018-07-23 21:15:08 +04:00
Anton Kaliaev
948b91e62e add missing changelog entries 2018-07-23 17:16:43 +04:00
Anton Kaliaev
1e05242297 update changelog and bump version to 0.22.5 2018-07-23 17:07:14 +04:00
Anton Kaliaev
94e8252607 #2021 follow up (#2028)
* update changelog

* txAvailable is always true

Refs #2021, #1920

* remove debug message

No additional value. `enterPropose` log message should be enough.

Refs #2021, #1920
2018-07-23 16:47:15 +04:00
Dev Ojha
eb7dea1b0d crypto/ed25519: Remove privkey.Generate method (#2022)
The privkey.Generate method here was a custom-made method for deriving
a private key from another private key. This function is currently
not used anywhere in our codebase, and has not been reviewed enough
to be considered secure to use. This removes that method. We should
adopt the official ed25519 HD derivation once that has been standardized,
in order to fulfill this need.

closes #2000
2018-07-23 15:35:13 +04:00
srmo
e36ce6f893 fix race condition on proposal height for published txs (#2021)
* #1920 try to fix race condition on proposal height for published txs

- related to create_empty_blocks=false
- published height for accepted tx can be wrong (too low)
- use the actual mempool height + 1 for the proposal
- expose Height() on mempool

* #1920 add initial test for mempool.Height()

- not sure how to test the lock
- can the mutex reference be of type Locker?
-- this way, we can use a "mock" of the mutex to test triggering

* #1920 use the ConsensusState height in favor of mempool

- gets rid of indirections
- doesn't need any "+1" magic

* #1920 cosmetic

- if we use cs.Height, it's enough to evaluate right before propose

* #1920 cleanup TODO and non-needed code

* #1920 add changelog entry
2018-07-23 15:34:45 +04:00
Dev Ojha
c5c1689591 crypto/secp256k1: Add godocs, remove indirection in privkeys (#2017)
* crypto/secp256k1: Add godocs, remove indirection in privkeys

The following was previously done for creating secp256k1 private keys:

First obtain privkey bytes. Then create a private key in the
underlying library, with scalar exponent equal to privKeyBytes.
(The method called was secp256k1.PrivKeyFromBytes,
fb90c334df/btcec/privkey.go (L21))

Then the private key was serialized using the underlying library, which just
returns back the bytes that comprised the scalar exponent, but padded to be
exactly 32 bytes.
fb90c334df/btcec/privkey.go (L70)

Thus the entire indirection of calling the underlying library can be avoided
by just ensuring that we pass in a 32 byte value. A test case has even been written
to show this more clearly in review.

* crypto/secp256k1: Address PR comments

Squash this commit

* crypto: Remove note about re-registering amino paths when unnecessary.

This commit should be squashed.
2018-07-21 08:52:04 +04:00
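
A tiny sketch of the serialization point made above (illustrative only): since the underlying library hands back the scalar padded to exactly 32 bytes, passing in a value that is already 32 bytes makes the round trip through the library a no-op, so it can be skipped.

```go
package main

import "fmt"

// padTo32 left-pads a scalar to 32 bytes, mimicking what the underlying
// library's serialization effectively returned.
func padTo32(scalar []byte) [32]byte {
	var out [32]byte
	copy(out[32-len(scalar):], scalar)
	return out
}

func main() {
	short := []byte{0x01, 0x02} // a scalar shorter than 32 bytes
	full := padTo32(short)
	// A 32-byte input round-trips unchanged.
	fmt.Printf("%x\n", padTo32(full[:]))
}
```
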
Jae Kwon
b41b89732d Update store.go
Revert to SetSync for saveABCIResponses() as per Ethan's feedback
2018-07-20 14:38:27 -07:00
Anton Kaliaev
5e96421d44 Merge pull request #1966 from tendermint/dev/refactor_crypto
crypto: Refactor to move files out of the top level directory
2018-07-20 22:48:41 +04:00
ValarDragon
c798702764 crypto: Remove Ed25519 and Secp256k1 suffix on GenPrivKey 2018-07-20 10:44:21 -07:00
ValarDragon
17c0029233 Merge remote-tracking branch 'origin/develop' into dev/refactor_crypto 2018-07-20 08:59:41 -07:00
Alexander Simmerl
0f2d97dffe Merge pull request #1742 from Liamsi/proto_files
Add Proto files for types.Header (incl. BlockId, Time, PartsSetHeader)
2018-07-20 17:43:25 +02:00
Alexander Simmerl
ed8714e40c Merge pull request #1965 from tendermint/693-part-2
make Block Header and Data non-pointers
2018-07-20 17:42:42 +02:00
Alexander Simmerl
6017d817e5 Merge pull request #2008 from tendermint/1772-rwmutex-cs
use RWMutex for consensus state
2018-07-20 17:24:55 +02:00
Alexander Simmerl
63835c0360 Merge pull request #2009 from tendermint/1772-call-validators-with-timeout
make `/status` RPC endpoint resistant to consensus halt
2018-07-20 17:22:16 +02:00
Alexander Simmerl
c82c60df11 rpc: Test Validator retrieval timeout 2018-07-20 17:08:55 +02:00
Dev Ojha
67762aec73 crypto/ed25519: Update the godocs (#2002)
This commit updates the godocs for the package, and adds an optimization
to the privkey.Pubkey() method.

The optimization is that in golang, the private key (due to interface
compatibility reasons) has a copy of the public key stored inside of it.
Therefore if this copy has already been computed, there is no need to
recompute it.
2018-07-20 10:09:30 +04:00
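
A sketch of the optimization described above (simplified; it relies on the standard ed25519 layout where a private key is seed || pubkey): if the trailing 32 bytes are already populated, return them instead of re-deriving the public key.

```go
package main

import (
	"bytes"
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/ed25519"
)

// pubKey returns the cached public-key half of the private key when present,
// and only falls back to deriving it when that half is all zeroes.
func pubKey(priv ed25519.PrivateKey) ed25519.PublicKey {
	cached := priv[32:]
	if !bytes.Equal(cached, make([]byte, ed25519.PublicKeySize)) {
		out := make([]byte, ed25519.PublicKeySize)
		copy(out, cached)
		return out // no scalar multiplication needed
	}
	return priv.Public().(ed25519.PublicKey)
}

func main() {
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}
	fmt.Println(bytes.Equal(pubKey(priv), pub)) // true
}
```
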
Anton Kaliaev
0fbb465b8f add protoc_all and protoc_grpc to .PHONY 2018-07-20 01:27:41 +04:00
Anton Kaliaev
2e75214316 update gogo to 1.1.1 and other misc. updates
Refs #1883
2018-07-20 01:27:41 +04:00
Anton Kaliaev
5be456e5b1 update grpc version to 1.13.0
Refs #1883
2018-07-20 01:27:41 +04:00
Anton Kaliaev
1bd5476854 make /status RPC endpoint resistant to consensus halt
Refs #1772
2018-07-19 11:26:50 +04:00
Anton Kaliaev
5037dd40c5 use RWMutex for consensus state
allows multiple RPC requests to query consensus state

Refs #1772
2018-07-19 10:49:12 +04:00
Liamsi
96818af9d5 fix protos to make all tests pass, document differences 2018-07-18 19:06:38 +02:00
ValarDragon
571e602f07 Merge remote-tracking branch 'origin/develop' into dev/refactor_crypto 2018-07-18 08:54:51 -07:00
ValarDragon
99e582d79a crypto: Refactor to move files out of the top level directory
Currently the top level directory contains basically all of the code
for the crypto package. This PR moves the crypto code into submodules
in a similar manner to what `golang/x/crypto` does. This improves code
organization.

Ref discussion: https://github.com/tendermint/tendermint/pull/1966

Closes #1956
2018-07-18 08:38:44 -07:00
Liamsi
a81ca93139 update to new (timestamp & empty structs) encoding in amino
- timestamps no longer have fixed length encoding
-
2018-07-18 16:37:15 +02:00
Liamsi
2744682e77 update to latest amino (pre) release v0.11.1
- also reformat code and order imports
2018-07-18 15:53:53 +02:00
Liamsi
d665c79cc9 WIP: more empty struct examples 2018-07-18 15:18:10 +02:00
Liamsi
3c38a25bbb add empty struct examples 2018-07-18 15:17:51 +02:00
Liamsi
0cd82fa166 add empty struct examples 2018-07-18 15:14:41 +02:00
Liamsi
99fa7f8132 everything works with https://github.com/tendermint/go-amino/pull/178 2018-07-18 15:14:41 +02:00
Liamsi
82104c9329 almost 2018-07-18 15:14:41 +02:00
Zach
40342bfa4a Update DOCS_README.md (#1985) 2018-07-18 13:32:17 +04:00
Anton Kaliaev
912fe477a4 Merge pull request #1987 from silasdavis/static-marshaler
Generate static marshalling methods for ABCI types to make compatible with downstream GRPC usage
2018-07-18 13:26:44 +04:00
Anton Kaliaev
b31ee798bd preserve original address and dial it instead of self-reported address (#1994)
Refs #1720
2018-07-18 13:23:29 +04:00
Jeremiah Andrews
6c4ca140ed Add private peer ID tracking to AddrBook (#1989)
* Add private peer ID tracking to AddrBook

* Remove private peer tracking/blocking from pex

* debug level msg when we fail to add private address
2018-07-18 13:22:09 +04:00
needkane
449846ccb2 NodeInfo version check: delete redundant code 2018-07-18 13:12:52 +04:00
Dev Ojha
3353bb99ae tools: Remove redundant grep -v vendors/ (#1996)
* tools: Remove redundant grep -v vendors/

This was used in conjunction with `go list <path>`; however, `go list`
already ignores the vendor directory, which made this `grep -v` redundant.

* Missed an apostrophe
2018-07-17 21:33:00 +04:00
Silas Davis
b7e5cbeb3b Remove pb.go files from codecov
Signed-off-by: Silas Davis <silas@monax.io>
2018-07-17 17:42:30 +01:00
Silas Davis
21b900dceb Add gogo generated tests for pb.go files
Signed-off-by: Silas Davis <silas@monax.io>
2018-07-17 14:56:11 +01:00
Silas Davis
c9f92f465b Use pattern rule for protoc building and \\nolint in generated pb.go files
Signed-off-by: Silas Davis <silas@monax.io>
2018-07-17 14:20:49 +01:00
Silas Davis
398f3779cc Add gogoproto marshallers to proto files in order to make use of
gogoproto.nullable compatible with GRPC downstream of ABCI and libs
protobuf types
2018-07-17 11:51:38 +01:00
Zach
257622cf6b update rpc/core/doc.go
Closes #1932 (#1986)
2018-07-17 10:52:49 +04:00
Max Levy
07ad325b1a A link fixed (#1991)
To address the relocation of terraform-and-ansible.md under networks/, as per e54c0f804f (diff-95ac35ad12aa33ed70e9ff5db2229771)
2018-07-17 10:49:38 +04:00
Max Levy
76f5e92528 Fixed a link (#1992)
Broken by #e54c0f8
2018-07-17 10:44:59 +04:00
Dev Ojha
71859f8f3b common/rand: Remove exponential distribution functions (#1979)
We were computing these functions incorrectly.
I'm not sure what distribution these numbers are, but it isn't the
normal exponential distribution. (We're making the probability of
getting a number of a particular bitlength equal, but the number in
that bitlength that gets chosen is chosen uniformly)

We weren't using these functions anywhere in our codebase, and they
had a nomenclature error. (There aren't exponentially distributed
integers, instead they would be geometrically distributed)
2018-07-16 11:38:04 +04:00
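
A hypothetical reconstruction of the kind of sampling being removed (not the exact removed code): choosing the bit length uniformly and then a uniform value of that bit length makes every bit length equally likely, which is a geometric-style distribution over magnitudes rather than an exponential distribution over integers.

```go
package main

import (
	"fmt"
	"math/rand"
)

// notActuallyExponential picks a bit length uniformly in [1,63], then a
// uniform value with exactly that many bits.
func notActuallyExponential(r *rand.Rand) uint64 {
	bits := 1 + r.Intn(63)
	top := uint64(1) << uint(bits-1)   // force the leading bit
	low := r.Uint64() >> uint(65-bits) // uniform lower bits-1 bits
	return top | low
}

func main() {
	r := rand.New(rand.NewSource(1))
	for i := 0; i < 5; i++ {
		fmt.Println(notActuallyExponential(r))
	}
}
```
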
ValarDragon
a3df06d081 libs/common/rand: Update godocs
The godocs fell out of sync with the code here. Additionally we had
a warning on every function that these randomness functions weren't for
cryptographic use. However these warnings are confusing, since
there was no implication that they would be secure there, and a
single warning on the actual Rand type would suffice. (This is what
is done in golang's math/rand godoc)

Additionally we indicated that rand.Bytes() was reading OS randomness
but in fact that had been changed.
2018-07-16 11:38:04 +04:00
Dev Ojha
dae7dc30e0 Switch usage of math/rand to cmn's rand (#1980)
This commit switches all usage of math/rand to cmn's rand. The only
exceptions are within the random file itself, the tools package, and the
crypto package. In tools you don't want it to lock between the go-routines.
The crypto package doesn't use it, so that the crypto package has no other
dependencies within tendermint/tendermint, for easier portability.

Crypto/rand usage is unadjusted.

Closes #1343
2018-07-16 11:20:37 +04:00
Dev Ojha
14cebd181d config: 10x default send/recv rate (#1978)
* config: 10x default send/recv rate

This increases the default send/recv rate from 512 kB/s to 5.12 MB/s

Closes #1752

* Fix typo
2018-07-16 11:17:27 +04:00
Ethan Buchman
522a425708 Merge pull request #1975 from tendermint/bucky/1951-fix-protoc-libs
makefile: fix protoc_libs
2018-07-15 13:19:00 +01:00
Ethan Buchman
0fbcbb3aeb makefile: fix protoc_libs 2018-07-14 18:33:18 +01:00
Ethan Buchman
8a5930ad72 Merge pull request #1974 from tendermint/master
Merge master back to develop
2018-07-14 15:13:52 +01:00
Ethan Buchman
c64a3c74c8 Merge pull request #1972 from tendermint/release/v0.22.4
Release/v0.22.4
2018-07-14 14:55:12 +01:00
Ethan Buchman
722f8a1b6f Merge pull request #1973 from tendermint/bucky/fix-pubsub-stop
fix stopping pubsub
2018-07-14 14:47:20 +01:00
Ethan Buchman
d903057011 fix stopping pubsub 2018-07-14 14:50:56 +01:00
Ethan Buchman
74106c8bea update changelog 2018-07-14 14:05:50 +01:00
Anton Kaliaev
270659f03f make Block Header and Data non-pointers
make BlockMeta Header a non-pointer

Refs #693
2018-07-13 12:05:54 +04:00
Jae Kwon
8128627f08 Optimizing blockchain reactor.
Should be paired with https://github.com/tendermint/iavl/pull/65.
2018-06-22 21:47:48 -07:00
211 changed files with 22611 additions and 3867 deletions


@@ -16,7 +16,7 @@ jobs:
- checkout
- restore_cache:
keys:
- v2-pkg-cache
- v3-pkg-cache
- run:
name: tools
command: |
@@ -38,11 +38,11 @@ jobs:
- bin
- profiles
- save_cache:
key: v2-pkg-cache
key: v3-pkg-cache
paths:
- /go/pkg
- save_cache:
key: v2-tree-{{ .Environment.CIRCLE_SHA1 }}
key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
paths:
- /go/src/github.com/tendermint/tendermint
@@ -52,9 +52,9 @@ jobs:
- attach_workspace:
at: /tmp/workspace
- restore_cache:
key: v2-pkg-cache
key: v3-pkg-cache
- restore_cache:
key: v2-tree-{{ .Environment.CIRCLE_SHA1 }}
key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- run:
name: slate docs
command: |
@@ -68,15 +68,21 @@ jobs:
- attach_workspace:
at: /tmp/workspace
- restore_cache:
key: v2-pkg-cache
key: v3-pkg-cache
- restore_cache:
key: v2-tree-{{ .Environment.CIRCLE_SHA1 }}
key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- run:
name: metalinter
command: |
set -ex
export PATH="$GOBIN:$PATH"
make metalinter
- run:
name: check_dep
command: |
set -ex
export PATH="$GOBIN:$PATH"
make check_dep
test_abci_apps:
<<: *defaults
@@ -84,9 +90,9 @@ jobs:
- attach_workspace:
at: /tmp/workspace
- restore_cache:
key: v2-pkg-cache
key: v3-pkg-cache
- restore_cache:
key: v2-tree-{{ .Environment.CIRCLE_SHA1 }}
key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- run:
name: Run abci apps tests
command: |
@@ -101,9 +107,9 @@ jobs:
- attach_workspace:
at: /tmp/workspace
- restore_cache:
key: v2-pkg-cache
key: v3-pkg-cache
- restore_cache:
key: v2-tree-{{ .Environment.CIRCLE_SHA1 }}
key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- run:
name: Run abci-cli tests
command: |
@@ -116,9 +122,9 @@ jobs:
- attach_workspace:
at: /tmp/workspace
- restore_cache:
key: v2-pkg-cache
key: v3-pkg-cache
- restore_cache:
key: v2-tree-{{ .Environment.CIRCLE_SHA1 }}
key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- run: sudo apt-get update && sudo apt-get install -y --no-install-recommends bsdmainutils
- run:
name: Run tests
@@ -131,17 +137,17 @@ jobs:
- attach_workspace:
at: /tmp/workspace
- restore_cache:
key: v2-pkg-cache
key: v3-pkg-cache
- restore_cache:
key: v2-tree-{{ .Environment.CIRCLE_SHA1 }}
key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- run: mkdir -p /tmp/logs
- run:
name: Run tests
command: |
for pkg in $(go list github.com/tendermint/tendermint/... | grep -v /vendor/ | circleci tests split --split-by=timings); do
for pkg in $(go list github.com/tendermint/tendermint/... | circleci tests split --split-by=timings); do
id=$(basename "$pkg")
GOCACHE=off go test -v -timeout 5m -race -coverprofile=/tmp/workspace/profiles/$id.out -covermode=atomic "$pkg" | tee "/tmp/logs/$id-$RANDOM.log"
GOCACHE=off go test -timeout 5m -race -coverprofile=/tmp/workspace/profiles/$id.out -covermode=atomic "$pkg" | tee "/tmp/logs/$id-$RANDOM.log"
done
- persist_to_workspace:
root: /tmp/workspace
@@ -156,9 +162,9 @@ jobs:
- attach_workspace:
at: /tmp/workspace
- restore_cache:
key: v2-pkg-cache
key: v3-pkg-cache
- restore_cache:
key: v2-tree-{{ .Environment.CIRCLE_SHA1 }}
key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- run:
name: Run tests
command: bash test/persist/test_failure_indices.sh
@@ -181,7 +187,7 @@ jobs:
- attach_workspace:
at: /tmp/workspace
- restore_cache:
key: v2-tree-{{ .Environment.CIRCLE_SHA1 }}
key: v3-tree-{{ .Environment.CIRCLE_SHA1 }}
- run:
name: gather
command: |


@@ -1,42 +0,0 @@
<!-- Thanks for filing an issue! Before hitting the button, please answer these questions.-->
**Is this a BUG REPORT or FEATURE REQUEST?** (choose one):
<!--
If this is a BUG REPORT, please:
- Fill in as much of the template below as you can.
If this is a FEATURE REQUEST, please:
- Describe *in detail* the feature/behavior/change you'd like to see.
In both cases, be ready for followup questions, and please respond in a timely
manner. We might ask you to provide additional logs and data (tendermint & app)
in a case of bug.
-->
**Tendermint version** (use `tendermint version` or `git rev-parse --verify HEAD` if installed from source):
**ABCI app** (name for built-in, URL for self-written if it's publicly available):
**Environment**:
- **OS** (e.g. from /etc/os-release):
- **Install tools**:
- **Others**:
**What happened**:
**What you expected to happen**:
**How to reproduce it** (as minimally and precisely as possible):
**Logs (you can paste a small part showing an error or link a pastebin, gist, etc. containing more of the log file)**:
**Config (you can paste only the changes you've made)**:
**`/dump_consensus_state` output for consensus bugs**
**Anything else do we need to know**:

.github/ISSUE_TEMPLATE/bug-report.md vendored Normal file

@@ -0,0 +1,42 @@
---
name: Bug Report
about: Create a report to help us squash bugs!
---
<!--
Please fill in as much of the template below as you can.
Be ready for followup questions, and please respond in a timely
manner. We might ask you to provide additional logs and data (tendermint & app).
-->
**Tendermint version** (use `tendermint version` or `git rev-parse --verify HEAD` if installed from source):
**ABCI app** (name for built-in, URL for self-written if it's publicly available):
**Environment**:
- **OS** (e.g. from /etc/os-release):
- **Install tools**:
- **Others**:
**What happened**:
**What you expected to happen**:
**Have you tried the latest version**: yes/no
**How to reproduce it** (as minimally and precisely as possible):
**Logs (paste a small part showing an error (< 10 lines) or link a pastebin, gist, etc. containing more of the log file)**:
**Config (you can paste only the changes you've made)**:
**node command runtime flags**:
**`/dump_consensus_state` output for consensus bugs**
**Anything else we need to know**:


@@ -0,0 +1,13 @@
---
name: Feature Request
about: Create a proposal to request a feature
---
<!--
Please describe *in detail* the feature/behavior/change you'd like to see.
Be ready for followup questions, and please respond in a timely
manner.
Word of caution: poorly thought out proposals may be rejected without deliberation
-->


@@ -3,4 +3,4 @@
* [ ] Updated all relevant documentation in docs
* [ ] Updated all code comments where relevant
* [ ] Wrote tests
* [ ] Updated CHANGELOG.md
* [ ] Updated CHANGELOG_PENDING.md

.gitignore vendored

@@ -16,8 +16,8 @@ coverage.txt
docs/_build
*.log
abci-cli
abci/types/types.pb.go
docs/node_modules/
index.html.md
scripts/wal2json/wal2json
scripts/cutWALUntil/cutWALUntil


@@ -1,20 +1,124 @@
# Changelog
## 0.23.0
*August 5th, 2018*
This release includes breaking upgrades in our P2P encryption,
some ABCI messages, and how we encode time and signatures.
A few more changes are still coming to the Header, ABCI,
and validator set handling to better support light clients, BFT time, and
upgrades. Most notably, validator set changes will be delayed by one block (see
[#1815][i1815]).
We also removed `make ensure_deps` in favour of `make get_vendor_deps`.
BREAKING CHANGES:
- [abci] Changed time format from int64 to google.protobuf.Timestamp
- [abci] Changed Validators to LastCommitInfo in RequestBeginBlock
- [abci] Removed Fee from ResponseDeliverTx and ResponseCheckTx
- [crypto] Switch crypto.Signature from interface to []byte for space efficiency
[#2128](https://github.com/tendermint/tendermint/pull/2128)
- NOTE: this means signatures no longer have the prefix bytes in Amino
binary nor the `type` field in Amino JSON. They're just bytes.
- [p2p] Remove salsa and ripemd primitives, in favor of using chacha as a stream cipher, and hkdf [#2054](https://github.com/tendermint/tendermint/pull/2054)
- [tools] Removed `make ensure_deps` in favor of `make get_vendor_deps`
- [types] CanonicalTime uses nanoseconds instead of clipping to ms
- breaks serialization/signing of all messages with a timestamp
FEATURES:
- [tools] Added `make check_dep`
- ensures gopkg.lock is synced with gopkg.toml
- ensures no branches are used in the gopkg.toml
IMPROVEMENTS:
- [blockchain] Improve fast-sync logic
[#1805](https://github.com/tendermint/tendermint/pull/1805)
- tweak params
- only process one block at a time to avoid starving
- [common] bit array functions which take in another parameter are now thread safe
- [crypto] Switch hkdfchachapoly1305 to xchachapoly1305
- [p2p] begin connecting to peers as soon a seed node provides them to you ([#2093](https://github.com/tendermint/tendermint/issues/2093))
BUG FIXES:
- [common] Safely handle cases where atomic write files already exist [#2109](https://github.com/tendermint/tendermint/issues/2109)
- [privval] fix a deadline for accepting new connections in socket private
validator.
- [p2p] Allow startup if a configured seed node's IP can't be resolved ([#1716](https://github.com/tendermint/tendermint/issues/1716))
- [node] Fully exit when CTRL-C is pressed even if consensus state panics [#2072](https://github.com/tendermint/tendermint/issues/2072)
[i1815]: https://github.com/tendermint/tendermint/pull/1815
## 0.22.8
*July 26th, 2018*
BUG FIXES
- [consensus, blockchain] Fix 0.22.7 below.
## 0.22.7
*July 26th, 2018*
BUG FIXES
- [consensus, blockchain] Register the Evidence interface so it can be
marshalled/unmarshalled by the blockchain and consensus reactors
## 0.22.6
*July 24th, 2018*
BUG FIXES
- [rpc] Fix `/blockchain` endpoint
- (#2049) Fix OOM attack by returning error on negative input
- Fix result length to have max 20 (instead of 21) block metas
- [rpc] Validate height is non-negative in `/abci_query`
- [consensus] (#2050) Include evidence in proposal block parts (previously evidence was
not being included in blocks!)
- [p2p] (#2046) Close rejected inbound connections so file descriptor doesn't
leak
- [Gopkg] (#2053) Fix versions in the toml
## 0.22.5
*July 23th, 2018*
BREAKING CHANGES:
- [crypto] Refactor `tendermint/crypto` into many subpackages
- [libs/common] remove exponentially distributed random numbers
IMPROVEMENTS:
- [abci, libs/common] Generated gogoproto static marshaller methods
- [config] Increase default send/recv rates to 5 mB/s
- [p2p] reject addresses coming from private peers
- [p2p] allow persistent peers to be private
BUG FIXES:
- [mempool] fixed a race condition when `create_empty_blocks=false` where a
transaction is published at an old height.
- [p2p] dial external IP setup by `persistent_peers`, not internal NAT IP
- [rpc] make `/status` RPC endpoint resistant to consensus halt
## 0.22.4
*July 14th, 2018*
FEATURES:
- [tools] Merged in from github.com/tendermint/tools
IMPROVEMENTS:
BREAKING CHANGES:
- [genesis] removed deprecated `app_options` field.
- [types] Genesis.AppStateJSON -> Genesis.AppState
FEATURES:
- [tools] Merged in from github.com/tendermint/tools
BUG FIXES:
- [tools/tm-bench] Various fixes
- [consensus] Wait for WAL to stop on shutdown
- [abci] Fix #1891, pending requests cannot hang when abci server dies. Previously a crash in BeginBlock could leave tendermint in broken state.
- [abci] Fix #1891, pending requests cannot hang when abci server dies.
Previously a crash in BeginBlock could leave tendermint in broken state.
## 0.22.3

Gopkg.lock generated

@@ -3,48 +3,63 @@
[[projects]]
branch = "master"
digest = "1:d6afaeed1502aa28e80a4ed0981d570ad91b2579193404256ce672ed0a609e0d"
name = "github.com/beorn7/perks"
packages = ["quantile"]
pruneopts = "UT"
revision = "3a771d992973f24aa725d07868b467d1ddfceafb"
[[projects]]
branch = "master"
digest = "1:2c00f064ba355903866cbfbf3f7f4c0fe64af6638cc7d1b8bdcf3181bc67f1d8"
name = "github.com/btcsuite/btcd"
packages = ["btcec"]
revision = "fdfc19097e7ac6b57035062056f5b7b4638b8898"
pruneopts = "UT"
revision = "f5e261fc9ec3437697fb31d8b38453c293204b29"
[[projects]]
digest = "1:1d8e1cb71c33a9470bbbae09bfec09db43c6bf358dfcae13cd8807c4e2a9a2bf"
name = "github.com/btcsuite/btcutil"
packages = [
"base58",
"bech32"
"bech32",
]
pruneopts = "UT"
revision = "d4cc87b860166d00d6b5b9e0d3b3d71d6088d4d4"
[[projects]]
digest = "1:a2c1d0e43bd3baaa071d1b9ed72c27d78169b2b269f71c105ac4ba34b1be4a39"
name = "github.com/davecgh/go-spew"
packages = ["spew"]
pruneopts = "UT"
revision = "346938d642f2ec3594ed81d874461961cd0faa76"
version = "v1.1.0"
[[projects]]
digest = "1:c7644c73a3d23741fdba8a99b1464e021a224b7e205be497271a8003a15ca41b"
name = "github.com/ebuchman/fail-test"
packages = ["."]
pruneopts = "UT"
revision = "95f809107225be108efcf10a3509e4ea6ceef3c4"
[[projects]]
digest = "1:544229a3ca0fb2dd5ebc2896d3d2ff7ce096d9751635301e44e37e761349ee70"
name = "github.com/fortytw2/leaktest"
packages = ["."]
pruneopts = "UT"
revision = "a5ef70473c97b71626b9abeda80ee92ba2a7de9e"
version = "v1.2.0"
[[projects]]
digest = "1:abeb38ade3f32a92943e5be54f55ed6d6e3b6602761d74b4aab4c9dd45c18abd"
name = "github.com/fsnotify/fsnotify"
packages = ["."]
pruneopts = "UT"
revision = "c2828203cd70a50dcccfb2761f8b1f8ceef9a8e9"
version = "v1.4.7"
[[projects]]
digest = "1:fdf5169073fb0ad6dc12a70c249145e30f4058647bea25f0abd48b6d9f228a11"
name = "github.com/go-kit/kit"
packages = [
"log",
@@ -53,24 +68,30 @@
"metrics",
"metrics/discard",
"metrics/internal/lv",
"metrics/prometheus"
"metrics/prometheus",
]
pruneopts = "UT"
revision = "4dc7be5d2d12881735283bcab7352178e190fc71"
version = "v0.6.0"
[[projects]]
digest = "1:31a18dae27a29aa074515e43a443abfd2ba6deb6d69309d8d7ce789c45f34659"
name = "github.com/go-logfmt/logfmt"
packages = ["."]
pruneopts = "UT"
revision = "390ab7935ee28ec6b286364bba9b4dd6410cb3d5"
version = "v0.3.0"
[[projects]]
digest = "1:c4a2528ccbcabf90f9f3c464a5fc9e302d592861bbfd0b7135a7de8a943d0406"
name = "github.com/go-stack/stack"
packages = ["."]
pruneopts = "UT"
revision = "259ab82a6cad3992b4e21ff5cac294ccb06474bc"
version = "v1.7.0"
[[projects]]
digest = "1:35621fe20f140f05a0c4ef662c26c0ab4ee50bca78aa30fe87d33120bd28165e"
name = "github.com/gogo/protobuf"
packages = [
"gogoproto",
@@ -78,37 +99,45 @@
"proto",
"protoc-gen-gogo/descriptor",
"sortkeys",
"types"
"types",
]
revision = "1adfc126b41513cc696b209667c8656ea7aac67c"
version = "v1.0.0"
pruneopts = "UT"
revision = "636bf0302bc95575d69441b25a2603156ffdddf1"
version = "v1.1.1"
[[projects]]
digest = "1:17fe264ee908afc795734e8c4e63db2accabaf57326dbf21763a7d6b86096260"
name = "github.com/golang/protobuf"
packages = [
"proto",
"ptypes",
"ptypes/any",
"ptypes/duration",
"ptypes/timestamp"
"ptypes/timestamp",
]
revision = "925541529c1fa6821df4e44ce2723319eb2be768"
version = "v1.0.0"
pruneopts = "UT"
revision = "b4deda0973fb4c70b50d226b1af49f3da59f5265"
version = "v1.1.0"
[[projects]]
branch = "master"
digest = "1:4a0c6bb4805508a6287675fac876be2ac1182539ca8a32468d8128882e9d5009"
name = "github.com/golang/snappy"
packages = ["."]
pruneopts = "UT"
revision = "2e65f85255dbc3072edf28d6b5b8efc472979f5a"
[[projects]]
digest = "1:43dd08a10854b2056e615d1b1d22ac94559d822e1f8b6fcc92c1a1057e85188e"
name = "github.com/gorilla/websocket"
packages = ["."]
pruneopts = "UT"
revision = "ea4d1f681babbce9545c9c5f3d5194a789c89f5b"
version = "v1.2.0"
[[projects]]
branch = "master"
digest = "1:12247a2e99a060cc692f6680e5272c8adf0b8f572e6bce0d7095e624c958a240"
name = "github.com/hashicorp/hcl"
packages = [
".",
@@ -119,154 +148,197 @@
"hcl/token",
"json/parser",
"json/scanner",
"json/token"
"json/token",
]
pruneopts = "UT"
revision = "ef8a98b0bbce4a65b5aa4c368430a80ddc533168"
[[projects]]
digest = "1:870d441fe217b8e689d7949fef6e43efbc787e50f200cb1e70dbca9204a1d6be"
name = "github.com/inconshreveable/mousetrap"
packages = ["."]
pruneopts = "UT"
revision = "76626ae9c91c4f2a10f34cad8ce83ea42c93bb75"
version = "v1.0"
[[projects]]
branch = "master"
digest = "1:39b27d1381a30421f9813967a5866fba35dc1d4df43a6eefe3b7a5444cb07214"
name = "github.com/jmhodges/levigo"
packages = ["."]
pruneopts = "UT"
revision = "c42d9e0ca023e2198120196f842701bb4c55d7b9"
[[projects]]
branch = "master"
digest = "1:a64e323dc06b73892e5bb5d040ced475c4645d456038333883f58934abbf6f72"
name = "github.com/kr/logfmt"
packages = ["."]
pruneopts = "UT"
revision = "b84e30acd515aadc4b783ad4ff83aff3299bdfe0"
[[projects]]
digest = "1:c568d7727aa262c32bdf8a3f7db83614f7af0ed661474b24588de635c20024c7"
name = "github.com/magiconair/properties"
packages = ["."]
pruneopts = "UT"
revision = "c2353362d570a7bfa228149c62842019201cfb71"
version = "v1.8.0"
[[projects]]
digest = "1:ff5ebae34cfbf047d505ee150de27e60570e8c394b3b8fdbb720ff6ac71985fc"
name = "github.com/matttproud/golang_protobuf_extensions"
packages = ["pbutil"]
pruneopts = "UT"
revision = "c12348ce28de40eed0136aa2b644d0ee0650e56c"
version = "v1.0.1"
[[projects]]
branch = "master"
digest = "1:5ab79470a1d0fb19b041a624415612f8236b3c06070161a910562f2b2d064355"
name = "github.com/mitchellh/mapstructure"
packages = ["."]
revision = "bb74f1db0675b241733089d5a1faa5dd8b0ef57b"
pruneopts = "UT"
revision = "f15292f7a699fcc1a38a80977f80a046874ba8ac"
[[projects]]
digest = "1:95741de3af260a92cc5c7f3f3061e85273f5a81b5db20d4bd68da74bd521675e"
name = "github.com/pelletier/go-toml"
packages = ["."]
pruneopts = "UT"
revision = "c01d1270ff3e442a8a57cddc1c92dc1138598194"
version = "v1.2.0"
[[projects]]
digest = "1:40e195917a951a8bf867cd05de2a46aaf1806c50cf92eebf4c16f78cd196f747"
name = "github.com/pkg/errors"
packages = ["."]
pruneopts = "UT"
revision = "645ef00459ed84a119197bfb8d8205042c6df63d"
version = "v0.8.0"
[[projects]]
digest = "1:0028cb19b2e4c3112225cd871870f2d9cf49b9b4276531f03438a88e94be86fe"
name = "github.com/pmezard/go-difflib"
packages = ["difflib"]
pruneopts = "UT"
revision = "792786c7400a136282c1664665ae0a8db921c6c2"
version = "v1.0.0"
[[projects]]
digest = "1:c1a04665f9613e082e1209cf288bf64f4068dcd6c87a64bf1c4ff006ad422ba0"
name = "github.com/prometheus/client_golang"
packages = [
"prometheus",
"prometheus/promhttp"
"prometheus/promhttp",
]
pruneopts = "UT"
revision = "ae27198cdd90bf12cd134ad79d1366a6cf49f632"
[[projects]]
branch = "master"
digest = "1:2d5cd61daa5565187e1d96bae64dbbc6080dacf741448e9629c64fd93203b0d4"
name = "github.com/prometheus/client_model"
packages = ["go"]
revision = "99fa1f4be8e564e8a6b613da7fa6f46c9edafc6c"
pruneopts = "UT"
revision = "5c3871d89910bfb32f5fcab2aa4b9ec68e65a99f"
[[projects]]
branch = "master"
digest = "1:63b68062b8968092eb86bedc4e68894bd096ea6b24920faca8b9dcf451f54bb5"
name = "github.com/prometheus/common"
packages = [
"expfmt",
"internal/bitbucket.org/ww/goautoneg",
"model"
"model",
]
revision = "7600349dcfe1abd18d72d3a1770870d9800a7801"
pruneopts = "UT"
revision = "c7de2306084e37d54b8be01f3541a8464345e9a5"
[[projects]]
branch = "master"
digest = "1:8c49953a1414305f2ff5465147ee576dd705487c35b15918fcd4efdc0cb7a290"
name = "github.com/prometheus/procfs"
packages = [
".",
"internal/util",
"nfs",
"xfs"
"xfs",
]
revision = "ae68e2d4c00fed4943b5f6698d504a5fe083da8a"
pruneopts = "UT"
revision = "05ee40e3a273f7245e8777337fc7b46e533a9a92"
[[projects]]
digest = "1:c4556a44e350b50a490544d9b06e9fba9c286c21d6c0e47f54f3a9214597298c"
name = "github.com/rcrowley/go-metrics"
packages = ["."]
pruneopts = "UT"
revision = "e2704e165165ec55d062f5919b4b29494e9fa790"
[[projects]]
digest = "1:bd1ae00087d17c5a748660b8e89e1043e1e5479d0fea743352cda2f8dd8c4f84"
name = "github.com/spf13/afero"
packages = [
".",
"mem"
"mem",
]
pruneopts = "UT"
revision = "787d034dfe70e44075ccc060d346146ef53270ad"
version = "v1.1.1"
[[projects]]
digest = "1:516e71bed754268937f57d4ecb190e01958452336fa73dbac880894164e91c1f"
name = "github.com/spf13/cast"
packages = ["."]
pruneopts = "UT"
revision = "8965335b8c7107321228e3e3702cab9832751bac"
version = "v1.2.0"
[[projects]]
digest = "1:7ffc0983035bc7e297da3688d9fe19d60a420e9c38bef23f845c53788ed6a05e"
name = "github.com/spf13/cobra"
packages = ["."]
pruneopts = "UT"
revision = "7b2c5ac9fc04fc5efafb60700713d4fa609b777b"
version = "v0.0.1"
[[projects]]
branch = "master"
digest = "1:080e5f630945ad754f4b920e60b4d3095ba0237ebf88dc462eb28002932e3805"
name = "github.com/spf13/jwalterweatherman"
packages = ["."]
pruneopts = "UT"
revision = "7c0cea34c8ece3fbeb2b27ab9b59511d360fb394"
[[projects]]
digest = "1:9424f440bba8f7508b69414634aef3b2b3a877e522d8a4624692412805407bb7"
name = "github.com/spf13/pflag"
packages = ["."]
pruneopts = "UT"
revision = "583c0c0531f06d5278b7d917446061adc344b5cd"
version = "v1.0.1"
[[projects]]
digest = "1:f8e1a678a2571e265f4bf91a3e5e32aa6b1474a55cb0ea849750cc177b664d96"
name = "github.com/spf13/viper"
packages = ["."]
pruneopts = "UT"
revision = "25b30aa063fc18e48662b86996252eabdcf2f0c7"
version = "v1.0.0"
[[projects]]
digest = "1:7e8d267900c7fa7f35129a2a37596e38ed0f11ca746d6d9ba727980ee138f9f6"
name = "github.com/stretchr/testify"
packages = [
"assert",
"require"
"require",
]
pruneopts = "UT"
revision = "12b6f73e6084dad08a7c6e575284b177ecafbc71"
version = "v1.2.1"
[[projects]]
branch = "master"
digest = "1:b3cfb8d82b1601a846417c3f31c03a7961862cb2c98dcf0959c473843e6d9a2b"
name = "github.com/syndtr/goleveldb"
packages = [
"leveldb",
@@ -280,28 +352,34 @@
"leveldb/opt",
"leveldb/storage",
"leveldb/table",
"leveldb/util"
"leveldb/util",
]
pruneopts = "UT"
revision = "c4c61651e9e37fa117f53c5a906d3b63090d8445"
[[projects]]
branch = "master"
digest = "1:087aaa7920e5d0bf79586feb57ce01c35c830396ab4392798112e8aae8c47722"
name = "github.com/tendermint/ed25519"
packages = [
".",
"edwards25519",
"extra25519"
"extra25519",
]
pruneopts = "UT"
revision = "d8387025d2b9d158cf4efb07e7ebf814bcce2057"
[[projects]]
digest = "1:e9113641c839c21d8eaeb2c907c7276af1eddeed988df8322168c56b7e06e0e1"
name = "github.com/tendermint/go-amino"
packages = ["."]
pruneopts = "UT"
revision = "2106ca61d91029c931fd54968c2bb02dc96b1412"
version = "0.10.1"
[[projects]]
branch = "master"
digest = "1:c31a37cafc12315b8bd745c8ad6a006ac25350472488162a821e557b3e739d67"
name = "golang.org/x/crypto"
packages = [
"bcrypt",
@@ -317,11 +395,13 @@
"openpgp/errors",
"poly1305",
"ripemd160",
"salsa20/salsa"
"salsa20/salsa",
]
revision = "a49355c7e3f8fe157a85be2f77e6e269a0f89602"
pruneopts = "UT"
revision = "c126467f60eb25f8f27e5a981f32a87e3965053f"
[[projects]]
digest = "1:d36f55a999540d29b6ea3c2ea29d71c76b1d9853fdcd3e5c5cb4836f2ba118f1"
name = "golang.org/x/net"
packages = [
"context",
@@ -331,20 +411,24 @@
"idna",
"internal/timeseries",
"netutil",
"trace"
"trace",
]
pruneopts = "UT"
revision = "292b43bbf7cb8d35ddf40f8d5100ef3837cced3f"
[[projects]]
branch = "master"
digest = "1:bb0fe59917bdd5b89f49b9a8b26e5f465e325d9223b3a8e32254314bdf51e0f1"
name = "golang.org/x/sys"
packages = [
"cpu",
"unix"
"unix",
]
revision = "1b2967e3c290b7c545b3db0deeda16e9be4f98a2"
pruneopts = "UT"
revision = "3dc4335d56c789b04b0ba99b7a37249d9b614314"
[[projects]]
digest = "1:a2ab62866c75542dd18d2b069fec854577a20211d7c0ea6ae746072a1dccdd18"
name = "golang.org/x/text"
packages = [
"collate",
@@ -360,17 +444,22 @@
"unicode/bidi",
"unicode/cldr",
"unicode/norm",
"unicode/rangetable"
"unicode/rangetable",
]
pruneopts = "UT"
revision = "f21a4dfb5e38f5895301dc265a8def02365cc3d0"
version = "v0.3.0"
[[projects]]
branch = "master"
digest = "1:077c1c599507b3b3e9156d17d36e1e61928ee9b53a5b420f10f28ebd4a0b275c"
name = "google.golang.org/genproto"
packages = ["googleapis/rpc/status"]
revision = "7fd901a49ba6a7f87732eb344f6e3c5b19d1b200"
pruneopts = "UT"
revision = "daca94659cb50e9f37c1b834680f2e46358f10b0"
[[projects]]
digest = "1:2dab32a43451e320e49608ff4542fdfc653c95dcc35d0065ec9c6c3dd540ed74"
name = "google.golang.org/grpc"
packages = [
".",
@@ -382,9 +471,11 @@
"credentials",
"encoding",
"encoding/proto",
"grpclb/grpc_lb_v1/messages",
"grpclog",
"internal",
"internal/backoff",
"internal/channelz",
"internal/grpcrand",
"keepalive",
"metadata",
"naming",
@@ -395,20 +486,71 @@
"stats",
"status",
"tap",
"transport"
"transport",
]
revision = "d11072e7ca9811b1100b80ca0269ac831f06d024"
version = "v1.11.3"
pruneopts = "UT"
revision = "168a6198bcb0ef175f7dacec0b8691fc141dc9b8"
version = "v1.13.0"
[[projects]]
digest = "1:342378ac4dcb378a5448dd723f0784ae519383532f5e70ade24132c4c8693202"
name = "gopkg.in/yaml.v2"
packages = ["."]
pruneopts = "UT"
revision = "5420a8b6744d3b0345ab293f6fcba19c978f1183"
version = "v2.2.1"
[solve-meta]
analyzer-name = "dep"
analyzer-version = 1
inputs-digest = "b0718135d5ade0a75c6b8fe703f70eb9d8064ba871ec31abd9ace3c4ab944100"
input-imports = [
"github.com/btcsuite/btcd/btcec",
"github.com/btcsuite/btcutil/base58",
"github.com/btcsuite/btcutil/bech32",
"github.com/ebuchman/fail-test",
"github.com/fortytw2/leaktest",
"github.com/go-kit/kit/log",
"github.com/go-kit/kit/log/level",
"github.com/go-kit/kit/log/term",
"github.com/go-kit/kit/metrics",
"github.com/go-kit/kit/metrics/discard",
"github.com/go-kit/kit/metrics/prometheus",
"github.com/go-logfmt/logfmt",
"github.com/gogo/protobuf/gogoproto",
"github.com/gogo/protobuf/jsonpb",
"github.com/gogo/protobuf/proto",
"github.com/gogo/protobuf/types",
"github.com/golang/protobuf/proto",
"github.com/golang/protobuf/ptypes/timestamp",
"github.com/gorilla/websocket",
"github.com/jmhodges/levigo",
"github.com/pkg/errors",
"github.com/prometheus/client_golang/prometheus",
"github.com/prometheus/client_golang/prometheus/promhttp",
"github.com/rcrowley/go-metrics",
"github.com/spf13/cobra",
"github.com/spf13/viper",
"github.com/stretchr/testify/assert",
"github.com/stretchr/testify/require",
"github.com/syndtr/goleveldb/leveldb",
"github.com/syndtr/goleveldb/leveldb/errors",
"github.com/syndtr/goleveldb/leveldb/iterator",
"github.com/syndtr/goleveldb/leveldb/opt",
"github.com/tendermint/ed25519",
"github.com/tendermint/ed25519/extra25519",
"github.com/tendermint/go-amino",
"golang.org/x/crypto/bcrypt",
"golang.org/x/crypto/chacha20poly1305",
"golang.org/x/crypto/curve25519",
"golang.org/x/crypto/hkdf",
"golang.org/x/crypto/nacl/box",
"golang.org/x/crypto/nacl/secretbox",
"golang.org/x/crypto/openpgp/armor",
"golang.org/x/crypto/ripemd160",
"golang.org/x/net/context",
"golang.org/x/net/netutil",
"google.golang.org/grpc",
"google.golang.org/grpc/credentials",
]
solver-name = "gps-cdcl"
solver-version = 1


@@ -10,11 +10,6 @@
# name = "github.com/user/project"
# version = "1.0.0"
#
# [[constraint]]
# name = "github.com/user/project2"
# branch = "dev"
# source = "github.com/myfork/project2"
#
# [[override]]
# name = "github.com/x/y"
# version = "2.4.0"
@@ -35,11 +30,11 @@
[[constraint]]
name = "github.com/gogo/protobuf"
version = "=1.0.0"
version = "=1.1.1"
[[constraint]]
name = "github.com/golang/protobuf"
version = "=1.0.0"
version = "=1.1.0"
[[constraint]]
name = "github.com/gorilla/websocket"
@@ -63,11 +58,11 @@
[[constraint]]
name = "github.com/tendermint/go-amino"
version = "=0.10.1"
version = "=v0.10.1"
[[constraint]]
name = "google.golang.org/grpc"
version = "=1.11.3"
version = "=1.13.0"
[[constraint]]
name = "github.com/fortytw2/leaktest"
@@ -77,19 +72,9 @@
## Some repos dont have releases.
## Pin to revision
## We can remove this one by updating protobuf to v1.1.0
## but then the grpc tests break with
#--- FAIL: TestBroadcastTx (0.01s)
#panic: message/group field common.KVPair:bytes without pointer [recovered]
# panic: message/group field common.KVPair:bytes without pointer
#
# ...
#
# github.com/tendermint/tendermint/rpc/grpc_test.TestBroadcastTx(0xc420a5ab40)
# /go/src/github.com/tendermint/tendermint/rpc/grpc/grpc_test.go:29 +0x141
[[override]]
name = "google.golang.org/genproto"
revision = "7fd901a49ba6a7f87732eb344f6e3c5b19d1b200"
name = "github.com/jmhodges/levigo"
revision = "c42d9e0ca023e2198120196f842701bb4c55d7b9"
[[constraint]]
name = "github.com/ebuchman/fail-test"


@@ -5,39 +5,46 @@ GOTOOLS = \
github.com/gogo/protobuf/protoc-gen-gogo \
github.com/gogo/protobuf/gogoproto \
github.com/square/certstrap
PACKAGES=$(shell go list ./... | grep -v '/vendor/')
PACKAGES=$(shell go list ./...)
INCLUDE = -I=. -I=${GOPATH}/src -I=${GOPATH}/src/github.com/gogo/protobuf/protobuf
BUILD_TAGS?=tendermint
BUILD_TAGS?='tendermint'
BUILD_FLAGS = -ldflags "-X github.com/tendermint/tendermint/version.GitCommit=`git rev-parse --short=8 HEAD`"
all: check build test install
check: check_tools ensure_deps
check: check_tools get_vendor_deps
########################################
### Build Tendermint
build:
CGO_ENABLED=0 go build $(BUILD_FLAGS) -tags '$(BUILD_TAGS)' -o build/tendermint ./cmd/tendermint/
CGO_ENABLED=0 go build $(BUILD_FLAGS) -tags $(BUILD_TAGS) -o build/tendermint ./cmd/tendermint/
build_race:
CGO_ENABLED=0 go build -race $(BUILD_FLAGS) -tags '$(BUILD_TAGS)' -o build/tendermint ./cmd/tendermint
CGO_ENABLED=0 go build -race $(BUILD_FLAGS) -tags $(BUILD_TAGS) -o build/tendermint ./cmd/tendermint
install:
CGO_ENABLED=0 go install $(BUILD_FLAGS) -tags '$(BUILD_TAGS)' ./cmd/tendermint
CGO_ENABLED=0 go install $(BUILD_FLAGS) -tags $(BUILD_TAGS) ./cmd/tendermint
########################################
### Protobuf
protoc_all: protoc_libs protoc_abci protoc_grpc
%.pb.go: %.proto
## If you get the following error,
## "error while loading shared libraries: libprotobuf.so.14: cannot open shared object file: No such file or directory"
## See https://stackoverflow.com/a/25518702
protoc $(INCLUDE) $< --gogo_out=Mgoogle/protobuf/timestamp.proto=github.com/golang/protobuf/ptypes/timestamp,plugins=grpc:.
@echo "--> adding nolint declarations to protobuf generated files"
@awk -i inplace '/^\s*package \w+/ { print "//nolint" }1' $@
########################################
### Build ABCI
protoc_abci:
## If you get the following error,
## "error while loading shared libraries: libprotobuf.so.14: cannot open shared object file: No such file or directory"
## See https://stackoverflow.com/a/25518702
protoc $(INCLUDE) --gogo_out=plugins=grpc:. abci/types/*.proto
@echo "--> adding nolint declarations to protobuf generated files"
@awk '/package abci/types/ { print "//nolint: gas"; print; next }1' abci/types/types.pb.go > abci/types/types.pb.go.new
@mv abci/types/types.pb.go.new abci/types/types.pb.go
protoc_abci: abci/types/types.pb.go
build_abci:
@go build -i ./abci/cmd/...
@@ -51,7 +58,7 @@ install_abci:
# dist builds binaries for all platforms and packages them for distribution
# TODO add abci to these scripts
dist:
@BUILD_TAGS='$(BUILD_TAGS)' sh -c "'$(CURDIR)/scripts/dist.sh'"
@BUILD_TAGS=$(BUILD_TAGS) sh -c "'$(CURDIR)/scripts/dist.sh'"
########################################
### Tools & dependencies
@@ -70,16 +77,8 @@ update_tools:
@echo "--> Updating tools"
@go get -u $(GOTOOLS)
#Run this from CI
#Update dependencies
get_vendor_deps:
@rm -rf vendor/
@echo "--> Running dep"
@dep ensure -vendor-only
#Run this locally.
ensure_deps:
@rm -rf vendor/
@echo "--> Running dep"
@dep ensure
@@ -101,21 +100,14 @@ draw_deps:
get_deps_bin_size:
@# Copy of build recipe with additional flags to perform binary size analysis
$(eval $(shell go build -work -a $(BUILD_FLAGS) -tags '$(BUILD_TAGS)' -o build/tendermint ./cmd/tendermint/ 2>&1))
$(eval $(shell go build -work -a $(BUILD_FLAGS) -tags $(BUILD_TAGS) -o build/tendermint ./cmd/tendermint/ 2>&1))
@find $(WORK) -type f -name "*.a" | xargs -I{} du -hxs "{}" | sort -rh | sed -e s:${WORK}/::g > deps_bin_size.log
@echo "Results can be found here: $(CURDIR)/deps_bin_size.log"
########################################
### Libs
protoc_libs:
## If you get the following error,
## "error while loading shared libraries: libprotobuf.so.14: cannot open shared object file: No such file or directory"
## See https://stackoverflow.com/a/25518702
protoc $(INCLUDE) --go_out=plugins=grpc:. libs/common/*.proto
@echo "--> adding nolint declarations to protobuf generated files"
@awk '/package libs/common/ { print "//nolint: gas"; print; next }1' libs/common/types.pb.go > libs/common/types.pb.go.new
@mv libs/common/types.pb.go.new libs/common/types.pb.go
protoc_libs: libs/common/types.pb.go
gen_certs: clean_certs
## Generating certificates for TLS testing...
@@ -130,12 +122,14 @@ clean_certs:
rm -f db/remotedb/::.crt db/remotedb/::.key
test_libs: gen_certs
GOCACHE=off go test -tags gcc $(shell go list ./... | grep -v vendor)
GOCACHE=off go test -tags gcc $(PACKAGES)
make clean_certs
grpc_dbserver:
protoc -I db/remotedb/proto/ db/remotedb/proto/defs.proto --go_out=plugins=grpc:db/remotedb/proto
protoc_grpc: rpc/grpc/types.pb.go
########################################
### Testing
@@ -206,7 +200,7 @@ vagrant_test:
### go tests
test:
@echo "--> Running go test"
@go test $(PACKAGES)
@GOCACHE=off go test $(PACKAGES)
test_race:
@echo "--> Running go test --race"
@@ -252,6 +246,16 @@ metalinter_all:
@echo "--> Running linter (all)"
gometalinter.v2 --vendor --deadline=600s --enable-all --disable=lll ./...
DESTINATION = ./index.html.md
rpc-docs:
cat rpc/core/slate_header.txt > $(DESTINATION)
godoc2md -template rpc/core/doc_template.txt github.com/tendermint/tendermint/rpc/core | grep -v -e "pipe.go" -e "routes.go" -e "dev.go" | sed 's,/src/target,https://github.com/tendermint/tendermint/tree/master/rpc/core,' >> $(DESTINATION)
check_dep:
dep status >> /dev/null
!(grep -n branch Gopkg.toml)
###########################################################
### Docker image
@@ -307,4 +311,4 @@ build-slate:
# To avoid unintended conflicts with file names, always add to .PHONY
# unless there is a reason not to.
# https://www.gnu.org/software/make/manual/html_node/Phony-Targets.html
.PHONY: check build build_race build_abci dist install install_abci check_tools get_tools update_tools get_vendor_deps draw_deps get_protoc protoc_abci protoc_libs gen_certs clean_certs grpc_dbserver test_cover test_apps test_persistence test_p2p test test_race test_integrations test_release test100 vagrant_test fmt build-linux localnet-start localnet-stop build-docker build-docker-localnode sentry-start sentry-config sentry-stop build-slate
.PHONY: check build build_race build_abci dist install install_abci check_dep check_tools get_tools update_tools get_vendor_deps draw_deps get_protoc protoc_abci protoc_libs gen_certs clean_certs grpc_dbserver test_cover test_apps test_persistence test_p2p test test_race test_integrations test_release test100 vagrant_test fmt rpc-docs build-linux localnet-start localnet-stop build-docker build-docker-localnode sentry-start sentry-config sentry-stop build-slate protoc_grpc protoc_all

View File

@@ -22,7 +22,7 @@ develop | [![CircleCI](https://circleci.com/gh/tendermint/tendermint/tree/deve
Tendermint Core is Byzantine Fault Tolerant (BFT) middleware that takes a state transition machine - written in any programming language -
and securely replicates it on many machines.
For protocol details, see [the specification](/docs/spec).
For protocol details, see [the specification](/docs/spec). For a consensus proof and detailed protocol analysis, check out our recent paper, "[The latest gossip on BFT consensus](https://arxiv.org/abs/1807.04938)".
## A Note on Production Readiness
@@ -50,13 +50,13 @@ Go version | Go1.9 or higher
## Install
See the [install instructions](/docs/install.md)
See the [install instructions](/docs/introduction/install.md)
## Quick Start
- [Single node](/docs/using-tendermint.md)
- [Local cluster using docker-compose](/networks/local)
- [Remote cluster using terraform and ansible](/docs/terraform-and-ansible.md)
- [Remote cluster using terraform and ansible](/docs/networks/terraform-and-ansible.md)
- [Join the public testnet](https://cosmos.network/testnet)
## Resources

View File

@@ -29,7 +29,7 @@ The two guides to focus on are the `Application Development Guide` and `Using AB
To compile the protobuf file, run:
```
make protoc
cd $GOPATH/src/github.com/tendermint/tendermint/; make protoc_abci
```
See `protoc --help` and [the Protocol Buffers site](https://developers.google.com/protocol-buffers)

View File

@@ -63,7 +63,7 @@ RETRY_LOOP:
ENSURE_CONNECTED:
for {
_, err := client.Echo(context.Background(), &types.RequestEcho{"hello"}, grpc.FailFast(true))
_, err := client.Echo(context.Background(), &types.RequestEcho{Message: "hello"}, grpc.FailFast(true))
if err == nil {
break ENSURE_CONNECTED
}
@@ -129,7 +129,7 @@ func (cli *grpcClient) EchoAsync(msg string) *ReqRes {
if err != nil {
cli.StopForError(err)
}
return cli.finishAsyncCall(req, &types.Response{&types.Response_Echo{res}})
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_Echo{res}})
}
func (cli *grpcClient) FlushAsync() *ReqRes {
@@ -138,7 +138,7 @@ func (cli *grpcClient) FlushAsync() *ReqRes {
if err != nil {
cli.StopForError(err)
}
return cli.finishAsyncCall(req, &types.Response{&types.Response_Flush{res}})
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_Flush{res}})
}
func (cli *grpcClient) InfoAsync(params types.RequestInfo) *ReqRes {
@@ -147,7 +147,7 @@ func (cli *grpcClient) InfoAsync(params types.RequestInfo) *ReqRes {
if err != nil {
cli.StopForError(err)
}
return cli.finishAsyncCall(req, &types.Response{&types.Response_Info{res}})
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_Info{res}})
}
func (cli *grpcClient) SetOptionAsync(params types.RequestSetOption) *ReqRes {
@@ -156,7 +156,7 @@ func (cli *grpcClient) SetOptionAsync(params types.RequestSetOption) *ReqRes {
if err != nil {
cli.StopForError(err)
}
return cli.finishAsyncCall(req, &types.Response{&types.Response_SetOption{res}})
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_SetOption{res}})
}
func (cli *grpcClient) DeliverTxAsync(tx []byte) *ReqRes {
@@ -165,7 +165,7 @@ func (cli *grpcClient) DeliverTxAsync(tx []byte) *ReqRes {
if err != nil {
cli.StopForError(err)
}
return cli.finishAsyncCall(req, &types.Response{&types.Response_DeliverTx{res}})
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_DeliverTx{res}})
}
func (cli *grpcClient) CheckTxAsync(tx []byte) *ReqRes {
@@ -174,7 +174,7 @@ func (cli *grpcClient) CheckTxAsync(tx []byte) *ReqRes {
if err != nil {
cli.StopForError(err)
}
return cli.finishAsyncCall(req, &types.Response{&types.Response_CheckTx{res}})
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_CheckTx{res}})
}
func (cli *grpcClient) QueryAsync(params types.RequestQuery) *ReqRes {
@@ -183,7 +183,7 @@ func (cli *grpcClient) QueryAsync(params types.RequestQuery) *ReqRes {
if err != nil {
cli.StopForError(err)
}
return cli.finishAsyncCall(req, &types.Response{&types.Response_Query{res}})
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_Query{res}})
}
func (cli *grpcClient) CommitAsync() *ReqRes {
@@ -192,7 +192,7 @@ func (cli *grpcClient) CommitAsync() *ReqRes {
if err != nil {
cli.StopForError(err)
}
return cli.finishAsyncCall(req, &types.Response{&types.Response_Commit{res}})
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_Commit{res}})
}
func (cli *grpcClient) InitChainAsync(params types.RequestInitChain) *ReqRes {
@@ -201,7 +201,7 @@ func (cli *grpcClient) InitChainAsync(params types.RequestInitChain) *ReqRes {
if err != nil {
cli.StopForError(err)
}
return cli.finishAsyncCall(req, &types.Response{&types.Response_InitChain{res}})
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_InitChain{res}})
}
func (cli *grpcClient) BeginBlockAsync(params types.RequestBeginBlock) *ReqRes {
@@ -210,7 +210,7 @@ func (cli *grpcClient) BeginBlockAsync(params types.RequestBeginBlock) *ReqRes {
if err != nil {
cli.StopForError(err)
}
return cli.finishAsyncCall(req, &types.Response{&types.Response_BeginBlock{res}})
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_BeginBlock{res}})
}
func (cli *grpcClient) EndBlockAsync(params types.RequestEndBlock) *ReqRes {
@@ -219,7 +219,7 @@ func (cli *grpcClient) EndBlockAsync(params types.RequestEndBlock) *ReqRes {
if err != nil {
cli.StopForError(err)
}
return cli.finishAsyncCall(req, &types.Response{&types.Response_EndBlock{res}})
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_EndBlock{res}})
}
func (cli *grpcClient) finishAsyncCall(req *types.Request, res *types.Response) *ReqRes {
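Throughout this client (and in the local client and kvstore tests below), the only change is that positional composite literals become keyed ones. A keyed literal stays valid when the regenerated gogo structs pick up additional fields, which is presumably why regenerating the protobuf types in this release forced the change. A minimal sketch, assuming only the generated `abci/types` package:

```go
package example

import "github.com/tendermint/tendermint/abci/types"

// Positional (old style): &types.ResponseEcho{msg}
// This breaks as soon as the generated struct gains extra fields
// (for example gogo's XXX_* bookkeeping fields).
//
// Keyed (new style): each field is named, so regeneration is harmless.
func echoResponse(msg string) *types.ResponseEcho {
	return &types.ResponseEcho{Message: msg}
}
```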

View File

@@ -149,7 +149,7 @@ func (app *localClient) FlushSync() error {
}
func (app *localClient) EchoSync(msg string) (*types.ResponseEcho, error) {
return &types.ResponseEcho{msg}, nil
return &types.ResponseEcho{Message: msg}, nil
}
func (app *localClient) InfoSync(req types.RequestInfo) (*types.ResponseInfo, error) {

View File

@@ -514,7 +514,7 @@ func cmdInfo(cmd *cobra.Command, args []string) error {
if len(args) == 1 {
version = args[0]
}
res, err := client.InfoSync(types.RequestInfo{version})
res, err := client.InfoSync(types.RequestInfo{Version: version})
if err != nil {
return err
}
@@ -537,7 +537,7 @@ func cmdSetOption(cmd *cobra.Command, args []string) error {
}
key, val := args[0], args[1]
_, err := client.SetOptionSync(types.RequestSetOption{key, val})
_, err := client.SetOptionSync(types.RequestSetOption{Key: key, Value: val})
if err != nil {
return err
}

View File

@@ -7,6 +7,8 @@ import (
"testing"
"time"
"github.com/stretchr/testify/require"
"google.golang.org/grpc"
"golang.org/x/net/context"
@@ -43,7 +45,7 @@ func testStream(t *testing.T, app types.Application) {
server := abciserver.NewSocketServer("unix://test.sock", app)
server.SetLogger(log.TestingLogger().With("module", "abci-server"))
if err := server.Start(); err != nil {
t.Fatalf("Error starting socket server: %v", err.Error())
require.NoError(t, err, "Error starting socket server")
}
defer server.Stop()
@@ -132,7 +134,7 @@ func testGRPCSync(t *testing.T, app *types.GRPCApplication) {
// Write requests
for counter := 0; counter < numDeliverTxs; counter++ {
// Send request
response, err := client.DeliverTx(context.Background(), &types.RequestDeliverTx{[]byte("test")})
response, err := client.DeliverTx(context.Background(), &types.RequestDeliverTx{Tx: []byte("test")})
if err != nil {
t.Fatalf("Error in GRPC DeliverTx: %v", err.Error())
}

View File

@@ -90,8 +90,8 @@ func TestPersistentKVStoreInfo(t *testing.T) {
header := types.Header{
Height: int64(height),
}
kvstore.BeginBlock(types.RequestBeginBlock{hash, header, nil, nil})
kvstore.EndBlock(types.RequestEndBlock{header.Height})
kvstore.BeginBlock(types.RequestBeginBlock{Hash: hash, Header: header})
kvstore.EndBlock(types.RequestEndBlock{Height: header.Height})
kvstore.Commit()
resInfo = kvstore.Info(types.RequestInfo{})
@@ -176,13 +176,13 @@ func makeApplyBlock(t *testing.T, kvstore types.Application, heightInt int, diff
Height: height,
}
kvstore.BeginBlock(types.RequestBeginBlock{hash, header, nil, nil})
kvstore.BeginBlock(types.RequestBeginBlock{Hash: hash, Header: header})
for _, tx := range txs {
if r := kvstore.DeliverTx(tx); r.IsErr() {
t.Fatal(r)
}
}
resEndBlock := kvstore.EndBlock(types.RequestEndBlock{header.Height})
resEndBlock := kvstore.EndBlock(types.RequestEndBlock{Height: header.Height})
kvstore.Commit()
valsEqual(t, diff, resEndBlock.ValidatorUpdates)

View File

@@ -26,7 +26,7 @@ func startClient(abciType string) abcicli.Client {
}
func setOption(client abcicli.Client, key, value string) {
_, err := client.SetOptionSync(types.RequestSetOption{key, value})
_, err := client.SetOptionSync(types.RequestSetOption{Key: key, Value: value})
if err != nil {
panicf("setting %v=%v: \nerr: %v", key, value, err)
}

View File

@@ -85,7 +85,7 @@ func NewGRPCApplication(app Application) *GRPCApplication {
}
func (app *GRPCApplication) Echo(ctx context.Context, req *RequestEcho) (*ResponseEcho, error) {
return &ResponseEcho{req.Message}, nil
return &ResponseEcho{Message: req.Message}, nil
}
func (app *GRPCApplication) Flush(ctx context.Context, req *RequestFlush) (*ResponseFlush, error) {

View File

@@ -71,7 +71,7 @@ func encodeVarint(w io.Writer, i int64) (err error) {
func ToRequestEcho(message string) *Request {
return &Request{
Value: &Request_Echo{&RequestEcho{message}},
Value: &Request_Echo{&RequestEcho{Message: message}},
}
}
@@ -95,13 +95,13 @@ func ToRequestSetOption(req RequestSetOption) *Request {
func ToRequestDeliverTx(tx []byte) *Request {
return &Request{
Value: &Request_DeliverTx{&RequestDeliverTx{tx}},
Value: &Request_DeliverTx{&RequestDeliverTx{Tx: tx}},
}
}
func ToRequestCheckTx(tx []byte) *Request {
return &Request{
Value: &Request_CheckTx{&RequestCheckTx{tx}},
Value: &Request_CheckTx{&RequestCheckTx{Tx: tx}},
}
}
@@ -139,13 +139,13 @@ func ToRequestEndBlock(req RequestEndBlock) *Request {
func ToResponseException(errStr string) *Response {
return &Response{
Value: &Response_Exception{&ResponseException{errStr}},
Value: &Response_Exception{&ResponseException{Error: errStr}},
}
}
func ToResponseEcho(message string) *Response {
return &Response{
Value: &Response_Echo{&ResponseEcho{message}},
Value: &Response_Echo{&ResponseEcho{Message: message}},
}
}

View File

@@ -85,7 +85,6 @@ func TestWriteReadMessage2(t *testing.T) {
Tags: []cmn.KVPair{
cmn.KVPair{[]byte("abc"), []byte("def")},
},
// Fee: cmn.KI64Pair{
},
// TODO: add the rest
}

File diff suppressed because it is too large.

View File

@@ -4,12 +4,22 @@ package types;
// For more information on gogo.proto, see:
// https://github.com/gogo/protobuf/blob/master/extensions.md
import "github.com/gogo/protobuf/gogoproto/gogo.proto";
import "github.com/tendermint/tmlibs/common/types.proto";
import "google/protobuf/timestamp.proto";
import "github.com/tendermint/tendermint/libs/common/types.proto";
// This file is copied from http://github.com/tendermint/abci
// NOTE: When using custom types, mind the warnings.
// https://github.com/gogo/protobuf/blob/master/custom_types.md#warnings-and-issues
option (gogoproto.marshaler_all) = true;
option (gogoproto.unmarshaler_all) = true;
option (gogoproto.sizer_all) = true;
option (gogoproto.goproto_registration) = true;
// Generate tests
option (gogoproto.populate_all) = true;
option (gogoproto.equal_all) = true;
option (gogoproto.testgen_all) = true;
//----------------------------------------
// Request types
@@ -47,7 +57,7 @@ message RequestSetOption {
}
message RequestInitChain {
int64 time = 1;
google.protobuf.Timestamp time = 1 [(gogoproto.nullable)=false, (gogoproto.stdtime)=true];
string chain_id = 2;
ConsensusParams consensus_params = 3;
repeated Validator validators = 4 [(gogoproto.nullable)=false];
@@ -61,10 +71,11 @@ message RequestQuery {
bool prove = 4;
}
// NOTE: validators here have empty pubkeys.
message RequestBeginBlock {
bytes hash = 1;
Header header = 2 [(gogoproto.nullable)=false];
repeated SigningValidator validators = 3 [(gogoproto.nullable)=false];
LastCommitInfo last_commit_info = 3 [(gogoproto.nullable)=false];
repeated Evidence byzantine_validators = 4 [(gogoproto.nullable)=false];
}
@@ -159,7 +170,6 @@ message ResponseCheckTx {
int64 gas_wanted = 5;
int64 gas_used = 6;
repeated common.KVPair tags = 7 [(gogoproto.nullable)=false, (gogoproto.jsontag)="tags,omitempty"];
common.KI64Pair fee = 8 [(gogoproto.nullable)=false];
}
message ResponseDeliverTx {
@@ -170,7 +180,6 @@ message ResponseDeliverTx {
int64 gas_wanted = 5;
int64 gas_used = 6;
repeated common.KVPair tags = 7 [(gogoproto.nullable)=false, (gogoproto.jsontag)="tags,omitempty"];
common.KI64Pair fee = 8 [(gogoproto.nullable)=false];
}
message ResponseEndBlock {
@@ -195,14 +204,14 @@ message ConsensusParams {
BlockGossip block_gossip = 3;
}
// BlockSize contain limits on the block size.
// BlockSize contains limits on the block size.
message BlockSize {
int32 max_bytes = 1;
int32 max_txs = 2;
int64 max_gas = 3;
}
// TxSize contain limits on the tx size.
// TxSize contains limits on the tx size.
message TxSize {
int32 max_bytes = 1;
int64 max_gas = 2;
@@ -215,6 +224,11 @@ message BlockGossip {
int32 block_part_size_bytes = 1;
}
message LastCommitInfo {
int32 commit_round = 1;
repeated SigningValidator validators = 2 [(gogoproto.nullable)=false];
}
//----------------------------------------
// Blockchain Types
@@ -223,7 +237,7 @@ message Header {
// basics
string chain_id = 1 [(gogoproto.customname)="ChainID"];
int64 height = 2;
int64 time = 3;
google.protobuf.Timestamp time = 3 [(gogoproto.nullable)=false, (gogoproto.stdtime)=true];
// txs
int32 num_txs = 4;
@@ -260,7 +274,7 @@ message Evidence {
string type = 1;
Validator validator = 2 [(gogoproto.nullable)=false];
int64 height = 3;
int64 time = 4;
google.protobuf.Timestamp time = 4 [(gogoproto.nullable)=false, (gogoproto.stdtime)=true];
int64 total_voting_power = 5;
}
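For orientation, here is a hedged sketch of how an ABCI application might consume the reshaped `RequestBeginBlock`: the previous block's commit details now arrive in `last_commit_info` and misbehaviour in `byzantine_validators`. The Go field names follow the usual gogo bindings for the messages above; the application type itself is hypothetical and not part of this diff:

```go
package example

import (
	"fmt"

	"github.com/tendermint/tendermint/abci/types"
)

type myApp struct {
	types.BaseApplication // embeds no-op defaults for the other ABCI methods
}

func (app *myApp) BeginBlock(req types.RequestBeginBlock) types.ResponseBeginBlock {
	// Evidence of byzantine behaviour (e.g. double signing) reported for this block.
	for _, ev := range req.ByzantineValidators {
		fmt.Printf("evidence %q against %X at height %d\n",
			ev.Type, ev.Validator.Address, ev.Height)
	}
	// Commit info for the previous block, which replaces the old repeated
	// `validators` field on RequestBeginBlock.
	fmt.Printf("last commit: round %d, %d validators\n",
		req.LastCommitInfo.CommitRound, len(req.LastCommitInfo.Validators))
	return types.ResponseBeginBlock{}
}
```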

abci/types/typespb_test.go (new file, 4365 lines)

File diff suppressed because it is too large.

View File

@@ -7,7 +7,7 @@ import (
"github.com/tendermint/go-amino"
proto "github.com/tendermint/tendermint/benchmarks/proto"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/ed25519"
"github.com/tendermint/tendermint/p2p"
ctypes "github.com/tendermint/tendermint/rpc/core/types"
)
@@ -16,7 +16,7 @@ func BenchmarkEncodeStatusWire(b *testing.B) {
b.StopTimer()
cdc := amino.NewCodec()
ctypes.RegisterAmino(cdc)
nodeKey := p2p.NodeKey{PrivKey: crypto.GenPrivKeyEd25519()}
nodeKey := p2p.NodeKey{PrivKey: ed25519.GenPrivKey()}
status := &ctypes.ResultStatus{
NodeInfo: p2p.NodeInfo{
ID: nodeKey.ID(),
@@ -52,7 +52,7 @@ func BenchmarkEncodeNodeInfoWire(b *testing.B) {
b.StopTimer()
cdc := amino.NewCodec()
ctypes.RegisterAmino(cdc)
nodeKey := p2p.NodeKey{PrivKey: crypto.GenPrivKeyEd25519()}
nodeKey := p2p.NodeKey{PrivKey: ed25519.GenPrivKey()}
nodeInfo := p2p.NodeInfo{
ID: nodeKey.ID(),
Moniker: "SOMENAME",
@@ -77,7 +77,7 @@ func BenchmarkEncodeNodeInfoBinary(b *testing.B) {
b.StopTimer()
cdc := amino.NewCodec()
ctypes.RegisterAmino(cdc)
nodeKey := p2p.NodeKey{PrivKey: crypto.GenPrivKeyEd25519()}
nodeKey := p2p.NodeKey{PrivKey: ed25519.GenPrivKey()}
nodeInfo := p2p.NodeInfo{
ID: nodeKey.ID(),
Moniker: "SOMENAME",
@@ -98,7 +98,7 @@ func BenchmarkEncodeNodeInfoBinary(b *testing.B) {
func BenchmarkEncodeNodeInfoProto(b *testing.B) {
b.StopTimer()
nodeKey := p2p.NodeKey{PrivKey: crypto.GenPrivKeyEd25519()}
nodeKey := p2p.NodeKey{PrivKey: ed25519.GenPrivKey()}
nodeID := string(nodeKey.ID())
someName := "SOMENAME"
someAddr := "SOMEADDR"

View File

@@ -29,10 +29,10 @@ eg, L = latency = 0.1s
*/
const (
requestIntervalMS = 100
maxTotalRequesters = 1000
requestIntervalMS = 2
maxTotalRequesters = 600
maxPendingRequests = maxTotalRequesters
maxPendingRequestsPerPeer = 50
maxPendingRequestsPerPeer = 20
// Minimum recv rate to ensure we're receiving blocks from a peer fast
// enough. If a peer is not sending us data at least at that rate, we
@@ -219,14 +219,12 @@ func (pool *BlockPool) RedoRequest(height int64) p2p.ID {
defer pool.mtx.Unlock()
request := pool.requesters[height]
if request.block == nil {
panic("Expected block to be non-nil")
peerID := request.getPeerID()
if peerID != p2p.ID("") {
// RemovePeer will redo all requesters associated with this peer.
pool.removePeer(peerID)
}
// RemovePeer will redo all requesters associated with this peer.
pool.removePeer(request.peerID)
return request.peerID
return peerID
}
// TODO: ensure that blocks come in order for each peer.

View File

@@ -1,7 +1,6 @@
package blockchain
import (
"math/rand"
"testing"
"time"
@@ -25,7 +24,7 @@ func makePeers(numPeers int, minHeight, maxHeight int64) map[p2p.ID]testPeer {
peers := make(map[p2p.ID]testPeer, numPeers)
for i := 0; i < numPeers; i++ {
peerID := p2p.ID(cmn.RandStr(12))
height := minHeight + rand.Int63n(maxHeight-minHeight)
height := minHeight + cmn.RandInt63n(maxHeight-minHeight)
peers[peerID] = testPeer{peerID, height}
}
return peers
@@ -80,7 +79,7 @@ func TestBasic(t *testing.T) {
}
// Request desired, pretend like we got the block immediately.
go func() {
block := &types.Block{Header: &types.Header{Height: request.Height}}
block := &types.Block{Header: types.Header{Height: request.Height}}
pool.AddBlock(request.PeerID, block, 123)
t.Logf("Added block from peer %v (height: %v)", request.PeerID, request.Height)
}()

View File

@@ -18,7 +18,8 @@ const (
// BlockchainChannel is a channel for blocks and status updates (`BlockStore` height)
BlockchainChannel = byte(0x40)
trySyncIntervalMS = 50
trySyncIntervalMS = 10
// stop syncing when last block's time is
// within this much of the system time.
// stopSyncingDurationMinutes = 10
@@ -76,8 +77,9 @@ func NewBlockchainReactor(state sm.State, blockExec *sm.BlockExecutor, store *Bl
store.Height()))
}
const capacity = 1000 // must be bigger than peers count
requestsCh := make(chan BlockRequest, capacity)
requestsCh := make(chan BlockRequest, maxTotalRequesters)
const capacity = 1000 // must be bigger than peers count
errorsCh := make(chan peerError, capacity) // so we don't block in #Receive#pool.AddBlock
pool := NewBlockPool(
@@ -107,9 +109,6 @@ func (bcR *BlockchainReactor) SetLogger(l log.Logger) {
// OnStart implements cmn.Service.
func (bcR *BlockchainReactor) OnStart() error {
if err := bcR.BaseReactor.OnStart(); err != nil {
return err
}
if bcR.fastSync {
err := bcR.pool.Start()
if err != nil {
@@ -122,7 +121,6 @@ func (bcR *BlockchainReactor) OnStart() error {
// OnStop implements cmn.Service.
func (bcR *BlockchainReactor) OnStop() {
bcR.BaseReactor.OnStop()
bcR.pool.Stop()
}
@@ -209,7 +207,6 @@ func (bcR *BlockchainReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte)
// Handle messages from the poolReactor telling the reactor what to do.
// NOTE: Don't sleep in the FOR_LOOP or otherwise slow it down!
// (Except for the SYNC_LOOP, which is the primary purpose and must be synchronous.)
func (bcR *BlockchainReactor) poolRoutine() {
trySyncTicker := time.NewTicker(trySyncIntervalMS * time.Millisecond)
@@ -224,6 +221,8 @@ func (bcR *BlockchainReactor) poolRoutine() {
lastHundred := time.Now()
lastRate := 0.0
didProcessCh := make(chan struct{}, 1)
FOR_LOOP:
for {
select {
@@ -239,14 +238,17 @@ FOR_LOOP:
// The pool handles timeouts, just let it go.
continue FOR_LOOP
}
case err := <-bcR.errorsCh:
peer := bcR.Switch.Peers().Get(err.peerID)
if peer != nil {
bcR.Switch.StopPeerForError(peer, err)
}
case <-statusUpdateTicker.C:
// ask for status updates
go bcR.BroadcastStatusRequest() // nolint: errcheck
case <-switchToConsensusTicker.C:
height, numPending, lenRequesters := bcR.pool.GetStatus()
outbound, inbound, _ := bcR.Switch.NumPeers()
@@ -261,60 +263,78 @@ FOR_LOOP:
break FOR_LOOP
}
case <-trySyncTicker.C: // chan time
// This loop can be slow as long as it's doing syncing work.
SYNC_LOOP:
for i := 0; i < 10; i++ {
// See if there are any blocks to sync.
first, second := bcR.pool.PeekTwoBlocks()
//bcR.Logger.Info("TrySync peeked", "first", first, "second", second)
if first == nil || second == nil {
// We need both to sync the first block.
break SYNC_LOOP
select {
case didProcessCh <- struct{}{}:
default:
}
case <-didProcessCh:
// NOTE: It is a subtle mistake to process more than a single block
// at a time (e.g. 10) here, because we only TrySend 1 request per
// loop. The ratio mismatch can result in starving of blocks, a
// sudden burst of requests and responses, and repeat.
// Consequently, it is better to split these routines rather than
// coupling them as it's written here. TODO uncouple from request
// routine.
// See if there are any blocks to sync.
first, second := bcR.pool.PeekTwoBlocks()
//bcR.Logger.Info("TrySync peeked", "first", first, "second", second)
if first == nil || second == nil {
// We need both to sync the first block.
continue FOR_LOOP
} else {
// Try again quickly next loop.
didProcessCh <- struct{}{}
}
firstParts := first.MakePartSet(state.ConsensusParams.BlockPartSizeBytes)
firstPartsHeader := firstParts.Header()
firstID := types.BlockID{first.Hash(), firstPartsHeader}
// Finally, verify the first block using the second's commit
// NOTE: we can probably make this more efficient, but note that calling
// first.Hash() doesn't verify the tx contents, so MakePartSet() is
// currently necessary.
err := state.Validators.VerifyCommit(
chainID, firstID, first.Height, second.LastCommit)
if err != nil {
bcR.Logger.Error("Error in validation", "err", err)
peerID := bcR.pool.RedoRequest(first.Height)
peer := bcR.Switch.Peers().Get(peerID)
if peer != nil {
// NOTE: we've already removed the peer's request, but we
// still need to clean up the rest.
bcR.Switch.StopPeerForError(peer, fmt.Errorf("BlockchainReactor validation error: %v", err))
}
firstParts := first.MakePartSet(state.ConsensusParams.BlockPartSizeBytes)
firstPartsHeader := firstParts.Header()
firstID := types.BlockID{first.Hash(), firstPartsHeader}
// Finally, verify the first block using the second's commit
// NOTE: we can probably make this more efficient, but note that calling
// first.Hash() doesn't verify the tx contents, so MakePartSet() is
// currently necessary.
err := state.Validators.VerifyCommit(
chainID, firstID, first.Height, second.LastCommit)
continue FOR_LOOP
} else {
bcR.pool.PopRequest()
// TODO: batch saves so we don't persist to disk every block
bcR.store.SaveBlock(first, firstParts, second.LastCommit)
// TODO: same thing for app - but we would need a way to
// get the hash without persisting the state
var err error
state, err = bcR.blockExec.ApplyBlock(state, firstID, first)
if err != nil {
bcR.Logger.Error("Error in validation", "err", err)
peerID := bcR.pool.RedoRequest(first.Height)
peer := bcR.Switch.Peers().Get(peerID)
if peer != nil {
bcR.Switch.StopPeerForError(peer, fmt.Errorf("BlockchainReactor validation error: %v", err))
}
break SYNC_LOOP
} else {
bcR.pool.PopRequest()
// TODO This is bad, are we zombie?
cmn.PanicQ(cmn.Fmt("Failed to process committed block (%d:%X): %v",
first.Height, first.Hash(), err))
}
blocksSynced++
// TODO: batch saves so we don't persist to disk every block
bcR.store.SaveBlock(first, firstParts, second.LastCommit)
// TODO: same thing for app - but we would need a way to
// get the hash without persisting the state
var err error
state, err = bcR.blockExec.ApplyBlock(state, firstID, first)
if err != nil {
// TODO This is bad, are we zombie?
cmn.PanicQ(cmn.Fmt("Failed to process committed block (%d:%X): %v",
first.Height, first.Hash(), err))
}
blocksSynced++
if blocksSynced%100 == 0 {
lastRate = 0.9*lastRate + 0.1*(100/time.Since(lastHundred).Seconds())
bcR.Logger.Info("Fast Sync Rate", "height", bcR.pool.height,
"max_peer_height", bcR.pool.MaxPeerHeight(), "blocks/s", lastRate)
lastHundred = time.Now()
}
if blocksSynced%100 == 0 {
lastRate = 0.9*lastRate + 0.1*(100/time.Since(lastHundred).Seconds())
bcR.Logger.Info("Fast Sync Rate", "height", bcR.pool.height,
"max_peer_height", bcR.pool.MaxPeerHeight(), "blocks/s", lastRate)
lastHundred = time.Now()
}
}
continue FOR_LOOP
case <-bcR.Quit():
break FOR_LOOP
}
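The rewritten loop above replaces the inner SYNC_LOOP with a self-scheduling channel: `didProcessCh` is buffered with capacity one, the ticker does a non-blocking send into it, and after successfully processing a single block the routine sends again so the next select iteration immediately retries. That keeps processing at one block per pass, matching the single request sent per pass, as the long NOTE in the diff explains. A stripped-down sketch of the pattern (names and the fake workload are illustrative):

```go
package example

import (
	"fmt"
	"time"
)

// drain processes `blocks` units of work, one per select pass, using a
// capacity-1 channel to schedule the next pass without blocking the ticker.
func drain(blocks int) {
	trySyncTicker := time.NewTicker(10 * time.Millisecond)
	defer trySyncTicker.Stop()
	didProcessCh := make(chan struct{}, 1) // at most one pending "try again"

	for {
		select {
		case <-trySyncTicker.C:
			select {
			case didProcessCh <- struct{}{}: // wake the processing case
			default: // a retry is already queued; don't block the ticker
			}
		case <-didProcessCh:
			if blocks == 0 {
				fmt.Println("drained")
				return
			}
			blocks--                   // process exactly one block ...
			didProcessCh <- struct{}{} // ... then immediately schedule another pass
		}
	}
}
```

Calling `drain(1000)` finishes almost immediately after the first tick, because processing self-schedules instead of waiting for the ticker between blocks.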

View File

@@ -156,7 +156,7 @@ func makeTxs(height int64) (txs []types.Tx) {
}
func makeBlock(height int64, state sm.State) *types.Block {
block, _ := state.MakeBlock(height, makeTxs(height), new(types.Commit))
block, _ := state.MakeBlock(height, makeTxs(height), new(types.Commit), nil)
return block
}
@@ -206,3 +206,4 @@ func (tp *bcrTestPeer) IsPersistent() bool { return true }
func (tp *bcrTestPeer) Get(s string) interface{} { return s }
func (tp *bcrTestPeer) Set(string, interface{}) {}
func (tp *bcrTestPeer) RemoteIP() net.IP { return []byte{127, 0, 0, 1} }
func (tp *bcrTestPeer) OriginalAddr() *p2p.NetAddress { return nil }

View File

@@ -126,7 +126,7 @@ func TestBlockStoreSaveLoadBlock(t *testing.T) {
eraseSeenCommitInDB bool
}{
{
block: newBlock(&header1, commitAtH10),
block: newBlock(header1, commitAtH10),
parts: validPartSet,
seenCommit: seenCommit1,
},
@@ -137,19 +137,19 @@ func TestBlockStoreSaveLoadBlock(t *testing.T) {
},
{
block: newBlock(&header2, commitAtH10),
block: newBlock(header2, commitAtH10),
parts: uncontiguousPartSet,
wantPanic: "only save contiguous blocks", // and incomplete and uncontiguous parts
},
{
block: newBlock(&header1, commitAtH10),
block: newBlock(header1, commitAtH10),
parts: incompletePartSet,
wantPanic: "only save complete block", // incomplete parts
},
{
block: newBlock(&header1, commitAtH10),
block: newBlock(header1, commitAtH10),
parts: validPartSet,
seenCommit: seenCommit1,
corruptCommitInDB: true, // Corrupt the DB's commit entry
@@ -157,7 +157,7 @@ func TestBlockStoreSaveLoadBlock(t *testing.T) {
},
{
block: newBlock(&header1, commitAtH10),
block: newBlock(header1, commitAtH10),
parts: validPartSet,
seenCommit: seenCommit1,
wantPanic: "unmarshal to types.BlockMeta failed",
@@ -165,7 +165,7 @@ func TestBlockStoreSaveLoadBlock(t *testing.T) {
},
{
block: newBlock(&header1, commitAtH10),
block: newBlock(header1, commitAtH10),
parts: validPartSet,
seenCommit: seenCommit1,
@@ -174,7 +174,7 @@ func TestBlockStoreSaveLoadBlock(t *testing.T) {
},
{
block: newBlock(&header1, commitAtH10),
block: newBlock(header1, commitAtH10),
parts: validPartSet,
seenCommit: seenCommit1,
@@ -183,7 +183,7 @@ func TestBlockStoreSaveLoadBlock(t *testing.T) {
},
{
block: newBlock(&header1, commitAtH10),
block: newBlock(header1, commitAtH10),
parts: validPartSet,
seenCommit: seenCommit1,
@@ -375,7 +375,7 @@ func doFn(fn func() (interface{}, error)) (res interface{}, err error, panicErr
return res, err, panicErr
}
func newBlock(hdr *types.Header, lastCommit *types.Commit) *types.Block {
func newBlock(hdr types.Header, lastCommit *types.Commit) *types.Block {
return &types.Block{
Header: hdr,
LastCommit: lastCommit,

View File

@@ -2,12 +2,12 @@ package blockchain
import (
"github.com/tendermint/go-amino"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/types"
)
var cdc = amino.NewCodec()
func init() {
RegisterBlockchainMessages(cdc)
crypto.RegisterAmino(cdc)
types.RegisterBlockAmino(cdc)
}

View File

@@ -4,7 +4,7 @@ import (
"flag"
"os"
crypto "github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/ed25519"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/libs/log"
@@ -37,7 +37,7 @@ func main() {
*chainID,
*addr,
pv,
crypto.GenPrivKeyEd25519(),
ed25519.GenPrivKey(),
)
err := rs.Start()
if err != nil {

View File

@@ -2,11 +2,11 @@ package commands
import (
"github.com/tendermint/go-amino"
"github.com/tendermint/tendermint/crypto"
cryptoAmino "github.com/tendermint/tendermint/crypto/encoding/amino"
)
var cdc = amino.NewCodec()
func init() {
crypto.RegisterAmino(cdc)
cryptoAmino.RegisterAmino(cdc)
}

View File

@@ -21,3 +21,4 @@ ignore:
- "docs"
- "DOCKER"
- "scripts"
- "**/*.pb.go"

View File

@@ -284,7 +284,6 @@ type P2PConfig struct {
Seeds string `mapstructure:"seeds"`
// Comma separated list of nodes to keep persistent connections to
// Do not add private peers to this list if you don't want them advertised
PersistentPeers string `mapstructure:"persistent_peers"`
// UPNP port forwarding
@@ -349,9 +348,9 @@ func DefaultP2PConfig() *P2PConfig {
AddrBookStrict: true,
MaxNumPeers: 50,
FlushThrottleTimeout: 100,
MaxPacketMsgPayloadSize: 1024, // 1 kB
SendRate: 512000, // 500 kB/s
RecvRate: 512000, // 500 kB/s
MaxPacketMsgPayloadSize: 1024, // 1 kB
SendRate: 5120000, // 5 MB/s
RecvRate: 5120000, // 5 MB/s
PexReactor: true,
SeedMode: false,
AllowDuplicateIP: true, // so non-breaking yet

View File

@@ -152,7 +152,6 @@ external_address = "{{ .P2P.ExternalAddress }}"
seeds = "{{ .P2P.Seeds }}"
# Comma separated list of nodes to keep persistent connections to
# Do not add private peers to this list if you don't want them advertised
persistent_peers = "{{ .P2P.PersistentPeers }}"
# UPNP port forwarding

View File

@@ -17,14 +17,14 @@ import (
bc "github.com/tendermint/tendermint/blockchain"
cfg "github.com/tendermint/tendermint/config"
cstypes "github.com/tendermint/tendermint/consensus/types"
cmn "github.com/tendermint/tendermint/libs/common"
dbm "github.com/tendermint/tendermint/libs/db"
"github.com/tendermint/tendermint/libs/log"
mempl "github.com/tendermint/tendermint/mempool"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/privval"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
cmn "github.com/tendermint/tendermint/libs/common"
dbm "github.com/tendermint/tendermint/libs/db"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/abci/example/counter"
"github.com/tendermint/tendermint/abci/example/kvstore"
@@ -378,8 +378,11 @@ func randConsensusNetWithPeers(nValidators, nPeers int, testName string, tickerF
if i < nValidators {
privVal = privVals[i]
} else {
_, tempFilePath := cmn.Tempfile("priv_validator_")
privVal = privval.GenFilePV(tempFilePath)
tempFile, err := ioutil.TempFile("", "priv_validator_")
if err != nil {
panic(err)
}
privVal = privval.GenFilePV(tempFile.Name())
}
app := appFunc()
@@ -429,7 +432,7 @@ func randGenesisDoc(numValidators int, randPower bool, minPower int64) (*types.G
func randGenesisState(numValidators int, randPower bool, minPower int64) (sm.State, []types.PrivValidator) {
genDoc, privValidators := randGenesisDoc(numValidators, randPower, minPower)
s0, _ := sm.MakeGenesisState(genDoc)
db := dbm.NewMemDB()
db := dbm.NewMemDB() // remove this ?
sm.SaveState(db, s0)
return s0, privValidators
}

View File

@@ -58,9 +58,6 @@ func NewConsensusReactor(consensusState *ConsensusState, fastSync bool) *Consens
// broadcasted to other peers and starting state if we're not in fast sync.
func (conR *ConsensusReactor) OnStart() error {
conR.Logger.Info("ConsensusReactor ", "fastSync", conR.FastSync())
if err := conR.BaseReactor.OnStart(); err != nil {
return err
}
conR.subscribeToBroadcastEvents()
@@ -77,7 +74,6 @@ func (conR *ConsensusReactor) OnStart() error {
// OnStop implements BaseService by unsubscribing from events and stopping
// state.
func (conR *ConsensusReactor) OnStop() {
conR.BaseReactor.OnStop()
conR.unsubscribeFromBroadcastEvents()
conR.conS.Stop()
if !conR.FastSync() {

View File

@@ -4,15 +4,22 @@ import (
"context"
"fmt"
"os"
"path"
"runtime"
"runtime/pprof"
"sync"
"testing"
"time"
abcicli "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/example/kvstore"
abci "github.com/tendermint/tendermint/abci/types"
bc "github.com/tendermint/tendermint/blockchain"
cmn "github.com/tendermint/tendermint/libs/common"
dbm "github.com/tendermint/tendermint/libs/db"
"github.com/tendermint/tendermint/libs/log"
mempl "github.com/tendermint/tendermint/mempool"
sm "github.com/tendermint/tendermint/state"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/p2p"
@@ -91,6 +98,120 @@ func TestReactorBasic(t *testing.T) {
}, css)
}
// Ensure we can process blocks with evidence
func TestReactorWithEvidence(t *testing.T) {
nValidators := 4
testName := "consensus_reactor_test"
tickerFunc := newMockTickerFunc(true)
appFunc := newCounter
// heed the advice from https://www.sandimetz.com/blog/2016/1/20/the-wrong-abstraction
// to unroll unwieldy abstractions. Here we duplicate the code from:
// css := randConsensusNet(N, "consensus_reactor_test", newMockTickerFunc(true), newCounter)
genDoc, privVals := randGenesisDoc(nValidators, false, 30)
css := make([]*ConsensusState, nValidators)
logger := consensusLogger()
for i := 0; i < nValidators; i++ {
stateDB := dbm.NewMemDB() // each state needs its own db
state, _ := sm.LoadStateFromDBOrGenesisDoc(stateDB, genDoc)
thisConfig := ResetConfig(cmn.Fmt("%s_%d", testName, i))
ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
app := appFunc()
vals := types.TM2PB.Validators(state.Validators)
app.InitChain(abci.RequestInitChain{Validators: vals})
pv := privVals[i]
// duplicate code from:
// css[i] = newConsensusStateWithConfig(thisConfig, state, privVals[i], app)
blockDB := dbm.NewMemDB()
blockStore := bc.NewBlockStore(blockDB)
// one for mempool, one for consensus
mtx := new(sync.Mutex)
proxyAppConnMem := abcicli.NewLocalClient(mtx, app)
proxyAppConnCon := abcicli.NewLocalClient(mtx, app)
// Make Mempool
mempool := mempl.NewMempool(thisConfig.Mempool, proxyAppConnMem, 0)
mempool.SetLogger(log.TestingLogger().With("module", "mempool"))
if thisConfig.Consensus.WaitForTxs() {
mempool.EnableTxsAvailable()
}
// mock the evidence pool
// everyone includes evidence of another double signing
vIdx := (i + 1) % nValidators
evpool := newMockEvidencePool(privVals[vIdx].GetAddress())
// Make ConsensusState
blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyAppConnCon, mempool, evpool)
cs := NewConsensusState(thisConfig.Consensus, state, blockExec, blockStore, mempool, evpool)
cs.SetLogger(log.TestingLogger().With("module", "consensus"))
cs.SetPrivValidator(pv)
eventBus := types.NewEventBus()
eventBus.SetLogger(log.TestingLogger().With("module", "events"))
eventBus.Start()
cs.SetEventBus(eventBus)
cs.SetTimeoutTicker(tickerFunc())
cs.SetLogger(logger.With("validator", i, "module", "consensus"))
css[i] = cs
}
reactors, eventChans, eventBuses := startConsensusNet(t, css, nValidators)
defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)
// wait till everyone makes the first new block with no evidence
timeoutWaitGroup(t, nValidators, func(j int) {
blockI := <-eventChans[j]
block := blockI.(types.EventDataNewBlock).Block
assert.True(t, len(block.Evidence.Evidence) == 0)
}, css)
// second block should have evidence
timeoutWaitGroup(t, nValidators, func(j int) {
blockI := <-eventChans[j]
block := blockI.(types.EventDataNewBlock).Block
assert.True(t, len(block.Evidence.Evidence) > 0)
}, css)
}
// mock evidence pool returns no evidence for block 1,
// and returns one piece for all higher blocks. The one piece
// is for a given validator at block 1.
type mockEvidencePool struct {
height int
ev []types.Evidence
}
func newMockEvidencePool(val []byte) *mockEvidencePool {
return &mockEvidencePool{
ev: []types.Evidence{types.NewMockGoodEvidence(1, 1, val)},
}
}
func (m *mockEvidencePool) PendingEvidence() []types.Evidence {
if m.height > 0 {
return m.ev
}
return nil
}
func (m *mockEvidencePool) AddEvidence(types.Evidence) error { return nil }
func (m *mockEvidencePool) Update(block *types.Block, state sm.State) {
if m.height > 0 {
if len(block.Evidence.Evidence) == 0 {
panic("block has no evidence")
}
}
m.height += 1
}
//------------------------------------
// Ensure a testnet sends proposal heartbeats and makes blocks when there are txs
func TestReactorProposalHeartbeats(t *testing.T) {
N := 4

View File

@@ -227,7 +227,7 @@ func (h *Handshaker) NBlocks() int {
func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
// Handshake is done via ABCI Info on the query conn.
res, err := proxyApp.Query().InfoSync(abci.RequestInfo{version.Version})
res, err := proxyApp.Query().InfoSync(abci.RequestInfo{Version: version.Version})
if err != nil {
return fmt.Errorf("Error calling Info: %v", err)
}
@@ -269,7 +269,7 @@ func (h *Handshaker) ReplayBlocks(state sm.State, appHash []byte, appBlockHeight
validators := types.TM2PB.Validators(state.Validators)
csParams := types.TM2PB.ConsensusParams(h.genDoc.ConsensusParams)
req := abci.RequestInitChain{
Time: h.genDoc.GenesisTime.Unix(), // TODO
Time: h.genDoc.GenesisTime,
ChainId: h.genDoc.ChainID,
ConsensusParams: csParams,
Validators: validators,
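With `(gogoproto.stdtime)=true` on the proto `time` fields, the generated Go structs carry `time.Time` values, which is why the call above passes `h.genDoc.GenesisTime` through unchanged instead of converting it to a Unix timestamp. A small self-contained sketch (values are illustrative):

```go
package main

import (
	"fmt"
	"time"

	abci "github.com/tendermint/tendermint/abci/types"
)

func main() {
	genesisTime := time.Date(2018, 8, 5, 0, 0, 0, 0, time.UTC)
	req := abci.RequestInitChain{
		Time:    genesisTime, // a time.Time field now, not an int64 Unix timestamp
		ChainId: "test-chain",
	}
	fmt.Println(req.ChainId, req.Time.Format(time.RFC3339))
}
```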

View File

@@ -376,7 +376,7 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
defer proxyApp.Stop()
// get the latest app hash from the app
res, err := proxyApp.Query().InfoSync(abci.RequestInfo{""})
res, err := proxyApp.Query().InfoSync(abci.RequestInfo{Version: ""})
if err != nil {
t.Fatal(err)
}

View File

@@ -81,7 +81,7 @@ type ConsensusState struct {
evpool sm.EvidencePool
// internal state
mtx sync.Mutex
mtx sync.RWMutex
cstypes.RoundState
state sm.State // State until height-1.
@@ -192,15 +192,15 @@ func (cs *ConsensusState) String() string {
// GetState returns a copy of the chain state.
func (cs *ConsensusState) GetState() sm.State {
cs.mtx.Lock()
defer cs.mtx.Unlock()
cs.mtx.RLock()
defer cs.mtx.RUnlock()
return cs.state.Copy()
}
// GetRoundState returns a shallow copy of the internal consensus state.
func (cs *ConsensusState) GetRoundState() *cstypes.RoundState {
cs.mtx.Lock()
defer cs.mtx.Unlock()
cs.mtx.RLock()
defer cs.mtx.RUnlock()
rs := cs.RoundState // copy
return &rs
@@ -208,24 +208,24 @@ func (cs *ConsensusState) GetRoundState() *cstypes.RoundState {
// GetRoundStateJSON returns a json of RoundState, marshalled using go-amino.
func (cs *ConsensusState) GetRoundStateJSON() ([]byte, error) {
cs.mtx.Lock()
defer cs.mtx.Unlock()
cs.mtx.RLock()
defer cs.mtx.RUnlock()
return cdc.MarshalJSON(cs.RoundState)
}
// GetRoundStateSimpleJSON returns a json of RoundStateSimple, marshalled using go-amino.
func (cs *ConsensusState) GetRoundStateSimpleJSON() ([]byte, error) {
cs.mtx.Lock()
defer cs.mtx.Unlock()
cs.mtx.RLock()
defer cs.mtx.RUnlock()
return cdc.MarshalJSON(cs.RoundState.RoundStateSimple())
}
// GetValidators returns a copy of the current validators.
func (cs *ConsensusState) GetValidators() (int64, []*types.Validator) {
cs.mtx.Lock()
defer cs.mtx.Unlock()
cs.mtx.RLock()
defer cs.mtx.RUnlock()
return cs.state.LastBlockHeight, cs.state.Validators.Copy().Validators
}
@@ -245,8 +245,8 @@ func (cs *ConsensusState) SetTimeoutTicker(timeoutTicker TimeoutTicker) {
// LoadCommit loads the commit for a given height.
func (cs *ConsensusState) LoadCommit(height int64) *types.Commit {
cs.mtx.Lock()
defer cs.mtx.Unlock()
cs.mtx.RLock()
defer cs.mtx.RUnlock()
if height == cs.blockStore.Height() {
return cs.blockStore.LoadSeenCommit(height)
}
@@ -553,9 +553,30 @@ func (cs *ConsensusState) newStep() {
// Updates (state transitions) happen on timeouts, complete proposals, and 2/3 majorities.
// ConsensusState must be locked before any internal state is updated.
func (cs *ConsensusState) receiveRoutine(maxSteps int) {
onExit := func(cs *ConsensusState) {
// NOTE: the internalMsgQueue may have signed messages from our
// priv_val that haven't hit the WAL, but its ok because
// priv_val tracks LastSig
// close wal now that we're done writing to it
cs.wal.Stop()
cs.wal.Wait()
close(cs.done)
}
defer func() {
if r := recover(); r != nil {
cs.Logger.Error("CONSENSUS FAILURE!!!", "err", r, "stack", string(debug.Stack()))
// stop gracefully
//
// NOTE: We most probably shouldn't be running any further when there is
// some unexpected panic. Some unknown error happened, and so we don't
// know if that will result in the validator signing an invalid thing. It
// might be worthwhile to explore a mechanism for manual resuming via
// some console or secure RPC system, but for now, halting the chain upon
// unexpected consensus bugs sounds like the better option.
onExit(cs)
}
}()
@@ -571,8 +592,8 @@ func (cs *ConsensusState) receiveRoutine(maxSteps int) {
var mi msgInfo
select {
case height := <-cs.mempool.TxsAvailable():
cs.handleTxsAvailable(height)
case <-cs.mempool.TxsAvailable():
cs.handleTxsAvailable()
case mi = <-cs.peerMsgQueue:
cs.wal.Write(mi)
// handles proposals, block parts, votes
@@ -588,16 +609,7 @@ func (cs *ConsensusState) receiveRoutine(maxSteps int) {
// go to the next step
cs.handleTimeout(ti, rs)
case <-cs.Quit():
// NOTE: the internalMsgQueue may have signed messages from our
// priv_val that haven't hit the WAL, but its ok because
// priv_val tracks LastSig
// close wal now that we're done writing to it
cs.wal.Stop()
cs.wal.Wait()
close(cs.done)
onExit(cs)
return
}
}
@@ -683,11 +695,11 @@ func (cs *ConsensusState) handleTimeout(ti timeoutInfo, rs cstypes.RoundState) {
}
func (cs *ConsensusState) handleTxsAvailable(height int64) {
func (cs *ConsensusState) handleTxsAvailable() {
cs.mtx.Lock()
defer cs.mtx.Unlock()
// we only need to do this for round 0
cs.enterPropose(height, 0)
cs.enterPropose(cs.Height, 0)
}
//-----------------------------------------------------------------------------
@@ -907,6 +919,8 @@ func (cs *ConsensusState) isProposalComplete() bool {
}
// Create the next block to propose and return it.
// We really only need to return the parts, but the block
// is returned for convenience so we can log the proposal block.
// Returns nil block upon error.
// NOTE: keep it side-effect free for clarity.
func (cs *ConsensusState) createProposalBlock() (block *types.Block, blockParts *types.PartSet) {
@@ -926,9 +940,8 @@ func (cs *ConsensusState) createProposalBlock() (block *types.Block, blockParts
// Mempool validated transactions
txs := cs.mempool.Reap(cs.state.ConsensusParams.BlockSize.MaxTxs)
block, parts := cs.state.MakeBlock(cs.Height, txs, commit)
evidence := cs.evpool.PendingEvidence()
block.AddEvidence(evidence)
block, parts := cs.state.MakeBlock(cs.Height, txs, commit, evidence)
return block, parts
}
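The lock changes near the top of this file swap `sync.Mutex` for `sync.RWMutex`, so the read-only getters (`GetState`, `GetRoundState`, `LoadCommit`, and friends) take `RLock` and can run concurrently instead of serializing behind the consensus write path. A generic sketch of that pattern, independent of the diff:

```go
package example

import "sync"

type roundState struct {
	mtx    sync.RWMutex
	height int64
}

// Height is a read-only accessor: any number of readers may hold the
// read lock simultaneously.
func (rs *roundState) Height() int64 {
	rs.mtx.RLock()
	defer rs.mtx.RUnlock()
	return rs.height
}

// advance mutates state, so it takes the exclusive write lock and
// excludes both readers and other writers.
func (rs *roundState) advance() {
	rs.mtx.Lock()
	defer rs.mtx.Unlock()
	rs.height++
}
```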

View File

@@ -4,10 +4,10 @@ import (
"testing"
"time"
"github.com/tendermint/go-amino"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/types"
amino "github.com/tendermint/go-amino"
"github.com/tendermint/tendermint/crypto/ed25519"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/types"
)
func BenchmarkRoundStateDeepCopy(b *testing.B) {
@@ -23,7 +23,7 @@ func BenchmarkRoundStateDeepCopy(b *testing.B) {
Hash: cmn.RandBytes(20),
},
}
sig := crypto.SignatureEd25519{}
sig := make([]byte, ed25519.SignatureEd25519Size)
for i := 0; i < nval; i++ {
precommits[i] = &types.Vote{
ValidatorAddress: types.Address(cmn.RandBytes(20)),
@@ -38,7 +38,7 @@ func BenchmarkRoundStateDeepCopy(b *testing.B) {
}
// Random block
block := &types.Block{
Header: &types.Header{
Header: types.Header{
ChainID: cmn.RandStr(12),
Time: time.Now(),
LastBlockID: blockID,
@@ -50,7 +50,7 @@ func BenchmarkRoundStateDeepCopy(b *testing.B) {
LastResultsHash: cmn.RandBytes(20),
EvidenceHash: cmn.RandBytes(20),
},
Data: &types.Data{
Data: types.Data{
Txs: txs,
},
Evidence: types.EvidenceData{},

View File

@@ -2,11 +2,11 @@ package types
import (
"github.com/tendermint/go-amino"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/types"
)
var cdc = amino.NewCodec()
func init() {
crypto.RegisterAmino(cdc)
types.RegisterBlockAmino(cdc)
}

View File

@@ -2,7 +2,7 @@ package consensus
import (
"github.com/tendermint/go-amino"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/types"
)
var cdc = amino.NewCodec()
@@ -10,5 +10,5 @@ var cdc = amino.NewCodec()
func init() {
RegisterConsensusMessages(cdc)
RegisterWALMessages(cdc)
crypto.RegisterAmino(cdc)
types.RegisterBlockAmino(cdc)
}

View File

@@ -3,8 +3,15 @@
crypto is the cryptographic package adapted for Tendermint's uses
## Importing it
To get the interfaces,
`import "github.com/tendermint/tendermint/crypto"`
For any specific algorithm, use its specific module e.g.
`import "github.com/tendermint/tendermint/crypto/ed25519"`
If you want to decode bytes into one of the types, but don't care about the specific algorithm, use
`import "github.com/tendermint/tendermint/crypto/amino"`
## Binary encoding
For Binary encoding, please refer to the [Tendermint encoding spec](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/encoding.md).
@@ -16,10 +23,8 @@ crypto `.Bytes()` uses Amino:binary encoding, but Amino:JSON is also supported.
```go
Example Amino:JSON encodings:
crypto.PrivKeyEd25519 - {"type":"954568A3288910","value":"EVkqJO/jIXp3rkASXfh9YnyToYXRXhBr6g9cQVxPFnQBP/5povV4HTjvsy530kybxKHwEi85iU8YL0qQhSYVoQ=="}
crypto.SignatureEd25519 - {"type":"6BF5903DA1DB28","value":"77sQNZOrf7ltExpf7AV1WaYPCHbyRLgjBsoWVzcduuLk+jIGmYk+s5R6Emm29p12HeiNAuhUJgdFGmwkpeGJCA=="}
crypto.PubKeyEd25519 - {"type":"AC26791624DE60","value":"AT/+aaL1eB0477Mud9JMm8Sh8BIvOYlPGC9KkIUmFaE="}
ed25519.PrivKeyEd25519 - {"type":"954568A3288910","value":"EVkqJO/jIXp3rkASXfh9YnyToYXRXhBr6g9cQVxPFnQBP/5povV4HTjvsy530kybxKHwEi85iU8YL0qQhSYVoQ=="}
ed25519.PubKeyEd25519 - {"type":"AC26791624DE60","value":"AT/+aaL1eB0477Mud9JMm8Sh8BIvOYlPGC9KkIUmFaE="}
crypto.PrivKeySecp256k1 - {"type":"019E82E1B0F798","value":"zx4Pnh67N+g2V+5vZbQzEyRerX9c4ccNZOVzM9RvJ0Y="}
crypto.SignatureSecp256k1 - {"type":"6D1EA416E1FEE8","value":"MEUCIQCIg5TqS1l7I+MKTrSPIuUN2+4m5tA29dcauqn3NhEJ2wIgICaZ+lgRc5aOTVahU/XoLopXKn8BZcl0bnuYWLvohR8="}
crypto.PubKeySecp256k1 - {"type":"F8CCEAEB5AE980","value":"A8lPKJXcNl5VHt1FK8a244K9EJuS4WX1hFBnwisi0IJx"}
```
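A minimal usage sketch of the package layout described above, based on the new `crypto/ed25519` API added later in this diff (the message is illustrative):

```go
package main

import (
	"fmt"

	"github.com/tendermint/tendermint/crypto/ed25519"
)

func main() {
	privKey := ed25519.GenPrivKey() // the concrete algorithm lives in its own package
	pubKey := privKey.PubKey()      // returned as the crypto.PubKey interface

	msg := []byte("hello")
	sig, err := privKey.Sign(msg) // signatures are now plain []byte, not an interface
	if err != nil {
		panic(err)
	}

	fmt.Printf("address: %X\n", pubKey.Address())
	fmt.Println("valid signature:", pubKey.VerifyBytes(msg, sig))
}
```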

View File

@@ -1,37 +0,0 @@
package crypto
import (
amino "github.com/tendermint/go-amino"
)
var cdc = amino.NewCodec()
func init() {
// NOTE: It's important that there be no conflicts here,
// as that would change the canonical representations,
// and therefore change the address.
// TODO: Add feature to go-amino to ensure that there
// are no conflicts.
RegisterAmino(cdc)
}
// RegisterAmino registers all crypto related types in the given (amino) codec.
func RegisterAmino(cdc *amino.Codec) {
cdc.RegisterInterface((*PubKey)(nil), nil)
cdc.RegisterConcrete(PubKeyEd25519{},
"tendermint/PubKeyEd25519", nil)
cdc.RegisterConcrete(PubKeySecp256k1{},
"tendermint/PubKeySecp256k1", nil)
cdc.RegisterInterface((*PrivKey)(nil), nil)
cdc.RegisterConcrete(PrivKeyEd25519{},
"tendermint/PrivKeyEd25519", nil)
cdc.RegisterConcrete(PrivKeySecp256k1{},
"tendermint/PrivKeySecp256k1", nil)
cdc.RegisterInterface((*Signature)(nil), nil)
cdc.RegisterConcrete(SignatureEd25519{},
"tendermint/SignatureEd25519", nil)
cdc.RegisterConcrete(SignatureSecp256k1{},
"tendermint/SignatureSecp256k1", nil)
}

View File

@@ -1,4 +1,4 @@
package crypto
package armor
import (
"bytes"

View File

@@ -1,4 +1,4 @@
package crypto
package armor
import (
"testing"

crypto/crypto.go (new file, 30 lines)
View File

@@ -0,0 +1,30 @@
package crypto
import (
cmn "github.com/tendermint/tendermint/libs/common"
)
type PrivKey interface {
Bytes() []byte
Sign(msg []byte) ([]byte, error)
PubKey() PubKey
Equals(PrivKey) bool
}
// An address is a []byte, but hex-encoded even in JSON.
// []byte leaves us the option to change the address length.
// Use an alias so Unmarshal methods (with ptr receivers) are available too.
type Address = cmn.HexBytes
type PubKey interface {
Address() Address
Bytes() []byte
VerifyBytes(msg []byte, sig []byte) bool
Equals(PubKey) bool
}
type Symmetric interface {
Keygen() []byte
Encrypt(plaintext []byte, secret []byte) (ciphertext []byte)
Decrypt(ciphertext []byte, secret []byte) (plaintext []byte, err error)
}

View File

@@ -22,7 +22,7 @@
// pubKey := key.PubKey()
// For example:
// privKey, err := crypto.GenPrivKeyEd25519()
// privKey, err := ed25519.GenPrivKey()
// if err != nil {
// ...
// }

View File

@@ -0,0 +1,26 @@
package ed25519
import (
"io"
"testing"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/internal/benchmarking"
)
func BenchmarkKeyGeneration(b *testing.B) {
benchmarkKeygenWrapper := func(reader io.Reader) crypto.PrivKey {
return genPrivKey(reader)
}
benchmarking.BenchmarkKeyGeneration(b, benchmarkKeygenWrapper)
}
func BenchmarkSigning(b *testing.B) {
priv := GenPrivKey()
benchmarking.BenchmarkSigning(b, priv)
}
func BenchmarkVerification(b *testing.B) {
priv := GenPrivKey()
benchmarking.BenchmarkVerification(b, priv)
}

crypto/ed25519/ed25519.go (new file, 196 lines)
View File

@@ -0,0 +1,196 @@
package ed25519
import (
"bytes"
"crypto/subtle"
"fmt"
"io"
"github.com/tendermint/ed25519"
"github.com/tendermint/ed25519/extra25519"
amino "github.com/tendermint/go-amino"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/tmhash"
)
//-------------------------------------
var _ crypto.PrivKey = PrivKeyEd25519{}
const (
Ed25519PrivKeyAminoRoute = "tendermint/PrivKeyEd25519"
Ed25519PubKeyAminoRoute = "tendermint/PubKeyEd25519"
// Size of an Edwards25519 signature: a compressed Edwards25519 point
// plus a field element, each of which is 32 bytes.
SignatureEd25519Size = 64
)
var cdc = amino.NewCodec()
func init() {
cdc.RegisterInterface((*crypto.PubKey)(nil), nil)
cdc.RegisterConcrete(PubKeyEd25519{},
Ed25519PubKeyAminoRoute, nil)
cdc.RegisterInterface((*crypto.PrivKey)(nil), nil)
cdc.RegisterConcrete(PrivKeyEd25519{},
Ed25519PrivKeyAminoRoute, nil)
}
// PrivKeyEd25519 implements crypto.PrivKey.
type PrivKeyEd25519 [64]byte
// Bytes marshals the privkey using amino encoding.
func (privKey PrivKeyEd25519) Bytes() []byte {
return cdc.MustMarshalBinaryBare(privKey)
}
// Sign produces a signature on the provided message.
func (privKey PrivKeyEd25519) Sign(msg []byte) ([]byte, error) {
privKeyBytes := [64]byte(privKey)
signatureBytes := ed25519.Sign(&privKeyBytes, msg)
return signatureBytes[:], nil
}
// PubKey gets the corresponding public key from the private key.
func (privKey PrivKeyEd25519) PubKey() crypto.PubKey {
privKeyBytes := [64]byte(privKey)
initialized := false
// If the latter 32 bytes of the privkey are all zero, compute the pubkey
// otherwise privkey is initialized and we can use the cached value inside
// of the private key.
for _, v := range privKeyBytes[32:] {
if v != 0 {
initialized = true
break
}
}
if initialized {
var pubkeyBytes [PubKeyEd25519Size]byte
copy(pubkeyBytes[:], privKeyBytes[32:])
return PubKeyEd25519(pubkeyBytes)
}
pubBytes := *ed25519.MakePublicKey(&privKeyBytes)
return PubKeyEd25519(pubBytes)
}
// Equals - you probably don't need to use this.
// Runs in constant time based on length of the keys.
func (privKey PrivKeyEd25519) Equals(other crypto.PrivKey) bool {
if otherEd, ok := other.(PrivKeyEd25519); ok {
return subtle.ConstantTimeCompare(privKey[:], otherEd[:]) == 1
} else {
return false
}
}
// ToCurve25519 takes a private key and returns its representation on
// Curve25519. Curve25519 is birationally equivalent to Edwards25519,
// which Ed25519 uses internally. This method is intended for use in
// an X25519 Diffie Hellman key exchange.
func (privKey PrivKeyEd25519) ToCurve25519() *[PubKeyEd25519Size]byte {
keyCurve25519 := new([32]byte)
privKeyBytes := [64]byte(privKey)
extra25519.PrivateKeyToCurve25519(keyCurve25519, &privKeyBytes)
return keyCurve25519
}
// GenPrivKey generates a new ed25519 private key.
// It uses OS randomness in conjunction with the current global random seed
// in tendermint/libs/common to generate the private key.
func GenPrivKey() PrivKeyEd25519 {
return genPrivKey(crypto.CReader())
}
// genPrivKey generates a new ed25519 private key using the provided reader.
func genPrivKey(rand io.Reader) PrivKeyEd25519 {
privKey := new([64]byte)
_, err := io.ReadFull(rand, privKey[:32])
if err != nil {
panic(err)
}
// ed25519.MakePublicKey(privKey) alters the last 32 bytes of privKey.
// It places the pubkey in the last 32 bytes of privKey, and returns the
// public key.
ed25519.MakePublicKey(privKey)
return PrivKeyEd25519(*privKey)
}
// GenPrivKeyFromSecret hashes the secret with SHA2, and uses
// that 32 byte output to create the private key.
// NOTE: secret should be the output of a KDF like bcrypt,
// if it's derived from user input.
func GenPrivKeyFromSecret(secret []byte) PrivKeyEd25519 {
privKey32 := crypto.Sha256(secret) // Not Ripemd160 because we want 32 bytes.
privKey := new([64]byte)
copy(privKey[:32], privKey32)
// ed25519.MakePublicKey(privKey) alters the last 32 bytes of privKey.
// It places the pubkey in the last 32 bytes of privKey, and returns the
// public key.
ed25519.MakePublicKey(privKey)
return PrivKeyEd25519(*privKey)
}
//-------------------------------------
var _ crypto.PubKey = PubKeyEd25519{}
// PubKeyEd25519Size is the number of bytes in an Ed25519 public key.
const PubKeyEd25519Size = 32
// PubKeyEd25519 implements crypto.PubKey for the Ed25519 signature scheme.
type PubKeyEd25519 [PubKeyEd25519Size]byte
// Address is the SHA256-20 of the raw pubkey bytes.
func (pubKey PubKeyEd25519) Address() crypto.Address {
return crypto.Address(tmhash.Sum(pubKey[:]))
}
// Bytes marshals the PubKey using amino encoding.
func (pubKey PubKeyEd25519) Bytes() []byte {
bz, err := cdc.MarshalBinaryBare(pubKey)
if err != nil {
panic(err)
}
return bz
}
func (pubKey PubKeyEd25519) VerifyBytes(msg []byte, sig_ []byte) bool {
// make sure we use the same algorithm to sign
if len(sig_) != SignatureEd25519Size {
return false
}
sig := new([SignatureEd25519Size]byte)
copy(sig[:], sig_)
pubKeyBytes := [PubKeyEd25519Size]byte(pubKey)
return ed25519.Verify(&pubKeyBytes, msg, sig)
}
// ToCurve25519 takes a public key and returns its representation on
// Curve25519. Curve25519 is birationally equivalent to Edwards25519,
// which Ed25519 uses internally. This method is intended for use in
// an X25519 Diffie Hellman key exchange.
//
// If there is an error, then this function returns nil.
func (pubKey PubKeyEd25519) ToCurve25519() *[PubKeyEd25519Size]byte {
keyCurve25519, pubKeyBytes := new([PubKeyEd25519Size]byte), [PubKeyEd25519Size]byte(pubKey)
ok := extra25519.PublicKeyToCurve25519(keyCurve25519, &pubKeyBytes)
if !ok {
return nil
}
return keyCurve25519
}
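// Illustrative note (not part of this file): the two ToCurve25519 helpers above are
// meant to feed an X25519 Diffie-Hellman, e.g. with golang.org/x/crypto/curve25519:
//
//	var shared [32]byte
//	curve25519.ScalarMult(&shared, ourPriv.ToCurve25519(), theirPub.ToCurve25519())
//
// where ourPriv is a PrivKeyEd25519 and theirPub is a PubKeyEd25519; the public key
// conversion can return nil and should be checked before use.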
func (pubKey PubKeyEd25519) String() string {
return fmt.Sprintf("PubKeyEd25519{%X}", pubKey[:])
}
// nolint: golint
func (pubKey PubKeyEd25519) Equals(other crypto.PubKey) bool {
if otherEd, ok := other.(PubKeyEd25519); ok {
return bytes.Equal(pubKey[:], otherEd[:])
} else {
return false
}
}
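
The net effect for callers is that signatures are now plain []byte values rather than interface values (compare the removed interface-based Signature code later in this diff). A hedged sketch of the new call pattern; the signAndVerify helper is illustrative only:

func signAndVerify(priv PrivKeyEd25519, msg []byte) bool {
	sig, err := priv.Sign(msg) // sig is a raw 64-byte []byte, no interface wrapper
	if err != nil {
		return false
	}
	return priv.PubKey().VerifyBytes(msg, sig)
}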

View File

@@ -0,0 +1,29 @@
package ed25519_test
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/ed25519"
)
func TestSignAndValidateEd25519(t *testing.T) {
privKey := ed25519.GenPrivKey()
pubKey := privKey.PubKey()
msg := crypto.CRandBytes(128)
sig, err := privKey.Sign(msg)
require.Nil(t, err)
// Test the signature
assert.True(t, pubKey.VerifyBytes(msg, sig))
// Mutate the signature, just one bit.
// TODO: Replace this with a much better fuzzer, tendermint/ed25519/issues/10
sig[7] ^= byte(0x01)
assert.False(t, pubKey.VerifyBytes(msg, sig))
}

View File

@@ -0,0 +1,46 @@
package cryptoAmino
import (
amino "github.com/tendermint/go-amino"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/ed25519"
"github.com/tendermint/tendermint/crypto/secp256k1"
)
var cdc = amino.NewCodec()
func init() {
// NOTE: It's important that there be no conflicts here,
// as that would change the canonical representations,
// and therefore change the address.
// TODO: Remove above note when
// https://github.com/tendermint/go-amino/issues/9
// is resolved
RegisterAmino(cdc)
}
// RegisterAmino registers all crypto related types in the given (amino) codec.
func RegisterAmino(cdc *amino.Codec) {
// These are all written here instead of
cdc.RegisterInterface((*crypto.PubKey)(nil), nil)
cdc.RegisterConcrete(ed25519.PubKeyEd25519{},
"tendermint/PubKeyEd25519", nil)
cdc.RegisterConcrete(secp256k1.PubKeySecp256k1{},
"tendermint/PubKeySecp256k1", nil)
cdc.RegisterInterface((*crypto.PrivKey)(nil), nil)
cdc.RegisterConcrete(ed25519.PrivKeyEd25519{},
"tendermint/PrivKeyEd25519", nil)
cdc.RegisterConcrete(secp256k1.PrivKeySecp256k1{},
"tendermint/PrivKeySecp256k1", nil)
}
func PrivKeyFromBytes(privKeyBytes []byte) (privKey crypto.PrivKey, err error) {
err = cdc.UnmarshalBinaryBare(privKeyBytes, &privKey)
return
}
func PubKeyFromBytes(pubKeyBytes []byte) (pubKey crypto.PubKey, err error) {
err = cdc.UnmarshalBinaryBare(pubKeyBytes, &pubKey)
return
}
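
A hedged usage sketch of the two helpers above (the import path is assumed to be crypto/encoding/amino, matching the package name cryptoAmino; the main wrapper is illustrative):

package main

import (
	"fmt"

	"github.com/tendermint/tendermint/crypto/ed25519"
	cryptoAmino "github.com/tendermint/tendermint/crypto/encoding/amino"
)

func main() {
	priv := ed25519.GenPrivKey()
	bz := priv.PubKey().Bytes() // amino bytes: 4-byte prefix + length byte + 32-byte key

	pub, err := cryptoAmino.PubKeyFromBytes(bz)
	if err != nil {
		panic(err)
	}
	fmt.Println(pub.Equals(priv.PubKey())) // true
}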

View File

@@ -1,4 +1,4 @@
package crypto
package cryptoAmino
import (
"os"
@@ -6,18 +6,23 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/ed25519"
"github.com/tendermint/tendermint/crypto/secp256k1"
)
type byter interface {
Bytes() []byte
}
func checkAminoBinary(t *testing.T, src byter, dst interface{}, size int) {
func checkAminoBinary(t *testing.T, src, dst interface{}, size int) {
// Marshal to binary bytes.
bz, err := cdc.MarshalBinaryBare(src)
require.Nil(t, err, "%+v", err)
// Make sure this is compatible with current (Bytes()) encoding.
assert.Equal(t, src.Bytes(), bz, "Amino binary vs Bytes() mismatch")
if byterSrc, ok := src.(byter); ok {
// Make sure this is compatible with current (Bytes()) encoding.
assert.Equal(t, byterSrc.Bytes(), bz, "Amino binary vs Bytes() mismatch")
}
// Make sure we have the expected length.
if size != -1 {
assert.Equal(t, size, len(bz), "Amino binary size mismatch")
@@ -50,70 +55,72 @@ func ExamplePrintRegisteredTypes() {
//| PubKeySecp256k1 | tendermint/PubKeySecp256k1 | 0xEB5AE987 | 0x21 | |
//| PrivKeyEd25519 | tendermint/PrivKeyEd25519 | 0xA3288910 | 0x40 | |
//| PrivKeySecp256k1 | tendermint/PrivKeySecp256k1 | 0xE1B0F79B | 0x20 | |
//| SignatureEd25519 | tendermint/SignatureEd25519 | 0x2031EA53 | 0x40 | |
//| SignatureSecp256k1 | tendermint/SignatureSecp256k1 | 0x7FC4A495 | variable | |
}
func TestKeyEncodings(t *testing.T) {
cases := []struct {
privKey PrivKey
privKey crypto.PrivKey
privSize, pubSize int // binary sizes
}{
{
privKey: GenPrivKeyEd25519(),
privKey: ed25519.GenPrivKey(),
privSize: 69,
pubSize: 37,
},
{
privKey: GenPrivKeySecp256k1(),
privKey: secp256k1.GenPrivKey(),
privSize: 37,
pubSize: 38,
},
}
for _, tc := range cases {
for tcIndex, tc := range cases {
// Check (de/en)codings of PrivKeys.
var priv2, priv3 PrivKey
var priv2, priv3 crypto.PrivKey
checkAminoBinary(t, tc.privKey, &priv2, tc.privSize)
assert.EqualValues(t, tc.privKey, priv2)
assert.EqualValues(t, tc.privKey, priv2, "tc #%d", tcIndex)
checkAminoJSON(t, tc.privKey, &priv3, false) // TODO also check Prefix bytes.
assert.EqualValues(t, tc.privKey, priv3)
assert.EqualValues(t, tc.privKey, priv3, "tc #%d", tcIndex)
// Check (de/en)codings of Signatures.
var sig1, sig2, sig3 Signature
var sig1, sig2 []byte
sig1, err := tc.privKey.Sign([]byte("something"))
assert.NoError(t, err)
assert.NoError(t, err, "tc #%d", tcIndex)
checkAminoBinary(t, sig1, &sig2, -1) // Signature size changes for Secp anyways.
assert.EqualValues(t, sig1, sig2)
checkAminoJSON(t, sig1, &sig3, false) // TODO also check Prefix bytes.
assert.EqualValues(t, sig1, sig3)
assert.EqualValues(t, sig1, sig2, "tc #%d", tcIndex)
// Check (de/en)codings of PubKeys.
pubKey := tc.privKey.PubKey()
var pub2, pub3 PubKey
var pub2, pub3 crypto.PubKey
checkAminoBinary(t, pubKey, &pub2, tc.pubSize)
assert.EqualValues(t, pubKey, pub2)
assert.EqualValues(t, pubKey, pub2, "tc #%d", tcIndex)
checkAminoJSON(t, pubKey, &pub3, false) // TODO also check Prefix bytes.
assert.EqualValues(t, pubKey, pub3)
assert.EqualValues(t, pubKey, pub3, "tc #%d", tcIndex)
}
}
func TestNilEncodings(t *testing.T) {
// Check nil Signature.
var a, b Signature
var a, b []byte
checkAminoJSON(t, &a, &b, true)
assert.EqualValues(t, a, b)
// Check nil PubKey.
var c, d PubKey
var c, d crypto.PubKey
checkAminoJSON(t, &c, &d, true)
assert.EqualValues(t, c, d)
// Check nil PrivKey.
var e, f PrivKey
var e, f crypto.PrivKey
checkAminoJSON(t, &e, &f, true)
assert.EqualValues(t, e, f)
}
func TestPubKeyInvalidDataProperReturnsEmpty(t *testing.T) {
pk, err := PubKeyFromBytes([]byte("foo"))
require.NotNil(t, err, "expecting a non-nil error")
require.Nil(t, pk, "expecting an empty public key on error")
}

View File

@@ -2,6 +2,7 @@ package crypto
import (
"crypto/sha256"
"golang.org/x/crypto/ripemd160"
)

View File

@@ -1,105 +0,0 @@
// Package hkdfchacha20poly1305 creates an AEAD using hkdf, chacha20, and poly1305
// When sealing and opening, the hkdf is used to obtain the nonce and subkey for
// chacha20. Other than the change in how the subkey and nonce for chacha
// are obtained, this is the same as chacha20poly1305
package hkdfchacha20poly1305
import (
"crypto/cipher"
"crypto/sha256"
"errors"
"io"
"golang.org/x/crypto/chacha20poly1305"
"golang.org/x/crypto/hkdf"
)
type hkdfchacha20poly1305 struct {
key [KeySize]byte
}
const (
// KeySize is the size of the key used by this AEAD, in bytes.
KeySize = 32
// NonceSize is the size of the nonce used with this AEAD, in bytes.
NonceSize = 24
// TagSize is the size added from poly1305
TagSize = 16
// MaxPlaintextSize is the max size that can be passed into a single call of Seal
MaxPlaintextSize = (1 << 38) - 64
// MaxCiphertextSize is the max size that can be passed into a single call of Open,
// this differs from plaintext size due to the tag
MaxCiphertextSize = (1 << 38) - 48
// HkdfInfo is the parameter used internally for Hkdf's info parameter.
HkdfInfo = "TENDERMINT_SECRET_CONNECTION_FRAME_KEY_DERIVE"
)
// New creates a new hkdfchacha20poly1305 AEAD with 24-byte nonces
func New(key []byte) (cipher.AEAD, error) {
if len(key) != KeySize {
return nil, errors.New("chacha20poly1305: bad key length")
}
ret := new(hkdfchacha20poly1305)
copy(ret.key[:], key)
return ret, nil
}
func (c *hkdfchacha20poly1305) NonceSize() int {
return NonceSize
}
func (c *hkdfchacha20poly1305) Overhead() int {
return TagSize
}
func (c *hkdfchacha20poly1305) Seal(dst, nonce, plaintext, additionalData []byte) []byte {
if len(nonce) != NonceSize {
panic("hkdfchacha20poly1305: bad nonce length passed to Seal")
}
if uint64(len(plaintext)) > MaxPlaintextSize {
panic("hkdfchacha20poly1305: plaintext too large")
}
subKey, chachaNonce := getSubkeyAndChachaNonceFromHkdf(&c.key, &nonce)
aead, err := chacha20poly1305.New(subKey[:])
if err != nil {
panic("hkdfchacha20poly1305: failed to initialize chacha20poly1305")
}
return aead.Seal(dst, chachaNonce[:], plaintext, additionalData)
}
func (c *hkdfchacha20poly1305) Open(dst, nonce, ciphertext, additionalData []byte) ([]byte, error) {
if len(nonce) != NonceSize {
return nil, errors.New("hkdfchacha20poly1305: bad nonce length passed to Open")
}
if uint64(len(ciphertext)) > MaxCiphertextSize {
return nil, errors.New("hkdfchacha20poly1305: ciphertext too large")
}
subKey, chachaNonce := getSubkeyAndChachaNonceFromHkdf(&c.key, &nonce)
aead, err := chacha20poly1305.New(subKey[:])
if err != nil {
panic("hkdfchacha20poly1305: failed to initialize chacha20poly1305")
}
return aead.Open(dst, chachaNonce[:], ciphertext, additionalData)
}
func getSubkeyAndChachaNonceFromHkdf(cKey *[32]byte, nonce *[]byte) (
subKey [KeySize]byte, chachaNonce [chacha20poly1305.NonceSize]byte) {
hash := sha256.New
hkdf := hkdf.New(hash, (*cKey)[:], *nonce, []byte(HkdfInfo))
_, err := io.ReadFull(hkdf, subKey[:])
if err != nil {
panic("hkdfchacha20poly1305: failed to read subkey from hkdf")
}
_, err = io.ReadFull(hkdf, chachaNonce[:])
if err != nil {
panic("hkdfchacha20poly1305: failed to read chachaNonce from hkdf")
}
return
}

View File

@@ -0,0 +1,88 @@
package benchmarking
import (
"io"
"testing"
"github.com/tendermint/tendermint/crypto"
)
// The code in this file is adapted from agl/ed25519.
// As such it is under the following license.
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found at the bottom of this file.
type zeroReader struct{}
func (zeroReader) Read(buf []byte) (int, error) {
for i := range buf {
buf[i] = 0
}
return len(buf), nil
}
// BenchmarkKeyGeneration benchmarks the given key generation algorithm using
// a dummy reader.
func BenchmarkKeyGeneration(b *testing.B, GenerateKey func(reader io.Reader) crypto.PrivKey) {
var zero zeroReader
for i := 0; i < b.N; i++ {
GenerateKey(zero)
}
}
// BenchmarkSigning benchmarks the given signing algorithm using
// the provided privkey.
func BenchmarkSigning(b *testing.B, priv crypto.PrivKey) {
message := []byte("Hello, world!")
b.ResetTimer()
for i := 0; i < b.N; i++ {
priv.Sign(message)
}
}
// BenchmarkVerification benchmarks the given verification algorithm using
// the provided privkey on a constant message.
func BenchmarkVerification(b *testing.B, priv crypto.PrivKey) {
pub := priv.PubKey()
// use a short message, so this time doesn't get dominated by hashing.
message := []byte("Hello, world!")
signature, err := priv.Sign(message)
if err != nil {
b.Fatal(err)
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
pub.VerifyBytes(message, signature)
}
}
// Below is the aforementioned license.
// Copyright (c) 2012 The Go Authors. All rights reserved.
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@@ -1,164 +0,0 @@
package crypto
import (
"crypto/subtle"
secp256k1 "github.com/btcsuite/btcd/btcec"
"github.com/tendermint/ed25519"
"github.com/tendermint/ed25519/extra25519"
)
func PrivKeyFromBytes(privKeyBytes []byte) (privKey PrivKey, err error) {
err = cdc.UnmarshalBinaryBare(privKeyBytes, &privKey)
return
}
//----------------------------------------
type PrivKey interface {
Bytes() []byte
Sign(msg []byte) (Signature, error)
PubKey() PubKey
Equals(PrivKey) bool
}
//-------------------------------------
var _ PrivKey = PrivKeyEd25519{}
// Implements PrivKey
type PrivKeyEd25519 [64]byte
func (privKey PrivKeyEd25519) Bytes() []byte {
return cdc.MustMarshalBinaryBare(privKey)
}
func (privKey PrivKeyEd25519) Sign(msg []byte) (Signature, error) {
privKeyBytes := [64]byte(privKey)
signatureBytes := ed25519.Sign(&privKeyBytes, msg)
return SignatureEd25519(*signatureBytes), nil
}
func (privKey PrivKeyEd25519) PubKey() PubKey {
privKeyBytes := [64]byte(privKey)
pubBytes := *ed25519.MakePublicKey(&privKeyBytes)
return PubKeyEd25519(pubBytes)
}
// Equals - you probably don't need to use this.
// Runs in constant time based on length of the keys.
func (privKey PrivKeyEd25519) Equals(other PrivKey) bool {
if otherEd, ok := other.(PrivKeyEd25519); ok {
return subtle.ConstantTimeCompare(privKey[:], otherEd[:]) == 1
} else {
return false
}
}
func (privKey PrivKeyEd25519) ToCurve25519() *[32]byte {
keyCurve25519 := new([32]byte)
privKeyBytes := [64]byte(privKey)
extra25519.PrivateKeyToCurve25519(keyCurve25519, &privKeyBytes)
return keyCurve25519
}
// Deterministically generates new priv-key bytes from key.
func (privKey PrivKeyEd25519) Generate(index int) PrivKeyEd25519 {
bz, err := cdc.MarshalBinaryBare(struct {
PrivKey [64]byte
Index int
}{privKey, index})
if err != nil {
panic(err)
}
newBytes := Sha256(bz)
newKey := new([64]byte)
copy(newKey[:32], newBytes)
ed25519.MakePublicKey(newKey)
return PrivKeyEd25519(*newKey)
}
func GenPrivKeyEd25519() PrivKeyEd25519 {
privKeyBytes := new([64]byte)
copy(privKeyBytes[:32], CRandBytes(32))
ed25519.MakePublicKey(privKeyBytes)
return PrivKeyEd25519(*privKeyBytes)
}
// NOTE: secret should be the output of a KDF like bcrypt,
// if it's derived from user input.
func GenPrivKeyEd25519FromSecret(secret []byte) PrivKeyEd25519 {
privKey32 := Sha256(secret) // Not Ripemd160 because we want 32 bytes.
privKeyBytes := new([64]byte)
copy(privKeyBytes[:32], privKey32)
ed25519.MakePublicKey(privKeyBytes)
return PrivKeyEd25519(*privKeyBytes)
}
//-------------------------------------
var _ PrivKey = PrivKeySecp256k1{}
// Implements PrivKey
type PrivKeySecp256k1 [32]byte
func (privKey PrivKeySecp256k1) Bytes() []byte {
return cdc.MustMarshalBinaryBare(privKey)
}
func (privKey PrivKeySecp256k1) Sign(msg []byte) (Signature, error) {
priv__, _ := secp256k1.PrivKeyFromBytes(secp256k1.S256(), privKey[:])
sig__, err := priv__.Sign(Sha256(msg))
if err != nil {
return nil, err
}
return SignatureSecp256k1(sig__.Serialize()), nil
}
func (privKey PrivKeySecp256k1) PubKey() PubKey {
_, pub__ := secp256k1.PrivKeyFromBytes(secp256k1.S256(), privKey[:])
var pub PubKeySecp256k1
copy(pub[:], pub__.SerializeCompressed())
return pub
}
// Equals - you probably don't need to use this.
// Runs in constant time based on length of the keys.
func (privKey PrivKeySecp256k1) Equals(other PrivKey) bool {
if otherSecp, ok := other.(PrivKeySecp256k1); ok {
return subtle.ConstantTimeCompare(privKey[:], otherSecp[:]) == 1
} else {
return false
}
}
/*
// Deterministically generates new priv-key bytes from key.
func (key PrivKeySecp256k1) Generate(index int) PrivKeySecp256k1 {
newBytes := cdc.BinarySha256(struct {
PrivKey [64]byte
Index int
}{key, index})
var newKey [64]byte
copy(newKey[:], newBytes)
return PrivKeySecp256k1(newKey)
}
*/
func GenPrivKeySecp256k1() PrivKeySecp256k1 {
privKeyBytes := [32]byte{}
copy(privKeyBytes[:], CRandBytes(32))
priv, _ := secp256k1.PrivKeyFromBytes(secp256k1.S256(), privKeyBytes[:])
copy(privKeyBytes[:], priv.Serialize())
return PrivKeySecp256k1(privKeyBytes)
}
// NOTE: secret should be the output of a KDF like bcrypt,
// if it's derived from user input.
func GenPrivKeySecp256k1FromSecret(secret []byte) PrivKeySecp256k1 {
privKey32 := Sha256(secret) // Not Ripemd160 because we want 32 bytes.
priv, _ := secp256k1.PrivKeyFromBytes(secp256k1.S256(), privKey32)
privKeyBytes := [32]byte{}
copy(privKeyBytes[:], priv.Serialize())
return PrivKeySecp256k1(privKeyBytes)
}

View File

@@ -1,60 +0,0 @@
package crypto_test
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/tendermint/tendermint/crypto"
)
func TestGeneratePrivKey(t *testing.T) {
testPriv := crypto.GenPrivKeyEd25519()
testGenerate := testPriv.Generate(1)
signBytes := []byte("something to sign")
pub := testGenerate.PubKey()
sig, err := testGenerate.Sign(signBytes)
assert.NoError(t, err)
assert.True(t, pub.VerifyBytes(signBytes, sig))
}
/*
type BadKey struct {
PrivKeyEd25519
}
func TestReadPrivKey(t *testing.T) {
assert, require := assert.New(t), require.New(t)
// garbage in, garbage out
garbage := []byte("hjgewugfbiewgofwgewr")
XXX This test wants to register BadKey globally to crypto,
but we don't want to support that.
_, err := PrivKeyFromBytes(garbage)
require.Error(err)
edKey := GenPrivKeyEd25519()
badKey := BadKey{edKey}
cases := []struct {
key PrivKey
valid bool
}{
{edKey, true},
{badKey, false},
}
for i, tc := range cases {
data := tc.key.Bytes()
fmt.Println(">>>", data)
key, err := PrivKeyFromBytes(data)
fmt.Printf("!!! %#v\n", key, err)
if tc.valid {
assert.NoError(err, "%d", i)
assert.Equal(tc.key, key, "%d", i)
} else {
assert.Error(err, "%d: %#v", i, key)
}
}
}
*/

View File

@@ -1,153 +0,0 @@
package crypto
import (
"bytes"
"crypto/sha256"
"fmt"
"golang.org/x/crypto/ripemd160"
secp256k1 "github.com/btcsuite/btcd/btcec"
"github.com/tendermint/ed25519"
"github.com/tendermint/ed25519/extra25519"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/crypto/tmhash"
)
// An address is a []byte, but hex-encoded even in JSON.
// []byte leaves us the option to change the address length.
// Use an alias so Unmarshal methods (with ptr receivers) are available too.
type Address = cmn.HexBytes
func PubKeyFromBytes(pubKeyBytes []byte) (pubKey PubKey, err error) {
err = cdc.UnmarshalBinaryBare(pubKeyBytes, &pubKey)
return
}
//----------------------------------------
type PubKey interface {
Address() Address
Bytes() []byte
VerifyBytes(msg []byte, sig Signature) bool
Equals(PubKey) bool
}
//-------------------------------------
var _ PubKey = PubKeyEd25519{}
const PubKeyEd25519Size = 32
// Implements PubKeyInner
type PubKeyEd25519 [PubKeyEd25519Size]byte
// Address is the SHA256-20 of the raw pubkey bytes.
func (pubKey PubKeyEd25519) Address() Address {
return Address(tmhash.Sum(pubKey[:]))
}
func (pubKey PubKeyEd25519) Bytes() []byte {
bz, err := cdc.MarshalBinaryBare(pubKey)
if err != nil {
panic(err)
}
return bz
}
func (pubKey PubKeyEd25519) VerifyBytes(msg []byte, sig_ Signature) bool {
// make sure we use the same algorithm to sign
sig, ok := sig_.(SignatureEd25519)
if !ok {
return false
}
pubKeyBytes := [PubKeyEd25519Size]byte(pubKey)
sigBytes := [SignatureEd25519Size]byte(sig)
return ed25519.Verify(&pubKeyBytes, msg, &sigBytes)
}
// For use with golang/crypto/nacl/box
// If error, returns nil.
func (pubKey PubKeyEd25519) ToCurve25519() *[PubKeyEd25519Size]byte {
keyCurve25519, pubKeyBytes := new([PubKeyEd25519Size]byte), [PubKeyEd25519Size]byte(pubKey)
ok := extra25519.PublicKeyToCurve25519(keyCurve25519, &pubKeyBytes)
if !ok {
return nil
}
return keyCurve25519
}
func (pubKey PubKeyEd25519) String() string {
return fmt.Sprintf("PubKeyEd25519{%X}", pubKey[:])
}
func (pubKey PubKeyEd25519) Equals(other PubKey) bool {
if otherEd, ok := other.(PubKeyEd25519); ok {
return bytes.Equal(pubKey[:], otherEd[:])
} else {
return false
}
}
//-------------------------------------
var _ PubKey = PubKeySecp256k1{}
const PubKeySecp256k1Size = 33
// Implements PubKey.
// Compressed pubkey (just the x-coordinate),
// prefixed with 0x02 or 0x03, depending on the y-coordinate.
type PubKeySecp256k1 [PubKeySecp256k1Size]byte
// Implements Bitcoin style addresses: RIPEMD160(SHA256(pubkey))
func (pubKey PubKeySecp256k1) Address() Address {
hasherSHA256 := sha256.New()
hasherSHA256.Write(pubKey[:]) // does not error
sha := hasherSHA256.Sum(nil)
hasherRIPEMD160 := ripemd160.New()
hasherRIPEMD160.Write(sha) // does not error
return Address(hasherRIPEMD160.Sum(nil))
}
func (pubKey PubKeySecp256k1) Bytes() []byte {
bz, err := cdc.MarshalBinaryBare(pubKey)
if err != nil {
panic(err)
}
return bz
}
func (pubKey PubKeySecp256k1) VerifyBytes(msg []byte, sig_ Signature) bool {
// and assert same algorithm to sign and verify
sig, ok := sig_.(SignatureSecp256k1)
if !ok {
return false
}
pub__, err := secp256k1.ParsePubKey(pubKey[:], secp256k1.S256())
if err != nil {
return false
}
sig__, err := secp256k1.ParseDERSignature(sig[:], secp256k1.S256())
if err != nil {
return false
}
return sig__.Verify(Sha256(msg), pub__)
}
func (pubKey PubKeySecp256k1) String() string {
return fmt.Sprintf("PubKeySecp256k1{%X}", pubKey[:])
}
func (pubKey PubKeySecp256k1) Equals(other PubKey) bool {
if otherSecp, ok := other.(PubKeySecp256k1); ok {
return bytes.Equal(pubKey[:], otherSecp[:])
} else {
return false
}
}

View File

@@ -1,50 +0,0 @@
package crypto
import (
"encoding/hex"
"testing"
"github.com/btcsuite/btcutil/base58"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
type keyData struct {
priv string
pub string
addr string
}
var secpDataTable = []keyData{
{
priv: "a96e62ed3955e65be32703f12d87b6b5cf26039ecfa948dc5107a495418e5330",
pub: "02950e1cdfcb133d6024109fd489f734eeb4502418e538c28481f22bce276f248c",
addr: "1CKZ9Nx4zgds8tU7nJHotKSDr4a9bYJCa3",
},
}
func TestPubKeySecp256k1Address(t *testing.T) {
for _, d := range secpDataTable {
privB, _ := hex.DecodeString(d.priv)
pubB, _ := hex.DecodeString(d.pub)
addrBbz, _, _ := base58.CheckDecode(d.addr)
addrB := Address(addrBbz)
var priv PrivKeySecp256k1
copy(priv[:], privB)
pubKey := priv.PubKey()
pubT, _ := pubKey.(PubKeySecp256k1)
pub := pubT[:]
addr := pubKey.Address()
assert.Equal(t, pub, pubB, "Expected pub keys to match")
assert.Equal(t, addr, addrB, "Expected addresses to match")
}
}
func TestPubKeyInvalidDataProperReturnsEmpty(t *testing.T) {
pk, err := PubKeyFromBytes([]byte("foo"))
require.NotNil(t, err, "expecting a non-nil error")
require.Nil(t, pk, "expecting an empty public key on error")
}

View File

@@ -0,0 +1,26 @@
package secp256k1
import (
"io"
"testing"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/internal/benchmarking"
)
func BenchmarkKeyGeneration(b *testing.B) {
benchmarkKeygenWrapper := func(reader io.Reader) crypto.PrivKey {
return genPrivKey(reader)
}
benchmarking.BenchmarkKeyGeneration(b, benchmarkKeygenWrapper)
}
func BenchmarkSigning(b *testing.B) {
priv := GenPrivKey()
benchmarking.BenchmarkSigning(b, priv)
}
func BenchmarkVerification(b *testing.B) {
priv := GenPrivKey()
benchmarking.BenchmarkVerification(b, priv)
}

View File

@@ -0,0 +1,160 @@
package secp256k1
import (
"bytes"
"crypto/sha256"
"crypto/subtle"
"fmt"
"io"
secp256k1 "github.com/btcsuite/btcd/btcec"
amino "github.com/tendermint/go-amino"
"github.com/tendermint/tendermint/crypto"
"golang.org/x/crypto/ripemd160"
)
//-------------------------------------
const (
Secp256k1PrivKeyAminoRoute = "tendermint/PrivKeySecp256k1"
Secp256k1PubKeyAminoRoute = "tendermint/PubKeySecp256k1"
)
var cdc = amino.NewCodec()
func init() {
cdc.RegisterInterface((*crypto.PubKey)(nil), nil)
cdc.RegisterConcrete(PubKeySecp256k1{},
Secp256k1PubKeyAminoRoute, nil)
cdc.RegisterInterface((*crypto.PrivKey)(nil), nil)
cdc.RegisterConcrete(PrivKeySecp256k1{},
Secp256k1PrivKeyAminoRoute, nil)
}
//-------------------------------------
var _ crypto.PrivKey = PrivKeySecp256k1{}
// PrivKeySecp256k1 implements PrivKey.
type PrivKeySecp256k1 [32]byte
// Bytes marshals the private key using amino encoding.
func (privKey PrivKeySecp256k1) Bytes() []byte {
return cdc.MustMarshalBinaryBare(privKey)
}
// Sign creates an ECDSA signature on curve Secp256k1, using SHA256 on the msg.
func (privKey PrivKeySecp256k1) Sign(msg []byte) ([]byte, error) {
priv, _ := secp256k1.PrivKeyFromBytes(secp256k1.S256(), privKey[:])
sig, err := priv.Sign(crypto.Sha256(msg))
if err != nil {
return nil, err
}
return sig.Serialize(), nil
}
// PubKey performs the point-scalar multiplication from the privKey on the
// generator point to get the pubkey.
func (privKey PrivKeySecp256k1) PubKey() crypto.PubKey {
_, pubkeyObject := secp256k1.PrivKeyFromBytes(secp256k1.S256(), privKey[:])
var pubkeyBytes PubKeySecp256k1
copy(pubkeyBytes[:], pubkeyObject.SerializeCompressed())
return pubkeyBytes
}
// Equals - you probably don't need to use this.
// Runs in constant time based on length of the keys.
func (privKey PrivKeySecp256k1) Equals(other crypto.PrivKey) bool {
if otherSecp, ok := other.(PrivKeySecp256k1); ok {
return subtle.ConstantTimeCompare(privKey[:], otherSecp[:]) == 1
}
return false
}
// GenPrivKey generates a new ECDSA private key on curve secp256k1.
// It uses OS randomness in conjunction with the current global random seed
// in tendermint/libs/common to generate the private key.
func GenPrivKey() PrivKeySecp256k1 {
return genPrivKey(crypto.CReader())
}
// genPrivKey generates a new secp256k1 private key using the provided reader.
func genPrivKey(rand io.Reader) PrivKeySecp256k1 {
privKeyBytes := [32]byte{}
_, err := io.ReadFull(rand, privKeyBytes[:])
if err != nil {
panic(err)
}
// privKeyBytes is guaranteed to be 32 bytes long, so it can be
// converted to PrivKeySecp256k1.
return PrivKeySecp256k1(privKeyBytes)
}
// GenPrivKeySecp256k1 hashes the secret with SHA2, and uses
// that 32 byte output to create the private key.
// NOTE: secret should be the output of a KDF like bcrypt,
// if it's derived from user input.
func GenPrivKeySecp256k1(secret []byte) PrivKeySecp256k1 {
privKey32 := sha256.Sum256(secret)
// sha256.Sum256() is guaranteed to return 32 bytes, so its output can be
// converted to PrivKeySecp256k1.
return PrivKeySecp256k1(privKey32)
}
//-------------------------------------
var _ crypto.PubKey = PubKeySecp256k1{}
// PubKeySecp256k1Size is 33 bytes: 32 bytes for one field element
// (the x-coordinate), plus one byte for the parity of the y-coordinate.
const PubKeySecp256k1Size = 33
// PubKeySecp256k1 implements crypto.PubKey.
// It is the compressed form of the pubkey: the first byte is 0x02 if the
// y-coordinate is even and 0x03 if it is odd.
// This prefix is followed by the 32-byte x-coordinate.
type PubKeySecp256k1 [PubKeySecp256k1Size]byte
// Address returns a Bitcoin style addresses: RIPEMD160(SHA256(pubkey))
func (pubKey PubKeySecp256k1) Address() crypto.Address {
hasherSHA256 := sha256.New()
hasherSHA256.Write(pubKey[:]) // does not error
sha := hasherSHA256.Sum(nil)
hasherRIPEMD160 := ripemd160.New()
hasherRIPEMD160.Write(sha) // does not error
return crypto.Address(hasherRIPEMD160.Sum(nil))
}
// Bytes returns the pubkey marshalled with amino encoding.
func (pubKey PubKeySecp256k1) Bytes() []byte {
bz, err := cdc.MarshalBinaryBare(pubKey)
if err != nil {
panic(err)
}
return bz
}
func (pubKey PubKeySecp256k1) VerifyBytes(msg []byte, sig []byte) bool {
pub, err := secp256k1.ParsePubKey(pubKey[:], secp256k1.S256())
if err != nil {
return false
}
parsedSig, err := secp256k1.ParseDERSignature(sig[:], secp256k1.S256())
if err != nil {
return false
}
return parsedSig.Verify(crypto.Sha256(msg), pub)
}
func (pubKey PubKeySecp256k1) String() string {
return fmt.Sprintf("PubKeySecp256k1{%X}", pubKey[:])
}
func (pubKey PubKeySecp256k1) Equals(other crypto.PubKey) bool {
if otherSecp, ok := other.(PubKeySecp256k1); ok {
return bytes.Equal(pubKey[:], otherSecp[:])
}
return false
}

View File

@@ -0,0 +1,86 @@
package secp256k1_test
import (
"encoding/hex"
"testing"
"github.com/btcsuite/btcutil/base58"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/secp256k1"
underlyingSecp256k1 "github.com/btcsuite/btcd/btcec"
)
type keyData struct {
priv string
pub string
addr string
}
var secpDataTable = []keyData{
{
priv: "a96e62ed3955e65be32703f12d87b6b5cf26039ecfa948dc5107a495418e5330",
pub: "02950e1cdfcb133d6024109fd489f734eeb4502418e538c28481f22bce276f248c",
addr: "1CKZ9Nx4zgds8tU7nJHotKSDr4a9bYJCa3",
},
}
func TestPubKeySecp256k1Address(t *testing.T) {
for _, d := range secpDataTable {
privB, _ := hex.DecodeString(d.priv)
pubB, _ := hex.DecodeString(d.pub)
addrBbz, _, _ := base58.CheckDecode(d.addr)
addrB := crypto.Address(addrBbz)
var priv secp256k1.PrivKeySecp256k1
copy(priv[:], privB)
pubKey := priv.PubKey()
pubT, _ := pubKey.(secp256k1.PubKeySecp256k1)
pub := pubT[:]
addr := pubKey.Address()
assert.Equal(t, pub, pubB, "Expected pub keys to match")
assert.Equal(t, addr, addrB, "Expected addresses to match")
}
}
func TestSignAndValidateSecp256k1(t *testing.T) {
privKey := secp256k1.GenPrivKey()
pubKey := privKey.PubKey()
msg := crypto.CRandBytes(128)
sig, err := privKey.Sign(msg)
require.Nil(t, err)
assert.True(t, pubKey.VerifyBytes(msg, sig))
// Mutate the signature, just one bit.
sig[3] ^= byte(0x01)
assert.False(t, pubKey.VerifyBytes(msg, sig))
}
// This test is intended to justify the removal of calls to the underlying library
// in creating the privkey.
func TestSecp256k1LoadPrivkeyAndSerializeIsIdentity(t *testing.T) {
numberOfTests := 256
for i := 0; i < numberOfTests; i++ {
// Seed the test case with some random bytes
privKeyBytes := [32]byte{}
copy(privKeyBytes[:], crypto.CRandBytes(32))
// This function creates a private and public key in the underlying library's format.
// The private key is essentially new(big.Int).SetBytes(pk), which drops leading zero bytes.
priv, _ := underlyingSecp256k1.PrivKeyFromBytes(underlyingSecp256k1.S256(), privKeyBytes[:])
// this takes the bytes returned by `(big int).Bytes()`, and if the length is less than 32 bytes,
// pads the bytes from the left with zero bytes. Therefore these two functions composed
// result in the identity function on privKeyBytes, hence the following equality check
// always succeeds.
serializedBytes := priv.Serialize()
require.Equal(t, privKeyBytes[:], serializedBytes)
}
}
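
To make the padding argument concrete, here is a hedged standalone sketch (separate from the test file; the main wrapper is illustrative) using a seed whose big-integer form drops 31 leading zero bytes:

package main

import (
	"fmt"

	underlyingSecp256k1 "github.com/btcsuite/btcd/btcec"
)

func main() {
	seed := make([]byte, 32)
	seed[31] = 0x01 // the integer 1: 31 leading zero bytes in 32-byte form

	priv, _ := underlyingSecp256k1.PrivKeyFromBytes(underlyingSecp256k1.S256(), seed)
	out := priv.Serialize()        // left-pads the big.Int bytes back to 32 bytes
	fmt.Println(len(out), out[31]) // 32 1
}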

View File

@@ -1,90 +0,0 @@
package crypto
import (
"fmt"
"crypto/subtle"
. "github.com/tendermint/tendermint/libs/common"
)
func SignatureFromBytes(pubKeyBytes []byte) (pubKey Signature, err error) {
err = cdc.UnmarshalBinaryBare(pubKeyBytes, &pubKey)
return
}
//----------------------------------------
type Signature interface {
Bytes() []byte
IsZero() bool
Equals(Signature) bool
}
//-------------------------------------
var _ Signature = SignatureEd25519{}
const SignatureEd25519Size = 64
// Implements Signature
type SignatureEd25519 [SignatureEd25519Size]byte
func (sig SignatureEd25519) Bytes() []byte {
bz, err := cdc.MarshalBinaryBare(sig)
if err != nil {
panic(err)
}
return bz
}
func (sig SignatureEd25519) IsZero() bool { return len(sig) == 0 }
func (sig SignatureEd25519) String() string { return fmt.Sprintf("/%X.../", Fingerprint(sig[:])) }
func (sig SignatureEd25519) Equals(other Signature) bool {
if otherEd, ok := other.(SignatureEd25519); ok {
return subtle.ConstantTimeCompare(sig[:], otherEd[:]) == 1
} else {
return false
}
}
func SignatureEd25519FromBytes(data []byte) Signature {
var sig SignatureEd25519
copy(sig[:], data)
return sig
}
//-------------------------------------
var _ Signature = SignatureSecp256k1{}
// Implements Signature
type SignatureSecp256k1 []byte
func (sig SignatureSecp256k1) Bytes() []byte {
bz, err := cdc.MarshalBinaryBare(sig)
if err != nil {
panic(err)
}
return bz
}
func (sig SignatureSecp256k1) IsZero() bool { return len(sig) == 0 }
func (sig SignatureSecp256k1) String() string { return fmt.Sprintf("/%X.../", Fingerprint(sig[:])) }
func (sig SignatureSecp256k1) Equals(other Signature) bool {
if otherSecp, ok := other.(SignatureSecp256k1); ok {
return subtle.ConstantTimeCompare(sig[:], otherSecp[:]) == 1
} else {
return false
}
}
func SignatureSecp256k1FromBytes(data []byte) Signature {
sig := make(SignatureSecp256k1, len(data))
copy(sig[:], data)
return sig
}

View File

@@ -1,46 +0,0 @@
package crypto
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestSignAndValidateEd25519(t *testing.T) {
privKey := GenPrivKeyEd25519()
pubKey := privKey.PubKey()
msg := CRandBytes(128)
sig, err := privKey.Sign(msg)
require.Nil(t, err)
// Test the signature
assert.True(t, pubKey.VerifyBytes(msg, sig))
// Mutate the signature, just one bit.
sigEd := sig.(SignatureEd25519)
sigEd[7] ^= byte(0x01)
sig = sigEd
assert.False(t, pubKey.VerifyBytes(msg, sig))
}
func TestSignAndValidateSecp256k1(t *testing.T) {
privKey := GenPrivKeySecp256k1()
pubKey := privKey.PubKey()
msg := CRandBytes(128)
sig, err := privKey.Sign(msg)
require.Nil(t, err)
assert.True(t, pubKey.VerifyBytes(msg, sig))
// Mutate the signature, just one bit.
sigEd := sig.(SignatureSecp256k1)
sigEd[3] ^= byte(0x01)
sig = sigEd
assert.False(t, pubKey.VerifyBytes(msg, sig))
}

View File

@@ -0,0 +1,103 @@
package xchacha20poly1305
import (
"bytes"
"encoding/hex"
"testing"
)
func toHex(bits []byte) string {
return hex.EncodeToString(bits)
}
func fromHex(bits string) []byte {
b, err := hex.DecodeString(bits)
if err != nil {
panic(err)
}
return b
}
func TestHChaCha20(t *testing.T) {
for i, v := range hChaCha20Vectors {
var key [32]byte
var nonce [16]byte
copy(key[:], v.key)
copy(nonce[:], v.nonce)
HChaCha20(&key, &nonce, &key)
if !bytes.Equal(key[:], v.keystream) {
t.Errorf("Test %d: keystream mismatch:\n \t got: %s\n \t want: %s", i, toHex(key[:]), toHex(v.keystream))
}
}
}
var hChaCha20Vectors = []struct {
key, nonce, keystream []byte
}{
{
fromHex("0000000000000000000000000000000000000000000000000000000000000000"),
fromHex("000000000000000000000000000000000000000000000000"),
fromHex("1140704c328d1d5d0e30086cdf209dbd6a43b8f41518a11cc387b669b2ee6586"),
},
{
fromHex("8000000000000000000000000000000000000000000000000000000000000000"),
fromHex("000000000000000000000000000000000000000000000000"),
fromHex("7d266a7fd808cae4c02a0a70dcbfbcc250dae65ce3eae7fc210f54cc8f77df86"),
},
{
fromHex("0000000000000000000000000000000000000000000000000000000000000001"),
fromHex("000000000000000000000000000000000000000000000002"),
fromHex("e0c77ff931bb9163a5460c02ac281c2b53d792b1c43fea817e9ad275ae546963"),
},
{
fromHex("000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f"),
fromHex("000102030405060708090a0b0c0d0e0f1011121314151617"),
fromHex("51e3ff45a895675c4b33b46c64f4a9ace110d34df6a2ceab486372bacbd3eff6"),
},
{
fromHex("24f11cce8a1b3d61e441561a696c1c1b7e173d084fd4812425435a8896a013dc"),
fromHex("d9660c5900ae19ddad28d6e06e45fe5e"),
fromHex("5966b3eec3bff1189f831f06afe4d4e3be97fa9235ec8c20d08acfbbb4e851e3"),
},
}
func TestVectors(t *testing.T) {
for i, v := range vectors {
if len(v.plaintext) == 0 {
v.plaintext = make([]byte, len(v.ciphertext))
}
var nonce [24]byte
copy(nonce[:], v.nonce)
aead, err := New(v.key)
if err != nil {
t.Error(err)
}
dst := aead.Seal(nil, nonce[:], v.plaintext, v.ad)
if !bytes.Equal(dst, v.ciphertext) {
t.Errorf("Test %d: ciphertext mismatch:\n \t got: %s\n \t want: %s", i, toHex(dst), toHex(v.ciphertext))
}
open, err := aead.Open(nil, nonce[:], dst, v.ad)
if err != nil {
t.Error(err)
}
if !bytes.Equal(open, v.plaintext) {
t.Errorf("Test %d: plaintext mismatch:\n \t got: %s\n \t want: %s", i, string(open), string(v.plaintext))
}
}
}
var vectors = []struct {
key, nonce, ad, plaintext, ciphertext []byte
}{
{
[]byte{0x80, 0x81, 0x82, 0x83, 0x84, 0x85, 0x86, 0x87, 0x88, 0x89, 0x8a, 0x8b, 0x8c, 0x8d, 0x8e, 0x8f, 0x90, 0x91, 0x92, 0x93, 0x94, 0x95, 0x96, 0x97, 0x98, 0x99, 0x9a, 0x9b, 0x9c, 0x9d, 0x9e, 0x9f},
[]byte{0x07, 0x00, 0x00, 0x00, 0x40, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47, 0x48, 0x49, 0x4a, 0x4b},
[]byte{0x50, 0x51, 0x52, 0x53, 0xc0, 0xc1, 0xc2, 0xc3, 0xc4, 0xc5, 0xc6, 0xc7},
[]byte("Ladies and Gentlemen of the class of '99: If I could offer you only one tip for the future, sunscreen would be it."),
[]byte{0x45, 0x3c, 0x06, 0x93, 0xa7, 0x40, 0x7f, 0x04, 0xff, 0x4c, 0x56, 0xae, 0xdb, 0x17, 0xa3, 0xc0, 0xa1, 0xaf, 0xff, 0x01, 0x17, 0x49, 0x30, 0xfc, 0x22, 0x28, 0x7c, 0x33, 0xdb, 0xcf, 0x0a, 0xc8, 0xb8, 0x9a, 0xd9, 0x29, 0x53, 0x0a, 0x1b, 0xb3, 0xab, 0x5e, 0x69, 0xf2, 0x4c, 0x7f, 0x60, 0x70, 0xc8, 0xf8, 0x40, 0xc9, 0xab, 0xb4, 0xf6, 0x9f, 0xbf, 0xc8, 0xa7, 0xff, 0x51, 0x26, 0xfa, 0xee, 0xbb, 0xb5, 0x58, 0x05, 0xee, 0x9c, 0x1c, 0xf2, 0xce, 0x5a, 0x57, 0x26, 0x32, 0x87, 0xae, 0xc5, 0x78, 0x0f, 0x04, 0xec, 0x32, 0x4c, 0x35, 0x14, 0x12, 0x2c, 0xfc, 0x32, 0x31, 0xfc, 0x1a, 0x8b, 0x71, 0x8a, 0x62, 0x86, 0x37, 0x30, 0xa2, 0x70, 0x2b, 0xb7, 0x63, 0x66, 0x11, 0x6b, 0xed, 0x09, 0xe0, 0xfd, 0x5c, 0x6d, 0x84, 0xb6, 0xb0, 0xc1, 0xab, 0xaf, 0x24, 0x9d, 0x5d, 0xd0, 0xf7, 0xf5, 0xa7, 0xea},
},
}

View File

@@ -0,0 +1,261 @@
// Package xchacha20poly1305 creates an AEAD using hchacha, chacha, and poly1305
// This allows for randomized nonces to be used in conjunction with chacha.
package xchacha20poly1305
import (
"crypto/cipher"
"encoding/binary"
"errors"
"fmt"
"golang.org/x/crypto/chacha20poly1305"
)
// Implements crypto.AEAD
type xchacha20poly1305 struct {
key [KeySize]byte
}
const (
// KeySize is the size of the key used by this AEAD, in bytes.
KeySize = 32
// NonceSize is the size of the nonce used with this AEAD, in bytes.
NonceSize = 24
// TagSize is the size added from poly1305
TagSize = 16
// MaxPlaintextSize is the max size that can be passed into a single call of Seal
MaxPlaintextSize = (1 << 38) - 64
// MaxCiphertextSize is the max size that can be passed into a single call of Open,
// this differs from plaintext size due to the tag
MaxCiphertextSize = (1 << 38) - 48
// sigma are constants used in xchacha.
// Unrolled from a slice so that they can be inlined, as slices can't be constants.
sigma0 = uint32(0x61707865)
sigma1 = uint32(0x3320646e)
sigma2 = uint32(0x79622d32)
sigma3 = uint32(0x6b206574)
)
// New returns a new xchachapoly1305 AEAD
func New(key []byte) (cipher.AEAD, error) {
if len(key) != KeySize {
return nil, errors.New("xchacha20poly1305: bad key length")
}
ret := new(xchacha20poly1305)
copy(ret.key[:], key)
return ret, nil
}
// nolint
func (c *xchacha20poly1305) NonceSize() int {
return NonceSize
}
// nolint
func (c *xchacha20poly1305) Overhead() int {
return TagSize
}
func (c *xchacha20poly1305) Seal(dst, nonce, plaintext, additionalData []byte) []byte {
if len(nonce) != NonceSize {
panic("xchacha20poly1305: bad nonce length passed to Seal")
}
if uint64(len(plaintext)) > MaxPlaintextSize {
panic("xchacha20poly1305: plaintext too large")
}
var subKey [KeySize]byte
var hNonce [16]byte
var subNonce [chacha20poly1305.NonceSize]byte
copy(hNonce[:], nonce[:16])
HChaCha20(&subKey, &hNonce, &c.key)
// This can't error because we always provide a correctly sized key
chacha20poly1305, _ := chacha20poly1305.New(subKey[:])
copy(subNonce[4:], nonce[16:])
return chacha20poly1305.Seal(dst, subNonce[:], plaintext, additionalData)
}
func (c *xchacha20poly1305) Open(dst, nonce, ciphertext, additionalData []byte) ([]byte, error) {
if len(nonce) != NonceSize {
return nil, fmt.Errorf("xchacha20poly1305: bad nonce length passed to Open")
}
if uint64(len(ciphertext)) > MaxCiphertextSize {
return nil, fmt.Errorf("xchacha20poly1305: ciphertext too large")
}
var subKey [KeySize]byte
var hNonce [16]byte
var subNonce [chacha20poly1305.NonceSize]byte
copy(hNonce[:], nonce[:16])
HChaCha20(&subKey, &hNonce, &c.key)
// This can't error because we always provide a correctly sized key
chacha20poly1305, _ := chacha20poly1305.New(subKey[:])
copy(subNonce[4:], nonce[16:])
return chacha20poly1305.Open(dst, subNonce[:], ciphertext, additionalData)
}
// HChaCha exported from
// https://github.com/aead/chacha20/blob/8b13a72661dae6e9e5dea04f344f0dc95ea29547/chacha/chacha_generic.go#L194
// TODO: Add support for the different assembly instructions used there.
// The MIT License (MIT)
// Copyright (c) 2016 Andreas Auernhammer
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
// The above copyright notice and this permission notice shall be included in all
// copies or substantial portions of the Software.
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
// SOFTWARE.
// HChaCha20 generates 32 pseudo-random bytes from a 128 bit nonce and a 256 bit secret key.
// It can be used as a key-derivation-function (KDF).
func HChaCha20(out *[32]byte, nonce *[16]byte, key *[32]byte) { hChaCha20Generic(out, nonce, key) }
func hChaCha20Generic(out *[32]byte, nonce *[16]byte, key *[32]byte) {
v00 := sigma0
v01 := sigma1
v02 := sigma2
v03 := sigma3
v04 := binary.LittleEndian.Uint32(key[0:])
v05 := binary.LittleEndian.Uint32(key[4:])
v06 := binary.LittleEndian.Uint32(key[8:])
v07 := binary.LittleEndian.Uint32(key[12:])
v08 := binary.LittleEndian.Uint32(key[16:])
v09 := binary.LittleEndian.Uint32(key[20:])
v10 := binary.LittleEndian.Uint32(key[24:])
v11 := binary.LittleEndian.Uint32(key[28:])
v12 := binary.LittleEndian.Uint32(nonce[0:])
v13 := binary.LittleEndian.Uint32(nonce[4:])
v14 := binary.LittleEndian.Uint32(nonce[8:])
v15 := binary.LittleEndian.Uint32(nonce[12:])
for i := 0; i < 20; i += 2 {
v00 += v04
v12 ^= v00
v12 = (v12 << 16) | (v12 >> 16)
v08 += v12
v04 ^= v08
v04 = (v04 << 12) | (v04 >> 20)
v00 += v04
v12 ^= v00
v12 = (v12 << 8) | (v12 >> 24)
v08 += v12
v04 ^= v08
v04 = (v04 << 7) | (v04 >> 25)
v01 += v05
v13 ^= v01
v13 = (v13 << 16) | (v13 >> 16)
v09 += v13
v05 ^= v09
v05 = (v05 << 12) | (v05 >> 20)
v01 += v05
v13 ^= v01
v13 = (v13 << 8) | (v13 >> 24)
v09 += v13
v05 ^= v09
v05 = (v05 << 7) | (v05 >> 25)
v02 += v06
v14 ^= v02
v14 = (v14 << 16) | (v14 >> 16)
v10 += v14
v06 ^= v10
v06 = (v06 << 12) | (v06 >> 20)
v02 += v06
v14 ^= v02
v14 = (v14 << 8) | (v14 >> 24)
v10 += v14
v06 ^= v10
v06 = (v06 << 7) | (v06 >> 25)
v03 += v07
v15 ^= v03
v15 = (v15 << 16) | (v15 >> 16)
v11 += v15
v07 ^= v11
v07 = (v07 << 12) | (v07 >> 20)
v03 += v07
v15 ^= v03
v15 = (v15 << 8) | (v15 >> 24)
v11 += v15
v07 ^= v11
v07 = (v07 << 7) | (v07 >> 25)
v00 += v05
v15 ^= v00
v15 = (v15 << 16) | (v15 >> 16)
v10 += v15
v05 ^= v10
v05 = (v05 << 12) | (v05 >> 20)
v00 += v05
v15 ^= v00
v15 = (v15 << 8) | (v15 >> 24)
v10 += v15
v05 ^= v10
v05 = (v05 << 7) | (v05 >> 25)
v01 += v06
v12 ^= v01
v12 = (v12 << 16) | (v12 >> 16)
v11 += v12
v06 ^= v11
v06 = (v06 << 12) | (v06 >> 20)
v01 += v06
v12 ^= v01
v12 = (v12 << 8) | (v12 >> 24)
v11 += v12
v06 ^= v11
v06 = (v06 << 7) | (v06 >> 25)
v02 += v07
v13 ^= v02
v13 = (v13 << 16) | (v13 >> 16)
v08 += v13
v07 ^= v08
v07 = (v07 << 12) | (v07 >> 20)
v02 += v07
v13 ^= v02
v13 = (v13 << 8) | (v13 >> 24)
v08 += v13
v07 ^= v08
v07 = (v07 << 7) | (v07 >> 25)
v03 += v04
v14 ^= v03
v14 = (v14 << 16) | (v14 >> 16)
v09 += v14
v04 ^= v09
v04 = (v04 << 12) | (v04 >> 20)
v03 += v04
v14 ^= v03
v14 = (v14 << 8) | (v14 >> 24)
v09 += v14
v04 ^= v09
v04 = (v04 << 7) | (v04 >> 25)
}
binary.LittleEndian.PutUint32(out[0:], v00)
binary.LittleEndian.PutUint32(out[4:], v01)
binary.LittleEndian.PutUint32(out[8:], v02)
binary.LittleEndian.PutUint32(out[12:], v03)
binary.LittleEndian.PutUint32(out[16:], v12)
binary.LittleEndian.PutUint32(out[20:], v13)
binary.LittleEndian.PutUint32(out[24:], v14)
binary.LittleEndian.PutUint32(out[28:], v15)
}
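
For orientation, a hedged usage sketch of the AEAD defined above (the import path crypto/xchacha20poly1305 is an assumption based on the package name; key and nonce sourcing are illustrative):

package main

import (
	"crypto/rand"
	"fmt"

	"github.com/tendermint/tendermint/crypto/xchacha20poly1305"
)

func main() {
	key := make([]byte, xchacha20poly1305.KeySize)
	nonce := make([]byte, xchacha20poly1305.NonceSize) // 24 bytes, safe to draw at random
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	if _, err := rand.Read(nonce); err != nil {
		panic(err)
	}

	aead, err := xchacha20poly1305.New(key)
	if err != nil {
		panic(err)
	}
	ct := aead.Seal(nil, nonce, []byte("hello"), nil)
	pt, err := aead.Open(nil, nonce, ct, nil)
	fmt.Println(string(pt), err) // hello <nil>
}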

View File

@@ -1,54 +1,12 @@
package hkdfchacha20poly1305
package xchacha20poly1305
import (
"bytes"
cr "crypto/rand"
"encoding/hex"
mr "math/rand"
"testing"
"github.com/stretchr/testify/assert"
)
// Test that a test vector we generated is valid. (Ensures backwards
// compatibility)
func TestVector(t *testing.T) {
key, _ := hex.DecodeString("56f8de45d3c294c7675bcaf457bdd4b71c380b9b2408ce9412b348d0f08b69ee")
aead, err := New(key[:])
if err != nil {
t.Fatal(err)
}
cts := []string{"e20a8bf42c535ac30125cfc52031577f0b",
"657695b37ba30f67b25860d90a6f1d00d8",
"e9aa6f3b7f625d957fd50f05bcdf20d014",
"8a00b3b5a6014e0d2033bebc5935086245",
"aadd74867b923879e6866ea9e03c009039",
"fc59773c2c864ee3b4cc971876b3c7bed4",
"caec14e3a9a52ce1a2682c6737defa4752",
"0b89511ffe490d2049d6950494ee51f919",
"7de854ea71f43ca35167a07566c769083d",
"cd477327f4ea4765c71e311c5fec1edbfb"}
for i := 0; i < 10; i++ {
ct, _ := hex.DecodeString(cts[i])
byteArr := []byte{byte(i)}
nonce := make([]byte, 24)
nonce[0] = byteArr[0]
// Test that we get the expected plaintext on open
plaintext, err := aead.Open(nil, nonce, ct, byteArr)
if err != nil {
t.Errorf("%dth Open failed", i)
continue
}
assert.Equal(t, byteArr, plaintext)
// Test that sealing yields the expected ciphertext
ciphertext := aead.Seal(nil, nonce, plaintext, byteArr)
assert.Equal(t, ct, ciphertext)
}
}
// The following test is taken from
// https://github.com/golang/crypto/blob/master/chacha20poly1305/chacha20poly1305_test.go#L69
// It requires the below copyright notice, where "this source code" refers to the following function.

View File

@@ -1,12 +1,15 @@
package crypto
package xsalsa20symmetric
import (
"errors"
. "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/crypto"
cmn "github.com/tendermint/tendermint/libs/common"
"golang.org/x/crypto/nacl/secretbox"
)
// TODO, make this into a struct that implements crypto.Symmetric.
const nonceLen = 24
const secretLen = 32
@@ -15,9 +18,9 @@ const secretLen = 32
// NOTE: call crypto.MixEntropy() first.
func EncryptSymmetric(plaintext []byte, secret []byte) (ciphertext []byte) {
if len(secret) != secretLen {
PanicSanity(Fmt("Secret must be 32 bytes long, got len %v", len(secret)))
cmn.PanicSanity(cmn.Fmt("Secret must be 32 bytes long, got len %v", len(secret)))
}
nonce := CRandBytes(nonceLen)
nonce := crypto.CRandBytes(nonceLen)
nonceArr := [nonceLen]byte{}
copy(nonceArr[:], nonce)
secretArr := [secretLen]byte{}
@@ -32,7 +35,7 @@ func EncryptSymmetric(plaintext []byte, secret []byte) (ciphertext []byte) {
// The ciphertext is (secretbox.Overhead + 24) bytes longer than the plaintext.
func DecryptSymmetric(ciphertext []byte, secret []byte) (plaintext []byte, err error) {
if len(secret) != secretLen {
PanicSanity(Fmt("Secret must be 32 bytes long, got len %v", len(secret)))
cmn.PanicSanity(cmn.Fmt("Secret must be 32 bytes long, got len %v", len(secret)))
}
if len(ciphertext) <= secretbox.Overhead+nonceLen {
return nil, errors.New("Ciphertext is too short")

View File

@@ -1,4 +1,4 @@
package crypto
package xsalsa20symmetric
import (
"testing"
@@ -6,12 +6,13 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/crypto"
"golang.org/x/crypto/bcrypt"
)
func TestSimple(t *testing.T) {
MixEntropy([]byte("someentropy"))
crypto.MixEntropy([]byte("someentropy"))
plaintext := []byte("sometext")
secret := []byte("somesecretoflengththirtytwo===32")
@@ -24,7 +25,7 @@ func TestSimple(t *testing.T) {
func TestSimpleWithKDF(t *testing.T) {
MixEntropy([]byte("someentropy"))
crypto.MixEntropy([]byte("someentropy"))
plaintext := []byte("sometext")
secretPass := []byte("somesecret")
@@ -32,7 +33,7 @@ func TestSimpleWithKDF(t *testing.T) {
if err != nil {
t.Error(err)
}
secret = Sha256(secret)
secret = crypto.Sha256(secret)
ciphertext := EncryptSymmetric(plaintext, secret)
plaintext2, err := DecryptSymmetric(ciphertext, secret)

View File

@@ -9,7 +9,9 @@ and built using [VuePress](https://vuepress.vuejs.org/) from the tendermint webs
- https://github.com/tendermint/tendermint.com
which has a [configuration file](https://github.com/tendermint/tendermint.com/blob/develop/docs/.vuepress/config.js) for displaying
the Table of Contents that lists all the documentation.
the Table of Contents that lists all the documentation.
Under the hood, Jenkins listens for changes in ./docs then pushes a `docs-staging` branch to the tendermint.com repo with the latest documentation. That branch must be manually PR'd to `develop` then `master` for staging then production. This process should happen in synchrony with a release.
The `README.md` in this directory is the landing page for
website documentation and the following folders are intentionally

View File

@@ -108,8 +108,11 @@ See below for more details on the message types and how they are used.
### InitChain
- **Request**:
- `Validators ([]Validator)`: Initial genesis validators
- `AppStateBytes ([]byte)`: Serialized initial application state
- `Time (google.protobuf.Timestamp)`: Genesis time.
- `ChainID (string)`: ID of the blockchain.
- `ConsensusParams (ConsensusParams)`: Initial consensus-critical parameters.
- `Validators ([]Validator)`: Initial genesis validators.
- `AppStateBytes ([]byte)`: Serialized initial application state. Amino-encoded JSON bytes.
- **Response**:
- `ConsensusParams (ConsensusParams)`: Initial
consensus-critical parameters.
@@ -157,9 +160,8 @@ See below for more details on the message types and how they are used.
- **Request**:
- `Hash ([]byte)`: The block's hash. This can be derived from the
block header.
- `Header (struct{})`: The block header
- `Validators ([]SigningValidator)`: List of validators in the current validator
set and whether or not they signed a vote in the LastCommit
- `Header (struct{})`: The block header.
- `LastCommitInfo (LastCommitInfo)`: Info about the last commit.
- `ByzantineValidators ([]Evidence)`: List of evidence of
validators that acted maliciously
- **Response**:
@@ -168,8 +170,9 @@ See below for more details on the message types and how they are used.
- Signals the beginning of a new block. Called prior to
any DeliverTxs.
- The header is expected to at least contain the Height.
- The `Validators` and `ByzantineValidators` can be used to
determine rewards and punishments for the validators.
- The `LastCommitInfo` and `ByzantineValidators` can be used to determine
rewards and punishments for the validators. NOTE validators here do not
include pubkeys.
### CheckTx
@@ -186,7 +189,6 @@ See below for more details on the message types and how they are used.
- `GasUsed (int64)`: Amount of gas consumed by transaction.
- `Tags ([]cmn.KVPair)`: Key-Value tags for filtering and indexing
transactions (eg. by account).
- `Fee (cmn.KI64Pair)`: Fee paid for the transaction.
- **Usage**: Validate a mempool transaction, prior to broadcasting
or proposing. CheckTx should perform stateful but light-weight
checks of the validity of the transaction (like checking signatures
@@ -223,7 +225,6 @@ See below for more details on the message types and how they are used.
- `GasUsed (int64)`: Amount of gas consumed by transaction.
- `Tags ([]cmn.KVPair)`: Key-Value tags for filtering and indexing
transactions (eg. by account).
- `Fee (cmn.KI64Pair)`: Fee paid for the transaction.
- **Usage**:
- Deliver a transaction to be executed in full by the application.
If the transaction is valid, returns CodeType.OK.
@@ -265,7 +266,8 @@ See below for more details on the message types and how they are used.
- **Fields**:
- `ChainID (string)`: ID of the blockchain
- `Height (int64)`: Height of the block in the chain
- `Time (int64)`: Unix time of the block
- `Time (google.protobuf.Timestamp)`: Time of the block. It is the proposer's
local time when block was created.
- `NumTxs (int32)`: Number of transactions in the block
- `TotalTxs (int64)`: Total number of transactions in the blockchain until
now
@@ -320,6 +322,14 @@ See below for more details on the message types and how they are used.
"duplicate/vote".
- `Validator (Validator)`: The offending validator
- `Height (int64)`: Height when the offense was committed
- `Time (int64)`: Unix time of the block at height `Height`
- `Time (google.protobuf.Timestamp)`: Time of the block at height `Height`.
It is the proposer's local time when block was created.
- `TotalVotingPower (int64)`: Total voting power of the validator set at
height `Height`
### LastCommitInfo
- **Fields**:
- `CommitRound (int32)`: Commit round.
- `Validators ([]SigningValidator)`: List of validators in the current
validator set and whether or not they signed a vote.
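
A hedged Go sketch of the shapes described above (the canonical definitions are the generated ABCI protobuf types; the names here simply mirror the spec text):

// Validator here carries only an address and voting power (no pubkey, per the
// BeginBlock note above).
type Validator struct {
	Address []byte
	Power   int64
}

// SigningValidator pairs a validator with whether it signed a vote in the LastCommit.
type SigningValidator struct {
	Validator       Validator
	SignedLastBlock bool
}

// LastCommitInfo is delivered to the application in RequestBeginBlock.
type LastCommitInfo struct {
	CommitRound int32
	Validators  []SigningValidator
}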

View File

@@ -365,14 +365,14 @@ ResponseBeginBlock requestBeginBlock(RequestBeginBlock req) {
### EndBlock
The EndBlock request can be used to run some code at the end of every
block. Additionally, the response may contain a list of validators,
which can be used to update the validator set. To add a new validator or
update an existing one, simply include them in the list returned in the
EndBlock response. To remove one, include it in the list with a `power`
equal to `0`. Tendermint core will take care of updating the validator
set. Note the change in voting power must be strictly less than 1/3 per
block if you want a light client to be able to prove the transition
The EndBlock request can be used to run some code at the end of every block.
Additionally, the response may contain a list of validators, which can be used
to update the validator set. To add a new validator or update an existing one,
simply include them in the list returned in the EndBlock response. To remove
one, include it in the list with a `power` equal to `0`. Validator's `address`
field can be left empty. Tendermint core will take care of updating the
validator set. Note the change in voting power must be strictly less than 1/3
per block if you want a light client to be able to prove the transition
externally. See the [light client
docs](https://godoc.org/github.com/tendermint/tendermint/lite#hdr-How_We_Track_Validators)
for details on how it tracks validators.
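
As an illustration of the update semantics described above, here is a rough sketch of the list an application might return from EndBlock. The `Validator` struct is a simplified stand-in for the actual ABCI type, and the function and its arguments are hypothetical.

```go
// Validator is a simplified stand-in for the ABCI validator update type.
type Validator struct {
	Address []byte // may be left empty, as noted above
	PubKey  []byte
	Power   int64
}

// endBlockValidatorUpdates sketches the list an application could return in
// its EndBlock response: a non-zero power adds or updates a validator, a
// power of 0 removes it. The pubkey arguments are placeholders.
func endBlockValidatorUpdates(newValPubKey, retiringValPubKey []byte) []Validator {
	return []Validator{
		{PubKey: newValPubKey, Power: 10},     // add a new validator (or update its power)
		{PubKey: retiringValPubKey, Power: 0}, // remove an existing validator
	}
}
```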

View File

@@ -93,9 +93,7 @@ like:
"jsonrpc": "2.0",
"id": "",
"result": {
"check_tx": {
"fee": {}
},
"check_tx": {},
"deliver_tx": {
"tags": [
{
@@ -106,8 +104,7 @@ like:
"key": "YXBwLmtleQ==",
"value": "YWJjZA=="
}
],
"fee": {}
]
},
"hash": "9DF66553F98DE3C26E3C3317A3E4CED54F714E39",
"height": 14
@@ -219,13 +216,10 @@ the number `1`. If instead, we try to send a `5`, we get an error:
"jsonrpc": "2.0",
"id": "",
"result": {
"check_tx": {
"fee": {}
},
"check_tx": {},
"deliver_tx": {
"code": 2,
"log": "Invalid nonce. Expected 1, got 5",
"fee": {}
"log": "Invalid nonce. Expected 1, got 5"
},
"hash": "33B93DFF98749B0D6996A70F64071347060DC19C",
"height": 34
@@ -241,12 +235,8 @@ But if we send a `1`, it works again:
"jsonrpc": "2.0",
"id": "",
"result": {
"check_tx": {
"fee": {}
},
"deliver_tx": {
"fee": {}
},
"check_tx": {},
"deliver_tx": {},
"hash": "F17854A977F6FA7EEA1BD758E296710B86F72F3D",
"height": 60
}

View File

@@ -0,0 +1,113 @@
# ADR 012: PeerTransport
## Context
One of the more apparent problems with the current architecture in the p2p
package is that there is no clear separation of concerns between different
components. Most notably the `Switch` is currently doing physical connection
handling. One artifact of this is the dependency of the Switch on
[`config.P2PConfig`](https://github.com/tendermint/tendermint/blob/05a76fb517f50da27b4bfcdc7b4cf185fc61eff6/config/config.go#L272-L339).
Addresses:
* [#2046](https://github.com/tendermint/tendermint/issues/2046)
* [#2047](https://github.com/tendermint/tendermint/issues/2047)
First iteration in [#2067](https://github.com/tendermint/tendermint/issues/2067)
## Decision
Transport concerns will be handled by a new component (`PeerTransport`) which
will provide Peers at its boundary to the caller. In turn `Switch` will use
this new component to accept new `Peer`s and to dial them based on `NetAddress`.
### PeerTransport
Responsible for emitting and connecting to Peers. The implementation of `Peer`
is left to the transport, which implies that the chosen transport dictates the
characteristics of the implementation handed back to the `Switch`. Each
transport implementation is responsible for filtering establishing peers specific
to its domain; for the default multiplexed implementation the following guards will
apply:
* connections from our own node
* handshake fails
* upgrade to secret connection fails
* prevent duplicate ip
* prevent duplicate id
* nodeinfo incompatibility
``` go
// PeerTransport proxies incoming and outgoing peer connections.
type PeerTransport interface {
// Accept returns a newly connected Peer.
Accept() (Peer, error)
// Dial connects to a Peer.
Dial(NetAddress) (Peer, error)
}
// EXAMPLE OF DEFAULT IMPLEMENTATION
// multiplexTransport accepts tcp connections and upgrades to multiplexed
// peers.
type multiplexTransport struct {
listener net.Listener
acceptc chan accept
closec <-chan struct{}
listenc <-chan struct{}
dialTimeout time.Duration
handshakeTimeout time.Duration
nodeAddr NetAddress
nodeInfo NodeInfo
nodeKey NodeKey
// TODO(xla): Remove when MConnection is refactored into mPeer.
mConfig conn.MConnConfig
}
var _ PeerTransport = (*multiplexTransport)(nil)
// NewMTransport returns network connected multiplexed peers.
func NewMTransport(
nodeAddr NetAddress,
nodeInfo NodeInfo,
nodeKey NodeKey,
) *multiplexTransport
```
### Switch
From now on, the Switch will depend on a fully set up `PeerTransport` to
retrieve/reach out to its peers. As the lower-level concerns are pushed to
the transport, we can omit passing the `config.P2PConfig` to the Switch.
``` go
func NewSwitch(transport PeerTransport, opts ...SwitchOption) *Switch
```
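
As a rough sketch of how the Switch could drive the transport under this proposal; the `transport` field, the `addPeer` method, and the logging call are assumptions for illustration, not the actual implementation.

```go
// acceptPeers is a sketch of the Switch's accept loop on top of PeerTransport.
// The transport field, addPeer and the logging call are placeholders for
// whatever the real implementation ends up using.
func (sw *Switch) acceptPeers() {
	for {
		p, err := sw.transport.Accept()
		if err != nil {
			return // e.g. the transport was closed
		}
		if err := sw.addPeer(p); err != nil {
			sw.Logger.Error("failed to add peer", "err", err)
		}
	}
}
```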
## Status
In Review.
## Consequences
### Positive
* free Switch from transport concerns - simpler implementation
* pluggable transport implementation - simpler test setup
* remove Switch dependency on P2PConfig - easier to test
### Negative
* more setup for tests which depend on Switches
### Neutral
* multiplexed will be the default implementation
[0] These guards could potentially be extended to be pluggable, much like
middlewares, to express different concerns required by differently configured
environments.

View File

@@ -0,0 +1,93 @@
# ADR 013: Need for symmetric cryptography
## Context
We require symmetric ciphers to handle how we encrypt keys in the sdk,
and to potentially encrypt `priv_validator.json` in tendermint.
Currently we use AEADs to support symmetric encryption,
which is great since we want data integrity in addition to privacy and authenticity.
We don't currently have a scenario where we want to encrypt without data integrity,
so it is fine to optimize our code to just use AEADs.
Currently there is no way to switch out AEADs easily; this ADR outlines a way
to swap them out easily.
### How do we encrypt with AEADs
AEADs typically require a nonce in addition to the key.
For the purposes we require symmetric cryptography for,
we need encryption to be stateless.
Because of this we use random nonces.
(Thus the AEAD must support random nonces)
We currently construct a random nonce, and encrypt the data with it.
The returned value is `nonce || encrypted data`.
The limitation of this is that it does not provide a way to identify
which algorithm was used in encryption.
Consequently decryption with multiple algorithms is sub-optimal.
(You have to try them all)
## Decision
We should create the following methods in a new `crypto/encoding/symmetric` package:
```golang
func Encrypt(aead cipher.AEAD, plaintext []byte) (ciphertext []byte, err error)
func Decrypt(key []byte, ciphertext []byte) (plaintext []byte, err error)
func Register(aead cipher.AEAD, algo_name string, NewAead func(key []byte) (cipher.AEAD, error)) error
```
This allows you to specify the algorithm in encryption, but not have to specify
it in decryption.
This is intended for ease of use in downstream applications, in addition to people
looking at the file directly.
One downside is that for the encrypt function you must have already initialized an AEAD,
but I don't really see this as an issue.
If there is no error in encryption, Encrypt will return `algo_name || nonce || aead_ciphertext`.
`algo_name` should be length prefixed, using standard varuint encoding.
This will be binary data, but that's not a problem considering the nonce and ciphertext are also binary.
This solution requires a mapping from aead type to name.
We can achieve this via reflection.
```golang
func getType(myvar interface{}) string {
if t := reflect.TypeOf(myvar); t.Kind() == reflect.Ptr {
return "*" + t.Elem().Name()
} else {
return t.Name()
}
}
```
Then we maintain a map from the name returned from `getType(aead)` to `algo_name`.
In decryption, we read the `algo_name`, and then instantiate a new AEAD with the key.
Then we call the AEAD's decrypt method on the provided nonce/ciphertext.
`Register` allows a downstream user to add their own desired AEAD to the symmetric package.
It will error if the AEAD name is already registered.
This prevents a malicious import from modifying / nullifying an AEAD at runtime.
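
For illustration, a minimal sketch of the proposed `Encrypt` under these assumptions; `aeadName` stands in for the reflection-based name lookup described above and is hypothetical.

```go
import (
	"crypto/cipher"
	"crypto/rand"
	"encoding/binary"
)

// encrypt sketches the proposed Encrypt: the output is
// algo_name || nonce || aead_ciphertext, with algo_name length-prefixed
// using varuint encoding. aeadName is the (hypothetical) reflection-based
// lookup from the AEAD's type to its registered name.
func encrypt(aead cipher.AEAD, plaintext []byte) ([]byte, error) {
	name := []byte(aeadName(aead))

	// varuint length prefix for the algorithm name
	prefix := make([]byte, binary.MaxVarintLen64)
	n := binary.PutUvarint(prefix, uint64(len(name)))

	// random nonce, since encryption must be stateless
	nonce := make([]byte, aead.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}

	out := append(prefix[:n], name...)
	out = append(out, nonce...)
	// Seal appends the ciphertext (and authentication tag) to out.
	return aead.Seal(out, nonce, plaintext, nil), nil
}
```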
## Implementation strategy
The golang implementation of what is proposed is rather straightforward.
The concern is that we will break existing private keys if we just switch to this.
If this is a concern, we can make a simple script for converting from the old
format to the new one, which doesn't require decoding privkeys.
## Status
Proposed.
## Consequences
### Positive
* Allows us to support new AEADs, in a way that makes decryption easier
* Allows downstream users to add their own AEAD
### Negative
* We will have to break all private keys stored on disk.
They can be recovered using seed words, and upgrade scripts are simple.
### Neutral
* Caller has to instantiate the AEAD with the private key.
However it forces them to be aware of what signing algorithm they are using, which is a positive.

View File

@@ -0,0 +1,61 @@
# ADR 014: Secp256k1 Signature Malleability
## Context
Secp256k1 has two layers of malleability.
The signer has a random nonce, and thus can produce many different valid signatures.
This ADR is not concerned with that.
The second layer of malleability basically allows one who is given a signature
to produce exactly one more valid signature for the same message from the same public key.
(They don't even have to know the message!)
The math behind this will be explained in the subsequent section.
Note that in many downstream applications, signatures will appear in a transaction, and therefore in the tx hash.
This means that if someone broadcasts a transaction with secp256k1 signature, the signature can be altered into the other form by anyone in the p2p network.
Thus the tx hash will change, and this altered tx hash may be committed instead.
This breaks the assumption that you can broadcast a valid transaction and just wait for its hash to be included on chain.
One example is if you are broadcasting a tx in cosmos,
and you wait for it to appear on chain before incrementing your sequence number.
You may never increment your sequence number if a different tx hash got committed.
Removing this second layer of signature malleability concerns could ease downstream development.
### ECDSA context
Secp256k1 is ECDSA over a particular curve.
The signature is of the form `(r, s)`, where `s` is a field element.
(The particular field is `Z_n`, where the elliptic curve has order `n`.)
However `(r, -s)` is also a valid signature for the same message.
Note that anyone can negate a group element, and therefore can get this second signature.
## Decision
We can simply define a canonical form for the ECDSA signatures.
Then we require that all ECDSA signatures be in the form which we defined as canonical.
We reject signatures in non-canonical form.
A canonical form is rather easy to define and check.
It would just be the smaller of the two values for `s`, defined lexicographically.
This is a simple check: instead of checking if `s < n`, check `s <= (n - 1)/2`.
An example of another cryptosystem using this
is the parity definition here https://github.com/zkcrypto/pairing/pull/30#issuecomment-372910663.
This is the same solution Ethereum has chosen for solving secp malleability.
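
A minimal sketch of the canonical-form check, using the well-known secp256k1 group order; the function names are illustrative, not part of any existing API.

```go
import "math/big"

// secpN is the secp256k1 group order n (a public constant of the curve).
var secpN, _ = new(big.Int).SetString(
	"FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141", 16)

// halfOrder = (n - 1) / 2
var halfOrder = new(big.Int).Rsh(new(big.Int).Sub(secpN, big.NewInt(1)), 1)

// isCanonicalS reports whether s lies in the canonical (lower) half of the
// range, i.e. s <= (n - 1) / 2.
func isCanonicalS(s *big.Int) bool {
	return s.Cmp(halfOrder) <= 0
}

// canonicalizeS maps a valid s to its canonical representative by replacing
// it with n - s when it falls in the upper half.
func canonicalizeS(s *big.Int) *big.Int {
	if isCanonicalS(s) {
		return new(big.Int).Set(s)
	}
	return new(big.Int).Sub(secpN, s)
}
```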
## Proposed Implementation
Fork https://github.com/btcsuite/btcd, and just update the [parse sig method](https://github.com/btcsuite/btcd/blob/master/btcec/signature.go#195) and serialize functions to enforce our canonical form.
## Status
Proposed.
## Consequences
### Positive
* Lets us maintain the ability to expect a tx hash to appear in the blockchain.
### Negative
* More work in all future implementations (Though this is a very simple check)
* Requires us to maintain another fork
### Neutral

View File

@@ -0,0 +1,80 @@
# ADR 015: Crypto encoding
## Context
We must standardize our method for encoding public keys and signatures on chain.
Currently we amino encode the public keys and signatures.
The reason we are using amino here is primarily due to ease of support in
parsing for other languages.
We don't need its upgradability properties in cryptosystems, as a change in
the crypto that requires adapting the encoding likely warrants being deemed
a new cryptosystem.
(I.e. using new public parameters)
## Decision
### Public keys
For public keys, we will continue to use amino encoding on the canonical
representation of the pubkey.
(Canonical as defined by the cryptosystem itself)
This has two significant drawbacks.
Amino encoding is less space-efficient, due to requiring support for upgradability.
Amino encoding support requires forking protobuf and adding this new interface support
option in the language of choice.
The reason for continuing to use amino, however, is that people can create code
more easily in languages that already have an up-to-date amino library.
It is possible that this will change in the future, if it is deemed that
requiring amino for interacting with tendermint cryptography is unnecessary.
The arguments for space efficiency here are refuted on the basis that there are
far more egregious wastages of space in the SDK.
The space requirement of the public keys doesn't cause many problems beyond
increasing the space attached to each validator / account.
The alternative to using amino here would be for us to create an enum type.
Switching to just an enum type is worthy of investigation post-launch.
For reference, part of amino encoding interfaces is basically a 4 byte enum
type definition.
Enum types would just change that 4 bytes to be a varuint, and it would remove
the protobuf overhead, but it would be hard to integrate into the existing API.
### Signatures
Signatures should be switched to be `[]byte`.
Spatial efficiency in the signatures is quite important,
as it directly affects the gas cost of every transaction,
and the throughput of the chain.
Signatures don't need to encode what type they are for (unlike public keys)
since public keys must already be known.
Therefore we can validate the signature without needing to encode its type.
When placed in state, signatures will still be amino encoded, but it will be the
primitive type `[]byte` getting encoded.
#### Ed25519
Use the canonical representation for signatures.
#### Secp256k1
There isn't a clear canonical representation here.
Signatures have two elements `r,s`.
These bytes are encoded as `r || s`, where `r` and `s` are both exactly
32 bytes long, encoded big-endian.
This is basically Ethereum's encoding, but without the leading recovery bit.
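
To make the proposed encoding concrete, here is a small sketch of serializing `(r, s)` into the fixed 64-byte form; the function name is illustrative.

```go
import (
	"errors"
	"math/big"
)

// serializeSig encodes a secp256k1 signature as r || s, each exactly 32 bytes,
// big-endian, without a recovery bit, as proposed above.
func serializeSig(r, s *big.Int) ([]byte, error) {
	rb, sb := r.Bytes(), s.Bytes()
	if len(rb) > 32 || len(sb) > 32 {
		return nil, errors.New("r or s does not fit in 32 bytes")
	}
	sig := make([]byte, 64)
	// big.Int.Bytes() omits leading zeroes, so left-pad into the fixed slots.
	copy(sig[32-len(rb):32], rb)
	copy(sig[64-len(sb):], sb)
	return sig, nil
}
```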
## Status
Proposed. The signature section seems to be agreed upon for the most part.
Needs decision on Enum types.
## Consequences
### Positive
* More space efficient signatures
### Negative
* We have an amino dependency for cryptography.
### Neutral
* No change to public keys

View File

@@ -52,7 +52,8 @@ type Header struct {
// application
ResultsHash []byte // SimpleMerkle of []abci.Result from prevBlock
AppHash []byte // Arbitrary state digest
ValidatorsHash []byte // SimpleMerkle of the ValidatorSet
ValidatorsHash []byte // SimpleMerkle of the current ValidatorSet
NextValidatorsHash []byte // SimpleMerkle of the next ValidatorSet
ConsensusParamsHash []byte // SimpleMerkle of the ConsensusParams
// consensus
@@ -160,9 +161,12 @@ We refer to certain globally available objects:
`block` is the block under consideration,
`prevBlock` is the `block` at the previous height,
and `state` keeps track of the validator set, the consensus parameters
and other results from the application.
and other results from the application. At the point when `block` is the block under consideration,
the current version of the `state` corresponds to the state
after executing transactions from the `prevBlock`.
Elements of an object are accessed as expected,
ie. `block.Header`. See [here](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/state.md) for the definition of `state`.
ie. `block.Header`.
See [here](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/state.md) for the definition of `state`.
### Header
@@ -278,7 +282,14 @@ block.ValidatorsHash == SimpleMerkleRoot(state.Validators)
Simple Merkle root of the current validator set that is committing the block.
This can be used to validate the `LastCommit` included in the next block.
May be updated by the application.
### NextValidatorsHash
```go
block.NextValidatorsHash == SimpleMerkleRoot(state.NextValidators)
```
Simple Merkle root of the next validator set, i.e. the validator set that will commit the next block.
Modifications to the validator set are defined by the application.
### ConsensusParamsHash
@@ -407,25 +418,20 @@ set (TODO). Execute is defined as:
```go
Execute(s State, app ABCIApp, block Block) State {
TODO: just spell out ApplyBlock here
and remove ABCIResponses struct.
abciResponses := app.ApplyBlock(block)
// Function ApplyBlock executes the block of transactions against the app and returns the new root hash of the app state,
// modifications to the validator set and the changes of the consensus parameters.
AppHash, ValidatorChanges, ConsensusParamChanges := app.ApplyBlock(block)
return State{
LastResults: abciResponses.DeliverTxResults,
AppHash: abciResponses.AppHash,
Validators: UpdateValidators(state.Validators, abciResponses.ValidatorChanges),
AppHash: AppHash,
LastValidators: state.Validators,
ConsensusParams: UpdateConsensusParams(state.ConsensusParams, abci.Responses.ConsensusParamChanges),
Validators: state.NextValidators,
NextValidators: UpdateValidators(state.NextValidators, ValidatorChanges),
ConsensusParams: UpdateConsensusParams(state.ConsensusParams, ConsensusParamChanges),
}
}
type ABCIResponses struct {
DeliverTxResults []Result
ValidatorChanges []Validator
ConsensusParamChanges ConsensusParams
AppHash []byte
}
```

View File

@@ -3,7 +3,7 @@
## State
The state contains information whose cryptographic digest is included in block headers, and thus is
necessary for validating new blocks. For instance, the set of validators and the results of
necessary for validating new blocks. For instance, the validators set and the results of
transactions are never included in blocks, but their Merkle roots are - the state keeps track of them.
Note that the `State` object itself is an implementation detail, since it is never
@@ -18,8 +18,9 @@ type State struct {
LastResults []Result
AppHash []byte
Validators []Validator
LastValidators []Validator
Validators []Validator
NextValidators []Validator
ConsensusParams ConsensusParams
}

View File

@@ -27,27 +27,24 @@ Both handshakes have configurable timeouts (they should complete quickly).
### Authenticated Encryption Handshake
Tendermint implements the Station-to-Station protocol
using ED25519 keys for Diffie-Hellman key-exchange and NACL SecretBox for encryption.
using X25519 keys for Diffie-Hellman key-exchange and chacha20poly1305 for encryption.
It goes as follows:
- generate an ephemeral ED25519 keypair
- generate an ephemeral X25519 keypair
- send the ephemeral public key to the peer
- wait to receive the peer's ephemeral public key
- compute the Diffie-Hellman shared secret using the peer's ephemeral public key and our ephemeral private key
- generate two nonces to use for encryption (sending and receiving) as follows:
- sort the ephemeral public keys in ascending order and concatenate them
- RIPEMD160 the result
- append 4 empty bytes (extending the hash to 24-bytes)
- the result is nonce1
- flip the last bit of nonce1 to get nonce2
- if we had the smaller ephemeral pubkey, use nonce1 for receiving, nonce2 for sending;
else the opposite
- all communications from now on are encrypted using the shared secret and the nonces, where each nonce
increments by 2 every time it is used
- generate two keys to use for encryption (sending and receiving) and a challenge for authentication as follows (see the sketch after this list):
- create a hkdf-sha256 instance with the key being the diffie hellman shared secret, and info parameter as
`TENDERMINT_SECRET_CONNECTION_KEY_AND_CHALLENGE_GEN`
- get 96 bytes of output from hkdf-sha256
- if we had the smaller ephemeral pubkey, use the first 32 bytes for the key for receiving, the second 32 bytes for sending; else the opposite
- use the last 32 bytes of output for the challenge
- use a separate nonce for receiving and sending. Both nonces start at 0, and should support the full 96-bit nonce range
- all communications from now on are encrypted in 1024 byte frames,
using the respective secret and nonce. Each nonce is incremented by one after each use.
- we now have an encrypted channel, but still need to authenticate
- generate a common challenge to sign:
- SHA256 of the sorted (lowest first) and concatenated ephemeral pub keys
- sign the common challenge with our persistent private key
- send the go-wire encoded persistent pubkey and signature to the peer
- sign the common challenge obtained from the hkdf with our persistent private key
- send the amino encoded persistent pubkey and signature to the peer
- wait to receive the persistent public key and signature from the peer
- verify the signature on the challenge using the peer's persistent public key
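
A sketch of the key-derivation step above, assuming `golang.org/x/crypto/hkdf`; variable and function names are illustrative and error handling is simplified.

```go
import (
	"crypto/sha256"
	"io"

	"golang.org/x/crypto/hkdf"
)

// deriveSecrets sketches the HKDF step above: 96 bytes of output are split
// into a receive key, a send key and an authentication challenge. The side
// with the smaller ephemeral pubkey uses the first 32 bytes for receiving.
func deriveSecrets(dhSecret []byte, locEphPubIsSmaller bool) (recvKey, sendKey, challenge [32]byte, err error) {
	h := hkdf.New(sha256.New, dhSecret, nil,
		[]byte("TENDERMINT_SECRET_CONNECTION_KEY_AND_CHALLENGE_GEN"))

	var out [96]byte
	if _, err = io.ReadFull(h, out[:]); err != nil {
		return
	}

	if locEphPubIsSmaller {
		copy(recvKey[:], out[0:32])
		copy(sendKey[:], out[32:64])
	} else {
		copy(sendKey[:], out[0:32])
		copy(recvKey[:], out[32:64])
	}
	copy(challenge[:], out[64:96])
	return
}
```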

View File

@@ -12,7 +12,8 @@ them.
Some peers can be marked as `private`, which means
we will not put them in the address book or gossip them to others.
All peers except private peers are tracked using the address book.
All peers except private peers and peers coming from them are tracked using the
address book.
## Discovery

View File

@@ -5,12 +5,12 @@ import (
"os"
amino "github.com/tendermint/go-amino"
crypto "github.com/tendermint/tendermint/crypto"
cryptoAmino "github.com/tendermint/tendermint/crypto/encoding/amino"
)
func main() {
cdc := amino.NewCodec()
crypto.RegisterAmino(cdc)
cryptoAmino.RegisterAmino(cdc)
cdc.PrintTypes(os.Stdout)
fmt.Println("")
}

View File

@@ -99,7 +99,6 @@ laddr = "tcp://0.0.0.0:26656"
seeds = ""
# Comma separated list of nodes to keep persistent connections to
# Do not add private peers to this list if you don't want them advertised
persistent_peers = ""
# UPNP port forwarding
@@ -121,10 +120,10 @@ max_num_peers = 50
max_packet_msg_payload_size = 1024
# Rate at which packets can be sent, in bytes/second
send_rate = 512000
send_rate = 5120000
# Rate at which packets can be received, in bytes/second
recv_rate = 512000
recv_rate = 5120000
# Set true to enable the peer-exchange reactor
pex = true

View File

@@ -44,11 +44,6 @@ func (evR *EvidenceReactor) SetLogger(l log.Logger) {
evR.evpool.SetLogger(l)
}
// OnStart implements cmn.Service
func (evR *EvidenceReactor) OnStart() error {
return evR.BaseReactor.OnStart()
}
// GetChannels implements Reactor.
// It returns the list of channels for this reactor.
func (evR *EvidenceReactor) GetChannels() []*p2p.ChannelDescriptor {

View File

@@ -2,7 +2,7 @@ package evidence
import (
"github.com/tendermint/go-amino"
"github.com/tendermint/tendermint/crypto"
cryptoAmino "github.com/tendermint/tendermint/crypto/encoding/amino"
"github.com/tendermint/tendermint/types"
)
@@ -10,16 +10,6 @@ var cdc = amino.NewCodec()
func init() {
RegisterEvidenceMessages(cdc)
crypto.RegisterAmino(cdc)
cryptoAmino.RegisterAmino(cdc)
types.RegisterEvidences(cdc)
RegisterMockEvidences(cdc) // For testing
}
//-------------------------------------------
func RegisterMockEvidences(cdc *amino.Codec) {
cdc.RegisterConcrete(types.MockGoodEvidence{},
"tendermint/MockGoodEvidence", nil)
cdc.RegisterConcrete(types.MockBadEvidence{},
"tendermint/MockBadEvidence", nil)
}

View File

@@ -35,18 +35,20 @@ const autoFileOpenDuration = 1000 * time.Millisecond
// Automatically closes and re-opens file for writing.
// This is useful for using a log file with the logrotate tool.
type AutoFile struct {
ID string
Path string
ticker *time.Ticker
mtx sync.Mutex
file *os.File
ID string
Path string
ticker *time.Ticker
tickerStopped chan struct{} // closed when ticker is stopped
mtx sync.Mutex
file *os.File
}
func OpenAutoFile(path string) (af *AutoFile, err error) {
af = &AutoFile{
ID: cmn.RandStr(12) + ":" + path,
Path: path,
ticker: time.NewTicker(autoFileOpenDuration),
ID: cmn.RandStr(12) + ":" + path,
Path: path,
ticker: time.NewTicker(autoFileOpenDuration),
tickerStopped: make(chan struct{}),
}
if err = af.openFile(); err != nil {
return
@@ -58,18 +60,18 @@ func OpenAutoFile(path string) (af *AutoFile, err error) {
func (af *AutoFile) Close() error {
af.ticker.Stop()
close(af.tickerStopped)
err := af.closeFile()
sighupWatchers.removeAutoFile(af)
return err
}
func (af *AutoFile) processTicks() {
for {
_, ok := <-af.ticker.C
if !ok {
return // Done.
}
select {
case <-af.ticker.C:
af.closeFile()
case <-af.tickerStopped:
return
}
}

View File

@@ -1,6 +1,7 @@
package autofile
import (
"io/ioutil"
"os"
"sync/atomic"
"syscall"
@@ -13,10 +14,14 @@ import (
func TestSIGHUP(t *testing.T) {
// First, create an AutoFile writing to a tempfile dir
file, name := cmn.Tempfile("sighup_test")
if err := file.Close(); err != nil {
file, err := ioutil.TempFile("", "sighup_test")
if err != nil {
t.Fatalf("Error creating tempfile: %v", err)
}
if err := file.Close(); err != nil {
t.Fatalf("Error closing tempfile: %v", err)
}
name := file.Name()
// Here is the actual AutoFile
af, err := OpenAutoFile(name)
if err != nil {

View File

@@ -85,7 +85,6 @@ func OpenGroup(headPath string) (g *Group, err error) {
Head: head,
headBuf: bufio.NewWriterSize(head, 4096*10),
Dir: dir,
ticker: time.NewTicker(groupCheckDuration),
headSizeLimit: defaultHeadSizeLimit,
totalSizeLimit: defaultTotalSizeLimit,
minIndex: 0,
@@ -102,6 +101,7 @@ func OpenGroup(headPath string) (g *Group, err error) {
// OnStart implements Service by starting the goroutine that checks file and
// group limits.
func (g *Group) OnStart() error {
g.ticker = time.NewTicker(groupCheckDuration)
go g.processTicks()
return nil
}
@@ -199,21 +199,15 @@ func (g *Group) Flush() error {
}
func (g *Group) processTicks() {
for {
_, ok := <-g.ticker.C
if !ok {
return // Done.
}
select {
case <-g.ticker.C:
g.checkHeadSizeLimit()
g.checkTotalSizeLimit()
case <-g.Quit():
return
}
}
// NOTE: for testing
func (g *Group) stopTicker() {
g.ticker.Stop()
}
// NOTE: this function is called manually in tests.
func (g *Group) checkHeadSizeLimit() {
limit := g.HeadSizeLimit()

View File

@@ -16,23 +16,25 @@ import (
cmn "github.com/tendermint/tendermint/libs/common"
)
// NOTE: Returned group has ticker stopped
func createTestGroup(t *testing.T, headSizeLimit int64) *Group {
func createTestGroupWithHeadSizeLimit(t *testing.T, headSizeLimit int64) *Group {
testID := cmn.RandStr(12)
testDir := "_test_" + testID
err := cmn.EnsureDir(testDir, 0700)
require.NoError(t, err, "Error creating dir")
headPath := testDir + "/myfile"
g, err := OpenGroup(headPath)
require.NoError(t, err, "Error opening Group")
g.SetHeadSizeLimit(headSizeLimit)
g.stopTicker()
require.NotEqual(t, nil, g, "Failed to create Group")
g.SetHeadSizeLimit(headSizeLimit)
return g
}
func destroyTestGroup(t *testing.T, g *Group) {
g.Close()
err := os.RemoveAll(g.Dir)
require.NoError(t, err, "Error removing test Group directory")
}
@@ -45,7 +47,7 @@ func assertGroupInfo(t *testing.T, gInfo GroupInfo, minIndex, maxIndex int, tota
}
func TestCheckHeadSizeLimit(t *testing.T) {
g := createTestGroup(t, 1000*1000)
g := createTestGroupWithHeadSizeLimit(t, 1000*1000)
// At first, there are no files.
assertGroupInfo(t, g.ReadGroupInfo(), 0, 0, 0, 0)
@@ -107,7 +109,7 @@ func TestCheckHeadSizeLimit(t *testing.T) {
}
func TestSearch(t *testing.T) {
g := createTestGroup(t, 10*1000)
g := createTestGroupWithHeadSizeLimit(t, 10*1000)
// Create some files in the group that have several INFO lines in them.
// Try to put the INFO lines in various spots.
@@ -147,14 +149,13 @@ func TestSearch(t *testing.T) {
// Now search for each number
for i := 0; i < 100; i++ {
t.Log("Testing for i", i)
gr, match, err := g.Search("INFO", makeSearchFunc(i))
require.NoError(t, err, "Failed to search for line")
assert.True(t, match, "Expected Search to return exact match")
require.NoError(t, err, "Failed to search for line, tc #%d", i)
assert.True(t, match, "Expected Search to return exact match, tc #%d", i)
line, err := gr.ReadLine()
require.NoError(t, err, "Failed to read line after search")
require.NoError(t, err, "Failed to read line after search, tc #%d", i)
if !strings.HasPrefix(line, fmt.Sprintf("INFO %v ", i)) {
t.Fatal("Failed to get correct line")
t.Fatalf("Failed to get correct line, tc #%d", i)
}
// Make sure we can continue to read from there.
cur := i + 1
@@ -165,16 +166,16 @@ func TestSearch(t *testing.T) {
// OK!
break
} else {
t.Fatal("Got EOF after the wrong INFO #")
t.Fatalf("Got EOF after the wrong INFO #, tc #%d", i)
}
} else if err != nil {
t.Fatal("Error reading line", err)
t.Fatalf("Error reading line, tc #%d, err:\n%s", i, err)
}
if !strings.HasPrefix(line, "INFO ") {
continue
}
if !strings.HasPrefix(line, fmt.Sprintf("INFO %v ", cur)) {
t.Fatalf("Unexpected INFO #. Expected %v got:\n%v", cur, line)
t.Fatalf("Unexpected INFO #. Expected %v got:\n%v, tc #%d", cur, line, i)
}
cur++
}
@@ -209,7 +210,7 @@ func TestSearch(t *testing.T) {
}
func TestRotateFile(t *testing.T) {
g := createTestGroup(t, 0)
g := createTestGroupWithHeadSizeLimit(t, 0)
g.WriteLine("Line 1")
g.WriteLine("Line 2")
g.WriteLine("Line 3")
@@ -239,7 +240,7 @@ func TestRotateFile(t *testing.T) {
}
func TestFindLast1(t *testing.T) {
g := createTestGroup(t, 0)
g := createTestGroupWithHeadSizeLimit(t, 0)
g.WriteLine("Line 1")
g.WriteLine("Line 2")
@@ -263,7 +264,7 @@ func TestFindLast1(t *testing.T) {
}
func TestFindLast2(t *testing.T) {
g := createTestGroup(t, 0)
g := createTestGroupWithHeadSizeLimit(t, 0)
g.WriteLine("Line 1")
g.WriteLine("Line 2")
@@ -287,7 +288,7 @@ func TestFindLast2(t *testing.T) {
}
func TestFindLast3(t *testing.T) {
g := createTestGroup(t, 0)
g := createTestGroupWithHeadSizeLimit(t, 0)
g.WriteLine("Line 1")
g.WriteLine("# a")
@@ -311,7 +312,7 @@ func TestFindLast3(t *testing.T) {
}
func TestFindLast4(t *testing.T) {
g := createTestGroup(t, 0)
g := createTestGroupWithHeadSizeLimit(t, 0)
g.WriteLine("Line 1")
g.WriteLine("Line 2")
@@ -333,7 +334,7 @@ func TestFindLast4(t *testing.T) {
}
func TestWrite(t *testing.T) {
g := createTestGroup(t, 0)
g := createTestGroupWithHeadSizeLimit(t, 0)
written := []byte("Medusa")
g.Write(written)
@@ -354,7 +355,7 @@ func TestWrite(t *testing.T) {
// test that Read reads the required amount of bytes from all the files in the
// group and returns no error if n == size of the given slice.
func TestGroupReaderRead(t *testing.T) {
g := createTestGroup(t, 0)
g := createTestGroupWithHeadSizeLimit(t, 0)
professor := []byte("Professor Monster")
g.Write(professor)
@@ -383,7 +384,7 @@ func TestGroupReaderRead(t *testing.T) {
// test that Read returns an error if number of bytes read < size of
// the given slice. Subsequent call should return 0, io.EOF.
func TestGroupReaderRead2(t *testing.T) {
g := createTestGroup(t, 0)
g := createTestGroupWithHeadSizeLimit(t, 0)
professor := []byte("Professor Monster")
g.Write(professor)
@@ -414,7 +415,7 @@ func TestGroupReaderRead2(t *testing.T) {
}
func TestMinIndex(t *testing.T) {
g := createTestGroup(t, 0)
g := createTestGroupWithHeadSizeLimit(t, 0)
assert.Zero(t, g.MinIndex(), "MinIndex should be zero at the beginning")
@@ -423,7 +424,7 @@ func TestMinIndex(t *testing.T) {
}
func TestMaxIndex(t *testing.T) {
g := createTestGroup(t, 0)
g := createTestGroupWithHeadSizeLimit(t, 0)
assert.Zero(t, g.MaxIndex(), "MaxIndex should be zero at the beginning")

View File

@@ -18,13 +18,19 @@ var sighupCounter int32 // For testing
func initSighupWatcher() {
sighupWatchers = newSighupWatcher()
c := make(chan os.Signal, 1)
signal.Notify(c, syscall.SIGHUP)
hup := make(chan os.Signal, 1)
signal.Notify(hup, syscall.SIGHUP)
quit := make(chan os.Signal, 1)
signal.Notify(quit, os.Interrupt, syscall.SIGTERM)
go func() {
for range c {
select {
case <-hup:
sighupWatchers.closeAll()
atomic.AddInt32(&sighupCounter, 1)
case <-quit:
return
}
}()
}

View File

@@ -2,11 +2,12 @@ package clist
import (
"fmt"
"math/rand"
"runtime"
"sync/atomic"
"testing"
"time"
cmn "github.com/tendermint/tendermint/libs/common"
)
func TestSmall(t *testing.T) {
@@ -131,7 +132,7 @@ func _TestGCRandom(t *testing.T) {
els = append(els, el)
}
for _, i := range rand.Perm(numElements) {
for _, i := range cmn.RandPerm(numElements) {
el := els[i]
l.Remove(el)
_ = el.Next()
@@ -189,7 +190,7 @@ func TestScanRightDeleteRandom(t *testing.T) {
// Remove an element, push back an element.
for i := 0; i < numTimes; i++ {
// Pick an element to remove
rmElIdx := rand.Intn(len(els))
rmElIdx := cmn.RandIntn(len(els))
rmEl := els[rmElIdx]
// Remove it
@@ -243,7 +244,7 @@ func TestWaitChan(t *testing.T) {
for i := 1; i < 100; i++ {
l.PushBack(i)
pushed++
time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond)
time.Sleep(time.Duration(cmn.RandIntn(100)) * time.Millisecond)
}
close(done)
}()

View File

@@ -8,13 +8,15 @@ import (
"sync"
)
// BitArray is a thread-safe implementation of a bit array.
type BitArray struct {
mtx sync.Mutex
Bits int `json:"bits"` // NOTE: persisted via reflect, must be exported
Elems []uint64 `json:"elems"` // NOTE: persisted via reflect, must be exported
}
// There is no BitArray whose Size is 0. Use nil instead.
// NewBitArray returns a new bit array.
// It returns nil if the number of bits is zero.
func NewBitArray(bits int) *BitArray {
if bits <= 0 {
return nil
@@ -25,6 +27,7 @@ func NewBitArray(bits int) *BitArray {
}
}
// Size returns the number of bits in the bitarray
func (bA *BitArray) Size() int {
if bA == nil {
return 0
@@ -32,7 +35,8 @@ func (bA *BitArray) Size() int {
return bA.Bits
}
// NOTE: behavior is undefined if i >= bA.Bits
// GetIndex returns the bit at index i within the bit array.
// The behavior is undefined if i >= bA.Bits
func (bA *BitArray) GetIndex(i int) bool {
if bA == nil {
return false
@@ -49,7 +53,8 @@ func (bA *BitArray) getIndex(i int) bool {
return bA.Elems[i/64]&(uint64(1)<<uint(i%64)) > 0
}
// NOTE: behavior is undefined if i >= bA.Bits
// SetIndex sets the bit at index i within the bit array.
// The behavior is undefined if i >= bA.Bits
func (bA *BitArray) SetIndex(i int, v bool) bool {
if bA == nil {
return false
@@ -71,6 +76,7 @@ func (bA *BitArray) setIndex(i int, v bool) bool {
return true
}
// Copy returns a copy of the provided bit array.
func (bA *BitArray) Copy() *BitArray {
if bA == nil {
return nil
@@ -98,7 +104,9 @@ func (bA *BitArray) copyBits(bits int) *BitArray {
}
}
// Returns a BitArray of larger bits size.
// Or returns a bit array resulting from a bitwise OR of the two bit arrays.
// If the two bit-arrays have different lengths, Or right-pads the smaller of the two bit-arrays with zeroes.
// Thus the size of the return value is the maximum of the two provided bit arrays.
func (bA *BitArray) Or(o *BitArray) *BitArray {
if bA == nil && o == nil {
return nil
@@ -110,7 +118,11 @@ func (bA *BitArray) Or(o *BitArray) *BitArray {
return bA.Copy()
}
bA.mtx.Lock()
defer bA.mtx.Unlock()
o.mtx.Lock()
defer func() {
bA.mtx.Unlock()
o.mtx.Unlock()
}()
c := bA.copyBits(MaxInt(bA.Bits, o.Bits))
for i := 0; i < len(c.Elems); i++ {
c.Elems[i] |= o.Elems[i]
@@ -118,13 +130,19 @@ func (bA *BitArray) Or(o *BitArray) *BitArray {
return c
}
// Returns a BitArray of smaller bit size.
// And returns a bit array resulting from a bitwise AND of the two bit arrays.
// If the two bit-arrays have different lengths, this truncates the larger of the two bit-arrays from the right.
// Thus the size of the return value is the minimum of the two provided bit arrays.
func (bA *BitArray) And(o *BitArray) *BitArray {
if bA == nil || o == nil {
return nil
}
bA.mtx.Lock()
defer bA.mtx.Unlock()
o.mtx.Lock()
defer func() {
bA.mtx.Unlock()
o.mtx.Unlock()
}()
return bA.and(o)
}
@@ -136,12 +154,17 @@ func (bA *BitArray) and(o *BitArray) *BitArray {
return c
}
// Not returns a bit array resulting from a bitwise Not of the provided bit array.
func (bA *BitArray) Not() *BitArray {
if bA == nil {
return nil // Degenerate
}
bA.mtx.Lock()
defer bA.mtx.Unlock()
return bA.not()
}
func (bA *BitArray) not() *BitArray {
c := bA.copy()
for i := 0; i < len(c.Elems); i++ {
c.Elems[i] = ^c.Elems[i]
@@ -149,13 +172,20 @@ func (bA *BitArray) Not() *BitArray {
return c
}
// Sub subtracts the two bit-arrays bitwise, without carrying the bits.
// This is essentially bA.And(o.Not()).
// If bA is longer than o, o is right padded with zeroes.
func (bA *BitArray) Sub(o *BitArray) *BitArray {
if bA == nil || o == nil {
// TODO: Decide if we should do 1's complement here?
return nil
}
bA.mtx.Lock()
defer bA.mtx.Unlock()
o.mtx.Lock()
defer func() {
bA.mtx.Unlock()
o.mtx.Unlock()
}()
if bA.Bits > o.Bits {
c := bA.copy()
for i := 0; i < len(o.Elems)-1; i++ {
@@ -164,15 +194,15 @@ func (bA *BitArray) Sub(o *BitArray) *BitArray {
i := len(o.Elems) - 1
if i >= 0 {
for idx := i * 64; idx < o.Bits; idx++ {
// NOTE: each individual GetIndex() call to o is safe.
c.setIndex(idx, c.getIndex(idx) && !o.GetIndex(idx))
c.setIndex(idx, c.getIndex(idx) && !o.getIndex(idx))
}
}
return c
}
return bA.and(o.Not()) // Note degenerate case where o == nil
return bA.and(o.not()) // Note degenerate case where o == nil
}
// IsEmpty returns true iff all bits in the bit array are 0
func (bA *BitArray) IsEmpty() bool {
if bA == nil {
return true // should this be opposite?
@@ -187,6 +217,7 @@ func (bA *BitArray) IsEmpty() bool {
return true
}
// IsFull returns true iff all bits in the bit array are 1.
func (bA *BitArray) IsFull() bool {
if bA == nil {
return true
@@ -207,6 +238,8 @@ func (bA *BitArray) IsFull() bool {
return (lastElem+1)&((uint64(1)<<uint(lastElemBits))-1) == 0
}
// PickRandom returns a random index in the bit array, and its value.
// It uses the global randomness in `random.go` to get this index.
func (bA *BitArray) PickRandom() (int, bool) {
if bA == nil {
return 0, false
@@ -260,6 +293,8 @@ func (bA *BitArray) String() string {
return bA.StringIndented("")
}
// StringIndented returns the same thing as String(), but applies the indent
// at every 10th bit, and twice at every 50th bit.
func (bA *BitArray) StringIndented(indent string) string {
if bA == nil {
return "nil-BitArray"
@@ -295,6 +330,7 @@ func (bA *BitArray) stringIndented(indent string) string {
return fmt.Sprintf("BA{%v:%v}", bA.Bits, strings.Join(lines, indent))
}
// Bytes returns the byte representation of the bits within the bitarray.
func (bA *BitArray) Bytes() []byte {
bA.mtx.Lock()
defer bA.mtx.Unlock()
@@ -309,15 +345,18 @@ func (bA *BitArray) Bytes() []byte {
return bytes
}
// NOTE: other bitarray o is not locked when reading,
// so if necessary, caller must copy or lock o prior to calling Update.
// If bA is nil, does nothing.
// Update sets the bA's bits to be that of the other bit array.
// The copying begins from the beginning of both bit arrays.
func (bA *BitArray) Update(o *BitArray) {
if bA == nil || o == nil {
return
}
bA.mtx.Lock()
defer bA.mtx.Unlock()
o.mtx.Lock()
defer func() {
bA.mtx.Unlock()
o.mtx.Unlock()
}()
copy(bA.Elems, o.Elems)
}

View File

@@ -8,7 +8,6 @@ import (
"os"
"os/exec"
"os/signal"
"path/filepath"
"strings"
"syscall"
)
@@ -124,60 +123,6 @@ func MustWriteFile(filePath string, contents []byte, mode os.FileMode) {
}
}
// WriteFileAtomic creates a temporary file with data and the perm given and
// swaps it atomically with filename if successful.
func WriteFileAtomic(filename string, data []byte, perm os.FileMode) error {
var (
dir = filepath.Dir(filename)
tempFile = filepath.Join(dir, "write-file-atomic-"+RandStr(32))
// Override in case it does exist, create in case it doesn't and force kernel
// flush, which still leaves the potential of lingering disk cache.
flag = os.O_WRONLY | os.O_CREATE | os.O_SYNC | os.O_TRUNC
)
f, err := os.OpenFile(tempFile, flag, perm)
if err != nil {
return err
}
// Clean up in any case. Defer stacking order is last-in-first-out.
defer os.Remove(f.Name())
defer f.Close()
if n, err := f.Write(data); err != nil {
return err
} else if n < len(data) {
return io.ErrShortWrite
}
// Close the file before renaming it, otherwise it will cause "The process
// cannot access the file because it is being used by another process." on windows.
f.Close()
return os.Rename(f.Name(), filename)
}
//--------------------------------------------------------------------------------
func Tempfile(prefix string) (*os.File, string) {
file, err := ioutil.TempFile("", prefix)
if err != nil {
PanicCrisis(err)
}
return file, file.Name()
}
func Tempdir(prefix string) (*os.File, string) {
tempDir := os.TempDir() + "/" + prefix + RandStr(12)
err := EnsureDir(tempDir, 0700)
if err != nil {
panic(Fmt("Error creating temp dir: %v", err))
}
dir, err := os.Open(tempDir)
if err != nil {
panic(Fmt("Error opening temp dir: %v", err))
}
return dir, tempDir
}
//--------------------------------------------------------------------------------
func Prompt(prompt string, defaultValue string) (string, error) {

View File

@@ -1,55 +1,10 @@
package common
import (
"bytes"
"io/ioutil"
"math/rand"
"os"
"testing"
"time"
)
func TestWriteFileAtomic(t *testing.T) {
var (
seed = rand.New(rand.NewSource(time.Now().UnixNano()))
data = []byte(RandStr(seed.Intn(2048)))
old = RandBytes(seed.Intn(2048))
perm os.FileMode = 0600
)
f, err := ioutil.TempFile("/tmp", "write-atomic-test-")
if err != nil {
t.Fatal(err)
}
defer os.Remove(f.Name())
if err = ioutil.WriteFile(f.Name(), old, 0664); err != nil {
t.Fatal(err)
}
if err = WriteFileAtomic(f.Name(), data, perm); err != nil {
t.Fatal(err)
}
rData, err := ioutil.ReadFile(f.Name())
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(data, rData) {
t.Fatalf("data mismatch: %v != %v", data, rData)
}
stat, err := os.Stat(f.Name())
if err != nil {
t.Fatal(err)
}
if have, want := stat.Mode().Perm(), perm; have != want {
t.Errorf("have %v, want %v", have, want)
}
}
func TestGoPath(t *testing.T) {
// restore original gopath upon exit
path := os.Getenv("GOPATH")

View File

@@ -11,9 +11,13 @@ const (
strChars = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz" // 62 characters
)
// pseudo random number generator.
// seeded with OS randomness (crand)
// Rand is a prng that is seeded with OS randomness.
// The OS randomness is obtained from crypto/rand, however none of the provided
// methods are suitable for cryptographic usage.
// They all utilize math/rand's prng internally.
//
// All of the methods here are suitable for concurrent use.
// This is achieved by using a mutex lock on all of the provided methods.
type Rand struct {
sync.Mutex
rand *mrand.Rand
@@ -105,16 +109,8 @@ func RandInt63n(n int64) int64 {
return grand.Int63n(n)
}
func RandUint16Exp() uint16 {
return grand.Uint16Exp()
}
func RandUint32Exp() uint32 {
return grand.Uint32Exp()
}
func RandUint64Exp() uint64 {
return grand.Uint64Exp()
func RandBool() bool {
return grand.Bool()
}
func RandFloat32() float32 {
@@ -150,8 +146,7 @@ func (r *Rand) Seed(seed int64) {
r.Unlock()
}
// Constructs an alphanumeric string of given length.
// It is not safe for cryptographic usage.
// Str constructs a random alphanumeric string of given length.
func (r *Rand) Str(length int) string {
chars := []byte{}
MAIN_LOOP:
@@ -175,12 +170,10 @@ MAIN_LOOP:
return string(chars)
}
// It is not safe for cryptographic usage.
func (r *Rand) Uint16() uint16 {
return uint16(r.Uint32() & (1<<16 - 1))
}
// It is not safe for cryptographic usage.
func (r *Rand) Uint32() uint32 {
r.Lock()
u32 := r.rand.Uint32()
@@ -188,12 +181,10 @@ func (r *Rand) Uint32() uint32 {
return u32
}
// It is not safe for cryptographic usage.
func (r *Rand) Uint64() uint64 {
return uint64(r.Uint32())<<32 + uint64(r.Uint32())
}
// It is not safe for cryptographic usage.
func (r *Rand) Uint() uint {
r.Lock()
i := r.rand.Int()
@@ -201,22 +192,18 @@ func (r *Rand) Uint() uint {
return uint(i)
}
// It is not safe for cryptographic usage.
func (r *Rand) Int16() int16 {
return int16(r.Uint32() & (1<<16 - 1))
}
// It is not safe for cryptographic usage.
func (r *Rand) Int32() int32 {
return int32(r.Uint32())
}
// It is not safe for cryptographic usage.
func (r *Rand) Int64() int64 {
return int64(r.Uint64())
}
// It is not safe for cryptographic usage.
func (r *Rand) Int() int {
r.Lock()
i := r.rand.Int()
@@ -224,7 +211,6 @@ func (r *Rand) Int() int {
return i
}
// It is not safe for cryptographic usage.
func (r *Rand) Int31() int32 {
r.Lock()
i31 := r.rand.Int31()
@@ -232,7 +218,6 @@ func (r *Rand) Int31() int32 {
return i31
}
// It is not safe for cryptographic usage.
func (r *Rand) Int31n(n int32) int32 {
r.Lock()
i31n := r.rand.Int31n(n)
@@ -240,7 +225,6 @@ func (r *Rand) Int31n(n int32) int32 {
return i31n
}
// It is not safe for cryptographic usage.
func (r *Rand) Int63() int64 {
r.Lock()
i63 := r.rand.Int63()
@@ -248,7 +232,6 @@ func (r *Rand) Int63() int64 {
return i63
}
// It is not safe for cryptographic usage.
func (r *Rand) Int63n(n int64) int64 {
r.Lock()
i63n := r.rand.Int63n(n)
@@ -256,43 +239,6 @@ func (r *Rand) Int63n(n int64) int64 {
return i63n
}
// Distributed pseudo-exponentially to test for various cases
// It is not safe for cryptographic usage.
func (r *Rand) Uint16Exp() uint16 {
bits := r.Uint32() % 16
if bits == 0 {
return 0
}
n := uint16(1 << (bits - 1))
n += uint16(r.Int31()) & ((1 << (bits - 1)) - 1)
return n
}
// Distributed pseudo-exponentially to test for various cases
// It is not safe for cryptographic usage.
func (r *Rand) Uint32Exp() uint32 {
bits := r.Uint32() % 32
if bits == 0 {
return 0
}
n := uint32(1 << (bits - 1))
n += uint32(r.Int31()) & ((1 << (bits - 1)) - 1)
return n
}
// Distributed pseudo-exponentially to test for various cases
// It is not safe for cryptographic usage.
func (r *Rand) Uint64Exp() uint64 {
bits := r.Uint32() % 64
if bits == 0 {
return 0
}
n := uint64(1 << (bits - 1))
n += uint64(r.Int63()) & ((1 << (bits - 1)) - 1)
return n
}
// It is not safe for cryptographic usage.
func (r *Rand) Float32() float32 {
r.Lock()
f32 := r.rand.Float32()
@@ -300,7 +246,6 @@ func (r *Rand) Float32() float32 {
return f32
}
// It is not safe for cryptographic usage.
func (r *Rand) Float64() float64 {
r.Lock()
f64 := r.rand.Float64()
@@ -308,13 +253,12 @@ func (r *Rand) Float64() float64 {
return f64
}
// It is not safe for cryptographic usage.
func (r *Rand) Time() time.Time {
return time.Unix(int64(r.Uint64Exp()), 0)
return time.Unix(int64(r.Uint64()), 0)
}
// RandBytes returns n random bytes from the OS's source of entropy ie. via crypto/rand.
// It is not safe for cryptographic usage.
// Bytes returns n random bytes generated from the internal
// prng.
func (r *Rand) Bytes(n int) []byte {
// cRandBytes isn't guaranteed to be fast so instead
// use random bytes generated from the internal PRNG
@@ -325,9 +269,8 @@ func (r *Rand) Bytes(n int) []byte {
return bs
}
// RandIntn returns, as an int, a non-negative pseudo-random number in [0, n).
// Intn returns, as an int, a uniform pseudo-random number in the range [0, n).
// It panics if n <= 0.
// It is not safe for cryptographic usage.
func (r *Rand) Intn(n int) int {
r.Lock()
i := r.rand.Intn(n)
@@ -335,8 +278,14 @@ func (r *Rand) Intn(n int) int {
return i
}
// RandPerm returns a pseudo-random permutation of n integers in [0, n).
// It is not safe for cryptographic usage.
// Bool returns a uniformly random boolean
func (r *Rand) Bool() bool {
// See https://github.com/golang/go/issues/23804#issuecomment-365370418
// for reasoning behind computing like this
return r.Int63()%2 == 0
}
// Perm returns a pseudo-random permutation of n integers in [0, n).
func (r *Rand) Perm(n int) []int {
r.Lock()
perm := r.rand.Perm(n)

View File

@@ -73,9 +73,6 @@ func testThemAll() string {
fmt.Fprintf(out, "randInt64: %d\n", RandInt64())
fmt.Fprintf(out, "randUint32: %d\n", RandUint32())
fmt.Fprintf(out, "randUint64: %d\n", RandUint64())
fmt.Fprintf(out, "randUint16Exp: %d\n", RandUint16Exp())
fmt.Fprintf(out, "randUint32Exp: %d\n", RandUint32Exp())
fmt.Fprintf(out, "randUint64Exp: %d\n", RandUint64Exp())
return out.String()
}

Some files were not shown because too many files have changed in this diff.