merge master

This commit is contained in:
Marko Baricevic 2019-07-11 14:08:20 +02:00
commit a89a3c65c2
45 changed files with 1133 additions and 570 deletions

View File

@ -331,6 +331,34 @@ jobs:
docker push "tendermint/tendermint" docker push "tendermint/tendermint"
docker logout docker logout
reproducible_builds:
<<: *defaults
steps:
- attach_workspace:
at: /tmp/workspace
- checkout
- setup_remote_docker:
docker_layer_caching: true
- run:
name: Build tendermint
no_output_timeout: 20m
command: |
sudo apt-get install -y ruby
bash -x ./scripts/gitian-build.sh all
for os in darwin linux windows; do
cp gitian-build-${os}/result/tendermint-${os}-res.yml .
cp gitian-build-${os}/build/out/tendermint-*.tar.gz .
rm -rf gitian-build-${os}/
done
- store_artifacts:
path: /go/src/github.com/tendermint/tendermint/tendermint-darwin-res.yml
- store_artifacts:
path: /go/src/github.com/tendermint/tendermint/tendermint-linux-res.yml
- store_artifacts:
path: /go/src/github.com/tendermint/tendermint/tendermint-windows-res.yml
- store_artifacts:
path: /go/src/github.com/tendermint/tendermint/tendermint-*.tar.gz
workflows: workflows:
version: 2 version: 2
test-suite: test-suite:
@ -340,7 +368,6 @@ workflows:
branches: branches:
only: only:
- master - master
- develop
- setup_dependencies - setup_dependencies
- test_abci_apps: - test_abci_apps:
requires: requires:
@ -364,6 +391,12 @@ workflows:
- upload_coverage: - upload_coverage:
requires: requires:
- test_cover - test_cover
- reproducible_builds:
filters:
branches:
only:
- master
- /v[0-9]+\.[0-9]+/
release: release:
jobs: jobs:
- prepare_build - prepare_build

View File

@ -1,6 +1,6 @@
## v0.32.1 ## v0.32.1
** \*\*
Special thanks to external contributors on this release: Special thanks to external contributors on this release:
@ -9,33 +9,39 @@ program](https://hackerone.com/tendermint).
### BREAKING CHANGES: ### BREAKING CHANGES:
* CLI/RPC/Config - CLI/RPC/Config
* Apps - Apps
- Go API
* Go API
- [abci] \#2127 ABCI / mempool: Add a "Recheck Tx" indicator. Breaks the ABCI - [abci] \#2127 ABCI / mempool: Add a "Recheck Tx" indicator. Breaks the ABCI
client interface (`abcicli.Client`) to allow for supplying the ABCI client interface (`abcicli.Client`) to allow for supplying the ABCI
`types.RequestCheckTx` and `types.RequestDeliverTx` structs, and lets the `types.RequestCheckTx` and `types.RequestDeliverTx` structs, and lets the
mempool indicate to the ABCI app whether a CheckTx request is a recheck or mempool indicate to the ABCI app whether a CheckTx request is a recheck or
not. not.
- [libs] Remove unused `db/debugDB` and `common/colors.go` & `errors/errors.go` files (@marbar3778) - [libs] Remove unused `db/debugDB` and `common/colors.go` & `errors/errors.go` files (@marbar3778)
- [libs] \#2432 Remove unused `common/heap.go` file (@marbar3778)
- [libs] Remove unused `date.go`, `io.go`. Remove `GoPath()`, `Prompt()` and `IsDirEmpty()` functions from `os.go` (@marbar3778)
* Blockchain Protocol - Blockchain Protocol
* P2P Protocol - P2P Protocol
### FEATURES: ### FEATURES:
- [node] Refactor `NewNode` to use functional options to make it more flexible - [node] Refactor `NewNode` to use functional options to make it more flexible
and extensible in the future. and extensible in the future.
- [node] [\#3730](https://github.com/tendermint/tendermint/pull/3730) Add `CustomReactors` option to `NewNode` allowing caller to pass - [node][\#3730](https://github.com/tendermint/tendermint/pull/3730) Add `CustomReactors` option to `NewNode` allowing caller to pass
custom reactors to run inside Tendermint node (@ParthDesai) custom reactors to run inside Tendermint node (@ParthDesai)
### IMPROVEMENTS: ### IMPROVEMENTS:
- [rpc] \#3700 Make possible to set absolute paths for TLS cert and key (@climber73)
- [rpc] \#3700 Make possible to set absolute paths for TLS cert and key (@climber73)
### BUG FIXES: ### BUG FIXES:
- [p2p] \#3338 Prevent "sent next PEX request too soon" errors by not calling - [p2p] \#3338 Prevent "sent next PEX request too soon" errors by not calling
ensurePeers outside of ensurePeersRoutine ensurePeers outside of ensurePeersRoutine
- [behaviour] Return correct reason in MessageOutOfOrder (@jim380) - [behaviour] Return correct reason in MessageOutOfOrder (@jim380)
- [config] \#3723 Add consensus_params to testnet config generation; document time_iota_ms (@ashleyvega)

View File

@ -1,34 +0,0 @@
FROM alpine:3.7
ENV DATA_ROOT /tendermint
ENV TMHOME $DATA_ROOT
RUN addgroup tmuser && \
adduser -S -G tmuser tmuser
RUN mkdir -p $DATA_ROOT && \
chown -R tmuser:tmuser $DATA_ROOT
RUN apk add --no-cache bash curl jq
ENV GOPATH /go
ENV PATH "$PATH:/go/bin"
RUN mkdir -p /go/src/github.com/tendermint/tendermint && \
apk add --no-cache go build-base git && \
cd /go/src/github.com/tendermint/tendermint && \
git clone https://github.com/tendermint/tendermint . && \
git checkout develop && \
make get_tools && \
make install && \
cd - && \
rm -rf /go/src/github.com/tendermint/tendermint && \
apk del go build-base git
VOLUME $DATA_ROOT
EXPOSE 26656
EXPOSE 26657
ENTRYPOINT ["tendermint"]
CMD ["node", "--moniker=`hostname`", "--proxy_app=kvstore"]

View File

@ -12,28 +12,25 @@
- `0.9.1`, `0.9`, [(Dockerfile)](https://github.com/tendermint/tendermint/blob/809e0e8c5933604ba8b2d096803ada7c5ec4dfd3/DOCKER/Dockerfile) - `0.9.1`, `0.9`, [(Dockerfile)](https://github.com/tendermint/tendermint/blob/809e0e8c5933604ba8b2d096803ada7c5ec4dfd3/DOCKER/Dockerfile)
- `0.9.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/d474baeeea6c22b289e7402449572f7c89ee21da/DOCKER/Dockerfile) - `0.9.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/d474baeeea6c22b289e7402449572f7c89ee21da/DOCKER/Dockerfile)
- `0.8.0`, `0.8` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/bf64dd21fdb193e54d8addaaaa2ecf7ac371de8c/DOCKER/Dockerfile) - `0.8.0`, `0.8` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/bf64dd21fdb193e54d8addaaaa2ecf7ac371de8c/DOCKER/Dockerfile)
- `develop` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/master/DOCKER/Dockerfile.develop)
`develop` tag points to the [develop](https://github.com/tendermint/tendermint/tree/develop) branch.
## Quick reference ## Quick reference
* **Where to get help:** - **Where to get help:**
https://cosmos.network/community [cosmos.network/ecosystem](https://cosmos.network/ecosystem)
* **Where to file issues:** - **Where to file issues:**
https://github.com/tendermint/tendermint/issues [Tendermint Issues](https://github.com/tendermint/tendermint/issues)
* **Supported Docker versions:** - **Supported Docker versions:**
[the latest release](https://github.com/moby/moby/releases) (down to 1.6 on a best-effort basis) [the latest release](https://github.com/moby/moby/releases) (down to 1.6 on a best-effort basis)
## Tendermint ## Tendermint
Tendermint Core is Byzantine Fault Tolerant (BFT) middleware that takes a state transition machine, written in any programming language, and securely replicates it on many machines. Tendermint Core is Byzantine Fault Tolerant (BFT) middleware that takes a state transition machine, written in any programming language, and securely replicates it on many machines.
For more background, see the [introduction](https://tendermint.readthedocs.io/en/master/introduction.html). For more background, see the [the docs](https://tendermint.com/docs/introduction/#quick-start).
To get started developing applications, see the [application developers guide](https://tendermint.readthedocs.io/en/master/getting-started.html). To get started developing applications, see the [application developers guide](https://tendermint.com/docs/introduction/quick-start.html).
## How to use this image ## How to use this image
@ -48,7 +45,7 @@ docker run -it --rm -v "/tmp:/tendermint" tendermint/tendermint node --proxy_app
## Local cluster ## Local cluster
To run a 4-node network, see the `Makefile` in the root of [the repo](https://github.com/tendermint/tendermint/master/Makefile) and run: To run a 4-node network, see the `Makefile` in the root of [the repo](https://github.com/tendermint/tendermint/blob/master/Makefile) and run:
``` ```
make build-linux make build-linux
@ -60,7 +57,7 @@ Note that this will build and use a different image than the ones provided here.
## License ## License
- Tendermint's license is [Apache 2.0](https://github.com/tendermint/tendermint/master/LICENSE). - Tendermint's license is [Apache 2.0](https://github.com/tendermint/tendermint/blob/master/LICENSE).
## Contributing ## Contributing

View File

@ -161,8 +161,9 @@ func testnetFiles(cmd *cobra.Command, args []string) error {
// Generate genesis doc from generated validators // Generate genesis doc from generated validators
genDoc := &types.GenesisDoc{ genDoc := &types.GenesisDoc{
GenesisTime: tmtime.Now(),
ChainID: "chain-" + cmn.RandStr(6), ChainID: "chain-" + cmn.RandStr(6),
ConsensusParams: types.DefaultConsensusParams(),
GenesisTime: tmtime.Now(),
Validators: genVals, Validators: genVals,
} }

View File

@ -6,14 +6,12 @@ The documentation for Tendermint Core is hosted at:
- https://tendermint-staging.interblock.io/docs/ - https://tendermint-staging.interblock.io/docs/
built from the files in this (`/docs`) directory for built from the files in this (`/docs`) directory for
[master](https://github.com/tendermint/tendermint/tree/master/docs) [master](https://github.com/tendermint/tendermint/tree/master/docs) respectively.
and [develop](https://github.com/tendermint/tendermint/tree/develop/docs),
respectively.
## How It Works ## How It Works
There is a CircleCI job listening for changes in the `/docs` directory, on both There is a CircleCI job listening for changes in the `/docs` directory, on both
the `master` and `develop` branches. Any updates to files in this directory the `master` branch. Any updates to files in this directory
on those branches will automatically trigger a website deployment. Under the hood, on those branches will automatically trigger a website deployment. Under the hood,
the private website repository has a `make build-docs` target consumed by a CircleCI job in that repo. the private website repository has a `make build-docs` target consumed by a CircleCI job in that repo.
@ -35,7 +33,7 @@ of the sidebar.
**NOTE:** Strongly consider the existing links - both within this directory **NOTE:** Strongly consider the existing links - both within this directory
and to the website docs - when moving or deleting files. and to the website docs - when moving or deleting files.
Links to directories *MUST* end in a `/`. Links to directories _MUST_ end in a `/`.
Relative links should be used nearly everywhere, having discovered and weighed the following: Relative links should be used nearly everywhere, having discovered and weighed the following:
@ -101,4 +99,4 @@ We are using [Algolia](https://www.algolia.com) to power full-text search. This
## Consistency ## Consistency
Because the build processes are identical (as is the information contained herein), this file should be kept in sync as Because the build processes are identical (as is the information contained herein), this file should be kept in sync as
much as possible with its [counterpart in the Cosmos SDK repo](https://github.com/cosmos/cosmos-sdk/blob/develop/docs/DOCS_README.md). much as possible with its [counterpart in the Cosmos SDK repo](https://github.com/cosmos/cosmos-sdk/blob/master/docs/DOCS_README.md).

View File

@ -62,7 +62,7 @@ as `abci-cli` above. The kvstore just stores transactions in a merkle
tree. tree.
Its code can be found Its code can be found
[here](https://github.com/tendermint/tendermint/blob/develop/abci/cmd/abci-cli/abci-cli.go) [here](https://github.com/tendermint/tendermint/blob/master/abci/cmd/abci-cli/abci-cli.go)
and looks like: and looks like:
``` ```
@ -137,7 +137,7 @@ response.
The server may be generic for a particular language, and we provide a The server may be generic for a particular language, and we provide a
[reference implementation in [reference implementation in
Golang](https://github.com/tendermint/tendermint/tree/develop/abci/server). See the Golang](https://github.com/tendermint/tendermint/tree/master/abci/server). See the
[list of other ABCI implementations](./ecosystem.md) for servers in [list of other ABCI implementations](./ecosystem.md) for servers in
other languages. other languages.
@ -324,7 +324,7 @@ But the ultimate flexibility comes from being able to write the
application easily in any language. application easily in any language.
We have implemented the counter in a number of languages [see the We have implemented the counter in a number of languages [see the
example directory](https://github.com/tendermint/tendermint/tree/develop/abci/example). example directory](https://github.com/tendermint/tendermint/tree/master/abci/example).
To run the Node.js version, first download & install [the Javascript ABCI server](https://github.com/tendermint/js-abci):

View File

@ -48,9 +48,9 @@ open ABCI connection with the application, which hosts an ABCI server.
Shown are the request and response types sent on each connection. Shown are the request and response types sent on each connection.
Most of the examples below are from [kvstore Most of the examples below are from [kvstore
application](https://github.com/tendermint/tendermint/blob/develop/abci/example/kvstore/kvstore.go), application](https://github.com/tendermint/tendermint/blob/master/abci/example/kvstore/kvstore.go),
which is a part of the abci repo. [persistent_kvstore which is a part of the abci repo. [persistent_kvstore
application](https://github.com/tendermint/tendermint/blob/develop/abci/example/kvstore/persistent_kvstore.go) application](https://github.com/tendermint/tendermint/blob/master/abci/example/kvstore/persistent_kvstore.go)
is used to show `BeginBlock`, `EndBlock` and `InitChain` example is used to show `BeginBlock`, `EndBlock` and `InitChain` example
implementations. implementations.

View File

@ -2,10 +2,7 @@
## Changelog ## Changelog
016-08-2018: Follow up from review: 016-08-2018: Follow up from review: - Revert changes to commit round - Remind about justification for removing pubkey - Update pros/cons
- Revert changes to commit round
- Remind about justification for removing pubkey
- Update pros/cons
05-08-2018: Initial draft 05-08-2018: Initial draft
## Context ## Context
@ -35,11 +32,11 @@ message ValidatorUpdate {
} }
``` ```
As noted in ADR-009[https://github.com/tendermint/tendermint/blob/develop/docs/architecture/adr-009-ABCI-design.md], As noted in ADR-009[https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-009-ABCI-design.md],
the `Validator` does not contain a pubkey because quantum public keys are the `Validator` does not contain a pubkey because quantum public keys are
quite large and it would be wasteful to send them all over ABCI with every block. quite large and it would be wasteful to send them all over ABCI with every block.
Thus, applications that want to take advantage of the information in BeginBlock Thus, applications that want to take advantage of the information in BeginBlock
are *required* to store pubkeys in state (or use much less efficient lazy means are _required_ to store pubkeys in state (or use much less efficient lazy means
of verifying BeginBlock data). of verifying BeginBlock data).
### RequestBeginBlock ### RequestBeginBlock

View File

@ -0,0 +1,391 @@
# ADR 043: Blockchain Reactor Riri-Org
## Changelog
* 18-06-2019: Initial draft
* 08-07-2019: Reviewed
## Context
The blockchain reactor is responsible for two high level processes: sending/receiving blocks from peers and fast-syncing blocks to catch up a node that is far behind. The goal of [ADR-40](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-040-blockchain-reactor-refactor.md) was to refactor these two processes by separating business logic currently wrapped up in go-channels into pure `handle*` functions. While the ADR specified what the final form of the reactor might look like, it lacked guidance on intermediary steps to get there.
The following diagram illustrates the state of the [blockchain-reorg](https://github.com/tendermint/tendermint/pull/3561) reactor, which will be referred to as `v1`.
![v1 Blockchain Reactor Architecture
Diagram](https://github.com/tendermint/tendermint/blob/f9e556481654a24aeb689bdadaf5eab3ccd66829/docs/architecture/img/blockchain-reactor-v1.png)
While `v1` of the blockchain reactor has shown significant improvements in terms of simplifying the concurrency model, the current PR has run into a few roadblocks.
* The current PR is large and difficult to review.
* Block gossiping and fast sync processes are highly coupled to the shared `Pool` data structure.
* Peer communication is spread over multiple components, creating a complex dependency graph which must be mocked out during testing.
* Timeouts modeled as stateful tickers introduce non-determinism in tests.
This ADR is meant to specify the missing components and control necessary to achieve [ADR-40](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-040-blockchain-reactor-refactor.md).
## Decision
Partition the responsibilities of the blockchain reactor into a set of components which communicate exclusively with events. Events will contain timestamps allowing each component to track time as internal state. The internal state will be mutated by a set of `handle*` functions which will produce event(s). The integration between components will happen in the reactor, and reactor tests will then become integration tests between components. This design will be known as `v2`.
![v2 Blockchain Reactor Architecture
Diagram](https://github.com/tendermint/tendermint/blob/f9e556481654a24aeb689bdadaf5eab3ccd66829/docs/architecture/img/blockchain-reactor-v2.png)
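To make the event-driven design concrete, the sketch below shows one way an event could carry its demux timestamp; the names (`Event`, `blockReceivedEv`) are illustrative only and are not the types used in the actual reactor.

```go
package main

import (
	"fmt"
	"time"
)

// Event is a hypothetical interface: every event records the time at which
// the demuxer stamped it, so the scheduler/processor can track time as pure
// internal state instead of consulting the wall clock themselves.
type Event interface {
	Time() time.Time
}

// blockReceivedEv is an illustrative event emitted when a block arrives.
type blockReceivedEv struct {
	peerID string
	height int64
	at     time.Time
}

func (e blockReceivedEv) Time() time.Time { return e.at }

func main() {
	ev := blockReceivedEv{peerID: "peer1", height: 10, at: time.Now()}
	fmt.Println("block 10 observed at", ev.Time())
}
```

Handlers then compare event timestamps against internal deadlines rather than calling `time.Now()` directly, which is what makes the components testable with synthetic time.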
### Reactor changes in detail
The reactor will include a demultiplexing routine which will send each message to each sub routine for independent processing. Each sub routine will then select the messages it's interested in and call the specific handle function specified in [ADR-40](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-040-blockchain-reactor-refactor.md). The demuxRoutine acts as a "pacemaker", setting the time in which events are expected to be handled.
```go
func demuxRoutine(msgs, schedulerMsgs, processorMsgs, ioMsgs chan Message) {
timer := time.NewTicker(interval)
for {
select {
case <-timer.C:
now := evTimeCheck{time.Now()}
schedulerMsgs <- now
processorMsgs <- now
ioMsgs <- now
case msg := <-msgs:
msg.time = time.Now()
// These channels should produce backpressure before
// being full to avoid starving each other
schedulerMsgs <- msg
processorMsgs <- msg
ioMsgs <- msg
if msg == stop {
break;
}
}
}
}
func processRoutine(input chan Message, output chan Message) {
processor := NewProcessor(..)
for {
msg := <- input
switch msg := msg.(type) {
case bcBlockRequestMessage:
output <- processor.handleBlockRequest(msg)
...
case stop:
processor.stop()
break;
}
}
}
func scheduleRoutine(input chan Message, output chan Message) {
scheduler := NewScheduler(...)
for {
msg := <-input
switch msg := msg.(type) {
case bcBlockResponseMessage:
output <- scheduler.handleBlockResponse(msg)
...
case stop:
scheduler.stop()
break;
}
}
}
```
## Lifecycle management
A set of routines for individual processes allows processes to run in parallel with clear lifecycle management. `Start`, `Stop`, and `AddPeer` hooks currently present in the reactor will delegate to the sub-routines, allowing them to manage internal state independently without further coupling to the reactor.
```go
func (r *BlockchainReactor) Start() {
r.msgs = make(chan Message, maxInFlight)
schedulerMsgs := make(chan Message)
processorMsgs := make(chan Message)
ioMsgs := make(chan Message)
go processorRoutine(processorMsgs, r.msgs)
go scheduleRoutine(schedulerMsgs, r.msgs)
go ioRoutine(ioMsgs, r.msgs)
...
}
func (r *BlockchainReactor) Receive(...) {
...
r.msgs <- msg
...
}
func (r *BlockchainReactor) Stop() {
...
r.msgs <- stop
...
}
...
func (r *BlockchainReactor) AddPeer(peer p2p.Peer) {
...
r.msgs <- bcAddPeerEv{peer.ID}
...
}
```
## IO handling
An io handling routine within the reactor will isolate peer communication. Messages going through the ioRoutine will usually be one way, using `p2p` APIs. In the case where a `p2p` API such as `trySend` returns an error, the ioRoutine can funnel those messages back to the demuxRoutine for distribution to the other routines. For instance, errors from the ioRoutine can be consumed by the scheduler to inform better peer selection implementations.
```go
func (r *BlockchainReactor) ioRoutine(ioMsgs chan Message, outMsgs chan Message) {
...
for {
msg := <-ioMsgs
switch msg := msg.(type) {
case scBlockRequestMessage:
queued := r.sendBlockRequestToPeer(...)
if queued {
outMsgs <- ioSendQueued{...}
}
case scStatusRequestMessage:
r.sendStatusRequestToPeer(...)
case bcPeerError:
r.Switch.StopPeerForError(msg.src)
...
...
case bcFinished:
break;
}
}
}
```
### Processor Internals
The processor is responsible for ordering, verifying and executing blocks. The Processor will maintain an internal cursor `height` referring to the last processed block. As a set of blocks arrive unordered, the Processor will check if it has `height+1` necessary to process the next block. The processor also maintains the map `blockPeers` from heights to peers, to keep track of which peer provided the block at `height`. `blockPeers` can be used in `handleRemovePeer(...)` to reschedule all unprocessed blocks provided by a peer who has errored.
```go
type Processor struct {
height int64 // the height cursor
state ...
blocks [height]*Block // keep a set of blocks in memory until they are processed
blockPeers [height]PeerID // keep track of which heights came from which peerID
lastTouch timestamp
}
func (proc *Processor) handleBlockResponse(peerID, block) {
if block.height <= height {
// ignore blocks at or below the current height
} else if blocks[block.height] {
return errDuplicateBlock{}
} else {
blocks[block.height] = block
}
if blocks[height] && blocks[height+1] {
... = state.Validators.VerifyCommit(...)
... = store.SaveBlock(...)
state, err = blockExec.ApplyBlock(...)
...
if err == nil {
delete blocks[height]
height++
lastTouch = msg.time
return pcBlockProcessed{height-1}
} else {
... // Delete all unprocessed block from the peer
return pcBlockProcessError{peerID, height}
}
}
}
func (proc *Processor) handleRemovePeer(peerID) {
events = []
// Delete all unprocessed blocks from peerID
for i = height; i < len(blocks); i++ {
if blockPeers[i] == peerID {
events = append(events, pcBlockReschedule{i})
delete blocks[i]
}
}
return events
}
func handleTimeCheckEv(time) {
if time - lastTouch > timeout {
// Timeout the processor
...
}
}
```
## Schedule
The Schedule maintains the internal state used for scheduling blockRequestMessages based on some scheduling algorithm. The schedule needs to maintain state on:
* The state `blockState` of every block seen up to a height of maxHeight
* The set of peers and their peer state `peerState`
* which peers have which blocks
* which blocks have been requested from which peers
```go
type blockState int
const (
blockStateNew blockState = iota
blockStatePending
blockStateReceived
blockStateProcessed
)
type schedule struct {
// a list of blocks in which blockState
blockStates map[height]blockState
// a map of which blocks are available from which peers
blockPeers map[height]map[p2p.ID]scPeer
// a map of peerID to schedule specific peer struct `scPeer`
peers map[p2p.ID]scPeer
// a map of heights to the peer we are waiting for a response from
pending map[height]scPeer
targetPending int // the number of blocks we want in blockStatePending
targetReceived int // the number of blocks we want in blockStateReceived
peerTimeout int
peerMinSpeed int
}
func (sc *schedule) numBlockInState(state blockState) uint32 {
num := 0
for i := sc.minHeight(); i <= sc.maxHeight(); i++ {
if sc.blockStates[i] == state {
num++
}
}
return num
}
func (sc *schedule) popSchedule(maxRequest int) []scBlockRequestMessage {
// We only want to schedule requests such that we have less than sc.targetPending and sc.targetReceived
// This ensures we don't saturate the network or flood the processor with unprocessed blocks
todo := min(sc.targetPending - sc.numBlockInState(blockStatePending), sc.numBlockInState(blockStateReceived))
events := []scBlockRequestMessage{}
for i := sc.minHeight(); i < sc.maxHeight(); i++ {
if todo == 0 {
break
}
if sc.blockStates[i] == blockStateNew {
peer := sc.selectPeer(sc.blockPeers[i])
sc.blockStates[i] = blockStatePending
sc.pending[i] = peer
events = append(events, scBlockRequestMessage{peerID: peer.peerID, height: i})
todo--
}
}
return events
}
...
type scPeer struct {
peerID p2p.ID
numOutstandingRequest int
lastTouched time.Time
monitor flow.Monitor
}
```
## Scheduler
The scheduler is configured to maintain a target `n` of in-flight
messages and will use feedback from `_blockResponseMessage`,
`_statusResponseMessage` and `_peerError` to produce an optimal assignment
of `scBlockRequestMessage`s at each `timeCheckEv`.
```go
func handleStatusResponse(peerID, height, time) {
schedule.touchPeer(peerID, time)
schedule.setPeerHeight(peerID, height)
}
func handleBlockResponseMessage(peerID, height, block, time) {
schedule.touchPeer(peerID, time)
schedule.markReceived(peerID, height, size(block))
}
func handleNoBlockResponseMessage(peerID, height, time) {
schedule.touchPeer(peerID, time)
// reschedule that block, punish peer...
...
}
func handlePeerError(peerID) {
// Remove the peer, reschedule the requests
...
}
func handleTimeCheckEv(time) {
// clean peer list
events = []
for peerID := range schedule.peersNotTouchedSince(time) {
pending = schedule.pendingFrom(peerID)
schedule.setPeerState(peerID, timedout)
schedule.resetBlocks(pending)
events = append(events, peerTimeout{peerID})
}
events = append(events, schedule.popSchedule()...)
return events
}
```
## Peer
The Peer stores per-peer state based on messages received by the scheduler.
```go
type Peer struct {
lastTouched timestamp
lastDownloaded timestamp
pending map[height]struct{}
height height // max height for the peer
state {
pending, // we know the peer but not the height
active, // we know the height
timeout // the peer has timed out
}
}
```
## Status
Work in progress
## Consequences
### Positive
* Tests become deterministic
* Simulation becomes atemporal: no need to wait for a wall-time timeout
* Peer Selection can be independently tested/simulated
* Develop a general approach to refactoring reactors
### Negative
### Neutral
### Implementation Path
* Implement the scheduler, test the scheduler, review the scheduler
* Implement the processor, test the processor, review the processor
* Implement the demuxer, write integration test, review integration tests
## References
* [ADR-40](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-040-blockchain-reactor-refactor.md): The original blockchain reactor re-org proposal
* [Blockchain re-org](https://github.com/tendermint/tendermint/pull/3561): The current blockchain reactor re-org implementation (v1)

Two binary image files added (121 KiB and 118 KiB); contents not shown.

View File

@ -1,9 +1,9 @@
# Install Tendermint # Install Tendermint
The fastest and easiest way to install the `tendermint` binary The fastest and easiest way to install the `tendermint` binary
is to run [this script](https://github.com/tendermint/tendermint/blob/develop/scripts/install/install_tendermint_ubuntu.sh) on is to run [this script](https://github.com/tendermint/tendermint/blob/master/scripts/install/install_tendermint_ubuntu.sh) on
a fresh Ubuntu instance, a fresh Ubuntu instance,
or [this script](https://github.com/tendermint/tendermint/blob/develop/scripts/install/install_tendermint_bsd.sh) or [this script](https://github.com/tendermint/tendermint/blob/master/scripts/install/install_tendermint_bsd.sh)
on a fresh FreeBSD instance. Read the comments / instructions carefully (i.e., reset your terminal after running the script, on a fresh FreeBSD instance. Read the comments / instructions carefully (i.e., reset your terminal after running the script,
make sure you are okay with the network connections being made). make sure you are okay with the network connections being made).

View File

@ -122,7 +122,7 @@ consensus engine, and provides a particular application state.
## ABCI Overview ## ABCI Overview
The [Application BlockChain Interface The [Application BlockChain Interface
(ABCI)](https://github.com/tendermint/tendermint/tree/develop/abci) (ABCI)](https://github.com/tendermint/tendermint/tree/master/abci)
allows for Byzantine Fault Tolerant replication of applications allows for Byzantine Fault Tolerant replication of applications
written in any programming language. written in any programming language.
@ -190,7 +190,7 @@ core to the application. The application replies with corresponding
response messages. response messages.
The messages are specified here: [ABCI Message The messages are specified here: [ABCI Message
Types](https://github.com/tendermint/tendermint/blob/develop/abci/README.md#message-types). Types](https://github.com/tendermint/tendermint/blob/master/abci/README.md#message-types).
The **DeliverTx** message is the work horse of the application. Each The **DeliverTx** message is the work horse of the application. Each
transaction in the blockchain is delivered with this message. The transaction in the blockchain is delivered with this message. The

View File

@ -116,7 +116,7 @@ consensus engine, and provides a particular application state.
## ABCI Overview ## ABCI Overview
The [Application BlockChain Interface The [Application BlockChain Interface
(ABCI)](https://github.com/tendermint/tendermint/tree/develop/abci) (ABCI)](https://github.com/tendermint/tendermint/tree/master/abci)
allows for Byzantine Fault Tolerant replication of applications allows for Byzantine Fault Tolerant replication of applications
written in any programming language. written in any programming language.
@ -184,7 +184,7 @@ core to the application. The application replies with corresponding
response messages. response messages.
The messages are specified here: [ABCI Message The messages are specified here: [ABCI Message
Types](https://github.com/tendermint/tendermint/blob/develop/abci/README.md#message-types). Types](https://github.com/tendermint/tendermint/blob/master/abci/README.md#message-types).
The **DeliverTx** message is the work horse of the application. Each The **DeliverTx** message is the work horse of the application. Each
transaction in the blockchain is delivered with this message. The transaction in the blockchain is delivered with this message. The

View File

@ -80,7 +80,7 @@ rm -rf ./build/node*
## Configuring abci containers ## Configuring abci containers
To use your own abci applications with 4-node setup edit the [docker-compose.yaml](https://github.com/tendermint/tendermint/blob/develop/docker-compose.yml) file and add image to your abci application. To use your own abci applications with 4-node setup edit the [docker-compose.yaml](https://github.com/tendermint/tendermint/blob/master/docker-compose.yml) file and add image to your abci application.
``` ```
abci0: abci0:

View File

@ -8,7 +8,7 @@ testnets on those servers.
## Install ## Install
NOTE: see the [integration bash NOTE: see the [integration bash
script](https://github.com/tendermint/tendermint/blob/develop/networks/remote/integration.sh) script](https://github.com/tendermint/tendermint/blob/master/networks/remote/integration.sh)
that can be run on a fresh DO droplet and will automatically spin up a 4 that can be run on a fresh DO droplet and will automatically spin up a 4
node testnet. The script more or less does everything described below. node testnet. The script more or less does everything described below.

View File

@ -2,11 +2,11 @@
ABCI is the interface between Tendermint (a state-machine replication engine) ABCI is the interface between Tendermint (a state-machine replication engine)
and your application (the actual state machine). It consists of a set of and your application (the actual state machine). It consists of a set of
*methods*, where each method has a corresponding `Request` and `Response` _methods_, where each method has a corresponding `Request` and `Response`
message type. Tendermint calls the ABCI methods on the ABCI application by sending the `Request*` message type. Tendermint calls the ABCI methods on the ABCI application by sending the `Request*`
messages and receiving the `Response*` messages in return. messages and receiving the `Response*` messages in return.
All message types are defined in a [protobuf file](https://github.com/tendermint/tendermint/blob/develop/abci/types/types.proto). All message types are defined in a [protobuf file](https://github.com/tendermint/tendermint/blob/master/abci/types/types.proto).
This allows Tendermint to run applications written in any programming language. This allows Tendermint to run applications written in any programming language.
This specification is split as follows: This specification is split as follows:

View File

@ -3,7 +3,7 @@
## Overview ## Overview
The ABCI message types are defined in a [protobuf The ABCI message types are defined in a [protobuf
file](https://github.com/tendermint/tendermint/blob/develop/abci/types/types.proto). file](https://github.com/tendermint/tendermint/blob/master/abci/types/types.proto).
ABCI methods are split across 3 separate ABCI _connections_: ABCI methods are split across 3 separate ABCI _connections_:

View File

@ -9,7 +9,7 @@ Applications](./apps.md).
## Message Protocol ## Message Protocol
The message protocol consists of pairs of requests and responses defined in the The message protocol consists of pairs of requests and responses defined in the
[protobuf file](https://github.com/tendermint/tendermint/blob/develop/abci/types/types.proto). [protobuf file](https://github.com/tendermint/tendermint/blob/master/abci/types/types.proto).
Some messages have no fields, while others may include byte-arrays, strings, integers, Some messages have no fields, while others may include byte-arrays, strings, integers,
or custom protobuf types. or custom protobuf types.
@ -33,9 +33,9 @@ The latter two can be tested using the `abci-cli` by setting the `--abci` flag
appropriately (ie. to `socket` or `grpc`). appropriately (ie. to `socket` or `grpc`).
See examples, in various stages of maintenance, in See examples, in various stages of maintenance, in
[Go](https://github.com/tendermint/tendermint/tree/develop/abci/server), [Go](https://github.com/tendermint/tendermint/tree/master/abci/server),
[JavaScript](https://github.com/tendermint/js-abci), [JavaScript](https://github.com/tendermint/js-abci),
[Python](https://github.com/tendermint/tendermint/tree/develop/abci/example/python3/abci), [Python](https://github.com/tendermint/tendermint/tree/master/abci/example/python3/abci),
[C++](https://github.com/mdyring/cpp-tmsp), and [C++](https://github.com/mdyring/cpp-tmsp), and
[Java](https://github.com/jTendermint/jabci). [Java](https://github.com/jTendermint/jabci).
@ -44,14 +44,13 @@ See examples, in various stages of maintenance, in
The simplest implementation uses function calls within Golang. The simplest implementation uses function calls within Golang.
This means ABCI applications written in Golang can be compiled with TendermintCore and run as a single binary. This means ABCI applications written in Golang can be compiled with TendermintCore and run as a single binary.
### GRPC ### GRPC
If GRPC is available in your language, this is the easiest approach, If GRPC is available in your language, this is the easiest approach,
though it will have significant performance overhead. though it will have significant performance overhead.
To get started with GRPC, copy in the [protobuf To get started with GRPC, copy in the [protobuf
file](https://github.com/tendermint/tendermint/blob/develop/abci/types/types.proto) file](https://github.com/tendermint/tendermint/blob/master/abci/types/types.proto)
and compile it using the GRPC plugin for your language. For instance, and compile it using the GRPC plugin for your language. For instance,
for golang, the command is `protoc --go_out=plugins=grpc:. types.proto`. for golang, the command is `protoc --go_out=plugins=grpc:. types.proto`.
See the [grpc documentation for more details](http://www.grpc.io/docs/). See the [grpc documentation for more details](http://www.grpc.io/docs/).
@ -107,4 +106,4 @@ received or a block is committed.
It is unlikely that you will need to implement a client. For details of It is unlikely that you will need to implement a client. For details of
our client, see our client, see
[here](https://github.com/tendermint/tendermint/tree/develop/abci/client). [here](https://github.com/tendermint/tendermint/tree/master/abci/client).

View File

@ -60,7 +60,7 @@ You can simply use below table and concatenate Prefix || Length (of raw bytes) |
( while || stands for byte concatenation here). ( while || stands for byte concatenation here).
| Type | Name | Prefix | Length | Notes | | Type | Name | Prefix | Length | Notes |
| ------------------ | ----------------------------- | ---------- | -------- | ----- | | ----------------------- | ---------------------------------- | ---------- | -------- | ----- |
| PubKeyEd25519 | tendermint/PubKeyEd25519 | 0x1624DE64 | 0x20 | | | PubKeyEd25519 | tendermint/PubKeyEd25519 | 0x1624DE64 | 0x20 | |
| PubKeySecp256k1 | tendermint/PubKeySecp256k1 | 0xEB5AE987 | 0x21 | | | PubKeySecp256k1 | tendermint/PubKeySecp256k1 | 0xEB5AE987 | 0x21 | |
| PrivKeyEd25519 | tendermint/PrivKeyEd25519 | 0xA3288910 | 0x40 | | | PrivKeyEd25519 | tendermint/PrivKeyEd25519 | 0xA3288910 | 0x40 | |
@ -70,9 +70,9 @@ You can simply use below table and concatenate Prefix || Length (of raw bytes) |
### Example ### Example
For example, the 33-byte (or 0x21-byte in hex) Secp256k1 pubkey For example, the 33-byte (or 0x21-byte in hex) Secp256k1 pubkey
`020BD40F225A57ED383B440CF073BC5539D0341F5767D2BF2D78406D00475A2EE9` `020BD40F225A57ED383B440CF073BC5539D0341F5767D2BF2D78406D00475A2EE9`
would be encoded as would be encoded as
`EB5AE98721020BD40F225A57ED383B440CF073BC5539D0341F5767D2BF2D78406D00475A2EE9` `EB5AE98721020BD40F225A57ED383B440CF073BC5539D0341F5767D2BF2D78406D00475A2EE9`
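To make the concatenation rule concrete, here is a small standalone sketch; `encodeWithPrefix` is a hypothetical helper (not part of go-amino) and assumes the length fits in a single varint byte, as it does for these key sizes.

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// encodeWithPrefix concatenates: registered prefix || length of raw bytes || raw bytes.
func encodeWithPrefix(prefix, raw []byte) []byte {
	out := append([]byte{}, prefix...)
	out = append(out, byte(len(raw))) // single-byte length; fine for 32/33/64-byte keys
	return append(out, raw...)
}

func main() {
	prefix, _ := hex.DecodeString("EB5AE987") // tendermint/PubKeySecp256k1
	raw, _ := hex.DecodeString("020BD40F225A57ED383B440CF073BC5539D0341F5767D2BF2D78406D00475A2EE9")
	fmt.Printf("%X\n", encodeWithPrefix(prefix, raw))
	// Prints the same bytes as the worked example above: EB5AE98721020BD40F...2EE9
}
```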
### Key Types ### Key Types
@ -170,11 +170,11 @@ We use the RFC 6962 specification of a merkle tree, with sha256 as the hash func
Merkle trees are used throughout Tendermint to compute a cryptographic digest of a data structure. Merkle trees are used throughout Tendermint to compute a cryptographic digest of a data structure.
The differences between RFC 6962 and the simplest form a merkle tree are that: The differences between RFC 6962 and the simplest form a merkle tree are that:
1) leaf nodes and inner nodes have different hashes. 1. leaf nodes and inner nodes have different hashes.
This is for "second pre-image resistance", to prevent the proof to an inner node being valid as the proof of a leaf. This is for "second pre-image resistance", to prevent the proof to an inner node being valid as the proof of a leaf.
The leaf nodes are `SHA256(0x00 || leaf_data)`, and inner nodes are `SHA256(0x01 || left_hash || right_hash)`. The leaf nodes are `SHA256(0x00 || leaf_data)`, and inner nodes are `SHA256(0x01 || left_hash || right_hash)`.
2) When the number of items isn't a power of two, the left half of the tree is as big as it could be. 2. When the number of items isn't a power of two, the left half of the tree is as big as it could be.
(The largest power of two less than the number of items) This allows new leaves to be added with less (The largest power of two less than the number of items) This allows new leaves to be added with less
recomputation. For example: recomputation. For example:
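As an aside, a minimal self-contained sketch of these two rules (the 0x00/0x01 domain prefixes and the largest-power-of-two split) might look like the following; it is an illustration only, not Tendermint's actual `merkle` package, and it leaves the empty-tree case unspecified.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// leafHash implements SHA256(0x00 || leaf_data).
func leafHash(leaf []byte) []byte {
	h := sha256.Sum256(append([]byte{0x00}, leaf...))
	return h[:]
}

// innerHash implements SHA256(0x01 || left_hash || right_hash).
func innerHash(left, right []byte) []byte {
	h := sha256.Sum256(append(append([]byte{0x01}, left...), right...))
	return h[:]
}

// splitPoint returns the largest power of two strictly less than n (n >= 2).
func splitPoint(n int) int {
	k := 1
	for k*2 < n {
		k *= 2
	}
	return k
}

// merkleRoot hashes items per the rules above: the left subtree holds the
// largest power of two less than the number of items.
func merkleRoot(items [][]byte) []byte {
	switch len(items) {
	case 0:
		return nil // simplified; the spec defines the empty hash separately
	case 1:
		return leafHash(items[0])
	default:
		k := splitPoint(len(items))
		return innerHash(merkleRoot(items[:k]), merkleRoot(items[k:]))
	}
}

func main() {
	items := [][]byte{[]byte("a"), []byte("b"), []byte("c")}
	fmt.Printf("%X\n", merkleRoot(items))
}
```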
@ -290,7 +290,7 @@ func computeHashFromAunts(index, total int, leafHash []byte, innerHashes [][]byt
### IAVL+ Tree ### IAVL+ Tree
Because Tendermint only uses a Simple Merkle Tree, application developers are expect to use their own Merkle tree in their applications. For example, the IAVL+ Tree - an immutable self-balancing binary tree for persisting application state is used by the [Cosmos SDK](https://github.com/cosmos/cosmos-sdk/blob/develop/docs/sdk/core/multistore.md) Because Tendermint only uses a Simple Merkle Tree, application developers are expect to use their own Merkle tree in their applications. For example, the IAVL+ Tree - an immutable self-balancing binary tree for persisting application state is used by the [Cosmos SDK](https://github.com/cosmos/cosmos-sdk/blob/master/docs/clients/lite/specification.md)
## JSON ## JSON

View File

@ -120,7 +120,7 @@ A proposal is signed and published by the designated proposer at each
round. The proposer is chosen by a deterministic and non-choking round round. The proposer is chosen by a deterministic and non-choking round
robin selection algorithm that selects proposers in proportion to their robin selection algorithm that selects proposers in proportion to their
voting power (see voting power (see
[implementation](https://github.com/tendermint/tendermint/blob/develop/types/validator_set.go)). [implementation](https://github.com/tendermint/tendermint/blob/master/types/validator_set.go)).
A proposal at `(H,R)` is composed of a block and an optional latest A proposal at `(H,R)` is composed of a block and an optional latest
`PoLC-Round < R` which is included iff the proposer knows of one. This `PoLC-Round < R` which is included iff the proposer knows of one. This

View File

@ -7,7 +7,7 @@ See [this issue](https://github.com/tendermint/tendermint/issues/1503)
Mempool maintains a cache of the last 10000 transactions to prevent Mempool maintains a cache of the last 10000 transactions to prevent
replaying old transactions (plus transactions coming from other replaying old transactions (plus transactions coming from other
validators, who are continually exchanging transactions). Read [Replay validators, who are continually exchanging transactions). Read [Replay
Protection](../../../../app-development.md#replay-protection) Protection](../../../app-dev/app-development.md#replay-protection)
for details. for details.
Sending incorrectly encoded data or data exceeding `maxMsgSize` will result Sending incorrectly encoded data or data exceeding `maxMsgSize` will result

View File

@ -28,5 +28,5 @@ WAL. Then it will go to precommit, and that time it will work because the
private validator contains the `LastSignBytes` and then well replay the private validator contains the `LastSignBytes` and then well replay the
precommit from the WAL. precommit from the WAL.
Make sure to read about [WAL corruption](../../../tendermint-core/running-in-production.md#wal-corruption) Make sure to read about [WAL corruption](../../tendermint-core/running-in-production.md#wal-corruption)
and recovery strategies. and recovery strategies.

View File

@ -315,8 +315,7 @@ namespace = "tendermint"
If `create_empty_blocks` is set to `true` in your config, blocks will be If `create_empty_blocks` is set to `true` in your config, blocks will be
created ~ every second (with default consensus parameters). You can regulate created ~ every second (with default consensus parameters). You can regulate
the delay between blocks by changing the `timeout_commit`. E.g. `timeout_commit the delay between blocks by changing the `timeout_commit`. E.g. `timeout_commit = "10s"` should result in ~ 10 second blocks.
= "10s"` should result in ~ 10 second blocks.
**create_empty_blocks = false** **create_empty_blocks = false**
@ -342,7 +341,7 @@ Tendermint will only create blocks if there are transactions, or after waiting
## Consensus timeouts explained ## Consensus timeouts explained
There's a variety of information about timeouts in [Running in There's a variety of information about timeouts in [Running in
production](./running-in-production.html) production](./running-in-production.md)
You can also find more detailed technical explanation in the spec: [The latest You can also find more detailed technical explanation in the spec: [The latest
gossip on BFT consensus](https://arxiv.org/abs/1807.04938). gossip on BFT consensus](https://arxiv.org/abs/1807.04938).

View File

@ -115,7 +115,7 @@ little overview what they do.
- `abci-client` As mentioned in [Application Development Guide](../app-dev/app-development.md), Tendermint acts as an ABCI - `abci-client` As mentioned in [Application Development Guide](../app-dev/app-development.md), Tendermint acts as an ABCI
client with respect to the application and maintains 3 connections: client with respect to the application and maintains 3 connections:
mempool, consensus and query. The code used by Tendermint Core can mempool, consensus and query. The code used by Tendermint Core can
be found [here](https://github.com/tendermint/tendermint/tree/develop/abci/client). be found [here](https://github.com/tendermint/tendermint/tree/master/abci/client).
- `blockchain` Provides storage, pool (a group of peers), and reactor - `blockchain` Provides storage, pool (a group of peers), and reactor
for both storing and exchanging blocks between peers. for both storing and exchanging blocks between peers.
- `consensus` The heart of Tendermint core, which is the - `consensus` The heart of Tendermint core, which is the

View File

@ -4,4 +4,4 @@ The RPC documentation is hosted here:
- [https://tendermint.com/rpc/](https://tendermint.com/rpc/) - [https://tendermint.com/rpc/](https://tendermint.com/rpc/)
To update the documentation, edit the relevant `godoc` comments in the [rpc/core directory](https://github.com/tendermint/tendermint/tree/develop/rpc/core). To update the documentation, edit the relevant `godoc` comments in the [rpc/core directory](https://github.com/tendermint/tendermint/tree/master/rpc/core).

View File

@ -43,6 +43,11 @@ definition](https://github.com/tendermint/tendermint/blob/master/types/genesis.g
- `chain_id`: ID of the blockchain. This must be unique for - `chain_id`: ID of the blockchain. This must be unique for
every blockchain. If your testnet blockchains do not have unique every blockchain. If your testnet blockchains do not have unique
chain IDs, you will have a bad time. The ChainID must be less than 50 symbols. chain IDs, you will have a bad time. The ChainID must be less than 50 symbols.
- `consensus_params`
- `block`
- `time_iota_ms`: Minimum time increment between consecutive blocks (in
milliseconds). If the block header timestamp is ahead of the system clock,
decrease this value.
- `validators`: List of initial validators. Note this may be overridden entirely by the - `validators`: List of initial validators. Note this may be overridden entirely by the
application, and may be left empty to make explicit that the application, and may be left empty to make explicit that the
application will initialize the validator set with ResponseInitChain. application will initialize the validator set with ResponseInitChain.
@ -63,9 +68,10 @@ definition](https://github.com/tendermint/tendermint/blob/master/types/genesis.g
"genesis_time": "2018-11-13T18:11:50.277637Z", "genesis_time": "2018-11-13T18:11:50.277637Z",
"chain_id": "test-chain-s4ui7D", "chain_id": "test-chain-s4ui7D",
"consensus_params": { "consensus_params": {
"block_size": { "block": {
"max_bytes": "22020096", "max_bytes": "22020096",
"max_gas": "-1" "max_gas": "-1",
"time_iota_ms": "1000"
}, },
"evidence": { "evidence": {
"max_age": "100000" "max_age": "100000"

View File

@ -1,43 +0,0 @@
package common
import (
"strings"
"time"
"github.com/pkg/errors"
)
// TimeLayout helps to parse a date string of the format YYYY-MM-DD
// Intended to be used with the following function:
// time.Parse(TimeLayout, date)
var TimeLayout = "2006-01-02" //this represents YYYY-MM-DD
// ParseDateRange parses a date range string of the format start:end
// where the start and end date are of the format YYYY-MM-DD.
// The parsed dates are time.Time and will return the zero time for
// unbounded dates, ex:
// unbounded start: :2000-12-31
// unbounded end: 2000-12-31:
func ParseDateRange(dateRange string) (startDate, endDate time.Time, err error) {
dates := strings.Split(dateRange, ":")
if len(dates) != 2 {
err = errors.New("bad date range, must be in format date:date")
return
}
parseDate := func(date string) (out time.Time, err error) {
if len(date) == 0 {
return
}
out, err = time.Parse(TimeLayout, date)
return
}
startDate, err = parseDate(dates[0])
if err != nil {
return
}
endDate, err = parseDate(dates[1])
if err != nil {
return
}
return
}

View File

@ -1,46 +0,0 @@
package common
import (
"testing"
"time"
"github.com/stretchr/testify/assert"
)
var (
date = time.Date(2015, time.Month(12), 31, 0, 0, 0, 0, time.UTC)
date2 = time.Date(2016, time.Month(12), 31, 0, 0, 0, 0, time.UTC)
zero time.Time
)
func TestParseDateRange(t *testing.T) {
assert := assert.New(t)
var testDates = []struct {
dateStr string
start time.Time
end time.Time
errNil bool
}{
{"2015-12-31:2016-12-31", date, date2, true},
{"2015-12-31:", date, zero, true},
{":2016-12-31", zero, date2, true},
{"2016-12-31", zero, zero, false},
{"2016-31-12:", zero, zero, false},
{":2016-31-12", zero, zero, false},
}
for _, test := range testDates {
start, end, err := ParseDateRange(test.dateStr)
if test.errNil {
assert.Nil(err)
testPtr := func(want, have time.Time) {
assert.True(have.Equal(want))
}
testPtr(test.start, start)
testPtr(test.end, end)
} else {
assert.NotNil(err)
}
}
}

View File

@ -1,125 +0,0 @@
package common
import (
"bytes"
"container/heap"
)
/*
Example usage:
```
h := NewHeap()
h.Push("msg1", 1)
h.Push("msg3", 3)
h.Push("msg2", 2)
fmt.Println(h.Pop()) // msg1
fmt.Println(h.Pop()) // msg2
fmt.Println(h.Pop()) // msg3
```
*/
type Heap struct {
pq priorityQueue
}
func NewHeap() *Heap {
return &Heap{pq: make([]*pqItem, 0)}
}
func (h *Heap) Len() int64 {
return int64(len(h.pq))
}
func (h *Heap) Push(value interface{}, priority int) {
heap.Push(&h.pq, &pqItem{value: value, priority: cmpInt(priority)})
}
func (h *Heap) PushBytes(value interface{}, priority []byte) {
heap.Push(&h.pq, &pqItem{value: value, priority: cmpBytes(priority)})
}
func (h *Heap) PushComparable(value interface{}, priority Comparable) {
heap.Push(&h.pq, &pqItem{value: value, priority: priority})
}
func (h *Heap) Peek() interface{} {
if len(h.pq) == 0 {
return nil
}
return h.pq[0].value
}
func (h *Heap) Update(value interface{}, priority Comparable) {
h.pq.Update(h.pq[0], value, priority)
}
func (h *Heap) Pop() interface{} {
item := heap.Pop(&h.pq).(*pqItem)
return item.value
}
//-----------------------------------------------------------------------------
// From: http://golang.org/pkg/container/heap/#example__priorityQueue
type pqItem struct {
value interface{}
priority Comparable
index int
}
type priorityQueue []*pqItem
func (pq priorityQueue) Len() int { return len(pq) }
func (pq priorityQueue) Less(i, j int) bool {
return pq[i].priority.Less(pq[j].priority)
}
func (pq priorityQueue) Swap(i, j int) {
pq[i], pq[j] = pq[j], pq[i]
pq[i].index = i
pq[j].index = j
}
func (pq *priorityQueue) Push(x interface{}) {
n := len(*pq)
item := x.(*pqItem)
item.index = n
*pq = append(*pq, item)
}
func (pq *priorityQueue) Pop() interface{} {
old := *pq
n := len(old)
item := old[n-1]
item.index = -1 // for safety
*pq = old[0 : n-1]
return item
}
func (pq *priorityQueue) Update(item *pqItem, value interface{}, priority Comparable) {
item.value = value
item.priority = priority
heap.Fix(pq, item.index)
}
//--------------------------------------------------------------------------------
// Comparable
type Comparable interface {
Less(o interface{}) bool
}
type cmpInt int
func (i cmpInt) Less(o interface{}) bool {
return int(i) < int(o.(cmpInt))
}
type cmpBytes []byte
func (bz cmpBytes) Less(o interface{}) bool {
return bytes.Compare([]byte(bz), []byte(o.(cmpBytes))) < 0
}

View File

@ -1,74 +0,0 @@
package common
import (
"bytes"
"errors"
"io"
)
type PrefixedReader struct {
Prefix []byte
reader io.Reader
}
func NewPrefixedReader(prefix []byte, reader io.Reader) *PrefixedReader {
return &PrefixedReader{prefix, reader}
}
func (pr *PrefixedReader) Read(p []byte) (n int, err error) {
if len(pr.Prefix) > 0 {
read := copy(p, pr.Prefix)
pr.Prefix = pr.Prefix[read:]
return read, nil
}
return pr.reader.Read(p)
}
// NOTE: Not goroutine safe
type BufferCloser struct {
bytes.Buffer
Closed bool
}
func NewBufferCloser(buf []byte) *BufferCloser {
return &BufferCloser{
*bytes.NewBuffer(buf),
false,
}
}
func (bc *BufferCloser) Close() error {
if bc.Closed {
return errors.New("BufferCloser already closed")
}
bc.Closed = true
return nil
}
func (bc *BufferCloser) Write(p []byte) (n int, err error) {
if bc.Closed {
return 0, errors.New("Cannot write to closed BufferCloser")
}
return bc.Buffer.Write(p)
}
func (bc *BufferCloser) WriteByte(c byte) error {
if bc.Closed {
return errors.New("Cannot write to closed BufferCloser")
}
return bc.Buffer.WriteByte(c)
}
func (bc *BufferCloser) WriteRune(r rune) (n int, err error) {
if bc.Closed {
return 0, errors.New("Cannot write to closed BufferCloser")
}
return bc.Buffer.WriteRune(r)
}
func (bc *BufferCloser) WriteString(s string) (n int, err error) {
if bc.Closed {
return 0, errors.New("Cannot write to closed BufferCloser")
}
return bc.Buffer.WriteString(s)
}

View File

@ -1,39 +1,13 @@
package common package common
import ( import (
"bufio"
"fmt" "fmt"
"io"
"io/ioutil" "io/ioutil"
"os" "os"
"os/exec"
"os/signal" "os/signal"
"strings"
"syscall" "syscall"
) )
var gopath string
// GoPath returns GOPATH env variable value. If it is not set, this function
// will try to call `go env GOPATH` subcommand.
func GoPath() string {
if gopath != "" {
return gopath
}
path := os.Getenv("GOPATH")
if len(path) == 0 {
goCmd := exec.Command("go", "env", "GOPATH")
out, err := goCmd.Output()
if err != nil {
panic(fmt.Sprintf("failed to determine gopath: %v", err))
}
path = string(out)
}
gopath = path
return path
}
type logger interface { type logger interface {
Info(msg string, keyvals ...interface{}) Info(msg string, keyvals ...interface{})
} }
@ -78,25 +52,6 @@ func EnsureDir(dir string, mode os.FileMode) error {
return nil return nil
} }
func IsDirEmpty(name string) (bool, error) {
f, err := os.Open(name)
if err != nil {
if os.IsNotExist(err) {
return true, err
}
// Otherwise perhaps a permission
// error or some other error.
return false, err
}
defer f.Close()
_, err = f.Readdirnames(1) // Or f.Readdir(1)
if err == io.EOF {
return true, nil
}
return false, err // Either not empty or error, suits both cases
}
func FileExists(filePath string) bool { func FileExists(filePath string) bool {
_, err := os.Stat(filePath) _, err := os.Stat(filePath)
return !os.IsNotExist(err) return !os.IsNotExist(err)
@ -125,19 +80,3 @@ func MustWriteFile(filePath string, contents []byte, mode os.FileMode) {
Exit(fmt.Sprintf("MustWriteFile failed: %v", err)) Exit(fmt.Sprintf("MustWriteFile failed: %v", err))
} }
} }
//--------------------------------------------------------------------------------
func Prompt(prompt string, defaultValue string) (string, error) {
fmt.Print(prompt)
reader := bufio.NewReader(os.Stdin)
line, err := reader.ReadString('\n')
if err != nil {
return defaultValue, err
}
line = strings.TrimSpace(line)
if line == "" {
return defaultValue, nil
}
return line, nil
}

View File

@ -1,46 +0,0 @@
package common
import (
"os"
"testing"
)
func TestOSGoPath(t *testing.T) {
// restore original gopath upon exit
path := os.Getenv("GOPATH")
defer func() {
_ = os.Setenv("GOPATH", path)
}()
err := os.Setenv("GOPATH", "~/testgopath")
if err != nil {
t.Fatal(err)
}
path = GoPath()
if path != "~/testgopath" {
t.Fatalf("should get GOPATH env var value, got %v", path)
}
os.Unsetenv("GOPATH")
path = GoPath()
if path != "~/testgopath" {
t.Fatalf("subsequent calls should return the same value, got %v", path)
}
}
func TestOSGoPathWithoutEnvVar(t *testing.T) {
// restore original gopath upon exit
path := os.Getenv("GOPATH")
defer func() {
_ = os.Setenv("GOPATH", path)
}()
os.Unsetenv("GOPATH")
// reset cache
gopath = ""
path = GoPath()
if path == "" || path == "~/testgopath" {
t.Fatalf("should get nonempty result of calling go env GOPATH, got %v", path)
}
}

View File

@ -13,7 +13,7 @@ import (
"strings" "strings"
"time" "time"
"errors" "github.com/pkg/errors"
) )
// NetAddress defines information about a peer on the network // NetAddress defines information about a peer on the network
@ -40,7 +40,7 @@ func IDAddressString(id ID, protocolHostPort string) string {
// NewNetAddress returns a new NetAddress using the provided TCP // NewNetAddress returns a new NetAddress using the provided TCP
// address. When testing, other net.Addr (except TCP) will result in // address. When testing, other net.Addr (except TCP) will result in
// using 0.0.0.0:0. When normal run, other net.Addr (except TCP) will // using 0.0.0.0:0. When normal run, other net.Addr (except TCP) will
// panic. // panic. Panics if ID is invalid.
// TODO: socks proxies? // TODO: socks proxies?
func NewNetAddress(id ID, addr net.Addr) *NetAddress { func NewNetAddress(id ID, addr net.Addr) *NetAddress {
tcpAddr, ok := addr.(*net.TCPAddr) tcpAddr, ok := addr.(*net.TCPAddr)
@ -53,6 +53,11 @@ func NewNetAddress(id ID, addr net.Addr) *NetAddress {
return netAddr return netAddr
} }
} }
if err := validateID(id); err != nil {
panic(fmt.Sprintf("Invalid ID %v: %v (addr: %v)", id, err, addr))
}
ip := tcpAddr.IP ip := tcpAddr.IP
port := uint16(tcpAddr.Port) port := uint16(tcpAddr.Port)
na := NewNetAddressIPPort(ip, port) na := NewNetAddressIPPort(ip, port)
@ -72,18 +77,11 @@ func NewNetAddressString(addr string) (*NetAddress, error) {
} }
// get ID // get ID
idStr := spl[0] if err := validateID(ID(spl[0])); err != nil {
idBytes, err := hex.DecodeString(idStr)
if err != nil {
return nil, ErrNetAddressInvalid{addrWithoutProtocol, err} return nil, ErrNetAddressInvalid{addrWithoutProtocol, err}
} }
if len(idBytes) != IDByteLength {
return nil, ErrNetAddressInvalid{
addrWithoutProtocol,
fmt.Errorf("invalid hex length - got %d, expected %d", len(idBytes), IDByteLength)}
}
var id ID var id ID
id, addrWithoutProtocol = ID(idStr), spl[1] id, addrWithoutProtocol = ID(spl[0]), spl[1]
// get host and port // get host and port
host, portStr, err := net.SplitHostPort(addrWithoutProtocol) host, portStr, err := net.SplitHostPort(addrWithoutProtocol)
@ -207,22 +205,28 @@ func (na *NetAddress) DialTimeout(timeout time.Duration) (net.Conn, error) {
// Routable returns true if the address is routable. // Routable returns true if the address is routable.
func (na *NetAddress) Routable() bool { func (na *NetAddress) Routable() bool {
if err := na.Valid(); err != nil {
return false
}
// TODO(oga) bitcoind doesn't include RFC3849 here, but should we? // TODO(oga) bitcoind doesn't include RFC3849 here, but should we?
return na.Valid() && !(na.RFC1918() || na.RFC3927() || na.RFC4862() || return !(na.RFC1918() || na.RFC3927() || na.RFC4862() ||
na.RFC4193() || na.RFC4843() || na.Local()) na.RFC4193() || na.RFC4843() || na.Local())
} }
// For IPv4 these are either a 0 or all bits set address. For IPv6 a zero // For IPv4 these are either a 0 or all bits set address. For IPv6 a zero
// address or one that matches the RFC3849 documentation address format. // address or one that matches the RFC3849 documentation address format.
func (na *NetAddress) Valid() bool { func (na *NetAddress) Valid() error {
if string(na.ID) != "" { if err := validateID(na.ID); err != nil {
data, err := hex.DecodeString(string(na.ID)) return errors.Wrap(err, "invalid ID")
if err != nil || len(data) != IDByteLength {
return false
} }
if na.IP == nil {
return errors.New("no IP")
} }
return na.IP != nil && !(na.IP.IsUnspecified() || na.RFC3849() || if na.IP.IsUnspecified() || na.RFC3849() || na.IP.Equal(net.IPv4bcast) {
na.IP.Equal(net.IPv4bcast)) return errors.New("invalid IP")
}
return nil
} }
// HasID returns true if the address has an ID. // HasID returns true if the address has an ID.
@ -329,3 +333,17 @@ func removeProtocolIfDefined(addr string) string {
return addr return addr
} }
func validateID(id ID) error {
if len(id) == 0 {
return errors.New("no ID")
}
idBytes, err := hex.DecodeString(string(id))
if err != nil {
return err
}
if len(idBytes) != IDByteLength {
return fmt.Errorf("invalid hex length - got %d, expected %d", len(idBytes), IDByteLength)
}
return nil
}

View File

@ -11,9 +11,13 @@ import (
func TestNewNetAddress(t *testing.T) { func TestNewNetAddress(t *testing.T) {
tcpAddr, err := net.ResolveTCPAddr("tcp", "127.0.0.1:8080") tcpAddr, err := net.ResolveTCPAddr("tcp", "127.0.0.1:8080")
require.Nil(t, err) require.Nil(t, err)
addr := NewNetAddress("", tcpAddr)
assert.Equal(t, "127.0.0.1:8080", addr.String()) assert.Panics(t, func() {
NewNetAddress("", tcpAddr)
})
addr := NewNetAddress("deadbeefdeadbeefdeadbeefdeadbeefdeadbeef", tcpAddr)
assert.Equal(t, "deadbeefdeadbeefdeadbeefdeadbeefdeadbeef@127.0.0.1:8080", addr.String())
assert.NotPanics(t, func() { assert.NotPanics(t, func() {
NewNetAddress("", &net.UDPAddr{IP: net.ParseIP("127.0.0.1"), Port: 8000}) NewNetAddress("", &net.UDPAddr{IP: net.ParseIP("127.0.0.1"), Port: 8000})
@ -106,7 +110,12 @@ func TestNetAddressProperties(t *testing.T) {
addr, err := NewNetAddressString(tc.addr) addr, err := NewNetAddressString(tc.addr)
require.Nil(t, err) require.Nil(t, err)
assert.Equal(t, tc.valid, addr.Valid()) err = addr.Valid()
if tc.valid {
assert.NoError(t, err)
} else {
assert.Error(t, err)
}
assert.Equal(t, tc.local, addr.Local()) assert.Equal(t, tc.local, addr.Local())
assert.Equal(t, tc.routable, addr.Routable()) assert.Equal(t, tc.routable, addr.Routable())
} }

View File

@ -586,8 +586,8 @@ func (a *addrBook) addAddress(addr, src *p2p.NetAddress) error {
return ErrAddrBookNilAddr{addr, src} return ErrAddrBookNilAddr{addr, src}
} }
if !addr.HasID() { if err := addr.Valid(); err != nil {
return ErrAddrBookInvalidAddrNoID{addr} return ErrAddrBookInvalidAddr{Addr: addr, AddrErr: err}
} }
if _, ok := a.privateIDs[addr.ID]; ok { if _, ok := a.privateIDs[addr.ID]; ok {
@ -607,10 +607,6 @@ func (a *addrBook) addAddress(addr, src *p2p.NetAddress) error {
return ErrAddrBookNonRoutable{addr} return ErrAddrBookNonRoutable{addr}
} }
if !addr.Valid() {
return ErrAddrBookInvalidAddr{addr}
}
ka := a.addrLookup[addr.ID] ka := a.addrLookup[addr.ID]
if ka != nil { if ka != nil {
// If its already old and the addr is the same, ignore it. // If its already old and the addr is the same, ignore it.

View File

@ -57,16 +57,9 @@ func (err ErrAddrBookNilAddr) Error() string {
type ErrAddrBookInvalidAddr struct { type ErrAddrBookInvalidAddr struct {
Addr *p2p.NetAddress Addr *p2p.NetAddress
AddrErr error
} }
func (err ErrAddrBookInvalidAddr) Error() string { func (err ErrAddrBookInvalidAddr) Error() string {
return fmt.Sprintf("Cannot add invalid address %v", err.Addr) return fmt.Sprintf("Cannot add invalid address %v: %v", err.Addr, err.AddrErr)
}
type ErrAddrBookInvalidAddrNoID struct {
Addr *p2p.NetAddress
}
func (err ErrAddrBookInvalidAddrNoID) Error() string {
return fmt.Sprintf("Cannot add address with no ID %v", err.Addr)
} }

View File

@ -350,22 +350,8 @@ func (r *PEXReactor) ReceiveAddrs(addrs []*p2p.NetAddress, src Peer) error {
} }
for _, netAddr := range addrs { for _, netAddr := range addrs {
// Validate netAddr. Disconnect from a peer if it sends us invalid data.
if netAddr == nil {
return errors.New("nil address in pexAddrsMessage")
}
// TODO: extract validating logic from NewNetAddressString
// and put it in netAddr#Valid (#2722)
na, err := p2p.NewNetAddressString(netAddr.String())
if err != nil {
return fmt.Errorf("%s address in pexAddrsMessage is invalid: %v",
netAddr.String(),
err,
)
}
// NOTE: we check netAddr validity and routability in book#AddAddress. // NOTE: we check netAddr validity and routability in book#AddAddress.
err = r.book.AddAddress(na, srcAddr) err = r.book.AddAddress(netAddr, srcAddr)
if err != nil { if err != nil {
r.logErrAddrBook(err) r.logErrAddrBook(err)
// XXX: should we be strict about incoming data and disconnect from a // XXX: should we be strict about incoming data and disconnect from a

scripts/gitian-build.sh (executable file, 201 lines)
View File

@ -0,0 +1,201 @@
#!/bin/bash
# symbol prefixes:
# g_ -> global
# l_ -> local variable
# f_ -> function
set -euo pipefail
GITIAN_CACHE_DIRNAME='.gitian-builder-cache'
GO_DEBIAN_RELEASE='1.12.5-1'
GO_TARBALL="golang-debian-${GO_DEBIAN_RELEASE}.tar.gz"
GO_TARBALL_URL="https://salsa.debian.org/go-team/compiler/golang/-/archive/debian/${GO_DEBIAN_RELEASE}/${GO_TARBALL}"
# Defaults
DEFAULT_SIGN_COMMAND='gpg --detach-sign'
DEFAULT_TENDERMINT_SIGS=${TENDERMINT_SIGS:-'tendermint.sigs'}
DEFAULT_GITIAN_REPO='https://github.com/devrandom/gitian-builder'
DEFAULT_GBUILD_FLAGS=''
DEFAULT_SIGS_REPO='https://github.com/tendermint/tendermint.sigs'
# Overrides
SIGN_COMMAND=${SIGN_COMMAND:-${DEFAULT_SIGN_COMMAND}}
GITIAN_REPO=${GITIAN_REPO:-${DEFAULT_GITIAN_REPO}}
GBUILD_FLAGS=${GBUILD_FLAGS:-${DEFAULT_GBUILD_FLAGS}}
# Globals
g_workdir=''
g_gitian_cache=''
g_cached_gitian=''
g_cached_go_tarball=''
g_sign_identity=''
g_sigs_dir=''
g_flag_commit=''
f_help() {
cat >&2 <<EOF
Usage: $(basename $0) [-h] [-c] [-s IDENTITY] PLATFORM
Launch a gitian build from the current source directory for the given PLATFORM.
The following platforms are supported:
darwin
linux
windows
all
Options:
-h display this help and exit
-c clone the signatures repository and commit signatures;
ignored if sign identity is not supplied
-s IDENTITY sign build as IDENTITY
If a GPG identity is supplied via the -s flag, the build will be signed and verified.
The signature will be saved in '${DEFAULT_TENDERMINT_SIGS}/'. An alternative output directory
for signatures can be supplied via the environment variable \$TENDERMINT_SIGS.
The default signing command used to sign the build is '$DEFAULT_SIGN_COMMAND'.
An alternative signing command can be supplied via the environment
variable \$SIGN_COMMAND.
EOF
}
f_builddir() {
printf '%s' "${g_workdir}/gitian-build-$1"
}
f_prep_build() {
local l_platforms \
l_os \
l_builddir
l_platforms="$1"
if [ -n "${g_flag_commit}" -a ! -d "${g_sigs_dir}" ]; then
git clone ${DEFAULT_SIGS_REPO} "${g_sigs_dir}"
fi
for l_os in ${l_platforms}; do
l_builddir="$(f_builddir ${l_os})"
f_echo_stderr "Preparing build directory $(basename ${l_builddir}), restoring files from cache"
cp -ar "${g_cached_gitian}" "${l_builddir}" >&2
mkdir "${l_builddir}/inputs/"
cp -v "${g_cached_go_tarball}" "${l_builddir}/inputs/"
done
}
f_build() {
local l_descriptor
l_descriptor=$1
bin/gbuild --commit tendermint="$g_commit" ${GBUILD_FLAGS} "$l_descriptor"
libexec/stop-target || f_echo_stderr "warning: couldn't stop target"
}
f_sign_verify() {
local l_descriptor
l_descriptor=$1
bin/gsign -p "${SIGN_COMMAND}" -s "${g_sign_identity}" --destination="${g_sigs_dir}" --release=${g_release} ${l_descriptor}
bin/gverify --destination="${g_sigs_dir}" --release="${g_release}" ${l_descriptor}
}
f_commit_sig() {
local l_release_name
l_release_name=$1
pushd "${g_sigs_dir}"
git add . || echo "git add failed" >&2
git commit -m "Add ${l_release_name} reproducible build" || echo "git commit failed" >&2
popd
}
f_prep_docker_image() {
pushd $1
bin/make-base-vm --docker --suite bionic --arch amd64
popd
}
f_ensure_cache() {
g_gitian_cache="${g_workdir}/${GITIAN_CACHE_DIRNAME}"
[ -d "${g_gitian_cache}" ] || mkdir "${g_gitian_cache}"
g_cached_go_tarball="${g_gitian_cache}/${GO_TARBALL}"
if [ ! -f "${g_cached_go_tarball}" ]; then
f_echo_stderr "${g_cached_go_tarball}: cache miss, caching..."
curl -L "${GO_TARBALL_URL}" --output "${g_cached_go_tarball}"
fi
g_cached_gitian="${g_gitian_cache}/gitian-builder"
if [ ! -d "${g_cached_gitian}" ]; then
f_echo_stderr "${g_cached_gitian}: cache miss, caching..."
git clone ${GITIAN_REPO} "${g_cached_gitian}"
fi
}
f_demangle_platforms() {
case "${1}" in
all)
printf '%s' 'darwin linux windows' ;;
linux|darwin|windows)
printf '%s' "${1}" ;;
*)
echo "invalid platform -- ${1}"
exit 1
esac
}
f_echo_stderr() {
echo $@ >&2
}
while getopts ":cs:h" opt; do
case "${opt}" in
h) f_help ; exit 0 ;;
c) g_flag_commit=y ;;
s) g_sign_identity="${OPTARG}" ;;
esac
done
shift "$((OPTIND-1))"
g_platforms=$(f_demangle_platforms "${1}")
g_workdir="$(pwd)"
g_commit="$(git rev-parse HEAD)"
g_sigs_dir=${TENDERMINT_SIGS:-"${g_workdir}/${DEFAULT_TENDERMINT_SIGS}"}
f_ensure_cache
f_prep_docker_image "${g_cached_gitian}"
f_prep_build "${g_platforms}"
export USE_DOCKER=1
for g_os in ${g_platforms}; do
g_release="$(git describe --tags --abbrev=9 | sed 's/^v//')-${g_os}"
g_descriptor="${g_workdir}/scripts/gitian-descriptors/gitian-${g_os}.yml"
[ -f ${g_descriptor} ]
g_builddir="$(f_builddir ${g_os})"
pushd "${g_builddir}"
f_build "${g_descriptor}"
if [ -n "${g_sign_identity}" ]; then
f_sign_verify "${g_descriptor}"
fi
popd
if [ -n "${g_sign_identity}" -a -n "${g_flag_commit}" ]; then
[ -d "${g_sigs_dir}/.git/" ] && f_commit_sig ${g_release} || f_echo_stderr "couldn't commit, ${g_sigs_dir} is not a git clone"
fi
done
exit 0
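For reference, a minimal sketch of how the script above might be invoked from the repository root; the GPG identity below is a placeholder, not a value taken from this commit:

```sh
# Unsigned reproducible builds for all supported platforms
./scripts/gitian-build.sh all

# Signed Linux-only build, committing the signatures to a clone of the
# sigs repository ("IDENTITY" is a placeholder GPG uid or fingerprint)
TENDERMINT_SIGS=./tendermint.sigs ./scripts/gitian-build.sh -c -s "IDENTITY" linux
```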

View File

@ -0,0 +1,111 @@
---
name: "tendermint-darwin"
enable_cache: true
distro: "ubuntu"
suites:
- "bionic"
architectures:
- "amd64"
packages:
- "bsdmainutils"
- "build-essential"
- "ca-certificates"
- "curl"
- "debhelper"
- "dpkg-dev"
- "devscripts"
- "fakeroot"
- "git"
- "golang-any"
- "xxd"
- "quilt"
remotes:
- "url": "https://github.com/tendermint/tendermint.git"
"dir": "tendermint"
files:
- "golang-debian-1.12.5-1.tar.gz"
script: |
set -e -o pipefail
GO_SRC_RELEASE=golang-debian-1.12.5-1
GO_SRC_TARBALL="${GO_SRC_RELEASE}.tar.gz"
# Compile go and configure the environment
export TAR_OPTIONS="--mtime="$REFERENCE_DATE\\\ $REFERENCE_TIME""
export BUILD_DIR=`pwd`
tar xf "${GO_SRC_TARBALL}"
rm -f "${GO_SRC_TARBALL}"
[ -d "${GO_SRC_RELEASE}/" ]
mv "${GO_SRC_RELEASE}/" go/
pushd go/
QUILT_PATCHES=debian/patches quilt push -a
fakeroot debian/rules build RUN_TESTS=false GOCACHE=/tmp/go-cache
popd
export GOOS=darwin
export GOROOT=${BUILD_DIR}/go
export GOPATH=${BUILD_DIR}/gopath
mkdir -p ${GOPATH}/bin
export PATH_orig=${PATH}
export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
export ARCHS='386 amd64'
export GO111MODULE=on
# Make release tarball
pushd tendermint
VERSION=$(git describe --tags | sed 's/^v//')
COMMIT=$(git rev-parse --short=8 HEAD)
DISTNAME=tendermint-${VERSION}
git archive --format tar.gz --prefix ${DISTNAME}/ -o ${DISTNAME}.tar.gz HEAD
SOURCEDIST=`pwd`/`echo tendermint-*.tar.gz`
popd
# Correct tar file order
mkdir -p temp
pushd temp
tar xf $SOURCEDIST
rm $SOURCEDIST
find tendermint-* | sort | tar --no-recursion --mode='u+rw,go+r-w,a+X' --owner=0 --group=0 -c -T - | gzip -9n > $SOURCEDIST
popd
# Prepare GOPATH and install deps
distsrc=${GOPATH}/src/github.com/tendermint/tendermint
mkdir -p ${distsrc}
pushd ${distsrc}
tar --strip-components=1 -xf $SOURCEDIST
go mod download
popd
# Configure LDFLAGS for reproducible builds
LDFLAGS="-extldflags=-static -buildid=${VERSION} -s -w \
-X github.com/tendermint/tendermint/version.GitCommit=${COMMIT}"
# Extract release tarball and build
for arch in ${ARCHS}; do
INSTALLPATH=`pwd`/installed/${DISTNAME}-${arch}
mkdir -p ${INSTALLPATH}
# Build tendermint binary
pushd ${distsrc}
GOARCH=${arch} GOROOT_FINAL=${GOROOT} go build -a \
-gcflags=all=-trimpath=${GOPATH} \
-asmflags=all=-trimpath=${GOPATH} \
-mod=readonly -tags "tendermint" \
-ldflags="${LDFLAGS}" \
-o ${INSTALLPATH}/tendermint ./cmd/tendermint/
popd # ${distsrc}
pushd ${INSTALLPATH}
find -type f | sort | tar \
--no-recursion --mode='u+rw,go+r-w,a+X' \
--numeric-owner --sort=name \
--owner=0 --group=0 -c -T - | gzip -9n > ${OUTDIR}/${DISTNAME}-darwin-${arch}.tar.gz
popd # installed
done
rm -rf ${distsrc}
mkdir -p $OUTDIR/src
mv $SOURCEDIST $OUTDIR/src

View File

@ -0,0 +1,110 @@
---
name: "tendermint-linux"
enable_cache: true
distro: "ubuntu"
suites:
- "bionic"
architectures:
- "amd64"
packages:
- "bsdmainutils"
- "build-essential"
- "ca-certificates"
- "curl"
- "debhelper"
- "dpkg-dev"
- "devscripts"
- "fakeroot"
- "git"
- "golang-any"
- "xxd"
- "quilt"
remotes:
- "url": "https://github.com/tendermint/tendermint.git"
"dir": "tendermint"
files:
- "golang-debian-1.12.5-1.tar.gz"
script: |
set -e -o pipefail
GO_SRC_RELEASE=golang-debian-1.12.5-1
GO_SRC_TARBALL="${GO_SRC_RELEASE}.tar.gz"
# Compile go and configure the environment
export TAR_OPTIONS="--mtime="$REFERENCE_DATE\\\ $REFERENCE_TIME""
export BUILD_DIR=`pwd`
tar xf "${GO_SRC_TARBALL}"
rm -f "${GO_SRC_TARBALL}"
[ -d "${GO_SRC_RELEASE}/" ]
mv "${GO_SRC_RELEASE}/" go/
pushd go/
QUILT_PATCHES=debian/patches quilt push -a
fakeroot debian/rules build RUN_TESTS=false GOCACHE=/tmp/go-cache
popd
export GOROOT=${BUILD_DIR}/go
export GOPATH=${BUILD_DIR}/gopath
mkdir -p ${GOPATH}/bin
export PATH_orig=${PATH}
export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
export ARCHS='386 amd64 arm arm64'
export GO111MODULE=on
# Make release tarball
pushd tendermint
VERSION=$(git describe --tags | sed 's/^v//')
COMMIT=$(git rev-parse --short=8 HEAD)
DISTNAME=tendermint-${VERSION}
git archive --format tar.gz --prefix ${DISTNAME}/ -o ${DISTNAME}.tar.gz HEAD
SOURCEDIST=`pwd`/`echo tendermint-*.tar.gz`
popd
# Correct tar file order
mkdir -p temp
pushd temp
tar xf $SOURCEDIST
rm $SOURCEDIST
find tendermint-* | sort | tar --no-recursion --mode='u+rw,go+r-w,a+X' --owner=0 --group=0 -c -T - | gzip -9n > $SOURCEDIST
popd
# Prepare GOPATH and install deps
distsrc=${GOPATH}/src/github.com/tendermint/tendermint
mkdir -p ${distsrc}
pushd ${distsrc}
tar --strip-components=1 -xf $SOURCEDIST
go mod download
popd
# Configure LDFLAGS for reproducible builds
LDFLAGS="-extldflags=-static -buildid=${VERSION} -s -w \
-X github.com/tendermint/tendermint/version.GitCommit=${COMMIT}"
# Extract release tarball and build
for arch in ${ARCHS}; do
INSTALLPATH=`pwd`/installed/${DISTNAME}-${arch}
mkdir -p ${INSTALLPATH}
# Build tendermint binary
pushd ${distsrc}
GOARCH=${arch} GOROOT_FINAL=${GOROOT} go build -a \
-gcflags=all=-trimpath=${GOPATH} \
-asmflags=all=-trimpath=${GOPATH} \
-mod=readonly -tags "tendermint" \
-ldflags="${LDFLAGS}" \
-o ${INSTALLPATH}/tendermint ./cmd/tendermint/
popd # ${distsrc}
pushd ${INSTALLPATH}
find -type f | sort | tar \
--no-recursion --mode='u+rw,go+r-w,a+X' \
--numeric-owner --sort=name \
--owner=0 --group=0 -c -T - | gzip -9n > ${OUTDIR}/${DISTNAME}-linux-${arch}.tar.gz
popd # installed
done
rm -rf ${distsrc}
mkdir -p $OUTDIR/src
mv $SOURCEDIST $OUTDIR/src

View File

@ -0,0 +1,111 @@
---
name: "tendermint-windows"
enable_cache: true
distro: "ubuntu"
suites:
- "bionic"
architectures:
- "amd64"
packages:
- "bsdmainutils"
- "build-essential"
- "ca-certificates"
- "curl"
- "debhelper"
- "dpkg-dev"
- "devscripts"
- "fakeroot"
- "git"
- "golang-any"
- "xxd"
- "quilt"
remotes:
- "url": "https://github.com/tendermint/tendermint.git"
"dir": "tendermint"
files:
- "golang-debian-1.12.5-1.tar.gz"
script: |
set -e -o pipefail
GO_SRC_RELEASE=golang-debian-1.12.5-1
GO_SRC_TARBALL="${GO_SRC_RELEASE}.tar.gz"
# Compile go and configure the environment
export TAR_OPTIONS="--mtime="$REFERENCE_DATE\\\ $REFERENCE_TIME""
export BUILD_DIR=`pwd`
tar xf "${GO_SRC_TARBALL}"
rm -f "${GO_SRC_TARBALL}"
[ -d "${GO_SRC_RELEASE}/" ]
mv "${GO_SRC_RELEASE}/" go/
pushd go/
QUILT_PATCHES=debian/patches quilt push -a
fakeroot debian/rules build RUN_TESTS=false GOCACHE=/tmp/go-cache
popd
export GOOS=windows
export GOROOT=${BUILD_DIR}/go
export GOPATH=${BUILD_DIR}/gopath
mkdir -p ${GOPATH}/bin
export PATH_orig=${PATH}
export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
export ARCHS='386 amd64'
export GO111MODULE=on
# Make release tarball
pushd tendermint
VERSION=$(git describe --tags | sed 's/^v//')
COMMIT=$(git rev-parse --short=8 HEAD)
DISTNAME=tendermint-${VERSION}
git archive --format tar.gz --prefix ${DISTNAME}/ -o ${DISTNAME}.tar.gz HEAD
SOURCEDIST=`pwd`/`echo tendermint-*.tar.gz`
popd
# Correct tar file order
mkdir -p temp
pushd temp
tar xf $SOURCEDIST
rm $SOURCEDIST
find tendermint-* | sort | tar --no-recursion --mode='u+rw,go+r-w,a+X' --owner=0 --group=0 -c -T - | gzip -9n > $SOURCEDIST
popd
# Prepare GOPATH and install deps
distsrc=${GOPATH}/src/github.com/tendermint/tendermint
mkdir -p ${distsrc}
pushd ${distsrc}
tar --strip-components=1 -xf $SOURCEDIST
go mod download
popd
# Configure LDFLAGS for reproducible builds
LDFLAGS="-extldflags=-static -buildid=${VERSION} -s -w \
-X github.com/tendermint/tendermint/version.GitCommit=${COMMIT}"
# Extract release tarball and build
for arch in ${ARCHS}; do
INSTALLPATH=`pwd`/installed/${DISTNAME}-${arch}
mkdir -p ${INSTALLPATH}
# Build tendermint binary
pushd ${distsrc}
GOARCH=${arch} GOROOT_FINAL=${GOROOT} go build -a \
-gcflags=all=-trimpath=${GOPATH} \
-asmflags=all=-trimpath=${GOPATH} \
-mod=readonly -tags "tendermint" \
-ldflags="${LDFLAGS}" \
-o ${INSTALLPATH}/tendermint.exe ./cmd/tendermint/
popd # ${distsrc}
pushd ${INSTALLPATH}
find -type f | sort | tar \
--no-recursion --mode='u+rw,go+r-w,a+X' \
--numeric-owner --sort=name \
--owner=0 --group=0 -c -T - | gzip -9n > ${OUTDIR}/${DISTNAME}-windows-${arch}.tar.gz
popd # installed
done
rm -rf ${distsrc}
mkdir -p $OUTDIR/src
mv $SOURCEDIST $OUTDIR/src
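The darwin, linux, and windows descriptors are nearly identical, differing mainly in the target `GOOS`, the architecture list, and the output names. To check that a build really is reproducible, one can run the same descriptor twice (ideally on different machines) and compare the resulting archives; a minimal sketch, assuming gitian-builder's usual `build/out/` output directory inside the `gitian-build-<os>` directories created by the build script:

```sh
# Compare artifact hashes from two independent runs of the linux descriptor
# (run1/ and run2/ are placeholder checkouts; paths assume gitian-builder's
# default build/out layout).
sha256sum run1/gitian-build-linux/build/out/*.tar.gz | awk '{print $1}' | sort > run1.sha256
sha256sum run2/gitian-build-linux/build/out/*.tar.gz | awk '{print $1}' | sort > run2.sha256
diff run1.sha256 run2.sha256 && echo "builds are bit-for-bit identical"
```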

View File

@ -0,0 +1,29 @@
## PGP keys of Gitian builders and Tendermint Developers
The file `keys.txt` contains fingerprints of the public keys of Gitian builders
and active developers.
The associated keys are mainly used to sign git commits or the build results
of Gitian builds.
The most recent version of each PGP key can be found on most PGP key servers.
Fetch the latest version from a key server to see whether any key has been revoked in
the meantime.
To fetch the latest version of all PGP keys in your gpg homedir, run:
```sh
gpg --refresh-keys
```
To fetch keys of Gitian builders and active core developers, feed the list of
fingerprints of the primary keys into gpg:
```sh
while read fingerprint keyholder_name; \
do gpg --keyserver hkp://subset.pool.sks-keyservers.net \
--recv-keys ${fingerprint}; done < ./keys.txt
```
Add your key to the list if you are a Tendermint core developer or you have
provided Gitian signatures for two major or minor releases of Tendermint.
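For example, a developer adding an entry could print their primary key fingerprint in the same `FINGERPRINT Name` format used by `keys.txt`; in this sketch, "Your Name" is a placeholder for your own GPG uid:

```sh
# Print the primary key fingerprint followed by the key holder's name,
# matching the keys.txt format ("Your Name" is a placeholder uid).
gpg --list-keys --fingerprint --with-colons "Your Name" | \
  awk -F: '/^fpr:/ {print $10, "Your Name"; exit}'
```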

View File

@ -0,0 +1 @@
04160004A8276E40BB9890FBE8A48AE5311D765A Alessio Treglia