fix RepeatTimer memory leak (Refs #137)
* test case
* drain channels on reset
Leaking memory:
```
leaktest.go:144: leaktest: leaked goroutine: goroutine 116 [chan send]:
github.com/tendermint/tmlibs/common.(*RepeatTimer).fireRoutine(0xc42006a410, 0xc4203403c0, 0xc42031b2c0)
/go/src/github.com/tendermint/tmlibs/common/repeat_timer.go:160 +0x6e
created by github.com/tendermint/tmlibs/common.(*RepeatTimer).reset
/go/src/github.com/tendermint/tmlibs/common/repeat_timer.go:196 +0xe9
```
An alternative solution would be to drain the channels on the client side.
* add one more select instead of draining
thanks to Jae
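For context, a minimal sketch of the "one more select" pattern, assuming a simplified RepeatTimer; names mirror but do not reproduce the actual tmlibs code:
```
package common

import "time"

// RepeatTimer is a simplified stand-in for the real type; illustrative only.
type RepeatTimer struct {
	Ch chan time.Time
}

// fireRoutine sketches the fix: the inner select lets the goroutine exit via
// quit instead of blocking forever on an unread Ch send (the "[chan send]"
// goroutine that leaktest reported).
func (t *RepeatTimer) fireRoutine(ticker *time.Ticker, quit chan struct{}) {
	for {
		select {
		case tick := <-ticker.C:
			select {
			case t.Ch <- tick: // deliver the tick if someone is receiving
			case <-quit: // ...but never block on a stopped/reset timer
				return
			}
		case <-quit:
			return
		}
	}
}
```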
don't bother with this "only ping when we haven't heard from them" approach. Let's
just always ping every peer from the sendRoutine every 10s no matter
what. If they don't pong within pongTimeout, disconnect :)
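Roughly, the behavior described above looks like this sketch (all names and the pongTimeout value are placeholders, not the actual p2p connection code):
```
package p2p

import "time"

const (
	pingInterval = 10 * time.Second
	pongTimeout  = 5 * time.Second // placeholder; kept shorter than pingInterval in this sketch
)

// sendRoutine pings unconditionally on every tick; if no pong arrives before
// pongTimeout, the peer is disconnected.
func sendRoutine(sendPing, disconnect func(), pongCh, quit <-chan struct{}) {
	pingTicker := time.NewTicker(pingInterval)
	defer pingTicker.Stop()

	var pongTimer *time.Timer
	pongTimedOut := make(chan struct{}, 1)

	for {
		select {
		case <-pingTicker.C:
			sendPing()
			// Arm the pong deadline; the non-blocking send keeps the timer
			// goroutine from hanging if we are already shutting down.
			pongTimer = time.AfterFunc(pongTimeout, func() {
				select {
				case pongTimedOut <- struct{}{}:
				default:
				}
			})
		case <-pongCh:
			if pongTimer != nil {
				pongTimer.Stop() // pong arrived in time
			}
		case <-pongTimedOut:
			disconnect()
			return
		case <-quit:
			return
		}
	}
}
```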
Fixes https://github.com/tendermint/tmlibs/issues/145
Fixes https://github.com/tendermint/tmlibs/issues/146
The code in here has been fragile when it comes to nil, but these edge
cases were never tested. They've shown up in the wild and were only
noticed because the reporter actually read the logs; otherwise we'd
have never known.
This change covers some of those cases and adds some tests.
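The commit text doesn't name the exact functions, so the following is only an illustrative sketch of the kind of nil guard and test this change adds, using a hypothetical helper rather than a real tmlibs API:
```
package common

import "testing"

// describe is a hypothetical helper that must tolerate nil input instead of
// panicking.
func describe(v *string) string {
	if v == nil {
		return "<nil>"
	}
	return *v
}

func TestDescribeHandlesNil(t *testing.T) {
	s := "hello"
	cases := []struct {
		in   *string
		want string
	}{
		{nil, "<nil>"}, // the previously untested edge case
		{&s, "hello"},
	}
	for i, tc := range cases {
		if got := describe(tc.in); got != tc.want {
			t.Errorf("case %d: got %q, want %q", i, got, tc.want)
		}
	}
}
```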
get rid of gox
build target builds inside docker, dev-build - locally
Revert "build target builds inside docker, dev-build - locally"
This reverts commit 8ba89d5e8c5668e3839ff49952a9166d1158f6e8.
add build tags to make build/build_race/install
use tendermint's fork of glide instead of tar.gz
remove unused TMHOME var + set length for git hash
get rid of GOTOOLS_CHECK
fixes after review
zip
needed for distribution
Fixes https://github.com/tendermint/tendermint/issues/1189
On every TxEventBuffer.Flush() invocation, we were running:
b.events = make([]EventDataTx, 0, b.capacity)
whose intention is to innocently clear the events slice while
maintaining the underlying capacity.
Unfortunately, this is memory and garbage-collection intensive, with
cost linear in the number of events added. If an attacker somehow had
access to our code, invoking .Flush() in tight loops would be a sure
way to cause huge GC pressure; if they maliciously added about 1e9
events, every Flush() would take at least 3.2 seconds, which is enough
to effectively take control of our application.
The new code uses the capacity-preserving slice-clearing idiom,
b.events = b.events[:0]
which takes constant time regardless of the number of elements and
performs zero allocations, so we kill many birds with one stone.
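To make the before/after concrete, a minimal sketch with a simplified TxEventBuffer; the real Flush presumably also forwards the buffered events downstream before clearing, which is elided here:
```
package types

// EventDataTx and TxEventBuffer are simplified stand-ins for the real types.
type EventDataTx struct{ /* fields elided */ }

type TxEventBuffer struct {
	events   []EventDataTx
	capacity int
}

// flushOld is the previous approach: a brand-new backing array is allocated
// (and later garbage-collected) on every call.
func (b *TxEventBuffer) flushOld() {
	b.events = make([]EventDataTx, 0, b.capacity)
}

// flushNew is the capacity-preserving idiom: reslice to length zero and keep
// the existing backing array, so clearing is O(1) with zero allocations.
func (b *TxEventBuffer) flushNew() {
	b.events = b.events[:0]
}
```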
For benchmarking results, please see
https://gist.github.com/odeke-em/532c14ab67d71c9c0b95518a7a526058
which shows how easily things can get out of hand.
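To reproduce the comparison locally, a benchmark along these lines should show the gap (our own sketch, not the code from the gist; `event` is a placeholder element type):
```
package types

import "testing"

// event is a placeholder element type with some payload.
type event struct{ _ [64]byte }

// BenchmarkClearWithMake measures the old idiom: allocate a fresh backing
// array on every clear.
func BenchmarkClearWithMake(b *testing.B) {
	events := make([]event, 0, 1000)
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		events = make([]event, 0, cap(events))
	}
	_ = events
}

// BenchmarkClearByReslice measures the new idiom: keep the backing array and
// reslice to length zero.
func BenchmarkClearByReslice(b *testing.B) {
	events := make([]event, 0, 1000)
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		events = events[:0]
	}
	_ = events
}
```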