10 Commits

Author SHA1 Message Date
Raúl Kripalani
31765355df migrate to consolidated types. (#344) 2019-05-26 23:33:15 +01:00
Raúl Kripalani
b649bcbec6 defer dialqueue action until initial peers have been added. 2019-03-13 21:25:36 +00:00
Raúl Kripalani
ebcfcd46a6 make dial queue parameters configurable. 2019-02-01 13:43:27 +11:00
Raúl Kripalani
46e8562fdc restore import aliases. 2019-01-31 10:10:25 +00:00
Matt Joiner
7246a3b0f4 Fix races with DialQueue variables 2019-01-31 13:08:45 +11:00
Raúl Kripalani
74d22f3f5e park waiters in slice; revise closure logic. 2019-01-30 23:25:10 +00:00
Raúl Kripalani
bf4b91ce4b harden tests. 2019-01-29 20:49:14 +00:00
Raúl Kripalani
abacfe5fc9 refactor interface of Consume(). 2019-01-29 16:47:42 +00:00
Raúl Kripalani
95a0975dd7 enhance logging and import prefixes. 2019-01-29 00:56:12 +00:00
Raúl Kripalani
5e74c4a6f2 introduce adaptive queue for DHT dials.
This patch introduces an adaptive dial queue that spawns a dynamically sized
set of goroutines to preemptively stage dials for later handoff to the DHT
protocol for RPC. It identifies backpressure on both ends (dial consumers and
dial producers), and takes compensating action by adjusting the worker pool.

We start with `DialQueueMinParallelism` workers (6) and scale up and down
based on the demand for, and supply of, dialled peers.

The following events trigger scaling (a sketch follows the list):
- we scale up when we can't immediately return a successful dial to a new
  consumer.
- we scale down when we've been idle for a while waiting for new dial
  attempts.
- we scale down when we complete a dial and realise nobody was waiting for it.
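As a rough illustration of how a worker pool might scale between those two bounds, here is a minimal, self-contained Go sketch. It is not the dial queue's actual code: the `adaptivePool` type, the `idleTimeout` value, the plain-string peer type, and the `dial` callback are all assumptions made for the example.

```go
// A sketch of an adaptive worker pool: it grows (up to a hard cap) when a
// consumer could not be served immediately, and workers retire after sitting
// idle, never dropping the pool below the minimum.
package dialsketch

import (
	"context"
	"sync"
	"time"
)

const (
	minParallelism = 6               // mirrors DialQueueMinParallelism
	maxParallelism = 20              // mirrors DialQueueMaxParallelism
	idleTimeout    = 5 * time.Second // hypothetical idle period before scaling down
)

// adaptivePool is a simplified stand-in for the dial queue's worker set.
type adaptivePool struct {
	mu      sync.Mutex
	workers int
	in      chan string                            // peers queued for dialling
	dial    func(ctx context.Context, peer string) // hypothetical dial function
}

// grow adds a worker, respecting the hard concurrency cap.
func (p *adaptivePool) grow(ctx context.Context) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.workers >= maxParallelism {
		return // never exceed the cap: more dials could worsen throttling
	}
	p.workers++
	go p.worker(ctx)
}

// worker dials queued peers and retires itself after an idle period,
// provided the pool stays at or above the minimum size.
func (p *adaptivePool) worker(ctx context.Context) {
	for {
		select {
		case peer := <-p.in:
			p.dial(ctx, peer)
		case <-time.After(idleTimeout):
			p.mu.Lock()
			if p.workers > minParallelism {
				p.workers--
				p.mu.Unlock()
				return // scale down: we have been idle for a while
			}
			p.mu.Unlock()
		case <-ctx.Done():
			return
		}
	}
}
```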

Dialler throttling (e.g. when the FD limit is exceeded) is a concern: spinning
up more workers to compensate would only add fuel to the fire. Since we
currently have no deterministic way to detect throttling, we hard-limit
concurrency to `DialQueueMaxParallelism` (20); a sketch of such a cap follows
this entry.
2019-01-29 00:56:12 +00:00
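To illustrate the hard cap described in that last commit, here is a hedged sketch of bounding in-flight dials with a channel-based semaphore. The `boundedDialler` type, its `Dial` method, and the semaphore approach are illustrative assumptions, not the package's actual API.

```go
// A sketch of capping concurrent dials: no matter how aggressively the pool
// scales up, at most maxDialParallelism dials are ever in flight.
package dialsketch

import "context"

const maxDialParallelism = 20 // mirrors DialQueueMaxParallelism

// boundedDialler wraps a dial function with a semaphore that enforces the cap.
type boundedDialler struct {
	sem  chan struct{}
	dial func(ctx context.Context, peer string) error // hypothetical dial function
}

func newBoundedDialler(dial func(context.Context, string) error) *boundedDialler {
	return &boundedDialler{
		sem:  make(chan struct{}, maxDialParallelism),
		dial: dial,
	}
}

// Dial blocks until a slot is free (or the context is cancelled), then dials.
func (b *boundedDialler) Dial(ctx context.Context, peer string) error {
	select {
	case b.sem <- struct{}{}: // acquire a slot; blocks if the cap is reached
	case <-ctx.Done():
		return ctx.Err()
	}
	defer func() { <-b.sem }() // release the slot when the dial finishes
	return b.dial(ctx, peer)
}
```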