Toralf Wittner f86bc01e20
Add back the LRU cache to Throttled. (#1731)
* Restore `RequestResponse::throttled`.

In contrast to the existing "throttled" approach, this PR adds
back-pressure to the protocol without requiring all nodes to have
pre-existing knowledge of each other's limits. It prepends small,
CBOR-encoded headers to the actual payload data, and extra credit
messages communicate back to the sender how many more requests it
is allowed to send.
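
As an illustration of the idea only, here is a minimal sketch of such
a header, assuming the `serde` and `serde_cbor` crates; the type and
field names are hypothetical and not this PR's actual codec:

```rust
use serde::{Deserialize, Serialize};

/// Hypothetical header prepended (CBOR-encoded) to the payload,
/// plus a standalone credit message.
#[derive(Debug, Serialize, Deserialize)]
enum Header {
    /// An ordinary request or response payload follows.
    Payload,
    /// Extra message: grant the sender this many more requests.
    Credit(u16),
}

fn main() -> Result<(), serde_cbor::Error> {
    // Round-trip a credit grant through CBOR.
    let bytes = serde_cbor::to_vec(&Header::Credit(5))?;
    let header: Header = serde_cbor::from_slice(&bytes)?;
    println!("{:?}", header);
    Ok(())
}
```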

* Remove some noise.

* Resend credit grant after connection closed.

Should an error in some lower layer cause a connection to be closed,
our previously sent credit grant may not have reached the remote peer.
Therefore, pessimistically, a credit grant is resent whenever a
connection is closed. The remote ignores duplicate grants.
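
A rough sketch of this pessimistic resend, with hypothetical `Grant`
and `PeerState` types standing in for the PR's internal bookkeeping:

```rust
/// Hypothetical stand-ins for the internal types.
#[derive(Clone, Debug, PartialEq)]
struct Grant { id: u64, credit: u16 }

#[derive(Default)]
struct PeerState {
    /// The last credit grant we sent to this peer.
    last_grant: Option<Grant>,
    /// A grant scheduled to be (re)sent at the next opportunity.
    pending: Option<Grant>,
}

impl PeerState {
    /// A connection closed: we cannot know whether `last_grant`
    /// reached the remote, so schedule it for resending. The remote
    /// recognises the grant id and ignores duplicates.
    fn connection_closed(&mut self) {
        if let Some(g) = self.last_grant.clone() {
            self.pending = Some(g);
        }
    }
}

fn main() {
    let mut p = PeerState::default();
    p.last_grant = Some(Grant { id: 1, credit: 10 });
    p.connection_closed();
    assert_eq!(p.pending, Some(Grant { id: 1, credit: 10 }));
}
```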

* Remove inbound/outbound tracking per peer.

* Send ACK as response to duplicate credit grants.

* Simplify.

* Fix grammar.

* Incorporate review feedback.

- Remove `ResponseSent`, which was a leftover from previous attempts,
  and issue a credit grant immediately in `send_response`.
- Only resend credit grants after a connection is closed if we are
  still connected to this peer.

* Move codec/header.rs to throttled/codec.rs.

* More review suggestions.

* Generalise `ProtocolWrapper` and use shorter prefix.

* Update protocols/request-response/src/lib.rs

Co-authored-by: Roman Borschel <romanb@users.noreply.github.com>

* Update protocols/request-response/src/throttled.rs

Co-authored-by: Roman Borschel <romanb@users.noreply.github.com>

* Update protocols/request-response/src/throttled.rs

Co-authored-by: Roman Borschel <romanb@users.noreply.github.com>

* Minor comment changes.

* Limit max. header size to 8KiB

* Always construct initial limit with 1.

Since honest senders always assume a send budget of 1 and wait for
credit afterwards, setting the default limit to a higher value can
only take effect after informing the peer about it, which means
leaving `max_recv` at 1 and setting `next_max` to the desired value.
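
A sketch of what this means for the limit bookkeeping; `max_recv` and
`next_max` are the names from this commit, everything else is
illustrative:

```rust
/// Receive limit we advertise to a single peer.
struct Limit {
    /// What the peer may currently send us. Honest senders assume a
    /// budget of 1 until told otherwise, so this always starts at 1.
    max_recv: u16,
    /// The limit we actually want; it only takes effect once the
    /// peer has been informed via a credit grant.
    next_max: u16,
}

impl Limit {
    /// Always construct the initial limit with 1 and record the
    /// desired value in `next_max`.
    fn new(desired: u16) -> Self {
        Limit { max_recv: 1, next_max: desired }
    }

    /// Called after the peer has been informed of the new limit.
    fn apply_next(&mut self) {
        self.max_recv = self.next_max;
    }
}

fn main() {
    let mut limit = Limit::new(100);
    assert_eq!(limit.max_recv, 1);
    limit.apply_next(); // after sending the credit grant
    assert_eq!(limit.max_recv, 100);
}
```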

* Use LRU cache to keep previous peer infos.

Peers may send all their requests, reconnect, and send all their
requests again, starting from a fresh budget. The LRU cache keeps
the peer information around and reuses it when the peer reconnects,
continuing with the previous remaining limit.
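
A minimal sketch of this reconnect behaviour, assuming a recent
version of the `lru` crate (whose `LruCache::new` takes a
`NonZeroUsize`); `PeerInfo` and the field names are hypothetical:

```rust
use std::collections::HashMap;
use std::num::NonZeroUsize;
use lru::LruCache;

type PeerId = u64; // stand-in for libp2p's `PeerId` in this sketch

/// Per-peer state that should survive a disconnect.
struct PeerInfo {
    /// Requests the peer may still send before needing fresh credit.
    remaining: u16,
}

struct Throttled {
    /// Currently connected peers.
    peers: HashMap<PeerId, PeerInfo>,
    /// Recently disconnected peers, bounded in size by the cache.
    offline: LruCache<PeerId, PeerInfo>,
}

impl Throttled {
    fn new(cache_size: NonZeroUsize) -> Self {
        Throttled {
            peers: HashMap::new(),
            offline: LruCache::new(cache_size),
        }
    }

    /// On disconnect, park the peer's info in the LRU cache instead
    /// of dropping it.
    fn disconnected(&mut self, peer: PeerId) {
        if let Some(info) = self.peers.remove(&peer) {
            self.offline.put(peer, info);
        }
    }

    /// On reconnect, continue with the previous remaining limit
    /// instead of starting from a fresh budget.
    fn connected(&mut self, peer: PeerId) {
        let info = self
            .offline
            .pop(&peer)
            .unwrap_or(PeerInfo { remaining: 1 });
        self.peers.insert(peer, info);
    }
}

fn main() {
    let mut t = Throttled::new(NonZeroUsize::new(16).unwrap());
    t.peers.insert(7, PeerInfo { remaining: 3 });
    t.disconnected(7);
    t.connected(7);
    assert_eq!(t.peers[&7].remaining, 3);
}
```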

Co-authored-by: Roman Borschel <romanb@users.noreply.github.com>
2020-09-09 11:04:20 +02:00