* Use a single exchange instead of two one_shots.
* Add `Throttled` to libp2p-request-response.
Wraps the existing `RequestResponse` behaviour and applies strict limits
to the number of inbound and outbound requests per peer.
The wrapper is opt-in; if it is not used, the protocol behaviour of
`RequestResponse` does not change. This PR also does not introduce
an extra protocol, hence the limits applied need to be known a priori
by all nodes, which is not always possible or desirable. As mentioned
in #1687, I think that we should eventually augment the protocol with
metadata that allows a more dynamic exchange of requests and responses.
This PR also replaces the two oneshot channels with a single channel from the
scambio crate, which saves one allocation per request/response. If this is not
desirable because the crate has seen less testing, the first commit can be
reverted.
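As a rough illustration of the per-peer limiting the wrapper applies, here is a minimal, self-contained sketch; the types and names are hypothetical and do not reflect the crate's `Throttled` API.

```rust
use std::collections::HashMap;

// Hypothetical sketch, not the crate's `Throttled` API: requests beyond the
// a-priori known per-peer budget are rejected instead of being forwarded to
// the wrapped `RequestResponse` behaviour. The inbound side mirrors this.
type PeerId = u64;

struct Limits {
    max_outbound: usize,
    max_inbound: usize,
}

#[derive(Default)]
struct InFlight {
    outbound: usize,
    inbound: usize,
}

struct Throttle {
    limits: Limits,
    in_flight: HashMap<PeerId, InFlight>,
}

impl Throttle {
    /// Returns `Err` if sending another request to `peer` would exceed the
    /// configured outbound limit.
    fn start_outbound(&mut self, peer: PeerId) -> Result<(), ()> {
        let counters = self.in_flight.entry(peer).or_default();
        if counters.outbound >= self.limits.max_outbound {
            return Err(());
        }
        counters.outbound += 1;
        Ok(())
    }

    /// Called when a response from `peer` has been received.
    fn finish_outbound(&mut self, peer: PeerId) {
        if let Some(counters) = self.in_flight.get_mut(&peer) {
            counters.outbound = counters.outbound.saturating_sub(1);
        }
    }
}
```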
* Fix rustdoc error.
* Remove some leftovers from development.
* Add docs to `NetworkBehaviourAction::{map_in,map_out}`.
* Apply suggestions from code review
Co-authored-by: Roman Borschel <romanb@users.noreply.github.com>
* Add `ping_protocol_throttled` test.
* Add another test.
* Revert "Use a single exchange instead of two one_shots."
This reverts commit e34e1297d411298f6c69e238aa6c96e0b795d989.
# Conflicts:
# protocols/request-response/Cargo.toml
# protocols/request-response/src/handler/protocol.rs
* Update CHANGELOG.
* Update CHANGELOG.
Co-authored-by: Roman Borschel <romanb@users.noreply.github.com>
* Refactor the ping protocol.
Such that pings are sent over a single substream, as is
done in other libp2p implementations. Note that, since each
peer sends its pings over a single, dedicated substream,
every peer participating in the protocol effectively has
two open substreams.
* Cleanup
* Update ping changelog.
* Update protocols/ping/src/protocol.rs
Co-authored-by: Max Inden <mail@max-inden.de>
Co-authored-by: Max Inden <mail@max-inden.de>
With 826f513 a `StreamMuxer` can notify that the address of a remote
peer has changed. This is needed to support the QUIC transport protocol,
as remotes can change their IP addresses within the lifetime of a single
connection.
This commit implements the `NetworkBehaviour::inject_address_change`
handler to update the Kademlia routing table accordingly.
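Conceptually, the handler does something like the following sketch, with plain stand-ins for libp2p's `PeerId` and `Multiaddr` types rather than the actual Kademlia code.

```rust
use std::collections::{HashMap, HashSet};

// Conceptual sketch (hypothetical types, not the Kademlia implementation):
// when the muxer reports that an established connection's remote address
// changed, replace the old address with the new one in the peer's entry.
type PeerId = String;
type Multiaddr = String;

fn inject_address_change(
    table: &mut HashMap<PeerId, HashSet<Multiaddr>>,
    peer: &PeerId,
    old: &Multiaddr,
    new: &Multiaddr,
) {
    if let Some(addresses) = table.get_mut(peer) {
        addresses.remove(old);
        addresses.insert(new.clone());
    }
}
```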
* Emit events for active connection close and fix `disconnect()`.
The `Network` currently does not emit events for actively
closed connections, e.g. via `EstablishedConnection::close`
or `ConnectedPeer::disconnect()`. As a result, when actively
closing connections, `ConnectionEstablished` events are
emitted without a matching `ConnectionClosed` event ever
following. This is undesirable and has the consequence that
the `Swarm::ban_peer_id` feature in `libp2p-swarm` does not
result in appropriate calls to `NetworkBehaviour::inject_connection_closed`
and `NetworkBehaviour::inject_disconnected`. Furthermore,
the `disconnect()` functionality in `libp2p-core` is currently
broken as it leaves the `Pool` in an inconsistent state.
This commit does the following:
1. When connection background tasks are dropped
(i.e. removed from the `Manager`), they
always terminate immediately, without attempting
an orderly close of the connection.
2. An orderly close is sent to the background task
of a connection as a regular command. The
background task emits a `Closed` event
before terminating.
3. `Pool::disconnect()` removes all connection
tasks for the affected peer from the `Manager`,
i.e. without an orderly close, thereby also
fixing the discovered state inconsistency,
where the corresponding entries in the `Pool`
itself were not removed after removing them
from the `Manager`.
4. A new test is added to `libp2p-swarm` that
exercises the ban/unban functionality and
places assertions on the number and order
of calls to the `NetworkBehaviour`. In that
context some new testing utilities have
been added to `libp2p-swarm`.
This addresses https://github.com/libp2p/rust-libp2p/issues/1584.
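As an illustration of points (1) and (2) above, a background task that treats an orderly close as a regular command and reports the closure before terminating might look like the following sketch; the types are hypothetical and not libp2p-core's.

```rust
use futures::{channel::mpsc, StreamExt};

// Hypothetical sketch, not libp2p-core's actual connection task.
enum Command {
    Notify(Vec<u8>),
    Close,
}

enum Event {
    Closed,
}

async fn connection_task(
    mut commands: mpsc::Receiver<Command>,
    mut events: mpsc::Sender<Event>,
) {
    while let Some(cmd) = commands.next().await {
        match cmd {
            Command::Notify(msg) => {
                // Write `msg` to the connection.
                let _ = msg;
            }
            Command::Close => {
                // Orderly close of the underlying connection, then report it
                // so that a `ConnectionClosed` event can be emitted upstream.
                let _ = events.try_send(Event::Closed);
                return;
            }
        }
    }
    // The sender was dropped, i.e. the task was removed from the manager:
    // terminate immediately without attempting an orderly close.
}
```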
* Update swarm/src/lib.rs
Co-authored-by: Toralf Wittner <tw@dtex.org>
* Incorporate some review feedback.
* Adapt to changes in master.
* More verbose panic messages.
* Simplify
There is no need for a `StartClose` future.
* Fix doc links.
* Further small cleanup.
* Update CHANGELOGs and versions.
Co-authored-by: Toralf Wittner <tw@dtex.org>
This adds optional message signing and verification to the gossipsub protocol as
per the libp2p specifications.
In addition, this commit:
- Removes the LruCache received cache and simply uses the memcache in its
place.
- Sends subscriptions to all peers.
- Prevents invalid messages from being gossiped.
- Sends grafts when subscriptions are added to the mesh.
Co-authored-by: Max Inden <mail@max-inden.de>
Co-authored-by: Rüdiger Klaehn <rklaehn@protonmail.com>
Co-authored-by: Rüdiger Klaehn <rklaehn@gmail.com>
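As a rough illustration of the signing scheme, and assuming the identity `Keypair::sign` and `PublicKey::verify` helpers, signing and verifying a message payload might look like the sketch below; per the pubsub spec the signature covers the string `libp2p-pubsub:` followed by the encoded message, but this is not the crate's actual implementation.

```rust
use libp2p::identity::{Keypair, PublicKey};

// Illustrative sketch, not libp2p-gossipsub's code. Per the pubsub spec, the
// signature is computed over "libp2p-pubsub:" + <encoded message>.
const SIGNING_PREFIX: &[u8] = b"libp2p-pubsub:";

fn sign_message(keypair: &Keypair, encoded_msg: &[u8]) -> Result<Vec<u8>, Box<dyn std::error::Error>> {
    let mut data = SIGNING_PREFIX.to_vec();
    data.extend_from_slice(encoded_msg);
    Ok(keypair.sign(&data)?)
}

fn verify_message(public: &PublicKey, encoded_msg: &[u8], signature: &[u8]) -> bool {
    let mut data = SIGNING_PREFIX.to_vec();
    data.extend_from_slice(encoded_msg);
    public.verify(&data, signature)
}
```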
* Add debug instances for MessageCache and GossipSub behaviour
* Manual impl of Debug for GossipsubMessage
* Add pretty printing of protocol_id in debug
* Use hex_fmt instead of hex
* Inline StringOrBytes helper struct
Since it is used only once here
* Limit gossipsub message data shown in debug output to 20 bytes
Otherwise it might become very large and useless for debugging
* Support spec-compliant reading of noise handshake payloads.
See https://github.com/libp2p/rust-libp2p/issues/1631.
This is the first step of a three-step process to address the issue.
In this step, support for reading noise handshake payloads
without an additional length prefix is added, falling back
to attempting a decoding with length prefix on failure.
Length prefixes are still sent in this step. Hence,
interoperability with other libp2p implementations is not
yet achieved with this step alone.
To achieve a better separation of handshake and transport
I/O, the `NoiseFramed` type has been extracted from
`NoiseOutput`. `NoiseFramed` is a `Sink` and `Stream`
of length-delimited Noise protocol messages. This type
is used in the handshake phase. Once a handshake
completes, the underlying Noise session transitions to
transport mode and the `NoiseFramed` is wrapped in
the `NoiseOutput`, which provides a regular `AsyncRead`
/ `AsyncWrite` I/O resource on top of the framed
encoding. No new buffers are introduced; they are
just split between `NoiseFramed` and `NoiseOutput`.
The second step involves removing the sending of the
length prefix in a subsequent release.
The third step involves removing the support for reading
length-prefixed protobuf payloads.
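A minimal sketch of the described fallback, generic over a prost-generated payload type and not the crate's actual decoding code, could look like this:

```rust
use prost::Message;

// Illustrative sketch of the fallback described above.
fn decode_handshake_payload<T>(msg: &[u8]) -> Result<T, prost::DecodeError>
where
    T: Message + Default,
{
    // Spec-compliant encoding: the Noise handshake message *is* the payload.
    match T::decode(msg) {
        Ok(payload) => Ok(payload),
        Err(e) => {
            // Legacy encoding: a 2-byte big-endian length prefix precedes the
            // protobuf payload.
            if msg.len() >= 2 {
                let len = u16::from_be_bytes([msg[0], msg[1]]) as usize;
                if msg.len() == 2 + len {
                    return T::decode(&msg[2..]);
                }
            }
            Err(e)
        }
    }
}
```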
* Small cleanup.
* Reuse frame decryption buffer.
Since frames are consumed one-by-one, `NoiseFramed` can have
a `BytesMut` decryption buffer, handing out immutable `Bytes`
views for each decrypted message. Since each view gets fully
consumed and dropped before the next frame is read, the
`BytesMut` decryption buffer in `NoiseFramed` can always
reuse the same buffer, only growing it as necessary.
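The reuse pattern roughly corresponds to the following sketch (illustrative, not the crate's code):

```rust
use bytes::{Bytes, BytesMut};

// Illustrative sketch of the buffer reuse described above.
struct DecryptBuffer {
    buf: BytesMut,
}

impl DecryptBuffer {
    fn next_frame(&mut self, frame_len: usize) -> Bytes {
        // `reserve` reclaims the existing allocation once the `Bytes` handed
        // out for the previous frame has been dropped, so in the steady state
        // no new allocation is needed; the buffer only grows as necessary.
        self.buf.reserve(frame_len);
        self.buf.resize(frame_len, 0);
        // ... decrypt the next frame into `&mut self.buf[..]` here ...
        self.buf.split().freeze() // immutable view of the decrypted frame
    }
}
```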
* Simplify.
* Add missing inner poll_flush().
* Improve nested length detection.
* Avoid unnecessary clearing of send buffers.
Thus reducing the zeroing of send buffers on resize that was
necessary with the previous behaviour.
* Prepare release.
* Close substream after writing request/response.
* Update protocols/request-response/src/handler/protocol.rs
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
* Add the libp2p-request-response protocol.
This crate provides a generic implementation for request/response
protocols, whereby each request is sent on a new substream.
* Fix OneShotHandler usage in floodsub.
* Custom ProtocolsHandler and multiple protocols.
1. Implement a custom ProtocolsHandler instead of using
the OneShotHandler for better control and error handling.
In particular, all request/response sending/receiving is
kept in the substream upgrades and thus in the background
task of a connection.
2. Support multiple protocols (usually protocol versions)
with a single `RequestResponse` instance, with
configurable inbound/outbound support.
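To illustrate point 2, here is a conceptual sketch, with locally defined hypothetical types rather than the crate's API, of configuring one behaviour with several protocol versions and per-protocol directionality:

```rust
// Hypothetical sketch, not the crate's API: one behaviour instance is
// configured with several protocols (typically protocol versions), each
// annotated with the directions it supports.
enum ProtocolSupport {
    Inbound,
    Outbound,
    Full,
}

struct ProtocolConfig {
    name: &'static str,
    support: ProtocolSupport,
}

fn protocols() -> Vec<ProtocolConfig> {
    vec![
        // Speak the current version in both directions ...
        ProtocolConfig { name: "/my-app/req-resp/2.0.0", support: ProtocolSupport::Full },
        // ... but still accept inbound requests from peers on the old version.
        ProtocolConfig { name: "/my-app/req-resp/1.0.0", support: ProtocolSupport::Inbound },
    ]
}
```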
* Small doc clarification.
* Remove unnecessary Sync bounds.
* Remove redundant Clone constraint.
* Update protocols/request-response/Cargo.toml
Co-authored-by: Toralf Wittner <tw@dtex.org>
* Update dev-dependencies.
* Update Cargo.tomls.
* Add changelog.
* Remove Sync bound from RequestResponseCodec::Protocol.
Apparently the compiler just needs some help with the scope
of borrows, which is unfortunate.
* Try async-trait.
* Allow checking whether a ResponseChannel is still open.
Also expand the commentary on `send_response` to indicate that
responses may be discarded if they come in too late.
* Add `RequestResponse::is_pending`.
As an analogue of `ResponseChannel::is_open` for outbound requests.
* Revert now unnecessary changes to the OneShotHandler.
Since `libp2p-request-response` is no longer using it.
* Update CHANGELOG for libp2p-swarm.
Co-authored-by: Toralf Wittner <tw@dtex.org>
* More control & insight for k-buckets.
1) More control: It is now possible to disable automatic
insertions of peers into the routing table via a new
`KademliaBucketInserts` configuration option. The
default is `OnConnected`, but it can be set to `Manual`,
in which case `add_address` must be called explicitly.
In order to communicate all situations in which a user
of `Kademlia` may want to manually update the routing
table, two new events are introduced:
* `KademliaEvent::RoutablePeer`: When a connection to
a peer with a known listen address is established and
the peer may be added to the routing table. This is
also emitted when automatic inserts are allowed but
the corresponding k-bucket is full.
* `KademliaEvent::PendingRoutablePeer`: When a connection
to a peer with a known listen address is established and
the peer is pending insertion into the routing table
(but may not make it). This is only emitted when
`OnConnected` (i.e. automatic inserts) is used.
These complement the existing `UnroutablePeer` and
`RoutingUpdated` events. It is now also possible to
explicitly remove peers and addresses from the
routing table.
2) More insight: `Kademlia::kbuckets` now gives an
iterator over `KBucketRef`s and `Kademlia::bucket`
a particular `KBucketRef`. A `KBucketRef` in turn
allows iteration over its entries. In this way,
the full contents of the routing table can be
inspected, e.g. in order to decide which peer(s)
to remove.
* Update protocols/kad/src/behaviour.rs
* Update protocols/kad/src/behaviour.rs
Co-authored-by: Max Inden <mail@max-inden.de>
* Update CHANGELOG.
Co-authored-by: Max Inden <mail@max-inden.de>
- Update `libp2p-kad` CHANGELOG and increment version to 0.20.1.
- Update `libp2p` version to 0.20.1.
- Fix some linter warnings in `libp2p-gossipsub`, increment the
version to 0.19.3 and update the CHANGELOG.
The extension paper S-Kademlia includes a proposal for lookups over
disjoint paths. Within vanilla Kademlia, queries keep track of the
closest nodes in a single bucket. Any adversary along the path can thus
influence all future paths if it can come up with the
next-closest (not overall closest) hops. S-Kademlia tries to solve this
attack by querying over disjoint paths using multiple buckets.
To adjust the libp2p Kademlia implementation accordingly, this change-set
introduces an additional peers iterator: `ClosestDisjointPeersIter`.
This new iterator wraps a set of `ClosestPeersIter`s.
`ClosestDisjointPeersIter` enforces that each of the `ClosestPeersIter`s
explores a disjoint path by instantly returning any peer that was
already queried by a different iterator.
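Conceptually, the disjointness guarantee can be thought of as in the following sketch (hypothetical types; not the `ClosestDisjointPeersIter` implementation):

```rust
use std::collections::HashMap;

// Hypothetical sketch: a peer already claimed by one sub-iterator is
// immediately reported back as finished/failed to any other sub-iterator,
// so every sub-iterator explores a disjoint set of peers.
type PeerId = u64;
type IteratorIndex = usize;

#[derive(Default)]
struct DisjointTracker {
    claimed: HashMap<PeerId, IteratorIndex>,
}

impl DisjointTracker {
    /// Returns `true` if sub-iterator `iter` may query `peer`.
    fn may_query(&mut self, iter: IteratorIndex, peer: PeerId) -> bool {
        match self.claimed.get(&peer) {
            None => {
                self.claimed.insert(peer, iter);
                true
            }
            Some(owner) => *owner == iter,
        }
    }
}
```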
Although 32 is perfectly fine in our case, it would be more consistent to use the const value PING_SIZE.
Co-authored-by: Pierre Krieger <pierre.krieger1708@gmail.com>
* [libp2p-kad] Provide more insight and control into Kademlia queries.
More insight: The API allows iterating over the active queries and
inspecting their state and execution statistics.
More control: The API allows aborting queries prematurely
at any time.
To that end, API operations that initiate new queries return the query ID
and multi-phase queries such as `put_record` retain the query ID across all
phases, each phase being executed by a new (internal) query.
* Cleanup
* Cleanup
* Update examples and re-exports.
* Incorporate review feedback.
* Update CHANGELOG
* Update CHANGELOG
Co-authored-by: Max Inden <mail@max-inden.de>
`FixedPeersIter` requires the initial set of peers to be passed as
`PeerId`s and not as `Key<PeerId>`s. This commit removes the unnecessary
conversion.
Instead of creating unconstrained random number generators in quickcheck
tests to generate test data, have quickcheck provide a `Seed` to seed
those random number generators and thus make the test execution
deterministic / reproducible.
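The pattern looks roughly like the sketch below (illustrative; the `Arbitrary` signature shown is that of quickcheck 0.9 and differs in later versions):

```rust
use quickcheck::{Arbitrary, Gen};
use rand::{rngs::StdRng, SeedableRng};

// Illustrative sketch: quickcheck generates only the seed; all further
// randomness is derived from it, so a failing case can be reproduced
// from the seed alone.
#[derive(Clone, Debug)]
struct Seed([u8; 32]);

impl Arbitrary for Seed {
    fn arbitrary<G: Gen>(g: &mut G) -> Self {
        let mut seed = [0u8; 32];
        for byte in seed.iter_mut() {
            *byte = u8::arbitrary(g);
        }
        Seed(seed)
    }
}

fn prop_deterministic(seed: Seed) -> bool {
    let mut rng = StdRng::from_seed(seed.0);
    // ... generate the test data from `rng` instead of an unconstrained RNG ...
    let _ = &mut rng;
    true
}
```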
Make sure to decrease `num_waiting` when being notified of a peer
failure to allow an additional peer to be queried.
Given that `FixedPeersIter` is initialized with `replication_factor` by
`QueryPool` this bug will not surface today.
If a user sends a message that is over the maximum transmission size,
gossipsub will disconnect from the peer being sent the message.
This PR updates the logic to simply emit an error and not send the over-sized
message, while maintaining the long-lived streams for future messages.
Co-authored-by: Age Manning <Age@AgeManning.com>
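The updated behaviour corresponds roughly to this sketch (hypothetical names, not the crate's code):

```rust
// Hypothetical sketch: an over-sized message is reported as an error instead
// of being sent, and the long-lived substream stays open for later messages.
#[derive(Debug)]
enum PublishError {
    MessageTooLarge { size: usize, max: usize },
}

fn publish(data: &[u8], max_transmit_size: usize) -> Result<(), PublishError> {
    if data.len() > max_transmit_size {
        // Previously the peer's stream would be closed here; now the error is
        // only surfaced to the caller.
        return Err(PublishError::MessageTooLarge { size: data.len(), max: max_transmit_size });
    }
    // ... frame and write `data` to the existing substream ...
    Ok(())
}
```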
* protocols/kad/query/peers/closest: Consider K_VALUE nodes at init
By considering `K_VALUE` at `ClosestPeersIter` initialization, the initial peer
set length is independent of `num_results` and thus of the `replication_factor`.
* protocols/kad/src/behaviour/test: Enable building single nodes
Introduces the `build_node` function to build a single, unconnected
node. Along the way, replace the notion of a `port_base` with returning
the actual `Multiaddr` of the node.
* protocols/kad/behaviour/test: Fix bootstrap test initialization
When looking for the nodes closest to a key, Kademlia considers
ALPHA_VALUE nodes to query at initialization. If `num_groups` is larger
than ALPHA_VALUE, the remaining locally known nodes will not be
considered. Given that no node other than node 1 is aware of them,
they would be lost entirely. To prevent the above, restrict `num_groups`
to be equal to or smaller than ALPHA_VALUE.
* protocols/kad/behaviour/test: Fix put_record and get_provider
In the past, when trying to find the closest nodes to a key, Kademlia
would consider `num_results` nodes to query out of all the
nodes it is aware of.
Both the `put_record` and the `get_provider` tests initialized their
swarms in the same way. The tests took the replication factor to use as
an input. The number of results to get was equal to the replication
factor. The number of swarms to start was twice the replication factor.
Nodes would be grouped into two groups of replication-factor size. The
first node would be aware of all of the nodes in the first group. The
last node of the first group would be aware of all the nodes in the
second group.
By coincidence (I assume) these numbers played together very well. At
initialization, the first node would consider `num_results`
peers (see the first paragraph). It would then contact each of them. As the
first node is aware of the last node of the first group, which in turn is
aware of all nodes in the second group, the first node would eventually
discover all nodes.
Recently, the number of nodes Kademlia considers at initialization when
looking for the nodes closest to a key was reduced to only
ALPHA nodes.
With this in mind, playing through the test setup above again would
result in (1) `replication_factor - ALPHA` nodes being entirely lost as
the first node would never consider them and (2) the first node probably
never contacting the last node out of the first group and thus not
discovering any nodes of the second group.
To keep the multi-hop discovery in place while not basing one's test
setup on the lucky assumption of Kademlia considering a replication
factor's worth of nodes at initialization, this patch alters the two tests:
Build a fully connected set of nodes and one additional node (the first
node). Connect the first node to a single node of the fully connected
set (simulating a boot node). Continue as done previously.
Co-authored-by: Roman Borschel <romanb@users.noreply.github.com>
When not making progress for `parallelism` time, a `ClosestPeersIter`
becomes `State::Stalled`. When stalled, an iterator is allowed to make
more parallel requests, up to `num_results`. If `num_results` is smaller
than `parallelism`, make sure to still allow up to `parallelism` requests
in-flight.
Co-Authored-By: Roman Borschel <romanb@users.noreply.github.com>
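The fix boils down to the following rule (illustrative, not the crate's code):

```rust
// Illustrative: the number of requests allowed in-flight, depending on
// whether the iterator is stalled.
fn max_in_flight(stalled: bool, parallelism: usize, num_results: usize) -> usize {
    if stalled {
        // When stalled, up to `num_results` requests are allowed, but never
        // fewer than `parallelism`.
        parallelism.max(num_results)
    } else {
        parallelism
    }
}
```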
* feat: allow sent messages to be seen as subscribed
A minor feature to allow mimicking the behaviour expected by the ipfs api tests.
* refactor: rename per review comments
* refactor: rename Floodsub::options to config
* chore: update changelog
* Update CHANGELOG.md
Co-Authored-By: Max Inden <mail@max-inden.de>
Co-authored-by: Max Inden <mail@max-inden.de>
Co-authored-by: Pierre Krieger <pierre.krieger1708@gmail.com>