Multiple connections per peer (#1440)

* Allow multiple connections per peer in libp2p-core.

Instead of trying to enforce a single connection per peer,
which involves quite a bit of additional complexity, e.g.
for prioritising simultaneously opened connections, and which
can have other undesirable consequences [1], we now make
multiple connections per peer a feature.

The gist of these changes is as follows:

The concept of a "node" with an implicit 1-1 correspondence
to a connection has been replaced with the "first-class"
concept of a "connection". The code from `src/nodes` has moved
(with varying degrees of modification) to `src/connection`.
A `HandledNode` has become a `Connection`, a `NodeHandler` a
`ConnectionHandler`, the `CollectionStream` was the basis for
the new `connection::Pool`, and so forth.

Conceptually, a `Network` contains a `connection::Pool`, which
in turn internally employs the `connection::Manager` to handle
the background `connection::manager::Task`s, one per connection,
as before. These are all considered implementation details. On
the public API, `Peer`s are managed as before through the
`Network`, except that the API has changed to reflect the shift
of focus to (potentially multiple) connections per peer. The
`NetworkEvent`s have changed accordingly.
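
For orientation, a minimal sketch of this layering, using
much-simplified stand-in types (the real libp2p-core definitions
carry generic parameters and considerably more state):

    use std::collections::HashMap;

    /// Stand-in for the real connection identifier type.
    #[derive(Clone, Copy, PartialEq, Eq, Hash)]
    pub struct ConnectionId(usize);

    /// Stand-in for the handle to a background connection task.
    struct TaskHandle;

    /// Drives one background task per connection, as before.
    struct Manager {
        tasks: HashMap<ConnectionId, TaskHandle>,
    }

    /// Tracks pending and established connections, per peer.
    struct Pool {
        manager: Manager,
    }

    /// Public entry point; `Peer`s are still managed through it.
    struct Network {
        pool: Pool,
    }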

The Swarm APIs remain largely unchanged, except that
`inject_replaced` is no longer called. It may now happen in
practice that multiple `ProtocolsHandler`s are associated with a
single `NetworkBehaviour`, one per connection. If an
implementation of `NetworkBehaviour` somehow relies on
communicating with exactly one `ProtocolsHandler`, this may
cause issues, but that is unlikely.
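
The per-connection association is visible in the updated
behaviour callbacks: handler events now arrive tagged with the
connection they were received on. A simplified sketch of the
callback shape, with stand-in types rather than the real trait
(cf. `inject_event` in the Kademlia diff below):

    /// Stand-ins for the real libp2p types.
    struct PeerId;
    #[derive(Clone, Copy)]
    struct ConnectionId(usize);

    trait Behaviour {
        type HandlerEvent;

        /// Called for every event emitted by a `ProtocolsHandler`,
        /// now additionally identifying the originating connection.
        fn inject_event(
            &mut self,
            peer: PeerId,
            connection: ConnectionId,
            event: Self::HandlerEvent
        );
    }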

[1]: https://github.com/paritytech/substrate/issues/4272

* Fix intra-rustdoc links.

* Update core/src/connection/pool.rs

Co-Authored-By: Max Inden <mail@max-inden.de>

* Address some review feedback and fix doc links.

* Allow responses to be sent on the same connection.

* Remove unnecessary remainders of inject_replaced.

* Update swarm/src/behaviour.rs

Co-Authored-By: Pierre Krieger <pierre.krieger1708@gmail.com>

* Update swarm/src/lib.rs

Co-Authored-By: Pierre Krieger <pierre.krieger1708@gmail.com>

* Update core/src/connection/manager.rs

Co-Authored-By: Pierre Krieger <pierre.krieger1708@gmail.com>

* Update core/src/connection/manager.rs

Co-Authored-By: Pierre Krieger <pierre.krieger1708@gmail.com>

* Update core/src/connection/pool.rs

Co-Authored-By: Pierre Krieger <pierre.krieger1708@gmail.com>

* Incorporate more review feedback.

* Move module declaration below imports.

* Update core/src/connection/manager.rs

Co-Authored-By: Toralf Wittner <tw@dtex.org>

* Update core/src/connection/manager.rs

Co-Authored-By: Toralf Wittner <tw@dtex.org>

* Simplify as per review.

* Fix rustdoc link.

* Add try_notify_handler and simplify.

* Relocate DialingConnection and DialingAttempt.

For better visibility constraints.

* Small cleanup.

* Small cleanup. More robust EstablishedConnectionIter.

* Clarify semantics of `DialingPeer::connect`.

* Don't call inject_disconnected on InvalidPeerId.

To preserve the previous behavior and ensure calls to
`inject_disconnected` are always paired with calls to
`inject_connected`.

* Provide public ConnectionId constructor.

Mainly needed for testing purposes, e.g. in substrate.
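
A sketch of such a test-only use, assuming the constructor takes
a plain integer; `my_behaviour` and `my_event` are hypothetical
application-specific values:

    use libp2p_core::{PeerId, connection::ConnectionId};

    fn main() {
        // Fabricate a connection id instead of opening a real
        // connection.
        let peer = PeerId::random();
        let conn = ConnectionId::new(1);
        // my_behaviour.inject_event(peer, conn, my_event);
        let _ = (peer, conn);
    }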

* Move the established connection limit check to the right place.

* Clean up connection error handling.

Separate connection errors into those occurring during
connection setup or upon rejecting a newly established
connection (the `PendingConnectionError`) and those
errors occurring on previously established connections,
i.e. for which a `ConnectionEstablished` event has
been emitted by the connection pool earlier.
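
A hedged sketch of the shape of this split; the variant names
are illustrative and not necessarily the exact libp2p-core
definitions:

    use std::io;

    /// Errors on a connection before (or at the moment) it
    /// becomes established, i.e. before a `ConnectionEstablished`
    /// event is emitted.
    enum PendingConnectionError<TTransErr> {
        /// The transport failed while dialing or upgrading.
        Transport(TTransErr),
        /// The remote identified as a different peer than expected.
        InvalidPeerId,
        /// The new connection was rejected, e.g. due to a limit.
        ConnectionLimitReached,
    }

    /// Errors on a previously established connection.
    enum ConnectionError<THandlerErr> {
        /// An I/O error on the underlying transport connection.
        IO(io::Error),
        /// The connection handler produced an error.
        Handler(THandlerErr),
    }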

* Revert change in log level and clarify an invariant.

* Remove inject_replaced entirely.

* Allow notifying all connection handlers.

Thereby simplify the API by introducing a new enum
`NotifyHandler`, used with a single variant
`NetworkBehaviourAction::NotifyHandler`.
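
A sketch of the new enum as used in the diff below (`One` and
`Any` appear there directly; a variant addressing all handlers
follows from the bullet above; `ConnectionId` is a stand-in):

    #[derive(Clone, Copy)]
    struct ConnectionId(usize);

    enum NotifyHandler {
        /// Notify the handler of a specific connection.
        One(ConnectionId),
        /// Notify the handler of any (one) established connection.
        Any,
        /// Notify the handlers of all established connections.
        All,
    }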

* Finishing touches.

Small API simplifications and code deduplication.
Some more useful debug logging.

Co-authored-by: Max Inden <mail@max-inden.de>
Co-authored-by: Pierre Krieger <pierre.krieger1708@gmail.com>
Co-authored-by: Toralf Wittner <tw@dtex.org>
Commit: 8337687b3a (parent: 7cbb3cf8f3)
Author: Roman Borschel
Date:   2020-03-04 13:49:25 +01:00
Committed by: GitHub
45 changed files with 4560 additions and 4704 deletions

--- a/protocols/kad/src/behaviour.rs
+++ b/protocols/kad/src/behaviour.rs
@@ -31,8 +31,14 @@ use crate::protocol::{KadConnectionType, KadPeer};
 use crate::query::{Query, QueryId, QueryPool, QueryConfig, QueryPoolState};
 use crate::record::{self, store::{self, RecordStore}, Record, ProviderRecord};
 use fnv::{FnvHashMap, FnvHashSet};
-use libp2p_core::{ConnectedPoint, Multiaddr, PeerId};
-use libp2p_swarm::{NetworkBehaviour, NetworkBehaviourAction, PollParameters, ProtocolsHandler};
+use libp2p_core::{ConnectedPoint, Multiaddr, PeerId, connection::ConnectionId};
+use libp2p_swarm::{
+    NetworkBehaviour,
+    NetworkBehaviourAction,
+    NotifyHandler,
+    PollParameters,
+    ProtocolsHandler
+};
 use log::{info, debug, warn};
 use smallvec::SmallVec;
 use std::{borrow::{Borrow, Cow}, error, iter, time::Duration};
@@ -917,13 +923,20 @@ where
     }
 
     /// Processes a record received from a peer.
-    fn record_received(&mut self, source: PeerId, request_id: KademliaRequestId, mut record: Record) {
+    fn record_received(
+        &mut self,
+        source: PeerId,
+        connection: ConnectionId,
+        request_id: KademliaRequestId,
+        mut record: Record
+    ) {
         if record.publisher.as_ref() == Some(self.kbuckets.local_key().preimage()) {
             // If the (alleged) publisher is the local node, do nothing. The record of
             // the original publisher should never change as a result of replication
             // and the publisher is always assumed to have the "right" value.
-            self.queued_events.push_back(NetworkBehaviourAction::SendEvent {
+            self.queued_events.push_back(NetworkBehaviourAction::NotifyHandler {
                 peer_id: source,
+                handler: NotifyHandler::One(connection),
                 event: KademliaHandlerIn::PutRecordRes {
                     key: record.key,
                     value: record.value,
@@ -974,8 +987,9 @@ where
         match self.store.put(record.clone()) {
             Ok(()) => {
                 debug!("Record stored: {:?}; {} bytes", record.key, record.value.len());
-                self.queued_events.push_back(NetworkBehaviourAction::SendEvent {
+                self.queued_events.push_back(NetworkBehaviourAction::NotifyHandler {
                     peer_id: source,
+                    handler: NotifyHandler::One(connection),
                     event: KademliaHandlerIn::PutRecordRes {
                         key: record.key,
                         value: record.value,
@@ -985,8 +999,9 @@ where
             }
             Err(e) => {
                 info!("Record not stored: {:?}", e);
-                self.queued_events.push_back(NetworkBehaviourAction::SendEvent {
+                self.queued_events.push_back(NetworkBehaviourAction::NotifyHandler {
                     peer_id: source,
+                    handler: NotifyHandler::One(connection),
                     event: KademliaHandlerIn::Reset(request_id)
                 })
             }
@@ -1062,7 +1077,9 @@ where
                 .position(|(p, _)| p == &peer)
                 .map(|p| q.inner.pending_rpcs.remove(p)))
         {
-            self.queued_events.push_back(NetworkBehaviourAction::SendEvent { peer_id, event });
+            self.queued_events.push_back(NetworkBehaviourAction::NotifyHandler {
+                peer_id, event, handler: NotifyHandler::Any
+            });
         }
 
         // The remote's address can only be put into the routing table,
@@ -1133,30 +1150,18 @@ where
         self.connected_peers.remove(id);
     }
 
-    fn inject_replaced(&mut self, peer_id: PeerId, _old: ConnectedPoint, new_endpoint: ConnectedPoint) {
-        // We need to re-send the active queries.
-        for query in self.queries.iter() {
-            if query.is_waiting(&peer_id) {
-                self.queued_events.push_back(NetworkBehaviourAction::SendEvent {
-                    peer_id: peer_id.clone(),
-                    event: query.inner.info.to_request(query.id()),
-                });
-            }
-        }
-
-        if let Some(addrs) = self.kbuckets.entry(&kbucket::Key::new(peer_id)).value() {
-            if let ConnectedPoint::Dialer { address } = new_endpoint {
-                addrs.insert(address);
-            }
-        }
-    }
-
-    fn inject_node_event(&mut self, source: PeerId, event: KademliaHandlerEvent<QueryId>) {
+    fn inject_event(
+        &mut self,
+        source: PeerId,
+        connection: ConnectionId,
+        event: KademliaHandlerEvent<QueryId>
+    ) {
         match event {
             KademliaHandlerEvent::FindNodeReq { key, request_id } => {
                 let closer_peers = self.find_closest(&kbucket::Key::new(key), &source);
-                self.queued_events.push_back(NetworkBehaviourAction::SendEvent {
+                self.queued_events.push_back(NetworkBehaviourAction::NotifyHandler {
                     peer_id: source,
+                    handler: NotifyHandler::One(connection),
                     event: KademliaHandlerIn::FindNodeRes {
                         closer_peers,
                         request_id,
@@ -1174,8 +1179,9 @@ where
             KademliaHandlerEvent::GetProvidersReq { key, request_id } => {
                 let provider_peers = self.provider_peers(&key, &source);
                 let closer_peers = self.find_closest(&kbucket::Key::new(key), &source);
-                self.queued_events.push_back(NetworkBehaviourAction::SendEvent {
+                self.queued_events.push_back(NetworkBehaviourAction::NotifyHandler {
                     peer_id: source,
+                    handler: NotifyHandler::One(connection),
                     event: KademliaHandlerIn::GetProvidersRes {
                         closer_peers,
                         provider_peers,
@@ -1243,8 +1249,9 @@ where
                     Vec::new()
                 };
 
-                self.queued_events.push_back(NetworkBehaviourAction::SendEvent {
+                self.queued_events.push_back(NetworkBehaviourAction::NotifyHandler {
                     peer_id: source,
+                    handler: NotifyHandler::One(connection),
                     event: KademliaHandlerIn::GetRecordRes {
                         record,
                         closer_peers,
@@ -1292,7 +1299,7 @@ where
                 record,
                 request_id
             } => {
-                self.record_received(source, request_id, record);
+                self.record_received(source, connection, request_id, record);
             }
 
             KademliaHandlerEvent::PutRecordRes {
@@ -1397,8 +1404,8 @@ where
                     query.on_success(&peer_id, vec![])
                 }
                 if self.connected_peers.contains(&peer_id) {
-                    self.queued_events.push_back(NetworkBehaviourAction::SendEvent {
-                        peer_id, event
+                    self.queued_events.push_back(NetworkBehaviourAction::NotifyHandler {
+                        peer_id, event, handler: NotifyHandler::Any
                     });
                 } else if &peer_id != self.kbuckets.local_key().preimage() {
                     query.inner.pending_rpcs.push((peer_id.clone(), event));