Roman Borschel 8337687b3a
Multiple connections per peer (#1440)
* Allow multiple connections per peer in libp2p-core.

Instead of trying to enforce a single connection per peer,
which involves quite a bit of additional complexity (e.g.
prioritising between simultaneously opened connections) and
can have other undesirable consequences [1], we now make
multiple connections per peer a feature.

The gist of these changes is as follows:

The concept of a "node" with an implicit 1-1 correspondence
to a connection has been replaced with the "first-class"
concept of a "connection". The code from `src/nodes` has moved
(with varying degrees of modification) to `src/connection`.
A `HandledNode` has become a `Connection`, a `NodeHandler` a
`ConnectionHandler`, the `CollectionStream` was the basis for
the new `connection::Pool`, and so forth.

Conceptually, a `Network` contains a `connection::Pool` which
in turn internally employs the `connection::Manager` for
handling the background `connection::manager::Task`s, one
per connection, as before. These are all considered implementation
details. On the public API, `Peer`s are managed as before through
the `Network`, except now the API has changed with the shift of focus
to (potentially multiple) connections per peer. The `NetworkEvent`s have
accordingly also undergone changes.
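
As a rough, self-contained sketch of this layering (the types and
fields below are illustrative stand-ins, not the actual libp2p-core
definitions):

    struct Task;                        // background task driving one connection
    struct Manager { tasks: Vec<Task> } // one `Task` per connection
    struct Pool { manager: Manager }    // pending & established connections, grouped by peer
    struct Network { pool: Pool }       // public API: dialing, listening, `Peer` access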

The Swarm APIs remain largely unchanged, except for the fact that
`inject_replaced` is no longer called. In practice, multiple
`ProtocolsHandler`s may now be associated with a single
`NetworkBehaviour`, one per connection. Implementations of
`NetworkBehaviour` that rely on communicating with exactly
one `ProtocolsHandler` may be affected, though this is
considered unlikely.

[1]: https://github.com/paritytech/substrate/issues/4272

* Fix intra-rustdoc links.

* Update core/src/connection/pool.rs

Co-Authored-By: Max Inden <mail@max-inden.de>

* Address some review feedback and fix doc links.

* Allow responses to be sent on the same connection.

* Remove unnecessary remainders of inject_replaced.

* Update swarm/src/behaviour.rs

Co-Authored-By: Pierre Krieger <pierre.krieger1708@gmail.com>

* Update swarm/src/lib.rs

Co-Authored-By: Pierre Krieger <pierre.krieger1708@gmail.com>

* Update core/src/connection/manager.rs

Co-Authored-By: Pierre Krieger <pierre.krieger1708@gmail.com>

* Update core/src/connection/manager.rs

Co-Authored-By: Pierre Krieger <pierre.krieger1708@gmail.com>

* Update core/src/connection/pool.rs

Co-Authored-By: Pierre Krieger <pierre.krieger1708@gmail.com>

* Incorporate more review feedback.

* Move module declaration below imports.

* Update core/src/connection/manager.rs

Co-Authored-By: Toralf Wittner <tw@dtex.org>

* Update core/src/connection/manager.rs

Co-Authored-By: Toralf Wittner <tw@dtex.org>

* Simplify as per review.

* Fix rustdoc link.

* Add try_notify_handler and simplify.

* Relocate DialingConnection and DialingAttempt.

For better visibility constraints.

* Small cleanup.

* Small cleanup. More robust EstablishedConnectionIter.

* Clarify semantics of `DialingPeer::connect`.

* Don't call inject_disconnected on InvalidPeerId.

To preserve the previous behavior and ensure calls to
`inject_disconnected` are always paired with calls to
`inject_connected`.

* Provide public ConnectionId constructor.

Mainly needed for testing purposes, e.g. in substrate.

* Move the established connection limit check to the right place.

* Clean up connection error handling.

Separate connection errors into those occurring during
connection setup or upon rejecting a newly established
connection (the `PendingConnectionError`) and those
errors occurring on previously established connections,
i.e. for which a `ConnectionEstablished` event has
been emitted by the connection pool earlier.
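
As an illustrative sketch of the split (variants and payloads here
are simplified stand-ins, not the exact libp2p-core definitions):

    /// Errors during connection setup or when rejecting a new connection.
    enum PendingConnectionError {
        Transport,       // the transport failed to establish the connection
        InvalidPeerId,   // the remote did not have the expected identity
        ConnectionLimit, // the newly established connection was rejected
    }

    /// Errors on a connection for which `ConnectionEstablished` was emitted.
    enum ConnectionError {
        IO,      // an I/O error occurred on the connection
        Handler, // the connection handler produced an error
    }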

* Revert change in log level and clarify an invariant.

* Remove inject_replaced entirely.

* Allow notifying all connection handlers.

This simplifies the API by introducing a new enum `NotifyHandler`,
used with the single constructor `NetworkBehaviourAction::NotifyHandler`.
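
As a rough sketch of the intended dispatch (shown in simplified,
self-contained form; the names are taken from the description above
but variant and field details may differ):

    struct ConnectionId(usize); // stand-in for the real connection identifier

    enum NotifyHandler {
        One(ConnectionId), // notify the handler of one specific connection
        All,               // notify the handlers of all connections to the peer
    }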

* Finishing touches.

Small API simplifications and code deduplication.
Some more useful debug logging.

Co-authored-by: Max Inden <mail@max-inden.de>
Co-authored-by: Pierre Krieger <pierre.krieger1708@gmail.com>
Co-authored-by: Toralf Wittner <tw@dtex.org>
2020-03-04 13:49:25 +01:00

// Copyright 2018 Parity Technologies (UK) Ltd.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.

use crate::upgrade::SendWrapper;
use crate::protocols_handler::{
    KeepAlive,
    ProtocolsHandler,
    IntoProtocolsHandler,
    ProtocolsHandlerEvent,
    ProtocolsHandlerUpgrErr
};

use futures::prelude::*;
use libp2p_core::{
    PeerId,
    ConnectionInfo,
    Connected,
    connection::{
        ConnectionHandler,
        ConnectionHandlerEvent,
        IntoConnectionHandler,
        Substream,
        SubstreamEndpoint,
    },
    muxing::StreamMuxerBox,
    upgrade::{self, InboundUpgradeApply, OutboundUpgradeApply}
};
use std::{error, fmt, pin::Pin, task::Context, task::Poll, time::Duration};
use wasm_timer::{Delay, Instant};

/// Prototype for a `NodeHandlerWrapper`.
pub struct NodeHandlerWrapperBuilder<TIntoProtoHandler> {
    /// The underlying handler.
    handler: TIntoProtoHandler,
}

impl<TIntoProtoHandler> NodeHandlerWrapperBuilder<TIntoProtoHandler>
where
    TIntoProtoHandler: IntoProtocolsHandler
{
    /// Builds a `NodeHandlerWrapperBuilder`.
    pub(crate) fn new(handler: TIntoProtoHandler) -> Self {
        NodeHandlerWrapperBuilder {
            handler,
        }
    }
}

impl<TIntoProtoHandler, TProtoHandler, TConnInfo> IntoConnectionHandler<TConnInfo>
    for NodeHandlerWrapperBuilder<TIntoProtoHandler>
where
    TIntoProtoHandler: IntoProtocolsHandler<Handler = TProtoHandler>,
    TProtoHandler: ProtocolsHandler,
    TConnInfo: ConnectionInfo<PeerId = PeerId>,
{
    type Handler = NodeHandlerWrapper<TIntoProtoHandler::Handler>;

    fn into_handler(self, connected: &Connected<TConnInfo>) -> Self::Handler {
        NodeHandlerWrapper {
            handler: self.handler.into_handler(connected.peer_id(), &connected.endpoint),
            negotiating_in: Vec::new(),
            negotiating_out: Vec::new(),
            queued_dial_upgrades: Vec::new(),
            unique_dial_upgrade_id: 0,
            shutdown: Shutdown::None,
        }
    }
}

/// A `ConnectionHandler` for an underlying `ProtocolsHandler`.
///
/// Wraps around an implementation of `ProtocolsHandler` and implements `ConnectionHandler`.
// TODO: add a caching system for protocols that are supported or not
pub struct NodeHandlerWrapper<TProtoHandler>
where
    TProtoHandler: ProtocolsHandler,
{
    /// The underlying handler.
    handler: TProtoHandler,
    /// Futures that upgrade incoming substreams.
    negotiating_in:
        Vec<(InboundUpgradeApply<Substream<StreamMuxerBox>, SendWrapper<TProtoHandler::InboundProtocol>>, Delay)>,
    /// Futures that upgrade outgoing substreams. The first element of the tuple is the userdata
    /// to pass back once successfully opened.
    negotiating_out: Vec<(
        TProtoHandler::OutboundOpenInfo,
        OutboundUpgradeApply<Substream<StreamMuxerBox>, SendWrapper<TProtoHandler::OutboundProtocol>>,
        Delay,
    )>,
    /// For each outbound substream request, how to upgrade it. The first element of the tuple
    /// is the unique identifier (see `unique_dial_upgrade_id`).
    queued_dial_upgrades: Vec<(u64, (upgrade::Version, SendWrapper<TProtoHandler::OutboundProtocol>))>,
    /// Unique identifier assigned to each queued dial upgrade.
    unique_dial_upgrade_id: u64,
    /// The currently planned connection & handler shutdown.
    shutdown: Shutdown,
}

/// The options for a planned connection & handler shutdown.
///
/// A shutdown is planned anew based on the return value of
/// [`ProtocolsHandler::connection_keep_alive`] of the underlying handler
/// after every invocation of [`ProtocolsHandler::poll`].
///
/// A planned shutdown is always postponed for as long as there are incoming
/// or outgoing substreams being negotiated, i.e. it is a graceful, "idle"
/// shutdown.
enum Shutdown {
    /// No shutdown is planned.
    None,
    /// A shutdown is planned as soon as possible.
    Asap,
    /// A shutdown is planned for when a `Delay` has elapsed.
    Later(Delay, Instant)
}

/// Error generated by the `NodeHandlerWrapper`.
#[derive(Debug)]
pub enum NodeHandlerWrapperError<TErr> {
    /// Error generated by the handler.
    Handler(TErr),
    /// The connection has been deemed useless and has been closed.
    UselessTimeout,
}

impl<TErr> From<TErr> for NodeHandlerWrapperError<TErr> {
    fn from(err: TErr) -> NodeHandlerWrapperError<TErr> {
        NodeHandlerWrapperError::Handler(err)
    }
}

impl<TErr> fmt::Display for NodeHandlerWrapperError<TErr>
where
    TErr: fmt::Display
{
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            NodeHandlerWrapperError::Handler(err) => write!(f, "{}", err),
            NodeHandlerWrapperError::UselessTimeout =>
                write!(f, "Node has been closed due to inactivity"),
        }
    }
}

impl<TErr> error::Error for NodeHandlerWrapperError<TErr>
where
    TErr: error::Error + 'static
{
    fn source(&self) -> Option<&(dyn error::Error + 'static)> {
        match self {
            NodeHandlerWrapperError::Handler(err) => Some(err),
            NodeHandlerWrapperError::UselessTimeout => None,
        }
    }
}
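
// Implementation of `ConnectionHandler` that wraps the `ProtocolsHandler`:
// it drives inbound and outbound substream upgrades, forwards events in
// both directions and translates the handler's keep-alive signal into a
// planned `Shutdown` of the connection.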
impl<TProtoHandler> ConnectionHandler for NodeHandlerWrapper<TProtoHandler>
where
    TProtoHandler: ProtocolsHandler,
{
    type InEvent = TProtoHandler::InEvent;
    type OutEvent = TProtoHandler::OutEvent;
    type Error = NodeHandlerWrapperError<TProtoHandler::Error>;
    type Substream = Substream<StreamMuxerBox>;
    // The first element of the tuple is the unique upgrade identifier
    // (see `unique_dial_upgrade_id`).
    type OutboundOpenInfo = (u64, TProtoHandler::OutboundOpenInfo, Duration);

    fn inject_substream(
        &mut self,
        substream: Self::Substream,
        endpoint: SubstreamEndpoint<Self::OutboundOpenInfo>,
    ) {
        match endpoint {
            SubstreamEndpoint::Listener => {
                let protocol = self.handler.listen_protocol();
                let timeout = protocol.timeout().clone();
                let upgrade = upgrade::apply_inbound(substream, SendWrapper(protocol.into_upgrade().1));
                let timeout = Delay::new(timeout);
                self.negotiating_in.push((upgrade, timeout));
            }
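            // For outbound substreams, the upgrade to apply was queued in
            // `queued_dial_upgrades` when the substream request was issued;
            // it is looked up by its unique `upgrade_id` and removed from
            // the queue before being applied to the new substream.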
            SubstreamEndpoint::Dialer((upgrade_id, user_data, timeout)) => {
                let pos = match self
                    .queued_dial_upgrades
                    .iter()
                    .position(|(id, _)| id == &upgrade_id)
                {
                    Some(p) => p,
                    None => {
                        debug_assert!(false, "Received an upgrade with an invalid upgrade ID");
                        return;
                    }
                };

                let (_, (version, upgrade)) = self.queued_dial_upgrades.remove(pos);
                let upgrade = upgrade::apply_outbound(substream, upgrade, version);
                let timeout = Delay::new(timeout);
                self.negotiating_out.push((user_data, upgrade, timeout));
            }
        }
    }

    fn inject_event(&mut self, event: Self::InEvent) {
        self.handler.inject_event(event);
    }

    fn poll(&mut self, cx: &mut Context) -> Poll<
        Result<ConnectionHandlerEvent<Self::OutboundOpenInfo, Self::OutEvent>, Self::Error>
    > {
        // Continue negotiation of newly-opened substreams on the listening side.
        // We remove each element from `negotiating_in` one by one and add them back if not ready.
        for n in (0..self.negotiating_in.len()).rev() {
            let (mut in_progress, mut timeout) = self.negotiating_in.swap_remove(n);
            match Future::poll(Pin::new(&mut timeout), cx) {
                Poll::Ready(_) => continue,
                Poll::Pending => {},
            }
            match Future::poll(Pin::new(&mut in_progress), cx) {
                Poll::Ready(Ok(upgrade)) =>
                    self.handler.inject_fully_negotiated_inbound(upgrade),
                Poll::Pending => self.negotiating_in.push((in_progress, timeout)),
                // TODO: return a diagnostic event?
                Poll::Ready(Err(_err)) => {}
            }
        }

        // Continue negotiation of newly-opened substreams on the dialing side.
        // We remove each element from `negotiating_out` one by one and add them back if not ready.
        for n in (0..self.negotiating_out.len()).rev() {
            let (upgr_info, mut in_progress, mut timeout) = self.negotiating_out.swap_remove(n);
            match Future::poll(Pin::new(&mut timeout), cx) {
                Poll::Ready(Ok(_)) => {
                    let err = ProtocolsHandlerUpgrErr::Timeout;
                    self.handler.inject_dial_upgrade_error(upgr_info, err);
                    continue;
                },
                Poll::Ready(Err(_)) => {
                    let err = ProtocolsHandlerUpgrErr::Timer;
                    self.handler.inject_dial_upgrade_error(upgr_info, err);
                    continue;
                },
                Poll::Pending => {},
            }

            match Future::poll(Pin::new(&mut in_progress), cx) {
                Poll::Ready(Ok(upgrade)) => {
                    self.handler.inject_fully_negotiated_outbound(upgrade, upgr_info);
                }
                Poll::Pending => {
                    self.negotiating_out.push((upgr_info, in_progress, timeout));
                }
                Poll::Ready(Err(err)) => {
                    let err = ProtocolsHandlerUpgrErr::Upgrade(err);
                    self.handler.inject_dial_upgrade_error(upgr_info, err);
                }
            }
        }

        // Poll the handler at the end so that we see the consequences of the method
        // calls on `self.handler`.
        let poll_result = self.handler.poll(cx);

        // Ask the handler whether it wants the connection (and the handler itself)
        // to be kept alive, which determines the planned shutdown, if any.
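        // - `KeepAlive::Until(t)` schedules (or reschedules) a shutdown at `t`.
        // - `KeepAlive::No` requests a shutdown as soon as possible.
        // - `KeepAlive::Yes` cancels any planned shutdown.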
        match (&mut self.shutdown, self.handler.connection_keep_alive()) {
            (Shutdown::Later(timer, deadline), KeepAlive::Until(t)) =>
                if *deadline != t {
                    *deadline = t;
                    timer.reset_at(t)
                },
            (_, KeepAlive::Until(t)) => self.shutdown = Shutdown::Later(Delay::new_at(t), t),
            (_, KeepAlive::No) => self.shutdown = Shutdown::Asap,
            (_, KeepAlive::Yes) => self.shutdown = Shutdown::None
        };

        match poll_result {
            Poll::Ready(ProtocolsHandlerEvent::Custom(event)) => {
                return Poll::Ready(Ok(ConnectionHandlerEvent::Custom(event)));
            }
            Poll::Ready(ProtocolsHandlerEvent::OutboundSubstreamRequest {
                protocol,
                info,
            }) => {
                let id = self.unique_dial_upgrade_id;
                let timeout = protocol.timeout().clone();
                self.unique_dial_upgrade_id += 1;
                let (version, upgrade) = protocol.into_upgrade();
                self.queued_dial_upgrades.push((id, (version, SendWrapper(upgrade))));
                return Poll::Ready(Ok(
                    ConnectionHandlerEvent::OutboundSubstreamRequest((id, info, timeout)),
                ));
            }
            Poll::Ready(ProtocolsHandlerEvent::Close(err)) => return Poll::Ready(Err(err.into())),
            Poll::Pending => (),
        };

        // Check if the connection (and handler) should be shut down.
        // As long as we're still negotiating substreams, shutdown is always postponed.
        if self.negotiating_in.is_empty() && self.negotiating_out.is_empty() {
            match self.shutdown {
                Shutdown::None => {},
                Shutdown::Asap => return Poll::Ready(Err(NodeHandlerWrapperError::UselessTimeout)),
                Shutdown::Later(ref mut delay, _) => match Future::poll(Pin::new(delay), cx) {
                    Poll::Ready(_) => return Poll::Ready(Err(NodeHandlerWrapperError::UselessTimeout)),
                    Poll::Pending => {}
                }
            }
        }

        Poll::Pending
    }
}