1146 Commits

Author SHA1 Message Date
folex
19a006329a Merge branch 'bucket_ordering' into weighted_bucket
# Conflicts:
#	protocols/kad/src/behaviour/test.rs
2020-03-25 19:26:43 +03:00
folex
3e0528fdcb merge 2020-03-25 19:26:24 +03:00
folex
8ced6588e2 Merge branch 'fluence_master' into bucket_ordering
# Conflicts:
#	protocols/kad/src/behaviour.rs
#	protocols/kad/src/behaviour/test.rs
2020-03-25 19:17:47 +03:00
folex
bbcb568bac Merge branch 'master' into fluence_master 2020-03-25 18:12:05 +03:00
folex
442847eb64 merge & fix kad/behaviour tests 2020-03-25 18:11:21 +03:00
folex
53b3c34de5 Merge branch 'master' into weighted_bucket
# Conflicts:
#	protocols/kad/src/behaviour.rs
#	protocols/kad/src/behaviour/test.rs
2020-03-25 17:52:38 +03:00
folex
5ff311d38d remove commented line 2020-03-25 17:46:48 +03:00
folex
c895ec386a extend full_bucket_discard_pending test 2020-03-25 17:33:20 +03:00
folex
e53e4b9059 fix bucket_update test 2020-03-25 17:28:59 +03:00
Roman Borschel
28ea62d1a9
[libp2p-swarm] Correct returned connections from notify_all. (#1513)
* [libp2p-swarm] Correct returned connections from notify_all.

If at least one connection was not ready (i.e. pending), only
those (pending) connections would be returned and considered on the next
iteration, whereas the connections that were ready should also have remained
in the list of connections to notify on a retry of `notify_all`.

* Simplify.

It seems unnecessary to use "poll all" -> "send all" semantics,
i.e. attempting an "atomic" broadcast. Rather, events sent via
`notify_all` can be delivered as soon as possible, simplifying
the code further.
2020-03-25 13:53:03 +01:00
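
A minimal sketch of the resulting "deliver as soon as possible" semantics. The types below (`Event`, `Connection`, `try_notify`) are hypothetical stand-ins, not the real libp2p-swarm API: every connection that is ready receives the event immediately, and only the still-pending connections are returned so that a retry of `notify_all` considers just those.

```rust
struct Event(String);

struct Connection {
    id: usize,
    ready: bool, // whether this connection can accept an event right now
}

impl Connection {
    /// Try to deliver the event; returns true if delivery succeeded.
    fn try_notify(&mut self, _event: &Event) -> bool {
        self.ready
    }
}

/// Deliver `event` to every ready connection and keep only the connections
/// that are still pending, so the caller retries just those later.
fn notify_all(mut conns: Vec<Connection>, event: &Event) -> Vec<Connection> {
    conns.retain_mut(|c| !c.try_notify(event));
    conns
}

fn main() {
    let conns = vec![
        Connection { id: 0, ready: true },
        Connection { id: 1, ready: false },
    ];
    let pending = notify_all(conns, &Event("event".to_string()));
    assert_eq!(pending.len(), 1);
    println!("connection {} is still pending", pending[0].id);
}
```
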
folex
cdcbbf7e89 Integrate TrustGraph into kademlia 2020-03-24 21:20:06 +03:00
folex
091e45374f weighted bucket: enable bucket_update 2020-03-24 21:00:54 +03:00
folex
5b901ab090 weighted bucket: fix full_bucket test 2020-03-24 20:26:33 +03:00
folex
1e9e42065a weighted bucket: implement update_pending 2020-03-24 19:27:06 +03:00
folex
0c724c815f weighted bucket: set weight to 0 in tests 2020-03-24 17:29:53 +03:00
folex
8f5cc730b1 weighted bucket: it compiles! 2020-03-24 17:24:22 +03:00
Roman Borschel
5f86f15dda
Fix connection limit check. (#1508) 2020-03-24 12:27:56 +01:00
folex
7eb6d425a4 weighted bucket: cleanup WIP 2020-03-24 14:22:41 +03:00
Pierre Krieger
d6a20a7c61
Finish changes regarding PeerId, again (#1460)
* Finish changes regarding PeerId, again

* Fix bad merge
2020-03-24 11:56:06 +01:00
folex
5b098a6e72 weighted bucket: implement update 2020-03-24 13:38:44 +03:00
folex
826fb99483 weighted bucket: implement apply_pending 2020-03-24 12:56:27 +03:00
folex
d5c0112fbb weighted bucket: implement insert() 2020-03-24 12:08:39 +03:00
Toralf Wittner
48e14d3247
Update yamux to version 0.4.5 (#1505) 2020-03-23 14:40:17 +01:00
Robert Klotzner
5a6111070e
Fix typo in doc (#1503) 2020-03-23 12:51:20 +01:00
Pierre Krieger
42290d94f2
Finish the migration to GitHub Actions (#1500) 2020-03-23 12:04:31 +01:00
Tobin Harding
7bf5266a02
Add addresses field for closing listeners (#1485)
* Add addresses field for closing listeners

Add an addresses field to the ListenersEvent and the ListenerClosed to
hold the addresses of a listener that has just closed. When we return a
ListenerClosed network event, loop over the addresses and call
inject_expired_listen_address on each one.

Fixes: #1482

* Use Vec instead of SmallVec

In order to not expose a third party dependency in our API use a `Vec`
type for the addresses list instead of a `SmallVec`.

* Do not clone for ListenersEvent::Closed

We would like to avoid clones where possible for efficiency reasons.
When returning a `ListenersEvent::Closed` we are already consuming the
listener (by way of a pin projection). We can therefore use a consuming
iterator instead of cloning.

Use `drain(..).collect()` instead of cloning to consume the addresses when
returning a `ListenersEvent::Closed`.

* Expire addresses before listener

The listener and its addresses technically expire at the same time, but
since we have to pick an order here, it makes more sense for the
addresses to expire first.

Co-authored-by: Pierre Krieger <pierre.krieger1708@gmail.com>
2020-03-23 11:31:38 +01:00
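
A minimal sketch of the consuming-iterator idea described above, using hypothetical stand-in types rather than the real libp2p-core ones: when a listener closes, its addresses are moved out with `drain(..).collect()` instead of being cloned, and the caller reports each expired address before handling the close itself.

```rust
struct Listener {
    addresses: Vec<String>, // stand-in for a list of Multiaddr
}

struct ListenerClosed {
    addresses: Vec<String>,
}

/// Close the listener, consuming its addresses instead of cloning them.
fn close(mut listener: Listener) -> ListenerClosed {
    ListenerClosed {
        addresses: listener.addresses.drain(..).collect(),
    }
}

fn main() {
    let listener = Listener {
        addresses: vec!["/ip4/127.0.0.1/tcp/4001".to_string()],
    };
    let closed = close(listener);
    // Expire the addresses first (stand-in for inject_expired_listen_address)...
    for addr in &closed.addresses {
        println!("expired listen address: {addr}");
    }
    // ...then handle the listener-closed event itself.
    println!("listener closed with {} address(es)", closed.addresses.len());
}
```
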
folex
760e6baac3 bucket: cleanup 2020-03-20 20:11:17 +03:00
folex
6ba164e6d9 bucket: revert pattern matching 2020-03-20 20:00:05 +03:00
folex
fe0a141e0e bucket: cleanup 2020-03-20 19:45:04 +03:00
folex
d8b42a5cb1 bucket: apply_pending works 2020-03-20 19:36:33 +03:00
folex
c096db5e59 bucket: apply_pending WIP 2020-03-20 18:46:47 +03:00
folex
0d66695d3b update works 2020-03-20 18:35:57 +03:00
folex
2aa6fbc93f insert works 2020-03-20 18:11:19 +03:00
Pierre Krieger
92ce5d6179
Allow customizing the Kademlia maximum packet size (#1502)
* Allow customizing the Kademlia maximum packet size

* Address concern
2020-03-19 21:23:05 +01:00
Pierre Krieger
439dc8246e
Allow customizing the delay before closing a Kademlia connection (#1477)
* Reduce the delay before closing a Kademlia connection

* Rework pull request

* Update protocols/kad/src/behaviour.rs

Co-Authored-By: Roman Borschel <romanb@users.noreply.github.com>

* Update protocols/kad/src/behaviour.rs

Co-Authored-By: Roman Borschel <romanb@users.noreply.github.com>

* Rework the pull request

Co-authored-by: Roman Borschel <romanb@users.noreply.github.com>
2020-03-19 17:01:34 +01:00
folex
51b441247f cleanup 2020-03-19 18:38:37 +03:00
folex
235d23dcaa Refactor duplicated code 2020-03-19 18:36:38 +03:00
folex
9b93e500a7 arrrgh 2020-03-19 16:40:04 +03:00
folex
1dc62e61f2 use trust-graph 2020-03-19 15:50:12 +03:00
folex
87b8e23553 cleanup 2020-03-19 15:12:48 +03:00
folex
84fc48f1ac Add public key to buckets 2020-03-19 14:52:59 +03:00
Roman Borschel
4e63710a6f
Clean up libp2p-core tests. (#1498) 2020-03-19 10:51:46 +01:00
vms
e9240755f8 contact 2020-03-19 12:32:08 +03:00
folex
619ed94e64 Add public key to buckets WIP 2020-03-19 12:11:04 +03:00
folex
783ff2575e Add public key to buckets WIP 2020-03-18 18:16:36 +03:00
Pierre Krieger
97a02950bb
Reexport MemoryStoreConfig (#1499) 2020-03-18 14:56:53 +01:00
Max Inden
522020e0a4
protocols/kad: Do not attempt to store expired record in record store (#1496)
* protocols/kad: Do not attempt to store expired record in record store

`Kademlia::record_received` calculates the expiration time of a record
before inserting it into the record store. Instead of inserting the
record into the record store unconditionally, with this patch the record
is only inserted if it has not yet expired. If the record is expired, a
`KademliaHandlerIn::Reset` for the given (sub)stream is triggered.

This would serve as a tiny defense mechanism against an attacker trying
to fill a node's record store with expired records before the record
store's clean-up procedure removes them.

* protocols/kad: Send regular ack when record discarded due to expiration

With this commit the remote receives a
[`KademliaHandlerIn::PutRecordRes`] even in the case where the record is
discarded due to being expired. Given that the remote sent the local
node a [`KademliaHandlerEvent::PutRecord`] request, the remote perceives
the local node as one of the k closest nodes to the target.
Returning a [`KademliaHandlerIn::Reset`] instead of a
[`KademliaHandlerIn::PutRecordRes`] to have the remote try another node
would only result in the remote node contacting an even more distant
node. In addition, returning a [`KademliaHandlerIn::PutRecordRes`] does
not reveal any internal information to a possibly malicious remote node.

* protocols/kad/src/behaviour: Use `now` and reword expiration comment

Co-authored-by: Roman Borschel <romanb@users.noreply.github.com>
2020-03-18 14:31:01 +01:00
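
The behaviour described above, sketched in isolation with hypothetical stand-in types (`Record`, `Reply`) rather than the real libp2p-kad ones: a record that is already expired on arrival is not stored, but the sender still receives a regular acknowledgement rather than a stream reset.

```rust
use std::time::Instant;

/// Stand-in for a Kademlia record: just a value and an optional expiration.
struct Record {
    value: Vec<u8>,
    expires: Option<Instant>,
}

/// Reply sent back to the remote node; always a regular acknowledgement.
enum Reply {
    PutRecordRes,
}

fn record_received(store: &mut Vec<Record>, record: Record, now: Instant) -> Reply {
    match record.expires {
        // Expired on arrival: do not store it, but still acknowledge so the
        // remote does not fall back to an even more distant node.
        Some(expiry) if expiry <= now => Reply::PutRecordRes,
        // Otherwise store the record as usual.
        _ => {
            store.push(record);
            Reply::PutRecordRes
        }
    }
}

fn main() {
    let now = Instant::now();
    let mut store = Vec::new();
    let expired = Record { value: b"old".to_vec(), expires: Some(now) };
    let fresh = Record { value: b"new".to_vec(), expires: None };
    record_received(&mut store, expired, now);
    record_received(&mut store, fresh, now);
    assert_eq!(store.len(), 1); // only the non-expired record was stored
    println!("stored {} record(s), first value = {:?}", store.len(), store[0].value);
}
```
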
Max Inden
29471467e3
protocols/kad: Fix right shift overflow panic in record_received (#1492)
* protocols/kad: Add test to reproduce right shift overflow panic

* protocols/kad: Fix right shift overflow panic in record_received

Within `Behaviour::record_received` the exponentially decreasing
expiration of a record, based on its distance to the target, is
calculated as follows:

1. Calculate the number of nodes between us and the record key beyond
the k replication constant as `n`.

2. Shift the configured record time-to-live `n` times to the right to
obtain an exponentially decreasing expiration.

The configured record time-to-live is a u64. If `n` is greater than or
equal to 64, the right shift overflows, which panics in debug mode.

This patch uses a checked right shift instead, defaulting to 0 (`now +
0`) for the expiration on overflow.

* protocols/kad: Put attribute below comment

* protocols/kad: Extract shifting logic and rework test

Extract right shift into isolated function and replace complex
regression test with small isolated one.

* protocols/kad/src/behaviour: Refactor exp_decr_expiration

Co-authored-by: Roman Borschel <romanb@users.noreply.github.com>
2020-03-18 10:15:33 +01:00
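
The core of the fix in isolation: a checked right shift of the record time-to-live that falls back to an expiration of `now` (i.e. `now + 0`) when the shift would overflow. The helper name `exp_decrease` and the 36-hour TTL below are illustrative assumptions, not taken verbatim from the patch.

```rust
use std::time::{Duration, Instant};

/// Shift `ttl` right by `n`, returning a zero duration when `n >= 64`
/// instead of panicking on overflow.
fn exp_decrease(ttl: Duration, n: u32) -> Duration {
    Duration::from_secs(ttl.as_secs().checked_shr(n).unwrap_or(0))
}

fn main() {
    let ttl = Duration::from_secs(36 * 60 * 60); // e.g. a 36-hour record TTL
    let now = Instant::now();

    // Few nodes beyond the replication constant: expiration stays close to the TTL.
    let exp_near = now + exp_decrease(ttl, 1);

    // Very many closer nodes: the shift amount exceeds 63, so the record
    // expires immediately (`now + 0`) instead of causing a panic.
    let exp_far = now + exp_decrease(ttl, 70);

    assert!(exp_far <= exp_near);
    println!("near: {:?}, far: {:?}", exp_near, exp_far);
}
```
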
folex
1b3a30aa6c Add public key to buckets WIP 2020-03-17 18:35:39 +03:00
folex
e8bce909d0 Exchange public key WIP 2020-03-17 17:59:33 +03:00