Mirror of https://github.com/fluencelabs/js-libp2p, synced 2025-07-16 09:01:58 +00:00

Compare commits: v0.24.0-rc...v0.25.0-rc (24 commits)
Commit SHAs:
ab028a2be3, 91e60d4253, 679d446daa, 8047fb76fa, c4cab007af, ebaab3e47f,
b31690c8e6, 3bde9c8bed, 14e12ee1f1, 2374929990, 26de739bb1, 0f8d6afd8f,
daa26859e0, fdfb7b4e86, 15bdb795a4, 7d78728f54, 53ed3bdb99, ae513887f5,
7c78faa171, 7d12eb9e26, 581a1de472, 288ac17954, 2e4459b315, 2a5232b541
CHANGELOG.md (56 lines changed)

@@ -1,3 +1,59 @@

<a name="0.24.4"></a>
## [0.24.4](https://github.com/libp2p/js-libp2p/compare/v0.24.3...v0.24.4) (2019-01-04)

<a name="0.24.3"></a>
## [0.24.3](https://github.com/libp2p/js-libp2p/compare/v0.24.2...v0.24.3) (2018-12-14)

### Bug Fixes

* not started yet ([#297](https://github.com/libp2p/js-libp2p/issues/297)) ([fdfb7b4](https://github.com/libp2p/js-libp2p/commit/fdfb7b4))

<a name="0.24.2"></a>
## [0.24.2](https://github.com/libp2p/js-libp2p/compare/v0.24.1...v0.24.2) (2018-12-04)

### Bug Fixes

* use symbol instead of constructor name ([#292](https://github.com/libp2p/js-libp2p/issues/292)) ([53ed3bd](https://github.com/libp2p/js-libp2p/commit/53ed3bd))

<a name="0.24.1"></a>
## [0.24.1](https://github.com/libp2p/js-libp2p/compare/v0.24.0...v0.24.1) (2018-12-03)

### Features

* allow configurable validators and selectors to the dht ([#288](https://github.com/libp2p/js-libp2p/issues/288)) ([7d12eb9](https://github.com/libp2p/js-libp2p/commit/7d12eb9))

<a name="0.24.0"></a>
# [0.24.0](https://github.com/libp2p/js-libp2p/compare/v0.24.0-rc.3...v0.24.0) (2018-11-16)

### Bug Fixes

* add maxtimeout to dht get ([#248](https://github.com/libp2p/js-libp2p/issues/248)) ([69f7264](https://github.com/libp2p/js-libp2p/commit/69f7264))
* dht get options ([4460e82](https://github.com/libp2p/js-libp2p/commit/4460e82))
* dont call callback before it's properly set ([17b5f73](https://github.com/libp2p/js-libp2p/commit/17b5f73))
* improve get peer info errors ([714b6ec](https://github.com/libp2p/js-libp2p/commit/714b6ec))
* start kad dht random walk ([#251](https://github.com/libp2p/js-libp2p/issues/251)) ([dd934b9](https://github.com/libp2p/js-libp2p/commit/dd934b9))

### Features

* add datastore to config ([40e840d](https://github.com/libp2p/js-libp2p/commit/40e840d))
* add delegated peer and content routing support ([#242](https://github.com/libp2p/js-libp2p/issues/242)) ([a95389a](https://github.com/libp2p/js-libp2p/commit/a95389a))
* add maxNumProviders to findprovs ([#283](https://github.com/libp2p/js-libp2p/issues/283)) ([970deec](https://github.com/libp2p/js-libp2p/commit/970deec))
* conditionally emit errors ([f71fdfd](https://github.com/libp2p/js-libp2p/commit/f71fdfd))
* enable relay by default (no hop) ([#254](https://github.com/libp2p/js-libp2p/issues/254)) ([686379e](https://github.com/libp2p/js-libp2p/commit/686379e))
* make libp2p a state machine ([#257](https://github.com/libp2p/js-libp2p/issues/257)) ([0b75f99](https://github.com/libp2p/js-libp2p/commit/0b75f99))
* use package-table vs custom script ([a63432e](https://github.com/libp2p/js-libp2p/commit/a63432e))

<a name="0.23.1"></a>
## [0.23.1](https://github.com/libp2p/js-libp2p/compare/v0.23.0...v0.23.1) (2018-08-13)
OKR.md (4 lines changed)

@@ -2,6 +2,10 @@

 We try to frame our ongoing work using a process based on quarterly Objectives and Key Results (OKRs). Objectives reflect outcomes that are challenging, but realistic. Results are tangible and measurable.

+## 2019 Q1
+
+Find the js-libp2p OKRs for 2019 Q1 at the [2019 Q1 libp2p OKRs Spreadsheet](https://docs.google.com/spreadsheets/d/11GKG1DBRIIAiQnHvLD7_IqWxDGsVdaZFpxJM6NWtXe8/edit#gid=1271182838)
+
 ## 2018 Q4

 Find the js-libp2p OKRs for 2018 Q4 at the [2018 Q4 libp2p OKRs Spreadsheet](https://docs.google.com/spreadsheets/d/1BYwmbVicgo6_tOHAbgiUXWge8Ej0qR1M_gAUulazmrg/edit#gid=1241853194)
@@ -175,12 +175,12 @@ class Node extends libp2p {
      },
      dht: {
        kBucketSize: 20,
        enabled: true,
        enabledDiscovery: true // Allows to disable discovery (enabled by default)
      },
      // Enable/Disable Experimental features
      EXPERIMENTAL: { // Experimental features ("behind a flag")
        pubsub: false,
        dht: false
        pubsub: false
      }
    }
  }
@@ -24,6 +24,7 @@
 - [ ] Twitter
 - [ ] IRC
 - [ ] Reddit
+- [ ] [discuss.ipfs.io](https://discuss.ipfs.io/c/announcements)
 - [ ] Blog post

 # 🙌🏽 Want to contribute?
@@ -10,8 +10,7 @@ Let us know if you find any issue or if you want to contribute and add a new tut
 - [Protocol and Stream Muxing](./protocol-and-stream-muxing)
 - [Encrypted Communications](./encrypted-communications)
 - [Discovery Mechanisms](./discovery-mechanisms)
-- [Peer Routing](./peer-and-content-routing)
-- [Content Routing](./peer-and-content-routing)
+- [Peer and Content Routing](./peer-and-content-routing)
 - [PubSub](./pubsub)
 - [NAT Traversal](./nat-traversal)
 - Circuit Relay (future)
@@ -4,7 +4,15 @@ One of the primary goals with libp2p P2P was to get it fully working in the bro
 # 1. Setting up a simple app that lists connections to other nodes

-Simple go into the folder [1](./1) and execute the following
+Start by installing libp2p's dependencies.
+
+```bash
+> cd ../../
+> npm install
+> cd examples/libp2p-in-the-browser
+```
+
+Then simply go into the folder [1](./1) and execute the following

 ```bash
 > cd 1
package.json (29 lines changed)

@@ -1,6 +1,6 @@
 {
   "name": "libp2p",
-  "version": "0.24.0-rc.3",
+  "version": "0.25.0-rc.0",
   "description": "JavaScript base class for libp2p bundles",
   "leadMaintainer": "Jacob Heun <jacobheun@gmail.com>",
   "main": "src/index.js",
@@ -50,17 +50,18 @@
   "libp2p-connection-manager": "~0.0.2",
   "libp2p-floodsub": "~0.15.1",
   "libp2p-ping": "~0.8.3",
-  "libp2p-switch": "~0.41.2",
+  "libp2p-switch": "~0.41.3",
   "libp2p-websockets": "~0.12.0",
   "mafmt": "^6.0.2",
-  "multiaddr": "^5.0.2",
-  "peer-book": "~0.8.0",
+  "multiaddr": "^6.0.2",
+  "once": "^1.4.0",
+  "peer-book": "~0.9.0",
   "peer-id": "~0.12.0",
-  "peer-info": "~0.14.1"
+  "peer-info": "~0.15.0"
 },
 "devDependencies": {
   "@nodeutils/defaults-deep": "^1.1.0",
-  "aegir": "^17.0.1",
+  "aegir": "^18.0.2",
   "chai": "^4.2.0",
   "chai-checkmark": "^1.0.1",
   "cids": "~0.5.5",
@@ -71,15 +72,15 @@
   "libp2p-circuit": "~0.3.0",
   "libp2p-delegated-content-routing": "~0.2.2",
   "libp2p-delegated-peer-routing": "~0.2.2",
-  "libp2p-kad-dht": "~0.11.1",
+  "libp2p-kad-dht": "~0.14.2",
   "libp2p-mdns": "~0.12.0",
   "libp2p-mplex": "~0.8.4",
-  "libp2p-secio": "~0.10.1",
+  "libp2p-secio": "~0.11.0",
   "libp2p-spdy": "~0.13.0",
   "libp2p-tcp": "~0.13.0",
   "libp2p-webrtc-star": "~0.15.5",
-  "libp2p-websocket-star": "~0.9.0",
-  "libp2p-websocket-star-rendezvous": "~0.2.4",
+  "libp2p-websocket-star": "~0.10.1",
+  "libp2p-websocket-star-rendezvous": "~0.3.0",
   "lodash.times": "^4.3.2",
   "nock": "^10.0.2",
   "pull-goodbye": "0.0.2",
@@ -90,6 +91,7 @@
   "wrtc": "~0.3.2"
 },
 "contributors": [
   "Alan Shaw <alan.shaw@protocol.ai>",
   "Alan Shaw <alan@tableflip.io>",
   "Chris Bratlien <chrisbratlien@gmail.com>",
   "Chris Dostert <chrisdostert@users.noreply.github.com>",
@@ -101,6 +103,7 @@
   "Florian-Merle <florian.david.merle@gmail.com>",
   "Friedel Ziegelmayer <dignifiedquire@gmail.com>",
   "Giovanni T. Parra <fiatjaf@gmail.com>",
   "Henrique Dias <hacdias@gmail.com>",
   "Hugo Dias <hugomrdias@gmail.com>",
   "Irakli Gozalishvili <rfobic@gmail.com>",
   "Jacob Heun <jacobheun@gmail.com>",
@@ -110,16 +113,22 @@
   "Kevin Kwok <antimatter15@gmail.com>",
   "Lars Gierth <lgierth@users.noreply.github.com>",
   "Maciej Krüger <mkg20001@gmail.com>",
   "Marcin Tojek <mtojek@users.noreply.github.com>",
   "Nuno Nogueira <nunofmn@gmail.com>",
   "Pedro Teixeira <pedro@protocol.ai>",
   "Pedro Teixeira <i@pgte.me>",
   "RasmusErik Voel Jensen <github@solsort.com>",
   "Richard Littauer <richard.littauer@gmail.com>",
   "Ryan Bell <ryan@piing.net>",
   "Soeren <nikorpoulsen@gmail.com>",
   "Sönke Hahn <soenkehahn@gmail.com>",
   "Thomas Eizinger <thomas@eizinger.io>",
   "Tiago Alves <alvesjtiago@gmail.com>",
   "Vasco Santos <vasco.santos@ua.pt>",
   "Vasco Santos <vasco.santos@moxy.studio>",
   "Volker Mische <volker.mische@gmail.com>",
   "Zane Starr <zcstarr@gmail.com>",
   "ebinks <elizabethjbinks@gmail.com>",
   "greenkeeperio-bot <support@greenkeeper.io>",
   "mayerwin <mayerwin@users.noreply.github.com>",
   "ᴠɪᴄᴛᴏʀ ʙᴊᴇʟᴋʜᴏʟᴍ <victorbjelkholm@gmail.com>"
@@ -32,11 +32,18 @@ const OptionsSchema = Joi.object({
     })
   }).default(),
   dht: Joi.object().keys({
-    kBucketSize: Joi.number().allow(null),
-    enabledDiscovery: Joi.boolean().default(true)
-  }),
+    kBucketSize: Joi.number().default(20),
+    enabled: Joi.boolean().default(true),
+    randomWalk: Joi.object().keys({
+      enabled: Joi.boolean().default(true),
+      queriesPerPeriod: Joi.number().default(1),
+      interval: Joi.number().default(30000),
+      timeout: Joi.number().default(10000)
+    }).default(),
+    validators: Joi.object().allow(null),
+    selectors: Joi.object().allow(null)
+  }).default(),
   EXPERIMENTAL: Joi.object().keys({
-    dht: Joi.boolean().default(false),
     pubsub: Joi.boolean().default(false)
   }).default()
 }).default()

@@ -46,7 +53,7 @@ module.exports.validate = (options) => {
   options = Joi.attempt(options, OptionsSchema)

   // Ensure dht is correct
-  if (options.config.EXPERIMENTAL.dht) {
+  if (options.config.dht.enabled) {
     Joi.assert(options.modules.dht, ModuleSchema.required())
   }
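For orientation, here is a minimal sketch (not part of the diff) of a bundle that exercises the new `dht` options validated by the schema above: `enabled`, `kBucketSize`, `randomWalk`, plus the `validators`/`selectors` hooks introduced in 0.24.1. The transport choice and the record validator/selector shapes are illustrative assumptions, not taken from this changeset.

```js
'use strict'

const libp2p = require('libp2p')
const TCP = require('libp2p-tcp')
const KadDHT = require('libp2p-kad-dht')

class Node extends libp2p {
  constructor (peerInfo) {
    super({
      peerInfo,
      modules: {
        transport: [ TCP ],
        dht: KadDHT // required by the validator whenever config.dht.enabled is true
      },
      config: {
        dht: {
          enabled: true,       // replaces the old EXPERIMENTAL.dht flag
          kBucketSize: 20,     // schema default shown above
          randomWalk: {
            enabled: true      // replaces enabledDiscovery
          },
          // Optional, keyed by record namespace; the shapes below are placeholders
          validators: {
            demo: { func: (key, record, cb) => cb(null) }
          },
          selectors: {
            demo: (key, records) => 0 // always pick the first record
          }
        }
      }
    })
  }
}

module.exports = Node
```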
src/index.js (47 lines changed)

@@ -2,10 +2,10 @@
 const FSM = require('fsm-event')
 const EventEmitter = require('events').EventEmitter
-const assert = require('assert')
 const debug = require('debug')
 const log = debug('libp2p')
 log.error = debug('libp2p:error')
+const errCode = require('err-code')

 const each = require('async/each')
 const series = require('async/series')
@@ -17,6 +17,7 @@ const Ping = require('libp2p-ping')
 const WebSockets = require('libp2p-websockets')
 const ConnectionManager = require('libp2p-connection-manager')

+const { emitFirst } = require('./util')
 const peerRouting = require('./peer-routing')
 const contentRouting = require('./content-routing')
 const dht = require('./dht')
@@ -24,7 +25,12 @@ const pubsub = require('./pubsub')
 const getPeerInfo = require('./get-peer-info')
 const validateConfig = require('./config').validate

-const NOT_STARTED_ERROR_MESSAGE = 'The libp2p node is not started yet'
+const notStarted = (action, state) => {
+  return errCode(
+    new Error(`libp2p cannot ${action} when not started; state is ${state}`),
+    'ERR_NODE_NOT_STARTED'
+  )
+}

 /**
  * @fires Node#error Emitted when an error occurs
@@ -97,14 +103,12 @@ class Node extends EventEmitter {
     }

     // dht provided components (peerRouting, contentRouting, dht)
-    if (this._config.EXPERIMENTAL.dht) {
+    if (this._config.dht.enabled) {
       const DHT = this._modules.dht
-      const enabledDiscovery = this._config.dht.enabledDiscovery !== false

       this._dht = new DHT(this._switch, {
-        kBucketSize: this._config.dht.kBucketSize || 20,
-        enabledDiscovery,
-        datastore: this.datastore
+        datastore: this.datastore,
+        ...this._config.dht
       })
     }

@@ -187,7 +191,7 @@
    * @returns {void}
    */
   start (callback = () => {}) {
-    this.once('start', callback)
+    emitFirst(this, ['error', 'start'], callback)
     this.state('start')
   }

@@ -198,7 +202,7 @@
    * @returns {void}
    */
   stop (callback = () => {}) {
-    this.once('stop', callback)
+    emitFirst(this, ['error', 'stop'], callback)
     this.state('stop')
   }

@@ -215,8 +219,6 @@
    * @returns {void}
    */
   dial (peer, callback) {
-    assert(this.isStarted(), NOT_STARTED_ERROR_MESSAGE)
-
     this.dialProtocol(peer, null, callback)
   }

@@ -231,7 +233,9 @@
    * @returns {void}
    */
   dialProtocol (peer, protocol, callback) {
-    assert(this.isStarted(), NOT_STARTED_ERROR_MESSAGE)
+    if (!this.isStarted()) {
+      return callback(notStarted('dial', this.state._state))
+    }

     if (typeof protocol === 'function') {
       callback = protocol
@@ -259,7 +263,9 @@
    * @returns {void}
    */
   dialFSM (peer, protocol, callback) {
-    assert(this.isStarted(), NOT_STARTED_ERROR_MESSAGE)
+    if (!this.isStarted()) {
+      return callback(notStarted('dial', this.state._state))
+    }

     if (typeof protocol === 'function') {
       callback = protocol
@@ -280,8 +286,6 @@
   }

   hangUp (peer, callback) {
-    assert(this.isStarted(), NOT_STARTED_ERROR_MESSAGE)
-
     this._getPeerInfo(peer, (err, peerInfo) => {
       if (err) { return callback(err) }

@@ -291,7 +295,7 @@

   ping (peer, callback) {
     if (!this.isStarted()) {
-      return callback(new Error(NOT_STARTED_ERROR_MESSAGE))
+      return callback(notStarted('ping', this.state._state))
     }

     this._getPeerInfo(peer, (err, peerInfo) => {
@@ -340,7 +344,7 @@
     }

     if (t.filter(multiaddrs).length > 0) {
-      this._switch.transport.add(t.tag || t.constructor.name, t)
+      this._switch.transport.add(t.tag || t[Symbol.toStringTag], t)
     } else if (WebSockets.isWebSockets(t)) {
       // TODO find a cleaner way to signal that a transport is always used
       // for dialing, even if no listener
@@ -461,13 +465,14 @@
       }
       cb()
     },
-    (cb) => {
-      // Ensures idempotency for restarts
-      this._switch.transport.removeAll(cb)
-    },
     (cb) => {
       this.connectionManager.stop()
       this._switch.stop(cb)
     },
+    (cb) => {
+      // Ensures idempotent restarts, ignore any errors
+      // from removeAll, they're not useful at this point
+      this._switch.transport.removeAll(() => cb())
+    }
   ], (err) => {
     if (err) {
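To see what the `notStarted` helper changes for callers: `dial` used to fail an assertion on a node that was not started, while it now calls back with an `err-code` error whose `code` is `ERR_NODE_NOT_STARTED` (the state machine tests further down rely on this). A small sketch, where `node` and `otherPeerInfo` are placeholders:

```js
// `node` is a libp2p bundle that has been created but not started yet.
node.dial(otherPeerInfo, (err) => {
  if (err && err.code === 'ERR_NODE_NOT_STARTED') {
    // 0.24.x would have thrown an AssertionError here; 0.25.x reports the state instead
    node.start((err) => {
      if (err) { return console.error(err) }
      node.dial(otherPeerInfo, (err, conn) => {
        // connected (or a different error) now that the node is started
      })
    })
  }
})
```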
@@ -33,7 +33,7 @@ module.exports = (node) => {
       subscribe(callback)
     },

-    unsubscribe: (topic, handler) => {
+    unsubscribe: (topic, handler, callback) => {
       if (!node.isStarted() && !floodSub.started) {
         throw new Error(NOT_STARTED_YET)
       }

@@ -43,6 +43,10 @@
       if (floodSub.listenerCount(topic) === 0) {
         floodSub.unsubscribe(topic)
       }
+
+      if (typeof callback === 'function') {
+        setImmediate(() => callback())
+      }
     },

     publish: (topic, data, callback) => {
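A short sketch of the new optional `unsubscribe` callback (the topic name and handler are illustrative; the same pattern appears in the pubsub test changes further down):

```js
const handler = (msg) => console.log(msg.data.toString())

node.pubsub.subscribe('news', handler, (err) => {
  if (err) { throw err }

  // Later: the third argument is new in 0.25 and is invoked on the next tick
  node.pubsub.unsubscribe('news', handler, (err) => {
    if (err) { throw err }
    console.log('no longer subscribed to "news"')
  })
})
```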
src/util/index.js (new file, 33 lines)

@@ -0,0 +1,33 @@
'use strict'
const once = require('once')

/**
 * Registers `handler` to each event in `events`. The `handler`
 * will only be called for the first event fired, at which point
 * the `handler` will be removed as a listener.
 *
 * Ensures `handler` is only called once.
 *
 * @example
 * // will call `callback` when `start` or `error` is emitted by `this`
 * emitFirst(this, ['error', 'start'], callback)
 *
 * @private
 * @param {EventEmitter} emitter The emitter to listen on
 * @param {Array<string>} events The events to listen for
 * @param {function(*)} handler The handler to call when an event is triggered
 * @returns {void}
 */
function emitFirst (emitter, events, handler) {
  handler = once(handler)
  events.forEach((e) => {
    emitter.once(e, (...args) => {
      events.forEach((ev) => {
        emitter.removeListener(ev, handler)
      })
      handler.apply(emitter, args)
    })
  })
}

module.exports.emitFirst = emitFirst
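A standalone sketch of the helper in use (the require path assumes the repository root):

```js
const { EventEmitter } = require('events')
const { emitFirst } = require('./src/util')

const emitter = new EventEmitter()

// The callback fires once, for whichever of 'error' or 'start' is emitted first.
emitFirst(emitter, ['error', 'start'], (payload) => {
  console.log('first event payload:', payload)
})

emitter.emit('start', 'started')            // callback runs with 'started'
emitter.emit('error', new Error('ignored')) // no-op: handler already consumed
```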
@@ -11,6 +11,7 @@ const WS = require('libp2p-websockets')
 const Bootstrap = require('libp2p-bootstrap')
 const DelegatedPeerRouter = require('libp2p-delegated-peer-routing')
 const DelegatedContentRouter = require('libp2p-delegated-content-routing')
+const DHT = require('libp2p-kad-dht')

 const validateConfig = require('../src/config').validate

@@ -62,7 +63,8 @@ describe('configuration', () => {
       peerInfo,
       modules: {
         transport: [ WS ],
-        peerDiscovery: [ Bootstrap ]
+        peerDiscovery: [ Bootstrap ],
+        dht: DHT
       },
       config: {
         peerDiscovery: {
@@ -78,7 +80,8 @@ describe('configuration', () => {
       peerInfo,
       modules: {
         transport: [ WS ],
-        peerDiscovery: [ Bootstrap ]
+        peerDiscovery: [ Bootstrap ],
+        dht: DHT
       },
       config: {
         peerDiscovery: {
@@ -88,8 +91,17 @@ describe('configuration', () => {
         }
       },
       EXPERIMENTAL: {
-        pubsub: false,
-        dht: false
+        pubsub: false
       },
+      dht: {
+        kBucketSize: 20,
+        enabled: true,
+        randomWalk: {
+          enabled: true,
+          queriesPerPeriod: 1,
+          interval: 30000,
+          timeout: 10000
+        }
+      },
       relay: {
         enabled: true
@@ -110,7 +122,8 @@ describe('configuration', () => {
         transport: [ WS ],
         peerDiscovery: [ Bootstrap ],
         peerRouting: [ peerRouter ],
-        contentRouting: [ contentRouter ]
+        contentRouting: [ contentRouter ],
+        dht: DHT
       },
       config: {
         peerDiscovery: {
@@ -143,4 +156,51 @@ describe('configuration', () => {

     expect(() => validateConfig(options)).to.throw()
   })
+
+  it('should add defaults, validators and selectors for dht', () => {
+    const selectors = {}
+    const validators = {}
+
+    const options = {
+      peerInfo,
+      modules: {
+        transport: [WS],
+        dht: DHT
+      },
+      config: {
+        dht: {
+          selectors,
+          validators
+        }
+      }
+    }
+    const expected = {
+      peerInfo,
+      modules: {
+        transport: [WS],
+        dht: DHT
+      },
+      config: {
+        EXPERIMENTAL: {
+          pubsub: false
+        },
+        relay: {
+          enabled: true
+        },
+        dht: {
+          kBucketSize: 20,
+          enabled: true,
+          randomWalk: {
+            enabled: true,
+            queriesPerPeriod: 1,
+            interval: 30000,
+            timeout: 10000
+          },
+          selectors,
+          validators
+        }
+      }
+    }
+    expect(validateConfig(options)).to.deep.equal(expected)
+  })
 })
@@ -30,13 +30,7 @@ describe('.contentRouting', () => {
|
||||
before(function (done) {
|
||||
this.timeout(5 * 1000)
|
||||
const tasks = _times(5, () => (cb) => {
|
||||
createNode('/ip4/0.0.0.0/tcp/0', {
|
||||
config: {
|
||||
EXPERIMENTAL: {
|
||||
dht: true
|
||||
}
|
||||
}
|
||||
}, (err, node) => {
|
||||
createNode('/ip4/0.0.0.0/tcp/0', (err, node) => {
|
||||
expect(err).to.not.exist()
|
||||
node.start((err) => cb(err, node))
|
||||
})
|
||||
@@ -159,6 +153,9 @@ describe('.contentRouting', () => {
|
||||
contentRouting: [ delegate ]
|
||||
},
|
||||
config: {
|
||||
dht: {
|
||||
enabled: false
|
||||
},
|
||||
relay: {
|
||||
enabled: true,
|
||||
hop: {
|
||||
@@ -320,9 +317,6 @@ describe('.contentRouting', () => {
|
||||
enabled: true,
|
||||
active: false
|
||||
}
|
||||
},
|
||||
EXPERIMENTAL: {
|
||||
dht: true
|
||||
}
|
||||
}
|
||||
})
|
||||
@@ -387,7 +381,13 @@ describe('.contentRouting', () => {
|
||||
describe('no routers', () => {
|
||||
let nodeA
|
||||
before((done) => {
|
||||
createNode('/ip4/0.0.0.0/tcp/0', (err, node) => {
|
||||
createNode('/ip4/0.0.0.0/tcp/0', {
|
||||
config: {
|
||||
dht: {
|
||||
enabled: false
|
||||
}
|
||||
}
|
||||
}, (err, node) => {
|
||||
expect(err).to.not.exist()
|
||||
nodeA = node
|
||||
done()
|
||||
|
@@ -13,8 +13,10 @@ describe('libp2p creation', () => {
|
||||
createNode([], {
|
||||
config: {
|
||||
EXPERIMENTAL: {
|
||||
dht: true,
|
||||
pubsub: true
|
||||
},
|
||||
dht: {
|
||||
enabled: true
|
||||
}
|
||||
}
|
||||
}, (err, node) => {
|
||||
@@ -69,13 +71,11 @@ describe('libp2p creation', () => {
|
||||
createNode([], {
|
||||
config: {
|
||||
EXPERIMENTAL: {
|
||||
dht: false,
|
||||
pubsub: false
|
||||
}
|
||||
}
|
||||
}, (err, node) => {
|
||||
expect(err).to.not.exist()
|
||||
expect(node._dht).to.not.exist()
|
||||
expect(node._floodSub).to.not.exist()
|
||||
done()
|
||||
})
|
||||
|
@@ -17,12 +17,7 @@ describe('.dht', () => {
|
||||
|
||||
before(function (done) {
|
||||
createNode('/ip4/0.0.0.0/tcp/0', {
|
||||
datastore,
|
||||
config: {
|
||||
EXPERIMENTAL: {
|
||||
dht: true
|
||||
}
|
||||
}
|
||||
datastore
|
||||
}, (err, node) => {
|
||||
expect(err).to.not.exist()
|
||||
nodeA = node
|
||||
@@ -124,8 +119,8 @@ describe('.dht', () => {
|
||||
before(function (done) {
|
||||
createNode('/ip4/0.0.0.0/tcp/0', {
|
||||
config: {
|
||||
EXPERIMENTAL: {
|
||||
dht: false
|
||||
dht: {
|
||||
enabled: false
|
||||
}
|
||||
}
|
||||
}, (err, node) => {
|
||||
|
@@ -13,13 +13,20 @@ describe('libp2p state machine (fsm)', () => {
|
||||
describe('starting and stopping', () => {
|
||||
let node
|
||||
beforeEach((done) => {
|
||||
createNode([], (err, _node) => {
|
||||
createNode([], {
|
||||
config: {
|
||||
dht: {
|
||||
enabled: false
|
||||
}
|
||||
}
|
||||
}, (err, _node) => {
|
||||
node = _node
|
||||
done(err)
|
||||
})
|
||||
})
|
||||
afterEach(() => {
|
||||
node.removeAllListeners()
|
||||
sinon.restore()
|
||||
})
|
||||
after((done) => {
|
||||
node.stop(done)
|
||||
@@ -58,6 +65,23 @@ describe('libp2p state machine (fsm)', () => {
|
||||
node.start()
|
||||
})
|
||||
|
||||
it('should callback with an error when it occurs on stop', (done) => {
|
||||
const error = new Error('some error starting')
|
||||
node.once('start', () => {
|
||||
node.once('error', (err) => {
|
||||
expect(err).to.eql(error).mark()
|
||||
})
|
||||
node.stop((err) => {
|
||||
expect(err).to.eql(error).mark()
|
||||
})
|
||||
})
|
||||
|
||||
expect(2).checks(done)
|
||||
|
||||
sinon.stub(node._switch, 'stop').callsArgWith(0, error)
|
||||
node.start()
|
||||
})
|
||||
|
||||
it('should noop when starting a started node', (done) => {
|
||||
node.once('start', () => {
|
||||
node.state.on('STARTING', () => {
|
||||
@@ -110,9 +134,35 @@ describe('libp2p state machine (fsm)', () => {
|
||||
throw new Error('should not start')
|
||||
})
|
||||
|
||||
expect(2).checks(done)
|
||||
expect(3).checks(done)
|
||||
|
||||
node.start()
|
||||
node.start((err) => {
|
||||
expect(err).to.eql(error).mark()
|
||||
})
|
||||
})
|
||||
|
||||
it('should not dial when the node is stopped', (done) => {
|
||||
node.on('stop', () => {
|
||||
node.dial(null, (err) => {
|
||||
expect(err).to.exist()
|
||||
expect(err.code).to.eql('ERR_NODE_NOT_STARTED')
|
||||
done()
|
||||
})
|
||||
})
|
||||
|
||||
node.stop()
|
||||
})
|
||||
|
||||
it('should not dial (fsm) when the node is stopped', (done) => {
|
||||
node.on('stop', () => {
|
||||
node.dialFSM(null, null, (err) => {
|
||||
expect(err).to.exist()
|
||||
expect(err.code).to.eql('ERR_NODE_NOT_STARTED')
|
||||
done()
|
||||
})
|
||||
})
|
||||
|
||||
node.stop()
|
||||
})
|
||||
})
|
||||
})
|
||||
|
@@ -24,13 +24,7 @@ describe('.peerRouting', () => {
|
||||
|
||||
before('create the outer ring of connections', (done) => {
|
||||
const tasks = _times(5, () => (cb) => {
|
||||
createNode('/ip4/0.0.0.0/tcp/0', {
|
||||
config: {
|
||||
EXPERIMENTAL: {
|
||||
dht: true
|
||||
}
|
||||
}
|
||||
}, (err, node) => {
|
||||
createNode('/ip4/0.0.0.0/tcp/0', (err, node) => {
|
||||
expect(err).to.not.exist()
|
||||
node.start((err) => cb(err, node))
|
||||
})
|
||||
@@ -112,6 +106,11 @@ describe('.peerRouting', () => {
|
||||
createNode('/ip4/0.0.0.0/tcp/0', {
|
||||
modules: {
|
||||
peerRouting: [ delegate ]
|
||||
},
|
||||
config: {
|
||||
dht: {
|
||||
enabled: false
|
||||
}
|
||||
}
|
||||
}, (err, node) => {
|
||||
expect(err).to.not.exist()
|
||||
@@ -213,11 +212,6 @@ describe('.peerRouting', () => {
|
||||
createNode('/ip4/0.0.0.0/tcp/0', {
|
||||
modules: {
|
||||
peerRouting: [ delegate ]
|
||||
},
|
||||
config: {
|
||||
EXPERIMENTAL: {
|
||||
dht: true
|
||||
}
|
||||
}
|
||||
}, (err, node) => {
|
||||
expect(err).to.not.exist()
|
||||
@@ -270,7 +264,13 @@ describe('.peerRouting', () => {
|
||||
describe('no routers', () => {
|
||||
let nodeA
|
||||
before((done) => {
|
||||
createNode('/ip4/0.0.0.0/tcp/0', (err, node) => {
|
||||
createNode('/ip4/0.0.0.0/tcp/0', {
|
||||
config: {
|
||||
dht: {
|
||||
enabled: false
|
||||
}
|
||||
}
|
||||
}, (err, node) => {
|
||||
expect(err).to.not.exist()
|
||||
nodeA = node
|
||||
done()
|
||||
|
@@ -9,6 +9,7 @@ const PeerId = require('peer-id')
|
||||
const waterfall = require('async/waterfall')
|
||||
const WS = require('libp2p-websockets')
|
||||
const defaultsDeep = require('@nodeutils/defaults-deep')
|
||||
const DHT = require('libp2p-kad-dht')
|
||||
|
||||
const Libp2p = require('../src')
|
||||
|
||||
@@ -23,7 +24,8 @@ describe('private network', () => {
|
||||
config = {
|
||||
peerInfo,
|
||||
modules: {
|
||||
transport: [ WS ]
|
||||
transport: [ WS ],
|
||||
dht: DHT
|
||||
}
|
||||
}
|
||||
cb()
|
||||
|
@@ -5,9 +5,10 @@
|
||||
|
||||
const chai = require('chai')
|
||||
chai.use(require('dirty-chai'))
|
||||
chai.use(require('chai-checkmark'))
|
||||
const expect = chai.expect
|
||||
const parallel = require('async/parallel')
|
||||
const waterfall = require('async/waterfall')
|
||||
const series = require('async/series')
|
||||
const _times = require('lodash.times')
|
||||
|
||||
const createNode = require('./utils/create-node')
|
||||
@@ -52,26 +53,39 @@ function stopTwo (nodes, callback) {
|
||||
// TODO: consider if all or some of those should come here
|
||||
describe('.pubsub', () => {
|
||||
describe('.pubsub on (default)', (done) => {
|
||||
it('start two nodes and send one message', (done) => {
|
||||
waterfall([
|
||||
(cb) => startTwo(cb),
|
||||
(nodes, cb) => {
|
||||
const data = Buffer.from('test')
|
||||
nodes[0].pubsub.subscribe('pubsub',
|
||||
(msg) => {
|
||||
expect(msg.data).to.eql(data)
|
||||
cb(null, nodes)
|
||||
},
|
||||
(err) => {
|
||||
expect(err).to.not.exist()
|
||||
setTimeout(() => nodes[1].pubsub.publish('pubsub', data, (err) => {
|
||||
expect(err).to.not.exist()
|
||||
}), 500)
|
||||
}
|
||||
)
|
||||
},
|
||||
(nodes, cb) => stopTwo(nodes, cb)
|
||||
], done)
|
||||
it('start two nodes and send one message, then unsubscribe', (done) => {
|
||||
// Check the final series error, and the publish handler
|
||||
expect(2).checks(done)
|
||||
|
||||
let nodes
|
||||
const data = Buffer.from('test')
|
||||
const handler = (msg) => {
|
||||
// verify the data is correct and mark the expect
|
||||
expect(msg.data).to.eql(data).mark()
|
||||
}
|
||||
|
||||
series([
|
||||
// Start the nodes
|
||||
(cb) => startTwo((err, _nodes) => {
|
||||
nodes = _nodes
|
||||
cb(err)
|
||||
}),
|
||||
// subscribe on the first
|
||||
(cb) => nodes[0].pubsub.subscribe('pubsub', handler, cb),
|
||||
// Wait a moment before publishing
|
||||
(cb) => setTimeout(cb, 500),
|
||||
// publish on the second
|
||||
(cb) => nodes[1].pubsub.publish('pubsub', data, cb),
|
||||
// Wait a moment before unsubscribing
|
||||
(cb) => setTimeout(cb, 500),
|
||||
// unsubscribe on the first
|
||||
(cb) => nodes[0].pubsub.unsubscribe('pubsub', handler, cb),
|
||||
// Stop both nodes
|
||||
(cb) => stopTwo(nodes, cb)
|
||||
], (err) => {
|
||||
// Verify there was no error, and mark the expect
|
||||
expect(err).to.not.exist().mark()
|
||||
})
|
||||
})
|
||||
})
|
||||
|
||||
|
@@ -15,9 +15,6 @@ describe('libp2p', () => {
|
||||
mdns: {
|
||||
enabled: false
|
||||
}
|
||||
},
|
||||
EXPERIMENTAL: {
|
||||
dht: true
|
||||
}
|
||||
}
|
||||
}, (err, node) => {
|
||||
|
@@ -215,7 +215,7 @@ describe('stream muxing', () => {
|
||||
|
||||
nodeA.dial(nodeB.peerInfo, (err) => {
|
||||
expect(err).to.not.exist()
|
||||
expect(Object.keys(nodeA._switch.muxedConns)).to.have.length(0)
|
||||
expect(nodeA._switch.connection.getAll()).to.have.length(0)
|
||||
cb()
|
||||
})
|
||||
},
|
||||
|
@@ -102,7 +102,7 @@ describe('transports', () => {
|
||||
function check () {
|
||||
const peers = nodeA.peerBook.getAll()
|
||||
expect(Object.keys(peers)).to.have.length(1)
|
||||
expect(Object.keys(nodeA._switch.muxedConns)).to.have.length(0)
|
||||
expect(nodeA._switch.connection.getAll()).to.have.length(0)
|
||||
done()
|
||||
}
|
||||
})
|
||||
@@ -142,7 +142,7 @@ describe('transports', () => {
|
||||
const peers = nodeA.peerBook.getAll()
|
||||
expect(err).to.not.exist()
|
||||
expect(Object.keys(peers)).to.have.length(1)
|
||||
expect(Object.keys(nodeA._switch.muxedConns)).to.have.length(0)
|
||||
expect(nodeA._switch.connection.getAll()).to.have.length(0)
|
||||
done()
|
||||
}
|
||||
})
|
||||
@@ -153,16 +153,17 @@ describe('transports', () => {
|
||||
expect(err).to.not.exist()
|
||||
|
||||
connFSM.once('muxed', () => {
|
||||
expect(nodeA._switch.muxedConns).to.have.any.keys(
|
||||
peerB.id.toB58String()
|
||||
)
|
||||
expect(
|
||||
nodeA._switch.connection.getAllById(peerB.id.toB58String())
|
||||
).to.have.length(1)
|
||||
|
||||
connFSM.once('error', done)
|
||||
connFSM.once('close', () => {
|
||||
// ensure the connection is closed
|
||||
expect(nodeA._switch.muxedConns).to.not.have.any.keys([
|
||||
peerB.id.toB58String()
|
||||
])
|
||||
expect(
|
||||
nodeA._switch.connection.getAllById(peerB.id.toB58String())
|
||||
).to.have.length(0)
|
||||
|
||||
done()
|
||||
})
|
||||
|
||||
@@ -312,7 +313,7 @@ describe('transports', () => {
|
||||
function check () {
|
||||
const peers = node1.peerBook.getAll()
|
||||
expect(Object.keys(peers)).to.have.length(1)
|
||||
expect(Object.keys(node1._switch.muxedConns)).to.have.length(0)
|
||||
expect(node1._switch.connection.getAll()).to.have.length(0)
|
||||
done()
|
||||
}
|
||||
})
|
||||
@@ -326,7 +327,7 @@ describe('transports', () => {
|
||||
|
||||
function check () {
|
||||
// Verify both nodes are connected to node 3
|
||||
if (node1._switch.muxedConns[b58Id] && node2._switch.muxedConns[b58Id]) {
|
||||
if (node1._switch.connection.getAllById(b58Id) && node2._switch.connection.getAllById(b58Id)) {
|
||||
done()
|
||||
}
|
||||
}
|
||||
@@ -417,7 +418,7 @@ describe('transports', () => {
|
||||
function check () {
|
||||
const peers = node1.peerBook.getAll()
|
||||
expect(Object.keys(peers)).to.have.length(1)
|
||||
expect(Object.keys(node1._switch.muxedConns)).to.have.length(0)
|
||||
expect(node1._switch.connection.getAll()).to.have.length(0)
|
||||
done()
|
||||
}
|
||||
})
|
||||
@@ -430,8 +431,8 @@ describe('transports', () => {
|
||||
|
||||
function check () {
|
||||
if (++counter === 3) {
|
||||
expect(Object.keys(node1._switch.muxedConns).length).to.equal(1)
|
||||
expect(Object.keys(node2._switch.muxedConns).length).to.equal(1)
|
||||
expect(node1._switch.connection.getAll()).to.have.length(1)
|
||||
expect(node2._switch.connection.getAll()).to.have.length(1)
|
||||
done()
|
||||
}
|
||||
}
|
||||
|
@@ -91,14 +91,13 @@ describe('transports', () => {
|
||||
(cb) => {
|
||||
const peers = nodeA.peerBook.getAll()
|
||||
expect(Object.keys(peers)).to.have.length(1)
|
||||
expect(Object.keys(nodeA._switch.muxedConns)).to.have.length(0)
|
||||
expect(nodeA._switch.connection.getAll()).to.have.length(0)
|
||||
cb()
|
||||
},
|
||||
(cb) => {
|
||||
const peers = nodeB.peerBook.getAll()
|
||||
expect(Object.keys(peers)).to.have.length(1)
|
||||
|
||||
expect(Object.keys(nodeB._switch.muxedConns)).to.have.length(0)
|
||||
expect(nodeB._switch.connection.getAll()).to.have.length(0)
|
||||
cb()
|
||||
}
|
||||
], done)
|
||||
@@ -117,15 +116,13 @@ describe('transports', () => {
|
||||
(cb) => {
|
||||
const peers = nodeA.peerBook.getAll()
|
||||
expect(Object.keys(peers)).to.have.length(1)
|
||||
|
||||
expect(Object.keys(nodeA._switch.muxedConns)).to.have.length(1)
|
||||
expect(nodeA._switch.connection.getAll()).to.have.length(1)
|
||||
cb()
|
||||
},
|
||||
(cb) => {
|
||||
const peers = nodeB.peerBook.getAll()
|
||||
expect(Object.keys(peers)).to.have.length(1)
|
||||
|
||||
expect(Object.keys(nodeA._switch.muxedConns)).to.have.length(1)
|
||||
expect(nodeA._switch.connection.getAll()).to.have.length(1)
|
||||
cb()
|
||||
}
|
||||
], () => tryEcho(conn, done))
|
||||
@@ -143,15 +140,13 @@ describe('transports', () => {
|
||||
(cb) => {
|
||||
const peers = nodeA.peerBook.getAll()
|
||||
expect(Object.keys(peers)).to.have.length(1)
|
||||
|
||||
expect(Object.keys(nodeA._switch.muxedConns)).to.have.length(0)
|
||||
expect(nodeA._switch.connection.getAll()).to.have.length(0)
|
||||
cb()
|
||||
},
|
||||
(cb) => {
|
||||
const peers = nodeB.peerBook.getAll()
|
||||
expect(Object.keys(peers)).to.have.length(1)
|
||||
|
||||
expect(Object.keys(nodeB._switch.muxedConns)).to.have.length(0)
|
||||
expect(nodeB._switch.connection.getAll()).to.have.length(0)
|
||||
cb()
|
||||
}
|
||||
], done)
|
||||
@@ -170,13 +165,13 @@ describe('transports', () => {
|
||||
(cb) => {
|
||||
const peers = nodeA.peerBook.getAll()
|
||||
expect(Object.keys(peers)).to.have.length(1)
|
||||
expect(Object.keys(nodeA._switch.muxedConns)).to.have.length(1)
|
||||
expect(nodeA._switch.connection.getAll()).to.have.length(1)
|
||||
cb()
|
||||
},
|
||||
(cb) => {
|
||||
const peers = nodeB.peerBook.getAll()
|
||||
expect(Object.keys(peers)).to.have.length(1)
|
||||
expect(Object.keys(nodeA._switch.muxedConns)).to.have.length(1)
|
||||
expect(nodeA._switch.connection.getAll()).to.have.length(1)
|
||||
cb()
|
||||
}
|
||||
], () => tryEcho(conn, done))
|
||||
@@ -194,13 +189,13 @@ describe('transports', () => {
|
||||
(cb) => {
|
||||
const peers = nodeA.peerBook.getAll()
|
||||
expect(Object.keys(peers)).to.have.length(1)
|
||||
expect(Object.keys(nodeA._switch.muxedConns)).to.have.length(0)
|
||||
expect(nodeA._switch.connection.getAll()).to.have.length(0)
|
||||
cb()
|
||||
},
|
||||
(cb) => {
|
||||
const peers = nodeB.peerBook.getAll()
|
||||
expect(Object.keys(peers)).to.have.length(1)
|
||||
expect(Object.keys(nodeB._switch.muxedConns)).to.have.length(0)
|
||||
expect(nodeB._switch.connection.getAll()).to.have.length(0)
|
||||
cb()
|
||||
}
|
||||
], done)
|
||||
@@ -213,16 +208,16 @@ describe('transports', () => {
|
||||
expect(err).to.not.exist()
|
||||
|
||||
connFSM.once('muxed', () => {
|
||||
expect(nodeA._switch.muxedConns).to.have.any.keys(
|
||||
nodeB.peerInfo.id.toB58String()
|
||||
)
|
||||
expect(
|
||||
nodeA._switch.connection.getAllById(nodeB.peerInfo.id.toB58String())
|
||||
).to.have.length(1)
|
||||
|
||||
connFSM.once('error', done)
|
||||
connFSM.once('close', () => {
|
||||
// ensure the connection is closed
|
||||
expect(nodeA._switch.muxedConns).to.not.have.any.keys([
|
||||
nodeB.peerInfo.id.toB58String()
|
||||
])
|
||||
expect(
|
||||
nodeA._switch.connection.getAllById(nodeB.peerInfo.id.toB58String())
|
||||
).to.have.length(0)
|
||||
done()
|
||||
})
|
||||
|
||||
@@ -235,9 +230,9 @@ describe('transports', () => {
|
||||
nodeA.dialFSM(nodeB.peerInfo, '/echo/1.0.0', (err, connFSM) => {
|
||||
expect(err).to.not.exist()
|
||||
connFSM.once('connection', (conn) => {
|
||||
expect(nodeA._switch.muxedConns).to.have.all.keys([
|
||||
nodeB.peerInfo.id.toB58String()
|
||||
])
|
||||
expect(
|
||||
nodeA._switch.connection.getAllById(nodeB.peerInfo.id.toB58String())
|
||||
).to.have.length(1)
|
||||
tryEcho(conn, () => {
|
||||
connFSM.close()
|
||||
})
|
||||
@@ -245,9 +240,9 @@ describe('transports', () => {
|
||||
connFSM.once('error', done)
|
||||
connFSM.once('close', () => {
|
||||
// ensure the connection is closed
|
||||
expect(nodeA._switch.muxedConns).to.not.have.any.keys([
|
||||
nodeB.peerInfo.id.toB58String()
|
||||
])
|
||||
expect(
|
||||
nodeA._switch.connection.getAllById(nodeB.peerInfo.id.toB58String())
|
||||
).to.have.length(0)
|
||||
done()
|
||||
})
|
||||
})
|
||||
@@ -309,13 +304,13 @@ describe('transports', () => {
|
||||
(cb) => {
|
||||
const peers = nodeTCP.peerBook.getAll()
|
||||
expect(Object.keys(peers)).to.have.length(1)
|
||||
expect(Object.keys(nodeTCP._switch.muxedConns)).to.have.length(1)
|
||||
expect(nodeTCP._switch.connection.getAll()).to.have.length(1)
|
||||
cb()
|
||||
},
|
||||
(cb) => {
|
||||
const peers = nodeTCPnWS.peerBook.getAll()
|
||||
expect(Object.keys(peers)).to.have.length(1)
|
||||
expect(Object.keys(nodeTCPnWS._switch.muxedConns)).to.have.length(1)
|
||||
expect(nodeTCPnWS._switch.connection.getAll()).to.have.length(1)
|
||||
cb()
|
||||
}
|
||||
], done)
|
||||
@@ -333,14 +328,13 @@ describe('transports', () => {
|
||||
(cb) => {
|
||||
const peers = nodeTCP.peerBook.getAll()
|
||||
expect(Object.keys(peers)).to.have.length(1)
|
||||
expect(Object.keys(nodeTCP._switch.muxedConns)).to.have.length(0)
|
||||
|
||||
expect(nodeTCP._switch.connection.getAll()).to.have.length(0)
|
||||
cb()
|
||||
},
|
||||
(cb) => {
|
||||
const peers = nodeTCPnWS.peerBook.getAll()
|
||||
expect(Object.keys(peers)).to.have.length(1)
|
||||
expect(Object.keys(nodeTCPnWS._switch.muxedConns)).to.have.length(0)
|
||||
expect(nodeTCPnWS._switch.connection.getAll()).to.have.length(0)
|
||||
cb()
|
||||
}
|
||||
], done)
|
||||
@@ -360,13 +354,13 @@ describe('transports', () => {
|
||||
(cb) => {
|
||||
const peers = nodeTCPnWS.peerBook.getAll()
|
||||
expect(Object.keys(peers)).to.have.length(2)
|
||||
expect(Object.keys(nodeTCPnWS._switch.muxedConns)).to.have.length(1)
|
||||
expect(nodeTCPnWS._switch.connection.getAll()).to.have.length(1)
|
||||
cb()
|
||||
},
|
||||
(cb) => {
|
||||
const peers = nodeWS.peerBook.getAll()
|
||||
expect(Object.keys(peers)).to.have.length(1)
|
||||
expect(Object.keys(nodeWS._switch.muxedConns)).to.have.length(1)
|
||||
expect(nodeWS._switch.connection.getAll()).to.have.length(1)
|
||||
cb()
|
||||
}
|
||||
], done)
|
||||
@@ -384,14 +378,14 @@ describe('transports', () => {
|
||||
(cb) => {
|
||||
const peers = nodeTCPnWS.peerBook.getAll()
|
||||
expect(Object.keys(peers)).to.have.length(2)
|
||||
expect(Object.keys(nodeTCPnWS._switch.muxedConns)).to.have.length(0)
|
||||
expect(nodeTCPnWS._switch.connection.getAll()).to.have.length(0)
|
||||
|
||||
cb()
|
||||
},
|
||||
(cb) => {
|
||||
const peers = nodeWS.peerBook.getAll()
|
||||
expect(Object.keys(peers)).to.have.length(1)
|
||||
expect(Object.keys(nodeWS._switch.muxedConns)).to.have.length(0)
|
||||
expect(nodeWS._switch.connection.getAll()).to.have.length(0)
|
||||
cb()
|
||||
}
|
||||
], done)
|
||||
@@ -427,7 +421,7 @@ describe('transports', () => {
|
||||
cb()
|
||||
}),
|
||||
(cb) => {
|
||||
const wstar = new WRTCStar({wrtc: wrtc})
|
||||
const wstar = new WRTCStar({ wrtc: wrtc })
|
||||
|
||||
createNode([
|
||||
'/ip4/0.0.0.0/tcp/0',
|
||||
@@ -474,7 +468,7 @@ describe('transports', () => {
|
||||
}),
|
||||
|
||||
(cb) => {
|
||||
const wstar = new WRTCStar({wrtc: wrtc})
|
||||
const wstar = new WRTCStar({ wrtc: wrtc })
|
||||
|
||||
createNode([
|
||||
'/ip4/127.0.0.1/tcp/24642/ws/p2p-webrtc-star'
|
||||
@@ -516,7 +510,7 @@ describe('transports', () => {
|
||||
let i = 1;
|
||||
[nodeAll, otherNode].forEach((node) => {
|
||||
expect(Object.keys(node.peerBook.getAll())).to.have.length(i-- ? peers : 1)
|
||||
expect(Object.keys(node._switch.muxedConns)).to.have.length(muxed)
|
||||
expect(node._switch.connection.getAll()).to.have.length(muxed)
|
||||
})
|
||||
callback()
|
||||
}
|
||||
@@ -678,7 +672,7 @@ describe('transports', () => {
|
||||
let i = 1;
|
||||
[nodeAll, otherNode].forEach((node) => {
|
||||
expect(Object.keys(node.peerBook.getAll())).to.have.length(i-- ? peers : 1)
|
||||
expect(Object.keys(node._switch.muxedConns)).to.have.length(muxed)
|
||||
expect(node._switch.connection.getAll()).to.have.length(muxed)
|
||||
})
|
||||
done()
|
||||
}
|
||||
|
@@ -77,7 +77,7 @@ describe('Turbolence tests', () => {
|
||||
function check () {
|
||||
const peers = nodeA.peerBook.getAll()
|
||||
expect(Object.keys(peers)).to.have.length(1)
|
||||
expect(Object.keys(nodeA.switch.muxedConns)).to.have.length(0)
|
||||
expect(nodeA._switch.connection.getAll()).to.have.length(0)
|
||||
done()
|
||||
}
|
||||
})
|
||||
|
@@ -80,10 +80,12 @@ class Node extends libp2p {
|
||||
},
|
||||
dht: {
|
||||
kBucketSize: 20,
|
||||
enabledDiscovery: true
|
||||
randomWalk: {
|
||||
enabled: true
|
||||
},
|
||||
enabled: false
|
||||
},
|
||||
EXPERIMENTAL: {
|
||||
dht: false,
|
||||
pubsub: false
|
||||
}
|
||||
}
|
||||
|
@@ -73,10 +73,12 @@ class Node extends libp2p {
|
||||
},
|
||||
dht: {
|
||||
kBucketSize: 20,
|
||||
enabledDiscovery: true
|
||||
randomWalk: {
|
||||
enabled: true
|
||||
},
|
||||
enabled: true
|
||||
},
|
||||
EXPERIMENTAL: {
|
||||
dht: false,
|
||||
pubsub: false
|
||||
}
|
||||
}
|