Compare commits


115 Commits

Author SHA1 Message Date
665f7c6d66 chore: release version v0.18.2 2016-05-24 14:08:13 +01:00
03e02dfba9 chore: update contributors 2016-05-24 14:08:12 +01:00
019f84885d Merge pull request #68 from diasdavid/fix/identify
Make identify not freak out on missing pubkey, it is ok.. this enables webrtc-star discovery to work fine
2016-05-24 14:04:48 +01:00
ceaae4c53f Make identify not freak out on missing pubkey, it is ok.. this enables webrtc-star discovery to work fine 2016-05-24 13:51:34 +01:00
8e5d5c5694 chore: release version v0.18.1 2016-05-23 18:16:38 +01:00
53bb2d3e07 chore: update contributors 2016-05-23 18:16:37 +01:00
47f72296ed kick the machine 2016-05-23 16:47:32 +01:00
2d20a75114 fix everything™ 2016-05-23 16:46:35 +01:00
8212c09088 This makes things go nuts 2016-05-23 11:09:48 +01:00
c69254b00e chore: release version v0.18.0 2016-05-23 08:38:08 +01:00
4f36eda28f chore: update contributors 2016-05-23 08:38:08 +01:00
c53068c803 Merge pull request #64 from diasdavid/add/webrtc-star
restructure and add spdy to browser tests
2016-05-23 08:06:36 +01:00
d771a12d86 add webrtc-star transport tests 2016-05-22 18:32:05 +01:00
3cd5cbb8ec restructure and add spdy to browser tests 2016-05-22 14:52:08 +01:00
09acdab0d3 chore: release version v0.17.0 2016-05-21 11:47:07 +01:00
54128c228c chore: update contributors 2016-05-21 11:47:07 +01:00
872dfb2c03 s/identify/id/ 2016-05-21 11:32:28 +01:00
5f406c6ea0 chore: release version v0.16.0 2016-05-20 12:00:39 +01:00
8f785284dc chore: update contributors 2016-05-20 12:00:39 +01:00
ee0b1eaea5 Merge pull request #61 from diasdavid/update-spdy
update spdy
2016-05-20 11:57:50 +01:00
1791c19c8c update spdy 2016-05-20 11:03:22 +01:00
ea94a81a52 chore: release version v0.15.0 2016-05-19 00:50:59 +01:00
f3e705acc7 chore: update contributors 2016-05-19 00:50:59 +01:00
510b458de2 Merge pull request #60 from diasdavid/feat/crypto-not-crypto
plaintext proto to be compatible with go-ipfs
2016-05-18 23:14:37 +01:00
5aa74ee25d plaintext proto to be compatible with go-ipfs 2016-05-18 22:59:58 +01:00
d991c475df chore: release version v0.14.0 2016-05-18 11:16:01 +01:00
40da1ec2b1 chore: update contributors 2016-05-18 11:16:01 +01:00
7789f5da19 Merge pull request #59 from diasdavid/update/multistream
WIP add new version of multistream
2016-05-18 10:56:48 +01:00
a224b0bc54 add new version of multistream 2016-05-18 10:47:55 +01:00
9d958c3209 chore: release version v0.13.0 2016-05-18 04:23:59 +01:00
dc518f4178 chore: update contributors 2016-05-18 04:23:59 +01:00
1eacc5bc7b Merge pull request #58 from diasdavid/update-deps
update deps to support latest multiaddr
2016-05-18 04:21:32 +01:00
f6193301a5 update deps to support latest multiaddr 2016-05-17 23:28:25 +01:00
8d792fe954 chore: release version v0.12.11 2016-05-11 12:54:39 +01:00
de506873e7 chore: update contributors 2016-05-11 12:54:39 +01:00
61340e3909 Merge pull request #56 from diasdavid/fix/ws-ipfs
fix: handling of ipfs addresses in available transports
2016-05-11 12:52:33 +01:00
8e1413b984 fix: handling of ipfs addresses in available transports
and some refactoring into multiple files
2016-05-11 13:29:52 +02:00
163624c218 chore: release version v0.12.10 2016-05-10 10:26:54 +01:00
6fd8b076e2 chore: update contributors 2016-05-10 10:26:53 +01:00
e54ebb65fe Merge pull request #54 from diasdavid/feat/listen
Feat/listen
2016-05-10 11:14:04 +02:00
9c8a8bb26b feat: add .listen method 2016-05-10 11:06:48 +02:00
3f29ff5d33 chore: release version v0.12.9 2016-05-09 10:59:04 +01:00
a712fd6d22 chore: update contributors 2016-05-09 10:59:04 +01:00
7079f10bcc Merge pull request #53 from diasdavid/test-fixes
test: cleanup and fix hanging tests
2016-05-09 10:52:40 +01:00
1210a9f613 test: cleanup and fix hanging tests 2016-05-09 11:37:24 +02:00
5c76907f3d Merge pull request #52 from diasdavid/fix/warm-a-warm-up-the-other-way-around
make sure it does not try to dial on empty proto and write tests for it
2016-05-09 08:16:41 +01:00
074e7e323b make sure it does not try to dial on empty proto and write tests for it 2016-05-09 07:56:06 +01:00
20994f5320 chore: release version v0.12.8 2016-05-08 22:48:43 +01:00
eac00292f2 chore: update contributors 2016-05-08 22:48:43 +01:00
bf768d3585 Merge pull request #51 from diasdavid/fix-errs
Cleaning up some things
2016-05-08 22:22:23 +01:00
05f799f983 update deps 2016-05-08 23:10:09 +02:00
a81c328bf7 actually fix things 2016-05-08 22:58:08 +02:00
a6ba60a5c4 handle errors when closing 2016-05-08 22:22:46 +02:00
594b770d8e try to appease the travis gods 2016-05-08 22:19:43 +02:00
dbf0d2c422 fix dependencies 2016-05-08 21:44:22 +02:00
275434f873 cleanup close handling 2016-05-08 21:35:04 +02:00
631dad8647 chore: release version v0.12.7 2016-05-06 18:28:37 +01:00
3eac0e0dd6 chore: update contributors 2016-05-06 18:28:37 +01:00
30d4bb641e one more test to check if connected endpoints are closed correctly 2016-05-06 18:27:19 +01:00
b0aeff8f53 chore: release version v0.12.6 2016-05-06 14:29:24 +01:00
998c71fc84 chore: update contributors 2016-05-06 14:29:24 +01:00
b31245adc8 Merge pull request #49 from diasdavid/fix/close-count
fix: call cb in close after all transport are closed
2016-05-06 14:19:13 +01:00
85a064765a fix: call cb in close after all transport are closed 2016-05-06 15:05:34 +02:00
fb56cc3c30 Merge pull request #48 from diasdavid/feat/unhandle
unhandle a protocol
2016-05-06 13:11:58 +01:00
03d0c52d4d unhandle a protocol 2016-05-06 12:49:31 +01:00
0aa7bb72e7 Merge pull request #47 from diasdavid/test/swarm-dial-no-proto
dialing in no proto is fine
2016-05-06 12:43:26 +01:00
e9b3d3496f dialing in no proto is fine 2016-05-06 12:31:23 +01:00
58e18dd01b chore: release version v0.12.5 2016-05-05 00:44:53 +01:00
fb017ebb07 chore: update contributors 2016-05-05 00:44:53 +01:00
08c4c169d6 remove unnecessary console.log 2016-05-05 00:13:48 +01:00
de927e8052 chore: release version v0.12.4 2016-05-04 20:13:00 +01:00
df8e61632b chore: update contributors 2016-05-04 20:13:00 +01:00
b453bd4f83 Merge pull request #46 from diasdavid/feat/improve-identify
Freeze handling conns till identify is finished on the incomming multiplexed streams
2016-05-04 21:11:22 +02:00
0143ab6449 freeze handling conns till identify is finished on the incomming multiplexed streams 2016-05-04 19:42:53 +01:00
02dd32e7df chore: release version v0.12.3 2016-05-04 16:57:00 +01:00
4fe91796cd chore: update contributors 2016-05-04 16:57:00 +01:00
352876cade Merge pull request #45 from diasdavid/feat/id-on-conns
attach peerId to the conn
2016-05-04 17:11:50 +02:00
41b700f509 attach peerId to the conn 2016-05-04 14:55:40 +01:00
eea7e91b15 chore: release version v0.12.2 2016-04-27 10:09:24 +01:00
b11a7972f5 chore: update contributors 2016-04-27 10:09:24 +01:00
15d5bc53fb update deps and npm scripts 2016-04-27 10:08:02 +01:00
9d911af8e0 Merge pull request #44 from diasdavid/feature/events
feature/events
2016-04-27 10:03:29 +01:00
9f1f3c82dc add peer-mux-closed event 2016-04-27 09:44:16 +01:00
d6a1f52962 add peer-mux-established event 2016-04-26 20:47:31 +01:00
7b536819b1 chore: update contributors 2016-04-25 02:29:18 +01:00
7158aaf702 chore: release version v0.12.1 2016-04-25 02:29:18 +01:00
bc87fad5f9 update deps 2016-04-25 02:27:34 +01:00
c9418399a7 bump version manually 2016-04-25 00:20:23 +01:00
2cac123405 chore: update contributors 2016-04-25 00:18:30 +01:00
ff47a9c228 chore: release version v0.11.8 2016-04-25 00:18:30 +01:00
f86a981eb2 update npm scripts 2016-04-25 00:17:07 +01:00
674d68000b chore: update contributors 2016-04-24 23:29:15 +01:00
ae371085c1 chore: release version v0.10.7 2016-04-24 23:29:15 +01:00
770bee3c66 Merge pull request #42 from diasdavid/fix/multiaddr
fix(identify): convert all addresses to multiaddr
2016-04-24 23:19:34 +01:00
6943e3e90b fix(identify): convert all addresses to multiaddr
Fixes #37
2016-04-24 18:39:35 +02:00
a008ebd5b9 chore: update contributors 2016-04-20 13:25:16 +01:00
20108d2de8 chore: release version v0.10.6 2016-04-20 13:25:16 +01:00
15fcfb737c Merge pull request #41 from diasdavid/feat/ipfs-multiaddr
feat: handle ipfs multiaddrs
2016-04-20 13:13:02 +01:00
0fa14c9608 feat: handle ipfs multiaddrs
Closes #38
2016-04-20 13:35:36 +02:00
ac7c8a150e Merge pull request #39 from diasdavid/fix/require
fix: always use fs.readFileSync
2016-04-19 21:43:44 +01:00
851c8ee2a3 fix: always use fs.readFileSync 2016-04-19 13:35:02 +02:00
7a3f9d08d5 chore: update contributors 2016-04-14 21:57:08 +01:00
52d60a7391 chore: release version v0.10.5 2016-04-14 21:57:08 +01:00
165068d05c add circle and bump up timeout, cause travis is slow 2016-04-14 20:52:03 +01:00
9baae15dcf Merge pull request #36 from dignifiedquire/cover
chore: Enable auto coverage reporting
2016-04-14 20:48:12 +01:00
b87524f36a chore: Enable auto coverage reporting 2016-04-14 15:25:30 +02:00
b0484c678e chore: release version v0.10.4 2016-04-14 13:30:49 +01:00
b6c498055f update deps 2016-04-14 12:15:50 +01:00
93fdedf67b chore: release version v0.10.3 2016-04-14 11:42:30 +01:00
9e3b6a80af update npm scripts 2016-04-14 11:37:13 +01:00
997c275139 bump version manually 2016-04-14 10:32:37 +01:00
ba33f2ecd8 add npmignore and bump version manually 2016-04-14 10:19:54 +01:00
c74e2594f8 bump version manually 2016-04-14 03:08:35 +01:00
61757793ed update npm scripts 2016-04-14 03:04:46 +01:00
01bd659ee8 vendor node-forge, update websockets dep 2016-04-14 03:03:54 +01:00
30 changed files with 30695 additions and 614 deletions

.npmignore (new file, +35 lines)

@ -0,0 +1,35 @@
test
# Logs
logs
*.log
npm-debug.log*
# Runtime data
pids
*.pid
*.seed
# Directory for instrumented libs generated by jscoverage/JSCover
lib-cov
# Coverage directory used by tools like istanbul
coverage
# Grunt intermediate storage (http://gruntjs.com/creating-plugins#storing-task-files)
.grunt
# node-waf configuration
.lock-wscript
# Compiled binary addons (http://nodejs.org/api/addons.html)
build/Release
# Dependency directory
node_modules
# Optional npm cache directory
.npm
# Optional REPL history
.node_repl_history

.travis.yml

@ -11,6 +11,7 @@ before_install:
script:
- npm run lint
- npm test
- npm run coverage
addons:
firefox: 'latest'
@ -18,3 +19,6 @@ addons:
before_script:
- export DISPLAY=:99.0
- sh -e /etc/init.d/xvfb start
after_success:
- npm run coverage-publish

README.md

@ -1,15 +1,15 @@
libp2p-swarm JavaScript implementation
======================================
[![](https://img.shields.io/badge/made%20by-Protocol%20Labs-blue.svg?style=flat-square)](http://ipn.io)
[![](https://img.shields.io/badge/project-IPFS-blue.svg?style=flat-square)](http://ipfs.io/)
[![](https://img.shields.io/badge/made%20by-Protocol%20Labs-blue.svg?style=flat-square)](http://ipn.io)
[![](https://img.shields.io/badge/project-IPFS-blue.svg?style=flat-square)](http://ipfs.io/)
[![](https://img.shields.io/badge/freenode-%23ipfs-blue.svg?style=flat-square)](http://webchat.freenode.net/?channels=%23ipfs)
[![Build Status](https://img.shields.io/travis/diasdavid/js-libp2p-swarm/master.svg?style=flat-square)](https://travis-ci.org/diasdavid/js-libp2p-swarm)
![](https://img.shields.io/badge/coverage-%3F-yellow.svg?style=flat-square)
[![Coverage Status](https://coveralls.io/repos/github/diasdavid/js-libp2p-swarm/badge.svg?branch=master)](https://coveralls.io/github/diasdavid/js-libp2p-swarm?branch=master)
[![Dependency Status](https://david-dm.org/diasdavid/js-libp2p-swarm.svg?style=flat-square)](https://david-dm.org/ipfs/js-libp2p-swarm)
[![js-standard-style](https://img.shields.io/badge/code%20style-standard-brightgreen.svg?style=flat-square)](https://github.com/feross/standard)
> libp2p swarm implementation in JavaScript
> libp2p swarm implementation in JavaScript.
# Description
@ -19,7 +19,7 @@ libp2p-swarm is used by libp2p but it can be also used as a standalone module.
# Usage
## Install
## Install
libp2p-swarm is available on npm and so, like any other npm module, just:
@ -47,10 +47,10 @@ peerInfo is a [PeerInfo](https://github.com/diasdavid/js-peer-info) object that
libp2p-swarm expects transports that implement [interface-transport](https://github.com/diasdavid/abstract-transport). For example [libp2p-tcp](https://github.com/diasdavid/js-libp2p-tcp).
- `key` - the transport identifier
- `transport` -
- `options`
- `callback`
- `key` - the transport identifier.
- `transport` -
- `options` -
- `callback` -
##### `swarm.transport.dial(key, multiaddrs, callback)`
@ -102,6 +102,10 @@ dial uses the best transport (whatever works first, in the future we can have so
- `protocol`
- `callback`
### `swarm.listen(callback)`
Start listening on all added transports that are available on the current `peerInfo`.
### `swarm.handle(protocol, handler)`
handle a new protocol.
@ -109,6 +113,12 @@ handle a new protocol.
- `protocol`
- `handler` - function called when we receive a dial on `protocol. Signature must be `function (conn) {}`
### `swarm.unhandle(protocol)`
unhandle a protocol.
- `protocol`
### `swarm.close(callback)`
close all the listeners and muxers.
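
A minimal usage sketch tying together the API documented above, assuming libp2p-tcp is installed alongside; the port and the /echo/1.0.0 protocol are illustrative:

'use strict'
const Swarm = require('libp2p-swarm')
const TCP = require('libp2p-tcp')
const Peer = require('peer-info')
const multiaddr = require('multiaddr')

const peer = new Peer()
peer.multiaddr.add(multiaddr('/ip4/127.0.0.1/tcp/0')) // port 0: the transport picks a free port

const swarm = new Swarm(peer)
swarm.transport.add('tcp', new TCP())

// register a protocol, listen on every added transport, then tear down
swarm.handle('/echo/1.0.0', (conn) => conn.pipe(conn))
swarm.listen((err) => {
  if (err) throw err
  swarm.unhandle('/echo/1.0.0')
  swarm.close((err) => {
    if (err) throw err
  })
})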

circle.yml (new file, +12 lines)

@ -0,0 +1,12 @@
machine:
node:
version: stable
dependencies:
pre:
- google-chrome --version
- wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
- sudo sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
- sudo apt-get update
- sudo apt-get --only-upgrade install google-chrome-stable
- google-chrome --version

gulpfile.js

@ -1,21 +1,31 @@
'use strict'
const gulp = require('gulp')
const Peer = require('peer-info')
const Id = require('peer-id')
const PeerInfo = require('peer-info')
const PeerId = require('peer-id')
const WebSockets = require('libp2p-websockets')
const Swarm = require('./src')
const spdy = require('libp2p-spdy')
const multiaddr = require('multiaddr')
const fs = require('fs')
const path = require('path')
const sigServer = require('libp2p-webrtc-star/src/signalling-server')
let swarmA
let swarmB
let sigS
gulp.task('test:browser:before', (done) => {
function createListenerA (cb) {
const b58IdA = 'QmWg2L4Fucx1x4KXJTfKHGixBJvveubzcd7DdhB2Mqwfh1'
const peerA = new Peer(Id.createFromB58String(b58IdA))
const maA = multiaddr('/ip4/127.0.0.1/tcp/9100/websockets')
const id = PeerId.createFromJSON(
JSON.parse(
fs.readFileSync(
path.join(__dirname, './test/test-data/id-1.json'))))
const peerA = new PeerInfo(id)
const maA = multiaddr('/ip4/127.0.0.1/tcp/9100/ws')
peerA.multiaddr.add(maA)
swarmA = new Swarm(peerA)
@ -24,23 +34,30 @@ gulp.task('test:browser:before', (done) => {
}
function createListenerB (cb) {
const b58IdB = 'QmRy1iU6BHmG5Hd8rnPhPL98cy1W1przUSTAMcGDq9yAAV'
const maB = multiaddr('/ip4/127.0.0.1/tcp/9200/websockets')
const peerB = new Peer(Id.createFromB58String(b58IdB))
const id = PeerId.createFromJSON(
JSON.parse(
fs.readFileSync(
path.join(__dirname, './test/test-data/id-2.json'))))
const peerB = new PeerInfo(id)
const maB = multiaddr('/ip4/127.0.0.1/tcp/9200/ws')
peerB.multiaddr.add(maB)
swarmB = new Swarm(peerB)
swarmB.transport.add('ws', new WebSockets())
swarmB.transport.listen('ws', {}, null, cb)
swarmB.connection.addStreamMuxer(spdy)
swarmB.connection.reuse()
swarmB.listen(cb)
swarmB.handle('/echo/1.0.0', echo)
}
let count = 0
const ready = () => ++count === 2 ? done() : null
const ready = () => ++count === 3 ? done() : null
createListenerA(ready)
createListenerB(ready)
sigS = sigServer.start(15555, ready)
function echo (conn) {
conn.pipe(conn)
@ -49,10 +66,11 @@ gulp.task('test:browser:before', (done) => {
gulp.task('test:browser:after', (done) => {
let count = 0
const ready = () => ++count === 2 ? done() : null
const ready = () => ++count === 3 ? done() : null
swarmA.transport.close('ws', ready)
swarmB.transport.close('ws', ready)
swarmB.close(ready)
sigS.stop(ready)
})
require('dignified.js/gulp')(gulp)
require('aegir/gulp')(gulp)

package.json

@ -1,17 +1,20 @@
{
"name": "libp2p-swarm",
"version": "0.9.3",
"version": "0.18.2",
"description": "libp2p swarm implementation in JavaScript",
"main": "lib/index.js",
"jsnext:main": "src/index.js",
"scripts": {
"lint": "dignified-lint",
"build": "dignified-build",
"lint": "gulp lint",
"build": "gulp build",
"test": "gulp test",
"test:node": "gulp test:node",
"test:browser": "gulp test:browser",
"release": "dignified-release",
"coverage": "istanbul cover --print both -- _mocha test/node.js"
"release": "gulp release",
"release-minor": "gulp release --type minor",
"release-major": "gulp release --type major",
"coverage": "gulp coverage",
"coverage-publish": "aegir-coverage publish"
},
"repository": {
"type": "git",
@ -34,35 +37,49 @@
"node": "^4.3.0"
},
"devDependencies": {
"aegir": "^3.0.4",
"bl": "^1.1.2",
"buffer-loader": "0.0.1",
"chai": "^3.5.0",
"dignified.js": "^1.0.0",
"gulp": "^3.9.1",
"istanbul": "^0.4.2",
"istanbul": "^0.4.3",
"libp2p-multiplex": "^0.2.1",
"libp2p-spdy": "^0.2.3",
"libp2p-tcp": "^0.4.0",
"libp2p-websockets": "^0.2.1",
"multiaddr": "^1.3.0",
"peer-id": "^0.6.0",
"peer-info": "^0.6.0",
"libp2p-spdy": "^0.6.1",
"libp2p-tcp": "^0.6.0",
"libp2p-webrtc-star": "^0.1.4",
"libp2p-websockets": "^0.6.0",
"pre-commit": "^1.1.2",
"stream-pair": "^1.0.3"
"stream-pair": "^1.0.3",
"webrtcsupport": "^2.2.0"
},
"dependencies": {
"babel-runtime": "^6.6.1",
"browserify-zlib": "github:ipfs/browserify-zlib",
"duplex-passthrough": "github:diasdavid/duplex-passthrough",
"ip-address": "^5.0.2",
"multistream-select": "^0.6.1",
"protocol-buffers-stream": "^1.2.0"
"ip-address": "^5.8.0",
"lodash.contains": "^2.4.3",
"multiaddr": "^2.0.0",
"multistream-select": "^0.9.0",
"peer-id": "^0.6.7",
"peer-info": "^0.6.2",
"protocol-buffers-stream": "^1.3.1",
"run-parallel": "^1.1.6"
},
"dignified": {
"aegir": {
"webpack": {
"resolve": {
"alias": {
"node-forge": "../deps/forge.bundle.js"
"node-forge": "../vendor/forge.bundle.js"
}
}
}
}
}
},
"contributors": [
"David Dias <daviddias.p@gmail.com>",
"David Dias <mail@daviddias.me>",
"Francisco Baio Dias <xicombd@gmail.com>",
"Pau Ramon Revilla <masylum@gmail.com>",
"Richard Littauer <richard.littauer@gmail.com>",
"dignifiedquire <dignifiedquire@gmail.com>"
]
}

src/connection.js (new file, +65 lines)

@ -0,0 +1,65 @@
'use strict'
const connHandler = require('./default-handler')
const identify = require('./identify')
module.exports = function connection (swarm) {
return {
addUpgrade () {},
addStreamMuxer (muxer) {
// for dialing
swarm.muxers[muxer.multicodec] = muxer
// for listening
swarm.handle(muxer.multicodec, (conn) => {
const muxedConn = muxer(conn, true)
var peerIdForConn
muxedConn.on('stream', (conn) => {
function gotId () {
if (peerIdForConn) {
conn.peerId = peerIdForConn
connHandler(swarm.protocols, conn)
} else {
setTimeout(gotId, 100)
}
}
if (swarm.identify) {
return gotId()
}
connHandler(swarm.protocols, conn)
})
// if identify is enabled, attempt to do it for muxer reuse
if (swarm.identify) {
identify.exec(conn, muxedConn, swarm._peerInfo, (err, pi) => {
if (err) {
return console.log('Identify exec failed', err)
}
peerIdForConn = pi.id
swarm.muxedConns[pi.id.toB58String()] = {}
swarm.muxedConns[pi.id.toB58String()].muxer = muxedConn
swarm.muxedConns[pi.id.toB58String()].conn = conn // to be able to extract addrs
swarm.emit('peer-mux-established', pi)
muxedConn.on('close', () => {
delete swarm.muxedConns[pi.id.toB58String()]
swarm.emit('peer-mux-closed', pi)
})
})
}
})
},
reuse () {
swarm.identify = true
swarm.handle(identify.multicodec, identify.handler(swarm._peerInfo, swarm))
}
}
}
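
A short sketch of how this module is driven from a Swarm instance (`swarm` is assumed to exist already; spdy stands in for any muxer that exposes a multicodec):

const spdy = require('libp2p-spdy')

// register spdy for dialing and handle its multicodec on incoming conns
swarm.connection.addStreamMuxer(spdy)

// enable identify so muxed connections are indexed by peer id and reused
swarm.connection.reuse()

// connection.js emits these once identify succeeds on an incoming muxed conn
swarm.on('peer-mux-established', (peerInfo) => {
  console.log('muxed connection with', peerInfo.id.toB58String())
})
swarm.on('peer-mux-closed', (peerInfo) => {
  console.log('lost muxed connection with', peerInfo.id.toB58String())
})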

src/default-handler.js (new file, +21 lines)

@ -0,0 +1,21 @@
'use strict'
const multistream = require('multistream-select')
// incomming connection handler
module.exports = function connHandler (protocols, conn) {
const ms = new multistream.Listener()
Object.keys(protocols).forEach((protocol) => {
if (!protocol) {
return
}
ms.addHandler(protocol, protocols[protocol])
})
ms.handle(conn, (err) => {
if (err) {
return // the multistream handshake failed
}
})
}

src/dial.js (new file, +179 lines)

@ -0,0 +1,179 @@
'use strict'
const multistream = require('multistream-select')
const DuplexPassThrough = require('duplex-passthrough')
const connHandler = require('./default-handler')
module.exports = function dial (swarm) {
return (pi, protocol, callback) => {
if (typeof protocol === 'function') {
callback = protocol
protocol = null
}
if (!callback) {
callback = function noop () {}
}
const pt = new DuplexPassThrough()
const b58Id = pi.id.toB58String()
if (!swarm.muxedConns[b58Id]) {
if (!swarm.conns[b58Id]) {
attemptDial(pi, (err, conn) => {
if (err) {
return callback(err)
}
gotWarmedUpConn(conn)
})
} else {
const conn = swarm.conns[b58Id]
swarm.conns[b58Id] = undefined
gotWarmedUpConn(conn)
}
} else {
if (!protocol) {
return callback()
}
gotMuxer(swarm.muxedConns[b58Id].muxer)
}
return pt
function gotWarmedUpConn (conn) {
attemptMuxerUpgrade(conn, (err, muxer) => {
if (!protocol) {
if (err) {
swarm.conns[b58Id] = conn
}
return callback()
}
if (err) {
// couldn't upgrade to Muxer, it is ok
protocolHandshake(conn, protocol, callback)
} else {
gotMuxer(muxer)
}
})
}
function gotMuxer (muxer) {
openConnInMuxedConn(muxer, (conn) => {
protocolHandshake(conn, protocol, callback)
})
}
function attemptDial (pi, cb) {
const tKeys = swarm.availableTransports(pi)
if (tKeys.length === 0) {
return cb(new Error('No available transport to dial to'))
}
nextTransport(tKeys.shift())
function nextTransport (key) {
const multiaddrs = pi.multiaddrs.slice()
swarm.transport.dial(key, multiaddrs, (err, conn) => {
if (err) {
if (tKeys.length === 0) {
return cb(new Error('Could not dial in any of the transports'))
}
return nextTransport(tKeys.shift())
}
cryptoDial()
function cryptoDial () {
// currently, js-libp2p-swarm doesn't implement any crypto
const ms = new multistream.Dialer()
ms.handle(conn, (err) => {
if (err) {
return cb(err)
}
ms.select('/plaintext/1.0.0', cb)
})
}
})
}
}
function attemptMuxerUpgrade (conn, cb) {
const muxers = Object.keys(swarm.muxers)
if (muxers.length === 0) {
return cb(new Error('no muxers available'))
}
// 1. try to handshake in one of the muxers available
// 2. if succeeds
// - add the muxedConn to the list of muxedConns
// - add incomming new streams to connHandler
nextMuxer(muxers.shift())
function nextMuxer (key) {
const ms = new multistream.Dialer()
ms.handle(conn, (err) => {
if (err) {
return callback(new Error('multistream not supported'))
}
ms.select(key, (err, conn) => {
if (err) {
if (muxers.length === 0) {
cb(new Error('could not upgrade to stream muxing'))
} else {
nextMuxer(muxers.shift())
}
return
}
const muxedConn = swarm.muxers[key](conn, false)
swarm.muxedConns[b58Id] = {}
swarm.muxedConns[b58Id].muxer = muxedConn
swarm.muxedConns[b58Id].conn = conn
swarm.emit('peer-mux-established', pi)
muxedConn.on('close', () => {
delete swarm.muxedConns[pi.id.toB58String()]
swarm.emit('peer-mux-closed', pi)
})
// in case identify is on
muxedConn.on('stream', (conn) => {
conn.peerId = pi.id
connHandler(swarm.protocols, conn)
})
cb(null, muxedConn)
})
})
}
}
function openConnInMuxedConn (muxer, cb) {
cb(muxer.newStream())
}
function protocolHandshake (conn, protocol, cb) {
const ms = new multistream.Dialer()
ms.handle(conn, (err) => {
if (err) {
return callback(err)
}
ms.select(protocol, (err, conn) => {
if (err) {
return callback(err)
}
pt.wrapStream(conn)
pt.peerId = pi.id
callback(null, pt)
})
})
}
}
}
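
A sketch of the dial flow above from the caller's side, assuming `swarm` already has a transport and a muxer added and `peerB` is a reachable PeerInfo; dialing without a protocol only warms up (and, if possible, muxes) the connection, while dialing with one yields a duplex stream:

// warm up: connect and try to upgrade to a muxer, keep the result for reuse
swarm.dial(peerB, (err) => {
  if (err) throw err

  // second dial reuses the warmed-up (muxed) connection
  swarm.dial(peerB, '/echo/1.0.0', (err, conn) => {
    if (err) throw err
    conn.write('hey')
    conn.end()
  })
})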

src/identify.js

@ -14,16 +14,12 @@ const Info = require('peer-info')
const Id = require('peer-id')
const multiaddr = require('multiaddr')
const isNode = !global.window
const identity = isNode
? fs.readFileSync(path.join(__dirname, 'identify.proto'))
: require('buffer!./identify.proto')
const identity = fs.readFileSync(path.join(__dirname, 'identify.proto'))
const pbStream = require('protocol-buffers-stream')(identity)
exports = module.exports
exports.multicodec = '/ipfs/identify/1.0.0'
exports.multicodec = '/ipfs/id/1.0.0'
exports.exec = (rawConn, muxer, peerInfo, callback) => {
// 1. open a stream
@ -34,9 +30,13 @@ exports.exec = (rawConn, muxer, peerInfo, callback) => {
const conn = muxer.newStream()
var msI = new multistream.Interactive()
msI.handle(conn, () => {
msI.select(exports.multicodec, (err, ds) => {
const ms = new multistream.Dialer()
ms.handle(conn, (err) => {
if (err) {
return callback(err)
}
ms.select(exports.multicodec, (err, ds) => {
if (err) {
return callback(err)
}
@ -45,7 +45,7 @@ exports.exec = (rawConn, muxer, peerInfo, callback) => {
pbs.on('identify', (msg) => {
if (msg.observedAddr.length > 0) {
peerInfo.multiaddr.addSafe(msg.observedAddr)
peerInfo.multiaddr.addSafe(multiaddr(msg.observedAddr))
}
const peerId = Id.createFromPubKey(msg.publicKey)
@ -62,8 +62,8 @@ exports.exec = (rawConn, muxer, peerInfo, callback) => {
pbs.identify({
protocolVersion: 'na',
agentVersion: 'na',
publicKey: peerInfo.id.pubKey,
listenAddrs: peerInfo.multiaddrs.map((mh) => { return mh.buffer }),
publicKey: peerInfo.id.pubKey || new Buffer(0),
listenAddrs: peerInfo.multiaddrs.map((mh) => mh.buffer),
observedAddr: obsMultiaddr ? obsMultiaddr.buffer : new Buffer('')
})
@ -74,15 +74,14 @@ exports.exec = (rawConn, muxer, peerInfo, callback) => {
}
exports.handler = (peerInfo, swarm) => {
return function (conn) {
return (conn) => {
// 1. receive incoming observed info about me
// 2. update my own information (on peerInfo)
// 3. send back what I see from the other (get from swarm.muxedConns[incPeerID].conn.getObservedAddrs()
var pbs = pbStream()
pbs.on('identify', function (msg) {
pbs.on('identify', (msg) => {
if (msg.observedAddr.length > 0) {
peerInfo.multiaddr.addSafe(msg.observedAddr)
peerInfo.multiaddr.addSafe(multiaddr(msg.observedAddr))
}
const peerId = Id.createFromPubKey(msg.publicKey)
@ -92,14 +91,13 @@ exports.handler = (peerInfo, swarm) => {
pbs.identify({
protocolVersion: 'na',
agentVersion: 'na',
publicKey: peerInfo.id.pubKey,
listenAddrs: peerInfo.multiaddrs.map(function (ma) {
return ma.buffer
}),
publicKey: peerInfo.id.pubKey || new Buffer(0),
listenAddrs: peerInfo.multiaddrs.map((ma) => ma.buffer),
observedAddr: obsMultiaddr ? obsMultiaddr.buffer : new Buffer('')
})
pbs.finalize()
})
pbs.pipe(conn).pipe(pbs)
}
}
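
For reference, this is how the module above is wired by the swarm (the calls mirror src/connection.js; `swarm`, `conn` and `muxedConn` are assumed to come from that code):

const identify = require('./identify')

// listening side: answer /ipfs/id/1.0.0 with our own peer info
swarm.handle(identify.multicodec, identify.handler(swarm._peerInfo, swarm))

// dialing side: once a muxer handshake succeeds, find out who we talked to
identify.exec(conn, muxedConn, swarm._peerInfo, (err, peerInfo) => {
  if (err) return console.log('identify failed', err)
  // peerInfo.id now keys swarm.muxedConns so the connection can be reused
  console.log('identified', peerInfo.id.toB58String())
})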

src/index.js

@ -1,11 +1,19 @@
'use strict'
const multistream = require('multistream-select')
const identify = require('./identify')
const DuplexPassThrough = require('duplex-passthrough')
const util = require('util')
const EE = require('events').EventEmitter
const parallel = require('run-parallel')
const contains = require('lodash.contains')
const transport = require('./transport')
const connection = require('./connection')
const dial = require('./dial')
const connHandler = require('./default-handler')
exports = module.exports = Swarm
util.inherits(Swarm, EE)
function Swarm (peerInfo) {
if (!(this instanceof Swarm)) {
return new Swarm(peerInfo)
@ -15,97 +23,13 @@ function Swarm (peerInfo) {
throw new Error('You must provide a value for `peerInfo`')
}
this._peerInfo = peerInfo
// transports --
// { key: transport }; e.g { tcp: <tcp> }
this.transports = {}
this.transport = {}
this.transport.add = (key, transport, options, callback) => {
if (typeof options === 'function') {
callback = options
options = {}
}
if (!callback) { callback = noop }
if (this.transports[key]) {
throw new Error('There is already a transport with this key')
}
this.transports[key] = transport
callback()
}
this.transport.dial = (key, multiaddrs, callback) => {
const t = this.transports[key]
if (!Array.isArray(multiaddrs)) {
multiaddrs = [multiaddrs]
}
// a) filter the multiaddrs that are actually valid for this transport (use a func from the transport itself) (maybe even make the transport do that)
multiaddrs = t.filter(multiaddrs)
// b) if multiaddrs.length = 1, return the conn from the
// transport, otherwise, create a passthrough
if (multiaddrs.length === 1) {
const conn = t.dial(multiaddrs.shift(), {ready: () => {
const cb = callback
callback = noop // this is done to avoid connection drops as connect errors
cb(null, conn)
}})
conn.once('error', () => {
callback(new Error('failed to connect to every multiaddr'))
})
return conn
}
// c) multiaddrs should already be a filtered list
// specific for the transport we are using
const pt = new DuplexPassThrough()
next(multiaddrs.shift())
return pt
function next (multiaddr) {
const conn = t.dial(multiaddr, {ready: () => {
pt.wrapStream(conn)
const cb = callback
callback = noop // this is done to avoid connection drops as connect errors
cb(null, pt)
}})
conn.once('error', () => {
if (multiaddrs.length === 0) {
return callback(new Error('failed to connect to every multiaddr'))
}
next(multiaddrs.shift())
})
}
}
this.transport.listen = (key, options, handler, callback) => {
// if no callback is passed, we pass conns to connHandler
if (!handler) { handler = connHandler }
const multiaddrs = this.transports[key].filter(peerInfo.multiaddrs)
this.transports[key].createListener(multiaddrs, handler, (err, maUpdate) => {
if (err) {
return callback(err)
}
if (maUpdate) {
// because we can listen on port 0...
peerInfo.multiaddr.replace(multiaddrs, maUpdate)
}
callback()
})
}
this.transport.close = (key, callback) => {
this.transports[key].close(callback)
}
// connections --
// { peerIdB58: { conn: <conn> }}
@ -122,204 +46,70 @@ function Swarm (peerInfo) {
// { protocol: handler }
this.protocols = {}
this.connection = {}
this.connection.addUpgrade = () => {}
// { muxerCodec: <muxer> } e.g { '/spdy/0.3.1': spdy }
this.muxers = {}
this.connection.addStreamMuxer = (muxer) => {
// for dialing
this.muxers[muxer.multicodec] = muxer
// for listening
this.handle(muxer.multicodec, (conn) => {
const muxedConn = muxer(conn, true)
muxedConn.on('stream', (conn) => {
connHandler(conn)
// is the Identify protocol enabled?
this.identify = false
this.transport = transport(this)
this.connection = connection(this)
this.availableTransports = (pi) => {
const addrs = pi.multiaddrs
// Only listen on transports we actually have addresses for
return Object.keys(this.transports).filter((ts) => {
// ipfs multiaddrs are not dialable so we drop them here
let dialable = addrs.map((addr) => {
// webrtc-star needs the /ipfs/QmHash
if (addr.toString().indexOf('webrtc-star') > 0) {
return addr
}
if (contains(addr.protoNames(), 'ipfs')) {
return addr.decapsulate('ipfs')
}
return addr
})
// if identify is enabled, attempt to do it for muxer reuse
if (this.identify) {
identify.exec(conn, muxedConn, peerInfo, (err, pi) => {
if (err) {
return console.log('Identify exec failed', err)
}
this.muxedConns[pi.id.toB58String()] = {}
this.muxedConns[pi.id.toB58String()].muxer = muxedConn
this.muxedConns[pi.id.toB58String()].conn = conn // to be able to extract addrs
})
}
return this.transports[ts].filter(dialable).length > 0
})
}
// enable the Identify protocol
this.identify = false
this.connection.reuse = () => {
this.identify = true
this.handle(identify.multicodec, identify.handler(peerInfo, this))
}
const self = this // prefered this to bind
// incomming connection handler
function connHandler (conn) {
var msS = new multistream.Select()
Object.keys(self.protocols).forEach((protocol) => {
if (!protocol) { return }
msS.addHandler(protocol, self.protocols[protocol])
})
msS.handle(conn)
}
// higher level (public) API
this.dial = (pi, protocol, callback) => {
var pt = null
if (typeof protocol === 'function') {
callback = protocol
protocol = null
} else {
pt = new DuplexPassThrough()
}
this.dial = dial(this)
const b58Id = pi.id.toB58String()
if (!this.muxedConns[b58Id]) {
if (!this.conns[b58Id]) {
attemptDial(pi, (err, conn) => {
if (err) {
return callback(err)
}
gotWarmedUpConn(conn)
})
} else {
const conn = this.conns[b58Id]
this.conns[b58Id] = undefined
gotWarmedUpConn(conn)
}
} else {
gotMuxer(this.muxedConns[b58Id].muxer)
}
return pt
function gotWarmedUpConn (conn) {
attemptMuxerUpgrade(conn, (err, muxer) => {
if (!protocol) {
if (err) {
self.conns[b58Id] = conn
}
return callback()
}
if (err) {
// couldn't upgrade to Muxer, it is ok
protocolHandshake(conn, protocol, callback)
} else {
gotMuxer(muxer)
}
})
}
function gotMuxer (muxer) {
openConnInMuxedConn(muxer, (conn) => {
protocolHandshake(conn, protocol, callback)
})
}
function attemptDial (pi, cb) {
const tKeys = Object.keys(self.transports)
nextTransport(tKeys.shift())
function nextTransport (key) {
const multiaddrs = pi.multiaddrs.slice()
self.transport.dial(key, multiaddrs, (err, conn) => {
if (err) {
if (tKeys.length === 0) {
return cb(new Error('Could not dial in any of the transports'))
}
return nextTransport(tKeys.shift())
}
cb(null, conn)
})
}
}
function attemptMuxerUpgrade (conn, cb) {
const muxers = Object.keys(self.muxers)
if (muxers.length === 0) {
return cb(new Error('no muxers available'))
}
// 1. try to handshake in one of the muxers available
// 2. if succeeds
// - add the muxedConn to the list of muxedConns
// - add incomming new streams to connHandler
nextMuxer(muxers.shift())
function nextMuxer (key) {
var msI = new multistream.Interactive()
msI.handle(conn, function () {
msI.select(key, (err, conn) => {
if (err) {
if (muxers.length === 0) {
cb(new Error('could not upgrade to stream muxing'))
} else {
nextMuxer(muxers.shift())
}
return
}
const muxedConn = self.muxers[key](conn, false)
self.muxedConns[b58Id] = {}
self.muxedConns[b58Id].muxer = muxedConn
self.muxedConns[b58Id].conn = conn
// in case identify is on
muxedConn.on('stream', connHandler)
cb(null, muxedConn)
})
})
}
}
function openConnInMuxedConn (muxer, cb) {
cb(muxer.newStream())
}
function protocolHandshake (conn, protocol, cb) {
var msI = new multistream.Interactive()
msI.handle(conn, function () {
msI.select(protocol, (err, conn) => {
if (err) {
return callback(err)
}
pt.wrapStream(conn)
callback(null, pt)
})
})
}
// Start listening on all available transports
this.listen = (callback) => {
parallel(this.availableTransports(peerInfo).map((ts) => (cb) => {
// Listen on the given transport
this.transport.listen(ts, {}, null, cb)
}), callback)
}
this.handle = (protocol, handler) => {
this.protocols[protocol] = handler
}
this.close = (callback) => {
var count = 0
// our crypto handshake :)
this.handle('/plaintext/1.0.0', (conn) => {
connHandler(this.protocols, conn)
})
this.unhandle = (protocol, handler) => {
if (this.protocols[protocol]) {
delete this.protocols[protocol]
}
}
this.close = (callback) => {
Object.keys(this.muxedConns).forEach((key) => {
this.muxedConns[key].muxer.end()
})
Object.keys(this.transports).forEach((key) => {
this.transports[key].close(() => {
if (++count === Object.keys(this.transports).length) {
callback()
}
})
})
parallel(Object.keys(this.transports).map((key) => {
return (cb) => this.transports[key].close(cb)
}), callback)
}
}
function noop () {}
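
A small sketch of the pieces introduced by this refactor, assuming an existing `swarm` with a TCP transport added; it highlights availableTransports() and the built-in /plaintext/1.0.0 handler rather than repeating the basic setup:

// transports are only offered if the peer has a dialable address for them;
// /ipfs/<hash> suffixes are stripped first, except for webrtc-star addresses
const keys = swarm.availableTransports(swarm._peerInfo)
console.log(keys) // e.g. [ 'tcp' ]

// the swarm now always handles the plaintext "crypto" placeholder itself,
// so dialers that select /plaintext/1.0.0 fall through to the normal handler
console.log('/plaintext/1.0.0' in swarm.protocols) // true

// listen() and close() both fan out over the transports with run-parallel
swarm.listen((err) => {
  if (err) throw err
  swarm.close((err) => {
    if (err) throw err
  })
})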

src/transport.js (new file, +122 lines)

@ -0,0 +1,122 @@
'use strict'
const contains = require('lodash.contains')
const DuplexPassThrough = require('duplex-passthrough')
const connHandler = require('./default-handler')
module.exports = function (swarm) {
return {
add (key, transport, options, callback) {
if (typeof options === 'function') {
callback = options
options = {}
}
if (!callback) { callback = noop }
if (swarm.transports[key]) {
throw new Error('There is already a transport with this key')
}
swarm.transports[key] = transport
callback()
},
dial (key, multiaddrs, callback) {
const t = swarm.transports[key]
if (!Array.isArray(multiaddrs)) {
multiaddrs = [multiaddrs]
}
// a) filter the multiaddrs that are actually valid for this transport (use a func from the transport itself) (maybe even make the transport do that)
multiaddrs = dialables(t, multiaddrs)
// b) if multiaddrs.length = 1, return the conn from the
// transport, otherwise, create a passthrough
if (multiaddrs.length === 1) {
const conn = t.dial(multiaddrs.shift(), {ready: () => {
const cb = callback
callback = noop // this is done to avoid connection drops as connect errors
cb(null, conn)
}})
conn.once('error', () => {
callback(new Error('failed to connect to every multiaddr'))
})
return conn
}
// c) multiaddrs should already be a filtered list
// specific for the transport we are using
const pt = new DuplexPassThrough()
next(multiaddrs.shift())
return pt
function next (multiaddr) {
const conn = t.dial(multiaddr, {ready: () => {
pt.wrapStream(conn)
const cb = callback
callback = noop // this is done to avoid connection drops as connect errors
cb(null, pt)
}})
conn.once('error', () => {
if (multiaddrs.length === 0) {
return callback(new Error('failed to connect to every multiaddr'))
}
next(multiaddrs.shift())
})
}
},
listen (key, options, handler, callback) {
// if no callback is passed, we pass conns to connHandler
if (!handler) {
handler = connHandler.bind(null, swarm.protocols)
}
const multiaddrs = dialables(swarm.transports[key], swarm._peerInfo.multiaddrs)
swarm.transports[key].createListener(multiaddrs, handler, (err, maUpdate) => {
if (err) {
return callback(err)
}
if (maUpdate) {
// because we can listen on port 0...
swarm._peerInfo.multiaddr.replace(multiaddrs, maUpdate)
}
callback()
})
},
close (key, callback) {
const transport = swarm.transports[key]
if (!transport) {
return callback(new Error(`Trying to close non existing transport: ${key}`))
}
transport.close(callback)
}
}
}
// transform given multiaddrs to a list of dialable addresses
// for the given transport `tp`.
function dialables (tp, multiaddrs) {
return tp.filter(multiaddrs.map((addr) => {
// webrtc-star needs the /ipfs/QmHash
if (addr.toString().indexOf('webrtc-star') > 0) {
return addr
}
// ipfs multiaddrs are not dialable so we drop them here
if (contains(addr.protoNames(), 'ipfs')) {
return addr.decapsulate('ipfs')
}
return addr
}))
}
function noop () {}
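
A sketch of this lower-level transport API from the caller's side, assuming `swarm` exists and libp2p-tcp is available; the addresses mirror the ones used in the tests and only illustrate the multiple-candidate dial:

const TCP = require('libp2p-tcp')
const multiaddr = require('multiaddr')

swarm.transport.add('tcp', new TCP())

// listen on this peer's own tcp multiaddrs; a null handler routes incoming
// connections to the default protocol handler
swarm.transport.listen('tcp', {}, null, (err) => {
  if (err) throw err
})

// dial a list of candidate addresses (another peer is assumed to be
// listening on one of them): non-dialable ones are filtered out and the
// rest are tried in order until one connects or all fail
const conn = swarm.transport.dial('tcp', [
  multiaddr('/ip4/127.0.0.1/tcp/9999'),
  multiaddr('/ip4/127.0.0.1/tcp/9309')
], (err, conn) => {
  if (err) throw err
})
conn.write('hey')
conn.end()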


@ -6,8 +6,7 @@ const expect = require('chai').expect
const Swarm = require('../src')
describe('basics', () => {
it('throws on missing peerInfo', (done) => {
expect(Swarm).to.throw(Error)
done()
it('throws on missing peerInfo', () => {
expect(() => Swarm()).to.throw(Error)
})
})


@ -3,6 +3,7 @@
const expect = require('chai').expect
const parallel = require('run-parallel')
const multiaddr = require('multiaddr')
const Peer = require('peer-info')
const Swarm = require('../src')
@ -46,9 +47,17 @@ describe('transport - tcp', function () {
function ready () {
if (++count === 2) {
expect(peerA.multiaddrs.length).to.equal(1)
expect(peerA.multiaddrs[0]).to.deep.equal(multiaddr('/ip4/127.0.0.1/tcp/9888'))
expect(
peerA.multiaddrs[0].equals(multiaddr('/ip4/127.0.0.1/tcp/9888'))
).to.be.equal(
true
)
expect(peerB.multiaddrs.length).to.equal(1)
expect(peerB.multiaddrs[0]).to.deep.equal(multiaddr('/ip4/127.0.0.1/tcp/9999'))
expect(
peerB.multiaddrs[0].equals(multiaddr('/ip4/127.0.0.1/tcp/9999'))
).to.be.equal(
true
)
done()
}
}
@ -68,7 +77,7 @@ describe('transport - tcp', function () {
it('dial to set of multiaddr, only one is available', (done) => {
const conn = swarmA.transport.dial('tcp', [
multiaddr('/ip4/127.0.0.1/tcp/9910/websockets'), // not valid on purpose
multiaddr('/ip4/127.0.0.1/tcp/9910/ws'), // not valid on purpose
multiaddr('/ip4/127.0.0.1/tcp/9910'),
multiaddr('/ip4/127.0.0.1/tcp/9999'),
multiaddr('/ip4/127.0.0.1/tcp/9309')
@ -84,15 +93,10 @@ describe('transport - tcp', function () {
})
it('close', (done) => {
var count = 0
swarmA.transport.close('tcp', closed)
swarmB.transport.close('tcp', closed)
function closed () {
if (++count === 2) {
done()
}
}
parallel([
(cb) => swarmA.transport.close('tcp', cb),
(cb) => swarmB.transport.close('tcp', cb)
], done)
})
it('support port 0', (done) => {
@ -124,7 +128,11 @@ describe('transport - tcp', function () {
function ready () {
expect(peer.multiaddrs.length).to.equal(1)
expect(peer.multiaddrs[0]).to.deep.equal(multiaddr('/ip4/0.0.0.0/tcp/9050'))
expect(
peer.multiaddrs[0].equals(multiaddr('/ip4/0.0.0.0/tcp/9050'))
).to.be.equal(
true
)
swarm.close(done)
}
})


@ -3,6 +3,7 @@
const expect = require('chai').expect
const parallel = require('run-parallel')
const multiaddr = require('multiaddr')
const Peer = require('peer-info')
const Swarm = require('../src')
@ -17,12 +18,11 @@ describe('transport - websockets', function () {
var peerA = new Peer()
var peerB = new Peer()
before((done) => {
peerA.multiaddr.add(multiaddr('/ip4/127.0.0.1/tcp/9888/websockets'))
peerB.multiaddr.add(multiaddr('/ip4/127.0.0.1/tcp/9999/websockets'))
before(() => {
peerA.multiaddr.add(multiaddr('/ip4/127.0.0.1/tcp/9888/ws'))
peerB.multiaddr.add(multiaddr('/ip4/127.0.0.1/tcp/9999/ws/ipfs/QmcgpsyWgH8Y8ajJz1Cu72KnS5uo2Aa2LpzU7kinSupNKC'))
swarmA = new Swarm(peerA)
swarmB = new Swarm(peerB)
done()
})
it('add', (done) => {
@ -35,27 +35,32 @@ describe('transport - websockets', function () {
})
it('listen', (done) => {
var count = 0
swarmA.transport.listen('ws', {}, (conn) => {
conn.pipe(conn)
}, ready)
swarmB.transport.listen('ws', {}, (conn) => {
conn.pipe(conn)
}, ready)
function ready () {
if (++count === 2) {
expect(peerA.multiaddrs.length).to.equal(1)
expect(peerA.multiaddrs[0]).to.deep.equal(multiaddr('/ip4/127.0.0.1/tcp/9888/websockets'))
expect(peerB.multiaddrs.length).to.equal(1)
expect(peerB.multiaddrs[0]).to.deep.equal(multiaddr('/ip4/127.0.0.1/tcp/9999/websockets'))
done()
}
}
parallel([
(cb) => swarmA.transport.listen('ws', {}, (conn) => {
conn.pipe(conn)
}, cb),
(cb) => swarmB.transport.listen('ws', {}, (conn) => {
conn.pipe(conn)
}, cb)
], () => {
expect(peerA.multiaddrs.length).to.equal(1)
expect(
peerA.multiaddrs[0].equals(multiaddr('/ip4/127.0.0.1/tcp/9888/ws'))
).to.be.equal(
true
)
expect(peerB.multiaddrs.length).to.equal(1)
expect(
peerB.multiaddrs[0].equals(multiaddr('/ip4/127.0.0.1/tcp/9999/ws/ipfs/QmcgpsyWgH8Y8ajJz1Cu72KnS5uo2Aa2LpzU7kinSupNKC'))
).to.equal(
true
)
done()
})
})
it('dial', (done) => {
const conn = swarmA.transport.dial('ws', multiaddr('/ip4/127.0.0.1/tcp/9999/websockets'), (err, conn) => {
const conn = swarmA.transport.dial('ws', multiaddr('/ip4/127.0.0.1/tcp/9999/ws'), (err, conn) => {
expect(err).to.not.exist
})
conn.pipe(bl((err, data) => {
@ -67,7 +72,7 @@ describe('transport - websockets', function () {
})
it('dial (conn from callback)', (done) => {
swarmA.transport.dial('ws', multiaddr('/ip4/127.0.0.1/tcp/9999/websockets'), (err, conn) => {
swarmA.transport.dial('ws', multiaddr('/ip4/127.0.0.1/tcp/9999/ws'), (err, conn) => {
expect(err).to.not.exist
conn.pipe(bl((err, data) => {
@ -80,14 +85,9 @@ describe('transport - websockets', function () {
})
it('close', (done) => {
var count = 0
swarmA.transport.close('ws', closed)
swarmB.transport.close('ws', closed)
function closed () {
if (++count === 2) {
done()
}
}
parallel([
(cb) => swarmA.transport.close('ws', cb),
(cb) => swarmB.transport.close('ws', cb)
], done)
})
})


@ -3,6 +3,7 @@
const expect = require('chai').expect
const parallel = require('run-parallel')
const multiaddr = require('multiaddr')
const Peer = require('peer-info')
const Swarm = require('../src')
@ -10,7 +11,7 @@ const TCP = require('libp2p-tcp')
const multiplex = require('libp2p-spdy')
describe('stream muxing with multiplex (on TCP)', function () {
this.timeout(20000)
this.timeout(60 * 1000)
var swarmA
var peerA
@ -37,35 +38,22 @@ describe('stream muxing with multiplex (on TCP)', function () {
swarmC = new Swarm(peerC)
swarmA.transport.add('tcp', new TCP())
swarmA.transport.listen('tcp', {}, null, ready)
swarmB.transport.add('tcp', new TCP())
swarmB.transport.listen('tcp', {}, null, ready)
swarmC.transport.add('tcp', new TCP())
swarmC.transport.listen('tcp', {}, null, ready)
var counter = 0
function ready () {
if (++counter === 3) {
done()
}
}
parallel([
(cb) => swarmA.transport.listen('tcp', {}, null, cb),
(cb) => swarmB.transport.listen('tcp', {}, null, cb),
(cb) => swarmC.transport.listen('tcp', {}, null, cb)
], done)
})
after((done) => {
var counter = 0
swarmA.close(closed)
swarmB.close(closed)
swarmC.close(closed)
function closed () {
if (++counter === 3) {
done()
}
}
parallel([
(cb) => swarmA.close(cb),
(cb) => swarmB.close(cb),
(cb) => swarmC.close(cb)
], done)
})
it('add', (done) => {


@ -3,6 +3,7 @@
const expect = require('chai').expect
const parallel = require('run-parallel')
const multiaddr = require('multiaddr')
const Peer = require('peer-info')
const Swarm = require('../src')
@ -10,7 +11,7 @@ const TCP = require('libp2p-tcp')
const spdy = require('libp2p-spdy')
describe('stream muxing with spdy (on TCP)', function () {
this.timeout(20000)
this.timeout(60 * 1000)
var swarmA
var peerA
@ -37,42 +38,28 @@ describe('stream muxing with spdy (on TCP)', function () {
swarmC = new Swarm(peerC)
swarmA.transport.add('tcp', new TCP())
swarmA.transport.listen('tcp', {}, null, ready)
swarmB.transport.add('tcp', new TCP())
swarmB.transport.listen('tcp', {}, null, ready)
swarmC.transport.add('tcp', new TCP())
swarmC.transport.listen('tcp', {}, null, ready)
var counter = 0
function ready () {
if (++counter === 3) {
done()
}
}
parallel([
(cb) => swarmA.transport.listen('tcp', {}, null, cb),
(cb) => swarmB.transport.listen('tcp', {}, null, cb),
(cb) => swarmC.transport.listen('tcp', {}, null, cb)
], done)
})
after((done) => {
var counter = 0
swarmA.close(closed)
swarmB.close(closed)
swarmC.close(closed)
function closed () {
if (++counter === 3) {
done()
}
}
parallel([
(cb) => swarmA.close(cb),
(cb) => swarmB.close(cb)
// (cb) => swarmC.close(cb)
], done)
})
it('add', (done) => {
it('add', () => {
swarmA.connection.addStreamMuxer(spdy)
swarmB.connection.addStreamMuxer(spdy)
swarmC.connection.addStreamMuxer(spdy)
done()
})
it('handle + dial on protocol', (done) => {
@ -128,4 +115,12 @@ describe('stream muxing with spdy (on TCP)', function () {
}, 500)
})
})
it('close one end, make sure the other does not blow', (done) => {
swarmC.close((err) => {
if (err) throw err
// to make sure it has time to propagate
setTimeout(done, 1000)
})
})
})


@ -4,9 +4,6 @@
describe('secio conn upgrade (on TCP)', function () {
this.timeout(20000)
before((done) => { done() })
after((done) => { done() })
it.skip('add', (done) => {})
it.skip('dial', (done) => {})
it.skip('tls on a muxed stream (not the full conn)', (done) => {})


@ -2,9 +2,6 @@
'use strict'
describe('tls conn upgrade (on TCP)', function () {
before((done) => { done() })
after((done) => { done() })
it.skip('add', (done) => {})
it.skip('dial', (done) => {})
it.skip('tls on a muxed stream (not the full conn)', (done) => {})


@ -3,6 +3,7 @@
const expect = require('chai').expect
const parallel = require('run-parallel')
const multiaddr = require('multiaddr')
const Peer = require('peer-info')
const Swarm = require('../src')
@ -21,44 +22,32 @@ describe('high level API - 1st without stream multiplexing (on TCP)', function (
peerB = new Peer()
peerA.multiaddr.add(multiaddr('/ip4/127.0.0.1/tcp/9001'))
peerB.multiaddr.add(multiaddr('/ip4/127.0.0.1/tcp/9002'))
peerB.multiaddr.add(multiaddr('/ip4/127.0.0.1/tcp/9002/ipfs/QmcgpsyWgH8Y8ajJz1Cu72KnS5uo2Aa2LpzU7kinSupNKC'))
swarmA = new Swarm(peerA)
swarmB = new Swarm(peerB)
swarmA.transport.add('tcp', new TCP())
swarmA.transport.listen('tcp', {}, null, ready)
swarmB.transport.add('tcp', new TCP())
swarmB.transport.listen('tcp', {}, null, ready)
var counter = 0
function ready () {
if (++counter === 2) {
done()
}
}
parallel([
(cb) => swarmA.transport.listen('tcp', {}, null, cb),
(cb) => swarmB.transport.listen('tcp', {}, null, cb)
], done)
})
after((done) => {
var counter = 0
swarmA.close(closed)
swarmB.close(closed)
function closed () {
if (++counter === 2) {
done()
}
}
parallel([
(cb) => swarmA.close(cb),
(cb) => swarmB.close(cb)
], done)
})
it('handle a protocol', (done) => {
swarmB.handle('/bananas/1.0.0', (conn) => {
conn.pipe(conn)
})
expect(Object.keys(swarmB.protocols).length).to.equal(1)
expect(Object.keys(swarmB.protocols).length).to.equal(2)
done()
})
@ -103,4 +92,11 @@ describe('high level API - 1st without stream multiplexing (on TCP)', function (
conn.on('end', done)
})
})
it('unhandle', (done) => {
const proto = '/bananas/1.0.0'
swarmA.unhandle(proto)
expect(swarmA.protocols[proto]).to.not.exist
done()
})
})


@ -3,6 +3,7 @@
const expect = require('chai').expect
const parallel = require('run-parallel')
const multiaddr = require('multiaddr')
const Peer = require('peer-info')
const Swarm = require('../src')
@ -11,7 +12,7 @@ const WebSockets = require('libp2p-websockets')
const spdy = require('libp2p-spdy')
describe('high level API - with everything mixed all together!', function () {
this.timeout(20000)
this.timeout(100000)
var swarmA // tcp
var peerA
@ -45,19 +46,13 @@ describe('high level API - with everything mixed all together!', function () {
})
after((done) => {
var counter = 0
swarmA.close(closed)
swarmB.close(closed)
swarmC.close(closed)
swarmD.close(closed)
swarmE.close(closed)
function closed () {
if (++counter === 4) {
done()
}
}
parallel([
(cb) => swarmA.close(cb),
(cb) => swarmB.close(cb),
// (cb) => swarmC.close(cb),
(cb) => swarmD.close(cb),
(cb) => swarmE.close(cb)
], done)
})
it('add tcp', (done) => {
@ -66,53 +61,42 @@ describe('high level API - with everything mixed all together!', function () {
peerC.multiaddr.add(multiaddr('/ip4/127.0.0.1/tcp/0'))
swarmA.transport.add('tcp', new TCP())
swarmA.transport.listen('tcp', {}, null, ready)
swarmB.transport.add('tcp', new TCP())
swarmB.transport.listen('tcp', {}, null, ready)
swarmC.transport.add('tcp', new TCP())
swarmC.transport.listen('tcp', {}, null, ready)
var counter = 0
function ready () {
if (++counter === 3) {
done()
}
}
parallel([
(cb) => swarmA.transport.listen('tcp', {}, null, cb),
(cb) => swarmB.transport.listen('tcp', {}, null, cb)
// (cb) => swarmC.transport.listen('tcp', {}, null, cb)
], done)
})
it.skip('add utp', (done) => {})
it('add websockets', (done) => {
peerB.multiaddr.add(multiaddr('/ip4/127.0.0.1/tcp/9012/websockets'))
peerC.multiaddr.add(multiaddr('/ip4/127.0.0.1/tcp/9022/websockets'))
peerD.multiaddr.add(multiaddr('/ip4/127.0.0.1/tcp/9032/websockets'))
peerE.multiaddr.add(multiaddr('/ip4/127.0.0.1/tcp/9042/websockets'))
peerB.multiaddr.add(multiaddr('/ip4/127.0.0.1/tcp/9012/ws'))
peerC.multiaddr.add(multiaddr('/ip4/127.0.0.1/tcp/9022/ws'))
peerD.multiaddr.add(multiaddr('/ip4/127.0.0.1/tcp/9032/ws'))
peerE.multiaddr.add(multiaddr('/ip4/127.0.0.1/tcp/9042/ws'))
swarmB.transport.add('ws', new WebSockets())
swarmB.transport.listen('ws', {}, null, ready)
swarmC.transport.add('ws', new WebSockets())
swarmC.transport.listen('ws', {}, null, ready)
swarmD.transport.add('ws', new WebSockets())
swarmD.transport.listen('ws', {}, null, ready)
swarmE.transport.add('ws', new WebSockets())
swarmE.transport.listen('ws', {}, null, ready)
var counter = 0
function ready () {
if (++counter === 4) {
done()
}
}
parallel([
(cb) => swarmB.transport.listen('ws', {}, null, cb),
// (cb) => swarmC.transport.listen('ws', {}, null, cb),
(cb) => swarmD.transport.listen('ws', {}, null, cb),
(cb) => swarmE.transport.listen('ws', {}, null, cb)
], done)
})
it('add spdy', (done) => {
it('listen automatically', (done) => {
swarmC.listen(done)
})
it('add spdy', () => {
swarmA.connection.addStreamMuxer(spdy)
swarmB.connection.addStreamMuxer(spdy)
swarmC.connection.addStreamMuxer(spdy)
@ -124,13 +108,37 @@ describe('high level API - with everything mixed all together!', function () {
swarmC.connection.reuse()
swarmD.connection.reuse()
swarmE.connection.reuse()
done()
})
it.skip('add multiplex', (done) => {})
it.skip('add multiplex', () => {})
it('dial from tcp to tcp+ws', (done) => {
it('warm up from A to B on tcp to tcp+ws', (done) => {
parallel([
(cb) => swarmB.once('peer-mux-established', (peerInfo) => {
expect(peerInfo.id.toB58String()).to.equal(peerA.id.toB58String())
cb()
}),
(cb) => swarmA.once('peer-mux-established', (peerInfo) => {
expect(peerInfo.id.toB58String()).to.equal(peerB.id.toB58String())
cb()
}),
(cb) => swarmA.dial(peerB, (err) => {
expect(err).to.not.exist
expect(Object.keys(swarmA.muxedConns).length).to.equal(1)
cb()
})
], done)
})
it('warm up a warmed up, from B to A', (done) => {
swarmB.dial(peerA, (err) => {
expect(err).to.not.exist
expect(Object.keys(swarmA.muxedConns).length).to.equal(1)
done()
})
})
it('dial from tcp to tcp+ws, on protocol', (done) => {
swarmB.handle('/anona/1.0.0', (conn) => {
conn.pipe(conn)
})
@ -145,6 +153,14 @@ describe('high level API - with everything mixed all together!', function () {
})
})
it('dial from ws to ws no proto', (done) => {
swarmD.dial(peerE, (err) => {
expect(err).to.not.exist
expect(Object.keys(swarmD.muxedConns).length).to.equal(1)
done()
})
})
it('dial from ws to ws', (done) => {
swarmE.handle('/abacaxi/1.0.0', (conn) => {
conn.pipe(conn)
@ -182,11 +198,13 @@ describe('high level API - with everything mixed all together!', function () {
it('dial from tcp+ws to tcp+ws', (done) => {
swarmC.handle('/mamao/1.0.0', (conn) => {
expect(conn.peerId).to.exist
conn.pipe(conn)
})
swarmA.dial(peerC, '/mamao/1.0.0', (err, conn) => {
expect(err).to.not.exist
expect(conn.peerId).to.exist
expect(Object.keys(swarmA.muxedConns).length).to.equal(2)
conn.end()
@ -194,4 +212,11 @@ describe('high level API - with everything mixed all together!', function () {
conn.on('end', done)
})
})
it('close a muxer emits event', (done) => {
parallel([
(cb) => swarmC.close(cb),
(cb) => swarmA.once('peer-mux-closed', () => cb())
], done)
})
})


@ -0,0 +1,48 @@
/* eslint-env mocha */
'use strict'
const expect = require('chai').expect
const multiaddr = require('multiaddr')
const Id = require('peer-id')
const Peer = require('peer-info')
const WebSockets = require('libp2p-websockets')
const bl = require('bl')
const Swarm = require('../src')
describe('transport - websockets', function () {
this.timeout(10000)
var swarm
before(() => {
const b58IdSrc = 'QmYzgdesgjdvD3okTPGZT9NPmh1BuH5FfTVNKjsvaAprhb'
// use a pre generated Id to save time
const idSrc = Id.createFromB58String(b58IdSrc)
const peerSrc = new Peer(idSrc)
swarm = new Swarm(peerSrc)
})
it('add', (done) => {
swarm.transport.add('ws', new WebSockets(), () => {
expect(Object.keys(swarm.transports).length).to.equal(1)
done()
})
})
it('dial', (done) => {
const ma = multiaddr('/ip4/127.0.0.1/tcp/9100/ws')
const conn = swarm.transport.dial('ws', ma, (err, conn) => {
expect(err).to.not.exist
})
conn.pipe(bl((err, data) => {
expect(err).to.not.exist
expect(data.toString()).to.equal('hey')
done()
}))
conn.write('hey')
conn.end()
})
})


@ -0,0 +1,90 @@
/* eslint-env mocha */
'use strict'
const expect = require('chai').expect
const multiaddr = require('multiaddr')
const peerId = require('peer-id')
const PeerInfo = require('peer-info')
const WebRTCStar = require('libp2p-webrtc-star')
const bl = require('bl')
const parallel = require('run-parallel')
const Swarm = require('../src')
describe('transport - webrtc-star', function () {
this.timeout(10000)
let swarm1
let peer1
let swarm2
let peer2
before(() => {
const id1 = peerId.createFromB58String('QmcgpsyWgH8Y8ajJz1Cu72KnS5uo2Aa2LpzU7kinSooooA')
peer1 = new PeerInfo(id1)
const mh1 = multiaddr('/libp2p-webrtc-star/ip4/127.0.0.1/tcp/15555/ws/ipfs/QmcgpsyWgH8Y8ajJz1Cu72KnS5uo2Aa2LpzU7kinSooooA')
peer1.multiaddr.add(mh1)
const id2 = peerId.createFromB58String('QmcgpsyWgH8Y8ajJz1Cu72KnS5uo2Aa2LpzU7kinSooooB')
peer2 = new PeerInfo(id2)
const mh2 = multiaddr('/libp2p-webrtc-star/ip4/127.0.0.1/tcp/15555/ws/ipfs/QmcgpsyWgH8Y8ajJz1Cu72KnS5uo2Aa2LpzU7kinSooooB')
peer2.multiaddr.add(mh2)
swarm1 = new Swarm(peer1)
swarm2 = new Swarm(peer2)
})
it('add WebRTCStar transport to swarm 1', (done) => {
swarm1.transport.add('wstar', new WebRTCStar(), () => {
expect(Object.keys(swarm1.transports).length).to.equal(1)
done()
})
})
it('add WebRTCStar transport to swarm 2', (done) => {
swarm2.transport.add('wstar', new WebRTCStar(), () => {
expect(Object.keys(swarm2.transports).length).to.equal(1)
done()
})
})
it('listen on swarm 1', (done) => {
swarm1.transport.listen('wstar', {}, (conn) => {
conn.pipe(conn)
}, done)
})
it('listen on swarm 2', (done) => {
swarm2.transport.listen('wstar', {}, (conn) => {
conn.pipe(conn)
}, done)
})
it('dial', (done) => {
swarm1.transport.dial('wstar', peer2.multiaddrs[0], (err, conn) => {
expect(err).to.not.exist
const text = 'Hello World'
conn.pipe(bl((err, data) => {
expect(err).to.not.exist
expect(data.toString()).to.equal(text)
done()
}))
conn.write(text)
conn.end()
})
})
it('close', (done) => {
parallel([
(cb) => {
swarm1.transport.close('wstar', cb)
},
(cb) => {
swarm2.transport.close('wstar', cb)
}
], done)
})
})


@ -0,0 +1,74 @@
/* eslint-env mocha */
'use strict'
const expect = require('chai').expect
const multiaddr = require('multiaddr')
const PeerId = require('peer-id')
const PeerInfo = require('peer-info')
const WebSockets = require('libp2p-websockets')
const spdy = require('libp2p-spdy')
const fs = require('fs')
const path = require('path')
const Swarm = require('../src')
describe('high level API (swarm with spdy + websockets)', function () {
this.timeout(60 * 1000)
var swarm
var peerDst
before(() => {
const peerSrc = new PeerInfo()
swarm = new Swarm(peerSrc)
})
it('add spdy', () => {
swarm.connection.addStreamMuxer(spdy)
swarm.connection.reuse()
})
it('add ws', () => {
swarm.transport.add('ws', new WebSockets())
expect(Object.keys(swarm.transports).length).to.equal(1)
})
it('create Dst peer info', () => {
const id = PeerId.createFromJSON(
JSON.parse(
fs.readFileSync(
path.join(__dirname, './test-data/id-2.json'))))
peerDst = new PeerInfo(id)
const ma = multiaddr('/ip4/127.0.0.1/tcp/9200/ws')
peerDst.multiaddr.add(ma)
})
it('dial to warm a conn', (done) => {
swarm.dial(peerDst, (err) => {
expect(err).to.not.exist
done()
})
})
it('dial on protocol, use warmed conn', (done) => {
swarm.dial(peerDst, '/echo/1.0.0', (err, conn) => {
expect(err).to.not.exist
conn.end()
conn.on('data', () => {}) // let it flow.. let it flooooow
conn.on('end', done)
})
})
it('close', (done) => {
// cause CI is slow
setTimeout(() => {
swarm.close(done)
}, 1000)
})
// TODO - test that the listener (node.js peer) can dial back
// do that by dialing on a protocol to activate that behaviour
// like libp2p-spdy tests
})
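
As with the plain websockets spec, the /echo/1.0.0 target on port 9200 is a pre-started Node peer rather than part of this diff. A sketch of what that side roughly looks like, assembled only from calls this changeset itself uses (createFromJSON, transport.add, addStreamMuxer, reuse, handle, listen); everything else is an assumption:

// hypothetical /echo/1.0.0 responder the high level spec dials against
const fs = require('fs')
const path = require('path')
const multiaddr = require('multiaddr')
const PeerId = require('peer-id')
const PeerInfo = require('peer-info')
const WebSockets = require('libp2p-websockets')
const spdy = require('libp2p-spdy')
const Swarm = require('../src')

const id = PeerId.createFromJSON(
  JSON.parse(fs.readFileSync(path.join(__dirname, './test-data/id-2.json'))))
const peer = new PeerInfo(id)
peer.multiaddr.add(multiaddr('/ip4/127.0.0.1/tcp/9200/ws'))

const swarm = new Swarm(peer)
swarm.transport.add('ws', new WebSockets())
swarm.connection.addStreamMuxer(spdy)
swarm.connection.reuse()
swarm.handle('/echo/1.0.0', (conn) => conn.pipe(conn))   // echo every protocol stream back
swarm.listen(() => { /* responder ready for the warmed-conn dials above */ })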


@@ -0,0 +1,129 @@
/* eslint-env mocha */
'use strict'
const expect = require('chai').expect
const multiaddr = require('multiaddr')
const peerId = require('peer-id')
const PeerInfo = require('peer-info')
const WebRTCStar = require('libp2p-webrtc-star')
const spdy = require('libp2p-spdy')
const bl = require('bl')
const parallel = require('run-parallel')
const Swarm = require('../src')
describe('high level API (swarm with spdy + webrtc-star)', function () {
this.timeout(60 * 1000)
let swarm1
let peer1
let wstar1
let swarm2
let peer2
let wstar2
before(() => {
const id1 = peerId.create()
peer1 = new PeerInfo(id1)
const mh1 = multiaddr('/libp2p-webrtc-star/ip4/127.0.0.1/tcp/15555/ws/ipfs/' + id1.toB58String())
peer1.multiaddr.add(mh1)
const id2 = peerId.create()
peer2 = new PeerInfo(id2)
const mh2 = multiaddr('/libp2p-webrtc-star/ip4/127.0.0.1/tcp/15555/ws/ipfs/' + id2.toB58String())
peer2.multiaddr.add(mh2)
swarm1 = new Swarm(peer1)
swarm2 = new Swarm(peer2)
})
it('add WebRTCStar transport to swarm 1', () => {
wstar1 = new WebRTCStar()
swarm1.transport.add('wstar', wstar1)
expect(Object.keys(swarm1.transports).length).to.equal(1)
})
it('add WebRTCStar transport to swarm 2', () => {
wstar2 = new WebRTCStar()
swarm2.transport.add('wstar', wstar2)
expect(Object.keys(swarm2.transports).length).to.equal(1)
})
it('listen on swarm 1', (done) => {
swarm1.listen(done)
})
it('listen on swarm 2', (done) => {
swarm2.listen(done)
})
it('add spdy', () => {
swarm1.connection.addStreamMuxer(spdy)
swarm1.connection.reuse()
swarm2.connection.addStreamMuxer(spdy)
swarm2.connection.reuse()
})
it('handle proto', () => {
swarm2.handle('/echo/1.0.0', (conn) => {
conn.pipe(conn)
})
})
it('dial on proto', (done) => {
swarm1.dial(peer2, '/echo/1.0.0', (err, conn) => {
expect(err).to.not.exist
expect(Object.keys(swarm1.muxedConns).length).to.equal(1)
const text = 'Hello World'
conn.pipe(bl((err, data) => {
expect(err).to.not.exist
expect(data.toString()).to.equal(text)
expect(Object.keys(swarm2.muxedConns).length).to.equal(1)
done()
}))
conn.write(text)
conn.end()
})
})
it('create a third node and check that discovery works', (done) => {
wstar1.discovery.on('peer', (peerInfo) => {
expect(Object.keys(swarm1.muxedConns).length).to.equal(1)
swarm1.dial(peerInfo, () => {
expect(Object.keys(swarm1.muxedConns).length).to.equal(2)
})
})
wstar2.discovery.on('peer', (peerInfo) => {
swarm2.dial(peerInfo)
})
const id3 = peerId.create()
const peer3 = new PeerInfo(id3)
const mh3 = multiaddr('/libp2p-webrtc-star/ip4/127.0.0.1/tcp/15555/ws/ipfs/' + id3.toB58String())
peer3.multiaddr.add(mh3)
const swarm3 = new Swarm(peer3)
const wstar3 = new WebRTCStar()
swarm3.transport.add('wstar', wstar3)
swarm3.connection.addStreamMuxer(spdy)
swarm3.connection.reuse()
swarm3.listen(() => {
setTimeout(() => {
expect(Object.keys(swarm1.muxedConns).length).to.equal(2)
expect(Object.keys(swarm2.muxedConns).length).to.equal(2)
expect(Object.keys(swarm3.muxedConns).length).to.equal(2)
swarm3.close(done)
}, 8000)
})
})
it('close', (done) => {
parallel([
swarm1.close,
swarm2.close
], done)
})
})
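
The discovery assertions above all rest on one wiring: every peer announced by the star signalling server gets dialled, which is why muxedConns climbs to 2 on each swarm once the third node joins. A compact restatement of that hook, with the wrapper function invented purely for illustration:

// illustrative wrapper around the discovery -> dial wiring used in the spec above
function autoDial (wstar, swarm) {
  wstar.discovery.on('peer', (peerInfo) => {
    swarm.dial(peerInfo, (err) => {
      if (err) { console.error('auto-dial failed', err) }
    })
  })
}

// usage: autoDial(wstar1, swarm1); autoDial(wstar2, swarm2)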


@@ -2,11 +2,7 @@
'use strict'
const expect = require('chai').expect
const multiaddr = require('multiaddr')
const Id = require('peer-id')
const Peer = require('peer-info')
const WebSockets = require('libp2p-websockets')
const bl = require('bl')
const w = require('webrtcsupport')
const Swarm = require('../src')
@@ -17,109 +13,15 @@ describe('basics', () => {
})
})
describe('transport - websockets', function () {
this.timeout(10000)
require('./browser-00-transport-websockets.js')
var swarm
if (w.support) {
require('./browser-01-transport-webrtc-star.js')
}
before((done) => {
const b58IdSrc = 'QmYzgdesgjdvD3okTPGZT9NPmh1BuH5FfTVNKjsvaAprhb'
// use a pre generated Id to save time
const idSrc = Id.createFromB58String(b58IdSrc)
const peerSrc = new Peer(idSrc)
swarm = new Swarm(peerSrc)
require('./browser-02-swarm-with-muxing-plus-websockets.js')
done()
})
it('add', (done) => {
swarm.transport.add('ws', new WebSockets(), () => {
expect(Object.keys(swarm.transports).length).to.equal(1)
done()
})
})
it('dial', (done) => {
const ma = multiaddr('/ip4/127.0.0.1/tcp/9100/websockets')
const conn = swarm.transport.dial('ws', ma, (err, conn) => {
expect(err).to.not.exist
})
conn.pipe(bl((err, data) => {
expect(err).to.not.exist
expect(data.toString()).to.equal('hey')
done()
}))
conn.write('hey')
conn.end()
})
})
describe('high level API - 1st without stream multiplexing (on websockets)', function () {
this.timeout(10000)
var swarm
var peerDst
before((done) => {
const b58IdSrc = 'QmYzgdesgjdvD3okTPGZT9NPmh1BuH5FfTVNKjsvaAprhb'
// use a pre generated Id to save time
const idSrc = Id.createFromB58String(b58IdSrc)
const peerSrc = new Peer(idSrc)
swarm = new Swarm(peerSrc)
done()
})
after((done) => {
done()
// swarm.close(done)
})
it('add ws', (done) => {
swarm.transport.add('ws', new WebSockets())
expect(Object.keys(swarm.transports).length).to.equal(1)
done()
})
it('create Dst peer info', (done) => {
const b58IdDst = 'QmYzgdesgjdvD3okTPGZT9NPmh1BuH5FfTVNKjsvaAprhb'
// use a pre generated Id to save time
const idDst = Id.createFromB58String(b58IdDst)
peerDst = new Peer(idDst)
const ma = multiaddr('/ip4/127.0.0.1/tcp/9200/websockets')
peerDst.multiaddr.add(ma)
done()
})
it('dial on protocol', (done) => {
swarm.dial(peerDst, '/echo/1.0.0', (err, conn) => {
expect(err).to.not.exist
conn.pipe(bl((err, data) => {
expect(err).to.not.exist
expect(data.toString()).to.equal('hey')
done()
}))
conn.write('hey')
conn.end()
})
})
it('dial to warm a conn', (done) => {
swarm.dial(peerDst, (err) => {
expect(err).to.not.exist
done()
})
})
it('dial on protocol, reuse warmed conn', (done) => {
swarm.dial(peerDst, '/echo/1.0.0', (err, conn) => {
expect(err).to.not.exist
conn.end()
conn.on('data', () => {}) // let it flow.. let it flooooow
conn.on('end', done)
})
})
})
if (w.support) {
require('./browser-03-swarm-with-muxing-plus-webrtc-star.js')
require('./browser-04-swarm-with-muxing-plus-websockets-and-webrtc-star.js')
}

5
test/test-data/id-1.json Normal file

@@ -0,0 +1,5 @@
{
"id": "1220dddcb7b540358368df92bf537a60f20663130ca09ef9eec40b9a4ed15b3572a7",
"privKey": "080012a609308204a202010002820101008b6959ff2e965869d2d450ced6135e641c92ceaa216877c566de2cab9a9d6e62f8a859efa8f71c8e67d1722cddb5afc4d382001d423e9839f382e543723500566e3e71fe08c004765f848aacda8df2478dcfdafc0fa377cc6beda9e5c5a9b456ccb0fccde94ade50fd58a057d79e805344fa57ee75e28cce37bda4f68fd890f21a99598e68bb02ebb36864a8503403a5d01eba9873def7cf19c10a6cb9d0eaff55e714a0855fe0323feec9ce1a1d5787a9908ac4171becb75450c06e7d5f747954ac8b9ee58d259bf931ceb7366b454c7bc82c18490c8407866969858a6c13e11b721780806f2116ad771caf6a3674c77ba0176a4c4c5fefdaa93fcae5dc03d5020301000102820100056fea6ea5667fb440e0bef6122b573718563171393455d78117912e702d4bacd87dd8641c76e6ca370a58259fd00236eef8d7004d211bd6c6c488248543c3eb9b091c7107ee553e38a376b51f21021e004de70085ab9e747e911a5b37c6529e40057716a0cea6b509ec76f476185c70e2f3d092204ee1a6f94d902d7d96b8b069c7ccda1253c5c1fc41b9367a006791c1ba5d8e52352b338a56ebc0eee01941c66ec86a007664f543f58578a66d0bd6b12b33fde2e73189962629742827efb9e57df608116da25e0444ba7df6893273a652ec3d7c7b2d97e92c51465518a6132f78865b08403b41a2e3ee615793a4725227d26c028be3234d648c5b1b87235102818100cd39918e5d588e7b761868006df8952f8a1ad6e4a88cfb0cf0fc887e4cfc0631e45c95706ecb9c04c57b39d4117020a06e5b8aa21d8286cd8849ad01e491ff8590c872293f7db1ffd03731c98eac74e1387f26eeff5d835beb7e997f523caf633481432b1ff00e23c3d48d942820aed5fef125057c21b59c632c385c8dd411cb02818100ade7546eb30fead900724b289037449497547cac34bc69a85d675a48b40a29ec52a30cee4c392e93f25c2ecec6cdd6c77b1e0c7168dd75dc35044fafbda69ba11d3eb04ad5f708ad860cc3eb8c9640826be434cb7816f524b7bd4bf3f42e460f933d2a8330182998f5189dcd573839870f7f9d7a599777871827854316810cdf0281806c75dc636d19fc536b9a827c97a224d6371af02f7094f1a969434dafd267efae368e67bc401203a6d1e7ca2c35fb1883314fd7f8cdb7ca1e9dc4b256a9c22f551bab940a10b0117ead403e63d3af7925fe81d4c5c2d85d301b49913e24ec45951c8ea43d0a68085106923330f5f42ff291064916990007c75af267e7225dcdf90281804d19c2d5518e3d10f8a1b3b0c83fb8a8286fccd68c8afc4d291c296b12676f2ed77472c73404262271d16cef403502846e9163f2e40b4bb5d5cb9388d70c86f36783e3a54a37bc2132cd760f78c524d4ae00ff673656f758d01d9d0f0bb3785c6f6b2eedfae4bb8c951dd4d8b552b82ea9306b21539753e7114e7446ca336d0102818029cc3310268d0217f2392e16b02edf46d1e4f76feff61656831ab9a2161ba9c107f2d3a6f4661f0e6ac29c0632475b5b13fefb442e1ed5773c139612ba04277fcdf6d58fb392d5de90d352e9bbfffeb04604f68ef4bea3cd4b87764366db9344b66a45c2b2a68a33b1e4cba073519a4021599c0763a25a5390062141d781409d",
"pubKey": "080012a60230820122300d06092a864886f70d01010105000382010f003082010a02820101008b6959ff2e965869d2d450ced6135e641c92ceaa216877c566de2cab9a9d6e62f8a859efa8f71c8e67d1722cddb5afc4d382001d423e9839f382e543723500566e3e71fe08c004765f848aacda8df2478dcfdafc0fa377cc6beda9e5c5a9b456ccb0fccde94ade50fd58a057d79e805344fa57ee75e28cce37bda4f68fd890f21a99598e68bb02ebb36864a8503403a5d01eba9873def7cf19c10a6cb9d0eaff55e714a0855fe0323feec9ce1a1d5787a9908ac4171becb75450c06e7d5f747954ac8b9ee58d259bf931ceb7366b454c7bc82c18490c8407866969858a6c13e11b721780806f2116ad771caf6a3674c77ba0176a4c4c5fefdaa93fcae5dc03d50203010001"
}

5
test/test-data/id-2.json Normal file

@@ -0,0 +1,5 @@
{
"id": "12209cc55e7a4dd253eb4867307a21dde36c49ff044c7ffac7adbecc339dc949e32b",
"privKey": "080012a709308204a3020100028201010081365503703172a249b6d98d833c8ff875d8d7e8bec3f76fdca94f24ca0f73c84b5b2748381071deeabbb64309c9edd3c084f0fce0351d3fa131151fa9bb0d4bfd03b856f21f483a8162e657e0487a7b4c3045c0e86291007ae175bf46872c9794043271d0a5940ba76126e1b448cfcaf84150b7a56da997cad75e996176d2750526be99907edf35f4bae8490df2f96b4b337f11c18f7ec3d428209682fdf420e5cb11d59ba3aace98ead99bfa39e403bbc343aed643f04c1b0af9d9fbf6a815dc1ddaf58de655d49eaea58020634e6e3bdd21684e157b07b54162c509bed7ae74e04da7a188f59adc982da4e3ed1d525a3c2d23f6c7535d154bdb9b2e97426f02030100010282010039cd19489976b5461ddd9b026ff3b69fb9f00fddc1009efebe624ad23545a650b24d0b8c85efed5080070aa8808781495974deecf04b325355834464cea3ab7613b0075575a842c25140f1b3dbd3f05e999d7a86aa2df599965ea732b295238087293d7ba68f7b639f3399961bf4fa675c98b34803cbc3b2f07d5987198f72e354b46f1f485c0f4f7b0418a28627462f12989edbac437d953b4870e50b07b00401e380e0189b9ceed55bca893ad7ae3014e49a2163b2a6d81a678d92f5d417147ed5c0af9160bfc3f4213214e6c60c8a1d579191d74c0e79399d2fa335e8dd75b173b4b4d51fc19aca2d388b591293ec21b965fc9a31059f179a80349ad7c17902818100fc381f94a2559b256b3a267102c4ac5ea2cd8747b8806133dddc7f35c539a8ee7db5dfb1f97435946395daff7a1548009263fd4b4ff1569947612a4cb68bb4159436cd06e53c907f6cd5965648a6dee116afafe3eb643fbdd98293c9fa04dfb16ea6dc1565ecdf76d390cc21100fd6b1215dd6d314b3598c6a86a8932a1b489b0281810083262d308b0a2c72bc614c61ef465430762389e24d098bb404abdfda22d5b90193ca996aeb20a043a0b4a3794e871dca9a5575b78e37557c38fc3bca789f61b4def2e87a32a3097f99504b70c66dee5de1140f5455701548a70f8fa4a04058244b2289b183496305bed2b3cf7383bed00c04005b57b87b6f1560324788de78bd028181008ed3594ed9fe9034c85bcd9901704e0be93569fdfb44f7c65f4495e4e52299bf3400e203eeb7180047c47c975f92dd8b355ad9fce3f04e91ce11ffda21254b7d4cc91ba163febff4e8b4aac581aebd57c97903a9958ad76db2d676ca5182a109e1172c5a11b5e97568a8fe6f5aa11d7a80e29adc3d44b2d90919e82c2e13f531028180596fee7f5d4279df60f26599a008711f7f616b05a60dad74fd7e8cb100f28931372d8204750691e4acb2a38cf56d9576765b7364d31a8f34a0d3fa9e703618f6b4d8288c34fe145b2d1da1e4ed9d4862433d5fdade4d0a66ba6f15416a7b96e06150d35ff82d52d737340fa5989c2ec3487e6e13dcb5958fc29f108ad21f0d6d02818034615146ff8564037844255d5f6f324efa20f13f22ad0a1279060548e7b31a9be32794e6be3b92eb6b006d6dde82abe3b81c8dc55eb07e41d0b0fca2a2a6a6896170a128cac2041533825d176f0adcdd582c387f8bcdd4cd319764012714116c96f7735954d063f4280a98ae38faac2ae79fed7929a8d9d3c96c16410e5fc2ce",
"pubKey": "080012a60230820122300d06092a864886f70d01010105000382010f003082010a028201010081365503703172a249b6d98d833c8ff875d8d7e8bec3f76fdca94f24ca0f73c84b5b2748381071deeabbb64309c9edd3c084f0fce0351d3fa131151fa9bb0d4bfd03b856f21f483a8162e657e0487a7b4c3045c0e86291007ae175bf46872c9794043271d0a5940ba76126e1b448cfcaf84150b7a56da997cad75e996176d2750526be99907edf35f4bae8490df2f96b4b337f11c18f7ec3d428209682fdf420e5cb11d59ba3aace98ead99bfa39e403bbc343aed643f04c1b0af9d9fbf6a815dc1ddaf58de655d49eaea58020634e6e3bdd21684e157b07b54162c509bed7ae74e04da7a188f59adc982da4e3ed1d525a3c2d23f6c7535d154bdb9b2e97426f0203010001"
}

29552
vendor/forge.bundle.js vendored Normal file

File diff suppressed because it is too large.