We have a peerstore that keeps all data for all observed peers in memory with no eviction.
This is fine when you don't discover many peers, but when using the DHT you encounter a significant number of them, so peer storage grows unboundedly over time.
We have a persistent peerstore, but it only periodically writes peers into the datastore to be read back at startup, still keeping everything in memory.
This also means a restart gives no temporary reprieve from the memory leak, since the previously observed peer data is read straight back into memory at startup.
This change refactors the peerstore to use a datastore by default, reading and writing peer info as it arrives. It can be configured with a `MemoryDatastore` if purely in-memory storage is desired.
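As a rough sketch of what wiring a datastore into node creation might look like, assuming a 0.3x-era js-libp2p API with `libp2p-tcp`, `libp2p-mplex`, `libp2p-noise` and the `MemoryDatastore` from `datastore-core` (exact option names and exports may differ between releases):

```js
const Libp2p = require('libp2p')
const TCP = require('libp2p-tcp')
const Mplex = require('libp2p-mplex')
const { NOISE } = require('libp2p-noise')
const { MemoryDatastore } = require('datastore-core')

async function main () {
  const node = await Libp2p.create({
    // the peerstore reads and writes peer records through this datastore;
    // pass a persistent implementation (e.g. datastore-level) instead to
    // keep peer data across restarts without holding it all in memory
    datastore: new MemoryDatastore(),
    addresses: {
      listen: ['/ip4/127.0.0.1/tcp/0']
    },
    modules: {
      transport: [TCP],
      streamMuxer: [Mplex],
      connEncryption: [NOISE]
    }
  })

  await node.start()
}

main()
```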
It was necessary to make the peerstore and `*Book` interfaces asynchronous, since the datastore API is asynchronous.
BREAKING CHANGE: `libp2p.handle`, `libp2p.registrar.register` and the peerstore methods have become async
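A sketch of what call sites might look like after this change (the `peerId` and `topology` values are assumed to be created elsewhere, and the exact signatures should be checked against the release):

```js
// before this change these were synchronous:
//   libp2p.handle('/echo/1.0.0', handler)
//   const addresses = libp2p.peerStore.addressBook.get(peerId)

// after, they return promises and must be awaited
await libp2p.handle('/echo/1.0.0', ({ stream }) => {
  // handle incoming streams for the protocol
})

// peerstore reads now hit the datastore, so they are async too
const addresses = await libp2p.peerStore.addressBook.get(peerId)

// registrar registration is also async
const registrarId = await libp2p.registrar.register(topology)
```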
Sometimes you encounter peers with lots of addresses. When this happens, more than the default limit of 10 event listeners can be attached to the abort signal we use to abort all the dials, which causes Node.js to print a MaxListenersExceededWarning. The warning is misleading here, since it suggests a listener leak when the listeners are expected and cleaned up.
This PR increases the maximum number of listeners allowed on the signal.
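The mechanism is roughly the following, a sketch using Node's `events.setMaxListeners`, which accepts an `AbortSignal` as an event target (the `addrs` list and `dialAddress` helper are hypothetical):

```js
const { setMaxListeners } = require('events')

// one controller aborts every in-flight dial attempt for this peer
const controller = new AbortController()

try {
  // each dial attaches an 'abort' listener to this shared signal, so a
  // peer with many addresses can exceed the default cap of 10 listeners
  // and trigger the warning — raise the cap instead
  setMaxListeners(Infinity, controller.signal)
} catch {
  // setMaxListeners on an AbortSignal is Node-only; ignore in browsers
}

// race the dials; whichever settles first can abort the rest
await Promise.any(addrs.map(addr => dialAddress(addr, { signal: controller.signal })))
```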
Fixes #900
BREAKING CHANGES:
- top-level types were updated
- `multiaddr@9.0.0` is now used
- dialer and keychain internal property names changed
- `connectionManager` `minPeers` is no longer supported (see the sketch below)
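For the last item, existing configs would drop the option; something along these lines, where `minConnections` as a replacement is an assumption to verify against the changelog:

```js
const node = await Libp2p.create({
  connectionManager: {
    // minPeers: 25,      // removed — no longer supported
    minConnections: 25,   // assumed replacement; check the release docs
    maxConnections: 100
  }
  // ...transports, muxers, etc. as before
})
```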