Diffstat (limited to 'content/en/docs')
-rw-r--r-- content/en/docs/Concepts/broadcast_layer.png | bin 0 -> 176866 bytes
-rw-r--r-- content/en/docs/Concepts/dependencies.jpg | bin 12970 -> 0 bytes
-rw-r--r-- content/en/docs/Concepts/elements.md | 90
-rw-r--r-- content/en/docs/Concepts/fa.md | 2
-rw-r--r-- content/en/docs/Concepts/layers.jpg | bin 104947 -> 0 bytes
-rw-r--r-- content/en/docs/Concepts/model_elements.png | bin 0 -> 27515 bytes
-rw-r--r-- content/en/docs/Concepts/ouroboros-model.md | 635
-rw-r--r-- content/en/docs/Concepts/problem_osi.md | 174
-rw-r--r-- content/en/docs/Concepts/rec_netw.jpg | bin 63370 -> 0 bytes
-rw-r--r-- content/en/docs/Concepts/unicast_layer.png | bin 0 -> 206355 bytes
-rw-r--r-- content/en/docs/Concepts/unicast_layer_bc_pft.png | bin 0 -> 444657 bytes
-rw-r--r-- content/en/docs/Concepts/unicast_layer_bc_pft_split.png | bin 0 -> 688010 bytes
-rw-r--r-- content/en/docs/Concepts/unicast_layer_bc_pft_split_broadcast.png | bin 0 -> 894152 bytes
-rw-r--r-- content/en/docs/Concepts/unicast_layer_dag.png | bin 0 -> 36856 bytes
-rw-r--r-- content/en/docs/Concepts/what.md | 78
-rw-r--r-- content/en/docs/Contributions/_index.md | 23
-rw-r--r-- content/en/docs/Extra/ioq3.md | 7
-rw-r--r-- content/en/docs/Extra/rumba.md | 13
-rw-r--r-- content/en/docs/Intro/_index.md | 67
-rw-r--r-- content/en/docs/Overview/_index.md | 120
-rw-r--r-- content/en/docs/Releases/0_18.md | 109
-rw-r--r-- content/en/docs/Releases/0_20.md | 70
-rw-r--r-- content/en/docs/Releases/_index.md | 6
-rw-r--r-- content/en/docs/Start/_index.md | 220
-rw-r--r-- content/en/docs/Start/check.md | 49
-rw-r--r-- content/en/docs/Start/download.md | 28
-rw-r--r-- content/en/docs/Start/install.md | 57
-rw-r--r-- content/en/docs/Start/requirements.md | 76
-rw-r--r-- content/en/docs/Tools/_index.md | 7
-rw-r--r-- content/en/docs/Tools/grafana-frcp-constants.png | bin 0 -> 42048 bytes
-rw-r--r-- content/en/docs/Tools/grafana-frcp-window.png | bin 0 -> 107506 bytes
-rw-r--r-- content/en/docs/Tools/grafana-frcp.png | bin 0 -> 89571 bytes
-rw-r--r-- content/en/docs/Tools/grafana-ipcp-dt-dht.png | bin 0 -> 137833 bytes
-rw-r--r-- content/en/docs/Tools/grafana-ipcp-dt-fa.png | bin 0 -> 214580 bytes
-rw-r--r-- content/en/docs/Tools/grafana-ipcp-np1-cc.png | bin 0 -> 235720 bytes
-rw-r--r-- content/en/docs/Tools/grafana-ipcp-np1-fu.png | bin 0 -> 284669 bytes
-rw-r--r-- content/en/docs/Tools/grafana-ipcp-np1.png | bin 0 -> 282859 bytes
-rw-r--r-- content/en/docs/Tools/grafana-lsdb.png | bin 0 -> 27429 bytes
-rw-r--r-- content/en/docs/Tools/grafana-system.png | bin 0 -> 43809 bytes
-rw-r--r-- content/en/docs/Tools/grafana-variables-interval.png | bin 0 -> 75086 bytes
-rw-r--r-- content/en/docs/Tools/grafana-variables-system.png | bin 0 -> 43642 bytes
-rw-r--r-- content/en/docs/Tools/grafana-variables-type.png | bin 0 -> 50204 bytes
-rw-r--r-- content/en/docs/Tools/grafana-variables.png | bin 0 -> 14056 bytes
-rw-r--r-- content/en/docs/Tools/metrics.md | 298
-rw-r--r-- content/en/docs/Tools/rumba-topology.png | bin 0 -> 16656 bytes
-rw-r--r-- content/en/docs/Tools/rumba.md | 676
-rw-r--r-- content/en/docs/Tools/rumba_example.py | 41
-rw-r--r-- content/en/docs/Tutorials/tutorial-1.md | 18
-rw-r--r-- content/en/docs/Tutorials/tutorial-2.md | 2
-rwxr-xr-x content/en/docs/_index.md | 4
50 files changed, 2315 insertions, 555 deletions
diff --git a/content/en/docs/Concepts/broadcast_layer.png b/content/en/docs/Concepts/broadcast_layer.png
new file mode 100644
index 0000000..01079c0
--- /dev/null
+++ b/content/en/docs/Concepts/broadcast_layer.png
Binary files differ
diff --git a/content/en/docs/Concepts/dependencies.jpg b/content/en/docs/Concepts/dependencies.jpg
deleted file mode 100644
index eaa9e79..0000000
--- a/content/en/docs/Concepts/dependencies.jpg
+++ /dev/null
Binary files differ
diff --git a/content/en/docs/Concepts/elements.md b/content/en/docs/Concepts/elements.md
deleted file mode 100644
index a803065..0000000
--- a/content/en/docs/Concepts/elements.md
+++ /dev/null
@@ -1,90 +0,0 @@
----
-title: "Elements of a recursive network"
-author: "Dimitri Staessens"
-date: 2019-07-11
-weight: 2
-description: >
- The building blocks for recursive networks.
----
-
-This section describes the high-level concepts and building blocks are
-used to construct a decentralized [recursive network](/docs/what):
-layers and flows. (Ouroboros has two different kinds of layers, but
-we will dig into all the fine details in later posts).
-
-A __layer__ in a recursive network embodies all of the functionalities
-that are currently in layers 3 and 4 of the OSI model (along with some
-other functions). The difference is subtle and takes a while to get
-used to (not unlike the differences in the term *variable* in
-imperative versus functional programming languages). A recursive
-network layer handles requests for communication to some remote
-process and, as a result, it either provides a handle to a
-communication channel -- a __flow__ endpoint --, or it raises some
-error that no such flow could be provided.
-
-A layer in Ouroboros is built up from a bunch of (identical) programs
-that work together, called Inter-Process Communication (IPC) Processes
-(__IPCPs__). The name "IPCP" was first coined for a component of the
-[LINCS]
-(https://www.osti.gov/biblio/5542785-delta-protocol-specification-working-draft)
-hierarchical network architecture built at Lawrence Livermore National
-Laboratories and was taken over in the RINA architecture. These IPCPs
-implement the core functionalities (such as routing, a dictionary) and
-can be seen as small virtual routers for the recursive network.
-
-{{<figure width="60%" src="/docs/concepts/rec_netw.jpg">}}
-
-In the illustration, a small 5-node recursive network is shown. It
-consists of two hosts that connect via edge routers to a small core.
-There are 6 layers in this network, labelled __A__ to __F__.
-
-On the right-hand end-host, a server program __Y__ is running (think a
-mail server program), and the (mail) client __X__ establishes a flow
-to __Y__ over layer __F__ (only the endpoints are drawn to avoid
-cluttering the image).
-
-Now, how does the layer __F__ get the messages from __X__ to __Y__?
-There are 4 IPCPs (__F1__ to __F4__) in layer __F__, that work
-together to provide the flow between the applications __X__ and
-__Y__. And how does __F3__ get the info to __F4__? That is where the
-recursion comes in. A layer at some level (its __rank__), will use
-flows from another layer at a lower level. The rank of a layer is a
-local value. In the hosts, layer __F__ is at rank 1, just above layer
-__C__ or layer __E_. In the edge router, layer __F__ is at rank 2,
-because there is also layer __D__ in that router. So the flow between
-__X__ and __Y__ is supported by flows in layer __C__, __D__ and __E__,
-and the flows in layer __D__ are supported by flows in layers __A__
-and __B__.
-
-Of course these dependencies can't go on forever. At the lowest level,
-layers __A__, __B__, __C__ and __E__ don't depend on a lower layer
-anymore, and are sometimes called 0-layers. They only implement the
-functions to provide flows, but internally, they are specifically
-tailored to a transmission technology or a legacy network
-technology. Ouroboros supports such layers over (local) shared memory,
-over the User Datagram Protocol, over Ethernet and a prototype that
-supports flows over an Ethernet FPGA device. This allows Ouroboros to
-integrate with existing networks at OSI layers 4, 2 and 1.
-
-If we then complete the picture above, when __X__ sends a packet to
-__Y__, it passes it to __F3__, which uses a flow to __F1__ that is
-implemented as a direct flow between __C2__ and __C1__. __F1__ then
-forwards the packet to __F2__ over a flow that is supported by layer
-__D__. This flow is implemented by two flows, one from __D2__ to
-__D1__, which is supported by layer A, and one from __D1__ to __D3__,
-which is supported by layer __B__. __F2__ will forward the packet to
-__F4__, using a flow provided by layer __E__, and __F4__ then delivers
-the packet to __Y__. So the packet moves along the following chain of
-IPCPs: __F3__ --> __C2__ --> __C1__ --> __F1__ --> __D2__ --> __A1__
---> __A2__ --> __D1__ --> __B1__ --> __B2__ --> __D3__ --> __F2__ -->
-__E1__ --> __E2__ --> __F4__.
-
-{{<figure width="40%" src="/docs/concepts/dependencies.jpg">}}
-
-A recursive network has __dependencies__ between layers in the
-network, and between IPCPs in a __system__. These dependencies can be
-represented as a directed acyclic graph (DAG). To avoid problems,
-these dependencies should never contain cycles (so a layer I should
-not directly or indirectly depend on itself). The rank of a layer is
-defined (either locally or globally) as the maximum depth of this
-layer in the DAG.
diff --git a/content/en/docs/Concepts/fa.md b/content/en/docs/Concepts/fa.md
index d91cc00..b03e3f7 100644
--- a/content/en/docs/Concepts/fa.md
+++ b/content/en/docs/Concepts/fa.md
@@ -30,7 +30,7 @@ system has an Ouroboros IRMd and a unicast IPCP. These IPCPs work
together to create a logical "layer". System 1 runs a "client"
program, System 2 runs a "server" program.
-We are going to explain in some detail the steps that Ourobros takes
+We are going to explain in some detail the steps that Ouroboros takes
to establish a flow between the "client" and "server" program so they
can communicate.
diff --git a/content/en/docs/Concepts/layers.jpg b/content/en/docs/Concepts/layers.jpg
deleted file mode 100644
index 5d3020c..0000000
--- a/content/en/docs/Concepts/layers.jpg
+++ /dev/null
Binary files differ
diff --git a/content/en/docs/Concepts/model_elements.png b/content/en/docs/Concepts/model_elements.png
new file mode 100644
index 0000000..bffbca8
--- /dev/null
+++ b/content/en/docs/Concepts/model_elements.png
Binary files differ
diff --git a/content/en/docs/Concepts/ouroboros-model.md b/content/en/docs/Concepts/ouroboros-model.md
new file mode 100644
index 0000000..7daa95b
--- /dev/null
+++ b/content/en/docs/Concepts/ouroboros-model.md
@@ -0,0 +1,635 @@
+---
+title: "The Ouroboros model"
+author: "Dimitri Staessens"
+date: 2020-06-12
+weight: 2
+description: >
+ A conceptual approach to packet networking fundamentals
+---
+
+```
+Computer science is as much about computers as astronomy is
+about telescopes.
+ -- Edsger Wybe Dijkstra
+```
+
+The model for computer networks underlying the Ouroboros prototype is
+the result of a long process of gradual increases in my understanding
+of the core principles that underlie computer networks, starting from
+my work on traffic engineering packet-over-optical networks using
+Generalized Multi-Protocol Label Switching (G/MPLS) and Path
+Computation Element (PCE), then Software Defined Networks (SDN), the
+work with Sander investigating the Recursive InterNetwork Architecture
+(RINA) and finally our implementation of what would become the
+Ouroboros Prototype. The way it is presented here is not a reflection
+of this long process, but a crystallization of my current understanding
+of the Ouroboros model.
+
+I'll start with the very basics, assuming no delay on links and
+infinite capacity, and then gradually add delay, link capacity,
+failures, etc. to assess their impact and derive _what_ needs to be
+added _where_ in order to arrive at the complete Ouroboros model.
+
+The main objective of the definitions -- and the Ouroboros model as a
+whole -- is to __separate mechanism__ (the _what_) __from policy__
+(the _how_) so that we have objective definitions and a _consistent_
+framework for _reasoning_ about functions and protocols in computer
+networks.
+
+### The importance of first principles
+
+One word of caution, because this model might read like I'm
+"reinventing the wheel" and we already know _how_ to do everything that
+is written here. Of course we do! The point is that the model
+[reduces](https://en.wikipedia.org/wiki/Reductionism)
+networking to its _essence_, to its fundamental parts.
+
+After studying most courses on Computer Networks, I could name the 7
+layers of the OSI model, knew how to draw TCP 3-way handshakes,
+could detail 5 different TCP congestion control mechanisms, calculate
+optimal IP subnets given a set of underlying Local Area Networks, draw
+UDP headers, chain firewall rules in iptables, calculate CRC
+checksums, and derive spanning trees given MAC addresses of Ethernet
+bridges. But after all that, I still feel such courses teach about as
+much about computer networks as cookbooks teach about chemistry. I
+wanted to go beyond technology and the rote knowledge of _how things
+work_ to establish a thorough understanding of _why they work_.
+During most of my PhD work at the engineering department, I spent my
+research time on modeling telecommunications networks and computer
+networks as _graphs_. The nodes represented some switch or router --
+either physical or virtual -- the links represented a cable or wire
+-- again either physical or virtual -- and then the behaviour of
+various technologies was simulated on those graphs to develop
+algorithms that analyze some behaviour or optimize one or another _key
+performance indicator_ (KPI). This line of reasoning, starting from
+_networked devices_, is how a lot of research on computer networks is
+conducted. But what happens if we turn this upside down, and develop a
+_universal_ model for computer networks starting from _first
+principles_?
+
+This sums up my problem with computer networks today: not everything
+in their workings can be fully derived from first principles. It also
+sums up why I was attracted to RINA: it was the first time I saw a
+network architecture as the result of a solid attempt to derive
+everything from first principles. And it’s also why Ouroboros is not
+RINA: RINA still contains things that can’t be derived from first
+principles.
+
+### Two types of layers
+
+The Ouroboros model postulates that there are only 2 scalable methods
+of distributing packets in a network layer: _FORWARDING_ packets based
+on some label, or _FLOODING_ packets on all links but the incoming
+link.
+
+We call an element that forwards packets a __forwarding element__,
+implementing a _packet forwarding function_ (PFF). The PFF has as
+input a destination name for another forwarding element (represented
+as a _vertex_), and as output a set of output links (represented
+as _arcs_) on which the incoming packet with that label is to be
+forwarded. The destination name needs to be in a packet header.
+
+We call an element that floods packets a __flooding element__, and it
+implements a packet flooding function. The flooding element is
+completely stateless, and has as input the incoming arc, and as output
+all non-incoming arcs. Packets on a broadcast layer do not need a
+header at all.
+
+Forwarding elements are _equal_ and need to be named; flooding
+elements are _identical_ and do not need to be named[^1].
+
+{{<figure width="40%" src="/docs/concepts/model_elements.png">}}
+
+Peering relationships are only allowed between forwarding elements, or
+between flooding elements, but never between a forwarding element and
+a flooding element. We call a connected graph consisting of nodes that
+hold forwarding elements a __unicast layer__, and similarly we call a
+connected _tree_[^2] consisting of nodes that house a flooding element
+a __broadcast layer__.
+
+The objective is for the Ouroboros model to hold for _all_ packet
+networks; our __conjecture__ is that __all scalable packet-switched
+network technologies can be decomposed into finite sets of unicast and
+broadcast layers__. Implementations of unicast and broadcast layers
+can be easily found in TCP/IP, Recursive InterNetworking Architecture
+(RINA), Delay Tolerant Networks (DTN), Ethernet, VLANs, Loc/Id split
+(LISP),... [^3]. The Ouroboros _model_ by itself is not
+recursive. What is known as _recursive networking_ is a choice to use
+a single standard API for interacting with all the implementations
+of unicast layers and a single standard API for interacting with all
+implementations of broadcast layers[^4].
+
+### The unicast layer
+
+A unicast layer is a collection of interconnected nodes that implement
+forwarding elements. A unicast layer provides a best-effort unicast
+packet service between two endpoints in the layer. We call the
+abstraction of this point-to-point unicast service a __flow__. A flow in
+itself has no guarantees in terms of reliability [^5].
+
+{{<figure width="70%" src="/docs/concepts/unicast_layer.png">}}
+
+A representation of a very simple unicast layer is drawn above, with a
+flow between the _green_ (bottom left) and _red_ (top right)
+forwarding elements.
+
+The forwarding function operates in such a way that, given the label
+of the destination forwarding element (in the case of the figure, a
+_red_ label), the packet will move to the destination forwarding
+element (_red_) in a _deliberate_ manner. The paper has a precise
+mathematical definition, but qualitatively, our definition of
+_FORWARDING_ ensures that the trajectory that packets follow through a
+network layer between source and destination
+
+* doesn't need to use the 'shortest' path
+* can use multiple paths
+* can use different paths for different packets between the same
+ source-destination pair
+* can involve packet duplication
+* will not have non-transient loops[^6][^7]
+
+The first question is: _what information does that forwarding function
+need in order to work?_ Mathematically, the answer is that all
+forwarding elements need to know the values of a valid __distance
+function__[^8] between themselves and the destination forwarding
+element, and between all of their neighbors and the destination
+forwarding element. The PFF can then select a (set of) link(s) to any
+of its neighbors that are closer to the destination forwarding element
+according to the chosen distance function and send the packet on these
+link(s). Thus, while the __forwarding elements need to be _named___,
+the __links between them need to be _measured___. This can be either
+explicit by assigning a certain weight to a link, or implicit and
+inferred from the distance function itself.
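+
+As a rough sketch (hypothetical names, assuming the distance values are
+already known), the selection step of such a PFF could look like:
+
+```
+def next_hops(dst, my_dist, nbr_dist, links):
+    # my_dist:  dict destination -> distance from this element
+    # nbr_dist: dict neighbor -> dict destination -> distance from that neighbor
+    # links:    dict neighbor -> output link
+    # Select links to neighbors that are strictly closer to the destination.
+    return {links[n] for n, d in nbr_dist.items()
+            if d.get(dst, float('inf')) < my_dist.get(dst, float('inf'))}
+```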
+
+The second question is: _how will that forwarding function know this
+distance information_? There are a couple of different possible
+answers, which are all well understood. I'll briefly summarize them
+here.
+
+A first approach is to use a coordinate space for the names of the
+forwarding elements. For instance, if we use the GPS coordinates of
+the machine in which they reside as a name, then we can apply some
+basic geometry to _calculate_ the distances based on this name
+only. This simple GPS example has pitfalls, but it has been proven
+that any connected finite graph has a greedy embedding in the
+hyperbolic plane. The obvious benefit of such so-called _geometric
+routing_ approaches is that they don't require any dissemination of
+information beyond the mathematical function to calculate distances,
+the coordinate (name) and the set of neighboring forwarding
+elements. In such networks, this information is disseminated during
+initial exchanges when a new forwarding element joins a unicast layer
+(see below).
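+
+A toy sketch of such greedy geometric forwarding, using plain Euclidean
+coordinates as names (a real scheme would use a greedy embedding of the
+graph in the hyperbolic plane):
+
+```
+import math
+
+def greedy_next_hop(dst, me, neighbors):
+    # dst, me: (x, y) coordinates used as names; neighbors: dict name -> (x, y)
+    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
+    best = min(neighbors, key=lambda n: dist(neighbors[n], dst), default=None)
+    if best is None or dist(neighbors[best], dst) >= dist(me, dst):
+        return None  # no neighbor is closer; a greedy embedding prevents this
+    return best
+```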
+
+A second approach is to disseminate the values of the distance
+function to all destinations directly, and to constantly update your
+own (shortest) distances from these values received from other
+forwarding elements. This is a very well-known mechanism and is
+implemented by what is known as _distance vector_ protocols. It is
+also well-known that the naive approach of only disseminating the
+distances to neighbors can run into a _count to infinity_ issue when
+links go down. To alleviate this, _path vector_ protocols include a
+full path to every destination (making them a bit less scalable), or
+distance vector protocols are augmented with mechanisms to avoid
+transient loops and the resulting count-to-infinity (e.g. Babel).
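+
+A minimal sketch of the distance-vector update step (a Bellman-Ford
+relaxation), leaving out the loop-avoidance machinery that protocols
+such as Babel add:
+
+```
+def dv_update(my_dist, link_cost, nbr_vector):
+    # my_dist:    dict destination -> current best distance (updated in place)
+    # link_cost:  cost of the link to the advertising neighbor
+    # nbr_vector: dict destination -> distance advertised by that neighbor
+    changed = False
+    for dst, d in nbr_vector.items():
+        if link_cost + d < my_dist.get(dst, float('inf')):
+            my_dist[dst] = link_cost + d
+            changed = True
+    return changed  # if True, re-advertise our own vector
+```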
+
+The third approach is to disseminate the link weights of neighboring
+links. From this information, each forwarding element can build a view
+of the network graph and again calculate the necessary distances that
+the forwarding function needs. This mechanism is implemented in
+so-called _link-state_ protocols.
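+
+A compact sketch of what each forwarding element does with the
+disseminated information: treat the link-state database as a weighted
+adjacency map and compute the distances (Dijkstra):
+
+```
+import heapq
+
+def ls_distances(lsdb, src):
+    # lsdb: dict node -> dict neighbor -> link weight
+    dist = {src: 0}
+    heap = [(0, src)]
+    while heap:
+        d, u = heapq.heappop(heap)
+        if d > dist.get(u, float('inf')):
+            continue  # stale heap entry
+        for v, w in lsdb.get(u, {}).items():
+            if d + w < dist.get(v, float('inf')):
+                dist[v] = d + w
+                heapq.heappush(heap, (d + w, v))
+    return dist  # dict node -> distance from src
+```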
+
+I will also mention MAC learning here. MAC learning is a bit
+different, in that it uses piggybacked information from the actual
+traffic (the source MAC address) and the knowledge that the adjacency
+graph is a _tree_ as input for the forwarding function.
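+
+MAC learning can be sketched just as compactly (illustrative names
+only): remember the port a source was seen on, and flood while the
+destination is still unknown -- the tree guarantees this terminates:
+
+```
+def learn_and_forward(table, src, dst, in_port, ports):
+    # table: dict address -> port, updated from the piggybacked source address
+    table[src] = in_port
+    if dst in table:
+        return {table[dst]}          # destination known: forward
+    return set(ports) - {in_port}    # destination unknown: flood
+```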
+
+There is plenty more to say about this, and I will, but first, I will
+need to introduce some other concepts, most notably the broadcast
+layer.
+
+### The broadcast layer
+
+A broadcast layer is a collection of interconnected nodes that house
+flooding elements. Each node can have either, both or neither of the
+sender and receiver roles. A broadcast layer provides a best-effort
+broadcast packet service from sender nodes to all (receiver) nodes in
+the layer.
+
+{{<figure width="70%" src="/docs/concepts/broadcast_layer.png">}}
+
+Our simple definition of _FLOODING_ -- given a set of adjacent links,
+send packets received on a link in the set on all other links in the
+set -- has a huge implication for the properties of a fundamental
+broadcast layer: the graph is always a _tree_, or packets could travel
+along infinite trajectories with loops [^9].
+
+### Building layers
+
+We now define 2 fundamental operations for constructing packet network
+layers: __enrollment__ and __adjacency management__. These operations
+are very broadly defined, and can be implemented in a myriad of
+ways, through manual configuration or automated protocol interactions.
+They can be skipped (a no-operation, nop) or involve complex operations
+such as authentication. The main
+objective here is just to establish some common terminology for these
+operations.
+
+The first mechanism, enrollment, adds a (forwarding or flooding)
+element to a layer; it prepares a node to act as a functioning element
+of the layer, and establishes its name (in the case of a unicast
+layer). In addition, it may exchange some key parameters (for instance
+a distance function for a unicast layer), can involve authentication,
+and can set roles and permissions. __Bootstrapping__ is a special case of
+enrollment for the _first_ node in a layer. The inverse operation is
+called _unenrollment_.
+
+After enrollment, we may add peering relationships by _creating
+adjacencies_ between forwarding elements in a unicast layer or between
+flooding elements in a broadcast layer. This will establish neighbors
+and, in the case of a unicast layer, may additionally define link
+weights. The inverse operation is called _tearing down adjacencies_
+between elements. Together, these operations will be referred to as
+_adjacency management_.
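+
+Purely to pin down the terminology (hypothetical names, not the
+prototype's API), the two operations could be sketched as:
+
+```
+class Layer:
+    def __init__(self, name, policies):
+        self.name = name              # layer name
+        self.policies = policies      # e.g. the distance function to use
+        self.elements = set()
+        self.adjacencies = set()
+
+def bootstrap(layer_name, policies):
+    # Special case of enrollment: the first element creates the layer.
+    return Layer(layer_name, policies)
+
+def enroll(layer, element_name):
+    # Add an element; real implementations may authenticate and
+    # exchange key parameters here.
+    layer.elements.add(element_name)
+
+def connect(layer, a, b, weight=1):
+    # Adjacency management: create a peering between enrolled elements.
+    if a in layer.elements and b in layer.elements:
+        layer.adjacencies.add((frozenset((a, b)), weight))
+```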
+
+Operations such as merging and splitting layers can be decomposed into
+these two operations. This doesn't mean that merge operations
+shouldn't be researched. On the contrary, optimizing them will be
+instrumental for creating networks on a global scale.
+
+For the broadcast layer, we already have most ingredients in
+place. Now we will focus on the unicast layer.
+
+### Scaling the unicast layer
+
+Let's look at how to scale implementations of the packet forwarding
+function (PFF). On the one hand, in distance vector, path vector and
+link state, the PFF is implemented as a _table_. We call it the packet
+forwarding table (PFT). On the other hand, geometric routing doesn't
+need a table and can implement the PFF as a mathematical equation
+operating on the _forwarding element names_. In this respect,
+geometric routing looks like a magic bullet to routing table
+scalability -- it doesn't need one -- but there are many challenges
+relating to the complexity of calculating greedy embeddings of graphs
+that are not static (a changing network where routers and end-hosts
+enter and leave, and links can fail and return after repair) that
+currently make these solutions impractical at scale. We will focus on
+the solutions that use a PFT.
+
+The way the unicast layer is defined at this point, the PFT scales
+_linearly_ with the number of forwarding elements (n) in the layer:
+its space complexity is O(n)[^10]. The obvious solution to any student
+of computer networks is to use a scheme like IP and Classless
+InterDomain Routing (CIDR), where the host _addresses_ are subnetted,
+allowing for entries in the PFT to be aggregated, drastically reducing
+its space complexity, in theory at least, to O(log(n)). So we should
+not use arbitrary names for the forwarding elements, but give them an
+_address_!
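+
+A sketch of why hierarchical names help, assuming names are bit strings
+and the PFT maps prefixes to output links (longest-prefix match, as IP
+routers do):
+
+```
+def lookup(pft, addr):
+    # pft: dict of bit-string prefixes -> output link; addr: bit string
+    for plen in range(len(addr), -1, -1):
+        link = pft.get(addr[:plen])
+        if link is not None:
+            return link   # most specific matching entry wins
+    return None
+
+# One aggregated entry covers many hosts:
+pft = {'1010': 'link-west', '10101100': 'link-east'}
+print(lookup(pft, '10101100' + '0' * 8))  # link-east (more specific)
+print(lookup(pft, '10100000' + '0' * 8))  # link-west (aggregate)
+```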
+
+Sure, that _is_ the solution, but not so fast! When building a model,
+each element in the model should be well-defined and named at most
+once -- synonyms for human use are allowed and useful, but they are
+conveniences, not part of the functioning of the model. If we
+subdivide the name of the forwarding element into different subnames,
+as is done in hierarchical addressing, we have to ask ourselves what
+element in the model each of those subnames is naming! In the
+geographic routing example above, we dragged the Earth into the model,
+and used GPS coordinates (latitude and longitude) in the name. But
+where do subnets come from, and what _are_ addresses? What do we drag
+into our model, if anything, to create them?
+
+#### A quick recap
+
+{{<figure width="70%" src="/docs/concepts/unicast_layer_bc_pft.png">}}
+
+Let's recap what a simple unicast layer that uses forwarding elements
+with a packet forwarding table looks like in the model. First we have
+the unicast layer itself, consisting of a set of forwarding elements
+with defined adjacencies. Recall that the necessary and sufficient
+condition for the unicast layer to be able to forward packets between
+any (source, sink)-pair is that all forwarding elements can deduce the
+values of a distance function between themselves and the sink, and
+between each of their neighbors and the sink. This means that such a
+unicast layer requires an additional (optional) element that
+distributes this routing information. Let's call it the __Routing
+Element__, and assume that it implements a simple link-state
+routing protocol. The RE is drawn as a turquoise element accompanying
+each forwarding element in the figure above. Now, each routing element
+needs to disseminate information to _all_ other nodes in the layer, in
+other words, it needs to _broadcast_ link state information. The RE is
+inside of a unicast layer, and unicast layers don't do broadcast, so
+the REs will need the help of a broadcast layer. That is what is drawn
+in the figure above. Now, at first this may look weird, but an IP
+network does this too! For instance, the Open Shortest Path First
+(OSPF) protocol uses IP multicast between OSPF routers. The way that
+the IP layer is defined just obfuscates that this OSPF multicast
+network is in fact a disguised broadcast layer. I will refer to my
+[blog post on multicast](/blog/2021/04/02/how-does-ouroboros-do-anycast-and-multicast/)
+if you would like a bit more elaboration on how this maps to the IP world.
+
+#### Subdividing the unicast layer
+
+```
+Vital realizations not only provide unforeseen clarity, they also
+energize us to dig deeper.
+ -- Brian Greene (in "Until the end of time")
+```
+
+Now, it's obvious that a single global layer like this with billions
+of nodes will buckle under its own size; we need to split things up
+into smaller, more manageable groups of nodes.
+
+{{<figure width="70%" src="/docs/concepts/unicast_layer_bc_pft_split.png">}}
+
+This is shown in the figure above, where the unicast layer is split
+into 3 groups of forwarding elements, let's call them __routing
+areas__: a yellow, a turquoise and a blue area, each with its own
+broadcast layer for disseminating the link state information that is
+needed to populate the forwarding tables. These areas can be chosen
+small enough so that the forwarding tables (which still scale linearly
+with respect to the number of participating nodes in the routing area)
+are manageable in size. It can also keep latency in disseminating the
+link-state packets in check, but we will deal with latency later. For
+now, let's still assume latency on the links is zero and bandwidth on
+the links is infinite.
+
+Now, in this state, there can't be any communication between the
+routing areas, so we will need to add a fourth one.
+
+{{<figure width="70%" src="/docs/concepts/unicast_layer_bc_pft_split_broadcast.png">}}
+
+This is shown in the figure above. We have our 3 original routing
+areas, and I numbered some of the nodes in these original routing
+areas. These are the numbers after the dot in the figure: 1, 2, 3, 4 in
+the turquoise routing area, 5, 6, 10 in the yellow routing area, and 1,
+5 in the blue area (I omitted some not to clutter the illustration).
+
+We have also added 4 new forwarding elements, each with its own
+(red) routing element, that have a client-server relationship (rather
+than a peering relationship) with other forwarding elements in the
+layer. These are the numbers before the dot: 1, 2, 2, and 3. This may
+look intuitively obvious, and "1.4" and "3.5" may look like
+"addresses", but let's stress the things that I think are important,
+noting that this is a _model_ and most certainly _not an
+implementation design_.
+
+Every node in the unicast layer above consists of 2 forwarding
+elements in a client-server relationship, but the ones that are not
+drawn all have the same name, are not functionally active, and
+are there in a virtual way to keep the naming in the layer unique.
+
+We did not introduce new elements to the model, but we did add a new
+client-server relationship between forwarding elements.
+
+This client-server relationship gives rise to some new rules for
+naming the forwarding elements.
+
+First, the names of forwarding elements that are within a routing area
+have to be unique within that routing area if they have no client
+forwarding elements within the node.
+
+Second, forwarding elements with client forwarding elements have the
+same name if and only if their clients are within the same routing area.
+
+In the figure, there are peering relationships between unicast nodes
+"1.4" and "2.5" and unicast nodes "2.10" and "3.5", and these four
+nodes disseminate forwarding information using the red broadcast
+layer[^11].
+
+Note that not all forwarding elements need to actively disseminate
+routing information. If the forwarding elements in the turquoise
+routing area were all (logically) directly connected to 1.4, they
+would not need the broadcast layer. This is like IP, which also
+doesn't require end-hosts to run a routing protocol.
+
+#### Structure of a unicast node
+
+The rules for allowed peering relationships relate to the structure of
+the client-server relationship. In its most general form, this
+relationship gives rise to a directed acyclic graph (DAG) between
+forwarding elements that are part of the same unicast node.
+
+{{<figure width="70%" src="/docs/concepts/unicast_layer_dag.png">}}
+
+We call the _rank_ of the forwarding element within the node the
+height at which it resides in this DAG. For instance, the figure above
+shows two unicast nodes with their forwarding elements arranged as DAGs.
+The forwarding elements with a turquoise and purple routing element
+are at rank 0, and the ones with a yellow routing element are at rank
+3.
+
+A forwarding element in one node can have peering relationships only
+with forwarding elements of other nodes that
+
+1) Are at the same rank,
+
+2) Have a different name,
+
+3) Are in the same routing area at that rank,
+
+and only if
+
+1) there is no peering relationship between the same two unicast nodes
+at any forwarding element that is on a path towards the root of the
+DAG,
+
+2) there is no lower-ranked peering relationship possible.
+
+So, in the figure above, there cannot be a peering relationship at
+rank 0, because these elements are in different routing areas
+(turquoise and purple). The lowest peering relationship can be at rank
+1, in the routing area. If, at rank 1, the right node were in a
+different routing area, there could be 2 peering relationships between
+these unicast nodes, for instance at rank 2 in the green routing area,
+and at rank 3 in the yellow routing area (or also at rank 2 in
+the blue routing area).
+
+#### What are addresses?
+
+Let's end this discussion with how all this relates to IP addressing
+and CIDR. Each "IPv4" host has 32 forwarding elements with a straight
+parent-child relationship between them [^12]. The rules above imply
+that there can be only one peering relationship between two nodes. The
+subnet mask actually acts as a sort of short-hand notation, showing
+where the routing elements are in the same routing area: with mask
+255.255.255.0, the peering relationship is at rank 8; IP network
+engineers then state that the nodes are in a /24 network.
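+
+As a trivial worked example (a sketch, not part of any protocol): the
+rank of the peering relationship is simply the number of host bits left
+by the mask:
+
+```
+def peering_rank(mask):
+    # '255.255.255.0' -> 8 host bits: a /24, so the peering sits at rank 8.
+    bits = ''.join(f'{int(octet):08b}' for octet in mask.split('.'))
+    return bits.count('0')   # assumes a contiguous mask
+
+print(peering_rank('255.255.255.0'))  # 8
+```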
+
+Apart from building towards CIDR from the ground up, we have also
+derived _what network addresses really are_: they consist of names of
+forwarding elements in a unicast node and reflect the organisation of
+these forwarding elements in a directed acyclic graph (DAG). Now,
+there is still a (rather amusing) and seemingly never-ending discussion
+in the network community on whether IP addresses should be assigned to
+nodes or interfaces. This discussion is moot: you can write your name
+on your mailbox, but that doesn't make it the name of your mailbox; it is
+_your_ name. It is also a false dichotomy caused by device-oriented
+thinking, looking at a box of electronics with a bunch of holes in
+which to plug some wires, and then thinking that we either have to
+name the box or the holes: the answer is _neither_. Just like a post
+office building doesn't do anything without post office workers (or
+their automated robotic counterparts), a router or switch doesn't do
+anything without forwarding elements. I will come back to this when
+discussing multi-homing.
+
+One additional thing is that in the current IP Internet, the layout of
+the routing areas is predominantly administratively defined and
+structured into so-called Autonomous Systems (ASs) that each receive a
+chunk of the available IP address space, with BGP used to disseminate
+routes between them. The layout and peering relationships between these
+ASs are not optimal for the layout of the Internet. Decoupling
+the network addressing within an AS from the addressing and structure
+of an overlaying unicast layer, and how to disseminate routes in that
+overlay unicast layer is an interesting topic that warrants more
+study[^13].
+
+### Do we really need routing at a global scale?
+
+An interesting question to ask is whether we need to be able to scale
+a layer to the size of the planet, or -- some day -- the solar
+system, or even the universe. IPv6 was the winning technology to deal
+with the anticipated problem of IPv4 address exhaustion. But can we
+build an Internet that doesn't require all possible end users to share
+the same network (layer)?
+
+My answer is not proven and therefore not conclusive, but I think yes,
+any public Internetwork at scale -- where it is possible for any
+end-user to reach any application -- will always need at least one
+(unicast) layer that spans most of the systems on the network and thus
+a global address space. In the current Internet, applications are
+identified by an IP address and (well-defined) port, and the Domain
+Name System (DNS) maps the host name to an IP address (or a set of IP
+addresses). In any general Internetwork, if applications were in
+private networks, we would need a system to find the (private network,
+node name in private network) for some application, and every end-host
+would need to reach that system, which -- unless I am missing
+something big -- means that system will need a global address
+space[^14].
+
+### Multi-homing
+
+
+[Under construction - [this blog post](/blog/2022/12/07/loc/id-split-and-the-ouroboros-network-model/)
+on Loc/ID split might be interesting]
+
+### Dealing with limited link capacity
+
+
+
+[Under construction]
+
+[^1]: In the paper we call these elements _data transfer protocol
+ machines_, but I think this terminology is clearer.
+
+[^2]: A tree is a connected graph with N vertices and N-1 edges.
+
+[^3]: I've already explored how some technologies map to the Ouroboros
+ model in my blog post on
+ [unicast vs multicast](/blog/2021/04/02/how-does-ouroboros-do-anycast-and-multicast/).
+
+[^4]: Of course, once the model is properly understood and a
+ green-field scenario is considered, recursive networking is the
+ obvious choice, and so the Ouroboros prototype _is_ a recursive
+ network.
+
+[^5]: This is where Ouroboros is similar to IP, and differs from RINA.
+ RINA layers (DIFs) aim to provide reliability as part of the
+ service (flow). We found this approach in RINA to be severely
+ flawed, preventing RINA from being a _universal_ model for all
+ networking and IPC. RINA can be modeled as an Ouroboros network,
+ but Ouroboros cannot be modeled as a RINA network. I've written
+ about this in more detail in my blog post on
+ [Ouroboros vs RINA](/blog/2021/03/20/how-does-ouroboros-relate-to-rina-the-recursive-internetwork-architecture/).
+
+[^6]: Transient loops are loops that occur due to forwarding functions
+ momentarily having different views of the network graph, for
+ instance due to delays in disseminating information on
+ unavailable links.
+
+[^7]: Some may think that it's possible to build a network layer that
+ forwards packets in a way that _deliberately_ takes a couple of
+ loops between a set of nodes and then continues forwarding to
+ the destination, violating the definition of _FORWARDING_. It's
+ not possible, because based on the destination address alone,
+ there is no way to know whether that packet came from the loop
+ or not. _"But if I add a token/identifier/cookie to the packet
+ header"_ -- yes, that is possible, and it may _look like that
+ packet is traversing a loop_ in the network, but it doesn't
+ violate the definition. The question is: what is that
+ token/identifier/cookie naming? It can be only one of a couple
+ of things: a forwarding element, a link or the complete
+ layer. Adding a token and the associated logic to process it,
+ will be equivalent to adding nodes to the layer (modifying the
+ node name space to include that token) or adding another
+ layer. In essence, the implementation of the nodes on the loop
+ will be doing something like this:
+
+ ```
+ if token == X:
+     # behave like node (token, X)
+ elif token == Y:
+     # behave like node (token, Y)
+ else:
+     # and so on
+ ```
+
+ When taking the transformation into account the resulting
+ layer(s) will follow the fundamental model as it is presented
+ above. Also observe that adding such tokens may drastically
+ increase the address space in the Ouroboros representation.
+
+[^8]: For the mathematically inclined, the exact formulation is in the
+ [paper](https://arxiv.org/pdf/2001.09707.pdf) section 2.4
+
+[^9]: Is it possible to broadcast on a non-tree graph by pruning in
+ some way, shape or form? There are some things to
+ consider. First, if the pruning is done to eliminate links in
+ the graph, let's say in a way that STP prunes links on an
+ Ethernet or VLAN, then this is operation is equivalent creating
+ a new broadcast layer. We call this enrollment and adjacency
+ management. This will be explained in the next sections. Second
+ is trying to get around loops by adding the name of the (source)
+ node plus a token/identifier/cookie as a packet header in order
+ to detect packets that have traveled in a loop, and dropping
+ them when they do. This kind of network fits neither the
+ broadcast layer nor the unicast layer. But the thing is: it also
+ _doesn't scale_, as all packets need to be tracked, at least in
+ theory, forever. Assuming packet ordering is preserved inside a
+ layer is a big no-no. Another line of thinking may be to add a
+ decreasing counter to avoid loops, but it goes down a similar
+ rabbit hole. How large to set the counter? This also doesn't
+ scale. Such things may work for some use cases, but they
+ don't work _in general_.
+
+[^10]: In addition to the size of the packet forwarding tables, link
+ state, path vector and distance vector protocols are also
+ limited in size because of time delays in disseminating link
+ state information between the nodes, and the amount to be
+ disseminated. We will address this a bit later in the discourse.
+
+[^11]: The functionality of this red routing element is often
+ implemented by an unfortunate human engineer who has to subject
+ himself to one of the most inhuman ordeals imaginable: manually
+ calculating and typing IP destinations and netmasks into the
+ routing tables of a wonky piece of hardware using the most
+ ill-designed command line interface seen this side of 1974.
+
+[^12]: Drawing this in a full network example is way beyond my artistic
+ skill.
+
+[^13]: There is a serious error in the paper that states that this
+ routing information can be marked with a single bit. This is
+ only true in the limited case that there is only one "gateway"
+ node in the routing area. In the general case, path information
+ will be needed to determine which gateway to use.
+
+[^14]: A [paper on RINA](http://rina.tssg.org/docs/CAMAD-final.pdf)
+ that claims that a global address space is not needed, seems to
+ prove the exact opposite of that claim. The resolution system,
+ called the Inter-DIF Directory (IDD) is present on every system
+ that can make use of it and uses internal forwarding rules based
+ on the lookup name (in a hierarchical namespace!) to route
+ requests between its peer nodes. If that is not a global address
+ space, then I am Mickey Mouse: the addresses inside the IDD are
+ just based on strings instead of numbers. The IDD houses a
+ unicast layer with a global address space. While the IDD is
+ technically not a DIF, the DIF-DAF distinction is [severely
+ flawed](/blog/2021/03/20/how-does-ouroboros-relate-to-rina-the-recursive-internetwork-architecture/#ouroboros-diverges-from-rina).
diff --git a/content/en/docs/Concepts/problem_osi.md b/content/en/docs/Concepts/problem_osi.md
index 845de5e..66b0ad4 100644
--- a/content/en/docs/Concepts/problem_osi.md
+++ b/content/en/docs/Concepts/problem_osi.md
@@ -2,22 +2,45 @@
title: "The problem with the current layered model of the Internet"
author: "Dimitri Staessens"
-date: 2019-07-06
+date: 2020-04-06
weight: 1
description: >
- The current networking paradigm
+
---
+```
+The conventional view serves to protect us from the painful job of
+thinking.
+ -- John Kenneth Galbraith
+```
+
+Every engineering class that deals with networks explains the
+[7-layer OSI model](https://www.bmc.com/blogs/osi-model-7-layers/)
+and the
+[5-layer TCP/IP model](https://subscription.packtpub.com/book/cloud_and_networking/9781789349863/1/ch01lvl1sec13/tcp-ip-layer-model).
+
+Both models have common origins in the International Networking
+Working Group (INWG), and therefore share many similarities. The
+TCP/IP model evolved from the implementation of the early ARPANET in
+the '70s and '80s. The Open Systems Interconnect (OSI) model was the
+result of a standardization effort in the International Standards
+Organization (ISO), which ran well into the nineties. The OSI model
+had a number of useful abstractions: services, interfaces and
+protocols, where the TCP/IP model was more tightly coupled to the
+Internet Protocol (IP).
+
+### A bird's-eye view of the OSI model
+
{{<figure width="40%" src="/docs/concepts/aschenbrenner.png">}}
-Every computer science class that deals with networks explains the
-[7-layer OSI model](https://www.bmc.com/blogs/osi-model-7-layers/).
Open Systems Interconnect (OSI) defines 7 layers, each providing an
-abstraction for a certain *function* that a network application may
-need.
+abstraction for a certain *function*, or *service*, that a networked
+application may need. The figure above shows probably
+[the first draft](https://tnc15.wordpress.com/2015/06/17/locked-in-tour-europe/)
+of the OSI model.
From top to bottom, the layers provide (roughly) the following
-functions:
+services:
The __application layer__ implements the details of the application
protocol (such as HTTP), which specifies the operations and data that
@@ -46,35 +69,112 @@ Finally, the __physical layer__ is responsible for translating the
bits into a signal (e.g. laser pulses in a fibre) that is carried
between endpoints.
+The benefit of the OSI model is that each of these layers has a
+_service description_, and an _interface_ to access this service. The
+details of the protocols inside the layer were of less importance, as
+long as they got the job -- defined by the service description --
+done.
+
This functional layering provides a logical order for the steps that
data passes through between applications. Indeed, existing (packet)
-networks go through these steps in roughly this order (however, some
-may be skipped).
-
-However, when looking at current networking solutions in more depth,
-things are not as simple as these 7 layers seem to indicate. Consider
-a realistic scenario for a software developer working
-remotely. Usually it goes something like this: he connects over the
-Internet to the company __Virtual Private Network__ (VPN) and then
-establishes an SSH __tunnel__ over the development server to a virtual
-machine and then establishes another SSH connection into that virtual
-machine.
-
-We are all familiar enough with this kind of technologies to take them
-for granted. But what is really happnening here? Let's assume that the
-Internet layers between the home of the developer and his office
-aren't too complicated. The home network is IP over Wi-Fi, the office
-network IP over Ethernet, and the telecom operater has a simple IP
-over xDSL copper network (because in reality operator networks are
-nothing like L3 over L2 over L1). Now, the VPN, such as openVPN,
-creates a new network on top of IP, for instance a layer 2 network
-over TAP interfaces supported by a TLS connection to the VPN server.
-
-Technologies such as VPNs, tunnels and some others (VLANs,
-Multi-Protocol Label switching) seriously jumble around the layers in
-this layered model. Now, by my book these counter-examples prove that
-the 7-layered model is, to put it bluntly, wrong. That doesn't mean
-it's useless, but from a purely scientific view, there has to be a
-better model, one that actually fits implementations.
-
-Ouroboros is our answer towards a more complete model for computer networks. \ No newline at end of file
+networks go through these steps in roughly this order.
+
+### A bird's-eye view of the TCP/IP model
+
+{{<figure width="25%" src="https://static.packt-cdn.com/products/9781789349863/graphics/6c40b664-c424-40e1-9c65-e43ebf17fbb4.png">}}
+
+The TCP/IP model came directly from the implementation of TCP/IP, so
+instead of each layer corresponding to a service, each layer directly
+corresponded to a (set of) protocol(s). IP was the unifying protocol,
+not caring what was below at layer 1. The HOST-HOST protocols offered
+a connection-oriented service (TCP) or a connectionless service (UDP)
+to the application. The _TCP/IP model_ was retroactively made more
+"OSI-like", turning into the 5-layer model, which views the top 3
+layers of OSI as an "application layer".
+
+### Some issues with these models
+
+When looking at current networking solutions in more depth,
+things are not as simple as these 5/7 layers seem to indicate.
+
+#### The order of the layers is not fixed.
+
+Consider, for instance, __Virtual Private Network__ (VPN) technologies
+and SSH __tunnels__. We are all familiar enough with these kinds of
+technologies to take them for granted. But a VPN, such as openVPN,
+creates a new network on top of IP. In _bridging_ mode this is a Layer
+2 (Ethernet) network over TAP interfaces, in _routing_ mode this is a
+Layer 3 (IP) network over TUN interfaces. In both cases they are
+supported by a Layer 4 connection (using, for instance, Transport Layer
+Security) to the VPN server that provides the network
+access. Technologies such as VPNs and various so-called _tunnels_
+seriously jumble around the layers in this layered model.
+
+#### How many layers are there exactly?
+
+Multi-Protocol Label Switching (MPLS), a technology that allows
+operators to establish and manage circuit-like paths in IP networks,
+typically sits in between Layer 2 and IP and is categorized as a
+_Layer 2.5_ technology. So are there 8 layers? Why not revise the
+model and number them 1-8 then?
+
+QUIC is a protocol that performs transport-layer functions such as
+retransmission, flow control and congestion control, but works around
+the initial performance bottleneck after starting a TCP connection
+(3-way handshake, slow start) and adds some other optimizations dealing
+with re-establishing connections for which security keys are known. But
+QUIC runs on top of UDP. If UDP is Layer 4, then what layer is QUIC?
+
+One could argue that UDP is an incomplete Layer 4 protocol and QUIC
+adds its missing Layer 4 functionalities. Fair enough, but then what
+is the minimum functionality for a complete Layer 4 protocol? And what
+is a minimum functionality for a Layer 3 protocol? What have IP, ICMP
+and IGMP in common that makes them Layer 3 beyond the arbitrary
+concensus that they should be available on a brick of future e-waste
+that is sold as a "router"?
+
+#### Which protocol fits in which layer is not clear-cut.
+
+There is a whole slew of protocols that are situated in Layer 3:
+ICMP, IGMP, OSPF... They don't really need the features that Layer 4
+provides (retransmission, ...). But again, they run on _top of Layer
+3_ (IP). They get assigned a protocol number in the IP header, instead
+of a port number in the UDP header. But doesn't a Layer 3 protocol
+number indicate a Layer 4 protocol? Apparently only in some cases, but
+not in others.
+
+The Border Gateway Protocol (BGP) performs (inter-domain)
+routing. Routing is a function that is usually associated with Layer
+3. But BGP runs on top of TCP, which is Layer 4, so is it in the
+application layer? There is no real consensus on what layer BGP is in:
+some say Layer 3, some (probably most) say Layer 4, because it is
+using TCP, and some say it's in the application layer. But the consensus
+does seem to be that the BGP conundrum doesn't matter. BGP works, and the
+OSI and TCP/IP models are _just theoretical models_, not _rules_ that
+are set in stone.
+
+### Are these issues _really_ a problem?
+
+Well, in my opinion: yes! These models are pure [rubber
+science](https://en.wikipedia.org/wiki/Rubber_science). They have no
+predictive value, don't fit with observations of the real-world
+Internet most of us use every day, and are about as arbitrary as a
+seven-course tasting menu of home-grown vegetables. Their only uses
+are as technobabble for network engineers and as tools for university
+professors to gauge their students' ability to retain a moderate
+amount of stratified drivel.
+
+If there is no universally valid theoretical model, if we have no
+clear definitions of the fundamental concepts and no clearly defined
+set of rules that unequivocally lay out what the _necessary and
+sufficient conditions for networking_ are, then we are all just
+_engineering in the dark_, progress in developing computer networks
+condemned to a Sisyphean effort of perpetual incremental fixes, its
+fate to remain a craft that builds on tradition, cobbling together an
+ever-growing tangle of technologies and protocols that stretch the
+limits of manageability.
+
+Not yet convinced? Read an even more in-depth explanation on our
+[blog](/blog/2022/02/12/what-is-wrong-with-the-architecture-of-the-internet/),
+about the separation of concerns design principle and layer violations,
+and about the separation of mechanism & policy and ossification.
diff --git a/content/en/docs/Concepts/rec_netw.jpg b/content/en/docs/Concepts/rec_netw.jpg
deleted file mode 100644
index bddaca5..0000000
--- a/content/en/docs/Concepts/rec_netw.jpg
+++ /dev/null
Binary files differ
diff --git a/content/en/docs/Concepts/unicast_layer.png b/content/en/docs/Concepts/unicast_layer.png
new file mode 100644
index 0000000..c77ce48
--- /dev/null
+++ b/content/en/docs/Concepts/unicast_layer.png
Binary files differ
diff --git a/content/en/docs/Concepts/unicast_layer_bc_pft.png b/content/en/docs/Concepts/unicast_layer_bc_pft.png
new file mode 100644
index 0000000..77860ce
--- /dev/null
+++ b/content/en/docs/Concepts/unicast_layer_bc_pft.png
Binary files differ
diff --git a/content/en/docs/Concepts/unicast_layer_bc_pft_split.png b/content/en/docs/Concepts/unicast_layer_bc_pft_split.png
new file mode 100644
index 0000000..9a4f9fb
--- /dev/null
+++ b/content/en/docs/Concepts/unicast_layer_bc_pft_split.png
Binary files differ
diff --git a/content/en/docs/Concepts/unicast_layer_bc_pft_split_broadcast.png b/content/en/docs/Concepts/unicast_layer_bc_pft_split_broadcast.png
new file mode 100644
index 0000000..fa66864
--- /dev/null
+++ b/content/en/docs/Concepts/unicast_layer_bc_pft_split_broadcast.png
Binary files differ
diff --git a/content/en/docs/Concepts/unicast_layer_dag.png b/content/en/docs/Concepts/unicast_layer_dag.png
new file mode 100644
index 0000000..010ad4f
--- /dev/null
+++ b/content/en/docs/Concepts/unicast_layer_dag.png
Binary files differ
diff --git a/content/en/docs/Concepts/what.md b/content/en/docs/Concepts/what.md
deleted file mode 100644
index ac87754..0000000
--- a/content/en/docs/Concepts/what.md
+++ /dev/null
@@ -1,78 +0,0 @@
----
-title: "Recursive networks"
-author: "Dimitri Staessens"
-
-date: 2020-01-11
-weight: 2
-description: >
- The recursive network paradigm
----
-
-The functional repetition in the network stack is discussed in
-detail in the book __*"Patterns in Network Architecture: A Return to
-Fundamentals"*__. From the observations in the book, a new architecture
-was proposed, called the "__R__ecursive __I__nter__N__etwork
-__A__rchitecture", or [__RINA__](http://www.pouzinsociety.org).
-
-__Ouroboros__ follows the recursive principles of RINA, but deviates
-quite a bit from its internal design. There are resources on the
-Internet explaining RINA, but here we will focus
-on its high level design and what is relevant for Ouroboros.
-
-Let's look at a simple scenario of an employee contacting an internet
-corporate server over a Layer 3 VPN from home. Let's assume for
-simplicity that the corporate LAN is not behind a NAT firewall. All
-three networks perform (among some other things):
-
-__Addressing__: The VPN hosts receive an IP address in the VPN, let's
-say some 10.11.12.0/24 address. The host will also have a public IP
-address, for instance in the 20.128.0.0/16 range . Finally that host
-will have an Ethernet MAC address. Now the addresses __differ in
-syntax and semantics__, but for the purpose of moving data packets,
-they have the same function: __identifying a node in a network__.
-
-__Forwarding__: Forwarding is the process of moving packets to a
-destination __with intent__: each forwarding action moves the data
-packet __closer__ to its destination node with respect to some
-__metric__ (distance function).
-
-__Network discovery__: Ethernet switches learn where the endpoints are
-through MAC learning, remembering the incoming interface when it sees
-a new soure address; IP routers learn the network by exchanging
-informational packets about adjacency in a process called *routing*;
-and a VPN proxy server relays packets as the central hub of a network
-connected as a star between the VPN clients and the local area
-network (LAN) that is provides access to.
-
-__Congestion management__: When there is a prolonged period where a
-node receives more traffic than can forward forward, for instance
-because there are incoming links with higher speeds than some outgoing
-link, or there is a lot of traffic between different endpoints towards
-the same destination, the endpoints experience congestion. Each
-network could handle this situation (but not all do: TCP does
-congestion control for IP networks, but Ethernet just drops traffic
-and lets the IP network deal with it. Congestion management for
-Ethernet never really took off).
-
-__Name resolution__: In order not having to remember addresses of the
-hosts (which are in a format that make it easier for a machine to deal
-with), each network keeps a mapping of a name to an address. For IP
-networks (which includes the VPN in our example), this is done by the
-Domain Name System (DNS) service (or, alternatively, other services
-such as *open root* or *namecoin*). For Ethernet, the Address
-Resolution Protocol maps a higher layer name to a MAC (hardware)
-address.
-
-{{<figure width="50%" src="/docs/concepts/layers.jpg">}}
-
-Recursive networks take all these functions to be part of a network
-layer, and layers are mostly defined by their __scope__. The lowest
-layers span a link or the reach of some wireless technology. Higher
-layers span a LAN or the network of a corporation e.g. a subnetwork or
-an Autonomous System (AS). An even higher layer would be a global
-network, followed by a Virtual Private Network and on top a tunnel
-that supports the application. Each layer being the same in terms of
-functionality, but different in its choice of algorithm or
-implementation. Sometimes the function is just not implemented
-(there's no need for routing in a tunnel!), but logically it could be
-there.
diff --git a/content/en/docs/Contributions/_index.md b/content/en/docs/Contributions/_index.md
index b5ffa5f..558298e 100644
--- a/content/en/docs/Contributions/_index.md
+++ b/content/en/docs/Contributions/_index.md
@@ -7,14 +7,23 @@ description: >
How to contribute to Ouroboros.
---
+### Ongoing work
+
+Ouroboros is far from complete. Plenty of things need to be researched
+and implemented. We don't really keep a list, but this
+[epic board](https://tree.taiga.io/project/dstaesse-ouroboros/epics) can
+give you some ideas of what is still on our mind and where you may be
+able to contribute.
+
### Communication
There are 2 ways that will be used to communicate: The mailing list
(ouroboros@freelists.org) will be used for almost everything except
-for day-to-day chat. For that we use the
-[slack](https://odecentralize.slack.com) (invite link in footer) and
-the #ouroboros channel on Freenode (IRC chat). The slack channel is a
-bit more active, and preferred. Use whatever login name you desire.
+for day-to-day chat. For that we use a public
+[slack](https://odecentralize.slack.com) channel (invite link in footer)
+bridged to a
+[matrix space](https://matrix.to/#/#ODecentralize:matrix.org).
+Use whatever login name you desire.
Introduce yourself, use common sense and be polite!
@@ -22,7 +31,7 @@ Introduce yourself, use common sense and be polite!
The coding guidelines of the main Ouroboros stack are similar as those
of the Linux kernel
-(https://www.kernel.org/doc/Documentation/CodingStyle) with the
+(https://www.kernel.org/doc/html/latest/process/coding-style.html) with the
following exceptions:
- Soft tabs are to be used instead of hard tabs
@@ -96,8 +105,8 @@ real e-mail address.
#### Commit messages
-A commit message should follow these 10 simple rules (adjusted from
-http://chris.beams.io/posts/git-commit/):
+A commit message should follow these 10 simple rules, based on
+http://chris.beams.io/posts/git-commit/:
1. Separate subject from body with a blank line
2. Limit the subject line to 50 characters
diff --git a/content/en/docs/Extra/ioq3.md b/content/en/docs/Extra/ioq3.md
index db38d83..05a4626 100644
--- a/content/en/docs/Extra/ioq3.md
+++ b/content/en/docs/Extra/ioq3.md
@@ -41,8 +41,9 @@ With Ouroboros installed, build the ioq3 project in standalone mode:
$ STANDALONE=1 make
```
-You may need to install some dependencies like SDL2, see the [ioq3
-documentation](http://wiki.ioquake3.org/Building_ioquake3).
+You may need to install some dependencies like
+[SDL2](https://wiki.libsdl.org/SDL2/Installation); see the
+[ioq3 documentation](https://ioquake3.org/help/building-ioquake3/building-ioquake3-on-linux/).
The ioq3 project only supplies the game engine. To play Quake III Arena,
you need the original game files and a valid key. Various open source
@@ -66,7 +67,7 @@ $ unzip -j openarena-0.8.8.zip 'openarena-0.8.8/baseoa/*' -d ./baseoa
```
Make sure you have a local Ouroboros layer running in your system (see
-[this tutorial](/tutorial-1/)).
+[this tutorial](/docs/tutorials/tutorial-1/)).
To test the game, start a server (replace <arch> with the correct
architecture extension for your machine, eg x86_64):
diff --git a/content/en/docs/Extra/rumba.md b/content/en/docs/Extra/rumba.md
deleted file mode 100644
index 5023f8e..0000000
--- a/content/en/docs/Extra/rumba.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: "Rumba"
-author: "Dimitri Staessens"
-date: 2019-10-06
-draft: false
-description: >
- Small orchestration framework for deploying recursive networks.
----
-
-Rumba is an __experimentation framework__ for deploying recursive
-network experiments in various network testbeds. It was developed as
-part of the [ARCFIRE](http://ict-arcfire.eu) project, and available on
-[gitlab](https://gitlab.com/arcfire/rumba) .
diff --git a/content/en/docs/Intro/_index.md b/content/en/docs/Intro/_index.md
deleted file mode 100644
index 7ca8160..0000000
--- a/content/en/docs/Intro/_index.md
+++ /dev/null
@@ -1,67 +0,0 @@
----
-title: "Welcome to Ouroboros"
-linkTitle: "Introduction"
-author: "Dimitri Staessens"
-date: 2019-12-30
-weight: 5
-description: >
- Introduction.
----
-
-```
-Simplicity is a great virtue but it requires hard work to achieve it and
-education to appreciate it.
-And to make matters worse: complexity sells better.
- -- Edsger Dijkstra
-```
-
-This is the portal for the ouroboros networking prototype. Ouroboros
-aims to make packet networks simpler, and as a result, more reliable,
-secure and private. How? By introducing strong, well-defined
-abstractions and hiding internal complexity. A bit like modern
-programming languages abstract away details such as pointers.
-
-The main driver behind the ouroboros prototype is a good ol' personal
-itch. I've started my academic research career on optical networking,
-and moved up the stack towards software defined networks, learning the
-fine details of Ethernet, IP, TCP and what not. But when I came into
-contact with John Day and his Recursive InterNetwork Architecture
-(RINA), it really struck home how unnecessarily complicated today's
-networks are. The core abstractions that RINA moved towards simplify
-things a lot. I was fortunate to have a PhD student that understood
-the implications of these abstractions, and together we just went on
-and dug deeper into the question of how we could make everything as
-simple as possible. When something didn't fall into place or felt
-awkward, we traced back to why it didn't fit, instead of ploughing forward
-and making it fit. Ouroboros is the current state of affairs in this
-quest.
-
-We often get the question "How is this better than IP"? To which the
-only sensible answer that we can give right now is that ouroboros is
-way more elegant. It has far fewer abstractions and every concept is
-well-defined. It's funny (or maybe not) how many times when we start
-explaining Ouroboros to someone, people immediately interrupt and
-start explaining how they can do this or that with IP. We know,
-they're right, but it's also completely beside our point.
-
-But, if you're open to the idea that the TCP/IP network stack is a
-huge gummed-up mess that's in need of some serious redesign, do read
-on. If you are interested in computer networks in general, if you are
-eager to learn something new and exciting without the need to deploy
-it tomorrow, and if you are willing to put in the time and effort to
-understand how all of this works, by all means: ask away!
-
-We're very open to constructive suggestions on how to further improve
-the prototype and the documentation, in particular this website. We
-know it's hard to understand in places. No matter how simple we made
-the architecture, it's still a lot to explain, and writing efficient
-documentation is a tough trade. So don't hesitate to contact us with
-any questions you may have.
-
-Above all, stay curious!
-
-```
-... for the challenge of simplification is so fascinating that, if
-we do our job properly, we shall have the greatest fun in the world.
- -- Edsger Dijkstra
-``` \ No newline at end of file
diff --git a/content/en/docs/Overview/_index.md b/content/en/docs/Overview/_index.md
index 9fd9970..06f5400 100644
--- a/content/en/docs/Overview/_index.md
+++ b/content/en/docs/Overview/_index.md
@@ -9,62 +9,93 @@ description: >
Ouroboros is a prototype **distributed system** for packetized network
communications. It is a redesign _ab initio_ of the current packet
-networking model -- from the programming API ("Layer 7") almost to the
-_wire_ ("Layer 1") -- without compromises. This means it's not
-directly compatible with anything currently available. It can't simply
-be "plugged into" the current network stack. Instead it has some
-interfaces into inter-operate with common technologies: run Ouroboros
-over Ethernet or UDP, or create tunnels over Ouroboros using tap or
-tun devices.
-
-From an application perspective, Ouroboros network operates as a "black
-box" with a
-[very simple interface](https://ouroboros.rocks/man/man3/flow_alloc.3.html).
-Either it provides a _flow_, a bidirectional channel that delivers data
-within some requested operational parameters such as delay and
+networking model -- from the programming API almost to the wire --
+without compromises. While the prototype is not directly compatible with
+IP or sockets, it has some interfaces that let it interoperate with common
+technologies: we run Ouroboros over Ethernet or UDP, or create
+IP/Ethernet tunnels over Ouroboros by exposing tap or tun devices.
+
+From an application perspective, an Ouroboros network is a "black box"
+with a
+[simple interface](https://ouroboros.rocks/man/man3/flow_alloc.3.html).
+Either Ouroboros provides a _flow_, a bidirectional channel that delivers
+data within some requested operational parameters such as delay,
bandwidth and reliability and security; or it provides a broadcast
-channel.
+channel to a set of joined programs.
From an administrative perspective, an Ouroboros network is a bunch of
_daemons_ that can be thought of as **software routers** (unicast) or
**software _hubs_** (broadcast) that can be connected to each other;
again through
[a simple API](https://ouroboros.rocks/man/man8/ouroboros.8.html).
-Each daemon has an address, and they forward packets among each other.
-The daemons also implement their own internal name-to-address resolution.
-Some of the main _features_ are:
+Some of the main characteristics are:
+
+* Ouroboros is <b>minimalistic</b>: it has only the essential protocol
+  fields. It will also try to use the lowest possible network layer
+  (i.e. on a single machine, Ouroboros communicates directly over
+  shared memory; over a LAN, over Ethernet; over IP, over UDP), in a
+  way that is completely transparent to the application.
+
+* Ouroboros enforces the _end-to-end_ principle. Packet headers are
+ <b>immutable</b> between the state machines that operate on their
+ state. Only two protocol fields change on a hop-by-hop (as viewed
+ within a network layer) basis: [TTL and
+ ECN](/docs/concepts/protocols/). This immutability can be enforced
+ through authentication (not yet implemented).
-* Ouroboros is minimal: it only sends what it needs to send to operate.
+* Ouroboros has _external_ and _dynamic_ server application
+  binding. Socket applications leave it to the application developer
+  to manage binding from within the program (typically a bind() call
+  to either a specific IP address or to all addresses (0.0.0.0)),
+  leaving all configuration application- (or library-) specific. When
+  shopping for network libraries, typical questions are "Can it bind
+  to multiple IP addresses for high availability?" and "Can I run
+  multiple servers in parallel on the same port for scaling?".
+  Ouroboros makes all this management external to the program: server
+  applications only need to call flow_accept(). The _bind()_ primitive
+  allows a program (or running process) to be bound from the command
+  line to a certain (set of) service names, and when a flow request
+  arrives for that service, Ouroboros acts as a broker that hands off
+  the flow to any program that is bound to that service. Binding is
+ N-to-M: multiple programs can be bound to the same service name, and
+ programs can be bound to multiple names. This binding is also
+ _dynamic_: it can be done while the program is running, and will not
+ disrupt existing flows. In addition, the _register()_ primitive
+ allows external and dynamic control over which network a service
+ name is available over. Again, while the service is running, and
+  without disrupting existing flows (a short command-line sketch of
+  binding and registering follows this list).
-* Ouroboros adheres to the _end-to-end_ principle. Packet headers are
- immutable between the program components (state machines) that
- operate on their state. Only two protocol fields change on a
- hop-by-hop (as viewed within a network layer) basis:
- [TTL and ECN](/docs/concepts/protocols/).
+* The Ouroboros end-to-end protocol performs flow control, error
+ control and reliable transfer and is implemented as part of the
+ _application library_. This includes sequence numbering, ordering,
+ sending and handling acknowledgments, managing flow control windows,
+ ...
* Ouroboros can establish an encrypted flow in a _single RTT_ (not
including name-to-address resolution). The flow allocation API is a
2-way handshake (request-response) that agrees on endpoint IDs and
- performs an ECDHE key exchange. The end-to-end protocol
+ performs an ECDHE key exchange. The end-to-end protocol is based on
+ Delta-t and
[doesn't need a handshake](/docs/concepts/protocols/#operation-of-frcp).
-* The Ouroboros end-to-end protocol performs flow control, error
- control and reliable transfer and is implemented as part of the
- _application library_. Sequence numbers, acknowledgments, flow
- control windows... The last thing the application does (or should
- do) is encrypt everything before it hands it to the network layer
- for delivery. With this functionality in the library, it's easy to
- force encryption on _every_ flow that is created from your machine
- over Ouroboros regardless of what the application programmer has
- requested. Unlike TLS, the end-to-end header (sequence number etc)
- is fully encrypted.
+* Ouroboros allows encrypting everything before handing it to the next
+ layer for delivery. With this functionality in the library, it's
+ easy to force encryption on _every_ flow that is created from your
+ machine over Ouroboros regardless of what the application programmer
+ has implemented. Unlike TLS, the end-to-end header (sequence number
+ etc) can be fully encrypted.
+
+* Ouroboros congestion control operates at the network level. It does
+ not (_can not!_) rely on acknowledgements. This means all network
+ flows are automatically congestion controlled.
* The flow allocation API works as an interface to the network. An
Ouroboros network layer is therefore "aware" of all traffic that it
- is offered. This allows the layer to shape and police traffic, but
- only based on quantity and QoS, not on the contents of the packets,
- to ensure _net neutrality_.
+  is offered. This allows the layer to shape and police traffic, but
+  only based on quantity and QoS, not on the contents of
+ the packets, to ensure _net neutrality_.
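+
+As a rough illustration of the external binding and registration
+described above, these are the same abbreviated irm commands that the
+0.20 release notes use to register a service name and bind the oping
+server to it (they assume a layer called udp is already up and running,
+as in that example):
+
+```bash
+# Register a service name with a layer, then bind the oping binary to
+# that name; "auto" lets the IRMd start the program on demand.
+irm n r ouroboros.rocks.oping l udp
+irm b prog oping n ouroboros.rocks.oping auto -- -l
+```
+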
For a lot more depth, our article on the design of Ouroboros is
accessible on [arXiv](https://arxiv.org/pdf/2001.09707.pdf).
@@ -72,9 +103,16 @@ accessible on [arXiv](https://arxiv.org/pdf/2001.09707.pdf).
The best place to start understanding a bit what Ouroboros aims to do
and how it differs from other packet networks is to first watch this
presentation at [FOSDEM
-2018](https://archive.fosdem.org/2018/schedule/event/ipc/) (it's over
-two years old, so not entirely up-to-date anymore), and have a quick
-read of the [flow allocation](/docs/concepts/fa/) and [data
-path](/docs/concepts/datapath/) sections.
+2018](https://archive.fosdem.org/2018/schedule/event/ipc/) but note
+that this presentation is over three years old, and very outdated in
+terms of what has been implemented. The prototype implementation is
+now capable of asynchronous flow handling, retransmission, flow
+control, congestion control...
+
+Next, have a quick read of the
+[flow allocation](/docs/concepts/fa/)
+and
+[data path](/docs/concepts/datapath/)
+sections.
{{< youtube 6fH23l45984 >}}
diff --git a/content/en/docs/Releases/0_18.md b/content/en/docs/Releases/0_18.md
new file mode 100644
index 0000000..c489d33
--- /dev/null
+++ b/content/en/docs/Releases/0_18.md
@@ -0,0 +1,109 @@
+---
+date: 2021-02-12
+title: "Ouroboros 0.18"
+linkTitle: "Ouroboros 0.18"
+description: "Major additions and changes in 0.18.0"
+author: Dimitri Staessens
+---
+
+With version 0.18 come a number of interesting updates to the prototype.
+
+### Automatic Repeat Request (ARQ) and flow control
+
+We finished the implementation of the base retransmission
+logic. Ouroboros will now send, receive and handle acknowledgments
+under packet loss conditions. It will also send and handle window
+updates for flow control. The operation of flow control is very
+similar to the operation of window-based flow control in TCP, the main
+difference being that our sequence numbers are per-packet instead of
+per-byte.
+
+The previous version of FRCP had some partial implementation of the
+ARQ functionality, such as piggybacking ACK information on _writes_
+and handling sequence numbers on _reads_. But now, Ouroboros will also
+send (delayed) ACK packets without data if the application is not
+sending, and it will finish sending any unacknowledged data when a flow
+is closed (this can be turned off with the FRCTFLINGER flag).
+
+Recall that Ouroboros has this logic implemented in the application
+library; it is not a separate component (or kernel) that manages
+transmit and receive buffers and retransmission. Furthermore, our
+implementation doesn't add a thread to the application. If a
+single-threaded application uses ARQ, it will remain single-threaded.
+
+It's not unlikely that in the future we will add the option for the
+library to start a dedicated thread to manage ARQ as this may have
+some beneficial characteristics for read/write call durations. Other
+future additions may include fast-retransmit and selective ACK
+support.
+
+The most important characteristic of Ouroboros FRCP compared to TCP
+and derivative protocols (QUIC, SCTP, ...) is that it is 100%
+independent of congestion control, which allows it to operate at
+real RTT timescales (i.e. microseconds in datacenters) without fear of
+RTT underestimates severely capping throughput. Another characteristic
+is that the RTT estimate is really measuring the responsiveness of the
+application, not the kernel on the machine.
+
+A detailed description of the operation of ARQ can be found
+in the [protocols](/docs/concepts/protocols/#operation-of-frcp)
+section.
+
+### Congestion Avoidance
+
+The next big addition is congestion avoidance. The unicast
+layer's default configuration will now congestion-control all client
+traffic sent over it[^1]. As noted above, congestion avoidance in
+Ouroboros is completely independent of the operation of ARQ and flow
+control. For more information about how this all works, have a look at
+the developer blog
+[here](/blog/2020/12/12/congestion-avoidance-in-ouroboros/) and
+[here](/blog/2020/12/19/exploring-ouroboros-with-wireshark/).
+
+### Revision of the flow allocator
+
+We also made a change to the flow allocator, more specifically to the
+Endpoint IDs, which now use 64-bit identifiers. The reason for this
+change is to make it harder to guess these endpoint identifiers. In TCP,
+applications can listen to sockets that are bound to a port on a (set
+of) IP addresses. You can't imagine how many hosts are trying to
+brute-force SSH login passwords on TCP port 22. To make this at least
+a bit harder, Ouroboros has no well-known application ports, and after
+this patch they are roughly equivalent to a 32-bit random
+number. Note that in an ideal Ouroboros deployment, sensitive
+applications such as SSH login should run on a different layer/network
+than publicly available applications.
+
+### Revision of the ipcpd-udp
+
+The ipcpd-udp has gone through some revisions during its lifetime. In
+the beginning, we wanted to emulate the operation of an Ouroboros
+layer, having the flow allocator listen on a certain UDP port, and
+mapping endpoint identifiers to random ephemeral UDP ports. As an
+example, the source would create a UDP socket, e.g. on port 30927,
+and send a request for a new flow to the fixed known Ouroboros UDP port
+(3531) at the receiver. The receiver also creates a socket on an
+ephemeral UDP port, say 23705, and sends a response back to the source
+on UDP port 3531. Traffic for the "client" flow would be on the UDP port
+pair (30927, 23705). This was causing a bunch of headaches with computers
+behind NAT firewalls, rendering that scheme only useful in lab
+environments. To make it more usable, the next revision used a single
+fixed incoming UDP port for the flow allocator protocol, using an
+ephemeral UDP port on the sender side per flow, and added the flow
+allocator endpoints as a "next header" inside UDP. So traffic would
+always be sent to destination UDP port 3531. The benefit was that only a
+single port was needed in the NAT forwarding rules, and that anyone
+running Ouroboros would be able to receive allocation messages, somewhat
+nudging all users to participate in a mesh topology.
+However, opening a certain UDP port is still a hassle, so in this
+(most likely final) revision, we just run the flow allocator in the
+ipcpd-udp as a UDP server on a (configurable) port. No more NAT
+firewall configurations required if you want to connect (but if you
+want to accept connections, opening UDP port 3531 is still required).
+
+The full changelog can be browsed in
+[cgit](/cgit/ouroboros/log/?showmsg=1).
+
+[^1]: This is not a claim that every packet inside a layer is
+    congestion-controlled: internal management traffic to the layer (flow
+ allocator protocol, etc) is not congestion-controlled. \ No newline at end of file
diff --git a/content/en/docs/Releases/0_20.md b/content/en/docs/Releases/0_20.md
new file mode 100644
index 0000000..7f2ff9a
--- /dev/null
+++ b/content/en/docs/Releases/0_20.md
@@ -0,0 +1,70 @@
+---
+date: 2023-09-21
+title: "Ouroboros 0.20"
+linkTitle: "Ouroboros 0.20"
+description: "Major additions and changes in 0.20.0"
+author: Dimitri Staessens
+---
+
+Version 0.20 brings some code refactoring and a slew of bugfixes to
+the prototype to improve stability, but the main quality-of-life
+addition is config file support in TOML format. This removes the need
+for bash scripts to configure the prototype on reboots/restarts; a
+very basic feature that was long overdue.
+
+As an example, before v0.20, this server had Ouroboros running as a
+systemd service, and it was configured using the following irm commands:
+
+```bash
+irm i b t udp n udp l udp ip 51.38.114.133
+irm n r ouroboros.rocks.oping l udp
+irm b prog oping n ouroboros.rocks.oping auto -- -l
+```
+
+These bootstrap a UDP layer to the server's public IP address,
+register the name "ouroboros.rocks.oping" with that layer and bind the
+program binary /usr/bin/oping to that name, telling the irmd to start
+that server automatically if it wasn't running before.
+
+While pretty simple to perform, if the service was restarted or the
+server was rebooted, we needed to re-run these commands (we could have
+added them to some system startup script, of course).
+
+Now the IRMd will load the config file found at
+/etc/ouroboros/irmd.conf. The IRMd configuration to achieve the above
+(I renamed the UDP layer to "Internet", but that name doesn't really
+matter if there is only one ipcpd-udp in the system):
+```bash
+root@vps646159:~# cat /etc/ouroboros/irmd.conf
+### Ouroboros configuration file
+[name."ouroboros.rocks.oping"]
+prog=["/usr/bin/oping"]
+args=["-l"]
+
+[udp.internet]
+bootstrap="Internet"
+ip="51.38.114.133"
+reg=["ouroboros.rocks.oping"]
+```
+
+To enable config file support, tomlc99 is needed. Install via
+
+```bash
+git clone https://github.com/cktan/tomlc99
+cd tomlc99
+make
+sudo make install
+```
+
+and then reconfigure cmake and build Ouroboros as usual.
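+
+Assuming you kept the build directory from the original installation
+(see the Getting Started section), that boils down to something like:
+
+```bash
+cd build
+cmake ..
+sudo make install
+```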
+
+More information on how to use config files is in the example
+configuration file, installed in /etc/ouroboros/irmd.conf.example, or
+you can have a quick look in the
+[repository](/cgit/ouroboros/tree/irmd.conf.in).
+
+The full git changelog can be browsed in
+[cgit](/cgit/ouroboros/log/?showmsg=1).
+
+
+
diff --git a/content/en/docs/Releases/_index.md b/content/en/docs/Releases/_index.md
new file mode 100644
index 0000000..8328c33
--- /dev/null
+++ b/content/en/docs/Releases/_index.md
@@ -0,0 +1,6 @@
+
+---
+title: "Releases"
+linkTitle: "Release notes"
+weight: 120
+---
diff --git a/content/en/docs/Start/_index.md b/content/en/docs/Start/_index.md
index 963b9f1..735511b 100644
--- a/content/en/docs/Start/_index.md
+++ b/content/en/docs/Start/_index.md
@@ -1,7 +1,225 @@
---
title: "Getting Started"
-linkTitle: "Getting Started"
+linkTitle: "Getting Started/Installation"
weight: 20
description: >
How to get up and running with the Ouroboros prototype.
---
+
+### Get Ouroboros
+
+**Packages:**
+
+For ArchLinux users, the easiest way to try Ouroboros is via the [Arch
+User Repository](https://aur.archlinux.org/packages/ouroboros-git/),
+which will also install all dependencies.
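+
+For example, with an AUR helper such as yay (assuming you have one
+installed; building manually with makepkg also works):
+
+```bash
+$ yay -S ouroboros-git
+```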
+
+**Source:**
+
+You can clone the [repository](/cgit/ouroboros) over https or
+git:
+
+```bash
+$ git clone https://ouroboros.rocks/git/ouroboros
+$ git clone git://ouroboros.rocks/ouroboros
+```
+
+Or download a [snapshot](/cgit/ouroboros/) tarball and extract it.
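+
+For example (the exact file name depends on the snapshot you selected):
+
+```bash
+$ tar xf ouroboros-<snapshot>.tar.gz
+$ cd ouroboros-<snapshot>
+```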
+
+### System requirements
+
+Ouroboros builds on most POSIX compliant systems. Below you will find
+instructions for GNU/Linux, FreeBSD and OS X. On Windows 10, you can
+build Ouroboros using the [Windows Subsystem for
+Linux](https://docs.microsoft.com/en-us/windows/wsl/install-win10).
+
+You need [*git*](https://git-scm.com/) to clone the
+repository. To build Ouroboros, you need [*cmake*](https://cmake.org/) and
+[*google protocol buffers*](https://github.com/protobuf-c/protobuf-c)
+installed, in addition to a C compiler ([*gcc*](https://gcc.gnu.org/) or
+[*clang*](https://clang.llvm.org/)) and
+[*make*](https://www.gnu.org/software/make/).
+
+Optionally, you can also install
+[*libgcrypt*](https://gnupg.org/software/libgcrypt/index.html),
+[*libssl*](https://www.openssl.org/),
+[*fuse*](https://github.com/libfuse), and *dnsutils*.
+
+On GNU/Linux you will need either libgcrypt (≥ 1.7.0) or libssl if your
+[*glibc*](https://www.gnu.org/software/libc/) is older than version
+2.25.
+
+On OS X, you will need [homebrew](https://brew.sh/).
+[Disable System Integrity Protection](https://developer.apple.com/library/content/documentation/Security/Conceptual/System_Integrity_Protection_Guide/ConfiguringSystemIntegrityProtection/ConfiguringSystemIntegrityProtection.html)
+during the
+[installation](#install-ouroboros)
+and/or
+[removal](#remove-ouroboros)
+of Ouroboros.
+
+### Install the dependencies
+
+**Debian/Ubuntu Linux:**
+
+```bash
+$ apt-get install git protobuf-c-compiler cmake
+$ apt-get install libgcrypt20-dev libssl-dev libfuse-dev dnsutils cmake-curses-gui
+```
+
+If during the build process cmake complains that the Protobuf C
+compiler is required but not found, and you installed the
+protobuf-c-compiler package, you will also need this:
+
+```bash
+$ apt-get install libprotobuf-c-dev
+```
+
+**Arch Linux:**
+
+```bash
+$ pacman -S git protobuf-c cmake
+$ pacman -S libgcrypt openssl fuse dnsutils
+```
+
+**FreeBSD 11:**
+
+```bash
+$ pkg install git protobuf-c cmake
+$ pkg install libgcrypt openssl fusefs-libs bind-tools
+```
+
+**Mac OS X Sierra / High Sierra:**
+
+```bash
+$ brew install git protobuf-c cmake
+$ brew install libgcrypt openssl
+```
+
+### Install Ouroboros
+
+When installing from source, go to the cloned git repository or
+extract the tarball and enter the main directory. We recommend
+creating a build directory inside this directory:
+
+```bash
+$ mkdir build && cd build
+```
+
+Run cmake providing the path to where you cloned the Ouroboros
+repository. Assuming you created the build directory inside the
+repository directory, do:
+
+```bash
+$ cmake ..
+```
+
+Build and install Ouroboros:
+
+```bash
+$ sudo make install
+```
+
+### Advanced options
+
+Ouroboros can be configured by providing parameters to the cmake
+command:
+
+```bash
+$ cmake -D<option>=<value> ..
+```
+
+Alternatively, after running cmake and before installation, run
+[ccmake](https://cmake.org/cmake/help/latest/manual/ccmake.1.html) to
+configure Ouroboros:
+
+```bash
+$ ccmake .
+```
+
+A list of all build options can be found [here](/docs/reference/compopt).
+
+### Remove Ouroboros
+
+To uninstall Ouroboros, simply execute the following command from your
+build directory:
+
+```bash
+$ sudo make uninstall
+```
+
+### Check the installation
+
+To check if everything is installed correctly, you can now jump into
+the [Tutorials](../../tutorials/) section, or you can try to ping this
+webhost over ouroboros using the name _ouroboros.rocks.oping_.
+
+Our webserver is of course on an IP network, and ouroboros does not
+control IP, but it can run over UDP/IP.
+
+To be able to contact our server over ouroboros, you will need to do
+some small DNS configuration to tell the ouroboros UDP system that
+the process "ouroboros.rocks.oping" is running on our webserver: add
+the line
+
+```
+51.38.114.133 1bf2cb4fb361f67a59907ef7d2dc5290
+```
+
+to your ```/etc/hosts``` file[^1][^2].
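+
+For example, this one-liner appends that entry (assuming you are fine
+editing /etc/hosts with sudo):
+
+```bash
+$ echo '51.38.114.133 1bf2cb4fb361f67a59907ef7d2dc5290' | sudo tee -a /etc/hosts
+```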
+
+Here are the steps to ping our server over ouroboros:
+
+Run the IRMd:
+
+```bash
+$ sudo irmd &
+```
+Then you will need to find your (private) IP address and start an ouroboros UDP
+daemon (ipcpd-udp) on that interface:
+```bash
+$ irm ipcp bootstrap type udp name udp layer udp ip <your local ip address>
+```
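+
+If you are not sure which local IP address to use, one way to list the
+addresses on your interfaces (assuming the iproute2 tools are
+installed) is:
+
+```bash
+$ ip -4 addr show
+```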
+
+Now you can ping our server:
+
+```bash
+$ oping -n ouroboros.rocks.oping
+```
+
+The output from the IRM daemon should look something like this (in DEBUG mode):
+```
+[dstaesse@heteropoda build]$ sudo irmd --stdout
+==01749== irmd(II): Ouroboros IPC Resource Manager daemon started...
+==01749== irmd(II): Created IPCP 1781.
+==01781== ipcpd/udp(DB): Bootstrapped IPCP over UDP with pid 1781.
+==01781== ipcpd/udp(DB): Bound to IP address 192.168.66.233.
+==01781== ipcpd/udp(DB): Using port 3435.
+==01781== ipcpd/udp(DB): DNS server address is not set.
+==01781== ipcpd/ipcp(DB): Locked thread 140321690191424 to CPU 7/8.
+==01749== irmd(II): Bootstrapped IPCP 1781 in layer udp.
+==01781== ipcpd/ipcp(DB): Locked thread 140321681798720 to CPU 6/8.
+==01781== ipcpd/ipcp(DB): Locked thread 140321673406016 to CPU 1/8.
+==01781== ipcpd/udp(DB): Allocating flow to 1bf2cb4f.
+==01781== ipcpd/udp(DB): Destination UDP ipcp resolved at 51.38.114.133.
+==01781== ipcpd/udp(DB): Flow to 51.38.114.133 pending on fd 64.
+==01749== irmd(II): Flow on flow_id 0 allocated.
+==01781== ipcpd/udp(DB): Flow allocation completed on eids (64, 64).
+==01749== irmd(DB): Partial deallocation of flow_id 0 by process 1800.
+==01749== irmd(II): Completed deallocation of flow_id 0 by process 1781.
+==01781== ipcpd/udp(DB): Flow with fd 64 deallocated.
+==01749== irmd(DB): Dead process removed: 1800.
+```
+
+If connecting to _ouroboros.rocks.oping_ failed, you are probably
+behind a NAT firewall that is actively blocking outbound UDP port
+3435.
+
+[^1]: This is the IP address of our server and the MD5 hash of the
+ string _ouroboros.rocks.oping_. To check if this is configured
+ correctly, you should be able to ping the server with ```ping
+ 1bf2cb4fb361f67a59907ef7d2dc5290``` from the command line.
+
+[^2]: The ipcpd-udp allows setting up a (private) DDNS server and
+ using the Ouroboros ```irm name``` API to populate it, instead
+ of requiring each node to manually edit the ```/etc/hosts```
+ file. While we technically could also set up such a DNS on our
+ server for demo purposes, it is just too likely that it would be
+ abused. The Internet is a nasty place.
diff --git a/content/en/docs/Start/check.md b/content/en/docs/Start/check.md
deleted file mode 100644
index 69c5bef..0000000
--- a/content/en/docs/Start/check.md
+++ /dev/null
@@ -1,49 +0,0 @@
----
-title: "Check installation"
-date: 2019-12-30
-weight: 40
-description: >
- Check if ouroboros works.
-draft: false
----
-
-To check if everything is installed correctly, you can now jump into
-the [Tutorials](../../tutorials/) section, or you can try to ping this
-webhost over ouroboros using the name _ouroboros.rocks.oping_
-
-Our webserver is of course on an IP network, and ouroboros does not
-control IP, but it can run over UDP.
-
-To be able to contact our server over ouroboros, you will need to do
-some IP configuration: to tell the ouroboros UDP system that the
-process "ouroboros.rocks.oping" is running on our webserver by adding
-the line
-
-```
-51.38.114.133 1bf2cb4fb361f67a59907ef7d2dc5290
-```
-
-to your /etc/hosts file (it's the IP address of our server and the MD5
-hash of _ouroboros.rocks.oping_).
-
-You will also need to forward UDP port 3435 on your NAT firewall if
-you are behind a NAT. Else this will not work.
-
-Here are the steps to ping our server over ouroboros:
-
-Run the IRMd:
-
-```bash
-$ sudo irmd &
-```
-Then you will need find your (private) IP address and start an ouroboros UDP
-daemon (ipcpd-udp) on that interface:
-```bash
-$ irm ipcp bootstrap type udp name udp layer udp ip <your local ip address>
-```
-
-Now you should be able to ping our server!
-
-```bash
-$ oping -n ouroboros.rocks.oping
-```
diff --git a/content/en/docs/Start/download.md b/content/en/docs/Start/download.md
deleted file mode 100644
index 0429ea1..0000000
--- a/content/en/docs/Start/download.md
+++ /dev/null
@@ -1,28 +0,0 @@
----
-title: "Download"
-date: 2019-06-22
-weight: 10
-description: >
- How to get ouroboros.
-draft: false
----
-
-### Get Ouroboros
-
-**Packages:**
-
-For ArchLinux users, the easiest way to try Ouroboros is via the [Arch
-User Repository](https://aur.archlinux.org/packages/ouroboros-git/),
-which will also install all dependencies.
-
-**Source:**
-
-You can clone the [repository](/cgit/ouroboros) over https or
-git:
-
-```bash
-$ git clone https://ouroboros.rocks/git/ouroboros
-$ git clone git://ouroboros.rocks/ouroboros
-```
-
-Or download a [snapshot](/cgit/ouroboros/) tarball and extract it. \ No newline at end of file
diff --git a/content/en/docs/Start/install.md b/content/en/docs/Start/install.md
deleted file mode 100644
index ea4a3f7..0000000
--- a/content/en/docs/Start/install.md
+++ /dev/null
@@ -1,57 +0,0 @@
----
-title: "Install from source"
-author: "Dimitri Staessens"
-date: 2019-07-23
-weight: 30
-draft: false
-description: >
- Installation instructions.
----
-
-We recommend creating a build directory:
-
-```bash
-$ mkdir build && cd build
-```
-
-Run cmake providing the path to where you cloned the Ouroboros
-repository. Assuming you created the build directory inside the
-repository directory, do:
-
-```bash
-$ cmake ..
-```
-
-Build and install Ouroboros:
-
-```bash
-$ sudo make install
-```
-
-### Advanced options
-
-Ouroboros can be configured by providing parameters to the cmake
-command:
-
-```bash
-$ cmake -D<option>=<value> ..
-```
-
-Alternatively, after running cmake and before installation, run
-[ccmake](https://cmake.org/cmake/help/latest/manual/ccmake.1.html) to
-configure Ouroboros:
-
-```bash
-$ ccmake .
-```
-
-A list of all options can be found [here](/docs/reference/compopt).
-
-### Remove Ouroboros
-
-To uninstall Ouroboros, simply execute the following command from your
-build directory:
-
-```bash
-$ sudo make uninstall
-``` \ No newline at end of file
diff --git a/content/en/docs/Start/requirements.md b/content/en/docs/Start/requirements.md
deleted file mode 100644
index 7615b44..0000000
--- a/content/en/docs/Start/requirements.md
+++ /dev/null
@@ -1,76 +0,0 @@
----
-title: "Requirements"
-author: "Dimitri Staessens"
-date: 2019-07-23
-weight: 10
-draft: false
-description: >
- System requirements and software dependencies.
----
-
-### System requirements
-
-Ouroboros builds on most POSIX compliant systems. Below you will find
-instructions for GNU/Linux, FreeBSD and OS X. On Windows 10, you can
-build Ouroboros using the [Linux Subsystem for
-Windows](https://docs.microsoft.com/en-us/windows/wsl/install-win10) .
-
-You need [*git*](https://git-scm.com/) to clone the
-repository. To build Ouroboros, you need [*cmake*](https://cmake.org/),
-[*google protocol buffers*](https://github.com/protobuf-c/protobuf-c)
-installed in addition to a C compiler ([*gcc*](https://gcc.gnu.org/) or
-[*clang*](https://clang.llvm.org/)) and
-[*make*](https://www.gnu.org/software/make/).
-
-Optionally, you can also install
-[*libgcrypt*](https://gnupg.org/software/libgcrypt/index.html),
-[*libssl*](https://www.openssl.org/),
-[*fuse*](https://github.com/libfuse), and *dnsutils*.
-
-On GNU/Linux you will need either libgcrypt (≥ 1.7.0) or libssl if your
-[*glibc*](https://www.gnu.org/software/libc/) is older than version
-2.25.
-
-On OS X, you will need [homebrew](https://brew.sh/). [Disable System
-Integrity
-Protection](https://developer.apple.com/library/content/documentation/Security/Conceptual/System_Integrity_Protection_Guide/ConfiguringSystemIntegrityProtection/ConfiguringSystemIntegrityProtection.html)
-during the [installation](#install) and/or [removal](#remove) of
-Ouroboros.
-
-### Install the dependencies
-
-**Debian/Ubuntu Linux:**
-
-```bash
-$ apt-get install git protobuf-c-compiler cmake
-$ apt-get install libgcrypt20-dev libssl-dev libfuse-dev dnsutils cmake-curses-gui
-```
-
-If during the build process cmake complains that the Protobuf C
-compiler is required but not found, and you installed the
-protobuf-c-compiler package, you will also need this:
-
-```bash
-$ apt-get install libprotobuf-c-dev
-```
-
-**Arch Linux:**
-
-```bash
-$ pacman -S git protobuf-c cmake
-$ pacman -S libgcrypt openssl fuse dnsutils
-```
-
-**FreeBSD 11:**
-
-```bash
-$ pkg install git protobuf-c cmake
-$ pkg install libgcrypt openssl fusefs-libs bind-tools
-```
-
-**Mac OS X Sierra / High Sierra:**
-
-```bash
-$ brew install git protobuf-c cmake
-$ brew install libgcrypt openssl
-``` \ No newline at end of file
diff --git a/content/en/docs/Tools/_index.md b/content/en/docs/Tools/_index.md
new file mode 100644
index 0000000..578c47f
--- /dev/null
+++ b/content/en/docs/Tools/_index.md
@@ -0,0 +1,7 @@
+---
+title: "Tools"
+linkTitle: "Tools"
+weight: 35
+description: >
+ Ouroboros tools and software.
+---
diff --git a/content/en/docs/Tools/grafana-frcp-constants.png b/content/en/docs/Tools/grafana-frcp-constants.png
new file mode 100644
index 0000000..19470bd
--- /dev/null
+++ b/content/en/docs/Tools/grafana-frcp-constants.png
Binary files differ
diff --git a/content/en/docs/Tools/grafana-frcp-window.png b/content/en/docs/Tools/grafana-frcp-window.png
new file mode 100644
index 0000000..5e43985
--- /dev/null
+++ b/content/en/docs/Tools/grafana-frcp-window.png
Binary files differ
diff --git a/content/en/docs/Tools/grafana-frcp.png b/content/en/docs/Tools/grafana-frcp.png
new file mode 100644
index 0000000..9b428af
--- /dev/null
+++ b/content/en/docs/Tools/grafana-frcp.png
Binary files differ
diff --git a/content/en/docs/Tools/grafana-ipcp-dt-dht.png b/content/en/docs/Tools/grafana-ipcp-dt-dht.png
new file mode 100644
index 0000000..cb6f1a9
--- /dev/null
+++ b/content/en/docs/Tools/grafana-ipcp-dt-dht.png
Binary files differ
diff --git a/content/en/docs/Tools/grafana-ipcp-dt-fa.png b/content/en/docs/Tools/grafana-ipcp-dt-fa.png
new file mode 100644
index 0000000..e7b0a93
--- /dev/null
+++ b/content/en/docs/Tools/grafana-ipcp-dt-fa.png
Binary files differ
diff --git a/content/en/docs/Tools/grafana-ipcp-np1-cc.png b/content/en/docs/Tools/grafana-ipcp-np1-cc.png
new file mode 100644
index 0000000..d1c0016
--- /dev/null
+++ b/content/en/docs/Tools/grafana-ipcp-np1-cc.png
Binary files differ
diff --git a/content/en/docs/Tools/grafana-ipcp-np1-fu.png b/content/en/docs/Tools/grafana-ipcp-np1-fu.png
new file mode 100644
index 0000000..b325438
--- /dev/null
+++ b/content/en/docs/Tools/grafana-ipcp-np1-fu.png
Binary files differ
diff --git a/content/en/docs/Tools/grafana-ipcp-np1.png b/content/en/docs/Tools/grafana-ipcp-np1.png
new file mode 100644
index 0000000..2fdf20b
--- /dev/null
+++ b/content/en/docs/Tools/grafana-ipcp-np1.png
Binary files differ
diff --git a/content/en/docs/Tools/grafana-lsdb.png b/content/en/docs/Tools/grafana-lsdb.png
new file mode 100644
index 0000000..fadd185
--- /dev/null
+++ b/content/en/docs/Tools/grafana-lsdb.png
Binary files differ
diff --git a/content/en/docs/Tools/grafana-system.png b/content/en/docs/Tools/grafana-system.png
new file mode 100644
index 0000000..a8d1f15
--- /dev/null
+++ b/content/en/docs/Tools/grafana-system.png
Binary files differ
diff --git a/content/en/docs/Tools/grafana-variables-interval.png b/content/en/docs/Tools/grafana-variables-interval.png
new file mode 100644
index 0000000..0c297be
--- /dev/null
+++ b/content/en/docs/Tools/grafana-variables-interval.png
Binary files differ
diff --git a/content/en/docs/Tools/grafana-variables-system.png b/content/en/docs/Tools/grafana-variables-system.png
new file mode 100644
index 0000000..d16e621
--- /dev/null
+++ b/content/en/docs/Tools/grafana-variables-system.png
Binary files differ
diff --git a/content/en/docs/Tools/grafana-variables-type.png b/content/en/docs/Tools/grafana-variables-type.png
new file mode 100644
index 0000000..b3f4a78
--- /dev/null
+++ b/content/en/docs/Tools/grafana-variables-type.png
Binary files differ
diff --git a/content/en/docs/Tools/grafana-variables.png b/content/en/docs/Tools/grafana-variables.png
new file mode 100644
index 0000000..26fdee6
--- /dev/null
+++ b/content/en/docs/Tools/grafana-variables.png
Binary files differ
diff --git a/content/en/docs/Tools/metrics.md b/content/en/docs/Tools/metrics.md
new file mode 100644
index 0000000..4c36533
--- /dev/null
+++ b/content/en/docs/Tools/metrics.md
@@ -0,0 +1,298 @@
+---
+title: "Metrics Exporters"
+author: "Dimitri Staessens"
+date: 2021-07-21
+draft: false
+description: >
+ Realtime monitoring using a time-series database
+---
+
+## Ouroboros metrics
+
+A collection of observability tools for exporting and
+visualising metrics collected from Ouroboros.
+
+It currently has one very simple exporter for InfluxDB, and provides
+additional visualization via grafana.
+
+More features will be added over time.
+
+### Requirements:
+
+Ouroboros version >= 0.18.3
+
+InfluxDB OSS 2.0, https://docs.influxdata.com/influxdb/v2.0/
+
+python influxdb-client, install via
+
+```
+pip install 'influxdb-client[ciso]'
+```
+
+### Optional requirements:
+
+Grafana, https://grafana.com/
+
+### Setup
+
+Install and run InfluxDB and create a bucket in InfluxDB for exporting
+Ouroboros metrics, and a token for writing to that bucket. Consult the
+InfluxDB documentation on how to do this:
+https://docs.influxdata.com/influxdb/v2.0/get-started/#set-up-influxdb.
+
+To use grafana, install and run grafana open source:
+https://grafana.com/grafana/download
+https://grafana.com/docs/grafana/latest/?pg=graf-resources&plcmt=get-started
+
+Go to the grafana UI (usually http://localhost:3000) and set up
+InfluxDB as your datasource:
+go to Configuration -> Datasources -> Add datasource and select InfluxDB,
+set "flux" as the Query Language, and
+under "InfluxDB Details" set your Organization as configured in InfluxDB,
+then copy/paste the token for the bucket into the Token field.
+
+To add the Ouroboros dashboard,
+select Dashboards -> Manage -> Import
+
+and then either upload the json file from this repository in
+
+dashboards-grafana/general.json
+
+or copy the contents of that file to the "Import via panel json"
+textbox and click "Load".
+
+### Run the exporter:
+
+Clone the repository:
+
+```
+git clone https://ouroboros.rocks/git/ouroboros-metrics
+cd ouroboros-metrics
+cd exporters-influxdb/pyExporter/
+```
+
+Edit the config.ini.example file and fill out the InfluxDB
+information (token, org). Save it as config.ini.
+
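+For example (nano is just an assumption here, use whatever editor you
+prefer):
+
+```
+cp config.ini.example config.ini
+nano config.ini   # fill in the InfluxDB token and org
+```
+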
+Then run oexport.py:
+
+```
+python oexport.py
+```
+
+## Overview of Grafana general dashboard for Ouroboros
+
+The grafana dashboard allows you to explore various aspects of
+Ouroboros running on your local or remote systems. As the prototype
+matures, more and more metrics will become available.
+
+### Variables
+
+At the top, you can set a number of variables to restrict what is seen
+on the dashboard:
+
+{{<figure width="30%" src="/docs/tools/grafana-variables.png">}}
+
+* System allows you to specify a set of host/node/devices in the network:
+
+{{<figure width="30%" src="/docs/tools/grafana-variables-system.png">}}
+
+The list will contain all hosts that put metrics in the InfluxDB
+database in the last 5 days (unfortunately there seems to be no
+option to restrict this to the currently selected time range).
+
+* Type allows you to select metrics for a certain IPCP type
+
+{{<figure width="30%" src="/docs/tools/grafana-variables-type.png">}}
+
+As you can see, all Ouroboros IPCP types are there, with the inclusion
+of an UNKNOWN type. This may briefly pop up when a metric is misread by
+the exporter.
+
+* Layer allows you to restrict the metrics to a certain layer
+
+* IPCP allows you to restrict metrics to a certain IPCP
+
+* Interval allows you to select a window in which metrics are aggregated.
+
+{{<figure width="30%" src="/docs/tools/grafana-variables-interval.png">}}
+
+Metrics will be aggregated from the actual exporter values (e.g. mean
+or last value) that fall in this interval. This interval should thus
+be larger than the exporter interval to ensure that each window has
+enough raw data.
+
+### Panels
+
+As you can see in the image above, the dashboard is subdivided into a
+bunch of panels, each of which focuses on some aspect of the
+prototype.
+
+#### System
+
+{{<figure width="80%" src="/docs/tools/grafana-system.png">}}
+
+The system panel shows the number of IPCPs and known IPCP flows in all
+monitored systems as a stacked series. This system is running a small
+test with 3 IPCPs (2 unicast IPCPs and a local IPCP) with a single
+flow between an oping server and client (which has one endpoint in each
+IPCP, so it shows 2 because this small test runs on a single host). The
+colors on the graphs sometimes do not match the labels, which is a
+grafana issue that I hope will get fixed soon.
+
+#### Link State Database
+
+{{<figure width="80%" src="/docs/tools/grafana-lsdb.png">}}
+
+The Link State Database panel shows the knowledge each IPCP has about
+the network routing area(s) it is in. The example has 2 IPCPs that are
+directly connected, so each knows 1 neighbor (the other IPCP), 2
+nodes, and two links (each unidirectional arc in the topology graph is
+counted).
+
+#### Process N-1 flows
+
+{{<figure width="80%" src="/docs/tools/grafana-frcp.png">}}
+
+This is the first panel that deals with the [Flow-and-Retransmission
+Control
+Protocol](/docs/concepts/protocols/#flow-and-retransmission-control-protocol-frcp)
+(FRCP). It shows metrics for the flows between the applications (this
+is not the same flow as the data transfer flow above, which is between
+the IPCPs). This panel shows metrics relating to retransmission. The
+first is the current retransmission timeout, i.e. the time after which
+a packet will be retransmitted. This is calculated from the smoothed
+round-trip time and its estimated deviation (well below 1ms), as
+estimated by FRCP.
+
+The flow is created by the oping application that is pinging at a 10ms
+interval with packet retransmission enabled (so basically a service
+equivalent to running ping over TCP). The main difference with TCP is
+that Ouroboros flows are between the applications themselves. The
+oping server immediately responds to the client, so the client sees a
+response time well below 1 ms[^1]. The server, however, sees the
+client sending a packet only every 10ms and its RTO is a bit over
+10ms. The ACKs from the perspective of the server are piggybacked on
+the client's next ping. (This is similar to TCP "delayed ACK"; the
+timer in Ouroboros is set to 10ms, so if I were to ping at 1 second
+intervals over a flow with FRCP enabled, the server would still see a
+round-trip time of about 10ms).
+
+#### Delta-t constants
+
+The second panel related to FRCP shows the Delta-t constants. Delta-t is
+the protocol on which FRCP is based. Right now, they are only
+configurable at compile time, but in the future they will probably be
+configurable using fccntl().
+
+{{<figure width="80%" src="/docs/tools/grafana-frcp-constants.png">}}
+
+A quick refresher on these Delta-t timers:
+
+* **Maximum Packet Lifetime** (MPL) is the maximum time a packet can
+ live in the network, default is 1 minute.
+
+* **Retransmission timer** (R) is the maximum time within which a
+  retransmission for a packet may be sent by the sender. The default
+  is 2 minutes. The first retransmission will happen after RTO,
+  then 2 * RTO, 4 * RTO and so on with an exponential back-off, but
+ no packets will be sent after R has expired. If this happens, the
+ flow is considered failed / down.
+
+* **Acknowledgment timer** (A) is the maximum time within which a packet
+  may be acknowledged by the receiver. The default is 10 seconds. So a
+  packet may be acknowledged immediately, or after 10 milliseconds,
+  or after 4 seconds, but no later than 10 seconds.
+
+#### Delta-t window
+
+{{<figure width="80%" src="/docs/tools/grafana-frcp-window.png">}}
+
+The third and (at least at this point) last panel related to FRCP is
+the window panel that shows information regarding Flow Control. FRCP
+flow control tracks the number of packets in flight. These are the
+packets that were sent by the sender, but have not been
+read/acknowledged yet by the receiver. Each packet is numbered
+sequentially starting from a random value. The default maximum window
+size is currently 256 packets.
+
+#### IPCP N+1 flows
+
+{{<figure width="80%" src="/docs/tools/grafana-ipcp-np1.png">}}
+
+These graphs show basic statistics from the point of view of the IPCP
+that is serving the application flow. It shows upstream and downstream
+bandwidth and packet rates, and total sent and received packets/bytes.
+
+#### N+1 Flow Management
+
+{{<figure width="60%" src="/docs/tools/grafana-ipcp-np1-fu.png">}}
+
+These 4 panels show the management traffic sent by the flow
+allocators. Currently this traffic is only related to congestion
+avoidance. The example here is taken from a jFed experiment during a
+period of congestion. The receiver IPCP monitors packets for
+congestion markers and it will send an update to the source IPCP to
+inform it to slow down. It shows the rate of flow updates for
+multi-bit Explicit Congestion Notification. As you can see, there is
+still an issue where the receiver is not receiving all the flow
+updates and there is a lot of jitter and burstiness at the receiver
+side for these (small) packets. I'm working on fixing this.
+
+#### Congestion Avoidance
+
+{{<figure width="80%" src="/docs/tools/grafana-ipcp-np1-cc.png">}}
+
+This is a more detailed panel that shows the internals of the MB-ECN
+congestion avoidance algorithm.
+
+The left side shows the congestion window width, which is the
+timeframe over which the algorithm is averaging bandwidth. This scales
+with the packet rate, as there have to be enough packets in the window
+to make a reasonable measurement. The biggest change compared to TCP is
+that this window width is independent of RTT. The congestion
+algorithm then sets a target for the maximum number of bytes to send
+within this window (congestion window size). Dividing the number of
+bytes that can be sent by the size of the window yields
+the target bandwidth. The congestion was caused by a 100Mbit link, and
+the target set by the algorithm is quite near this value. The
+congestion level is a quantity that controls the rate at which the
+window scales down when there is congestion. The upstream and downstream
+views should be as close as possible to identical; the reason they are
+not is the jitter and loss in the flow updates as observed
+above. Work in progress.
+
+The graphs also show the number of packets and bytes in the current
+congestion window. The default target is set to min 8 and max 64
+packets within the congestion window before it scales up/down.
+
+And finally, the upstream packet counter shows the number of packets
+sent without receiving a congestion update from the receiver, and the
+downstream packet counter shows the number of packets received since
+the last time there was no congestion.
+
+#### Data transfer local components
+
+The last panel shows the (management) traffic sent and received by the
+IPCP's internal components, as measured by the forwarding engine (data transfer).
+
+{{<figure width="80%" src="/docs/tools/grafana-ipcp-dt-dht.png">}}
+
+The components that are currently shown on this panel are the DHT and
+the Flow Allocator. As you can see, the DHT didn't do much during this
+interval. That's because it is only needed for name-to-address
+resolution and it will only send/receive packets when an address is
+resolved or when it needs to refresh its state, which happens only
+once every 15 minutes or so.
+
+{{<figure width="80%" src="/docs/tools/grafana-ipcp-dt-fa.png">}}
+
+The bottom part of the local components is dedicated to the flow
+allocator. During the monitoring period, only flow updates were sent,
+so this is the same data as shown in the flow management traffic, but
+from the viewpoint of the forwarding element in the IPCP, so it shows
+actual bandwidth in addition to the packet rates.
+
+[^1]: If this still seems high, disabling CPU "C-states" and tuning
+ the kernel for low latency can reduce this to a few
+ microseconds.
diff --git a/content/en/docs/Tools/rumba-topology.png b/content/en/docs/Tools/rumba-topology.png
new file mode 100644
index 0000000..aa8ce7f
--- /dev/null
+++ b/content/en/docs/Tools/rumba-topology.png
Binary files differ
diff --git a/content/en/docs/Tools/rumba.md b/content/en/docs/Tools/rumba.md
new file mode 100644
index 0000000..28202b7
--- /dev/null
+++ b/content/en/docs/Tools/rumba.md
@@ -0,0 +1,676 @@
+---
+title: "Rumba"
+author: "Dimitri Staessens"
+date: 2021-07-21
+draft: false
+description: >
+ Orchestration framework for deploying recursive networks.
+---
+
+## About Rumba
+
+Rumba is a Python framework for setting up Ouroboros (and RINA)
+networks in a test environment that was originally developed during
+the ARCFIRE project. Its main objectives are to configure networks and
+to get a feel for the impact of the architecture on configuration
+management and devops in computer and telecommunications networks. The
+original Rumba project page is
+[here](https://gitlab.com/arcfire/rumba).
+
+I still use Rumba to quickly (and I mean in a matter of seconds!) set
+up test networks for Ouroboros that are made up of many IPCPs and
+layers. I try to keep it up-to-date for the Ouroboros prototype.
+
+The features of Rumba are:
+
+ * easily define network topologies
+ * use different prototypes:
+ * Ouroboros[^1]
+ * rlite
+ * IRATI
+
+ * create these networks using different possible environments:
+ * local PC (Ouroboros only)
+ * docker container
+ * virtual machine (qemu)
+ * [jFed](https://jfed.ilabt.imec.be/) testbeds
+ * script experiments
+ * rudimentary support for drawing these networks (using pydot)
+
+## Getting Rumba
+
+We forked Rumba to the Ouroboros website, and you should get this
+forked version for use with Ouroboros. It should work with most Python
+versions, but I recommend using the latest version (currently
+Python 3.9).
+
+To install system-wide, use:
+
+```bash
+git clone https://ouroboros.rocks/git/rumba
+cd rumba
+sudo ./setup.py install
+```
+
+or you can first create a Python virtual environment as you wish.
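+
+For example, a minimal sketch using the standard venv module (the
+directory name is arbitrary):
+
+```bash
+python3 -m venv rumba-venv
+source rumba-venv/bin/activate
+pip install .
+```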
+
+## Using Rumba
+
+The Rumba model is heavily based on RINA terminology (since it was
+originally developed within a RINA research project). We will probably
+align the terminology in Rumba with Ouroboros in the near future. I
+will break down a typical Rumba experiment definition and show how to
+use Rumba in Python interactive mode. You can download the complete
+example experiment definition [here](/docs/tools/rumba_example.py).
+The example uses the Ouroboros prototype, and we will run the setup on
+the _local_ testbed since that is available on any machine and doesn't
+require additional dependencies. We use the local testbed a lot for
+quick development testing and debugging. I will also show the
+experiment definition for the virtual wall server testbed at Ghent
+University as an example for researchers who have access to this
+testbed. If you have docker or qemu installed, feel free to experiment
+with these at your leisure.
+
+### Importing the needed modules and definitions
+
+First, we need to import some definitions for the model, the testbed
+and the prototype we are going to use. Rumba defines the networks from
+the viewpoint of the _layers_ and how they are mapped onto
+the nodes. This was a design choice by the original developers of
+Rumba.
+
+Three elements are imported from the **rumba.model** module:
+
+```Python
+from rumba.model import Node, NormalDIF, ShimEthDIF
+```
+
+* **Node** is a machine that is hosting the IPCPs, usually a server. In
+the local testbed it is a purely abstract concept, but when using the
+qemu, docker or jfed testbeds, each Node will map to a virtual machine
+on the local host, a docker container on the local host, or a virtual
+or physical server on the jfed testbed, respectively.
+
+* **NormalDIF** is (roughly) the RINA counterpart for an Ouroboros
+ *unicast layer*. The Rumba framework has no support for broadcast
+ layers (yet).
+
+* **ShimEthDIF** is (roughly) the RINA counterpart for an Ouroboros
+ Ethernet IPCP. These links make up the "physical network topology"
+ in the experiment definition. On the local testbed, Rumba will use
+  the ipcpd-local as a substitute for the Ethernet links; in the other
+  testbeds (qemu, docker, jfed) these will be implemented on (virtual)
+  Ethernet interfaces. Rumba uses the DIX Ethernet IPCP
+ (ipcpd-eth-dix) for Ouroboros as it has the least problems with
+ cheaper switches in the testbeds that often have buggy LLC
+ implementations.
+
+You might have expected the IPCPs themselves to be elements of the
+Rumba model, and they are. They are not defined directly but, as we
+shall see shortly, inferred from the layer definitions.
+
+We still need to import the testbeds we will use. As mentioned, we
+will use the local testbed and jfed testbed. The commands to import
+the qemu and docker testbed plugins are shown in comments for reference:
+
+```Python
+import rumba.testbeds.jfed as jfed
+import rumba.testbeds.local as local
+# import rumba.testbeds.qemu as qemu
+# import rumba.testbeds.dockertb as docker
+```
+
+And finally, we import the Ouroboros prototype plugin:
+
+```Python
+import rumba.prototypes.ouroboros as our
+```
+
+As the final preparation, let's define which variables to export:
+
+```Python
+__all__ = ["exp", "nodes"]
+```
+
+* **exp** will contain the experiment definition for the local testbed
+
+* **nodes** will contain a list of the node names in the experiment,
+ which will be of use when we drive the experiment from the
+ IPython interface.
+
+### Experiment definition
+
+We will now define a small 4-node "star" topology of two client nodes,
+a server node, and a router node, that looks like this:
+
+{{<figure width="30%" src="/docs/tools/rumba-topology.png">}}
+
+In the prototype, there is a unicast layer which we call _n1_ (in
+Rumba, a "NormalDIF") and 3 point-to-point links ("ShimEthDIF"), _e1_,
+_e2_ and _e3_. There are 4 nodes, which we label "client1", "client2",
+"router", and "server". These are connected in a so-called star
+topology, so there is a link between the "router" and each of the 3
+other nodes.
+
+These layers can be defined straightforwardly as follows:
+
+```Python
+n1 = NormalDIF("n1")
+e1 = ShimEthDIF("e1")
+e2 = ShimEthDIF("e2")
+e3 = ShimEthDIF("e3")
+```
+
+And now the actual topology definition; the figure above will help in
+making sense of it.
+
+```Python
+clientNode1 = Node("client1",
+ difs=[e1, n1],
+ dif_registrations={n1: [e1]})
+
+clientNode2 = Node("client2",
+ difs=[e3, n1],
+ dif_registrations={n1: [e3]})
+
+routerNode = Node("router",
+ difs=[e1, e2, e3, n1],
+ dif_registrations={n1: [e1, e2, e3]})
+
+serverNode = Node("server",
+ difs=[e2, n1],
+ dif_registrations={n1: [e2]})
+
+nodes = ["client1", "client2", "router", "server"]
+```
+
+Each node is modeled as a Rumba Node object, and we specify which DIFs
+are present on that node (which will cause Rumba to create an IPCP for
+you) and how these DIFs relate to each other in that node. This is done
+by specifying the dependency graph between these DIFs as a dict object
+("dif_registrations") where the client layer is the key and the list
+of lower-ranked layers is the value.
+
+The endpoints of the star (clients and server) have a fairly simple
+configuration: they are connected to the router via an Ethernet layer
+(_e1_ on "client1", _e3_ on "client2" and _e2_ on "server"), and _n1_
+sits on top of that. So for node "client1" there are 2 layers present
+(difs=[_e1_, _n1_]) and _n1_ makes use of _e1_ to connect into the
+layer, or in other words, _n1_ is registered in the lower layer _e1_
+(dif_registrations={_n1_: [_e1_]}).
+
+The router node is similar, but of course, all the ethernet layers are
+present and layer _n1_ has to be known from all other nodes, so on the
+router, _n1_ is registered in [_e1_, _e2_, _e3_].
+
+All this may look a bit unfamiliar and may take some time to get used
+to (and maybe an option for Rumba where the experiment is defined in
+terms of the IPCPs rather than the layers/DIFs might be more
+intuitive), but once one gets the hang of this, defining complex
+network topologies really becomes child's play.
+
+Now that we have the experiment defined, let's set up the testbed.
+
+For the local testbed, there is almost nothing to it:
+
+```Python
+tb = local.Testbed()
+exp = our.Experiment(tb,
+ nodes=[clientNode1,
+ clientNode2,
+ routerNode,
+ serverNode])
+```
+
+
+We define a local.Testbed and then create an Ouroboros experiment
+(recall we imported the Ouroboros plugin _as our_) using the local
+testbed and pass the list of nodes defined for the experiment. For the
+local testbed, that is all there is to it. The local testbed module will not
+perform installations on the host machine and assumes Ouroboros is
+installed and running.
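+
+If the IRMd is not yet running, one way to start it on the local
+machine (mirroring what Rumba itself does on the remote testbeds, as
+shown later) is:
+
+```bash
+sudo nohup irmd > /dev/null &
+```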
+
+### An example on the Fed4FIRE/GENI testbeds using the jFed plugin
+
+Before using Rumba with jFed, you need to enable ssh-agent in each
+terminal.
+
+```
+eval `ssh-agent`
+ssh-add /path/to/cert.pem
+```
+
+To give an idea of what Rumba can do on a testbed with actual hardware
+servers, I will also give an example for a testbed deployment using
+the jfed plugin. This may not be relevant to people who don't have
+access to these testbeds, but it can serve as a taste of what a
+kubernetes[^2] plugin could achieve, which may come if there is enough
+interest in it.
+
+
+```Python
+jfed_tb = jfed.Testbed(exp_name='cc2',
+ cert_file='/path/to/cert.pem',
+ authority='wall1.ilabt.iminds.be',
+ image='UBUNTU16-64-STD',
+ username='<my_username>',
+ passwd='<my_password>',
+ exp_hours='1',
+ proj_name='ouroborosrocks')
+```
+
+The jfed testbed requires a bit more configuration than the local (or
+qemu/docker) plugins. First, the credentials for accessing jfed (your
+username, password, and certificate) need to be passed. Your password
+is optional; if you prefer not to supply it in plaintext, Rumba
+will ask you to enter it when needed. A jFed experiment
+requires an experiment name that can be chosen at will for the
+experiment, an expiration time (in hours) and also a project name that
+has to be created within the jfed portal and pre-approved by the jfed
+project. Finally, the authority specifies the actual test
+infrastructure to use, in this case wall1.ilabt.iminds.be is a testbed
+that consists of a large number of physical server machines. The image
+parameter specifies which OS to run, in this case, we selected Ubuntu
+16.04 LTS. For IRATI we used an image that had the prototype
+pre-installed.
+
+More interesting than the testbed configuration is the additional
+functionality for the experiment:
+
+```Python
+jfed_exp = our.Experiment(jfed_tb,
+ nodes=[clientNode1,
+ clientNode2,
+ routerNode,
+ serverNode],
+ git_repo='https://ouroboros.rocks/git/ouroboros',
+ git_branch='<some working branch>',
+ build_options='-DCMAKE_BUILD_TYPE=Debug '
+ '-DSHM_BUFFER_SIZE=131072',
+ add_packages=['ethtool'],
+ influxdb={
+ 'ip': '<my public IP address>',
+ 'port': 8086,
+ 'org': "Ouroboros",
+ 'token': "<my token>"
+ })
+```
+
+For these beefier setups, Rumba will actually install the prototype.
+You can specify a repository and branch (if not, it will use the
+master branch from the main Ouroboros repository), build options for
+the prototype, and additional packages to install for use during the
+experiment. As an Ouroboros-specific option, you can pass the
+coordinates of an InfluxDB database, which will also install the
+[metrics exporter](/docs/tools/metrics) and allow real-time
+observation of key experiment parameters.
+
+This concludes the brief overview of the experiment definition; let's
+give it a quick try using the "local" testbed.
+
+### Interactive orchestration
+
+First, make sure that Ouroboros is running on your host machine, save the
+[experiment definition script](/docs/tools/rumba_example.py) to your
+machine and run a python shell in the directory with the example file.
+
+Let's first add some additional logging to Rumba so we have a bit more
+information about the process:
+
+```sh
+[dstaesse@heteropoda examples]$ python
+Python 3.9.6 (default, Jun 30 2021, 10:22:16)
+[GCC 11.1.0] on linux
+Type "help", "copyright", "credits" or "license" for more information.
+>>> import rumba.log as log
+>>> log.set_logging_level('DEBUG')
+```
+
+Now, in the shell, import the definitions from the example file. I
+will only put (and reformat) the most important snippets of the output
+here.
+
+```
+>>> from rumba_example import *
+
+DIF topological ordering: [DIF e2, DIF e1, DIF e3, DIF n1]
+DIF graph for DIF n1: client1 --[e1]--> router,
+ client2 --[e3]--> router,
+ router --[e1]--> client1,
+ router --[e3]--> client2,
+ router --[e2]--> server,
+ server --[e2]--> router
+Enrollments:
+ [DIF n1] n1.router --> n1.client1 through N-1-DIF DIF e1
+ [DIF n1] n1.client2 --> n1.router through N-1-DIF DIF e3
+ [DIF n1] n1.server --> n1.router through N-1-DIF DIF e2
+
+Flows:
+ n1.router --> n1.client1
+ n1.client2 --> n1.router
+ n1.server --> n1.router
+```
+
+When an experiment object is created, Rumba will pre-compute how to
+bootstrap the requested network layout. First, it will select a
+topological ordering, the order in which it will create the layers
+(DIFs). We now only have 4, and the Ethernet layers need to be up and
+running before we can bootstrap the unicast layer _n1_. Rumba will
+create them in the order _e2_, _e1_, _e3_ and then _n1_.
+
+The graph for _n1_ is shown as a check that the correct topology was
+entered. Then Rumba shows the order in which it will enroll the
+members of the _n1_ layer.
+
+As mentioned above, Rumba creates IPCPs based on the layering
+information in the Node objects in the experiment description. The
+naming convention used in Rumba is "<layer name>.<node name>". The
+algorithm in Rumba selected the IPCP "n1.client1" as the bootstrap
+IPCP. This is not printed explicitly, but it can be derived from the
+fact that "n1.client1" is the only IPCP that is not enrolled with
+another member of the layer. Rumba will enroll the IPCP on the router
+with the one on client1, and then the other 2 IPCPs in _n1_ with the
+unicast IPCP on the router node.
+
+Finally, it will create 3 flows between the members of _n1_ that will
+complete the "star" topology. Note that in Ouroboros, there will
+actually be 6, as it will have 3 data flows (for all traffic between
+clients of the layer, the directory (DHT), etc) and 3 flows for
+management traffic (link state advertisements).
+
+It is possible to print the layer graph (DIF graph) as an image (PDF)
+for easier verification that the topology is correct. For instance,
+for the unicast layer _n1_:
+
+```Python
+>>> from rumba_example import n1
+>>> exp.export_dif_graph("example.pdf", n1)
+>>> <snip> Generated PDF of DIF graph
+```
+
+This is actually how the image above was generated.
+
+The usual flow for starting an experiment is to call the
+
+```Python
+exp.swap_in()
+exp.install_prototype()
+```
+
+functions. The swap_in() function prepares the testbed by booting the
+(virtual) machines or containers. The install_prototype call will
+install the prototype of choice and all its dependencies and
+tools. However, we are now using a local testbed, and in this case,
+these two functions are implemented as _nops_, allowing the same
+script to be used on different types of testbeds.
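+
+As a sketch, the start of a scripted (non-interactive) experiment run
+would then look something like this; on the local testbed both calls
+simply return immediately:
+
+```Python
+from rumba_example import *  # exp and nodes from the experiment definition
+
+exp.swap_in()            # reserve and boot the (virtual) machines/containers
+exp.install_prototype()  # install the prototype and its dependencies
+# next up: exp.bootstrap_prototype(), discussed below
+```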
+
+Now comes the real magic (output cleaned up for convenience). The
+_bootstrap_prototype()_ function will create the defined network
+topology on the selected testbed. For the local testbed, all hosts are
+the same, so client1/client2/router/server will actually execute on
+the same machine. The only difference in these commands, should for
+instance a virtual wall testbed be used, is that the 'type local'
+IPCPs would be 'type eth-dix' and be configured on an Ethernet
+interface, and of course be run on the correct host machine. These
+are also the commands a network administrator would have to execute
+to create the network manually on physical or virtual machines.
+
+This is one of the key strengths of Ouroboros: it doesn't care about
+machines at all. It's a network of software objects, or even a network
+of algorithms, not a network of _devices_. It needs devices to run, of
+course, but neither the device nor the interface is a named entity in any of
+the objects that make up the actual network. The devices are a concern
+for the network architect and the network manager, as they choose
+where to run the processes that make up the network and monitor them,
+but devices are irrelevant for the operation of the network in itself.
+
+Anyway, here is the complete output of the bootstrap_prototype()
+command; I'll break it down a bit below.
+
+```Python
+>>> exp.bootstrap_prototype()
+16:29:28 Starting IRMd on all nodes...
+[sudo] password for dstaesse:
+16:29:32 Started IRMd, sleeping 2 seconds...
+16:29:34 Creating IPCPs
+16:29:34 client1 >> irm i b n e1.client1 type local layer e1
+16:29:34 client1 >> irm i b n n1.client1 type unicast layer n1 autobind
+16:29:34 client2 >> irm i b n e3.client2 type local layer e3
+16:29:34 client2 >> irm i c n n1.client2 type unicast
+16:29:34 router >> irm i b n e1.router type local layer e1
+16:29:34 router >> irm i b n e3.router type local layer e3
+16:29:34 router >> irm i b n e2.router type local layer e2
+16:29:34 router >> irm i c n n1.router type unicast
+16:29:34 server >> irm i b n e2.server type local layer e2
+16:29:34 server >> irm i c n n1.server type unicast
+16:29:34 Enrolling IPCPs...
+16:29:34 client1 >> irm n r n1.client1 ipcp e1.client1
+16:29:34 client1 >> irm n r n1 ipcp e1.client1
+16:29:34 router >> irm n r n1.router ipcp e1.router ipcp e2.router ipcp e3.router
+16:29:34 router >> irm i e n n1.router layer n1 autobind
+16:29:34 router >> irm n r n1 ipcp e1.router ipcp e2.router ipcp e3.router
+16:29:34 client2 >> irm n r n1.client2 ipcp e3.client2
+16:29:34 client2 >> irm i e n n1.client2 layer n1 autobind
+16:29:34 client2 >> irm n r n1 ipcp e3.client2
+16:29:34 server >> irm n r n1.server ipcp e2.server
+16:29:34 server >> irm i e n n1.server layer n1 autobind
+16:29:34 server >> irm n r n1 ipcp e2.server
+16:29:34 router >> irm i conn n n1.router dst n1.client1
+16:29:34 client2 >> irm i conn n n1.client2 dst n1.router
+16:29:34 server >> irm i conn n n1.server dst n1.router
+16:29:34 All done, have fun!
+16:29:34 Bootstrap took 6.05 seconds
+```
+
+First, the prototype is started if it is not already running:
+
+```Python
+16:29:28 Starting IRMd on all nodes...
+[sudo] password for dstaesse:
+16:29:32 Started IRMd, sleeping 2 seconds...
+```
+
+Since starting the IRMd requires root privileges, Rumba will ask for
+your password.
+
+Next, Rumba will create the IPCPs on each node. I will go more
+in-depth for client1 and client2 as they bring up some interesting
+points:
+
+```Python
+16:29:34 Creating IPCPs
+16:29:34 client1 >> irm i b n e1.client1 type local layer e1
+16:29:34 client1 >> irm i b n n1.client1 type unicast layer n1 autobind
+16:29:34 client2 >> irm i b n e3.client2 type local layer e3
+16:29:34 client2 >> irm i c n n1.client2 type unicast
+```
+
+First of all, there are two different kinds of commands: the
+**bootstrap** commands starting with ``` irm i b ``` and the
+**create** commands starting with ```irm i c```. If you know the CLI a
+bit (you can find out more using ```man ouroboros``` from the command
+line when Ouroboros is installed), you already know that these are
+shorthand for
+
+```
+irm ipcp bootstrap
+irm ipcp create
+```
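+
+So, for example, the first two commands above for client1 presumably
+expand to the following (assuming ```n``` is short for ```name```, as
+it is in the ```irm name``` commands shown further down):
+
+```
+# assumed expansion of the shorthand above
+irm ipcp bootstrap name e1.client1 type local layer e1
+irm ipcp bootstrap name n1.client1 type unicast layer n1 autobind
+```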
+
+If the IPCP doesn't exist, the ```irm ipcp bootstrap``` call will
+automatically create an IPCP first behind the scenes using an ```irm
+ipcp create``` call, so this is nothing but a bit of shorthand.
+Rumba will create the IPCPs that will enroll, and immediately
+bootstrap those that won't. The Ethernet IPCPs are simple: they are
+always bootstrapped and cannot be _enrolled_ as the configuration is
+manual and may involve Ethernet switches; Ethernet IPCPs do not
+support the ```irm ipcp enroll``` method. For the unicast IPCPs that
+make up the _n1_ layer, the situation is different. As mentioned
+above, the first IPCP in that layer ("n1.client1") is bootstrapped, and
+then other members of the layer are enrolled to extend that layer. So
+if you turn your attention back to the full listing of the steps
+executed by the bootstrap() procedure in Rumba, you will now see that
+there are only 3 IPCPs that are created using ```irm i c```: those 3
+that are selected for enrollment, which is the next step.
+
+Here Ouroboros deviates quite a bit from RINA, as what RINA calls
+enrollment is actually split into 3 different phases in Ouroboros. But
+as Rumba was intended to work with RINA (a requirement for the ARCFIRE
+project at hand) this is a single "step" in Rumba. In RINA, the DIF
+registrations are initiated by the IPCPs themselves, which means
+making APIs and what not to feed all this information to the IPCPs and
+let them execute this. Ouroboros, on the other hand, keeps things lean
+by moving registration operations into the hands of the network
+manager (or network management system). The IPCP processes can be
+registered and unregistered as clients for lower layers at will
+without any need to touch them. Let's have a look at the commands, of
+which there are 3:
+
+```
+irm n r # shorthand for irm name register
+irm i e # shorthand for irm ipcp enroll
+irm i conn # shorthand for irm ipcp connect
+```
+
+Rumba will need to make sure that the _n1_ IPCPs are known in the
+(Ethernet) layer below, and that they are operational before another
+_n1_ IPCP tries to enroll with them. There are some interesting things to note:
+
+First, looking at the "n1.client1" IPCP, it is registered with the e1
+layer twice (I reformatted the commands for clarity):
+
+```
+16:29:34 client1 >> irm n r n1.client1 ipcp e1.client1
+16:29:34 client1 >> irm n r n1 ipcp e1.client1
+```
+
+Once under the "n1.client1" name (which is the name of the IPCP) and
+once under the more general "n1" name, which is actually the name of
+the layer.
+
+In addition, if we scout out the _n1_ name registrations, we see that
+it is registered in all Ethernet layers (reformatted for clarity) and
+on all machines:
+
+```
+16:29:34 client1 >> irm n r n1 ipcp e1.client1
+16:29:34 router >> irm n r n1 ipcp e1.router ipcp e2.router ipcp e3.router
+16:29:34 client2 >> irm n r n1 ipcp e3.client2
+16:29:34 server >> irm n r n1 ipcp e2.server
+```
+
+This is actually Ouroboros anycast at work, and this allows us to make
+the enrollment commands for the IPCPs really simple (reformatted for
+clarity):
+
+
+```
+16:29:34 router >> irm i e n n1.router layer n1 autobind
+16:29:34 client2 >> irm i e n n1.client2 layer n1 autobind
+16:29:34 server >> irm i e n n1.server layer n1 autobind
+```
+
+By using an anycast name (equal to the layer name) for each IPCP in
+the _n1_ layer, we can just tell an IPCP to "enroll in the layer" and
+it will enroll with any IPCP in that layer. This simplifies things for
+human administrators, who don't need to know the names of reachable
+IPCPs in the layer they want to enroll with (although, of course, Rumba does
+have this information from the experiment definition and we could have
+specified a specific IPCP just as easily). If the enrollment with the
+destination layer fails, it means that none of the members of that
+layer are reachable.
+
+The "autobind" directive will automatically bind the process to accept
+flows for the ipcp name (e.g. "n1.router") and the layer name
+(e.g. "n1").
+
+The last series of commands are the
+
+```
+irm ipcp connect
+```
+
+commands. Ouroboros splits the topology definition (forwarding
+adjacencies in IETF speak) from enrollment. So after an IPCP is
+enrolled with the layer and knows the basic information to operate as
+a peer router, it will break all connections and wait for a specific
+adjacency to be made for data transfer and for management. The command
+above just creates them both in parallel. We may create a shorthand to
+create these connections with the IPCP that was used for enrollment.
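+
+Written out in full, the first of the connect commands in the listing
+above presumably reads (again assuming ```n``` expands to ```name```):
+
+```
+# assumed full form of: irm i conn n n1.router dst n1.client1
+irm ipcp connect name n1.router dst n1.client1
+```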
+
+Let's ping the server from client1 using the Rumba storyboard.
+
+```Python
+>>> from rumba.storyboard import *
+>>> sb = StoryBoard(experiment=exp, duration=1500, servers=[])
+>>> sb.run_command("server",
+ 'irm bind prog oping name oping_server;'
+ 'irm name register oping_server layer n1;'
+ 'oping --listen > /dev/null 2>&1 &')
+18:04:33 server >> irm bind prog oping name oping_server;
+ irm name register oping_server layer n1;
+ oping --listen > /dev/null 2>&1 &
+>>> sb.run_command("client1", "oping -n oping_server -i 10ms -c 100")
+18:05:26 client1 >> oping -n oping_server -i 10ms -c 100
+```
+
+### The same experiment on jFed
+
+The ```exp.swap_in()``` and ```exp.install_prototype()``` calls will
+reserve and boot the servers on the testbed and install the prototype
+on each of them. Let's just focus on the prototype itself and see if you can
+spot the differences (and the similarities!) between the (somewhat
+cleaned up) output for running the exact same bootstrap command as
+above using physical servers on the jFed virtual wall testbed compared
+to the test on a local machine.
+
+
+```Python
+>>> exp.bootstrap_prototype()
+18:26:15 Starting IRMd on all nodes...
+18:26:15 n078-05 >> sudo nohup irmd > /dev/null &
+18:26:15 n078-09 >> sudo nohup irmd > /dev/null &
+18:26:15 n078-03 >> sudo nohup irmd > /dev/null &
+18:26:15 n078-07 >> sudo nohup irmd > /dev/null &
+18:26:16 Creating IPCPs
+18:26:16 n078-05 >> irm i b n e1.client1 type eth-dix dev enp9s0f0 layer e1
+18:26:16 n078-05 >> irm i b n n1.client1 type unicast layer n1 autobind
+18:26:17 n078-09 >> irm i b n e3.client2 type eth-dix dev enp9s0f0 layer e3
+18:26:17 n078-09 >> irm i c n n1.client2 type unicast
+18:26:17 n078-03 >> irm i b n e3.router type eth-dix dev enp8s0f1 layer e3
+18:26:17 n078-03 >> irm i b n e1.router type eth-dix dev enp0s9 layer e1
+18:26:17 n078-03 >> irm i b n e2.router type eth-dix dev enp9s0f0 layer e2
+18:26:17 n078-03 >> irm i c n n1.router type unicast
+18:26:17 n078-07 >> irm i b n e2.server type eth-dix dev enp9s0f0 layer e2
+18:26:17 n078-07 >> irm i c n n1.server type unicast
+18:26:17 Enrolling IPCPs...
+18:26:17 n078-05 >> irm n r n1.client1 ipcp e1.client1
+18:26:17 n078-05 >> irm n r n1 ipcp e1.client1
+18:26:18 n078-03 >> irm n r n1.router ipcp e1.router ipcp e2.router ipcp e3.router
+18:26:18 n078-03 >> irm i e n n1.router layer n1 autobind
+18:26:20 n078-03 >> irm n r n1 ipcp e1.router ipcp e2.router ipcp e3.router
+18:26:20 n078-09 >> irm n r n1.client2 ipcp e3.client2
+18:26:20 n078-09 >> irm i e n n1.client2 layer n1 autobind
+18:26:20 n078-09 >> irm n r n1 ipcp e3.client2
+18:26:20 n078-07 >> irm n r n1.server ipcp e2.server
+18:26:20 n078-07 >> irm i e n n1.server layer n1 autobind
+18:26:20 n078-07 >> irm n r n1 ipcp e2.server
+18:26:20 n078-03 >> irm i conn n n1.router dst n1.client1
+18:26:24 n078-09 >> irm i conn n n1.client2 dst n1.router
+18:26:25 n078-07 >> irm i conn n n1.server dst n1.router
+18:26:25 All done, have fun!
+18:26:25 Bootstrap took 9.57 seconds
+```
+
+Anyone who has been configuring distributed services in datacenter and
+ISP networks should be able to appreciate the potential of the
+abstractions provided by the Ouroboros model to make the life of a
+network administrator more enjoyable.
+
+
+[^1]: I only support Ouroboros; it may not work anymore with rlite and
+ IRATI.
+
+[^2]: Hmm, why didn't I think of using _O7s_ as a shorthand for
+ Ouroboros before... \ No newline at end of file
diff --git a/content/en/docs/Tools/rumba_example.py b/content/en/docs/Tools/rumba_example.py
new file mode 100644
index 0000000..fc132b6
--- /dev/null
+++ b/content/en/docs/Tools/rumba_example.py
@@ -0,0 +1,41 @@
+from rumba.model import Node, NormalDIF, ShimEthDIF
+
+# import testbed plugins
+import rumba.testbeds.jfed as jfed
+import rumba.testbeds.local as local
+
+# import Ouroboros prototype plugin
+import rumba.prototypes.ouroboros as our
+
+__all__ = ["exp", "nodes"]
+
+n1 = NormalDIF("n1")
+e1 = ShimEthDIF("e1")
+e2 = ShimEthDIF("e2")
+e3 = ShimEthDIF("e3")
+
+clientNode1 = Node("client1",
+ difs=[e1, n1],
+ dif_registrations={n1: [e1]})
+
+clientNode2 = Node("client2",
+ difs=[e3, n1],
+ dif_registrations={n1: [e3]})
+
+routerNode = Node("router",
+ difs=[e1, e2, e3, n1],
+ dif_registrations={n1: [e1, e2, e3]})
+
+serverNode = Node("server",
+ difs=[e2, n1],
+ dif_registrations={n1: [e2]})
+
+nodes = ["client1", "client2", "router", "server"]
+
+tb = local.Testbed()
+
+exp = our.Experiment(tb,
+                     nodes=[clientNode1,
+                            clientNode2,
+                            routerNode,
+                            serverNode])
diff --git a/content/en/docs/Tutorials/tutorial-1.md b/content/en/docs/Tutorials/tutorial-1.md
index d3d24c0..2e98809 100644
--- a/content/en/docs/Tutorials/tutorial-1.md
+++ b/content/en/docs/Tutorials/tutorial-1.md
@@ -13,8 +13,12 @@ This tutorial runs through the basics of Ouroboros. Here, we will see
the general use of two core components of Ouroboros, the IPC Resource
Manager daemon (IRMd) and an IPC Process (IPCP).
-{{<figure width="50%" src="/docs/tutorials/ouroboros_tut1_overview.png">}}
+It is recommended to use a Debug build for this tutorial to show extra
+IRMd output. To do this, compile with the CMAKE_BUILD_TYPE set to
+"Debug". For a full list of build options and how to activate them, see
+[here](/docs/reference/compopt/).
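+
+As a sketch, assuming the usual out-of-source CMake build described in
+the install instructions, this could look like:
+
+```bash
+# sketch: run from an out-of-source build directory
+cmake -DCMAKE_BUILD_TYPE=Debug ..
+make && sudo make install
+```
+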
+{{<figure width="50%" src="/docs/tutorials/ouroboros_tut1_overview.png">}}
We will start the IRMd, create a local IPCP, start a ping server and
connect a client. This will involve **binding (1)** that server to a
@@ -65,13 +69,21 @@ $ oping --listen
Ouroboros ping server started.
```
-The IRMd will notice that an oping server with pid 10539 has started:
+The IRMd will notice that an oping server has started. In our case it has pid 2337, but this will be different on your system:
```bash
-==02301== irmd(DB): New instance (10539) of oping added.
+==02301== irmd(DB): New instance (2337) of oping added.
==02301== irmd(DB): This process accepts flows for:
```
+If you are not running a debug build, you won't see this output and will
+have to look for the PID of the process using a Linux command such as ```ps```.
+
+```
+$ ps ax | grep oping
+ 2337 pts/4 Sl+ 0:00 oping --listen
+```
+
The server application is not yet reachable by clients. Next we will
bind the server to a name and register that name in the
"local_layer". The name for the server can be chosen at will, let's
diff --git a/content/en/docs/Tutorials/tutorial-2.md b/content/en/docs/Tutorials/tutorial-2.md
index 5f52a5a..f043442 100644
--- a/content/en/docs/Tutorials/tutorial-2.md
+++ b/content/en/docs/Tutorials/tutorial-2.md
@@ -246,7 +246,7 @@ Our oping server is not registered yet in the normal layer. Let's
register it in the normal layer as well, and connect the client:
```bash
-$ irm r n oping_server layer normal_layer
+$ irm n r oping_server layer normal_layer
$ oping -n oping_server -c 5
```
diff --git a/content/en/docs/_index.md b/content/en/docs/_index.md
index 587d2af..8bed85f 100755
--- a/content/en/docs/_index.md
+++ b/content/en/docs/_index.md
@@ -8,6 +8,4 @@ menu:
weight: 20
---
-{{% pageinfo %}}
-Table of Contents.
-{{% /pageinfo %}}
+We are [moving the documentation to a wiki](https://ouroboros.rocks/wiki). These pages are left here for the time being, but will be deprecated.