
Reference#

Detailed authoritative reference material such as configuration options, command-line options, API parameters and development guides.

Table of Contents

Connect API Reference

Write your own component

Connect API Reference#

The broker is the central piece of an Unryo-ready infrastructure. Its role is to centralize the information of all components that need to interconnect with other components.

An Unryo-ready component pings the broker with its characteristics (its role and connection details) and receives a response from the broker with the details to communicate with the relevant symmetrical component(s).

This document explains the parameter names, relationship names, and protocols used for each unryo-ready application. With the knowledge of these parameters, you can then easily integrate apps within your network of components.

Roles#

An unryo-ready component is usually a node and can have one or more of the following roles:

  • Producer: A producer produces data of a particular kind and in a particular format. A producer's role is to push this data to an endpoint of a particular type, typically a receiver's. Example: a producer makes a POST or a PUT to a receiver.
  • Receiver: A receiver receives data of a particular kind and in a particular format. A receiver's role is to expose an endpoint of a particular type, typically used by a producer. Example: a receiver exposes the POST and PUT endpoints that producers call.
  • Supplier: A supplier makes data of a particular kind available to other components in a particular format. A supplier's role is to expose an endpoint of a particular type, typically used by a consumer. Example: a supplier exposes a GET endpoint that consumers call.
  • Consumer: A consumer fetches data of a particular kind and in a particular format. A consumer's role is to fetch data from an endpoint of a particular kind, typically from a supplier. Example: a consumer makes a GET request to a supplier.
  • Service: The service role is a special type of role that does not search the registry for symmetrical peers, but triggers a special behavior depending on the service. The computed value of the service is returned in the parameters of a node identified with the id of the originating node. Example: see the service role usage section below.

Relationship symmetry#

  • Except for the service role, each role has a symmetric relation with another.
  • Producer is symmetric to Receiver and vice versa.
  • Supplier is symmetric to Consumer and vice versa.
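This symmetry is what drives peer matching: two role declarations pair up when their relation and protocol agree and their roles are counterparts. Here is a minimal sketch of that rule (hypothetical helper names, not the broker's actual code):

```python
# Hypothetical sketch of role matching based on the symmetry rules above;
# not the actual broker implementation.
SYMMETRIC = {
    "producer": "receiver",
    "receiver": "producer",
    "supplier": "consumer",
    "consumer": "supplier",
}

def symmetric_role(role):
    """Return the counterpart role, or None for 'service' roles,
    which are resolved by the broker itself rather than matched."""
    return SYMMETRIC.get(role)

def matches(a, b):
    """Two role declarations are peers when their relation and protocol
    agree and their roles are symmetric."""
    return (a["relation"] == b["relation"]
            and a["protocol"] == b["protocol"]
            and SYMMETRIC.get(a["role"]) == b["role"])
```

For instance, a role with relation collect and protocol influxdb declared by a producer pairs with the InfluxDB receiver role shown later in this document.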

Service role usage#

JWT Sign#

Protocol: jwt

This service returns a JWT token which can be used to securely connect to any peer configured with the JWT Auth service (next section). The result is encrypted and returned in JWE format, using a public key passed in the parameters in JWKS format, and then decrypted by the corresponding private key.

Optional: You can set the expiration duration of the JWT token. The default value is 24h, and the minimum value is 1h.
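The n and e members of a JWK are base64url-encoded big-endian integers (per RFC 7517/7518). Here is a minimal stdlib sketch of building the jwks parameter from an RSA public key's numbers; the key values and helper names are illustrative, not part of the Connect API:

```python
import base64

def b64url_uint(n):
    """Base64url-encode an unsigned integer, big-endian, unpadded (RFC 7518)."""
    data = n.to_bytes((n.bit_length() + 7) // 8 or 1, "big")
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def rsa_jwks(n, e, kid):
    """Build the 'jwks' parameter of the JWT Sign ping from RSA public numbers."""
    return {"keys": [{"kty": "RSA", "kid": kid,
                      "n": b64url_uint(n), "e": b64url_uint(e)}]}

# The common RSA exponent 65537 encodes to the familiar JWK value "AQAB".
# The modulus here is a toy value for illustration only.
jwks = rsa_jwks(n=0xC0FFEE, e=65537, kid="keyid")
```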

Here is a ping body example:

{
   "node":{
      "id":<node_id>,
      "namespace":<node_namespace>,
      "roles":[
         {
            "label":"auth",
            "relation":"auth",
            "role":"service",
            "protocol":"jwt",
            "support":[
               "1.0"
            ],
            "parameters":{
               "jwks": {
                  "keys":[
                     {
                        "kty": "...",
                        "kid": "keyid",
                        "n": "...",
                        "e": "..."
                     }
                  ]
               },
               "expInSeconds": 100000
            }
         }
      ]
   }
}

Returned object:

{
   "nodes": {
      "auth": [
         {
            "id": <node_id>,
            "namespace": <node_namespace>,
            "roles": [
               {
                  "label": "auth",
                  "relation": "auth",
                  "protocol": "jwt",
                  "role": "service",
                  "support": [
                     "1.0"
                  ],
                  "parameters": {
                     "jwt": <jwe format>
                  }
               }
            ]
         }
      ]
   },
   "expiration": 300000
}

JWT Auth#

Protocol: jwtauth

This service returns the information needed to validate a JWT generated by the JWT Sign service. This includes:

  • Valid public keys in JWKS format, used to validate the token signature.
  • Claims validation data:
    • The valid issuer, defined by the broker URL.
    • The valid audience of the token, defined by the customer id and namespace.
    • The valid scopes of the token, defined by all roles, protocols, and relations. Scopes are grouped by label, which is useful for giving them certain properties, e.g. read/write.

No parameter is needed for this service.

Here is a ping body example:

{
   "node":{  
      "id":<node_id>,
      "namespace":<node_namespace>,
      "roles":[  
         {  
            "label":"write",
            "relation":"relation",
            "role":"receiver",
            "protocol":"protocol",
            "support":[  
               "1.0"
            ],
            "parameters":{}
         },
         {  
            "label":"read",
            "relation":"relation",
            "role":"supplier",
            "protocol":"protocol",
            "support":[  
               "1.0"
            ],
            "parameters":{}
         },
         {  
            "label":"auth",
            "relation":"auth",
            "role":"service",
            "protocol":"jwtauth",
            "support":[  
               "1.0"
            ],
            "parameters":{}
         }
      ]
   }
}

Returned object:

{
   "nodes": {
      "auth": [
         {
            "id": <node_id>,
            "namespace": <node_namespace>,
            "roles": [
               {
                  "label": "auth",
                  "relation": "auth",
                  "protocol": "jwt",
                  "role": "service",
                  "support": [
                     "1.0"
                  ],
                  "parameters": {
                     "jwks": {"keys": [...]},
                     "aud": "<customer_id>|<node_namespace>",
                     "scope": {
                        "write": [
                           "receiver|relation|protocol"
                        ],
                        "read": [
                           "supplier|relation|protocol"
                        ]
                     },
                     "iss": "https://broker-url.com"
                  }
               }
            ]
         }
      ]
   },
   "expiration": 300000
}
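A component that protects an endpoint can use these parameters to validate incoming tokens. The sketch below decodes only the claims segment; real signature verification requires a JOSE library together with the returned jwks, and the space-separated scope claim format is an assumption borrowed from OAuth conventions, not something this document confirms:

```python
import base64
import json

def jwt_claims(token):
    """Decode the claims segment of a JWT (no signature verification)."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64url padding
    return json.loads(base64.urlsafe_b64decode(payload))

def claims_valid(claims, params):
    """Check iss, aud and scope against the jwtauth parameters above.
    Assumes a space-separated scope claim (OAuth-style) - an assumption."""
    allowed = {s for scopes in params["scope"].values() for s in scopes}
    return (claims["iss"] == params["iss"]
            and claims["aud"] == params["aud"]
            and all(s in allowed for s in claims.get("scope", "").split()))
```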

Parameter naming conventions#

The parameters field of the ping generally gives the details to access the component (usually a supplier or a receiver).

The names of the parameters should, as much as possible, be close to what is easily usable by the symmetrical components.

For example, a typical InfluxDB configuration for an HTTP endpoint breaks down as follows:

[http] 
  enabled = true 
  bind-address = ":8086" 
  https-enabled = true 

The parameter will be "url": "http://influxdb.unryo.com:8086", which can then be interpreted as-is by the producer Telegraf and the consumer Grafana.
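A sketch of what "interpreted as-is" means in practice: the consuming side can split the broker-delivered url straight into the fields its client library expects (values taken from the example above):

```python
from urllib.parse import urlsplit

# The broker-delivered parameter from the InfluxDB example above.
url = "http://influxdb.unryo.com:8086"
parts = urlsplit(url)
scheme, host, port = parts.scheme, parts.hostname, parts.port
```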

DNS names#

To ease interconnection between components, it is advisable to use DNS names that resolve from the networks from which the components need to connect.

Ping details#

This document aims to help every Unryo customer or developer understand the relationships between an Unryo-ready component (device or application), the broker itself, and the potential symmetrical applications.

In the following list, we give the standard ping envelope for each described application. The given parameters aim to be standard and won't change across Unryo updates.

Also, here is a link to a bash file that represents some curl HTTP calls to test the broker.

Note: The ID of each node needs to be generated in some way; the generation scheme is still to be defined.
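Since the ID scheme is still open, one possible approach (an illustration only, not a defined Unryo convention) is a name-based UUID derived from stable inputs, so that a restarted component re-registers under the same ID:

```python
import uuid

def node_id(namespace, service, hostname):
    """Derive a deterministic node ID: the same inputs always give the same ID.
    Illustrative scheme only; the official generation method is undefined."""
    return str(uuid.uuid5(uuid.NAMESPACE_DNS, f"{namespace}/{service}/{hostname}"))
```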

Grafana#

Role: Consumer

Grafana is our UI module for metrics reporting and must be paired with an InfluxDB instance. Other data sources will be added eventually. To connect to InfluxDB, use the JWT Sign service. Here is a ping body example:

{
   "node":{  
      "id":<node_id>,
      "namespace":<node_namespace>,
      "roles":[  
         {  
            "label":"influxdb",
            "relation":"collect",
            "role":"producer",
            "protocol":"influxdb",
            "support":[  
               "1.0"
            ],
            "parameters":{  

            }
         },
         {  
            "label":"auth",
            "relation":"auth",
            "role":"service",
            "protocol":"jwt",
            "support":[  
               "1.0"
            ],
            "parameters":{
               "jwks": {"keys":[...]}
            }
         }
      ]
   }
}

Parameters:

  • url: URL of Grafana itself, used by the Unryo UI to generate a link.

Note: As a Consumer, Grafana doesn't need to share credentials for access. The URL is used by the Unryo UI to link to it.

Telegraf#

Role: Producer

Telegraf is a data collector that pushes metrics into an InfluxDB instance associated with it. Telegraf has multiple plugins to collect from different technologies, process the data, and push the output to several other applications. To connect to InfluxDB, use the JWT Sign service.

Here is a ping body example:

{
   "node":{  
      "id":<node_id>,
      "namespace":<node_namespace>,
      "roles":[  
         {  
            "label":"influxdb",
            "relation":"collect",
            "role":"producer",
            "protocol":"influxdb",
            "support":[  
               "1.0"
            ],
            "parameters":{  

            }
         },
         {  
            "label":"auth",
            "relation":"auth",
            "role":"service",
            "protocol":"jwt",
            "support":[  
               "1.0"
            ],
            "parameters":{
               "jwks": {"keys":[...]}
            }
         }
      ]
   }
}

Parameters: Besides the JWT service, Telegraf has no parameters because, as a Producer, it doesn't need to expose any connection information of its own to the broker; the broker supplies the InfluxDB details.

InfluxDB#

Role: Receiver / Supplier

This example shows a node with several roles. InfluxDB is a metric database that stores data coming from one or more Telegraf instances (Receiver), and it is also the data source for the Grafana UI (Supplier). To give access to Telegraf and Grafana, use the JWT Auth service.

Here is a ping body example:

{
   "node":{  
      "id":<node_id>,
      "namespace":<node_namespace>,
      "roles":[  
         {  
            "label":"writer",
            "relation":"collect",
            "role":"receiver",
            "protocol":"influxdb",
            "support":[  
               "1.0"
            ],
            "parameters":{  
               "url":"http://127.0.0.1:8085",
               "auth-enabled":"true",
               "https-enabled":"false",
               "username":"username"
            }
         },
         {  
            "label":"reader",
            "relation":"report",
            "role":"supplier",
            "protocol":"influxdb",
            "support":[  
               "1.0"
            ],
            "parameters":{  
               "url":"http://127.0.0.1:8085",
               "auth-enabled":"true",
               "https-enabled":"false",
               "username":"username"
            }
         },
         {  
            "label":"auth",
            "relation":"auth",
            "role":"service",
            "protocol":"jwtauth",
            "support":[  
               "1.0"
            ],
            "parameters":{}
         }
      ]
   }
}

Kapacitor#

Role: Consumer / Supplier

Kapacitor is a native data processing engine that works with InfluxDB (its Supplier). Kapacitor (acting as a Consumer) can process both stream and batch data from InfluxDB, acting on this data in real-time via scripts written in the TICKscript programming language. Kapacitor acts as a Supplier for Telegraf and any other node that needs to interact with it by providing its HTTP API and monitoring URL to these other nodes.

Here is a ping body example:

{
    "node": { 
        "id": <node_id>,
        "roles": [
            {
                "label": "influxdb",
                "relation": "analytic",
                "role": "consumer",
                "protocol": "influxdb",
                "support": ["1.0","{{.UNRYO_MESH_SUPPORT}}"],
                "parameters": {
                    "url": "https://{{.UNRYO_HOSTNAME}}:{{.UNRYO_PORT}}"
                }
            },
            {
                "label": "kapacitor",
                "relation": "manage",
                "role": "supplier",
                "protocol": "kapacitor",
                "support": ["1.0","{{.UNRYO_MESH_SUPPORT}}"],
                "parameters": {
                    "url": "https://{{.UNRYO_HOSTNAME}}:{{.UNRYO_PORT}}"
                }
            },
            {
                "label": "jwt",
                "relation": "authentication",
                "role": "service",
                "protocol": "jwt",
                "support": ["1.0"],
                "parameters": {
                    "jwks": {{getJWKS}}
                }
            },
            {
                "label": "tls",
                "relation": "authentication",
                "role": "service",
                "protocol": "tlsauth",
                "support": ["1.0"],
                "parameters": {}
            },
            {
                "label": "auth",
                "relation": "authentication",
                "role":"service",
                "protocol":"jwtauth",
                "support": ["1.0"],
                "parameters":{}
            },
            {
                "label": "auth",
                "relation": "authentication",
                "role":"service",
                "protocol":"tls",
                "support": ["1.0"],
                "parameters": {
                    "pem": {{getPubKey}}
                }
            }
        ] 
    },
    "metadata": {
        "displayName": <node_display_name>,
        "softwareVersion": <node_software_version>,
        "version": <node_version>
    },
    "health": <node_health>
}

Parameters:

  • url: DNS address of the database.
  • auth-enabled: Enable or disable authentication on the InfluxDB API.
  • https-enabled: Enable or disable HTTPS.
  • username: Username for the InfluxDB entity.

Write your own component#

Looking to interconnect your own app(s) securely with the Unryo platform? This guide is a step-by-step explanation of how to write your own Unryo component. Each project is different, and some cases will need more customization; this guide covers the simplest case, where you don't have to fork the project and a Docker image of it already exists. We will take Kibana as the example component.

Step 1: Dockerfile#

The first step is to create our Dockerfile for the component. It combines the vanilla Docker image for the component with the Unryo confd image used to communicate with the Unryo broker.

Create file Dockerfile at the root of your project.

Here is a simple example for kibana:

## We need to pass the confd image as an argument when we build the image
## Could be replaced for testing purposes by:
## FROM unryo/confd:latest
ARG CONFD_IMAGE
FROM $CONFD_IMAGE

FROM docker.elastic.co/kibana/kibana:6.4.3
ENV SERVICE_USER=kibana

USER root
## This sets up confd
COPY --from=0 /unryo_build /

## Add right user as parameter to this script
## Not necessary if run as root
RUN /scripts/build-runit.sh $SERVICE_USER

COPY /build /

USER $SERVICE_USER
ENTRYPOINT ["/scripts/unryo-entrypoint.sh"]

There are a few things we need to change here:

  • Change FROM docker.elastic.co/kibana/kibana:6.4.3 to your new service image.
  • Change ENV SERVICE_USER=kibana to the right user for your image.

Step 2: Environment variables#

The next step is to create a bash file that sets all of our local environment variables. That script handles Unryo-specific variables and leaves the other environment variables as-is. It must not alter the behavior of the component's vanilla Docker image.

Create file /build/scripts/local-vars.sh inside your project. Make it executable with chmod +x /build/scripts/local-vars.sh.

Here is an example for kibana:

#!/bin/bash

export UNRYO_SERVICE_ENTRYPOINT="/usr/local/bin/kibana-docker"
export UNRYO_SERVICE_VOLUME="/usr/share/kibana"
export UNRYO_SERVICE_NAME="Kibana"

if [ -z "$UNRYO_PORT" ]; then
  export UNRYO_PORT="9200"
fi

## Set your relation here.
if [ -z "$UNRYO_CONSUMER_RELATION" ]; then
  export UNRYO_CONSUMER_RELATION="report"
fi

There are a few things we need to change here:

  • UNRYO_SERVICE_ENTRYPOINT is the entrypoint of the original container. Change it to the right value.
  • UNRYO_SERVICE_VOLUME is the volume we are going to use to save the state of the container. Change it to something that makes sense for your project.
  • UNRYO_SERVICE_NAME will be used to generate the default ID of the node.

The rest of the script is specific to the service that you are dealing with. In this example, we set a default port and a default consumer relation that can be overwritten when the container is launched. This information is needed for the ping to the broker. Other projects might need other variables.

Here is an example of all the default relationships:

# For a Kibana or Grafana
if [ -z "$UNRYO_CONSUMER_RELATION" ]; then
  export UNRYO_CONSUMER_RELATION="report"
fi

# For a Logstash or Telegraf
if [ -z "$UNRYO_PRODUCER_RELATION" ]; then
  export UNRYO_PRODUCER_RELATION="collect"
fi

# For an InfluxDB or Elasticsearch
if [ -z "$UNRYO_SUPPLIER_RELATION" ]; then
  export UNRYO_SUPPLIER_RELATION="report"
fi

# For an InfluxDB or Elasticsearch
if [ -z "$UNRYO_RECEIVER_RELATION" ]; then
  export UNRYO_RECEIVER_RELATION="collect"
fi

Step 3: Building the Confd folder structure#

In the docker file, we already installed confd, but we need to configure it before we can use it.

  • Create folders:
    • /build/etc/confd/
    • /build/etc/confd/conf.d
    • /build/etc/confd/templates
  • Create files:
    • /build/etc/confd/ping.json
    • /build/etc/confd/conf.d/service.toml
    • /build/etc/confd/templates/service.yml.tmpl
Confd is used for 2 things:

  • Ping the broker to register the service with the ping.json data.
  • Parse the JSON result back to configure the service.

Step 4: Configure /build/etc/confd/ping.json#

This is where we set up the ping sent to the broker.

Here is an example from Kibana:

{
    "node": { 
        "id": "{{.UNRYO_NODE_ID}}", 
        "roles": [ 
            { 
                "label": "elasticsearch", 
                "relation": "{{.UNRYO_CONSUMER_RELATION}}", 
                "role": "consumer", 
                "protocol": "elasticsearch", 
                "support": ["1.0"], 
                "parameters": { 
                    "$url":"http://{{.UNRYO_HOSTNAME}}:{{.UNRYO_PORT}}"
                } 
            } 
        ] 
    } 
}

Do not use this example as-is. Update the roles and parameters to represent your service.

Confd can use any environment variable exported in the container to fill this template: use {{.VARIABLE_NAME}}.

Step 5: Configure /build/etc/confd/conf.d/service.toml#

This is where we configure confd behavior on each ping. It follows the TOML format.

Case where the configuration of the service is dependent on the ping result (i.e. grafana, logstash, kibana, etc.):

Here is an example for kibana:

[template]
mode = "0644"
src = "service.yml.tmpl"
dest = "/usr/share/kibana/config/kibana.yml"
keys = [
  "/nodes/elasticsearch[*]",
  "/nodes/elasticsearch[*]/roles[*]/parameters/url"
]
## check_cmd="TODO"
reload_cmd="sv restart /etc/service/service"

In this file, we set:

  • mode: Rights to the config file.
  • src: the template used to build the config file.
  • dest: the destination of the config file.
  • keys: the data needed from the result of the ping.
  • reload_cmd: the reload command to restart the service.

We only need to update 2 values:

  • dest: Modify to the right config destination.
  • keys: This value is a jsonpath expression that fetches the right data from the ping result. The first key above returns all nodes under the elasticsearch label as an array. The second returns all URLs of all elasticsearch nodes as an array. Those keys will be used in the template.
  • Here is more information on jsonpath: https://goessner.net/articles/JsonPath/
  • Here is a json evaluator: http://jsonpath.com/
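For reference, the second key is equivalent to this plain-Python walk over a ping result (sample data reduced to the relevant fields):

```python
# Plain-Python equivalent of /nodes/elasticsearch[*]/roles[*]/parameters/url
ping_result = {
    "nodes": {
        "elasticsearch": [
            {"roles": [{"parameters": {"url": "http://es1:9200"}}]},
            {"roles": [{"parameters": {"url": "http://es2:9200"}}]},
        ]
    }
}

urls = [role["parameters"]["url"]
        for node in ping_result["nodes"]["elasticsearch"]
        for role in node["roles"]]
```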

Case where the configuration is independent of the ping result (i.e. influxdb, elasticsearch):

In a scenario where the peers do not affect the configuration of our service, this is very simple as Confd is only used to ping the broker.

Here is an example for elasticsearch:

[template]
mode = "0644"
src = "service.conf.tmpl"
dest = "/tmp/empty.txt"
keys = [
  ""
]

You can use this example as-is.

Step 6: Configure template: /build/etc/confd/templates/service.yml.tmpl#

Case where the configuration of the service is dependent on the ping result (i.e. grafana, logstash, kibana, etc.):

This is where the magic happens. Confd will use this template to build the config file needed to run the service. All templates will be unique depending on the service, what it needs, and its format.

This template uses the golang templating functionality. For more details, look here: https://golang.org/pkg/text/template/ and https://blog.gopheracademy.com/advent-2017/using-go-templates/.

Unryo Confd adds some functionality to this templating: https://github.com/kelseyhightower/confd/blob/master/docs/templates.md

The most needed tools are:

  • getv: Fetch the value of a key set in service.toml
  • jsonArray: Make an array out of a json value
  • range: loop through an array
  • jsonMarshal: Show the value fetched as a json value
  • jsonPathLookup: Fetch a value from a json object using json path. Set default value if not found.

Here is a complete example:

{{- $data := jsonArray (getv "/nodes/influxdb[*]" "[]")}} 
{{- range $i, $node := $data}}
[[outputs.influxdb]]
    urls = {{jsonMarshal (jsonPathLookup $node.roles "$.[*].parameters.url")}}
    database = "{{ jsonPathLookup $node.roles "$.[0].parameters.dbname" "telegraf" }}"
    username = "{{ jsonPathLookup $node.roles "$.[0].parameters.username" "" }}"
    password = "{{ jsonPathLookup $node.roles "$.[0].parameters.password" "" }}"
{{- end}}
{{- $data := jsonArray (getv "/nodes/elasticsearch[*]" "[]")}} 
{{- range $i, $node := $data}}
[[outputs.elasticsearch]]
    urls = {{jsonMarshal (jsonPathLookup $node.roles "$.[*].parameters.url")}}
    index_name = "telegraf-%Y.%m.%d"
{{- end}}
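To make the data flow concrete: assuming the ping result contains a single InfluxDB node whose role parameters carry url http://influxdb:8086 and dbname metrics (illustrative values, not from this document), the first block of the template above would render roughly:

```toml
[[outputs.influxdb]]
    urls = ["http://influxdb:8086"]
    database = "metrics"
    username = ""
    password = ""
```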

Your template should take into account that it might not find any peers.

Case where the configuration is independent of the ping result (i.e. influxdb, elasticsearch):

In this scenario, /build/etc/confd/templates/service.yml.tmpl will be an empty file. It seems useless, but without it, confd will shut down.

Step 7: Test your docker image#

On the root of your repository:

  • Build image: docker build -t myservice --build-arg CONFD_IMAGE=unryo/confd:latest .
  • Run container: `docker run --env UNRYO_TOKEN=

Debugging:

  • This should register your new service with the broker. To check that it worked, ping a symmetric node with the same token and namespace, or check the portal if your user is correctly set up. If the service does not show up, update ping.json and restart.
  • If your service has peers, this should update your config file and restart the service correctly. You can always look at the config file in your container to check that it is set up correctly, and modify the template if needed. To check the config file while the container is running:
    • docker ps to get the right container ID
    • docker exec -ti containerID cat /dest/of/config/file