IDS Connector deployment
This guide describes how to deploy an IDS connector used by SCSN Service Providers to expose back-end services to the network.

Requesting an Identity

Identities for the test environment can be requested via the Test DAPS. Both a participant certificate and a component certificate should be requested by creating a Certificate Signing Request. The exact formatting of the certificate content is not fixed for the test environment; for the production environment, guidelines will be made available. Contact Mike de Roode for the approval of the certificates.
Certificate Signing Requests can be created using several different tools, e.g. OpenSSL; make sure the key length is at least 2048 bits. The distinguished name should reflect your participant and connector respectively. However, these certificates are not used for SSL transport encryption but only for authentication, so the Common Name of the certificate is not required to be a Fully Qualified Domain Name.
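For example, a key pair and CSR for the connector certificate can be generated with OpenSSL as follows (the subject values are placeholders; repeat the same steps with participant.key/participant.csr for the participant certificate):

```shell
# Generate a 2048-bit RSA key and a Certificate Signing Request.
# A FQDN is not required in the CN, since the certificate is only
# used for authentication, not for transport encryption.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout connector.key -out connector.csr \
  -subj "/O=Service_Provider_Name/CN=Service_Provider_Connector_Name"

# Inspect and verify the resulting CSR
openssl req -in connector.csr -noout -verify -subject
```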
The Participant ID and Connector ID are IDS identifiers that can be chosen freely for the test environment, but they should be Uniform Resource Identifiers (URIs). This can be either a URL (starting with https://) or a URN (starting with urn:); since these identifiers are only used for identification, the URN scheme is preferred. A common URN prefix of urn:scsn:ids could be used (not registered as an official prefix):
  • Participants: urn:scsn:ids:participants:Service_Provider_Name
  • Connectors: urn:scsn:ids:connectors:Service_Provider_Connector_Name
Interaction with the DAPS should result in having at least the following files:
  • participant.key
  • participant.cert
  • connector.key
  • connector.cert
  • config.yaml (only required for alternative Docker Compose deployment, but useful as reference for Kubernetes deployment)
  • docker-compose.yaml (only required for alternative Docker Compose deployment)

Deploying the Connector

The deployment scenario for the IDS connector uses Kubernetes and Helm. Make sure you are familiar with setting up a Kubernetes cluster and know the purpose of at least the following resource types: Deployment, Pod, StatefulSet, Service, Ingress, Ingress Controller, ConfigMap.
Helm is used as the package manager for Kubernetes, in the form of a Helm chart. This Helm chart uses templates to describe resources and makes sure that the right connections are made between the different resources of the complete deployment. From a deployment perspective, only the values.yaml file must be modified to reflect your configuration.
The Helm chart can be found on the SCSN Gitlab, with comments inside the values.yaml indicating the configuration options for the Helm chart.

Ingress Controller

For Kubernetes Ingresses to work, the cluster must be configured with an existing Ingress Controller (see Ingress Controllers). The ingress controller should be an nginx ingress controller, or another controller that supports the same annotations. Other ingress controllers may work, but can require annotations to be configured manually.
Depending on the cluster, automatic certificate renewal can optionally be supported via LetsEncrypt and cert-manager. For this, the manual for Microsoft Azure can be followed; except for the step for setting up the DNS A record, that manual is provider-independent. Remember the identifier of the ClusterIssuer, since it is needed in the configuration.
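If cert-manager is installed, a minimal HTTP-01 ClusterIssuer could look like the sketch below (the issuer name, email address, and account secret name are placeholders; the chosen name must match the clusterIssuer value used later in the configuration):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt               # referenced as clusterIssuer in values.yaml
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]        # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - http01:
          ingress:
            class: nginx          # matches the nginx ingress controller
```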


The services that will be deployed are the main means of exposing the connector, both to other connectors and to the internal backend systems.
  • service/{{ .Release.Name }}: The /router endpoint of the http (8080) port of this service should be exposed via an Ingress to the outside world. This will be used by other connectors to exchange messages. This service also exposes a metrics (9999) port, which returns metrics of the connector via its root, as well as a core container API on the api (4567) port; this API will be enhanced in the future and documented accordingly.
  • service/{{ .Release.Name }}-openapi-data-app: The /openapi endpoint of this service must be used by your backend system in order to send messages to other connectors. This endpoint acts as a proxy for the messages. Note: this service is not protected; exposing it to the outside world allows everyone to send messages via your connector! Exposing this service can be done in several ways:
    • ClusterIP service, when the backend system also is deployed on the same Kubernetes cluster
    • NodePort service, when the cluster nodes are not directly reachable from outside your network; a NodePort service allows other non-Kubernetes machines to access the service via a specific port on one of the cluster nodes.
    • Ingress, an Ingress can be used with strict access control policies (using authentication or strict network segregation).

Core container service configuration

The core container is intended to be exposed via an ingress, which can be configured in the coreContainer.ingress values. This requires an ingress controller to be available on the Kubernetes cluster. The chart assumes an nginx ingress controller, since the ingress needs to be configurable. Optionally, a cert-manager ClusterIssuer can be used to automatically provision certificates from LetsEncrypt (or other ACME certificate authorities) for the ingress.
The host value is used as the hostname for the ingress, defaulting to the top-level host value.
For the core container, the rewriteTarget must be configured such that the /router endpoint remains reachable, since this endpoint should be publicly accessible. Example configurations are (with the corresponding accessUrl):
path: /(.*)
rewriteTarget: /$1
# accessUrl -> https://HOST/router
path: /connector(/|$)(.*)
rewriteTarget: /$2
# accessUrl -> https://HOST/connector/router
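Put together in values.yaml, the second variant could look like the sketch below (the hostname is a placeholder):

```yaml
coreContainer:
  ingress:
    host: connector.example.com   # placeholder hostname
    path: /connector(/|$)(.*)
    rewriteTarget: /$2
    clusterIssuer: letsencrypt    # optional, requires cert-manager
```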
The ingress configuration is mapped to the http port of the core container. Next to this port, two other ports are exposed as services:
  • api: exposes the core container API, used for the optional user interface of the connector. Will be explained via an OpenAPI description in the future, together with a new user interface
  • metrics: exposes metrics of the core container via a Prometheus style exporter

OpenAPI data app service configuration

For the OpenAPI data app service there is more flexibility; however, it is important that this service is not exposed publicly without strict firewalling or authentication.
By default, the OpenAPI data app service is exposed as a ClusterIP service, which makes the data app available only within the Kubernetes cluster.
A NodePort service can also be configured, which exposes the service on a port above 30000 on the nodes of the cluster. It is important to configure firewall restrictions so that only your own applications can use this port. The following example exposes the data app on port 31000:
- port: 8080
  name: http
  nodePort: 31000
An Ingress can also be used; however, it is advised to add authentication to the ingress (e.g. basic authentication via nginx annotations):
- port: 8080
  name: http
  ingress:
    path: /(.*)
    rewriteTarget: /$1
    clusterIssuer: letsencrypt
    annotations:
      nginx.ingress.kubernetes.io/auth-type: basic
      nginx.ingress.kubernetes.io/auth-secret: basic-auth
      nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - foo'

Distributed tracing

Both the core container and the OpenAPI data app support distributed tracing, respecting X-B3 headers. The trace identifiers are logged to standard out of the containers. The log templates are:
  • Core container (Log4J):
%d{ISO8601} | [%X{traceIdHex}/%X{spanIdHex}] | %-5p | %-32c{1.} | %m%n
  • OpenAPI data app (Logback):
%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}
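For manual testing, a backend can generate its own well-formed B3 identifiers and pass them along with a request, so that the connector logs can be correlated with the backend call (a sketch; the curl target in the comment is illustrative):

```shell
# B3 identifiers are hex encoded: 128-bit trace id, 64-bit span id.
TRACE_ID=$(openssl rand -hex 16)
SPAN_ID=$(openssl rand -hex 8)
# Send them as headers with the request, e.g.:
#   curl -H "X-B3-TraceId: $TRACE_ID" -H "X-B3-SpanId: $SPAN_ID" ...
# and search the container logs for this trace id afterwards.
echo "$TRACE_ID/$SPAN_ID"
```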

Horizontal scaling

Both the core container and the OpenAPI data app can be scaled horizontally, by setting the coreContainer.replicaCount key or the ids.containers[0].replicaCount key respectively. The distribution of the load is handled by Kubernetes.
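For example, scaling both deployments to two replicas could look like this in values.yaml (a sketch; other container fields omitted):

```yaml
coreContainer:
  replicaCount: 2
ids:
  containers:
    - type: data-app
      name: openapi-data-app
      replicaCount: 2
```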

Metrics endpoint

The core container exposes a metrics endpoint with relevant metrics of the processing times and number of messages exchanged. These metrics may be used to identify bottlenecks early, or can be used to support the decision making for horizontal autoscaling of the core container. The important keys that are exposed are: camel_exchanges_failed, camel_exchanges_inflight, camel_exchanges_completed, camel_exchanges_total, camel_last_processing_time, camel_total_processing_time.
Metrics are gathered per route. To make searching easier, the ids.routes[].id key can be used to specify custom route identifiers (NOTE: these identifiers must be unique within the connector). By default the routes are given an identifier based on their order in the ids.routes list, together with the type of the route (e.g. route-0-https-out and route-1-https-in for the default routes in values.yaml).
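For example, custom route identifiers for the two default routes could be set as follows (a sketch; the id values are free to choose but must be unique):

```yaml
ids:
  routes:
    - type: HTTPS-out
      id: route-https-egress
      clearing: false
    - type: HTTPS-in
      id: route-https-ingress
      endpoint: router
      clearing: false
```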
In future releases this functionality will be extended with metrics for the OpenAPI data app, with sample deployments for Prometheus and Grafana (to visualize the metrics), and with the Kubernetes Horizontal Pod Autoscaler in combination with these metrics.


The configuration of the Helm deployment is contained in the values.yaml file. The values.yaml file in the SCSN Gitlab repository contains a basic deployment, with comments to help you further structure your connector. An example of a full values.yaml file can be found in values.test.yaml, which includes ingress configuration, an agent database, and demo applications as used by the TNO Test Buyer. The contents of the values.yaml file are also shown below for easy reference:
pullSecret:
  # -- the name of the pullsecret
  name: pull-secret
  # pullSecret.credentials -- (optional) the credentials to be used to connect to a specific Docker registry, when specified the secret will be created
  # @default -- {}
  credentials:
    # pullSecret.credentials.registry -- the hostname of the Docker registry
    # pullSecret.credentials.username -- the username of the Docker registry
    username: scsn
    # pullSecret.credentials.password -- the password of the Docker registry
    password: tnoidsscsn
# host -- Host url for the connector used in optional ingress specifications
host:
# mongodb -- MongoDB dependency values, read the [docs](
mongodb:
  # mongodb.enabled -- whether you want to deploy a mongodb chart
  enabled: false
  # mongodb.mongodbRootPassword -- Password for the root user, which is used to connect to the MongoDB instance.
  replicaSet:
    # mongodb.replicaSet.enabled -- configure MongoDB as replicaset
    enabled: true
  persistence:
    # mongodb.persistence.size -- size of the persistent volume
    size: 1Gi
deployment:
  # deployment.pullPolicy -- Pull policy for all data apps configured in `values`
  pullPolicy: Always
  # deployment.annotations -- Annotations applied to the core-container and data apps configured in ids.containers
  annotations: {}
coreContainer:
  # coreContainer.replicaCount -- Replicas as specified in the core container deployment
  replicaCount: 1
  # coreContainer.image -- Core Container docker image name
  # coreContainer.ingress -- Ingress configuration for the core container
  # @default -- {}
  # ingress:
  #   # -- Hostname used for this ingress, defaults to the top-level host
  #   # host:
  #   # coreContainer.ingress.path -- External path to be used by the ingress (can contain regular expressions)
  #   path: /(.*)
  #   # coreContainer.ingress.rewriteTarget -- Rewrite rule used for internal messaging, messages must be able to arrive at the /router endpoint
  #   rewriteTarget: /$1
  #   # coreContainer.ingress.clusterIssuer -- Optional clusterIssuer reference for usage in combination with cert-manager
  #   clusterIssuer: letsencrypt
  # coreContainer.securityContext -- Security context for the container
  securityContext:
    runAsUser: 999
    runAsGroup: 999
# services -- Data app service configuration, per default only ClusterIP services are created for port 8080. Custom configuration for ClusterIP, NodePort, or Ingress can be created. See the Readme for more info.
services: {}
#  openapi-data-app:
#    ports:
#      # ClusterIP
#      - port: 8080
#        name: http
#      # NodePort
#      - port: 8080
#        name: http
#        nodePort: 31000
#      # Ingress
#      - port: 8080
#        name: http
#        ingress:
#          path: /(.*)
#          rewriteTarget: /$1
#          clusterIssuer: letsencrypt
#          annotations:
#            nginx.ingress.kubernetes.io/auth-type: basic
#            nginx.ingress.kubernetes.io/auth-secret: basic-auth
#            nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - foo'
ids:
  # ids.logLevel -- Log level for the core container
  logLevel: INFO
  tpm:
    # ids.tpm.createSimulator -- Whether you want to simulate a Trusted Platform Module
    createSimulator: false
  # -- IDS Component ID, as specified in the DAPS component registration
  idsid: &idsid urn:scsn:ids:connectors:...
  # -- Title(s) of the Connector as used in the Broker, parsed as language string
  titles:
    - Connector Title
  # -- Description(s) of the Connector as used in the Broker, parsed as language string
  descriptions:
    - Connector Description
  # -- Access URL used for external connectors to access the connector
  accessUrl: https://HOST/router
  # -- Curator of the Connector, i.e. the participant that is the owner of connector
  curator: &participant urn:scsn:ids:participants:...
  # -- Maintainer of the Connector, i.e. the participant that is technical maintainer
  maintainer: *participant
  # -- Broker configuration
  broker:
    id: urn:ids:connectors:Broker
    autoRegister: true
    profile: FULL
  keystore:
    # ids.keystore.key -- Base64 encoded PKCS#8 component private key
    # ids.keystore.cert -- Base64 encoded PEM component certificate
  truststore:
    # ids.truststore.chain -- IDS Trusted Certificate Authority Chain
    # @default -- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t...
  # ids.containerManager -- Needed for kubernetes deployment
  containerManager:
    type: none
  daps:
    # ids.daps.url -- DAPS url
    # ids.daps.uuid -- UUID as received when registered at a component in the DAPS
    uuid: 00000000-0000-0000-0000-000000000000
  # ids.routes -- Core container internal message router (Apache Camel) configuration
  routes:
    # ids.routes[0] -- Default egress route for the Core Container
    - type: HTTPS-out
      clearing: false
    # ids.routes[1] -- Default ingress route inside the Core Container, used to expose the message router for external use (e.g. as backend for the Kubernetes Ingress)
    - type: HTTPS-in
      router: 'http://{{ template "connector.fullname" $ }}-openapi-data-app:8080/router'
      endpoint: router
      clearing: false
  # ids.containers -- Additional containers (Data Apps or helpers) to deploy
  # @default -- []
  containers:
    # ids.containers[0].type -- Type of the container, either data-app or helper
    - type: data-app
      # ids.containers[0].image -- Docker image used for this container deployment
      # ids.containers[0].name -- Name of the container, will be prefixed in the actual Kubernetes deployment
      name: openapi-data-app
      # ids.containers[0].replicaCount -- Number of replicas
      replicaCount: 1
      # ids.containers[0].securityContext -- Security context for the container
      securityContext:
        runAsUser: 999
        runAsGroup: 999
      # ids.containers[0].id -- IDS Component ID, as specified in the DAPS component registration
      # Should be identical to idsid, and is, therefore, a YAML anchor reference
      id: *idsid
      config:
        # ids.containers[0].config.participant -- Curator of the Connector, i.e. the participant that is the owner of connector, as specified in the DAPS participant registration
        # Should be identical to curator, and is, therefore, a YAML anchor reference
        participant: *participant
        selfDescriptionConfig:
          # ids.containers[0].config.selfDescriptionConfig.type -- Self Description configuration, used for Broker registrations
          type: openapi
        # Custom Properties for the OpenAPI Data App
        customProperties:
          # ids.containers[0].config.customProperties.consumerOnly -- (OpenAPI Data App) Act as consumer only, so no agents have to be specified
          consumerOnly: false
          # ids.containers[0].config.customProperties.openApiBaseUrl -- (OpenAPI Data App) Base URL for the OpenAPI specifications used
          # ids.containers[0].config.customProperties.backEndBaseUrl -- (OpenAPI Data App) (Optional) Global backend URL, agent IDs will be forwarded to {backEndBaseUrl}/{agentId}/{version}
          # backEndBaseUrl: http://globalbackend:8080
          # ids.containers[0].config.customProperties.backEndBaseUrlMapping -- (OpenAPI Data App) (Optional) Global backend per version, agent IDs will be forwarded to the respective mapping with agent ID postfixed
          # backEndBaseUrlMapping:
          #   0.4.6: http://046.globalbackend:8080
          # ids.containers[0].config.customProperties.versions -- (OpenAPI Data App) Versions of the OpenAPI specification to be used
          versions:
            - 0.4.6
          # ids.containers[0].config.customProperties.agentDatabase -- (OpenAPI Data App) Agent database configuration for dynamic agent configuration
          # @default -- {}
          # agentDatabase:
          #   # ids.containers[0].config.customProperties.agentDatabase.hostname -- (OpenAPI Data App) MongoDB hostname
          #   hostname: "{{ $.Release.Name }}-mongodb"
          #   # ids.containers[0].config.customProperties.agentDatabase.port -- (OpenAPI Data App) MongoDB port
          #   port: 27017
          #   # ids.containers[0].config.customProperties.agentDatabase.authenticationDatabase -- (OpenAPI Data App) Authentication database for the MongoDB deployment, in normal deployments defaults to `admin`
          #   authenticationDatabase: admin
          #   # ids.containers[0].config.customProperties.agentDatabase.database -- (OpenAPI Data App) MongoDB database to use for the agent configurations (will be created when not exists)
          #   database: connector
          #   # ids.containers[0].config.customProperties.agentDatabase.collection -- (OpenAPI Data App) MongoDB collection to use for the agent configuration (will be created when not exists)
          #   collection: agents
          #   # ids.containers[0].config.customProperties.agentDatabase.sslEnabled -- (OpenAPI Data App) Use SSL for connecting to MongoDB
          #   sslEnabled: false
          #   # ids.containers[0].config.customProperties.agentDatabase.watchable -- (OpenAPI Data App) Connect to MongoDB via watchable construct, only supported in MongoDB configured with replicaset
          #   watchable: true
          #   # ids.containers[0].config.customProperties.agentDatabase.username -- (OpenAPI Data App) MongoDB username
          #   username: root
          #   # ids.containers[0].config.customProperties.agentDatabase.password -- (OpenAPI Data App) MongoDB password
          #   password: "password"
          # ids.containers[0].config.customProperties.agents -- (OpenAPI Data App) Agent configuration, if agentDatabase is configured and collection is empty the static configured agents are added to the agent database
          # @default -- []
          agents:
            # ids.containers[0].config.customProperties.agents[0].id -- (OpenAPI Data App) SCSN ID of the party, in the form of a GLN prefixed with `urn:scsn:`
            - id: urn:scsn:GLN
              # ids.containers[0].config.customProperties.agents[0].title -- (OpenAPI Data App) Title of the party, as will be shown in the Broker (e.g. Manufacturing Company [email protected])
              # ids.containers[0].config.customProperties.agents[0].config -- (SCSN) Party information following the SCSN Party object
              # cac:PartyIdentification and cac:PartyName will be automatically filled with the previous id and title fields
              # @default -- {}
              config:
                SUPPORTEDMESSAGES: Order 2.0;OrderResponse 2.0
                cbc:CompanyID: '5674352456'
                cbc:CompanyID: '456423565'
              # ids.containers[0].config.customProperties.agents[0].backendUrl -- (OpenAPI Data App) (Optional) Backend URL used for this party, will be appended with /VERSION/API_PATH (e.g. /0.4.6/order)
              backendUrl: http://internal-backend
              # ids.containers[0].config.customProperties.agents[0].versions -- (OpenAPI Data App) (Optional) Agent specific version override
              # versions:
              #   - 0.4.6
              # ids.containers[0].config.customProperties.agents[0].backEndUrlMapping -- (OpenAPI Data App) (Optional) Version based backend URL for this party
              # backEndUrlMapping:
              #   0.4.6: http://046.agentbackend:8080
In the next release, values that have to be kept secret (such as the IDS identity key) will be configurable as Kubernetes Secrets, either installed via the Helm chart or referencing existing secrets.

Agent Database support

The OpenAPI data app supports configuration of agents via a MongoDB database. The structure of the documents follows the static agents in values.yaml. The chart includes an optional Bitnami MongoDB deployment, which can be activated by setting mongodb.enabled to true and setting a root password. Alternatively, an existing MongoDB instance can be used, or one can be deployed next to the connector.
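Activating the bundled MongoDB deployment is then a matter of setting, in values.yaml (the root password is a placeholder; choose your own):

```yaml
mongodb:
  enabled: true
  mongodbRootPassword: change-me   # placeholder, pick a strong password
```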
The configuration allows for different types of MongoDB to be connected:
agentDatabase:
  hostname: "{{ $.Release.Name }}-mongodb"
  port: 27017
  authenticationDatabase: admin
  database: connector
  collection: agents
  sslEnabled: false
  # only supported with a MongoDB configured as replicaset
  watchable: true
  username: root
  password: "password"
The provided database and collection will be created automatically if they do not exist and the user has the rights to create the database.
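For reference, a single document in the configured collection mirrors a static agent entry from values.yaml; an illustrative example, shown in YAML notation with placeholder values:

```yaml
id: urn:scsn:GLN
title: Example Company [email protected]
versions:
  - 0.4.6
backendUrl: http://internal-backend
```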

API version configuration

The configuration file allows different kinds of configuration for the API version that is used in the OpenAPI data app. In the example above, the global version configuration is shown, which applies to all configured agents. More granular version support per agent can also be configured, so that not all agents are required to support the same versions. The way the versions are propagated to the backend system can also be configured, to further control how the backends are called from the data app. The following examples show the different options, all starting at the level of the customProperties field in the config example above.

Different versions per Agent

consumerOnly: false
versions:
  - 1.0.0
agents:
  - id: urn:scsn:agentA
    versions:
      - 0.9.0
    backendUrl: http://agentAbackend:8080/api
  - id: urn:scsn:agentB
    versions:
      - 1.1.0
    backendUrl: http://agentBbackend:8080/api
In this example, both agents support the global version(s) from the top-level versions list in addition to their agent-specific versions. So agent A will support versions 0.9.0 and 1.0.0, and agent B versions 1.0.0 and 1.1.0.

Internal backend configuration

Global backend URL mapping

consumerOnly: false
backEndBaseUrl: http://globalBackend:8080
versions:
  - 1.0.0
agents:
  - id: urn:scsn:agentA
  - id: urn:scsn:agentB
The global backEndBaseUrl can be used to forward all messages to generalized paths in the form of {backEndBaseUrl}/{agentId}/{version} in the example above: http://globalBackend:8080/urn:scsn:agentA/1.0.0.

Global backend per version

consumerOnly: false
backEndBaseUrlMapping:
  0.9.0: http://090backend:8080
  1.0.0: http://100backend:8080
versions:
  - 0.9.0
  - 1.0.0
agents:
  - id: urn:scsn:agentA
  - id: urn:scsn:agentB
The global backEndBaseUrlMapping can be used to forward all messages for specific API versions to generalized paths in the form of {backEndBaseUrlMapping[version]}/{agentId}, in the example above: http://100backend:8080/urn:scsn:agentA.

Agent based generic backend

consumerOnly: false
versions:
  - 0.9.0
  - 1.0.0
agents:
  - id: urn:scsn:agentA
    backendUrl: http://agentAbackend:8080
  - id: urn:scsn:agentB
    backendUrl: http://agentBbackend:8080
The agent-based backendUrl can be used to forward all messages for a specific agent to a backend; the version will be appended to this URL. For example: http://agentAbackend:8080/0.9.0.

Agent based backend per version

consumerOnly: false
versions:
  - 0.9.0
  - 1.0.0
agents:
  - id: urn:scsn:agentA
    backEndUrlMapping:
      0.9.0: http://090agentAbackend:8080
      1.0.0: http://100agentAbackend:8080
  - id: urn:scsn:agentB
    backEndUrlMapping:
      0.9.0: http://090agentBbackend:8080
      1.0.0: http://100agentBbackend:8080
This is the most configurable option, where you can specify for each agent and each version specifically what backend should be used.

Alternative deployment: Docker-Compose

This deployment method will be deprecated, since it is less configurable and requires more manual work to safely expose the connector to the outside world.
Deploying the connector is done via Docker Compose, so the environment requires Docker and Docker Compose to be installed.
First the Docker registry credentials have to be set on the machine that will deploy the connector:
docker login -u scsn -p tnoidsscsn
The relevant fragment of the docker-compose.yaml file is shown below; note that the core container mounts the Docker socket and the configuration file:
version: '3.7'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./config.yaml:/ids/config.yaml:ro
The deployment is started by executing the docker-compose up command inside the directory with the configuration files:
docker-compose up -d
When bringing the connector down, it is important to do so by executing docker-compose down, as this ensures that the data app container and helper containers are shut down correctly. Otherwise, if the machine shuts down without correctly removing these containers, the old containers may block a new deployment.

Docker privileges

The basic deployment requires the core-container to mount the Docker socket, which is not desirable for all configurations. The requirement for mounting the Docker socket comes from the fact that the core container is responsible for managing the data apps. Mounting the Docker socket requires the container to run as root, or effectively as root, which might pose risks to the security of the system the container is running on.
For the Docker Compose deployment you can run the core-container as non-root, but this requires the Docker socket not to be mounted and the containerManager field in config.yaml to be set to none:
containerManager:
  type: none
Disabling this feature also means that the data apps must be configured and deployed manually. The image tag of the core-container must be appended with -nonroot (excluding the date-pinned tag).

Production Environment

In order to migrate to the SCSN production environment, the following actions should be undertaken:
  1. Request a participant and a component certificate for the Production DAPS by creating Certificate Signing Requests. Contact Mike de Roode for the approval of the certificates.
  2. Replace the signed certificates in the Connector's configuration.
  3. Change the UUID and truststore values.
  4. Test the connection.
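Steps 2 and 3 typically amount to overriding the identity-related values in values.yaml; a sketch with placeholder values (the production DAPS URL, UUID, and CA chain are provided during onboarding):

```yaml
ids:
  keystore:
    key: <base64 encoded PKCS#8 production key>
    cert: <base64 encoded PEM production certificate>
  truststore:
    chain: <production CA chain>
  daps:
    url: <production DAPS URL>
    uuid: <UUID received from the production DAPS>
```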