• Added the url-unquote function, which URL-unquotes any URL-quoted characters in its input. See the related url-quote function.
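
    As a sketch (the property names here are illustrative, not from the original), a transform could decode a quoted value like this:

    ```json
    ["add", "decoded-url", ["url-unquote", "_S.encoded-url"]]
    ```

    A value such as "https%3A%2F%2Fexample.com%2Fa%20b" would then become "https://example.com/a b".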


  • The RDF source and SDShare source now support the sort_lists property, which automatically sorts resulting properties containing lists (i.e. RDF statements having the same predicate). This property is true by default.



  • Added encrypt-pgp and decrypt-pgp DTL functions that can encrypt strings to OpenPGP messages using a PGP public key and decrypt these messages back to strings using a PGP private key and its associated password.


  • Added encrypt-pki and decrypt-pki DTL functions that can asymmetrically encrypt strings to bytes and decrypt bytes back to strings using a PKI public/private key pair in DER format (PKCS#8). The encryption is performed using 2048-bit RSA with SHA-1 hashes and OAEP/MGF1 padding.




  • Added the intersects DTL function. This boolean function returns true if there is an overlap between the values in the two arguments.
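
    As a sketch (the argument shapes are assumed, not taken from the original), intersects could be used like this:

    ```json
    ["intersects", ["list", 1, 2, 3], ["list", 3, 4]]
    ```

    This would return true because the value 3 appears in both lists.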

  • The DTL compiler will now issue a warning if you try to perform two or more join expressions between the same two dataset aliases. The warning is there to notify you of possible cardinality issues and to point you to the tuples function, which can be used to avoid them.

    When there are two or more join expressions between the same two dataset aliases, only the first one is treated as a join expression; the rest are equality comparisons. One can use the tuples function to combine them into one big join expression, at the cost of composite indexes being used.


    Note that the eq function serves a dual purpose: it can be used both for join expressions and for equality comparisons. The two differ in that a join uses intersection (similar to the intersects function), while the equality comparison is an exact match. Use the intersects function if you want to check for intersection/overlap instead of an exact match.
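
    A sketch of the combined join (the property names are illustrative assumptions): instead of two separate eq expressions between the same aliases, the two key pairs are wrapped in tuples:

    ```json
    ["eq",
      ["tuples", "c.first-name", "c.last-name"],
      ["tuples", "_S.first-name", "_S.last-name"]]
    ```

    This expresses the two comparisons as a single join on a composite key.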


  • The default value of the keep_existing_solr_ids configuration property in the Sesam Databrowser sink has been changed from true to false.


  • The JSON push sink now supports customizable HTTP headers via a headers property.
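
    A minimal sketch of a sink configuration using this property (the system id, URL, and header values are made up):

    ```json
    {
      "type": "json",
      "system": "our-receiver",
      "url": "/receive",
      "headers": {
        "X-Api-Key": "secret-key-value"
      }
    }
    ```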



  • If a pipe is running and the pipe config is modified, the pipe will no longer be stopped. Instead, an "An old version of the pipe is still running" warning will be displayed, and it is up to the user whether to stop the running pipe or not.



  • Added a track_dead_letters option to the pump configuration. If set to true, it will delete "dead" entities from the dead letter dataset when a later version of the entity is successfully written to the sink. Note that using this option incurs a performance cost, so use it with care.


  • It is now possible to specify track-dependencies on all the HOPS_SPECs in a specific hops DTL function. This change was made so that one can disable tracking for any of the HOPS_SPECs, not just the last one.


  • The json-parse and json-transit-parse DTL functions now accept an optional default value expression. The default value expression is used when the input value is not valid JSON.


  • The datetime-parse and datetime-format DTL functions now accept an optional timezone argument. This makes it possible to parse datetime strings and format datetime values in specific timezones.
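
    A sketch of usage (the exact position of the timezone argument is an assumption here, not confirmed by the original; check the reference documentation for the authoritative signature):

    ```json
    ["datetime-parse", "%Y-%m-%d %H:%M:%S", "_S.timestamp", "Europe/Oslo"]
    ```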


  • When a pipe is reset, its retry queue is now also reset.
  • Bug fix: It is now possible to interrupt pumps that are performing retries.
  • Indexing of datasets has been changed so that each dataset is indexed for a maximum of five minutes per iteration. This prevents some datasets from being blocked from indexing while other large datasets are being indexed.




  • Added functionality for preventing all pipes from automatically running (useful in some debugging scenarios). See the Low level debugging page for details.


  • Added an is_sorted property to the RDF source to indicate that the input data is sorted on subject, enabling the source to avoid loading the entire file into memory. Note that this only works for nt (NTriples) format files without blank nodes.


  • Added a write_retry_delay property to pipe pumps. This is used in conjunction with max_consecutive_write_errors when the system the pipe is writing to is known to be sporadically (non-transiently) unavailable. See the Pump section for details.




  • Added the indexes property to the dataset sink. If set to "$ids" then an index will be maintained for the $ids property. This index will then be used by the dataset browser to look up entities both by _id and $ids.
  • The default value of the max_depth property in hops has been changed from null to 10. This means that the default is to stop the recursion at level 10.


  • The JSON push protocol has been simplified to make it easier to write receivers. It will now always send the entities as an array, even if it contains just a single object. The JSON push sink has been updated to reflect this. If you need single-object JSON POST/PUT operations, you should use the REST sink instead.
  • Systems now support environment variables in their configuration, just like pipes do.


  • Added the tuples DTL function that can be used to create composite join keys.


  • The equality property on the merge source is now optional.


  • Changed the default value of the "schedule_interval" pump configuration property. Previously, the default value was 30 seconds for all pipes. The new default for pipes with a dataset source and a dataset sink is 30 seconds +/- 1.5 seconds. For all other pipes, the default is 900 seconds +/- 45 seconds. (The +/- part helps stagger the start times of the pipes, so that we don't get lots of pipes starting at the same instant.)
  • Added a warning in the GUI for non-internal pipes that don't have a "schedule_interval" or a "cron_expression" attribute set.


  • Extended all systems to accept a new worker_threads property that limits the number of concurrent pipes that can run against a particular system. The default value is 10. For input pipes the source system is used, and for output pipes the sink system is used. Internal pipes (i.e. dataset to dataset pipes or receiver/publisher endpoints) share a pool of 50 worker threads.
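
    A sketch of a system configuration limiting concurrency (the system id and type are illustrative):

    ```json
    {
      "_id": "our-slow-api",
      "type": "system:url",
      "worker_threads": 4
    }
    ```

    With this configuration, at most 4 pipes reading from or writing to this system can run at the same time.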


  • Extended the URL system and REST system to accept default custom request headers using the headers property. Also fixed the REST system schema to reflect authentication options and the jwt_token property.


  • Extended the in DTL function to allow a single value in the second argument.
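
    A sketch (the values are illustrative): previously the second argument had to be a list, and now a single value also works:

    ```json
    ["in", "_S.category", "news"]
    ```

    This behaves like a membership test against a one-element list, equivalent to ["in", "_S.category", ["list", "news"]].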



  • Added the _R variable, which can be used to refer to the root context in a DTL transform.


  • The base_url property of the URL system and REST system has been deprecated. It has been superseded by the url_pattern property.



  • Added the is-changed DTL function, which can be used to compare data from the current and the previous version of the source entity.




  • Added a substring DTL function that returns a substring of another string given a start and end index.
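
    A sketch of usage (the argument order, with the string last, is an assumption, not confirmed by the original):

    ```json
    ["substring", 0, 3, "_S.country-code"]
    ```

    Under that assumed order, this would return the first three characters of the country-code value.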


  • Added include_replaced property to the dataset source. This property is used to filter out entities that are replaced by the merge source.


  • Added url_pattern property to URL system. This property gives you more control over how absolute URLs are produced. It can be used instead of the base_url property.
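
    A minimal sketch (the hostname and placeholder syntax are assumptions):

    ```json
    {
      "_id": "our-api",
      "type": "system:url",
      "url_pattern": "https://api.example.com/v1/%s"
    }
    ```

    Relative URLs used by pipes against this system would then be substituted into the pattern to produce absolute URLs.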


  • Added a jwt authentication scheme and a jwt_token property to the URL system.


  • Added text_body_template and text_body_template_property properties to the EMail message sink. Use these to explicitly construct a plain-text version of your messages when sending multi-part messages.


  • For security reasons, the Mail and SMS sinks no longer support file-based templates. Note that this is a non-backwards compatible change. You can use environment variables and upload your existing template files using the environment variable API or the corresponding Management Studio form.


  • Datasets are now scheduled for automatic compaction once every 24 hours. The default is to keep the last 2 versions up until the current time. It is possible to customize the automatic compaction. See documentation on compaction for more information.


  • The SQL source no longer includes columns with null values by default. You can include them by setting the preserve_null_values property of the SQL source to true. Note that this is a change of the previous default behaviour.
  • The CSV source no longer includes empty string values by default. You can include these by setting the CSV source property preserve_empty_strings to true. Note that this is a change in the default behaviour.


  • The dict function now takes zero, one, or an even number of arguments. If zero arguments are given, an empty dict is returned. If an even number of arguments are given, a new dict is returned with each pair of arguments as a key and value. The latter is convenient for easy construction of dicts.
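
    For example, with an even number of arguments (the property names are illustrative):

    ```json
    ["dict", "name", "_S.name", "age", "_S.age"]
    ```

    This would construct a dict with the keys "name" and "age" populated from the source entity.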
  • The transform functions add and default now take an expression in their first argument. This means that the properties can be dynamic and that there can be multiple of them. rename now takes dynamic arguments in the first and second positions.


  • Documented the pool_recycle option on SQL systems and changed its default from -1 (no recycling) to 1800 (30 minutes).


  • Added the merge source. This is a data source that is able to infer the sameness of entities across multiple datasets.



  • Added a uuid DTL function. It takes no parameters and returns a UUID object (type 4).


  • Added a disable_set_last_seen property to the Pipe properties. If set to true, it will not be possible to set or reset the last seen bookmark on the pipe using the API (i.e. protecting it from accidental changes by principals with write permission on the pipe).


  • Added a read_retry_delay property to pipe pumps. This is used in conjunction with max_read_retries when the source is known to be sporadically (non-transiently) unavailable. See the Pump section for details.


  • The documentation on cron expressions now makes it clear that they are evaluated in the UTC timezone.


  • The concat DTL function now takes a variable number of arguments. This avoids constructing unnecessary lists.
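
    For example (the property names are illustrative), instead of wrapping the values in a list one can now write:

    ```json
    ["concat", "_S.first-name", " ", "_S.last-name"]
    ```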


  • The url-quote DTL function now takes an optional SAFE_CHARS argument. This is especially useful when you don't want to quote the / character.
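
    A sketch (the position of the SAFE_CHARS argument is an assumption, not confirmed by the original):

    ```json
    ["url-quote", "/", "_S.path"]
    ```

    With "/" as the safe characters, a path value would keep its slashes while other reserved characters are quoted.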


  • The section on Continuation Support has been extended. Each source now has a Continuation support table that shows the source's support for continuations.


  • Added the json and json-transit DTL functions.
  • The group-by DTL function has been changed to always return string keys. The string keys are JSON transit encoded (the same type of string the json-transit function produces). The reason is that the entity data model (and JSON) only supports string keys. group-by has also gained an optional STRING_FUNCTION argument, which lets you specify a custom function for creating the string keys.
  • The sorted, sorted-descending, min and max DTL functions have been updated to support mixed-type ordering.





  • Added the range DTL function.


  • Added the Embedded source. This is a data source that lets you embed data inside the configuration of the source. This is convenient when you have a small and static dataset.


  • Added the XML transform and XML endpoint sink. These can be used to generate XML documents inline in entities or published to external consumers, respectively.


  • Changed the CSV endpoint sink to not output deleted entities by default. Added a new skip-deleted-entities config parameter that can be set to false if one wants deleted entities to appear in the CSV output.


  • Added DTL Reference Guide section that explains how joins work.


  • Reworked DTL math functions to reflect that float is an allowed type in entities. If the function parameters are of mixed types, the result will be coerced to the most precise type, i.e. float+decimal=decimal, int*float=float, int/int=decimal, and so on. Note that this is a change in behaviour: entities that previously got decimal values from DTL math functions when the input was of type float may now end up with float values instead. Use the DTL decimal cast function to coerce the result to decimal if this is important to the application.
  • Added is-float and float DTL functions. Changed the is-decimal function so that it no longer returns true if the argument is a float. You will now have to combine an is-float and an is-decimal check in an or clause to test for both types.


  • Added Elasticsearch support, which includes a system and a sink.
  • The Solr sink now supports batching.
  • Added the commit_at_end property to the Solr sink and the Sesam databrowser sink.
  • Moved the commit_within property from the Solr system to the Solr sink and the Sesam databrowser sink. The reason is that the commit rate is really specific to how and where it is used. This change is backward compatible, as the default value is taken from the system. It is recommended to update the configuration files accordingly.
  • Moved the prefix_includes and keep_existing_solr_ids properties from the Solr system to the Sesam databrowser sink. The reason is that they are only relevant there. This change is backward compatible, as the default value is taken from the system. It is recommended to update the configuration files accordingly.


  • Fixed the documentation for the merge DTL transform; it mistakenly stated that the merge transformation would not overwrite existing attributes in the target entity.
  • Updated the /api/config GET endpoint to format the JSON in a more human-readable way.



  • Added the datetime-shift DTL function.
  • Added support for timezones to the datetime-parse DTL function.
  • Added missing sink and source prototypes in the "Edit pipe" GUI in Management Studio.
  • Fixed a bug that prevented users from adding a system in Management Studio.