Logo

Automate Your Analysis

Quantinuity is a web-based platform for automating data transfer and orchestrating the execution of computational workflow elements in engineering, analysis, and simulation. It is a hybrid of model-based engineering (MBE), digital engineering, and continuous integration / continuous deployment (CI/CD) tools of modern software development. In practice, it is a general-purpose workflow orchestration and automation platform.

Quantinuity utilizes software containerization, or application-level virtualization, to isolate computational workloads and offer scalability. This also hides application, model, and task implementation details by adding a layer of abstraction. The web-based application programming interface (API) facilitates storage, retrieval, and execution of workflows and their associated data. A shared database provides a single source of truth for both the execution of the analysis and storage of the resulting data.

Quantinuity is agnostic to both hardware and software requirements of analyses, simulations, programs, and applications. This lends itself well to mixed-mode, mixed-language, mixed-machine, or mixed-resource analysis that is not easily accomplished using desktop software or shared hardware configurations.

The Quantinuity platform supports multi-fidelity, multi-disciplinary, and multi-lingual analysis by executing each analysis, simulation, or computational task in a separate software container hosted in a computing cluster rather than on your local machine. The software running in the container communicates with the platform through the web API. This allows the platform to provide data continuity by passing quantitative information throughout the workflow without the need to interface directly with other software that may be running on different hardware, different software, or with incompatible versions of software. It also obviates the need to rely on a single programming language to ensure interoperability among separate applications. Isolation in software containers also prevents one application from polluting another with incompatible or conflicting software dependencies, settings, or files on disk.

Workflow

The heart of Quantinuity is the workflow. The workflow is a hierarchy of reusable, executable (i.e. computational) tasks linked together by threads of exchanged data.

Users can set input values on parameters that are passed to the underlying application to be used during execution. When complete, resulting output values are set on corresponding task parameters. Values from one task may be linked to values on another task, creating a chain of data that flows throughout the hierarchy.
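The chain of linked values can be sketched with a minimal in-memory model. The class and attribute names below are illustrative only, not the platform's actual API:

```python
# Minimal in-memory sketch of parameter linking between tasks.
# Class and attribute names are illustrative, not the platform API.

class Parameter:
    def __init__(self, name, iotype, value=None):
        self.name = name
        self.iotype = iotype   # "input" or "output"
        self.value = value
        self.link = None       # upstream parameter this one reads from

    def resolve(self):
        # A linked parameter reflects its upstream value.
        return self.link.resolve() if self.link else self.value

# Task A produces an output; Task B's input is linked to it.
a_out = Parameter("mass", "output", value=42.0)
b_in = Parameter("mass", "input")
b_in.link = a_out

print(b_in.resolve())  # 42.0
```

Chains of such links are what carry quantitative data through the workflow hierarchy.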

A description of the workflow can be logically grouped into three topics: configuration, execution, and artifacts. The Configuration section describes the building blocks of a workflow. The Execution section describes the structure of a workflow. The Artifacts section describes the data available as workflow components run to completion; these artifacts facilitate the exploration of results and aid in troubleshooting problems.

Entire workflows can be shared with collaborators to build something together, to share results, to fork a work-in-progress, or to duplicate a final product.

Configuration

In order to build a workflow in Quantinuity, various items must be appropriately configured. The workflow hierarchy depends on the presence of various building blocks that define the underlying computation and its input and output values.

Applications

Applications are the foundational building blocks of workflows. Each application represents a reusable piece of software that performs the computation, analysis, or simulation for a task. An application must be configured with the URL of an external software image as well as a command and entrypoint for execution. In the future, it may be configured to request appropriate computational resources such as CPU, memory, or storage. It may also be configured with a maximum execution timeout.

The software inside the referenced container may be commercial software, open source software, in-house proprietary software, or custom-developed software, including compiled source and interpreted scripts in any language compatible with software containers. Licensed software is not excluded but is also not explicitly included, facilitated, or enabled at this time.

An application can be shared with other users, with your own or other organizations, or with the public. When modifying the definition of an application that has been shared with others, care must be taken to prevent the introduction of breaking changes. Similarly, changing (reducing) the permissions for sharing may result in some users losing access to the application, thus breaking tasks in their workflow.

Version control for applications in Quantinuity is available indirectly by using tags in the URL of the software container image.
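For example, the tag portion of an image URL can be separated from the repository using the common container-image convention; the image references below are hypothetical:

```python
def split_image_ref(ref):
    """Split a container image reference into (repository, tag).

    Follows the common container-image convention: the tag is
    everything after the last ':' that appears after the final '/';
    if no tag is present, 'latest' is assumed.
    """
    slash = ref.rfind("/")
    colon = ref.rfind(":")
    if colon > slash:
        return ref[:colon], ref[colon + 1:]
    return ref, "latest"

# Hypothetical image references:
print(split_image_ref("registry.example.com/acme/solver:v2.1"))
# ('registry.example.com/acme/solver', 'v2.1')
print(split_image_ref("registry.example.com:5000/acme/solver"))
# ('registry.example.com:5000/acme/solver', 'latest')
```

Pinning a specific tag (rather than relying on a default) is what makes an application reference reproducible.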

Models

Models are variants of an application that have been constrained in some way to limit the application to a desired subset of the domain. The domain of interest may be defined by keywords or options passed to the application or by an input file.

As with applications, models may be shared with other users, with organizations, or with the public. Permissions set on models cascade to associated fields, files, and proxydefs. Reducing or removing permissions for a model may result in some users losing access to the model, thus breaking tasks in their workflow.

Each model may be regarded as a "black box" to which inputs are supplied and from which outputs are received, but whose inner workings are not visible to the user.

Fields

Fields are placeholders, or definitions, for values that will be copied to tasks. They are defined with an associated datatype and iotype (e.g. input or output). They can be optionally configured to have minimum or maximum values, or be constrained to match a pattern or belong to a predefined set. Fields may be configured with an external key or identifier for use by the application. This allows the local or display name to be independent of the name expected by the model or application.
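The constraint checks described above can be sketched as follows; the keyword names are assumptions for illustration, not the platform's actual field schema:

```python
import re

def validate(value, minimum=None, maximum=None, pattern=None, choices=None):
    """Check a candidate value against field-style constraints.

    The keyword names mirror the constraints described in the text
    (minimum/maximum, pattern, predefined set); they are illustrative,
    not the platform's actual schema.
    """
    if minimum is not None and value < minimum:
        return False
    if maximum is not None and value > maximum:
        return False
    if pattern is not None and not re.fullmatch(pattern, value):
        return False
    if choices is not None and value not in choices:
        return False
    return True

print(validate(0.7, minimum=0.0, maximum=1.0))              # True
print(validate("mk3", pattern=r"mk\d+"))                    # True
print(validate("titanium", choices={"steel", "aluminum"}))  # False
```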

Fields are used to define which parameters will be created when a task is instantiated. Fields may be defined on applications, models, proxydefs, or tasks.

Fields defined on applications may optionally be adopted by the models belonging to an application. This allows for a common, shared definition of a field. Fields defined on models will be copied to every task belonging to that model. Fields defined on proxydefs will be copied to every proxy associated with that proxydef. Fields defined on tasks are for local, user-specific use, such as the target of a proxy.

Quantinuity supports the following scalar datatypes for fields (and thus, parameters):

  • boolean: true or false
  • bytes: binary data (limited to 1024 characters)
  • char: alphanumeric (including special) character values (limited to 256 characters) [uses HTML input element]
  • file: a text or binary file
  • float: double-precision, numeric, real, floating point values
  • integer: integer values
  • json: JSON object containing key-value pairs, arrays, and scalar values
  • text: character or string data (limited to 1024 characters) [uses HTML textarea element]

Quantinuity supports the following array datatypes:

  • boolean[]: array of boolean values
  • char[]: array of character strings
  • float[]: array of floating point values
  • integer[]: array of integers
  • json[]: array of json objects
  • text[]: array of text strings

Quantinuity supports the following iotypes:

  • input: writable value passed to the application prior to execution/invocation
  • output: readonly value produced as the result of a computation
  • inproxy: readonly value for users; its value is set by a proxy and passed as input to the application
  • outproxy: writable value, typically linked from an output parameter in a child task; the value serves as output for the application
  • input_inproxy: writable value belonging to an input proxy
  • input_outproxy: writable value belonging to an output proxy

Files

File objects are used to associate text or binary content (i.e. files on disk) with models. This is one way to define a model as a distinct subset of its application. The file content may contain instructions to the application that limit, bound, or constrain the computation to a subset of its domain. For example, a spreadsheet file contains instructions that limit the analysis performed by the spreadsheet application to the values and formulas contained in its cells. Similarly, a script or source file contains instructions that limit the computation of the programming language in a way defined by the statements in the script.

Proxydefs

Proxydefs define a group of parameters used by an algorithm (i.e. an application) to systematically modify another parameter. This is typically done in an iterative fashion, such as with parameter sweeps, trade studies, or optimizers. The modification of downstream parameters occurs through data links. Proxydefs define a datatype and an iotype (i.e. input or output) and can only be used on target parameters that match those conditions. Proxydefs are placeholders for proxies.

Registries

Registries are remote repositories containing software container images. The default registry is configured to serve a set of standard applications available to all users. Additional registries can be added to access public or private images. Authentication can be provided with a username and password or using OAuth tokens.

Execution / Analysis

Execution of a workflow requires the workflow to be defined and organized. The workflow structure uses building blocks described in the Configuration section.

Projects

Every journey in Quantinuity begins with a project. Projects are the highest-level, outermost container for a workflow. A project has a name, a description, and associated settings. All these can be modified by the owner or other authorized users.

Branches

A branch is an independent section of a project. Every project can have multiple branches. Each branch contains a hierarchy of tasks that define the workflow. There is no relationship between branches within a project; one branch may have tasks similar to or completely different from those of another branch. Branches are intended to support snapshots, parallel execution, and other capabilities. A default branch is created for each project. Branches are not experimental, but capabilities beyond the default branch are reserved for future use.

Tasks

Tasks are instances of models. Tasks can be combined in series or in parallel, in loops or in conditions, and in parent-child relationships to orchestrate a desired outcome across a collection of computational models. Tasks can have any number of input and output parameters or none at all. They can perform simple calculations or an extended analysis. They can drive or control child tasks (using parameter links) by short-lived, repeated invocation or by long-lived processes. Whatever the use or the capability of a task (and its underlying application and model), the platform treats all tasks equally. There is no awareness of the purpose of a task.

Tasks can be organized in flat, shallow, or deep hierarchical structures that mimic nested flowcharts. Each task (or node) may contain its own nested structure.

At runtime, tasks receive a URL and an authorization token from the platform. Together, these allow the task's application to make calls to the platform API to manipulate task state and parameter values. There are no language bindings beyond standard web protocols; any language that can make HTTP requests can interact with the platform.
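A minimal sketch of how an application might assemble such an API call, assuming a bearer-token header scheme and a hypothetical endpoint path (neither is confirmed by this document):

```python
def build_request(base_url, token, path):
    """Assemble the URL and headers for a platform API call.

    The endpoint path and the bearer-token header scheme are
    assumptions for illustration; at runtime the platform supplies
    the actual base URL and token.
    """
    return {
        "url": base_url.rstrip("/") + "/" + path.lstrip("/"),
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    }

req = build_request("https://api.example.com", "TOKEN123",
                    "/tasks/42/parameters")
print(req["url"])  # https://api.example.com/tasks/42/parameters
```

Any HTTP client in any language can issue the equivalent request, which is what makes the platform language-agnostic.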

Parameters

Parameters are instances of fields: each parameter on a task is created from a corresponding field definition.

Unlinked input parameters are always valid. Linked input parameters reflect the valid state of the upstream parameter. Output parameters become valid after a task completes its computation, at approximately the same time the task itself becomes valid. Inproxy parameter values are not writable by users because their values are assigned by the application using input_inproxy values. Inproxy parameters cannot have upstream links but should be linked downstream to the task being driven by the enclosing task. Outproxy parameter values are likewise not writable by users; they should be linked from an output parameter on the task being driven.

User-defined output parameters on tasks can be configured with a JSON value for their expression field. An expression allows a secondary value to be calculated from the other input and output parameters defined on the task. Expressions are still experimental. The current syntax is an abstract syntax tree (AST) defined by a nested JSON object with a single key at each level. The original intent was for operators to be compatible with JavaScript. However, this is not entirely possible. Current operators are implemented as alphabetic strings that abbreviate the operator's meaning.
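A sketch of such an AST and a toy evaluator is shown below. The operator names ("var", "add", "mul") are invented for illustration; the platform's actual operator vocabulary may differ:

```python
def evaluate(node, params):
    """Evaluate a nested single-key AST against parameter values.

    Operator names ('var', 'add', 'mul') are invented for
    illustration; the platform's actual operators may differ.
    """
    if not isinstance(node, dict):
        return node  # literal value
    (op, args), = node.items()  # exactly one key at each level
    if op == "var":
        return params[args]
    values = [evaluate(a, params) for a in args]
    if op == "add":
        return sum(values)
    if op == "mul":
        out = 1
        for v in values:
            out *= v
        return out
    raise ValueError(f"unknown operator: {op}")

# (x * 3) + 1, with x taken from another parameter on the task
ast = {"add": [{"mul": [{"var": "x"}, 3]}, 1]}
print(evaluate(ast, {"x": 4}))  # 13
```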

Proxies

Proxies are used to control parameter values in a systematic or algorithmic way, as might be the case for parameter sweeps, trade studies, optimizers, or other iterative loops. A proxy is defined by a parameter and a proxydef. Both the datatype and the iotype must match between the parameter and the proxydef; consequently, proxies may only be used with parameters having an iotype of inproxy or outproxy.
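Conceptually, a proxy-driven sweep behaves like the following local simulation; all names and the squaring "analysis" are stand-ins, not the platform's actual mechanism:

```python
# Local simulation of a proxy driving a child task's input over a
# parameter sweep. Names and logic are illustrative only.

def child_task(thickness):
    """Stand-in for the driven task's computation."""
    return thickness ** 2  # pretend analysis result

results = []
for value in [1.0, 2.0, 3.0]:     # the proxy assigns each sweep value
    inproxy_value = value          # inproxy parameter, set by the proxy
    output = child_task(inproxy_value)
    results.append((inproxy_value, output))  # outproxy collects results

print(results)  # [(1.0, 1.0), (2.0, 4.0), (3.0, 9.0)]
```

On the platform, each pass through this loop would correspond to a separate invocation of the driven task.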

Artifacts

The computation, analysis, or simulation of each task may succeed, fail, time out, or be cancelled. During execution, it may produce output to the console (e.g. stdout, stderr) or into files on disk. The application logic is expected to query the web-based API to retrieve input parameters and later to update output parameters. The status, duration, and content of execution artifacts can be inspected.

Jobs

Each invocation of a task corresponds to a job. Job details include the controlling task, the state value, child jobs, and associated containers. Some jobs (i.e. the application of the containing task) create child jobs in the hierarchy. These parent and child relationships are listed. A single job may invoke a child container multiple times, such as in a loop.

Containers

Container objects are a record of the software container used to execute the application associated with a task. More than one container may be associated with an enclosing job. Each container corresponds to an invocation of the model and its application as configured in their cmd and entrypoint settings.

Iterations

Iterations provide a convenient view into the changing values of an iterative task. Iterations are presented in table form but may also be accessed as JSON via the API for programmatic manipulation. Iterations are the basis for future visualization capabilities.

Logs

Logs capture console output from the execution of the application associated with a task. Each log belongs to a container. Each log has a file number (fileno) that corresponds to the source of its content. The first several filenos correspond to standard *nix usage of input and output streams. The remaining values build on this concept.

Filenos supported by Quantinuity include:

  • 0: standard input (stdin) stream of the application (not used)
  • 1: standard output (stdout) stream of the application (typically console output)
  • 2: standard error (stderr) stream of the application (typically console or file output)
  • 3: Kubernetes/orchestrator container status messages (e.g. container readiness)
  • 4: Kubernetes/orchestrator pod messages (e.g. pod readiness, including copying files)
  • 5: Kubernetes/orchestrator cluster messages (e.g. downloading container images)
  • 6: Quantinuity controller messages (e.g. pending, initialization, timeout, quota violations, success)

Not all logs are visible to users. In general, logs from the orchestrator (3, 4, 5) are visible to staff but not to users because these logs may contain infrastructure details. Log visibility is subject to change.
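For reference, the fileno values above can be mapped to short labels for display; the label strings below paraphrase the list above:

```python
# Map of log filenos to short source labels, paraphrasing the
# fileno list documented above.
FILENO_LABELS = {
    0: "stdin (not used)",
    1: "stdout",
    2: "stderr",
    3: "orchestrator container status",
    4: "orchestrator pod messages",
    5: "orchestrator cluster messages",
    6: "Quantinuity controller messages",
}

def describe_log(fileno):
    return FILENO_LABELS.get(fileno, "unknown fileno")

print(describe_log(2))  # stderr
```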

Program output that is saved to files can be retrieved by creating an output file parameter and assigning its ext_key to the filename. Technically, this is not an artifact, but it serves the same purpose.

Usage

Usage of computing resources is aggregated by month. Totals are used to prepare invoices for billing. Historical values are also available.

When the usage quota is exceeded, no additional workflow tasks or jobs will be run. Attempts to run a workflow will result in a failure with a message similar to "Quota exceeded".

Account

Handle

Your handle, or alias, is a publicly visible username. A value or name is automatically assigned during account creation. You can change your handle name, although it must be globally unique across all users. Each handle has an associated immutable id. When sharing workflow projects, the handle name is used to find other users but the handle id is stored. This allows changing the handle name without affecting shared permissions.

Profile

Your profile contains optional data about you. During account creation, only an email address is required. You may optionally set other information such as given and family names.

Tokens

Account tokens allow access to the platform API without using the primary username and password. You must assign a name to each token at creation time, after which the value of the token will be temporarily displayed. After you navigate away from the screen, the token value will no longer be visible or accessible. For this reason, it is important to make a copy of the value for your intended use. Tokens can be revoked at any time. If you lose the value of a token, you can revoke it and create a new one.

User

Your user account is attached to the email address used during account creation. Currently, only one email address may be assigned to your account. It may be possible to update your email address. Your account identifier will remain unchanged.

Billing

Quantinuity offers a subscription service with tiers based on the number of CPU minutes consumed in a calendar month. A free tier exists with a modest number of CPU minutes suitable for exploration or infrequent use. Higher priced tiers have correspondingly higher thresholds for resource consumption. Every invocation of a task consumes computing resources.

Payment Cards

Payment cards may be added and assigned to a subscription. A card assigned to a subscription may not be deleted; even after being removed from the subscription, the card cannot be deleted until it has been released following payment at the end of the subscription period (usually monthly).

Invoices

Invoices show monthly usage and charges incurred to your method of payment. Invoices are created at the beginning of every calendar month.

Subscription

Workflow execution requires a subscription. Users may choose from available plans, including the free tier. Other plans may require that a payment card is assigned to the subscription. A user may only have one subscription.

Monthly subscription fees are due at the end of each calendar month. By default, subscriptions are set to automatically charge the associated payment card when invoices are due.

Subscription plans may be changed at any time. Upgrades are effective immediately. Downgrades become effective at the beginning of the next billing period.

Email

The email inbox displays a historical listing of messages sent from Quantinuity servers. It does not include messages sent by individual staff members.

OAuth

OAuth (short for Open Authorization) is a framework that allows users to grant third-party access to their information without sharing their passwords. When a user authorizes Quantinuity to access information at a provider, a token is stored for future use. That token is used to download container images from remote repositories in order to execute the application logic contained therein. Tokens may also be used in the future to provide access to web-based analytical products or remote APIs.


For errors, mistakes, questions, clarifications, bugs, or feature requests, please contact us.