Introductory Concepts
Measures vs. Assessments
A measure in Janus refers to a psychological or psychometric instrument designed to detect specific markers or indicators in respondent data. It processes this data to compute values and present results in either narrative or data-driven formats. Each measure is uniquely identified by a measureID. Examples of measures available in Janus include ASRS, Conners 4, EQ-i, among others.
An assessment is an instance of a measure applied to a specific respondent. It is created following the guidelines outlined in the corresponding measure documentation. An assessment encompasses the full lifecycle of data collection, evaluation, and report generation. When a measure is administered to evaluate a respondent, it becomes an assessment.
In short:
- Measure = the tool or test
- Assessment = the application of that tool to a respondent
Assessment Lifecycle
The Assessment Lifecycle in Janus consists of the following:
- A Session, where preferences and parameters are set to define a single assessment for one person.
- A Data Gathering (sometimes referred to as a form or questionnaire), where Observation Items are collected.
- One or more Evaluations (sometimes referred to as scoring), where Outcome Items are generated.
- Optionally, one or more Enrichments (sometimes referred to as reporting), where visualization and narratives are created.
GUIDs
GUID stands for Globally Unique Identifier. These identifiers take the form “00000000-0000-0000-0000-000000000000”, where each “0” is a hex value from 0-f. Within the Janus API, GUIDs are used frequently to uniquely identify a specific component of a measure, or the measure itself. GUIDs are used to identify the following:
- measureID (the identifier for a specific measure)
- templateID (the identifier for a specific data gatherer, evaluator, or enrichment template being accessed)
- resourceID (the identifier for a specific instance of a template)
- interpretAs (the identifier for a specific observation set)
- outcomeSetID (the identifier for a specific outcome set)
- sourceID (a unique identifier generated by the caller to link back to the calling system)
All relevant GUIDs for a measure can be found in the associated ScalesItems file, under the “Structure” tab.
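Callers generate sourceID values themselves. A minimal Python sketch of generating a sourceID and validating the GUID format described above (the helper name is illustrative, not part of the Janus API):

```python
import re
import uuid

# Generate a sourceID on the caller's side. Any GUID generator works;
# uuid4 produces a random GUID in the canonical 8-4-4-4-12 hex form.
source_id = str(uuid.uuid4())

# Validate that an identifier matches the GUID shape Janus expects:
# five hex groups of 8, 4, 4, 4, and 12 characters, separated by dashes.
GUID_PATTERN = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$"
)

def is_guid(value: str) -> bool:
    return bool(GUID_PATTERN.match(value.lower()))
```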
ScalesItems
In Janus, the ScalesItems document defines the structure and metadata of a measure. It is provided in both XLSX and JSON formats. As part of onboarding, the MHS team supplies the ScalesItems XLSX file, which outlines key components of the measure and supports its implementation across the Janus lifecycle.
Note that the JSON version of the ScalesItems is easier to parse for direct integration clients, while the XLSX version is more human-readable.
The ScalesItems file includes the following key tabs:
Structure
This tab lists all globally unique identifiers (GUIDs) associated with the measure, including:
- measureID
- evaluatorID
- interpretAs
- and others
Directives
This tab defines all Directives required at each stage of the measure’s lifecycle. These directives guide how data is gathered, evaluated, and enriched.
→ See Directives for more details.
Observation Sets
This tab enumerates all Observation Sets included in the measure. These sets define the data inputs collected from respondents.
→ See Item Encoding for more information.
Outcome Sets
This tab lists all Outcome Sets associated with the measure. These define the outputs generated after evaluation.
→ See Item Encoding for more information.
Raters and Respondents
“Raters” and “Respondents” refer to two roles within the assessment process. During the data gathering phase, the Rater is the person completing the assessment, responding to the questions presented by the measure. The Respondent, on the other hand, is the person being evaluated by the assessment. These may be the same person in the case of a self-assessment, where the person being evaluated is also the one completing the assessment. In other cases, one person may complete the assessment to evaluate another person they are familiar with, in which case the Rater and Respondent are two different people.
Observation and Outcome Sets
Observation Items are data points collected by a rater during the Data Gatherer phase of the Janus lifecycle. These items represent raw inputs—such as responses, demographics, or contextual information—used to evaluate a respondent. Once collected, Observation Items are passed to the Evaluate and Enrichment stages of the API for further processing.
- Observation Items are grouped into Observation Sets (also referred to as ObservationItemSets).
- Common examples include Item Responses, Scored Demographics, and Additional Questions.
- These sets are not tied to how items are grouped on a page in the Data Gatherer UI.
- Observation Sets can be pre-filled if some rater information is already known at the time of assessment setup.
Outcome Items are the results generated by the Evaluator after processing the Observation Items. These represent the scored or interpreted outputs of the assessment.
- Outcome Items are grouped into Outcome Sets (also known as OutcomeItemSets).
- These sets are measure-specific and often correspond to the Scales of the measure—collections of items that assess a shared psychological construct or theme.
- Outcome Items may include Total Scores, Scale Scores, and Normative Comparisons (e.g., differences from average).
Sessions
In Janus, a session is the foundational container that organizes all resources involved in a single assessment lifecycle. It is not a web session, but rather a logical grouping that binds together the components required to evaluate a respondent.
A session must be created before any other lifecycle components—such as Data Gatherers, Evaluators, or Enrichments—can be instantiated. All subsequent API calls reference this session to ensure consistency and traceability across the assessment process.
Standard Session Composition
A typical session includes:
- One Data Gatherer – collects input from a rater (e.g., responses, demographics)
- One Evaluator – processes the collected data and generates outcome items
- Zero or more Enrichments – optional reports that enhance the evaluation output
Each session is generally tied to a single rater, and in most cases, there is a one-to-one relationship between sessions and raters. However, a single respondent may be evaluated by multiple raters (e.g., parent, teacher, self), resulting in multiple sessions for that respondent.
Special Cases: Multi-Session Enrichments
While the standard model assumes a single session per rater, multi-session enrichments allow for the aggregation of data across multiple sessions. This is particularly useful in multi-rater scenarios and is handled through the Enrichment Transformer endpoint. Details on this advanced use case are provided in the Enrichment Transformer documentation.
See: Session Endpoints
Thumbprints
Thumbprints in the Janus API are cryptographic hashes used to verify the integrity and authenticity of data transmitted between clients and the Janus platform. They ensure that Observation Items and Outcome Items are correctly encoded and have not been altered or corrupted during transit.
Thumbprints must be generated using the Blake2B hashing algorithm. The caller must generate a thumbprint for each Observation Set in the request JSON. Outcome Sets, created by Janus during the Evaluate process, will have generated thumbprints attached as well. Once the request is made, the Janus API will also calculate the thumbprint for each Observation Set and Outcome Set in the input and ensure the two thumbprints match. If they do not, the input will be considered untrustworthy and the request is rejected.
Generation
- Thumbprints must be generated using the Blake2B hashing algorithm.
- For each Observation Set in the request payload, the caller must generate and include a thumbprint.
- For Outcome Sets, Janus automatically generates thumbprints during the Evaluate process.
Validation
- Upon receiving a request, the Janus API independently recalculates the thumbprint for each Observation and Outcome Set.
- If the thumbprint provided by the caller does not match the one calculated by Janus, the input is considered untrustworthy and the request is rejected.
Summary Workflow
- Caller encodes Observation Items and generates thumbprints using Blake2B.
- Request is submitted to Janus with thumbprints included.
- Janus recalculates thumbprints and compares them to the submitted values.
- If all thumbprints match, processing continues. If not, the request fails validation.
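The caller-side portion of the workflow above can be sketched in Python with the standard library's Blake2B implementation. Note that this is illustrative only: Janus specifies Blake2B, but the exact digest size and the canonical serialization of an Observation Set are defined in the Thumbprints documentation; the choices below (default 64-byte digest, compact sorted-key JSON) are assumptions.

```python
import hashlib
import json

def thumbprint(observation_set: dict) -> str:
    # Assumed canonical form: compact JSON with sorted keys, so that the
    # same logical content always hashes to the same value. The real
    # canonicalization rules are in the Thumbprints documentation.
    canonical = json.dumps(observation_set, sort_keys=True, separators=(",", ":"))
    # Blake2B hash of the canonical bytes (default 64-byte digest here).
    return hashlib.blake2b(canonical.encode("utf-8")).hexdigest()
```

The caller attaches the resulting value alongside each Observation Set in the request; Janus recomputes it server-side and rejects the request on any mismatch.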
See: Thumbprints
Data Gatherer
A Data Gatherer in Janus is a structured interface — typically a form, survey, or questionnaire — used to collect input from a rater. This input may include responses, demographic details, or behavioral observations. Each Data Gatherer is tailored to a specific measure and is hosted by MHS as a complete, ready-to-use application that meets our standards for compliance and for accurate, equitable data collection. Use of the provided Data Gatherers is required for the Data Gathering process unless an exception is made.
Some Data Gatherers can be pre-filled with known information about the rater or respondent, streamlining the data entry process. And all Data Gatherers can save and load assessment progress from the session. Once a Data Gatherer is completed, the gathered data is transformed and formatted into Observation Items and stored in the session container for later retrieval and subsequent evaluation.
Evaluator
An Evaluator is a model used to evaluate observation data. Each measure has at least one Evaluator (evaluatorID) for its corresponding data.
The Evaluator endpoint requires as input one or more Observation Item set(s) from the Data Gathering phase as well as control parameters (measureID, evaluatorID, and Directives). As output the Evaluator returns an Evaluation instance in JSON format which contains the generated Outcome Item set. This Evaluation is then stored in the session container.
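The shape of an Evaluate request can be sketched as a Python dict. The top-level field names, item structure, and placeholder values below are illustrative assumptions; the authoritative schema is in the Evaluator Endpoints documentation and the measure's ScalesItems file, and the GUIDs come from its Structure tab.

```python
# Illustrative Evaluate request body (field names are assumptions).
evaluate_request = {
    "measureID": "00000000-0000-0000-0000-000000000000",    # from ScalesItems
    "evaluatorID": "00000000-0000-0000-0000-000000000000",  # from ScalesItems
    "directives": {},  # control parameters per the Directives tab
    "observationItemSets": [
        {
            # interpretAs identifies which observation set this is
            "interpretAs": "00000000-0000-0000-0000-000000000000",
            "items": [{"id": "item1", "value": 2}],
            "thumbprint": "<computed per the Thumbprints section>",
        }
    ],
}
```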
See: Evaluator Endpoints
Enrichment
An Enrichment is a generated output that presents and expands on Evaluation results in a user-friendly format. An Enrichment can take the form of a PDF/DOCX report, complete with tables, figures, and descriptions. Each Enrichment is unique to its corresponding measure. This phase of the lifecycle is optional.
The Enrichment endpoint requires as input at least one Observation Item Set, the Outcome Item Sets generated by the Evaluator, and control parameters. As output the Enrichment endpoint returns a link to the newly generated Enrichment report. This Enrichment report can then be accessed with an API key using a GET Resource API call.
See: Enrichment Endpoints
Enrichment Transformer
The Enrichment Transformer service is a type of Enrichment that generates reports spanning multiple raters or respondents. These generated reports are used for comparison or consolidation across multiple sessions: they can combine information from multiple raters assessing the same respondent, or consolidate information from multiple respondents in an organization.
The Enrichment Transformer endpoint requires a list of “Session Data” objects (the Observation Items and Outcome Items for each session) as input, as well as control parameters (measureID, enrichmentID, directives). Depending on the type of enrichment, reports can be generated as XLSX, PDF, or CSV files. As output the Enrichment Transformer endpoint returns a link to the newly generated report. This report is stored in the session container and can be accessed with an API key using a GET Resource API call to Janus.
See: Enrichment Transformer Endpoints
Data Retention
Data provided to the Janus API is temporarily logged for both processing and debugging purposes. By default, this data is maintained for 30 days before being removed from the system. Each tenant within the Janus system may opt to change this default, and the retention period can also be adjusted on a session-by-session basis.
See: Data Retention
Sovereignty
The default region for data processing is “US East”. If another region is not specified, this is the region that will be used. Other regions are not yet available, but will be in the near future. Additional charges may apply when working in other regions.
See: Sovereignty
API Keys
In Janus, a tenant represents a dedicated environment for an individual customer. Each customer is provided with at least two tenants: one for their Production environment and the other(s) for their Development environments. Each tenant is provided with two API keys. Janus uses these API keys to authenticate every incoming request and to uniquely identify the organization it comes from. These keys should be treated as confidential information, as they are used internally to track usage and, in pay-per-use or metered-use scenarios, to perform utilization accounting and billing.
The API key needs to be included in every call made to Janus, in the header section of the request. The header key is “Ocp-Apim-Subscription-Key”, and the value is a string.
At any time, an organization may request new API subscription keys, and they are encouraged to do so regularly, as the API keys provide unrestricted access to your account, licensed processes, and, potentially based on policy, your organization’s data. API users are also encouraged to deactivate keys they no longer need or use.
API Call Throttling
The MHS Janus APIs perform significant quantities of work with each call. In the Service Provider group, each call covers a large part of the measure / assessment lifecycle. To maintain system availability and performance for all users, by default, the Service Provider APIs are all governed by a throttling policy that limits calls over a period.
The default throttling for any given API Subscription Key is 5 calls per 60 second period. If an API Subscription Key reaches a throttling limit in a 60 second period, it receives a 429 Too Many Requests response on this and any other calls until the period resets. If an organization requires greater throughput on Service Provider API calls, please discuss your requirements with your account manager to have your license agreement adjusted.
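Client-side handling of the 429 response can be sketched as a simple wait-and-retry loop. This is a sketch only: the `call` argument is assumed to be any function returning an object with a `status_code` attribute (e.g., an HTTP response), and the injectable `sleep` parameter exists purely to make the helper testable.

```python
import time

def call_with_backoff(call, max_retries=3, sleep=time.sleep):
    # On a 429 Too Many Requests response, wait out the 60-second
    # throttling window before retrying; any other status is returned
    # to the caller immediately.
    for attempt in range(max_retries):
        response = call()
        if response.status_code != 429:
            return response
        sleep(60)  # default throttle period is 5 calls per 60 seconds
    return response
```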
Events
Events in Janus are notifications generated during various stages of its lifecycle, providing updates on changes or actions, such as the completion of a data gatherer or the creation of a new session. Janus offers a predefined set of notifications, but it also provides flexibility by allowing subscribers to control which events they receive based on their specific needs. These events can be observed by authorized parties and may trigger external actions based on participants’ activities or system processing activities.
To utilize these events, users must set up a webhook configured to receive the CloudEvents v1.0 schema and provide the webhook URL to MHS for registration. Only subscribed events are delivered to the webhook. Events can be subscribed to either during session setup or with subsequent Janus resource calls.
Once received, these events can initiate system workflows, such as sending a GET request to retrieve completed enrichment data upon receiving an "Enrichment Complete" notification. This eliminates the need for continuous polling for status, creating a more scalable and efficient system for handling assessments.
See: Events
Measures, Change Management, and Versioning
MHS updates Measures in two ways: versions and revisions. A version update is a significant change to the Measure in design, norming, content, etc. Multiple versions of a Measure may be available, and when available the Customer may use any of the available versions. MHS will provide a minimum of three hundred and sixty-five (365) days’ notice of the intent to withdraw a version of a Measure, which withdrawal shall be at MHS’ sole discretion.
Revisions are changes to an existing Measure. A revision denotes significant enhancements to portions of the Measure, such as the Evaluation step or an expansion of the items in the Data Gathering step. The Customer will have a minimum of one hundred and eighty (180) days to adopt a new revision before MHS ceases availability of the prior revision.
MHS occasionally introduces breaking changes to the APIs themselves. These changes are implemented as versions with a year-and-month version designator that is part of the URI space for the resources. MHS makes a best effort to support the current version plus one prior version. MHS will provide a minimum of 180 days’ notice in advance of retiring an API, except where security or operating/reliability priorities require otherwise. The most recent version of the API is version “2022-10”.