Triggering the Dragon Voice recognizer
Note: The content in this topic is for Dragon Voice in on-premise deployments.
To trigger Dragon Voice recognition, provide a URL with key/value pairs to control the details:
<grammar src="http://base_path/filename?key=value"></grammar>
Syntax:
<grammar> is an element in your VoiceXML application. To understand how <grammar> fits into the workflow, see VoiceXML application structure and Dragon Voice recognition flow.
base_path points to the central repository where you store the artifacts (manifest file, models, and wordsets).
filename is optional. It specifies a DLM or wordset.
key=value pairs load artifacts needed for recognition. A question mark (?) introduces the first pair; use an ampersand (&) to append additional pairs (see the example after the table below).
Key=value pairs | Description |
---|---|
nlptype=config | Trigger for Dragon Voice recognition. Loads artifacts as defined in the manifest for the duration of the session. |
nlptype=krypton | Trigger for Dragon Voice recognition. Loads artifacts using their fully qualified paths for the next recognition event. |
nlptype=nle | Loads a semantic model for extracting meaning from text. Required for semantic interpretation. You can load one semantic model during a session; it cannot be changed for the remainder of the session. Not allowed for Krypton-only recognition. |
nlptype=wordset | Optional. Loads a wordset containing new vocabulary into the models. You can load more than one wordset during a session. See Using wordsets. |
dlm_weight=value | Assigns a weight or importance to a domain language model. See Understanding weight. Required when loading a DLM for recognition scope (via nlptype=krypton). Optional when loading a DLM for session scope (via nlptype=config). |
builtin_NAME_weight=value | Optional. Specifies one or more builtins and assigns a weight. You can only load builtins that are supported by the base language model. |
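For example, a single src value can chain the recognition trigger with weight assignments. The host, path, and artifact names below are placeholders taken from later examples in this topic:
<grammar src="http://base_path/A1000_MainMenu_DLM.zip?nlptype=krypton&dlm_weight=0.1&builtin_digits_weight=medium"/>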
Dragon Voice artifacts
You create artifacts using Nuance tools that are provided separately from Speech Suite. The output from these tools consists of artifacts that become inputs to Dragon Voice in Speech Suite.
Note: After generating a manifest and its artifacts, store them in the same base directory, and specify that directory as the base path of the <grammar> src attribute in VoiceXML documents. If you generate more than one manifest and set of artifacts, store each set in a different base directory. (You cannot substitute or move files from one artifact set to another. For example, you cannot insert a DLM from one set into the file path of a different set.)
Artifact | Description |
---|---|
Manifest file | Required. You must supply a manifest file to identify the project, artifacts, and resources that serve each recognition event. See Understanding the manifest file. Note: The <grammar> element points to the storage location but does not explicitly name the manifest file. The filename must be nuance_package.json. The NLP service automatically fetches the manifest and loads the engines. |
Semantic models | NLU (natural language understanding) and linguistic models for the NLE and NTpE components. Nuance creates these models on your behalf, or you create them with the Nuance Command Line Interface, Nuance Experience Studio, or Nuance Mix Tools. Not allowed for Krypton-only recognition. |
Domain language models | Optional. DLMs provide specialized knowledge of a domain or application-specific content. These models add to the factory or base language model that Krypton loads on startup. |
Wordsets | Optional vocabularies that inject dynamic content at runtime, for example, a list of contact names or payees. You create wordsets in JSON format. |
Loading artifacts
You can load Dragon Voice artifacts in different scopes: service, session, and recognition. Service scope comprises the period when the recognizer service is running, session scope typically comprises the time when a call is connected, and recognition scope comprises the period during which the recognizer is processing input. Service is the highest scope and recognition is the lowest. A single service scope can contain multiple session scopes, and a single session scope can contain multiple recognition scopes.
While artifacts provide faster and more accurate recognition, loading them can be time-consuming. To reduce the latency experienced by users, it is good practice to load artifacts at service or session scope rather than at recognition scope, when they are actually used. Preloading at service or session scope can significantly reduce latency, especially when the artifacts are large and are used in multiple recognition events. The disadvantage of preloading is that the artifacts occupy resources before they are needed.
In some situations, you may prefer to load artifacts at recognition scope because you don't know which artifacts are needed until immediately before recognition. The downside is that users may experience latency during the call (while the artifacts load) and that the artifacts are available for only one recognition turn.
Loading artifacts for service scope
Service scope begins when the recognition service starts up or restarts. Accordingly, if you load artifacts at service scope, they are available without delay to subsequent sessions and recognition turns. Loading at service scope is different from loading at session and recognition scope because it doesn't use a manifest.

To preload a DLM at service scope, edit the service’s configuration file and then start the service.
Open the krypton.yaml file manually and specify the DLM in the preload section of the configuration.
Note: This must be done by editing the configuration file. You can view the preload configuration in Management Station, but do not edit these settings from Management Station because they will not be parsed correctly.
Note: Krypton modules (DLMs) are not cached across application restarts.
The preload section of the .yaml file uses the following format:
preload:
- dataPack:
    language:
    topic:
    objects:
    - url:
      weight:
      type:
These fields are defined as follows:
Service property | Description | Data type | Default |
---|---|---|---|
preload:dataPack | An array of one or more data packs with DLMs to preload into the instance. For multi-language applications, repeat the language, topic, and objects fields. | object | n/a |
preload:dataPack:language | Language and locale identifier in the form xxx-YYY, for example eng-USA. The value is case-sensitive. | string | none |
preload:dataPack:topic | Language model name, for example GEN. The value is case-sensitive. | string | none |
preload:dataPack:objects | One or more DLMs to preload, to be available as static content for all sessions in the instance. The maximum number of loaded DLMs for a single recognition turn is 5. | object | n/a |
preload:dataPack:objects:url | URL of the DLM zip file, either remotely with http[s]:// or locally with file:// | string | none |
preload:dataPack:objects:weight | The weight of the DLM compared to the data pack: lowest, low, medium, high, highest, or an integer 0-1000. Default is 0. This value is optional. | int | 0 |
preload:dataPack:objects:type | A keyword representing the type of object to be preloaded. This value is optional. | string | none |
Here's an example that preloads DLMs for three different domains in two languages, with one dataPack entry defined for each language. Note that the values do not need to be surrounded by single or double quotes, but they are used here for legibility.
preload:
- dataPack:
    language: 'eng-USA'
    topic: 'GEN'
    objects:
    - url: 'http://host/path/finance-dlm.zip'
      weight: 0
      type: application/x-nuance-domainlm
    - url: 'http://host/path/airline-dlm.zip'
      weight: 0
      type: application/x-nuance-domainlm
    - url: 'http://host/path/pizza-dlm.zip'
      weight: 0
      type: application/x-nuance-domainlm
- dataPack:
    language: 'cmn-PRC'
    topic: 'GEN'
    objects:
    - url: 'http://host/path/finance-mandarin-dlm.zip'
      weight: 0
      type: application/x-nuance-domainlm
    - url: 'http://host/path/airline-mandarin-dlm.zip'
      weight: 0
      type: application/x-nuance-domainlm
    - url: 'http://host/path/pizza-mandarin-dlm.zip'
      weight: 0
      type: application/x-nuance-domainlm
For details about configuring the Krypton service, see Configuring Krypton.

To preload an NLE model at service scope, use Management Station to set the resource.preloadOnStartup properties (described in the note below) in the Monitoring & Control tab, and then start (or restart) the service.
Note: resource.preloadOnStartup is an array of modules to load. Each item should be added as resource.preloadOnStartup[i] where [i] is the index number of the module you are adding.
Alternatively, you can add the properties to the User-nle01.properties file before starting. For details about configuring NLE, see Configuring NLE.
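As a rough sketch (assuming each entry takes the URL or local path of a semantic model zip file, as in the recognition-scope example later in this topic; the paths here are hypothetical), the corresponding entries in User-nle01.properties might look like this:
resource.preloadOnStartup[0] = http://host/path/nle_model.zip
resource.preloadOnStartup[1] = http://host/path/another_nle_model.zip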
Preloading artifacts for session scope
Session scope begins when a user is connected on a call. This is a good time to preload artifacts that are likely to be used during the call. Use the nlptype=config keyword pair to load and activate artifacts for the remainder of the session. You can do this while a welcome prompt plays, so that the user does not notice the load time.

To load models for the duration of a session, load the contents of the manifest file by adding nlptype=config to the <grammar> element in your VoiceXML application.
<grammar src="http://base_path/?nlptype=config"/>
where
- base_path points to the central repository where you store the artifacts.
- nlptype is the trigger for using Dragon Voice recognition and interpretation.
- config is the trigger to load the base language model and the semantic model (if configured in the manifest). You can load one semantic model during a session; it cannot be changed for the remainder of the session. For Krypton-only recognition, set nlps-audio-only or server.nlps.audioOnly (not shown here); you must not load a semantic model (and the manifest must not configure one).
With additional key=value pairs, you can load built-in grammars and DLMs that are defined in the manifest. See Preloading builtins: session scope and Preloading DLMs: session scope.
Note: The <grammar> element points to the storage location but does not explicitly name the manifest file. The filename must be nuance_package.json. The NLP service automatically fetches the manifest and loads the engines.
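For context, here is a minimal VoiceXML sketch (the form name, field name, and prompt wording are hypothetical) that loads the session-scope artifacts while the welcome prompt plays:
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.1" xmlns="http://www.w3.org/2001/vxml">
  <form id="welcome">
    <field name="mainmenu">
      <!-- Session-scope load: fetches nuance_package.json from base_path -->
      <grammar src="http://base_path/?nlptype=config"/>
      <prompt>Welcome. How can I help you today?</prompt>
      <filled>
        <log>Caller said: <value expr="mainmenu"/></log>
      </filled>
    </field>
  </form>
</vxml>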

To load a domain language model while loading the manifest, append the DLM name. You can load more than one DLM, and you can load additional DLMs later in the session.
Note: To avoid a duplicate objects error in Krypton recognition, don't load the same DLM under different names and/or URLs in one SIP session. Instead, refer to the DLM using the same URL and name or, more efficiently, refer to the same, already loaded DLM in multiple recognition requests within one SIP session.
This example loads the DLM with the default weight defined in the manifest:
<grammar src="base_path/?nlptype=config&1000_MainMenu"></grammar>
This example loads the DLM and overrides the weight defined in the manifest:
<grammar src="base_path/?nlptype=config&1000_MainMenu_weight=medium"/>
This example loads two DLMs by using ampersand (&) to concatenate their names:
<grammar src="base_path/?nlptype=config&1000_MainMenu_weight=0.1&2000_BillingMenu_weight=0.25"></grammar>
For more on weight calculations, see Understanding weight.

This manifest excerpt sets the weight of 1000_MainMenu and 2000_BillingMenu to 0.25 and 0, respectively:
"krypton" : { "dpTopic" : "GEN", "dpVersion" : "3.7.1", "sessionObjects" : [ { "id" : "1000_MainMenu", "type" : "application/x-nuance-domainlm", "url" : "./A1000_MainMenu_DLM.zip", "weight" : 0.25 }, { "id" : "2000_BillingMenu", "type" : "application/x-nuance-domainlm", "url" : "./A2000_BillingMenu_DLM.zip", "weight" : 0 } ], … }
Note: Setting a weight of zero reduces the probability of the words in the model to an extremely low value, but does not deactivate the DLM. To disable the model, do not load it.

To load a builtin while loading the manifest, append the name and assign a weight.
This example loads the DIGITS builtin associated with the base model and activates it with medium weight (0.25):
<grammar src="base_path/?nlptype=config&builtin_digits_weight=medium"></grammar>
This example loads and activates two builtins with explicit weights:
<grammar src="base_path/?nlptype=config&builtin_financial_weight=0.25&builtin_date_weight=0.5"></grammar>
For more on weight calculations, see Understanding weight.
Note: This discussion is about Dragon Voice builtins. There is no relationship to Nuance Recognizer built-in grammars. To see which builtins are available for your base language model, see the data pack Readme.
Loading artifacts for recognition scope
Use the nlptype=krypton and nlptype=wordset keyword pairs to load and activate objects for the next recognition event. This allows your application to respond flexibly to momentary needs. For example, you could load a DLM or wordset in the middle of a call to improve accuracy based on the context of the conversation.

For Krypton-only recognition, set nlps-audio-only or server.nlps.audioOnly (not shown here), specify the path to the manifest storage location, and use the nlptype=krypton pair.
You must have at least one DLM when loading Krypton-only artifacts at the recognition scope. If you have none, then use nlptype=config instead. See Loading artifacts from the manifest: session scope.
This example loads two DLMs and sets their weights:
<grammar src="http://base_path/A1000_MainMenu_DLM.zip?nlptype=krypton&dlm_weight=0.1"/>
<grammar src="http://base_path/A2000_BillingMenu_DLM.zip?nlptype=krypton&dlm_weight=0.25"/>

For recognition and interpretation, specify the path and filename of the semantic model, and use the nlptype=nle pair.
Note: When you load an NLE model with this mechanism, it loads at the session scope and cannot be changed.
This example references the semantic model and the DLMs:
<grammar src="http://base_path/nle_model.zip?nlptype=nle"/>
<grammar src="http://base_path/A1000_MainMenu_DLM.zip?nlptype=krypton&dlm_weight=0.1"/>
<grammar src="http://base_path/A2000_BillingMenu_DLM.zip?nlptype=krypton&dlm_weight=0.25"/>

To load a wordset, specify the full path and filename, and use the nlptype=wordset string.
This example loads two wordsets:
<grammar src="http://base_path/custom1_wordset.json?nlptype=wordset"></grammar> <grammar src="http://base_path/custom2_wordset.json?nlptype=wordset"></grammar>
You cannot assign weights to wordsets. Instead, they share a combined weight of 0.1. See Understanding weight.
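The wordset file itself is a JSON document; see Using wordsets for the exact schema. As a purely illustrative sketch (the entity name and entries are hypothetical, and the field names assume the common Nuance wordset layout of literal/spoken pairs keyed by entity), a wordset might look like this:
{
  "PAYEE_NAMES": [
    { "literal": "Aardvark Pest Control", "spoken": ["aardvark pest control"] },
    { "literal": "ACME Electric" }
  ]
}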

To load a domain language model, specify the filepath, and use the nlptype=krypton string. Optionally, assign a weight. For example:
<grammar src="base_path/new_dlm.zip?nlptype=krypton&dlm_weight=0.1"></grammar>

To load a builtin, append the name and assign a weight. You can load one or more builtins per recognition turn.
This example loads the BOOLEAN builtin associated with the base model and activates it with 0.5 weight:
<grammar src="http://base_path/A1000_MainMenu_DLM.zip?nlptype=krypton&dlm_weight=0.1&builtin_BOOLEAN_weight=0.5"/>
For more on weight calculations, see Understanding weight.
Note: This discussion is about Dragon Voice builtins. There is no relationship to Nuance Recognizer built-in grammars. To see which builtins are available for your base language model, see the data pack Readme.