Customize the Processing of Jobs¶
Without customization, jobs are processed (converted) using defaults that fit most cases. The processing is configured via so-called flows, which are read and executed by the seal-convert-dispatcher service.
Basically, a flow is an asynchronous JavaScript function defined via the special def function. A flow receives the current job metadata as a parameter and may call other flows or converters, or manipulate the job metadata. Additionally, it can directly access further functions and objects provided as the program context.
The flows are kept in MongoDB and can be administered via the PLOSSYS CLI.
The first flow executed for every job is the main flow. Further standard flows (see below) are provided, and you can also implement your own customer-specific flows, which then override the standard flows of the same name.
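To illustrate the general shape of a flow, here is a minimal sketch; the flow name my_flow is hypothetical, while def, log and the job parameter are taken from the reference below:

def('my_flow', async function (job) {
  // Log the job identifiers provided in the job metadata.
  log.info('my_flow started', {
    uuid: job._id,
    jobId: job.refId
  });
  // A flow returns the (possibly modified) job metadata.
  return job;
});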
Standard Flows for Conversion¶
The following standard flows are called by the main flow for the conversion, in the specified order (see the conceptual sketch after this list):
- preconvert

  The preconvert flow converts non-PDF formats into PDF for further processing. For each known non-PDF format, a conversion service is called for the conversion into PDF.

- format

  The format flow prepares the document created by the preconvert flow for the printer's media, for example, by scaling or rotating.

- render

  The render flow changes the document content created by the format flow, for example, by adding stamps or watermarks.

- postconvert

  The postconvert flow converts the document created by the render flow into PostScript for the output to the printer.
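As a conceptual sketch only (the main flow shipped with the product may look different), this call order could be expressed with the documented job.call function like this:

def('main', async function (job) {
  // Conceptual order of the standard conversion flows.
  await job.call('preconvert');
  await job.call('format');
  await job.call('render');
  await job.call('postconvert');
  return job;
});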
Customer-specific User Exits¶
Each standard flow for the conversion can be extended by calling customer-specific user exits (see the sketch after this list):
- before_<flow_name> is called before the <flow_name> flow.
- after_<flow_name> is called after the <flow_name> flow has finished.
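For example, a user exit hooked in before the format flow could look like the following sketch; the duplex metadata field is an assumption for illustration only:

def('before_format', async function (job) {
  log.info('before_format user exit', {
    uuid: job._id,
    jobId: job.refId
  });
  // Assumed metadata field: force duplex printing before formatting.
  job.current.duplex = true;
  return job;
});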
Use a Customer-Specific Flow¶
To customize the processing of a job, implement a new flow and import it into the configuration by using the PLOSSYS CLI.
- Implement a new flow and save it to a file.

  For implementation details, refer to the Implementation Details below.

- Import the new flow by using the PLOSSYS CLI:

  plossys flow import <file_with_new_flow> --flowname <flow_name>

- Restart the following service:

  seal-convert-dispatcher
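For example, to replace the standard main flow with an implementation saved in a file named main.js (the file name is hypothetical):

plossys flow import main.js --flowname main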
Implementation Details¶
Supported Functions and Objects¶
The following functions and objects are supported when implementing a flow:
| Name | Type | Description |
|---|---|---|
| def(flowName, func) | function | Defines a new flow with the name flowName and the async flow function func |
| uuidv4() | function | Returns a new unique UUID v4 identifier |
| log | object | Logging; contains the methods debug, info, warn and error for the different log levels |
| Error | object | Standard JavaScript error object for throwing conversion or custom errors |
| convert(job, serviceParams) | function | Calls a service for converting a job |
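As an illustration of how these helpers can be combined, consider the following sketch; the flow name tag_job and the traceId metadata field are hypothetical:

def('tag_job', async function (job) {
  // Attach a unique trace id to the job metadata (assumed field name).
  job.current.traceId = uuidv4();
  log.debug('assigned trace id', { traceId: job.current.traceId });

  if (!job.current.printerName) {
    // Signal a custom error if no printer has been assigned yet.
    throw new Error('No printer assigned');
  }
  return job;
});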
convert(job, serviceParams)¶
convert is a function for calling a service to convert the job. The serviceParams object contains the string property service and the optional object property convertParams, which holds key-value pairs with converter-specific parameters; when the conversion is done by an executable (for example, sapgofu2pdf.exe), these parameters are passed to that executable.

- job

  Job to be converted

- serviceParams

  Object containing the service name and converter-specific options
await convert(job, { service: 'convert-dummy', convertParams: { '-convert-it': 'this way' } });
Job Objects¶
In addition to the metadata, the job object provides functions for calling another flow and for testing if a flow exists.
| Name | Description |
|---|---|
| exist(flowName) | Synchronous function returning true if a flow with the name flowName exists; otherwise, false |
| call(flowName, args) | Asynchronous function calling the flow named flowName with the optional args parameter |
Example - syntax
def('main', async function (job) {
  log.info('main dummy flow', {
    uuid: job._id,
    jobId: job.refId
  });

  if (job.current.printerName === 'pcl_1') {
    if (job.exist('move2printer')) {
      await job.call('move2printer', { printer: 'printer2' });
    } else {
      throw new Error('Flow move2printer is missing.');
    }
  }

  return job;
});

def('move2printer', async function (job, args) {
  log.info('move2printer flow', {
    uuid: job._id,
    jobId: job.refId,
    args
  });

  job.current.printerName = args.printer;
  return job;
});
Environment Variables¶
The following environment variables are evaluated by the seal-convert-dispatcher
service:
- CONSUL_TOKEN: ACL token with which the PLOSSYS 5 services authenticate themselves to Consul
- CONSUL_URL: URL of the Consul server to which the PLOSSYS 5 services log on
- CUSTOM_FLOW_<environment_variable>: Value for the environment variable <environment_variable> which is passed to the main flow
- JOB_MAX_CONVERSION_COUNT: Number of times a job will be postponed by the seal-convert-dispatcher service in case a conversion fails
- JOB_MAX_POSTPONED_COUNT: Number of times a job will be postponed by the service if, for example, the next service is not available
- JOB_RETRY_DELAY: Waiting time between the postponements if, for example, the next service is not available
- LOUNGE_TTL: Time interval between two transmission attempts in case of error (deprecated)
- MONGO_ACTIONS_URL: URL of the database where the internal system actions are stored
- MONGO_CONNECT_RETRIES: Number of attempts of a service to connect to the database
- MONGO_EVENTS_COLLECTION_SIZE: Size of the capped collection used to store events in the database
- MONGO_EVENTS_URL: URL of the database where the internal system notifications are stored
- MONGO_JOBS_URL: URL of the database where the job data is stored
- MONGO_LOCKS_URL: URL of the database where the lock data is stored
- MONGO_PREPROCESS_URL: URL of the database where the flow data is stored
- MONGO_PRINTERS_URL: URL of the database where the printer data is stored
- PACER_INTERVAL: Time interval after which the database is searched again for jobs that have not yet been processed
- PRINT_ERROR_SHEET: Flag for the output of an error sheet in case of error
- SERVICE_URL: URL of the service
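For example, to pass a value to the main flow via the CUSTOM_FLOW_ mechanism, the environment of the seal-convert-dispatcher service could contain the following setting (the name TARGET_PRINTER and its value are hypothetical):

CUSTOM_FLOW_TARGET_PRINTER=printer2

According to the naming scheme above, this passes the environment variable TARGET_PRINTER to the main flow.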