The Sysconsole Command

The Sysconsole command line tool handles administrative tasks not related to a deployed Infinity runtime environment.

Scope

The Sysconsole command handles the tasks listed in the Commands section below.

Setting Up the Environment

This section describes how to prepare your environment for using the Sysconsole command.

Start by creating a custom work folder, for example by copying one of the templates provided by your Infinity installation. Your Infinity installation contains a folder called work, in which the template folders reside.

Now perform the following steps:

  1. Start a command console (cmd).
  2. Switch to your custom work folder.
  3. Adapt the setenv.bat file residing in this folder.
  4. Now run the setenv.bat file.
  5. Add your client libraries and jar files required by the model according to the application server you use. Please refer to the Application Server Setup chapters in the Deployment Guide for detailed information on which libraries and jar files are required.
  6. Add a database driver. Please refer to the Audit Trail Database Setup chapters in the Deployment Guide for detailed information on the database drivers you need.
  7. Additionally add the database connector jar file and other required model related jar files.
  8. Adapt the carnot.properties file, residing in your CARNOT_WORK/etc folder.
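
Steps 5 to 7 above amount to extending the classpath with the required jar files. The following sketch shows this in Unix shell syntax for brevity (on Windows the equivalent set lines belong in setenv.bat); all paths and jar names are hypothetical examples, not names prescribed by Infinity:

```shell
# Sketch of steps 5-7: put the database driver and connector jars on the
# classpath. All paths and jar names below are hypothetical placeholders.
CARNOT_WORK="$HOME/carnot/work/custom"
CLASSPATH="$CARNOT_WORK/lib/ojdbc8.jar:$CLASSPATH"        # database driver (step 6)
CLASSPATH="$CARNOT_WORK/lib/db-connector.jar:$CLASSPATH"  # database connector (step 7)
export CLASSPATH
printf '%s\n' "$CLASSPATH"
```

Which jars are actually required depends on your application server and database; see the Deployment Guide chapters referenced above.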

Global Options

The sysconsole tool supports the following global options which can be used with every command:

Option                     Short form  Description
-dbdriver 'arg'            -r          The JDBC driver class to use.
-dbpassword 'arg'          -s          Audit trail DB password to use.
-dbschema 'arg'                        Audit trail DB schema to use.
-dbtype 'arg'              -t          The database type, e.g. oracle, db2, etc.
-dburl 'arg'               -l          The JDBC URL to use.
-dbuser 'arg'              -d          Audit trail DB user to use.
-force                     -f          Forces the command to execute without any callback.
-password 'arg'            -p          The password of the sysop user.
-verbose                   -v          Makes output more verbose.
-statementDelimiter 'arg'  -sd         Sets the delimiter string for all operations. Default values: "\nGO" for Sybase (\n adds a line feed); ";" for any other database.

The first five global options allow you to override the corresponding settings in the carnot.properties file.
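
As an illustration of how the overrides combine on one command line, the following sketch assembles and merely prints a hypothetical invocation (the URL, user and password values are placeholders, not defaults):

```shell
# Hypothetical invocation overriding the carnot.properties connection settings;
# the command line is only assembled and printed here, all values are placeholders.
CMD="sysconsole -dbtype oracle -dburl jdbc:oracle:thin:@dbhost:1521:orcl -dbuser carnot -dbpassword secret -password sysop version"
printf '%s\n' "$CMD"
```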

Commands

The following table lists the commands supported by the sysconsole.

Command Description
archive Deletes or archives the audit trail or parts of it. Please refer to section Audit Trail Archive Administration for detailed information.
  • -batchSize, -b 'arg' - determines how many process instances are processed per transaction, keeping the data in a consistent state. If this option is missing, a default batch size of 1000 is used.
  • -deadData, -d 'arg' - deletes dead workflow data values. It has to be used together with option -noBackup. Accepts as argument a single data ID or a comma separated list of data IDs.
  • -deadModels, -m deletes or archives audit trail for all dead models (models not having nonterminated process instances) and their preferences.
  • -deadProcesses, -p deletes or archives terminated process instances and their preferences.
  • -interval, -i 'arg' - the interval for which the archiving has to be performed, from now until 'arg', in format nn{d{ays}|h{ours}|m{inutes}}. You can use one of the time units d(ays), h(ours) or m(inutes), e.g. 3d or 10hours. If this option is missing, a default interval of 1 day is used.
  • -logEntries, -l - deletes or archives log entries.
  • -model, -v 'arg' - deletes or archives the audit trail for the model version with the specified OID, including preferences.
  • -noBackup, -n - only deletes and doesn't archive data.
  • -schemaName, -s 'arg' specifies the schema containing the backup tables.
  • -timestamp, -t 'arg' - restricts any operation to either process instances terminated before the given date or log records created before the given date (always inclusive). The specified date must conform to ISO date patterns (e.g. "2005-12-31", "2005-12-31 23:59" or "2005-12-31T23:59:59:999") or "yyyy/MM/dd hh:mm:ss:SSS" for backward compatibility.
  • -partition <ID[,ID]*> - identifies the partition(s) to operate against.
  • -processes <OID[,OID]*> - deletes or archives process instances by OID including their preferences. The process instances must be terminated (completed or aborted).
  • -userSession - archives or deletes user session entries.
archiveDDL Creates DDL for the Infinity schema:
  • -file, -f 'arg' - the DDL file name.
  • -schemaName, -s 'arg' - specifies the schema supposed to contain the backup tables.
auditTrail Manages Proxy Lock Tables (see detailed description in the sections below):
  • -createPartition <ID> - creates a new partition with the given ID, if no partition having this ID currently exists. Additionally creates the new partition's default domain, having the same ID as the partition.
  • -dropDataClusters, -ddc [-sql <spool-file>] [-partition ID[,ID]*] - drops Data Clusters.
  • -dropLockTables, -dlt [-sql <spool-file>] - drops existing Proxy Lock Tables.
  • -dropPartition <ID> - deletes the partition identified by the given ID and any contained data from the AuditTrail. Also cleans up user, realm and partition scope preferences.
    Requires explicit confirmation, which may be overridden with the -force parameter.
  • -enableDataClusters, -edc [-configFile <cluster-config-file>][-skipDDL] [-skipDML] [-sql <spool-file>] [-partition ID[,ID]*] - creates missing data cluster tables and synchronizes table content.
  • -enableLockTables, -elt [-skipDDL] [-skipDML] [-sql <spool-file>] - creates and enables Proxy Lock Tables.
  • -listPartitions lists all existing partitions.
  • -verifyDataClusters, -vdc [-partition ID[,ID]*] - verifies the existence of data cluster tables and their consistency.
  • -verifyLockTables, -vlt - verifies existence of proxy locking tables and their consistency.
createschema Creates the Infinity schema.
ddl Creates DDL for the Infinity schema. Possible arguments:
  • -file, -f 'arg' - the DDL file name.
  • -drop, -d - creates DDL for dropping the schema.
  • -schemaName, -s 'arg' - specifies the schema name to be used.
dropschema Drops the Infinity schema. This command requires the system operator password to be passed with the -password global option, which by default is sysop.
encrypt Encrypts the passed password and returns the encrypted password string to the console. Use the following arguments:
  • -password 'arg' - the password to be encrypted.
  • -passfile 'arg' - if this parameter is set, 'arg' is expected to be the path to a file. This file will be loaded and is expected to contain an encrypted password string previously created with the command "encrypt -password 'arg'". This string will be decrypted and then used for authentication. If -passfile is used, a simultaneously passed -password option will be ignored.
password Changes the password of the sysop user. It accepts a single argument, the new password:
  • -new/-n 'arg' - the new password.
property Maintains runtime properties, which override properties set in property files. This allows changing properties without the need to redeploy, which is useful for short test cycles.
  • -delete, -d 'arg' - deletes a runtime property.
  • -get, -g 'arg' - retrieves the current value of a runtime property.
  • -list, -l - lists existing runtime properties.
  • -locale <locale-value> - specifies the locale of the runtime property.
  • -set, -s 'arg' -value 'arg' - sets the (new) value of a runtime property.
upgrademodel Upgrades a model from a previous Infinity version. It supports the following arguments:
  • -file, -f 'arg' - the model file to upgrade in place.
  • -source, -s 'arg' - the source file to upgrade.
  • -target, -t 'arg' - the target file for upgrade.
Note that this command is only needed for Infinity versions older than 3.0; in newer versions you will be prompted for an upgrade in Eclipse. Please refer to the chapter Model Upgrade in the Modeling Guide for more information.
upgraderuntime Upgrades the audit trail from a previous Infinity version. The following arguments may be used:
  • -data, -a - Performs only data migration, preventing any SQL DDL execution. Combine with -step to perform migrations involving temporary schema versions.
  • -ddl, -l 'arg' - Spools the SQL DDL defining the audittrail schema migration into the specified file. No modifications to the audittrail will be performed.
  • -describe, -d - Describes the migration steps involved, including any temporary schema versions. No modifications to the audittrail will be performed.
  • -ignorelock, -i - forces an upgrade run even if the audit trail DB is already locked for upgrade.
  • -recover, -r - to force a recovery run of the upgrade.
  • -step, -s - Performs exactly one migration step. May require multiple invocations to fully perform migrations involving temporary schema versions.
version Returns version information for the Infinity Process engine.
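
To give a feel for typical invocations of the commands above, the following sketch assembles and merely prints an encrypt call and a property round trip. The property name and all values are hypothetical placeholders, not properties defined by Infinity:

```shell
# Two typical invocations, assembled and printed only; the property name and
# all values are hypothetical placeholders.
ENCRYPT_CMD="sysconsole encrypt -password mySecret"
SET_CMD="sysconsole -password sysop property -set my.test.property -value 42"
GET_CMD="sysconsole -password sysop property -get my.test.property"
printf '%s\n' "$ENCRYPT_CMD" "$SET_CMD" "$GET_CMD"
```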

The following sections provide details on archiving as well as lock table and cluster table administration:

Audit Trail Archive Administration

The archive command deletes or archives data from the audit trail. The deleted or archived data may be backed up in a second audit trail DB, the backup audit trail DB. The tool maintains the closure of the backed-up objects, e.g. by also backing up the corresponding models, model elements and grants. The backup audit trail may be cumulatively populated.

During a backup operation of long duration, the data remains consistent even if the network connection breaks, due to transactional processing. Repeating the command will resume it at the last position.

For all backup operations the archive schema name has to be provided with the option

-schemaName schemaname.

This argument is independent of any other argument. It specifies the target schema where the audit trail will be archived, but will be ignored if the argument -noBackup is specified.

The following main subcommands (provided as a command option) are mutually exclusive: -deadModels, -model, -deadProcesses, -logEntries, -deadData, -processes and -userSession.

Each of these options specifies a set of objects to be archived or deleted that cannot be combined with the set of objects specified by one of the other options. The archive command must specify exactly one of them.

The following sections explain the usage of the specific options in detail:

Archiving models with references to other models

In case the model(s) to be archived have references to other models, archiving is performed in the following way:

Backing Up All Dead Models

With the following command all dead models (i.e. no longer active and with no non-terminated processes) can be backed up:

-deadModels

This also backs up all dependent objects, i.e. process instances, activity instances, data values and log entries. Without the -noBackup option, the dead models stay in the audit trail.

In case of physical deletion the optional command

-noBackup

must be used. Attention: be sure that your backup was successful, otherwise your dead models cannot be restored.

Note: The archive command with model deletion should only be performed in maintenance windows without workflow activity, otherwise this might lead to inconsistencies in the audit trail.
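
A typical pair of invocations might look as follows; the sketch only assembles and prints the command lines, and the archive schema name arc_carnot is a hypothetical example:

```shell
# Hypothetical invocations, assembled and printed only: first back up all dead
# models into the archive schema "arc_carnot", then delete them without backup.
BACKUP_CMD="sysconsole -password sysop archive -deadModels -schemaName arc_carnot"
DELETE_CMD="sysconsole -password sysop archive -deadModels -noBackup"
printf '%s\n' "$BACKUP_CMD" "$DELETE_CMD"
```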

Backing Up A Model

The following option backs up a model with all its dependent objects:

-model oid (to be found in the database table model)

Without the -noBackup option, the model stays in the audit trail.

In case of physical deletion the optional command

-noBackup

must be used. Attention: be sure that your backup was successful, otherwise the model cannot be restored.

Note: The archive command with model deletion should only be performed in maintenance windows without workflow activity, otherwise this might lead to inconsistencies in the audit trail.

Backing Up Completed Process Instances

The following option backs up all completed or aborted process instances:

-deadProcesses

It specifies that all terminated processes will be archived or deleted.

With the following additional option you can restrict the deletion or archiving to process instances terminated before a certain timestamp:

-timestamp timestamp.

The specified date must conform to ISO date patterns (i.e. "2005-12-31", "2005-12-31 23:59" or "2005-12-31T23:59:59:999") or yyyy/MM/dd hh:mm:ss:SSS for backward compatibility with older Infinity releases.

Optionally, this command can also be used for plain deletion of data. This is done with the option

-noBackup.
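
Putting these options together, a typical pair of invocations might look as follows; the sketch only assembles and prints the command lines, and the archive schema name arc_carnot is a hypothetical example:

```shell
# Hypothetical invocations, assembled and printed only: archive process
# instances terminated before the given timestamp, then delete without backup.
ARCHIVE_CMD="sysconsole -password sysop archive -deadProcesses -schemaName arc_carnot -timestamp \"2005-12-31 23:59\""
DELETE_CMD="sysconsole -password sysop archive -deadProcesses -timestamp \"2005-12-31 23:59\" -noBackup"
printf '%s\n' "$ARCHIVE_CMD" "$DELETE_CMD"
```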

Backing Up Log Entries

The following option backs up log entries that are not referenced by other objects (e.g. by process instances):

-logEntries

(Referenced log entries will be archived when archiving the object, e.g. an activity or process instance.)

With the following option you can restrict the deletion to log entries created before a certain timestamp:

-timestamp timestamp

Optionally, this command can also be used for deletion. This is done with the option:

-noBackup

Deleting Data Values

The following option deletes all data values of terminated process instances for a specific workflow data:

-deadData dataid (qualified ID of the model element)

Note that this command must be used with the additional option -noBackup, because standalone data can only be deleted and not archived.

The argument -noBackup is independent of any other argument. If it is specified, the argument -schemaName will be ignored.
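
A typical invocation might look as follows; the sketch only assembles and prints the command line, and the qualified data IDs are hypothetical examples:

```shell
# Hypothetical invocation, assembled and printed only; the data IDs are
# placeholders. -noBackup is mandatory, as standalone data can only be deleted.
CMD="sysconsole -password sysop archive -deadData MyModel:OrderTotal,MyModel:CustomerName -noBackup"
printf '%s\n' "$CMD"
```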

Identifying partition(s)

This option is used to identify the partition(s) to operate against.

-partition <ID[,ID]*>

The given partitions will be used as search scope for additional arguments like -deadProcesses or -deadModels.

Archiving and deleting user session entries

With this option user session entries can be archived (first backup to archive schema and then delete from source schema) or deleted. Here are some usage examples:

  1. The following command inserts all user session entries that are not already archived and have an expiration timestamp <= "2011-01-31 11:30:00" into the archive schema and afterwards deletes them from the source schema.
    sysconsole -password sysop archive -userSessions -schemaName arc_carnot -timestamp "2011-01-31 11:30:00"
  2. The following command does the same as above but without backup:
    sysconsole -password sysop archive -userSessions -schemaName arc_carnot -timestamp "2011-01-31 11:30:00" -noBackup
  3. The following command does the same as above but for all existing user session entries.
    sysconsole -password sysop archive -userSessions -schemaName arc_carnot -noBackup

Deleting process instances

The -processes <OID[,OID]*> option deletes process instances by OID. It expects an explicit list of process instance OIDs that will be archived or deleted. Note that the process instances must be terminated (completed or aborted).

Note

Note that if the parameter -partition is not used, the archiving command of sysconsole has an effect only on the default partition.

Proxy Lock Tables Administration

Lock tables will be enabled with the command

sysconsole auditTrail -enableLockTables [-skipDDL] [-skipDML] [-sql <spool-file>]

This command creates any missing proxy locking tables and synchronizes their content with the existing rows in the associated original tables. The option

-skipDDL

indicates that the lock tables already exist; their creation will be skipped.

With the option

-skipDML

the synchronization of lock tables with the associated original tables will be skipped.

With

-sql <spool-file>

the required statements will be spooled to <spool-file> instead of executing creation and synchronization statements against the audit trail.
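
Combining these options, a two-pass approach might look as follows; the sketch only assembles and prints the command lines, and the spool file path is a hypothetical example:

```shell
# Hypothetical invocations, assembled and printed only: spool the lock table
# DDL for review, then run only the synchronization step in a second pass.
SPOOL_CMD="sysconsole -password sysop auditTrail -enableLockTables -skipDML -sql /tmp/lock_tables.sql"
SYNC_CMD="sysconsole -password sysop auditTrail -enableLockTables -skipDDL"
printf '%s\n' "$SPOOL_CMD" "$SYNC_CMD"
```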

The command

sysconsole auditTrail -verifyLockTables

verifies the existence of all required lock tables as well as the completeness of the proxy rows with regard to the existing rows in the associated original tables. If any inconsistency is found, an error message is produced. Inconsistencies may then be fixed by applying the auditTrail -enableLockTables command.

The command

sysconsole auditTrail -dropLockTables [-sql <spool-file>]

will drop any existing lock table from the audit trail. With

-sql <spool-file>

the required statements will be spooled to <spool-file> instead of executing drop statements against the audit trail database.

Cluster Table Administration

The command

sysconsole auditTrail -enableDataClusters [-configFile <cluster-config-file>] [-skipDDL] [-skipDML] [-sql <spool-file>] 
[-partition ID[,ID]*]

will create the cluster tables and synchronize their content with the existing rows in the DATA_VALUE table according to the provided configuration data.

A configuration file provided via -configFile is required if no cluster configuration is present in the audit trail, and must be omitted if one is already present. Otherwise an error message is produced, stating that either a configuration file has to be specified or that auditTrail -dropDataClusters needs to be performed first. The option

-configFile <cluster-config-file>

specifies the configuration file which shall be deployed to the audit trail. With

-skipDDL

it is assumed that the data cluster tables are already created; their creation will be skipped. Using the option

-skipDML

synchronization of data cluster tables with the existing rows in the DATA_VALUE table will be skipped. With

-sql <spool-file>

the statements will be spooled to <spool-file> instead of executing creation and synchronization statements against the audit trail.
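
A typical invocation might look as follows; the sketch only assembles and prints the command line, and the configuration file name, spool file path and partition IDs are hypothetical examples:

```shell
# Hypothetical invocation, assembled and printed only; the configuration file
# name, spool file path and partition IDs are placeholders.
CMD="sysconsole -password sysop auditTrail -enableDataClusters -configFile carnot-dataclusters.xml -sql /tmp/clusters.sql -partition default,test"
printf '%s\n' "$CMD"
```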

sysconsole auditTrail -verifyDataClusters [-partition ID[,ID]*]

The command above verifies the existence of all required data cluster tables as well as their consistency with the existing rows in the DATA_VALUE table. If any inconsistency is found, an error message is produced. Inconsistencies may then be fixed by applying the auditTrail -enableDataClusters command.

The command

sysconsole auditTrail -dropDataClusters [-sql <spool-file>] [-partition ID[,ID]*]

will drop any existing data cluster table from the audit trail. Using the option

-sql <spool-file>

the required drop statements will be spooled to <spool-file> instead of being executed against the audit trail.

The optional argument for all three DataCluster commands

-partition ID[,ID]*

identifies the partition(s) to operate against. If multiple partitions are specified, cluster definition modification will either be performed successfully against all given partitions or fail completely. Cluster DDL and DML operations will be performed in separate transactions and support idempotent invocation in case of errors.