Using the Sysconsole Command

The Sysconsole command line tool handles administrative tasks not related to a deployed Infinity Process Platform runtime environment.

Scope

The Sysconsole command handles the following tasks:

Setting Up the Environment

This section describes how to prepare your environment for using the Sysconsole command.

Start by downloading one of the Maven archetype templates matching your requirements from the Infinity Process Platform artifactory. Please refer to the chapter Creating a Runtime Environment with Apache Maven in the Installation Guide, section Maven Archetypes, of our Infinity Process Platform Wiki Maven/Basic Setup page for details on how to retrieve these configurations.

Perform the following steps:

  1. Start a command console (cmd).
  2. Switch to your custom work folder.
  3. Add your client libraries and jar files required by the model according to the application server you use. Please refer to the Application Server Setup chapters in the Deployment Guide for detailed information on which libraries and jar files are required.
  4. Add a database driver. Please refer to the Audit Trail Database Setup chapters in the Deployment Guide for detailed information on the database drivers you need.
  5. Additionally, add the database connector jar file and other required model-related jar files.
  6. Adapt the carnot.properties file residing in the etc folder of your workspace environment (see the example below).
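
The carnot.properties file defines, among other things, the audit trail database connection that the sysconsole uses. The following is only a minimal sketch: apart from AuditTrail.Schema, which is referenced later in this chapter, the exact property keys depend on your archetype template, so verify them against the generated carnot.properties and replace all values with your own:

     # Audit trail connection settings (property names may differ in your template)
     AuditTrail.Type = ORACLE
     AuditTrail.DriverClass = oracle.jdbc.driver.OracleDriver
     AuditTrail.URL = jdbc:oracle:thin:@localhost:1521:xe
     AuditTrail.User = carnot
     AuditTrail.Password = carnot
     AuditTrail.Schema = ipp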

Using Archetypes to create Environments for Sysconsole Clients

Infinity provides archetypes to create environments for console and sysconsole clients. Please refer to chapter Creating a Runtime Environment with Apache Maven in the Installation Guide section Stardust Archetypes of our Infinity Process Platform Wiki Maven/Basic Setup page for details.

Global Options

The sysconsole tool supports the following global options which can be used with every command:

Option Short form Description
-dbdriver <arg> -r The JDBC driver class to use.
-dbpassword <arg> -s Audit trail DB password to use.
-dbschema <arg>   Audit trail DB schema to use.
-dbtype <arg> -t The database type, e.g. oracle, db2, etc.
-dburl <arg> -l The JDBC URL to use.
-dbuser <arg> -d Audit trail DB user to use.
-force -f Forces the command to execute without any callback.
-password <arg> -p The password of the sysop user.
-verbose -v Makes output more verbose.
-statementDelimiter <arg> -sd Sets the delimiter string for all operations. Default values:
  • \nGO - for Sybase (\n is used to add a line feed)
  • ; - for any other database

The database-related global options (-dbdriver, -dbpassword, -dbschema, -dbtype, -dburl and -dbuser) allow you to override the corresponding settings in the carnot.properties file.
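
For example, the audit trail connection settings can be overridden directly on the command line; the values below are placeholders for an assumed Oracle setup:

     sysconsole -dbtype ORACLE -dbdriver oracle.jdbc.driver.OracleDriver -dburl jdbc:oracle:thin:@localhost:1521:xe -dbuser carnot -dbpassword <password> -password sysop auditTrail -listPartitions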

Commands

The following table lists the commands supported by the sysconsole.

Command Description Example
archive Deletes or archives the audit trail or parts of it. Please refer to section Audit Trail Archive Administration for detailed information. Also, refer to the chapter Audit Trail Archive of the Operation Guide
  • -batchSize/-b <arg> - Determines how many process instances should be processed in one transaction. The data will be kept in a consistent state. If this option is missing, a default batch size of 1000 will be used. You have to specify at least one of the following options: model, processes, deadModels, deadProcesses, deadData, logEntries and userSessions.
  • -deadData/-d <arg> - deletes workflow data values and all referenced data history values (if existent) for terminated process instances. It has to be used together with option -noBackup. Accepts as argument a single data ID or a comma separated list of data IDs.
  • -deadModels/-m - deletes or archives the audit trail for all dead models (models not having nonterminated process instances) and their preferences. You need to specify the archive audit trail schema (option -schemaName) with this option.
  • -deadProcesses/-p - deletes or archives terminated process instances and their preferences.
  • -interval/-i <arg> - The interval for which the archiving has to be performed, from now until <arg>, in format nn{d{ays}|h{ours}|m{inutes}}. You can use one of the time units d(ays), h(ours) or m(inutes), e.g. 3d or 10hours. If this option is missing, a default interval of 1 day will be used.
  • -logEntries/-l - deletes or archives log entries.
  • -model/-v <arg> - deletes or archives the audit trail for the model version with the specified OID, including preferences.
  • -noBackup/-n - only deletes and doesn't archive data.
  • -schemaName/-s <arg> - specifies the schema containing the backup tables.
  • -timestamp/-t <arg> - Restricts any operation to either process instances terminated before the given date or log records created before the given date (always inclusive). The specified date must conform to ISO date patterns (i.e. "2005-12-31", "2005-12-31 23:59" or "2005-12-31T23:59:59:999") or "yyyy/MM/dd hh:mm:ss:SSS" for backward compatibility.
  • -partition <ID[,ID]*> - identifies the partition(s) to operate against.
  • -processes <OID[,OID]*> - deletes or archives process instances by OID including their preferences. The OID should be of a root process instance. The process instances must be terminated (completed or aborted).
  • -userSessions - archives or deletes user session entries.
  • -dropDataClusters/-ddc [-sql <spool-file>] [-partition ID[,ID]*] - drops data clusters.
  • -enableDataClusters/-edc -configFile <cluster-config-file> [-skipDDL] [-skipDML] [-sql <spool-file>] [-partition ID[,ID]*] - creates missing data cluster tables and synchronizes table content.
  • -verifyDataClusters/-vdc [-partition ID[,ID]*] - verifies the existence of data cluster tables and their consistency.
  • sysconsole -password sysop archive -s <Specify target schemaname> -deadModels -batchSize 2000
  • sysconsole -password sysop -force archive -deadData aBoolean,aString,anInt,aBigSerializable,OrderBook1 -noBackup
  • sysconsole -password sysop archive -s <Specify target schemaname> -deadModels
  • sysconsole -password sysop archive -s <Specify target schemaname> -deadProcesses
  • sysconsole -password sysop archive -deadData aBoolean,aString,anInt,aBigSerializable,OrderBook1 -noBackup -interval 1m
  • sysconsole -password sysop archive -s <Specify target schemaname> -logEntries
  • sysconsole -password sysop archive -s <Specify target schemaname> -model <model OID>
  • sysconsole -password sysop archive -s <Specify target schemaname> -deadProcesses -partition <Specify partition ID>
  • sysconsole -p sysop archive -noBackup -userSessions
archiveDDL Creates DDL for the Infinity Process Platform schema:
  • -file/-f <arg> - the DDL file name.
  • -schemaName/-s <arg> - specifies the schema supposed to contain the backup tables.
  • sysconsole -password sysop archiveDDL -file <Specify file name> -s <Specify schemaname>
auditTrail Manages the audit trail database, e.g. partitions, Proxy Lock Tables and data cluster tables (see the detailed descriptions in the sections below):
  • -checkConsistency/-cco [-partition <ID>[,<ID>]*] - runs consistency checks to test whether any problem instances exist in the audit trail. See section Checking Consistency in the Audit Trail for details.
  • -createPartition <ID> - creates a new partition with the given ID, if no partition having this ID currently exists. Additionally creates the new partition's default domain, having the same ID as the partition.
  • -dropDataClusters/-ddc [-sql <spool-file>] [-partition ID[,ID]*] - drops Data Clusters.
  • -dropLockTables/-dlt [-sql <spool-file>] - drops existing Proxy Lock Tables.
  • -dropPartition <ID> - deletes the partition identified by the given ID and any contained data from the AuditTrail. Also cleans up user, realm and partition scope preferences.
    Requires explicit confirmation, which may be overridden with the -force parameter.
  • -enableDataClusters/-edc -configFile <cluster-config-file> [-skipDDL] [-skipDML] [-sql <spool-file>] [-partition ID[,ID]*] - creates missing data cluster tables and synchronizes table content.
  • -enableLockTables/-elt [-skipDDL] [-skipDML] [-sql <spool-file>] - creates and enables Proxy Lock Tables.
  • -listPartitions - lists all existing partitions.
  • -upgradeDataClusters/-udc -configFile <cluster-config-file> - allows modifications to the data cluster. Any exceptions during the upgrade (e.g. duplicate column name, invalid index name) result in the deletion of the cluster definition. The cluster table itself is kept in that case.
  • -verifyDataClusters/-vdc [-partition ID[,ID]*] - verifies the existence of data cluster tables and their consistency.
  • -verifyLockTables/-vlt - verifies existence of proxy locking tables and their consistency.
  • -synchronizeDataClusters/-sdc [-partition ID[,ID]*] - identifies inconsistencies in the data cluster tables.
Options for MySQL sequence support (in case the database type is set to MYSQL_SEQ either per global option -dbtype or in your carnot.properties file):
  • -dropSequenceTable/-dst [-sql <spool-file>] - drops table sequence and function next_sequence_value_for.
  • -enableSequenceTable/-est [-skipDDL] [-skipDML] [-sql <spool-file>] - creates and initializes table sequence and adds function next_sequence_value_for.
  • -verifySequenceTable/-vst - checks if data synchronization is required.
  • sysconsole -password sysop auditTrail -createPartition 15
  • sysconsole -password sysop auditTrail -dropPartition 15
  • sysconsole -password sysop auditTrail -listPartitions
  • sysconsole -password sysop auditTrail -dropLockTables [-sql C:/myfile.sql]
  • sysconsole -password sysop auditTrail -enableDataClusters -configFile <specify .xml cluster configuration file>
  • sysconsole -p sysop auditTrail -verifyDataClusters
  • sysconsole -p sysop auditTrail -dropDataClusters
createschema Creates the Infinity Process Platform schema. sysconsole -password sysop createschema
ddl Creates DDL for the Infinity Process Platform schema. Possible arguments:
  • -file/-f <arg> - the DDL file name
  • -drop/-d - creates DDL for dropping the schema
  • -dropProcessInstances/-pi - Oracle specific! Creates DDL for dropping process instances. Please refer to section Generating SQL scripts to delete Process Instances (Oracle) for details on restrictions and usage.
  • -schemaName/-s <arg> - Specifies the schema name to be used
  • -statementDelimiter/-sd <arg> - sets the delimiter string for all operations.
    Default values:
    • \nGO - for Sybase (\n is used to add a line feed)
    • ; - for any other database
  • sysconsole ddl -file newddlfile
  • PL/SQL script generation for dropping process instances:

       sysconsole ddl -dropProcessInstances -file target-file-name.sql

    The resulting script is target-file-name.sql.

dropschema Drops the Infinity Process Platform schema. This command requires the system operator password to be passed with the -password global option, which by default is sysop. sysconsole -p sysop dropschema
encrypt Encrypts the password passed and returns the encrypted password string to the console. Use the following argument:
  • password <arg> - the password to be encrypted.
  • passfile <arg> - If this parameter is set, <arg> is expected to be the path to a file. This file will be loaded and is expected to contain an encrypted password string previously created with the command "encrypt -password <arg>". This string will be decrypted and then used for authentication. If "-passfile" is used, a "-password" option passed at the same time will be ignored.
  • dbpassfile <arg> - this parameter points to a file containing an encrypted audit trail password previously created with the command "encrypt -password <arg>". This string will be decrypted and then used for authentication. If "-dbpassfile" is used, a "-dbpassword" option passed at the same time will be ignored.
fixruntimeoids Fixes invalid runtime OIDs in the audit trail database. Possible arguments:
  • -logonly - if specified, no database operation will be performed
  • -nolog - if specified, no log file will be written
Note that this might need to be run iteratively! In some cases additional runs might be necessary after the first run.
Refer to section Fixing invalid runtime OIDs in audit trail for details.
password Changes the password of the sysop user. It accepts a single argument, the new password:
  • -new/-n <arg> - the new password.
property Maintains runtime properties that override properties set by property files. Thus, changes to properties are possible without the need to redeploy, which is useful for short test cycles.
  • -delete/-d <arg> - deletes a runtime property.
  • -get/-g <arg> - retrieves the current value of a runtime property.
  • -list/-l - lists existing runtime properties.
  • -locale <locale-value> - specifies the locale of the runtime property.
  • -set/-s <arg> -value <arg> - sets the (new) value of a runtime property. Note that some properties are protected and cannot be deleted or modified. Please refer to section Protected Properties for details.
upgrademodel Upgrades a model from a previous Infinity Process Platform version. It supports the following arguments:
  • -file/-f <arg> - the model file to upgrade in place.
  • -source/-s <arg> - the source file to upgrade.
  • -target/-t <arg> - the target file for upgrade.
upgraderuntime Upgrades the audit trail from a previous Infinity Process Platform version. The following arguments may be used:
  • -data/-a - Performs only data migration, preventing any SQL DDL execution. Note that the data argument always needs to be combined with -step to guarantee a correct result!
  • -ddl/-l <arg> - Spools the SQL DDL defining the audit trail schema migration into the specified file. No modifications to the audit trail will be performed.
    Note that the ddl argument always needs to be combined with -step to guarantee a correct result!
  • -describe/-d - Describes the migration steps involved, including any temporary schema versions. No modifications to the audit trail will be performed.
  • -ignorelock/-i - forces an upgrade run even if the audit trail DB is already locked for upgrade.
  • -recover/-r - to force a recovery run of the upgrade.
  • -step/-s - Performs exactly one migration step. May require multiple invocations to fully perform migrations involving temporary schema versions.
  • -verbose/-v - Displays additional information about the upgrade job.
  • sysconsole upgraderuntime -verbose (audit trail will be upgraded and detailed information of the upgrade will be displayed)
  • sysconsole upgraderuntime -describe -verbose (detailed upgrade information will be displayed but audit trail will not be modified)
version Returns version information for the Infinity Process engine.

Command Details

The following sections provide details on archiving as well as lock table and cluster table administration:

Audit Trail Archive Administration

The archive command deletes or archives data from the audit trail. The deleted or archived data may be backed up in a second audit trail DB, the backup audit trail DB. The execution of the tool is responsible for maintaining the closure of the backed-up objects, e.g. backing up the corresponding models, model elements and grants. The backup audit trail may be cumulatively populated.

During a long-running backup operation the data remains consistent even if the network connection breaks, because processing is transactional. Repeating the command resumes it at the last position.

For all backup operations the archive schema name has to be provided with the option

-schemaName schemaname.

This argument is independent of any other argument. It specifies the target schema where the audit trail will be archived, but will be ignored if the argument -noBackup is specified.

The following main subcommands (provided as a command option) can be used and are mutually exclusive:

Each of these options specifies a set of objects to be archived or deleted that cannot be combined with the set of objects specified by one of the other options. The archive command must specify exactly one of them.

The following sections explain the usage of the specific options in detail:

Archiving models with references to other models

In case the model(s) to be archived have references to other models, archiving is performed in the following way:

Backing Up All Dead Models

With the following command all dead models (i.e. no longer active and with no non-terminated processes) can be backed up:

-deadModels

This also backs up all dependent objects, i.e. process instances, activity instances, data values, data value history (if existent) and log entries. Without the -noBackup option the dead models stay in the audit trail.

Note that after this operation a flushCaches call is required to clear the engine's model cache and synchronize it with the changed state of the audit trail. The corresponding console command is:

console engine -init

For details on this command refer to section Commands Overview of chapter Using the Console Command. Calling this command is not possible in the same sysconsole invocation, as the sysconsole cannot access services or the runtime caches.
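
A possible maintenance sequence could therefore look as follows (the archive schema name is a placeholder; the console command may require additional connection options, as described in Using the Console Command):

     sysconsole -password sysop archive -s <Specify target schemaname> -deadModels
     console engine -init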

In case of physical deletion the option

-noBackup

must be used. Attention: Be sure that your backup was successful. Otherwise your dead models can't be restored.

Note: The archive command with model deletion should only be performed in maintenance windows without workflow activity, otherwise this might lead to inconsistencies in the audit trail.

Backing Up A Model

The following option backs up a model with all its dependent objects:

-model oid (the OID can be found in the database table model)

Whenever a new model is archived, the archive command checks for OID consistency with the older models and fixes the according OID registry and related references, if required.

Without the -noBackup option the model stays in the audit trail.

In case of physical deletion the option

-noBackup

must be used. Attention: Be sure that your backup was successful. Otherwise the deleted model can't be restored.

Note: The archive command with model deletion should only be performed in maintenance windows without workflow activity, otherwise this might lead to inconsistencies in the audit trail.

Backing Up Completed Process Instances

The following option backs up all completed or aborted process instances:

-deadProcesses

It specifies that all terminated processes will be archived or deleted.

With the following additional option you can restrict the deletion or archiving to process instances terminated no later than a certain timestamp:

-timestamp timestamp.

The specified date must conform to ISO date patterns (i.e. "2013-12-31", "2013-12-31 23:59" or "2013-12-31T23:59:59:999") or yyyy/MM/dd hh:mm:ss:SSS for backward compatibility with older Infinity Process Platform releases.

Optionally, this command can also be used for plain deletion of data. This is done with the option

-noBackup.
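
For example, to archive all process instances terminated on or before a certain date into a backup schema (the schema name is a placeholder):

     sysconsole -password sysop archive -s <Specify target schemaname> -deadProcesses -timestamp "2013-12-31 23:59"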

Backing Up Log Entries

The following option backs up log entries that are not referenced by other objects (e.g. by process instances):

-logEntries

(Referenced log entries will be archived when archiving the object, e.g. an activity or process instance.)

With the following option you can restrict the deletion or archiving to log entries created no later than a certain timestamp:

-timestamp timestamp

Optionally, this command can also be used for deletion. This is done with the option:

-noBackup

Deleting Data Values

The following option deletes all data values and all referenced data history values (if existent) of a specific workflow data for terminated process instances:

-deadData dataid (qualified ID of the model element)

Note that this command must be used with the additional option -noBackup, because standalone data can only be deleted and not archived.

The argument -noBackup is independent of any other argument. If it is specified, the argument -schemaName will be ignored.

Identifying partition(s)

This option is used to identify the partition(s) to operate against.

-partition <ID[,ID]*>

The given partitions will be used as search scope for additional arguments like -deadProcesses or -deadModels.

Archiving and deleting user session entries

With this option user session entries can be archived (first backup to archive schema and then delete from source schema) or deleted. Here are some usage examples:

  1. The following command inserts all user session entries that are not already archived and that have an expiration timestamp <= "2011-01-31 11:30:00" into the archive schema, and deletes them afterwards from the source schema.
    sysconsole -password sysop archive -userSessions -schemaName arc_carnot -timestamp "2011-01-31 11:30:00"
  2. The following command does the same as above but without backup:
    sysconsole -password sysop archive -userSessions -schemaName arc_carnot -timestamp "2011-01-31 11:30:00" -noBackup
  3. The following command does the same as above but for all existing user session entries.
    sysconsole -password sysop archive -userSessions -schemaName arc_carnot -noBackup

Deleting process instances

The -processes <OID[,OID]*> option deletes process instances by OID. It expects an explicit list of root process instance OIDs that will be archived or deleted. Note that the process instances must be terminated (completed or aborted).
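
A possible call, with hypothetical OIDs of terminated root process instances and a placeholder schema name, could look as follows:

     sysconsole -password sysop archive -processes 1001,1002 -s <Specify target schemaname>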

Archiving Spawned Processes

The spawned root process cannot be archived until all the child processes of the spawned process are complete.

Archiving Aborted and Started Processes

If the target process is in terminated state then it gets archived along with its source process and linked processes.

Archiving Aborted and Joined Processes

If the target process is in terminated state then it gets archived along with its source process and linked processes.

Archiving Case Process Instance

The case process instance can be archived only if it is in a terminated state. All processes of the case also get archived, as they are also in a terminated state.

Note

Note that if the parameter -partition is not used, the archiving command of sysconsole has an effect only on the default partition.

Archiving of Business Object Process Instances

For each business object instance a process instance is created. Each business object process instance is indicated by a value of -1 in the process definition column in the database. The business objects are excluded from deletion from the source schema, but are archived/synchronized to the archive schema. The following describes the behavior when archiving an audit trail that contains business object process instances.

Checking Consistency in the Audit Trail

The auditTrail command option -checkConsistency runs consistency checks to test whether any problem instances exist in the audit trail. The check determines whether the audit trail contains data of Document and Document-Set types that are shared between super- and subprocesses, which is no longer supported. A property Infinity.Dms.SharedDataExist will be set in the audit trail indicating whether the check passed or failed. This property will be evaluated by the archiver in order to determine whether simplified treatment in archiving can be applied or not. Note that archiving slows down if such shared data exist.
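
For example, the check can be restricted to specific partitions (the partition ID is a placeholder):

     sysconsole -password sysop auditTrail -checkConsistency -partition <Specify partition ID>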

Proxy Lock Tables Administration

Lock tables will be enabled with the command

sysconsole auditTrail -enableLockTables [-skipDDL] [-skipDML] [-sql <spool-file>]

This command creates any missing proxy locking tables and synchronizes their content with the existing rows in the associated original tables. The option

-skipDDL

indicates that the lock tables are already created, so their creation will be skipped.

With the option

-skipDML

the synchronization of lock tables with the associated original tables will be skipped.

With

-sql <spool-file>

the required statements will be spooled to <spool-file> instead of executing creation and synchronization statements against the audit trail.
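
For example, to spool the creation and synchronization statements to a file instead of executing them (the file path is a placeholder):

     sysconsole -password sysop auditTrail -enableLockTables -sql C:/locktables.sql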

The command

sysconsole auditTrail -verifyLockTables

verifies the existence of all required lock tables as well as the completeness of proxy rows with regard to the existing rows in the associated original tables. If any inconsistency is found, an error message will be produced. Any inconsistency may then be fixed by applying the auditTrail -enableLockTables command.

The command

sysconsole auditTrail -dropLockTables [-sql <spool-file>]

will drop any existing lock table from the audit trail. With

-sql <spool-file>

the required statements will be spooled to <spool-file> instead of executing drop statements against the audit trail database.

Cluster Table Administration

The command

sysconsole auditTrail -enableDataClusters [-configFile <cluster-config-file>] [-skipDDL] [-skipDML] [-sql <spool-file>] 
[-partition ID[,ID]*]

will create the cluster tables and synchronize their content with the existing rows in the DATA_VALUE table according to the provided configuration data.

The configuration file provided by -configFile is required if no cluster configuration is present in the audit trail, and must be omitted if one is already present. Otherwise, an error message is produced stating that either a configuration file has to be specified or auditTrail -dropDataClusters needs to be performed first. The option

-configFile <cluster-config-file>

specifies the configuration file which shall be deployed to the audit trail. With

-skipDDL

it is assumed that the data cluster tables are already created, so their creation will be skipped. Using the option

-skipDML

synchronization of data cluster tables with the existing rows in the DATA_VALUE table will be skipped. With

-sql <spool-file>

the statements will be spooled to <spool-file> instead of executing creation and synchronization statements to the audit trail.

The following command

sysconsole auditTrail -verifyDataClusters [-partition ID[,ID]*]

verifies the existence of the data cluster tables and their consistency. Any inconsistency may then be fixed by applying the auditTrail -enableDataClusters command.

The following command

sysconsole auditTrail -dropDataClusters [-sql <spool-file>] [-partition ID[,ID]*]

will drop any existing data cluster table from the audit trail. Using the option

-sql <spool-file>

the required drop statements will be spooled to <spool-file> instead of being executed against the audit trail.

The optional argument for all three DataCluster commands

-partition ID[,ID]*

identifies the partition(s) to operate against. If multiple partitions are specified, cluster definition modification will either be successfully performed against all given partitions or fail completely. Cluster DDL and DML operations will be performed in separate transactions and support idempotent invocation in case of errors.
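
For example, to verify the data cluster tables of more than one partition in a single call (the partition IDs are placeholders):

     sysconsole -p sysop auditTrail -verifyDataClusters -partition <Specify partition ID>,<Specify partition ID>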

The following command

sysconsole auditTrail -synchronizeDataClusters [-partition ID[,ID]*]

identifies inconsistencies in the data cluster tables of the given partition(s).

Runtime Upgrade

Executing the command upgraderuntime displays the progress of upgrade job execution.

The progress information is displayed similar to the following snippet:

Upgrading audit trail DB:

Database type : MYSQL
Database URL : jdbc:mysql://localhost:3306/ipp
Database user : carnot
Database schema : ipp
Database driver : com.mysql.jdbc.Driver

Do you want to proceed? (Y/N): y

Upgrading Runtime.

Running job 'y.0.0' against item 'Runtime Environment' with version 'x.0.5'.

Upgrading schema...
Upgrade Job Ry_0_0fromx_x_xRuntimeJob:
Upgrading data block: 1 of 9. 0 % completed.
Upgrading data block: 2 of 9. 11 % completed.
Upgrading data block: 3 of 9. 22 % completed.
Upgrading data block: 4 of 9. 33 % completed.
Upgrading data block: 5 of 9. 44 % completed.
Upgrading data block: 6 of 9. 55 % completed.
...Schema upgrade done.
Migrating data...
Upgrading data block: 7 of 9. 66 % completed.
Upgrading data block: 8 of 9. 77 % completed.
Upgrading Datatypes...
Partition with OID: 1
Upgrading Datatypes...done.
...Data Migration done.
Upgrading Model...
...Model migration done.
Finalizing schema...
Upgrading data block: 9 of 9. 88 % completed.
Upgrade Job Ry_0_0fromx_x_xRuntimeJob: 100 % completed.
...Schema finalization done.
Upgrade to version y.0.0 done, upgrading runtime version stamp...
...Version stamp updated.
Upgrade to version y.0.0 done.

Runtime upgraded.

Viewing Additional Information during Runtime Upgrade Job

The following upgrade details are displayed when the -v/-verbose option is used:


Upgrade from version x.x.x to x.x.x:

Upgrade schema task:
A new table 'department' with the columns 'oid', 'id', 'name', 'partition', 'parentDepartment', 'description', 'organization' and indexes 'department_idx1' and 'department_idx2' will be created.
A new table 'department_hierarchy' with the columns 'superDepartment', 'subDepartment' and indexes 'department_hier_idx1' and 'department_hier_idx2' will be created.
The new columns 'currentUserPerformer', 'currentPerformer' and 'currentDepartment' will be created in table 'activity_instance' and indexes 'activity_inst_idx2' and 'activity_inst_idx3' will be modified.
The new columns 'performerKind', 'performer', 'department' and 'state' will be created in table 'workitem' and index 'workitem_idx2' will be modified.
The new columns 'department' and 'onBehalfOfDepartment' will be created in table 'act_inst_history'.
The new columns 'participant' and 'department' will be created in table 'user_participant' and index 'user_particip_idx2' will be modified.
The new column 'extendedState' will be created in table 'workflowuser'.

Upgrade from version x.x.x to x.x.x:

Upgrade schema task:
A new table 'preferences' with the columns 'ownerId', 'ownerType', 'moduleId', 'preferencesId', 'partition', 'stringValue' and index 'preferences_idx1' will be created.
The table 'message_store' will be dropped.
A new table 'model_ref' with the columns 'code', 'modelOid', 'id', 'refOid', 'deployment' and indexes 'model_ref_idx1' and 'model_ref_idx2' will be created.
A new table 'model_dep' with the columns 'oid', 'deployer', 'deploymentTime', 'validFrom', 'deploymentComment' and indexes 'model_dep_idx1', 'model_dep_idx2' and 'model_dep_idx3' will be created.
A new table 'model_dep_lck' with the column 'oid' and index 'model_dep_lck_idx' will be created. (only if AuditTrail.UseLockTables = true)
A new column 'deployment' will be created in table 'process_instance'.

Migrate data task:
Table 'model_ref' will be populated.
Table 'model_dep' will be populated.
Field 'deployment' in table 'process_instance' will be populated.
Index 'user_particip_idx2' in table 'user_participant' will be modified.
Permissions will be inserted into table 'preferences'.
Model Id will be added to xml data cluster definition.

Upgrade from version x.x.x to x.x.x:

Upgrade schema task:
The new columns 'criticality', 'propertiesAvailable' and index 'activity_inst_idx9' will be created in table 'activity_instance'.
A new column 'criticality' will be created in table 'workitem'.
A new table 'procinst_link' with the columns 'processInstance', 'linkedProcessInstance', 'linkType', 'createTime', 'creatingUser' and 'linkingComment' will be created.
A new table 'link_type' with the columns 'oid', 'id', 'description', 'partition' and index 'link_type_idx1' will be created.
Datacluster setup key will be upgraded to 'org.eclipse.stardust.engine.core.runtime.setup_definition' in column 'name' in table 'property'.
A new table 'partition_lck' with column 'oid' and index 'partition_lck_idx' will be created. (only if AuditTrail.UseLockTables = true)

Migrate data task:
Initializes the field 'propertiesAvailable' in table 'activity_instance'.
Missing XPaths which are needed to store the revisionComment will be created for Structured Datatypes.

Finalize schema task:
Default link types will be added.

Step-by-step Runtime Upgrade

You can execute the runtime upgrade step by step. Suppose you want to upgrade from Infinity Process Platform x.0 to Infinity Process Platform y.1. First, perform the upgrade using the -ddl option in combination with the -step option.

sysconsole -p sysop upgraderuntime -verbose -step -ddl x_0Toy_0.txt

The following upgrade details are written to the x_0Toy_0.txt file.

// y.0.0 schema upgrade DDL

ALTER TABLE ipp.activity_instance ADD CRITICALITY DOUBLE;
ALTER TABLE ipp.activity_instance ADD PROPERTIESAVAILABLE INT;
CREATE INDEX activity_inst_idx9 ON ipp.activity_instance(CRITICALITY, PROCESSINSTANCE);
UPDATE ipp.activity_instance SET criticality = -1;
UPDATE ipp.activity_instance SET propertiesAvailable = 0;
ALTER TABLE ipp.workitem ADD CRITICALITY DOUBLE;
UPDATE ipp.workitem SET criticality = -1;
CREATE TABLE ipp.procinst_link (PROCESSINSTANCE BIGINT, LINKEDPROCESSINSTANCE BIGINT, LINKTYPE BIGINT, CREATETIME BIGINT, CREATINGUSER BIGINT, LINKINGCOMMENT VARCHAR(255));
CREATE TABLE ipp.link_type (OID BIGINT AUTO_INCREMENT PRIMARY KEY, ID VARCHAR(50), DESCRIPTION VARCHAR(255), PARTITION BIGINT);
CREATE UNIQUE INDEX link_type_idx1 ON ipp.link_type(OID);
CREATE TABLE ipp.partition_lck (OID BIGINT);
CREATE UNIQUE INDEX partition_lck_idx ON ipp.partition_lck(OID);

// y.0.0 schema finalization DDL

Using the -ddl parameter writes the SQL that performs the audit trail upgrade into a DDL script. Up to this point the audit trail is not modified; it is only modified once you execute the DDL script. So, run the upgrade command with the -step option to modify the audit trail and upgrade to y.0. Note that the -ddl parameter should only be used in combination with the -step option!

Now run the following command with -step and -ddl options:

sysconsole -p sysop upgraderuntime -verbose -step -ddl y_0Toy_1.txt

The following upgrade details are written to the y_0Toy_1.txt file.

// y.1.0 schema upgrade DDL

ALTER TABLE ipp.data_value ADD DOUBLE_VALUE DOUBLE;
UPDATE ipp.data_value SET double_value=0.0;
ALTER TABLE ipp.structured_data_value ADD DOUBLE_VALUE DOUBLE;
UPDATE ipp.structured_data_value SET double_value=0.0;

// y.1.0 schema finalization DDL

Generating SQL scripts to delete Process Instances (Oracle)

The -dropProcessInstances option of the ddl command allows you to create DDL for dropping process instances. Please note that this option has the following restrictions:

The following example command invokes a generation of a PL/SQL script for dropping process instances:

sysconsole ddl -dropProcessInstances -file target-file-name.sql

The resulting script is target-file-name.sql.

Note that to use the generated script, you have to create the following temporary table before the package can be successfully compiled:

CREATE GLOBAL TEMPORARY TABLE ipp_tools$pi_oids_to_delete (oid NUMBER) ON COMMIT DELETE ROWS
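
A possible way to apply this on Oracle, assuming an SQL*Plus client (the connect string is a placeholder; how the deletion is actually invoked is defined by the generated package), is to create the temporary table and then run the generated script:

     sqlplus <user>/<password>@<database>
     SQL> CREATE GLOBAL TEMPORARY TABLE ipp_tools$pi_oids_to_delete (oid NUMBER) ON COMMIT DELETE ROWS;
     SQL> @target-file-name.sql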

Fixing invalid runtime OIDs in audit trail

Use the command fixruntimeoids to repair archive OID inconsistencies in an audit trail database. For example:

sysconsole -password sysop -dbschema arctarg -force fixruntimeoids -nolog

This command fixes invalid runtime OIDs in the database schema arctarg without writing a log file.

Note that this might need to be run iteratively! In some cases additional runs might be necessary after the first run. Please refer to section How to find out if further runs are necessary for details on when an iterative run is required.

How to find out the first time that you need to run the command

To find out the first time that you need to run the command, point your Portal to the archived database and start the server.

In case you face an inconsistent OID issue during server startup then you should execute the fixruntimeoids command on the archived database to resolve the inconsistency.

How to find out if further runs are necessary

To find out if further runs are necessary, try one of the following options:

Running the fixruntimeoids command on a MySQL database

To run the fixruntimeoids command on a MySQL database, you have to set the property AuditTrail.Schema in your carnot.properties file, for example:

AuditTrail.Schema = <your archived DB schema>

Protected Properties

The following properties cannot be deleted or modified via the Sysconsole command property:

Property Name Description
sysop.password password of sysop
Note that this property also cannot be retrieved with the list option (sysconsole property -list).
carnot.version the version of the audit trail
product.name the name of the product:
  • Infinity Process Platform, or
  • Stardust Process Manager
org.eclipse.stardust.engine.core.runtime.setup_definition dummy value which marks that the property is available
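
For example, the current runtime properties, including the protected properties listed above (except sysop.password), can be inspected with the property command:

     sysconsole -password sysop property -list
     sysconsole -password sysop property -get carnot.version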