Edit Job Attributes

The Job Designer's Edit Job interface allows the user to edit basic job attributes. By default, a newly created job is placed into the first available Partition and Group. The user can change the Partition and/or Group that the job is in, but only to Partitions and Groups that the user has rights to.

Hints:
Place your mouse over the "Name" or "Description" labels to see a tooltip of the full text. This can be helpful if the job's name or description is long.

Each job is assigned a priority, which determines how the job is ordered once it is placed in the Queue for processing. A priority of 1 is the highest and 20 is the lowest: a job with priority 1 will run before a job with priority 2, even if the priority-2 job entered the Queue first. By default, each job is assigned a priority of 10 when it is first created.
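The ordering rule above can be sketched with a small priority queue model. This is an illustration only, not JobServer's actual Queue implementation; the class and field names are hypothetical. Lower priority numbers run first, and jobs with equal priority keep their insertion (FIFO) order:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Illustrative model of the Queue's ordering rule (not JobServer internals):
// lower priority number runs first; ties are broken by insertion order.
class JobQueueSketch {
    static class QueuedJob {
        final String name;
        final int priority; // 1 = highest, 20 = lowest; default is 10
        final long seq;     // insertion order, used to break ties
        QueuedJob(String name, int priority, long seq) {
            this.name = name; this.priority = priority; this.seq = seq;
        }
    }

    private long seqCounter = 0;
    private final PriorityQueue<QueuedJob> queue = new PriorityQueue<>(
            Comparator.comparingInt((QueuedJob j) -> j.priority)
                      .thenComparingLong(j -> j.seq));

    void submit(String name, int priority) {
        queue.add(new QueuedJob(name, priority, seqCounter++));
    }

    String next() {
        QueuedJob j = queue.poll();
        return j == null ? null : j.name;
    }

    public static void main(String[] args) {
        JobQueueSketch q = new JobQueueSketch();
        q.submit("nightly-report", 10); // default priority, queued first
        q.submit("urgent-fix", 1);      // queued later, but runs first
        System.out.println(q.next());   // urgent-fix
        System.out.println(q.next());   // nightly-report
    }
}
```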

To make the job do something useful, you will need to add one or more Tasklet components to it. Click the "Edit Job Workflow" button to open an interface where you can add and configure Tasklets for the job.

Retire/Unretire Job

Retiring a job marks it as no longer actively used. This is a good way to take a job out of circulation, so to speak, while still keeping it around for historical purposes. Retiring is also a prerequisite for deletion: only retired jobs can be deleted. A retired job cannot be scheduled, and it will no longer appear in normal job search results unless you explicitly search for retired jobs. Unretiring a job is just as easy; simply click the "Unretire Job" button and the job is put back into normal circulation.

Alert Emails

Email alerts are sent to the listed addresses when the job encounters any kind of unexpected failure or error. Up to 5 email addresses can be specified. Note that developer/user generated logging errors do not raise email alerts; only errors resulting from Tasklet exceptions or from low-level internal errors will trigger emails.

Job alerts notify users when a job failure occurs during processing. These are typically failures where the Job/Tasklet throws an unexpected exception that prevents the Tasklet or job from continuing; for example, an uncaught out-of-memory or SQL exception would constitute such a situation. A Job/Tasklet that throws TaskletFailureException will also trigger an alert. Note that errors and warnings logged via Log4J or the Java Logging API do not trigger an email alert.

The email alerts use a cascading mechanism: an alert is first sent to the email address listed at the system level, then to the addresses defined for the job's Partition, then to the job's Group alert addresses, and finally to the alert email addresses defined for the specific job. With this design you can set up a hierarchy of email alerts. For example, you can arrange to receive email only when a specific job fails, or when any job in a specific Partition fails, etc.

A Tasklet may also trigger alerts programmatically by using the SOAFaces API; refer to TaskletOutputContext.sendAlert() in the API Javadocs.
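The two alert paths above can be sketched as follows. The interface and exception below are simplified stand-ins for the real SOAFaces types (the actual sendAlert() signature may differ); the logic shows the distinction: a handled error can send a custom alert and let the job continue, while throwing TaskletFailureException fails the Tasklet and triggers the cascading email alerts.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the SOAFaces TaskletOutputContext; the real
// sendAlert() signature may differ.
interface TaskletOutputContext {
    void sendAlert(String message);
}

// Stand-in for the SOAFaces TaskletFailureException named above.
class TaskletFailureException extends Exception {
    TaskletFailureException(String msg) { super(msg); }
}

class AlertingTaskletSketch {
    // Sketch of a Tasklet body hitting an error during processing.
    static void process(TaskletOutputContext ctx, boolean recoverable)
            throws TaskletFailureException {
        try {
            throw new IllegalStateException("simulated failure"); // pretend work failed
        } catch (IllegalStateException e) {
            if (recoverable) {
                // Handled error: send a custom alert, job keeps running.
                ctx.sendAlert("recovered from: " + e.getMessage());
            } else {
                // Unrecoverable: failing the Tasklet triggers cascading alerts.
                throw new TaskletFailureException(e.getMessage());
            }
        }
    }

    public static void main(String[] args) throws Exception {
        List<String> sent = new ArrayList<>();
        TaskletOutputContext ctx = sent::add; // records alerts for the demo
        process(ctx, true);
        System.out.println(sent); // the recoverable path sent one alert
    }
}
```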

Advanced Options

JVM Run Option

If permitted by the Partition, a job can run in its own external and isolated JVM, separate from the JobServer system JVM and from any other job. At the Partition level, you can require that all jobs in the Partition run exclusively in their own JVMs, require that they all run in a shared JVM with all other jobs, or permit each job to choose which JVM mode it wants to use.

JVM Max Memory

If the job is allowed to run in an external JVM, you can optionally specify the maximum JVM memory size. If you leave it blank, the Partition's default maximum is used; in either case the Partition's maximum cannot be exceeded.

Start External JVM with Container Listener

If this is checked, each time the job runs in its own external JVM process, the ContainerStartupListener implementation is executed before the job runs. If the ContainerStartupListener property is not set, this option does nothing and is ignored. This property is used to execute common logic/code needed by each job to initialize global properties or settings before the job can run. You can configure the implementation in the jobserver/conf/product-config.properties file; this is normally done by the JobServer administrator when necessary.
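A listener implementation might look like the sketch below. The interface shown is a simplified stand-in (the real ContainerStartupListener in the soafaces package may declare different method names and arguments); the point is that one-time global initialization runs once at JVM startup, before any Tasklet.

```java
// Hypothetical stand-in for the soafaces ContainerStartupListener interface;
// the real interface's method signatures may differ.
interface ContainerStartupListener {
    void onContainerStartup();
}

// Example listener that initializes settings shared by all jobs in the
// external JVM before any job/Tasklet runs.
class GlobalInitListener implements ContainerStartupListener {
    static volatile boolean initialized = false;

    @Override
    public void onContainerStartup() {
        // Illustrative only: load shared config, warm caches, start any
        // one-time background threads here rather than inside a job.
        System.setProperty("example.cache.dir", "/tmp/jobserver-cache");
        initialized = true;
    }

    public static void main(String[] args) {
        new GlobalInitListener().onContainerStartup();
        System.out.println(initialized); // true
    }
}
```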

Retry Rule

If a job fails abruptly during a run (its process crashes or the server crashes), JobServer can optionally retry the job after the system has recovered. A job can be restarted from the beginning or from the last Tasklet in which it failed. Note that a failure by a Tasklet/job during processing will not necessarily lead to a retry: retries only occur when there is a low-level error that is not handled by the JobServer environment, for example when an Agent or the primary/secondary server crashes.

When a job is retried, RUN scope cookies are preserved across the retries. This allows a retried Tasklet to determine what state it was in when it failed, and therefore how it should handle the retry.
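The checkpointing pattern this enables can be sketched as follows. The real RUN-scope cookie API lives in the SOAFaces Tasklet context; here a plain Map stands in for it, and the cookie key name is purely illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of RUN-scope cookies surviving a retry. A Map stands in for the
// real SOAFaces cookie API; the key name is illustrative only.
class RetryCookieSketch {
    static final String PROGRESS_KEY = "rows.processed"; // hypothetical key

    // On each (re)run, the Tasklet reads the cookie to find where it stopped.
    static int resumePoint(Map<String, String> runCookies) {
        return Integer.parseInt(runCookies.getOrDefault(PROGRESS_KEY, "0"));
    }

    // The Tasklet checkpoints progress so a retry can skip finished work.
    static void checkpoint(Map<String, String> runCookies, int rowsDone) {
        runCookies.put(PROGRESS_KEY, Integer.toString(rowsDone));
    }

    public static void main(String[] args) {
        Map<String, String> cookies = new HashMap<>(); // preserved across retries
        checkpoint(cookies, 500);                 // first run fails after 500 rows
        System.out.println(resumePoint(cookies)); // retry resumes at 500
    }
}
```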

Retry from start

This option retries the job from the beginning. RUN scope cookies are shared between job/Tasklet retries.

Retry from last aborted Tasklet

If a job is retried using the "last aborted Tasklet" option, the state and output of the completed Tasklets are essentially copied to the retried job, and processing continues from the last Tasklet that was interrupted. RUN scope cookies are shared between job/Tasklet retries.

Retry of JVM Jobs

Jobs that are run in their own custom JVM will be retried if the JVM subprocess fails abruptly.

Retry of Shared JVM Jobs

Jobs that run as threads in the primary/secondary or remote Agent servers will be retried if the primary/secondary or remote server fails abruptly.

Tasklet Enabled Mule APIs

For an external JVM job, you can enable (or disable) loading of the built-in Mule classloader, which allows the Tasklet to use Mule/SOA APIs, such as the MuleClient API, during Tasklet/Job processing. If you are running in a shared JVM, this option is not necessary, since the availability of Mule is set at the system-wide JobServer environment level. This setting only controls the loading and availability of Mule for external JVM jobs on an as-needed basis; it is a per-job option so that jobs that do not use Mule avoid the overhead of loading and initializing the Mule classloaders.

Note that the Mule-related features are only available if the Mule package is included in your JobServer installation. If Mule is not installed and enabled system wide in your JobServer environment, you will not see these Mule features.

Refer to the SOAFaces API Javadocs to learn more about accessing the MuleClient interface from your Tasklet; specifically, see org.soafaces.bundle.workflow.WorkflowContainer.getMuleClient(). This interface lets you access a MuleClient from within a shared or external JVM, provided Mule is available and enabled in your JobServer environment. You may optionally create your own MuleClient object and context if you do not want to use the built-in one, but you must then initialize the MuleClient context yourself during the startup phase of JobServer, or of an external JVM, using the ContainerStartupListener. Initializing a MuleClient context usually creates a background thread, so if you do not do this at startup time, the job will hang, since a job will typically not complete processing while background threads created during job processing are still running.
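A defensive access pattern for the container-provided client might look like the sketch below. The interfaces are empty stand-ins for the real org.soafaces.bundle.workflow.WorkflowContainer and Mule MuleClient types, and the assumption that getMuleClient() returns null when Mule is unavailable is ours, not documented behavior:

```java
// Hypothetical stand-ins for org.soafaces.bundle.workflow.WorkflowContainer
// and Mule's MuleClient; the real types have many more methods.
interface MuleClient { }

interface WorkflowContainer {
    MuleClient getMuleClient();
}

class MuleAccessSketch {
    // Guarded access: we assume getMuleClient() yields null when Mule is not
    // installed/enabled, so the Tasklet can degrade gracefully.
    static String describe(WorkflowContainer container) {
        MuleClient client = container.getMuleClient();
        return client == null ? "mule-unavailable" : "mule-ready";
    }

    public static void main(String[] args) {
        WorkflowContainer noMule = () -> null;
        System.out.println(describe(noMule)); // mule-unavailable
    }
}
```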

Other Job Processing Options

Please note that a job will not finish processing, and will appear as "running" in the JobTracker tool, if any background threads created during processing are still running when the last Tasklet completes. You can disable this behavior (waiting for background threads to complete) system wide by setting the property TaskBeanThreadJoin=false in the JobServer configuration files. This allows a job to finish processing even though background threads are still running in one or more Tasklets.

Using this feature is not recommended in typical situations. Jobs should not normally create background threads and leave them running, as this can have bad side effects; the option exists for special situations. If you have a job or group of jobs that create background threads the first time they run, during some initialization phase, it is recommended that you perform that initialization during the JobServer JVM startup phase using a ContainerStartupListener implementation (refer to the soafaces Javadocs for details). This way, any required one-time background threads are created at JobServer JVM startup and not the first time the job runs.
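For reference, the override is a single property entry. The TaskBeanThreadJoin name comes from the text above; the exact configuration file it belongs in depends on your installation, so check with your JobServer administrator:

```properties
# JobServer configuration (file location varies by installation).
# Allow jobs to finish even if Tasklet background threads are still running.
# Not recommended for typical setups; see the caveats above.
TaskBeanThreadJoin=false
```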