The Magnolia Scheduler module allows you to schedule commands to run at specified times or regular intervals. It is powered by the Quartz scheduling engine.

Download

The Scheduler module is bundled with Magnolia and typically already installed. You can download it from our Magnolia Store or Nexus repository.

Installing

Scheduler is a community module bundled with both editions and typically already installed. Go to Magnolia Store > Installed modules in AdminCentral to check. To install the module individually, see the general module installation instructions.

Uninstalling

See the general module uninstalling instructions and advice.

Usage

The Scheduler module can be used to execute any configured command at a specific time or at regular intervals. For example, it could be used to:

  • Activate or deactivate a promotional web page on a specific date.
  • Import content from an external source into a Magnolia workspace.
  • Send emails on specific days.
  • Delete specified forum messages or threads.
  • Synchronize target and source instances.
  • Execute a custom command (see the sketch after this list).
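
Commands are plain Java classes. As a rough sketch, assuming the info.magnolia.commands.MgnlCommand base class and its execute(Context) signature, a custom command might look like this (the class name, parameter handling, and behavior below are illustrative, not part of the Scheduler module):

    import info.magnolia.commands.MgnlCommand;
    import info.magnolia.context.Context;

    // Illustrative custom command; the class name and logic are
    // placeholders for your own functionality.
    public class PurgeOldMessagesCommand extends MgnlCommand {

        // Values configured under the job's params node are made
        // available as attributes of the passed context.
        public boolean execute(Context context) throws Exception {
            String repository = (String) context.get("repository");
            String path = (String) context.get("path");
            // ... locate and remove outdated content here ...
            return true; // true signals that the command succeeded
        }
    }

Because the Scheduler passes the job's params to the command through the context, the same command class can be reused by several jobs with different parameters.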

Configuration

The Scheduler module is used to execute commands that are typically configured in other modules. See Commands for more information on configuring commands.

The scheduled tasks are configured in modules/scheduler/config/jobs. The example demo configuration can be adapted to set up your own scheduled jobs:

Properties:

  • params: Parameters passed to the command. These depend on the command: for example, the activate command expects a repository name and a content path, while the generatePlanetData command used by the RSS Aggregator module expects only a repository parameter.
    • path: Content path to the item that the command should use.
    • repository: Workspace where the content item resides.
  • active: Enables (true) or disables (false) the job.
  • catalog: Name of the catalog where the command resides.
  • command: Name of the command.
  • cron: CRON expression that sets the scheduled execution time. For example, 0 0 1 5 11 ? 2010 means "run on November 5th, 2010 at 01:00 am", whereas 0 0 1 5 11 ? * runs annually on November 5th at 01:00 am. CronMaker is a useful tool for building expressions.
  • description: Description of the job.
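
As an illustration, a job definition under modules/scheduler/config/jobs could look like the following node tree (the job name, command, catalog, paths, and cron expression are made-up examples, not shipped configuration):

    modules/scheduler/config/jobs/
      demo/
        params/
          repository   website
          path         /news/campaign
        active         true
        catalog        default
        command        activate
        cron           0 0 1 * * ?
        description    Activates the campaign page daily at 01:00 am

Here the cron expression 0 0 1 * * ? fires every day at 01:00 am.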

The Synchronization, Backup and RSS Aggregator modules use the Scheduler module for scheduling their execution.

Scheduling tasks on cluster nodes

In a clustered configuration, one or more workspaces are stored in shared, clustered storage. See Clustering for more information. Cluster nodes (Magnolia instances) access the clustered workspace rather than their own workspaces. This can lead to a situation where multiple scheduled jobs attempt to access the same content simultaneously and a lockup occurs. To avoid this, identify the cluster nodes and run the job on only one node.

  1. Set the magnolia.clusterid property in the magnolia.properties file of the cluster node. The file is in the /<CATALINA_HOME>/webapps/<contextPath>/WEB-INF/config/default folder. The property value can be a literal cluster name such as public123 (magnolia.clusterid=public123) or a variable such as ${servername}.
  2. To configure the job to run on the identified cluster node, go to Configuration > /modules/scheduler/config/jobs and edit the job configuration.
  3. Under the params node, add a clusterId property and set its value to match the magnolia.clusterid of the cluster node where you want to run the job.
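
Assuming a cluster id of public123, the two pieces of configuration from the steps above fit together like this (the job name and other params are illustrative):

    # magnolia.properties on the node that should run the job
    magnolia.clusterid=public123

    # job configuration under Configuration > /modules/scheduler/config/jobs
    demo/
      params/
        clusterId    public123
        repository   website
        path         /news/campaign

Nodes whose magnolia.clusterid does not match the job's clusterId parameter skip the job.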

Job configurations are stored in the config workspace. If you want to run a particular job on all cluster nodes, you would either have to cluster the config workspace so that all instances can read the configuration, or create the same job configuration on every cluster node, which can be time-consuming. As a workaround, configure the job once on a clustered instance without the clusterId property; the job will then run on all cluster nodes.


1 Comment

  1. Scheduled tasks on cluster nodes:

    Nodes are usually clustered for load balancing or high availability. With the above solution of binding a scheduled job to a fixed node, you lose high availability and will have to reconfigure all scheduled jobs onto a surviving node if the preferred node fails.

    Recommendation:

    In a clustered environment there should be an automatic handshake between all involved (and online) cluster nodes, through which one node is chosen to run the job. To prevent overly long communication attempts when a node is down, there should also be a heartbeat connection between all involved cluster nodes that tracks the status of each node; if one or more nodes are not accessible, they should be marked as offline until they come back up and can be unmarked. Marked (unavailable) nodes should not take part in the handshake mechanism.