-B,--batch-options <options> |
| Add these options to all batch submit files.
|
-j,--max-local <#> |
| Max number of local jobs to run at once. (default is # of cores)
|
-J,--max-remote <#> |
| Max number of remote jobs to run at once. (default is 1000 for -T wq, 100 otherwise)
|
-l,--makeflow-log <logfile> |
| Use this file for the makeflow log. (default is X.makeflowlog)
|
-L,--batch-log <logfile> |
| Use this file for the batch system log. (default is X.<type>log)
|
-R, --retry | Automatically retry failed batch jobs up to 100 times.
|
-r,--retry-count <n> |
| Automatically retry failed batch jobs up to n times.
|
--send-environment | Send all local environment variables in remote execution.
|
--wait-for-files-upto <#> |
| Wait up to this many seconds for output files to be created (e.g., to deal with NFS semantics).
|
-S,--submission-timeout <timeout> |
| Time to keep retrying failed batch job submissions. (default is 3600s)
|
-T,--batch-type <type> |
| Batch system type: local, dryrun, condor, sge, pbs, torque, blue_waters, slurm, moab, cluster, wq, amazon, mesos. (default is local)
|
--safe-submit-mode | Exclude resource specifications at job submission. (SLURM, TORQUE, and PBS)
|
--ignore-memory-spec | Exclude memory specifications at job submission. (SLURM)
|
--verbose-jobnames | Set the job name based on the command.
|
-a, --advertise | Advertise the master information to a catalog server.
|
-C,--catalog-server <catalog> |
| Set catalog server to <catalog>. Format: HOSTNAME:PORT
|
-F,--wq-fast-abort <#> |
| WorkQueue fast abort multiplier. (default is deactivated)
|
-M,-N <project-name> |
| Set the project name to <project-name>.
|
-p,--port <port> |
| Port number to use with WorkQueue. (default is 9123, 0=arbitrary)
|
-Z,--port-file <file> |
| Select port at random and write it to this file. (default is disabled)
|
-P,--priority <integer> |
| Priority. The higher the value, the higher the priority.
|
-W,--wq-schedule <mode> |
| WorkQueue scheduling algorithm. (time|files|fcfs)
|
-s,--password <pwfile> |
| Password file for authenticating workers.
|
--disable-cache | Disable file caching (currently only Work Queue, default is false)
|
--work-queue-preferred-connection <connection> |
| Indicate the preferred connection. Choose one of by_ip or by_hostname. (default is by_ip)
|
--mesos-master <hostname> |
| Indicate the hostname of the preferred Mesos master.
|
--mesos-path <filepath> |
| Indicate the path to the Mesos Python2 site-packages.
|
--mesos-preload <library> |
| Indicate the linking libraries for running Mesos.
Amazon Lambda Options
--lambda-config <path> | | Path to the configuration file generated by makeflow_lambda_setup
|
Kubernetes Options
--k8s-image <docker_image> | | Indicate the Docker image for running pods on a Kubernetes cluster.
|
Mountfile Support
--mounts <mountfile> | | Use this file as a mountlist. Each line of the mountfile specifies the target and source of one input dependency, in the format target source, with a space between target and source (a sample mountfile is shown below).
|
--cache <cache_dir> | | Use this directory as the cache for file dependencies.
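A minimal sample mountfile with two entries, each mapping a target path in the workflow directory to a source path (all file names here are illustrative):
data/input.csv /shared/datasets/input.csv
bin/analyze /opt/tools/bin/analyze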
|
Archiving Options
--archive <path> | | Archive the results of the workflow at the specified path (by default /tmp/makeflow.archive.$UID) and use the outputs of any archived jobs instead of re-executing them (see the example below).
|
--archive-read <path> | | Only check whether jobs have already been archived, and use their outputs if so.
|
--archive-write <path> | | Only write the results of each job to the archiving directory at the specified path.
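For example, to archive results to a shared directory and reuse the outputs of any jobs already archived there (the path and workflow name are illustrative):
makeflow --archive=/shared/makeflow.archive example.makeflow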
|
Other Options
-A, --disable-afs-check | Disable the check for AFS. (experts only)
|
-z, --zero-length-error | Force failure on zero-length output files.
|
-g,--gc <type> | | Enable garbage collection. (ref_cnt|on_demand|all)
|
--gc-size <int> | | Set the disk size that triggers garbage collection. (on_demand only)
|
-G,--gc-count <int> | | Set the number of files that triggers garbage collection. (ref_cnt only)
|
--wrapper <script> | | Wrap all commands with this script. Each rule's original recipe is appended to the script, or replaces the first occurrence of {} in the script (see the example after this list).
|
--wrapper-input <file> | | The wrapper command requires this input file. This option may be specified more than once, defining an array of inputs. Each job executing a recipe has a unique integer identifier that replaces occurrences of %% in the file name.
|
--wrapper-output <file> | | The wrapper command requires this output file. This option may be specified more than once, defining an array of outputs. Each job executing a recipe has a unique integer identifier that replaces occurrences of %% in the file name.
|
--enforcement | Use Parrot to restrict access to the given inputs/outputs.
|
--parrot <path> | | Path to the parrot_run executable on the host system.
|
--shared-fs <dir> | | Assume the given directory is a shared filesystem accessible at all execution sites.
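As an illustration of the wrapper options, the following wraps every recipe in a hypothetical sandbox.sh script, substituting the recipe for {} and each job's identifier for %% (all file names are illustrative):
makeflow --wrapper './sandbox.sh {}' --wrapper-input sandbox.sh --wrapper-output 'sandbox.%%.log' example.makeflow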
|
DRYRUN MODE
When the batch system is set to -T dryrun, Makeflow runs as usual
but does not actually execute jobs or modify the system. This is useful to
check that wrappers and substitutions are applied as expected. In addition,
Makeflow will write an equivalent shell script to the batch system log
specified by -L <logfile>. This script will run, in serial, the commands
that Makeflow would have run. This shell script format may be useful
for archival purposes, since it does not depend on Makeflow.
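For example, the following performs a dry run (file names are illustrative) and writes the equivalent serial script to run.sh, which can later be executed directly by a shell:
makeflow -T dryrun -L run.sh example.makeflow
sh run.sh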
ENVIRONMENT VARIABLES
The following environment variables will affect the execution of your
Makeflow:
BATCH_OPTIONS
This corresponds to the -B <options> parameter and will pass extra
batch options to the underlying execution engine.
MAKEFLOW_MAX_LOCAL_JOBS
This corresponds to the -j <#> parameter and will set the maximum
number of local batch jobs. If a -j <#> parameter is specified, the
minimum of the argument and the environment variable is used.
MAKEFLOW_MAX_REMOTE_JOBS
This corresponds to the -J <#> parameter and will set the maximum
number of remote batch jobs. If a -J <#> parameter is specified, the
minimum of the argument and the environment variable is used.
Note that variables defined in your Makeflow are exported to the
environment.
TCP_LOW_PORT
Inclusive low port in range used with -p 0.
TCP_HIGH_PORT
Inclusive high port in range used with -p 0.
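For example, to constrain the randomly chosen port when using -p 0 (shell syntax and workflow name are illustrative):
export TCP_LOW_PORT=9000
export TCP_HIGH_PORT=9100
makeflow -T wq -p 0 example.makeflow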
EXIT STATUS
On success, returns zero. On failure, returns non-zero.
EXAMPLES
Run makeflow locally with debugging:
makeflow -d all Makeflow
Run makeflow on Condor with special requirements:
makeflow -T condor -B "requirements = MachineGroup == 'ccl'" Makeflow
Run makeflow with WorkQueue using named workers:
makeflow -T wq -a -N project.name Makeflow
Create a directory containing all of the dependencies required to run the
specified makeflow:
makeflow -b bundle Makeflow
COPYRIGHT
The Cooperative Computing Tools are Copyright (C) 2003-2004 Douglas Thain and Copyright (C) 2005-2015 The University of Notre Dame. This software is distributed under the GNU General Public License. See the file COPYING for details.
SEE ALSO