# The Integrated Plasma Simulator (IPS) Framework

This framework enables loose, file-based coupling of a certain class of nuclear fusion simulation codes.
For further design information, see:
- Wael Elwasif, David E. Bernholdt, Aniruddha G. Shet, Samantha S. Foley, Randall Bramley, Donald B. Batchelor, and Lee A. Berry, The Design and Implementation of the SWIM Integrated Plasma Simulator, in The 18th Euromicro International Conference on Parallel, Distributed and Network-Based Computing (PDP 2010), 2010.
- Samantha S. Foley, Wael R. Elwasif, David E. Bernholdt, Aniruddha G. Shet, and Randall Bramley, Extending the Concept of Component Interfaces: Experience with the Integrated Plasma Simulator, in Component-Based High-Performance Computing (CBHPC) 2009, 2009, (extended abstract).
- D Batchelor, G Alba, E D’Azevedo, G Bateman, DE Bernholdt, L Berry, P Bonoli, R Bramley, J Breslau, M Chance, J Chen, M Choi, W Elwasif, S Foley, G Fu, R Harvey, E Jaeger, S Jardin, T Jenkins, D Keyes, S Klasky, S Kruger, L Ku, V Lynch, D McCune, J Ramos, D Schissel, D Schnack, and J Wright, Advances in Simulation of Wave Interactions with Extended MHD Phenomena, in Horst Simon, editor, SciDAC 2009, 14-18 June 2009, San Diego, California, USA, volume 180 of Journal of Physics: Conference Series, page 012054, Institute of Physics, 2009, 6pp.
- Samantha S. Foley, Wael R. Elwasif, Aniruddha G. Shet, David E. Bernholdt, and Randall Bramley, Incorporating Concurrent Component Execution in Loosely Coupled Integrated Fusion Plasma Simulation, in Component-Based High-Performance Computing (CBHPC) 2008, 2008, (extended abstract).
- D. Batchelor, C. Alba, G. Bateman, D. Bernholdt, L. Berry, P. Bonoli, R. Bramley, J. Breslau, M. Chance, J. Chen, M. Choi, W. Elwasif, G. Fu, R. Harvey, E. Jaeger, S. Jardin, T. Jenkins, D. Keyes, S. Klasky, S. Kruger, L. Ku, V. Lynch, D. McCune, J. Ramos, D. Schissel, D. Schnack, and J. Wright, Simulation of Wave Interactions with MHD, in Rick Stevens, editor, SciDAC 2008, 14-17 July 2008, Washington, USA, volume 125 of Journal of Physics: Conference Series, page 012039, Institute of Physics, 2008.
- Wael R. Elwasif, David E. Bernholdt, Lee A. Berry, and Don B. Batchelor, Component Framework for Coupled Integrated Fusion Plasma Simulation, in HPC-GECO/CompFrame 2007, 21-22 October, Montreal, Quebec, Canada, 2007.
- Authors: Wael R. Elwasif, Samantha Foley, Aniruddha G. Shet
- Organization: Center for Simulation of RF Wave Interactions with Magnetohydrodynamics (CSWIM)
Produce critical message in simulation log file. Raise exception for bad formatting.
Produce debugging message in simulation log file. Raise exception for bad formatting.
Produce error message in simulation log file. Raise exception for bad formatting.
Produce exception message in simulation log file. Raise exception for bad formatting.
Produce informational message in simulation log file. Raise exception for bad formatting.
This is to be called by the configuration manager as part of dynamically creating a new simulation. The purpose here is to initiate the method invocations for the framework-visible components in the new simulation
Wrapper for Framework.info().
Register a callback method to handle a list of framework service invocations (a registration sketch follows the parameter list below).
- handler: a Python callable object that takes a messages.ServiceRequestMessage.
- service_list: a list of service names for which handler is called when a component invokes them. The service name must match the target_method field of the messages.ServiceRequestMessage.
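A minimal registration sketch, assuming the framework object is fwk, the method name register_service_handler, and the argument order (service_list, handler); the service names and helper functions are illustrative only.

```python
# Hypothetical sketch: routing two illustrative services to one handler.
def handle_state_services(msg):
    # msg is a messages.ServiceRequestMessage; dispatch on the requested service.
    if msg.target_method == 'stageState':
        return do_stage_state(*msg.args)    # hypothetical helper
    if msg.target_method == 'mergeState':
        return do_merge_state(*msg.args)    # hypothetical helper

fwk.register_service_handler(['stageState', 'mergeState'],
                             handle_state_services)
```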
Run the communication outer loop of the framework.
This method implements the core communication and message dispatch functionality of the framework. The main phases of execution for the framework are:
- Invoke the init method on all framework-attached components, blocking pending method call termination.
- Generate method invocation messages for the remaining public methods in the framework-centric components (i.e., step and finalize).
- Generate a queue of method invocation messages for all public framework-accessible components in the simulations being run. Framework-accessible components consist of the Init component (if it exists) and the Driver component. The generated messages invoke the public methods init, step, and finalize.
- Dispatch method invocations for each framework-centric component and physics simulation in order.
Exceptions that propagate to this method from the managed simulations cause the framework to abort any pending method invocations for the source simulation. Exceptions from a framework-centric component abort further invocations of that component.
When all method invocations have been dispatched (or aborted), Framework.terminate_sim() is called to trigger normal termination of all component processes.
Invoke terminate(status) on components in a simulation.
This method remotely invokes the method terminate() on all components in the IPS simulation sim_name.
Terminate all active component instances.
This method remotely invokes the method terminate() on all components in the IPS simulation.
Produce warning message in simulation log file. Raise exception for bad formatting.
The data manager facilitates the movement and exchange of data files for the simulation.
Merge partial plasma state file with global master. Newly updated plasma state copied to caller’s workdir. Exception raised on copy error.
msg.args:
- partial_state_file
- target_state_file
- log_file: stdout for merge process if not None
Invokes the appropriate public data manager method for the component specified in msg. Return method’s return value.
Copy plasma state files from source dir to target dir. Return 0. Exception raised on copy error.
msg.args:
- plasma_files
- source_dir
- target_dir
Copy plasma state files from source dir to target dir. Return 0. Exception raised on copy error.
msg.args:
- plasma_files
- source_dir
- target_dir
The task manager is responsible for facilitating component method invocations, and the launching of tasks.
Construct the task launch command to be executed by the component (an illustrative assembly sketch follows the parameter list below).
- nproc: number of processes to use
- binary: binary to launch
- cmd_args: additional command line arguments for the binary
- working_dir: full path to where the executable will be launched
- ppn: processes per node value to use
- max_ppn: maximum possible ppn for this allocation
- nodes: comma-separated list of node ids
- accurateNodes: if True, launch on the nodes listed in nodes; otherwise the parallel launcher determines the process placement
- partial_nodes: if True, and accurateNodes is True, and task_launch_cmd == 'mpirun', a host file is created specifying the exact placement of processes on cores
- core_list: used for creating the host file with process-to-core mappings
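As an illustration of how these parameters might combine, the sketch below assembles an mpirun-style command line; it is a simplified sketch under the assumptions above, not the framework's actual implementation.

```python
import os

def build_launch_cmd(nproc, binary, cmd_args, working_dir, ppn,
                     nodes, accurateNodes, partial_nodes, core_list):
    """Illustrative assembly of a task launch command (mpirun assumed)."""
    cmd = ['mpirun', '-np', str(nproc)]
    if accurateNodes:
        if partial_nodes:
            # Write a host file pinning processes to specific nodes/cores.
            hostfile = os.path.join(working_dir, 'hostfile')
            with open(hostfile, 'w') as hf:
                for node, cores in core_list:   # assumed (node, corelist) tuples
                    hf.write('%s slots=%d\n' % (node, len(cores)))
            cmd += ['--hostfile', hostfile]
        else:
            cmd += ['-host', nodes]             # comma-separated node list
    else:
        cmd += ['-npernode', str(ppn)]
    return ' '.join(cmd + [binary] + [str(a) for a in cmd_args])
```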
Cleanup after a task launched by a component terminates
finish_task_msg is expected to be of type messages.ServiceRequestMessage
Message args:
Return a new call id
Return a new task id
Creates and sends a messages.MethodInvokeMessage from the calling component to the target component. If manage_return is True, a record is added to outstanding_calls. Return call id.
Message args:
- method_name
- additional arguments to be passed on as method arguments.
Allocate resources needed for a new task and build the task launch command using the binary and arguments provided by the requesting component. Return launch command to component via messages.ServiceResponseMessage. Raise exception if the task cannot be launched at this time (ipsExceptions.BadResourceRequestException, ipsExceptions.InsufficientResourcesException).
init_task_msg is expected to be of type messages.ServiceRequestMessage
Message args:
- working_dir: full path to the directory where the task will be launched (SIMYAN addition to handle the component directory change)
Allocate resources needed for a new task and build the task launch command using the binary and arguments provided by the requesting component.
init_task_msg is expected to be of type messages.ServiceRequestMessage
Message args:
Initialize references to other managers and key values from configuration manager.
Pretty-print the task table.
Invokes the appropriate public task manager method for the component specified in msg. Return method’s return value.
Handle the response message generated by a component in response to a method invocation on that component.
response_msg is expected to be of type messages.MethodResultMessage
Determine if the call has finished. If finished, return any data or errors. If not finished raise the appropriate blocking or nonblocking exception and try again later.
wait_msg is expected to be of type messages.ServiceRequestMessage
Message args:
The resource manager is responsible for detecting the resources allocated to the framework, allocating resources to task requests, and maintaining the associated bookkeeping.
Add node entries to self.nodes. Typically used by initialize() to initialize self.nodes. May be used to add nodes to a dynamic allocation in the future.
listOfNodes is a list of tuples (node name, cores). self.nodes is a dictionary where the keys are the node names and the values are node_structure.Node structures.
Return total number of cores.
Print header information for resource usage reporting file.
Determine if it is currently possible to allocate nproc processes with a ppn of ppn without further restrictions. Return True and list of nodes to use if successful. Return False and empty list if there are not enough available resources at this time, but it is possible to eventually satisfy the request. Exception raised if the request can never be fulfilled.
Determine if it is currently possible to allocate nproc processes with a ppn of ppn and whole nodes. Return True and list of nodes to use if successful. Return False and empty list if there are not enough available resources at this time, but it is possible to eventually satisfy the request. Exception raised if the request can never be fulfilled.
Determine if it is currently possible to allocate nproc processes with a ppn of ppn and whole sockets. Return True and list of nodes to use if successful. Return False and empty list if there are not enough available resources at this time, but it is possible to eventually satisfy the request. Exception raised if the request can never be fulfilled.
Traverse available nodes to return:
If whole_nodes is True:
- shared_nodes: False
- nodes: list of node names
- ppn: processes per node for launching the task
- max_ppn: processes that can be launched
- accurateNodes: True if nodes uses the actual names of the nodes, False otherwise.
If whole_nodes is False:
- shared_nodes: True
- nodes: list of node names
- node_file_entries: list of (node, corelist) tuples, where corelist is a list of core names. Core names are integers from 0 to n-1 where n is the number of cores on a node.
- ppn: processes per node for launching the task
- max_ppn: processes that can be launched
- accurateNodes: True if nodes uses the actual names of the nodes, False otherwise.
Arguments (a usage sketch follows this list):
- nproc: the number of requested processes (int)
- comp_id: component identifier, must be unique with respect to the framework (string)
- task_id: task identifier from TM (int)
- method: name of method (string)
- task_ppn: ppn for this task (optional) (int)
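A hypothetical sketch of how such a request might be issued and its failure modes handled; the object name resource_mgr and the keyword spelling are assumptions, while get_allocation and the exception classes are named elsewhere in this document.

```python
from ipsExceptions import BadResourceRequestException, \
                          InsufficientResourcesException

# Hypothetical request using only the documented arguments.
try:
    allocation = resource_mgr.get_allocation(nproc=32,
                                             comp_id='DRIVER_1',
                                             task_id=7,
                                             method='step',
                                             task_ppn=16)
except InsufficientResourcesException:
    # Not enough cores are free right now; the caller may retry later.
    pass
except BadResourceRequestException:
    # The request can never be satisfied on this allocation; give up.
    raise
```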
Initialize resource management structures, references to other managers (dataMngr, taskMngr, configMngr), and feature settings (ftb).
Resource information comes from the following in order of priority:
- command line specification (cmd_nodes, cmd_ppn)
- detection using parameters from platform config file
- manual settings from platform config file
The latter two sources are obtained through resourceHelper.getResourceList().
Print the node tree to stdout.
Set resources allocated to task task_id to available. status is not used, but may be used to correlate resource failures to task failures and implement task relaunch strategies.
Print current RM status to the reporting file (“resource_usage”). Entries consist of:
- time in seconds since beginning of time (__init__ of RM)
- # cores that are available
- # cores that are allocated
- % allocated cores
- # processes launched by task
- % cores used by processes
- notes (a description of the event that changed the resource usage)
Wrapper for constructing and publishing EM events.
Models a node in the allocation.
- name: name of node, typically actual name from resource detection phase.
- task_ids, owners: identifiers for the tasks and components that are currently using the node.
- allocated, available: list of sockets that have cores allocated and available. A socket may appear in both lists if it is only partially allocated.
- sockets: list of sockets belonging to this node
- avail_cores: number of cores that are currently available.
- total_cores: total number of cores that can be allocated on this node.
- status: indicates if the node is ‘UP’ or ‘DOWN’. Currently not used; all nodes are considered functional.
Mark procs number of cores as allocated subject to the values of whole_nodes and whole_sockets. Return the number of cores allocated and their corresponding slots, a list of strings of the form:
<socket name>:<core name>
Pretty print of state of sockets.
Mark cores used by task tid and component o as available. Return the number of cores released.
Models a socket in a node.
- name: identifier for the socket
- task_ids, owners: identifiers for the tasks and components that are currently using the socket.
- allocated, available: lists of cores that are allocated and available.
- cores: list of Core objects belonging to this socket
- avail_cores: number of cores that are currently available.
- total_cores: total number of cores that can be allocated on this socket.
Mark num_procs cores as allocated subject to the value of whole. Return a list of strings of the form:
<socket name>:<core name>
Pretty print of state of cores.
Mark cores that are allocated to task tid as available. Return number of cores set to available.
Models a core of a socket.
- name: name of core
- is_available: boolean value indicating the availability of the core.
- task_id, owner: identifiers of the task and component using the core.
Mark core as allocated.
Mark core as available.
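A minimal sketch of what such a core structure might look like, assuming the attribute names listed above; the method names allocate and release are illustrative.

```python
class Core:
    """Illustrative model of a core, using the attributes described above."""
    def __init__(self, name):
        self.name = name
        self.is_available = True
        self.task_id = None
        self.owner = None

    def allocate(self, task_id, owner):
        # Mark core as allocated to a task and owning component.
        self.is_available = False
        self.task_id = task_id
        self.owner = owner

    def release(self):
        # Mark core as available again.
        self.is_available = True
        self.task_id = None
        self.owner = None
```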
The Resource Helper file contains all of the code needed to figure out what host we are on and what resources we have. Taking this out of the resource manager allows it to be tested independently of the IPS.
Using the host information, the resources are detected. Return list of (<node name>, <processes per node>), cores per node, sockets per node, processes per node, and True if the node names are accurate, False otherwise.
Use checkjob $PBS_JOBID to get the node names and core counts of allocation. Typically works in a Cray environment.
Note
Two formats for outputting resource information.
Access info about allocation from PBS environment variables:
PBS_NNODES PBS_NODEFILE
Use qstat -f $PBS_JOBID to get the number of nodes and ppn of the allocation. Typically works on PBS systems.
A second way to use qstat -f $PBS_JOBID to get the number of nodes and ppn of the allocation. Typically works on PBS systems.
Access environment variables set by Slurm to get the node names, tasks per node and number of processes.
SLURM_NODELIST SLURM_TASKS_PER_NODE or SLURM_JOB_TASKS_PER_NODE SLURM_NPROC
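As an illustration, the compressed SLURM_TASKS_PER_NODE format (for example “2(x3),1”) might be expanded as in the sketch below; this is not the framework's code, and it assumes the node list has already been expanded into individual node names.

```python
import os

def tasks_per_node_list():
    """Expand SLURM_TASKS_PER_NODE entries such as '2(x3),1' into [2, 2, 2, 1]."""
    spec = os.environ.get('SLURM_TASKS_PER_NODE',
                          os.environ.get('SLURM_JOB_TASKS_PER_NODE', ''))
    counts = []
    for item in spec.split(','):
        if not item:
            continue
        if '(x' in item:
            count, repeat = item.split('(x')
            counts.extend([int(count)] * int(repeat.rstrip(')')))
        else:
            counts.append(int(item))
    return counts
```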
Uses hwloc library calls in C program topo_disco to detect the topology of a node in the allocation. Return the number of sockets and the number of cores.
Note
Not available on all platforms.
Use values listed in platform configuration file.
Base class for all IPS components. Common set up, connection and invocation actions are implemented here.
Produce some default debugging information before the rest of the code is executed.
Produce some default debugging information before the rest of the code is executed.
Produce some default debugging information before the rest of the code is executed.
Produce some default debugging information before the rest of the code is executed.
Produce some default debugging information before the rest of the code is executed.
Produce some default debugging information before the rest of the code is executed.
Clean up services and call sys_exit.
Produce some default debugging information before the rest of the code is executed.
The configuration manager is responsible for parsing the simulation and platform configuration files, creating the framework and simulation components, and providing an interface for accessing items from the configuration files (e.g., the time loop).
Structure to hold simulation data stored in the sim_map entry of the configurationManager class.
Deprecated since version 1.0: Use get_port()
Return a dictionary of simulation names and lists of component references. (The list may contain only the driver, plus the init component if present.)
Return value of param from simulation configuration file for sim_name.
Return a list of driver components, one for each sim.
Return list of framework components.
Return list of init components.
Return value of platform parameter param. If silent is False (default) None is returned when param not found, otherwise an exception is raised.
Return a reference to the component from simulation sim_name implementing port port_name.
Return list of names of simulations.
Return value of param from simulation configuration file for sim_name.
Parse the platform and simulation configuration files using the ConfigObj module. Create and initialize simulation(s) and their components, framework components and loggers.
Invokes public configuration manager method for a component. Return method’s return value.
Set the configuration parameter param to value value in target_sim_name. If target_sim_name is the framework, all simulations will get the change. Return value.
Terminates all processes attached to the framework. status not used.
Add task task_name to task pool task_pool_name. Remaining arguments are the same as in ServicesProxy.launch_task().
Invoke method method_name on component component_id with optional arguments *args. Return result from invoking the method.
Invoke method method_name on component component_id with optional arguments *args. Return call_id.
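A brief sketch of the two invocation styles from inside a component, assuming the method names call, call_nonblocking, and wait_call, that the component holds its services proxy as self.services, and that the target component id and arguments shown are illustrative.

```python
# Blocking invocation: returns the method's result directly.
result = self.services.call(worker_id, 'step', 0.0)

# Nonblocking invocation: returns a call_id to be collected later.
call_id = self.services.call_nonblocking(worker_id, 'step', 0.0)
# ... do other work while the remote method runs ...
result = self.services.wait_call(call_id, block=True)
```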
Selectively checkpoint components in comp_id_list based on the configuration section CHECKPOINT. If Force is True, the checkpoint will be taken even if the conditions for taking the checkpoint are not met. If Protect is True, then the data from the checkpoint is protected from clean up. Force and Protect are optional and default to False.
The CHECKPOINT_MODE option determines whether the components’ checkpoint methods are invoked.
Possible MODE options are:
The configuration parameter NUM_CHECKPOINT controls how many checkpoints to keep on disk. Checkpoints are deleted in a FIFO manner, based on their creation time. Possible values of NUM_CHECKPOINT are:
Checkpoints are saved in the directory ${SIM_ROOT}/restart
Create an empty pool of tasks with the name task_pool_name. Raise exception if duplicate name.
Produce critical message in simulation log file. Raise exception for bad formatting.
Produce debugging message in simulation log file. Raise exception for bad formatting.
Produce error message in simulation log file. Raise exception for bad formatting.
Produce exception message in simulation log file. Raise exception for bad formatting.
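These logging methods appear to use Python logging-style “%” formatting, so a format string whose placeholders do not match the supplied arguments is what triggers the bad-formatting exception; the snippet below is a hedged illustration, assuming the services proxy is held as self.services.

```python
# Well-formed messages: placeholders match the supplied arguments.
self.services.info('starting step at t = %f', timestamp)
self.services.warning('component %s reported %d retries', comp_name, retries)

# A mismatched format string such as the following would raise the
# bad-formatting exception described above.
# self.services.error('values are %d and %d', only_one_value)
```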
Deprecated since version 1.0: Use ServicesProxy.get_config_param()
Deprecated since version 1.0: Use ServicesProxy.get_port()
Deprecated since version 1.0: Use ServicesProxy.get_time_loop()
Return the value of the configuration parameter param. Raise exception if not found.
Return dictionary of finished tasks and return values in task pool task_pool_name. Raise exception if no active or finished tasks.
Return a reference to the component implementing port port_name.
Copy files needed for component restart from the restart directory:
<restart_root>/restart/<timeStamp>/components/$CLASS_${SUB_CLASS}_$NAME_${SEQ_NUM}
to the component’s work directory.
Copying errors raise an exception but are not treated as fatal.
Return the list of times as specified in the configuration file.
Return the working directory of the calling component.
The structure of the working directory is defined using the configuration parameters CLASS, SUB_CLASS, and NAME of the component configuration section. The structure of the working directory is:
${SIM_ROOT}/work/$CLASS_${SUB_CLASS}_$NAME_<instance_num>
Produce informational message in simulation log file. Raise exception for bad formatting.
Kill all tasks associated with this component.
Kill launched task task_id. Return if successful. Raises exceptions if the task or process cannot be found or killed successfully.
Launch binary in working_dir on nproc processes. *args are any arguments to be passed to the binary on the command line. **keywords are any keyword arguments used by the framework to manage how the binary is launched. Keywords may be the following:
- task_ppn : the processes per node value for this task
- block : specifies that this task will block (or raise an exception) if not enough resources are available to run immediately. If True, the task will be retried until it runs. If False, an exception is raised indicating that there are not enough resources, but it is possible to eventually run. (default = True)
- tag : identifier for the portal. May be used to group related tasks.
- logfile : file name for stdout (and stderr) to be redirected to for this task. By default stderr is redirected to stdout, and stdout is not redirected.
- whole_nodes : if True, the task will be given exclusive access to any nodes it is assigned. If False, the task may be assigned nodes that other tasks are using or may use.
- whole_sockets : if True, the task will be given exclusive access to any sockets of nodes it is assigned. If False, the task may be assigned sockets that other tasks are using or may use.
Return task_id if successful. May raise exceptions related to opening the logfile, being unable to obtain enough resources to launch the task (ipsExceptions.InsufficientResourcesException), bad task launch request (ipsExceptions.ResourceRequestMismatchException, ipsExceptions.BadResourceRequestException) or problems executing the command. These exceptions may be used to retry launching the task as appropriate.
Note
This is a nonblocking function, users must use a version of ServicesProxy.wait_task() to get result.
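A minimal launch-and-wait sketch under the keyword list above; the binary path, arguments, and logfile name are illustrative, and the component is assumed to hold its services proxy as self.services.

```python
work_dir = self.services.get_working_dir()

# Launch a 16-process task; keyword options follow the list above.
task_id = self.services.launch_task(16, work_dir, '/path/to/my_solver',
                                    'input.dat',
                                    logfile='my_solver.log',
                                    whole_nodes=True)

# launch_task is nonblocking, so collect the result explicitly.
retcode = self.services.wait_task(task_id)
if retcode != 0:
    self.services.error('my_solver exited with code %d', retcode)
```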
Construct messages to task manager to launch each task. Used by TaskPool to launch tasks in a task_pool.
not used
Wrapper for ServicesProxy.info().
Merge partial plasma state with global state. Partial plasma state contains only the values that the component contributes to the simulation. Raise exceptions on bad merge. Optional logfile will capture stdout from merge.
Poll for events on subscribed topics.
Publish event consisting of eventName and eventBody to topic topicName to the IPS event service.
Kill all running tasks, clean up all finished tasks, and delete task pool.
Copy files needed for component restart to the restart directory:
${SIM_ROOT}/restart/$timestamp/components/$CLASS_${SUB_CLASS}_$NAME
Copying errors raise an exception but are not treated as fatal.
Send event to web portal.
Send event to portal setting the URL where the monitor component will put data.
Set configuration parameter param to value. Raise exceptions if the parameter cannot be changed or if there are problems setting the value.
Deprecated since version 1.0: Use ServicesProxy.stage_plasma_state()
Deprecated since version 1.0: Use ServicesProxy.stage_input_files()
Deprecated since version 1.0: Use ServicesProxy.stage_output_files()
Same as stage_output_files, but only does Plasma State files.
Copy component data files to the component working directory (as obtained via a call to ServicesProxy.get_working_dir()). Input files are assumed to be originally located in the directory variable DATA_TREE_ROOT in the component configuration section.
Copy component input files to the component working directory (as obtained via a call to ServicesProxy.get_working_dir()). Input files are assumed to be originally located in the directory variable INPUT_DIR in the component configuration section.
Same as stage_output_files, but does not do anything with the Plasma State.
Copy associated component output files (from the working directory) to the component simulation results directory. Output files are prefixed with the configuration parameter OUTPUT_PREFIX. The simulation results directory has the format:
${SIM_ROOT}/simulation_results/<timeStamp>/components/$CLASS_${SUB_CLASS}_$NAME_${SEQ_NUM}
Additionally, plasma state files are archived for debugging purposes:
${SIM_ROOT}/history/plasma_state/<file_name>_$CLASS_${SUB_CLASS}_$NAME_<timeStamp>
Copying errors raise an exception but are not treated as fatal.
Copy current plasma state to work directory.
Copy output files from the replay component to current sim for physics time timeStamp. Return location of new local copies.
Copy plasma state files from the replay component to current sim for physics time timeStamp. Return location of new local copies.
Launch all unfinished tasks in task pool task_pool_name. If block is True, return when all tasks have been launched. If block is False, return when all tasks that can be launched immediately have been launched. Return number of tasks submitted.
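A sketch of the task-pool workflow built from the pool operations described in this section; the pool name, task names, binary, and arguments are illustrative, and the method names (create_task_pool, add_task, submit_tasks, get_finished_tasks, remove_task_pool) are assumed from the surrounding entries.

```python
work_dir = self.services.get_working_dir()
pool = 'scan_pool'                       # illustrative pool name
self.services.create_task_pool(pool)

# Queue a few independent runs; per-task arguments mirror launch_task.
for i, temp in enumerate([1.0, 2.0, 4.0]):
    self.services.add_task(pool, 'case_%d' % i, 8, work_dir,
                           '/path/to/my_solver', '--temp', str(temp),
                           logfile='case_%d.log' % i)

# Launch everything and block until the pool has drained.
self.services.submit_tasks(pool, block=True)
results = self.services.get_finished_tasks(pool)   # {task_name: exit status}
self.services.remove_task_pool(pool)
```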
Subscribe to topic topicName on the IPS event service and register callback as the method to be invoked when an event is published to that topic.
Remove subscription to topic topicName.
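A hedged sketch of the publish/subscribe pattern implied by these entries; the topic and event names are illustrative, the callback signature and the method names publish, subscribe, unsubscribe, and process_events are assumptions based on the descriptions above.

```python
def on_monitor_event(topic_name, event):
    # Callback signature assumed: topic plus event body.
    self.services.info('event on %s: %s', topic_name, str(event))

self.services.subscribe('_MY_MONITOR', on_monitor_event)   # illustrative topic
self.services.publish('_MY_MONITOR', 'STEP_DONE', {'time': 1.5})
self.services.process_events()    # poll and dispatch any pending events
self.services.unsubscribe('_MY_MONITOR')
```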
Deprecated since version 1.0: Use ServicesProxy.update_plasma_state()
Deprecated since version 1.0: Use ServicesProxy.update_time_stamp()
Copy local (updated) plasma state to global state. If no plasma state files are specified, the component configuration specification is used. Raise exceptions on copy errors.
Update time stamp on portal.
If block is True, return the return code from the call once it has completed. If block is False, raise ipsExceptions.IncompleteCallException if the call has not completed, and return the call’s return value if it has.
Check the status of each of the call in call_id_list. If block is True, return when all calls are finished. If block is False, raise ipsExceptions.IncompleteCallException if any of the calls have not completed, otherwise return. The return value is a dictionary of call_ids and return values.
Check the status of task task_id. Return the return value of the task when finished successfully. Raise exceptions if the task is not found, or if there are problems finalizing the task.
Check the status of task task_id. If it has finished, the return value is populated with the actual value, otherwise None is returned. A KeyError exception may be raised if the task is not found.
not used
Check the status of a list of tasks. If block is True, return a dictionary of return values when all tasks have completed. If block is False, return a dictionary containing entries for each completed task. Note that the dictionary may be empty. Raise KeyError exception if task_id not found.
Produce warning message in simulation log file. Raise exception for bad formatting.
Container for task information:
Class to contain and manage a pool of tasks.
Create Task object and add to queued_tasks of the task pool. Raise exception if task name already exists in task pool.
Return a dictionary of exit status values for all tasks that have finished since the last time finished tasks were polled.
Launch tasks in queued_tasks. Finished tasks are handled before launching new ones. If block is True, the number of tasks submitted is returned after all tasks have been launched and completed. If block is False, the number of tasks that can immediately be launched is returned.
Deprecated since version Experimental: Use TaskPool.submit_tasks()
Kill all active tasks, clear all queued, blocked and finished tasks.
Exception is raised when an allocated node is discovered to be faulty. The task manager should catch the exception and do something with it.
Exception raised by the resource manager when a component requests a quantity of resources that can never be satisfied during a get_allocation() call
Exception raised by any manager when a blocking service invocation is made and the invocation result is not readily available.
Exception raised by the task manager when a nonblocking wait_call() method is invoked before the call has finished.
Exception raised by the resource manager when not enough resources are available to satisfy an allocate() call.
Exception raised by the resource helper to indicate inconsistent resource settings.
Exception raised when an attempt is made to use nonexistent nodes.
Exception raised by the resource manager when a release allocation request accounting yields unexpected results.
Exception raised by the resource manager when it is possible to launch the requested number of processes, but not on the requested number of processes per node.
Copy files in src_file_list from src_dir to target_dir with an optional prefix. If keep_old is True, existing files in target_dir will not be overwritten; otherwise files can be clobbered (the default). Wildcards in the file name specification are allowed.
Return a string representation of timeArg. timeArg is expected to be an appropriate object to be processed by time.strftime(). If timeArg is None, current time is used.
Write files to the ziphandle. Because one typically does not want the full path when unzipping, only the shorter (relative) path name is stored. src_file_list can be a single string. If src_dir is specified, the path relative to src_dir is used as the filename in the zip file.
Message used to communicate the exit status of a component.
Base class for all IPS messages. Should not be used in actual communication.
Message used by components to invoke methods on other components.
- sender_id: component id of the sender
- receiver_id: component id of the receiver
- call_id: identifier of the call (generated by caller)
- target_method: method to be invoked on the receiver
- *args: arguments to be passed to the target_method
Message used to relay the return value after a method invocation.
- sender_id: component id of the sender (callee)
- receiver_id: component id of the receiver (caller)
- call_id: identifier of the call (generated by caller)
- status: either Message.SUCCESS or Message.FAILURE indicating the success or failure of the invocation.
- *args: other information to be passed back to the caller.
Message used by components to request the result of a service action by one of the IPS managers.
- sender_id: component id of the sender
- receiver_id: component id of the receiver (framework)
- target_comp_id: component id of target component (typically framework)
- target_method: name of method to be invoked on component target_comp_id
- *args: any number of arguments. These are specific to the target method.
Message used by managers to respond with the result of the service action to the calling component.
- sender_id: component id of the sender (framework)
- receiver_id: component id of the receiver (calling component)
- request_msg_id: id of request message this is a response to.
- status: either Message.SUCCESS or Message.FAILURE
- *args: any number of arguments. These are specific to type of response.
Framework component to communicate with the SWIM web portal.
Container for simulation data.
Return total elapsed time since simulation started in seconds (including a possible fraction)
Try to connect to the portal, subscribe to _IPS_MONITOR events and register callback process_event().
Create and send information about simulation sim_name living in sim_root so the portal can set up corresponding structures to manage data from the sim.
Process a single event theEvent on topic topicName.
Send contents of event_data and sim_data to portal.
Poll for events.
Framework component to manage runspace initialization, container file management, and file staging for simulation and analysis runs.
Placeholder
Writes the final log_file and resource_usage file to the container and closes it.
Creates base directory, copies IPS and FacetsComposer input files.
Copies individual subcomponent input files into working subdirectories.
Placeholder for future validation step of runspace management.