Config

struct Config

Configuration structure that controls all aspects of task manager behavior.

This configuration is immutable after manager initialization, ensuring consistent behavior throughout the manager’s lifecycle. Choose settings based on your requirements for performance, memory usage, and task prioritization.

  • Defines how currently running tasks are handled when a new task arrives in LIFO priority mode.

    This strategy applies only when using PriorityStrategy.LIFO(strategy) and determines the behavior when a new high-priority task needs resources that are currently in use.

    Performance Characteristics

    • .await: Lower interruption overhead, predictable completion times
    • .stop: Higher responsiveness, potential task restart overhead

    Memory Implications

    • .await: Gradual memory usage patterns, no sudden spikes
    • .stop: Potential memory spikes from simultaneous task states

    A configuration sketch contrasting these strategies follows the PriorityStrategy entry below.

    Declaration

    Swift

    public enum LIFOStrategy
  • Defines the overall task execution order and priority handling strategy.

    This is the primary control for task scheduling behavior and significantly affects both performance characteristics and resource usage patterns.

    Choosing the Right Strategy

    • FIFO: Use for batch processing, fair resource allocation, or when task order matters
    • LIFO(.await): Use for responsive systems with predictable task completion
    • LIFO(.stop): Use for highly interactive systems with dynamic priorities

    Declaration

    Swift

    public enum PriorityStrategy
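
    Example

    As a quick orientation, the sketch below builds one configuration per strategy described above, including both LIFOStrategy cases. It assumes a Config typealias for your manager’s concrete configuration type, with placeholder generic arguments and an assumed module name; adjust both to match your project. Only the initializer defaults documented later on this page are relied on.

    Swift

    import Foundation
    import Monstra  // assumed module name; replace with your actual dependency

    // Placeholder specialization; adjust the generic arguments to your manager's key and element types.
    typealias Config = KVHeavyTasksManager<String, Data>.Config

    // Batch processing: preserve submission order and allocate resources fairly.
    let batchConfig = Config(priorityStrategy: .FIFO)

    // Responsive systems with predictable completion: newest requests first, but
    // running tasks are allowed to finish (.await avoids restart overhead).
    let responsiveConfig = Config(priorityStrategy: .LIFO(.await))

    // Highly interactive systems with dynamic priorities: newest requests first, and
    // running tasks may be stopped to free resources (.stop trades restarts for latency).
    let interactiveConfig = Config(priorityStrategy: .LIFO(.stop))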
  • Maximum number of tasks that can wait in the queue simultaneously.

    This setting controls memory usage and prevents unbounded queue growth. When the queue is full, new tasks evict older waiting tasks according to the priority strategy (FIFO evicts the oldest waiting task; LIFO evicts according to its configured strategy).

    Tuning Guidelines

    • Low values (10-50): Better memory control, risk of task eviction
    • High values (100+): Lower eviction risk, higher memory usage
    • Consider: Peak load, task submission rate, average task duration

    A sizing sketch covering this limit together with maxNumberOfRunningTasks follows that entry below.

    Declaration

    Swift

    public let maxNumberOfQueueingTasks: Int
  • Maximum number of tasks that can execute concurrently.

    This setting directly controls resource usage and system load. The optimal value depends on task characteristics, system resources, and performance requirements.

    Tuning Guidelines

    • CPU-bound tasks: Typically equals number of CPU cores
    • I/O-bound tasks: Can be higher than CPU cores (2x-4x)
    • Memory-intensive tasks: Lower values to prevent memory pressure
    • Mixed workloads: Start conservative and monitor performance

    Trade-offs

    • Higher values: Better throughput, higher resource usage
    • Lower values: Better resource control, potential throughput limitation

    Declaration

    Swift

    public let maxNumberOfRunningTasks: Int
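
    Example

    The sketch below applies the tuning guidance for both limits, deriving maxNumberOfRunningTasks from the host’s core count. The module name and the Config specialization are placeholders, as in the earlier strategy example, and the specific numbers are illustrative rather than recommendations.

    Swift

    import Foundation
    import Monstra  // assumed module name; replace with your actual dependency

    // Placeholder specialization; adjust the generic arguments to your manager's key and element types.
    typealias Config = KVHeavyTasksManager<String, Data>.Config

    let coreCount = ProcessInfo.processInfo.activeProcessorCount

    // CPU-bound work: roughly one running task per core, small queue for tight memory control.
    let cpuBoundConfig = Config(
        maxNumberOfQueueingTasks: 20,
        maxNumberOfRunningTasks: coreCount
    )

    // I/O-bound work (downloads, disk I/O): more concurrency than cores (2x-4x),
    // with a larger queue to absorb submission bursts without evictions.
    let ioBoundConfig = Config(
        maxNumberOfQueueingTasks: 100,
        maxNumberOfRunningTasks: coreCount * 3
    )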
  • Strategy for determining task execution order and priority handling.

    This setting affects the entire task scheduling behavior and should be chosen based on the specific requirements of your application’s task characteristics.

    Declaration

    Swift

    public let priorityStrategy: PriorityStrategy
  • Configuration for caching completed task results.

    The cache prevents duplicate task execution and provides immediate results for previously computed tasks. Configure based on result value characteristics, memory constraints, and access patterns.

    Key settings to consider:

    • Memory limits: Prevent cache from consuming too much memory
    • TTL (Time To Live): How long results remain valid
    • Key validation: Filter out invalid or malformed keys
    • Eviction policy: How to handle cache capacity limits

    Declaration

    Swift

    public let cacheConfig: MemoryCache<K, Element>.Configuration
  • Optional callback for monitoring cache performance and behavior.

    This callback provides detailed statistics about cache hits, misses, evictions, and memory usage. Use it for performance monitoring, debugging, and optimization.

    Provided statistics include:

    • Hit/miss ratios and counts
    • Memory usage and capacity utilization
    • Eviction counts and reasons
    • Access patterns and timing information

    Declaration

    Swift

    public let cacheStatisticsReport: ((CacheStatistics, CacheRecord) -> Void)?
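
    Example

    The sketch below attaches a minimal monitoring hook that forwards each report to the console. The closure signature matches the declaration above; no specific properties of CacheStatistics or CacheRecord are assumed, and the module name and Config specialization remain placeholders as in the earlier examples.

    Swift

    import Foundation
    import Monstra  // assumed module name; replace with your actual dependency

    // Placeholder specialization; adjust the generic arguments to your manager's key and element types.
    typealias Config = KVHeavyTasksManager<String, Data>.Config

    // Forward every cache report to the console. In production, hand the values
    // to your metrics or logging pipeline instead of printing them.
    let monitoredConfig = Config(
        cacheStatisticsReport: { statistics, record in
            print("cache statistics:", statistics)
            print("cache record:", record)
        }
    )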
  • Initializes a new KVHeavyTasksManager configuration with specified parameters.

    This initializer provides sensible defaults for most use cases while allowing full customization of all behavioral aspects. The defaults are optimized for general-purpose heavy task management with balanced performance and resource usage.

    Default Value Rationale

    • 50 queuing tasks: Balances memory usage with eviction avoidance
    • 4 concurrent tasks: Suitable for most I/O-bound operations on multi-core systems
    • LIFO(.await): Responsive to recent requests without interruption complexity
    • Default cache config: Reasonable memory limits with basic TTL support
    • No statistics reporting: Minimal overhead for production use

    Declaration

    Swift

    public init(
        maxNumberOfQueueingTasks: Int = 50,
        maxNumberOfRunningTasks: Int = 4,
        priorityStrategy: PriorityStrategy = .LIFO(.await),
        cacheConfig: MemoryCache<K, Element>.Configuration = .defaultConfig,
        cacheStatisticsReport: ((CacheStatistics, CacheRecord) -> Void)? = nil
    )

    Parameters

    maxNumberOfQueueingTasks

    Maximum number of tasks in the waiting queue (default: 50)

    maxNumberOfRunningTasks

    Maximum number of concurrently running tasks (default: 4)

    priorityStrategy

    Task execution order and priority handling (default: .LIFO(.await))

    cacheConfig

    Cache configuration for task results (default: .defaultConfig)

    cacheStatisticsReport

    Optional statistics monitoring callback (default: nil)
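
    Example

    Putting the parameters together, the sketch below shows the two most common ways to build a configuration: accepting every default, and overriding the limits and strategy while keeping the default cache configuration. As in the earlier examples, the module import and the Config specialization are placeholders to be replaced with your project’s actual types.

    Swift

    import Foundation
    import Monstra  // assumed module name; replace with your actual dependency

    // Placeholder specialization; adjust the generic arguments to your manager's key and element types.
    typealias Config = KVHeavyTasksManager<String, Data>.Config

    // 1. All defaults: 50 queued tasks, 4 running tasks, LIFO(.await),
    //    default cache configuration, no statistics reporting.
    let defaultConfig = Config()

    // 2. Customized: interactive workload with a larger queue and interruptible LIFO scheduling.
    let customConfig = Config(
        maxNumberOfQueueingTasks: 100,
        maxNumberOfRunningTasks: 6,
        priorityStrategy: .LIFO(.stop),
        cacheConfig: .defaultConfig,
        cacheStatisticsReport: nil
    )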