Classes
The following classes are available globally.
-
Base class for heavy task data providers that defines the foundation for task execution.
This abstract base class provides the essential components that every heavy task data provider needs: a unique key identifier, event publishing capabilities, and result delivery mechanisms. It serves as the foundation for implementing concrete data providers that perform specific heavy operations.
Key Responsibilities
- Task Identification: Maintains unique key for task tracking and cache coordination
- Event Broadcasting: Provides mechanism for real-time progress and status updates
- Result Delivery: Handles final result publishing with success/failure states
- Type Safety: Enforces generic type constraints for keys, elements, and events
Generic Parameters
- K: Key type (must be Hashable) used for task identification and caching
- Element: Result type returned upon successful task completion
- CustomEvent: Event type for progress updates and status notifications
Usage Pattern
Concrete implementations should inherit from this class and implement the KVHeavyTaskDataProviderInterface protocol to provide actual task execution logic.
Declaration
Swift
open class KVHeavyTaskBaseDataProvider<K, Element, CustomEvent> where K : Hashable
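As a minimal sketch of the usage pattern described above: subclass the base class and adopt the interface. The publisher member names (customEventPublisher, resultPublisher) and the key property follow the manager usage example later on this page, but the exact interface requirements and the .dealloc stop action are assumptions.

```swift
// Hypothetical sketch: a provider keyed by String that reports
// percent progress (Int) and yields a String result.
final class ExampleProvider: KVHeavyTaskBaseDataProvider<String, String, Int>,
                             KVHeavyTaskDataProviderInterface {
    func start() {
        // Broadcast a progress update through the inherited event publisher
        customEventPublisher(50)
        // Deliver the final result when the heavy work completes
        resultPublisher(.success("finished"))
    }

    func stop() -> KVHeavyTaskDataProviderStopAction {
        // .dealloc is an assumed counterpart to the .reuse action
        // shown in the manager example below
        .dealloc
    }
}
```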
-
Comprehensive manager for resource-intensive computational tasks and operations.
KVHeavyTasksManager serves as the central coordination hub for executing, caching, and managing heavy computational tasks. It provides sophisticated concurrency control, intelligent caching, flexible priority strategies, and comprehensive lifecycle management for resource-intensive operations.
Architecture Overview
The manager operates through several interconnected systems:
- Task Queue Management: Dual-queue system (waiting/running) with configurable capacity and eviction
- Priority Coordination: FIFO/LIFO strategies with optional task interruption support
- Cache Integration: Multi-level caching with hit/miss/null/invalid state handling
- Provider Lifecycle: DataProvider creation, reuse, and cleanup with memory optimization
- Event Broadcasting: Real-time progress updates and custom event distribution
- Thread Safety: Complete concurrency protection using semaphore-based synchronization
Key Features
- Zero-configuration operation: Sensible defaults for immediate use
- Highly configurable: Every aspect of behavior can be customized
- Memory efficient: Automatic cleanup and configurable resource limits
- Performance optimized: Minimal overhead with intelligent caching and provider reuse
- Robust error handling: Comprehensive error propagation and graceful degradation
- Production ready: Thread-safe, tested, and optimized for high-load scenarios
Generic Type Parameters
- K: Key type for task identification (must be Hashable), used for deduplication and caching
- Element: Result type returned by completed tasks, cached and delivered to callbacks
- CustomEvent: Event type for progress updates, broadcast to all registered observers
- DataProvider: Concrete provider class that executes the actual heavy operations
Usage Example
// Define your custom data provider
class ImageProcessingProvider: KVHeavyTaskBaseDataProvider<String, UIImage, ProcessingProgress>,
                               KVHeavyTaskDataProviderInterface {
    func start() {
        // Implement heavy image processing
        performImageProcessing { progress in
            customEventPublisher(progress)
        } completion: { result in
            resultPublisher(result)
        }
    }

    func stop() -> KVHeavyTaskDataProviderStopAction {
        return .reuse // Preserve processing state for resumption
    }
}

// Configure and create manager
let config = KVHeavyTasksManager<String, UIImage, ProcessingProgress, ImageProcessingProvider>.Config(
    maxNumberOfRunningTasks: 2,
    priorityStrategy: .LIFO(.stop),
    cacheConfig: .init(memoryUsageLimitation: .init(capacity: 100, memory: 50_000_000))
)
let manager = KVHeavyTasksManager<String, UIImage, ProcessingProgress, ImageProcessingProvider>(config: config)

// Execute tasks with progress monitoring
manager.fetch(
    key: "process_photo_123",
    customEventObserver: { progress in
        print("Processing progress: \(progress.percentage)%")
    },
    result: { result in
        switch result {
        case .success(let image):
            break // Handle processed image
        case .failure(let error):
            break // Handle processing error
        }
    }
)
Thread Safety
All public methods are fully thread-safe and can be called from any queue or thread. The manager uses internal synchronization to coordinate access to shared state while minimizing performance impact through careful lock scope management.
Performance Considerations
- Queue sizing: Balance memory usage vs task eviction risk
- Concurrency limits: Optimize for your specific task characteristics and system resources
- Cache configuration: Tune for your result size and access patterns
- Priority strategy: Choose based on responsiveness vs predictability requirements
- Provider lifecycle: Balance setup cost vs memory usage with reuse/dealloc decisions
Declaration
Swift
public class KVHeavyTasksManager<
    K,
    Element,
    CustomEvent,
    DataProvider: KVHeavyTaskBaseDataProvider<K, Element, CustomEvent>
> where DataProvider: KVHeavyTaskDataProviderInterface
extension KVHeavyTasksManager: @unchecked Sendable
-
A lightweight key-value tasks manager that integrates a cache and configurable providers.
Declaration
Swift
public class KVLightTasksManager<K, Element> where K : Hashable
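The declaration above does not show the fetch API. Assuming it mirrors the heavy manager's fetch(key:result:) shape (and a parameterless initializer for the zero-configuration case), a usage sketch might look like:

```swift
// Hypothetical sketch: the initializer and fetch(key:result:) signature
// are assumed by analogy with KVHeavyTasksManager and may differ.
let manager = KVLightTasksManager<String, Data>()
manager.fetch(key: "profile_42") { result in
    switch result {
    case .success(let data):
        print("fetched \(data.count) bytes")
    case .failure(let error):
        print("fetch failed: \(error)")
    }
}
```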
-
MonoTask: Single-Instance Task Executor with TTL Caching and Retry Logic
MonoTask is a thread-safe, high-performance task executor that ensures only one instance of a task runs at a time while providing intelligent result caching and retry capabilities.
Key Features:
- Execution Merging: Multiple concurrent requests are merged into a single execution
- TTL-based Caching: Results are cached for a configurable duration to avoid redundant work
- Retry Logic: Automatic retry with exponential backoff for failed executions
- Thread Safety: Full thread safety with fine-grained locking using semaphores
- Queue Management: Separate queues for task execution and callback invocation
- Manual Cache Control: Manual cache invalidation with execution strategy options
Architecture:
┌─────────────────┐    ┌──────────────┐    ┌─────────────────┐
│    Execution    │    │    Caching   │    │    Callbacks    │
│     Merging     │───▶│     Layer    │───▶│   Distribution  │
│                 │    │              │    │                 │
│  • Single run   │    │  • TTL cache │    │  • All waiters  │
│  • Merge calls  │    │  • Expiration│    │  • Same result  │
└─────────────────┘    └──────────────┘    └─────────────────┘
Usage Examples:
// Basic usage with caching
let task = MonoTask<String>(resultExpireDuration: 60.0) { callback in
    // Expensive network call
    URLSession.shared.dataTask(with: url) { data, _, error in
        if let error = error {
            callback(.failure(error))
        } else {
            callback(.success(String(data: data!, encoding: .utf8)!))
        }
    }.resume()
}

// Multiple concurrent calls - only one network request
let result1 = await task.asyncExecute() // Network call happens
let result2 = await task.asyncExecute() // Returns cached result
let result3 = await task.asyncExecute() // Returns cached result

// With retry logic
let retryTask = MonoTask<Data>(
    retry: .count(count: 3, intervalProxy: .exponentialBackoff(initialTimeInterval: 1.0, scaleRate: 2.0)),
    resultExpireDuration: 300.0
) { callback in
    performNetworkRequest(callback)
}

// Manual cache control
task.clearResult(ongoingExecutionStrategy: .cancel) // Cancel and clear
Thread Safety:
MonoTask uses a single internal semaphore for fine-grained thread safety:
- Protects cached result, expiration timestamp, callback registration, and execution state
Performance:
- Execution Merging: Prevents duplicate work when multiple callers request same task
- TTL Caching: Avoids repeated expensive operations within cache period
- Queue Separation: Task execution and callback invocation can run on different queues
- Memory Efficiency: Minimal overhead per task instance
Note
This class is designed for expensive, idempotent operations like network requests, database queries, or complex computations that benefit from caching and deduplication.
Declaration
Swift
public class MonoTask<TaskResult>
-
A high-performance, in-memory key-value cache with configurable thread safety and memory management.
This cache provides automatic expiration, priority-based eviction, and memory usage tracking. It’s designed for scenarios where you need fine-grained control over cache behavior and memory usage.
See moreDeclaration
Swift
public class MemoryCache<Key, Element> where Key : Hashable
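The initializer and accessor signatures are not shown in this summary. A hypothetical sketch of typical usage, with all method names and parameter labels (set, getElement, priority, expiredIn) assumed rather than confirmed:

```swift
// Hypothetical sketch: method names and labels are assumptions,
// illustrating the expiration and priority-based eviction described above.
let cache = MemoryCache<String, Data>()
cache.set(thumbnailData, for: "photo_1", priority: 5.0, expiredIn: 60)
if let cached = cache.getElement(for: "photo_1") {
    // Cache hit: reuse the stored bytes instead of recomputing
    print("hit: \(cached.count) bytes")
}
```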
-
PriorityLRUQueue: Priority-aware LRU queue with O(1) average access and updates.
This data structure maintains separate LRU queues per priority level and evicts elements according to a two-tier policy:
- By priority: lower priority values are evicted first
- Within a priority: least-recently-used (LRU) elements are evicted first
Complexity
- get/set/remove by key: O(1) average
- eviction and priority promotion: O(1) average
Thread-safety: This type is not thread-safe on its own. Wrap usage with external synchronization (such as a semaphore) when accessed from multiple threads.
Declaration
Swift
public class PriorityLRUQueue<K, Element> where K : Hashable
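The two-tier eviction policy can be illustrated with a small sketch. The setValue(_:for:priority:) signature is an assumption (the summary above only names get/set/remove by key); the eviction behavior follows the policy stated above.

```swift
// Hypothetical sketch: setValue(_:for:priority:) is an assumed signature.
let queue = PriorityLRUQueue<String, Int>(capacity: 2)
queue.setValue(1, for: "low-a", priority: 0)   // lower priority
queue.setValue(2, for: "high-b", priority: 10) // higher priority
queue.setValue(3, for: "high-c", priority: 10) // capacity exceeded
// "low-a" is evicted first because it has the lowest priority,
// even though "high-b" has been used less recently overall.
```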
-
TTLPriorityLRUQueue: Hybrid TTL + Priority + LRU cache.
This structure stores items with a time-to-live (TTL), an eviction priority, and tracks recency for LRU ordering. Eviction proceeds in the following order:
1. Expired entries
2. Lowest priority
3. Least-recently-used within the same priority
Complexity
- set/get/remove by key: O(1) average
- purging k expired entries: O(k)
Thread-safety: Not thread-safe by itself; use external synchronization if needed.
Declaration
Swift
public class TTLPriorityLRUQueue<Key, Element> where Key : Hashable
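A sketch of the three-stage eviction order. The set(_:for:priority:duration:) signature is assumed, not taken from the declaration above:

```swift
// Hypothetical sketch: set(_:for:priority:duration:) is an assumed
// signature, illustrating eviction order: expired → low priority → LRU.
let cache = TTLPriorityLRUQueue<String, Data>(capacity: 2)
let payloadA = Data([0x01])
let payloadB = Data([0x02])
cache.set(payloadA, for: "a", priority: 1, duration: 0.1) // short TTL
cache.set(payloadB, for: "b", priority: 5, duration: 300)
// Once "a" expires, inserting a third entry reclaims its slot first,
// before any priority or LRU comparison is needed.
```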
-
DoublyLink: Minimal doubly linked list used for LRU ordering.
Designed as a building block for higher-level queues (such as priority-based LRU structures), providing O(1) average operations for push/pop from both ends and node removal when the node reference is known.
Complexity
- enqueueFront: O(1)
- dequeueFront / dequeueBack: O(1)
- removeNode (with node reference): O(1)
Thread-safety: Not synchronized; use external synchronization when accessed concurrently.
Invariants:
- front is the most-recent node
- back is the least-recent node
- count is >= 0
Declaration
Swift
public class DoublyLink<Element>
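A sketch using the operations named in the complexity list above. The exact argument labels and whether enqueueFront returns the new node are assumptions:

```swift
// Hypothetical sketch: argument labels and the enqueueFront return
// value are assumed; operation names come from the list above.
let list = DoublyLink<String>()
_ = list.enqueueFront("a")
let nodeB = list.enqueueFront("b")
_ = list.enqueueFront("c")   // order: c (front) … a (back)
_ = list.dequeueBack()       // removes "a", the least-recent node
list.removeNode(nodeB)       // O(1) removal given the node reference
```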
-
A high-performance hash-based queue that maintains insertion order with O(1) access and removal operations.
Core Features
- Hash-Based Operations: All operations are performed using hashable keys, providing intuitive access patterns
- LRU (Least Recently Used) Management: Recently accessed keys are moved to the front of the queue
- O(1) Performance: Constant time complexity for enqueue, dequeue, and removal operations
- Capacity Management: Configurable capacity with automatic eviction of least recently used keys
- Duplicate Key Handling: Re-inserting an existing key moves it to the front (LRU behavior)
Data Structure
The queue uses a combination of:
- Doubly Linked List: Maintains insertion order and provides O(1) front/back operations
- Hash Map: Provides O(1) key-to-node lookup for efficient access and removal
Performance Characteristics
- Time Complexity: O(1) average case for all operations
- Space Complexity: O(n) where n is the number of keys
- Memory Efficiency: Automatic eviction when capacity is exceeded
- Thread Safety: Not thread-safe; caller must ensure thread safety if needed
Example Usage
// Create a hash queue with capacity of 100
let queue = HashQueue<String>(capacity: 100)

// Enqueue keys (moves to front if already exists)
queue.enqueueFront(key: "task1")
queue.enqueueFront(key: "task2")
queue.enqueueFront(key: "task1") // Moves "task1" to front

// Dequeue from front (most recently used)
let mostRecent = queue.dequeueFront() // Returns "task1"

// Dequeue from back (least recently used)
let leastRecent = queue.dequeueBack() // Returns "task2"

// Remove specific key
queue.remove(key: "task3")
Declaration
Swift
public class HashQueue<K> where K : Hashable
-
Heap: Generic priority queue with custom ordering and fixed capacity.
Provides a binary-heap implementation backed by a Swift Array, supporting a caller-supplied comparison closure to define ordering. Offers convenience builders for min/max heaps when Element: Comparable.
Features:
- Fixed logical capacity with optional force-insert semantics when full
- Event callbacks for insert/remove/move to track external indices
- O(log n) insertion and removal; O(1) root access
Thread-safety: Not synchronized; guard with external synchronization when needed.
Declaration
Swift
public class Heap<Element>
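A usage sketch with a caller-supplied comparison closure, as described above. The initializer labels and the insert/peek accessor names are assumptions, not confirmed API:

```swift
// Hypothetical sketch: initializer and accessor names are assumed.
// The closure defines ordering; here it yields a min-heap.
let heap = Heap<Int>(capacity: 8) { a, b in a < b }
heap.insert(5)
heap.insert(2)
heap.insert(9)
// Root access is O(1); with min ordering the smallest element is on top.
let smallest = heap.peek()
```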