Hello,
I have a question about an edge-case scenario.
Before that, some info on my project:
I have a launchd daemon that carries out some business logic; it also has an XPC listener (built using the C APIs).
Questions:
Can there be a situation where the daemon is up and running but the XPC listener is down (due to some error or a crash)? If yes, do I need to handle that in my code, or will launchd handle it?
When the daemon is stopped or shut down, how do I stop the XPC listener? After getting the listener object from xpc_connection_create_mach_service, should I just call xpc_connection_cancel followed by a call to xpc_release?
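For context, the shutdown path I have in mind looks roughly like this (a sketch that calls the same C XPC functions through their Swift import; the Mach service name is a placeholder):
import XPC

// Hypothetical Mach service name; it must match the MachServices key in the daemon's launchd plist.
let listener = xpc_connection_create_mach_service("com.example.mydaemon.xpc",
                                                  nil,
                                                  UInt64(XPC_CONNECTION_MACH_SERVICE_LISTENER))

xpc_connection_set_event_handler(listener) { event in
    // Accept peer connections and watch for XPC_ERROR_* events here.
}
xpc_connection_activate(listener)

// ... later, on daemon shutdown:
xpc_connection_cancel(listener)   // no more events are delivered after this
// In C you would follow the cancel with xpc_release(listener); under Swift/ARC the
// object is released automatically once nothing references it.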
Thanks!
K
I'm trying to understand how the API works to perform a function that can continue running if the user closes the app. For a very simple example, consider a function that increments a number on screen every second, counting from 1 to 100, reaching completion at 100. The user can stay in the app for 100s watching it work to completion, or the user can close the app say after 2s and do other things while watching it work to completion in the Live Activity.
To do this, when the user taps a Start Counting button, you'd:
1 Call BGTaskScheduler.shared.register(forTaskWithIdentifier:using:launchHandler:).
Question 1: Do I understand correctly that all of the logic to perform this counting operation would exist entirely in the launchHandler block (noting you could call another function you define, passing it the task so you can update its progress)? I am confused because the documentation states "The system runs the block of code for the launch handler when it launches the app in the background," but the app is already open in the foreground. This made me think the block is not going to be invoked until the user closes the app, to inform you it's okay to continue processing in the background, but then how would you know where to pick up? I want to confirm my thinking was wrong: all the logic should be in this block, from start to completion of the operation, and that's fine even if the app stays in the foreground the whole time.
2 Then you'd create a BGContinuedProcessingTaskRequest and set request.strategy = .fail for this example because you need it to start immediately per the user's explicit tap on the Start Counting button.
3 Call BGTaskScheduler.shared.submit(request).
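Putting steps 1-3 together, here's a minimal sketch of what I mean (the identifier, the title/subtitle initializer parameters, and the task.progress usage are my assumptions from the iOS 26 documentation and may not match the final API exactly):
import BackgroundTasks
import Foundation

let identifier = "com.example.app.counting"   // placeholder; must be permitted for the app

// Step 1: register once; the whole 1...100 operation lives in the launch handler.
BGTaskScheduler.shared.register(forTaskWithIdentifier: identifier, using: nil) { task in
    guard let task = task as? BGContinuedProcessingTask else { return }
    task.expirationHandler = { /* stop counting and save where we got to */ }
    task.progress.totalUnitCount = 100
    for i in 1...100 {
        task.progress.completedUnitCount = Int64(i)   // assumption: this also drives the system progress UI
        Thread.sleep(forTimeInterval: 1)
    }
    task.setTaskCompleted(success: true)
}

// Steps 2-3: when the user taps Start Counting.
let request = BGContinuedProcessingTaskRequest(identifier: identifier,
                                               title: "Counting",
                                               subtitle: "Counting to 100")
request.strategy = .fail   // start immediately or throw, rather than queueing
do {
    try BGTaskScheduler.shared.submit(request)
} catch {
    // Question 2 territory: presumably fall back to counting in-process while the app is foregrounded.
}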
Question 2: If the submit function throws an error, should you handle it by just performing the counting operation logic anyway (calling your function without passing a task)? I understand this can happen if for some reason the system couldn't immediately run it, like if there are already too many pending task requests. It seems you should not show an error message to the user, but should still perform the operation and simply not support background continued processing for it (and perhaps show a light warning: "this operation can't be continued in the background, so keep the app open"). Or should you still queue it up even though the user wants to start counting now? That leads to my next question.
Question 3: In what scenario would you not want the operation to start immediately (the queue behavior, which is the default), given the app is already in the foreground and the user requested some operation? I'm struggling to think of an example. Something like a button titled Compress Photos Whenever You Can, where it may start immediately or maybe it won't? While waiting for the launchHandler to be invoked, should the UI just show 0% progress or "Pending" until the system can get to this task in the queue? I'm struggling to understand the use cases here: why make the user wait to start processing when they might not even intend to close the app during the operation?
Thanks for any insights! As an aside, a sample project with a couple use cases would have been incredibly helpful to understand how the API is expected to be used.
I'm developing a macOS console application that uses ODBC to connect to PostgreSQL. The application works fine when run normally, but fails to load the ODBC driver when debugging with LLDB (under root it works fine as well).
Error Details
When running the application through LLDB, I get this sandbox denial in the system log (via log stream):
Error 0x0 0 0 kernel: (Sandbox) Sandbox: logd_helper(587) deny(1) file-read-data /opt/homebrew/lib/psqlodbcw.so
The application cannot access the PostgreSQL ODBC driver located at /opt/homebrew/lib/psqlodbcw.so (I also tried copying it to /usr/local/lib/...).
Environment
macOS Version: Latest Sequoia
LLDB: Using LLDB from Xcode 16.3 (/Applications/Xcode16.3.app/Contents/Developer/usr/bin/lldb)
ODBC Driver: PostgreSQL ODBC driver installed via Homebrew
Code Signing: Application is signed with Apple Development certificate
What is the recommended approach for debugging applications that need to load dynamic libraries?
Are there specific entitlements or configurations that would allow LLDB to access ODBC drivers during debugging sessions?
Any guidance would be greatly appreciated.
Thank you for any assistance!
I'm working on an enterprise product that's mainly a daemon (with Endpoint Security) without any GUI component. I'm looking into the update process for daemons/agents that was introduced with Ventura (Link), but I have to say that the entire process is just deeply unfun. I really can't stress enough how unfun.
Anyway...
The product bundle now contains a dedicated Swift executable that calls SMAppService.register for both the daemon and agent.
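Essentially it does this (a minimal sketch; the plist names are placeholders, and the plists live in Contents/Library/LaunchDaemons and Contents/Library/LaunchAgents inside the app bundle):
import ServiceManagement

// Placeholder plist names; BundleProgram in each plist points at the bundled executable.
let daemon = SMAppService.daemon(plistName: "com.example.daemon.plist")
let agent = SMAppService.agent(plistName: "com.example.agent.plist")

do {
    try daemon.register()
    try agent.register()
} catch {
    print("Error registering: \(error)")
}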
It registers the app in the System Settings login items list, but I also get an error.
Error registering daemon: Error Domain=SMAppServiceErrorDomain Code=1 "Operation not permitted" UserInfo={NSLocalizedFailureReason=Operation not permitted}
What could be the reason?
I wouldn't need to activate the items, I just need them to be added to the list, so that I can control them via launchctl.
Which leads me to my next question: how can I control bundled daemons/agents via launchctl? I tried launchctl enable and bootstrap, just like I do with daemons under /Library/LaunchDaemons, but all I get is:
sudo launchctl enable system/com.identifier.daemon
sudo launchctl bootstrap /Path/to/daemon/launchdplist/inside/bundle/Library/LaunchDaemons/com.blub.plist
Bootstrap failed: 5: Input/output error (not a super helpful error message)
I'm really frustrated by the complexity of this process and all of its pitfalls.
Hello,
I am developing an application that communicates with an external device using BLE and L2CAP. I wonder what the best practices are for using the input and output streams established with an L2CAP connection when working with the Swift 6 concurrency model.
I've been trying to find some examples and hints for some time now but unfortunately there isn't much available. One useful thread I've found is: https://developer.apple.com/forums/thread/756281
but it does not offer much insight into using, e.g., the actor model with streams. I wonder if anything has changed in this regard?
Also, are there any plans to migrate, e.g., the Core Bluetooth stack to the new Swift 6 concurrency?
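For concreteness, the kind of wrapper I've been experimenting with looks roughly like this sketch, which serializes all channel I/O behind an actor. The friction points are exactly what I'm asking about: CBL2CAPChannel and its streams are not Sendable, so handing them into the actor needs `sending`/region-based isolation (or unchecked workarounds), and real code still needs a StreamDelegate for bytes-available/space-available events.
import CoreBluetooth
import Foundation

actor L2CAPChannelIO {
    private let channel: CBL2CAPChannel

    // `sending` transfers the non-Sendable channel into the actor's isolation region.
    init(channel: sending CBL2CAPChannel) {
        self.channel = channel
        channel.inputStream.open()
        channel.outputStream.open()
    }

    func send(_ data: Data) {
        data.withUnsafeBytes { raw in
            guard let base = raw.bindMemory(to: UInt8.self).baseAddress else { return }
            _ = channel.outputStream.write(base, maxLength: raw.count)
        }
    }

    func readAvailable(maxLength: Int = 1024) -> Data {
        var buffer = [UInt8](repeating: 0, count: maxLength)
        let count = channel.inputStream.read(&buffer, maxLength: maxLength)
        return count > 0 ? Data(buffer[0..<count]) : Data()
    }
}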
Hi! I'm trying to submit a task request to BGTaskScheduler when I background my app. The backgrounding triggers an update of data in a shared app group container. I'm currently getting the following error and I'm unsure where it's coming from:
*** Assertion failure in -[BGTaskScheduler _unsafe_submitTaskRequest:error:], BGTaskScheduler.m:274
Here is my code:
BGAppRefreshTaskRequest *request = [[BGAppRefreshTaskRequest alloc] initWithIdentifier:kRBBackgroundTaskIdentifier];
NSError *error = nil;
BOOL success = [[BGTaskScheduler sharedScheduler] submitTaskRequest:request error:&error];
In iOS Background Execution Limits, I see this:
When the user ‘force quits’ an app by swiping up in the multitasking UI, iOS interprets that to mean that the user doesn’t want the app running at all. iOS also sets a flag that prevents the app from being launched in the background. That flag gets cleared when the user next launches the app manually.
However, I see that when I close an app on iPadOS 26 with the red X, the app doesn't appear in the multitasking UI. So is that treated as a force quit, and is the app prevented from running background tasks?
I'm developing a medication scheduling app similar to Apple Health's Medications feature, and I'd like some input on my current approach to background tasks.
In my app, when a user creates a medication, I generate ScheduledDose objects (with corresponding local notifications) for the next 2 weeks and save them to SwiftData. To ensure this 2-week window stays current, I've implemented a BGAppRefreshTask that runs daily to generate new doses as needed.
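The scheduling part looks roughly like this (a sketch; the identifier and the 24-hour interval are placeholders):
import BackgroundTasks

func scheduleDoseWindowRefresh() {
    let request = BGAppRefreshTaskRequest(identifier: "com.example.meds.refreshDoses")
    request.earliestBeginDate = Date(timeIntervalSinceNow: 24 * 60 * 60)  // roughly once a day
    do {
        try BGTaskScheduler.shared.submit(request)
    } catch {
        // Not fatal: the 2-week window can also be topped up on normal app launches.
    }
}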
My concern is whether BGAppRefreshTask is the appropriate mechanism for this purpose. Since I'm not making any network requests but rather generating and storing local data, I'm questioning if this is the right approach.
I'm also wondering how Apple Health's Medications feature handles this kind of scheduling. Their app seems to maintain future doses regardless of app usage patterns.
Has anyone implemented something similar or can suggest the best background execution API for this type of scenario?
Thanks for any guidance you can provide.
I can see a number of events in our error-logging service where we track expired BGAppRefreshTasks. We use BGAppRefreshTask to update metadata.
Looking into those events, I can see that most of the reported tasks expired around 2-5 seconds after the app was launched. The documentation says: "The system decides the best time to launch your background task, and provides your app up to 30 seconds of background runtime." I expected "up to 30 seconds" to mean a 10-30 second range, not something that short.
Is there any heuristic that affects how much time the app gets?
Is there a way to tell if the app was launched due to the background refresh task? If we have this information we can optimize what the app does during those 5 seconds.
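For context, the instrumentation we're considering looks roughly like this (a sketch; the identifier is a placeholder): record when the handler starts and how long it actually ran before expiring.
import BackgroundTasks

BGTaskScheduler.shared.register(forTaskWithIdentifier: "com.example.metadataRefresh",
                                using: nil) { task in
    let started = Date()
    task.expirationHandler = {
        let granted = Date().timeIntervalSince(started)
        // Report `granted` to the error-logging service alongside the expiration event.
        task.setTaskCompleted(success: false)
    }
    // ... metadata update work; on success:
    // task.setTaskCompleted(success: true)
}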
Thank you!
Hello everyone!
I'm having a problem with background tasks running in the foreground.
When a user enters the app, a background task is triggered. I've written some code to check if the app is in the foreground and to prevent the task from running, but it doesn't always work. Sometimes the task runs in the background as expected, but other times it runs in the foreground, as I mentioned earlier.
Could it be that I'm doing something wrong? Any suggestions would be appreciated.
Here is the code:
class BackgroundTaskService {
    @Environment(\.scenePhase) var scenePhase

    static let shared = BackgroundTaskService()
    private init() {}

    // MARK: - create task
    func createCheckTask() {
        let identifier = TaskIdentifier.check
        BGTaskScheduler.shared.getPendingTaskRequests { requests in
            if requests.contains(where: { $0.identifier == identifier.rawValue }) {
                return
            }
            self.createByInterval(identifier: identifier.rawValue, interval: identifier.interval)
        }
    }

    private func createByInterval(identifier: String, interval: TimeInterval) {
        let request = BGProcessingTaskRequest(identifier: identifier)
        request.earliestBeginDate = Date(timeIntervalSinceNow: interval)
        scheduleTask(request: request)
    }

    // MARK: submit task
    private func scheduleTask(request: BGProcessingTaskRequest) {
        do {
            try BGTaskScheduler.shared.submit(request)
        } catch {
            // some actions with error
        }
    }

    // MARK: background actions
    func checkTask(task: BGProcessingTask) {
        let today = Calendar.current.startOfDay(for: Date())
        let lastExecutionDate = UserDefaults.standard.object(forKey: "lastCheckExecutionDate") as? Date ?? Date.distantPast
        let notRunnedToday = !Calendar.current.isDate(today, inSameDayAs: lastExecutionDate)
        guard notRunnedToday else {
            task.setTaskCompleted(success: true)
            createCheckTask()
            return
        }
        if scenePhase == .background {
            TaskActionStore.shared.getAction(for: task.identifier)?()
        }
        task.setTaskCompleted(success: true)
        UserDefaults.standard.set(today, forKey: "lastCheckExecutionDate")
        createCheckTask()
    }
}
And in the AppDelegate:
BGTaskScheduler.shared.register(forTaskWithIdentifier: "check", using: nil) { task in
    guard let task = task as? BGProcessingTask else { return }
    BackgroundTaskService.shared.checkTask(task: task)
}
BackgroundTaskService.shared.createCheckTask()
I understand that GCD and its underlying implementations have evolved over time, and many things have not been shared explicitly in the Apple documentation.
I am familiar with the main concepts of DispatchQueue (serial and concurrent queues), DispatchQoS, target queues, and the system-provided queues (main and the global queues), etc.
I have some doubts & questions to clarify:
[Main Dispatch Queue] [Link] "Because the main queue doesn't behave entirely like a regular serial queue, it may have unwanted side-effects when used in processes that are not UI apps (daemons). For such processes, the main queue should be avoided." What does this mean? Can you elaborate?
[Global Concurrent Dispatch Queues] Are they global to a process, or shared across processes on a device? I believe it is the former, but I just wanted to be sure.
[Global Concurrent Dispatch Queues] Does the system create 4 (one for each QoS) * 2 (overcommitting and non-overcommitting) = 8 queues in all? When does each type of queue come into play?
[Custom Queue][Target Queue concept] [swift-corelibs-libdispatch/man/dispatch_queue_create.3] QUOTE The default target queue of all dispatch objects created by the application is the default priority global concurrent queue. UNQUOTE Is this still true?
We could not find a mention of this in any recent official Apple documentation (though some old forum threads (one more) and the GitHub code documentation indicate the same).
The official documentation only says:
[dispatch_set_target_queue] QUOTE If you want the system to provide a queue that is appropriate for the current object UNQUOTE
[dispatch_queue_create_with_target] QUOTE Specify DISPATCH_TARGET_QUEUE_DEFAULT to set the target queue to the default type for the current dispatch queue. UNQUOTE
[Dispatch>DispatchQueue>init] QUOTE Specify DISPATCH_TARGET_QUEUE_DEFAULT if you want the system to provide a queue that is appropriate for the current object. UNQUOTE
What is the difference between passing target queue as 'nil' vs 'DISPATCH_TARGET_QUEUE_DEFAULT' to DispatchQueue init?
[Custom Queue][Target Queue concept] [dispatch_set_target_queue] QUOTE The system doesn't allocate threads to the dispatch queue if it has a target queue, unless that target queue is a global concurrent queue. UNQUOTE
So the system does allocate threads to custom dispatch queues that have a global concurrent queue as their (default) target.
What does that mean? What does targeting a global concurrent queue mean in that case?
[System / GCD Thread Pool] that executes work items from DispatchQueues: Is this thread pool per queue, shared across queues within a process, or shared across processes on the device?
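To make the target-queue questions concrete, this is the kind of setup we're asking about (a sketch; labels and QoS values are arbitrary):
import Dispatch

// A custom serial queue explicitly targeted at a global concurrent queue...
let explicitTarget = DispatchQueue(label: "com.example.work.explicit",
                                   qos: .utility,
                                   target: DispatchQueue.global(qos: .utility))

// ...versus one created with the default (nil) target.
let defaultTarget = DispatchQueue(label: "com.example.work.default")

explicitTarget.async {
    // Runs on a thread drawn from GCD's shared pool.
}
defaultTarget.async {
    // Whether this queue's default target is still the default-priority
    // global concurrent queue is the question above.
}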
Hello,
I have a few questions regarding the documentation here:
Can the method described in the article be built with Xcode 26 and run on iOS versions earlier than 26? Or is it restricted to running only on iOS 26, since AppExtensionPoint appears to be available starting from iOS 26?
Does this approach allow two apps under the same Team ID to communicate with each other?
Does this approach also allow two apps under different Team IDs to communicate with each other?
Is it mandatory to implement EXAppExtensionBrowserViewController and obtain user consent before using this method to exchange information?
In our implementation, we followed the documentation. Inside EXAppExtensionBrowserViewController, we were able to see the Generic Extension from another app and enabled the permission.
However, we still get the following error:
Failed to connect: Error Domain=NABUExtensionConnector Code=1
"No matching extension found"
UserInfo={NSLocalizedDescription=No matching extension found}
Could someone clarify whether this is expected behavior, or if we are missing an additional configuration step?
Thanks in advance!
Hello,
In a launchd agent, I need to call into a third-party library that may occasionally hang. At present, these calls are made from a separate thread, but if the thread hangs it cannot be terminated (pthread_cancel/pthread_kill are ineffective).
Would Apple recommend isolating this functionality in a separate process that can be force-terminated if it becomes unresponsive, or is there a preferred approach for handling such cases in launchd agents?
Can I use the fork() system call in a launchd agent?
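To illustrate the separate-process idea, here is a sketch using Foundation's Process (the helper path, arguments, and timeout are placeholders; whether a bare fork() without exec is safe here is exactly the question):
import Foundation

func runThirdPartyCall(timeout: TimeInterval) throws -> Bool {
    let helper = Process()
    helper.executableURL = URL(fileURLWithPath: "/Library/Application Support/Example/libcall-helper")
    helper.arguments = ["--do-the-call"]
    try helper.run()

    // Poll until the helper exits or the deadline passes.
    let deadline = Date().addingTimeInterval(timeout)
    while helper.isRunning && Date() < deadline {
        Thread.sleep(forTimeInterval: 0.1)
    }

    if helper.isRunning {
        // The call hung: unlike a stuck thread, the whole helper can be force-terminated.
        helper.terminate()   // SIGTERM; escalate via kill(helper.processIdentifier, SIGKILL) if needed
        return false
    }
    return helper.terminationStatus == 0
}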
Thank you in advance!
We would be creating N NWListener objects and M NWConnection objects in our process's communication subsystem, to create server sockets, accepted client sockets on the server, and client sockets on the clients.
Both NWConnection and NWListener rely on DispatchQueue to deliver state changes, incoming connections, send/recv completions etc.
What DispatchQueues should I use and why?
A global concurrent dispatch queue (and which QoS?) for all NWConnections and NWListeners
One custom concurrent queue (which QoS?) for all NWConnections and NWListeners? (Does that get targeted to one of the global queues anyway?)
One custom concurrent queue per NWConnection and NWListener, though all targeted at a global concurrent dispatch queue (and which QoS?)?
One custom concurrent queue per NWConnection and NWListener, though all targeted at a single custom concurrent target queue?
For every option above, how am I impacted in terms of parallelism, concurrency, throughput, and latency, and how is the overall system impacted (with other processes also running)?
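For concreteness, the third option above would look something like this sketch (using serial per-object queues rather than concurrent ones, which is our likely variant; labels and the port are placeholders):
import Network

let target = DispatchQueue.global(qos: .utility)

func startListener() throws -> NWListener {
    // One queue per object, all targeting the same global concurrent queue.
    let listenerQueue = DispatchQueue(label: "com.example.listener", target: target)
    let listener = try NWListener(using: .tcp, on: 4567)
    listener.newConnectionHandler = { connection in
        let connectionQueue = DispatchQueue(label: "com.example.connection", target: target)
        connection.start(queue: connectionQueue)
    }
    listener.stateUpdateHandler = { state in
        // Observe .ready / .failed here.
    }
    listener.start(queue: listenerQueue)
    return listener
}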
Separate questions (sorry for the digression):
Are global concurrent queues specific to a process or shared across all processes on a device?
Can I safely use setSpecific on global dispatch queues in our app?
I am using C APIs for XPC communication.
When my XPC server gets an xpc_dictionary as a message, I use xpc_dictionary_get_string to get a string, which is of type const char *. Afterwards, when I try to free the memory for the string, I get an error.
I could not find any details on why this happens.
Does XPC handle the lifecycle of these C strings?
I did some tests to see the behaviour.
The following code snippet prints a string temp before and after releasing the dictionary memory.
char *string = "dummy-string";
xpc_object_t dict = xpc_dictionary_create(NULL, NULL, 0);
xpc_dictionary_set_string(dict, "str", string);
const char *temp = xpc_dictionary_get_string(dict, "str");
printf("temp before release: %s\n", temp);
xpc_release(dict);
printf("temp after release: %s\n", temp);
output:
# temp before release: dummy-string
# temp after release:
I tried to free the variable temp before and after releasing dict.
char *string = "dummy-string";
xpc_object_t dict = xpc_dictionary_create(NULL, NULL, 0);
xpc_dictionary_set_string(dict, "str", string);
const char *temp = xpc_dictionary_get_string(dict, "str");
printf("temp before release: %s\n", temp);
free((void *)temp); // case 1
xpc_release(dict);
// free((void *)temp); // case 2
printf("temp after release: %s\n", temp);
In both cases I got this output:
# temp before release: dummy-string
# app(18502,0x1f02fc840) malloc: Double free of object 0x145004a20
# app(18502,0x1f02fc840) malloc: *** set a breakpoint in malloc_error_break to debug
# SIGABRT: abort
# PC=0x186953720 m=0 sigcode=0
# signal arrived during cgo execution
# ...
# ...
Hello, I am programming a CLI tool to partition USB disks. I am calling diskutil to do the work, but I am hitting issues with permissions, it seems.
Here is a trial run of the same command, running diskutil directly in the terminal vs. running it from my code:
Calling diskutil directly (works as expected)
% /usr/sbin/diskutil partitionDisk /dev/disk2 MBR Free\ Space gap 2048S fat32 f-fix 100353S Free\ Space tail 0
Started partitioning on disk2
Unmounting disk
Creating the partition map
Waiting for partitions to activate
Formatting disk2s1 as MS-DOS (FAT32) with name f-fix
512 bytes per physical sector
/dev/rdisk2s1: 98784 sectors in 98784 FAT32 clusters (512 bytes/cluster)
bps=512 spc=1 res=32 nft=2 mid=0xf8 spt=32 hds=16 hid=2079 drv=0x80 bsec=100360 bspf=772 rdcl=2 infs=1 bkbs=6
Mounting disk
Finished partitioning on disk2
/dev/disk2 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: FDisk_partition_scheme +104.9 MB disk2
1: DOS_FAT_32 F-FIX 51.4 MB disk2s1
Calling diskutil programmatically (error -69877)
% sudo ./f-fix
DEBUG: /usr/sbin/diskutil partitionDisk /dev/disk2 MBR Free Space gap 2048S fat32 f-fix 100353S Free Space tail 0
Started partitioning on disk2
Unmounting disk
Error: -69877: Couldn't open device
(Is a disk in use by a storage system such as AppleRAID, CoreStorage, or APFS?)
Failed to fix drive `/dev/disk2'
Source Code
The relevant code from my program is this:
char *args[16];
int n = 0;
args[n++] = "/usr/sbin/diskutil";
args[n++] = "partitionDisk";
args[n++] = (char *)disk;
args[n++] = (char *)scheme;
(...)
args[n++] = NULL;
char **parent_env = *_NSGetEnviron();
if (posix_spawnp(&pid, args[0], NULL, NULL, args, parent_env) != 0)
return 1;
if (waitpid(pid, &status, 0) < 0)
return 1;
return 0;
Question
Are there any system protections against running it like so? What could I be missing? Is this a Disk Arbitration issue?
Are there any plans to add RBI support (the sending keyword) to the OSAllocatedUnfairLock interface, so it could be used with non-Sendable objects without surrendering to the unchecked API?
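For reference, the unchecked workaround in question looks roughly like this (a sketch; NonSendableThing is a hypothetical type, and init(uncheckedState:)/withLockUnchecked are the unchecked variants as I understand them):
import os

final class NonSendableThing { var counter = 0 }

let lock = OSAllocatedUnfairLock(uncheckedState: NonSendableThing())

lock.withLockUnchecked { thing in
    thing.counter += 1
}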
I'm looking into the newer XPC API available starting with macOS 14. Although it's declared as a low-level API, I can't figure out how to specify a code-signing requirement using XPCListener and XPCSession. How do I connect them with xpc_listener_set_peer_code_signing_requirement and xpc_connection_set_peer_code_signing_requirement, which require an xpc_listener_t and an xpc_connection_t respectively?
Foundation XPC is declared as a high-level API and provides easy ways to specify code-signing requirements on both ends of the XPC connection.
I'm confused by all these XPC APIs and their future:
The newer, really high-level XPCListener and XPCSession API (in a low-level framework???)
The low-level xpc_listener_t and xpc_connection_t C API. Is it being replaced by the newer XPCListener and XPCSession?
How is all this related to the high-level Foundation XPC? Are NSXPCListener and NSXPCConnection going to be deprecated and replaced by XPCListener and XPCSession??
I am trying to create a daemon with XPC for my app by referring to https://github.com/alienator88/HelperToolApp, but I keep getting an XPC remote proxy error: "Couldn't communicate with a helper application." All the identifiers are correct, but the helper code is never reached.
Every time macOS goes to sleep, processes get suspended, which is expected. But during the sleep period, all processes keep coming back and get a small execution window in which they make some network requests, regardless of what power settings I have. It also does not matter whether my app is a daemon or not.
Is there any way I can disable this so that when the system is asleep, processes stay suspended, with no intermittent execution windows? I have tried disabling the Wake for network access setting, but processes still keep getting intermittent execution windows.
Is there any way I can prevent my app from coming back while the system is asleep? I don't want my app to get an execution window, do some work, and then get suspended without knowing when it will get another execution window.