Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Posts under Audio subtopic


Problems recording audio on Tahoe 26.0 (Intel only)
I have some tried-and-tested code that records and plays back audio via AUHAL which breaks on Tahoe on Intel. The same code works fine on Sequoia and also works on Tahoe on Apple Silicon. To start with something simple, the following code to request access to the microphone doesn't work as it should:

bool RequestMicrophoneAccess () {
    __block AVAuthorizationStatus status = [AVCaptureDevice authorizationStatusForMediaType: AVMediaTypeAudio];
    if (status == AVAuthorizationStatusAuthorized)
        return true;
    __block bool done = false;
    [AVCaptureDevice requestAccessForMediaType: AVMediaTypeAudio completionHandler: ^(BOOL granted) {
        status = granted ? AVAuthorizationStatusAuthorized : AVAuthorizationStatusDenied;
        done = true;
    }];
    while (!done)
        CFRunLoopRunInMode (kCFRunLoopDefaultMode, 2.0, true);
    return status == AVAuthorizationStatusAuthorized;
}

On Tahoe on Intel, the code runs to completion but granted is always returned as NO. Tellingly, the popup asking the user to grant microphone access is never displayed, even though the app is not present in the Privacy pane and never appears there. On Apple Silicon, everything works fine. There are some other problems, but I'm hoping they have a common underlying cause and that the Apple engineers can figure out what's wrong from the information in this post. I'd be happy to test any potential fix. Thanks.
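For comparison, a minimal Swift sketch of the same permission request using async/await instead of spinning the run loop; the AVCaptureDevice calls are standard AVFoundation, the wrapper function is illustrative:

import AVFoundation

// Requests microphone access, suspending instead of spinning a run loop.
// AVCaptureDevice.requestAccess(for:) is the async variant of the
// completion-handler API used in the post.
func requestMicrophoneAccess() async -> Bool {
    switch AVCaptureDevice.authorizationStatus(for: .audio) {
    case .authorized:
        return true
    case .notDetermined:
        // Presents the system permission prompt on first call.
        return await AVCaptureDevice.requestAccess(for: .audio)
    default:
        return false
    }
}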
Replies: 2 · Boosts: 0 · Views: 453 · Activity: Oct ’25
Lock screen media controls for MusicKit / ApplicationMusicPlayer
Hi, when using ApplicationMusicPlayer from MusicKit, my app automatically gets the media controls on the lock screen: play/pause, skip buttons, playback position, etc. I would like to customize these. I've tried a bunch of things, e.g. using MPRemoteCommandCenter, but so far I haven't had any success. Does anyone know how I can customize the media controls of ApplicationMusicPlayer? Thank you.
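For reference, a sketch of the standard MPRemoteCommandCenter registration the post mentions trying; note that while ApplicationMusicPlayer is active the system typically owns the Now Playing controls, so these handlers may be ignored, which is exactly the open question here:

import MediaPlayer

// Registers custom handlers with the remote command center. This shows the
// usual registration only; it is not confirmed to override
// ApplicationMusicPlayer's built-in lock-screen controls.
func configureRemoteCommands() {
    let center = MPRemoteCommandCenter.shared()
    _ = center.playCommand.addTarget { _ in
        // Start playback here.
        return .success
    }
    _ = center.pauseCommand.addTarget { _ in
        // Pause playback here.
        return .success
    }
    center.nextTrackCommand.isEnabled = false // e.g. hide skip-forward
}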
Replies: 2 · Boosts: 0 · Views: 533 · Activity: Sep ’25
How to synchronize the clock sources of two audio devices
I created a virtual audio device to capture system audio with a sample rate of 44.1 kHz. After capturing the audio, I forward it to the hardware sound card using AVAudioEngine, also with a sample rate of 44.1 kHz. However, due to the clock sources being unsynchronized, problems occur after a period of playback. How can I retrieve the clock source of the hardware device and set it for the virtual device?
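As a diagnostic step, a small sketch reading a device's clock domain via the HAL property API; this reads the domain rather than setting a clock source, and kAudioDevicePropertyClockDomain is a standard AudioHardware.h selector, where two devices report the same nonzero domain only when they share a hardware clock:

import CoreAudio

// Reads the clock domain of an audio device. Comparing the domains of the
// virtual device and the hardware device shows whether the HAL considers
// them clock-synchronized.
func clockDomain(of device: AudioObjectID) -> UInt32? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyClockDomain,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var domain: UInt32 = 0
    var size = UInt32(MemoryLayout<UInt32>.size)
    let status = AudioObjectGetPropertyData(device, &address, 0, nil, &size, &domain)
    return status == noErr ? domain : nil
}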
Replies: 2 · Boosts: 0 · Views: 318 · Activity: May ’25
CheckError.swift:CheckError(_:):211:kAudioUnitErr_InvalidParameter (CheckError.swift:CheckError(_:):211)
I'm getting this error when I launch my application on the iPhone 14 Pro via Xcode. Everything builds OK. I'm using the AudioKit plugin and Soundpipe AudioKit. The error starts as soon as I start the app and repeats continuously. I have background processing turned on, as I'd like the sounds to play through headphones when the phone is locked. I can't find anything online about this error, and none of my catch blocks are printing anything in the logs either, so I don't know whether this is just something that pops up repeatedly or whether something is fundamentally wrong.

private func setupAudioSession() {
    do {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playback, mode: .default, options: [.mixWithOthers])
        try session.setActive(true, options: .notifyOthersOnDeactivation)
    } catch {
        errorMessage = "Failed to set up audio session: \(error.localizedDescription)"
        print(errorMessage ?? "")
    }
}

// MARK: - Background Task Handling
private func setupBackgroundTaskHandling() {
    // Handle app entering background
    notificationObservers.append(
        NotificationCenter.default.addObserver(
            forName: UIApplication.didEnterBackgroundNotification,
            object: nil,
            queue: .main,
            using: { [weak self] _ in
                // Safely unwrap self
                guard let self = self else { return }
                self.handleBackgroundTransition()
            }
        )
    )
}

I'm not sure if this is the code causing the issue. Any help would be gratefully appreciated. This is the first app I'm working on.
Replies: 2 · Boosts: 2 · Views: 194 · Activity: Apr ’25
dlsym cannot find symbol g_dwILResult when debugging an audio plugin
I am trying to debug the AAX version of my plugin (a MIDI effect) in Pro Tools, but I get the following error in the Mac console when attempting to load it: dlsym cannot find symbol g_dwILResult in CFBundle etc. I used Xcode 16.4 to build the plugin. Has anybody come across the same or a similar message? Best, Achillefs Axart Labs
Replies: 2 · Boosts: 0 · Views: 568 · Activity: Sep ’25
AudioQueue Output fails playing audio almost immediately?
On macOS Sequoia, I'm having the hardest time getting this basic audio output to work correctly. I'm compiling in Xcode using C99, and when I run this, I get audio for a split second and then nothing, indefinitely. Any ideas what could be going wrong? Here's a minimal code example to demonstrate:

#include <AudioToolbox/AudioToolbox.h>
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

#define RENDER_BUFFER_COUNT 2
#define RENDER_FRAMES_PER_BUFFER 128
// mono linear PCM audio data at 48kHz
#define RENDER_SAMPLE_RATE 48000
#define RENDER_CHANNEL_COUNT 1
#define RENDER_BUFFER_BYTE_COUNT (RENDER_FRAMES_PER_BUFFER * RENDER_CHANNEL_COUNT * sizeof(float))

void RenderAudioSaw(float* outBuffer, uint32_t frameCount, uint32_t channelCount)
{
    static bool isInverted = false;
    float scalar = isInverted ? -1.f : 1.f;
    for (uint32_t frame = 0; frame < frameCount; ++frame) {
        for (uint32_t channel = 0; channel < channelCount; ++channel) {
            // series of ramps, alternating up and down.
            outBuffer[frame * channelCount + channel] = 0.1f * scalar * ((float)frame / frameCount);
        }
    }
    isInverted = !isInverted;
}

AudioStreamBasicDescription coreAudioDesc = { 0 };
AudioQueueRef coreAudioQueue = NULL;
AudioQueueBufferRef coreAudioBuffers[RENDER_BUFFER_COUNT] = { NULL };

void coreAudioCallback(void* unused, AudioQueueRef queue, AudioQueueBufferRef buffer)
{
    // 0's here indicate no fancy packet magic
    AudioQueueEnqueueBuffer(queue, buffer, 0, 0);
}

int main(void)
{
    const UInt32 BytesPerSample = sizeof(float);
    coreAudioDesc.mSampleRate = RENDER_SAMPLE_RATE;
    coreAudioDesc.mFormatID = kAudioFormatLinearPCM;
    coreAudioDesc.mFormatFlags = kLinearPCMFormatFlagIsFloat | kLinearPCMFormatFlagIsPacked;
    coreAudioDesc.mBytesPerPacket = RENDER_CHANNEL_COUNT * BytesPerSample;
    coreAudioDesc.mFramesPerPacket = 1;
    coreAudioDesc.mBytesPerFrame = RENDER_CHANNEL_COUNT * BytesPerSample;
    coreAudioDesc.mChannelsPerFrame = RENDER_CHANNEL_COUNT;
    coreAudioDesc.mBitsPerChannel = BytesPerSample * 8;

    coreAudioQueue = NULL;
    OSStatus result;
    // most of the 0 and NULL params here are for compressed sound formats etc.
    result = AudioQueueNewOutput(&coreAudioDesc, &coreAudioCallback, NULL, 0, 0, 0, &coreAudioQueue);
    if (result != noErr) {
        assert(false && "AudioQueueNewOutput failed!");
        abort();
    }

    for (int i = 0; i < RENDER_BUFFER_COUNT; ++i) {
        uint32_t bufferSize = coreAudioDesc.mBytesPerFrame * RENDER_FRAMES_PER_BUFFER;
        result = AudioQueueAllocateBuffer(coreAudioQueue, bufferSize, &(coreAudioBuffers[i]));
        if (result != noErr) {
            assert(false && "AudioQueueAllocateBuffer failed!");
            abort();
        }
    }

    for (int i = 0; i < RENDER_BUFFER_COUNT; ++i) {
        RenderAudioSaw(coreAudioBuffers[i]->mAudioData, RENDER_FRAMES_PER_BUFFER, RENDER_CHANNEL_COUNT);
        coreAudioBuffers[i]->mAudioDataByteSize = coreAudioBuffers[i]->mAudioDataBytesCapacity;
        AudioQueueEnqueueBuffer(coreAudioQueue, coreAudioBuffers[i], 0, 0);
    }

    AudioQueueStart(coreAudioQueue, NULL);
    sleep(10); // some time to hear the audio
    AudioQueueStop(coreAudioQueue, true);
    AudioQueueDispose(coreAudioQueue, true);
    return 0;
}
Replies: 2 · Boosts: 0 · Views: 591 · Activity: Sep ’25
Convert CoreAudio AudioObjectID to IOUSB LocationID
Is there a recommended way on macOS 26 Tahoe to take a CoreAudio AudioObjectID and use it to look up the underlying USB LocationID? I previously used the AudioObjectID to query the corresponding DeviceUID with kAudioDevicePropertyDeviceUID. Then I queried for the IOService matching kIOAudioEngineClassName with property kIOAudioEngineGlobalUniqueIDKey matching the DeviceUID, and I loaded kUSBDevicePropertyLocationID from the result. This fails on macOS 26, because the IO Registry for the device has an entry for usbaudiod rather than AppleUSBAudioEngine, and usbaudiod does not include a kIOAudioEngineGlobalUniqueIDKey property (or any other property to map it to a CoreAudio DeviceUID). My use case here is a piece of audio recording software that allows configuring a set of supported audio devices via USB HID prior to recording. I present the user with a list of CoreAudio devices to use, but without a way to look up the underlying USB LocationID, I cannot guarantee that the configured device matches the selected device (e.g. if the user plugged in two identical microphones).
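For context, a minimal sketch of the first step described in the post, reading a device's UID from the HAL with the standard CoreAudio property API; the bridge from this UID into the usbaudiod registry entry is the part that remains unsolved on macOS 26:

import CoreAudio

// Reads kAudioDevicePropertyDeviceUID for a device. This is the CoreAudio
// half of the lookup; matching the UID to an IORegistry entry is what
// broke on macOS 26.
func deviceUID(of device: AudioObjectID) -> String? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyDeviceUID,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var uid: CFString? = nil
    var size = UInt32(MemoryLayout<CFString?>.size)
    let status = withUnsafeMutablePointer(to: &uid) { ptr in
        AudioObjectGetPropertyData(device, &address, 0, nil, &size, ptr)
    }
    return status == noErr ? uid as String? : nil
}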
Replies: 2 · Boosts: 0 · Views: 615 · Activity: Sep ’25
[iOS 26 bug] AVInputPickerInteraction selection immediately reverts on iOS 26
Hello everyone, I'm implementing the new AVInputPickerInteraction API on iOS 26 to allow users to select their microphone from a custom settings menu before recording. The implementation seems correct, but I'm encountering a strange issue where the input selection immediately reverts to the previous device.

The Situation: The picker is presented correctly via a manual call to .present(). I can see all available inputs (e.g., "iPhone Microphone" and "AirPods"). The current input is "iPhone Microphone". I tap on "AirPods". The UI updates to show "AirPods" as selected for a fraction of a second, then immediately jumps back to "iPhone Microphone". The same thing happens in reverse. It seems like the system is automatically reverting the audio route change requested by the picker.

My Implementation: My setup follows the standard pattern discussed in the WWDC sessions.

Setup Code: This setup is performed once before the user can trigger the picker.

@available(iOS 26.0, *)
var inputPickerInteraction: AVInputPickerInteraction?

// Note: The AVAudioSession is configured to .playAndRecord
// and set to active elsewhere in the code before this setup is called.
if #available(iOS 26.0, *) {
    // Setup the picker
    let picker = AVInputPickerInteraction()
    self.inputPickerInteraction = picker
    self.view.addInteraction(picker) // Added to establish context
}

Presentation Code: When a user selects "Change Input" from my custom settings menu, I call .present() on the main thread.

// In a delegate method from a custom menu
if #available(iOS 26.0, *) {
    DispatchQueue.main.async {
        self.inputPickerInteraction?.present(animated: true)
    }
}

What I've already checked: The AVAudioSession is active and its category is .playAndRecord. The inputPickerInteraction object is not nil. The .present() method is being called on the main thread. The picker is added to a view using view.addInteraction() in the setup phase. I've reviewed my code to ensure there is no other logic that could be manually resetting the AVAudioSession's preferred input.

Has anyone else experienced this behavior? I suspect this might be a bug in the new API, but I want to make sure I'm not missing a crucial step in managing the AVAudioSession state. Any insights or potential workarounds would be greatly appreciated. Thank you.
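One way to narrow down what is flipping the route back is to log route-change reasons while reproducing the revert; a small diagnostic sketch using the standard AVAudioSession notification:

import AVFAudio

// Logs every route change and its reason. If the revert shows up as
// .override or .categoryChange, some other code path (or the system)
// is resetting the session right after the picker's change.
let routeObserver = NotificationCenter.default.addObserver(
    forName: AVAudioSession.routeChangeNotification,
    object: nil, queue: .main) { note in
    guard let raw = note.userInfo?[AVAudioSessionRouteChangeReasonKey] as? UInt,
          let reason = AVAudioSession.RouteChangeReason(rawValue: raw) else { return }
    let input = AVAudioSession.sharedInstance().currentRoute.inputs.first
    print("route change reason:", reason.rawValue, "input:", input?.portName ?? "none")
}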
Replies: 2 · Boosts: 0 · Views: 250 · Activity: Sep ’25
AirPlay v1 is broken in iOS 18.4?
After upgrading to iOS 18.4, I'm no longer able to establish an AirPlay v1 connection to an audio system. The symptom is that the AirPlay route picker just spins when trying to connect to an audio system, and it eventually gives up. I tested this on an iPhone 14, connecting to a HomePod, an AirPort Express, an Apple TV and a WiiM Pro. If I try connecting with AirPlay v2, e.g. using Apple Music, the connection succeeds and audio can be played. I'm the developer of an app that plays audio over AirPlay while also recording. My app has to use AirPlay v1 because AVAudioSession doesn't allow the policy .longFormAudio when the category is .playAndRecord. This issue is a real pain, as it means my app is suddenly broken for many thousands of users. Is anyone else seeing this issue? Any suggestions for a workaround?
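The constraint the post describes can be shown directly: per the post, combining the .longFormAudio route-sharing policy with .playAndRecord is rejected, which is what forces the AirPlay v1 path. A minimal sketch of that fallback:

import AVFAudio

// .longFormAudio is the AirPlay 2 long-form route-sharing policy. Per the
// post, requesting it with .playAndRecord throws, so a record-while-playing
// app falls back to the default policy (the AirPlay v1 path).
func configureSession() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord, mode: .default,
                                policy: .longFormAudio, options: [])
    } catch {
        // Expected to land here; retry without the long-form policy.
        try? session.setCategory(.playAndRecord, mode: .default,
                                 policy: .default, options: [])
    }
}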
Replies: 2 · Boosts: 3 · Views: 533 · Activity: Jun ’25
Does the OS have dedicated volume levels for each AVAudioSessionCategory?
We have a VoIP application that can be configured to amplify the PCM samples before feeding them to the player, to achieve volume gain at the receiver. To support this, we do the following: if the user has configured this gain setting within the application, the application amplifies the samples to introduce the gain, and also sets the AVAudioSessionCategory to AVAudioSessionCategoryPlayback, provided the user has chosen the speaker as output. This was working for us, but we see a difference in behaviour with respect to the volume level in System Settings between OS 26.3.1 and OS 26.4. When the user has chosen the earpiece as output, we set the AVAudioSessionCategory to AVAudioSessionCategoryPlayAndRecord; suppose the user has set the volume level to minimum. When the user then changes the output to the speaker, we set the AVAudioSessionCategory to AVAudioSessionCategoryPlayback. The expectation is that the volume level should be the one previously set for AVAudioSessionCategoryPlayback; instead, we see that the volume level stays at the minimum that was set under AVAudioSessionCategoryPlayAndRecord. Could you please explain this inconsistency with respect to volume level?
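A reduced sketch, in Swift, of the switching pattern being described; the .playback and .playAndRecord spellings correspond to the Objective-C constants named in the post, and the open question is whether the system restores a per-category volume on the second call:

import AVFAudio

// Mirrors the post's flow: playAndRecord while routed to the earpiece,
// playback once the user picks the speaker.
func route(toSpeaker: Bool) throws {
    let session = AVAudioSession.sharedInstance()
    if toSpeaker {
        try session.setCategory(.playback)      // AVAudioSessionCategoryPlayback
    } else {
        try session.setCategory(.playAndRecord) // AVAudioSessionCategoryPlayAndRecord
    }
    try session.setActive(true)
}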
Replies: 2 · Boosts: 0 · Views: 457 · Activity: 1d
CoreAudio server plugin gaining write access with SystemConfiguration.framework functions
Hi, our CoreAudio server plugin utilizes the SystemConfiguration.framework to store and restore specific shared, system-wide settings. While our application can authenticate to use SystemConfiguration.framework to gain write access to the shared configuration settings, the CoreAudio server plugin obviously can't have any user interaction and therefore does not authenticate. Is it possible to authenticate the CoreAudio server plugin to gain write permissions? Are there any entitlements or other means that would allow this? Thanks!
Replies: 2 · Boosts: 0 · Views: 187 · Activity: Apr ’25
Hosting x86 Audio Units on Silicon Mac
My app encountered problems when trying to open an x86 Audio Unit v2 on an Apple Silicon Mac (although Rosetta is installed). There seems to be an XPC connection issue with the AUHostingService that I don't know how to fix. I observed other host apps opening the same plugins without problems, so there is probably something wrong or incompatible in my code. I noticed that:

The issue occurs whether or not the app is sandboxed.
The issue no longer occurs when the app itself runs under Rosetta.
There is no error reported by CoreAudio during allocation and initialization of the audio unit.
The first errors appear when the unit calls AudioUnitRender from the rendering callback. With most x86 plugins, the error on the first call is kAudioUnitErr_RenderTimeout, and on any subsequent call kAudioComponentErr_InstanceInvalidated.
On the UI side, when the Cocoa view is loaded, it appears briefly, then disappears immediately, leaving its superview empty.
With another x86 plugin, the Cocoa view is loaded normally, but CoreAudio still emits kAudioUnitErr_NoConnection from AudioUnitRender, whether the view has been loaded or not, and the plugin produces no sound.

I also find these messages in the console (printed in that order):

CLIENT ERROR: RemoteAUv2ViewController does not override - and thus cannot react to catastrophic errors beyond logging them
AUAudioUnit_XPC.mm:641 Crashed AU possible component description: aumu/Helm/Tyte

My app uses the AUv2 API, and I suspect that working with the AUv3 API would spare me these problems. However, considering how my audio system is built (audio units are wrapped into C++ classes and most connections between units are managed on the fly from the rendering callback), it would be a lot of work to convert, and I'm not even sure that everything I do with the AUv2 API would be possible with the AUv3 API. I could possibly find an intermediate solution, but in the immediate future I'm looking for the simplest and fastest possible fix. If I cannot find better, I see two fallback options: In this part of the doc: "Beginning with macOS 11, the system loads audio units into a separate process that depends on the architecture or host preference", does "host preference" mean that it would be possible to disable the "out of process" behavior, for example from the app entitlements or Info.plist? Otherwise, as a last resort, I could completely disable the use of x86 audio units when my app runs under ARM64, at least to make things cleaner. But the Audio Component API doesn't give any info about the plugin architecture; how could I find it? Any tip or idea about this issue will be much appreciated. Thanks in advance!
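On the last question, one hedged approach: if you can locate the component's bundle on disk (the Audio Component API itself does not hand you a bundle URL for every component, so that step is assumed here), CoreFoundation can report the executable's architectures:

import CoreFoundation
import Foundation

// Returns true if the bundle at `url` contains an arm64 slice, i.e. could
// be hosted natively on Apple Silicon. Assumes the component's bundle path
// was found by other means; that lookup is not shown.
func bundleHasARM64Slice(at url: URL) -> Bool {
    guard let bundle = CFBundleCreate(kCFAllocatorDefault, url as CFURL),
          let archs = CFBundleCopyExecutableArchitectures(bundle) as? [Int]
    else { return false }
    return archs.contains(Int(kCFBundleExecutableArchitectureARM64))
}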
Replies: 2 · Boosts: 0 · Views: 712 · Activity: Nov ’25
MusicKit - ApplicationMusicPlayer fails to play certain Songs
[Note: this issue was happening on a main testing device, and after testing the same code on other devices, it is only happening on 1 out of 4 devices.] We are successfully getting a MusicCatalogResourceResponse for every song ID for which we make the MusicCatalogResourceRequest. We are able to display the song title, artist name, and album artwork for each Song in the response. However, when we go to play the songs, some play and several do not. For the songs that don't play, the console shows "Failed to prepareToPlay error=<MPMusicPlayerControllerErrorDomain.6 "Failed to prepare to play" {}>".

let musicPlayer = ApplicationMusicPlayer.shared

func playSong(_ song: Song) {
    musicPlayer.queue = [song]
    Task {
        try await musicPlayer.prepareToPlay()
        try await musicPlayer.play()
    }
}

Is there anything else we can investigate about what may be causing specific song IDs not to play on this specific device? Even if we remove line 6, musicPlayer.prepareToPlay(), we still see the same console error when running playSong with the Songs that don't work. It is always the same song IDs that we can play and always the same song IDs that we cannot get to play, even trying them across different projects with different bundle identifiers. We can tap to play a song that works, and it starts playing immediately; then tap a song that doesn't work, and nothing happens; then go back to a song that works. It's consistent which songs succeed and fail on this device. Perhaps there is an issue specific to this very iPad when it comes to certain songs, but we'd like to be confident that an app relying on MusicKit will be able to play songs that have been successfully loaded with a MusicCatalogResourceResponse. Thanks for any help or suggestions about what we may be able to investigate further on the device, or what we should consider when launching an app that expects anyone with Apple Music to be able to listen to any of the songs loaded by the app. Specific iPad details: iPad Pro (12.9-inch) (6th generation) running iPadOS 26.4 Beta. Two of the song IDs that won't play on this iPad (even though we can access and display their album artwork and all other information): 943204000 and 1441164805.
Replies: 2 · Boosts: 1 · Views: 206 · Activity: Mar ’26
failed to set category, reason: "The operation couldn't be completed. (OSStatus error 4097.)"
When using the [AVAudioSession setCategory:withOptions:error:] API, the call hangs for a long time and eventually returns an error. This issue occurs on iOS 16 and did not appear in earlier versions.

Thread 135:
0 libsystem_kernel.dylib 0x00000002478e3cd4 _mach_msg2_trap :8 (in libsystem_kernel.dylib)
1 libsystem_kernel.dylib 0x00000002478e7214 _mach_msg_overwrite :428 (in libsystem_kernel.dylib)
2 libsystem_kernel.dylib 0x00000002478e705c _mach_msg :24 (in libsystem_kernel.dylib)
3 libdispatch.dylib 0x00000001d63ffe84 __dispatch_mach_send_and_wait_for_reply :548 (in libdispatch.dylib)
4 libdispatch.dylib 0x00000001d6400224 _dispatch_mach_send_with_result_and_wait_for_reply :60 (in libdispatch.dylib)
5 libxpc.dylib 0x00000001b2114e04 _xpc_connection_send_message_with_reply_sync :256 (in libxpc.dylib)
6 Foundation 0x000000019b6249f0 ___NSXPCCONNECTION_IS_WAITING_FOR_A_SYNCHRONOUS_REPLY__ :16 (in Foundation)
7 Foundation 0x000000019c06d1b4 -[NSXPCConnection _sendInvocation:orArguments:count:methodSignature:selector:withProxy:] :2100 (in Foundation)
8 CoreFoundation 0x000000019dfcb1cc ____forwarding___ :1072 (in CoreFoundation)
9 CoreFoundation 0x000000019dfd3200 ___forwarding_prep_0___ :96 (in CoreFoundation)
10 AudioSession 0x00000001c77498b0 __ZN4avas6client11SessionCore10HandlePingEv :192 (in AudioSession)
11 AudioSession 0x00000001c77497b0 ____ZN4avas6client11SessionCore12DispatchPingEv_block_invoke :52 (in AudioSession)
12 libdispatch.dylib 0x00000001d63e4adc __dispatch_call_block_and_release :32 (in libdispatch.dylib)
13 libdispatch.dylib 0x00000001d63fe7ec __dispatch_client_callout :16 (in libdispatch.dylib)
14 libdispatch.dylib 0x00000001d63ed468 __dispatch_lane_serial_drain :740 (in libdispatch.dylib)
15 libdispatch.dylib 0x00000001d63edf78 __dispatch_lane_invoke :440 (in libdispatch.dylib)
16 libdispatch.dylib 0x00000001d63f6f48 __dispatch_root_queue_drain :364 (in libdispatch.dylib)
17 libdispatch.dylib 0x00000001d63f6d08 __dispatch_worker_thread :268 (in libdispatch.dylib)
18 libsystem_pthread.dylib 0x00000001f9ff144c __pthread_start :136 (in libsystem_pthread.dylib)
19 libsystem_pthread.dylib 0x00000001f9fed8cc _thread_start :8 (in libsystem_pthread.dylib)

Thread 132:
0 libsystem_kernel.dylib 0x00000002478e3cd4 _mach_msg2_trap :8 (in libsystem_kernel.dylib)
1 libsystem_kernel.dylib 0x00000002478e7214 _mach_msg_overwrite :428 (in libsystem_kernel.dylib)
2 libsystem_kernel.dylib 0x00000002478e705c _mach_msg :24 (in libsystem_kernel.dylib)
3 libdispatch.dylib 0x00000001d63ffe84 __dispatch_mach_send_and_wait_for_reply :548 (in libdispatch.dylib)
4 libdispatch.dylib 0x00000001d6400224 _dispatch_mach_send_with_result_and_wait_for_reply :60 (in libdispatch.dylib)
5 libxpc.dylib 0x00000001b2114e04 _xpc_connection_send_message_with_reply_sync :256 (in libxpc.dylib)
6 Foundation 0x000000019b6249f0 ___NSXPCCONNECTION_IS_WAITING_FOR_A_SYNCHRONOUS_REPLY__ :16 (in Foundation)
7 Foundation 0x000000019c06d1b4 -[NSXPCConnection _sendInvocation:orArguments:count:methodSignature:selector:withProxy:] :2100 (in Foundation)
8 CoreFoundation 0x000000019dfcb1cc ____forwarding___ :1072 (in CoreFoundation)
9 CoreFoundation 0x000000019dfd3200 ___forwarding_prep_0___ :96 (in CoreFoundation)
10 AudioSession 0x00000001c7754198 __ZNK4avas6client11SessionCore18SetBatchPropertiesEP12NSDictionaryIP8NSStringPU25objcproto14NSSecureCoding11objc_objectEPU15__autoreleasingP7NSArrayIPS2_IS4_P8NSNumberEENS_30AVAudioSessionBatchSetStrategyEbb :548 (in AudioSession)
11 AudioSession 0x00000001c7753e58 __ZNK4avas6client11SessionCore20SetBatchPropertiesMXEP12NSDictionaryIP8NSStringPU25objcproto14NSSecureCoding11objc_objectE :92 (in AudioSession)
12 AudioSession 0x00000001c775179c __ZN4avas6client11SessionCore11setCategoryEP8NSStringS3_32AVAudioSessionRouteSharingPolicym :472 (in AudioSession)
13 AudioSession 0x00000001c7768f88 -[AVAudioSession setCategory:withOptions:error:] :68 (in AudioSession)
14 AlipayWallet 0x000000010140580c -[AVAudioSession(APMHook) apmhook_setCategory:withOptions:error:] APMHookAudioSession.m:35 (in AlipayWallet)
15 AlipayWallet 0x00000001014001a4 -[APMAudioSessionManager resume] APMAudioSessionManager.m:718 (in AlipayWallet)
16 libdispatch.dylib 0x00000001d63e4adc __dispatch_call_block_and_release :32 (in libdispatch.dylib)
17 libdispatch.dylib 0x00000001d63fe7ec __dispatch_client_callout :16 (in libdispatch.dylib)
18 libdispatch.dylib 0x00000001d63ed468 __dispatch_lane_serial_drain :740 (in libdispatch.dylib)
19 libdispatch.dylib 0x00000001d63edf44 __dispatch_lane_invoke :388 (in libdispatch.dylib)
20 libdispatch.dylib 0x00000001d63f83ec __dispatch_root_queue_drain_deferred_wlh :292 (in libdispatch.dylib)
21 libdispatch.dylib 0x00000001d63f7ce4 __dispatch_workloop_worker_thread :692 (in libdispatch.dylib)
22 libsystem_pthread.dylib 0x00000001f9fee3b8 __pthread_wqthread :292 (in libsystem_pthread.dylib)
23 libsystem_pthread.dylib 0x00000001f9fed8c0 _start_wqthread :8 (in libsystem_pthread.dylib)
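As the traces show, the call blocks on a synchronous XPC reply from the audio server. One hedged mitigation for the symptom, not the underlying error 4097, is to keep session (re)configuration off the main thread so a stalled reply cannot freeze the UI; a sketch with a hypothetical helper:

import AVFAudio

// Hypothetical helper: performs session configuration on a serial queue so
// a slow audio-server reply blocks only this queue, not the main thread.
// This does not fix the underlying OSStatus 4097 failure.
let sessionQueue = DispatchQueue(label: "audio.session.config", qos: .userInitiated)

func configurePlaybackCategory(completion: @escaping (Error?) -> Void) {
    sessionQueue.async {
        do {
            try AVAudioSession.sharedInstance().setCategory(.playback)
            completion(nil)
        } catch {
            completion(error) // e.g. the 4097 (XPC) error from the post
        }
    }
}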
Replies: 2 · Boosts: 0 · Views: 778 · Activity: Feb ’26
Dell monitor volume control issue on iMac via USB-C
I have a new Dell 2725QC monitor connected to my iMac (2019, 27-inch) via USB-C through the back port, but volume can currently only be controlled from the monitor's hardware buttons, not in software using the Apple keyboard. What should I do in terms of writing code to support this (Swift or Obj-C)? Is there a third-party solution for Intel iMacs and ARM Macs?
Replies: 2 · Boosts: 0 · Views: 267 · Activity: Jan ’26
AVAudioSession : Audio issues when recording the screen in an app that changes IOBufferDuration on iOS 26.
Among Japanese end users, audio issues during screen recording, primarily in game applications, have become a topic of discussion. We have confirmed that the trigger for this issue is very likely related to changes to IOBufferDuration: when setPreferredIOBufferDuration is used to set IOBufferDuration to a value smaller than the default, audio problems occur in the recorded screen-capture video. Audio playback is performed using AudioUnit (RemoteIO). https://developer.apple.com/documentation/avfaudio/avaudiosession/setpreferrediobufferduration(_:)?language=objc This issue was not observed on iOS 18 and appears to have started occurring after upgrading to iOS 26. We provide an audio middleware solution and had incorporated changes to IOBufferDuration into our product to achieve low-latency audio playback. As a result, developers using our product, as well as their end users, are affected by this issue. We kindly request that this issue be investigated and addressed in a future update.
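For reference, a sketch of the configuration being described; the 5 ms figure is an arbitrary example, and the post reports that values below the default break screen-recording audio on iOS 26:

import AVFAudio

// Requests a ~5 ms IO buffer for low-latency playback. The session may
// grant a different value; read ioBufferDuration afterwards to see what
// was actually applied.
func requestLowLatencyBuffer() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback)
    try session.setPreferredIOBufferDuration(0.005) // smaller than the default
    try session.setActive(true)
    print("granted IO buffer:", session.ioBufferDuration)
}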
Replies: 2 · Boosts: 0 · Views: 471 · Activity: 2w
No audio in screen recordings when using AVAudioEngine Voice Processing
Hello, We are developing a real-time speech recognition application and are utilizing AVAudioEngine with voice processing enabled on the input node. However, we have observed that enabling this mode interferes with the built-in iOS screen recording feature - specifically, the recorded video does not capture any audio when this mode is active. Since we want users to be able to record their experience within our app, this issue significantly impacts our functionality. Is there a known workaround or recommended approach to ensure that both voice processing and screen recording can function simultaneously? Any guidance would be greatly appreciated. Thank you!
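For context, voice processing is enabled with a single call on the engine's input node; a minimal sketch of the configuration the post describes:

import AVFAudio

// Enables voice processing (echo cancellation / noise suppression) on the
// engine's input. Per the post, turning this on appears to silence the
// audio track of iOS screen recordings.
let engine = AVAudioEngine()

func enableVoiceProcessing() throws {
    try engine.inputNode.setVoiceProcessingEnabled(true)
    // ... install a tap or connect the input for speech recognition ...
    try engine.start()
}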
Replies: 2 · Boosts: 1 · Views: 390 · Activity: Oct ’25
How can third-party iOS apps obtain real-time waveform / spectrogram data for Apple Music tracks (similar to djay & other DJ apps)?
Hi everyone, I’m working on an iOS MusicKit app that overlays a metronome on top of Apple Music playback. To line the clicks up perfectly I’d like access to low-level audio analysis data—ideally a waveform / spectrogram or beat grid—while the track is playing. I’ve noticed that several approved DJ apps (e.g. djay, Serato, rekordbox) can already: • Display detailed scrolling waveforms of Apple Music songs • Scratch, loop or time-stretch those tracks in real time That implies they receive decoded PCM frames or at least high-resolution analysis data from Apple Music under a special entitlement. My questions: 1. Does MusicKit (or any public framework) expose real-time audio buffers, FFT bins, or beat markers for streaming Apple Music content? 2. If not, is there an Apple program or entitlement that developers can apply for—similar to the “DJ with Apple Music” initiative—to gain that deeper access? 3. Where can I find official documentation or a point of contact for this kind of request? I’ve searched the docs and forums but only see standard MusicKit playback APIs, which don’t appear to expose raw audio for DRM-protected songs. Any guidance, links or insider tips on the proper application process would be hugely appreciated! Thanks in advance.
Replies: 2 · Boosts: 2 · Views: 468 · Activity: Oct ’25
AirPlay selection not working
I'm trying to implement AirPlay in my app. I can successfully play back sound and trigger the AirPlay selector sheet. If the target device is a Bluetooth-only device, I can connect with no problem and stream the audio to it, but if the device is an AirPlay-specific device like a HomePod or an Apple TV, when I select it I get a spinning icon indicating that it is trying to connect, and eventually it times out and stops without connecting. I don't believe it is an AirPlay audio issue, because if I go to a different app, for example a podcast app, select my HomePods for output, and then switch back to my app, my audio will correctly stream to the HomePod. Not only that: my icon changes color to indicate an AirPlay connection, and it correctly indicates that it is connected via AirPlay, but I cannot then disconnect it using the AirPlay selector. The issue appears to be on the AirPlay selection side, which I have spent several days attempting to troubleshoot, mostly using ChatGPT to suggest code different from what I have to maybe work around the issue. Most of that focused on the audio player section, but it doesn't seem like that is really where the problem is.
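For comparison with whatever currently triggers the selector, the standard way to present the system route sheet is AVRoutePickerView; a minimal UIKit sketch:

import AVKit
import UIKit

// Embeds the system AirPlay route picker. Routing (and disconnecting) is
// then handled entirely by the system sheet, which removes custom
// selection code as a variable when debugging connect timeouts.
func addRoutePicker(to view: UIView) {
    let picker = AVRoutePickerView(frame: CGRect(x: 0, y: 0, width: 44, height: 44))
    picker.prioritizesVideoDevices = false // audio-first picker
    view.addSubview(picker)
}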
Replies: 2 · Boosts: 0 · Views: 257 · Activity: Jun ’25
AVAudioSession.outputVolume does not reflect system volume changes made while app is in background
I have a question regarding the behavior of AVAudioSession.sharedInstance().outputVolume.

Observed behavior: When the app is in the foreground, I read audioSession.outputVolume (for example, 0.1). The app is then moved to the background. While the app is in the background, the user changes the system volume using the hardware buttons (for example, to 0.5). When the app returns to the foreground, audioSession.outputVolume still reports the previous value (0.1). From my testing, outputVolume only seems to update when the system volume is changed while the app is in the foreground; volume changes made while the app is in the background are not reflected when the app returns to the foreground.

According to Apple's documentation for AVAudioSession.outputVolume: "The systemwide output volume set by the user." https://developer.apple.com/documentation/avfaudio/avaudiosession/outputvolume However, based on our testing on iOS 18.6.2 and iOS 18.1, the observed behavior differs from this description.

Questions: The documentation states that outputVolume represents the system-wide volume set by the user. In our testing, the value does not reflect volume changes made while the app is in the background and only updates when the app is in the foreground. Is this the expected behavior of AVAudioSession.outputVolume? Is there any other recommended way in Swift to retrieve the current system volume that reflects user changes made both while the app is in the foreground and while it is in the background? Any clarification on the intended behavior or recommended handling would be greatly appreciated.
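One hedged workaround is to re-activate the session and re-read the value on return to the foreground rather than trusting a value cached while backgrounded; whether re-activation refreshes the property on all iOS 18 builds is exactly what the post is questioning. A sketch:

import AVFAudio
import UIKit

// Re-reads outputVolume each time the app becomes active, after poking the
// session, instead of relying on a value captured before backgrounding.
let volumeObserver = NotificationCenter.default.addObserver(
    forName: UIApplication.didBecomeActiveNotification,
    object: nil, queue: .main) { _ in
    let session = AVAudioSession.sharedInstance()
    try? session.setActive(true)
    print("outputVolume on foreground:", session.outputVolume)
}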
Replies: 2 · Boosts: 1 · Views: 267 · Activity: 2w