Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Posts under Audio subtopic

Post

Replies

Boosts

Views

Activity

FaceTime Screen-Share Audio and Video Experience
FaceTime’s screen-share audio balance is badly broken right now. Whenever I share media, the system audio sent through FaceTime is a tiny whisper even at full volume (and even when connected to my speaker or headphones). The moment anyone on the call makes any noise at all, the shared audio ducks so hard it disappears, while the voice (or rustling, or air-conditioning noise) spikes to painful levels. It’s impossible to watch or listen to anything together.

Also, the feature where FaceTime would shrink to a small square during screen sharing has been completely removed. That was a good feature and I'm really confused why it's gone. Now the FaceTime window stays as a long rectangle that covers part of the content I'm trying to share (unless I use a full-screen tile, but then I can't pull up any other windows during the call) and can't be made smaller than about a third of the screen. You can't resize the window or adjust its dimensions, so it ends up blocking the media you're trying to watch.

Here are some feature requests/fixes that would greatly improve the FaceTime screen-share experience:
- An option to adjust the shared media volume independently of call audio.
- A way to disable or tone down the extreme automatic audio ducking while screen sharing.
- Reintroduce the minimized “floating square” mode, or allow full manual resizing and repositioning of the FaceTime window during screen-share sessions.

Overall, this setup makes FaceTime screen sharing basically unusable. The audio balance is so inconsistent that it’s easier to switch to Zoom or Google Meet, which both handle shared sound correctly and let you move the call window out of the way. Until these issues are fixed, there’s no practical reason to use FaceTime for shared viewing at all.
1
0
207
Nov ’25
Core Audio Tap: per-device attenuation vs. number of stereo output pairs — how to get unattenuated “raw” app streams?
Hi all,

I’ve implemented the new Core Audio Tap API (AudioHardwareCreateProcessTap with CATapDescription) and I’m seeing consistent level attenuation that scales with the number of stereo output pairs exposed by the target device.

What I observe:
- Device with 4 stereo pairs (8 outs) → tap shows −12.04 dB relative to source.
- True 2-ch devices (built-in speakers, AirPods) → ~0 dB attenuation.

The attenuation appears regardless of whether I:
- create a global (default-output) tap via initStereoGlobalTapButExcludeProcesses:, or
- create a per-process/per-device tap via initWithProcesses:andDeviceUID:withStream:.

Additionally, the routing choice inside the sending app matters:
- App output to “System/Default Output” → I often see no attenuation.
- App output directly to a multi-out interface (e.g., RME Fireface) → I see the pair-count-scaled attenuation.

I can query Core Audio for the number of output channels/pairs and gain-compensate (+20·log10(N_pairs) dB), and that matches my measurements in many cases (see the sketch after this entry). However, this compensation is not universally correct, because it seems to depend on where each process routes its audio (Default Output vs. direct device), even when those processes are included in the same tap aggregate.

Questions:
- Is there a supported way to obtain the raw, unattenuated streams for all processes through the Tap API—i.e., to bypass this automatic headroom/attenuation behavior entirely?
- If this attenuation is expected by design: is there a documented rule for when it applies (global vs. device taps, per-process taps, stream selection, etc.)?
- Is there a property/flag to disable it, or a reliable, official method to compute the exact compensation (beyond counting stereo pairs)?
- Any guidance on ensuring consistent levels when multiple processes route differently (Default Output vs. direct device) but are captured by the same tap?

Environment:
- API: AudioHardwareCreateProcessTap + CATapDescription
- Devices: built-in output (2-ch), RME Fireface (8+ outs / 4+ stereo pairs)
- Behavior reproducible with both global and per-process/per-device tap descriptions.
- Attenuation example: 4 stereo pairs → −12.04 dB observed.

Happy to provide a minimal sample, measurements, and device logs. Thanks! — David
0
0
92
Nov ’25
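Not an answer to whether the attenuation can be disabled, but here is a minimal sketch of the compensation approach described in that post: read the device's output stream configuration, count the output channels, and derive a pair-count gain of 20·log10(N_pairs) dB. The deviceID is assumed to come from wherever the tap's target device is resolved, and the pair-count rule is empirical, not documented.

import CoreAudio
import Foundation

// Count the output channels of a device from its output stream configuration.
func outputChannelCount(of deviceID: AudioObjectID) -> Int {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyStreamConfiguration,
        mScope: kAudioObjectPropertyScopeOutput,
        mElement: kAudioObjectPropertyElementMain)

    var dataSize: UInt32 = 0
    guard AudioObjectGetPropertyDataSize(deviceID, &address, 0, nil, &dataSize) == noErr else { return 0 }

    let rawList = UnsafeMutableRawPointer.allocate(byteCount: Int(dataSize),
                                                   alignment: MemoryLayout<AudioBufferList>.alignment)
    defer { rawList.deallocate() }

    guard AudioObjectGetPropertyData(deviceID, &address, 0, nil, &dataSize, rawList) == noErr else { return 0 }

    let bufferList = UnsafeMutableAudioBufferListPointer(rawList.assumingMemoryBound(to: AudioBufferList.self))
    return bufferList.reduce(0) { $0 + Int($1.mNumberChannels) }
}

// Empirical compensation matching the observation above: 4 stereo pairs -> +12.04 dB.
func compensationGainDB(forOutputChannels channels: Int) -> Double {
    let stereoPairs = max(1, channels / 2)
    return 20.0 * log10(Double(stereoPairs))
}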
AVAudioSessionCategoryPlayback is not allowed while CallKit call is active
We need help resolving a critical audio design conflict in our Push-to-Talk (PTT) application. Our current volume-amplification strategy—applying a gain factor to the PCM samples in conjunction with setting the AVAudioSession category to Playback—works successfully when PTT is used on its own. However, once the same PTT call is reported through the CallKit framework, the amplification effect is lost. CallKit appears to force a different, non-amplifying audio session category or configuration, which noticeably reduces the user's perceived call volume. We need guidance on how to keep AVAudioSessionCategoryPlayback, or an equivalent high-volume configuration, while operating under CallKit's control. (A sketch of the usual CallKit session pattern follows this entry.)
3
0
245
Nov ’25
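For reference, a hedged sketch (not a confirmed fix) of the session pattern usually paired with CallKit: configure the category and mode up front, then start audio only in the provider's didActivate callback and keep applying the PCM gain in the render path. Whether .playAndRecord plus sample gain can match the loudness of .playback is exactly the open question in the post above; the class and method names other than the CallKit API itself are illustrative.

import AVFoundation
import CallKit

final class CallAudioController: NSObject, CXProviderDelegate {

    // Configure the session before reporting the call; CallKit activates and
    // deactivates it around the call lifecycle.
    func configureSessionForPTT() {
        let session = AVAudioSession.sharedInstance()
        do {
            // CallKit-managed calls generally expect .playAndRecord rather than .playback.
            try session.setCategory(.playAndRecord, mode: .voiceChat)
        } catch {
            print("Session configuration failed: \(error)")
        }
    }

    func providerDidReset(_ provider: CXProvider) { }

    // Start audio I/O only once CallKit hands the (already active) session over.
    func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
        // Start the engine / audio units here and apply the gain factor to the
        // PCM samples in the render path, as in the standalone PTT case.
    }

    func provider(_ provider: CXProvider, didDeactivate audioSession: AVAudioSession) {
        // Stop audio I/O here.
    }
}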
Hosting x86 Audio Units on Silicon Mac
My app has problems opening an x86 Audio Unit v2 on an Apple Silicon Mac (Rosetta is installed). There seems to be an XPC connection issue with the AUHostingService that I don't know how to fix. I've seen other host apps open the same plug-ins without problems, so there is probably something wrong or incompatible in my code.

What I've noticed:
- The issue occurs whether or not the app is sandboxed.
- The issue no longer occurs when the app itself runs under Rosetta.
- No error is reported by Core Audio during allocation and initialization of the audio unit. The first errors appear when the unit calls AudioUnitRender from the render callback.
- With most x86 plug-ins, the error on the first call is kAudioUnitErr_RenderTimeout, and on any subsequent call kAudioComponentErr_InstanceInvalidated. On the UI side, when the Cocoa view is loaded it appears briefly, then disappears immediately, leaving its superview empty.
- With another x86 plug-in, the Cocoa view loads normally, but Core Audio still emits kAudioUnitErr_NoConnection from AudioUnitRender, whether the view has been loaded or not, and the plug-in produces no sound.

I also find these messages in the console (printed in that order):
CLIENT ERROR: RemoteAUv2ViewController does not override - and thus cannot react to catastrophic errors beyond logging them
AUAudioUnit_XPC.mm:641 Crashed AU possible component description: aumu/Helm/Tyte

My app uses the AUv2 API, and I suspect that working with the AUv3 API would spare me these problems. However, given how my audio system is built (audio units are wrapped in C++ classes and most connections between units are managed on the fly from the render callback), converting would be a lot of work, and I'm not even sure everything I do with the AUv2 API is possible with the AUv3 API. I could possibly find an intermediate solution, but in the immediate future I'm looking for the simplest and fastest possible fix.

If I cannot find better, I see two fallback options:
- In this part of the doc: “Beginning with macOS 11, the system loads audio units into a separate process that depends on the architecture or host preference”, does “host preference” mean it is possible to disable the out-of-process behavior, for example via the app's entitlements or Info.plist?
- Otherwise, as a last resort, I could disable the use of x86 Audio Units entirely when my app runs under arm64, at least to make things cleaner. But the Audio Component API doesn't expose the plug-in architecture, so how could I find it? (A hypothetical bundle-scanning sketch follows this entry.)

Any tip or idea about this issue would be much appreciated. Thanks in advance!
2
0
285
Nov ’25
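On the last fallback: I'm not aware of an Audio Component property that reports the plug-in architecture, but a purely hypothetical workaround is to inspect the component bundles themselves and read their executable architectures via Foundation. The folder paths and the idea of matching bundles back to components by name are assumptions in this sketch.

import Foundation

// Hypothetical helper: does a component bundle contain an arm64 slice?
// Bundles that lack one are x86-only and would need the out-of-process Rosetta host.
func bundleSupportsARM64(at url: URL) -> Bool {
    guard let bundle = Bundle(url: url),
          let archs = bundle.executableArchitectures else { return false }
    return archs.contains(NSNumber(value: NSBundleExecutableArchitectureARM64))
}

// Scan the usual AUv2 install locations (assumed paths) for x86-only components.
func x86OnlyComponentBundles() -> [URL] {
    let fm = FileManager.default
    let folders = [
        URL(fileURLWithPath: "/Library/Audio/Plug-Ins/Components"),
        fm.homeDirectoryForCurrentUser.appendingPathComponent("Library/Audio/Plug-Ins/Components")
    ]
    var result: [URL] = []
    for folder in folders {
        let items = (try? fm.contentsOfDirectory(at: folder, includingPropertiesForKeys: nil)) ?? []
        for item in items where item.pathExtension == "component" {
            if !bundleSupportsARM64(at: item) { result.append(item) }
        }
    }
    return result
}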
AVSpeechSynthesizer pulls words out of thin air.
Hi,

I'm working on a project that uses AVSpeechSynthesizer and AVSpeechUtterance. I discovered by chance that AVSpeechSynthesizer automatically expands some words instead of just speaking the given text. These are abbreviations for days of the week or months, but not all of them. I don't want any of them expanded; only the specified text should be spoken. The expansion happens across languages. I have written a short example program for demonstration purposes. (A hedged pronunciation-override sketch follows this entry.)

import SwiftUI
import AVFoundation
import Foundation

let synthesizer: AVSpeechSynthesizer = AVSpeechSynthesizer()

struct ContentView: View {
    var body: some View {
        VStack {
            Button { utter("mon") } label: { Text("mon") }
                .buttonStyle(.borderedProminent)
            Button { utter("tue") } label: { Text("tue") }
                .buttonStyle(.borderedProminent)
            Button { utter("thu") } label: { Text("thu") }
                .buttonStyle(.borderedProminent)
            Button { utter("feb") } label: { Text("feb") }
                .buttonStyle(.borderedProminent)
            Button { utter("feb", lang: "de-DE") } label: { Text("feb DE") }
                .buttonStyle(.borderedProminent)
            Button { utter("wed") } label: { Text("wed") }
                .buttonStyle(.borderedProminent)
        }
        .padding()
    }

    private func utter(_ text: String, lang: String = "en-US") {
        let utterance = AVSpeechUtterance(string: text)
        let voice = AVSpeechSynthesisVoice(language: lang)
        utterance.voice = voice
        synthesizer.speak(utterance)
    }
}

#Preview {
    ContentView()
}

Thank you
Christian
1
0
205
Nov ’25
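I don't know of a flag that disables this expansion, but one documented way to pin the pronunciation of a specific token is the IPA notation attribute on an attributed utterance. A hedged sketch; the IPA value passed in would need to be a correct phonetic transcription (the example string in the comment is only a placeholder).

import AVFoundation

let literalSynthesizer = AVSpeechSynthesizer()

// Speak a token literally by attaching an explicit IPA pronunciation to its range,
// which should override the synthesizer's own text normalization for that range.
func speakLiteral(_ text: String, ipa: String, language: String = "en-US") {
    let attributed = NSMutableAttributedString(string: text)
    attributed.addAttribute(
        NSAttributedString.Key(AVSpeechSynthesisIPANotationAttribute),
        value: ipa,
        range: NSRange(location: 0, length: attributed.length))

    let utterance = AVSpeechUtterance(attributedString: attributed)
    utterance.voice = AVSpeechSynthesisVoice(language: language)
    literalSynthesizer.speak(utterance)
}

// Example call with a placeholder IPA string: speakLiteral("mon", ipa: "mɒn")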
No mic capture on iOS 18.5
Hello! We've run into a problem with our karaoke app: a user on an iPhone 16e running iOS 18.5 gets no mic capture, so other users cannot hear him. Mic capture works fine on iOS 17.5 and 16.8. Is there something else we need to configure in AVAudioSession for iOS 18.5? Currently it's set up like this (a hedged permission/error-handling sketch follows this entry):

override func viewDidLoad() {
    super.viewDidLoad()
    UIApplication.shared.isIdleTimerDisabled = true
    mRoomId = appDelegate.getRoomId()

    let audioSession = AVAudioSession.sharedInstance()
    try! audioSession.setCategory(.playAndRecord, mode: .voiceChat, options: [.defaultToSpeaker])
    try! audioSession.setPreferredSampleRate(48000)
    try! audioSession.setActive(true, options: [])
}
1
0
254
Nov ’25
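Nothing in that snippet is obviously iOS 18-specific, so a first step might be to replace the try! calls with real error handling and to confirm that record permission is actually granted on the affected device. A minimal sketch, assuming iOS 17+ for AVAudioApplication; the completion handler is not guaranteed to run on the main queue.

import AVFAudio

func configureSessionForKaraoke() {
    AVAudioApplication.requestRecordPermission { granted in
        guard granted else {
            print("Microphone permission denied")
            return
        }
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setCategory(.playAndRecord,
                                    mode: .voiceChat,
                                    options: [.defaultToSpeaker])
            try session.setPreferredSampleRate(48000)
            try session.setActive(true)
            print("Session active, input available: \(session.isInputAvailable)")
        } catch {
            print("Audio session setup failed: \(error)")
        }
    }
}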
AVAudioEngine installTap stops working after phone call interruption on iPhone 16e
Environment
- Device: iPhone 16e
- iOS version: 18.4.1 - 18.7.1
- Framework: AVFoundation (AVAudioEngine)

Problem summary
On iPhone 16e (iOS 18.4.1-18.7.1), the installTap callback stops being invoked after resuming from a phone call interruption. This issue is specific to phone call interruptions and does not occur on iPhone 14, iPhone SE 3, or earlier devices.

Expected behavior
After a phone call interruption ends and audioEngine.start() is called, the previously installed tap should continue receiving audio buffers.

Actual behavior
After resuming from a phone call interruption:
- The tap callback is no longer invoked
- No audio data is captured
- No errors are thrown
- The engine appears to be running normally
Note: normal pause/resume (without a phone call interruption) works correctly.

Steps to reproduce
1. Start audio recording on iPhone 16e
2. Receive or make a phone call (triggers an AVAudioSession interruption)
3. End the phone call
4. Resume recording with audioEngine.start()
Result: the tap callback is not invoked.

Tested devices
- iPhone 16e (iOS 18.4.1-18.7.1): issue reproduces ✗
- iPhone 14 (iOS 18.x): works correctly ✓
- iPhone SE 3 (iOS 18.x): works correctly ✓

Code

Initial setup (works):

let inputNode = audioEngine.inputNode
inputNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, time in
    self.processAudioBuffer(buffer, at: time)
}
audioEngine.prepare()
try audioEngine.start()

Interruption handling:

NotificationCenter.default.addObserver(
    forName: AVAudioSession.interruptionNotification,
    object: AVAudioSession.sharedInstance(),
    queue: nil
) { notification in
    guard let userInfo = notification.userInfo,
          let typeValue = userInfo[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: typeValue) else { return }

    if type == .began {
        self.audioEngine.pause()
    } else if type == .ended {
        try? self.audioSession.setActive(true)
        try? self.audioEngine.start() // Tap callback doesn't work after this on iPhone 16e
    }
}

Workaround
A full engine restart is required on iPhone 16e:

func resumeAfterInterruption() throws {
    audioEngine.stop()
    inputNode.removeTap(onBus: 0)

    inputNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, time in
        self.processAudioBuffer(buffer, at: time)
    }

    audioEngine.prepare()
    try audioSession.setActive(true)
    try audioEngine.start()
}

This works but adds latency and complexity compared to a simple resume. (A consolidated handler that also checks the .shouldResume option follows this entry.)

Questions
1. Is this expected behavior on iPhone 16e?
2. What is the recommended way to handle phone call interruptions?
3. Why does this only affect iPhone 16e and not iPhone 14 or SE 3?
Any guidance would be appreciated!
0
0
128
Oct ’25
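Not a fix for the underlying iPhone 16e difference, just a consolidated version of the workaround above that also checks the .shouldResume interruption option before restarting. All APIs used here are standard AVFAudio.

import AVFAudio

// Consolidated interruption handler: on .ended, check .shouldResume and perform
// the full tap reinstall that proved necessary on iPhone 16e.
func handleInterruption(_ notification: Notification,
                        engine: AVAudioEngine,
                        session: AVAudioSession,
                        processBuffer: @escaping (AVAudioPCMBuffer, AVAudioTime) -> Void) {
    guard let info = notification.userInfo,
          let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: typeValue) else { return }

    switch type {
    case .began:
        engine.pause()
    case .ended:
        let optionsValue = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
        let options = AVAudioSession.InterruptionOptions(rawValue: optionsValue)
        guard options.contains(.shouldResume) else { return }

        engine.stop()
        let input = engine.inputNode
        input.removeTap(onBus: 0)
        input.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, time in
            processBuffer(buffer, time)
        }
        engine.prepare()
        do {
            try session.setActive(true)
            try engine.start()
        } catch {
            print("Failed to resume after interruption: \(error)")
        }
    @unknown default:
        break
    }
}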
No audio in screen recordings when using AVAudioEngine Voice Processing
Hello, We are developing a real-time speech recognition application and are using AVAudioEngine with voice processing enabled on the input node. However, we have observed that enabling this mode interferes with the built-in iOS screen recording feature: the recorded video does not capture any audio while this mode is active. Since we want users to be able to record their experience within our app, this significantly impacts our functionality. Is there a known workaround or recommended approach to ensure that voice processing and screen recording can function simultaneously? (One untested mitigation is sketched after this entry.) Any guidance would be greatly appreciated. Thank you!
2
1
354
Oct ’25
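One untested mitigation, sketched under the assumption that the engine is already fully configured elsewhere: watch UIScreen.capturedDidChangeNotification and temporarily disable voice processing while the built-in screen recording is active, accepting degraded echo cancellation during that window. Voice processing is toggled with the engine stopped, since it cannot be changed mid-render.

import AVFAudio
import UIKit

final class ScreenRecordingAwareEngine {
    let engine = AVAudioEngine()   // assumed to be wired up elsewhere

    func startObserving() {
        NotificationCenter.default.addObserver(
            forName: UIScreen.capturedDidChangeNotification,
            object: nil, queue: .main) { [weak self] _ in
            self?.updateVoiceProcessing()
        }
        updateVoiceProcessing()
    }

    private func updateVoiceProcessing() {
        let screenIsBeingRecorded = UIScreen.main.isCaptured
        engine.stop()
        do {
            // Disable voice processing while the system screen recording runs,
            // re-enable it when the recording stops.
            try engine.inputNode.setVoiceProcessingEnabled(!screenIsBeingRecorded)
            try engine.start()
        } catch {
            print("Could not toggle voice processing: \(error)")
        }
    }
}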
PushToTalk
Using the PushToTalk framework: I call requestBeginTransmitting(channelUUID:) from a Bluetooth device, and then start recording audio inside the PTChannelManagerDelegate method channelManager:didActivateAudioSession:. Recording completes there. (A minimal sketch of this delegate flow follows this entry.)
11
0
922
Oct ’25
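A minimal Swift sketch of that flow for anyone following along; everything except the PushToTalk and AVFAudio APIs (the recorder class, buffer handling) is illustrative, and the two delegate-style methods shown would live in your PTChannelManagerDelegate conformance.

import PushToTalk
import AVFAudio

final class PTTRecorder: NSObject {
    var channelManager: PTChannelManager?
    let engine = AVAudioEngine()

    func begin(channelUUID: UUID) {
        // Ask the system to begin transmitting; audio work must wait for didActivate.
        channelManager?.requestBeginTransmitting(channelUUID: channelUUID)
    }

    // PTChannelManagerDelegate: the system activates the audio session for us,
    // so recording starts here rather than at the transmit request.
    func channelManager(_ channelManager: PTChannelManager,
                        didActivate audioSession: AVAudioSession) {
        engine.inputNode.installTap(onBus: 0, bufferSize: 2048, format: nil) { buffer, _ in
            // Handle recorded PCM buffers here.
        }
        engine.prepare()
        try? engine.start()
    }

    // PTChannelManagerDelegate: tear recording down when the session is deactivated.
    func channelManager(_ channelManager: PTChannelManager,
                        didDeactivate audioSession: AVAudioSession) {
        engine.stop()
        engine.inputNode.removeTap(onBus: 0)
    }
}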
CoreMIDI driver - flow control
Hi, when a CoreMIDI driver controls physical hardware, it is probably quite common to have to limit the amount of MIDI data received from the system. What comes to mind is simply delaying the return from the MIDIDriverInterface::Send() callback to the calling process. While the application trying to send MIDI does stall until the callback returns, that seems only to be a side effect of a generally stalled CoreMIDI server. Between callbacks the application can send as much MIDI data to CoreMIDI as it wants; its buffering seems to be endless. However, the hardware might not be able to play out all that data. There seems to be no way to signal an overflow/full-buffer condition back to the application or to CoreMIDI. How is this supposed to work? Thanks, any hints or pointers are highly appreciated! Hagen.
0
0
153
Oct ’25
Process to request the restricted entitlement behind “DJ with Apple Music” (tempo control / time-stretch on Apple Music streams)?
Hi, I’m an iOS developer building an app whose use case needs advanced playback on Apple Music subscription streams, specifically:
• Real-time tempo change (BPM) during playback — i.e., time-stretch with key-lock, not just crossfade.
• Beat-matched transitions between tracks.
From what I can tell, this capability seems to exist only for approved partners and isn’t available through public MusicKit.
Questions: What’s the official request path to be evaluated for that restricted partner entitlement (application form, questionnaire, NDA, or internal team/BD contact)? If the entitlement identifier is internal, how can I get my account routed to the right Apple Music team?
For reference, publicly announced partners include Algoriddim djay, Serato DJ Pro, rekordbox (AlphaTheta), and Engine DJ—all of which appear to implement mixing features that imply advanced playback (tempo/beat-matching) on Apple Music content.
I’d prefer not to share product details publicly for the moment and can provide specifics privately if needed. Thanks in advance!
0
1
237
Oct ’25
watchOS 26: Audio Playback Interrupted by Fitness Notifications Across Multiple Apps
After upgrading to watchOS 26, users report that when playing music on Apple Watch, receiving a fitness reminder automatically pauses the music, and they need to manually tap the play button to resume playback. This happens with multiple music and podcast apps, and the issue did not exist before the upgrade. We would like to know whether this is an Apple bug or whether any special configuration is needed on the development side.
1
0
166
Oct ’25
Displaying and working with Favorites in iOS app
I'm new to iOS development and have been trying to make heads or tails of the documentation. I know there is a difference between the data fields returned for songs from the user library and from the catalog, but whenever I search the Apple site I can't find a list of each. For example, I'm trying to get the releaseDate of a song in my library, and it seems I'll have to cross-query the catalog entry using song.catalogID or song.isrc, but when I try to use them I can't find a cross-reference between the two. I'm totally turned around. (A sketch of an ISRC-based catalog lookup follows this entry.)

I'm also trying to determine whether a song in my library has been favorited. isFavorited (or something similar) doesn't seem to exist. I'm using the code below and trying to find a way to display a solid star if the song has been favorited and an empty one if it hasn't. It seems like a basic request, but I can't find anything on how to do it. I've searched the docs, googled, and experimented. Does Apple want us to query the user's Favorited Songs playlist or something? How do I know which playlist that is? I know isFavorited isn't a thing; I'm just using it here so you can see what my intention is:

HStack(spacing: 10) {
    Image(systemName: song.isFavorited ? "star.fill" : "star")
        .foregroundColor(song.isFavorited ? .yellow : .gray)
    Image(systemName: "magnifyingglass")
}
1
0
177
Oct ’25
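On the releaseDate part, here is a sketch of one way to cross-reference a library song with its catalog counterpart via ISRC and read the catalog releaseDate. It assumes the library Song actually has an isrc value populated; the favorites question is left open.

import MusicKit

// Given a song fetched from the user's library, try to find its catalog
// counterpart by ISRC and return the catalog releaseDate if available.
func catalogReleaseDate(for librarySong: Song) async throws -> Date? {
    guard let isrc = librarySong.isrc else { return librarySong.releaseDate }

    var request = MusicCatalogResourceRequest<Song>(matching: \.isrc, equalTo: isrc)
    request.limit = 1
    let response = try await request.response()
    return response.items.first?.releaseDate ?? librarySong.releaseDate
}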
AVB Support for the AVnu MILAN Conventions
The AVnu Milan convention on top of AVB is seeing growing adoption. Many big companies (Cisco, Meyer Sound, d&b Audio, L-Acoustics, PreSonus, DiGiCo, etc.) implement the AVnu Milan standards. Is there a plan on Apple's side to also implement AVnu Milan on top of the AVB protocol? The advantage for Apple audio would be strong integration into the professional audio market and a more stable integration on top of the AVB protocol. The ATDECC support works, but it is not that stable.
1
0
166
Oct ’25
AVAudioEngine : Split 1x4 channel bus into 4x1 channel busses?
I'm using a 4-channel USB audio interface with 4 microphones and want to process them through 4 independent effect chains. However, the output from AVAudioInputNode is a single 4-channel bus. How can I split this into 4 mono busses?

The following code splits the input into 4 copies and routes them through the effects, but each bus contains all four channels. How can I remap the channels to remove the unwanted channels from each bus? I tried using channelMap on the mixer node but that had no effect. I'm currently using this code primarily on iOS but it should be portable between iOS and macOS. It would be possible to do this through a Matrix Mixer node, but that seems completely overkill for such a basic operation. I'm already using a Matrix Mixer to combine the inputs, and it's not well supported in AVAudioEngine. (A buffer-level splitting sketch follows this entry.)

AVAudioInputNode *inputNode = [engine inputNode];
[inputNode setVoiceProcessingEnabled:NO error:nil];

NSMutableArray *micDestinations = [NSMutableArray arrayWithCapacity:trackCount];
for (i = 0; i < trackCount; i++) {
    fixMicFormat[i] = [AVAudioMixerNode new];
    [engine attachNode:fixMicFormat[i]];

    // And create reverb/compressor and eq the same way...

    [engine connect:reverb[i] to:matrixMixerNode fromBus:0 toBus:i format:nil];
    [engine connect:eq[i] to:reverb[i] fromBus:0 toBus:0 format:nil];
    [engine connect:compressor[i] to:eq[i] fromBus:0 toBus:0 format:nil];
    [engine connect:fixMicFormat[i] to:compressor[i] fromBus:0 toBus:0 format:nil];

    [micDestinations addObject:[[AVAudioConnectionPoint alloc] initWithNode:fixMicFormat[i] bus:0]];
}

AVAudioFormat *inputFormat = [inputNode outputFormatForBus:1];
[engine connect:inputNode toConnectionPoints:micDestinations fromBus:1 format:inputFormat];
2
0
235
Oct ’25
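Not a graph-level answer, but a fallback sketch: split each multichannel buffer into mono buffers by copying the channel data, then feed those to separate per-channel chains (for example via AVAudioPlayerNode or manual processing). This assumes the deinterleaved float format that installTap with a nil format normally delivers.

import AVFAudio

// Split one N-channel PCM buffer into N mono buffers by copying each channel.
func splitIntoMonoBuffers(_ buffer: AVAudioPCMBuffer) -> [AVAudioPCMBuffer] {
    guard let source = buffer.floatChannelData else { return [] }
    let channelCount = Int(buffer.format.channelCount)
    let frames = buffer.frameLength

    var monoBuffers: [AVAudioPCMBuffer] = []
    for channel in 0..<channelCount {
        guard let monoFormat = AVAudioFormat(standardFormatWithSampleRate: buffer.format.sampleRate,
                                             channels: 1),
              let mono = AVAudioPCMBuffer(pcmFormat: monoFormat, frameCapacity: frames),
              let dest = mono.floatChannelData else { continue }
        mono.frameLength = frames
        dest[0].update(from: source[channel], count: Int(frames))
        monoBuffers.append(mono)
    }
    return monoBuffers
}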
macOS sample for AVAudioEngine recording with playthrough
Hi, I'm still stuck getting a basic record-with-playthrough pipeline to work. Does anyone have a sample of setting up an AVAudioEngine pipeline for recording with playthrough? Playthrough works with AVPlayerNode as input, but not with any microphone input. (A bare-bones sketch follows this entry.)

The docs mention the "enabled state" of the engine's outputNode without explaining the concept, i.e. how to enable an output:

"When the engine renders to and from an audio device, the AVAudioSession category and the availability of hardware determines whether an app performs output. Check the output node's output format (specifically, the hardware format) for a nonzero sample rate and channel count to see if output is in an enabled state."

Well, in my setup the output is NOT enabled, and any attempt to switch (e.g. audioEngine.outputNode.auAudioUnit.setDeviceID(deviceID)), attach a dedicated device, ... results in exceptions/errors.
0
0
261
Oct ’25
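For comparison, the bare-bones pattern I would expect on macOS (no AVAudioSession there): connect inputNode to mainMixerNode for playthrough, tap the input for recording, and start the engine, leaving device selection to the default input and output. A sketch, not a verified answer to the enabled-state question; it also assumes microphone access has already been granted (NSMicrophoneUsageDescription plus the usual capture authorization).

import AVFoundation

let playthroughEngine = AVAudioEngine()

func startPlaythrough() throws {
    let input = playthroughEngine.inputNode
    let inputFormat = input.outputFormat(forBus: 0)

    // Route the microphone straight to the output (playthrough)...
    playthroughEngine.connect(input, to: playthroughEngine.mainMixerNode, format: inputFormat)

    // ...and capture it at the same time via a tap.
    input.installTap(onBus: 0, bufferSize: 4096, format: inputFormat) { buffer, _ in
        // Handle or write the recorded buffer (e.g. to an AVAudioFile).
    }

    playthroughEngine.prepare()
    try playthroughEngine.start()
}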
iOS 17 camera capture assertions and issues
Hello,
Starting in iOS 17, our application started having issues publishing to our video session. More specifically, the video capture seems to be broken in some, but not all, sessions. What's troubling is that it fails consistently every fourth session. It also fails silently, without reporting any problems to the app; we only notice that no frames are being rendered or sent to the remote devices. Here's what shows up in the console:

<<<< FigCaptureSourceRemote >>>> Fig assert: "! storage->connectionDied" at bail (FigCaptureSourceRemote.m:235) - (err=0)
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:253) - (err=-16453)

Is anyone else seeing this? Any idea what the cause could be? Our sessions work perfectly on iOS 16 and below. (A small diagnostics sketch follows this entry.) Thanks
3
1
1.4k
Oct ’25
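No root cause to offer, but since the FigCaptureSourceRemote asserts suggest the capture service connection is dying, it may help to listen for the session's runtime-error and interruption notifications to see what the system reports at that moment. A small diagnostic sketch; captureSession is assumed to be your configured AVCaptureSession.

import AVFoundation

func observeCaptureSessionProblems(_ captureSession: AVCaptureSession) {
    let center = NotificationCenter.default

    center.addObserver(forName: .AVCaptureSessionRuntimeError,
                       object: captureSession, queue: .main) { note in
        let error = note.userInfo?[AVCaptureSessionErrorKey] as? NSError
        print("Capture runtime error: \(String(describing: error))")
    }

    center.addObserver(forName: .AVCaptureSessionWasInterrupted,
                       object: captureSession, queue: .main) { note in
        if let reasonValue = note.userInfo?[AVCaptureSessionInterruptionReasonKey] as? Int,
           let reason = AVCaptureSession.InterruptionReason(rawValue: reasonValue) {
            print("Capture interrupted: \(reason)")
        }
    }

    center.addObserver(forName: .AVCaptureSessionInterruptionEnded,
                       object: captureSession, queue: .main) { _ in
        print("Capture interruption ended")
    }
}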
AirPods Pro 3 consistently disconnecting from Apple Watch Ultra 3
I have both Apple devices; my AirPods Pro 3 are up to date and my Apple Watch Ultra 3 is on the latest watchOS 26.1 public beta. Each morning when I open my mindfulness app and start a meditation, or listen to Apple Music on my watch with the AirPods Pro 3, playback runs for a few seconds and then disconnects. The Bluetooth settings on my watch still say the AirPods are connected to the watch. I have also turned off the option to connect automatically to my iPhone in the AirPods settings on the iPhone. To fix this I invariably have to turn my Apple Watch Ultra 3 off and on again, after which the connection becomes stable. I'm not sure why I have to do this each morning; it is frustrating, and I'm not sure why the fix doesn't last. Is there something wrong with my AirPods? Has anyone encountered this before?
1
0
503
Oct ’25
How can third-party iOS apps obtain real-time waveform / spectrogram data for Apple Music tracks (similar to djay & other DJ apps)?
Hi everyone,

I’m working on an iOS MusicKit app that overlays a metronome on top of Apple Music playback. To line the clicks up perfectly, I’d like access to low-level audio analysis data—ideally a waveform/spectrogram or beat grid—while the track is playing.

I’ve noticed that several approved DJ apps (e.g. djay, Serato, rekordbox) can already:
• Display detailed scrolling waveforms of Apple Music songs
• Scratch, loop, or time-stretch those tracks in real time
That implies they receive decoded PCM frames, or at least high-resolution analysis data, from Apple Music under a special entitlement.

My questions:
1. Does MusicKit (or any public framework) expose real-time audio buffers, FFT bins, or beat markers for streaming Apple Music content?
2. If not, is there an Apple program or entitlement that developers can apply for—similar to the “DJ with Apple Music” initiative—to gain that deeper access?
3. Where can I find official documentation or a point of contact for this kind of request?

I’ve searched the docs and forums but only see standard MusicKit playback APIs, which don’t appear to expose raw audio for DRM-protected songs. Any guidance, links, or insider tips on the proper application process would be hugely appreciated! Thanks in advance.
2
2
352
Oct ’25