Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Posts under the Audio subtopic. Each entry lists the post, followed by its replies, boosts, views, and latest activity.

SpeechTranscriber supported Devices
I have the new iOS 26 SpeechTranscriber working in my application. The issue I am facing is how to determine whether the device I am running on supports SpeechTranscriber. I was able to write code that tests whether the device supports transcription, but it takes a while to run, so the result is not available when the app launches. What I am looking for is a list of iOS 26 devices it doesn't run on. I think it's safe to assume any new device will support it, so a list of devices that can run iOS 26 but cannot do transcription would let the app decide much faster. I have determined that it doesn't work on an iPhone SE (2nd generation); it works on the iPhone 12, SE (3rd generation), iPhone 14 Pro, and 15 Pro. Since SpeechTranscriber doesn't work in the simulator, I can't test there either. I have checked the docs and they don't list the devices it doesn't work on.
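Until there is an official capability list, one pragmatic pattern is to key a hand-maintained denylist off the hardware model identifier, which is available synchronously at launch. A minimal sketch; the helper names and the denylist contents are illustrative, and the only entry is the model the poster has ruled out (the SE 2nd generation, identifier "iPhone12,8"):

```swift
import Foundation

/// Returns the hardware model identifier, e.g. "iPhone12,8" for the iPhone SE (2nd generation).
func deviceModelIdentifier() -> String {
    var systemInfo = utsname()
    uname(&systemInfo)
    // The machine field is a fixed-size CChar tuple; walk it until the NUL terminator.
    return Mirror(reflecting: systemInfo.machine).children.reduce(into: "") { result, element in
        if let value = element.value as? Int8, value != 0 {
            result.append(Character(UnicodeScalar(UInt8(value))))
        }
    }
}

/// Illustrative denylist: iOS 26-capable models where transcription was found not to work.
/// Maintain this from your own test results (or from a cached run of the slower runtime probe).
let transcriptionDenylist: Set<String> = ["iPhone12,8"] // iPhone SE (2nd generation)

var likelySupportsSpeechTranscriber: Bool {
    !transcriptionDenylist.contains(deviceModelIdentifier())
}
```

The check is instant at launch; the runtime probe can still run in the background later to correct the guess for models not yet in the list.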
Replies: 1 · Boosts: 0 · Views: 365 · Activity: Nov ’25
Is AVAudioPCMFormatFloat32 required for playing a buffer with AVAudioEngine / AVAudioPlayerNode
I have a PCM audio buffer in AVAudioPCMFormatInt16. When I try to play it using AVAudioPlayerNode / AVAudioEngine, an exception is thrown: "[[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr]: returned false, error Error Domain=NSOSStatusErrorDomain Code=-10868" (related thread: https://forums.developer.apple.com/forums/thread/700497?answerId=780530022#780530022). If I convert the buffer to AVAudioPCMFormatFloat32, playback works. My questions are: 1. Does AVAudioEngine / AVAudioPlayerNode require the AVAudioPCMBuffer to be in the Float32 format? 2. Is there a way I can configure it to accept another format instead for my application? 3. If 1 is yes, is this documented anywhere? 4. If 1 is yes, is this required format subject to change at any point? Thanks! I was looking to watch the "AVAudioEngine in Practice" session video from WWDC 2014 but I can't find it anywhere (https://forums.developer.apple.com/forums/thread/747008).
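For the workaround the poster already uses (converting to Float32 before scheduling), a minimal AVAudioConverter sketch, assuming a non-interleaved Int16 source buffer and the same sample rate and channel count on the output:

```swift
import AVFoundation

/// Converts an Int16 PCM buffer to the deinterleaved Float32 format the engine's mixer expects.
/// Minimal sketch; returns nil if the formats or the conversion can't be set up.
func makeFloat32Buffer(from int16Buffer: AVAudioPCMBuffer) -> AVAudioPCMBuffer? {
    guard let floatFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                          sampleRate: int16Buffer.format.sampleRate,
                                          channels: int16Buffer.format.channelCount,
                                          interleaved: false),
          let converter = AVAudioConverter(from: int16Buffer.format, to: floatFormat),
          let output = AVAudioPCMBuffer(pcmFormat: floatFormat,
                                        frameCapacity: int16Buffer.frameLength) else {
        return nil
    }

    var error: NSError?
    var delivered = false
    let status = converter.convert(to: output, error: &error) { _, outStatus in
        // Hand the single source buffer to the converter exactly once.
        if delivered {
            outStatus.pointee = .endOfStream
            return nil
        }
        delivered = true
        outStatus.pointee = .haveData
        return int16Buffer
    }
    return (error == nil && status != .error) ? output : nil
}
```

Converting up front sidesteps the -10868 error (kAudioUnitErr_FormatNotSupported); whether Int16 buffers could ever be scheduled directly is exactly the documentation question the post raises.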
Replies: 1 · Boosts: 0 · Views: 1k · Activity: Oct ’25
WebM audio playback
Is it possible to play WebM audio on iOS, either with AVPlayer, AVAudioEngine, or some other API? Safari has supported this for a few releases now, and I'm wondering if I missed something about how to do this. By default these APIs don't seem to work (nor does ExtAudioFileOpen). Our use case is letting iOS users play back audio recorded in our web app (desktop versions of Chrome and Firefox only support WebM as a destination format for MediaRecorder).
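One quick sanity check before investing in a decoder is to ask AVFoundation directly whether it will open a given file on the current OS release. A minimal sketch using the async asset-loading API; a false result means AVPlayer won't open the container and you would need to transcode (for example server-side) or bundle your own WebM/Opus decoder:

```swift
import AVFoundation

/// Checks whether AVFoundation considers a local file playable on this device.
/// Diagnostic sketch only; treat it as a per-OS capability probe, not a guarantee
/// that Safari's WebM support is exposed to third-party apps.
func isPlayable(fileURL: URL) async -> Bool {
    let asset = AVURLAsset(url: fileURL)
    return (try? await asset.load(.isPlayable)) ?? false
}
```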
Replies: 1 · Boosts: 0 · Views: 455 · Activity: Mar ’25
sysEx struct in CoreMIDI/MIDIMessages.h
The sysEx struct inside the MIDIUniversalMessage struct has a channel member, but the System Exclusive (7-Bit) message doesn't have a channel field. The System Exclusive (7-Bit) message does have a "number of bytes" field, but the sysEx struct doesn't have a nrOfBytes, byteCount, or bytesUsed member. It looks like the channel member of the sysEx struct actually contains the number of used bytes. Is this a mistake in the header, or did I misunderstand something?
Replies: 1 · Boosts: 0 · Views: 528 · Activity: 5d
Is there an error with SpatialAudioCLI?
Hi everyone, I downloaded the source code EditingSpatialAudioWithAnAudioMix.zip from https://developer.apple.com/documentation/Cinematic/editing-spatial-audio-with-an-audio-mix. When I ran the "process" action from the command line, the program crashed. In the source code, I found that the value of componentType is set to kAudioUnitType_FormatConverter:

    // The actual `AudioUnit`.
    public var auAudioMix = AVAudioUnitEffect()

    init() {
        // Generate a component description for the audio unit.
        let componentDescription = AudioComponentDescription(
            componentType: kAudioUnitType_FormatConverter,
            componentSubType: kAudioUnitSubType_AUAudioMix,
            componentManufacturer: kAudioUnitManufacturer_Apple,
            componentFlags: 0,
            componentFlagsMask: 0)
        auAudioMix = AVAudioUnitEffect(audioComponentDescription: componentDescription)
    }

But according to the documentation at https://developer.apple.com/documentation/avfaudio/avaudiouniteffect/init(audiocomponentdescription:), it seems that componentType cannot be set to kAudioUnitType_FormatConverter. Has anyone encountered this problem?
Replies: 1 · Boosts: 0 · Views: 140 · Activity: Nov ’25
Video Audio + Speech To Text
Hello, I am wondering if it is possible to have audio from my AirPods' mic sent to my speech-to-text service and, at the same time, have the built-in mic's input sent to a video recording. I ask because I want my users to be able to say "CAPTURE", at which point I start recording a video (with audio from the built-in mic), and then stop the recording when the user says "STOP".
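A related building block, sketched under the assumption that a single shared AVAudioSession is in play: iOS exposes one active input route per session, so routing the AirPods mic and the built-in mic to two different consumers at once isn't something the session offers. What you can do is force the built-in microphone for the whole session while keeping output on the AirPods:

```swift
import AVFoundation

/// Prefers the built-in microphone while allowing high-quality (A2DP) output
/// to stay on connected AirPods. Sketch only: a single session has one input
/// route, so this does not give you two simultaneous microphones.
func preferBuiltInMic() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [.allowBluetoothA2DP])
    if let builtIn = session.availableInputs?.first(where: { $0.portType == .builtInMic }) {
        try session.setPreferredInput(builtIn)
    }
    try session.setActive(true)
}
```

With this arrangement, both the speech recognizer and the video capture would have to share the same (built-in) microphone input.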
Replies: 1 · Boosts: 0 · Views: 320 · Activity: 3w
FaceTime Screen-Share Audio and Video Experience
FaceTime’s screen-share audio balance is insanely absurd right now. Whenever I share media, the system audio that gets sent through FaceTime is a tiny whisper even at full volume (or even when connected to my speaker or headphones). The moment anyone on the call makes any noise at all, the shared audio ducks so hard it disappears, while the voice (or rustling or air-conditioning noise) spikes to painful levels. It's impossible to watch or listen to anything together. Also, the feature where FaceTime would shrink to a square during screen-sharing has been completely removed. That was a good feature and I'm really confused why it's gone. Now the FaceTime window stays as a long rectangle that covers part of the content I'm trying to share (unless I use a full-screen tile, but then I can't pull up any other windows during the call) and can't be made smaller than about a third of the screen. You can't resize the window or adjust its dimensions, so it ends up blocking the actual media you're trying to watch. Here are some feature requests/fixes that would greatly improve the FaceTime screen-share experience:
- An option to adjust the shared media volume independently of call audio.
- A way to disable or toggle the extreme automatic audio ducking while screen-sharing.
- Reintroduce the minimized "floating square" mode, or allow full manual resizing and repositioning of the FaceTime window during screen-share sessions.
Overall, this setup makes FaceTime screen-sharing basically unusable. The audio balance is so inconsistent that it's easier to switch to Zoom or Google Meet, which both handle shared sound correctly and let you move the call window out of the way. Until these issues are fixed, there's no practical reason to use FaceTime for shared viewing at all.
Replies: 1 · Boosts: 0 · Views: 207 · Activity: Nov ’25
Issue using Siphon Tap on input AudioQueue
Hi all, I've developed an audio DSP application in C++ using AudioToolbox and Core Audio on macOS 14.4.1 with Xcode 15. I use an AudioQueue for input and another for output. This works great. I'm now adding real-time audio analysis, e.g. spectral analysis. I want this to run independently of my audio processing so it cannot interfere with audio playback. Taps on AudioQueues seem to be a good way of doing this. Since the analytics won't modify the audio data, I am using a siphon tap by setting the AudioQueueProcessingTapFlags to kAudioQueueProcessingTap_PreEffects | kAudioQueueProcessingTap_Siphon. This works fine on my output queue. However, on my input queue the tap callback is called once and then an EXC_BAD_ACCESS occurs (screenshot linked below). NB: I believe that a callback should only call AudioQueueProcessingTapGetSourceAudio when not using a siphon, so I don't call it. Relevant code (the function signature was truncated in the original post and is reconstructed here from its usage):

    void make_tap(AudioQueueRef queue_ref, AudioQueueProcessingTapCallback tap_callback) {
        // Makes an audio tap for a queue
        void * tap_data_ptr = NULL;
        AudioQueueProcessingTapFlags tap_flags =
            kAudioQueueProcessingTap_PostEffects | kAudioQueueProcessingTap_Siphon;
        uint32_t max_frames = 0;
        AudioStreamBasicDescription asbd;
        AudioQueueProcessingTapRef tap_ref;
        OSStatus status = AudioQueueProcessingTapNew(queue_ref, tap_callback, tap_data_ptr,
                                                     tap_flags, &max_frames, &asbd, &tap_ref);
        if (status != noErr)
            printf("Error while making Tap\n");
        else
            printf("Successfully made tap\n");
    }

    void tapper(void * tap_data,
                AudioQueueProcessingTapRef tap_ref,
                uint32_t number_of_frames_in,
                AudioTimeStamp * ts_ptr,
                AudioQueueProcessingTapFlags * tap_flags_ptr,
                uint32_t * number_of_frames_out_ptr,
                AudioBufferList * buf_list) {
        // Callback function for audio queue tap
        printf("Tap callback");
    }

Image of the exception stack provided by Xcode (screenshot attachment): https://developer.apple.com/forums/content/attachment/27479e8d-a118-459b-aa2d-7e30528910e3

What have I missed? Appreciate any help you learned folks may be able to provide. Best, Geoff.
Replies: 1 · Boosts: 0 · Views: 98 · Activity: Jun ’25
How to capture audio from the stream that's playing on the speakers?
Good day, ladies and gents. I have an application that reads audio from the microphone. I'd like it to also be able to read from the Mac's audio output stream. (A bonus would be if it could detect when the Mac is playing music.) I'd eventually be able to figure it out reading docs, but if someone can give a hint, I'd be very grateful, and would owe you the libation of your choice. Here's the code used to set up the AudioUnit:

    -(NSString*) configureAU
    {
        AudioComponent component = NULL;
        AudioComponentDescription description;
        OSStatus err = noErr;
        UInt32 param;
        AURenderCallbackStruct callback;

        if( audioUnit ) {
            AudioComponentInstanceDispose( audioUnit );  // was CloseComponent
            audioUnit = NULL;
        }

        // Open the AudioOutputUnit
        description.componentType = kAudioUnitType_Output;
        description.componentSubType = kAudioUnitSubType_HALOutput;
        description.componentManufacturer = kAudioUnitManufacturer_Apple;
        description.componentFlags = 0;
        description.componentFlagsMask = 0;

        if( component = AudioComponentFindNext( NULL, &description ) ) {
            err = AudioComponentInstanceNew( component, &audioUnit );
            if( err != noErr ) {
                audioUnit = NULL;
                return [NSString stringWithFormat: @"Couldn't open AudioUnit component (ID=%d)", err];
            }
        }

        // Configure the AudioOutputUnit:
        // You must enable the Audio Unit (AUHAL) for input and output for the same device.
        // When using AudioUnitSetProperty the 4th parameter refers to an AudioUnitElement.
        // When using an AudioOutputUnit for input the element will be '1' and the output element will be '0'.

        param = 1;  // Enable input on the AUHAL
        err = AudioUnitSetProperty( audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &param, sizeof(UInt32) );
        chkerr("Couldn't set first EnableIO prop (enable inpjt) (ID=%d)");

        param = 0;  // Disable output on the AUHAL
        err = AudioUnitSetProperty( audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 0, &param, sizeof(UInt32) );
        chkerr("Couldn't set second EnableIO property on the audio unit (disable ootpjt) (ID=%d)");

        param = sizeof(AudioDeviceID);  // Select the default input device
        AudioObjectPropertyAddress OutputAddr = { kAudioHardwarePropertyDefaultInputDevice, kAudioObjectPropertyScopeGlobal, kAudioObjectPropertyElementMaster };
        err = AudioObjectGetPropertyData( kAudioObjectSystemObject, &OutputAddr, 0, NULL, &param, &inputDeviceID );
        chkerr("Couldn't get default input device (ID=%d)");

        // Set the current device to the default input unit
        err = AudioUnitSetProperty( audioUnit, kAudioOutputUnitProperty_CurrentDevice, kAudioUnitScope_Global, 0, &inputDeviceID, sizeof(AudioDeviceID) );
        chkerr("Failed to hook up input device to our AudioUnit (ID=%d)");

        callback.inputProc = AudioInputProc;  // Setup render callback, to be called when the AUHAL has input data
        callback.inputProcRefCon = self;
        err = AudioUnitSetProperty( audioUnit, kAudioOutputUnitProperty_SetInputCallback, kAudioUnitScope_Global, 0, &callback, sizeof(AURenderCallbackStruct) );
        chkerr("Could not install render callback on our AudioUnit (ID=%d)");

        param = sizeof(AudioStreamBasicDescription);  // get hardware device format
        err = AudioUnitGetProperty( audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 1, &deviceFormat, &param );
        chkerr("Could not install render callback on our AudioUnit (ID=%d)");

        audioChannels = MAX( deviceFormat.mChannelsPerFrame, 2 );  // Twiddle the format to our liking
        actualOutputFormat.mChannelsPerFrame = audioChannels;
        actualOutputFormat.mSampleRate = deviceFormat.mSampleRate;
        actualOutputFormat.mFormatID = kAudioFormatLinearPCM;
        actualOutputFormat.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved;
        if( actualOutputFormat.mFormatID == kAudioFormatLinearPCM && audioChannels == 1 )
            actualOutputFormat.mFormatFlags &= ~kLinearPCMFormatFlagIsNonInterleaved;
    #if __BIG_ENDIAN__
        actualOutputFormat.mFormatFlags |= kAudioFormatFlagIsBigEndian;
    #endif
        actualOutputFormat.mBitsPerChannel = sizeof(Float32) * 8;
        actualOutputFormat.mBytesPerFrame = actualOutputFormat.mBitsPerChannel / 8;
        actualOutputFormat.mFramesPerPacket = 1;
        actualOutputFormat.mBytesPerPacket = actualOutputFormat.mBytesPerFrame;

        // Set the AudioOutputUnit output data format
        err = AudioUnitSetProperty( audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &actualOutputFormat, sizeof(AudioStreamBasicDescription) );
        chkerr("Could not change the stream format of the output device (ID=%d)");

        param = sizeof(UInt32);  // Get the number of frames in the IO buffer(s)
        err = AudioUnitGetProperty( audioUnit, kAudioDevicePropertyBufferFrameSize, kAudioUnitScope_Global, 0, &audioSamples, &param );
        chkerr("Could not determine audio sample size (ID=%d)");

        err = AudioUnitInitialize( audioUnit );  // Initialize the AU
        chkerr("Could not initialize the AudioUnit (ID=%d)");

        // Allocate our audio buffers
        audioBuffer = [self allocateAudioBufferListWithNumChannels: actualOutputFormat.mChannelsPerFrame
                                                              size: audioSamples * actualOutputFormat.mBytesPerFrame];
        if( audioBuffer == NULL ) {
            [self cleanUp];
            return [NSString stringWithFormat: @"Could not allocate buffers for recording (ID=%d)", err];
        }

        return nil;
    }

(...again, it would be nice to know if audio output is active and thereby choose the clean output stream over the noisy mic, but that would be a different chunk of code, and my main question may just be a quick edit to this chunk.) Thanks for your attention! ==Dave [p.s. if I get more than one useful answer, can I "Accept" more than one, to spread the credit around?] {pps: of course, the code lines up prettier in a monospaced font!}
Replies: 1 · Boosts: 0 · Views: 167 · Activity: Jun ’25
AVSpeechSynthesizer pulls words out of thin air.
Hi, I'm working on a project that uses AVSpeechSynthesizer and AVSpeechUtterance. I discovered by chance that AVSpeechSynthesizer automatically expands some words instead of just speaking what it's given. These are abbreviations for days of the week or months, but not all of them. I don't want any of them automatically expanded; I want only the specified text spoken. The expansion happens regardless of language. I have written a short example program for demonstration purposes:

    import SwiftUI
    import AVFoundation
    import Foundation

    let synthesizer: AVSpeechSynthesizer = AVSpeechSynthesizer()

    struct ContentView: View {
        var body: some View {
            VStack {
                Button { utter("mon") } label: { Text("mon") }
                    .buttonStyle(.borderedProminent)
                Button { utter("tue") } label: { Text("tue") }
                    .buttonStyle(.borderedProminent)
                Button { utter("thu") } label: { Text("thu") }
                    .buttonStyle(.borderedProminent)
                Button { utter("feb") } label: { Text("feb") }
                    .buttonStyle(.borderedProminent)
                Button { utter("feb", lang: "de-DE") } label: { Text("feb DE") }
                    .buttonStyle(.borderedProminent)
                Button { utter("wed") } label: { Text("wed") }
                    .buttonStyle(.borderedProminent)
            }
            .padding()
        }

        private func utter(_ text: String, lang: String = "en-US") {
            let utterance = AVSpeechUtterance(string: text)
            let voice = AVSpeechSynthesisVoice(language: lang)
            utterance.voice = voice
            synthesizer.speak(utterance)
        }
    }

    #Preview {
        ContentView()
    }

Thank you, Christian
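One avenue worth testing (hedged: whether it actually suppresses the abbreviation expansion needs to be verified): AVSpeechUtterance can be built from an attributed string, and the AVSpeechSynthesisIPANotationAttribute lets you attach an explicit pronunciation to a range rather than leaving the token to the synthesizer's text normalization. The helper and the IPA value below are hypothetical examples.

```swift
import AVFoundation

let literalSynthesizer = AVSpeechSynthesizer()

/// Speaks `text` with an explicit IPA pronunciation attached to the whole range,
/// intended to override the synthesizer's own reading of the token.
/// Sketch only; the IPA string passed in is illustrative.
func speakLiterally(_ text: String, ipa: String, language: String = "en-US") {
    let attributed = NSMutableAttributedString(string: text)
    attributed.addAttribute(NSAttributedString.Key(AVSpeechSynthesisIPANotationAttribute),
                            value: ipa,
                            range: NSRange(location: 0, length: attributed.length))
    let utterance = AVSpeechUtterance(attributedString: attributed)
    utterance.voice = AVSpeechSynthesisVoice(language: language)
    literalSynthesizer.speak(utterance)
}

// Example call with a hypothetical IPA value:
// speakLiterally("mon", ipa: "mɑn")
```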
Replies: 1 · Boosts: 0 · Views: 205 · Activity: Nov ’25
Shazamkit with AirPods
Hi guys, I'm using ShazamKit in my iOS app and successfully capturing the currently playing track details when using the device's (iPhone's) built-in mic. When I test with AirPods, though, my app cannot both send the output through the AirPods and capture that same output with the AirPods mic for ShazamKit recognition. I believe this must be possible, because the ShazamKit widget on iOS can do this. Is it restricted in some way for third-party apps? If not, I'd appreciate some guidance on how to achieve this in Swift code. Thanks in advance.
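For reference, the usual third-party pattern is to stream whatever microphone the current route provides into an SHSession. A minimal sketch, assuming a delegate object elsewhere that implements SHSessionDelegate and is retained by the caller (SHSession holds it weakly); whether the AirPods mic can cleanly pick up the AirPods' own output the way the system widget does is exactly the open question, so this only shows the plumbing:

```swift
import AVFoundation
import ShazamKit

final class Recognizer {
    private let engine = AVAudioEngine()
    private let session = SHSession()

    /// Streams the current audio input (built-in mic or AirPods mic,
    /// depending on the active route) into ShazamKit for matching.
    func start(delegate: SHSessionDelegate) throws {
        session.delegate = delegate  // caller must keep the delegate alive

        let audioSession = AVAudioSession.sharedInstance()
        // .allowBluetoothA2DP keeps high-quality output on the AirPods while recording.
        try audioSession.setCategory(.playAndRecord, options: [.allowBluetoothA2DP])
        try audioSession.setActive(true)

        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 4096, format: format) { [weak self] buffer, time in
            self?.session.matchStreamingBuffer(buffer, at: time)
        }
        try engine.start()
    }
}
```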
Replies: 1 · Boosts: 1 · Views: 579 · Activity: Feb ’25
Is Call Translation API available for VOIP?
I might have misunderstood the docs, but is Call Translation going to be available for VoIP applications? For example, in an already-connected VoIP call, would it be possible for Call Translation to be enabled on an iOS 26 device that supports Apple Intelligence? I have tried it personally and it doesn't look like VoIP is supported, but I would love to confirm this. Reference: https://developer.apple.com/documentation/callkit/cxsettranslatingcallaction/
Replies: 1 · Boosts: 0 · Views: 74 · Activity: Jun ’25
AVB Support for the AVnu MILAN Conventions
The AVnu Milan convention on top of AVB has a growing following. Many big companies (Cisco, Meyer Sound, d&b audiotechnik, L-Acoustics, PreSonus, DiGiCo, etc.) implement the AVnu Milan standards. Is there a plan on Apple's side to also implement AVnu Milan on top of the AVB protocol? The advantage for Apple's audio stack would be strong integration into the professional audio market and a more stable integration on top of the AVB protocol. ATDECC works, but not very stably.
Replies: 1 · Boosts: 0 · Views: 167 · Activity: Oct ’25
GarageBand displaying error 100001 when loading up some AU plugins
I recently got some plugins from Universal Audio and have licensed them properly through both UA and the iLok manager. Whenever I try to load the plugins (specifically from UA) in GarageBand, it first shows a warning that “NSCreateObjectFileImageFromMemory-p47UEwps” cannot be opened because the developer cannot be verified. After clicking either 'Show in Finder' or 'OK', it opens the plugin without its GUI and shows that it is not licensed (even though it is). It also displays error code 100001. I have only tried some basic troubleshooting, like restarting the DAW and my computer and reinstalling/relicensing the software. I don't know if the macOS version has anything to do with it, but for some reason I just can't get it to work.
Replies: 1 · Boosts: 0 · Views: 395 · Activity: Jan ’25
iOS 26.0 (23A5276f) - Bluetooth Call Audio Broken (AirPods + Car)
iOS 26.0 (23A5276f) – Bluetooth Call Audio Issue. I'm experiencing a Bluetooth audio issue on iOS 26.0 (build 23A5276f). I cannot make or receive phone calls properly using Bluetooth devices — this affects both my car's Bluetooth system and my AirPods Pro (2nd generation). Notably:
- Regular phone calls have no audio (either I can't hear the other person, or they can't hear me).
- WhatsApp and other VoIP apps work fine with the same Bluetooth devices.
- Media playback (music, video, etc.) works without issues over Bluetooth.
It seems this bug is limited to the native Phone app or the system audio routing for regular cellular calls. Please advise if this is a known issue or if a fix is expected in upcoming beta releases.
Replies: 1 · Boosts: 1 · Views: 296 · Activity: Jun ’25
No mic capture on iOS 18.5
Hello! We stumbled upon a problem with our karaoke app where a user on an iPhone 16e with iOS 18.5 has a problem with mic capture: other users cannot hear him. Mic capture works fine on 17.5 and 16.8. Maybe there is something else we need when configuring AVAudioSession for iOS 18.5? Currently it's set up like this:

    override func viewDidLoad() {
        super.viewDidLoad()
        UIApplication.shared.isIdleTimerDisabled = true
        mRoomId = appDelegate.getRoomId()

        let audioSession = AVAudioSession.sharedInstance()
        try! audioSession.setCategory(.playAndRecord, mode: .voiceChat, options: [.defaultToSpeaker])
        try! audioSession.setPreferredSampleRate(48000)
        try! audioSession.setActive(true, options: [])
    }
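A diagnostic sketch (a hypothetical helper, not from the poster's project): replacing the force-tries with do/catch and logging the record permission, input route, and sample rate usually narrows down whether session activation, the route, or microphone permission is what fails on the affected device.

```swift
import AVFoundation

/// Hypothetical diagnostic helper: configures the session the same way the post does,
/// but surfaces errors and logs the resulting state instead of crashing on `try!`.
func configureAndLogAudioSession() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord, mode: .voiceChat, options: [.defaultToSpeaker])
        try session.setPreferredSampleRate(48000)
        try session.setActive(true)
    } catch {
        print("Audio session setup failed: \(error)")
    }
    // AVAudioApplication is available from iOS 17.
    print("Record permission:", AVAudioApplication.shared.recordPermission)
    print("Input available:", session.isInputAvailable)
    print("Current inputs:", session.currentRoute.inputs.map(\.portName))
    print("Sample rate:", session.sampleRate)
}
```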
Replies: 1 · Boosts: 0 · Views: 254 · Activity: Nov ’25
Logic Pro loads AUv3 when compiled in Swift 5 but not Swift 6
I have spent a long time refactoring lots of older Swift code to compile without error in Swift 6. The app is a v3 audio unit host and audio unit. Having installed Sonoma and Xcode 16, I compile the code using Swift 6 and it builds and runs without any warnings or errors. My host will load my AU no problem. Logic Pro is still the ONLY audio unit host that will load native Mac v3 audio units, so I like to test my code using Logic. In Sonoma with Xcode 16, my AU passes the most stringent auval tests, both in Terminal and in Logic Pro. If I compile the AU source in Swift 5, Logic will see the AU, load it, and run it without problems. But when I compile the AU in Swift 6, Logic sees the AU, will scan it and verify that it passes the tests, but will not load the AU. In Xcode I see a log message that a "helper application failed to run", but the debugger never connects to the AU and I don't think Logic even gets as far as instantiating the AU. So... what is causing this? I'm stumped. Developing AUv3 is a brain-aching maze of undocumented hurdles, and I'm hoping someone might have found a solution for this one. Meanwhile I guess my only option is to continue using the Swift 5 compiler. (Appending a little note just to mention that all the DSP code is written in C/C++; Swift is used mainly for the user interface and also does some offline thready work.)
Replies: 1 · Boosts: 0 · Views: 525 · Activity: Jan ’25