Environment: iPad (10th generation), iOS 18.3.2
We're using AVAudioPlayer to play a sound when a button is tapped. In our use case, this button can be tapped very frequently — roughly every 0.1 to 0.2 seconds. Each tap triggers the following function:
var audioPlayer: AVAudioPlayer?

func soundPlay(resource: String, type: String) {
    guard let path = Bundle.main.path(forResource: resource, ofType: type) else {
        return
    }
    do {
        audioPlayer = try AVAudioPlayer(contentsOf: URL(fileURLWithPath: path))
        audioPlayer!.delegate = self
        // audioSession is defined elsewhere in the class,
        // presumably AVAudioSession.sharedInstance()
        try audioSession.setCategory(.playback)
    } catch {
        return
    }
    self.audioPlayer!.play()
}
The issue is that under high-frequency tapping (especially at 0.1–0.15 s intervals), the app occasionally crashes. The crash does not occur every time; it happens randomly, sometimes within 30 seconds, sometimes after a minute, or even after 3 minutes of continuous tapping.
Interestingly, enforcing a delay of 0.2 seconds between button taps seems to prevent the crash entirely. Shorter delays (e.g., 0.15 s or 0.18 s) still result in occasional crashes.
My questions are:
Is this expected behavior from AVAudioPlayer or AVAudioSession?
Could this be a known issue or a limitation in AVFoundation?
Is there any documentation or guidance on handling frequent sound playback safely?
Any insights or recommendations on how to handle rapid, repeated audio playback more reliably would be appreciated.
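For reference, the workaround I'm considering (a minimal sketch, assuming the same sound file is used for every tap) is to configure the session and create the player once, then only rewind and restart it per tap, instead of allocating a new AVAudioPlayer on every press:

import AVFoundation

final class TapSoundPlayer: NSObject, AVAudioPlayerDelegate {
    private var player: AVAudioPlayer?

    // Call once, e.g. at startup: configure the session and preload the sound.
    func prepare(resource: String, type: String) {
        guard let url = Bundle.main.url(forResource: resource, withExtension: type) else { return }
        do {
            try AVAudioSession.sharedInstance().setCategory(.playback)
            try AVAudioSession.sharedInstance().setActive(true)
            player = try AVAudioPlayer(contentsOf: url)
            player?.delegate = self
            player?.prepareToPlay()
        } catch {
            print("Audio setup failed: \(error)")
        }
    }

    // Call on every tap: rewind and replay the already-loaded player.
    func play() {
        player?.currentTime = 0
        player?.play()
    }
}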
I’m currently developing an iOS metronome app using DispatchSourceTimer as the timer. The interval is set very small, around 50 milliseconds, and I’m using CFAbsoluteTimeGetCurrent to calculate the elapsed time to ensure the beat is played within a ±0.003-second margin.
The problem is that once the app goes to the background, the timing becomes unstable—it slows down noticeably, then recovers after 1–2 seconds.
When coming back to the foreground, it suddenly speeds up, and again, it takes 1–2 seconds to return to normal. It feels like the app is randomly “powering off” and then “overclocking.” It’s super frustrating.
I’ve noticed that some metronome apps in the App Store have similar issues, but there’s one called “Professional Metronome” that’s rock solid with no such problems. What kind of magic are they using? Any experts out there who can help? Thanks in advance!
P.S. I’ve already enabled background audio permissions.
The professional metronome that has no issues: https://link.zhihu.com/?target=https%3A//apps.apple.com/cn/app/pro-metronome-%25E4%25B8%2593%25E4%25B8%259A%25E8%258A%2582%25E6%258B%258D%25E5%2599%25A8/id477960671
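I can't say what that app does internally, but the usual trick I've seen described (a hedged sketch, assuming an already-running AVAudioEngine with the click preloaded into an AVAudioPCMBuffer) is to schedule beats ahead of time on the audio render clock with AVAudioPlayerNode rather than firing a CPU timer; audio scheduled this way keeps its timing regardless of foreground/background throttling:

import AVFoundation

// Sketch: schedule clicks on the audio clock instead of a DispatchSourceTimer.
// Assumes `player` is attached to a running engine and connected to its output.
func scheduleClicks(player: AVAudioPlayerNode,
                    clickBuffer: AVAudioPCMBuffer,
                    bpm: Double,
                    beats: Int) {
    let sampleRate = clickBuffer.format.sampleRate
    let samplesPerBeat = AVAudioFramePosition(sampleRate * 60.0 / bpm)
    // lastRenderTime is only usable while the engine is actually rendering.
    guard let now = player.lastRenderTime, now.isSampleTimeValid else { return }
    for beat in 0..<beats {
        let when = AVAudioTime(sampleTime: now.sampleTime + AVAudioFramePosition(beat) * samplesPerBeat,
                               atRate: sampleRate)
        player.scheduleBuffer(clickBuffer, at: when, options: [], completionHandler: nil)
    }
    player.play()
}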
I have a music app I'm developing, and I'm having a weird issue where I can see Now Playing info on every platform except tvOS. As far as I can tell, I have correctly configured the MPNowPlayingInfoCenter:
MPNowPlayingInfoCenter.default().nowPlayingInfo = nowPlayingInfo
MPNowPlayingInfoCenter.default().playbackState = .playing
Are there any extra requirements to get my app's now-playing info showing in Control Center on tvOS? Another strange issue that might be related: I can use the Apple TV remote to pause audio, but not resume playback, so I feel like there's something I'm missing about registering audio playback on tvOS specifically.
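For reference, the piece I suspect matters most here (an assumption on my part: tvOS appears to tie Now Playing visibility and remote control to enabled MPRemoteCommandCenter handlers) looks like this:

import AVFoundation
import MediaPlayer

// Sketch: register enabled remote commands alongside the Now Playing info.
// `player` is whatever drives playback (an AVPlayer here for illustration).
func registerRemoteCommands(for player: AVPlayer) {
    let center = MPRemoteCommandCenter.shared()
    center.playCommand.isEnabled = true
    center.playCommand.addTarget { _ in
        player.play()
        return .success
    }
    center.pauseCommand.isEnabled = true
    center.pauseCommand.addTarget { _ in
        player.pause()
        return .success
    }
}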
I'm seeing crashes in _MPRemoteCommandEventDispatch on iOS 26.x devices in 3 apps. According to Bugsnag logs they are:
NSInternalInconsistencyException: event dispatch <_MPRemoteCommandEventDispatch: <MPRemoteCommandEvent: 0x11c049500 commandID=THV0 command=<MPRemoteCommand: 0x109ad1ea0 type=Play (0) enabled=YES handlers=[0x109b6a310]> sourceID=(null) ([HostedRoutingSessionDataSource] handleControlSendingCommand<2W5E>)> state:201> deallocated without calling continuation
I attached a log from the Xcode organizer matching the Bugsnag crash.
mpr_remote_command_event.crash
When I set a breakpoint on -[_MPRemoteCommandEventDispatch dealloc] I can see it's hit every time I tap the play or pause button on the Lock Screen.
Thread 0 Crashed:
0 libsystem_kernel.dylib 0x00000002370420cc __pthread_kill + 8 (:-1)
1 libsystem_pthread.dylib 0x00000001e975c810 pthread_kill + 268 (pthread.c:1721)
2 libsystem_c.dylib 0x0000000198f8ff64 abort + 124 (abort.c:122)
3 libc++abi.dylib 0x000000018a7cf808 __abort_message + 132 (abort_message.cpp:66)
4 libc++abi.dylib 0x000000018a7be484 demangling_terminate_handler() + 304 (cxa_default_handlers.cpp:76)
5 libobjc.A.dylib 0x000000018a6cff78 _objc_terminate() + 156 (objc-exception.mm:496)
6 xxxxxxxxxxxxxx 0x00000001003a7db8 CPPExceptionTerminate() + 416 (BSG_KSCrashSentry_CPPException.mm:156)
7 libc++abi.dylib 0x000000018a7cebdc std::__terminate(void (*)()) + 16 (cxa_handlers.cpp:59)
8 libc++abi.dylib 0x000000018a7ceb80 std::terminate() + 108 (cxa_handlers.cpp:88)
9 CoreFoundation 0x000000018d7341c4 __CFRunLoopPerCalloutARPEnd + 256 (CFRunLoop.c:769)
10 CoreFoundation 0x000000018d70bb5c __CFRunLoopRun + 1976 (CFRunLoop.c:3179)
11 CoreFoundation 0x000000018d70aa6c _CFRunLoopRunSpecificWithOptions + 532 (CFRunLoop.c:3462)
12 GraphicsServices 0x000000022e31c498 GSEventRunModal + 120 (GSEvent.c:2049)
13 UIKitCore 0x00000001930ceba4 -[UIApplication _run] + 792 (UIApplication.m:3902)
14 UIKitCore 0x0000000193077a78 UIApplicationMain + 336 (UIApplication.m:5577)
15 xxxxxxxxxxxxxx 0x00000001000c0134 main + 308 (main.swift:15)
16 dyld 0x000000018a722e28 start + 7116 (dyldMain.cpp:1477)
Is the crash happening when the app is being terminated?
Thank you!
Hi,
On macOS I used to open MP3 and MP4 files with ExtAudioFile. It stopped working a few years ago.
So I decided to try a different macOS API: AudioFileID from the AudioToolbox framework.
I decided to write a test:
https://gist.github.com/joelkraehemann/7f5b241b52ca38c3a765c138fb647588
It fails right here:
AudioFileOpenWithCallbacks()
It returns OSStatus error 1954115647, which means kAudioFileUnsupportedFileTypeError.
The filename was set to an MP4 file:
~/Music/test.mp4
How can I fix this?
regards, Joël
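For what it's worth, a minimal Swift sketch of the next thing I plan to try (an assumption, not a confirmed fix): passing an explicit file-type hint instead of 0, shown here with the URL-based open for brevity:

import AudioToolbox
import Foundation

// Sketch: open an MP4 file with an explicit file-type hint.
func openMP4(at url: URL) -> AudioFileID? {
    var fileID: AudioFileID?
    let status = AudioFileOpenURL(url as CFURL,
                                  .readPermission,
                                  kAudioFileMPEG4Type, // explicit hint instead of 0
                                  &fileID)
    guard status == noErr else {
        print("AudioFileOpenURL failed: \(status)")
        return nil
    }
    return fileID
}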
I am using an AVAudioPlayer to play a "tick" sound once per second in a SwiftUI app.
When running the app on an iPhone 16 (18.2.1) the tick sounds increase in volume after a few seconds. This does not happen in the simulator nor on an iPhone SE 2020 (18.1.1).
There appears to be no method of going forward or backward in Get Info in the Music application.
Since iOS 18, the system setting “Allow Audio Playback” (enabled by default) allows third-party app audio to continue playing while the user is recording video with the Camera app. This has created a problem for the app I’m developing.
➡️ The problem:
My app plays continuous audio in both foreground and background states. If the user starts recording video with the iOS Camera app, the app's audio, still playing in the background, gets captured in the video, which is obviously unintended behavior.
Yes, the user could stop the app manually before starting the video recording, but that can’t be guaranteed. As a developer, I need a way to stop the app’s audio before the video recording begins.
So far, I haven’t found a reliable way to detect when video recording starts if ‘Allow Audio Playback’ is ON.
➡️ What I’ve tried:
— AVAudioSession.interruptionNotification → doesn’t fire
— devicesChangedEventStream → not triggered
I don't want to request mic permission (the app doesn't use the mic). Also, disabling the app's background audio playback isn't an option, as it is a crucial part of the user experience.
➡️ What I need:
A reliable, supported way to detect when the Camera app begins video recording, without requiring mic access — so I can stop audio and avoid unintentional overlap with the user’s recordings.
Any official guidance, workarounds, or AVFoundation techniques would be greatly appreciated.
Thanks.
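For anyone comparing notes, the observer wiring I've been testing looks like this (a sketch; silenceSecondaryAudioHintNotification is an extra candidate I have not yet verified fires in this scenario):

import AVFoundation

// Sketch: the two notifications checked so far. Whether either fires when the
// Camera app starts recording with 'Allow Audio Playback' ON is the open question.
func installAudioSessionObservers() {
    let session = AVAudioSession.sharedInstance()
    let center = NotificationCenter.default

    center.addObserver(forName: AVAudioSession.interruptionNotification,
                       object: session, queue: .main) { note in
        print("interruption:", note.userInfo ?? [:])
    }
    center.addObserver(forName: AVAudioSession.silenceSecondaryAudioHintNotification,
                       object: session, queue: .main) { note in
        print("secondary audio hint:", note.userInfo ?? [:])
    }
}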
Is it possible to play WebM audio on iOS? Either with AVPlayer, AVAudioEngine, or some other API?
Safari has supported this for a few releases now, and I'm wondering if I missed something about how to do this. By default these APIs don't seem to work (nor does ExtAudioFileOpen).
Our use case is making it possible for iOS users to play back audio recorded in our web app (desktop versions of Chrome & Firefox only support WebM as a destination format for MediaRecorder).
Using the official SwiftTranscriptionSampleApp from WWDC 2025, speech transcription takes 14+ seconds from audio input to first result, making it unusable for real-time applications.
Environment
iOS: 26.0 Beta
Xcode: Beta 5
Device: iPhone 16 Pro
Sample App: Official Apple SwiftTranscriptionSampleApp from WWDC 2025
Configuration Tested
Locale: en-US (properly allocated with AssetInventory.allocate(locale:)) and es-ES
Setup: All optimizations applied (preheating, high priority, model retention)
I started testing in my own app, intending to replace the SFSpeech API and add speech detection, but after long fights with the documentation (this part is quite terrible, TBH) I tested the example (https://developer.apple.com/documentation/speech/bringing-advanced-speech-to-text-capabilities-to-your-app) and saw the same results.
I added some logs to check the specific time:
🎙️ [20:30:41.532] ✅ Analyzer started successfully - ready to receive audio!
🎙️ [20:30:41.532] Listening for transcription results...
🎙️ [20:30:56.342] 🚀 FIRST TRANSCRIPTION RESULT after 14.810s: 'Hello' (isFinal: false)
Questions
Is this expected performance for the iOS 26 beta, given that the old SFSpeech API is far faster?
Are there additional optimization steps for SpeechTranscriber?
Should we expect significant performance improvements in later betas?
I have an app that displays artwork via MPMediaItem.artwork, requesting an image with a specific size. How do I get a media item's MPMediaItemAnimatedArtwork, and how do I get the preview image and video to display to the user?
I'm encountering numerous crashes involving the com.apple.coreaudio.AQClient thread in our application. The crash details are as follows:
#10 com.apple.coreaudio.AQClient
SIGSEGV
SEGV_ACCERR
0 libobjc.A.dylib _objc_msgSend + 44
1 AudioToolbox ClientMessageHandler::PropertyChanged(unsigned int) + 872
2 AudioToolbox ClientAudioQueue::FetchAndDeliverPendingCallbacks(unsigned int) + 924
3 AudioToolbox __XCallbackNotificationsAvailable + 212
4 libAudioToolboxUtility.dylib _mshMIGPerform + 260
5 CoreFoundation ___CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__ + 56
6 CoreFoundation ___CFRunLoopDoSource1 + 596
7 CoreFoundation ___CFRunLoopRun + 2392
8 CoreFoundation _CFRunLoopRunSpecific + 572
9 AudioToolbox CADeprecated::GenericRunLoopThread::Entry(void*) + 156
10 libAudioToolboxUtility.dylib CADeprecated::CAPThread::Entry(CADeprecated::CAPThread*) + 88
11 libsystem_pthread.dylib __pthread_start + 116
All these crashes occur on system versions below iOS/iPadOS 17, primarily when the device's available RAM is low. What steps can I take to resolve this issue? Any insights would be greatly appreciated!
Hi guys,
I am having an issue live-streaming audio from a Bluetooth headset and playing it live on the iPhone speaker.
I am able to redirect audio back to the headset, but this is not what I want.
The issue happens when I try to override the output: the iPhone switches to the speaker, but it also switches the microphone.
This is an example of the code:
import AVFoundation

class AudioRecorder {
    let player: AVAudioPlayerNode
    let engine: AVAudioEngine
    let audioSession: AVAudioSession
    let audioSessionOutput: AVAudioSession

    init() {
        self.player = AVAudioPlayerNode()
        self.engine = AVAudioEngine()
        self.audioSession = AVAudioSession.sharedInstance()
        self.audioSessionOutput = AVAudioSession()

        do {
            try self.audioSession.setCategory(AVAudioSession.Category.playAndRecord, options: [.defaultToSpeaker])
            try self.audioSessionOutput.setCategory(AVAudioSession.Category.playAndRecord, options: [.allowBluetooth]) // enables Bluetooth HFP profile
            try self.audioSession.setMode(AVAudioSession.Mode.default)
            try self.audioSession.setActive(true)
            // try self.audioSession.overrideOutputAudioPort(.speaker) // doesn't work
        } catch {
            print(error)
        }

        let input = self.engine.inputNode
        self.engine.attach(self.player)

        let bus = 0
        let inputFormat = input.inputFormat(forBus: bus)
        self.engine.connect(self.player, to: engine.mainMixerNode, format: inputFormat)

        input.installTap(onBus: bus, bufferSize: 512, format: inputFormat) { (buffer, time) -> Void in
            self.player.scheduleBuffer(buffer)
            print(buffer)
        }
    }

    public func start() {
        try! self.engine.start()
        self.player.play()
    }

    public func stop() {
        self.player.stop()
        self.engine.stop()
    }
}
I am not sure if this is a bug or not.
Can somebody point me in the right direction?
Is there a way to design custom audio routing?
I would also appreciate some good documentation besides the AVFoundation docs.
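Not an authoritative answer, but one direction to experiment with (a sketch assuming you want the Bluetooth HFP mic as input and the built-in speaker as output; with HFP the input and output are often tied together at the protocol level, so this may not hold for every headset): use a single shared session and select the input explicitly:

import AVFoundation

// Sketch: one shared session, Bluetooth input chosen explicitly,
// output overridden to the built-in speaker.
func routeBluetoothMicToSpeaker() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .default,
                            options: [.allowBluetooth, .defaultToSpeaker])
    try session.setActive(true)

    // Prefer the Bluetooth HFP input if one is available.
    if let bluetoothInput = session.availableInputs?.first(where: { $0.portType == .bluetoothHFP }) {
        try session.setPreferredInput(bluetoothInput)
    }
    try session.overrideOutputAudioPort(.speaker)
}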
Please update the Android MusicKit SDK: version 1.1.2 fails to compile. The error message: •SDKUriHandlerActivity>. Apps targeting Android 12 and higher are required to specify an explicit value for android:exported when the corres
Hi all,
I've developed an audio DSP application in C++ using AudioToolbox and CoreAudio on macOS 14.4.1 with Xcode 15.
I use an AudioQueue for input and another for output. This works great.
I'm now adding real-time audio analysis, e.g. spectral analysis. I want this to run independently of my audio processing so it cannot interfere with audio playback. Taps on AudioQueues seem to be a good way of doing this...
Since the analytics won't modify the audio data, I am using a Siphon Tap by setting the AudioQueueProcessingTapFlags to
kAudioQueueProcessingTap_PreEffects | kAudioQueueProcessingTap_Siphon;
This works fine on my output queue. However, on my input queue the tap callback is called once and then an EXC_BAD_ACCESS occurs (screenshot below).
NB: I believe that a callback should only call AudioQueueProcessingTapGetSourceAudio when not using a Siphon, so I don't call it.
Relevant code:
void make_tap(AudioQueueRef queue_ref,  // NB: the first line of this snippet was cut off; signature reconstructed
              AudioQueueProcessingTapCallback tap_callback) {
    // Makes an audio tap for a queue
    void *tap_data_ptr = NULL;
    AudioQueueProcessingTapFlags tap_flags =
        kAudioQueueProcessingTap_PostEffects
        | kAudioQueueProcessingTap_Siphon;
    uint32_t max_frames = 0;
    AudioStreamBasicDescription asbd;
    AudioQueueProcessingTapRef tap_ref;
    OSStatus status = AudioQueueProcessingTapNew(queue_ref,
                                                 tap_callback,
                                                 tap_data_ptr,
                                                 tap_flags,
                                                 &max_frames,
                                                 &asbd,
                                                 &tap_ref);
    if (status != noErr) printf("Error while making Tap\n");
    else printf("Successfully made tap\n");
}

void tapper(void *tap_data,
            AudioQueueProcessingTapRef tap_ref,
            uint32_t number_of_frames_in,
            AudioTimeStamp *ts_ptr,
            AudioQueueProcessingTapFlags *tap_flags_ptr,
            uint32_t *number_of_frames_out_ptr,
            AudioBufferList *buf_list) {
    // Callback function for audio queue tap
    printf("Tap callback");
}
Image of exception stack provided by Xcode:

What have I missed?
Appreciate any help you learned folks may be able to provide.
Best,
Geoff.
On macOS Sequoia, I'm having the hardest time getting this basic audio output to work correctly. I'm compiling in Xcode using C99, and when I run this, I get audio for a split second and then nothing, indefinitely.
Any ideas what could be going wrong?
Here's a minimum code example to demonstrate:
#include <AudioToolbox/AudioToolbox.h>
#include <assert.h>   // assert
#include <stdbool.h>  // bool (C99)
#include <stdint.h>
#include <unistd.h>   // sleep

#define RENDER_BUFFER_COUNT 2
#define RENDER_FRAMES_PER_BUFFER 128
// mono linear PCM audio data at 48kHz
#define RENDER_SAMPLE_RATE 48000
#define RENDER_CHANNEL_COUNT 1
#define RENDER_BUFFER_BYTE_COUNT (RENDER_FRAMES_PER_BUFFER * RENDER_CHANNEL_COUNT * sizeof(float))

void RenderAudioSaw(float* outBuffer, uint32_t frameCount, uint32_t channelCount)
{
    static bool isInverted = false;
    float scalar = isInverted ? -1.f : 1.f;
    for (uint32_t frame = 0; frame < frameCount; ++frame)
    {
        for (uint32_t channel = 0; channel < channelCount; ++channel)
        {
            // series of ramps, alternating up and down.
            outBuffer[frame * channelCount + channel] = 0.1f * scalar * ((float)frame / frameCount);
        }
    }
    isInverted = !isInverted;
}

AudioStreamBasicDescription coreAudioDesc = { 0 };
AudioQueueRef coreAudioQueue = NULL;
AudioQueueBufferRef coreAudioBuffers[RENDER_BUFFER_COUNT] = { NULL };

void coreAudioCallback(void* unused, AudioQueueRef queue, AudioQueueBufferRef buffer)
{
    // NB: this re-enqueues the buffer with its previous contents; a typical
    // output callback would re-render into buffer->mAudioData here first.
    // 0's here indicate no fancy packet magic
    AudioQueueEnqueueBuffer(queue, buffer, 0, 0);
}

int main(void)
{
    const UInt32 BytesPerSample = sizeof(float);
    coreAudioDesc.mSampleRate = RENDER_SAMPLE_RATE;
    coreAudioDesc.mFormatID = kAudioFormatLinearPCM;
    coreAudioDesc.mFormatFlags = kLinearPCMFormatFlagIsFloat | kLinearPCMFormatFlagIsPacked;
    coreAudioDesc.mBytesPerPacket = RENDER_CHANNEL_COUNT * BytesPerSample;
    coreAudioDesc.mFramesPerPacket = 1;
    coreAudioDesc.mBytesPerFrame = RENDER_CHANNEL_COUNT * BytesPerSample;
    coreAudioDesc.mChannelsPerFrame = RENDER_CHANNEL_COUNT;
    coreAudioDesc.mBitsPerChannel = BytesPerSample * 8;

    coreAudioQueue = NULL;
    OSStatus result;
    // most of the 0 and NULL params here are for compressed sound formats etc.
    result = AudioQueueNewOutput(&coreAudioDesc, &coreAudioCallback, NULL, 0, 0, 0, &coreAudioQueue);
    if (result != noErr)
    {
        assert(false == "AudioQueueNewOutput failed!");
        abort();
    }

    for (int i = 0; i < RENDER_BUFFER_COUNT; ++i)
    {
        uint32_t bufferSize = coreAudioDesc.mBytesPerFrame * RENDER_FRAMES_PER_BUFFER;
        result = AudioQueueAllocateBuffer(coreAudioQueue, bufferSize, &(coreAudioBuffers[i]));
        if (result != noErr)
        {
            assert(false == "AudioQueueAllocateBuffer failed!");
            abort();
        }
    }

    for (int i = 0; i < RENDER_BUFFER_COUNT; ++i)
    {
        RenderAudioSaw(coreAudioBuffers[i]->mAudioData, RENDER_FRAMES_PER_BUFFER, RENDER_CHANNEL_COUNT);
        coreAudioBuffers[i]->mAudioDataByteSize = coreAudioBuffers[i]->mAudioDataBytesCapacity;
        AudioQueueEnqueueBuffer(coreAudioQueue, coreAudioBuffers[i], 0, 0);
    }

    AudioQueueStart(coreAudioQueue, NULL);
    sleep(10); // some time to hear the audio
    AudioQueueStop(coreAudioQueue, true);
    AudioQueueDispose(coreAudioQueue, true);
    return 0;
}
I have an app under development (demo here: https://youtu.be/VbAfUk_eYl0?si=s6EDBx-4G6P_QbZO) which is sort of an audio player for AirDropped files: something useful to musicians who dump work in progress to their phone, make notes, revise, and update.
I've been testing my handling of audio session interruption notifications, but there seems to be a lot of inconsistency in how, when, and why iOS delivers them, and I'm wondering if there is some rhyme or reason to it that I'm just not detecting.
For example, I am playing a song in my app. Switch to Apple Music and start playing a song there. My app gets an interruption began notification - this is consistent.
Switch back to my app, and about half the time I will get an interruption-ended notification (often coupled with a blast of the tail of whatever audio buffer was partially played when the interruption started, even though the engine was stopped, and followed by a call to my AVAudioPlayerNodeCompletionCallback; is there some way to avoid this?). The other half of the time I don't get an interruption-ended notification; my app can (as expected) end the interruption by activating the AVAudioSession and playing something.
I have not been able to determine any pattern to this behavior, other than that if my app started playing using AVAudioPlayerNode.scheduleSegment rather than scheduleFile I think the notification will be consistently delivered on app activation rather than when I activate the session programmatically.
I would like my app to behave deterministically, and would appreciate any help in deciphering what causes the inconsistent behavior in notifications from iOS.
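For reference, my handler follows the standard pattern (a minimal sketch using the documented userInfo keys); the inconsistency seems to be in when iOS delivers the .ended case, not in this code path:

import AVFoundation

// Sketch: standard interruption handling.
func observeInterruptions() {
    NotificationCenter.default.addObserver(
        forName: AVAudioSession.interruptionNotification,
        object: AVAudioSession.sharedInstance(),
        queue: .main
    ) { note in
        guard let raw = note.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: raw) else { return }
        switch type {
        case .began:
            // Pause the engine and players here.
            break
        case .ended:
            let optsRaw = note.userInfo?[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
            if AVAudioSession.InterruptionOptions(rawValue: optsRaw).contains(.shouldResume) {
                // Reactivate the session and resume playback here.
            }
        @unknown default:
            break
        }
    }
}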
Hi all,
I have been quite stumped by this behavior for a while now, so I thought it best to share here and see if someone more experienced with AVAudioEngine / AVAudioSession can weigh in.
Right now I have an AVAudioEngine that I am using for voice chat, handing it buffers to play. This works perfectly until route changes start to occur, which cause the AVAudioEngine to reset itself and stop all players attached to it.
Once an AVAudioPlayerNode gets stopped because of this (but also at any other time), all samples that were scheduled to be played get purged. Where this becomes confusing for me is that the completion handler gets called every time, regardless of whether the sound was actually played.
Is there a reliable way to know if a sample needs to be rescheduled after a player has been reset?
I am not quite sure in my case what my observer of AVAudioEngineConfigurationChange needs to be doing, as this engine only handles output. All input is through a separate engine for simplicity.
Currently I am storing a queue of samples as they get sent to the AVAudioPlayerNode for playback, and in the completion handler checking whether the player isPlaying. If it is playing, I assume the sound was actually played; if not, I leave it in the queue and assume that an observer of the route change or configuration change will notice there are samples in the queue and reschedule them.
Thanks for any feedback!
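One thing I'm experimenting with (a sketch; I haven't confirmed it fully solves the rescheduling question) is the completionCallbackType variant of scheduleBuffer, which at least distinguishes "data consumed" from "data actually played back":

import AVFoundation

// Sketch: ask for the completion only once the data has actually been played back,
// rather than when the node has merely consumed the buffer.
func schedule(_ buffer: AVAudioPCMBuffer,
              on player: AVAudioPlayerNode,
              whenPlayed: @escaping () -> Void) {
    player.scheduleBuffer(buffer, at: nil, options: [],
                          completionCallbackType: .dataPlayedBack) { callbackType in
        // If the player is stopped or reset first, this may never fire,
        // which is itself a signal that the sample needs rescheduling.
        if callbackType == .dataPlayedBack {
            whenPlayed()
        }
    }
}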
I am developing a VOD playback app, but when I stream video to an external monitor connected via a Lightning-to-HDMI adapter on iOS 18 or later, the screen goes dark and I cannot confirm playback.
The app I am developing does not detect the HDMI connection and present the player on the external display separately; it simply mirrors the video.
We have confirmed that the same phenomenon occurs with other services, but we were able to confirm playback with some services such as Apple TV.
Please let us know if there are any other necessary settings such as video certificates required for video playback.
We would also like to know whether this problem is known to occur on iOS 18 and later.
I have a Flutter iOS app with some simple sound FX for button clicks, swipes, etc.
In the simulator and on a real device the sound works fine, but when I upload the app to TestFlight (and the App Store) the sound FX don't play. When I install the app on my phone via Xcode I am using the release profile, so I don't see what the difference could be.
I have also gone through the archive that I uploaded and verified that the sound files are indeed there.
I have other Flutter apps that use sound, but none built since the iOS 26 update. I've tried 3 different Flutter sound libraries and all face the same issue.
Wondering if anyone else is seeing this issue, or if I'm missing a simple permission or something that has changed recently?
Thanks in advance