Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Posts under the Audio subtopic

Post

Replies

Boosts

Views

Activity

SpeechTranscriber/SpeechAnalyzer being relatively slow compared to FoundationModel and TTS
So, I've been wondering how fast an offline STT -> ML Prompt -> TTS roundtrip would be. Interestingly, for many tests, the SpeechTranscriber (STT) takes the bulk of the time, compared to generating a FoundationModel response and creating the audio using TTS. E.g. InteractionStatistics: - listeningStarted: 21:24:23 4480 2423 - timeTillFirstAboveNoiseFloor: 01.794 - timeTillLastNoiseAboveFloor: 02.383 - timeTillFirstSpeechDetected: 02.399 - timeTillTranscriptFinalized: 04.510 - timeTillFirstMLModelResponse: 04.938 - timeTillMLModelResponse: 05.379 - timeTillTTSStarted: 04.962 - timeTillTTSFinished: 11.016 - speechLength: 06.054 - timeToResponse: 02.578 - transcript: This is a test. - mlModelResponse: Sure! I'm ready to help with your test. What do you need help with? Here, between my audio input ending and the text-to-speech starting to play (using AVSpeechUtterance), the total response time was 2.5s. Of that time, it took the SpeechAnalyzer 2.1s to get the transcript finalized, while the FoundationModel only took 0.4s to respond (and TTS started playing nearly instantly). I'm already using reportingOptions: [.volatileResults, .fastResults], so it's probably as fast as possible right now? I'm just surprised the STT takes so much longer compared to the other parts (all being CoreML based, aren't they?)
3
0
715
6d
iOS AUv3 extension: no Icon shown in host
Hi, I'm working on an AUv3 project. The app itself displays my icon. However, the AUv3 extension does not display any icon in any host app (AUM, Drambo, etc.). I thought that the extension would inherit the host app icon, but that does not appear to be the case. I tried adding the icon as a 1024x1024 file to the extension target and then updating my extension plist file with a CFBundleIconFile key, but no luck either. It must surely be really easy. What am I missing? Thanks in advance for your help!
5
0
152
May ’25
Convert CoreAudio AudioObjectID to IOUSB LocationID
Is there a recommended way on macOS 26 Tahoe to take a CoreAudio AudioObjectID and use it to look up the underlying USB LocationID? I previously used AudioObjectID to query the corresponding DeviceUID with kAudioDevicePropertyDeviceUID. Then I queried for the IOService matching kIOAudioEngineClassName with property kIOAudioEngineGlobalUniqueIDKey matching DeviceUID, and I loaded kUSBDevicePropertyLocationID from the result. This fails on macOS 26, because the IO Registry for the device has an entry for usbaudiod rather than AppleUSBAudioEngine, and usbaudiod does not include a kIOAudioEngineGlobalUniqueIDKey property (or any other property to map it to a CoreAudio DeviceUID). My use case here is a piece of audio recording software that allows configuring a set of supported audio devices via USB HID prior to recording. I present the user with a list of CoreAudio devices to use, but without a way to look up the underlying USB LocationID, I cannot guarantee that the configured device matches the selected device (e.g. if the user plugged in two identical microphones).
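For reference, a minimal sketch of the first step of the lookup described above (AudioObjectID to DeviceUID); the helper name is illustrative, and the IORegistry half of the old approach is only summarized in the comment:

import CoreAudio

// Query kAudioDevicePropertyDeviceUID for a device. Before macOS 26 this UID could be
// matched against kIOAudioEngineGlobalUniqueIDKey on an IOAudioEngine service in the
// IO Registry to reach kUSBDevicePropertyLocationID; that second step is what breaks
// once the device is published by usbaudiod instead of AppleUSBAudioEngine.
func deviceUID(for deviceID: AudioObjectID) -> String? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyDeviceUID,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var uid: CFString? = nil
    var size = UInt32(MemoryLayout<CFString?>.size)
    let status = AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, &uid)
    return status == noErr ? (uid as String?) : nil
}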
2
0
615
Sep ’25
Delay in Microphone Input When Talking While Receiving Audio in PTT Framework (Full Duplex Mode)
Context: I am currently developing an app using the Push-to-Talk (PTT) framework. I have reviewed both the PTT framework documentation and the CallKit demo project to better understand how to properly manage audio session activation and AVAudioEngine setup. I am not activating the audio session manually. The audio session configuration is handled in the incomingPushResult or didBeginTransmitting callbacks from the PTChannelManagerDelegate. I am using a single AVAudioEngine instance for both input and playback. The engine is started in the didActivate callback from the PTChannelManagerDelegate. When I receive a push in full duplex mode, I set the active participant to the user who is speaking. Issue When I attempt to talk while the other participant is already speaking, my input tap on the input node takes a few seconds to return valid PCM audio data. Initially, it returns an empty PCM audio block. Details: The audio session is already active and configured with .playAndRecord. The input tap is already installed when the engine is started. When I talk from a neutral state (no one is speaking), the system plays the standard "microphone activation" tone, which covers this initial delay. However, this does not happen when I am already receiving audio. Assumptions / Current Setup Because the audio session is active in play and record, I assumed that microphone input would be available immediately, even while receiving audio. However, there seems to be a delay before valid input is delivered to the tap, only occurring when switching from a receive state to simultaneously talking. Questions Is this expected behavior when using the PTT framework in full duplex mode with a shared AVAudioEngine? Should I be restarting or reconfiguring the engine or audio session when beginning to talk while receiving audio? Is there a recommended pattern for managing microphone readiness in this scenario to avoid the initial empty PCM buffer? Would using separate engines for input and output improve responsiveness? I would like to confirm the correct approach to handling simultaneous talk and receive in full duplex mode using PTT framework and AVAudioEngine. Specifically, I need guidance on ensuring the microphone is ready to capture audio immediately without the delay seen in my current implementation. Relevant Code Snippets Engine Setup func setup() { let input = audioEngine.inputNode do { try input.setVoiceProcessingEnabled(true) } catch { print("Could not enable voice processing \(error)") return } input.isVoiceProcessingAGCEnabled = false let output = audioEngine.outputNode let mainMixer = audioEngine.mainMixerNode audioEngine.connect(pttPlayerNode, to: mainMixer, format: outputFormat) audioEngine.connect(beepNode, to: mainMixer, format: outputFormat) audioEngine.connect(mainMixer, to: output, format: outputFormat) // Initialize converters converter = AVAudioConverter(from: inputFormat, to: outputFormat)! f32ToInt16Converter = AVAudioConverter(from: outputFormat, to: inputFormat)! audioEngine.prepare() } Input Tap Installation func installTap() { guard AudioHandler.shared.checkMicrophonePermission() else { print("Microphone not granted for recording") return } guard !isInputTapped else { print("[AudioEngine] Input is already tapped!") return } let input = audioEngine.inputNode let microphoneFormat = input.inputFormat(forBus: 0) let microphoneDownsampler = AVAudioConverter(from: microphoneFormat, to: outputFormat)! 
let desiredFormat = outputFormat let inputFramesNeeded = AVAudioFrameCount((Double(OpusCodec.DECODED_PACKET_NUM_SAMPLES) * microphoneFormat.sampleRate) / desiredFormat.sampleRate) input.installTap(onBus: 0, bufferSize: inputFramesNeeded, format: input.inputFormat(forBus: 0)) { [weak self] buffer, when in guard let self = self else { return } // Output buffer: 1920 frames at 16kHz guard let outputBuffer = AVAudioPCMBuffer(pcmFormat: desiredFormat, frameCapacity: AVAudioFrameCount(OpusCodec.DECODED_PACKET_NUM_SAMPLES)) else { return } outputBuffer.frameLength = outputBuffer.frameCapacity let inputBlock: AVAudioConverterInputBlock = { inNumPackets, outStatus in outStatus.pointee = .haveData return buffer } var error: NSError? let converterResult = microphoneDownsampler.convert(to: outputBuffer, error: &error, withInputFrom: inputBlock) if converterResult != .haveData { DebugLogger.shared.print("Downsample error \(converterResult)") } else { self.handleDownsampledBuffer(outputBuffer) } } isInputTapped = true }
4
0
503
Aug ’25
AudioOutputUnitStart takes ~500 ms when using Push-to-Talk framework after beginTransmission
I’m working with the Push-to-Talk (PTT) framework and observing a consistent delay when starting audio capture. Scenario: A PTT call is already active The AVAudioSession is fully configured I request beginTransmission on the PTT channel I start my Audio Unit for recording (AudioOutputUnitStart) Observed behavior: AudioOutputUnitStart takes ~500 ms This happens whether I start the Audio Unit: after didBeginTransmission, or after AVAudioSession didActivate Comparison: Using the same Audio Unit, same format, and same configuration Without the PTT framework, AudioOutputUnitStart takes ~200 ms Additional notes: I am not modifying or reconfiguring AVAudioSession when requesting beginTransmission The audio session is already set up when the PTT call starts There are no interruptions or route changes at the time of starting the Audio Unit Impact: This extra latency is significant for Push-to-Talk use cases where fast transmit start is critical.
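For reference, a trivial way to capture the latency figure quoted above (the wrapper is illustrative, not from the post; recordingUnit is assumed to be the already-configured recording Audio Unit):

import AudioToolbox
import QuartzCore

// Measure how long AudioOutputUnitStart blocks before returning.
func timedStart(_ recordingUnit: AudioUnit) -> OSStatus {
    let begin = CACurrentMediaTime()
    let status = AudioOutputUnitStart(recordingUnit)
    let elapsedMs = (CACurrentMediaTime() - begin) * 1000
    print(String(format: "AudioOutputUnitStart returned %d after %.1f ms", status, elapsedMs))
    return status
}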
1
0
352
Feb ’26
SpeechAnalyzer > AnalysisContext lack of documentation
I'm using the new SpeechAnalyzer framework to detect certain commands and want to improve accuracy by giving context. It seems like AnalysisContext is the solution for this, but I couldn't find any usage example, so I want to make sure whether I'm doing it right or not. let context = AnalysisContext() context.contextualStrings = [ AnalysisContext.ContextualStringsTag("commands"): [ "set speed level", "set jump level", "increase speed", "decrease speed", ... ], AnalysisContext.ContextualStringsTag("vocabulary"): [ "speed", "jump", ... ] ] try await analyzer.setContext(context) With this implementation, it still gives outputs like "Set some speed level", "It's speed level", etc. Also, is it possible to make it expect a number after those commands, in order to eliminate results like "set some speed level to" (instead of "two")?
1
0
501
Jan ’26
SpeechTranscriber not providing audioTimeRange for most results
I started playing with transcription of audio files on macOS today, on the latest beta of Xcode and the latest beta of Tahoe. Transcription itself works really well, but for some reason the majority of the results contain no audioTimeRange. I got 22 single-word results with time ranges, spread out over a total file of 53 minutes. Is there something I can do to improve this? To my understanding, I have followed the sample code and instructions very closely, but the SwiftTranscriptionSampleApp and other examples I've seen lead me to believe I should be getting a lot more time ranges than I actually do.
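For context, a sketch of the transcriber setup this usually hinges on, patterned after the SwiftTranscriptionSampleApp; treat the exact initializer and option names as assumptions rather than confirmed API:

import Speech

// Time ranges are only attached when .audioTimeRange is requested via attributeOptions
// (assumption based on the sample app); even then, per-result coverage can apparently vary.
let transcriber = SpeechTranscriber(
    locale: Locale(identifier: "en-US"),
    transcriptionOptions: [],
    reportingOptions: [.volatileResults],
    attributeOptions: [.audioTimeRange])
let analyzer = SpeechAnalyzer(modules: [transcriber])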
3
0
205
Aug ’25
SpeechTranscriber extremely slow (14+ seconds) despite proper locale allocation and optimization
Using the official SwiftTranscriptionSampleApp from WWDC 2025, speech transcription takes 14+ seconds from audio input to first result, making it unusable for real-time applications. Environment iOS: 26.0 Beta Xcode: Beta 5 Device: iPhone 16 Pro Sample App: Official Apple SwiftTranscriptionSampleApp from WWDC 2025 Configuration Tested Locale: en-US (properly allocated with AssetInventory.allocate(locale:)) and es-ES Setup: All optimizations applied (preheating, high priority, model retention) I started testing in my own app to replace the SFSpeech API and include speech detection, but after long fights with the documentation (this part is quite terrible TBH) I tested the example (https://developer.apple.com/documentation/speech/bringing-advanced-speech-to-text-capabilities-to-your-app) and saw the same results. I added some logs to check the specific time: 🎙️ [20:30:41.532] ✅ Analyzer started successfully - ready to receive audio! 🎙️ [20:30:41.532] Listening for transcription results... 🎙️ [20:30:56.342] 🚀 FIRST TRANSCRIPTION RESULT after 14.810s: 'Hello' (isFinal: false) Questions Is this expected performance for iOS 26 Beta, given that the old SFSpeech API is far faster? Are there additional optimization steps for SpeechTranscriber? Should we expect significant performance improvements in later betas?
1
0
230
Aug ’25
SystemAudio Capture API Fails with OSStatus error 1852797029 (kAudioCodecIllegalOperationError)
Issue Description I'm implementing a system audio capture feature using AudioHardwareCreateProcessTap and AudioHardwareCreateAggregateDevice. The app successfully creates the tap and aggregate device, but when starting the IO procedure with AudioDeviceStart, it sometimes fails with OSStatus error 1852797029 (The operation couldn’t be completed. (OSStatus error 1852797029.)). The error occurs inconsistently, which makes it particularly difficult to debug and reproduce. Questions Has anyone encountered this intermittent "nope" error code (0x6e6f7065) when working with system audio capture? Are there specific conditions or system states that might trigger this error sporadically? Are there any known workarounds for handling this intermittent failure case? Any insights or guidance would be greatly appreciated.
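As a side note, the "nope" reading comes straight from the four-character code packed into the OSStatus; a small helper (not an Apple API) makes that visible:

import Foundation

// Render an OSStatus as its four-character code when all four bytes are printable ASCII;
// 1852797029 (0x6e6f7065) comes out as "nope".
func fourCharCode(_ status: OSStatus) -> String {
    let value = UInt32(bitPattern: status)
    let bytes = [UInt8((value >> 24) & 0xFF), UInt8((value >> 16) & 0xFF),
                 UInt8((value >> 8) & 0xFF), UInt8(value & 0xFF)]
    guard bytes.allSatisfy({ (0x20...0x7E).contains($0) }) else { return "\(status)" }
    return String(bytes: bytes, encoding: .ascii) ?? "\(status)"
}
// fourCharCode(1852797029) == "nope"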
0
0
189
May ’25
Unexpected AVAudioSession behavior after iOS 18.5 causing audio loss in VoIP calls
After updating to iOS 18.5, we’ve observed that outgoing audio from our app intermittently stops being transmitted during VoIP calls using AVAudioSession configured with .playAndRecord and .voiceChat. The session is set active without errors, and interruptions are handled correctly, yet audio capture suddenly ceases mid-call. This was not observed in earlier iOS versions (≤ 18.4). We’d like to confirm if there have been any recent changes in AVAudioSession, CallKit, or related media handling that could affect audio input behavior during long-running calls. func configureForVoIPCall() throws { try setCategory( .playAndRecord, mode: .voiceChat, options: [.allowBluetooth, .allowBluetoothA2DP, .defaultToSpeaker]) try setActive(true) }
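One way to narrow this down is to log session interruptions and route changes and correlate them with the moments capture stops; this diagnostic sketch is not from the post and makes no claim about the underlying cause:

import AVFoundation

// Log interruption and route-change notifications on the shared audio session.
let session = AVAudioSession.sharedInstance()
NotificationCenter.default.addObserver(forName: AVAudioSession.interruptionNotification,
                                       object: session, queue: .main) { note in
    if let raw = note.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt,
       let type = AVAudioSession.InterruptionType(rawValue: raw) {
        print("Audio session interruption:", type == .began ? "began" : "ended")
    }
}
NotificationCenter.default.addObserver(forName: AVAudioSession.routeChangeNotification,
                                       object: session, queue: .main) { note in
    if let raw = note.userInfo?[AVAudioSessionRouteChangeReasonKey] as? UInt {
        print("Route change reason \(raw); inputs:", session.currentRoute.inputs.map(\.portName))
    }
}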
1
0
287
Aug ’25
Track changes in the browser tab's audibility property.
Hi! I am writing a browser extension that allows you to control the playback of media content on a music service website. Unfortunately, Safari does not support tracking changes to the audible property in the tabs.onUpdated event. Is there an alternative to this event? I'm looking for a way to track when the automatic inference engine interrupts playback on a music service website. Thank you.
0
0
99
Apr ’25
Mic audio before and after a call is answered
I have an app that records a health provider’s conversation with a patient. I am using Audio Queue Services for this. If a phone call comes in while recording, the doctor wants to be able to ignore the call and continue the conversation without touching the phone. If the doctor answers the call, that’s fine – I will stop the recording. I can detect when the call comes in and ends using CXCallObserver and AVAudioSession.interruptionNotification. Unfortunately, when a call comes in and before it is answered or dismissed, the audio is suppressed. After the call is dismissed, the audio continues to be suppressed. How can I continue to get audio from the mic as long as the user does not answer the phone call?
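For reference, a sketch of the call-detection side described above (the class name is illustrative, not from the post); note it does not change the system's suppression of mic input while a call is ringing:

import CallKit

// Observe call state changes so the recorder can react when a call rings, connects, or ends.
final class CallWatcher: NSObject, CXCallObserverDelegate {
    private let observer = CXCallObserver()

    override init() {
        super.init()
        observer.setDelegate(self, queue: nil)
    }

    func callObserver(_ callObserver: CXCallObserver, callChanged call: CXCall) {
        if call.hasEnded {
            // Call declined or finished: try to resume the Audio Queue here.
        } else if !call.isOutgoing && !call.hasConnected {
            // Incoming call is ringing: the window in which mic audio is suppressed.
        }
    }
}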
1
0
75
May ’25
Can't set AVAudio sampleRate and installTap needs bufferSize 4800 at minimum
Two issues: No matter what I set in try audioSession.setPreferredSampleRate(x) the sample rate on both iOS and macOS is always 48000 when the output goes through the speaker, and 24000 when my AirPods connect to an iPhone/iPad. Now, I'm checking the current output loudness to animate a 3D character, using mixerNode.installTap(onBus: 0, bufferSize: y, format: nil) { [weak self] buffer, time in Task { @MainActor in // calculate rms and animate character accordingly but any buffer size under 4800 is just ignored and the buffers I get are 4800 sized. This is OK when the sample rate is 48000, as 10 samples per second lead to decent visual results. But when AirPods connect, the sample rate is 24000, which means only 5 samples per second, so the character animation looks lame. My AVAudioEngine setup is the following: audioEngine.connect(playerNode, to: pitchShiftEffect, format: format) audioEngine.connect(pitchShiftEffect, to: mixerNode, format: format) audioEngine.connect(mixerNode, to: audioEngine.outputNode, format: nil) Now, I'd be fine if the outputNode runs at whatever rate it needs, as long as my tap would get at least 10 samples per second. PS: Specifying my favorite format in the let format = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 2)! mixerNode.installTap(onBus: 0, bufferSize: y, format: format) doesn't change anything either
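A minimal sketch of the RMS step the tap comment above alludes to, assuming a float, deinterleaved buffer (the function name is illustrative):

import AVFoundation

// Root-mean-square level of the first channel of a float PCM buffer.
func rms(of buffer: AVAudioPCMBuffer) -> Float {
    guard let channel = buffer.floatChannelData?[0], buffer.frameLength > 0 else { return 0 }
    let frames = Int(buffer.frameLength)
    var sum: Float = 0
    for i in 0..<frames { sum += channel[i] * channel[i] }
    return sqrt(sum / Float(frames))
}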
1
0
454
Aug ’25
In Speech framework is SFTranscriptionSegment timing supposed to be off and speechRecognitionMetadata nil until isFinal?
I'm working in Swift/SwiftUI, running XCode 16.3 on macOS 15.4 and I've seen this when running in the iOS simulator and in a macOS app run from XCode. I've also seen this behaviour with 3 different audio files. Nothing in the documentation says that the speechRecognitionMetadata property on an SFSpeechRecognitionResult will be nil until isFinal, but that's the behaviour I'm seeing. I've stripped my class down to the following: private var isAuthed = false // I call this in a .task {} in my SwiftUI View public func requestSpeechRecognizerPermission() { SFSpeechRecognizer.requestAuthorization { authStatus in Task { self.isAuthed = authStatus == .authorized } } } public func transcribe(from url: URL) { guard isAuthed else { return } let locale = Locale(identifier: "en-US") let recognizer = SFSpeechRecognizer(locale: locale) let recognitionRequest = SFSpeechURLRecognitionRequest(url: url) // the behaviour occurs whether I set this to true or not, I recently set // it to true to see if it made a difference recognizer?.supportsOnDeviceRecognition = true recognitionRequest.shouldReportPartialResults = true recognitionRequest.addsPunctuation = true recognizer?.recognitionTask(with: recognitionRequest) { (result, error) in guard result != nil else { return } if result!.isFinal { //speechRecognitionMetadata is not nil } else { //speechRecognitionMetadata is nil } } } } Further, and this isn't documented either, the SFTranscriptionSegment values don't have correct timestamp and duration values until isFinal. The values aren't all zero, but they don't align with the timing in the audio and they change to accurate values when isFinal is true. The transcription otherwise "works", in that I get transcription text before isFinal and if I wait for isFinal the segments are correct and speechRecognitionMetadata is filled with values. The context here is I'm trying to generate a transcription that I can then highlight the spoken sections of as audio plays and I'm thinking I must be just trying to use the Speech framework in a way it does not work. I got my concept working if I pre-process the audio (i.e. run it through until isFinal and save the results I need to json), but being able to do even a rougher version of it 'on the fly' - which requires segments to have the right timestamp/duration before isFinal - is perhaps impossible?
1
0
171
Jul ’25
MusicKit - Not showing as a capability in Xcode
A bit of a novice to app development here, but I have a paid developer account and I have registered the identifier for MusicKit on the developer website (using the bundle identifier I've selected in Xcode), yet the option to add MusicKit as a capability is not available in Xcode. I've manually updated the certificates, closed the app and reopened it, started a new project, and tried with a different demo project. Apologies if I am missing something obvious, but could someone help me get this capability added?
0
0
155
Aug ’25
storing AVAsset in SwiftData
Hi, I am creating an app that can include videos or images in its data. While @Attribute(.externalStorage) helps with images, with AVAssets I would actually like access to the URL behind that data (as it would be stupid to load and then save the data again just to have a URL). One key component is to keep all of this clean enough so that I can use (private) CloudKit syncing with the resulting model. All the best, Christoph
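One common pattern, sketched here with illustrative names only (nothing below is from the post, and CloudKit syncing of the file itself is not addressed), is to store just a file reference in the SwiftData model and keep the media on disk, so an AVURLAsset can be created without round-tripping the data:

import Foundation
import SwiftData

// The video file lives in Application Support; the model only remembers its file name.
// Syncing the file (e.g. as a CKAsset) would need to be handled separately.
@Model
final class MediaItem {
    var title: String
    var videoFileName: String

    init(title: String, videoFileName: String) {
        self.title = title
        self.videoFileName = videoFileName
    }

    var videoURL: URL {
        URL.applicationSupportDirectory.appendingPathComponent(videoFileName)
    }
}

// Usage: AVURLAsset(url: item.videoURL) when the video is needed.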
1
0
619
Jun ’25
AVSpeechUtterance stutters in CarPlay when connected to a BT headset
We are currently working on a CarPlay navigation app and so far everything is working well except for speaking turn notifications. Our TTS implementation works fine on the phone and works fine on CarPlay if the voice is spoken over the speaker in the car. If users connect a BT headset to the car and listen through that headset, then the voice commands are chopped up / stutter. Why would users use a BT headset? Well, we are working on a motorcycle app, and there are usually no speakers on a motorcycle. It sounds like the BT channel is opened and closed repeatedly for every character / word spoken. This happens on different CarPlay devices and different Bluetooth headsets; we have reports from multiple users that they find this behavior annoying and that other apps work fine. Is this a known issue? Are there possible workarounds?
0
0
90
Apr ’25
MIDI output from Standalone MIDI Processor Demo App to DAW
I am trying to get MIDI output from the AU Host demo app using the recent MIDI processor example. The processor works correctly in Logic Pro, but I cannot send MIDI from the AUv3 extension in standalone mode using the default host app to another program (e.g., Ableton). The MIDI manager, which is part of the standalone host app, works fine, and I can send MIDI using it directly—Ableton receives it without issues. I have already set the midiOutputNames in the extension, and the midiOutBlock is mapped. However, the MIDI data from the AUv3 extension does not reach Ableton in standalone mode. I suspect the issue is that midiOutBlock might never be called in the plugin, or perhaps an input to the plugin is missing, which prevents it from sending MIDI. I am currently using the default routing. I have modified the MIDI manager such that it works well as described above. Here is a part of my code for SimplePlayEngine.swift and my MIDIManager.swift for reference: @MainActor @Observable public class SimplePlayEngine { private let midiOutBlock: AUMIDIOutputEventBlock = { sampleTime, cable, length, data in return noErr } var scheduleMIDIEventListBlock: AUMIDIEventListBlock? = nil public init() { engine.attach(player) engine.prepare() setupMIDI() } private func setupMIDI() { if !MIDIManager.shared.setupPort(midiProtocol: MIDIProtocolID._2_0, receiveBlock: { [weak self] eventList, _ in if let scheduleMIDIEventListBlock = self?.scheduleMIDIEventListBlock { _ = scheduleMIDIEventListBlock(AUEventSampleTimeImmediate, 0, eventList) } }) { fatalError("Failed to setup Core MIDI") } } func initComponent(type: String, subType: String, manufacturer: String) async -> ViewController? { reset() guard let component = AVAudioUnit.findComponent(type: type, subType: subType, manufacturer: manufacturer) else { fatalError("Failed to find component with type: \(type), subtype: \(subType), manufacturer: \(manufacturer))" ) } do { let audioUnit = try await AVAudioUnit.instantiate( with: component.audioComponentDescription, options: AudioComponentInstantiationOptions.loadOutOfProcess) self.avAudioUnit = audioUnit self.connect(avAudioUnit: audioUnit) return await audioUnit.loadAudioUnitViewController() } catch { return nil } } private func startPlayingInternal() { guard let avAudioUnit = self.avAudioUnit else { return } setSessionActive(true) if avAudioUnit.wantsAudioInput { scheduleEffectLoop() } let hardwareFormat = engine.outputNode.outputFormat(forBus: 0) engine.connect(engine.mainMixerNode, to: engine.outputNode, format: hardwareFormat) do { try engine.start() } catch { isPlaying = false fatalError("Could not start engine. 
error: \(error).") } if avAudioUnit.wantsAudioInput { player.play() } isPlaying = true } private func resetAudioLoop() { guard let avAudioUnit = self.avAudioUnit else { return } if avAudioUnit.wantsAudioInput { guard let format = file?.processingFormat else { fatalError("No AVAudioFile defined.") } engine.connect(player, to: engine.mainMixerNode, format: format) } } public func connect(avAudioUnit: AVAudioUnit?, completion: @escaping (() -> Void) = {}) { guard let avAudioUnit = self.avAudioUnit else { return } engine.disconnectNodeInput(engine.mainMixerNode) resetAudioLoop() engine.detach(avAudioUnit) func rewiringComplete() { scheduleMIDIEventListBlock = auAudioUnit.scheduleMIDIEventListBlock if isPlaying { player.play() } completion() } let hardwareFormat = engine.outputNode.outputFormat(forBus: 0) engine.connect(engine.mainMixerNode, to: engine.outputNode, format: hardwareFormat) if isPlaying { player.pause() } let auAudioUnit = avAudioUnit.auAudioUnit if !auAudioUnit.midiOutputNames.isEmpty { auAudioUnit.midiOutputEventBlock = midiOutBlock } engine.attach(avAudioUnit) if avAudioUnit.wantsAudioInput { engine.disconnectNodeInput(engine.mainMixerNode) if let format = file?.processingFormat { engine.connect(player, to: avAudioUnit, format: format) engine.connect(avAudioUnit, to: engine.mainMixerNode, format: format) } } else { let stereoFormat = AVAudioFormat(standardFormatWithSampleRate: hardwareFormat.sampleRate, channels: 2) engine.connect(avAudioUnit, to: engine.mainMixerNode, format: stereoFormat) } rewiringComplete() } } and my MIDI Manager @MainActor class MIDIManager: Identifiable, ObservableObject { func setupPort(midiProtocol: MIDIProtocolID, receiveBlock: @escaping @Sendable MIDIReceiveBlock) -> Bool { guard setupClient() else { return false } if MIDIInputPortCreateWithProtocol(client, portName, midiProtocol, &port, receiveBlock) != noErr { return false } for source in self.sources { if MIDIPortConnectSource(port, source, nil) != noErr { print("Failed to connect to source \(source)") return false } } setupVirtualMIDIOutput() return true } private func setupVirtualMIDIOutput() { let virtualStatus = MIDISourceCreate(client, virtualSourceName, &virtualSource) if virtualStatus != noErr { print("❌ Failed to create virtual MIDI source: \(virtualStatus)") } else { print("✅ Created virtual MIDI source: \(virtualSourceName)") } } func sendMIDIData(_ data: [UInt8]) { print("hey") var packetList = MIDIPacketList() withUnsafeMutablePointer(to: &packetList) { ptr in let pkt = MIDIPacketListInit(ptr) _ = MIDIPacketListAdd(ptr, 1024, pkt, 0, data.count, data) if virtualSource != 0 { let status = MIDIReceived(virtualSource, ptr) if status != noErr { print("❌ Failed to send MIDI data: \(status)") } else { print("✅ Sent MIDI data: \(data)") } } } } }
0
0
525
Aug ’25
Accessory not supported by this device
Hi, I've had a new deck installed in my car for about 1.5 weeks, and I'm having compatibility issues with my iPhone 15 Pro Max. It happens both wired and wirelessly: I get the error "Accessory not supported by this device". It used to happen all the time; now it's 50/50. Sometimes it works. I've removed and re-added Bluetooth multiple times on the phone and the deck, and I bought a Belkin USB-C to USB-A cable today that seems to fix it, but the problem comes back. I've also changed the setting "Face ID & Passcode > Allow Access When Locked > Accessories". The car stereo guy reckons it's definitely an issue with the phone, not the deck, and I'm inclined to believe him since the error states "by this device". Any advice appreciated.
0
0
222
Aug ’25
Why is MusicKit ApplicationMusicPlayer not available on watchOS?
ApplicationMusicPlayer is not available on watchOS, although it is on all other platforms. Is there a technical reason for that, like battery life? Same goes for SystemMusicPlayer and MPMusicPlayerController. I already filed feedbacks for that.
0
0
106
May ’25