I am trying to set a custom audio output device for generated audio on Mac Catalyst.
While using:
let status = AudioUnitSetProperty(outputUnit,
                                  kAudioOutputUnitProperty_CurrentDevice,
                                  kAudioUnitScope_Global,
                                  0,
                                  &outputDeviceID,
                                  UInt32(MemoryLayout<AudioDeviceID>.size))
the kAudioOutputUnitProperty_CurrentDevice property is reported as invalid, and status is -10879 (kAudioUnitErr_InvalidProperty), indicating an error.
STEPS TO REPRODUCE
Set the Run Destination to macOS and run the program. "AudioUnitSetProperty: 0" should be printed, indicating it works fine.
Set Run Destination to Mac Catalyst and run the program. "Error setting output device: -10879" should be printed, indicating an error.
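For reference, a minimal sketch of the call on macOS, assuming outputUnit is the engine's output audio unit and outputDeviceID is an AudioDeviceID obtained elsewhere (the helper below is illustrative, not code from this report):

import AVFoundation
import CoreAudio
import AudioToolbox

// Sketch: route an AVAudioEngine's output to a specific device on macOS.
// On Mac Catalyst the same call returns -10879 (kAudioUnitErr_InvalidProperty),
// which is the behavior described above.
func setOutputDevice(_ outputDeviceID: AudioDeviceID, on engine: AVAudioEngine) -> OSStatus {
    guard let outputUnit = engine.outputNode.audioUnit else {
        return kAudioUnitErr_Uninitialized
    }
    var deviceID = outputDeviceID
    return AudioUnitSetProperty(outputUnit,
                                kAudioOutputUnitProperty_CurrentDevice,
                                kAudioUnitScope_Global,
                                0,
                                &deviceID,
                                UInt32(MemoryLayout<AudioDeviceID>.size))
}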
If I call AudioDeviceStart on an AudioDevice in my application, then "Hey Siri" will not wake Siri up. Our users have complained that Siri does not get activated while my application is running. We found that calling AudioDeviceStart is causing the issue.
How should we handle this?
Hi all, I have spent a lot of time reading the tech note and watching the WWDC video that introduce the PTTFramework on iOS. I currently have a custom setup where I am using AVAudioEngine to schedule and play buffers that are being streamed through a call.
I am looking to use the PTTFramework to allow a user to trigger this push to talk behavior from the lock screen and the various places with the system UI it provides.
However, I am unsure what the correct behavior is regarding the handling of the audio session. Right now I am using .playback when there is no active voice transmission, so that devices such as AirPods can stay in A2DP mode where applicable, and then transitioning to the .playAndRecord category only when the mic input should become active. Following this change, my AVAudioEngine manager manually activates and deactivates the audio session when the engine is either playing/recording or idle.
In the documentation it states that you should not attempt to activate or deactivate your audio session directly, but allow the framework to handle it.
Does that mean that I need to either call the request-to-transmit delegate function or set an active participant on the channel manager first, and then wait for the didBecomeActive delegate method to trigger before I actually attempt to play or record any audio? (I am currently using the fullDuplex mode.) I noticed that the delegate method will only trigger if the audio session wasn't already active before doing one of the above (setting an active participant, requesting transmit).
Lastly, when using the PTTFramework the documentation also mentions that we get support for PTT devices, and I notice that the didBeginTransmittingFrom property has a handsfreeButton case. Is there any documentation or resource for what is actually supported out of the box here? I am currently handling a lot of the push-to-talk behavior over Bluetooth LE, and wanted to make sure there wasn't overlap with what the system provides.
Thank you!
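For context, here is a minimal sketch of the category switching described above, written under the assumption that the PTT framework owns session activation (the session is configured but never activated here; the helper is a placeholder, not code from this post):

import AVFoundation

// Sketch: configure (but never activate) the shared audio session; per the
// PushToTalk guidance referenced above, the framework activates/deactivates it.
func configureSession(forTransmitting transmitting: Bool) throws {
    let session = AVAudioSession.sharedInstance()
    if transmitting {
        // Mic input needed: .playAndRecord (this typically drops AirPods out of A2DP).
        try session.setCategory(.playAndRecord, mode: .voiceChat, options: [.allowBluetooth])
    } else {
        // Receive-only: .playback keeps A2DP available where applicable.
        try session.setCategory(.playback, mode: .default)
    }
    // No session.setActive(_:) call here: wait for the channel manager's
    // "session did become active" callback before starting the AVAudioEngine,
    // and stop the engine when the deactivation callback arrives.
}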
I've been generating new Audio Unit Extension apps with Xcode 16 (and newer), and although they generally work initially, it is easy (though I'm not sure how to do it reliably) to get the app into a state where it can no longer instantiate the audio unit. Generally, the call to AVAudioUnit.findComponent fails and SimplePlayEngine hits the fatalError("Failed to find component with type...").
In the most recent project, merely adding files to the extension (without making any use of them) caused it to go off the rails.
If I "Archive" the app+plugin, there is no audio unit extension in the bundle.
If I switch to the audio unit extension and build it, it builds fine. If I look at the build folder in Library/Developer/Xcode/project_folder, the extension_name.appex is there.
Any ideas? If I can coax an unmodified audio unit extension project to exhibit this behavior I'll attach it here. Right now what I have has code I don't want to share.
I tried adding watermarks to the recorded video. Appending sample buffers using AVAssetWriterInput's append method fails, and when I inspect the AVAssetWriter's error property, I get the following:
Error Domain=AVFoundationErrorDomain Code=-11800 "This operation cannot be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-12780), NSLocalizedDescription=This operation cannot be completed, NSUnderlyingError=0x302399a70 {Error Domain=NSOSStatusErrorDomain Code=-12780 "(null)"}}
As far as I can tell, -11800 indicates AVErrorUnknown; however, I have not been able to find information about the -12780 error code, which as far as I can tell is undocumented.
Thanks!
Here is the code
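For reference, a minimal append path with the usual readiness and status checks looks roughly like this; it is a sketch with placeholder names (videoInput, writer), not the code from this post:

import AVFoundation

// Sketch: append a (watermarked) sample buffer only when the writer and input are ready.
func append(_ sampleBuffer: CMSampleBuffer,
            to videoInput: AVAssetWriterInput,
            of writer: AVAssetWriter) {
    guard writer.status == .writing else {
        // Once the writer is .failed, writer.error carries the underlying
        // NSOSStatusErrorDomain code (e.g. -12780) like the one seen above.
        print("Writer not writing (status \(writer.status.rawValue)): \(String(describing: writer.error))")
        return
    }
    guard videoInput.isReadyForMoreMediaData else {
        // Appending while the input is not ready is unsupported and can push
        // the writer into a failed state.
        return
    }
    if !videoInput.append(sampleBuffer) {
        print("Append failed: \(String(describing: writer.error))")
    }
}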
I'm working with the modern Core Audio API introduced in macOS Sequoia. I have an AudioHardwareDevice which has several controls of type AudioHardwareControl. I figured out that to filter only the volume controls I can use the condition classID == kAudioVolumeControlClassID. Some devices have volume controls for both input and output. How can I determine the direction of a control?
Streams, i.e. AudioHardwareStream objects, have a direction, but I haven't found a way to map controls to streams. There are the kAudioObjectPropertyScopeInput and kAudioObjectPropertyScopeOutput property scopes, but no matter what I tried, controls always return false for any control.hasProperty(address: whatever). Any other ideas?
I am trying to get access to raw audio samples from mic. I've written a simple example application that writes the values to a text file.
Below is my sample application. All the input samples in the buffers delivered to the input tap are zero. What am I doing wrong?
I did add the Privacy - Microphone Usage Description key to my application target properties and I am allowing microphone access when the application launches. I do find it strange that I have to provide permission every time even though in Settings > Privacy, my application is listed as one of the applications allowed to access the microphone.
import AVFoundation

class AudioRecorder {
    private let audioEngine = AVAudioEngine()
    private var fileHandle: FileHandle?

    func startRecording() {
        let inputNode = audioEngine.inputNode
        let audioFormat: AVAudioFormat
        #if os(iOS)
        // Match the hardware sample rate reported by the audio session.
        let hardwareSampleRate = AVAudioSession.sharedInstance().sampleRate
        audioFormat = AVAudioFormat(standardFormatWithSampleRate: hardwareSampleRate, channels: 1)!
        #elseif os(macOS)
        audioFormat = inputNode.inputFormat(forBus: 0) // Use the input node's current format
        #endif

        setupTextFile()

        inputNode.installTap(onBus: 0, bufferSize: 1024, format: audioFormat) { [weak self] buffer, _ in
            self?.processAudioBuffer(buffer: buffer)
        }

        do {
            try audioEngine.start()
            print("Recording started with format: \(audioFormat)")
        } catch {
            print("Failed to start audio engine: \(error.localizedDescription)")
        }
    }

    func stopRecording() {
        audioEngine.stop()
        audioEngine.inputNode.removeTap(onBus: 0)
        print("Recording stopped.")
    }

    private func setupTextFile() {
        let tempDir = FileManager.default.temporaryDirectory
        let textFileURL = tempDir.appendingPathComponent("audioData.txt")
        FileManager.default.createFile(atPath: textFileURL.path, contents: nil, attributes: nil)
        fileHandle = try? FileHandle(forWritingTo: textFileURL)
    }

    private func processAudioBuffer(buffer: AVAudioPCMBuffer) {
        guard let channelData = buffer.floatChannelData else { return }
        let channelSamples = channelData[0]
        let frameLength = Int(buffer.frameLength)

        var textData = ""
        var allZero = true
        for i in 0..<frameLength {
            let sample = channelSamples[i]
            if sample != 0 {
                allZero = false
            }
            textData += "\(sample)\n"
        }

        if allZero {
            print("Got \(frameLength) frames of audio data on \(buffer.stride) channels. All data is zero.")
        } else {
            print("Got \(frameLength) frames of audio data on \(buffer.stride) channels.")
        }

        // Write the samples to the text file
        if let data = textData.data(using: .utf8) {
            fileHandle?.write(data)
        }
    }
}
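One thing the class above never does is configure and activate the audio session on iOS (or request record permission explicitly); without a record-capable, active session the input tap can deliver silent buffers. A minimal sketch of that setup, as an assumption rather than something from the original post (on macOS a sandboxed app additionally needs the Audio Input capability):

import AVFoundation

// Sketch: request microphone permission and activate a record-capable session (iOS).
func prepareSessionForRecording(completion: @escaping (Bool) -> Void) {
    let session = AVAudioSession.sharedInstance()
    session.requestRecordPermission { granted in
        guard granted else { completion(false); return }
        do {
            try session.setCategory(.playAndRecord, mode: .default)
            try session.setActive(true)
            completion(true)
        } catch {
            print("Audio session setup failed: \(error)")
            completion(false)
        }
    }
}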
Hi everyone,
I wanted to bring up a question about Core Audio and its potential for future updates or improvements, specifically regarding latency optimization. As someone who relies on Core Audio for real-time audio processing, I would find any enhancements in this area incredibly beneficial, as would other professionals in the industry.
Does anyone know if Apple has shared any plans or updates regarding Core Audio’s performance, particularly for low-latency applications? I’d appreciate any insights or advice from the community!
Thanks so much!
Best,
Michael
I'm experiencing audio issues while developing for visionOS when playing PCM data through AVAudioPlayerNode.
Issue Description:
Occasionally, the speaker produces loud popping sounds or distorted noise
This occurs during PCM audio playback using AVAudioPlayerNode
The issue is intermittent and doesn't happen every time
Technical Details:
Platform: visionOS
Device: Vision Pro / Simulator
Audio Framework: AVFoundation
Audio Node: AVAudioPlayerNode
Audio Format: PCM
I would appreciate any insights on:
Common causes of audio distortion with AVAudioPlayerNode
Recommended best practices for handling PCM playback in visionOS
Potential configuration issues that might cause this behavior
Has anyone encountered similar issues or found solutions? Any guidance would be greatly helpful.
Thank you in advance!
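For comparison, here is a minimal sketch of the playback path described above (all names are placeholders, not code from this post). One common source of pops is a mismatch between the format used to connect the player node and the format of the scheduled buffers, or gaps between consecutive buffers:

import AVFoundation

// Sketch: play PCM buffers through AVAudioPlayerNode, keeping one format end to end.
final class PCMPlayer {
    private let engine = AVAudioEngine()
    private let playerNode = AVAudioPlayerNode()
    private let format = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 2)!

    func start() throws {
        engine.attach(playerNode)
        // Connect with the same format the scheduled buffers use; a mismatch here
        // forces a conversion and is one place discontinuities can creep in.
        engine.connect(playerNode, to: engine.mainMixerNode, format: format)
        try engine.start()
        playerNode.play()
    }

    func enqueue(_ buffer: AVAudioPCMBuffer) {
        // Buffers should be created with `format` and be contiguous; gaps between
        // scheduled buffers are audible as clicks or pops.
        playerNode.scheduleBuffer(buffer, completionHandler: nil)
    }
}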
private var audioEngine = AVAudioEngine()
private var inputNode: AVAudioInputNode!

func startAnalyzing() {
    inputNode = audioEngine.inputNode
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    let hardwareSampleRate = recordingSession.sampleRate // recordingSession is the app's AVAudioSession
    inputNode.removeTap(onBus: 0)

    if recordingFormat.sampleRate != hardwareSampleRate {
        print("Input format sample rate differs from the hardware sample rate; retapping with the hardware rate.")
        let newFormat = AVAudioFormat(commonFormat: recordingFormat.commonFormat,
                                      sampleRate: hardwareSampleRate,
                                      channels: recordingFormat.channelCount,
                                      interleaved: recordingFormat.isInterleaved)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: newFormat) { buffer, time in
            self.processAudioBuffer(buffer, time: time)
        }
    } else {
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { buffer, time in
            self.processAudioBuffer(buffer, time: time)
        }
    }

    do {
        audioEngine.prepare()
        try audioEngine.start()
    } catch {
        print("Failed to start audio engine: \(error)")
    }
}
I send the app to the background and then call startAnalyzing(), which reports an error, even though background recording permissions are configured.
error:
[10429:570139] [aurioc] AURemoteIO.cpp:1668 AUIOClient_StartIO failed (561145187)
[10429:570139] [avae] AVAEInternal.h:109 [AVAudioEngineGraph.mm:1545:Start: (err = PerformCommand(*ioNode, kAUStartIO, NULL, 0)): error 561145187
The audio engine couldn't start.
Is starting the audio engine from the background not allowed?
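For reference, a sketch of the session setup that is generally needed before starting the engine for recording; this assumes the "audio" background mode is already enabled, as stated above, and is not code from the original post. (561145187 is the FourCC '!rec', i.e. AVAudioSession.ErrorCode.cannotStartRecording.)

import AVFoundation

// Sketch: put the session into a record-capable, active state before audioEngine.start().
func activateRecordingSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [.allowBluetooth])
    try session.setActive(true)
}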
I've got a web app built with MusicKit that displays a list of songs.
I have player controls for play, pause, skip next, skip previous, toggle shuffle, and set repeat mode.
All of these work by calling methods on the music instance.
The play button, when nothing is playing and nothing is in the queue, will enqueue all the tracks and start playing with the below, for example:
await music.setQueue({ songs, startPlaying: true });
I've implemented a progress slider based on feedback from the "playbackProgressDidChange" listener.
Now, how in the world can I set the volume? This seems like it should be simple, but I am at a complete loss here.
The docs say:
"The volume of audio playback, which is set directly on the HTMLMediaElement as the HTMLMediaElement.volume property. This value ranges between 0, which would be muting the audio, and 1, which would be the loudest possible."
Given that all my controls work off the music instance, I don't understand how I can do that.
In this video from WWDC 2022, music web components are touched on briefly. These are also documented very sparsely. The volume docs are here.
For the life of me, I can't even get the volume web component to display in the UI.
It appears that MusicKit Web is hobbled compared to the native implementation, but surely adjusting the volume shouldn't be that hard, right?
I'd appreciate any insight on how to do this, including how to get web components to work (in a Next JS app).
Thanks.
After updating to 18.3, my iPad (7th generation) has no sound and will not play videos.
I have tried everything. The songs load into the playlists and in searches, but when prompted to play, they just won't play.
I have a wrapper since my main player (which carries the buttons for play/rewind/forward/etc.) is in Objective-C.
//
//  ApplePlayerWrapper.swift
//  UniversallyMac
//
//  Created by Dorian Mattar on 11/10/24.
//

import Foundation
import MusicKit
import MediaPlayer

@objc public class MusicKitWrapper: NSObject {
    @objc public static let shared = MusicKitWrapper()
    private let player = ApplicationMusicPlayer.shared

    // Play the current track
    @objc public func play() {
        guard !player.queue.entries.isEmpty else {
            print("Queue is empty. Cannot start playback.")
            return
        }
        logPlayerState(message: "Before play")
        Task {
            do {
                try await player.prepareToPlay()
                try await player.play()
                print("Playback started successfully.")
            } catch {
                if let nsError = error as NSError? {
                    print("NSError Code: \(nsError.code), Domain: \(nsError.domain)")
                }
            }
            logPlayerState(message: "After play")
        }
    }

    // Log the current player state
    @objc public func logPlayerState(message: String = "") {
        print("Player State - \(message):")
        print("Playback Status: \(player.state.playbackStatus)")
        print("Queue Count: \(player.queue.entries.count)")
        // Only log current track details if the player is playing
        if player.state.playbackStatus == .playing {
            if let currentEntry = player.queue.currentEntry {
                print("Current Track: \(currentEntry.title)")
                print("Current Position: \(player.playbackTime) seconds")
                print("Track Length: \(currentEntry.endTime ?? 0.0) seconds")
            } else {
                print("No current track.")
            }
        } else {
            print("No track is playing.")
        }
        print("----------")
    }

    // Debug the queue
    @objc public func debugQueue() {
        print("Debugging Queue:")
        for (index, entry) in player.queue.entries.enumerated() {
            print("\(index): \(entry.title)")
        }
    }

    // Ensure track availability in the queue
    public func queueTracks(_ tracks: [Track]) {
        Task {
            do {
                for track in tracks {
                    // Validate Play Parameters
                    guard let playParameters = track.playParameters else {
                        print("Track \(track.title) has no Play Parameters.")
                        continue
                    }
                    // Log the Play Parameters
                    print("Track Title: \(track.title)")
                    print("Play Parameters: \(playParameters)")
                    print("Raw Values: \(track.id.rawValue)")
                    // Ensure the ID is valid
                    if track.id.rawValue.isEmpty {
                        print("Track \(track.title) has an invalid or empty ID in Play Parameters.")
                        continue
                    }
                    // Queue the track
                    try await player.queue.insert(track, position: .afterCurrentEntry)
                    print("Queued track: \(track.title)")
                }
                print("Tracks successfully added to the queue.")
            } catch {
                print("Error queuing tracks: \(error)")
            }
            debugQueue()
        }
    }

    // Clear the current queue
    @objc public func resetMusicPlayer() {
        Task {
            player.stop()
            player.queue.entries.removeAll()
            print("Queue cleared.")
            print("Apple Music player reset successfully.")
        }
    }
}
I opened an Apple Dev. ticket, but I'm trying here as well. Thanks!
I recently installed a rear-view camera in my car, and ever since, I've been experiencing a frustrating issue with my CarPlay. After about 15 seconds of playing audio via Bluetooth, the sound stops coming out of the speakers, even though the song continues to run in the background.
For context, my stereo system is an aftermarket unit that I installed to enable CarPlay functionality. Everything worked perfectly before adding the rear-view camera. Unfortunately, my unit does not have a port for a wired connection, so I can't test the audio using a cable.
Has anyone experienced a similar issue? Could the camera installation be interfering with the Bluetooth or audio system somehow? Any advice or troubleshooting tips would be greatly appreciated!
I am having issues deploying my iOS app, which uses ShazamKit, so that it runs on a Mac with Apple silicon.
When uploading the archive to App Store Connect, I get:
ITMS-90863: Macs with Apple silicon support issue - The app links with libraries that aren’t present in macOS:
/usr/lib/swift/libswiftShazamKit.dylib
Is ShazamKit not supported for iOS apps that can run on Macs with Apple silicon? Or is there something I should fix in my setup / deployment?
I'm encountering errors while using AVAudioEngine with voice processing enabled (setVoiceProcessingEnabled(true)) in scenarios where the input and output audio devices are not the same. This issue arises specifically with mismatched devices, preventing the application from functioning as expected.
Works: Paired devices (e.g., MacBook Pro mic → MacBook Pro speakers)
Fails: Mismatched devices (e.g., AirPods mic → MacBook Pro speakers)
When using paired input and output devices:
The setup works as expected.
Example: MacBook Pro microphone → MacBook Pro speakers.
When using mismatched devices:
AVAudioEngine setup fails during aggregate device construction.
Example: AirPods microphone → MacBook Pro speakers.
Error logs indicate a channel count mismatch.
Here are the partial logs. Due to the content limit, I cannot post the entire logs.
AUVPAggregate.cpp:1000 client-side input and output formats do not match (err=-10875)
AUVPAggregate.cpp:1036 err=-10875
AVAEInternal.h:109 [AVAudioEngineGraph.mm:1344:Initialize: (err = PerformCommand(*outputNode, kAUInitialize, NULL, 0)): error -10875
AggregateDevice.mm:329 Failed expectation of constructed aggregate (312): mInput.streamChannelCounts == inputStreamChannelCounts
AggregateDevice.mm:331 Failed expectation of constructed aggregate (312): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U)
AggregateDevice.mm:182 error fetching default pair
AggregateDevice.mm:329 Failed expectation of constructed aggregate (336): mInput.streamChannelCounts == inputStreamChannelCounts
AggregateDevice.mm:331 Failed expectation of constructed aggregate (336): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U)
AUHAL.cpp:1782 ca_verify_noerr: [AudioDeviceSetProperty(mDeviceID, NULL, 0, isInput, kAudioDevicePropertyIOProcStreamUsage, theSize, theStreamUsage), 560227702]
AudioHardware-mac-imp.cpp:3484 AudioDeviceSetProperty: no device with given ID
AUHAL.cpp:1782 ca_verify_noerr: [AudioDeviceSetProperty(mDeviceID, NULL, 0, isInput, kAudioDevicePropertyIOProcStreamUsage, theSize, theStreamUsage), 560227702]
AggregateDevice.mm:182 error fetching default pair
AggregateDevice.mm:329 Failed expectation of constructed aggregate (348): mInput.streamChannelCounts == inputStreamChannelCounts
AggregateDevice.mm:331 Failed expectation of constructed aggregate (348): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U)
Is it possible to use voice processing with different input/output devices?
If yes, are there any specific configurations required to handle mismatched devices?
How can we resolve channel count mismatch errors during aggregate device construction?
Are there settings or API adjustments to enforce compatibility between input/output devices?
Are there any workarounds or alternative approaches to achieve voice processing functionality with mismatched devices?
For instance, can we force an intermediate channel configuration or downmix input/output formats?
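For reference, a minimal sketch of the kind of setup that reproduces this (device selection omitted; the engine simply uses the current system default input and output, which is where the AirPods-mic / MacBook-speaker mismatch comes from). This is an illustration, not the project's actual code:

import AVFoundation

// Sketch: AVAudioEngine with voice processing enabled on the I/O nodes.
// In the mismatched-device scenario described above, engine.start() (which
// initializes the graph) is where the -10875 / aggregate-device errors surface.
func makeVoiceProcessingEngine() throws -> AVAudioEngine {
    let engine = AVAudioEngine()
    try engine.inputNode.setVoiceProcessingEnabled(true)
    try engine.outputNode.setVoiceProcessingEnabled(true)

    // Route mic input into the mixer so both the input and output sides are in use.
    let format = engine.inputNode.outputFormat(forBus: 0)
    engine.connect(engine.inputNode, to: engine.mainMixerNode, format: format)

    engine.prepare()
    try engine.start()
    return engine
}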
I’m currently developing an iOS metronome app using DispatchSourceTimer as the timer. The interval is set very small, around 50 milliseconds, and I’m using CFAbsoluteTimeGetCurrent to calculate the elapsed time to ensure the beat is played within a ±0.003-second margin.
The problem is that once the app goes to the background, the timing becomes unstable—it slows down noticeably, then recovers after 1–2 seconds.
When coming back to the foreground, it suddenly speeds up, and again, it takes 1–2 seconds to return to normal. It feels like the app is randomly “powering off” and then “overclocking.” It’s super frustrating.
I’ve noticed that some metronome apps in the App Store have similar issues, but there’s one called “Professional Metronome” that’s rock solid with no such problems. What kind of magic are they using? Any experts out there who can help? Thanks in advance!
P.S. I’ve already enabled background audio permissions.
The professional metronome that has no issues: https://link.zhihu.com/?target=https%3A//apps.apple.com/cn/app/pro-metronome-%25E4%25B8%2593%25E4%25B8%259A%25E8%258A%2582%25E6%258B%258D%25E5%2599%25A8/id477960671
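For reference, a rough reconstruction of the timer setup described above (the ~50 ms interval and drift correction against CFAbsoluteTimeGetCurrent come from the description; everything else is an assumption):

import Foundation

// Sketch: a strict DispatchSourceTimer that checks elapsed time on every tick
// and fires the beat once the next scheduled beat time has been reached.
final class MetronomeTimer {
    private let timer = DispatchSource.makeTimerSource(
        flags: [.strict],
        queue: DispatchQueue(label: "metronome", qos: .userInteractive))
    private var nextBeatTime: CFAbsoluteTime = 0

    func start(bpm: Double, onBeat: @escaping () -> Void) {
        let beatInterval = 60.0 / bpm
        nextBeatTime = CFAbsoluteTimeGetCurrent() + beatInterval
        // .strict asks the system not to coalesce or defer the timer, but the
        // process can still be throttled when it moves to the background.
        timer.schedule(deadline: .now(), repeating: .milliseconds(50), leeway: .milliseconds(1))
        timer.setEventHandler { [weak self] in
            guard let self = self else { return }
            if CFAbsoluteTimeGetCurrent() >= self.nextBeatTime {
                onBeat()
                self.nextBeatTime += beatInterval
            }
        }
        timer.resume()
    }

    func stop() { timer.cancel() }
}

Metronome apps that stay solid across foreground/background transitions generally schedule their click audio sample-accurately on the audio hardware clock (e.g. AVAudioPlayerNode.scheduleBuffer(at:)) rather than firing sounds from a CPU timer like this.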
I am using an AVAudioPlayer to play a "tick" sound once per second in a SwiftUI app.
When running the app on an iPhone 16 (18.2.1) the tick sounds increase in volume after a few seconds. This does not happen in the simulator nor on an iPhone SE 2020 (18.1.1).
Please include the line below in follow-up emails for this request.
Case-ID: 11089799
When using AVSpeechUtterance and setting it to speak Mandarin, if Siri is set to Cantonese on iOS 18, the utterance is spoken in Cantonese. There is no such issue on iOS 17 or 16.
1. let utterance = AVSpeechUtterance(string: textView.text)
   let voice = AVSpeechSynthesisVoice(language: "zh-CN")
   utterance.voice = voice
2. In the phone settings, Siri is set to Cantonese.
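As a point of comparison (not a confirmed fix, and the helper below is hypothetical), one workaround is to pin a concrete zh-CN voice by enumerating the installed voices instead of relying only on the language code:

import AVFoundation

// Sketch: pick an installed Mandarin (zh-CN) voice explicitly.
func mandarinVoice() -> AVSpeechSynthesisVoice? {
    let zhCNVoices = AVSpeechSynthesisVoice.speechVoices().filter { $0.language == "zh-CN" }
    // Log what is actually installed; on an affected iOS 18 device this shows
    // which zh-CN voices (if any) are available.
    zhCNVoices.forEach { print($0.identifier, $0.name) }
    return zhCNVoices.first
}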
Please update the Android MusicKit SDK; version 1.1.2 fails to compile. The error message, for SDKUriHandlerActivity: Apps targeting Android 12 and higher are required to specify an explicit value for android:exported when the corres…