I've got a web app built with MusicKit that displays a list of songs.
I have player controls for play, pause, skip next, skip previous, toggle shuffle, and set repeat mode.
All of these work by calling methods on the music instance.
The play button, when nothing is playing and nothing is in the queue, enqueues all the tracks and starts playing, for example with:
await music.setQueue({ songs, startPlaying: true });
I've implemented a progress slider based on feedback from the "playbackProgressDidChange" listener.
Now, how in the world can I set the volume? This seems like it should be simple, but I am at a complete loss here.
The docs say:
"The volume of audio playback, which is set directly on the HTMLMediaElement as the HTMLMediaElement.volume property. This value ranges between 0, which would be muting the audio, and 1, which would be the loudest possible."
Given that all my controls work off the music instance, I don't understand how I can do that.
In this video from WWDC 2022, music web components are touched on briefly. These are also documented very sparsely. The volume docs are here.
For the life of me, I can't even get the volume web component to display in the UI.
It appears that MusicKit Web is hobbled compared to the native implementation, but surely adjusting the volume shouldn't be that hard, right?
I'd appreciate any insight on how to do this, including how to get the web components to work (in a Next.js app).
Thanks.
Audio
Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.
Every time I put my AirPods in and connect them to my phone, Mac, or iPad since the iOS 18.3 update, they disconnect without reason, pause songs I'm in the middle of playing, and only partially reconnect in one pod. It's getting really frustrating.
Topic: Media Technologies, SubTopic: Audio
Hello,
I'm trying to receive Parquet files using the example provided in the documentation. I've done all the required steps but constantly receive error 500 with "Upstream Service Error". Looking at the issues list, it seems this error has existed for months. Is it possible to get it working?
We are developing an Apple Music app for the phone. The web app works fine in Chrome, but when I load it in a WebView on my phone, I can't play the first song.
We suspect that the DRM init, key exchange, and session creation happen inside the music.play() function; when we trigger play, the DRM or session is not yet ready to play a real song, so it throws an error.
So we would like to know:
What is the relative order of the DRM, key, and session steps inside the play() function?
Is there a state-detection function that shows whether the DRM is ready?
Topic: Media Technologies, SubTopic: Audio
Tags: Apple Music API, MusicKit, MusicKit JS, Apple Music Feed
Hello,
I can successfully match music using ShazamKit on Apple platforms with SwiftUI: a simple app that lets the user load an audio file and extracts the corresponding match. However, I am unable to match music using ShazamKit on Android. I'm trying to build the same simple app, but I get MATCH_ATTEMPT_FAILED every time I try to match. I don't know what I'm doing wrong; the ShazamKit part of my Kotlin Android code is in this method (a rough sketch of the iOS code that works for me is at the end of this post):
suspend fun processAudioFileInBackground(
    filePath: String,
    developerTokenProvider: DeveloperTokenProvider
) = withContext(Dispatchers.IO) {
    val bufferSize = 1024 * 1024
    val audioFile = FileInputStream(filePath)
    val byteBuffer = ByteBuffer.allocate(bufferSize)
    byteBuffer.order(ByteOrder.LITTLE_ENDIAN)
    var bytesRead: Int
    while (audioFile.read(byteBuffer.array()).also { bytesRead = it } != -1) {
        val signatureGenerator = (ShazamKit.createSignatureGenerator(AudioSampleRateInHz.SAMPLE_RATE_44100) as ShazamKitResult.Success).data
        signatureGenerator.append(byteBuffer.array(), bytesRead, System.currentTimeMillis())
        val signature = signatureGenerator.generateSignature()
        println("Signature: ${signature.durationInMs}")
        val catalog = ShazamKit.createShazamCatalog(developerTokenProvider, Locale.ENGLISH)
        val session = (ShazamKit.createSession(catalog) as ShazamKitResult.Success).data
        val matchResult = session.match(signature)
        println("MatchResult : $matchResult")
        setMatchResult(matchResult)
        byteBuffer.clear()
    }
    audioFile.close()
}
I noticed that changing the Locale in the catalog creation gives a different result: I get NoMatch without an exception. Can you please help me with this? Do I need to create a custom catalog?
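For comparison, the iOS side that works for me looks roughly like this (a simplified sketch, not my exact code; FileMatcher and matchFile(at:) are just illustrative names, and it assumes the file is in a format ShazamKit accepts):

import AVFAudio
import ShazamKit

// Simplified sketch of the iOS flow that works for me (names are illustrative).
final class FileMatcher: NSObject, SHSessionDelegate {
    private let session = SHSession()

    override init() {
        super.init()
        session.delegate = self
    }

    func matchFile(at url: URL) throws {
        // Decode the whole file into a PCM buffer in its processing format.
        let file = try AVAudioFile(forReading: url)
        guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                            frameCapacity: AVAudioFrameCount(file.length)) else { return }
        try file.read(into: buffer)

        // Build one signature from the decoded audio and match it against the Shazam catalog.
        let generator = SHSignatureGenerator()
        try generator.append(buffer, at: nil)
        session.match(generator.signature())
    }

    func session(_ session: SHSession, didFind match: SHMatch) {
        print("Matched: \(match.mediaItems.first?.title ?? "unknown")")
    }

    func session(_ session: SHSession, didNotFindMatchFor signature: SHSignature, error: Error?) {
        print("No match (\(error?.localizedDescription ?? "no error"))")
    }
}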
This only occurs on iOS 18+. Backtrace attached below.
Exception Codes: 0x0000000000000000, 0x0000000000000000
Termination Reason: SIGNAL 6 Abort trap: 6
Terminating Process: NoteKeys [24384]
Triggered by Thread: 0
Last Exception Backtrace:
0 CoreFoundation 0x1a2d4c7cc __exceptionPreprocess + 164 (NSException.m:249)
1 libobjc.A.dylib 0x1a001f2e4 objc_exception_throw + 88 (objc-exception.mm:356)
2 CoreFoundation 0x1a2e47748 +[NSException raise:format:] + 128 (NSException.m:0)
3 AVFAudio 0x1bd41f4c8 -[AVMIDIPlayer play:] + 300 (AVMIDIPlayer.mm:145)
4 NoteKeys 0x1023c0670 SoundGenerator.playData() + 20 (SoundGenerator.swift:170)
5 NoteKeys 0x1023c0670 EditViewController.playBtnTapped(startIndex:) + 940 (EditViewController.swift:2034)
6 NoteKeys 0x1024497fc specialized Keyboard.playBtnTapped(sender:) + 1904 (Keyboard.swift:1249)
7 NoteKeys 0x10244631c Keyboard.playBtnTapped(sender:) + 4 (<compiler-generated>:0)
8 NoteKeys 0x10244631c @objc Keyboard.playBtnTapped(sender:) + 48
9 UIKitCore 0x1a58739cc -[UIApplication sendAction:to:from:forEvent:] + 100 (UIApplication.m:5816)
10 UIKitCore 0x1a58738a4 -[UIControl sendAction:to:forEvent:] + 112 (UIControl.m:942)
11 UIKitCore 0x1a58736f4 -[UIControl _sendActionsForEvents:withEvent:] + 324 (UIControl.m:1013)
12 UIKitCore 0x1a5fe8d8c -[UIButton _sendActionsForEvents:withEvent:] + 124 (UIButton.m:4198)
13 UIKitCore 0x1a5fea5a0 -[UIControl touchesEnded:withEvent:] + 400 (UIControl.m:692)
14 UIKitCore 0x1a57bb9ac -[UIWindow _sendTouchesForEvent:] + 852 (UIWindow.m:3318)
15 UIKitCore 0x1a57bb3d8 -[UIWindow sendEvent:] + 2964 (UIWindow.m:3641)
16 UIKitCore 0x1a564fb70 -[UIApplication sendEvent:] + 376 (UIApplication.m:12972)
17 UIKitCore 0x1a565009c __dispatchPreprocessedEventFromEventQueue + 1048 (UIEventDispatcher.m:2686)
18 UIKitCore 0x1a5659f3c __processEventQueue + 5696 (UIEventDispatcher.m:3044)
19 UIKitCore 0x1a5552c60 updateCycleEntry + 160 (UIEventDispatcher.m:133)
20 UIKitCore 0x1a55509d8 _UIUpdateSequenceRun + 84 (_UIUpdateSequence.mm:136)
21 UIKitCore 0x1a5550628 schedulerStepScheduledMainSection + 172 (_UIUpdateScheduler.m:1171)
22 UIKitCore 0x1a555159c runloopSourceCallback + 92 (_UIUpdateScheduler.m:1334)
23 CoreFoundation 0x1a2d20328 __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 28 (CFRunLoop.c:1970)
24 CoreFoundation 0x1a2d202bc __CFRunLoopDoSource0 + 176 (CFRunLoop.c:2014)
25 CoreFoundation 0x1a2d1ddc0 __CFRunLoopDoSources0 + 244 (CFRunLoop.c:2051)
26 CoreFoundation 0x1a2d1cfbc __CFRunLoopRun + 840 (CFRunLoop.c:2969)
27 CoreFoundation 0x1a2d1c830 CFRunLoopRunSpecific + 588 (CFRunLoop.c:3434)
28 GraphicsServices 0x1eecfc1c4 GSEventRunModal + 164 (GSEvent.c:2196)
29 UIKitCore 0x1a5882eb0 -[UIApplication _run] + 816 (UIApplication.m:3844)
30 UIKitCore 0x1a59315b4 UIApplicationMain + 340 (UIApplication.m:5496)
31 NoteKeys 0x10254bc10 main + 68 (AppDelegate.swift:15)
32 dyld 0x1c870aec8 start + 2724 (dyldMain.cpp:1334)
Thanks very much for any help: )
Topic: Media Technologies, SubTopic: Audio
Hello!
I'm experiencing an issue with iOS's audio routing system when trying to use Bluetooth headphones for audio output while also recording environmental audio from the built-in microphone.
Desired behavior:
Play audio through Bluetooth headset (AirPods)
Record unprocessed environmental audio from the iPhone's built-in microphone
Actual behavior:
When explicitly selecting the built-in microphone, iOS reports it's using it (in currentRoute.inputs)
However, the actual audio data received is clearly still coming from the AirPods microphone
The audio is heavily processed with voice isolation/noise cancellation, removing environmental sounds
Environment Details
Device: iPhone 12 Pro Max
iOS Version: 18.4.1
Hardware: AirPods
Audio Framework: AVAudioEngine (also tried AudioQueue)
Code Attempted
I've tried multiple approaches to force the correct routing:
func configureAudioSession() {
    let session = AVAudioSession.sharedInstance()

    // Configure to allow Bluetooth output but use built-in mic
    try? session.setCategory(.playAndRecord,
                             options: [.allowBluetoothA2DP, .defaultToSpeaker])
    try? session.setActive(true)

    // Explicitly select built-in microphone
    if let inputs = session.availableInputs,
       let builtInMic = inputs.first(where: { $0.portType == .builtInMic }) {
        try? session.setPreferredInput(builtInMic)
        print("Selected input: \(builtInMic.portName)")
    }

    // Log the current route
    let route = session.currentRoute
    print("Current input: \(route.inputs.first?.portName ?? "None")")

    // Configure audio engine with native format
    let inputNode = audioEngine.inputNode
    let nativeFormat = inputNode.inputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: nativeFormat) { buffer, time in
        // Process audio buffer
        // Despite showing "Built-in Microphone" in route, audio appears to be
        // coming from AirPods with voice isolation applied - welp!
    }

    try? audioEngine.start()
}
I've also tried various combinations of:
Different audio session modes (.default, .measurement, .voiceChat)
Different option combinations (with/without .allowBluetooth, .allowBluetoothA2DP)
Setting session.setPreferredInput() both before and after activation
Diagnostic Observations
When AirPods are connected:
AVAudioSession.currentRoute.inputs correctly shows "Built-in Microphone" after setPreferredInput()
The actual audio data received shows clear signs of AirPods' voice isolation processing
Background/environmental sounds are actively filtered out...
When recording test audio played near the phone (not through the app), the recording is nearly silent; only voice picked up by the headset comes through.
Questions
Is there a workaround to force iOS to actually use the built-in microphone while maintaining Bluetooth output?
Are there any lower-level configurations that might resolve this issue?
Any insights, workarounds, or suggestions would be greatly appreciated. This is blocking a critical feature in my application that requires environmental audio recording while providing audio feedback through headphones 😅
Dear Sirs,
I'd like to add an icon to my audio driver based on AudioDriverKit. This icon should show up to the left of my audio device in the audio devices dialog. For an Audio Server Plugin I managed to do this using the kAudioDevicePropertyIcon property and CFBundleCopyResourceURL(...), but how would you do this with AudioDriverKit? Should I use IOUserAudioCustomProperty or IOUserAudioControl, and how would I refer to the bundle? Is there an example available somewhere?
Thanks and best regards,
Johannes
I'm using a 4 channel USB Audio interface, with 4 microphones, and want to process them through 4 independent effect chains. However the output from AVAudioInputNode is a single 4 channel bus. How can I split this into 4 mono busses?
The following code splits the input into 4 copies, and routes them through the effects, but each bus contains all four channels. How can I remap the channels to remove the unwanted channels from the bus? I tried using channelMap on the mixer node but that had no effect.
I'm currently using this code primarily on iOS, but it should be portable between iOS and macOS. It would be possible to do this through a Matrix Mixer node, but that seems like complete overkill for such a basic operation. I'm already using a Matrix Mixer to combine the inputs, and it's not well supported in AVAudioEngine.
AVAudioInputNode *inputNode = [engine inputNode];
[inputNode setVoiceProcessingEnabled:NO error:nil];

NSMutableArray *micDestinations = [NSMutableArray arrayWithCapacity:trackCount];
for (i = 0; i < trackCount; i++)
{
    fixMicFormat[i] = [AVAudioMixerNode new];
    [engine attachNode:fixMicFormat[i]];
    // And create reverb/compressor and eq the same way...

    [engine connect:reverb[i] to:matrixMixerNode fromBus:0 toBus:i format:nil];
    [engine connect:eq[i] to:reverb[i] fromBus:0 toBus:0 format:nil];
    [engine connect:compressor[i] to:eq[i] fromBus:0 toBus:0 format:nil];
    [engine connect:fixMicFormat[i] to:compressor[i] fromBus:0 toBus:0 format:nil];

    [micDestinations addObject:[[AVAudioConnectionPoint alloc] initWithNode:fixMicFormat[i] bus:0]];
}

AVAudioFormat *inputFormat = [inputNode outputFormatForBus:1];
[engine connect:inputNode toConnectionPoints:micDestinations fromBus:1 format:inputFormat];
My audio app shows a control bar at the bottom of the window. The controls show nicely, but there is a black "slab" appearing behind the inline controls, the same size as the playerView. Setting the player view background color does nothing:
playerView.wantsLayer = true
playerView.layer?.backgroundColor = NSColor.clear.cgColor
How can I clear the background?
If I use .floating controlsStyle, I don't get the background "slab".
Topic: Media Technologies, SubTopic: Audio
I'm developing an iOS app that requires continuous audio recording.
Currently, when a phone call comes in, the AVAudioSession is interrupted and recording stops completely during the ringing phase.
While I understand recording should stop if the call is answered, my app needs to continue recording while the phone is merely ringing.
I've observed that Apple's Voice Memos app maintains recording during incoming call rings. This indicates the hardware and iOS are capable of supporting this functionality.
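For context, this is roughly how I observe interruptions today (a simplified sketch, not my exact code); the .began notification fires as soon as the ring starts, after recording has already been stopped:

import Foundation
import AVFAudio

// Keep this token alive for as long as the recorder exists.
let interruptionObserver = NotificationCenter.default.addObserver(
    forName: AVAudioSession.interruptionNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main
) { note in
    guard let info = note.userInfo,
          let rawType = info[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: rawType) else { return }

    switch type {
    case .began:
        // Fires as soon as the incoming call starts ringing; the system has already stopped my recording.
        print("Interruption began")
    case .ended:
        let rawOptions = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
        if AVAudioSession.InterruptionOptions(rawValue: rawOptions).contains(.shouldResume) {
            // Restart the audio engine / recorder here.
            print("Interruption ended, resuming")
        }
    @unknown default:
        break
    }
}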
Request
Please advise on any available AVAudioSession configurations or APIs that would allow my app to:
Continue recording during an incoming call ring
Only stop recording if/when the call is actually answered
Impact
This interruption significantly impacts the user experience and core functionality of my app. Workarounds like asking users to enable airplane mode are impractical and create a poor user experience.
Questions
Is there an approved way to maintain microphone access during call rings?
If not currently possible, could this capability be considered for addition to a future iOS SDK?
Are there any interim solutions or best practices Apple recommends for this use case?
Thank you for your help.
SUPPORT INFORMATION
Did someone from Apple ask you to submit a code-level support request?
No
Do you have a focused test project that demonstrates your issue?
Yes, I have a focused test project to submit with my request
What code level support issue are you having?
Problems with an Apple framework API in my app
I've filed this as FB21446798 but figured I'd post here too.
In the first build of macOS 26.3, playback via ApplicationMusicPlayer is completely broken. When starting playback of anything at all, the console shows the following error:
applicationController: xpc service connection interrupted
Failed to obtain remoteObject: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service created from an endpoint was invalidated from this process." UserInfo={NSDebugDescription=The connection to service created from an endpoint was invalidated from this process.}
Failed to prepareToPlay with error: Error Domain=MPMusicPlayerControllerErrorDomain Code=10 "(null)" UserInfo={NSUnderlyingError=0xc92910ff0 {Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service created from an endpoint was invalidated from this process." UserInfo={NSDebugDescription=The connection to service created from an endpoint was invalidated from this process.}}}
In addition, several crash logs for RemotePlayerService are generated, showing my app as the parent process.
This issue is 100% repeatable. No matter how I load the queue, whether it's catalog or library content, every variation I can think of fails like this.
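For reference, the simplest repro I have is roughly this (a trimmed sketch; it assumes a Song value that was already fetched via MusicKit):

import MusicKit

// Trimmed sketch: queue a single already-fetched Song and start playback.
func play(_ song: Song) async {
    let player = ApplicationMusicPlayer.shared
    player.queue = ApplicationMusicPlayer.Queue(for: [song])
    do {
        // On the first macOS 26.3 build this fails with MPMusicPlayerControllerErrorDomain Code=10.
        try await player.play()
    } catch {
        print("Playback failed: \(error)")
    }
}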
I really hope this can be fixed before 26.3 comes out, otherwise my app will be totally unusable. 😅
I have an iPadOS M-processor application with two different running configurations.
In config1, the shared AVAudioSession is configured for .videoChat mode using the built-in microphone. The input/output nodes of the AVAudioEngine are configured with voice processing enabled. The built-in mic is formatted for 1 channel at 48 kHz.
In config2, the shared AVAudioSession is configured for .measurement mode using an external USB microphone. The input/output nodes of the AVAudioEngine are configured with voice processing disabled. The external mic is formatted for 2 channels at 44.1 kHz.
I've written a configuration manager designed to safely switch between these two configurations. It works by stopping AVAudioEngine and detaching all but the input and output nodes, updating the shared audio session for the desired mic and sample rate, and setting voice processing to either true or false as required by the configuration. Finally, the new audio graph is constructed by attaching the appropriate nodes, connecting them, and restarting AVAudioEngine.
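In code, the switch looks roughly like this (a heavily condensed sketch, not my exact code; Config is just a placeholder for the per-configuration values):

import AVFAudio

// Placeholder type holding the values each configuration needs.
struct Config {
    let mode: AVAudioSession.Mode          // .videoChat or .measurement
    let portType: AVAudioSession.Port      // .builtInMic or .usbAudio
    let sampleRate: Double                 // 48_000 or 44_100
    let voiceProcessing: Bool
}

func apply(_ config: Config, to engine: AVAudioEngine) throws {
    engine.stop()
    // ... detach everything except inputNode/outputNode here ...

    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: config.mode, options: [])
    try session.setPreferredSampleRate(config.sampleRate)
    if let mic = session.availableInputs?.first(where: { $0.portType == config.portType }) {
        try session.setPreferredInput(mic)
    }
    try session.setActive(true)

    // This is the call whose effect appears to land asynchronously.
    try engine.inputNode.setVoiceProcessingEnabled(config.voiceProcessing)

    // ... re-attach and re-connect the rest of the graph here ...
    try engine.start()   // Crashes intermittently: the input format's sample rate hasn't switched yet.
}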
I'm experiencing what I believe is a race condition between switching voice processing on or off and then trying to rebuild and start the new audio graph. Even though notifications, which are dumped to the console, indicate that my requested input and sample-rate settings are in place, I crash when trying to start the audio engine because the sample rate is wrong. Investigating further, it looks like the switch from remote I/O to voice-processing I/O, or vice versa, has not yet actually completed. I introduced a 100 ms delay and that seems to help, but it is obviously not a reliable way to build software that must work consistently.
How can I make sure that what are apparently asynchronous configuration changes to the shared audio session and the input/output nodes have completed before I go on?
I tried using route change notifications from the shared AVAudioSession but these lie. They say my preferred mic input and sample-rate setting is in place but when I dump the AVAudioEngine graph to the debugger console, I still see the wrong sample rate assigned to the input/output nodes. Also these are the wrong AU nodes. That is, VPIO is still in place when RIO should be, or vice-versa.
How can I make the switch reliable without arbitrary time delays?
Is my configuration manager approach appropriate (question for Apple engineers)?
Hi everyone 👋
I’m building an iOS app in Swift where I want to do the following:
Record the user’s voice
Transcribe the spoken sentence (speech-to-text)
Auto-detect the spoken language
Translate it to another language selected by the user (e.g., English → Spanish or Hindi → English)
Speak back (text-to-speech) the translated text on the same device (a rough sketch of this step is below)
Is it possible to record via the phone mic and play the translated voice back through the headphones' audio?
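For the last step I'm assuming AVSpeechSynthesizer is the right tool; here's a minimal sketch of what I have in mind (the language code is just an example):

import AVFAudio

// Minimal sketch: speak the translated text with the system TTS voice for the target language.
let synthesizer = AVSpeechSynthesizer()

func speak(_ translatedText: String, languageCode: String) {
    let utterance = AVSpeechUtterance(string: translatedText)
    utterance.voice = AVSpeechSynthesisVoice(language: languageCode)   // e.g. "es-ES"
    synthesizer.speak(utterance)
}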
I'm working in Swift/SwiftUI, running Xcode 16.3 on macOS 15.4, and I've seen this when running in the iOS Simulator and in a macOS app run from Xcode. I've also seen this behaviour with 3 different audio files.
Nothing in the documentation says that the speechRecognitionMetadata property on an SFSpeechRecognitionResult will be nil until isFinal, but that's the behaviour I'm seeing.
I've stripped my class down to the following:
private var isAuthed = false

// I call this in a .task {} in my SwiftUI View
public func requestSpeechRecognizerPermission() {
    SFSpeechRecognizer.requestAuthorization { authStatus in
        Task {
            self.isAuthed = authStatus == .authorized
        }
    }
}

public func transcribe(from url: URL) {
    guard isAuthed else { return }

    let locale = Locale(identifier: "en-US")
    let recognizer = SFSpeechRecognizer(locale: locale)
    let recognitionRequest = SFSpeechURLRecognitionRequest(url: url)

    // the behaviour occurs whether I set this to true or not, I recently set
    // it to true to see if it made a difference
    recognizer?.supportsOnDeviceRecognition = true
    recognitionRequest.shouldReportPartialResults = true
    recognitionRequest.addsPunctuation = true

    recognizer?.recognitionTask(with: recognitionRequest) { (result, error) in
        guard result != nil else { return }

        if result!.isFinal {
            // speechRecognitionMetadata is not nil
        } else {
            // speechRecognitionMetadata is nil
        }
    }
}
}
Further, and this isn't documented either, the SFTranscriptionSegment values don't have correct timestamp and duration values until isFinal. The values aren't all zero, but they don't align with the timing in the audio and they change to accurate values when isFinal is true.
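This is roughly how I'm reading the segments (a simplified sketch; logSegments(of:) is just an illustrative helper), and the printed values only become accurate once isFinal is true:

import Speech

// Simplified helper: dump each segment's text and timing from a (partial or final) result.
func logSegments(of result: SFSpeechRecognitionResult) {
    for segment in result.bestTranscription.segments {
        print("\(segment.substring): timestamp=\(segment.timestamp), duration=\(segment.duration)")
    }
}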
The transcription otherwise "works", in that I get transcription text before isFinal and if I wait for isFinal the segments are correct and speechRecognitionMetadata is filled with values.
The context here is that I'm trying to generate a transcription whose spoken sections I can highlight as the audio plays, and I'm thinking I must just be trying to use the Speech framework in a way it doesn't support. I got my concept working by pre-processing the audio (i.e. running it through until isFinal and saving the results I need to JSON), but doing even a rougher version of it "on the fly", which requires segments to have the right timestamp/duration before isFinal, is perhaps impossible?
I have a memory leak when using AVAudioPlayer. I managed to narrow the issue down to a very simple app, whose code I paste in at the end.
The memory leak starts immediately when I start playing the sound, but only in the Simulator. On a real iPhone there is no memory leak.
The memory leak on the Simulator looks like this:
import SwiftUI
import AVFoundation

struct ContentView_Audio: View {
    var sound: AVAudioPlayer?

    init() {
        guard let path = Bundle.main.path(forResource: "cd201", ofType: "mp3") else { return }
        let url = URL(fileURLWithPath: path)

        do {
            try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default, options: [.mixWithOthers])
        } catch {
            return
        }

        do {
            try AVAudioSession.sharedInstance().setActive(true)
        } catch {
            return
        }

        do {
            sound = try AVAudioPlayer(contentsOf: url)
        } catch {
            return
        }
    }

    var body: some View {
        HStack {
            Button {
                playSound()
            } label: {
                ZStack {
                    Circle()
                        .fill(.mint.opacity(0.3))
                        .frame(width: 44, height: 44)
                        .shadow(radius: 8)

                    Image(systemName: "play.fill")
                        .resizable()
                        .frame(width: 20, height: 20)
                }
            }
            .padding()

            Button {
                stopSound()
            } label: {
                ZStack {
                    Circle()
                        .fill(.mint.opacity(0.3))
                        .frame(width: 44, height: 44)
                        .shadow(radius: 8)

                    Image(systemName: "stop.fill")
                        .resizable()
                        .frame(width: 20, height: 20)
                }
            }
            .padding()
        }
    }

    private func playSound() {
        guard sound != nil else { return }
        sound?.volume = 1
        // sound?.numberOfLoops = -1
        sound?.play()
    }

    func stopSound() {
        sound?.stop()
    }
}
I am working on an app that needs to play an audio alert while in the background.
During debug testing, the feature works fine,
but when I test a standalone build without the debugger attached, it does not work; the sound only plays once I return to the app.
Is there any way to trace this issue?
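For reference, this is roughly how I set up the audio session (a minimal sketch, not my exact code; it also assumes the "audio" background mode is enabled under Signing & Capabilities):

import AVFAudio

// Minimal sketch (not verbatim): the session uses the .playback category,
// and the app has the "audio" background mode, so sound should keep playing in the background.
func configureBackgroundAudio() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, mode: .default)
    try session.setActive(true)
}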
The sysEx struct in the MIDIUniversalMessage struct has a channel member but the System Exclusive (7-Bit) Message doesn't have a channel field.
The System Exclusive (7-Bit) Message has a # of bytes field but the sysEx struct doesn't have a nrOfBytes, byteCount or bytesUsed member.
It looks like the channel member of the sysEx struct contains the number of used bytes.
Is this a mistake in the header or did I misunderstand something?
I created a virtual audio device to capture system audio with a sample rate of 44.1 kHz. After capturing the audio, I forward it to the hardware sound card using AVAudioEngine, also with a sample rate of 44.1 kHz. However, due to the clock sources being unsynchronized, problems occur after a period of playback. How can I retrieve the clock source of the hardware device and set it for the virtual device?
I'm using the new SpeechAnalyzer framework to detect certain commands and want to improve accuracy by giving context. It seems like AnalysisContext is the solution for this, but I couldn't find any usage example, so I want to make sure whether I'm doing it right.
let context = AnalysisContext()

context.contextualStrings = [
    AnalysisContext.ContextualStringsTag("commands"): [
        "set speed level",
        "set jump level",
        "increase speed",
        "decrease speed",
        ...
    ],
    AnalysisContext.ContextualStringsTag("vocabulary"): [
        "speed", "jump", ...
    ]
]

try await analyzer.setContext(context)
With this implementation, it still gives outputs like "Set some speed level", "It's speed level", etc.
Also, is it possible to make it expect a number after those commands, in order to eliminate results like "set some speed level to" (instead of "two")?