Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Posts under the Audio subtopic

Post

Replies

Boosts

Views

Activity

Issue using Siphon Tap on input AudioQueue
Hi all, I've developed an audio DSP application in C++ using AudioToolbox and CoreAudio on macOS 14.4.1 with Xcode 15. I use an AudioQueue for input and another for output, and this works great. I'm now adding real-time audio analysis, e.g. spectral analysis. I want this to run independently of my audio processing so it cannot interfere with audio playback. Taps on AudioQueues seem to be a good way of doing this. Since the analytics won't modify the audio data, I am using a siphon tap by setting the AudioQueueProcessingTapFlags to kAudioQueueProcessingTap_PreEffects | kAudioQueueProcessingTap_Siphon. This works fine on my output queue. However, on my input queue the tap callback is called once and then an EXC_BAD_ACCESS occurs (screenshot below). NB: I believe that a callback should only call AudioQueueProcessingTapGetSourceAudio when not using a siphon, so I don't call it.

Relevant code (the enclosing function's signature is truncated in the original post):

```
AudioQueueProcessingTapCallback tap_callback) {
    // Makes an audio tap for a queue
    void * tap_data_ptr = NULL;
    AudioQueueProcessingTapFlags tap_flags =
        kAudioQueueProcessingTap_PostEffects | kAudioQueueProcessingTap_Siphon;
    uint32_t max_frames = 0;
    AudioStreamBasicDescription asbd;
    AudioQueueProcessingTapRef tap_ref;
    OSStatus status = AudioQueueProcessingTapNew(queue_ref, tap_callback, tap_data_ptr,
                                                 tap_flags, &max_frames, &asbd, &tap_ref);
    if (status != noErr)
        printf("Error while making Tap\n");
    else
        printf("Successfully made tap\n");
}

void tapper(void * tap_data,
            AudioQueueProcessingTapRef tap_ref,
            uint32_t number_of_frames_in,
            AudioTimeStamp * ts_ptr,
            AudioQueueProcessingTapFlags * tap_flags_ptr,
            uint32_t * number_of_frames_out_ptr,
            AudioBufferList * buf_list) {
    // Callback function for audio queue tap
    printf("Tap callback");
}
```

![Xcode exception stack (Screenshot 2025-06-14 at 1.29.14 PM.png)](https://developer.apple.com/forums/content/attachment/27479e8d-a118-459b-aa2d-7e30528910e3)

What have I missed? Appreciate any help you learned folks may be able to provide. Best, Geoff.
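A minimal sketch of the tap callback worth testing against this crash, offered as an assumption rather than a confirmed diagnosis: the AudioToolbox header asks every tap callback to pull frames via AudioQueueProcessingTapGetSourceAudio, and it is plausible that this applies to siphon taps too. The names other than the AudioToolbox APIs are illustrative.

```
import AudioToolbox

// Sketch: a tap callback that pulls the source audio before analysing it.
// Whether a siphon tap may skip this call is exactly the open question;
// this version calls it.
let tapCallback: AudioQueueProcessingTapCallback = { _, tapRef, inFrames, timeStamp, flags, outFrames, bufferList in
    let status = AudioQueueProcessingTapGetSourceAudio(
        tapRef, inFrames, timeStamp, flags, outFrames, bufferList)
    guard status == noErr else {
        print("GetSourceAudio failed: \(status)")
        return
    }
    // bufferList now holds outFrames.pointee frames; hand them to the
    // analysis thread (ring buffer, FFT, etc.) without modifying them.
}
```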
1
0
98
Jun ’25
How can third-party iOS apps obtain real-time waveform / spectrogram data for Apple Music tracks (similar to djay & other DJ apps)?
Hi everyone, I'm working on an iOS MusicKit app that overlays a metronome on top of Apple Music playback. To line the clicks up perfectly I'd like access to low-level audio analysis data, ideally a waveform/spectrogram or beat grid, while the track is playing. I've noticed that several approved DJ apps (e.g. djay, Serato, rekordbox) can already:

• Display detailed scrolling waveforms of Apple Music songs
• Scratch, loop, or time-stretch those tracks in real time

That implies they receive decoded PCM frames, or at least high-resolution analysis data, from Apple Music under a special entitlement. My questions:

1. Does MusicKit (or any public framework) expose real-time audio buffers, FFT bins, or beat markers for streaming Apple Music content?
2. If not, is there an Apple program or entitlement that developers can apply for, similar to the "DJ with Apple Music" initiative, to gain that deeper access?
3. Where can I find official documentation or a point of contact for this kind of request?

I've searched the docs and forums but only see standard MusicKit playback APIs, which don't appear to expose raw audio for DRM-protected songs. Any guidance, links, or insider tips on the proper application process would be hugely appreciated! Thanks in advance.
1
1
626
3w
How to capture audio from the stream that's playing on the speakers?
Good day, ladies and gents. I have an application that reads audio from the microphone. I'd like it to also be able to read from the Mac's audio output stream. (A bonus would be if it could detect when the Mac is playing music.) I'd eventually be able to figure it out reading docs, but if someone can give a hint, I'd be very grateful, and would owe you the libation of your choice. Here's the code used to set up the AudioUnit:

```
- (NSString *)configureAU {
    AudioComponent component = NULL;
    AudioComponentDescription description;
    OSStatus err = noErr;
    UInt32 param;
    AURenderCallbackStruct callback;

    if( audioUnit ) {
        AudioComponentInstanceDispose( audioUnit ); // was CloseComponent
        audioUnit = NULL;
    }

    // Open the AudioOutputUnit
    description.componentType = kAudioUnitType_Output;
    description.componentSubType = kAudioUnitSubType_HALOutput;
    description.componentManufacturer = kAudioUnitManufacturer_Apple;
    description.componentFlags = 0;
    description.componentFlagsMask = 0;
    if( component = AudioComponentFindNext( NULL, &description ) ) {
        err = AudioComponentInstanceNew( component, &audioUnit );
        if( err != noErr ) {
            audioUnit = NULL;
            return [NSString stringWithFormat: @"Couldn't open AudioUnit component (ID=%d)", err];
        }
    }

    // Configure the AudioOutputUnit:
    // You must enable the Audio Unit (AUHAL) for input and output for the same device.
    // When using AudioUnitSetProperty the 4th parameter refers to an AudioUnitElement.
    // When using an AudioOutputUnit for input the element will be '1' and the output element will be '0'.
    param = 1; // Enable input on the AUHAL
    err = AudioUnitSetProperty( audioUnit, kAudioOutputUnitProperty_EnableIO,
                                kAudioUnitScope_Input, 1, &param, sizeof(UInt32) );
    chkerr("Couldn't set first EnableIO prop (enable input) (ID=%d)");

    param = 0; // Disable output on the AUHAL
    err = AudioUnitSetProperty( audioUnit, kAudioOutputUnitProperty_EnableIO,
                                kAudioUnitScope_Output, 0, &param, sizeof(UInt32) );
    chkerr("Couldn't set second EnableIO property on the audio unit (disable output) (ID=%d)");

    // Select the default input device
    param = sizeof(AudioDeviceID);
    AudioObjectPropertyAddress OutputAddr = { kAudioHardwarePropertyDefaultInputDevice,
                                              kAudioObjectPropertyScopeGlobal,
                                              kAudioObjectPropertyElementMaster };
    err = AudioObjectGetPropertyData( kAudioObjectSystemObject, &OutputAddr, 0, NULL,
                                      &param, &inputDeviceID );
    chkerr("Couldn't get default input device (ID=%d)");

    // Set the current device to the default input unit
    err = AudioUnitSetProperty( audioUnit, kAudioOutputUnitProperty_CurrentDevice,
                                kAudioUnitScope_Global, 0, &inputDeviceID, sizeof(AudioDeviceID) );
    chkerr("Failed to hook up input device to our AudioUnit (ID=%d)");

    // Set up render callback, to be called when the AUHAL has input data
    callback.inputProc = AudioInputProc;
    callback.inputProcRefCon = self;
    err = AudioUnitSetProperty( audioUnit, kAudioOutputUnitProperty_SetInputCallback,
                                kAudioUnitScope_Global, 0, &callback, sizeof(AURenderCallbackStruct) );
    chkerr("Could not install render callback on our AudioUnit (ID=%d)");

    // Get hardware device format
    param = sizeof(AudioStreamBasicDescription);
    err = AudioUnitGetProperty( audioUnit, kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Input, 1, &deviceFormat, &param );
    chkerr("Could not read the hardware stream format (ID=%d)");

    audioChannels = MAX( deviceFormat.mChannelsPerFrame, 2 );

    // Twiddle the format to our liking
    actualOutputFormat.mChannelsPerFrame = audioChannels;
    actualOutputFormat.mSampleRate = deviceFormat.mSampleRate;
    actualOutputFormat.mFormatID = kAudioFormatLinearPCM;
    actualOutputFormat.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked
                                    | kAudioFormatFlagIsNonInterleaved;
    if( actualOutputFormat.mFormatID == kAudioFormatLinearPCM && audioChannels == 1 )
        actualOutputFormat.mFormatFlags &= ~kLinearPCMFormatFlagIsNonInterleaved;
#if __BIG_ENDIAN__
    actualOutputFormat.mFormatFlags |= kAudioFormatFlagIsBigEndian;
#endif
    actualOutputFormat.mBitsPerChannel = sizeof(Float32) * 8;
    actualOutputFormat.mBytesPerFrame = actualOutputFormat.mBitsPerChannel / 8;
    actualOutputFormat.mFramesPerPacket = 1;
    actualOutputFormat.mBytesPerPacket = actualOutputFormat.mBytesPerFrame;

    // Set the AudioOutputUnit output data format
    err = AudioUnitSetProperty( audioUnit, kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Output, 1, &actualOutputFormat,
                                sizeof(AudioStreamBasicDescription) );
    chkerr("Could not change the stream format of the output device (ID=%d)");

    // Get the number of frames in the IO buffer(s)
    param = sizeof(UInt32);
    err = AudioUnitGetProperty( audioUnit, kAudioDevicePropertyBufferFrameSize,
                                kAudioUnitScope_Global, 0, &audioSamples, &param );
    chkerr("Could not determine audio sample size (ID=%d)");

    // Initialize the AU
    err = AudioUnitInitialize( audioUnit );
    chkerr("Could not initialize the AudioUnit (ID=%d)");

    // Allocate our audio buffers
    audioBuffer = [self allocateAudioBufferListWithNumChannels: actualOutputFormat.mChannelsPerFrame
                                                          size: audioSamples * actualOutputFormat.mBytesPerFrame];
    if( audioBuffer == NULL ) {
        [self cleanUp];
        return [NSString stringWithFormat: @"Could not allocate buffers for recording (ID=%d)", err];
    }
    return nil;
}
```

(...again, it would be nice to know if audio output is active and thereby choose the clean output stream over the noisy mic, but that would be a different chunk of code, and my main question may just be a quick edit to this chunk.) Thanks for your attention! ==Dave

[p.s. if I get more than one useful answer, can I "Accept" more than one, to spread the credit around?] {pps: of course, the code lines up prettier in a monospaced font!}
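One possible direction, sketched under assumptions rather than taken from the thread: on macOS 13 and later, ScreenCaptureKit can deliver the system audio mix, which sidesteps AUHAL loopback workarounds entirely. The ScreenCaptureKit calls below are real API; the SystemAudioGrabber class itself is hypothetical.

```
import CoreMedia
import ScreenCaptureKit

// Hypothetical grabber: captures the system audio mix via ScreenCaptureKit.
final class SystemAudioGrabber: NSObject, SCStreamOutput {
    private var stream: SCStream?

    func start() async throws {
        let content = try await SCShareableContent.current
        guard let display = content.displays.first else { return }
        let filter = SCContentFilter(display: display, excludingWindows: [])
        let config = SCStreamConfiguration()
        config.capturesAudio = true        // macOS 13+
        let stream = SCStream(filter: filter, configuration: config, delegate: nil)
        try stream.addStreamOutput(self, type: .audio, sampleHandlerQueue: .main)
        try await stream.startCapture()
        self.stream = stream
    }

    func stream(_ stream: SCStream, didOutputSampleBuffer sampleBuffer: CMSampleBuffer,
                of type: SCStreamOutputType) {
        guard type == .audio else { return }
        // Pull PCM out of sampleBuffer here (CMSampleBufferGetAudioBufferList...).
        // Non-silent buffers arriving is also a usable "the Mac is playing audio" signal.
    }
}
```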
1
0
167
Jun ’25
Built-in instrument sounds on Apple devices
Does anyone know how to play a specific instrument sound when the user taps a button or similar control on screen on an iPhone or iPad? I'm in the middle of building a music-learning app, and I'd like to assign single notes or chords to button-like frames on an on-screen keyboard or fretboard so they sound when tapped. Can this be done in SwiftUI code alone? I recall there used to be a General MIDI Level 1 instrument-playback capability, but current OS versions don't seem to offer the same functionality. I'd appreciate any advice you can share.
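A minimal sketch of one way this is commonly done (an assumption, not from the post): attach an AVAudioUnitSampler to an AVAudioEngine and trigger MIDI note numbers from a SwiftUI button action. The NotePlayer type is illustrative.

```
import AVFoundation

// Illustrative helper: plays MIDI notes through AVAudioUnitSampler.
final class NotePlayer {
    private let engine = AVAudioEngine()
    private let sampler = AVAudioUnitSampler()

    init() throws {
        engine.attach(sampler)
        engine.connect(sampler, to: engine.mainMixerNode, format: nil)
        try engine.start()
    }

    // Start a note (60 = middle C); call several times for a chord.
    func noteOn(_ note: UInt8, velocity: UInt8 = 100) {
        sampler.startNote(note, withVelocity: velocity, onChannel: 0)
    }

    func noteOff(_ note: UInt8) {
        sampler.stopNote(note, onChannel: 0)
    }
}
```

A SwiftUI Button's action can then call noteOn/noteOff. With no sound bank loaded the sampler produces a basic default voice; loading a SoundFont via loadSoundBankInstrument(at:program:bankMSB:bankLSB:) is what selects a real instrument timbre.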
1
0
381
Mar ’25
Is the Call Translation API available for VoIP?
I might have misunderstood the docs, but is Call Translation going to be available for VoIP applications? For example, in an already connected VoIP call, could Call Translation be enabled on an iOS 26, Apple Intelligence-supported device? I have tried it myself and it doesn't appear to support VoIP, but I would love to confirm this. Reference: https://developer.apple.com/documentation/callkit/cxsettranslatingcallaction/
1
0
74
Jun ’25
Essentials of macOS to read and write mp3 and mp4 audio files
Hi, On macOS I used to open MP3 and MP4 files with ExtAudioFile. It stopped working a few years ago. So I decided to try a different macOS API, using the AudioFileID of the AudioToolbox framework, and wrote a test: https://gist.github.com/joelkraehemann/7f5b241b52ca38c3a765c138fb647588 It fails right here: AudioFileOpenWithCallbacks(), returning OSStatus error 1954115647, which means kAudioFileUnsupportedFileTypeError. The filename was set to an MP4 file: ~/Music/test.mp4 How can I fix this? regards, Joël
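A small sketch of one thing worth ruling out, as an assumption rather than a confirmed diagnosis: with callback-based opening, AudioToolbox cannot infer the container from a filename, so the inFileTypeHint argument matters. Opening by URL with an explicit hint isolates whether the file itself is the problem; kAudioFileMPEG4Type is an assumption about the container.

```
import AudioToolbox
import Foundation

// Try opening the same file by URL with an explicit type hint.
let path = NSString(string: "~/Music/test.mp4").expandingTildeInPath
let url = URL(fileURLWithPath: path) as CFURL
var fileID: AudioFileID?
let status = AudioFileOpenURL(url, .readPermission, kAudioFileMPEG4Type, &fileID)
print(status == noErr ? "opened OK" : "failed with OSStatus \(status)")
if let fileID { AudioFileClose(fileID) }
```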
1
0
134
Jun ’25
Feature Request: Long-Lived Access to Personal Apple Music Data
Use Case Summary

I'm developing a personal portfolio website (using Nuxt) and want to display information from my own Apple Music library: showcasing personal playlists, recently played tracks, or a read-only "now playing" widget. This is purely for personal use on my website and doesn't require other users to log in. With Spotify's API, implementing this was straightforward thanks to automatic token refresh. I want a similarly seamless integration with Apple Music.

Challenge with MusicKit and Music User Tokens

Apple Music API requirements: Apple's Music API requires a valid Music User Token (MUT) for requests involving personal library data. Beyond the Apple Developer Token, you must obtain a user-specific token via MusicKit authentication to access your own library playlists, play history, or current playback status.

Token expiration and manual renewal: Music User Tokens expire after approximately 6 months, without any mechanism to automatically refresh or renew them, unlike typical OAuth flows that provide refresh tokens. Apple's guidance suggests the device (e.g., iPhone) is responsible for obtaining new user tokens when old ones expire. This works for interactive apps on Apple devices but fails in server-side or long-lived web contexts like a personal website widget.

Impact on personal projects: Displaying Apple Music data on a public-facing site becomes difficult. I would need to periodically re-authenticate through the MusicKit JS flow every few months just to keep a widget alive. Embedding credentials in a public site is insecure, and manual token refreshing is cumbersome and easy to forget.

Comparison to Spotify's Token Model

Spotify's API offers a developer-friendly authentication model. Their OAuth flow provides a Refresh Token that applications can use to obtain new access tokens automatically, without requiring user re-authorization. This means a personal app can maintain continuous access to a user's Spotify data for extended periods until access is revoked. When building a similar feature with Spotify, this automatic token renewal was crucial: I could safely store the refresh token on my server and have my app periodically update the access token. Many developers have created public-facing widgets showing currently playing tracks on blogs or GitHub profiles using this model. Unfortunately, Apple Music's API lacks an equivalent capability, putting it at a disadvantage for personal projects.

Proposed Solutions

I request Apple's consideration for one of these enhancements:

1. Provide a mechanism to refresh or extend a Music User Token programmatically for server-side applications. This could be an OAuth-style refresh token issued alongside the MUT, or a dedicated endpoint to exchange an expired MUT for a new one. This would enable renewal without a full user re-auth/login each time.
2. Allow developers to access their own Apple Music library data with just the long-lived Developer Token. Apple could permit GET requests to personal library endpoints using the Developer Token alone, or a special token tied to the developer's Apple ID. This access would be read-only, with no ability to modify the library, purely for retrieving data. It could be an opt-in feature in the Apple Developer account settings.

Either solution would significantly improve the developer experience for the Apple Music API in personal projects.

Security and Privacy Considerations

This request is not about accessing others' data or creating privacy loopholes; it's about empowering an Apple Music subscriber to access their own information more conveniently. The proposed options respect privacy principles:

- The data accessed is only what the user already has access to: their own playlists, library items, or playback status.
- An automatic token refresh can be designed securely (revocable tokens bound to a single account with no increase in permissions).
- Read-only developer token access could be restricted to non-sensitive data and require explicit opt-in.

Conclusion

I request an improvement to Apple Music's developer experience through either (1) an automatic Music User Token refresh mechanism, or (2) a provision for read-only personal library access using a Developer Token. This would bring Apple Music integration capabilities closer to parity with services like Spotify for personal projects. I ask Apple's Developer Relations and the Apple Music API team to consider this feature request. If there are existing best practices or workarounds with current APIs, I would appreciate guidance. I invite feedback from Apple or other developers: are there known patterns for maintaining an Apple Music user token for server-side applications, or any plans to support non-interactive use cases? Any advice is welcome. Thank you for your consideration. I look forward to integrating Apple Music into my personal site as smoothly as with other services, and believe many developers would benefit from this added flexibility.

Sources:

- User Authentication for MusicKit: requirements for Music User Tokens
- StackOverflow, "Do Apple Music User Tokens expire?": confirmation of the 6-month expiration
- MetaBrainz GSoC blog: documentation of MusicKit authentication limitations
- Apple Developer Forums: information on token renewal behavior
- Spotify for Developers: documentation of the refresh-token mechanism
1
0
244
Mar ’25
When to set AVAudioSession's preferredInput?
I want the audio session to always use the built-in microphone. However, I'm using the setPreferredInput() method as in this example:

```
private func enableBuiltInMic() {
    // Get the shared audio session.
    let session = AVAudioSession.sharedInstance()

    // Find the built-in microphone input.
    guard let availableInputs = session.availableInputs,
          let builtInMicInput = availableInputs.first(where: { $0.portType == .builtInMic }) else {
        print("The device must have a built-in microphone.")
        return
    }

    // Make the built-in microphone input the preferred input.
    do {
        try session.setPreferredInput(builtInMicInput)
    } catch {
        print("Unable to set the built-in mic as the preferred input.")
    }
}
```

and calling that function once in the initializer, yet the audio session still switches to the external microphone once one is plugged in. The session's preferredInput is nil again at that point, even though the built-in microphone is still listed in availableInputs. So, why is the preferredInput suddenly reset? And when would be the appropriate time to set the preferredInput again? Observing the session's availableInputs did not work, and setting the preferredInput again in the routeChangeNotification handler seems a bad choice, as it's already a bit too late by then.
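For what it's worth, a sketch of the pattern Apple's stereo-capture sample code uses, under the assumption it applies here: reapply the preferred input from the route-change notification when a device is added or removed, accepting the brief switch-over.

```
import AVFoundation

// Reapply the preferred input when the route changes; at that point the
// session has rebuilt its route, so setPreferredInput sticks again.
// `enableBuiltInMic()` is the function from the post above.
NotificationCenter.default.addObserver(
    forName: AVAudioSession.routeChangeNotification,
    object: nil, queue: .main
) { note in
    guard let raw = note.userInfo?[AVAudioSessionRouteChangeReasonKey] as? UInt,
          let reason = AVAudioSession.RouteChangeReason(rawValue: raw),
          reason == .newDeviceAvailable || reason == .oldDeviceUnavailable
    else { return }
    enableBuiltInMic()
}
```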
1
0
830
Oct ’25
iOS 26.0 (23A5276f) - Bluetooth Call Audio Broken (AirPods + Car)
I'm experiencing a Bluetooth audio issue on iOS 26.0 (build 23A5276f). I cannot make or receive phone calls properly using Bluetooth devices; this affects both my car's Bluetooth system and my AirPods Pro (2nd generation). Notably:

- Regular phone calls have no audio (either I can't hear the other person, or they can't hear me).
- WhatsApp and other VoIP apps work fine with the same Bluetooth devices.
- Media playback (music, video, etc.) works without issues over Bluetooth.

It seems this bug is limited to the native Phone app or the system audio routing for regular cellular calls. Please advise if this is a known issue or if a fix is expected in upcoming beta releases.
1
1
296
Jun ’25
Apple Music Won't Play using the latest version of Xcode/MacOS
I have tried everything. The songs load into the playlists and on searches, but when prompted to play, they just won't play. I have a wrapper because my main player (which carries the buttons for play/rewind/forward/etc.) is in Objective-C.

```
//
//  ApplePlayerWrapper.swift
//  UniversallyMac
//
//  Created by Dorian Mattar on 11/10/24.
//

import Foundation
import MusicKit
import MediaPlayer

@objc public class MusicKitWrapper: NSObject {
    @objc public static let shared = MusicKitWrapper()
    private let player = ApplicationMusicPlayer.shared

    // Play the current track
    @objc public func play() {
        guard !player.queue.entries.isEmpty else {
            print("Queue is empty. Cannot start playback.")
            return
        }
        logPlayerState(message: "Before play")
        Task {
            do {
                try await player.prepareToPlay()
                try await player.play()
                print("Playback started successfully.")
            } catch {
                if let nsError = error as NSError? {
                    print("NSError Code: \(nsError.code), Domain: \(nsError.domain)")
                }
            }
            logPlayerState(message: "After play")
        }
    }

    // Log the current player state
    @objc public func logPlayerState(message: String = "") {
        print("Player State - \(message):")
        print("Playback Status: \(player.state.playbackStatus)")
        print("Queue Count: \(player.queue.entries.count)")
        // Only log current track details if the player is playing
        if player.state.playbackStatus == .playing {
            if let currentEntry = player.queue.currentEntry {
                print("Current Track: \(currentEntry.title)")
                print("Current Position: \(player.playbackTime) seconds")
                print("Track Length: \(currentEntry.endTime ?? 0.0) seconds")
            } else {
                print("No current track.")
            }
        } else {
            print("No track is playing.")
        }
        print("----------")
    }

    // Debug the queue
    @objc public func debugQueue() {
        print("Debugging Queue:")
        for (index, entry) in player.queue.entries.enumerated() {
            print("\(index): \(entry.title)")
        }
    }

    // Ensure track availability in the queue
    public func queueTracks(_ tracks: [Track]) {
        Task {
            do {
                for track in tracks {
                    // Validate Play Parameters
                    guard let playParameters = track.playParameters else {
                        print("Track \(track.title) has no Play Parameters.")
                        continue
                    }
                    // Log the Play Parameters
                    print("Track Title: \(track.title)")
                    print("Play Parameters: \(playParameters)")
                    print("Raw Values: \(track.id.rawValue)")
                    // Ensure the ID is valid
                    if track.id.rawValue.isEmpty {
                        print("Track \(track.title) has an invalid or empty ID in Play Parameters.")
                        continue
                    }
                    // Queue the track
                    try await player.queue.insert(track, position: .afterCurrentEntry)
                    print("Queued track: \(track.title)")
                }
                print("Tracks successfully added to the queue.")
            } catch {
                print("Error queuing tracks: \(error)")
            }
            debugQueue()
        }
    }

    // Clear the current queue
    @objc public func resetMusicPlayer() {
        Task {
            player.stop()
            player.queue.entries.removeAll()
            print("Queue cleared.")
            print("Apple Music player reset successfully.")
        }
    }
}
```

I opened an Apple Dev. ticket, but I'm trying here as well. Thanks!
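A short diagnostic sketch, offered as an assumption rather than a known fix: ApplicationMusicPlayer fails silently in some setups when authorization or an active subscription is missing, so checking both before calling play() narrows the failure down.

```
import MusicKit

// Quick diagnostic: confirm authorization and subscription state up front.
Task {
    let status = await MusicAuthorization.request()
    print("MusicKit authorization: \(status)")
    do {
        let subscription = try await MusicSubscription.current
        print("Can play catalog content: \(subscription.canPlayCatalogContent)")
    } catch {
        print("Could not read subscription state: \(error)")
    }
}
```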
1
0
475
Jan ’25
Why is the volume very low when using the real-time recording and playback feature with AEC?
I’ve been researching how to achieve a recording playback effect in iOS similar to the hands-free calling effect in the system’s phone app. How can this be implemented? I tried using the voice chat recording method, but found that the volume of the speaker output is too low. How should this issue be addressed? I couldn’t find a suitable API. Could you provide me with some documentation or sample code? Thank you.
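A hedged sketch of the session setup usually associated with this symptom, as an assumption rather than a confirmed fix: the .voiceChat mode enables AEC but routes output to the quiet earpiece receiver by default; .defaultToSpeaker (or overrideOutputAudioPort(.speaker)) moves it to the loudspeaker, matching the system phone app's hands-free mode.

```
import AVFoundation

// Enable echo cancellation via .voiceChat, but route playback to the
// loudspeaker rather than the much quieter earpiece receiver.
do {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .voiceChat,
                            options: [.defaultToSpeaker, .allowBluetooth])
    try session.setActive(true)
    // Alternatively, after activation:
    // try session.overrideOutputAudioPort(.speaker)
} catch {
    print("Audio session setup failed: \(error)")
}
```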
1
0
420
Feb ’25
Audio player app is silent if device connected via CarPlay
I have a SwiftUI app (https://youtu.be/VbAfUk_eYl0?si=JxUBh0Bpb-vc1E1U) which I thought was almost ready for release: a manager for airdropped audio files from Logic Pro or other music creation applications. It uses AVAudioEngine and AVAudioPlayerNode to play audio, and the MediaPlayer API to integrate with car audio and similar, all of which works well. It does not currently have an explicit CarPlay integration (and I'm slightly horrified at the amount of work that is going to require). I had the good or bad luck of getting a loaner car with CarPlay while mine is being repaired yesterday, and lo and behold, when connected to the vehicle via CarPlay, there is no audio output in the vehicle at all. The now playing panel correctly shows the information my app provides about the currently playing song; the player node believes it is playing, and the AVAudioSession is configured as it should be. But there is no sound. Obviously I cannot ship it in this state. I've tried fiddling with the parameters the AVAudioSession is configured with, in case some parameter was preventing audio output, to no avail. Currently:

```
var options = AVAudioSession.CategoryOptions()
options.insert(.allowAirPlay)
options.insert(.allowBluetooth)
options.insert(.allowBluetoothA2DP)
try session.setCategory(.playback, mode: .default, options: options)
try? session.setPreferredIOBufferDuration(0.002) // ~96 samples at 44.1kHz
try? session.setPrefersNoInterruptionsFromSystemAlerts(true)
try? session.setPrefersInterruptionOnRouteDisconnect(false)
try session.setActive(true, options: [.notifyOthersOnDeactivation])
```

All diagnostics within the app show the player operating correctly: files are played and flushed, and AVAudioPlayerNodeCompletionCallbacks are called when they should be. But the output is not audible in the vehicle. I would much prefer to ship this app without full-blown CarPlay integration, but with working audio when connected via CarPlay, and work on full CarPlay integration for the next release. Is there some secret handshake I am just missing to make this work?
1
0
125
Mar ’25
Music Keeps cutting off
Ever since the iOS 18.3 update on my devices, every time I put my AirPods in and connect them to my phone, my Mac, or my iPad, they disconnect without reason, pause songs I'm in the middle of playing, and only partially reconnect in one pod. It's getting really frustrating.
1
0
318
Feb ’25
AVAudioRecorder loses audio recorded before interruption
Hi everyone, I'm running into an issue with AVAudioRecorder when handling interruptions such as phone calls or alarms.

Problem: When the app is recording audio and an interruption occurs, I handle the interruption with audioRecorder?.pause() inside AVAudioSession.interruptionNotification (on .began). On .ended, I check for .shouldResume and call audioRecorder?.record() again. The recorder resumes successfully, but only the audio recorded after the interruption is saved. The audio recorded before the interruption is lost, even though I'm using the same file URL and not recreating the recorder.

Repro:

1. Start a recording with AVAudioRecorder
2. Simulate a system interruption (e.g., incoming call)
3. Resume recording after the interruption
4. Stop and inspect the output audio file

Expected: Full audio (before and after the interruption) should be saved.

Actual: Only the audio after the interruption is saved; the earlier part is missing.

Notes: According to the documentation, calling .record() after .pause() should resume recording into the same file. I confirmed that the file URL does not change, and I do not recreate the recorder instance. No error is thrown by the system during this process. This behavior happens consistently when the app is interrupted and resumed.

Question: Is this a known issue? Is there a recommended workaround for preserving the full recording when interruptions happen? Thanks in advance!
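A sketch of one workaround pattern, offered purely as an assumption to test: rather than pause()/record() across the interruption, finalize a segment with stop() when the interruption begins, start a fresh recorder on a new URL when it ends, and stitch the segments together afterwards (e.g. with AVMutableComposition). The SegmentedRecorder type and startNewSegment() are illustrative names, not AVFoundation API.

```
import AVFoundation

// Illustrative sketch: one file segment per interruption.
final class SegmentedRecorder {
    private var audioRecorder: AVAudioRecorder?
    private(set) var segmentURLs: [URL] = []

    func observeInterruptions() {
        NotificationCenter.default.addObserver(
            forName: AVAudioSession.interruptionNotification,
            object: nil, queue: .main
        ) { [weak self] note in
            guard let self,
                  let info = note.userInfo,
                  let raw = info[AVAudioSessionInterruptionTypeKey] as? UInt,
                  let type = AVAudioSession.InterruptionType(rawValue: raw) else { return }
            switch type {
            case .began:
                // stop() finalizes the file so the pre-interruption audio survives.
                self.audioRecorder?.stop()
            case .ended:
                let opts = AVAudioSession.InterruptionOptions(
                    rawValue: info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0)
                if opts.contains(.shouldResume) {
                    self.startNewSegment()
                }
            @unknown default:
                break
            }
        }
    }

    private func startNewSegment() {
        let url = FileManager.default.temporaryDirectory
            .appendingPathComponent("segment-\(segmentURLs.count).m4a")
        let settings: [String: Any] = [AVFormatIDKey: kAudioFormatMPEG4AAC,
                                       AVSampleRateKey: 44_100,
                                       AVNumberOfChannelsKey: 1]
        audioRecorder = try? AVAudioRecorder(url: url, settings: settings)
        if audioRecorder?.record() == true { segmentURLs.append(url) }
        // Afterwards, merge segmentURLs with AVMutableComposition + AVAssetExportSession.
    }
}
```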
1
1
257
2w
Is AVAudioPCMFormatFloat32 required for playing a buffer with AVAudioEngine / AVAudioPlayerNode
I have a PCM audio buffer (AVAudioPCMFormatInt16). When I try to play it using AVAudioPlayerNode / AVAudioEngine, an exception is thrown: "[[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr]: returned false, error Error Domain=NSOSStatusErrorDomain Code=-10868" (related thread: https://forums.developer.apple.com/forums/thread/700497?answerId=780530022#780530022). If I convert the buffer to AVAudioPCMFormatFloat32, playback works. My questions are:

1. Does AVAudioEngine / AVAudioPlayerNode require AVAudioPCMBuffer to be in the Float32 format?
2. Is there a way I can configure it to accept another format instead for my application?
3. If 1 is YES, is this documented anywhere?
4. If 1 is YES, is this required format subject to change at any point?

Thanks! I was also looking to watch the "AVAudioEngine in Practice" session video from WWDC 2014, but I can't find it anywhere (https://forums.developer.apple.com/forums/thread/747008).
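If converting up front is acceptable, here is a minimal sketch of that conversion with AVAudioConverter, offered as an illustration rather than a statement about what AVAudioEngine requires:

```
import AVFoundation

// Convert an Int16 PCM buffer to a deinterleaved Float32 buffer before
// scheduling it on an AVAudioPlayerNode.
func makeFloat32(from source: AVAudioPCMBuffer) -> AVAudioPCMBuffer? {
    guard let outFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                        sampleRate: source.format.sampleRate,
                                        channels: source.format.channelCount,
                                        interleaved: false),
          let converter = AVAudioConverter(from: source.format, to: outFormat),
          let out = AVAudioPCMBuffer(pcmFormat: outFormat,
                                     frameCapacity: source.frameLength)
    else { return nil }

    var fed = false
    let status = converter.convert(to: out, error: nil) { _, inputStatus in
        // Feed the single source buffer once, then signal end of stream.
        if fed { inputStatus.pointee = .endOfStream; return nil }
        fed = true
        inputStatus.pointee = .haveData
        return source
    }
    return status == .error ? nil : out
}
```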
1
0
1k
Oct ’25
How to find `AudioHardwareControl` direction?
I'm working with the modern Core Audio API introduced in macOS Sequoia. I have an AudioHardwareDevice which has several controls of type AudioHardwareControl. I figured out that to filter only volume controls I can use the condition classID == kAudioVolumeControlClassID. Some devices have volume controls for both input and output. How can I determine the direction of a control? Streams, i.e. AudioHardwareStream objects, have a direction, but I haven't found a way to map controls to streams. There are kAudioObjectPropertyScopeInput and kAudioObjectPropertyScopeOutput property scopes, but no matter what I tried, controls always return false from control.hasProperty(address:) for anything. Any other ideas?
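One avenue worth checking, sketched with the classic C API since I can't vouch for the Swift wrapper's surface here: AudioControl objects publish their direction through kAudioControlPropertyScope, which should come back as kAudioObjectPropertyScopeInput or kAudioObjectPropertyScopeOutput.

```
import CoreAudio

// Ask an AudioControl object for its scope; input controls report
// kAudioObjectPropertyScopeInput, output controls kAudioObjectPropertyScopeOutput.
func controlScope(of controlID: AudioObjectID) -> AudioObjectPropertyScope? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioControlPropertyScope,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var scope = AudioObjectPropertyScope(0)
    var size = UInt32(MemoryLayout<AudioObjectPropertyScope>.size)
    let err = AudioObjectGetPropertyData(controlID, &address, 0, nil, &size, &scope)
    return err == noErr ? scope : nil
}
```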
1
0
524
Jan ’25
Some questions about musickit
We are developing an Apple Music web app for phones. The web app works fine in Chrome, but when I load it in a WebView on my phone, I can't play the first song. We suspect that DRM initialization, key exchange, and session creation all happen inside the music.play() function; when we trigger play, the DRM session is not yet ready to play a real song, so it fails. So we would like to know: what is the order of the DRM, key, and session steps inside the play() function? And is there a state-detection function that shows whether the DRM is ready?
1
0
117
Mar ’25
Always audio from latest connected external USB mic
Hello! I have two mics connected to a USB hub, and the USB hub is connected to my iPad. Both mics appear in the audio session's list of available inputs. The problem is that regardless of which mic I select in my app (using setPreferredInput() on the audio session), the audio keeps coming from the mic that was most recently connected to the USB hub. Does anyone know whether this is a limitation in iPadOS/iOS?
1
1
205
Jul ’25