Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under Media Technologies topic

Post

Replies

Boosts

Views

Activity

P3 Display to XYZ Color Space Conversion
When I use the ColorSync Utility to convert the Display P3 color (1, 0, 0) to an XYZ color, the result is (0.5151, 0.2412, -0.0011). I expected that result because it is identical to the red colorant tristimulus value in the Display P3.icc file. When I use the CGColor converted method to do the same, the XYZ color is approximately (0.5151, 0.2412, 0.0). Note that the third element is 0.0, whereas it is -0.0011 when using the ColorSync Utility. I have printed out the Z component to 16 digits of precision, and Z is all 0s. It appears that the CGColor converted method is clamping the result to the range 0 to 1. My questions are: 1. Which conversion is correct, the ColorSync Utility or the CGColor converted method? 2. I am not a color specialist, but I thought that the XYZ components should never be negative. If so, is the colorant tristimulus value in the Display P3.icc file wrong? 3. Because CGColor clamped the Z component to 0, the XYZ color cannot be converted back exactly or closely to the Display P3 color (1, 0, 0). I would have expected to be able to go back and forth between the two color spaces when starting from a valid Display P3 color, especially since the XYZ color space completely encompasses the Display P3 color space. Is that not true? 4. Is (1, 0, 0) an invalid Display P3 color? If so, I can understand the peculiar results. I'm not sure how I would know whether a Display P3 color is valid or not (I only know that the component values must be from 0 to 1). I think it is valid because Apple uses that color as an example in the UIColor API Reference.
3
0
3.5k
Dec ’25
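A minimal Swift sketch of the conversion being compared above, for anyone who wants to reproduce the numbers; the rendering intent is left at the default since the post doesn't say which one was used, and choosing another intent may change the result:

import CoreGraphics

// Convert Display P3 (1, 0, 0) to XYZ via CGColor and print the components.
// The post reports (0.5151, 0.2412, 0.0) from this path, versus Z = -0.0011 from
// ColorSync Utility, which suggests CGColor clamps components to 0...1.
let p3 = CGColorSpace(name: CGColorSpace.displayP3)!
let xyz = CGColorSpace(name: CGColorSpace.genericXYZ)!
let comps: [CGFloat] = [1, 0, 0, 1]
let red = CGColor(colorSpace: p3, components: comps)!

if let converted = red.converted(to: xyz, intent: .defaultIntent, options: nil),
   let out = converted.components {
    print("XYZ:", out.map { String(format: "%.4f", $0) })
}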
Disconnect from AirPlay device programmatically
Hello there, I'm trying to implement a feature that uses AirPlay with Apple TV. I want to disconnect from the device programmatically when something happens. By something I mean a situation where the user wants to stop broadcasting (for example, closing the PiP window on their phone). I use this snippet: try audioSession.setCategory(.playAndRecord, options: .defaultToSpeaker) try audioSession.setActive(true, options: .notifyOthersOnDeactivation) It works fine sometimes but not always (it works on iOS 18 but it doesn't on iOS 17 or ). So I thought it was a bug and created a ticket in Feedback Assistant (FB21220013). Support told me to write a post on the forum.
2
0
471
Dec ’25
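For reference, a hedged Swift sketch of the approach described in the post, with the route override spelled out; whether this reliably tears down an active AirPlay route appears to be version-dependent (working on iOS 18 but not iOS 17, per the post), so treat it as a starting point rather than a confirmed fix:

import AVFoundation

func routeBackToSpeaker() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord, options: .defaultToSpeaker)
        try session.overrideOutputAudioPort(.speaker)   // explicitly ask for the built-in speaker
        try session.setActive(true, options: .notifyOthersOnDeactivation)
    } catch {
        print("Failed to reroute audio: \(error)")
    }
}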
AVPlayerViewController volume slider UI changes but audio output level remains constant
We are facing an issue with audio playback using AVPlayerViewController in an iOS application. We are using the native player to play recorded audio files. When the AVPlayerViewController appears, the native user interface is displayed correctly, including the playback controls and the volume slider. However, when the user interacts with the volume slider, the slider UI moves and responds to touch events, but the actual audio output volume does not change: the audio continues playing at the initial volume level regardless of the slider position. We initialize the player and present it modally using the following code: AVPlayerViewController *avController = [[AVPlayerViewController alloc] init]; avController.player = [AVPlayer playerWithURL:videoURL]; // Setting initial volume avController.player.volume = 1.0f; avController.modalPresentationStyle = UIModalPresentationOverFullScreen; avController.allowsPictureInPicturePlayback = NO; // Present the controller [self presentViewController:avController animated:YES completion:nil];
0
0
134
Dec ’25
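One thing worth ruling out, sketched below in Swift: the slider in the player UI generally tracks the system output volume rather than AVPlayer.volume, so making sure an active .playback audio session is in place before presenting the controller is a reasonable first experiment. This is an assumption to test, not a confirmed fix for the behavior above:

import UIKit
import AVFoundation
import AVKit

func presentPlayer(for url: URL, from presenter: UIViewController) {
    do {
        // Ensure the app's audio runs on the playback route that the hardware volume controls.
        try AVAudioSession.sharedInstance().setCategory(.playback)
        try AVAudioSession.sharedInstance().setActive(true)
    } catch {
        print("Audio session setup failed: \(error)")
    }

    let controller = AVPlayerViewController()
    controller.player = AVPlayer(url: url)
    controller.modalPresentationStyle = .overFullScreen
    controller.allowsPictureInPicturePlayback = false
    presenter.present(controller, animated: true)
}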
Sound not working on testflight / Appstore
I have a Flutter iOS app that has some simple sound FX for button clicks, swipes, etc. In the simulator and on a real device the sound works fine, but when I upload the app to TestFlight (and the App Store) the sound FX don't play. When I upload the app to my phone via Xcode I am using the release profile, so I don't see what the difference could be. I have also gone through the archive that I uploaded and verified that the sound files are indeed there. I have other Flutter apps that use sound, but none since the iOS 26 update. I've tried 3 different Flutter sound libraries and all face the same issue. Wondering if anyone else is seeing this issue or if I'm missing a simple permission or something that has changed recently? Thanks in advance.
2
0
238
Dec ’25
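A small diagnostic sketch in Swift that can be dropped into the iOS runner to separate a packaging problem from a playback problem; the resource name "click.wav" is hypothetical and should be replaced with a real asset from the app:

import Foundation
import AVFoundation

func probeSoundAsset() {
    // If this is nil in the TestFlight build, the file never made it into the shipped bundle.
    guard let url = Bundle.main.url(forResource: "click", withExtension: "wav") else {
        print("Sound asset missing from bundle")
        return
    }
    do {
        let player = try AVAudioPlayer(contentsOf: url)
        player.prepareToPlay()
        print("Asset found, duration: \(player.duration)s")
    } catch {
        print("Asset present but failed to load: \(error)")
    }
}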
Error resuming background audio while connected to CarPlay
My app utilizes background audio to play music files. I have the audio background mode enabled and I initialize the AVAudioSession in playback mode with the mixWithOthers option. And it usually works great while the app is backgrounded. I listen for audio interruptions as well as route changes, I am able to handle them appropriately, and I can usually resume my background audio no problem. I discovered an issue while connected to CarPlay though. Roughly 50% of the time when I disconnect from a phone call while connected to CarPlay I get the following error after calling the play() method of my AVAudioPlayer instance: "ATAudioSessionClientImpl.mm:281 activation failed. status = 561015905" If I instead try to start a new audio session I get a similar error: Error Domain=NSOSStatusErrorDomain Code=561015905 "Session activation failed" UserInfo={NSLocalizedDescription=Session activation failed} Like I said, this isn't reproducible 100% of the time and has so far only been seen while connected to CarPlay. I don't think I'm forgetting some additional capability or plist setting, but if anyone has any clues it would be greatly appreciated. Otherwise this is likely just a bug that I need to report to Apple. One very important note, and the reason I believe it's just a bug, is that while I was testing I found that other music apps like Spotify will also fail to resume their audio at the same time my app fails. Another important detail is that when it works successfully I receive the audio session interruption ended notification, and when it doesn't work I only receive a route configuration change or route override notification. From there I am still successfully granted background time to execute code, but my call to resume audio fails with the above mentioned error codes.
0
0
317
Dec ’25
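A hedged Swift sketch of one way to structure the resume path: resume immediately when the interruption-ended notification carries .shouldResume, and retry session activation a few times with a delay when it doesn't. The retry count and delay are assumptions, not a known workaround for error 561015905:

import AVFoundation

final class PlaybackResumer {
    var player: AVAudioPlayer?
    private var observer: NSObjectProtocol?

    init() {
        observer = NotificationCenter.default.addObserver(
            forName: AVAudioSession.interruptionNotification, object: nil, queue: .main
        ) { [weak self] note in
            guard let info = note.userInfo,
                  let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
                  AVAudioSession.InterruptionType(rawValue: typeValue) == .ended else { return }
            let optionsValue = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
            if AVAudioSession.InterruptionOptions(rawValue: optionsValue).contains(.shouldResume) {
                self?.resume(retriesLeft: 3)
            }
        }
    }

    func resume(retriesLeft: Int) {
        do {
            try AVAudioSession.sharedInstance().setActive(true)
            player?.play()
        } catch {
            guard retriesLeft > 0 else {
                print("Could not reactivate audio session: \(error)")
                return
            }
            // Back off briefly; right after a call ends the session may not be activatable yet.
            DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {
                self.resume(retriesLeft: retriesLeft - 1)
            }
        }
    }
}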
DockKit custom tracking does not work on iOS 18.3
Hi all, I'm using the Apple sample code below to create an application using DockKit: "Controlling a DockKit accessory using your camera app" https://developer.apple.com/documentation/dockkit/controlling-a-dockkit-accessory-using-your-camera-app?changes=_8 I used Vision hand recognition and passed the observation data to dockAccessory.track, but Belkin or Insta360 devices never move on an iPhone 16 Pro Max with iOS 18.3. If I use other functions like face search (system tracking) in the app, those work OK. I used a Belkin and an Insta360 Flow 2 Pro to reproduce the problem. My friend is also saying that the custom tracking feature was working fine on the iOS 18 beta, but on the recent iOS 18.3 that feature does not work. If I could get the iOS 18.0 beta then we could test that feature, but I cannot revert my iOS from 18.3 to the iOS 18.0 beta. Regards, TO
1
1
347
Dec ’25
AVAudioEngine Voice Processing Fails with Mismatched Input/Output Devices: AggregateDevice Channel Count Mismatch
I'm encountering errors while using AVAudioEngine with voice processing enabled (setVoiceProcessingEnabled(true)) in scenarios where the input and output audio devices are not the same. This issue arises specifically with mismatched devices, preventing the application from functioning as expected. Works: Paired devices (e.g., MacBook Pro mic → MacBook Pro speakers) Fails: Mismatched devices (e.g., AirPods mic → MacBook Pro speakers) When using paired input and output devices: The setup works as expected. Example: MacBook Pro microphone → MacBook Pro speakers. When using mismatched devices: AVAudioEngine setup fails during aggregate device construction. Example: AirPods microphone → MacBook Pro speakers. Error logs indicate a channel count mismatch. Here are the partial logs. Due to the content limit, I cannot post the entire logs. AUVPAggregate.cpp:1000 client-side input and output formats do not match (err=-10875) AUVPAggregate.cpp:1036 err=-10875 AVAEInternal.h:109 [AVAudioEngineGraph.mm:1344:Initialize: (err = PerformCommand(*outputNode, kAUInitialize, NULL, 0)): error -10875 AggregateDevice.mm:329 Failed expectation of constructed aggregate (312): mInput.streamChannelCounts == inputStreamChannelCounts AggregateDevice.mm:331 Failed expectation of constructed aggregate (312): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U) AggregateDevice.mm:182 error fetching default pair AggregateDevice.mm:329 Failed expectation of constructed aggregate (336): mInput.streamChannelCounts == inputStreamChannelCounts AggregateDevice.mm:331 Failed expectation of constructed aggregate (336): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U) AUHAL.cpp:1782 ca_verify_noerr: [AudioDeviceSetProperty(mDeviceID, NULL, 0, isInput, kAudioDevicePropertyIOProcStreamUsage, theSize, theStreamUsage), 560227702] AudioHardware-mac-imp.cpp:3484 AudioDeviceSetProperty: no device with given ID AUHAL.cpp:1782 ca_verify_noerr: [AudioDeviceSetProperty(mDeviceID, NULL, 0, isInput, kAudioDevicePropertyIOProcStreamUsage, theSize, theStreamUsage), 560227702] AggregateDevice.mm:182 error fetching default pair AggregateDevice.mm:329 Failed expectation of constructed aggregate (348): mInput.streamChannelCounts == inputStreamChannelCounts AggregateDevice.mm:331 Failed expectation of constructed aggregate (348): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U) Is it possible to use voice processing with different input/output devices? If yes, are there any specific configurations required to handle mismatched devices? How can we resolve channel count mismatch errors during aggregate device construction? Are there settings or API adjustments to enforce compatibility between input/output devices? Are there any workarounds or alternative approaches to achieve voice processing functionality with mismatched devices? For instance, can we force an intermediate channel configuration or downmix input/output formats?
0
0
301
Dec ’25
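For context, a minimal Swift sketch of the configuration under discussion: voice processing is enabled on the engine's input node before the graph is wired. This reproduces the setup on macOS; it is not a fix for the aggregate-device channel mismatch when input and output hardware differ:

import AVFoundation

func makeVoiceProcessingEngine() throws -> AVAudioEngine {
    let engine = AVAudioEngine()
    // Enabling voice processing on the input node also ties the output node into the VP unit.
    try engine.inputNode.setVoiceProcessingEnabled(true)

    let format = engine.inputNode.outputFormat(forBus: 0)
    engine.connect(engine.inputNode, to: engine.mainMixerNode, format: format)

    try engine.start()
    return engine
}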
Logged error/warning in FigCaptureSourceRemote when capturing a photo
I'm using this library: https://github.com/Yummypets/YPImagePicker to capture photos. I've modified it slightly, and I'm using an older version. When testing on my iPhone 16e, iOS 26, whenever I take a photo, I get the following two error messages: <<<< FigXPCUtilities >>>> signalled err=-17281 at <>:302 <<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:569) - (err=-17281) These error messages appear, but as far as I can tell, the photo comes through OK, and I can save the data no problem. I've even removed all my handling code to see if it was something I was doing. I don't really want to ship with these errors showing, but I also have no idea what can be causing this error to appear. ChatGPT was not helpful in diagnosing this. Does anyone know what can cause this error? Is there a way I can see the source code to figure out if there's something I'm doing wrong here? It really seems like this is an internal Apple error, or else I would have expected more details on the error relating to the code I've written. Any clues would be appreciated!
2
2
786
Dec ’25
Photos are captured with incorrect exposure bias in specific scenarios on iPhone 17 Pro
Hey, There seems to be an inconsistency when capturing a photo using QualityPrioritization.quality on the iPhone 17 Pro main wide lens. If you zoom above 2x, the output image always has a -2.0 EV bias in the metadata and looks underexposed. This does not happen at zoom levels below 2x, or if you set the QualityPrioritization to .balanced. See below: with .quality with .balanced This does not happen on the other lenses. I'm using a simple setup and it is consistent across JPEG and ProRAW capture. I have a demo project if that is useful. Thanks, Alex
3
0
548
Dec ’25
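A small Swift sketch of the knobs the post compares, useful for reproducing the difference; it only shows where the prioritization is set and does not explain or fix the -2.0 EV bias:

import AVFoundation

func makePhotoSettings(for output: AVCapturePhotoOutput,
                       prioritization: AVCapturePhotoOutput.QualityPrioritization) -> AVCapturePhotoSettings {
    // The per-capture value must not exceed the output's maximum, so raise the ceiling first.
    output.maxPhotoQualityPrioritization = .quality
    let settings = AVCapturePhotoSettings()
    settings.photoQualityPrioritization = prioritization   // compare .quality vs .balanced at >2x zoom
    return settings
}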
Can iOS capture video at 4032×3024 while running a Vision/ML model?
I am new to Swift and iOS development, and I have a question about video capture performance. Is it possible to capture video at a resolution of 4032×3024 while simultaneously running a vision/ML model on the video stream (e.g., using Vision or CoreML)? I want to know: whether iOS devices support capturing video at that resolution, whether the frame rate drops significantly at that scale, and whether it is practical to run a Vision/ML model in real-time while recording at such a high resolution. If anyone has experience with high-resolution AVCaptureSession setups or combining them with real-time ML processing, I would really appreciate guidance or sample code.
1
0
223
Dec ’25
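A quick Swift sketch for the first part of the question: list every video format the back wide camera exposes, with its dimensions and maximum frame rate, to see what resolutions a given device actually offers before worrying about running Vision or Core ML on the frames:

import AVFoundation
import CoreMedia

func dumpVideoFormats() {
    guard let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) else { return }
    for format in device.formats {
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        let maxFPS = format.videoSupportedFrameRateRanges.map(\.maxFrameRate).max() ?? 0
        print("\(dims.width)x\(dims.height) @ up to \(maxFPS) fps")
    }
}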
VTFrameRateConversionConfiguration doesn't support 640x480
Hello, I'm using the VideoToolbox VTFrameRateConversionConfiguration to perform frame interpolation: https://developer.apple.com/documentation/videotoolbox/vtframerateconversionconfiguration?language=objc When using 640x480 video input, I get this error: Error ! Invalid configuration [VEEspressoModel] build failure : flow_adaptation_feature_extractor_rev2.espresso.net. Configuration: landscape640x480 [EpsressoModel] Cannot load Net file flow_adaptation_feature_extractor_rev2.espresso.net. Configuration: landscape640x480 Error: failed to create FRCFlowAdaptationFeatureExtractor for usage 8 Failed to switch (0x12c40e140) [usage:8, 1/4 flow:0, adaptation layer:1, twoStage:0, revision:2, flow size (320x240)]. Could not init FlowAdaptation initFlowAdaptationWithError fail With 2048x1080 input it works fine.
3
0
443
Dec ’25
Repeat song listens not queryable
Hi all, I've been working on some personal programming projects and have gotten into using the Apple Music API. I'm currently looking to get a list of recent songs using the /v1/me/recent/played/tracks endpoint and it's working well. However, I know there are some songs I've listened to multiple times in a row, and those are not showing up as unique tracks when querying this endpoint. I'm only seeing a list of the different songs I've listened to lately, not a true list of the most recent plays on my account. Is this intended behavior or am I going about something incorrectly here? My query is using that endpoint & specifying the types to be only [songs]. Thanks in advance for any ideas or insight.
0
0
358
Dec ’25
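For reference, a hedged Swift sketch of the request described above, issued through MusicKit so the developer and user tokens are handled automatically; the apparent de-duplication of back-to-back plays looks like server-side behavior, so this only shows the call shape:

import Foundation
import MusicKit

func fetchRecentlyPlayedTracks() async throws -> Data {
    var components = URLComponents(string: "https://api.music.apple.com/v1/me/recent/played/tracks")!
    components.queryItems = [URLQueryItem(name: "types", value: "songs")]

    let request = MusicDataRequest(urlRequest: URLRequest(url: components.url!))
    let response = try await request.response()
    return response.data   // raw JSON; decode into your own models as needed
}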
Unexpected artist names in song table
Hi team, In the Apple Music Feed datasets, we've noticed some unexpected values in the song and album tables. The primaryartists column from either song or album may contain a "non-default" artist name such as the katakana name shown in the example below: select id, name, namedefault, primaryartists from amf_song where id = '1698723329' id | name | namedefault | primaryartists ---------------------------------------- 1698723329 | {default=California} | California | [{id=1264818718, name=チャペル・ローン}] select * from amf_artist where id = '1264818718' id | name | namedefault | namepronunciation | ---------------------------------------------- 1264818718 | {default=Chappell Roan, ja=チャペル・ローン} | Chappell Roan | {ja=チャペルローン} | Shouldn't the primaryartists column be showing the namedefault instead of the Japanese-language version? When can we expect this bug to be resolved? Thanks,
0
2
515
Dec ’25
_shouldExposeItemIdentifier is false. Unable to get itemIdentifier
PHPhotoLibrary.authorizationStatus(for: .readWrite) == .authorized, and the Info.plist Privacy - Photo Library Usage Description key is set. I check authorization before attempting to get the photoPickerItem.itemIdentifier, but every time the return value from itemIdentifier is nil. It seems I'm missing some permissions, but I'm unsure why the system is still keeping _shouldExposeItemIdentifier set to false.
1
0
241
Dec ’25
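One thing worth checking, sketched in SwiftUI below: in practice PhotosPickerItem.itemIdentifier tends to be nil unless the picker is constructed against an explicit photo library, independent of the authorization status, so it may not be a permissions problem at all:

import SwiftUI
import PhotosUI

struct PickerProbe: View {
    @State private var selection: [PhotosPickerItem] = []

    var body: some View {
        // Passing photoLibrary: .shared() is what makes item identifiers available.
        PhotosPicker(selection: $selection, matching: .images, photoLibrary: .shared()) {
            Text("Pick a photo")
        }
        .onChange(of: selection) { items in
            for item in items {
                print("identifier:", item.itemIdentifier ?? "nil")
            }
        }
    }
}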
How can third-party iOS apps obtain real-time waveform / spectrogram data for Apple Music tracks (similar to djay & other DJ apps)?
Hi everyone, I’m working on an iOS MusicKit app that overlays a metronome on top of Apple Music playback. To line the clicks up perfectly I’d like access to low-level audio analysis data—ideally a waveform / spectrogram or beat grid—while the track is playing. I’ve noticed that several approved DJ apps (e.g. djay, Serato, rekordbox) can already: • Display detailed scrolling waveforms of Apple Music songs • Scratch, loop or time-stretch those tracks in real time That implies they receive decoded PCM frames or at least high-resolution analysis data from Apple Music under a special entitlement. My questions: 1. Does MusicKit (or any public framework) expose real-time audio buffers, FFT bins, or beat markers for streaming Apple Music content? 2. If not, is there an Apple program or entitlement that developers can apply for—similar to the “DJ with Apple Music” initiative—to gain that deeper access? 3. Where can I find official documentation or a point of contact for this kind of request? I’ve searched the docs and forums but only see standard MusicKit playback APIs, which don’t appear to expose raw audio for DRM-protected songs. Any guidance, links or insider tips on the proper application process would be hugely appreciated! Thanks in advance.
1
1
651
Dec ’25
AVAudioUnitSampler Bug with Consolidated Audio Files
Hello, I've discovered a buffer initialization bug in AVAudioUnitSampler that happens when loading presets with multiple zones referencing different regions in the same audio file (monolith/concatenated samples approach). Almost all zones output silence (i.e. zeros) at the beginning of playback instead of starting with actual audio data. The Problem Setup: Single audio file (monolith) containing multiple concatenated samples Multiple zones in an .aupreset, each with different sample start and sample end values pointing to different regions of the same file All zones load successfully without errors Expected Behavior: All zones should play their respective audio regions immediately from the first sample. Actual Behavior: Last zone in the zone list: Works perfectly - plays audio immediately All other zones: Output [0, 0, 0, 0, ..., _audio_data] instead of [real_audio_data] The number of zeros varies from event to event for each zone. It can be a couple of samples (<30) up to several buffers. After the initial zeros, the correct audio plays normally, so there is no shift in audio playback, just missing samples at the beginning. Minimal Reproduction 1. Create Test Monolith Audio File Create a single Wav file with 3 concatenated 1-second samples (44.1kHz): Sample 1: frames 0-44099 (constant amplitude 0.3) Sample 2: frames 44100-88199 (constant amplitude 0.6) Sample 3: frames 88200-132299 (constant amplitude 0.9) 2. Create Test Preset Create an .aupreset with 3 zones all referencing the same file: Pseudo code <Zone array> <zone 1> start : 0, end: 44099, note: 60, waveform: ref_to_monolith.wav; <zone 2> start sample: 44100, note: 62, end sample: 88199, waveform: ref_to_monolith.wav; <zone 3> start sample: 88200, note: 64, end sample: 132299, waveform: ref_to_monolith.wav; </Zone array> 3. Load and Test // Load preset into AVAudioUnitSampler let sampler = AVAudioUnitSampler() try sampler.loadAudioFiles(from: presetURL) // Play each zone (MIDI notes C4=60, D4=62, E4=64) sampler.startNote(60, withVelocity: 64, onChannel: 0) // Zone 1 sampler.startNote(62, withVelocity: 64, onChannel: 0) // Zone 2 sampler.startNote(64, withVelocity: 64, onChannel: 0) // Zone 3 4. Observed Result Zone 1 (C4): [0, 0, 0, ..., 0.3, 0.3, 0.3] ❌ Zeros at beginning Zone 2 (D4): [0, 0, 0, ..., 0.6, 0.6, 0.6] ❌ Zeros at beginning Zone 3 (E4): [0.9, 0.9, 0.9, ...] ✅ Works correctly (last zone) What I've Extensively Tested What DOES Work Separate files per zone: Each zone references its own individual audio file All zones play correctly without zeros Problem: Not viable for iOS apps with 500+ sample libraries due to file handle limitations What DOESN'T Work (All Tested) 1. Different Audio Formats: CAF (Float32 PCM, Int16 PCM, both interleaved and non-interleaved) M4A (AAC compressed) WAV (uncompressed) SF2 (SoundFont2) Bug persists across all formats 2. CAF Region Chunks: Created CAF files with embedded region chunks defining zone boundaries Set zones with no sampleStart/sampleEnd in preset (nil values) AVAudioUnitSampler completely ignores CAF region metadata Bug persists 3. Unique Waveform IDs: Gave each zone a unique waveform ID (268435456, 268435457, 268435458) Each ID has its own file reference entry (all pointing to same physical file) Hypothesized this might trigger separate buffer initialization Bug persists - no improvement 4. Different Sample Rates: Tested: 44.1kHz, 48kHz, 96kHz Bug occurs at all sample rates 5. 
Mono vs Stereo: Bug occurs with both mono and stereo files Environment macOS: Sonoma 14.x (tested across multiple minor versions) iOS: Tested on iOS 17.x with same results Xcode: 16.x Frameworks: AVFoundation, AudioToolbox Reproducibility: 100% reproducible with setup described above Impact & Use Case This bug severely impacts professional music applications that need: Small file sizes: Monolith files allow sharing compressed audio data (AAC/M4A) iOS file handle limits: Opening 400+ individual sample files is not viable on iOS Performance: Single file loading is much faster than hundreds of individual files Standard industry practice: Monolith/concatenated samples are used by EXS24, Kontakt, and most professional samplers Current Impact: Cannot use monolith files with AVAudioUnitSampler on iOS Forced to choose between: unusable audio (zeros at start) OR hitting iOS file limits No viable workaround exists Root Cause Hypothesis The bug appears to be in AVAudioUnitSampler's internal buffer initialization when: Multiple zones share the same source audio file Each zone specifies different sampleStart/sampleEnd offsets Key observation: The last zone in the zone array always works correctly. This is NOT related to: File permissions or security-scoped resources (separate files work fine) Audio codec issues (happens with uncompressed PCM too) Preset parsing (preset loads correctly, all zones are valid) Questions Is this a known issue? I couldn't find any documentation, bug reports, or discussions about this. Is there ANY workaround that allows monolith files to work with AVAudioUnitSampler? Alternative APIs? Is there a different API or approach for iOS that properly supports monolith sample files?
0
0
378
Dec ’25
Switching default input/output channels using Core Audio
I wrote a Swift macOS app to control a PCI audio device. The code switches between the default output and input channels. As soon as I launch the Audio-Midi Setup utility, channel switching stops working. The driver properties allow switching, but the system doesn't respond. I have to delete the contents of /Library/Preferences/Audio and reset Core Audio. What am I missing? func setDefaultChannelsOutput() { guard let deviceID = getDeviceIDByName(deviceName: "PCI-424") else { return } let selectedIndex = DefaultChannelsOutput.indexOfSelectedItem if selectedIndex < 0 || selectedIndex >= 24 { return } let channel1 = UInt32(selectedIndex * 2 + 1) let channel2 = UInt32(selectedIndex * 2 + 2) var channels: [UInt32] = [channel1, channel2] var propertyAddress = AudioObjectPropertyAddress( mSelector: kAudioDevicePropertyPreferredChannelsForStereo, mScope: kAudioDevicePropertyScopeOutput, mElement: kAudioObjectPropertyElementWildcard ) let dataSize = UInt32(MemoryLayout<UInt32>.size * channels.count) let status = AudioObjectSetPropertyData(deviceID, &propertyAddress, 0, nil, dataSize, &channels) if status != noErr { print("Error setting default output channels: \(status)") } }
0
0
299
Dec ’25
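A small Core Audio read-back sketch in Swift that may help narrow this down: reading kAudioDevicePropertyPreferredChannelsForStereo immediately after setting it distinguishes "the set silently failed" from "another client, such as Audio MIDI Setup, rewrote the preference afterwards":

import CoreAudio

func preferredStereoChannels(for deviceID: AudioObjectID) -> [UInt32]? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyPreferredChannelsForStereo,
        mScope: kAudioDevicePropertyScopeOutput,
        mElement: kAudioObjectPropertyElementMain
    )
    var channels: [UInt32] = [0, 0]
    var size = UInt32(MemoryLayout<UInt32>.size * channels.count)
    let status = AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, &channels)
    guard status == noErr else {
        print("Read-back failed: \(status)")
        return nil
    }
    return channels
}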
Unable to record 4K above 30 fps on Pro and Pro Max models only
We’re developing an AVFoundation-based video recording app (4K @ 60 fps required for biomechanical analysis). On most devices this works perfectly (iPhone 12/14/15/16 non-Pro models), but on several iPhone Pro models (12 Pro, 13 Pro, 14 Pro, 15 Pro/Pro Max), we consistently get 4K 30 fps recordings—even when the device should support 4K 60 fps on the wide-angle camera. What we observe We configure the session for .hd4K3840x2160. We iterate through AVCaptureDevice.formats and select formats that: have 3840×2160 resolution support ≥60 fps (videoSupportedFrameRateRanges) On some Pro devices, this format search returns no results, even though: The Camera app records 4K60 fine. External references list the wide camera as 4K60 capable. The fallback becomes the device's default 4K30 format, so final files are 3840×2160 @ 30 fps. This happens immediately on app launch (not after heating), so not thermal-related. What we’ve tried Force selecting .builtInWideAngleCamera instead of dual/triple cameras. Disabling HDR (videoHDREnabled = false). Disabling low-light boost. Allowing 59.94 fps formats (in case exact 60.0 isn’t exposed). Logging all videoSupportedFrameRateRanges per format. What we’re seeing in logs On affected Pro devices, the capture device reports only 4K formats with maxFrameRate ≈ 30 fps, despite the hardware being able to do 4K60. Main question Has anyone encountered cases where 4K60 formats are available in the Camera app but not exposed through AVFoundation, especially on Pro models or multi-camera devices? Could HEVC/HDR capability or multi-camera constraints be preventing certain formats from appearing? Are there known conditions where 4K60 formats are hidden unless specific device configuration is applied? Any guidance on reliably locking 4K60 on iPhone Pro models via AVFoundation would be hugely appreciated.
0
0
397
Dec ’25
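For completeness, the usual Swift pattern for pinning 4K60 once a matching format exists is sketched below; on the affected Pro models the open question is whether the loop finds any format at all, so this is a reproduction aid rather than a fix:

import AVFoundation
import CoreMedia

func lock4K60(on device: AVCaptureDevice) throws -> Bool {
    for format in device.formats {
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        let supports60 = format.videoSupportedFrameRateRanges.contains { $0.maxFrameRate >= 59.94 }
        guard dims.width == 3840, dims.height == 2160, supports60 else { continue }

        try device.lockForConfiguration()
        device.activeFormat = format
        device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 60)
        device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: 60)
        device.unlockForConfiguration()
        return true
    }
    return false   // no 3840x2160 format advertising >= 60 fps was exposed
}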
iPhone UVC camera
We are planning to develop an app that connects to a UVC camera to capture and display video via AVFoundation. Could you please advise on which iPhone models support UVC cameras?
0
0
154
Dec ’25
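A hedged Swift probe for this question: external (UVC) cameras are surfaced through the .external device type on iOS 17 and later, which Apple has documented primarily for iPad with USB-C; whether a particular iPhone model exposes external cameras this way is exactly what is being asked, so the sketch just reports whatever the device finds:

import AVFoundation

func findExternalCameras() -> [AVCaptureDevice] {
    let discovery = AVCaptureDevice.DiscoverySession(deviceTypes: [.external],
                                                     mediaType: .video,
                                                     position: .unspecified)
    for device in discovery.devices {
        print("External camera:", device.localizedName)
    }
    return discovery.devices
}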
UVC camera: AVFoundation cannot start video stream
I am developing an application with a UVC camera (a webcam). I use the AVFoundation library, but when I run the code "[self.mCaptureSession startRunning]", I cannot get the buffer. I have already set the delegate. Any answer will help.
1
0
1.2k
Dec ’25
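A minimal end-to-end Swift sketch of a video-data pipeline, spelling out the steps that most often cause an empty delegate callback (adding the input, adding the output, setting the delegate queue, and starting the session off the main thread); it is a generic capture sketch, not specific to UVC hardware:

import AVFoundation
import CoreMedia

final class CaptureController: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let queue = DispatchQueue(label: "capture.frames")

    func start(with device: AVCaptureDevice) throws {
        session.beginConfiguration()

        let input = try AVCaptureDeviceInput(device: device)
        guard session.canAddInput(input) else { throw NSError(domain: "Capture", code: -1) }
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: queue)
        guard session.canAddOutput(output) else { throw NSError(domain: "Capture", code: -2) }
        session.addOutput(output)

        session.commitConfiguration()
        queue.async { self.session.startRunning() }   // startRunning blocks, keep it off the main thread
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Frames land here; if this never fires, check camera permission and the delegate/queue setup.
        print("frame at", CMSampleBufferGetPresentationTimeStamp(sampleBuffer).seconds)
    }
}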