Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under the Media Technologies topic:

MusicKit can't find identifiers
I am trying to create keys for my personal project with MusicKit and other resources, but MusicKit specifically for now. I want to gather my recent music history and log the time in my system, to measure against my other life data for analysis. I have created an Identifier with an appropriate Description and Bundle ID, and MusicKit is checked under App Services. I have saved, reset the cache, and waited all day, but the keys still have not updated: the key-creation page shows "There are no identifiers available that can be associated with the key" in this field. Please help!
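For reference, a minimal MusicKit sketch of the recent-history fetch this key would enable; MusicRecentlyPlayedRequest is the public MusicKit API, and the limit value is illustrative:

import MusicKit

// Minimal sketch: read the user's recently played songs once the
// developer token / key setup is working. Requires MusicKit authorization.
func fetchRecentlyPlayed() async throws {
    guard await MusicAuthorization.request() == .authorized else { return }

    var request = MusicRecentlyPlayedRequest<Song>()
    request.limit = 25  // illustrative page size
    let response = try await request.response()

    for song in response.items {
        // lastPlayedDate carries the timestamp useful for time-based logging.
        print(song.title, song.artistName, String(describing: song.lastPlayedDate))
    }
}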
Replies: 0 · Boosts: 0 · Views: 62 · Activity: 2w
AVAudioSession: Audio issues when recording the screen in an app that changes IOBufferDuration on iOS 26
Among Japanese end users, audio issues during screen recording (primarily in game applications) have become a topic of discussion. We have confirmed that the trigger for this issue is highly likely related to changes to IOBufferDuration: when setPreferredIOBufferDuration is used and the IOBufferDuration is set to a value smaller than the default, audio problems occur in the recorded screen-capture video. Audio playback is performed using AudioUnit (RemoteIO). https://developer.apple.com/documentation/avfaudio/avaudiosession/setpreferrediobufferduration(_:)?language=objc This issue was not observed on iOS 18, and it appears to have started occurring after upgrading to iOS 26. We provide an audio middleware solution and had incorporated changes to IOBufferDuration into our product to achieve low-latency audio playback. As a result, developers using our product, as well as their end users, are affected by this issue. We kindly request that this issue be investigated and addressed in a future update.
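For reference, a minimal sketch of the configuration being described; setPreferredIOBufferDuration is the public AVAudioSession API, and the 5 ms value is purely illustrative:

import AVFAudio

// Hedged sketch: request a smaller-than-default IO buffer for low-latency
// playback, the setting that appears to trigger the recording issue.
func configureLowLatencySession() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playback)
        try session.setPreferredIOBufferDuration(0.005)  // ~5 ms, below the default
        try session.setActive(true)
        // The system may grant a different value than requested.
        print("Granted IO buffer duration:", session.ioBufferDuration)
    } catch {
        print("Audio session configuration failed:", error)
    }
}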
Replies: 2 · Boosts: 0 · Views: 471 · Activity: 2w
How to upload large videos with PHAssetResourceUploadJobChangeRequest?
I'm implementing a PHBackgroundResourceUploadExtension to back up photos and videos from the user's library to our cloud storage service. Our existing upload infrastructure uses chunked uploads for large files (splitting videos into smaller byte ranges and uploading each chunk separately). This approach:
- Allows resumable uploads if interrupted
- Stays within server-side request size limits
- Provides granular progress tracking

Looking at the PHAssetResourceUploadJobChangeRequest.createJob(destination:resource:) API, I don't see a way to specify byte ranges or create multiple jobs for chunks of the same resource.

Questions:
1. Does the system handle large files (1GB+) automatically under the hood, or is there a recommended maximum file size for a single upload job?
2. Is there a supported pattern for chunked/resumable uploads, or should the destination URL endpoint handle the entire file in one request?
3. If our server requires chunked uploads (e.g., BITS protocol with CreateSession → Fragment → CloseSession), is this extension the right mechanism, or should we use a different approach for large videos?

Any guidance on best practices for large asset uploads would be greatly appreciated.

Environment: iOS 26, Xcode 18
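For illustration, a hedged sketch of single-job creation based only on the createJob(destination:resource:) signature named above; whether the call must run inside a photo library change block on iOS 26 is an assumption here:

import Photos

// Hedged sketch: one upload job per asset resource. The API appears to
// expose no byte-range option, so chunking would have to happen server-side
// or via a different mechanism.
func enqueueUploadJobs(for asset: PHAsset, to endpoint: URL) async throws {
    let resources = PHAssetResource.assetResources(for: asset)
    try await PHPhotoLibrary.shared().performChanges {
        for resource in resources {
            // Assumption: change-request style API, like other PhotoKit requests.
            _ = PHAssetResourceUploadJobChangeRequest.createJob(destination: endpoint,
                                                                resource: resource)
        }
    }
}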
Replies: 4 · Boosts: 0 · Views: 529 · Activity: 2w
Crash while presenting a media picker for Music
This is the code I use:

@MainActor func picker() -> UIViewController {
    let pickerController = MPMediaPickerController(mediaTypes: .music)
    print(pickerController)
    pickerController.showsCloudItems = true
    pickerController.prompt = NSLocalizedString("Add pieces to queue", comment: "")
    pickerController.allowsPickingMultipleItems = true
    pickerController.delegate = MPMusicPlayerControllerSingleton.sharedController()
    return pickerController
}

@MainActor @IBAction func handleBrowserTapped(_ sender: AnyObject) {
    if let pickerController = contentProvider?.picker() {
        self.present(pickerController, animated: true, completion: nil)
    }
}

And this is the crash log:

*** Assertion failure in -[MPMediaPickerController_Appex requestRemoteViewController], MPMediaPickerController.m:523
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'remoteViewController cannot be nil -- process will crash inserting in hierarchy. We likely got a nil remoteViewController because Music is crashing.'
*** First throw call stack:
(0x1869cac70 0x183499224 0x1844f9f50 0x1a6c6a060 0x18d45518c 0x18d4cd410 0x103354544 0x10336dccc 0x10338f748 0x103364204 0x103364144 0x186957a64 0x1868e5288 0x1868e41d0 0x22bde7498 0x18c5a5ca0 0x18c510254 0x18c71ce90 0x103854340 0x1038542b0 0x103854430 0x1834f1c1c)
libc++abi: terminating due to uncaught exception of type NSException
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'remoteViewController cannot be nil -- process will crash inserting in hierarchy. We likely got a nil remoteViewController because Music is crashing.'
Replies: 12 · Boosts: 0 · Views: 239 · Activity: 2w
Unexpected Ambisonics format
When trying to load an ambisonics file using this project: https://github.com/robertncoomber/NativeiOSAmbisonicPlayback/ I get "Unexpected Ambisonics format". Interestingly, loading a 3rd-order ambisonics file works fine:

let ambisonicLayoutTag = kAudioChannelLayoutTag_HOA_ACN_SN3D | 16
let ambisonicLayout = AVAudioChannelLayout(layoutTag: ambisonicLayoutTag)
let stereoLayout = AVAudioChannelLayout(layoutTag: kAudioChannelLayoutTag_Stereo)

So it's purely related to kAudioChannelLayoutTag_Ambisonic_B_Format.
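For context, a short sketch contrasting the two layouts in play; third-order HOA has (3+1)^2 = 16 channels, while kAudioChannelLayoutTag_Ambisonic_B_Format is the 4-channel first-order W/X/Y/Z layout:

import AVFAudio

// 3rd-order HOA: ACN ordering, SN3D normalization, (3+1)^2 = 16 channels.
let hoaLayout = AVAudioChannelLayout(layoutTag: kAudioChannelLayoutTag_HOA_ACN_SN3D | 16)

// First-order B-format: 4 channels (W, X, Y, Z).
let bFormatLayout = AVAudioChannelLayout(layoutTag: kAudioChannelLayoutTag_Ambisonic_B_Format)

print(hoaLayout?.channelCount ?? 0, bFormatLayout?.channelCount ?? 0)  // 16, 4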
Replies: 0 · Boosts: 0 · Views: 45 · Activity: 3w
On iOS 26, HLS alternate audio track selection behaves inconsistently
Summary

On iOS 26, HLS alternate audio track selection behaves inconsistently on both VOD and live streams: the French track falls back to the DEFAULT=YES (English) track after manual selection, and in some cases switching to a non-default track appears to work but it is then impossible to switch back to English.

Environment
- iOS version: 26
- Players affected: native Safari on iOS 26 and THEOplayer (issue also reproducible on THEOplayer's own demo page)
- Stream type: HLS/CMAF with demuxed alternate audio renditions (CMFC container)
- Affected stream types: both VOD and live streaming
- Issue NOT present on iOS 17/18

Manifest

#EXTM3U
#EXT-X-VERSION:4
#EXT-X-INDEPENDENT-SEGMENTS
#EXT-X-STREAM-INF:BANDWIDTH=8987973,AVERAGE-BANDWIDTH=8987973,VIDEO-RANGE=SDR,CODECS="avc1.640028",RESOLUTION=1920x1080,FRAME-RATE=29.970,AUDIO="program_audio"
video_1080p.m3u8
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="program_audio",LANGUAGE="en",ASSOC-LANGUAGE="en",NAME="English",AUTOSELECT=YES,DEFAULT=YES,CHANNELS="2",URI="audio_ENG.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="program_audio",LANGUAGE="de",ASSOC-LANGUAGE="de",NAME="Deutsch",AUTOSELECT=YES,DEFAULT=NO,CHANNELS="2",URI="audio_DEU.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="program_audio",LANGUAGE="fr",ASSOC-LANGUAGE="fr",NAME="Francais",AUTOSELECT=YES,DEFAULT=NO,CHANNELS="2",URI="audio_FRA.m3u8"

Steps to Reproduce
1. Load the HLS manifest (VOD or live) in Safari on iOS 26, or in any AVFoundation-backed player
2. Start playback: English plays correctly as DEFAULT
3. Manually select "Francais" from the audio track selector
4. Observe that English audio continues playing (French does not play)
5. In a separate scenario: manually select "Deutsch"; German plays correctly
6. Attempt to switch back to English: English does not resume; audio remains on the previously selected track

Expected behavior
- Selecting any track should immediately switch to that language
- Switching back to English (DEFAULT=YES) should work at any time
- Behavior should be consistent across VOD and live streams

Actual behavior

Two distinct anomalies observed, reproducible on both VOD and live streams, in both native Safari and THEOplayer:
1. French-specific fallback: selecting the French track causes playback to fall back to English. This does not happen with German.
2. Cannot return to English: in cases where a non-default track plays correctly, attempting to switch back to the DEFAULT=YES track (English) fails; the previous non-default track continues playing.

The fact that the issue reproduces in native Safari confirms this is an AVFoundation/WebKit-level regression, not a third-party player bug.

What we have already verified and ruled out
- LANGUAGE codes are BCP-47 compliant (en, fr, de)
- EXT-X-VERSION:4 is present
- Audio codec removed from STREAM-INF CODECS (video-only)
- ASSOC-LANGUAGE attribute added matching LANGUAGE value
- Container metadata verified via ffprobe: mdhd box correctly contains language tags (e.g. "fra")
- Audio segment content verified via ffplay: correct audio in each language file
- French audio source file contains correct French audio content
- Issue reproduces in native Safari on iOS 26, confirming it is not a THEOplayer-specific bug
- Issue does NOT reproduce on iOS 17/18 with the same manifest and segments

Additional notes

The VOD stream is packaged with AWS MediaConvert, CMAF output group, SEGMENTED_FILES, AAC-LC codec (mp4a.40.2), 128kbps, 48kHz stereo. English uses AudioTrackType ALTERNATE_AUDIO_AUTO_SELECT_DEFAULT; French and German use ALTERNATE_AUDIO_AUTO_SELECT. The live stream uses AWS MediaPackage with a similar CMAF/HLS output configuration.

Please advise whether this is a known regression in AVFoundation on iOS 26 and whether a fix is planned.
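For anyone reproducing this outside Safari, a minimal sketch of the AVFoundation-level selection path where the regression should surface; these are public AVFoundation APIs, with the player assumed to be playing the manifest above:

import AVFoundation

// Hedged sketch: manually select the French alternate audio rendition.
func selectFrench(on player: AVPlayer) async throws {
    guard let item = player.currentItem,
          let group = try await item.asset.loadMediaSelectionGroup(for: .audible) else { return }

    // Filter the group's options down to the fr locale.
    let french = AVMediaSelectionGroup.mediaSelectionOptions(
        from: group.options,
        with: Locale(identifier: "fr")
    ).first

    if let french {
        item.select(french, in: group)  // expected: audio switches to French
    }
}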
Replies: 1 · Boosts: 0 · Views: 110 · Activity: 3w
AU MIDI Plugin UI not showing
Hello, I am having an issue where a very small percentage of our users cannot view the UI of our MIDI plugin Chord Prism. I have looked this up and seen that, for AU Instrument and Effect plugins, it has been resolved within Logic by switching out of the "Controls" view, but my situation is different: there is no option to switch out of the "Controls" view for what is displayed. Is this something that can be fixed by adjusting settings within Logic?
Replies: 0 · Boosts: 0 · Views: 150 · Activity: 3w
Using StoreKit from an AUv3 plugin that can be loaded in-process
I have a bunch of Audio Unit v3 plugins that are approaching release, and I was considering subscription-model pricing, as I have done in a soon-to-be-released iOS app. However, whether this is possible is not at all obvious. Specifically:

The plugin can, depending on the host app, be loaded in-process or out-of-process. Yes, I know Logic Pro and GarageBand will not load a plug-in in-process anymore, but I am not going to rule that out for other audio apps and force on them the overhead of IPC (I spent two solid weeks deciphering the process to actually make it possible for an AUv3 to run in-process; see this example with notes: https://github.com/timboudreau/audio_unit_rust_demo).

Depending on how it is loaded, the value of Bundle.main.bundleIdentifier will vary. If I use the StoreKit API, will it return product results for my bundle identifier when called as a library from a foreign application? I would expect it to be a major security hole if random apps could query the purchases of other random apps, so I assume not.

Even if I restricted the plugins to running out-of-process, I have to set up the in-app purchases on the App Store for the app container's ID, not the extension's ID, and the extension is what runs; the outer app you purchase is just a toy demo that exists solely to register the audio unit.

I have similar questions with regard to MetricKit, which I would likewise like to use, but which may be running inside some random app. If there were some sort of signed token, or similar mechanism, that the running plugin extension could bundle or acquire to ensure that both StoreKit and MetricKit treat purchases and metrics as if accessed from the container app, that would be very helpful.

This is the difference between a one-and-done sales model and something that provides ongoing revenue to maintain these products. I am a one-person shop; if I price these products where they would need to be to pay the bills assuming a single sale per customer ever, the price will be too high for anyone to want to try products from a small vendor they've never heard of. So being able to offer a free trial period and then a subscription is the difference between this being a viable business or not.
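For reference, a minimal StoreKit 2 sketch of the entitlement check in question; the product identifier is a placeholder, and whether this behaves correctly when called from an extension loaded in-process inside a foreign host is exactly the open question:

import StoreKit

// Hedged sketch: check for an active subscription with StoreKit 2.
func hasActiveSubscription() async -> Bool {
    // "com.example.plugin.pro.monthly" is a hypothetical product ID.
    for await result in Transaction.currentEntitlements {
        if case .verified(let transaction) = result,
           transaction.productID == "com.example.plugin.pro.monthly",
           transaction.revocationDate == nil {
            return true
        }
    }
    return false
}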
Replies: 10 · Boosts: 0 · Views: 747 · Activity: 3w
10-Bit UVC on iPadOS
Hello, I've been very familiar with UVC support in iPadOS ever since it launched in iOS 17. A number of people use the software I've developed around UVC, and there are often queries about 8-bit vs. 10-bit. My understanding is that the newest UVC spec is 1.5, standardised in 2012, and almost every UVC capture card runs at 8-bit. The only 10-bit capture card on my radar is the AJA U-Tap SDI; however, it looks like this is 10-bit only up until the UVC part, where the 10-bit input is downsampled to 8-bit. I have read in certain places that it works as a 10-bit capture card on macOS but not on iPadOS.

I was just wondering if 10-bit via UVC is even possible on iPadOS. If a true 10-bit source were passed into an iPad, would iPadOS allow it, or would it be downsampled by AVFoundation so it can show up as a valid external video input? All USB capture cards I have encountered use one of the following formats:

kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
kCVPixelFormatType_32BGRA

So if a UVC device delivered a 10-bit format, would that be accessible on iPadOS, or would it fall back to these 8-bit formats by default? Thanks!
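For anyone who wants to check empirically, a small sketch that enumerates an external camera's formats on iPadOS 17+ and prints each pixel format FourCC; if a 10-bit format such as 'x420' ever appeared, it would be visible in this output:

import AVFoundation
import CoreMedia

// Hedged sketch: list every format an external (UVC) video device offers.
func dumpExternalCameraFormats() {
    let discovery = AVCaptureDevice.DiscoverySession(deviceTypes: [.external],
                                                     mediaType: .video,
                                                     position: .unspecified)
    for device in discovery.devices {
        for format in device.formats {
            let desc = format.formatDescription
            let dims = CMVideoFormatDescriptionGetDimensions(desc)
            let fourCC = CMFormatDescriptionGetMediaSubType(desc)
            print(device.localizedName, "\(dims.width)x\(dims.height)", fourCCString(fourCC))
        }
    }
}

// Render a FourCharCode such as '420v' as text.
func fourCCString(_ code: FourCharCode) -> String {
    let bytes = [24, 16, 8, 0].map { UInt8((code >> $0) & 0xFF) }
    return String(bytes: bytes, encoding: .ascii) ?? "????"
}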
Replies: 0 · Boosts: 0 · Views: 327 · Activity: 3w
SpeechAnalyzer.start(inputSequence:) fails with _GenericObjCError nilError, while the same WAV succeeds with start(inputAudioFile:)
I'm trying to use the new Speech framework for streaming transcription on macOS 26.3, and I can reproduce a failure with SpeechAnalyzer.start(inputSequence:).

What is working:
- SpeechAnalyzer + SpeechTranscriber offline path using start(inputAudioFile:finishAfterFile:)
- the same Spanish WAV file transcribes successfully and returns a coherent final result

What is not working:
- SpeechAnalyzer + SpeechTranscriber stream path using start(inputSequence:)
- the same WAV, replayed as AnalyzerInput(buffer:bufferStartTime:)
- fails once replay starts with: _GenericObjCError domain=Foundation._GenericObjCError code=0 detail=nilError

I also tried:
- DictationTranscriber instead of SpeechTranscriber
- no realtime pacing during replay

Both still fail in stream mode with the same error, so this does not currently look like a ScreenCaptureKit issue or a Python integration issue. I reduced it to a pure Swift CLI repro.

Environment:
- macOS 26.3 (25D122)
- Xcode 26.3
- Swift 6.2.4
- Apple Silicon Mac

Has anyone here gotten SpeechAnalyzer.start(inputSequence:) working reliably on macOS 26.x? If so, I'd be interested in any workaround or any detail that differs from the obvious setup:
- prepareToAnalyze(in:)
- bestAvailableAudioFormat(...)
- AnalyzerInput(buffer:bufferStartTime:)
- replaying a known-good WAV in chunks

I already filed Feedback Assistant: FB22149971
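For comparison, the streaming setup reduced to a sketch; the SpeechTranscriber initializer options follow Apple's published sample code, and everything else mirrors the steps listed above, so treat this as the shape of the repro rather than a verified-correct usage:

import Speech
import AVFoundation

// Hedged sketch of the failing stream path: feed PCM buffers through an
// AsyncStream of AnalyzerInput and read results from the transcriber.
func streamTranscribe(buffers: [AVAudioPCMBuffer]) async throws {
    let transcriber = SpeechTranscriber(locale: Locale(identifier: "es-ES"),
                                        transcriptionOptions: [],
                                        reportingOptions: [.volatileResults],
                                        attributeOptions: [.audioTimeRange])
    let analyzer = SpeechAnalyzer(modules: [transcriber])

    let (inputSequence, builder) = AsyncStream.makeStream(of: AnalyzerInput.self)
    Task {
        for buffer in buffers {
            builder.yield(AnalyzerInput(buffer: buffer))
        }
        builder.finish()
    }

    try await analyzer.start(inputSequence: inputSequence)

    for try await result in transcriber.results {
        print(String(result.text.characters))
    }
}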
Replies: 1 · Boosts: 0 · Views: 342 · Activity: 3w
How to get iCloud item (photo/video) size?
How can I get an iCloud photo's file size? Could I use a private API like this in prod? Does anyone know another way (without downloading the file to the device)?

func getFileSize(asset: PHAsset) -> Int64? {
    let resources = PHAssetResource.assetResources(for: asset)
    let resource = resources.first
    let size = resource?.value(forKey: "fileSize") as? Int64
    return size
}
Replies: 1 · Boosts: 0 · Views: 130 · Activity: 3w
Implementing PHBackgroundResourceUploadExtension
Hi, I am trying to implement a PHBackgroundResourceUploadExtension to upload backup media files to an external cloud service, based on these docs: https://developer.apple.com/documentation/PhotoKit/uploading-asset-resources-in-the-background#Acknowledge-completed-jobs Creating jobs and the actual uploading work as expected, but the problem I have is in the acknowledgeCompletedJobs() function. When trying to access a job's resource, the resource is nil and thus has an empty assetLocalIdentifier and originalFilename. Did anybody successfully implement this extension, or know why this would happen? Because the resource of an acknowledgeable job is empty, I cannot match it back to my processed assets.
Replies: 1 · Boosts: 0 · Views: 229 · Activity: 3w
Implementing PHBackgroundResourceUploadExtension
Hi, I am trying to implement a PHBackgroundResourceUploadExtension to upload backup media files to an external cloud service, based on these docs: https://developer.apple.com/documentation/PhotoKit/uploading-asset-resources-in-the-background#Acknowledge-completed-jobs Creating jobs and the actual uploading work as expected, but the problem I have is in the acknowledgeCompletedJobs() function. In this function, when I try to access a job's resource, the resource is nil and thus has an empty assetLocalIdentifier and originalFilename. Did anybody successfully implement this extension, or know why this would happen? Because the resource of an acknowledgeable job is empty, I cannot match it back to my processed assets.
Replies: 0 · Boosts: 0 · Views: 87 · Activity: 3w
Unity iOS (Metal) → WebRTC (Unity WebRTC) video stream to remote Unity client: PeerConnection connects but receiver renders black frames
Hello! My name is Mason Prather. I'm a graduate student at Kennesaw State University and a Research Engineer working in XR environments through my Graduate Research Assistant role. I’m currently building a research prototype that connects a mobile companion application to a VR headset. The mobile application is built in Unity and deployed on iOS, and it streams video frames to a remote Unity client using WebRTC.

Environment
- Device: iPhone 15
- OS: iOS 26.3 (tested on physical device, not Simulator)
- Engine: Unity 2022.3.57f1
- Graphics API: Metal
- Streaming Technology: WebRTC (Unity WebRTC package)
- Architecture: Mobile Unity app streaming video frames to a remote Unity client
- Receiver Device: Meta Quest Pro headset (Unity application)
- Networking: LAN (UDP discovery + TCP signaling)
- Video Source: Unity RenderTexture

Goal

The goal of the system is to allow a VR user to view media stored on their phone inside a VR environment. The iOS app renders or captures media content, converts frames into a WebRTC video track, and streams the video to the headset.

Current Status

Connection setup works correctly:
- Signaling connection successful
- ICE candidate exchange successful
- PeerConnection state becomes Connected
- Video track created successfully

However, the receiving application displays black frames.

iOS App Details

The video source originates from a Unity RenderTexture. Inside the phone application, the RenderTexture displays correctly and frames appear correct locally, but the receiving peer does not display the frames.

Relevant components: Unity WebRTC package, iOS Metal rendering pipeline, custom TCP signaling, LAN discovery via UDP.

Expected behavior: rendered frames should transmit via WebRTC and appear on the remote device.
Actual behavior: the remote video track is active, but the rendered frames appear black on the receiving client.

Questions
- Are there known issues involving Unity WebRTC + iOS Metal texture capture?
- Are there specific pixel format requirements when streaming textures from Unity on iOS?
- Could the issue relate to texture readback limitations or GPU synchronization?

I am more than happy to provide screenshots and console logs upon request. If anyone has experience streaming Unity video frames via WebRTC on iOS, I would greatly appreciate any guidance.
Replies: 0 · Boosts: 0 · Views: 210 · Activity: 3w
AVAssetDownloadConfiguration: How many video variants are actually downloaded when multiple variants exist in the HLS master playlist?
Hi, I’m trying to better understand how AVAssetDownloadConfiguration selects video variants when downloading HLS content for offline playback.

Suppose I have an HLS master playlist (.m3u8) that contains several video variants defined with #EXT-X-STREAM-INF. For example, the master playlist may contain multiple video streams with the same resolution but different BANDWIDTH, or with different resolutions (for example 720p, 1080p, etc.).

My question is: how many video variants are actually downloaded when using AVAssetDownloadConfiguration without specifying any variantQualifiers? In other words:
- If the master playlist contains multiple video variants, will the download task fetch only one variant, or multiple variants?
- Does the behavior differ depending on whether the variants differ only by BANDWIDTH or also by RESOLUTION?

What I observed in testing

In my tests, I always end up with only one video variant downloaded, specifically the one with the highest BANDWIDTH. In the m3u8 files I tested, all video variants had identical parameters (resolution, codec, frame rate, etc.) and differed only by the BANDWIDTH attribute in the master playlist. However, when inspecting the downloaded .movpkg, I noticed something interesting in boot.xml. It lists two video streams:
- one with complete="true" (the one with the highest bandwidth)
- another with complete="no" (the one with the lowest bandwidth)

I actually had 3 video streams listed in the m3u8, but the one with the middle bandwidth wasn't listed in boot.xml at all. There are also additional streams for audio and subtitles in boot.xml. This made me wonder whether the system initially attempts to download another video variant (possibly a lower-bitrate one), but then switches to the highest-quality variant and only completes that one.

Additional question about variantQualifiers

If I provide a predicate such as NSPredicate(format: "peakBitRate > 0"), which should theoretically match all variants, will the download task attempt to download all matching video variants, or will it still select only one? (A sketch of this setup follows below.)

Summary

So the main questions are:
1. Without variantQualifiers, does AVAssetDownloadConfiguration always download a single video variant, and if so, how is it chosen?
2. Does the behavior differ if variants have different resolutions vs. only different bitrates?
3. When a predicate matches multiple variants, can multiple video variants actually be downloaded in a single .movpkg?
4. Why might boot.xml list multiple video streams when only one appears to be fully downloaded?

Any clarification on the intended behavior would be greatly appreciated. Thanks!
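A hedged sketch of the qualifier setup described above; the URL, title, and bitrate threshold are placeholders:

import AVFoundation

// Hedged sketch: constrain variant selection for an offline HLS download.
func makeDownloadTask(session: AVAssetDownloadURLSession) -> AVAssetDownloadTask {
    let asset = AVURLAsset(url: URL(string: "https://example.com/master.m3u8")!)
    let config = AVAssetDownloadConfiguration(asset: asset, title: "Example download")

    // Qualifiers restrict which variants are eligible; in my testing the task
    // still appears to resolve to a single video variant from that set.
    let qualifier = AVAssetVariantQualifier(
        predicate: NSPredicate(format: "peakBitRate > %d", 3_000_000)
    )
    config.primaryContentConfiguration.variantQualifiers = [qualifier]

    return session.makeAssetDownloadTask(downloadConfiguration: config)
}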
Replies: 1 · Boosts: 0 · Views: 293 · Activity: 4w
Swift Array Out of Bounds Crash in VTFrameProcessor when using VTLowLatencyFrameInterpolationParameters
Hi everyone, our team is encountering a reproducible crash when using VTLowLatencyFrameInterpolation on iOS 26.3 while processing a live LL-HLS input stream.

🤖 Environment
- Device: iPhone 16
- OS: iOS 26.3
- Xcode: Xcode 26.3
- Framework: VideoToolbox

💥 Crash Details

The application crashes with the following fatal error:

Fatal error: Swift/ContiguousArrayBuffer.swift:184: Array index out of range

The stack trace highlights the following:
- VTLowLatencyFrameInterpolationImplementation processWithParameters:frameOutputHandler:
- Called from VTFrameProcessor.process(parameters:)

Here is the simplified implementation block where the crash occurs. (Note: PrismSampleBuffer and PrismLLFIError are our internal custom wrapper types.)

// Create `VTFrameProcessorFrame` for the source (previous) frame.
let sourcePTS = sourceSampleBuffer.presentationTimeStamp
var sourceFrame: VTFrameProcessorFrame?
if let pixelBuffer = sourceSampleBuffer.imageBuffer {
    sourceFrame = VTFrameProcessorFrame(buffer: pixelBuffer, presentationTimeStamp: sourcePTS)
}

// Validate the source VTFrameProcessorFrame.
guard let sourceFrame else {
    throw PrismLLFIError.missingImageBuffer
}

// Create `VTFrameProcessorFrame` for the next frame.
let nextPTS = nextSampleBuffer.presentationTimeStamp
var nextFrame: VTFrameProcessorFrame?
if let pixelBuffer = nextSampleBuffer.imageBuffer {
    nextFrame = VTFrameProcessorFrame(buffer: pixelBuffer, presentationTimeStamp: nextPTS)
}

// Validate the next VTFrameProcessorFrame.
guard let nextFrame else {
    throw PrismLLFIError.missingImageBuffer
}

// Calculate interpolation intervals and allocate destination frame buffers.
let intervals = interpolationIntervals()
let destinationFrames = try framesBetween(firstPTS: sourcePTS, lastPTS: nextPTS, interpolationIntervals: intervals)
let interpolationPhase: [Float] = intervals.map { Float($0) }

// Create VTLowLatencyFrameInterpolationParameters.
// This sets up the configuration required for temporal frame interpolation
// between the previous and current source frames.
guard let parameters = VTLowLatencyFrameInterpolationParameters(
    sourceFrame: nextFrame,
    previousFrame: sourceFrame,
    interpolationPhase: interpolationPhase,
    destinationFrames: destinationFrames
) else {
    throw PrismLLFIError.failedToCreateParameters
}

try await send(sourceSampleBuffer)

// Process the frames.
// Using the progressive callback here to get each processed frame as soon as
// it's ready, preventing the system from waiting for the entire batch to finish.
for try await readOnlyFrame in self.frameProcessor.process(parameters: parameters) {
    // Create an interpolated sample buffer based on the output frame.
    let newSampleBuffer: PrismSampleBuffer = try readOnlyFrame.frame.withUnsafeBuffer { pixelBuffer in
        try PrismLowLatencyFrameInterpolation.createSampleBuffer(from: pixelBuffer, readOnlyFrame.timeStamp)
    }
    // Pass the newly generated frame to the output stream.
    try await send(newSampleBuffer)
}

🙋 Questions
1. Are there any known limitations or bugs regarding VTLowLatencyFrameInterpolation when handling live 60fps streams?
2. Are there any undocumented constraints we should be aware of regarding source/previous frame timing, pixel buffer attributes, or how the destinationFrames and interpolationPhase arrays must be allocated?
3. Is a "warm-up" sequence recommended after startSession() before making the first process(parameters:) call?
Replies: 1 · Boosts: 0 · Views: 458 · Activity: 4w
Final Cut Pro not loading my plugin
I have exhausted standard debugging approaches and need guidance on Final Cut Pro's AU plugin loading behavior.

PLUGIN OVERVIEW

My plugin is an AUv2 audio plugin built with JUCE 8. The plugin loads and functions correctly in: Pro Tools (AAX), Media Composer (AAX), Reaper (AU + VST3), Logic Pro (AU), GarageBand (AU), Audacity (AU), DaVinci Resolve / Fairlight (AU), Harrison Mixbus (AU), Ableton Live (AU), Cubase (VST3), Nuendo (VST3). It does NOT load in Final Cut Pro (tested on 10.8.x, macOS 14.6 and 15.2).

DIAGNOSTICS COMPLETED
- auval passes cleanly: auval -v aufx Hwhy Manu → AU VALIDATION SUCCEEDED
- Plugin is notarized and stapled: xcrun stapler validate → The validate action worked. spctl --assess --type exec returns 'not an app', which we understand is expected for .component bundles.
- Hardened Runtime is enabled on the bundle.
- We identified that our Info.plist contained a 'resourceUsage' dictionary in the AudioComponents entry. We found via system logs that this was setting au_componentFlags = 2 (kAudioComponentFlag_SandboxSafe), causing FCP to attempt loading the plugin in-process inside its sandbox, where our UDP networking is denied. We removed the resourceUsage dict, confirmed au_componentFlags = 0 in the CAReportingClient log, and FCP now loads the plugin out-of-process via AUHostingServiceXPC_arrow.
- Despite au_componentFlags = 0 and out-of-process loading confirmed, the plugin still does not appear in FCP's effects browser.
- We also identified and fixed a channel layout issue: our isBusesLayoutSupported was not returning true for 5-channel layouts (which FCP uses internally). This is now fixed.
- AU cache has been fully cleared: ~/Library/Caches/com.apple.audio.AudioComponentRegistrar and /Library/Caches/com.apple.audio.AudioComponentRegistrar removed; coreaudiod and AudioComponentRegistrar killed to force a rescan; auval -a run to force re-registration.
Replies: 0 · Boosts: 0 · Views: 215 · Activity: 4w
Remote control of DRM audio - need to customise
I'm using MusicKit for DRM track playback in my iOS app, and a third-party library to play local user-owned music from the file system and the music library. The app also supports accessory devices that offer Bluetooth remote media control. The goal is parity between how the remote interacts with user-owned music and with the DRM / cloud / Apple Music tracks in my application's music player. Track navigation, app volume (rather than system volume), and scrubbing need to work consistently across a mix of tracks that could alternate DRM and cloud status within one album or playlist. Apple Music's queue and track pickers are not useful tools in my app. How can I support playing DRM and Apple Music tracks without surrendering the remote-control features to the system?
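For what it's worth, a minimal MediaPlayer sketch of taking explicit ownership of remote commands and now-playing info; whether this coexists cleanly with MusicKit's player for DRM tracks is the unresolved part of the question:

import MediaPlayer

// Hedged sketch: route accessory/Bluetooth remote commands to app-level
// handlers and publish app-owned now-playing metadata.
func installRemoteCommands(play: @escaping () -> Void, pause: @escaping () -> Void) {
    let center = MPRemoteCommandCenter.shared()

    center.playCommand.addTarget { _ in
        play()
        return .success
    }
    center.pauseCommand.addTarget { _ in
        pause()
        return .success
    }

    MPNowPlayingInfoCenter.default().nowPlayingInfo = [
        MPMediaItemPropertyTitle: "Current track",          // placeholder metadata
        MPNowPlayingInfoPropertyPlaybackRate: 1.0
    ]
}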
Replies: 0 · Boosts: 0 · Views: 118 · Activity: 4w
offline with FairPlay
I am working on offline license support for FairPlay. I am trying to identify offline vs. live streaming requests. How do I do that? Is there any way to identify the request type (offline or streaming) from SPC?
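On the app side, one place the offline vs. streaming distinction is visible before the SPC is generated is the delegate callback type; both overloads below are public AVContentKeySessionDelegate API, so this is a sketch of the client half rather than an answer for the key server:

import AVFoundation

// Hedged sketch: offline (persistable) key requests arrive via a distinct
// delegate overload; streaming requests arrive as plain AVContentKeyRequest.
final class KeyDelegate: NSObject, AVContentKeySessionDelegate {
    func contentKeySession(_ session: AVContentKeySession,
                           didProvide keyRequest: AVContentKeyRequest) {
        // Streaming path: build and send the SPC for a regular key.
    }

    func contentKeySession(_ session: AVContentKeySession,
                           didProvide keyRequest: AVPersistableContentKeyRequest) {
        // Offline path: the SPC produced here requests a persistable key.
    }
}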
Replies: 0 · Boosts: 1 · Views: 70 · Activity: 3w
Audio System Trace: Zero Time Stamp
In Instruments, I'm seeing "Zero Time Stamp" events in the "Audio Server" lane. What does that mean?
Replies: 1 · Boosts: 0 · Views: 174 · Activity: 3w