Integrate video and other forms of moving visual media into your apps.

Posts under Video tag

41 Posts

AVKit crash when rendering AVPlayerView controls — macOS 26.4 regression
Description
Our app, Octory, allows users to create onboarding and communication workflows composed of slides containing various UI components, including embedded video players powered by AVPlayerView. Since macOS 26.4 Beta, the app crashes at launch whenever a workflow contains a video component. Workflows without video components load and render without issue, which points to a regression in AVKit's player control rendering pipeline. Has anyone seen similar behaviour when using AVKit, or is it something we are not doing properly?

Expected Behavior
The app opens and renders the workflow, including the embedded video component.

Actual Behavior
The app launches briefly and immediately crashes with SIGABRT on the main thread.

Crash Analysis
Key takeaways for anyone investigating: the root cause is an abort() inside NSImageSymbolConfiguration. The crash occurs entirely within Apple frameworks, with no third-party code in the faulting call chain (aside from the app's entry point). The sequence is:
1. AVPlayerItem finishes loading and fires a KVO notification that it is ready to play (_updateCanPlayAndCanStepPropertiesWhenReadyToPlayWithNotificationPayload:).
2. The KVO chain propagates through NSKeyValueDidChange up to AVPlayerView, which calls _updateVideoGravityType.
3. AVPlayerControlsViewController responds by calling _updateZoomButtonImage, which asks AVPlayerControlsConfigurator for a configured SF Symbol via +[NSImage(AVAdditions) avkit_imageWithSymbolName:textStyle:scale:accessibilityDescription:].
4. The symbol rendering hits -[NSImageSymbolConfiguration _getEffectivePointSize:glyphWeight:glyphSize:backfilledWithFont:scale:], which calls abort().

The entire crash stack is in AppKit (NSImage / NSImageSymbolConfiguration) and AVKit — no application code is involved in the faulting path. The abort() suggests a precondition or assertion failure inside the symbol configuration logic, possibly due to an invalid or nil font/text style being passed during the zoom button image setup. The crash is triggered automatically the moment an AVPlayerItem becomes ready to play and AVKit attempts to render its transport controls; there is no way for the application to intercept or work around this.

Relevant stack frames (Thread 0 — main thread):
3  AppKit   -[NSImageSymbolConfiguration _getEffectivePointSize:glyphWeight:glyphSize:backfilledWithFont:scale:] + 440  ← abort() here
4  AppKit   -[NSImageSymbolRepProvider _bestRepresentationForImage:hints:] + 404
11 AVKit    +[NSImage(AVAdditions) avkit_imageWithSymbolName:textStyle:scale:accessibilityDescription:] + 332
12 AVKit    -[AVPlayerControlsConfigurator configuredSymbolForImageName:] + 92
13 AVKit    -[AVPlayerControlsViewController _updateZoomButtonImage] + 160
14 AVKit    -[AVPlayerControlsViewController setVideoGravityType:] + 52
15 AVKit    -[AVPlayerView _updateVideoGravityType] + 1056
28 AVFCore  -[AVPlayerItem didChangeValueForKey:] + 56
29 AVFCore  -[AVPlayerItem _updateCanPlayAndCanStepPropertiesWhenReadyToPlayWithNotificationPayload:updateStatusToReadyToPlay:] + 660

Additional Notes
Removing the video component from the workflow (i.e. not instantiating AVPlayerView) resolves the crash entirely. The crash is 100% reproducible on every launch. This behavior was not present on macOS 26.3 or any prior release. The app was not recompiled — the same binary that works on 26.3 crashes on 26.4 Beta and 26.4.

Environment
OS: macOS 26.4
Hardware: MacBook Pro M1 (MacBookPro17,1)
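For anyone trying to reproduce this, a minimal sketch based on the description above — nothing beyond embedding an AVPlayerView and letting an item reach ready-to-play ("sample.mp4" is a placeholder):

import AVKit
import AppKit

final class CrashReproViewController: NSViewController {
    private let playerView = AVPlayerView()

    override func viewDidLoad() {
        super.viewDidLoad()
        playerView.frame = view.bounds
        playerView.autoresizingMask = [.width, .height]
        view.addSubview(playerView)

        // Any local or remote asset; "sample.mp4" is a placeholder name.
        guard let url = Bundle.main.url(forResource: "sample", withExtension: "mp4") else { return }
        // Per the report, the crash fires as soon as the item transitions to
        // .readyToPlay and AVKit configures its control symbols — no play()
        // call is required.
        playerView.player = AVPlayer(url: url)
    }
}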
Replies: 2 · Boosts: 0 · Views: 39 · Activity: 8h
How to enter Picture-in-Picture on background from inline playback in WKWebView
I'm building a Capacitor iOS app with a plain <video> element playing an MP4 file inline. I want Picture-in-Picture to activate automatically when the user goes home — swipe up from the bottom edge of the screen (on an iPhone with Face ID) or press the Home button (on an iPhone with a Home button). Fullscreen → background works perfectly — iOS automatically enters Picture-in-Picture. But I need this to work from inline playback without requiring the user to enter fullscreen first.

Setup

<video id="video" playsinline autopictureinpicture controls
       src="http://podcasts.apple.com/resources/462787156.mp4">
</video>

// AppDelegate.swift
let audioSession = AVAudioSession.sharedInstance()
try? audioSession.setCategory(.playback, mode: .moviePlayback)
try? audioSession.setActive(true)

UIBackgroundModes: audio in Info.plist. allowsPictureInPictureMediaPlayback is true (Apple default). iOS 26.3.1, WKWebView via Capacitor.

What I've tried

1. autopictureinpicture attribute

<video playsinline autopictureinpicture ...>

WKWebView doesn't honor this attribute from inline playback. It only works when transitioning from fullscreen.

2. requestPictureInPicture() on visibilitychange

document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden' && !video.paused) {
    video.requestPictureInPicture();
  }
});

Result: Fails with "not triggered by user activation". The visibilitychange event doesn't count as a user gesture.

3. webkitSetPresentationMode('picture-in-picture') on visibilitychange

document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden' && !video.paused) {
    video.webkitSetPresentationMode('picture-in-picture');
  }
});

Result: No error thrown. The webkitpresentationmodechanged event fires with value picture-in-picture, but the PiP window never actually appears. The API silently accepts the call but nothing renders.

4. await play() then webkitSetPresentationMode

document.addEventListener('visibilitychange', async () => {
  if (document.visibilityState === 'hidden') {
    await video.play();
    video.webkitSetPresentationMode('picture-in-picture');
  }
});

Result: play() succeeds (audio resumes in background), but PiP still doesn't open.

5. Auto-resume on system pause + PiP on visibilitychange

iOS fires pause before visibilitychange when backgrounding. I tried resuming in the pause handler, then requesting PiP in visibilitychange:

video.addEventListener('pause', () => {
  if (document.visibilityState === 'hidden') {
    video.play(); // auto-resume
  }
});
document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden' && !video.paused) {
    video.webkitSetPresentationMode('picture-in-picture');
  }
});

Result: Audio resumes successfully, but PiP still doesn't open.

6. Native JS eval from applicationDidEnterBackground

func applicationDidEnterBackground(_ application: UIApplication) {
    webView?.evaluateJavaScript(
        "document.querySelector('video').requestPictureInPicture()"
    )
}

Result: Same failure — no user activation context.

Observations

The event order on background is: pause → visibility: hidden.
webkitSetPresentationMode reports success (event fires, no error) but the PiP window never renders.
requestPictureInPicture() consistently requires user activation, even from native JS eval.
Audio can be resumed in background via play(), but PiP is a separate gate.
Fullscreen → background automatically enters Picture-in-Picture, confirming the WKWebView PiP infrastructure is functional.

Question

Is there any way to programmatically enter PiP from inline playback when a WKWebView app goes to background? Or is this intentionally restricted by WebKit to fullscreen-only transitions? Any pointers appreciated. Thanks!
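Since WebKit appears to gate inline PiP on user activation, one avenue worth testing — a sketch, not a confirmed fix — is driving PiP from the native side with AVPictureInPictureController, whose canStartPictureInPictureAutomaticallyFromInline property (iOS 14.2+) lets the system start PiP on backgrounding without a gesture. This assumes you can render the media in a native AVPlayerLayer instead of the <video> element:

import AVKit
import UIKit

final class NativePiPController: NSObject, AVPictureInPictureControllerDelegate {
    private let player: AVPlayer
    private let playerLayer: AVPlayerLayer
    private var pipController: AVPictureInPictureController?

    init(url: URL, hostView: UIView) {
        player = AVPlayer(url: url)
        playerLayer = AVPlayerLayer(player: player)
        super.init()

        playerLayer.frame = hostView.bounds
        hostView.layer.addSublayer(playerLayer)

        guard AVPictureInPictureController.isPictureInPictureSupported() else { return }
        pipController = AVPictureInPictureController(playerLayer: playerLayer)
        pipController?.delegate = self
        // Lets the system start PiP when the app is backgrounded,
        // without an explicit user gesture (iOS 14.2+).
        pipController?.canStartPictureInPictureAutomaticallyFromInline = true
        player.play()
    }
}

The trade-off is giving up the in-page <video> rendering while PiP eligibility matters; whether that fits a Capacitor layout is app-specific.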
Replies: 1 · Boosts: 2 · Views: 620 · Activity: 1w
AVAssetDownloadConfiguration: How many video variants are actually downloaded when multiple variants exist in the HLS master playlist?
Hi, I’m trying to better understand how AVAssetDownloadConfiguration selects video variants when downloading HLS content for offline playback.

Suppose I have an HLS master playlist (.m3u8) that contains several video variants defined with #EXT-X-STREAM-INF — for example, multiple streams with the same resolution but different BANDWIDTH, or with different resolutions (for example 720p, 1080p, etc.).

My question is: how many video variants are actually downloaded when using AVAssetDownloadConfiguration without specifying any variantQualifiers? In other words: if the master playlist contains multiple video variants, will the download task fetch only one variant, or multiple variants? And does the behavior differ depending on whether the variants differ only by BANDWIDTH or also by RESOLUTION?

What I observed in testing

In my tests, I always end up with only one video variant downloaded — specifically the one with the highest BANDWIDTH. In the m3u8 files I tested, all video variants had identical parameters (resolution, codec, frame rate, etc.) and differed only by the BANDWIDTH attribute in the master playlist. However, when inspecting the downloaded .movpkg, I noticed something interesting in boot.xml. It lists two video streams: one with complete="true" (the one with the highest bandwidth) and another with complete="no" (the one with the lowest bandwidth). I actually had 3 video streams listed in the m3u8, but the one with the middle bandwidth wasn't listed in boot.xml at all. There are also additional streams for audio and subtitles in boot.xml. This made me wonder whether the system initially attempts to download another video variant (possibly a lower-bitrate one), but then switches to the highest-quality variant and only completes that one.

Additional question about variantQualifiers

If I provide a predicate such as NSPredicate(format: "peakBitRate > 0"), which should theoretically match all variants, will the download task attempt to download all matching video variants, or will it still select only one?

Summary

So the main questions are:
1. Without variantQualifiers, does AVAssetDownloadConfiguration always download a single video variant, and if so, how is it chosen?
2. Does the behavior differ if variants have different resolutions vs. only different bitrates?
3. When a predicate matches multiple variants, can multiple video variants actually be downloaded in a single .movpkg?
4. Why might boot.xml list multiple video streams when only one appears to be fully downloaded?

Any clarification on the intended behavior would be greatly appreciated. Thanks!
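For reference, a sketch of the configuration being asked about (URL, session, and title are placeholders); the open question is precisely how this predicate interacts with variant selection:

import AVFoundation

func makeDownloadTask(session: AVAssetDownloadURLSession, url: URL) -> AVAssetDownloadTask {
    let asset = AVURLAsset(url: url)
    let config = AVAssetDownloadConfiguration(asset: asset, title: "Offline item")
    // A qualifier that should, in principle, match every variant.
    let matchAll = AVAssetVariantQualifier(predicate: NSPredicate(format: "peakBitRate > 0"))
    config.primaryContentConfiguration.variantQualifiers = [matchAll]
    return session.makeAssetDownloadTask(downloadConfiguration: config)
}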
Replies: 1 · Boosts: 0 · Views: 292 · Activity: 3w
CoreMotion QuickTake Video Data iOS 17.5.1
I am conducting a forensic examination of a video, and it would be helpful if anyone knew how to decode the LiveTrackInfo metadata of a QuickTake .MOV recorded on iOS 17.5.1. There are 27 different fields, and I am not sure what each one represents. Any help would be appreciated! Thank you! log-file
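As a starting point for inspection (not a decoder for the undocumented LiveTrackInfo fields themselves), a sketch that dumps every timed metadata item in the movie's metadata tracks:

import AVFoundation

func dumpTimedMetadata(from url: URL) async throws {
    let asset = AVURLAsset(url: url)
    let tracks = try await asset.loadTracks(withMediaType: .metadata)

    for track in tracks {
        let reader = try AVAssetReader(asset: asset)
        let output = AVAssetReaderTrackOutput(track: track, outputSettings: nil)
        reader.add(output)
        let adaptor = AVAssetReaderOutputMetadataAdaptor(assetReaderTrackOutput: output)
        reader.startReading()

        // Each group carries the metadata items valid for one time range.
        while let group = adaptor.nextTimedMetadataGroup() {
            for item in group.items {
                print(item.identifier?.rawValue ?? "unknown", "=", item.value ?? "nil")
            }
        }
    }
}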
Replies: 0 · Boosts: 0 · Views: 186 · Activity: Mar ’26
App Previews Upload Taking a Long, Long Time
Am I the only one wasting my time having preview videos processed? I've been waiting for hours. I wasted an hour just trying to sign out of an iCloud account yesterday. Now this process is taking a big chunk of my valuable time. Pardon me for saying it, Apple, Inc. But this is stupid and so unproductive.
Replies: 0 · Boosts: 0 · Views: 96 · Activity: Feb ’26
AVPlayerView. Internal constraints conflicts
I’m getting Auto Layout constraint conflict warnings related to AVPlayerView in my project. I’ve reproduced the issue on macOS Tahoe 26.2. The conflict appears to originate inside AVPlayerView itself, between its internal subviews, rather than in my own layout code. This issue can be easily reproduced in an empty project by simply adding an AVPlayerView as a subview using the code below.

class ViewController: NSViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let playerView = AVPlayerView()
        view.addSubview(playerView)
    }
}

After presenting that view controller, the following Auto Layout constraint conflict warnings appear in the console:

Conflicting constraints detected: <decode: bad range for [%@] got [offs:346 len:1057 within:0]>. Will attempt to recover by breaking <decode: bad range for [%@] got [offs:1403 len:81 within:0]>.
Unable to simultaneously satisfy constraints:
(
    "<NSLayoutConstraint:0xb33c29950 H:|-(0)-[AVDesktopPlayerViewContentView:0x10164dce0](LTR) (active, names: '|':AVPlayerView:0xb32ecc000 )>",
    "<NSLayoutConstraint:0xb33c299a0 AVDesktopPlayerViewContentView:0x10164dce0.right == AVPlayerView:0xb32ecc000.right (active)>",
    "<NSAutoresizingMaskLayoutConstraint:0xb33c62850 h=--& v=--& AVPlayerView:0xb32ecc000.width == 0 (active)>",
    "<NSLayoutConstraint:0xb33d46df0 H:|-(0)-[AVEventPassthroughView:0xb33cfb480] (active, names: '|':AVDesktopPlayerViewContentView:0x10164dce0 )>",
    "<NSLayoutConstraint:0xb33d46e40 AVEventPassthroughView:0xb33cfb480.trailing == AVDesktopPlayerViewContentView:0x10164dce0.trailing (active)>",
    "<NSLayoutConstraint:0xb33ef8320 NSGlassView:0xb33ed8c00.trailing == AVEventPassthroughView:0xb33cfb480.trailing - 6 (active)>",
    "<NSLayoutConstraint:0xb33ef8460 NSGlassView:0xb33ed8c00.width == 180 (active)>",
    "<NSLayoutConstraint:0xb33ef84b0 NSGlassView:0xb33ed8c00.leading >= AVEventPassthroughView:0xb33cfb480.leading + 6 (active)>"
)
Will attempt to recover by breaking constraint <NSLayoutConstraint:0xb33ef8460 NSGlassView:0xb33ed8c00.width == 180 (active)>
Set the NSUserDefault NSConstraintBasedLayoutVisualizeMutuallyExclusiveConstraints to YES to have -[NSWindow visualizeConstraints:] automatically called when this happens. And/or, set a symbolic breakpoint on LAYOUT_CONSTRAINTS_NOT_SATISFIABLE to catch this in the debugger.

Is it a system bug, or does someone know how to fix it? Thank you.
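A possible mitigation sketch, assuming the conflict stems from the autoresizing-mask constraint pinning the view's width to 0 before layout — opting into explicit Auto Layout gives AVKit's internal controls (the 180 pt NSGlassView in the log) a width they can satisfy:

import AVKit

class ViewController: NSViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        let playerView = AVPlayerView()
        // Opt out of the implicit NSAutoresizingMaskLayoutConstraint
        // (width == 0 in the log) and give the view a real size up front.
        playerView.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(playerView)

        NSLayoutConstraint.activate([
            playerView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            playerView.trailingAnchor.constraint(equalTo: view.trailingAnchor),
            playerView.topAnchor.constraint(equalTo: view.topAnchor),
            playerView.bottomAnchor.constraint(equalTo: view.bottomAnchor),
        ])
    }
}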
Replies: 2 · Boosts: 0 · Views: 817 · Activity: Jan ’26
LiveKit SDK + VisionPro
I’m developing an app for visionOS and testing it on AVP (visionOS 26), on iOS 17 and 26 devices, and in simulators (visionOS 2.5). The idea of the app is random video calls. For video calls I use the LiveKit SDK; at the moment the SDK version is 2.11.0. Maybe this will help clarify where the problem is:
1. A user presses the "Start" button, which calls matchingViewModel.startMatching(). After that, matchingViewModel.connectionState changes to .searching.
2. Another user presses "Start" and the same thing happens.
3. Then the API returns information for both users (myInfo, partners, roomId, myLiveKitToken). When a user receives the room information, matchingViewModel.connectionState changes to .connecting. At this point, the connection to LiveKit should happen.
4. In the MatchingWrapperView file, the handleChangeRoomIdOrPartners method checks whether it should connect to LiveKit (when partner information and a room ID are available) or disconnect from the room (when the partner ends the call).
5. The connectRoom method handles connecting to the LiveKit room, enables the camera, and runs emitHasConnectedToLiveKit() (matchingViewModel.connectionState changes to .connected, and information is sent to the other partner that the user has joined LiveKit).
When testing visionOS device + visionOS device, iPhone, or visionOS simulator, most of the time the video from the iPhone is not shown on the visionOS device. Less often, the video from the visionOS device is not shown on the iPhone or in the simulator. When testing iPhone + iPhone + visionOS simulator, everything usually works fine; occasionally the video doesn’t appear, but this happens much less often. Do you know why this issue occurs so much more frequently on visionOS? Here is all the code for the core functionality. If you need any additional code, please let me know. RoomModel.swift
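For context, a minimal sketch of the connect-and-publish step as the LiveKit Swift SDK 2.x exposes it (url and token come from your API; this is an assumption about how connectRoom is structured, not the poster's actual code):

import LiveKit

func connectRoom(url: String, token: String) async throws -> Room {
    let room = Room()
    try await room.connect(url: url, token: token)
    // Publishing the camera is a separate step from connecting; logging
    // which of the two throws on visionOS may help isolate the issue.
    try await room.localParticipant.setCamera(enabled: true)
    return room
}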
Replies: 2 · Boosts: 0 · Views: 136 · Activity: Jan ’26
Video cannot be found in the bundle error message
I am having difficulty getting my video to play in the app I am developing. My video file is located in the navigator pane, along with the other Swift files in the project, with an A to the right-hand side of the file, which I thought means it is available in the bundle. Unfortunately, an error saying the video cannot be found in the bundle shows at the bottom of my Xcode screen after previewing the project. The file names match. Any further troubleshooting material I find references the old Xcode app, so I am stuck.
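A quick way to confirm bundle membership at runtime — a sketch assuming the file is named Intro.mp4 (a placeholder); the usual culprit is the file's Target Membership checkbox in the File inspector rather than the name:

import AVKit

func makePlayer() -> AVPlayer? {
    // Returns nil when the resource was not copied into the app bundle,
    // e.g. because Target Membership is unchecked for this file.
    guard let url = Bundle.main.url(forResource: "Intro", withExtension: "mp4") else {
        print("Intro.mp4 is not in the bundle — check Target Membership / Copy Bundle Resources.")
        return nil
    }
    return AVPlayer(url: url)
}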
Replies: 2 · Boosts: 0 · Views: 322 · Activity: Jan ’26
iOS React Native: Can two WebRTC stacks (Wazo & Jitsi) share media?
Hi everyone, I’m building a React Native iOS app where I’m integrating Wazo (native WebRTC) and Jitsi (WebView / WebRTC).

Use case:
Wazo is used to maintain a background call session (mainly signaling + audio keep-alive).
Jitsi is used in the foreground for video calls.

Problem:
When Jitsi starts, it takes control of the microphone and camera. The Wazo call disconnects after ~5 minutes (likely due to media / audio session conflict). Even if Wazo audio/video is muted or tracks are disabled, the session still drops.

My questions:
1. Is it officially supported or recommended to run two WebRTC stacks (Wazo + Jitsi) simultaneously on iOS?
2. Can Wazo stay connected without active audio/video tracks while Jitsi uses mic/camera?
3. Is there a way to release Wazo media streams temporarily (but keep signaling alive) while Jitsi is loading or active?
4. Are there any AVAudioSession / background mode limitations on iOS that make this impossible by design?
5. If this is not supported, what is the recommended architecture (single WebRTC pipeline, switching media ownership, etc.)?

Environment: iOS (React Native), Wazo SDK (native WebRTC), Jitsi Meet (WebView), CallKit + PushKit enabled.

Any guidance, documentation, or real-world experience would be greatly appreciated. Thanks in advance 🙏
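On the AVAudioSession side, one thing worth ruling out — a sketch, not a confirmed fix — is whether the two stacks are fighting over an exclusive session; .mixWithOthers at least removes OS-level exclusivity between them:

import AVFAudio

func configureSharedAudioSession() throws {
    let session = AVAudioSession.sharedInstance()
    // .mixWithOthers keeps this session from deactivating the other
    // stack's audio; each WebRTC stack still manages its own I/O.
    try session.setCategory(.playAndRecord,
                            mode: .voiceChat,
                            options: [.mixWithOthers, .allowBluetooth])
    try session.setActive(true)
}

Note that both Wazo and Jitsi configure the session themselves, so whichever activates last wins unless this is applied after both initialize.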
Replies: 1 · Boosts: 0 · Views: 358 · Activity: Jan ’26
Use custom unlit shader for videomaterial
I'd like to have a little bit of control over the transparency of the VideoMaterial. Is there any way to prepare a ShaderGraph unlit shader and use it with the VideoMaterial?
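VideoMaterial itself doesn't expose shader hooks, but if the goal is just transparency, one sketch (RealityKit on visionOS; the plane size is a placeholder) is to fade the whole entity with OpacityComponent instead of a custom shader:

import RealityKit
import AVFoundation

func makeVideoPlane(player: AVPlayer) -> ModelEntity {
    let material = VideoMaterial(avPlayer: player)
    let plane = ModelEntity(mesh: .generatePlane(width: 1.6, height: 0.9),
                            materials: [material])
    // OpacityComponent fades the entire entity, video texture included —
    // a coarser knob than a shader graph, but no custom shader required.
    plane.components.set(OpacityComponent(opacity: 0.6))
    return plane
}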
Replies: 2 · Boosts: 0 · Views: 418 · Activity: Dec ’25
When does AVPlayer exclude _HLS_part query param in LL-HLS requests?
Hello, I'm investigating an issue with LL-HLS playback using AVPlayer, specifically during DVR Live seeking (seeking to a past time). I noticed that in certain seeking scenarios, AVPlayer sends a Blocking Playlist Reload request that includes the _HLS_msn parameter but is missing the _HLS_part parameter. While I understand this is compliant with the HLS spec, I would like to know the specific criteria AVPlayer uses to decide when to drop the _HLS_part parameter. Does AVPlayer intentionally omit the part info when it determines that loading a specific partial segment is unnecessary during a seek operation? Clarification on this behavior would help us greatly in debugging our stream delivery. Thanks in advance.
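For concreteness, the two request shapes being compared (the path and sequence numbers below are illustrative, not from the actual stream):

// Blocking reload pinned to a specific partial segment:
GET /live/media.m3u8?_HLS_msn=1234&_HLS_part=2

// Blocking reload pinned to the whole segment only — the form observed after a DVR seek:
GET /live/media.m3u8?_HLS_msn=1234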
Replies: 1 · Boosts: 0 · Views: 211 · Activity: Dec ’25
<<<< PlayerRemoteXPC >>>> signalled err=-12860 at <>:1519
Our app wraps AVPlayer in a custom player. After connecting an Apple Bluetooth headset, it immediately reports this error. Non-Apple Bluetooth headsets play fine. The strange part is that the player works normally in one of our apps but not in another. We are an educational app.
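A diagnostic sketch for narrowing this down — logging the route change that coincides with the error, and whatever AVPlayer surfaces publicly when playback fails:

import AVFoundation

// Logs the two things that usually coincide with err=-12860 reports:
// the Bluetooth route change and the error AVPlayer exposes.
func installPlaybackDiagnostics() {
    let center = NotificationCenter.default
    center.addObserver(forName: AVAudioSession.routeChangeNotification,
                       object: nil, queue: .main) { note in
        let reason = note.userInfo?[AVAudioSessionRouteChangeReasonKey] ?? "?"
        let outputs = AVAudioSession.sharedInstance().currentRoute.outputs
        print("Route change reason \(reason), outputs: \(outputs.map(\.portType.rawValue))")
    }
    center.addObserver(forName: AVPlayerItem.failedToPlayToEndTimeNotification,
                       object: nil, queue: .main) { note in
        let error = note.userInfo?[AVPlayerItemFailedToPlayToEndTimeErrorKey] ?? "unknown"
        print("Playback failed: \(error)")
    }
}

Comparing the audio session category and options the two apps set at the moment the headset connects would be the next step, since the same wrapper behaves differently between them.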
Replies: 0 · Boosts: 0 · Views: 355 · Activity: Nov ’25
General iOS/iPadOS 26 decoding bug: MP4 unexpectedly hangs, video image frozen, audio goes on
Playback of any kind of HD H.264 MP4 files (720p, 50fps) could randomly cause a stalled image. Playback does not stop in such a case. Image is frozen/stalled. Audio goes on. Timeline goes on. By tapping play/pause or scrubbing in the timeline, the playback recovers. It could also happen, if you are scrubbing in the timeline, especially to areas not loaded already (progressive MP4 download). Behaviour is always the same: image is stalled/frozen, audio goes on. To reproduce: use example project https://developer.apple.com/documentation/AVKit/playing-video-content-in-a-standard-user-interface Example file: https://www.keepinmind.info/test.mp4
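Since tapping play/pause or scrubbing recovers playback, a programmatic version of that nudge can serve as a stopgap. There's no obvious notification for a video-only freeze (audio and timeline keep running), but the scrub itself is easy to reproduce — a sketch:

import AVFoundation

// Mimics the manual scrub that un-freezes the image: a zero-distance,
// exact-tolerance seek forces the video renderer to resynchronize.
func nudge(_ player: AVPlayer) {
    let now = player.currentTime()
    player.seek(to: now, toleranceBefore: .zero, toleranceAfter: .zero)
}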
Replies: 0 · Boosts: 0 · Views: 417 · Activity: Nov ’25
How to integrate Apple Immersive Video into the app you are developing.
Hello, Let me ask you a question about Apple Immersive Video. https://www.apple.com/newsroom/2024/07/new-apple-immersive-video-series-and-films-premiere-on-vision-pro/ I am currently considering implementing a feature to play Apple Immersive Video as a background scene in the app I developed, using 3DCG-created content converted into Apple Immersive Video format. First, I would like to know if it is possible to integrate Apple Immersive Video into an app. Could you provide information about the required software and the integration process for incorporating Apple Immersive Video into an app? It would be great if you could also share any helpful website resources. I am considering creating Apple Immersive Video content and would like to know about the necessary equipment and software for producing both live-action footage and 3DCG animation videos. As I mentioned earlier, I’m planning to play Apple Immersive Video as a background in the app. In doing so, I would also like to place some 3D models as RealityKit entities and spatial audio elements. I’m also planning to develop the visionOS app as a Full Space Mixed experience. Is it possible to have an immersive viewing experience with Apple Immersive Video in Full Space Mixed mode? Does Apple Immersive Video support Full Space Mixed? I’ve asked several questions, and that’s all for now. Thank you in advance!
Replies: 2 · Boosts: 1 · Views: 959 · Activity: Nov ’25
Recording ProRes RAW on iPhone 17 Pro with AVAssetWriter
When trying to record ProRes RAW (btp2) with AVAssetWriter I get several types of errors: -12780 or -11875. I wonder if recording ProRes RAW can only be done through AVCaptureMovieFileOutput, or if there is a way to correctly configure AVAssetWriter to do it.
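For reference, the passthrough configuration this kind of writing usually implies — compressed sample buffers appended without re-encoding. Whether AVAssetWriter accepts the ProRes RAW format at all is exactly the open question, so this is a sketch of the standard pattern, not a confirmed fix:

import AVFoundation

// Passthrough writing: nil outputSettings plus the source format hint,
// so sample buffers from the capture output are appended as-is.
func makePassthroughInput(for formatDescription: CMFormatDescription) -> AVAssetWriterInput {
    let input = AVAssetWriterInput(mediaType: .video,
                                   outputSettings: nil,
                                   sourceFormatHint: formatDescription)
    input.expectsMediaDataInRealTime = true
    return input
}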
Replies: 1 · Boosts: 0 · Views: 372 · Activity: Nov ’25
PhotosPickerItem.itemIdentifier always nil
I'm using the SwiftUI Photos Picker to select videos from the user's Photos library and then opening the video using the PhotosPickerItem. I'm looking for a way to allow the user to open the same video on their other devices, as the app uses SwiftData and CloudKit to provide access to a recently watched list of videos. The URL from the PhotosPickerItem appears to be device-specific, so I was looking to see if I can use the itemIdentifier, and the init that takes an itemIdentifier, to recreate the PhotosPickerItem on the other devices. The itemIdentifier, however, is always nil, and so can't be used in this way. Is there an alternative approach whereby the user can open a video using a PhotosPickerItem and that item would be viewable on their other devices, via an item identifier or a URL that is device-agnostic? This approach should also not involve copying the video into other storage, as that would simply expand the use of the user's iCloud storage, providing a less than ideal user experience. If the user has opened the video from their Photos library, there should be a way to allow the same user (e.g. same Apple ID) to use the same app on another device to open the video again.
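One detail worth checking: itemIdentifier is documented to be non-nil only when the picker is created with an explicit photo library. A sketch (SwiftUI, iOS 17+ onChange signature):

import SwiftUI
import PhotosUI

struct VideoPickerView: View {
    @State private var selection: PhotosPickerItem?

    var body: some View {
        // itemIdentifier is only populated when the picker is initialized
        // with an explicit photoLibrary; the default initializers leave it nil.
        PhotosPicker(selection: $selection,
                     matching: .videos,
                     photoLibrary: .shared()) {
            Text("Choose a video")
        }
        .onChange(of: selection) { _, item in
            print("identifier:", item?.itemIdentifier ?? "nil")
        }
    }
}

Note the resulting value is a local PHAsset identifier; making it usable across devices would presumably still require mapping it through PHPhotoLibrary's cloud identifier APIs (cloudIdentifierMappings(forLocalIdentifiers:)), which in turn needs library access authorization.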
Replies: 1 · Boosts: 0 · Views: 687 · Activity: Nov ’25
Video captured on iPhone 17 Pro / 17 Pro Max turns green in the browser when sent over WebRTC
I'm developing an app that sends video captured on an iPhone to a browser application and displays it on screen. When this app is used on an iPhone 17 Pro or 17 Pro Max, the video displayed on the browser side comes out solid green, or as a colorful image that is mostly green. Looking into it, the ultra-wide and telephoto cameras on the 17 Pro and 17 Pro Max have changed pixel counts (12 MP → 48 MP), so I suspect the encoding is failing because of this. Any information at all would be appreciated.

Environment:
WebRTC library: GoogleWebRTC version 1.1 (installed via CocoaPods)
Signaling server: AWS Kinesis Video Streams
Devices where the problem occurs: model iPhone18,1, OS 26.0; model iPhone18,1, OS 26.1
Devices where the problem does not occur: iPhone17,5 and many earlier models; model iPhone18,1, OS 26.0; model iPhone18,3, OS 26.0
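If the suspicion about the 48 MP-derived formats is right, a workaround sketch is constraining the capture format handed to the WebRTC capturer to a size the encoder path is known to handle (RTCCameraVideoCapturer shown; the 1920x1080 cap is an assumption to test, not a known threshold):

import WebRTC
import AVFoundation
import CoreMedia

// Picks the highest-resolution capture format no larger than 1920x1080,
// avoiding the new high-pixel-count formats suspected of breaking encoding.
func selectSafeFormat(for device: AVCaptureDevice) -> AVCaptureDevice.Format? {
    RTCCameraVideoCapturer.supportedFormats(for: device)
        .filter {
            let dims = CMVideoFormatDescriptionGetDimensions($0.formatDescription)
            return dims.width <= 1920 && dims.height <= 1080
        }
        .max {
            let a = CMVideoFormatDescriptionGetDimensions($0.formatDescription)
            let b = CMVideoFormatDescriptionGetDimensions($1.formatDescription)
            return a.width * a.height < b.width * b.height
        }
}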
Replies: 2 · Boosts: 0 · Views: 286 · Activity: Nov ’25
Control system video effects support for CMIO extension
We're distributing a virtual camera with our app that gains nothing from the system video effects that are automatically applied, both to the video going in (the physical camera device) and to the video going out (the virtual camera device). I'm aware of setting NSCameraReactionEffectGesturesEnabledDefault in Info.plist and of determining active video effects via the AVCaptureDevice API. Those are obviously crutches, because having to tell users to go look for and click around in menu bar apps is the opposite of a great UX. To make our product's video output more deterministic, I'm looking for a way to tell the CMIO subsystem that our virtual camera does not support any of the system video effects. I see properties like AVCaptureDevice.Format.isPortraitEffectSupported and AVCaptureDevice.Format.isStudioLightSupported, whose documentation refers to the format's ability to support these effects. Since we're setting a CMFormatDescription via CMIOExtensionStreamSource.formats, I was hoping to find something in the extension APIs, but haven't been successful so far. Can this be done?
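I haven't found a CMIO-side switch either; for completeness, a sketch of the AVCaptureDevice-side inspection mentioned above — all of which is read-only from the app's perspective, which is exactly the limitation being described:

import AVFoundation

// Reads the effect capabilities and global toggles; nothing here lets a
// virtual camera declare non-support, it only observes the current state.
func logVideoEffectState(for device: AVCaptureDevice) {
    let format = device.activeFormat
    print("Portrait effect supported:", format.isPortraitEffectSupported,
          "active:", AVCaptureDevice.isPortraitEffectEnabled)
    print("Studio Light supported:", format.isStudioLightSupported,
          "active:", AVCaptureDevice.isStudioLightEnabled)
    print("Reactions supported:", format.reactionEffectsSupported,
          "gestures on:", AVCaptureDevice.reactionEffectGesturesEnabled)
}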
Replies: 2 · Boosts: 0 · Views: 303 · Activity: Nov ’25
Performance drop when particle emitter is combined with video play
Hi All, we're a studio building an app, and one scene contains a 3D asset with a smoke particle emitter plus a curved mesh that plays video. When the video plays alone, or when the particle effect runs alone, the scene works fine — but the frame rate drops drastically when both are on at once. How do I solve this? It's an important storytelling feature.
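Without profiling it's hard to say where the budget goes, but a cheap experiment sketch (RealityKit's ParticleEmitterComponent; the 0.5 factor and entity name are assumptions) is throttling the particle birth rate while video is playing, to test whether overdraw from the particles is the bottleneck:

import RealityKit

// Halves the particle budget while video playback is active, restoring
// it afterwards — a quick way to test whether fill-rate is the limit.
func setVideoPlaying(_ playing: Bool, on smokeEntity: Entity, fullBirthRate: Float) {
    guard var emitter = smokeEntity.components[ParticleEmitterComponent.self] else { return }
    emitter.mainEmitter.birthRate = playing ? fullBirthRate * 0.5 : fullBirthRate
    smokeEntity.components.set(emitter)
}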
Replies: 2 · Boosts: 0 · Views: 341 · Activity: Oct ’25
Does HEVC VideoToolbox support temporal layering of streams with B-frames?
Context: Explore low-latency video encoding with VideoToolbox — https://developer.apple.com/videos/play/wwdc2021/10158/ The session above says the HEVC VideoToolbox encoder supports SVC — temporal layering of streams — in low-latency mode with all P-frames. My question is: does HEVC VideoToolbox support temporal layering of streams with B-frames? Thanks.
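For reference, a sketch of the configuration that WWDC session describes — low-latency rate control plus a base-layer fraction. The low-latency path implies frame reordering (i.e. B-frames) is off; whether layering also works with reordering enabled is the open question here:

import VideoToolbox

func makeTemporalLayeredSession(width: Int32, height: Int32) -> VTCompressionSession? {
    var session: VTCompressionSession?
    let spec: [CFString: Any] = [
        kVTVideoEncoderSpecification_EnableLowLatencyRateControl: true
    ]
    VTCompressionSessionCreate(allocator: nil,
                               width: width, height: height,
                               codecType: kCMVideoCodecType_HEVC,
                               encoderSpecification: spec as CFDictionary,
                               imageBufferAttributes: nil,
                               compressedDataAllocator: nil,
                               outputCallback: nil, refcon: nil,
                               compressionSessionOut: &session)
    guard let session else { return nil }
    // Half of the frames land in the base temporal layer. In low-latency
    // mode these are all P-frames; B-frames would require
    // kVTCompressionPropertyKey_AllowFrameReordering = true.
    VTSessionSetProperty(session,
                         key: kVTCompressionPropertyKey_BaseLayerFrameRateFraction,
                         value: NSNumber(value: 0.5))
    return session
}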
Replies: 0 · Boosts: 0 · Views: 269 · Activity: Oct ’25