Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under the Media Technologies topic

Post | Replies | Boosts | Views | Activity

New FairPlay Keys
Hello, My company has an in-store app with FPS SDK 4.x (1024) keys. We handed those keys over to a trusted third party and no longer have copies of them. We've been in-store for several years. The person who created the keys in our organization mistakenly stored them encrypted to our third party's PGP keys, so we cannot decrypt them, and the third party has no mechanism to hand the keys back to us even though the keys are in their runtime environment; they only have secure mechanisms for us to upload keys onto their servers. We are trying to migrate to a different third-party DRM provider and would like to obtain new keys. Unfortunately, the developer portal won't let me create new keys, saying that we have exceeded the number of keys allowed, which I assume is one. Additionally, the new DRM provider can only support SDK 4.x keys, and it appears that we can only request SDK 5.x keys on the Apple Developer portal, as the SDK 4.0 option is grayed out. Regardless, it seems that we are not able to request any keys. We submitted a request to the support e-mail address and received an automated reply saying a response should take a few days but may occasionally take longer. It's now been a month, and the e-mail says the reply address is not monitored. Is there any way we can accelerate this? Thank you, Carlos
0
1
291
Aug ’25
How to delete FPS Certificate from Apple developer account
Hello All, I am looking for assistance with our FairPlay Streaming (FPS) certificates. We are in the process of migrating to a new video streaming vendor and need to create a new FPS certificate using SDK 4. However, we have reached the limit of allowed FPS certificates in our account and cannot create a new one.
Issue details:
• We currently have two FPS certificates active in our developer account.
• One of these was created using SDK 5, but our new vendor (Mux) requires an FPS certificate based on SDK 4.
• Since Apple does not allow deleting FPS certificates from the developer portal, we are unable to create a new SDK 4 certificate.
• We kindly request that Apple revoke one of our existing FPS certificates to allow us to generate a new SDK 4 certificate.
Request: We would greatly appreciate assistance with deleting one of our existing FPS certificates so that we can proceed with creating a new SDK 4 certificate for our vendor integration. Thank you for your support.
1
1
659
Oct ’25
Why Does AVCaptureSessionInterruptionReasonVideoDeviceNotAvailableWithMultipleForegroundApps Occur on iPhone?
Hi everyone, We're encountering an unexpected issue with our iPhone-only camera app: 👉 TimeMark - Photo Proof (https://apps.apple.com/us/app/timemark-photo-proof/id6446071834).
Problem description: Our app uses a full-screen camera view via AVCaptureSession. In some cases reported by users, the camera fails immediately upon app launch, and we receive this interruption reason: AVCaptureSessionInterruptionReasonVideoDeviceNotAvailableWithMultipleForegroundApps. According to the Apple documentation (https://developer.apple.com/documentation/avfoundation/avcapturesession/interruptionreason/videodevicenotavailablewithmultipleforegroundapps?language=objc), this interruption typically occurs when the app is running in a multi-app layout such as Slide Over, Split View, or Picture in Picture, all of which are iPad-only features. However, this issue is being reported on iPhones, and our app does not support iPad at all. Also noted in the documentation: "Given your present AVCaptureSession configuration, the session may only be run if your app occupies the full screen."
Additional context: The issue occurs immediately on app launch, before the user can interact with the camera. We don't enable multitaskingCameraAccessEnabled. We are 100% sure this is happening on iPhone, not iPad. It's hard to reproduce; users report it happening sporadically. Locally, we tried playing Picture-in-Picture videos (e.g., Safari/YouTube) before launching our app, but we could not reproduce the issue.
Questions:
1. Why is this interruption reason occurring on iPhone, which doesn't officially support Slide Over or Split View?
2. Could this be caused by some system-level multitasking or resource contention (e.g., Picture in Picture from FaceTime or Safari)?
3. Would enabling multitaskingCameraAccessEnabled help prevent this issue on iPhone, even though it's designed for iPad?
4. Enabling multitaskingCameraAccessEnabled seems to require enabling UIBackgroundModes → voip. Would adding this background mode cause any App Store review risk or rejection if our app doesn't actually use VoIP functionality?
Any help, insight, or suggestions would be greatly appreciated. [An observer sketch for logging this interruption reason follows this entry.] Thanks in advance!
3
0
885
Oct ’25
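
For the post above: a minimal Swift sketch of observing the session's interruption notifications to confirm which reason users are hitting in the field. It assumes an existing AVCaptureSession passed in by the caller; the class name is illustrative.

import AVFoundation

final class CaptureInterruptionLogger {
    private var observers: [NSObjectProtocol] = []

    init(session: AVCaptureSession) {
        let center = NotificationCenter.default
        // Fires whenever the session is interrupted; userInfo carries the reason as an NSNumber.
        observers.append(center.addObserver(
            forName: AVCaptureSession.wasInterruptedNotification,
            object: session,
            queue: .main
        ) { note in
            if let raw = note.userInfo?[AVCaptureSessionInterruptionReasonKey] as? Int,
               let reason = AVCaptureSession.InterruptionReason(rawValue: raw) {
                // e.g. .videoDeviceNotAvailableWithMultipleForegroundApps
                print("Capture interrupted, reason: \(reason)")
            }
        })
        // Fires when the interruption ends and the session can run again.
        observers.append(center.addObserver(
            forName: AVCaptureSession.interruptionEndedNotification,
            object: session,
            queue: .main
        ) { _ in
            print("Capture interruption ended")
        })
    }

    deinit {
        observers.forEach { NotificationCenter.default.removeObserver($0) }
    }
}

Logging the raw reason (and the timing relative to launch) in analytics is a reasonable first step before deciding whether multitaskingCameraAccessEnabled is warranted.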
AVKit crash when rendering AVPlayerView controls — macOS 26.4 regression
Description: Our app, Octory, allows users to create onboarding and communication workflows composed of slides containing various UI components, including embedded video players powered by AVPlayerView. Since macOS 26.4 Beta, the app crashes at launch whenever a workflow contains a video component. Workflows without video components load and render without issue, which points to a regression in AVKit's player control rendering pipeline. Has anyone seen similar behaviour when using AVKit, or is it something we are not doing properly?
Expected behavior: The app opens and renders the workflow, including the embedded video component.
Actual behavior: The app briefly launches and immediately crashes with SIGABRT on the main thread.
Crash analysis (key takeaways for anyone investigating): the root cause is an abort() inside NSImageSymbolConfiguration. The crash occurs entirely within Apple frameworks, with no third-party code in the faulting call chain (aside from the app's entry point). The sequence is:
1. AVPlayerItem finishes loading and fires a KVO notification that it is ready to play (_updateCanPlayAndCanStepPropertiesWhenReadyToPlayWithNotificationPayload:).
2. This KVO chain propagates through NSKeyValueDidChange up to AVPlayerView, which calls _updateVideoGravityType.
3. AVPlayerControlsViewController responds by calling _updateZoomButtonImage, which asks AVPlayerControlsConfigurator for a configured SF Symbol via +[NSImage(AVAdditions) avkit_imageWithSymbolName:textStyle:scale:accessibilityDescription:].
4. The symbol rendering hits -[NSImageSymbolConfiguration _getEffectivePointSize:glyphWeight:glyphSize:backfilledWithFont:scale:], which calls abort().
The entire crash stack is in AppKit (NSImage / NSImageSymbolConfiguration) and AVKit; no application code is involved in the faulting path. The abort() suggests a precondition or assertion failure inside the symbol configuration logic, possibly due to an invalid or nil font/text style being passed during the zoom button image setup. This is triggered automatically the moment an AVPlayerItem becomes ready to play and AVKit attempts to render its transport controls. There is no way for the application to intercept or work around this.
Relevant stack frames (Thread 0, main thread):
3 AppKit -[NSImageSymbolConfiguration _getEffectivePointSize:glyphWeight:glyphSize:backfilledWithFont:scale:] + 440 ← abort() here
4 AppKit -[NSImageSymbolRepProvider _bestRepresentationForImage:hints:] + 404
11 AVKit +[NSImage(AVAdditions) avkit_imageWithSymbolName:textStyle:scale:accessibilityDescription:] + 332
12 AVKit -[AVPlayerControlsConfigurator configuredSymbolForImageName:] + 92
13 AVKit -[AVPlayerControlsViewController _updateZoomButtonImage] + 160
14 AVKit -[AVPlayerControlsViewController setVideoGravityType:] + 52
15 AVKit -[AVPlayerView _updateVideoGravityType] + 1056
28 AVFCore -[AVPlayerItem didChangeValueForKey:] + 56
29 AVFCore -[AVPlayerItem _updateCanPlayAndCanStepPropertiesWhenReadyToPlayWithNotificationPayload:updateStatusToReadyToPlay:] + 660
Additional notes: Removing the video component from the workflow (i.e., not instantiating AVPlayerView) resolves the crash entirely. The crash is 100% reproducible on every launch. This behavior was not present on macOS 26.3 or any prior release. The app was not recompiled; the same binary that works on 26.3 crashes on 26.4 Beta and 26.4.
Environment: OS: macOS 26.4. Hardware: MacBook Pro M1 (MacBookPro17,1).
2
0
75
23h
iOS 17 camera capture assertions and issues
Hello, Starting in iOS 17, our application started having issues publishing to our video session. More specifically, video capture seems to be broken in some, but not all, sessions. What's troubling is that it fails consistently every 4 sessions. It also fails silently, without reporting any problems to the app; we only notice that no frames are being rendered or sent to the remote devices. Here's what shows up in the console:
<<<< FigCaptureSourceRemote >>>> Fig assert: "! storage->connectionDied" at bail (FigCaptureSourceRemote.m:235) - (err=0)
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:253) - (err=-16453)
Anyone seeing this? Any idea what could be the cause? Our sessions work perfectly on iOS 16 and below. Thanks
3
1
1.4k
Oct ’25
Is there a way to get lossless music playback on macOS?
I noticed that while playing back the same tracks via MusicKit on different OSes, I get different results in the audio files being streamed. Playing back a lossless file (24-bit, 48 kHz) and watching the Console for RemotePlayerService, I get:
on iPadOS: Lossless; groupID: audio-alac-stereo-48000-24; bitDepth: 24-bit; sampleRate: 48khz; codec: alac; channels: 2; layout: Stereo;
on macOS: Creating AudioQueue with format:'paac', framesPerPacket:1024, sampleRate:44100
While the iPad looks perfect, the Mac does not. Is there a way to fix this issue on macOS? BTW: I switched the Audio MIDI Setup settings before, after, and while the macOS app was launched, and I also switched to different output devices; I wasn't able to change the bad audio output on the Mac. I tested this under Sequoia 15.5 and Tahoe beta 1, with Xcode 16.4 and 26 beta 1. The AudioVariants of the album/tracks are .dolbyAtmos, .lossless, .lossyStereo, and Apple Music displays "Lossless 24 Bit/48 kHz ALAC" when clicking the player-control icon on macOS. I hope there are only some missing or misconfigured properties to get macOS up to par. Thanks :-)
0
1
162
Jun ’25
No audio in screen recordings when using AVAudioEngine Voice Processing
Hello, We are developing a real-time speech recognition application and are utilizing AVAudioEngine with voice processing enabled on the input node. However, we have observed that enabling this mode interferes with the built-in iOS screen recording feature: specifically, the recorded video does not capture any audio when this mode is active. Since we want users to be able to record their experience within our app, this issue significantly impacts our functionality. Is there a known workaround or recommended approach to ensure that both voice processing and screen recording can function simultaneously? Any guidance would be greatly appreciated. [A sketch of the voice-processing setup in question follows this entry.] Thank you!
2
1
390
Oct ’25
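
For the post above: a minimal sketch of the configuration being described, i.e. voice processing on the input node plus a tap for speech recognition. It assumes an already-configured, active AVAudioSession; error handling is trimmed, and the names are illustrative.

import AVFoundation

let engine = AVAudioEngine()
do {
    // Enabling voice processing switches the I/O unit to the VoIP path,
    // which is the mode the post reports suppressing audio in system
    // screen recordings. Must be set before the engine starts.
    try engine.inputNode.setVoiceProcessingEnabled(true)
    let format = engine.inputNode.outputFormat(forBus: 0)
    engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        // Feed `buffer` to the speech recognizer here.
    }
    try engine.start()
} catch {
    print("Engine setup failed: \(error)")
}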
After iOS 18.5, the AVFoundation AVCaptureSessionInterruptionReason.videoDeviceNotAvailableWithMultipleForegroundApps error has caused a significant increase in camera black screen issues.
Issue: After the iOS 18.5 release, our app is experiencing a significant increase in AVCaptureSessionInterruptionReason.videoDeviceNotAvailableWithMultipleForegroundApps errors.
Details: Our camera-related code has not been updated recently. However, we've observed that the error rate has increased significantly starting from May 2025: it has risen from approximately 0.02% (2 in 10,000 users) to 0.1% (1 in 1,000 users), a 5x increase. The frequency has increased noticeably since iOS 18.5, and this is affecting our app's camera functionality and user experience.
Questions:
1. Are there any known changes in iOS 18.5 regarding camera access management?
2. What are the recommended best practices to handle this interruption reason?
3. Are there any API changes we should be aware of?
Best, Shay
2
1
500
Oct ’25
ApplicationMusicPlayer.shared player.play() permission denied in app sandbox (Tauri)
Hi, I'm developing a Tauri v2 app on macOS and want to implement playback controls. It seems that Apple locks down playback, requiring a signed application. My app also has the capability to "get currently playing track", and I confirmed this works; Apple produces a popup triggered by my await MusicAuthorization.request() call. It returns nil, of course, because I can't get anything to play via the ApplicationMusicPlayer; only through the system's Apple Music app. I understand SystemMusicPlayer is not available on macOS, which is fine. I'm just a little confused, as it seems pretty standard to need to test playback controls quickly without having to codesign and do provisioning-profile embedding acrobatics each time Rust re-compiles target/debug. This slows down development a lot. I do have these entries in my Entitlements.plist:
<key>com.apple.security.personal-information.media-library</key>
<true/>
<key>com.apple.developer.music-kit</key>
<true/>
<key>com.apple.security.app-sandbox</key>
<true/>
In my tauri.conf.json, I have:
"macOS": {
  "entitlements": "./Entitlements.plist",
  "signingIdentity": "Apple Development: ()"
}
My application works like this: I have a temporary button click that fires off a Tauri invoke() command, which goes to a #[tauri::command], which bridges to Swift code. Again, I validated that my less-permissive "get currently playing track" works, i.e., does not get permission denied. The exact error message is:
[swift] playMedia error: .permissionDenied
(specifically, ".permissionDenied")
My code to trigger playback of a specific media item:
Task {
    print("[swift] entered sema Task")
    let status: MusicAuthorization.Status = await MusicAuthorization.request()
    print("auth status: \(status)")
    guard status == .authorized else { sema.signal(); return }
    print("passed the status guard.")
    do {
        var request = MusicCatalogResourceRequest<Song>(matching: \.id, equalTo: MusicItemID(rawValue: songId))
        request.limit = 1
        let response = try await request.response()
        guard let song = response.items.first else { sema.signal(); return }
        let player = ApplicationMusicPlayer.shared
        player.queue = [song]
        try await player.play()
        success = true
    } catch {
        print("[swift] playMedia error: \(error)")
    }
    sema.signal()
}
3
0
408
2d
MusicKit for Android reports an error when playing stations
When using the MusicKit SDK for Android 1.1.2, I found that MediaContainerType only defines three types: NONE = 0, ALBUM = 1, PLAYLIST = 2. The RADIO_STATION type is not defined. However, the documentation for com.apple.android.music.playback.model states that the RADIO_STATION type is supported. This causes an error after I pass in a station's ID:
MediaSessionManager com.apple.android.music.sdk.testapp D onPlaybackError() Quincy java.io.IOException
May I ask how to solve this problem?
1
1
136
Apr ’25
Apple Music API Library Playlist Images
Hi, I'm sending an API request to https://api.music.apple.com/v1/me/library/playlists?limit=$limit&offset=$offset to list all of the user's library playlists; however, the resulting objects do not contain the playlist artwork in the JSON. I've tried adding the extend and include attributes as well, but to no avail. A partial example of the response:
{"id": PLAYLIST_ID, "type": "library-playlists", "href": "/v1/me/library/playlists/PLAYLIST_ID", "attributes": {"lastModifiedDate": "2024-09-18T20:18:24Z", "canEdit": true, "name": "Afro Party Anthems", "isPublic": false, "description": {"standard": "Definitive African party starters"}, "hasCatalog": false, "dateAdded": "2022-03-10T18:30:56Z", "playParams": {"id": PLAYLIST_ID, "kind": "playlist", "isLibrary": true}}, "relationships": {"catalog": {"href": "/v1/me/library/playlists/PLAYLIST_ID/catalog", "data": []}}}
Is there a way to get the artwork URL without sending a request for each playlist? And if not, can this be fixed?
0
1
124
Jun ’25
AVAudioFile.read extremely slow after seeking in FLAC and MP3 files
I'm developing an audio player app that uses AVAudioFile to read PCM data from various formats. I'm experiencing severe performance issues when seeking in FLAC, while other compressed formats (M4A/AAC) work correctly. I don't intend to use them in my app, but I also tested MP3 files out of curiosity, and they have the same issue.
Environment: macOS 26 (Tahoe), Xcode 26.3, Apple Silicon (M1).
The issue: After setting AVAudioFile.framePosition to a position mid-file, the subsequent call to AVAudioFile.read(into:frameCount:) blocks for an unreasonable amount of time for FLAC and MP3 files. The delay scales linearly with the seek target: seeking near the beginning is fast, and seeking toward the end is proportionally slower, which suggests the decoder is decoding linearly from the beginning of the file rather than using any seek index. (My app deals with "images" of Audio CDs ripped as a single long audio file.) The issue is particularly severe when reading files from an SMB network share (server on Ethernet, client on Wi-Fi with the access point ~2 meters away in line of sight).
Quick benchmark results: I tested the same 75-minute audio content (16-bit/44.1 kHz stereo, 200,502,708 frames) encoded in five formats, seeking to the midpoint.
Over SMB (local network, server on Ethernet, client on Wi-Fi):
Format         | Seek + Read Time
---------------|-----------------
WAV            | 0.007 s
AIFF           | 0.009 s
Apple Lossless | 0.015 s
MP3            | 9.2 s
FLAC           | 30.2 s
Locally (MacBook Air M1 SSD):
Format         | Seek + Read Time
---------------|-----------------
WAV            | 0.0005 s
AIFF           | 0.0004 s
Apple Lossless | 0.0011 s
MP3            | 0.1958 s
FLAC           | 0.7528 s
WAV, AIFF, and M4A all seek virtually instantly (< 15 ms). MP3 and FLAC exhibit linear-time behavior, with FLAC the worst affected. Note that M4A (AAC) is also a compressed format that requires decoding after seeking, yet it completes in 15 ms. This rules out any inherent limitation of compressed formats: the MP4 container's packet index (stts/stco) is clearly being used for fast random access. Both MP3 (Xing/LAME TOC) and FLAC (SEEKTABLE metadata block) have their own seek mechanisms that should provide similar performance.
Minimal CLI tool to reproduce:
import Foundation
import AVFoundation

guard CommandLine.arguments.count > 1 else {
    print("Usage: FLACSpeed <audio-file-path>")
    exit(1)
}
let path = CommandLine.arguments[1]
let fileURL = URL(fileURLWithPath: path)
do {
    let file = try AVAudioFile(forReading: fileURL)
    let format = file.processingFormat
    let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: 8192)!
    let totalFrames = file.length
    let seekTarget = totalFrames / 2
    print("File: \(fileURL.lastPathComponent)")
    print("Format: \(format)")
    print("Total frames: \(totalFrames)")
    print("Seeking to frame: \(seekTarget)")
    file.framePosition = seekTarget
    let start = CFAbsoluteTimeGetCurrent()
    try file.read(into: buffer, frameCount: 8192)
    let elapsed = CFAbsoluteTimeGetCurrent() - start
    print("Read after seek took \(elapsed) seconds")
} catch {
    print("Error: \(error.localizedDescription)")
    exit(1)
}
Expected behavior: AVAudioFile.read(into:frameCount:) after setting framePosition should use the available seek mechanisms in FLAC and MP3 files for fast random access, as it already does for M4A (AAC). Even accounting for the fact that seek tables provide approximate (not sample-precise) positioning, the "jump to nearest index point + decode forward" approach should complete in milliseconds, not seconds.
Workaround: For FLAC, I've worked around this by using libFLAC directly, which provides instant seeking via FLAC__stream_decoder_seek_absolute(). For comparison, libFLAC's FLAC__stream_decoder_seek_absolute() performs the same seek + read on the same FLAC file in around 0.015 s, using the FLAC seek table to jump to the nearest preceding seek point and then decoding forward a small number of frames to the exact target sample.
0
1
123
1w
AVAudioEngine fails to start during FaceTime call (error 2003329396)
Is it possible to perform speech-to-text using AVAudioEngine to capture microphone input while on a FaceTime call at the same time? I tried implementing this, but whenever I attempt to start the AVAudioEngine while a FaceTime call is active, I get the following error: "The operation couldn't be completed. (OSStatus error 2003329396)" I assume this might be due to microphone resource restrictions during FaceTime, but I'd like to confirm whether this limitation is at the system level or if there's any possible workaround or entitlement that allows concurrent microphone access. Has anyone encountered this issue or found a solution? [A small helper for decoding this OSStatus follows this entry.]
2
1
742
1w
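
A debugging aside for the post above (not an answer to the underlying restriction): Core Audio OSStatus values are often four-character codes, and 2003329396 is 0x77686174, which decodes to 'what'. A small Swift helper makes such codes readable in logs; the function name is illustrative.

import Foundation

func fourCC(_ status: OSStatus) -> String {
    let n = UInt32(bitPattern: status)
    let bytes = [UInt8((n >> 24) & 0xFF), UInt8((n >> 16) & 0xFF),
                 UInt8((n >> 8) & 0xFF), UInt8(n & 0xFF)]
    // Only interpret as a four-character code when all bytes are printable ASCII.
    if bytes.allSatisfy({ (0x20...0x7E).contains(Int($0)) }) {
        return String(bytes: bytes, encoding: .ascii) ?? "\(status)"
    }
    return "\(status)"
}

print(fourCC(2003329396))  // prints "what"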
How to consume video from an RTSP service?
Hi, It seems pretty easy to consume HTTP Live Streaming content in an iOS app. Unfortunately, I need to consume media from an RTSP server. It seems to me that this is a very similar thing, and that all of the underpinnings for doing it ought to be present in iOS, but I'm having a devil of a time figuring out how to make it work without doing a lot of programming.
For starters, I know that there are web-based services that can consume an RTSP stream and rebroadcast it as an HTTP Live Stream that can be easily consumed by the media players in iOS. This won't work for me because my application needs to function in an environment with no internet access (it's on a private Wi-Fi network where the only other thing on the network is the device serving the RTSP stream).
Having read everything I can get my hands on and explored third-party and open-source solutions, I've compiled the following list of ideas:
1. Using an iOS build of the open-source ffmpeg library, which supports RTSP, I've come up with a test app that can receive the RTSP packets, decode them, create UIImages out of the frames, and display those frames on-screen. This provides a crude player, but performance is poor, most likely because ffmpeg can't take advantage of any hardware acceleration. It also doesn't provide me with any way to integrate the video stream into AVFoundation, so I'm on my own as far as saving the stream to a file, transcoding it, etc.
2. I know that the AVURLAsset class doesn't directly support the RTSP scheme. Since I have access to the undecoded RTSP packets via ffmpeg, I've thought it should be possible to implement RTSP support myself via a custom NSURLProtocol, essentially fooling AVFoundation into reading those packets as if they originated in a file. I'm not sure if this would work, since the raw packets coming from the RTSP server might lack the headers that would otherwise be present in data being read from a file. I'm not even sure if AVFoundation would recognize my custom protocol.
3. If a protocol doesn't work, I've considered that I might be able to implement my own local HTTP Live Streaming server that converts the RTSP packets into an HTTP stream the media players can read. This sounds like a terribly convoluted solution at best, and very difficult at worst.
4. Going back to solution (1), if I could speed up the decoding by using some iOS CoreVideo function instead of ffmpeg, this solution might be okay. However, I can't find any documentation for CoreVideo on iOS (Apple only documents it for OS X).
5. I'm certainly willing to license a third-party solution if it works well and provides good performance. Unfortunately, everything I've found so far is pretty crummy and mostly just leverages ffmpeg and/or VLC.
What is most disappointing is that nobody seems able or willing to provide a solution that neatly integrates with AVFoundation. I really want to make my RTSP stream available as an AVAsset so I can use it with AVFoundation players and other classes; I don't want to build an app that relies on custom third-party code for everything. Any ideas, tips, or advice would be greatly appreciated. Thanks, Frank
9
1
16k
Oct ’25
Indicate Packet Loss With AVAudioConverter for OPUS Decoding
I'm using an AVAudioConverter object to decode an OPUS stream for VoIP. The decoding itself works well; however, whenever the stream stalls (no more audio packets are available to decode because of network instability), this is audible as crackling or an abrupt stop in the decoded audio. OPUS can mitigate this if the caller indicates packet loss, which the C library supports by passing a null data pointer to int opus_decode_float(OpusDecoder *st, const unsigned char *data, opus_int32 len, float *pcm, int frame_size, int decode_fec); see https://opus-codec.org/docs/opus_api-1.2/group__opus__decoder.html#ga9c554b8c0214e24733a299fe53bb3bd2. However, with AVAudioConverter in Swift, I'm constructing an AVAudioCompressedBuffer like so:
let compressedBuffer = AVAudioCompressedBuffer(
    format: VoiceEncoder.Constants.networkFormat,
    packetCapacity: 1,
    maximumPacketSize: data.count
)
compressedBuffer.byteLength = UInt32(data.count)
compressedBuffer.packetCount = 1
compressedBuffer.packetDescriptions!.pointee.mDataByteSize = UInt32(data.count)
data.copyBytes(
    to: compressedBuffer.data.assumingMemoryBound(to: UInt8.self),
    count: data.count
)
where data: Data contains the raw OPUS frame to be decoded. How can I signal packet loss in this context and cause the AVAudioConverter to output PCM data whenever no more input data is available? More context: I'm specifying the audio format like this:
static let frameSize: UInt32 = 960
static let sampleRate: Float64 = 48000.0
static var networkFormatStreamDescription = AudioStreamBasicDescription(
    mSampleRate: sampleRate,
    mFormatID: kAudioFormatOpus,
    mFormatFlags: 0,
    mBytesPerPacket: 0,
    mFramesPerPacket: frameSize,
    mBytesPerFrame: 0,
    mChannelsPerFrame: 1,
    mBitsPerChannel: 0,
    mReserved: 0
)
static let networkFormat = AVAudioFormat(streamDescription: &networkFormatStreamDescription)!
I've tried (1) setting byteLength and packetCount to zero, and (2) returning nil while setting .haveData in the AVAudioConverterInputBlock I'm using, with no success.
1
1
926
May ’25
AVAudioSession.outputVolume does not reflect system volume changes made while app is in background
I have a question regarding the behavior of AVAudioSession.sharedInstance().outputVolume.
Observed behavior: When the app is in the foreground, I read audioSession.outputVolume (for example, 0.1). The app is then moved to the background. While the app is in the background, the user changes the system volume using the hardware buttons (for example, to 0.5). When the app returns to the foreground, audioSession.outputVolume still reports the previous value (0.1). From my testing, outputVolume only seems to update when the system volume is changed while the app is in the foreground; volume changes made while the app is in the background are not reflected when the app returns to the foreground.
According to Apple's documentation for AVAudioSession.outputVolume ("The systemwide output volume set by the user.", https://developer.apple.com/documentation/avfaudio/avaudiosession/outputvolume), the property represents the system-wide volume set by the user. However, based on our testing on iOS 18.6.2 and iOS 18.1, the observed behavior differs from this description.
Questions:
1. In our testing, the value does not reflect volume changes made while the app is in the background and only updates while the app is in the foreground. Is this the expected behavior of AVAudioSession.outputVolume?
2. Is there any other recommended way in Swift to retrieve the current system volume that reflects user changes made both while the app is in the foreground and while it is in the background?
Any clarification on the intended behavior or recommended handling would be greatly appreciated. [A KVO sketch for tracking outputVolume follows this entry.]
2
1
269
2w
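
For the post above: outputVolume is documented as key-value observable, so a common pattern is to observe it rather than poll it. A minimal sketch, assuming the session is activated first; whether the cached value refreshes on returning to the foreground is exactly what the post questions, so this only demonstrates the observation mechanics. The class name is illustrative.

import AVFoundation

final class VolumeObserver {
    private var observation: NSKeyValueObservation?

    init() throws {
        let session = AVAudioSession.sharedInstance()
        // The session generally needs to be active before volume observation fires.
        try session.setActive(true)
        observation = session.observe(\.outputVolume, options: [.initial, .new]) { _, change in
            print("System volume now: \(change.newValue ?? 0)")
        }
    }
}

// Usage: create once and keep the instance alive for the app's lifetime.
// let volumeObserver = try VolumeObserver()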
Apple Music API: Adding To Collaborative playlist gives 500 error
I am using https://developer.apple.com/documentation/applemusicapi/add-tracks-to-a-library-playlist to add tracks to playlists. This endpoint works fine for all playlists except collaborative playlists, for which I get the following 500 error as a response:
{
  "errors": [
    {
      "id": "<some id>",
      "title": "Upstream Service Error",
      "detail": "Unable to update tracks",
      "status": "500",
      "code": "50001"
    }
  ]
}
Steps to reproduce:
1. Create a playlist in your library.
2. Use the API to add a song. Confirm that it works.
3. Make that same playlist collaborative.
4. Update the playlist ID in your API request (making a playlist collaborative changes its ID).
5. Confirm that you get the 500 error.
[A request sketch follows this entry.]
5
0
880
Oct ’25
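
For the post above: a minimal Swift sketch of the documented add-tracks request, useful for reproducing the failure outside any app code. The tokens and IDs are placeholders, and the function name is illustrative; per the post, the same call succeeds for normal playlists and returns 500 once the target playlist is made collaborative.

import Foundation

func addTrack(playlistID: String, songID: String,
              developerToken: String, userToken: String) async throws {
    let url = URL(string: "https://api.music.apple.com/v1/me/library/playlists/\(playlistID)/tracks")!
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("Bearer \(developerToken)", forHTTPHeaderField: "Authorization")
    request.setValue(userToken, forHTTPHeaderField: "Music-User-Token")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    // Body shape per the Add Tracks to a Library Playlist documentation.
    request.httpBody = try JSONSerialization.data(withJSONObject:
        ["data": [["id": songID, "type": "songs"]]])
    let (body, response) = try await URLSession.shared.data(for: request)
    let status = (response as? HTTPURLResponse)?.statusCode ?? -1
    print("HTTP \(status): \(String(data: body, encoding: .utf8) ?? "")")
}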
Assistance Needed: CoreMediaErrorDomain Error -12971
Hello Apple Developer Community, I am trying to play an HLS stream using the React Native Video player (which uses AVPlayer underneath). I am able to play the stream smoothly, but in some cases the player cannot play the stream properly.
Behaviour with react-native-video: I am getting the error below.
Error Code: -12971
Domain: CoreMediaErrorDomain
Localised Description: The operation couldn't be completed. (CoreMediaErrorDomain error -12971.)
Target: 2457
The error does not provide a specific failure reason or recovery suggestion, which makes troubleshooting challenging.
Behaviour with AVPlayer in a native iOS project: Video playback stopped after playing a few seconds.
AVPlayer configuration:
player.currentItem?.preferredForwardBufferDuration = 1
player.automaticallyWaitsToMinimizeStalling = true
N.B.: The same buffer duration is working perfectly for others. Stream properties: video resolution 1280 x 720. I have attached an overview report generated by MediaStreamValidator. I would appreciate any insights or suggestions on how to address this error. Has anyone in the community experienced a similar issue, or do you have any advice on potential solutions? Thank you for your help!
0
1
212
Apr ’25
Lock screen media controls for MusicKit/ ApplicationMusicPlayer
Hi, when using ApplicationMusicPlayer from MusicKit, my app automatically gets the media controls on the lock screen: Play/Pause, skip buttons, playback position, etc. I would like to customize these. I've tried a bunch of things, e.g., using MPRemoteCommandCenter, but so far I haven't had any success. Does anyone know how I can customize the media controls of ApplicationMusicPlayer? [A sketch of the MPRemoteCommandCenter approach follows this entry.] Thank you.
2
0
533
Sep ’25
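
For the post above: a sketch of the standard MPRemoteCommandCenter customization the poster mentions trying. With ApplicationMusicPlayer, MusicKit manages the Now Playing commands itself, which may be why overrides like this do not take effect; the sketch shows the conventional approach, not a confirmed fix. The function name is illustrative.

import MediaPlayer

func configureRemoteCommands() {
    let center = MPRemoteCommandCenter.shared()

    // Disable next/previous-track and expose 15-second skips instead.
    center.nextTrackCommand.isEnabled = false
    center.previousTrackCommand.isEnabled = false
    center.skipForwardCommand.isEnabled = true
    center.skipForwardCommand.preferredIntervals = [15]
    _ = center.skipForwardCommand.addTarget { _ in
        // Seek the underlying player forward here.
        return .success
    }
}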
offline with FairPlay
I am working on offline license support for FairPlay. I am trying to identify offline vs. live streaming requests. How do I do that? Is there any way to identify the request type (offline or streaming) from SPC?
0
1
70
3w