Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under Media Technologies topic


How to enumerate built-in media devices?
Hello, I need to enumerate built-in media devices (cameras, microphones, etc.). For this purpose, I am using the CoreAudio and CoreMediaIO frameworks. According to the table 'Daemon-Safe Frameworks' in Apple’s TN2083, CoreAudio is daemon-safe. However, the documentation does not mention CoreMediaIO. Can CoreMediaIO be used in a daemon? If not, are there any documented alternatives to detect built-in cameras in a daemon (e.g., via device classes in IOKit)? Thank you in advance, Pavel
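For reference, here is a minimal Swift sketch of the CoreMediaIO device enumeration in question, as it would run in a normal user session; whether the same calls are safe from a daemon is exactly the open question above, and kCMIOObjectPropertyElementMain assumes a macOS 12 or later SDK.

import CoreMediaIO
import Foundation

// Returns the CMIO object IDs of all attached video devices (built-in cameras included).
func copyCMIODeviceIDs() -> [CMIOObjectID] {
    var address = CMIOObjectPropertyAddress(
        mSelector: CMIOObjectPropertySelector(kCMIOHardwarePropertyDevices),
        mScope: CMIOObjectPropertyScope(kCMIOObjectPropertyScopeGlobal),
        mElement: CMIOObjectPropertyElement(kCMIOObjectPropertyElementMain))

    var dataSize: UInt32 = 0
    // 0 == kCMIOHardwareNoError
    guard CMIOObjectGetPropertyDataSize(CMIOObjectID(kCMIOObjectSystemObject),
                                        &address, 0, nil, &dataSize) == 0 else { return [] }

    var deviceIDs = [CMIOObjectID](repeating: 0,
                                   count: Int(dataSize) / MemoryLayout<CMIOObjectID>.size)
    var dataUsed: UInt32 = 0
    guard CMIOObjectGetPropertyData(CMIOObjectID(kCMIOObjectSystemObject),
                                    &address, 0, nil, dataSize, &dataUsed, &deviceIDs) == 0 else { return [] }
    return deviceIDs
}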
Replies: 0 | Boosts: 0 | Views: 93 | Activity: May ’25

System Audio Capture API Fails with OSStatus error 1852797029 (kAudioCodecIllegalOperationError)
Issue Description: I'm implementing a system audio capture feature using AudioHardwareCreateProcessTap and AudioHardwareCreateAggregateDevice. The app successfully creates the tap and aggregate device, but when starting the IO procedure with AudioDeviceStart, it sometimes fails with OSStatus error 1852797029 ("The operation couldn’t be completed. (OSStatus error 1852797029.)"). The error occurs inconsistently, which makes it particularly difficult to debug and reproduce.
Questions:
Has anyone encountered this intermittent "nope" error code (0x6e6f7065) when working with system audio capture?
Are there specific conditions or system states that might trigger this error sporadically?
Are there any known workarounds for handling this intermittent failure case?
Any insights or guidance would be greatly appreciated.
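For what it's worth, 1852797029 is the four-character code 'nope'; a small helper along these lines (a generic sketch, not part of the capture code above) makes such OSStatus values easier to read in logs.

import Foundation

// Render an OSStatus as its four-character code when the bytes are printable ASCII,
// e.g. 1852797029 -> "nope"; otherwise fall back to the decimal value.
func fourCharCode(from status: OSStatus) -> String {
    let value = UInt32(bitPattern: status)
    let bytes = [UInt8((value >> 24) & 0xFF), UInt8((value >> 16) & 0xFF),
                 UInt8((value >> 8) & 0xFF), UInt8(value & 0xFF)]
    if bytes.allSatisfy({ (0x20...0x7E).contains($0) }),
       let text = String(bytes: bytes, encoding: .ascii) {
        return text
    }
    return String(status)
}

// fourCharCode(from: 1852797029) == "nope"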
Replies: 0 | Boosts: 0 | Views: 140 | Activity: May ’25

Error codes -1102 and 1852797029 when entering the background while playing HLS assets
In our logging tools (Firebase) I see a lot of errors reported when users are playing content and the app transitions to the background. An AVPlayerItemFailedToPlayToEndTime notification is fired with an error containing codes like -1102 and 1852797029, which seem to correspond to NSURLErrorNoPermissionsToReadFile and kCMIOHardwareIllegalOperationError respectively. To me, it looks like these might have something to do with caching logic. The items being played are HLS streams, and we make use of AVAssetDownloadTask to make any streamed content available offline. Our setup is similar to the sample provided here: https://developer.apple.com/documentation/avfoundation/using-avfoundation-to-play-and-persist-http-live-streams. Whenever an item is selected for playback, the app checks whether a cached version is available and, if so, gets the URL of the stored file (like the "localAssetForStream()" method in the example); otherwise it gets the asset from a currently running AVAssetDownloadTask for the item, or else starts a new AVAssetDownloadTask and returns an AVAsset from that task to play. This seems to work fine, and I can't reproduce the issues our users and our logging tools are reporting. Is there some case I am missing where AVAssetDownloadTask and associated AVAssets might become unreadable when the app transitions to the background? Or do these errors indicate a different problem entirely?
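A minimal sketch of the selection logic described above, with assumed names: bookmarkData and activeTask are hypothetical placeholders for however the app tracks persisted and in-flight downloads, not the poster's actual code.

import AVFoundation

func assetForPlayback(streamURL: URL,
                      bookmarkData: Data?,
                      activeTask: AVAssetDownloadTask?) -> AVURLAsset {
    // 1. A previously persisted HLS download, restored from its bookmark.
    if let bookmarkData {
        var isStale = false
        if let localURL = try? URL(resolvingBookmarkData: bookmarkData,
                                   bookmarkDataIsStale: &isStale), !isStale {
            return AVURLAsset(url: localURL)
        }
    }
    // 2. A download already in flight; its asset is playable while downloading.
    if let activeTask {
        return activeTask.urlAsset
    }
    // 3. Otherwise fall back to streaming (starting a new download task is omitted here).
    return AVURLAsset(url: streamURL)
}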
Replies: 0 | Boosts: 0 | Views: 219 | Activity: Mar ’25

RealityKit/ARKit Memory Not Fully Released After AR Session Cleanup
Hi, I'm developing a SwiftUI app using RealityKit and ARKit for an AR measuring feature. I’ve noticed that after navigating away from my AR view and performing extensive cleanup (including removing all anchors/entities, pausing the ARSession, and nil-ing out all references), memory usage remains elevated and sometimes grows with repeated AR sessions. Each time I enter and exit the AR view, memory increases. The memory does not return to the baseline after cleanup, even though all custom objects are deallocated. Are there best practices beyond what I’ve described to ensure all ARKit/RealityKit resources are released after an AR session?
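A minimal sketch of the kind of teardown described, assuming a UIKit-hosted RealityKit ARView; the class and property names are illustrative, not the poster's actual code.

import UIKit
import ARKit
import RealityKit

final class MeasureViewController: UIViewController {
    var arView: ARView?

    func tearDownAR() {
        guard let arView else { return }
        arView.session.pause()              // stop the ARSession
        arView.scene.anchors.removeAll()    // drop all anchors/entities
        arView.removeFromSuperview()        // detach from the view hierarchy
        self.arView = nil                   // release the last strong reference
    }
}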
Replies: 0 | Boosts: 0 | Views: 100 | Activity: Jun ’25

About built-in instrument sounds on Apple devices
Does anyone know how to play the sound of a specific instrument when a button on the screen is tapped on an iPhone or iPad? I'm in the middle of creating a music learning app, and I'm thinking of assigning single notes or chords to the button-like frames on the on-screen keyboard and fingerboard. Can this be achieved with SwiftUI alone? I remember that MIDI (General MIDI Level 1) once provided instrument sounds, but I don't know how to implement the same functionality on the current OS. Please lend me your wisdom.
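One common approach (a sketch, not necessarily what the poster needs) is to drive an AVAudioUnitSampler from button taps; loading an instrument from a SoundFont is assumed here, and "Instruments.sf2" is a hypothetical resource name.

import AVFoundation
import AudioToolbox

final class NotePlayer {
    private let engine = AVAudioEngine()
    private let sampler = AVAudioUnitSampler()

    init() throws {
        engine.attach(sampler)
        engine.connect(sampler, to: engine.mainMixerNode, format: nil)
        if let url = Bundle.main.url(forResource: "Instruments", withExtension: "sf2") {
            // Program 0 is Acoustic Grand Piano in a General MIDI bank.
            try sampler.loadSoundBankInstrument(at: url, program: 0,
                                                bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
                                                bankLSB: UInt8(kAUSampler_DefaultBankLSB))
        }
        try engine.start()
    }

    // Play a single MIDI note; call several of these together for a chord.
    func play(note: UInt8, velocity: UInt8 = 100) {
        sampler.startNote(note, withVelocity: velocity, onChannel: 0)
    }

    func stop(note: UInt8) {
        sampler.stopNote(note, onChannel: 0)
    }
}

// Usage from a SwiftUI Button action: player.play(note: 60)  // middle C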
Replies: 0 | Boosts: 0 | Views: 57 | Activity: May ’25

How to detect HDCP support in Safari.
I am playing FairPlay + Multi-Key content (fMP4) in the Safari browser. I want to distinguish between SD and HD video quality, playing HD if HDCP is supported and SD if it is not. I have already confirmed that HDCP support is the default, and that a black screen is output in non-HDCP environments. What I want is to improve the user experience by switching to SD or HD appropriately, depending on HDCP support, when playing DRM content. Question: Is there an API or function that can detect HDCP support in Safari through JavaScript or other methods? Or is there a way to infer it indirectly?
Replies: 0 | Boosts: 0 | Views: 186 | Activity: Mar ’25

Unable to match music with ShazamKit for Android
Hello, I can successfully match music using ShazamKit on Apple platforms with SwiftUI: a simple app lets the user load an audio file and extracts the corresponding match. However, I am unable to match music using ShazamKit on Android. I am trying to build the same simple app, but I cannot match music, as I get MATCH_ATTEMPT_FAILED every time I try. I don't know what I am doing wrong; the Shazam part of the Kotlin Android code is in this method:

suspend fun processAudioFileInBackground(
    filePath: String,
    developerTokenProvider: DeveloperTokenProvider
) = withContext(Dispatchers.IO) {
    val bufferSize = 1024 * 1024
    val audioFile = FileInputStream(filePath)
    val byteBuffer = ByteBuffer.allocate(bufferSize)
    byteBuffer.order(ByteOrder.LITTLE_ENDIAN)
    var bytesRead: Int
    while (audioFile.read(byteBuffer.array()).also { bytesRead = it } != -1) {
        val signatureGenerator = (ShazamKit.createSignatureGenerator(AudioSampleRateInHz.SAMPLE_RATE_44100) as ShazamKitResult.Success).data
        signatureGenerator.append(byteBuffer.array(), bytesRead, System.currentTimeMillis())
        val signature = signatureGenerator.generateSignature()
        println("Signature: ${signature.durationInMs}")
        val catalog = ShazamKit.createShazamCatalog(developerTokenProvider, Locale.ENGLISH)
        val session = (ShazamKit.createSession(catalog) as ShazamKitResult.Success).data
        val matchResult = session.match(signature)
        println("MatchResult : $matchResult")
        setMatchResult(matchResult)
        byteBuffer.clear()
    }
    audioFile.close()
}

I noticed that changing the Locale in the catalog creation gives a different result: I get NoMatch without an exception. Can you please help me with this? Do I need to create a custom catalog?
Replies: 0 | Boosts: 0 | Views: 113 | Activity: May ’25

Build a vision capture system for apple vision pro
Hi, I am a newbie here. We have been given a task to build a robotic vision system to capture an immersive video in a hazed environment, which will later be played on Apple Vision Pro. I am thinking of starting with 2 or 4 basic CMOS camera sensors, such as IMX378, AR0144, or VD66GY, and designing an FPGA-based circuit to synchronously capture and store raw frame-by-frame data. Some frame initial processing such as demosaicing and filtering can also be done by the FPGA. Then, I would use software for post-processing to convert the data into a compatible video format for Apple Vision Pro. Will this idea work? I can handle the raw data capture, but I’m unsure if this approach is feasible and what post-processing software I should use. Thanks a lot for your suggestions! Charlie
Replies: 0 | Boosts: 0 | Views: 319 | Activity: Feb ’25

Capacitor app on iOS inline video stops
Hello, I’m experiencing an issue with video playback in my JavaScript (SvelteKit) application using Capacitor. The video plays and loops correctly on Android and in web browsers (including Safari), but stops unexpectedly after a few iterations in the native iOS app.

<video src={videoPath} autoplay muted loop playsinline class="h-auto w-full max-w-full object-cover"></video>

Has anyone encountered a similar issue, or do you have insights into what might be causing this behavior on iOS? Any suggestions or workarounds would be greatly appreciated. Maybe it has something to do with the iOS power-saving policy? Thank you in advance for your help!
Replies: 0 | Boosts: 0 | Views: 501 | Activity: Jan ’25

AirDropped Videos from Photos Save to Files Instead of Photos on Receiving Device
My app allows users to capture and save videos to the Photos app using the following Swift code:

PHPhotoLibrary.shared().performChanges {
    PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: fileURL)
} completionHandler: { success, error in
    // ...
}

Videos are successfully saved to Photos and play correctly. However, users report that when they AirDrop these videos from the Photos app to another device (e.g., iPad to iPhone), the videos are saved in the Files app on the receiving device instead of the Photos app. This issue is more common with higher-resolution videos, such as 2K, recorded in HEVC format at 30 fps. I wasn't able to reproduce the issue locally. I've found a thread on the public Apple forum: https://discussions.apple.com/thread/255276865?sortBy=rank but I wonder whether there are some special flags that I should clear or add to my videos (e.g., via PHAssetChangeRequest)? Thank you!
Replies: 0 | Boosts: 0 | Views: 101 | Activity: May ’25

Non-LiDAR Photogrammetry
Hello! In iOS 17.5, photogrammetry sessions cannot be performed on iPhones without LiDAR, but I don't think there is much difference in GPU performance between models with and without LiDAR. For example, the chips in the iPhone 14 Pro and iPhone 15 are the same A16 Bionic, so I think the GPU performance is also the same. Despite this, photogrammetry can be performed on the iPhone 14 Pro but not on the iPhone 15. Why is this? In fact, we have confirmed that if you transfer images taken with an iPhone 16 (which has no LiDAR) to an iPhone 16 Pro and run a photogrammetry session using those images, a 3D model can be generated. Also, will photogrammetry be possible on high-performance iPhones without LiDAR in the future?
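Rather than inferring support from the chip, RealityKit exposes runtime support flags on iOS 17+; a quick check (not from the post, just a sketch) looks like this.

import RealityKit

func logObjectCaptureSupport() {
    if #available(iOS 17.0, *) {
        // Guided capture UI support (device-dependent).
        print("ObjectCaptureSession supported: \(ObjectCaptureSession.isSupported)")
        // On-device reconstruction support (device-dependent).
        print("PhotogrammetrySession supported: \(PhotogrammetrySession.isSupported)")
    }
}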
Replies: 0 | Boosts: 0 | Views: 82 | Activity: Apr ’25

Apple News Publisher: How To Successfully Apply
Hi there, Can anyone tell me how to possibly get approved as an Apple News Publisher in 2025? We attempted in 2024, but received this message from Apple support: "Thank you for your interest in Apple News. At this time, we're not accepting new applications." When I inquired further, I got this second response: "Apple News is no longer accepting unsolicited applications. To learn more about Apple News requirements, visit the Apple News support page. If you have any feedback, please use this form to send us your comments. Keep in mind that while we read all feedback, we are unable to respond to each submission individually." My questions are: Is this still the case? (Especially when you are a legit local news outlet) Is there a link to apply as a news publisher? I don't seem to have that option at all. Thanks for any feedback.
Replies: 0 | Boosts: 0 | Views: 83 | Activity: Apr ’25

4k 120fps Showing Black Screen on iPhone 16
Hey - I am developing an app that uses the camera for recording video. I added the ability to choose a frame rate and resolution, and all combinations work perfectly fine except for 4K 120 fps on the new iPhone 16 Pro. This just shows black on the preview. I tried to record even though the preview was black, but the recording is also just a black screen. Is there anything special that needs to be done in the camera setup for 4K 120 fps to work? I have my camera setup code attached. Is it possible this is a bug in Apple's code, since this works with every other combination (1080p up to 240 fps and 4K up to 60 fps)? Thanks so much for the help.

class CameraManager: NSObject {
    enum Errors: Error {
        case noCaptureDevice
        case couldNotAddInput
        case unsupportedConfiguration
    }

    enum Resolution {
        case hd1080p
        case uhd4K

        var preset: AVCaptureSession.Preset {
            switch self {
            case .hd1080p: return .hd1920x1080
            case .uhd4K: return .hd4K3840x2160
            }
        }

        var dimensions: CMVideoDimensions {
            switch self {
            case .hd1080p: return CMVideoDimensions(width: 1920, height: 1080)
            case .uhd4K: return CMVideoDimensions(width: 3840, height: 2160)
            }
        }
    }

    enum CameraType {
        case wide
        case ultraWide

        var captureDeviceType: AVCaptureDevice.DeviceType {
            switch self {
            case .wide: return .builtInWideAngleCamera
            case .ultraWide: return .builtInUltraWideCamera
            }
        }
    }

    enum FrameRate: Int {
        case fps60 = 60
        case fps120 = 120
        case fps240 = 240
    }

    let orientationManager = OrientationManager()
    let captureSession: AVCaptureSession
    let previewLayer: AVCaptureVideoPreviewLayer
    let movieFileOutput = AVCaptureMovieFileOutput()
    let videoDataOutput = AVCaptureVideoDataOutput()
    private var videoCaptureDevice: AVCaptureDevice?

    override init() {
        self.captureSession = AVCaptureSession()
        self.previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
        super.init()
        self.previewLayer.videoGravity = .resizeAspect
    }

    func configureSession(resolution: Resolution,
                          frameRate: FrameRate,
                          stabilizationEnabled: Bool,
                          cameraType: CameraType,
                          sampleBufferDelegate: AVCaptureVideoDataOutputSampleBufferDelegate?) throws {
        assert(Thread.isMainThread)

        captureSession.beginConfiguration()
        defer { captureSession.commitConfiguration() }

        captureSession.sessionPreset = resolution.preset

        if captureSession.canAddOutput(movieFileOutput) {
            captureSession.addOutput(movieFileOutput)
        } else {
            throw Errors.couldNotAddInput
        }

        videoDataOutput.setSampleBufferDelegate(sampleBufferDelegate, queue: DispatchQueue(label: "VideoDataOutputQueue"))
        if captureSession.canAddOutput(videoDataOutput) {
            captureSession.addOutput(videoDataOutput)
            // Set the video orientation if needed
            if let connection = videoDataOutput.connection(with: .video) {
                //connection.videoOrientation = .portrait
            }
        } else {
            throw Errors.couldNotAddInput
        }

        guard let videoCaptureDevice = AVCaptureDevice.default(cameraType.captureDeviceType, for: .video, position: .back) else {
            throw Errors.noCaptureDevice
        }

        let useDimensions = resolution.dimensions
        guard let format = videoCaptureDevice.formats.first(where: { format in
            let dimensions = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
            let isRes = dimensions.width == useDimensions.width && dimensions.height == useDimensions.height
            let frameRates = format.videoSupportedFrameRateRanges
            return isRes && frameRates.contains(where: { $0.maxFrameRate >= Float64(frameRate.rawValue) })
        }) else {
            throw Errors.unsupportedConfiguration
        }

        self.videoCaptureDevice = videoCaptureDevice

        do {
            let videoInput = try AVCaptureDeviceInput(device: videoCaptureDevice)
            if captureSession.canAddInput(videoInput) {
                captureSession.addInput(videoInput)
            } else {
                throw Errors.couldNotAddInput
            }

            try videoCaptureDevice.lockForConfiguration()
            videoCaptureDevice.activeFormat = format
            videoCaptureDevice.activeVideoMinFrameDuration = CMTime(value: 1, timescale: CMTimeScale(frameRate.rawValue))
            videoCaptureDevice.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: CMTimeScale(frameRate.rawValue))
            videoCaptureDevice.activeMaxExposureDuration = CMTime(seconds: 1.0 / 960, preferredTimescale: 1000000)
            videoCaptureDevice.exposureMode = .locked
            videoCaptureDevice.unlockForConfiguration()
        } catch {
            throw error
        }

        configureStabilization(enabled: stabilizationEnabled)
    }
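For debugging, a small diagnostic sketch (not part of the poster's code) that lists every back-camera format with its resolution, maximum frame rate, and binning flag, to confirm whether a 4K/120 format is actually exposed on the device being tested:

import AVFoundation

func dumpBackCameraFormats() {
    guard let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) else { return }
    for format in device.formats {
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        let maxRate = format.videoSupportedFrameRateRanges.map(\.maxFrameRate).max() ?? 0
        print("\(dims.width)x\(dims.height) @ up to \(maxRate) fps, binned: \(format.isVideoBinned)")
    }
}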
Replies: 0 | Boosts: 0 | Views: 472 | Activity: Jan ’25

How to correctly set up AVSampleBufferDisplayLayer
How can I correctly set up AVSampleBufferDisplayLayer for video display when my input picture format is kCVPixelFormatType_32BGRA? Currently the video is visible in the Simulator, but not on an iPhone. Am I missing something? Render code:

var pixelBuffer: CVPixelBuffer?
let attrs: [String: Any] = [
    kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
    kCVPixelBufferWidthKey as String: width,
    kCVPixelBufferHeightKey as String: height,
    kCVPixelBufferBytesPerRowAlignmentKey as String: width * 4,
    kCVPixelBufferIOSurfacePropertiesKey as String: [:]
]
let status = CVPixelBufferCreateWithBytes(
    nil, width, height, kCVPixelFormatType_32BGRA,
    img, width * 4, nil, nil,
    attrs as CFDictionary, &pixelBuffer
)
guard status == kCVReturnSuccess, let pb = pixelBuffer else { return }

var formatDesc: CMVideoFormatDescription?
CMVideoFormatDescriptionCreateForImageBuffer(
    allocator: nil,
    imageBuffer: pb,
    formatDescriptionOut: &formatDesc
)
guard let format = formatDesc else { return }

var timingInfo = CMSampleTimingInfo(
    duration: .invalid,
    presentationTimeStamp: currentTime,
    decodeTimeStamp: .invalid
)
var sampleBuffer: CMSampleBuffer?
CMSampleBufferCreateForImageBuffer(
    allocator: kCFAllocatorDefault,
    imageBuffer: pb,
    dataReady: true,
    makeDataReadyCallback: nil,
    refcon: nil,
    formatDescription: format,
    sampleTiming: &timingInfo,
    sampleBufferOut: &sampleBuffer
)
if let sb = sampleBuffer {
    if CMSampleBufferGetPresentationTimeStamp(sb) == .invalid {
        print("Invalid video timestamp")
    }
    if displayLayer.status == .failed {
        displayLayer.flush()
    }
    DispatchQueue.main.async { [weak self] in
        guard let self = self else {
            print("Lost reference to self drawing")
            return
        }
        displayLayer.enqueue(sb)
    }
    frameIndex += 1
}
Replies: 0 | Boosts: 0 | Views: 118 | Activity: Apr ’25

Videos viewed in WKWebView have Spatial Audio issues
I have an app that has a WKWebView for watching YouTube videos. When the videos are windowed the audio seems fine, positionally as well. All perfectly. When I fullscreen the video and it goes into the native visionOS video player, the audio messes up. It will suddenly sound like it is in your ears, or maybe even just one ear channel, or the position will be wrong. It might be fine for a moment, but the second I touch the controls or move the window, the sound jumps across the room, away from the window, or switches to stereo. Sometimes, after exiting the window entirely, you can still hear the video playing. Even if you open the window back up, go to another screen, and open another video, you now hear two videos playing at the same time with no way to stop the first one in the background, requiring a force restart of the app. It is all sorts of glitchy. I haven't the slightest clue what is happening here. I strongly feel this is a visionOS bug. I tried using AVAudioSession to change some of the sound settings, and that makes zero difference in behavior. Multiple testers have also reported this behavior, and it has been seen on both the visionOS 2.3 and 2.4 betas. Thanks for the help! This is driving me mad! It is extremely consistent behavior!
Replies: 0 | Boosts: 0 | Views: 161 | Activity: Mar ’25

Intermittent Memory Leak Indicated in Simulator When Using AVAudioEngine with mainMixerNode Only
Hello, I'm observing an intermittent memory leak being reported in the iOS Simulator when initializing and starting an AVAudioEngine. Even with minimal setup—just attaching a single AVAudioPlayerNode and connecting it to the mainMixerNode—Xcode's memory diagnostics and Instruments sometimes flag a leak. Here is a simplified version of the code I'm using:

// This function is called when the user taps a button in the view controller:
#import "ViewController.h"

@interface ViewController ()
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
}

- (IBAction)myButtonAction:(id)sender {
    NSLog(@"Test");
    soundCreate();
}

@end

// media.m
static AVAudioEngine *audioEngine = nil;

void soundCreate(void) {
    if (audioEngine != nil)
        return;

    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryAmbient error:nil];
    [[AVAudioSession sharedInstance] setActive:YES error:nil];

    audioEngine = [[AVAudioEngine alloc] init];
    AVAudioPlayerNode* playerNode = [[AVAudioPlayerNode alloc] init];
    [audioEngine attachNode:playerNode];
    [audioEngine connect:playerNode to:(AVAudioNode *)[audioEngine mainMixerNode] format:nil];
    [audioEngine startAndReturnError:nil];
}

In the memory leak report, the following call stack is repeated, seemingly in a loop:

ListenerMap::InsertEvent(XAudioUnitEvent const&, ListenerBinding*) AudioToolboxCore
ListenerMap::AddParameter(AUListener*, void*, XAudioUnitEvent const&) AudioToolboxCore
AUListenerAddParameter AudioToolboxCore
addOrRemoveParameterListeners(OpaqueAudioComponentInstance*, AUListenerBase*, AUParameterTree*, bool) AudioToolboxCore
0x180178ddf
Replies: 0 | Boosts: 0 | Views: 115 | Activity: Apr ’25

New FairPlay Keys
Hello, My company has an in-store app with FPS SDK 4.x (1024) keys. We've handed those keys over to a trusted third-party and we do not have them. We've been in-store for several years. The person that created the keys in our organization mistakenly stored them encrypted to our third-party's PGP keys, so we cannot decrypt them, and the third party also has no mechanism to provide us with the keys even though it is in their runtime environment. They only have secure mechanisms for us to upload keys onto their servers. We are trying to migrate to a different third-party DRM provider, and would like to obtain new keys. Unfortunately, the developer portal won't let me create new keys, saying that we have exceeded the number of keys allowed, which I assume is one. Additionally, the new DRM provider can only support SDK 4.x keys, and it appears that we can only request SDK 5.x keys on the Apple Developer portal, as the SDK 4.0 option is grayed out. Regardless, it seems that we are not able to request any keys. We've submitted a request to the support e-mail address and received an automated e-mail that the response should take a few days, but may take longer on occasion. It's now been a month. The e-mail says that the reply address is not monitored. Is there any way we can accelerate this? Thank you, Carlos
Replies: 0 | Boosts: 1 | Views: 255 | Activity: Aug ’25