Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under Media Technologies topic

Lightning to HDMI mirrors
I am developing a VOD playback app. When I mirror video to an external monitor connected over a Lightning-to-HDMI adapter on iOS 18 or later, the external screen goes dark and I cannot confirm playback. The app does not detect the HDMI connection and present a separate player on the external display; it simply mirrors the screen. We have confirmed that the same problem occurs with other services, although playback does work with some services such as Apple TV. Are there any additional settings, such as video certificates, required for playback over HDMI? We would also like to know whether this problem is specific to iOS 18 and later.
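A possible avenue to check, offered as an assumption rather than a confirmed diagnosis: with FairPlay-protected VOD content, plain mirroring of protected video is often blanked, and AVPlayer's external-playback settings control whether the video is instead routed to the HDMI-connected display. A minimal sketch (the stream URL is hypothetical):

import AVFoundation

// Sketch: external-playback flags that affect what appears on an HDMI-connected display.
// Whether these help depends on the content's DRM/HDCP requirements.
let player = AVPlayer(url: URL(string: "https://example.com/stream.m3u8")!) // hypothetical URL

// Let AVPlayer present video on the external display instead of relying on mirroring.
player.allowsExternalPlayback = true

// Keep presenting full-screen video on the external display while the app's screen is mirrored.
player.usesExternalPlaybackWhileExternalScreenIsActive = true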
0 replies · 0 boosts · 273 views · Mar ’25

NonLiDAR Photogrammetry
Hello! In iOS 17.5, photogrammetry sessions cannot be run on iPhones without LiDAR, yet I don't think there is much difference in GPU performance between models with and without LiDAR. For example, the iPhone 14 Pro and the iPhone 15 use the same A16 Bionic chip, so GPU performance should be essentially the same. Despite this, photogrammetry can be performed on the iPhone 14 Pro but not on the iPhone 15. Why is this? In fact, we have confirmed that if you transfer images taken with an iPhone 16 (which has no LiDAR) to an iPhone 16 Pro and run a photogrammetry session with those images, a 3D model can be generated. Will photogrammetry become available on high-performance iPhones without LiDAR in the future?
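As an aside that may help when structuring such an app: RealityKit exposes a runtime flag for this, so support can be checked directly rather than inferred from the chip. A minimal sketch:

import RealityKit

// Sketch: gate photogrammetry features on RealityKit's own support flag rather than
// on chip generation; on current iPhones it reflects the LiDAR requirement described above.
if PhotogrammetrySession.isSupported {
    print("Object reconstruction is supported on this device.")
} else {
    print("Object reconstruction is not supported on this device.")
}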
0 replies · 0 boosts · 82 views · Apr ’25

Error -50 writing to AVAudioFile
I'm trying to write 16-bit interleaved 2-channel data captured from a LiveSwitch audio source to an AVAudioFile. The buffer and file formats match, but I get a bad parameter error from the API. Does this API not support the specified format, or is there some other issue? Here is the debugger output.

(lldb) po audioFile.url
▿ file:///private/var/mobile/Containers/Data/Application/1EB14379-0CF2-41B6-B742-4C9A80728DB3/tmp/Heart%20Sounds%201
  - _url : file:///private/var/mobile/Containers/Data/Application/1EB14379-0CF2-41B6-B742-4C9A80728DB3/tmp/Heart%20Sounds%201
  - _parseInfo : nil
  - _baseParseInfo : nil
(lldb) po error
Error Domain=com.apple.coreaudio.avfaudio Code=-50 "(null)" UserInfo={failed call=ExtAudioFileWrite(_impl->_extAudioFile, buffer.frameLength, buffer.audioBufferList)}
(lldb) po buffer.format
<AVAudioFormat 0x302a12b20: 2 ch, 44100 Hz, Int16, interleaved>
(lldb) po audioFile.fileFormat
<AVAudioFormat 0x302a515e0: 2 ch, 44100 Hz, Int16, interleaved>
(lldb) po buffer.frameLength
882
(lldb) po buffer.audioBufferList
▿ 0x0000000300941e60
  - pointerValue : 12894608992

This code handles the details of converting the LiveSwitch frame into an AVAudioPCMBuffer.

extension FMLiveSwitchAudioFrame {
    func convertedToPCMBuffer() -> AVAudioPCMBuffer {
        Self.convertToAVAudioPCMBuffer(from: self)!
    }

    static func convertToAVAudioPCMBuffer(from frame: FMLiveSwitchAudioFrame) -> AVAudioPCMBuffer? {
        // Retrieve the audio buffer and format details from the FMLiveSwitchAudioFrame
        guard let buffer = frame.buffer(),
              let format = buffer.format() as? FMLiveSwitchAudioFormat else {
            return nil
        }

        // Extract PCM format details from FMLiveSwitchAudioFormat
        let sampleRate = Double(format.clockRate())
        let channelCount = AVAudioChannelCount(format.channelCount())

        // Determine bytes per sample based on bit depth
        let bitsPerSample = 16
        let bytesPerSample = bitsPerSample / 8
        let bytesPerFrame = bytesPerSample * Int(channelCount)
        let frameLength = AVAudioFrameCount(Int(buffer.dataBuffer().length()) / bytesPerFrame)

        // Create an AVAudioFormat from the FMLiveSwitchAudioFormat
        guard let avAudioFormat = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                                sampleRate: sampleRate,
                                                channels: channelCount,
                                                interleaved: true) else {
            return nil
        }

        // Create an AudioBufferList to wrap the existing buffer
        let audioBufferList = UnsafeMutablePointer<AudioBufferList>.allocate(capacity: 1)
        audioBufferList.pointee.mNumberBuffers = 1
        audioBufferList.pointee.mBuffers.mNumberChannels = channelCount
        audioBufferList.pointee.mBuffers.mDataByteSize = UInt32(buffer.dataBuffer().length())
        audioBufferList.pointee.mBuffers.mData = buffer.dataBuffer().data().mutableBytes // Directly use LiveSwitch buffer

        // Transfer ownership of the buffer to AVAudioPCMBuffer
        let pcmBuffer = AVAudioPCMBuffer(pcmFormat: avAudioFormat, bufferListNoCopy: audioBufferList) /* { buffer in
            // Ensure the buffer is freed when AVAudioPCMBuffer is deallocated
            buffer.deallocate() // Only call this if LiveSwitch allows manual deallocation
        } */
        pcmBuffer?.frameLength = frameLength

        return pcmBuffer
    }
}

This is the handler that is invoked with every frame in order to convert it for use with AVAudioFile and optionally update a scrolling signal display on the screen.

private func onRaisedFrame(obj: Any!) -> Void {
    // Bail out early if no one is interested in the data.
    guard isMonitoring else { return }

    // Convert LS frame to AVAudioPCMBuffer (no-copy)
    let frame = obj as! FMLiveSwitchAudioFrame
    let buffer = frame.convertedToPCMBuffer()

    // Hand subscribers a reference to the buffer for rendering to display.
    bufferPublisher?.send(buffer)

    // If we have an output file, store the data there, as well.
    guard let audioFile = self.audioFile else { return }

    do {
        try audioFile.write(from: buffer) // FIXME: This call is throwing error -50
    } catch {
        FMLiveSwitchLog.error(withMessage: "Failed to write buffer to audio file at \(audioFile.url): \(error)")
        self.audioFile = nil
    }
}

This is how the audio file is being set up.

static var recordingFormat: AVAudioFormat = {
    AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: 44_100, channels: 2, interleaved: true)!
}()

let audioFile = try AVAudioFile(forWriting: outputURL, settings: Self.recordingFormat.settings)
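One avenue worth checking, offered as an assumption rather than a confirmed diagnosis: error -50 is kAudio_ParamError, and an AVAudioFile opened with init(forWriting:settings:) processes audio in the standard deinterleaved Float32 format regardless of the on-disk file format, so write(from:) can reject an Int16 interleaved buffer even though buffer.format and audioFile.fileFormat match. A sketch that pins the processing format to the capture format instead (outputURL is hypothetical):

import AVFoundation

// Sketch: open the file so that its *processing* format (what write(from:) expects)
// matches the Int16 interleaved capture buffers, not just its on-disk file format.
func makeRecordingFile(outputURL: URL) throws -> AVAudioFile {
    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 2,
        AVLinearPCMBitDepthKey: 16
    ]
    return try AVAudioFile(forWriting: outputURL,
                           settings: settings,
                           commonFormat: .pcmFormatInt16,
                           interleaved: true)
}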
0 replies · 0 boosts · 398 views · Mar ’25

Open an app from the photos share sheet
I'd like to add a share extension to my app (an Action app extension, I think). The extension would appear when users share a photo in the Photos app (and, ideally, Safari). If you tapped my app icon on the share sheet, iOS would pass the photo to my app and switch the user from Photos or Safari to my full app, with the shared photo(s) available for my app to work with.

I know this is possible, because Instagram (a third-party app) works exactly like this. If you look at an image in the Photos app, tap Share and then tap Instagram, iOS will background the Photos app, activate the Instagram app, and let you edit and post your photo in the main Instagram app.

It seems like NSExtensionContext#open(_:completionHandler:) might do this if I add a custom URL scheme to my main app, but the documentation for that says:

Each extension point determines whether to support this method, or under which conditions to support this method. In iOS, the Today and iMessage app extension points support this method.

That would rule out an Action, Photo Editing or Share extension. But then how does Instagram do this, and how can I achieve the same in my app?

I know that it's possible for an Action, Photo Editing or Share extension to open as a mini-app on top of the app providing the content. But coordinating the IPC for that is much, much more work (for my particular app) than just switching the user over to the app, with full access to all the functionality and data that my main app usually has access to.
0 replies · 0 boosts · 473 views · Jan ’25

Non-sendable type AVMediaSelectionGroup
Hi all, we are trying to migrate our project to Swift 6. The project uses AVPlayer on the MainActor, and selecting audio and subtitle tracks no longer compiles.

Task { @MainActor in
    let group = try await item.asset.loadMediaSelectionGroup(for: AVMediaCharacteristic.audible)
}

produces the error:

Non-sendable type 'AVMediaSelectionGroup?' returned by implicitly asynchronous call to nonisolated function cannot cross actor boundary

A second example:

if #available(iOS 15.0, *) {
    player?.currentItem?.asset.loadMediaSelectionGroup(for: AVMediaCharacteristic.audible, completionHandler: { group, error in
        if error != nil {
            return
        }
        if let groupWrp = group {
            DispatchQueue.main.async {
                self.setupAudio(groupWrp, audio: audioLang)
            }
        }
    })
}

produces the error:

Sending 'groupWrp' risks causing data races
1 reply · 0 boosts · 512 views · Feb ’25

AVFoundationErrorDomain Code=-11819 in AVPlayer – Causes & Fixes?
We are encountering an issue where AVPlayer throws the error:

Error Domain=AVFoundationErrorDomain Code=-11819 "Cannot Complete Action"
Underlying Error Domain[(null)]:Code[0]:Desc[(null)]

This error seems to occur intermittently during video playback, especially after extended usage or when switching between different streams. We see error -11819 (AVFoundationErrorDomain) reported in the Conviva platform for some of our users, but we have not been able to reproduce it ourselves, and we need help determining the root cause and/or best practices to prevent it.

Some questions we have:

What typically triggers this error?
Could it be related to memory/resource constraints, network instability, or backgrounding?
Are there any recommended ways to handle or recover from this error gracefully?

Any insights or guidance would be greatly appreciated. Thanks!
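For what it's worth, a hedged sketch: -11819 commonly corresponds to AVError.mediaServicesWereReset, meaning the system's media services daemon was restarted underneath the player; a usual mitigation is to observe the reset notification and rebuild the player stack when it fires. Treating that as the cause of this specific report is an assumption, and the stream URL below is hypothetical.

import AVFoundation

// Sketch: recover from a media-services reset (one common source of error -11819)
// by tearing down and recreating the AVPlayer stack when the system signals it.
final class PlayerController: NSObject {
    private var player: AVPlayer?

    override init() {
        super.init()
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(mediaServicesWereReset(_:)),
            name: AVAudioSession.mediaServicesWereResetNotification,
            object: nil)
    }

    @objc private func mediaServicesWereReset(_ notification: Notification) {
        // Existing AVPlayer/AVPlayerItem instances are unusable after a reset; rebuild them.
        let url = URL(string: "https://example.com/stream.m3u8")! // hypothetical stream URL
        player = AVPlayer(url: url)
        player?.play()
    }
}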
0 replies · 0 boosts · 152 views · Apr ’25

Missing Depth Frames When Recording with AVCaptureVideoDataOutputSampleBufferDelegate/AVCaptureDataOutputSynchronizerDelegate and AVAssetWriter
I’ve tried both AVCaptureVideoDataOutputSampleBufferDelegate (captureOutput) and AVCaptureDataOutputSynchronizerDelegate (dataOutputSynchronizer), but the number of depth frames and saved timestamps is significantly lower than the number of frames in the .mp4 file written by AVAssetWriter.

In my code, I save:

Timestamps for each frame to a metadata file
Depth frames to a binary file
Video to an .mp4 file

If I record a 4-second video at 30 fps, the .mp4 file correctly plays for 4 seconds, but the number of stored timestamps and depth frames is much lower, around 70 frames instead of the expected 120. Does anyone know why this mismatch happens?

func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer,
                            didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection) {
    // Read all outputs
    guard let syncedDepthData: AVCaptureSynchronizedDepthData = synchronizedDataCollection.synchronizedData(for: depthDataOutput) as? AVCaptureSynchronizedDepthData,
          let syncedVideoData: AVCaptureSynchronizedSampleBufferData = synchronizedDataCollection.synchronizedData(for: videoDataOutput) as? AVCaptureSynchronizedSampleBufferData else {
        // only work on synced pairs
        return
    }

    if syncedDepthData.depthDataWasDropped || syncedVideoData.sampleBufferWasDropped {
        return
    }

    let depthData = syncedDepthData.depthData
    let depthPixelBuffer = depthData.depthDataMap
    let sampleBuffer = syncedVideoData.sampleBuffer

    guard let videoPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
          let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer) else {
        return
    }

    addToPreviewStream?(CIImage(cvPixelBuffer: videoPixelBuffer))

    if !canWrite() {
        return
    }

    // Extract the presentation timestamp (PTS) from the sample buffer
    let timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)

    // sessionAtSourceTime is the first buffer we will write to the file
    if self.sessionAtSourceTime == nil {
        // Make sure we don't start recording until the buffer reaches the correct time
        // (buffer is always behind, this will fix the difference in time)
        guard sampleBuffer.presentationTimeStamp >= self.recordFromTime! else { return }
        self.sessionAtSourceTime = sampleBuffer.presentationTimeStamp
        self.videoWriter!.startSession(atSourceTime: sampleBuffer.presentationTimeStamp)
    }

    if self.videoWriterInput!.isReadyForMoreMediaData {
        self.videoWriterInput!.append(sampleBuffer)
        self.videoTimestamps.append(
            Timestamp(
                frame: videoTimestamps.count,
                value: timestamp.value,
                timescale: timestamp.timescale
            )
        )

        let ddm = depthData.depthDataMap
        depthCapture.addDepthData(pixelBuffer: ddm, timestamp: timestamp)
    }
}
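Not a confirmed diagnosis, but one thing worth ruling out: the depth output often runs at a lower native rate than the video output, and the synchronizer only delivers matched pairs, so depth can lag video even when nothing appears "dropped" in the callback. A small sketch for inspecting the rates and the discard behavior; the device and depthDataOutput parameter names are assumptions standing in for the capture objects already in the pipeline.

import AVFoundation

// Sketch: compare the active video and depth frame-rate ranges, and keep late
// depth frames instead of discarding them, to see where the mismatch comes from.
func logDepthConfiguration(device: AVCaptureDevice, depthDataOutput: AVCaptureDepthDataOutput) {
    for range in device.activeFormat.videoSupportedFrameRateRanges {
        print("video fps range:", range.minFrameRate, "-", range.maxFrameRate)
    }
    if let depthFormat = device.activeDepthDataFormat {
        for range in depthFormat.videoSupportedFrameRateRanges {
            print("depth fps range:", range.minFrameRate, "-", range.maxFrameRate)
        }
    }
    // When true, depth frames that arrive late are silently dropped by the output.
    depthDataOutput.alwaysDiscardsLateDepthData = false
}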
3 replies · 0 boosts · 458 views · Feb ’25

AVPlayerItem.externalMetadata not available
According to the documentation (https://developer.apple.com/documentation/avfoundation/avplayeritem/externalmetadata), AVPlayerItem should have an externalMetadata property. However, it does not appear to be visible to my app. When I try, I get:

Value of type 'AVPlayerItem' has no member 'externalMetadata'

The documentation states iOS 12.2+; I am building with a minimum deployment target of iOS 18. Code snippet:

import Foundation
import AVFoundation

/// ... in function ...

// create metadata as described in https://developer.apple.com/videos/play/wwdc2022/110338
var title = AVMutableMetadataItem()
title.identifier = .commonIdentifierAlbumName
title.value = "My Title" as NSString?
title.extendedLanguageTag = "und"

var playerItem = await AVPlayerItem(asset: composition)
playerItem.externalMetadata = [ title ]
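One possibility worth checking, offered as an assumption rather than a confirmed answer: externalMetadata appears to be vended by AVKit's extension on AVPlayerItem rather than by AVFoundation itself, so the snippet may simply be missing an import. A minimal sketch:

import AVFoundation
import AVKit // externalMetadata may require AVKit to be imported

func attachTitle(to playerItem: AVPlayerItem) {
    let title = AVMutableMetadataItem()
    title.identifier = .commonIdentifierAlbumName
    title.value = "My Title" as NSString
    title.extendedLanguageTag = "und"
    // Metadata shown by AVPlayerViewController for this item.
    playerItem.externalMetadata = [title]
}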
0 replies · 0 boosts · 80 views · Apr ’25

How to write RGB & Depth Frames Without Losing Synchronization
I’m currently working on a project where I capture both depth frames and RGB frames using AVCaptureDataOutputSynchronizer. Depth frames are stored as raw binary data and RGB frames are saved with AVAssetWriter. The issue I’m facing is that AVAssetWriter enforces a fixed framerate, meaning it adds or discards frames to maintain that rate (as I understand it). This causes a desynchronization between the depth and RGB frames, which is a problem because I need each depth frame to be exactly matched with the corresponding RGB frame as they were captured. How can I ensure that the RGB frames are saved without AVAssetWriter modifying the frame count?
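A hedged sketch of one way to keep the two streams aligned without relying on frame counts: store the presentation timestamp with every depth frame and with every video frame appended to the writer, then pair them by nearest timestamp in post-processing. The FrameStamp type and its fields are hypothetical stand-ins for whatever metadata is already being saved.

// Hypothetical records saved during capture: one per appended video frame,
// one per stored depth frame, each carrying its presentation timestamp in seconds.
struct FrameStamp {
    let index: Int
    let seconds: Double // e.g. CMTimeGetSeconds(presentationTimeStamp) recorded at capture time
}

// Pair each depth frame with the video frame whose timestamp is closest to it.
func matchDepthToVideo(depth: [FrameStamp], video: [FrameStamp]) -> [(depthIndex: Int, videoIndex: Int)] {
    guard !video.isEmpty else { return [] }
    return depth.map { d in
        let nearest = video.min { abs($0.seconds - d.seconds) < abs($1.seconds - d.seconds) }!
        return (d.index, nearest.index)
    }
}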
0 replies · 0 boosts · 380 views · Feb ’25

Prevent iOS from Switching Between Back Camera Lenses in getUserMedia (Safari/WebView, iOS 18)
I’m developing a hybrid app (WebView / Turbo Native) that uses getUserMedia to access the back camera for a PPG/heart-rate measurement feature (the user places their finger on the camera).

Problem: even when I specify constraints like

{ video: { deviceId: '...', facingMode: { exact: 'environment' }, advanced: [{ zoom: 1.0 }] }, audio: false }

on iPhone 15 (iOS 18), iOS unexpectedly switches between the wide, ultra-wide, and telephoto lenses during the measurement. This breaks the heart-rate detection and forces the user to move their finger in the middle of the measurement.

Question: is there any way, via getUserMedia/WebRTC, to force iOS to use only the wide-angle lens and prevent automatic lens switching? I know that with AVFoundation (Swift) you can pick .builtInWideAngleCamera, but I’m hoping to avoid building a custom native layer and would prefer to stick with WebView/JavaScript if possible to save time and complexity.

Any suggestions, workarounds, or updates from Apple would be greatly appreciated! Thanks a lot!
1 reply · 0 boosts · 410 views · Mar ’25

MusicKit media player missing output device selection
Hi all, I am working on a DJ playout app (macOS). The app combines a few AVAudioPlayerNodes with the ApplicationMusicPlayer from MusicKit. I can route the output of the AVAudioPlayerNodes to a hardware device so that the audio files are directed to their own dedicated output on my Mac. The ApplicationMusicPlayer, however, follows the default output, which is pretty annoying. Has anyone found a solution to route the ApplicationMusicPlayer to a specific output device? Thanks, Pancras
2 replies · 0 boosts · 583 views · Feb ’25

save audio file in iOS 18 instead of iOS 12
I'm able to convert text to speech and write it to an audio file using the following code on iOS 12 (iPhone 8):

audioFile = try AVAudioFile(
    forWriting: saveToURL,
    settings: pcmBuffer.format.settings,
    commonFormat: .pcmFormatInt16,
    interleaved: false)

where pcmBuffer.format.settings is:

[AVAudioFileTypeKey: kAudioFileMP3Type,
 AVSampleRateKey: 48000,
 AVEncoderBitRateKey: 128000,
 AVNumberOfChannelsKey: 2,
 AVFormatIDKey: kAudioFormatLinearPCM]

However, this code does not work when I run the app on iOS 18 on an iPhone 13 Pro Max. The audio file is created, but it doesn't sound right: it has a lot of static and the speech is very low-pitched. Can anyone give me a hint or an answer?
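A hedged guess at the cause, with a sketch: on newer OS versions the speech synthesizer may deliver buffers at a different sample rate and format than the hard-coded settings, and writing them into a file created with mismatched settings produces exactly this kind of static and pitch shift. Creating the AVAudioFile lazily from the first buffer's own format avoids that assumption; the function and parameter names below are illustrative.

import AVFoundation

// Sketch: let the synthesizer dictate the file's format instead of hard-coding it.
// In a real app, keep the synthesizer alive (e.g. as a property) until it finishes.
func speakToFile(text: String, saveToURL: URL) {
    let synthesizer = AVSpeechSynthesizer()
    let utterance = AVSpeechUtterance(string: text)
    var audioFile: AVAudioFile?

    synthesizer.write(utterance) { buffer in
        guard let pcmBuffer = buffer as? AVAudioPCMBuffer, pcmBuffer.frameLength > 0 else { return }
        do {
            if audioFile == nil {
                // Create the file with the format the synthesizer actually produces.
                audioFile = try AVAudioFile(forWriting: saveToURL,
                                            settings: pcmBuffer.format.settings)
            }
            try audioFile?.write(from: pcmBuffer)
        } catch {
            print("Failed to write synthesized audio: \(error)")
        }
    }
}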
2 replies · 0 boosts · 116 views · Mar ’25

3D scanning and 3D model with texture mapping
I am new here and would appreciate help with code, or an explanation of what to use in Swift, for an app that can capture LiDAR scans and RGB data from photos, generate a 3D mesh, and create an .OBJ, .MTL, and .JPEG file set for further manipulation of the 3D model. I am able to create a 3D mesh and an .OBJ file from a LiDAR scan, but I cannot generate the .MTL and .JPEG files for the texture of the 3D model.
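Not a full answer, but a sketch of one possible export path under assumptions: if the scanned geometry can be wrapped in an MDLAsset whose mesh carries a material with a baked texture, ModelIO's OBJ exporter writes the companion .MTL file alongside the .OBJ; the texture image itself still has to be saved to disk where the material references it. The input URL is hypothetical.

import Foundation
import ModelIO

// Sketch: export an existing textured asset (e.g. one produced from a scan) as OBJ.
// ModelIO writes scan.obj plus scan.mtl; the referenced texture image (JPEG/PNG)
// must already exist on disk at the path the material points to.
func exportAsOBJ(assetURL: URL, to outputDirectory: URL) throws {
    let asset = MDLAsset(url: assetURL) // hypothetical: a USDZ or other mesh file from the scan
    guard MDLAsset.canExportFileExtension("obj") else { return }
    try asset.export(to: outputDirectory.appendingPathComponent("scan.obj"))
}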
1 reply · 0 boosts · 390 views · Feb ’25

Usage of colorCurvesFilter
How can I use my RGB curve points:

let redCurve = [CIVector(x: 0, y: 0), CIVector(x: 0.235, y: 0.152), CIVector(x: 0.5, y: 0.5), CIVector(x: 1, y: 1)]
let greenCurve = [CIVector(x: 0, y: 0), CIVector(x: 0.247, y: 0.196), CIVector(x: 0.5, y: 0.5), CIVector(x: 1, y: 1)]
let blueCurve = [CIVector(x: 0, y: 0), CIVector(x: 0.235, y: 0.184), CIVector(x: 0.466, y: 0.466), CIVector(x: 1, y: 1)]

in colorCurvesFilter, which I've found in the Apple docs:

func colorCurves(inputImage: CIImage) -> CIImage {
    let colorCurvesEffect = CIFilter.colorCurves()
    colorCurvesEffect.inputImage = inputImage
    colorCurvesEffect.curvesDomain = CIVector(x: 0, y: 1)
    colorCurvesEffect.curvesData = Data(
        bytes: [Float32]([
            0.0, 0.0, 0.0,
            0.8, 0.8, 0.8,
            1.0, 1.0, 1.0
        ]), count: 36)
    colorCurvesEffect.colorSpace = CGColorSpaceCreateDeviceRGB()
    return colorCurvesEffect.outputImage!
}
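A sketch of one way to bridge the two representations, under the assumption that CIColorCurves expects curvesData to contain evenly spaced, interleaved R,G,B samples across curvesDomain (the hard-coded example above uses three such samples): resample each channel's control points by linear interpolation and pack the results into the Data.

import Foundation
import CoreImage

// Linearly interpolate a channel's control points at x in 0...1.
// Assumes the points are sorted by x with distinct x values.
func sampleCurve(_ points: [CIVector], at x: CGFloat) -> Float32 {
    guard let upperIndex = points.firstIndex(where: { $0.x >= x }) else {
        return Float32(points.last?.y ?? 0)
    }
    if upperIndex == 0 { return Float32(points[0].y) }
    let p0 = points[upperIndex - 1]
    let p1 = points[upperIndex]
    let t = (x - p0.x) / (p1.x - p0.x)
    return Float32(p0.y + t * (p1.y - p0.y))
}

// Resample the three channel curves into interleaved RGB samples for curvesData.
func makeCurvesData(red: [CIVector], green: [CIVector], blue: [CIVector], sampleCount: Int = 16) -> Data {
    var samples: [Float32] = []
    for i in 0..<sampleCount {
        let x = CGFloat(i) / CGFloat(sampleCount - 1)
        samples.append(sampleCurve(red, at: x))
        samples.append(sampleCurve(green, at: x))
        samples.append(sampleCurve(blue, at: x))
    }
    return samples.withUnsafeBufferPointer { Data(buffer: $0) }
}

The resulting Data would replace the hard-coded bytes in the snippet above, with curvesDomain left at CIVector(x: 0, y: 1).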
0 replies · 0 boosts · 378 views · Jan ’25

Unable to generate .p12 file from fairplay.cer
I am reaching out regarding an issue with my Apple FairPlay Streaming certificate. To generate the certificate signing request (CSR), I used the following OpenSSL commands:

openssl genrsa -out private_key.pem 1024
openssl req -new -key private_key.pem -out request.csr

However, according to the guide provided by Apple and the instructions from my DRM provider, I should have used:

openssl genrsa -aes256 -out privatekey.pem 1024
openssl req -new -sha1 -key privatekey.pem -out certreq.csr -subj "/CN=SubjectName /OU=OrganizationalUnit /O=Organization /C=US"

I suspect this discrepancy might be causing the issue with my FairPlay certificate. After obtaining the fairplay.cer file and importing it into Keychain Access, I noticed the following:

When I expand the certificate in Keychain Access, I can only see a public key and no private key.
As a result, I am unable to export the certificate as a .p12 file; this option is disabled.

As per my DRM provider's instructions, I need to export the certificate along with the corresponding private key as a .p12 file protected by a password. Since the private key is not visible in Keychain Access, I am unable to proceed.

I have read the FairPlay Streaming overview but could not find any explanation of why this is happening, or guidance on how to revoke a certificate. I also came across terms and conditions that mention contacting product-security at Apple for assistance in revoking corrupt certificates, but despite reaching out I have not received a response. Any help on how to proceed would be great!
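A hedged suggestion, not official guidance: because the private key was generated with OpenSSL on disk, it never entered the keychain, so Keychain Access has nothing to pair with fairplay.cer and the .p12 export stays disabled. If the private_key.pem from the commands above still exists, a .p12 can be assembled directly with OpenSSL (assuming the .cer is DER-encoded, which is typical for certificates downloaded from Apple):

openssl x509 -inform DER -in fairplay.cer -out fairplay_cert.pem
openssl pkcs12 -export -inkey private_key.pem -in fairplay_cert.pem -out fairplay.p12

The pkcs12 command prompts for the export password that the DRM provider requires.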
0 replies · 0 boosts · 542 views · Jan ’25