We are currently working on a CarPlay navigation app, and so far everything is working well except for spoken turn notifications.
Our TTS implementation works fine on the phone, and it works fine on CarPlay when the voice is played over the car's speakers. If users connect a BT headset to the car and listen through that headset, the voice prompts are chopped up and stutter.
Why would users use a BT headset? Well, we are working on a motorcycle app, and there are usually no speakers on a motorcycle.
It sounds like the BT channel is opened and closed repeatedly for every character or word spoken. This happens on different CarPlay devices and different Bluetooth headsets; we have reports from multiple users who find this behavior annoying and say that other apps work fine.
Is this a known issue? Are there possible workarounds?
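One workaround we are considering is to configure a single spoken-audio session up front and keep it active between prompts, so the Bluetooth route is not torn down after every utterance. A minimal sketch (assuming AVSpeechSynthesizer for the TTS; the category, mode, and option choices are guesses to experiment with, not a confirmed fix):

```swift
import AVFoundation

// Sketch: configure the session once and keep it active between prompts so the
// Bluetooth route is not opened and closed for every utterance. The exact
// category, mode, and options are assumptions to test, not a confirmed fix.
final class TurnPromptSpeaker {
    private let synthesizer = AVSpeechSynthesizer()

    init() {
        let session = AVAudioSession.sharedInstance()
        try? session.setCategory(.playback, mode: .voicePrompt, options: [.duckOthers])
        try? session.setActive(true)
        // Let the synthesizer use the app's session instead of activating its own.
        synthesizer.usesApplicationAudioSession = true
    }

    func speak(_ text: String) {
        synthesizer.speak(AVSpeechUtterance(string: text))
    }
}
```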
I have used AVQueuePlayer in my music app to play a sequence of audio tracks from a remote server. This is how the player is defined in my ViewModel:
Variables
private var cancellables = Set<AnyCancellable>()
private let audioSession = AVAudioSession.sharedInstance()
private var avQueuePlayer: AVQueuePlayer?
@Published var playbackSpeed: Float = 1.0
Before starting playback, I make sure the audio session is set up properly; the code snippet used for that is:
do {
try audioSession.setCategory(.playback, mode: .default, options: [])
try audioSession.setActive(true, options: [])
} catch {
return
}
And this is the function I am using to update the playback speed:
func updatePlaybackSpeed(_ newSpeed: Float) {
if newSpeed > 0.0, newSpeed <= 2.0 {
playbackSpeed = newSpeed
avQueuePlayer?.rate = newSpeed
print("requested speed is \(newSpeed) and actual speed is \(String(describing: avQueuePlayer?.rate))")
}
}
Sometimes the player does play at whatever speed was set.
E.g. once I got "requested speed is 1.5 and actual speed is 1.5", and the player indeed seemed to play at a speed of 1.5,
but another time I got "requested speed is 2.0 and actual speed is 2.0", yet the player still seemed to play at a speed of 1.0.
To observe changes in the rate, I used this:
private func observeRateChanges() {
guard let avQueuePlayer = self.avQueuePlayer else { return }
NotificationCenter.default.publisher(for: AVQueuePlayer.rateDidChangeNotification, object: avQueuePlayer)
.compactMap { $0.userInfo?[AVPlayer.rateDidChangeReasonKey] as? AVPlayer.RateDidChangeReason }
.sink { reason in
switch reason {
case .appBackgrounded:
print("The app transitioned to the background.")
case .audioSessionInterrupted:
print("The system interrupts the app’s audio session.")
case .setRateCalled:
print("The app set the player’s rate.")
case .setRateFailed:
print("An attempt to change the player’s rate failed.")
default:
break
}
}
.store(in: &cancellables)
}
When the rate was set properly, I got "The app set the player’s rate." from the function above, but when it wasn't, I got "An attempt to change the player’s rate failed."
Now I am not able to understand why the rate is not being set, and if updatePlaybackSpeed printed "requested speed is 2.0 and actual speed is 2.0", why does the player still seem to play at a speed of 1.0?
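One workaround I am experimenting with is to only request speeds above 1.0 once the current item reports canPlayFastForward, and to re-apply the stored speed whenever the queue advances to a new item. A minimal sketch (the class name is illustrative, and defaultRate requires iOS 16):

```swift
import AVFoundation
import Combine

// Sketch: re-apply the requested speed when the current item changes, and only
// ask for rates above 1.0 when the item supports fast-forward playback.
final class SpeedController {
    private var cancellables = Set<AnyCancellable>()
    private let player: AVQueuePlayer
    private(set) var requestedSpeed: Float = 1.0

    init(player: AVQueuePlayer) {
        self.player = player
        player.publisher(for: \.currentItem)
            .compactMap { $0 }
            .sink { [weak self] _ in self?.applyRequestedSpeed() }
            .store(in: &cancellables)
    }

    func update(speed: Float) {
        guard speed > 0.0, speed <= 2.0 else { return }
        requestedSpeed = speed
        applyRequestedSpeed()
    }

    private func applyRequestedSpeed() {
        guard let item = player.currentItem else { return }
        // Rates above 1.0 are only honored when the item can fast-forward.
        if requestedSpeed <= 1.0 || item.canPlayFastForward {
            if #available(iOS 16.0, *) { player.defaultRate = requestedSpeed }
            player.rate = requestedSpeed // note: setting rate also resumes playback
        }
    }
}
```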
Topic:
Media Technologies
SubTopic:
Audio
Hello Apple Developer Community,
We are developing a music management platform for restaurants and cafes in Saudi Arabia. Our app enables businesses to schedule playlists and allows visitors to request songs via barcodes. Music playback is powered by Apple Music, and users must have their own Apple Music subscriptions to access the music. Our service charges a monthly subscription fee for these management features, not for music access itself.
Project Overview and MusicKit Role
Our app integrates MusicKit to leverage Apple Music’s catalog and playback capabilities. Users log in with their Apple Music accounts, ensuring they have an active subscription for music playback. Our platform’s value lies in its tools—playlist scheduling and song requests—which are built on top of MusicKit’s APIs. We offer these features exclusively in Saudi Arabia.
Legal Context in Saudi Arabia
In Saudi Arabia, to our understanding, no special licenses are required for playing music in commercial venues like restaurants and cafes. This means our clients can use Apple Music subscriptions for playback without additional performance rights licenses. While this aligns with local laws, we recognize that Apple’s global policies may impose stricter requirements, prompting our need for clarification.
Subscription Model and Monetization Concerns
We charge a monthly subscription fee for access to our app’s features (e.g., scheduling playlists and managing song requests). This fee is separate from the Apple Music subscription, which users must maintain for playback. However, Apple’s MusicKit terms state: "You agree not to require payment for or indirectly monetize access to the Apple Music service." We’re concerned whether our subscription model might be interpreted as indirectly monetizing Apple Music access, given its reliance on MusicKit for functionality.
Scheduling Feature and Synchronization Rights
Our app allows businesses to schedule playlists for general time slots (e.g., “play this playlist from 6 PM to 8 PM”). It does not support precise scheduling, such as playing a specific song at an exact moment (e.g., “play this song at 7:30 PM”). Apple’s guidelines mention that “deeper or more complex music integration” may require additional licenses, like synchronization rights. We’re unsure if our general scheduling feature crosses this threshold or remains within MusicKit’s standard usage.
Questions for Clarification
We’d greatly appreciate expert input on the following:
Monetization: Does our subscription fee for management features (scheduling and song requests) violate Apple’s policy against indirectly monetizing Apple Music access?
Local Context: Given that Saudi Arabia requires no additional licenses for commercial music playback, does this impact our compliance with Apple’s global terms?
Scheduling: Does our playlist scheduling for general time slots (not exact moments) fall within MusicKit’s permitted scope, or does it require further licensing?
Thank you in advance for any insights or guidance to ensure our app aligns with Apple’s policies!
Topic:
Media Technologies
SubTopic:
General
Tags:
Apple Music API
MusicKit
MusicKit JS
Apple Music Feed
I'm unable to play music using Musickit.js in the Chrome or Firefox browsers.
Even using the Apple guide here: https://js-cdn.music.apple.com/musickit/v1/index.html - I've added my Music Developer Token and song/album URL, but it only works in Safari and not in Chrome or Firefox.
I'm unsure if this is a global issue, or if there is something I need to do to enable playback in other browsers, but as it stands it's not working for me.
Thanks for any help in advance!
I am trying to achieve an animated gradient effect that changes values over time based on the current seconds. I am using AVPlayer and AVMutableVideoComposition along with a custom compositing instruction and compositor class to generate the effect. I didn't want to load any video file, but rather generate a custom video with my own set of instructions. I used Metal compute shaders to generate the effect and make the video 20 seconds long.
However, when I run the code, I get a frozen player with the gradient applied, and when I try to play the video, I get this warning in the console: Visual isTranslatable: NO; reason: observation failure: noObservations
My entire code:
import AVFoundation
import Metal
class GradientVideoCompositorTest: NSObject, AVVideoCompositing {
var sourcePixelBufferAttributes: [String: Any]? = [
kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
]
var requiredPixelBufferAttributesForRenderContext: [String: Any] = [
kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
]
private var renderContext: AVVideoCompositionRenderContext?
private var metalDevice: MTLDevice!
private var metalCommandQueue: MTLCommandQueue!
private var metalLibrary: MTLLibrary!
private var metalPipeline: MTLComputePipelineState!
override init() {
super.init()
setupMetal()
}
func setupMetal() {
guard let device = MTLCreateSystemDefaultDevice(),
let queue = device.makeCommandQueue(),
let library = try? device.makeDefaultLibrary(),
let function = library.makeFunction(name: "gradientShader") else {
fatalError("Metal setup failed")
}
self.metalDevice = device
self.metalCommandQueue = queue
self.metalLibrary = library
self.metalPipeline = try? device.makeComputePipelineState(function: function)
}
func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {
renderContext = newRenderContext
}
func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
guard let outputPixelBuffer = renderContext?.newPixelBuffer(),
let metalTexture = createMetalTexture(from: outputPixelBuffer) else {
request.finish(with: NSError(domain: "com.example.gradient", code: -1, userInfo: nil))
return
}
var time = Float(request.compositionTime.seconds)
renderGradient(to: metalTexture, time: time)
request.finish(withComposedVideoFrame: outputPixelBuffer)
}
private func createMetalTexture(from pixelBuffer: CVPixelBuffer) -> MTLTexture? {
var texture: MTLTexture?
let width = CVPixelBufferGetWidth(pixelBuffer)
let height = CVPixelBufferGetHeight(pixelBuffer)
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(
pixelFormat: .bgra8Unorm,
width: width,
height: height,
mipmapped: false
)
textureDescriptor.usage = [.shaderWrite, .shaderRead]
CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
if let textureCache = createTextureCache(), let cvTexture = createCVMetalTexture(from: pixelBuffer, cache: textureCache) {
texture = CVMetalTextureGetTexture(cvTexture)
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
return texture
}
private func renderGradient(to texture: MTLTexture, time: Float) {
guard let commandBuffer = metalCommandQueue.makeCommandBuffer(),
let commandEncoder = commandBuffer.makeComputeCommandEncoder() else { return }
commandEncoder.setComputePipelineState(metalPipeline)
commandEncoder.setTexture(texture, index: 0)
var mutableTime = time
commandEncoder.setBytes(&mutableTime, length: MemoryLayout<Float>.size, index: 0)
let threadsPerGroup = MTLSize(width: 16, height: 16, depth: 1)
let threadGroups = MTLSize(
width: (texture.width + 15) / 16,
height: (texture.height + 15) / 16,
depth: 1
)
commandEncoder.dispatchThreadgroups(threadGroups, threadsPerThreadgroup: threadsPerGroup)
commandEncoder.endEncoding()
commandBuffer.commit()
}
private func createTextureCache() -> CVMetalTextureCache? {
var cache: CVMetalTextureCache?
CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, metalDevice, nil, &cache)
return cache
}
private func createCVMetalTexture(from pixelBuffer: CVPixelBuffer, cache: CVMetalTextureCache) -> CVMetalTexture? {
var cvTexture: CVMetalTexture?
let width = CVPixelBufferGetWidth(pixelBuffer)
let height = CVPixelBufferGetHeight(pixelBuffer)
CVMetalTextureCacheCreateTextureFromImage(
kCFAllocatorDefault,
cache,
pixelBuffer,
nil,
.bgra8Unorm,
width,
height,
0,
&cvTexture
)
return cvTexture
}
}
class GradientCompositionInstructionTest: NSObject, AVVideoCompositionInstructionProtocol {
var timeRange: CMTimeRange
var enablePostProcessing: Bool = true
var containsTweening: Bool = true
var requiredSourceTrackIDs: [NSValue]? = nil
var passthroughTrackID: CMPersistentTrackID = kCMPersistentTrackID_Invalid
init(timeRange: CMTimeRange) {
self.timeRange = timeRange
}
}
func createGradientVideoComposition(duration: CMTime, size: CGSize) -> AVMutableVideoComposition {
let composition = AVMutableComposition()
let instruction = GradientCompositionInstructionTest(timeRange: CMTimeRange(start: .zero, duration: duration))
let videoComposition = AVMutableVideoComposition()
videoComposition.customVideoCompositorClass = GradientVideoCompositorTest.self
videoComposition.renderSize = size
videoComposition.frameDuration = CMTime(value: 1, timescale: 30) // 30 FPS
videoComposition.instructions = [instruction]
return videoComposition
}
#include <metal_stdlib>
using namespace metal;
kernel void gradientShader(texture2d<float, access::write> output [[texture(0)]],
constant float &time [[buffer(0)]],
uint2 id [[thread_position_in_grid]]) {
float2 uv = float2(id) / float2(output.get_width(), output.get_height());
// Animated colors based on time
float3 color1 = float3(sin(time) * 0.8 + 0.1, 0.6, 1.0);
float3 color2 = float3(0.12, 0.99, cos(time) * 0.9 + 0.3);
// Linear interpolation for gradient
float3 gradientColor = mix(color1, color2, uv.y);
output.write(float4(gradientColor, 1.0), id);
}
I’m experiencing a crash at runtime when trying to extract audio from a video. This issue occurs on both iOS 18 and earlier versions. The crash is caused by the following error:
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: '*** -[AVAssetExportSession exportAsynchronouslyWithCompletionHandler:] Cannot call exportAsynchronouslyWithCompletionHandler: more than once.'
*** First throw call stack:
(0x1875475ec 0x184ae1244 0x1994c49c0 0x217193358 0x217199899 0x192e208b9 0x217192fd9 0x30204c88d 0x3019e5155 0x301e5fb41 0x301af7add 0x301aff97d 0x301af888d 0x301aff27d 0x301ab5fa5 0x301ab6101 0x192e5ee39)
libc++abi: terminating due to uncaught exception of type NSException
My previous code worked fine, but it's crashing with Swift 6.
Does anyone know a solution for this?
## Previous code:
func extractAudioFromVideo(from videoURL: URL, exportHandler: ((AVAssetExportSession, CurrentValueSubject<Float, Never>?) -> Void)? = nil, completion: @escaping (Swift.Result<URL, Error>) -> Void) {
let asset = AVAsset(url: videoURL)
// Create an AVAssetExportSession to export the audio track
guard let exportSession = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetAppleM4A) else {
completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Failed to create AVAssetExportSession"])))
return
}
// Set the output file type and path
guard let filename = videoURL.lastPathComponent.components(separatedBy: ["."]).first else { return }
let outputURL = VideoUtils.getTempAudioExportUrl(filename)
VideoUtils.deleteFileIfExists(outputURL.path)
exportSession.outputFileType = .m4a
exportSession.outputURL = outputURL
let audioExportProgressPublisher = CurrentValueSubject<Float, Never>(0.0)
if let exportHandler = exportHandler {
exportHandler(exportSession, audioExportProgressPublisher)
}
// Periodically check the progress of the export session
let timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { _ in
audioExportProgressPublisher.send(exportSession.progress)
}
// Export the audio track asynchronously
exportSession.exportAsynchronously {
switch exportSession.status {
case .completed:
completion(.success(outputURL))
case .failed:
completion(.failure(exportSession.error ?? NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Unknown error occurred while exporting audio"])))
case .cancelled:
completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Export session was cancelled"])))
default:
completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Unknown export session status"])))
}
// Invalidate the timer when the export session completes or is cancelled
timer.invalidate()
}
}
## New code:
func extractAudioFromVideo(from videoURL: URL, exportHandler: ((AVAssetExportSession, CurrentValueSubject<Float, Never>?) -> Void)? = nil, completion: @escaping (Swift.Result<URL, Error>) -> Void) async {
let asset = AVAsset(url: videoURL)
// Create an AVAssetExportSession to export the audio track
guard let exportSession = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetAppleM4A) else {
completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Failed to create AVAssetExportSession"])))
return
}
// Set the output file type and path
guard let filename = videoURL.lastPathComponent.components(separatedBy: ["."]).first else { return }
let outputURL = VideoUtils.getTempAudioExportUrl(filename)
VideoUtils.deleteFileIfExists(outputURL.path)
let audioExportProgressPublisher = CurrentValueSubject<Float, Never>(0.0)
if let exportHandler {
exportHandler(exportSession, audioExportProgressPublisher)
}
if #available(iOS 18.0, *) {
do {
try await exportSession.export(to: outputURL, as: .m4a)
let states = exportSession.states(updateInterval: 0.1)
for await state in states {
switch state {
case .pending, .waiting:
break
case .exporting(progress: let progress):
print("Exporting: \(progress.fractionCompleted)")
if progress.isFinished {
completion(.success(outputURL))
}else if progress.isCancelled {
completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Export session was cancelled"])))
}else {
audioExportProgressPublisher.send(Float(progress.fractionCompleted))
}
}
}
}catch let error {
print(error.localizedDescription)
}
}else {
// Periodically check the progress of the export session
let publishTimer = Timer.publish(every: 0.1, on: .main, in: .common)
.autoconnect()
.sink { [weak exportSession] _ in
guard let exportSession else { return }
audioExportProgressPublisher.send(exportSession.progress)
}
exportSession.outputFileType = .m4a
exportSession.outputURL = outputURL
await exportSession.export()
switch exportSession.status {
case .completed:
completion(.success(outputURL))
case .failed:
completion(.failure(exportSession.error ?? NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Unknown error occurred while exporting audio"])))
case .cancelled:
completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Export session was cancelled"])))
default:
completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Unknown export session status"])))
}
// Invalidate the timer when the export session completes or is cancelled
publishTimer.cancel()
}
}
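For reference, a minimal sketch of how the iOS 18 path could be structured so that export is awaited exactly once, with state monitoring started before the export begins (same export(to:as:) and states(updateInterval:) APIs as above; the helper name and parameters are only illustrative):

```swift
import AVFoundation
import Combine

@available(iOS 18.0, *)
func exportAudio(with exportSession: AVAssetExportSession,
                 to outputURL: URL,
                 progress publisher: CurrentValueSubject<Float, Never>) async throws {
    // Start observing export states before the export is awaited.
    let monitor = Task {
        for await state in exportSession.states(updateInterval: 0.1) {
            if case .exporting(let progress) = state {
                publisher.send(Float(progress.fractionCompleted))
            }
        }
    }
    defer { monitor.cancel() }
    // export(to:as:) performs the whole export; it is called exactly once.
    try await exportSession.export(to: outputURL, as: .m4a)
}
```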
I have integrated the ShazamKit SDK into my iOS app and would like to implement the same functionality in my Android app.
My question is: Can I use the Android version of the ShazamKit SDK for commercial purposes?
After extensive research, I could not find any official information regarding the license of the Android version of the ShazamKit SDK.
Could you please provide a formal license statement?
Hi all,
I can successfully match music using ShazamKit on Apple platforms with SwiftUI (a simple app that lets the user load an audio file and extracts the corresponding match), but I am unable to match music with ShazamKit on Android. I am trying to build the same simple app, but I get MATCH_ATTEMPT_FAILED every time I attempt a match. I don't know what I am doing wrong; the Shazam part of the Kotlin Android code is in this method:
suspend fun processAudioFileInBackground(
filePath: String,
developerTokenProvider: DeveloperTokenProvider
) = withContext(Dispatchers.IO) {
val bufferSize = 1024 * 1024
val audioFile = FileInputStream(filePath)
val byteBuffer = ByteBuffer.allocate(bufferSize)
byteBuffer.order(ByteOrder.LITTLE_ENDIAN)
var bytesRead: Int
while (audioFile.read(byteBuffer.array()).also { bytesRead = it } != -1) {
val signatureGenerator = (ShazamKit.createSignatureGenerator(AudioSampleRateInHz.SAMPLE_RATE_44100) as ShazamKitResult.Success).data
signatureGenerator.append(byteBuffer.array(), bytesRead, System.currentTimeMillis())
val signature = signatureGenerator.generateSignature()
println("Signature: ${signature.durationInMs}")
val catalog = ShazamKit.createShazamCatalog(developerTokenProvider, Locale.ENGLISH)
val session = (ShazamKit.createSession(catalog) as ShazamKitResult.Success).data
val matchResult = session.match(signature)
println("MatchResult : $matchResult")
setMatchResult(matchResult)
byteBuffer.clear()
}
audioFile.close()
}
I noticed that changing the Locale when creating the catalog gives a different result: I get NoMatch without an exception. Can you please help me with this?
Topic:
Media Technologies
SubTopic:
Audio
Hi everyone!
I’ve developed a location-based Audio AR app in Unity with FMOD & Resonance Audio and AirPods Pro Head-Tracking to create a ubiquitous augmented soundscape experience. Think of it as an audio version of Pokémon Go, but with a more precise location requirement to ensure spatial audio is placed correctly.
I want this experience to run in the background on iOS, but from what I’ve gathered, it seems Unity doesn’t support this well. So, I’m considering developing a Swift version instead.
Since this is primarily for research purposes, privacy concerns are not a major issue in my case. However, I’ve come across some potential challenges:
Real-time precise location updates – Can iOS provide continuous, high-accuracy location updates while the app is in the background? (A minimal configuration sketch follows below.)
Continuous real-time data processing – Can an app keep processing spatial audio, head-tracking, and location data while running in the background?
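For the location part, this is roughly the Core Location configuration I understand would be needed for continuous background updates (a minimal sketch; the "location updates" background mode and Always authorization are assumptions on my side):

```swift
import CoreLocation

// Sketch: continuous high-accuracy updates in the background. Assumes the
// "Location updates" background mode is enabled for the target and the user
// grants Always (or While Using) authorization.
final class BackgroundLocationTracker: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyBestForNavigation
        manager.allowsBackgroundLocationUpdates = true
        manager.pausesLocationUpdatesAutomatically = false
        manager.requestAlwaysAuthorization()
    }

    func locationManagerDidChangeAuthorization(_ manager: CLLocationManager) {
        switch manager.authorizationStatus {
        case .authorizedAlways, .authorizedWhenInUse:
            manager.startUpdatingLocation()
        default:
            break
        }
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        // Feed the newest fix into the spatial audio placement logic.
        guard let latest = locations.last else { return }
        print("Fix accuracy: \(latest.horizontalAccuracy) m")
    }
}
```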
I’m not sure if newer iOS versions have improved in these areas or if there are workarounds to achieve this.
Would this kind of experience be feasible to run in the background on iOS? Any insights or pointers would be greatly appreciated!
I’m very new to iOS development, so apologies if this is a basic question. Thanks in advance!
Hi, I am a newbie here.
We have been given a task to build a robotic vision system to capture immersive video in a hazed environment, which will later be played on Apple Vision Pro. I am thinking of starting with 2 or 4 basic CMOS camera sensors, such as IMX378, AR0144, or VD66GY, and designing an FPGA-based circuit to synchronously capture and store raw frame-by-frame data. Some initial frame processing, such as demosaicing and filtering, could also be done on the FPGA. Then I would use software for post-processing to convert the data into a video format compatible with Apple Vision Pro.
Will this idea work? I can handle the raw data capture, but I’m unsure if this approach is feasible and what post-processing software I should use.
Thanks a lot for your suggestions!
Charlie
Topic:
Media Technologies
SubTopic:
Video
Hello,
I’m experiencing an issue with video playback in my JavaScript (SvelteKit) application using Capacitor. The video plays and loops correctly on Android and in web browsers (including Safari), but it stops unexpectedly after a few iterations in the native iOS app.
<video src={videoPath} autoplay muted loop playsinline
class="h-auto w-full max-w-full object-cover"></video>
Has anyone encountered a similar issue or have insights into what might be causing this behavior on iOS?
Any suggestions or workarounds would be greatly appreciated. Maybe it has something to do with the iOS power saving policy?
Thank you in advance for your help!
Every time I put my AirPods in and connect them to my phone, my Mac, or my iPad since the iOS 18.3 update, they disconnect without reason, pause songs I'm in the middle of playing, and only partially reconnect in one pod. It's getting really frustrating.
Topic:
Media Technologies
SubTopic:
Audio
I found that when I run my app with Xcode 16 or later, enable the floating window (picture-in-picture) feature, and then open the system camera, the content in the floating window is not displayed and the window shows a black screen. This does not happen with Xcode 15.4: it is the same code, and I do not know why.
Topic:
Media Technologies
SubTopic:
Photos & Camera
Tags:
Swift Packages
App Clips
Developer Tools
iOS
On some devices, loadFileRepresentation(forTypeIdentifier:completionHandler:) takes a long time (about two minutes) to call back with a result for some large videos (about 200 MB, taken with the device camera).
environment:
Model: iPhone 12
Model Number: MGGM3CH/A
iOS Version: 18.3.2
PHPickerResult.NSItemProvider.loadFileRepresentation()
// import PhotosUI
func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
picker.dismiss(animated: true, completion: nil)
guard let provider = results.last?.itemProvider else { return }
guard provider.hasItemConformingToTypeIdentifier(UTType.movie.identifier) else {
return
}
Task {
provider.loadFileRepresentation(forTypeIdentifier: UTType.movie.identifier) { url, error in
guard let url = url else {
return
}
// Do some stuff...
}
}
}
P.S. I also tried some other functions, e.g. provider.loadItem(forTypeIdentifier:), but they didn't work either.
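For comparison, I am also experimenting with resolving the picked item to a PHAsset and copying its original resource directly, since loadFileRepresentation first writes a temporary copy of the file. This sketch assumes the picker was created with PHPickerConfiguration(photoLibrary: .shared()) so that assetIdentifier is populated, and that the app has photo library read access:

```swift
import Photos
import PhotosUI

// Sketch: resolve the picked item to a PHAsset and copy its original video
// resource directly, instead of waiting for loadFileRepresentation's temp copy.
func copyOriginalVideo(for result: PHPickerResult, to destination: URL,
                       completion: @escaping (Error?) -> Void) {
    guard let identifier = result.assetIdentifier,
          let asset = PHAsset.fetchAssets(withLocalIdentifiers: [identifier], options: nil).firstObject,
          let resource = PHAssetResource.assetResources(for: asset).first(where: { $0.type == .video })
    else {
        completion(NSError(domain: "Picker", code: -1, userInfo: nil))
        return
    }
    let options = PHAssetResourceRequestOptions()
    options.isNetworkAccessAllowed = true // the original may still be in iCloud
    PHAssetResourceManager.default().writeData(for: resource, toFile: destination,
                                               options: options, completionHandler: completion)
}
```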
On some devices, when I select the same media multiple times, the data returned by `loadFileRepresentation(forTypeIdentifier:completionHandler:)` is different each time (data.count is not equal).
environment:
* Model: iPhone 12
* Model Number: MGGM3CH/A
* iOS Version: 18.3.2
```Swift
// import PhotosUI
func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
picker.dismiss(animated: true, completion: nil)
guard let provider = results.last?.itemProvider else { return }
guard provider.hasItemConformingToTypeIdentifier(UTType.movie.identifier) else {
return
}
Task {
provider.loadFileRepresentation(forTypeIdentifier: UTType.movie.identifier) { url, error in
guard let url = url else {
return
}
if let data = try? Data(contentsOf: url) {
print("data count is: \(data.count)")
}
}
}
}
```
P.S. I also tried some other functions, e.g. `provider.loadItem(forTypeIdentifier:)`, but they didn't work either.
Hey - I am developing an app that uses the camera for recording video. I added the ability to choose a frame rate and resolution, and all combinations work perfectly fine except 4K 120 fps on the new iPhone 16 Pro. That one just shows black in the preview. I tried to record even though the preview was black, but the recording is also just a black screen. Is there anything special that needs to be done in the camera setup for 4K 120 fps to work? My camera setup code is attached. Is it possible this is a bug in Apple's code, since this works with every other combination (1080p up to 240 fps and 4K up to 60 fps)?
Thanks so much for the help.
class CameraManager: NSObject {
enum Errors: Error {
case noCaptureDevice
case couldNotAddInput
case unsupportedConfiguration
}
enum Resolution {
case hd1080p
case uhd4K
var preset: AVCaptureSession.Preset {
switch self {
case .hd1080p:
return .hd1920x1080
case .uhd4K:
return .hd4K3840x2160
}
}
var dimensions: CMVideoDimensions {
switch self {
case .hd1080p:
return CMVideoDimensions(width: 1920, height: 1080)
case .uhd4K:
return CMVideoDimensions(width: 3840, height: 2160)
}
}
}
enum CameraType {
case wide
case ultraWide
var captureDeviceType: AVCaptureDevice.DeviceType {
switch self {
case .wide:
return .builtInWideAngleCamera
case .ultraWide:
return .builtInUltraWideCamera
}
}
}
enum FrameRate: Int {
case fps60 = 60
case fps120 = 120
case fps240 = 240
}
let orientationManager = OrientationManager()
let captureSession: AVCaptureSession
let previewLayer: AVCaptureVideoPreviewLayer
let movieFileOutput = AVCaptureMovieFileOutput()
let videoDataOutput = AVCaptureVideoDataOutput()
private var videoCaptureDevice: AVCaptureDevice?
override init() {
self.captureSession = AVCaptureSession()
self.previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
super.init()
self.previewLayer.videoGravity = .resizeAspect
}
func configureSession(resolution: Resolution, frameRate: FrameRate, stabilizationEnabled: Bool, cameraType: CameraType, sampleBufferDelegate: AVCaptureVideoDataOutputSampleBufferDelegate?) throws {
assert(Thread.isMainThread)
captureSession.beginConfiguration()
defer { captureSession.commitConfiguration() }
captureSession.sessionPreset = resolution.preset
if captureSession.canAddOutput(movieFileOutput) {
captureSession.addOutput(movieFileOutput)
} else {
throw Errors.couldNotAddInput
}
videoDataOutput.setSampleBufferDelegate(sampleBufferDelegate, queue: DispatchQueue(label: "VideoDataOutputQueue"))
if captureSession.canAddOutput(videoDataOutput) {
captureSession.addOutput(videoDataOutput)
// Set the video orientation if needed
if let connection = videoDataOutput.connection(with: .video) {
//connection.videoOrientation = .portrait
}
} else {
throw Errors.couldNotAddInput
}
guard let videoCaptureDevice = AVCaptureDevice.default(cameraType.captureDeviceType, for: .video, position: .back) else {
throw Errors.noCaptureDevice
}
let useDimensions = resolution.dimensions
guard let format = videoCaptureDevice.formats.first(where: { format in
let dimensions = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
let isRes = dimensions.width == useDimensions.width && dimensions.height == useDimensions.height
let frameRates = format.videoSupportedFrameRateRanges
return isRes && frameRates.contains(where: { $0.maxFrameRate >= Float64(frameRate.rawValue) })
}) else {
throw Errors.unsupportedConfiguration
}
self.videoCaptureDevice = videoCaptureDevice
do {
let videoInput = try AVCaptureDeviceInput(device: videoCaptureDevice)
if captureSession.canAddInput(videoInput) {
captureSession.addInput(videoInput)
} else {
throw Errors.couldNotAddInput
}
try videoCaptureDevice.lockForConfiguration()
videoCaptureDevice.activeFormat = format
videoCaptureDevice.activeVideoMinFrameDuration = CMTime(value: 1, timescale: CMTimeScale(frameRate.rawValue))
videoCaptureDevice.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: CMTimeScale(frameRate.rawValue))
videoCaptureDevice.activeMaxExposureDuration = CMTime(seconds: 1.0 / 960, preferredTimescale: 1000000)
videoCaptureDevice.exposureMode = .locked
videoCaptureDevice.unlockForConfiguration()
} catch {
throw error
}
configureStabilization(enabled: stabilizationEnabled)
}`
We are developing an Apple Music web app for the phone. The web app works fine in Chrome, but when I load it in a WebView on my phone, I can't play the first song.
We suspect that the DRM initialization, key exchange, and session creation happen inside the music.play() function, and that when we trigger play, the DRM or session is not yet ready to play a real song, so it errors out.
So we would like to know:
What is the relative sequence of DRM, key exchange, session creation, etc. inside the play() function?
Is there a state-detection function that shows whether the DRM is ready?
Topic:
Media Technologies
SubTopic:
Audio
Tags:
Apple Music API
MusicKit
MusicKit JS
Apple Music Feed
After updating to 18.3, my iPad (7th generation) has no sound and will not play videos.
It has been quite some time since I requested the Apple FPS package, yet I haven’t received it. I haven’t received any email either. Is there a developer support inquiry center where I can check the status of the process? Alternatively, could you share approximately how long it took for you to receive a response email?
Topic:
Media Technologies
SubTopic:
Streaming
Tags:
Accounts
FairPlay Streaming
Video
HTTP Live Streaming
I would appreciate help with coding, or an explanation of what to use in Swift, for an app that can capture LiDAR scans and RGB data from pictures taken, generate a 3D mesh, and create an .OBJ, .MTL, and .JPEG file set for further manipulation of the 3D model.
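For context, the direction I am exploring on the capture side is ARKit scene reconstruction on a LiDAR device, with the mesh anchors later converted to OBJ/MTL via Model I/O as a separate step (a minimal sketch; class and method names are illustrative):

```swift
import ARKit

// Sketch: run an ARKit session with scene reconstruction on a LiDAR device and
// collect the mesh anchors. Converting ARMeshGeometry to an MDLMesh and
// exporting OBJ/MTL with Model I/O would be a separate step.
final class LiDARCaptureController: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let config = ARWorldTrackingConfiguration()
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
            config.sceneReconstruction = .mesh
        }
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
            config.frameSemantics.insert(.sceneDepth)
        }
        session.delegate = self
        session.run(config)
    }

    // Mesh anchors accumulate the reconstructed geometry as the user scans.
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        let meshAnchors = anchors.compactMap { $0 as? ARMeshAnchor }
        print("Updated \(meshAnchors.count) mesh anchors")
    }
}
```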
Topic:
Media Technologies
SubTopic:
Photos & Camera
Tags:
3D Graphics
Swift Playground
Object Capture