Hello. My app uses AVAudioRecorder to create recording files, and for a few users the resulting files are consistently only 4 KB in size. Most users generate audio files normally; only a small number hit this occasionally. After uninstalling and reinstalling the app it works normally again, but the problem reappears after a period of time. I have compared the problematic files, and they are identical each time and cannot be played. I added the audioRecorderDidFinishRecording: delegate method, and it reports that the recording completed normally. The affected users also report that recording appears normal, but there is a problem with the generated file. How should I handle this issue? Looking forward to your reply.
- (void)startRecordWithOrderID:(NSString *)orderID {
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    [audioSession setCategory:AVAudioSessionCategoryRecord error:nil];
    [audioSession setActive:YES error:nil];

    // 16-bit, 8 kHz, mono linear PCM (WAV)
    NSMutableDictionary *settings = [[NSMutableDictionary alloc] init];
    [settings setObject:[NSNumber numberWithFloat:8000.0] forKey:AVSampleRateKey];
    [settings setObject:[NSNumber numberWithInt:kAudioFormatLinearPCM] forKey:AVFormatIDKey];
    [settings setObject:[NSNumber numberWithInt:16] forKey:AVLinearPCMBitDepthKey];
    [settings setObject:[NSNumber numberWithInt:1] forKey:AVNumberOfChannelsKey];
    [settings setObject:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsBigEndianKey];
    [settings setObject:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsFloatKey];

    NSString *path = [WDUtility createDirInDocument:@"audios" withOrderID:orderID withPathExtension:@"wav"];
    NSURL *tmpFile = [NSURL fileURLWithPath:path];

    // Note: every error parameter is nil here, so session or recorder failures are silently ignored.
    recorder = [[AVAudioRecorder alloc] initWithURL:tmpFile settings:settings error:nil];
    [recorder setDelegate:self];
    [recorder prepareToRecord];
    [recorder record];
}
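One thing I plan to try is to stop passing error:nil and check every return value, in case the session activation or the recorder is silently failing on the affected devices. A minimal Swift sketch of the same setup with the error paths surfaced (makeRecordingURL(orderID:) is a hypothetical stand-in for the WDUtility helper):

import AVFoundation

// Sketch only: same settings as above, but every failure is surfaced instead of ignored.
func startRecording(orderID: String) throws -> AVAudioRecorder {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.record, mode: .default, options: [])   // may throw if the session is unavailable
    try session.setActive(true)

    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVSampleRateKey: 8000.0,
        AVNumberOfChannelsKey: 1,
        AVLinearPCMBitDepthKey: 16,
        AVLinearPCMIsBigEndianKey: false,
        AVLinearPCMIsFloatKey: false
    ]

    let url = makeRecordingURL(orderID: orderID)   // hypothetical helper
    let recorder = try AVAudioRecorder(url: url, settings: settings)

    guard recorder.prepareToRecord(), recorder.record() else {
        throw NSError(domain: "Recording", code: -1,
                      userInfo: [NSLocalizedDescriptionKey: "prepareToRecord/record returned false"])
    }
    return recorder
}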
I'm creating an app that uses AVCaptureSession to pass camera input to an AVCaptureMetadataOutput whose metadata object types are set with [metaout setMetadataObjectTypes:@[AVMetadataObjectTypeFace]] in order to scan faces.
After updating to iPadOS 26 Beta 2 and iOS 26 Beta 2, an issue has occurred where the delegate method of AVCaptureMetadataOutputObjectsDelegate is not called on some devices. The following devices are experiencing this issue:
iPad (9th Gen)
iPad Air (4th Gen)
iPhone 15
This issue does not occur on any of the other devices I have.
I tried running the AVFoundation sample code from the Apple Developer site on the devices above, and the same problem occurs. https://developer.apple.com/documentation/avfoundation/capture_setup/avcambarcode_detecting_barcodes_and_faces
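For reference, the setup boils down to something like the minimal sketch below (error handling trimmed, names illustrative); on the affected devices the metadata delegate callback simply never fires:

import AVFoundation

// Minimal face-metadata pipeline, roughly what the sample code does (illustrative sketch).
func makeFaceDetectionSession(delegate: AVCaptureMetadataOutputObjectsDelegate) throws -> AVCaptureSession {
    let session = AVCaptureSession()

    guard let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) else {
        throw NSError(domain: "Capture", code: -1)
    }
    session.addInput(try AVCaptureDeviceInput(device: device))

    let metadataOutput = AVCaptureMetadataOutput()
    session.addOutput(metadataOutput)                         // add the output before choosing object types
    metadataOutput.setMetadataObjectsDelegate(delegate, queue: .main)
    metadataOutput.metadataObjectTypes = [.face]              // delegate should then receive AVMetadataFaceObject

    session.startRunning()
    return session
}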
Are any additional settings required after the iPadOS 26 and iOS 26 betas? Or is there a problem on the OS side?
Hello.
My team and I think we have an issue where our app is asked to shut down gracefully, followed by a SIGTERM. As we’ve learned, this is normally not a problem. However, it also seems to be happening while our app (an audio streamer) is actively playing in the background.
From our perspective, starting playback indicates strong user intent. We understand that there can be extreme circumstances where background audio needs to be killed, but should that be considered part of normal operation? We hope that’s not the case.
All we see in the logs is the graceful shutdown request. We can say with high certainty that it is happening, though, as we know playback is running within 0.5 seconds of the crash, without any other tracked user interaction.
Can you verify whether this is intended behavior, and whether there’s something we can do about it on our end? From our logs it doesn’t look to be related to memory usage, either within the app or the system as a whole.
Best,
John
I'm capturing a video stream from a GoPro camera (via UDP MPEG-TS packets) but I'm unable to play it in my iOS app. Can someone provide sample code for decoding MPEG-TS data into CMSampleBuffers?
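For context, this is roughly the direction I've been exploring: demux the TS/PES layer myself to get H.264 access units, then wrap each access unit in a CMSampleBuffer. The sketch below assumes a CMVideoFormatDescription already built from the SPS/PPS (for example with CMVideoFormatDescriptionCreateFromH264ParameterSets) and NAL units already converted from Annex B start codes to AVCC length prefixes:

import CoreMedia

// Hypothetical helper: wraps one AVCC length-prefixed H.264 access unit into a CMSampleBuffer
// that VTDecompressionSession or AVSampleBufferDisplayLayer can consume.
func makeSampleBuffer(avccData: Data,
                      formatDescription: CMVideoFormatDescription,
                      presentationTime: CMTime) -> CMSampleBuffer? {
    var blockBuffer: CMBlockBuffer?
    guard CMBlockBufferCreateWithMemoryBlock(allocator: kCFAllocatorDefault,
                                             memoryBlock: nil,
                                             blockLength: avccData.count,
                                             blockAllocator: kCFAllocatorDefault,
                                             customBlockSource: nil,
                                             offsetToData: 0,
                                             dataLength: avccData.count,
                                             flags: kCMBlockBufferAssureMemoryNowFlag,
                                             blockBufferOut: &blockBuffer) == kCMBlockBufferNoErr,
          let block = blockBuffer else { return nil }

    // Copy the access unit into the block buffer.
    avccData.withUnsafeBytes { raw in
        _ = CMBlockBufferReplaceDataBytes(with: raw.baseAddress!,
                                          blockBuffer: block,
                                          offsetIntoDestination: 0,
                                          dataLength: avccData.count)
    }

    var timing = CMSampleTimingInfo(duration: .invalid,
                                    presentationTimeStamp: presentationTime,
                                    decodeTimeStamp: .invalid)
    var sampleSize = avccData.count
    var sampleBuffer: CMSampleBuffer?
    CMSampleBufferCreate(allocator: kCFAllocatorDefault,
                         dataBuffer: block,
                         dataReady: true,
                         makeDataReadyCallback: nil,
                         refcon: nil,
                         formatDescription: formatDescription,
                         sampleCount: 1,
                         sampleTimingEntryCount: 1,
                         sampleTimingArray: &timing,
                         sampleSizeEntryCount: 1,
                         sampleSizeArray: &sampleSize,
                         sampleBufferOut: &sampleBuffer)
    return sampleBuffer
}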
Hi everyone,
I’m trying to use AVAssetResourceLoaderDelegate to handle a live radio stream (e.g. Icecast/HTTP stream). My goal is to have access to the last 30 seconds of audio data during playback, so I can analyze it for specific audio patterns in near-real-time.
I’ve implemented a custom resource loader that works fine for podcasts and static files, where the file size and content length are known. However, for infinite live streams, my current implementation stops receiving new loading requests after the first one is served. As a result, the playback either stalls or fails to continue.
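In case it clarifies the goal: what I ultimately need is a rolling window over the incoming bytes, roughly bitrate / 8 × 30 bytes for a constant-bitrate stream. A simplified sketch of that kind of capped buffer (names are illustrative):

import Foundation

// Keeps only the most recent `capacity` bytes of an endless stream (illustrative sketch).
final class TrailingBuffer {
    private var data = Data()
    private let capacity: Int
    private let queue = DispatchQueue(label: "trailing-buffer")

    /// For a 128 kbit/s stream, 30 s is roughly 128_000 / 8 * 30 bytes.
    init(capacity: Int) { self.capacity = capacity }

    func append(_ chunk: Data) {
        queue.sync {
            data.append(chunk)
            if data.count > capacity {
                data.removeFirst(data.count - capacity)   // drop the oldest bytes
            }
        }
    }

    /// Snapshot of the trailing window for analysis.
    func snapshot() -> Data {
        queue.sync { data }
    }
}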
Has anyone successfully used AVAssetResourceLoaderDelegate with a continuous radio stream? Or can you suggest a better approach for buffering and analyzing live audio?
Any tips, examples, or advice would be appreciated. Thanks!
Is there a way to permanently disable PHASE SDK logging? It seems to be a lot chattier than Apple's other SDKs.
While developing a RealityKit app that uses AudioPlaybackController, I must manually hide the PHASE SDK log output several times each day so I can see my app's log messages.
Thank you.
Hello Apple Developer Community,
I am trying to play an HLS stream using the React Native Video player (it uses AVPlayer underneath). I am able to play the stream smoothly, but in some cases the player cannot play the stream properly.
Behaviour:
react-native-video: I am getting the error below.
Error details from react-native-video player:
Error Code: -12971
Domain: CoreMediaErrorDomain
Localised Description: The operation couldn’t be completed. (CoreMediaErrorDomain error -12971.)
Target: 2457
The error does not provide a specific failure reason or recovery suggestion, which makes troubleshooting challenging.
AVPlayer in a native iOS project: video playback stops after playing for a few seconds.
AVPlayer configuration:
player.currentItem?.preferredForwardBufferDuration = 1
player.automaticallyWaitsToMinimizeStalling = true
N.B.: The same buffer duration is working perfectly for others.
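To get more detail than the bare -12971, one thing I plan to try is dumping the AVPlayerItem error log whenever a new entry is added. A sketch, assuming playerItem refers to the current item:

import AVFoundation

// Dump AVPlayerItem's error log on each new entry (sketch; `playerItem` is assumed).
let errorLogObserver = NotificationCenter.default.addObserver(
    forName: AVPlayerItem.newErrorLogEntryNotification,
    object: playerItem,
    queue: .main
) { notification in
    guard let item = notification.object as? AVPlayerItem,
          let events = item.errorLog()?.events else { return }
    for event in events {
        print("HLS error:", event.errorStatusCode, event.errorDomain,
              event.errorComment ?? "", "URI:", event.uri ?? "")
    }
}
// Keep `errorLogObserver` alive for as long as the logging should run.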
Stream properties:
video resolution: 1280 x 720
I have attached an overview report generated from MediaStreamValidator.
I would appreciate any insights or suggestions on how to address this error. Has anyone in the community experienced a similar issue or have any advice on potential solutions?
Thank you for your help!
I'm developing a Final Cut Pro X workflow extension that transcribes audio and creates a text output. I need to allow users to drag this text directly from my extension into FCPX's timeline as titles.
Current Implementation:
Using NSFilePromiseProvider as per Apple's guidelines for drag and drop
Generating valid FCPXML (v1.10) with proper structure:
Complete resources section with format and asset references
Event and project hierarchy
Asset clip with connected title elements
Proper timing and duration calculations
Supporting multiple pasteboard types:
com.apple.finalcutpro.xml.v1-10
com.apple.finalcutpro.xml.v1-9
com.apple.finalcutpro.xml
What's Working:
Drag operation initiates correctly
File promise provider is set up properly
FCPXML generation is successful (verified content)
All required pasteboard types are registered
Proper logging confirms data is being requested and provided
Current Pasteboard Types Offered:
com.apple.NSFilePromiseItemMetaData
com.apple.pasteboard.promised-file-name
com.apple.pasteboard.promised-suggested-file-name
com.apple.pasteboard.promised-file-content-type
Apple files promise pasteboard type
com.apple.pasteboard.NSFilePromiseID
com.apple.pasteboard.promised-file-url
com.apple.finalcutpro.xml.v1-10
com.apple.finalcutpro.xml.v1-9
com.apple.finalcutpro.xml
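For completeness, the data side is wired up roughly like the simplified sketch below; type and helper names are illustrative rather than my exact code:

import AppKit

// Simplified sketch of the drag source (illustrative names, not the exact implementation).
final class TitleDragSource: NSObject, NSFilePromiseProviderDelegate, NSPasteboardItemDataProvider {
    let fcpxml: String    // the generated FCPXML document

    init(fcpxml: String) {
        self.fcpxml = fcpxml
        super.init()
    }

    // MARK: NSFilePromiseProviderDelegate

    func filePromiseProvider(_ filePromiseProvider: NSFilePromiseProvider, fileNameForType fileType: String) -> String {
        return "Transcript.fcpxml"
    }

    func filePromiseProvider(_ filePromiseProvider: NSFilePromiseProvider,
                             writePromiseTo url: URL,
                             completionHandler: @escaping (Error?) -> Void) {
        do {
            try fcpxml.write(to: url, atomically: true, encoding: .utf8)
            completionHandler(nil)
        } catch {
            completionHandler(error)
        }
    }

    // MARK: NSPasteboardItemDataProvider — serve the raw FCPXML for the FCP-specific types

    func pasteboard(_ pasteboard: NSPasteboard?, item: NSPasteboardItem, provideDataForType type: NSPasteboard.PasteboardType) {
        item.setString(fcpxml, forType: type)
    }
}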
What additional requirements or considerations are needed to make FCPX accept the dragged FCPXML content? Are there specific requirements for workflow extensions regarding drag and drop operations with titles that aren't documented?
Any insights, especially from those who have implemented similar functionality in FCPX workflow extensions, would be greatly appreciated.
Technical Details:
macOS Version: 15.5 (24F74)
FCPX Version: 11.1.1
Extension built with SwiftUI and AppKit integration
Using NSFilePromiseProvider and NSPasteboardItemDataProvider
Full pasteboard type support for FCPXML versions
Hi, I'm trying to plan out the development of an app and am wondering whether it is possible to have user-generated content automatically populate a custom ShazamKit catalogue, and to query this catalogue non-locally?
Storing all the submissions locally would obviously not scale.
We are facing a strange issue where a small portion of our large user base cannot start the capture session in our app, as it gets interrupted with the following reason:
AVCaptureSessionInterruptionReasonVideoDeviceNotAvailableWithMultipleForegroundApps
Our users are all on iPhones; no one is using an iPad. Just to be sure, we have set
session.isMultitaskingCameraAccessEnabled = true
but it does not seem to make any difference.
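To narrow this down, we are planning to log whether multitasking camera access is even supported on the affected devices and to record the interruption reason from the session notification. A sketch, where session is the configured capture session:

import AVFoundation

// Sketch of the diagnostics we intend to add (`session` is the configured AVCaptureSession).
func addInterruptionDiagnostics(to session: AVCaptureSession) {
    // isMultitaskingCameraAccessEnabled only has an effect when this is true.
    print("multitaskingCameraAccessSupported: \(session.isMultitaskingCameraAccessSupported)")

    // Retain the returned token in real code so the observer can be removed later.
    _ = NotificationCenter.default.addObserver(
        forName: AVCaptureSession.wasInterruptedNotification,
        object: session,
        queue: .main
    ) { notification in
        if let value = notification.userInfo?[AVCaptureSessionInterruptionReasonKey] as? Int,
           let reason = AVCaptureSession.InterruptionReason(rawValue: value) {
            print("Capture session interrupted, reason raw value: \(reason.rawValue)")
        }
    }
}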
Another weird scenario we are seeing with an even smaller number of users is that the following call:
AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)
returns nil. A quick look at our error reports shows this happening on iPhone XR, 13, and 14 models. They should all support this device type.
Any help investigating these issues would be greatly appreciated!
I am developing an app to stream and download DRM protected HLS videos based on the official “FairPlay Streaming Server SDK”.
When I play the downloaded video, it requests .ts or .aac segments from the server, even though I passed the path of the downloaded video to AVURLAsset.
As a result, playback fails when the device is offline, such as in airplane mode.
This behavior depends on the duration of the video and occurs when trying to download and play a video that is 19 hours or longer.
It did not occur for videos that are 18 hours long.
The environment we checked is iOS 18.3.
The workaround for now is to limit video duration to 18 hours, but if possible we would like to support offline playback of videos longer than 19 hours.
Does anyone have any information or know of a solution to this problem? For example, have you experienced this kind of issue, or do you know whether content longer than 19 hours simply cannot be played offline?
// Load the downloaded asset (AVURLAsset needs a file URL, not a String path)
let path = ".../xxx.movpkg" // Path of the downloaded file
videoAsset = AVURLAsset(url: URL(fileURLWithPath: path))
playerItem = AVPlayerItem(asset: videoAsset!)
player.replaceCurrentItem(with: playerItem)
// isPlayableOffline
print("videoAsset.assetCache.isPlayableOffline = \(videoAsset!.assetCache?.isPlayableOffline ?? false)") // true
I'm seeking to a specific sync frame in a video file (HEVC, recorded on an iPad). When I feed the buffers from that sync frame onward to VTDecompressionSession, it consistently drops the 2nd, 3rd, and 4th buffers with kVTVideoDecoderReferenceMissingErr (or, on the simulator, no error but no output buffer). If I instead feed all the buffers starting from the sync frame before the desired one, the frames come out fine, but always doing that would create massive overhead. I've tried multiple OS versions, devices, etc. It seems to be a consistent problem.
Here's a sample project with the offending video (disregard memory handling etc):
https://github.com/marcuseckert/vtSample
I've filed a radar FB18228296 but would appreciate any feedback on circumventing or at least detecting this behavior prior to decoding.
I’m building a music app using Apple Music streaming via ApplicationMusicPlayer.
My goal is to decrease the volume of the current song during the last 10 seconds, and when the next track begins, restore the volume to its normal level.
I know that ApplicationMusicPlayer doesn’t expose a volume API, and I want to avoid triggering the system volume HUD.
✅ Using Apple Music streaming (not local files)
❓ Is it possible to implement per-track fade-out/fade-in logic with ApplicationMusicPlayer?
Appreciate any clarification or official guidance!
I am getting high error rates from the Apple Music API. This has been happening for months now, and it is quite frustrating. It is a mix of 404, 504, and random 500 errors. I hit these endpoints all of the time, so it is not like I am hitting a resource that doesn't exist. Why is this happening? Is this a known issue that is getting worked on?
Hey,
Quick question. I noticed that Adobe's new app, Project Indigo, allows you to open the app using the Camera Control button. However, when your device is locked it just shows this screen:
Would this normally be approved by the App Store review process? I ask because I would like to do something similar with my camera app.
I know that this is not the best user experience, but my app's UI is not built in Swift and I don't have the resources to rebuild it. At least this way the user experience would be improved from what it is now, where users cannot even launch the app. I get many requests per week about this feature and would love to improve the UX for my users, even if it's not the best possible.
Thanks,
Alex
Following the documentation, I made a simple demo to verify this.
My env:
ProductName: macOS
ProductVersion: 15.5
BuildVersion: 24F74
2.4 GHz quad-core Intel Core i5
Info.plist:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>IOKitPersonalities</key>
    <dict>
        <key>UVCamera</key>
        <dict>
            <key>CFBundleIdentifierKernel</key>
            <string>com.apple.kpi.iokit</string>
            <key>IOClass</key>
            <string>IOUserService</string>
            <key>IOMatchCategory</key>
            <string>$(PRODUCT_BUNDLE_IDENTIFIER)</string>
            <key>IOProviderClass</key>
            <string>IOUserResources</string>
            <key>IOResourceMatch</key>
            <string>IOKit</string>
            <key>IOUserClass</key>
            <string>UVCamera</string>
            <key>IOUserServerName</key>
            <string>$(PRODUCT_BUNDLE_IDENTIFIER)</string>
            <key>IOProbeScore</key>
            <integer>100000</integer>
            <key>idVendor</key>
            <integer>1452</integer>
            <key>idProduct</key>
            <integer>34068</integer>
        </dict>
    </dict>
    <key>OSBundleUsageDescription</key>
    <string></string>
</dict>
</plist>
UVCamera.cpp
//
// UVCamera.cpp
// UVCamera
//
// Created by DTEN on 2025/6/12.
//
#include <os/log.h>
#include <DriverKit/IOUserServer.h>
#include <DriverKit/IOLib.h>
#include "UVCamera.h"
kern_return_t
IMPL(UVCamera, Start)
{
    kern_return_t ret;
    ret = Start(provider, SUPERDISPATCH);
    os_log(OS_LOG_DEFAULT, "Hello World");
    return ret;
}
UVCamera.iig
//
// UVCamera.iig
// UVCamera
//
// Created by DTEN on 2025/6/12.
//
#ifndef UVCamera_h
#define UVCamera_h
#include <Availability.h>
#include <DriverKit/IOService.iig>
class UVCamera: public IOService
{
public:
    virtual kern_return_t
    Start(IOService * provider) override;
};
#endif /* UVCamera_h */
Then I built it with Xcode and moved it to /Library/DriverExtensions:
sudo mv com.lqs.MyVirtualCam.UVCamera.dext /Library/DriverExtensions
sudo kmutil install -R / -r /Library/DriverExtensions
kmutil rebuild done
However, the dext can't be loaded:
kmutil showloaded --list-only | grep UVCamera
No variant specified, falling back to release
What's the problem? Can anyone help me?
Hello there!
Is there any list of voices that are always available on iOS/iPadOS devices?
It seems that AVSpeechSynthesisVoice(identifier: "com.apple.voice.compact.en-US.Samantha") is always available on all devices.
I thought that AVSpeechSynthesisVoice(identifier: "com.apple.ttsbundle.siri_Nicky_en-US_compact") and AVSpeechSynthesisVoice(identifier: "com.apple.ttsbundle.siri_Aaron_en-US_compact") were available by default on certain newer devices. Is this true?
I also noticed that on the same iPad where I was using those 2 voices (Nicky and Aaron) - when I updated to the iPadOS 26 beta, those voices were no longer available.
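For what it's worth, the only reliable check I've found is to enumerate what is actually installed at runtime and fall back when a specific identifier is missing. A small sketch:

import AVFoundation

// List every voice installed on this device, then resolve an identifier with a fallback.
for voice in AVSpeechSynthesisVoice.speechVoices() {
    print(voice.identifier, voice.language, voice.quality.rawValue)
}

// Prefer a specific voice, but don't assume it exists on every device or OS version.
let preferred = AVSpeechSynthesisVoice(identifier: "com.apple.ttsbundle.siri_Nicky_en-US_compact")
let voice = preferred ?? AVSpeechSynthesisVoice(language: "en-US")   // preferred is nil if the identifier is absent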
Any information you can share about which voices should be reliably available on which devices would be extremely helpful for our development. Thanks so much!
I am following the Apple sample code and trying to add a manual focus lens position slider:
@available(iOS 18.0, *)
private func addCameraControls() {
    if !self.session.controls.isEmpty {
        for control in self.session.controls {
            self.session.removeControl(control)
        }
    }
    self.cameraControlFocusSlider = nil

    // Focus slider
    if self.videoDevice!.isLockingFocusWithCustomLensPositionSupported {
        self.cameraControlFocusSlider = AVCaptureSlider("Focus", symbolName: "dot.square", in: 0.0...1.0)
        self.cameraControlFocusSlider!.setActionQueue(self.sessionQueue) { focusValue in
            // Do manual focus
        }
        if self.session.canAddControl(self.cameraControlFocusSlider!) {
            self.session.addControl(self.cameraControlFocusSlider!)
        }
    }
}
So there are these AVCaptureSessionControlsDelegate methods:
final func sessionControlsDidBecomeActive(_ session: AVCaptureSession) {
    print("sessionControlsDidBecomeActive")
}

final func sessionControlsWillEnterFullscreenAppearance(_ session: AVCaptureSession) {
    print("sessionControlsWillEnterFullscreenAppearance")
}

final func sessionControlsWillExitFullscreenAppearance(_ session: AVCaptureSession) {
    print("sessionControlsWillExitFullscreenAppearance")
}

final func sessionControlsDidBecomeInactive(_ session: AVCaptureSession) {
    print("sessionControlsDidBecomeInactive")
}
So when self.cameraControlFocusSlider is presented, I have to show the current value of the lens position. The lens position can change from autofocus and also from manual focus by the user through the app UI. Is there a way to see whether self.cameraControlFocusSlider is active or being used?
Please note that I will have more than one AVCaptureSlider in the final code.
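For context, my current plan for keeping the slider in sync is to observe the device's lensPosition via KVO and push the value back into the control. A sketch, assuming AVCaptureSlider.value can be set programmatically (property names match the code above):

// Sketch: mirror the device's actual lens position into the slider.
// Assumes AVCaptureSlider.value is settable and that lensPosition KVO is the right signal.
private var lensPositionObservation: NSKeyValueObservation?

func observeLensPosition() {
    lensPositionObservation = videoDevice?.observe(\.lensPosition, options: [.new]) { [weak self] _, change in
        guard let self, let position = change.newValue else { return }
        self.sessionQueue.async {
            // Update the control so it reflects autofocus-driven changes too.
            self.cameraControlFocusSlider?.value = position
        }
    }
}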
Among the millions of users of our online product, we have identified through data metrics that the rate of silent audio captures on iPadOS 18.4.1 and 18.5 has increased abnormally. However, we are unable to reproduce the issue. Has anyone encountered a similar issue? The parameters we use are as follows:
AudioSession:
category:AVAudioSessionCategoryPlayAndRecord
mode:AVAudioSessionModeDefault
option:77
preferredSampleRate:48000.000000
preferredIOBufferDuration:0.010000
AudioUnit
format.mFormatID = kAudioFormatLinearPCM;
format.mSampleRate = 48000.0;
format.mChannelsPerFrame = 2;
format.mBitsPerChannel = 16;
format.mFramesPerPacket = 1;
format.mBytesPerFrame = format.mChannelsPerFrame * 16 / 8;
format.mBytesPerPacket = format.mBytesPerFrame * format.mFramesPerPacket;
format.mFormatFlags = kAudioFormatFlagsNativeEndian | kLinearPCMFormatFlagIsPacked | kLinearPCMFormatFlagIsSignedInteger;
component.componentType = kAudioUnitType_Output;
component.componentSubType = kAudioUnitSubType_RemoteIO;
component.componentManufacturer = kAudioUnitManufacturer_Apple;
component.componentFlags = 0;
component.componentFlagsMask = 0;
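In case it's relevant, "silent" here means buffers whose samples are essentially zero. A simplified Swift sketch of the kind of check that can flag such a buffer from interleaved 16-bit samples (threshold and function name are illustrative):

import Foundation

// Returns true when an interleaved Int16 PCM buffer is effectively silent (illustrative threshold).
func isEffectivelySilent(_ samples: [Int16], threshold: Double = 1e-4) -> Bool {
    guard !samples.isEmpty else { return true }
    // RMS normalized to [0, 1] relative to full scale.
    let sumOfSquares = samples.reduce(0.0) { acc, sample in
        let normalized = Double(sample) / Double(Int16.max)
        return acc + normalized * normalized
    }
    let rms = (sumOfSquares / Double(samples.count)).squareRoot()
    return rms < threshold
}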